Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
If A is a $2×2 $ matrix, does $A^2=3A$ imply $ A=0$ or $A=3I$? Problem: Let $A$ be a $2\times 2$ matrix, $O$ be the null matrix, and $I$ be the identity matrix. Is the following statement true?
$$\text{if } A^2=3A \implies A=O\text{ or }3I$$
I tried proving this just by factorising $A^2-3A=O \implies A(A-3I)=O \implies A-3I=O$ or $A=O$. However, I cannot tell if this is correct. Do polynomials work the same way with matrices as they do with real/complex numbers?
| *
*$A-3$ doesn't make sense. We can add two matrices, and we can multiply a matrix by a scalar (doing both, one at a time, is known as taking a linear combination).
$A-3$ should be $A-3I$.
*$A(A-3I)=O$ doesn't imply $A=O$ or $A=3I$. Cancellation doesn't hold: $M_2(F)$ is not an integral domain (it contains zero divisors).
Counter example:
$A=\begin{pmatrix}0&0\\0&3\end{pmatrix}$
Then $A^2-3A=0$ but neither $A=0$ nor $A=3I$
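A quick machine check of this counterexample (a Python sketch, with the $2\times 2$ product written out by hand):

```python
# Verify that A = [[0,0],[0,3]] satisfies A^2 = 3A, yet A is neither O nor 3I.

def matmul2(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 0], [0, 3]]
A2 = matmul2(A, A)
threeA = [[3 * A[i][j] for j in range(2)] for i in range(2)]

assert A2 == threeA                     # A^2 = 3A holds
assert A != [[0, 0], [0, 0]]            # A is not the zero matrix
assert A != [[3, 0], [0, 3]]            # A is not 3I
```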
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4500315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Area of $\Delta ABC$ via $2$ statements
What is the area of the $\Delta ABC$?
Ι. $∠BAC = 45°$.
ΙΙ. $AC = 16 cm$ and $BC = 12 cm$.
(A) question can be answered using only one of the statements alone.
(B) question can be answered using either statement alone.
(C) question can be answered using Ι and ΙΙ together but not using Ι or ΙΙ alone.
(D) question cannot be answered even using Ι and ΙΙ together.
My Approach:
Let $AB=x$. Then, I used the cosine formula: $\cos A = \frac {b^2+c^2-a^2}{2bc}$
$\cos (45°)= \frac {16^2+x^2-12^2}{2(16x)}$
$\Rightarrow \frac 1 {\sqrt 2}= \frac {256+x^2-144}{2(16x)}$
$\Rightarrow x^2-16\sqrt 2x+112=0$
$\Rightarrow x=8\sqrt 2 \pm 4=AB$
Since I'm getting two possible triangles, I can't determine the area of the triangle uniquely.
Thus, (D).
Solution given:
From I and II, we have $∠BAC = 45°, AC = 16 cm, BC = 12 cm$.
Let $D$ be the foot of the perpendicular from $C$ to $AB$.
$AB = AD + DB$
$=CD+ \sqrt {{BC}^2-{CD}^2}=8\sqrt 2+\sqrt {{12}^2-{(8\sqrt 2)}^2}$
$=8\sqrt 2 +4$
Area of $\Delta ABC = \frac 12 (AB)(CD)$
$CD = AC \sin 45° = \frac {16}{\sqrt 2}=8\sqrt 2 ≃ 11.3 \text{ cm. Sufficient.}$
Thus, option (C) is correct.
Could it be that the answer (thus, solution) given is wrong? Or am I making mistake somewhere? Please help.
| What happens if point B is between $A$ and $D$? In this case, $AB \ne AD +DB$. I believe you are correct, there are two possible triangles. The triangle with sides $16, 12, 8\sqrt 2 -4$ does exist.
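Numerically, both candidate lengths for $AB$ are consistent with the given data, which confirms the ambiguity (a small Python sketch):

```python
import math

# Both roots x = 8*sqrt(2) +/- 4 give a triangle with AC = 16, BC = 12
# and angle BAC = 45 degrees, but the two triangles have different areas.
AC, BC = 16.0, 12.0
CD = AC * math.sin(math.radians(45))          # altitude from C, equals 8*sqrt(2)

for AB in (8 * math.sqrt(2) + 4, 8 * math.sqrt(2) - 4):
    # Law of cosines: BC^2 = AC^2 + AB^2 - 2*AC*AB*cos(A)
    cosA = (AC**2 + AB**2 - BC**2) / (2 * AC * AB)
    assert abs(cosA - math.cos(math.radians(45))) < 1e-9
    area = 0.5 * AB * CD
    print(f"AB = {AB:.4f}, area = {area:.4f}")
```

Both loop iterations pass the angle check, so the data of statements I and II together do not pin down the area.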
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4500451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $R/I$ is Cohen-Macaulay, is $R/I^2$ also Cohen-Macaulay? Let $R$ be a local ring and $I \subset R$ an ideal such that $R/I$ is Cohen-Macaulay. Is $R/I^2$ also Cohen-Macaulay?
I've seen that it always works for principal ideals here:
Is a quotient of ring of polynomials Cohen-Macaulay?
So a counter-example has to have 2 or more generators.
| No. It is not true even for principal ideals unless one assumes more about the local ring $R$, e.g., that $R$ is a domain. For a simple example, let $R=k[\![x,y]\!]/(xy)$ and take $I=(x)$. Then, $R/I \cong k[\![y]\!]$ which is Cohen-Macaulay but $R/I^2 \cong k[\![x,y]\!]/(x^2,xy)$ which is famously not Cohen-Macaulay.
For more exotic examples, note there is a short exact sequence $0 \to I/I^2 \to R/I^2 \to R/I \to 0$ which relates Cohen-Macaulayness of $I/I^2$, which is known as the conormal module of $I$, to that of $R/I^2$. The conormal module (and thus $R/I^2$) puts up some resistance to being Cohen-Macaulay. In several situations it is known that Cohen-Macaulayness of the conormal module forces $R/I$ to be Gorenstein. Several of these situations are detailed in this work of Mantero-Xie. One can use these criteria to construct various types of examples.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4500593",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simplify and FullSimplify do not simplify a simple term Why don't Simplify and FullSimplify work on: $\frac{\sqrt{a + \cos \theta}}{\sqrt{\frac{a + \cos \theta}{1 + a}}}$?
FullSimplify[Sqrt[a + Cos[\[Theta]]]/Sqrt[(a + Cos[\[Theta]])/(1 + a)]]
Even if $\cos \theta \leq 0$, the term should simplify... no?
| You need to give assumptions, try this:
FullSimplify[Sqrt[a + Cos[\[Theta]]]/Sqrt[(a + Cos[\[Theta]])/(1 + a)], Assumptions -> a > 1]
or try this:
Simplify[Sqrt[a + Cos[\[Theta]]]/Sqrt[(a + Cos[\[Theta]])/(1 + a)], Assumptions -> a > 1]
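As a cross-check (in Python rather than Mathematica, and only numerically): for $a>1$ we have $a+\cos\theta>0$, so the expression is identically $\sqrt{1+a}$, which is exactly what the assumption lets the simplifier conclude:

```python
import math

# For a > 1, a + cos(theta) > 0, so
# sqrt(a + cos(theta)) / sqrt((a + cos(theta))/(1 + a)) == sqrt(1 + a).
def expr(a, theta):
    return math.sqrt(a + math.cos(theta)) / math.sqrt((a + math.cos(theta)) / (1 + a))

a = 2.5
for theta in (0.0, 1.0, 2.0, math.pi, 5.0):
    assert abs(expr(a, theta) - math.sqrt(1 + a)) < 1e-12
```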
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4500787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve the inequality $3^{(x+3)^2}+\frac{1}{9}\leq 3^{x^2-2}+27^{2x+3}$ I tried to group the summands so that I could factor them, but nothing worked...
$$3^{(x+3)^2}+\frac{1}{9}\leq 3^{x^2-2}+27^{2x+3}\Leftrightarrow 3^{x^2+6x+9}+\frac{1}{9}\leq 3^{x^2-2}+3^{6x+9}$$
Multiplying both sides by $9$:
$$3^{x^2+6x+11}+1\leq 3^{x^2}+3^{6x+11}$$
How to solve it further?
| Hint: You can rewrite the inequality as follows: $(3^{6x+11}-1)(3^{x^2}-1) \le 0$. Can you take it from here?
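To see where the hint comes from: multiplying the original inequality by $9$ and moving everything to one side gives exactly the product in the hint. A quick numeric check (Python sketch; the solution-set remark at the end follows because $3^{x^2}-1\ge 0$ always, with equality only at $x=0$):

```python
# Check that 9*(LHS - RHS) of the original inequality equals
# (3**(6x+11) - 1)*(3**(x**2) - 1), so the inequality holds exactly
# for x <= -11/6 together with x = 0.

def nine_times_diff(x):
    lhs = 3.0**((x + 3)**2) + 1.0/9.0
    rhs = 3.0**(x*x - 2) + 27.0**(2*x + 3)
    return 9.0 * (lhs - rhs)

def factored(x):
    return (3.0**(6*x + 11) - 1.0) * (3.0**(x*x) - 1.0)

for x in (-3.0, -11.0/6.0, -1.0, 0.0, 0.5, 1.0):
    assert abs(nine_times_diff(x) - factored(x)) < 1e-6 * max(1.0, abs(factored(x)))

assert factored(-2.0) < 0       # x <= -11/6: inequality holds strictly
assert factored(0.0) == 0       # x = 0: equality
assert factored(1.0) > 0        # x = 1: inequality fails
```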
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4500952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Proof that real quadratic forms are always diagonalisable I'm having some trouble understanding Theorem 1 in Chapter 8 of Lax's book on Linear Algebra. This chapter is "spectral theory of self-adjoint mappings of a Euclidean space". The theorem is that
Given a real quadratic form $$q(y) = \sum_{i,j} h_{ij} y_i y_j$$
it is possible to change variables as in $Ly=z$ s.t. in terms of the new variables $z$, $q$ is diagonal, i.e. of the form
$$q \left( L^{-1} z \right) = \sum_{1}^{n}d_iz_i^2 \tag{11}\label{11}$$
The proof is as follows.
The proof is entirely elementary and constructive. Suppose that one of the diagonal elements of $q$ is non-zero, say $h_{11}\ne0$. We then group together all terms containing $y_1$
$$q(y)=h_{11}y_1^2+2\sum_{2}^{n}h_{1j}y_1y_j+\sum_{i,j\geq 2}h_{ij}y_iy_j$$
(here the symmetry $h_{j1}=h_{1j}$ of $H$ was used to combine the cross terms). Completing the square, the terms containing $y_1$ can be written as
$$h_{11}\left(y_1+h_{11}^{-1}\sum_{2}^{n}h_{1j}y_j\right)^2-h_{11}^{-1}\left(\sum_{2}^{n}h_{1j}y_j\right)^2$$
Set
$$y_1+h_{11}^{-1}\sum_{2}^{n}h_{1j}y_j=z_1\tag{12}$$
We can then write
$$q(y)=h_{11}z_1^2+q_2(y)\tag{13}\label{13}$$
where $q_2$ depends only on $y_2,\dots,y_n$.
If all diagonal terms of $q$ are zero but there is some non-zero off-diagonal term, say $h_{12}=h_{21}\ne0$, then we introduce $y_1+y_2$ and $y_1-y_2$ as new variables, which produces a non-zero diagonal term. If all diagonal and off-diagonal terms are zero, then $q(y)\equiv0$ and there is nothing to prove.
We now apply induction on the number of variables $n$. Using (13) shows that if the quadratic function $q_2$ in $(n-1)$ variables can be written in form (11), then so can $q$ itself. Since $y_2,\dots,y_n$ are related by an invertible matrix to $z_2,\dots,z_n$, it follows from (12) that the full set $y$ is related to $z$ by an invertible matrix.
This proof seems to skip the step of showing that the matrix is actually invertible. As far as I can tell, this method gives an upper triangular matrix $L$. However it doesn't seem to show or give a reason why $L$ is immediately invertible. I think I'm able to follow the reasoning up to (and including) the induction, but it's the last few sentences about invertibility that I'm confused about.
| The answer is quite simple. Since $y_i$ and $z_i$ are "coordinates", they can be treated as constant. Since $q_2$ does not include any terms with $y_1$, this mapping is injective and so the matrix representing this mapping is invertible.
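Regarding invertibility, note that the substitutions (12) make the change-of-variables matrix unit upper triangular (ones on the diagonal), so $\det L = 1$ and $L$ is invertible. Here is a Python sketch of Lax's construction for the generic case where every pivot encountered is nonzero (the off-diagonal trick from the proof is not implemented); it returns the coefficients $d_i$ and a unit upper triangular $L$ with $H = L^{\mathsf T}\,\mathrm{diag}(d)\,L$:

```python
# Constructive diagonalization of q(y) = y^T H y by completing squares,
# assuming every pivot h_kk that appears is nonzero (a simplifying assumption).

def lax_diagonalize(H):
    n = len(H)
    A = [row[:] for row in H]                       # working symmetric matrix
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    d = [0.0] * n
    for k in range(n):
        p = A[k][k]
        assert p != 0, "this sketch handles only the nonzero-pivot case"
        d[k] = p
        for j in range(k + 1, n):
            L[k][j] = A[k][j] / p                   # z_k = y_k + sum_j (h_kj/h_kk) y_j
        for i in range(k + 1, n):                   # remaining form q_2 in y_{k+1},...,y_n
            for j in range(k + 1, n):
                A[i][j] -= A[k][i] * A[k][j] / p
    return L, d

H = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 1.0]]
L, d = lax_diagonalize(H)

# Check H == L^T diag(d) L entrywise; L is unit upper triangular, det L = 1.
for a in range(3):
    for b in range(3):
        s = sum(L[k][a] * d[k] * L[k][b] for k in range(3))
        assert abs(s - H[a][b]) < 1e-12
```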
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exercise 4, Section 3.4 of Hoffman’s Linear Algebra
Let $V$ be a two-dimensional vector space over the field $F$, and let $B$ be an ordered basis for $V$. If $T$ is a linear operator on $V$ and $[T]_B=
\begin{bmatrix} a & b \\c & d\\ \end{bmatrix}$. Prove that $T^2-(a+d)T+(ad-bc)I=0$.
My attempt: It’s easy to check $T^2-(a+d)\cdot T+(ad-bc)\cdot\text{id}_V\in L(V,V)$. Let $B=\{\alpha_1,\alpha_2\}$ be an ordered basis of $V$. Then $T(\alpha_1)=a\cdot \alpha_1+c\cdot \alpha_2$ and $T(\alpha_2)=b\cdot \alpha_1 +d\cdot \alpha_2$. So
\begin{align}
T^2(\alpha_1) &=T(T(\alpha_1)) \\
&=T(a\cdot \alpha_1 +c\cdot \alpha_2) \\
&=a\cdot T(\alpha_1)+c\cdot T(\alpha_2) \\
&=a\cdot (a\cdot \alpha_1+c\cdot \alpha_2)+c\cdot (b\cdot \alpha_1+d\cdot \alpha_2)\\
&=(a^2+cb)\cdot \alpha_1 +(ac+cd)\alpha_2.
\end{align} Then $(T^2-(a+d)\cdot T+(ad-bc)\cdot\text{id}_V)(\alpha_1)$ $=T^2(\alpha_1)-(a+d)\cdot T(\alpha_1)+(ad-bc)\cdot \text{id}_V(\alpha_1)$ $= [(a^2+cb)\cdot \alpha_1 +(ac+cd)\alpha_2]-(a+d)\cdot [a\cdot \alpha_1+c\cdot \alpha_2] +(ad-bc)\cdot \alpha_1$ $=[(a^2+cb)\cdot \alpha_1 +(ac+cd)\alpha_2]$$-$$[(a^2+ad)\cdot \alpha_1+(ac+cd)\cdot \alpha_2]$$+$$[(ad)\cdot \alpha_1-(bc)\cdot \alpha_1]$ $=0_V$. Similarly $(T^2-(a+d)\cdot T+(ad-bc)\cdot\text{id}_V)(\alpha_2)=0_V$. By Theorem 1, Section 3.1, $T^2-(a+d)\cdot T+(ad-bc)\cdot\text{id}_V=0_{L(V)}$. Is my proof correct? One can also show $(T^2-(a+d)\cdot T+(ad-bc)\cdot\text{id}_V)(\alpha)=0_V$, $\forall \alpha \in V$. But that is not efficient.
Observation: See $\text{tr}([T]_{B})=a+d$ and $\text{det}([T]_B)=ad-bc$. So $T^2-\text{tr}([T]_{B})\cdot T+ \text{det}([T]_B) \cdot \text{id}_V=0_{L(V)}$. Can we generalize this problem?
| About the generalization, if you are familiar with the exterior algebra you could express the characteristic polynomial of an operator of dimension $n\times n$ as
$$p_A(t)=\sum_{i=0}^nt^{n-i}(-1)^i\textit{tr}(\bigwedge\nolimits^i A)$$
where $\textit{tr}(\bigwedge\nolimits^i A)$ can be computed as the sum of all principal minors of dimension $i$ of the matrix. Formally, it is the trace of the $i$-th exterior power of the matrix.
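For a $3\times 3$ matrix this formula reads $p_A(t)=t^3-e_1t^2+e_2t-e_3$, where $e_i$ is the sum of the $i\times i$ principal minors. A small Python check of this case (the formula is taken as stated above and only verified numerically on one sample matrix):

```python
import itertools

# Characteristic polynomial of a 3x3 matrix from sums of principal minors:
# p(t) = t^3 - e1*t^2 + e2*t - e3.

def det(M):
    n = len(M)
    if n == 1:
        return M[0][0]
    # cofactor expansion along the first row
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(n))

def principal_minor_sum(A, i):
    n = len(A)
    return sum(det([[A[r][c] for c in S] for r in S])
               for S in itertools.combinations(range(n), i))

A = [[1, 2, 0],
     [3, 1, 1],
     [0, 1, 2]]
e1 = principal_minor_sum(A, 1)          # trace
e2 = principal_minor_sum(A, 2)
e3 = principal_minor_sum(A, 3)          # determinant

def p(t):
    return t**3 - e1*t**2 + e2*t - e3

for t in (0, 1, 2, 5):                  # compare with det(tI - A) directly
    tIA = [[(t if r == c else 0) - A[r][c] for c in range(3)] for r in range(3)]
    assert p(t) == det(tIA)
```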
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to prove this (about surjective functions and inverse functions) I was doing exercises for an upcoming exam, and when trying to do this one I really got stuck.
The question is:
Prove that a function $f$ from $X$ to $Y$ is surjective if and only if there exists a function $g$ from $Y$ to $X$ with $g$ after $f$ is the unit function (the function that maps every $x$ on $x$).
My thoughts directly went to the inverse function, but then I started having doubts, because if the domain is bigger than the codomain, the inverse function would have to take more than one value for some inputs, and then $g$ wouldn't be a function.
My second thought was the same but with a reduced domain, so that $f$ becomes bijective, but I don't think that's allowed in this question.
Does someone know a way to prove this?
| You don't need to show the existence of an inverse, because the inverse may not exist, if $f$ isn't 1-1. You are supposed to show the right inverse, that is a function $g\colon Y\to X$ such that $f\circ g = \mathrm{id}_Y$.
You are right that $f^{-1}(\{y\})$ can have more than one element. Take just one of them, that is define $g(y)\in f^{-1}(\{y\})$.
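A small Python illustration of this construction in the finite case (choosing one element of each fibre):

```python
# For a surjective f: X -> Y on finite sets, build g: Y -> X with f(g(y)) = y
# by choosing one element of each fibre f^{-1}({y}).

X = {0, 1, 2, 3, 4}
Y = {"a", "b"}
f = {0: "a", 1: "a", 2: "b", 3: "b", 4: "b"}      # surjective, not injective

g = {}
for y in Y:
    fibre = [x for x in X if f[x] == y]           # f^{-1}({y}), nonempty by surjectivity
    g[y] = fibre[0]                               # pick one element (a choice!)

assert all(f[g[y]] == y for y in Y)               # f o g = id_Y
```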
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Injective bounded linear operator on Banach space maps non-dense set to dense set. Let $A\subseteq X$ be a subspace ($X$ is a Banach space). Let $Y$ be another Banach space. Consider a continuous injective linear operator $f:X\rightarrow Y$. Suppose $f(X)\subseteq \overline{f(A)}$; is it true that $X\subseteq \overline{A}$? I kind of feel this is obvious but cannot figure out a proof. (I guess this might be related to the Baire category theorem, but it seems the open mapping theorem cannot be applied here.)
The context of the problem is the proof of Jacod's martingale representation theorem. Let $\mathcal{S}(A)\subseteq \mathcal{H}^2$ be the closed linear stable space generated by $A$. The injection $i$ from $\mathcal{H}^2$ to $\mathcal{H}^1$ satisfies $\overline{i(\mathcal{S}(A))}=\mathcal{H}^1$. With the statement above, we could conclude $\mathcal{H}^2=\mathcal{S}(A)$.
Any comments or idea?
Thanks in advance!
| This is not true. Let $X=C([0, 1]), Y=L^1([0, 1])$ and $f$ sends a function to itself, which is obviously injective.
As $\|x\|_1=\int_0^1 |x(t)|dt\le\|x\|_{\infty}$, $f$ is clearly bounded.
However $A=\{x\in C([0, 1]): x(0)=0\}$ is closed in $X$, but dense in $Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Question about Bounded operator in a Normed Linear space Let $X$ be a normed space and $A \in B(X) $ be a bounded operator, Then $A$ is invertible if and only if $A$ is bounded below and Surjective.
I let $A$ be invertible; then $A$ will be bounded below, since
$\|x\|=\|A^{-1}(Ax)\| \leq \|A^{-1}\|\,\|Ax\|$. But how can we say that it is surjective?
| If $A$ is invertible then for any $x\in X$ you have $x=A(A^{-1}x)$. So $A$ is surjective.
Conversely, if $A$ is bounded below you know $\|x \|≤c\|Ax \|$ for some $c$ and all $x$. So if $Ax=0$ then $\|x\|=0$, so $x=0$ and $A$ is injective. At this stage you are not entirely done, because to show that $A$ is invertible you not only need it to be bijective, you also need the inverse to be bounded. In general one uses the Inverse Mapping Theorem for these things, but here you simply have
$$
\|A^{-1}x \|≤c\|AA^{-1}x\|=c\|x \|,
$$
and so $A^{-1}$ is bounded.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find a Stirling-formula-like result for $(x!)!$ Hi, working again on the gamma function I found the following:
let $0<x\leq 1$ and $1\leq k<\infty$, and define:
$$f(x)=(x!)!,g(x)=\left(\left(x!\right)!\right)^{\frac{1}{e^{x^{k}}}}$$
Then a conjecture :
It seems $\exists k\in(1,\infty)$ such that $g(x)$ admits an asymptote as $x\to \infty$.
Then (see here: Trying to prove the Stirling approximation using concavity) for $x\geq 1$ we can squeeze the function $f(x)$, and it remains to determine some constant.
If my conjecture is true how to achieve this ?
| As I wrote in comment, the same work can be done for
$$g(x)=(x!)!$$
$$g(x)=g(a)+\Gamma (a+1) \Gamma (a!+1)\sum_{n=2}^\infty \frac {d_n} {n!} \, (x-a)^n$$ The first coefficients being
$$\color{red}{d_2}=\psi ^{(1)}(a+1) \psi ^{(0)}(a!+1)$$
$$\color{red}{d_3}=\psi ^{(2)}(a+1) \psi ^{(0)}(a!+1)$$
$$\color{red}{d_4}=\left(3 \psi ^{(1)}(a+1)^2+\psi ^{(3)}(a+1)\right) \psi ^{(0)}(a!+1)+3 \Gamma (a+1) \psi ^{(1)}(a+1)^2 \psi ^{(0)}(a!+1)^2+3 \Gamma (a+1) \psi ^{(1)}(a+1)^2 \psi ^{(1)}(a!+1)$$
Computing again
$$\Psi_n=\int_0^1 \Big[[(x!)!]-\text{approximation}_{(n)}\Big]^2\,dx$$
$$\left(
\begin{array}{cc}
n & \log_{10}\big[\Psi_n\big] \\
2 & -5.0613 \\
3 & -5.0973 \\
4 & -6.3083 \\
5 & -6.7855 \\
6 & -7.7200 \\
7 & -8.3849 \\
8 & -9.2041 \\
9 & -9.9228 \\
10 & -10.691 \\
11 & -11.423
\end{array}
\right)$$
that is to say
$$\log_{10}\big[\Psi_n\big] \sim -\frac {3n+13}4$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4501927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Finding possible jordan canonical forms when knowing the eigenvalues and specific ranks of matrices For a linear transformation $A: \mathbb{R}^{10} \rightarrow \mathbb{R}^{10}$ we know that $\lambda = 1$ and $ \lambda = 2$ are the only eigenvalues. Eigenspace for $\lambda = 1$ is 3-dimensional, and the eigenspace for $ \lambda = 2$ is 2-dimensional. It is also true that: $rank(A - I)^5 = rank (A - I)^4 = 3$.
What is the dimension of $Ker (A - I)^2$? Write all possible Jordan forms for transformation A.
Attempted solution: I know that because of the dimension of eigenspaces, in the Jordan form there will be 3 Jordan blocks for $\lambda = 1$, and 2 Jordan blocks for $\lambda = 2$. Similarly, because for $\lambda = 1$ the eigenspace is 3-dimensional, the dimension of $Ker (A - I) = 3$. Even though I know that from the above equation, $dimKer (A - I)^5 = dim ker(A - I)^4 = 10 - 3 = 7$, I dont know how this could help me to find $dimKer (A - I)^2$ or the possible Jordan forms.
| For a Jordan block $J$ with eigenvalue $0$, the rank of its powers decreases by $1$ until it vanishes. This means that if three Jordan blocks for $\lambda=1$ have dimension $m_1,m_2,m_3$, none of them can have $m_j\geq5$, because we would have $\operatorname{rank}(A-I)^4\ne\operatorname{rank}(A-I)^5$. So $m_1,m_2,m_3\leq4$. If the Jordan blocks for $\lambda=2$ have dimensions $n_1$ and $n_2$, we know that $n_1+n_2=3$, so $m_1+m_2+m_3=7$.
This means that $n_1,n_2$ are $2,1$, and $m_1,m_2,m_3$ are $4,2,1$, $3,3,1$ or $3,2,2$ (up to reordering).
If we had $3,3,1$ then the rank of $(A-I)^2$ would be $(2-1)+(2-1)+0+3=5$ (the dimension $1$ block does not lose rank with powers, and the $3=2+1$ is contributed by the $\lambda=2$ blocks, which are invertible in $A-I$ since they have nonzero diagonal), and then $\dim\ker (A-I)^2=5$. If we had $4,2,1$ the rank of $(A-I)^2$ would be $(3-1)+0+0+3=5$, again giving $\dim\ker(A-I)^2=5$. If we had $3,2,2$ the rank of $(A-I)^2$ would be $(2-1)+(1-1)+(1-1)+3=4$ and now $\dim\ker (A-I)^2=6$.
In summary, the Jordan form is one of
$$
J_4(1)\oplus J_2(1)\oplus J_1(1)\oplus J_2(2)\oplus J_1(2), \qquad\dim\ker(A-I)^2=5,
$$
$$
J_3(1)\oplus J_3(1)\oplus J_1(1)\oplus J_2(2)\oplus J_1(2), \qquad\dim\ker(A-I)^2=5,
$$
or
$$
J_3(1)\oplus J_2(1)\oplus J_2(1)\oplus J_2(2)\oplus J_1(2), \qquad\dim\ker(A-I)^2=6.
$$
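As a sanity check, the following Python sketch builds a Jordan matrix for each candidate pattern of $\lambda=1$ block sizes (summing to $7$, each at most $4$; upper triangular block convention) together with blocks $J_2(2)\oplus J_1(2)$, and verifies the given rank data exactly. Note that the pattern $(4,2,1)$ also passes all the constraints:

```python
from fractions import Fraction

# Verify eigenvalue/rank data for candidate Jordan forms of the 10x10 operator.

def jordan_block(lam, m):
    return [[lam if i == j else (1 if j == i + 1 else 0) for j in range(m)]
            for i in range(m)]

def block_diag(blocks):
    n = sum(len(b) for b in blocks)
    M = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for i in range(len(b)):
            for j in range(len(b)):
                M[off + i][off + j] = b[i][j]
        off += len(b)
    return M

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # exact Gauss-Jordan elimination over the rationals
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [x / A[r][c] for x in A[r]]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

for ms in [(4, 2, 1), (3, 3, 1), (3, 2, 2)]:
    A = block_diag([jordan_block(1, m) for m in ms] +
                   [jordan_block(2, 2), jordan_block(2, 1)])
    AmI = [[A[i][j] - (1 if i == j else 0) for j in range(10)] for i in range(10)]
    P = AmI
    ranks = {}
    for k in range(2, 6):
        P = matmul(P, AmI)
        ranks[k] = rank(P)
    assert rank(AmI) == 10 - 3          # eigenspace of lambda = 1 has dim 3
    assert ranks[4] == 3 == ranks[5]    # matches the given rank condition
    print(ms, "-> dim ker (A-I)^2 =", 10 - ranks[2])
```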
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\lceil\ln(n)\rceil+\frac{(n-1)-\lceil\ln(n)\rceil}p\approx\frac{n}p$ when $n$ is much bigger than $p$. What do "much bigger" and $\approx$ mean here? In one book I'm reading, the author claims the following:
$$\lceil\ln(n)\rceil+\frac{(n-1)-\lceil\ln(n)\rceil}{p} \approx \frac{n}{p}$$
when ''$n$ is much bigger than $p$'' (both positive integers). The exact meaning of ''much bigger'' and $\approx$ (clearly indicating some sort of approximation) is not given; it is treated as natural, because either it is obvious or the reader is assumed sophisticated enough to figure it out.
Before asserting the stated equivalence, the author surely had some tacit idea of what ''much bigger'' and $\approx$ mean. What I would like to know is a reasonable definition for both concepts, so I can derive the result myself. I suspect that a possible definition of ''much bigger'' could be that $n/p$ tends to $\infty$, but I'm not sure how to compute with such a limit.
| I would generally take these sorts of statements as being a variant of the concept of limit. In this case, if we take "much bigger" to mean additively larger, then it means
$\forall (\epsilon>0) \exists D: (n > D+p) \rightarrow |LHS-\frac n p|<\epsilon$
In other words, no matter how small your margin of error is, it's possible to guarantee to get within that margin of error by making sure that the amount by which $n$ is greater than $p$ is above some minimum value $D$.
equivalently,
$\forall (\epsilon>0) \exists D: (n-p> D) \rightarrow |LHS-\frac n p|<\epsilon$
That is, as long as $n-p$ is above some minimum value $D$, your error is less than $\epsilon$.
If "much bigger" means multiplicatively larger, then
$\forall (\epsilon>0) \exists R: (n > Rp) \rightarrow |LHS-\frac n p|<\epsilon$
(If you're not familiar with the acronym, LHS means "Left Hand Side", i.e. the stuff to the left of the "approximately equal" symbol.)
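As a complementary numeric probe (exploratory; the definitions above are one possible formalization): with $p$ fixed, the ratio of the left-hand side to $n/p$ tends to $1$, i.e. the relative error vanishes as $n$ grows:

```python
import math

# Probe the approximation numerically: for fixed p, the ratio LHS / (n/p)
# approaches 1 as n grows.

def lhs(n, p):
    c = math.ceil(math.log(n))                  # ceil(ln n)
    return c + ((n - 1) - c) / p

p = 10
prev = None
for n in (10**3, 10**5, 10**7):
    ratio = lhs(n, p) / (n / p)
    if prev is not None:
        assert abs(ratio - 1) < abs(prev - 1)   # relative error keeps shrinking
    prev = ratio
assert abs(prev - 1) < 1e-4
```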
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Example of two matrices such that $LM+ML=0$ I am looking for some examples of $n\times n$ complex matrices $L$ and $M$ satisfying $LM=aML$ for some fixed $a\in \mathbb{C}$. In particular, if we take $n>2$ and $a=-1$, then any pair of matrices with $AB=O=BA$ gives the required result. But I need some non-trivial examples. Please give some hints so that I can proceed.
Should I look some different rings other than the matrix rings?
| We can do the $a = -1$ case in your title using Clifford algebras. Namely, the Clifford algebra $\text{Cl}(\mathbb{R}^2)$ can be given a presentation with two generators $i, j$ subject to the relations
$$i^2 = j^2 = -1, ij + ji = 0.$$
This is exactly the algebra of quaternions $\mathbb{H}$ and in particular is $4$-dimensional, with basis $\{ 1, i, j, ij \}$. The quaternions act on themselves by left multiplication and this gives a $4$-dimensional faithful representation of $\text{Cl}(\mathbb{R}^2)$ which is also its unique simple module. In this representation $i$ and $j$ act via the $4 \times 4$ matrices
$$L_i = \left[ \begin{array}{cccc} 0 & -1 & 0 & 0 \\
1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & -1 & 0 \end{array} \right]$$
$$L_j = \left[ \begin{array}{cccc} 0 & 0 & -1 & 0 \\
0 & 0 & 0 & -1 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \end{array} \right].$$
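One can verify the claimed relations for these two matrices directly; a small Python check with plain nested-list matrix multiplication:

```python
# Verify Li^2 = Lj^2 = -I and Li*Lj + Lj*Li = 0 for the 4x4 matrices above.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

Li = [[0, -1, 0, 0],
      [1,  0, 0, 0],
      [0,  0, 0, 1],
      [0,  0, -1, 0]]
Lj = [[0, 0, -1, 0],
      [0, 0, 0, -1],
      [1, 0, 0, 0],
      [0, 1, 0, 0]]

minus_I = [[-1 if i == j else 0 for j in range(4)] for i in range(4)]
zero = [[0] * 4 for _ in range(4)]

assert matmul(Li, Li) == minus_I
assert matmul(Lj, Lj) == minus_I
anti = [[matmul(Li, Lj)[i][j] + matmul(Lj, Li)[i][j] for j in range(4)]
        for i in range(4)]
assert anti == zero                     # the anticommutator vanishes
```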
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question on solving partial derivative in probability theory
When the diffusion process $X_{t}$ is stationary, $F_{s, x}(s+t, y)$ does not depend on $s$. Hence we have a well-defined function
$$
F_{t}(x, y)=F_{s, x}(s+t, y), \quad t>0 .
$$
Since $F_{s, x}(s+t, y)$ does not depend on $s, \frac{d}{d s} F_{s, x}(s+t, y)=0$. This implies the second equality below:
$$
\frac{\partial}{\partial t} F_{t}\left(x, y_{0}\right)=\frac{\partial}{\partial(s+t)} F_{s, x}\left(s+t, y_{0}\right)=\left.\left(-\frac{\partial}{\partial s} F_{s, x}\left(u, y_{0}\right)\right)\right|_{u=s+t} .
$$
Then use Equation (10.9.8) to get
$$
\frac{\partial}{\partial t} F_{t}\left(x, y_{0}\right)=\rho(x) \frac{\partial}{\partial x} F_{t}\left(x, y_{0}\right)+\frac{1}{2} Q(x) \frac{\partial^{2}}{\partial x^{2}} F_{t}\left(x, y_{0}\right)
$$
I am reading a textbook and came across this part. I wonder how to obtain the second equality mentioned here.
| $$\frac{d}{ds}F_{s,x}(s+t,y)=0$$
$$\Rightarrow \frac{\partial F_{s,x}(u,y)}{\partial s}\lvert_{u=s+t}+\frac{\partial F_{s,x}(u,y)}{\partial u}\frac{\partial u}{\partial s}=0$$
\begin{align}
\frac{\partial F_{s,x}(u,y)}{\partial s}\Big\lvert_{u=s+t}
&= -\frac{\partial F_{s,x}(u,y)}{\partial u}\frac{\partial u}{\partial s} \\
&= -\frac{\partial F_{s,x}(u,y)}{\partial u}\cdot 1 \\
&= -\frac{\partial F_{s,x}(u,y)}{\partial u}
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How many planar graphs of vertex degrees all $\ge4$ are there? How many planar graphs satisfy $\deg(v)\ge4$ for all $v\in V$?
If there are finitely many, can you list them/link to them?
If there are infinitely many, is there a proof?
I assume there are infinitely many, since if there were only a finite number, the Four Color Theorem (which led to this question) would be trivial.
| Yes, there are infinitely many of them.
Here's an example: a graph made of vertices arranged in a $5 \times 5$ square grid, and for each side of the grid, there is an extra vertex that is adjacent to all 'outer' vertices of that side.
It's quite obvious that the planar graph above satisfies the minimum degree $4$ requirement. In fact, as long as the grid is $4 \times 4$ or larger, the requirement will be satisfied. Hence, there's an infinite number of such graphs.
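A quick check of the degree condition for this construction (planarity itself is clear from the drawing and is not tested here); a Python sketch:

```python
from itertools import product

# Build the described graph: a 5x5 grid plus one extra vertex per side,
# adjacent to all 5 boundary vertices of that side; check min degree >= 4.

n = 5
edges = set()

def add(u, v):
    edges.add(frozenset((u, v)))

for i, j in product(range(n), repeat=2):        # grid edges
    if i + 1 < n:
        add((i, j), (i + 1, j))
    if j + 1 < n:
        add((i, j), (i, j + 1))

sides = {"top":    [(0, j) for j in range(n)],
         "bottom": [(n - 1, j) for j in range(n)],
         "left":   [(i, 0) for i in range(n)],
         "right":  [(i, n - 1) for i in range(n)]}
for name, boundary in sides.items():            # one apex vertex per side
    for v in boundary:
        add(name, v)

degree = {}
for e in edges:
    for v in e:
        degree[v] = degree.get(v, 0) + 1

assert min(degree.values()) >= 4                # corners: 2 grid + 2 apex edges
```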
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
$x^{10}+x^{11}+\dots+x^{20}$ divided by $x^3+x$. Remainder? Question:
If $x^{10}+x^{11}+\dots+x^{20}$ is divided by $x^3+x$, then what is the remainder?
Options: (A) $x\qquad\quad$ (B)$-x\qquad\quad$ (C)$x^2\qquad\quad$ (D)$-x^2$
In these types of questions generally I follow the following approach:
Since divisor is cubic so the remainder must be a constant/linear/quadratic expression.
$\Rightarrow F(x)=(x^3+x)Q(x)+ax^2+bx+c$
For $x=0$, we get $c=0$
But since $x^3+x$ has no other real roots, I can't find $a$ and $b$.
Please help.
Answer:
Option (B)
| What about this : We have for example $$x^{10}+x^{12}=x^{10}(x^2+1)\equiv 0\mod x(x^2+1)$$
This way we can also cancel $11-13,14-16,15-17,18-20$. It remains $x^{19}$ for which you can use $x^6\equiv x^2$ giving $x^{18}\equiv x^6$ hence $x^{18}\equiv x^2$ hence $x^{19}\equiv x^3\equiv -x$ $\mod (x^3+x)$
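One can confirm the result by actual polynomial long division; a small Python sketch (coefficients stored with index $k$ holding the coefficient of $x^k$):

```python
# Divide x^10 + x^11 + ... + x^20 by x^3 + x over the integers.

def polydivmod(num, den):
    num = num[:]                                  # dividend, modified in place
    dn, dd = len(num) - 1, len(den) - 1
    quot = [0] * (dn - dd + 1)
    for k in range(dn, dd - 1, -1):
        c = num[k] // den[dd]                     # den is monic here, so exact
        quot[k - dd] = c
        for i in range(dd + 1):
            num[k - dd + i] -= c * den[i]
    return quot, num[:dd]

dividend = [0] * 21
for k in range(10, 21):
    dividend[k] = 1                               # x^10 + ... + x^20
divisor = [0, 1, 0, 1]                            # x + x^3

q, r = polydivmod(dividend, divisor)
assert r == [0, -1, 0]                            # remainder is -x, option (B)
```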
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4502967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 2
} |
The least possible number of tokens on an $n×n$ board Tokens are placed on the squares of a $2021×2021$ board in such a way that each square contains at most one token. The token set of a square of the board is the collection of all tokens which are in the same row or column as this square. (A token belongs to the token set of the square in which it is placed.) What is the least possible number of tokens on the board if no two squares have the same token set?
It's a problem from https://gonitzoggo.com/archive/problem/441/english Recently, I've been taking preparation for junior Math Olympiad Contest and found this problem.
I have calculated the least possible number of tokens for $2×2, 3×3, 4×4, 5×5$ but am finding no pattern among them.
| We’ll show that TonyK’s construction of $\left\lfloor \frac{3n}2\right\rfloor$ tokens on an $n × n$ board is minimal for $n ≥ 2$ (for $n = 1$, the empty board is better). This is clear for $n = 2$; suppose $n ≥ 3$.
If column $a$ is empty, then there must be at least two tokens in each column $b ≠ a$ (because if there are zero or one at $(b, c)$, then $(a, c)$ and $(b, c)$ have the same token set), for at least $2(n - 1) ≥ \left\lfloor \frac{3n}2\right\rfloor$ tokens. Similarly if row $a$ is empty.
Otherwise, there are no empty rows or columns. Among the tokens that are the only token in their row, no two can share a column (because they’d have the same token set), so at most $n$ tokens are the only token in their row or the only token in their column. Furthermore, at most one of these is the only token in its row and column (because if there were two, at $(a, b)$ and $(c, d)$, then $(a, d)$ and $(c, b)$ would have the same token set). That leaves at least $n - 1$ of the $2n$ rows and columns that must have multiple tokens. So either at least $\left\lceil\frac{n - 1}{2}\right\rceil$ rows have multiple tokens, or at least $\left\lceil\frac{n - 1}{2}\right\rceil$ columns have multiple tokens; either way, this accounts for at least $n + \left\lceil\frac{n - 1}{2}\right\rceil = \left\lfloor\frac{3n}{2}\right\rfloor$ tokens.
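The bound can be checked exhaustively for small boards; a brute-force Python sketch over all $2^{n^2}$ placements (feasible for $n\le 3$):

```python
from itertools import product

# Exhaustive check of the minimum for small boards: floor(3n/2) gives
# 3 tokens for n = 2 and 4 tokens for n = 3.

def min_tokens(n):
    best = None
    for placement in product((0, 1), repeat=n * n):
        tokens = [(i, j) for i in range(n) for j in range(n)
                  if placement[i * n + j]]
        # token set of square (i, j): tokens sharing its row or column
        token_sets = [frozenset(t for t in tokens if t[0] == i or t[1] == j)
                      for i in range(n) for j in range(n)]
        if len(set(token_sets)) == n * n:          # all token sets distinct
            if best is None or len(tokens) < best:
                best = len(tokens)
    return best

assert min_tokens(2) == 3
assert min_tokens(3) == 4
```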
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Let $G$ be a group, $H < G$, and $A_H=\{g\mid\bar{g}\subset H\}$ where $\bar{g}$ is the conjugacy class of $g$. Prove $A_H\lhd G$. Suppose $G$ is a group and $H < G$ and $A_H=\{g \mid \bar{g} \subset H \}$ where $\bar{g}$ is the conjugacy class of $g$. I want to prove $A_H\lhd G$.
First approach:
Define $X$ as the set of left cosets of $H$, then define a homomorphism $ \varphi(g) = f_g$ from $G$ to $S_X$ where $f_g(aH)=gaH$.
$g_1=g_2$ implies $f_{g_1}=f_{g_2}$, so $\varphi$ is well defined. Also $\varphi(g_1 g_2)=f_{g_1 g_2}=f_{g_1}(f_{g_2})=\varphi(g_1) \circ \varphi(g_2)$ shows $\varphi$ is a homomorphism.
Now, $\varphi(g)=e_{S_X}$ implies $f_g(aH)=gaH=aH$ for all $a \in G$. Equivalently, it implies $a^{-1}ga \in H$ for all $a \in G$. Therefore, $\ker(\varphi)=A_H$, and so $A_H \lhd G$.
Now, I want to directly prove, first, $A_H$ is a subgroup of $G$, then it is normal in $G$.
I cannot even prove $A_H$ is a subgroup because I cannot prove $A_H$ is closed under inverse.
Can anyone prove $A_H \lhd G$ directly?
| For inverses: the fact that ${a \in A_H}$ means ${[a]\subseteq H}$. In particular, ${\forall\ g \in G: g^{-1}ag\in H}$. Since $H$ is closed under inverses, we must then also have ${g^{-1}a^{-1}g \in H}$ for every ${g \in G}$, and so ${[a^{-1}]\subseteq H}$. Hence ${a^{-1} \in A_H}$.
If you want to see the rest let me know and I'll add more
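A concrete sanity check in the smallest interesting case, $G=S_3$ with $H=A_3$ (here $A_H=A_3$); this is only an illustration in one finite group, not a proof:

```python
from itertools import permutations

# In G = S_3 with H = A_3, compute A_H = {g : every conjugate of g lies in H}
# and verify closure under inverses and products, and normality.

def compose(p, q):                       # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def parity(p):                           # 0 for even permutations
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
               if p[i] > p[j]) % 2

G = set(permutations(range(3)))
H = {p for p in G if parity(p) == 0}     # H = A_3 < S_3

A_H = {g for g in G
       if all(compose(compose(x, g), inverse(x)) in H for x in G)}

assert all(inverse(g) in A_H for g in A_H)                     # inverses
assert all(compose(a, b) in A_H for a in A_H for b in A_H)     # products
assert all(compose(compose(x, g), inverse(x)) in A_H
           for g in A_H for x in G)                            # normality
```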
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Let $A, B \subseteq ℂ$, if $A \subseteq B$, then $\partial A \subseteq \partial B$? Let $A, B \subseteq ℂ$, if $A \subseteq B$, then $\partial A \subseteq \partial B$?
I thought of a counter example but I'm not sure if the boundaries ($\partial$) are context dependent.
Here it is:
$A = ℝ, B = ℂ$
So obviously, $A \subseteq B$, but here's the problem I'm having.
I know that ℝ is both an open and closed set (?), and its boundary should be the empty set. But does the context of the topology matter? Since we look at ℝ as $ℝ \subseteq ℂ$, its boundary is just ℝ, and thus it gives a counterexample for the said claim?
| $A, B\subset \Bbb{C}$ and $A\subset B$
Then $\overline{A}\subset \overline{B}$.
If $\partial{A}\subset\partial{B}$ then $\overline{A}\setminus \overset{o}A\subset\overline{B}\setminus \overset{o}B$
implies $\overset{o}B\subset \overset{o}A$
$A\subset B$ doesn't imply $\overset{o}B\subset \overset{o}A$.
Counter examples : Choose $A\subset B $ such that $\overset{o}B\not\subset \overset{o}A$.
$A=\bar D(0,1) =\{z:|z|\le 1\}$, $B=\bar D(0,2) =\{z:|z|\le 2\}$
Then $A\subset B$ but $\overset{o}B=D(0, 2)$ and $\overset{o}A=D(0, 1)$, so $\overset{o}B\not\subset \overset{o}A$; indeed $\partial A=\{z:|z|=1\}\not\subset\{z:|z|=2\}=\partial B$.
Your own example works too, for a similar reason.
$A=\Bbb{R}\subset \Bbb{C}$ is nowhere dense; its interior is empty, so $\partial \Bbb{R}=\overline{\Bbb{R}}\setminus\emptyset=\Bbb{R}$, while $\partial \Bbb{C}=\emptyset$. Since $\Bbb{R}\not\subset \emptyset$, it is also a valid counterexample.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Sum of subsets of a Vector Space
Image is from "Linear Algebra Done Right" (Sheldon Jay Axler) page 20.
I'm having trouble with example 1.38.
Example 1.37 seems straightforward, I think $(x, 0, 0) + (0,y,0) = (x+0, 0+y, 0+0) = (x,y,0)$ which is what we define $U + W$ to be.
By the same logic, in 1.38, $(x,x,y,y) + (x,x,x,y) = (2x, 2x, y+x, 2y) = (x, x, \frac{y+x}{2}, y)$.
I can't see how we arrive at the "$(x,x,y,z)$" part in the $U+W$ in the solution.
| The key point here is that the variable names $x, y, z$ are just placeholders in the definitions of $U$ and $W$: so, for example, the $x$ in the definition of $U$ has nothing in particular to do with the $x$ in the definition of $W$. To avoid the confusion that can create, let's rewrite the definitions of $U$ and $W$ using different placeholder variable names for the two sets:
$$U := \{(a, a, b, b) : a, b \in F\}, \qquad W := \{(c, c, c, d) : c, d \in F\} .$$
Now, a generic element of $U + W$ looks like
$$(a, a, b, b) + (c, c, c, d) = (a + c, a + c, b + c, b + d) ,$$
so we could write
$$U + W = \{(a + c, a + c, b + c, b + d) : a, b, c, d \in F \} .$$
Renaming $x := a + c$, $y := b + c$, $z := b + d$ gives the simpler desired description of the set:
$$\boxed{U + W = \{(x, x, y, z) : x, y, z \in F\}} .$$
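As a quick numerical sanity check (my own addition, not part of the book), the inclusion $\{(x,x,y,z)\} \subseteq U + W$ can be witnessed concretely by taking $a = x$, $b = y$, $c = 0$, $d = z - y$:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y, z = rng.normal(size=3)

# Split (x, x, y, z) as u + w with u in U = {(a,a,b,b)} and w in W = {(c,c,c,d)}:
u = np.array([x, x, y, y])              # a = x, b = y
w = np.array([0.0, 0.0, 0.0, z - y])    # c = 0, d = z - y
assert np.allclose(u + w, [x, x, y, z])
```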
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to check whether the sets U1, U2, U3 are subspaces of the vector space V. In the $\mathbb{R}$-vector space $V = \mathbb{R}^3$
, we consider the following subsets:
$U_1 := \{x\in\mathbb{R}^3\ |\ x_1 + x_2 − x_3 = 0\}$,
$U_2:= \{x\in\mathbb{R}^3\ |\ \exists n\in\mathbb{Z}\ :\ x_1 = n, x_2 = 2n, x_3 = 3n\}$,
$U_3:= \{x\in\mathbb{R}^3\ |\ (x_1 = 0)\vee(x_2 = 0)\vee (x_3 = 0)\}$
I need to check whether the sets $U_1,\ U_2$, and $U_3$ are subspaces of the vector space $V$, How can I properly do that?
| As commented above, you can check the subspace conditions directly. You can also often decide just by inspecting problems of this type. How does that work?
If $W$ is any plane in $\Bbb{R}^3$ passing through the origin, then it satisfies the subspace conditions (You can see here).
Good luck!
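As a complement (my own illustration, not part of the answer above), here is a small numerical sketch probing the subspace axioms: $U_1$ is a plane through the origin and passes the tests, while $U_2$ fails closure under scalar multiplication and $U_3$ fails closure under addition.

```python
import numpy as np

def in_U1(v):
    return np.isclose(v[0] + v[1] - v[2], 0)

def in_U2(v):
    n = float(v[0])
    return n.is_integer() and np.allclose(v, [n, 2*n, 3*n])

def in_U3(v):
    return np.isclose(v, 0).any()

# U1 is a plane through the origin: closed under addition and scaling.
u, w = np.array([1.0, 2.0, 3.0]), np.array([0.0, 5.0, 5.0])
assert in_U1(u) and in_U1(w) and in_U1(u + w) and in_U1(2.5 * u)

# U2 fails scalar closure: (1, 2, 3) is in U2 but 0.5 * (1, 2, 3) is not.
assert in_U2(np.array([1.0, 2.0, 3.0]))
assert not in_U2(0.5 * np.array([1.0, 2.0, 3.0]))

# U3 fails additive closure: (1, 0, 0) + (0, 1, 1) = (1, 1, 1) has no zero entry.
assert in_U3(np.array([1.0, 0.0, 0.0])) and in_U3(np.array([0.0, 1.0, 1.0]))
assert not in_U3(np.array([1.0, 1.0, 1.0]))
```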
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving that the map between Hilbert spaces is unique.
Theorem: Let $T:H\rightarrow K$ be a bounded linear operator between Hilbert spaces. Then there exists a unique bounded linear operator $T^* :K\rightarrow H$ such that
$$\tag{1}
\forall h\in H,k\in K :\langle Th,k\rangle=\langle h,T^*k\rangle.
$$
The following proof of uniqueness was given by my lecturer:
Suppose there is another linear operator, $S :K\rightarrow H$ such that
$$\tag{2}
\forall h\in H,k\in K :\langle Th,k\rangle=\langle h,Sk\rangle.
$$
Then
$$\tag{3}
\left\langle h, T^{*} k-S k\right\rangle=\left\langle h, T^{*} k\right\rangle-\langle h, S k\rangle=\langle T h, k\rangle-\langle T h, k\rangle=0 .
$$
Thus $T^{*} k=S k$ for all $k\in K$, and therefore $T^*=S$.
What confuses me is that in eq. $(2)$ he defined $S$ such that $\langle h,Sk\rangle=\langle h,T^*k\rangle$ for all $k\in K$, and I don't see why eq. $(3)$ makes it more obvious that $T^*=S$? What exactly is eq. $(3)$ showing that is not obvious from eq. $(1)$ and $(2)$?
| This is using the fact that if a vector $v\in H$ satisfies $\langle h,v\rangle=0$ for all $h\in H$, then $v=0$. (To prove this, just take $h=v$ and use the positive-definiteness of the inner product.) Applying this with $v=T^*k-Sk$, equation (3) tells you that $T^*k-Sk=0$ so $T^*k=Sk$.
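In finite dimensions the adjoint is the conjugate transpose, and the defining identity is easy to check numerically. The sketch below is my own illustration (using the convention that the inner product is linear in its first slot):

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
h = rng.normal(size=3) + 1j * rng.normal(size=3)
k = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda x, y: np.vdot(y, x)      # <x, y>, linear in the first slot

T_star = T.conj().T                     # in finite dimensions, T* = conjugate transpose
assert np.isclose(inner(T @ h, k), inner(h, T_star @ k))
```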
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4503824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reference request: positive first chern class of tangent bundle implies anticanonical line bundle is ample I am searching for a reference (preferably with a proof) for the following result:
Let $X$ be a smooth projective curve, $T_X$ its tangent bundle, $K_X$ its canonical bundle. If $c_1(T_X) > 0$, then $K_X^{-1}$ is ample.
| As discussed in the comments, one can prove the identity $c_1(\det E) = c_1(E)$ using the splitting principle. Since $K_X^{-1} = \det T_X$, we see that $$c_1(K_X^{-1}) = c_1(\det T_X) = c_1(T_X) > 0$$ and therefore $K_X^{-1}$ is ample. Here we have used the isomorphism $\det(E^*)^* \cong \det E$ to identify $K_X^{-1}$ with $\det(T_X)$ (since $K_X^{-1} = K_X^*$). Alternatively, we could use the identity $c_1(E^*) = -c_1(E)$ to reach the same conclusion: $$c_1(K_X^{-1}) = c_1(K_X^*) = -c_1(K_X) = -c_1(\det T^*_X) = -c_1(T^*_X) = c_1(T_X) > 0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Decision variable limit condition in constrained linear and quadratic programing We are given a quadratic (or linear) program as$$\min \limits _xx^TPx+q^Tx\qquad \text{s.t.}\quad a_i^tx\leq b_i,\quad \forall i\in \{1,\ldots ,N\}.$$In the above problem, we do not have any limit on the decision variables.
I want to find the decision variable limit for the above problem i.e. $x\in [\![x_\min ,x_\max ]\!]$ such that the optimization problem always yields a solution, even if we manually add or delete the constraints.
Please do let me know if my thought process is correct or if there is a better problem to approach this problem.
I know that the resulting solution will satisfy the KKT condition. If we suppose that only one constraint is active at a time and determine the closed form solution of the decision variable, we have the limit of the variable for the problem. As the constraints are linear in the decision variable, we can add the limit on all the constraints to ensure that the solution will always be feasible. (I am not sure if the maximum is more appropriate and correct).
Any help and guidance will be highly appreciated.
| The objective function is irrelevant to your question, as your interest is a box feasible region of $x$.
In the case that $x$ is a univariate variable, you can solve the following two linear programs:
$$
\begin{array}{lll} x_{min} =& \min_x & x \\
&\text{s.t. } & a_i x\leq b_i \quad \forall i \in \{1, \dots, N\}
\end{array}
$$
and
$$
\begin{array}{lll} x_{max} =& \max_x & x \\
&\text{s.t. } & a_i x\leq b_i \quad \forall i \in \{1, \dots, N\}
\end{array}
$$
In the case that $x$ is multivariate, the box feasible region is not unique. We'll need more specific requirements on the region of interest (such as maximizing the area) in order to narrow down the options. See the following example with $x\in\mathbb{R}^2$. Consider the constraints
$$
\begin{align*}
x_1 \geq 0 \\
x_2 \geq 0 \\
x_1 + x_2 \geq 1.
\end{align*}
$$
The box feasible region is
$\begin{pmatrix} x_1 \\x_2 \end{pmatrix} \in
\begin{bmatrix}
\begin{pmatrix}1-k\\k \end{pmatrix}, &
\begin{pmatrix}\infty \\ \infty \end{pmatrix}
\end{bmatrix}$ for any $k\in [0,1]$.
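For the univariate case, the two linear programs can be solved with `scipy.optimize.linprog` (the constraint data below are made up for illustration; note that `linprog` defaults to $x \ge 0$, so the bounds must be freed explicitly):

```python
import numpy as np
from scipy.optimize import linprog

# Made-up univariate constraints a_i * x <= b_i:  x <= 3,  -x <= 2,  0.5 x <= 2
A_ub = np.array([[1.0], [-1.0], [0.5]])
b_ub = np.array([3.0, 2.0, 2.0])

lo = linprog(c=[1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
hi = linprog(c=[-1.0], A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)])
x_min, x_max = lo.x[0], hi.x[0]
assert np.isclose(x_min, -2.0) and np.isclose(x_max, 3.0)
```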
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim_{x\rightarrow 0}{\frac{\tan(x)-x}{x\cdot(e^{x^2}-e^{x^3})}}$ I have to solve the following limit:
$$\lim_{x\rightarrow 0}{\frac{\tan(x)-x}{x\cdot(e^{x^2}-e^{x^3})}}$$
I'm struggling to find a solution. I tried using L'Hôpital's rule but it's not working.
I also tried making some changes to the function, for instance splitting it into two addends, but without success.
|
$L=\lim_{x\rightarrow 0}{\frac{\tan(x)-x}{x(e^{x^2}-e^{x^3})}}$
$$\begin{align}L&=\lim_{x\rightarrow 0}{\frac{\tan(x)-x}{x^3}}\cdot\frac{x^2}{(e^{x^2}-e^{x^3})}\\
\\
&=\lim_{x\rightarrow 0}{\frac{\tan(x)-x}{x^3}}\cdot\lim_{x\rightarrow 0}\frac{x^2}{(e^{x^2}-e^{x^3})}\\
\\
&=\frac{1}{3}\cdot 1=\frac{1}3\end{align}$$
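The value $\frac13$ can be confirmed symbolically, e.g. with SymPy:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.tan(x) - x) / (x * (sp.exp(x**2) - sp.exp(x**3)))
assert sp.limit(expr, x, 0) == sp.Rational(1, 3)
```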
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Side of the largest possible cube inside a cone? Question:
What is the side of the largest possible cube inside a cone of height $12$ units and radius $3\sqrt 2$ units?
Now, when this question came up (in my class), I instantly thought to myself that $1^{st}$ I must check how the cube must be placed. With the limited time I had, I drew the following cases:
I reasoned to myself that any intermediate variation must ultimately settle at one of these $2$ supposed extremities. Let the figure on the right be called $R$ and that on the left, $L$. Then by looking at the diagram itself, I concluded that since $R$ makes better use of the extended width of the cone down below, it must be the largest possible cube that one may fit inside the cone.
But my teacher directly proceeded with $L$ and no one else minded it, so I questioned it, to which I was told by my teacher that $L$ IS better. Thus, here. Please help.
Finally, using similar triangles and the common vertex angle in $L$, we arrived at $4$cm for the side of the cube (in $L$).
| I think your teacher is right, though it's very disappointing that he/she didn't seem to take your question seriously. First thing to note is that your right-hand diagram is not accurate. If you stand a cube upright on one of its corners (I am assuming that's what you meant), the profile will be a hexagon, not a square. One corner will be on the base of the cone. Three corners will be at height $s/\sqrt3$, where $s$ is the side of the cube, and will lie on a circle with radius $s\sqrt{2/3}$. Three more will be at height $2s/\sqrt3$ and will lie on a circle with the same radius. The top corner will be at height $s\sqrt3$ above the base.
By similar triangles, the radius of the cone at height $h$ is given by
$$r=3\sqrt2\Bigl(1-\frac{h}{12}\Bigr)\ ,$$
and the second condition above means that for the cube to fit inside the cone we need
$$3\sqrt2\Bigl(1-\frac{s}{6\sqrt3}\Bigr)\ge s\sqrt{\frac23}\ .$$
Solving gives $s\le2\sqrt3$ which is less than $4$, so the cube in the left-hand diagram is bigger.
If you meant the cube to be standing on an edge then the profile is a square as in your diagram (or a pair of rectangles if you look from a different angle). In this case four vertices form a rectangle with circumradius $s\sqrt3/2$ at height $s/\sqrt2$. Using the same idea as above gives
$$s\le\frac{12\sqrt2}{1+2\sqrt3}\ ,$$
which is actually bigger than before, but still less than $4$.
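A quick symbolic check of the corner-standing bound above (solving the fit condition with equality):

```python
import sympy as sp

s = sp.symbols('s', positive=True)
# Cone of height 12 and base radius 3*sqrt(2); the upper ring of three cube
# corners sits at height 2s/sqrt(3) on a circle of radius s*sqrt(2/3).
eq = sp.Eq(3*sp.sqrt(2)*(1 - s/(6*sp.sqrt(3))), s*sp.sqrt(sp.Rational(2, 3)))
sol = sp.solve(eq, s)
assert sp.simplify(sol[0] - 2*sp.sqrt(3)) == 0
assert sol[0] < 4   # smaller than the side-4 cube standing on a face
```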
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 0
} |
How many distinct transitive actions does quaternion group have. Quaternion group $Q_8$ has though $6$ elements of order $4$, but there are three sets ($(i, -i), (j, -j), (k, -k)$) having seperate subgroups, with identity and $-1$ common only.
Apart from that have one element of order $2$, and $e$.
But, how to use that information, is unclear.
It is not a cyclic group, hence cannot simply state based on orders of elements.
| Hint Just like in your previous post about transitive actions of $C_{12}$, the general result (that @Derek Holt clued us in on) is that there are as many inequivalent actions as conjugacy classes of subgroups (of, in this case, $Q_8$).
The aforementioned also gave us a headstart in the comments that in this case there are $6$ conjugacy classes of subgroups.
The subgroups are: $\{1\},\langle-1\rangle, \langle i\rangle, \langle j\rangle, \langle k\rangle $ and $Q_8$.
Automorphisms preserve the order of subgroups, so the only question is if the $3$ subgroups of order $4$ are conjugate. Since they each have index $2$, they're all normal. So not conjugate. That's where the number $6$ comes from (each subgroup is its own conjugacy class).
Here's some terminology: the quaternions are Hamiltonian. That is, they form a non-abelian group in which every subgroup is normal.
In this problem, that tells us the number of transitive actions is equal to the number of subgroups.
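The subgroup count can also be confirmed by brute force. The sketch below (my own verification code, not part of the answer) represents $Q_8$ by quaternion $4$-tuples, enumerates all subsets closed under multiplication, and checks that there are exactly $6$ subgroups, all normal.

```python
from itertools import combinations

def qmul(p, q):  # Hamilton product of quaternions (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

one = (1, 0, 0, 0)
Q8 = [tuple(s if i == j else 0 for i in range(4)) for j in range(4) for s in (1, -1)]

def is_subgroup(S):  # a finite subset containing 1 and closed under * is a subgroup
    return one in S and all(qmul(g, h) in S for g in S for h in S)

subgroups = [frozenset(c) for size in (1, 2, 4, 8)
             for c in combinations(Q8, size) if is_subgroup(frozenset(c))]

inv = {g: (g[0], -g[1], -g[2], -g[3]) for g in Q8}  # inverse = conjugate (unit norm)
normal = all(all(frozenset(qmul(qmul(g, h), inv[g]) for h in S) == S for g in Q8)
             for S in subgroups)

assert len(subgroups) == 6 and normal
```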
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding a horizontal line that divides a modulus function's area into 2 equal halves
How to find the horizontal line $y=k$ that divides the area of $y=2-\lvert x-2\rvert $ above the $x$-axis into two equal halves using Calculus?
I know of methods other than Calculus to solve this, but my question requires to solve it using Calculus (Definite Integration).
I've tried to solve it using calculus after watching YouTube video solutions of the same type of question where the functions of interest were parabolas or other curves. Still couldn't get to the right solution. I'm assuming I've made mistakes in taking the limits of integration, as this is an absolute value function and not a parabola.
How to solve this question using calculus? If there are ways to solve this by integrating the function with respect to $y$ (instead of $x$) which I have seen people doing while solving questions regarding parabola functions, then do let me know!
Note: I'm a high school senior who has been sitting with this problem since morning & I seriously cannot seem to get through and get over it. Helping me solve this question will broaden my knowledge of Calculus & will clear my misconceptions.
I've attached an image to the post as well.
The answer is $y=2-\sqrt{2}$
Thanks
| The area above the $x$ axis is
$$
A =\int_0^4 2 - \lvert x-2 \rvert \, \mathrm{d}x = \int_0^2 2 -(2-x)\, \mathrm{d}x + \int_2^{4}2 -(x-2) \, \mathrm{d}x = 4 \tag{1}
$$
Now, if we draw a horizontal line $\color{green}{y=k}$ on top of the function $y= 2 - \lvert x-2 \rvert$ it splits the latter area in $2$ halves:
In the above diagram the line $\color{green}{y=k}$ is drawn in green, and the $2$ halves are drawn in $\color{orange}{\text{orange}}$ and $\color{blue}{\text{blue}}$ respectively. The question asked is
Find the horizontal line $\color{green}{y=k}$ that divides the area of $y= 2 - \lvert x-2 \rvert$ above the $x$-axis into two equal halves.
Which in the diagram above translates to finding a $k$ such that
$$
\text{Area of }\color{orange}{\text{orange}} \text{ half}= \text{Area of }\color{blue}{\text{blue}} \text{ half} = \frac{A}{2} \overset{(1)}{=} \frac{4}{2} = 2 \tag{2}
$$
So now the question becomes writing the $\color{orange}{\text{orange}}$ and
$\color{blue}{\text{blue}}$ areas using calculus, then setting each of those equal to $2$, and then (hopefully) solving for $k$ from these equations using calculus techniques.
*
*We first calculate some key points relevant to calculating the $\color{orange}{\text{orange}}$ and $\color{blue}{\text{blue}}$ areas. Namely, what are the coordinates of the intersections of $y=k$ with $y = 2 - \lvert x-2 \rvert$?
We achieve these coordinates by setting $y=k$ equal to $y = 2 - \lvert x-2 \rvert$ and solving for $k$ $$k = 2 - \lvert x-2 \rvert \implies \begin{cases} k = 2- (2-x) &\implies x = \color{purple}{k}\\ k = 2 -(x-2) & \implies x =\color{purple}{4-k} \end{cases}$$
*Once you have the coordinates, how can you write an integral for the area of the $\color{blue}{\text{blue}}$ half?
We first notice that we can split the $\color{blue}{\text{blue}}$ region in $3$ parts: $A,B$ and $C$: Each of these regions is an area between the $x$ axis and a function, so we can calculate each of the areas $A$, $B$ and $C$ as integrals as follows $$\color{blue}{\text{Blue}}\text{ area}=\underbrace{\int_0^k \color{red}{2-\lvert x-2\rvert}\, \mathrm{d}x}_{A} + \underbrace{\int_k^{4-k} \color{green}{k} \, \mathrm{d}x}_{B} +\underbrace{\int_{4-k}^4 \color{red}{2-\lvert x-2\rvert}\, \mathrm{d}x}_{C} \tag{3}$$
*Since equation $(2)$ tells us that $\color{blue}{\text{Blue}}\text{ area} = 2$, how can we combine this with the previous step to get an equation for $k$?
Since $\color{blue}{\text{Blue}}\text{ area} = 2$, combining equations $(2)$ and $(3)$ we get \begin{align}&2 = \int_0^k 2-\lvert x-2\rvert\, \mathrm{d}x + \int_{k}^{4-k} k \, \mathrm{d}x + \int_{4-k}^4 2-\lvert x-2\rvert\, \mathrm{d}x \\ \implies & 2 = \int_{0}^{k} x \, \mathrm{d}x + k((4-k) -k) + \int_{4-k}^{4} 4-x\, \mathrm{d}x \\ \overset{\color{purple}{u = 4-x}}{\implies} & 2 = \frac{k^2}{2} + 4k - 2k^2 + \int_{0}^{k} \color{purple}{u}\, \mathrm{d}\color{purple}{u} \\ \implies & 2 = - \frac{3}{2} k^2 + 4k + \frac{k^2}{2} \\ \implies & 2 = 4k -k^2\end{align}
*How can we solve the previous equation to obtain a value for $k$?
Since the previous equation is a quadratic equation in $k$, it can be solved using the quadratic formula. We thus get \begin{align} &2 = 4k -k^2 \\ \implies & k^2 - 4k +2 =0 \\ \implies & k = \frac{4 \pm \sqrt{16 - 4(1)(2)}}{2} = \frac{4 \pm \sqrt{2 \cdot 2^2}}{2} = \frac{4 \pm 2\sqrt{2 }}{2} = 2 \pm \sqrt{2} \end{align} Now, for the line $y=k$ to split the function $y= 2 - \lvert x - 2\rvert$ into the $\color{orange}{\text{orange}}$ and $\color{blue}{\text{blue}}$ halves from the first diagram, $k$ must be between $0$ and $2$ since $2$ is the highest $y$ value of $y= 2 - \lvert x - 2\rvert$. Out of the two possibilities of sign choice in $2 \pm \sqrt{2}$ only $2 - \sqrt{2}$ is between $0$ and $2$, so we can conclude that the value of $k$ we want is $$ \boxed{y = k = 2 - \sqrt{2}}$$
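A numerical cross-check of the final answer (my addition): with $k = 2-\sqrt2$, the part of the region lying below the line $y=k$ should have exactly half of the total area $4$.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 2 - abs(x - 2)
k = 2 - np.sqrt(2)

total, _ = quad(f, 0, 4, points=[2])
below, _ = quad(lambda x: min(f(x), k), 0, 4, points=[k, 2, 4 - k])
assert np.isclose(total, 4)
assert np.isclose(below, 2)
```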
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find point of reflection on circle I'm playing around with some ray-tracing type applications and I've run into following problem:
I'm given two points $A$ and $B$ and a circle with the center at $(0,0)$ and radius $r$. $A$ and $B$ have rays that meet and touch the circle at $P$. How can I find $P$ such that the angles of incidence for the two rays are the same (i.e. $\alpha = \beta$)? $A$ and $B$ are guaranteed to be 'nice', i.e. they aren't on opposite sides of the circle and the angle of incidence is greater than 0.
Currently I am solving this numerically, but I feel sure that there must be a closed form solution to this, I'm just having some trouble finding it.
|
Let the center of the circle be $O$.
Let $\phi = \angle AOB , \ \Psi = 90^\circ + \alpha = 90^\circ + \beta , \ \theta = \angle AOP $
Let $\overline{OA} = a , \ \overline{OB} = b , \overline{OP} = r $
Using the law of sines applied to $\triangle AOP $
$ \dfrac{a}{\sin \Psi} = \dfrac{r}{\sin( \theta + \Psi)} $
So that,
$ a ( \sin \theta \cos \Psi + \cos \theta \sin \Psi ) = r \sin \Psi $
from which
$ \tan \Psi = \dfrac{ a \sin \theta }{ r - a \cos \theta } $
Similarly, applying the law of sines to $\triangle BOP$ results in
$ \dfrac{b}{\sin \Psi} = \dfrac{ r }{\sin(\phi - \theta + \Psi ) } $
So that
$ b ( \sin(\phi - \theta) \cos \Psi + \cos(\phi - \theta) \sin \Psi ) = r \sin \Psi $
From which,
$ \tan \Psi = \dfrac{ b \sin(\phi - \theta) }{r - b \cos(\phi - \theta) } $
Hence, we now have
$\dfrac{ a \sin \theta }{ r - a \cos \theta } = \dfrac{ b \sin(\phi - \theta) }{r - b \cos(\phi - \theta) } $
from which
$a \sin \theta (r - b \cos(\phi - \theta)) = b (\sin(\phi - \theta) )(r - a \cos \theta) $
Multiplying this out:
$a r \sin \theta - a b \sin \theta \cos(\phi - \theta) - b r \sin (\phi - \theta) + a b \cos \theta \sin(\phi - \theta) = 0$
Expanding $\cos(\phi - \theta) $ and $ \sin(\phi - \theta)$ our equation becomes
$(- b r \sin \phi) \cos \theta + (a r + b r \cos \phi) \sin \theta + (a b \sin \phi) \cos 2 \theta + (- a b \cos \phi) \sin 2 \theta = 0 $
Which is of the form
$c_1 \cos \theta + c_2 \sin \theta + c_3 \cos 2 \theta + c_4 \sin 2 \theta + c_5 = 0 $
where $c_1, c_2, c_3, c_4, c_5$ are known constants (in our case $c_5 = 0$). This equation can be solved in closed form or numerically. For the closed form solution check here
Once $\theta$ is determined, and to determine point $P$, find the cross product $\vec{OA} \times \vec{OB} $, with the $z$-component of both vectors set to zero. If the cross product has a positive $z$-component, then $\vec{OB}$ is counter clockwise from $\vec{OA}$ otherwise it is clockwise. Now $P$ is a rotation of $\bigg(r \dfrac{\vec{OA}}{a}\bigg)$ by a positive $\theta$ in the first case, and by a negative $\theta$ in the second case. Rotation is achieved using the rotation matrix
$ R = \begin{bmatrix} \cos \theta && - \sin \theta \\ \sin \theta && \cos \theta \end{bmatrix} $
So,
$ P = \dfrac{r}{a} R A $
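Here is a numerical sketch (my own, with made-up positions for $A$ and $B$) that solves the trigonometric equation for $\theta$ with a root finder and then verifies the equal-angle condition at $P$:

```python
import numpy as np
from scipy.optimize import brentq

r, a, b, phi = 1.0, 3.0, 2.0, 1.2       # circle radius, OA, OB, angle AOB (made up)
A = np.array([a, 0.0])
B = b * np.array([np.cos(phi), np.sin(phi)])

# a sin(t)(r - b cos(phi-t)) = b sin(phi-t)(r - a cos(t))  -- equation for theta
f = lambda t: a*np.sin(t)*(r - b*np.cos(phi - t)) - b*np.sin(phi - t)*(r - a*np.cos(t))
theta = brentq(f, 1e-9, phi - 1e-9)

P = r * np.array([np.cos(theta), np.sin(theta)])
nrm = P / r                              # outward unit normal at P
ang = lambda v: np.arccos(np.dot(v, nrm) / np.linalg.norm(v))
assert np.isclose(ang(A - P), ang(B - P))   # equal angles of incidence
```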
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4504910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Least Square Estimator Derivation for 2-Dimensional Stochastic Process I am trying to work through an example in this paper Least squares estimators for discretely observed
stochastic processes
The authors give the following
$$
\Psi_{n,\epsilon}(\theta) = \sum_{k=1}^n \frac{\lvert X_{t_k} - X_{t_{k-1}}-b(X_{t_{k-1}},\theta)\Delta_{t_{k-1}}\rvert^2}{\epsilon^2\Delta_{t_{k-1}}}
$$
of which minimizing gives the Least Square Estimator.
The example I am working through is as follows:
The authors state after some basic calculations they achieve the LSE; however, it doesn't seem so basic. If I try to write out $\Psi_{n,\epsilon}(\theta)$ in this case, then it becomes very messy quickly. Also, the matrix $\Lambda_n$ is not invertible, nor is it clear to me where they got that from. So I must be missing something as I assume the example is indeed correct.
Would appreciate if someone would enlighten me.
| Define $B^T=[C,A],\,\tilde{y}=[1,y^{(1)},y^{(2)}]^T$. Note, for $x,y \in \mathbb{R}^2$
$$\begin{aligned}|x-(C+Ay)n^{-1}|^2&=|x-B^T\tilde{y}n^{-1}|^2=\\
&=x^Tx-2x^TB^Tyn^{-1}+\tilde{y}^TBB^T\tilde{y}n^{-2}\end{aligned}$$
and by linearity
$$\varepsilon^{-2}n\sum_{k\leq n}|x_k-(C+Ay_k)n^{-1}|^2=\varepsilon^{-2}n\sum_{k\leq n}x_k^Tx_k-2\varepsilon^{-2}\sum_{k\leq n}x_k^TB^T\tilde{y}_k+\varepsilon^{-2}n^{-1}\sum_{k\leq n}\tilde{y}^T_kBB^T\tilde{y}_k$$
take the derivative wrt to $B$ and set to $0$:
$$-2\sum_{k\leq n}\tilde{y}_kx_k^T+2n^{-1}\bigg(\sum_{k\leq n}\tilde{y}_k\tilde{y}^T_k\bigg)B=0\implies B=\bigg(\sum_{k\leq n}\tilde{y}_k\tilde{y}_k^T\bigg)^{-1}\bigg(n\sum_{k\leq n}\tilde{y}_kx_k^T\bigg)$$
Now note
$$\tilde{y}_k\tilde{y}_k^T=\begin{bmatrix}1&y^{(1)}_k&y^{(2)}_k\\
y^{(1)}_k&(y_k^{(1)})^2&y_k^{(1)}y_k^{(2)}\\
y^{(2)}_k&y_k^{(1)}y_k^{(2)}&(y_k^{(2)})^2
\end{bmatrix},\,\tilde{y}_kx_k^T=
\begin{bmatrix}x^{(1)}_k&x^{(2)}_k\\
x^{(1)}_ky^{(1)}_k&x^{(2)}_ky_k^{(1)}\\
x^{(1)}_ky^{(2)}_k&x^{(2)}_ky_k^{(2)}
\end{bmatrix}$$
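The closed form can be double-checked on synthetic data (my own verification): since the objective is an ordinary least-squares problem in $B/n$, the formula must agree with `numpy.linalg.lstsq` up to the factor $n$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Y = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # rows: y~_k = (1, y1, y2)
X = rng.normal(size=(n, 2))                                 # rows: x_k

# Closed form: B = (sum y~ y~^T)^(-1) (n * sum y~ x^T)
B = np.linalg.solve(Y.T @ Y, n * (Y.T @ X))

# The objective sum_k |x_k - B^T y~_k / n|^2 is OLS in B/n:
B_ref = n * np.linalg.lstsq(Y, X, rcond=None)[0]
assert np.allclose(B, B_ref)
```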
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4505023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the opposite to "discretization"? Solving an initially continuous problem using discrete math tools is known as "discretization".
Does the opposite option of using continuous tools to solve discrete problems have a name? And if not, is there any reason why?
Coming from natural language processing, that would apply for instance to resorting to word-embeddings, where one expresses words as vectors in order to escape from the essentially discrete nature of word combinatorics. In econometrics, that could apply to the standard modelling of discrete choices, where a fictional continuous quantity (utility) is maximised... and the list probably goes on for ever.
| It seems to be an embedding, as you already wrote. Check this Wikipedia entry: https://en.m.wikipedia.org/wiki/Embedding
The function or process could be called an embedding, or maybe a continuity approximation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4505353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Left side is finite but right side becomes infinite despite using correct series expansions I want to evaluate (for integer $n,p$)
$$
L_p = \sum_{m=1}^{n-1} \ln \left( 4 \sin^2 \frac{m \pi }{n}\right) e^{- 2\pi i p m
/n}
$$
and I have got two tools
$$
\sum_{k=1}^{\infty} \frac{\cos kx}{k} = - \frac{1}{2} \ln \left(4 \sin ^2 \frac{x}{2}\right)
$$
and
$$
\sum_{k=1}^{n-1} r^{k} \cos kx = \frac{1 - r \cos x - r^{n} \cos (nx) + r^{n+1} \cos (nx -x)}{r^2 + 1 - 2 r \cos x} -1
$$
both of which are formulas tabulated in Gradshteyn, and I've individually
verified them. (I couldn't derive the first myself, but the second was easy to prove.)
Plugging in $L_p$, we get
$$
L_p = \sum_{m=1}^{n-1} \left[-2 \sum_{k=1}^{\infty} \frac{\cos \left( k \cdot \frac{2 m \pi}{n}\right)}{k}\right] e^{- 2\pi i p m/n}
$$
and exchanging the sum (as I understand can always be done according to this
question), this becomes
$$
L_p\; = -2\sum_{k=1}^{\infty} \frac{1}{k} \cdot \left[ \sum_{m=1}^{n-1} e^{-2 \pi i p m/n} \cdot \cos \left( m \cdot \frac{2\pi k}{n}\right)\right]
$$
Setting $r = e^{-2 \pi i p/n}$, $x= 2\pi k/n$ and using the above formula, the inner sum becomes $-1$, which means
$$
L_p = 2 \sum_{k=1}^{\infty} \frac{1}{k}
$$
which is of course divergent, but the left side was finite because $m=0$ never
occurs in the sum.
What is going wrong, and how can I get $L_p$?
| You have correctly obtained $$L_p=-2\sum_{k=1}^\infty\frac{A_k}{k},\quad A_k=\sum_{m=1}^{n-1}e^{-2\pi imp/n}\cos(2mk\pi/n),$$ but $A_k$ is not always equal to $-1$: in fact $2A_k=B_{p-k}+B_{p+k}-2$, where $$B_j=\sum_{m=0}^{n-1}e^{-2\pi imj/n}=\begin{cases}n,&n\mid j\\0,&n\nmid j\end{cases}\qquad(j\in\mathbb{Z})$$ Thus, thanks to $\sum_{k=1}^m A_k=0$ (and $A_{m+k}=A_k$), the sum $\sum_{k=1}^\infty A_k/k$ converges.
As for a "closed form", we get $L_p=S_p+S_{-p}$ where, for $0<j\leqslant n$,
\begin{align*}
S_j&=\sum_{k=1}^\infty\frac{1-B_{j-k}}{k}\\&=\lim_{N\to\infty}\sum_{k=1}^{nN}\frac{1-B_{j-k}}{k}\\&=\lim_{N\to\infty}\left(\sum_{k=1}^{nN}\frac1k-\sum_{k=0}^{N-1}\frac{n}{nk+j}\right)\\&=\lim_{N\to\infty}\left[\sum_{k=1}^{nN}\frac1k-\sum_{k=1}^N\frac1k+\sum_{k=0}^{N-1}\left(\frac1{k+1}-\frac1{k+j/n}\right)\right]\\&=\log n+\gamma+\psi(j/n)
\end{align*}
using the digamma function $\psi$, and clearly $S_{n+j}=S_j$ for any $j\in\mathbb{Z}$. This gives an expression for $L_p$ using two values of $\psi$, or just one of them if we apply the reflection formula. But we cannot get rid of $\psi$ completely (not using Gauss's digamma theorem - and we don't, since it would essentially bring us back to the beginning of the question; in fact the above is a way to prove this theorem).
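The closed form can be spot-checked numerically (my addition, for an odd $n$), comparing the direct sum with $2\log n + 2\gamma + \psi(p/n) + \psi(1-p/n)$:

```python
import numpy as np
from scipy.special import digamma

n, p = 7, 2
m = np.arange(1, n)
L = np.sum(np.log(4 * np.sin(m * np.pi / n)**2) * np.exp(-2j * np.pi * p * m / n))
closed = 2*np.log(n) + 2*np.euler_gamma + digamma(p/n) + digamma(1 - p/n)
assert abs(L.imag) < 1e-10          # L_p is real (summand pairs m <-> n-m)
assert np.isclose(L.real, closed)
```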
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4505528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does connected open subset intersect another subset exactly when it intersects the subset and its complement? Let $(X, \mathcal{T})$ be a topological space, $U \in \mathcal{T}$ be connected, and $V \subset X$. Does it hold that
$U \cap \partial V(X) \neq \emptyset \iff U \cap V \neq \emptyset \text{ and } U \setminus V \neq \emptyset$?
Here $\partial V(X)$ is the boundary of $V$ in $X$. Similarly $\bullet V(X)$ is the interior of $V$ in $X$, and $\overline{V}(X)$ is the closure of $V$ in $X$.
Background
I can prove that the following are equivalent:
*
*$U \cap \partial V(X) \neq \emptyset$
*$U \cap V \neq \emptyset \text{ and } U \setminus \overline{V}(X) \neq \emptyset$
*$U \cap \bullet V(X) \neq \emptyset \text{ and } U \setminus V \neq \emptyset$
Also, the direction $\implies$ holds even without assuming $U$ is connected, so the question is actually about the $\impliedby$ direction.
In particular, the above shows that the claim holds when $V$ is either open or closed. I suspect there is a counter-example for general $V$, but cannot come up with one.
| For the $\impliedby$ direction:
Suppose $U \cap V \neq \emptyset$ and $U \setminus V \neq \emptyset$, but $\partial V \cap U = \emptyset$. For each $u \in U$, since $u$ is not a boundary point of $V$, there is a neighborhood $B_u \subset U$ of $u$ which is either entirely contained in $V$, or entirely disjoint from $V$. If $W = \bigcup_{u \in U \cap V} B_u$, and $W' = \bigcup_{u \in U \setminus V} B_u$, then both are non-empty open sets contained in $U$, and $U = W \cup W'$. Moreover $W \subseteq V$ while $W' \cap V = \emptyset$, so $W$ and $W'$ are disjoint. So $U$ is disconnected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4505657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Eigen values of a boundary value problem. Consider the differential equation $$y''+2y'+a(x)y=\lambda y, \quad y'(0)=0=y'(1)$$ where $a(x):[0,1]\to (1,\infty)$ is a continuous function. Let $y$ be a positive eigenfunction corresponding to eigenvalue $\lambda$. Then which of the following statements are possibly true?
$1$ $\lambda >0$.
$2.$ $ \lambda <0$.
$3.$ $\lambda =0$.
$4.$ $\int_0^1 (y')^2dx=2\int_0^1 yy'dx+\int_0^1(a(x)-\lambda)y^2dx$.
If $\lambda =0$ then the differential equation becomes $y''+2y'+a(x)y=0$. I can solve it only by taking a particular $a(x)$. I tried choosing $a(x)=0$, which gives the trivial solution. Then I tried taking $a(x)=1$, which again gives the trivial solution. The given problem is not a Sturm–Liouville problem, so I cannot say the eigenvalues will be positive. I think I am not on the right track. Please help me solve this problem or give some detailed solution. Thank you in advance.
| Reject option $(3)$ by choosing $a(x)=2$ and $\lambda=0$. The solution in this case is $y(x)=0$ which can never be an eigenfunction.
For option $(1)$ if you choose $\lambda=2(>0)$ with $a(x)=2$, you get the eigenfunction $y=c, c\in\mathbb R-\{0\}$. For positive eigenfunction you need $c\in\mathbb R^+$, so option $(1)$ is possibly true.
Option $(2)$ can be discarded as well for any $\lambda<0$ where a zero solution is obtained on applying conditions. Option $(4)$ is true as proved by xpaul.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Transformation of a variable Let's say $x$ is a continuous variable such that $-a\leq x\leq a$, $a>0$. Is there any way to transform this variable so that the resultant variable can take any value on the real line?
I am looking for some $1:1$ transformation so that I can also make reverse transformation.
| Yes. Here are several examples:
$$x \rightarrow \tan\left(\frac{\pi x}{2 a}\right).$$
$$x \rightarrow \frac{1}{x/a-1}+\frac{1}{x/a+1}.$$
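For the tangent map, the inverse is $x = \frac{2a}{\pi}\arctan y$, and the round trip is easy to verify numerically. (Note it is a bijection from the open interval $(-a, a)$ onto $\mathbb{R}$; the endpoints $\pm a$ have to be excluded.)

```python
import numpy as np

a = 3.0
forward = lambda x: np.tan(np.pi * x / (2 * a))      # (-a, a) -> R
inverse = lambda y: (2 * a / np.pi) * np.arctan(y)   # R -> (-a, a)

x = np.linspace(-a + 1e-6, a - 1e-6, 101)
assert np.allclose(inverse(forward(x)), x)
```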
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Let $G$ be a group, $|G|=n$. Suppose $\forall d\in \mathbb{N}$ such that $d\mid n$, there are at most $d$ elements s.t. $x^d=1$. Prove $G$ is cyclic. Let $G$ be a group, $|G|=n$. Suppose $\forall d\in \mathbb{N}$ such that $d\mid n$, there are at most $d$ elements such that $x^d=1$. Prove $G$ is cyclic.
I saw these posts:
*
*If a finite group $G$ of order $n$ has at most one subgroup of each order $d\mid n$, then $G$ is cyclic
*How do I show that a finite group $G$ of order $n$ is cyclic if there is at most one subgroup of order $d$ for each $d\mid n$?
I am not sure these questions' posts have the same meaning as my own question.
My attempt:
My approach is to show $G$ has at most one subgroup of each of each order $d\mid n$.
Using Cauchy's theorem, there is a cyclic subgroup of order $p$ for each prime $p\mid n$.
Let $H<G$, $|H|=p$. Every $h\in H\setminus\{e\}$ has $O(h)=p$ (Lagrange's Theorem), so every element of $H$ satisfies $x^p=1$.
Combining this with the given fact that there are at most $p$ elements with $x^p=1$ $\implies H$ is the unique subgroup of order $p$.
My problem is, how do I deal with non-primary order subgroup?
Any help (and another approach) is welcome.
Thanks!
| Suppose that $G$ isn't cyclic. Let $x\in G$. Then $d=| x|\lt n$. Since each element of order $d$ lies in the unique cyclic subgroup of order $d$, there are at most $\varphi(d)$ elements of each order $d$, so by Euler's formula, $$n=\sum_{d\mid n}\varphi(d)\gt\sum_{d\mid n,d\ne n}\varphi (d)\geq | G| = n,$$ a contradiction.
The reason we can use Euler's formula here is that by hypothesis there's at most one cyclic subgroup of each order dividing $n$.
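The identity $n=\sum_{d\mid n}\varphi(d)$ used above is easy to spot-check, e.g. with SymPy:

```python
from sympy import totient, divisors

n = 12
assert sum(totient(d) for d in divisors(n)) == n
# Dropping the d = n term leaves strictly fewer than n elements accounted for:
assert sum(totient(d) for d in divisors(n) if d != n) < n
```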
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ratio of $\Gamma(n)$ to $\Gamma(n/2)^2, n\to \infty?$ I'm trying to find the limit
$$\lim_{n\to \infty}\dfrac{\Gamma(n)}{\Gamma(\frac{n}{2})^2},$$
and also trying to find the precise decay rate for the above, i.e. the function $D(n)$ so that if $G(n):=\frac{\Gamma(n)}{\Gamma(\frac{n}{2})^2}$
$$\lim_{n\to \infty}\frac{G(n)}{D(n)}=1$$
How do we find them? My guess is that the limit will be zero, but not sure.
| You do not strictly need Stirling's approximation or the full power of the Legendre duplication formula, the usual approximations for central binomial coefficients are enough. If $n=2m+2$
$$ \color{blue}{\frac{\Gamma(n)}{\Gamma(n/2)^2}} = \frac{(2m+1)!}{m!^2} = (2m+1)\binom{2m}{m}\sim \frac{4^m}{\sqrt{\pi m}}\left(2m+\frac{3}{4}\right)\color{blue}{\sim\frac{2^{n}}{2\sqrt{2\pi n}}\left(n-\frac{1}{4}\right)}$$
and since
$f(z)=\log\Gamma(z)$ is a convex function on $\mathbb{R}^+$ (as a consequence of the Bohr-Mollerup theorem or just by the integral representation of the $\Gamma$ function) the previous asymptotic approximation holds also if $n$ is an odd number.
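As a numerical cross-check (my own sketch, not from the original answer), working with log-gamma avoids overflow and shows the ratio of $G(n)$ to the claimed $D(n)=\frac{2^{n}}{2\sqrt{2\pi n}}\left(n-\frac14\right)$ tending to $1$:

```python
import math

def log_G(n):
    # log of Gamma(n) / Gamma(n/2)^2
    return math.lgamma(n) - 2 * math.lgamma(n / 2)

def log_D(n):
    # log of the claimed asymptotic 2^n (n - 1/4) / (2 sqrt(2 pi n))
    return (n * math.log(2) + math.log(n - 0.25)
            - math.log(2 * math.sqrt(2 * math.pi * n)))

# the ratio G(n) / D(n) should approach 1 (both even and odd n)
for n in (50, 200, 501):
    ratio = math.exp(log_G(n) - log_D(n))
    assert abs(ratio - 1) < 1e-3
```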
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Tangent to ellipse at point I am searching for a tangent (or just it's angle) to an ellipse at a specific point on the ellipse (or it's angle to the center of the ellipse).
The equation of the ellipse is $\frac{x^2}{\text{a}^2} + \frac{y^2}{\text{b}^2} = 1$.
$a$, $b$ and the point on the ellipse are given, and I search for the angle of the tangent.
I tried a geometric approach which yields an almost accurate angle:
I just take a line through two points, one degree to the left and right of the given point, and obtain the angle by using the arctangent of that line's slope.
Is there a more accurate way?
| Use implicit differentiation.
$$\frac{x^2}{\text{a}^2} + \frac{y^2}{\text{b}^2} = 1$$
$$\frac{2x}{a^2} + \frac{2yy'}{b^2} = 0$$
$$\frac{2b^2x + 2a^2yy'}{a^2b^2} = 0$$
$$2b^2x + 2a^2yy' = 0$$
$$2a^2yy' = -2b^2x$$
$$y' = \frac{-2b^2x}{2a^2y}$$
$$y' = \frac{-b^2x}{a^2y}$$
That gives you the slope of the tangent line at the point $(x, y)$.
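As a numerical cross-check (my own sketch, with hypothetical sample values for $a$ and $b$): parametrize the ellipse as $(a\cos t, b\sin t)$, where the exact tangent slope is $\frac{b\cos t}{-a\sin t}$, and compare with $-\frac{b^2x}{a^2y}$ from implicit differentiation.

```python
import math

a, b = 5.0, 3.0  # hypothetical semi-axes for illustration

for t in (0.3, 1.0, 2.2, 4.0):  # avoid points with y = 0
    x, y = a * math.cos(t), b * math.sin(t)
    slope_implicit = -(b * b * x) / (a * a * y)
    slope_param = (b * math.cos(t)) / (-a * math.sin(t))
    assert abs(slope_implicit - slope_param) < 1e-9
    # the angle of the tangent is then just the arctangent of the slope
    angle = math.atan(slope_implicit)
```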
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $T$ self-adjoint if it is closed, symmetric and $\ker T=\ker T^*$? Let $T$ be a densely defined, closed, unbounded, and symmetric linear operator (i.e., $T\subset T^*$) defined in a Hilbert space, with domain $D(T)$.
Is it true that if we further suppose $\ker T=\ker T^*$, then $T$ is self-adjoint, i.e., $T=T^*$?
I know this is true for the class of quasinormal operators. Recall also, that we always have $\ker T\subset \ker T^*$ when $T$ is symmetric.
I hope this is not too obvious!
Thanks a lot.
| Let $\mathcal{H}=L^2[0,\infty)$ and let $T : \mathcal{D}(T) \subset \mathcal{H}\rightarrow\mathcal{H}$ be defined as $Tf=if'$ on the domain $\mathcal{D}(T)$ consisting of all absolutely continuous functions $f\in L^2[0,\infty)$ such that $f(0)=0$ and $f'\in L^2[0,\infty)$. $T$ is a closed, densely-defined, symmetric linear operator. However, $T^*$ is not the same as $T$ because the domain of $T^*$ includes functions $f$ that do not vanish at $0$, unlike the functions in the domain of $T$. The closed symmetric operator $T$ is not self-adjoint, even though $$\mathcal{N}(T)=\mathcal{N}(T^*)=\{0\}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proving $\sum^n_{k=0} k^2 {n\choose k} = (n+n^2)2^{n-2}$ We can start with the expression of the binomial expansion, $\sum^n_{k=0} {n\choose k} x^ky^{n-k}= (x+y)^n$. Setting $x=y=1$ gives $\sum^n_{k=0} {n\choose k} = 2^n$
Differentiating both sides with respect to $x$ gives $\sum^n_{k=0} k{n\choose k} x^{k-1}y^{n-k} = n(x+y)^{n-1}$, and substituting $x=y=1$ gives $\sum^n_{k=0} k{n\choose k} = n2^{n-1}$
Differentiating again gives $\sum^n_{k=0} k^2{n\choose k} x^{k-2}y^{n-k} = (n^2-n)(x+y)^{n-2}$, and substituting $x=y=1$ gives $$\sum^n_{k=0} k^2{n\choose k} = (n^2-n)2^{n-2}$$
However, the identity is $$\sum^n_{k=0} k^2{n\choose k} = (n^2+n)2^{n-2}$$
I don't seem to have lost a sign somewhere, where is my mistake?
| You have written the second derivative of $x^{k}$ as $k^{2}x^{k-2}$. It is $k(k-1)x^{k-2}$.
If you add $n2^{n-1}$ to $(n^{2}-n)2^{n-2}$ you get $(n^{2}+n)2^{n-2}$, which recovers the stated identity.
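The corrected identity is easy to confirm numerically (my own sketch, not part of the original answer):

```python
from math import comb

def lhs(n):
    # sum_{k=0}^{n} k^2 C(n, k)
    return sum(k * k * comb(n, k) for k in range(n + 1))

def rhs(n):
    # (n^2 + n) 2^{n-2}
    return (n * n + n) * 2 ** (n - 2)

assert all(lhs(n) == rhs(n) for n in range(2, 30))

# and the two pieces combine as the answer explains:
# (n^2 - n) 2^{n-2} + n 2^{n-1} = (n^2 + n) 2^{n-2}
assert all((n * n - n) * 2 ** (n - 2) + n * 2 ** (n - 1) == rhs(n)
           for n in range(2, 30))
```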
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4506987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof that $\sup(c+A) = c + \sup(A)$ Understanding Analysis: Abbott exercise $1.3.5$:
Let $A \subseteq \mathbb{R}$ be bounded above and let $c \in \mathbb
{R}$. Define the set $c + A = \{c + a : a \in A\} $.
Proof.
Since $A$ is bounded above, there exists an upper bound $x \in \mathbb{R}$ such that $a \le x$, $\forall a \in A$. It follows that $\sup(A) \le x$. Now, $a \le x \implies c+a \le c+x \implies c+a \le c+\sup(A) $. Therefore, $c+\sup(A)$ is an upper bound of $c+A$.
Let $x$ be any upper bound of $A$. Since $a \le x \implies a+c \le x+c$ for all $a \in A$, $x+c$ is an upper bound of $c+A$. It follows that $c+ \sup(A) \le x+c$ because $\sup(A)\le x$.
Now, because $c+\sup(A)$ is an upper bound of $c+A$ and $c+\sup(A) \le x+c$ which is another upper bound,
$\sup(c+A) = c+\sup(A)$ by definition.
Is this proof correct?
| We have $x\leq\sup A$ for all $x\in A$ and hence $c+x\leq c+\sup A$
for all $x\in A$. That implies that $\sup_{x\in A}(c+x)\leq c+\sup A$ and by
definition of the set $c+A$ we obtain: $\sup(c+A)\leq c+\sup A$.
Now assume that we have strict inequality, i.e. there is a $\delta$
such that $\sup(c+A)<\delta<c+\sup A$.
Take a sequence $x_{n} \in A$ converging to $\sup A$.
Then clearly for $n$ sufficiently large $c+x_{n}>\delta$.
But $c+x_{n}\in c+A$ and hence we have an element of $c+A$
greater than the supremum, which is an obvious contradiction. Therefore
we have equality and we get our result!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Use parseval's identity to prove $\int_0^\infty {\sin^4t \over t^2} dt = {\pi \over 4}$ I know how to calculate
$$\int_0^\infty {\sin^4t \over t^4} dt$$
by taking the function $f(x)=1-|x|$ and it will be $\pi \over 3$.
But here the denominator is not raised to power $4$ but $2$.
How should I approach it?
| Let $f(x) := \begin{cases}-\frac14 & \text{if $0 \le x \le 2$}\\ \frac14 & \text{if $-2 \le x < 0$}\\ 0 & \text{otherwise}\end{cases}.$
\begin{align}
F\left(\omega\right) &= \int_{-\infty}^{\infty}f(x)e^{-i\omega x}\mathrm d x\\
&= -\frac14\int_{0}^2 e^{-i\omega x}\mathrm d x + \frac14\int_{-2}^0 e^{-i\omega x}\mathrm d x\\
&= \frac1{4i\omega}\left(e^{2i\omega} - 1 - 1 + e^{-2i\omega}\right)\\
&= -\frac1{i\omega}\left(\frac{e^{i\omega} - e^{-i\omega}}{2i}\right)^2\\
&= i\times\frac{1}{\omega}\sin^2\left(\omega\right)
\end{align}
By Parseval's identity:
\begin{align}
\int_{0}^\infty \frac1{\omega^2}\sin^4\left(\omega\right)\mathrm d \omega
&= \frac1{2}\int_{-\infty}^\infty\left|F(\omega)\right|^2 \mathrm d \omega\\
&= \frac1{2} \times 2\pi \int_{-\infty}^{\infty}\left|f(x)\right|^2\mathrm d x\\ &= \pi \int_{-2}^{2}\frac1{16}\mathrm d x = \frac\pi{16} \times 4 = \frac\pi 4
\end{align}
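As a numerical cross-check (my own sketch, not from the original answer), a midpoint rule on $(0,T]$ plus the tail estimate $\int_T^\infty \sin^4 t/t^2\,\mathrm dt\approx \frac{3}{8T}$ (using the mean value $3/8$ of $\sin^4$) reproduces $\pi/4$:

```python
import math

def integral(T=500.0, h=0.002):
    # midpoint rule on (0, T]; the integrand extends continuously by 0 at t = 0
    total = 0.0
    t = h / 2
    while t < T:
        s = math.sin(t)
        total += (s ** 4) / (t * t) * h
        t += h
    # crude tail estimate for t > T
    return total + 3.0 / (8.0 * T)

approx = integral()
assert abs(approx - math.pi / 4) < 1e-3
```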
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Find an $f\in L^1([0,1])\setminus L^2([0,1])$ such that $\int_E|f(x)|\,dx\leq\sqrt{|E|}$ for all Borel $E\subset [0,1]$.
Suppose $f\in L^1([0,1])$ has the property that
$$\int_E|f(x)|\,dx\leq\sqrt{|E|},\tag{$*$}$$
for every Borel $E \subseteq[0,1]$. Here $|E|$ denotes the Lebesgue measure of $E$.
*
*Show that $f \in L^{p}([0,1])$ for all $1<p<2$.
*Give an example of an $f$ satisfying $(*)$ that is not in $L^{2}([0,1])$.
I can prove $f \in L^{p}([0,1])$ for all $1<p<2$. For each $\lambda>0$, we define $E_\lambda=\{x\in[0,1]: |f(x)|\geq\lambda\}$ and $d_f(\lambda)=|E_\lambda|$. By $(*)$ we have
$$\lambda d_f(\lambda)\leq \int_{E_\lambda}|f(x)|\,dx\leq\sqrt{d_f(\lambda)},$$
hence $$d_f(\lambda)\leq \begin{cases} 1 & \text{if }\lambda\leq1,\\ \lambda^{-2} & \text{if }\lambda>1. \end{cases}$$
Therefore,
$$\int_0^1 |f(x)|^p\,dx=p\int_0^\infty \lambda^{p-1}d_f(\lambda)\,d\lambda\leq 1+p\int_1^\infty \lambda^{p-3}\,d\lambda<\infty$$
for $p\in(1,2)$.
However, I can't find an example of an $f$ satisfying $(*)$ that is not in $L^{2}([0,1])$. I guess $f(x)=\frac12x^{-1/2}$ will do the job, but I don't know how to check $(*)$ for this $f$.
Any help would be appreciated!
| Indeed, the function $f(x)=x^{-1/2}/2$ works well. Given a measurable set $E \subset [0,1]$, the goal is to prove
$$\int_E|f(x)|\,dx\leq\sqrt{|E|},\tag{$*$}$$
Given $\epsilon>0$, there exists a countable union of disjoint intervals $V=\cup_{j \ge 1} I_j \supset E$ (with $I_j$ ordered in decreasing length) such that $|V| <|E|+\epsilon$. So it suffices to prove the inequality $(*)$ for $V$ in place of $E$. Let $V_n=\cup_{j = 1}^n I_j$ be the union of the $n$ longest intervals among the $I_j$. If we prove $(*)$ for $V_n$ in place of $E$, then it will follow for $V$ by monotone convergence. But shifting an interval to the left increases the integral of $f$ over this interval, since $f$ is decreasing. Thus the integral is maximized when the intervals are all adjacent and the leftmost endpoint is 0; in that case the inequality is clear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $f^{(n)}(1)$ on $f(x)=(1+\sqrt{x})^{2n+2}$
Find $f^{(n)}(1)$ on $f(x)=(1+\sqrt{x})^{2n+2}$ .
Here is a solution by someone:
\begin{align*} f(x)&=(1+\sqrt{x})^{2n+2}=\sum_{k=0}^{2n+2}\binom{2n+2}{k}x^{\frac{k}{2}}\\ &=\sum_{k=0}^{2n+2}\binom{2n+2}{k}\sum_{j=0}^{\infty}\binom{\frac{k}{2}}{j}(x-1)^j\\ &=\sum_{j=0}^{\infty}\sum_{k=0}^{2n+2}\binom{2n+2}{k}\binom{\frac{k}{2}}{j}(x-1)^j. \end{align*}
Hence \begin{align*} f^{(n)}(1)&=n!\sum_{k=0}^{2n+2}\binom{2n+2}{k}\binom{\frac{k}{2}}{n}=n!\cdot4(n+1)^2. \end{align*}
Is it correct? How to compute $$n!\sum_{k=0}^{2n+2}\binom{2n+2}{k}\binom{\frac{k}{2}}{n}=n!\cdot4(n+1)^2?$$
| An approach using the Lagrange inversion theorem: let $f(z)$ be analytic around $z=z_0$ with $f'(z_0)\neq0$; then $w=f(z)$ has an inverse $z=g(w)$ analytic around $w=f(z_0)$ with $$g^{(n)}\big(f(z_0)\big)=\lim_{z\to z_0}\frac{d^{n-1}}{dz^{n-1}}\left(\frac{z-z_0}{f(z)-f(z_0)}\right)^n.\qquad(n>0)$$
We apply this to $f(z)=\dfrac{\sqrt{z}-1}{\sqrt{z}+1}$ and $z_0=1$, thus $g(w)=\left(\dfrac{1+w}{1-w}\right)^2$ and $$\lim_{z\to1}\frac{d^n}{dz^n}(1+\sqrt{z})^{2n+2}=g^{(n+1)}(0)=\color{blue}{4(n+1)!(n+1)}$$ since $g(w)=1+\displaystyle\frac{4w}{(1-w)^2}=1+4w\frac{d}{dw}\frac1{1-w}=1+4\sum_{n=1}^\infty nw^n$ for $|w|<1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Number of ways to send out $k$ distinct postcards to $n$ friends if we require each friends receives at least $1$ post card. This problem comes from the Lovasz Combinatorial Problems and Exercises book. I'm left rather confused for the second half of the second problem. It asks: We have $k$ distinct postcards and want to send them all to our $n$ friends (a
friend can get any number of postcards, including 0). How many ways can this
be done? What happens if we want to send at least one card to each friend?
The first half is rather simple, however, the second half has me puzzled. This is equivalent to counting the number of surjections yes? However, the answer in the back of the book is $\binom{k}{n}*n!$.
My reasoning gives me that the number of ways to do this would be:
$$n^k-\sum_{i=1}^{n}\binom{n}{i}(n-i)^{k}$$
(something like that I'm typing in a bit of a rush so forgive a mistake if there is one).
But my reasoning is that you count the number of ways to send out the postcards in any manner, and subtract out the ones where you miss some group of $i$ friends. I can't see my error or how my answer is related to the books.
| The answer in the book i.e. $\begin{pmatrix}k\\n\end{pmatrix}.n!=\frac{k!}{(k-n)!}$ represents the number of permutations of $n$ objects out of total $k$ distinct objects; which surely is the number of ways to distribute $k$ postcards among $n$ friends so that each gets exactly one. To get the correct answer, you need to further distribute the remaining $k-n$ postcards among $n$ friends without any restriction. Now this can be done in $n^{k-n}$ ways so the correct answer is the product $\frac{k!}{(k-n)!}\times n^{k-n}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Minimum value of $\sum_{n=0}^\infty\frac{\cos nx}{3^n}$? This question appeared on my test:
If the range of the expression $$\sum_{n=0}^\infty\frac{\cos nx}{3^n}$$ is $[a,b]$, then $(b-a)$ equals ...
and I am unable to find the lower limit.
I found the maximum value with the help of the infinite geometric series:
since $$\cos nx\leqslant1$$
then $$\cos nx/3^n\leqslant 1/3^n$$
so the sum of the series is less than or equal
$$1+1/3+1/3^2+\cdots$$
which is
$$1/(1-1/3) = 3/2$$
How do I find the minimum value?
| The series can be exactly evaluated by passing to the real part of the geometric series in $e^{inx}/3^n$. You get: $$\sum_{n\ge0}\frac{\cos nx}{3^n}=\frac{1}{2}+\frac{2}{5-3\cos x}$$From which the maximum value $3/2$ is clear and the minimum value $3/4$ is also clear.
That’s overkill, however. You were right to try $\cos x=1$ for the maximum. For the minimum, note that $\cos x=-1$ (another sensible thing to try) at $x=\pi$ would give a series in $\cos n\pi=(-1)^n$. This alternating series yields $3/4$. As for proving this minimum value is indeed a minimum, without calculating the series directly, you could differentiate: $$-\sum_{n\ge0}\frac{n\sin n\pi}{3^n}=0$$And differentiate again: $$-\sum_{n\ge0}\frac{n^2\cos n\pi}{3^n}>0$$By the alternating series test technique. This gives a local minimum, but you can appeal to periodicity to claim it is a global minimum. Indeed, the only points of zero derivative are when $x=\pi k$ for some integer $k$. You could check this by again using the alternating series test (magnitude of summands is strictly decreasing) although getting the details precisely right isn’t trivial.
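Numerically (my own sketch, not part of the original answer), the partial sums match the closed form $\frac12+\frac{2}{5-3\cos x}$, whose extreme values $3/4$ and $3/2$ appear near $x=\pi$ and $x=0$:

```python
import math

def series(x, terms=60):
    # partial sum; the remainder is below 3^(-60), far under float precision
    return sum(math.cos(n * x) / 3 ** n for n in range(terms))

def closed_form(x):
    return 0.5 + 2.0 / (5.0 - 3.0 * math.cos(x))

xs = [k * 0.01 for k in range(629)]  # sample [0, 2*pi)
assert all(abs(series(x) - closed_form(x)) < 1e-12 for x in xs)

vals = [closed_form(x) for x in xs]
assert abs(min(vals) - 0.75) < 1e-3  # minimum 3/4 near x = pi
assert abs(max(vals) - 1.5) < 1e-3   # maximum 3/2 at x = 0
```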
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Number of ways to choose five lottery numbers so no two are adjacent Someone stated that in a lottery draw, you have an equal chance of being 1 off of every correct number as you do of actually winning, but this is not true if you have adjacent numbers (such as 7/8). I wrote a program which iterates over all possible numbers and counts the number of outcomes where two numbers are adjacent given a top number. It would be too time-consuming to check all 50 numbers for each 5 possible selections, so I instead used lower numbers and tried to find an equation which I can use.
I've got a dataset of x and y values (where x = top number and y = number of outcomes with atleast one adjacent pair). I believe this dataset turns into a quintic function (x^5) when plotted, but I'm not sure. I have two main questions:
*
*How could I find the equation of the graph? I have tried to find the difference of differences (similar to quadratic, ie 1, 4, 9, 16 = 3, 5, 7 = 2) and it turned out that the difference after 5 differences was a constant 120. I have also tried to find the ratio of one number to the next. This leads to my next question...
*This ratio turned out to be strangely nice. When x=8, y=0. x=9,y=120. x=10,y=720. x=11,y=2520. If you divide one number by the previous, you get *6 (6/1), *3.5 (7/2) and so on, (8/3, 9/4). The fraction's denominator and numerator increase by 1 every time. Where does this come from?
Code I used (python) - https://pastebin.com/d2bAQVmV
| You are choosing five numbers between $1$ and $x$, and asking the number of ways where no two of the chosen numbers are either equal or adjacent. The answer is
$$
\binom{x-4}{5}\cdot 5!=(x-4)(x-5)(x-6)(x-7)(x-8)
$$
It is well known that, for positive integers $k$ and $x$, the number of unordered selections of $k$ distinct numbers from $\{1,\dots,x\}$ where no two selected numbers are adjacent is $\binom{x-k+1}{k}$. See the sources below for a proof. To get the number of ordered selections, you just multiply by $5!$, the number of ways to arrange five numbers, resulting in the formula above. This is a degree five polynomial in $x$, as you noticed.
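A brute-force check of the count (my own sketch, in the spirit of the OP's program):

```python
from itertools import combinations
from math import comb, factorial

def no_adjacent_unordered(x, k):
    # count k-subsets of {1..x} with no two chosen numbers adjacent
    return sum(1 for c in combinations(range(1, x + 1), k)
               if all(b - a >= 2 for a, b in zip(c, c[1:])))

for x in range(8, 14):
    assert no_adjacent_unordered(x, 5) == comb(x - 4, 5)

# multiplying by 5! reproduces the OP's data points (x, y):
assert [no_adjacent_unordered(x, 5) * factorial(5)
        for x in (8, 9, 10, 11)] == [0, 120, 720, 2520]
```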
Questions on MSE on counting non-consecutive subsets:
Combinatorial restriction on choosing adjacent objects
How many $5$ element sets can be made?
Analysis of formula for selecting $k$ elements from $n$ elements arranged in a row so that no two selected elements are consecutive
Edit: Everything in this section is only correct for lotteries where the order of the chosen numbers matters.
By the way, the statement "you have an equal chance of being 1 off of every correct number as you do of actually winning" is indeed true, if you interpret it a specific way. Suppose ticket has the numbers $(a,b,c,d,e,f)$. The probability of drawing $(a,b,c,d,e,f)$ is the same as the probability of drawing $(a+1,b+1,c+1,d+1,e+1,f+1)$.
There is a small caveat here; the "+1" must be interpreted modulo $x$. For example, if there were fifty possible lottery numbers, then you would have to say that $50+1$ wraps around back to one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4507945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the following notation common in calculus books? $\int_{0}^{\frac{\pi}{2}} (1-\sin^2\theta)\,d(\sin\theta)$ I am reading a calculus book.
$$
\int_0^{\pi/2} \cos^3\theta \,d\theta
= \int_0^{\pi/2} (1-\sin^2\theta)\,d(\sin\theta)
= \left[\sin\theta-\frac{1}{3}\sin^3\theta\right]_0^{\pi/2}
= \frac{2}{3}.
$$
Is the following notation common in calculus books?
$$\int_0^{\pi/2} (1-\sin^2\theta)\,d(\sin\theta)$$
I think the author considers $\sin\theta$ as a variable.
So, I think the following notation is better:
$$\int_0^1 (1-\sin^2\theta)\,d(\sin\theta)$$
Is the above notation uncommon in calculus books?
| American calculus books usually use such notation in the context of Riemann-Stieltjes integration, but not for the regular Riemann integral.
I know the Russian analysis books use this notation very frequently. It's a nice shorthand for substitution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $H$ be a subgroup of $\Bbb R^*$. If $\Bbb R^+ \subseteq H \subseteq \Bbb R^*$, prove that $H = \Bbb R^+$ or $H = \Bbb R^*$.
Let $H$ be a subgroup of $\Bbb R^*$, the group of nonzero real numbers under multiplication. If $\Bbb R^+ \subseteq H \subseteq \Bbb R^*$, prove that $H = \Bbb R^+$ or $H = \Bbb R^*$.
I think that $\Bbb R^+$ is supposed to mean the set of non-negative reals? I am a bit confused on what do I need to prove here? If I assume that $H \ne \Bbb R^+$ and get that $H=\Bbb R^*$, then would that prove the result?
Suppose that $H \ne \Bbb R^+$, then I only have that there exists $h \in H$ such that $h \notin \Bbb R^+$?
| $\Bbb{R}^+=\{a\in\Bbb{R}:a>0\}$.
$\Bbb{R}^+, \Bbb{R}^{\star}$ are both multiplicative groups.
If $\Bbb{R}^+\subset H\subset \Bbb{R}^{\star}$ and $H\le \Bbb{R}^{\star}$ then $H=\Bbb{R}^+$ or $H=\Bbb{R}^{\star}$
Proof: Suppose $\Bbb{R}^+\subsetneq H$.
Then $\exists a\in H\setminus \Bbb{R}^+$, i.e. $a\le 0$; and since $0\notin H$, $a<0$.
Claim : $-1\in H$.
$a<0$ implies $-a\in \Bbb{R}^+$ and hence $-\frac{1}{a}\in \Bbb{R}^+$.
Hence $-\frac{1}{a}\in H$.
Since $H\le \Bbb{R}^{\star}$ , $a\cdot-\frac{1}{a}=-1\in H$
Now it's easy to show that $\langle \Bbb{R}^+, -1\rangle =\Bbb{R}^{\star}$
(if $b<0$, then $b=(-b)\cdot(-1)$ with $-b\in \Bbb{R}^+$)
Notations:
$\le $: subgroup.
$\langle S \rangle $: subgroup generated by $S$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
proving convergence using delta-epsilon notation
Let $f:\mathbb{R} \backslash\{ 1 \} \rightarrow \mathbb{R}$ be a function such that $f(x)\rightarrow 1$ as $x \rightarrow 1$. Using $\epsilon - \delta$ and without using any Limit Rules prove that $xf(x) \rightarrow 1$ as $x \rightarrow 1$.
My approach is, for $\delta\ > 0,$ there exist $\epsilon>0$ st $\lvert x-1\rvert <\delta,$ $\lvert f(x)-1\rvert< \epsilon$.
So
$$\lvert xf(x) -1\rvert<\lvert(\delta+1)f(x)-1\rvert<\lvert2f(x)-1\rvert$$
for $\delta <1$.
Please give some hints on how to approach this, thanks!
| Hint:
Consider:
$$|xf(x)-1| = |xf(x)-x+x-1| \leq |xf(x)-x| + |x-1|$$
I've just used the triangle inequality here. So:
$$|xf(x)-1| \leq |x| \cdot |f(x)-1| + |x-1|$$
Now, to make life just a little bit easy for yourself, notice that:
$$|x| = |x-1+1| \leq |x-1| + 1$$
Hence, all in all, we can write:
$$|xf(x)-1| \leq |x-1| \cdot |f(x)-1| + |f(x)-1| + |x-1|$$
Now, can you use this as well as the facts you've been given to prove the result?
The next part will be the solution in full, for your reference. It's for reference only. You should only look at it when you've given this your best shot.
Let $\epsilon > 0$ be given. We want to show that:
$$\exists \delta > 0: \forall x \in \mathbb{R} \setminus \{1\}: |x-1| < \delta \implies |xf(x)-1| < \epsilon$$
First of all, you know that there exists a $\delta_1 > 0$:
$$|x-1| < \delta_1 \implies |f(x)-1| < \frac{1}{3}\epsilon$$
You know this because $f(x) \to 1$. Let's look at the estimate I derived in the hint above:
$$|xf(x)-1| \leq |x-1| \cdot |f(x)-1| + |f(x)-1| + |x-1|$$
It would actually be really, really nice if I could just
replace $|x-1| \cdot |f(x)-1|$ with
$1 \cdot \frac{1}{3}\epsilon$ and $|f(x)-1|$ with $\frac{1}{3}\epsilon$ and
$|x-1|$ with $\frac{1}{3} \epsilon$. So, the idea here is
this. Let's pick a $\delta > 0$ such that
$\delta = \min\{1,\delta_1,\frac{1}{3}\epsilon\}$. What this means is that:
$$\delta \leq 1 \ , \delta \leq \delta_1 , \ \delta \leq \frac{1}{3}\epsilon$$
So, now, let's assume that $|x-1| < \delta$. That means that $|x-1| < 1$ and $|x-1| < \delta_1$ and $|x-1| < \frac{1}{3}\epsilon$. So, I can conveniently pick and choose where I want to use these inequalities in my final estimate. In particular:
$$|xf(x)-1| \leq |x-1| \cdot |f(x)-1| + |f(x)-1| + |x-1| < 1 \cdot \frac{1}{3} \epsilon + \frac{1}{3}\epsilon + \frac{1}{3}\epsilon = \epsilon$$
But that proves what we wanted so we're done :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Solve second-order ODE using gamma functions I would like to solve the ODE
$$\left(x-k_1\right)\frac{dY}{dx} + \frac{1}{a}x^2\frac{d^2 Y}{d x^2}-Y+k_2=0$$
with boundary conditions $Y\left(k_3\right)=0$ and $\frac{d Y}{d x}\left(\infty\right)=1-b$ (meaning if $x \to \infty$, the derivative approaches $1-b$). Not sure if it helps, but we also have $0\leq b \leq 1$, $a \geq 0$, and $k_2 + k_3 \leq k_1$. $x$ and $Y$ should also both be $\geq 0$.
I came across this equation years ago and remember that the solution involved gamma functions in some form, but otherwise I'm not sure where to start (or even what term to Google).
Would anyone know how to find the solution here?
| You can solve this equation assuming it is a function of $x$ and $t$, but you will find that when constraining to the boundary conditions it is a function of $x$ only, so I'll treat it like an ODE.
For your question I will use the following trick: given a particular solution $u(x)$ to the homogeneous equation
\begin{align}
u''+f_1(x)u'+f_0(x)u=0,
\end{align}
the ODE
\begin{align}
y''+f_1(x)y'+f_0(x)y=f(x)
\end{align}
is exact under the integrating factor $e^{F_1}u$, where $F_1(x)=\smallint f_1(x)\mathrm dx$. Note that there is a term of the form $xy'-y$, so you should try to find a linear solution to the homogeneous equation. We find that $Y_0=x-k_1$ works. So for your ODE
\begin{align}
Y''+a\frac{x-k_1}{x^2}Y'-\frac{a}{x^2}Y+\frac{ak_2}{x^2}=0,
\end{align}
multiply by $(x-k_1)x^ae^{ak_1/x}$ to arrive at
\begin{align}
\left((x-k_1)x^ae^{ak_1/x}Y'-x^ae^{ak_1/x}Y\right)'+ak_2(x-k_1)x^{a-2}e^{ak_1/x}=0.
\end{align}
Integrating and rearranging we arrive at
\begin{align}
Y'-\frac{1}{x-k_1}Y+\frac{k_2}{x-k_1}=\frac{c_1}{(x-k_1)x^ae^{ak_1/x}}.
\end{align}
This ODE is exact under the integrating factor $1/(x-k_1)$, multiplying, integrating, and rearranging we arrive at
\begin{align}
Y(x)=k_2+(x-k_1)\left(c_2+c_1\mathcal I(x)\right);\quad\left[\mathcal I(x)=\int\frac{\mathrm dx}{(x-k_1)^2x^ae^{ak_1/x}}\right].
\end{align}
We see that we recover the particular solution $Y_0=k_2+c(x-k_1)$ here as well. Now to fit the solution to the constraints. Evaluating $Y(x)$ at $k_3$ and letting $x\rightarrow\infty$ in $Y'(x)$ we get the equations
\begin{align}
\mathcal I(k_3)c_1+c_2&=\frac{k_2}{k_1-k_3},\\
\mathcal I(\infty)c_1+c_2&=1-b;\quad\left[\mathcal I(\infty)=\lim_{x\rightarrow\infty}\mathcal I(x)\right].
\end{align}
Which have the solutions
\begin{align}
c_1&=\frac{1}{\mathcal I(k_3)-\mathcal I(\infty)}\left(\frac{k_2}{k_1-k_3}+b-1\right),\\
c_2&=\frac{1}{\mathcal I(k_3)-\mathcal I(\infty)}\left((1-b)\mathcal I(k_3)-\frac{k_2}{k_1-k_3}\mathcal I(\infty)\right).
\end{align}
So our solution is
\begin{align}
Y(x)=k_2+\frac{x-k_1}{\mathcal I(k_3)-\mathcal I(\infty)}\left((1-b)(\mathcal I(k_3)-\mathcal I(x))+\frac{k_2}{k_1-k_3}(\mathcal I(x)-\mathcal I(\infty))\right);\\\\\left[\mathcal I(x)=\int\frac{\mathrm dx}{(x-k_1)^2x^ae^{ak_1/x}}\right].
\end{align}
What I've been ignoring is the singularity in $\mathcal I(x)$. If the domain of $x$ includes $k_1$, the integral is divergent. This is why I ask if there are more constraints on your $k$'s! If we restrict our $x$ domain to $[k_3,\infty)$ (as your boundary conditions imply), then we require that $k_3>k_1$ for $x>k_1$.
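As a numerical sanity check (my own addition, with hypothetical parameter values), one can verify directly from the original equation $(x-k_1)Y'+\frac{1}{a}x^2Y''-Y+k_2=0$ that $Y_0=x-k_1$ solves the homogeneous problem and that the linear family $Y=k_2+c\,(x-k_1)$ solves the full equation:

```python
# hypothetical sample parameters for illustration
a, k1, k2, c = 2.0, 1.0, 0.5, -3.0

def lhs(x, Y, Yp, Ypp):
    # left-hand side of the ODE: (x - k1) Y' + x^2 Y'' / a - Y + k2
    return (x - k1) * Yp + x * x * Ypp / a - Y + k2

for x in (2.0, 5.0, 10.0):
    # homogeneous part vanishes for Y0 = x - k1 (lhs reduces to the source k2)
    assert abs(lhs(x, x - k1, 1.0, 0.0) - k2) < 1e-12
    # full equation: Y = k2 + c (x - k1), so Y' = c and Y'' = 0, gives lhs = 0
    Y = k2 + c * (x - k1)
    assert abs(lhs(x, Y, c, 0.0)) < 1e-12
```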
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are all minimal closed sets boxes? I'll set up a couple of (standard) definitions, then ask my question.
Definition: If $\mathbb X$ and $\mathbb Y$ are sets and $X\subseteq \mathbb X$ and $Y\subseteq \mathbb Y$, call $X{\times}Y=\{(x,y)\mid x\in X,\ y\in Y\}$ a box, and call $X$ and $Y$ the sides of the box $X{\times}Y$.
Definition: For topologies $\mathbb T_1$ and $\mathbb T_2$, the product topology $\mathbb T_1\times\mathbb T_2$ has open sets generated by boxes of open sets from $\mathbb T_1$ and $\mathbb T_2$.
Definition: Given a topology $\mathbb T$ and a point $p\in\mathbb T$, the closure of $p$, written $|p|$, is the least closed set containing $p$.
I believe it is a fact that in the product topology, the box $C_1{\times}C_2$ is closed in $\mathbb T_1\times\mathbb T_2$ if and only if its sides $C_1$ and $C_2$ are closed in $\mathbb T_1$ and $\mathbb T_2$ respectively.
My question: Given points $p_1\in\mathbb T_1$ and $p_2\in\mathbb T_2$, is the closure $|(p_1,p_2)|$ equal to the box of the closures $|p_1|{\times}|p_2|$? It's fairly clear that $|(p_1,p_2)|\subseteq |p_1|{\times}|p_2|$, but it's less immediately evident to me whether the reverse implication must also hold, so that this would be an equality.
Proof or counterexample very welcome. Thanks in advance.
| Thanks to @Ulli for noting that this is a known result and for the precise reference.
I'll include here what seems (to me) a clean direct argument, for reference.
To prove $|(p_1,p_2)|=|p_1|{\times}|p_2|$ it would suffice to prove equality of the complements.
Say that a set $X$ avoids $x$ when $x\not\in X$, and note that $X{\times}Y$ avoids $(x,y)$ if and only if $X$ avoids $x$ or $Y$ avoids $y$.
Then the complement of $|(p_1,p_2)|$ is the union of open boxes $O_1{\times}O_2$ that avoid $(p_1,p_2)$; which is the union of open boxes $O_1{\times}O_2$ such that $O_1$ avoids $p_1$ or $O_2$ avoids $p_2$. A point $(x,y)$ lies in such a box exactly when some open set avoiding $p_1$ contains $x$ or some open set avoiding $p_2$ contains $y$, i.e. exactly when $x\not\in|p_1|$ or $y\not\in|p_2|$; so this union is the complement of $|p_1|{\times}|p_2|$.
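The identity can also be machine-checked on small finite spaces (my own sketch, not part of the original answer): compute point closures directly from the open sets and compare.

```python
from itertools import product

def closures(points, opens):
    # closure of {p} = complement of the union of all open sets avoiding p
    return {p: frozenset(points) - set().union(*[set(O) for O in opens
                                                 if p not in O])
            for p in points}

def box_opens(opens_x, opens_y):
    # open boxes form a base of the product topology; a base suffices here,
    # since any open avoiding a point is a union of basic boxes avoiding it
    return [frozenset(product(O1, O2)) for O1 in opens_x for O2 in opens_y]

X, TX = [0, 1, 2], [frozenset(), frozenset({0}), frozenset({0, 1}),
                    frozenset({0, 1, 2})]  # a non-Hausdorff topology
Y, TY = [0, 1], [frozenset(), frozenset({1}), frozenset({0, 1})]  # Sierpinski

clX, clY = closures(X, TX), closures(Y, TY)
clXY = closures(list(product(X, Y)), box_opens(TX, TY))

for p1, p2 in product(X, Y):
    assert clXY[(p1, p2)] == frozenset(product(clX[p1], clY[p2]))
```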
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of Total Variance in Statistical Inference In book Statistical Inference by Casella, the theorem 4.4.7 (Conditional variance identity), author proves one expectation to be 0 (shown in green rect).
However, I followed the same idea but got different result. I want to know what's wrong with my proof (below).
Thank you!
My derivation
Consider
\begin{align*}
E_X\big(
\big[ X-E[X|Y] \big] \big[ E[X|Y]-EX \big]
\big)
\end{align*}
Note that the expectation is calculated under the distribution of $X$. Also note that $E[X|Y] = g(Y)$ and $EX=\text{constant}$. So we have
\begin{align*}
E_X\big(
\big[ X-E[X|Y] \big] \big[ E[X|Y]-EX \big]
\big)
&=
\big[ E[X|Y]-EX \big]
E_X
\big[ X-E[X|Y] \big] \\
&=
\big[ E[X|Y]-EX \big]
\big[ EX-E[X|Y] \big] \\
&=
-\big[ E[X|Y]-EX \big]^2 \\
\end{align*}
I took $\big[ E[X|Y]-EX \big]$ out because it's a function of $Y$:
$$E_X[h(X) g(Y)] = \int_x [h(X) g(Y)] f_X(x) dx = g(Y) \int_x h(X) f_X(x) dx = g(Y) E_X[h(X)]$$
Statistical Inference by Casella
| I am thinking this.
Step 1
Consider $\text{Var} X$, the author writes:
\begin{align*}
\text{Var} X =
E \big( \big[ X - EX \big]^2 \big) =
E \big( \big[ X - E(X|Y) + E(X|Y) - EX \big]^2 \big)
\end{align*}
The first $E$ is $E_X$. The second $E$ is not $E_X$, but $E_{X, Y}$.
Step 2
When we say $EX$, we $E_X X$. However, when we say $E[X-E[X|Y]]$, we actually mean $E_{X, Y}[X-E[X|Y]]$.
To see this, note that $E[X|Y]$ is $g(Y)$, and $X-g(Y)$ is $W$, which is just another random variable.
Taking expectation of $W$, we should expect a number, instead of a function of $Y$. That is, there should not be any randomness.
Think about it in this way. Before, you somehow know X's distribution. So you can calculate $EX$. However, later you realize that X actually depends on Y, and you have a new model/understanding on X, which is: when Y=$y_1$, X has one distribution, and when Y=$y_2$, X has one distribution and so on. Then you can also calculate $EX$, but this time, $EX = E_Y E_{X|Y}[X|Y]$.
Maybe the following notation is clearer:
\begin{align*}
E[X-E[X|Y]] &= E[X-g(Y)] \\
&= E_{X, Y}[X-g(Y)] \\
&= \int_x \int_y (x-g(y)) f_{X, Y}(x, y) dy dx \\
\end{align*}
You can see above we should really consider the joint distribution.
Now,
\begin{align*}
E_{X, Y}[X-g(Y)]
&= E_Y E_{(X, Y)|Y}[X-g(Y) | Y] \\
&= E_Y E_{X|Y}[X-g(Y) | Y] \\
&= E_Y E_{X|Y}[X-E[X|Y] | Y] \\
&= E_Y \big[
E_{X|Y}[X|Y]- E[X|Y]
\big] \\
&= 0
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
is it true? $\operatorname{round}(\varphi^{p_n}) \equiv 1 \!\!\pmod{p_n} $ where $p_n$ is the $n$th prime number and $\varphi=1.618033...$ The number $\operatorname{round}(\varphi^{n})$ is called the $n$-th Lucas number.
These numbers have many interesting features. One of them is the relationship with prime numbers, for example this article.
In this regard, I happened to see that :
$$\operatorname{round}(\varphi^{n})\equiv 1\!\!\!\!\pmod{n}$$
if and only if $n$ is prime number.
I want to know: is this proven? Or does it have some exceptions among larger numbers?
| Let $L_n$ be the $n$-th Lucas number. We have $$\text{round}(\varphi^n) = L_n$$ for $n > 2$ (see here). We indeed have
$$L_n \equiv 1 \!\!\pmod n$$
if $n$ is prime (see here), but there are also composite numbers $n$ for which above congruence is true, and these are the Fibonacci Pseudoprimes.
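A quick numerical check of both directions (the helper below is mine; it uses the identity $\operatorname{tr} M^n = F_{n+1}+F_{n-1} = L_n$ for $M=\left(\begin{smallmatrix}1&1\\1&0\end{smallmatrix}\right)$, reduced mod $m$ throughout). The composite $705 = 3\cdot5\cdot47$ is reported as the smallest Lucas (Bruckman) pseudoprime of this kind.

```python
def lucas_mod(n, m):
    """L_n mod m, via trace([[1,1],[1,0]]^n) = F_{n+1} + F_{n-1} = L_n."""
    def mul(A, B):
        return [[(A[i][0]*B[0][j] + A[i][1]*B[1][j]) % m for j in (0, 1)]
                for i in (0, 1)]
    R, Q, e = [[1, 0], [0, 1]], [[1, 1], [1, 0]], n
    while e:                       # binary exponentiation
        if e & 1:
            R = mul(R, Q)
        Q = mul(Q, Q)
        e >>= 1
    return (R[0][0] + R[1][1]) % m

print(lucas_mod(7, 7), lucas_mod(8, 8))   # 1 7  (7 prime passes; L_8 = 47, so 8 fails)
print(lucas_mod(705, 705))                # 1, even though 705 = 3*5*47 is composite
```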
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4508930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Motivation in considering abelian groups in modules I am learning about modules and I did not understand the motivation behind requiring an $R$-module $M$ to be an abelian group in the definition of a module.
What difference does it make if we consider $M$ as a non-abelian group? Isn't considering only abelian groups much more restrictive than considering an arbitrary group?
| Let $G$ be a group, $R$ a ring and assume there is an "$R$-module structure on $G$" (something satisfying the usual axioms except we don't require $G$ to be abelian). Then, let $x,y\in G$ be arbitrary and consider that $2.(xy)=(2.x)(2.y)=(1.x)(1.x)(1.y)(1.y)=x^2y^2$ on one hand and $2.(xy)=(1.xy)(1.xy)=(1.x)(1.y)(1.x)(1.y)=xyxy$ on the other hand (we use the distributive law for the product $xy$ and the sum $2=1+1$ in both orders). This implies $xy=yx$. Since $x,y$ were arbitrary, it follows that $G$ is abelian. Thus, there was no loss of generality in requiring $G$ to be abelian to begin with.
Here's a more conceptual explanation of what just happened. The remarkable property of an abelian group $A$ is that the pointwise sum of two homomorphisms is again a homomorphism, which implies that the endomorphisms of $A$ form a ring $\mathrm{End}(A)$. It's an exercise to check that an $R$-module structure on $A$ is the same thing as a ring homomorphism $R\rightarrow\mathrm{End}(A)$. From this perspective, it is only natural to require an abelian group in the definition of an $R$-module. Now, if $G$ is just a group, we still have a multiplicative monoid $\mathrm{End}(G)$ and an $R$-module structure on $G$ is the same thing as a multiplicative monoid homomorphism $R\rightarrow\mathrm{End}(G)$ such that the addition in $R$ gets transformed into the pointwise multiplication in $\mathrm{End}(G)$ (in particular, the pointwise multiplication of two endomorphisms in the image has to be an endomorphism again). The image of this map is then a submonoid of $\mathrm{End}(G)$ that is closed under pointwise multiplication and forms a ring with pointwise multiplication as addition. In particular, the pointwise multiplication of $\mathrm{id}_G$ with itself, i.e. the squaring map, is an endomorphism. The above argument is just the good old exercise that if $G\rightarrow G,\,x\mapsto x^2$ is a homomorphism, then $G$ is abelian.
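The "good old exercise" at the end can be checked by brute force on the smallest non-abelian group, $S_3$ (this snippet is only an illustration; the names in it are mine):

```python
from itertools import permutations

def compose(p, q):                    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))     # symmetric group on 3 letters, non-abelian

# if x -> x^2 were a homomorphism, (xy)^2 = x^2 y^2 would hold for all x, y;
# search for pairs where it fails:
bad = [(x, y) for x in S3 for y in S3
       if compose(compose(x, y), compose(x, y)) != compose(compose(x, x), compose(y, y))]
print(len(bad) > 0)   # True: squaring is not a homomorphism on S3
```

As the argument above predicts, exactly the non-commuting pairs fail, so no $R$-module structure can live on a non-abelian group.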
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
$|z_1+…+z_n|=n$ if and only if all the unit $z_i$ are equal Let $z_1,z_2,…,z_n$ be complex numbers of unit length. I want to prove that if the modulus of their sum $z$ is $n$, then they are equal.
My solution goes as follows. On the contrary, assume at least one pair $z_i,z_j$ are distinct so that the angle $\theta_{i,j}$ between them is not $0$. We have $z.z=n^2$ by hypothesis, where . is the dot product. On the other hand, $z.z=n+2\sum_{i\lt j}\cos\theta_{i,j}\lt n+2\binom {n}{2}=n^2$, a contradiction.
Is my solution correct? Is there a more direct proof?
$$\left| z_1+z_2+\cdots+z_n\right|\leqslant \left| z_1\right| + \left|z_2 \right|+ \cdots+\left|z_n \right| = n,$$ with equality holding exactly when the vectors $z_1,z_2,\dots,z_n$ are parallel (nonnegative real multiples of one another). Since equality holds here, the vectors are parallel, and since they all have the same modulus, they are equal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
If $x+\frac{1}{x}=\sqrt2$, then find the value of $x^{2022}+\frac{1}{x^{2022}}$? It is a question from a mathematical olympiad. Kindly help me solve it!
I tried a bit. I am sharing my attempt with you:
•$x+\frac{1}{x}=\sqrt{2}$
•$x^2+1=x\sqrt{2}$
•$x^2-x\sqrt{2}+1=0$
so, $x=\frac{\sqrt{2}}{2}+\frac{i\sqrt{2}}{2}$ or $\frac{\sqrt{2}}{2}-\frac{i\sqrt{2}}{2}$ and
$\frac{1}{x}=\frac{\sqrt{2}}{2}-\frac{i\sqrt{2}}{2}$ or $\frac{\sqrt{2}}{2}+\frac{i\sqrt{2}}{2}$
How can I find $x^{2022}+\frac{1}{x^{2022}}$?
| The solutions you got are (primitive) eighth roots of unity, $\zeta_8=e^{2\pi i/8},\dfrac 1{\zeta_8}=\bar {\zeta _8}$.
Now $$\zeta _8^{2022}=(\zeta _8^8)^{252}\cdot \zeta _8^6=1\cdot \zeta _8^6=\zeta_8^6.$$ And $\dfrac 1{\zeta_8^{2022}}=\zeta_8^{-6}$.
So $$\zeta_8^6+\zeta_8^{-6}=-i+i=0$$
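A quick floating-point check of this computation, using $x=e^{i\pi/4}$ (one of the two roots found in the question):

```python
import cmath

x = cmath.exp(1j * cmath.pi / 4)            # x = (sqrt(2) + i*sqrt(2))/2
print(abs(x + 1/x - 2**0.5))                # ~0: x satisfies x + 1/x = sqrt(2)

value = x**2022 + x**(-2022)
print(abs(value))                           # ~0, matching zeta^6 + zeta^{-6} = -i + i
```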
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Reference for a characterization of conditional expectation Answers in these posts
*
*Let $X,Y$ be two real valued random variables. What is the joint density of the pair $(X, X)$ and how do we calculate $E[h(X,Y)|X=x]$?
*Conditional density of Sum of two independent and continuous random variables
use the following fact:
recall that $E(h(X,Y)\mid X)=u(X)$ for some measurable function $u$ if and only if $E(h(X,Y)v(X))=E(u(X)v(X))$ for every bounded measurable function $v$
(Starting from the factorization lemma, which characterizes the conditional expectation a.s.,)
I believe one direction is the a.s.-point-wise approximation of measurable functions together wth the dominated convergence theorem, and the other direction is indicator functions being bounded.
Is this written somewhere? ("Recall" sounds like it is, but I did not know this result and I had to think a bit.)
| ($\Rightarrow$). Suppose $E[h(X,Y)|X]=u(X)$ for some measurable $u$. Since $v$ is assumed to be bounded and measurable, then $v(X)$ is integrable and $\sigma(v(X))\subseteq \sigma(X)$ so
$$\begin{aligned}E[h(X,Y)v(X)]&=E[E[h(X,Y)|X]v(X)]=\\
&=E[u(X)v(X)]
\end{aligned}$$
($\Leftarrow$). Let $A\in \mathscr{B}(\mathbb{R})$. Choose $v(x)=\mathbf{1}_A(x)$, which is bounded and measurable. Then $\mathbf{1}_A(X(\omega))=\mathbf{1}_{\{X\in A\}}(\omega)$. Recall $\sigma(X)=X^{-1}(\mathscr{B}(\mathbb{R}))$ so that if $C\in \sigma(X)$ then $C=X^{-1}(A)$ for some Borel $A$. By assumption
$$E[h(X,Y)\mathbf{1}_{X^{-1}(A)}]=E[u(X)\mathbf{1}_{X^{-1}(A)}],\,\forall A \in \mathscr{B}(\mathbb{R})$$
but this means $E[h(X,Y)|X]=u(X)$, $P$-a.e.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How can we show that the time shift $t\mapsto x(s+\;\cdot\;)$ is measurable? This is a question of great importance and hence there should be an answer in any textbook on stochastic processes. However, I neither found it nor I know how to approach this:
Let $(E,\mathcal E)$ be a measurable space and $s\ge0$. Moreover, let $$\tau_s:[0,\infty)\to[0,\infty)\;\;\;t\mapsto s+t$$ and $$\theta_s:E^{[0,\:\infty)}\to E^{[0,\:\infty)}\;,\;\;\;x\mapsto x\circ\tau_s.$$
How do we show that $\theta_s$ is $\left(\mathcal E^{\otimes[0,\:\infty)},\mathcal E^{\otimes[0,\:\infty)}\right)$-measurable?
Moreover, what does happen if $E$ is a metric space and we restrict $\theta_s$ to
*
*the Skorohod space $D([0,\infty),E)$ of càdlàg functions from $[0,\infty)\to E$ equipped with the Skorohod metric?
*the space of continuous functions equipped with the supremum norm.
| A map $(F, \mathcal{F}) \to (E^{[0, \infty)}, \mathcal{E}^{\otimes [0, \infty)})$ into $E^{[0, \infty)}$ is $\mathcal{F}/\mathcal{E}^{\otimes[0, \infty)}$-measurable if and only if all of its (all $t \geq 0$) compositions with the canonical projections
$$
\pi_t \colon E^{[0, \infty)} \to E,
\pi_t(x) = x(t),
$$ are $\mathcal{F}/\mathcal{E}$ measurable.
That's more or less the definition of the product sigma algebra.
With your map $\theta_s$ it is
$$
(\pi_t \circ \theta_s)(x) = \pi_t(x(\cdot + s)) = x(t+s) = \pi_{t+s}(x),
$$
i.e.
$$
\pi_t \circ \theta_s = \pi_{t+s}
$$ a canonical projection again.
Therefore $\pi_t \circ \theta_s$ is measurable for all $t, s \geq 0$.
This shows the measurability of $\theta_s$ as map $(E^{[0, \infty)}, \mathcal{E}^{\otimes [0, \infty)}) \to (E^{[0, \infty)}, \mathcal{E}^{\otimes [0, \infty)})$.
For the Skorokhod space and the space of continuous functions the same proof works because the corresponding Borel sigma algebras are generated by the canonical projections too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the limit $\lim\limits_{n\rightarrow \infty}\int_{0}^{\pi}\left|\sin(x)-\sin(2nx)\right|\text{ d}x$
$$\lim_{n\rightarrow \infty}\int_{0}^{\pi}\left|\sin(x)-\sin(2nx)\right|\mathrm d x$$
I plugged large values of $n$ into a calculator to estimate the answer,
and then figured out that the answer seems to be $\frac{8}{\pi}$, but
unfortunately I don't have any clue how to prove it.
Please help.
| \begin{align}
\int_0^\pi \left|\sin x - \sin(2nx)\right|\mathrm d x &= \frac1{2n}\int_{0}^{2n\pi} \left|\sin\left(\frac{x}{2n}\right) - \sin(x)\right|\mathrm dx\\
&= \frac{1}{2n} \sum_{k=0}^{n-1}\int_{2k\pi}^{2(k+1)\pi}\left|\sin\left(\frac{x}{2n}\right) - \sin(x)\right|\mathrm dx\\
&= \frac{1}{2n}\sum_{k=0}^{n-1}\int_0^{2\pi}\left|\sin(x) - \sin\left(\frac{x+2k\pi}{2n}\right)\right|\mathrm d x\\
&\underset {n\to \infty} \to \frac1{4\pi} \int_0^{2\pi}\int_{0}^{2\pi}\left|\sin x - \sin y\right|\mathrm dx\,\mathrm dy\\
&= \frac1{2\pi} \int_0^{\pi}\int_{0}^{\pi}\left(\left|\sin x - \sin y\right| + \left|\sin x + \sin y\right|\right)\mathrm dx\,\mathrm dy\\
&= \frac2{\pi} \int_0^{\frac\pi2}\int_{0}^{\frac\pi2}\left(\left|\sin x - \sin y\right| + \left|\sin x + \sin y\right|\right)\mathrm dx\,\mathrm dy\\
&= \frac4{\pi} \int_0^{\frac\pi2}\left(\int_{0}^{y}\sin y\,\mathrm dx + \int_{y}^{\frac\pi 2}\sin x\,\mathrm d x\right)\mathrm dy\\
&= \frac4\pi\int_0^{\frac\pi2} \left(y\sin y + \cos y\right)\mathrm d y\\
&= \frac4\pi \left[-y\cos y + 2\sin y\right]_{0}^{\frac\pi2}\\
&= \frac8\pi
\end{align}
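The limit can also be checked numerically with a crude midpoint rule (the step count below is an arbitrary choice of mine, taken fine enough to resolve the fast oscillation of $\sin 2nx$):

```python
from math import sin, pi

def I(n, steps=400_000):
    """Midpoint rule for the integral of |sin x - sin(2nx)| over [0, pi]."""
    h = pi / steps
    return h * sum(abs(sin((k + 0.5) * h) - sin(2 * n * (k + 0.5) * h))
                   for k in range(steps))

approx = I(200)
print(approx, 8 / pi)   # approx is close to 8/pi = 2.5464...
```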
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
A function with two center of symmetry - Prove that there does not exist an irrational number $q$ such that $f(ql-x)=f(ql+x)$. I would like to solve the following question.
Let $\alpha,\beta(\alpha\neq \beta)$ be real numbers.
Suppose a differentiable nonconstant function
$f:\mathbb{R}\to\mathbb{R}$ satisfies
$f(\alpha+x)=f(\alpha-x),f(\beta+x)=f(\beta-x)$ for any $x\in \mathbb{R}$. Prove the following.
(1)$f$ is a periodic function.
(2)Let $l$ be the smallest period of $f$. Prove that there does not
exist an irrational number $q$ such that $f(ql-x)=f(ql+x)$.
My attempt
(1) is solved in Functions from $\mathbb{R} \rightarrow \mathbb{R}$ with 2 centers of symmetry. (Mirror symmetry). Period is $2(\beta-\alpha)$.
I am not sure about (2). I tried to prove it with proof by contradiction. If I could find a period of the form $rl$ where $r$ is an irrational number, then by A real continuous periodic function with two incommensurate periods is constant. we have a contradiction since $f$ is nonconstant. However, I could not find such $r$.
By similar method as in (1), we can prove that $2(ql-\alpha)$ or $2(ql-\beta)$ are periods of $f$. However, I am not sure how to proceed.
| Proposition $(2)$ is false.
Suppose $l=1$. If either of $\alpha,\beta$ is irrational, the proposition is false, since you can take $q=\alpha$ or $q=\beta$.
This is the case for $f(x)=\cos{\left(2\pi(x-\alpha)\right)}$ where $\alpha$ is any irrational number (with second center $\beta=\alpha+\tfrac12$).
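A numerical check of this counterexample with $\alpha=\sqrt2$ (the sample points below are arbitrary): both symmetry centers and the period $l=1$ hold, while $q=\alpha$ is irrational.

```python
from math import cos, pi, sqrt

alpha = sqrt(2)                    # an irrational center of symmetry
beta = alpha + 0.5                 # the second center; period l = 2*(beta - alpha) = 1
f = lambda x: cos(2 * pi * (x - alpha))

checks = [(abs(f(alpha + x) - f(alpha - x)),
           abs(f(beta + x) - f(beta - x)),
           abs(f(x + 1) - f(x)))
          for x in (0.1, 0.7, 2.3)]
print(max(max(c) for c in checks))   # ~0 up to rounding
```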
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4509899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do we need the wedge product for $dx\,dy$ in polar coordinates when $dx$ and $dy$ are perpendicular to each other? Why can't we convert $dx$ and $dy$ to polar coordinates and multiply them outright to get the area element? By multiply I mean taking $x=r\cos\theta$ and $y=\dots$ and then taking differentials $dx$ and $dy$ and multiplying them. Answers here point out that the wedge is supposed to be used, but this is just a rectangle, so do we still need it, and why? Thank you.
| As was pointed out in the comments to this question the multiplication of numbers and of differential forms are different. E.g. number multiplication obeys the commutativity rule $AB=BA$, while multiplication of differentials does not in general. That is why we use the $\wedge$-sign to indicate this different nature of multiplication.
Differentials have an anticommutativity rule: $dx\wedge dy=-dy\wedge dx$. Here is a conceptual way to understand this. Differentials like $dx$ with one component are used in measuring length, differentials like $dx\wedge dy$ with two components are relevant to measuring area. If you swap $dy\wedge dx$ then you change the "orientation" of the area in question and the area is measured with negative weight.
This comes in handy if you think about very elementary situations: if you have an L-shape that is obtained by removing a rectangle from a larger one at one of the corners, you want to compute the area of the L-shape as the difference of the areas of the two rectangles, i.e. the sum of the two areas but with the area of the smaller rectangle carrying negative weight. The anticommutativity of the $\wedge$-product helps in such situations and is useful there. Because of this, multiplication of differentials is different in nature from multiplication of numbers.
If you want to see how $\wedge$-products work out computationally in an explicit example you can go to the link provided by peek-a-boo or wikipedia.
You can also compare with multiplication of matrices or of quaternions: for these commutativity doesn't hold in general either. However the motivation and details why and how commutativity is violated in these cases is different from differentials.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to find the intersection of two circles on a sphere? I have two vectors $a, b \in \mathbb{R}^3$ which define points on the unit sphere, so $|a| = |b| = 1$. I also have two angles $\alpha, \beta \in [0, \frac{\pi}{2})$.
Taken together, $(a, \alpha)$ defines a circle $A$ of all vectors on the unit sphere that make an angle $\alpha$ with the center vector $a$, and analogously for $(b, \beta)$:
$$A = \{ p \in \mathbb{R}^3 \mid p \cdot a = \cos \alpha \land |p| = 1 \}$$
$$B = \{ p \in \mathbb{R}^3 \mid p \cdot b = \cos \beta \land |p| = 1 \}$$
Assuming that these circles intersect in exactly two points, how do I find these points?
The equations are quite simple:
$$a \cdot p = \cos \alpha$$
$$b \cdot p = \cos \beta$$
$$|p| = 1$$
I could try decomposing these into their individual coordinates and solving the system by hand, but I'm thinking there must be a more elegant approach involving (mostly) vector operations. Probably I should parameterize $p$ cleverly as a function of some $x$, such that I get a quadratic equation in $x$, because we have at most 2 solutions in non-degenerate cases.
| Found a way!
Consider point $q$, halfway between both solution points $p_1$ and $p_2$. The great circle defined by $p, q$ is perpendicular to the one defined by $a, b$.
We can find $q$ using the Pythagorean theorem on a sphere: $\cos c = \cos a \cos b$, where $a, b$ are the legs and $c$ the hypotenuse of a spherical right triangle. We apply it to triangle $a, q, p$ (right-angled at $q$) to find:
$$\cos aq \cos pq = \cos \alpha$$
Since the dot product gives the cosine of the angle:
$$(a \cdot q)(p \cdot q) = \cos \alpha$$
$$p \cdot q = \frac{\cos \alpha}{a \cdot q}$$
And the same for triangle $b, q, p$:
$$p \cdot q = \frac{\cos \beta}{b \cdot q}$$
Both are equal to $p \cdot q$, so we find:
$$(a \cdot q) \cos \beta = (b \cdot q) \cos \alpha$$
Rearrange the terms:
$$q \cdot (a \cos \beta - b \cos \alpha) = 0$$
So $q$ is perpendicular to the vector $a \cos \beta - b \cos \alpha$. And because $q$ is on the great circle defined by $a, b$, it is also perpendicular to $a \times b$, so we can find $q$ by crossing those two vectors:
$$q = \mathrm{normalize}((a \times b) \times (a \cos \beta - b \cos \alpha))$$
Now all we need to do is rotate $q$ perpendicular to the great circle $a, b$ towards $p$. The rotation axis $r$ is on the great circle $a, b$, so perpendicular to $a \times b$, and also perpendicular to $q$:
$$r = (a \times b) \times q$$
What is the angle $\rho$ over which we need to rotate $q$ to find $p$? We already have two nice equations for $p \cdot q$, so we just take the inverse cosine:
$$\rho = \arccos(p \cdot q) = \arccos\left(\frac{\cos \alpha}{a \cdot q}\right)$$
Finally, rotate $q$ around $r$ over an angle of $\rho$ or $-\rho$ to find our points $p$.
P.S. I'm doing this in performance-sensitive software, so I'm open to suggestions on how to do it with fewer calculations. Also, $\alpha$ and $\beta$ are rather small, leading to some loss of precision with all the cosines; ideas on how to avoid that are also welcome.
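For what it's worth, here is a sketch of the whole construction in plain Python (function names are mine, and it assumes the circles really do intersect, i.e. $\cos\alpha/(a\cdot q)\in[-1,1]$). The final rotation of $q$ about $r$ simplifies to $p = q\cos\rho \pm n\sin\rho$ with $n = \mathrm{normalize}(a\times b)$, since $n$ is a unit vector perpendicular to both $q$ and $r$:

```python
from math import sin, cos, acos, sqrt

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return u[0]*v[0] + u[1]*v[1] + u[2]*v[2]

def normalize(u):
    n = sqrt(dot(u, u))
    return (u[0]/n, u[1]/n, u[2]/n)

def circle_intersections(a, b, alpha, beta):
    n = normalize(cross(a, b))                   # pole of the great circle through a, b
    w = tuple(ai*cos(beta) - bi*cos(alpha) for ai, bi in zip(a, b))
    q = normalize(cross(cross(a, b), w))         # midpoint q of the two solutions
    if dot(a, q) < 0:                            # pick the branch with a.q > 0
        q = tuple(-c for c in q)
    rho = acos(cos(alpha) / dot(a, q))           # assumes the circles intersect
    # rotating q about r = (a x b) x q by +/-rho reduces to q*cos(rho) +/- n*sin(rho)
    return [tuple(qi*cos(rho) + s*ni*sin(rho) for qi, ni in zip(q, n))
            for s in (+1, -1)]

a, b = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
p1, p2 = circle_intersections(a, b, 1.2, 0.9)
print(dot(p1, a), cos(1.2))   # both ~0.3624: p lies on circle A
```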
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$X$ is a minimal separating set iff every vertex in $X$ has a neighbor in the components of $G-X$ containing $u$ and in the component containing $v$ I'd like to prove the following problem:
Let $G$ be a graph and suppose some two vertices $u,v \in V(G)$ are separated by $X \subseteq V(G)$. Show that $X$ is a minimal separating set $\iff$ every vertex in $X$ has a neighbor in the component of $G-X$ containing $u$ and another in the component containing $v$.
The solution I have proves the problem as follows:
Let $X$ be a separating set of $u$ and $v$. This means that $u$ and $v$ are in different components of $G-X$. Let's call these components $C_u$ and $C_v$. Let $x \in X$. If $x$ is connected to both $C_u$ and $C_v$ in $G$, then adding it to $G-X$ connects the components $C_u$ and $C_v$, i.e. $X-x$ is not separating. If $x$ is not connected to both, say it has no neighbors in $C_v$, then adding it back to $G-X$ cannot join $C_u$ and $C_v$. Therefore, $X-x$ is still separating.
Now comes the part that confuses me:
This means that removing a vertex $x$ from a separating set keeps the set separating $\iff x$ had neighbors in both $C_u$ and $C_v$. Hence $X$ is minimal separating $\iff$ all its vertices have neighbors in both $C_u$ and $C_v$.
Question: $X$ stays separating if the vertex $x$ is not connected to both. How can I conclude then that removing $x$ keeps the set separating $\iff$ $x$ had neighbors in both $C_u$ and $C_v$. What is the reasoning behind the final conclusion?
| The sentence is just wrong; instead of "$x$ had neighbors" it should be "$x$ did not have neighbors".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Base and neighborhoods Mathematical Analysis II by Zorich:
Definition 1 A set $X$ is said to be a topological space if a system $\tau$ of subsets of $X$ is exhibited (called open sets in $X$) possessing the following properties:
a) $\emptyset\in\tau$;$X\in\tau$.
b) $(\forall\alpha\in A(\tau_\alpha\in\tau))\implies\bigcup_{\alpha\in A}\tau_\alpha\in\tau$.
c) $(\tau_i\in\tau;i=1,\dots,n)\implies\bigcap^n_{i=1}\tau_i\in\tau.$
Definition 3 A base of the topological space $(X,\tau)$ is a family $\mathfrak{B}$ of open subsets of $X$ such that every open set $G\in\tau$ is the union of some collection of elements of the family $\mathfrak{B}$.
Definition 5 A neighborhood of a point of a topological space $(X,\tau)$ is an open set containing the point.
I want to ask:
It is clear that the system of all neighborhoods of all possible points of topological space can serve as a base for the topology of that space.
If $G$ is a nonempty open set in $X$, then it contains some point $p\in X$ and is a neighborhood of $p$. Hence the collection of all neighborhoods of all possible points of a topological space $(X,\tau)$ is $\tau\setminus\{\emptyset\}$. Is this what the author means? I think this statement expresses nothing.
| You are right, the system $\mathfrak N$ of all neighborhoods of all points of a topological space $(X,\tau)$ is a base for the topology $\tau$. In fact, $\mathfrak N = \tau \setminus \{\emptyset\}$.
Let me remark that $\emptyset$ can always be written as the union over the empty index set. For each index function $\iota : A \to \mathcal B \subset \tau$ into the base $\mathcal B$, we define $\tau_\alpha = \iota(\alpha)$ and $\bigcup_{\alpha \in A} \tau_\alpha = \{ x \in X \mid \exists \alpha \in A : x \in \tau_\alpha\}$. If $A = \emptyset$, we have only one (trivial) function $\emptyset \to \mathcal B$, and the above general definition gives us $\bigcup_{\alpha \in \emptyset} \tau_\alpha = \emptyset$.
Anyway, the system $\mathfrak N$ is only one of many bases. The intention of the concept of "base" is to find smaller systems than $\tau$ or $\mathfrak N$ which generate $\tau$ by forming unions. In other words, we want to make an "economical" approach.
For example, in $\mathbb R^n$ with its standard Euclidean topology the open balls
$$B(x,\epsilon) = \{ x' \in \mathbb R^n \mid \lVert x' - x \rVert < \epsilon \}$$
form a base.
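The claim that the neighborhoods $\tau\setminus\{\emptyset\}$ form a base can be checked mechanically on a small finite topology (the space below is an arbitrary example of mine):

```python
from itertools import chain

X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2}), X]  # a topology on X

B = [U for U in tau if U]          # all neighborhoods of all points = tau \ {empty set}

def is_union_of_members(G):
    """Is the open set G a union of base sets contained in it?"""
    return frozenset(chain.from_iterable(U for U in B if U <= G)) == G

print(all(is_union_of_members(G) for G in tau))   # True (empty set = the empty union)
```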
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Other approaches to simplify $\frac{\tan^2x-\sin^2x}{\tan 2x-2\tan x}$ I want to simplify the trigonometric expression $\frac{\tan^2x-\sin^2x}{\tan 2x-2\tan x}$.
My approach,
Here I use the abbreviations $s,c,t$ for $\sin x$, $\cos x$, and $\tan x$ respectively.
Numerator is, $$\frac{s^2}{c^2}-s^2=\frac{s^2-s^2c^2}{c^2}=\frac{s^4}{c^2}=s^2t^2.$$
And denominator is,
$$\frac{2t}{1-t^2}-2t=\frac{2t^3}{1-t^2}.$$
So $$\frac{\tan^2x-\sin^2x}{\tan 2x-2\tan x}=s^2t^2\times\frac{1-t^2}{2t^3}=\sin^2x\times \frac1{\tan 2x}=\frac{1-\cos 2x}{2\tan 2x}.$$
I'm looking for alternative approaches to simplify the expression.
| $\begin{align}\tan^2 x-\sin^2 x&=\sin^2 x(\sec^2 x-1) \\&=\sin^2 x\tan^2x\space \tag1\end{align}$
$$\tan 2x=\frac{2 \tan x}{1-\tan ^2x}$$
$$\tan 2x-2\tan x=\tan 2x\tan^2 x\space \tag2$$
From (1) and (2),
$$\begin{align}\frac{\tan^2x-\sin^2x}{\tan 2x-2\tan x}&=\frac{\sin^2 x\tan^2x}{\tan 2x\tan^2 x}\\&=\frac{\sin^2x}{\tan 2x}\end{align}$$
which is equal to the given answer. (put $\sin^2x=\frac{1}{2}(1-\cos 2x)$)
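A quick numerical spot-check of the simplification at a few sample points (chosen by me so that $\tan x$ and $\tan 2x$ are defined and the denominator is nonzero):

```python
from math import sin, tan

samples = [(x,
            (tan(x)**2 - sin(x)**2) / (tan(2*x) - 2*tan(x)),   # original expression
            sin(x)**2 / tan(2*x))                              # simplified form
           for x in (0.3, 0.7, 1.1)]
for x, lhs, rhs in samples:
    print(x, lhs, rhs)   # lhs and rhs agree at each x
```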
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510735",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Which number is bigger: $e^{\sqrt{5}} + e^{\sqrt{7}}$ or $2e^{\sqrt{6}}$? Let $f:(0, + \infty) \rightarrow \mathbb{R}$ be $f(x) = e^{\sqrt{x}}$.
Which number is bigger: $e^{\sqrt{5}} + e^{\sqrt{7}}$ or $2e^{\sqrt{6}}$?
I would like to ask for a hint on how to approach this question, I don't have an idea.
| Let $f(x)=e^{\sqrt{x}}$, so the second derivative is:
$$f''(x)=\frac{e^{\sqrt{x}}}{4x^{3/2}}(\sqrt{x}-1)$$
Since $f''(x)>0$ for $x\in (1,\infty)$, it is concave up (convex). So we have
$$\frac{f(x)+f(y)}{2}>f\left(\frac{x+y}{2}\right)$$
Let $x=5, y=7$
$$\frac{f(5)+f(7)}{2}>f\left(6\right)$$
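The resulting inequality is easy to confirm numerically, just as a sanity check:

```python
from math import exp, sqrt

f = lambda t: exp(sqrt(t))
lhs = f(5) + f(7)      # ~23.45
rhs = 2 * f(6)         # ~23.16
print(lhs > rhs)       # True: e^sqrt(5) + e^sqrt(7) > 2 e^sqrt(6)
```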
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Equation of a cylinder with a profile / ellipsoid I have the geometry of a cylinder that is curved along both sides where a cylinder generally has its straight height. I am aware of the equation of a cylinder. I wanted to know what the equation of a curved cylinder could be as a function of the $x$, $y$, and $z$ coordinates. Also, could this curved cylinder be called an ellipsoid?
I have attached a geometry herewith along with the dimensions.
2D
3D
That would probably be called a "barrel shape". The equation depends on the type of curve you choose for the barrel. If it's a circular arc, the $y$ axis is along the axis of symmetry of the barrel, and we count the centre of the barrel as the origin, then the equation for the barrel would be something like
$$x^2+z^2=r^2$$
for the $x$ and $z$ coordinates, and
$$(r+h)^2+y^2=R^2$$
Here $R$ is the radius of curvature of the barrel side and $h<R$ is a constant which expresses an offset from the origin of the centre of the radius of curvature.
So we end up like
$$(\sqrt{x^2+z^2}+h)^2+y^2=R^2.$$
In the case of your diagram, it looks like you have $h=80 \text{mm}$ and $R=90 \text{mm}$.
I couldn't persuade Wolfram Alpha to do a 3D plot of this, but here is a 2D version.
In practice you would do this parametrically.
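For instance, a minimal parametric sketch (with $R=90$, $h=80$ as read off the diagram above; the function name is mine), which also confirms the implicit equation:

```python
from math import cos, sin, sqrt

R, h = 90.0, 80.0                       # radius of curvature and offset from the diagram

def barrel_point(y, theta):
    """Parametric point; valid while sqrt(R^2 - y^2) > h, i.e. |y| < sqrt(R^2 - h^2)."""
    r = sqrt(R**2 - y**2) - h           # cross-section radius at height y
    return (r * cos(theta), y, r * sin(theta))

x, y, z = barrel_point(20.0, 1.0)
print((sqrt(x*x + z*z) + h)**2 + y**2)  # ~8100 = R^2, so the point is on the surface
```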
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4510998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Question on axiom of extensionality I am following the proof of the ordered pair property in the book "Set theory for guided independent study" by Goldrei. The author discusses the case $ \{x \} = \{ u \} $ and says, by ZF1 (axiom of extensionality), it follows that $ x = u $. Now the axiom of extensionality is given as follows
$$ \forall x \forall y \left(x = y \longleftrightarrow \forall z(z \in x \longleftrightarrow z \in y)\right) $$
Now, I don't see the conclusion follows from ZF1. Can anybody help ?
Thanks
| We are given that $\{x\} =\{u\}$. By extensionality, this means that $\forall z (z \in \{x\} \leftrightarrow z \in \{u\})$. Instantiating this with $z=x$ we get $x \in \{x\} \leftrightarrow x \in \{u\}$. We do have $x \in \{x\}$, and thus $x \in \{u\}$. By definition of $\{u\}$ this implies that $x=u$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4511231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove if $\sqrt{x+1}+\sqrt{y+2}+\sqrt{z+3}=\sqrt{y+1}+\sqrt{z+2}+\sqrt{x+3}=\sqrt{z+1}+\sqrt{x+2}+\sqrt{y+3}$, then $x=y=z$. Let $x$, $y$, $z$ be real numbers satisfying $$
\begin{align}
&\sqrt{x+1}+\sqrt{y+2}+\sqrt{z+3}\\
=&\sqrt{y+1}+\sqrt{z+2}+\sqrt{x+3}\\
=&\sqrt{z+1}+\sqrt{x+2}+\sqrt{y+3}.
\end{align}$$
Prove that $x=y=z$.
I tried assuming $x>y>z$, $x>y=z$,$x<y<z$, etc., but none of the directions work. Please help me solve this problem.
| The solution of tehtmi is wonderful, and I have a similar approach.
For each parameter $t \in \{x, y, z\}$ and each $1 \leq i \leq 3$, let $t_i = \sqrt{t + i}$. For example $x_2 = \sqrt{x + 2}$. So we have:
\begin{align*}
&x_1 + y_2 + z_3\\
=\ &y_1 + z_2 + x_3 \label{1}\tag{$*$}\\
=\ &z_1 + x_2 + y_3
\end{align*}
Suppose $x = \min\{x, y, z\}$. Note that the function $f(t) = \sqrt{t + m} - \sqrt{t + n}$ for all $m > n$ is strictly decreasing. Thus
\begin{alignat*}{2}
y_2 - y_1 &\leq x_2 - x_1 &&\implies x_1 + y_2 \leq y_1 + x_2\\
z_3 - z_2 &\leq x_3 - x_2 &&\implies z_3 + x_2 \leq z_2 + x_3\\
&\ &&\stackrel{+}{\implies} x_1 + y_2 + z_3 \leq y_1 + z_2 + x_3
\end{alignat*}
But by \eqref{1} the equal case has occurred, and the equal case occurs only for $x = y = z$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4511321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 2,
"answer_id": 0
} |
Why this transformation matrix $A$ has $\begin{pmatrix}0 \\ 1\end{pmatrix}$ as Eigenvector? I have the following transformation matrix:
$$
A=\begin{pmatrix}
1 & 0 \\
-1 & 4
\end{pmatrix}
$$
If I resolve to find the eigenvalues I get:
$$
\begin{vmatrix}
A-\lambda I
\end{vmatrix} = 0
$$
which leads to:
$$
\lambda_1 = 1;
\lambda_2 = 4
$$
Now if I try to calculate the eigenvectors
For $\lambda_1$ I get:
$$
(A - 1 I)\begin{pmatrix}x_1 \\ x_2 \end{pmatrix}=\begin{pmatrix}0 \\ -x_1+3 x_2 \end{pmatrix}
$$
and for $\lambda_2$:
$$
(A - 4 I)\begin{pmatrix}x_1 \\ x_2 \end{pmatrix}=\begin{pmatrix}-3 x_1 \\ -x_1 \end{pmatrix}
$$
I see (and can compute using a symbolic calculation program) that there are two eigenvectors:
$$
e1 = \begin{pmatrix}0 \\ 1\end{pmatrix}
$$
$$
e2 = \begin{pmatrix}3 \\ 1\end{pmatrix}
$$
I can easily see why $\begin{pmatrix}3 \\ 1\end{pmatrix}$ is an eigenvector for the eigenvalue $\lambda_1$.
But I am struggling to understand why $\begin{pmatrix}0 \\ 1\end{pmatrix}$ is an eigenvector.
Could someone help me to understand why?
| For $\lambda_2$:
$$
(A - 4 I)\begin{pmatrix}x_1 \\ x_2 \end{pmatrix}=\begin{pmatrix}-3 x_1 \\ -x_1 \end{pmatrix}=\begin{pmatrix}0 \\ 0 \end{pmatrix}\Rightarrow x_1=0\Rightarrow v_2=\begin{pmatrix}0 \\ x_2 \end{pmatrix}=x_2 \begin{pmatrix}0 \\ 1 \end{pmatrix}
$$
So your second eigenvector is $\begin{pmatrix}0 \\ 1 \end{pmatrix}$
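Both eigenpairs are easy to verify directly: $A\begin{pmatrix}3\\1\end{pmatrix}=\begin{pmatrix}3\\1\end{pmatrix}$, so $\begin{pmatrix}3\\1\end{pmatrix}$ belongs to $\lambda_1=1$, while $A\begin{pmatrix}0\\1\end{pmatrix}=4\begin{pmatrix}0\\1\end{pmatrix}$:

```python
A = [[1, 0], [-1, 4]]

def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

pairs = [(1, [3, 1]), (4, [0, 1])]             # (eigenvalue, eigenvector)
for lam, v in pairs:
    print(lam, matvec(A, v) == [lam*v[0], lam*v[1]])   # True, True
```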
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4511455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove $\cot\frac{\theta}{2} - 2\cot\theta = \tan\frac{\theta}{2}$ This is a math exercise I am doing for my year 10 math class. Basically I would like to show how I would prove that $\cot\frac{\theta}{2} - 2\cot\theta = \tan\frac{\theta}{2}$
I was able to expand the LHS and use fractions to get $$\cot\frac{\theta}{2} - 2\cot\theta = \frac{2\cos^2\frac{\theta}{2} - 2\cos\theta}{\sin\theta}$$
I do not know what I would do from here.
Any help will be appreciated. Thanks :)
Suppose $x=\frac{\theta}{2}$ so that $\theta=2x$:
$$\cot x-2\cot 2x=\frac {\cos x}{\sin x}-2\frac{\cos 2x}{\sin 2x}=\frac {2\cos^2x-2(\cos^2x-\sin^2x)}{\sin 2x}=\cdots $$
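The dots continue as $\frac{2\sin^2x}{\sin 2x}=\tan x$. A numerical spot-check of the resulting identity $\cot x - 2\cot 2x = \tan x$ at a few arbitrary points:

```python
from math import tan

samples = [(x, 1/tan(x) - 2/tan(2*x), tan(x)) for x in (0.25, 0.6, 1.0)]
for x, lhs, rhs in samples:
    print(x, lhs, rhs)   # lhs and rhs agree at each x
```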
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4511842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Generalisation of Bertrand's postulate: not just one but two primes Just a sunday's question. Bertrand's postulate states that for every $n>1$, there is always at least one prime $p$ such that $n \lt p \lt 2n$.
Playing around with some small numbers suggests the question if there is possibly not just one but there are at least two primes in that interval.
From the following numerical example we can see that the answer seems to be affirmative for $n\ge6$
\begin{array}{ccc}
n&(n,2n)&\text{list of primes }p\\
1 & (1,2) & \{\} \\
2 & (2,4) & \{3\} \\
3 & (3,6) & \{5\} \\
4 & (4,8) & \{5,7\} \\
5 & (5,10) & \{7\} \\
6 & (6,12) & \{7,11\} \\
7 & (7,14) & \{11,13\} \\
8 & (8,16) & \{11,13\} \\
9 & (9,18) & \{11,13,17\} \\
10 & (10,20) & \{11,13,17,19\} \\
\end{array}
Question
I would greatly appreciate some references for this most probably well known result.
EDIT
Here I add some plots which show the number of primes between $n+1$ and $2n-1$ over different ranges of $n$.
I have fitted the curves with the function
$$b(n) = \frac{A n }{\log(1+B n)}$$
The parameters are shown in the plots.
Discussion
*
*These graphs illustrate the proof of Erdös: "for any $k$ and sufficiently large $n$ there are always at least $k$ primes between $n$ and $2n$", as was pointed out in a comment of @lulu and in the solution of @Thomas Preu
*The number $b(n)$ is not monotonic on a small scale.
*The shape of the fitted functions is just what was to be expected from the prime counting function $\pi(n)$. Indeed we have exactly just $b(n) = \pi(2n) - \pi(n)$.
| Here is a precise form of what you are looking for. Theorem 3.6 of Das and Paul's 2019 paper shows that
For any integer $n$ and $k$, there are at least $k-1$ primes between
$n$ and $kn$ when $n \ge 1.1\ln(2.5k) $
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4511929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Sample means and confidence intervals For a population, where its parameters are unknown, why is it that if we take samples, those samples create a distribution that resembles that of a Normal Distribution? What if the original population distribution is not normal, why don't the samples create a distribution that is similar to the population, because they should have similar features to the population?
| It is not the case that samples from an arbitrary distribution are normal.
What you are thinking of is the Central Limit Theorem (CLT) which instead tells us the following...
Let $F$ be a (sufficiently regular) distribution with mean $\mu$ and standard deviation $\sigma$. Let $X_{1},X_{2},\ldots$ be a sequence of IID samples from $F$.
Then, the sample mean is "roughly" normal when a "large" number of samples $n$ are used:
$$
\frac{X_{1}+\cdots+X_{n}}{n}\approx N(\mu,\sigma^{2}/n).
$$
Note, in particular, that the uncertainty in our estimate of the true mean by the sample mean tends to zero as more and more samples are used (i.e., $n \rightarrow \infty$).
The above is not a rigorous statement ($\approx$).
More rigorous forms of the CLT involving different flavors of convergence can be found on the corresponding Wikipedia article.
I recommend you visit these in the future after learning about convergence of random variables.
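A small simulation illustrates the distinction. Here I take an exponential distribution (decidedly non-normal) as an assumed example: the individual samples stay exponential, but the sample mean concentrates around $\mu$ with spread roughly $\sigma/\sqrt n$:

```python
import random
import statistics

random.seed(42)

mu = sigma = 1.0          # Exp(1) has mean 1 and standard deviation 1
n = 100                   # samples per experiment
trials = 20000            # number of sample means we collect

means = [statistics.fmean(random.expovariate(1.0) for _ in range(n))
         for _ in range(trials)]

# the sample mean is centred at mu with standard deviation ~ sigma/sqrt(n)
assert abs(statistics.fmean(means) - mu) < 0.01
assert abs(statistics.stdev(means) - sigma / n ** 0.5) < 0.01
```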
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4512264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\frac {x \csc x + y \csc y}{2} < \sec \frac {x + y}{2}$
If $0 < x,y < \frac {\pi}{2}$, prove that:
$$
\frac {x \csc x + y \csc y}{2} < \sec \frac {x + y}{2}
$$
My attempt. First, I tried to change this inequality:
$$
\frac {\frac{x}{\sin x} + \frac{y}{\sin y}}{2} < \frac{1}{\cos \frac{x+y}{2}}
$$
Then,it's easy to know:
$$
LHS = \frac{1}{2} \left ( \frac {x}{2\sin \frac {x}{2} \cos \frac{x}{2}} + \frac{y}{2 \sin \frac{y}{2} \cos \frac{y}{2}} \right ) < \frac{1}{2} \left ( \frac{1}{\cos^{2} \frac{x}{2}} +\frac{1}{\cos^{2} \frac{y}{2}} \right )
$$
$$
RHS > \frac{1}{\cos \frac{x}{2} \cos \frac{y}{2}}
$$
How to solve it next? It seems this way is wrong.
| Lemma 1. $f(x)=\frac{x}{\sin x}$ is a positive, increasing and convex function on $I=\left(0,\frac{\pi}{2}\right)$.
Let us assume that $\mu = \frac{x+y}{2}\in I$ is fixed and $x\leq y$. Let us set $\delta=\frac{y-x}{2}$. By Lemma 1,
$$ \sup_{0\leq \delta < \min\left(\mu,\frac{\pi}{2}-\mu\right)}\left(\frac{\mu-\delta}{\sin(\mu-\delta)}+\frac{\mu+\delta}{\sin(\mu+\delta)}\right)$$
is achieved at the right endpoint of the range for $\delta$. If $\mu\leq\frac{\pi}{4}$ such supremum equals $1+\frac{2\mu}{\sin(2\mu)}$.
If $\mu\geq\frac{\pi}{4}$ such supremum equals $\frac{\pi}{2}+\frac{2\mu-\pi/2}{\sin(2\mu-\pi/2)}$. The inequality
$$ \frac{\pi}{2}+\frac{2\mu-\pi/2}{\sin(2\mu-\pi/2)}\leq \frac{2}{\cos\mu} $$
over the interval $\left[\frac{\pi}{4},\frac{\pi}{2}\right)$ is very loose, hence the problem boils down to showing that
$$ 1+\frac{2\mu}{\sin(2\mu)} \leq \frac{2}{\cos\mu} $$
holds over the interval $\left(0,\frac{\pi}{4}\right)$. By multiplying both sides by $\sin\mu\cos\mu$ we get that the inequality is equivalent to
$$ \sin\mu\cos\mu + \mu \leq 2\sin\mu\tag{E} $$
which follows from the termwise integration of $1+\cos(2\mu)\leq 2\cos\mu$.
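Numerical evidence for the original inequality over a grid on $(0,\pi/2)^2$ (this is only a sanity check, not part of the proof):

```python
import math

def lhs(x, y):
    return (x / math.sin(x) + y / math.sin(y)) / 2

def rhs(x, y):
    return 1 / math.cos((x + y) / 2)

steps = 200
for i in range(1, steps):
    for j in range(1, steps):
        x = i * (math.pi / 2) / steps
        y = j * (math.pi / 2) / steps
        assert lhs(x, y) < rhs(x, y)
```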
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4512437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to show that linear map is surjective? I have the following linear map:
$$T:\mathcal{P}^2 \to \Bbb{R}^2 $$
where $\mathcal{P}^2$ denotes the vector space of polynomials with real coefficients having degree at most $2$
$ T $ is defined by $$T(ax^2+bx+c)=[a+b , b−c]^t$$
I do not know how to prove that $T$ is surjective.
I know its not injective.
Yet,i do not now how to formally show that is a surjection.
I tried following the answer at How to show that a linear map is surjective?
Using the formula described:
*
*$\dim(V) = 2$,
*$\dim(\operatorname{range}T)=1$
According to the formula "will be surjective if $\dim V= \dim \operatorname{range} T$"
this map would not be surjective.
Yet, in the website i took this exercise from, it says this map is a surjective.
So, i am wondering what i did wrong, and if there are better ways to show that this map is a surjective.
I came across on this exercise on this website.
| For any $(p,q)\in\mathbb R^2$, $T(px^2-q)=(p+0,\,0-(-q))=(p,q)$. Since this is true for any pair, $T$ is surjective.
$T(x^2+5x+3)=T(2x^2+4x+2)=(6,2)$ so $T$ is not injective.
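The explicit preimage construction can be mirrored in code, representing $ax^2+bx+c$ by the triple $(a,b,c)$ (an illustrative sketch with arbitrarily chosen test pairs):

```python
def T(a, b, c):
    """The map T(ax^2 + bx + c) = (a + b, b - c)."""
    return (a + b, b - c)

def preimage(p, q):
    """One polynomial mapping to (p, q): take a = p, b = 0, c = -q."""
    return (p, 0, -q)

# surjectivity: every (p, q) is hit
for p, q in [(6, 2), (-1, 3), (0, 0), (2.5, -7)]:
    assert T(*preimage(p, q)) == (p, q)

# non-injectivity: two different polynomials with the same image
assert T(1, 5, 3) == T(2, 4, 2) == (6, 2)
```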
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4512585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Intuitive understanding of why trigonometric functions relate to both triangles and the complex exponential The trigonometric functions sine and cosine can be defined in terms of the complex exponential:
$$\sin(x) = \frac{1}{2i}(e^{ix} - e^{-ix})$$
$$\cos(x) = \frac{1}{2}(e^{ix} + e^{-ix})$$
And can also be defined in terms of the lengths of the sides of a triangle where $x$ now represents the angle between the hypotenuse and the adjacent sides $H$, $A$, and $O$ represent the lengths of the hypotenuse, adjacent, and opposite sides respectively:
$$\sin(x) = \frac{O}{H}$$
$$\cos(x) = \frac{A}{H}$$
I find it utterly bizarre that something as seemingly unrelated as the dimensions of a triangle relates to exponentiation and the root of a negative.
Mathematically, how is this possible?
Intuitively, why is this possible?
| $$e^{ix}=\cos x + i \sin x$$
is intuitively a circle of radius $|e^{ix}|=1$ in the complex plane with the radius, the real part, and the complex part forming a right-angled triangle.
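Python's `cmath` makes the link concrete: each point $e^{ix}$ lies on the unit circle, and its real and imaginary parts are exactly $\cos x$ and $\sin x$ (the sample values of $x$ are arbitrary):

```python
import cmath
import math

for x in (0.0, 0.5, 1.0, math.pi / 3, 2.0):
    z = cmath.exp(1j * x)
    assert abs(z.real - math.cos(x)) < 1e-12   # real part = adjacent / hypotenuse
    assert abs(z.imag - math.sin(x)) < 1e-12   # imaginary part = opposite / hypotenuse
    assert abs(abs(z) - 1.0) < 1e-12           # hypotenuse = radius 1

# the exponential definitions from the question
x = 0.7
assert abs((cmath.exp(1j * x) - cmath.exp(-1j * x)) / 2j - math.sin(x)) < 1e-12
assert abs((cmath.exp(1j * x) + cmath.exp(-1j * x)) / 2 - math.cos(x)) < 1e-12
```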
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4512723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Entire function $f: \mathbb{C} \to \mathbb{C}$ maps every unbounded sequence to an unbounded sequence then $f$ is a polynomial? I came across this problem. Hopefully my last for the exam tomorrow.
(i) Entire function $f: \mathbb{C} \to \mathbb{C}$ maps every unbounded sequence to an unbounded sequence then $f$ is a polynomial?
The other question is
(ii) Is the same conclusion true if $f$ is holomorphic in $ \mathbb{C} \setminus\{ 0\}? $
My thoughts go to the Liouville's Theorem first, since $f$ is unbounded it can't be constant. From there I would say that the function $f$ is analytic (bc holomorphic) with $\sum\nolimits_{n=0}^ \infty a_n(z - z_0)^n$ and some $a_n \neq 0$ for an $n > 0$ since $f$ is not constant. And therefore it must be a polynomial? But I dont have the gut feeling that I am here right or am I? :D On the other question (ii) I can't find a thing why it not should work like my thought befor actually.
Thank you guys
| *
*Consider $\infty$ as a singularity point. It's known that $\infty$ is a pole of an entire function $f$ iff $f$ is a polynomial, so let's show it is a pole. It is not a removable singularity, since then $f$ would be bounded. If it were an essential singularity, by the Casorati–Weierstrass theorem we could find an unbounded sequence $a_n\to\infty$ with $f(a_n)\to 0$, a contradiction. Therefore it is a pole and $f$ is a polynomial.
*This is false - consider $f(z)=\frac{z^2+1}{z}$. If $a_n$ is unbounded, take $a_{n_k}\to\infty$. Since $\infty$ is a pole of $f$, $f(a_{n_k})\to\infty$ and $f(a_n)$ is unbounded. So $f$ maps unbounded sequences to unbounded sequences, but is not a polynomial.
In fact, one can show that the only functions that are holomorphic in the punctured plane and satisfy this property must be rational with a pole at infinity (this is if and only if).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4512847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Number of chords to be drawn to return the starting point Given a pair of concentric circles, chords AB,BC,CD,... of the outer circle are drawn such that they all touch the inner circle. If ∠ABC = 75°;how many chords can be drawn before returning to the starting point?
Thought to start by joining points of contact with the centre,but couldn't got any further idea. Any help is appreciated.
Question Source: IOQM Olympiad
| Method using modular arithmetic
Notice that $OA = OB$ and that $OB$ bisects $\angle ABC$ (the two chords through $B$ are tangent to the inner circle), so $\triangle AOB$ is isosceles with $\angle OBA = \angle OAB = 37.5^\circ$ and therefore $\angle AOB = 105^\circ$.
Each successive chord adds $105^\circ$ to this angle so we want to know when this would be a multiple of $360^\circ$. In other words, we want
$$
105n \equiv 0 \pmod {360}
$$
This implies
$$
7n \equiv 0 \pmod { 24}
$$
Since $\gcd{(7, 24)} = 1$, the minimum $n$ that satisfies this equation is $\boxed{n = 24}$
Method using Complex Numbers
Without loss of generality, assume that the outer circle has a radius of $1$. We can rotate the initial diagram such that the point $A$ corresponds to the point $(1, 0)$ on the coordinate plane.
Every time we draw a chord, the angle increases by $105^\circ$, so essentially we are rotating the complex number by $105^\circ$ each time.
Let $z = e^{i \cdot 0}$. The next chord point (in our case $B$) will occur at $z = e^{i \cdot \frac{7\pi}{12}}$ and $C$ will occur at $z = e^{i \cdot \frac{7\pi}{6}}$ and so on.
We want to know, when $z = e^{i \cdot \frac{7\pi}{12} \cdot n}$ will return to the point $A$ at $(1, 0)$. Clearly, this will happen when:
$$
\frac{7\pi}{12}\cdot n = 2\pi k
$$
In other words, for some integer $n$ we want:
$$
7n = 24k
$$
The above simply corresponds to the modular equation:
$$
7n \equiv 0 \pmod {24}
$$
Since $\gcd(7,24)=1$, the minimum n that satisfies this equation is $\boxed{n=24}$.
Verification
If we use complex numbers, we actually know the coordinates of each point. For example,
$$
e^{i \cdot \frac{7\pi}{12}} \implies (\cos(105^\circ), \ \sin(105^\circ))
$$
Using this we can be sure that the only solution occurs when $n = 24$.
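Both arguments reduce to the same modular computation, which is easy to script:

```python
import math

def chords_to_return(step_deg):
    """Smallest n >= 1 with step_deg * n divisible by 360 degrees."""
    n = 1
    while (step_deg * n) % 360 != 0:
        n += 1
    return n

step = 105            # central angle added per chord, from angle AOB
assert chords_to_return(step) == 24

# equivalently: n = 360 / gcd(105, 360)
assert 360 // math.gcd(105, 360) == 24
```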
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does there exist Eulerian quadrangulations that are not 1- or 2-degenerate? I am looking for Eulerian planar quadrangulations that are not 1- or 2-degenerate, but I cannot seem to find such graphs.
Note: a graph is Eulerian if and only if every vertex has an even degree. Clearly there exists such quadrangulations, for example, $K_{2,6}$ where the points in the partite set of $2$-points are placed on both sides of a line of the set with $6$ points. However, the case of creating a planar quadrangulation with each vertex having an even degree, and the quadrangulation also being $\ge 3$-degenerate, this becomes a lot more complicated. Are there any existing examples of such?
| Remove the blue vertices to find a subgraph (which is itself a quadrangulation!) that has minimum degree $3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tossing two coins with unknown success probability I have two coins with uniformly distributed unknown success probabilities $p_1$ and $p_2$.
I toss both coins $N$ times and got $n_1$ and $n_2$ successes for the first and second coins respectively. ($N$ is large, both $n$s are small. $N \gg n_1 > n_2$).
Knowing this, what is the probability that $p_1 > p_2$.
I understand that it is a conditional probability and I think that I can solve for $N=1$, $n_1=1$ and $n_2=0$. [at least calculations in my head claim this is $5/6$].
I suppose that I need to integrate probabilities of getting the outcome on the lower triangle vs integrating on the whole square.
Since, this approach did not give me anything intuitive or simple (to be solved on the paper),
Is there anything simpler, and/or more intuitive?
Update:
After I have been asked a few times how to calculate things in one's head...
First off, deprive yourself of paper and access to the internet.
The calculations involved in this case are not too difficult, and when you lack any tools, you can give it a try.
So we have to calculate:
$$\frac{\int\limits_0^1\int\limits_{p_2}^1 p_1(1-p_2) dp_1 dp_2}{\int\limits_0^1\int\limits_{0}^1 p_1(1-p_2) dp_1 dp_2}.$$
The upper integral is much more involved, but in the end it is a polynomial, and you solve it like on the paper:
$$I = \int\limits_0^1\int\limits_{p_2}^1 p_1(1-p_2) dp_1 dp_2 = \int\limits_0^1 \frac{1}{2}p_1^2\Big|_{p_2}^1(1-p_2) dp_2, $$ then
$$I = \frac{1}{2}\int\limits_0^1 (1-p_2^2)(1-p_2)dp_2.$$
Here is the point where you should believe in yourself, specifically, that you can hold $4$ numbers (simple monomials) in the head at once, cached.
$$I = \frac{1}{2}\int\limits_0^1 (1-p_2-p_2^2+p_2^3)dp_2.$$
But luckily, $\int\limits_0^1 x^k = \frac{1}{k+1},$ so we have that
$$I = \frac{1}{2} \cdot \left[\frac{1}{0+1} - \frac{1}{1+1} - \frac{1}{2+1}+\frac{1}{3+1}\right] = \frac{5}{24}$$
The second integral, can be calculated directly, or we can notice, that due to symmetry(ies) it is exactly $\frac{1}{4}$.
| In general, assuming uniform $B(1,1)$ priors on $p_1$ and $p_2$, you have $$f_X(x) = \text{PDF of }B(1 + n_1, 1 + (N-n_1)),$$
$$f_Y(y) = \text{PDF of }B(1 + n_2, 1 + (N-n_2)).$$
So the PDF of $Z = X - Y$ is
$$f_Z(z) = \int f_Y(y)f_X(y+z)dy,$$
(because $X=Y+Z$) where the integration range is $(0,1-z)$ for $z>0$ and $(-z,1)$ for $z<0$. And then simply $\mathbb{P}(Z>0)=\int_0^1f_Z(z)dz$.
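One can also just sample from the two Beta posteriors directly; a Monte Carlo sketch (for $N=1,\ n_1=1,\ n_2=0$ the posteriors are $B(2,1)$ and $B(1,2)$, and the target probability should come out near $5/6$):

```python
import random

random.seed(0)

def prob_p1_greater(N, n1, n2, trials=200_000):
    """Estimate P(p1 > p2 | data) under independent uniform priors:
    p1 ~ Beta(1 + n1, 1 + N - n1), p2 ~ Beta(1 + n2, 1 + N - n2)."""
    hits = 0
    for _ in range(trials):
        p1 = random.betavariate(1 + n1, 1 + N - n1)
        p2 = random.betavariate(1 + n2, 1 + N - n2)
        hits += p1 > p2
    return hits / trials

estimate = prob_p1_greater(N=1, n1=1, n2=0)
assert abs(estimate - 5 / 6) < 0.01   # 5/6 ≈ 0.8333
```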
I applied this general method to verify your calculation that for $N=1,n_1=1,n_2=0$, the answer is $5/6$, by calculating
$$f_Z(z)=\left\{
\begin{align}
&\frac23 (1-z)(1+4z+z^2) &\text{ if } z>0\\
&\frac23 (z+1)^3 &\text{ if } z<0 \\
\end{align}
\right.$$
How did you do it in your head?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why does $(2^x -1)/x$ not have a vertical asymptote? Why does $(2^x -1)/x$ not have a vertical asymptote as opposed to $(2^x)/x$?
Can you factor in the first expression so that you can cancel out the $x$ term?
| mweiss gave already a nice explanation, but let me give a proof requiring as little calculus as I can.
Let $f:\mathbb{R} \setminus \{0\} \to \mathbb R$ be $f(x) = (2^x-1)/x$.
For function $f$ to have vertical asymptote at $x = 0$ it would have to be unbounded near $0$. Our function is not defined at $0$, but there exists limit: $$
\lim_{x\to 0} f(x) = \lim_{x\to 0}\frac{2^x - 2^0}{x - 0} = \left. (2^x)' \right|_{x=0} = \left. (\ln 2\cdot 2^x) \right|_{x=0} =\ln 2.
$$
Let's define $\tilde f:\mathbb R \to \mathbb R$ as $$
\tilde f(x) =\begin{cases}
f(x) \quad &\text{if} \, x \neq 0 \\
\ln 2\quad &\text{if} \, x = 0. \\
\end{cases}
$$
Since $f$ was continuous on its domain, and $\tilde f$ is continuous at $x = 0$, $\tilde f$ is continuous everywhere. So in particular, it is bounded near $0$ (formally, by the Extreme Value Theorem it is bounded on, for example, $[-1, 1]$). That shows $f$ is bounded near $0$. $\square$
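Numerically, $f$ indeed approaches $\ln 2$ from both sides of $0$ instead of blowing up; compare with $2^x/x$, which does blow up:

```python
import math

f = lambda x: (2 ** x - 1) / x
g = lambda x: 2 ** x / x

for x in (1e-3, -1e-3, 1e-6, -1e-6):
    assert abs(f(x) - math.log(2)) < 1e-2      # f stays near ln 2 ≈ 0.693
assert abs(g(1e-6)) > 1e5                      # g is unbounded near 0
```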
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
$x^{x^2} + x^{x^8} =?$ Given $x^{x^4} = 4$ If x is any complex number such that $x^{x^4} = 4$ , then find all the possible values of :
$x^{x^2} + x^{x^8}$
First, I used laws of exponents to give $18$ as the answer. However, I realised that I had misused them. Further, I used logarithms, which yielded
$$ x^{x^2} + x^{x^8} = 4^{1/x^2} + 4^{x^4} $$
After this, I am stuck and can't proceed further. By hit and trial, $x = \sqrt{2}$ seems to be one of the solutions. Any help is appreciated.
| This answer only works (based on the original question) on the set of real numbers.
Hint:
\begin{align}
x^{x^4}=4&\implies \left(x^4\right)^{x^4}=4^4\\
&\implies x^4=4\\
&\implies x=\pm\sqrt 2=\pm 2^{\frac 12}.\end{align}
Justification of the step $$\left(x^4\right)^{x^4}=4^4\implies x^4=4$$
We know that for $t\ge0$ the equation $ye^y=t$ has exactly one real solution, $y=W_0(t)$. Taking logarithms and writing $y=\ln x^4$, we have
$$x^4\ln x^4=\ln x^4\, e^{\ln x^4}=4\ln 4=\ln 4\, e^{\ln 4}\ge 0$$
So $\ln x^4=\ln 4$, and $x^4=4$ is the only possible solution.
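For the real roots $x=\pm\sqrt2$ the value of the asked-for expression can be checked numerically; since $x^2=2$ and $x^8=16$ at these roots, it equals $2+256=258$ (the complex solutions of the original equation would need separate treatment):

```python
x = 2 ** 0.5          # the positive real solution x = sqrt(2)

assert abs(x ** (x ** 4) - 4) < 1e-12          # x^(x^4) = 4
value = x ** (x ** 2) + x ** (x ** 8)
assert abs(value - 258) < 1e-9                 # sqrt(2)^2 + sqrt(2)^16 = 2 + 256
```

For $x=-\sqrt2$ the relevant exponents are even integers, so the same values $2$ and $256$ come out.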
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to read $Q_j = \{k:\beta_k = j\}$? In my work, I came across the following expression:
$$Q_j = \{k:\beta_k = j\},\tag1$$
where $j = \mathcal{J} = \{1,...,J\}$ set of cell-phone tower,
$k = \mathcal{K} = \{1,...,K\}$ is a set of users, and
$\beta_k$ is the index of cell-phone tower to whom user $k$ is associated with.
I understand that equation $(1)$ expresses a set but, how to read it in terms of set theory?
|
*
*$j = \mathcal{J} = \{1,...,J\}$ set of cell-phone tower
*$k = \mathcal{K} = \{1,...,K\}$ is the set of users
*$\beta_k$ is the index of cell-phone tower to whom user $k$ is associated with
*$Q_j = \{k:\beta_k = j\}.\tag1$
Correction:
*
*$\mathcal{J}=\{j_1,...,j_J\}$ is a set of cellphone towers.
*$\mathcal{K}=\{k_1,...,k_K\}$ is a set of cellphone users.
*For each cellphone user $k,\;\beta_k$ is the cellphone tower that $k$ is associated with.
*$\forall j\in\mathcal J,\quad Q_j = \{k\in\mathcal{K}:\beta_k = j\}.\tag1$
$\mathcal J$ is a set of $J$ cellphone towers; $\mathcal K$ is a set of $K$ cellphone users; $$\forall j\in\mathcal J,\quad Q_j = \{k\in\mathcal{K}:k\text{ is associated with }j\}.$$ That last clause reads: for each tower $j$ in $\mathcal{J},\,Q_j$ is the subset of $\mathcal K$ such that every user in $Q_j$ is associated with $j.$
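In code, $(1)$ is just grouping users by their tower: for each tower $j$, $Q_j$ collects the users associated with it (the tower/user labels below are hypothetical):

```python
# beta[k] = tower that user k is associated with (hypothetical data)
beta = {1: "A", 2: "B", 3: "A", 4: "C", 5: "A"}
towers = {"A", "B", "C"}

# Q_j = { k : beta_k = j }
Q = {j: {k for k in beta if beta[k] == j} for j in towers}

assert Q["A"] == {1, 3, 5}
assert Q["B"] == {2}
assert Q["C"] == {4}
# the Q_j together partition the set of users
assert set.union(*Q.values()) == set(beta)
```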
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $A,B$ be matrices under certain conditions, then $AB=B$ Let $k, n$be positive integers. Let $G= \{ A_1, A_2, \cdots, A_k\}$ be a set of real $n\times n$ matrices such that $G$ is a group under the usual matrix multiplication. Let $B=A_1 + A_2 + \cdots +A_k$. Prove that $A_i B=B$ for every $i\in \{ 1,2, \cdots, k\}$.
At first I tried writing out the matrices. Then it became a mess of notations. To make the notations a bit nicer, I used $A^i$ instead of $A_i$. Then I got $(A^iB)_{pq} = \sum_{x=1}^n A^i_{xp}(\sum_{i=1}^kA^{i}_{px})$..
Then I tried something different. I got $I= (B^{-1}B) = (A_iB)^{-1}B = B^{-1} A_i^{-1}B$. So $B=A_i^{-1}B$. But then I realized that $B$ might not be invertible.
Thanks in advance for any help!
| Notice that the map $f_{i}:G\to G$ defined by $f_i(g)=A_ig$ is a bijection for every $i$. Hence $$A_iB=\sum_{j=1}^{k}A_iA_j=\sum_{j=1}^{k}A_j=B,$$ since left multiplication by $A_i$ merely permutes the elements of $G$.
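A concrete check with the two-element group $G=\{I, P\}$, where $P$ is the swap matrix (left-multiplying by either element just permutes $G$, so the sum $B$ is fixed):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

I = [[1, 0], [0, 1]]
P = [[0, 1], [1, 0]]          # P*P = I, so {I, P} is a group

B = matadd(I, P)              # B = [[1, 1], [1, 1]]
for A in (I, P):
    assert matmul(A, B) == B  # A_i * B = B for every element of G
```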
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4513998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Change of variables and Tonelli applied to a negative function I am trying to solve the following exercise. I believe I have solved the first two parts. What do I do in the case of $f$ being negative? I cannot use the change of variables formula, or Tonelli for this. I assume I have to split the function into its positive and negative parts. How can I apply these theorems which are defined for a nonnegative function to a negative function?
Suppose that $X$ and $Y$ are independent and that $f(x, y)$ is nonnegative. Put $g(x)=E[f(x, Y)]$ and show that $E[g(X)]=E[f(X, Y)]$. Show more generally that $\int_{X \in A} g(X) d P=\int_{X \in A} f(X, Y) d P$. Extend to $f$ that may be negative.
Let $\mu$ be the distribution of $X$ and $\nu$ the distribution of $Y$. Since $X$ and $Y$ are independent, $\pi = (\nu \times \mu)$
\begin{align*}
E[g(X)]\\
=\int_{\Omega} g(X) d P & \qquad \text{Definition of expectation}\\
=\int_{\mathbb{R}} g(x) \mu(dx) & \qquad \text{Lebesgue's change of variables formula}\\
=\int_{-\infty}^{\infty} E[f(x, Y)]\, \mu(dx) & \qquad \text{ Definition of $g(x)$}
\end{align*}
Now, expanding the inner expectation,
\begin{align*}
=\int_{-\infty}^{\infty} \int_{\Omega} f(x, Y) d P \mu(dx) & \qquad \text{Definition of expectation} \\
=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} f(x, y) \nu(dy)\mu(dx) & \qquad \text{Lebesgue's change of variables}\\
=\int_{\mathbb{R} \times \mathbb{R}} f(x,y) \pi(d(y,x)) & \qquad \text{Tonelli theorem (similar to Fubini but for $L^+(\Omega)$) }\\
=\int_{\Omega} f(X, Y) d P & \qquad \text{Lebesgue's change of variables}\\
=E[f(X, Y)]
\end{align*}
\begin{align*}
\int_{X \in A} g(X) d P\\
=\int_{A} g(x) \mu(dx) & \qquad \text{Lebesgue's change of variables}\\
=\int_{A} E[f(x, Y)]\, \mu(dx) & \qquad \text{Definition of $g(x)$}\\
=\int_{A} \int_{\mathbb{R}} f(x, y) \nu(dy) \mu(dx) & \qquad \text{Change of variables }\\
=\int_{A \times \mathbb{R}} f(x,y) \pi(d(y,x)) & \qquad \text{Tonelli theorem}\\
=\int_{\left\{X \in A,\, Y \in \mathbb{R}\right\}} f(X, Y) d P & \qquad \text{Change of variables}\\
=\int_{X \in A} f(X, Y) d P
\end{align*}
| If the functions $f$ and $g$ are such that for each $x\in\mathbb R$, $f(x,Y)$, $f(X,Y)$, $g(X)$ are all integrable but possibly take negative values, then either the Fubini-Tonelli theorem or writing $f = f^+ - f^-$, $g = g^+ - g^-$ and using linearity together with a repetition of the argument you used for nonnegative functions provides the extension to functions that possibly take on negative values.
Added: Strictly speaking, we do not need to assume the functions are integrable, but we do require something just short of that. To fix ideas, let's just look at $g(X)$. By all accounts, we should be able to say
$$
\int g(X) = \int g^+(X) - \int g^-(X),
$$
but the point is that the quantity $\int g(X)$ is only defined when at least one of $\int g^+(X)<\infty$, $\int g^-(X)<\infty$ is assumed, to avoid having to assign values to expressions of the form $\infty - \infty$. Often, we do not work at this level of generality (though we surely still have occasion to), and instead we assume $g(X)$ is (absolutely Lebesgue) integrable in that we assume $\int |g(X)|<\infty$. Assuming absolute integrability, we may appeal to the "Fubini" part of the Fubini-Tonelli theorem, or we can use linearity just as well.
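A Monte Carlo illustration of the identity $E[g(X)]=E[f(X,Y)]$, with an arbitrarily chosen $f(x,y)=x\,y^2$ (so that $g(x)=x\,E[Y^2]$); this is only a numerical sketch:

```python
import random
import statistics

random.seed(1)

f = lambda x, y: x * y * y

xs = [random.uniform(0, 1) for _ in range(100_000)]   # X ~ U(0,1)
ys = [random.gauss(0, 1) for _ in range(100_000)]     # Y ~ N(0,1), independent of X

Ey2 = statistics.fmean(y * y for y in ys)
g = lambda x: x * Ey2                                  # g(x) = E[f(x, Y)]

lhs = statistics.fmean(g(x) for x in xs)               # E[g(X)]
rhs = statistics.fmean(f(x, y) for x, y in zip(xs, ys))  # E[f(X, Y)]
assert abs(lhs - rhs) < 0.01                           # both ≈ E[X]·E[Y^2] = 0.5
```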
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Current value of option risk free rate=$r$
volatility of stock price=$\sigma$
continuous dividend rate=$q$
$a>0,K>0$
If your stock price S becomes below $K$ at maturity T,
the option A pays you $aS_T$.
Otherwise, this option pays you zero.
I have to prove that the curret value($v_0$) of this option A is
$v_0=aS_0e^{-qT}\phi(-d)$
where $d=\frac{\ln\frac{S_0}{K}+(r-q+\sigma^2/2)T}{\sigma\sqrt{T}}$ and
$\phi$ is the cumulative distribution function of standard normal distribution.
I learned Black-Scholes formula but I can't even figure out how to start.
Any suggestions please?
| If you know how to derive the Black–Scholes formula, you're halfway there. The basic principle of mathematical finance is that the price of a contingent claim is just the expectation of its discounted payoff, i.e. $\mathbb{E}[e^{-rT}\,\text{Payoff}]$.
Here $\text{Payoff} = aS_T \mathbb{I}_{\{S_T < K\}}$, so we have to compute the following, where $z = \ln(S_T/S_0) \sim N(mT,\sigma^2 T)$ and $m = r-q-\frac{\sigma^2}{2}$:
$$
\begin{align}
\mathbb{E}[e^{-rT} aS_T \mathbb{I}_{\{S_T < K\}}] &= ae^{-rT} \int_{- \infty}^{+\infty}\frac{S_0 e^{z}e^{-\frac{(z-mT)^2}{2 \sigma^2 T}}}{\sqrt{2 \pi \sigma^2 T}} \mathbb{I}_{\{S_0 e^z < K\}}dz\\
&= \text{Substitute} \; y=\frac{z-mT}{\sigma \sqrt{T}} \; \text{and absorb the indicator into the upper limit} \\
&= aS_0e^{-rT}\int_{-\infty}^{\frac{\ln \frac{K}{S_0}-mT}{\sigma \sqrt{T}}}e^{mT}\frac{e^{-\frac{1}{2}(y^2 -2\sigma \sqrt{T} y)}}{\sqrt{2 \pi}}dy \\
&= \text{Complete the square in the exponent} \\
&= aS_0e^{-rT} e^{mT}\int_{-\infty}^{\frac{\ln \frac{K}{S_0}-mT}{\sigma \sqrt{T}}}e^{\frac{\sigma^2T}{2}}\frac{e^{-\frac{1}{2}(y-\sigma \sqrt{T})^2}}{\sqrt{2 \pi}}dy \\
&= \text{Substitute} \; t = y-\sigma \sqrt{T} \\
&= aS_0e^{-rT} e^{mT}\int_{-\infty}^{-d}\frac{e^{-\frac{t^2}{2}}}{\sqrt{2 \pi}}dt \\
&= \text{Simplify the exponentials and notice the normal CDF} \\
&= aS_0 e^{-qT} \Phi(-d)
\end{align}
$$
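The closed form can be cross-checked against a plain Monte Carlo simulation of the lognormal terminal price; the parameter values below are arbitrary examples:

```python
import math
import random

random.seed(7)

def closed_form(S0, K, r, q, sigma, T, a):
    d = (math.log(S0 / K) + (r - q + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return a * S0 * math.exp(-q * T) * Phi(-d)

def monte_carlo(S0, K, r, q, sigma, T, a, paths=300_000):
    m = (r - q - sigma**2 / 2) * T
    total = 0.0
    for _ in range(paths):
        ST = S0 * math.exp(m + sigma * math.sqrt(T) * random.gauss(0, 1))
        if ST < K:
            total += a * ST          # payoff a*S_T only when S_T < K
    return math.exp(-r * T) * total / paths

params = dict(S0=100, K=95, r=0.05, q=0.02, sigma=0.3, T=1.0, a=1.0)
cf = closed_form(**params)
mc = monte_carlo(**params)
assert abs(mc - cf) / cf < 0.02      # agree within Monte Carlo noise
```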
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can this probability question be solved using $3$D cube volume?
A dice is rolled $3$ times. Find probability of getting a larger number than the previous number each time.
The answer in the book is : $^6C_3/6^3=20/216$.
But I am thinking of solving it using geometry. Had the question been of $2$ dice rolls, I would have drawn a $6\times6$ grid of possible outcomes and found area of upper right triangle. Since it is for $3$ trials, can we draw a $6\times6\times6$ cube and then find the volume to find the probability?
|
Kindly, consider a $6\times6\times6$ cube.
$\qquad\qquad(661)(662)\dots(666)$
$\qquad\qquad\dots$
$\quad\quad(631)(632)\dots(636)$
$\quad(621)(622)\dots(626)$
$(611)(612)\dots(616)$
$\vdots\qquad\qquad\qquad\vdots$
$\vdots\qquad\qquad\qquad\vdots$
$(211)(212)\dots(216)$
$(111)(112)\dots(116)$
Suppose you number the cube in above fashion.
Now you'll definitely observe a pattern out of it.
Since you are interested in points of form $(a,b,c)$ where $a<b<c$, following pattern/figure will emerge:
You'll note the following:
*
*It was fun but never again!
*Isn't there an easier/smarter method to solve such questions?
Thus, you'll end up earning for and praising the combinatorics method/logic/approach.
There are anyways a lot of limitations with the cube method:
*
*What if 4 dice are rolled!
*Takes a lot of time to create a mental image of the shape that will come about.
*How will you calculate the area!
*What if there's not really any pattern at all!
For the above reasons we don't prefer to solve using geometry.
Every method has certain applications where it suits best.
I'm not saying you shouldn't try solving it using geometry; you should, for fun's sake. That's what makes maths beautiful, and you'll surely end up learning something useful.
To have some fun, you may visit the $3$D cube here: https://www.geogebra.org/calculator/ennsywtt
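Whatever geometric picture one prefers, the count itself is a three-line brute force over the $6\times6\times6$ "cube" of outcomes:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))        # all 6^3 ordered rolls
favourable = [t for t in outcomes if t[0] < t[1] < t[2]]

assert len(outcomes) == 216
assert len(favourable) == 20                           # = C(6, 3)
```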
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How good is the following method for dividing a set of numbers into two subsets with sums that are as close as possible? The problem is part of the NP group, but the following simple method seems to work pretty well. Sort the list in descending order. Then remove the numbers one at a time and insert into the subset that has the smaller sum at the time of insertion.
The difference between the two sums will definitely be no greater than the maximum of the original set. In executing the algorithm, each time that the sum of one subset overtakes the other, the difference between the sums is guaranteed to be no greater than the last number that was placed.
It is easy to find cases where this algorithm does not give the minimum sum difference, but even in those cases it works pretty well. I tried it for prime numbers, squares, cubes, Fibonacci numbers and pseudorandom numbers from 1 to 1,000,000, and in each case got a sum difference that was small compared to the total sum of the original set. The only way I can imagine the algorithm not doing well is if the numbers are far apart and there is just one way to arrange them so that the two sums are close. Is there a way of determining a worst-case scenario?
| This algorithm is known as longest processing time first scheduling and is known to have an approximation ratio of $\frac76$ in the two-partition case, like here: the largest part in the partition obtained by this algorithm will be at most $\frac76$ of the size of the largest part in the optimal partition.
Said approximation ratio is met by the input $\{3,3,2,2,2\}$, where the algorithm outputs $\{3,2,2\},\{3,2\}$ with largest part $7$ but the optimal partition is $\{3,3\},\{2,2,2\}$ with largest part $6$.
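A sketch of the greedy heuristic, together with a brute-force optimum, reproduces this worst-case example:

```python
from itertools import combinations

def greedy_diff(nums):
    """Sort descending; put each number in the currently lighter subset.
    Returns |sum(A) - sum(B)|."""
    a = b = 0
    for x in sorted(nums, reverse=True):
        if a <= b:
            a += x
        else:
            b += x
    return abs(a - b)

def optimal_diff(nums):
    """Brute force over all subsets (fine for small inputs)."""
    total = sum(nums)
    best = total
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            best = min(best, abs(total - 2 * sum(subset)))
    return best

tight = [3, 3, 2, 2, 2]
assert greedy_diff(tight) == 2     # greedy parts: {3,2,2} and {3,2}
assert optimal_diff(tight) == 0    # optimal parts: {3,3} and {2,2,2}
```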
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
There are infinitely many primes that divide $2^{n^3+1}-3^{n^2+1}+5^{n+1}$? $$q(n)=2^{n^3+1}-3^{n^2+1}+5^{n+1}$$
Claim: If $p>2$ divides $q(n)$ then $p$ not divides $q(n+k)$, for $k>0$
If $p\mid q(n)$, then $$p\mid q(n)\,(5^{k})(3^{k^2+2nk})(2^{k^3+3n^2k+3nk^2})=2^{(n+k)^3+1}(a)-3^{(n+k)^2+1}(b)+5^{(n+k)+1}(c)$$
But we supposed that it's true for $a=b=c=1$, instead $a,b,c>1$
Since a prime that divides $q(n)$ does not divide any $q(n + k)$, we conclude that for every $q(n)$ there is always a different prime. So there are infinitely many primes that divide $q(n)$
Do you know if the proof is correct? I am not very sure
| For $p$ prime and $n \in \mathbb{Z}$, let $M(p,n)$ be the largest integer so that $p^{M(p,n)}$ divides $n$.
Suppose that only finitely many primes divide the values of $q$, namely $z_{1},\dots,z_{m}$. Set $Z= \{z_{1},\dots,z_{m}\}$. Note that some subset $H \subseteq Z$ is "maximal" in the sense that $\exists n_{H} \in \mathbb{N}$ so that $\left(\prod_{w \in H} w\right) \mid q(n_{H})$ and we have
$$H \subsetneq G \subseteq Z \;\Rightarrow\; \forall z \in \mathbb{N}\quad \prod_{w \in G}w \nmid q(z) $$
Thus for all $\beta \in \mathbb{N}$, $q(n_{H} + \beta \phi(z_1\cdots z_m))$ is a product of primes in $H$. Moreover, for $\beta$ sufficiently large we have $M(h, q(n_{H} + \beta \phi(z_1\cdots z_m))) = M(h, q(n_{H}))$ whenever $h \in H$ (here we take into consideration the cases where $h$ is $2$, $3$ or $5$ and apply Euler's theorem). Thus we have $q(n_{H} + \beta\phi(z_1\cdots z_m)) \leq q(n_{H})$ for all $\beta$ sufficiently large, which is impossible since $q(n)\to\infty$.
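An empirical check (not a proof) that the prime divisors of $q(n)$ keep accumulating: testing divisibility modulo each small prime via modular exponentiation, with arbitrary cutoffs $n\le 30$ and $p<1000$:

```python
def q_mod(n, p):
    """q(n) = 2^(n^3+1) - 3^(n^2+1) + 5^(n+1), reduced mod p."""
    return (pow(2, n**3 + 1, p) - pow(3, n**2 + 1, p) + pow(5, n + 1, p)) % p

def small_primes(limit):
    return [p for p in range(2, limit)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

P = small_primes(1000)
found = set()
for n in range(1, 31):
    for p in P:
        if q_mod(n, p) == 0:
            found.add(p)

# q(1) = 20 = 2^2 * 5, q(2) = 394 = 2 * 197, q(3) is divisible by 7, ...
assert {2, 5, 7, 197}.issubset(found)
```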
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Can we solve $\frac{dx}{dt} = 3x+3t$ by "multiplying by $dt$"? In preparation for ODE, I was just doing some practice problems when I came across this.
$$\frac{dx}{dt} = 3x+3t$$
Although I looked at the solution, where it states to use an integrating factor, is there anything inherently wrong with simply multiplying by $dt$ on both sides, taking the integral of both sides, and then isolating $x$? I would appreciate any clarity on this matter. Thanks so much.
|
$dx/dt = 3x+3t$
No, you can't isolate them, because if you multiply $dt$ on both sides and integrate, then you get:
$$\int dx=\int 3xdt+\int3tdt$$
How do you deal with the first term on the RHS?
What you can do is either use the integrating factor method, or use a substitution: let $u=x+t$, then we have
$$\frac{dx}{dt}=\frac{du}{dt}-1$$
the new differential equation becomes:
$$\frac{du}{dt}-1=3u$$
Now, you can separate them and integrate:
$$\int \frac{du}{3u+1}=\int dt$$
Can you proceed from here?
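For completeness, carrying out the last integration gives $\tfrac{1}{3}\ln|3u+1| = t + C$, i.e. $u=\tfrac{1}{3}(Ke^{3t}-1)$ and so $x=\tfrac{1}{3}(Ke^{3t}-1)-t$. A quick numerical check (Python; the constant $K$ below is an arbitrary choice) confirms this satisfies the original equation:

```python
import math

K = 2.0   # arbitrary integration constant, chosen just for the check

def x(t):
    # x = u - t with u = (K e^{3t} - 1) / 3
    return (K * math.exp(3 * t) - 1) / 3 - t

h = 1e-6
for t in (0.0, 0.5, 1.0):
    dxdt = (x(t + h) - x(t - h)) / (2 * h)   # central finite difference
    assert abs(dxdt - (3 * x(t) + 3 * t)) < 1e-4   # dx/dt = 3x + 3t holds
```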
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4514839",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
$\int^{\infty}_{0} \frac{e^{-\alpha x}}{x^{\beta}} dx < \infty$, where $\alpha > 0$ and $ 0< \beta < 1$. Show that $\frac{e^{-\alpha x}}{x^{\beta}} \in L^{1}(0, \infty)$, where $\alpha > 0$ and $0 < \beta < 1$.
My idea is to use the fact that
$$
\int^{\infty}_{0} \frac{e^{-\alpha x}}{x^{\beta}} dx = \int^{1}_{0} \frac{e^{-\alpha x}}{x^{\beta}} dx + \int^{\infty}_{1} \frac{e^{-\alpha x}}{x^{\beta}} dx
$$
I know
$$
\int^{\infty}_{1} \frac{e^{-\alpha x}}{x^{\beta}} dx < \int^{\infty}_{1} e^{-\alpha x} dx < \infty
$$
My problem is in showing that
$$
\int^{1}_{0} \frac{e^{-\alpha x}}{x^{\beta}} dx < \infty.
$$
| You have to use that $\beta\in(0,1)$: If you integrate by parts once, the $x^\beta$ disappears from your denominator:
$$\int\limits_0^1 \frac{e^{-\alpha x}}{x^\beta}dx = \frac{1}{1-\beta}\left([x^{1-\beta}e^{-\alpha x}]_0^1 +\alpha \int\limits_0^1 x^{1-\beta}e^{-\alpha x}dx\right)$$
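One can also see the convergence near $0$ numerically (illustration only, for the sample values $\alpha=1$, $\beta=1/2$): after the substitution $x=s^{1/(1-\beta)}$ the integrand on $(0,1)$ becomes bounded, so a plain midpoint rule converges, and the value stays below $\int_0^1 x^{-\beta}\,dx=\frac{1}{1-\beta}$:

```python
import math

a, b = 1.0, 0.5   # sample values of alpha and beta

def midpoint_integral(n):
    # After x = s**(1/(1-b)):  int_0^1 x^{-b} e^{-a x} dx
    #   = (1/(1-b)) * int_0^1 exp(-a * s**(1/(1-b))) ds  -- bounded integrand.
    h = 1.0 / n
    total = sum(math.exp(-a * ((i + 0.5) * h) ** (1 / (1 - b))) for i in range(n))
    return total * h / (1 - b)

vals = [midpoint_integral(n) for n in (1000, 2000, 4000)]
assert abs(vals[-1] - vals[-2]) < 1e-6   # the midpoint values stabilise
assert vals[-1] <= 1 / (1 - b)           # bounded by int_0^1 x^{-b} dx
```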
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Linearly ordered, non CT-groups A linearly ordered group is a group $(G,\cdot,1)$ together with a linear ordering $<$ on $G$ with $f<g \Longrightarrow (f h<g h$ and $h f < h g)$ for all $f,g,h \in G$.
A commutative-transitive group (or CT-group) is a group $(G,\cdot,1)$ in which any two elements that commute with a third, non-trivial one, commute with each other. Equivalently, this is a group in which centralizers are commutative.
I'm having a hard time finding linearly ordered groups whose underlying group is not a CT-group, but I see no reason why any linearly ordered group should be a CT-group. Are there ways to construct linearly ordered non CT-groups?
| Such groups are often called "bi-orderable".
Torsion-free nilpotent groups are all bi-orderable.
But a non-abelian nilpotent group is never commutative-transitive.
Hence, every non-abelian torsion-free nilpotent group yields an example (bi-orderable, not commutative-transitive).
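A concrete instance is the discrete Heisenberg group (upper unitriangular $3\times 3$ integer matrices), which is torsion-free, nilpotent and non-abelian. The quick check below (Python) exhibits the failure of commutative transitivity there: $x$ and $y$ both commute with the non-trivial central element $z$, yet not with each other.

```python
def matmul(A, B):
    # 3x3 integer matrix product, on tuples so results compare with ==
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3))
                 for i in range(3))

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
x = ((1, 1, 0), (0, 1, 0), (0, 0, 1))   # standard generators of the
y = ((1, 0, 0), (0, 1, 1), (0, 0, 1))   # discrete Heisenberg group
z = ((1, 0, 1), (0, 1, 0), (0, 0, 1))   # non-trivial central element

assert matmul(x, z) == matmul(z, x)     # x commutes with z
assert matmul(y, z) == matmul(z, y)     # y commutes with z
assert z != I                           # z is non-trivial
assert matmul(x, y) != matmul(y, x)     # yet x and y do not commute
```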
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are there positive solutions to $x^q- a = 0$ other than $\sqrt[q]{a}?$ I'm reading a book that says the following (translated):
"Before we proceed, let us remember that, given a real number $a > 0$ and a integer $q > 0$, the symbol $\sqrt[q]{a}$ represents a positive real number such that its $q$-power equals to $a$, that is to say it's the only positive solution of $x^q-a=0$".
My whole problem is to show that there is one, and only one, positive real root to $x^q-a=0$. In the case $x^2 - a = 0$ I already couldn't go further. Any ideas? Thanks in advance.
| Suppose that we know there is a positive number $\sqrt[q]{a}$ such that $(\sqrt[q]{a})^q-a=0$, but we don't know yet if there is a different positive number $x$ such that $x^q-a=0$. Using the identity
$$
w^n-z^n=(w-z)(w^{n-1}+w^{n-2}z+\dots+z^{n-1})
$$
we see that the equation $x^q-a=0$ is equivalent to
$$
\left(x-\sqrt[q]{a}\right)\left(x^{q-1}+x^{q-2}(\sqrt[q]{a})+\dots+(\sqrt[q]{a})^{q-1}\right)=0 \, .
$$
One of the bracketed terms on the LHS must be equal to zero for the above equality to hold. Since the second bracketed expression is positive, we must have $x-\sqrt[q]{a}=0$, and so $x=\sqrt[q]{a}$.
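Existence of the root can be made concrete by bisection: $x\mapsto x^q-a$ is negative at $0$ and non-negative at $\max(1,a)$, so repeatedly halving that bracket converges to the unique positive root. A small sketch (Python, with arbitrarily chosen sample values $a=5$, $q=3$):

```python
a, q = 5.0, 3   # arbitrary sample values

def f(x):
    return x ** q - a

lo, hi = 0.0, max(1.0, a)   # f(lo) = -a < 0 and f(hi) >= 0
for _ in range(100):        # bisection: halve the bracketing interval
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
assert abs(root ** q - a) < 1e-9        # it really is a root
assert abs(root - a ** (1 / q)) < 1e-12  # and it matches the q-th root
```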
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
how to understand this new way to find a matrix under a basis change? I just found this exercise using a new shortcut to find the matrix under a new basis given the old matrix; please enlighten me why this shortcut works:
The Exercise: relative to the standard unit vectors $e_1$, $e_2$, a linear map $A:\mathbb{R}^2 \to \mathbb{R}^2$ is described by the matrix $ A=\begin{pmatrix}
1 & 0 \\
0 & -1 \\
\end{pmatrix}$. Find the representing matrix $A'$ for this linear map relative to the basis $v'_1 = \begin{pmatrix} \frac{1}{\sqrt2} \\ -\frac{1}{\sqrt2} \\ \end{pmatrix}$, $v'_2 = \begin{pmatrix} \frac{1}{\sqrt2} \\ \frac{1}{\sqrt2} \\ \end{pmatrix}$
Instead of the traditional $A'=PAP^{-1}$ way, the answer gives a new shortcut:
compute the images of the basis vectors and express them in the $v'_i$; then the coefficients give the columns of $A'$.
$$Av'_1 = 0v'_1 + 1v'_2 , \ \ Av'_2 = 1v'_1 + 0v'_2$$
so $ A' = \begin{pmatrix}
0 & 1 \\
1 & 0 \\
\end{pmatrix}$
note1: I feel this shortcut only works when one of the bases is the standard unit vectors.
note2: the book is using $[A]^e_e$ multiplied by $[v'_i]_e$; the result is $[Av'_i]_e$ in space 2 of the diagram below. My confusion is why expressing them using $[v'_i]_e$ will give the matrix $[A']^{v'}_{v'}$, which should be obtained by expressing $[A'v'_i]_{v'}$ using $[v'_i]_{v'}$, relating spaces 3, 4 of the diagram.
$\require{AMScd}$
\begin{CD}
1\ \mathbb{R}^2 \text{ using basis } e @>A>>2\ \mathbb{R}^2 \text{ using basis } e\\
@V P=[I]^e_{v'} VV @VV P V\\
3\ \mathbb{R}^2 \text{ using basis } v' @>>A'>4\ \mathbb{R}^2 \text{ using basis } v'
\end{CD}
| Maybe reading *Linear Algebra Done Right*, chapter 3, is helpful. In this problem the linear transformation is known; a matrix is only a way to represent it. To find its matrix under the basis $v_{1}^{\prime}, v_{2}^{\prime}$, the only thing we need is how the transformation acts on $v_{1}^{\prime}, v_{2}^{\prime}$. Compute $Av_{1}^{\prime}, Av_{2}^{\prime}$ and rewrite them as linear combinations of $v_{1}^{\prime}, v_{2}^{\prime}$; by the definition of the matrix of a linear map, the coefficients give the columns.
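To see that the shortcut agrees with the change-of-basis formula, here is a small numerical check (Python; $P$ has the $v_i'$ as columns and happens to be orthogonal here, so $P^{-1}=P^{T}$):

```python
import math

s = 1 / math.sqrt(2)
A = [[1, 0], [0, -1]]
P = [[s, s], [-s, s]]          # columns of P are v1' and v2'

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P_inv = [[P[j][i] for j in range(2)] for i in range(2)]   # P^{-1} = P^T
A_new = matmul(P_inv, matmul(A, P))

expected = [[0, 1], [1, 0]]    # the matrix found by the shortcut
for i in range(2):
    for j in range(2):
        assert abs(A_new[i][j] - expected[i][j]) < 1e-12
```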
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proof of Implicit Differentiation (showing a statement is true)
Prove that if $x^2+y^2-2y\sqrt{1+x^2} = 0$, then $dy/dx = x/\sqrt{1+x^2}$.
Whilst I have implicitly differentiated in terms of x in order to derive that
$$dy/dx = (-x+2xy/\sqrt{1+x^2})/(y\sqrt{1+x^2}-1-x^2)$$
however I am unsure as to what my next steps are. I believe that I will need to rearrange the original supposition in order to achieve the proof however I do require some help as to how can I do this as I cannot see what would possibly cancel out.
Additionally, does anyone have any particularly handy methods/techniques to be able to complete proofs of this format of question more easily? I do understand that there is not one singular technique that can be used when completing these proofs, but is there any strategy that minimizes the number of dead-ends that I come across in my working?
Thanks
| A simpler way: the identity
$$x^2+y^2-2y\sqrt{1+x^2} = 0$$
is equivalent to
$$y^2+(1+x^2)-2y\sqrt{1+x^2} = 1$$
that is
$$(y-\sqrt{1+x^2})^2=1$$
and differentiating in terms of $x$ both sides, we obtain
$$2(y-\sqrt{1+x^2})\left(y'-\frac{x}{\sqrt{1+x^2}}\right)=0.$$
The first factor is always different from zero: in fact letting $y=\sqrt{1+x^2}$ into the given equation we get
$$x^2+(1+x^2)-2(1+x^2) = 0$$
that is $-1=0$ which is impossible. Therefore the second factor has to be zero, and we find
$$y'=\frac{x}{\sqrt{1+x^2}}.$$
Preliminary step. Notice that by the two dimensional implicit function theorem, $y'(x)$ exists (and therefore we are allowed to take the derivative in terms of $x$) if
$$\frac{\partial F}{\partial y}=2(y-\sqrt{1+x^2})\not=0$$
where $F(x,y)=x^2+y^2-2y\sqrt{1+x^2}$, which holds as shown above.
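As a numerical cross-check (Python): the relation forces $y=\sqrt{1+x^2}\pm 1$, and on either branch the slope obtained by a finite difference matches $x/\sqrt{1+x^2}$:

```python
import math

h = 1e-6
for sign in (1, -1):                    # the two branches y = sqrt(1+x^2) +- 1
    y = lambda x: math.sqrt(1 + x * x) + sign
    for x0 in (-2.0, 0.3, 1.7):
        # the branch satisfies the original relation ...
        assert abs(x0 ** 2 + y(x0) ** 2
                   - 2 * y(x0) * math.sqrt(1 + x0 ** 2)) < 1e-9
        # ... and its slope matches x / sqrt(1 + x^2)
        slope = (y(x0 + h) - y(x0 - h)) / (2 * h)
        assert abs(slope - x0 / math.sqrt(1 + x0 ** 2)) < 1e-4
```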
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Derivative of an implicit trigonometric function I was doing my homework, which included finding the derivative of an implicit function. One of them was
$$3\sin(xy)+ 4\cos(xy) = 5$$
On first attempt I did this as follows
$$\frac{d}{dx}(3\sin(xy)+ 4\cos(xy)) = \frac{d}{dx}(5)$$
$$[3\cos(xy)-4\sin(xy)][y+x\frac{dy}{dx}] = 0$$
$$\frac{dy}{dx}= \frac{-y}{x}$$
It matched the answer and I was happy. I don't know why but I wrote the original expression as this
$$\frac{3}{5}\sin(xy)+ \frac{4}{5}\cos(xy) = 1$$
$$\cos(\alpha)\sin(xy)+ \sin(\alpha)\cos(xy) = 1$$
$$\sin(xy +\alpha)=1 \quad \quad \textrm{where} \quad\alpha = \arccos(\frac{3}{5})$$
Again, from step 2 of finding the derivative,
$$[3\cos(xy)-4\sin(xy)][y+x\frac{dy}{dx}] = 0$$
$$[\frac{3}{5}\cos(xy)-\frac{4}{5}\sin(xy)][y+x\frac{dy}{dx}] = 0$$
$$[\cos(\alpha)\cos(xy)-\sin(\alpha)\sin(xy)][y+x\frac{dy}{dx}] = 0$$
$$[\cos(xy +\alpha)][y+x\frac{dy}{dx}] = 0$$
Since $\sin(xy +\alpha) =1 \implies \cos(xy+\alpha)=0$, what I did in going from step 2 to step 3 of finding the derivative was dividing by zero, which is not valid. So I cannot actually conclude that the derivative is $\frac{-y}{x}$. But then why is the answer right (Wolfram Alpha says this too)? Is there any other way of finding this derivative?
Thank you in advance.
| Your problem is not suited for a direct application of the IFT. The function $f(x,y)=3\sin(xy) + 4\cos(xy) -5$ is smooth, but if $f$ has a zero at $P(x_0,y_0)$ then, as shown by your calculation, $\partial_x f_{|P}=
\partial_y f_{|P}=0$, so a priori you can not apply the IFT.
The function $f(x,y)=g(xy)=5 (\cos(xy+\alpha)-1)$ has isolated zeros when $xy+\alpha = 2\pi k$, $k\in {\Bbb Z}$, which leads to the wanted solution for the derivative, but $g$ has zero derivative at the solution. If you replace the constant 5 on the RHS of your equation by any $c\in (-5,5)$, you do not run into this problem and the IFT works fine.
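For a concrete view (Python): on a solution branch the product $xy$ is constant — e.g. $c=\pi/2-\arccos(3/5)$ satisfies $3\sin c+4\cos c=5$ — and along $y=c/x$ the slope is indeed $-c/x^{2}=-y/x$, matching the formal answer:

```python
import math

c = math.pi / 2 - math.acos(3 / 5)      # xy + arccos(3/5) = pi/2 on this branch
assert abs(3 * math.sin(c) + 4 * math.cos(c) - 5) < 1e-12

h = 1e-6
y = lambda x: c / x                     # the solution curve xy = c
for x0 in (0.5, 1.0, 3.0):
    # the curve satisfies the original equation ...
    assert abs(3 * math.sin(x0 * y(x0)) + 4 * math.cos(x0 * y(x0)) - 5) < 1e-12
    # ... and its slope agrees with -y/x
    slope = (y(x0 + h) - y(x0 - h)) / (2 * h)
    assert abs(slope - (-y(x0) / x0)) < 1e-4
```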
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Proving some $x$ with $f'''(x)≥3$ exists, given $f(-1)=0$, $f(0)=0$, $f(1)=1$, $f'(0)=0$ Let $f:[-1,1]\to\mathbb{R}$ be $C^3$ with $f(-1)=0$, $f(0)=0$, $f(1)=1$, and $f'(0)=0$. I need to show that there exists a point $x\in(-1,1)$ such that $f'''(x)≥3$. I would like some feedback on my solution. I am also curious if it's possible to use concavity from supposing $g'''<0$ to find another proof.
First, we apply MVT thrice to find points $-1<a_1<a_2<a_3<0$ such that
$$f'(a_1)=f(0)-f(-1)=0=\frac{f'(0)-f'(a_1)}{-a_1}=f''(a_2) $$
and
$$f'''(a_3)=\frac{f''(0)-f''(a_2)}{-a_2}=\frac{f''(0)}{-a_2}.$$
Now let $g(x)=f(x)-\frac{1}{2}x^2(x+1)$ so $g''(x)=f''(x)-3x-1$. Repeating the same process as above for $g$, we find points $0<b_2<b_1<1$ such that
$$g'(b_1)=g(1)-g(0)=0 \implies g''(b_2)=g'(b_1)-g'(0)=0.$$
Suppose now for the sake of contradiction that $f'''(x)<3$ for all $x\in(-1,1)$. Then $f''(0)<-3a_2$ and by MVT, there is some point in $(a_2,b_2)$ for which $g''(b_2)-g''(a_2)<3(b_2-a_2)$ so $-(-f''(a_2)+3a_2+1)>3(b_2-a_2)>3a_2$ and thus $a_2<-\frac{1}{6}$. It follows that $f''(0)<\frac{1}{2}$, which means $g''(0)=f''(0)-1<0$. Therefore, by MVT, there is a point $c\in(0,b_2)\subset(-1,1)$ such that
$$g'''(c)=f'''(c)-3=\frac{g''(b_2)-g''(0)}{b_2}>0\implies f'''(c)>3.$$
| This isn't true. Consider the function:
$$f(x) = \frac{1}{2} x(x+1)$$
Observe that $f(-1) = 0$, $f(0) = 0$ and $f(1) = 1$. On the other hand, $f'''(x) = 0$ for all $x \in (-1,1)$.
Edit:
Hmm there seems to be a problem somewhere in the middle. You write:
$$g''(b_2)-g''(a_2) = -g''(a_2) < 3(b_2-a_2)$$
But then you write:
$$-(-g''(a_2))> 3(b_2-a_2) > 3a_2$$
right after that and I'm not entirely sure how this follows. See, we could have that $a_2 = -\frac{1}{2}$, $b_2 = \frac{1}{2}$. Then, $3(b_2-a_2) = 3$ and $-(3a_2+1) = \frac{1}{2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What $\int_0^a x^ne^xdx$ looks like? Given the integral $$\int_0^ax^ne^xdx$$
I wanted to know what it looks like for an integer $n$ and a positive $a$. After evaluating some values with mathematica, I believe it looks like $e^ap_n(a)+c_n$, where $p_n(a)$ is a polynomial with degree $n$ in $a$, and $c_n$ is an integer. Is this correct?
| \begin{align}
\int_0^a x^ne^xdx
=&\ \frac{d^n}{dt^n}\left(\int_0^ae^{t x}dx\right)_{t=1}
= \frac{d^n}{dt^n}\left(\frac{e^{ta}-1}t\right)_{t=1}\\
=& \ (-1)^n n! \bigg(e^a\sum_{k=0}^n \frac{(-a)^k}{k!} - 1\bigg)
\end{align}
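This indeed has the conjectured shape $e^a p_n(a)+c_n$, with $p_n(a)=(-1)^n n!\sum_{k=0}^n \frac{(-a)^k}{k!}$ and $c_n=(-1)^{n+1}n!$. A quick numerical confirmation of the closed form (Python, midpoint rule for the integral; sample values only):

```python
import math

def closed_form(n, a):
    # (-1)^n n! ( e^a * sum_{k=0}^n (-a)^k / k!  -  1 )
    s = sum((-a) ** k / math.factorial(k) for k in range(n + 1))
    return (-1) ** n * math.factorial(n) * (math.exp(a) * s - 1)

def midpoint(n, a, steps=100_000):
    # midpoint-rule approximation of int_0^a x^n e^x dx
    h = a / steps
    return h * sum(((i + 0.5) * h) ** n * math.exp((i + 0.5) * h)
                   for i in range(steps))

for n, a in [(1, 1.0), (2, 0.5), (3, 2.0)]:
    assert abs(closed_form(n, a) - midpoint(n, a)) < 1e-6
```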
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4515891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |