H: Understanding Cantor diagonalisation in binary
Until now I have studied and understood Cantor's proof. My problem comes when looking at binary representation:
integer binary representation encoding for diag proof
1 1 10000...
2 10 01000...
3 11 11000...
4 100 00100...
5 101 10100...
So far so good. We have now represented all integers uniquely by their binary representation.
Now let's apply the diagonalisation and generate the number K:
K = 0011111111111111....
I know that after the first two digits, every digit of $K$ will be $1$, because along the diagonal we move right faster than the leading $1$'s of the representations do.
So according to Cantor, this binary number will not be present in our enumeration, so what is this number?
K = 0*1 + 0*2 + 1*4 + 1*8 + ...
K = 4 * (1 + 2 + 4 + ... )
K = 4 * (3 + K)
K = -4
Clearly something funny is happening here, I have two questions:
Has this demonstrated a binary number not present in our enumeration? If not, why not?
How have we enumerated a negative binary number?
AI: $K$ is not a real number, since $\sum_{n=2}^\infty 2^n$ does not converge. So no, you did not find an integer which is missing from your enumeration. Your proof for $K=-4$ is similar to the "proof" that $\sum_{n=1}^\infty n=-\frac{1}{12}$ in that both apply facts about convergent series to a non-convergent series.
Usually if you write down this kind of diagonal encoding you're trying to prove that the reals are uncountable. So you should encode the reals (or at least a nicely encodable subset like $[0,1)$), not the integers. If you do that, the string $a_1a_2a_3\dots$ encodes the series $\sum_{n=1}^\infty a_n2^{-n}$, which does converge, since it's dominated by the convergent geometric series. So there you will actually find a not-yet-encoded real number, since the series then actually corresponds to a number.
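To see concretely why the diagonal string fails to encode an integer, here is a small sketch (my own illustration in Python, not part of the original exchange) that builds the table above, flips the diagonal, and shows that the partial sums of $\sum_k d_k 2^{k-1}$ grow without bound:
def lsb_digits(n, width):
    # binary digits of n, least significant first, padded with zeros
    return [(n >> i) & 1 for i in range(width)]

N = 20
diag = [lsb_digits(n, N)[n - 1] for n in range(1, N + 1)]  # n-th digit of row n
k_digits = [1 - d for d in diag]                           # flip the diagonal
print(k_digits)                                # [0, 0, 1, 1, 1, ...] as above
print(sum(d * 2 ** (i - 1) for i, d in enumerate(k_digits, start=1)))
# this value keeps growing as N grows, so K is not an integer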
|
H: In this textbook explanation of needing partial derivatives, how is this partial derivative not an indeterminate form?
$$ f(x,y) = x^\frac{1}{3}y^\frac{1}{3} $$
$$\frac{\partial f}{\partial x}(0,0) = \lim_{h \to 0} \frac{f(h,0)-f(0,0)}{h}= \lim_{h \to 0} \frac{0-0}{h} = 0$$
"and, similarly, $\frac{\partial f}{\partial y}(0,0) =0$ (these are not indeterminate forms!). It is necessary
to use the original definition of partial derivatives, because the functions $x^\frac{1}{3}$ and $y^\frac{1}{3}$
are not themselves differentiable at 0."
This is a portion of a textbook explaining why a simple definition of a partial derivative does not work but a linear approximation definition of a partial derivative must be used.
However, I'm confused at this part where they seem to be trying to use a counterexample to prove why a simple definition of partial derivatives does not work. Isn't this limit an indeterminate form? Yet, as you can see, the textbook claims this limit is not an indeterminate form to make their case.
I would greatly appreciate your help in making sense of this textbook.
Reference textbook: Vector Calculus by Marsden and Tromba 5th edition.
AI: It is defined because the $0$ in the numerator of your limit is "real" $0$. Look, in the expression:
$$\lim_{x\rightarrow 0}\frac{0}{x}$$
The answer is in fact $0$, because the denominator will be a very little value close to $0$, but not $0$, while the numerator is just $0$ ($0$ divided by any real number different to $0$ is equal to $0$, and the $x$ in the denominator isn't equal to $0$ despite being as close to it as you want). So in fact:
$$\lim_{x\rightarrow 0}\frac{0}{x}=0.$$
So your limit isn't an indeterminate form.
|
H: Given $S=1+3+5+\cdots+2017+2019$, find $\frac{1}{1010}S-1008$.
I'm wrong, or the answer in the book is wrong:
Given $S=1+3+5+\cdots+2017+2019$, find $$\frac{1}{1010}S-1008$$
My attempt is:
$S=1+3+5+\cdots+2017+2019$
$S=\frac{2019-1}{2}(1+2019)$
$S=2038180$
Now I find
$$\frac{1}{1010}S-1008=\frac{1}{1010}2038180-1008=1010$$
But in the book answer is $2$. Where is my error, or is the answer in the book wrong? Help me please.
AI: $$S_3:=1+3+5=9,\\\frac{S_3}3-(3-2)=2$$ and more generally
$$S_n=1+3+5+\cdots+(2n-1)=n^2,\\\frac {S_n}n-(n-2)=2.$$ Your error is in the number of terms: there are $\frac{2019-1}{2}+1=1010$ odd numbers from $1$ to $2019$, so $S=\frac{1010}{2}(1+2019)=1010^2=1020100$, and then $\frac{1}{1010}S-1008=1010-1008=2$.
|
H: both $A$ and $B$ have eigenvalues other than $0,1$ and $rkA+rkB=n$
Let $A$ and $B$ be diagonalizable $n \times n$ matrices. Suppose both $A$ and $B$ have eigenvalues other than $0$ and $1$, and that $rkA+rkB=n$. Show that such $A$ and $B$ do not exist.
Any help would be appreciated, thank you.
In my original problem, $A+B=E$ is also condition, but I guess this condition is not needed.
AI: The requirement that $A + B = E$ is necessary. For instance, taking
$$
A = \pmatrix{2&0\\0&0}, \quad B = \pmatrix{3&0\\0&0}
$$
gives us an example where both $A$ and $B$ have an eigenvalue not equal to $0$ or $1$ ($2$ for $A$ and $3$ for $B$), but the sum of the ranks is $1 + 1 = 2$.
On the other hand, if $A$ and $B$ satisfy $A + B = E$, and $A$ is diagonalizable, then we can assume without loss of generality that $A$ is diagonal so that
$$
A = \pmatrix{a_1 \\ & \ddots \\ && a_n}, \quad B = E - A =
\pmatrix{1 - a_1 \\ & \ddots\\ && 1 - a_n}.
$$
The rank of a diagonal matrix is the number of non-zero values on its diagonal. We see that the sum of the ranks of $A,B$ must be at least $n$, and the sum can only be exactly $n$ if we have either $a_i = 0$ or $1 - a_i = 0$ for $i = 1,\dots,n$.
However, if $A$ has an eigenvalue not equal to $0$ or $1$, then there is an $i$ for which $a_i$ is neither $0$ nor $1$, which means that it is not true that $a_i = 0$ or $1 - a_i = 0$. So, the sum of the ranks is not $n$.
|
H: Show that $|\cos(x)| \geq 1 - \sin^2 (x), \forall x \in \mathbb{R}$.
Show that $|\cos(x)| \geq 1 - \sin^2 (x), \forall x \in \mathbb{R}$.
I'm using the graphs of both $f(x) = |\cos(x)|$ and $g(x) = 1 - \sin^2 (x)$ only to illustrate it, but I don't think that's enough. Strict inequality $>$ holds for some $x$, for example $x=\frac{2\pi}{3}$. I'm not able to show it completely. Any idea?
Thanks in advance.
AI: This is equivalent to
$$|\cos x|\ge\cos^2x$$ or
$$|\cos x|(1-|\cos x|)\ge0,$$ which is true.
|
H: Given $\sum\limits_{i=1}^m(a_i+b_i)=c,$ what is the maximal value of the expression $\sum\limits_{i=1}^ma_ib_i?$
Given $2m$ non-negative numbers $\{a_i\}_{i=1}^m$ and $\{b_i\}_{i=1}^m$ with $\sum\limits_{i=1}^m(a_i+b_i)=c,$ what is the maximal possible value of the expression $\sum\limits_{i=1}^ma_ib_i?$
When $m=1$ then given $a_1+b_1=c,$ the maximal value of $a_1b_1$ is $c^2/4$. How can that maximal value be calculated for the case $m>1$?
AI: We have
$$
\sum_{i=1}^m a_i b_i \le \frac 14 \sum_{i=1}^m (a_i + b_i)^2
\le \frac 14 \left( \sum_{i=1}^m (a_i + b_i)\right)^2 = \frac 14 c^2 \, .
$$
Equality holds (exactly) if $a_j = b_j = c/2$ for one index $j$ and all other $a_i, b_i$ are zero.
The first estimate is the inequality between geometric and arithmetic mean, and the second estimate is
$$
c_1^2 + \ldots + c_m^2 \le (c_1 + \ldots + c_m)^2
$$
for non-negative real numbers, with equality if and only if all but one $c_i$ are zero.
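As a quick numerical sanity check (my own sketch in Python, not part of the answer), random non-negative $a_i,b_i$ rescaled to satisfy the constraint never exceed the bound $c^2/4$:
import random

c = 1.0
for _ in range(10000):
    m = random.randint(1, 6)
    a = [random.random() for _ in range(m)]
    b = [random.random() for _ in range(m)]
    s = sum(a) + sum(b)
    a = [t * c / s for t in a]   # rescale so that sum(a_i + b_i) = c
    b = [t * c / s for t in b]
    assert sum(x * y for x, y in zip(a, b)) <= c * c / 4 + 1e-12
print("bound c^2/4 =", c * c / 4, "respected in all trials")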
|
H: Prove that a set is Borel(and hence Lebesgue)
I'm trying to practice for the real-analysis final exam and I found this...Could you please help?
For $n$ $\in$ $\mathbb{N}$, define the following subsets of $\mathbb{R}$:
$$
A_n=\begin{cases} (0,1]\cup[n,n+1) & , n-even \\
(0,1]\cup[n,n+2) & ,n-odd
\end{cases}
$$
Justify why $A_n$ is Borel and find $\lim_{n \to +\infty} \lambda(A_n).$
I was thinking that we could write these intervals as unions of open intervals and, being countable, they are Borel, but I'm not sure if this is correct...Also, I think that the result of the limit is 2 in the first case and 3 in the last one?
AI: All intervals are Borel sets and unions of two Borel sets are Borel. Hence each $A_n$ is Borel. As far as $\lim \lambda (A_n)$ is concerned, the limit does not exist since there are two limit points, $2$ and $3$.
|
H: $X = f^{-1}(f(X))$ if and only if $X = f^{-1}(Z)$ for some $Z \subseteq B$
In my study of functions, I found this result in ”Proofs and Fundamentals” by Ethan D. Bloch that I’m attempting to prove. First, I already now that $X \subseteq f^{-1}(f(X))$ and $f(f^{-1}(Y)) \subseteq Y $ and I’m using this two results in my proof.
Result: Let $f:A \rightarrow B$ a map and let $X \subseteq A$ and $Y \subseteq B$. Then $X = f^{-1}(f(X))$ if and only if $X = f^{-1}(Z)$ for some $Z \subseteq B$.
My proof came as following.
Proof: $\impliedby$. Suppose that there exists a set $Z \subseteq B$ such that $X = f^{-1}(Z)$. Let $Z_0$ be that set. By the result mentioned above, we have that $X \subseteq f^{-1}(f(X))$. Let $x_0 \in f^{-1}(f(X))$. By definition, $f(x_0) \in f(X)$. Since $X = f^{-1}(Z_0)$, we see that $f(x_0) \in f(f^{-1}(Z_0)).$ By the second result mentioned above, we conclude that $f(x_0) \in Z_0$. By definition, we have that $x_0 \in f^{-1}(Z_0)$. Hence $x_0 \in X$. By definition of equality of sets we conclude that, in these conditions, $X = f^{-1}(f(X))$.
$\implies$. Suppose that $X = f^{-1}(f(X))$ and let $Z_1$ be the set defined by $Z_1 = f(X)$. By definition, $f(X) =$ {$b \in B$ | $b = f(x)$ for some $x \in X$}. Hence $f(X) \subseteq B$. From here we deduce that $Z_1 \subseteq B$. By hypothesis, we have that $X = f^{-1}(f(X))$, therefore $X = f^{-1}(Z_1)$. We have shown that there exists a subset of $B$ such that the inverse image of this set is $X$.
MY PROBLEM:
To me, the first part of the proof seems right but I would like to get some feedback.
The second part is making me uncomfortable. It just doesn't seem right to me. Is it right? Is there any other approach to prove the second part?
In the book, Bloch gives some hints to some exercises. And for this one, he suggests the use of the following theorem: “Let $f:A \rightarrow B$ be a map. Let $S, T \subseteq B$. If $S \subseteq T$, then $f^{-1}(S) \subseteq f^{-1}(T)$”. Although I don't see the point in using this theorem here. Do you have any idea?
Thank you for your attention.
AI: Your proof of the second part is correct. Indeed, this is the "easy" part of the problem. If $X=f^{-1}(f(X))$ then certainly $X$ is of the form $f^{-1}(Z)$ for some $Z\subseteq B$. Just let $Z=f(X)$ as you have done.
As for the hint, you could argue the first part as follows.
Suppose $X=f^{-1}(Z)$. Then $f(X)\subseteq Z$. So $f^{-1}(f(X))\subseteq f^{-1}(Z)$ (by the hint). This says $f^{-1}(f(X))\subseteq X$ by assumption that $X=f^{-1}(Z)$. So $f^{-1}(f(X))= X$ since $X\subseteq f^{-1}(f(X))$ is always true (as you note).
(This is more or less the same as your proof, just using fewer words. You implicitly use the hint in your argument too.)
|
H: Isn't $V^n$ a $\Bbb K$-vector space?
Suppose $V$ is a $\Bbb K$-vector space. Let $n \in \Bbb N.$ Can't we say that $V^n$ is also a $\Bbb K$-vector space with respect to componentwise addition and componentwise scalar multiplication? If so, what can we say about $\dim (V^n)$ in terms of $\dim (V)$?
AI: Yes, this is indeed a vector space. Also, if $V$ is finite dimensional and $\{e_1,...,e_n\}$ is a basis then it is easy to prove that the set:
$\{(e_1,0),(e_2,0),...,(e_n,0),(0,e_1),(0,e_2),...,(0,e_n)\}$
is a basis of $V^2$, and hence $\dim(V^2)=2\dim(V)$. It follows by induction that $\dim(V^n)=n\dim(V)$.
|
H: Convex polyhedron with $3$ vertices, $2$ faces and $3$ edges
Can a convex polyhedron with $3$ vertices, $2$ faces and $3$ edges exist? If so, what does it look like, or what's its name? My imagination just fails here...
I was reading a proof of Euler's formula (https://plus.maths.org/content/eulers-polyhedron-formula) where it boils down to showing that $V+F-E$ for the polyhedron in question equals $2$. But I'm not really sure if such a convex polyhedron can exist at all.
AI: Depends on what you mean by "polyhedron". If you, like me, want the faces and edges to be flat and straight, then this is impossible. You need at least four faces to make a polyhedron.
If you allow faces and edges to be curved, then you can imagine gluing two equilateral triangles edge to edge, and then "inflate" it a bit. Or, equivalently, take a sphere, call the equator an edge, and put three vertices along the same equator.
|
H: $f:(X,\tau) \to (Y,\tau')$ is continuous and $\tau'$ is T2. Why is $\{p \in X : f(p)=q\}=f^{-1}(\{q\})$ closed?
Let $f:(X,\tau) \mapsto (Y,\tau')$ be continuous and $\tau' $ is Hausdorff.
$\forall q \in Y$, we have that $\{p \in X : f(p)=q\}=f^{-1}(\{q\})$ is closed
I don't know how to prove this. I wanted to use the Hausdorff's definition but I only have a point$ q \in Y$ to start with and Hausdorff requires 2 points, If I consider a second point p and disjoint open sets containing them, I don't know how to use the continuity of f. I know that if $\{p\}$
were closed, then by the continuity the pre-image would be closed, but this only happens in the discrete topology, while the proposition is general.
How do I go about it?
AI: $\{p\}$ is indeed closed. Let us show that its complement is open. Suppose $q \in Y \setminus \{p\}$. Then $q \neq p$ so there exist disjoint open sets $U$ and $V$ in $Y$ such that $p \in U$ and $q \in V$. Now $V$ is contained in $Y \setminus \{p\}$ and it is an open set containing $q$. Hence $Y \setminus \{p\}$ is open.
|
H: How to show that $J_{n+1} = \frac{3n-1}{3n} J_n$?
Let $$J_n := \int_{0}^{\infty} \frac{1}{(x^3 + 1)^n} \, {\rm d} x$$
where $n > 2$ is integer. How to show that $J_{n+1} = \frac{3n-1}{3n} J_n$?
AI: Hint:
Let $y=\dfrac x{(x^3+1)^m}$
$$\dfrac{dy}{dx} =\dfrac1{(x^3+1)^m}+\dfrac{(-m)x(3x^2)}{(x^3+1)^{m+1}} =\cdots =\dfrac{3m}{(x^3+1)^{m+1}}-\dfrac{3m-1}{(x^3+1)^m}$$
Integrate both sides with respect to $x$: $$\dfrac x{(x^3+1)^m}=3mI_{m+1}-(3m-1)I_m$$ where $$I_n=\int\dfrac{dx}{(x^3+1)^n}$$
Now $\dfrac x{(x^3+1)^m}\big|_0^\infty=0-0$
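For what it's worth, the recursion can also be checked numerically (a sketch assuming scipy is available; $J_n$ is computed by quadrature):
import numpy as np
from scipy.integrate import quad

def J(n):
    return quad(lambda x: (x**3 + 1.0) ** (-n), 0, np.inf)[0]

for n in range(3, 8):
    print(n, J(n + 1), (3 * n - 1) / (3 * n) * J(n))  # the two columns agree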
|
H: All solutions of $f(x)f(-x)=1$
What are all the solutions of the functional equation $$f(x)f(-x)=1\,?$$
This one is trivial: $$f(x)=e^{cx},$$
as it is implied (for example) by the fundamental property of exponentials, namely $e^a e^b=e^{a+b}$. But there is another solution:
$$f(x)=\frac{c+x}{c-x}.$$
Are there any more solutions? How can I be sure?
AI: Presumably, you want $f:\mathbb{R}\to\mathbb{R}$. If you want to use a different domain or codomain, the answer is probably not going to change much.
You can simply pick any $h:\mathbb{R}_{>0}\to\mathbb{R}_{\neq 0}$ and $\epsilon\in\{-1,+1\}$. Then, define the function $f:\mathbb{R}\to\mathbb{R}$ by
$$f(x):=\left\{\begin{array}{ll}
h(x)&\text{if }x>0\,,\\
\epsilon&\text{if }x=0\,,\\
\dfrac{1}{h(-x)}&\text{if }x<0\,.
\end{array}\right.$$
Then, $f$ satisfies the required functional equation. Note that any such function $f$ takes the form above.
If you demand that $f$ is continuous, then $h$ has to be continuous and $\lim\limits_{t\to 0^+}\,h(t)=\epsilon$. This is all you need. It is a much more interesting problem to characterize all smooth or analytic functions $f$ that satisfy your functional equation. It turns out that the solutions are $f(x)=\epsilon\,\exp\big(g(x)\big)$, where $\epsilon\in\{-1,+1\}$ and $g:\mathbb{R}\to\mathbb{R}$ is a smooth or analytic, odd function. If you want $f$ to be just $k$-times differentiable, then $g$ is $k$-times differentiable.
|
H: Prove: $\tan{\frac{x}{2}}\sec{x}= \tan{x} - \tan{\frac{x}{2}}$
I was solving a question which required the above identity to proceed but I never found its proof anywhere. I tried to prove it but got stuck after a while.
I reached till here:
To Prove: $$\tan{\frac{x}{2}}\sec{x}= \tan{x} - \tan{\frac{x}{2}}$$
But I don't know what to do next.
Any help is appreciated
Thanks
AI: $$\tan \dfrac x2\sec x=\dfrac{\sin\dfrac x2}{\cos\dfrac x2\cdot\cos x}$$
Use $\sin\dfrac x2=\sin\left(x-\dfrac x2\right)=?$
|
H: Reduce the differential equation $y= 2px+p^{2}y^{2}$ to Clairaut’s form
Reduce the following differential equation to Clairaut’s form by using the substitution and hence solve:
$y= 2px+p^{2}y^{2}$ where $p={dy\over dx}$
I used $y^{2}=v$ then I get
$v-2p_{1}x + {(x p_{1})^{2}\over v}= ({p_{1}\over2})^{4}$ where $p_{1}={dv\over dx}$
so this is not useful to reduce to Clairaut's form,
please give me a hint to solve this or give me a suitable substitution
Thank you.
AI: If it is a Clairaut equation after some transformation, then its derivative should still factor, modulo the original equation, into a factor containing the second derivative and some other factor for the singular solution. Here you get
$$
y'=2y'+2xy''+2yy'^3+2y^2y'y''
\\
0=(1+2yy'^2)y'+(2x+2y^2y')y''
$$
Now try to remove the higher-degree factors
$$
0=(y+2y^2y'^2)y'^2+2(xy'+y^2y'^2)yy''
\\
0=(y+2(y-2xy'))y'^2+2(xy'+(y-2xy'))yy''
\\
0=(3y-4xy')y'^2+2(y-xy')yy''
$$
There appears to be no way for the first term to have a non-trivial common factor with the second term. There has to be something wrong with your given task.
A simple modification that works with your substitution is
$$
y=2px+yp^2\implies v=v'x+\frac{v'^2}4
$$
|
H: Volume of the region of sphere between two planes.
I want to find the volume of the region of the sphere $x^2+y^2+z^2=1$, between the planes $z=1$ and $z=\frac{\sqrt{3}}{2}$
I have used triple integral for calculating this
$$\int _0^{2\pi }\int _{0}^{\frac{\pi }{6}}\int _{0}^1\rho ^2\sin\phi \,d\rho \,d\phi \,d\theta $$
Are the limits of integration I have chosen correct?
Edit:
As from the comment, my integral is not correct, so please clarify what region actually the above integral represents.
AI: The bottom of your region is a horizontal plane which is not the $xy$-plane. This means you should at least consider cylindrical coordinates. For instance,
$$
\int_0^{2\pi}\int_0^{1/2}\int_{\sqrt3/2}^{\sqrt{1-r^2}} r\,dz\,dr\,d\theta
$$
or
$$
\int_0^{2\pi}\int_{\sqrt3/2}^1\int_0^{\sqrt{1-z^2}}r\,dr\,dz\,d\theta
$$
In case you really want spherical coordinates, if we put the radius as the innermost integral, note that it doesn't go from $0$, it goes from wherever $z=\sqrt3/2$, which is to say from $\frac{\sqrt3}{2\cos\phi}$, which is not easy to integrate. If we do $\phi$ first instead, we get
$$
\int_0^{2\pi}\int_{\sqrt3/2}^1\int_0^{\arccos\left(\frac{\sqrt3}{2\rho}\right)}\rho^2\sin\phi\,d\phi\,d\rho\,d\theta
$$
which might look scary at first, but everything turns out pretty nice in the end.
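As a cross-check (my own sketch, assuming scipy is available), the cylindrical and spherical set-ups above give the same number once the $\theta$ integral contributes its factor of $2\pi$:
import numpy as np
from scipy.integrate import dblquad

# cylindrical: z from sqrt(3)/2 to sqrt(1 - r^2), r from 0 to 1/2
cyl = dblquad(lambda z, r: r, 0, 0.5,
              lambda r: np.sqrt(3) / 2, lambda r: np.sqrt(1 - r**2))[0]
# spherical: phi from 0 to arccos(sqrt(3)/(2 rho)), rho from sqrt(3)/2 to 1
sph = dblquad(lambda phi, rho: rho**2 * np.sin(phi), np.sqrt(3) / 2, 1,
              lambda rho: 0, lambda rho: np.arccos(np.sqrt(3) / (2 * rho)))[0]
print(2 * np.pi * cyl, 2 * np.pi * sph)   # both approximately 0.0539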
|
H: Is my proof of an upper bound $u$ is the supremum of $\mathit{A}$ iff $\forall(\epsilon>0)$ $\exists a\in\mathit{A}$ such that $u-\epsilon<a$ correct?
I have attempted to prove that an upper bound $u$ is the supremum of $\mathit{A}$ if and only if for all $\epsilon>0$ there exists an $a\in\mathit{A}$ such that $u-\epsilon<a$.
Here is my attempted proof.
Let $u$ be an upper bound of non-empty set $\mathit{A}$ in $\mathbb{R}$. We shall first prove the if-part of the statement than the only-if-part of the statement.
We shall use proof by contradiction to prove that $u$ is the supremum of $\mathit{A}$ if for all $\epsilon>0$ there is an $a\in\mathit{A}$ such that $u-\epsilon<a$. Let $u$ be an upper bound of $\mathit{A}$.
Suppose that for all $\epsilon>0$ there is an $a\in\mathit{A}$ such that $u-\epsilon<a$.
Also suppose that $u$ is not the least upper bound of $\mathit{A}$.
Then there is a $\beta$ such that $\beta<u$ and $\beta$ is an upper bound.
Now let $\epsilon$ be $u-\beta$.
We know that $\epsilon$ is positive since we assumed $\beta$ is less than $u$.
So, applying the assumed inequality with $\epsilon=u-\beta$, there is an $a\in\mathit{A}$ with $u-\epsilon<a$, that is, $\beta<a$.
But this is a contradiction since we assumed $\beta$ is an upper bound of $\mathit{A}$.
Therefore $u$ is the least upper bound of $\mathit{A}$ if for all $\epsilon>0$ there exists an $a\in\mathit{A}$ such that $u-\epsilon<a$.
Now let's prove the only-if-part, which is: for all $\epsilon>0$ there exists an $a\in\mathit{A}$ such that $u-\epsilon<a$ if $u$ is the supremum of $\mathit{A}$.
We shall prove this statement by contradiction. Let $u$ be an upper bound of $\mathit{A}$. Suppose that $u$ is the supremum of $\mathit{A}$.
Also suppose that there exists an $\epsilon>0$ such that for all $a\in\mathit{A}$ we have $a<u-\epsilon$.
Then $u-\epsilon$ is also an upper bound of $\mathit{A}$.
Since $\epsilon>0$ it is obvious that $u-\epsilon<u$.
But this is a contradiction since we know that $u$ is the least upper bound of $\mathit{A}$ and $\mathit{A}$ can not have a smaller upper bound than $u$.
Therefore for all $\epsilon>0$ there exists an $a\in\mathit{A}$ such that $u-\epsilon<a$ if $u$ is the supremum of $\mathit{A}$.
Q.E.D
Is my proof attempt correct? I could not be sure about the second contradiction that I have found. Also is this proof suitable for formal mathematical writing? Thanks!
AI: I guess that you're using the definition of supremum as the smallest upper bound.
Suppose $u$ is the supremum and take $\varepsilon>0$. Then $u-\varepsilon<u$, so $u-\varepsilon$ is not an upper bound of $A$, which implies that there exists $a\in A$ such that $a>u-\varepsilon$.
Suppose $u$ is not the supremum; then there exists an upper bound $v$ of $A$ such that $v<u$. Set $\varepsilon=u-v$. If $a\in A$, then $a\le v=u-\varepsilon$, so $u$ does not satisfy the condition.
About your proof: it's too long and full of repetitions, but essentially correct: the main ideas are there. Contradiction is not necessary: in the above, I proved the contrapositive.
|
H: Tensor algebra: finding a well-defined linear map from functionals.
Let $V$ and $W$ be finite dimensional $K$ vector spaces. Prove that for $\varphi \in V^*$ and $\psi \in W^*$ exists a well-defined map: $$P_{\varphi , \psi} : V \otimes W \to K, v\otimes w \mapsto \varphi(v) \psi(w)$$
I would like some help in understanding what is meant by this assignment in one of the Linear Algebra II books that I own. I have been trying to figure out how we can find a map like this. I already have a few ideas:
We can use $\varphi$ and $\psi$ on $v$ and $w$ before using the tensor product, then try to use the multilinear property to obtain the map that was given above. Though I have the hunch that I did not understand the properties of tensor products with maps/functionals at all.
Any help is appreciated.
AI: The $K$-linear space $V \otimes W$, together with the bilinear map $V \times W \xrightarrow{\otimes} V \otimes W$ such that $(v,w)\mapsto v \otimes w$, is characterised (up to isomorphism of $K$-linear spaces) by the following universal property (the one that you are talking about in your question): whenever $U$ is a $K$-linear space and $f \colon V \times W \to U$ is a bilinear map, there is a unique $K$-linear map $f'\colon V\otimes W \to U$ such that $f' \circ \otimes = f$.
Hence in this case, in order to get that your map $P_{\phi,\psi}$ (which will be your $f'$) exists and is $K$-linear, you only need to observe that the corresponding arrow: $$f\colon V \times W \ni (v,w)\mapsto \phi(v)\psi(w) \in K$$ is bilinear (here you'd use that $\phi$ and $\psi$ are elements of the dual spaces, that is, they are $K$-linear). Then you're done by the universal property of $V\otimes W$ that we are talking about.
|
H: $\cos\theta\cos2\theta\cos3\theta + \cos2\theta\cos3\theta\cos4\theta + ...$
Evaluate: $$\cos\theta\cos2\theta\cos3\theta + \cos2\theta\cos3\theta\cos4\theta + …$$ up to $n$ terms.
I tried solving the general term $\cos n\theta\cos (n+1)\theta\cos (n+2)\theta$. First, I applied the formula $2\cos\alpha\cos\beta = \cos(\alpha+\beta)+\cos(\alpha-\beta)$ to the two extreme factors. After solving I applied this once again and after further solving arrived at $$\frac{1}{4}[\cos(3n+3)\theta + \cos(n+1)\theta+\cos(n+3)\theta+\cos(n-1)\theta]$$
which I simplified to
$$\frac{\cos n\theta}{2}[\cos\theta+\cos(2n+3)\theta]$$
After this I am stuck as to what else I could do so as to make the telescope or something else to easily calculate the sum using some fact from trigonometry. Or maybe this is a dead end. And help or hints would be appreciated, thanks
AI: $$\cos(n-1)t\cdot\cos nt\cdot\cos(n+1)t$$
$$=\dfrac{\cos nt(\cos2t+\cos2n t)}2$$
$$=\dfrac{\cos2t\cos nt}2+\dfrac{\cos nt+\cos3nt}4$$
$$=\dfrac{2\cos2t+1}4\cdot\cos nt+\dfrac{\cos3nt }4$$
Use $\sum \cos$ when angles are in arithmetic progression
|
H: Does a non-negative polynomial of three variables have minimum?
I was wondering, does a non-negative polynomial of three variables (in $\mathbb{R}^3$) have a minimum point?
I understand that for example $(0,0,0)$ is a minimum point for some of them, but what could be the answer in the general case?
AI: Such a polynomial can, but does not need to have a minimum point.
$f(x,y,z)=x^2+y^2+z^2$ is an example of the former, which takes the minimum value of $0$ at $(0,0,0)$.
But the polynomial $g(x,y,z)=x^2+y^2+(xyz-1)^2$ does not have a minimum point. We have $g(x,y,z) \ge 0$ obviously as sum of $3$ squares, but the equality can't be reached, as that would require $x=0, y=0$ and $xyz=1$, which is impossible.
So we have that $g(x,y,z) > 0$ but also $g\left(\frac1n,\frac1n, n^2\right)=\frac2{n^2}$ for each $n > 0, n \in \mathbb Z$. That means $g(x,y,z)$ can take arbitrarily small positive values but can never reach $0$ exactly, so it does not have a minimum point.
Note that this construction works for polynomials of $2$ or more variables, while for just one variable a lower bounded polynomial will always have a minimum point.
That's because a non-constant polynomial in one variable tends to $+\infty$ or $-\infty$ as the argument tends to $+\infty$ or $-\infty$. If the polynomial is bounded below, it must tend to $+\infty$ in both directions, so only a finite closed interval is relevant for the minimum, and the usual theorem on a continuous function attaining its minimum on a finite closed interval proves the conclusion.
|
H: Determine Lebesgue integral of a function containing floor function
I'm practicing for the real-analysis exam and I've got stuck at this integral... Could you help me, please?
Determine: $$ \int_{[0,\infty)}\dfrac{1}{\lfloor{x+1}\rfloor\cdot\lfloor{x+2}\rfloor}d\lambda(x).$$
Is it ok to split it into two integrals and obtain two natural logarithms?
AI: For $x\in[k, k+1)$, where $k\in\mathbb{N}$, we have $\lfloor{x}\rfloor=k$. Also, note that for any $l\in\mathbb{N}$ we have $\lfloor{x+l}\rfloor = \lfloor{x}\rfloor + l$.
\begin{align}
\int_0^n \frac{1}{\lfloor{x+1}\rfloor\lfloor{x+2}\rfloor}\ dx
&=
\sum_{k=0}^{n-1} \int_{k}^{k+1} \frac{1}{\lfloor{x+1}\rfloor\lfloor{x+2}\rfloor}\ dx\\
&=
\sum_{k=0}^{n-1} \frac{1}{(k+1)(k+2)}\\
&=
\sum_{k=0}^{n-1} \left( \frac{1}{k+1} - \frac{1}{k+2} \right)\\
&=
\sum_{k=0}^{n-1} \frac{1}{k+1} - \sum_{k=1}^{n} \frac{1}{k+1}\\
&=
1 - \frac{1}{n+1} \text{.}
\end{align}
Letting $n\to\infty$ (the integrand is non-negative, so monotone convergence applies), the Lebesgue integral over $[0,\infty)$ equals $1$.
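A tiny numerical check of the partial sums (my own sketch, not part of the answer):
for n in (5, 50, 500):
    s = sum(1.0 / ((k + 1) * (k + 2)) for k in range(n))
    print(n, s, 1 - 1 / (n + 1))   # the two values coincide and tend to 1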
|
H: Problem with general progressions
Question: Let $a_1,a_2,a_3,a_4$, and $a_5$ be such that $a_1,a_2,a_3$ are in an $A.P.$ and $a_3,a_4,a_5$ are in $H.P.$ Then prove that $\log{a_1},\log{a_3},\log{a_5}$ will be in $A.P.$
My approach:
As $a_1,a_2,a_3$ are in an $A.P.$, $$2{a_2}={a_1+a_3}$$
Let's call this equation $I$
And as $a_3,a_4,a_5$ are in $H.P.$, then $$a_4=\frac{2a_3a_5}{a_3+a_5}$$
Let's call this equation $II$
My problem is that after I substitute the value for ${a_3}$ as $2a_2-a_1$ in equation $II$, I do not get the desired answer. Please help.
AI: Given $a_1,a_3,a_5$ such that $\log{a_1},\log{a_3},\log{a_5}$ are NOT in A.P. then we can always find $a_2$ and $a_4$ such that the first two conditions are satisfied. Take, for instance, $a_1=1,a_3=3,a_5=5$ and
$a_2=2$ and $a_4=15/4$.
On the other hand, by considering ALSO the third condition, i.e. $a_2,a_3,a_4$ are in G.P., we have that
$$a_3^2=a_2a_4=\frac{a_1+a_3}{2}\cdot \frac{2a_3a_5}{a_3+a_5}\Leftrightarrow a_3=(a_1+a_3)\cdot \frac{a_5}{a_3+a_5}$$
which implies that
$$0=a_3(a_3+a_5)-(a_1+a_3)a_5=a_3^2-a_1a_5$$
and therefore $\log{a_1},\log{a_3},\log{a_5}$ are in A.P.
|
H: Doubts about series convergence/divergence and properties of compound functions.
Here are some questions about series and functions.
The task is to provide a counterexample for false statements and a proof for true statements (which are at most two).
/Question in text format/
-(I) Let $(a_n)_{n\in\Bbb N}$ and $(b_n)_{n\in\Bbb N}$ be two sequences of real numbers such that $\sum_{n=1}^\infty a_n$ converges and $\sum_{n=1}^\infty b_n$ diverges to positive infinity. Then:
$\sum_{n=1}^\infty \sin(a_n^2)$ converges.
$\sum_{n=1}^\infty \frac 1{(1+b_n^2)}$ converges.
$\sum_{n=1}^\infty \sqrt[]{|a_n|}(b_n^2)$ diverges.
$\sum_{n=1}^\infty (-1)^na_n$ converges.
-(II) Consider $f,g: \Bbb R\rightarrow \Bbb R$. Let $f$ be continuous and have an absolute minimum. Also, let $g$ be bounded and have an absolute minimum. Then:
$g\circ f$ is continuous.
$f\circ g$ is bounded.
$g\circ f$ has an absolute maximum.
$f$ is bounded.
AI: For question (I), all options are incorrect. To contradict options (1) and (4), take $a_n = \frac{(-1)^n}{\sqrt n}$; for option (2) take $b_n= \frac1n$; for option (3) take $a_n=\frac1{n^2}$ and $b_n=\frac1n$.
For question (II), option (a) is incorrect: take $g(x)=
\begin{cases}
1, & \text{if $x$ is rational} \\
-1, & \text{if $x$ is irrational}
\end{cases}$ and take $f(x)=x^2$.
Option (b) is correct: since $g$ is bounded, $g(\mathbb{R})$ is contained in some closed bounded interval, and the continuous function $f$ is bounded on that interval, so $f\circ g$ must be bounded on $\mathbb{R}$.
option (c) is correct, since g is bounded on whole $\mathbb{R}$. So restricted g on f($\mathbb{R}$) must be bounded. So, g can take absolute maximum value.
Option (d) is incorrect: take $f(x)=x^2$.
|
H: Why is $(a,+\infty)$ part of the topology generated by the base $\mathfrak{B}=\{B \subseteq \mathbb{R}\ | B=[a, +\infty), a \in \mathbb{R}\}$?
If we consider
$\mathfrak{B}=\{B \subseteq \mathbb{R}\ | B=[a, +\infty), a \in \mathbb{R}\}$ is the base of a topology over $\mathbb{R}$, whose open sets are the positive half-lines with closed or open endpoint, plus $\emptyset$ and $\mathbb{R}$. ...(1)
In fact, consider a family $ \mathfrak{A} =\{A_i=[a_i,+\infty) \mid a_i \in \mathbb{R} , i \in I\} \subseteq \mathfrak{B}$, where $I$ is a set of indices, and let $\alpha=\inf_i a_i$ (possibly $\alpha = -\infty$). ...(2)
Now $\bigcup_{i\in I} A_i=(\alpha, + \infty)$ if $\alpha \neq a_i $ $ \forall i$, otherwise it coincides with $[\alpha, +\infty)$ ...(3)
My questions
(1) Why are half-lines with open endpoints $(a,+\infty)$ part of the topology? It looks like by intersecting and taking unions all I can get are half-lines with closed endpoints.
(2) I guess there is a typo here: the inf of a number makes no sense; it should be $\alpha=\inf A_i$ instead of $\alpha=\inf a_i$.
(3) Why is $\bigcup_{i\in I} A_i=(\alpha, + \infty)$ if $\alpha \neq a_i $? It looks like any union, even an infinite one, would always yield a set of the form $[a,+\infty)$.
AI: Let $a \in \mathbb{R}$ and $n >1$ be an integer, and define $A_n = [a+\frac1n,+\infty)$. For all $n$, $A_n \in \mathcal{T}$, the topology generated by $\mathfrak{B}$. So $A = \cup_{n>1} A_n \in \mathcal{T}$ by the definition of the topology $\mathcal{T}$. Notice that $A = (a,+\infty)$.
|
H: If $\pi$ is a permutation, how many permutation $\sigma$ can be reached from $\pi$ exchanging $r$ indices?
This problem comes from my Master Thesis in combinatorial optimization.
Let $\pi \in \mathbf{S}_n$ be a permutation of $n$ elements and $r \in \{2,3,\dots,n\}$. Define the neighborhood $\mathbf{N}_r(\pi)$ centered at $\pi$ of radius $r$ as the set of all permutations that can be reached from $\pi$ by changing $r$ indices, i.e.
$$
\mathbf{N}_r(\pi):= \Big\{\sigma \in S_n \, \bigg| \, \# \{i \, | \,\pi(i) \neq \sigma(i) \} = r \Big\}
$$
I would like to prove that
$$\label{1}
\tag{1}
\#\mathbf{N}_r(\pi) = \binom{n}{r} r! \sum_{k=2}^r
\frac{(-1)^k}{k!}, \qquad \text{for every $\pi \in \mathbf{S}_n$.}$$
I showed by hand that
$$
\#\mathbf{N}_2(\pi) = \binom{n}{2}
$$
Is there a theoretical way to prove the result \ref{1}?
I think the problem can be transformed into some classic counting combinatorial problem.
AI: As requested in the comments,
For any $r$, we build such a permutation by selecting the $r$ indices at which it differs from $\pi$ and then choosing a derangement of those $r$ entries (none of them may stay in place). Thus the answer is $$\binom nr \,\times \,!r$$ which is exactly your formula, since $!r=r!\sum_{k=0}^r\frac{(-1)^k}{k!}=r!\sum_{k=2}^r\frac{(-1)^k}{k!}$ (the $k=0$ and $k=1$ terms cancel).
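For small $n$ the count can be verified by brute force (a sketch in Python; the derangement numbers $D_r$ are computed from the usual formula):
from itertools import permutations
from math import comb, factorial

def derangements(r):
    # D_r = r! * sum_{k=0}^{r} (-1)^k / k!
    return round(factorial(r) * sum((-1) ** k / factorial(k) for k in range(r + 1)))

n = 6
for r in range(2, n + 1):
    brute = sum(1 for p in permutations(range(n))
                if sum(p[i] != i for i in range(n)) == r)
    print(r, brute, comb(n, r) * derangements(r))   # the two counts agree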
|
H: Why does $\sqrt a\sqrt b =\sqrt {ab}$ only hold when at least one of $a$ and $b$ is a positive number?
I've just been introduced to complex numbers, and I have found it surprising that the radical rule apparently holds even when one of $a$ and $b$ is a negative number. However, if both $a$ and $b$ are negative, then this rule doesn't work. Why is this?
Here is my attempt at proving the radical rule for positive $a,b$. I was wondering if this proof could be generalised for negative $a,b$, and whether this could form part of the explanation for when the radical rule holds. (Unfortunately, though, I am yet to learn about how the natural logarithm works when it can accept complex arguments.)
\begin{align}
\sqrt a\sqrt b &= a^{\frac{1}{2}}b^{\frac{1}{2}} \\
&=e^{\frac{1}{2}\ln a}\times e^{\frac{1}{2}\ln b} \\
&=e^{\frac{1}{2}\ln a+\frac{1}{2}\ln b} \\
&=e^{\frac{1}{2}\ln(ab)} \\
&=(ab)^{\frac{1}{2}} \\
&=\sqrt{ab}
\end{align}
I have also heard that part of the reason for why the radical rule only works in certain cases is because there is no way of ordering $i$ and $-i$. In other words, there is no way of saying that $i$ is 'greater than' $-i$, or vice versa. Taking this idea to the extreme, does this mean that we can't even say that $5$ is greater than $3$ when working with the complex plane?
AI: If $x=\sqrt{a}$ and $y=\sqrt{b}$, then $(xy)^2\stackrel{(1)}{=}x^2y^2=ab$ and $xy\stackrel{(2)}{=}\color{blue}{\pm}\sqrt{ab}$. This reasoning works for both $\sqrt{a},\,\sqrt{b}\in\Bbb R$ and $\sqrt{a},\,\sqrt{b}\in\Bbb C$, because (1) uses commutativity (also needed on the display line below) and (2) uses the nonexistence of zero divisors, so that$$(u-v)(u+v)\stackrel{(1)}{=}u^2-v^2=0\implies u\mp v=0.$$To lose the $\color{blue}{\pm}$ when $\sqrt{a},\,\sqrt{b}\in\Bbb R$, we use the fact that then $a,\,b\ge0$, and their square roots are defined as the non-negative choices for $x,\,y$; then $x,\,y,\,\sqrt{ab}$ are all $\ge0$, so $\sqrt{ab}$ is $xy$ as opposed to $-xy$. In particular, this uses the fact that non-negative reals are closed under multiplication. But there is no analogous half $H$ of $\Bbb C$ in which we can place square roots, so that (i) for each $z\in\Bbb C\setminus\{0\}$ either $z\in H$ or $z\in-H$ but not both, and (ii) $z,\,w\in H\implies zw\in H$.
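One can also watch this happen numerically with principal square roots on the complex plane (a small sketch, my own illustration):
import cmath

a, b = -2, -3
print(cmath.sqrt(a) * cmath.sqrt(b))   # (-2.449...+0j), i.e. -sqrt(6)
print(cmath.sqrt(a * b))               # ( 2.449...+0j), i.e. +sqrt(6)
# with both numbers negative, the product of principal roots picks up the
# sign ambiguity discussed above: sqrt(a)*sqrt(b) = -sqrt(ab)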
|
H: $P(X-EX \geq t) \leq P((M-m)S \geq 2t)$. Is this inequality true? And if so, how does one prove it?
Let $X \in [m,M]$ be a random variable and $S$ be the Rademacher random variable (i.e. $P(S=1)=P(S=-1)=1/2$). Is the following inequality true?
$$
P(X-EX \geq t) \leq P((M-m)S \geq 2t)
$$
This inequality showed up while I was trying to prove the Hoeffding's inequality, and I was wondering if someone could help me either prove or disprove it.
The more general inequality is the following. For $X_i \in [m_i,M_i]$ independent, and $S_i\sim Rademacher$ also independent, one should prove that
$$
P(\sum^n_{i=1}X_i - EX_i \geq t) \leq
P(\sum^n_{i=1}(M_i - m_i)S_i \geq 2t)
$$
AI: I think it is not true.
It is quite well possible to create a random variable $X\in\left[-\frac{1}{2},\frac12\right]$ that is distributed in such a way that $\mathbb{E}X<0$ and $P\left(X< x\right)<1$
for every $x<\frac12$.
If the inequality is correct then for any $t>\frac{1}{2}$ we find: $$P\left(X-\mathbb{E}X\geq t\right)\leq P\left(S\geq2t\right)=0$$
This implies that: $$P\left(X\leq\mathbb{E}X+\frac{1}{2}\right)=1.$$
Since $\mathbb{E}X<0$, we can pick $x$ with $\mathbb{E}X+\frac12<x<\frac12$, and then $P(X<x)=1$.
However this contradicts that $P\left(X< x\right)<1$ for every $x<\frac12$.
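Here is one concrete distribution of the kind described above (my own choice, checked by simulation rather than taken from the answer): $X=-\tfrac12$ with probability $0.9$ and $X$ uniform on $(0,\tfrac12)$ with probability $0.1$, so $m=-\tfrac12$, $M=\tfrac12$ and $\mathbb EX\approx-0.425<0$; for $t=0.6>\tfrac12$ the left-hand side is visibly positive while $P((M-m)S\geq 2t)=P(S\geq 1.2)=0$.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
mask = rng.random(n) < 0.9
x = np.where(mask, -0.5, rng.uniform(0.0, 0.5, size=n))
t = 0.6
print(x.mean())                      # about -0.425
print(np.mean(x - x.mean() >= t))    # about 0.065 > 0 = P(S >= 1.2)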
|
H: Confusion about Suprema Properties and Spivak's Proof of the Intermediate Value Theorem
If it's possible, I'm wondering if someone can clarify the following for me as part of the proof of the Intermediate Value Theorem by Spivak in the 4th edition of his Calculus (proof and auxiliary theorem given at the bottom of the post). The line I am focusing on is "there is some number $x_0$ in $A$ which satisfies $\alpha−\delta<x_0<\alpha$ (because otherwise $\alpha$ would not be the least upper bound of $A$)." In particular, I am not entirely convinced that I am internalizing the part in brackets.
My question is, why does the fact that $x_0<\alpha$ (where $\alpha$ is of course the supremum of the set, which exists as established earlier in the proof) necessarily require that $x_0$ is in $A$?
Just for example, $a-1 < \alpha$, but $a-1$ is not in $A$. Now this is a cheeky example, since it can (or so it seems to me) easily be established that we can find a $\delta$ so that this $x_0$ is between $a$ and $\alpha$. But even then, I feel like I am missing something. Why must this $x_0$ be in $A$ based on the definition of suprema? The proof, and the part I am confused about, intuitively make sense to me based on my hazy characterization of the supremum of a set as having the property that any number less than it (and greater than some other element in the set - here $a$ for example is known to be in the set and $x_0$ is greater than $a$) should be in the set, but that seems to me to roughly rest on the notion that the set contains all numbers (again, and unfortunately, roughly speaking) between that number known to be in the set and the supremum.
This question is admittedly closely related to this one, but I thought it was sufficiently different because it seems I am more confused about properties of suprema than about the proof as a whole.
Theorem 7-1 (Intermediate Value Theorem):
If $f$ is continuous on $[a,b]$ and $f(a) < 0 < f(b)$, then there is some number $z$ in $[a, b]$ such that $f(z)=0$.
Proof: Define the set $A$ as follows:
$$A=\{x : a \le x \le b, \text{ and } f \text{ is negative on the interval } [a,x]\}.$$
Clearly $A \neq \varnothing$, since $a$ is in $A$; in fact, there is some $\delta>0$ such that $A$ contains all points $x$ satisfying $a \le x < a + \delta$; this follows from Problem 6-16, since $f$ is continuous on $[a,b]$ and $f(a)<0$. Similarly, $b$ is an upper bound for $A$ and, in fact, there is a $\delta > 0$ such that all points $x$ satisfying $b - \delta < x \le b$ are upper bounds for $A$; this also follows from Problem 6-16, since $f(b)>0$.
From these remarks, it follows that $A$ has a least upper bound $\alpha$ and that $a < \alpha < b$. We now wish to show that $f(\alpha) = 0$, by eliminating the possibilities $f(\alpha) < 0$ and $f(\alpha) > 0$.
Suppose first that $f(\alpha)<0$. By Theorem 6-3, there is a $\delta>0$ such that $f(x)<0$ for $\alpha - \delta< x < \alpha + \delta$. Now there is some number $x_0$ in $A$ which satisfies $\alpha - \delta< x_0 < \alpha$ (because otherwise, $\alpha$ would not be the least upper bound of $A$).
This means that $f$ is negative on the whole interval $[a,x_0]$. But if $x_1$ is a number between $\alpha$ and $\alpha+\delta$, then $f$ is also negative on the whole interval $[x_0,x_1]$. Therefore $f$ is negative on the interval $[a,x_1]$, so $x_1$ is in $A$. But this contradicts the fact that $\alpha$ is an upper bound for $A$; our original assumption that $f(\alpha)<0$ must be false.
In doing the proof, Spivak uses Theorem 6-3, given below:
Theorem 6-3:
Suppose $f$ is continuous at $a$, and $f(a)>0$. Then $f(x)>0$ for all $x$ in some interval containing $a$; more precisely, there is a number $\delta > 0$ such that $f(x)>0$ for all $x$ satisfying $|x−a|<\delta$. Similarly, if $f(a)<0$, then there is a number $\delta>0$ such that $f(x)<0$ for all $x$ satisfying $|x−a|<\delta$.
AI: Why must this $x_0$ be in $A$
It seems that your confusion sort of stems from the fact that you didn't completely see how $x_0$ was chosen.
Assuming I understand what your mistake is, let me first make the following point.
Spivak is not saying that: if $x_0 \in \Bbb R$ satisfies $\alpha - \delta < x_0 < \alpha$, then $x_0 \in A$.
In fact, that statement would really be false.
What Spivak really is saying is that: Given any $\delta > 0$, there exists some $x_0 \in A$ such that $\alpha - \delta < x_0 < \alpha$.
This can be seen as follows:
Since $\delta > 0$, we have that $\alpha - \delta < \alpha$.
Since $\alpha$ is the least upper bound of $A$, we must have that $\alpha - \delta$ is not an upper bound of $A$.
What that means is that there exists $x_0 \in A$ such that $\alpha - \delta < x_0$.
Finally, since $x_0 \in A$, we must have that $x_0 \le \alpha$.
However, he has given a strict inequality, so you should be able to argue that you can actually choose $x_0 \neq \alpha$. (For this, you would need more properties about $A$. It will not just follow from the definition of least upper bound.)
Showing that $x_0 \neq\alpha$:
We do this by showing that $\alpha\notin A$. (Since we already know that $x_0 \in A$, this would give us that $x_0 \neq \alpha$.)
To see this, suppose that $\alpha \in A$. In this case, we have that $f$ is negative on $[a, \alpha]$. Moreover, by Theorem 6-3 (as quoted in the question), we see that there exists $\delta > 0$ such that
$$f(x) < 0\quad\text{for all } \alpha - \delta < x < \alpha + \delta.$$
Moreover, since $\alpha < b$, we can choose $\delta$ small enough such that $\alpha + \delta < b$.
In particular, we have that $f(x) < 0$ for all $x \in \left[\alpha, \alpha + \frac{1}{2}\delta\right]$. Since $f$ is already negative on $[a, \alpha]$ (by assumption), we now see that $$f(x) < 0 \text{ for all } x \in \left[a, \alpha+\frac{1}{2}\delta\right].$$
Since $\alpha+\frac{1}{2}\delta < b$, we see that $\alpha+\frac{1}{2}\delta \in A$.
This contradicts that $\alpha$ is an upper bound since $\alpha < \alpha+\frac{1}{2}\delta$.
|
H: Calculate $\int \left(1+\ln \left(1+\ln (...+\left(1+ \ln(x))\right)\right)\right) dx$.
This is not particularly a useful integral or one asked in an exam or anything. I just really enjoy doing random integrals and derivatives. With that being said, how can I find:
$$\int \left(1+\ln \left(1+\ln (...+\left(1+ \ln(x))\right)\right)\right) dx$$
I tried to start with the simple case of: $$\int \left(1+\ln(x)\right)dx=x\ln \left(x\right)+C$$ I then introduced the first nest. Here I did Integration by Parts with $u=1+\ln \left(1+\ln \left(x\right)\right), v'=1$:
$$\int \left(1+\ln \left(1+\ln \left(x\right)\right)\right)dx=x\left(1+\ln \left(1+\ln \left(x\right)\right)\right)-\frac{1}{e}\text{Ei}\left(\ln \left(x\right)+1\right)+C$$
However, these take quite some time and it's only the first nest. Therefore, I'm curious as to if $(a)$ such an integral does have solution and $(b)$ how to derive that situation.
PS - Again this is just a fun question and not from an exam, website or anywhere important.
AI: $$\int 1+\ln(1+\ln(1+\ln(1+...\ln(x)))) dx=F(x)$$
$$1+\ln(1+\ln(1+\ln(1+...\ln(x))))=F'(x)$$
$$1+\ln(F'(x))=F'(x)$$
$$F'(x)=1$$
$$F(x)=x+c$$
|
H: Why is expectation of a exponential family equal to $\frac{\partial A(\eta)}{\partial \eta}$?
My understanding: A member of the exponential family is any well defined distribution of the form $h(x)\exp[\eta \boldsymbol{\cdot} T(x)-A(\eta)]$ where $T:\mathbb{R}^n \to \mathbb{R}^n$ , $A:\mathbb{R}^n \to \mathbb{R}$, and $h:\mathbb{R}^n \to \mathbb{R}$ are arbitrary functions.
The above used the moment generating function to prove the last property in question. But I am not sure why the integral equals $e^{A(\eta+u)-A(\eta)}$. Are there more conditions on the functions $A$, $h$, and $T$ that I am missing?
AI: Because the density must have integral equal to one, we have
$$\int h(x) e^{\eta^\top T(x) - A(\eta)} \, dx = 1$$
$$\implies e^{A(\eta)} = \int h(x) e^{\eta^\top T(x)} \, dx.$$
Then, for the moment generating function of $T(X)$, plugging $\eta+u$ into the identity above gives $$\mathbb E\left[e^{u^\top T(X)}\right]=\int h(x) e^{(\eta+u)^\top T(x)-A(\eta)}\,dx = e^{A(\eta+u)}e^{-A(\eta)} = e^{A(\eta+u)-A(\eta)},$$ which verifies your expression.
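As a concrete check (my own sketch, assuming scipy is available), take the exponential distribution written in this form: $h(x)=1$ on $[0,\infty)$, $T(x)=x$, $\eta=-\lambda$ and $A(\eta)=-\log(-\eta)$; then $\mathbb E[T(X)]$ computed by quadrature matches a finite-difference approximation of $A'(\eta)$:
import numpy as np
from scipy.integrate import quad

A = lambda eta: -np.log(-eta)
eta = -1.7                                    # rate lambda = 1.7
density = lambda x: np.exp(eta * x - A(eta))  # h(x) = 1 for x >= 0
mean_T = quad(lambda x: x * density(x), 0, np.inf)[0]
h = 1e-6
dA = (A(eta + h) - A(eta - h)) / (2 * h)      # central finite difference
print(mean_T, dA, 1 / 1.7)                    # all approximately 0.588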
|
H: What is the motivation behind Binomial Distribution?
If there are $N$ repetitions of a "random experiment" and the "success"
probability is $θ$ at each repetition, then the number of "successes" $x$ has a binomial
distribution:
$$p(x|θ) = {N\choose x}θ^x (1 − θ)^{N−x} $$
Now I am wondering what ${N\choose x}θ^x (1 − θ)^{N−x} $ gives, or what ${N\choose x}θ^x (1 − θ)^{N−x} $ means?
What I see is that the probability $\theta$ of something happening is multiplied $x$ times, that is multiplied by the probability $(1-\theta)$ of it not happening, multiplied the remaining $(N-x)$ times, and ${N\choose x}$ is also multiplied into the product.
What do these $3$ terms actually achieve? Especially, what is the role of ${N\choose x}$, or why is ${N\choose x}$ in $p(x|θ)$?
AI: Suppose you have a biased coin which has a probability $\theta$ of showing heads when flipped once and you want to know the probability of it showing heads exactly $x$ times when flipped $N$ times. Then you might say
the probability it shows tails when flipped once is $1-\theta$
the probability it shows heads every time when flipped $x$ times is $\theta^x$
the probability it shows tails every time when flipped $N-x$ times is $(1-\theta)^{N-x}$
the probability it shows heads $x$ times and then tails $N-x$ times when flipped $N$ times is $\theta^x(1-\theta)^{N-x}$
But there are other orders of results which also end up with it showing heads exactly $x$ times when flipped $N$ times. There are in fact ${N \choose x}$ possible such orders and they each have probability $\theta^x(1-\theta)^{N-x}$.
So the total probability of it showing heads exactly $x$ times when flipped $N$ times is ${N \choose x}\theta^x(1-\theta)^{N-x}$
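A small simulation of the coin-flip story (my own sketch, not from the answer) shows the empirical frequency of exactly $x$ heads matching the formula:
import random
from math import comb

N, theta, x, trials = 10, 0.3, 4, 200_000
hits = sum(1 for _ in range(trials)
           if sum(random.random() < theta for _ in range(N)) == x)
print(hits / trials)                                   # empirical frequency
print(comb(N, x) * theta**x * (1 - theta)**(N - x))    # about 0.2001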
|
H: Is the operation $i = x + y * \text{width}$ reversible?
In game programming, there is this very common operation we do to index positions in a two-dimensional matrix over a one-dimensional array. We basically associate the value $a_{xy}$ of some matrix $A$ with width $w$ to the index $i = x + y * w$ of the array. This image illustrates the overall concept, but it can be used for a myriad of other things:
However, I was recently wondering whether this operation is reversible; that is, having an index in the array and the width of the matrix, is it possible to obtain the corresponding $(x,y)$?
From my experience with mathematics, I was under the impression that this operation is both injective (since, with a fixed width, every $(x,y)$ position has a single corresponding index), and surjective (since all indexes were generated from at least one position). Nevertheless, I was unable to come up with an inverse of this transformation that didn't result in x being dependent on the y or vice-versa.
AI: Mathematically we have (assuming that $w > 0$ and $x, y \ge 0$ are integers)
$$
i \bmod w = (x + y \cdot w) \bmod w = x \bmod w = x
$$
because $x$ is in the range $0, \ldots, w-1$, and
$$
w \cdot y \le i \le w \cdot y + w - 1 < w \cdot (y+1)\\
\implies y \le \frac iw < y+1 \implies y = \left\lfloor \frac iw \right\rfloor \, .
$$
In many programming languages this can be computed as
x = i % w # remainder operator
y = i / w # truncating integer division
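A short round-trip check (a sketch in Python of the same idea; the snippet above is language-agnostic pseudocode):
w, h = 7, 5
for y in range(h):
    for x in range(w):
        i = x + y * w
        assert (i % w, i // w) == (x, y)   # divmod(i, w) returns (y, x)
print("round trip ok for all", w * h, "cells")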
|
H: Can a ratio of two members of a positive sequence converge to zero?
Can this happen $\lim_{n\to\infty}\frac{a_{n+1}}{a_n}=0$ for a sequence $a_n>0$?
AI: Yes. For example letting
$$a_n = \frac{1}{n!}$$
yields
$$\lim_{n\to\infty}\frac{a_{n+1}}{a_n} = \lim_{n\to\infty}\frac{1}{n+1} = 0$$
|
H: Prove that lim $\int_{E_n}f d\mu = 0$
Well they give me the following statement:
Let $f$ be integrable on $(X,\mathcal F,\mu)$ and $\{E_n\} \subset \mathcal F$ with $\{E_n\} \downarrow E$ and $\mu(E) = 0$. Prove that $\lim_{n \to \infty} \int_{E_n} f\, d\mu = 0$.
Well, my idea to prove this is to use the DCT (Dominated Convergence Theorem): since $f\cdot\mathbb{1}_{E_n} \rightarrow f\cdot\mathbb{1}_{E}$ pointwise and $|f\cdot\mathbb{1}_{E_n}| \le |f|$, I use the DCT and:
$\lim \int_{E_n} f\, d\mu = \lim \int f\cdot\mathbb{1}_{E_n}\, d\mu = \int_{X} \lim f\cdot\mathbb{1}_{E_n}\, d\mu = \int_{X} f\cdot\mathbb{1}_{E}\, d\mu = 0$, because $\mu(E) = 0$ makes $f\cdot\mathbb{1}_{E}=0$ $\mu$-a.e.
I don't know if this is correct or not.
AI: Let $(X,\mathcal F, \mu)$ be a measure space and let $f$ be nonnegative and integrable. For every $E \in \mathcal F$, define a new measure $\nu$ by $$\nu(E) = \int_E f\,d\mu$$
In your case, since $(E_n) \downarrow E$ we have $E_{n+1} \subset E_n$, and $\nu(E_1) \le \int_X f\, d\mu < \infty$ because $f$ is integrable, so thanks to the continuity from above of $\nu$ we have
$$ \lim_{n\to \infty} \int_{E_n} f d\mu = \lim_{n\to \infty} \nu(E_n) = \nu\left(\bigcap_nE_n \right) = \nu(E) = \int_E f d\mu = 0$$
|
H: Doubt in understanding definition of random variable
I have recently started studying probability. Kindly explain the following to me.
A random variable means a function from $\Omega \to \mathbb R$ (the set of real numbers). This is what I understood from my school books. But when I started reading Ross, it is given that, in addition to the above, any Borel set in $\mathbb R$ must have a preimage in $\Omega$ that is an event, i.e., the preimage of a Borel set in $\mathbb R$ must be an event.
Why is this 2nd part needed? Does it mean the random function (variable) $X$ is "onto"?
Another doubt: I thought a random variable is some variable. But I understand now that it is in fact a function. Then why is it called a random "variable"?
If $X, Y$ are 2 random variables, I know that $XY$ is also a random variable.
My proof:
$X: \Omega \to R \\ Y: \Omega \to R $
then
$XY(\alpha) = X(Y(\alpha)), \alpha$ is some element of the sample space $\Omega$. Since the composition of functions is a function from the same domain $\Omega$ to R, $XY$ is a function and so it is a random variable. Please correct me if I am wrong.
AI: There is a common saying: "A random variable is neither random nor a variable." The terminology might have come from some intuition at a non-measure-theoretic level, but in the formal definition it is simply a [measurable] function.
This "measurability" (preimage of Borel set must be measurable in $\Omega$) is important because you would like to assign a probability to events of the form $\{X \in B\}$ for Borel sets $B$. This probability is assigned from a probability measure on $\Omega$. If the preimage $X^{-1}(B)$ is not measurable in $\Omega$, then you cannot assign a probability to that event.
$XY$ is defined as the product $X(\alpha) Y(\alpha)$, not the composition.
|
H: Convergence/divergence of $\sum_{n=1}^{\infty} \frac{{(-1)}^n \tan{(n)}}{n^2}$
Convergence/divergence of $$\sum_{n=1}^{\infty} \frac{{(-1)}^n \tan{(n)}}{n^2}$$
I thought it diverges because for some values $n \approx \frac{\pi}{2}+\pi k$ we would get $\tan{n}=\pm \infty$, but the key says that this never holds because $\pi$ is irrational. It says $\tan{n}$ is bounded, and clearly the series without $\tan{n}$ converges. Can someone explain this further to me please?
AI: Let $\frac{p_n}{q_n}$ be a convergent of the continued fraction of $\pi$ such that $q_n$ is odd and $p_n$ is even. We have
$$ \left|\frac{\pi}{2} q_n - \frac{p_n}{2} \right| \leq \frac{1}{2q_n} $$
and since both $\sin$ and $\cos$ are Lipschitz-continuous we have
$$\left|\sin\left(\frac{p_n}{2}\right)\right|\geq 1-\frac{1}{2q_n}\approx 1-\frac{\pi}{4(p_n/2)}, $$
$$\left|\cos\left(\frac{p_n}{2}\right)\right|\leq \frac{1}{2q_n}\approx \frac{\pi}{4(p_n/2)} $$
so there is a sequence of natural numbers $n$ such that $|\tan(n)|$ is at least as large as $\left(\frac{2}{\pi}-\varepsilon\right)n$.
If we assume that the irrationality measure of $\pi$ is $>3$ we have that $\frac{\tan n}{n^2}$ is not even bounded.
On the other hand the irrationality measure of $\pi$ is still unknown (it is conjectured to be $2$, but nowadays we only know that it is $\leq 7.11$), so to discuss the convergence of such series, like the Flint Hills series, is pretty pointless.
Your series is probably convergent since $(-1)^n$ has bounded partial sums and $\frac{\tan(n)}{n^2}$ is probably convergent to zero without wild oscillations, but we currently lack the technology for proving such a claim.
|
H: What's a fair way to share fees in a group road trip with a personal and a rental car?
I'm planning vacations with a group of friends (12 people), and it involves a ~1200km return trip by car. Only one of us owns a suitable car (4 pax), so we've rented a minivan to transport the other 8, and we're debating on how best to share the costs.
Normally, if none of the cars were rentals, each car owner would just divide the price of fuel and tolls over their passengers, themselves included.
Logically, we could do the same with the rental fees. But for a passenger who could end up either in the personal car or in the rented minivan, the share of the cost will be vastly different depending on which car they end up in, for the same trip. That would be unfair to the passengers of the rental car.
We could also share the sum of all fees of both cars across all of us. But that would be unfair to the car owner, who ends up paying a higher trip cost than if it were just his car and passengers sharing the cost, despite owning a car and enduring the associated hassles and yearly expenses.
If we calculate it that way, we need to include the full, actual cost to him of using his car for the trip, including maintenance, amortization, and insurance.
What would be the best way to share these costs?
EDIT to avoid opinion-based interpersonal advice: I'm looking for the most "scientifically fair" solution, some kind of calculation model. Answers that challenge whether we need to be that precisely fair in a group of friends are absolutely right, and all parties have indeed agreed to a "simple and imperfect" solution. We're left with the academical question that is the object of my post: "but what would be the fairest model?"
AI: Here is a way to split the costs equally:
The car owner pays:
(cost of the car only/number of passengers in the car)
Everyone else pays:
((cost of the rental + the car) - (cost of the car only / number of passengers in the car)) / (number of passengers in total - 1)
In this way:
The owner pays a split cost of the car only
Everyone else pays a split cost of everything combined - what the owner paid
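A tiny calculator implementing this split (a sketch with made-up numbers, not figures from the question):
people_total = 12
people_in_car = 4                 # owner plus 3 passengers
car_cost = 180.0                  # fuel + tolls for the owner's car (hypothetical)
rental_cost = 520.0               # rental + fuel + tolls for the minivan (hypothetical)
owner_share = car_cost / people_in_car
others_share = (car_cost + rental_cost - owner_share) / (people_total - 1)
print(round(owner_share, 2), round(others_share, 2))
# sanity check: the shares add up to the total cost
assert abs(owner_share + (people_total - 1) * others_share - (car_cost + rental_cost)) < 1e-9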
|
H: Extending bounded functions to unbounded ones
In a proof I read - which I will omit here since it does not contribute to the question - something has to be proven for a general function $h: \mathbb{R} \rightarrow \mathbb{R}^+$ where $h$ is a Borel function.
At the beginning, the proof was restricted to a bounded function $h(x)$.
After the proof had been finished, it was stated that the assumption of boundedness of $h$ can easily be relaxed by considering $\min(n,h(x))$, where one should note that $\lim_{n\rightarrow \infty}\min(n,h(x))= h(x)$.
However, I don't manage to see why $\min(n,h(x))$ relaxes the assumption of the bounded function $h$. For me, $\min(n,h(x)) \leq h(x)$ holds, but $\min(n,h(x))$ is itself still a bounded function, so doesn't the boundedness assumption remain?
Thanks in advance for your help! :-)
AI: Without seeing what result is being proven about $h$, I can not be certain, but here is a common approach:
For every $n\in\mathbb{N}$, Apply the previous result to the bounded function $x\mapsto \textrm{min}(n,h(x))$.
Consider the sequence of functions, $(\textrm{min}\{n,h(x)\})_{n\in\mathbb{N}}$ which converges to $h(x)$. Every element of the sequence satisfies the property implied by your proof.
If whatever property in your proof is preserved "in the limit" then you are done. More precisely, if the set of functions satisfying your property is closed, then you are done. It should be noted that there are plenty of properties which are not preserved "in the limit" (e.g. boundedness)!
|
H: Is a function from $\Bbb{R}^+\times\Bbb{R}^+\rightarrow\Bbb{R}^+\times\Bbb{R}^+$ injective and/or surjective?
The function is defined on $\Bbb{R}^+\times\Bbb{R}^+$ as $f(x,y)=(2x+y, xy)$
I know the usual way to prove $f$ is injective is to assume $f(a,b)=f(c,d)$, and then show that this implies $(a,b)=(c,d)$.
That leads to two equations: $2a+b=2c+d$ and $ab=cd$.
I struggled to manipulate these, so I wondered if the function is not injective, and can I find a counterexample? I was lucky to realize that $f(3,4)=f(2,6)=(10,12)$. So the function is not injective, but I still haven't proved it in general, i.e. without relying on a specific counterexample.
The proof for surjective would look like: Assume $(a,b)\in\Bbb{R}^+\times\Bbb{R}^+$. Can we find, in terms of $a$ and $b$, $(x,y)\in\Bbb{R}^+\times\Bbb{R}^+$ such that $f(x,y)=(a,b)$?
Again there are two equations: $2x+y=a$, and $xy=b$
Solving the first equation for $y$ and substituting into the second equation leads to the quadratic equation $2x^2-ax+b=0$.
Using the quadratic formula, we get a discriminant of $a^2-8b$, which can easily be made negative by choosing $b>\frac{a^2}{8}$.
Does that mean any ordered pair $(a,b)$ satisfying $b>\frac{a^2}{8}$ is not in the image of the function?
AI: So the function is not injective, but I still haven't proved it in general, i.e. without relying on a specific counterexample.
A single counterexample is all you need. You don't need to prove it in general. If a single counterexample exists then the statement isn't true.
And if the statement isn't true..... then it's not true.
.......
However if you don't want to randomly grasp for counterexamples you can attempt to prove it is injective, find the snag, and point out it will fail for a class of counter examples.
i.e.
$2a+b=2c+d$ and $ab=cd$
If we let be $b = 2c-2a +d$ and then $a(2c-2a+d)= 2ac -2a^2 +ad = cd$ so
$d(c-a)= 2ac -2a^2 = 2a(c-a)$.
If $c=a$ then we do get that $b=d$ and $(a,b) = (c,d)$ but if $c\ne a$ then we get $d=2a$ and $b=2c$.
And we will always have $f(a,b)=f(\frac 12b,2a)= (2a +b, ab)=(2(\frac 12 b)+2a, \frac 12b\cdot 2a)$.
so any such $(a,b) \ne (\frac 12b, 2a)$ will be a counterexample.
But a single counterexample is good enough and probably easier.
(The way I'd do it, if $0$ were allowed in the domain, is to let $xy = 0$: if $x = 0$ we get $f(0,y) = (y,0)$ and if $y = 0$ we get $f(x,0) = (2x, 0)$, so with $x=1$ and $y=2$ we get $f(1,0) = (2,0)$ and $f(0,2) = (2,0)$..... FWIW)
surjectivity:
Does that mean any ordered pair $(a,b)$ satisfying $b>\frac{a^2}{8}$ is not in the image of the function?
Yeah, I think so.
Let's see. If $f(x,y) = (4, 3)$
Then $2x+y=4$ and $xy = 3$. $y= 4-2x$, so $x(4-2x) = 3$, i.e. $2x^2-4x+3 =0$, so $x =\frac {4\pm \sqrt{16-4\cdot 2\cdot 3}}{4}$ has no (real) solution.
It's not surjective.
Good job!
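As a quick sanity check of both parts (a rough sketch in Python; the test values are the ones used above):
def f(x, y):
    return (2 * x + y, x * y)

# non-injective: the counterexample above, plus the general family f(a, b) = f(b/2, 2a)
print(f(3, 4) == f(2, 6))            # True
a, b = 1.5, 7.0
print(f(a, b) == f(b / 2, 2 * a))    # True

# not surjective: for (a, b) = (4, 3) the quadratic 2x^2 - 4x + 3 = 0 has
# negative discriminant, matching the condition b > a^2/8
print(4 ** 2 - 8 * 3)                # -8 < 0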
|
H: Series equivalent to harmonic series
Question: In the accepted answer, In this link given here
Another simple series convergence question: $\sum\limits_{n=3}^\infty \frac1{n (\ln n)\ln(\ln n)}$
What is mean by last line "which is essentially the harmonic series"?
Does that mean that $n\ln^2 2 + \ln2\cdot\ln\ln 2 \leq n$ for all $n\in\mathbb{N}$, hence the reverse inequality holds for the reciprocals, and so, by the comparison test, since the harmonic series diverges, the series $\sum\frac {1}{n\ln^2 2 + \ln2\cdot\ln\ln 2}$ diverges?
Am I correct? Please help.
AI: That means
$$\frac{1}{n\ln^22+\ln2⋅\ln\ln2} \sim \frac{1}{n\ln^22},$$
and by summing $\frac{1}{n\ln^22}$ you find the harmonic series up to a factor.
More generally, if we have two sequences $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$ whose elements are positive real numbers, then we say that these sequences are asymptotically equivalent if
$$\lim_{n\to\infty}\frac{a_n}{b_n}=1,$$
and we write $a_n\sim b_n.$
One can verify that $\sum_n a_n$ converges iff $\sum_n b_n$ converges (hint: $\frac{1}{2}a_n \leq b_n \leq 2a_n$ for $n$ big enough).
|
H: Two definitions of $C_0(X)$. Do they coincide?
Let $X$ be a topological space. Then we can define
$$C_0(X):=\{f \in C(X)\mid \forall \epsilon > 0: \exists K \subseteq X \mathrm{\ compact}: \forall x \notin K: |f(x)| < \epsilon\}$$
If $X$ is locally compact, I have also seen the following definition:
$$C_0'(X) = \{f \in C(X)\mid \forall \epsilon > 0: \{x \in X: |f(x) | \geq \epsilon\} \mathrm{\ compact}\}$$
What is the relation between these two definitions of $C_0?$ Clearly $C_0'(X) \subseteq C_0(X)$. Do we have equality? Why do we require that $X$ is locally compact in the second definition?
AI: Yes, the two definitions are equivalent, even without assuming local compactness. (A usual terminology to describe this behavior is to say that such a function vanishes at infinity.)
To see that $C_0'(X)\subseteq C_0(X)$, take $f\in C_0'(X)$. Then, $K_{\varepsilon}\equiv\{x\in X\,|\,|f(x)|\geq\varepsilon\}$ is compact for every $\varepsilon>0$. Use this $K_{\varepsilon}$ to verify that $f\in C_0(X)$.
Conversely, suppose that $f\in C_0(X)$ and fix an arbitrary $\varepsilon>0$. There must exist some compact $K\subseteq X$ such that $x\in X\setminus K$ implies $|f(x)|<\varepsilon$. Therefore, the set $E\equiv\{x\in X\,|\,|f(x)|\geq\varepsilon\}$ is a subset of $K$. But $E$ is closed due to the continuity of $f$ and a closed subset of a compact set is compact in any topological space. The conclusion is that $E$ is compact, which implies that $f\in C_0'(X)$.
|
H: Delta approximating function integral
I was dealing with a problem where a delta approximating function is given. Apart from the fact that this delta approximating function has the form $$g_\epsilon(x) = \epsilon^{-3}g(\epsilon^{-1}x)\tag{1}$$ and that $g\in L^1(\mathbb{R}^3)\cap L^2(\mathbb{R}^3)$ with $\int g(x)\,\mathrm{d}^3x = 1$, no specific form of $g_\epsilon$ is given. What I was then trying to evaluate is the integral $$\int g_\epsilon(x)f(x)\,\mathrm{d}^3x\tag{2}$$ for which I would imagine that $$\lim_{\epsilon\to 0}\int g_\epsilon(x)f(x)\,\mathrm{d}^3x = f(0)\tag{3}$$ but I'm not quite sure what to do with the integral $(2)$, which is what interests me.
What can I say about $(2)$? And, if there is one, what is an explicit form of that integral?
Maybe a bit of context could help. I'm trying to study the resolvent of a Hamiltonian which contains a term $$\mu_\epsilon(g_\epsilon, \cdot)g_\epsilon\qquad \mu_\epsilon\in \mathbb{R}$$ and to do so I'm trying to find the action of the Hamiltonian in Fourier space by computing the integral $$\mu_\epsilon\int(g_\epsilon, f)g_\epsilon(x)e^{-ikx}\,\mathrm{d}^3x = \mu_\epsilon\int g_\epsilon(x) e^{-ikx}\,\mathrm{d}^3x\int \overline{g_\epsilon}(y)f(y)\,\mathrm{d}^3 y$$ where the second integral is what I'm trying to understand.
AI: Substituting $y=\epsilon^{-1}x$ (so $\mathrm{d}^3x=\epsilon^{3}\,\mathrm{d}^3y$), the integral in (2) is $\int_{\Bbb R^3}g(y)f(\epsilon y)\,\mathrm{d}^3y$. For sufficiently nice $f$, continuous at $0$ with finite $f(0)$, we can move a $\lim_{\epsilon\to0^+}$ operator inside the integral. This gives $\int_{\Bbb R^3}g(y)f(0)\,\mathrm{d}^3y=f(0)\int_{\Bbb R^3}g(y)\,\mathrm{d}^3y=f(0)$.
|
H: Venn Diagram Set Theory Question
So, given that sets $A$ and $B$ are disjoint, then $A \cap C$ and $B \cap C$ are also disjoint.
Now I'm having trouble figuring out how exactly to draw that $A \cap C$ and $B \cap C$ are also disjoint. Would I just draw set $C$ in the middle of sets $A$ and $B$, so that $C$ intersects with $A$ and $C$ intersects with $B$?
AI: I think you pretty much have it. The diagram would look something like this:
|
H: Matchings in bipartite graph
I was given the following statement: let $G=(X \cup Y, E)$ be a connected bipartite graph with $|X|=|Y|=4$ and $|E|=7$; then every maximal matching in $G$ is maximum.
I must say if it is true or false and justify.
By testing examples I deduced that it is true.
But I don't know how I could justify it.
All help is welcome! Thank you!
AI: The statement is not true though.
You can find maximal matchings that are not maximum matchings.
As a hint: consider a much simpler case, where $|X| = |Y| = 2$ and $|E| = 3$. If you want this to be connected, it must be a path $v_1, v_2, v_3, v_4$ of length three. The set $\{v_2v_3\}$ consisting of only the middle edge is a maximal matching, but not a maximum one (since $\{v_1v_2, v_3v_4\}$ has two edges, it is a maximum matching).
You can scale up this same idea to get a counter-example to your statement.
|
H: Does a left group action induce an open continuous map?
Let $X$ be a topological space and $G$ a topological group with an action $f:G\times X\to X$ so that $f(g,x)$ is denoted by $g\cdot x$. Let us fix $g\in G$, I want to know if given an open set $U\subset X$ the set $g\cdot U=\{g\cdot x\mid x\in U \}$ is homeomorphic to $U$. My guess is that the answer is positive for the map $x\mapsto g\cdot x$ is clearly bijective, so it suffices to show that it is both continuous and open. I figure that $g$ must be continuous since $f$ already is, but I cannot justify this, how could I prove openness of this map?
AI: In fact, for each $g$, the map $x\mapsto g\cdot x$ is a homeomorphism. Clearly it is continuous and bijective, and its inverse is $x\mapsto g^{-1}\cdot x$, which is also continuous. Therefore in particular the map is open.
To elaborate on why $x\mapsto g\cdot x$ is continuous, the map $f:\{g\}\times X\to X$ is continuous being the restriction of a continuous map, and $x\mapsto g\cdot x$ is $X\to \{g\}\times X\to X$.
|
H: Can an $n \times n$ matrix satisfy an $n$ degree polynomial equation other than its characteristic polynomial equation?
Can an $n \times n$ matrix satisfy an $n$ degree polynomial equation other than its characteristic polynomial equation?
I was curious if the characteristic polynomial equation is the only $n$ degree equation that can be satisfied by a matrix.
I have tried by trial and error to make up an equation for $2\times 2$ matrix but always end up with the characteristic polynomial.
AI: If the eigenvalues are distinct, then the characteristic polynomial is the only monic $n$th degree polynomial the matrix satisfies, since the eigenvalues are the roots, and so determine an $n$th degree polynomial up to a constant factor. If the minimal polynomial is the characteristic polynomial, we can make a similar statement. If not, then it's not true as ancientmathematician's answer shows.
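For a concrete illustration of the repeated-eigenvalue case (a standard example, not taken from the referenced answer): the $2\times 2$ identity matrix $I_2$ has characteristic polynomial $(x-1)^2$, yet
$$I_2^2-3I_2+2I_2=(1-3+2)I_2=0,$$
so $I_2$ also satisfies the monic quadratic $x^2-3x+2=(x-1)(x-2)$. Here the minimal polynomial is $x-1$, strictly smaller than the characteristic polynomial, which is exactly the situation in which uniqueness fails.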
|
H: Calculate $\frac{d}{dx}\left(x^x+x^{2x}+x^{3x}+...\right)$.
Calculate:
$$\frac{d}{dx}\left(x^x+x^{2x}+x^{3x}+...+x^{nx}\right), n \in\mathbb{N_{\geq 1}}$$
If I have $x^x$ as my first case, then I get $$\frac{d}{dx}x^x=x^x\left(\ln \left(x\right)+1\right)$$ Likewise, for $n=2$ I get: $$\frac{d}{dx}\left(x^x+x^{2x}\right)=x^x\left(\ln \left(x\right)+1\right)+2x^{2x}\left(\ln \left(x\right)+1\right)$$
For $n=3$:
$$\frac{d}{dx}\left(x^x+x^{2x}+x^{3x}\right)=x^x\left(\ln \left(x\right)+1\right)+2x^{2x}\left(\ln \left(x\right)+1\right)+3x^{3x}\left(\ln \left(x\right)+1\right)$$
If I keep following this logic, I can see that for the last $n$, I get:
$$\frac{d}{dx}\left(x^x+x^{2x}+x^{3x}+...+x^{nx}\right)=x^x\left(\ln \left(x\right)+1\right)+2x^{2x}\left(\ln \left(x\right)+1\right)+3x^{3x}\left(\ln \left(x\right)+1\right) + ...+nx^{nx}(\ln(x)+1)$$
I can then use induction to prove this. However, if instead of the function being finite, what if it was infinite? How would I find:
$$\frac{d}{dx}\left(x^x+x^{2x}+x^{3x}+...\right)$$
AI: If $|x^x|<1$, you can recognize the sum as a geometric series and evaluate it in closed form. Then, differentiate that closed form.
$$\lim_{n \to \infty} \left(x^x+x^{2x}+x^{3x}+\ldots+x^{nx}\right)=\frac{x^x}{1-x^x}=-1+\frac{1}{1-x^x}$$
$$\frac{d}{dx} \left(-1+\frac{1}{1-x^x}\right)=\frac{x^x \left(\ln{(x)}+1\right)}{{\left(1-x^x\right)}^2}$$
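As a quick numerical sanity check of both formulas (a sketch in Python; $x=0.5$ is an arbitrary test point with $x^x<1$):
import math

x = 0.5
s = sum(x ** (n * x) for n in range(1, 400))   # partial sum of x^x + x^(2x) + ...
closed = x ** x / (1 - x ** x)
print(abs(s - closed))                          # ~ 0, confirming the geometric sum

# derivative: central difference of the closed form vs. the formula above
h = 1e-6
g = lambda t: t ** t / (1 - t ** t)
num = (g(x + h) - g(x - h)) / (2 * h)
formula = x ** x * (math.log(x) + 1) / (1 - x ** x) ** 2
print(abs(num - formula))                       # small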
|
H: Open subset of C[0,1]?
Let us define $\|u\|_{\infty}=\sup_{x\in[0,1]}|u(x)|$ in the space $C[0,1]$. So we are working in the normed vector space $(C[0,1], \|\cdot\|_{\infty})$. $\;$Let: $$\boldsymbol{F}=\{f\in C[0,1]: f(x)>0,\; \forall\; x\in [0,1]\}.$$
Is $\boldsymbol{F}$ open?
Intuition says yes, it is open, because of the strict inequality. But then I thought: can't I just take a function $f\in \boldsymbol{F}$ with $\|f\|_{\infty}=a$ and then a function $g\notin \boldsymbol{F}$ with $\|g\|_{\infty}=a-\varepsilon$, so the difference is arbitrarily small, and wouldn't this say that $\boldsymbol{F}$ is not open?
Are my ideas correct? And if so, can someone maybe give more insight on how to formulate this proof more nicely, and also how to think about this space geometrically?
AI: Let $f\in F$; since $f$ is continuous on the compact interval $[0,1]$ and takes only positive values, its infimum $m$ on $[0,1]$ is attained and hence positive. The ball for the uniform norm centered at $f$ and having radius $m/2$ is contained in $F$, hence $F$ is open.
|
H: Solving a system with tensor quantities
I have an equation that involves
$\sum_{abcd} x_ax_bx_cx_d M^{abcd}+\sum_{ab}x_ax_bN^{ab}=0$ where M, N are known.
M has the property that $M^{abcd}=M^{cdab}$. N is symmetric in its indices.
I would like to do something like this (although I think it is not formal):
Define matrix $X$ with elements $X_{ab} \equiv x_a x_b$. Unpack elements in a vector $\vec{X}$ so that I have indices $\alpha \equiv (ab)$.
Now write $M^{abcd}\equiv \mathcal{M}^{\alpha\beta}$. In some sense I am defining a matrix from a tensor like quantity.
Finally I pack $N^{ab}$ in a vector $\vec{N}$ with elements $N_{\alpha}$.
Now my equation becomes:
$\vec{X}^T\mathcal{M}\vec{X}+\vec{X}^T\vec{N}=0\ (1)$
(I also have some extra condition on this, but for the sake of the question I omit it as I am interested in this part)
First question:
Is equation (1) legitimate? I feel that if I unpack everything I should reobtain the initial equation.
Second question:
Does it make sense to take the inverse of this matrix coming from a tensor?
Sorry for this naive questions, I've never encountered a problem like this and wanted to make sure these operation would make sense.
AI: I think that everything here is completely legitimate. If you'd like to connect to existing techniques and terminology, what you are doing to the vectors $X$ and $M$ is called tensor reshaping (or more specifically, vectorization and matricization).
Following the notation of the linked article: suppose that $M$ has size $n \times n \times n \times n$, and $N$ has size $n \times n$. If $\mu:[n] \times [n] \to [n^2]$ is a bijection, then
$$
\vec x_{\mu(a,b)} = X_{ab} = x_ax_b, \quad \mathcal M^{\mu(a,b)\mu(c,d)}= M^{abcd}, \quad \vec N_{\mu(a,b)} = N^{ab}.
$$
In terms of some other notation: if the standard (lexicographical) choice of $\mu$ is taken, then the resulting $X$ can be written as $X = (x \otimes x)(x \otimes x)^T$, where $x$ is a column-vector, $T$ denotes a transpose, and $\otimes$ denotes a Kronecker product.
Regarding your second question: it makes sense to take the inverse of $\mathcal M$ (assuming this inverse exists) in the same way that it makes sense to take the inverse of any square matrix. It is unclear, however, whether there is any meaningful connection to be made between the resulting inverse $\mathcal M^{-1}$ and the original tensor $M$.
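Here is a small numerical sketch of the reshaping, assuming NumPy is available (with the lexicographic $\mu$, the matricization is a plain reshape in C order):
import numpy as np

rng = np.random.default_rng(0)
n = 3
x = rng.standard_normal(n)
M = rng.standard_normal((n, n, n, n))
M = 0.5 * (M + M.transpose(2, 3, 0, 1))      # enforce M^{abcd} = M^{cdab}
N = rng.standard_normal((n, n))
N = 0.5 * (N + N.T)                          # symmetric N

lhs = np.einsum('a,b,c,d,abcd->', x, x, x, x, M) + np.einsum('a,b,ab->', x, x, N)

X = np.kron(x, x)                            # vectorization of X_{ab} = x_a x_b
Mmat = M.reshape(n * n, n * n)               # matricization of the 4-tensor
Nvec = N.reshape(n * n)
rhs = X @ Mmat @ X + X @ Nvec

print(np.isclose(lhs, rhs))                  # True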
|
H: Evaluate $\int\frac{dx}{(a+b\cos(x))^2},(a>b)$
Evaluate $$\int\frac{dx}{(a+b\cos(x))^2},(a>b)$$
I tried to write the $1$ in the numerator as $p'(x)(a+b\cos(x))-p(x)(a+b \cos(x))'$, making something like the quotient rule, but did not get anywhere after that.
AI: $$I=\int \frac{dx}{(a+b\cos x)^2} \\=\int\frac{dx}{\left(a+b\cdot \frac{1-\tan^2 \frac x2}{1+\tan^2 \frac x2} \right)^2}\\=\int\frac{1+\tan^2 \frac x2}{\left(a(1+\tan^2 \frac x2) +b(1-\tan^2\frac x2)\right)^2} \sec^2\frac x2 \ dx \\ \overset{t=\tan \frac x2}=2\int\frac{1+t^2}{\left[(a-b)t^2 +a+b\right]^2} dt$$ Writing $1+t^2=\frac{1}{a-b}\left[(a-b)t^2+a+b\right]-\frac{2b}{a-b}$, break this down into $$ \underbrace{\frac{2}{a-b}\int \frac{dt}{(a-b)t^2 +a+b} }_{I_1} -\underbrace{\frac{4b}{a-b}\int\frac{dt}{\left((a-b)t^2 +a+b\right)^2}}_{I_2}$$ $I_1$ is easy to evaluate (use the arctan function). For $I_2$, substitute $t^2=\frac{a+b}{a-b}\tan^2y$, which results in a massive simplification: $$\int\frac{dt}{\left((a-b)t^2 +a+b\right)^2} =\frac{1}{(a+b)\sqrt{a^2-b^2}}\int \cos^2y \ dy $$ Hopefully you can take it from here.
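A quick numerical sanity check of the substitution step and of the partial-fraction split above (a sketch assuming SciPy; $a=3$, $b=1$ are arbitrary values with $a>b$):
import numpy as np
from scipy.integrate import quad

a, b = 3.0, 1.0
lhs, _ = quad(lambda x: 1 / (a + b * np.cos(x)) ** 2, 0, np.pi / 2)
# t = tan(x/2) maps [0, pi/2] to [0, 1]
rhs, _ = quad(lambda t: 2 * (1 + t * t) / ((a - b) * t * t + a + b) ** 2, 0, 1)
print(abs(lhs - rhs))        # ~ 0

# pointwise check of the split 2(1+t^2)/D^2 = (2/(a-b))/D - (4b/(a-b))/D^2
t = 0.7
D = (a - b) * t * t + a + b
print(2 * (1 + t * t) / D ** 2 - (2 / (a - b) / D - 4 * b / (a - b) / D ** 2))  # ~ 0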
|
H: Question about finite sequences
Suppose $(a_1,\dots,a_n)$ is a sequence of real numbers such that $$a_1\leq a_2\leq \dots \leq a_n.$$
If $(b_1,\dots b_n)$ is a rearrangement of the sequence $(a_1,\dots,a_n)$ such that $$b_1\leq b_2\leq \dots \leq b_n,$$
then does it follow $a_1=b_1,\dots,a_n=b_n$?
I know that if the sequences were strictly monotonic, then the conclusion would have been obvious. How do I prove the question here, though... please help me with a proof maybe. Thank you in advance.
AI: Apart from the claim being intuitively obvious, here is a proof. Suppose $b_n\ne a_n$.
If $b_n<a_n$, then $a_n=b_i$ for some $i<n$, but then $a_n=b_i\le \ldots \le b_n<a_n$, contradiction.
Similarly, $b_n>a_n$ leads to a contradiction. Hence $a_n=b_n$. Now use induction on $n$.
|
H: Frobenius endomorphism not surjective in a ring
I'm trying to find a simple counterexample that proves that, given a ring A with prime characteristic, the Frobenius endomorphism is not surjective in general. Are there any elegant examples?
AI: You can think about $\mathbb{F}_p(X)$, the field of rational functions over $\mathbb{F}_p$. Then $X$ is not in the image of the Frobenius endomorphism: a preimage would be a rational function $f$ with $f^p = X$, which is impossible (compare degrees).
|
H: Let $a,b,c,d \in \mathbb{R}$ and let $A=\begin{pmatrix}a & b \\ c & d\end{pmatrix}$
Background
Find values of a,b,c,d such at $A^T$=-A and $det(A)≠0$
My work so far
$$\begin{pmatrix}-3 & -5 \\ -7 & -9\end{pmatrix}^T$$
which equals
$$\begin{pmatrix}-3 & -7 \\ -5 & -9\end{pmatrix}$$
and $det(A)=-8$, which satisfies $A^T$=-A and $det(A)≠0$
However, I need to find values for $a,b,c,d$, all non-zero, such that $A^2=I_2$. How would I go about doing this? Would this simply be the inverse, such as
$$\begin{pmatrix}-3 & -5 \\ -7 & -9\end{pmatrix}$$
which equals
$$\begin{pmatrix}\frac{9}{8} & \frac{-5}{8} \\ \frac{-7}{8} & \frac{3}{8}\end{pmatrix}$$
or am I incorrect here?
AI: Here are my suggestions:
Let $A = \begin{bmatrix}a & b\\ c& d\end{bmatrix}$.
Without choosing specific values for $a,b,c,d$, write out what $A^T$ is. If you can't remember, look in your book.
The same way, write out what $-A$ is.
Does the transpose do anything to the diagonal?
Is there any number such that $x = -x$?
Write out what $\det A$ is.
What needs to be true about the off-diagonal elements for both $A^T = -A$ and $\det A \neq 0$?
|
H: Projection onto a closed convex set in a general Hilbert space
Let $E$ denote a real Hilbert space and suppose $G \subset E$ is a nonempty closed convex set. I know that in this case, there is a unique nearest point in $G$ to each $x \in E$. Call this point $P_G(x)$.
I am trying to prove the following proposition:
Let $\bar x \in E \setminus G$, and consider the hyperplane $H = \{x \in E : \langle x - P_G(\bar x), \bar x - P_G(\bar x)\rangle \leq 0\}$. Then $G \subset H$ but $\bar x \not \in H$.
It is clear that $\bar x \not \in H$, for else $\bar x = P_G(\bar x)$ and then $\bar x \in G$.
To show $G \subset H$, I came up with the following proof that works in $\mathbf{R}^d$, but I am not sure how to justify (e.g. the differentiation) in the setting of a Hilbert space.
Suppose that $x' \in G$ but $x' \not \in H$; hence $\langle x' - P_G(\bar x), \bar x - P_G(\bar x)\rangle > 0$. Consider the point
$$
x_t := P_G(\bar x) + t\big(x' - P_G(\bar x)\big), \quad t \in \mathbf{R}
$$
And also consider
$$
f(t) = \|x_t - \bar x\|^2 - \|P_G(\bar x) - \bar x\|^2
$$
Note that
$$
\dot{f}(t) = 2\Big\langle P_G(\bar x) - \bar x + t\big(x' - P_G(\bar x)\big), x' - P_G(\bar x)\Big\rangle
$$
In particular $\dot{f}(0) < 0$ by hypothesis. Therefore for some $0 < \tau < 1$, we see that $f(\tau) < f(0) = 0$. This implies that
$$
\|x_\tau - \bar x\| < \|P_G(\bar x) - \bar x\|,
$$
but on the other hand $x_\tau \in G$, which is a contradiction.
AI: Let $p=P_G(\bar{x})$.
Since $\bar{x} \notin G$ you know that$ \|\bar{x}-p\| >0$.
Now note that $\langle \bar{x} - p, \bar x - p \rangle = \|\bar{x}-p\|^2 >0$ and so $\bar{x} \notin H$.
For any $x \in G$ we have $\|x-\bar{x}\|^2 = \|x-p+p-\bar{x}\|^2 \ge \|p-\bar{x}\|^2$,
and so $\|x-p\|^2 + 2 \langle x-p,p-\bar{x} \rangle \ge 0$ for all $x \in G$.
Replacing $x$ by $tx+(1-t)p$ with $t \in [0,1]$, we get
$t^2 \|x-p\|^2 + 2 t\langle x-p,p-\bar{x} \rangle \ge 0$ for all $x \in G$ and $t \in [0,1]$. Dividing by $t>0$ and letting $t\to 0^{+}$, we get
$\langle x-p,p-\bar{x} \rangle \ge 0$ for all $x \in G$.
In particular, if $x \in G$ then $x \in H$.
|
H: Conservation laws with source term
Consider the IVP
\begin{eqnarray}
u_t+F(x,u)_x=S(x,u)\\
u(x,0)=u_0(x)
\end{eqnarray}
If $S(x,u)=0$ and $u\in C([0,T],L^1(\mathbb{R}))$, then we have $$\int\limits_{\mathbb{R}}u_0(x)dx=\int\limits_{\mathbb{R}}u(x,t)dx,$$ which can be interpreted physically as conservation of mass.
What happens when $S(x,u)\neq 0$?
P.S. Please give a proof or suggest a reference..
AI: If $S \neq 0$ then $S$ serves its role as a source, for example, something that can "introduce mass". Let's compute as before:
\begin{align*}
\partial_t \int_{\mathbb{R}} u(x, t)\, dx &= \int_{\mathbb{R}} u_t(x, t)\, dx \\
&= -\int_{\mathbb{R}} \frac{d}{dx}F(x, u(x, t))\, dx + \int_{\mathbb{R}} S(x, u(x, t))\, dx \\
&= \int_{\mathbb{R}}S(x, u(x, t))\, dx.
\end{align*}
The $F$ integral vanishes if we assume sufficient decay, e.g. $F$ goes to zero at $\pm \infty$, by the fundamental theorem of calculus. Then we have
$$
\text{Mass at time $t$ = }\int_{\mathbb{R}}u(x, t)\, dx = \int_{\mathbb{R}}u_0(x)\, dx + \int_0^t \int_{\mathbb{R}}S(x, u(x, s))\, dx \, ds.
$$
So the interpretation is that the quantity $\int_{\mathbb{R}}u(x, t)\, dx$, rather than being constant in $t$, changes from its initial value by integrating the source $S$.
|
H: GLn group action rank
Consider the group action $GL(n,K) \times K^{n\times m} \to K^{n\times m}$, $(g,A)\mapsto gA$. The rank is an invariant but not a separating one (why?), so what do the orbits look like? If we also acted from the other side it would be the JNF, but here it is just applying Gaussian elimination, and to me it seems like only the rank is invariant?
AI: It is not true that the rank is the only invariant. Note that the kernel of $A$ (or equivalently, the row-space) is also invariant.
Each orbit under this action corresponds to a single reduced row echelon form matrix.
To see that the rank is not an orbit-separating invariant, verify that the matrices
$$
\pmatrix{1&0&0\\0&1&0}, \quad \pmatrix{1&0&0\\0&0&1}
$$
do not lie in the same orbit.
|
H: Limit of function of two variables
Let
$$f(x,y)=\frac{x^4y}{x^2+y^2}$$
for $(x,y)\ne (0,0)$. I want to prove that $\lim_{(x,y)\to (0,0)}\frac{x^4y}{x^2+y^2}=0$ by definition. So, I was wondering if there is any useful bound for
$$\frac{x^4|y|}{x^2+y^2}$$
in order to use the definition of limit for proving this statement. Any hint will be appreciated.
AI: If $x=0,y\neq 0$, then $f(0,y)=0$ and the limit is clearly zero. Else, we have $x^2+y^2 \geq x^2$, whence $(x^2+y^2)^{-1}\leq x^{-2}$, so
$$
0 \leq \left|\frac{x^4y}{x^2+y^2}\right| \leq \left|\frac{x^4y}{x^2}\right| =\left|{x^2y}\right|\to 0
$$
|
H: I was wondering if a general identity of $\cos^{2m}(x)=\,$[in terms of] $\cos^{2m-1}$ exists?
I was wondering if a general identity of $\cos^{2m}(x)=\,$[in terms of] $\cos^{2m-1}$ exists?
One particular example where $m=1$ is:
$$\cos^2(x)=\frac12(\cos(2x)+1).\tag{1}$$
but can this be generalised to an even power of $\cos^{2m}(x)$ expressed in terms of the next odd power down $\cos^{2m-1}$?
So say for a random example maybe $\cos^{42}(x)$ equals something in terms of $\cos^{41}$ .
AI: Let's try it for the cube.
$\cos(x)^3-(a\cos(bx)^2+c)=\frac 14\cos(3x)+\frac 34\cos(x)-\frac a2\cos(2bx)-\frac a2-c$
So to make the most things cancel, set $a=\frac 12,\ b=\frac 32,\ c=-\frac 14$
$$\cos(x)^3-\left(\frac 12\cos(\frac{3x}2)^2-\frac 14\right)=\frac 34\cos(x)$$
And we cannot get rid of the term in $\cos(x)$.
We can then replace it by $\ \frac 32\cos(\frac x2)^2-\frac 34\ $ to get an expression in terms of $\sum a_i\cos(b_ix)^2$
$$\cos(x)^3= \frac 12\cos\left(\frac{3x}2\right)^2+\frac 32\cos\left(\frac x2\right)^2-1$$
So this is going to be recursive, and you can find such a combination for any degree. I suppose you could try to work out a closed form from the Chebyshev polynomials; I am not sure this is the right course of action, though, or even whether such a lengthy expression is desirable. You wanted a short formula for $\cos^{m+1}$ in terms of $\cos^m$, and it ends up being not so short...
|
H: Why is $ \frac{5}{64}((161+72\sqrt{5})^{-n}+(161+72\sqrt{5})^{n}-2)$ always a perfect square?
I'm working on a puzzle, and the solution requires me somehow establishing that
$$ f(n):=\frac{5}{64}\Big(\big(161+72\sqrt{5}\big)^{-n}+\big(161+72\sqrt{5}\big)^{n}-2\Big)$$
is a perfect square for $n\in \mathbb{Z}_{\geq 0}$.
I've done a lot of simplification to get to this point, and am stuck here. I can provide the context of the puzzle if necessary, but it's pretty far removed from what I have here. The goal is basically to show that a formula generates solutions to a given equation.
Any tips on how to proceed?
Here's the first few values:
$$\begin{array}{|c|c|}
\hline
n&\text{value}\\ \hline
0&0\\ \hline
1& 5^2 \\ \hline
2&90^2 \\ \hline
3& 1615^2\\ \hline
4& 28980^2\\ \hline
\end{array}$$
AI: Let $a=9+4\sqrt{5}$, then $$f(n) = {5\over 64}(a^n-a^{-n})^2$$
Now let $$b_n = {\sqrt{5}\over 8}(a^n-a^{-n})$$
so it is enought to prove that every $b_n$ is an integer. This can be done easly if you write a recursive formula for $b_n$:
$$b_{n+1}= 18b_n-b_{n-1}$$ where $b_0=0$ and $b_1=5$ and prove that fact with induction.
|
H: Evaluate $\lim_{x\to+\infty} \frac{x-\sin{x}}{x+\sin{x}}$
Evaluate $$\lim_{x\to+\infty} \frac{x-\sin{x}}{x+\sin{x}}$$
$\sin{x}$ is a bounded function, but I still have no clue what to do. Any hint?
AI: Use:
$$
\frac{x-\sin x}{x+\sin x}=\frac{x+\sin x-2\sin x}{x+\sin x}=1-\frac{2\sin x}{x+\sin x}
$$
It should be easy enough to evaluate the limit using the rightmost expression.
|
H: What is the Fourier transform of functions of the type $p(x)e^{-x^2}$ $(p \in \mathbb{C}[x])$?
First some context to my question:
I have two sets $M=\{p(x)e^{-x^2}:p\in \mathbb{C}[x]\}$ and $N=\{\hat{f}:f\in M\}$. Both are left modules of the Weyl algebra $A_1$. There are a few other technical details that I will not get into because they would not be relevant to my question. I need to show that $M\cong N$ as $A_1$-modules. I have defined a mapping from $M$ to $N$ as $f \mapsto \hat{f}$. I am trying to prove that this mapping is a bijective $A_1$-module homomorphism.
To show that I have an $A_1$-module homomorphism, I first need to know what the Fourier transform of the type of functions in $M$ looks like. It used to be that I knew how to calculate the Fourier transformation, but that was some time ago. I would appreciate if someone can provide an answer to this question or perhaps point to an article that contains the answer. I am not that interested in the details of how to find the answer; just knowing what $\hat{f}$ is suffices for my purposes.
The definition of $\hat{f}$ in the notes I am reading is
AI: If it makes no difference to you, I'd use $e^{-\frac{1}{2}x^2}$ instead, which simplifies the formulas. Then you can use the fact that $h_n(x) = (-1)^n\frac{d^n}{dx^n} e^{-\frac{1}{2}x^2}$ is of the form $p_n(x)e^{-\frac{1}{2}x^2}$ where $p_n$ is a monic polynomial of degree $n$. (cf. Hermite polynomials, Hermite functions) In particular, you can write any polynomial $p$ as a linear combination of such polynomials:
$$ p(x) e^{-\frac{1}{2}x^2} = \sum_{k=0}^{\deg p} \alpha_k h_k(x) $$
And $h_n$ is trivial to Fourier transform, since $\mathcal F[\frac{d^n}{dx^n} f](w) = (iw)^n \hat f(w)$. In particular, the Fourier transforms are again functions of the same form. This is because the Hermite functions are eigenfunctions of the Fourier transform. If you stick with $e^{-x^2}$ then nothing substantially changes, besides that you will get the exponent $e^{-w^2/4}$ in the frequency domain (https://en.wikipedia.org/wiki/Fourier_transform#Square-integrable_functions,_one-dimensional)
|
H: Integrating $ \int_{0}^{\pi} \sin^4 (x) \cos(kx) dx$
Caveat: $k \in \mathbb{N}$.
So after some repeated integration by parts and some evaluations, I get
$ k^2 G = 4G - 3 \int_{0}^{\pi} \sin^2 (2x) \cos(kx) dx$
so this becomes,
$ G = \frac{3}{4-k^2} \int_{0}^{\pi} \sin^2 (2x) \cos(kx) dx$
Now this result is already weird, because if I put $k=2$ it becomes undefined. If I work it further, I get
$ G = \frac{3}{4-k^2} \frac{16}{k^3} \int_{0}^{\pi} \sin(4x) \sin(kx) dx$
Now how do I prove this integral is $0$ for $k>5$,
and what are the values for $k \in \{0,1,2,3,4\}$?
PS: please don't reduce $\sin^4 (x)$ into linear factors. I want a solution finishing my 'parts' method.
AI: To be clear, I think the best way of doing this is to use the Chebyshev formulas to write $\sin^4(x)= 1/8 (-4 \cos(2 x) + \cos(4 x) + 3)$, and then to use the orthogonality of $(\cos(jx))_{j=1}^{\infty}$ when integrated over $[0,\pi]$. However, if you insist on avoiding this method, one approach is to convert everything to complex-exponentials:
$$
I_k=\int_0^{\pi} \left(\frac{e^{ix}-e^{-ix}}{2i}\right)^4 \frac{e^{ikx}+e^{-ikx}}{2}\,dx
$$
$$
=\frac{1}{32}\int_0^{\pi} \left({e^{ix}-e^{-ix}}\right)^4 (e^{ikx}+e^{-ikx})\,dx
$$
$$
=\frac{1}{32}\int_0^{\pi} \left({e^{4ix}-4e^{2ix}+6-4e^{-2ix}+e^{-4ix}}\right)(e^{ikx}+e^{-ikx})\,dx
$$You can foil this out and use the exponential rule; after a tedious computation, you get that:
$$
I_k ``=" \frac{24 \sin(k π)}{64 k - 20 k^3 + k^5} = \begin{cases}
\frac{3\pi}{8},& k=0\\
\frac{-\pi}{4},& k= 2\\
\frac{\pi}{16},& k= 4\\
0,& \text{else}
\end{cases},
$$which agrees with the original result (here $``="$ means equal, except at $k=0,2, 4$, where one should take the limit).
|
H: Question about some term in Sage while using GF(9)
I tried to define an elliptic curve over $GF(9)$ in Sage, and a term $z2$ appeared in the output:
I know that it has something to do with the definition of $GF(9)$ - probably it describes how it works as a $GF(3)$-space.
However, I do not know how to access the necessary information about $z2$ (computing its minimal polynomial does not work either). Could you please explain to me what $z2$ is? Thank you!
AI: You can specify the name for the root of an irreducible polynomial to be used in construction of $GF(9)$, e.g.:
sage: G = GF(9,'a')
You can also get the polynomial used:
sage: G.modulus()
x^2 + 2*x + 2
And then
sage: E2 = EllipticCurve(G, [-1,0])
sage: E2
Elliptic Curve defined by y^2 = x^3 + 2*x over Finite Field in a of size 3^2
sage: E2.rational_points()
[(0 : 0 : 1), (0 : 1 : 0), (1 : 0 : 1), (2 : 0 : 1), (a : a : 1), (a : 2*a : 1), (a + 1 : a : 1), (a + 1 : 2*a : 1), (a + 2 : a : 1), (a + 2 : 2*a : 1), (2*a : a + 2 : 1), (2*a : 2*a + 1 : 1), (2*a + 1 : a + 2 : 1), (2*a + 1 : 2*a + 1 : 1), (2*a + 2 : a + 2 : 1), (2*a + 2 : 2*a + 1 : 1)]
sage: E2.cardinality()
16
|
H: Given $f:\mathbb R \backslash \{-1,0\} \to \mathbb R$, $f(x) = \frac{ |x \sin x| }{x + x^2}$: for what values $c \in\mathbb R$ does $f$ have a limit at $c$?
I tried to use the fact that $x$, $\sin x$, $x^2$ and $|x|$ are continuous functions, so the only values at which $f$ could fail to have a limit are $c = -1$ and $c = 0$, but I cannot go on. Am I right?
AI: I continued trying with this problem and I found out that $\lim_{x\to{0^+}}\frac{\sin x}{1 + x}=0$, and the same with $x \to 0^-$, where I changed the sign of the absolute value and it equals $0$ too. So it remains to decide whether $\lim_{x \to -1 } f(x)$ exists.
|
H: At most one connected component of $\{z: |f(z)| < M \}$
I am trying to show that if $f$ is an entire function, then there is at most one connected component of the complement in $\widehat{\mathbb{C}}$ of the set $\{ z: |f(z)| < M \}$.
Based on the post At most one connected component of unbounded portion of entire function. taking the complement in $\widehat{\mathbb{C}}$ rather than $\mathbb{C}$ is necessary. If I can show that all the components of the complement are unbounded, then they all contain $\infty$ and so there is only one component. However, is it true that all the components are unbounded?
AI: On a bounded component of $\{z: |f(z)| \ge M\}$, $f$ would have to attain a maximum.
|
H: Show that $E\exp(-tX_i) \leq \frac{1}{t}$
This is exercise 2.2.10 present in the book High-Dimensional Probability, by Vershynin.
Let $X_1,\ldots,X_n$ be non-negative independent r.v with the densities bounded by $1.$ Show that the MGF of $X_i$ satisfies
$$
E \exp(-tX_i)\leq \frac{1}{t}
$$
After that, deduce that for any $\varepsilon >0$, one has
$$
P\left(
\sum^n_{i=1}X_i \leq \varepsilon n
\right)\leq
(e\varepsilon)^n
$$
Some help would be much appreciated. I was not able to prove even the first inequality. This question is present in the section dealing with Hoeffding's inequality, so it probably is used somehow.
AI: The first inequality comes from the fact that they have densities bounded by 1:
$$ \mathbb{E}[e^{-t X_i}] = \int_0^\infty e^{-tx} p_i(x)dx \le \int_0^\infty e^{-tx}dx = \frac 1t. $$
For the second inequality, we can show the bound assuming the $X_i$ are independent. We have that for any $t > 0$
\begin{align*}
P\left( \sum_{i=1}^n X_i \le \varepsilon n \right) &= P\left( \sum_{i=1}^n(-t X_i) \ge -\varepsilon nt \right) \\
&= P\left( \exp \left(\sum_{i=1}^n(-t X_i)\right) \ge e^{-\varepsilon nt} \right) \\
&\le e^{\varepsilon nt} \mathbb{E}\left[\exp \left(\sum_{i=1}^n(-t X_i)\right)\right] \\
&= e^{\varepsilon nt}\prod_{i=1}^n \mathbb{E}[e^{-tX_i}] \\
&\le e^{\varepsilon n t} \frac{1}{t^n} \\
&= \left( \frac{e^{\varepsilon t}}{t} \right)^n.
\end{align*}
Now choose $t$ to minimize $\frac{e^{\varepsilon t}}{t}$. The value that minimizes $\frac{e^{\varepsilon t}}{t}$ is $t^* := \frac 1{\varepsilon}$, so we have
$$ P\left( \sum_{i=1}^n X_i \le \varepsilon n \right) \le \left( \frac{e^{\varepsilon t^*}}{t^*} \right)^n = (e \varepsilon)^n.$$
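A rough Monte Carlo sanity check of the final bound, using uniform $[0,1]$ variables (their densities are bounded by $1$); $n=5$ and $\varepsilon=0.2$ are arbitrary test values (a sketch assuming NumPy):
import numpy as np

rng = np.random.default_rng(0)
n, eps, trials = 5, 0.2, 10 ** 6
X = rng.uniform(size=(trials, n))
empirical = np.mean(X.sum(axis=1) <= eps * n)
bound = (np.e * eps) ** n
print(empirical, bound)   # roughly 0.008 vs 0.048, so the bound holds (it is not tight)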
|
H: One property of limits
How to prove this property of limits:
$ \lim_{x \to a} \frac{f(x)}{g(x)} = \lim_{x \to a} \frac{f(x) - f(a)}{g(x) - g(a)}$?
Also, is there intuition for this result?
Note. I have seen this when learning L'Hospital rule so I am not sure if this is true always.
AI: This is not true in general. Taking $f(x) = x$ and $g(x) = x^2$, we look at $$ \lim_{x \rightarrow 2} \frac{x - 2}{x^2 - 4} = \lim_{x \rightarrow 2} \frac{1}{x+2} = \frac{1}{4}$$ whereas $$ \lim_{x \rightarrow 2} \frac{x}{x^2} = \frac{1}{2}.$$
That said, if $f(a)=g(a)=0$ and $f,g$ are differentiable near $x=a$ with $f',g'$ continuous and $g'(a) \neq 0$, then we have that $$ \lim_{x \rightarrow a} \frac{f(x)}{g(x)}=\lim_{x \rightarrow a} \frac{f(x)-f(a)}{g(x)-g(a)} = \lim_{x \rightarrow a} \frac{\frac{f(x)-f(a)}{x-a}}{\frac{g(x)-g(a))}{x-a}} = \frac{f'(a)}{g'(a)}.$$ My guess is the exercise is meant to get you to prove a simple case of L'Hospital's rule.
|
H: Why is any ring homomorphism from a ring of integers to an algebraically closed field (char=0) injective?
Let $\mathcal{O}_L$ be the ring of integers in a number field $L$. Let $K$ be an algebraically closed field of characteristic zero, and let $f:\mathcal{O}_L \to K$ be a ring homomorphism. Why must $f$ be injective?
I know the following: if $\alpha \in \mathcal{O}_L$ has minimal monic irreducible polynomial $h_\alpha(x) \in \mathbb{Z}[x]$, then $f(\alpha)$ is also a root of $h_\alpha(x)$. That is, $f$ preserves Galois orbits of $\operatorname{Gal}(L/\mathbb{Q})$ acting on $\mathcal{O}_L$. That is, $f(\alpha) = \sigma(\alpha)$ for some $\sigma \in \operatorname{Gal}(L/\mathbb{Q})$. However, if $\alpha, \beta \in \mathcal{O}_L$ with $f(\alpha) = f(\beta)$, I don't know how to conclude that $f(\alpha) = \sigma(\alpha)$ and $f(\beta) = \sigma(\beta)$ for the same Galois automorphism $\sigma$. If I could do that, I could apply $\sigma^{-1}$ to both and conclude that $\alpha = \beta$, but I don't see how this works.
AI: Every nonzero ideal of $\mathcal{O}_L$ contains an integer (if $a\in\mathcal{O}_L$ is nonzero, then its minimal polynomial has nonzero constant term and that constant term is in the ideal generated by $a$). So, if the kernel of $f$ were nontrivial, it would contain a nonzero integer, but this is impossible since every nonzero integer remains nonzero in $K$ since $K$ has characteristic $0$.
|
H: If the sequence $(x_n)$ diverges, then $(\sqrt[3]{x_n})$ diverges
I need to determine whether this is true or not. I really tried some definitions and propositions, but I could not reach the answer.
AI: Suppose $(\sqrt[3]{x_n})$ converges. Then, since the product of two convergent sequences converges, $(\sqrt[3]{x_n^2})$ converges. Multiplying once more gives that $(x_n)$ converges. Now take the contrapositive of this implication to obtain your statement.
|
H: Solving system of equations with three unknowns
I need to solve an equation of a line using three known coordinate pairs (x0, y0), (x1, y1), and (x2, y2).
The equation of the plane is, of course, ax + by + c = 0.
I'm writing a little piece of code to calculate the position of a point w.r.t a line as it changes, so I just want to reduce the math as much as possible for efficiency, which means pre-solving this in terms of the x and y coordinates.
I can derive the equations to solve for a, b and c by hand in advance and write the code that way, but it ends up an ugly mess of substitutions to do it.
Is there a more straightforward approach that, giving my coordinates, I can solve for the constants quickly, without having to rely on matrix math or pulling in a math library?
AI: A line by two points is
$$\begin{vmatrix}x&y&1\\x_0&y_0&1\\x_1&y_1&1\end{vmatrix}=
\begin{vmatrix}y_0&1\\y_1&1\end{vmatrix}x-
\begin{vmatrix}x_0&1\\x_1&1\end{vmatrix}y+
\begin{vmatrix}x_0&y_0\\x_1&y_1\end{vmatrix}=0.$$
A plane by three points is
$$\begin{vmatrix}x&y&z&1\\x_0&y_0&z_0&1\\x_1&y_1&z_1&1\\x_2&y_2&z_2&1\end{vmatrix}
=\begin{vmatrix}y_0&z_0&1\\y_1&z_1&1\\y_2&z_2&1\end{vmatrix}x-\begin{vmatrix}x_0&z_0&1\\x_1&z_1&1\\x_2&z_2&1\end{vmatrix}y+\begin{vmatrix}x_0&y_0&1\\x_1&y_1&1\\x_2&y_2&1\end{vmatrix}z-\begin{vmatrix}x_0&y_0&z_0\\x_1&y_1&z_1\\x_2&y_2&z_2\end{vmatrix}
=\begin{vmatrix}y_{10}&z_{10}\\y_{20}&z_{20}\end{vmatrix}x-\begin{vmatrix}x_{10}&z_{10}\\x_{20}&z_{20}\end{vmatrix}y+\begin{vmatrix}x_{10}&y_{10}\\x_{20}&y_{20}\end{vmatrix}z-\begin{vmatrix}x_0&y_0&z_0\\x_{10}&y_{10}&z_{10}\\x_{20}&y_{20}&z_{20}\end{vmatrix}
=0$$
where $a_{ij}:=a_j-a_i$.
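For the coding side of the question, here is a minimal sketch (the helper names are made up) that reads the line coefficients straight off the $2\times 2$ cofactors in the first formula, together with a sign test for the position of a point relative to the line:
def line_through(p0, p1):
    # coefficients (a, b, c) of a*x + b*y + c = 0 through p0 and p1,
    # taken from the cofactor expansion of the 3x3 determinant above
    (x0, y0), (x1, y1) = p0, p1
    return (y0 - y1, x1 - x0, x0 * y1 - x1 * y0)

def side(p, coeffs):
    # sign of a*x + b*y + c: positive on one side of the line, negative on the other
    a, b, c = coeffs
    return a * p[0] + b * p[1] + c

coeffs = line_through((0, 0), (2, 1))
print(coeffs, side((1, 3), coeffs), side((1, -3), coeffs))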
|
H: Augmented matrix row operations (3x4)
Background
Given the augmented matrix
$$A=\begin{bmatrix}1 & 3 & -5 & 3\\4 & 10 & -6 & -4\\-4 & -14 & -4 & -5\end{bmatrix}$$
perform each row operation in the order specified and enter the final result.
My work so far
a) First: $R2→R2-4R1$
$$\begin{bmatrix}0 & -2 & 14 & -16\end{bmatrix}$$
b) Second: $R3→R3+4R1$
$$\begin{bmatrix}0 & -2 & -24 & 7\end{bmatrix}$$
Am I on the right track here? I'm using RREF to work these out.
AI: You're on exactly the right track! The only thing is when you do a row (column) operation, that row isn't isolated, it becomes that row (column) in the new matrix. That is, $R_2 \rightarrow R_2 - 4R_1$ gives
$$ \begin{bmatrix} 1 & 3 & -5 & 3 \\ 0 & -2 & 14 & -16 \\ -4 & -14 & -4 & -5 \end{bmatrix}.$$
Then, when you perform the next operation, the same thing happens with the next row. That is $R_3 \rightarrow R_3 + 4R_1$ gives
$$ \begin{bmatrix} 1 & 3 & -5 & 3 \\ 0 & -2 & 14 & -16 \\ 0 & -2 & -24 & 7 \end{bmatrix}.$$
Keep up the good work!
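A quick check of the two row operations (a sketch using NumPy):
import numpy as np

A = np.array([[ 1,   3,  -5,  3],
              [ 4,  10,  -6, -4],
              [-4, -14,  -4, -5]], dtype=float)
A[1] -= 4 * A[0]   # R2 -> R2 - 4*R1
A[2] += 4 * A[0]   # R3 -> R3 + 4*R1
print(A)
# row 2 becomes (0, -2, 14, -16) and row 3 becomes (0, -2, -24, 7)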
|
H: $L^\infty(\mathbb{R}^n)$ function that is also homogenous with degree zero
Consider a homogeneous function $m$ in $\mathbb{R}^n$ with degree zero, ie
$$m(\lambda \xi) = m(\xi), \;\;\;\;\;\; \forall \lambda >0.$$
Is it true that $m \in L^\infty(\mathbb{R}^n)$ if, and only if, $m \in L^\infty(S^{n-1})$??
Attempt: [EDITED] The sufficiency isn't as trivial as I thought it would be. If $m$ is in $L^\infty(\Bbb R^n)$, there exists a Lebesgue-null set $A$ in $\Bbb R^n$ such that
$$|m(x)| \leq \|m\|_{L^\infty(\Bbb{R}^n)}, \;\;\;\;\; \forall x \in \Bbb{R}^n\backslash A.$$
I can't find the null-measure set in $S^{n-1}$ such that the inequality above holds on the complement of this set...
We prove the necessity. Suppose that $m \in L^\infty(S^{n-1})$. So there exists $A \subset S^{n-1}$ with measure zero in $S^{n-1}$ such that
$$|m(\xi)| \leq \|m\|_{L^\infty(S^{n-1})}, \;\;\;\; \forall\;\; \xi \in A^c.$$
So I defined a subset $B$ of $\mathbb{R}^n$ by $B:=\{ \xi \neq 0 : \xi/|\xi| \in A\}$ and I want to prove that this subset $B$ has measure zero. Supose it's proved that $|B|=0$. Then, $\xi \notin B$ implies that $\xi/|\xi| \notin A$, and using the homogeneity of $m$, we have
$$|m(\xi)| = |m(\xi/|\xi|)| \leq \|m\|_{L^\infty(S^{n-1})}.$$
So, we conclude that $\|m\|_{L^\infty(\mathbb{R}^n)} < \infty$.
AI: If you know that spherical coordinates work for the Lebesgue integral, you may write
$$|B|=\int_{\mathbb{R}^n} \chi_B(x)\,dx=\int_0^\infty\int_{S^{n-1}}\chi_B(\theta,r)r^{n-1}\,d\theta\,dr=\int_0^\infty\int_{S^{n-1}}\chi_A(\theta)\,d\theta r^{n-1}\,dr=0$$
because the inner integral over $S^{n-1}$ vanishes ($A$ has measure zero in $S^{n-1}$).
|
H: Is $A^2$ the same thing as $A^TA$?
Assume A is a matrix. Is $A^2$ the same thing as $A^TA$? I keep on seeing $A^2$ but it's tough to find a walkthrough of calculating $A^2$.
AI: Here's a counterexample:
Let $A = \begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}$. We have:
$$
A^2 =
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
=
\begin{pmatrix}
1 & 2 \\
0 & 1 \\
\end{pmatrix}
\not=
\begin{pmatrix}
1 & 1 \\
1 & 2 \\
\end{pmatrix}
=
\begin{pmatrix}
1 & 0 \\
1 & 1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
= A^T A
$$
And also
$$
A^2 =
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
=
\begin{pmatrix}
1 & 2 \\
0 & 1 \\
\end{pmatrix}
\not=
\begin{pmatrix}
2 & 1 \\
1 & 1 \\
\end{pmatrix}
=
\begin{pmatrix}
1 & 1 \\
0 & 1 \\
\end{pmatrix}
\cdot
\begin{pmatrix}
1 & 0 \\
1 & 1 \\
\end{pmatrix}
= AA^T
$$
|
H: Central limit theorem; shooter hitting a target
A shooter hits a target with probability $0.4$ and shoots at the target $150$ times. Find at least one interval that, with probability $0.8$, contains the number of the shooter's hits.
A random variable $S_{150}: B(150;0.4)$ represents the number of hits. We are looking for an interval $(a,b)$.
Also, $ES_{150}=150*0.4=60$ and $DS_{150}=150*0.4*0.6=36$.
Using central limit theorem, I set the probability to be equal to $0.8$:
$P(a<S_{150}<b)=P(\frac{a-60}{6}<\frac{S_{150}-60}{6}<\frac{b-60}{6})=0.8$
I am just lost after this step, how do I find a and b?
AI: You have $\sigma = \sqrt{0.6 \cdot 0.4}$. Using CLT, just plug in $n, \sigma, n, p$ that you are given:
$$
P(\frac{a-np}{\sqrt{n p (1-p)}}\leq \frac{S_n - np}{\sqrt{n p(1-p)}} \leq \frac{b-np}{\sqrt{n p (1-p)}}) \approx \Phi(\beta) - \Phi(\alpha) = 0.8
$$
Since $Z \sim N(0,1)$ is symmetric around $0$, you can easily find for which $\alpha, \beta$ the area under its $\varphi$ is equal to $0.8$. Once you've got them, plug in the equations to get $a,b$.
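A worked numerical version of this recipe with the symmetric choice $\alpha=-\beta$ (a sketch assuming SciPy):
from scipy.stats import norm

n, p = 150, 0.4
mu = n * p                              # 60
sigma = (n * p * (1 - p)) ** 0.5        # 6
beta = norm.ppf(0.9)                    # so that Phi(beta) - Phi(-beta) = 0.8
a, b = mu - beta * sigma, mu + beta * sigma
print(round(a, 1), round(b, 1))         # approximately 52.3 and 67.7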
|
H: Evaluate $\lim_{x\to+\infty} \frac{3x^3+x\cos{\sqrt{x}}}{x^4\sin{\frac{1}{x}}+1}$
Evaluate $$\lim_{x\to+\infty} \frac{3x^3+x\cos{\sqrt{x}}}{x^4\sin{\frac{1}{x}}+1}$$
My attempt: $$\lim_{x\to+\infty} \frac{3x^3+x\cos{\sqrt{x}}}{x^4\sin{\frac{1}{x}}+1}=\lim_{x\to+\infty} \frac{3x^2+\cos{\sqrt{x}}}{x^3\sin{\frac{1}{x}}+\frac{1}{x^3}}$$ Let $t=\frac{1}{x}$, then
$$\lim_{x\to+\infty} \frac{3x^2+\cos{\sqrt{x}}}{x^3\sin{\frac{1}{x}}+\frac{1}{x^3}}=\lim_{t\to0^+} \frac{\frac{3}{t^2}+\cos{\frac{1}{\sqrt{t}}}}{\frac{1}{t^3}\sin{t}+t^3}$$
That's where I got stuck. I think this substitution didn't help much. Maybe there's a way to apply the squeeze theorem, but it's not so obvious. Hint, please?
AI: $$\lim_{x\to+\infty} \frac{3x^3+x\cos{\sqrt{x}}}{x^4\sin{\frac{1}{x}}+1}=\lim_{x\to+\infty} \frac{3x^3(1+\frac{x\cos{\sqrt{x}}}{3x^3}) }{x^3 \left( \frac{\sin{\frac{1}{x}}}{\frac{1}{x}} +\frac{1}{x^3} \right)} = 3$$
|
H: Using Bernoulli's Inequality to prove $n^{\frac{1}{n}} < 2-\frac{1}{n}$
I was trying to prove that $$n^{\frac{1}{n}}<2-\frac{1}{n}$$ for all natural numbers $n \ge 2$.
The base case of n = 2 was trivial.
Looking at the $n+1$ case, I wrote that $$(n+1)^{\frac{1}{1+n}} \ge 1 + \frac{n}{n+1}$$ for some $n>2$, but I wasn't sure how to proceed from here
AI: Write
$$\Bigl(2-\frac1n\Bigr)^n=\biggl(1+\Bigl(1-\frac1n\Bigr)\biggr)^n >1+n\Bigl(1-\frac1n\Bigr)=1+n-1.$$
|
H: Strong induction and mistake
What is the fault in this reasoning by strong induction?
For all $ A $ and $ B $ in $ M_p (K) $ and
all integers $ n $ we have: $ A ^ n B = B $
The proof :
Denote $\forall n\in \mathbb N,\quad P (n) $ : $ A ^ n B = B $
The property is true at rank $ n = 0 $ because $ A ^ 0
B = I_p B = B $
Let $ n $ be a natural integer, suppose $ P (n) $
true up to rank $ n $.
$ A ^ {n + 1} B = A A ^ n B = AB $ (hypothesis at rank
$ n $) and $ A B = B $ (assumption at rank $ 1 $)
we thus obtain $ A ^ {n + 1} B = B $
and $ P (n + 1) $ is true
AI: In the inductive step you are only allowed to use statements that have already been established. When you prove $P(1)$ (the step from $n=0$ to $n+1=1$), the argument uses $P(1)$ itself (the "assumption at rank $1$"), which is not yet available.
The inductive step as written requires that $P(1)$ is already true, so it is only valid for proving $P(n+1)$ when $n \geq 1$,
but since you only have $P(0)$ as a base case, you needed an inductive step that is valid for all $n \geq 0$; it breaks down exactly in the passage from $P(0)$ to $P(1)$.
|
H: Relationship between image of a linear transformation and its support
Suppose I have a linear transformation $T: V \rightarrow V$.
The kernel of the transformation is the subspace spanned by the vectors $v\in V$ such that $Tv = 0$. The orthogonal complement to the kernel is called the support of $T$. Finally, the image of $T$ is the subspace spanned by vectors $Tv$ for $v\in V$.
Are the image and the support always the same if the linear transformation is from one vector space to itself? If yes, how does one show this and if not, what is the relationship between the two?
EDIT: Thanks to Ted Shifrin for the helpful comment. If $T$ is self-adjoint, then pick any $v$ from the kernel of $T$ and some $w\in V$. It holds that $0 = \langle Tv, w\rangle = \langle v, Tw\rangle$ i.e. $Tw$ is orthogonal to $v$ or $Tw = 0$. This makes it clear that the image and the support are indeed the same.
AI: If $V$ is a finite dimensional vector space (inner product space presumably), then the only relationship between the image and the support is that they necessary have the same dimension (as a consequence of the rank-nullity theorem).
|
H: Solve the inequality $|3x-5| - |2x+3| >0$.
In order to solve the inequality $|3x-5| - |2x+3| >0$, I added $|2x+3|$ to both sides of the given inequality to get $$|3x-5| > |2x+3|$$ Then assuming that both $3x-5$ and $2x+3$ are positive for certain values of $x$, $$3x-5 > 2x+3$$ implies $$x>8$$ If $3x-5$ is positive and $2x+3$ is negative for certain values of $x$, then $$3x-5 > -2x-3$$ implies $$5x >2$$ implies $$x > \dfrac{2}{5}$$ I'm supposed to get that $x < \dfrac{2}{5}$ according to the solutions, but I'm not sure how to get that solution.
AI: Whenever you tackle absolute value algebra like this, first find your critical points. Here:
$$3x-5=0\implies x=\frac 53;\ 2x+3=0\implies x=-\frac 32$$
So we have three cases to deal with: $x\geq \frac 53;\ -\frac32\leq x<\frac53;\ x<-\frac32$
In the first, both moduli are positive, so $3x-5>2x+3\implies x>8$, as you got.
In the second, only $|2x+3|$ is positive, so $-(3x-5)>2x+3\implies x<\frac25$
Taking into account our range for the second case, we see a solution here in $-\frac32<x<\frac25$.
Can you deal with the third case, where $x<-\frac32$?
|
H: Value of $\lim_{n \to \infty} \sqrt[n^2]{\sqrt{3!!}\cdot \sqrt[3]{5!!} \ldots \sqrt[n]{(2n-1)!!}}$
$$L=\lim_{n \to \infty} \sqrt[n^2]{\sqrt{3!!}\cdot \sqrt[3]{5!!} \ldots \sqrt[n]{(2n-1)!!}}$$
It turns out that this limit equals $1$. The solution key uses Stolz-Cesaro theorem and I was wondering if this could be evaluated without this theorem.
The furthest I got to was
$$\ln{L}=\frac{1}{n^2} \sum_{i=2}^n \frac{1}{i} \ln{\left(2i-1\right)!!}$$
This may not help though. Any suggestions?
AI: We have $$\begin{align}
0&< \ln L_n=\frac{1}{n^2} \sum_{i=2}^n \frac{1}{i} \ln{\left(2i-1\right)!!}\\&<\frac{\log((2n)!)}{n^2}\sum_{i=2}^n\frac1i\\
&<\frac{\log((2n)!)}{n^2}H_n\\
&<\frac{2n\log(2n)}{n^2}\left(\log n +\gamma +1\right)\to0
\end{align}$$
as $n\to\infty$
so that $\ln L_n\to 0$ and $L_n\to1$.
|
H: Closed subset of metric spaces
Let $X$ be a metric space with $p \in X$ a point, $C \subset X$ a subset.
Show $C$ is closed iff $C \cap \overline{B_R(p)}$ is closed for any $R>0$.
Supposing $C$ is closed is pretty easy as intersecting it with closed ball is still closed.
So then assume $C \cap \overline{B_R(p)}$ is closed (so it equals its closure) and want to show $C$ is closed, i.e., $C = \overline{C}$.
Is this the way to go about it? Clearly $C \subset \overline{C}$, so we wish to show $\overline{C} \subset C$ by taking $x \in \overline{C}$ and showing $x \in C$?
Because then $x$ is a limit point of $C$ so any open ball (for any choice of $R>0$) centered at $p$ intersects $C$ nontrivially? Am I on the right track? Just a hint will suffice not an entire solution. Thanks!
AI: suppose $x$ is a limit point for $C$, but maybe not an element of $C$. Let $R\,=\,d(p,x)\,+\,5.$ By hypothesis, the set
$$C\,\cap\,\overline{B_R(p)}\,\cap\,\overline{B_1(x)}$$
is closed. There is a sequence in $C$ that converges to $x$. This sequence is ultimately in the unit ball at $x$, and we have made $\overline{B_R(p)}$ big enough to include that unit ball. The triple intersection above is closed, so contains all of its limit points. Hence $x$ belongs to $C$.
|
H: Is a finite union of countable sets at most countable?
I thought of this result:
A finite union of countable sets is at most countable.
which I also tried to prove:
Let $A_1, A_2, \dots, A_n$ be a finite collection of countable sets. Then, each $A_i$ must be enumerable, that is, we can write
\begin{align*}
A_1 &= \tau_1, \tau_2, \dots \\
A_2 &= \zeta_1, \zeta_2, \dots \\
&\vdots \\
A_n &= \phi_1, \phi_2, \dots
\end{align*}
We can then construct a sequence as follows:
\begin{equation}\tag{1}
\tau_1, \tau_2, \dots, \dots, \zeta_1, \zeta_2, \dots, \dots, \dots, \phi_1, \phi_2, \dots
\end{equation}
Note that some of the terms in the sequence in (1) might be repeated. If an infinite number of terms of this sequence are repeated, then we can retain a single copy of each of the repeated terms and eliminate all the duplicates; this will leave us with a finite number of terms in the sequence in (1). Similarly, if only a finite number of terms in the sequence in (1) are repeated, then eliminating all the duplicates as explained above; we will be left with a sequence of terms from (1) which can still be indexed by set of positive integers. Therefore, $\bigcup_{i=1}^{n}A_i$ is at most countable.
I have two questions. Is it true that a finite union of a countable sets is at most countable? Secondly, are there any inaccuracies in my proof?
AI: The claim is true, and as the comments mention, an even stronger statement is true: a countable union of countable sets is countable (if you want you can speak of "at most countable" everywhere... depending on how exactly you defined "countable").
However, your proposed proof is incorrect, because it is not clear what your sequence actually is. How are you supposed to go "infinitely far" using
\begin{align}
\tau_1, \tau_2, \tau_3, \dots
\end{align}
and then once you "list out infinitely many, start again" with
\begin{align}
\dots \zeta_1, \zeta_2, \zeta_3, \dots?
\end{align}
What are the $\dots$ even supposed to mean? Note that you should only use the three little dots when you're $100\%$ confident that you can translate that intuitive notation into something more rigorous and unambiguous (because the misunderstanding of "$\dots$" is the cause of so many confusions in math).
This is incorrect. One approach to construct a sequence is as follows:
\begin{align}
\tau_1, \zeta_1, \dots, \phi_1| \tau_2, \zeta_2, \dots \phi_2|\tau_3, \zeta_3, \dots, \phi_3, \dots
\end{align}
(I put vertical bars $|$ only to help "see" what I mean; just think of them as a comma if you wish).
If you want to be slightly more precise, you can start by indexing the elements of the set slightly differently: for each $i \in \{1, \dots, n\}$, let the elements of the set $A_i$ be denoted as $A_i = \{a_{ki}\}_{k \in \Bbb{N}}$. Then, the sequence we're defining is as follows
\begin{align}
a_{11}, \dots a_{1n}| a_{21}, \dots, a_{2n}| a_{31}, \dots a_{3n}| \dots
\end{align}
In words, you're "going down vertically $n$ times, then moving to the right to the next column, then going down again, and then repeating".
If you want the countable union case, look up the diagonal argument.
|
H: Laurent Series of $\frac{1}{z(1-z)}$ in neighborhood of $z=1$ and $z=0$
So, the question is: Laurent Series of $\frac{1}{z(1-z)}$ in neighborhood of $z=1$ and $z=0$.
I know I can find Laurent series' all over MSE, but in an effort to build my own intuition, and to see the entire process, I'm just trying to show all the details.
So, here is my attempt:
First, we need to break $\frac{1}{z(1-z)}=\frac{1}{z}+\frac{1}{1-z}$ using partial fraction decomposition.
First question: "In a neighborhood of $z=0$ and $z=1$" does this mean we consider $|z|<1$ for "neighborhood of $z=0$" and $|z-1|<1$ for "neighborhood of $z=1$"?
Supposing that I am right about my first question, then $\frac{1}{1-z}=\sum_{n=0}^\infty z^n$, but we still have to play with the $\frac{1}{z}$ part.
Second question: Is there an obvious way to break down $\frac{1}{z}$?
I was thinking I could answer my second question this way: $\frac{1}{z}=\frac{1}{1+z-1}=\frac{1}{1-(-(z-1))}=\sum_{n=0}^\infty(-1)^n(z-1)^n$.
So, getting back to the problem, we would have, in a neighborhood of $z=0$, the Laurent series of $\frac{1}{z(1-z)}$ is $\sum_{n=0}^\infty(-1)^n(z-1)^n+\sum_{n=0}^\infty z^n=\sum_{n=0}^\infty(-1)^nz^n(z-1)^n$..... right?
Now, for the "in a neighborhood of $z=1$" part, I (assuming I am thinking of my questions $1$ correctly) consider $|z-1|<1$, and so $\frac{1}{1-z}=-\sum_{n=1}^\infty\frac{1}{z^n}$ and $\frac{1}{z}=-\sum_{n=1}^\infty\frac{(-1)^n}{(z-1)^n}$ (then add), right?
Any insight, tips, ideas, etc. would be greatly appreciated! Thank you.
AI: It is much simpler than what you did:
Near $z=0$, just write
$$\frac1{z(1-z)}=\frac1z(1+z+z^2+z^3+\dots)=\frac1z+1+z+z^2+\dotsm.$$
Near $z=1$, use substitution first: set $u=z-1$, so
\begin{align}\frac1{z(1-z)}&=-\frac1{u(1+u)}=-\frac1u(1-u+u^2-u^3+\dotsm)\\
&=-\frac1u+1-u+u^2-\dotsm
\end{align}
and eventually, substitute $u$ with $z-1$.
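A quick symbolic check of both expansions (a sketch assuming SymPy; the second line performs the same substitution $u=z-1$ as above):
import sympy as sp

z, u = sp.symbols('z u')
f = 1 / (z * (1 - z))
print(f.series(z, 0, 3))                  # about z = 0: 1/z + 1 + z + z**2 + ...
print(f.subs(z, 1 + u).series(u, 0, 3))   # about z = 1: -1/u + 1 - u + u**2 - ...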
|
H: Rational numbers as a series of rationals
Any real number $0<x\leq 1$ can be written as
$$
x = \sum_{n=1}^\infty \frac{1}{p_1\dots p_n},
$$
where $p_1\leq p_2\leq\dots$ is a unique sequence of integers $>1$. The number $x$ is a rational if and only if for some $n_0\in\mathbb{N}$, $p_n=p_{n_0}$ for all $n>n_0$. I know that for $\pi_k = p_1\dots p_k$ and $x=a/b$,
$$
0<a\pi_k - b m_k<\frac{b}{p_{k+1}-1}
$$
for some $m_k\in\mathbb{N}$. The "only if" part is easy; I have already done it. However, I do not know how to proceed with the "if" part. The construction is given here.
EDIT: Here is my try. Since $x=a/b$, we have
$$
0<a\pi_k - b m_k<\frac{b}{p_{k+1}-1}.
$$
Then $\frac{b}{p_{k+1}-1}\rightarrow 0$ unless for some $n_0\in\mathbb{N}$, $\frac{b}{p_{k+1}-1}$ remains finite. That is $p_n=p_{n_0}$ for all $n>n_0$.
Is this claim correct?
AI: Direction 1: ($\Leftarrow$) Assume that for $n\geq n_0$ we have $p_n=p_{n_0}$. Without loss of generality, we may as well also assume that for $n<n_0$, $p_n<p_{n_0}$. Then
$$x=\sum_{n=1}^\infty \frac{1}{p_1p_2...p_n}=\sum_{n=1}^{n_0-2}\frac{1}{p_1p_2...p_n}+\sum_{n=n_0-1}^{\infty}\frac{1}{p_1p_2...p_n}$$
Now, denote
$$\beta=p_1p_2...p_{n_0-1}$$
Then the sum above becomes
$$=\sum_{n=1}^{n_0-2}\frac{1}{p_1p_2...p_n}+\sum_{n=n_0-1}^{\infty}\frac{1}{\beta p_{n_0}^{n-(n_0-1)}}$$
$$=\sum_{n=1}^{n_0-2}\frac{1}{p_1p_2...p_n}+\frac{1}{\beta}\sum_{n=0}^{\infty}\frac{1}{ p_{n_0}^{n}}=\sum_{n=1}^{n_0-2}\frac{1}{p_1p_2...p_n}+\frac{1}{\beta(1-p_{n_0}^{-1})}$$
which is rational.
Direction 2: ($\Rightarrow$) Assume that
$$x=\sum_{n=1}^\infty \frac{1}{p_1p_2...p_n}$$
is rational. Then $x=\frac{a}{b}$ can be written such that $\gcd(a,b)=1$. Then
$$a=\sum_{n=1}^\infty \frac{b}{p_1p_2...p_n}$$
Now, assume by way of contradiction that $p_n$ is not eventually constant. Since $p_n$ is an integer, increasing, and not constant, we know
$$\lim_{n\to\infty}p_n=\infty$$
Let $N$ be the smallest index such that $p_{N}-1>b$
Then we know
$$a\prod_{n=1}^{N-1}p_n=\prod_{n=1}^{N-1}p_n\sum_{n=1}^{N-1}\frac{b}{p_1p_2...p_n}+\prod_{n=1}^{N-1}p_n\sum_{n=N}^\infty \frac{b}{p_1p_2...p_n}$$
$$\Rightarrow a\prod_{n=1}^{N-1}p_n-\prod_{n=1}^{N-1}p_n\sum_{n=1}^{N-1}\frac{b}{p_1p_2...p_n}=\sum_{n=N}^\infty \frac{b}{p_{N}p_{N+1}...p_n}$$
Now, note that the left hand side is an integer as
$$\prod_{i=1}^n p_i\bigg\vert \prod_{n=1}^{N-1}p_n\text{ for }n=1,2,...,N-1$$
The left side is also positive as
$$a=\sum_{n=1}^{\infty}\frac{b}{p_1p_2...p_n}>\sum_{n=1}^{N-1}\frac{b}{p_1p_2...p_n}$$
This implies
$$\sum_{n=N}^\infty \frac{b}{p_{N}p_{N+1}...p_n}\in\mathbb{N}$$
(where we have excluded $0$ from $\mathbb{N}$). But we also know
$$p_N\leq p_{N+1}\leq p_{N+2}\leq \cdots$$
which implies
$$0<\sum_{n=N}^\infty \frac{b}{p_{N}p_{N+1}...p_n}\leq\sum_{n=N}^\infty \frac{b}{p_{N}^{n-N+1}}=\frac{1}{p_N}\frac{b}{1-p_N^{-1}}=\frac{b}{p_N-1}<1$$
This is a contradiction as we have found and integer between $0$ and $1$. We conclude that at some point, $p_n$ becomes a constant sequence.
|
H: Proving symplectic identity
Let $\Lambda$ be a skew-symmetric matrix and $Q$ a symmetric matrix. Let $\text{Id}$ be the identity matrix and $h > 0$ a real number. I am trying to prove the following identity:
$$
(\text{Id} + \frac{h}{2} \Lambda Q) \Lambda (\text{Id} + \frac{h}{2} \Lambda Q)^\top = (\text{Id} - \frac{h}{2} \Lambda Q) \Lambda (\text{Id} - \frac{h}{2} \Lambda Q)^\top
$$
Here is my attempt at this where I use the facts that $Q^\top=Q$ and $\Lambda^\top = -\Lambda$. We have,
\begin{align}
(\text{Id} + \frac{h}{2} \Lambda Q) \Lambda (\text{Id} + \frac{h}{2} \Lambda Q)^\top &= (\text{Id} - \frac{h}{2} \Lambda^\top Q) \Lambda (\text{Id} - \frac{h}{2} \Lambda^\top Q)^\top \\
&= (\text{Id} - \frac{h}{2} \Lambda^\top Q) \Lambda (\text{Id} - \frac{h}{2} Q\Lambda) \\
&= (\text{Id} - \frac{h}{2} Q\Lambda )^\top \Lambda (\text{Id} - \frac{h}{2} Q\Lambda)
\end{align}
This looks almost correct but I've got the transpose on the wrong side. Does anyone see how I can manipulate this to the correct form?
AI: Note that $(\text{Id} + \frac{h}{2} \Lambda Q) \Lambda = \Lambda(\text{Id}+\frac{h}{2} Q\Lambda)$ and $(\text{Id} + \frac{h}{2} \Lambda Q)^\top = \text{Id}-\frac{h}{2}Q\Lambda$.
Thus \begin{align} (\text{Id} + \frac{h}{2} \Lambda Q) \Lambda (\text{Id} + \frac{h}{2} \Lambda Q)^\top &= \Lambda(\text{Id}+\frac{h}{2} Q\Lambda)(\text{Id}-\frac{h}{2}Q\Lambda) \\ &= \Lambda(\text{Id} - \frac{h^2}{4}Q\Lambda Q\Lambda) \end{align}
The same argument for $-\Lambda$ (which is also skew-symmetric) yields \begin{align} (\text{Id} - \frac{h}{2} \Lambda Q) \Lambda(\text{Id} - \frac{h}{2} \Lambda Q)^\top &= -(\text{Id} - \frac{h}{2} \Lambda Q) (-\Lambda)(\text{Id} - \frac{h}{2} \Lambda Q)^\top \\ &= -(-\Lambda)\left(\text{Id} - \frac{h^2}{4}Q(-\Lambda) Q(-\Lambda)\right) \\ &= \Lambda(\text{Id} - \frac{h^2}{4}Q\Lambda Q\Lambda) \\ &= (\text{Id} + \frac{h}{2} \Lambda Q) \Lambda (\text{Id} + \frac{h}{2} \Lambda Q)^\top \end{align}
as desired.
|
H: Evaluate $\lim_{x\to+\infty} \frac{\sqrt{x}(\sin{x}+\sqrt{x}\cos{x})}{x\sqrt{x}-\sin({x\sqrt{x})}}$
Evaluate $$\lim_{x\to+\infty} \frac{\sqrt{x}(\sin{x}+\sqrt{x}\cos{x})}{x\sqrt{x}-\sin({x\sqrt{x})}}$$
My attempt: $$\lim_{x\to+\infty} \frac{\sqrt{x}(\sin{x}+\sqrt{x}\cos{x})}{x\sqrt{x}-\sin({x\sqrt{x})}}=\lim_{x\to+\infty} \frac{\sin{x}+\sqrt{x}\cos{x}}{x-\frac{\sin({x\sqrt{x})}}{\sqrt{x}}}$$
I see it is possible to apply the squeeze theorem to the negative term in the denominator, but I do not know about if this is the right path. Any hint?
AI: HINT
You should factor $x\sqrt{x}$ instead of $\sqrt{x}$. Precisely,
\begin{align*}
\frac{\sqrt{x}(\sin(x) + \sqrt{x}\cos(x))}{x\sqrt{x} - \sin(x\sqrt{x})} = \frac{\frac{\sin(x)}{x} + \frac{\cos(x)}{\sqrt{x}}}{1 - \frac{\sin(x\sqrt{x})}{x\sqrt{x}}}
\end{align*}
Can you take it from here?
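(If you want to check where the hint leads — this is just one way to finish, not part of the hint itself: as $x\to+\infty$ we have $\frac{\sin x}{x}\to 0$, $\frac{\cos x}{\sqrt{x}}\to 0$ and $\frac{\sin(x\sqrt{x})}{x\sqrt{x}}\to 0$, so the quotient tends to $\frac{0+0}{1-0}=0$.)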
|
H: Is the $\arg\min$ of a strictly convex function continuous?
Let $X\subset \mathbb{R}^n$ and $Y\subset \mathbb{R}^m$ be compact and convex sets, and let $f:X\times Y\rightarrow \mathbb{R}$ be a continuous function. Suppose that for each $y$, $f(x,y)$ is strictly convex.
Define the function $g : Y \to X$ as follows:
$$g(y) = \arg\min_x f(x,y)$$
Is $g$ continuous? If not, are there additional restrictions that we can impose on $f$ such that it is? Thanks for any help!
(Two similar questions have been asked but in "Is the function argmin continuous?" there is no assumption of convexity and in the answer to "the continuity of argmin on convex function" it is assumed that $f$ is continuously differentiable.)
AI: Since $x \mapsto f(x,y)$ is strictly convex, there is a unique minimiser,
call it $g(y)$.
Suppose $y_k \to y^*$, and let $x_k = g(y_k)$ and $x^* = g(y^*)$.
Choose any subsequence indexed by $I \subset \mathbb{N}$; since $X$ is compact, $(x_k)_{k\in I}$ has an accumulation point $x'$. Since $f(x_k,y_k) \le f(x,y_k)$ for all $x \in X$ and $k \in I$, passing to the limit along a sub-subsequence converging to $x'$ and using the continuity of $f$, we see that
$f(x',y^*) \le f(x,y^*)$
for all $x \in X$ and since the minimiser is unique, we have $x'=x^*$.
Since every subsequence of $(x_k)$ therefore has a further subsequence converging to $x^*$, it follows that $x_k \to x^*$, and so $g$ is continuous.
The proof just relies on $X$ being compact and $x \to f(x,y)$ having a unique minimiser for each $y$.
|
H: Show that $A^3+4A^2+A=I_3$ for 3x3 matrix
Let $A$ be the matrix
$$\begin{pmatrix}-1 & 2 & 1 \\ 2 & -4 & -4 \\ 2 & 0 & 0 \end{pmatrix}$$
Show that $A^3+4A^2+A=I_3$
How would I go about doing this? The first step would suffice, but I'm having difficulty starting this off.
AI: Hint: find the characteristic equation of $A$, then apply the Cayley–Hamilton theorem.
|
H: Integrating the exponential over the area bounded by the functions $y=x$ and $y=x^3$
Can someone please help me solve the following problem below? Thank you
Compute the integral of the function over the area bounded by the functions $y=x$ and $y=x^3$ $$f(x,y) = e^{x^2}$$
AI: HINT
You can think about it in terms of physics. If the function $f(x,y)$ denotes the mass distribution, the mass corresponding to the area between the curves $y = x$ and $y = x^{3}$ is given by
\begin{align*}
M = \int_{D}\mathrm{d}m = \int_{D}f(x,y)\mathrm{d}x\mathrm{d}y
\end{align*}
In your case, $D = \{(x,y)\in\textbf{R}^{2}\mid (0\leq x\leq 1)\wedge(x^{3}\leq y\leq x)\}$.
Hence we have the following result:
\begin{align*}
\int_{0}^{1}\int_{x^{3}}^{x}e^{x^{2}}\mathrm{d}y\mathrm{d}x = \int_{0}^{1}(x - x^{3})e^{x^{2}}\mathrm{d}x
\end{align*}
Can you take it from here?
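If you want to check your final answer (one way to finish, not part of the hint): with the substitution $u=x^{2}$, $\mathrm{d}u=2x\,\mathrm{d}x$, one gets
\begin{align*}
\int_{0}^{1}(x - x^{3})e^{x^{2}}\mathrm{d}x = \frac{1}{2}\int_{0}^{1}(1-u)e^{u}\mathrm{d}u = \frac{1}{2}\Big[(2-u)e^{u}\Big]_{0}^{1} = \frac{e-2}{2}
\end{align*}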
|
H: Evaluating $\lim_{x\to+\infty} \frac{\sqrt{x}\cos{x}+2x^2\sin\left({\frac{1}{x}}\right)}{x-\sqrt{1+x^2}}$
Evaluate $$\lim_{x\to+\infty} \frac{\sqrt{x}\cos{x}+2x^2\sin({\frac{1}{x}})}{x-\sqrt{1+x^2}}$$
My attempt: $$\lim_{x\to+\infty} \frac{\sqrt{x}\cos{x}+2x^2\sin\left({\frac{1}{x}}\right)}{x-\sqrt{1+x^2}}=\lim_{x\to+\infty} \frac{x^2\sqrt{x}\left(\frac{\cos{x}}{x^2}+2\frac{\sin{\frac{1}{x}}}{\sqrt{x}}\right)}{x\left(1-\sqrt{1+\frac{1}{x^2}}\right)}$$$$=\lim_{x\to+\infty} x\sqrt{x}\cdot \frac{\left(\frac{\cos{x}}{x^2}+2\frac{\sin{\frac{1}{x}}}{\sqrt{x}}\right)}{1-\sqrt{1+\frac{1}{x^2}}}$$
Both numerator and denominator tend to zero, while $x\sqrt{x} \to +\infty$. Any help is appreciated.
AI: HINT:
Let $x=\frac1t$
$$\lim_{x\to+\infty} \frac{\sqrt{x}\cos{x}+2x^2\sin({\frac{1}{x}})}{x-\sqrt{1+x^2}}=\lim_{t\to 0}\frac{\sqrt{t}\cos\frac1t+\frac{2}{t}\sin t}{1-\sqrt{t^2+1}}$$
$$=\lim_{t\to 0}\frac{\left(\sqrt{t}\cos\frac1t+2\cdot \frac{\sin t}{t}\right)(1+\sqrt{t^2+1})}{(1-\sqrt{t^2+1})(1+\sqrt{t^2+1})}$$
$$=-\lim_{t\to 0}\frac{\left(\sqrt{t}\cos\frac1t+2\cdot \frac{\sin t}{t}\right)(1+\sqrt{t^2+1})}{t^2}$$
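From here (a concluding remark of my own; note that $t\to 0^{+}$ since $x\to+\infty$): the numerator tends to $(0+2\cdot 1)\cdot(1+1)=4$ while $t^{2}\to 0^{+}$, so the last expression, and hence the original limit, is $-\infty$.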
|
H: Defining a multiset
Can the multiset $A=\{1,1,1,2,2,2,3,3,3,...,n,n,n\}$ be represented as
$$A=\bigcup_{i=1}^{n}\{i,i,i\}$$
where $n$ is a positive integer.
Or am I using the union notation completely incorrectly? If so, is how would I define set $A$?
Note:
To clarify my experience with this area of maths, I am going through high school education, and to my understanding, a multiset is simply a set with repeated elements. I also know that a multiset is written just like a regular set ($\{a, a, b, b\}$) but I don't know how to differentiate the two as it is said that $\{n,n,n\} = \{n\}$.
AI: It looks ok, and would probably do the job. One common convention is to use double braces when writing multisets. There are also many accepted ways of expressing the above idea. Some examples:
$$\{\{(1,3),(2,3),...,(n,3)\}\}$$
$$\{\{1\times 3,2\times 3,...,n\times 3\}\}$$
$$\{\{1:3,2:3,...,n:3\}\}$$
Among others.
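For instance (a small illustration of the last notation): with $n=3$ the multiset $\{1,1,1,2,2,2,3,3,3\}$ would be written $\{\{1:3,\,2:3,\,3:3\}\}$.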
|
H: one person winning 5 tickets odds
In a raffle with 90 tickets, 9 people buy 10 tickets each. There are 5 winning tickets which are drawn at random.
Find the probability that one person gets all 5 winning tickets?
P(person A wins 1st ticket) = $\frac{10}{90}$
P(person A wins 2nd ticket) =$ \frac{9}{89}$
P(person A wins 3rd ticket) =$ \frac{8}{88}$
P(person A wins 4th ticket) =$ \frac{7}{87}$
P(person A wins 5th ticket) =$ \frac{6}{86}$
Would the probability be:
$$\frac{10}{90}*\frac{9}{89}*\frac{8}{88}*\frac{7}{87}*\frac{6}{86} $$
This is giving me a very low probability, which makes me believe it is wrong. How could I answer this question?
AI: You evaluate the probability that $A$ gets all the winning tickets, but you want the probability that someone has the 5 winning tickets. You need to multiply by $9$.
$$9\times\frac{10}{90}\times\frac{9}{89}\times\frac{8}{88}\times\frac{7}{87}\times\frac{6}{86}$$
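For a sense of scale (my own check, equivalent to the product above):
$$9\cdot\frac{\binom{10}{5}}{\binom{90}{5}}=\frac{9\cdot 252}{43\,949\,268}\approx 5.2\times 10^{-5},$$
so a very small probability is exactly what you should expect — with only $5$ winning tickets among $90$, it is rare for a single buyer to hold all of them.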
|
H: Understanding an example about Cauchy's integral formula
I have two questions about the following example taken from Palka's "An introduction to complex function theory". I highlighted with red the parts that I don't understand.
Why does the first equality in 5.11 hold?
Since $r \to \infty$, won't we eventually have that the disk $\Delta$ doesn't encompass $|\gamma|$? (where $|\gamma|$ is the curve parametrized by $\gamma$). I think we can't make $\Delta$ bigger because it would contain $-i$, and $f$ must be analytic in $\Delta$ to apply the local Cauchy integral formula.
AI: $$\int_0^r\frac{\cos t}{1+t^2}\,\mathrm{d}t=\frac12\int_{-r}^r\frac{\cos t}{1+t^2}\,\mathrm{d}t$$ because the integrand is an even function. $$\int_{-r}^{r}\frac{\sin t}{1+t^2}\,\mathrm{d}t=0$$ because the integrand is an odd function.
I'm not sure I understand what your second question is. Are you thinking that the center of the disk remains fixed as $r$ increases? This isn't so. No matter how big $r$ gets we can construct a disk that includes the segment $[-r, r]$ and doesn't contain $-i$.
EDIT
More specifically, we can construct a circle passing through the points $-r, r, \frac{-i}2$. (The center will lie on the positive imaginary axis.) Now if we increase the radius by some $0<\varepsilon<\frac12$ the circle will look like the one in Figure $14$. As $r$ increases, the center of the circle we construct will travel "north" on the imaginary axis.
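To make the construction concrete (this little computation is my own addition, not Palka's): the circle through $-r$, $r$ and $-\frac{i}{2}$ has its center at $ci$ on the positive imaginary axis, where $c^{2}+r^{2}=\left(c+\tfrac12\right)^{2}$, i.e. $c=r^{2}-\tfrac14$, and its radius is $c+\tfrac12=r^{2}+\tfrac14$. The point $-i$ lies at distance $c+1$ from the center, which exceeds the radius even after enlarging it by $\varepsilon<\tfrac12$, so $-i$ stays outside; and as $r$ increases, $c=r^{2}-\tfrac14$ increases, which is exactly the center travelling "north".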
|
H: Separable Differential Equation, finding the constant C
I have a question about the Separable Differential Equation theorem.
According to my textbook, this is the theorem.
A differential equation of the form $dy/dx=f(y)g(x)$ is called separable. We separate the variables by writing it in the form
$1/f(y) dy =g(x) dx.$
The solution is found by antidifferentiating each side with respect to its isolated variable.
Take question 1.
$dy/dx = x/y$ and $y=2$ when $x=1$. Use separation of variables to solve the initial value problem. Indicate the domain over which the solution is valid.
Answer:
I was able to integrate this. But when I got to the second line, I plugged in the respective $x$ and $y$ values and got $C=3/2$. Why does the author simplify the fraction first?
AI: The author multiplied both sides of the equation by two and absorbed the two into the constant:
$$\frac{y^2}{2}=\frac{x^2}{2}+c$$
$$y^2=x^2+C$$
Using $y(1)=2$:
$$4=1+C \Rightarrow C=3$$
$$\Rightarrow y=\sqrt{x^2+3}$$
If you do it your way you still obtain the same final answer:
$$\frac{y^2}{2}=\frac{x^2}{2}+c$$
$$2=\frac{1}{2}+C$$
$$\Rightarrow C=\frac{3}{2}$$
$$\frac{y^2}{2}=\frac{x^2}{2}+\frac{3}{2}$$
$$\Rightarrow y^2=x^2+3$$
$$\Rightarrow y=\sqrt{x^2+3}$$
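Regarding the domain asked about in the question (a remark of my own): separating variables required $y\neq 0$, and indeed $y=\sqrt{x^2+3}\geq\sqrt{3}>0$ for every real $x$, while $x^2+3>0$ everywhere, so the solution is valid on all of $\mathbb{R}$.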
|
H: Deriving the Laplacian in spherical coordinates by concatenation of divergence and gradient.
In earlier exercises, I have derived the formula of divergence in spherical coordinates as $$\textrm{div }\vec{v}= \frac{1}{r^2}\frac{\partial (r^2 v_r)}{\partial r}+\frac{1}{r \sin \vartheta}\left(\frac{\partial(v_{\vartheta}\sin \vartheta)}{\partial \vartheta}+\frac{\partial v_{\varphi}}{\partial \varphi}\right)$$ with a vector field $\vec{v}(\vec{r})=v_rê_r+v_{\varphi}ê_{\varphi}+v_{\vartheta}ê_{\vartheta}$, as well as the formula for the gradient as $$\nabla=\frac{\partial}{\partial r}ê_r+\frac{1}{r}\frac{\partial}{\partial \vartheta}ê_{\vartheta}+\frac{1}{r \sin{\vartheta}}\frac{\partial}{\partial \varphi}ê_{\varphi}.$$
Now I am asked to concatenate the gradient with the divergence to arrive at the formula for the Laplacian of a scalar field $f(r,\varphi,\vartheta)$, which is defined as the divergence of the gradient, but I am slightly confused. Looking at the solution, I get: $$\Delta f = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \frac{\partial f}{\partial r}\right)+\frac{1}{r^2 \sin \vartheta}\left(\frac{\partial}{\partial \vartheta}\left(\sin \vartheta \frac{\partial f}{\partial \vartheta}\right)+\frac{1}{\sin\vartheta}\frac{\partial^2 f}{\partial \varphi^2}\right).$$ I can see that it follows the definition, somehow, but why do we put the factors from the gradient before the partial derivatives of the divergence, and the partial derivatives from the gradient after the partial derivatives from the divergence?
AI: For the scalar Laplacian, you take the gradient first, then the divergence. Given a scalar field $U$,
$$\nabla U=\left(\frac{\partial U}{\partial r},\frac{1}{r\sin\phi}\frac{\partial U}{\partial \theta},\frac{1}{r}\frac{\partial U}{\partial \phi}\right)$$
The formula for the divergence is
$$\nabla \boldsymbol{\cdot}\mathbf{F}=\frac{1}{r^2\sin(\phi)}\left(\frac{\partial(r^2\sin(\phi)F_r)}{\partial r}+\frac{\partial(rF_\theta)}{\partial \theta}+\frac{\partial(r\sin(\phi)F_\phi)}{\partial \phi}\right)$$
Plugging in,
$$\nabla^2U=\nabla\boldsymbol{\cdot}\nabla U=\frac{1}{r^2\sin(\phi)}\left(\frac{\partial(r^2\sin(\phi)\frac{\partial U}{\partial r})}{\partial r}+\frac{\partial(r\frac{1}{r\sin\phi}\frac{\partial U}{\partial \theta})}{\partial \theta}+\frac{\partial(r\sin(\phi)\frac{1}{r}\frac{\partial U}{\partial \phi})}{\partial \phi}\right)$$
Which we can clean up as
$$\nabla^2 U=\frac{1}{r^2\sin\phi}\left(\sin\phi\frac{\partial}{\partial r}\left(r^2\frac{\partial U}{\partial r}\right)+\frac{1}{\sin\phi}\frac{\partial}{\partial \theta}\left(\frac{\partial U}{\partial \theta}\right)+\frac{\partial}{\partial \phi}\left(\sin\phi\frac{\partial U}{\partial \phi}\right)\right)$$
Which finally reduces to
$$\nabla^2 U=\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial U}{\partial r}\right)+\frac{1}{r^2\sin^2(\phi)}\frac{\partial^2 U}{\partial \theta^2}+\frac{1}{r^2\sin(\phi)}\frac{\partial}{\partial \phi}\left(\sin(\phi)\frac{\partial U}{\partial \phi}\right).$$
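(A small dictionary remark of my own, since the letters differ from the question's: here $\phi$ denotes the polar angle and $\theta$ the azimuthal one, i.e. they play the roles of the question's $\vartheta$ and $\varphi$ respectively, so this final formula matches the textbook expression for $\Delta f$ quoted in the question.)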
|
H: Multiplying $A\preceq B$ with a matrix
I have a matrix inequality,
$$A\preceq B,$$
where $\preceq$ means that $B-A$ is psd.
update: How can I show that if $M$ is a positive definite matrix, then the inequality above is equivalent to
$$M A M^\ast \preceq M B M^\ast.$$
AI: Your "only if " is not ok here, for counter example take A=I and B=2I, I is Identity matrix here.
Then M can be any orthogonal matrix!
For "if" part ( what have you asked in comment?),.
Recall that $M$ is positive definite iff $x^\ast Mx>0$ for all $x\neq0$. Write
$$x^\ast M^\ast(A-B)Mx=(Mx)^\ast (A-B) (Mx),$$ and denote $Mx=y$. If $x\neq 0$ then $y\neq 0$, because $y=0$ would give $x^\ast Mx=x^\ast y=0$, contradicting the positive definiteness of $M$.
Now $y^\ast(A-B)y>0$ for all $y\neq 0$, by definition of $A-B$ being positive definite. So $M^\ast(A-B)M$ is positive definite if $M$ is positive definite.
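To get the equivalence asked for in the question (a remark of my own, using only that a positive definite $M$ is invertible): congruence by any matrix preserves positive semidefiniteness, and
$$B-A=M^{-1}\bigl(M(B-A)M^\ast\bigr)(M^{-1})^\ast,$$
so $B-A\succeq 0$ if and only if $M(B-A)M^\ast\succeq 0$; that is, $A\preceq B$ if and only if $MAM^\ast\preceq MBM^\ast$.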
|
H: Does the convergence to 0 in $L^2(0,T;L^2(K))$ for all compact $K \subset \mathbb{R}^{d}$ imply the convergence in $L^2(0,T;L^2(\mathbb{R}^{d}))$?
Let $(f_n)$ be a sequence in $L^2(0,T;L^2(\mathbb{R}^{d}))$ such that:
$\|f_n\|_{L^2(0,T;L^2(\mathbb{R}^{d}))} \leq C_T$ for all $n \in \mathbb{N}$;
$f_n \rightarrow 0$ a.e. in $(0,T)\times \mathbb{R}^{d}$;
$f_n \rightarrow 0$ in $L^2(0,T;L^2(K))$ for all compact $K \subset \mathbb{R}^{d}$.
My question: Is it true that $f_n \rightarrow 0$ in $L^2(0,T;L^2(\mathbb{R}^{d}))$?
I was trying to work with the sequence of compacts $B[0,k+1]\setminus B(0,k)$ and write $(0,T)\times \mathbb{R}^{d}=\bigcup_{k=1}^{\infty} (0,T) \times (B[0,k+1]\setminus B(0,k))$ to get an estimate like $$\int_{(0,T)\times (B[0,k+1]\setminus B(0,k))} |f_n(x,t)|^2 \, dx dt <\frac{\varepsilon}{2^k} \tag{*}$$ for all $\varepsilon>0$ in order to use the following result:
Theorem: If $\int_E f$ exists and $E=\bigcup_k E_k$ is the countable union of disjoint measurable sets $E_k$, then $\int_E f=\sum_k \int_{E_k} f$.
But I don't know if I can use this result for vector integrations. Moreover, the estimate $(*)$ only holds for a $k\geq k_0(n)$, that is, the constant $k_0$ depends on $n$.
AI: This isn't necessarily true. We can just let $\phi$ be some smooth, compactly supported function on $(0, T) \times \mathbb{R}^d$ that is not identically zero, and then let $f_n(t,x) = \phi(t, x - nv)$, where $v$ is some nonzero vector in $\mathbb{R}^d$, be the translations of $\phi$ in space. Then:
\begin{align*}
&(1) \ \ \ \ \|{f_n}\|_{L^2([0, T]; L^2(\mathbb{R}^d))}^{2} = \int_0^T \|f_n(t)\|_{L^2(\mathbb{R}^d)}^{2}\, dt = \int_0^T \|\phi(t)\|_{L^2(\mathbb{R}^d)}^{2}\, dt =: C_T \\
&(2) \ \ \ \ f_n \to 0 \ \text{a.e., since for any} \ (t, x) \in [0, T] \times \mathbb{R}^d, f_n(t, x) \to 0 \\
&(3) \ \ \ \ f_n \to 0 \ \text{in} \ L^2([0, T]; L^2(K)) \ \forall K \subset \mathbb{R}^d \ \text{compact}.
\end{align*}
The last point follows since $f_n(t)|_K = 0$ for all $t$ for $n$ large enough (depending on $K$), as $\phi$ has compact support.
The computation in item (1) shows that the $L^2([0, T]; L^2(\mathbb{R}^d))$ norm of each $f_n$ equals the same positive constant $\sqrt{C_T}$, hence the sequence cannot approach zero in this space.
|
H: What does fibre-wise mean?
I am doing some exercises in Lagrangian systems in the book Quantum Mechanics for Mathematicians. One exercise says:
Let $f$ be a $C^\infty$ function on a manifold $M$. Show that the Lagrangian systems $(M,L)$ and $(M,L+df)$ (where $df$ is fibre-wise linear function on $TM$) have the same equations of motion.
I do know what $df$ means as a differential form, and I solved the exercise. I just had never seen the terminology ("fibre-wise") before, and I wonder what it means.
AI: $TM\xrightarrow\pi M$ is a vector bundle. In particular, this means that for every $p\in M$, the fibre $\pi^{-1}\{p\}=T_pM\subseteq TM$ is a vector space.
Saying that $df$ is fiberwise linear means that for every $p\in M$ the function $(df)_p:T_pM\to\mathbb R$ is linear.
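Concretely (a standard coordinate description, added for illustration): in local coordinates $(x^1,\dots,x^n)$ around $p$, a tangent vector $v\in T_pM$ is $v=\sum_i v^i\,\frac{\partial}{\partial x^i}\big|_p$, and
$$(df)_p(v)=\sum_{i=1}^n \frac{\partial f}{\partial x^i}(p)\,v^i,$$
which is visibly linear in $v$ on each fibre $T_pM$.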
|
H: how to construct this equalizer
If $X$ is a topological space and $(U_i)_{i\in I}$ is a family of open subsets of $X$, write $U = \bigcup_{i\in I} U_i$ and $U_{ij} = U_i \cap U_j$ for $i, j \in I$.
(i) if $f, g\colon U \to \mathbb{R}$ are continuous functions such that $f|U_i = g|U_i$ for all $i$, then $f = g$;
(ii) if $(f_i\colon U_i\to\mathbb{R})_{i\in I}$ is a family of continuous real-valued functions such that $f_i|U_{ij} = f_j|U_{ij}$ for all $i, j$, then there exists a unique continuous $f\colon U \to \mathbb{R}$ with $f|U_i = f_i$ for all $i$.
For every open $V \subseteq X$, define $\Gamma(V) = \{\text{continuous } f\colon V\to \mathbb{R}\}$ and $\Gamma(W) \to \Gamma(V)$ to be the restriction map for $V$ a subset of $W$. Then properties (i), (ii) say that $\Gamma(U)$ is the equalizer of the family of maps $\Gamma(U_i) \to \Gamma(V_i)$.
I am confused by the last sentence. How do (i), (ii) give that conclusion? What are the two maps $\Gamma(U_i) \to \Gamma(V_i)$ which make up part of the equalizer? And I take it the map $\Gamma(U) \to \Gamma(U_i)$ is the restriction?
I've also tried thinking about it as a presheaf functor, but it's still not quite clear. Thank you SO MUCH!
AI: In the category of vector spaces we have the following diagram:
$$\prod_{i\in I}\Gamma(U_i)\begin{array}{c}\stackrel \alpha\longrightarrow\\ \stackrel \beta \longrightarrow \end{array}\prod_{(i,j)\in I\times I} \Gamma(U_{ij}),$$
where \begin{eqnarray*}\alpha(\{f_i\}_{i\in I})&=& \{f_i|U_{ij}\}_{i,j},\\
\beta(\{f_i\}_{i\in I})&=& \{f_j|U_{ij}\}_{i,j}.
\end{eqnarray*}
From (i), (ii) we know that the equaliser of this diagram is $\Gamma(U)$:
$$\Gamma(U)\stackrel\iota\longrightarrow\prod_{i\in I}\Gamma(U_i)\begin{array}{c}\stackrel \alpha\longrightarrow\\ \stackrel \beta \longrightarrow \end{array}\prod_{(i,j)\in I\times I} \Gamma(U_{ij})$$
Here $\iota(f)=\{f|U_i\}_i$.
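To spell out how (i) and (ii) give this (my own paraphrase of the argument): a family $\{f_i\}_{i\in I}$ satisfies $\alpha(\{f_i\})=\beta(\{f_i\})$ exactly when $f_i|U_{ij}=f_j|U_{ij}$ for all $i,j$. Property (ii) says that every such family equals $\iota(f)$ for some continuous $f\colon U\to\mathbb{R}$ (existence), and property (i) says that this $f$ is unique. Together these are precisely the statement that $\iota\colon\Gamma(U)\to\prod_i\Gamma(U_i)$ is the equaliser of $\alpha$ and $\beta$.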
|
H: Why does $\frac{a}{b}<0$ imply $ab<0$?
I'm not sure if this was asked before, but my question is: why does $\frac{a}{b}<0$ imply $ab<0$? How do you prove it both intuitively and rigorously (using math)? I think I understand it intuitively: it's because for $\frac{a}{b}$ to be negative, exactly one of $a$ or $b$ has to be negative. For $ab$ to be negative, exactly one of $a$ or $b$ has to be negative. That means that these two imply each other. But how would I prove this rigorously? If I multiply both sides of $\frac{a}{b}<0$ by $b$, first of all, I don't know whether $b$ is positive or negative, so I don't know which way the inequality sign is facing, and second, even if we did know whether it flipped or not, we would only get $a<0$ or, if the sign didn't flip, $a>0$. Do I split it into cases then (case 1: $b<0$ and case 2: $b>0$)? It seems like it would work, but there might be a slicker way of proving it?
AI: Multiply both sides by $b^2$ which is always positive and hence doesn't flip the inequality sign.
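Spelling the hint out: since $\frac{a}{b}$ is defined we have $b\neq 0$, so $b^{2}>0$, and multiplying the inequality by $b^{2}$ gives
$$\frac{a}{b}\cdot b^{2}=ab<0.$$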
|