H: Is the unit ball on the set of continuous functions of a space $X$ strictly convex? I have been trying to show that $C(X)$ is not strictly convex but I have been having a tough time, any help would be appreciated. AI: I assume that your $X$ is compact (so that $\|f\|_\infty$ is defined for all continuous $f\colon X\to\Bbb R$). If we additionally assume that $X$ admits a non-constant continuous function (in particular, $X$ is not endowed with the indiscrete topology), then the following works: Let $f\colon X\to\Bbb R$ be non-constant continuous. Then $\|f\|\ne 0$ and $g:=\frac1{\|f\|}f$ has norm $1$. Then there exists $x_0\in X$ with $g(x_0)=\pm 1$. Define $h(x)=g(x_0)$. Then $\|h\|=1$, $g\ne h$, and all convex combinations of $g$ and $h$ have norm $1$: they are bounded by $1$ in modulus everywhere and equal $\pm 1$ at $x_0$. Hence the unit ball is not strictly convex.
H: Show that each vector in an n-dimensional vector space can be represented as the summation of its components along an orthogonal basis. Show that in an n-dimensional vector space V with orthogonal basis {$a_1, a_2,..., a_n$}, each vector B can be expressed as: B = $\frac{<B,a_1>a_1}{||a_1||^2}$ + $\frac{<B,a_2>a_2}{||a_2||^2}$ +.... + $\frac{<B,a_n>a_n}{||a_n||^2}$ I tried something along the lines of using B = $\frac{(B.V)V}{||V||^2}$ but I couldn't get to the final answer. AI: Let $B$ be a vector of $V$. Since $\{a_1,\ldots, a_n\}$ is a basis for $V$, you can find coefficients $\alpha_1, \ldots, \alpha_n$ such that $$B=\alpha_1 a_1+\ldots +\alpha_n a_n$$ Note that \begin{align*} \langle B, a_i\rangle&=\langle \alpha_1 a_1+\ldots +\alpha_n a_n, a_i\rangle \nonumber\\ &=\alpha_1 \langle a_1, a_i\rangle +\ldots +\alpha_n \langle a_n,a_i\rangle\nonumber \end{align*} Since the basis is orthogonal, the above expression simplifies to \begin{align*} \langle B, a_i\rangle&=\alpha_i\langle a_i, a_i\rangle\nonumber\\ &=\alpha_i ||a_i||^2\nonumber \end{align*} Hence $$\alpha_i=\frac{\langle B,a_i\rangle}{||a_i||^2}$$
H: Find a limit for a sequence of functions with the domain in $(0, \infty)$ How to find limits for the function $f_n = \sqrt{n}\left(\sqrt{x - \frac{1}{n}}- \sqrt{x}\right)$ if $Df \in (0,\infty)$ I think as $n \rightarrow \infty$ it would be $\infty$ and then no matter the $x$ the limit would be $\infty$. But in case when $x$ is also infinitely large, I can maybe apply L'Hospital's rule? Then: $$\frac{\left(\sqrt{x - \frac{1}{n}}+\sqrt{x}\right)}{\frac{1}{{\sqrt{n}}}} = \left[ \frac{\infty}{\infty} \right] = \frac{\frac{1}{2}\left(x - \frac{1}{n}\right)^{-\frac{1}{2}}+\frac{1}{2}x^{-\frac{1}{2}}}{-\frac{1}{2}n^{-\frac{3}{2}}} = 0 \ \ [\text{as } n \rightarrow \infty]$$ Is this correct? Ok, first I just take a derivative of $\frac{1}{n^{-\frac{1}{2}}}:$ $$\frac{1}{n^{-\frac{1}{2}}} = \frac{1}{2}\frac{n^{-\frac{3}{2}}}{n}=\frac{1}{2}n^{-\frac{5}{2}}$$ AI: First of all, if $x >0$, then for large $n$, $\sqrt{x -\frac{1}{n}} \geqslant \sqrt{\frac{x}{2}} >0$. So your sequence will verify \begin{align} \sqrt{n}\left(\sqrt{x - \frac{1}{n}} + \sqrt{x}\right) \geqslant \sqrt{n}\left( \sqrt{\frac{x}{2}}+\sqrt{x}\right) \to_{n\to+\infty} +\infty \end{align} But I think you miswrote the question and there is a "-" missing, and it would be \begin{align} \sqrt{n}\left(\sqrt{x - \frac{1}{n}} - \sqrt{x} \right) \end{align} maybe? In addition, I don't understand what "$Df \in (0,\infty)$" means: who is $f$, what is $D$ here, and is $x$ fixed or is it a variable? Edit With the corrected sign, the idea is to look closer and recognize a quotient converging to a derivative: say $f$ is the square-root function. Then, for $n$ large enough so that $x-\frac{1}{n} >0$ : \begin{align} \sqrt{n}\left(\sqrt{x -\frac{1}{n}} - \sqrt{x} \right) & = \sqrt{n}\left(f(x-\frac{1}{n}) - f(x) \right) \\ & = \dfrac{f(x-\frac{1}{n})-f(x)}{\frac{-1}{n}} \times \sqrt{n} \times (-\frac{1}{n}) \\ &= \dfrac{f(x-\frac{1}{n})-f(x)}{\frac{-1}{n}} \times \left(-\frac{1}{\sqrt{n}} \right) \end{align} The first term in the product converges to $f'(x) = \dfrac{1}{2\sqrt{x}}$ while the second term converges to $0$. Then you can conclude that $f_n(x) \to 0$ for every fixed $x > 0$.
H: Let $R$ be the region enclosed by $x^2+4y^2\geq 1$ and $x^2+y^2\leq 1$. Then the value of $\int \int_R |xy|dx dy$ Let $R$ be the region enclosed by $x^2+4y^2\geq 1$ and $x^2+y^2\leq 1$. Then the value of $$\int \int_R |xy|dx dy$$ is _________ My attempt I am getting $5/24$. But answer given is $.375$ Where is my mistake? Is there any short cut to solve this question? AI: Because the integrand is $|xy|$, I would just calculate the region in the first quadrant then multiply by $4$. $4\int_0^1 \int_{\frac{1}{2}{\sqrt{1-x^2}}}^{\sqrt{1-x^2}} |xy| \; dy \;dx=4\int_0^1 \frac{x}{2} y^2 \bigg\rvert_{\frac{1}{2}{\sqrt{1-x^2}}}^{\sqrt{1-x^2}} \;dx=\frac{3}{2}\int_0^1 x-x^3 \;dx =\frac{3}{2}\cdot\frac{1}{4}=\boxed{\frac{3}{8}}$
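A quick numerical cross-check of the $3/8$ value above (not part of the original answer; a Monte Carlo sketch assuming numpy is available):

```python
import numpy as np

# Sample the bounding box [-1,1]^2 and keep points inside the unit disk but
# outside the ellipse x^2 + 4y^2 = 1; rescale the sample mean by the box area 4.
rng = np.random.default_rng(0)
n = 2_000_000
x = rng.uniform(-1, 1, n)
y = rng.uniform(-1, 1, n)
in_region = (x**2 + 4*y**2 >= 1) & (x**2 + y**2 <= 1)
estimate = 4 * np.mean(np.abs(x * y) * in_region)
print(estimate)  # ~0.375 = 3/8
```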
H: How do you prove these asymptotic relations between $n!$ and $\frac1en+\frac1{2e}\ln(2 \pi n)$? Let $\varepsilon(n):=\frac1en+\frac1{2e}\ln(2 \pi n)$. Toying around in Wolfram, I found the following results: $$\lim_{n \rightarrow \infty} \frac{\varepsilon(n)^n}{n!}=1 \tag{1}$$ $$\lim_{n \rightarrow \infty}\varepsilon(n)-\sqrt[n]{n!}=0 \tag{2}$$ Is this correct? How could one prove it? Are there any more meaningful relations between $\varepsilon(n)$ and $n!$ ? AI: Note that \begin{align*} & \left( {\frac{1}{e}n + \frac{1}{{2e}}\log (2\pi n)} \right)^n = \left( {\frac{n}{e}} \right)^n \left( {1 + \frac{1}{n}\log \sqrt {2\pi n} } \right)^n \\ & = \left( {\frac{n}{e}} \right)^n \exp \left( {n\log \left( {1 + \frac{1}{n}\log \sqrt {2\pi n} } \right)} \right) \\& = \left( {\frac{n}{e}} \right)^n \exp \left( {n\left( {\frac{1}{n}\log \sqrt {2\pi n} + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{{n^2 }}} \right)} \right)} \right) \\ & = \left( {\frac{n}{e}} \right)^n \sqrt {2\pi n} \exp \left( {\mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right)} \right) = \left( {\frac{n}{e}} \right)^n \sqrt {2\pi n} \left( {1 + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right)} \right). \end{align*} Thus, (1) follows from Stirling's formula. For (2), we have by Stirling's formula, \begin{align*} \sqrt[n]{{n!}} & = \frac{n}{e}\sqrt[n]{{\sqrt {2\pi n} \left( {1 + \mathcal{O}\!\left( {\frac{1}{n}} \right)} \right)}} \\ & = \frac{n}{e}\sqrt[{2n}]{{2\pi n}}\left( {1 + \mathcal{O}\!\left( {\frac{1}{{n^2 }}} \right)} \right) = \frac{n}{e}\exp \left( {\frac{1}{{2n}}\log (2\pi n)} \right)\left( {1 + \mathcal{O}\!\left( {\frac{1}{{n^2 }}} \right)} \right) \\ & = \frac{n}{e}\left( {1 + \frac{1}{{2n}}\log (2\pi n) + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{{n^2 }}} \right)} \right)\left( {1 + \mathcal{O}\!\left( {\frac{1}{{n^2 }}} \right)} \right) \\ & = \frac{n}{e} + \frac{1}{{2e}}\log (2\pi n) + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right). \end{align*}
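A numerical sanity check of (1) and (2) (not from the original post; uses only the standard library, with math.lgamma standing in for $\log n!$):

```python
import math

def eps(n):
    return n / math.e + math.log(2 * math.pi * n) / (2 * math.e)

for n in (10, 100, 1000, 10_000):
    log_ratio = n * math.log(eps(n)) - math.lgamma(n + 1)   # log(eps(n)^n / n!), limit 0
    diff = eps(n) - math.exp(math.lgamma(n + 1) / n)        # eps(n) - (n!)^(1/n), limit 0
    print(n, log_ratio, diff)   # both columns shrink roughly like log(n)^2 / n
```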
H: every non-finite set intersects non-void open set From General Topology by Kelley. A separable space may fail to satisfy the $2^{nd}$ axiom of countability. For example, let $X$ be an uncountable set with the topology consisting of the void set and the complements of finite sets. Then every non-finite set is dense because it intersects every non-void open set. On the other hand ... Why in this case every non-finite set intersects every non-void open set? Definition A topological space $X$ is separable iff there is a countable subset which is dense in $X$. AI: If the non-finite set doesn't intersect a non-void open set, it must be contained inside the complement of that open set, which is finite.
H: If $z+\frac{1}{z}=2\cos\theta,$ where $z\in\Bbb C$, show that $\left|\frac{z^{2 n}-1}{z^{2n}+1}\right|=|\tan n\theta|$ If $z+\frac{1}{z}=2 \cos \theta,$ where $z$ is a complex number, show that $$ \left|\frac{z^{2 n}-1}{z^{2 n}+1}\right|=|\tan n \theta| $$ My Approach: $$ \begin{array}{l}|\sin \theta|=\left|\sqrt{1-\cos ^{2} \theta}\right| \\ =\left|\sqrt{1-\left(\frac{z^{2}+1}{2z}\right)^{2}}\right| \\ =\left|\sqrt{\frac{4 z^{2}-z^{4}-2 z^{2}-1}{4 z^{2}}} \right|\\ =\left|\sqrt{\frac{-\left(z^{4}-2 z^{2}+1\right)}{4 z^{2}}}\right|=\left|\sqrt{\frac{\left(z^{2}-1\right)^{2}}{4 z^{2}}}\right| \\ =|\frac{z^{2}-1}{2 z}|\end{array} $$ So $|\tan \theta|=\left|\frac{z^{2}-1}{z^{2}+1}\right|$ ( proven when $n = 1$) Is there any way to prove directly by taking $n$? AI: $$z+\frac 1z=2\cos \theta\Rightarrow z^2-2\cos\theta z+1=0$$ Solve this quadratic to obtain $$z=\cos\theta\pm i\sin\theta$$ Or, $$z=e^{i\theta},e^{-i\theta}$$ Now, $$\frac{z^{2n}-1}{z^{2n}+1}=\frac{z^n-1/z^n}{z^n+1/z^n}$$ In either of the above cases (for $z$), we have, $$z^n+1/z^n=e^{in\theta}+e^{-in\theta}=2\cos n\theta$$ But depending on the value of $z$ we have $$z^n-1/z^n=\pm \left(e^{in\theta}-e^{-in\theta}\right)=\pm 2i\sin n\theta$$ We now have $$\left |\frac{z^{2n}-1}{z^{2n}+1}\right |=\left|\frac{\pm 2i\sin n\theta}{2\cos n\theta}\right|=|\tan n\theta|$$ Hence proved.
H: Find the two complex numbers whose sum is $5 - i$ and product is $8+i$ I have tried to solve this problem, but when I write the equations I end up with: $$a + c = 5$$ $$b + d = -1$$ $$ac - bd = 8$$ $$ad + cb = 1$$ But substituting $a$ and $b$ I get 2 equations that I am not able to solve AI: Hint: the numbers are the solutions to the equation $z^2 - (5-i)z + (8+i) =0$. (by Vieta's formulas)
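To see the two numbers concretely, one can solve the quadratic from the hint numerically (a short sketch, assuming Python's cmath module):

```python
import cmath

# Roots of z^2 - (5 - i) z + (8 + i) = 0; by Vieta, their sum is 5 - i and product 8 + i.
s, p = 5 - 1j, 8 + 1j
d = cmath.sqrt(s * s - 4 * p)
z1, z2 = (s + d) / 2, (s - d) / 2
print(z1, z2)
print(z1 + z2, z1 * z2)  # prints (5-1j) and (8+1j), confirming the roots
```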
H: A doubt regarding proof of values of trigonometric functions at allied angles There are certain identities that help us to determine the values of trigonometric functions at $\dfrac{\pi}{2}+x \text{, } \pi-x$ etc. given the values of $\sin x, \cos x$. Now, when we prove such identities, we usually take the value of $x$ to be in the interval $\Big (0, \dfrac{\pi}{2} \Big )$. Isn't it necessary to prove the identities by taking the value of $x$ in all $4$ quadrants individually and then arriving at the outcome? If not, then why not? Pardon me if this sounds silly. Thanks! AI: If the basis of your proofs is Euler's identity $e^{i \theta} = \cos \theta + i \sin \theta$ then the quadrant becomes irrelevant. Also consider the identities $\cos \theta = \dfrac {e^{i \theta} + e^{-i \theta} } 2$ and $\sin \theta = \dfrac {e^{i \theta} - e^{-i \theta} } {2i}$.
H: sum of :$\sum_{k=1}^\infty\frac{(-1)^k}{2k-1} \cos(2k-1)$ How can I find the sum of :$$\sum_{k=1}^\infty\frac{(-1)^k}{2k-1} \cos(2k-1)$$ I don't fully understand the Parseval identity so I am asking if we can use it to find the sum, and if so, how I should use it. Is there a Fourier series we know the convergence to a function that can help? AI: Recall the Maclaurin series of arctangent, valid for $|z|\leq 1,$ $z\neq\pm i$: $$ \arctan(z) = \sum_{k=0}^{\infty}\frac{(-1)^kz^{2k+1}}{2k+1} $$ $$ -\arctan(z) = \sum_{k=1}^{\infty}\frac{(-1)^kz^{2k-1}}{2k-1} $$Put $z=e^{i}$ and take the real part: $$ \Re(-\arctan(e^i)) = \Re\left( \sum_{k=1}^{\infty}\frac{(-1)^k(e^{i})^{2k-1}}{2k-1}\right)=\sum_{k=1}^{\infty}\frac{(-1)^k\cos(2k-1)}{2k-1} $$For the LHS, use $\arctan(z) = \frac{1}{2i}\log\frac{1+iz}{1-iz}$: since $\arg(1+e^{i\varphi}) = \varphi/2$ for $|\varphi|<\pi$, we get $\Re(\arctan(e^{i})) = \frac12\left(\arg(1+ie^{i}) - \arg(1-ie^{i})\right) = \frac12\left(\frac{1+\pi/2}{2} - \frac{1-\pi/2}{2}\right) = \frac{\pi}{4}$, so the LHS, and hence the sum, equals $-\pi/4$.
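A crude partial-sum check of the value $-\pi/4$ (not part of the answer; the series converges slowly, so many terms are needed):

```python
import math

s = 0.0
for k in range(1, 200_001):
    s += (-1)**k * math.cos(2*k - 1) / (2*k - 1)
print(s, -math.pi / 4)  # the partial sum is close to -0.785398...
```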
H: Substitution problem in an integral Say I have: $\int_0^\pi\cos^2\theta\sin\theta d\theta$ and I choose to make the substitution $t=\sin\theta$. I then get an integral $\int_0 ^0 t\sqrt{1-t^2}dt$ and the result is zero. What's the fallacy here? I know I can substitute $\cos\theta$, I was just wondering. AI: The problem is that from $\pi/2$ to $\pi$, $$\cos\theta=-\sqrt{1-\sin^2\theta}$$ Note the minus sign in front of the square root.
H: Fourier Series Expansion, getting an undefined coefficient I'm asked to find the Fourier series of the following function as a sine series with period $2\pi$. $f(x)= \cos x$ on $[0,\pi]$ Since we wish to get a sine series we need to make $a_n = 0$ for all $n\geq 0$. Hence, we need an odd extension. Then I did the required calculations as follows: $$ \begin{aligned} f ( x ) = & \sum _ { n = 1 } ^ { \infty } b _ { n } \cdot \sin \left( \frac { n \pi } { L } x \right) = \sum _ { n = 1 } ^ { \infty } b _ { n } \sin ( n x ) \end{aligned} $$ $$ b _ { n } = \frac { 1 } { \pi } \int _ { 0 } ^ { \pi } \cos x \cdot \sin ( n x ) \cdot d x $$ $$ b_n = \frac { 1 } { \pi } \left[ - \frac { \cos ( \pi + n \pi ) } { 1 + n } - \frac { \cos ( n \pi - \pi ) } { n - 1 } + \frac { 1 } { 1 + n } + \frac { 1 } { n - 1 } \right] $$ However, what bothers me is that we have now undefined terms for $n=1$ on the right hand side. I'm missing the point somewhere but I couldn't figure out. Can you help me what can be done to resolve this? EDIT: Read comment section. $$ \begin{array} { l } b _ { 1 } = \frac { 1 } { L } \langle f ( x ) , \sin x \rangle \\ b _ { 1 } = \frac { 1 } { \pi } \int _ { - \pi } ^ { \pi } \cos x \cdot \sin x \cdot d x \\ b _ { 1 } = \frac { 2 } { \pi } \int _ { 0 } ^ { \pi } \cos x \cdot \sin x \cdot d x \\ b _ { 1 } = \frac { 1 } { \pi } \int _ { 0 } ^ { \pi } 2 \sin x \cdot \cos x = 0 \end{array} $$ AI: The problem is that your last equality holds only when $n\neq 1$. To get $b_1$, you need to substitute $n=1$ into the definition and calculate it separately. And I think your task is to find the Fourier Sine Series of $f$, that is, extend it to an odd function with $f(-x)=-f(x)$, and calculate the Fourier Series of the resulting function on $[-\pi,\pi]$. But this is almost the same, because for an odd $f$, we have that: $$\int_{-\pi}^{\pi}f(x)\sin(nx)\mathrm{d}x=2\int_{0}^{\pi}f(x)\sin(nx)\mathrm{d}x$$
H: Diffeomorphism and local isometry I'm asked to show that $F(x,y,z)=(3x,2y,5z)$ is a diffeomorphism between $\mathbb{S}^{2}(1)$ and the ellipsoid $(\frac{x}{3})^{2}+(\frac{y}{2})^{2}+(\frac{z}{5})^{2}=1$. For this aim what I have seen is that (I) it is of class $C^\infty$ (each component is), (II) its inverse is given by $F^{-1}(x,y,z)=(\frac{x}{3},\frac{y}{2},\frac{z}{5})$ and (III) the inverse is also of class $C^\infty$. This is in the general case because I haven't used yet that I define this map between the sphere and the ellipsoid. So, what I have done in this case is: let $(u,v)\in(0,\pi)\times(0,2\pi)$ and let $x(u,v)=(\sin(u)\cos(v),\sin(u)\sin(v),\cos(u))$ and $y(u,v)=(\frac{\sin(u)\cos(v)}{3},\frac{\sin(u)\sin(v)}{2},\frac{\cos(u)}{5})$ be parametrizations of the sphere and the ellipsoid respectively. Then, $F\circ y$ carries points of the ellipsoid to the sphere and $F^{-1}\circ x$ carries points of the sphere to the ellipsoid. Is this correct? Then I have to calculate its differential at any point of the sphere and it's immediate. \begin{equation} \begin{pmatrix} 3 & 0 & 0\\ 0 & 2 & 0\\ 0 & 0 & 5 \end{pmatrix} \end{equation} I'm asked if this differential conserves angles. The last thing is to determine if this map conserves the norm of vectors. For both things what I have to do is prove whether this map is a local isometry or not. So for doing this I have calculated the normal vector $N\circ x(u,v)=x(u,v)$, so $T_{p}S=\{(x,y,z)\in\mathbb{R}^{3}:\langle(x,y,z),x(u,v)\rangle=0\}$. But my problem is: with this expression, how do I proceed to prove that $\langle DF(u),DF(v)\rangle=\langle u,v\rangle$ for all vectors in $T_{p}S$ (where $S=\mathbb{S}^{2}(1)$)? I'm lost here. Any hint to continue is appreciated! AI: This is one particular scenario where observing $S^2$ and the ellipsoid as surfaces embedded in $\mathbb{R}^{3}$ is actually much more helpful, compared to the usual intrinsic approach in the case of manifolds. You just need to observe that a function $f$ defined on a subset $K$ of $\mathbb{R}^{n}$ is differentiable at a point $u$ of $K$ if it is the restriction of a differentiable map defined on an open (in $\mathbb{R}^{n}$) neighborhood $U$ of $u$ to $U\cap K$. Using this, it's easy to observe that $F$ in your query is actually the restriction of an invertible linear map $T$ (i.e. the non-uniform scaling you have mentioned) to $S^2$. Hence, it's a diffeomorphism because the aforementioned linear map is. Next, this is not conformal. The simple reason behind this is: the $xy$-plane, $yz$-plane and the $zx$-plane are all invariant under $T$, and each of them arises as the tangent plane of $S^2$ at $(0,0,1)$, $(1,0,0)$ and $(0,1,0)$ respectively. But on each of these planes $T$ (and hence $dF$) actually happens to be a non-uniform scaling and hence cannot be conformal. This is because in two dimensions, the only conformal linear maps are the scalar multiples of isometries.
H: Can we find a closed-form formula for $a_{n+1}=\frac{a_n}{a_n+1}$, given $a_1=a$? We have a sequence: $$a_{n+1}=\frac{a_n}{a_n+1}$$ $$ a_1 = a $$ $$ a_2 = \frac{a}{a+1} $$ $$ a_3 = \text{here things get really messy and for all the following } $$ I can't get the formula since the expressions just get more and more complicated. Is there a faster way to do this? I also have two side questions: What can we say about convergence of this series at the asymptote at $a_n$ = -1 The other question that I have is when we test the convergence for other values how much should we prove that $a_n$ is increasing/decreasing and how much is enough to just state for instance $a_{n+1} < a_n$ without the proof of it. AI: If $a=0$ then $a_n=0$ for all $n$. If $a_n\neq 0$ you can write $b_n = 1/a_n$. Then you have simply $b_{n+1} = 1+b_n$. It means $b_n = \frac{1}{a} + n$ and so $a_n = \frac{1}{\frac{1}{a} + n}$. From this expression you see that if $a\neq 0$ you will never have $a_n=0$ and that if $\exists m\in \mathbb{N}$ such that $1/a = -m$ then the sequence is not defined for $n\geq m$. If $-1/a$ is not an integer, then the sequence will tend to $0$ for $n\to \infty$, no matter the value of $a$. EDIT: My answer assumed that $a_0=a$. If you want to recover the sequence with $a_1=a$, which is what you wrote in your question, you should shift every index, i.e. $$ a_n = \frac{1}{\frac{1}{a}+(n-1)}\,.$$
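The closed form is easy to verify against direct iteration (a small sketch with exact rational arithmetic; the starting value $a = 3/7$ is an arbitrary choice):

```python
from fractions import Fraction

a = Fraction(3, 7)          # a_1 = a, chosen arbitrarily
term = a
for n in range(1, 20):
    closed = 1 / (Fraction(1) / a + (n - 1))   # a_n = 1/(1/a + (n-1))
    assert term == closed, (n, term, closed)
    term = term / (term + 1)                   # a_{n+1} = a_n / (a_n + 1)
print("closed form matches for n = 1..19")
```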
H: Let $X=\mathbb R$ with cofinite topology and $A=[0,1]$ with subspace topology - show $A$ is compact Let $X=\mathbb R$ with the cofinite topology and $A=[0,1]$ with the subspace topology. I've just proven that every closed subspace of a compact space is compact, and now I'm asked to show that $A$ is compact, but not a closed subset of $X$. (I've shown this second part.) I know that $\mathbb R$ is compact when given the cofinite topology, so to show $A$ is compact I need to show it is a closed subspace of $X$, but I'm not really sure what the distinction is between closed subspace and closed subset. I know that the open sets in the subspace topology consist of $U=V\cap A$ for $V$ open in $X$, so do I just need to show that $[0,1]$ cannot be written in this form? Suppose it can be written as $[0,1]=V\cap [0,1]$ for some $V$ open in $X$, so $V=\mathbb R\backslash\{x_1,...,x_k\}$. That would mean that $\mathbb R\backslash\{x_1,...,x_k\}\subset [0,1]$ which is clearly false, so $A$ is a closed subspace of $X$. (I only worked this out while asking the question so basically asking if this is correct now) EDIT: Just realised this doesn't imply $\mathbb R\backslash\{x_1,...,x_k\}\subset [0,1]$ but actually that $[0,1]\subset \mathbb R\backslash\{x_1,...,x_k\}$, which doesn't lead me anywhere. AI: Just running with the definition works. First, we see that the non-empty open sets in $A$ are sets with finite complement in $A$. Now say $\{ U_\alpha \}$ is an open cover of $A$. Let's consider some $U_0$. Its complement in $A$ is a finite set $\{a_1, a_2, \dots a_n \}$. Pick $U_i$ from the open cover such that it covers $a_i$. We thus have a finite subcover $\{ U_0, U_1, U_2, \dots U_n \} $.
H: How does Grinberg's theorem work? Grinberg's theorem is a condition used to prove the existence of an Hamilton cycle on a planar graph. It is formulated in this way: Let $G$ be a finite planar graph with a Hamiltonian cycle $C$, with a fixed planar embedding. Denote by $ƒ_k$ and $g_k$ the number of $k$-gonal faces of the embedding that are inside and outside of $C$, respectively. Then $$ \sum_{k \geq 3} (k-2)(f_k - g_k) = 0 $$ While i think i understood the definition i do not know how to apply it on a real problem. For instance, in a graph like that: how can i identify the internal/external faces of an hypothetical Hamilton cycle $C$ if what i want to do is actually find one of it(an Hamilton cycle)? I mean, the theorem should be used(as far as i understood) to prove(or disprove) the existence of an Hamilton cycle, yet the definition implies that i have to find one to use the whole theorem. Anybody can help me to understand? I'd like to see an example, even a different one from what i brought should be fine. AI: Of course, before we find a Hamiltonian cycle or even know if one exists, we cannot say which faces are inside faces or outside faces. However, if there is a Hamiltonian cycle, then there is some, unknown to us, partition for which the sum equals $0$. So the general idea for using the theorem is this: if we prove that no matter how you partition the faces into "inside" and "outside", we cannot make the sum equal to $0$, then there cannot be a Hamiltonian cycle. (In subtler applications, I can imagine making arguments such as "if all of these faces are inside faces of a Hamiltonian cycle, then this other face cannot be an outside face". But I don't know of any applications where we can't just take "inside" and "outside" to be an arbitrary partition of the faces and get a contradiction.) In the example you give, I count $21$ faces with $5$ sides, $3$ faces with $8$ sides, and $1$ face with $9$ sides (the external face). So in order to make the sum equal $0$, we must have $$ 3(f_5 - g_5) + 6 (f_8 - g_8) + 7 (f_9 - g_9) = 0 $$ where $f_5 + g_5 = 21$, $f_8 + g_8 = 3$, and $f_9 + g_9 = 1$. Taking the sum mod $3$, we get $f_9 - g_9 \equiv 0 \pmod 3$, which cannot happen if one of $f_9, g_9$ is $1$ and the other is $0$. So it's impossible to make the sum equal $0$, and therefore there cannot be a Hamiltonian cycle in this graph. This strategy, by the way, cannot prove the existence of a Hamiltonian cycle. Just because there's an arbitrary partition of the faces into an "inside" category and an "outside" category for which the sum is $0$, doesn't mean there is actually a Hamiltonian cycle that contains all of the "inside" faces and none of the "outside" faces.
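The mod-$3$ argument can be confirmed by brute force over every possible inside/outside split (a small check, not part of the answer):

```python
from itertools import product

# f5 pentagons inside (of 21), f8 octagons inside (of 3), f9 nonagons inside (of 1);
# the outside counts are the complements, and we test Grinberg's sum.
solutions = [
    (f5, f8, f9)
    for f5, f8, f9 in product(range(22), range(4), range(2))
    if 3*(f5 - (21 - f5)) + 6*(f8 - (3 - f8)) + 7*(f9 - (1 - f9)) == 0
]
print(solutions)  # [] -- no split works, so no Hamiltonian cycle exists
```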
H: How can Complex numbers be written in this way? In quantum optics, we assume that the eigenvalues of coherent states are complex numbers. So, when we determine the overlap of two different coherent states we have to deal with complex numbers all the time. In an academic book, I encountered an equation and I want to know how complex numbers can be expressed in this way: Assume that $\alpha$ and $\beta$ are complex numbers, we have: $e^{\alpha^*\beta-\alpha\beta^*}$=$e^{2i{\Im}(\alpha^*\beta)}$ Can somebody explain for me, how $\alpha^*\beta-\alpha\beta^* = 2i{\Im}(\alpha^*\beta)$? Are there any equivalent expressions? Can we say that it is also equal to $2i{\Im}(\alpha\beta^*)$ or $-2i{\Im}(\alpha^*\beta)$? AI: Write $\alpha^\ast\beta=a+bi$ with $a,\,b\in\Bbb R$ so $\alpha\beta^\ast=(\alpha^\ast\beta)^\ast=a-bi$ and$$\alpha^\ast\beta-\alpha\beta^\ast=2bi=2i\Im(\alpha^\ast\beta)=-2i\Im(\alpha\beta^\ast).$$
H: Need to find maximum in order to prove that the function converges uniformly. So, here is the function: $f_n = \sqrt{n}\left(\sqrt{x+\frac{1}{n}}-\sqrt{x}\right)$ I found that the limit for it is $0$. Now I want to show whether it converges uniformly. So first I tried to find a maximum. $$f'_n(x) = \frac{1}{2}\sqrt{n}\left(\frac{1}{\sqrt{x+\frac{1}{n}}}-\frac{1}{\sqrt{x}}\right) = \\ \frac{1}{2}\sqrt{n}\left(\frac{\sqrt{x}-\sqrt{x+\frac{1}{n}}}{\sqrt{x}\sqrt{x+\frac{1}{n}}}\right)$$ Then I solve for the $x$ in the numerator. $$\sqrt{x}-\sqrt{x+\frac{1}{n}}=0 \Rightarrow x -2\sqrt{x}\sqrt{x+\frac{1}{n}}+x+\frac{1}{n}= 0 \\ 2\sqrt{x}\sqrt{{x+\frac{1}{n}}}=\frac{1}{n}+2x \Rightarrow 2x^2+2x\frac{1}{n}=\frac{1}{n^2}+4x\frac{1}{n}+4x^2 \\ 2x^2+2x\frac{1}{n}+\frac{1}{n^2}=0 \ \ x_1 = \frac{-2\frac{1}{n}+ \sqrt{\frac{4}{n^2}-\frac{8}{n^2}}}{6} = -\frac{1}{3n}+\frac{1}{3n}=0$$ $$x_2 = -\frac{2}{3n}$$ I get very clumsy $x_1$ so I gather I have made some mistake all along. Can someone help? AI: Your sequence of functions converges pointwise to $0$ on $\mathbb{R}_+^*$. Thus, it converges uniformly to zero precisely when $||f_n||_{\infty}= \sup_{x}|f_n(x)| \to 0$. But $f_n(x) \to 1$ as $x \to 0$, so $||f_n||_{\infty}\geqslant 1$ for all $n$, and $(f_n)$ does not converge uniformly.
H: what does this notation $Z1\{condition\}$ mean? I am reading that you can decompose a random variable like this: $$ Z = Z1\{Z\leq\theta\mathbb{E}[Z]\} + Z1\{Z>\theta\mathbb{E}[Z]\} $$ What does $$Z1\{condition\} $$ mean in this case? Thanks for your help. AI: $1_{\{condition\}}$ is an indicator function: $1$ when condition is true, $0$ when it is not. $Z 1_{\{condition\}}$ is just the random variable $Z$ multiplied by this.
H: Integral of Stochastic Integral $ x(t) = \int_0^te^{as}dW_s $ I'm working through an exercise where I have got the following object: $$ x(t) = \int_0^te^{as}dW_s $$ for some constant $a$. Now, I need to find the distribution of $\int_0^t x(s) ds$. I'm struggling to do this and I don't know where to look for help. So, I want to find the distribution of: $$ \int_0^t \int_0^s e^{au} dW_uds $$ How do I go about solving this problem? In particular I need to find its mean and variance. I assume that the order of integration can't be swapped, but I don't know for certain. Can anyone enlighten me? AI: Integrate by parts \begin{align} \int_0^t x(s) ds&= tx(t) -\int_0^t sdx(s) \\ & =t\int_0^te^{as}dW_s-\int_0^t se^{as}dW_s \\ &= \int_0^t(t-s)e^{as}dW_s \end{align} which is a normal distribution of zero mean and, by the Itô isometry, variance $$var=\int_0^t(t-s)^2e^{2as}ds=\frac{e^{2at}-2a^2t^2-2at-1}{4a^3}$$
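A Monte Carlo check of the variance formula (a sketch, not from the answer; it uses a left-endpoint Euler discretisation of the stochastic integral with numpy, and the values $a=0.5$, $t=1$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
a, t, n_steps, n_paths = 0.5, 1.0, 500, 20_000
ds = t / n_steps
s = np.arange(n_steps) * ds                       # left endpoints of the grid
dW = rng.normal(0.0, np.sqrt(ds), size=(n_paths, n_steps))
Y = dW @ ((t - s) * np.exp(a * s))                # Y ~ sum_k (t - s_k) e^{a s_k} dW_k
theory = (np.exp(2*a*t) - 2*a**2*t**2 - 2*a*t - 1) / (4 * a**3)
print(Y.mean(), Y.var(), theory)                  # mean near 0, variance near theory
```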
H: Units of $R[X]/(aX-1)$ In the most upvoted answer to another question here, the author states that: $R\to R[x]$ followed by the quotient map $R[x]\to R[x]/(ax-1)$. Call this $f$. Note that $f(a)$ is a unit in $R[x]/(ax-1)$ If I understand correctly, the result is $f(a)=ax^0\mod{ax-1}=ax^0$, and I fail to see why this is necessarily a unit, as $a\in R$ is just a non-zero element in an integral domain, so it doesn't necessarily have an inverse. Why is $f(a)$ a unit? Am I calculating it wrong? AI: Let the ideal $(ax-1) = I$. The map $f$ sends $a \mapsto a+I$ in the quotient ring and $(x+I)(a+I) = ax+I$ but $-ax+1 \in I$ so $(x+I)(a+I) = 1+I$. Hence $f(a)$ is a unit in $R[x]/I$.
H: Number of Elliptic Curves over Fp I am a beginner/amateur in the topic. According to https://www.math.brown.edu/~jhs/Presentations/WyomingEllipticCurve.pdf on page 45, there are approximately 2p different elliptic curves defined over $F_p$. On SAGE, I tried to enumerate elliptic curves over $F_5$ looking at all curves $Y^2 = X^3 + iX+j$ with a non-null discriminant $\Delta = 4i^3 + 27j^2$:

```python
p = 5
Fp = FiniteField(p)
R = IntegerModRing(p)
A = matrix(R, p, p, lambda i, j: 4*i*i*i + 27*j*j)
L = []
for i in range(p):
    for j in range(p):
        if A[i, j] != 0:
            E = EllipticCurve(Fp, [i, j])
            L.append(E.points())
S = list(set(L))
S
```

and I get 20 answers: [[(0 : 0 : 1), (0 : 1 : 0)], [(0 : 1 : 0), (0 : 2 : 1), (0 : 3 : 1), (2 : 1 : 1), (2 : 4 : 1), (4 : 1 : 1), (4 : 4 : 1)], [(0 : 1 : 0), (3 : 1 : 1), (3 : 4 : 1)], [(0 : 1 : 0), (0 : 1 : 1), (0 : 4 : 1), (2 : 1 : 1), (2 : 4 : 1), (3 : 1 : 1), (3 : 4 : 1), (4 : 2 : 1), (4 : 3 : 1)], [(0 : 1 : 0), (2 : 0 : 1), (3 : 2 : 1), (3 : 3 : 1), (4 : 1 : 1), (4 : 4 : 1)], [(0 : 1 : 0), (3 : 2 : 1), (3 : 3 : 1), (4 : 2 : 1), (4 : 3 : 1)], [(0 : 1 : 0), (0 : 1 : 1), (0 : 4 : 1), (1 : 1 : 1), (1 : 4 : 1), (3 : 0 : 1), (4 : 1 : 1), (4 : 4 : 1)], [(0 : 1 : 0), (0 : 1 : 1), (0 : 4 : 1), (2 : 2 : 1), (2 : 3 : 1), (4 : 0 : 1)], [(0 : 1 : 0), (0 : 2 : 1), (0 : 3 : 1), (1 : 2 : 1), (1 : 3 : 1), (2 : 0 : 1), (4 : 2 : 1), (4 : 3 : 1)], [(0 : 1 : 0), (1 : 2 : 1), (1 : 3 : 1), (2 : 1 : 1), (2 : 4 : 1), (3 : 0 : 1)], [(0 : 0 : 1), (0 : 1 : 0), (1 : 2 : 1), (1 : 3 : 1), (2 : 2 : 1), (2 : 3 : 1), (3 : 1 : 1), (3 : 4 : 1), (4 : 1 : 1), (4 : 4 : 1)], [(0 : 1 : 0), (0 : 2 : 1), (0 : 3 : 1), (1 : 1 : 1), (1 : 4 : 1), (2 : 2 : 1), (2 : 3 : 1), (3 : 2 : 1), (3 : 3 : 1)], [(0 : 1 : 0), (1 : 0 : 1), (4 : 1 : 1), (4 : 4 : 1)], [(0 : 0 : 1), (0 : 1 : 0), (1 : 0 : 1), (2 : 1 : 1), (2 : 4 : 1), (3 : 2 : 1), (3 : 3 : 1), (4 : 0 : 1)], [(0 : 1 : 0), (0 : 2 : 1), (0 : 3 : 1), (1 : 0 : 1), (3 : 1 : 1), (3 : 4 : 1)], [(0 : 1 : 0), (1 : 2 : 1), (1 : 3 : 1), (4 : 0 : 1)], [(0 : 1 : 0), (2 : 2 : 1), (2 : 3 : 1)], [(0 : 1 : 0), (0 : 1 : 1), (0 : 4 : 1), (1 : 2 : 1), (1 : 3 : 1), (3 : 2 : 1), (3 : 3 : 1)], [(0 : 0 : 1), (0 : 1 : 0), (2 : 0 : 1), (3 : 0 : 1)], [(0 : 1 : 0), (1 : 1 : 1), (1 : 4 : 1), (2 : 1 : 1), (2 : 4 : 1)]] Have I done something wrong, or am I missing something? According to Silverman I was expecting around 2x5 = 10 different curves. Are some identifiable by some symmetry? I can see there are indeed pairs of curves with the same number of points, but they don't really look obviously the same to me, nor can I see an obvious relationship between the coefficients of those having the same number of points. AI: Elliptic curves over an algebraically closed field are isomorphic if and only if they have the same $j$-invariant, so computing the $j$-invariants will tell you when two curves are not isomorphic at least (if the $j$-invariants are different!) Over a non-algebraically closed field elliptic curves of fixed $j$-invariant (not equal to 0, 1728) are all quadratic twists of each other, that is there exists $D\in \mathbf F_p^\times$ s.t. $E_1\cong E_2^{(D)}$, where the $D$th quadratic twist is defined by $$y^2 = x^3 + ax +b \mapsto Dy^2 = x^3 +ax+b$$ This is short Weierstrass form, so let's use $p\ne 2,3$; the quadratic twist is isomorphic to the original curve when $D$ is a square.
For a finite field the group $\mathbf F_p^\times/(\mathbf F_p^\times)^2 \cong C_2$, so any two non-squares differ from each other by a square; hence there is only one possible quadratic twist of each elliptic curve which is not isomorphic to the original. This is what gives us that there are roughly $2p$ curves: excluding $0, 1728$ we have $\approx p-2$ $j$-invariants, each of which gives two non-isomorphic quadratic twists over $\mathbf F_p$. In Sage you can use the .is_quadratic_twist method of an elliptic curve to check if two curves are quadratic twists, and .is_isomorphic to check isomorphism. You can also find the twisted curves using .quadratic_twist. Using these methods you can reduce your list down to the exact number of curves, or build up the complete list starting from the set of all $j$-invariants. Note that when $j = 0,1728$ this is more complicated as you also get sextic and quartic twists!
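Along those lines, a minimal Sage sketch (hedged: it relies on the Sage methods named above) that groups the 20 curves into isomorphism classes instead of comparing point lists:

```python
# Sage code (note Sage's ^ is exponentiation)
p = 5
curves = [EllipticCurve(GF(p), [i, j])
          for i in range(p) for j in range(p)
          if (4*i^3 + 27*j^2) % p != 0]           # non-zero discriminant only

classes = []                                      # one representative per isomorphism class
for E in curves:
    if not any(E.is_isomorphic(F) for F in classes):
        classes.append(E)

print(len(curves), len(classes))                  # 20 curves, fewer isomorphism classes
print(sorted(set(E.j_invariant() for E in curves)))  # the distinct j-invariants
```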
H: Which integral Stiefel-Whitney classes are universally $0$? Let $BO(n)$ denote the classifying space of the orthogonal group $O(n)$. Then there is the well-known ring isomorphism $$H^*(BO(n);\mathbb{Z}/2) \cong \mathbb{Z}/2[w_1,\dots,w_n] $$ where $w_i \in H^i(BO(n);\mathbb{Z}/2)$ is the $i$-th universal Stiefel-Whitney class. From the short exact sequence $\mathbb{Z} \stackrel{\cdot 2}{\to} \mathbb{Z} \to \mathbb{Z}/2$ there is a natural Bockstein homomorphism $\beta\colon H^k(-;\mathbb{Z}/2) \to H^{k+1}(-;\mathbb{Z})$, which in particular has the property that $\beta(c) = 0$ iff $c$ is the mod-$2$ reduction of some integral class. Then we can define the integral Stiefel-Whitney classes $$ W_i = \beta(w_{i-1}) \in H^i(BO(n);\mathbb{Z}).$$ I haven't found much information about these in the usual sources other than the definition. My question is whether these are all non-zero, and if not whether there is a complete description of which ones vanish. In particular I would like to know if the universal $W_4$ is $0$. This question is (tangentially) related to this other question about the Universal Coefficient Theorem, which led me to wonder if there is $2$-torsion in $H^4(BO(n);\mathbb{Z})$ for $n\geq 4$. I know that $H^*(BO(n);\mathbb{Z})$ consists of a free part generated by Pontryagin classes and a $2$-torsion part given by $im(\beta)$, but I can't determine whether $\beta(w_3)=0$ or not. Note: I am aware of papers by Brown and Feshbach giving fairly explicit/complete descriptions of the ring $H^*(BO(n);\mathbb{Z})$, but I have only been able to find them on JSTOR and I don't have access. Edit: An idea I had was to try to use the formula for $Sq^i(w_j)$ (for example here) and then I think it's true that $Sq^1 = (\text{reduction mod } 2)\circ\beta$. I computed $Sq^1(w_3) = w_1w_3$, which is non-zero in $H^*(BO(n);\mathbb{Z}/2)$ by algebraic independence of SW classes, but then this would mean $W_4 = \beta(w_3) \neq 0$. Is this argument valid? AI: Let $\rho\colon H^*(-;\mathbb{Z}) \to H^*(-;\mathbb{Z}/2)$ be the natural "reduction mod-$2$" map induced by $\mathbb{Z} \to \mathbb{Z}/2$. Then we can use the fact that the first Steenrod square $Sq^1$ is actually the composition $\rho \circ \beta$, in conjunction with the formula (linked in the question) $$Sq^i(w_j) = \sum_{t=0}^i\binom{j-i-1+t}{t}w_{i-t}w_{j+t} $$ where $i \le j$ (the Steenrod square $Sq^i w_j$ vanishes if $i > j$ for degree reasons, and $Sq^j w_j = w_j^2$). In particular, let $i = 1$ and $j> 1$: then the formula simplifies considerably to $$Sq^1(w_j) = w_1w_j + (j-1)w_{j+1}. $$ Since the Stiefel-Whitney classes are algebraically independent in $H^*(BO(n);\mathbb{Z}/2)$ it follows from this formula that $Sq^1(w_j)$ and hence $\beta(w_j)$ are non-zero for $j > 1$, therefore $W_j \neq 0$ for $j > 2$; moreover $Sq^1(w_1) = w_1^2 \neq 0$ so $W_2$ is non-zero as well. Edit: As Connor Malin observes in a comment the class $W_1$ is actually $0$, since the natural transformation $\beta\colon H^0(-;\mathbb{Z}/2) \to H^1(-;\mathbb{Z})$ is given by a map $\mathbb{Z}/2 \to K(\mathbb{Z},1)\simeq S^1$ which cannot be homotopically non-trivial. In other words it's not just $Sq^1\colon H^0(-;\mathbb{Z}/2) \to H^1(-;\mathbb{Z}/2)$ which is trivial in this degree, but $\beta$ itself.
H: Relation between variance and a type of expectation I'm asking if there are some relations/inequalities between those two: $$ \mathbb{E}[f(X)(X - \mathbb{E}[X])] \tag{1}, $$ and $$ \mathrm{Var}[X] = \mathbb{E}[(X - \mathbb{E}[X])^2] \tag{2}. $$ Let us assume $f$ is some regular bounded continuous function. My guess is that (1) should be somehow upper bounded by the variance (2). And (1) $\to 0$ when $(2)\to 0$. Because intuitively, when the variance (2) approaches $0$, then $X$ should be infinitely close to its mean $\mathbb{E}[X]$ in probability, which means $(X - \mathbb{E}[X]) \to 0$ in probability measure, thus (1) $\to 0$. In extreme case, when $\mathrm{Var}[X]=0$, then $X$ is degenerate, then (1)$=0$. So how to formally prove this, and how to find such upper bound if possible? AI: Using the Cauchy-Schwarz inequality: $|\mathbb{E}(XY)|^2 \leqslant \mathbb{E}(X^2)\mathbb{E}(Y^2)$ $|\mathbb{E}[f(X)(X-\mathbb{E}[X])]|^2 \leqslant \mathbb{E}[f(X)^2]\mathbb{E}[(X-\mathbb{E}[X])^2]$ so $|\mathbb{E}[f(X)(X-\mathbb{E}[X])]|^2 \leqslant \mathbb{E}[f(X)^2] \text{Var}[X]$ Hence if $\text{Var}[X] \rightarrow 0$ then $\mathbb{E}[f(X)(X-\mathbb{E}[X])] \rightarrow 0$ (assuming $f$ is bounded) Additionally if $x \mapsto f(x)^2$ is a concave function then Jensen's inequality could also be used: $\mathbb{E}[f(X)^2] \leqslant f(\mathbb{E}[X])^2$. This would decrease how much information about $f$ is needed.
H: Question similar to Collatz game A problem is this: Something I'm thinking about. Let $f(n)$ be the number of $1$ bits in $n$, i.e., $f(3)=2$, $f(8)=1$. Consider the trajectory of a natural number $k$ under the map $n \mapsto n+f(n)$. Does it always collapse into the trajectory of $1$ under the same map? PLEASE help!!! Do you need to use Turán's theorem?? The first $30$ numbers when doing this game starting from $1$ are 2 3 5 7 10 12 14 17 19 22 25 28 31 36 38 41 44 47 52 55 60 64 65 67 70 73 76 79 84 87 and then if we start from $4$, we get 5 7 10 12 14 17 19 22 25 28 31 36 38 41 44 47 52 55 60 64 65 67 70 73 76 79 84 87 92 96 and if we start from $6$ we get: 8 9 11 14 17 19 22 25 28 31 36 38 41 44 47 52 55 60 64 65 67 70 73 76 79 84 87 92 96 98 and if we start from $11$, we get 14 17 19 22 25 28 31 36 38 41 44 47 52 55 60 64 65 67 70 73 76 79 84 87 92 96 98 101 105 109 and if we start from $13$ we get 16 17 19 22 25 28 31 36 38 41 44 47 52 55 60 64 65 67 70 73 76 79 84 87 92 96 98 101 105 109 AI: OEIS sequence A096303 is related to this question. The author conjectures that any number will meet the trajectory of 1 under the map $n \mapsto n + f(n)$.
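Empirically the merging is easy to test with a short script (a sketch, not a proof; the cutoff $10^6$ and the range of starting values are arbitrary choices):

```python
def step(n):
    return n + bin(n).count("1")    # n -> n + (number of 1 bits of n)

limit = 10**6
base = set()                        # trajectory of 1, truncated at `limit`
n = 1
while n <= limit:
    base.add(n)
    n = step(n)

merged = True
for k in range(2, 1000):
    n = k
    while n <= limit and n not in base:
        n = step(n)
    merged &= n in base
print(merged)  # empirically True: every start 2..999 meets the trajectory of 1
```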
H: For $T(t)$ strongly continuous, check that $T(t)x - x = tAx + \int_0^t (t-s) T(s)A^2 x ds$. I am reading Lemma 2.8 of "Semigroups of Linear Operators and Applications to Partial Differential Equations" by Pazy: Let $A$ be the infinitesimal generator of a strongly continuous semigroup $T(t)$ satisfying $\|T(t)\| \leq M$ for $t \geq 0$. If $x \in D(A^2)$, then $\|Ax\|^2 \leq 4M^2 \|A^2x\| \|x\|$. In part of the proof, it says: Using the fact that for any $x \in D(A)$, $T(b)x - T(a)x = \int_a^b T(s) Ax d s = \int_a^b AT(s)x ds$, it is easy to check that $$T(t)x - x = tAx + \int_0^t(t-s) T(s)A^2x ds.$$ But I don't know how to derive it from the fact it mentions. Could anyone help me with that? AI: Apply the fact given to you onto the term $AT(s)x$ in the integrand. That is \begin{aligned} \int^t_0A(T(s)x)\,ds &=\int^t_0A\Big(x+\int^s_0AT(v)x\,dv\Big)ds\\ &=tAx +\int^t_0\Big(\int^s_0A^2T(v)xdv\Big)ds \end{aligned} The rest should follow by changing the order of integration. The validity of this relies on the strong continuity of the semigroup $T$.
H: Estimated Time of Completion given Progress and Elapsed Time Say I have some task that I have spent an hour on, and I am 33% done. It should take me 3 hours to complete this task. Another task may be 50% done, and I have spent 3 hours on it. That should take a total of 6 hours. What is that equation? Something like time_remaining = f(percent_done, time_spent) AI: Since time used = percent done * total time, you want total time = (time used)/(percent done). If you want the remaining time: time remaining = total time - time used = time used * (1/(percent done) - 1)
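In code form (a tiny sketch; percent_done is taken as a fraction in (0, 1]):

```python
def time_remaining(percent_done, time_spent):
    # total = time_spent / percent_done, so remaining = total - time_spent
    return time_spent * (1.0 / percent_done - 1.0)

print(time_remaining(1/3, 1.0))  # 2.0 hours left of a 3 hour task
print(time_remaining(0.5, 3.0))  # 3.0 hours left of a 6 hour task
```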
H: First isomorphism theorem for groups proof that function is well-defined Theorem: If $\alpha:G \to H$ is a homomorphism, then $G/ \ker{(\alpha)}$ is isomorphic to $\alpha(G)$. Denote $\ker(\alpha)$ as $K$, and recall that $K$ is a normal subgroup of group $G$ for homomorphism $\alpha$. Then the function $\beta:G/K \to \alpha(G)$ defined by $\beta(aK)=\alpha(a)$ is an isomorphism. First, show that $\beta(aK)$ is an unambiguous/well-defined expression. Suppose the cosets $aK=bK$ for $a,b \in G$. Is $\beta(aK)=\alpha(a)=\alpha(b)=\beta(bK)$? Recall that $g_1 \equiv g_2 \mod H \leq G$ means $g_1^{-1}g_2 \in H$, and congruence mod some subgroup $H$ is an equivalence relation on $G$, with equivalence classes being the left cosets $gH=\{gh \mid h \in H\}$ for elements $g \in G$. Similarly, defining $g_1 \equiv g_2 \mod H$ as $g_1g_2^{-1} \in H$ is also an equivalence relation partitioning $G$ into right cosets $Hg$. The proof I'm reading (Lee's Abstract Algebra) claims this: Suppose that $aK=bK$. Then $a^{-1}b \in K$, and therefore $\alpha(a^{-1}b)=e$. That is, $(\alpha(a))^{-1}\alpha(b)=e$, so $\alpha(a)=\alpha(b)$. I agree that we have shown that $\alpha(b)$ is the right inverse of $(\alpha(a))^{-1}$. This is very close to $\alpha(b)=((\alpha(a))^{-1})^{-1}$, but don't we need to show $\alpha(b)$ is also the left inverse of $\alpha(a)^{-1}$? We used the left coset equivalence relation: equivalence classes $aK=[a]=[b]=bK$, so $a \sim b$, so $a^{-1}b \in K$. We could also say $b \sim a$, so $b^{-1}a \in K$ and $\alpha(b)^{-1}\alpha(a)=e$, but I don't think this helps. I think we have to use the fact that $K$ is normal: $Ka=Kb$, so $b \sim a$ under the right coset equivalence relation, so $ba^{-1} \in K$, so $\alpha(b)\alpha(a)^{-1}=e$, so $\alpha(b)$ is also the left inverse of $\alpha(a)^{-1}$. Am I correct, and is there a simpler way to show $\alpha(a)=\alpha(b)$? AI: Say $g,h \in G$, and we know that $g^{-1}h = e$. Then, we have $g (g^{-1}h) = g e$. Hence, we get that $(g g^{-1}) h = h = g$. In your question, you acknowledge that $\alpha (a)^{-1} \alpha(b) = e $. You can just proceed as above from there. Regarding your remark about left and right inverses: those are the same in a group. If $gh = e$, then $ghg = g$, and left multiplying by $g^{-1}$ gives you that $hg = e$.
H: What is meant by “how an element in the domain is mapped to its image”. In the following lecture given by Fredric Schuller, he mentions this during the lecture which is on multilinear algebra (know that $P$ is a set of polynomials such that $p$ $\in$ $P$): “consider the map $I$: $P$ $\to$ $\mathbb{R}$, now I need to say how a polynomial is mapped to a real number $I(p) := \int_0^1 p(x)\,dx$” My question is: What does he mean when he says “how a polynomial is mapped to a real number”? Is he simply referring to the definition of $I$? I’ve just never heard someone say that instead of saying “The map $I$ is defined as...” AI: He is just defining the mapping. He is saying: I am defining a mapping $I: P \to \mathbb{R}$ which assigns to each polynomial $p(x)$ the real number $\int_0^1 p(x) dx$.
H: Why do we use the method of matrix exponential? I have a linear system of homogeneous ordinary differential equations, i.e.: $$ \dot{x}=Ax $$ where $A$ is an $n\times n$ real matrix. The matrix exponential method (described for example here) tells me that $$ e^{At} C $$ where $C=(C_1,\dots,C_n)$ are arbitrary constants, is the general solution to the system. There is another method, which I will call here the eigenvectors method (described in three parts here), that builds the solutions step-by-step, column-by-column. If $A$ is diagonalizable with eigenvectors $v_1,\dots,v_n$ we obtain the solution $$ x=C_1 e^{\lambda_1 t} v_1 + \dots + C_n e^{\lambda_n t} v_n $$ If $A$ is not diagonalizable, then we use the Jordan form of $A$: for generalized eigenvectors we have generalized summands. So if $v$ is not an eigenvector, but a generalized eigenvector, then instead of writing $C_i e^{\lambda_i t} v$ in the sum, we write $$ C_i e^{\lambda_i t}\left( 1+t+\frac{t^2}{2} + \frac{t^3}{3!} + \dots + \frac{t^n}{n!} \right) v\qquad (*)$$ where $n$ is the rank of the generalized eigenvector $v$. Now comes my question. When I first saw these two methods, I thought they are one and the same method. It's because, to compute $e^{At}$ we also need to compute the (maybe generalized) diagonalization $A=M D M^{-1}$ where $D$ is in Jordan form and $M$ is a base changing matrix. Then, by a known formula $$ e^{At}= e^{M(Dt)M^{-1}} = Me^{Dt}M^{-1} $$ and computing $e^{Dt}$ can be done easily. If $D$ is diagonal, then $e^{Dt}$ is just element-wise exponentiation. If there is an off-diagonal nonzero element, we get something resembling $(*)$. So the eigenvectors method is just the matrix exponential method in disguise, right? Sadly no. Even if $D$ is indeed diagonal, the matrix exponential method will give $$ x = M e^{Dt} M^{-1} C $$ but the eigenvector method will give instead $$ x = M e^{Dt} C $$ and this is really confusing for me. Why on Earth would I want to compute $M^{-1}$, when it's completely unnecessary? I just fail to understand why the matrix exponential method exists. So I want to know why. For me now it looks like the matrix exponential does more work for uglier results, because almost always $Me^{Dt}$ is simple and $Me^{Dt}M^{-1}$ is ugly. Maybe I am wrong or something that I've written above is wrong, that answers my question? (The question is motivated by 1 hour+ of expanding out $Me^{Dt}M^{-1}$, after which I discovered the more elegant method.) AI: The eigenvectors method is a consequence of the matrix exponential method. You get (*) by computing $e^{At}v$, where $v$ is a generalized eigenvector. The reason you got two different matrices is that they are two different fundamental matrices for the equation $$ \dot{x} = Ax. $$ In other words, both $X_1(t)=e^{At}$ and $X_2(t)$ obtained by the eigenvectors method satisfy this equation, and the general solution is in the form $$ x(t)=X_1(t)C_1 $$ or equivalently $$ x(t)=X_2(t)C_2. $$ However, if you add initial conditions, the constants $C_1$ and $C_2$ will be different. So to sum up, the eigenvectors method relies on the form of the exponential matrix, but indeed it is usually nicer to apply than obtaining the exponential matrix itself.
H: Entire function that is a bijection on the unit disk is a rotation I'm working on this problem "Let $f$ be an entire function. Suppose $f$ restricted to the unit disk is a bijection. Prove that $f$ is a rotation." My attempt: It is tempting to use Schwarz lemma. Let $T$ be the linear transformation that maps the unit disk to itself and $T(f(0))=0$ (If I remember it right, $T=\frac{z-f(0)}{1-\overline{f(0)}z}$). Then $|Tf(z)|\leq 1$ in the unit disk and $T(f(0))=0$. So, if I can show that $|Tf(z)|=|z|$ for some nonzero $z$, we are done by the lemma. I want to say $Tf$ achieves maximum on the unit circle. So if the maximum is 1, $|T(f(z))|=1=|z|$ for some $z$ there. However I am stuck here. AI: All holomorphic bijections of the unit disk onto itself are Möbius transformations of the form $$ T(z) = e^{i\lambda} \frac{z-a}{1-\overline a z} $$ for some $a \in \Bbb D$ and $\lambda \in \Bbb R$. (See for example Can we characterize the Möbius transformations that maps the unit disk into itself?.) If $T$ is the restriction of an entire function $f$ then necessarily $a=0$ (otherwise $f$ would have a pole at $z= 1/\overline a$).
H: Does $f'(x) > 0$ and $f''(x) < 0$ imply that $f'(x)$ is converging to $0$ as $x$ is tending to infinity? I have a question related to another question which I asked today, where some people gave me two nice counterexamples. Suppose you have a differentiable function $f:\mathbb{R}_+ \rightarrow \mathbb{R}_+ $, where $f'(x)>0, f''(x) <0$ and $f(0) = 0$. Does this imply that $f'(x)$ is converging to 0 as $x$ is tending to infinity? AI: Here's a counterexample. Start with the function $$f(x) = \sqrt{x^2-1} $$ on the domain $x \ge 1$; this is the upper half of one branch of a hyperbola which is asymptotic to $y=x$. Next, translate this downward by a unit: $$g(x) = \sqrt{x^2-1}-1 $$ so now it intersects the $x$-axis at the point $x=\sqrt{2}$. Finally, translate this leftward by $\sqrt{2}$: $$h(x) = \sqrt{(x + \sqrt{2})^2 - 1} - 1 $$ This function is asymptotic to a certain line of slope $+1$ (obtained by taking the line $y=x$, translating downward by a unit, and leftward by $\sqrt{2}$). So, its derivative approaches $1$ as $x \to \infty$. And it satisfies $h(0)=0$, $h'(x)>0$, $h''(x) < 0$.
H: Find a convergence interval for the series of functions. I tried to use the ratio test to determine the convergence interval for the series of functions given as $\Sigma^{\infty}_{k=0}\frac{k!(x-2)^k}{k^k}$. But the ratio test was infinity no matter the x. Here is the working: $$\Sigma^{\infty}_{k=0}\frac{k!(x-2)^k}{k^k} \\ \text{Apply d'Alembert's ratio test: } \frac{a_{n+1}}{a_n} = \frac{(k+1)!(x-2)^{k+1}k^k}{k^{k+1}k!(x-2)^k} = \frac{(x-2)^k (k+1)}{k} \\ \lim_{k \to \infty} \left|\frac{a_{k+1}}{a_k} \right| = \infty$$ Maybe there is a better way to find the convergence interval? AI: Your computations are not correct. We have\begin{align}\frac{\left|\frac{(k+1)!(x-2)^{k+1}}{(k+1)^{k+1}}\right|}{\left|\frac{k!(x-2)^k}{k^k}\right|}&=\left(\frac k{k+1}\right)^k|x-2|\\&\to\frac{|x-2|}e.\end{align}Therefore, the radius of convergence is $e$.
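Numerically, the key factor $(k/(k+1))^k$ indeed tends to $1/e$ (a quick check, not part of the answer):

```python
import math

for k in (10, 100, 1000, 10_000):
    print(k, (k / (k + 1))**k, 1 / math.e)  # second column approaches 0.3678794...
```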
H: Find irreducible factors without factorizing I have an exercise from my course notes that states: Find how many irreducible factors $f(x) = x^{26}-1$ has over $\mathbb{F}_3$ and their degrees. (don't factorize it) I see immediately that $1$ is a root of $f$. So I have $f(x)=(x-1) g(x)$ where $g$ has degree $25$ with no root in... actually I don't know really how to move on. AI: The distinct-degree factorization theorem tells us that $X^{27}-X$ factors mod $3$ into the product of all monic irreducible polynomials in $\mathbb{F}_3[X]$ whose degree divides $3$ (since $27 = 3^3$). So $X^{26}-1$ is the product of all monic irreducible polynomials in $\mathbb{F}_3[X]$ of degree $1$ and $3$ except $X$. There are $3$ monic irreducibles of degree $1$ (namely $X$, $X-1$, $X-2$) and $(3^3-3)/3 = 8$ of degree $3$, so $X^{26}-1$ has $2+8 = 10$ irreducible factors: two of degree $1$ and eight of degree $3$.
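A machine check of the count (a sketch assuming sympy's modulus option for factoring over $\mathbb{F}_3$):

```python
from collections import Counter
from sympy import symbols, factor_list, degree

x = symbols('x')
_, factors = factor_list(x**26 - 1, x, modulus=3)
print(Counter(degree(f, x) for f, mult in factors))  # expect Counter({3: 8, 1: 2})
```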
H: Help with exercise in complex analysis on the existence of a mapping I have encountered the following exercise from a practice exam: Does there exist an analytic function mapping the annulus: $ A = \{ z | 1 \leq |z| \leq 4 \} $ onto the annulus: $ B = \{ z | 1 \leq |z| \leq 2 \} $ And which takes $ C_1 \to C_1 $ and $ C_4 \to C_2 $ where $ C_r $ is the circle centered at the origin with radius $ r $. I have no clue how to approach this. Perhaps if I assume such a function exists I can reach a contradiction but how to do this? Or perhaps such a function exists and I cannot think of it? I appreciate all help on this. AI: The answer is no because the function would (essentially) need to be $\sqrt z$ and that is not globally defined in the annulus. To prove this just let $u(z)=2\log |f| - \log |z|$, which is harmonic and zero on the boundary, so $u(z)=0$ by the maximum principle, hence $2\log |f|= \log |z|$ But now using a local holomorphic logarithm $h_w(z)= \log f(z)$ around any point $w \in A$ one gets that $|\frac{e^{2h_w(z)}}{z}|=1$ so $e^{2h_w(z)}=\alpha_wz, |\alpha_w|=1$ and logarithmic differentiation gives $(2f'/f)(z)=1/z$ which holds in all of $A$ as the dependence on $w$ vanishes. Integrating on a circle of radius in-between $1$ and $4$ leads to the contradiction $2k=1$ for some integer $k$. Done! (edit later - note that the same proof shows more generally that a (holomorphic) map from annulus $(1,R_1)$ to annulus $(1,R_2), 1 < R_1, R_2$ that takes distinct boundary circles to distinct boundary circles exists iff $R_2=R_1^k, k$ integral, so for example $z^2$ takes $B$ to $A$ in the OP's notation - the only thing to add is that if the map inverts circles (takes $1$ to $R_2$) then compose it with an annulus inversion, and then if $R_2=R_1^a, a>0$ the proof above with $a$ instead of $1/2$ shows that $k/a=1$ for some positive integer $k$)
H: Difference of two increasing functions. Let $f$, and $g$, both be an increasing function w.r.t $x$, s.t $x\in(0,1)$. What can we comment on the nature of $f-g$? AI: The information provided on $f$ and $g$ is not enough to deduce how $f-g$ behaves. Depending on which function increases faster affects whether or not $f-g$ increases, decreases, or is constant. If $f(x)=e^x$ and $g(x)=\frac{x}{10}$ then $f-g$ is increasing, but switch $f$ and $g$ and you get $f-g$ is decreasing. Similarly if $f$ and $g$ grow at the same rate/are identical then $f-g$ is constant.
H: Uniform convergence for series Given the series $\Sigma_{k=1}^{\infty} \frac{x^2 \cos (2x)}{1+k^4x^6}$, show that the series converges uniformly. I tried to do it like this: $$\frac{x^2 \cos (2x)}{1+k^4x^6} \le \frac{x^2 }{1+k^4x^6} \\ \text{Differentiate with respect to } x \text{ and set equal to zero: } \frac{2x(1+k^4x^6) -x^2 (6k^4x^6) }{(1+k^4x^6)^2} = 0 \\ x(2+2k^4x^6-6k^4x^7) = 0 \Rightarrow x_1 = 0 \\ x(k^4x^5-3k^4x^6)=-1 \Rightarrow x_2 = -1 \\ x^5(k^4-3k^4x) = 0 \Rightarrow x_3 = -\frac{1}{3} \\ f''_k =\frac{(2+2k^4x^6+12x^6-26k^4x^7-36k^4x^7)(1+k^4x^6)-(2x(1+k^4x^6) -x^2 (6k^4x^6)) 6k^4x^5}{1+k^4x^6} \\ f''_k(0) = 2 > 0 \Rightarrow \text{ not a max} \\ f''_k(-1) = (2+4k^4+24 +52k^4 +72k^4) (1+2k^4) -(12k^4+24k^8-2\cdot 1296k^8) > 0 \\$$ But the way I do it, it looks clumsy and no point is a max. Then what could be another way? AI: Since $|\cos(2x)|\le 1$, it suffices to bound $\frac{x^2}{1+k^4x^6}$ by a constant $M_k$ independent of $x$ with $\sum_k M_k < \infty$, and then apply the Weierstrass M-test. Writing $u=x^2 \ge 0$, the function $u \mapsto \frac{u}{1+k^4u^3}$ has derivative $\frac{1-2k^4u^3}{(1+k^4u^3)^2}$, so its maximum is attained at $k^4u^3 = \frac12$, where its value is $\frac{u}{1+\frac12} = \frac{2}{3}(2k^4)^{-1/3} \le k^{-4/3}$. Hence $\left|\frac{x^2\cos(2x)}{1+k^4x^6}\right| \le k^{-4/3}$ for all $x\in\Bbb R$, and since $\sum_k k^{-4/3}$ converges ($p$-series with $p=\frac43>1$), the series converges uniformly on $\Bbb R$ by the Weierstrass M-test.
H: Determine $I_n=\int_{-\infty}^\infty t^ne^{-\frac{t^2}{a}}dt$ given that $\int_0^\infty t^ne^{-t}dt =n!$ What about $I_n=\int_{-\infty}^\infty t^ne^{-\frac{t^2}{a}}dt$ for $a\in \mathbb{R}$, $a>0$? AI: For odd $n$ the integrand is odd, so $I_n = 0$. For even $n$ the integrand is even, so $I_n = 2\int_0^\infty t^n e^{-t^2/a}\,dt$. Substitute $u = t^2/a$ (so $t=\sqrt{au}$ and $dt = \frac{\sqrt a}{2\sqrt u}\,du$) to get $$I_n = a^{\frac{n+1}{2}}\int_0^\infty u^{\frac{n-1}{2}}e^{-u}\,du = a^{\frac{n+1}{2}}\,\Gamma\!\left(\frac{n+1}{2}\right) = a^{\frac{n+1}{2}}\left(\frac{n-1}{2}\right)! = a^{\frac{n+1}{2}}\,\frac{(n-1)(n-3)\cdots 1}{2^{n/2}}\,\sqrt{\pi},$$ where the given factorial identity is extended to half-integer arguments via the Gamma function; in particular $\left(\frac{1}{2}\right)! = \Gamma\!\left(\frac{3}{2}\right) = \frac{\sqrt\pi}{2}$.
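A spot check of the even-$n$ formula (a sketch assuming scipy is available; $a=2$ is an arbitrary choice):

```python
import math
from scipy.integrate import quad

a = 2.0
for n in (0, 2, 4, 6):
    numeric, _ = quad(lambda t: t**n * math.exp(-t*t/a), -math.inf, math.inf)
    closed = a**((n + 1) / 2) * math.gamma((n + 1) / 2)
    print(n, numeric, closed)   # the two columns agree
```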
H: If $f(x)$ is continuous on $[0,1], \text{ and } 0\leq f(x)\leq1, \forall x \in [0,1], \text{ prove } \exists t \in [0,1] \text{ s.t. } f(t) = t$ My thinking is that $f(x)$ has to intersect with function $g(x) = x$ at some point, but I don't know how to prove this. AI: Guide: $f$ is continuous, $g$ is continuous, hence $h=f-g$ is continuous. Argue that $h(0)$ is nonnegative and $h(1)$ is nonpositive and you can use intermediate value theorem.
H: Use eigenvectors and eigenvalues for transformation matrix of quadratic form. I'm really confused by a task I have been given: We are looking at a quadratic form $Q(\vec{x})=\vec{x}^TA\vec{x}$ where the symmetric matrix $A = A^T$ has normalized eigenvectors $\vec{v_i}$ with corresponding eigenvalues $\alpha_i$. So far, I understand every component. But then we are asked to use $\vec{v_i}$ and $\alpha_i$ to compute new coordinates $x_i^{\prime}=B_{ij}x_j$ (with an orthogonal change of basis $B$) and coefficients $\lambda_i$ such that $Q^{\prime}(\vec{x}^{\prime})= \lambda_1 (x_1^{\prime})^2+...+\lambda_n (x_n^{\prime})^2$ with $Q(\vec{x})=Q^{\prime}(\vec{x}^{\prime})$. Even though I know about the quadratic form as well as about the eigenvalue problem, I really can't figure out what's being asked in this task. I'd appreciate any help or hint. AI: This is the main idea: if $[x]_{\mathcal B} = (x_1',\dots,x_n')$ denotes the coordinate-vector of $x = (x_1,\dots,x_n)$ relative to the basis $\mathcal B = \{v_1,\dots,v_n\}$ (which is to say that $x = x_1' v_1 + \cdots + x_n' v_n$), then we have $$ Q(x) = Q'(x') = \alpha_1 (x_1')^2 + \cdots + \alpha_n (x_n')^2, $$ so the coefficients are $\lambda_i = \alpha_i$. Accordingly, answer the question by finding an orthogonal matrix $B$ for which $x_i' = \sum_j B_{ij}x_j$, where $x_i'$ denotes the $i$th entry of $[x]_{\mathcal B}$ as defined above. Regarding the solution: we want a matrix $B$ for which $[x]_{\mathcal B} = Bx$. Let $M$ denote the matrix whose columns are $v_1,\dots,v_n$. We find that this matrix solves the "opposite" problem. In particular, it's easy to show that we have $M[x]_{\mathcal B} = x$. If we solve for $[x]_{\mathcal B}$, we see that $[x]_{\mathcal B} = M^{-1}x$. However, because $M$ is orthogonal (has orthonormal columns), it's easy to find the inverse of $M$; we have $M^{-1} = M^T$, the transpose of $M$. Long story short: $B$ is the matrix whose rows are $v_1,\dots,v_n$.
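A numerical illustration of the change of variables (a sketch with numpy; the random $4\times 4$ matrix is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
A = (A + A.T) / 2                        # symmetric A
alpha, M = np.linalg.eigh(A)             # columns of M: orthonormal eigenvectors
B = M.T                                  # orthogonal change of basis, x' = B x
x = rng.normal(size=4)
xp = B @ x
print(x @ A @ x, np.sum(alpha * xp**2))  # equal up to rounding: Q(x) = sum alpha_i (x_i')^2
```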
H: $\sum_{n=0}^\infty\frac{H_n(x)H_n(y)t^n}{2^nn!}$=$\frac{\exp\left[\frac{2xyt-(x^2+y^2)t^2}{1-t^2}\right]}{\sqrt{1-t^2}}$ I am told to prove that : $$\sum_{n=0}^\infty\frac{H_n(x)H_n(y)t^n}{2^nn!} = \frac{\exp\left[\frac{2xyt-(x^2+y^2)t^2}{1-t^2}\right]}{\sqrt{1-t^2}}$$ where $H_n(x)$ is the Hermite polynomial. I am wondering how to prove it. Please help me prove this. Thanks in advance! AI: Hint: Try to use the generating function for Hermite polynomials, given by $$\exp(2xt-t^2) = \sum_{n=0}^{\infty} H_n(x) \frac{t^n}{n!}$$ EDIT: As suggested by OP, I am posting the link to the solution here. The full derivation is mentioned there, with the final expression in Eq. 18 after the simple substitution $t \to \frac{t}{2}$ in that equation to obtain the result here.
H: Let $x$ be a non zero vector in the complex vector space $\mathbb C^n$ and $A=xx^H$. Find all the eigenvalues and their eigenspaces. Let $x$ be a non zero vector in the complex vector space $\mathbb C^n$ and $A=xx^H$. Find all the eigenvalues and their eigenspaces. [where $x^H=$ conjugate transpose of $x$] Here the rank of $A$ is $1$ and $A$ is a Hermitian matrix, so all the eigenvalues are real. Since rank $<n$, at least one eigenvalue is $0$. Now considering the null space of $A$, it has dimension $(n-1)$. So the algebraic multiplicity of $0$ is either $(n-1)$ or $n$. But I can not proceed further. Are there any other eigenvalues, and what are their eigenspaces? AI: Note that $\lambda = x^{H}x$ is an eigenvalue associated to the eigenvector $x$, since $Ax = x(x^Hx) = \lambda x$. Next note that $A$ satisfies $f(t) = t^{2} - \lambda t$, so every eigenvalue is a root of $f$. Since $x \neq 0$ gives $\lambda = x^Hx > 0$, the eigenvalues are exactly $\lambda$, with eigenspace $\operatorname{span}\{x\}$, and $0$, with eigenspace the orthogonal complement $\{v \in \mathbb C^n : x^Hv = 0\}$ of dimension $n-1$.
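A quick numerical confirmation (a sketch with numpy; the vector is a random example):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4) + 1j * rng.normal(size=4)
A = np.outer(x, x.conj())              # A = x x^H, Hermitian of rank 1
print(np.linalg.eigvalsh(A))           # three zeros and x^H x, in ascending order
print((x.conj() @ x).real)             # the nonzero eigenvalue x^H x
```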
H: Question About the Definition of a $\mathbb{C}-$algebra In the 'Quick review of commutative algebra' chapter a $\mathbb{C}$-algebra is defined as a commutative ring that contains $\mathbb{C}$ as a subset. I don't understand whether these coordinate rings are vector spaces, nor whether a variety is supposed to be a vector space. I was thinking of functions on a circle such that $x^{2}+y^{2}-1 = 0$, but I can't tell how $\mathbb{C}$ is a subset in the range of the functions restricted to the circle. I mean how can a point such as $(0,1)$ be defined to go to the entire complex plane such as $1+i$, and $0+i$, and $1+0i$ and everything? AI: Let $k$ denote a field. A $k$-algebra is a ring $A$ equipped with an inclusion map $k\to A$. This gives $A$ a structure of a ring equipped with a compatible $k$-vector space structure. In this case, we are taking $k=\mathbb{C}$. If you have an affine variety $X\subseteq \mathbb{A}^n_{\mathbb{C}}$, say with ideal $\mathfrak{a}=(f_1,\ldots, f_k)\subseteq \Bbb{C}[x_1,\ldots, x_n]$, then it has affine coordinate ring $A(X)=\Bbb{C}[x_1,\ldots, x_n]/\mathfrak{a}.$ As long as $\mathfrak{a}$ is a proper ideal, it cannot contain $\mathbb{C}\subseteq \mathbb{C}[x_1,\ldots, x_n]$, and it follows that there is also a copy of $\mathbb{C}$ embedded in $A(X)$ by the composition $$ \mathbb{C}\to \mathbb{C}[x_1,\ldots, x_n]\to A(X).$$ It follows that $A(X)$ is a $\mathbb{C}$-algebra. As mentioned in the comments above, the correct way to think about this is as $A(X)$ describing the polynomial functions on $X$, and $\mathbb{C}$ being included as the constant functions on $X$. That is, $\lambda \in \mathbb{C}\subseteq A(X)$ corresponds to a function $f$ on $X$ so that for all $x\in X$, $f(x)=\lambda$.
H: Investigate convergence of a series using the comparison test. The series is given by $a_n = (n^{1/3}-(n-1)^{1/3})/n^{1/2}$. I tried comparing it to $1/n^2$ and $1/n^{3/2}$ because those definitely converge, but proving the inequality gives rise to pretty complicated polynomials of high degree, and I can't seem to find an elegant proof that this series converges. AI: HINT: Using $\displaystyle a-b=\frac{a^3-b^3}{a^2+ab+b^2}$ with $a=n^{1/3}$ and $b=(n-1)^{1/3}$, we have for $n\ge 1$ $$\begin{align} n^{1/3}-(n-1)^{1/3}&=\frac{1}{n^{2/3}+n^{1/3}(n-1)^{1/3}+(n-1)^{2/3}}\\\\ &\le \frac{1}{n^{2/3}} \end{align}$$
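To finish the comparison explicitly: the hint gives $$0\le a_n\le \frac{1}{n^{2/3}\,n^{1/2}}=\frac{1}{n^{7/6}},$$ and $\sum 1/n^{7/6}$ converges as a $p$-series with $p=7/6>1$, so the original series converges by comparison.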
H: An unbounded function whose square is uniformly continuous on $\mathbb R$ I want an example of a continuous function $f:\mathbb R \to [0, \infty)$ which is unbounded but such that $f^2$ is uniformly continuous. However much I try I am able to get only bounded functions like $\sin x$. We have $\sin^2 x$ is uniformly continuous on $\mathbb R$ because its derivative is bounded on $\mathbb R$. But how do I get an unbounded function satisfying the hypothesis? Please help. AI: Any smooth unbounded function $f(x)$ such that $f(x)f'(x)$ is bounded will do. Proof: The derivative of $f(x)^2$ is $2f(x)f'(x),$ which is bounded. It is well known that any differentiable function with bounded derivative is Lipschitz, hence is uniformly continuous. For example $f(x) = \ln (1+x^2)$ has this property.
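Checking the suggested example explicitly: with $f(x)=\ln(1+x^2)$ we get $$f(x)f'(x)=\frac{2x\ln(1+x^2)}{1+x^2},$$ which is continuous and tends to $0$ as $x\to\pm\infty$, hence bounded on $\mathbb R$; and $f\ge0$ is clearly unbounded, as required.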
H: Find if a series converges (series containing complex numbers) Here I try to determine whether the series converges: $$\sum^{\infty}_{k=1} \frac{(z-2i)^{3k}}{k^3 2^k}$$ $$\left|\frac{a_{k+1}}{a_k}\right| =\left| \frac{(z-2i)^{3(k+1)} 2^{k} k^3}{(k+1)^32^{k+1}(z-2i)^{3k}}\right| = \frac{1}{2} \left|\frac{z-2i}{k+1}\right|^3 \left|\frac{k}{(z-2i)^3} \right|^k =\frac{1}{2(k+1)} \left(\frac{k}{|z-2i|}\right)^k $$ It cannot be less than 1 for large $k$, so I don't know what to do. AI: You made some errors in your simplification. A corrected version is as follows. $$\begin{aligned} \left|\frac{a_{k+1}}{a_k}\right| &= \left| \frac{(z-2i)^{3(k+1)}2^k k^3}{(k+1)^3 2^{k+1}(z-2i)^{3k}} \right| \\ &= \frac{1}{2} \left| \frac{k}{k+1}\right|^3 \left| z-2i \right|^3 \end{aligned}$$ Therefore $$\lim_{k\to \infty} \left|\frac{a_{k+1}}{a_k}\right| = \frac{1}{2}|z-2i|^3$$ So convergence holds if $|z-2i|^3 < 2$, i.e. for $|z-2i|<2^{1/3}$. (On the boundary $|z-2i|=2^{1/3}$ the terms have modulus $2^k/(k^3 2^k)=1/k^3$, so the series even converges absolutely there.)
H: Why any integer $n$ can only have one prime factor greater than $\sqrt{n}$? I know the proof that for a composite number $n$, there is at least one prime factor less than or equal to $\sqrt{n}$ but I don't know how to prove this following statement: Any number $n$ can have only one prime factor greater than $\sqrt{n}$. So is there a connection between these two statements? How do you prove the second statement? AI: Assume the contrary: $n=p_1p_2r$ where $p_1,p_2>\sqrt{n}$, and $r$ contains all other factors in $n$. Then let $t=p_1p_2>\sqrt{n}\cdot \sqrt{n}=n$ Then $n=tr>n$. This is a contradiction, so the assumption must be false.
H: How does $H$ act on $G^t$ in the wreath product $G^t \wr H$? I'm reading this expository paper about group theory in the Rubik's cube. I'm a little confused by the definition of the wreath product in this paper. Example 3.12 on page 12 states that the elements of the wreath product $(\mathbb{Z}/2\mathbb{Z})^3 \wr S_3$ for the set X={0, 1, 2} are {$(0,0,0) \rho, (1,0,0) \rho, (0, 1, 0) \rho, (0, 0, 1) \rho, (1, 1, 0) \rho, (0, 1, 1) \rho, (1, 0, 1) \rho, (1, 1, 1) \rho$} where $\rho$ is an element of $S_3$. My question is: is $\rho$ just an arbitrary element in $S_3$, or is $\rho$ somehow specially chosen? I don't understand how $S_3$ acts on $(\mathbb{Z}/2\mathbb{Z})^3$ in the same way as $S_3$ acts on the set X. More generally, I don't understand how, for a wreath product $G^t \wr H$ where $H$ acts on a set of size $t$, $H$ acts on $G^t$. Specifically, which elements of $H$ are acting on $G^t$? P.S. I'd really appreciate it if answers were geared towards a high school student with little formal abstract algebra background. Thank you. :) AI: $S_3$ acts by permuting the entries of a $3$-tuple in $\newcommand{\Z}{\mathbb{Z}} \DeclareMathOperator{\Aut}{Aut} (\mathbb{Z}/2\mathbb{Z})^3$. In symbols: given $\pi \in S_3$, $$ \pi \cdot (a_1, a_2, a_3) = (a_{\pi^{-1}(1)}, a_{\pi^{-1}(2)}, a_{\pi^{-1}(3)}) \, . $$ (The inverse is just there for bookkeeping, to make sure this defines a left-, rather than right-, action.) So for instance, taking $\pi = (1\ 2)$ and $(a_1, a_2, a_3) = (0,1,0)$, then $\pi^{-1} = \pi$ and $$ \pi \cdot (a_1, a_2, a_3) = (a_2, a_1, a_3) = (1,0,0) \, . $$ This action induces a homomorphism $\varphi: S_3 \to \Aut((\Z/2\Z)^3)$, which allows us to define the semidirect product $(\Z/2\Z)^3 \rtimes_\varphi S_3$. As a set $(\Z/2\Z)^3 \rtimes_\varphi S_3$ is just the cartesian product $(\Z/2\Z)^3 \times S_3$, so as a set $$ (\Z/2\Z)^3 \rtimes_\varphi S_3 = \{((a_1, a_2, a_3), \rho) : a_1, a_2, a_3 \in \Z/2\Z, \rho \in S_3\} \, . $$ However, the multiplication is "twisted" by the action of $S_3$: \begin{align*} ((a_1, a_2, a_3), \pi) \ ((b_1, b_2, b_3), \rho) &= ((a_1, a_2, a_3) + \pi \cdot (b_1, b_2, b_3), \pi \rho)\\ &= ((a_1, a_2, a_3) + (b_{\pi^{-1}(1)}, b_{\pi^{-1}(2)}, b_{\pi^{-1}(3)}), \pi \rho)\\ &= ((a_1 + b_{\pi^{-1}(1)}, a_2 + b_{\pi^{-1}(2)}, a_3 + b_{\pi^{-1}(3)}), \pi \rho) \, . \end{align*} I only defined the wreath product for $S_t$, but the same thing works for any subgroup of $S_t$, or more generally for any group $H$ with a homomorphism $H \to S_t$. (This is what it means for a group to act on a set of size $t$.)
H: Defining a complex polynomial with certain roots. How can I define a complex polynomial of the form $p(z)=a_{2}z^2 + a_{1}z + a_{0}$, with $a_{0}$ real, whose roots are $z_{1}=1-i$ and $z_{2}=-1+i$? AI: In a quadratic polynomial $p(z)=a_2z^2+a_1z+a_0$ with roots $z_1,z_2$, we have that $\frac{a_0}{a_2}=z_1 \cdot z_2$ and $-\frac{a_1}{a_2}=z_1+z_2$. So we have the following equations $$ \frac{a_0}{a_2}=z_1 \cdot z_2=(1-i)(-1+i)=2i \\ -\frac{a_1}{a_2}=z_1+z_2=(1-i)+(-1+i)=0 $$ Hence $a_1=0$ and we are left with $\frac{a_0}{a_2}=2i$. Since we want $a_0$ real, we can choose $a_0=2, a_2=-i$ and thus $p(z)=-iz^2+2$.
H: Lebesgue measure/integral problem - where do I go wrong? Let $f\geq0$ be a bounded function supported in a measurable set $E$ with $m(E)<\infty$. Show that if $\int_E f=0$, then $m(E)=0$. My proof: Let $E_\epsilon=\{x\in E:f(x)\geq\epsilon\}$. Then for any $\epsilon>0$, $$ 0=\int_Ef\geq\int_{E_\epsilon}f\geq\int_{E_\epsilon}\epsilon=\epsilon m(E_\epsilon)\geq 0. $$ Since $\epsilon>0$, we must have $m(E_\epsilon)=0$. Finally, $E=\cup_{k\geq1}E_{1/k}$, so $m(E)\leq\sum_{k\geq 1}m(E_{1/k})=0$. Silly counterexample: Take $f\equiv 0$ on the interval $[0,1]$. Then $\int_{[0,1]} f=0$ but $m([0,1])=1$. Clearly I am missing some sort of minor detail here, but I cannot figure it out. What am I doing wrong in my proof? It seems like this problem is not true as stated (because of my counterexample), so I want to know why my proof is wrong. EDIT: Is it because $E\not=\cup_{k\geq 1}E_{1/k}$? For example, if $f(x)=0$, then $x\not\in E_\epsilon$ for any $\epsilon>0?$ I suppose this proof becomes true if $f>0$ for all $x\in E$, for then there will always exist some $\epsilon>0$ for which $x\in E_\epsilon$. AI: The problem is that we need not have $E=\bigcup_{k\geq 1} E_{1/k}.$ Instead, we have $$\{x\in E: f(x)>0\} = \bigcup_{k\geq 1} E_{1/k}.$$ Note the strict inequality in the LHS rather than $\geq.$ Hence, the problem is true if we assume $f>0$ on $E$, rather than $f\geq 0$.
H: If $f(x)$ is continuous on $(a,b), \text{ and } x_1,x_2 \in (a,b), m_1,m_2 > 0$ $$\text{If } f(x) \text{ is continuous on } (a,b), \text{ and } x_1,x_2 \in (a,b), m_1,m_2 > 0$$ $$ \text{ prove }\exists c \in (a,b) \text{ s.t. } f(c) = \frac{m_1f(x_1)+m_2f(x_2)}{m_1 + m_2}$$ I need a hint on how to prove this. AI: $f$ is continuous on $[x_1,x_2]$, so $$f([x_1,x_2])=[m,M]$$ with $$m\le f(x_1)\le M,\qquad m\le f(x_2)\le M.$$ Multiplying by $m_1>0$ and $m_2>0$ respectively, $$m_1m\le m_1f(x_1)\le m_1M,\qquad m_2m\le m_2f(x_2)\le m_2M,$$ and adding, $$m(m_1+m_2)\le m_1f(x_1)+m_2f(x_2)\le M(m_1+m_2),$$ $$m\le \frac{m_1f(x_1)+m_2f(x_2)}{m_1+m_2}\le M.$$ Thus by the Intermediate Value Theorem there exists $c\in [x_1,x_2]\subset(a,b)$ such that $$\frac{m_1f(x_1)+m_2f(x_2)}{m_1+m_2}=f(c)$$
H: Proof that there exists a unique $x^* \in X$ such that $T(x^*) = x^*$. Suppose $(X, \rho)$ is a complete metric space, and suppose the function $T : (X, \rho) \rightarrow (X, \rho)$ is such that $T_n = T \circ T \circ \cdots \circ T$ ($n$ times) is a contraction map for some $n \ge 2$. Prove that there exists a unique $x^* \in X$ such that $T(x^*) = x^*$. My attempt: Since $T_n$ is a contraction, the Contraction Mapping Theorem implies that there is a unique $x^* \in X$ such that $T_n(x^*) = x^*$. Then observe that $$T(x^*) = T(T_n(x^*)) = T_{n+1}(x^*) = T_n(T(x^*)).$$ So now that we have derived that $T(x^*) = T_n(T(x^*))$, can we say $T(x^*) = x^*$ must be true since $x^*$ is the only solution s.t. $T_n(x^*) = x^*$, or is there more to be said about $T(x^*) = T_n(T(x^*))$ that leads to the claim that there exists a unique $x^* \in X$ such that $T(x^*) = x^*$? AI: This almost finishes your attempt: We had earlier that $T_n(x^*) = x^*$. Now we have $T_n(T(x^*)) = T(x^*)$, i.e. $y = T(x^*)$ is also a solution to the equation $T_n(x) = x$. By uniqueness, $y = x^*$, i.e. $T(x^*) = x^*$. So yes, we can say $T(x^*) = x^*$ is true by the uniqueness guaranteed by the contraction mapping theorem applied to $T_n$. Note that we haven't shown that $T(x) = x$ has a unique solution! This is the final part of the proof, but this should be easy. Addendum: final part of the proof. We had $T(x^*) = x^*$. Now, suppose $\alpha \in X$ is also a solution to $T(x) = x$, i.e. $T(\alpha) = \alpha$. Then $$T_n(\alpha) = T_{n-1}(T(\alpha)) = T_{n-1}(\alpha) = T_{n-2}(T(\alpha)) = T_{n-2}(\alpha) = \ldots = T(\alpha) = \alpha.$$ Now we have $T_n(\alpha) = \alpha$ and also $T_n(x^*) = x^*$ from earlier. By the uniqueness guaranteed by the contraction mapping theorem on $T_n$, we must have $\alpha = x^*$. So indeed, $T(x) = x$ has a unique solution.
H: Unit circle coordinates to pi. If there are two coordinates of a unit circle, e.g. $x=0$, $y=1$, I know this is $\frac{\pi}{2}$. How can I calculate the angle as a multiple of pi for any two coordinates, even if they are not placed on the unit circle, like $x=1.23$, $y=-0.1$? AI: If I am understanding correctly, you can take $\text{atan2}(y, x)$ (the two-argument arctangent) in radians, where the atan2 function is defined in this link: atan2 link (look under the heading Definition and computation). So with the numbers you have above: $\text{atan2}(-0.1,\ 1.23) \approx -0.0811 \text{ radians}$. To get the number of pi radians, we divide by $\pi$: $-0.0811= \pi x$ implies $x \approx -0.0258$, so the angle you want is approximately $-0.0258\pi$.
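If you want to compute this numerically, most languages expose the two-argument arctangent directly; a quick sketch in Python's standard library, using the coordinates from the question:

```python
import math

x, y = 1.23, -0.1
theta = math.atan2(y, x)   # angle in radians, in (-pi, pi]
print(theta)               # ~ -0.0811
print(theta / math.pi)     # ~ -0.0258, i.e. the angle is about -0.0258*pi
```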
H: Cauchy product of two formal power series. I am wondering if I could get help with the following question: Given a formal power series $$g(z)=\sum_{i=0}^\infty a_i z^{-i}$$ does there always exist another (non-trivial) formal power series $y(z)$, such that the Cauchy product between $y$ and $g$ $$ y \times g= \sum_{k=0}^\infty c_k z^{-k} $$ satisfies $$ \sum_{k=0}^\infty |c_k| < \infty? $$ AI: (Writing $w=z^{-1}$, we can treat $g$ as an ordinary formal power series $g=\sum_{n=0}^\infty a_n w^n$; nothing changes formally.) If the constant term $a_0$ of $g$ is nonzero, then $g$ will be invertible: that is, there is some $y$ such that $y \times g=1$. If $g$ is nonzero (but we don't make any assumptions about its constant term), let $a_k$ be the first nonzero coefficient of $g$. We then have \begin{align*} g(w)&=\sum_{n=0}^\infty a_nw^n\\ &=\sum_{n=k}^\infty a_nw^n&&\text{(because all the prior terms are zero)}\\ &=w^k\sum_{n=k}^\infty a_nw^{n-k}&&\text{(factoring out the common factor of }w^k\text{ from each term)}\\ &=w^k\sum_{n=0}^\infty a_{n+k}w^n&&\text{(relabeling).} \end{align*} The sum in the last line is a power series with nonzero constant term $a_k$. That is, we can write $g=w^kh$, where $h$ is a power series with nonzero constant term. So if we take $y$ to be the multiplicative inverse of $h$, then $y \times g=w^k$. Finally, if $g=0$, we can take $y$ to be anything we want and have $y \times g=0$. So, for any power series $g$, we can find $y$ such that in $y \times g$ only finitely many of the $c_k$ (in fact, at most one of the $c_k$!) are nonzero, which means that the sum of their absolute values must converge.
H: Showing that $y=P\cos(\ln(t))+Q\sin(\ln(t))$ satisfies $t^2\frac{d^2y}{dt^2}+t\frac{dy}{dt}+y=0$ Show that, if $P$ and $Q$ are constants and $$y = P\cos(\ln(t)) + Q\sin(\ln(t))$$ then $$t^2\frac{d^2y}{dt^2}+t\frac{dy}{dt}+y=0$$ AI: $P$ and $Q$ are constants, thus by chain-rule differentiation, $$y'=-P\sin(\ln(t))\frac 1t +Q\cos(\ln(t))\frac 1t$$ $$y''=\frac{1}{t^2}\Big(-P\cos(\ln(t))+P\sin(\ln(t))-Q\sin(\ln(t))-Q\cos(\ln(t))\Big)$$ Hence $$t^2y''=-\big(P\cos(\ln(t))+Q\sin(\ln(t))\big)-\big(-P\sin(\ln(t))+Q\cos(\ln(t))\big)=-y-ty',$$ which rearranges to $t^2y''+ty'+y=0$.
H: "Inverse" moment generating function of standard normal distributed random variable This is just a trivial question maybe but, is the Moment generating function for $X$ the same as for $-X$ for a normally distributed random variable, so $E(e^{tX})=E(e^{-tX})$? If not, what is the difference between them? AI: If $$X \sim \operatorname{Normal}(\mu = 0, \sigma^2),$$ then yes, $\operatorname{E}[e^{tX}] = \operatorname{E}[e^{-tX}]$. Otherwise, it is not true. We can perform the computation explicitly: $$\begin{align*} \operatorname{E}[e^{tX}] &= \int_{x=-\infty}^\infty e^{tx} \frac{e^{-x^2/(2\sigma^2)}}{\sqrt{2\pi} \sigma} \, dx \\ &= \int_{x=-\infty}^\infty \frac{1}{\sqrt{2\pi} \sigma} e^{-(x^2 - 2\sigma^2 t x + (\sigma^2 t)^2)/(2\sigma^2)} e^{\sigma^2 t^2/2} \, dx \\ &= e^{\sigma^2 t^2/2} \int_{x=-\infty}^\infty \frac{e^{-(x-\sigma^2 t)^2/(2\sigma^2)}}{\sqrt{2\pi}\sigma} \, dx \\ &= e^{\sigma^2 t^2/2}, \end{align*}$$ since the last integrand is the density of a normal distribution with mean $\sigma^2 t$ and variance $\sigma^2$, thus integrates to $1$. It follows that $$\operatorname{E}[e^{-tX}] = e^{\sigma^2 (-t)^2/2} = e^{\sigma^2 t^2/2} = \operatorname{E}[e^{tX}].$$
H: Is $\mathbb{R}$ a vector space over $\mathbb{R}$? It might be an interesting question before studying the concept of orientation on $\mathbb{R}$ as it is studied on $\mathbb{R^2}$ & $\mathbb{R^3}$ AI: Any ring $R$ is an $R$-module via its intrinsic multiplication. So in the case when $R$ is a field, $R$ is a vector space over itself. In particular, $\mathbb{R}$ is a vector space over itself that is one-dimensional, generated, for example, by $1$. Similarly, $\mathbb{C}$ is a $\mathbb{C}$ vector space of dimension one and $\mathbb{F}_2$ is a vector space over $\mathbb{F}_2$ of dimension one.
H: Quotienting $\Bbb{Q}$ by a fractional ideal, what happens? Consider arithmetic $\pmod N, \ N \in \Bbb{N}$. Now suppose there is $a \in \Bbb{Z}/N$ such that $(a,N) = 1$. Can we consider arithmetic in $R:=\Bbb{Q}/(N/a)\Bbb{Z}$ equivalent in some way to arithmetic in $\Bbb{Z}/N$? If we're in $R$ then equality $x = y \iff (N/a) \mid (x - y) \iff (x-y)/(N/a) \in \Bbb{Z} \iff ax - ay \in N \Bbb{Z}$. Thus you see there is a way to get back to $\Bbb{Z}/N$, and things are equal there iff they are equal in $R$. So is $R$ finite? AI: For any non-zero rational $x$, $$\Bbb{Q/xZ\cong x^{-1}(Q/xZ)=x^{-1} Q/Z=Q/Z}$$ and $\Bbb{Z/nZ\cong n^{-1}Z/Z}$ is a finite subgroup of it. In particular $R=\Bbb Q/(N/a)\Bbb Z\cong\Bbb{Q/Z}$ is not finite: it contains the infinitely many distinct elements $\frac1m+\Bbb Z$ for $m\ge1$.
H: Eigenpair for $\sin(A)$ I don't know how to get the eigenpairs of $\sin (A)$. AI: Let's prove that the eigenpairs of $A$ and $\sin A$ correspond in the sense that if $(\lambda, v)$ is an eigenpair of $A$ then $(\sin\lambda, v)$ is an eigenpair of $\sin A$. (1) Since $A$ is a symmetric matrix, it is diagonalizable; that is, there exist a diagonal matrix $D$ and an orthogonal matrix $Q$ such that $A= QDQ^T$. Writing the Taylor series for $\sin$, we have: $\sin A= A-A^3/3!+A^5/5!-\cdots=QDQ^T-(QD^3Q^T)/3!+(QD^5Q^T)/5!-\cdots=Q(D-D^3/3!+D^5/5!-\cdots)Q^T=QEQ^T$ where $E=D-D^3/3!+D^5/5!-\cdots$. Note that $E$ is also a diagonal matrix, with $\sin\lambda_i$ on the diagonal. The columns of $Q$ (say $q_1,q_2,q_3$ in the $3\times3$ case) are orthonormal eigenvectors. Hence, $(\sin A) Q= QE\implies \sin A[q_1\; q_2 \;q_3]=[q_1\sin\lambda_1\;q_2\sin \lambda_2\;q_3\sin\lambda_3]$. By comparing columns on both sides, (1) follows.
H: Bound probability of deviation of norm For $X, Y$ iid, show that: $$P(\|X\| > t) \leq 3 P(\|X + Y\| >2t/3)$$ for any norm $\|\cdot\|$ and $t >0$. The proof I've seen is just two steps, but I am not understanding the argument leading to the inequality: $$P(\|X\| > t) = P(\|(X + Y) + (X + Z) - (Y + Z)\| > 2t) \leq 3 P(\|X + Y\| >2t/3)$$ where $Z$ is also distributed as $X$ and $Y$ and independent of them. This seems like an application of the triangle inequality, but I haven't been able to get the details right. AI: The first equality follows from the identity $2X=(X+Y)+(X+Z)-(Y+Z)$. The inequality follows from the following: If $$\|X+Y\| \leq \frac {2t} 3,$$ $$\|X+Z\| \leq \frac {2t} 3$$ and $$\|Y+Z\| \leq \frac {2t} 3,$$ then $$\|(X+Y)+(X+Z)-(Y+Z)\| \leq \frac {2t} 3+\frac {2t} 3+\frac {2t} 3= 2t,$$ which gives $(\|(X+Y)+(X+Z)-(Y+Z)\| > 2t) \subseteq (\|X+Y\| > \frac {2t} 3) \cup (\|X+Z\| > \frac {2t} 3) \cup (\|Y+Z\| > \frac {2t} 3)$
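To complete the argument: taking probabilities and applying the union bound, $$P(\|X\|>t)=P\big(\|(X+Y)+(X+Z)-(Y+Z)\|>2t\big)\le P\Big(\|X+Y\|>\tfrac{2t}3\Big)+P\Big(\|X+Z\|>\tfrac{2t}3\Big)+P\Big(\|Y+Z\|>\tfrac{2t}3\Big),$$ and since $X,Y,Z$ are iid, the pairs $(X,Y)$, $(X,Z)$, $(Y,Z)$ all have the same joint distribution, so each of the three terms equals $P(\|X+Y\|>2t/3)$.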
H: Number of ideals in a quotient ring, and its maximal ideals. Let $f(X)=(X^2-2)(X^4-X)$ and $g(X)=(X^2-1)X\in \mathbb{Q}[X]$. Let $I=(f,g)$ be the ideal generated by $f$ and $g$. 1) How many ideals does $\mathbb{Q}[X]/I$ have? 2) What are the maximal ideals? I have already computed that $I=(X(X-1))$ and proved that $\mathbb{Q}[X]/I\cong \mathbb{Q}\times \mathbb{Q}$, where $\mathbb{Q}$ is the field of rational numbers. But my knowledge of ring theory is a bit poor and I don't see how to answer the questions. I have also noted that $\mathbb{Q}\times \mathbb{Q}$ is not a field, so $(f,g)$ is not maximal, but I don't know if that is useful. Any hints will be appreciated. Note: In my notes I am asked to determine the number of ideals of $\mathbb{Q}[X]/(fg)$, but I think that it is a misprint, since in the same exercise I was asked to compute $I$ and prove the isomorphism. AI: Assuming you already have $\mathbb{Q}[x]/\langle x \rangle \cong \mathbb{Q}$ and $\mathbb{Q}[x]/\langle x-1 \rangle \cong \mathbb{Q}$. Your observation that $I$ is not a maximal ideal of $\mathbb{Q}[x]$ because $\mathbb{Q} \times \mathbb{Q}$ is not a field is correct. But I don't think the question is asking for the maximal ideals of $\Bbb{Q}[x]$. Observe that $\Bbb{Q}[x]$ is a PID (because $\mathbb{Q}$ is a field), so every ideal of $\Bbb{Q}[x]$ is of the form $\langle p(x) \rangle$ for some polynomial $p(x)$. Thus for ideals of $\Bbb{Q}[x]/I$ you should be looking at ideals of the form $\langle p(x) \rangle/ I$. This would mean that $$I \subseteq \langle p(x) \rangle \quad \iff \quad x(x-1) \in \langle p(x) \rangle \quad \iff \quad p(x) \mid x(x-1).$$ Thus all ideals of $\Bbb{Q}[x]/I$ are as follows: $$\Bbb{Q}[x]/I, \quad \langle x-1 \rangle/I, \quad \langle x \rangle/I, \quad I/I$$ In all there are $4$ ideals of this ring: two trivial ones and two non-trivial ones. The Third Isomorphism Theorem gives that for ideals $I,J$ such that $I \subseteq J$, we have $$(R/I)\big/(J/I) \cong R/J.$$ Thus we have $$\left(\Bbb{Q}[x]/I\right) \big/ \left(\langle x-1 \rangle/I\right) \cong \Bbb{Q}[x]/\langle x-1 \rangle \cong \Bbb{Q}.$$ Since $\Bbb{Q}$ is a field, this ideal is maximal. Likewise you can show the other non-trivial ideal to be maximal as well.
H: If $A \in L(R^n,R^m)$, then $\|A\| < \infty$ and $A$ is a uniformly continuous mapping of $R^n$ into $R^m$. Definition: for $A \in L(R^n,R^m)$, define the norm $\|A\|$ of $A$ to be the sup of all numbers $|Ax|$, where $x$ ranges over all vectors in $R^n$ with $|x| \leq 1$. Observe that $|Ax| \leq \|A\|\, |x|$ holds for all $x \in R^n$. If $\lambda$ is such that $|Ax| \leq \lambda |x|$ for all $x \in R^n$, then $\|A\| \leq \lambda$. Theorem: If $A \in L(R^n,R^m)$, then $\|A\| < \infty$ and $A$ is a uniformly continuous mapping of $R^n$ into $R^m$. Proof: Let $\{e_1,\ldots,e_n\}$ be the standard basis in $R^n$ and suppose $x=\sum c_i e_i$, $|x| \leq 1$, so that $|c_i| \leq 1$ for $i=1,\ldots,n$. Then $$|Ax|=\left| \sum c_i A e_i\right| \leq \sum |c_i| |A e_i| \leq \sum |A e_i|$$ so that $$\|A\| \leq \sum_{i=1}^n |A e_i| < \infty.$$ Since $$|Ax-Ay| \leq \|A\|\, |x-y|$$ if $x,y \in R^n$, the statement is proved. Can someone explain how the inequality $|Ax-Ay| \leq \|A\|\,|x-y|$ proves that the mapping is uniformly continuous, and what does $\lambda$ mean in this context? This is from Rudin's book, by the way. AI: If $A=0$ then it is uniformly continuous, so I will assume that $A \neq 0$. If $\epsilon >0$ is given, take $\delta =\frac {\epsilon} {\|A\|}$. Then $\|Ax-Ay\| \leq \|A\| \|x-y\|<\epsilon$ whenever $\|x-y\| <\delta$. By definition this shows that $A$ is uniformly continuous. $\lambda$ is just any positive number satisfying $|Ax|\le\lambda|x|$ for all $x$; the remark in the definition says that $\|A\|$ is the smallest such bound.
H: Prove the inequality $\|Z\|_2\le \|Z\|_1^{1/4} \|Z\|_3^{3/4}$ Prove $\|Z\|_2\le \|Z\|_1^{1/4} \|Z\|_3^{3/4}$ for a random variable $Z$. AI: By Hölder's inequality (with exponents $p=q=2$, i.e. the Cauchy-Schwarz inequality), $$\mathbb{E}[|Z|^2]=\mathbb{E}[|Z|^{1/2}|Z|^{3/2}]\le (\mathbb{E}|Z|)^{1/2}(\mathbb{E}|Z|^3)^{1/2}$$ Rearrange and then it is proved.
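Spelling out the rearrangement: since $(\mathbb E|Z|)^{1/2}=\|Z\|_1^{1/2}$ and $(\mathbb E|Z|^3)^{1/2}=\|Z\|_3^{3/2}$, the display reads $$\|Z\|_2^2\le \|Z\|_1^{1/2}\,\|Z\|_3^{3/2},$$ and taking square roots gives $\|Z\|_2\le \|Z\|_1^{1/4}\|Z\|_3^{3/4}$.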
H: Derivative of $\text{Tr}[B X^T A X^{-1}]$ Let $A, B, X \in \mathbb{R}^{n \times n}$ and assume that $X^{-1}$ exists. Derive $\frac{\partial K}{\partial X}$ where $K(X)= \text{Tr}[B X^T A X^{-1}]$ I have tried the following so far ($U = B X^T A X^{-1}, K = \text{Tr}[U]$): $$ \frac{\partial K}{\partial X} = \frac{\partial K}{\partial U} \frac{\partial U}{\partial X} \\ = \frac{\partial \text{Tr}[U]}{\partial U} \frac{\partial U}{\partial X} \\ =I_n \frac{\partial U}{\partial X} = \frac{\partial U}{\partial X} \\ B X^T \frac{\partial A X^{-1}}{\partial X} + A X^{-1} \frac{\partial B X^T}{\partial X} $$ But now I am lacking the tools to compute the matrix-by-matrix derivatives. AI: Let's use a colon to denote the trace/Frobenius product, i.e. $$A:B={\rm Tr}(A^TB)$$ The cyclic property allows terms in a trace product to be rearranged in lots of ways, e.g. $$\eqalign{ A:B &= A^T:B^T &= B:A \\ A:BC &= B^TA:C &= AC^T:B \\ }$$ Write the function using the trace product, then calculate its differential and gradient. $$\eqalign{ K &= AX^{-1}B:X \;= A^TXB^T:X^{-1} \\ dK &= AX^{-1}B:dX + A^TXB^T:dX^{-1} \\ &= AX^{-1}B:dX - A^TXB^T:X^{-1}dX\,X^{-1} \\ &= AX^{-1}B:dX - X^{-T}A^TXB^TX^{-T}:dX \\ &= \Big(AX^{-1}B - X^{-T}A^TXB^TX^{-T}\Big):dX \\ \frac{\partial K}{\partial X} &= AX^{-1}B \;-\; X^{-T}A^TXB^TX^{-T} \\ \\ }$$ In the above, the differential of $X^{-1}$ was utilized; here's how it was derived. $$\eqalign{ I &= X^{-1}X \\ 0 &= dX^{-1}X + X^{-1}dX \\ 0 &= dX^{-1} + X^{-1}dX\,X^{-1} \\ dX^{-1} &= -X^{-1}dX\,X^{-1} \\ }$$ As you have discovered, the problem with the chain rule in matrix calculus is that it very often requires the calculation intermediate quantities which are higher-order tensors, e.g. matrix/matrix, matrix/vector, and vector/matrix derivatives. The differential approach is simpler because the differential of a matrix is just another matrix and obeys the rules of matrix algebra.
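A quick way to gain confidence in identities like this is a finite-difference check on random matrices. A minimal NumPy sketch (the sizes, seed, and step size below are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
X = rng.standard_normal((n, n)) + n * np.eye(n)   # keep X safely invertible

def K(X):
    # K(X) = Tr[B X^T A X^{-1}]
    return np.trace(B @ X.T @ A @ np.linalg.inv(X))

# gradient from the derivation above
Xinv = np.linalg.inv(X)
grad = A @ Xinv @ B - Xinv.T @ A.T @ X @ B.T @ Xinv.T

# central finite differences, entry by entry
h = 1e-6
fd = np.zeros_like(X)
for i in range(n):
    for j in range(n):
        E = np.zeros_like(X)
        E[i, j] = h
        fd[i, j] = (K(X + E) - K(X - E)) / (2 * h)

print(np.max(np.abs(fd - grad)))   # should be tiny (roughly 1e-6 or smaller)
```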
H: Is this set in $\mathbb{R}^2$ closed? Is $A=\{(x,y)\in \mathbb{R}^2 : x=\frac1n \text{ for some } n \in \mathbb{N},\ 0\leq y\leq 1\}$ a closed set? I think it is, but I don't have a proof yet. AI: Following the hints given in the comments: notice that for all $n \in \mathbb{N}$, $a_n \doteq \left(\frac{1}{n}, 0 \right) \in A$. And clearly $a_n \to (0, 0)$. However, $(0, 0) \notin A$, since there is no $n \in \mathbb{N}$ that satisfies $0 = \frac{1}{n}$. Therefore $\overline{A} \neq A$ and $A$ is not closed.
H: Is it possible to solve this probability question without knowing if these two events are independent or not? The question gives $P(A) = 1/3$, $P(B) = 1/4$ and $P(A\cap B) = 1/6$, and asks for $P(A\cup B^c)$. The solution provided is the following: $$P(A\cup B^c) = P(A) +P(B^c)-P(A\cap B^c) =P(A) +P(B^c)-\big(P(A)-P(A\cap B)\big) =P(B^c) +P(A\cap B) = 11/12$$ My question is: how can you conclude that $P(A\cap B^c) = P(A)-P(A\cap B)$ without knowing that $P(A\cap B^c) = P(A)P(B^c) = P(A)(1 - P(B)) = P(A) - P(A)P(B)$? The latter would require $A$ and $B$ (and likewise $A$ and $B^c$) to be independent, which is clearly not true given the problem prompt ($P(A) = 1/3$, $P(B) = 1/4$, but $P(A\cap B) = 1/6$). AI: To answer your question directly, observe that $$ P(A \cap B) + P(A \cap B^c) = P(A), $$ which holds for all events, regardless of independence.
H: How to calculate the following probability given the combination of discrete and continuous events? Is it even possible? This is a problem I came up with myself, but I don't even know how to start to solve it, so maybe somebody here knows or sees a way to approach it: You are given a die with $n \in \mathbb{N}$ sides $A_1,A_2,\ldots,A_n$, each with the same chance to occur. Now if you hit an even-indexed face, you get a random amount of money $X\sim [0,2n]$, and if you hit an odd-indexed one you get $X\sim [0,n]$, but additionally you are allowed to throw the die again (if you want) under the condition that if you get an even-indexed face on the second throw, you lose all your money. Assume that you hit an odd-indexed face and have to decide whether to continue or not, if your goal is to gain as much money as possible. Since the amount of money is continuous, but the throw of the die is discrete, how would one compute any of this in the first place? Is it even possible to compute anything? What is the field of study called that allows me to calculate such things? AI: There are several ways to interpret your question, and your problem indeed has a solution. If you could throw the die again any number of times as long as you keep getting odd numbers, there would be no maximum amount of money you can win, since you could always risk more in order to increase it. Most problems of this kind are approached by computing the expected value of a random variable, which by the Law of Large Numbers is the average value when you repeat the experiment infinitely many times, so the best choice is the one that gives the bigger expected value. In this case (I'll assume $n$ is even; you can work out the odd case on your own after understanding this one), we define $Y$ as the total amount you earn if you throw the die again, and $N$ as the amount you get if you don't. If you don't throw the die again you earn $X\sim[0,n]$, so $N=X$ and then $E(N)=E(X)=\frac{n}{2}$ (I'm assuming your notation means the uniform distribution). If you throw the die again you have two possible outcomes: $\bullet$ You get an even number and lose everything; the probability of that event is $1/2$. $\bullet$ You get an odd number again and win an additional random amount of money $U\sim[0,n]$. Here we can't express the behavior of the variable $Y$ with a single formula. But if we denote by $A$ the event of getting an odd-indexed face, we can describe the conditional distributions $Y|A=N+U$ (where $N\sim[0,n]$ represents the money you already won, and $U\sim[0,n]$ is independent from $N$ and represents the money you will get for the second roll) and $Y|A^c=0$. Now we use another result, called the Law of Total Expectation, which grants $$E(Y)=E(Y|A)P(A)+E(Y|A^c)P(A^c)$$ So we have by linearity of $E$: $$E(Y)=[E(N)+E(U)]\frac{1}{2}+0\times \frac{1}{2}=\frac{E(N)+E(U)}{2}=\frac{\frac{n}{2}+\frac{n}{2}}{2}.$$ So $E(Y)=\frac{n}{2} =E(N)$. It means that on average you will win the same amount independently of your choice. (The case where the first throw lands on an even-indexed face would need a separate analysis, since the payoff distribution there is different.) The difference here is given by the risk you're willing to take; that is quantified by another operator on random variables, called the variance. The bigger the variance, the higher the risk and the rewards. Also, random variables which are part discrete and part continuous are called mixed random variables.
H: Radius of convergence of $\sum_{n=0}^{\infty} (n+a^{n})z^{n}$ Hi, could you please help me find the radius of convergence of the series $\sum_{n=0}^{\infty} (n+a^{n})z^{n}$? I have tried the ratio test and the Cauchy-Hadamard formula but did not come to any conclusion, and when I put it into Wolfram I don't know how it arrives at the result it throws. AI: $$\sum_{n=0}^\infty n z^n = z \left(\frac{d}{dz} \sum_{n=0}^\infty z^n\right)$$ $$\sum_{n=0}^\infty a^n z^n = \sum_{n=0}^\infty (az)^n$$ The first series has radius of convergence $R_1 = 1$, and the second series has radius of convergence $R_2 = \frac{1}{|a|}$. Therefore, the radius of convergence is $$R = \min\left\{1, \frac{1}{|a|}\right\}.$$
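One caveat worth recording: the rule "the radius of a sum is the minimum of the radii" is only guaranteed when the two radii differ. When $|a|=1$ both radii equal $1$, but the direct bound $n-1\le|n+a^n|\le n+1$ gives $\limsup_n |n+a^n|^{1/n}=1$, so $R=1=\min\{1,1/|a|\}$ still holds in that case.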
H: An entire function whose only zeros are the positive integers. This is a problem from my past qual: "Prove or disprove: there is an entire function $f$ s.t. $f(n)=0$ for all $n\in \mathbb{N}$ and nonzero elsewhere." For this type of problem, I think of the Identity Theorem. $f(n)=0$ for all $n$, and hence the set $\{z\mid f(z)=0 \}$ has an accumulation point ($\infty$), so $f=0$, and hence there is no such function. However, I suppose $\infty$ does not count as an accumulation point? Am I right, and if not, how should I solve this? AI: Such a function exists: $\frac{1}{\Gamma(-z)}$ will do if you allow $n=0$ as a zero (its zeros are exactly $z=0,1,2,\dots$), and a translation by $1$ of that, namely $\frac{1}{\Gamma(1-z)}$, if you start at $n=1$. Your suspicion is correct: the Identity Theorem requires an accumulation point in $\mathbb{C}$, and $\mathbb{N}$ has none, so your argument does not apply.
H: Prove that all singularities of $\frac{1}{e^z+3z}$ are of order 1. This is a problem from my past qual: "Prove that all singularities of $$\frac{1}{e^z+3z}$$ are of order 1. You don't need to find the singularities." Usually this kind of problem is easy for me: my procedure is to find the singularities and use Taylor series there. That's why this problem throws me off; I cannot find the poles here, since I cannot solve $e^z+3z=0$. And the comment at the end clearly indicates that I took the wrong path. But then how can I prove this? AI: If $f$ is analytic in a neighbourhood of $z=a$ and $1/f(z)$ has a singularity at $z=a$ that is not a pole of order $1$, then both $f(a) = 0$ and $f'(a)=0$. But here $f(a) = e^a + 3a$ and $f'(a) = e^a + 3$, so...
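Spelling out the last step: if some $a$ satisfied both $e^a+3a=0$ and $e^a+3=0$, subtracting gives $3a-3=0$, so $a=1$, and then $e^1+3=0$, a contradiction. Hence $f$ and $f'$ never vanish simultaneously, so every singularity of $1/(e^z+3z)$ is a pole of order $1$.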
H: Finding $P(X+Z>Y)$ where $X,Y,Z$ are exponential random variables Let $X$,$Y$,$Z$ be independent random variables with exponential distribution of parameter $\lambda$, then $X,Y,Z$ ~ $\xi(\lambda)$. The task is to calculate $P(X+Z>Y)$. Comment: In previous excersices, by finding the joint density function of $X,Y$ ($f_{XY}=\lambda^{2}e^{-\lambda x}e^{-\lambda y}1_{[0,+\infty)x[0,+\infty)}(x,y)$) and integrating I got $P(X>Y)=\frac{1}{2}$ but I can't add $Z$ into the mix. Edit: just in case, I removed what should be the answer. AI: By the same principle, you must triple integrate over the domain $\{(x,z,y):x\in[0{..}\infty), z\in[0{..}\infty),y\in[0{..}x+z)\}$ $$\mathsf P(X+Z>Y)=\int_0^\infty\int_0^\infty\int_0^{x+z} \lambda^3\mathrm e^{-\lambda(x+y+z)}~\mathrm d y~\mathrm d z~\mathrm d x $$
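For completeness, the integral can be evaluated in closed form by conditioning on $X+Z$: since $\mathsf P(Y<s)=1-e^{-\lambda s}$ and $Y$ is independent of $(X,Z)$, $$\mathsf P(X+Z>Y)=\mathsf E\big[1-e^{-\lambda(X+Z)}\big]=1-\big(\mathsf E\, e^{-\lambda X}\big)^2=1-\Big(\frac{\lambda}{2\lambda}\Big)^2=\frac34,$$ using independence of $X,Z$ and $\mathsf E\, e^{-\lambda X}=\int_0^\infty \lambda e^{-2\lambda x}\,dx=\frac12$.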
H: Understanding $(n \cdot 1)(m \cdot 1) = (nm) \cdot 1$ I read that $(n \cdot 1)(m \cdot 1) = (nm) \cdot 1$ in a ring with unity $1$ because $\left(\underbrace{1+\cdots+1}_{n \textrm{ times}}\right)\left(\underbrace{1+\cdots+1}_{m \textrm{ times}}\right) = \underbrace{1+\cdots+1}_{nm \textrm{ times}}$ by the distributive law (of addition) of rings. However, the left distributive law is $a \cdot (b+c) = (a \cdot b) + (a \cdot c)$ and the right distributive law is $(a+b) \cdot c = (a \cdot c) + (b \cdot c)$. I don't understand how the equation above follows from either the left or the right distributive law. For instance, how is addition getting distributed (of course, $1$ above is not the same as the integer 1, but represents unity)? Thanks! AI: It’s easier to see if you temporarily abbreviate $n\cdot 1$ to $a$, say: then you have $$a(\underbrace{1+\ldots+1}_m)=\underbrace{a+\ldots+a}_m$$ by repeated applications of the left distributive law. But $$\begin{align*} \underbrace{a+\ldots+a}_m&=\underbrace{(\underbrace{1+\ldots+1}_n)+\ldots+(\underbrace{1+\ldots+1}_n)}_m\\ &=\underbrace{1+\ldots+1}_{nm}\;. \end{align*}$$ A really rigorous proof would proceed by a double induction on $m$ and $n$, but what’s going on is straightforward enough once you see it that the rigorous proof is a bit of overkill unless the point is to practise such arguments.
H: Why this conditional variation and expectation equality holds? Assume $(X_i)_{i\ge 0}$ are random variables (not necessarily martingale) adapted to the filtration $(\mathcal{F}_i)_{i\ge 0}$. I found a statement that says $$\text{Var}(X_{i+1}-X_i\mid\mathcal{F}_i)=E((X_{i+1}-X_i)^2\mid \mathcal{F}_i).$$ I am wondering why this is true? AI: We have : $$ Var(X_{i+1} - X_i | \mathcal F_i) = E[(X_{i+1} - X_i - E(X_{i+1} - X_i | \mathcal F_i) )^2|\mathcal F_i] \\ = E[(X_{i+1} - X_i)^2 | \mathcal F_i] \color{red}{-E[X_{i+1} - X_i | \mathcal F_i]^2} $$ which follows from expanding the square, linearity and usual conditional variance rules. The term in $\color{red}{\text{red}}$ can be zero if $X_{i}$ were a martingale (or $X_{i+1}-X_i$ forms a series of "martingale differences", which are used in the proof of variance inequalities like Efron-Stein), but if it is not zero then the statement you give does not hold.
H: How to find the integral $I_{A}=\int_{0}^{2\pi}\frac{\sin^2{x}}{(1+A\cos{x})^2}dx$ Let $A\in (0,1)$ be a given real number; find a closed form for the integral $$I_{A}=\int_{0}^{2\pi}\dfrac{\sin^2{x}}{(1+A\cos{x})^2}dx$$ This integral comes from a physical problem. Following is my try: since $$I_{A}=\int_{0}^{2\pi}\dfrac{\sin^2{x}}{(1+A\cos{x})^2}dx=I_{1}+I_{2}$$ where $$I_{1}=\int_{0}^{\pi}\dfrac{\sin^2{x}}{(1+A\cos{x})^2}dx,\quad I_{2}=\int_{\pi}^{2\pi}\dfrac{\sin^2{x}}{(1+A\cos{x})^2}dx$$ For $I_{2}$, let $x=\pi+t$; then we have $$I_{2}=\int_{0}^{\pi}\dfrac{\sin^2{x}}{(1-A\cos{x})^2}dx$$ so $$I_{A}=I_{1}+I_{2}=2\int_{0}^{\pi}\dfrac{\sin^2{x}(1+A^2\cos^2{x})}{(1-A^2\cos^2{x})^2}dx=4\int_{0}^{\frac{\pi}{2}}\dfrac{\sin^2{x}(1+A^2\cos^2{x})}{(1-A^2\cos^2{x})^2}dx$$ Then it gets ugly, so how should I evaluate it? Thank you. AI: Use integration by parts $$\int \frac{\sin^2 x\ dx}{(1+A\cos x)^2}$$ $$=\int \sin x\cdot \frac{\sin x}{(1+A\cos x)^2}\ dx$$ $$=\sin x\cdot \frac{1}{A(1+A\cos x)}-\int \frac{\cos x}{A(1+A\cos x)}\ dx$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{1}{A^2}\int \frac{(1+A\cos x)-1}{1+A\cos x}\ dx$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{x}{A^2}+\frac{1}{A^2}\int \frac{dx}{1+A\cos x}$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{x}{A^2}+\frac{1}{A^2}\int \frac{dx}{1+A\frac{1-\tan^2\frac x2}{1+\tan^2\frac x2}}$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{x}{A^2}+\frac{2}{A^2}\int \frac{\frac 12\sec^2\frac x2dx}{(1-A)\left(\frac{1+A}{1-A}+\tan^2\frac x2\right)}$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{x}{A^2}+\frac{2}{A^2(1-A)}\int \frac{d\left( \tan\frac x2\right)}{\left(\tan\frac x2\right)^2+\left(\sqrt{\frac{1+A}{1-A}}\right)^2}$$ $$=\frac{\sin x}{A(1+A\cos x)}-\frac{x}{A^2}+\frac{2}{A^2\sqrt{1-A^2}}\tan^{-1}\left(\tan\frac{x}{2}\sqrt{\frac{1-A}{1+A}}\right)$$ $$\therefore \int_0^{2\pi} \frac{\sin^2 x\ dx}{(1+A\cos x)^2}=2\int_0^{\pi} \frac{\sin^2 x\ dx}{(1+A\cos x)^2}=\color{blue}{\frac{2\pi}{A^2}\left(\frac{1}{\sqrt{1-A^2}}-1\right)}$$
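As a numerical sanity check of the closed form, one can compare against quadrature; a small SciPy sketch (the value $A=1/2$ is an arbitrary test choice):

```python
import numpy as np
from scipy.integrate import quad

A = 0.5
numeric, _ = quad(lambda x: np.sin(x)**2 / (1 + A*np.cos(x))**2, 0, 2*np.pi)
closed = 2*np.pi/A**2 * (1/np.sqrt(1 - A**2) - 1)
print(numeric, closed)   # both print approximately 3.888
```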
H: Sequence of terms in a series is unbounded outside its region of convergence. Let $\sum_{n=0}^\infty a_nz^n$ be a series with radius of convergence $R$; show that the sequence $\{a_nz_0^n\}$ is unbounded for $|z_0|>R$. My thought: Can we take $|a_n|^\frac{1}{n}\geq \frac{1}{R}$ from the Cauchy-Hadamard formula? Any alternative? AI: Simply this: by the Cauchy-Hadamard formula, $\frac 1R = \limsup |a_n|^{1/n}$. So for $|z_0| > R$, we have $\limsup |a_n z_0^n|^{1/n} = \frac {|z_0|} R > 1$. So there exists some $r>1$ such that $|a_n z_0^n|^{1/n} > r$ for infinitely many $n$. In other words, for infinitely many $n$ we get $|a_n z_0^n|> r^n$, and since $r^n\to\infty$, the sequence $\{a_nz_0^n\}$ is unbounded.
H: Boundary of union of open subsets Let $X$ be a topological space and each $V_i \subset X$ be an open subset of $X$, where $i \in I$. Denote $V_I = \{V_i : i \in I\}$. Below I'll show that $$(*) \quad \quad \partial \left(\bigcup V_I\right) = \overline{\bigcup \partial V_I} \setminus \bigcup V_I $$ provided $X$ is locally connected, or $V_I$ is locally finite. The problem Does this equation hold in arbitrary topological spaces without restrictions? Also of interest are other conditions under which this equation holds. Note For an arbitrary collection $V_I$ in an arbitrary topological space, $$\overline{\bigcup \overline{V_I}} = \overline{\bigcup V_I}$$ Hence, we always have $$ \begin{aligned} \partial \left(\bigcup V_I\right) & = \overline{\bigcup V_I} \setminus \bigcup V_I \\ {} & = \overline{\bigcup \overline{V_I}} \setminus \bigcup V_I \\ {} & \supset \overline{\bigcup \partial V_I} \setminus \bigcup V_I \end{aligned} $$ Theorem A Let $X$ be a topological space, $U, V \subset X$ both be open, and $U$ be connected. Then $U \cap V \neq \emptyset$ and $U \setminus V \neq \emptyset$ if and only if $U \cap \partial V \neq \emptyset$. Proof A Suppose $U \cap \partial V = \emptyset$. Then $U = (U \cap V) \cup (U \setminus \overline{V})$, and these subsets are disjoint. Since $U$ is connected, either $U \cap V = \emptyset$, or $U \setminus \overline{V} = \emptyset$. Because of the assumption, the latter is equivalent to $U \setminus V = \emptyset$. Suppose $U \cap \partial V \neq \emptyset$. Then $U \cap \overline{V} \cap \overline{X \setminus V} \neq \emptyset$, which implies $U \cap \overline{V} \neq \emptyset$ and $U \cap \overline{X \setminus V} \neq \emptyset$. Since $U$ is open, $U \cap \overline{V} \neq \emptyset \iff U \cap V \neq \emptyset$. Since $V$ is open, $U \setminus V = U \cap (X \setminus V) = U \cap \overline{X \setminus V} \neq \emptyset$. Theorem B Let $(X, \mathcal{T})$ be a locally connected topological space, and $V_I$ be as in the problem description. Then $(*)$ holds. Proof B Let $U = \bigcup V_I$, and denote by $\mathcal{T}^*(x)$ the connected open neighborhoods of $x$. By Theorem A, $$ \begin{aligned} {} & x \in \partial U \\ \iff & x \in \overline{U} \setminus U \\ \iff & x \in \overline{\bigcup V_I} \setminus U \\ \iff & \forall W \in \mathcal{T}^*(x) : W \cap \bigcup V_I \neq \emptyset \land x \in X \setminus U \\ \iff & \forall W \in \mathcal{T}^*(x) : \exists i \in I: W \cap V_i \neq \emptyset \land x \in X \setminus U \\ \iff & \forall W \in \mathcal{T}^*(x) : \exists i \in I: W \cap V_i \neq \emptyset \land W \setminus V_i \neq \emptyset \land x \in X \setminus U \\ \iff & \forall W \in \mathcal{T}^*(x) : \exists i \in I: W \cap \partial V_i \neq \emptyset \land x \in X \setminus U \\ \iff & x \in \overline{\bigcup \partial V_I} \setminus U. \end{aligned} $$ Theorem C Let $(X, \mathcal{T})$ be a topological space, and $V_I$ be as in the problem description, and also locally finite. Then $(*)$ holds. Proof C Let $U = \bigcup V_I$. For a locally finite collection (open subsets or not), it holds that $$\overline{\bigcup V_I} = \bigcup \overline{V_I}.$$ Note also that $\{\partial V_i : i \in I\}$ is then a locally finite collection of closed sets, so $\bigcup \partial V_I$ is closed; this justifies the last step below. Therefore $$ \begin{aligned} \partial U & = \overline{U} \setminus U \\ {} & = \overline{\bigcup V_I} \setminus U \\ {} & = \bigcup \overline{V_I} \setminus U \\ {} & = \bigcup \{\overline{V_i} : i \in I\} \setminus U \\ {} & = \bigcup \{\overline{V_i} \setminus V_i : i \in I\} \setminus U \\ {} & = \bigcup \partial V_I \setminus U \\ {} & = \overline{\bigcup \partial V_I} \setminus U.
\end{aligned} $$ AI: Here's a counterexample as per my comment. Let $X = 2^\omega$ be Cantor space with the usual topology, generated by basic clopen sets $[\sigma] = \{ \sigma^\frown \alpha: \alpha \in 2^\omega \}$ for finite strings $\sigma \in 2^{<\omega}$. Let $U \subseteq 2^\omega$ be any open set which is not closed (examples of such things here, e.g. the complement of a point). Then, $U = \bigcup_{i \in I} V_i$ for some basic $V_i$. We use the fact that $A \subseteq X$ is clopen $\iff$ $\partial A = \varnothing$. $U$ is not clopen, so $\partial U \neq \varnothing$. However, all the $V_i$ are clopen, so $\bigcup_{i \in I} \partial V_i = \varnothing$. It follows that $\partial U \neq \overline{\bigcup_{i \in I} \partial V_i} \setminus \bigcup_{i \in I} V_i$. Presumably this works because Cantor space fails badly to satisfy any sort of connectedness - it is totally disconnected.
H: Recurrent sequence convergence. I'm trying to prove this but I can't; I tried to use an iterated integral for it but couldn't make it work. Let $f_0(x)$ be continuous on $0\leq x\leq a$. Show that the sequence of functions defined by $$f_n(x)=\int_{0}^{x}x\,f_{n-1}(t)\,dt$$ for $n=1,2,3,\ldots$ converges uniformly to $f(x)\equiv 0$ on $[0,a]$. AI: Being continuous, $f_0$ is bounded on $[0,a]$. Suppose $|f_0(x)| \leq M$. Let us show by induction that $|f_n(x)| \leq M\frac {x^{2n}} {b_n}$, where $b_0=1$ and $b_{n+1}=(2n+1)b_n$. This is true for $n=0$. Suppose it is true for $n$. Then $$|f_{n+1}(x)| \leq x \int_0^{x} M\frac{t^{2n}}{b_n}\,dt=\frac {Mx^{2n+2}} {(2n+1)b_n}=\frac {Mx^{2n+2}} {b_{n+1}}.$$ We have proved our claim. Now use the convergence of the series $\sum \frac {a^{2n}} {b_n}$ to conclude that $\frac {a^{2n}} {b_n} \to 0$. [Use the ratio test for convergence of the series: the ratio of consecutive terms is $\frac{a^{2}}{2n+1}\to 0$.] Hence $\sup_{x\in[0,a]}|f_n(x)|\leq \frac{Ma^{2n}}{b_n}\to 0$, which is exactly uniform convergence to $0$ on $[0,a]$.
H: Unique Solution to 1st Order Autonomous ODE Take the ODE $y'=F(y)$. Show it has a unique solution with initial condition $y(t_0) = y_0$ in a neighborhood of $t_0$ provided $F$ is continuous and $F(y_0) \neq 0$. I am trying to use the inverse function theorem by solving the ODE the inverse function satisfies, but I am getting stuck. AI: The ODE $$ \frac{dy}{dx} = F(y) $$ separates into $$ dx = \frac{dy}{F(y)} \iff x = \int \frac{dy}{F(y)} = \int f(y) dy, $$ where $f(y) = 1/F(y)$. Can you prove $f$ is integrable? UPDATE After your response, we then understand that $f$ is integrable (it is continuous near $y_0$, since $F$ is continuous and $F(y_0)\neq0$), therefore we conclude that there is some anti-derivative $\phi$ of $f$, so $t = \phi(y) + C$, and we enforce the initial condition, calculating $$ C = t_0 - \phi(y_0), $$ so the solution satisfies $$ t - t_0 = \phi(y) - \phi(y_0). $$ Since $\phi' = 1/F$ is continuous and nonzero near $y_0$, $\phi$ is strictly monotone there, hence invertible, and the unique local solution is $y(t)=\phi^{-1}\big(t-t_0+\phi(y_0)\big)$.
H: Show that $\sum_{v=0}^N (-1)^v {N \choose v} \left(1- \frac{v}n\right)^r \to (1-e^{-p})^N$. (Feller Volume 1, Q.13, p.61) Let $$u(r,n) = \sum_{v=0}^N (-1)^v {N \choose v} \left(1- \frac{v}n\right)^r.$$ Show that if $n\to \infty$ and $r \to \infty$ so that $r/n \to p$, then $u(r,n) \to (1-e^{-p})^N$. Although I am not sure if this exercise is related to the previous one, I have proved previously that $$\frac{{n - N \choose r-N}}{{n \choose r}} \to p^N.$$ But, this exercise involves sum and $(-1)^v$, and I am not sure how to proceed. I would appreciate if you give some hint. AI: Recall $\lim_{n \to \infty} \left ( 1 - \frac{v}{n} \right )^n = e^{-v}$. Using this, we can see the following (as $n,r \to \infty$): $$ u(r,n) = \sum_{v=0}^N (-1)^v \binom{N}{v} \left ( 1 - \frac{v}{n} \right )^r = \sum_{v=0}^N (-1)^v \binom{N}{v} \left ( \left ( 1 - \frac{v}{n} \right )^n \right )^{r/n} = \sum_{v=0}^N (-1)^v \binom{N}{v} \left ( e^{-v} \right )^{r/n} $$ Now if we apply our knowledge that $r/n \to p$, we see how to simplify further to $$ \sum_{v=0}^N (-1)^v \binom{N}{v} e^{-vp} $$ Finally, using the binomial theorem, we see that this is equal to $$ (1 - e^{-p})^N $$ I hope this helps ^_^
H: Proving $\sum_{k=1}^n X_k\rightarrow -\infty$ almost surely if $P(X_k=k^2)=\frac{1}{k^2}=p_k$, $P(X_k=-1)=1-p_k$. Suppose $\{X_k\}_{k\geq 1}$ are independent with $$P(X_k=k^2)=\frac{1}{k^2}=p_k, \quad P(X_k=-1)=1-p_k.$$ Show $\sum_{k=1}^n X_k\rightarrow -\infty$ almost surely as $n\rightarrow \infty.$ I can see that each $X_k\rightarrow-1$ almost surely, because $P(X_k\rightarrow-1)=1$, and I think that the Borel-Cantelli Lemma may be useful, but I don't know how to proceed; I would appreciate any hint. AI: $\sum_k P(X_k=k^{2})=\sum_k \frac 1 {k^{2}} <\infty$. The Borel-Cantelli Lemma shows that with probability $1$, $X_k \neq k^{2}$ for all sufficiently large values of $k$. But the only values of $X_k$ are $k^{2}$ and $-1$, so $X_k=-1$ for all sufficiently large values of $k$. This implies that $\sum\limits_{k=1}^{n} X_k \to -\infty$ with probability $1$.
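Spelling out the last implication: on an event of probability $1$ there is a (random) index $K$ with $X_k=-1$ for all $k>K$, and then for $n>K$ $$\sum_{k=1}^n X_k=\sum_{k=1}^K X_k-(n-K)\longrightarrow -\infty \quad\text{as } n\to\infty.$$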
H: Proving that in a graph with 400 vertices, each with a valency of 201, there exists a subgraph isomorphic to $K_3$ I need to prove that a graph $G$ with 400 vertices, each of valency 201, has a subgraph isomorphic to $K_3$. As far as I understand, I need to prove that there exists a triangle within the graph $G$, but I am not exactly certain how to approach the problem. I suppose it is not accidental that the valency is slightly above half the number of vertices, but I don't know how to make use of it. AI: A triangle-free graph on $m$ vertices has at most $\lfloor m^2/4 \rfloor$ edges (Mantel's theorem). Let $n = 200$, so $G$ has $2n$ vertices and $\frac{(2n)\cdot 201}{2} = n(n+1)$ edges. Since $n(n+1) > n^2 = \lfloor (2n)^2/4 \rfloor$, the graph $G$ contains a triangle.
H: How to prove it is the only answer The question is: Find all prime numbers $p$ such that $16p+1$ is a perfect cube. By trial and error, I have found 307 to be a solution, but how do I prove that it is the only solution (if it is)? I've got that $16p+1$ can only give odd solutions that end in $7,9,1,2,3$, but that's about it. AI: $$16p+1=n^3\iff 16p=n^3-1$$ $$\iff 16p=(n-1)(n^2+n+1)$$ Note that $$n \equiv0,1 \pmod{2} \implies n^2+n+1 \equiv1 \pmod{2}, \tag{1}$$ i.e. $n^2+n+1$ is always odd. Hence the whole factor $16$ must divide $n-1$: write $n-1=16m$. Then $16p=16m(n^2+n+1)$, so $$p=m\,(n^2+n+1).$$ Since $p$ is prime and $n^2+n+1>1$, we must have $m=1$, giving $$n-1=16 \iff n=17,\qquad p=n^2+n+1=307,$$ which is indeed prime, with $16\cdot 307+1=4913=17^3$. So $307$ is the only solution.
H: I want to prove that given a limit point, it is possible to find a sequence that converges to it Given this definition, Prove carefully that if $E \subset X$ and if $x$ is a limit point of $E$, then $\exists$ a sequence $(a_n)$ in $E$ that converges to $x$. Proof: (attempt) Let x be limit point of $E$. Let $r > 0$ be given and put $V = N_{r} (x)$ Notice that we can find $a_1 \in N_1(x)$ (putting $r = 1$) such that $a_1 \neq x$ We can find $a_2 \in N_{1/2} (x) $ such that $a_2 \neq x$ and so on. In general, we can find $a_n \in N_{1/n} (x) $ such that $a_n \neq x$ and $a_n \in E$. Now, Let $\varepsilon > 0$ be given. We can find $N > \dfrac{1}{\varepsilon }$ so that for all $n > N$, we have $$ d(a_n,x) < \dfrac{1}{n} < \dfrac{1}{N} < \varepsilon $$ so $a_n \to x$. Is this a correct proof? AI: Yes, it is correct, but: The sentence “Let $r>0$ be given and put $V=N_r(x)$.” is useless. Neither $r$ nor $V$ are used again. There is no need to write that each $a_n$ is distinct from $x$. Yes, you can choose $a_n$ such that that occurs, but that's irrelevant for the proof.
H: Solve $(1+i)^z = i$ where $z$ is a complex number. How can I solve this equation? $$(1+i)^z = i$$ The only way I can start is to use $e^{z\log(1+i)}$. AI: Note that $1+i=\sqrt{2}\left(\frac{1+i}{\sqrt{2}}\right)=\sqrt{2}e^{\frac{i\pi}{4}}$. Now let $z=x+iy$. Then \begin{align*} (1+i)^z & =\left(\sqrt{2}e^{\frac{i\pi}{4}}\right)^{x+iy}\\ &=\left(\sqrt{2}e^{\frac{i\pi}{4}}\right)^{x}\cdot \left(\sqrt{2}e^{\frac{i\pi}{4}}\right)^{iy}\\ &=\left(2^{x/2}e^{\frac{ix\pi}{4}}\right)\cdot \left(2^{iy/2}e^{\frac{-y\pi}{4}}\right)\\ &=\left(2^{x/2}e^{\frac{-y\pi}{4}}\right)\cdot \left(e^{iy (\ln 2)/2}e^{\frac{ix\pi}{4}}\right)\\ &=\left(2^{x/2}e^{\frac{-y\pi}{4}}\right)\cdot \left(e^{i\left[\frac{2y\ln 2+x\pi}{4}\right]}\right). \end{align*} In order to solve $(1+i)^z=i$, we do the following: first we equate the magnitude of both sides to get \begin{align*} |(1+i)^z|&=|i|\\ 2^{x/2}e^{\frac{-y\pi}{4}}&=1\\ 2^{x/2}&=e^{\frac{y\pi}{4}}\\ \color{red}{x \ln 4} & \color{red}{=y\pi}. \end{align*} Now we equate the angles to get \begin{align*} \frac{2y\ln 2+x\pi}{4}&=\frac{(4n+1)\pi}{2}\\ \color{blue}{2y\ln2+\pi x}&=\color{blue}{2(4n+1)\pi}. \end{align*} Now solve for $x$ and $y$ from the two equations to get $z=\frac{i(4n+1)\pi}{2\log(1+i)}$.
H: Newton method, exchanging rows. Suppose we have a function $F(x,y,z) = [f_1(x,y,z),f_2(x,y,z),f_3(x,y,z)]$ and that $f_1$ depends only on x, $f_2$ depends only on y, and $f_3$ depends only on z. Now if I apply Newton's method I can write $[d_x,d_y,d_z] = -J^{-1}(x,y,z)\,F(x,y,z)$. The question is: if I exchange the rows of $F$, for example $F_{new}(x,y,z) = [f_2(x,y,z),f_1(x,y,z),f_3(x,y,z)]$, it seems I can write the step in the same way, $[d_x,d_y,d_z] = -J_{new}^{-1}(x,y,z)\,F_{new}(x,y,z)$, but intuitively I expected something like $[d_y,d_x,d_z] = -J_{new}^{-1}(x,y,z)\,F_{new}(x,y,z)$. Can you please explain why the exchange of rows doesn't affect the computed step? AI: This is because the permutation that swaps these rows is also applied to the Jacobian. Let this permutation be $P$; then we have $$ F_{new} = PF\implies J_{new} = PJ. $$ The Newton step $s_{new}$ is then $$ s_{new} = -J_{new}^{-1}F_{new}(x_k) = -J^{-1}P^{-1}PF(x_k) = -J^{-1}F(x_k) = s. $$
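A tiny numerical illustration (NumPy sketch; the system $F$ below is a made-up example with the stated structure, and the starting point is arbitrary):

```python
import numpy as np

def F(v):
    x, y, z = v
    # each equation depends on a single variable, as in the question
    return np.array([x**2 - 2.0, np.cos(y) - y, z**3 - 5.0])

def J(v):
    x, y, z = v
    return np.diag([2*x, -np.sin(y) - 1.0, 3*z**2])

P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]], dtype=float)   # swaps the first two equations

v = np.array([1.0, 0.5, 2.0])
step  = -np.linalg.solve(J(v), F(v))          # original ordering
step2 = -np.linalg.solve(P @ J(v), P @ F(v))  # permuted ordering
print(np.allclose(step, step2))               # True: the Newton step is unchanged
```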
H: Evaluating erf(x) using its Taylor series I tried to evaluate the error function via its Taylor series, starting from the definition $$ \operatorname{erf}(z) = \frac{2}{\sqrt{\pi}}\int_0^ze^{-t^2}dt.$$ Expanding the integrand and integrating term by term, I got $$ \frac{2}{\sqrt{\pi}}\int_0^ze^{-t^2}dt = \frac{2}{\sqrt{\pi}}\int_0^z\sum_{k=0}^{\infty}\frac{(-1)^kt^{2k}}{k!}\,dt = \frac{2}{\sqrt{\pi}}\sum_{k=0}^{\infty}\frac{(-1)^kt^{2k+1}}{(2k+1)k!}\Bigg|_{t=0}^{t=z}. $$ I used Wolfram Alpha to evaluate this sum at certain values, but the series appears to diverge: the partial sums get bigger as I calculate more terms, yet when evaluating the infinite series Wolfram automatically uses the built-in error function to get the value. Why do I get huge values when calculating partial sums, even though the infinite series converges? AI: The ratio test or similar proves the series converges for all $x$. However, a brief inspection of each term should convince you that for large $x$ the terms begin by increasing, and only for larger $n$ will they start approaching zero. If you define $a_n$ by $$ a_n = \dfrac{2}{\sqrt \pi} \dfrac{(-1)^{n}}{(2n+1) \cdot n!} $$ so that the series becomes $ \text {erf}(z) = \sum_n a_n z^{2n+1} $, you can calculate the ratio of successive terms $$ \frac{a_{n+1}z^{2n+3}}{a_n z^{2n+1}} = -\frac{(2n+1)\, z^2 }{(2n+3) (n+1) } $$ which plainly has limit zero as $n \to \infty$ but does not start to decrease in magnitude until $ (2n+3)(n+1)/(2n+1) > |z|^2 $. This makes the series particularly inappropriate for calculating $\text{erf}(z)$ for large $z$. In computer floating point arithmetic the large terms will dominate the sum early on and fail to cancel, so that all accuracy is lost. The sum will eventually converge (if individual terms do not exceed the large number limit and overflow) but the apparent limit will be useless. If you consult Wikipedia or similar references you can find alternative formulae, particularly those for the complementary error function $\text{erfc} (x) = 1 - \text{erf} (x) $, that converge effectively for large arguments and allow accurate calculation of the function.
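To see the cancellation failure concretely, one can compare floating-point partial sums against math.erf at a moderately large argument; a small Python sketch (the value $z=6$ and the cutoffs are arbitrary choices):

```python
import math

def erf_series(z, nmax):
    # partial sum of the Taylor series, summed naively in floating point
    total = 0.0
    for k in range(nmax):
        total += (-1)**k * z**(2*k + 1) / ((2*k + 1) * math.factorial(k))
    return 2.0 / math.sqrt(math.pi) * total

z = 6.0
for n in (10, 20, 40, 80):
    print(n, erf_series(z, n))   # partial sums swing through huge values before settling
print("math.erf:", math.erf(z)) # reference value, essentially 1.0
```

Even the settled value loses several digits to cancellation, which is the answer's point: the convergent series is numerically useless at large arguments.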
H: Behaviour of meromorphic functions near poles. Let $F(z)$ be a meromorphic function on a domain $\Omega$ with a pole at $z=a.$ Let $C \subset \Omega$ be a simple closed contour with the point $z=a$ enclosed in its interior domain $I,$ and assume that $F(z)$ is continuous on $C.$ Prove that there exists a constant $M>0$ such that within $I$, $F(z)$ attains every value in the set $\{w \in \mathbb{C}:|w|>M\}$. My thought: how can I use the condition $\lim_{z\rightarrow a}F(z)=\infty$, which is equivalent to $a$ being a pole of $F(z)$? AI: Near $a$, you can write $F(z)$ as $\frac{g(z)}{(z-a)^n}$, for some $n\in\mathbb{N}$ ($n$ is the order of the pole) and some analytic function $g$ such that $g(a)\ne0$. Take $r>0$ such that $D(a,r)\setminus\{a\}\subset\Omega$ and that $z\in D(a,r)\implies g(z)\ne0$, shrinking $r$ if necessary so that $D(a,r)\subset I$. Then, in that punctured disk,$$\frac1{F(z)}=\frac{(z-a)^n}{g(z)}$$and this quotient defines an analytic function $\varphi\colon D(a,r)\longrightarrow\Bbb C$ if you put $\varphi(a)=0$. By the open mapping theorem, $\varphi\bigl(D(a,r)\bigr)$ is an open set, and therefore it contains some disk $D(0,\varepsilon)$. So, take $M=\frac1\varepsilon$: for every $w$ with $|w|>M$ we have $\frac1w\in D(0,\varepsilon)\setminus\{0\}$, hence $\frac1w=\varphi(z)$ for some $z\in D(a,r)\setminus\{a\}\subset I$, i.e. $F(z)=w$.
H: Proving functions equality is reflexive, symmetric, and transitive The reason I am posting this is that these seem too trivial and my "proofs" feel like I am doing nothing other than stating definitions, not even manipulating them. Here is how functions equality is defined in the book: Two functions $f : X → Y$ ,$g: X→ Y$ with the same domain and range are said to be equal, $f=g$, if and only if $f(x) = g(x)$ for all $x ∈ X$. (If $f(x)$ and $g(x)$ agree for some values of $x$, but not others, then we do not consider $f$ and $g$ to be equal.) My work: Reflexive: Let $f : X → Y$ be a function then by definition for all $x ∈ X$ we have $f(x)=f(x)$ so $f$ is equal to itself. Symmetric: Let $f : X → Y$ and $g: X→ Y$ be two functions, if we have $f=g$ then for all $x ∈ X$ we have $f(x)=g(x)$, if every input $x$ of $f$ gives the same output as when this input is put in $g$ then conversely every input $x$ of $g$ gives the same output as when this input is put in $f$, formally, $x ∈ X$ we have $g(x)=f(x)$, thus $g=f$ and the notion of function equality is symmetric. Transitive: Let $f : X → Y$ and $g: X→ Y$ and $h: X→ Y$ be three functions if $f=g$ and $g=h$, for all $x ∈ X$ we have $f(x)=g(x)$ and for all $x ∈ X$ we have $g(x)=h(x)$ similarly, we can essentially replace $g$ by $f$ as they are equal, so we get $f=h$, thus the notion of function equality is transitive. Can someone let me know is what I did is correct and if not how to improve it? AI: For transitivity, you could directly invoke the transitivity of equality. That looks more elegant to me. You may also want to improve the grammar by splitting up the various run-on sentences into separate bits.
H: Find a matrix from a given minimal polynomial I have to find a $3\times3$ matrix with the minimal polynomial $x^2 -9$. What I've tried: if it were a $2\times2$ matrix, then: $$ C:=\begin{pmatrix} 0&9 \\ 1 & 0 \end{pmatrix} $$ Then I tried adding columns and rows with zeros, but the minimal polynomial is now $x^3-9x$: $$ C:=\begin{pmatrix} 0&9&0 \\ 1 & 0&0 \\0&0&0 \end{pmatrix} $$ How can I fix this? AI: Your example will not work because the eigenvalues of such a matrix can only be $3,-3$; your padded matrix also has the eigenvalue $0$, which is why its minimal polynomial picked up the extra factor $x$. The characteristic polynomial should be something like $(x-3)^2(x+3)$ or $(x+3)^2(x-3)$. Thus look for a matrix with eigenvalues, say, $3,3,-3$. The simplest matrix that satisfies this is $$A=\begin{bmatrix}3&0&0\\0&3&0\\0&0&-3\end{bmatrix}$$
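Verification: $A^2=9I$, so $A$ satisfies $x^2-9$; and since $A$ is not a scalar multiple of $I$, no degree-one polynomial annihilates it, so $x^2-9$ is indeed the minimal polynomial.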
H: Evaluation of a limit using first principles Express the following limit as $f'(c)$ for some function $f$ and some number $c$: $$\lim_{h\rightarrow 0}\frac{5(1+h)^{20}-6(1+h)^3+1}{h}$$ What I tried: using L'Hôpital's rule, $$\lim_{h\rightarrow 0}\frac{100(1+h)^{19}-18(1+h)^2}{1}=82,$$ but I did not understand how to express the limit in the form $f'(c)$. It seems it should be solved using first principles. Thanks. AI: Put $f(x)=5x^{20}-6x^3$. Then $f(1)=-1$ and so$$\lim_{h\to0}\frac{5(1+h)^{20}-6(1+h)^3+1}h=\lim_{h\to0}\frac{f(h+1)-f(1)}h=f'(1).$$
H: A three-digit number was decreased by the sum of its digits. Then the same operation was carried out with the resulting number, et cetera, 100 times in all. Prove that the final number is zero. One method is very computational - we look at the number of times 27, 18, 9 are removed from the original number (within certain ranges) and find a pattern, etc. But this is tedious. Can someone suggest a more elegant proof? AI: The result is a number divisible by $9$, because $$(100x+10y+z)-(x+y+z)=9(11x+y).$$ Being a decreasing process, we "jump" from a multiple of $9$ to a smaller multiple of $9$. Doing it a hundred times, we necessarily land on $0$... ...under the condition that we started from a number less than $900$ (thanks to Empy2 and quasi for pointing it out): the first operation then lands on a multiple of $9$ that is at most $891=99\cdot 9$, and from there at most $99$ further jumps of size at least $9$ reach $0$. It remains to treat the cases in the interval $[900,999]$. Here is a special treatment for them. Let us denote the interval $[k\cdot 100,(k+1)\cdot 100)$ as the slice $S_k$. In fact, the "jumps" are not always from a multiple of $9$ to the multiple of $9$ immediately below it ("small jumps"); they happen to be multiples of $18$ as well ("large jumps"). Here in particular is what happens when we take a number in the slice $S_9$: it is included in one of the two sequences "coalescing" at $891$: $$\begin{cases}994 \rightarrow 972 \rightarrow 954 \rightarrow 936 \rightarrow 918 \rightarrow 900 \searrow \\ \text{} \\ \ \ \ \ \ \ \ \ \ \ \ \ \ 981 \rightarrow 963 \rightarrow 945 \rightarrow 927 \rightarrow 909 \nearrow \end{cases} 891 \rightarrow 873 \rightarrow 855 \rightarrow 837 \rightarrow 819 \rightarrow \ \text{etc.}$$ In slices $S_{9}$ and $S_{8}$, using almost always "large jumps" instead of small jumps, we already "spare" the ten small jumps that were needed to arrive at $0$ within at most a hundred jumps. It seems unnecessary to explain this with more words; a graphic helps to capture in a single glance what happens in the cases of three numbers and the associated sequences of "jumps": $999$, the extreme case (already a multiple of $9$, with an exceptional initial jump of $-27$) [blue points], $971$ [red points], $943$ [black points]. We see in particular that the slopes are either $-18$ or $-9$, with a dominance of $-18$ on the left and $-9$ on the right. This is why the numbers of the last slice $S_9=[900,999]$ do not need that many steps to reach the value $0$ (at most $80$ in fact). Fig. 1: Slopes $-18$, dominant in the first part, progressively give way to slopes $-9$.
H: Is it enough if $g$ is injective for $g ◦ f$ to be injective? This question came to mind when I got the following exercise: Let $f : X → Y$ and $g : Y → Z$ be functions. Show that if $f$ and $g$ are both injective, then so is $g ◦ f$. But this got me wondering: isn't it enough for just $g$ to be injective? We have $(g ◦ f)(x)=g(f(x))$, so $g$ is just taking a certain input; by definition $g$ can't be injective unless it gives a unique output for every unique input. So regardless of what $f(x)$ is, it's still an input to $g$, and it falls under "every input", so $g ◦ f$ should be injective no matter what $f$ is. Note: I know the exercise doesn't say what I am claiming is wrong; in fact, it says nothing about it, which is what got me wondering. AI: No. Injectivity means that two different inputs give two different outputs. If you take $f$ non-injective, then there exist two different inputs $x_1,x_2$ such that $f(x_1)=f(x_2)$. But then naturally also $g(f(x_1))=g(f(x_2))$, since you are applying $g$ twice to the same argument: the values $f(x_1)$ and $f(x_2)$ are not two different inputs to $g$ but one and the same, so injectivity of $g$ never comes into play. This says exactly that $(g\circ f)(x_1)=(g\circ f)(x_2)$, so $g\circ f$ cannot be injective. Thus $f$ being injective is a necessary (but not sufficient) condition for $g\circ f$ to be injective.
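A concrete counterexample makes this tangible. Below, $g$ is injective but $f$ collapses two inputs, so $g\circ f$ does too (the particular functions are my own choice):

```python
f = lambda x: x * x      # not injective on the integers: f(-1) == f(1)
g = lambda y: y + 1      # injective

compose = lambda x: g(f(x))
print(compose(-1), compose(1))   # 2 2: two distinct inputs, one output
```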
H: ${\lim_{x\rightarrow \infty}}(\sqrt{x^2+2x+3} - \sqrt{x^2+3})^{x}$ $$\lim_{x\rightarrow \infty}\left(\sqrt{x^2+2x+3} - \sqrt{x^2+3}\right)^{x}$$ I tried taking the log of both sides (on paper), but after taking the log, how do I proceed? You get $\infty \cdot (\infty-\infty)$, and $\infty-\infty$ could be any number, so I can't just take it as $0$; and even if I do, the question becomes nasty. Please help me solve this limit. It should be solvable without series expansions, as those haven't been taught: only L'Hôpital and basic algebra. AI: $$A=\left(\sqrt{x^2+2x+3} - \sqrt{x^2+3}\right)^{x}$$ For what is inside the parentheses, use Taylor expansions: $$\sqrt{x^2+2x+3}=x+1+\frac{1}{x}-\frac{1}{x^2}+O\left(\frac{1}{x^3}\right)$$ $$\sqrt{x^2+3}=x+\frac{3}{2 x}+O\left(\frac{1}{x^3}\right)$$ $$\sqrt{x^2+2x+3} - \sqrt{x^2+3}=1-\frac{1}{2 x}-\frac{1}{x^2}+O\left(\frac{1}{x^3}\right)$$ $$\log(A)=x \log\left(1-\frac{1}{2 x}-\frac{1}{x^2}+O\left(\frac{1}{x^3}\right) \right)=-\frac{1}{2}-\frac{9}{8 x}+O\left(\frac{1}{x^2}\right)$$ $$A=e^{\log(A)}=\frac{1}{\sqrt{e}}-\frac{9}{8 \sqrt{e}\, x}+O\left(\frac{1}{x^2}\right)$$ which shows both the limit, $e^{-1/2}$, and how it is approached.
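As a sanity check (not part of the original answer), sympy reproduces the limit, and a numeric sweep shows the approach from below predicted by the $-\frac{9}{8\sqrt{e}\,x}$ correction term:

```python
import math
import sympy as sp

x = sp.symbols('x', positive=True)
expr = (sp.sqrt(x**2 + 2*x + 3) - sp.sqrt(x**2 + 3))**x
print(sp.limit(expr, x, sp.oo))    # exp(-1/2)

# numeric values approach 1/sqrt(e) ~ 0.6065 from below
for t in (10.0, 100.0, 1000.0):
    print(t, (math.sqrt(t*t + 2*t + 3) - math.sqrt(t*t + 3))**t)
```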
H: Inequality for the expectation of a nonnegative random variable Let $X$ be a nonnegative random variable with distribution $F$ and mean $\mu=E(X)>0$, and let $A_{\mu}=[\mu, \infty)$. Must $$ \int_{A_\mu} x \, dF(x) \geq \mu/2 $$ then hold? I'm trying to find counterexamples, both continuous and discrete, but I'm not finding any, so I've started to suspect the inequality is actually true. I'm also wondering whether there is a quick proof, in which case maybe I'm missing something simple. I've tried a proof by contradiction, but did not get far. Edit: the inequality trivially holds if the mean $\mu$ is strictly smaller than the median. Indeed, $$ \mu = E[X|X\geq \mu] P(X\geq \mu)+E[X|X < \mu] P(X < \mu), $$ thus $$ \int_{A_\mu} x \, dF(x) =E[X|X\geq \mu] P(X\geq \mu)\\ =\mu-E[X|X < \mu] P(X < \mu)\\ \geq \mu\{1-P(X < \mu)\}\\ \geq \mu/2, $$ where the last step uses $P(X<\mu)\leq 1/2$. But what if the opposite is true, i.e. the mean is larger than the median? AI: In general, the inequality does not hold. Consider the following counterexample: $X$ takes the value $3/2$ with probability $3/4$ and the value $7/2$ with probability $1/4$. In this case, $\mu=\frac34\cdot\frac32+\frac14\cdot\frac72=2$. Saying that the inequality fails is the same as saying that $$ \int_{A_\mu^c}x\,dF(x)=E[X|X<2]P(X<2)> \mu/2=1. $$ Clearly, $E[X|X<2]=3/2$, so the above inequality rewrites as $$ P(X<2)> \frac{1}{3/2}=\frac{2}{3}, $$ which holds true since $P(X<2)=P(X=3/2)=3/4$.
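The counterexample can be verified with exact rational arithmetic; a short sketch using Python's `fractions` module:

```python
from fractions import Fraction as F

vals  = [F(3, 2), F(7, 2)]     # the two atoms of the counterexample
probs = [F(3, 4), F(1, 4)]

mu = sum(v * p for v, p in zip(vals, probs))
tail = sum(v * p for v, p in zip(vals, probs) if v >= mu)
print(mu, tail, tail >= mu / 2)    # 2 7/8 False: the inequality fails
```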
H: Why is a bond (graph theory) non-empty? According to Diestel (page 25), a bond is defined as a minimal non-empty element of the cut space. He explains that the non-empty condition in the definition of a bond bites only if $G$ is disconnected. What is the meaning of this? AI: Recall the definition of a bond: a minimal non-empty cut in $G$ is a bond. Your question is then related to footnote 11: the empty set of edges is a cut only if the graph is disconnected (partition the vertices along components, so that no edge crosses the partition). Therefore, if the graph is connected, every cut is automatically non-empty, the definition of a bond reduces to "a minimal cut in $G$", and the non-emptiness condition is redundant since an empty cut can never occur. The condition only "bites", i.e. actually excludes something, when $G$ is disconnected.
H: Prove that a cycle of length $k\geq 2$ can be written as a product of $k-1$ transpositions. Prove that a cycle of length $k\geq 2$ can be written as a product of $k-1$ transpositions as follows: $$ (a_1 \dots a_{k-1} a_{k})=(a_1 a_{k})(a_1 a_{k-1})\cdots(a_1 a_2).$$ I found an answer here: Permutations as a product of transpositions, but I'm not able to generalize it. I have an intuition that the formula is correct, but I'm not able to present a rigorous mathematical argument. AI: Prove it by induction on $k$ (throughout, compositions are applied right to left). For $k = 2$, $(a_1a_2)$ is already a transposition. Now suppose: $$ (a_1\cdots a_{k-1}a_k) = (a_1a_k)(a_1a_{k-1})\cdots(a_1a_2) $$ Then: $$ (a_1\cdots a_ka_{k+1}) \stackrel{!}{=} (a_1a_{k+1})(a_1\cdots a_{k-1}a_k) = (a_1a_{k+1})(a_1a_k)(a_1a_{k-1})\cdots(a_1a_2) $$ So the only non-trivial part of this proof is the equality marked $\stackrel{!}{=}$. To show this, we want to show that applying the permutation $(a_1\cdots a_ka_{k+1})$ to any $a_i$ is the same as applying the permutation $(a_1a_{k+1})(a_1\cdots a_{k-1}a_k)$. We consider three cases. Case 1: $i = k$. Applying $(a_1\cdots a_ka_{k+1})$ to $a_k$ clearly gives $a_{k+1}$. On the other hand, applying $(a_1\cdots a_{k-1}a_k)$ to $a_k$ gives $a_1$, and composing it with $(a_1a_{k+1})$ gives $a_{k+1}$. Case 2: $i = k+1$. Applying $(a_1\cdots a_ka_{k+1})$ to $a_{k+1}$ clearly gives $a_1$. $(a_1\cdots a_{k-1}a_k)$ has no effect on $a_{k+1}$, but $(a_1a_{k+1})$ swaps it to give $a_1$. Case 3: $i = 1,2,\dots,k-1$. Applying $(a_1\cdots a_ka_{k+1})$ to $a_i$ clearly gives $a_{i+1}$, and the same holds for $(a_1\cdots a_{k-1}a_k)$. Since $i+1 \neq 1,k+1$, the transposition $(a_1a_{k+1})$ has no effect on $a_{i+1}$, so it still gives $a_{i+1}$. In all cases, both permutations send $a_i$ to the same element, which completes the induction.
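The identity can be machine-checked with sympy's permutation class for, say, $k=5$. One caveat: sympy composes permutations left to right under `*`, so the transposition $(a_1a_2)$, which acts first, is listed first (indices start at $0$ here):

```python
from functools import reduce
from sympy.combinatorics import Permutation

k = 5
cycle = Permutation(0, 1, 2, 3, 4)                     # the 5-cycle (a1 a2 a3 a4 a5)
transpositions = [Permutation(0, i, size=k) for i in range(1, k)]
# left-to-right product: (0 1) acts first and (0 4) last, matching
# (a1 a5)(a1 a4)(a1 a3)(a1 a2) in the usual right-to-left notation
product = reduce(lambda p, q: p * q, transpositions)
print(product == cycle)                                # True
```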
H: How to prove that $f(x)=x+\frac{1}{x}$ is not cyclic? Let $f(x)=x+\frac{1}{x}$ and call a function cyclic if $f(f(\dots f(x)\dots))=x$ for some number of iterations. How do I prove that $f(x)$ is not cyclic? What I tried was to calculate the first composition: $f(f(x))=x+\frac{1}{x}+\frac{1}{x+\frac{1}{x}}=\frac{x^4+3x^2+1}{x^3+x}$. Intuitively, I feel that this is clearly not going to simplify down to $x$, but how can I prove this rigorously? AI: Hint: for positive $x$, $$ f(x) > x. $$ Since $f$ also maps positive numbers to positive numbers, induction gives $f^{(n)}(x) > f^{(n-1)}(x) > \dots > f(x) > x$ for every $n$, so no iterate can ever return to $x$.
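A few numeric iterations illustrate the hint: starting from a positive $x$, the iterates strictly increase, so they can never come back (illustration only, with an arbitrary starting value):

```python
# iterate f(x) = x + 1/x from a positive start; values strictly increase
x = 1.0
for n in range(1, 6):
    x = x + 1 / x
    print(n, x)   # 2.0, 2.5, 2.9, 3.244..., 3.553...
```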
H: Would the man in the white room have a 100% chance of crossing the line? I have had this thought in my head for a couple of months now and I really want to see the answer to it. So here it is: a man is sitting in a white room, where all he has to do is cross over a purple line to travel back to Earth, and he has infinite time to do so. Does this mean he has a 100% chance of crossing the line, because he has as much time as he needs, so that sooner or later he will cross it? (Also not accounting for eating, bathroom, or sleeping.) AI: This can be answered somewhat precisely using the theory of random walks. We can model the man's motion as a simple random walk on the 2D lattice $\mathbb{Z}^2$, starting at the origin $(0,0)$. It turns out that in one or two dimensions such a walk is recurrent: the probability of eventually passing through any fixed point $(a,b)$ is $1$. Interestingly enough, this doesn't hold in three dimensions, where the walk is transient: the probability of ever returning to the starting point is only about $34\%$ (Pólya's theorem), and it decreases further the more dimensions you add. So your hypothesis is correct: given enough time, the man would eventually cross the line, with probability $1$. Of course, this assumes the man in question is constantly moving at random, which is a questionable assumption in itself...
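A rough Monte Carlo sketch of the two-dimensional claim; the line position and the step budgets are arbitrary choices of mine:

```python
import random

def crosses(steps, target=5):
    """Does a simple 2-D lattice walk reach the line x = target within `steps` moves?"""
    x = y = 0
    for _ in range(steps):
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        if x >= target:
            return True
    return False

random.seed(0)
for steps in (100, 1000, 10000):
    hits = sum(crosses(steps) for _ in range(500))
    print(steps, hits / 500)   # the hitting fraction climbs toward 1 as time grows
```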
H: Does there exist an infinite set S that is closed under infinite unions but not finite unions? Does there exist an infinite set $S$ of sets such that for every infinite subset $I$ of $S$ we have $\bigcup I \in S$, but $S$ is not closed under finite unions? AI: Let $$S=\{\,A\subseteq \Bbb N\mid 1\in A\,\}\cup \{\{2\},\{3\}\}.$$ Then $\{2\}\cup\{3\}=\{2,3\}\notin S$, so $S$ is not closed under finite unions. On the other hand, every infinite subset $I$ of $S$ must contain a set other than $\{2\}$ and $\{3\}$, hence a set containing $1$; therefore $\bigcup I$ contains $1$ and is $\in S$.
H: Find all possible positive integers $x$ and $y$ such that the equation $(x+y)(x-y)=\frac{(y+1)(y-1)}{24}$ is satisfied. My approach so far: the given equation can be rewritten as $x^2 -y^2=\frac{y^2 -1}{24}$, which gives $24x^2 +1=25y^2=(5y)^2$. So $24x^2+1$ must be a perfect square. This suggests $x=0$ and $x=1$ as possible values; since $x>0$, $x=1$ is a solution, and it gives $y=1$. But how do I check whether any other solutions exist? Please suggest. Thanks in advance. AI: The equation is equivalent to $$ 24x^2 - 25y^2 =- 1. $$ An equation of the form $ax^2-by^2=c$ can be solved by continued fractions, see here, or by the LMM method. The solutions are given by the family $$ x = u + 25 v, \quad y = u + 24 v $$ with $u^2-600v^2=1$; indeed, $24(u+25v)^2-25(u+24v)^2=-(u^2-600v^2)=-1$. This is Pell's equation. Its trivial solution $(u,v)=(1,0)$ recovers your $(x,y)=(1,1)$, and its fundamental solution $(u,v)=(49,2)$ gives $(x,y)=(99,97)$. So there are infinitely many solutions.
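A brute-force search (with an arbitrary bound) confirms the small solutions and shows how fast the Pell family grows:

```python
from math import isqrt

# enumerate positive solutions of 24 x^2 + 1 = 25 y^2 with x < 10^5
for x in range(1, 100000):
    n = 24 * x * x + 1
    if n % 25 == 0:
        y = isqrt(n // 25)
        if 25 * y * y == n:
            print(x, y)
# prints (1, 1), (99, 97), (9701, 9505): consecutive solutions grow geometrically
```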