H: Is $\int_{0}^{\infty} e^{-(a^2x^2+\frac{b^2}{x^2})}dx=\int_{0}^{\infty}e^{-(b^{2}{X}^2+\frac{a^2}{X^2})}dX$ for arbitrary $a,b$ and fixed range? The problem says, If $\int_{0}^{\infty} \mathbb{e^{-(a^2x^2+\frac{b^2}{x^2})dx}=\frac{\sqrt{\pi}}{2a}.e^{-2ab}} \longrightarrow(i)$, then prove that $\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(a^2x^2+\frac{b^2}{x^2})}dx=\frac{\sqrt{\pi}}{2b}.e^{-2ab}}$ If we take $\mathrm{x=\frac{b}{a\mathscr{X}}}$, where $\mathrm{a}, \mathrm{b}$ are any constant, then we have $$\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(a^2x^2+\frac{b^2}{x^2})}dx} \longrightarrow (ii)\\=\mathbb{e^{-2ab}}\mathbb{\int_{0}^{\infty}\frac{1}{x^2}e^{-(ax+\frac{b}{x})^2}dx}\\=\mathbb{\frac{-a\ e^{-2ab}}{b}}\mathbb{\int_{\infty}^{0}e^{-(b\mathscr{X}+\frac{a}{\mathscr{X}})^2}d\mathscr{X}}\\=\mathbb{\frac{a}{b}}\mathbb{\int_{0}^{\infty}e^{-(b^{2}\mathscr{X}^2+\frac{a^2}{\mathscr{X}^2})}d\mathscr{X}}\longrightarrow(iii)\\$$ Since $\mathrm{a}, \mathrm{b}$ are any constant and condition $(i) \ \textrm{&} \ (iii)$ look same and the integration range remains same, therefore should we use the value of $(i)$ in $(iii)$ ? If we plug in the value of $(i)$ in $(iii)$, the desired result comes. But is this a correct aprroach to do that? Any help, explanation is valuable and highly appreciated. AI: You can "plug i into iii" via the "integration-by-substitution" $$ \mathscr{X} = x \\ d \mathscr{X} = dx $$ (i.e., using the chain rule in the simplest possible way); this shows that the integral you're calling "iii" is the same as the integral you're calling "i".
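A quick numerical sanity check of the given identity $(i)$ and of the identity to be proved (a sketch; it assumes numpy and scipy are available, and the values of $a$ and $b$ below are arbitrary positive test values, not taken from the post):

```python
# Numerically verify identity (i) and the target identity for sample a, b.
import numpy as np
from scipy.integrate import quad

a, b = 1.3, 0.7  # arbitrary positive test values

lhs_i, _ = quad(lambda x: np.exp(-(a**2 * x**2 + b**2 / x**2)), 0, np.inf)
rhs_i = np.sqrt(np.pi) / (2 * a) * np.exp(-2 * a * b)

lhs_target, _ = quad(lambda x: np.exp(-(a**2 * x**2 + b**2 / x**2)) / x**2, 0, np.inf)
rhs_target = np.sqrt(np.pi) / (2 * b) * np.exp(-2 * a * b)

print(lhs_i, rhs_i)            # the two numbers in this pair agree
print(lhs_target, rhs_target)  # and so do these
```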
H: Find the general solution of the D.E. $ x^2 y'' -2xy'+2y=x\ln(x) $ with $x>1$ $$ x^2 y'' -2xy'+2y=x\ln(x) $$ with $x>1 $. We can tell that it is an Euler equation. I started by setting $u=\ln x$, $$\boxed{u''-2u+2=\dotsb \ ?} $$ I'm having problems with the $u$ replacement. What are the next steps? AI: Hint: Plug $y=x^{\lambda}$ into the homogeneous equation to find the roots of the characteristic equation, which yields $$\lambda_1=1, \lambda_2=2$$ Then the homogeneous solution is the linear combination $$y_{h} = c_1 x+c_2 x^2$$ To find a particular solution, use variation of parameters: compute the Wronskian $W(x)$ of $x$ and $x^2$, and divide both sides of the equation by $x^2$ (its leading coefficient) to put it in standard form. The RHS of the equation then becomes $\frac{\ln(x)}{x}=f(x)$. The particular solution is of the form $$y_{p} = v_1(x)\cdot x+v_2(x)\cdot x^2$$ where $$v_1(x)=-\int\frac{f(x)\cdot x^2}{W(x)}dx, \quad v_2(x)=\int\frac{f(x)\cdot x}{W(x)}dx$$ I leave the calculations to you. The particular solution is $$y_{p} = -\frac{1}{2}x\left(\ln^2(x)+2\ln(x)+2\right)$$
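As a cross-check, here is a small symbolic verification that the resulting general solution satisfies the ODE (a sketch; it assumes sympy is available and is not part of the original hint):

```python
# Verify that y = c1*x + c2*x^2 - x*(ln(x)^2 + 2*ln(x) + 2)/2 solves x^2 y'' - 2x y' + 2y = x ln(x).
import sympy as sp

x = sp.symbols('x', positive=True)
c1, c2 = sp.symbols('c1 c2')
y = c1*x + c2*x**2 - sp.Rational(1, 2)*x*(sp.log(x)**2 + 2*sp.log(x) + 2)
residual = x**2*sp.diff(y, x, 2) - 2*x*sp.diff(y, x) + 2*y - x*sp.log(x)
print(sp.simplify(residual))  # prints 0
```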
H: I don't understand this exponent simplification I've been doing the Khan Academy math courses to brush up on my math foundations before starting my CS/math degree in the fall semester. I just don't fully understand negative exponents, I stumbled upon the following exponent simplification: Equation screenshot I understand how they got to a^-12/b^8 as it's simply exponent properties. I also understand that raising something to a negative exponent is the same as 1/a^12. But the part I don't understand is the last step. Shouldn't it be (1/a^12)/(b^8)? Why are the numerator and the denominator suddenly multiplied? AI: $$\frac{a^{-12}}{b^8} = a^{-12}\cdot \frac{1}{b^8} =\frac{1}{a^{12}}\cdot \frac{1}{b^8} =\frac{1}{a^{12}\cdot b^8}$$ If you divide by two things, one after another, then it is the same as dividing by their product. $$(1/x)/y = 1/(xy)$$ For example divide $30$ by $2$ and then by $3$ and you get $(30/2)/3=15/3=5$ which is the same as $30/6$. This is really no different to what happens with subtraction. If you subtract two things, one after the other, it is the same as if you subtract the sum of the two things: $$a-b-c = a-(b+c)$$
H: Computation of Definite Integral of Rational Function I am dealing with a definite integral of a rational function that seems quite hard to get a nice closed form/explicit expression for. Let $ -1 < z < 1 $, then my aim is to determine an expression for the integral $ I $ in terms of $ z $: $$ I(z) = \int_{0}^{\infty} \frac{(1+z)t^4 + (1-z)}{(1+z)^2 t^6 + 3(1+z)(5+z)t^4 + 3(1-z)(5-z)t^2 + (1-z)^2} \ dt $$ Any help would be appreciated. Current Attempts Thank you @Claude Leibovici for your answer. Following the process of the comment, one can arrive at the following. If we let $ (1+z)^2Q(t) := (1+z)^2 t^6 + 3(1+z)(5+z)t^4 + 3(1-z)(5-z)t^2 + (1-z)^2 $ for $ t \in (0,\infty) $, then $ Q $ has 6 roots $ \pm \omega_i \in \mathbb{C} $ for $ i = 1,2,3 $, all of which are dependent on $ z \in (-1,1) $. In particular, it can be shown that $$ \omega_k(z)^2 = \frac{4\sqrt{4z + 5}\cos\left(\frac{1}{3}\left(\arccos\left(-\frac{2z^2 + 14z + 11}{(4z + 5)^{3/2}}\right) - 2\pi(k-1)\right)\right) - (5+z)}{(1+z)} < 0, $$ for $ z \in (-1,1) $. Then using a partial fractions approach yields $$ I(z) = \int_{0}^{\infty} \frac{(1+z)t^4 + (1-z)}{(1+z)^2Q(t)} \ dt \\ = \frac{i\pi}{2(1+z)^2}\frac{\omega_1\omega_2\omega_3(\omega_1\omega_2 + \omega_1\omega_3 + \omega_2\omega_3)(1 + z) + (\omega_1 + \omega_2 + \omega_3)(1 - z)}{\omega_1\omega_2\omega_3(\omega_1 + \omega_2)(\omega_1+\omega_3)(\omega_2 + \omega_3)}, $$ where it is known, by Vieta's formula that $$ \omega_1^2 + \omega_2^2 + \omega_3^2 = -\frac{3(5+z)}{(1+z)} $$ $$ \omega_1^2\omega_2^2 + \omega_1^2\omega_3^2 + \omega_2^2\omega_3^2 = \frac{3(1-z)(5-z)}{(1+z)^2} $$ $$ \omega_1^2\omega_2^2\omega_3^2 = -\frac{(1-z)^2}{(1+z)^2} $$ A clearer question would be: is there a way to simplify the above evaluation of $ I(z) $ into a nicer expression? AI: Consider the integrand $$\frac{(1+z)t^4 + (1-z)}{(1+z)^2 t^6 + 3(1+z)(5+z)t^4 + 3(1-z)(5-z)t^2 + (1-z)^2} $$ and rewrite it as $$\frac 1 {z+1} \frac{t^4+a}{(t^2-b)(t^2-c)(t^2-d)}$$ where $a=\frac {1-z}{1+z}$ and $(b,c,d)$ are the roots of the cubic equation in $t^2$. Now, using partial fraction decomposition $$\frac{t^4+a}{(t^2-b)(t^2-c)(t^2-d)}=$$ $$-\frac{a+b^2}{(b-c) (b-d) \left(t^2-b\right)}+\frac{a+c^2}{(b-c) (c-d) \left(t^2-c\right)}-\frac{a+d^2}{(b-d) (c-d) \left(t^2-d\right)}$$ and the antiderivative will not make any problem. A numerical analysis for $-1 < z < 1$ shows that $(b,c,d)$ are all negative. So, we face three integrals looking like $$I_k=\int \frac {dt}{t^2+k}=\frac{\tan ^{-1}\left(\frac{t}{\sqrt{k}}\right)}{\sqrt{k}}\implies J_k=\int_0^\infty \frac {dt}{t^2+k}=\frac{\pi }{2 \sqrt{k}}$$ What is left is just the computation of $(b,c,d)$. Since they are all real, I suggest you use the trigonometric method for cubic equations.
H: Paley-Zygmund inequality. I was trying to understand the proof of the Paley-Zygmund inequality, but encountered the following step, which is not clear for me, i.e. $$\mathbb{E}(X\cdot \ \mathbb{1}_{X < \lambda\mathbb{E}(X)}) \leq \lambda\cdot \mathbb{E}(X)$$ Sorry for this "brilliant" question, but I would really appreciate if someone can elaborate on this. AI: $X1_{X <\lambda \mathbb E X} \leq \lambda (\mathbb EX) 1_{X <\lambda \mathbb EX}$. Now take expectation on both sides and use the fact that $P(X <\lambda \mathbb E X) \leq 1$ Note that the following is also correct: $X1_{X \leq \lambda \mathbb E X} \leq \lambda (\mathbb EX) 1_{X \leq \lambda \mathbb EX}$; take expectation on both sides and use the fact that $P(X \leq \lambda \mathbb E X) \leq 1$ to get $\mathbb E X1_{X \leq \lambda \mathbb E X} \leq \lambda (\mathbb EX)$.
H: n total antennas of which m are defective, confused by reasoning? My problem follows from Sheldon Ross' A First Course in probability. And the question can be viewed in more detail here: Confused by combinatorical reasoning (n functional antennas, m defective problem) Basically there are a total of n antennas, of which m are defective. In how many ways can these n antennas be arranged so that no two defective antennas are side-by-side with each other? I understand the reasoning presented in the answer to this problem in the link above. But consider the case where n = 9 and m = 7. In this case, can we say that there is no solution, because the given ${n-m+1}\choose{m}$ doesn't work? In general, when can we say that there is no way to arrange the antennas such that no two defective cannot be placed side by side with each other? AI: The binomial coefficient will only make sense when $m\le n-m+1$, or $$m\le \frac{n+1}{2}$$
H: $\det \varphi$ is not a zero divisor $\implies$ Endomorphism $\varphi$ of a free $R$-Module injective Let $R$ be a commutative Ring with $1$, $M$ a free $R$-Module of rank $2$ and $\varphi \in \operatorname{End}_R (M)$. Show that: If $\det \varphi$ is not a zero divisor in $R$, then $\varphi$ is injective. Ok, so assume $\varphi$ is not injective. Then there exists a $x \in \ker \varphi$, $x\neq 0$ such that $\varphi(x) = 0$. Since $M$ is free, then there exist $r_1,r_2 \in R$ with $x = r_1m_1 + r_2m_2$ where not both $r_1, r_2$ are zero. Since $(m_1,m_2)$ is a basis of $M$, I know that $(m_1 \wedge m_2)$ is a basis of $\bigwedge^2 M$. From here, I don't really know how to proceed, advice is greatly appreciated. AI: Hint : If $\varphi(x) = 0$ then $0=\varphi(x)\wedge \varphi(m_2) = r_1 \varphi(m_1)\wedge \varphi(m_2) = (\det(\varphi) r_1) m_1\wedge m_2$. Similarly, $0= (\det(\varphi)r_2) m_1\wedge m_2$.
H: Reducing this summation in a recurrence relation I'm reviewing some of the solutions for the CLRS textbook by Cormen... for practice, and I'm stuck on a recurrence embedded inside a summation. \begin{aligned} T(n) &=1+\sum_{j=0}^{n-1} 2^{j} \\ &=1+\left(2^{n}-1\right) \\ &=2^{n} \end{aligned} I'm not sure how they came out of the summation to obtain $ 1 + 2^n - 1$. Thanks AI: Ok, I see now. They used the geometric series $$\sum_{k=0}^{n-1} x^{k}=\frac{x^{n}-1}{x-1},$$ so with $x=2$ the summation is $\frac{2^n-1}{2-1}=2^n-1$, and hence $T(n)=1+\left(2^{n}-1\right)=2^{n}$, the desired output.
H: Upper bound on the $\|\cdot\|_2$ norm of a tridiagonal matrix Let $T\in M_{n}(\mathbb{R})$ be a tridiagonal matrix. What can we say about operator norm $\|T\|_2$? I'm asking this question because we know that if $T$ were only diagonal, then $\|T\|_2$ is the largest absolute value of any diagonal entry of $T$, as shown here, and so there might also a similar result for tridiagonal or $k$-diagonal matrices in general. AI: Edit. As user8675309 rightly points out, using Schur's test $\|A\|_2\le\sqrt{\|A\|_1\|A\|_\infty}$, we get $\|A\|_2\le3\max_{i,j}|a_{ij}|$. This bound is at least asymptotically tight: let $A_n$ be the $n\times n$ symmetric tridiagonal Toeplitz matrix with all entries on the three diagonals equal to $1$. Then $\lim_{n\to\infty}\|A_n\|_2=3$. For a $(2k+1)$-diagonal matrix, Schur's test gives $\|A\|_2\le\min(n,\,2k+1)\max_{i,j}|a_{ij}|$. (Old answer) Here is an obvious upper bound: since $A$ is tridiagonal, when we calculate $(AA^T)_{ij}$ as the sum $\sum_ka_{ik}a_{jk}$, at most three summands are involved. Also, as $AA^T$ is pentadiagonal, if we apply Gerschgorin disc theorem and the triangle inequality, we get $\rho(A^TA)\le(5)(3)\max_{i,j}|a_{ij}|^2$. Consequently, $$ \|A\|_2=\sqrt{\rho(AA^T)}\le\sqrt{15}\max_{i,j}|a_{ij}|. $$
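A small numerical illustration of the tightness claim for the matrix $A_n$ described above (a sketch; it assumes numpy is available, and the sizes are arbitrary):

```python
# Spectral norm of the n x n tridiagonal Toeplitz matrix with all three diagonals equal to 1.
import numpy as np

for n in (5, 20, 100, 500):
    A = np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    # tends to 3, which is the Schur bound sqrt(||A||_1 ||A||_inf) = 3 * max|a_ij|
    print(n, np.linalg.norm(A, 2))
```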
H: How to efficiently perform matrix inversion more than once $\left(A^TA + \mu I \right)^{-1}$ if $\mu$ is changing but $A$ is fixed? How to efficiently perform matrix inversion more than once $\left(A^TA + \mu I \right)^{-1}$ if $\mu \in \mathbb{R}$ is changing but $A \in \mathbb{R}^{n \times m}$ is fixed? Or do I need to invert the whole matrix in every update of $\mu$? AI: If the change in $\mu$ is small, you might use the series $$ (B+\lambda I)^{-1} = (B+\mu I)^{-1} - (\lambda - \mu) (B + \mu I)^{-2} + (\lambda - \mu)^2 (B + \mu I)^{-3} - \ldots $$
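A rough numerical sketch of using the truncated series above to update the inverse for a nearby $\mu$ (it assumes numpy is available; the matrix sizes, the values of $\mu$ and $\lambda$, and the truncation order are arbitrary illustrative choices):

```python
# Reuse an inverse computed at mu to approximate the inverse at a nearby lam via the series.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
B = A.T @ A                              # fixed

mu, lam = 1.0, 1.1                       # lam close to mu, so the series converges quickly
M = np.linalg.inv(B + mu * np.eye(30))   # computed once

# truncated series: (B + lam I)^{-1} ~ M - (lam - mu) M^2 + (lam - mu)^2 M^3 - ...
d = lam - mu
approx = np.zeros_like(M)
power = M.copy()                         # holds M^(k+1)
for k in range(8):
    approx += (-d) ** k * power
    power = power @ M

exact = np.linalg.inv(B + lam * np.eye(30))
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))  # tiny when |lam-mu|*||M|| < 1
```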
H: Expected number of unique transient states visited in an absorbing markov chain. If I have an absorbing Markov chain represented as a transition matrix P - same notation as Wikipedia article. $ P = \begin{pmatrix} Q & R \\ 0 & I_r \end{pmatrix} $ How would I compute the expected number of unique transient states visited before arriving at the absorbing state? I am assuming it would involve some sum of products over row probabilities in $Q$ but I am not sure. -- For my specific case, I only have one terminating state. AI: It will of course depend on the initial state. The probability of visiting state $j$ is the probability of ending up at $j$ in a modified Markov chain where you make $j$ absorbing. Now sum this over all transient states $j$.
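A concrete sketch of that recipe in code (it assumes numpy is available; the toy transition matrix, its state ordering, and the start state below are made-up illustrative data, not from the question):

```python
# Expected number of distinct transient states visited before absorption:
# sum over transient j of P(ever visit j), where P(ever visit j) is computed by
# making j absorbing and solving the standard absorption-probability equations.
import numpy as np

# toy chain: transient states 0, 1, 2 and a single absorbing state 3
P = np.array([[0.2, 0.5, 0.2, 0.1],
              [0.3, 0.1, 0.4, 0.2],
              [0.1, 0.2, 0.3, 0.4],
              [0.0, 0.0, 0.0, 1.0]])
transient = [0, 1, 2]
start = 0

def prob_ever_visit(P, j, start, transient):
    if j == start:
        return 1.0
    # make j absorbing: h[i] = P(hit j before the original absorbing state | start at i)
    others = [i for i in transient if i != j]
    Q = P[np.ix_(others, others)]
    r = P[np.ix_(others, [j])].ravel()
    h = np.linalg.solve(np.eye(len(others)) - Q, r)
    return h[others.index(start)]

expected_unique = sum(prob_ever_visit(P, j, start, transient) for j in transient)
print(expected_unique)
```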
H: $(L^\infty ,\left \| . \right \|_\infty )$ is complete I only have a question about the proof of this theorem. In the proof , we took any cauchy sequence $(f_n)$ in $L^\infty$ and we want to show that this sequence converges. We also considered the set $A_k$ of measure zero and its countable union A to arrive to $\left | f(x)-f_n(x) \right |\leq \frac{1}{k}$ for all $x\in X-A,n\geq N_k$ Till here everything is clear, but then we set $f=0$ on $A$ . Why ? Is it because we want $f$ to be measurable ? And couldn't we choose something other than $0$ ? AI: I would say it's just a convention. You want a function $f$, you have found it's value outside of $A$, the easiest $f$ you can think about on the whole space is found setting it to be $0$ in $A$. Note that it is not really important since it will just be a representative of the class of functions in $L^\infty$. So yes, there's nothing in the value $0$ itself in this case, you may very well choose $1/17$ or $f\equiv g$ on $A$ for some function $g$.
H: How much would you be willing to pay for this card game? Suppose there are 12 cards with the numbers 1 - 12, one on each card. All cards are face down and laid out one by one in front of you. You get to pick one card; if the card has 1 on it, you get $1, if it's 2, you get $2, and so on and so forth. Now you have the chance to put the first card back, re-shuffle, and re-pick another card. If the second card is higher than the first card, you will get paid an amount equal to the number on the second card; if the second card is lower, you will get paid whatever amount is printed on the first card. How much would you be willing to pay for this game? AI: Since re-drawing can never lower your payout, you always re-draw, and your payout is the maximum of two independent draws. For each $k\in \{1,\cdots, 12\}$ let $\psi(k)$ denote the probability that both of your choices are $≤k$. Since the draws are independent we have $$\psi(k)=\left(\frac k{12}\right)^2$$ Now let $P(k)$ denote the probability that the maximum of the two draws is exactly $k$. Clearly we have $$P(1)=\psi(1)\quad \& \quad P(k)=\psi(k)-\psi(k-1)\quad \text {for}\,k>1$$ It is now easy to compute the expected value of the max, $$E=\sum kP(k)=8.486$$
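A quick check of that computation (a sketch in plain Python; the Monte Carlo sample size is arbitrary):

```python
# Expected value of the larger of two independent uniform draws from {1, ..., 12}.
expected = sum(k * ((k / 12)**2 - ((k - 1) / 12)**2) for k in range(1, 13))
print(expected)  # 8.4861...

# Monte Carlo cross-check
import random
n = 10**6
print(sum(max(random.randint(1, 12), random.randint(1, 12)) for _ in range(n)) / n)
```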
H: Describe $\mathbb{Z} \times \mathbb{Z} /3\mathbb{Z}\times\mathbb{Z} $ I am trying to describe the quotient group $\mathbb{Z}\times\mathbb{Z}/3\mathbb{Z}\times\mathbb{Z}$. Let's denote by $A$ and $B$ respectively $\mathbb{Z} \times \mathbb{Z} $ and $3\mathbb{Z}\times\mathbb{Z}$, with $A / B:= \{a+B : a \in A\}$. My problem is (it might be a stupid one) that there are infinitely many $a \in A$ of the form $(a_1,a_2)$, so I am confused about how to build that quotient group. AI: @lhf has given the right solution, but your comment suggests you don't understand it. Here's a more elementary way. We want to give a list of the distinct cosets $a+B$ - I use your notation. When does $a+B=x+B$? Answer: if and only if $a-x\in B$. Taking $a=(m,n)$ and $x=(u,v)$, these cosets are the same if and only if $(m-u,n-v)\in B=3\mathbb{Z}\times \mathbb{Z}$. That is, $m-u$ is a multiple of $3$ and $n-v$ is any integer. So for a set of coset representatives we can take the following set of 3 cosets: $\{(0,0)+B, (1,0)+B, (2,0)+B\}$.
H: Let $X$ and $Y$ be points contained in the disk of radius $r$ around the point P. Explain why $d(X, Y) \leq 2r$. Working on the book: Lang, Serge & Murrow, Gene. "Geometry - Second Edition" (p. 23) Let $X$ and $Y$ be points contained in the disk of radius $r$ around the point P. Explain why $d(X, Y) \leq 2r$. Use the Triangle Inequality. The solution given by the author is: Since $X$ and $Y$ are contained in the disc, $d(P,X) \leq r$ and $d(P,Y) \leq r$. $d(X,Y) \leq d(P,X) + d(P,Y)$ by the Triangle inequality. Therefore, $d(X,Y) < r + r = 2r$. I know this property of Real numbers: If $a < b$ and $c$ is any real number, then $$a + c < b + c$$ $$a - c < b - c$$ My question is: what's the justification for the conclusion $d(X,Y) < r + r = 2r$ ? How is that property instantiated in order to reach it (in case this is the appropriate property)? AI: Since $d(P,X)\leqslant r$ and $d(P,Y)\leqslant r$, and since you know that $d(X,Y)\leqslant d(P,X)+d(P,Y)$, you have\begin{align}d(X,Y)&\leqslant d(P,X)+d(P,Y)\\&\leqslant r+d(P,Y)\\&\leqslant r+r\\&=2r.\end{align}The property you quote is being used in its non-strict form, $a\leqslant b\implies a+c\leqslant b+c$, and it is instantiated twice: adding $d(P,Y)$ to both sides of $d(P,X)\leqslant r$ gives the second line, and adding $r$ to both sides of $d(P,Y)\leqslant r$ gives the third.
H: Condition is true if only 1 out of 3 variables is true or if the 3 are false: better logic? I have this condition: (A is true OR B is true OR C is true) OR (A is false AND B is false AND C is false) (edit: It's been pointed out that this formula is wrong for what I want) So as the title says, I want the condition to be true if only 1 of A, B or C is true, or if they're all false. Is there a better way to write this condition? edit 2: The context is a SQL Server validation check. Thanks. AI: Within an SQL query, something similar to A+B+C<=1 should work with implicit conversion from bool to int, although the SQL dialect isn't mentioned in the OP, so I can't assume this. IF(A,1,0)+IF(B,1,0)+IF(C,1,0)<=1 (or the equivalent CASE expression) should work either way, because most SQL dialects have IF or CASE even if they do not support implicit bool$\to$int conversion.
H: Weak-* closed subspaces and Preduals, a la von Neumann Algebras Let $X$ be a Banach Space. Suppose that $M \leq X^*$ is a weak-* closed subspace. Is it true that $M$ has a predual? According to my understanding, taking the pre-annihilator $M^\perp = \{x \in X \vert \forall a \in M, \, a(x)=0\}$ we should get that $M_* = X/M^\perp$ satisfies $(M_*)^* \simeq M$. I couldn't prove it. Further, what is really bothering me is if one looks at the restriction of the functionals $X \subset X^{**}$ under the canonical injection, to be functionals on $M$, is it true that this space can be identified as Banach spaces with $M_*$? I am asking this since this somehow should be the situation for von Neumann Algebras, where the above translates to Why can we identify the predual $M_*$ of a a von Neuman Algebra both as the space of ultraweakly continuous functionals on $M$, and as the quotient Banch space $L^1(\mathcal{B}(\mathcal{H}))/M^\perp$ of the trace class operators? AI: Note that $M^\perp$ is closed, as such $X/M^\perp$ is a normed space and also Banach. Further every element $a\in M$ induces a map $X/M^\perp\to\Bbb C$, $[x]\mapsto a(x)$. This map has the same norm as $a$ as can be checked, hence you may identify $M$ with a sub-space of $(X/M^\perp)^*$. What is left to check is that every element of $(X/M^\perp)^*$ comes from an element of $M$; here is where the weak* closedness of $M$ will enter. Specifically if $M$ is weak* closed and $V\subseteq X/M^\perp$ is finite dimensional and $q:V\to\Bbb C$ linear then there is an $a\in M$ with $a\lvert_V=q$. We'll do this proof for completeness. If $\dim(V)=1$ this is clear, as there must be an $a\in M$ with $a\lvert_V\neq0$, else $\pi^{-1}(V)\subseteq M^\perp$ which is a contradiction ($\pi:X\to X/M^\perp$ the projection). For $\dim(V)>1$ do an induction, assume that for each strict subspace of $V$ we can find an $a$ agreeing with $q$ on that subspace. So let $e_1,...,e_n$ be a basis of $V$, there must be some $b\in M$ with $b(e_1)=...=b(e_{n-1})=0$ and $b(e_n)\neq0$, as otherwise whenever two elements of $M$ agree on $\mathrm{span}(e_1,...,e_{n-1})$ they agree on $\mathrm{span}(e_1,...,e_n)$ and there must be a linear formula so that $a(e_n)=\sum_i x_i\,a(e_i)$, hence $a(e_n-\sum_ix_i e_i)=0$ for all $a\in M$ and $e_n-\sum_i x_ie_i$ is $0$ in $X/M^\perp$, contradicting that $e_1,...,e_n$ is a basis. So if $a\in M$ with $a\lvert_{\mathrm{span}(e_1,...,e_{n-1}})=q\lvert_{\mathrm{span}(e_1,...,e_{n-1}})$, then $[a + (q(e_n)-a(e_n))b]\lvert_V =q$ completing the induction. Now if $q\in (X/M^\perp)^*$ let $\mathcal V$ denote the directed set of finite dimensional sub-spaces of $X/M^\perp$ and for each $V\in\mathcal V$ let $a_V\in M$ be such that $a_V\lvert_V=q\lvert_V$. Then $a_V$ converges pointwise to $q$ on $X/M^\perp$, by weak* closure you get that $q\in M$. (Small remark: In the end I am a bit sloppy with identifications. For $q\in X/M^\perp$ the above procedure will give a net $a_V\in M$ so that $a_V \to q\circ \pi$ as elements of $X^*$, giving a pre-image of $q$ in $M$ under the identification of $M$ with a sub-space of $X/M^\perp$.)
H: limit of the sequence $x_{n}:= \sqrt[n]{n \sqrt[n]{n \sqrt[n]{n\ldots}}}$ I was thinking what happens with the sequence $\{x_n\}_{n\in \Bbb N}$ where: $$x_{n}:= \sqrt[n]{n \sqrt[n]{n \sqrt[n]{n\ldots}}}$$ When you look some terms, for example $x_{1}=1$, $x_{2}=\sqrt[]{2 \sqrt[]{2 \sqrt[]{2 ...}}}$, $x_{3}=\sqrt[3]{3 \sqrt[3]{3 \sqrt[3]{3 ...}}}$, these terms and the others will be continued fractions, where each one converges. I'm asking what happens with $\lim\limits_{n \to \infty}\sqrt[n]{n \sqrt[n]{n \sqrt[n]{n ...}}}$ ?. I have an idea and it is $\lim\limits_{n \to \infty}\sqrt[n]{n \sqrt[n]{n \sqrt[n]{n ...}}}=1$. My reasoning is in the fact: $$\sqrt[n]{n \sqrt[n]{n \sqrt[n]{n ...}}}= \displaystyle {n^{\frac{1}{n}}} n^{\frac{1}{n^2}} n^{\frac{1}{n^3}}...$$ And you know that: $${\displaystyle \frac{1}{n}> \frac{1}{n^k} \textrm{ for } n,k \in \Bbb N}$$ Then: $$n^{\frac{1}{n}}> n^{\frac{1}{n^k}} \geq 1$$ Like $\lim\limits_{n \to \infty}n^{\frac{1}{n}}=1$ and $\lim\limits_{n \to \infty}1=1$, by the Squeeze Theorem $\lim\limits_{n \to \infty}\sqrt[n]{n \sqrt[n]{n \sqrt[n]{n ...}}}=1$. Is this reasoning correct? What do you think about $x_{n}$? Do you think there is another way to prove it? I receive suggestions or comments. Thank you. AI: Yes, the limit is $1$: $$x_n=n^{\sum_{k=1}^{\infty}\frac{1}{n^k}}= n^{\frac{1}{n-1}}=e^{\frac{\ln(n)}{n-1}}\to 1.$$
H: Evaluating the $ \lim_{n \to \infty} \prod_{1\leq k \leq n} (1+\frac{k}{n})^{1/k}$ I am really struggling to work out the limit of the following product: $$ \lim_{n \to \infty} \prod_{1\leq k \leq n} \left (1+\frac{k}{n} \right)^{1/k}.$$ So far, I have spent most of my time looking at the log of the above expression. If we set the desired limit equal to $L$, I end up with: $$\log L = \lim_{n\to \infty}\log\left(\frac{n+1}{n} \right)+\frac{1}{2}\log\left(\frac{n+2}{n} \right) +\cdots +\frac{1}{n}\log\left(\frac{n+n}{n} \right),$$ which I can simplify to: $$ \log L = \lim_{n\to \infty} \log(n+1)+\frac{1}{2}\log(n+2)+\cdots \frac{1}{n}\log(2n)-\log(n)\left(1+\frac{1}{2}+\cdots\frac{1}{n}\right). $$ I tried to consider the above expression in a different form with an integral, but was unable to arrive at anything useful. I have been stuck on this for quite a while now, and would appreciate any insight. Thanks AI: Hint, based on Surb: $$ \log L = \lim_{n\to\infty} \sum_{k=1}^n\frac{1}{k}\log\left(1+\frac{k}{n}\right) =\lim_{n\to\infty}\frac{1}{n}\sum_{k=1}^n\frac{n}{k}\log\left(1+\frac{k}{n}\right) =\int_0^1\frac{\log(1+x)}{x}\;dx $$ by a Riemann sum argument.
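A quick numerical check of the Riemann-sum identity in the hint (a sketch; it assumes numpy and scipy are available, and the value of $n$ is arbitrary):

```python
# Compare the finite sum with the integral of log(1+x)/x over [0, 1].
import numpy as np
from scipy.integrate import quad

n = 10**6
k = np.arange(1, n + 1)
riemann = np.sum(np.log(1 + k / n) / k)
integral, _ = quad(lambda x: np.log(1 + x) / x, 0, 1)
print(riemann, integral)  # both ~0.82246 = pi^2/12, so L = exp(pi^2/12)
```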
H: Derivative of the delta function at some point The derivative of the delta function can be treated similar to the actual delta function. Suppose I have an expression like $$\frac{\mathrm{d}}{\mathrm{d}x}\delta\left(x-x_0\right)$$ what does this mean for the integral $$\int\mathrm{d}x\ f\left(x\right)\frac{\mathrm{d}}{\mathrm{d}x}\delta\left(x-x_0\right)$$ Furthermore, does it matter whether or not i derive with respect to the variable before or after the minus sign, i.e. does $$\int\mathrm{d}x\ f\left(x\right)\frac{\mathrm{d}}{\mathrm{d}x}\delta\left(x_0-x\right)$$ give a different result? Finally, does it matter whether or not I derive the delta function with the argument $x_0-x$ or the delta function alone and then plug in the argument: $$\int\mathrm{d}x\ f\left(x\right)\delta'\left(x_0-x\right)$$ where $\delta'\left(x\right)$ is first "differentiated" with respect to $x$ and then evaluated at $x_0-x$ and/or $x-x_0$. AI: The derivative is usually defined via integration by parts, because we can't really apply the usual definition. The basic idea is the following: If we take 2 compactly supported, infinitely differentiable function $u$ and $v$, we get that $$\int u'v=-\int uv'$$ So if $f$ is compactly supported and infinitely differentiable, we "can" do the same (in the sense that it's the defining property of the derivative): $$\int \mathrm{d}x f(x)\delta'(x-x_0)=-\int \mathrm{d}x f'(x)\delta(x-x_0)=-f'(x_0)$$ This concept is called distributional derivative: https://en.m.wikipedia.org/wiki/Distribution_(mathematics)#Derivatives_of_distributions Note: the weak derivative of an $L^1$ function is usually defined the same way.
H: If the $p$-core of a finite group $G$ is trivial, what can be deduced from that about $G$ I am looking for (textbook) references for the following situation: If $G$ is a finite group and its $p$-core $\mathcal{O}_p(G)$ is trivial, what can be deduced from this about $G$? (Do groups with this property have a special name?) Thanks. AI: Very little, I'm afraid. For example, $G$ could be a $q$-group for some prime $q\neq p$. Or it could be a simple group, or a symmetric group. Notice that the group $G/O_p(G)$ always has your property. Do you have it in any specific context? For example, a group with no $p$-core as the centralizer of a $q$-element in a simple group?
H: Given matrix $A^2$, how to find matrix $A$? Let $$A^2 = \begin{pmatrix} 3 & 1 \\ 2 & 2 \end{pmatrix}$$ Knowing that $A$ has positive eigenvalues, what is $A$? What I did was the following: $$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$ so $$A^2 = \begin{pmatrix} a^2 + bc & ab+bd \\ ac+cd & bc+d^2 \end{pmatrix}$$ I got stuck here after trying to solve the 4 equations. Can someone help, please? AI: Computing matrix powers can be done with diagonalization. The eigenvalues of $A^2$ have sum $5$ (trace) and product $4$ (determinant), so they are $1$ and $4$. The corresponding eigenvectors of $A^2$ are $\pmatrix{1\\-2}$ and $\pmatrix{1\\1}$, respectively. Therefore, $A^2$ is diagonalized as follows: $\pmatrix{1&1\\-2&1}\pmatrix{1&0\\0&4}\pmatrix{1&1\\-2&1}^{-1}=\pmatrix{1&1\\-2&1}\pmatrix{1&0\\0&4}\dfrac{\pmatrix{1&-1\\2&1}}3.$ Therefore, we can take $A=\pmatrix{1&1\\-2&1}\pmatrix{1&0\\0&2}\dfrac{\pmatrix{1&-1\\2&1}}3=\dfrac{\pmatrix{5&1\\2&4}}3$.
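A quick numpy check that the proposed $A$ squares to the given matrix and has positive eigenvalues (a sketch; assumes numpy is available):

```python
import numpy as np

A = np.array([[5.0, 1.0], [2.0, 4.0]]) / 3
print(A @ A)                  # approximately [[3, 1], [2, 2]]
print(np.linalg.eigvals(A))   # 1.0 and 2.0, both positive as required
```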
H: What does "measurable" mean intuitively? So if we have an outer measure $\mu$ on a set $\Omega$, we defined: A subset A $\subseteq$ $\Omega$ is called $\mu$-measurable, if for all B $\subseteq$ $\Omega$: $\mu$(B) = $\mu$(B $\cap$ A) + $\mu$(B \ A). And i understand the definition, but i always thought we can only measure measurable sets, i.e. $\mu$ is only defined for measurable sets, but it's defined for all subsets of $\Omega$. So why do we define it this way or what's the intuition behind it? If we measure a non-measurable set, does that mean the value will be "wrong" in a way? Or do I take the word "measurable" too literally? AI: What you're taking about is not a measure, but an outer measure. They are used to construct actual measures, and are defined on the entire power set of the underlying set. The first step in constructing a measure on $\Omega$ is asking what the measure should actually tell us about the sets it measures, and then constructing such a measure for every subset of $\Omega$. For instance, we could technically define the volume of an arbitrary subset of $\mathbb R^n$ as $\operatorname{vol}(A)=\inf\{\sum_{B\in\mathcal B} \operatorname{vol}B~|~\mathcal B\textrm{ is a collection of boxes covering }A\}$, where the volume $\operatorname{vol}B$ of a box is the product of its side lengths. This works for any subset of $\mathbb R^n$, since the infimum is well-defined. But now we have to make sure that this outer measure (in this case it's the outer Lebesgue measure) is well-behaved, not just well-defined. In particular, we would like the volume of a disjoint union of sets to be the sum of their volumes. So we throw out any set which would break this feature. That is, those sets $A$ which can cut another set $B$ into two pieces $B\cap A$ and $B\backslash A$, such that the volume of their union is not the sum of their volumes. We call those "not measurable", and the rest "measurable". Now the measurable ones form a $\sigma$-algebra, which in retrospect justifies calling them measurable. If we now restrict the outer measure to the measurable sets, we get an actual measure which captures the idea we used to define the outer measure, and behaves nicely as well.
H: A cyclic vector with respect to a von Neumann algebra is also cyclic with respect to the commutant. Let $\mathcal{M}\subseteq B(\mathcal{H})$ be a von Neumann algebra with a given cyclic vector $\xi\in\mathcal{H}$, i.e. $[\mathcal{M}\xi]=\mathcal{H}$ where $[\mathcal{M}\xi]$ is the closed vector subspace generated by $\mathcal{M}\xi$. Let us consider the commutant $\mathcal{M}'$ of $\mathcal{M}$ defined by $\mathcal{M}'=\{y\in B(\mathcal{H}):xy=yx \text{ for all } x\in \mathcal{M}\}$. Question: Is it true that the vector $\xi$ becomes cyclic with respect to $\mathcal{M}'$ as well, i.e. $[\mathcal{M}'\xi]=\mathcal{H}$? AI: No, this is not true. For example, every non-zero vector is cyclic for $B(H)$, yet $B(H)^\prime=\mathbb{C} I$ does not have any cyclic vectors (unless $\operatorname{dim} H\leq 1$). But cyclic vectors can be characterized in terms of the commutant: A vector $\xi$ is cyclic for $\mathcal{M}$ if and only if it is separating for $\mathcal{M}^\prime$, that is, $x\xi=0$ implies $x=0$ for $x\in \mathcal{M}^\prime$. I suggest you try and prove this yourself. If necessary, you can find the proof in the first volume of Takesaki's book on operator algebras.
H: Evaluate $\lim_{n \to \infty} \frac{2^n}{3^n}$ As stated in the question, I'm trying to find the limit $$\lim_{n \to \infty} \frac{2^n}{3^n}$$ This is my attempt: $$ \lim_{n \to \infty} \frac{2^n}{3^n} = \lim_{n \to \infty} 2^n \cdot \lim_{n \to \infty} \frac{1}{3^n}$$ The first limit pulls to $\infty$ whereas the second limit pulls to $0$ and hence the limit will be $0$. Is the justfication right ? Is there any other way to solve it ? AI: Claim: for $n\geq 1$, $n/2<(3/2)^n$. The claim is immediate for $n=1$ and follows by induction: the LHS increases by $1/2$ as $n$ increases to $n+1$ and the RHS increases by at least $3/4$ (by much more, in fact, but this is sufficient). Then $0 <(2/3)^n<2/n$, and $2/n\to 0$. By the Squeeze Theorem, $(2/3)^n\to 0$.
H: Prove that: For all sets $A$ and $B$, $A\cap B = A \cup B\Leftrightarrow A = B$. I wasn't able to prove this statement. I'd much appreciate if you could lend a helping hand. AI: Prove the contrapositive. Assume $A\ne B$. Then without loss of generality there is $x\in A$ such that $x\not\in B$. (If not, switch $A$ and $B$.) Then $x\in A\cup B$ but $x\not\in A\cap B$. Addendum in response to comment where OP asked for direct proof: If $A=B$, then $A\cup B=A\cup A=A=A\cap A=A\cap B$. Conversely, if $A\cup B=A\cap B$, then $x\in A\cup B\iff x\in A\cap B$, so $x\in A $ or $x\in B\iff x\in A $ and $x\in B$, so $x\in A\implies x\in A $ or $x\in B \implies x\in A $ and $ x\in B\implies x\in B$, and likewise $x\in B\implies x\in A$, so $A=B$.
H: Is there an efficient way to calculate the following power series? I want to find coefficients of a power series $K_p$ given by the equation: $$\frac{1-z^2}{1+z^2-2z\cos(\theta)} = \sum_{p=0}^{\infty}K_pz^p$$ where $\theta$ is a constant. I have checked that $K_0=1, K_1=2\cos(\theta), K_2 = 2\cos(2\theta), K_3 = 2\cos(3\theta)$. And I know that the answer is $K_p = 2\cos(p\theta)$. But is there an efficient way to derive the answer for an arbitrary $p$? AI: Hint I guess that $z\in\mathbb R$. Then, take the real part of $$\sum_{p=0}^\infty (e^{i\theta }z)^p =\frac{1}{1-ze^{i\theta }}.$$
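A numerical sanity check of the claimed coefficients, obtained by long division of power series (a sketch assuming numpy; the value of $\theta$ and the truncation order $N$ are arbitrary):

```python
# Compute the first N Taylor coefficients of (1 - z^2)/(1 + z^2 - 2 z cos(theta))
# by dividing the power series, and compare with K_0 = 1, K_p = 2 cos(p*theta).
import numpy as np

theta, N = 0.7, 8
num = np.zeros(N)
num[0], num[2] = 1.0, -1.0                      # 1 - z^2
den = np.array([1.0, -2*np.cos(theta), 1.0])    # 1 - 2 cos(theta) z + z^2
K = np.zeros(N)
for p in range(N):
    K[p] = num[p]                               # den[0] == 1
    m = min(3, N - p)
    num[p:p+m] -= K[p] * den[:m]
expected = np.array([1.0] + [2*np.cos(p*theta) for p in range(1, N)])
print(np.allclose(K, expected))                 # True
```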
H: What does "programming" mean in mathematics? In the past few years, I have came across some topics in Math and CS that have the word "programming" in them. For example, there are linear programming, quadratic programming and dynamic programming. However, I find it hard to pin down what "programming" mean. A standard dictionary defines "programming" either as the act of instructing a computer to do certain things based on computer code or as the act of organizing and arranging things. But the way "programming" is used in the aforementioned topics seem to be more related to the act of optimization. AI: These optimization problems are called "programs" for historical reasons. The methods were developed in the 1940s, some time before there was a more standard usage of the term "programming". Here are several resources. The first two are very quick to read; the third goes into a more detailed history: an answer on mathoverflow.net that gives some more context; a similar question on math.stackexchange with an insightful answer; and a paper titled A Brief History of Linear and Mixed-Integer Programming Computation.
H: Optional Stopping Theorem Let $X=(X_n)_{n\in\mathbb{N}}$ be a stochastic process. The optional stopping theorem (OST) requires $X$ to be a martingale. The OST assures that under certain conditions on the stopping time $\tau$, it holds that $\mathbb{E}[X_\tau]=\mathbb{E}[X_0]$. But just by being a martingale it would follow that $\mathbb{E}[X_n]=\mathbb{E}[X_0],\ \forall n\in\mathbb{N}$. This is even used in the proof of the OST. What is so special in the OST? AI: Yes, the equality $\mathbb{E}[X_n]=\mathbb{E}[X_0]$ holds, but $X_T$ is a different random variable, this is not an element of the sequence $X_n$. It depends on the values of $T$ as well. So the expectation of $\mathbb{E}[X_T]$ might be different. For example, consider the simple random walk. Let $(Y_n)_{n=1}^\infty$ be a sequence of independent random variables such that $\mathbb{P}(Y_n=1)=\mathbb{P}(Y_n=-1)=\frac{1}{2}$. Then we define $S_0=0$ and $S_n=Y_1+...+Y_n$ for all $n\in\mathbb{N}$. Then $(S_n)$ is a martingale with respect to the filtration $\mathcal{F_0}=\{\emptyset, \Omega\}, \mathcal{F_n}=\sigma(Y_1,..,Y_n)$. Obviously $\mathbb{E}[S_n]=0$ for all $n$. But now we can define the stopping time $T=\inf\{n\in\mathbb{N}: S_n=1\}$. It is a not very trivial fact that $\mathbb{P}(T<\infty)=1$, this can be proved after some work. But then from the definition it follows that $\mathbb{E}[S_T]=1$, because $S_T=1$ at every point where $T$ is finite. (which happens almost surely) So in general $\mathbb{E}[X_T]=\mathbb{E}[X_0]$ might not hold. The optional stopping theorem gives some conditions under which the equality holds.
H: How do I evaluate $\int_{-R}^{R} \sqrt{R^2-x^2}\,dx$ using the change of variables $x = R\sin(w)$? I'm trying to answer this question from my calculus 1 exam I did last December. The question asks us to compute a definite integral by using the following change of variables $$ x=R \sin(w)$$ in the following equation: $$\int_{-R}^R \sqrt{R^2-x^2} \ dx $$ The thing that's confusing me is how to do this change of variable if the equation is already expressed in terms of $x$? AI: You define $x$ in terms of $w$ via the given rule. To make the change of variable you notice that $$\frac{dx}{dw}=R\cos(w)\iff dx=R\cos(w)\,dw$$ thus the integral becomes \begin{align*} \int_{-R}^R\sqrt{R^2-x^2}\,dx&=\int_{-\pi/2}^{\pi/2}\sqrt{R^2-R^2\sin^2(w)}\cdot R\cos(w)\,dw\\ &=\int_{-\pi/2}^{\pi/2}R\cos(w)\cdot R\cos(w)\,dw\\ &=R^2\int_{-\pi/2}^{\pi/2}\cos^2(w)\,dw \end{align*} you can end it from here.
H: $f^{-1}(D-C)=f^{-1}(D)-f^{-1}(C)$ Please can you give me feedback on this proof? Result: Let $f:A \rightarrow B$ be a function. Let $C$, $D \subseteq B$. Then $f^{-1}(D-C)=f^{-1}(D)-f^{-1}(C)$. Proof: To show that $f^{-1}(D-C)=f^{-1}(D)-f^{-1}(C)$, it is sufficient to show that the set in each side is a subset of the other. Let $x \in f^{-1}(D-C)$. By definition, we see that $f(x) \in D-C$. Hence, $f(x) \in D$ and $f(x) \notin C$. We deduce that $x \in f^{-1}(D)$ and $x \notin f^{-1}(C)$. Then $x \in f^{-1}(D) - f^{-1}(C)$. Therefore $f^{-1}(D-C) \subseteq f^{-1}(D) - f^{-1}(C)$. Now, let $y \in f^{-1}(D) - f^{-1}(C)$. Then $y \in f^{-1}(D)$ and $y \notin f^{-1}(C)$. By definition, we see that $f(y) \in D$ and $f(y) \notin C$. From here we see that $f(y) \in D-C$. Then, by definition, $y \in f^{-1}(D-C)$. Therefore $f^{-1}(D)-f^{-1}(C) \subseteq f^{-1}(D-C)$. This ends the proof. Thank you for your attention! AI: Your proof is correct. You can alternatively combine the two parts for a shorter proof as follows: $$\begin{align*}x \in f^{-1}(D-C)&\iff f(x) \in D-C\\ &\iff f(x) \in D \text{ and } f(x) \notin C\\ &\iff x \in f^{-1}(D) \text{ and } x \notin f^{-1}(C)\\ &\iff x \in f^{-1}(D)-f^{-1}(C). \end{align*}$$
H: Understanding Compact Sets (In Complex Analysis) The definition that I have for a compact set is that The way that I interpret this is that I can pick any sequence and it should have a limit point (i.e. point of accumulation). However, I can't understand a couple of things about this definition. Why is it not true that all points can be points of accumulation? Also, if I pick the disc of radius 1 centred at the origin of the complex plane and make a sequence say $0.7 +0.31i$, $0.8$, $0.653 - 0.74i$, $0.92 + 0.01i$ and so on of random complex numbers (members of this closed disc that is compact) how does it nesseseraly converge? If anybody could shine some light on this I would greatly appreciate it. AI: Any point in $S$ can be an accumulation point of a sequence in $S$. Let $p$ be a point in $S$, the sequence $(p)_{n \in \Bbb{N}}$ is a sequence in $S$, the constantly $p$ sequence, and it has one accumulation point, $p$. It is worth remembering that a sequence can have more than one accumulation point. Consider the sequence $$ \left( \begin{cases} 1 ,& n \text{ odd} \\ 0 ,& n \text{ even} \end{cases} \right)_{n \in \Bbb{N}} $$ It has both $0$ and $1$ as accumulation points, but converges to neither. Do not conflate accumulation point with convergent. It is the case that, for each accumulation point, there is a subsequence that converges to that accumulation point. Your "random points" scheme works to prevent having a convergent. It does not prevent having a subsequence which converges to an accumulation point. Suppose you didn't select your points randomly but instead chose your points to avoid having any accumulation point. Pick $\varepsilon > 0$. Each time you put down a point, you can only ever have finitely many in a disk of radius $\epsilon$ around any other point already in your sequence. So partition the unit disk into squares, with edges parallel to the coordinate axes at integer multiples of $\varepsilon/2$, including the upper and left edges and both upper vertices. (Why "${}/2$"? Because $\sqrt{2}/2 < 1$, so any radius $\varepsilon$ disk centered in that square covers that entire square.) (The orientation of the grid is entirely irrelevant -- just rotat the disk to any other orientation. The alignment of the disk is also irrelevant -- if you choose to offset the grid, you get the same result.) There are only finitely many squares, so there are only finitely many disks. In each radius $\varepsilon$ disk, to prevent having a convergent, you are only permitted a finite number of points. But this means your sequence is only finitely long. So this can't happen, there is at least one disk that has infinitely many points in it. And this is true for any choice of $\varepsilon$, so an infinitely long sequence in the unit disk must have at least one accumulation point (and a subsequence which converges to that accumulation point).
H: Evaluate $\frac{2021!+2020!}{2019!+2018!}.$ Evaluate $\displaystyle{\frac{2021!+2020!}{2019!+2018!}}.$ I think we can write this as $\displaystyle{\frac{2018!\cdot2019\cdot2020\cdot2021+2018!\cdot2019\cdot2020}{2018!\cdot2019+2018!}},$ but I don't know if this is the right direction. AI: $$\frac{2021!+2020!}{2019!+2018!}$$ $$ =\frac{2022\cdot2020!}{2020\cdot2018!}$$ $$= \frac{2022\cdot2020\cdot2019}{2020}$$ $$= 2022\cdot2019$$
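A one-line arithmetic check of that simplification (a sketch in plain Python):

```python
from math import factorial
# both print 4082418 = 2022 * 2019
print((factorial(2021) + factorial(2020)) // (factorial(2019) + factorial(2018)))
print(2022 * 2019)
```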
H: How do I proceed in proving that $\cos 20^o\cos 40^o\cos 60^o\cos 80^o = \frac{1}{16}$ I have taken the value of $\cos 20^o$ to be $x$. Now, $$\cos 20^o\cos 40^o\cos 60^o\cos 80^o = \dfrac{\cos 20^o}{2}\cos(40^o)\cos(80^o)$$ $$=\dfrac{x}{2}\Bigg( \dfrac{\cos(40^o + 80^o) + \cos(40^o - 80^o)}{2} \Bigg)$$ $$=\dfrac{x}{2}\Bigg( \dfrac{\dfrac{-1}{\text{ }2} + \cos 40^o}{2} \Bigg) = \dfrac{x}{4}\Bigg( \dfrac{-1}{\text{ }2}+2x^2-1 \Bigg)$$ $$=\dfrac{x}{8}(4x^2-3) = \dfrac{4x^3-3x}{8}$$ Can this even be continued (I mean to ask if this method will lead me any further)? If yes, how do I continue it? Thanks! PS : Please don't provide alternatives, I want to try them myself first. This question is only for the sake of asking if this method can be continued and it if can be, then how. AI: You are one step away from the final answer. Use the fact that $$\cos3x=4\cos^3x-3\cos x$$ and the problem is solved.
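A quick numerical confirmation of the identity (a sketch; assumes numpy is available):

```python
import numpy as np

deg = np.deg2rad
print(np.cos(deg(20)) * np.cos(deg(40)) * np.cos(deg(60)) * np.cos(deg(80)))  # 0.0625
print(1 / 16)
```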
H: Is a countable infinite union of $\Sigma_1$ sets is $\Sigma_1$? I’m reading Kunen’s book Foundations of mathematics. My question is whether a countable union of $\Sigma_1$ sets in $HF$ is also $\Sigma_1$ or not. I wonder if we can think $\Sigma_1$ sets as open sets in HF. AI: $\Sigma_1$ subsets of $\mathit{HF}$ are the recursively enumerable ones, as you might have read in Kunen's book. The intuition of being “open” sets is not completely incorrect, if you consider them effectively open. With this in mind, it can be concluded that the union of an effectively enumerated family of $\Sigma_1$ sets is again $\Sigma_1$. But, following Andreas' comment above, any subset of $\omega$ can be obtained a countable union of $\Sigma_1$ sets; and there are indeed such sets that are not $\Sigma_1$— for instance, the set of (codes for) non halting Turing machines.
H: A quiz question in real analysis on finding whether a function is bounded, continuous under certain conditions This particular question was asked my senior's quiz and I was unable to solve it. So, I am asking for help here. Let $f:\mathbb{R}\to\mathbb{R}$ be a function such that $f(x+1)=f(x)$ for all $x\in \mathbb{R}$. Which of the following statement(s) is/are true? (A) $f$ is bounded (B) $f$ is bounded if it is continuous (C) $f$ is differentiable if it is continuous (D) $f$ is uniformly continuous if it is continuous By using definition of continuity along with boundedness definition, I can say that (B) is true. A function can be defined on $x=1/2 + \mathbb{Z} $ such that it's $\infty $ at $1/2 + \mathbb{Z} $ and finite elsewhere, so (A) becomes false. But I am unable to think about (C) and (D) . Answer is : B,D. Kindly help. AI: The condition on $f$ is equivalent to saying that $f$ is a periodic function with period 1. Therefore we should really think of $f$ as a function on the unit interval $[0, 1]$ with $f(0) = f(1)$. An arbitrary function on $[0, 1]$ has no nice properties like boundedness, so we can cross it off. If we're continuous, though, since $[0, 1]$ is compact, $f$ must be bounded. This gives us B. For C, there are many continuous functions which are not differentiable (in fact, most continuous functions are not differentiable, in some sense). For D, we use that continuity on a compact set implies uniform continuity. The takeaway is that the condition $f(x) = f(x + 1)$ lets us just think of a function on $[0, 1]$ (with $f(0) = f(1)$) instead of the whole real line.
H: Need for left limits in Stochastic Calculus theorems A lot of the theorems in stochastic analysis are stated for cadlag processes (i.e. right continuous processes with left limits), but I am having a hard time seeing why the "left limits" part is important. It seems like for the most part just right continuity is enough, so I was wondering if anybody had a general explanation for why the assumption of left limits is usually included. For a specific example, Proposition 2.3.5 in Revuz and Yor's "Continuous Martingales and Brownian Motion" states A cadlag adapted process $X$ is a martingale if and only if for every bounded stopping time $T$ the random variable $X_T \in L^1$ and $\mathbb{E}[X_T] = \mathbb{E}[X_0]$. The "only if" part comes from the optional stopping theorem, which did not include the assumption that $X$ is cadlag (because martingales have cadlag modifications anyway when the filtration satisfies the usual conditions). The proof for the converse direction is to fix $s < t$ and $A \in \mathcal F_s$ and define $T = t 1_{A^c} + s 1_A$ and use that $\mathbb{E}[X_t] = \mathbb{E}[X_T]$ to show $\mathbb{E}[X_t 1_A] = \mathbb{E}[X_s 1_A]$ and hence $\mathbb{E}[X_t | \mathcal F_s] = X_s$, but this also doesn't seem to use the left limits assumption. I originally thought it was to ensure $X$ is progressively measurable so that $X_T$ is measurable, but being right continuous and adapted is enough to conclude $X$ is progressively measurable so I'm still confused on why we need left limits. AI: In the context of semimartingales, left limits come for free: a right-continuous finite variation process automatically has left limits, and ditto (a.s.) for a local martingales. There is probably some "historical accident" at play here too. Stochastic Calculus originally grew out of the theory of Markov processes, where cadlag was a default.
H: Is every sine and cosine orthogonal to every other? I've been learning about Fourier series, and haven't found an explicit statement of this requirement for constructing any arbitrary function using just sines and cosines, so I'm asking here. Is it true that $\sin{ax},\sin{bx},\cos{cx},\cos{dx}$ are all orthogonal to each other for all distinct real $a,b,c,d$? Symbolically: $$\int \sin{ax} \sin{bx} = \int \sin{ax} \cos{cx} = \int \sin{ax} \cos{dx} = \int \sin{bx} \cos{cx}\; ...=0 $$ Is this easy to show? AI: I'll assume the range of integration is $[-\pi,\pi]$ and that $a,b$ are nonzero integers. $\int_{-\pi}^{\pi} \sin(a x)\cos(bx)\,dx=0$ because the integrand is odd and integrable. If $a\neq b$, then $\int_{-\pi}^{\pi} \sin(a x)\sin(bx)\,dx=\frac{1}{2}\int_{-\pi}^{\pi} -\cos((a+b)x)+\cos((a-b)x)\,dx=0$. Similarly for the double cosine case: $\int_{-\pi}^{\pi} \cos(a x)\cos(bx)\,dx=\frac{1}{2}\int_{-\pi}^{\pi} \cos((a+b)x)+\cos((a-b)x)\,dx=0$.
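A numerical spot check of these orthogonality relations over $[-\pi,\pi]$ for a few distinct integer frequencies (a sketch; assumes numpy and scipy are available, and the frequency pairs are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

for a, b in [(1, 2), (3, 5), (2, 7)]:
    ss, _ = quad(lambda x: np.sin(a*x) * np.sin(b*x), -np.pi, np.pi)
    sc, _ = quad(lambda x: np.sin(a*x) * np.cos(b*x), -np.pi, np.pi)
    cc, _ = quad(lambda x: np.cos(a*x) * np.cos(b*x), -np.pi, np.pi)
    print(a, b, round(ss, 10), round(sc, 10), round(cc, 10))  # all ~0 for a != b
```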
H: What is the meaning of division of a formal power series by $x$? Background: a formal power series is defined as an expression of the form $\sum_{n\geq 0} a_n x^n$. If $f =\sum_{n=0}^\infty a_n x^n$ then, we write $\{a_n\}_{n\geq 0} \leftrightarrow f$. Two formal power series are equal if each of the components match. The sum and difference of two formal series is defined component-wise. Also the product of two formal power series $\sum_n a_n, \ \sum_n b_n$ is defined as the formal power series $\sum_n c_n$ where $c_n = \sum_k a_k b_{n-k}$. Two formal power series $\sum_n a_n,\ \sum_n b_n$ are called reciprocal if $\sum_n a_n \sum_n b_n = \sum_n b_n \sum_n a_n = 1$. Now, in generatingfunctionology (2.2), Wilf mentions that for $k \geq 0$, we have, if $\{a_n\}_{n \geq 0} \leftrightarrow f$, then $\{a_{n+k}\}_{n \geq 0} \leftrightarrow \frac{f - a_0 - \dots - a_{k-1}x^{k-1}}{x^k}$. In particular, $\{a_{n+1}\}_{n \geq 0} \leftrightarrow \frac{f-a_0}{x}$. In fact, in (2.1) it is discussed that a formal power series has a reciprocal if and only if the constant term is non-zero. What is confusing to me is what it means to divide a formal power series by $x$. In fact, I am not sure what exactly $\frac{1}{x}$ is the context of formal power series. Although we can interpret $\frac{1}{x}$ as a formal power series that results in $1$ when multiplied by $x$, there is no expression of $\frac{1}{x}$ in the form $\sum_n a_n$. So my question is: what does $\{a_n\}_{n \geq 0} \leftrightarrow f$ $\implies$ $\{a_{n+k}\}_{n \geq 0} \leftrightarrow \frac{f - a_0 - \dots - a_{k-1}x^{k-1}}{x^k}$ actually mean in the context of formal power series? The explantion should not depend on any analytical property of $f$, as we are treating $f$ as only an algebraic object without any analytical property. AI: First you need to correct your definition of $f$: $f\leftrightarrow\langle a_n:n\ge 0\rangle$ means that $$f(x)=\sum_{n\ge 0}a_n\color{red}{x^n}\;.$$ Then $$\begin{align*} f(x)-a_0-a_1x-\ldots-a_{k-1}x^{k-1}&=\sum_{n\ge k}a_nx^n\\ &=x^k\sum_{n\ge k}a_nx^{n-k}\\ &=x^k\sum_{n\ge 0}a_{n+k}x^n\; \end{align*}$$ dividing by $x^k$ now has a clear formal meaning and results in the series $$\sum_{n\ge 0}a_{n+k}x^n=a_k+a_{k+1}x+a_{k+2}x^2+\ldots\;.$$ We can read off the coefficients and see that by definition $$\sum_{n\ge 0}a_{n+k}x^n\leftrightarrow\langle a_k,a_{k+1},a_{k+2},\ldots\rangle=\langle a_{n+k}:n\ge 0\rangle\;.$$ In short, it’s a straightforward formal algebraic manipulation. Added: Don’t think of it as division: think of $$\frac{\sum_{n\ge 0}a_nx^n-a_0-a_1x-\ldots-a_{k-1}x^{k-1}}{x^k}=\sum_{n\ge 0}a_{n+k}x^n$$ as an alternative way to write $$\sum_{n\ge 0}a_nx^n=\sum_{n=0}^{k-1}a_nx^n+x^k\sum_{n\ge k}a_nx^{n+k}\;,$$ one that emphasizes the nature of the transformation from $\sum_{n\ge 0}a_nx^n$ to $\sum_{n\ge 0}a_{n+k}x^n$, the fact that it corresponds to a left shift of the associated sequence.
H: Intersection of maximal ideals of $\mathbb{Q}[x]/(f(x))$ are nilpotent elements I'm studying for an algebra qualifying exam (here is the practice exam to prove its not a HW problem) and I was trying to do problem 5(c). Here it is again: Let $R=\mathbb{Q}[x]/(f(x))$ where $f\in \mathbb{Q}[x]$ is a non-constant polynomial. Show that the intersection of all the maximal ideals of $R$ is exactly the nilpotent elements in $R$. Here is my attempt at the solution: Let $M$ be a maximal ideal of $R$. I know that $\mathbb{Q}[x]/(f(x))/M=\mathbb{Q}[x]/(f(x),M)$ and that $\mathbb{Q}[x]$ is a PID so $(f(x), M)=(p(x))$ for some $p\in \mathbb{Q}[x]$. But since $M$ is maximal, $\mathbb{Q}[x]/(f(x),M)=\mathbb{Q}[x]/(p(x))$ should be a field and therefore $p(x)=x-a$ for some $a\in \mathbb{Q}$. Therefore $\cap_{M \text{ maximal}} M=0$ which is clearly a nilpotent element. I am not sure if what I have done is correct and if it is, I am unsure how to prove the other inclusion (that there are no non-trivial nilpotent elements). Thanks in advance for the help! AI: If $a$ is nilpotent then it is in every maximal ideal. Factorize $f\in \Bbb{Q}[x]$ in irreducibles. Deduce the maximal ideals of $\Bbb{Q}[x]/(f)$. If $a$ is in every maximal ideal then it is in $\bigcap_j \mathfrak{m}_j=\prod_j \mathfrak{m}_j$ and $a^{\deg(f)}$ is in $(f)= 0$.
H: For the elementary row operation of exchanging two rows, is it required that the rows be different? For example, on this page, $i \neq j$ is specified for the elementary row operation of row addition, but not for row switching. Of course, I understand that if $i = j$ and you switch rows $i$ and $j$, then you are just leaving the given matrix unchanged. The reason I ask is because I was given the problem to find the determinant of the different kinds of elementary matrices and ordinarily if you exchange rows $i$ and $j$ with $i \neq j$, the determinant of the resulting elementary matrix is $-1$, but if $i = j$ is allowed, then that elementary matrix will just be the identity matrix and have a determinant of $1$. So I was unsure whether in my solution to the problem I would need to include the case where $i = j$ separately, or whether exchanging a row with itself is not defined as an elementary row operation. AI: You have a good eye for detail. If this is for a class assignment, I suggest pointing out your observation. You could infer that row swapping is really only intended for $i\neq j$, but in the event that $i$ is allowed to equal $j$, the determinant is something different. You could also express the determinant as $(-1)^{\operatorname{sign}(i-j)}$ or $(-1)^{\delta_{ij}}$, where $\delta_{ij}$ is the Kronecker delta.
H: Automorphism of the unit disk that fixes a point I came across the following question: For each $b \in \mathbb{D}$, construct an automorphism $\phi$ of the unit disk that is not the identity map such that $\phi(b)=b$. I know that all automorphisms of the disk must have the form $e^{i\theta}\left(\frac{a-z}{1-\overline{a}z}\right)$ where $a \in \mathbb{D}$. The only map that I can see would fix $b$ is just a rotation by some multiple of $2\pi$. How should I proceed? AI: Assume $b \ne 0$ as otherwise problem trivial. Take $\psi_1(z)=\frac{z+b}{1+\bar b z}$. Clearly $\psi_1(0)=b$. Now take $\psi_2(z)=\alpha\frac{z-b}{1-\bar b z}, |\alpha|=1, \alpha \ne 1$. Clearly $\psi_2(b)=0$ so $\phi=\psi_1 \circ \psi_2$ satisfies $\phi(b)=b$ Since $\psi_2(0)=-\alpha b \ne -b$, $\phi(0) \ne 0$ so $\phi$ is not the identity
H: Proof of continuous force of interest with infinitely compounded interest rate. For actuaries, delta and i upper infinity Specifically, we know the following: $$(1+\frac{i^{(2)}}{2})^2 =(1+\frac{i^{(4)}}{4})^4=(1+\frac{i^{(12)}}{12})^{12}= 1+i, $$ Where $i^{(2)}$, $i^{(4)}$, and $i^{(12)}$ are the interest rates compunded semiannually, quarterly, and monthly, respectively, and i is the interest rate compunded annually. However, I noticed in the Standard Ultimate Life Table provided by the Society of Actuaries that $i^{(\infty)}$ = $\delta$, or more specifically: $$\lim\limits_{m \to \infty}(1+\frac{i^{(m)}}{m})^m=e^\delta=1+i$$ This makes perfect sense intuitively and I found a post about the more common notation $A=Pe^{rt}$ where $\lim\limits_{n \to \infty}(1+\frac{1}{n})^n=e$ is used in the proof to show the relationship with $A=P(1+\frac{r}{n})^{nt}$, which also makes sense, but I was wondering if someone could use the relationship bewteen $i^{(m)}$ and i to prove the following: $$\lim\limits_{m \to \infty}[(1+i)^{\frac{1}{m}}-1]m=\delta$$Where $\delta=\ln(1+i)$. Thanks! AI: Note $$\left((1+i)^{1/m} - 1\right)m = \log(1+i) \cdot \frac{\exp\left(\frac{1}{m} \log (1+i)\right) - e^0}{\frac{1}{m} \log(1+i) - 0},$$ so that with the substitution $h = \frac{1}{m} \log(1+i)$, we obtain $$\lim_{m \to \infty} \left((1+i)^{1/m} - 1\right)m = \log(1+i) \lim_{h \to 0} \frac{e^h - e^0}{h - 0} = f'(0) \log(1+i) = \log(1+i),$$ where $f(x) = e^x$ and we have applied the definition of derivative $$f'(a) = \lim_{h \to a} \frac{f(h) - f(a)}{h - a}.$$
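A quick numerical check of the limit $\lim_{m\to\infty}\left[(1+i)^{1/m}-1\right]m=\ln(1+i)$ for a sample interest rate (a sketch; assumes numpy, and $i=0.05$ is an arbitrary test value):

```python
import numpy as np

i = 0.05
for m in (1, 12, 365, 10**6):
    print(m, m * ((1 + i)**(1 / m) - 1))   # approaches log(1.05)
print(np.log(1 + i))                        # 0.048790...
```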
H: Examples of singular non locally constant functions What are (as easy as possible) examples of functions $f$ with the following properties? singular, i.e. continuous, non-constant, and differentiable almost everywhere with derivative zero, non locally constant, i.e. $\exists x$ with $f'(x)=0$ but $\forall U $neighborhood of $x$, $ \exists y∈U$ with $f(x)≠f(y)$. Note that the above definition of non locally constant is unusual, but I don't know how to call this specific case. Anyway, this is this case I ask for. AI: Take any singular function $g:[0,1]\to\mathbb{R}$ (e.g., the Cantor function) and extend $g$ to all of $\mathbb{R}$ by making it constant on $(-\infty,0]$ and $[1,\infty)$. Now pick an increasing sequence $(a_n)$ converging to some value $a\in\mathbb{R}$ and a sequence $(c_n)$ such that $\sum c_n$ converges, and consider the function $$f(x)=\sum_{n=0}^\infty c_ng\left(\frac{x-a_n}{a_{n+1}-a_n}\right).$$ This function looks like a bunch of scaled and shifted copies of $g$ on the intervals $[a_n,a_{n+1}]$, accumulating at the point $a$. If $c_n$ shrinks fast enough, then $f$ will be differentiable at $a$ with $f'(a)=0$, but $f$ is not constant on any neighborhood of $a$.
H: PDF of $Y = WX$ There are two independent random variables, $X \sim \mathcal{N}(0, 1)$ and $W$ whose PMF is given by $$ P(W = w) = \begin{cases} \frac{1}{2} \hspace{3mm} \text{if} \hspace{3mm} w = \pm1 \\ 0 \hspace{3mm} \text{otherwise}. \end{cases} $$ A third random variable is defined as $Y = WX$. I want to find the density of $Y$. \begin{align} P(Y \leq y) &= P(WX \leq y)\\ &= P(X \leq \frac{y}{W}) \\&= \sum_{w \in \{1, -1\}}P(X \leq \frac{y}{w})P(W = w)\\ &= \frac{1}{2}\int_{-\infty}^{-y}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx + \frac{1}{2}\int_{-\infty}^{y}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx \end{align} When I differentiate the CDF to get PDF of $Y$, both the terms cancel out due to sign of $y$ in one of the integrals. I know that $Y \sim \mathcal{N}(0, 1)$. What am I doing wrong? Thanks. AI: You are on the right track. The mistake is $X\leq \frac{y}{W}$. Since, W can take both values +1, -1 you cannot take it directly to the denominator without changing inequality.
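A short simulation confirming that $Y=WX$ is standard normal (a sketch; assumes numpy and scipy are available, and the sample size and evaluation points are arbitrary):

```python
# Simulate Y = W X and compare its empirical CDF with the standard normal CDF.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 10**6
X = rng.standard_normal(n)
W = rng.choice([-1, 1], size=n)
Y = W * X
for y in (-1.5, 0.0, 0.5, 2.0):
    print(y, np.mean(Y <= y), norm.cdf(y))  # empirical and exact values agree
```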
H: Let $G$ be a group of order $2016 = 2^5 \cdot 3^2 \cdot 7$ in which all elements of order $7$ are conjugate. Let $G$ be a group of order $2016 = 2^5 \cdot 3^2 \cdot 7$ in which all elements of order $7$ are conjugate. Prove that $G$ has a normal subgroup of index $2$ I know any subgroup of index $2$ must be normal, and I feel like the way the question is worded I should start off by letting $\Omega$ be the set of elements of $G$ of order $7$ and then let $G$ act on $\Omega$ by conjugation and then get that we have a homomorphism from $G$ into $S_{|\Omega|}$ and let $K$ be the kernel of this action. So $K$ is a normal subgroup of $G$. My lack of intuition tells me that I should try to prove $k$ is the group of index $2$ I am looking for, and that I should some how use the fact that $A_{|\Omega|}$ is a normal subgroup of $S_{|\Omega|}$ of index $2$ to do this, but I can't figure out. I feel like it would be helpful if I could figure out the order of $\Omega$. By Sylows Theorem I know that there are $1, 8, 36$, or $288$ subgroups of order $7$ so the order of $\Omega$ must be $6, 48, 216$, or $1728$. Also the fact that for any subgroup $H < G$ we have that $|G:H|$ divides $7!$ iff $2$ divides $H$ has very much got my attention but don't see how its relevant. Any help would be appreciated. AI: Here are some ideas. I will leave you to fill in the details. Of course there might be easier ways to do it! The 6 nontrivial elements in a Sylow $7$-subgroup $P$ are all conjugate, and they must be conjugate in $N_G(P)$, so $|N_G(P)|$ is divisible by $6$. That rules out $36$ and $288$ for the number of Sylow $7$-subgroups. If there is a single Sylow $7$-subgroup, then $P \lhd G$, and also $C_G(P) \lhd G$ with $|G/C_G(P)|=6$. So $G/C_G(P)$ and hence also $G$ have subgroups of index 2. When there are $8$ Sylow $7$-subgroups, the image of the conjugation action of $G$ on $\Omega = {\rm Syl}_7(G)$ is a 3-transitive group of order $8 \times 7 \times 6$, where the 2-point stabilizer contains a $6$-cycle, which is an odd permutation. So intersecting with ${\rm Alt}(\Omega)$ gives a subgroup of index 2.
H: Gradient of norms - general advice I have something of the following sort: $$ F(x): \mathbb{R}^n \to \mathbb{R} $$ Where $F(x)$ is a function mapping from one value to another. For example, I may have functions of the form $$ F(x) = \|x - x_0\|_2^2 $$ or $$ F(x) = \|Ax - b\|_2^2 $$ Now, I would like to know how to find the gradient for different $l_2$ norms as follows: $$ \nabla F(x)$$ I also know that $$ F(x) = \|x - x_0\|_2^2 = (x - x_0)^T(x-x_0)$$ Unfortunately, my vector/norm calculus is not knowledgeable, so I would like to know general methods to apply when using calculus on these mathematical objects/books to consult on how to perform these. I know how to break down the matrix/vector and thus perform the gradient calculations, but I would like a generalized way to perform these computations on the whole matrices/vectors and norms without breaking them down into their element-wise operations. AI: Background info: If $F:\mathbb R^n \to \mathbb R^m$ is differentiable at $x$, then $F'(x)$ is an $m \times n$ matrix which satisfies $$ \tag{1} \underbrace{F(x + \Delta x)}_{m \times 1} \approx \underbrace{F(x)}_{m \times 1} + \underbrace{F'(x)}_{m \times n} \underbrace{\Delta x}_{n \times 1}. $$ The approximation is good when $\Delta x$ is small. The local linear approximation (1) is sometimes called "Newton's approximation", and it is the key to understanding and computing derivatives in calculus. It is the basic idea at the heart of differential calculus. Most formulas of calculus can be derived easily just by applying Newton's approximation. In the special case that $F:\mathbb R^n \to \mathbb R$, $F'(x)$ is a $1 \times n$ matrix (a row vector). Often we use the convention that the gradient of $F$ at $x$ is a column vector, so that $$ \nabla F(x) = F'(x)^T. $$ For $F(x) = \|x \|_2^2$, if you don't want to compute the partial derivatives of $F$ (which would be easy in this case), you could think directly in terms of Newton's approximation. With this choice of $F$, we have \begin{align} F(x + \Delta x) &= \|x + \Delta x \|_2^2 \\ &= \| x \|^2 + 2 x^T \Delta x + \| \Delta x \|_2^2 \\ &\approx F(x) + 2 x^T \Delta x. \end{align} Comparing with Newton's approximation, we discover that $$ F'(x) = 2 x^T. $$ If we use the convention that $\nabla F(x)$ is a column vector, then $$ \nabla F(x) = F'(x)^T = 2x. $$ This is the result we would expect or guess based on what we know about single-variable calculus. (In single-variable calculus, if $F(x) = x^2$, then $F'(x) = 2x$.) To compute the gradient of the function $F(x) = \| Ax - b \|_2^2$, I recommend using the chain rule, as I explained here: https://math.stackexchange.com/a/3508376/40119 This makes the calculation simple and elegant.
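As a complement to the Newton-approximation derivation, here is a small numerical sketch (dimensions and seed are arbitrary) that checks the resulting formula $\nabla F(x)=2A^T(Ax-b)$ for $F(x)=\|Ax-b\|_2^2$ against central finite differences:

import numpy as np

rng = np.random.default_rng(1)
m, n = 5, 3                                # arbitrary dimensions
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
x = rng.standard_normal(n)

F = lambda v: np.sum((A @ v - b) ** 2)     # F(v) = ||Av - b||_2^2
grad_closed = 2 * A.T @ (A @ x - b)        # gradient from the chain rule

eps = 1e-6                                 # central finite-difference check
grad_fd = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps) for e in np.eye(n)])

print(np.max(np.abs(grad_closed - grad_fd)))   # tiny, e.g. ~1e-9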
H: Database of solutions to this generalised Pell equation. Does there exist a database of primary solutions to generalised Pell's equations of the form: $$x^2 - 2w^2 = -N$$ for every constant $N \in \mathbb{Z}$? AI: The tree diagrams below, if extended out enough layers, show all your "primary" solutions for $x^2 - 2 y^2 = n$ with $n>0$ and $\gcd(x,y) = 1.$ Note that this means $n$ cannot be divisible by $4,$ nor by $3,5,11,13$ or any primes $q$ with $q \equiv 3,5 \pmod 8.$ As far as the test case $n=119,$ two primary solutions are small enough to appear in these trees, the other two "seed" solutions are just a bit too big. This method was introduced by J. H. Conway and is written up in more recent books as well, I like Weissman's An Illustrated Theory of Numbers. Oh, well. In the following, the full set of solutions to $w_n^2 - 2 v_n^2 = 119$ follows $$ w_{n+8} = 6 w_{n+4} - w_n $$ $$ v_{n+8} = 6 v_{n+4} - v_n $$ For instance $6 \cdot 37 - 11 = 211$ and $6 \cdot 25 - 1 = 149$ jagy@phobeusjunior:~$ ./Pell_Target_Fundamental Automorphism matrix: 3 4 2 3 Automorphism backwards: 3 -4 -2 3 3^2 - 2 2^2 = 1 w^2 - 2 v^2 = 119 = 7 17 Fri Jul 3 11:57:01 PDT 2020 w: 11 v: 1 SEED KEEP +- w: 13 v: 5 SEED KEEP +- w: 19 v: 11 SEED BACK ONE STEP 13 , -5 w: 29 v: 19 SEED BACK ONE STEP 11 , -1 w: 37 v: 25 w: 59 v: 41 w: 101 v: 71 w: 163 v: 115 w: 211 v: 149 w: 341 v: 241 w: 587 v: 415 w: 949 v: 671 w: 1229 v: 869 w: 1987 v: 1405 w: 3421 v: 2419 w: 5531 v: 3911 w: 7163 v: 5065 w: 11581 v: 8189 w: 19939 v: 14099 w: 32237 v: 22795 w: 41749 v: 29521 w: 67499 v: 47729 w: 116213 v: 82175 w: 187891 v: 132859 w: 243331 v: 172061 w: 393413 v: 278185 w: 677339 v: 478951 w: 1095109 v: 774359 w: 1418237 v: 1002845 w: 2292979 v: 1621381 w: 3947821 v: 2791531 w: 6382763 v: 4513295 w: 8266091 v: 5845009 w: 13364461 v: 9450101 w: 23009587 v: 16270235 Fri Jul 3 11:58:02 PDT 2020 w^2 - 2 v^2 = 119 = 7 17
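If no ready-made database is available, tables like the one above can be reproduced by a brute-force search for small solutions combined with the automorphism $(w,v)\mapsto(3w+4v,\,2w+3v)$, which preserves $w^2-2v^2$. A minimal Python sketch (the search bound is an arbitrary choice):

N = 119
# small "seed" solutions of w^2 - 2 v^2 = N found by brute force
seeds = [(w, v) for w in range(1, 200) for v in range(0, 200) if w * w - 2 * v * v == N]
print(seeds[:6])                      # starts with (11, 1), (13, 5), (19, 11), ...

def step(w, v):                       # automorphism from the matrix [[3, 4], [2, 3]]
    return 3 * w + 4 * v, 2 * w + 3 * v

w, v = 11, 1
for _ in range(5):
    w, v = step(w, v)
    assert w * w - 2 * v * v == N     # the quadratic form is preserved
    print(w, v)                       # reproduces 37 25, 211 149, 1229 869, ...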
H: Biholomorphism between Riemann Surfaces. Let's consider the homeomorphism $\psi : \mathbb C \rightarrow B_{1}(0), z \mapsto \frac {z}{|z|+1}$. Let $Z$ be the Riemann Surface with topological space $\mathbb C$ induced by the chart $( \mathbb C,\psi)$. I have to show that $Z$ is biholomorphic to $B_{1}(0)$. I think that this means that I have to find a holomorphic function $g$ such that $\psi \circ g \circ id_{\mathbb{C}}^{-1}$ is holomorphic and $id_{\mathbb{C}} \circ g^{-1} \circ \psi^{-1}$ is also holomorphic, but I got stuck finding such a function. Any ideas? Thank you in advance. AI: One point of importance here is that $g$ need only be holomorphic as a map $B_1(0) \to Z$, not to the usual complex structure on $\mathbb{C}$. For this it suffices to check that $g$ is a diffeomorphism and $\varphi \circ g$ is holomorphic for all coordinate charts $\varphi$ in an atlas for the Riemann surface structure on $Z$. (We only need check holomorphicity one way, which comes from the fact that the inverse of a holomorphic map with nonvanishing derivative is holomorphic). Since $Z$ is given with $\psi$ as the only chart in an atlas, we only need to show there exists a diffeomorphism $g$ with $\psi \circ g$ holomorphic. Such a map is given by the inverse map of $\psi: \mathbb{C} \to B_1(0)$, which one can work out to be $$g(z) = \frac{z}{1-|z|},$$ which is a diffeomorphism $B_1(0) \to \mathbb{C}$ and satisfies $\psi \circ g(z) = z$ for all $z \in B_1(0)$, which is holomorphic. This completes the proof.
H: Kernel of a linear functional (definite integral) I need to find the kernel of a linear functional; in this case the linear functional is a definite integral (I have already proved that a definite integral is linear). I have $F: \mathbb{R}_2[x] \to \mathbb{R}$, defined by $$F(p) = \int_0^1 p(x) \,dx.$$ I suspect that it has something to do with the annihilator subspace, but I cannot quite yet visualize what the solution should look like. I am just getting into this topic, and on the internet I don't find enough information about this particular case of linear functionals. Thanks. AI: You can find a straight line whose integral vanishes, take $x-1/2$ for example, and a quadratic, say $x^2-1/3$, whose integral vanishes, too. As the kernel can't have dimension $3$ and both functions are linearly independent, you've found a basis of the kernel.
H: Bound of Angle Between Minimal Vectors of a Lattice I am considering a sublattice $\Lambda \subset \mathbb{Z}^2 \subset \mathbb{R}^2$ of dimension/rank 2. From $\Lambda$ I am pulling out a vector $\mathbf{v}_1 $ of minimal length, and a vector $\mathbf{v}_2$ of minimal length subject to the condition that $\mathbf{v}_1$ and $\mathbf{v}_2$ are linearly independent. A paper by Heath-Brown in 1984 ("Diophantine Approximation with Square-Free Numbers") states that the angle $\theta$ between these two vectors satisfies $\pi/3 \leq \theta \leq 2\pi/3$. I've been struggling to see why this is the case. Currently what I've tried is rotating $\mathbb{R}^2$ so that $\Lambda$ has a basis vector (not necessarily distinct from the $\mathbf{v}_i$!) that lies on the $x$-axis and then using some calculus, but I can't seem to achieve the bound on $\theta$ above. Any help would be appreciated! AI: We are in $\Bbb R^2$. We can take coordinates so that $v_1=(a,0)$ and $v_2=(b,c)$ where $a$ and $c$ are positive. Now, if we consider $v_2-kv_1=(b-ka,c)$ with $k\in\Bbb Z$ we can replace $b$ by $b'=b-ka$ with $|b'|\le a/2$, without increasing the length of $v_2$. So as $v_2$ is shortest possible, then $|b|\le a/2$. Also as $v_1$ is shortest possible, then $a^2\le b^2+c^2$. Then $$|v_1\cdot v_2|=|ab|\le\frac{a^2}2\le\frac{a\sqrt{b^2+c^2}}2$$ and so $$|\cos\theta|\le\frac12,$$ which is exactly the bound $\pi/3\le\theta\le 2\pi/3$.
H: If $f$ is bijection in a dense subset then $f$ is bijection in all space Let $X=(X,\mathcal{T}_X)$ and $Y=(Y,\mathcal{T}_Y)$ be topological Hausdorff spaces and $f: X \longrightarrow Y$ be a continuous function. If $f:D \subset X \longrightarrow Y$, with $D$ dense in $X$, is a bijection (one-to-one and onto) then $f:X \longrightarrow Y$ is a bijection too. This is true in general? AI: This is not true in general, and the fact that $X$ and $Y$ are topological spaces (or that $D$ is dense in $X$ and $f$ is continuous) is not relevant to the question. Suppose that $D$ is a strict subset of $X$. Since $f\vert_D$ is onto, we have that for each $y\in Y$, there is some $x\in D$ such that $f(x) = y$. Then, for each $x^\prime\in X\setminus D$, we have that there is some $x\in D$ so that $f(x) = f(x^\prime)$. This implies that $f$ is not injective. More generally, Take any two sets $X$ and $Y$, and a function $f\colon X\to Y$ whose restriction $f\vert_D \colon D\to Y$, where $D\subsetneq X$ is a strict subset, is surjective. Then, $f$ cannot be injective. However, this also helps one see that if $D = X$, then, trivially, the claim is true.
H: How could I find a slant asymptote of a function like x*e^(1/x) Is there a general way of finding this? Usually what I find on the internet is dividing the function by $ax + b$, but I can't seem to make it work. AI: Hint: Use differential geometry! The oblique asymptotes have the equation: $$y=kx+b, \space \text{ with } \space \space k = \lim_{x \to \infty} \frac{f(x)}{x}, \space \space b = \lim_{x \to \infty} [f(x) - kx].$$
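Applied to $f(x)=x\,e^{1/x}$ these two limits give $k=1$ and $b=1$, i.e. the slant asymptote $y=x+1$; a tiny SymPy sketch (an optional check, not required by the hint) confirms this:

import sympy as sp

x = sp.symbols('x')
f = x * sp.exp(1 / x)

k = sp.limit(f / x, x, sp.oo)        # slope of the oblique asymptote
b = sp.limit(f - k * x, x, sp.oo)    # intercept
print(k, b)                          # prints 1 1, i.e. y = x + 1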
H: Show that if $\sum_{n=1}^{\infty}a_{n}$ converges, then so do $\sum_{n=1}^{\infty}a^{2}_{n}$ and $\sum_{n=1}^{\infty}a_{n}/(1-a_{n})$. Suppose that $0 < a_{n} < 1$ for $n\in\textbf{N}$. Show that if $\sum_{n=1}^{\infty}a_{n}$ converges, then so do $\sum_{n=1}^{\infty}a^{2}_{n}$ and $\sum_{n=1}^{\infty}a_{n}/(1-a_{n})$. Are the converse statements true? My solution Since $0 < a_{n} < 1$, we conclude that $0 < a^{2}_{n} < a_{n} < 1$. Based on this fact, it follows that $\sum_{n=1}^{\infty}a^{2}_{n}$ converges by the comparison test. On the other hand, I do not know how to approach the second series. AI: Your proof of the first claim is correct. If $\sum_n a_n$ converges and $a_n>0$, there is an $N\in \mathbb{N}$ such that for $n\geq N$, $a_n<1/2$. Then we can bound $a_n/(1-a_n)$ by $2a_n$ after this $N$, and use a comparison test. Regarding the converses: the converse of the first claim is false, as we can see by taking $a_n=1/(2n+1)$; then $\sum_n a_n^2$ converges while $\sum_n a_n$ diverges. The converse of the second claim is true: since $0<a_n<1$ we have $1-a_n<1$, hence $a_n\le a_n/(1-a_n)$, so $\sum_n a_n$ converges by comparison with $\sum_n a_n/(1-a_n)$.
H: A question in proof of Section - Transpose of Linear Transformation in Hoffman Kunze Linear Algebra While studying linear algebra from Hoffman and Kunze, I have a question about a step in a proof (the relevant lines are shown in an image). My question concerns the second-to-last line of the proof: I think the summation limit for $i$ should be $1$ to $m$ instead of $1$ to $n$, since in the equation five lines from the end the limit is $i=1$ to $m$. AI: The previous line should be $f = \sum_{i=1}^n f(\alpha_i) \, f_i$. With this, it is correct that \begin{align} T^t(g_j) &= \sum_{i=1}^n [T^t(g_j)](\alpha_i) f_i = \sum_{i=1}^n A_{ji}f_i \end{align}
H: Proving $Y = WX$ and $X$ are uncorrelated From my previous question here, I am able to prove $Y \sim \mathcal{N}(0,1)$, where the PMF of $W$ is $$ P(W = w) = \begin{cases} \frac{1}{2} \hspace{3mm} \text{if} \hspace{3mm} w = \pm1 \\ 0 \hspace{3mm} \text{otherwise}. \end{cases} $$ and $X\sim\mathcal{N}(0, 1)$, independent of $W$. I need to show that $X$ and $Y$ are uncorrelated. I have tried evaluating $\mathbb{E}XY = \mathbb EWX^2$, which needed the distribution of $X^2$, which I have evaluated to be $\frac{1}{\sqrt x}\frac{1}{\sqrt{2\pi}}e^{-\frac{x}{2}}, \hspace{3mm} x > 0$ (the chi-squared density with one degree of freedom). How do I use this to prove uncorrelatedness? Thanks. AI: Because $X$ is independent of $W$, and $E[W]=0$, $$E[WX^2] = E[W] E[X^2] = 0.$$
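A short simulation (arbitrary seed and sample size) illustrating the conclusion: the sample correlation of $X$ and $Y=WX$ is essentially zero even though the two variables are clearly dependent (for instance $|Y|=|X|$):

import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.standard_normal(n)
w = rng.choice([-1.0, 1.0], size=n)
y = w * x

print("estimate of E[W X^2]:", np.mean(w * x * x))       # near 0
print("corr(X, Y)          :", np.corrcoef(x, y)[0, 1])  # near 0
print("corr(|X|, |Y|)      :", np.corrcoef(np.abs(x), np.abs(y))[0, 1])  # exactly 1: not independent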
H: Are neighborhood bases always a subset of a basis? The definition of basis and neighborhood basis are: Let $(X,\tau)$ be a topological space, a base of $\tau$ is a subset $\mathfrak{B}$ of $\tau$ such that each open set $A \in \tau$ is union of elements of $\mathfrak{B}$ If $p \in X$, a subset $\mathfrak{B}_p\subseteq U_p=\{U \in \tau | p \in U\}$ of neighborboods of $ p$ is called a neighborhood basis of $ p$ if for each $U \in U_p$ there exist an $V\in \mathfrak{B}_p$ such that $V\subseteq U$ Additionaly, I have the following proposition in my lectures notes Let $(X,\tau)$ be a topological space (1) If $\mathfrak{B}$ is a basis of $\tau$ and if $p\in X$, the set $\mathfrak{B}_p=\{B \in \mathfrak{B}| p \in B\}$ is a neighborhood basis at $p$. (2) If for each $p\in X$ a neighborhood basis at $p $ : $\mathfrak{B}_p$ is defined, the set $\mathfrak{B}=\{B \in \mathfrak{B}_p | p \in X\}$ is a basis of $\tau$ and the definition of countable basis and countable neighborhood basis Let $(X,\tau)$ be a topological space. If $\tau$ admits a basis whose cardinality is countable, $\tau$ is said to have a countable basis If each $p \in X$ admits a countably-infinite neighborhood basis, $\tau$ is said to have a countably-infinite neighborhood basis In those cases $(X, \tau)$ is called a countable-basis topological space or countable-neighborhood-basis topological space respectively. *The term countable includes finite sets: a set is countable if it is finite or countably-infinite. Then they say that by the second statement (2) of the initial proposition, countable basis implies countable neighborhood basis(that is if $\mathfrak{B}$is countable, then $\mathfrak{B}_p$ is countable), but the converse is not true. ....(3) A couple of related questions Are neighborhood bases always a subset of a basis as stated in (1) : $\mathfrak{B}_p=\{B \in \mathfrak{B}| p \in B\}$ ?, or this is just one possibility? I attempt to give an explanation of (3): Since by (1)(they used (2), but I do not agree, I Think I need (1)), $\mathfrak{B}_p \subseteq \mathfrak{B}$, $\mathfrak{B}$ is countable by hypothesis and a subset of a countable set is countable, it follows that $\mathfrak{B}_p $ is countable. The converse is not true, because we could have an uncountable number of points in X, for which the union of countable neighbordhood bases might not be countable Is this explanation correct?, as you see it depends heavily on the definition of $\mathfrak{B}_p$ as a subset of $\mathfrak{B}$, so it wouldn't be general if a neighborhood basis could be defined without taking the elements from a basis. AI: If $\mathscr{B}$ is a base for a topology $\tau$ on a set $X$, and $p\in X$, the set $\mathscr{B}_p=\{B\in\mathscr{B}:p\in B\}$ is always a nbhd base at $p$. However, it is entirely possible to have a base $\mathscr{B}$ for $\tau$ and a nbhd base $\mathscr{B}_p$ at $p$ such that $\mathscr{B}_p\cap\mathscr{B}=\varnothing$. For instance, let $X=\Bbb R$, and let $\tau$ be the usual topology on $\Bbb R$. Let $$\mathscr{B}=\{(p,q):p,q\in\Bbb Q\text{ and }p<q\}\;,$$ the set of open intervals with rational endpoints; $\mathscr{B}$ is a base for $\tau$. Let $$\mathscr{B}_0=\{(-x,x):0<x\in\Bbb R\setminus\Bbb Q\}\;,$$ the set of symmetric open intervals around $0$ with irrational endpoints. Clearly $\mathscr{B}_0\cap\mathscr{B}=\varnothing$, but $\mathscr{B}_0$ is nevertheless a nbhd base at $0$. 
The best way to explain (3) is to give an actual example showing that it is possible to have a countable nbhd base at each point but no countable base for the whole topology. Probably the simplest example is to let $X$ be an uncountable set and $\tau=\wp(X)$ the discrete topology on $X$. For each $x\in X$ the collection $\big\{\{x\}\big\}$ is a nbhd base at $x$, and it’s certainly countable! However, any base for $\tau$ must include every one of the singleton sets $\{x\}$ for $x\in X$ and must therefore be uncountable.
H: Find all $a$ such that $y=\log_\frac{1}{\sqrt3} (x-2a) = \log_3(x-2a^3-3a^2) $ Find all values of parameter $a\in \mathbb{Z}$ such that $$y= \log_\frac{1}{\sqrt3} (x-2a)$$ $$and$$ $$y = \log_3(x-2a^3-3a^2)$$ intersect at points with whole coordinates. This is what I did: $$\log_\frac{1}{\sqrt3}(x-2a) = \frac{log_3(x-2a)}{\log_33^{-1/2}}$$ $$\frac{\log_3(x-2a)}{\log_33^{-1/2}} = \log_3(x-2a^3-3a^2)$$ $$-2\log_3(x-2a) = \log_3(x-2a^3-3a^2)$$ $$\log_3\frac{1}{(x-2a)^2} = \log_3(x-2a^3-3a^2)$$ $$\frac{1}{(x-2a)^2} = (x-2a^3-3a^2)$$ I got up to this point, not sure how to proceed. AI: After getting a common base, $3$, and exponentiating, we have $$ \frac{1}{(x-2 a)^2} = -3 a^2 - 2 a^3 + x $$Since we are requiring $a,x$ to be whole numbers, the RHS is an integer which means the LHS must be as well: then $x-2a=\pm 1$. However, since these came from logarithms, only the case $x-2a=1$ is allowed. Then the equation becomes $1=-3a^2-2a^3+1+2a$, whose solutions are $a=\{-2,0,1/2\}$. So in summary, the point $(a,x)=(0,1)$ is the only whole number solution , and there is also the solution $(a,x)=(-2,-3)$ if one admits negative integers.
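Since both logarithm arguments must be positive, the intersection condition above is equivalent to $(x-2a)^2\,(x-2a^3-3a^2)=1$ with both arguments positive, and a brute-force scan over a small box of integers (the ranges are arbitrary) recovers exactly the two points mentioned in the answer; at both of them the common $y$-value is $0$:

solutions = []
for a in range(-20, 21):              # integer parameter a
    for x in range(-200, 201):        # integer x-coordinate of the intersection
        u = x - 2 * a                 # argument of the first logarithm
        v = x - 2 * a**3 - 3 * a**2   # argument of the second logarithm
        if u > 0 and v > 0 and u * u * v == 1:
            solutions.append((a, x))
print(solutions)                      # [(-2, -3), (0, 1)]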
H: Let $C_0(Q)=\{f:\Bbb{N}→\Bbb{Q}:(\forall ε>0)(\exists N∈\Bbb{N})(\forall m>N)(|f(m)|<ε)\}$. If $\{a_n\},\{b_n\}∈C_0(Q)$ then $\{a_n+b_n\}∈C_0(Q)$. Suppose that we have $C_{0}(Q)=\{f:\mathbb{N}\rightarrow\mathbb{Q}:(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall m>N)(|f(m)|<\epsilon)\}$ and we want to show that if $\{a_{n}\}$,$\{b_{n}\}\in C_{0}(Q)$ it holds that $\{a_{n}+b_{n}\}\in C_{0}(Q)$. I have looked at the problem, and I think it can be solved using the Triangle Inequality. We have that in general $|a_{n}+b_{n}|\leq|a_{n}|+|b_{n}|$. Then, given the first proposition, it must be that $|a_{n}|<\epsilon,$ and it must be that $|b_{n}|<\epsilon,$ and so, it must be that $|a_{n}|+|b_{n}|<2\epsilon.$ By the triangle inequality, $|a_{n}+b_{n}|\leq|a_{n}|+|b_{n}|,$ and we have that $|a_{n}+b_{n}|<2\epsilon$. Now, here is the main question: this manipulation implies that $\frac{1}{2}|a_{n}+b_{n}|<\epsilon$ but not entirely $|a_{n}+b_{n}|<\epsilon,$ and I have yet to think of a manipulation from the former to the latter. This is the most direct way I can think of proving this. Any thoughts? AI: Consider two sequences $\{a_n\}$ and $\{b_n\}$ in $C_0(Q).$ Given any $\varepsilon > 0,$ there exists an integer $N(a)$ such that for all integers $m \geq N(a) + 1,$ we have that $|a_m| < \frac \varepsilon 2.$ Likewise, there exists an integer $N(b)$ such that for all integers $m \geq N(b) + 1,$ we have that $|b_m| < \frac \varepsilon 2.$ Consider the integer $N = \max \{N(a), N(b)\}.$ Observe that for all integers $m \geq N + 1,$ we have that $$|a_m + b_m| \leq |a_m| + |b_m| < \frac \varepsilon 2 + \frac \varepsilon 2 = \varepsilon$$ by the Triangle Inequality. We conclude that $\{a_n + b_n \}$ is an element of $C_0(Q),$ as desired.
H: Show that constant function is integrable. An exercise is asked in the book to Show that the constant function is integrable and find its value of integration. I tried in the following way to show the statement Suppose $f:\mathbb [a,b]\to \mathbb R$ such that $ f(x)=\lambda$ where $\lambda$ is any constant. Let $P$ be any partition on $[a,b]$, ie $$P=\left\{a=t_0<t_1<t_2\cdots< t_n=b\right\}$$ then Upper Darboux sum and Lower Darboux sum we evaluate by $$\begin{aligned} U(f,P)=\sum_{1\leq k\leq n}\operatorname{Sup}\left\{f(x): x\in [t_{k-1},t_k]\right\}(t_k-t_{k-1})\\ L(f,P)=\sum_{1\leq k\leq n}\operatorname{inf}\left\{f(x): x\in [t_{k-1},t_k]\right\}(t_k-t_{k-1}) \end{aligned}$$ Now what about the supremum and infimum of $f(x)$? If $\operatorname{sup}\left\{f(x): x\in[a,b]\right\}=\lambda$ but then $f(x)$ is constant so infimum of $f(x)$ is also $\lambda$ which immediately follows that $$L(f,P)=\lambda(b-a)=U(f,P)$$ Further $$L(f)\geq L(f,P) ,\; U(f)\leq U(f,P) \implies L(f)=U(f)=\lambda(b-a)$$ shows that $f(x)$ is integrable and its values is $$L(f)\leq \int_a^b f(x) \leq U(f)\implies \int_a^b f(x) dx =\lambda(b-a)$$ Now how to show that the supremum and infimum of the constant function is constant itself with/without using completeness property? Any sorts of help will be appreciated. Thank you. AI: If $$(\forall x\in[t_{k-1},t_k])\;\; f(x)=\lambda$$ then $$(\forall x\in [t_{k-1},t_k])\;\; \lambda\le f(x)\le \lambda$$ $$\implies \lambda\le \sup_{[t_{k-1},t_k]}f\le \lambda$$ $$\implies \sup_{[t_{k-1},t_k]}f=\lambda$$ This gives $$U(f,P)-L(f,P)=0<\epsilon$$
H: Using Gaussian Kernel to Define Distance I want to define a region of ball with radius $R$ such that close to center the value is 1 and at the boundary, the value is 0. The gaussian kernel comes to my mind but I would like to know how can I set the $\sigma^2$ such that it works with given radius? $$f(x,x')=e^{-\tfrac{||x-x'||^2}{2\sigma^2}}$$ such that at $x'=R$ the function is 0? AI: If you want a function that goes to $0$ at $\Vert x-x^\prime\Vert=R$, you could instead use $\left(1-\tfrac{\Vert x-x^\prime\Vert^2}{R^2}\right)^+$ with $y^+:=\max\{y,\,0\}$. A Gaussian kernel never becomes $0$.
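A tiny numerical sketch contrasting the two choices (the values of $R$ and $\sigma$ are arbitrary): the truncated quadratic is exactly $1$ at the centre and exactly $0$ at distance $R$, while the Gaussian only tends to $0$:

import numpy as np

R = 2.0                        # arbitrary radius
sigma = R / 3.0                # arbitrary Gaussian width, for comparison

def bump(r):                   # (1 - r^2/R^2)^+ : equals 1 at r = 0 and 0 for r >= R
    return np.maximum(0.0, 1.0 - (r / R) ** 2)

def gaussian(r):               # exp(-r^2 / (2 sigma^2)) : positive for every r
    return np.exp(-r ** 2 / (2 * sigma ** 2))

for r in [0.0, 0.5 * R, R, 1.5 * R]:
    print(f"r = {r:4.1f}   bump = {bump(r):.4f}   gaussian = {gaussian(r):.6f}")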
H: Diagonal arguments for uncountable lists? The diagonal argument is a general proof strategy that is used in many proofs in mathematics. I want to consider the following two examples: There is no enumeration of the real numbers. Because if there were such an enumeration of all real numbers, one could define a real number $x$ not contained in the list by considering, for each $n$, the $n$th decimal place of the $n$th real number, say $a_n^n$, and set $x=0.b_1b_2\dots$, where $b_n$ is chosen in such a way that $b_n\not=a_n^n$ for each $n$. Of course, whenever I use the variable $n$, $n$ should range over all natural numbers. There is a computable function that is not primitive recursive. Since the set of all primitive recursive functions is countable, one can enumerate all the primitive recursive functions: $f_1, f_2, \dots$ Now $n\mapsto f_n(n)+1$ is not primitive recursive. In these two arguments one uses the diagonal method to construct an element not contained in a list. In both proofs, this list is countable, thus the families $(a_n^n)_n$, $(b_n)_n$, and $(f_n)_n$ are indexed by the set $\mathbb N$. Question: Are there similar usages of diagonal arguments, where the index set is uncountable? Note: I know Cantor's theorem, which is true for all sets (no matter what cardinality). So this would be an abstract answer to my question. But I would be interested if there are other examples, maybe some more concrete ones. AI: Yes. Let $\kappa$ be a regular cardinal, for example $\omega_1$. We say that $C\subseteq\kappa$ is unbounded if $\sup C=\kappa$, and we say that it is closed if $\sup(C\cap\beta)=\beta\to\beta\in C$. If $C$ is closed and unbounded we call it a club. In some sense the notion of a club is a good approximation of "almost everywhere" as far as the order topology is concerned. Now you might ask, how many clubs are there? Well, there are certainly $\kappa$ of them, since $\kappa\setminus\alpha$ is a club. And it is not hard to check that the intersection of any two clubs is a club, and in fact if $\gamma<\kappa$ and $\{C_\alpha\mid\alpha<\gamma\}$ is a collection of clubs, then $\bigcap C_\alpha$ is a club. So we have at least $\kappa^{<\kappa}$ clubs. But is that everything? Well, clearly the intersection of $\kappa$ different clubs can be empty, just look at $\bigcap_{\alpha<\kappa}\kappa\setminus\alpha$. But we can define the diagonal intersection of $\{C_\alpha\mid\alpha<\kappa\}$ which is denoted by $\triangle_{\alpha<\kappa}C_\alpha$ and defined by $$\beta\in\triangle_{\alpha<\kappa}C_\alpha\iff\beta\in\bigcap_{\alpha<\beta}C_\alpha.$$ This is a form of diagonalization, and it turns out that the diagonal intersection of clubs is a club. Therefore there are $\kappa^\kappa$ different clubs.
H: Every irreducible closed has a generic point If $X$ is a scheme and $Z$ an irreducible closed subset, then $\exists\,\xi\in Z$ with $Z=\overline{\{\xi\}}$ Here is my attempt: Let $X=\bigcup_i U_i$ be an open affine cover. Since $Z$ is irreducible, then $Z\cap U_i$ is a closed irreducible subset of the affine scheme $U_i$. This means there is a prime ideal $\mathfrak{p}_i\in U_i$ such that $Z\cap U_i=V_i(\mathfrak{p}_i)$ (where $V_i(I):=\{\mathfrak{p}\in U_i\mid \mathfrak{p}\supset I\}$) On the other hand, $V_i(\mathfrak{p}_i)=\overline{\{\mathfrak{p}_i\}}^{U_i}$ (closure in $U_i$) or, equivalently, $V_i(\mathfrak{p}_i)=\overline{\{\mathfrak{p}_i\}}\cap U_i$. Therefore: $$Z=\bigcup_iZ\cap U_i=\bigcup_i\overline{\{\mathfrak{p}_i\}}\cap U_i\subset\bigcup_i\overline{\{\mathfrak{p}_i\}}$$ Conversely, since $\mathfrak{p}_i\in Z$, then $\overline{\{\mathfrak{p}_i\}}\subset Z$ for all $i$, so $\bigcup_i\overline{\{\mathfrak{p}_i\}}\subset Z$ and: $$Z=\bigcup_i\overline{\{\mathfrak{p}_i\}}$$ By irreducibility of $Z$, we have $Z=\overline{\{\mathfrak{p}_i\}}$ for some $i$. $_\blacksquare$ Is this correct? I'm not sure about the conclusion since the union may be infinite, which made me suspicious. AI: It's not quite correct that $Z\cap U_i$ is $V$ of a prime ideal (if you mean $V$ to give a subscheme, not just a subset, which you should): think about the case $V(x^2)$. This is irreducible but not reduced. This is an error, but it's a fixable one, since you can just take the reduction everywhere and end up with the same topological space and then apply your argument. The final part of your argument is indeed problematic: just because an irreducible topological space is a union of infinitely many closed subsets, one cannot conclude that one subset is necessarily the whole thing. Consider the underlying topological space of $\Bbb A^1_\Bbb C$ without the generic point, for instance - it's irreducible and also the union of infinitely many closed points. A way to fix this is to show that the generic point is in every affine open and generic points of irreducible affine schemes are unique. This shouldn't be too difficult, but if you have trouble, drop me a comment.
H: Find maximum point of $f(x,y,z) = 8x^2 +4yz -16z +600$ with one restriction I need to find the critical points of $$f(x,y,z) = 8x^2 +4yz -16z +600$$ restricted by $4x^2+y^2+4z^2=16$. I constructed the lagrangian function $$L(x, y, z, \lambda ) = 8x^2 +4yz -16z +600 - \lambda (4x^2+y^2+4z^2-16) $$ but I'm very confused about how to determine those points. I know I need to make a system with all the first derivatives of $L$ equaled to $0$. I did it but every time I try to solve it I get different solutions. How can I get the points? Thanks. AI: There are a number of constrained critical points. If you set the gradient of $L$ equal to $0$, you find that $$(4x,z,y-4) = \lambda(4x,y,4z).$$ If $x\ne 0$, we must have $\lambda = 1$ and then $z=y$ and $y-4=4z$. This gives $y=z=-4/3$ and $x=\pm 4/3$. However, if $x=0$, then we also get additional solutions by setting \begin{equation} (z,y-4) = \lambda(y,4z),\tag{$*$} \end{equation} from which we get $$\frac zy = \frac{y-4}{4z}.$$ (Note that we cannot have $x=y=z=0$ on our constraint set, so this is fine. Note that ($*$) says that $z=0$ if and only if $y=0$.) This yields $4z^2=y(y-4)$, which, if I'm not mistaken, leads, along with the constraint equation, to $y^2-2y-8 = (y-4)(y+2) = 0$, so $y=4$ or $y=-2$. These give additional critical points $(0,4,0)$ and $(0,-2,\pm\sqrt3)$. Because there's such dispute amongst the various answerers, let me check the values of $f$ at these various points: \begin{multline} f(\pm 4/3, -4/3, -4/3) = \frac{1928}3, \quad f(0,4,0) = 600, \\ f(0,-2,\sqrt3) = 600 - 24\sqrt 3,\quad f(0,-2,-\sqrt3) = 600 + 24\sqrt3. \end{multline} Indeed, $1928/3$ wins out for the maximum value, but only just barely!! Philosophical Remark: You do not need to solve explicitly for $\lambda$; you can eliminate it as I did by taking ratios. It is to emphasize this pedagogical point that I wrote out the solution so carefully. :)
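A quick numerical check (plain Python, no extra libraries) that the five critical points found above lie on the constraint surface and that $f$ is largest at $(\pm 4/3,\,-4/3,\,-4/3)$:

from math import sqrt

def f(x, y, z):
    return 8 * x**2 + 4 * y * z - 16 * z + 600

def g(x, y, z):                       # constraint; should equal 16 at each critical point
    return 4 * x**2 + y**2 + 4 * z**2

points = [(4/3, -4/3, -4/3), (-4/3, -4/3, -4/3),
          (0.0, 4.0, 0.0), (0.0, -2.0, sqrt(3)), (0.0, -2.0, -sqrt(3))]

for p in points:
    print(p, " g =", round(g(*p), 10), " f =", round(f(*p), 4))
# f(+-4/3, -4/3, -4/3) = 1928/3 ~ 642.6667 is the largest of the critical values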
H: If matrix $X$ & $Y$ anti-commute then show that the two matrices are linearly independent Show that if the matrices $X_1$ and $X_2$ anti-commute, then the two matrices are linearly independent and $X_i ^{\,2}\ne0$. I know $X_1X_2=-X_2X_1$ from the definition, then I tried the following: $$X_1^{-1}X_1X_2=-X_1^{-1}X_2X_1$$ $$X_2 = -X_1^{-1}X_2X_1 \ (1)$$ and $$X_1X_2X_2^{-1}=-X_2X_1X_2^{-1}$$ $$X_1=-X_2X_1X_2^{-1} \ (2)$$ Then I'll substitute (1) into (2) to get: $$X_1=X_1^{-1}X_2X_1X_1X_2^{-1}$$ $$X_1=-X_1^{-1}X_1X_2X_1X_2^{-2}$$ $$X_1=X_1X_2X_2^{-2}$$ But I'm not sure if this does anything. AI: If $X_1=\lambda X_2$, then $0=X_1X_2+X_2X_1=2\lambda X_2^{\,2}$. So, either $\lambda=0$ (in which case $X_1=0$) or $X_2^{\,2}=0$.
H: Why does an operator commuting with a finite rank operator have eigenvalues? Let $A,B\in B(H)$ be such that $AB=BA$ where $B\neq 0$ is a finite rank operator. Does it follow that $A$ has eigenvalues? If yes, why please? Thanks a lot. AI: Hint: $A$ maps the range of $B$ (which is a nonzero finite-dimensional space) to itself, since $A(Bx)=B(Ax)\in\operatorname{ran}B$.
H: Evaluating $\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx$ How can i evaluate this integral, maybe differentiation under the integral sign? i started expressing the integral as the following, $$\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx=\int _0^1\frac{\ln \left(x^2+x+1\right)}{x}\:dx-\int _0^1\frac{\ln \left(x^2+x+1\right)}{x+1}\:dx\:$$ But i dont know how to keep going, ill appreciate any solutions or hints. AI: I dont think Feynman's trick would work best here, following your path: $$\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx=\int _0^1\frac{\ln \left(x^2+x+1\right)}{x}\:dx-\underbrace{\int _0^1\frac{\ln \left(x^2+x+1\right)}{x+1}\:dx}_{x=\frac{1-t}{1+t}}\:$$ $$=\int _0^1\frac{\ln \left(x^3-1\right)}{x}\:dx-\int _0^1\frac{\ln \left(x-1\right)}{x}\:dx-\int _0^1\frac{\ln \left(x^2+3\right)}{x+1}\:dx+2\int _0^1\frac{\ln \left(x+1\right)}{x+1}\:dx$$ $$-\sum _{k=1}^{\infty }\frac{1}{k}\int _0^1x^{3k-1}\:dx\:+\sum _{k=1}^{\infty }\frac{1}{k}\:\int _0^1x^{k-1}\:dx-\int _0^1\frac{\ln \left(x^2+3\right)}{x+1}\:dx+\ln ^2\left(2\right)$$ To solve that remaining integral you can use the identity i derived here So, $$=\frac{2\zeta \left(2\right)}{3}-(-\frac{\ln ^2\left(3\right)}{4}-\frac{\text{Li}_2\left(-\frac{1}{3}\right)}{2}-\frac{\ln ^2\left(4\right)}{4}+\frac{\ln \left(3\right)\ln \left(4\right)}{2}-\arctan ^2\left(\sqrt{\frac{1}{3}}\right)+\ln \left(2\right)\ln \left(4\right))+\ln ^2\left(2\right)$$ $$\frac{\pi ^2}{9}+\frac{\ln ^2\left(3\right)}{4}+\frac{\text{Li}_2\left(-\frac{1}{3}\right)}{2}+\ln ^2\left(2\right)-\ln \left(3\right)\ln \left(2\right)+\frac{\pi ^2}{36}-2\ln ^2\left(2\right)+\ln ^2\left(2\right)$$ So your integral's solution is, $$\boxed{\int _0^1\frac{\ln \left(x^2+x+1\right)}{x\left(x+1\right)}\:dx=\frac{5\pi ^2}{36}+\frac{\ln ^2\left(3\right)}{4}+\frac{\text{Li}_2\left(-\frac{1}{3}\right)}{2}-\ln \left(3\right)\ln \left(2\right)}$$
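A numerical cross-check of the boxed value (this uses mpmath, an arbitrary choice of tool; the dilogarithm is evaluated with polylog): the quadrature and the closed form agree, both about $0.7565$.

import mpmath as mp

mp.mp.dps = 30
numeric = mp.quad(lambda x: mp.log(x**2 + x + 1) / (x * (x + 1)), [0, 1])
closed = (5 * mp.pi**2 / 36 + mp.log(3)**2 / 4
          + mp.polylog(2, mp.mpf(-1) / 3) / 2 - mp.log(3) * mp.log(2))

print(numeric)   # ~0.7565...
print(closed)    # agrees with the quadrature value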
H: Formally Introducing the Intersection Symbol into ZFC Set Theory I am currently reading Lectures in Logic and Set Theory: Volume 2, Set Theory by Tourlakis. In the book, he formally introduces the power set notation, $\mathcal{P}(A)$, as well as union, $\bigcup A$, into the formal, first-order theory of sets as unary function symbols by extending the theory through definition. This process is described in the linked Wikipedia article and also here but, in summary, to introduce a function symbol into our theory, we must first find a defining formula for the function, \begin{equation}\forall x_1\ldots\forall x_n\phi(f(x_1,\ldots, x_n), x_1,\ldots x_n),\tag{Defining Axiom}\end{equation} where $\phi(y,x_1,\ldots,x_n)$ is a first-order formula with free variables $y,x_1,\ldots,x_n$, then take this defining axiom and add it to our formal theory as a non-logical axiom. However, this is provided that we first have a proof of the existence and uniqueness of such an object for every possible term: $$\forall x_1\ldots\forall x_n\exists ! y\phi(y,x_1,\ldots x_n)\tag{Existential Formula}.$$ My question is how is this done for the intersection symbol, $\bigcap$? Tourlakis avoids the issue since $\bigcap\varnothing$ is not a set and thus "violates" the existential formula. Specifically, he writes, "We do not feel inclined to perform acrobatics just to get around the fact that $\bigcap\varnothing$ cannot be a formal term: it is not a set." I would like to know how this issue is resolved and, if the "acrobatics" required to get around this issue is too extreme, how I can be assured that leaving this as a loose end will not cause too many issues in the theory. AI: Eric Wofsey has described the "default value" approach. Another option is to simply modify the semantics of first-order logic to admit partial functions. Then any time we have a formula $\varphi(x_1,...,x_n,y)$ such that our theory proves that for each $a_1,...,a_n$ there is at most one $b$ with $\varphi(a_1,...,a_n,b)$, we can introduce a symbol for the partial function defined by $\varphi$. Of course, doing this requires going back and modifying the proof system, which is some tedious work that the "default value" approach doesn't require us to do. However, it's ultimately not that hard, and allowing partial functions is arguably more natural anyways since in informal mathematics we use partial functions (e.g. "$x\over y$") all the time without worry.
H: $\nabla f(x)^T(y-x) \geq 0$ if $x$ is optimal for a convex $f(x)$. In the Convex Optimization book by Boyd and Vandenberghe, it states that if $x \in X$ and $X$ denotes the feasible set, and $f(x)$ is a convex objective function, then $x$ is optimal IFF $$\nabla f(x)^T(y-x) \geq 0$$ if $x$ is optimal for a convex $f(x)$. I understand this logic, but I don't understand why it's $\geq$ instead of $=$. If $x$ is optimal, then isn't the gradient of the objective function $f(x)$ zero there? AI: This is probably in the context of a constrained optimization problem "minimize $f$ within convex set $X$." In constrained optimization, the gradient need not be zero at the optimizer (specifically, when the optimizer is not in the interior of $X$). The optimality condition is "$\nabla f(x)^\top (y-x) \ge 0$ for all $y \in X$." When $X$ is the entire space, then this result generalizes to the "gradient equals zero" condition for unconstrained optimization, since the vector $y-x$ can point in any direction.
H: if $\{a_{n}\}\in C_{0}(Q)$ and $\{b_{n}\}\in C(Q)$ then $\{a_{n}\cdot b_{n}\}\in C_{0}(Q)$. Suppose that we have $C_{0}(Q)=\{f:\mathbb{N}\rightarrow\mathbb{Q}:(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall m>N)(|f(m)|<\epsilon)\}$ and we have $C(Q)=\{f:\mathbb{N}\rightarrow\mathbb{Q}:(\forall\epsilon>0)(\exists N\in\mathbb{N})(\forall m,n>N)(|f(m)-f(n)|<\epsilon)\}$ and we want to show that if $\{a_{n}\}\in C_{0}(Q)$ and $\{b_{n}\}\in C(Q)$ it holds that $\{a_{n}\cdot b_{n}\}\in C_{0}(Q)$. I have looked at the problem, and I think it can be solved using the Triangle Inequality. We have that, $|a_{m}-a_{n}|<\epsilon_{1}$ and that $|b_{n}|<\epsilon_{2}$ and arrived at the idea that$|a_{m}-a_{n}|+|b_{n}|<\epsilon_{1}+\epsilon_{2}$ and since$|a_{m}-a_{n}+b_{n}|<(|a_{m}-a_{n}|+|b_{n}|)$ then it must be that $|a_{m}-a_{n}+b_{n}|<\epsilon_{1}+\epsilon_{2}$. From here the idea is to after manipulation extract a $|a_{n}\cdot b_{n}|$ on the right side and gain some insight into the form of $\epsilon_{1}$ and $\epsilon_{2}$ such that $\epsilon_{1}+\epsilon_{2}=\epsilon$ and $|a_{n}\cdot b_{n}|<\epsilon$. Any thoughts? AI: Hint: Sequences in $C(Q)$ are Cauchy, hence converge, hence are bounded. What can you say about a bounded sequence times a sequence going to $0?$
H: Existence of ordered bases $\beta$ and $\gamma$ for $V$ and $W$, such that $[T]_{\beta}^{\gamma}$ is a diagonal matrix - Questions about proof Let $V$ and $W$ be vector spaces such that $\text{dim}(V) = \text{dim}(W)$, and let $T:V \to W$ be linear. Show that there exist ordered bases $\beta$ and $\gamma$ for $V$ and $W$, respectively, such that $[T]_{\beta}^{\gamma}$ is a diagonal matrix. My question pertains to two steps taken in the proof where I do not fully see the jump in logic. Here are the parts where I am having trouble. My first question has to do with Step 3: what is being accomplished by writing $$\sum_{j = 1}^{k}c_{j}v_{j} - \sum_{i = k+1}^{n}c_{i}v_{i} = 0 \ ?$$ I get that $\sum_{i = k+1}^{n}c_{i}v_{i}$ is in the null space of $T$ and as such can be written as a linear combination of the basis of the null space. But how does this lead into Step 4? For Step 4 I also ask: how is it that $$c_{1} = c_{2} = \dots = c_{k} = c_{k+1} = \dots = c_{n} = 0 \ ?$$ I get why $c_{1} = c_{2} = \dots = c_{k} = 0$; that's because these were the coefficients for the basis vectors from the null space. But the other ones? Why are they all $0$? The other steps of the proof after this make sense to me; it was mainly those two steps. I'm just frustrated because it feels as if I attempt these proofs and then I remember some established results, but never the necessary added results to answer the question... I'm rambling... apologies. AI: The images contain typos. In Step 3, after the first line, the $u_j$'s are wrongly converted into $v_j$'s. They should still be $u_j$'s, and so you will get a linear combination of elements of the basis $\beta$ which equals $0$ in the end. This justifies all coefficients being deduced to be $0$.
H: Division of two polynomial expressions Is $$1/(X^n - 1), n \in N$$ a polynomial? Intuitively I would say yes, because 1 is a polynomial($ X^0$) and so is $X^n - 1$. But Sage (The CAS) appears to disagree, when I type the expression in and call the function is_polynomial() I get False. Can somebody explain why this expression isn't a polynomial? AI: Polynomials in $x$ are defined as a finite sum of terms of the form $ax^n$. $a$ is restricted to some set like the integers, rationals, reals, integers $\bmod $ something, complex numbers. $n$ is restricted to the nonnegative integers. Rational functions of polynomials do not count, so Sage is correct.
H: Consider the operator on $C[0,1]$ $T(f)=f(\sin(x))$ Consider the operator on $C[0,1]$ $T(f)=f(\sin(x))$ Show that this operator is not compact. I honestly do not know how to show this. One idea is to find a weakly convergent sequence, whose image does not converge strongly. But I do not know a nice sequence that converges weakly in $C[0,1]$. Another idea is to use Ascoli thoerem. Show that the image of an open ball cannot be equicontinuous. I think that is the solution, i just do not know how to find a sequence of functions that make equicontinuity fail. Seems hard to do. AI: $f_n(x)=e^{-nx}$ defines a bounded sequence in $C[0,1]$. If $T$ is compact then $e^{-n\sin x}$ would have a norm convergent subsequence. But this sequence converges point-wise to $1$ when $x=0$ and $0$ when $0<x\leq 1$. Since the limit function is not continuous it follows there cannot be a uniformly convergent subsequence. Hence $T$ is not compact.
H: Definition of Mapping Cylinder and Cone I think I understand the idea of the mapping cylinder and cone, except that I am confused about what is, and is not, included. Specifically suppose we have a CW complex X and a CW subcomplex A; in other words, a CW pair (X,A). Let Y denote X without A joined, and consider the associated mapping cone (or cylinder) that joins A to Y in order to form X. My question is this: Is all of Y included in the mapping cone/cylinder, or just the part that A attaches to? Is X a subset of the mapping cylinder or cone? The confusion arises in my mind because in the second illustration on p2 of Hatcher, he clearly indicates that if one object is joined to a second object, all of the second object is included in the mapping cylinder (or cone). But, on the top of p14 we have a CW pair (X,A), and Hatcher refers to the union of X and CA, where CA is the mapping cone that attaches A to X. But such a union is always just CA according to the illustration on p2, so his equality reduces to X/A=CA/CA, which makes no sense because CA/CA is just a point. So, from my perspective, there is a problem either way. What am I missing? AI: On p.2 Hatcher defines the mapping cylinder $M_f$ of a map $f : X \to Y$ as the quotient space of the disjoint union $(X \times I) \sqcup Y$ obtained by identifying each $(x,1) \in X \times I$ with $f(x) \in Y$. The mapping cone $C_f$ of $f$ is defined similarly on p. 13. For a CW-pair $(X,A)$ we consider the inclusion map $i : A \to X$ and can form the mapping cylinder $M_i$ and the mapping cone $C_i$. The latter is $X \cup CA$, where a little bit laxly $[a,0] \in CA$ is thought as the same point as $a \in A \subset X$. This means $X \cap CA = A \subset X$. Perhaps your confusion comes from the fact that the mapping cylinder (cone) is defined for maps $f : X \to Y$, but for CW-pairs $(X,A)$ one considers the inclusion map $i : A \to X$. This means that $f$ is instanciated by $i$, $X$ by $A$ and $Y$ by $X$.
H: What does it mean for a category to have a semi-automorphism? A while back, somebody asked me about why automorphisms are always isomorphisms. I bobbled the question a bit. Invertability is always one of those nice things that I take for granted. But he got me wondering. If I have a morphism whose source and target is the same class which is not invertable, what does that mean? I even had to make up the name, semi-automorphism, stealing from the naming of a semigroup, since I could not find a term to describe such a morphism. What would it mean if a category had such a morphism? Being "structure" preserving, I naturally think of reduction rules in type theory, but is there a more fundamental definition for what they would mean? And, ideally, I'd like to understand this in a way which explains why isomorphisms whose source and target are the same are common enough to earn so much attention in category theory while these non-invertible equivalents do not. AI: The word you're looking for is "endomorphism", and it is not at all rare for a category to have endomorphisms which are not automorphisms. Consider, for example, the category of vector spaces with morphisms given by linear transformations - not every linear map from a vector space to itself is invertible (i.e., there are square matrices which are not invertible).
H: If $f$ is periodic and even, what can I conclude about $\int f \;dx$? Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be a periodic, even and differentiable function. If $L>0$ is the minimal period of $f$, what can I conclude about $$I :=\int_{0}^{L} f(x)\; dx?$$ By the hypotheses we have $$f(0)=f(L) \quad \text{and} \quad f'(0)=0.$$ My intuition tells me that we can conclude that $ I = 0 $. Is this, in general, true? AI: Take $f(x)=\sin^{2}x$ for a counter-example. Here $L=\pi$ and the integral is strictly positive.
H: When does a monic arrow in $\mathcal{C}^\rightarrow$ implies the corresponding arrows are monic in $\mathcal{C}$? Let $\mathcal{C}$ be a category, and let $(\varphi, \psi)$ be an arrow in the arrow category $\mathcal{C}^\rightarrow$, where $\varphi : a \rightarrow a'$, $\psi : b \rightarrow b'$, and there exist $f : a \rightarrow b$, $g : a' \rightarrow b'$ such that the corresponding square commutes (that is, $\psi \circ f = g \circ \varphi$). Let's also assume $(\varphi, \psi)$ is monic in $\mathcal{C}^\rightarrow$. What does it tell us about $\varphi, \psi$ in $\mathcal{C}$? First, I managed to prove the following: if $\xi_1, \xi_2 : c \rightarrow a, \chi_1, \chi_2 : d \rightarrow b$ are such that $(\xi_1, \chi_1)$ and $(\xi_2, \chi_2)$ are arrows in $\mathcal{C}^\rightarrow$ and such that $\varphi \circ \xi_1 = \varphi \circ \xi_2$ and $\psi \circ \chi_1 = \psi \circ \chi_2$, then $\xi_1 = \xi_2$ and $\chi_1 = \chi_2$: That is, $\varphi$ and $\psi$ are monomorphisms if we restrict our arrows in $\mathcal{C}$ to the ones that make it into arrows in the arrow category. Can we do better? Let's take arbitrary $\xi_1, \xi_2 : c \rightarrow a$, also take $c$ as $d$ (and hence $h = \text{id}_c$), and take $\chi_1 = f \circ \xi_1, \chi_2 = f \circ \xi_2$. Then it follows that, in particular, $\varphi \circ \xi_1 = \varphi \circ \xi_2 \Rightarrow \xi_1 = \xi_2$, hence $\varphi$ is monic in $\mathcal{C}$. Can we do a similar trick to prove $\psi$ is monic? The best I could come up with is the following. Assume $\mathcal{C}$ has initial objects, take $c = 0$ and arbitrary $\chi_1, \chi_2 : d \rightarrow b$: Then the leftmost square commutes, and by a similar argument we see $\psi$ is monic. Does it make sense? If so, can I do the $\psi$ part without assuming initial objects? AI: Suppose that $C$ has an initial object $0$. Then, the codomain functor $C^{\rightarrow} \to C$ has a left adjoint sending each object $X$ of $C$ to the unique arrow $0 \to X$, and so it preserves monomorphisms. Your proof that the codomain functor preserves monomorphisms is just a special case of the general proof of the fact that right adjoint functors preserve monomorphisms. Also, regardless of the existence of an initial object in $C$, the domain functor $C^{\rightarrow} \to C$ always has a left adjoint sending each object of $C$ to its identity arrow. And again, your proof that the domain functor preserves monomorphisms is just a special case of the general proof for right adjoint functors.
H: Limit of $\frac{x^a-a^x}{a^x-a^a}$ Limit of $$\lim_{x\to a} \frac{x^a-a^x}{a^x-a^a},$$ where $$a\in (0,\infty), a\neq1.$$ I know to use L'Hôpital's rule, but I am confused by the requirements on $a$. AI: The function that you are computing the limit of, $$f(x)=\frac{x^a-a^x}{a^x-a^a}\,,$$ is not defined for either $a=1$ or $a=0$, as the denominator would be zero for all $x$. Also, we need $a\geq0$, because for $a<0$ the expressions $a^x$ and $x^a$ need not be real numbers. Those are the reasons for the conditions on $a$: $$ a\geq0 \wedge (a\notin\{0,1\}) \Rightarrow a\in(0,\infty)\wedge a\neq1.$$ To compute the limit, use L'Hôpital's rule knowing that $\frac{{\rm d}}{{\rm d}x}a^x = a^x\log(a)$ and $\frac{{\rm d}}{{\rm d}x}x^a = ax^{a-1}$.
H: What is the number of modes in prime product series? Let $S$ be the number of possible solutions for the formula below $$a \cdot b \equiv c \pmod{x}$$ where $a,b,c,x \in \mathbb{Z}$ and $0 < a \leq b < x$. I would like to find $c$ and $x$ for which $S/x$ will be the smallest. I tested all the $c$ and $x$ values for $5<x<1{,}000{,}000$, $c < x$, and found that the smallest values of $S/x$ are: $x=2\cdot3;\ c=5;\ S/x = 0.16666666666666666$ $x=2\cdot3\cdot5;\ c=7;\ S/x = 0.13333333333333333$ $x=2\cdot3\cdot5\cdot7;\ c=11;\ S/x = 0.11428571428571428$ $x=2\cdot3\cdot5\cdot7\cdot11;\ c=13;\ S/x = 0.1038961038961039$ $x=2\cdot3\cdot5\cdot7\cdot11\cdot13;\ c=17;\ S/x = 0.0959040959040959$ $x=2\cdot3\cdot5\cdot7\cdot11\cdot13\cdot17;\ c=19;\ S/x = 0.09026267849797262$ With this clear pattern of prime number series, is there any efficient way to calculate $S$ for bigger numbers? AI: You can use Euler's totient function to show that: if $x$ is a product of the first distinct primes, i.e. $x=\prod p_i$, and if $c$ is coprime to $x$, then $$\frac Sx=\frac1{2x}\prod\left(p_i-1\right)=\frac{\varphi(x)}{2x},$$ i.e. $S=\tfrac12\varphi(x)$ (for $c$ as in your list; if $c$ happens to be a square modulo $x$, the pairs with $a=b$ add a small correction).
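A quick check of this formula against brute force for the $(x,c)$ pairs listed in the question (in these cases $c$ is coprime to $x$, so the count is $\varphi(x)/2$); a short Python sketch:

from math import gcd

def brute_count(x, c):
    return sum(1 for a in range(1, x) for b in range(a, x) if a * b % x == c)

def phi(x):
    return sum(1 for k in range(1, x + 1) if gcd(k, x) == 1)

for x, c in [(6, 5), (30, 7), (210, 11), (2310, 13)]:
    S = brute_count(x, c)
    print(x, c, S, S / x, phi(x) / (2 * x))   # the last two columns agree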
H: Why is a stochastic matrix a $l^2$ contraction If $P$ is a doubly stochastic matrix i.e. $P=(p_{ij})_{1\leq i,j \leq n}$ is s.t. the row sums $\sum_j p_{ij}=1$ for all $i$ and $\sum_i p_{ij}=1$ for all $j$, then may I know why $$||Px||\leq ||x||$$ for all $x\in \mathbb{R}^n$ where $||\cdot||$ is the $l^2$ norm? AI: This is related to the more general formulation that if $P\colon L^2\to L^2$ is given by $Pf(x) = \int p(x,y)\,f(y)\,dy$, and \begin{align*} \sup_x\int|p(x,y)|\,dy &\le 1,\\ \sup_y\int|p(x,y)|\,dx &\le 1, \tag{$\ast$} \end{align*} then $\|P\|_{L^2\to L^2}\le 1$. To see this, recall that \begin{align*} \|P\|_{L^2\to L^2} &= \sup |\langle Pf,g\rangle|\\ &= \sup\bigg|\iint p(x,y)\,f(y)\,g(x)\,dy\,dx\bigg| \end{align*} where the sup is taken over all $f,g$ with $\|f\|_{L^2},\|g\|_{L^2}\le 1$. Then, since $|fg| \le \frac12(|f|^2 + |g|^2)$, we have \begin{align*} \|P\|_{L^2\to L^2}&\le \sup\bigg(\frac12\iint |p(x,y)|\,|f(y)|^2\,dy\,dx + \frac12\iint |p(x,y)|\,|g(x)|^2\,dy\,dx\bigg). \end{align*} Now in the first integral, integrate first with respect to $x$ and then with respect to $y$, and in the second integral, integrate first with respect to $y$ and then with respect to $x$, and the conclusion is that \begin{align*} \|P\|_{L^2\to L^2} \le \frac12 + \frac12 = 1. \end{align*} This argument can be easily adapted to the case you are interested in where $Pf$ is given by \begin{align*} (Pf)_x = \sum_y p_{xy}f_y \end{align*} and the doubly stochastic assumption gives us the two inequalities $(\ast)$, where we replace integration by summation in this case.
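A numerical illustration (sizes and seed are arbitrary): a convex combination of permutation matrices is doubly stochastic, and $\|Px\|_2\le\|x\|_2$ holds for every random $x$ tried.

import numpy as np

rng = np.random.default_rng(3)
n = 6

# a random doubly stochastic matrix, built as a convex combination of permutation matrices
weights = rng.dirichlet(np.ones(4))
P = sum(w * np.eye(n)[rng.permutation(n)] for w in weights)
assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)

for _ in range(5):
    x = rng.standard_normal(n)
    print(np.linalg.norm(P @ x) <= np.linalg.norm(x) + 1e-12)   # True every time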
H: Extension of a Continuous function on a dense subset to its closure. A continuous function on $\mathbb Q\cap [0 \, 1]$ can be extended to a continuous function on [0 1] -Prove. We have the result that a uniformly continuous function on a set A can be extended continuously to $\overline A$. I am unable to apply the result in this case because I don’t know whether the given function is uniformly continuous on $\mathbb Q \cap [0\,1]. $ Please help. AI: It's false. Let $$f:\Bbb Q\cap[0,1]\to[0,1]:q\mapsto\begin{cases} 0,&\text{if }0\le q<\frac12\sqrt2\\ 1,&\text{if }\frac12\sqrt2<q\le 1\; \end{cases}$$ $f$ is continuous on its domain, but it cannot be extended to a function on $[0,1]$ that is continuous at $\frac12\sqrt2$.
H: Steepest-descent optimization procedure with step size given by harmonic sequence Here is a minimization procedure I've "dreamed up." I'm hoping to gain a better understanding of its mathematical properties and practical efficiency. Given a (locally) convex function $f(x):{\mathbb{R}}^n \to \mathbb{R}$, initial $x_1$, initial step size $a_1$, and tolerance $\delta$: If $\lVert\nabla f(x_k )\rVert<\delta$, return $x_k$; otherwise: Pick step direction $d_k \equiv -\nabla f(x_k )/\lVert\nabla f(x_k )\rVert$. Pick step size $a_k$. Let $x_{k+1} \equiv x_k +a_k d_k$. Let $a_{k+1} \equiv a_1 /k$. Let $k\equiv k+1$ and return to step 1. Most optimization procedures require you to do some kind of line search after picking the step direction, but this algorithm avoids that computation by simply choosing an arbitrary $a_1$ and letting it decrease as the function iterates. Since $$a_k =\frac{1}{k}$$ the step size approaches $0$ in the limit $k\to \infty$ and the sequence of iterates $\left\{ x_k \right\}$ is convergent. On the other hand, since the sum $$\sum_{k=1}^{\infty } a_k =a_1 \sum_{k=1}^{\infty } \frac{1}{k}$$ is divergent, the cumulative sum of the step sizes is infinite, so assuming convexity, we will never get "stuck" at an $x$ far from $x^*$. (I am unsure of how to prove this formally.) The above properties also apply for a more general algorithm where, in step 5, we let $a_{k+1} \equiv a_1 /k^t$ with $t\in (0,1]$. Is there a name for this optimization procedure? What are its convergence properties? How should one select the initial values $x_1$ and $a_1$ in the general case? Here is a proof-of-concept implementation in Matlab. Since we have to compute the gradient numerically, I have it evaluate the gradient over a "neighborhood" of size nsize around $x_k$. nsize is initialized to 0.01 and decreases by a factor of $k$ at each iteration, which prevents cycling. [x, y] = minimize2d(@obj, -1.34, 1.79, 1, 0.01, 10e-15); x_star = x(end) y_star = y(end) f_star = obj(x_star, y_star) [x_plot, y_plot] = meshgrid(linspace(-1.6, 0.3, 51),linspace(.9, 1.9, 51)); z_plot = obj(x_plot, y_plot); contour(x_plot, y_plot, z_plot, 10) hold on plot(x, y, "-k") scatter(x_star, y_star) hold off function f = obj(x, y) f = 4*x.^2 + exp(1.5*y) + exp(-y) - 10*y; end function [x, y] = minimize2d(fun, x0, y0, a0, Nsize, tol) x = x0; y = y0; a = a0; grad_magnitude = tol + 1; i = 1; while grad_magnitude > tol a = a0 / i; Nsize = Nsize / i; [xN, yN] = meshgrid(linspace(x(i)-Nsize, x(i)+Nsize, 3), ... linspace(y(i)-Nsize, y(i)+Nsize, 3)); f = fun(xN, yN); [px, py] = gradient(f); grad_magnitude = norm([px(2) py(2)]); step = -a * [px(2), py(2)] / norm([px(2) py(2)]); x(i+1) = x(i) + step(1); y(i+1) = y(i) + step(2); i = i + 1; end nit = i end Output: nit = 16 x_star = -7.5968e-06 y_star = 1.2651 f_star = -5.6986 AI: Upon finishing writing my answer, I realized that I misread your "step 2." What I write below is for a version of the algorithm where $d_k = -\nabla f(x_k)$, so that the magnitude of the gradient affects the actual step. I will still refer to $a_k$ as the "step size." I understand this is a bit different that the algorithm you have written, but I hope the answer is still helpful anyway. This is essentially gradient descent where you have chosen a specific sequence of step sizes. Your "step 1" is a stopping criterion in place of "stop when $\nabla f(x_k)= 0$" to account for numerical imprecision. 
There are many resources discussing the properties of gradient descent; here is a course with notes and here is a text. There you can find convergence results that depend on your assumptions on $f$. In some cases, a constant step size can get you a $O(1/\sqrt{k})$ error rate, while in special circumstances, a decreasing step size can guarantee a faster $O(1/k)$ error rate. I am being purposefully vague here because you need to introduce various technical notions to state these results precisely. Finally, your observation about your step sizes diverging is something that Robbins and Monro observed for stochastic methods. In that context, the intuition is that the divergence condition $\sum_k a_k = \infty$ ensures that you have enough "gas" to explore the space, while the convergence condition $\sum_k a_k^2 < \infty$ ensures that your steps are decreasing sufficiently fast enough so that you can hone in on the solution instead of jumping wildly all over the place. Again, this is in the context of stochastic methods; I am not sure this intuition holds for non-stochastic methods like gradient descent.
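To experiment with the scheme outside Matlab, here is a toy sketch (all constants are arbitrary) of the normalized-direction method with $a_k=a_1/k$ on a simple convex quadratic; it illustrates the slow but steady approach to the minimizer that the cited notes quantify.

import numpy as np

def f(v):
    x, y = v
    return 4 * x**2 + (y - 1) ** 2            # a simple convex test function, minimum at (0, 1)

def grad(v):
    x, y = v
    return np.array([8 * x, 2 * (y - 1)])

v = np.array([-1.5, 3.0])                      # arbitrary starting point
a1, tol = 1.0, 1e-8
for k in range(1, 10_001):
    g = grad(v)
    if np.linalg.norm(g) < tol:
        break
    v = v - (a1 / k) * g / np.linalg.norm(g)   # normalized direction, harmonic step size
print(k, v, f(v))                              # v ends up close to (0, 1)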
H: Split $n \ge 45$ into 30 positive integers and prove there exists some consecutive numbers whose sum is 14 Literally, there is an integer sequence $A = \{a_1, a_2, ..., a_{30}\}$. Given that $30\le\sum_ia_i\le45$ and $a_i > 0$, prove that $\exists c\le d(\sum_{i=c}^da_i=14)$. AI: Hint: Let $A_i = \sum_{j=1}^i a_i$. Consider the numbers $ A_1, A_2, \ldots A_{30}$ and $ A_1 + 14, A_2 + 14, \ldots A_{30} + 14$. How many numbers are there? How many possible values could they take? What technique in discrete math (which you tagged with) seems the most likely for us to use? What happens when you apply this technique? Hence, conclude that there is some consecutive string that sum to exactly 14.
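Before writing the pigeonhole argument down, one can test the statement empirically; this sketch (arbitrary seed, random admissible sequences built by distributing at most 15 extra units over thirty 1's) finds a consecutive block summing to 14 every time.

import random

random.seed(0)

def has_consecutive_sum_14(a):
    prefix = [0]
    for v in a:
        prefix.append(prefix[-1] + v)          # prefix sums A_0 = 0, A_1, ..., A_30
    seen = set(prefix)
    # a block a_c + ... + a_d = 14 exists iff some prefix sum p has p + 14 also a prefix sum
    return any(p + 14 in seen for p in prefix)

for _ in range(10_000):
    a = [1] * 30                               # 30 positive integers, sum 30
    for _ in range(random.randint(0, 15)):     # add at most 15 extra units: 30 <= sum <= 45
        a[random.randrange(30)] += 1
    assert has_consecutive_sum_14(a)
print("no counterexample found")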
H: Why does compactness of a subset in a Euclidean space imply that it is closed and bounded? I'm just getting started on topology, and having trouble reconciling with the Heine-Borel Theorem. That is: For a subset S of Euclidean space $\mathbb{R}^n$, the following are equivalent: S is closed and bounded S is compact; that is, every open cover of S has a finite subcover I've read the proof, but I've managed to get hooked on a "counterexample" that I can't think my way around. Suppose S is some open set in $\mathbb{R}^n$, and then let C be a collection of sets containing just S. Is C then not an open cover of S with a finite subcover (that subcover being C itself)? This would imply that S is both compact and open. Any guidance would be greatly appreciated. Thanks AI: You've exhibited an example of an open cover with a finite subcover. To show a subset is compact, we must show that every open cover admits a finite subcover. If $S \subseteq \mathbb{R}^n$ is a nonempty open subset, I claim that we can cook up an example of an open cover with no finite subcover. In particular, this will show that $S$ is not compact. For $n\geq 1$, define $U_n \subseteq S$ to be $$U_n =\{ x \in S : d(x, \mathbb{R}^n \setminus S) > 1/n \}.$$ Here, $d(x, T)$ means "the minimal distance between the point $x$ and the set $T$", and is defined precisely as $$d(x,T) = \inf_{y\in T} d(x,y).$$ In words, $U_n$ is the subset of $S$ consisting of elements that are at least $1/n$ away from the edge of $S$. We have $U_1\subseteq U_2 \subseteq \cdots \subseteq S$. Eventually, every element of $S$ will fall into some $U_n$, so this is an open cover. If we take a finite subcover, then we will miss some points.
H: Deriving a function based on its properties Suppose I have a function $\Lambda(t)$ for any $t>0$. This function has the following three properties: $\Lambda(t)$ is differentiable. $\Lambda(t)$ is strictly increasing. $\Lambda(T) = \Lambda(T+S) - \Lambda(S)$ for any $T,S>0$. It is stated that the function has the form $\Lambda(t) = \lambda t$, but how can I formally derive this from the above three properties. Thanks in advance. AI: Hint: Try proving these properties first, by giving $S$ and $T$ various values: $$\Lambda(0)=0$$ $$\Lambda(2t) = 2\Lambda(t)$$ $$\Lambda(nt) = n\Lambda(t)$$ $$\Lambda(m/n)=m/n\Lambda(1)$$ This should give linearity on $\mathbf{Q}_+$. To expand linearity to $\mathbf{R}_+$, based on monotonicity only, see proofwiki (using density of the set of rationals in the set of real numbers and Peak Point lemma and Squeeze theorem). See also Cauchy functional equation for more.
H: About the smallest value of B I am trying to solve this problem: We know that there's an inequality: $$(3n-1)(n+B)\geq A(4n-1)n$$ When $A=\frac{3}{4}$, what is the smallest possible value of B? So, what I did is that: $$B\geq \frac{\frac{3}{4}n(4n-1)-n(3n-1)}{3n-1}$$ We can deduce that: $$B\geq \frac{3n(4n-1)-4n(3n-1)}{4(3n-1)}$$ Expand and simplify: $$B\geq \frac{1}{12-\frac{1}{n}}$$ We know that when n is greater, the denominator of the RHS would be greater, meaning that RHS would be smaller. So, the smallest value of RHS would result in the smallest value of LHS. When $n \to \infty$: The RHS $\to$ $\frac{1}{12}$. I thought that the smallest value of B should be $\frac{1}{12}$. But, it turns out to be $\frac{1}{8}$. Note: $n\geq 1$, and n is an integer. May I know why my method doesn't work? Thank you so much. AI: Because it should be $$B\geq\frac{1}{12-\frac{4}{n}}.$$ The minimal value of $\frac{1}{12-\frac{4}{n}}$ does not exist but the infimum is equal to $\frac{1}{12}.$ Also, for $n\geq1$ we obtain: $$\frac{1}{12-\frac{4}{n}}\leq\frac{1}{8}.$$ Thus, the minimal value of $B$ for which the inequality $$B\geq\frac{1}{12-\frac{4}{n}}$$ is true for any natural $n$ is $$B=\frac{1}{8}.$$
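As a quick numeric sanity check (my own, not part of the argument): the corrected bound $\frac{1}{12-4/n}$ is largest at $n=1$, where it equals $\frac18$, and then decreases toward its infimum $\frac1{12}$.
n <- 1:20
round(1 / (12 - 4 / n), 4)    # 0.1250 at n = 1, then decreasing toward 1/12 ≈ 0.0833
max(1 / (12 - 4 / n))         # 0.125 = 1/8, the smallest B that works for all n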
H: What is the difference between $P(S = s, N = n )$ vs $P(S = s | N =n)$ What is the difference between $P(S = s, N = n )$ vs $P(S = s | N =n)$? I know the former is a joint probability and the latter is a conditional probability, and that $P(S=s, N=n) = P(S=s | N=n)P(N=n)$; however, I can't seem to distinguish between the meaning of the 2 in certain problems. For this discussion, let's consider $S$ to be the random variable for the sum of numbers we draw from a hat. $N$ is the number of draws. Let's just say we draw with replacement for simplicity. So $P(S=s, N=n)$, in words, is the probability that we draw $n$ times AND the sum of the $n$ draws is $s$. $P(S=s | N=n)$, in words, is the probability of drawing a sum $s$ given that we draw $n$ times. The two in this situation sounds identical to me, which would imply $P(N=n)=1$. But is this really the case, or am I not understand this correctly? AI: You have an event of Drawing a specific sum from the hat Drawing from the hat a specific number of times and then a compound event of drawing from the hat a specific number of times and drawing a specific sum. You need to carefully determine what your universal set $U$ is -- the set whose cardinality you put in the denominator to calculate probabilities. Suppose the number of draws, $n_{i\ge1}$, belongs to a finite set $P=\{n_1,n_2,...,n_m\}$, and the numbers in the hat are from the set $X$. While calculating $P(S=s, N=n_1)$, each instance of $n_1$ draws yielding $s$ sum is accounted in the numerator, i.e. the number of $n_1$-tuples adding up to $s$,$$\sum_{x_1+x_2+...+x_{n_1}=s\\x_i\in X}1$$and each instance of any sum being obtained after any number of draws is accounted in the denominator (which is your compound event), i.e. the number of tuples of any length $n_i\in P$,$$|U|=|S=s,N=n_1|+|S\neq s,N=n_1|+|S=s,N\neq n_1|+|S\neq s,N\neq n_1|\\=\sum_{i=1}^m|X|^{n_i}$$ While calculating $P(S=s|N=n_1)$, the universal set is restricted to only $n_1$-tuples. You add only those $n_1$-tuples whose sum is $s$ in the numerator, i.e.$$\sum_{x_1+x_2+...+x_{n_1}=s\\x_i\in X}1$$ and all $n_1$-tuples in the denominator, i.e.$$|U|=|N=n_1|=|S=s,N=n_1|+|S\neq s,N=n_1|=|X|^{n_1}$$ Indeed, when the only element of $P$ is $n_1, P(N=n_1)=1/m=1$ and both the probabilities are identical.
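If a concrete comparison helps, here is a small simulation (entirely my own setup: the hat contains the numbers 1, 2, 3; the number of draws is chosen uniformly from {1, 2}; draws are with replacement) estimating both quantities; it shows they differ exactly by the factor $P(N=n)$.
set.seed(1)
hat <- 1:3
trials <- 50000
N <- sample(1:2, trials, replace = TRUE)    # number of draws: 1 or 2, each with probability 1/2
S <- sapply(seq_len(trials), function(i) sum(sample(hat, N[i], replace = TRUE)))
mean(S == 3 & N == 2)                    # estimates P(S = 3, N = 2)  (about 1/9)
mean(S[N == 2] == 3)                     # estimates P(S = 3 | N = 2) (about 2/9)
mean(S == 3 & N == 2) / mean(N == 2)     # reproduces the conditional value, since P(S,N) = P(S|N) P(N)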
H: Is there a surjective function such as $f:2^\Bbb N \rightarrow \Bbb N$ This seems like a pretty simple question but nothing helpful came to my mind. Is there a surjective function such as: $f:2^\Bbb N \rightarrow \Bbb N$ AI: Yes, simply for cardinality reasons there will be many such maps. A concrete example: the function $f$ defined by taking $\alpha \in 2^{\mathbb N}$ to the first $n$ such that $\alpha(n) = 1$, and (say) to $0$ if $\alpha$ is identically $0$, so that $f$ is defined on all of $2^{\mathbb N}$. This is surjective by considering $\alpha_n$ which is $1$ at $n$ and $0$ everywhere else.
H: prove that $\phi(a)=\frac{\int_{0}^{0.5} (\frac{u}{1-u})^{2a-1} du}{\int_{0.5}^{1} (\frac{u}{1-u})^{2a-1} du}>1 \Longleftrightarrow a<0.5$. Let $a\in (0,1) $, the question is how to prove $$\phi(a)=\frac{\int_{0}^{0.5} (\frac{u}{1-u})^{2a-1} du}{\int_{0.5}^{1} (\frac{u}{1-u})^{2a-1} du}>1 \Longleftrightarrow a<0.5$$ The plot of $(a,\phi (a))$ shows $\phi$ is monotone. R Code:
a <<- .5
fu <- function(u) {
  ret.value <- ((u) / (1 - u))^(2 * a - 1)
  return(ret.value)
}
#(integrate1 <- integrate(fu, lower = 0, upper = .5)$value)
#(integrate2 <- integrate(fu, lower = .5, upper = 1)$value)
s <- seq(.1, .9, len = 100)
ratio <- c()
for (i in 1:length(s)) {
  a <<- s[i]
  ratio[i] <- integrate(fu, lower = 0, upper = .5)$value / integrate(fu, lower = .5, upper = 1)$value
}
plot(s, ratio, typ = "l", axes = F, xlim = c(0, 1))
axis(1, pos = 0, (0:4)/4, (0:4)/4, col.axis = "black", padj = -.7, lwd.ticks = 1, tck = -.01, cex.axis = .95)
axis(2, pos = 0, c(0, 1, 5, 10, 15, 20), c("", 1, 5, 10, 15, ""), col.axis = "black", padj = .4, lwd.ticks = 2, tcl = -.1, las = 1, hadj = .4, cex.axis = .95)
abline(v = 0.5, col = "red")
abline(h = 1, col = "red")
AI: Let us consider the integral in the denominator. With the change of variable $t=1-u$ it can be rewritten as $$ \int_0^{0.5} \left(\frac{1-t}{t}\right)^{2a-1}dt = \int_0^{0.5}\left(\frac{t}{1-t}\right)^{1-2a}\, dt, $$ so that $$ \phi(a) = \frac{\int_0^{0.5} f(u)^{2a-1}\, du}{\int_0^{0.5} f(u)^{1-2a}\, du}\,, $$ with $f(u) = u/(1-u)$. It is easy to check that $0 < f(u) < 1$ for every $u \in (0, 0.5)$. Hence, if $2a - 1 < 0$, i.e. if $a < 0.5$, we have that $$ f(u)^{2a-1} > f(u)^{1-2a}, \quad \forall u \in (0, 0.5) \qquad (a < 0.5), $$ so that $\phi(a) > 1$. On the other hand, the inequality is reversed if $a > 0.5$ and, finally, $\phi(a) = 1$ if $a = 0.5$.
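A quick numerical companion to the substitution argument (my own check, written in the same style as the question's code): for a fixed $a$, the original denominator $\int_{0.5}^{1}(\frac{u}{1-u})^{2a-1}du$ agrees with $\int_{0}^{0.5}(\frac{u}{1-u})^{1-2a}du$, and for $a<0.5$ the ratio $\phi(a)$ indeed exceeds 1.
f <- function(u, p) (u / (1 - u))^p
a <- 0.3
num  <- integrate(f, 0,   0.5, p = 2 * a - 1)$value
den1 <- integrate(f, 0.5, 1,   p = 2 * a - 1)$value   # original denominator
den2 <- integrate(f, 0,   0.5, p = 1 - 2 * a)$value   # after the substitution t = 1 - u
c(den1, den2)    # the two agree
num / den1       # phi(a) > 1 because a < 0.5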
H: Verify if the sum of two subspaces is equal to $\mathbb R^3$ I'm having trouble with this exercise because I am not sure if I'm dealing with it in the right way. The exercise is to consider the following subspaces of $\mathbb R^3$: $$U=\{(x,y,z) \in \mathbb {R}^3\mid x=y=z=0\}$$ $$V=\{ (x,y,z) \in \mathbb {R}^3\mid z=0\}$$ And the purpose is to show that $U+V=\mathbb {R}^3$ and if it's also a direct sum. Well, what I noticed is that any element of $U$ is the null vector. Then I let $u=\vec{0}$ and $v=(x,y,0)$ where $u \in U$ and $v \in V$. Therefore, $u+v=v=(0,0,0)+(x,y,0)=x(1,0,0)+y(0,1,0)$ which is $V=[(1,0,0),(0,1,0)]$. That was my conclusion. But isn't that equal to $\mathbb {R}^2$? What did I do wrong? AI: Your conclusion is correct. You can also just show that some vector in $\mathbb{R}^3$ is not in $U+V$, and that is a proof that $U+V\neq \mathbb{R}^3$. In this case, since the $z$ entry is zero in both $U$ and $V$, you can just say that any vector with $z\neq 0$ cannot be generated as a sum of vectors of $U$ and $V$. (As for directness: since $U=\{\vec 0\}$, we have $U\cap V=\{\vec 0\}$, so the sum $U+V=V$ is trivially direct.)
H: why $x \in \mathbb{Q}^c ?$ i have some confusion in sorgenfrey line, my doubt is marked in red line and red box given below My Doubts : we know that $\mathbb{Q}$ is countable and $\mathbb{Q}^c$ is uncountable Here we have already assume that $( S,T)$ has a countable basis , then why $x \in \mathbb{Q}^c$ ? From my point of view it should be $V= \{( x-1, x] : x \in \mathbb{Q}\}$ AI: The $x \in \Bbb Q^\complement$ is in fact unnecessary. The author could just as well have considered $V=\{(x-1,x]: x \in \Bbb R\}$ instead. We then have, as $\mathcal{A}$ is some base for the Sorgenfrey line, and $(x-1,x]$ is open in that topology and $x$ is in it, that there is some $A_x \in \mathcal{A}$ such that $$x \in A_x \subseteq (x-1,x] .$$ Then if $x < y$, $y \notin A_x$ (as $A_x$ lies to the left of $x$), but $y \in A_y$, so $A_x \neq A_y$. So the assignment $x \to A_x$ is a 1-1 mapping from $\Bbb R$ into $\mathcal{A}$, and as $\mathcal{A}$ is any base for the Sorgenfrey line, any base of that space is uncountable, which proves the assertion. The fact that $x$ is in $\Bbb Q$ or not is completely irrelevant. Don't let it distract you from the core of the argument, which is what I just repeated.
H: Expressing a presheaf as a colimit of representables I don't understand how the highlighted isomorphism follows. And why is every object in $\mathbf {Set}\times\mathbf{Set}$ is a sum of copies of $(1,\emptyset)$ and $(\emptyset,1)$? Next, right after the density theorem, there's this example: By the theorem, the colimit of $H_\bullet \circ P$ is $X$. This example says that the colimit is the sum of five representables, namely $H_K+H_K+H_K+H_L+H_L$. How does this follow from the theorem? AI: I think $1$ is a one-point set, and "sum" here is coproduct. In the category of sets, coproduct is disjoint union. Likewise in $\textbf{Set}\times\textbf{Set}$ the coproduct of $(A_1,B_1)$ and $(A_2,B_2)$ is $(A_1+ A_2,B_1+ B_2)$. This extends to coproducts of more than two objects, even infinitely many objects. So $$(\{a,b,c\},\{d,e\})\cong(\{a\},\emptyset)+(\{b\},\emptyset)+(\{c\},\emptyset) +(\emptyset,\{d\})+(\emptyset,\{e\}) \cong(1,\emptyset)+(1,\emptyset)+(1,\emptyset) +(\emptyset,1)+(\emptyset,1)$$ etc.
H: Find the generator of $(Z,*)$ Let $Z$ be the set of integers and let $*$ be the operation defined on $Z$ by $a*b=a+b-1$ for all $a,b \in Z$. Is $(Z,*)$ cyclic? If so, find a generator of $(Z,*)$. I think it is not a cyclic group: the identity is $1$, and the inverse of $1$ is $1$ itself, so I thought there is no generator. But I am not sure; please correct me if I am wrong. AI: Since $1$ is the identity element of $(\Bbb Z,*)$, it is natural that it is not a generator. However, $2$ is a generator of $(\Bbb Z,*)$. To see why, note that $2*2=3$, $2*3=4$, $2*4=5$ and so on, which produces every integer greater than $1$; and the inverse of $2$ is $0$ (since $2*0=1$), with $0*0=-1$, $0*0*0=-2$, and so on, which produces the rest.
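To make the generation explicit (my own addition): the inverse of $a$ in $(\Bbb Z,*)$ is $2-a$, so the inverse of $2$ is $0$, and induction gives $$\underbrace{2*2*\cdots*2}_{n\ \text{factors}}=n+1,\qquad \underbrace{0*0*\cdots*0}_{n\ \text{factors}}=1-n,$$ so the powers of $2$ and of its inverse together hit every integer, i.e. $\langle 2\rangle=\Bbb Z$. More structurally, $\varphi(n)=n+1$ is an isomorphism from $(\Bbb Z,+)$ onto $(\Bbb Z,*)$ (check: $\varphi(m)*\varphi(n)=(m+1)+(n+1)-1=m+n+1=\varphi(m+n)$), and it carries the generators $\pm1$ of $(\Bbb Z,+)$ to the generators $2$ and $0$ of $(\Bbb Z,*)$.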
H: What is the concrete meaning of fixing an extension field through a subgroup of automorphisms in $x^3-2$? The automorphisms corresponding to the extension fields of the splitting polynomials of $x^3-2$ are enumerated in this answer: we know that the automorphism of $\mathbb{Q}(\sqrt[3]{2}, \omega_3)$, which fix $\mathbb{Q}$ are determined by the action on $\sqrt[3]{2}$ and $\omega_3$, where $\omega_3$ is a third root of unity. It's trivial to conclude that such an automorphism sends $\sqrt[3]{2}$ to a root of $x^3 - 2$ and $\omega_3$ to a root of $x^2 + x + 1$. Making all possible combinations we get: $$ e : \begin{array}{lr} \sqrt[3]{2} \mapsto \sqrt[3]{2}\\ \omega_3 \mapsto \omega_3 \end{array} \quad \quad r: \begin{array}{lr} \sqrt[3]{2} \mapsto \omega_3\sqrt[3]{2}\\ \omega_3 \mapsto \omega_3 \end{array} \quad \quad r^2: \begin{array}{lr} \sqrt[3]{2} \mapsto \omega_3^2\sqrt[3]{2}\\ \omega_3 \mapsto \omega_3 \end{array} $$ $$ f : \begin{array}{lr} \sqrt[3]{2} \mapsto \sqrt[3]{2}\\ \omega_3 \mapsto \omega_3^2 \end{array} \quad \quad fr: \begin{array}{lr} \sqrt[3]{2} \mapsto \omega_3^2\sqrt[3]{2}\\ \omega_3 \mapsto \omega_3^2 \end{array} \quad \quad fr^2: \begin{array}{lr} \sqrt[3]{2} \mapsto \omega_3\sqrt[3]{2}\\ \omega_3 \mapsto \omega_3^2 \end{array} $$ These are shown as symmetries in the dihedral group $D3$ ($\zeta,$ in place of $\omega,$ and $zeta^2$ on either side of the face of the triangle; $2^{1/3}$ elements at the vertices): Now by checking which elements of the basis remain fixed by a subgroup of $ G,$ you can determine the corresponding fixed fields. I understand that (composition from R to L): $r$ fixes $\omega$ and $\omega^2.$ $f$ fixes $2^{1/3}.$ $fr$ fixes $2^{1/3}(1+\omega).$ $fr^2$ fixes $2^{2/3}(1+\omega).$ But I am still not sure what "fixing" means - How can you check that these automorphisms "fix" these subfields? It would be possibly enlightening to see how $fr$ fixes $2^{1/3}(1+\omega),$ and $fr^2$ fixes $2^{2/3}(1+\omega).$ I don't quite see without a concrete example what is meant by "positions" for example, although I understand it has to do with permutations. Notes in relation to the accepted answer: The expression of the fixed elements above, which was extracted from here, was reformulated using the minimal polynomial $x^2 + x + 1,$ which for a root of it, $\omega,$ obeys $\omega^2 + \omega + 1 =0;$ and hence, $\omega + 1 = -\omega^2.$ Therefore, $2^{1/3}(1+\omega)=-2^{1/3}\omega^2.$ the automorphisms detailed above can be summarized into: $$\begin{align} &r: 2^{1/3} \mapsto \omega 2^{1/3}\\ &r: \omega \mapsto \omega \end{align} $$ $$\begin{align} f: &2^{1/3} \mapsto 2^{1/3}\\ f: & \omega \mapsto \omega^2 \end{align} $$ As shown in the answer, there is a mistake in the list of fixed points above. The correct correspondence is: $$\begin{align} &\omega 2^{1/3} \text{ is a fixed point of } fr.\\ &\omega^2 2^{1/3} \text{ is a fixed point of } fr^2. \end{align}$$ Finally a reminder of of the rules to apply automorphisms to the elements of the basis following the rules: $$\begin{align} \phi(a+b)&=\phi(a)+\phi(b)\\ \phi(ab)&=\phi(a)\phi(b)\\ \phi(z)&=z, \forall z\in F\text{ base field} \end{align}$$ So if we look into the rotations of $\omega_3,$ $$\begin{align} r(\omega_3)&=\omega_3\\ r^2(\omega_3)&=\omega_3 \end{align}$$ by definition of the automorphism. 
But also, $$\begin{align} r(\omega_3^2)&=r(\omega_3)r(\omega_3)=\omega_3\omega_3=\omega_3^2\\ r^2(\omega_3^2)&=r(r(\omega_3^2))=r(\omega_3^2)=\omega_3^2 \end{align}$$ Hence, $r$ fixes $\omega_3$ and $\omega_3^2.$ This corresponds to $\mathbb Q(\omega_3).$ AI: An element $z$ is said to be a fixed point of an automorphism $\sigma$, if $\sigma(z)=z$, and that's what you are looking for here. But I think something is wrong with the data. Let's check the action of $fr$ on the number $z=2^{1/3}(1+\omega)=-2^{1/3}\omega^2$: $$ r(z)=-r(2^{1/3})r(\omega)^2=-\omega2^{1/3}\omega^2=-2^{1/3}, $$ and therefore $$ fr(z)=f(r(z))=f(-2^{1/3})=-2^{1/3}\neq z. $$ Meaning that $z$ is not a fixed point of $f\circ r$. On the other hand $$ r^2(z)=-r(2^{1/3})=-\omega 2^{1/3}, $$ and therefore $$ fr^2(z)=f(r^2(z))=-f(\omega2^{1/3})=-f(\omega)f(2^{1/3})=-\omega^22^{1/3}=z. $$ So $z$ is a fixed point of the automorphism $fr^2$. The reason Mark Bennet (in a comment) and I asked about the order of composition is that in this Galois group we have the relation $rf=fr^2$. This Galois group is not abelian, so the order of composition matters. If automorphisms were applied left-to-right, then we would have $(z)rf=z$. There are many ways to find fixed points for $fr$. In terms of Galois theory the fixed points are related to intermediate fields. By definition $f$ fixes all the elements of $\Bbb{Q}(\root3\of2)$. By the above calculation $fr^2$ fixes all the elements of $\Bbb{Q}(z)=\Bbb{Q}(\omega^2\root3\of2)$. It stands to reason that the we should see the intermediate field generated by the third root of $x^3-2$, namely $\Bbb{Q}(\omega\root3\of2)$ also. Indeed, $$r(\omega\root3\of2)=r(\omega)r(\root3\of2)=\omega^2\root3\of2$$ and hence $$f(r\omega\root3\of2)=f(\omega)^2f(\root3\of2)=\omega^4\root3\of2=\omega\root3\of2.$$ So $\omega\root3\of2$ is a fixed point of $fr$. A general fact about group actions is that if $z$ is a fixed point of $g$ then $h(z)$ is a fixed point of $hgh^{-1}$: $$(hgh^{-1})(h(z))=h(g(h^{-1}(h(z))))=h(g(z))=h(z).$$ In this Galois group we can use the relation $rf=fr^2$ I mentioned above. It implies that $$rfr^{-1}=(rf)r^{-1}=(fr^2)r^{-1}=fr.$$ Applying the general observation to $g=f$, its fixed point $z=\root3\of2$, and $h=r$, it follows that $r(z)=\omega\root3\of2$ must be a fixed point of $rfr^{-1}=fr$. Then there is the lower technology, boring way, but one that is guaranteed to work. You know the effect of the automorphism $fr$ to the elements of the basis $\mathcal{B}=\{1,\root3\of2,\root3\of4,\omega,\omega\root3\of2,\omega\root3\of4\}$. You can then write the matrix $M$ of $fr$ with respect to $\mathcal{B}$. The fixed points are exactly the eigenspace of the eigenvalue $\lambda=1$. Leaving the calculations to you. That eigenspace is 3-dimensional. Given that $(fr)^2=e$, the eigenvalues satisfy $\lambda^2=1$, so $\lambda=-1$ is the other eigenvalue. The corresponding eigenspace is also 3-dimensional. That is hardly a surprise given that if $w$ belongs to eigenvalue $-1$, then so does $zw$ for all the fixed points $z$.
H: How do we calculate the rotation of 3D vectors? Consider three vectors as 3D axis in a unit sphere: $$A = (1,0,0)$$ $$B = (0,1,0)$$ $$C = (0,0,1)$$ If we rotate the sphere around y-axis by $\theta$ and then around the x-axis by $\phi$. How do we calculate the new vectors? I came up with a solution of $$A_x = \cos(\theta)$$ $$A_y = 0 $$ $$A_z = \sin(\theta)$$ $$B_x = \sin(\theta) . \sin(\phi) $$ $$B_y = \cos(\theta) . \cos(\phi) $$ $$B_z = \cos(\theta) . \sin(\phi) $$ Although I got some correct results, I can tell my approach is incorrect. AI: The transformation you're describing is the composition of two rotations, $R_y(\theta)$ and $R_x(\phi)$. So the associated matrix $M$ is the product of these two matrices: $$M= R_x(\phi) R_y(\theta) = \begin{pmatrix} 1&0&0 \\ 0&\cos(\phi)&-\sin(\phi) \\ 0&\sin(\phi)&\cos(\phi)\end{pmatrix}\begin{pmatrix} \cos(\theta)&0&\sin(\theta)\\0&1&0\\-\sin(\theta)&0&\cos(\theta)\end{pmatrix}=\begin{pmatrix}\cos(\theta)&0&\sin(\theta) \\ \sin(\phi)\sin(\theta)&\cos(\phi)&-\sin(\phi)\cos(\theta) \\ -\cos(\phi)\sin(\theta)&\sin(\phi)&\cos(\phi)\cos(\theta) \end{pmatrix}$$ The columns of $M$ are the new vectors $A$, $B$ and $C$.
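For a quick numerical check (my own sketch; the angle values are arbitrary), you can build the two matrices, multiply them in the order $R_x(\phi)R_y(\theta)$, and read the rotated $A$, $B$, $C$ off the columns of the product:
theta <- pi / 6
phi   <- pi / 4
Ry <- matrix(c(cos(theta), 0, sin(theta),
               0,          1, 0,
              -sin(theta), 0, cos(theta)), nrow = 3, byrow = TRUE)
Rx <- matrix(c(1, 0,        0,
               0, cos(phi), -sin(phi),
               0, sin(phi),  cos(phi)), nrow = 3, byrow = TRUE)
M <- Rx %*% Ry      # rotate about the y-axis first, then about the x-axis
M %*% c(1, 0, 0)    # new A = first column of M
M %*% c(0, 1, 0)    # new B = second column of M
M %*% c(0, 0, 1)    # new C = third column of M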
H: Locus of the circumcenter of triangle formed by the axes and tangent to a given circle. A circle centered at $(2,2)$ touches the coordinate axes and a straight variable line $AB$ in the first quadrant, such that $A$ lies the $Y-$ axis, $B$ lies of the $X-$ axis and the circle lies between the origin and the line $AB$. Find the locus of the circumcenter of the triangle $OAB$, where $O$ denotes the origin. Answer: $xy=x+y+\sqrt{x^2+y^2}$ I was able to solve this question with a very lengthy approach outlined below: Since $\Delta OAB$ is always right angled at $O$, it's circumcenter will be the midpoint of the segment $AB$. Also, the equation of the given circle would be $$(x-2)^2+(y-2)^2=4$$ Then I considered the question of the line $AB$ to be $y+mx=c$, where $m$ is positive number. Then I used the fact that this line is tangent to the given circle, i.e. the perpendicular distance of the line from the point $(2,2)$ is $2$ units to obtain a quadratic equation in $c$: $$c^2-4(1+m)c+8m=0$$ Now, this yielded two values for $c$ of which one has to be rejected because in that case the line was trapped between the circle and the origin. Thus, $c=2+2m+2\sqrt{1+m^2}$. From here, the coordinates of the circumcenter required are: $\left(\frac{1+m+\sqrt{1+m^2}}{m},1+m+\sqrt{1+m^2}\right)$. Luckily, this was on a MCQ test and all the options had the terms $xy$, $x+y$ and $\sqrt{x^2+y^2}$ present. So I could evaluate these values and then see which option is correct. But the above method is far too lengthy, and I'm looking for a shorter method, if it exists. Please note that the average time for a question in the test was nearly two to three minutes. Thanks a bunch! AI: The first thing to note is that $\triangle OAB$ is always a right triangle, and its incenter is always $(2,2)$. Thus, hypotenuse $AB$ is always a diameter of the circumcircle, and the circumcenter is therefore the midpoint of $AB$. If the line passing through $AB$ has equation $$\frac{x}{a} + \frac{y}{b} = 1,$$ then the $x$-intercept is $(a,0)$, and the $y$-intercept is $(0,b)$. The area enclosed by the triangle may be calculated in two ways: $$|\triangle OAB| = \frac{ab}{2} = rs$$ where $r = 2$ is the inradius and $$s = \frac{1}{2}\left(a + b + \sqrt{a^2 + b^2}\right)$$ is the semiperimeter. Consequently, $$\frac{ab}{2} = a+b+\sqrt{a^2+b^2}.$$ Since $(x,y) = (a/2, b/2)$ is the circumcenter, we obtain the relationship $$2xy = 2x + 2y + \sqrt{(2x)^2+(2y)^2},$$ or $$xy = x + y + \sqrt{x^2 + y^2},$$ as claimed. The implicit formula for the locus admits a natural parametrization using the angle $\theta$ formed by the ray from the origin to $(x,y)$ and the positive $x$-axis: $$(x,y) = \left(1 + \tan \left(\frac{\theta}{2} + \frac{\pi}{4}\right), 1 + \cot \frac{\theta}{2} \right), \quad 0 < \theta < \frac{\pi}{2}.$$
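If you want to sanity-check the locus numerically (my own sketch, using the line $y+mx=c$ and the value of $c$ derived in the question), pick a few slopes $m>0$, take the midpoint of the two intercepts, and verify that $xy-(x+y+\sqrt{x^2+y^2})$ vanishes up to rounding:
check_locus <- function(m) {
  c0 <- 2 + 2 * m + 2 * sqrt(1 + m^2)   # tangent line y + m x = c0, as in the question
  x  <- c0 / (2 * m)                    # midpoint of the x-intercept (c0/m, 0)
  y  <- c0 / 2                          #   and the y-intercept (0, c0)
  x * y - (x + y + sqrt(x^2 + y^2))     # should be ~0 for every admissible m
}
sapply(c(0.3, 1, 2.5, 7), check_locus)  # all essentially zero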
H: Are these two statements about the probability tending to Normal distribution equivalent? Are these two statements equivalent: $P(X_n < \beta) \to z(\beta)$ For all fixed $\alpha \text{ < } \beta \in \mathbb{R} \quad P(\alpha < X_n < \beta) \to z(\beta) - z(\alpha)$ $z(\beta) = P(X < \beta)$ where $X$ ~ $N(0,1)$ I can see that the first implies the second one. But I'm not sure if the second implies the first. I think we might be able to use Fatou's lemma here. AI: The answer is YES. Let $F(x)=P(X\leq x),F_n(x)=P(X_n \leq x)$ and $\epsilon >0$. There exists $M$ such that $1-F(M)+F(-M) <\epsilon$. Hence there exists $n_0$ such that $1-F_n(M)+F_n(-M) <\epsilon$ for all $n \geq n_0$. For any $x$ we have $F_n(x)=[F_n(x)-F_n(-M)]+F_n(-M)$ and $F(x)=[F(x)-F(-M)]+F(-M)$. Note that $F_n(-M)<\epsilon$ and $F(-M)<\epsilon$. Can you finish?
H: Halving number at regular intervals vs same number at doubling intervals. The infinite series $\sum_{n=0}^\infty 2^{-n}$ converges to 2. If I was given an apple after one day, half an apple after another day, a quarter of an apple after another day, and so on, I would never quite get two apples. In my (very limited) understanding, an apple-a-day is the same as two-apples-per-two-days or half-an-apple-per-12-hours. If I was given an apple after one day, another apple after a further two days, another apple after another four days, etc., I would have an unending supply of apples. Why are these two different answers, and how is this difference expressed mathematically? AI: Your approach to spread out the apples over several days doesn't become $1+\frac12+\frac14+\frac18+\cdots$. It becomes $$ 1+\frac12+\frac12+\frac14+\frac14+\frac14+\frac14+\cdots $$ The first apple has one day all on its own. The second apple is spread out over two days, giving you two half apples. The third apple is spread out over four days, giving you four quarter apples. And so on. It isn't surprising that one approach gives more apples than the other.
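A tiny computation (my own) makes the contrast concrete: under the halving scheme the total after $N$ days stays below 2, while under the doubling-interval scheme the $k$-th whole apple arrives on day $2^k-1$, so by day $N$ you hold $\lfloor\log_2(N+1)\rfloor$ apples, a count that grows without bound.
N <- 1000
sum(0.5^(0:(N - 1)))   # halving scheme: just under 2 apples after N days
floor(log2(N + 1))     # doubling-interval scheme: 9 whole apples by day 1000, and still climbing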
H: Why is the horizontal asymptote of $\frac{x}{x^2+1}$ the line $y=0$? I figured the limit below is: $$\lim_{x \to \infty} \frac{x}{x^2+1} = \frac{\infty}{\infty^2 + 1} \approx \frac{\infty}{\infty} = 1.$$ Should the horizontal asymptote be $y=1$ then? AI: You have $$\lim_{x\rightarrow\infty}\frac{x}{x^2+1}=\lim_{x\rightarrow\infty}\frac{1}{x+\frac{1}{x}}=0,$$ which you see after multiplying the numerator and the denominator by $\frac{1}{x}$. It easily leads to false results if you carelessly use the symbol $\infty$.
H: How can I interpret these explanations? There are column vectors and there are comments about them; it's given in the image. I don't understand "inconsistent" and "consistent", because the translation of these terms into my language is not meaningful. Could you comment on the image? Thanks a lot. AI: The terminology consistent and inconsistent refers to the number of solutions of a system of equations: If the system has AT LEAST one solution, we say it's consistent. If the system has NO possible solutions, we say it's inconsistent. I hope you will be able to understand the 4 statements of your problem given this information. If not, comment and I'll edit.
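A minimal illustration of the two terms (my own example, since the image from the problem is not reproduced here): $$\begin{cases}x+y=2\\x-y=0\end{cases}\ \text{is consistent (unique solution }x=y=1\text{)},\qquad \begin{cases}x+y=1\\x+y=2\end{cases}\ \text{is inconsistent (no solution)}.$$ A system such as $x+y=1$, $2x+2y=2$ is also consistent, this time with infinitely many solutions.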