H: Equivalence of projections in smaller von Neumann algebra I came across the following assertion and I can't understand why it's true. We are given two equivalent finite projections $e\sim f$ in some von Neumann algebra $A$ (with a unit, of course). It's known that the projection $q=e\vee f$ is also finite, so we infer that the algebra $qAq$ is a finite algebra (that is, the unit element is a finite projection). Now suppose that $q-e\sim q-f$ in $qAq$; show that $q-e\sim q-f$ also in $A$. Thanks in advance. Comment: Technically it is true that $q-e\sim q-f$ in $qAq$, so we don't really need to assume that. AI: Every element in $qAq$ is also in $A$. In particular, the partial isometry that realizes the equivalence.
H: Doubt about substitution in $\int_0^{2\pi} \frac{dx}{2+\cos x}$ While evaluating $$\int_0^{2\pi} \frac{dx}{2+\cos x}$$ I thought about letting $t=\tan \frac{x}{2}$, but I get the obviously wrong result $$\int_0^{2\pi} \frac{dx}{2+\cos x}=\int_0^{\tan\pi} g(t)dt=\int_0^0 g(t)dt=0$$ Which is not possible, since $\frac{1}{3} \leq \frac{1}{2+\cos x}\leq 1$ hence $$\frac{2}{3} \pi \leq \int_0^{2\pi} \frac{dx}{2+\cos x} \leq 2\pi$$ Why is this wrong? Is this related to the fact that $\tan \frac{x}{2}$ is not one-to-one in $[0,2\pi]$? If the problem is that, couldn't we just consider $(0,2\pi)$? The integral should be the same in the open interval, am I wrong? Thanks. AI: Hint As commented $$\tan\dfrac x2$$ is not even continuous at $$x=\pi$$ To avoid discontinuity, use if $f(2a-x)=f(x),$ $$\int_0^{2a}f(x)=\int_0^af(x)dx+\int_a^{2a}f(x)dx$$ Set $2a-x=y$ in the second integral. Here $a=\pi$ Use https://en.m.wikipedia.org/wiki/Weierstrass_substitution#The_substitution
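A quick numerical sanity check of the integral above (a minimal Python sketch of my own, not part of the answer; it uses a plain midpoint rule and compares against the standard closed form $2\pi/\sqrt{3}$, which follows from $\int_0^{2\pi}\frac{dx}{a+\cos x}=\frac{2\pi}{\sqrt{a^2-1}}$ for $a>1$):

```python
# Midpoint-rule estimate of the integral on [0, 2*pi]; it should land near
# 2*pi/sqrt(3) ~ 3.6276, inside the bounds [2*pi/3, 2*pi] and far from 0.
import numpy as np

n = 200_000
x = (np.arange(n) + 0.5) * 2 * np.pi / n
approx = np.sum(1.0 / (2.0 + np.cos(x))) * 2 * np.pi / n
print(approx, 2 * np.pi / np.sqrt(3))
```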
H: Characterize the isomorphisms in $\operatorname{Hom}(G,H)$ < $H^G$ when they exist ( $G , H$ cyclic) In my lecture notes I have this exercise: Let $G$ and $H$ be cyclic groups. Having defined the operation: $$\varphi \psi: G \rightarrow H: x \mapsto \varphi(x) \psi(x)$$ for which $H^G$ is a group and $\operatorname{Hom}(G,H)$ is a subgroup of $H^G.$ Characterize the isomorphisms in $\operatorname{Hom}(G,H)$ when they exist. The solution goes like this: It is easily found that since $G=\langle g\rangle$, $$\operatorname{Im}(\varphi)=\{\varphi(g^n)=(\varphi(g))^n \mid n \in \mathbb{Z}\}$$ and $$\operatorname{Ker}(\varphi)=\{g^n \mid (\varphi(g))^n = 1 \}$$ Then they argue some things I don't understand well: if $m=\operatorname{ord}(\varphi(g))$, it follows that $\operatorname{Ker}(\varphi)=\langle g^m\rangle <G$ ----> how did they come up with it? If $\varphi :G \rightarrow H$ is an isomorphism, since it is surjective: $\varphi(g)$ must generate $H$ --> why is that? I know $H$ must be generated by some element of it, but why does it have to be the image of the generator of $G$? And since $\varphi$ must be injective, then $g^m=1$, and then $\operatorname{ord}(\varphi(g))=\operatorname{ord}(g)$. --> I don't know how they made injectivity imply that. I would argue that they both have to have the same number of elements for finite order, but I can't relate it to injectivity. Can someone clarify these steps? AI: If $x$ is an element of $\operatorname{Ker}\phi$, then $\phi(x)=1$. Since $x$ is an element of $G$ and $G$ is cyclic generated by $g$, we may write $x=g^k$ for some integer $k$. Then $\phi(g^k)=\phi(g)^k=1$. Since the order of $\phi(g)$ divides every integer $k$ such that $\phi(g)^k=1$, you have that $m$ divides $k$, hence $x=g^k\in \langle g^m\rangle$. Conversely, if $x\in\langle g^m\rangle$, then $x=g^k$ with $m$ dividing $k$, hence $k$ may be expressed as a product of the form $k=mu$ for some integer $u$, and then $\phi(g^k)=(\phi(g)^m)^u=1^u=1$, so that $x\in\operatorname{Ker} \phi$. If $\phi$ is surjective, then for every element $h\in H$ there exists an element $g^k\in \langle g\rangle$ such that $h=\phi(g^k)=\phi(g)^k$, showing that every element of $H$ is in the subgroup generated by $\phi(g)$. Conversely, if some element $h$ in $H$ is in the subgroup of $H$ generated by $\phi(g)$, then there exists an integer $k$ such that $h=\phi(g)^k=\phi(g^k)$, thus $h$ is in the image of $G$ under $\phi$, and since $h$ was arbitrarily chosen, $\phi$ is finally proven to be surjective. If $\phi$ is injective, then $\operatorname{Ker}\phi=\{ 1\}$, hence $\phi(g^m)=1$ (which holds since $\phi(g^m)=\phi(g)^m=1$ by the definition of $m$) implies that $g^m=1$. Hence, the order of $g$ divides the order of its image $\phi(g)$. Since the order of the image surely divides the order of $g$, you finally get that the two orders are the same when $\phi$ is an isomorphism.
H: Auxiliary result related to the exponential martingale inequality Let $(\Omega,\mathcal A,\operatorname P)$ be a complete probability space and $(\mathcal F_t)_{t\ge0}$ be a complete filtration on $(\Omega,\mathcal A,\operatorname P)$. Let $(M_t)_{t\ge0}$ be a local $\mathcal F$-martingale on $(\Omega,\mathcal A,\operatorname P)$. By the Itō formula, $$N^\sigma:=e^{-\frac{\sigma^2}2[M]+\sigma M}=N_0+\sigma N\cdot M\tag1$$ is a local $\mathcal F$-martingale for all $\sigma\in\mathbb R$. Now assume $(M_t)_{t\ge0}$ is a continuous $\mathcal F$-martingale and $$\operatorname E\left[e^{\lambda[M]_t}\right]<\infty\;\;\;\text{for all }t>0\text{ and }\lambda>0\tag2.$$ Are we able to conclude that $N^\sigma$ is a $\mathcal F$-martingale for all $\sigma\in\mathbb R$? Let me stress one subtlety, which might be involved here: If $M$ is square-integrable, we know that $N\cdot M$ is a square-integrable $\mathcal F$-martingale if $$\operatorname E\left[\int_0^t|N_s|^2\:{\rm d}[M]_s\right]<\infty\;\;\;\text{for all }t>0\tag3.$$ I'm not sure whether the square-integrability of $M$ is really necessary for the martingale conclusion to hold (it is surely necessary to obtain the square-integrability of $N\cdot M$, but in the context of this question we are not interested in this integrability conclusion). So, maybe we need to assume that $M$ is square-integrable. Ignoring this for a moment, we clearly can use that $e^x\le1$ for all $x\le0$ and hence \begin{equation}\begin{split}\operatorname E\left[\int_0^t|N_s|^2\:{\rm d}[M]_s\right]&=\operatorname E\left[\int_0^te^{-\sigma^2[M]_s+2\sigma M_s}\:{\rm d}[M]_s\right]\\&\le\operatorname E\left[\int_0^te^{2\sigma M_s}\:{\rm d}[M]_s\right]\end{split}\tag4\end{equation} for all $t>0$. Does the assumption $(2)$ somehow imply that $(4)$ is finite for all $t>0$? AI: The condition in $(2)$ is enough to guarantee $N^\sigma$ is a martingale by Novikov's condition. Let $\mathcal E(M)_t := e^{M_t - \frac{1}{2} [M]_t}$ and notice that $N^\sigma = \mathcal E(\sigma M)$. Then since $\mathbb{E}[e^{\frac 12 [\sigma M]_t}] = \mathbb{E}[e^{\frac{\sigma^2}2 [M]_t}] < \infty$, Novikov's condition gives $N^\sigma$ is a martingale. We don't need to assume $M_t$ is square integrable because $(2)$ actually implies that $M_t$ has finite moments of all orders. From a Taylor expansion, $\mathbb{E}[[M]_t^p] \le c_p \mathbb{E}[e^{[M]_t}] < \infty$ so the BDG inequality gives $\mathbb{E}[\sup_{s \le t} |M_s|^p] \le C_p \mathbb{E}[[M]_t^{p/2}] < \infty$ for all $p > 0$. To get $N^\sigma_t$ is square integrable, we can apply Holder's inequality: \begin{align*} \mathbb{E}[(N^\sigma_t)^2] &= \mathbb{E}[e^{2\sigma M_t - \sigma^2 [M]_t}] \\ &= \mathbb{E}[(e^{4\sigma M_t - 8 \sigma^2 [M]_t})^{1/2}e^{4 \sigma^2 [M]_t}] \\ &= \mathbb{E}[(N^{4 \sigma}_t)^{1/2}e^{4 \sigma^2 [M]_t}] \\ &\le \mathbb{E}[N^{4 \sigma}_t]^{1/2} \mathbb{E}[e^{8 \sigma [M]_t}]^{1/2} < \infty. \end{align*}
H: Groups such that the corresponding algebra is central Find all groups $G$ such that the corresponding algebra $\mathbb{C}[G]$ is central. I know that since $\mathbb{C}$ is algebraically closed we have $\mathbb{C}[G]=\prod_{i=1}^s M_{n_i}(\mathbb{C})$, so in order for $\mathbb{C}[G]$ to be central we must have $\mathbb{C}[G]=M_n(\mathbb{C})$. This implies $|G|=n^2$ and $G$ non-abelian, or $n=1$, which gives us the trivial group. However, if $n>1$ we would have a nontrivial group with exactly one conjugacy class, so the group in question is exactly the trivial one. Have I gotten it wrong somewhere? AI: Seems ok, but if you're sure that it has to be a single matrix ring, it's faster to just note that $M_n(\mathbb C)$ is simple and $\mathbb C[G]$ is only simple when its augmentation ideal is $\{0\}$, meaning $G=\{1\}$.
H: How do you evaluate a summation with variables in the parameters? my problem is $\sum_{i=n+1}^{3n} (2i-3)$ I have done a few summations in calc 2, but I do not remember what you are supposed to do when there are variables in both parameters. I remember the rules that $i = \frac{(n^2+n)}{2} $ but Im not sure if that applies here edit: adjusted the denominator which I had incorrect AI: Note that $$\sum_{i=n+1}^{3n} (2i-3) = \sum_{i=1}^{3n} (2i-3) - \sum_{i=1}^{n}(2i-3)$$ Can you take it from here? You will have to use the fact that $$\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$$ For the sake of completeness, here is the full solution: \begin{align*}\sum_{i=n+1}^{3n} (2i-3) = \sum_{i=1}^{3n} (2i-3) - \sum_{i=1}^{n}(2i-3)&=2\sum_{i=1}^{3n}i-9n-2\sum_{i=1}^{n}i+3n \\ &=3n(3n+1)-9n-n(n+1)+3n \\ &=8n^2-4n \end{align*}
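A short brute-force check of the closed form $8n^2-4n$ (my own Python sketch, not part of the answer):

```python
# Compare the direct sum with the closed form for a few values of n;
# the last two columns printed should agree.
for n in range(1, 9):
    direct = sum(2 * i - 3 for i in range(n + 1, 3 * n + 1))
    print(n, direct, 8 * n**2 - 4 * n)
```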
H: Is it bad to resort to numerical examples to understand the idea behind a proof? When reading proofs, I often get confused and need to devise my own examples to understand what's going on. Is this practice ok or should I train myself to think in abstract terms? As an example, here's something that I'd need a sketch on paper to understand. AI: That's perfectly normal. I do the same thing, and I've heard it strictly encouraged to solidify your understanding with examples. Like, you can bet the author looked at tons of examples before they even came up with the correct statement of the theorem they're proving. :)
H: In a separable normed space, does every set contain a countable dense subset? Let $X$ be a separable normed space and $ A \subseteq X$ be nonempty. Does there exist a countable subset $A'$ of $A$ such that $A'$ is dense in $A$? If not, please provide a counterexample. What about the case in which $X$ is finite dimensional? It seems simple, but I have not been able to come up with a general construction, even in finite dimensions. Initially I thought of taking the smallest subspace containing $A$ and defining $A' := S \cap ric(A)$, where $S$ is a countable dense subset of $X$ and $ric(A)$ denotes the relative interior of $A$, but this doesn't work. AI: Any separable metrisable space is second-countable. Second-countability is hereditary and implies separability. So every subset of a separable metrisable space is separable. Suppose $S$ is a countable dense subset of $X$. For each $s \in S$ and $n \in \mathbb N$ choose, if possible, $a_{sn} \in A$ with $||s - a_{sn}|| < 1/n$. The set of these $a_{sn}$s is a countable dense subset of $A$.
H: Prove that $F$ cannot be tangent to a surface. Suppose that $F=F(x,y,z)\in \mathbb{R}^{3}$ is a continuously differentiable vector field satisfying $\operatorname{div} F =\partial_{x} F_{1} + \partial_{y} F_{2} + \partial_{z} F_{3}>0$ in the interior of a domain $\Omega \subset \mathbb{R}^{3}$, open and bounded, whose boundary $\partial \Omega$ is at least of class $C^{1}$ and orientable. Prove that $F$ can't be tangent to $\partial \Omega$ at every point of $\partial \Omega$. I saw this problem in an admission test for a postgraduate program in mathematics and I don't know how to attack the problem. AI: $F$ is tangent to $\partial \Omega $ everywhere if and only if $F(x)\cdot \nu(x)=0$ for all $x\in \partial \Omega $, where $\nu$ is the exterior normal vector to $\partial \Omega $. Using the divergence theorem yields $$\iint_{\partial \Omega }F\cdot \nu=\iiint_\Omega \operatorname{div}(F)\,\mathrm d x\,\mathrm d y\,\mathrm d z>0.$$ Therefore, $F\cdot \nu=0$ doesn't hold everywhere on $\partial \Omega $.
H: Closed set of irrationals with non zero outer measure Let $A$ be the set of irrationals in $[0,1]$, then for every $\epsilon >0$ how we can construct a closed subset $B$ of $A$ such that outer measure $\mu^{*}$ of $B$ satisfies : $\mu^{*}(B) \ge 1- \epsilon$. I know that every finite set is closed but that will not work here, somehow I need to construct a set of some special kind of irrationals such that above property is satisfied. I am not able to think the type of set that will work here. AI: The key is not to think of what we want to incude, rather of what we want to exclude. Hint: enumerate the rationals in $[0,1]$ as $r_n$, $n = 1,2,3,\ldots$. Let $B = [0,1] \backslash \bigcup_{n=1}^\infty (r_n - \epsilon_n, r_n + \epsilon_n)$ for a suitable sequence $\epsilon_n$ of positive numbers.
H: How to recognize the Laplace transform of a function with compact support? The question is pretty much self-contained in the title: is there some criterion for recognizing the Laplace transforms of compact-supported functions, other than the explicit computation of $\mathcal{L}^{-1}$? The question arises in a peculiar context: some integrals of oscillating functions can be converted into integrals of monotonic functions by exploiting the self-adjointness of the Laplace transform, for instance $$ \int_{0}^{+\infty}\frac{\sin(s)}{\sqrt{s}}\,ds = \int_{0}^{+\infty}\frac{dx}{\sqrt{\pi x}(1+x^2)} $$ and for numerical purposes the latter form is clearly more manageable than the former. On the other hand integrals of compact-supported functions are easier to handle through interpolation and quadrature, so it would be a nice thing to recognize in $\frac{1+e^{-\pi s}}{1+s^2}$ the Laplace transform of the chunk of the sine wave supported on $[0,\pi]$, in order to compute $$ \int_{0}^{+\infty}\frac{1+e^{-\pi s}}{\sqrt{s}(1+s^2)}\,ds $$ by applying a quadrature scheme (as done here) to $$ \int_{0}^{\pi}\frac{\sin(s)}{\sqrt{s}}\,ds. $$ The essence of the question is to understand which kinds of functions allow this trick. AI: $F(s)$ is the Laplace transform of a $L^2[-r,r]$ function iff $F(s)$ is entire, uniformly $L^2$ on vertical strips (*), and $F(s) = O(e^{r |\Re(s)|})$. Proof : for $|t|> r+|a|$ let $c\to -sign(t) \infty$ in $$2i\pi f(t) \ast 1_{[0,a]}=\int_{c-i\infty}^{c+i\infty} \frac{1-e^{-a s}}{s} F(s)e^{st}ds\tag{1}$$ (*) this means $\int_{|y|>T} |F(x+iy)|^2dy,x\in [u,v]$ tends to $0$ uniformly as $T\to \infty$ so that $(1)$ doesn't depend on $c$.
H: Inequality about the degree of minimal polynomial For a finite-dimensional vector space $V$, let $A$ and $B$ be linear operators on $V$ such that the commutation relation $AB=BA$ holds. If we denote the degree of the minimal polynomial of $A$ by $\deg(A)$, how can I prove the inequality $\deg(A+B)\leq \deg(A)\deg(B)$? I grasp the idea that in the minimal polynomial of $A+B$, I could expand the $(A+B)^k$ terms by exchanging the multiplication order of $A$ and $B$, but I can't proceed further. AI: Note that $\deg(A)$ is the dimension of the subspace consisting of all polynomials in $A$. Let $m = \deg(A), n = \deg(B)$. Every polynomial $p(A)$ can be written as a linear combination of the powers $I,A,\dots,A^{m-1}$ of $A$, and every polynomial $p(B)$ can be written as a linear combination of the powers $I,B,\dots,B^{n-1}$ of $B$. We conclude that for any bivariate polynomial $p(x,y)$, $p(A,B)$ can be written as a linear combination of the elements of $S = \{A^jB^k : 0 \leq j \leq m-1, \ 0 \leq k \leq n-1\}$. Note that $S$ contains $\deg(A)\deg(B)$ elements, so its span is at most $\deg(A) \deg(B)$ dimensional. Because $ \operatorname{span}\{I,(A+B),(A+B)^2,\dots\} \subset \operatorname{span}(S) $ (here the commutativity $AB=BA$ is used to expand each $(A+B)^i$ into a combination of terms $A^jB^k$), we can conclude that $$ \deg(A + B) = \dim \operatorname{span}\{I,(A+B),(A+B)^2,\dots\} \leq \dim \operatorname{span}(S) \leq \deg(A)\deg(B).$$ For another perspective, we could note that the map from $\Bbb F[x,y]$ to $\operatorname{End}(V)$ defined by $p\mapsto p(A,B)$ is an $\Bbb F$-algebra homomorphism.
H: Composition of local diffeomorphisms is a local diffeomorphism Let $F: M\rightarrow N$ , $G:N\rightarrow P$ be local diffeomorphisms, where $M,N,P$ are smooth manifolds. I would like to show that $G\circ F: M\rightarrow P$ is a local diffeomorphism. My attempt: Let $x\in M$. Since $F:M\rightarrow N$ is a local diffeomorphism, there exists an open set $U$ of $x$ such that $F(U)$ is open in $N$ and $F|_U: U\rightarrow F(U)$ is a diffeomorphism. Similarly, since $F(x)\in N$, there exists a neighborhood $V$ of $F(x)$ such that $G(V)$ is open in $P$ and $G|_V: V \rightarrow G(V)$ is a diffeomorphism. I thought of considering the set: $F|_U^{-1}(F(U)\cap V)$, since this set is open in $U$.However, I have not gotten far. May I have hints? Please do not use immersions AI: I think you basically have it. Call $F|_U^{-1}(F(U)\cap V)=W$, then $(G\circ F)|_W$ is a diffeomorphism, since the composition of diffeomorphisms is a diffeomorphism, and because diffeomorphisms behave well under restriction. There is another way to see this: $F$ is a local diffeomorphism at $p\in M$ if and only if $DF_p:T_pM\to T_pN$ is a nonsingular linear transformation. So, by your assumptions $DF_p$ and $DG_{F(p)}$ are both nonsingular, hence $$ D(G\circ F)_p=DG_{F(p)}\circ DF_p$$ is also.
H: Putting $n$ balls into n-1 cells such that no cell is empty Suppose that $n$ identical balls are placed into $n − 1$ distinct boxes such that each distinguishable arrangement is equally likely. Find the probability that no box remains empty. My answer is $$ \frac{(n-1)(n-1)!}{(n-1)^n} $$ where the denominator is the total number of ways of putting $n$ balls in $n-1$ cells and the numerator is the number of ways $1$ ball can be put into $n-1$ boxes and then the remaining $n-1$ balls are put into $n-1$ boxes in $(n-1)!$ number of ways. Is this correct? If not, why? The correct answer for this seems to be $$ \frac{n-1}{\binom{2n-2}{n}}.$$ AI: To have no box empty, you first put one ball in each box. Since the balls are identical so there is only one way of doing this. Now you are left with one ball and that can go to any of the $n-1$ boxes. So there are $n-1$ ways to achieve this (so called favorable ways). Now for the total number of ways to distribute the $n$ balls into $n-1$ boxes we let $x_i$ be the number of balls in the $i-$th box. So we want to find the number of non-negative ($x_i \geq 0 $) integer solutions to the following equation: $$x_1+x_2+\dotsb+x_{n-1}=n.$$ Read about stars & bars problem to find the answer to this.
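Here is a small brute-force check of both counts for a few values of $n$ (a Python sketch I added; it enumerates all non-negative integer solutions directly, so it only works for small $n$):

```python
# For n balls in n-1 boxes: the total number of arrangements should equal
# C(2n-2, n) (stars and bars), and the "no empty box" arrangements should number n-1.
from itertools import product
from math import comb

for n in [3, 4, 5, 6]:
    boxes = n - 1
    arrangements = [t for t in product(range(n + 1), repeat=boxes) if sum(t) == n]
    favorable = [t for t in arrangements if all(b >= 1 for b in t)]
    print(n, len(arrangements), comb(2 * n - 2, n), len(favorable), n - 1)
```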
H: Uniqueness of linear codes In this textbook, I found the following remark: An $(n,k)$ linear code $\mathcal{C}$ is a unique subspace consisting of a set of $2^k$ codewords. The statement surprised me because in vector spaces over infinite fields like $\mathbb{R}^n$, there are infinitely many subspaces with dimension $k<n$. Below is my attempt to prove the statement. Let $\mathcal{C}_1$ and $\mathcal{C}_2$ be $(n,k)$ linear codes. Consider their generator matrices in systematic form $\mathbf{G}_1=[\mathbf{I}_k|\mathbf{P}_1]$ and $\mathbf{G}_2=[\mathbf{I}_k|\mathbf{P}_2]$. By symmetry, it suffices to show that every codeword in $\mathcal{C}_1$ is a codeword in $\mathcal{C}_2$. Let $\mathbf{u}$ be a binary $1\times k$ vector. Then, the corresponding codeword in $\mathcal{C}_1$ is $\mathbf{x}=[\mathbf{u}|\mathbf{u}\mathbf{P}_1]$. Then, a necessary condition so that $\mathbf{x}\in\mathcal{C}_2$ is to set $\mathbf{u}=\mathbf{v}$ so that $$ \mathbf{x}=[\mathbf{u}|\mathbf{u}\mathbf{P}_1]=[\mathbf{v}|\mathbf{v}\mathbf{P}_2], $$ Then, to complete the proof, I have to show that $\mathbf{u}(\mathbf{P}_1-\mathbf{P}_2)=\mathbf{0}$. My question is: What argument can I use to prove that $\mathbf{P}_1 =\mathbf{P}_2$ using what I have right now? AI: An $(n,k)$ linear code $\mathcal{C}$ is a subspace of the vector space $\mathbb{F}_2^n$ over $\mathbb{F}_2$ which has $2^k$ distinct codewords; that is the right way to read it. Let $G$ be the generator matrix for $\mathcal{C}$. Since $\mathcal{C}$ is a $k$-dimensional subspace of $\mathbb{F}^n_2$, $G$ is a $k \times n$ matrix. Then for any vector $x=(x_1,x_2,\dots,x_k)$ with $x_i\in \mathbb{F}_2$, $G^Tx$ is a codeword. Suppose for some $x_1,x_2$ we have $G^Tx_1=G^Tx_2$; then $G^T(x_1-x_2)=0$. This implies $x_1-x_2 \in \text{Nullspace}(G^T)$. But $G^{T}$ is a matrix with linearly independent columns by definition, and hence only the zero linear combination yields the $0$ codeword; thus $x_1-x_2=0$, guaranteeing uniqueness of codewords. Finally, as there are $2^k$ possible $x$ vectors, the statement holds true.
H: What notation for derivatives should be used? I‘ve seen Leibniz’s, Lagrange’s, Euler’s and Newton’s notation for derivatives. They’re quite different, and I suppose they all have different applications where they shine the brightest. In what circumstances are the different notations most common? Is there a reason to have multiple notations for the same thing? AI: When a concept is as important as the derivative is, and with as complex a history, and used in as many niche applications, it is prone to having many different notations that are used within different circles. The short answer is, whatever area you're working in, see what notation textbooks/professors/papers in the field typically use. If everyone in your field uses the same notation, use that one. Here are a few loose guidelines from my experience: The notation $\dot y$ is typically used to mean the derivative with respect to time. It is very common in classical mechanics. The notation $dy/dx$ is typically used more in applied math and the notation $f'(x)$ is typically used more in pure math. This one is very rough and should be taken with a large dose of salt. The partial derivative notation $\partial f/\partial x$ is the "default" and the most common. It is also the most classical. In partial differential equations, the notation $f_x$ for the partial derivative w.r.t. $x$ is very common. I think mathematicians working with differential forms seem to like $D_x f$ for the partial derivative w.r.t. $x$. Many feel that $\partial f/\partial x$ is just too clunky, especially when written in matrices. Sometimes other notations will appear. Once, in a very specific area I was working in, the notation was $\delta y/\delta t$!
H: show that Norm of v+iT(v) equals to the Norm of v-iT(v) So I have $V$ an inner product space above $C$ and a linear operator $T$ such as $T=T^*$ on $V$ I need to prove that: $$||v+iT(v)||=||v−iT(v)||$$ I tried to write it by definition, but I didn't get any useful. I also don't understand how to use the hermitian characteristic which given. AI: Some fun facts used below: $\langle u,tv\rangle=\bar{t}\langle u,v\rangle$, $\langle tu,v\rangle=t\langle u,v\rangle$, and $\langle T(u),v\rangle=\langle u,T^{*}(v)\rangle$, where $t \in \Bbb{C}$ is a scalar. \begin{align*} \|v+iT(v)\|^2&=\langle v+iT(v),\, v+iT(v)\rangle\\ &=\langle v,v\rangle + \langle v,iT(v)\rangle + \langle iT(v),v\rangle + \langle iT(v),iT(v)\rangle\\ &=\|v\|^2-i\langle v,T(v)\rangle+i\langle T(v),v\rangle+\|T(v)\|^2\\ &=\|v\|^2-i\langle v,T(v)\rangle+i\langle v,T^{*}(v)\rangle+\|T(v)\|^2 \\ &=\|v\|^2-i\langle v,T(v)\rangle+i\langle v,T(v)\rangle+\|T(v)\|^2 && (\because T^{*}=T)\\ &=\|v\|^2+\|T(v)\|^2. \end{align*} Now do the same with RHS to see the equality.
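A numerical illustration of the identity with a random Hermitian matrix (my own Python/NumPy sketch; the matrix size and the seed are arbitrary choices):

```python
# Build a random Hermitian T (equal to its conjugate transpose) and a random
# complex vector v, then compare the two norms; they agree to machine precision.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
T = (A + A.conj().T) / 2
v = rng.normal(size=n) + 1j * rng.normal(size=n)

print(np.linalg.norm(v + 1j * T @ v), np.linalg.norm(v - 1j * T @ v))
```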
H: Dimensional analysis in combinatorics From HMMT: Fifteen freshmen are sitting in a circle around a table, but the course assistant (who remains to stand) has made only six copies of today's handout. No freshman should get more than one handout and any freshman who does not get one should be able to read a neighbor's. If the freshman are distinguishable but the handouts are not, how many ways are there to distribute the six handouts subject to the above conditions? The solution starts by considering the expected number of handouts to be received by any individual student. By linearity of expectation, there are 15 students and 6 handouts, so each student is expected to individually receive 6/15 handouts. Then, for an arbitrary individual student S, we compute the number of distributions of handouts in which S receives a handout, called $y$. Also, let $x$ be the answer; $x$ is the number of ways to distribute the six handouts subject to the conditions of the problem. Now, the solution states that $y=\frac{6}{15}x \Longleftrightarrow x = \frac{15}{6}y$, which is how we shall find the answer. This feels nearly obvious because with $y=\frac{6}{15}x$ we're multiplying the # distributions by the expected number of handouts per student, but if we apply some kind of "dimensional analysis" to this, the multiplication does not turn out to something like "# distributions per student". In particular, what would be the resulting meaning if we divided $x$ by the number of students as in $\frac{x}{15}$, rather than dividing $x$ by the number of students, then multiplying by the number of handouts, as we do with the equation $y=\frac{6}{15}x$? AI: Well, $\frac{6}{15}$ can be thought of as n expected value, but in the solution, it is the probability that a student receives a handout, and probability is unitless. Here's an equivalent way to phrase it, which may be easier to accept. Let $x$ be the overall number of solutions, and let $x_1, x_2, \dots, x_{15}$ be the number of solutions in which student $1, 2, \dots, 15$ respectively gets a handout. The solution shows that $x_i = 50$ for any $i$. If we add up $x_1 + x_2 + \dots + x_{15}$, then each solution is counted $6$ times, because in each solution, $6$ students get handouts. Therefore $$ x_1 + x_2 + \dots + x_{15} = 6x \implies 15 \cdot 50 = 6x \implies x = \frac{15}{6} \cdot 50 = 125. $$
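Since there are only $\binom{15}{6}=5005$ possible handout sets, the count is easy to confirm by brute force (a Python sketch of my own, not part of the solution):

```python
# A distribution is a set of 6 students (of 15 sitting in a circle) who get handouts.
# Condition: every student either holds a handout or sits next to someone who does.
from itertools import combinations

count = 0
for handed in combinations(range(15), 6):
    s = set(handed)
    if all(i in s or (i - 1) % 15 in s or (i + 1) % 15 in s for i in range(15)):
        count += 1
print(count)  # expected: 125, matching x = (15/6) * 50
```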
H: Prove that every sequence of real numbers has at least one limit point Sequence is infinite and bounded. Let $A=\{x_n|n \in\mathbb{N}\}.$ Since $A$ is both bounded and infinite existence of limit point comes directly from BW theorem for sets Sequence is infinite and unbounded. Let $G$ be some neighbourhood of $+\infty$ (same applies for $-\infty$). For any $M\in\mathbb{R}, \exists n\in\mathbb{N}$ such that $x_n\in(M,+\infty)$ $\forall n\geq$ some $n_0$ thus there is a subsequence of $x_n$ that converges to infinity and so we can say that $+\infty$ is limit point of $x_n$ Sequence is finite and bounded There is certain real $a$ such that $x_n=a$ for finite $n$.$\implies \exists x_{n_k}=a; \forall k\in\mathbb{N}\implies lim_{k\to\infty} x_{n_k} = a$ thus there is subsequence of $x_n$ that converges to some point ($a$) which is its limit point. Sequence cannot be finite and unbounded in $\mathbb{R}$ Please check my proof for any errors. AI: You haven’t actually finished the first case. You know that the set $A$ has a limit point, say $p$, but you still have to show that the sequence has $p$ as a limit point (or as I would call it, a cluster point), i.e., that it has a subsequence converging to $p$. You can do this by recursively constructing the subsequence. Suppose that for $k=1,\ldots,m$ you’ve chosen $n_k\in\Bbb Z^+$ such that $n_1<\ldots<n_m$ and $|x_{n_k}-p|<\frac1k$; there are infinitely many $\ell\in\Bbb Z^+$ such that $|x_\ell-p|<\frac1{m+1}$, so let $$n_{m+1}=\min\left\{\ell\in\Bbb Z^+:\ell>n_m\text{ and }|x_\ell-p|<\frac1{m+1}\right\}\;.$$ This allows the recursive construction to continue, and we get a subsequence $\langle x_{n_k}:k\in\Bbb Z^+\rangle$ of the original sequence that converges to $p$. This shows that $p$ really is a limit point of the original sequence. In the second case your really ought to do something similar: you need to show that you can actually get a subsequence converging to $+\infty$. It would suffice to show that we can find $n_k\in\Bbb Z^+$ for $k\in\Bbb Z^+$ such that $n_1<n_2<\ldots$ and $x_{n_k}>k$ for each $k\in\Bbb Z^+$; this can be done by a recursive construction very similar to the one that I just did for the first case. I think that you have a typo in your third case: I believe that you meant to say that there is an $a\in\Bbb R$ such that $x_n=a$ for infinitely many $n\in\Bbb Z^+$. In that case the subsequence $\langle x_n:x_n=a\rangle$ is a constant subsequence converging to $a$.
H: One nappe of the hyperbola is an embedding. Pollack 1.3.8 The problem asks to check that the map $f:\mathbb{R}^1\to \mathbb{R}^2$ given by $t\mapsto(\cosh(t),\sinh(t))$ is a closed embedding. I tried two different approaches to solve this problem. First, we can use the fact that $$f \text{ is a closed embedding if and only if } f(\mathbb{R}^1) \text{ is closed in }\mathbb{R}^2$$ and $f$ is a homeomorphism onto its image. We can see that $f(\mathbb{R}^1)$ is closed since its complement is open. Next, we want to show that $f$ is a homeomorphism. If we are going to look at it geometrically, then it is obvious as we can see that the image/preimage of open sets is going to be open. My first question: how to show this more precisely? I was thinking to define a function $g:f(\mathbb{R}^1)\to \mathbb{R}^1$ given by $g(x,y)=\ln(x+y)$ since $$(x,y)=(\cosh(t),\sinh(t))\text{, so } x+y=e^t\text{ i.e. }t=\ln(x+y)$$ So, we can see that $f$ and $g$ are continuous functions and inverses of each other i.e. $f$ is a homeomorphism. Does it work? (1) Second approach is to use the direct definition of the closed embedding. In other words, $$f\text{ is a closed embedding if and only if } f\text{ is an immersion and the preimage of every compact set is compact.}$$ $f$ is an immersion if the differential $df_a$ is injective for all $a\in\mathbb{R}^1$ i.e. the tangent vector is never zero. But, we can see that $df_a=\begin{bmatrix}\sinh(a)\\ \cosh(a)\end{bmatrix}$, so $|df_a|\neq0$ To show that the preimage of a compact set is compact, I used the following. If we take any compact set $A$ in $\mathbb{R}^2$, then we always can find a closed ball center at the origin that will contain $A$. That closed ball will intersect $f(\mathbb{R}^1)$ at some point $(x_0,y_0)$ i.e. at some point $t_0\in \mathbb{R}^1$ as $(x_0,y_0)\in f(\mathbb{R}^1)$. Then we can see that $$f^{-1}(A\cap f(\mathbb{R}^1))\subset[-t_0,t_0]$$ Since $A\cap f(\mathbb{R}^1)$ is closed, $f$ is continuous, and $[-t_0,t_0]$ is compact, then $f^{-1}(A\cap f(\mathbb{R}^1))$ is compact as the closed subset of the compact set. Does it work? (2) AI: First approach: You claimed, but did not prove, that the complement of $f(\Bbb R)$ is open. It is easier to prove that $f(\Bbb R)$ is closed. In fact,$$f(\Bbb R)=\{(x,y)\in\Bbb R^2\mid x\geqslant0\}\cap\{(x,y)\in\Bbb R^2\mid x^2-y^2=1\}.\tag1$$The first set is closed, since it is $\varphi^{-1}\bigl([0,\infty)\bigr)$ with $\varphi(x,y)=x$ and $\varphi$ is continuous; and the second set is closed, since it is $\psi^{-1}\bigl(\{1\}\bigr)$ and $\psi$ is continuous. So, $(1)$ is closed. And, in order to prove that $f$ is a homeomorphism onto its image, what you dis is correct, but it is simpler to use the fac that $\sinh$ is a homeomorphism (from $\Bbb R$ onto itself) and that $(x,y)\mapsto\sinh^{-1}(y)$ is the inverse of $f$. Second approach: Use the fact that every compact subset of $\Bbb R^2$ is contained in some set of the form $\{(x,y)\in\Bbb R^2\mid x\leqslant M\}$, for some $M\geqslant1$, and that, if $t_0\in[0,\infty)$ is such that $\cosh(t_0)=M$, then $f^{-1}(A)\subset f^{-1}(M)=[-t_0,t_0]$.
H: Discover and prove a theorem relating $\bigcap_{i \in J}A_i$ and $\bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i)$. This is an exercise from Velleman's "How To Prove It": Suppose $\mathcal{F}$ is a nonempty family of sets. Let $I = \bigcup \mathcal{F}$ and $J = \bigcap \mathcal{F}$. Suppose also that $J \neq \emptyset$, and notice that it follows that for every $X \in \mathcal{F}$, $X \neq \emptyset$, and also that $I \neq \emptyset$. Finally, suppose that $\{A_i | i \in I\}$ is an indexed family of sets. d. Discover and prove a theorem relating $\bigcap_{i \in J}A_i$ and $\bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i)$. After doing a few examples on paper, I decided that $\bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i) \subseteq \bigcap_{i \in J}A_i$. Here is a proof of this supposition: Proof: Let $y \in \bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i)$ be arbitrary. Then we can choose an $X \in \mathcal{F}$ such that $y \in \bigcap_{i \in X}A_i$. Now let $j \in J = \bigcap \mathcal{F}$ be arbitrary. Since $j \in \bigcap \mathcal{F}$ and $X \in \mathcal{F}$, we must have $j \in X$. Then since $j \in X$ and $y \in \bigcap_{i \in X}A_i$, $y \in A_j$. Since $j$ was arbitrary, $y \in \bigcap_{i \in J} A_i$. Since $y$ was arbitrary, $\bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i) \subseteq \bigcap_{i \in J}A_i$. $\square$ I am struggling to understand how this is true intuitively. Right now, I am thinking of $\mathcal{F}$ as a family of sets containing sets of indices, e.g., {{1,2}, {2,3}, {2,4}} (notice $\bigcap \mathcal{F} \neq \emptyset$). Then $y \in \bigcup_{X \in \mathcal{F}}(\bigcap_{i \in X}A_i)$ means that there is a set of indices in $\mathcal{F}$ such that $y$ is contained in $A_i$ for every index $i$ in that set. $y \in \bigcap_{i \in J}A_i$ means that for every index $i$ that is contained in all sets $X \in \mathcal{F}$, we must have $y \in A_i$. The formal proof seems to work out, but I am not seeing the relationship between these two sets clearly. AI: If $X\in\mathscr{F}$, then $J\subseteq X$. This immediately implies that $\bigcap_{i\in X}A_i\subseteq\bigcap_{i\in J}A_i$: when you intersect the $A_i$ with $i\in X$, you’re intersecting all of the $A_i$ with $i\in J$ and possible some others as well, so if anything you get a smaller intersection. This is true for every $X\in\mathscr{F}$, so each of the intersections $\bigcap_{i\in X}A_i$ is contained in the big intersection $\bigcap_{i\in J}A_i$, and therefore their union is as well: $\bigcup_{X\in\mathscr{F}}\bigcap{i\in X}A_i\subseteq\bigcap_{i\in J}A_i$. The key to intuitive understanding is recognizing that when you intersect a larger collection of sets, you’re imposing more restriction on what can be in the intersection, so you get a smaller intersection. Each $X$ is larger than $J$ (or, to be more accurate, at least as large as $J$).
H: Given $V \in \mathbb{R}^{n\times(n-r)}$, why does $V^TAV = 0$ imply $\operatorname{rank}(A) \leq r$? I am doing a problem from Convex Optimization by Stephen P. Boyd. I am having trouble understanding the solution. The original problem statement and solution are as follows: 2.13 Conic hull of outer products. Consider the set of rank-$k$ outer products, defined as $\left\{X X^{T} \mid X \in \mathbf{R}^{n \times k}, \ \textbf{rank} X=k\right\} .$ Describe its conic hull in simple terms. Solution. We have $X X^{T} \succeq 0$ and $\textbf{rank}\left(X X^{T}\right)=k .$ A positive combination of such matrices can have rank up to $n,$ but never less than $k .$ Indeed, let $A$ and $B$ be positive semidefinite matrices of rank $k,$ with $\textbf{rank}(A+B)=r<k .$ Let $V \in \mathbf{R}^{n \times(n-r)}$ be a matrix with $\mathcal{R}(V)=\mathcal{N}(A+B),$ i.e. $$V^{T}(A+B) V=V^{T} A V+V^{T} B V=0$$ since $A, B \succeq 0,$ this means $$V^{T} A V=V^{T} B V=0$$ which implies that $\textbf{rank} A \leq r$ and $\textbf{rank} B \leq r .$ We conclude that $\textbf{rank}(A+B) \geq k$ for any $A, B$ such that $\textbf{rank}(A)=\textbf{rank}(B)=k$ and $A, B \succeq 0$. It follows that the conic hull of the set of rank-$k$ outer products is the set of positive semidefinite matrices of rank greater than or equal to $k,$ along with the zero matrix. In the solution above, there are two steps that I don't understand. Why does $\mathcal{R}(V) = \mathcal{N}(A+B)$ imply $V^T(A+B)V = 0$? (The notation here, $\mathcal{R},\mathcal{N}$, means range and nullspace, respectively.) Why does $V^TAV = 0$ imply $\textbf{rank} A \leq r$? AI: If ${\cal R} V = \ker(A+B)$ then $(A+B)V x = 0$ for all $x$, hence $(A+B)V=0$. Hence it follows that $V^T(A+B)V = 0$. Note that if $A$ is symmetric positive semidefinite then using the spectral decomposition we can write $A = C^T C$ for some $C$. So, if $V^TAV = 0$ then $(CV)^T(CV) = 0$ and so $CV =0$ and so $C^TCV=AV = 0$. Since the $n-r$ columns of $V$ are linearly independent (they span the $(n-r)$-dimensional null space of $A+B$) and all lie in the null space of $A$, we get $\dim\mathcal{N}(A)\geq n-r$ and hence $\textbf{rank} A \leq r$. Also, note that the proof as you have shown only establishes that matrices in the conic hull have rank $\ge k$, but does not show that for any $r =k+1,...,n$ there is a conical combination that has rank $r$. It is not hard to demonstrate, but the above is not a complete proof. Pick $A\ge 0$ of rank $r \in \{k,...,n\}$ and suppose that $U$ is an orthogonal matrix such that $U^TAU = \Lambda = \operatorname{diag} \{\lambda_1,...,\lambda_r,0,..., 0\}$, where $\lambda_1,...,\lambda_r$ are all the strictly positive eigenvalues. If $b \in \{0,1\}^r$, let $\Lambda_b = \operatorname{diag} \{ b_1 \lambda_1,..., b_r \lambda_r, 0,...,0 \}$. Let $B= \{ b \in \{0,1\}^r \mid \text{exactly }k\text{ of the }b_i\text{ are 1}\}$ and note that if $b \in B$ then $\Lambda_b$ has rank $k$ and hence so does $U \Lambda_b U^T$. Finally, note that $\Lambda = {r \over k}{1 \over \binom{r}{k} }\sum_{b \in B} \Lambda_b$ and so $A = {r \over k}{1 \over \binom{r}{k} }\sum_{b \in B} U \Lambda_b U^T$
H: History of Gamma and Beta functions I'm looking for a book on the history of gamma $\Gamma$ and beta $B$ functions! thank you in advance. AI: I don't know of any books, but here is an article on this topic: Davis, P.J.: Leonard Euler’s integral: a historical profile of the Gamma function. Am. Math. Mon. 66, 849–869 (1959)
H: When matrix $A$ is linear isometry in $\|\cdot\|_{\infty}$ norm? What are necessary/sufficient conditions for matrix $A \in \mathbb{R}^{n\times n}$ to hold the following property? $$\|Av\|_{\infty} = \|v\|_{\infty}$$ AI: It holds if and only if $A$ is an entrywise signed permutation matrix. Since $\|Ae_j\|_\infty=\|e_j\|_\infty=1$, every $|a_{ij}|$ is bounded above by $1$ and each column of $A$ contains at least one entry whose value is $\pm1$. On the other hand, as $\|v\|_\infty=\|Av\|_\infty$ for all $v$, we also have $\max_i\sum_j|a_{ij}|=\|A\|_\infty=1$. Therefore, each row of $A$ has at most one entry whose value is $\pm1$. It follows that on each column or on each row of $A$, there is exactly one entry whose value is $\pm1$. However, as $\max_i\sum_j|a_{ij}|=\|A\|_\infty=1$, all other entries must be zero. Therefore $A$ is a permutation matrix carrying signs on its entries.
H: Independence of two random variables (by checking joint generating function) In my textbook it says that two random variables $X$ and $Y$ are independent if and only if $G_{X,Y}(s,t) = G_X(s) \cdot G_Y(t)$ (where $G$ is the probability generating function of the random variable). I'm trying to prove this statement, this is what I have so far. $\Rightarrow$ If $X$ and $Y$ are independent, then $G_{X,Y}(s,t) = \mathbb{E}[s^X t^Y]= \mathbb{E}[s^X]\mathbb{E}[t^Y] = G_X(s) G_Y(t)$. $\Leftarrow$ If $G_{X,Y}(s,t) = G_X(s) G_Y(t),$ then $\sum_{k=0}^\infty \sum_{j=0}^\infty s^k t^j p_{X,Y}(k,j) = \sum_{k=0}^\infty s^k \sum_{j=0}^\infty t^j p_{X,Y}(k,j)= \sum_{k=0}^\infty s^k p_X(k) \sum_{j=0}^\infty t^j p_Y(j).$ Am I able to directly conclude that $p_{X,Y}(k,j) = p_X(k)p_Y(j)$ from the equation above? I know that I have to somehow equate the coefficients and deduce that the joint distribution is the product of the marginals. I'm just not quite sure how to proceed. AI: $\def\eq{\,{=}\,}$The probability mass function of a discrete integer-valued random variable is recovered by taking derivatives of its probability generating function, vis:$$\begin{align}\mathsf P(X\eq k) &=\dfrac{\mathsf G_X^{(k)}(0)}{k!}&&\big[k\in\Bbb N\big]\\[1ex]\mathsf P(Y\eq j)&=\dfrac{\mathsf G_Y^{(j)}(0)}{t!}&&\big[j\in\Bbb N\big]\\[1ex]\mathsf P(X\eq k,Y\eq j)&=\dfrac{\mathsf G_{X,Y}^{(k,j)}(0,0) }{k!~j!}&&\big[k\in\Bbb N,j\in\Bbb N\big]\end{align}$$ So your task is to show that: $$\mathsf G_{X,Y}(s,t)=\mathsf G_X(s)\cdot\mathsf G_Y(t)\implies \mathsf G_{X,Y}^{(k,j)}(0,0)=\mathsf G_X^{(k)}(0)\cdot\mathsf G_Y^{(j)}(0)$$ $~\\~$ $\small\mathsf G_Z^{(i)}(0)\mathop{:=}\left.\dfrac{\mathrm d^i\mathsf G_Z(r)}{\mathrm d r~^i}\right\vert_{r=0}$
H: Find a power series that is convergent on the closed unit disk but diverges elsewhere. Question: Does there exist a power series centered at $z=0$, $f(z)=\sum_{n=0}^\infty a_n z^n$ such that the domain of $f$ is exactly the unit disk $D^2\subset \mathbb{C}$? In other words, I'm looking for a power series whose radius of convergence $\rho=1$ such that the series also converges on the unit circle. Motivation: I'm thinking about a problem: "does there exist a Laurent series that converges only on the unit circle but nowhere else?" I realize that this problem reduces to the above question. AI: We know $f(z)$ will converge if it converges absolutely (on $D^2$), i.e. $$ \sum_{n=1}^\infty |a_n| \: |z|^n, $$ converges. Take $a_n = 1/n^2$. For $|z| \leq 1$ (i.e. $z \in D^2$), we have: $$ \sum^\infty_{n=1} \frac{1}{n^2} |z|^n \leq \sum^\infty_{n=1} \frac{1}{n^2}, $$ and the RHS converges by the $p$-test. Thus the LHS converges (since all terms are non-negative), implying absolute convergence of $\sum_n \frac{1}{n^2} z^n$. For $|z| > 1$, we see: $$ \lim_{n \to \infty} \frac{1}{n^2} |z|^n \neq 0, $$ so that $\sum_n \frac{1}{n^2} z^n$ diverges on $|z| > 1$ by the divergence test for complex series. Thus, $f(z) = \sum_{n=1}^\infty \frac{1}{n^2} z^n$ is an example of a function that meets your criteria. (Notice that I started at $n= 1$, but if you want to start at $n= 0$ you can take $a_0$ to be anything, say 1, and the argument still holds).
H: Does the green area converge to a known constant when $n\to \infty$? Let $n$ denote the number of the rectangles in the figure above. We know that the gray area converges to Euler-Mascheroni constant $(\gamma)$ when $n\to \infty$. I have three questions about the green area: $1)$ Does it converge to a known constant when $n\to \infty$? $2)$ Can we relate it to $\gamma$ when $n\to \infty$? $3)$ Which is one is larger when $n\to \infty$, the gray area or the green one? I came come up with this question when I was searching about $\gamma$ and saw the image above and the green area brought my attention. Thank you. AI: In your picture, we have: $$ \text{gray}+\text{green} = \lim_{n\to{\infty}}{\sum_{j=1}^{n} \left(\frac{1}{j} - \frac{1}{j+1}\right)} $$ This telescopes, showing $$ \text{gray}+\text{green} = 1 $$ Therefore $$ \text{green} = 1-\text{gray} = 1-\gamma $$ Because the graph $1/x$ is convex, we have $\text{green} < \text{gray}$. In fact, $\gamma \approx 0.57$ so $1-\gamma \approx 0.43$.
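A numerical check of the two limits (my own Python sketch; it sums the per-interval gray piece $\tfrac1j-\ln\tfrac{j+1}{j}$ and green piece $\ln\tfrac{j+1}{j}-\tfrac1{j+1}$ up to a large $N$):

```python
# Partial sums of the gray and green areas; they should approach the
# Euler-Mascheroni constant gamma and 1 - gamma respectively.
import numpy as np

N = 1_000_000
j = np.arange(1, N + 1, dtype=float)
gray = np.sum(1 / j - np.log((j + 1) / j))
green = np.sum(np.log((j + 1) / j) - 1 / (j + 1))
print(gray, np.euler_gamma)        # ~ 0.5772
print(green, 1 - np.euler_gamma)   # ~ 0.4228
```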
H: How to compute $\sum_{n=1}^\infty \frac{H_{2n}^2}{n^2}$? where $H_n$ denotes the harmonic number. I can't see $$\sum_{n\geq 1} \frac{1}{n^2}\left(\int_0^1 \frac{1-x^{2n}}{1-x}\ \mathrm{d}x\right)^2$$ be of any assistance; even $$-\sum_{n\geq 1}H_{2n}^2\int_0^1 x^{n-1}\log{x}$$ does not seem like it would be of any assistance unless we know of a nice closed form/ generating function for $\sum_{n\geq 1} x^{n-1}H_{2n }^2$ which I have high doubts about. Evidently, I really do not even know where to start. I know how to compute $$\sum_{n\geq1}\frac{H_n^2}{n}$$ and $$\sum_{k\geq 1}\frac{H_{2n}}{n^2}$$ but the desired sum is a mystery. Thank you! AI: $$\sum_{n=1}^\infty\frac{H_{2n}^2}{n^2}=4\sum_{n=1}^\infty\frac{H_{2n}^2}{(2n)^2}$$ now use $2\sum_{n=1}^\infty f(2n)=\sum_{n=1}^\infty f(n)+\sum_{n=1}^\infty (-1)^nf(n)$ $$\Longrightarrow \sum_{n=1}^\infty\frac{H_{2n}^2}{n^2}=2\sum_{n=1}^\infty\frac{H_{n}^2}{n^2}+2\sum_{n=1}^\infty\frac{(-1)^nH_{n}^2}{n^2}$$ where $$\sum_{n=1}^\infty\frac{H_n^2}{n^2}=\frac{17}4\zeta(4)$$ and $$\sum_{n=1}^{\infty}\frac{(-1)^nH_n^2}{n^2}=2\operatorname{Li}_4\left(\frac12\right)-\frac{41}{16}\zeta(4)+\frac74\ln2\zeta(3)-\frac12\ln^22\zeta(2)+\frac1{12}\ln^42$$
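A rough numerical check of the final value (my own sketch; it needs mpmath for $\zeta$ and $\operatorname{Li}_4$, and since the series converges slowly a direct partial sum only matches the closed form to about three decimal places):

```python
# Direct partial sum of H_{2n}^2 / n^2 versus the closed form assembled from the
# two series quoted above; expect roughly 7.886 vs 7.887 for N = 200000.
import math
from mpmath import zeta, polylog

N = 200_000
H = 0.0        # running value of H_{2n}
partial = 0.0
for n in range(1, N + 1):
    H += 1 / (2 * n - 1) + 1 / (2 * n)
    partial += H * H / (n * n)

ln2 = math.log(2)
alt = (2 * float(polylog(4, 0.5)) - 41 / 16 * float(zeta(4))
       + 7 / 4 * ln2 * float(zeta(3)) - 0.5 * ln2**2 * float(zeta(2))
       + ln2**4 / 12)
closed = 2 * (17 / 4) * float(zeta(4)) + 2 * alt
print(partial, closed)
```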
H: quotient ideal & primary decomposition A 'quotient ideal' associated to a pair of ideals $\frak{a}, \frak{b} $ $\subset R$ of a commutative ring with $1_R$ is a new ideal defined as $(\frak{a}:\frak{b})$ $= \{r \in R \mid r\frak{b} \subset \frak{a} \} $. at wikipedia page about quotient ideals I found a remark that needs clarification. the assertion is that the ideal quotient is useful for calculating primary decompositions. How concretly the ideal quotient helps to determine a primary decomposition of a ideal? let me remind that a primary decomposition of a ideal $\frak{a}$ is if we can write this ideal as an intersection $\frak{a}= P_1 \cap P_2 \cap ... \cap P_m$ where $\frak{P}_i$ are primary ideals. I would be very grateful if somebody could explain the main idea why the quotient ideals provide a useful tool to calculate such primary decomposition. AI: One such example is Atiyah-Macdonald Theorem 4.5. If we assume that the decomposition is minimal, then the prime ideals $\mathfrak{p}_i = r(\mathfrak{P}_i)$ are precisely the ideals $r(\mathfrak{a} : x)$ for $x \in R$. Other applications appear in Chapter 4 of Atiyah-Macdonald.
H: Definition of power set uses strict subset In the following lecture notes one finds the following: Examples. The following are all examples of σ-algebras. (HW: check this in each case) • Let P(X) denote the collection of all subsets of X, i.e. P(X) := {A : A ⊂ X} (we write ⊂ rather than ⊆, so in our notation X ⊂ X is a true statement). P(X) is called the power set of X, and is a σ-algebra. I don't understand how using the strict subset makes X ⊂ X a true statement, or what the author is trying to get across here? AI: Note that the author says: "we write $\subset$ rather than $\subseteq$, so in our notation $X\subset X$ is a true statement." They are using the symbol "$\subset$" as a synonym for "$\subseteq$." (This leaves the symbol "$\subsetneq$" for proper subsethood.) Munrkes' topology textbook also follows this convention. Personally I think this is a terrible choice since it clashes with "$<$ vs. $\le$," but they are being consistent and stating it explicitly.
H: How to show $\sum_{k=0}^{\infty} \frac{1}{k!} \left( \int_{1}^{x} \frac{1}{t} \ dt \right)^k =x$? There are a lot of ways to show that $e^x$ and $\ln(x)$ are inverse functions of each other depending on how you define them. I am trying to show that given the definitions $$ e^x:= \sum_{k=0}^{\infty} \frac{x^k}{k!} \qquad \text{and} \qquad \ln(x) := \int_{1}^{x} \frac{1}{t} \ dt$$ then $$ e^{\ln(x)}=\sum_{k=0}^{\infty} \frac{1}{k!} \left( \int_{1}^{x} \frac{1}{t} \ dt \right)^k =x$$ My attempt: My idea was to show that $\frac{d^2}{dx^2} e^{\ln(x)} =0$, and then use the initial conditions I can get by evaluating the definitions at specific values to figure out that the constant of integration must be $0$. Doing this I get $$ \frac{d^2}{dx^2}\sum_{k=0}^{\infty} \frac{1}{k!} \left( \int_{1}^{x} \frac{1}{t} \ dt \right)^k = \sum_{k=0}^{\infty} \frac{1}{k!} \frac{d^2}{dx^2} \left( \int_{1}^{x} \frac{1}{t} \ dt \right)^k $$ From here I use the fact that $\frac{d^2}{dx^2} f(g(x)) = g'(x)^2 f''(g(x)) + f'(g(x))g''(x)$, which applied to this gives me \begin{align*} =& \sum_{k=0}^{\infty} \frac{1}{k!} \left[\left(\frac{1}{x}\right)^2 k(k-1)\left( \int_{1}^{x} \frac{1}{t} \ dt \right)^{k-2} + k\left( \int_{1}^{x} \frac{1}{t} \ dt \right)^{k-1} \left(-\frac{1}{x^2}\right) \right]\\ =&\frac{1}{x^2}\left[ \sum_{k=0}^{\infty} \frac{1}{(k-2)!}\left( \int_{1}^{x} \frac{1}{t} \ dt \right)^{k-2} - \sum_{k=0}^{\infty} \frac{1}{(k-1)!}\left( \int_{1}^{x} \frac{1}{t} \ dt \right)^{k-1} \right]= \frac{1}{x^2} \left(\frac{1}{(-2)!}\right) \left[\ln(x) \right]^{-2} \end{align*} which is the point where I noticed I may have made several mistakes in the process, since this last result didn't make much sense to me. Could anyone tell me where my mistakes are? And also, does anyone know another way to rigorously show this result from the definitions in the beginning? Thank you! AI: It is best to avoid conventional symbols for these functions while dealing with such problems because doing so runs the risk of inadvertently using some of their properties without proof. So let $$f(x) =\sum_{k=0}^{\infty} \frac{x^k} {k!}, x\in\mathbb{R}, g(x) =\int_{1}^{x}\frac{dt}{t},x>0$$ From these definitions we get $$f'(x) =f(x), g'(x) =\frac{1}{x}$$ and therefore if $h(x) =g(f(x))-x $ then $$h'(x) =g'(f(x)) f'(x) -1=\frac{1}{f(x)}\cdot f(x) - 1=0$$ It follows $h$ is constant with $$h(x) =h(0)=g(f(0))=g(1)=0$$ One can't apply similar technique to show that $f(g(x)) =x$, but that can be deduced from $g(f(x)) =x$ by observing that $f, g$ are strictly monotone and hence each is invertible.
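To see the two definitions interacting numerically, here is a small Python sketch of my own: it approximates $\int_1^x \frac{dt}{t}$ with Simpson's rule and feeds the result into the partial sums of the exponential series, never calling a built-in log or exp; the composition should return (approximately) the input $x$.

```python
# Approximate ln(x) := integral of 1/t from 1 to x (Simpson's rule), then apply
# the power-series definition of exp to that value; the composition returns x.
def integral_recip(x, n=10001):          # n odd => an even number of subintervals
    h = (x - 1) / (n - 1)
    t = [1 + i * h for i in range(n)]
    f = [1 / v for v in t]
    return h / 3 * (f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))

def exp_series(y, terms=60):
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= y / (k + 1)
    return s

for x in [0.5, 2.0, 7.3]:
    print(x, exp_series(integral_recip(x)))
```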
H: Every commutative ring of matrices over $\mathbb{R}$ is isomorphic to the diagonals? Diagonal matrices are an abelian group under addition, and with multiplication they become a commutative ring $(\mathcal{D},+, *)$. More generally, the set of $n \times n$ matrices in $M_n(\mathbb{R})=\mathbb{R}^{n \times n}$ that are simultaneously diagonalized by a given eigenbasis (see Prove that simultaneously diagonalizable matrices commute) will also yield a commutative ring. I believe any such set will be isomorphic to the diagonals, since all elements are of the form $SDS^{-1}$ for fixed $S$ and any diagonal $D$. My hypothesis: if $\mathcal{R} \subseteq M_n(\mathbb{R})$ forms a commutative ring $(\mathcal{R},+,*)$, then $\mathcal{R} \cong \mathcal{D}$. True or false? EDIT: False, as scalar matrices $Z(M_n(\mathbb{R}))=kI_n \not \cong \mathcal{D}$. So the hypothesis should be $\mathcal{R} \cong \mathcal{D}$ OR some subring of $\mathcal{D}$. I am considering "rings" to be unital, although rng counterexamples are still interesting. User JCAA provided an excellent counterexample. For $\alpha, a_i \in \mathbb{R}$, consider upper triangular matrices of the form $$ \begin{bmatrix} \alpha & a_{2} & \dots & a_{n} \\ 0 & \alpha & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \alpha \\ \end{bmatrix} = \begin{bmatrix} \alpha I_1 & A_{1 \times (n-1)} \\ 0_{(n-1) \times 1} & \alpha I_{n-1} \\ \end{bmatrix} $$ (The right side is block matrix notation.) For distinct matrices in this set $\mathcal{U}$, we have $$ \begin{bmatrix} \alpha I & A \\ 0 & \alpha I \\ \end{bmatrix} \begin{bmatrix} \beta I & A \\ 0 & \beta I \\ \end{bmatrix} = \begin{bmatrix} \alpha \beta I & \alpha A + \beta A \\ 0 & \alpha \beta I \\ \end{bmatrix} = \begin{bmatrix} \alpha \beta & (\alpha + \beta) a_{2} & \dots & (\alpha + \beta) a_{n} \\ 0 & \alpha \beta & \dots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \dots & \alpha \beta \\ \end{bmatrix} $$ So multiplication is closed and commutative (and associative, distributive since "multiplication" is just composition of linear transformations); moreover, $I_n \in \mathcal{U}$ so this becomes a unital ring. Unlike $\mathcal{D} \cong \mathbb{R} \times \dots \times \mathbb{R}$, some $u \in \mathcal{U}$ satisfy $u^2 = 0$. AI: The answer is "no". Consider the ring of matrices with first row $(0, x,y,...,z)$ and all other entries 0. This is a ring with zero product, so commutative. It is not isomorphic to a subring of $D$.
H: Conditional distribution of trivariate normal Consider three non-independent normally distributed random variables $(T,S,C)$. I am interested in the distribution of $T$ conditional on $S=s$ and $C=c$. I know that, for $\mu_T=\mu_S=\mu_C=0$ and $\sigma_T=\sigma_S=\sigma_C=1$, the conditional mean is given by $E[T|S=s, C=c]=\beta_{TS;c} s+\beta_{TC;s} c$ where the betas are the regression coefficients: $\beta_{ij;k}=\frac{\rho_{ij}-\rho_{ik}\rho_{jk}}{\sqrt{1-\rho_{ik}^2}\sqrt{1-\rho_{jk}^2}}$ Is there a similar way to parameterise $Var[T|S=s, C=c]$? NB. I'm not interested in a step-by-step derivation – I'd be equally happy with an expression derived from the symbolic integrals in Mathematica. AI: Assuming by "three non-independent normally distribution random variables" you mean that $(T,S,C)$ has a trivariate normal distribution with mean vector $(\mu_T,\mu_S,\mu_C)$ and covariance matrix $$\Sigma=\left( \begin{array}{ccc} \sigma_T^2 & \rho_{TS} \sigma_S \sigma_T & \rho_{TC} \sigma_C \sigma_T \\ \rho_{TS} \sigma_S \sigma_T & \sigma_S^2 & \rho_{SC} \sigma_C \sigma_S \\ \rho_{TC} \sigma_C \sigma_T & \rho_{SC} \sigma_C \sigma_S & \sigma_C^2 \\ \end{array} \right)$$ Using Mathematica the conditional CDF for $T|S=s,C=c$ is conditionalCDF = Probability[T <= t \[Conditioned] S == s && C == c, {T, S, C} \[Distributed] MultinormalDistribution[μ, Σ]]; Then the conditional PDF is conditionalPDF = D[conditionalCDF, t] We see from inspection that the conditional pdf is that of a normally distributed random variable with mean and variance which can be simplified to $$\mu_T+\frac{\sigma_T (\sigma_C (s-\mu_S) (\rho_{SC} \rho_{TC}- \rho_{TS})-\sigma_S (c-\mu_C) (\rho_{TC}-\rho_{SC} \rho_{TS}))} {\left(\rho_{SC}^2-1\right) \sigma_C \sigma_S}$$ and $$\frac{\sigma_T^2 \left(\rho_{SC}^2-2 \rho_{SC} \rho_{TC} \rho_{TS}+\rho_{TC}^2+\rho_{TS}^2-1\right)}{\rho_{SC}^2-1}$$ respectively.
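As an independent cross-check (my own NumPy sketch, not from the answer): the conditional variance of a multivariate normal is also given by the Schur complement $\Sigma_{TT}-\Sigma_{T,(S,C)}\Sigma_{(S,C),(S,C)}^{-1}\Sigma_{(S,C),T}$, and for a randomly generated covariance matrix it agrees with the closed form above.

```python
# Compare the Schur-complement conditional variance with the stated formula
# for a random (hence generic) covariance of (T, S, C).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
Sigma = A @ A.T                              # order of variables: T, S, C
sT, sS, sC = np.sqrt(np.diag(Sigma))
rTS = Sigma[0, 1] / (sT * sS)
rTC = Sigma[0, 2] / (sT * sC)
rSC = Sigma[1, 2] / (sS * sC)

schur = Sigma[0, 0] - Sigma[0, 1:] @ np.linalg.inv(Sigma[1:, 1:]) @ Sigma[1:, 0]
formula = sT**2 * (rSC**2 - 2 * rSC * rTC * rTS + rTC**2 + rTS**2 - 1) / (rSC**2 - 1)
print(schur, formula)                        # agree to machine precision
```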
H: Prove that for all sets A and B, A ∩ B = ∅ implies ( A ∪ B ) - B = A In the next proof we avail ourselves of the next lemma: For all sets A and B, ( A ∪ B ) - B = A. Proof: Let A and B be arbitrary sets and let x ∈ ( A ∪ B ) - B. x ∈ ( A ∪ B ) - B ⇔ x ∈ ( A ∪ B ) ∧ x ∉ B ⇔ ( x ∈ A ∨ x ∈ B ) ∧ x ∉ B ⇔ ( x ∈ A ∧ x ∉ B ) ∨ ( x ∈ B ∨ x ∉ B) For p ∧ ~ p ≡ F and p ∧ q ⇒ p where p and q are prepositions, ( x ∈ A ∧ x ∉ B ) ∨ ( x ∈ B ∨ x ∉ B ) ⇔ x ∈ A ∨ F By modus tollendo ponens, x ∈ A ∨ F ⇔ x ∈ A So then, x ∈ ( A ∪ B ) - B ⇔ x ∈ A And ( A ∪ B ) - B = A Therefore, the statement for all sets A and B, A ∩ B = ∅ implies ( A ∪ B ) - B = A is true trivially. Is this proof right? AI: Consider $$B \subset A, B \ne \emptyset, A \setminus B = C \ne \emptyset$$ Then, because B is non empty $$A \cup B = A \ne A \setminus B$$
H: What's going on with this Wolfram Alpha matrix multiplication? I was computing the matrix exponential of $$X = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}$$ by diagonalization. Wolfram Alpha and Symbolab both solve this problem correctly. After the diagonalization and taking the exponential of the diagonal matrix, the only thing left to do is to compute the product $$e^X = Pe^DP^{-1} = \begin{pmatrix}1&-1&1\\ 1&0&-2\\ 1&1&1\end{pmatrix}\begin{pmatrix}1&0&0\\ 0&e&0\\ 0&0&e^3\end{pmatrix}\begin{pmatrix}\frac{1}{3}&\frac{1}{3}&\frac{1}{3}\\ -\frac{1}{2}&0&\frac{1}{2}\\ \frac{1}{6}&-\frac{1}{3}&\frac{1}{6}\end{pmatrix},$$ and this is where I ran into a problem. Symbolab computes this multiplication and obtains the value of $e^X$ that I was expecting (and that Wolfram Alpha and Symbolab both reported earlier), that is, $$e^x = \begin{pmatrix}\frac{2+3e+e^3}{6}&\frac{-e^3+1}{3}&\frac{2-3e+e^3}{6}\\ \frac{-e^3+1}{3}&\frac{2e^3+1}{3}&\frac{-e^3+1}{3}\\ \frac{2-3e+e^3}{6}&\frac{-e^3+1}{3}&\frac{2+3e+e^3}{6}\end{pmatrix}.$$ But when I tried the same multiplication on Wolfram Alpha, all of the entries were off by $1/3$rd. I checked to make sure that I had typed everything in right, checked what Wolfram Alpha gave for the inverse to see if it matched, etc., and everything else seemed right. Am I doing something wrong? Does Wolfram Alpha not know how to multiply matrices or am I just making a simple mistake somewhere? AI: As user1551 pointed out in a comment, the answer was as simple as a missing $1$: If $$D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{pmatrix}, \quad \textrm{then} \quad e^D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & e & 0 \\ 0 & 0 & e^3 \end{pmatrix}, \quad \textrm{not} \quad e^D = \begin{pmatrix} 0 & 0 & 0 \\ 0 & e & 0 \\ 0 & 0 & e^3 \end{pmatrix}.$$
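For reference, the whole computation is easy to check numerically (a Python sketch of my own; it assumes SciPy is available for expm):

```python
# expm(X) computed directly should match P exp(D) P^{-1}, and the (1,1) entry
# should equal (2 + 3e + e^3)/6 ~ 5.0401.
import numpy as np
from scipy.linalg import expm

X = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])
P = np.array([[1., -1., 1.], [1., 0., -2.], [1., 1., 1.]])
d = np.array([0., 1., 3.])                  # eigenvalues of X

lhs = expm(X)
rhs = P @ np.diag(np.exp(d)) @ np.linalg.inv(P)
print(np.allclose(lhs, rhs))                # True
print(rhs[0, 0], (2 + 3 * np.e + np.e**3) / 6)
```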
H: Square ABCD of side a and N on AB. Find radius of congruent incircles of ACN and BCN ABCD is a square with side $a$ and diagonal $AC$. The incircles, with radius $r$, of the triangles $ACN$ and $BCN$ are congruent, with $N$ on $AB$. What is the radius $r$ in terms of $a$? AI: Let $BN = x$. Then, $AN = a-x$ and $CN = \sqrt{x^2+a^2}$. Express each triangle's area in two ways, as half base times height and as half the inradius times the perimeter: $$Area_{NBC} =\frac12 NB \cdot BC = \frac12r\cdot (NB+BC + NC)$$ $$Area_{NAC} =\frac12 NA \cdot BC = \frac12r\cdot (NA+AC + NC)$$ (for triangle $NAC$ the height from $C$ to the line $AB$ equals $BC=a$), or $$xa= r(x+a+\sqrt{x^2+a^2})$$ $$(a-x)a= r(a-x+\sqrt2 a+\sqrt{x^2+a^2})$$ Solve to obtain $x=\sqrt{\frac{\sqrt2-1}2}a$ and the radius $$r = \frac12(1-\sqrt{\sqrt2-1})a$$
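A quick numerical verification that the stated $x$ and $r$ satisfy both area identities (my own Python sketch, taking $a=1$):

```python
# Plug the claimed BN = x and inradius r into both relations; the residuals are ~0.
import math

a = 1.0
x = math.sqrt((math.sqrt(2) - 1) / 2) * a            # BN
r = 0.5 * (1 - math.sqrt(math.sqrt(2) - 1)) * a      # claimed inradius
NC = math.sqrt(x**2 + a**2)

res1 = x * a - r * (x + a + NC)                             # triangle NBC
res2 = (a - x) * a - r * ((a - x) + math.sqrt(2) * a + NC)  # triangle NAC
print(res1, res2)
```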
H: Set theory related objective questions. If $X$ and $Y$ are two non-empty finite sets and $f:X\to Y$ and $g:Y\to X$ are mappings such that $g\circ f:X\to X$ is a surjective (i.e., onto) map, then (A) $f$ must be one-to-one. (B) $f$ must be onto. (C) $g$ must be one-to-one. (D) $X$ and $Y$ must have the same number of elements. If $g\circ f$ is onto then I know only that $g$ is onto but this is not given in options. Please help me to find correct options. Thanks. AI: HINT: Because $X$ is finite, if $g\circ f$ is surjective, it must also be one-to-one.
H: Is there some reason that the fact that $L^\infty$ is the dual space of $L^1$ is an important fact? Why is the fact that $L^\infty$ is the dual space of $L^1$ an important fact? AI: For a sigma-finite measure $\mu$, $L^\infty(\mu)$ is the dual of $L^1(\mu)$. However, it is not the case that $L^1$ is also the dual of $L^\infty$! See this and this for more details.
H: Question on abelian groups I am solving an exercise in Artin's textbook that asks us to assume, within some group $G$, that $xyz = 1$ and asks if this implies that $yxz = 1$. I've found a counterexample, but want to be sure that I'm using the language correctly. To motivate finding the counterexample, I supposed for a contradiction that $yxz = 1$. I was able to prove that $xz = zx$. It was easy to, within the non-abelian group $S_3$ under composition, find an example where $xyz = 1$ but $yxz \neq 1$. Here is my question: is finding that $yxz = 1$ implies that $xz = zx$ the same as finding that $yxz = 1$ implies that the group is abelian? $x$ and $z$ are arbitrary, surely, but I've added an additional assumption that $xyz = 1$. It isn't clear, for example, that I also have that $xy = yx$. My understanding at the moment is that I have not found that the group is abelian, but rather that these particular $x$ and $z$ live in the center of the group, $Z(G)$. Is this correct? AI: No, it is not the same as proving the group is abelian. What you’ve done is show that if $xyz$ and $yxz$ are both trivial, then $x$ commutes with $z$. Nothing more, and nothing else. (Edit: Though you may be able to derive other properties about $x$, $y$, and $z$, as user750041 says in comments; but just about these elements and how they interact with each other). It does not assert that $y$ and $x$ commute, nor that $y$ and $z$ commute. You have not even shown that $x$ and $z$ are central, only that they commute with each other, not with everything in $G$. Note also that “$x$ and $z$ are arbitrary” is inaccurate: they must satisfy both $xyz=1$ and $yxz=1$. That makes them not arbitrary!
H: Converting logarithm to decimal form I apologize if this is a poorly formatted question, but i really need some help here... I am trying to solve the following problem: $4\ln^3$ When I input this into my calculator, I get $4.3944$. However, when i input it into mathway, I get $5.0136$, which is the correct answer. Here is a picture of it in mathway: mathway_img I have spent the last $2$ hours trying to figure out how to properly convert this problem into decimal form, as well as why my calculator keeps giving me a different answer. But, since I am new to logarithms, I have not been able to figure out how to get the answer $5.0136$. Could someone please tell me how mathway gets this answer? Also, why does my calculator give me a different answer than mathway? Am I inputting it incorrectly? AI: The expression in question is $$\frac{4\ln^3(11)}{11}\;.$$ In this context $\ln^3(11)$ means $(\ln 11)^3$, just as $\sin^2\theta$ normally means $(\sin\theta)^2$; this is approximately $2.3978953^3$, or about $13.787662$. Now multiply that by $4$ and divide by $11$ to get about $5.0136953$.
H: Let $p$ be a prime number and let $C$ be a cyclic subgroup of order $p$ in $G = S_p$. Compute the order of the normalizer $N_G(C)$. Task is: Let $p$ be a prime number and let $C$ be a cyclic subgroup of order $p$ in $G = S_p$. Compute the order of the normalizer $N_G(C)$. It's clear that $e$ is in the normalizer, as well as $C$ itself (since it's abelian), so $|N_G(C)| \geq p+1$, but I can't find a way to make any conclusions for the rest of the group. AI: Let $C=\langle \alpha \rangle$, then $\alpha=(a_1\, a_2 \ldots a_p)$. There are $p!$ such expressions, but each cycle can be written $p$ ways as such an expression. This gives us $(p-1)!$ $p$-cycles in $S_p$, and we know they are all conjugate. Each of these cycles generates a group of order $p$, and each such group has $p-1$ generators. Thus there are $(p-2)!$ cyclic subgroups of order $p$ in $S_p$, all conjugate. Hence the normalizer of any one of them has index $(p-2)!$ and hence has order $p(p-1)$.
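As a quick check of the formula $|N_G(C)|=p(p-1)$: for $p=3$ it gives $6=|S_3|$, consistent with the fact that $C=\langle(1\,2\,3)\rangle=A_3$ is normal in $S_3$; for $p=5$ it gives $20$, the order of the normalizer of a Sylow $5$-subgroup of $S_5$, since there are $(5-2)!=6$ such subgroups and $120/6=20$.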
H: Calculate the limit of the following function How to find the following limit: $$\lim_{x\to4} \frac{\arctan\left(\frac{2-x^{0.5}}{x}\right)}{2x^{0.5}-x}?$$ I hope to get some explanation since I'm stuck on this for hours. Edit: I got 3 different suggested answers, which is right? AI: As $\lim\limits_{x\to 0}\frac{\sin x}{x}=1$ we have $\lim\limits_{x\to 0}\frac{\tan x}{x}=1$ and therefore $\lim\limits_{x\to 0}\frac{\arctan x}{x}=1$. $$\lim\limits_{x\to4} \frac{\arctan\left(\frac{2-\sqrt{x}}{x}\right)}{2\sqrt{x}-x}= \lim\limits_{x\to4} \frac{\arctan\left(\frac{2-\sqrt{x}}{x}\right)}{\sqrt{x}\left(2-\sqrt{x}\right)}= \lim\limits_{x\to4} \frac{\arctan\left(\frac{2-\sqrt{x}}{x}\right)}{\sqrt{x^3}\left(\frac{2-\sqrt{x}}{x}\right)}= 1\cdot\lim\limits_{x\to4}\frac{1}{\sqrt{x^3}}=\frac18$$
H: Every smooth function on a manifold defines a hamiltonian vector field. Let $(M,\omega)$ be a symplectic manifold, and let $H$ be a smooth function on $M$. I want to show that $H$ is a Hamiltonian function i.e. there exists a smooth vector field $X$ on $M$ such that $$\iota_X\omega=dH \text{ }(*)$$ Since $\omega$ is a non-degenerate form as it's a symplectic form, then we can see that we can solve $(*)$ for $X$. Question: Why can we do this? My idea was to expand this equation in local coordinates. For example, let's do a simple example when $\dim(M)=2$. Let's choose some point $p\in M$ and some chart $(U,\varphi)$ containing that point with $\varphi(p)=0$. Also, consider some symplectic form $\omega_p=c(x,y)(dx)_p\wedge (dy)_p$ where $c(x,y)$ is non-zero (I am going to drop a subscript $p$ for a simplification). For a given function $H:M\to\mathbb{R}$, we want to find $X=a(x,y)\partial_x+b(x,y)\partial_y$ where $\partial_x=\frac{\partial}{\partial x}$ s.t. $(*)$ holds in local coordinates i.e. we want to solve it for the functions $a(x,y)$ and $b(x,y)$. Since $\iota_X\omega$ and $dH$ are linear, then it's enough to check $(*)$ on the basis $\{\partial_x,\partial_y\}$ of $T_pM$. I will write $H_x$ instead of $\frac{\partial H}{\partial x}$. Then we can see that $\iota_X\omega(\partial_x)=dH(\partial_x)$ gives us $$dH(\partial_x)=(H_xdx+H_ydy)(\partial_x)=H_x\text{ and }$$ $$\iota_X\omega(\partial_x)=\omega(X,\partial_x)=c(x,y)dx\wedge dy(a(x,y)\partial_x+b(x,y)\partial_y,\partial_x)=-c(x,y)b(x,y)$$ So, we have that $$H_x=-c(x,y)b(x,y)$$ The same argument works for $\partial_y$ and gives us $$H_y=c(x,y)a(x,y)$$ Since $c(x,y)\neq0$ then we have that $$a(x,y)=\frac{H_y}{c(x,y)}\text{ and }b(x,y)=\frac{-H_x}{c(x,y)}$$ which are smooth function. Therefore, $X$ is a smooth vector field which satisfies $(*)$. So, as I understand, I can use the same approach for the bigger dimension. Where instead of $c(x,y)\neq0$, I will use the fact that $\omega$ is non-degenerate? So, in other words, if I have $H$ and $\omega$, then I can explicitly find coordinates of $X$ by solving a similar system. AI: The proof that you can "solve for $X$ "is actually a rather simple extension of a linear algebra fact, which is why I'll treat the vector space case closely. Let's recall what non-degeneracy means. Definition. Let $V$ be a finite-dimensional vector space over $\Bbb{R}$, and let $\omega:V \times V \to \Bbb{R}$ be bilinear (and skew-symmetric... but this isn't really necessary). We say $\omega$ is non-degenerate if the map $\omega^{\flat}:V \to V^*$ defined by \begin{align} \omega^{\flat}(x):= \omega(x, \cdot) \equiv \iota_x \omega \equiv \bigg( y \in V \mapsto \omega(x,y) \in \Bbb{R}\bigg) \in V^* \end{align} is injective (or in finite-dimensions, we can equivalently require that it be an isomorphism). Note that I use $\equiv$ to mean "same thing expressed in different notation". You may have seen the definition probably stated as "for all $x \in V$, if for all $y \in V$, $\omega(x,y) = 0$ then $x=0$". Well, this is exactly what it means for $\omega^{\flat}$ to be injective (and hence an isomorphism). Now, being an isomorphism means it has a linear inverse, which we may denote as $\omega^{\sharp}:V^* \to V$. So, for any covector $\alpha \in V^*$, we can consider the vector $x:= \omega^{\sharp}(\alpha) \in V$. What's special about this vector $x$? 
Well, just apply $\omega^{\flat}$ to both sides of this equation and you'll see that \begin{align} \omega^{\flat}(x) = \omega^{\flat}(\omega^{\sharp}(\alpha)) = \alpha \end{align} in other words, \begin{align} \omega(x, \cdot) = \iota_x\omega = \alpha \end{align} This is why given a covector $\alpha$, we can always find a vector to make the above equation true. In your case, you just have to repeat everything pointwise. $dH$ is a covector field (i.e. a $1$-form). So, consider the vector field $X$ defined pointwise as $X_p := (\omega_p)^{\sharp}\left( dH_p\right) \in T_pM$. Then, it will satisfy \begin{align} \omega_p(X_p, \cdot) = dH_p \end{align} i.e. if you remove the point $p$, then $\iota_X \omega = \omega(X, \cdot) = dH$. By the way, doing things in coordinates may be a little hard, because as you can see, it involves the inverse mapping $\omega^{\sharp}$. But anyway, if you're working in some chart $(U,x)$ of the manifold $M$, with the coordinate basis $\{\partial/\partial x^1, \dots \partial/ \partial x^n\}$, and dual basis $\{dx^1, \dots dx^n\}$, then define the functions \begin{align} \omega_{ij}:= \omega\left( \dfrac{\partial}{\partial x^i}, \dfrac{\partial}{\partial x^j}\right) \end{align} and let $[\omega^{ij}]$ be the inverse matrix of $[\omega_{ij}]$. Then, the components of the vector field $X = \sum_{i}X^i\frac{\partial}{\partial x^i} $ will be \begin{align} X^i &= \sum_{j=1}^n\omega^{ij} \dfrac{\partial H}{\partial x^j}. \end{align} (you see, the appearance of the inverse matrix entries makes things not-so-easy)
H: Show $|f(z)| \leq \frac{2A|z|}{1 - |z|}$ I have the following question: Let $f$ be a holomorphic function on $\mathbb{D}$. Assume $f(0) = 0$ and $\Re f ≤ A$ on $\mathbb{D}$ for some constant $A > 0$. Show that $|f(z)| ≤ \frac{2A|z|} {1 − |z|}$ for all $z ∈ \mathbb{D}$. The $f(0)$ hypothesis and the fact that $f$ is defined on $\mathbb{D}$ makes me think that this could involve some type of Schwartz Lemma. But, the there is no reason to think that the function is bounded by $1$. I also do not know how to dissect the $\Re f \leq A$ part. Any help would be greatly appreciated. AI: Wlog we can assume that $f$ extends to the boundary analytically (as otherwise we use $f_r(z)=f(rz)$ and take $r \to 1$) and we use what Wikipedia calls the Schwarz Integral formula (or what is called in some books the Poisson-Cauchy representation - the Cauchy completion of the Poisson formula): $f(z)=\frac{1}{2\pi}\int_0^{2\pi}{\Re f(e^{it})\frac{e^{it}+z}{e^{it}-z}}dt$ so $f(z)=f(z)-f(0)=\frac{1}{2\pi}\int_0^{2\pi}{\Re f(e^{it})(\frac{e^{it}+z}{e^{it}-z}}-1)dt=\frac{1}{2\pi}\int_0^{2\pi}{\Re f(e^{it})\frac{2z}{e^{it}-z}}dt$ Taking absolute values: $|f(z)| \le \frac{1}{2\pi}\int_0^{2\pi}{A\frac{2|z|}{|e^{it}-z|}}dt$ But $|e^{it}-z| \ge 1-|z|$ so we get the required result and we are done: $|f(z)| \le \frac{1}{2\pi}\int_0^{2\pi}{A\frac{2|z|}{1-|z|}}dt=A\frac{2|z|}{1-|z|}$ A second solution can be given noting that $\Re (-f/A) \ge -1, f(0)=0$ means that $f$ is subordinated to $g(z)=\frac{2z}{1-z}$ (which sends the unit disc precisely to the domain $ \Re w >-1$ as wlog we can assume $Re f <A$ as otherwise $f=0$ by maximum modulus and $f(0)=0$) This means that there is $\phi(0)=0, \phi : D \to D$ holomorphic so $-f(z)/A=g(\phi(z))=\frac{2\phi(z)}{1-\phi(z)}$. But $|\phi(z)| \le |z|$ by Schwarz so taking absolute values we recover the required result since $|1-\phi(z)| \ge 1 -|\phi(z)| \ge 1-|z|$
H: All combinations/cases of equality of 3 variables I have three integer variables like: c0, c1, c2 I don't have the focus and have a difficult time determining all the possible cases of their equality: if c1 == c0 && c2 == c0 { // ... } else if c1 != c0 && c2 != c0 && c1 != c2 { // ... } else if c1 != c0 && c2 != c0 && c1 == c2 { // ... } else if c1 == c0 && c2 != c0 { // ... } else if ... ? Can anybody help? AI: They are either all equal (one possibility), or two are equal and one isn't (three possibilities), or none are equal (one possibility). This gives us five unique possibilities in total: if c0 == c1 && c1 == c2 { // ... } else if c0 == c1 && c1 != c2 { // ... } else if c1 == c2 && c0 != c1 { // ... } else if c0 == c2 && c1 != c2 { // ... } And now use else to account for the last possibility of none of them being equal: else { // ... }
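For what it's worth, here is a minimal, self-contained sketch of the five cases as a small program (written in Go, since the snippets above look Go-like; the function name classify and the sample triples are only illustrative):

package main

import "fmt"

// classify reports which of the five equality cases holds for (c0, c1, c2).
func classify(c0, c1, c2 int) string {
	switch {
	case c0 == c1 && c1 == c2:
		return "all three equal"
	case c0 == c1 && c1 != c2:
		return "only c0 == c1"
	case c1 == c2 && c0 != c1:
		return "only c1 == c2"
	case c0 == c2 && c1 != c2:
		return "only c0 == c2"
	default:
		return "all three different"
	}
}

func main() {
	// One sample triple per case; exactly one branch fires for each.
	fmt.Println(classify(1, 1, 1)) // all three equal
	fmt.Println(classify(1, 1, 2)) // only c0 == c1
	fmt.Println(classify(2, 1, 1)) // only c1 == c2
	fmt.Println(classify(1, 2, 1)) // only c0 == c2
	fmt.Println(classify(1, 2, 3)) // all three different
}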
H: given a divergent series, can we conclude a related sequence is not converging to zero? say we have a sequence of non-negative reals, $a_1, a_2, \dots$, and that $\displaystyle\sum\limits_{n=1}^{\infty}a_n$ is divergent, meaning convergent to infinity. Under this scenario I am trying to prove that the following sequence in $m$ cannot converge to zero. $$t_m \,\,=\,\, \displaystyle\sum\limits_{n=1}^{m} \frac{n}{m}a_n$$ I'd like to know if this proposition is true. I was hoping so, but became stuck trying to prove it. My reasoning so far: Since $\Sigma a_n \,=\, +\infty$, the sequence of partial sums is not Cauchy. So there exists an $\epsilon$ and indices $i>j>0$ for which $$a_{j+1} + a_{j+2} + \dots + a_{i} \,\, \geq \epsilon.$$ But then we can say there is an infinite sequence of such finite segments; we can always produce another one. Now look at the sequence $t$, e.g. $$t_5 \,\,=\,\, \frac{1}{5}a_1 \,+\,\frac{2}{5}a_2 \,+\,\frac{3}{5}a_3 \,+\,\frac{4}{5}a_4 \,+\,\frac{5}{5}a_5 \,\,\geq\,\,\frac{1}{2}a_3 \,+\,\frac{1}{2}a_4 \,+\,\frac{1}{2}a_5.$$ Thus in general: $$t_m \,\,\geq\,\,\frac{1}{2}\displaystyle\sum\limits_{[m/2]+1}^{m}a_n.$$ Is there any hope to tie this to the sequence of epsilon segments above, and show that my sequence $t$ is strictly away from zero? It seems a little reasonable, since as $m$ grows big, $t_m$ is a sum of many many terms, arbitrarily many. It would suffice to show that infinitely often, my $t_m$ is at least the fixed positive epsilon. AI: A counterexample is given by $a_n=(n\log n)^{-1}$ for $n>1$ (and, say, $a_1=0$). The series $\sum_{n=1}^{\infty}a_n$ is well-known to diverge, while $t_m=m^{-1}\sum_{n=2}^{m}(\log n)^{-1}\underset{m\to\infty}{\longrightarrow}0$ by the Stolz–Cesàro theorem.
H: Understanding the proof of Proposition 10 in Ch.2 in Real analysis by Royden and Fitzpatrick "Fourth Edition " Here is the proposition and its proof: My question is: I do not understand how the last equality came from the one just before it, could anyone explains this for me, please? AI: It’s just invariance of $m^*$ under translation, in this case by $y$: $x\in[A-y]\cap E$ iff $x+y\in A\cap[E+y]$, and $x\in[A-y]\cap E^C$ iff $x+y\in A\cap[E+y]^C$. That is, $$A\cap[E+y]=\big([A-y]\cap E\big)+y\;,$$ and $$A\cap[E+y]^C=\left([A-y]\cap E^C\right)+y\;.$$
H: About index (set theory) Definition: A collection of sets $\mathbb{E} $ is said to be indexed by a set A if and only if there is a funcitom F from A onto $\mathbb{E} $. In this case we call A the index set lf $\mathbb{E} $, say $\mathbb{E} $ is indexed by A, and represent F(a) by $ E_a $. In particular, $\mathbb{E} $ is indexed by A means $\mathbb{E} =\{E_a\}_{a \in A} $. I am not sure how to draw up the relation between function and sets in this case. Can someone explain this definition about index more intuitively? Thanks. AI: Suppose that we have sets $\{a,b,c\}$, $\{a,d,x,y\}$, and $\{c,u\}$; we can form the collection $$\Bbb E=\big\{\{a,b,c\},\{a,d,x,y\},\{c,u\}\big\}\;.$$ Now let $A=\{1,2,3\}$; we can define a function $f:A\to\Bbb E$ by $$\begin{align*} f(1)&=\{a,b,c\}\\ f(2)&=\{a,d,x,y\}\\ f(3)&=\{c,u\}\;. \end{align*}$$ The collection $\Bbb E$ is now indexed by $A$ via the indexing function $f$, and instead of having to talk about $\{a,b,c\}$, $\{a,d,x,y\}$, and $\{c,u\}$, we can refer to the three sets by their indices: $E_1=f(1)=\{a,b,c\}$, $E_2=f(2)=\{a,d,x,y\}$, and $E_3=f(3)=\{c,u\}$. Of course we could index $\Bbb E$ by other sets. We might, for instance, use the set $I=\{0,1,2\}$, with the indexing function $g$ defined by $g(0)=\{c,u\}$, $g(1)=\{a,b,c\}$, and $g(2)=\{a,d,x,y\}$; in that case we would refer to the sets as $E_0=g(0)=\{c,u\}$, $E_1=g(1)=\{a,b,c\}$, and $E_2=g(2)=\{a,d,x,y\}$. An index set $A$ for $\Bbb E$ is just a set of convenient labels for the members of $\Bbb E$, and the indexing function $f$ simply assigns each label in $A$ to one of the members of $\Bbb E$ in such a way that each member of $\Bbb E$ gets a label. If $a\in A$ is assigned to some $X\in\Bbb E$, i.e., if $f(a)=X$, we’ve given $X$ the label $a$ and can now call it $E_a$. This brings some uniformity into the names that we’re using for the members of $\Bbb E$.
H: How would I algebraically write this simple exponential word problem? I'm just a high-school student who's bad at math but truly does have an interest in it, I just have a hard time fully understanding things and remembering them. So I have to write a function for this really simple exponential word problem. I won't write the whole problem out, instead, I'll just share the part I need help with. Basically I just need to find a function that describes this sequence: $(5 \times 2)2)2)2)2)2)2)...$ f(x), x being the number of times the initial value (5) is doubled. I almost can't believe I can't figure this out. Maybe I'm just tired AI: It is called a geometric progression (GP). The function that describes the given sequence is the general term of this GP. $$f(x)=5(2)^{x-1},\,x\in\mathbb N. $$
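A small remark on the convention: if, as in the question, $x$ counts how many times the initial value has been doubled (so $x=0$ corresponds to $5$ itself), the same geometric-progression idea gives $f(x)=5\cdot 2^{x}$; the formula $f(x)=5(2)^{x-1}$ corresponds instead to letting $x$ be the position of the term in the sequence, starting from $x=1$.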
H: A linear transformation given by a 2x2 matrix A doubles areas. What is the determinant of A? How should I think about this problem? AI: Consider the area of the unit square $|[i\times j]|=1$. If the transformation is $\begin{pmatrix}x\\y\end{pmatrix}\to \begin{pmatrix}a&b\\c&d\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix}$ then it maps $i\to ai+cj$, $j\to bi+dj$ and the area of the image will be $$|[(ai+cj)\times(bi+dj)]|\\ =|[ai\times bi]+[ai\times dj]+[cj\times bi]+[cj\times dj]|\\ =|0+ad[i\times j]+bc[j\times i]+0|\\ =|ad-bc|\cdot|[i\times j]|=|\det A|=2$$ So $\det A=\pm 2$. Another way to think of it is via the singular value decomposition $A=U\Sigma V^{T}$, where $U$ and $V$ are orthogonal (so $|\det U|=|\det V|=1$) and $\Sigma$ is a scale matrix; if $\Sigma$ scales one coordinate by $\lambda_x$ and the other by $\lambda_y$, then areas scale by $\lambda_x\lambda_y$, so $|\det A|=\det \Sigma=\lambda_x\lambda_y$, which makes the problem simple.
H: Is there an integral representation for Euler's Totient function? The question is pretty much in the title. Euler's Totient function $\varphi(n)$ satisfy the following formula: $$\varphi(n) =n \prod_{p|n}\left(1-\frac{1}{p}\right)$$ Is it possible through this formula or something else to represent $\varphi(n)$ as an integral? AI: One possible way is to consider the Dirichlet series of the Euler's totient function, and arrive at an integral using the coefficient inversion formula (relevant content: sections 2 and 7 here: https://en.wikipedia.org/wiki/Dirichlet_series). I do not know if this satisfies the conditions in the question asked (because it was slightly vague) but I hope it helps.
H: What is a mathematically rigorous treatment of "sentences", "words" and "language"? How do mathematicians rigorously deal with words and sentences? In other words, how are words sentences constructed in a mathematical sense? Is there some (accepted) system for constructing words? I know this is a rather broad question, but I am in the process of trying to think about "words" and "sentences" more rigorously, because often in computational mathematics we deal with sets of words or sets of sentences. These words are then transformed in some ways into the numerical domain to perform calculations on them. An attempt could be as follows: To start we may define a set containing all the symbols in a "language system", call it $\mathcal{S}$. Then a word is just a finite combination of a subset of these symbols. We can either define as a relation $r: \{a_1, \ldots, a_n\} \subset \mathcal{S} \mapsto w$, or perhaps as an element of the power set of $\mathcal{S}$, say $\mathcal{W} := \mathcal{P}(\mathcal{S})$. Then building upon this, a sentence would be some combination of words, i.e., $\mathcal{P}(\mathcal{W})$ where we define a function that puts the correct punctuations in place. Just some thoughts. I wonder if people have tried to make this more rigorous. Note: I have briefly studied regular expressions and formal language (long time ago) but I don't really think we necessarily have to abstract everything to the $0$s and $1$s to make sense of words. AI: To my knowledge there is no deep study of such 'language systems' but the study of logic has made quite some progress in that regard. There are multiple ways of formalizing the concept of language systems, like just plain sequences, trees, or other but all this formulations are equivalent. Model Theory may be what you have in mind, it studies theories and their structures, but to write down the theories axioms it is needed a language. I will give a brief definition of a language and give the example of a language for the theory of natural numbers. A language has: $n$-placed functions. E.g. the 2-ary $+$ function, the multiplication $\times$ and the successor $S$. $n$-placed relations. E.g. the 2-ary $<$ less than relation. Or the 1-ary $Odd$ relation. Constants. E.g. $0, 1, 2,...$ (As you may have noticed, you only need the constant $0$. the rest can be written down using it and the successor function). Similarly Logic has a language ($\wedge , \forall, \lnot$, etc; though this language is usally taken as a primitive). Now every sequence of these elements is a valid expression, but we only care about well formed formulas. Which are those that satisfy some requirements. E.g. a 2-ary relation may require have a term on each side of it e.g. ($1<2$). Common sense restrictions like that can be used to generate, by recursion, the set of all well formed formulas. The phrase sentence is usually just meant to mean a well formed formula with no variables unquantified. Some nice things about this construction (assuming a lot of stuff I haven't mentioned explicitely): Unique readability theorem: There is only one way to interpret a sentence. No ambiguities in math. Effective computation to check if an expression is a well formed formula. There's a lot more to say but that would make me write a book so I'd just better recommend you a better book in first order logic: I personally like Enderton's A mathematical introduction to logic.
H: At least two points are $13$ apart Let $x_0,\dots,x_{37}$ be $38$ distinct integral points inside $[0,60]$ with $x_0=0$ (e.g. $0,1,2,\dots 37$ or $0,2,3,\dots 38$, etc). Prove that there exists two points $x_i$ and $x_j$ such that $x_j-x_i=13$. I thought of pigeonhole principle, but couldn't find a way to apply. AI: Suppose it's possible. Call $S=\{x_0,...,x_{37}\}$ such a solution. Then split $[0;60]$ in intervals of length 12, except for the last one. $$I_1=[0;12]\quad I_2=[13;25]\quad I_3=[26;38]\quad I_4=[39;41]\quad I_5=[42;54]\quad I_6=[55;60]$$ Then consider the map that sends : every $x\in S\cap I_{2k}\quad\quad$ to $\quad x-13\in \bar{S}\cap I_{2k-1} $ every $x\in S\cap I_{2k-1}\quad $ to $\quad x+13 \in \bar{S}\cap I_{2k} $. This map is well-defined except for at most $7$ numbers in $I_5$ (those who lie between $48$ and $54$). This map is injective and by definition $S$ and $f(S)$ are disjoint subsets of $[0;60]$. But the cardinality of $S\cup f(S)$ is at least $38 + 38-7= 69 > 61$. Contradiction.
H: A function is Lipschitz iff its absolute value is Lipschitz? I would like to know if this assertion is true: a function is Lipschitz iff its absolute value is Lipschitz? I managed to prove that if the function is Lipschitz then its absolute value is Lipschitz, but for the other direction I don't know if it is true (this is not homework). I will be grateful if someone could help me. Thank you in advance! AI: If $f(x)=1$ for $x$ rational and $-1$ for $x$ irrational then $|f|$ is Lipschitz but $f$ is not.
H: Is the field characteristic necessary for this diagonalization question? I was looking into this question and I stumbled upon a similar problem, but with a slightly different hypothesis: Let $A:M_{n\times n }\left(\mathbb{F}\right)\mapsto M_{n\times n}\left(\mathbb{F}\right)$ be the operator defined by $A(X) =X^T$, with the characteristic of the field $\mathbb{F}$ not equal to 2. Find the eigenvalues and eigenspaces of the transformation and argue if $A$ is diagonalizable or not. In the same question I mentioned earlier, there is this answer to the first part of the problem. As far as I could tell, this proof is legitimate without the hypothesis of the characteristic of a field $\neq2$, and I think the rest of the problem can be argued without the use of it as well since they follow from seeing that the eigenvalues are $1$ and $-1$. Is there a detail that is being glossed over where we need to use this hypothesis? Or can the problem be generalized to cases without the extra condition? Thank you! AI: The eigenvalues are still $\pm1$ when $\operatorname{char}(\mathbb F)=2$. Of course, since $-1=1$ in this case, all eigenvalues of $A$ are equal to $1$. The characteristic matters when eigenspaces or diagonalisation are concerned. When $\operatorname{char}(\mathbb F)\ne2$, since the annihilating polynomial $x^2-1=(x-1)(x+1)$ is a product of distinct linear factors, $A$ must be diagonalisable. However, when $\operatorname{char}(\mathbb F)=2$, the annihilating polynomial $x^2-1=(x-1)^2$ has repeated factors. Therefore we cannot infer that $A$ is diagonalisable. In fact, since all eigenvalues of $A$ are equal to $1$ but $A$ is not the identity map, it cannot possibly be diagonalisable. Put it another way, when $\operatorname{char}(\mathbb F)\ne2$, the eigen "vectors" of $A$ corresponding to the eigenvalue $-1$ are are the skew-symmetric matrices in $M_{n\times n}(\mathbb F)$, while the eigen "vectors " corresponding to the eigenvalue $1$ are are the symmetric matrices. Since every square matrix is the sum of a skew-symmetric matrix and a symmetric matrix, you can construct an eigenbasis of $M_{n\times n}(\mathbb F)$ from the eigenvectors of $A$. The situation is different when $\operatorname{char}(\mathbb F)=2$. In this case, since all eigenvalues are equal to $1$, the only eigen "vectors" of $A$ are the symmetric matrices. You cannot build an eigenbasis from them because there are matrices in $M_{n\times n}(\mathbb F)$ that are not symmetric (such as most upper triangular matrices).
H: Order of an element divides $m$ when $a^m \equiv 1 \pmod n$ https://brilliant.org/wiki/order-of-an-element/ I was referring to the above link for the order of an element, and in the basic properties, while proving property $1$, "due to minimality of $d$, $d \le \gcd(m,d)$" is written. Is it because $mx+dy\ge d$, i.e. $\gcd(m,d)\ge d$? But that inequality holds only for positive $x$ and $y$, and there are cases when $x$ is positive and $y$ is negative, in which case $mx+dy \le d$, i.e. $\gcd(m,d)\le d$? Can someone explain what the minimality of $d$ actually means and how the inequality $d \le \gcd(m,d)$ is obtained? AI: The order is defined to be the smallest positive $p$ such that $a^p \equiv 1 \pmod{n}$ holds. Since we have shown that $a^{\gcd(d,m)} \equiv 1 \pmod{n}$ and we know that $\gcd(d,m)>0$, then we must have $d \le \gcd(d,m)$. Since on the other hand $\gcd(d,m)\le d$ always holds, this forces $\gcd(d,m)=d$, i.e. $d$ divides $m$.
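A concrete instance may help: modulo $7$ the order of $2$ is $d=3$ (since $2^3=8\equiv1$), and whenever $2^m\equiv1\pmod 7$ the argument gives $3\le\gcd(3,m)\le3$, so $\gcd(3,m)=3$, i.e. $3\mid m$, as indeed happens for $m=3,6,9,\dots$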
H: Total Variation Distance between two uniform distributions Two distributions with $P=Unif([0,s])$ and $Q=Unif([0,t])$ where $0<s<t$ I have the general formula and use the uniform pdf for P and Q $$TV(P,Q) = 1/2 \int_{x\in E} |p_{\theta}-p_{\theta'}| dx$$ $$= 1/2 \int_{x\in E} |\frac{1}{s}-\frac{1}{t}| dx$$ Now I am having trouble with integrating. Which space do I integrate on, from s to t, since we want the distance of the two pdfs? And if so, how do I proceed? AI: It is $\frac 1 2 \int_0^{s} |\frac 1s -\frac 1t|dx+\frac 1 2\int_s^{t} \frac 1 t dx$.
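Carrying the computation in the answer one step further (both integrands are constant in $x$): $$TV(P,Q)=\frac12\,s\left(\frac1s-\frac1t\right)+\frac12\cdot\frac{t-s}{t}=\frac12\left(1-\frac st\right)+\frac12\left(1-\frac st\right)=1-\frac st,$$ so the total variation distance between the two uniform distributions is $\frac{t-s}{t}$.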
H: The number of real roots of the equation $ 1+\frac{x}{1}+\frac{x^{2}}{2}+\frac{x^{3}}{3}+\dots+\frac{x^{7}}{7}=0$ The number of real roots of the equation $$ 1+\frac{x}{1}+\frac{x^{2}}{2}+\frac{x^{3}}{3}+\dots+\frac{x^{7}}{7}=0$$ Applying Descartes' rule of signs I am getting 0 positive roots and 7 negative roots! But the answer is 1. How to proceed? AI: There are no positive roots, and Descartes' rule of signs gives the existence of a negative root. If there were two negative roots, there would be a zero of the derivative between them. The derivative is $$\frac{x^7-1}{x-1}$$ and has no real zeros. Or more simply, since the derivative never vanishes and $\lim_{x\to-\infty}f(x)=-\infty$ and $\lim_{x\to\infty}f(x)=\infty$, there must be exactly one real zero, which must be negative, since it obviously isn't positive.
H: How to show that $ ({\bf I_n}-A)^{-1} = \Sigma_{l=0}^m A^l $ I struggle with the following task: Let $n\in\mathbb{N}$ and $A\in\mathbb{C}^{n\times n}$ be nilpotent, i.e. there is $m\in\mathbb{N}$ such that $A^m = 0$. Show that $ ({\bf I_n}-A)^{-1} = \Sigma_{l=0}^m A^l $. Can anyone help me out? AI: A nilpotent matrix $A$ only has eigenvalues which are $0$. The matrix $I-A$ then only has eigenvalues $1$. $(I-A)^{-1}$ will have inverted eigenvalues, and $1^{-1}=1$. What does the Cayley–Hamilton theorem tell us? $(I+A-I)^k =0 \Leftrightarrow A^k=0$. But we may already know this from nilpotency. Another fruitful aspect of this is the geometric series. Maybe this question helps you there.
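Spelling out the geometric-series hint as one telescoping computation: $$({\bf I_n}-A)\sum_{l=0}^{m}A^l=\sum_{l=0}^{m}A^l-\sum_{l=1}^{m+1}A^l={\bf I_n}-A^{m+1}={\bf I_n},$$ since $A^{m+1}=A\cdot A^m=0$; the same telescoping works with the factors in the other order, so the sum is indeed the two-sided inverse of ${\bf I_n}-A$.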
H: Restriction of a function I have $f: [-1,1] \rightarrow \mathbb{R}$. If $f_{|[-1,0]}(x)$ and $f_{|[0,1]}(x) $ are two increasing restrictions then f is increasing? AI: Yes. Let $u \leq v$. If $u,v \in [0,1]$ or $u,v \in [-1,0]$ it is clear that $f(u) \leq f(v)$. Otherwise $u \in [-1,0]$ and $v \in [0,1]$, and then $f(u) \leq f(0)\leq f(v)$. If we are talking about strictly increasing functions then the result is also true by a similar argument.
H: Hi, I am looking for books to learn math from scratch I have seen posts on this site where someone referred to Khan Academy, but I don't want to watch videos, so I have got to learn math from books. I want to learn it from the very, very beginning. Thanks in advance. AI: So it really depends on what you mean by the 'very, very beginning'. A book series that you could buy is the Art of Problem Solving books (https://artofproblemsolving.com/store), which run from pre-algebra to calculus. However, this can get very expensive. Edit: As mentioned below, the Art of Problem Solving books sort of do what they say on the tin, and develop a sense of problem solving. If that is something you want to develop then these books are recommended; however, if you just want to learn the basics, then other books may be more suitable (for example, OpenStax has a series of books in a similar style to a traditional course).
H: Is the Jordan normal form uniquely determined by the characteristic and minimal polynomial? I was looking into this answer to a question about obtaining the Jordan normal form given the characteristic and minimal polynomials of a matrix. In this answer, it is stated that "The multiplicity of an eigenvalue as a root of the characteristic polynomial is the size of the block with that eigenvalue in the Jordan form. The size of the largest sub-block (Elementary Jordan Block) is the multiplicity of that eigenvalue as a root of the minimal polynomial". I was then thinking of examples of matrices to apply this to, and I came up with the example of a matrix with characteristic polynomial $f(x) = (x-1)^4(x+1)$ and minimal polynomial $m(x) = (x-1)^2(x+1)$. Using the method described in the answer, I know that the largest elementary Jordan Block for the eigenvalue $1$ should be of size $2$. But given this, I can make $2$ distinct Jordan blocks for the eigenvalue $1$: $$\begin{pmatrix} 1&1&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 1&1&0&0\\ 0&1&0&0\\ 0&0&1&1\\ 0&0&0&1\\ \end{pmatrix} $$ where the first Jordan block has one elementary block of size $2$ and $2$ elementary blocks of size $1$, and the second Jordan block is made up of $2$ elementary blocks, each one of size $2$. Do the characteristic and minimal polynomial always uniquely determine the Jordan normal form? In which case my understanding is wrong, and I would ask if someone could tell me what I am missing. Or alternatively, when do the characteristic and minimal polynomial uniquely determine the Jordan normal form? Thank you! AI: Generally, knowing only the characteristic polynomial and the minimal polynomial is not enough to determine the Jordan normal form uniquely, as you showed in the question. I think the only cases where these two polynomials do determine the Jordan normal form are when the degree of the minimal polynomial is very low or very high. For example, if $f(x) = (x-\lambda)^n$ and $m(x)=(x-\lambda)$, the Jordan normal form is the diagonal one; if $f(x) = (x-\lambda)^n$ and $m(x)=(x-\lambda)^n$, it is made up of a single Jordan block of dimension $n$; and if $f(x) = (x-\lambda)^n$ and $m(x)=(x-\lambda)^{n-1}$, it is made up of one Jordan block of dimension $n-1$ and one Jordan block of dimension $1$. I think these cases, and the ones where every eigenvalue behaves like one of them, are the only ones where the two polynomials determine the Jordan normal form uniquely.
H: Suppose that $V, W$ are closed in $X$ and that $V\cap W, V\cup W$ are connected. Show that $V$ and $W$ are connected. Suppose that $V, W$ are closed in $X$ and that $V\cap W, V\cup W$ are connected. Show that $V$ and $W$ are connected. My attempt: Suppose that $V=U_1\cup U_2$ where $U_i$ are open in $V$, not empty and $U_1\cap U_2=\emptyset$. This induces a separation for $V\cap W$: $$ V\cap W = (U_1\cap W)\cup(U_2\cap W).$$ $U_i\cap W$ is open in $V\cap W$ $(U_1\cap W)\cup (U_2\cap W)=\emptyset$ as otherwise $U_1\cap U_2 \ne \emptyset$. Now, I need to show that both sets are not empty. We know that $V\cup W$ is connected and closed in $X$. So, for all $x\in V\cup W=U_1\cup U_2\cup W$ and all neighborhoods $Z$ (in $X$!) of $x$ we have that $Z\cap (V\cup W) \ne \emptyset$. We can choose $x\in U_1 \subseteq V\cup W$. But then I would have to find a neighborhood of $x$ in $X$. I can't find one... How can I complete this bullet point? Thanks. AI: You certainly know that a space $Y$ is connected if and only if each continuous map $f : Y \to 2$ is constant. Here $2 = \{0,1\}$ with the discrete topology. So let $f : V \to 2$ be continuous. Since $V \cap W$ is connected, $f$ is constant on $V \cap W$. W.l.o.g. we may assume $f(x) = 0$ for $x \in V \cap W$. Define $$F : V \cup W \to 2, F(x) = \begin{cases} f(x) & x \in V \\ 0 & x \in W \end{cases}$$ This map is well-defined and continuous because $V$ and $W$ are closed in $V \cup W$. Hence $F$ is constant which implies that $f$ is constant.
H: How to find many bijective functions from rationals on $(0,1)$ to rationals on $(0,1)$ Let $S=\{x\in\Bbb Q:\ 0<x<1\}$. I am trying to find a sequence of bijective functions from $S$ to itself, where each function is strictly increasing. But currently I can only think of $f(x)=x$ which is a trivial example. Intuitively I think there are infinitely many such functions but I am struggling to construct them. Thanks in advance for any help or hint. AI: First of all, observe that every such function extends to a continuous bijection between $[0,1]$ and itself. [To see this, first prove that left and right limits exist, for every $x\in [0,1]$, and are equal.] Example. Another function with this property is $$ f(x)=\left\{\begin{array}{ccc} 2x & \text{if} & x\in [0,1/3], \\ \frac{x+1}{2} & \text{if} & x \in [1/3,1]. \end{array}\right. $$ In general, if $$ q_0=0<q_1<\cdots<q_{n-1}<1=q_n, \quad r_0=0<r_1<\cdots<r_{n-1}<1=r_n $$ are rationals, then the function which is defined as $$ f(q_i)=r_i, \quad i=0,1,\ldots,n, $$ and $f$ is linear in each interval $[q_{i-1},q_i]$, $i=1,\ldots,n$, also satisfies the property is the OP. Next, consider two strictly increasing sequences of rationals $\{q_n\}$ and $\{r_n\}$, with $q_0=r_0=0$, which tend to 1, i.e., $$ 0=q_0<q_1<\cdots<q_{n-1}<q_{n}\to 1, \\ 0=r_0<r_1<\cdots<r_{n-1}<r_{n}\to 1, $$ and define $f: [0,1]\to[0,1]$, so that $f(q_i)=r_i$, $i\in\mathbb N$, and $f$ linear in each interval $[q_{i-1},q_i]$. Then this $f$ satisfies the property of the OP, and there exist $2^{\aleph_0}$ such functions, which is equal to the cardinality of $C[0,1]$. Hence the answer is: The cardinality of the functions satisfying the OP is $2^{\aleph_0}$.
H: extrema under constraints - lagrangian multipliers Find all global extrema of $f(x,y,z)=x^3+y^3+z^3$ under the constraints a) $x^2+y^2+z^2=1$ b) $2x^2+y^2=1$ Regarding a), I've tried to use the lagrangian multipliers, that is $(3x^2-\lambda 2x,3y^2-2\lambda y-3z^2-2\lambda z,-x^2-y^2-z^2+1)=(0,0,0,0)$ and solving this gives me $x=0$ or $x=\frac{2}{3}\lambda$. If I try to substitute the latter in the other equations, I only get a solution of the system that is dependent on $\lambda$, but no actual solution for $\lambda$ itself. The same issue arises when I try to solve b). AI: That is not how the method of Lagrange multipliers should be used whan there is mor than one restriction. You should solve the system$$\left\{\begin{array}{l}\frac{\partial f}{\partial x}(x,y,z)=\lambda\frac{\partial g}{\partial x}(x,y,z)+\mu\frac{\partial h}{\partial x}(x,y,z)\\\frac{\partial f}{\partial y}(x,y,z)=\lambda\frac{\partial g}{\partial y}(x,y,z)+\mu\frac{\partial h}{\partial y}(x,y,z)\\\frac{\partial f}{\partial z}(x,y,z)=\lambda\frac{\partial g}{\partial z}(x,y,z)+\mu\frac{\partial h}{\partial z}(x,y,z)\\g(x,y,z)=1\\h(x,y,z)=1.\end{array}\right.$$You will get lots of solutions: $(x,y,z)=(0,\pm1,0)$; $(x,y,z)=\pm\left(\frac1{\sqrt2},0,\frac1{\sqrt2}\right)$; $(x,y,z)=\pm\left(\frac1{\sqrt2},0,-\frac1{\sqrt2}\right)$; $(x,y,z)=\pm\left(\frac1{\sqrt3},\frac1{\sqrt3},\frac1{\sqrt3}\right)$.
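For the record, evaluating $f$ at the points listed above (assuming the list of critical points is complete): $f(0,\pm1,0)=\pm1$, $f\!\left(\pm\left(\frac1{\sqrt2},0,\frac1{\sqrt2}\right)\right)=\pm\frac1{\sqrt2}$, $f\!\left(\pm\left(\frac1{\sqrt2},0,-\frac1{\sqrt2}\right)\right)=0$, and $f\!\left(\pm\left(\frac1{\sqrt3},\frac1{\sqrt3},\frac1{\sqrt3}\right)\right)=\pm\frac1{\sqrt3}$; since the constraint set is compact, the global maximum is $1$ at $(0,1,0)$ and the global minimum is $-1$ at $(0,-1,0)$.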
H: Proving $\mathbb{Z} \subseteq \mathbb{R}^{2}$ is closed, not open and not bounded metric spaces I'm working my way through the examples in Rudin on open sets, closed sets and other related material to get use to working with the definitions. I'm trying to show that $\mathbb{Z} \subseteq \mathbb{R}^{2}$ is closed, but not open and not bounded. My attempts are below. Closed: We want to show that $\mathbb{Z} \subseteq \mathbb{R}^{2}$ contains all of its limit points. Suppose that $p =(p_{1}, p_{2})$ is a limit point of $\mathbb{Z}$ and that $\epsilon = \frac{1}{2}$ (say). Then $B_{\epsilon}(p) = \{(a,b): d((a,b), (p_{1},p_{2})) < \epsilon\}.$ For our given choice of $\epsilon, B_{\epsilon}(p) = p.$ Which contradicts the fact that $p$ is a limit point and therefore $\mathbb{Z}$ doesn't have any limit points. Hence it is closed (it contains all of its limit points, because it doesn't have any!). NOT open: Consider the point $(0,0)$, and let $\epsilon = \frac{1}{2}.$ Then $B_{\epsilon}((0,0))$ doesn't contain any other points of $\mathbb{Z}$ other than the point $(0,0)$ itself. Hence it is not open. Question: What should the distance function actually be for $B_{\epsilon}((0,0))$? I am presuming it is the Euclidean metric in $\mathbb{R}^{2}$ i.e. we would be showing $\sqrt{(a-0)^{2} + (b-0)^{2}} < \epsilon$ for some $(a,b) \in \mathbb{Z}.$ NOT bounded: If we have a metric space $(X,d)$ and $E \subseteq X$, Rudin defines a bounded set as follows $E$ is bounded if there is a real number $M$ and a point $q \in X$ such that $d(p,q) < M$ for all $p$ in $E$. Am I correct that NOT bounded is therefore defined as follows: $E$ is not bounded is for all real numbers $M$ and all points $q \in X$, $d(p,q) \geq M$ for some $p$ in $E$ (I wasn't sure if its only the quantifiers that I swap) I'm not too sure how to proceed. I was thinking I could maybe use the Archemedian property in someway, but because I'm in a subset of $\mathbb{R}^{2}$ I'm not sure if that approach will work. Similarly I can't use $\sup$ or $\inf$ as I'm in a subset of $\mathbb{R}^{2}$. Thanks. AI: I will assume that we are using a sensible embedding of $\mathbb{Z} \subset \mathbb{R}^2$ (say by $n \mapsto (n, 0)$). To your question about the metric. From context I would presume that the metric is the usual euclidean one (the claim is false if we take an arbitrary metric - consider the discrete metric which makes all subsets open!). Your proofs of closedness and non-openness seem fine. You are right to think the set is not bounded. What you want to show is Given any $M > 0$ and any $p \in \mathbb{R}^2$, there is some $a \in \mathbb{Z} \subset \mathbb{R}^2$ such that $d(p, a) > M$. To do this, suppose that $p = (x, y)$. Then $d(p, (n,0)) \geq |n - x|$, so if we choose some large $n$ so that $|n - x| > M$ we are done.
H: Solving recurrence relation with 2 variables If I have a recurrence relation like $$T(n,k)=\frac{T(n-1,k)+T(n,k-1)}{2}$$ with initial values $\forall n \quad T(n,0)=T_0$ and $\forall k \quad T(0,k)=0$. How can I solve it? By the way this came up when I was solving a physics problem AI: Here is a graphical representation of your recurrence relationship : (which, now that you have settled correctly your initial data, isn't compulsory, but is interesting by itself because it shows its similarity with the ¨Pascal's triangle" (see below). Some numerical computations on the first values of $T_{n,k}$ in the case of $T_0=1$ give the following first numerical results with denominators $2^{n+k-1}$. (please note the diagonal values equal to $1/2$) : Out of this array, we can build a simplified one by turning it $135°$clockwise in the "Pascal's triangle' manner and keeping only the numerators where the right diagonal, instead of being filled by "ones", is filled by successive powers of $2$ : $$\begin{array}{ccccccccc} &&&&&1&&&&\\ &&&&\color{blue}{1}&&\color{red}{2}&&&&&\\ &&&\color{blue}{1}&&3&&\color{red}{4}&&&&&&&&\\ &&\color{blue}{1}&&4&&7&&\color{red}{8}&&&\\ &\color{blue}{1}&&5&&11&&15&&\color{red}{16}&&&\\ \color{blue}{1}&&6&&16&&26&&31&&\color{red}{32} \end{array}\tag{1}$$ We have simplified the problem because in this way only integers are managed, and (thanks to an indication by the OP) this is known in the litterature under the name "Bernoulli triangle" yielding the explicit formula for the coefficients in the previous "Pascal's like" array (1) (it is why we write $T'$ instead of $T$): $$\displaystyle T'_{n,k}=\frac{1}{2^{n}}\sum _{p=0}^{k}{\binom {n}{p}} \ \ \text{for} \ \ k=0,1,\cdots n$$
H: Jordan Canonical form of the operator $(T^2 - T)|_{\operatorname{Im}(T^4)}$. Suppose $T: V \to V$ is a nilpotent linear operator with $\dim (V) = 21$ and we know that $$\dim (\ker(T)) = 6$$ $$\dim (\ker(T^2)) = 11$$ $$\dim (\ker(T^3)) = 15$$ $$\dim (\ker(T^4)) = 18$$ $$\dim (\ker(T^5)) = 20$$ $$\dim (\ker(T^6)) = 21$$ We have to find the Jordan Canonical form of the operator $(T^2 - T)|_{\operatorname{Im}(T^4)}$. What I have understood so far: $T$ is a nilpotent linear operator of order $6$. There will be $6$ Jordan blocks and size of the Jordan block we will get from the above information. Then how to proceed with the problem? Thank You!! AI: Most of the information is superfluous. All we need to know is that the dimension is $21$, that $T$ is nilpotent of order $6$ and that $\dim \ker T^4 =18$. By the Rank Nullity Theorem the dimension of $U:=\text{Im}( T^4)$ is $3$. Let $S=T(T-I)$; then $S^2 T^4=T^6 (T-I)^2=0$. So the minimal polynomial of $S|_U$ divides $X^2$. As $T^4\not=0$ the minimal polynomial of $S$ is not $1$. It cannot be $X$ either, else $T^5=-T^5(T-I)=0$ which is not so. So the JCF of $S$ consists of a single $1\times 1$ block and a single $2\times 2$ block.
H: What is $\cos x-\cos2x+\cos3x-\cos4x...\pm\cos(Nx)$? I want to arrive at a closed expression for $f_N(x)=\frac{2}{\pi}(\cos x-\cos2x+\cos3x-\cos4x...\pm \cos Nx)$ ($+\cos(Nx)$ if $N$ is odd, $-\cos(Nx)$ if $N$ is even) Using the fact that $\cos(x)=\frac{e^{ix}+e^{-ix}}{2}$ and $\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$, and by expressing our function as a G.P. with common ratio $-e^{-ix}$, I was able to express $f_N(x)$ (when $N$ is odd) as: $$\left(\frac{1}{\pi}\cdot\frac{e^{Nix}-e^{-Nix}}{1+e^{-ix}}\right)+\frac{1}{\pi}$$ However I got stuck when I tried to simplify the above equation in terms of trigonometric terms. I was only able to simplify up to: $$\frac{-1}{\pi}\cdot\frac{\sin(Nx)\cdot2i}{1+e^{-ix}}+\frac{1}{\pi}$$ I was a bit worried I had done something wrong while expressing the function as a G.P. but I checked my work and it seems fine. I guess that the answer lies in some simple manipulation of terms but unfortunately, I was not able to find the solution to this problem. Appreciate any help :) AI: Expanding on @Bernard's answer, for even $N$ the sum is$$\begin{align}\Re\left[e^{ix}\frac{1-e^{iNx}}{1+e^{ix}}\right]&=\Re\left[e^{ix}\frac{-2ie^{iNx/2}\sin\frac{Nx}{2}}{2\cos\frac{x}{2}e^{ix/2}}\right]\\&=\Im\left[e^{i(N+1)x/2}\frac{\sin\frac{Nx}{2}}{\cos\frac{x}{2}}\right]\\&=\frac{\sin\frac{Nx}{2}\sin\frac{(N+1)x}{2}}{\cos\frac{x}{2}},\end{align}$$while for odd $N$ it's$$\Re\left[e^{ix}\frac{1+e^{iNx}}{1+e^{ix}}\right]=\Re\left[e^{i(N+1)x/2}\frac{\cos\frac{Nx}{2}}{\cos\frac{x}{2}}\right]=\frac{\cos\frac{Nx}{2}\cos\frac{(N+1)x}{2}}{\cos\frac{x}{2}}.$$You can unify these e.g. as$$\frac{\sin\left(\frac{Nx}{2}+(1+(-1)^{N+1})\frac{\pi}{4}\right)\sin\left(\frac{(N+1)x}{2}+(1+(-1)^{N+1})\frac{\pi}{4}\right)}{\cos\frac{x}{2}}.$$
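A quick spot check of the closed forms above: for $N=1$ the odd-case formula gives $\frac{\cos\frac x2\cos x}{\cos\frac x2}=\cos x$, as it should; for $N=2$ at $x=\frac\pi2$ the sum is $\cos\frac\pi2-\cos\pi=1$, and the even-case formula gives $\frac{\sin\frac\pi2\,\sin\frac{3\pi}4}{\cos\frac\pi4}=1$ as well.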
H: Looking for an elegant proof for $e^{jn\pi} = (-1)^n e^{-jn\pi}$ I want to prove that $$e^{jn\pi} = (-1)^n e^{-jn\pi} \quad (1)$$ My attempt $$ e^{jn\pi} = (-1)^n e^{-jn\pi} \iff ln(e^{jn\pi}) = ln((-1)^n) + ln(e^{-jn\pi} ) \iff jn\pi = jn\pi - jn\pi$$ $$\iff n = 0 $$ $$\text{But this should be true } \forall n, \text{therefore something went wrong here}$$ I am very confident that there is a more beautiful proof based on Euler's formula but I can't find it. What I've also tried is: $$e^{jn\pi} = cos(n\pi) + j sin(n\pi) = (-1)^n + j sin(n\pi) = ? $$ AI: The statement is false. Notice that $$(e^{j \pi})^n = (-1)^n$$ Then your statement is $$(-1)^n = (-1)^n (-1)^{-n}$$ However, the right-hand-side simplifies to $(-1)^0 = 1$, so the statement can only be true for $n$ even at best. For instance, letting $n=3$ produces the statement $-1 = 1$, though at least $n=4$ is true.
H: Can a Path contain a cycle Can a path contain a cycle .i.e is this diagram a valid path a-----b----c----d \ / \ / e According to this definition of path from 'Introduction to Graph Theory by D.B. West' So, I ordered this as a,b,c,d,e and call it a path which includes all the edges and vertices in the diagram. path ordering : e,a,b,c,d Is this wrong, it seems so to me, but don't know why? AI: Your ordering does not satisfy the given definition, since vertices $d$ and $e$ are not adjacent in the graph, but are adjacent in your list. Indeed, no such ordering exists due to the cycle $a \to b \to e \to a$.
H: Action of a certain operator on half-integral weight modular forms - a computation in Kohnen's paper This is regarding a certain computation in page 37 of W. Kohnen's 1982 paper, Newforms of half-integral weight (article here or here, but both require a paid access) - I am unable to follow a step. Precisely, I cannot follow how equation (2) in the paper is obtained from the previous step. Elaborating, (after skipping/simplifying) the equation in hand is \begin{align} g(z) &= \epsilon^k \sum_{n\geq1} \left( \epsilon^{-1/2} i^{-k}e^{-\pi i/4}e^{\pi in/2} + \epsilon^{1/2} i^{k}e^{\pi i/4}e^{-\pi in/2} \right) a(n)e^{2\pi inz} \\ &= (-1)^{[k+1/2]}\epsilon\sqrt2\left( \sum_{\substack{n\geq1\\ \epsilon(-1)^kn\equiv0,1\pmod4}}a(n)e^{2\pi inz} - \sum_{\substack{n\geq1 \\ \epsilon(-1)^kn\equiv2,3\pmod4}}a(n)e^{2\pi inz} \right) \end{align} where $\epsilon^2=1.$ How does the last step follows from the previous one? So far I have reached \begin{align} g(z) &= \epsilon^k \sum_{n\geq1} \left( \epsilon^{-1/2} i^{-k}e^{-\pi i/4}e^{\pi in/2} + \epsilon^{1/2} i^{k}e^{\pi i/4}e^{-\pi in/2} \right) a(n)e^{2\pi inz} \\ &= \frac{\epsilon^k}{\sqrt2} \sum_{n\geq1} \left( \epsilon^{-1/2} i^{-k}(1-i)i^n + \epsilon^{1/2} i^{k}(1+i)(-i)^n \right) a(n)e^{2\pi inz}\\ &= \frac{\epsilon^k}{\sqrt2} \sum_{n\geq1} \left( \epsilon^{-1/2} (-1)^{-k/2}(1-i)(-1)^{n/2} + \epsilon^{1/2} (-1)^{k/2}(1+i)(-1)^n(-1)^{n/2} \right) a(n)e^{2\pi inz}. \end{align} How to proceed further? [Edit : $\epsilon=\pm1$ always, with $(-1)^{1/2}=i$, and $[x]$ denotes integer part of $x$] Note: The title of the question mentions "operator". Actually, $g(z)=f|\xi+\xi'$ where $f$ is a half-integral weight modular form and $\xi,\xi'$ are operators. I have skipped the details so as to keep only the relevant parts. AI: I'll assume $\epsilon\in\{1,i\}$. There's an awful lot of clutter here. Set $b(n)=a(n)e^{2\pi nz}$, $A=\epsilon^{-1/2}i^{-k}e^{-\pi i/4}$ and $B=\epsilon^{1/2}i^{k}e^{\pi i/4}$. The sum in question is then $$S=\epsilon^k\sum_{n}(Ai^n+Bi^{-n})b(n).$$ We really just need to determine $$C_n=Ai^n+Bi^{-n}.$$ Clearly $C_{n+2}=-C_n$ and $C_{n+4}=C_n$, so only $C_0$ and $C_1$ matter. $$C_0=\begin{cases}2\cos((2k+1)\pi/4)&\text{if }\epsilon^{1/2}=1\\ 2\cos((2k+3)\pi/4)&\text{if }\epsilon^{1/2}=i\end{cases}$$ and $$C_1=\begin{cases}2\cos((2k-1)\pi/4)&\text{if }\epsilon^{1/2}=1\\ 2\cos((2k+1)\pi/4)&\text{if }\epsilon^{1/2}=i\end{cases}$$ In each case $C_0$ and $C_1$ are either of $\pm\sqrt2$. When $k$ is even and $\epsilon^{1/2}=1$, or when $k$ is odd and $\epsilon^{1/2}=1$, $C_1=C_0$. Otherwise, $C_1=-C_0$. So $C_1=(-1)^k\epsilon C_0$. Using $C_{n+2}=-C_n$ gives $$S=\epsilon^kC_0\left(\sum_{(-1)^k\epsilon n\equiv0,1\pmod4}b(n) -\sum_{(-1)^k\epsilon n\equiv2,3\pmod4}b(n)\right).$$ We only need to identify the factor $\omega_k=\epsilon^kS_0$. This will depend on $k$ modulo $4$. But $\omega_{k+2}=-\omega_k$, so we only need $\omega_0$ and $\omega_1$. When $\epsilon=1$, $\omega_0=\sqrt2$ and $\omega_1=-\sqrt2$, and when $\epsilon=-1$, $\omega_0=\omega_1=-\sqrt2$. To match the second formula, I think the sign in the formula should be $(-1)^{[(k+1)/2]}$ where $[x]$ denotes the integer part of $x$. Is that what you intended to write?
H: Solve the following DE: $ 2y(y'+2)=x(y')^2 $ I'm stuck trying to understand how to solve this differential equation: $$ 2y(y'+2)=x(y')^2 $$ The main problem is to understand what type it is. I have never come across anything like this before. Could anyone give me a hint? At first, I thought it is a Lagrange equation, so that was my attempt: $$ y=\frac{x(y')^2}{2y'+4} $$ $$ y'=p\Rightarrow y'=p=\frac{(p^2+2xpp')(2p+4)-2xp^2}{(2p+4)^2} $$ And then I got stuck trying to solve this equation for $x$ in terms of $p$. AI: $$2y(y'+2)=x(y')^2$$ This is d'Alembert's differential equation: $$y=x \left (\dfrac {y'^2}{2(y'+2)}\right)$$ It is of the form: $$y=x f(y')+g(y')$$ You made a little mistake here: $$p=\frac{(p^2+2xpp')(2p+4)-2xp^2}{(2p+4)^2}$$ It should be: $$p=\frac{(p^2+2xpp')(2p+4)-2xp^2\color {red}{p'}}{(2p+4)^2}$$ Then it factorizes nicely into: $$p(p+4)(p+2-xp')=0$$ $$ \begin{cases} p=0 \\ p+4=0 \\ p+2-xp'=0 \end{cases} $$ And $y=0$ is also a solution.
H: Group cohomology of Galois group of finite extension of finite fields Let $E/F$ be a finite extension of finite fields; hence, it is a cyclic Galois extension, so let the Galois group be $G$. Hilbert's Theorem 90 states that $H^1(G, E^{\times})=0$. My question is: How to show that $H^n(G, E^{\times})=0,\forall n\ge 2$ ? AI: When $G$ is a finite cyclic group, its cohomology is periodic: $H^{n+2}(G,M)\cong H^n(G,M)$ for all $n\ge1$. This proves your equation for odd $n$. But this fails for $n=0$. In this case $H^2(G,M)\cong{\hat H}^0(G,M)$, the $0$-th Tate cohomology group. This is defined to be $M^G/T(M)$ where $M^G=H^0(G,M)$ is the set of $G$-invariant elements of $M$ and $T(M)$ is the image of $M$ under the trace map $T:m\mapsto\sum_{g\in G}m^g$. There are various ways of proving $\hat H^0(G,E^\times)\cong H^2(G,E^\times)$ is trivial. First, by direct computation: when $|F|=q$ and $|E|=q^n$, $E^\times$ is cyclic of order $q^n-1$, $(E^\times)^G$ has order $q-1$ and $T$ is powering by $(q^n-1)/(q-1)$. Second, by the theory of the Herbrand quotient, as $E^\times$ is a finite group, $|H^2(G,E^\times)|=|H^1(G,E^\times)|$. Third, $H^2(G,E^\times)$ is a subgroup of the Brauer group of $F$, which is trivial by Wedderburn's theorem on finite division rings.
H: Find all complex numbers $\forall{n}\in\mathbb{N}$ find all complex numbers $z$ satisfying the equation $z^n=-\bar{z}$. I found $0$ and $1$ and for other? How can I prove it? AI: $\displaystyle z^n = -\overline z$. Let $\displaystyle z = re^{i\theta}$, giving $\displaystyle r^ne^{in\theta} = -re^{-i\theta}$ Since the modulus of the two sides has to be equal, $\displaystyle r^n = r$, giving the solutions $\displaystyle r=0$ and $\displaystyle r=1$. $\displaystyle r=0 \implies z=0$ is a single solution, while $\displaystyle r = 1 \implies z = e^{i\theta}$ is a set of solutions we need to find. We have $\displaystyle e^{in\theta} = -e^{-i\theta}$ $\displaystyle e^{i\theta(n+1)} = -1$ $\displaystyle e^{i\theta(n+1)} = e^{(2k+1)i\pi}, \forall k \in \mathbb{Z}$ Hence $\displaystyle \theta = \frac{2k+1}{n+1}\pi, \forall k \in \mathbb{Z}$ giving $\displaystyle z = e^{i(\frac{2k+1}{n+1})\pi} = \cos (\frac{2k+1}{n+1})\pi + i\sin (\frac{2k+1}{n+1})\pi , \forall k \in \mathbb{Z}$ To eliminate duplicate solutions, consider the argument modulo $\displaystyle 2\pi$.
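To illustrate the answer with $n=2$: the solutions are $z=0$ together with $z=e^{i\pi/3}$, $z=e^{i\pi}=-1$ and $z=e^{i5\pi/3}$. For instance $z=-1$ satisfies $z^2=1=-\bar z$, whereas $z=1$ does not, since $1^2=1\ne-\bar 1=-1$.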
H: Why $ \dfrac{d}{dx} f(kx) = kf'(kx) $? In MIT OCW 18.01 Single variable calculus lecture 6, minute 18:00-18:20, the professor used an example to explain why $e$ exists. Context: $f(x) = 2^x$ $b=2^k$ $ f(kx)=2^{kx} = (2^k)^x $ Then the professor wrote this, that by chain rule: $$ \dfrac{d}{dx} f(kx) = kf'(kx) $$ Why does it work? Why need to add additional $k$ there? If I expand derivative of $f(x)$ as: $$ \dfrac{d}{dx}f(x) = 2^x\dfrac{(2^{\Delta x} - 1)}{\Delta x} $$ then derivative of $f(kx)$ should be: $$ \dfrac{d}{dx}f(kx) = f'(kx) = 2^{kx}\dfrac{(2^{\Delta x} - 1)}{\Delta x} $$ Even if it's typo where Professor intended to write: $\dfrac{d}{dx}f(kx) = kf'(x) $ I don't think that it will work either. Because $$ k 2^{x}\dfrac{(2^{\Delta x} - 1)}{\Delta x} \ne 2^{kx}\dfrac{(2^{\Delta x} - 1)}{\Delta x} $$ right? AI: Since $f(kx)=2^{kx}$, then derivative for $x$ is $$\frac{df}{dx}(kx)=f'(kx)\cdot k$$ Because if you let $g(x)=kx$, then $$\frac{df}{dx}(g(x))=f'(g(x))g'(x)$$ which is the chain rule.
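A worked instance with the function from the question may make this concrete: write $\ln 2$ for the limit $\lim_{\Delta x\to0}\frac{2^{\Delta x}-1}{\Delta x}$, so that $f(x)=2^x$ has $f'(x)=2^x\ln2$ and hence $f'(kx)=2^{kx}\ln2$. On the other hand, $f(kx)=2^{kx}=e^{kx\ln2}$, so $\frac{d}{dx}f(kx)=k\ln2\cdot 2^{kx}=k\,f'(kx)$: the extra factor $k$ comes from differentiating the inner function $kx$, and is exactly what the direct computation in the question is missing.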
H: Normalizer with respect to a compact subgroup Let $G$ be a semisimple Lie group with $K$ to be the compact subgroup in the Iwasawa decomposition $G=KAN.$ Then, if $\mathfrak a$ is the Lie algebra of $A$ what is $N_K(\mathfrak a)$ which is referred as normalizer in Knapp's book? I can understand the meaning of $N_{\mathfrak g}(\mathfrak a)$ which is the normalizer of $\mathfrak a$ in $\mathfrak g,$ i.e. the smallest subalegbra in which $\mathfrak a$ becomes an ideal. AI: Knapp does not define it explicitly but this notation usually means $$N_K(\mathfrak{a})=\{ k\in K: Ad(k)(\mathfrak{a})=\mathfrak{a}\},$$ which is consistent with how he uses it (see e.g. Prop. 6.52). This is how Duistermaat (Section 2.8) defines it.
H: Solve $y'(x)=\frac{y^2(x)/x^2}{1+y(x)/x}$ via substitution Assignment: Find the solution of the following ODE: $$y'(x)=\frac{y^2(x)/x^2}{1+y(x)/x}.$$ I've tried the substitution $RHS=:v(x,y)$, as well as a few others but doesn't seem to work in simplifying the differential equation, since, for example, the former results in $$\frac{dv}{dx}=\frac{2y\frac{dy}{dx}(x+xy)-y^2((x+xy)\frac{dy}{dx})}{x^2(1+y)^2}$$ which doesn't seem helpful. AI: Let $y=vx$. Then you get $$v+xv'=\frac{v^2}{1+v}.$$I think you can take it from here.
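One way to continue from the hint: the substituted equation is separable, since $xv'=\frac{v^2}{1+v}-v=-\frac{v}{1+v}$, so $\frac{1+v}{v}\,dv=-\frac{dx}{x}$, and integrating gives $\ln|v|+v=-\ln|x|+C$, i.e. $\ln|y|+\frac yx=C$ after substituting back $v=y/x$ (together with the singular solution $y=0$).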
H: Application of distance between probability measures Let $P\sim Q$ be two equivalent probability measures. There seem to exsist different notions of how to define a difference between the two probability measures/distributions. For example, Total variation: $$\delta(P,Q)=\sup_{A} |P(A)-Q(A)|$$ Kullback–Leibler divergence: $$D_{KL}(P,Q)=\int_\mathbb{R}p(x)\ln\left(\frac{p(x)}{q(x)}\right)\mathrm{d}x$$ Hellinger distance: $$H^2(P,Q)=\int_\mathbb{R}\left(\sqrt{p(x)}-\sqrt{q(x)}\right)^2\mathrm{d}x$$ Bhattacharyya distance: $$B(P,Q)=-\ln\left(\int_\mathbb{R}\sqrt{p(x)q(x)}\mathrm{d}x\right)$$ Jensen–Shannon divergence: $$JSD(P,Q)=\frac{1}{2}D_{KL}\left(P,\frac{P+Q}{2}\right)+\frac{1}{2}D_{KL}\left(Q,\frac{P+Q}{2}\right)$$ I've got two questions. What is the intuitive meaning? Is it as simple as: if the distance between $P$ and $Q$ is big, then an unlikely event under $P$ may be very likely under $Q$ and vice versa? Does any of these differences tell me anything about how $E^P[X]$ differs from $E^Q[X]$ for a measurable random variable $X$? What about higher moments of $X$? AI: This answer is not complete, just a piece of useful intuition. I can speak for the KL-Divergence with some intuitions of a related quantity. Firstly note that KL Divergence is not a metric, $D_{KL}(P,Q) \neq D_{KL}(Q,P)$. Hence this measure of "distance" doesn't agree to our intuition of the metric. To see then what it is worth for, let us suppose that $(X,Y)\sim P_{XY}$. Then if we choose, $P=P_{XY}$ and $Q=P_{X}P_{Y}$ then, $$D_{KL}(P,Q)=\mathbb{E}\left[\log\frac{P_{XY}}{P_Xp_Y}\right]$$ When is $D_{KL}(P,Q)=0$? This happens exactly when $P_{XY}=P_XP_Y$. In other words, $X,Y$ are independent random variables. Hence for this case, $D_{KL}$ measures "how much" the random variables are independent of each other. (If you are familiar with Information Theory, $D_{KL}(P_{XY},P_XP_Y)=I(X;Y)$ is know as the Mutual Information between $X,Y$.) The Jensen–Shannon divergence is an extension of KL Divergence to make it symmetric about its arguments.
H: Verify identity $\frac{\sin(t+h)-\sin(t)}{h}$ = $\cos(t)$ $\frac{\sin(h)}{h}$ + $\sin(t)$ $\frac{\cos(h)-1}{h}$ I've been asked to verify the following identity and I don't know how to do it. $\frac{\sin(t+h)-\sin(t)}{h}$ =$ \cos(t)$ $\frac{\sin(h)}{h}$ + $\sin(t)$ $\frac{\cos(h)-1}{h}$ When I try I get: $\frac{\sin(t+h)-\sin(t)}{h}$ = $\frac{\sin(t)\cos(h)+\cos(t)\sin(h)-\sin(t)}{h}$ I've then seen some things on the internet showing that: $\sin(t)\cos(h)-\sin(t) = \sin(t)\cos(h-1)$ This could be a next step but no one ever explains how this identity works. This identity doesn't show up when I look up identities in general and the ones that do show up tell you the identity for different combinations of $\cos(a)$ and $\sin(b)$, for example the double-angle identity or sum-to-product identity. So if you can shed some light on this one as well it would be appreciated. AI: You have the correct answer, just need one more step to break the terms up as requested. Namely: \begin{align} \frac{\sin(t+h)-\sin(t)}{h} &= \frac{\sin(t)\cos(h)+\cos(t)\sin(h)-\sin(t)}{h} \\ &= \frac{\sin(h)\cos(t)+\sin(t)(\cos(h)-1)}{h} \\ &= \cos(t)\cdot\frac{\sin(h)}h+\sin(t)\cdot\frac{\cos(h)-1}{h} \end{align}
H: Is a convex combination again a characteristic function? I also checked some other posts about similar problems but am still not able to find a solution for this problem. Let $\phi(t)$ be a characteristic function. Is $$\gamma (t) = \frac{1}{3} \phi(2t) + \frac{2}{3} \phi(\frac{t}{4})$$ also a characteristic function? It seems like a convex combination of characteristic functions. I assume it is a characteristic function but do not know how to show it. I also checked the properties, which (hopefully) hold in my calculations. AI: It all boils down to one, not very hard fact: Fact: If $\{\mu_k\}_{k \in \mathbb N_+}$ is a family of probability measures on $(E,\mathcal B(E))$, and $(p_k)_{k \in \mathbb N_+}$ is a sequence of weights (that is, $p_k \in [0,1]$ for every $k \in \mathbb N_+$ and $\sum_{k=1}^\infty p_k=1$), then $\nu = \sum_{k=1}^\infty p_k\mu_k$ is also a probability measure. (Note we don't need the convex combination to be finite.) The proof isn't that hard. Firstly, note that $\nu$ is well defined on $(E,\mathcal B(E))$, since for every $A \in \mathcal B(E)$ we have $\mu_k(A) \le 1$, so the series converges for any such $A$. Obviously $\nu$ takes only non-negative values, and $\nu(E) =\sum_{k=1}^\infty p_k\mu_k(E) = \sum_{k=1}^\infty p_k = 1$. Lastly, taking $A_1,A_2,...$ disjoint, we have: $$\nu(\bigcup A_j) = \sum_{k=1}^\infty p_k \mu_k(\bigcup A_j) = \sum_{k=1}^\infty \sum_{j=1}^\infty p_k\mu_k(A_j) = \sum_{j=1}^\infty \sum_{k=1}^\infty p_k\mu_k(A_j)=\sum_{j=1}^\infty \nu(A_j),$$ where we could change the order of summation because all terms are non-negative. Having this fact, it can be shown that $\gamma$ is the characteristic function of the measure $$\nu = \frac{1}{3}\mu_{2X} + \frac{2}{3} \mu_{\frac{X}{4}},$$ where $\mu_Y$ denotes the distribution of $Y$. Indeed: $$ \gamma(t)= \int_{\mathbb R}e^{itx}d\nu(x) = \int_{\mathbb R}e^{itx}d(\frac{1}{3}\mu_{2X}(x) + \frac{2}{3}\mu_{\frac{X}{4}}(x)) = \frac{1}{3}\int_{\mathbb R}e^{itx}d\mu_{2X}(x) + \frac{2}{3}\int_{\mathbb R}e^{itx}d\mu_{\frac{X}{4}}(x)$$ All that remains is the fact that when $\varphi$ is the characteristic function of $X$ (or equivalently of $\mu_X$), the function $t \mapsto \varphi(at)$ is the characteristic function of $aX$ (or again, equivalently, of $\mu_{aX}$), giving: $$ \gamma(t) = \frac{1}{3}\varphi(2t) + \frac{2}{3}\varphi(\frac{t}{4})$$
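The scaling fact used in the last step has a one-line verification: $$\varphi_{aX}(t)=\mathbb{E}\left[e^{it(aX)}\right]=\mathbb{E}\left[e^{i(at)X}\right]=\varphi_X(at),$$ so $t\mapsto\varphi(2t)$ and $t\mapsto\varphi(t/4)$ are the characteristic functions of $2X$ and $X/4$, respectively.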
H: Show that all of the roots of $f(z)= cz^n-e^z$ have multiplicity 1 Given the equation $cz^n-e^z=0$, $|c|>e , n\in \mathbb{N} $. Show that all the roots of $f(z)= cz^n-e^z$ in $\bar D= \{|z|\leq 1\}$ have multiplicity $1$. I used Rouché's theorem by defining $g(z)=cz^n$ to show that $f(z)$ has $n$ roots inside the unit disk. If I could prove that for every root $w$, s.t $f(w)=0$, we get that $f'(w)\neq 0$, it would solve the rest, but I'm not sure how to do that. Would appreciate any help:) AI: Note that $f'(z)=cnz^{n-1}-e^z$, and if $f(z)=f'(z)=0$, then $cz^n=cnz^{n-1}$, i.e. $cz^{n-1}(z-n)=0$, and therefore $z=0$ or $z=n$. But neither $0$ nor $n$ is a root of $f$: indeed $f(0)=-e^0=-1\neq0$, and $|cn^n|>e\,n^n\geq e^n=|e^n|$ (since $n\ln n\geq n-1$ for every positive integer $n$), so $f(n)\neq0$. Hence every root of $f$ is simple.
H: $\operatorname P\left[X\ge\lambda\right]\le e^{-a\lambda}$ for all $\lambda$ implies $\operatorname E\left[e^X\right]\le\frac a{a-1}$ Let $X$ be a real-valued random variable and $a>1$ with $$\operatorname P\left[X\ge\lambda\right]\le e^{-a\lambda}\;\;\;\text{for all }\lambda\ge0\tag1.$$ I want to show that $$\operatorname E\left[e^X\right]\le\frac a{a-1}\tag2.$$ My first thought was to write $$\operatorname E\left[e^X\right]=\int_0^\infty\operatorname P\left[X\ge\ln\lambda\right]\:{\rm d}\lambda\le\int_0^\infty\frac1{\lambda^a}\:{\rm d}\lambda=\left.\frac{\lambda^{1-a}}{1-a}\right|_{\lambda\:=\:0}^{\lambda\:=\:\infty}\tag3,$$ but since the right-hand side is $\infty$, this doesn't seem to be the right approach. So, how can we show the claim? AI: Your approach works once you split the integral instead of using the bound $\lambda^{-a}$ on all of $(0,\infty)$: on $(0,1)$ use the trivial bound $\operatorname P\left[X\ge\log\lambda\right]\le1$, and on $(1,\infty)$ use the hypothesis (note $\log\lambda\ge0$ there). Then $$\operatorname E\left[e^X\right]=\int_0^{1} \operatorname P\left[X\geq \log \lambda\right]\,d\lambda +\int_1^{\infty} \operatorname P\left[X\geq \log \lambda\right]\,d\lambda \leq 1+\int_1^\infty\lambda^{-a}\,d\lambda=1+\left.\frac {\lambda^{1-a}} {1-a}\right|_1^{\infty}= 1+\frac 1 {a-1}=\frac a {a-1}.$$
H: Showing that $\mathbb{C}$ is closed, open, perfect but not bounded - metric spaces Consider the following subset of $\mathbb{R}^{2}.$ The set of all complex numbers (i.e. $\mathbb{R}^{2}$). I'm trying to show that this set is Closed, Open, Perfect but not Bounded. Closed: let us denote the set of all complex numbers by $E$. Then $E^{c} = \emptyset.$ It's vacuously true that $\emptyset$ is open. Hence $E$ is closed. Open: Similar to the above argument. The empty set is also Closed. Hence $E$ is open. Perfect: We need only, check that every point is a limit point. Let $z \in E.$ Then for $\epsilon > 0$ consider $B_{\epsilon}(z) = \{w: d(w,z) < \epsilon\}.$ I know that this is nonempty and therefore every point is a limit point, but I'm not really seeing how to explain why. (In my mind, I'm just thinking of drawing a small circle around the point $z$ then since we are dealing with Complex numbers it will of course contain another point!) Not Bounded: We need to show that for any $M \in \mathbb{R}$ and $z \in \mathbb{R}^{2}$, there is some $w \in \mathbb{C} \subset \mathbb{R}^{2}$ such that $d(z,w) > M.$ I'm not too sure how to mathematically show it, but the fact it's the entire complex plane, it is clearly going to not be bounded! Is there a general strategy for proving bounded / not bounded properties of sets? This seems to be the part I struggle on most. AI: A useful general strategy: a subset is bounded if and only if it is contained in some open/closed ball, so to show a set is unbounded you exhibit, for every ball, a point of the set outside it. Clearly $\mathbb{C}$ is not contained in any ball: given $z\in\mathbb{C}$ and $M>0$, the point $w=z+(M+1)$ satisfies $d(z,w)=M+1>M$. So $\mathbb{C}$ is not bounded.
H: Evaluating $\int_0^\infty \frac{\sin^3(x)}{x}\,dx$ in terms of the sine integral. Help? I have a question. I solved this problem but I couldn't finish it. Firstly, this integral involves the sine integral. I expanded the integrand using the identity $$\frac{\sin^3(x)}{x} = \frac{3\sin(x) - \sin(3x)}{4x}.$$ Then I integrated the terms one by one and found $$\int\frac{\sin^3(x)}{x}\,dx= \frac{3\,Si(x)}{4} - \frac{Si(3x)}{4} + C.$$ I don't know how to use $Si(x)$ from here. Can you help me? AI: The definition of ${Si(x)}$ is ${\int_{0}^{x}\frac{\sin(t)}{t}dt}$. In terms of the indefinite integral, you are done. If you want the definite integral, however, say ${\int_{0}^{\infty}\frac{\sin^3(t)}{t}dt}$, then notice ${Si(0)=0}$, and since ${\int_{0}^{\infty}\frac{\sin(t)}{t}dt=\frac{\pi}{2}}$ we have ${\lim_{x\rightarrow\infty}Si(x)=\frac{\pi}{2}}$, and hence you get $${\int_{0}^{\infty}\frac{\sin^3(t)}{t}dt=\lim_{x\rightarrow\infty}\left(\frac{3}{4}Si(x)-\frac{Si(3x)}{4}\right)-(0)=\frac{3}{4}\frac{\pi}{2} - \frac{1}{4}\frac{\pi}{2}=\frac{\pi}{4}}.$$ And you are done.
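As a quick numerical sanity check of the value $\pi/4$ (an illustrative sketch assuming scipy is available; $x=10^4$ is just a large cutoff):

    import numpy as np
    from scipy.special import sici   # sici(x) returns (Si(x), Ci(x))

    x = 1e4                                   # large upper limit; Si(x) -> pi/2
    si_x, _ = sici(x)
    si_3x, _ = sici(3 * x)
    print(0.75 * si_x - 0.25 * si_3x)         # antiderivative at x (it equals 0 at x = 0)
    print(np.pi / 4)                          # both print roughly 0.785398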
H: In how many ways can I combine x elements In how many ways can I combine x elements and why? Example: If I have 6 elements, indicating each element with a number, some possible cases could be: $1,2,3/\,1,4,3/\, 1,2,5,6/\, ...$ AI: You are looking for the number of subsets of an $x$-element set: $\binom{x}{0}+\binom{x}{1}+\dots+\binom{x}{x}=2^x$. As to why: to form a subset you decide for each of the $x$ elements independently whether to include it or not, which gives $2\cdot2\cdots2=2^x$ choices (this count includes the empty selection and the single-element selections). For your example with $6$ elements there are $2^6=64$ such combinations.
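A short illustrative enumeration for $x=3$ (a throwaway example using only the standard library) matching the count $2^x$:

    from itertools import combinations

    elements = [1, 2, 3]
    subsets = [c for r in range(len(elements) + 1)
                 for c in combinations(elements, r)]
    print(subsets)        # (), (1,), (2,), (3,), (1,2), (1,3), (2,3), (1,2,3)
    print(len(subsets))   # 8 == 2**3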
H: Is the minimum non-countable well-ordered set compact? Let $A$ be the Cartesian product of two non-countable sets $C$ and $D$: $$A=C \times D$$ According to the well-ordering theorem, we can define a well-order on the non-countable set $A$; we then take the lexicographical order. Take a minimal non-countable subset $B$ of $A$. $B$ exists because, among the intervals $[0,x)$ of $A$, we can find $x$ such that $[0,x)$ is countable and $x$ such that it is non-countable. Using the supremum and infimum principle (which can be proved in a well-ordered set), we can find the minimum $x$ such that $[0,x)$ is non-countable. Is this kind of set compact with the order topology? AI: No, if $(X, <)$ is well-ordered (say, $\min(X)=0$) and uncountable such that $[0,x)$ is countable for all $x$, the open cover $$\mathcal{U}=\{[0,x): x \in X\}$$ cannot have a finite subcover. For if $[0,x_1), \ldots, [0,x_N)$ were one, then define $M=\max(x_1,\ldots,x_N) \in X$ and so $X = \bigcup_{i=1}^N [0,x_i) = [0,M)$, which contradicts the uncountability of $X$ (or, more simply, the union does not cover the point $M$ itself). In fact, one can show that the sup of countably many points of $X$ still lies in $X$, so $\mathcal{U}$ does not have a countable subcover either.
H: Bézout's coefficients, modular inverse I have to find $14^{-1} (\mod 17)$ I made the equation, $$14x+17y=1$$ By Euclidean division algorithm- $$ 17=14\times1+3 \\ 14=3\times4 +2\\3=2\times1+1$$ If I reverse the process then, $$1=3-2\times1 \\=3-(14-3\times4)\times1 \\= 3-[14-(17-14\times1)\times4]\times1$$ But this doesn't seem to be in the required form... What have I done wrong? AI: Everything is right so far. You just need to note that $3 = 17 - 14$, so you have: $$1 =(17-14)-[14-(17-14\times1)\times4]\times1$$ $$=(17-14)-[14-(17-14)\times4]$$ $$=(17-14)-[14-(17 \times 4-14\times 4)]$$ $$=(17-14)-[-17 \times 4 \color{red}{+}14 \times 5]$$ $$=(17-14) + 17 \times 4 -14 \times 5$$ $$=17 \times 5-14 \times 6$$ So $17\times5-14\times6=1$, i.e. $14\times(-6)\equiv1\pmod{17}$, and therefore $$14^{-1}\equiv-6\equiv11\pmod{17}.$$
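A quick check of the result (Python 3.8+ can compute modular inverses directly with pow):

    print(pow(14, -1, 17))   # 11, since 14*11 = 154 = 9*17 + 1
    print((-6) % 17)         # 11 as well, matching 17*5 - 14*6 = 1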
H: Does Lebesgue measurable set $E \subset [0,1]$ contain a $G_{\delta}$ set with the same measure? Let $E \subset [0,1]$ be a Lebesgue measurable set. It is known that there exists a $G_{\delta}$ set $G \supset E$ such that $m(E)=m(G)$. But my problem is: Does $E \subset [0,1]$ Lebesgue measurable set always contain a $G_{\delta}$ subset $G \subset E$ such that $m(E)=m(G)$? Thanks for any help! AI: Let $E$ be a meager set of measure 1, which one can construct in various ways; e.g. There exist meager subsets of $\mathbb{R}$ whose complements have Lebesgue measure zero. If $G \subset E$ is $G_\delta$ then since $G$ is meager, by the Baire category theorem it is not dense. That means there is some nonempty open $U \subset [0,1]$ with $G \subset [0,1] \setminus U$. As such, $m(G) \le 1 - m(U) < 1$.
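For a concrete instance of such an $E$, one standard construction: for each $n$ take a fat Cantor set $C_n\subset[0,1]$ (closed and nowhere dense) with $m(C_n)\ge 1-\frac1n$, and let $E=\bigcup_n C_n$. Then $E$ is meager and $m(E)=1$, so the argument above applies.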
H: Find the minimum of $x^3+\frac{1}{x^2}$ for $x>0$ Finding this minimum must be done using only inequalities. $x^3+\frac{1}{x^2}=\frac{1}{2}x^3+\frac{1}{2}x^3+\frac{1}{3x^2}+\frac{1}{3x^2}+\frac{1}{3x^2}$ Using the inequality of arithmetic and geometric means: $\frac{\frac{1}{2}x^3+\frac{1}{2}x^3+\frac{1}{3x^2}+\frac{1}{3x^2}+\frac{1}{3x^2}+1}{6}\geqslant \sqrt[6]{\frac{1}{2}x^3\cdot\frac{1}{2}x^3\cdot\frac{1}{3x^2}\cdot\frac{1}{3x^2}\cdot\frac{1}{3x^2}\cdot1}=\sqrt[6]{\frac{1}{108}}\Rightarrow x^3+\frac{1}{x^2}\geqslant 6\sqrt[6]{\frac{1}{108}}-1 $ Sadly $\ 6\sqrt[6]{\frac{1}{108}}-1$ is not the correct answer, it is not the minimum. AI: Very similar to what you have done: $$\frac{\frac{1}{2}x^3+\frac{1}{2}x^3+\frac{1}{3x^2}+\frac{1}{3x^2}+\frac{1}{3x^2}}{5}\geq \sqrt[5]{\frac{1}{2}x^3\frac{1}{2}x^3\frac{1}{3x^2}\frac{1}{3x^2}\frac{1}{3x^2}}=\sqrt[5]{\frac{1}{108}}$$ This gives us $$x^3+\frac{1}{x^2}\geq 5\sqrt[5]{\frac{1}{108}},$$ with equality when $\frac{x^3}{2}=\frac{1}{3x^2}$, i.e. at $x=\left(\frac{2}{3}\right)^{1/5}$, so the minimum is $5\sqrt[5]{\frac{1}{108}}$. (Desmos screenshot omitted.) P.S. Your method fails because equality would have to hold when $$\frac{x^3}{2}=\frac{1}{3x^2}=1,$$ which is impossible. The extra "one" you added in your AM-GM application ruined your attempt.
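As a cross-check by calculus (not needed for the inequality argument): $$f'(x)=3x^2-\frac{2}{x^3}=0\iff x^5=\frac{2}{3},$$ and at $x=(2/3)^{1/5}$ the five terms $\frac{x^3}{2}$ and $\frac{1}{3x^2}$ all coincide, so $f(x)=5\sqrt[5]{\frac{1}{108}}\approx 1.96$, matching the AM-GM bound above.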
H: Show that the polynomial $x^{8}-x^{7}+x^{2}-x+15$ has no real root. Show that the polynomial $$x^{8}-x^{7}+x^{2}-x+15$$ has no real root. Source As I have learnt from my previous post, using Descartes' rule of signs I am getting $4$ positive and $0$ negative roots. So the number of nonreal roots is $N-(p+q) = 8-(4+0)= 4$, so the remaining $4$ roots must be real! This violates the question's condition. Can anybody help me out! I am not understanding the derivative concept regarding this! AI: If $x\leq0$ every term is $\geq0$ and the constant term is $15>0$, hence the polynomial is $>0$. If $x\geq1$, then $x^2-x\geq0$, $x^8 - x^7 \geq0$, $15>0$, hence the polynomial is $>0$. If $0<x<1$, then $1-x>0$, $x^2 - x^7>0$, $14+x^8>0$, hence the polynomial is $>0$. So for every real $x$ the polynomial is $>0$, hence it has no real root.
H: Orthogonal functions I am reading a paper about non-coherent FSK modulation; we have orthogonal functions in it, and they have written $$ \frac{\sin^2(\pi(a-b))}{(\pi(a-b))^2} = \begin{cases} 1 & a=b \\ 0 & a\neq b \end{cases} $$ Can anyone explain how? Isn't it the other way around: $0$ when $a=b$ and some nonzero value when $a\neq b$? AI: Given the linked paper and the equation number, the expression you are asking about is the ratio displayed above. The $i\neq k$ case, i.e. $a\neq b$, follows from $\sin(\pi\times\text{integer}) = 0$: with $a-b$ a nonzero integer, as in the paper, the numerator vanishes while the denominator does not, so the ratio is $0$. In the $a=b$ case the quotient is not $0/0$; it is understood as the limit $\lim_{x\to 0}\frac{\sin^2 x}{x^2}=1$.
H: Why does $\lim_{x \to 0} \frac{\lfloor{x^2}\rfloor}{x^2}$ not exist? $$\lim_{x \to 0} \frac{\lfloor{x^2}\rfloor}{x^2}$$ $\lfloor x^2 \rfloor = 0 \space \forall x \in (-1,1)$ and $x^2 > 0 \space \forall \space x$ in the vicinity of $0$ but not at $0$. Hence this limit should be $\lim_{x \to 0} \frac{0}{x^2} = 0$. The answer key in my textbook and GeoGebra, however, seem to disagree with this. Both claim that the limit is undefined. Why is this so? AI: This limit exists and is equal to zero. Consider any sequence $\{x_n\}_{n\in\Bbb N}$ converging to zero, none of whose terms is zero. Then, $$f(x_n)=\dfrac{\lfloor x_n^2\rfloor}{x_n^2}=0\,\forall n>N$$ where $N$ is such that $|x_n|<1$ for all $n>N$. Therefore, the given limit is zero by the sequential criterion.
H: Proving that this function all over the positive integer gives us this sequence? Firstly, we have this sequence : $1,1,2,1,2,3,1,2,3,4,...$ which is the sequence of integers $1$ to $k$ followed by integers $1$ to $k+1$. We could say a fractal sequence. Secondly, we have this formula : $$a_n=\frac{1}{2}(2n+\lfloor\sqrt{2n}+\frac{1}{2}\rfloor-\lfloor\sqrt {2n}+\frac{1}{2}\rfloor^2)$$ where $n\ge1$ $a_1=1$ ; $a_2=1$ ; $a_3=2$ ; $a_4=1$ ; $a_5=2$ ; $a_6=3$ ; $a_7=1$ I don't know for sure but i think this formula gives us this sequence. How to prove this ? AI: $\newcommand{\bb}[1]{\left( #1 \right)}$ $\newcommand{\f}[1]{\left\lfloor #1 \right\rfloor}$ Key observation: Given $m \in \Bbb{Z}^+$, if: $$ \sum_{k=1}^m k = \frac{m(m+1)}{2} < n \leq \frac{(m+1)(m+2)}{2} = \sum_{k=1}^{m+1} k $$ then: $$ a_n = n - \frac{m(m+1)}{2} $$ Now rewrite your formula in the following manner: \begin{align*} a_n &= n - \frac{1}{2}\bb{\f{\sqrt{2n} + \frac{1}{2}}^2 - \f{\sqrt{2n} + \frac{1}{2}}} \\ &= n - \frac{1}{2}\f{\sqrt{2n} + \frac{1}{2}}\bb{\f{\sqrt{2n} + \frac{1}{2}} - 1} \\ &= n - \frac{1}{2}\bb{\f{\sqrt{2n} - \frac{1}{2}} + 1}\f{\sqrt{2n} - \frac{1}{2}} \\ \end{align*} Observe that if $\f{\sqrt{2n} - \frac{1}{2}} = m$, then we're done. Thus, it suffices to show that this indeed holds if $\frac{m(m+1)}{2} < n \leq \frac{(m+1)(m+2)}{2}$ for some $m \in \Bbb{Z}^+$. This is because: \begin{align*} \f{\sqrt{2n} - \frac{1}{2}} = m &\iff m \leq \sqrt{2n} - \frac{1}{2} < m + 1 \\ &\iff m + \frac{1}{2} \leq \sqrt{2n} < m + \frac{3}{2} \\ &\iff \bb{m + \frac{1}{2}}^2 \leq 2n < \bb{m + \frac{3}{2}}^2 \\ &\iff \frac{1}{2}\bb{m^2 + m + \frac{1}{4}} \leq n < \frac{1}{2}\bb{m^2 + 3m + \frac{9}{4}} \\ &\iff \frac{m(m+1)}{2} + \frac{1}{8} \leq n < \frac{(m+1)(m+2)}{2} + \frac{1}{8} \\ &\iff \frac{m(m+1)}{2} < n \leq \frac{(m+1)(m+2)}{2} \end{align*} where the last $\iff$ holds because $\frac{m(m+1)}{2},n,\frac{(m+1)(m+2)}{2}$ are all integers.
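A quick numerical sanity check of the closed form against the first several terms of the sequence (a throwaway script; nothing beyond the formula itself is assumed):

    import math

    def a(n):
        m = math.floor(math.sqrt(2 * n) + 0.5)
        return (2 * n + m - m * m) // 2

    # 1 / 1,2 / 1,2,3 / ... : the first 45 terms of the fractal sequence
    expected = [k for block in range(1, 10) for k in range(1, block + 1)]
    print([a(n) for n in range(1, 46)] == expected)   # True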
H: Show that $f(x, y): \mathbb R^2\to \mathbb R, (x,y)\mapsto e^{x^2+y^2}$ has an isolated local extremum Show that $f(x, y): \mathbb R^2\to \mathbb R, (x,y)\mapsto e^{x^2+y^2}$ has an isolated local extremum. My idea: I think I have to find critical points, where the function is $0$, but it never equals $0$; so which are the points and how do I find them? I also tried the Hesse matrix. AI: Hint: Find the critical point(s), i.e. solve this system: $$\frac{\partial f}{\partial x} = 0, \\\frac{\partial f}{\partial y}=0.$$ The resulting pair(s) of values $(x_1, y_1), (x_2, y_2),...$ can be substituted into the second derivatives $$A=\frac{\partial^2f}{\partial x^2}, \space B=\frac{\partial^2f}{\partial x \partial y}, \space C=\frac{\partial^2f}{\partial y^2}. $$ Depending on the expression $\space AC-B^2 \space$ it can be decided whether an extreme value exists...
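Carrying the hint out, as a sketch: $$f_x=2x\,e^{x^2+y^2},\qquad f_y=2y\,e^{x^2+y^2},$$ which vanish simultaneously only at $(0,0)$. There $$A=f_{xx}(0,0)=2,\quad B=f_{xy}(0,0)=0,\quad C=f_{yy}(0,0)=2,\qquad AC-B^2=4>0,\ A>0,$$ so $(0,0)$ is a strict local minimum, and it is isolated because it is the only critical point. (Alternatively, $e^{x^2+y^2}\ge e^0=1=f(0,0)$, with equality only at the origin.)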
H: How to integrate $\int_0^\infty \left( \frac{\sin az}{z^2+1}\right)^2 dz$ I have to evaluate the following integral for $a>0$: $$\int_0^\infty \left( \frac{\sin az}{z^2+1}\right)^2 dz$$ I don't exactly know how to do this kind of integral. But I think I need to use the residue theorem. Maybe I could use the trick $\cos(2a) = 1 - 2\sin^2(a)$, but that makes the integral much more difficult I think. AI: Since $$\sin(az)=\frac{e^{iaz}-e^{-iaz}}{2i},$$ and since you are integrating an even function, your integral is equal to\begin{multline}-\frac18\int_{-\infty}^\infty\frac{e^{2iaz}+e^{-2iaz}-2}{(z^2+1)^2}\,\mathrm dz=\\=-\frac18\left(\int_{-\infty}^\infty\frac{e^{2iaz}}{(z^2+1)^2}\,\mathrm dz+\int_{-\infty}^\infty\frac{e^{-2iaz}}{(z^2+1)^2}\,\mathrm dz-\int_{-\infty}^\infty\frac2{(z^2+1)^2}\,\mathrm dz\right).\end{multline}Now, let us see how to compute these integrals. First of all, since $a>0$,\begin{align}\int_{-\infty}^\infty\frac{e^{2iaz}}{(z^2+1)^2}\,\mathrm dz&=2\pi i\operatorname{res}\left(i,\frac{e^{2iaz}}{(z^2+1)^2}\right)\\&=2\pi i\left(-\frac14i(2a+1)e^{-2a}\right)\\&=\frac12\pi(2a+1)e^{-2a}.\end{align}It is a real number. Since the second integral is the conjugate of the first one, it is equal to the same number. Finally\begin{align}\int_{-\infty}^\infty\frac2{(z^2+1)^2}\,\mathrm dz&=2\pi i\operatorname{res}\left(i,\frac2{(z^2+1)^2}\right)\\&=2\pi i\left(-\frac i2\right)\\&=\pi.\end{align}And so\begin{align}\int_0^\infty\frac{\sin^2(ax)}{(x^2+1)^2}\,\mathrm dx&=-\frac18\left(2\times\frac12\pi(2a+1)e^{-2a}-\pi\right)\\&=\frac18\pi\left(1-(2a+1)e^{-2a}\right).\end{align}
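A numerical sanity check of the closed form (an illustrative sketch; the value $a=1.3$ is arbitrary and scipy is assumed):

    import numpy as np
    from scipy.integrate import quad

    a = 1.3
    numeric, _ = quad(lambda z: (np.sin(a * z) / (z**2 + 1))**2, 0, np.inf)
    closed = np.pi / 8 * (1 - (2 * a + 1) * np.exp(-2 * a))
    print(numeric, closed)   # both approximately 0.288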
H: What is the connection between vector functions and space curves? I can't grasp what the difference is between vector functions and space curves. For example: $$\gamma(t)=(f(t),g(t),h(t))$$ I can think of this as a vector that starts from $(0,0,0)$ and points to a specific coordinate, and also as a curve. In many cases they are the same, but sometimes it must be clarified. For example, when we define $T(t)$ as the tangent vector, what is it tangent to — the curve or the vector? AI: This is a somewhat conventional abuse of terminology. Let's say you have a map $$ \gamma:(0,1)\to \mathbb{R}^3 $$ usually with some extra properties such as differentiability. The image of $(0,1)$ under $\gamma$ is a "curve" in space. Sometimes people say that the map itself is the curve, or a parametrization of the curve. But there can be another map $\beta:(0,1)\to\mathbb{R}^3$ that shares the same image as $\gamma$. The tangent vector at the point $\gamma(t_0)$ is given by $\gamma'(t_0)$; it is tangent to the curve (the image), and is computed from the parametrization. Hope this helps!
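A small illustrative example of the distinction: the maps $$\gamma(t)=(\cos t,\sin t,0),\ t\in[0,2\pi],\qquad \beta(t)=(\cos 2t,\sin 2t,0),\ t\in[0,\pi],$$ are different vector functions, yet they share the same image, the unit circle in the $xy$-plane. The tangent vector $T(t)$ is tangent to that curve at the point $\gamma(t)$, regardless of which parametrization is used to compute it.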
H: Prove or disprove about piecewise continuous functions Just some statements to prove or disprove; I could really use some help/clues with them: (1) If $F$ is piecewise continuous on $[-\pi,\pi]$ then it belongs to $L_2[-\pi,\pi]$. I don't think it's true; maybe $\cot(x)$ disproves it? Not sure. (2) If $F$ belongs to $L_2[-\pi,\pi]$ then $F$ is piecewise continuous. I think it's true but don't know how to prove it in general. (3) If $f$ is continuous on $\mathbb R$ then $f$ belongs to $L_1(\mathbb R)$. Thank you! AI: (1) False (at least if "piecewise continuous" only requires continuity off finitely many points): take any function with a vertical asymptote that diverges sufficiently quickly; your $\cot(x)$ works, since $\cot^2 x\sim 1/x^2$ near $0$ is not integrable. (2) False: take any measurable, bounded, non-piecewise-continuous function on $[-\pi, \pi]$, e.g. the characteristic function of the irrationals in $[-\pi, \pi]$, or the indicator function of $\{1/n : n\in\mathbb N\}$. (3) False: take any nonzero constant function.
H: $U_{pq}$ ($p,q$ are distinct odd primes) has an element of order $\mathrm{lcm}(p-1,q-1)$ Question: Let $\DeclareMathOperator{\ord}{ord} p$ and $q$ be distinct odd primes and $n=pq$. Show that there is an integer not divisible by $p$ or $q$ such that $\ord_n$ of that integer is $\operatorname{lcm}(p-1,q-1)$. I guess here, $\ord_n(x)$ means the order of $[x]$ in the multiplicative group of units of $\Bbb{Z}_n$, denoted by $U_n$. So I reduced the problem to the following: Let $p$ and $q$ be two distinct odd primes; show that $U_{pq}$ has an element of order $\operatorname{lcm}(p-1,q-1)$. AI: The following is a well-known result in group theory: If an abelian group $G$ has an element of order $m$ and an element of order $n$ then it has an element of order $\mathrm{lcm}(m,n)$ (it is an exercise in Herstein's Topics in Algebra). In your question $U_n=U_{pq}$ has order $\varphi(pq)=(p-1)(q-1)$. Both $p,q$ have primitive roots. Let $a$ and $b$ be primitive roots modulo $p$ and $q$ respectively. By the Chinese remainder theorem choose $x$ with $x\equiv a\pmod p$, $x\equiv 1\pmod q$, and $y$ with $y\equiv 1\pmod p$, $y\equiv b\pmod q$. Since the order of an element of $U_{pq}$ is the least common multiple of its orders modulo $p$ and modulo $q$, $x$ has order $p-1$ and $y$ has order $q-1$ in $U_n$. Hence you can now conclude using the result stated at the beginning, since $U_n$ is certainly abelian. (In fact, here the element $xy$ itself already has order $\operatorname{lcm}(p-1,q-1)$, because it is $a$ modulo $p$ and $b$ modulo $q$.)
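A small brute-force check of this construction for, say, $p=13$, $q=17$, where $\operatorname{lcm}(p-1,q-1)=48$ (the helper code is ad hoc; only the Python standard library is used):

    from math import gcd

    def order(a, n):
        assert gcd(a, n) == 1
        k, x = 1, a % n
        while x != 1:
            x, k = x * a % n, k + 1
        return k

    p, q = 13, 17
    a = next(g for g in range(2, p) if order(g, p) == p - 1)   # primitive root mod p
    b = next(g for g in range(2, q) if order(g, q) == q - 1)   # primitive root mod q

    # CRT elements: x = a mod p, 1 mod q;  y = 1 mod p, b mod q
    x = next(t for t in range(1, p * q) if t % p == a and t % q == 1)
    y = next(t for t in range(1, p * q) if t % p == 1 and t % q == b)

    print(order(x, p * q), order(y, p * q))      # 12 16
    print(order(x * y % (p * q), p * q))         # 48 == lcm(12, 16)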
H: Proof of the partial converse of Cauchy Riemann equations: how does the author conclude the limit is $0$? I was reading through the book "Complex Analysis" by John M. Howie. On page 54, he goes through a proof of the partial converse of the Cauchy Riemann equations - that is, given an open neighbourhood where the complex function has continuous partial derivatives and satisfies the Cauchy Riemann equations, the derivative of the complex function exists at every point within this neighbourhood. He goes through a proof, but at one stage essentially concludes $${\lim_{l\rightarrow 0}\frac{1}{l}\left(\epsilon_1h + \epsilon_2 k + i\epsilon_3 h + i\epsilon_4 k\right)=0}$$ where ${l=h+ik}$, and ${\epsilon_1 = \frac{\partial u}{\partial x}(a+\text{a bit},b+\text{a bit})-\frac{\partial u}{\partial x}(a,b)}$ (where the "a bit" is just some junk involving ${h,k}$) and the other epsilons are defined similarly (in terms of the other partial derivatives). Because of the continuity of the partial derivatives, these epsilons go to $0$ as ${(h,k) \rightarrow (0,0)}$, to which I agree. But I'm not sure how he concludes the limit above is $0$? Am I missing something? If it helps, I can post a picture of the proof in question if I'm allowed to do so AI: Since $l=h+ik$, we have $|h|\leq |l|$. So $|\frac hl|$ is bounded above. Thus $$\frac{\epsilon_1 h}{l}\to 0$$ as $l\to 0$ since $\epsilon_1\to 0$ whenever $l\to 0$. Similarly, you can conclude for the other terms.