H: If $A=\begin{bmatrix}a&b\cr b&c\end{bmatrix}$, show $2|b|\le$ the difference of the eigenvalues If $A=\begin{bmatrix}a&b\cr b&c\end{bmatrix}$, let $\alpha$ and $\beta$ be the eigenvalues, where $\alpha\ge\beta$. Show $2|b|\le\alpha-\beta$. I first tried to find the eigenvalues by solving: $$(a-\lambda)(c-\lambda)-b^2=0$$ $$\lambda^2+(-a-c)\lambda+ac-b^2=0$$ We could then substitute the eigenvalues to get: $$\alpha^2+(-a-c)\alpha=b^2-ac$$ and $$\beta^2+(-a-c)\beta=b^2-ac$$ But I don't know how to proceed from here. AI: The (nonnegative) difference of the roots of a quadratic $$x^2 + Ax + B = 0$$ is given by $$\sqrt{A^2 - 4B}.$$ (Just use the quadratic formula.) In this case, we get the difference $\alpha - \beta$ to be $$\sqrt{(a+c)^2 - 4ac + 4b^2} = \sqrt{(a - c)^2 + 4b^2} \ge \sqrt{4b^2} = 2|b|.$$
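A quick numerical sanity check of the final inequality (my own addition, not part of the original answer), using NumPy on random symmetric $2\times 2$ matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a, b, c = rng.normal(size=3)
    A = np.array([[a, b], [b, c]])
    beta, alpha = np.linalg.eigvalsh(A)        # eigenvalues in ascending order
    # the gap equals sqrt((a-c)^2 + 4b^2), which is >= sqrt(4b^2) = 2|b|
    assert alpha - beta >= 2 * abs(b) - 1e-12  # small tolerance for roundoff
```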
H: Divergence of Infinite Series: ${e^{(tx)}\left(\frac{1}{2}\right)^{x}}$ I need to show whether the following infinite series converges, and if it does, to which value. Here, $t$ is a constant: $$\sum_{x=1}^{\infty}{e^{tx}(\frac{1}{2})^{x}}$$ This is my solution: $$\sum_{x=1}^{\infty}{e^{tx}(\frac{1}{2})^{x}} = \sum_{x=1}^{\infty}{e^{t}e^{x}(\frac{1}{2})^{x}} = e^{t}\sum_{x=1}^{\infty}{(\frac{e}{2})^{x}}$$ $$|\frac{e}{2}| > 1$$ Therefore it diverges. The solution, however, is that it converges to $\frac{e^t}{2-e^t}$. What is my mistake here? AI: As pointed out by @charlus, ${e^{tx} \neq e^t e^x}$. Instead, you have: $${=\sum_{x=1}^{\infty}\left(e^t\right)^{x}0.5^x}$$ $${=\sum_{x=1}^{\infty}\left(\frac{e^t}{2}\right)^x}$$ This is a geometric series. Provided ${\left(\frac{e^t}{2}\right) < 1}$ (as pointed out by @charlus again, this implies ${t < \ln(2)}$), we have $${=\sum_{x=0}^{\infty}\left(\frac{e^t}{2}\right)^{x}-1=\frac{1}{1-\frac{e^t}{2}}-1}$$ (I started the index from $0$ and just subtracted $1$ at the end so we can use ${\sum_{n=0}^{\infty}r^n = \frac{1}{1-r}}$). Simplifying you get $${=\frac{2}{2-e^t}-\frac{2-e^t}{2-e^t}=\frac{e^t}{2-e^t}}$$
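A quick numeric check of the closed form (my own addition): for a sample value $t<\ln 2$, a long partial sum should match $\frac{e^t}{2-e^t}$.

```python
import math

t = 0.5                                    # any t < ln(2) ≈ 0.693 works
r = math.exp(t) / 2                        # common ratio, here < 1
partial = sum(r**x for x in range(1, 200)) # partial sum starting at x = 1
closed = math.exp(t) / (2 - math.exp(t))
print(partial, closed)                     # both ≈ 4.6936
```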
H: Christoffel symbols, dual space I am confused with the definition of Christoffel symbols for the dual space. Let $M$ be some manifold and $x_i$ local coordinates. The Christoffel symbols are defined as $\nabla_{\partial_i} \partial_j = \Gamma^k_{ij} \partial_k$ where $\nabla$ is the Levi-Civita connection on $M$. Now I read that $\nabla_{\partial_i} dx_j = - \Gamma^j_{ik} dx_k$ but what is $\nabla$ here? It can't be the Levi-Civita connection, right? AI: Whenever one has a connection $\nabla$ on $TM$ there is an induced connection on the dual bundle $T^*M$ (which we will also call $\nabla$ because the abuse of notation is consistent), and it is defined in such a way that the connection satisfies a sort of product rule when taking the covariant derivative of $(\alpha,v)=\alpha(v)$. $$\nabla_X (\alpha,v) = (\nabla_X\alpha,v) +(\alpha,\nabla_X v)$$ But then recall that the action of the covariant derivative on a function is $\nabla_X(f)=X(f)$, so we have $$X \left((\alpha ,v)\right)=(\nabla_X\alpha,v) +(\alpha,\nabla_X v)$$ From here, by picking basis one-forms and vectors, you can arrive at the formula you had. So in short it is the connection on the cotangent bundle induced by the Levi-Civita connection.
H: What should the maximum possible rank of a matrix be under certain conditions? I have been trying questions in Linear Algebra and I couldn't correctly solve this particular question. Let $A$ be a non-zero $4\times 4$ complex matrix such that $A^2= 0$. What is the largest possible rank of $A$? All eigenvalues must satisfy $\lambda^{2} = 0$ and so all eigenvalues are $0$. I am unable to think what results I should use now. Can anyone please tell how I should proceed. Any help would be really appreciated. Answer: $2$ AI: We can construct one of rank $2$: $$A = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 &0 \\ 0& 0&0&1 \\ 0&0&0&0\end{bmatrix}$$ fits the bill. Clearly $A$ can't have rank four (since then it would be invertible), but why can't $A$ have rank $3$? If it had rank $3$, then there could only be one linearly independent vector in the zero eigenspace (i.e. nullspace). That would have to mean that there is only one Jordan block in the Jordan Normal Form of $A$ (number of Jordan blocks for a given eigenvalue $=$ number of linearly independent eigenvectors for that eigenvalue). Thus, $A$ would have to be similar to a $4 \times 4$ matrix that is a single zero Jordan block, i.e. $$A = S \cdot \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 &0 \\ 0& 0&0&1 \\ 0&0&0&0\end{bmatrix} \cdot S^{-1}$$ for some invertible $S$. But clearly $$A^2 = S \cdot \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 &1 \\ 0& 0&0&0 \\ 0&0&0&0\end{bmatrix} \cdot S^{-1}$$ cannot be zero, so we conclude it must be impossible for $A$ to have rank $3$. $\square$
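A two-line check of the constructed matrix (my own addition, using NumPy):

```python
import numpy as np

A = np.array([[0, 1, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
# prints "2 0": rank 2, and A @ A has no nonzero entries, i.e. A^2 = 0
print(np.linalg.matrix_rank(A), np.count_nonzero(A @ A))
```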
H: Show that $f(X) = X^2$, where $X \in M_n(\mathbb R)$, is smooth Considering that $M_{n}(\mathbb{R})$ can be identified with $\mathbb{R}^{n^{2}}$, define $f(X) = X^2$. I want to prove that $f$ is a smooth function. I don't know how I can relate the matrices with the concept of differentiability in Euclidean space. For example, consider $X \in \mathbb{R}^{n^{2}}$; we can think of $X$ as a matrix, thus $f(X) = X^{2}$ is a matrix too, but similar to what we did earlier, we can consider $X^{2}$ as a vector in $\mathbb{R}^{n^{2}}$. So is it correct to consider $f$ as $$f(X) = (f_{1}(X), \cdots, f_{n^{2}}(X)),$$ where $f_{i}(X)$ is some entry of the matrix $X^{2}$ (and therefore is a polynomial)? AI: Once you identify $M_n(\Bbb R)$ with $\Bbb R^{n^2}$ (in the "natural way"), you can define the projection map $$\pi_{i,j}:M_n(\Bbb R)\to\Bbb R$$ which sends a matrix to its $(i, j)$-th entry. You can also show that this map is a $C^\infty$ function. Now, as you noted, each component $f_i(X)$ is a polynomial in the entries of $X$. That is, you can write it as a product/sum of some projection maps. Since the product/sum of $C^\infty$ functions is again $C^\infty$, you see that $f_i$ is a $C^\infty$ function. In turn, that gives you that $f = (f_1, \ldots, f_{n^2})$ is $C^\infty$ as well.
H: I need to prove that in a complete oriented graph, you can change the orientation of one edge so that the resulting graph is connected. Prove that in a complete oriented graph, you can change the orientation of one edge so that the resulting graph is connected. I tried to use induction, but I don't know how: the case $n=3$ is true, so suppose it works for $n$ and prove it for $n+1$ — I don't know how to prove this. Also, I think that if our original graph is already connected, it's impossible. But I'm not sure. AI: It is a famous result of Rédei that every tournament (i.e. directed complete graph) has a directed Hamilton path (one that goes through each vertex). The proof can be read here. Let the Hamilton path start at $u$ and end at $v$. The tournament graph has an edge either from $u$ to $v$ or from $v$ to $u$. If the edge is from $u$ to $v$, then reverse it. If the edge is the other way around, do nothing. (If you have to change an edge, just change an arbitrary edge that isn't the edge from $v$ to $u$ or any of the edges in the Hamilton path.) In either case, we've created a Hamilton cycle in the graph, so it must be strongly connected now. Note that as bof observed, this proof works whenever the number of vertices is at least four. The cases with one and two vertices are trivial. The case with three vertices is basically as trivial, and can either be proven with brute force or the pigeonhole principle.
H: Proof that any number is equal to $1$ Before I embark on this bizarre proof, I will quickly evaluate the following infinite square root; this will aid us in future calculations and working: Consider $$x=\sqrt{2+\sqrt{{2}+\sqrt{{2}+\sqrt{{2}...}}}}$$ $$x^2-2=\sqrt{2+\sqrt{{2}+\sqrt{{2}+\sqrt{{2}...}}}}=x \implies x^2-x-2=0\implies x=2$$ as $x>0$. Now for the proof: I was attempting some different infinite expansions/square roots when trying to solve another question of mine (Evaluate $\sqrt{x+\sqrt{{x^2}+\sqrt{{x^3}+\sqrt{{x^4}...}}}}$ ) and I came across this: $$x+\frac{1}{x}=\sqrt{(x+\frac{1}{x})^2}=\sqrt{2+x^2+\frac{1}{x^2}}=\sqrt{2+\sqrt{(x^2+\frac{1}{x^2})^2}}=\sqrt{2+\sqrt{2+x^4+\frac{1}{x^4}}}=\sqrt{2+\sqrt{2+\sqrt{(x^4+\frac{1}{x^4})^2}}}=\sqrt{2+\sqrt{2+\sqrt{2+x^8+\frac{1}{x^8}}}}=\sqrt{2+\sqrt{{2}+\sqrt{{2}+\sqrt{{2}...}}}}=2$$ if you keep on applying this and using the result found at the start of the question. So we have for any real number $x$ that $$x+\frac{1}{x}=2\implies x^2-2x+1=0\implies (x-1)^2=0$$ so we finally have: $$x=1$$ Where have I gone wrong, for surely this cannot be correct? AI: As you go along, the last square root still contains $x^{2^n}+\frac{1}{x^{2^n}}$, which diverges when $|x|\neq 1$, so it can't be ignored as $n\to \infty$.
H: Show that $|a|\leq \int_{-1}^{1}\,|ax+b|\,\text{d}x$ Show that $\displaystyle|a|\leq \int_{-1}^{1}\,|ax+b|\,\text{d}x$. I did this problem but in a way that I do not like. I just divided it into many cases: I assumed $a>0$ and then did cases depending on where $-b/a$ is. However, this seems very computational and there must be a simpler way, at least I hope. This problem arose when considering the linear functional $f\mapsto f'(0)$ on $\operatorname{Span}\{1,x\}$ with the $L^1$ norm. I wanted to show that this operator is bounded, which leads to the following inequality. AI: \begin{align} \int_{-1}^1|ax + b| {\rm d}x&=\int_{-1}^1\big|(ax + b)\,\text{sign}(x)\big| {\rm d}x \\&\ge \left|\int_{-1}^1 (ax + b)\,\text{sign}(x) {\rm d}x\right|\\ &= |a|. \end{align} (I think this is the first time I use the inequality $\displaystyle\int_a^b|f| \ge \left|\int_a^bf\right|$ in this direction!) Edit by Aryaman: The above solution is mainly thanks to Batominovski, who fixed a rather silly calculation of mine. Thank you!
H: Constructing one-one functions under a constraint using derangements This question has been asked before, but I wanted to approach it via derangements; I haven't seen this answered satisfactorily on MSE using derangements. Let $A=\{1,2,3,4,5\}$ and $B=\{0,1,2,3,4,5\}$. We need to construct one-one functions $f$ from $A$ to $B$, such that $f(1)\neq0$ and $f(i)\neq i$ for $i$ in $\{1,2,3,4,5\}$. Approach: Since $B$ has $6$ elements and $f$ is one-one, there will be one element $e$ in $B$ that has no pre-image. If $e$ is $0$, then we simply have $d(5)=44$ cases. If $e$ is $1$, we again have $d(5)=44$ cases, since $1$ can't be associated with $0$. Suppose $e$ belongs to $\{2,3,4,5\}$. There are $4$ ways to do this. For instance, let $e=3$. Then, suppose $f(3)=0$: we have a total of $d(4)$ cases. If $f(3)=1$, we again have $d(4)$ cases. Now suppose $f(3)$ belongs to $\{2,4,5\}$: there are three cases. Say $f(3)=2$. Now we need to map $\{1,2,4,5\}$ to $\{0,1,4,5\}$ (under the original constraint, of course). There should be $d(3)-d(2)$ ways to do this: since $2$ can't map to $2$, we only need to worry about $\{1,4,5\}$, i.e. $d(3)$ cases. However these $d(3)$ cases include the cases where $4$ and $5$ are deranged but $1$ is associated to $0$ ($d(2)$ cases), which we need to subtract. So, all in all, we need to consider $d(3)-d(2)$ cases. Which makes the final answer: $d(5)+d(5)+ 4[d(4)+d(4)+3(d(3)-d(2))]=232$. However the correct answer is $256$. I think the cases where $e=1$ and $e=0$ are correct. Any insight/corrections regarding my approach in the treatment of cases involving $e$ belonging to $\{2,3,4,5\}$ will be appreciated. AI: In the 3rd case, I'd just make 2 cases. We are counting the number of bijections $\pi:\{1,2,4,5\}\to\{0,1,4,5\}$ such that $\pi(1)\notin \{0,1\}, \pi(4)\neq4$ & $\pi(5)\neq 5$. $2$ gets mapped to $0$ or $1$. For each of these, we have $d(3)$ possible assignments of $(1,4,5)$. $2$ gets mapped to $4$ or $5$. For each of these, $1$ must get mapped to $5$ and $4$ respectively. So we have 2 possible assignments for $4$ and $5$ for each such assignment of $2$. The final answer then comes to $2d(5)+4(2d(4)+3(2d(3)+2+2)) = 256$
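The count of $256$ is easy to confirm by brute force (my own check, not part of the original thread):

```python
from itertools import permutations

count = 0
for image in permutations(range(6), 5):          # injective maps {1,...,5} -> {0,...,5}
    f = {i: image[i - 1] for i in range(1, 6)}
    if f[1] != 0 and all(f[i] != i for i in range(1, 6)):
        count += 1
print(count)  # 256
```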
H: Generate two valid vertices of isosceles triangle, given one vertex, an angle, and a distance Trigonometry question: I want to find a way to randomly sample the coordinates of the two remaining vertices $C_2$ and $C_3$ of an isosceles triangle, given one initial coordinate $C_1$. I have the coordinate of one vertex ($C_1$), the angle between the vertex and the two remaining vertices (let's call it $\theta$), and the distance between $C_1$ and the two remaining vertices $C_2$ and $C_3$ (let's call it $R$: by isosceles triangle I mean, $R$ describes both the distance from $C_1$ to $C_2$ as well as from $C_1$ to $C_3$) How do I randomly choose two valid points $C_2$ and $C_3$ that satisfy the conditions of being theta degrees apart (relative to $C_1$) and being the same distance $R$ from $C_1$? AI: Given $C_1$, $\theta$ and $|C_1C_2|=|C_1C_3|=R$, the locus of the points $C_2,C_3$ is the circumference of the circle centered at $C_1$, and the location of one of the points $C_2,C_2$ is defined by the location of the other. So, for example, you can choose randomly an angle $\phi$ to locate the point $C_3$, \begin{align} C_3&=C_1+R\cdot(\cos\phi,\sin\phi) \end{align} then the coordinates of the other point would be \begin{align} C_2&=C_1+R\cdot(\cos(\phi+\theta),\sin(\phi+\theta)) . \end{align}
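A minimal Python sketch of the answer's recipe (my own addition; the function name is mine, and $\theta$ is taken in radians):

```python
import math
import random

def sample_vertices(c1, theta, R):
    """Sample C2, C3: both at distance R from c1, theta radians apart."""
    phi = random.uniform(0.0, 2.0 * math.pi)  # random direction for C3
    x, y = c1
    c3 = (x + R * math.cos(phi), y + R * math.sin(phi))
    c2 = (x + R * math.cos(phi + theta), y + R * math.sin(phi + theta))
    return c2, c3

# Example: vertex at (1, 2), apex angle 40 degrees, side length 3
c2, c3 = sample_vertices((1.0, 2.0), math.radians(40.0), 3.0)
```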
H: Intuition behind equation for volume of a cone without calculus I have come back to study geometry a bit and I'm kind of stuck at deriving the volume formula for a cone. I have read the calculus-based derivation and it totally makes sense, but calculus has been around for 200+ years, while cones have been around forever. Intuition leads me to believe that there must be a way for people to logically explain that a cone is one-third the volume of a cylinder of the same size before calculus was even a thing (similarly to the way that the area equation of a circle can be derived from breaking down the circle into infinitely many triangular slices). Is there any logical way to get to a cone's volume equation without calculus? Can't it be explained using some geometrical argument? How did civilizations wrap their heads around a cone's volume before calculus? AI: ETA: Based on the comments, I should make it clear upfront that this is not an explanation that is free of calculus. It avoids much of the mechanical manipulations of integral calculus, but the basic notions are in there, though "dressed up" in a way that hopefully conveys some intuition about how the formula comes about. One possibility is to notice that you can dissect a unit cube into three congruent portions, each of which is a skew pyramid with the same one vertex as the apex, and one of the three opposite squares as the base. Therefore, the volume of those skew pyramids is $1/3$, or equivalently, equal to one third times the height times the area of the base: $V = \frac13Bh$. Then imagine taking any of the skew pyramids and cutting it into infinitesimal square slices parallel to its base, and then "straightening" it out. That should not change the volume, so we still have $V = \frac13Bh$. If we stretch the pyramid out, we may change $B$ or $h$, but you may convince yourself that we still have $V = \frac13Bh$. Finally, if we take each square slice and shave off everything except the circle we inscribe inside it, then clearly the remaining area of each slice (and therefore of the base) is reduced in the same proportion as the overall volume, so we still have $$ V = \frac13Bh $$ Of course, the foregoing is hardly a proof, but it may serve to satisfy intuition, perhaps.
H: Problem about choice of balls from an urn with Bayes rule An urn contains $2N$ balls numbered from $1$ to $2N$. An experiment consists of choosing, randomly and without replacement, two balls from the urn, consecutively. Calculate the probability that the first choice results in an even number under the hypothesis that the sum is even. The solution given by the book is: Consider the following events: B: the sum of the two chosen numbers is even. A1: the number of the first choice is even. A2: the number of the second choice is odd. So: $$P(A_{1}\mid B)= \frac{P(B\mid A_{1})P(A_1)}{P(B\mid A_{1})P(A_{1})+P(B\mid A_{2})P(A_{2})}=\frac{1}{2}$$ I know some probability, but I'm confused. I understand that Bayes' rule was used, but I don't understand very well why the events chosen were A1 and A2, because it seems to be missing the cases where the first choice is odd (A3) and the second is even (A4). I think this is because those events can't happen if B already happened, but I'm not sure about that. Besides that, I don't know how to calculate P(A1), P(B|A1), and P(B|A2). One of my guesses was that $P(A1)=\frac{N}{2N}=\frac{1}{2}$ because in the first choice half the balls are even, and $P(A2)=\frac{N}{2N-1}$ because an even ball was obtained in the first choice, so the number of odd balls is unchanged but there are fewer balls in total. But I don't know how to get P(B|A1) only with that. Maybe I'm wrong in all of that. Thank you very much. AI: I think you need to step back and think about this a bit more intuitively. Suppose I first draw one number $X_1$ without replacement from $\{1, 2, \ldots, 2N\}$. Then, from the remaining numbers, I draw a second number $X_2$. If $X_1 + X_2$ is odd, I put the numbers back and start over. If $X_1 + X_2$ is even, I stop, and I ask you to guess whether $X_1$ is even or odd. Now, given that $X_1 + X_2$ is even--you are not asked anything about $X_1$ otherwise--you know that $X_1$ and $X_2$ must share the same parity; i.e., they are either both even, or both odd. But since there are just as many even numbers as there are odd numbers from the list, intuition suggests that the "even/even" case is equally likely as the "odd/odd" case. Therefore, the probability that $X_1$ is even given that the sum is even, should be $1/2$. For more support of this intuition, let us consider a small case, say $N = 2$. Then our list is $\{1, 2, 3, 4\}$ and there are only four ways to draw two numbers without replacement such that their sum is even: $$(X_1, X_2) \in \{(1,3), (2,4), (3,1), (4,2)\}.$$ In exactly half of these cases, $X_1$ is even. So let's now turn to the Bayesian reasoning. Define the event $A_i$ to represent the outcome that $X_i$ is even, so that $\bar A_i$ is the complementary event that $X_i$ is odd, for $i \in \{1, 2\}$. Now define $B$ to represent the outcome that $X_1 + X_2$ is even. We want to compute $\Pr[A_1 \mid B]$, the probability that $X_1$ is even given $X_1 + X_2$ is even. Then a straightforward application of Bayes' theorem yields $$\Pr[A_1 \mid B] = \frac{\Pr[B \mid A_1]\Pr[A_1]}{\Pr[B]}.$$ The denominator is the unconditional or marginal probability of event $B$, which can be expanded using the law of total probability: $$\Pr[B] = \Pr[B \mid A_1]\Pr[A_1] + \Pr[B \mid \bar A_1]\Pr[\bar A_1].$$ That is to say, we conditioned event $B$ based on whether $X_1$ is odd or even. If $X_1$ is even, then we require $X_2$ to also be even.
In other words, $\Pr[A_1] = \Pr[\bar A_1] = \frac{1}{2}$ is obvious, but $$\Pr[B \mid A_1] = \Pr[A_2 \mid A_1] = \frac{N - 1}{2N - 1},$$ because once you have drawn an even number, there are only $N-1$ remaining even numbers to select out of $2N-1$ numbers. Similarly, $$\Pr[B \mid \bar A_1] = \Pr[\bar A_2 \mid \bar A_1] = \frac{N - 1}{2N - 1}.$$ The rest is simple substitution: $$\Pr[A_1 \mid B] = \frac{\frac{N-1}{2N - 1}\cdot \frac{1}{2}}{\frac{N-1}{2N - 1}\cdot \frac{1}{2} + \frac{N-1}{2N - 1}\cdot \frac{1}{2}} = \frac{1}{2}.$$ But our earlier intuition shows why this kind of formal computation is unnecessary--it is the symmetry of even/odd numbers in the list that shows why the probability is $1/2$. As an exercise, then, suppose the list is actually $\{1, 2, \ldots, 2N + 1\}$. That is to say, it is no longer the case that there are as many even numbers as odd ones. How does this affect the resulting probability? How would you generalize it to the case where the list contains arbitrary distinct integers, say $a$ odd numbers and $b$ even numbers?
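A quick Monte Carlo simulation (my own addition) supports the $1/2$ answer, and the closing exercise can be explored by changing the list:

```python
import random

N = 10
hits = trials = 0
while trials < 200_000:
    x1, x2 = random.sample(range(1, 2 * N + 1), 2)  # two draws without replacement
    if (x1 + x2) % 2 == 0:                          # condition on an even sum
        trials += 1
        hits += (x1 % 2 == 0)                       # first draw even?
print(hits / trials)                                # ≈ 0.5
```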
H: Show that $\int_a^be^{tf(x)}\text{d}x \underset{}{\sim} \int_{x_0-\delta}^{x_0+\delta}e^{tf(x)}\text{d}x$ under a hypothesis. Let $a < b$, and let $f : [a, b] \mapsto \mathbb R$, $f \in C^\infty[a,b]$. Suppose there exists a unique point $x_0 \in [a,b]$ where $f$ reaches its maximum; we have $a < x_0 < b$, $f''(x_0)\ne0$. Show that we have $f''(x_0) < 0$. Show that for all $\delta > 0$ such that $\delta < \min(x_0-a, b - x_0)$, we have, as $t \to \infty$ $$\int_a^be^{tf(x)}\text{d}x \underset{}{\sim} \int_{x_0-\delta}^{x_0+\delta}e^{tf(x)}\text{d}x.$$ I did question 1 quite easily but I would need some help for question 2. I tried solving this problem by splitting the integral $\displaystyle{\int_a^be^{tf(x)}\text{d}x}$ into the three following integrals: $\displaystyle{\int_a^{x_0-\delta}e^{tf(x)}\text{d}x}$, $\displaystyle{\int_{x_0-\delta}^{x_0+\delta}e^{tf(x)}\text{d}x}$, $\displaystyle{\int_{x_0+\delta}^be^{tf(x)}\text{d}x}$ and then I tried showing the left and right integrals tend to $0$ as $t$ tends to $\infty$. Any help would be greatly appreciated. AI: Let $M := f(x_0)$ and given any such $\delta$ as in your statement, since $x_0$ is the unique point where the maximum is attained, there exists $\epsilon >0$ small enough such that on $[a,b]\setminus(x_0 - \delta, x_0 + \delta)$, we have $f \leq (M-\epsilon)$. Then, $$ \int_{[a,b]\setminus(x_0 - \delta, x_0 + \delta)} e^{tf(x)}dx \leq (b-a) e^{t(M-\epsilon)}. $$ By continuity, we can find $\eta >0$ small enough such that $(x_0 -\eta, x_0 + \eta) \subset (x_0 -\delta, x_0 + \delta)$ and on $(x_0 -\eta, x_0 + \eta)$, we have $f \geq M-(\epsilon/2)$. Then we get $$ \int_{(x_0 - \delta, x_0 + \delta)} e^{tf(x)}dx \geq \int_{(x_0 - \eta, x_0 + \eta)} e^{tf(x)}dx \geq 2\eta e^{t(M-(\epsilon/2))}. $$ Hence as $t \rightarrow \infty$, (since $M-(\epsilon/2) > M-\epsilon$) we can conclude that $$ \int_{(x_0 - \delta, x_0 + \delta)} e^{tf(x)}dx \gg \int_{[a,b]\setminus(x_0 - \delta, x_0 + \delta)} e^{tf(x)}dx, $$ and the required result then follows.
H: If the differential of a function is constant, does the function have to be affine? Suppose $f:V \subset R^n \to R^m$ is a smooth function such that $Df = A$ on $V$ for some constant matrix $A$, where $V$ is open. My question is, is it true that $f(x) = y + Ax$ for some constant $y \in R^m$? If so, why? AI: This is not true, but it's almost true. The function $$g(x)=f(x)-Ax$$ has total derivative equal to $0$ everywhere, so it must be constant on every connected component of $V$. If $V$ is connected, the result thus follows.
H: Sigma Notation Equation? So I came across this post on stackoverflow which discussed the ranges of integer variables in C++. And the last point of the top-voted response was the unsigned long long int, which apparently ranges from 0 to 18,446,744,073,709,551,615 (18.5 quintillion?). But that response did not mention what that would be equivalent to in bytes... Now I realize that there are obviously numerous other methods to figure this out, but what popped into my mind as a first thought is this: $$\sum_{n=0}^{x}2^n = 18{,}446{,}744{,}073{,}709{,}551{,}615$$ where $x$ is the number of bits minus $1$. How would one go about solving an equation like this? Sorry if it's something obvious, but no particular approach is coming to mind. I already know that it's 8 bytes ($x = 63$) -- the real topic here is equations that involve sigma notation: are they actually often seen in any fields of maths? AI: You can use the finite geometric series formula $$\sum_{n=0}^N x^n = \frac{1-x^{N+1}}{1-x}$$ In this case this has the simple form $2^{N+1}-1$, so the equation is easily solved by taking logarithms.
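Concretely (my own check), since $\sum_{n=0}^{x}2^n = 2^{x+1}-1$, one can solve for $x$ exactly with integer arithmetic:

```python
total = 18_446_744_073_709_551_615
# sum_{n=0}^{x} 2^n = 2^(x+1) - 1, so 2^(x+1) = total + 1
x = (total + 1).bit_length() - 2                      # (2**64).bit_length() is 65, so x = 63
print(x, sum(2**n for n in range(x + 1)) == total)    # 63 True
```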
H: What's $(1 2 3)(1 4 5)$? Everybody gives a different answer. From my calculation: $(1 4 5 2 3)$. From Joseph Gallian's Contemporary Abstract Algebra, 9th edition, page 100: [screenshot omitted]. From WolframAlpha: [screenshot omitted]. AI: Technically, the book isn't mistaken but might be slightly misleading. The order of the permutation $(1 2 3)(1 4 5)$ is equal to the order of the permutation $(1 4 5 3 2)$, but the disjoint cycle form of $(1 2 3)(1 4 5)$ is $(1 4 5 2 3)$. Also, note that WolframAlpha composes permutations from left to right; try inputting $(1 4 5)(1 2 3)$.
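The convention issue is easy to see in code. A sketch with SymPy (my addition; note SymPy's `Permutation` product `p*q` applies `p` first, i.e. composes left to right, and uses 0-indexed entries):

```python
from sympy.combinatorics import Permutation

a = Permutation([[0, 1, 2]], size=5)   # (1 2 3), written 0-indexed
b = Permutation([[0, 3, 4]], size=5)   # (1 4 5), written 0-indexed
# Right-to-left composition (apply (1 4 5) first, as in function notation):
print((b * a).cyclic_form)   # [[0, 3, 4, 1, 2]], i.e. (1 4 5 2 3)
# Left-to-right composition gives a different cycle:
print((a * b).cyclic_form)   # [[0, 1, 2, 3, 4]], i.e. (1 2 3 4 5)
```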
H: $A\subseteq\mathbb{R}$ is closed iff $\sup(A\cap[a,b])\in A$ and $\inf(A\cap[a,b])\in A$. Suppose that $A\subseteq\mathbb{R}$. Show that the following are equivalent: (a) $A$ is closed. (b) If $[a,b]$ is a closed interval for which $A\cap[a,b]$ is non-empty, then $\sup(A\cap[a,b])\in A$ and $\inf(A\cap[a,b])\in A$. MY ATTEMPT Let us prove the implication $(a)\Rightarrow(b)$ first. Since $A\cap[a,b]$ is non-empty and bounded above by $b$, it admits a supremum. Moreover, we also have that $A\cap[a,b]\subseteq A$. Hence we conclude \begin{align*} \sup(A\cap[a,b])\in\overline{A\cap[a,b]}\subseteq\overline{A} = A \Rightarrow \sup(A\cap[a,b])\in A \end{align*} This is because the supremum of a set is a closure point of it. Similar reasoning applies to the infimum. However I do not know how to approach the converse implication. Could someone tell me what I am missing? Any contribution is appreciated. AI: Hint: Suppose $A$ is not closed. Then, there exists some converging (and WLOG monotonic) sequence $x_1,x_2,\dots\in A\subset \mathbb{R}$ such that $\lim_{n\to \infty}x_n=x\not\in A$. Take $[a,b]=[x-1,x]$ if the sequence is increasing, or $[a,b]=[x,x+1]$ if it is decreasing, and observe... (comment if more detail is needed!)
H: Uniform convergence of $f_n(x) = \frac{x^ne^{-x}}{(2n)!}$ I came across this question while studying for my qualifying exams and it was grouped together with problems involving the Weierstrass M-test. Let $f_n(x) = \frac{x^ne^{-x}}{(2n)!}$. Does $(f_n(x))$ converge uniformly on $[0,\infty)$? I don't think I can apply the M-test since we are not looking at the series $\sum f_n(x)$, and I can't apply Dini's theorem since $[0,\infty)$ is not compact. Hints are greatly appreciated. AI: hint For $x\ge 0$, $$f'_n(x)=\frac{1}{(2n)!}x^{n-1}e^{-x}(n-x)$$ So, $$M_n=\max_{[0,+\infty)}\{|f_n(x)-0|\}=f_n(n)$$ $$=\frac{n^ne^{-n}}{(2n)!}$$ Now $$\lim_{n\to+\infty}\frac{M_{n+1}}{M_n}=$$ $$\lim_{n\to+\infty}(1+\frac 1n)^n\frac{n+1}{(2n+1)(2n+2)e}=0$$ $$\implies \sum M_n \text{ converges}$$ $$\implies \lim_{n\to+\infty}M_n=0$$ $$\implies \text{ the convergence is uniform}$$
H: What does it mean to converge to a point if it is not clear what the point even is? What does it mean for a sequence to converge to an element if the limit might not necessarily be defined or known, or not necessarily in the universe under consideration (whatever this means)? I am not just talking about real numbers; it can be more general. The definition of a sequence $(x_n)$ converging to $x$ seems to say: for any $\epsilon > 0$, there is $N \in \mathbb{N}$ such that $|x_n - x| < \epsilon$ for all $n > N$. But doesn’t this assume the existence of a point $x$ under consideration? For example, when we show that $\{p \in \mathbb{Q}: p^2 < 2\}$ “approaches” a number, e.g. by considering $1.4, 1.41, 1.414, ...$, what does that even mean, if we have not yet constructed the real numbers? What is meant by “number” in this case? Does it even make sense to say that? Although we might not "know" what the limiting point $x$ is for $\{p \in \mathbb{Q}: p^2 < 2\}$, it seems to still make sense to speak of a limit. (Again, I'm assuming we do not as yet know what real numbers might be, or what completeness is, etc.). In such a case, how should we define convergence, if we cannot have a point to explicitly refer to? In general, couldn’t we converge to “something”, but it is not at all clear what that “something” should or might be? If it isn’t clear what that something should be, then how can we even speak of converging to that something? Is this just a logical/semantic/notational thing in the definition of convergence? AI: This is why the definition of convergence assumes a metric space (or more generally a topological space, but let's stay in metric spaces): Let $(X,d)$ be a metric space and $(x_n)_{n\in\mathbb N}$ a sequence in $X$. It is said to converge to $x\in X$ if for arbitrarily small $\varepsilon>0$ there exists a sufficiently large $N\in\mathbb N$ such that $d(x,x_n)<\varepsilon$ for all $n\geq N$. $d(x,y)$ is essentially a generalization of the distance between $x$ and $y$. In the rational or real numbers, it's usually what we would intuitively think of as distance, namely $d(x,y)=\vert y-x\vert$. That's why $\vert x-x_n\vert$ comes up in your less general definition. Anyway, with this more rigorous definition, the answer to your question is: Convergence is very much defined by the underlying set. If there is some bigger set containing $X$ and $x_n$ "converges" to an element $y$ in that bigger set, but which is not in $X$, then we just don't say that $x_n$ converges. With this in mind, we have to make a careful distinction when determining the convergence of the sequence $1,1.4,1.41,1.415,\dots$: If we consider the rationals as the underlying set, then it does not converge, since the "limit" $\sqrt2$ is not in the underlying set. But if we consider the reals as the underlying set, it does converge. There is also a certain notion of when a sequence "should" converge. If the members of the sequence get arbitrarily close to each other for large enough index $n$, then we would intuitively expect it to converge. Such a sequence is then called Cauchy sequence, named after French mathematician Augustin-Louis Cauchy, who just assumed it as obvious that a sequence converges if it is a "Cauchy sequence". But it turns out, we have to be more careful, because not every Cauchy sequence converges. At least not in every metric space, because the point to which such a sequence should converge might not be in the underlying set. Such a metric space is called incomplete, otherwise it is called complete. 
But for every incomplete metric space, there is a bigger, complete metric space in which every Cauchy sequence does converge. Such a metric space is then called the completion of the smaller space. For instance, the real numbers are the completion of the rational numbers: There are rational sequences converging to an irrational number because the rationals are incomplete, and there is no larger space than the reals containing additional points to which a real sequence might converge, because the reals are complete.
H: Increase in argument of a smooth function (Gamelin V.III. section 2 exercise 8) The following is question 8 in chapter 8.2 of Gamelin. Let $D$ be a domain in the complex plane and $f(z)$ a smooth complex valued function on $D$. Suppose that for any $a,b\in \mathbb{C}$ and any circle $\partial D(z_0,\varepsilon)\subset D$ such that $f(z)+az+b\neq 0$ for all $z\in \partial D(z_0,\varepsilon)$, the increase in argument around such a circle is non-negative. That is, \begin{equation} \oint_{\partial D(z_0,\varepsilon)} d\,\text{arg} (f(z)+az+b)\geq 0. \end{equation} We can then supposedly prove that $f(z)$ is analytic. The hint is given to expand $f(z)$ as a taylor series and to notice that if $A=\frac{\partial f(z)}{\partial \overline{z}}(z_0)\neq 0$, then the increase in argument of $A(\overline{z}-\overline{z_0})+o(\lvert z-z_0\rvert)$ around $\lvert z-z_0\rvert =\varepsilon$ is strictly negative. I tried to mess around with this hint, and I got that clearly the increase in argument for the first term is $-2\pi$, but how do I do anything with the little o term? I think I'm still not clear on what this notation is actually saying. This is in the chapter on Rouche's Theorem, so maybe I can use that, but I'm still not sure. AI: $A(\overline{z}-\overline{z_0})+o(\lvert z-z_0\rvert)=A(\overline{z}-\overline{z_0})(1+o(1))$ so the increase in the argument of $A(\overline{z}-\overline{z_0})+o(\lvert z-z_0\rvert)$ is the same as the increase in the argument of $A(\overline{z}-\overline{z_0})$ (for example by taking a simple homotopy $A(\overline{z}-\overline{z_0})(1+o(1)t)$ which cannot go through zero for any $0 \le t \le 1$ because $1+o(1)t \ne 0, 0 \le t \le 1$)
H: Continuous functions $f: \mathbb{R} \to \mathbb{R}$ such that $f\big(f(x)\big)=rf(x)+sx$ and $r,s \in (0, 1/2).$ I wish to find all continuous functions $f: \mathbb{R} \to \mathbb{R}$ that satisfy $$f\big(f(x)\big)=rf(x)+sx\quad\forall x\in\mathbb{R}\,,$$ where $r,s \in (0, 1/2)$. Here's my work so far: Let $r_1 > r_2$ be the roots of $x^2-rx-s.$ We have $1>|r_1|>|r_2|$ and $r_2<0<r_1.$ Furthermore, $f(x)=r_1 x, f(x)=r_2 x$ are clearly solutions to the FE. If $f(x)=f(y),$ then $rf(y)+sy = f(f(y))=f(f(x))=rf(x)+sx \Rightarrow x=y.$ Thus, $f$ is injective, which means $f$ is either strictly increasing or strictly decreasing. If $f$ is bounded below or above, then $sx = f(f(x))-rf(x)$ is as well, which is absurd. Thus, $f(\mathbb{R})=\mathbb{R}$ and $f$ is surjective. Now assume $f$ is increasing. I suspect that we need to show $f(x)=r_1 x$ from here, and the strictly decreasing case will be similar. Let $a_0 = x_0, a_{n+1} = f(a_n).$ Then $a_{n+2}-ra_{n+1}-sa_n = 0.$ This is a linear recursion with characteristic polynomial $x^2-rx-s,$ so $a_n = c_1 r_1^n + c_2 r_2^n$ for some $c_1, c_2.$ Solving for the constants, we get $c_1 = \frac{f(x_0) - x_0 r_2}{r_1-r_2}, c_2 = \frac{x_0 r_1 - f(x_0)}{r_1-r_2}.$ I managed to find $f(0) = 0$ by taking $n \to \infty,$ but did not get any use out of this observation. Another idea that I remembered could come in handy is the fact that $x>y \Rightarrow f(x)>f(y)$ for increasing functions. Perhaps if we assume $f(x_0) > rx_0$ for some $x_0,$ we could derive a contradiction. $f(x_0) < rx_0$ would lead to a similar contradiction, and we would be done. Using $f$ on both sides $n$ times, we get $f^{n+1}(x_0) > f^{n}(rx_0),$ which simplifies to $f(x_0) > f(r_1 x_0) \cdot \frac{r_1^n - r_2^n}{r_1^{n+1}-r_2^{n+1}} + r_2 r_1^n (r_2-r_1)x_0$ after some algebraic manipulation. If we use the fact that $|r_2/r_1| < 1$ and take $n \to \infty,$ we get $f(x_0) > f(r_1 x_0)/r_1 \Rightarrow r_1f(x_0) > f(r_1 x_0).$ Unfortunately, this by itself is not a contradiction. You can compose both sides with $f$ again and take the limit once more, but that seems like it won't lead anywhere. Update: Composing both sides with $f$ and taking the limit once more gives $f(r_1 f(x_0)) > r_1(r_2f(x_0)+f(r_1 x_0)+r_1r_2x_0).$ Still no contradiction. After all this work, I have exhausted my bag of tricks. What else can I use on this problem? What would be the motivation behind these additional approaches? AI: In this solution, for a nonnegative integer $n$, $f^{\circ n}$ denotes the $n$-time iteration of $f$ (with $f^{\circ 0}$ being the identity function), and for a negative integer $n$, $f^{\circ n}$ is the functional inverse of $f^{\circ |n|}$. I quote your results $f(0)=0$ and, for every $n\in\mathbb{Z}$ (not just $n\in \mathbb{Z}_{\geq 0}$), $$f^{\circ n}(x)=\frac{f(x)-r_2x}{r_1-r_2}\,r_1^n+\frac{r_1x-f(x)}{r_1-r_2}\,r_2^n\,,$$ where $r_1>0>r_2$ with $1>|r_1|>|r_2|$. I shall prove that $f(x)=r_1x$ for all $x\in\mathbb{R}$ if $f$ is increasing, and $f(x)=r_2x$ for all $x\in\mathbb{R}$ if $f$ is decreasing. If $f$ is increasing, then $f^{\circ n}(x)>0$ for all $x>0$ and $n\in\mathbb{Z}$. In particular, when $n$ is a large negative integer, we can see that the term $r_2^n$ dominates the term $r_1^n$ in $f^{\circ n}(x)$. Thus, unless $f(x)=r_1x$, the $r_2^n$-term in $f^{\circ n}(x)$ will oscillate between positive and negative values, contradicting $f^{\circ n}(x)>0$. Thus, $f(x)=r_1x$ for $x>0$. A similar argument shows that $f(x)=r_1x$ for $x<0$.
Ergo, $f(x)=r_1x$ for all $x\in\mathbb{R}$. If $f$ is decreasing, then $(-1)^n\,f^{\circ n}(x)>0$ for all $x>0$ and $n\in\mathbb{Z}$. In particular, when $n$ is a large positive integer, we can see that the term $(-r_1)^n$ dominates the term $(-r_2)^n$ in $(-1)^n\,f^{\circ n}(x)$. Thus, unless $f(x)=r_2x$, the $(-r_1)^n$-term in $(-1)^n\,f^{\circ n}(x)$ will oscillate between positive and negative values, contradicting with $(-1)^n\,f^{\circ n}(x)>0$. Thus, $f(x)=r_2x$ for $x>0$. A similar argument shows that $f(x)=r_2x$ for $x<0$. Ergo, $f(x)=r_2x$ for all $x\in\mathbb{R}$.
H: Factorial—Googol problem help So over the past week, I encountered this interesting problem which I need help with, so any help is highly appreciated! Problem: What is the least integer $n$ such that $n!$ is greater than a googol? My work so far: We are trying to find the minimal $n$ where $n!\ge 10^{100}$. With this, we can take the log of both sides and get $\log _{10} n! \ge 100$. Now we can split the LHS into $\log (n) + \log (n-1) +... + \log (1)$ to get $\log (n) + \log (n-1) +... + \log (1) \ge 100$. From here we dissect the LHS. We can take the nearest integer to each individual term. To know how many 2's we'll have and etc., we'll have to find the maximum $n$ in which $[\log(n)] = 1$ and the max $n$ when $[\log(n)] = 0$. We find that $n$ is 31 and 3, respectively. Now we can set up the equation $2(n-32)+31-4+1+0 \ge 100$. We get $n=68$. However, this is not the exact answer, I know; it is around there, though. AI: You can use Stirling's approximation, which will still leave some calculation. We have $$n!\approx \frac {n^n}{e^n}\sqrt{2 \pi n}\\ \log(n!)\approx n \log n -n +\frac 12\log(2 \pi n)$$ Maybe you know that $\log 10 \approx 2.30$, so we want $\log n! \gt \log 10^{100} \approx 230$. I would start by ignoring the last term (we can make the correction later, when we are close), so we want $$n (\log n-1) = 230\\n = \frac {230}{\log n-1}$$ which is nice for fixed point iteration if we have a calculator or choose $n$s that we can compute easily. I know more base $10$ logs than natural logs, so I will write this as $$n = \frac {230}{2.30 \log_{10} n-1}$$ If we make our first guess $100$, the log is $2$ and our next guess is $50.$ I know $\log_{10}5 \approx 0.70$ so our third guess is $$n_3=\frac {230}{2.30 \log_{10} 50-1}\approx \frac{230}{2.30 \cdot 1.70-1}=\frac {230}{4-0.09-1}=\frac{230}{2.91}\approx 79$$ Then knowing $\log_{10}2 \approx 0.301, \log_{10}8 \approx 0.903$ $$n_4=\frac {230}{2.30 \log_{10} 79-1}\approx \frac {230}{2.30\cdot 1.90-1}\approx \frac{230}{3.37}\approx 68$$ and we are getting close. If I truly had to not use a calculator, I would note that the $\frac 12\log{(2 \pi n)} \approx \frac 12\log(450)\approx 3$ and subtract that from the average of $68$ and $79$, getting about $70$ and call it good. A final check with Alpha shows this is very close, but I would feel it was within $3$ either way based on the rough work.
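For the record, an exact check (my own addition) confirms the estimate: the minimal $n$ is $70$, since $69!\approx 1.7\times 10^{98}$ while $70!\approx 1.2\times 10^{100}$.

```python
import math

n = 1
while math.lgamma(n + 1) <= 100 * math.log(10):   # lgamma(n+1) = ln(n!)
    n += 1
print(n)                                          # 70
print(math.factorial(69) < 10**100 < math.factorial(70))   # True (exact integers)
```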
H: Given a bilinear map, how do I produce a cross-product on cohomology? If $M$, $N$, and $P$ are $R$-modules, and there exists a bilinear map between $M\times N$ and $P$, how do I construct a cross product $$\times: H^p(X;M)\times H^q(Y;N)\to H^{p+q}(X\times Y;P)?$$ I say let $S^*(X;M)$ be the singular cochain complex of $X$ with coefficients in the $R$-module $M$, $S^*(Y;N)$ be the singular cochain complex of $Y$ with coefficients in the $R$-module $N$, and $S^*(X\times Y;P)$ be the singular cochain complex of $X\times Y$ with coefficients in the $R$-module $P$. I thought about defining a map $\times:S^p(X;M)\times S^q(Y;N)\to S^{p+q}(X\times Y;P)$ that commutes with the differential maps but I cannot think of how it should be defined? AI: It is useful to think of the "topological" cross product as the combination of the "algebraic" cross product + some formalities (such as Eilenberg-Zilber when working with singular complexes, or some straightforward manipulation when dealing with cellular complexes). Note that you have an algebraic cross product $S^p(X;M) \times S^q(Y;N)$ to the cochain complex tensor product $S^{p+q}(S^p(X) \otimes S^q(Y), M \otimes_R N)$ (for lack of better notation) given by $\alpha \otimes \beta \mapsto \alpha \times \beta$ where $(\alpha \times \beta) (z,w) = \alpha (z) \otimes \beta (w)$. Since you have a bilinear map $M \times N \to P$, this is the same as a homomorphism $M \otimes_R N \to P$ and so you can change coefficients. This is the step that most concerns your problem. After this, you need to use the fact that $H^\ast(S^*(X) \otimes S^*(Y)) \cong H^\ast(S^*(X \times Y))$ and naturally so. Essentially the tensor product chain complex and the chain complex associated to product of topological spaces are chain homotopic. For cellular cochains, this isn't too hard to show directly. For singular (and therefore simplicial) cochains, you might have to resort to Eilenberg-Zilber.
H: Show that a subset $I$ of $\textbf{R}$ is an interval if and only if whenever $a,b\in I$ and $0\leq t\leq 1$ then $(1-t)a + tb\in I$. Show that a subset $I$ of $\textbf{R}$ is an interval if and only if whenever $a,b\in I$ and $0\leq t\leq 1$ then $(1-t)a + tb\in I$. MY ATTEMPT We say that a subset $I$ of $\textbf{R}$ is an interval iff $a \leq c \leq b$ and $a,b\in I$ implies that $c\in I$. Based on such definition, we may proceed. Let us prove the implication $(\Rightarrow)$ first. Suppose that $I$ is an interval and $a,b\in I$ s.t. $a \leq x \leq b$. We shall prove that $(1-t)a + tb\in I$ whenever $t\in[0,1]$. In order to do so, let us consider that $t = (x-a)/(b-a)\in[0,1]$. We have that $t\in[0,1]$ because $a\leq x\leq b$. Moreover, we do also have that \begin{align*} x = a + x - a = a + (b-a)\times\frac{x-a}{b-a} = a + (b-a)t = (1-t)a+ bt\in I \end{align*} Conversely, let us prove the implication $(\Leftarrow)$. Let us suppose that $a,b\in I$ and $(1-t)a + tb\in I$ whenever $t\in[0,1]$. Thus, if $a\leq x\leq b$, we have \begin{align*} t = \frac{x-a}{b-a}\in[0,1]\Rightarrow (1-t)a + bt = a + t(b-a) = a + \frac{x-a}{b-a}\times(b-a) = a + x - a = x \in I \end{align*} and we are done. Could someone please check if the wording of my proof is correct? Any contribution is appreciated. AI: The issue with your approach has been described in comments to your question, but for completeness' sake let us put it here also. You are supposed to show that for every $t\in[0,1]$ the number $(1-t)a+tb$ lies in the interval $[a, b]$. But then you take up $t$ in some specific form $t=(x-a)/(b-a)$. And you write that $t\in[0,1]$ because $a\leq x\leq b$. Well, you had to prove the other way round that if $t\in[0,1]$ then $x\in[a, b]$. Just do it directly. Let $t\in [0,1]$ and we need to show $$(1-t)a+tb\geq a$$ ie $$tb\geq a-(1-t)a=ta$$ which clearly holds if $t=0$ and if $t>0$ this is equivalent to $b\geq a$ which is already part of hypothesis. And thus we have shown $$(1-t)a+tb\geq a$$ Next we also need to show $$(1-t)a+tb\leq b$$ or $$(1-t)a\leq (1-t)b$$ Clearly this holds for $t=1$ and if $0\leq t <1$ then we can see (via dividing by the positive number $1-t$) that it is equivalent to $a\leq b$ which is already given. This completes the proof of first part. For the second part let us assume that for every $t\in[0,1]$ the number $(1-t)a+bt\in I$. If $a\leq c\leq b$ then we can write $$c=(1-t) a+tb$$ where $t=(c-a) /(b-a) $. Clearly $$0\leq c-a\leq b-a$$ and hence $0\leq t\leq 1$ and then by our assumption $c=(1-t)a+tb\in I$ and therefore $I$ is an interval. Also note that if $a=b$ then this has to be handled specially.
H: Conditions on an entire function that would make it a polynomial Here is the question: Suppose $f$ is entire. a) Suppose $|f(z)|\leq A|z|^N+B$ $\forall z\in\mathbb{C}$ where $A,B<\infty$ are constants. Show $f$ is a polynomial of degree $\leq N$. b) Suppose $f$ satisfies $f(z_n)\rightarrow\infty$ whenever $z_n\rightarrow\infty$. Show $f$ is a polynomial. Here is my idea: for (a), since $f$ is entire we can consider $f$ on a circle centered at $a$ of radius $r$. By Cauchy, we can estimate the $k^{th}$ derivative by $|f^{(k)}(z)|\leq\frac{k!M}{r^k}$ where $M=\max_{|z-a|<r}|f(z)|$. Since $|f(z)|\leq A|z|^N+B$, we know that $M\leq A|z|^N+B$. Thus, $|f^{(k)}(z)|\leq\frac{k!(A|z|^N+B)}{r^k}$. So could this lead me to an argument where I could say that each derivative is bounded, and thus may be estimated by a polynomial, thus $f$ can be too? Moreover, since the highest power of any derivative is $N$, then can we say that the highest power of $f$ is $N$ too? The trouble is that what if the $k^{th}$ derivative is of power $N$, then $f$ would have power $N+1$... so something is wrong here... I'm not too sure where to start with (b). I was going to try and piggy-back off my argument in (a) but I wasn't seeing any way that it could be done. Any hints, ideas, thoughts, etc. are greatly appreciated! Thank you so much! AI: For (b), that condition implies that $f$ extends to a meromorphic function on the Riemann sphere, with a pole at $\infty$ and no poles anywhere else. The meromorphic functions on the Riemann sphere are precisely the rational functions, so $f(x) = p(x) / q(x)$ for some coprime polynomials $p$ and $q$. Since $f$ has no poles on $\mathbb{C}$, $q(x)$ cannot have any roots so must be constant.
H: Isometry between $k$-dimensional subspace of $\mathbb{R}^n$ and $\mathbb{R}^k$ If $W\subset\mathbb{R}^n$ is a $k$-dimensional subspace of $\mathbb{R}^n$ with the usual Euclidean norm, then is there a result that says that there is an isometry between $W$ and $\mathbb{R}^k$? Could someone possibly provide either an easy proof or counterexample that would go along with this? AI: To flesh out Thomas Andrews' hint, since $W$ is $k$-dimensional it admits an orthonormal basis $B_W = \{\vec{u}_1, \cdots, \vec{u}_k\}$. Define the linear transformation $T: W \to \mathbb{R}^k$ such that $T(\vec{u}_i) = \hat{e}_i$, where $\hat{e}_i$ is the $i$th standard basis vector in $\mathbb{R}^k$. To see why this is an isometry, you can note that $T$ is length-preserving, i.e. $\|T \vec{u}\| = \|\vec{u}\|$ for any $\vec{u} \in W$. If we let $\vec{u} = c_1 \vec{u}_1 + \cdots + c_k \vec{u}_k$ (recall the $\vec{u}_i$ form a basis), then we can see that $$\|T \vec{u}\|= \|T(c_1 \vec{u}_1 + \cdots + c_k \vec{u}_k)\| = \|c_1 \hat{e}_1 + \cdots + c_k \hat{e}_k\| = \sqrt{\sum_{i = 1}^k c_i^2} = \|\vec{u}\|$$ This implies that $T$ preserves the dot-product as well (due to the polarization identity). $T$ is also clearly invertible, because its rank is $k$ and the dimension of the domain is also $k$. Hence, we are done. $\square$
H: Proving a non-homogeneous inequality with $x,y,z>0$ For $x,y,z>0$, prove: $$\frac{1}{2}+\frac{1}{2}{r}^{2}+\frac{1}{3}\,{p}^{2}+\frac{2}{3}\,{q}^{2}-\frac{1}{6} Q-\frac{3}{2} r-\frac{2}{3}q-\frac{1}{6}pq-\frac{5}{3} \,pr\geqslant 0$$ where $$\Big[p=x+y+z,q=xy+zx+yz,r=xyz,Q= \left( x-y \right) \left( y-z \right) \left( z-x \right)\Big ]$$ My SOS proof: $$\text{LHS}=\frac{1}{12}\,\sum \left( 3\,{z}^{2}+1 \right) \left( x-y \right) ^{2}+\frac{1}{6} \sum\,y \left( y+z \right) \left( x-1 \right) ^{2}+\frac{1}{2}\, \left( xyz-1 \right) ^{2} \geqslant 0$$ By the way, there is a C-S proof :D Who can find it? AI: Since by AM-GM $$1+x^2y^2z^2\geq2xyz$$ and $$\sum_{cyc}x^2y^2\geq\sum_{cyc}x^2yz,$$ it's enough to prove that: $$\sum_{cyc}x^2y^2-\sum_{cyc}(x^2y+xyz)+\sum_{cyc}x^2\geq0,$$ which is true by AM-GM twice: $$\sum_{cyc}(x^2y^2+x^2)\geq2\sum_{cyc}x^2y\geq\sum_{cyc}(x^2y+xyz).$$
H: Questions about proving the statement: $\tau(\omega)$ is a stopping time iff $\{ \tau(\omega) < t\} \in \mathcal{F}_t$ for all $t \geq 0$. Assume the filtration is right-continuous ($\mathcal{F}_{t+0} := \cap_{s>t}\mathcal{F_s} = \mathcal{F}_t$) and complete; then we have that $\tau(\omega)$ is a stopping time if and only if $\{ \tau(\omega) < t\} \in \mathcal{F}_t$ for all $t \geq 0$. Here is the proof: $\Rightarrow$: $\{ \tau(\omega) < t\}$ = $\cup_{n=1}^{\infty} \{ \tau(\omega)\leq t- 1/n\} \in \mathcal{F}_t;$ the "$\in$" holds due to the definition of a stopping time and the definition of a sigma algebra; that is, $\{ \tau(\omega)\leq t- 1/n\} \in \mathcal{F}_{t-1/n} \subset \mathcal{F}_t$ for all $n$, so the union is in $\mathcal{F_t}$. $\Leftarrow$: $\{ \tau(\omega) \leq t\}$ = $\cap_{n=1}^{\infty} \{ \tau(\omega) < t+ 1/n\} \in \mathcal{F}_{t+0} = \mathcal{F}_t;$ One question: the last equality follows from our right-continuity assumption, but why does the "$\in$" hold? Another question: many books give the statement $\cap_{s>t}\mathcal{F_s} = \cap_{n=1}^{\infty}\mathcal{F}_{t+1/n}$, but they don't provide any proof of it. I've tried to prove it, but it doesn't make sense to me. Can you give any thoughts about it? AI: Equivalently, we need $$\bigcap_{n=1}^{\infty} \{ \tau(\omega) < t+ 1/n\} \in \mathcal{F}_s$$ for every $s > t$. Since $\{ \tau(\omega) < t+ 1/n\}$ is a decreasing sequence of events, it suffices that $\{ \tau(\omega) < t+ 1/n\} \in \mathcal{F}_s$ for all $n > N$, for some $N$. To see this just set $N = 1/(s-t)$. For the second question, $$\bigcap_{s>t}\mathcal{F_s} \subseteq \bigcap_{n=1}^{\infty}\mathcal{F}_{t+1/n}$$ follows immediately from the fact that $t+1/n > t$, so the left intersection is over more sets. The other direction is the archimedean property. Fix $s > t$. Then there is some positive integer $m$ such that $t+1/m < s$, showing $$\bigcap_{n=1}^{\infty}\mathcal{F}_{t+1/n} \subseteq \mathcal{F}_{t+1/m} \subseteq \mathcal{F}_s.$$ Now just intersect over all $s>t$.
H: Let $a$,$b$ be positive elements in a C* algebra with $\|a\|,\|b\|\leq 1$, does $\|a-b\|\leq 1$ hold? On Murphy's C*-algebra and operator theory page 89 it says Now suppose that $a$ is positive and $\|a\| \leq 1$. Then $u_\lambda - a$ is hermitian and $\| u_\lambda -a\|\leq 1$. Where $u_\lambda$ is an arbitrary approximate unit in a C*-algebra, which means $u_\lambda$ is a positive element satisfying $\|u_\lambda\|\leq 1$. Could you tell me why is $\| u_\lambda -a\|\leq 1$ ? Thanks. AI: Assertion: $\|a-b\|\leq \text{max}\{\|a\|,\|b\|\}$ holds for positive $a$,$b$. Add a unit if there isn't one. Since $-\|b\|\leq -b\leq a-b \leq a\leq \|a\|$ we have $-\|b\|\leq a-b\leq \|a\|$. By applying Gelfand transform to $C^*(a-b,1)$ we see $\|a-b\|\leq \text{max}\{\|a\|,\|b\|\}$.
H: Given $f(x)$ is continuous on $[0,1]$ and $f(f(x))=1$ for $x\in[0,1]$. Prove that $\int_0^1 f(x)\,dx > \frac34$. Let $f$ be a continuous function whose domain includes $[0,1]$, such that $0 \le f(x) \le 1$ for all $x \in [0,1]$, and such that $f(f(x)) = 1$ for all $x \in [0,1]$. Prove that $\int_0^1 f(x)\,dx > \frac34$. Here's all that I have: from the Mean Value Theorem, we have some $c\in[0,1]$ and $a$ such that $$a=f(c)=\int_0^1 f(x)dx.$$ By the Extreme Value Theorem, there exist some $m$, $n\in[0,1]$ such that $$f(m)\ge f(x)\ge f(n).$$ I'm stuck here. Is this the right approach? Where do I go from here? I also noticed that the very fact that $f(f(x))=1$ shows that there is some $x$ such that $f(x)=1$, because the range of $f$ lies in the domain of $f$ (which I'm still trying to understand; I know what it means, I'm just trying to take it in). AI: $f(1)=f(f(f(1)))=(f\circ f) (f(1))=1$ $f([0,1])=[a,1]$ for some $a >0$, since the image is connected (hence an interval ending at $1$) and compact (hence the interval is closed), while obviously $f([a,1])=\{1\}$, so $a >0$ (if $a=0$, then $f([0,1])=[0,1]$ would have to equal $\{1\}$). But now on $[0,a]$, $f(x) \ge a$, so $\int_0^1f(x)dx=\int_0^af(x)dx+\int_a^1f(x)dx \ge a^2+1-a \ge 3/4$, and we cannot have equality since then $a=1/2$; because $f(1/2)=1$ and $f(x) \to 1$ as $x \to 1/2, x<1/2$, $f$ cannot be identically $1/2$ on $[0,1/2)$, and it is bigger on at least a small interval near $1/2$. Note that by choosing that interval very small and making $f$ linear there (and $1/2$ before, $1$ after) we can get the integral $3/4+\epsilon$, so the result is sharp. Done!
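The sharpness construction at the end is easy to realize concretely. A sketch of my own (with NumPy): take $a=1/2$, let $f$ equal $1/2$ on $[0,1/2-\delta]$, rise linearly to $1$ on $[1/2-\delta,1/2]$, and equal $1$ on $[1/2,1]$; then $f(f(x))=1$ everywhere and $\int_0^1 f = 3/4 + \delta/4$.

```python
import numpy as np

delta = 0.01

def f(x):
    # 1/2 on [0, 1/2-delta], linear up to 1 on [1/2-delta, 1/2], then 1 on [1/2, 1]
    return np.clip(0.5 + (x - 0.5 + delta) / (2 * delta), 0.5, 1.0)

x = np.linspace(0.0, 1.0, 1_000_001)
assert np.allclose(f(f(x)), 1.0)   # the composition is identically 1
print(f(x).mean())                 # ≈ 0.7525 = 3/4 + delta/4
```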
H: Group Characters and Centralizer of an element Let $G$ be a finite group. $\hat{G}$ denotes the set of all irreducible representations of $G$ over $\mathbb{C}$. $C_G(h)$ denotes the centraliser of an element $h\in G$. I want to prove $|C_G(h)|=\sum\limits_{\chi \in \hat{G}}\chi(h)\overline{\chi (h)}$. I don't know how to get this. Please help me with this. AI: One can approach this by phrasing the character orthonormality in terms of matrices. So we will view characters as complex-valued functions on conjugacy classes of $G$, and note that the orthonormality of irreducible characters can be expressed as: \begin{equation}\sum_{C}|C|\chi_V(C)\overline{\chi_W(C)}=|G|\delta_{V,W}\end{equation} where $V$, $W$ are irreducible representations, and $C$ runs over the conjugacy classes of $G$. Let $M$ be the character table, viewed as a $\text{#}C\times \text{#}C$ matrix. Then if $D$ is the diagonal matrix with $(C,C)$ entry equal to $|C|$, this equality becomes \begin{equation} MDM^*=|G|I\end{equation} where $M^*$ is the conjugate transpose of $M$. So rearranging this, we obtain \begin{equation}M^*M=|G|D^{-1}\end{equation} Noting that $\frac{|G|}{|C|}$ is the cardinality of the centraliser of an element of $C$ yields your result by comparing entries of these two matrices.
H: Why is $\mathbb{Z}_{6}$ a free $\mathbb{Z}_{6}$-module? $\mathbb{Z}_{6}=\{\bar{0},\bar{1},\bar{2},\bar{3},\bar{4},\bar{5}\}$. I know that in order for a module to be free, it has to have a basis. Further, I know that if $b_{1},...,b_{n}$ is a basis for $\mathbb{Z}_{6}$, then $r_{1}b_{1}+...+r_{n}b_{n}=0$ implies $r_{1}=...=r_{n}=0$. I do not know how I can use this to show that $\mathbb{Z}_{6}$ is a free $\mathbb{Z}_{6}$-module. AI: Observe that $\;\{\overline1\}\;$, or $\;\{\overline 5\}\;$, is a free basis of $\;\Bbb Z_6\;$ over itself, because $\;m\cdot\overline 1=\overline 0\iff m\equiv 0\pmod6\;$ (and the same is true with $\;\overline5\;$). This is the reason $\;\Bbb Z_6\;$ is a free module of rank $\;1\;$ over itself.
H: lamellar field if line integral zero I'm reading in the textbook Electromagnetics with Applications and it says (page 18) that if the line integral around a closed path is zero, $$\oint_C\vec{F}\cdot d\vec{L}=0$$ then the vector field $\vec{F}$ is lamellar (conservative). Specifically, the book says, "Any field for which the line integral around a closed path is zero is called a conservative or lamellar field." I read this wiki page and it makes sense. Here is my question: Is the above quote strictly correct (specifically for physical systems like electromagnetic fields)? In other words, is it possible to find a path such that a line integral is zero for a field that is not a conservative field? Just trying to get this locked down. Thanks. AI: Given a path integral of the form: $$\oint_C\vec{F}\cdot d\vec{L}$$ Letting $\vec{F}=\langle P(x,y),Q(x,y) \rangle$, we can rewrite as follows: $$\oint_C Pdx +Qdy$$ We can then invoke Green's theorem: $$\oint_C Pdx +Qdy=\iint_R\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}dA$$ As you can see, if the vector field is conservative, that is if $\frac{\partial Q}{\partial x}=\frac{\partial P}{\partial y}$, the integral is clearly zero. However we want to know if there exists a vector field that is not conservative, where the above integral is zero. Define a function: $$f(x,y)=\frac{\partial Q}{\partial x}-\frac{\partial P}{\partial y}$$ And then plug that into the integral and set it equal to zero: $$\iint_Rf(x,y)dA=0$$ If we let $f(x,y)=x^3y^2$ and the region $R$ be a square of side length 2 centered at the origin, it is clear that this integral goes to zero (the integrand is odd in $x$). But can we write that function in terms of the partial derivatives? Yes, say we let our $Q$ and $P$ functions equal the following: $$P(x,y)=x$$ $$\frac{\partial P}{\partial y}=0$$ $$Q(x,y)=\frac{x^4y^2}{4}$$ $$\frac{\partial Q}{\partial x}=x^3y^2$$ Therefore we have found a vector field $\vec{F}=\langle x,\frac{x^4y^2}{4} \rangle$ that is not conservative, which can yield a closed path integral equal to $0$.
H: Prove the following by changing the order of integration This question is from my university paper: $$\int_{0}^{\infty} \int_{0}^{x} x e^{\frac{-x^2}{y}}\; dy\;dx = \frac{1}{2}$$ I tried using DUIS method but integration is getting complicated $f(x) = \int_{0}^{x} x e^{\frac{-x^2}{y}}\; dy$ $ \frac{d}{dx}f(x) = \int_{0}^{x} \frac{d}{dx} x e^{\frac{-x^2}{y}}\; dy $ $ \frac{d}{dx}f(x) = \int_{0}^{x} \frac{e^{-x^2}(1-2x^2)}{y}\; dy $ $ \hspace{75px}= e^{(-x^2)} (1 - 2 x^2) \{\log(x)-\log(0)\}$ $ \hspace{75px}= e^{(-x^2)} (1 - 2 x^2)\;\log(x)$ $\int \frac{d}{dx}f(x) = \int{e^{(-x^2)} (1 - 2 x^2)\;\log(x) \;dx} $ First we must integrate with respect to $y$ and that's where I'm getting stuck. Is there any other method to solve this? What's going wrong? AI: Why don't you do what is written in your question's title?: $$\int_0^\infty\int_0^x xe^{-x^2/y}\,dy\,dx=\int_0^\infty\int_y^\infty xe^{-x^2/y}\,dx\,dy=\int_0^\infty\left.\left(-\frac y2\right)e^{-x^2/y}\right|_y^\infty dy=$$ $$=-\frac12\int_0^\infty y\left(0-e^{-y}\right)\,dy=\frac12\int_0^\infty ye^{-y}dy=\left.\frac12\left[-ye^{-y}\right]\right|_0^\infty+\frac12\int_0^\infty e^{-y}dy=$$ $$\left.\frac12\left(-e^{-y}\right)\right|_0^\infty=\frac12$$
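A quick numerical confirmation of the stated value (my own addition, via SciPy's `dblquad`, whose integrand takes arguments in the order `(y, x)` with the `y` integral innermost):

```python
import numpy as np
from scipy.integrate import dblquad

# integrate x * exp(-x^2 / y) over 0 < y < x, 0 < x < infinity
val, err = dblquad(lambda y, x: x * np.exp(-x**2 / y),
                   0, np.inf,        # outer limits for x
                   0, lambda x: x)   # inner limits for y: from 0 to x
print(val)  # ≈ 0.5
```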
H: Definition of convergence of real sequence We say a sequence converges to a real number $l$ if, given any $\varepsilon$ greater than $0$, there exists a natural number $N$ such that after the first $N-1$ terms, all the remaining terms of the sequence lie in the $\varepsilon$-symmetric neighborhood of $l$, i.e. $|x_n - l| < \varepsilon$ after the first $N-1$ terms. Here, instead of "less than", can I take "less than or equal to"? Is there any problem if I do so? AI: Yes, this is an equivalent definition. Consider the conditions: (1) For all $\varepsilon > 0$, we eventually have $|x_n - l| < \varepsilon$. (2) For all $\varepsilon > 0$, we eventually have $|x_n - l| \leq \varepsilon$. It is obvious that condition (1) implies condition (2), for the same $\varepsilon$. Conversely, assume (2) is true and let us prove (1) for some given value $\varepsilon > 0$. This follows immediately from the fact that (2) is true with $\varepsilon$ replaced by $\varepsilon/2$.
H: Prove that the closed ball $\overline{B}(0,1) \subseteq \mathcal{C}([0,1], \mathbb{R})$ can't be covered by countably-many compact sets. For $\mathcal{C}([0,1],\mathbb{R})$ the space of continuous functions $f : [0,1] \to \mathbb{R}$, I'm asked to prove that $\overline{B}(0,1) = \{ f \in \mathcal{C}([0,1], \mathbb{R}) : ||f||_\infty \leqslant 1 \}$ can't be covered by countably-many compact sets in $\mathcal{C}([0,1],\mathbb{R})$. But I'm not sure where to even begin. Any ideas on how to start? AI: This actually holds in general for infinite-dimensional Banach Spaces, such as $\mathcal{C}([0, 1], \Bbb{R})$. Suppose $X$ is a Banach Space. If $X$ is covered by countably many compact sets $(K_n)_{n \in \Bbb{N}}$, then because each set is closed, the Baire Category Theorem implies that at least one of the $K_i$s has a non-empty interior. That is, there exists some closed ball $x + rB_X \subseteq K_i$, where $r$ is some positive real number. But then, this implies, $$B_X \subseteq \frac{1}{r}(-x + K_i),$$ which is compact, hence $B_X$ is compact. A compact closed unit ball implies that $X$ is finite-dimensional. Thus, if $X$ is infinite-dimensional, no such compact sets can exist.
H: How to prove $n^{\log n}$ is $\mathcal{O}(2^n)$? I've seen proofs here that help with $n\log n = \mathcal{O}(n^2)$. However, if we take it a step further, how could one prove $n^{\log n}$ is $ \mathcal{O}(2^n)$? We are assuming $n\in\mathbb{N}$. Would it extend to $n\in (0,\infty)$? If we apply limit of $x\rightarrow\infty$, the final limit achieved is $\infty$ itself (after applying exponent rule). So how do I start my logic to make this proof? Any help is greatly appreciated. AI: For simplicity, assuming the logarithm is in base $2$ (it doesn't really matter what base the logarithm is, because they differ by a constant), you can also note that $$\frac{n^{\log n}}{2^n} = \frac{2^{\log^2 n}}{2^n} = 2^{\log^2 n - n}$$ Since the function $2^x$ tends to $0$ as $x \to -\infty$, and since $\log^2 n = o(n)$, it follows that the exponent is $-\Theta(n)$ and namely, it tends to negative infinity. This means that the above ratio tends to zero as $n \to \infty$, so we can indeed conclude that $n^{\log n} = O(2^n)$.
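A quick numerical illustration of the answer above (my addition, plain Python): working in log space, $\log_2\big(n^{\log n}/2^n\big)=\log_2^2 n-n$, which visibly tends to $-\infty$:

```python
import math

# log2 of the ratio n^(log2 n) / 2^n, i.e. (log2 n)^2 - n
for n in [10, 100, 1000, 10000]:
    print(n, math.log2(n) ** 2 - n)   # heads to -infinity, so the ratio -> 0
```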
H: The convergence of a sequence. I'd like to show that $a_n=-\ln(n)+\sum_{k=1}^{n} {{1}\over {k}} $ converges to some $\alpha\in[0,1]$. I found that the sequence is monotone decreasing, by mathematical induction. So I'd like to show that $a_n\in[0,1]$, in order to conclude the above statement. But I'm stuck here. How can I show that? Since $a_n$ is monotone decreasing and $a_1=1$, I have $a_n\in(-\infty,1]$, but I can't find how to show that it is greater than or equal to $0$. AI: Hint: Consider the map $f(x)=1/x$ on $[1,n]$. Then $\int_1^n f(x)\,dx\leq 1+1/2+1/3+\cdots+1/n$. (This can be seen from the fact that the upper sum is greater than or equal to the actual area under the curve.) Since the integral equals $\ln(n)$, this gives $a_n\geq 0$. Hope you can follow from here.
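Numerically (my addition, plain Python), the sequence indeed decreases from $1$ and stays above $0$, approaching the Euler–Mascheroni constant $\gamma\approx 0.5772$:

```python
import math

h = 0.0
for n in range(1, 100001):
    h += 1.0 / n                       # h is the n-th harmonic number
    if n in (1, 10, 100, 1000, 10000, 100000):
        print(n, h - math.log(n))      # decreasing, stays in [0, 1], -> 0.5772...
```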
H: Density of certain space in $L^\infty (\mathbb{R},\Sigma, \mu),$ for some finite positive measure $\mu.$ Let $\mu$ be a finite positive measure on $\mathbb{R}.$ Consider the measure space $(\mathbb{R},\Sigma, \mu),$ where $\Sigma$ is the collection of all Borel sets. Q: 1) Is it true that the space of all compactly supported continuous functions is dense in $L^\infty (\mathbb{R},\Sigma, \mu)?$ 2) Is it true that the space of all bounded continuous functions is dense in $L^\infty (\mathbb{R},\Sigma, \mu)?$ AI: Suppose $\mu$ and Lebesgue measure are mutually absolutely continuous. (For example, $\mu$ could be the standard Gaussian measure). If a sequence $(f_n)$ of continuous functions converges in $L^{\infty} (\mu)$ then $\|f_n-f_m\|_{\infty} \to 0$. The $L^{\infty}$ norm of a continuous function is the same as the supremum norm. Hence $f_n$ converges uniformly and the limit is necessarily continuous. Hence the answer to both of your questions is NO. (Discontinuous functions cannot be approximated by continuous functions in $L^{\infty}$ norm). [Suppose $h$ is continuous and $|h(x)| \leq M$ almost everywhere. Then $|h(x)| \leq M$ for every $x$. This is because the complement of a set of measure $0$ is dense. If $h$ is continuous and $|h(x)| \leq M$ for all $x$ in a dense set then the same holds for all $x$ by continuity].
H: Does existence of a left or right inverse imply existence of inverses? Suppose $G$ is a set with a binary operation such that: (Associativity) For all $a, b, c \in G$, $(ab)c = a(bc)$. (Identity) There is $e \in G$ such that, for all $a \in G$, $ae = ea = a$. (Left inverse or right inverse) For all $a \in G$, $ba = e$ for some $b \in G$ or (note the difference with and) $ac = e$ for some $c \in G$. Does this imply that every element $a \in G$ has an inverse, i.e. an element that is both a left and right inverse? That is, for all $a \in G$, is there $a'$ such that $aa' = a'a = e$? In other words, is $G$ a group? AI: The answer is yes. Suppose $a$ has a right inverse but not a left inverse: $ab=e$. Then let $f=ba$. We have $f^2= baba=ba=f$ and $f\ne e$ (if $f=ba$ were equal to $e$, then $b$ would be a left inverse of $a$). The element $f$ has a left or right inverse $c$. Suppose $fc=e$. Then $f=fe=ffc=fc=e$, so $f=e$, a contradiction. If $cf =e$ then $f=ef=cff=cf=e$, again a contradiction.
H: The tangent to the curve $y = ax^3$ at the point $(5, b)$ has a gradient of $30$. Find the values of the constants $a$ and $b$. The tangent to the curve $y = ax^3$ at the point $(5,b)$ has a gradient of $30$. Find the values of the constants $a$ and $b$. My working so far: $$\frac{dy}{dx} = 3ax^2$$ tangent: $y = mx + c$ substituting $3ax^2$ (derivative) as gradient we get $75a=30, a=0.4$ How do you solve for $b$ from there? AI: What you did is fine. Then, $b=\frac25\times5^3=50$.
H: Remainder when $\prod_{n=1}^{100}(1- n^{2} +n^{4})$ is divided by $101$ What is the remainder when the expression $$\prod_{n=1}^{100}(1- n^{2} +n^{4})$$ is divided by $101$? If $\zeta=\dfrac{-1+\sqrt{-3}}{2}$, then $$1-n^2+n^4=(1-n+n^2)(1+n+n^2)=(-\zeta-n)(-\bar{\zeta}-n)(\zeta-n)(\bar{\zeta}-n).$$ We then have $$\prod_{n=1}^{100}(1-n^2+n^4)\equiv \prod_{n=1}^{100}\big((-\zeta-n)(-\bar{\zeta}-n)(\zeta-n)(\bar{\zeta}-n)\big)\pmod{101}\,.$$ Since $$\prod_{n=1}^{100}(x-n)\equiv x^{100}-1\pmod{101},$$ we obtain $$\prod_{n=1}^{100}(1-n^2+n^4)\equiv\big((-\zeta)^{100}-1\big)\big((-\bar\zeta)^{100}-1\big)\big(\zeta^{100}-1\big)\big(\bar{\zeta}^{100}-1\big)\pmod{101}\,.$$ Since $\zeta^3=1$ and $\bar{\zeta}^3=1$, we get $$(-\zeta)^{100}=\zeta^{100}=\zeta\text{ and }(-\bar\zeta)^{100}=\bar\zeta^{100}=\bar\zeta\,.$$ Therefore, $$\prod_{n=1}^{100}(1-n^2+n^4)\equiv (\zeta-1)^2(\bar{\zeta}-1)^2=\big((1-\zeta)(1-\bar{\zeta})\big)^2\pmod{101}\,.$$ As $$(x-\zeta)(x-\bar{\zeta})=x^2+x+1\,,$$ we get $$\prod_{n=1}^{100}(1-n^2+n^4)\equiv (1^2+1+1)^2=9\pmod{101}\,.$$ Are there other solutions? How do we solve this problem without resorting to complex numbers? AI: Modulo $101$, the set of values $0^3, 1^3,\dots,100^3$ is a permutation of $0,1,2,\dots,100.$ This is because $101$ is prime and $3$ is not a divisor of $100.$ But $$n^4-n^2+1=\frac{n^6+1}{n^2+1}$$ Now, if $n=10,91$ then $n^2+1$ is divisible by $101.$ The cube map permutes the set $\{10,91\}$ (if $n^2\equiv-1$ then $(n^3)^2=(n^2)^3\equiv-1$), hence it also permutes the remaining nonzero residues, so $\prod_{n\neq 10,91}(n^6+1)\equiv\prod_{n\neq 10,91}(n^2+1)\pmod{101}$ and: $$\begin{align}\prod_{n=1}^{100} (n^4-n^2+1)&=(10^4-10^2+1)(91^4-91^2+1)\prod_{n\neq 10,91}\frac{n^6+1}{n^2+1}\\ &\equiv (10^4-10^2+1)((-10)^4-(-10)^2+1)\pmod{101}\\ &\equiv 3\cdot 3=9\pmod{101} \end{align}$$ This works more generally if $p\equiv 5\pmod {12}:$ $$\prod_{n=1}^{p-1}\left(n^4-n^2+1\right)\equiv 9\pmod p$$ If $p\equiv 11\pmod{12},$ the remainder is $1.$ I think when $p\equiv 1\pmod{12},$ the remainder is $0.$ Not sure about $p\equiv 7\pmod{12}.$
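A brute-force check of the claimed remainder (my addition, plain Python):

```python
prod = 1
for n in range(1, 101):
    prod = prod * (1 - n**2 + n**4) % 101   # keep reducing mod 101
print(prod)                                  # 9
```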
H: How to prove that if f is continuous that the exponential function satisfies the functional equation? Prove that if $f : \mathbb{R}→\mathbb{R}$ is continuous with the property $f(x+y)=f(x)·f(y)$ for all $x,y \in\mathbb{R}$ and $f(1) = a > 0$, then $f(x) = e^x$. I'd be grateful for any help! AI: $f(x)=f(0+x)=f(0)f(x)$ so $f(0)=1$ or $f(x)=0$. Since $f(1)>0$ the latter is ruled out. For $n\in\mathbb{N}$, $f(n)=f(1+\cdots+1)=f(1)^n=a^n$ by induction. $1=f(0)=f(n-n)=f(n)f(-n)$ hence $f(-n)=f(n)^{-1}$. $f(\frac{1}{n}+\cdots+\frac{1}{n})=f(1)=a$ implies $f(\frac{1}{n})=a^{1/n}$. Hence $f(\frac{m}{n})=a^{m/n}$. Now, since $f$ is assumed to be continuous, one can continue to the conclusion that $f(x)=a^x$ by taking rational approximations $p_n\to x$, $f(x)=\lim_{n\to\infty}f(p_n)=\lim_{n\to\infty}a^{p_n}=a^x$. (In particular, $f(x)=e^x$ exactly when $a=e$.)
H: Are differentiable functions of bounded variation on [0,1] also absolutely continuous? There was a similar question on this matter, but "continuous differentiable" is too strong an assumption, so I was wondering if this can be relaxed to "differentiable". AI: I got an answer referencing Rudin's Real and Complex Analysis, Thm 7.21, which the author deleted later because of a mistake. I am grateful that he led me to this theorem. Assume $f\in BV([0,1])$, then $f=f_1-f_2$, where $f_i$ are bounded increasing, $f'_i$ exists a.e., and $f'=f'_1-f'_2$ a.e. Since $\int_0^1 f'_i\leq f_i(1)-f_i(0)$, we know $f'_i\in L^1([0,1])$ and thus $f'\in L^1([0,1])$. Then we can apply Rudin 7.21 (i.e. if $f$ is differentiable at every point and $f'\in L^1([0,1])$, then $f$ is absolutely continuous).
H: $\lim_{x \to 0}\left(\frac{\sin^2(x)}{1-\cos(x)}\right)$ without L'Hopital's rule I'm trying to calculate $\lim_{x \to 0}\left(\frac{\sin^2(x)}{1-\cos(x)}\right)$ without L'Hopital's rule. The trigonometrical identity $\sin^2(x) = \frac{1-\cos(2x)}{2}$ doesn't seem to lead anywhere. I also attempted to calculate using $\cos^2(x) + \sin^2(x) = 1$ without success. Any ideas? AI: Consider the following: $$\sin^2(x) = 1-\cos^2(x) = (1-\cos(x))(1+\cos(x))$$ You can do the cancellation and BABAM! Evaluating the limit after that is easy :D
H: 5 digit even number and different digits I have a machine that creates different numbers with $5$ digits. Of course the first digit can't be $0$. I would like to choose one of them; what is the probability that the number is even? And what is the probability that the number is even and all the digits differ? My solution for even number: $9* 10 * 10 * 10 * 5!/9 * 10 * 10 * 10 * 10$ and for even with differing digits: $9 * 9 * 8 * 7 * 5!/9 * 10 * 10 * 10 * 10$ AI: The first expression should be $\frac{9*1000*5}{9*10000}$. The second expression should be $\frac{9*8*7*6+8*8*7*6*4}{90000}$. If the last digit is $0$ then there are $9*8*7*6$ choices. If the last digit is one of $2,4,6,8$, then there are $8*8*7*6*4$ choices.
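The corrected counts are easy to confirm by brute force (my addition, plain Python):

```python
evens = 0
distinct_evens = 0
for n in range(10000, 100000):        # all 5-digit numbers
    if n % 2 == 0:
        evens += 1
        if len(set(str(n))) == 5:     # all five digits differ
            distinct_evens += 1
print(evens / 90000)                  # 0.5 = (9*1000*5)/(9*10000)
print(distinct_evens / 90000)         # 13776/90000 ≈ 0.1531
```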
H: Cumulative Probabilities: What am I missing here? Sorry if this question is a bit lower-level, yet complicated, but I feel like there is something wrong and I cannot put my finger on it. This scenario is adapted from something I read elsewhere on the internet, but simplified to get the basic point across. Let's say I have a system where I have different tiered shapes. Triangles can be bought from the store for 1 dollar. To make a square, you must combine two triangles. However, this combining has a 20% chance to be successful, in which case the 2 triangles combine to become 1 square. It has an 80% chance to fail, in which case the 2 triangles combine to form 1 failed triangle. What's the average cost for a square? You would buy 2 triangles at a time until you succeed. Then you can sell the excess failed triangles from failed attempts for 1 dollar each. However, you cannot combine the triangles from failed attempts to save money (let's just say they are failed and can only be combined once, to make the situation simpler, but again, they can be sold to the store for 1 dollar). Person 1 says the following: Well since it is a 20% success rate, then on average I should have 5 combinations needed to get 1 square. 5 combinations cost 10 triangles, and I would be left with 1 square and 4 failed triangles. I bought 10 triangles for 10 dollars, but then I sold the 4 failed ones for 4 dollars, so the net cost is 6 dollars per square. Person 2 says the following: Since this is a cumulative probability of independent events we can predict the probability of getting a square after $n$ combinations using $1-(1-x)^n$ where $x$ is the probability of success per event and $n$ is the number of attempts. This handy formula is derived from multiplying the probability of failure after each event. We want the average number of times to combine, so we want to find the $n$ that corresponds to 0.5 probability (meaning that after $n$ combinations, there will be a 50% chance of getting at least 1 success). Since the success rate is 20%, we do $0.5=1-(1-0.2)^n$ and solve for $n$. We find $n\approx3.106$. This means we need on average about 3.106 combinations, or 6.212 triangles to be bought. Also, I can sell the failed triangles. Since all but one of the combinations fail, then if there are 3.106 combinations needed on average, then we would have 2.106 failed combinations, which can be sold for 2.106 dollars. We do 6.212-2.106 to get the net cost of 4.106 dollars per square. Who is more correct? For the record I am pretty certain that person 1's interpretation of average needed combinations is naive and incorrect, but I am not sure if person 2's calculation for the number of failed triangles is correct. Even more confusing, other people have verified person 1's concept using computer programs running over millions of trials, finding that the result is always 6 dollars. Thus, person 1's interpretation is generally accepted in this community. This, again, seems impossible considering that the average needed combination calculation seems to be naive and incorrect. And for fun, here's the full scenario (still adapted): When combining a shape, there is a 50% chance to return a failed shape of the same tier, a 20% chance to upgrade 1 tier, and a 30% chance to return a failed shape of 1 tier lower. If you combine 2 triangles, since triangles are the lowest tier, it has an 80% chance to give a failed triangle and a 20% chance to give a square.
If I were to combine two squares, there's a 50% chance to return a failed square, a 20% chance to return a pentagon, and a 30% chance to return a failed triangle. The same goes for combining pentagons to potentially get a hexagon. Find the average cost per square, assuming that triangles are worth 1 dollar. Then use that average cost to calculate the average cost per pentagon (failed squares can be sold for the average cost per square) and average cost per hexagon (failed pentagons can be sold for the average cost per pentagon, just like squares). Thank you so much. I hope we can get to the bottom of this. AI: Let $T$ be the random number of pairs of triangles one must buy and combine in order to obtain the first square. What is the probability that $T = 1$, that is to say, one is successful with the first pair of triangles purchased? This is simply $p = 0.2$ as you specified. How much did it cost? Well, it cost $2T = 2$ dollars. What is the probability that $T = 2$? This means the first combination failed and the second succeeded. This happens with probability $(1-p)p = (0.8)(0.2) = 0.16$, because the outcome of each trial is independent of the outcome of the previous trial. You spent $2T = 4$ dollars for the four triangles, and resold the $T - 1 = 1$ failed triangle created in the first trial, for a net cost of $2T - (T - 1) = T + 1 = 3$ dollars. In general, the probability that it takes exactly $T = t$ pairs of triangles to obtain the first square is $$\Pr[T = t] = (1-p)^{t-1} p, \quad t = 1, 2, 3, \ldots,$$ because in order for the first success to occur on the $t^{\rm th}$ trial, all the previous $t-1$ trials must be failures, each with probability $1-p$, and then the $t^{\rm th}$ trial succeeds with probability $p$. The net cost of this outcome is $$C = 2T - (T-1) = T+1$$ dollars: it costs $2T$ dollars to buy all the pairs of triangles, then since only the last pair is successful, there are $T-1$ failed triangles to sell back. On average, how many trials are needed to obtain the first square? This is the expected value of $T$, which is given by $$\operatorname{E}[T] = \sum_{t=1}^\infty t \Pr[T = t] = 1p + 2(1-p)p + 3(1-p)^2 p + 4(1-p)^3 p + \cdots.$$ It is not too difficult to prove that this has value $1/p$. So in the case that $p = 0.2$, it takes on average $5$ trials to obtain the first square. The average cost is therefore $$\operatorname{E}[C] = \operatorname{E}[T] + 1 = 6$$ dollars. If you run a simulation by repeatedly buying triangles and counting how many pairs it takes to get the first square, each time keeping track of this number, you might obtain something like $$\{4, 1, 1, 10, 3, 4, 1, 3, 4, 2, 3, 5, 12, 5, 3, 3, 12, 2, 2, 5, \ldots\}$$ and then if you took the average of these results, you'd find it would be very close to $5$--and the cost would be close to $6$. But you'd need to do this many, many times. What does the other method describe? Well, it answers the question, "what is the fewest number of pairs would I need to buy in order to have at least a 50% chance of getting a square?" This of course is not the same thing as the average number of pairs one needs to buy to obtain the first square. 
Instead, it is asking for the smallest $m$ such that $$\Pr[T \le m] \ge 0.5.$$ To solve this, we have to compute $$\Pr[T \le m] = \sum_{t=1}^m (1-p)^{t-1} p = 1 - (1-p)^m,$$ which aligns with your formula and makes intuitive sense, since the probability that it takes strictly more than $m$ pairs is equal to the probability that the first $m$ pairs are failures; i.e., $$\Pr[T > m] = (1-p)^m,$$ hence the probability the first success occurs in $m$ or fewer combinations is $1$ minus this. Then we want to solve $$1 - (1-0.2)^m \ge 0.5,$$ or $$m \ge \frac{\log 2}{\log \frac{5}{4}} \approx 3.10628.$$ Since it is nonsensical to require a noninteger number of combinations, the smallest $m$ that guarantees a probability of at least $0.5$ of a success is $m = 4$, and the probability of success within $4$ trials is $\Pr[T \le 4] = 1 - (1-0.2)^4 \approx 0.5904$. But the number $m = 3.10628$ is not some kind of "average" number of trials, at least not in the same sense as "average" was used in the first calculation. Such a value is an artifact of treating an equation that only has meaning for positive integer values of the variable $m$ as if it is valid for all real numbers.
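A Monte Carlo version of the simulation described in the answer above (my addition, plain Python) confirms the 6-dollar average:

```python
import random

def cost_of_square(p=0.2):
    t = 0
    while True:
        t += 1                     # buy and combine one pair of triangles
        if random.random() < p:
            break                  # first success on trial t
    return 2 * t - (t - 1)         # 2T dollars spent, T-1 failed triangles resold

n = 200_000
print(sum(cost_of_square() for _ in range(n)) / n)   # close to 6
```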
H: How do I find the area of the region bounded by the curve and the tangent? The diagram shows a sketch of the graph of the curve $\displaystyle y=\frac{1}{4}x-x^3$ together with the tangent to the curve at the point $A(k, 0)$. Find the area of the region bounded by the curve and the tangent, giving your answer as a fraction in its lowest terms. So far, Equation of tangent: $y = -x/2 -1/4$ Equation of Curve = $\frac{1}{4}x -x^3$ Coordinates of point $A=(-0.5,0)$ Coordinates of point $B=(1,-0.75)$ With that, I found the area of curve using this Upper limit $= 1$ Lower limit $= -1/2$ $$\int{\frac{1}{4}x-x^3}$$ $$=[x^2/8 - x^4/4]$$ $$=-1/8 - 1/64 = 9/64$$ The solution to this question is $\frac{27}{64}$. How to solve this question? AI: You are correct in that $A=(-0.5,0)$ and $B=(1,-0.75)$. Given the equation of the curve is $\frac{1}{4}x-x^3$, the equation of the tangent is $y=-\frac{x}{2}-\frac{1}{4}$. The integral you should evaluate should therefore be: $$\int_{-0.5}^{1}\left[\left(\frac{1}{4}x-x^3\right)-\left(-\frac{x}{2}-\frac{1}{4}\right)\right]dx$$ $$=\int_{-0.5}^{1}\left(-x^3+\frac{3x}{4}+\frac{1}{4}\right)dx$$ You subtract the equation of the tangent from the equation of the curve since the tangent creates a sort of bound to the curve. Your mistake was that you only evaluated the integral of the curve, without considering the difference between the curve and the tangent over the interval. It seems that you are completely able to evaluate these sorts of integrals. Can you continue from here to arrive at $\frac{27}{64}$?
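The final integral can be checked symbolically (my addition, assuming SymPy is available):

```python
from sympy import symbols, integrate, Rational

x = symbols('x')
curve = Rational(1, 4) * x - x**3
tangent = -x / 2 - Rational(1, 4)
print(integrate(curve - tangent, (x, Rational(-1, 2), 1)))   # 27/64
```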
H: Why does $\left|\frac{\sin(n+1)}{2^{n+1}}+...+\frac{\sin(n+p)}{2^{n+p}}\right|\leq\frac{|\sin(n+1)|}{2^{n+1}}+...+\frac{|\sin(n+p)|}{2^{n+p}}$ hold? I trying to understand a proof (using Cauchy's general criterion of convergence) of why the series $\sum_{n=1}^{\infty }\frac{\sin (n)}{2^{n}}$ converges . At the beginning, the following inequality is expressed: $$\left | \frac{\sin (n+1)}{2^{n+1}}+...+\frac{\sin (n+p)}{2^{n+p}} \right |\leq \frac{|\sin (n+1)|}{2^{n+1}}+...+ \frac{|\sin (n+p)|}{2^{n+p}}$$ where $n,p$ are natural numbers. Why does this hold? Is the triangle inequality with more than 2 terms on $\mathbb{R}$ a valid fact (from what seems to be the case here) ? AI: Yes. You can deduce the three-term version by using the two-term version twice: $$|x+y+z|\leq |x+y|+|z|\leq |x|+|y|+|z|$$ and you can similarly prove it for any number of terms by induction.
H: I don't understand uniform convergence Assume we have some series of functions defined for any $x\geq0$: $$S(x)=\sum_{n=1}^{\infty} \frac{3x+n}{x+n^{3}}$$ Let's assume uniform convergence. Meaning $$\forall{\epsilon>0}, \exists N \text{ s.t. } \forall{n \geq N}, \forall{p}, \forall{x\geq0}$$ $$ \lvert \frac{3x+(n+1)}{x+(n+1)^{3}} + \frac{3x+(n+2)}{x+(n+2)^{3}} + \cdots + \frac{3x+(n+p)}{x+(n+p)^{3}} \rvert \leq \epsilon $$ Let's pick a specific $n_0 \geq N$ and $p=1$: $$ \lvert \frac{3x+(n_0)}{x+(n_0)^{3}} \rvert = \frac{3x+(n_0)}{x+(n_0)^{3}} \leq \epsilon $$ We know this is false, because $\lim_{x\to\infty}\frac{3x+n_0}{x+n_0^{3}}=3$, which is obviously greater than many $\epsilon$. However, this is the part I don't really understand... If instead of working on $[0,\infty)$ we restrict $x$ to $[0,M]$ for some $M\lt \infty$, then the series is uniformly convergent. Why? What's to stop us from picking $M+1$ and continuing the series? AI: Just asserting that a series of functions is uniformly convergent is meaningless. What has a meaning is to assert that a series of functions converges uniformly (or not) on some given set. Your series does not converge uniformly on $[0,\infty)$, but it does converge uniformly on any bounded subset of $[0,\infty)$. There is no contradiction here.
H: Understanding the monotone convergence theorem I'm stuck at the following point in the proof of this theorem. Monotone Convergence Theorem: If $(f_n)$ is a monotone increasing sequence of nonnegative measurable functions which converges to $f$, then $$\int f d\mu=\lim\int f_n d\mu$$ Since the integral preserves the order of functions, we have $$\int f_n \leq \int f_{n+1} \leq \int f d\mu$$ Then the sequence of integrals is monotone increasing and, in a certain sense, it is bounded by $\int f d\mu$. But what happens if the integral of some $f_n$ is infinite? We would have $$+\infty\leq+\infty\leq\cdots+\infty\leq+\infty$$ Is this correct? I know it has to do with the integral taking values in the extended reals, but I don't see the point in the inequality of infinities. AI: Yes, that is correct. The point is that once the integral is infinite, it stays infinite. So if $\int f_n\,\mathrm{d}\mu=\infty$ for some $n$, then $\int \lim_n f_n\,\mathrm{d}\mu=\infty$. In the extended reals, $+\infty$ is the maximum, so if something is greater or equal than $+\infty$, it must be equal to it.
H: Solve $2f(x)\cdot f(x-8) - 3f(x+12) - 2 = 0$ The function $f:\mathbb{R}\to\mathbb{R}$ is odd and has a period of $4$. On the segment $[0,2]$ the function $f$ is defined as $f(x)= 4x - 2x^2$. Find the set of solutions for the equation: $$2f(x)\cdot f(x-8) - 3f(x+12) - 2 = 0$$ So, here's my attempt: the function having a period of $4$ means that $f(x) = f(x+4) = f(x+8) = f(x+12)$. Also, from the fact that $f$ is odd, $f(x-8)$ just means $-f(x+8) = -f(x)$ (I might be wrong here). Now let $4x-2x^2 = z$. We get the equation: $$2z\cdot(-z)-3z-2=0$$ $$-2z^2-3z-2=0$$ $$2z^2+3z+2 = 0 $$ $D < 0$, so this doesn't have a solution. What am I doing wrong? AI: In your solution, you make the following error $$f(x) \space \text{is odd} \implies f(x-8) = -f(x+8)$$ This is not true; $f$ being odd means that $f(-x) = -f(x)$, hence $f(x-8) = -f(-x+8)$. Now, since the period is 4, what we can say is that $f(x-8) = f(x-4) = f(x)$. Hence, replacing $f(x) = z$, $$2z^2 - 3z-2 = 0$$ $$\implies z = \frac{3 \pm 5}{4} = -\frac{1}{2},2$$ Now is where the oddness of $f$ will come into play. EDIT: Adding this since it is not straightforward how to continue. Since $f$ is only defined on $[0,2]$, you extend it to $[-2,2]$ using oddness. Hence $f(x)$ is defined as $$f(x) = \begin{cases}4x-2x^2 & x\in[0,2] \\ 4x+2x^2 & x \in[-2,0]\end{cases}$$ Now, if you solve on this full domain for the above, you have $$4x+2x^2 = -\frac{1}{2} \implies x = -1 \pm \frac{\sqrt{3}}{2}$$ $$4x - 2x^2 = 2 \implies x = 1$$ Now, since our full function $f$ is a periodic repetition of this, the roots will get repeated as well, shifted left or right by multiples of $4$. Hence the roots will be of the form $$x = \begin{cases}1 + 4k \\ 4k-1 + \frac{\sqrt{3}}{2} \\ 4k - 1 - \frac{\sqrt{3}}{2}\end{cases}$$
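A numerical spot-check of the three root families in the answer above (my addition, plain Python; the helper `f` below is my own implementation of the periodic odd extension):

```python
import math

def f(x):
    t = (x + 2) % 4 - 2               # reduce to the fundamental period [-2, 2)
    return 4*t - 2*t**2 if t >= 0 else 4*t + 2*t**2

def lhs(x):
    return 2*f(x)*f(x - 8) - 3*f(x + 12) - 2

for x in [1, 5, -1 + math.sqrt(3)/2, -1 - math.sqrt(3)/2]:
    print(x, round(lhs(x), 9))        # all ≈ 0 (x = 5 = 1 + 4 checks periodicity)
```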
H: On the functoriality of assigning a simplicial complex to its Stanley-Reisner ring If $k$ is a field and $\Delta$ a finite simplicial complex with vertex set $x_1, \ldots, x_n$, the Stanley-Reisner ideal of $\Delta$ is $$I_\Delta := \left\langle \prod_{i \in S}x_i : S \not \in \Delta \right\rangle \subset k[x_1, \ldots, x_n].$$ There is a bijective correspondence between simplicial complexes on a finite set $x_1, \ldots, x_n$ and monomial ideals of $k[x_1, \ldots, x_n]$ given by $\Delta\leftrightarrow I_\Delta$. The Stanley-Reisner ring of $\Delta$ is $k[\Delta] := k[x_1, \ldots, x_n]/I_\Delta$. Are these constructions functorial? Concretely, if $f : \Delta_1 \to \Delta_2$ is a simplicial map between finite simplicial complexes, does this induce a $k$-algebra morphism between $k[\Delta_1]$ and $k[\Delta_2]$? AI: To expand on my comment, yes this correspondence can be made functorial, but is a contravariant functor. Proposition (See e.g. Proposition 3.1.5 cf. this answer) Let $f:\Gamma \to \Delta$ be a simplicial map where $\{1,2,\dots,m\}$ and $\{1,2,\dots,n\}$ are the vertex sets of $\Gamma$ and $\Delta$ respectively. Define the map $f^*:k[x_1,\dots,x_n] \to k[y_1,\dots,y_m]$ on generators by $f^*(x_i)=\displaystyle \sum_{j \in f^{-1}(i)} y_j$. Then $f^*$ induces a homomorphism $k[\Delta] \to k[\Gamma]$. For example, let $\Gamma=\{\{1,\},\{2\}\}$ and $\Delta=\{\{1\}\}$, with simplicial map $f:\Gamma \to \Delta$ given by $f(1)=f(2)=1$. Note this is the example mentioned by Angina Seng in their answer. Then $k[\Delta]=k[y]$ and $k[\Gamma]=k[x_1,x_2]/(x_1x_2)$. The map $f^*$ induces the map $k[\Delta] \to k[\Gamma]$ given by $y \mapsto x_1+x_2$.
H: Curve tangent line A curve with the equation $y=x^2-x+1$ has two tangent lines $a$ and $b$ that intersect at $x=1$; what is $y$? Can I determine $y$ when $a$ and $b$ are perpendicular, i.e. when the product of the gradients of $a$ and $b$ is $-1$? I tried using the second point, but knowing that they have the same gradient means they won't intersect, so it cannot be determined from that. From the first point, I know that the gradient of the tangent lines must be $2x-1$, and the equation for a line is $y=mx+b$. If both are perpendicular, then $m_a × m_b = -1$. So does that mean I could make it $(2x-1)(2x-1)=-1$, because both lines have the same gradient? I tried making the gradient for line $a$ equal to $2x-1$ and the gradient for line $b$ equal to $-1/(2x-1)$, but they intersect at $x=1$. I'm stuck right there; is there a way of using either one to determine the $y$ at which both lines intersect? AI: Let $A$ be the $x$-coordinate of the point of tangency of the line $a$, then its equation is $$y=(2A-1)(x-A)+A^2-A+1.$$ Similarly for $b$, $$y=(2B-1)(x-B)+B^2-B+1.$$ Therefore for $x=1$ we have $$(2A-1)(1-A)+A^2-A+1=y=(2B-1)(1-B)+B^2-B+1$$ and we finally find $$(A-B)(A+B-2)=0.$$ It follows that if $a$ and $b$ are distinct lines then $A+B=2$. Therefore we have infinite pairs of lines $a$ and $b$ that satisfy the given conditions. The $y$-coordinate of their intersection point is $A(2-A)=AB$ (it depends on the pair of lines).
H: Convergence of the Series $ \sum_{n=1}^{\infty} \left(1-(\ln 2)^{1 / n^2}\right) $ Check the convergence of the series $$ \sum_{n=1}^{\infty} \left(1-(\ln 2)^{1 / n^2}\right) $$ My attempt: I feel that $$ 0\leq 1-(\ln 2)^\frac{1}{x^{2}} \leq \frac{1}{x^{2}}~~~ \forall x \in \mathbb R$$ in order to use the direct comparison test, but I am unable to prove it. AI: Hint: $\frac {e^{x}-1} x\to 1$ as $x \to 0$ (by series expansion or L'Hopital's Rule) so $|e^{x}-1| <2|x|$ whenever $|x|$ is sufficiently small. Take $x=\frac 1 {n^{2}} \ln \ln 2$.
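Numerically the partial sums settle down quickly (my addition, plain Python), consistent with the comparison against $\sum 1/n^2$ suggested by the hint above:

```python
import math

ln2 = math.log(2)
s = 0.0
for n in range(1, 200001):
    s += 1 - ln2 ** (1 / n**2)        # term behaves like -ln(ln 2)/n^2 for large n
    if n in (10, 1000, 200000):
        print(n, s)                    # stabilizes (around 0.54)
```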
H: Questions in Theorem 6 of Chapter 6 of Hoffman Kunze Linear Algebra While self-studying Linear Algebra from Hoffman and Kunze, I am unable to understand some deductions in Theorem 6 on page 204 (an image of the relevant passage was attached in the original post). (1) In the 7th line from the bottom I am not able to deduce how $q- q(c_{j})$ equals $( x-c_{j} ) h$, despite the definition of $q$ being clear to me (it's given in the line above). (2) How, in the 3rd line from the bottom, is $p(T)\alpha=0$? And how does that imply that $q(T)\alpha$ belongs to $W$? The equality $p(T)\alpha = (T-c_{j}I ) q(T)\alpha$ is clear but I don't know how they became equal to $0$. Kindly help. AI: (1) Of course, $c_j$ is a root of $q-q(c_j)$. Hence there is some polynomial $h$ such that $q-q(c_j) = (x-c_j)\cdot h$. (2) Since $p$ is the minimal polynomial of $T$, one has $p(T) = 0$ by definition. Moreover, $W$ was defined to be the span of all characteristic vectors of $T$. Here, one has $0 = (T-c_j I)q(T)\alpha$. In other words, $q(T)\alpha \in \mathrm{ker}(T-c_jI)$, which means that $q(T)\alpha$ is a characteristic vector of $T$ to the eigenvalue $c_j$ (or $q(T)\alpha = 0$).
H: A question about a proof of 'Cauchy integral formula' Theorem: Let $U \subset \Bbb C$ be an open set and $f \in H(U) $, i.e. $f $ is holomorphic on $U$. Let $z_0\in U$ be a point and $r>0$ such that $\bar{D}(z_0,r)\subset U $. Then $\forall z \in \bar{D}(z_0,r)$, $$f(z)=\frac{1}{2\pi i}\displaystyle \oint _{\lvert \xi-z_0\rvert=r}\frac{f(\xi)}{\xi -z}d\xi.$$ Proof: Since $U$ is open, we can choose $\varepsilon>0$ such that $D(z_0,r+\varepsilon) \subset U$. Fix a point $z \in D(z_0,r+\varepsilon)$. We define a function $$F(\xi)=\frac{f(\xi)-f(z)}{\xi - z} \;\;\;\;\text{if} \;\;\xi\in D(z_0,r+\varepsilon)-\{z\}$$ and $$F(\xi)=f'(z)\;\;\;\;\text{if} \;\;\xi = z.$$ Then clearly, $$F \in C(D(z_0,r+\varepsilon))\;\cap\; H(D(z_0,r+\varepsilon)-\{ z\}) \;\;\;\; \color{darkorange}{(\star)}$$ Then,$\dots$ (skip the whole proof) My question is: why is $\color{darkorange}{(\star)}$ true? Is subtracting $z$ a necessity so that it guarantees $F$ is holomorphic on the open disk? Or is $F$ holomorphic on $D(z_0,r+\varepsilon)-\{z\}$ without a doubt? We can use the fact that if $F$ is holomorphic on an open disk minus finitely many points (and continuous on the disk), then $F$ has an antiderivative on $D(z_0,r+\varepsilon)$. AI: We know that $F$ is holomorphic on $D(z_0,r+\varepsilon)-\{z\}$ by the quotient rule for differentiation of holomorphic functions, since the denominator is nonzero on this open set. We do not however know whether $F$ is holomorphic at $z$, although we know that $f$ is. It turns out that the function $F$ as defined here is holomorphic, and this fact can be used to derive the Taylor series of holomorphic functions. But the proof of this fact relies on the theorem you are stating itself. This may be proved at a later point in your book; if not, you can read pages 124-125 of Ahlfors's Complex Analysis. So strictly speaking, excluding $z$ isn't a necessity, but at this point in your book that hasn't been proved yet, so $z$ is excluded to avoid circular reasoning.
H: Odds of a specific dice distribution occurring when playing Catan (2d6) So this is messing with me, because I have no idea how I would even go about starting to calculate this. I have played a game of Catan. For those that don't know, it's a board game that involves rolling two D6 per turn. So there are 11 possible outcomes for each roll (2 through 12), but each number has a different probability of occurring. During a game, we had the following dice rolls occur over a 69 turn game: 2: 5 3: 6 4: 5 5: 11 6: 7 7: 9 8: 8 9: 9 10: 0 11: 8 12: 3 This led to the conversation about what are the odds of no 10s occurring (easy to calculate, it's (33/36)^69 = 1 in ~400). This then led to a conversation about how rare any given combination of rolls is. I just wanted to calculate some lower and upper bounds, but I honestly had no idea how to: A: Deal with the fact that the odds of each number are different B: Deal with the fact that [A, A, B] = [B, A, A] So my question is how do I go about calculating the probability of any given combination of numbers? Just getting me started looking in the right place would help a lot. An upper bound is easy. The worst case is rolling all 2s (or 12s), for which the odds are 1/36^69 = 4e-108, but beyond that I'm stuck. A "simpler" solution that I would also be happy with would be the odds of the "perfect" distribution occurring during a 72 move game. So two 2s, four 3s, six 4s, etc. Every number perfectly matching its frequency. This also represents a lower bound for the probability of any given dice combination (in a 72 move game). No idea how hard or complicated this is. But I thought I would ask. If there's somewhere better to ask, just redirect me and I'll ask there! AI: First of all, we need to compute the probabilities, which are: $$ \begin{array}{c|ccccccccccc} i&2&3&4&5&6&7&8&9&10&11&12\\ 36\,p_i&1&2&3&4&5&6&5&4&3&2&1 \end{array} $$ The probability of a specific outcome $n_2$ of '2', $n_3$ of '3' and so on after $n$ rounds ($\sum_i n_i=n$) can be computed as: $$ n!\prod_{i=2}^{12}\frac{p_i^{n_i}}{n_i!}. $$
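Putting the answer's multinomial formula into code (my addition, plain Python with exact fractions) for the 69-turn game described in the question:

```python
from math import factorial
from fractions import Fraction

# exact probabilities of rolling 2..12 with two dice
p = {i: Fraction(6 - abs(7 - i), 36) for i in range(2, 13)}

# observed counts from the 69-turn game in the question
counts = {2: 5, 3: 6, 4: 5, 5: 11, 6: 7, 7: 9, 8: 8, 9: 9, 10: 0, 11: 8, 12: 3}
n = sum(counts.values())              # 69

prob = Fraction(factorial(n))
for i, n_i in counts.items():
    prob *= p[i] ** n_i / factorial(n_i)
print(float(prob))                    # probability of this exact distribution
```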
H: How to calculate $\lim _{x\to \infty }\left(\frac{x^2+3x+2}{x^2\:-x\:+\:1\:}\right)^x$ I am trying to calculate $$\lim _{x\to \infty }\left(\frac{x^2+3x+2}{x^2\:-x\:+\:1\:}\right)^x$$ My initial thought is that it is in exponential form $\left(1+\frac{a}{f(x)}\right)^{f(x)}$. I tried to factor the polynomials $\frac{(x+1)(x+2)}{x(x-1)+1}$ in order to bring it to that form, but had no success. I also tried to rewrite it as follows, but found nothing interesting either: $$e^{x\ln\left(\frac{x^2+3x+2}{x^2-x+1}\right)}$$ Any ideas that don't involve L'Hôpital's rule? AI: Your initial thought is right. Keep thinking. $$\lim _{x\to \infty }\left(\frac{x^2+3x+2}{x^2\:-x\:+\:1\:}\right)^x=\lim _{x\to \infty }\left(1+\frac{4x+1}{x^2\:-x\:+\:1\:}\right)^x=\\ \lim _{x\to \infty }\left[\left(1+\frac{1}{f(x)}\right)^{f(x)}\right]^{\frac{x}{f(x)}}=\exp\left({\lim\limits_{x\to \infty }\frac{4x^2+x}{x^2\:-x\:+\:1\:}}\right)=e^4.$$ where $f(x)=\frac{x^2-x+1}{4x+1}$ and $\lim_\limits{x\to\infty} f(x)=\infty$.
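SymPy confirms the value of the limit computed above (my addition, assuming SymPy is installed):

```python
from sympy import symbols, limit, oo

x = symbols('x')
expr = ((x**2 + 3*x + 2) / (x**2 - x + 1)) ** x
print(limit(expr, x, oo))   # exp(4)
```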
H: If $\exists f : (X,\tau) \rightarrow (\{ 0,1\}, \tau_\text{discr})$ continuous, non constant, $(X,\tau) $ is not connected Let $(X, \tau)$ be a topological space and consider the discrete topology over $\{0,1\}$. Prove that if there exists a continuous non-constant mapping $ f : X \rightarrow \{ 0,1\}$, then $(X, \tau)$ is not connected. I have no idea how to tackle this, any idea? AI: Hint: $X$ is connected if and only if the only subsets that are both open and closed are $X$ and $\varnothing$. Try to find other open and closed subsets from these hypotheses.
H: Hilbert space subspace of "equally projected elements" Apologies for the title, bit of a struggle to come up with something non-generic. Let $H$ be a Hilbert space and $p:H\rightarrow H$ an orthogonal projection. Suppose $h_1,\,h_2\in H\backslash\{0\}$ and $$\alpha:=\frac{\|p(h_1)\|^2}{\|h_1\|^2}=\frac{\|p(h_2)\|^2}{\|h_2\|^2}.$$ Does it follow that: $$\frac{\|p(h_1+h_2)\|^2}{\|h_1+h_2\|^2}=\alpha?$$ AI: No. Let $H=\mathbb{C}^2$, $p$ the projection onto the first coordinate and $h_1=(1,1)$, $h_2=(1,-1)$. Then $$ \frac{\lVert p(h_1)\rVert^2}{\lVert h_1\rVert^2}=\frac{\lVert p(h_2)\rVert^2}{\lVert h_2\rVert^2}=\frac1 2, $$ yet $$ \frac{\lVert p(h_1+h_2)\rVert^2}{\lVert h_1+h_2\rVert^2}=1. $$
H: Proving that $\mathbb Z$ with the finite-closed topology satisfies the second axiom of countability. In my general topology textbook there is the following exercise: A topological space $(X,\tau)$ is said to satisfy the second axiom of countability if there exists a basis $B$ for $\tau$, where $B$ consists of only a countable number of sets. Let $(X,\tau)$ be the set of all integers with the finite-closed topology. Does the space $(X,\tau)$ satisfy the second axiom of countability? I made a proof for this, but in my proof I made a mistake that I'll point out with the number (1). My proof: Let $A_i=\{\mathbb Z\setminus B : \text{card}\ B=i\}$, for $B \subset \mathbb Z$. Then we have that $\tau= \bigcup \limits _{i=0} ^\infty A_i$. If every set $A_i$ is countable then we have that $\tau$ is countable as well. Let's fix a value of $i$ and define the set $C_i=\{X: \text{card} \ X=i\}$, for $X \subset \mathbb Z$. Each $X$ is countable because it's finite, and because $C_i=\bigcup X$ (1), then $C_i$ is countable. Let $f_i: C_i \to A_i$, with $f_i(X)=\mathbb Z\setminus X$; then $f_i$ is bijective, thus $C_i \sim A_i$, thus $A_i$ is countable, proving that $\tau$ is also countable. If $B$ is a basis for this space, then $B \subset \tau$; any subset of a countable set is also countable, thus $B$ is countable, so $(\mathbb Z, \tau)$ does satisfy the axiom. My mistake was that $C_i \neq \bigcup X$, but instead $C_i = \bigcup \{X\}$. How can I prove that $C_i$ is countable, thus concluding the proof? AI: You are right that $C_i\neq \bigcup X$ and that rather $C_i = \bigcup \{X\}$. I think it is easiest to prove $C_i$ is countable by induction on $i$. For $i=0$ this is trivial (in fact $C_0$ has only one element: the empty set), and it's still pretty trivial for $i=1$ (since $C_1$ is more or less $\mathbb{Z}$ itself). Now, supposing $C_n$ is countable, it is pretty easy to show $C_{n+1}$ is countable as well. For instance, you can say that, since $C_n$ is countable and $\mathbb{Z}$ is countable, also $C_n\times \mathbb{Z}$ is countable, and hence $S:=\{(A,z):A\in C_n,z\in\mathbb{Z}\setminus A\}$ is countable, and we have a surjective map from $S$ onto $C_{n+1}$ (sending $(A,z)$ to $A\cup \{z\}$). Thus, $C_n$ is countable for every $n$. By the way, a remark: you proved that every basis of this space is countable. This is correct, but you didn't need to show that. You only needed to show that there exists a countable basis of this space.
H: Prove that $\sup \{f(x)+g(x):\space x\in X\}\leq \sup \{f(x):\space x\in X\}+\sup \{g(x):\space x\in X\}$ I know that this question had been answered before here but I am asking to please check a method used by me which resulted in the wrong conclusion. Let $X$ be a nonempty set, let $f$ and $g$ be defined on $X$ and have bounded range in $\mathbb R$. Prove that $\sup \{f(x)+g(x):\space x\in X\}\leq \sup \{f(x):\space x\in X\}+\sup \{g(x):\space x\in X\}$ My approach: Let $u=\sup \{f(x):\space x\in X\}$ and $v=\sup \{g(x):\space x\in X\}$ $$u\geq f(x)\space\forall\space x\in X$$ $$v\geq g(x)\space\forall\space x\in X$$ $$\therefore u+v\geq f(x)+g(x)\space\forall\space x\in X$$ Thus we can imply that $u+v$ is an upper bound of $f(x)+g(x)$ Let $w$ be another upper bound for $f(x)+g(x)$ $$\therefore w>f(x)+g(x)\space \forall \space x\in X$$ $$w-g(x)>f(x)\space \forall \space x\in X$$ $\therefore w-g(x)$ is an upper bound for $\{f(x):\space x\in X\}$ $$\therefore w-g(x)>u\space \forall \space x\in X$$ $$w-u>g(x)\space \forall \space x\in X$$ Thus we can imply that $w-u$ is an upper bound for $\{g(x):\space x\in X\}$ $$\therefore w-u>v\Rightarrow w>u+v$$ Now since $w$ is arbitrary, we can imply that any upper bound for $\{f(x)+g(x):\space x\in X\}$ would be greater than $u+v$ Thus we can imply that $u+v=\sup \{f(x)+g(x):\space x\in X\}$ $\therefore \sup \{f(x)+g(x):\space x\in X\}=\sup \{f(x):\space x\in X\}+\sup \{g(x):\space x\in X\}$ which clearly is not correct. I know that all this effort was irrelevant and unnecessary for this question but please help me find the mistake in this solution THANKS AI: First, you already know that the proof was complete once you reached $u+v$ is an upper bound of $\{f(x)+g(x):x \in X\}.$ The problem in your attempt to prove the converse is concluding $w -g(x)$ is an upper bound for $\{f(x):x \in X\}.$ For a fixed $x,$ you have $w-g(x)\geq f(x)$ but you may not have $w-g(x)\geq f(y)$ for some $y \neq x.$ Therefore you cannot say $w-g(x)$ is an upper bound of $\{f(y):y \in X\}$.
H: Matrix multiplication of two invertible matrices is not commutative I need to prove that the set of all invertible $n \times n$ matrices with $n > 1$ is not commutative under multiplication. How could I do this? Thank you. AI: We typically refer to the group of all invertible $n \times n$ matrices over the real numbers with the operation of multiplication as the General Linear Group, denoted $GL_n(\mathbb{R})$. The question then becomes how to show that this group is not abelian. For the case $n=2$, consider the matrices given by $$A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} \quad \text{ and } \quad B = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}.$$ We have that $$ AB = \begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$$ and $$BA = \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}.$$ Since $AB \neq BA$, we can conclude that matrices of size $2 \times 2$ do not commute and $GL_2(\mathbb{R})$ is not abelian. To show this for $n \geq 3$, just extend these matrices by adding $1$'s along the main diagonal and $0$'s everywhere else and you have shown this for all $n \geq 2$. Hope this helps!
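The counterexample in the answer above is easy to verify numerically (my addition, assuming NumPy is available):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])
B = np.array([[1, 0], [1, 1]])
print(A @ B)                           # [[2 1] [1 1]]
print(B @ A)                           # [[1 1] [1 2]]
print(np.array_equal(A @ B, B @ A))    # False
```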
H: Prove a set is of full measure This is a problem in my textbook: $E\subset[0,1], E$ is Lebesgue measurable, if there exists $\delta\in(0,1)$ such that for any interval $I\subset[0,1]$, $m(E\cap I)\geq \delta|I|$, then $m(E)=1$ A hint of this problem is also given in the textbook: use the lemma below and prove the problem by contradiction Lemma: If $0<m(E)<\infty$, then $\forall\alpha\in(0,1)$, there exists an interval $I$ such that $m^*(E\cap I)>\alpha|I|$ I am able to prove this lemma. Then I assume $m(E)<1$. Using the lemma, I get $m([0,1]\setminus E\cap I)>\delta|I|, \forall$ interval $I\subset [0,1]$, and I get stuck after this. AI: First comments: I don't follow your use of the lemma to get the conclusion you state. The lemma provides an interval with some properties and you are getting a conclusion for all intervals. Second, I'm going to write $m(I)$ because I am allergic to using $|I|$ for the Lebesgue measure of an interval. We have $\delta>0$ such that for any $I\subset [0,1]$, $m(E\cap I)\geq \delta m(I)$. Suppose $m(E)<1$. Note that this implies $\delta<1$ since otherwise setting $I=[0,1]$ we would get $m(E)\geq \delta\geq 1$. Now the big hint is to apply the lemma to $F=[0,1]\setminus E$ and $\alpha=1-\delta$. I've hidden the rest. The lemma provides an interval $I$ so that $m(F\cap I)>\alpha m(I)$. So we have $m(I)=m(E\cap I)+m(F\cap I)>\delta m(I)+\alpha m(I)=m(I)$.
H: If we didn't have examples of irrational numbers, would we know they exist? Irrational numbers are very easy to find. Square roots require only a little bit more than the most basic arithmetic. So it might be that this question is impossible to answer because it presupposes a world where math looks completely different to what it really does. I am hoping this is not the case. If we didn't have examples of irrational numbers, would $\mathbb{R}$ and $\mathbb{Q}$ be assumed to be the same set? Can we construct a proof that irrational numbers exist without giving examples? AI: From the surrounding discussion it seems that perhaps there are two questions here: What is the impetus to define the real numbers and how do we get such a definition? Given a definition of $\mathbb{R}$ can we prove that irrational numbers exist without constructing examples? My answer addresses the latter question. You can prove that the real numbers are uncountable (there is no bijection between the real numbers and the integers). You can prove the rational numbers are countable (there is a bijection between the rational numbers and the integers). So $\mathbb{R}\setminus\mathbb{Q}$ is nonempty. Since I don't know your background I will add some more. A bijection between two sets $X$ and $Y$ is a function $f:X\to Y$ that is both one-to-one and onto, i.e., for all $x,y\in X$, $f(x)=f(y)$ implies $x=y$; and for all $y\in Y$ there is $x\in X$ such that $f(x)=y$. An infinite set $X$ is countable if there is a bijection $f:X\to \mathbb{N}$. It is a famous result of Cantor (called a diagonal argument) that $\mathbb{R}$ is uncountable. It is a standard (and good practice) exercise that $\mathbb{Q}$ is countable. It is a standard (and good practice) exercise that if $X$ and $Y$ are countable then so is $X\cup Y$. So if $\mathbb{R}\backslash \mathbb{Q}$ were countable then so would be $\mathbb{R}$. So altogether, this actually shows that not only do irrational numbers exist, but there are more irrational numbers than rational numbers, because the set $\mathbb{R}\backslash\mathbb{Q}$ must be uncountable by the above points. By the way, the same kind of proof shows that transcendental numbers exist and there are more transcendental numbers than algebraic numbers. Indeed, the set of algebraic numbers is also countable and so its complement in $\mathbb{R}$ is uncountable.
H: Properties of centralization in group theory Recall: Let $C\subseteq G$ where $G$ is a group. If $x\in G$ satisfies $xc=cx$ for every $c\in C$, then $x$ centralizes $C$: $$C_G(C) := \{x \in G : xc=cx \text{ for all } c\in C \}$$ Clearly, $$C_G(C)=\bigcap_{c\in C}C_G(c)$$ Prove that a) $C\subseteq C_G(C_G(C))$ b) If $C\subseteq D$ then $C_G(D)\subseteq C_G(C)$ c) $C_G(C_G(C_G(C)))=C_G(C)$ My attempt: a) Let $c\in C$. We will show $c\in C_G(C_G(C)).$ We need to show for any $x\in C_G(C),$ $xc=cx.$ Note that $x\in C_G(C)$ $\iff$ for any $c\in C$ $xc=cx$ by definition. Hence $c\in C_G(C_G(C)).$ Therefore $C\subseteq C_G(C_G(C)).$ b) Assume $C\subseteq D.$ Let $x\in C_G(D).$ Then for any $d\in D$, we have $dx=xd.$ Since $C\subseteq D$, then for any $c\in C$, we have $cx=xc.$ So we are done. c) I couldn't show it. Can you check my proof, and if there is a mistake, can you correct it? Can you help with c)? Thanks... AI: The proofs for a) and b) are OK. For c), first, apply b) to the inclusion a) to obtain one inclusion. Second, to obtain the reverse inclusion, apply a) to $C_G(C)$.
H: On interiors of Jordan curves Recently, while studying a problem in physics whose solution required an application of the Jordan curve theorem to the phase space in order to make it rigorous, I asked myself if it's possible for two Jordan curves $E$ and $F$ on the plane to be contained in each other's interiors. I tried to prove that it's not, failed, realized there might be some counterexample, and ended up drawing complicated spirals all over the pages of my notebook. I haven't yet got it, so I ask for some help here. Properly phrased, my question is: for two Jordan curves $E$ and $F$ on the plane, it's known that $E$ is contained in the interior of $F$. Is it possible for $F$ to be contained in the interior of $E$? AI: $F$ (resp. $E$) separates the plane into two connected components $U_F$ and $B_F$ (resp. $U_E$ and $B_E$), where $U$ stands for unbounded and $B$ for bounded. This is part of the statement of the Jordan curve theorem. The bounded one is what you refer to as "interior". So we know $E \subset B_F$. Assume $F \subset B_E$. This implies that $U_F$ (and $B_F$ too, but this is irrelevant to the argument that follows) intersects $B_E$. Since $E$ does not intersect $U_F$ (as it is contained in $B_F$), it follows that $U_F$ is contained in the separation $U_E$ and $B_E$. But we know that $U_F$ intersects $B_E$ and therefore, since $U_F$ is connected, it must be contained in $B_E$. But this is a contradiction, due to the fact $U_F$ is unbounded and $B_E$ is bounded.
H: Why does $G$ finite abelian group with $i_G$ involutions have no subgroups isomorphic to $K_4$ if $6\nmid i_G(i_G-1)$? I was reviewing my answer here, and have realized that the provided solution ought to work for any finite abelian group. The generalization would then be: Claim. Let $G$ be a finite abelian group and $i_G$ the number of elements of order $2$ of $G$. Then, the number of subgroups of $G$ isomorphic to Klein's $4$-group, say $n_K$, is given by: $$n_K=\frac{1}{6}\cdot i_G(i_G-1)$$ As a corollary, this would mean that, if a finite abelian group $G$ has $i_G$ elements of order $2$ such that $6\nmid i_G(i_G-1)$, then $G$ has no subgroups isomorphic to $K_4$. Provided that this is not based on false assumptions (the linked answer and/or its generalization), what's the group-theoretic motivation for that? AI: Let $G$ be an abelian group. Then we have $G=G_2\times G_{2'}$, where $G_2$ is the Sylow $2$-subgroup of $G$ and $G_{2'}$ is the Hall $2'$-subgroup of $G$. It suffices to consider $G_2$, since all involutions of $G$ lie in $G_2$. Note also that \begin{equation*} G_2=\mathbb{Z}_{2^{e_1}}\times\cdots\times\mathbb{Z}_{2^{e_k}} \end{equation*} for some $1\le e_1\le\cdots\le e_k$, and each $\mathbb{Z}_{2^{e_i}}$ gives exactly one involution. Therefore, $i_G=i_{G_2}=2^k-1$. If $k=0$ then $G$ is odd. In this case of course $G$ has no involution. If $k=1$ then it is also straightforward to see that $n_K=0$. If $k\ge 2$ then any two distinct involutions generate a $K_4$. Your claim is correct. But $6\nmid i_G(i_G-1)$ if and only if $2^k-1=i_G\equiv 2\pmod 3$, which can never happen.
H: calculus problem involving integration and mean value theorem http://people.math.sc.edu/girardi/m555/current/hw/MVT-Flett.pdf the first question of this PDF: $g(x)$ is a continuous function on $[a,b]$ with $ g(a)=0$ and $\int_a^b g(t) \ dt=0$; prove there exists $ c\in (a,b) $ such that $g(c)(c-a)=\int_a^cg(t) \ dt$. I did manage to write down a proof, but I don't know whether it's correct; in any case my solution is not elegant: I worked with a couple of cases and used proof by contradiction along with WLOG, which does not look pretty. I am looking for a more elegant solution. AI: In the linked paper it is indicated to consider $$\varphi(x) = \frac{\int_a^xg(t)\, dt}{x-a} \text{ where } \varphi(a) = g(a) = 0$$ Now, according to the assumptions you have $$\varphi(a) = \varphi(b)= 0$$ and you can apply Rolle's theorem. So, there is a $c \in (a,b)$ such that $$\varphi'(c) = \frac{g(c)(c-a)-\int_a^cg(t)\, dt}{(c-a)^2}= 0$$ From this your claim follows immediately.
H: Does absolute convergence imply conditional convergence? My problem: if $\sum_{n=1}^{\infty} a_{n}$ converges absolutely, does it imply that $\sum_{n=1}^{\infty} a_{n}^{2}$ converges? My proof that absolute convergence of $\sum_{n=1}^{\infty} a_{n}$ implies convergence of $\sum_{n=1}^{\infty} a_{n}$: $-|a_{n}| \le a_{n} \le |a_{n}|$, so $0 \le a_{n} + |a_{n}| \le 2|a_{n}|$. If $\sum |a_{n}|$ converges then $2\sum |a_{n}|$ converges. We have that $\sum (a_{n} + |a_{n}|)$ converges, since $\sum 2|a_{n}|$ converges and $a_{n} + |a_{n}| \le 2|a_{n}|$. But since $a_{n} \le a_{n} +|a_{n}|$, then $\sum a_{n} \le \sum (a_{n}+|a_{n}|)$, which converges, so $\sum a_{n}$ converges. Can I use this proof to prove that if $\sum_{n=1}^{\infty} a_{n}$ converges absolutely then $\sum_{n=1}^{\infty} a_{n}^{2}$ converges? Thank you for any kind of help. AI: Since $\sum_{n=1}^{\infty} a_{n}$ converges absolutely, $\sum_{n=1}^{\infty} |a_n|$ converges. Hence $|a_n|\to 0$, so $|a_n| < k$ for some positive real $k$ and all $n$. Then $|a_n^2| \le k|a_n|$ for all $n$, so $\sum_{n=1}^{\infty} |a_n^2|$ converges by the comparison test, as $\sum_{n=1}^{\infty} |a_n|$ converges. That means $\sum_{n=1}^{\infty} a_n^2$ is absolutely convergent (and in particular convergent).
H: Find the integral of the second kind for Bernoulli leminscate I have tried to solve this integral of the second kind (with respect to $x,y$) but I stumbled on finding a correct path over which to integrate. I know from the definition that I should arrive at $ \int\limits_{C}{{Pdx\, + Q\,dy}} = \int\limits_{C}{{P\left( {x,y} \right)dx}} + \int\limits_{C}{{Q\left( {x,y} \right)\,dy}}$, but I lack examples and I don't know how to apply it. I am new to this type of problem and I do not have many examples; could you provide a full proof, or at least an answer, such that it would serve as a model for similar problems I encounter? Thank you very much! AI: $$(x^2+y^2)^2=a^2(x^2-y^2)$$ is a quadratic equation in both $x^2$ and $y^2$. For example you can draw $$y=\pm\sqrt{\frac{\sqrt{8a^2x^2+a^4}-2x^2-a^2}2}.$$ But integration will be arduous in Cartesian coordinates, it is better to switch to polar.
H: Model the constraints of an integer linear program I have to model the constraints of an integer linear program. The objective function is not important here. Suppose we have $C$ configurations (indexed by $i = 1, \dots, C$) and $T$ time intervals (indexed by $j = 1, \dots, T$). I wish that: a) only one configuration is active at each time interval; b) when a configuration has been chosen I must keep it for at least $\bar{t}$ time intervals (with $\bar{t} \leq T$). So I define the decision variables: $y(i,j) := \begin{cases} 1 & \text{if configuration $i$ is active at time $j$}; \\[2ex] 0 & \text{otherwise}. \end{cases}$ For request a) I write the constraints: \begin{equation} \sum_{i=1}^C y(i,j) = 1 \ \ \ \forall \ j = 1, \dots, T. \end{equation} Now, I don't know how to proceed with the request b). Any help or suggestions are appreciated. Thanks in advance. AI: For part b), you can introduce another binary variable $x_{i,j}$ to indicate the start, together with constraints: \begin{align} x_{i,j} &\le y_{i,k} &&\text{for $k\in\{j,\dots,j+\overline{t}-1\}$} \tag1\\ x_{i,j} &\le 1 - y_{i,j-1} \tag2\\ (1 - y_{i,j-1}) + y_{i,j} - 1 &\le x_{i,j} \tag3 \end{align} Constraint $(1)$ enforces $x_{i,j} = 1 \implies \land_{k=j}^{j+\overline{t}-1} (y_{i,k} = 1)$. Constraint $(2)$ enforces $x_{i,j} = 1 \implies y_{i,j-1} = 0$. Constraint $(3)$ enforces $(y_{i,j-1} = 0 \land y_{i,j} = 1) \implies x_{i,j} = 1$. Alternatively, you can omit $x$ and just impose $$(1 - y_{i,j-1}) + y_{i,j} - 1 \le y_{i,k} \quad\text{for $k\in\{j+1,\dots,j+\overline{t}-1\}$},$$ equivalently, $$y_{i,j} - y_{i,j-1} \le y_{i,k} \quad\text{for $k\in\{j+1,\dots,j+\overline{t}-1\}$}, \tag4$$ (where $y_{i,j}$ is treated as $0$ if $j<0$ or $j>T$), which enforces $$(y_{i,j-1} = 0 \land y_{i,j} = 1) \implies \bigwedge_{k=j+1}^{j+\overline{t}-1} (y_{i,k} = 1).$$ Yet a third approach is to enforce $$\left(y_{i,j-1} \land \lnot \bigwedge_{k=1}^{\overline{t}} y_{i,j-k}\right) \implies y_{i,j},$$ equivalently, $$\left(\lnot y_{i,j-1} \lor \bigwedge_{k=1}^{\overline{t}} y_{i,j-k}\right) \lor y_{i,j},$$ which has conjunctive normal form $$\bigwedge_{k=1}^{\overline{t}} \left(\lnot y_{i,j-1} \lor y_{i,j-k} \lor y_{i,j}\right),$$ yielding linear constraints $$1 - y_{i,j-1} + y_{i,j-k} + y_{i,j} \ge 1 \quad \text{for $k \in \{1,\dots,\overline{t}\}$}.$$ Rearranging this, we obtain $$y_{i,j} \ge y_{i,j-1} - y_{i,j-k} \quad \text{for $k \in \{1,\dots,\overline{t}\}$}. \tag5$$ Now we can recover @toronto hrb's formulation by aggregation: $$y_{i,j} \ge y_{i,j-1} - \frac{1}{\overline{t}}\sum_{k=1}^{\overline{t}} y_{i,j-k} \tag6$$ So $(6)$ is correct but weaker than $(5)$.
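Here is a minimal modeling sketch of constraint $(4)$ together with requirement a), using PuLP (my choice of library; the instance sizes `C`, `T`, `tbar` and all variable names are hypothetical, and no objective is set since the question leaves it unspecified):

```python
from pulp import LpProblem, LpVariable, LpBinary, LpMinimize, lpSum

C, T, tbar = 3, 10, 2                      # hypothetical instance sizes
prob = LpProblem("config_schedule", LpMinimize)
y = {(i, j): LpVariable(f"y_{i}_{j}", cat=LpBinary)
     for i in range(C) for j in range(T)}

# a) exactly one active configuration per time interval
for j in range(T):
    prob += lpSum(y[i, j] for i in range(C)) == 1

# b) minimum dwell time via constraint (4): y[i,j] - y[i,j-1] <= y[i,k]
for i in range(C):
    for j in range(T):
        prev = y[i, j - 1] if j >= 1 else 0   # y treated as 0 outside the horizon
        for k in range(j + 1, min(j + tbar, T)):
            prob += y[i, j] - prev <= y[i, k]
```

Calling `prob.solve()` would additionally require a MILP solver such as CBC to be available, so it is omitted from the sketch.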
H: Properties about topological vector spaces Let $E$ be a topological vector space. First I want to prove that, given $V \subset E$ balanced and $\lambda>0$, then $$ \lambda V \subset \beta V, \: \forall \;\lambda< \beta. \tag{1} $$ For this I tried the following: let $\lambda,\beta>0$ such that $\lambda<\beta$ and let $x \in \lambda V$. Thus, there exists $v \in V$ satisfying $x=\lambda v$. Since $V$ is balanced, then $$\frac{\lambda}{|\lambda|}v = \frac{\lambda}{\lambda}v=v \in V \Rightarrow \frac{1}{\lambda}x \in V \Rightarrow \frac{\beta}{\lambda}x \in \beta V.$$ But, how can I conclude that $ x \in \beta V $? I also want to prove that: given $U\subset E$ a neighborhood of $0 \in E$ such that $U$ is open and balanced, then $$H:= \bigcup_{n=1}^{\infty}nU=E.\tag{2}$$ It's clear that $H \subset E$. But the converse inclusion, that is, given $x \in E$ there exists $n_0 \in \mathbb{N}$ so that $x\in n_0U$, I couldn't prove. Are my ideas right? Any suggestion? Remember that: a subset $A$ of a vector space $X$ is said to be balanced if for every $x \in A$ and every $\lambda \in \mathbb{C}$, $|\lambda|\leq 1$, we have $\lambda x\in A$. AI: For the first bit: If $x=\lambda v\in \lambda V$, then $x=\beta \frac{\lambda}{\beta}v \in \beta V$, as $\vert \lambda/\beta\vert \le 1$ and $V$ is balanced. For the second question, balanced is not needed. If $x\in E$, the map $\alpha \mapsto \alpha x$ (defined on the base field) is continuous and thus, if $n\rightarrow \infty$, we have $x/n\rightarrow 0$. In particular, the sequence eventually lies in $U$ (the open neighbourhood of $0$) and for some $n$ we have $x/n\in U$, i.e. $x\in nU$.
H: How many subsets of $\{0,1,\ldots,9\}$ have the property that there are at least two elements and the sum of the two largest elements is 13? I am not sure how to start but I think I need to consider the amount of subsets where there are 2 elements, 3 elements, and 4 elements ... separately. How do I start? AI: Suggestion: Start by choosing the two elements that will sum to $13$, say $a$ and $b$ with $a<b$. These will be the largest two elements of the set. Then you can also include (or not) any elements that are less than $a$. For instance if you take $8$ and $5$ initially, you can then include any subset of the elements $4,3,2,1,0$.
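Following the suggestion above through by brute force (my addition, plain Python): the qualifying pairs are $(4,9)$, $(5,8)$, $(6,7)$, contributing $2^4+2^5+2^6=112$ subsets in total:

```python
from itertools import combinations

count = 0
for r in range(2, 11):
    for s in combinations(range(10), r):   # s is sorted, so s[-2], s[-1] are largest
        if s[-1] + s[-2] == 13:
            count += 1
print(count)   # 112
```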
H: Compactness, connectedness and Hausdorffness on $ (S^2 /\mathscr{R}, \tau_{e_{|S^2}}/\mathscr{R}) $ Consider over $S^2$, with the induced euclidean topology, the equivalence relation: $(x,y,z)\mathscr{R}(x',y',z') \iff x+y=x'+y'$ Let $X=S^2 /\mathscr{R}$ be the quotient set with the quotient topology $\tau$. Prove that $(X,\tau)$ is compact, connected and Hausdorff. Let $\tau_{e_{|S^2}}$ be the induced euclidean topology over the unit sphere $S^2$. My try: For compactness I would say that since the sphere is closed and bounded, the property passes to the quotient, so the quotient $ (S^2 /\mathscr{R}, \tau_{e_{|S^2}}/\mathscr{R})=(X,\tau) $ is then compact. For connectedness, since the sphere is path-connected, the property passes to the quotient and it is connected. I am unsure however if these reasonings are correct, because I haven't used the given $\mathscr{R}$, so this would imply it is valid for any equivalence relation. For Hausdorffness I am not sure how to proceed, since I think in this case the equivalence relation does come into play. $(S^2,\tau_{e_{|S^2}})$ is Hausdorff, right? Because it is a subspace of $\mathbb{R}^3$, which is a Hausdorff space. Provided we are on a Hausdorff space, I know that to prove the quotient is Hausdorff I have to prove that there exist disjoint saturated open neighbourhoods $A \in \mathscr{U}_p$, $B \in \mathscr{U}_q$ $\forall p , q$ not in the same equivalence class. Can someone shed some light? AI: So first of all, I assume that $S^2=\{v\in\mathbb{R}^3\ |\ \lVert v\rVert=1\}$ is the standard 2-dimensional sphere. Well, it doesn't really matter what $S^2$ is, as long as it is a compact and connected subspace of $\mathbb{R}^3$. Recall that we have the standard projection $$\pi:S^2\to S^2/\mathscr{R}$$ $$\pi(x)=[x]_\mathscr{R}$$ which is a continuous surjection. For compactness I would say that since the sphere is closed and bounded, the property passes to the quotient, so the quotient $ (S^2 /\mathscr{R}, \tau_{e_{|S^2}}/\mathscr{R})=(X,\tau) $ is then compact. The "closed and bounded" property does not pass to quotients. Quotients need not be subspaces of $\mathbb{R}^n$, or even metrizable, so "bounded" need not even make sense. Also quotient maps need not preserve closed sets. Nevertheless, the quotient will be compact. We can use our projection $\pi$ and note that the image of a compact space is compact. Indeed, if $f:X\to Y$ is a continuous surjection with $X$ compact, then take $\mathscr{U}$ an open covering of $Y$. Then $f^{-1}(\mathscr{U})=\{f^{-1}(U)\ |\ U\in\mathscr{U}\}$ is an open covering of $X$. By compactness this covering has a finite subcovering $\{f^{-1}(U_1),\ldots,f^{-1}(U_n)\}$. You can easily check that $\{U_1,\ldots,U_n\}$ is a finite subcovering of $\mathscr{U}$. For connectedness, since the sphere is path-connected, the property passes to the quotient and it is connected. Yes. A continuous image of any (path) connected space is (path) connected, i.e. $\pi(S^2)=S^2/\mathscr{R}$ is (path) connected. The concrete formula for $\mathscr{R}$ is irrelevant. For Hausdorffness I am not sure how to proceed, since I think in this case the equivalence relation does come into play. It sure does. Unlike connectedness and compactness, not every quotient of a Hausdorff space is Hausdorff. But we have this neat property: a quotient of a compact Hausdorff space $X$ via a relation $R$ is Hausdorff if and only if $R$ is a closed subspace of $X\times X$.
See this: Question about quotient of a compact Hausdorff space. So let's have a look at $\mathscr{R}=\{((x,y,z),(x',y',z'))\in S^2\times S^2\ |\ x+y=x'+y'\}$. Is that a closed subspace of $S^2\times S^2$? Sure it is, because we have a continuous function $$f:S^2\times S^2\to\mathbb{R}$$ $$f(x,y,z,x',y',z')=x+y-x'-y'$$ and with that we have $\mathscr{R}=f^{-1}(\{0\})$, and so it is closed.
H: Number of 5 digit numbers with the first three digits decreasing and the last 3 increasing? The question is "Compute the number of 5-digit integers ABCDE, with all digits distinct, such that the first 3 digits are strictly decreasing, and the last 3 digits are strictly increasing." This is an old AMC competition question that I found while practicing. I know that $C$ has to be the smallest digit and can only range from $0$ to $5$. Can someone solve this for me? AI: Pick $5$ distinct digits. The least digit must go in the third position of the number. Pick two of the remaining $4$ digits to go to the left of the middle digit. Everything else is forced. For example if you pick digits 0, 2, 3, 6, 8, then your number must be of the form $**0**$. If you pick $2$ and $6$ to go to the left of the middle digit, your number must be $62038$.
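A direct enumeration (a Python sketch, not part of the original answer) agrees with the count this hint produces, $\binom{10}{5}\binom{4}{2}=1512$; note the leading digit is automatically nonzero since it exceeds two smaller digits:

    count = 0
    for n in range(10000, 100000):
        a, b, c, d, e = map(int, str(n))
        if len({a, b, c, d, e}) == 5 and a > b > c < d < e:
            count += 1
    print(count)  # 1512 = C(10,5) * C(4,2)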
H: Proving that $\mathcal B$ is a basis for a product topology In my general topology textbook there is the following exercise: Let $\mathcal B_1$ be a basis for a topology $\tau_1$ on a set $X$, and $\mathcal B_2$ be a basis for a topology $\tau_2$ on a set $Y$. Let $\mathcal B$ be the collection of subsets of $X \times Y$ consisting of all the sets $B_1 \times B_2$, where $B_1 \in \mathcal B_1$ and $B_2 \in \mathcal B_2$. Prove that $\mathcal B$ is a basis for a topology on $X \times Y$. My first thought was to use the following properties: $\mathcal B$ is a basis for some topology on a set $X$ if and only if: (1) $X = \bigcup_{B \in \mathcal B} B$; (2) for $B_1,B_2 \in \mathcal B$, the intersection $B_1 \cap B_2$ is a union of members of $\mathcal B$. I think that we can use this because we already know that $\mathcal B_1$ and $\mathcal B_2$ satisfy these properties, but I'm still confused with this exercise, and I don't want to solve it just by using some properties and not understanding the problem. Can someone give me some tips or some intuition behind this problem that might help me understand it? Anything helps really, I'm just having some trouble "visualizing" the problem AI: You may visualize your problem by considering the case $X=Y=\mathbb{R}$. We may infer our exercise from this basic case. Connecting the visualization and the real proof would be another point, however. I do not know of any further intuition behind this problem. Sometimes mathematics is a matter of 'just do it and it works,' although your exercise is what everyone desires: we can find a basis of a given product of spaces by taking a product of two bases. Now let us see how to solve your problem. Your attempt is nice, and it works with some modifications. The following description of the second condition on $\mathcal{B}$ would be helpful: For any $U, V\in\mathcal{B}$ and $x\in U\cap V$, there is $N\in\mathcal{B}$ such that $x\in N\subseteq U\cap V$. Moreover, we have to check that the collection of open sets generated by $\mathcal{B}$ is exactly the same as the collection of open sets of $X\times Y$.
H: Given matrix $A$, find matrix $X$ such that $e^X = A$ Given the following matrix $$A = \begin{bmatrix} -2 & 2 & 1\\ 2 & -3 & -2\\ -5 & 6 & 4\end{bmatrix}$$ how can we show that there exists a complex matrix $X$ such that $e^X = A$? I have struggled to find information on how to approach the problem. However, the determinant of matrix $A$ is non-zero and its eigenvalues are $-1$ and $1$. AI: The Jordan form of $A$ is given by $$J = \begin{bmatrix} -1 & 1 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1\end{bmatrix}$$ with $A = PJP^{-1}$ where $P = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 1 & -1 \\ 1 & 0 & 2\end{bmatrix}$. First we can try to find $\log J$ blockwise. Searching for the logarithm of $\begin{bmatrix} -1 & 1 \\ 0 & -1\end{bmatrix}$ again as some Jordan block, if we set $f(x) = e^{-x}$ then $$\exp\left(\begin{bmatrix} i\pi & -1 \\ 0 & i\pi\end{bmatrix}\right) = f\left(\begin{bmatrix} -i\pi & 1 \\ 0 & -i\pi\end{bmatrix}\right) = \begin{bmatrix} f(-i\pi) & f'(-i\pi) \\ 0 & f(-i\pi)\end{bmatrix} = \begin{bmatrix} -1 & 1 \\ 0 & -1\end{bmatrix}$$ so we can set (clearly it's not unique) $$\log J = \begin{bmatrix} i\pi & -1 & 0 \\ 0 & i\pi & 0 \\ 0 & 0 & 0\end{bmatrix}$$ and finally we get $$\log A = P(\log J) P^{-1} = \begin{bmatrix} 1 & -2 & -1 \\ 0 & 0 & 0 \\ 1 & -2 & -1\end{bmatrix}+i\pi \begin{bmatrix} 1 & 0 & 0 \\ -1 & 2 & 1 \\ 2 & -2 & -1\end{bmatrix}.$$
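The computed logarithm can be verified numerically (a NumPy/SciPy sketch, not part of the original answer):

    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-2, 2, 1], [2, -3, -2], [-5, 6, 4]], dtype=complex)
    X = (np.array([[1, -2, -1], [0, 0, 0], [1, -2, -1]], dtype=complex)
         + 1j * np.pi * np.array([[1, 0, 0], [-1, 2, 1], [2, -2, -1]]))
    print(np.allclose(expm(X), A))  # True: e^X = A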
H: Riesz representation theorem vs. natural duality for $L^2$ We know that the spaces $L^p(\Omega)$ and $L^q(\Omega)$ are isometric and isomorphic for $p,q$ conjugate and $p,q \neq 1,\infty$. Call the isomorphism $l\colon L^p(\Omega) \to L^q(\Omega)$. Take $p=q=2$. $L^2(\Omega)$ is a Hilbert space so we have an isometric isomorphism $r$ between $L^2(\Omega)$ and $L^2(\Omega)^*$ which is the Riesz map. Is this the same map as the above? I.e. is $l \equiv r$? AI: Are you sure that we know that $L^p(\Omega)$ is isometrically isomorphic to $L^q(\Omega)$ for $p,q$ conjugate exponents? We actually know that $L^p$ is isometrically isomorphic to the dual of $L^q$, not to $L^q$ itself. An immediate example to show that this is not true: consider the measure space $\mathbb{N}$ with the counting measure, so the $L^p$ spaces are actually the $\ell^p$ spaces. We know that for $0<s<t<\infty$ we have $\ell^s\subset\ell^t$. Indeed, if $(x_n)\in \ell^s$, then $\sum_n|x_n|^s<\infty$ so $|x_n|^s$ becomes less than $1$ after a certain point, so $|x_n|$ becomes less than 1 after a certain point, so, since $s<t$, it will be $|x_n|^t\leq|x_n|^s$ from that point on, which shows that $(x_n)\in\ell^t$. Now take for example the conjugate exponents $p=3/2$ and $q=3$. We have that $\ell^{3/2}\subset\ell^3$, but this inclusion is not isometric: for example, the sequence $x=(\frac{1}{2^n})_{n=1}^\infty$ satisfies $$\|x\|_{3/2}^{3/2}=\sum_{n=1}^\infty\frac{1}{2^{3n/2}}=\frac{2^{-3/2}}{1-2^{-3/2}} $$ while on the other hand $$\|x\|_3^3=\sum_{n=1}^\infty\frac{1}{2^{3n}}=\frac{8^{-1}}{1-8^{-1}},$$ so the two norms differ. Any Hilbert space is isometrically isomorphic to its dual via the Riesz map $$H\to H^*,\;\;\;x\mapsto\langle-,x\rangle $$ The isometric isomorphism between $L^p$ and $(L^q)^*$ is given by $f\mapsto\varphi_f:L^q\to\mathbb{C}$ where $\varphi_f(g)=\int\overline{f}g$. Since the inner product on $L^2$ is $\langle f,g\rangle=\int f\overline{g}$, we actually see that the Riesz map of $L^2$ coincides with the isometric isomorphism between $L^2$ and $(L^2)^*$.
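A quick numerical check of the corrected computation (a Python sketch, not part of the original answer): the two norms of $x=(2^{-n})_n$ differ, so the inclusion $\ell^{3/2}\subset\ell^3$ is indeed not isometric.

    n32 = sum((0.5 ** n) ** 1.5 for n in range(1, 200)) ** (2 / 3)  # ||x||_{3/2}
    n3 = sum((0.5 ** n) ** 3 for n in range(1, 200)) ** (1 / 3)     # ||x||_3
    print(n32, n3)  # approx 0.669 vs 0.523: different values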
H: Properties of rational function of two monotone functions Consider two increasing functions $f,g:\mathbb{N}_{>0}\to \mathbb{N}_{>0}$ and let $h:=f/g$ (we assume $g(x)\neq 0$ for all $x>0$). In other words, $f(x)< f(x+1)$, $g(x)< g(x+1)$ holds for all $x>0$ and $h(x)=f(x)/g(x)$. We assume that $\lim_{x\to \infty}h(x)=0$. Then, is $h$ unimodal (in $x>0$)? i.e., $\cdots \leq h(x^*-1)\leq h(x^*)\geq h(x^*+1) \geq\cdots$ for some $x^*>0$. For example, $h(x)=x/(x^2+1)$ is unimodal (in $x>0$). AI: No. Here is a counterexample: $$ f(x)=e^{2x+\sin(x)}\quad g(x)=e^{2x-\sin(x)+\log(1+x^2)},$$ for $x>0$. It is easy to check that both are monotone increasing, and that their ratio $h(x)=e^{2\sin(x)}/(1+x^2)$ is not unimodal.
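A numerical look at the ratio on integer points (a Python sketch, not part of the original answer; note the answer's $f,g$ are real-valued rather than integer-valued, but the oscillation is the point) shows the failure of unimodality: $h$ strictly falls from $n=1$ to $n=5$ and then strictly rises up to $n=8$, so no peak $x^*$ as in the question can exist.

    from math import exp, sin

    h = lambda n: exp(2 * sin(n)) / (1 + n * n)
    print([round(h(n), 4) for n in range(1, 11)])
    print(h(1) > h(2) > h(3) > h(4) > h(5))  # True: decreasing stretch
    print(h(5) < h(6) < h(7) < h(8))         # True: later increasing stretch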
H: Show that the sequence $\left\{\frac{1}{1+nz}\right\}$ is uniformly convergent to zero for all $z$ such that $|z|\geq 2$. Show that the sequence $\left\{\frac{1}{1+nz}\right\}$ is uniformly convergent to zero for all $z$ such that $|z|\geq 2$. The definition for uniform convergence is: We say $\left\{f_n\right\}$ converges to $f$ uniformly on a subset $S$ of $\Omega$ iff for every $\epsilon >0$ there is an $n_0\in\mathbb{N}$ so that if $n>n_0$ then $|f_n(z)-f(z)|<\epsilon$ for all $z\in S$. I found this example in my book, but I don't understand how they proved it with the definition. All it says is: $|\frac{1}{1+nz}|\leq \frac{1}{n|z|-1}\leq\frac{1}{2n-1}\leq\frac{1}{n}$. AI: They have used the reverse triangle inequality to derive that inequality. Note that: to show that $f_n \to f$ uniformly on $S$, it suffices to show that $$\sup_{z\in S} |f_n(z) - f(z)| \to 0.$$ (Why? Use the definition given to you as well as the definition of the limit of a sequence of real numbers.) In this case, you have that $|f_n(z) - 0| \le 1/n$ for all $z \in S$ and hence, $$\sup_{z\in S} |f_n(z) - f(z)| \le \dfrac{1}{n}.$$ Using the Sandwich theorem, we see that $\displaystyle\sup_{z\in S} |f_n(z) - f(z)| \to 0$, as desired.
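As a quick numerical check of the bound (a Python sketch, not part of the original answer): since $|1/(1+nz)|\le 1/(n|z|-1)$ decreases in $|z|$, the worst case for $|z|\ge 2$ is the circle $|z|=2$, and the supremum there stays below $1/n$.

    import numpy as np

    z = 2 * np.exp(1j * np.linspace(0, 2 * np.pi, 2000))  # sample |z| = 2
    for n in (1, 5, 25, 125):
        sup = np.abs(1 / (1 + n * z)).max()
        print(n, sup <= 1 / n)  # True for each n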
H: Proving that $\mathbb{Z_{-}} \cap \mathbb{N}=\emptyset$ Assuming that $\mathbb{N}$ and the successor function ($\overline{x}$) over $\mathbb{N}$ are defined by the 5 Peano axioms: (1) $1\in\mathbb{N}$; (2) $n\in\mathbb{N} \Rightarrow \overline{n}\in\mathbb{N}$; (3) $\nexists n\in\mathbb{N}:\ \overline{n}=1$; (4) $\forall n,m,k\in\mathbb{N}:\ (\overline{n}=k\land\overline{m}=k)\ \Rightarrow\ n=m$; (5) $\left[P(1)\land\forall n\in\mathbb{N}\ [P(n)\Rightarrow P(\overline n)]\right]\ \Rightarrow \ \forall n\in\mathbb{N}\ [P(n)]$; and that an addition operation is defined over $\mathbb{N}$ by $\forall n\in\mathbb{N}\quad n + 1 = 1 + n = \overline{n}$ and $\forall n,m\in\mathbb{N}\quad n+\overline{m}=\overline{n+m}$, and that we have proved some basic properties of addition over $\mathbb{N}$ like existence, uniqueness and commutativity, we are extending $\mathbb{N}$ and its operations to a new set (called $\mathbb{Z}$) by including a neutral element (denoted by "$0$") where: $$\tag{1}\label{d:zero}\forall n\in \mathbb{N}\quad n+0=0+n=n$$ and we know that $0\notin\mathbb{N}$, because if $0\in\mathbb{N}$ then by substituting $n=1$ into \eqref{d:zero}: $$0+1=1\quad\Leftrightarrow\quad \overline{0}=1\quad\text{(which contradicts the 3rd Peano axiom)}$$ and by constructing negative numbers $(-n)\in\mathbb{Z}_-$ for all $n\in\mathbb{N}$ by the relation $n+(-n)=0$. So the question is how to prove that the constructed set $\mathbb{Z}_-$ is not already contained in $\mathbb{N}$? Maybe there is a simple proof for $(-1)$ which I can extend to other numbers. AI: Suppose $-x \in \mathbb{N}$ for some $x \in \mathbb{N}$. Then $0 = x + (-x) \in \mathbb{N}$ (because $\mathbb{N}$ is closed under addition). But as you already showed, $0 \notin \mathbb{N}$. Contradiction!
H: Why $\mathrm{Restriction}\circ \mathrm{Corestriction}$ is multiplication on cohomology? Let $G$ be a group, and let $H$ be a subgroup of index $m$. Let $A$ be a $G$-module. We have restriction $$\mathrm{Res}: H^n(G,A)\to H^n(H,A)$$ and co-restriction $$\mathrm{Cor}: H^n(H,A)\to H^n(G,A).$$ It is known that $$\mathrm{Cor}\circ \mathrm{Res}(c)=mc.$$ I don't see why the converse holds: Why $$\mathrm{Res}\circ \mathrm{Cor}(c)=mc$$ for every $c\in H^n(H,A)$? In this direction applying Res doesn't do anything to the function, and we are left with some sum $\sum_{i=1}^m g_i f(g_i^{-1}p)$ for every $p\in P_n$, for a projective resolution $P_n$ of $\mathbb Z$ as a $G$-module. The different terms in the sum seem like distinct functions, and we cannot put the $g_i^{-1}$ outside because $f$ is only a $\mathbb Z H$-module homomorphism. (The claim is stated in Dummit & Foote.) AI: You say that $f$ is only an $H$-module homomorphism, but in fact it's $G$-equivariant: we started with a $G$-invariant homomorphism, "forgot" it was $G$-invariant, and then applied the formula with the sums. Since it was $G$-invariant, each term in the formula with the sum is just $f(p)$. EDIT: as discussed in the comments, the other claim about corestricting and then restricting is false. See Neukirch–Schmidt–Wingberg, Corollary 1.5.7, for the correct formulation.
H: Is $\left(\begin{smallmatrix}0&0&1\\1&0&0\\ 0&1&0\end{smallmatrix}\right)$ diagonalizable over $\mathbb{Z}_2$? Is $A= \begin{pmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix}$ diagonalizable over $\mathbb{Z}_2$? I tried two approaches and got two different answers so I was hoping someone could point me to a flaw in my reasoning: First approach: The minimal polynomial for $A$ is easily found to be $m(x) =x^3-1$ which is the same as $x-1$ over $\mathbb{Z}_2$. Since the minimal polynomial decomposes into distinct linear factors it must be that $A$ is diagonalizable over $\mathbb{Z}_2$. Second approach: It follows from the minimal polynomial that $1$ is the only eigenvalue of $A$ . The eigenvector equation is $\begin{pmatrix} 0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0 \end{pmatrix} \begin{pmatrix} x \\ y\\ z \end{pmatrix} = \begin{pmatrix} z \\ x\\ y \end{pmatrix} = 1 \times \begin{pmatrix} x \\ y\\ z \end{pmatrix} $ and the only solution is $\begin{pmatrix} 1 \\ 1\\ 1 \end{pmatrix}$. But $\mathbb{Z}_2^3$ has dimension $3$, so there is no basis for $\mathbb{Z}_2^3$ consisting of eigenvectors for $A$. $A$ can't be diagonalized over $\mathbb{Z}_2$. What went wrong? Many thanks! AI: Your claim that $x^3-1$ and $x-1$ are the same polynomial is the mistake. Yes, as functions over $\mathbb{Z_2}$ they are the same. But a polynomial is not a function. A polynomial is a formal sum of the form $\sum_{i=0}^n a_ix^i$ where the coefficients are elements in the field. So $x^3-1$ is actually not a product of distinct linear factors. Another way to see this is a mistake is to note that if $x-1$ was the minimal polynomial of $A$ then it would mean that $A-I=0$ and hence $A=I$. But $A$ is not the identity matrix.
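Both observations can be confirmed by brute force over $\mathbb{Z}_2$ (a Python sketch, not part of the original answer):

    import numpy as np
    from itertools import product

    A = np.array([[0, 0, 1], [1, 0, 0], [0, 1, 0]])
    I = np.eye(3, dtype=int)
    print(np.array_equal(np.linalg.matrix_power(A, 3) % 2, I))  # A^3 = I mod 2

    # eigenvectors for eigenvalue 1: solve Av = v over Z_2 by enumeration
    eig = [v for v in product([0, 1], repeat=3)
           if any(v) and np.array_equal((A @ v) % 2, np.array(v))]
    print(eig)  # only (1, 1, 1): a single eigenvector, so A is not diagonalizable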
H: series involving zeta function and cotangent I have recently been finding the values of the zeta function at the even positive integers using Fourier series, and it is well known that these are all of the form $\frac{\pi^{2n}}{a_{2n}}$, and so I thought about whether or not the series below would converge: $$S=\sum_{n=1}^\infty\frac{\zeta(2n)}{\pi^{2n}}=\frac{1-\cot(1)}{2}$$ As you can see it does converge and has an interesting value; however, I am not sure how to prove it. I do know that: $$\zeta(2n)=(-1)^{n+1}\frac{B_{2n}(2\pi)^{2n}}{2(2n)!}$$ and so we could say: $$S=\sum_{n=1}^\infty\frac{(-1)^{n+1}B_{2n}2^{2n-1}}{(2n)!}$$ How can I manipulate this? Thanks AI: Notice $${\sum_{n=1}^{\infty}\frac{\zeta(2n)}{\pi^{2n}}=\sum_{n=1}^{\infty}\sum_{k=1}^{\infty}\left(\frac{1}{k^2}\right)^n\frac{1}{\pi^{2n}}}$$ Interchanging sums $${\Rightarrow \sum_{k=1}^{\infty}\sum_{n=1}^{\infty}\left(\frac{1}{(k\pi)^2}\right)^n}$$ This is the sum of an infinite number of geometric series! $${=\sum_{k=1}^{\infty}\left(\left(\frac{1}{1-\frac{1}{(k\pi)^2}}\right)-1\right)}$$ Simplifying this, you get $${\Rightarrow \sum_{k=1}^{\infty}\frac{1}{(k\pi)^2 - 1}}$$ Now we can factor out the ${\pi^2}$ to get $${=\frac{1}{\pi^2}\sum_{k=1}^{\infty}\frac{1}{k^2 - \left(\frac{1}{\pi^2}\right)}}$$ Fortunately, this is a sum we can deal with very nicely using the formula $${\sum_{n=0}^{\infty}\frac{1}{n^2 + a^2} = \frac{1+a\pi\coth(a\pi)}{2a^2}}$$ (source: https://en.wikipedia.org/wiki/List_of_mathematical_series; it can be derived using Fourier series!) Can you take it from here?
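The partial-fraction reduction and the claimed closed form can be checked numerically (a Python sketch, not part of the original answer):

    from math import pi, tan

    lhs = sum(1 / ((k * pi) ** 2 - 1) for k in range(1, 200000))
    rhs = (1 - 1 / tan(1)) / 2  # (1 - cot 1)/2
    print(lhs, rhs)  # both approx 0.1790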
H: Constructing entire function from two functions with the same simple pole Given two complex functions $f,g : \mathbb{C}\backslash\{z_0\} \to \mathbb{C}$ which are analytic everywhere in $\mathbb{C}$ except in one simple pole, which they have in the same point $z_0$. Does there exist a function $h : \mathbb{C} \to \mathbb{C}$ such that the function $\Theta: \mathbb{C} \to \mathbb{C} : z \mapsto f(z) - h(z)g(z)$ does not have any singularities? I fail to find a solid mathematical argument, and I doubt it is even possible to find such an $h$ in general... AI: (edit later - if one has simple poles, the constant $a/b$ works for $h$, where $a$ is the residue of $f$ and $b$ is the residue of $g$ at $z_0$, so the proof is trivially one line) The result is true if and only if the order of the pole of $g$ is equal to or higher than the order of the pole of $f$ (the statement mentions simple poles, but your example has poles of order $1$ and $2$, so I am treating the general case). Let the orders of the poles of $f$ and $g$ at $z_0$ be $m,n$ respectively. One direction is obvious, since for any finite function $h: \mathbb C \to \mathbb C$ (it doesn't need to be analytic, continuous etc.), if $k(z)=f(z)-h(z)g(z)$ is finite (no singularities), then $(z-z_0)^nk$ is finite, and since $(z-z_0)^nh(z)g(z)$ is also finite, it follows that $(z-z_0)^nf(z)$ is finite, or $m \le n$. Conversely, assume $m \le n$; then we show we can find an entire function $h$ as required (a polynomial of degree at most $n$ will work, as we will see from the proof) by considering $f_1(z), g_1(z)$ the principal parts of $f,g$ at $z_0$ (in other words, $f-f_1, g-g_1$ are entire; we note that $f_1(z)=\frac{Q(z)}{(z-z_0)^m}, g_1(z)=\frac{P(z)}{(z-z_0)^n}, \deg Q = m-1, \deg P=n-1, P(z_0) \ne 0, Q(z_0) \ne 0$). We claim that we can find a polynomial $h$ of degree at most $n$ such that $(z-z_0)^{n-m}Q-h(z)P(z)$ has a zero of order $n$ at $z_0$, which obviously implies that $f-hg$ is entire. We take $h(z)=(z-z_0)^{n-m}h_1(z)$ and then we need $h_1(z_0)P(z_0)=Q(z_0)$, which we get uniquely and non-zero since $P(z_0) \ne 0, Q(z_0) \ne 0$; so $Q-h_1P$ has a zero at $z_0$; differentiating, we need $Q'-h_1'P-h_1P'$ to have a zero there, but since we know $Q',h_1,P,P'$ at $z_0$ and since $P(z_0) \ne 0$ we determine $h_1'(z_0)$ uniquely; now it is clear that we can continue differentiating: at each step, forcing $Q-h_1P$ to have a higher-order zero at $z_0$ by taking the next derivative, all the terms are determined except $h_1^{(k)}(z_0)P(z_0)$, where $k$ is the order we need now, and again $P(z_0) \ne 0$ gives $h_1^{(k)}(z_0)$ uniquely. In particular, we need to determine an entire function $h_1$ given $h_1(z_0) \ne 0, h_1'(z_0),\dots,h_1^{(m-1)}(z_0)$ (since $Q-h_1P$ must vanish to order $m$ at $z_0$, the factor $(z-z_0)^{n-m}$ supplying the remaining $n-m$ orders), and it is obvious that a polynomial of degree at most $m-1$ works for $h_1$, hence a polynomial of degree at most $n$ works for $h=(z-z_0)^{n-m}h_1$, so we are done!
H: In $\mathbb{Z}[X]$ what inputs can X take? In the polynomial rings $\mathbb{Z}[X]$, or more generally $R[X]$, what inputs can we put in the place of $X$? For example, in $\mathbb{Z}[X]$, $X$ can be replaced by any real or complex number. But in general, can we put any member of any set in the place of $X$? AI: In $R[X]$, $X$ is an indeterminate, i.e. a symbolic variable. When you say you "put" some $x$ in place of $X$, you're talking about the mapping $f_x: \sum_i r_i X^i \mapsto \sum_i r_i x^i$. If $R$ is a commutative ring and $x$ a member of some commutative ring $S$ that contains $R$, this is a ring homomorphism of $R[X]$ into $S$.
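A small illustration of the evaluation map $f_x$ (a Python sketch, not part of the original answer; the helper `evaluate` is hypothetical): the same integer polynomial is evaluated at a complex number and at a matrix. A matrix ring is not commutative, but the subring generated by the integer scalars and a single matrix is, so the evaluation is still a ring homomorphism.

    import numpy as np

    def evaluate(coeffs, x, one):
        # f_x(p) = sum_k a_k x^k; coeffs listed lowest degree first,
        # `one` is the multiplicative identity of the target ring
        result, power = 0 * one, one
        for a in coeffs:
            result = result + a * power
            power = power @ x if isinstance(x, np.ndarray) else power * x
        return result

    p = [3, -2, 1]  # p(X) = X^2 - 2X + 3 in Z[X]
    print(evaluate(p, 1 + 2j, 1))                  # X -> a complex number
    M = np.array([[0, 1], [1, 0]])
    print(evaluate(p, M, np.eye(2, dtype=int)))    # X -> a 2x2 integer matrix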
H: Reference for a real algebraic geometry problem Disclaimer: I am not a mathematician by training. I encountered the following problem in my research. Assume that I have $N$ real variables $x_1, x_2, \dots, x_N$. I am given $N$ polynomials in the $x_i$ unknowns, each with a different degree. More specifically: $$\begin{aligned} P_1 &= \sum_i x_i - c_1\\ P_2 &= \sum_i x_i^2 - c_2\\ &\qquad\vdots \\ P_N &= \sum_i x_i^N - c_N \end{aligned}$$ where $c_1, c_2, \dots, c_N$ are given real coefficients. I need to find, if they exist, real solutions of the above equations. I am asking for references where I can learn the tools needed to attack this type of problem. Thank you. AI: There are Newton's formulas expressing elementary symmetric polynomials in terms of power sums. Since your equations say that the power sums $\sum_i x_i^k$ equal $c_k$, you can express the elementary symmetric polynomials $A_k$ in terms of your $c_k$. Then your solutions $(x_1,\ldots,x_N)$ are permutations of the set of roots of the polynomial $$x^N+A_1x^{N-1}+\cdots+A_N.$$ If you want a real solution, all solutions of this equation must be real. To determine this and to find the roots, there are many methods; the most common is Sturm's method. It is an algorithm which can be used to find out whether all roots of a polynomial are real, and to localize them. Once the roots are localized, Newton's method can be used to find them with arbitrary precision. Newton's method almost always converges when all roots are real, and always converges if you choose initial approximations sufficiently well (using the previous step, localization). Before running Newton's method, make sure that there are no multiple roots, and if there are, get rid of them. To do this, use the discriminant.
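A small implementation of this recipe (a Python sketch, not part of the original answer; the function name is hypothetical, and numerical root-finding stands in for Sturm's method): Newton's identities give the elementary symmetric polynomials $e_k$ from the power sums $c_k$, and the candidate solutions are the roots of the resulting polynomial.

    import numpy as np

    def roots_from_power_sums(c):
        # Newton's identities: e_k = (1/k) * sum_{i=1..k} (-1)^(i-1) e_{k-i} c_i
        N = len(c)
        e = [1.0] + [0.0] * N
        for k in range(1, N + 1):
            e[k] = sum((-1) ** (i - 1) * e[k - i] * c[i - 1]
                       for i in range(1, k + 1)) / k
        # x^N - e_1 x^{N-1} + e_2 x^{N-2} - ... + (-1)^N e_N
        return np.roots([(-1) ** k * e[k] for k in range(N + 1)])

    x = np.array([1.0, 2.0, 3.0])
    c = [np.sum(x ** k) for k in (1, 2, 3)]   # c = [6, 14, 36]
    print(np.sort(roots_from_power_sums(c)))  # approx [1, 2, 3], all real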
H: Why can't I minimize the squared distance? My question is "Why can't I minimize the squared distance?" It would be a lot easier, but it yields the wrong answer. I set out to write an example for using Newton's Method for Multiple equations and decided to show the following problem. Given a line, such as $\space 4x+3y-7=0 \space$ and three points, $A=(1,4), \space B=(3,4), \space C=(-1,-1)$, find the point on the line that is the shortest summed distance to all three points. A nearly identical problem has been discussed in this forum here. How to find the point on a line which has the minimum length to three points? I used Lagrange optimization to set up my problem and solved it with Geogebra's CAS, before writing code in SAGEmath. The Lagrange system is as follows. $$\mathscr{L}(x,y,\lambda)=\Vert{A-(x,y)}\Vert+\Vert B-(x,y)\Vert+\Vert C-(x,y)\Vert -\lambda \left(4x+3y-7\right) \tag{EQ 1} \label{EQ_1}$$ I took the partial derivatives, set them to zero and solved the system. However, it was a lot easier to manipulate the derivatives if I squared the equations first. That is, I wanted my Lagrange system to be $$\mathscr{L}(x,y,\lambda)=\left[(1-x)^2+(4-y)^2\right]+\left[(3-x)^2+(4-y)^2\right]+\left[(-1-x)^2+(-1-y)^2\right] -\lambda \left(4x+3y-7\right) \tag{EQ 2} \label{EQ 2}$$ When I am only using one point, this works perfectly and I rationalize it by saying, ". . .this has to be a non-negative function since it is difficult to imagine a negative distance. Any non-negative function will have the same minimum as its square since at every point it would be multiplied by itself." If I use $\eqref{EQ 2}$ it indeed minimizes the squared distance from the 3 points to the line, but that distance is not the same as the minimum distance to the line. I have a working example at https://www.geogebra.org/m/hptbypvy . It takes 20 or 30 sec to load and I have to tell my browser to wait a couple of times. AI: You are probably thinking of the following fact: minimizing $\sqrt{d^2}$ is equivalent to minimizing $d^2$ because $\sqrt{\cdot}$ is a monotonic increasing function. But once you introduce sums you throw things off. Minimizing $\sum\sqrt{d_i^2}=\sum|d_i|$ is not equivalent to minimizing $\sum d_i^2$. Look at a simple example. Take $d_i=(i-x)$, for $i=1, 2, 6$. The minimum of $\sum d_i^2=(1-x)^2+(2-x)^2+(6-x)^2=3x^2-18x+41$ occurs at $x=3$. But the minimum of $\sum\sqrt{d_i^2}=|1-x|+|2-x|+|6-x|$ occurs at $x=2$. This is related to the fact that the mean minimizes the sum of squared deviations, while the median minimizes the sum of absolute deviations.
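A grid-search comparison of the two objectives for the example points $1, 2, 6$ (a Python sketch, not part of the original answer) makes the mean-versus-median distinction concrete:

    import numpy as np

    pts = np.array([1.0, 2.0, 6.0])
    xs = np.linspace(0.0, 7.0, 70001)
    sq = ((pts[:, None] - xs) ** 2).sum(axis=0)   # sum of squared distances
    ab = np.abs(pts[:, None] - xs).sum(axis=0)    # sum of absolute distances
    print(xs[sq.argmin()], xs[ab.argmin()])       # 3.0 (the mean) vs 2.0 (the median)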
H: Finding $a$ and $b$ such that $\lim _{x\to \infty}\left(\frac{x^2+1}{x+1}-ax-b\right)=0 $ I'm having trouble understanding limits at infinity. For instance, if $$\lim _{x\to\infty}\left(\frac{x^2+1}{x+1}-ax-b\right)=0 $$ where $a$ and $b$ are some real constants; find $a$ and $b$. As per the solution the value of $a$ must be $1$ and the value of $b$ is $-1$. Why? AI: We have \begin{align}\frac{x^2+1}{x+1}-ax-b&=\frac{x^2+1-ax(x+1)-b(x+1)}{x+1}\\&=\frac{x^2(1-a)+x(-a-b)+(1-b)}{x+1}.\end{align} Therefore $$\lim _{x\to \infty}\left(\frac{x^2+1}{x+1}-ax-b\right)=\lim _{x\to \infty}\left(\frac{x^2(1-a)+x(-a-b)+(1-b)}{x+1}\right)=0,$$ only if the degree of the numerator is less than the degree of the denominator. In order for this to happen, $a$ must be equal to $1$ so that the $x^2$ term in the numerator is $0$. Then, the $x$ term in the numerator must also be $0$ and we find: $$-a-b=0 \implies -1-b=0 \implies b=-1.$$
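SymPy confirms the choice (a sketch, not part of the original answer):

    import sympy as sp

    x = sp.symbols('x')
    print(sp.limit((x**2 + 1) / (x + 1) - x + 1, x, sp.oo))  # 0 with a = 1, b = -1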
H: Show that the power series $\sum_{n=0}^\infty \frac{z^n}{n!}$ converges uniformly for all $z$. Show that the power series $\sum_{n=0}^\infty \frac{z^n}{n!}$ converges uniformly for all $z$. I know that the definition for uniform convergence is: We say $\{f_n\}$ converges to $f$ uniformly on a subset $S$ of $\Omega$ iff for every $\epsilon>0$ there is an $n_0\in\mathbb{N}$ so that if $n>n_0$ then $|f_n(z)-f(z)|<\epsilon$ for all $z\in S$. But I'm not sure whether I can use it the same way for a power series, or whether there are different rules there. Can I? Also, what would $f(z)$ be for this? I get that $f_n(z)=\frac{z^n}{n!}$, but how do I get $f(z)$? AI: When considering series of functions such as $z \mapsto \sum_{n=0}^\infty g_n(z)$, you take $f_n$ as the partial sum: $$ f_n(z) = \sum_{k=0}^n g_k(z). $$ However, the exponential series does not converge uniformly on all of $\mathbb C$. It does, however, converge uniformly on every bounded subset of $\mathbb C$. Indeed, if $|z| \leq M$: $$ \left| \frac{z^n}{n!} \right| = \frac{|z|^n}{n!} \leq \frac{M^n}{n!} $$ and the series $\sum_n \frac{M^n}{n!}$ converges (its sum is $e^M$). Is that clearer?
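A numerical illustration of the resulting tail bound (a Python sketch, not part of the original answer; $M=3$ and $n=12$ are illustrative): on the circle $|z|=M$, where the error is largest, the partial-sum error is controlled by the tail of $e^M$.

    import numpy as np
    from math import exp, factorial

    M, n = 3.0, 12
    z = M * np.exp(1j * np.linspace(0, 2 * np.pi, 1000))
    partial = sum(z ** k / factorial(k) for k in range(n + 1))
    err = np.abs(np.exp(z) - partial).max()
    tail = exp(M) - sum(M ** k / factorial(k) for k in range(n + 1))
    print(f"sup error = {err:.3e}, tail bound = {tail:.3e}")  # err <= tail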
H: Pontryagin dual of an inverse limit For a locally compact Hausdorff abelian topological group $G$, let $G^\vee = \mathrm{Hom}_{cts}(G, \mathbb{R}/\mathbb{Z})$ denote its Pontryagin dual, endowed with the compact-open topology. It is more or less easy to show that, if $(M_i)_i$ is a directed system of such groups, then $$ (\varinjlim M_i)^\vee \cong \varprojlim M_i^\vee $$ with the obvious morphisms on the RHS. I am interested in the somehow dual question: if $\{M_i\}$ is an inverse system of such groups, is it true that $$ (\varprojlim M_i)^\vee \cong \varinjlim M_i^\vee $$ ? On one hand, Pontryagin duality (the double dual of something is isomorphic to that thing) would seem to indicate that the above is true: $$ (\varprojlim M_i)^\vee \cong (\varprojlim M_i^{\vee\vee})^\vee \color{blue}{\cong} (\varinjlim M_i^\vee)^{\vee\vee} \color{blue}{\cong} (\varinjlim M_i^\vee). $$ However this seems to contradict the following example: consider the inverse system $(\mathbb{Z})_i$ with $i \in \mathbb{N}$ and transition maps equal to multiplication by e.g. $p$ a prime. Here $\mathbb{Z}$ is given the discrete topology. Then on one hand we have $\mathbb{Z}^\vee \cong \mathbb{R}/\mathbb{Z}$ and on the other $\varprojlim \mathbb{Z} = \{0\}$. How can it be that $$ (\varprojlim \mathbb{Z})^\vee = \{0\}^\vee = 0 \stackrel{?}{\cong} \varinjlim \mathbb{R}/\mathbb{Z} $$ ? If my computations are correct, the transition maps on the RHS are multiplication by $p$, and that direct limit certainly does not seem trivial. P.S.: I guess for the two duality properties of limits stated at the beginning, one needs to ask the corresponding direct/inverse limit to be locally compact (in order to take duals). I'm not sure if that is really an extra condition or always the case, but it holds in the example at any rate. Edit: In the two isomorphisms highlighted in blue, one is tacitly assuming that the inverse/direct limit in question is indeed (locally compact, abelian) Hausdorff to apply Pontryagin duality. This is what fails in the example. AI: If $\mathbf{LCA}$ denotes the category of locally compact abelian groups, then the Pontryagin dual establishes an equivalence of categories $\mathbf{LCA}^{\mathrm{op}}\to\mathbf{LCA}$. In particular, this means that sending LCA's to their duals will send limits to colimits and colimits to limits (inverse / direct limits being another term for these). Thus, the answer to your original question is yes, and your argument for proving this is correct. Edit: originally, I had the arrows on backwards in the computation. Regarding your example, the limit of $\dots\xrightarrow{\times p}\mathbb Z\xrightarrow{\times p}\mathbb Z$ is indeed trivial (like you mentioned), and the dual diagram $\mathbb R/\mathbb Z\to\mathbb R/\mathbb Z\to\dots$ is given by multiplication by $p$ again, like you mentioned. However, the colimit of that diagram inside $\mathbf{LCA}$ is subtle, as it has to still be locally compact and Hausdorff. This means it can't be necessarily computed in the ambient category of general topological groups. If we take the colimit in topological groups, the resulting group will not be Hausdorff: if we take $p=3$ for concreteness, then the element $\frac12$ (being fixed under multiplication by $p$) will be nonzero in the colimit in topological groups, but there is no open set separating it from the neutral element. For any open neighbourhood of $\frac12$ contains some element of the form $\frac n{3^m}$ for $n$, $m$ integers, which vanishes in the colimit. 
However, we can correct this colimit to be a colimit in $\mathbf{LCA}$ by applying the "Hausdorffification" done in this answer, and it will (has to!) turn out that the correction will make the colimit trivial.
H: Showing that there is always an unbounded $f:X \to \mathbb{R}$ if $X$ is infinite, without choice Consider the claim "for any infinite set $X$, there exists an unbounded $f:X \to \mathbb{R}$". If we assume the axiom of choice, this claim is trivial to prove. Indeed, given choice we know there exists an $S \subset X$ which is countably infinite, and thus we map that set to $\mathbb{N} \subset \mathbb{R}$ and map each $x \in X \setminus S$ to $0$. Can this be proven without choice? AI: This is absolutely not provable. If $X$ is an amorphous set, which is an infinite set which cannot be split into two disjoint infinite subsets, then every function from $X$ into a linearly ordered set, in particular $\Bbb R$, has a finite range. And of course, the existence of amorphous sets is consistent with $\sf ZF$. More generally, $X$ cannot be mapped onto $\Bbb N$ if and only if $X$ cannot be mapped onto an unbounded subset of $\Bbb R$. And we know that $X$ cannot be mapped onto $\Bbb N$ if and only if its power set is Dedekind-finite.
H: values of q for which the tangent integral converges Find the values of $q$ for which $$\int^{1}_{0}\frac{1}{(\tan (x))^{q}}dx$$ converges. What I tried: Let $\tan x=t.$ Then $\displaystyle dx=\frac{1}{\sec^2 (x)}\,dt=\frac{1}{1+t^2}\,dt$ And changing limits $$I=\int^{\tan (1)}_{0}\frac{1}{(1+t^2)t^{q}}dt<\int^{\tan(1)}_{0}\frac{1}{t^2\cdot t^{q}}dt$$ $$I<\int^{\tan(1)}_{0}t^{-q-2}dt=\frac{1}{-q-1}\bigg(t^{-q-1}\bigg)\bigg|^{\tan(1)}_{0}=\frac{1}{-q-1}\cdot (\tan (1))^{-q-1}$$ Here $q\in\mathbb{R}\setminus\{-1\}$, for which the integral converges. Can anyone please explain whether my solution is right? If not, how do I solve it? Help me please. Thanks AI: As $x\to0$, $\tan x\sim x$ and so $I$ converges iff $\int_0^1\frac{dx}{x^q}$ converges, that is iff $q<1$.
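A numerical probe of the answer's criterion (a Python sketch, not part of the original answer): truncate the integral at a small $\epsilon>0$ and watch the behaviour as $\epsilon\to 0$.

    from math import tan
    from scipy.integrate import quad

    for q in (0.5, 1.0, 1.5):
        vals = [quad(lambda x: tan(x) ** (-q), eps, 1)[0]
                for eps in (1e-2, 1e-4, 1e-6)]
        print(q, [round(v, 3) for v in vals])
    # q = 0.5 stabilises; q = 1.0 and q = 1.5 keep growing, matching "converges iff q < 1"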
H: How can I prove that $16 \lt {1+\frac1{\sqrt2}+\frac1{\sqrt3}+\cdots+\frac1{\sqrt{80}}<18}$? What I want to prove is this: $$16 \lt {1+\frac1{\sqrt2}+\frac1{\sqrt3}+\cdots+\frac1{\sqrt{80}}<18}$$ I haven't encountered any problem of this kind before; how do we proceed? Making approximations doesn't seem feasible, so all suggestions are welcome. AI: Let's look at the telescoping sum $$S=(\sqrt2-1)+(\sqrt3-\sqrt2)+(\sqrt4-\sqrt3)+\cdots+(\sqrt{81}-\sqrt{80})=9-1=8.$$ Then $$S=\sum_{n=1}^{80}(\sqrt{n+1}-\sqrt n)=\sum_{n=1}^{80}\frac1{\sqrt{n+1}+\sqrt n}.$$ So $$S<\sum_{n=1}^{80}\frac1{2\sqrt n}$$ so $$\sum_{n=1}^{80}\frac1{\sqrt n}>2S=16.$$ Also $$S>\sum_{n=1}^{80}\frac1{2\sqrt{n+1}}$$ so that $$\sum_{n=1}^{80}\frac1{\sqrt{n+1}}<2S=16.$$ But $$\sum_{n=1}^{80}\frac1{\sqrt n} =\sum_{n=1}^{80}\frac1{\sqrt{n+1}}+1-\frac19$$ and we get $$\sum_{n=1}^{80}\frac1{\sqrt n}<17-\frac19.$$
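Direct computation (a Python sketch, not part of the original answer) shows the sum is about $16.48$, comfortably inside the proven bounds:

    from math import sqrt

    S = sum(1 / sqrt(n) for n in range(1, 81))
    print(S, 16 < S < 17 - 1 / 9)  # approx 16.48, True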
H: Integrating factor for an ordinary differential equation Our teacher asked this question on the final exam; I attempted it, but my solution was graded 0, so I am asking it here in the hope that you can help. The equation $$ xdy -(x^2+y^2+y)dx = 0 $$ is given, and $$ \mu = (x^2+y^2)^\Omega $$ is the integrating factor that makes the differential equation exact. The question: what is $\Omega$? AI: The ODE $Mdx+Ndy=0$ is exact if $\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}$. The given ODE $$-(x^2+y^2+y)dx+xdy=0~~~~(1)$$ is not exact, but if we multiply by $\mu=(x^2+y^2)^{-1}$ we get $$-(1+\frac{y}{x^2+y^2}) dx+\frac{x}{x^2+y^2}dy=0 ~~~~(2)$$ Then it becomes exact, as $$\frac{\partial M}{\partial y}=\frac{\partial N}{\partial x}=\frac{y^2-x^2}{(x^2+y^2)^2}$$ Then the solution of (1) or (2) is written as $$\int -(1+\frac{y}{x^2+y^2}) dx ~~\text{(treat $y$ as constant)}+ \int 0\, dy=C$$ $$\implies -x-\tan^{-1}\frac{x}{y}=C.$$ Using $\tan^{-1}\frac{x}{y}=\frac{\pi}{2}-\tan^{-1}\frac{y}{x}$ and absorbing signs and constants into $C$, this is $$x-\tan^{-1}\frac{y}{x}=C.$$ So in the integrating factor $\mu$, $\Omega=-1$. For OP: Take $\mu=(x^2+y^2)^\Omega$ and multiply (1) by it. The new $M$ and $N$ are $$M=-(x^2+y^2)^\Omega (x^2+y^2+y),~~ N=x(x^2+y^2)^\Omega$$ Then $$\frac{\partial M}{\partial y}=(x^2+y^2)^{\Omega-1}[-(\Omega+1)(x^2+y^2)2y-x^2-y^2-2\Omega y^2]~~~~(3)$$ $$\frac{\partial N}{\partial x}=(x^2+y^2)^{\Omega-1}[x^2+y^2+2\Omega x^2]~~~~(4)$$ Check that $\Omega=-1$ makes (3) and (4) identical.
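A SymPy check of both the exactness with $\Omega=-1$ and the implicit solution (a sketch, not part of the original answer):

    import sympy as sp

    x, y = sp.symbols('x y')
    mu = (x**2 + y**2) ** -1                      # integrating factor, Omega = -1
    M, N = -mu * (x**2 + y**2 + y), mu * x        # coefficients of dx and dy
    print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, so the ODE is exact

    Y = sp.Function('Y')
    F = x - sp.atan(Y(x) / x)                     # candidate implicit solution F = C
    dYdx = sp.solve(sp.Eq(F.diff(x), 0), Y(x).diff(x))[0]
    print(sp.simplify(dYdx - (x**2 + Y(x)**2 + Y(x)) / x))  # 0, so it solves the ODE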
H: Solutions of biquadratic equation being successive members of arithmetic progression What should be the relationship between $p$ and $q$, so that the equation $x^4+px^2+q=0$ has four solutions which are successive members of an arithmetic progression? The answer is $\sqrt{q}/p = -3/10$, but I have no clue what is going on here and would like to know. All I could deduce before seeing the answer is that $p<0$ and $q>0$. I would appreciate it if someone who can solve this replies and helps. Thanks AI: Ignoring the case $p=q=0$, the solutions to this equation will be of the form $x=\pm \alpha, \pm \beta \ (\alpha\gt\beta \gt 0)$ and the arithmetic sequence (in order) would be $$-\alpha, -\beta,\ \beta, \alpha$$ By definition of an arithmetic progression, $$\alpha -\beta = 2\beta \implies \alpha =3\beta$$ Applying Vieta’s formulas on $x^4+px^2+q=0$, $$\alpha^2+\beta^2 = -p \\ \alpha^2 \beta^2 = q $$$$\implies10\beta^2 =-p \\ 3\beta^2 =\sqrt q $$ or $$\frac{\sqrt q}{p} =\frac{-3}{10}$$
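A numerical instance (a Python sketch, not part of the original answer; $\beta=1.7$ is an arbitrary illustrative choice):

    import numpy as np

    beta = 1.7
    p, q = -10 * beta**2, 9 * beta**4
    print(np.sort(np.roots([1, 0, p, 0, q])))  # -3b, -b, b, 3b: arithmetic progression
    print(np.sqrt(q) / p)                      # -0.3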
H: Question about SVD and orthogonal matrices Let $X$ be a $m \times n$ matrix. By SVD, I obtain $X = UDV^T$, where $U$ and $V$ are both orthogonal matrices, and $D$ is a diagonal matrix. I think the following is true (but not sure why): $(VDV^T + aI_n)^{-1} = V(D + aI_n)^{-1}V^T$ where $a\in \mathbb{R}$ is some scalar, and $I_n$ is a $n\times n$ identity matrix. Can anyone explain to me why one is able to move $V$ to the left of the matrix inverse and $V^T$ to the right? AI: I don't know what the SVD has to do with it but yes, your manipulation is correct. We are given that $D$ is diagonal and $V$ is orthogonal. Note that $V^{-1} = V^T$ and $(V^T)^{-1} = V$. So, we have $$ \begin{align} (VDV^T + aI_n)^{-1} &= (VDV^T + V(a I_n)V^T)^{-1} \\ & = (V(D + aI_n)V^T)^{-1} \\ & = (V^T)^{-1}(D + aI_n)^{-1}V^{-1} = V(D + aI_n)^{-1}V^T. \end{align} $$
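A random numerical check of the identity (a Python sketch, not part of the original answer):

    import numpy as np

    rng = np.random.default_rng(0)
    n, a = 4, 0.7
    V, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal matrix
    D = np.diag(rng.uniform(1.0, 5.0, size=n))
    lhs = np.linalg.inv(V @ D @ V.T + a * np.eye(n))
    rhs = V @ np.linalg.inv(D + a * np.eye(n)) @ V.T
    print(np.allclose(lhs, rhs))  # True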
H: Does $L^1$ imply $L^p$ on finite measure spaces? If $(\Omega,\mu)$ is a finite measure space, i.e., if $\mu(\Omega)<\infty$, then does $f\in L^1(\Omega)$ imply that $f\in L^p(\Omega)$ for every $p$? This is just a statement that I feel like I've heard before, but I don't have a lot of great intuition for $L^p$ spaces yet, unfortunately. AI: No, it's the other way around. If $p>1$ and the measure space is finite then $f\in L^p$ implies $f\in L^1$. The proof uses Hölder's inequality. Take $q$ such that $\frac{1}{p}+\frac{1}{q}=1$. Then: $$\int_X |f|\leq \left(\int_X |f|^p\right)^{\frac{1}{p}} \left(\int_X 1^q\right)^{\frac{1}{q}}<\infty,$$ where $\int_X 1<\infty$ follows exactly from the fact that $X$ is a finite measure space. The converse is false. For example, consider the restriction of the Lebesgue measure to $[0,1]$. Then $\frac{1}{\sqrt{x}}$ is in $L^1$ but not in $L^2$, because $\int_0^1 \frac{1}{\sqrt{x}}\,\mathrm{d}x<\infty$ but $\int_0^1\frac{1}{x}\,\mathrm{d}x=\infty$.
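The counterexample can be seen numerically (a Python sketch, not part of the original answer): the $L^1$ integral of $f(x)=1/\sqrt{x}$ stabilises near $2$, while the integral of $|f|^2=1/x$ grows without bound as the cutoff shrinks.

    from math import sqrt
    from scipy.integrate import quad

    for eps in (1e-2, 1e-5, 1e-8):
        L1 = quad(lambda x: 1 / sqrt(x), eps, 1)[0]   # integral of |f|
        L2 = quad(lambda x: 1 / x, eps, 1)[0]         # integral of |f|^2
        print(f"eps={eps:.0e}: L1={L1:.4f}, L2={L2:.2f}")
    # L1 -> 2 while L2 ~ -log(eps) -> infinity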