H: Prove that $\exists !c \in \mathbb{R} \exists ! x \in \mathbb{R} (x^2 + 3x + c = 0)$ This is an exercise from Velleman's "How To Prove It". I am struggling with how to finish the final part of the uniqueness proof, so any hints would be appreciated! a. Prove that there is a unique real number $c$ such that there is a unique real number $x$ such that $x^2 + 3x + c = 0$. Proof: Let $c = \frac{9}{4}$. Let $x = -\frac{3}{2}$. It follows that $x^2 + 3x + c = \frac{9}{4} - \frac{9}{2} + \frac{9}{4} = 0$. To show that $x$ is unique, let $y \in \mathbb{R}$ be arbitrary such that $y^2 + 3y + c = 0$. So $y^2 + 3y + \frac{9}{4} = 0$, and $(y+\frac{3}{2})^2 = 0$. It immediately follows that $y = -\frac{3}{2} = x$. Now to show that $c$ is unique, let $d, e \in \mathbb{R}$ be arbitrary such that $\exists ! x \in \mathbb{R} (x^2 + 3x + d = 0)$ and $\exists ! x \in \mathbb{R} (x^2 + 3x + e = 0)$. This means that $\exists x \in \mathbb{R}(x^2 + 3x + d = 0)$, $\exists x \in \mathbb{R}(x^2 + 3x + e = 0)$, $\forall y \in \mathbb{R} \forall z \in \mathbb{R} ((y^2 + 3y +d = 0 \wedge z^2 + 3z + d = 0 )\rightarrow y =z)$, and $\forall y \in \mathbb{R} \forall z \in \mathbb{R} ((y^2 + 3y +e = 0 \wedge z^2 + 3z + e = 0 )\rightarrow y =z)$. (How can we show that $d = e$ to finish this?) AI: Your approach is complicated and I do not recommend it. Instead, consider the quadratic formula applied here: $$ x = \frac{-3 \pm \sqrt{9 - 4c}}{2} $$ Let $d$ be another number such that there exists a unique $x$ in which $x^2 + 3x + d = 0$. We note that $d \not> \frac{9}{4}$, as otherwise $x$ it not real (so no such real $x$ exists). We also see that $d \not< \frac{9}{4}$, as otherwise this $x$ is not unique. Thus, $d = \frac{9}{4} = c$ necessarily.
H: All integer solutions of $x^3-y^3=2020$. Find all integer pairs $(x,y)$ satisfying $$x^3-y^3=2020\,.$$ First, $x^3-y^3=(x-y)(x^2+xy+y^2)=2020$ and $2020=2^2\cdot 5 \cdot 101$. But what next? Can it be worked out by using modulo? Or how? Any idea? Thanks in advance. AI: There are no solutions: $x^3,y^3\equiv0$ or $\pm1\pmod7$, so $x^3-y^3\equiv0,\pm1$ or $\pm2\pmod7$, but $2020\equiv4\pmod7$.
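A quick computational sanity check of the argument above (a sketch in Python, not part of the original answer): it lists the cube residues modulo 7 and confirms that 2020 mod 7 is not a difference of two cube residues.

```python
# Cube residues modulo 7: only 0, 1 and 6 (i.e. 0 and +-1) occur.
cube_residues = sorted({(x ** 3) % 7 for x in range(7)})
print(cube_residues)       # [0, 1, 6]

# All possible values of x^3 - y^3 modulo 7.
diffs = sorted({(a - b) % 7 for a in cube_residues for b in cube_residues})
print(diffs)               # [0, 1, 2, 5, 6]

# 2020 is congruent to 4 mod 7, which is not attainable.
print(2020 % 7)            # 4
print(2020 % 7 in diffs)   # False -> no integer solutions
```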
H: If $\lambda \not= 0$ and $\lambda x = 0$, then $x = 0$ I am trying to use the definition of vector spaces to prove that, if $\lambda \not= 0$ and $\lambda x = 0$, then $x = 0$. One proof I have seen begins as follows: $\lambda 0 = 0$ for each $\lambda$ since $\lambda 0 = \lambda(0 + 0) = \lambda 0 + \lambda 0$. It then proceeds as follows: Therefore, since $\lambda \not= 0$, $$\begin{align} \lambda x = 0 &\Rightarrow \lambda^{-1} (\lambda x) = \lambda^{-1} 0 \\ &\Rightarrow (\lambda^{-1} \lambda) x = 0 \\ &\Rightarrow 1x = 0 \\ &\Rightarrow x = 0 \end{align}$$ It is this part that I am confused about: $\lambda 0 = 0$ for each $\lambda$ since $\lambda 0 = \lambda(0 + 0) = \lambda 0 + \lambda 0$. It is not clear to me what purpose this has in the proof, nor is it clear to me that it is even valid (that is, it is not clear to me that it is a valid claim, given the definition of a vector space). This part seems out-of-place to me. I would greatly appreciate it if people would please take the time to explain this. AI: Read the main proof again: $$ \begin{align} \lambda x=0 &\Rightarrow\lambda^{-1}(\lambda x)=\color{red}{\lambda^{-1}0}\\ &\Rightarrow(\lambda^{-1}\lambda)x=\color{red}{0}\\ &\Rightarrow\cdots\end{align} $$ One needs to justify that $\lambda^{-1}0=0$. So, it suffices to prove that $\lambda0=0$ for every scalar $\lambda$. I think this is the purpose of the leading line of the proof.
H: power series of $f(z)=\frac{1}{z^2+1}$ at $1$ In an exercise I am asked to find the power series of the function $f(z)=\frac{1}{z^2+1}$ centered at the point $1$. My approach: $$ \begin{align} \frac{1}{z^2+1} & = \frac{1}{z^2+ 2 - 1} \\ \\& = \frac{1}{2}\cdot\frac{1}{1-\frac{1}{2}(1-z^2)} \\ \\& = \frac{1}{2} \sum_{n \geq 0} \frac{1}{2^n}(-1)^n(z^2-1)^n \\ \\& = \sum_{n \geq 0} \frac{(-1)^n}{2^{n+1}}(z^2-1)^n \end{align} $$ Using the Binomial theorem we get that: $$\begin{align} \sum_{n \geq 0} \frac{(-1)^n}{2^{n+1}}(z^2-1)^n & = \sum_{n \geq 0} \frac{(-1)^n}{2^{n+1}}\sum_{k=0}^n {n \choose k} (-1)^{n-k} z^{2k} \\ \\ & = \sum_{n \geq 0}\sum_{k=0}^n \frac{(-1)^{2n-k}}{2^{n+1}} {n \choose k} z^{2k} \\ \\ & = \sum_{k \geq 0}\underbrace{\sum_{n\geq k} \frac{(-1)^{2n-k}}{2^{n+1}} {n \choose k}}_{:=b_k} z^{2k} \\ \\ & = \sum_{k \geq 0} b_k z^{2k} \end{align}$$ I got this result but I think that this is wrong for the following reason: A power series of a function centered at $a$ is written as: $\sum a_n (z-a)^n$, and I did not end up with something of that form So how can I solve this problem and find a power series for the function $f$ centered at $a=1$? AI: hint $$f(z)=\frac{1}{1+z^2}$$ $$g(z)=f(z+1)=\frac{1}{1+(z+1)^2}$$ $$=\frac{1}{2+2z+z^2}$$ $$=\frac 12 \frac{1}{1+z+\frac{z^2}{2}}$$ $$=\frac 12\frac{1}{1-X}$$ with $$X=-z-\frac{z^2}{2}$$ expand $ g(z)$ around $ z=0 $ as $$\frac 12(1+X+X^2+...)=\sum_{n=0}^\infty a_nz^n$$ to get the expansion of $ f $ around $ z=1 $ by using $$f(z)=g(z-1)=\sum_{n=0}^\infty a_n(z-1)^n$$
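To sanity-check the hint above, the first few coefficients of the expansion around $z=1$ can be produced symbolically (a sketch assuming sympy is installed; not part of the original hint):

```python
import sympy as sp

z = sp.symbols('z')
f = 1 / (1 + z**2)

# Taylor expansion of f around z = 1, up to order 5.
print(sp.series(f, z, 1, 5))
# expected: 1/2 - (z - 1)/2 + (z - 1)**2/4 - (z - 1)**4/8 + O((z - 1)**5)
```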
H: Prove that $0$ is the only $2\pi$-periodic solution of $\ddot{x}+3x+x^3=0$ Prove that $0$ is the only $2\pi$-periodic solution of $\ddot{x}+3x+x^3=0$. I don't know how to deal with this non-linear differential equation. I tried to consider $\ddot{x}(t+2\pi)+3x(t+2\pi)+x^3(t+2\pi)=0$ but with no success... I need to prove this in order to solve a problem of dependence on initial conditions. Can you please help me? AI: Find an $f(x,y)$ for which $f(x,\dot x)=c$ is constant. Convert that to $\dot x=g(x)$. The period is then $t= \int \frac{dx}{g(x)}$.
H: Evaluate limit of $\lim_{x\to 1} \frac{e^{(n!)^x}-e^{n!}}{x-1}$ $$\lim_{x\to 1} \frac{e^{(n!)^x}-e^{n!}}{x-1}$$ Since it is $e^{(n!)^x}$ and not $e^{x\cdot(n!)}$, I can't use the general form for $\frac{e^x-1}{x}$ as $x\to 0$. Could somebody help? It would be better without using l'Hôpital's rule. AI: Let $$f(x)=e^{(n!)^x}=e^{e^{x\log{n!}}}$$ The limit we're looking at is the derivative at $x=1$ of $f$. So let's find the derivative of $f=e^{g(x)}$, where $g(x)=e^{x\log{n!}}$: $$f'(x)=g'(x)e^{g(x)}=\log(n!)\,e^{x\log{n!}} e^{e^{x\log{n!}}}$$ Now plug in $x=1$: $$f'(1)=n!\log(n!)\,e^{n!}$$
H: Solving system of linear equations using orthogonal matrix I'm given the following matrix: $$ A = \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{pmatrix} $$ We're asked to determine if this matrix is orthogonal. I did this successfully using the rule that a matrix $A$ is orthogonal iff $A^T = A^{-1}$. The second part of the question however, I find to be difficult to answer since I have no idea how to start. The question is as follows: How would you determine a solution for the system of linear equations $A\vec{x} = \vec{b}$ using the orthogonal matrix above. I have determined that $A\vec{x} = \vec{b}$ translates to: $$ \begin{pmatrix} \frac{1}{\sqrt{2}} & 0 & -\frac{1}{\sqrt{2}} \\ 0 & 1 & 0 \\ \frac{1}{\sqrt{2}} & 0 & \frac{1}{\sqrt{2}} \end{pmatrix} \begin{pmatrix} x \\y \\z \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} $$ yet I have no idea how the above matrix could help me solve this system easily. Anyone have any idea on how you would answer this type of question? AI: The point of showing your matrix was orthogonal was that you then know the inverse. Once you know that the columns/rows are pairwise perpendicular (dot product zero) and that each column/row is a unit vector, then you know the matrix is orthogonal. $$ \begin{aligned} v_1&= \begin{pmatrix} \dfrac{1}{\sqrt{2}} \\ 0 \\ \dfrac{1}{\sqrt{2}} \end{pmatrix} \\ v_2&= \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \\ v_3&= \begin{pmatrix} -\dfrac{1}{\sqrt{2}} \\ 0 \\ \dfrac{1}{\sqrt{2}} \end{pmatrix} \\ v_1 \cdot v_1&= \dfrac{1}{2} + 0 + \dfrac{1}{2}= 1 \\ v_2 \cdot v_2&= 0 + 1 + 0 = 1 \\ v_3 \cdot v_3&= \dfrac{1}{2} + 0 + \dfrac{1}{2} = 1 \\ v_1 \cdot v_2&= 0 \\ v_1 \cdot v_3&= 0 \\ v_2 \cdot v_3&= 0 \end{aligned} $$ This shows that the matrix (call it $U$) is orthogonal. Then if $U$ is an orthogonal matrix, we know $U^{-1}= U^T$. Then knowing the inverse, we can solve the system $Ux=b$ via $$ \begin{aligned} Ux&= b \\ U^{-1}Ux&= U^{-1}b \\ U^TUx&= U^Tb \\ x&= U^Tb \end{aligned} $$ NOTE. I originally used the word unitary. A unitary matrix is just the complex number version of an orthogonal matrix. So if the matrix is real then it is orthogonal if and only if it is unitary. I do prefer the word unitary, because it reminds you what you need to check - the columns (or rows) are perpendicular to each other and each row/column has unit length (length 1).
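A small numerical illustration of the last identity $x = U^T b$ (a sketch assuming NumPy; the particular right-hand side is a made-up example, not from the question):

```python
import numpy as np

s = 1 / np.sqrt(2)
A = np.array([[  s, 0.0,  -s],
              [0.0, 1.0, 0.0],
              [  s, 0.0,   s]])
b = np.array([1.0, 2.0, 3.0])

# Orthogonality check: A^T A should be the identity.
print(np.allclose(A.T @ A, np.eye(3)))        # True

# Solve A x = b using the transpose instead of a general solver.
x = A.T @ b
print(np.allclose(A @ x, b))                  # True
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```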
H: Show that each $\hat x$ is a member of $C_0(\Delta)$ Suppose $A$ is a commutative Banach algebra without a unit, let $\Delta$ be the set of all complex homomorphisms of $A$ which are not identically $0$. Each $x\in A$ defines a function $\hat x$ on $\Delta$, given by $$\hat x(h)=h(x)\quad (h\in\Delta).$$ If we give $\Delta$ the Gelfand topology, then $\Delta$ is a locally compact Hausdorff space, i.e., the maximal ideal space of $A$. Then how to show that each $\hat x$ is a member of $C_0(\Delta)$? We know that if $A$ is unital, then $\Delta$ is actually compact and it follows that $C_0(\Delta)=C(\Delta)$ which implies each $\hat x$ is in $C_0(\Delta)$. But I cannot find a proof when $A$ is not unital. Also, I do not have a solid background in Banach algebra. I just want to make sense of this statement. Thank you! Or why can we conclude that $\hat x\in C_0(\Delta(\Lambda))$ in the first definition between Proposition 23 and Theorem 24 in this document: http://www.karlin.mff.cuni.cz/~kalenda/data/fa116e-14.pdf AI: Fix an $x\in A$. For every $\varepsilon \gt 0$, the set $\{h\in\Delta\mid |h(x)|\geq\varepsilon\}$ is weak-star closed (by definition of the weak-star topology) and bounded ($\|h\|\leq1$ for all $h\in\Delta$). Thus by Banach-Alaoglu, $\{h\in\Delta\mid |h(x)|\geq\varepsilon\}$ is weak-star compact. That's exactly what you needed to prove.
H: Getting value of x by differentiating a given equation If we consider an equation $x=2x^2,$ we find that the values of $x$ that solve this equation are $0$ and $1/2$. Now, if we differentiate this equation on both sides with respect to $x,$ we get $1=4x.$ Now, I know that it is wrong to say that the value of $x=1/4,$ but then, what does $x=1/4$ signify? Is it related to maxima or minima? Please help me with this. AI: What do $x = 0, 1/2$ signify in $x = 2x^{2}$? If you think of them in terms of roots of the polynomial $p(x) = 2x^2 - x$, then the roots of its derivative, namely $4x - 1$, will be, as you said, $x =1/4$. The roots of the derivative indicate the critical points (here, a minimum) of $p(x)$. We can see that $p(x)$ is decreasing from $x = 0$ up to $x = 1/4$, and then increases indefinitely.
H: Isomorphism between two partially ordered sets. I want to define an Isomorphism $\phi:\left\langle [n]\times[m],\leq_{Lex}\right\rangle \rightarrow\left\langle [n\cdot m],\leq\right\rangle $ I understand how to write down this isomorphism by hand: lets say $n,m = 2$, then we define: $(0,0) \rightarrow 0$ $(0,1) \rightarrow 1$ $(1,0) \rightarrow 2$ $(1,1) \rightarrow 3$ However, how could I generalize this idea for arbitrary $n,m$ with an explicit match rule? AI: It helps a lot if you think about what the lexicographic product means. $A\times B$ is ordered by replacing each point in $A$ by a copy of $B$. So that means that $n\times m$ means taking $n$ points in a line, and replacing each by $m$ points. In other words, we look at $n$ multiples of $m$. So a point is identified by identifying which copy, and then its placement within that copy. Or, simply put $$(i,j)\mapsto i\cdot m+j.$$
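The matching rule $(i,j)\mapsto i\cdot m + j$ from the answer above is easy to test exhaustively for small $n,m$; here is a sketch in Python (assuming $[k]=\{0,\dots,k-1\}$, as in the example in the question):

```python
from itertools import product

def phi(i, j, m):
    """Map (i, j) in [n] x [m] to i*m + j in [n*m]."""
    return i * m + j

def is_order_isomorphism(n, m):
    pairs = list(product(range(n), range(m)))      # already in lexicographic order
    # phi must be a bijection onto {0, ..., n*m - 1} ...
    images = [phi(i, j, m) for i, j in pairs]
    if sorted(images) != list(range(n * m)):
        return False
    # ... and must preserve the order (tuple comparison in Python is lexicographic).
    return all((p <= q) == (phi(*p, m) <= phi(*q, m))
               for p in pairs for q in pairs)

print(all(is_order_isomorphism(n, m) for n in range(1, 6) for m in range(1, 6)))  # True
```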
H: Find parametric equations for the midpoint $P$ of the ladder The following problem appears at MIT OCW Course 18.02 multivariable calculus. The top extremity of a ladder of length $L$ rests against a vertical wall, while the bottom is being pulled away. Find parametric equations for the midpoint $P$ of the ladder, using as a parameter the angle $\theta$ between the ladder and the ground (i.e., the $x$-axis). And here is a sketch of the diagram for the problem. We can find the parametric equations for the midpoint $P$ by finding the vector $OP.$ I write $OP$ as a sum of two vectors $$OP = OB + BP.$$ We know how to calculate the two vectors $OB$ and $BP.$ Using the Pythagorean Theorem, we can find the first component of $OB$ and the second one is obviously zero, hence $$OB = \langle L \cos \theta, 0 \rangle.$$ And by assuming that $BP$ is the radius of a circle with center at $B,$ we can find that $$BP = \left \langle \frac{L}{2} \cos \theta, \frac{L}{2} \sin \theta \right \rangle.$$ So, we find that $$OP = \left \langle \frac{3L}{2} \cos \theta, \frac{L}{2} \sin \theta \right \rangle.$$ But the professor's solution is $$OP = \biggl \langle -\frac{L}{2} \cos \theta,\frac{L}{2} \sin \theta \biggr \rangle.$$ What's wrong with my solution? AI: When you write a vector $\vec v = (v_1,v_2)$ you're implicitly writing $$ \vec v = v_1 \vec i + v_2 \vec j$$ for some linearly independent vectors $\vec i ,\vec j.$ You wrote $$\vec {OB} = (L \cos \theta,0)$$ Depending on which vectors $i,j$ you are choosing this can be either true or false. It seems to me quite natural to consider the line representing the wall as the $y$ axis and the $x$ axis as the line drawn by the floor along with the usual coordinate system. You've correctly identified the distance between $O$ and $B$ as $L \cos \theta$. In the usual coordinates we then have $$ \vec {OB} = (-L \cos \theta,0) \quad \text { and not } \quad (L \cos \theta ,0)$$ This gives you the desired result of $\vec {OB} + \vec {BP} = (-L/2 \cos \theta , L/2 \sin \theta).$ Regardless of all this, it seems to me that it is much easier to find the coordinates of $B$ and $Q$ (where the ladder touches the wall) and to take the average component wise than to determine $\vec{BP}.$
H: Is there a matrix $A \neq 0$ such that $A\in F^{2\times 2}$ and $A^2=0$? Any hints? I don't know how to disprove the statement. I looked at the multiplication with parameters and considered the different cases, but there was not enough information. ($F$ is a field.) AI: Take the field $F_2$ and consider the matrix $$\left( \begin{matrix}1 & 1\\ 1 & 1\end{matrix} \right)$$ Its square has every entry equal to $2=0$ in $F_2$, so it is the zero matrix. (Over an arbitrary field, $\left(\begin{smallmatrix}0 & 1\\ 0 & 0\end{smallmatrix}\right)$ works just as well.)
H: Convergence of a specific series The assignment that I am having trouble with is as follows: a) Use the fact that $\lim_{n\rightarrow\infty}n^3\left(\frac{1}{n}-\sin\left(\frac{1}{n}\right)\right)=\frac{1}{6}$ to show that $$\sum_{n=0}^\infty\left(\frac 1n-\sin\left(\frac 1n\right)\right)$$ converges. b) Determine the radius of convergence of the power series $$\sum_{n=0}^\infty\left(\frac 1n-\sin\left(\frac 1n\right)\right)x^n$$ I am having real trouble with computing the limit $$\lim_{n\rightarrow\infty}\left\lvert\frac 1n-\sin\left(\frac 1n\right)\right\rvert^{\frac1n}=r^{-1}$$ Here r is the radius of convergence. AI: For the first part, use the limit comparison test with $\sum\frac{1}{n^3}$ (indeed $\lim\limits_{n\to\infty}\frac{\frac{1}{n}-\sin\left(\frac{1}{n}\right)}{\frac{1}{n^3}}$ is given to be $\frac{1}{6}$). For the second, put $\frac{1}{n}=x$ (that's only a change of variable for the limit, with no relation to the $x$ in the question): $$\begin{align*}\lim\limits_{x\to 0}|x-\sin x|^x&= \lim\limits_{x\to 0}\left|\frac{x-\sin x}{x^3}\right|^x\cdot x^{3x}\\ &=\lim\limits_{x\to 0}\left|\frac{1}{6}\right|^x\cdot e^{3x\ln x}=1 \end{align*}$$ where $\lim\limits_{x\to 0} x \ln x$ is known to be $0$. Hence $r^{-1}=1$, and the radius of convergence is $1$.
H: Is $\int_{-a}^{a}fg^{\prime}+\int_{-a}^{a}fg^{\prime\prime}=\int_{-a}^{a}f^{\prime}g+\int_{-a}^{a}f^{\prime\prime}g$ true? Using integration by parts, I have been trying to figure out if $$\int_{-a}^{a}f(x)g^{\prime}(x)dx+\int_{-a}^{a}f(x)g^{\prime\prime}(x)dx=\int_{-a}^{a}f^{\prime}(x)g(x)dx+\int_{-a}^{a}f^{\prime\prime}(x)g(x)dx$$ is true or not, but I somehow couldn't get there. I am not expecting a proof or a proof that it doesn't work, but if anyone could tell me if it is true and I should continue searching that would be of great help. AI: Well, just a quick look with Mathematica reveals:

In[1]:= f[x_] := Cos[x]
        g[x_] := Sin[x]
        a = Pi;
        Integrate[f[x]*D[g[x], x], {x, -a, a}] + Integrate[f[x]*D[g[x], {x, 2}], {x, -a, a}]
Out[1]= \[Pi]

And:

In[2]:= f[x_] := Cos[x]
        g[x_] := Sin[x]
        a = Pi;
        Integrate[D[f[x], x]*g[x], {x, -a, a}] + Integrate[D[f[x], {x, 2}]*g[x], {x, -a, a}]
Out[2]= -\[Pi]

So, it is not true in general.
H: Showing that the intersection of a cylinder and a plane is an ellipse One of the questions in my homework was: "Show that the curve $\vec{r}(t)=\cos t \vec{i}+\sin t \vec{j}+(1-\cos t)\vec{k}$ is an ellipse by showing that it is the intersection of a cylinder and a plane. Find equations for the cylinder and the plane." It is easy to see that the curve is the intersection of the cylinder $x^2 + y^2 =1$ and the plane $x+z=1$. Maybe I'm misunderstanding, but the question makes it sound like the hard part is showing that the curve is the intersection of a cylinder and a plane and that once they are found it is obvious that the curve is an ellipse. I have no idea how I can prove that the curve is an ellipse. Since it is not parallel to the $xy$ plane (or any other "conventional" planes), I am having a hard time showing that it satisfies the equation of an ellipse. Edit: I should add that I have not taken a linear algebra course yet, so please bear that in mind when posting a solution/hint. (I’m only adding this because many of the multivariable calculus hints online assume I am familiar with linear algebra.) AI: You need to rotate the coordinate system as $z_1 = (x+z)/\sqrt2$, $x_1=(x-z)/\sqrt2$, $y_1=y$; in this coordinate system the equations are: $$ x+z=z_1\sqrt2=1,\\ x^2+y^2 = \frac{(x_1+z_1)^2}{2} +y_1^2=1 $$ In the plane $z_1=1/\sqrt2$, the equation of intersection reads: $$ \frac12\left(x_1+\frac1{\sqrt2}\right)^2+y_1^2=1, $$ which is the equation of an ellipse.
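One can sanity-check the rotated equation numerically along the curve $\vec r(t)$ (a sketch in Python, using the rotation convention from the answer above and the corrected factor $\tfrac12$):

```python
import math

def check(t):
    x, y, z = math.cos(t), math.sin(t), 1 - math.cos(t)
    x1 = (x - z) / math.sqrt(2)
    y1 = y
    z1 = (x + z) / math.sqrt(2)
    # The curve lies in the plane z1 = 1/sqrt(2) ...
    plane_ok = math.isclose(z1, 1 / math.sqrt(2))
    # ... and satisfies the ellipse equation (x1 + 1/sqrt(2))**2 / 2 + y1**2 = 1.
    ellipse_ok = math.isclose((x1 + 1 / math.sqrt(2))**2 / 2 + y1**2, 1)
    return plane_ok and ellipse_ok

print(all(check(0.01 * k) for k in range(629)))  # True
```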
H: Count the number of dimes? You are given a bag with 100 coins.  The bag only has pennies, dimes and half-dollars.  The bag has at least one of each coin.  The total value of the coins in the bag is $5.  How many dimes are in the bag? AI: Let $P$ be the number of pennies, $D$ the number of dimes, and $H$ the number of half-dollars. We have $P+D+H=100$ and $P+10D+50H=500$. Subtract the former from the latter and we have $9D+49H=400$. There is only one solution $(D,H)$ in positive integers to that last equation, and that solves the problem.
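A brute-force check of the Diophantine step (a sketch in Python, not part of the original answer):

```python
# Find all (pennies, dimes, half-dollars) with 100 coins worth exactly $5.00,
# at least one of each coin.
solutions = [(p, d, h)
             for d in range(1, 100)
             for h in range(1, 100)
             for p in [100 - d - h]
             if p >= 1 and p + 10 * d + 50 * h == 500]
print(solutions)  # [(60, 39, 1)] -> 39 dimes
```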
H: Expected hitting times for simple random walk on a hypercube Setup In an $n$-dimensional hypercube $C_n = \{0,1\}^n$, we define the Hamming distance of two vertices $d(A,B)$ to be the number of coordinates in which they differ. (e.g. $d((0,0,1), (1,0,1)) = 2$.) A simple random walk on the vertices of $C_n$ has $1/n$ chance of moving to each of the $n$ adjacent vertices. Problem I am trying to find some expected hitting times $t_d$, where A and B are given (fixed) vertices of $C_n$, $d = d(A, B)$, and $$t_d = \mathbb{E}(\text{time to hit B starting from A}).$$ For example, $t_0$ is the expected return time. My attempts Using the inverse of the invariant distribution, we get $$t_0 = 2^n.$$ To find $t_1$, we try to express $t_0$ in another way, by conditioning on the first step: $$t_0 = \underbrace{\frac1n \times (1+t_1) + \ldots + \frac1n \times (1+t_1)}_{n\text{ terms}} $$ $$ \Rightarrow t_1 = 2^n - 1.$$ Similarly, we can find $t_2$. Note that the only outcomes after two steps are i) returning, and ii) being $d = 2$ away from the start. $$t_0 = \frac1n \times 2 + \frac{n-1}n \times (2+t_2) $$ $$\Rightarrow t_2 = \frac{n2^n-2}{n-1}-2 = \frac{n(2^n - 2)}{n-1}.$$ Why do I need help To find $t_3$, I try to do the same trick. However, I suspect I have made a mistake. $$t_0 = \frac1n \times 2 + \frac{n-1}{n}\frac1n \times (3+t_1) + \frac{n-1}{n}\frac{n-1}{n}\times (3+t_3)$$ This gives $t_3 = 8.5$ when $n = 3$ (a cube), contradicting this question, which says that $t_3 = 10$ in this case. AI: Suppose you're at distance 2 after the 2nd step. Then there are two (not one) ways of getting to a vertex at distance 1. (Distances are relative to the starting vertex.) Now, $$t_0 = \frac1n \times 2 + \frac{n-1}{n}\frac2n \times (3+t_1) + \frac{n-1}{n}\frac{n-2}{n}\times (3+t_3)$$ which gives $$t_3= \frac{(n^2-2n+2)2^n-(6n-4)}{(n-1)(n-2)} - 3.$$
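The formula for $t_3$ can be cross-checked by solving the distance-chain equations directly (a sketch with NumPy, not from the original answer): by symmetry, the walk's distance to $B$ is a birth-death chain that moves from $k$ to $k-1$ with probability $k/n$ and to $k+1$ with probability $(n-k)/n$.

```python
import numpy as np

def hitting_times(n):
    """Expected time to reach distance 0 from distance d, for d = 1..n."""
    # Unknowns h_1..h_n with h_0 = 0:
    #   h_k = 1 + (k/n) h_{k-1} + ((n-k)/n) h_{k+1}  for k < n,   h_n = 1 + h_{n-1}
    A = np.zeros((n, n))
    b = np.ones(n)
    for k in range(1, n + 1):
        A[k - 1, k - 1] = 1.0
        if k > 1:
            A[k - 1, k - 2] -= k / n
        if k < n:
            A[k - 1, k] -= (n - k) / n
    return np.linalg.solve(A, b)

print(hitting_times(3))   # [ 7.  9. 10.]  -> t_1 = 7, t_2 = 9, t_3 = 10

n = 5
t3_formula = ((n**2 - 2*n + 2) * 2**n - (6*n - 4)) / ((n - 1) * (n - 2)) - 3
print(np.isclose(hitting_times(n)[2], t3_formula))  # True
```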
H: Connectedness and complexity in Polish spaces I was wondering: How complex can connected subsets of Polish spaces be? Are there connected non-Borel subsets of a Polish space? Given $X$ Polish space (not totally disconnected), does it have proper analytic ( $\boldsymbol{\Sigma}_1^1(X)\setminus \boldsymbol{\Delta}_1^1(X)$ ) connected subsets? Thanks! AI: Well, for some Polish spaces, connected sets are very restricted--for instance, in $\mathbb{R}$ they are obviously all Borel. But in general Polish spaces, they can pretty much be as bad as you want. For instance, in $\mathbb{R}^2$, you can construct a connected set at any desired level of the projective hierarchy (including not projective at all!) as follows. Start with a set $A\subset\mathbb{R}$ at the desired level, and then take the set $(\{0\}\times\mathbb{R})\cup (\mathbb{R}\times A)$. Alternatively, given any collection of $\mathfrak{c}$ subsets of $\mathbb{R}^2$ of size $\mathfrak{c}$, by a transfinite recursion of length $\mathfrak{c}$ you can build a set $X\subseteq\mathbb{R}^2$ which intersects all of your sets but does not contain any of them. Assuming your collection includes all uncountable closed sets, such an $X$ is automatically connected (see this answer). So for instance, taking the collection of uncountable closed sets, this gives a connected subset of $\mathbb{R}^2$ without the perfect set property. Or assuming CH (or just CH for projective sets), you can take the collection of all uncountable projective sets and build a connected subset of $\mathbb{R}^2$ that is not projective and does not even contain any uncountable projective set.
H: Can two planes in $\mathbb{R}^4$ not intersect and also not be parallel? Intuitively, I was thinking we can approach this the same way as two lines in 3D space, where they may not intersect nor be parallel. I think I've come up with the example of $\{(0,x,y,0):x,y \in \mathbb R\}$, $\{(1,x,0,y):x,y \in \mathbb R\}$, where the planes are not parallel and also don't intersect. However, I'm not sure how to develop this into a proof that it's true in general. AI: A proof of this statement comes directly from the Rouché–Capelli theorem; each of the two planes is described by two equations, so when you intersect them you get a $4\times4$ matrix $A$ associated to the system $AX=b$. You can arrange that only three out of the four equations are linearly independent, and then choose $b$ linearly independent from the columns of $A$, so that the system has no solution and the planes don't intersect. At the same time the direction spaces of the two planes, while intersecting in a nonzero subspace, are not contained in one another, so the planes aren't parallel.
H: How to prove this series Cauchy product? Let the first series be $\sum_{k=0}^{\infty} u_{k}(x-a)^{k}$ with sum $f(x)$, and the second series be $\sum_{k=0}^{\infty} v_{k}(x-a)^{k}$ with sum $g(x)$. Their radii of convergence are $R_{1}$ and $R_{2}$. I don't know how to prove that $f(x)g(x)=\sum_{k=0}^{\infty} w_{k}(x-a)^{k}$, where $w_{k}=\sum_{n=0}^{k} u_{n} v_{k-n}$, and that its radius of convergence is $R\geq\min\{R_{1},R_{2}\}$. I thought about the radius of convergence: if the smaller of the two radii is $0$, then $R$ must be greater than or equal to it. But I don't know how that helps me or how to prove anything. AI: If $w_k=\sum_{n=0}^ku_nv_{k-n}$, then $$\begin{align}w_k(x-a)^k&=\sum_{n=0}^ku_nv_{k-n}(x-a)^k\\&=\sum_{n=0}^ku_n(x-a)^nv_{k-n}(x-a)^{k-n}.\end{align}$$ So, $w_k(x-a)^k$ is the $k$th term of the Cauchy product of the series $\sum_{k=0}^\infty u_k(x-a)^k$ and $\sum_{k=0}^\infty v_k(x-a)^k$. So, if $|x-a|<R_1$ and $|x-a|<R_2$, the series $$\sum_{k=0}^\infty w_k(x-a)^k$$ converges, since it is the Cauchy product of two absolutely convergent series. Therefore, the radius of convergence is at least $\min\{R_1,R_2\}$.
H: how to construct a radial function around a cusp? I read this construction from the paper "Harmonic maps into singular spaces and p-adic superrigidity for lattices in groups of rank one": Let M be a rank 1 locally symmetric space. On each cusp $\hat{M}$ of $M$ there exists a proper function $r: \hat{M} \to R_+$ such that r is smooth with $|\nabla r|=1$ and r has compact level set. The metric g on $\hat{M}$ may then be written as $dr^2+ ^r g$ where $^rg$ is a metric on $\Sigma_0=r^{-1}(0)$. It seems to me that this function r works like the radial function in spherical coordinates. So it should be the distance to the cusp. Am I correct? Also, I do not understand that $^rg$ is a metric on $\Sigma_0=r^{-1}(0)$. If it is analogous to the spherical coordinate, it should look like $dr^2+ r\cdot g$ where g is the metric on $r^{-1}(1)$. I want to make sure whether the original expression is a typo or whether I am not understanding this correctly. AI: Take $N \subset M$ an orientable hypersurface and $\nu$ a normal field along $N$. Then the normal exponential map \begin{align} E : (-\varepsilon,\varepsilon) \times N &\longrightarrow M \\ (t,x) &\longmapsto \exp_x (t\nu_x) \end{align} is - under some curvature assumptions - a (local) diffeomorphism. Read on the left, the metric $g$ of $M$ is \begin{align} E^*g = \mathrm{d}t^2 + g_t \end{align} with $(g_t)$ the restriction of $E^*g$ on $\{t\}\times N\simeq N$. You can think of $(g_t)$ as a family of metrics over $N$ with one parameter. In this example, $r(x) = d(N,x)$ has $|\nabla r|=1$ and $r^{-1}(0) = N$. In that view, it is similar to polar coordinates, where $N$ plays the role of the sphere. To match more to your question: to construct such a function $r$, it seems you can prove that there exists a compact orientable hypersurface $N$ near a cusp for which the function $E$ is defined on $[0,+\infty)\times N$ and is a diffeomorphism onto its image. For example, in dimension $2$, take a (convex) non-contractible loop around the cusp.
H: Sum of product of NB combinations While trying to solve the PMF of two independent NB random variables, I end up with a summation of the product of two combinations: $$\sum_{j=0}^{k} {j+r-1 \choose j} {k-j+s-1 \choose k-j} $$ According to the textbook, it should be equal to $$ {k+r+s-1 \choose k} $$ But how can the above summation be reduced to the single combination? AI: There are two keys here. First of all, the binomial coefficient $\binom{n}k$ can be extended to allow negative $n$ (or even complex $n$) by setting $$ \binom{n}k\stackrel{\text{def}}{=}\frac{n(n-1)\cdots(n-k+1)}{k!} $$ Importantly, with this generalization, the relation $(1+x)^n=\sum_{k=0}^\infty \binom{n}kx^k$ is still true (for $|x|<1$) even when $n$ is complex. When $n$ is negative, we can write this expression in terms of a regular binomial coefficient. If $n>0$, then $$ \binom{-n}k=\frac{(-n)(-n-1)\cdots(-n-k+1)}{k!}=(-1)^k\frac{(n+k-1)\cdots(n+1)n}{k!}=(-1)^k\binom{n+k-1}{k} $$ Therefore, your summation can be rewritten as $$ \sum_{j=0}^k\binom{r+j-1}j\binom{s+k-j-1}{k-j}=\sum_{j=0}^k\binom{-r}j(-1)^j\binom{-s}{k-j}(-1)^{k-j}=(-1)^k\sum_{j=0}^k\binom{-r}j\binom{-s}{k-j} $$ Note that this summation (without the $(-1)^k$) looks a lot like one side of Vandermonde's identity, the only problem being that the upper indices are negative. Indeed, Vandermonde's identity is still true in this context: for any complex numbers $a$ and $b$, it is true that $$ \binom{a+b}{k} =\sum_{j=0}^k \binom{a}j\binom{b}{k-j} $$ which can be proven by examining the coefficient of $x^k$ in both sides of the equation $$ (1+x)^{a+b}=(1+x)^a(1+x)^b $$ Therefore, you get \begin{align} \sum_{j=0}^k\binom{r+j-1}j\binom{s+k-j-1}{k-j} &=(-1)^k\sum_{j=0}^k\binom{-r}j\binom{-s}{k-j}\\ &=(-1)^k\binom{-r-s}{k}\\ &=\binom{r+s+k-1}{k} \end{align}
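The identity is easy to spot-check numerically (a sketch using Python's math.comb; not part of the original derivation):

```python
from math import comb

def lhs(k, r, s):
    # The original summation over j.
    return sum(comb(j + r - 1, j) * comb(k - j + s - 1, k - j) for j in range(k + 1))

def rhs(k, r, s):
    # The closed form claimed by the textbook.
    return comb(k + r + s - 1, k)

print(all(lhs(k, r, s) == rhs(k, r, s)
          for k in range(8) for r in range(1, 6) for s in range(1, 6)))  # True
```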
H: Question about $x \sin(1/x)$ at $x = 0$ I know that $f(x) = \sin(1/x)x$ takes the value $0$ at $x=0$. What else can we say about the function $f(x)$ at $x=0$? More specifically, is $f$ continuous at $x=0$? Is it even differentiable at $x=0$? Thank you! AI: The function $f(x)= x \sin(1/x)$ is not $0$ at $x=0$ as it is not even defined there. But it does have a removable discontinuity there, i.e. $\lim_{x \to 0} x \sin(1/x)=0$. You can easily prove this using Squeeze Theorem, comparing $f(x)$ to $|x|$ because $|\sin(1/x)| \leq 1$. So what I think you mean to ask is $$ f(x)= \begin{cases} x\sin\left(\dfrac{1}{x}\right),& x \neq 0 \\ 0,& x=0 \end{cases} $$ continuous or differentiable at $x=0$. The answer is yes to continuous and a no to differentiable. Obviously, $f(x)$ is continuous/differentiable for all $x \neq 0$. The only question is what happens at $x=0$, where it is continuous but not differentiable. I would try these both. For continuity, you essentially just need to use a Squeeze Theorem argument for continuity. For differentiability, simply use the definition of the derivative. You can find the continuity argument at this link and the differentiable one at this link.
H: Exponential map on Lie groups being a diffeomorphism Consider a Lie group $G$ and let its Lie algebra be $\mathfrak{g}$. Let the exponential map be denoted by $\exp: \mathfrak{g} \to G$. Given any $g \in G$, does there exist an open set $O \subset \mathfrak{g}$ with $g \in \exp(O)$ such that the exponential map restricted to $O$ is a diffeomorphism? Note: I am not conversant with Riemannian geometry. AI: Not in general. Let $G = S^3$, thought of as the unit length quaternions. Let $g = -1$. I claim that for any $x\in \exp^{-1}(g)$, and for any open neighborhood $O$ of $x$, that $\exp|_O:O\rightarrow G$ fails to be injective. In particular, this restricted map cannot be a diffeomorphism onto its image. Note that $\mathfrak{g}$ can be identified with the imaginary quaternions (where the Lie bracket is the commutator). If we declare the basis $\{i,j,k\}$ of $\mathfrak{g}$ to be orthonormal, then the adjoint action of $G$ is isometric. Also note that the adjoint action is simply the usual $SO(3)$ action on $\mathfrak{g}$ (a three dimensional vector space), so is transitive on any sphere centered at the origin. Now, suppose $x\in \exp^{-1}(g)$. Then, for any $h\in G$, we have $\exp(Ad_h x) = h\exp(x)h^{-1} = h(-1)h^{-1} = -1$, so $Ad_h x\in \exp^{-1}(g)$ as well. Since the adjoint action is transitive on spheres, it follows that if $x\in \exp^{-1}(g)$, then all purely imaginary quaternions $y$ with $|x| = |y|$ are also in $\exp^{-1}(g)$. In other words, $\exp^{-1}(g)$ consists of a union of spheres centered at the origin. Restricting to the purely imaginary complex numbers (the $i$-axis), the group exponential is just the usual complex exponential map. In particular, $\exp^{-1}(g)\cap \operatorname{Im}(\mathbb{C}) = \{ n\pi i: n\text{ is an odd integer}\}$. Thus, $\exp^{-1}(g)$ consists of an infinite union of spheres centered at the origin, where the spheres each have radius an odd multiple of $\pi$. Now, suppose $x\in \exp^{-1}(g)$ and let $O$ be any open neighborhood of $x$. Since $x\in\exp^{-1}(g)$ it is on some sphere $S$ whose radius is an odd multiple of $\pi$. But then $O\cap S$ is an infinite set. In particular, for any $y\in O\cap S$, $\exp(y) = g$, so $\exp|_O$ is not injective.
H: connectedness of $A=\{(x,y):x \in \mathbb{Q}\ \text{or}\ y \in \mathbb{Q}\}$ I have the following problem about connectedness: prove that in the Euclidean plane $A=\{(x,y):x \in \mathbb{Q}\ \text{or}\ y \in \mathbb{Q}\}$ is connected. I have tried several approaches, but I keep running into the problem that some of the coordinates have to be rational. AI: Observe that $$A = \{(x,y) \in \mathbb R^2 :\, (x,y) \notin (\mathbb R \setminus \mathbb Q)^2\} = \mathbb R^2 \setminus (\mathbb R \setminus \mathbb Q)^2,$$ and in fact, you can prove something more general: If $X$ and $Y$ are connected topological spaces, $A$ is a proper subset of $X$ and $B$ is a proper subset of $Y$, then $(X \times Y) \setminus (A \times B)$ is connected.
H: Finding the subgroups of cyclic groups Currently trying to work out all of the subgroups of $X \times Y$, $X = C_3 = \langle x \rangle$ and $Y = C_3 = \langle y \rangle$. I know that $X = \{1, x, x^2\}$ and $Y = \{1, y, y^2\}$. I also know that $$X \times Y = \{(1,1), (1,y), (1,y^2), (x,1), (x,y), (x,y^2), (x^2, 1), (x^2, y), (x^2, y^2)\}.$$ I came up with these subgroups: \begin{align*} H_1 &= \{(1,1), (1,y), (1,y^2)\} \\ H_2 &= \{(1,1), (x,1), (x^2,1)\} \\ H_3 &= \{(1,1), (x,y), (x^2,y^2)\} \\ H_4 &= \{(1,1), (x,y^2), (x^2,y)\} \\ H_5 &= \{(1,1), (1,y), (1,y^2), (x,1), (x,y), (x,y^2), (x^2, 1), (x^2, y), (x^2, y^2)\} \end{align*} I would just like to double check with someone that these are correct and that they're ALL of the possible subgroups. If not, I'd appreciate someone helping me out and pointing out/explaining my mistakes. AI: It's possible to be a bit systematic here, to make sure that we have them all. $X\times Y$ has $9$ elements. That means any subgroup has either $1$, $3$ or $9$ elements. The orders of $1$ and $9$ are easily handled (as long as you remember them in the first place). That leaves $3$. Any subgroup with $3$ elements will be cyclic. So just take each non-identity element in the group, and see which cyclic group it generates, and you're done. There are $8$ non-identity elements, and they all have order $3$, which should make it $8$ order $3$ subgroups... Except not really. Since each order $3$ subgroup has $2$ potential generators, there is some overlap: $\langle(x, y)\rangle = \langle (x^2, y^2)\rangle$, for instance. This means that the $8$ different cyclic-subgroup-generators make a total of $4$ subgroups. And you have found $4$, so you have them all. Good job!
H: find the dim S. Problem taken from Apostol calculas Volume $2$ page No: $13$ books Let $P$ denote the linear space of all real polynomials of degree $\le n$, where $n$ is fixed. let $ S$ denote the set of all polynomials $f$ in $P$, satisfying the condition given below . find the dim S. $1.$$f$ is even. $2.$ $f$ is odd. My attempt : we know that dim $P_n = n+1$ I thinks if $f$ is even then dim $S= \frac{n}{2}$ if $f$ is odd then dim $S= \frac{n+1}{2}$ Is its true ? AI: Case 1: $n$ is even, say $n=2k$ *Even $f$ will be of the type $ c+c_1x^2+c_2x^4+...c_kx^{2k}$ hence dim(S) =$k+1=n/2+1$ **Odd $f$ will be of the type $d_1x+d_3x^3+...+d_{2k-1}x^{2k-1}$, hence dim(S) =$k=n/2$ Similarly consider, case 2: $n$ is odd. PS: * because even $ f(x) =\frac{ f(x) +f(-x)} {2}$ ** odd $f(x) =\frac{f(x) - f(-x)} {2}$
H: Saddle point approximation gives a null result So I want to compute the following integral $$I=\int_0^1 x\sqrt{1-x}\exp \left(a^2x^2\right) dx$$ where $a\gg1$. If we try to do a saddle point approximation \begin{align} I&=\int_0^1 f(x)\exp \left(a^2g(x)\right)dx\\ &\approx f(x_0)\exp\left(a^2g(x_0)\right)\sqrt{\frac{2\pi}{-a^2g''(x_0)}}\left(1+o\left(1/a^2\right)\right) \end{align} where $g'(x_0)=0$. In the case of $I$, we have $g(x)=x^2$ and $f(x)=x\sqrt{1-x}$ and $x_0=0$. The problem is that $f(x_0)=0$ so this gives us $I=0$. How can one get around such a problem? AI: The problem you have is that the prefactor ($f$) vanishes at the saddle point. One possibility here is to integrate by parts. Note that $$ x\exp(a^2x^2)dx=\frac{1}{2a^2}d\left(\exp(a^2x^2)\right); $$ so $$ \begin{eqnarray} \int_{0}^{1}x\sqrt{1-x}\exp(a^2x^2)dx&=&\frac{1}{2a^2}\int_{0}^{1}\sqrt{1-x}\cdot d\left(\exp(a^2x^2)\right) \\ &=&\frac{1}{2a^2}\sqrt{1-x}\exp(a^2x^2)\big\vert_{0}^{1} + \frac{1}{4a^2}\int_{0}^{1}\frac{\exp(a^2 x^2)}{\sqrt{1-x}}dx \\ &=&-\frac{1}{2a^2}+\frac{1}{4a^2}\int_{0}^{1}\frac{\exp(a^2x^2)}{\sqrt{1-x}}dx. \end{eqnarray} $$ This is exact (up to algebra mistakes I may have made :)), and you can apply the saddle point approximation to the remaining integral. It may seem like a lucky accident that integration by parts can be applied. However, quite generally you can expand $f(x)$ and $g(x)$ as power series around the saddle point $x=x_0$, and when the constant term $f(x_0)$ vanishes, you can integrate by parts. Sometimes this needs to be done repeatedly: for instance, if $f(x)\sim (x-x_0)^2$, you need to integrate by parts twice.
H: What am I doing wrong? (simple uniform convergence problem) $f_n(x)=n^2x^2e^{-xn}$ Consider the function $f_n(x)=n^2x^2e^{-xn}$. I am asked whether it converges uniformly on $A=(a,\infty)$, $a>0$. It is easy to see that it converges pointwise to $0$, but when I wanted to check uniform convergence I noticed that the function attains its maximum at $x=\frac{2}{n}$. Plugging this value of $x$ into the function gives $f_n(\frac{2}{n})=\frac{4}{e^2}$, from which I concluded that $\sup|f_n(x)-f(x)|>0$. But this is wrong according to the official solution, which actually uses the same value of $x$ that I found to show uniform convergence: they say that for every $n>\frac{2}{a}$ you get $\sup |f_n|=f_n(a)$, which shows uniform convergence. I don't understand why we look at $n>\frac{2}{a}$; I found a specific $x$ at which the function takes a fixed value, so the supremum cannot go to zero. AI: It is false that $\sup_{x\in[a,\infty)}|f_n(x)|=4e^{-2}$. This is true if $\frac2n\geqslant a$, but this only occurs for finitely many $n$'s. If $\frac2n<a$, then $f_n$ is strictly decreasing and it attains its maximum at $a$. That maximum is $n^2a^2e^{-na}$ and, since $\lim_{n\to\infty}n^2a^2e^{-na}=0$, your sequence converges uniformly to $0$ indeed.
H: Not understanding unexplained notation I am reading a set of lecture notes on Functional Analysis and I have come across a not introduced notation when reading a corollary of Hahn-Banach Extension Theorem. Could anybody please explain what does $S_{X^*}$ represent? The corollary is the following: Let $X$ be a normed vector space. For each $x_0 \in X$ there exists $f \in S_{X^*}$ such that $f(x_0) = \| x_0\|$. In particular, $\|x\| = \max\{ |f(x)| : \; f \in S_{X^*}\}$ for all $x\in X$. I do not have access to notes of prerequisite courses and it is difficult to get in touch with the writer of the document. Any help appreciated! AI: For any normed space $X$, $S_X$ usually denotes the unit sphere of $X$: $$S_X=\{x\in X:\|x\|=1\}.$$ This result tells us that there is a functional $f\in X^*$ such that $|f(x)|\leq\|x\|$ for all $x\in X$, and $|f(x_0)|=\|x_0\|$.
H: Why is probability sometimes calculated using ordered pairs of outcomes rather than unordered pairs? For example, if we are tossing two coins, where each coin falls on either heads ($H$) or tails ($T$), we have the following possible outcomes: $\{H, H \}$, $\{H, T \}$, $\{T, T \}$. However, when solving some exercises, I noticed that this is not the way to go when looking for the number of possible outcomes in probability. Rather, we conceive of the outcomes of coin tosses as ordered pairs. In this case, we have $(H,H)$, $(H,T)$, $(T,H)$, $(T,T)$. I don't understand why the first approach is wrong and the rationale of why we need two ordered pairs $(H,T)$ and $(T,H)$ rather than just $\{H,T\}$. Therefore, I don't really understand what probability is about either. Please, help me understand. AI: It helps to look at the probability spaces that each generates: When working with unordered pairs: $$\begin{array}{c|c}\text{Outcome} & \text{Probability} \\ \hline \{H,H\} & 0.25 \\ \{H,T\} & 0.5 \\ \{T,T\} & 0.25\end{array}$$ Note how different outcomes have different probabilities. On the other hand, working with ordered pairs: $$\begin{array}{c|c}\text{Outcome} & \text{Probability} \\ \hline (H,H) & 0.25 \\ (H,T) & 0.25 \\ (T,H) & 0.25 \\ (T,T) & 0.25\end{array}$$ Sample spaces where every outcome is equally probable is called an Equiprobable space. In general, given the choice between two probability spaces, Equiprobable spaces are often (but not always) easier to work with, even though they have some redundant information. But, both probability spaces are valid ways to record the results flipping two fair coins, and in some contexts, you may prefer the probability space of unordered pairs rather than the Equiprobable space. An example of when you must use a non-equiprobable space is when you are conducting Bernoulli trials. You have a single sample space with two outcomes that are not equiprobable. Now, using unordered pairs tends to be as common (or possibly even more common) than ordered pairs. If the probability of flipping heads on an unfair coin is $0<p<1$, then over the course of ten flips, you wind up with the probability space: $$\begin{array}{c|c}\text{Outcome} & \text{Probability} \\ \hline \text{10 heads} & \dbinom{10}{0}p^{10}(1-p)^0 \\ \text{9 heads, 1 tail} & \dbinom{10}{1}p^9(1-p)^1 \\ \text{8 heads, 2 tails} & \dbinom{10}{2}p^8(1-p)^2 \\ \vdots & \vdots \\ k\text{ heads, }10-k\text{ tails} & \dbinom{10}{k}p^k(1-p)^{10-k}\end{array}$$
H: Proof for pumping lemma for new kind of CFL I have a context-free grammar $(V,\Sigma,R,S)$ that is defined by the condition that every production in $R$ has to be on one of the following two forms: $A\to uBv$ where $A,B\in V$ and $u,v\in\Sigma^*$ and $A\to u$ where $A\in V$ and $u\in\Sigma^*$ The pumping lemma for languages that are produced by this grammar goes much like the pumping lemma for CFL, but the third condition is a little bit different. In the pumping lemma for CFL, the condition states that $|vxy|\leq p$, but for this new language, the condition is as follows: $|uv|\leq p$ and $|yz|\leq p$ Can I prove the pumping lemma for this language in a similar way to the proof for pumping lemma for CFL? AI: Yes, you can use the same basic idea to prove the pumping lemma for this kind of grammar. In some ways it’s easier, since the derivation tree is simpler: it has a single ‘spine’ of non-terminal symbols with terminal symbols coming off the spine on each side. The calculation of $p$ is perhaps the biggest difference: you’ll have to take into account both $|V|$ and the maximum lengths of $u$ and $v$ in productions of the type $A\to uBv$. It would be a little easier if you first proved that such a grammar can be converted to one in which every production is either of the form $A\to u$ with $A\in V$ and $u\in\Sigma^*$ or of the form $A\to \sigma B\tau$ with $A,B\in V$ and $\sigma,\tau\in\Sigma$.
H: Homeomorphism between two circles I'm trying to understand how homeomorphism works and have found a result that I find somehow distant from my intuition. Consider the functions: $$f(x)=\left\{\begin{matrix} x-5 & x \in 2\mathbb{S}^1+5\\ x& x \in \mathbb{S}^1 \end{matrix}\right.$$ $$g(x)=\left\{\begin{matrix} x+5 & x \in 2\mathbb{S}^1\\ x& x \in \mathbb{S}^1 \end{matrix}\right.$$ where $f: A\rightarrow B$, $g: B\rightarrow A$, $A:= \mathbb{S}^1 \cup (2\mathbb{S}^1+5)$, $B:=\mathbb{S}^1 \cup 2\mathbb{S}^1$ and everything is viewed inside $\mathbb{R}^2$ with the usual topology. By the pasting lemma they are continuous, and since each is the inverse of the other, I conclude that two concentric circles are homeomorphic to two circles with one external to the other one. Supposing that what I said above is true, I'm moving one of the circles inside the other one, and by doing so these two circles will somehow have to intersect during the transformation, which is not a behaviour I'd expect from a homeomorphism (this is imprecise, but I hope my doubt is clear). My question: have I made any mistake? If yes, how do you intuitively justify this fact? AI: You should NOT think of a homeomorphism as describing a continuous transformation: it is simply a mapping showing that two objects are topologically identical. There is no motion involved. In your example the maps do not represent a movement of one circle ‘through’ the other: they just say that these two objects, each consisting of two disjoint circles, are topologically indistinguishable. And they are topologically indistinguishable from any other pair of disjoint circles, no matter how they are embedded in the plane, in $\Bbb R^3$, or anywhere else.
H: How is this function injective? I'm currently studying Galois Theory and I came across this theorem. Theorem Let $E$ be a field, $p(x)\in E[x]$ an irreducible polynomial of degree $d$ and $I = \langle p(x) \rangle$ the ideal generated by $p(x)$. Then $E[x]/I$ is an extension field of $E$. The proof first uses the fact that since $E$ is a field, $E[x]$ is an euclidean domain so $p(x)$ being irreducible is also prime. Then as $E[x]$ is also a principal ideal domain, $I$ is a maximal ideal and thus $E[x]/I$ is a field. Finally proceeds to define the function $$ \varphi : E\to E[x]/I$$ given by $$\varphi (a) = \bar{a} = a + I $$ and affirms its injectivity to get an isomorphism $$ \varphi : E \to \varphi(E)\subseteq E[x]/I $$ Concluding that $E[x]/I$ is an extension field of $E$ I get the proof except for the fact that $\varphi$ is injective. I assume that we can think of this function as the composition $$ \varphi = \pi\circ f $$ where $$ f : E \to E[x] $$ assigns each element in $E$ its constant polynomial, and $$ \pi : E[x]\to E[x]/I $$ which assigns each element in $E[x]$ its left coset in $E[x]/I$ However, I know $f$ is injective but $\pi$ is not, so I'm having a hard time showing that $\varphi$ is injective. How should I proceed? :( Thank you. AI: Note $a$ and $b$ are constants (polynomials of degree $0$ if you like). Assume $a+I = b+I$. Then $a-b \in I = \langle p(x) \rangle$, so $a-b$ is a multiple of $p(x)$. But $a-b$ is a constant, and how can a constant be a multiple of a polynomial of degree $1$ or higher over a field? By taking degrees, only if that constant is $0$, so $a-b=0$ and $\varphi$ is injective. (You need not consider $\pi$ for this argument.)
H: Confusion about the Definition of Smooth Functions on a Manifold I am slightly confused about the definition of smooth functions on a smooth manifold given in An Introduction to Manifolds by Loring Tu (Second Edition, page no. 59). The definition is given below. I am confused because I don't see how $f\circ \phi^{-1}$ is, in general, defined. Let $\phi: U \to X$, where $X$ is an open subset of $\mathbb{R}^n$. Here, $\phi^{-1}: X \to U$. Then $f\circ \phi^{-1}$ is defined if the codomain of $\phi^{-1}$ is equal to the domain of $f$, which is not the case, because the codomain of $\phi^{-1}$ is $U$ and the domain of $f$ is $M \supset U$. To my understanding, what we can define is $\left.f\right|_{U}\circ \phi^{-1}$, where $\left.f\right|_{U}$ is the restriction of $f$ to $U$. Am I missing something here? AI: You are right, although your heart will be lighter if you accept such notation when nothing is unclear, since the restriction symbol is bulky and hard to read.
H: transform a 2 dimensional ode 1 system to 2nd order one dimension system Given a $2 \times 2$ matrix $M$ and an ODE $$y'=My$$ let $y=(v_1,v_2)$. Find a second-order ODE such that $v_1,v_2$ are solutions. AI: We have $y'=My$. I consider $M = \begin{bmatrix}m_{11} &m_{12} \\ m_{21} &m_{22}\end{bmatrix}$, and the solution is $y =\begin{bmatrix} v_1\\ v_2\end{bmatrix}$. This gives us the equations $$\begin{cases}v_1'= m_{11}v_1+m_{12}v_2 \\ v_2'= m_{21}v_1+m_{22}v_2 \end{cases} \Rightarrow \begin{cases}v_1''= m_{11}v_1'+m_{12}v_2' \\ v_2'= m_{21}v_1+m_{22}v_2 \end{cases} $$ Therefore $v_2 = \frac{v_1'-m_{11}v_1}{m_{12}}$ and $v_2' = \frac{v_1''-m_{11}v'_1} {m_{12}}$ (assuming $m_{12}\neq0$). Substituting these two equations in the second equation we get $$\frac{v_1''-m_{11}v'_1} {m_{12}}=m_{21}v_1+m_{22}\times\frac{v_1'-m_{11}v_1}{m_{12}} \\ \Rightarrow v_1''=(m_{11}+m_{22})v_1'+(m_{21}m_{12}-m_{22}m_{11})v_1 \\ \Rightarrow v_1'' - \text{tr}(M)v_1'+\det(M)v_1=0$$ The same equation arises for $v_2$ also. Now that I think about it, this result is obvious (have you noticed the Cayley-Hamilton equation), consider the further analysis below $$y'=My \Rightarrow Dy=My \Rightarrow (DI_2-M)y=0 \Rightarrow \det(DI_2-M)y=0 \\ \Rightarrow ((D-m_{11})(D-m_{22})-m_{21}m_{12})y=0 \Rightarrow y'' - \text{tr}(M)y'+\det(M)y=0$$ Where $D$ is the differentiation operator and $I_2$ is the two by two identity matrix. Which means you were right; the characteristic (eigenvalue) equation of the matrix is now obvious.
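A quick symbolic check of the elimination (a sketch with sympy, not part of the original answer): treat $v_1', v_2'$ as the linear combinations given by the system and verify that $v_1'' - \operatorname{tr}(M)\,v_1' + \det(M)\,v_1$ collapses to zero.

```python
import sympy as sp

m11, m12, m21, m22, v1, v2 = sp.symbols('m11 m12 m21 m22 v1 v2')

# First derivatives from v' = M v.
v1p = m11 * v1 + m12 * v2
v2p = m21 * v1 + m22 * v2
# Second derivative of v1, differentiating the first equation again.
v1pp = m11 * v1p + m12 * v2p

tr = m11 + m22
det = m11 * m22 - m12 * m21

print(sp.simplify(v1pp - tr * v1p + det * v1))  # 0
```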
H: Matrix equation solution (what condition a matrix needs to fulfill to make the equation possible) Let $A$ be an $m \times n$ matrix such that $\mathrm{rank}(A) < n$. If one were to set the equation \begin{equation*} A Z = B A, \end{equation*} what is the condition that matrix $Z$ needs to fulfill such that the above relation is even possible? $Z$ is an $n \times n$ matrix. Thank you! AI: Presumably you want to solve this for the $m \times m$ matrix $B$. On the null space of $A$, the right side is $0$, so $Z$ must map the null space of $A$ into the null space of $A$. If $Z$ satisfies this condition, then you can take $B = A Z A^+$ where $A^+$ is the Moore-Penrose pseudoinverse of $A$. This works because $A^+A$ is the orthogonal projection on the orthogonal complement of the null space of $A$, so for $x$ in that orthogonal complement, $BAx = A Z A^+ A x = A Z x$.
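A small numerical illustration of the construction $B = AZA^+$ above (a sketch with NumPy; the particular $A$ and $Z$ are made-up examples satisfying the stated condition that $Z$ maps the null space of $A$ into itself):

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])        # rank 2 < 3, null space spanned by e3
Z = np.array([[1.0, 2.0, 0.0],
              [3.0, 4.0, 0.0],
              [5.0, 6.0, 7.0]])        # Z e3 = 7 e3, so Z preserves null(A)

B = A @ Z @ np.linalg.pinv(A)          # B = A Z A^+
print(B)                               # [[1. 2.] [3. 4.]]
print(np.allclose(A @ Z, B @ A))       # True
```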
H: Continuous functions from compact Hausdorff spaces to the interval Let $X$ be a compact Hausdorff space and let $C(X,I)$ be the set of all continuous functions from $X$ into the closed interval $[0,1]$. If we equip $C(X,I)$ with the topology of uniform convergence, is $C(X,I)$ compact? I'm inclined to think not, but I can't think of a simple counterexample. AI: $C(X,I)$ is the closed ball of radius $1/2$ centred at the constant function $1/2$ in the Banach space $C(X,\mathbb R)$. A closed ball of nonzero radius in a Banach space is compact if and only if the Banach space is finite-dimensional. Whenever $X$ is infinite, $C(X,\mathbb R)$ is infinite-dimensional, so in that case $C(X,I)$ is not compact. (If $X$ is finite, $C(X,I)$ is a finite-dimensional cube, hence compact.)
H: Fibonacci Rabbit's variation Okay so I am trying to understand modifications to the famous Fibonacci rabbit problem so I can make a generalized website for it as a pet project, where people just need to input parameters and it will generate the tree-like structure and the recurrence relation if possible. One pair of rabbits is left on an island. At the age of 3 months it produces 1 pair of rabbits, and then every 2 months they produce 2 pairs of rabbits, and no rabbits die. I'm able to generate the tree-like structure for it just fine. However, what will be the recurrence relation for the number of rabbits at the $n$th month? Moreover, is there a nice way to generalize this? Would appreciate some help over here. EDIT: The children follow the same pattern. They produce one pair at the age of 3 months and then two pairs every 2 months. AI: This is a bit complicated because you need to keep track of how many pairs were born in odd numbered months and how many were born in even numbered months. You can make a pair of coupled recurrences. Let $A(n)$ be the number alive in month $n$ that were born in an even numbered month and $B(n)$ the number alive in month $n$ that were born in an odd numbered month. The first pair was born in month $0$, so $A(0)=A(1)=A(2)=A(3)=1$, $B(0)=B(1)=B(2)=0$, $B(3)=1$. In an odd numbered month there are no even births, so $A(2n+1)=A(2n)$. Similarly $B(2n)=B(2n-1)$. In month $2n$ we get one pair from every $B$ pair alive $3$ months ago and an additional pair from the $B$ pairs alive $5$ months ago, so $A(2n)=A(2n-1)+B(2n-3)+B(2n-5)$. Similarly $B(2n+1)=B(2n)+A(2n-2)+A(2n-4)$. The total number alive at month $n$ is $A(n)+B(n)$. You can substitute in to separate the two recurrences at the price of a longer tail: $$A(2n)=2A(2n-2)-A(2n-4)+A(2n-6)+2A(2n-8)+A(2n-10)\\B(2n+1)=2B(2n-1)-B(2n-3)+B(2n-5)+2B(2n-7)+B(2n-9)$$ The asymptotic growth goes about as $1.43^n$ per month. You can read about recurrence relations to see where this comes from.
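Since the eventual goal is a program, a direct implementation of the breeding rule is a handy cross-check for any recurrence (a sketch in Python, not from the original answer; its monthly totals agree with the coupled recurrences above):

```python
def totals(months):
    """Total pairs alive each month, computed straight from the breeding rule:
    a pair born in month m produces 1 pair in month m+3 and 2 pairs in
    months m+5, m+7, m+9, ...  No rabbits die."""
    born = [0] * (months + 1)
    born[0] = 1                            # the initial pair
    for t in range(1, months + 1):
        for m in range(t):
            age = t - m
            if age == 3:
                born[t] += born[m]         # first litter: 1 pair
            elif age >= 5 and age % 2 == 1:
                born[t] += 2 * born[m]     # later litters: 2 pairs every 2 months
    alive, running = [], 0
    for t in range(months + 1):
        running += born[t]
        alive.append(running)
    return alive

print(totals(12))
# [1, 1, 1, 2, 2, 4, 5, 7, 11, 14, 22, 30, 43]
```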
H: Prove that the collection is a basis that generates standard topology of the real line Attempt: We can use a Lemma in munkres to show this. That is, if for any open set (in usual topology) we can always find some member of collection $\mathscr{C}$ inside the open set. In other words, Let $(c,d)$ be any open set in usual topology in the line and let $ x \in (c,d)$. If we can find some $(a,b) \in \mathscr{B}$ with $ x \in (a,b) \subset (c,d)$ this we are done. I think if we put $a = c + \dfrac{|x-c|}{2} $ and $b = d - \dfrac{|d-x|}{2}$ then we have $$ x \in (a,b) \subset (c,d) $$ and $a,b \in \mathbb{Q}$. Is this correct? AI: That doesn’t ensure that $a$ and $b$ are rational. For instance, suppose that $c=1$, $d=2$, and $x=\sqrt2$; then $a=\frac{1+\sqrt2}2$ and $b=1+\frac{\sqrt2}2$, both of which are irrational. Pinpointing specific rational numbers in the intervals $(c,x)$ and $(x,d)$ is a bit difficult; a much better idea is simply to use the fact that the rationals are dense in the reals, meaning that between any two real numbers there is a rational number. That said, you also need to show that it suffices to consider open set of the form $(c,d)$. I don’t recall whether Munkres has already proved this at that point; if not, you need to start with an arbitrary open set $U$ and an arbitrary $x\in U$ and show that there are rational numbers $p$ and $q$ such that $x\in(p,q)\subseteq U$. Use the fact that the set of open intervals is a base for the usual topology on $\Bbb R$. In view of the questions below, I’m going to add a bit of general discussion. Suppose that you have a space $\langle X,\tau\rangle$. You should think of a base for $\tau$ as a set of building blocks for $\tau$: every member of $\tau$ — i.e., every open set in $X$ — is a union of basic open sets. Suppose that $\mathscr{B}$ is a collection of subsets of $X$, and we want to know whether $\mathscr{B}$ is a base for $\tau$. The answer will be yes if two things are true: every $U\in\tau$ is a union of members of $\mathscr{B}$, and every union of members of $\mathscr{B}$ is in $\tau$, i.e., is open. In symbols, $\tau=\left\{\bigcup\mathscr{V}:\mathscr{V}\subseteq\mathscr{B}\right\}$. These requirements can be boiled down to ones that are in general easier to check. For the first one, let $U\in\tau$. If $U=\bigcup\mathscr{B}_U$ for some collection $\mathscr{B}_U$ of members of $\mathscr{B}$, what can we say about the sets in $\mathscr{B}_U$? Certainly they must all be subsets of $U$: otherwise $\bigcup\mathscr{B}_U$ would contain points of $X\setminus U$. On the other hand, each point of $U$ must belong to at least one of the sets in $\mathscr{B}_U$: otherwise there will be points of $U$ that are not in the union $\bigcup\mathscr{B}_U$. In other words for each $x\in U$ there must be a $B_x\in\mathscr{B}_U$ such that $x\in B_x$. We can combine these two facts in a single requirement: for each $x\in U$ there is a $B\in\mathscr{B}_U$ such that $x\in B_x\subseteq U$. And the first bullet point above can then be replaced by this one: For each $U\in\tau$ and each $x\in U$ there is a $B\in\mathscr{B}$ such that $x\in B\subseteq U$. The second requirement is easy to arrange. Each $B\in\mathscr{B}$ is trivially the union of members of $\mathscr{B}$, so in particular it must be open, i.e., belong to $\tau$. Conversely, if every member of $\mathscr{B}$ is open, then of course all unions of members of $\mathscr{B}$ are automatically open as well. 
Thus, we can replace the second bullet point with this one: $\mathscr{B}\subseteq\tau$, i.e., every member of $\mathscr{B}$ is an open set. In short, when you want to check whether a family $\mathscr{B}$ is a base for a particular topology $\tau$, you need to check two things: that $\mathscr{B}\subseteq\tau$, i.e., that $\mathscr{B}$ consists entirely of open sets, and that for any open set $U$ and any point $x\in U$ there is some $B\in\mathscr{B}$ such that $x\in B\subseteq U$. In the present problem this amounts to verifying that each interval $(p,q)$ with rational endpoints is an open set in the usual topology, which is trivial, and verifying that if $U$ is any open set in the usual topology, and $x\in U$, then there are rational numbers $p$ and $q$ such that $x\in(p,q)\subseteq U$, which you did in an earlier comment. If you tried to do this with the set of intervals $[a,b)$ such that $a,b\in\Bbb R$ and $a<b$, you’d be able to show that if $U$ is an ordinary open set, and $x\in U$, there are $a,b\in\Bbb R$ such that $x\in[a,b)\subseteq U$, but you would not be able to show that these sets $[a,b)$ are open in the usual topology, because they aren’t. You would in fact have a base for a topology on $\Bbb R$, but not the usual one: you would have a base for the lower limit topology.
H: Finding the mean sales price Hello everyone. I have the following problem: a factory produces valves, with a 20% chance of a given valve being broken. The valves are sold in boxes, containing ten valves in each box. If no broken valve is found, they sell the box for 10 dollars. With one broken valve, the box costs 8 dollars. With two or three, the box is sold for $6.00. With more than three valves broken, they sell the box for 2 dollars. What is the mean sales price for the boxes? I calculated the chances of getting a box with no broken valves, the chances of getting a box with 9 normal valves and 1 broken, the chances of getting a box with 8 normal valves and 2 broken, and so on. Then I calculated the mean of the probabilities but got stuck at this part. How can I proceed? Or is there a better way to solve it? Thank you. AI: You're almost there. For the probabilities, I am assuming you got them correct: $10.7374\%$ chance of $0$ broken, $26.8435\%$ chance of $1$ broken, $30.199\%$ chance of $2$ broken, $20.1327\%$ chance of $3$ broken, and $12.0874\%$ chance of more than $3$ broken. Now, all we have to do is multiply the probabilities by the price and add them all up to get the mean price. More explanation coming, but for now... $10.7374\%\cdot 10=\$1.07374$. This is just the probability of $0$ valves broken multiplied by the price, $\$10$. Similarly, $26.8435\%\cdot 8=\$2.14748$. For $2$ valves, it's $\$1.81194$ and for $3$ valves, it's $\$1.207962$. Finally, for $4$ or more valves, it's $12.0874\%\cdot 2=\$0.241748$. Adding all the values up, our answer is simply $1.07374+2.14748+1.81194+1.207962+0.241748=\fbox{\$6.48287}$. You may be asking, "But why is adding the values all we need to do? Since it's a mean, don't we have to divide by something?" We already did. We already divided when computing the probability. Let's take a simple example. Let's shorten this problem down to simple terms: Let's say there are packs of 3 valves, where $50\%$ of the time the valves are broken. If none are broken, they are sold for $\$5$, while for $1, 2, \text { and } 3$ valves broken, they sell for $\$4, \$3, \text { and } \$2$ dollars, respectively. We can see that the mean will probably be $\$3.50$, since there's a $50\%$ broken rate and the cost ranges from $5$ to $2$, and the middle of that is $3.50$. Let's calculate using how we did above. The probability of none breaking is $\frac{1}{8}$ so multiplying by $5$ gets us a contribution of $\$\frac{5}{8}$. Similarly, the next is a $\frac{3}{8}$ths chance so multiplying by $4$ gets us a $\$\frac{3}{2}$ value. The next two get us $\$\frac{9}{8}$ and $\$\frac{1}{4}$, respectively. Adding up gets us $\frac{7}{2}$, or $\$3.50$ as our answer. Just as we expected! This happens because, again, we already divided for our probability. If we calculated the number of ways $5, 4, 3, \text{ and } 2$ dollars could be obtained, then we would have to divide by $8$. But we already divided to get our probability so there is no need to divide again. Therefore, our answer is $\fbox{\$6.48287}$, or if we round, $\fbox{\$6.48}$. -FruDe
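The whole computation fits in a few lines (a sketch in Python, not part of the original answer); it reproduces the total of about $6.48:

```python
from math import comb

p_broken = 0.2
n = 10

def price(broken):
    if broken == 0:
        return 10
    if broken == 1:
        return 8
    if broken in (2, 3):
        return 6
    return 2  # more than three broken

# Expected price = sum over k of P(k broken) * price(k), with binomial probabilities.
expected = sum(comb(n, k) * p_broken**k * (1 - p_broken)**(n - k) * price(k)
               for k in range(n + 1))
print(round(expected, 5))  # 6.48287
```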
H: Eigenvectors for eigenvalue with multiplicity $\mu = 2$ I'm looking for a way to determine linearly independent eigenvectors if an eigenvalue has a multiplicity of e.g. $2$. I've looked for this online but cannot really seem to find a satisfying answer to the question. Given is a matrix A: $$ A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 3 & 6 & 9 \end{pmatrix}$$ I know an exact formula to calculate the eigenvalues of this matrix since a previous exercise learned me that if we have a matrix: $$ M = \begin{pmatrix} a^2 + t & ab & ac \\ ab & b^2 + t & bc \\ ac & bc & c^2 + t\end{pmatrix} $$ The determinant of this matrix is $det(M) = t^2(t + a^2 + b^2 + c^2)$. When creating the characteristic polynomial for $A$, I find the following values for the variables $a, b, c$ and $t$: $$ \begin{cases} a = 1 \\ b = 2 \\ c = 3 \\ t = -\lambda \end{cases} $$ After calculating the eigenvalues using this trick, I find them to be $\lambda_1 = 14$ and $\lambda_2 = 0$ (with multiplicity $\mu = 2$). I can find the eigenvector for $\lambda_1$, but when I try and find the eigenvectors for $\lambda_2$, I never get the same results as the solution provides, which are two linearly independent vectors: $$\vec{x_1} = (2, -1, 0)^T$$ and $$\vec{x_2} = (3, 0, -1)^T$$ Can someone explain me how to find these eigenvectors? AI: You want to find a basis of the null space of $A - \lambda I$. Gaussian elimination is the standard way of doing this.
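For $\lambda_2=0$ the system $(A-\lambda I)x=0$ is just $Ax=0$; every row of $A$ is a multiple of $(1,2,3)$, so Gaussian elimination leaves the single equation $x_1+2x_2+3x_3=0$, and choosing the free variables $(x_2,x_3)=(-1,0)$ and $(0,-1)$ gives scalar multiples of the two vectors quoted in the question. A small SymPy sketch of this computation:

import sympy as sp

A = sp.Matrix([[1, 2, 3], [2, 4, 6], [3, 6, 9]])
print(A.eigenvals())                      # eigenvalue 14 once, eigenvalue 0 twice
print(A.nullspace())                      # two independent eigenvectors for eigenvalue 0
                                          # (scalar multiples of (2,-1,0) and (3,0,-1))
print((A - 14*sp.eye(3)).nullspace())     # eigenvector for eigenvalue 14, proportional to (1, 2, 3)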
H: Find the saddle point of $F(x_1,x_2,x_3,y_1,y_2,y_3)=(x_1-2x_2+x_3)y_1+(2x_1-2x_3)y_2+(-x_1+x_2)y_3$ Finding the saddle points of $F(x_1,x_2,x_3,y_1,y_2,y_3)=(x_1-2x_2+x_3)y_1+(2x_1-2x_3)y_2$+$(-x_1+x_2)y_3$ subject to the constraints $x_1+x_2+x_3=1, y_1+y_2+y_3=1$. Show that the saddle point is $x=(\frac{1}{3},\frac{1}{3},\frac{1}{3})$, $y=(\frac{2}{7},\frac{1}{7},\frac{4}{7})$. I know how to find the saddle point of a function with two variables by using $\Delta=(f_{12})^2-f_{11}f_{22}$, then when $\Delta>0$ the point is a saddle point. But for this question, by using the constraints, we can reduce the variables from 6 to 4, but still can not use the $\Delta$ formula, and this question requires not to use lagrange multiplier. Thanks. AI: At a saddle point, all of the partial derivatives are 0. This leads to a system of equations: \begin{align} x_1 -2x_2 +x_3=0,\tag{1}\\ 2x_1 -2x_3=0,\tag{2}\\ x_2-x_1=0,\tag{3}\\ y_1+2y_2-y_3=0, \tag{4}\\ -2y_1 +y_3=0,\tag{5}\\ y_1-2y_2=0. \tag{6} \end{align} Adding the constraint $y_1+y_2+y_3=1$ to equation (4) leads to \begin{equation} 2 y_1 = 1-3 y_2. \tag{7} \end{equation} Solving for $y_1$ from equation (6) and substituting in (7) leads to $y_2= \frac{1}{7}$ and consequently $y_1=2y_2=\frac{2}{7}$ and $y_3=2y_1=\frac{4}{7}$. Similarly, we can subtract the constraint $x_1+x_2+x_3=1$ from equation (1), which leads to $x_2=\frac{1}{3}$ and consequently from (3), $x_1=x_2=\frac{1}{3}$. Finally, from (2), we have $x_3=x_1=\frac{1}{3}$. EDIT: Computing the Hessian matrix yields: \begin{equation} H = \begin{pmatrix} 0 & 0 & 0 & 1 & 2 & -1 \\ 0 & 0 & 0 & -2 & 0 & 1\\ 0 & 0 & 0 & 1 & -2 & 0\\ 1 & -2 & 1 & 0 & 0 & 0\\ 2 & 0 & -2 & 0 & 0 & 0\\ -1 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}. \end{equation} Its characteristic polynomial is \begin{equation} \lambda^6-16\lambda^4 +63\lambda^2=0. \end{equation} This polynomial has a $\lambda \rightarrow -\lambda$ symmetry. So if we can find one non-zero real solution, we automatically show that the Hessian is indefinite (there are both positive and negative solutions). Therefore the critical point above is indeed a saddle point. EDIT 2: In case you find it difficult to actually find the roots of the polynomial, substitute $\rho=\lambda^2$; this gives \begin{equation} \rho^3-16\rho^2 +63\rho=0, \end{equation} which factors as $\rho(\rho-7)(\rho-9)=0$, so $\rho\in\{0,7,9\}$ and the non-zero eigenvalues are $\lambda=\pm\sqrt{7}$ and $\lambda=\pm 3$. Since both signs occur, the Hessian is indefinite and the critical point above is indeed a saddle point.
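As a numerical sanity check on the eigenvalue computation, one can assemble the Hessian from its mixed-partial block (a short NumPy sketch; the block $B$ below is read off directly from $F$):

import numpy as np

B = np.array([[1, 2, -1],        # d^2 F / dx_i dy_j, read off from F
              [-2, 0, 1],
              [1, -2, 0]])
H = np.block([[np.zeros((3, 3)), B],
              [B.T, np.zeros((3, 3))]])
print(np.round(np.linalg.eigvalsh(H), 6))
# approximately [-3, -sqrt(7), 0, 0, sqrt(7), 3]: both signs occur, so H is indefinite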
H: Basic question about vector subspaces As part of some question in a more advanced course in linear algebra I'm using this claim which I'm pretty sure is true but a little confused about how to justify it. Given a vector space $V$, $\dim V=n \in \mathbb{N} $, and a subspace $W \subset V$ (meaning $\dim V > \dim W$). If we choose any basis $B$ for $V$, we can choose $B' \subset B$ such that $B'$ is a basis for $W$. So first of all, is it true like i think or am I horribly wrong, and if it is how do we justify it? I guess a prove for $\dim W=\dim V-1$ will suffice. AI: It's not true... If you take $V=\mathbb{R}^2$, $W = \mathrm{Span}\langle (1,1)\rangle$ and as a basis of $V$ $\{(1,0),(0,1)\}$, then you have a counterexample, since both these vectors do not lie in $W$. On the other hand it is always possible to extend a basis of $W$ to the whole $V$, by adding $\dim V-\dim W$ independent vectors.
H: How to evaluate $\int_{|z|=2} \frac{|z| e^z}{z^2} dz$? The only thing that I know is that the result should be $\displaystyle 4\pi i$. Could you give me a hint/suggestion? I thought about using the Residue's theorem, where if $\displaystyle f(z) = \frac{|z|e^z}{z^2}$, it has a 2nd order pole in $0$, but after that, I don't know what to do next. AI: We have $$\int_{|z|=2} \frac{|z| e^z}{z^2} dz=2\int_{|z|=2} \frac{e^z}{z^2} dz.\tag{1}$$ Note that, in $\{|z|<2\}$, $z=0$ is the only pole of $\frac{e^z}{z^2}$. By Residue theorem, we have $$\int_{|z|=2} \frac{e^z}{z^2} dz=2\pi i Res(\frac{e^z}{z^2}, 0).\tag{2}$$ Note that $$\frac{e^z}{z^2}=\frac{1}{z^2}\left(1+z+\frac{z^2}{2!}+\cdots\right) =\frac{1}{z^2}+\frac{1}{z}+\frac{1}{2!}+\cdots.$$ Hence, $Res(\frac{e^z}{z^2}, 0)=1$. Combining this with $(1)$ and $(2)$, we obtain $$\int_{|z|=2} \frac{|z| e^z}{z^2} dz=2\cdot 2\pi i=4\pi i.$$
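The value is also easy to confirm numerically by parametrising the circle as $z=2e^{i\theta}$, so that $dz=2ie^{i\theta}\,d\theta$; a short SciPy sketch (splitting the integrand into real and imaginary parts, since quad works with real-valued functions):

import numpy as np
from scipy.integrate import quad

f = lambda z: abs(z) * np.exp(z) / z**2

def integrand(theta, part):
    z = 2 * np.exp(1j * theta)
    val = f(z) * 2j * np.exp(1j * theta)   # f(z) * dz/dtheta
    return val.real if part == "re" else val.imag

re, _ = quad(lambda t: integrand(t, "re"), 0, 2 * np.pi)
im, _ = quad(lambda t: integrand(t, "im"), 0, 2 * np.pi)
print(re, im)    # approximately 0 and 4*pi = 12.566...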
H: Elements of quotient ring $\mathbb{Z}_3[x]/I$ being represented as $ax^2 + bx + c + I$ by Euclidean Algorithm? I came upon this problem in http://sites.millersville.edu/bikenaga/abstract-algebra-1/quotient-rings-of-polynomial-rings/quotient-rings-of-polynomial-rings.pdf, but I don't understand how he applied the Euclidean Algorithm to arrive at the $ax^2 + bx + c + I$ form for the elements in this quotient ring. Could anyone elaborate a little more? No need for the specific steps, just enough to get me started. Thanks very much! AI: Divide by $x^3 + 2x + 1$ using the division algorithm. It's part of the statement of the theorem that the remainder has degree less than $3$ (i.e. less than the degree of the thing you are dividing by), hence it is of the form $r(x) = ax^2 + bx + c$.
H: Inequality with measure and weight Let $(M, \mu)$ be a measure space and $g\in L_{loc}^{1}(M)$ be a positive non-zero function. How can one show that for $f$ measurable and $a>0$ $$\mu\left\{|f|> \frac{1}{a}\right\}\leq a^{2}\frac{\int_{M}{f^{2}gd\mu}}{\int_{M}{gd\mu}}?$$ Thanks in advance! AI: This is not true. Take $M = [0,2]$ and $\mu$ the Lebesgue measure. Then if $g=1$ and $f = C/a$ with $1<C<\sqrt{2}$, we have $\int g\, d\mu = 2$ and $\int f^2g\, d\mu = 2C^2/a^2$. So $$2 = \mu\left\lbrace |f|>1/a\right\rbrace > a^2 \frac{\int f^2g\, d\mu}{\int g\, d\mu}= C^2.$$
H: Use of the $\subset$ and $\subseteq$ symbols in the definition of a power set and re-defining the power set with these symbols. In my Mathematics textbook, the definition of the power set of a given set is given as follows : $$P(A) = \{X : X \subseteq A \}$$ Now, this is used to say that the power set of a given set $A$ is the set that contains all sets that are a subset of $A$. But, the symbol $\subseteq$ is generally used to denote improper subsets. Basically, if $A \subseteq B \iff A = B$. But, if we take that definition of the symbol $\subseteq$, then the power set will only contain the set that is equal to the given set i.e. in that case, $P(A) = A$. So, that means that the symbol $\subseteq$ is just used as a general subset symbol, which would include both proper and improper subsets of the given set, right? But, sometimes the symbol $\subset$ is also used in place of this. And in the definition of a power set itself too. What I mean is that in another textbook, I saw the definition of a power set as follows : $$P(A) = \{ X : X \subset A \}$$ So, what I think is that in the definitions of power set that I mentioned above, both $\subset$ and $\subseteq$ are used to just represent a subset (whether it be a proper one, or an improper one). But, if we use $\subseteq$ as a symbol for improper subsets and $\subset$ as a symbol for proper subsets, would the definition of a power set be : $$P(A) = \{ X : X \subset A \} \cup \{ X : X \subseteq A \} = \{ X : X \subset A \} \cup \{ A \} \text{ ?}$$ AI: $A\subseteq B$ does NOT mean that $A=B$. It means simply that $A$ is a subset of $B$ and explicitly allows that subset to be $B$ itself. Many people use $A\subset B$ to mean the same thing. Others use $A\subset B$ to mean that $A$ is a proper subset of $B$, i.e., any subset of $B$ except $B$ itself. And some of us, including me, prefer to avoid this ambiguity by writing $A\subsetneqq B$ or $A\subsetneq B$ when we want to specify that $A$ is a proper subset of $B$. The answer to your final question is yes, though you don’t need the first union: if you use $\subseteq$ for arbitrary subsets and $\subset$ strictly for proper subsets, it’s simply $$\wp(A)=\{X:X\subseteq A\}=\{X:X\subset A\}\cup\{A\}\;.$$
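As a small illustration, with $\subseteq$ read as "subset, possibly equal", the power set of a finite set can be generated directly, and it contains both $\varnothing$ and $A$ itself; a quick Python sketch (the helper name `power_set` is arbitrary):

from itertools import combinations

def power_set(A):
    A = list(A)
    # all subsets of every size, from the empty set up to A itself
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

print(power_set({1, 2, 3}))
# [set(), {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]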
H: An increasing arithmetic sequence of positive integers has $ {a_{19}}=20$ and ${a_{a_{20}}}=22$ . Find ${a_{2019}}$ Question:- An increasing arithmetic sequence of positive integers has $ {a_{19}}=20$ and ${a_{a_{20}}}=22$ . Find ${a_{2019}}$ I Found this question on a blogspot.On Seeing the statement of question it looks difficult to solve since we have only small information to proceed.I don't know whether it is possible to find ${a_{2019}}$. This questiom was asked QSHS PMO Team Selection Test, 2019. Can anybody help me out! AI: If $d$ is the common difference of successive terms, then $d$ must be a positive integer; since $a_{19}=20$, this means that $a_{20}\ge 21$, $a_{21}\ge 22$, and $a_n\ge 23$ for all $n>21$. You know that $a_{20}=a_{19}+d=20+d$, so $a_{a_{20}}=a_{20+d}=22$, and therefore $20+d\le 21$. It follows that $d=1$, and since $a_{19}=20$, this implies that $a_n=n+1$ for all $n\ge 1$ and hence that $a_{2019}=2020$.
H: To which values of $a$ is the following an inner product space? Given the following inner space over $\mathbb{R}^{2}$: $$ \left< \left(\begin{pmatrix} x_{1}\\ x_{2} \end{pmatrix} ,\begin{pmatrix} y_{1}\\ y_{2} \end{pmatrix}\right) \right> =x_{1} y_{1} -3x_{1} y_{2} \ -3x_{2} y_{1} +ax_{2} y_{2} \ $$ To which values of $a$ is it an inner space? I figured it has to do with the positive/definite property since I already checked for the other twos and it's true for any values of $a$. But I don't understand how am I supposed to solve such equation: $x_{1} y_{1} -3x_{1} y_{2} \ -3x_{2} y_{1} +ax_{2} y_{2} \ \geqslant 0$ Thank you so much for the help! AI: In order to check positive definiteness, you should check the inner product of $x$ with itself, not with a general $y$. Hence, your inequality becomes $$ \langle x_1,x_2\rangle_a:=x_1^2-6x_1x_2+a x_2^2\geq 0 $$ Now, this is a quadratic in $x_2$ with discriminant $36x_1^2-4ax_1^2=(36-4a)x_1^2$. Now, in order for the quadratic to always be strictly positive for $x_1\neq 0$, it must have no root, so the discriminant must be negative, which happens if and only if $a > \frac{36}{4}$ for $x_1\neq 0$. Now, we must check that for such a choice of $a$, the constant sign of the inner product is positive, no matter the value of $x_1,x_2$. To this end, we see for $a>\frac{36}{4}$ that the above quadratic is convex and hence, for such an $a$ and $x_1\neq 0$, $$ \lim_{x_2\to\infty} \langle x_1,x_2\rangle_a=\infty, $$ implying that $\langle x_1,x_2\rangle_a\geq 0$ for all $x_2$, since the quadratic has no root and hence, constant sign. If $x_2\neq 0$ and $x_1=0$, then positivity is obvious. We conclude that for $\langle \cdot,\cdot\rangle_a$ is positive definite if and only if $a>\frac{36}{4}$..
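Equivalently, one can phrase the condition through the Gram matrix $M=\begin{pmatrix}1&-3\\-3&a\end{pmatrix}$ of the form: the form is an inner product exactly when $M$ is positive definite, which for this $2\times 2$ matrix means $\det M=a-9>0$, in agreement with $a>\frac{36}{4}=9$. A quick numerical sketch (the sample values of $a$ are arbitrary):

import numpy as np

def is_inner_product(a):
    M = np.array([[1.0, -3.0], [-3.0, a]])    # Gram matrix of the bilinear form
    return bool(np.all(np.linalg.eigvalsh(M) > 0))   # positive definite?

for a in (5, 8.9, 9.1, 20):
    print(a, is_inner_product(a))    # False, False, True, True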
H: Relation between an increasing function and its Riemann integral I have recently been trying some questions on Riemann integral. Got stuck in one of the problems which says: Suppose $f$ is an increasing real-valued function on [$0$,$\infty$] with $f$($x$) $\gt$ $0$ for all $x$ and let $g$($x$)=$\frac{1}{x}$$\int_{0}^{x}f(u)du$ where $0$ $\lt x\lt$ $\infty$. Then $g$($x$) $\le$ $f$($x$) for all $x$ in ($0,\infty$). How to show this? I can show that $g$($x$) $\gt$ $0$ but unable to show the above relation. Help, please! AI: Let $ x>0$. $ f $ is increasing at $ [0,x] \;\;\implies$ $$(\forall u\in[0,x]) \;\; f(u)\le f(x) \;\;\implies $$ $$\int_0^xf(u)du\le \int_0^xf(x)du \;\; \implies$$ $$\int_0^xf(u)du\le f(x)(x-0)\;\; \implies$$ $$g(x)\le f(x)$$ We used the fact that a monotonic function at $ [a,b] $ is Riemann-integrable at $[a,b]$.
H: Limit of integrals where both bounds and integrand depend on $n$ I want to find the limit $$\lim_{n\rightarrow\infty}\int_{[0,n]}\left(1+\frac{x}{n}\right)^n e^{-2x} \, d\lambda(x)$$ I have noted the following: It is well known that $(1+\frac{x}{n})^n$ converges to $\exp(x)$ in the limit. So I'm certain the limit is equal to the improper integral $$\int_0^\infty e^{-x} \, dx=1$$ My question is: How can I argue/use the convergence of the integrated when I can't pull the limit into the integral, as the bound depends on it? Also: I'm not taking the measure $\lambda$ into account at all, am I making a mistake there? AI: $$ 1_{[0,n]} (x) \left( 1 + \frac x n \right)^n e^{-2x} \le e^{-x} \text{ for } 0 \le x < +\infty \\ \text{and } e^{-x} \text{ does not depend on $n$} \\ \text{and } \int_0^\infty \left| e^{-x} \right| \, dx < +\infty $$ Therefore the dominated convergence theorem is applicable.
H: Does null average against every smooth function implies independence? Are these assertions equivalent? $f:\mathbb{S}^1\times \mathbb{S}^1\to\mathbb{C}$ is such that $$ \int_0^{2\pi}\int_0^{2\pi}f(x,y)\psi(y)dydx=0$$ for all $\psi\in C^{\infty}(\mathbb{S}^1).$ $f:\mathbb{S}^1\times \mathbb{S}^1\to\mathbb{C}$ is such that that $f$ does not depend on $y$, that is, $f(x,y) = f(x,0),\, \forall y\in\mathbb{S}^1 $, and $\int_0^{2\pi}f(x,y)dx = 0.$ It's clear that the second implies the first, though I'm having some difficulty to prove that the first implies the second. My idea was trying to prove that $f(x,y) - f(x,0) $ is the null function, by contradiction: suppose it isn't, then taking an appropriate $\psi$ to arrive at a contradiction, though I ran into some problems trying to fit $f(x,0)$ inside the integral... AI: No the two assertions are not equivalent. Take $f(x,y) = ye^{ix}$. Then for every $\def\S{\mathbb{S}^1}\psi \in C^\infty(\S)$, we have $$\int_{[0,2\pi]^2} f(x,y)\psi(y)\,dxdy = \int_0^{2\pi} \psi(y)\left(\int_0^{2\pi} f(x,y)\, dx\right) \,dy = 0$$ since $\int_0^{2\pi} f(x,y) \, dx = 0$ for every $y$. But obviously $f$ depends on $y$. Assertion $1$ should be equivalent to: $f \colon \S \times \S \to \mathbb{C}$ satisfies $\int_0^{2\pi} f(x,y)\, dx = 0$ for every $y \in \S$.
H: Is it correct to state that $\langle x(t),x(t)\rangle' = 2\langle x'(t),x(t)\rangle$ for an arbitrary inner product? Let $x:\textbf{R}\to\textbf{R}^{3}$ be a differentiable function, and let $r:\textbf{R}\to\textbf{R}$ be the function $r(t) = \|x(t)\|$, where $\|x\|$ denotes the length of $x$ as measured in the usual $l^{2}$ metric. Let $t_{0}$ be a real number. Show that if $r(t_{0})\neq 0$, then $r$ is differentiable at $t_{0}$, and \begin{align*} r'(t_{0}) = \frac{\langle x'(t_{0}),x(t_{0})\rangle}{r(t_{0})} \end{align*} MY ATTEMPT Since $r^{2}(t) = \|x(t)\|^{2} = \langle x(t),x(t)\rangle = x^{2}_{1}(t) + x^{2}_{2}(t) + x^{2}_{3}(t)$, we conclude that \begin{align*} r(t)r'(t) = x_{1}(t)x'_{1}(t) + x_{2}(t)x'_{2}(t) + x_{3}(t)x'_{3}(t) = \langle x'(t),x(t)\rangle \end{align*} Since $r(t_{0}) \neq 0$, the result follows: \begin{align*} r'(t_{0}) = \frac{\langle x'(t_{0}),x(t_{0})\rangle}{r(t_{0})} \end{align*} Based on this exercise, I would like to know if it is correct to state that \begin{align*} \frac{\mathrm{d}}{\mathrm{d}t}\langle x(t),x(t)\rangle = 2\langle x'(t),x(t)\rangle \end{align*} for an arbitrary inner product space. AI: Yes, that's true in general. Even more generally, if $\omega:E \times E \to F$ is a bounded bilinear map between normed vector spaces $E$ and $F$, and if $x: \Bbb{R} \to E$ is a differentiable map, then \begin{align} \dfrac{d}{dt} \bigg|_t \omega(x(t), x(t)) = \omega(x(t), x'(t)) + \omega(x'(t), x(t)). \end{align} If you further assume $\omega$ is symmetric then this reduces to: \begin{align} \dfrac{d}{dt} \bigg|_t \omega(x(t), x(t)) = 2\omega(x'(t), x(t)). \end{align} What you proved is the special case where $E = \Bbb{R}^3$ and $F = \Bbb{R}$ and $\omega(\cdot, \cdot) = \langle \cdot, \cdot \rangle$.
H: An exponential distribution that represents time between events You're responsible for maintaining four ATMs (E,W,N, and S). The time between failures for each ATM is exponentially distributed with mean time between failures 6 hours, 5 hours, 8 hours, and 8 hours, respectively. The ATMs can be serviced between 8 A.M. and 8 P.M. (a) All ATMs are working at 8 A.M. What's the probability that the first failure will occur at or before 8:30 A.M. ? (b) What is the probability that the first ATM to fail will be W ? (c) ATM E just stopped working. The time to service it is exponentially distributed with mean 3 hours. Find the probability that another ATM will stop working before you're able to repair ATM E. Then, find the probability that ATMs W, N and S will all stop working before you're able to repair ATM E. Here is my solution, so far : Let $W$, $X$, $Y$, and $Z$ be the exponential random variable representing the time (in hours) between failures for ATMs E, W, N and S, respectively. Then, their probability density functions are given by $f(w) = \frac{1}{6}e^{\frac{-1}{6}w}$ for $w \geq 0$ and $f(w) = 0$ otherwise $f(x) = \frac{1}{5}e^{\frac{-1}{5}x}$ for $x \geq 0$ and $f(x) = 0$ otherwise $f(y) = \frac{1}{8}e^{\frac{-1}{8}y}$ for $y \geq 0$ and $f(y) = 0$ otherwise $f(z) = \frac{1}{8}e^{\frac{-1}{8}z}$ for $z \geq 0$ and $f(z) = 0$ otherwise (a) $P(W < 0.5$ or $X < 0.5$ or $Y < 0.5$ or $Z < 0.5)$ $= P(W < 0.5) + P(X < 0.5) + P(Y < 0.5) + P(Z < 0.5) - P(W < 0.5, X < 0.5, Y < 0.5, Z < 0.5)$ $ = \int_{0}^{0.5} f(w)dw + \int_{0}^{0.5} f(x)dx + \int_{0}^{0.5} f(y)dy + \int_{0}^{0.5} f(z)dz - \int_{0}^{0.5} f(w)dw\int_{0}^{0.5} f(x)dx\int_{0}^{0.5} f(y)dy\int_{0}^{0.5} f(z)dz$ (where we've used that $W,X,Y,Z$ are independent for the last term) $= 0.29625$. Is this correct for part (a) ? (b) I believe that the relevant probability would be $P(W < X, W < Y, W < Z)$. I also know have that $W, X, Y , Z$ are independent. How do I go about finding this probability, then ? (c) Let $T$ be the exponential random variable representing the time to service ATM E. Then, it has probability density function given by $f(t) = \frac{1}{3}e^{\frac{-1}{3}t}$ for $t \geq 0$ and $f(t) = 0$ otherwise. I believe the probability that another ATM will stop working before you're able to repair ATM E is $P(X < T$ or $Y < T$ or $Z < T)$. Also, I believe that the probability that ATMs W, N and S will all stop working before you're able to repair ATM E is $P(X < T, Y < T, Z < T)$. Would this be correct ? Thank you! AI: Part (a) is not correct because you are adding probabilities for outcomes that are not necessarily mutually exclusive. For example, if you toss three fair coins, the probability that you get at least one heads obviously is not $1/2 + 1/2 + 1/2 = 3/2$. That is what you are doing in your calculation. Instead, the correct calculation should consider the complementary event, namely that the "opposite" outcome of at least one machine failing within the first half hour, is the outcome in which all machines remain operational after the first half hour. That is to say, $$\Pr[(W \le 0.5) \cup (X \le 0.5) \cup (Y \le 0.5) \cup (Z \le 0.5)] \\ = 1 - \Pr[(W > 0.5) \cap (X > 0.5) \cap (Y > 0.5) \cap (Z > 0.5)].$$ And now since the lifetimes are independent, the joint probability of each machine remaining operational after $0.5$ hours is the product of the individual probabilities that each machine lasts that long. 
This again is analogous to the coin example, in which the correct computation is $$1 - (1 - 1/2)(1 - 1/2)(1 - 1/2) = 7/8.$$ For (b), there are a number of ways to do the computation. One way is to integrate over the joint density of all four variables, over the region for which the desired outcome occurs; e.g., $$\Pr[W < X, Y, Z] = \int_{x=0}^\infty \int_{y=0}^\infty \int_{z=0}^\infty \int_{w = 0}^{\min(x,y,z)} f(x,y,z,w) \, dw \, dz \, dy \, dx.$$ But there's a better way. You can compute the first order statistic of machines $X$, $Y$, and $Z$. In other words, figure out what the probability density function is for the random variable $M = \min(X, Y, Z)$ by observing $$\Pr[M > m] = \Pr[\min(X,Y,Z) > m] = \Pr[(X > m) \cap (Y > m) \cap (Z > m)] \overset{\text{ind}}{=} \Pr[X > m]\Pr[Y > m]\Pr[Z > m].$$ Then compute the resulting density for $M$, and use that to compute $$\Pr[W < X, Y, Z] = \Pr[W < M].$$ For (c), exploit the memorylessness property of the exponential distribution. You want to compute the density of the first order statistic of the other three machines, just like you did for part (b). Then given that Machine E has failed, the first time to failure of the other three machines is the first order statistic, and the calculation proceeds in a similar fashion as (b). For the second part of (c), you calculate the maximum or last order statistic, $$L = \max(X, Y, Z)$$ and repeat the computation as above.
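For a concrete check: the minimum of independent exponentials is again exponential with the summed rate, so for (a) the answer is $1-e^{-0.5(1/6+1/5+1/8+1/8)}$, and the standard "competing exponentials" fact gives $\Pr[W\text{ fails first}]=\frac{1/5}{1/6+1/5+1/8+1/8}$ for (b). A short Monte-Carlo sketch comparing simulation against these closed forms (sample size and random seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
means = {"E": 6, "W": 5, "N": 8, "S": 8}        # mean time between failures (hours)
n = 1_000_000
T = np.column_stack([rng.exponential(m, n) for m in means.values()])

rate = sum(1 / m for m in means.values())
print((T.min(axis=1) <= 0.5).mean(), 1 - np.exp(-0.5 * rate))   # part (a)
print((T.argmin(axis=1) == 1).mean(), (1 / 5) / rate)           # part (b): column 1 is machine W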
H: Every integer $α>2$ can be expressed as $2a+3b$ I am confused about a homework problem I have, and don't really know where to begin. I need to prove this. Any idea of where I can start. I am not necessarily looking for a solution, but a place to begin. The statement is that Show that every integer $α > 2$ can be written in the form $α = 2a + 3b$ where $a$ and $b$ are nonnegative integers. AI: For even $\alpha$ take $b=0$; what then is $a$? For odd $\alpha$ take $b=1$; what then is $a$?
H: Why $f(x)=\Sigma_{n=0}^{\infty}\frac{x^n}{n\ln(n)}$ $\in c^n$ consider $f(x)=\sum_{n=0}^{\infty}\frac{x^n}{n\ln(n)}$ and $h(x)=\sum_{n=0}^{\infty}\frac{\sin(nx)}{1+n^4}$. why $f(x)\in c^n$ in $[\frac{-1}{8},\frac{1}{9}]$ and $h(x)\in c^2$ in $\mathbb{R}$ I think h(x) is a periodic function and I think it can be differentiated infinitely many times. I am just not sure I understand how to show that a function defined by a series can be differentiated a specific number of times. AI: hint (note that the series for $f$ effectively starts at $n=2$, since $n\ln(n)=0$ for $n=0,1$) $f(x)$ is the sum of a power series whose radius of convergence is $$\lim_{n\to+\infty}\frac{(n+1)\ln(n+1)}{n\ln(n)}=1$$ thus $ f $ is $C^\infty $ at $(-1,1)$ and therefore at $ [-\frac 18, \frac 19] $. $h $ is the sum of a series of functions satisfying $$(\forall x\in \Bbb R)\;\;|\frac{\sin(nx)}{1+n^4}|\le \frac{1}{n^4}$$ thus the series is uniformly convergent at $\Bbb R$. The series of the derivatives satisfies $$(\forall x\in\Bbb R)\;\; |\frac{n\cos(nx)}{1+n^4}|\le \frac{1}{n^3}$$ thus, $h$ is differentiable at $\Bbb R$ and $$h'(x)=\sum_{n=0}^\infty \frac{n\cos(nx)}{1+n^4}$$ but $$|\frac{-n^2\sin(nx)}{1+n^4}|\le \frac{1}{n^2}$$ thus $h'$ is differentiable at $\Bbb R$ and $$h''(x)=\sum_{n=0}^\infty \frac{-n^2\sin(nx)}{1+n^4}$$ so, $ h''$ is continuous at $ \Bbb R$.
H: Accuracy of low rank approximation I am currently studying about randomized low-Rank Approximation of a matrix. In the problem's statement, given $m$ x $n$ $A$,it is referred that we want to minimize $\|A-Q_{k}Q^{T}_{k}A\|$ and for this reason, we seek a $m$ x $l$ $Q$ with orthonormal columns that approximates well the range(A). The challenge is to keep $l$ as low as possible. What I don't get is how the number $l$ affects accuracy. I think that if $l=n$, the above norm would be zero because $Q$ would be orthogonal and the result of the multiplication would be the identity matrix. I saw that if $l < n$ the result is not the identity matrix but I don't deeply understand the reason. I ran a simple test in Matlab with $qr()$ and saw that if the returned matrix $Q$ is not square, $QQ^{T}$ is not the identity matrix. How is this explained? I suppose that it is quite fundamental but I am stuck. C = randn(5,4); [q,~] = qr(C,0); % Economic qr q*q' Thank you in advance AI: Note that the matrix $Q$ has rank at most $\min(m, l)$, namely its rank is at most $l$. Since the product of matrices has a rank no more than the rank of any of the matrices that were multiplied, it follows that $Q Q^T$ has rank at most $l$ (in fact, it has rank exactly $l$). As $Q Q^T$ is an $m \times m$ matrix with rank at most $l$, it follows that if $l < m$, then $Q Q^T \neq I_m$, since $I_m$ has rank $m$. The matrix $Q Q^T$ represents an orthogonal projection onto the space spanned by the columns of $Q$. So the matrix $Q Q^T A$ is an orthogonal projection of each of $A$'s columns onto the space spanned by the columns of $Q$. Thus, the quantity $\|A - Q Q^T A\|$ can be thought of as a measure of distance between the subspace spanned by $Q$'s columns and the space spanned by $A$'s columns. Note that as we increase the number of columns $l$ of $Q$, the space spanned these $l$ columns becomes a better approximation of the space spanned by $A$'s columns. As you have observed, if we let $m = l$, then the spaced spanned by $Q$'s columns is just $\mathbb{R}^m$ itself, and so an orthogonal projection onto $\mathbb{R}^m$ is simply the identity (at which point the approximation becomes exact). The motivation for finding low-rank approximations is that they are easier to deal with, calculate, and manipulate. Furthermore, in many applications there is little extra benefit to be offered by working with the exact forms of the matrices. Indeed, low-rank approximations can often be quite good, even with rank $l \ll m$. EDIT: To see why $QQ^T$ is the matrix applying an orthogonal projection onto a space spanned by $Q$'s columns, note that given an orthonormal basis $\{\vec{u}_1, \vec{u}_2, \cdots, \vec{u}_l\}$ of a linear subspace $\mathcal{U} \subset \mathbb{R}^m$, the orthogonal projection of some vector $\vec{v} \in \mathbb{R}^m$ onto $\mathcal{U}$ is given by $$\sum_{i = 1}^l (\vec{u}_i \cdot \vec{v}) \vec{u}_i$$ The expression above represents the sum of the individual projections of $\vec{v}$ onto each of the $\vec{u}_i$ (which is the standard way of calculating an orthogonal projection onto a multidimensional subspace). It can be directly verified that if the $\vec{u}_i$ are the columns of $Q$, $QQ^T$ is just another way of writing the above expression. Indeed, $$Q Q^T \vec{v} = Q \begin{bmatrix} \vec{u}_2 \cdot \vec{v} \\ \vec{u}_2 \cdot \vec{v} \\ \vdots \\ \vec{u}_l \cdot \vec{v} \end{bmatrix} = \sum_{i = 1}^l (\vec{u}_i \cdot \vec{v}) \vec{u}_i$$
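The projection interpretation is easy to see numerically; here is a NumPy sketch mirroring the MATLAB snippet in the question: with $l<m$, the product $QQ^T$ is a rank-$l$ orthogonal projector, so it is idempotent but not the identity.

import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((5, 4))
Q, _ = np.linalg.qr(C, mode="reduced")     # economic QR: Q is 5x4 with orthonormal columns

P = Q @ Q.T
print(np.allclose(Q.T @ Q, np.eye(4)))     # True: the columns are orthonormal
print(np.allclose(P, np.eye(5)))           # False: P is not the identity
print(np.allclose(P @ P, P))               # True: P is idempotent (a projection)
print(np.linalg.matrix_rank(P))            # 4: rank equals the number of columns l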
H: Find $\lim_{n \to \infty} n^2 \int_{n}^{5n}\frac{x^3}{1+x^6}dx$ Question:Find the limit $\lim_{n \to \infty} n^2 \int_{n}^{5n}\frac{x^3}{1+x^6}dx$ I tried to convert it into $\frac{0}{0}$ indeterminate form,then applying L'Hospital's rule but the expressions in numerator are not nice to integrate.I do not know other way to solve this limit. Can anybody help me out! AI: Let $x=nu$, so that $dx=ndu$. We see $$n^2\int_n^{5n}{x^3\over1+x^6}dx=n^2\int_1^5{n^3u^3\over1+n^6u^6}ndu=\int_1^5{u^3\over(1/n)^6+u^6}du\to\int_1^5{u^3\over u^6}du=\int_1^5{du\over u^3}={1\over2}\left(1-{1\over5^2}\right)={12\over25}$$
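A quick numerical check of the value $12/25=0.48$ (a short SciPy sketch; the sample values of $n$ are arbitrary):

from scipy.integrate import quad

f = lambda x: x**3 / (1 + x**6)
for n in (10, 100, 10_000):
    val, _ = quad(f, n, 5 * n)
    print(n, n**2 * val)     # tends to 12/25 = 0.48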
H: False Assumption in Rank Calculation - Flawed Argument? I've gotten an exam back, and I think I've found somewhere that I can snag some marks, but I'm not sure about the quality of the argument presented in the question. Let $B$ be a matrix that is obtained from changing the value of exactly one entry of a matrix $A$. Then $rankB$ has one of three possible values: $rankA−1$, $rankA$, or $rankA+1$. This was a true or false question, which I answered as being false. This is because there are such counterexamples as the zero matrix, whose rank is zero. It is impossible for a matrix to have a rank of -1. Since three values are not possible, the statement is false. During the assessment, I asked my professor about what the exact argument of this statement was, and he said that it was "There are 3 possible values, and it is one of them". It seems to me that this asserts 2 points - one regarding the existence of 3 values, one regarding value itself. Since I can show that there are not necessarily 3 possible values, I think that this negates the statement. Is my reasoning correct? Could someone provide some insight as to why or why not? AI: I do not agree with your reading. The statement asserts that, regardless of what $A$ is, the resulting matrix will have a rank which is one of the three options. It does not assert that each of the options are possible in any particular circumstance, nor does it assert that that all three options are always achievable. That is, you are trying to read the statement as saying that for any $A$, you will be able to change the value of one entry in one way so that the resulting matrix has rank $\mathrm{rank}(A)-1$, in some other way so the ranks are equal, and in some third way so the rank of the resulting matrix is $\mathrm{rank}(A)+1$. The statement does not assert that; to assert that, you would need an extra clause saying something like “... and all three possibilities will be achievable.” The statement does not make the two assertions you think it does, it only makes one: that $$\mathrm{rank}(A)-1\leq\mathrm{rank}(B)\leq\mathrm{rank}(A)+1.$$ That assertion is true. Added. You say in comments that the professor said that “and it is one of them” is implied in the statement. Yes; but it does not imply “and each one of the three possibilities can occur”, which is what you read into it. What you have is an implication: the premise is “you change one of the entries of $A$” and the consequent is “the rank will be equal to the rank of $A$, to the rank of $A$ minus $1$, or to the rank of $A$ plus $1$”. What you read into the problem was an extra clause saying that, plus “and each of the three possibilities will occur.” The kind of statement you read in is worthwhile: it’s usually signaled by words like “and this is best possible”, “and all possibilities can occur”, and the like. Without them, it does not do that. Having said that, let me say (as I usually do) that going over the exam to see “where [you] can snag some marks” is... exactly the attitude least likely to get you any sympathy from a professor, and exactly the wrong attitude if your objective is to actually learn. You should be looking to understand what you did wrong, why it is wrong, and how to do it right in the future. If you want to go in and explain that this was your reasoning, and ask why, in the professor’s opinion, it is incorrect, go for it. 
But do so to explain yourself and try to understand why the professor agrees or does not agree with you; do not do so because you are hoping to squeeze some extra credit out of them.
H: Evaluate $\sum_{n=0}^{\infty} \frac{{\left(\left(n+1\right)\ln{2}\right)}^n}{2^n n!}$ Evaluate: $$\sum_{n=0}^{\infty} \frac{{\left(\left(n+1\right)\ln{2}\right)}^n}{2^n n!}$$ I am not sure where to start. The ${\left(n+1\right)}^n$ term is obnoxious as I can't split the fraction. Perhaps this can be turned into a double summation by using binomial theorem on ${\left(n+1\right)}^n$? AI: On the Wikipedia page for the Lambert W function, we find the following Maclaurin series identity: $$\bigg(\frac{W(x)}{x}\bigg)^r =\sum_{n=0}^\infty \frac{r(n+r)^{n-1}}{n!}(-x)^n$$ Performing some elementary transformations on this series yields the following identitiy: $$\frac{1}{1+W(x)}\bigg(\frac{W(x)}{x}\bigg)^r=\sum_{n=0}^\infty \frac{(n+r)^n}{n!}(-x)^n$$ If we plug in $r=1$ and $x=-\ln(2)/2$, and use the fact that $W(-\ln(2)/2)=-\ln(2)$, we obtain the value of the series that you’re looking for: $$\color{green}{\frac{2}{1-\ln(2)}}=\sum_{n=0}^\infty \frac{(n+1)^n}{n!}\bigg(\frac{\ln(2)}{2}\bigg)^n$$ which, corresponding with numerical approximations, is about $6.5178$.
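A quick numerical check of the closed form (a short Python sketch; the number of terms is arbitrary, and each term is evaluated through logarithms to avoid huge factorials):

import math

c = math.log(2) / 2
s = sum(math.exp(n * math.log(n + 1) - math.lgamma(n + 1) + n * math.log(c))
        for n in range(200))
print(s, 2 / (1 - math.log(2)))   # both approximately 6.51778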
H: Negation of "Either X is true, or Y is true, but not both" Negation of "Either X is true, or Y is true, but not both" My attempt: If seems that let X be true and Y be true, not X for X is false and not Y for Y is false. In order for the above statement to be True, we need: The negation of both X and Y to be true: negate(X and Y) -> not X or not Y For "Either X is true, or Y is true, but not both" is equivalent to below: ((X or Y) and (not X or not Y)) The negation of the above (negation turns "and" into "or", and turns "or" into "and"): ((not X and not Y) or (X and Y)) Is this logical or? I am pretty lost... AI: A basic approach is to see there only are 4 possibilities: $X$ and $Y$ $X$ and not $Y$ not $X$ and $Y$ not $X$ and not $Y$ Your statement was (2) or (3), so its negation is (1) or (4).
H: Let $s_k(n)$ denote number of digits in $(k+2)^n$ in base $k$ , evaluate $\lim_{n→∞}\frac{s_6(n)s_4(n)}{n^2}$. Let $s_k(n)$ denote number of digits in $(k+2)^n$ in base $k$ , evaluate $\lim_{n→∞}\frac{s_6(n)s_4(n)}{n^2}$. How to find out the number of digits in a particular base? Any hint for the problem is appreciated. AI: Here’s a bit to get you started. An integer $n$ requires $d$ digits in base $k$ if $k^{d-1}\le n<k^d$, i.e., if $d-1\le\log_kn<d$. Thus, the number of base $k$ digits in $n$ is $\lfloor\log_kn\rfloor+1$. In particular, the number of digits in $(k+2)^n$ in base $k$ is $$\lfloor\log_k(k+2)^n\rfloor+1=\lfloor n\log_k(k+2)\rfloor+1\;,$$ so $$s_6(n)s_4(n)=(\lfloor n\log_68\rfloor+1)(\lfloor n\log_46\rfloor+1)$$
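Carrying the hint one step further: after dividing by $n^2$ and letting $n\to\infty$, the floors become negligible, so the limit is $\log_68\cdot\log_46=\frac{\ln 8}{\ln 4}=\frac32$. A quick Python sketch that counts the digits exactly and checks the ratio (the helper name `digits` is arbitrary):

import math

def digits(n, k):
    # number of base-k digits of (k+2)**n, counted exactly by repeated division
    m, count = (k + 2)**n, 0
    while m:
        m //= k
        count += 1
    return count

for n in (10, 100, 1000):
    print(n, digits(n, 6) * digits(n, 4) / n**2)   # tends to 3/2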
H: Evaluate $\lim_{n \to \infty} \sum_{j=0}^{n} \sum_{i=0}^j \frac{i^2+j^2}{n^4+ijn^2}$ I am asked to evaluate: $$\lim_{n \to \infty} \sum_{j=0}^{n} \sum_{i=0}^j \frac{i^2+j^2}{n^4+ijn^2}$$ I am not experienced with double summations, but I tried simplifying the expression above into: $$\lim_{n \to \infty}\frac{1}{n^2} \sum_{j=0}^{n} \sum_{i=0}^j \frac{{\left(\frac{i}{n}\right)}^2+{\left(\frac{j}{n}\right)}^2}{1+\left(\frac{i}{n}\right) \left(\frac{j}{n}\right)}$$ Perhaps this is Riemann integral (double)? AI: HINT: Your Riemann Integral idea was a good one. We can express your limit using a double integral as follows: $$\int_0^1 \int_0^x \frac{x^2+y^2}{1+xy}\space dy \space dx$$ Can you take it from here?
H: Helix around Helix around Circle I'm trying to find the parametric equations for a helix around a helix around a circle (helix on helix on circle) That is: I would like to start with a circle, add a helix around it and a helix around the helix.(See video) I'm ok even if the second helix is not perfectly orthogonal to the first helix as long as we can have a simpler parametrization. I'm ok also if the curve represents a helix around a helix around a helix. I know the helix around a helix around an axis is quite easy but I was not able to find a solutions for this case. I'm interested in this parametric curve as a way to represent time and I would like to write a program to show data attached to that curve. Edit: I already know the parametric equations of a helix around a torus: $$x(t) = (R+ r\cos(nt)) \cos(t)$$ $$y(t) = (R+ r\cos(nt)) \sin(t)$$ $$z(t) = s t + r \sin(nt)$$ where $R$ is the radius of the torus $r$ is the radius of the helix $n$ is the winding number $s$ vertical velocity ($0$ if we want a closed curve). What I'm looking for is the next level helix on top of that. AI: HINT: You are looking at a curve spiraling at a constant rate about a torus. Start by finding parametric equations of a torus. Now make the two angles linear functions of $t$. It appears you want one to go much faster than the other, so that suggests how you should relate those linear functions. EDIT: Based on your comment, you want a curve that spirals around a given space curve. The way to do that is to take an orthonormal basis for the normal plane to the curve at each point and go around a circle as you move along the curve. In particular, take the Frenet frame $T,N,B$ for the curve, parametrized say by $\alpha(s)$. Now consider $$\alpha(s)+\cos\theta(s)N(s)+\sin\theta(s)B(s)$$ where $\theta$ is a linear function of $s$. (If you don't know about Frenet frames, see this or my differential geometry text, linked in my profile. It's most convenient to work with arclength-parametrized curves $\alpha(s)$, but the chain rule will do the heavy lifting for you if they're not.)
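Here is one way to carry out this construction numerically, taking the torus helix from the question as the base curve and winding a second, thinner helix around it (a NumPy sketch; all radii and winding numbers are arbitrary sample values, and the frame is computed by finite differences, so it is only reliable where the curvature of the base curve stays away from zero):

import numpy as np

def base_curve(t, R=5.0, r=1.0, n=8, s=0.0):
    # the helix-around-a-circle from the question
    x = (R + r*np.cos(n*t)) * np.cos(t)
    y = (R + r*np.cos(n*t)) * np.sin(t)
    z = s*t + r*np.sin(n*t)
    return np.stack([x, y, z], axis=-1)

t = np.linspace(0.0, 2*np.pi, 4000)
c = base_curve(t)

d1 = np.gradient(c, t, axis=0)                        # numerical tangent direction
T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
dT = np.gradient(T, t, axis=0)
N = dT / np.linalg.norm(dT, axis=1, keepdims=True)    # principal normal
B = np.cross(T, N)                                    # binormal

rho, m = 0.3, 64        # radius and winding number of the second-level helix
curve = c + rho*(np.cos(m*t)[:, None]*N + np.sin(m*t)[:, None]*B)
# `curve` now holds points of a helix wound around the original torus helix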
H: Property of solution to a Cauchy problem Let $I\subseteq \Bbb R$ an interval, let $w$ differentiable on $I$ such that $$w'(t)\le L|w(t)|\qquad \forall t\in I$$ for some $L>0.$ Let $t_0\in I$. Prove that $$w(t_0)\le 0\implies w(t)\le0\quad \forall t>t_0$$ and $$w(t_0)\ge0\implies w(t)\ge0\quad \forall t<t_0$$ AI: $w$ can increase no faster than $w'(t) = L |w(t)|$. If $w(t_0)<0$, then you have that $w'(t) \leq -L w(t)$ for $t_0\leq t\leq T$, where $T = \inf \{s>t_0: w(s)\geq 0\}$. Define $u(t)$ as the solution of $u'(t) = -L u(t)$ with $u(t_0) = w(t_0)$. Then for $t_0\leq t\leq T$ you have that $w(t)\leq u(t)$. $u(t)$ can be easily found as $$u(t) = u(t_0)\,e^{-L (t-t_0)}\leq0\,.$$ Since $u(t)$ is $\leq 0$ for all $t>t_0$, it follows that $w(t)\leq 0$ for all $t>t_0$, i.e. $T=\infty$. The case for $t<t_0$ is analogous.
H: Odd-dimensional $\mathbb{R}$-vector space has a one-dimensional $\varphi$-invariant subspace Let $V$ be an $\mathbb{R}$-vector space with odd dimension, and let $\varphi$ be an endomorphism on $V$. Show $V$ has a one-dimensional $\varphi$-invariant subspace. I already know that $\ker f(\varphi)$ is a $\varphi$-invariant subspace for any polynomial $f$ with coefficients in $\mathbb{R}$. Can I somehow use this to find the desired one-dimensional subspace of $V$? AI: Hint: The characteristic polynomial of $\varphi$ has a real root because it has odd degree.
H: Do the functions f(x), g(x), and h(x) exist so that f'(x)=g(x), g'(x)=h(x), and h'(x)=f(x), but none of the functions are multiples of each other? I know that there are functions where, if you take the derivative of that function a multiple of n times, the $n^{th}$ derivative of that function is equal to the original function (e.g. $e^x$ and zero for n = 1, $e^{-x}$ for n = 2, $\sin{x}$ and $\cos{x}$ for n = 4). I am curious as to whether or not there are functions where the smallest possible value of n is not equal to 1, 2, or 4. My gut feeling says that there should be, at the very least, functions where the smallest value of n is 3. However, I have not been able to find any. I am curious about the n = 3 case in particular, but a general solution for all n would be fantastic. AI: Take $f$ to be the sum of every third term of the Taylor series for exp, i.e., $$ f(x)=1+\frac{x^3}{3!}+\frac{x^6}{6!}+\frac{x^9}{9!}+\dots. $$ Define $g=f'$ and $h=f''$. Observe that $h'=f'''=f$. Clearly the same idea also works to produce longer derivative-cycles.
H: Lebesgue measure of specific subset of $[0,1)$ For any $x\in [0,1)$ we assign its binary representation $(x_1,x_2,\dots,x_n,\dots)$ without $1$ in period. Let $\{n_k\}_{k=1}^{\infty}$ be some increasing sequence of natural numbers, $\{a_k\}_{k=1}^{\infty}$ be some sequence of $0$ and $1$. Let $A=\{x\in [0,1): x_{n_k}=a_k \ \text{for} \ k\in \mathbb{N}\}$. Prove $\mu(A)=0$. I came up with the following idea: For each $m\geq 1$ we define the set $$A_m=\{x\in [0,1): x_{n_k}=a_k \ \text{for} \ k=1,\dots,m\}.$$ Then we see that $A_1\supset A_2\supset \dots \supset A_n \supset A_{n+1}\supset \dots$ and $A=\cap_{m=1}^{\infty} A_m$. I was trying to show that each $A_m$ is Lebesgue measurable but I failed. Suppose hypothetically I have shown this then since $\mu(A_1)<\infty $ then $$\mu(A)=\lim _{m\to \infty}\mu (A_m).$$ Also my second hypothesis is that for each $m\geq 1$ we have $\mu(A_{m+1})=\dfrac{\mu(A_m)}{2}$. Can anyone show to me the proof of the following moments: 1) Why each $A_m$ is Lebesgue measurable? I was not able to prove it even for $A_1$. It suffices to prove for the sets $A_1$ and the general case follows from the intersection. 2) How to prove rigorously that $\mu(A_{m+1})=\dfrac{\mu(A_m)}{2}$? Possible answer: I guess it follows from invariance of Lebesgue measure. Denote by $A_{m+1}^0=\{x\in [0,1): x_{n_k}=a_k \ \text{for} \ k=1,\dots,m \ \text{and} \ x_{n_{m+1}}=0\}$ and $A_{m+1}^1=\{x\in [0,1): x_{n_k}=a_k \ \text{for} \ k=1,\dots,m \ \text{and} \ x_{n_{m+1}}=1\}$ then $A_m=A_{m+1}^0\cup A_{m+1}^1$ and note that this union is disjoint. And since $A_{m+1}^1=A_{m+1}^0+\dfrac{1}{2^{n_{m+1}}}$ then $\mu(A_{m+1}^1)=\mu(A_{m+1}^0)$ and $\mu(A_m)=2\mu(A_{m+1})$. 3) The question says we are considering the binary representation without $1$ in period? Am I right that in this case any number from $[0,1)$ has unique binary expansion? Would be very grateful for answers! AI: Let's show that $A_1=\{x\in [0,1): x_{n_1}=a_1\}$ is Lebesgue measurable. Since $a_1\in \{0,1\}$ we will prove for the case when $a_1=0$. Denote by $A_1^0=\{x\in [0,1): x_{n_1}=0\}$ and $A_1^1=\{x\in [0,1): x_{n_1}=1\}$ then $A_1^0\cup A_1^1=[0,1)$ where the union is disjoint because we do not consider the representation with $1$ in the period. So if we prove that $A_1^0$ is measurable then it follows that $A_1^1$ is also measurable because $[0,1)$ is measurable. So WLOG we assume that $A_1=\{x\in [0,1): x_{n}=0\}$ then $A_1$ is the disjoint union of the following sets $$A_1=\bigsqcup_{} A_1^{\epsilon_1,\dots,\epsilon_{n-1}},$$ where $ A_1^{\epsilon_1,\dots,\epsilon_{n-1}}=\{x\in [0,1): x_1=\epsilon_1, \dots, x_{n-1}=\epsilon_{n-1}, x_{n}=0\}$ and the union is taken over all $(n-1)$-tuples of $(\epsilon_1,\dots,\epsilon_{n-1})$ where each $\epsilon_j\in \{0,1\}$, so it means that this union contains $2^{n-1}$ sets. Also the union is disjoint because we do not consider the expansions with $1$ in the period. We'll show that for each such $(n-1)$-tuple the set $A_1^{\epsilon_1,\dots,\epsilon_{n-1}}$ is measurable. But note that $A_1^{\epsilon_1,\dots,\epsilon_{n-1}}=B+\frac{\epsilon_1}{2}+\dots+\frac{\epsilon_{n-1}}{2^{n-1}}$, where $B=\{x\in [0,1): x_1=x_2=\dots=x_n=0\}$. And it is easy to show that $B=[0,\frac{1}{2^n})$ which is measurable and hence the $A_1^{\epsilon_1,\dots,\epsilon_{n-1}}$ is measurable and it means that $A_1$ is measurable being the union of measurable sets. 
Then $A_m=\{x\in [0,1): x_{n_k}=a_k \ \text{for} \ k=1,\dots,m\}$ is also measurable because $A_m=A_1^{n_1}\cap \dots\cap A_{1}^{n_m}$ where $A_1^{n_i}=\{x\in [0,1): x_{n_i}=a_i\}$.
H: Evaluating $\lim\limits_{n \to \infty} \sqrt[n]{\frac{n!}{\sum\limits_{m=1}^n m^m}}$ Evaluate: $$\lim_{n \to \infty} \sqrt[n]{\frac{n!}{\sum_{m=1}^n m^m}}$$ In case it's hard to read, that is the n-th root. I don't know how to evaluate this limit or know what the first step is... I believe that: $$\sum_{m=1}^n m^m$$ doesn't have a closed form so I suppose there must be some identity or theorem that must be applied to this limit. According to the answer key, the limit evaluates to $\frac{1}{e}$. AI: Let $a_n= (n! / \sum_{m=1}^n m^m)^{1/n}$. Observe that $n^n \le \sum_{m=1}^n m^m \le n n^n$. Then $$(\frac{n!}{n^n})^{1/n} \frac{1}{n^{1/n}} \le a_n \le (\frac{n!}{n^n})^{1/n}.$$ Since $n^{1/n}\to 1$, we need to find the limit of $(n! / n^n)^{1/n}$. Take the logarithm of this expression to obtain the Riemann sum $$ \frac{1}{n} \sum_{j=1}^n \ln (j/n) \to \int_0^1 \ln x dx = -1.$$ Therefore $a_n \to e^{-1}$.
H: Volume of tetrahedron. Let $D$ be a tetrahedron with corners $(0,0,0), (2,0,0), (0,6,0), (0,0,4)$. Find the volume of $D$ by setting up a triple integral. The equation of the plane containing these points is $$6x+2y+3z-12=0.$$ My question is how do I set up the bounds on the three integrals to get a volume? Thanks for the help. AI: The general idea is, after picking an order of integration (say $\mathrm{d}x\,\mathrm{d}y\,\mathrm{d}z$), work backwards. The range where $z$ works (for some possible $x,y$) is going to be from $z=0$ to $3z-12=0$, maximizing $z$ by setting $x=y=0$. Now, suppose we know $z$ (for the middle integral). What values of $y$ are possible - that's $y=0$ to $2y+3z-12=0$ (solve this equation for $y$). Finally, for the inner integral, for any given $y,z$, $x$ itself ranges from $0$ to whatever the maximum is solved for.
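Once the bounds are written down, the integral can be checked symbolically; for this tetrahedron the volume should come out to $\frac16\cdot 2\cdot 6\cdot 4=8$. A short SymPy sketch (innermost integral listed first):

import sympy as sp

x, y, z = sp.symbols('x y z')
# innermost: x from 0 up to the plane 6x + 2y + 3z = 12
vol = sp.integrate(1,
                   (x, 0, (12 - 2*y - 3*z) / 6),
                   (y, 0, (12 - 3*z) / 2),
                   (z, 0, 4))
print(vol)   # 8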
H: constant polynomial Can you give me a counter example of this ; Let $ P: = P (x, y) $ is a polynomial function with positive coefficients : $$P (x, y)=\sum\limits_{i+j\leq n}^{n} a_{ij}x^{i}y^{j}), \quad a_{ij}\geq 0 ,\forall i,j$$ $ a, b \ge 0 $ (edit $a\neq b$) such that $ Q_1 (x) := P (x, b) $ is constant function and $ Q_2 (y) := P (a, y) $ is also constant function , then $ P $ is constant function . AI: No counterexample exists. Express the polynomial as $$P(x,y)=\sum_{j=0}^m\sum_{i=0}^nc_{i,j}x^iy^j$$ for $c_{i,j}\geq 0$. Suppose there exist $a,b\geq0$ with $a\neq b$ such that $P(x,b)$ and $P(a,y)$ are both constant polynomials. Since $a\neq b$, $a$ and $b$ cannot both be zero. Without loss of generality, suppose $a\neq 0$. Then we see $$P(a,y)=\sum_{j=0}^m\left(\sum_{i=0}^nc_{i,j}a^i\right)y^j=\sum_{i=0}^nc_{i,0}a^i+\sum_{j=1}^m\left(\sum_{i=0}^nc_{i,j}a^i\right)y^j.$$ Since $P(a,y)$ is constant, the coefficients $\sum_{i=0}^nc_{i,j}a^i$ on $y^j$ must all be $0$ for $j>0$. But $\sum_{i=0}^nc_{i,j}a^i$ is a sum of nonnegative numbers which can therefore only be zero if each term is zero. Hence, $c_{i,j}a^i=0$ for $j>0$ which implies $c_{i,j}=0$ since $a^i\neq 0$. Then we now know $$P(x,y)=\sum_{i=0}^nc_{i,0}x^i.$$ So, it is now clear that $P(x,y)=P(x,b)$. But $P(x,b)$ is a constant polynomial, so $P(x,y)$ is a constant polynomial as well.
H: Sine Parametric function exercise Find the biggest negative value of $a$ , for which the maximum of $f(x) =sin(24x+\frac{πa}{100})$ is at $x_0=π$ The answer is $a=-150$, but I don't understand the solving way. I would appreciate if you'd help me please AI: Recall that the general solution of $\sin\theta = 1$ is: $$ \theta = \frac{\pi}{2} + 2\pi n, \text{where }n \in \mathbb Z $$ So the general solution for finding all maximum values of $f(x)$ is: \begin{align*} 24x + \frac{\pi a}{100} &= \frac{\pi}{2} + 2\pi n, \text{where }n \in \mathbb Z \\ 24x &= \frac{\pi}{2} - \frac{\pi a}{100} + 2\pi n, \text{where }n \in \mathbb Z \\ x &= \frac{1}{24} \left(\frac{\pi}{2} - \frac{\pi a}{100} + 2\pi n \right), \text{where }n \in \mathbb Z \\ \end{align*} In particular, we know that there is some $n_0 \in \mathbb Z$ such that $x_0 = \pi$, so: \begin{align*} \pi &= \frac{1}{24} \left(\frac{\pi}{2} - \frac{\pi a}{100} + 2\pi n_0 \right) \\ 24\pi &= \frac{\pi}{2} - \frac{\pi a}{100} + 2\pi n_0 \\ \frac{\pi a}{100} &= \frac{\pi}{2} - 24\pi + 2\pi n_0 \\ a &= -2350 + 200 n_0 \\ \end{align*} Since $a < 0$, we know that $n_0 < \frac{2350}{200} = 11.75$. Rounding down to the nearest integer, we find that $n_0 = 11$ so that $a = -2350 + 200(11) = -150$, as desired. $~~\blacksquare$
H: Manipulating the product of the dot product of multiple vectors is producing a paradox Unless otherwise indicated all defined vectors lie on the surface of the unit sphere (i.e their norm is 1). Let's take the following expression: $(V\cdot Z_1)(V\cdot Z_2) = V^TZ_1V^TZ_2$ Given that the dot product is commutative the following is true: $V^TZ_1V^TZ_2 = V^TZ_1 Z_2^TV$ Let's assume there exists a vector $Z'$ such that $V^TZ_1 Z_2^TV = V^TZ'$ Then: $VV^TZ_1 Z_2^TV = VV^TZ'$ but since $V$ is of norm 1 we get: $Z_1 Z_2^TV = Z'$ Which is a vector colinear to Z_1 and norm $Z_2^TV$. However $Z_1$ and $Z_2$ were arbitrary so you can redo the proof and multiply by $V^T$ from the right instead and you get that $Z'$ must be colinear to $Z_2$. It is of course impossible for $Z'$ to always be colinear to 2 arbtirary vectors. So one might be tempted to say no such $Z'$ exists. However that's not possible. Take $0 \leq (V\cdot Z_1)(V\cdot Z_2) = \theta \leq 1$ Since that scalar is in the range of $\cos$, trivially there exists $Z'$ such that $Z' \cdot V = \theta$ What did I do wrong? AI: Even if $$V^T V = \left[ \begin{matrix} V_x & V_y & V_z \end{matrix} \right] \left[ \begin{matrix} V_x \\ V_y \\ V_z \end{matrix} \right] = V_x^2 + V_y^2 + V_z^2 = 1$$ you have $$V V^T = \left[ \begin{matrix} V_x \\ V_y \\ V_z \end{matrix} \right] \left[ \begin{matrix} V_x & V_y & V_z \end{matrix} \right] = \left[ \begin{matrix} V_x^2 & V_x V_y & V_x V_z \\ V_y V_x & V_y^2 & V_y V_z \\ V_z V_x & V_z V_y & V_z^2 \end{matrix} \right] \ne I$$ In particular, $$V V^T Z = \left[ \begin{matrix} V_x^2 & V_x V_y & V_x V_z \\ V_y V_x & V_y^2 & V_y V_z \\ V_z V_x & V_z V_y & V_z^2 \end{matrix} \right] \left[ \begin{matrix} Z_x \\ Z_y \\ Z_z \end{matrix} \right ] = \left[ \begin{matrix} V_x^2 Z_x + V_x V_y Z_y + V_x V_z Z_z \\ V_x V_y Z_x + V_y^2 Z_y + V_y V_z Z_z \\ V_x V_z Z_x + V_y V_z Z_y + V_z^2 Z_z \end{matrix} \right] \ne \left[ \begin{matrix} Z_x \\ Z_y \\ Z_z \end{matrix} \right ]$$ Perhaps you forgot that matrix multiplication isn't commutative?
H: Evaluate the integral $\int_{|z-1|=2} \frac{1}{z^2 - 2i} dz$ It should be solved with the Cauchy's Integral Formula. The solution given is $\displaystyle \frac{\pi}{2} + i\frac{\pi}{2}$ but I obtained $0$. I did this: Let $C = \{|z-1|=2\}$ $$\displaystyle\int_C \frac{dz}{z^2-2i} = \int_C \frac{dz}{(z-(1+i))(z+(1+i))} $$ $$=\frac{1}{2+2i}\int_C \frac{dz}{z-(1+i)} - \frac{1}{2+2i}\int_C \frac{dz}{z+(1+i)}$$ With the Cauchy's formula, I get $$ \frac{1}{2+2i}\int_C \frac{dz}{z-(1+i)} = \frac{2\pi i }{2+2i}(f'(z_0))$$ If $z_0 = (1+i)$ and $f(z) = 1$, then $f'(1+i) = 0$. And analogously with $\displaystyle \frac{1}{2+2i}\int_C \frac{dz}{z+(1+i)}$. Is there something wrong in my procedure? AI: Cauchy's integral formula is $$\int_C\frac{f(z)}{z-a}\,dz=2\pi i f(a)$$ when $C$ is a positively oriented contour, $a$ is inside $C$ and $f$ is holomorphic. Letting $C$ be your contour, $a=1+i$ and $f(z)=1$ gives $$\int_C\frac{dz}{z-(1+i)}=2\pi i.$$ But $-1-i$ is outside $C$, so $$\int_C\frac{dz}{z+(1+i)}=0$$ by Cauchy's theorem. So your original integral equals $$\frac{2\pi i}{2+2i}=\frac\pi 2(1+i).$$ But a simpler approach is to avoid partial fractions, and take $a=1+i$ and $f(z)=1/(z+1+i)$ in Cauchy's integral formula.
H: can we conclude $f(m)=\sqrt2(\log m)^{1/2}$? If $$f(m)=\sup\{s: s^2/2\leq \log m\}$$ then can we conclude that $f(m)=\sqrt{2}(\log m)^{1/2}?$ AI: $$ \frac{s^2}{2}\leqslant \log m\iff |s|\leqslant \sqrt{2\log m} $$ Thus $f(m)=\sqrt{2\log m}$.
H: find the solution of $2x\sin{\left(\frac{y}{x}\right)}dx+3y\cos{\left(\frac{y}{x}\right)}dx-3x\cos{\left(\frac{y}{x}\right)}dy=0$ A solution of the equation $$2x\sin{\left(\frac{y}{x}\right)}dx+3y\cos{\left(\frac{y}{x}\right)}dx-3x\cos{\left(\frac{y}{x}\right)}dy=0$$ I know the answer is $x^2=c\sin^3(y/x)$ but I don't know the solution AI: Rewrite the O.D.E. $2x\sin{\left(\frac{y}{x}\right)}dx+3y\cos{\left(\frac{y}{x}\right)}dx-3x\cos{\left(\frac{y}{x}\right)}dy=0$ as follows $$\left(2\sin{\left(\frac{y}{x}\right)}+3\frac yx\cos{\left(\frac{y}{x}\right)}\right)-3\cos{\left(\frac{y}{x}\right)}\frac{dy}{dx}=0$$ Substitute $y=vx\implies \frac{dy}{dx}=v+x\frac{dv}{dx}$ $$\left(2\sin v+3v\cos v\right)-3\cos v\left(v+x\frac{dv}{dx}\right)=0$$ $$\cot v\ dv=\frac{2}{3}\frac{dx}{x}$$ $$3\int \cot v\ dv=2\int \frac{dx}{x}+C$$ $$3\ln\sin v=2\ln x+\ln c $$ $$\ln\sin^3 v=\ln\left(cx^2\right)$$ $$\sin^3 v=cx^2$$ $$\sin^3 \left(\frac{y}{x}\right)=cx^2$$ or, equivalently, $x^2=C\sin^3\left(\frac{y}{x}\right)$ with $C=1/c$.
H: Concept of 2-variable function for operators on an $n$-dimensional inner product space I'm reading the book "Finite-Dimensional Vector Spaces (2nd Ed)" by PR Halmos. The concept of a 2-variable function (or polynomial) for operators is introduced in Theorem 1 of Section 84 on page 171 in the following setting: Two self-adjoint operators $A$ and $B$ on an $n$-dimensional inner product space are commutative, and have respective spectral forms $A = \sum_{i=1}^n \alpha_i E_i$ and $B = \sum_{j=1}^n \beta_j F_j$. There exists some real-valued function (or polynomial) $h$ in two variables, given by $h(\alpha_i, \beta_j) = \gamma_{ij}$, where the $\gamma$'s are arbitrary, pairwise-distinct real numbers (i.e., $ij \neq kl \implies \gamma_{ij} \neq \gamma_{kl}$). Under this setting, the author first argues that $A$ and $B$ commute $\implies E_i$ and $F_j$ commute for all $i, j$. (This part is clear to me.) But then he briskly states that the function (or polynomial) given by $h(A, B)$ equals $\sum_{i=1}^n \sum_{j=1}^n h(\alpha_i, \beta_j)E_iF_j$. (This part puzzles me.) While I understand why each $E_i$ commutes with each $F_j$ for all $i$ and $j$, I'm struggling to understand why $h(A, B)$ equals what the author has stated. Perhaps because I am unable to comprehend the concept of a 2-variable function (or polynomial) of operators even though I do understand the concept of a 1-variable function (or polynomial) of an operator. Would appreciate some help. AI: Since the $\{E_i\}$ and $\{F_j\}$ are orthogonal idempotents (meaning $E_iE_j=\delta_{ij}E_j$ and similar for $F_j$), any power of $A$ is given by $A^k=\sum_i \alpha_i^k E_i$. Therefore we have $$ h(A,B)=\sum_{k,\ell} h_{k\ell} A^kB^{\ell}=\sum_{k,\ell} h_{k\ell}\left(\sum_i \alpha_i^kE_i\right)\left(\sum_j \beta_j^{\ell} F_j\right) $$ $$ = \sum_{i,j} \left(\sum_{k,\ell} h_{k\ell}\alpha_i^k\beta_j^\ell\right)E_iF_j=\sum_{i,j} h(\alpha_i,\beta_j)E_iF_j. $$
H: How do the Averages Work? I am trying to figure out the average items sold per customer for the year 2019 I have multiple customers per day, who each have a random number of items. Sometimes a customer makes more than one purchase a day - about 6% of the time. In that case - all sales for that customer are considered to be 1 sale. I have calculated this in 2 fashions - The 1st: Total # Items Sold ( 27,427,131 ) / Total # customers ( 6,556,133 ) = 4.18 AVG(Items Sold Per Customer who make a purchase Per Day) = 4.21 The second way I did using an SQL function on an MS SQL Server The first way I did by getting the total # from the same SQL Database With > 1 million customers - I can't replicate the 2nd method in Excel - and so I can't step it through and confirm the data. I don't quite understand why the 2 avgs are different - but I suspected they would be and would like to use the # I think is more accurate - the 2nd one: 4.21 My boss has asked me to provide the formula used, and wants me to "prove" my answer. Not an unreasonable request, but I'm a little lost in explaining why I am getting 2 different averages. How do I explain this? OR - and this is a real possibility - The numbers SHOULD be exactly the same and I am doing something wrong in one of my steps to calculate this. AI: You're running into weighted averages on unequal sample sizes. The second formula is incorrect. Suppose there's only the following two days: 2 items sold, 1 customer 2 items sold, 2 customers It's obvious the first formula generates the correct result ($4/3$, 4 items/3 customers), the second is $\mathrm{AVG}(2,1)$ which is not $4/3$. See Simpson's paradox for how bad it can get.
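The effect is easy to reproduce on a toy data set: averaging per-day averages weights every day equally, while total items divided by total customers weights each day by its customer count, so the two disagree whenever busy and quiet days have different per-customer averages. A short Python sketch with made-up numbers:

days = [
    {"items": 2, "customers": 1},   # quiet day: 2 items per customer
    {"items": 2, "customers": 2},   # busy day: 1 item per customer
]

overall = sum(d["items"] for d in days) / sum(d["customers"] for d in days)
avg_of_daily = sum(d["items"] / d["customers"] for d in days) / len(days)
print(overall)        # 4/3 = 1.333...  (true items per customer)
print(avg_of_daily)   # 1.5             (unweighted average of the daily averages)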
H: If two random variables follow the same distribution, does it mean X=Y? The question here is: Find the covariance of $X$ and $Y^{2}$ when $X $~ $N(0,1)$ $Y $~ $N(0,1)$ Do $Cov(X, Y^{2})$ and $Cov(X, X^{2})$ have the same value? Or, is $Cov(X, Y^{2})$ equal to $Cov(X, X^{2})$ ? AI: This distinction is the entire point of "independent and identically distributed" random variables. In fact this question is completely underdefined. Without restriction, the relationship between $X$ and $Y$ can be anything, from (as an unstated assumption, actually) the same, to independent, to completely anticorrelated ($X=-Y$).
H: Is the following statement true: $((A \Rightarrow B) \wedge (B \in C)) \Rightarrow A \in C$? Just came up with problem that seems to be very basic, but I can't figure it out. I'm pretty certain it's not true. A, B are logical sentences and C is a set. Can you prove that below statement is true, or give a counterexample to show it's false? $((A \Rightarrow B) \wedge (B \in C)) \Rightarrow A \in C$? AI: I’ve slightly modified what I wrote in my comment to make $C$ correspond to a more interesting property than simply is identical to $B$. Take $C$ to be the set of all propositions logically equivalent to $B$, and take $A$ to be any proposition such that $A\Rightarrow B$ and $B\not\Rightarrow A$; then $B\in C$, but $A\notin C$.
H: $A$ is positive definite iff $\det(A_k) > 0$ Let $A$ be a symmetric $n \times n$ matrix, and $V = \mathbb{R}^n$. Define $\langle v,w \rangle = v^t A w$ with $v,w \in V$. Show that $A$ is positive definite iff $\det(A_k) > 0$ for $1 \leq k \leq n$, where $A_k$ is the $k \times k$ matrix which is the left upper $k \times k$ corner of $A$. The case $k=1$ was quite easy. I'm having trouble with the rest. I tried using the fact that the determinant of a matrix is the product of its eigenvalues, but I didn't get anywhere. AI: $(\Leftarrow)$ We proceed by induction. You have already proven the base case. Let us assume that the claim holds for all $(n - 1) \times (n - 1)$ symmetric matrices $B$ with all $\Delta_k = \det(B_k) > 0$, where $1 \leq k \leq (n - 1)$. (Here I'm using the same notation as you: $B_k$ is the upper-left $k \times k$ matrix of $B$.) Let us further assume that we are now dealing with an $n \times n$ matrix $A$ whose upper left $(n - 1) \times (n - 1)$ matrix is $B$. The goal of the inductive step will be to show that $A$ as described has no non-positive eigenvalues, which is a necessary and sufficient condition for a symmetric matrix to be positive definite. Our first aim in the inductive step will be to show that there is at most one non-positive eigenvalue (counting multiplicities) of $A$. That is, there is at most one non-positive eigenvalue and it has multiplicity at most $1$. Indeed, suppose the contrary: if there existed either two distinct non-positive eigenvalues or a non-positive eigenvalue with multiplicity at least $2$, then by the Spectral Theorem we could find two eigenvectors $\vec{u}$ and $\vec{v}$ such that $A \vec{u} = \lambda_1 \vec{u}$, $A \vec{v} = \lambda_2 \vec{v}$, $\|\vec{u}\| = \|\vec{v}\| = 1$ and $\vec{u} \cdot \vec{v} = 0$. (Note here that $\lambda_1, \lambda_2 \leq 0$ and it could be that $\lambda_1 = \lambda_2$.) Let us choose $\vec{w}$ to be a non-trivial linear combination of $\vec{u}$ and $\vec{v}$ whose last entry is zero. (For example, we can choose $\vec{w} = v_n \vec{u} - u_n \vec{v}$, where $v_n$ and $u_n$ are the $n$th entries of $\vec{u}$ and $\vec{v}$, respectively; if $u_n = v_n = 0$, simply take $\vec{w} = \vec{u}$.) On one hand, $$\vec{w}^T A \vec{w} = v_n^2 \vec{u}^T A \vec{u} + u_n^2 \vec{v}^T A \vec{v} = \lambda_1 v_n^2 + \lambda_2 u_n^2 \leq 0$$ (the cross terms vanish because $\vec{u}^T A \vec{v} = \lambda_2\, \vec{u} \cdot \vec{v} = 0$). However, since $w_n = 0$, it is the case that $$\vec{w}^T A \vec{w} = \vec{x}^T B \vec{x} > 0$$ where $\vec{x}$ is $\vec{w}$ with the last entry truncated. Note that the second inequality follows from the inductive hypothesis, as $B$ is then positive definite and $\vec{x} \neq \vec{0}$. We have reached a contradiction, so indeed there is at most one non-positive eigenvalue, counting multiplicities. However, it is not possible for even a single eigenvalue of multiplicity one to be non-positive. Indeed, the determinant of $A$, which is assumed to be positive, is the product of the eigenvalues (counting multiplicities). If there were such an eigenvalue, then $\det A$ would have to be non-positive, being the product of $n - 1$ positive eigenvalues and the single non-positive eigenvalue (recall we've shown that at least $n - 1$ of $A$'s eigenvalues are positive). The claim is proven. $\square$ $(\Rightarrow)$ To prove necessity, we will prove the contrapositive. Let $k^*$ be the minimal value of $k$ such that $\Delta_k \leq 0$. If we retrace the steps of the proof of the previous claim, then we can note that $A_k$ is positive definite for all $k < k^*$, but since $\Delta_{k^*} \leq 0$, it follows that $A_{k^*}$ has at least one non-positive eigenvalue. But if $A_{k^*}$ is not positive definite, then neither is $A$.
To see why, let $\vec{v} \in \mathbb{R}^{k^*}$ be a vector such that $\vec{v}^T A_{k^*} \vec{v} \leq 0$. Padding $\vec{v}$ with zeroes until it has $n$ entries, we obtain a vector $\vec{w}$ such that $\vec{w}^T A \vec{w} \leq 0$, so $A$ is not positive definite. $\square$
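As a quick numerical illustration of the criterion (not part of the original answer; a small sketch assuming NumPy is available), one can compare the sign pattern of the leading principal minors with a direct eigenvalue check on random symmetric matrices:

    import numpy as np

    def leading_minors_positive(A):
        # Sylvester's criterion: every upper-left k x k determinant must be positive
        return all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, A.shape[0] + 1))

    def is_positive_definite(A):
        # For a symmetric matrix, positive definite <=> all eigenvalues are positive
        return np.all(np.linalg.eigvalsh(A) > 0)

    rng = np.random.default_rng(0)
    for _ in range(1000):
        M = rng.normal(size=(4, 4))
        A = (M + M.T) / 2          # symmetrize
        assert leading_minors_positive(A) == is_positive_definite(A)
    print("Sylvester's criterion agreed with the eigenvalue test on all samples.")

Exact ties $\det(A_k)=0$ essentially never occur for random matrices, so the floating-point comparisons are safe here.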
H: How many paths are possible such that the area of the square cut off is exactly half the area of the entire square? A vertical polygonal path will be formed by picking one point from each row of the four by four grid of points below (Fig. 1), and then connecting these points sequentially from top to bottom. The area of the grid to the left of the polygonal path will then be shaded. For how many four-point selections will the vertical polygonal path result in exactly half of the grid's area being shaded? One example is given in Figure 2. Is there a better way than bashing the $4^4$ ways of making a line? Thanks! AI: So your problem boils down to determining the number of solutions of a Diophantine equation (which is an area of mathematics I know very little about). I'm going to present a solution for an $n\times n$ lattice. Let's get started with some definitions. Essentially, the process here is selecting a point from each row. I'll give the selection in the $k$th row a "left index", $x_k$, and a "right index", $y_k$. These indices start from $0$; that is, the "left index" of the leftmost point is $0$ and the "right index" of the leftmost point is $n-1$. So in your Fig. 2, the left indices are $x_1=2, x_2=0, x_3=2, x_4=3$, and the right indices are $y_1=1, y_2=3,y_3=1,y_4=0.$ It is always true that $$x_k+y_k=n-1.$$ Hopefully this is clear enough, but please comment if you need additional clarification. To solve this problem, I'm going to define an area function. The area function is the sum of the areas of trapezoids formed by pairs of points. That is, $$A=a_1+a_2+...+a_{n-1}$$ where $a_1$ is the area between the first and second row, $a_2$ the area between the second and third, and so on. WLOG, I'll call the distance between adjacent lattice points $1$ (so the total area of the lattice is $(n-1)^2$). Thus, $a_k= \frac{1}{2}(b_k+b_{k+1})$, where $b_k$ is the $k$th trapezoid "base". Therefore the left-hand area is $$A_L=\sum_{i=1}^{n-1}{\frac{1}{2}(x_i+x_{i+1})} \equiv \frac{S}{2}$$ and the right-hand area is $$A_R=\sum_{i=1}^{n-1}{\frac{1}{2}(y_i+y_{i+1})}.$$ However, this can be restated as $$A_R=\sum_{i=1}^{n-1}{\frac{1}{2}(n-x_i-1+n-x_{i+1}-1)}$$ $$A_R=\sum_{i=1}^{n-1}{\frac{1}{2}((2n-2)-x_i-x_{i+1})}$$ $$A_R=\sum_{i=1}^{n-1}{(n-1)}-\frac{1}{2}\sum_{i=1}^{n-1}{(x_i+x_{i+1})}$$ $$A_R=(n-1)^2-\frac{S}{2}.$$ As a sanity check, the area of the entire lattice should be equal to $A_L+A_R$, and it is indeed true that $$A_L+A_R=\frac{S}{2}+(n-1)^2-\frac{S}{2}=(n-1)^2,$$ which is consistent. Now, for the left and right hand areas to be equal, $$A_L=A_R \implies S=(n-1)^2.$$ Recalling the definition of $S$, $$\sum_{i=1}^{n-1}{(x_i+x_{i+1})}=x_1+x_n+2\sum_{i=2}^{n-1}{x_i}=(n-1)^2.\tag{1}$$ This is a Diophantine equation subject to the constraints that $x_1,...,x_n \in \{0,1,2,...,n-1\}.$ For the $n=4$ case, this is $$x_1+x_4+2x_2+2x_3=9,$$ which has $28$ solutions. This formulation is consistent, as it produces $2$ solutions for the $n=2$ case and $5$ solutions for the $n=3$ case; this can be verified easily on the diagram with pencil and paper. Unfortunately, not only does my formula not account for rotations, but I also don't know how many solutions it will have for a given $n$ (combinatorics people, help!), but hopefully this is a good amount of insight to get going.
FYI: the $n=4$ case was checked with the following Python code:

    n = 4
    solutions = []
    for x1 in range(n):
        for x2 in range(n):
            for x3 in range(n):
                for x4 in range(n):
                    X = (x1, x2, x3, x4)
                    S = x1 + x4 + 2 * (x2 + x3)
                    if S == (n - 1) ** 2:
                        solutions.append(X)
    print(solutions)
    print(len(solutions))
H: Prove $(a^2+b^2+c^2)^3 \geqq 9(a^3+b^3+c^3)$ For $a,b,c>0$ with $abc=1$, prove: $$(a^2+b^2+c^2)^3 \geqq 9(a^3+b^3+c^3)$$ My proof by SOS is ugly and hard to find without a computer: $$\left(a^{2}+b^{2}+c^{2}\right)^{3}-9abc\left(a^{3}+b^{3}+c^{3}\right)$$ $$=\frac{1}{8}\left(b-c\right)^{6}+\frac{117\left(b+c\right)^{4}\left(b+c-2a\right)^{2}}{1024}+\frac{3a^{2}\left(40a^{2}+7b^{2}+14bc+7c^{2}\right)\left(b-c\right)^{2}}{32}$$ $$+\frac{3\left(b+c\right)^{2}\left(3a-2b-2c\right)^{2}\left(b-c\right)^{2}}{32}+\frac{3}{16}\left(a+2b+2c\right)\left(4a+b+c\right)\left(b-c\right)^{4}$$ $$+\frac{\left(16a^{2}+24ab+24ac+11b^{2}+22bc+11c^{2}\right)\left(4a-b-c\right)^{2}\left(b+c-2a\right)^{2}}{1024}\geqq 0$$ I think $uvw$ is the best way here, but it is not suitable for a secondary-school student. BW also helps here, but it is not nice, I think. So I would like a nice solution for it! Thanks a lot! AI: Yes, SOS helps: $$(a^2+b^2+c^2)^3-9(a^3+b^3+c^3)=(a^2+b^2+c^2)^3-9abc(a^3+b^3+c^3)=$$ $$=\frac{1}{2}\sum_{cyc}(2a^6+6a^4b^2+6a^4c^2-18a^4bc+4a^2b^2c^2)=$$ $$=\frac{1}{2}\sum_{cyc}(2a^6-a^4b^2-a^4c^2+7a^4b^2+7a^4c^2-14c^4ab-4a^4bc+4a^2b^2c^2)=$$ $$=\frac{1}{2}\sum_{cyc}(a-b)^2((a+b)^2(a^2+b^2)+7c^4-2abc(a+b+c))=$$ $$=\frac{1}{2}\sum_{cyc}(a-b)^2(7c^4-2abc^2-2ab(a+b)c+(a+b)^2(a^2+b^2))\geq0,$$ where the last inequality is true by AM-GM: $$c^4+\frac{1}{8}(a^2+b^2)(a+b)^2\geq c^4+a^2b^2\geq2abc^2$$ and $$6c^4+\frac{7}{8}(a+b)^2(a^2+b^2)\geq2ab(a+b)c.$$ Can you prove the last inequality by AM-GM?
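As a quick check (not part of the original answer; a sketch assuming SymPy is installed), the key SOS identity the answer relies on can be verified symbolically:

    from sympy import symbols, expand

    a, b, c = symbols('a b c', positive=True)

    lhs = (a**2 + b**2 + c**2)**3 - 9*a*b*c*(a**3 + b**3 + c**3)

    def term(x, y, z):
        # one cyclic summand of the SOS decomposition above
        return (x - y)**2 * ((x + y)**2 * (x**2 + y**2) + 7*z**4 - 2*x*y*z*(x + y + z))

    rhs = (term(a, b, c) + term(b, c, a) + term(c, a, b)) / 2
    print(expand(lhs - rhs))   # prints 0, confirming the identity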
H: How to solve $\frac{2s^s}{(1-s)^{1-s}}\leq 3$? How can one compute $$\sup_s\{s\geq 1: \log \frac{2s^s(1-s)}{(1-s)^s}\leq \log 3\}?$$ AI: If you consider the function $$f(s)=\frac{2s^s}{(1-s)^{1-s}}$$ you have $$f'(s)=2 (1-s)^{s-1} s^s (\log (1-s)+\log (s)+2)$$ which is zero when $$\log (1-s)+\log (s)+2=0 \implies s(1-s)=e^{-2}\implies s_\pm=\frac{1}{2}\pm \frac{\sqrt{e^2-4}}{2 e}$$ Use the second derivative test to show that $s_-$ corresponds to a minimum and $s_+$ to a maximum. Now $$f(s_+)=2 e^{-\frac{\sqrt{e^2-4}}{e}} \sqrt{\frac{e+\sqrt{e^2-4}}{e-\sqrt{e^2-4}}}\approx 2.31615$$ In particular, the maximum of $f$ on $(0,1)$ is below $3$, so the inequality $f(s)\leq 3$ holds for every $s\in(0,1)$.
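As a numerical sanity check (not part of the original answer; a small Python sketch using only the standard library):

    import math

    e = math.e
    s_plus = 0.5 + math.sqrt(e**2 - 4) / (2 * e)
    s_minus = 0.5 - math.sqrt(e**2 - 4) / (2 * e)

    def f(s):
        return 2 * s**s / (1 - s)**(1 - s)

    print(s_minus, s_plus)   # critical points, roughly 0.1613 and 0.8387
    print(f(s_plus))         # maximum value, roughly 2.31615 < 3
    # crude grid check that f stays below 3 on (0, 1)
    print(max(f(k / 10000) for k in range(1, 10000)) < 3)   # True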
H: $\alpha\in L$ algebraic over $K$ implies $\beta\in L$ algebraic over $K$ iff $\beta$ algebraic over $K(\alpha)$. The fact that $\beta\in L$ algebraic over $K$ implies $\beta$ is algebraic over $K(\alpha)$ is obvious. Since $\alpha\in L$ is algebraic over $K$, we have $[K(\alpha):K]=n<\infty$. Assume $\beta$ is algebraic over $K(\alpha)$. Then $[K(\beta):K(\alpha)]=d<\infty$. We have $$ [K(\beta):K]=[K(\beta):K(\alpha)][K(\alpha):K]=dn<\infty. $$ Hence $\beta$ is algebraic over $K$. Is this correct? AI: It may not be true that $K(\beta)$ is an extension of $K(\alpha)$, so writing $[K(\beta):K(\alpha)]$ is not proper. You can modify your reasoning as follows: $$[K(\alpha, \beta):K] = [K(\alpha, \beta):K(\alpha)][K(\alpha):K],$$ which is finite by the assumptions that $\beta$ is algebraic over $K(\alpha)$ and $\alpha$ is algebraic over $K$. So $[K(\alpha, \beta):K]$ is finite, and therefore so is $[K(\beta):K]$, and we are done.
H: Every nonzero homomorphism of a field to a ring is injective? I've read the following theorem: And am trying to understand what was done there. I think it is the following: We want to prove that $$\text{Non zero hom of field to ring} \implies \text{hom is injective}$$ I think they used the contrapositive: $$\overbrace{\varphi(a)=0 \;\wedge \;a\neq0}^{\text{hom not injective}}\implies \overbrace{\varphi(b)=0}^{\text{zero hom}}$$ Is that correct? I believe it is, I just want to confirm. AI: Yes, you've understood it correctly. It is perhaps clearer to see it in the following way, though. Given a ring map $\varphi:k\to A$ from a field $k$ to a ring $A$, we know that $\ker \varphi$ is an ideal of $k$. The ideals in a field are $(0)$ or $(1)$. So, $\varphi$ is either injective when $\ker \varphi=(0)$ or the zero map when $\ker\varphi=(1)=k$.
H: How to find the probability of countable infinite sets? Consider the set $A = \{1,2,3,\ldots,n\}$. If all subsets of $A$ are equally likely to be chosen, what is the probability that a randomly selected subset of $A$ contains $1$? AI: The total number of subsets of $A$ is $$|\mathcal{P}(A)| = 2^{n},$$ where $\mathcal{P}(A)$ denotes the power set of $A$. The way I get this is that in a given subset, each element is either present or not present, which gives two options for each element. Now, for subsets that include $1$, that choice is already fixed. Hence the number of such subsets is $2^{n - 1}$, coming from deciding which of the remaining elements will be part of the set. Hence the probability of picking out one of these sets is $$P= \frac{2^{n-1}}{2^n} = \frac{1}{2}.$$
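A quick brute-force check of this count (not part of the original answer; a small Python sketch using only the standard library):

    from itertools import combinations

    n = 6
    elements = range(1, n + 1)
    # enumerate all 2^n subsets explicitly
    subsets = [set(c) for r in range(n + 1) for c in combinations(elements, r)]
    with_one = [s for s in subsets if 1 in s]
    print(len(subsets), len(with_one), len(with_one) / len(subsets))   # 64 32 0.5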
H: Closure of family of hypersurfaces over a punctured disk Let $\Delta\subset \mathbb C$ be an open disk, $\Delta^*=\Delta\setminus\{0\}$ the punctured disk. Let $p:\mathcal{X}\to \Delta^*$ be a family of irreducible projective hypersurfaces of degree $d$. In other words, we have a diagram $\require{AMScd}$ \begin{CD} \mathcal{X} @>i>> \mathbb P^n\times \Delta^*\\ @V{p}VV @VVV\\ \Delta^* @= \Delta^* \end{CD} where $i$ is an embedding and $p$ is proper with each fiber an irreducible hypersurface (not necessarily smooth). Moreover, I assume that $p$ is an algebraic map, so you can replace $\Delta^*$ by a quasiprojective curve if you want. I'd like to know the answers to the following: Question 1: Let $F_t$ be the homogeneous polynomial defining the hypersurface $p^{-1}(t)$, with $t\in \Delta^*$. Can we argue that $F_t$ depends on $t$ algebraically? In other words, can we argue that $F_t=\sum_I a_{I,t} x^I$ (where $I$ is a multi-index) for $t\in \Delta^*$, with each coefficient $a_{I,t}$ a polynomial in $t$? Question 2: If the above is true, then we can define the limit hypersurface $$F_0:=\lim_{t\to 0}F_t.$$ Let $\bar{\mathcal{X}}$ be the closure of $\mathcal{X}$ in $\mathbb P^n\times \Delta$; I'd like to know whether $\{F_0=0\}$ coincides with the fiber of $\bar{\mathcal{X}}$ over zero. Note that $F_0$ can be reducible and have nonreduced components, so my goal is to understand whether the closure $\bar{\mathcal{X}}$ can capture this information. The closure here is the same in both the Zariski topology and the analytic topology, by my assumption that $p$ is an algebraic family. Thanks in advance for any comment or answer. AI: Assume $\mathbb{P}^n = \mathbb{P}(V)$. Consider the morphism of sheaves $$ a \colon L := p_*(I_{\mathcal{X}}(d)) \to p_*(\mathcal{O}_{\mathbb{P}(V) \times \Delta^*}(d)) = S^dV^\vee \otimes \mathcal{O}_{\Delta^*}. $$ By base change $L$ is a line bundle and the morphism $a$ is a fiberwise monomorphism. But $\operatorname{Pic}(\Delta^*) = 0$, hence $L \cong \mathcal{O}_{\Delta^*}$ and the morphism $a$ is given by a collection $a_I$ of functions on $\Delta^*$. This answers the first question. Now we can consider the morphism $a$ as a rational section of the bundle $S^dV^\vee \otimes \mathcal{O}_{\Delta}$. As such it might have a pole or zero at the origin. Multiplying by an appropriate power of the coordinate on $\Delta$, we can get rid of the pole and make sure that $a(0) \ne 0$. Then $a$ defines a hypersurface $$ \bar{\mathcal{X}} \subset \mathbb{P}^n \times \Delta. $$ It remains to note that it is the closure of $\mathcal{X}$. Indeed, $\bar{\mathcal{X}}$ is irreducible (because $a(0) \ne 0$) and reduced (because it is Cohen-Macaulay and generically reduced). This answers the second question.
H: Understanding the fulfillment of a condition required for applying the pasting lemma. Here is the solution of the question (which asks us to prove that the relation of homotopy among maps $X \rightarrow Y$ is an equivalence relation): My question is about the last part, proving transitivity. Why, while applying the pasting lemma, was the author of the solution sure that $X \times [0,1/2]$ and $X \times [1/2, 1]$ are closed, even though we do not have any information about whether $X$ is closed? Could anyone explain this point for me please? AI: The total space we're working in is $X \times I$, and $X \times [0,\frac12]$ is closed in it, because it's $\pi_2^{-1}[[0,\frac12]]$, the inverse image of a closed set under the projection. Since the projection is continuous in the product topology, these sets are closed in $X \times I$. $X$ is closed in itself (as in any topology), but that's irrelevant. Wherever $X$ "came from" (a subspace of some earlier space, maybe), we're treating it as a space in its own right and putting the product topology on $X \times I$.
H: Is $(X,T)$ a door space? A topological space $(X, T)$ is said to be a door space if every subset of $X$ is either an open set or a closed set (or both). Is the following statement true or false? If $X$ is an infinite set and $T$ is the finite-closed topology, then $(X, T)$ is a door space. My attempt: I think this statement is true. Take $X=\Bbb N$; there are plenty of open sets besides the whole space and the empty set, such as $\{1,2, 3,4,5,\ldots, n+1,\ldots\}$. AI: What about using $X=\Bbb{N}$ and taking $U=\{1,3,5,7,\ldots\}$, i.e. the odd numbers? Then this set is not open, because its complement is not finite, and it is not closed, because it is not finite. So, if I understood your definition correctly, this is not a door space.
H: Pigeonhole Problem with Subsets Let $S$ be an arbitrary subset of $\{1, 2, ..., 99\}$ with $|S|=10$. Prove that there are two different subsets $A$ and $B$ (they don't have to be disjoint) of $S$ so that $$\text{the sum of all the elements in $A$} = \text{the sum of all the elements in $B$}$$ Ex. $S=\{1, 2, 3, 4, 5, 6, 7, 8, 9, 10\}$, the sets $A = \{1, 2, 3, 4\}$ and $B = \{1,9\}$ satisfy the condition since $1 + 2 + 3 + 4 = 10 = 1 + 9$. Edit: I know that the total number of subsets is $2^{10} = 1024$. I originally thought that the largest sum value is $945$, but that wouldn't make sense because that implies that both $A$ and $B$ are the same set, which they can't be, so I don't know what number to compare it to. AI: However we choose our set, the largest possible sum of all the numbers in it is $90+91+\cdots+99=945$. There are $2^{10} = 1024$ subsets of our set, and each subset has a sum lying in $\{0,1,\ldots,945\}$. So there are $1024$ pigeons (the subsets) and at most $946$ pigeon-holes (the possible sums). Since $1024 > 946$, one pigeon-hole must contain more than one pigeon; that is, two different subsets of $S$ have the same sum.
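For readers who want to see the collision happen concretely (not part of the original answer; a small Python sketch using only the standard library), one can pick a random 10-element subset and search for two sub-subsets with equal sums:

    import random
    from itertools import combinations

    def find_equal_sum_subsets(S):
        seen = {}                      # sum -> first subset seen with that sum
        for r in range(len(S) + 1):
            for A in combinations(S, r):
                t = sum(A)
                if t in seen:
                    return seen[t], A, t   # the pigeonhole argument guarantees we get here
                seen[t] = A

    S = random.sample(range(1, 100), 10)
    A, B, t = find_equal_sum_subsets(S)
    print(S, "->", A, "and", B, "both sum to", t)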
H: What is the sum of $n$ terms of $1\cdot5\cdot9+5\cdot9\cdot13+9\cdot13\cdot17+\cdots$? How to solve this? My attempt is to write the $r^{th}$ term, which is $$(4r-3)(4r+1)(4r+5)$$ Then let $p=4r+1$. The $r^{th}$ term becomes $$(p-4)(p)(p+4)=p^3-16p$$ Now we know the summations $\sum p^3 =[\frac{n(n+1)}{2}]^2$ and $\sum p = \frac{n(n+1)}{2}$. Therefore, $$\sum_{p=5}^{4n+1} (p^3-16p)$$ Is there any short method to solve it, other than breaking the $r^{th}$ term into a difference of two terms? AI: $$T_r=(4r-3)(4r+1)(4r+5)$$ Let $$V_r=(4r-3)(4r+1)(4r+5)(4r+9), \qquad V_0=-135.$$ Check that $$T_r=\frac{1}{16}[V_r-V_{r-1}].$$ Next we do a telescoping summation: $$T_1=\frac{1}{16}[V_1-V_0], \quad T_2=\frac{1}{16}[V_2-V_1], \quad T_3=\frac{1}{16}[V_3-V_2],\quad\ldots$$ $$T_{n-1}=\frac{1}{16}[V_{n-1}-V_{n-2}], \quad T_n=\frac{1}{16}[V_{n}-V_{n-1}].$$ Then $$S_n=\sum_{r=1}^{n} T_r=\frac{1}{16}[V_n-V_0]=\frac{1}{16}[(4n-3)(4n+1)(4n+5)(4n+9)+135],$$ which is a polynomial of degree 4 in $n$.
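A quick way to check the closed form (not part of the original answer; a short Python sketch using only the standard library):

    def T(r):
        return (4*r - 3) * (4*r + 1) * (4*r + 5)

    def S_closed(n):
        return ((4*n - 3) * (4*n + 1) * (4*n + 5) * (4*n + 9) + 135) // 16

    for n in range(1, 20):
        assert sum(T(r) for r in range(1, n + 1)) == S_closed(n)
    print("closed form matches direct summation for n = 1..19")
    print(S_closed(1), S_closed(2), S_closed(3))   # 45, 630, 2619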
H: How do we denote the $0$ vector in a Quotient Space $V /W$? Just $0$ or $0 + W$? Unfortunately my textbook has no mention of the notation for this, nor can I find clarification for this online. AI: I think the most standard way to write it is just $0$, but you can also call it $0 + W$ or $[0]$. There is no fixed notation in this sense: Different books use different notations.
H: Closed form of $\int_0^\infty \arctan^2 \left (\frac{2x}{1 + x^2} \right ) \, dx$ Can a closed form solution for the following integral be found: $$\int_0^\infty \arctan^2 \left (\frac{2x}{1 + x^2} \right ) \, dx\,?$$ I have tried all the standard tricks such as integration by parts, various substitutions, and parametric differentiation (Feynman's trick), but all to no avail. An attempt is letting $$f(t):=\int_0^\infty\,\arctan^2\left(\frac{2tx}{1+x^2}\right)\,\text{d}x\,.$$ Therefore, $$f'(t)=\int_0^\infty\,\frac{8x^2(x^2+1)}{\big(x^4+2(2t^2+1)x^2+1\big)^2}\,\left(1+x^2-4tx\arctan\left(\frac{2tx}{1+x^2}\right)^{\vphantom{a^2}}\right)\,\text{d}x\,.$$ This doesn't seem to go anywhere. Help! AI: $$I=\int_0^\infty \arctan^2 \left (\frac{2x}{x^2 + 1} \right ) dx\overset{IBP}=4\int_0^\infty \frac{x(x^2-1)\arctan\left(\frac{2x}{x^2+1}\right)}{x^4+6x^2+1}dx$$ We have that: $$4\int\frac{x(x^2-1)}{x^4+6x^2+1}dx=(\sqrt 2 +1)\ln(x^2+(\sqrt 2+1)^2)-(\sqrt 2-1)\ln(x^2+(\sqrt 2-1)^2)$$ $$\frac{d}{dx}\arctan\left(\frac{2x}{x^2+1}\right)=\frac{2(1-x^2)}{x^4+6x^2+1}=\frac{\sqrt 2-1}{x^2+(\sqrt 2-1)^2}-\frac{\sqrt 2+1}{x^2+(\sqrt 2+1)^2}$$ Thus integrating by parts again and simplifying we obtain: $$I=\int_0^\infty \frac{(\sqrt 2+1)^2 \ln(x^2+(\sqrt 2+1)^2)}{x^2+(\sqrt 2+1)^2}dx+\int_0^\infty \frac{(\sqrt 2-1)^2 \ln(x^2+(\sqrt 2-1)^2)}{x^2+(\sqrt 2-1)^2}dx$$ $$-\int_0^\infty \frac{\ln(x^2+(\sqrt 2-1)^2)}{x^2+(\sqrt 2+1)^2}dx-\int_0^\infty \frac{\ln(x^2+(\sqrt 2+1)^2)}{x^2+(\sqrt 2-1)^2}dx$$ From here we have the following result: $$\int_0^\infty \frac{\ln(x^2+a^2)}{x^2+b^2}dx=\frac{\pi}{b}\ln(a+b), \ a,b>0$$ So using this result and with some algebra everything simplifies to: $$\boxed{\int_0^\infty \arctan^2 \left (\frac{2x}{x^2 + 1} \right ) dx=2\pi \ln(1+\sqrt 2)-\sqrt 2\pi \ln 2}$$
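As a numerical cross-check of the boxed result (not part of the original answer; a short Python sketch assuming NumPy and SciPy are available):

    import numpy as np
    from scipy.integrate import quad

    integrand = lambda x: np.arctan(2 * x / (1 + x**2)) ** 2
    numeric, _ = quad(integrand, 0, np.inf)
    closed = 2 * np.pi * np.log(1 + np.sqrt(2)) - np.sqrt(2) * np.pi * np.log(2)
    print(numeric, closed)   # both approximately 2.4583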
H: Another proof by contradiction that $\sqrt{2}$ is irrational. There's a famous proof that $\sqrt{2}$ is irrational by assuming $\sqrt{2}=p/q$ for relatively prime $p$ and $q$ and then proving that this leads to $p$ and $q$ both being even, which contradicts them being coprime. Now there's something I noticed that may make another, easier proof: Assume for the sake of contradiction that there exist coprime positive integers $p$ and $q$ (so $\gcd(p,q)=1$, which implies $\gcd(p^2,q^2)=1$) such that $$\sqrt{2}=\frac{p}{q} \implies 2q^2=p^2 \implies q^2|p^2$$ Now the last result contradicts $p^2$ and $q^2$ being coprime, except if $q^2=1 \implies q= 1 \implies p^2=2$; but $2$ is not a perfect square, so there's no such $p$. Is this proof correct? Edit (adding a comment): this can be generalized: for any integer $n>1$, the $n$th root of an integer that is not a perfect $n$th power is irrational. For example, let $k\ne m^n$ be a positive integer for every positive integer $m$; then $\sqrt[n]{k}$ is irrational. Indeed, assume for the sake of contradiction (for coprime positive integers $p,q$ we have $\gcd(p,q)=1 \implies \gcd(p^n,q^n)=1$, because $p$ and $q$ have no common prime factors and thus neither do $p^n$ and $q^n$, by the fundamental theorem of arithmetic) that $$\sqrt[n]{k}=p/q \implies kq^n=p^n \implies q^n|p^n$$ This, similarly, contradicts $\gcd(p^n,q^n)=1$ and implies $q^n=1$, so $q=1$ and $k=p^n$, a contradiction. AI: Yes, it's valid, provided you address @razivo's point. Indeed, any prime factor dividing both $p^2$ and $q^2$ divides both $p$ and $q$. In case you're interested, there are several other well-known proofs. (It looks like that list omits the proof by the rational root theorem.) I'll leave it to others to say when "two proofs" are different enough to be different proofs.
H: Simplify $4^3\sin^4(20^\circ)\sin^2(70^\circ)-4\sqrt3\sin^3(20^\circ)\sin(70^\circ)+3$ I was trying to solve a question where two sides of a triangle were $$\frac{a\sin(20^\circ)}{\sin(70^\circ)}$$ and $$\frac{a\sin(60^\circ)\sin(30^\circ)}{\sin(70^\circ)\sin(40^\circ)}$$ and the angle between them was $70^\circ$. I used the law of cosines to try to find the third side, which I call $c$, and I soon found that $$\frac{a^2[4^3\sin^4(20^\circ)\sin^2(70^\circ)-4\sqrt3\sin^3(20^\circ)\sin(70^\circ)+3]}{4^3\sin^4(70^\circ)\sin^2(20^\circ)}=c^2$$ But after this I'm not able to simplify further. Please help; there is still a possibility that I might have done something wrong before reaching this step. AI: We have the following theorem: $$ \prod_{0<k<n}2\sin\frac{k\pi}n=n. $$ In particular: $$\begin{align} \prod_{0<k<9}{2\sin\frac{k\pi}9}=[2^4\sin(20^\circ)\sin(40^\circ)\sin(60^\circ)\sin(80^\circ)]^2=9\\ \implies \sin(20^\circ)\sin(40^\circ)\sin(60^\circ)\sin(80^\circ)=\frac{3}{16}.\tag1 \end{align} $$ Let us apply this to your triangle: $$AB=\frac{a\sin(20^\circ)}{\sin(70^\circ)},\quad AC=\frac{a\sin(60^\circ)\sin(30^\circ)}{\sin(70^\circ)\sin(40^\circ)},\\ \alpha=\measuredangle CAB=70^\circ,\quad\beta=\measuredangle ABC,\quad\gamma=\measuredangle BCA.$$ Then we have, by the law of sines: $$ \frac{\sin\beta}{\sin\gamma}=\frac{AC}{AB}=\frac{\sin(60^\circ)\sin(30^\circ)}{\sin(20^\circ)\sin(40^\circ)}\stackrel{(1)}=\frac{16\sin(60^\circ)\sin(30^\circ)\sin(60^\circ)\sin(80^\circ)}3 =\frac{\sin(80^\circ)}{\sin(30^\circ)}. $$ In view of $\beta+\gamma=110^\circ$ one concludes $$\beta=80^\circ,\quad \gamma=30^\circ.$$ Finally: $$ \frac{BC}{AB}=\frac{\sin\alpha}{\sin\gamma}\implies BC=\frac{a\sin(20^\circ)}{\sin(70^\circ)}\frac{\sin(70^\circ)}{\sin(30^\circ)}=2a\sin(20^\circ). $$
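As a quick numerical confirmation of the result $BC = 2a\sin(20^\circ)$ (not part of the original answer; a small Python sketch using only the standard library), one can plug the two given sides and the $70^\circ$ angle into the law of cosines directly:

    import math

    d = math.radians
    a = 1.0
    AB = a * math.sin(d(20)) / math.sin(d(70))
    AC = a * math.sin(d(60)) * math.sin(d(30)) / (math.sin(d(70)) * math.sin(d(40)))
    BC = math.sqrt(AB**2 + AC**2 - 2 * AB * AC * math.cos(d(70)))
    print(BC, 2 * a * math.sin(d(20)))   # both approximately 0.68404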
H: If $A \subset B$ is a faithfully flat extension of domains and $B$ is integrally closed, then $A$ is also integrally closed. Let $A \subset B$ be a faithfully flat extension of integral domains. If $B$ is integrally closed then I have to show that $A$ is also integrally closed. Let $L,K$ be the fields of fractions of the domains $A,B$, respectively, and take $\tilde{A}$ to be the integral closure of $A$ in $K$. Since $B$ is integrally closed and $A \subset B$, we have $\tilde{A} \subset B$. So we finally have a tower of domains $A \subset \tilde{A} \subset B$ with $A \subset B$ faithfully flat and $A \subset \tilde{A}$ integral. If, using these, we can show that $A \subset \tilde{A}$ is a flat extension then we are done. But I can't show that. I need some help to complete it. Thanks. AI: Let $x$ be an element of the fraction field $L$ of $A$ that is integral over $A$. Since $x \in L$, we can write $x = a/b$ for some $a,b$ in $A$; note also that $x \in B$, because $x$ is integral over $B$ and $B$ is integrally closed. Consider the ideal $I = (bA:_A a)$. If $I = A$, then $a \in bA$, so $x = a/b \in A$ and we are done. Since $A \subset B$ is flat, $IB = (bB :_B a)$, and the RHS is $B$ as $a/b \in B$. Furthermore, since $A \subset B$ is faithfully flat, this implies that $I = IB \cap A = B \cap A = A$.
H: Determine if the statement is true or false. NBHM 2014 PhD question. If $f$ and $g$ are continuous functions on $\mathbb{R}$ such that $\forall x \in \mathbb{R}$, $f(g(x))= g(f(x))$. If there exist $x_0 \in \mathbb{R}$ such that $f(f(x_0)) = g(g(x_0))$ then there exist $x_1 \in \mathbb{R}$ such that $f(x_1)= g(x_1)$. I was successful neither in proving the statement nor finding a counter-example. I tried to consider the function $h(x) = f(x)-g(x)$ and check if h changes sign at some point, but nothing useful came out. Any help or hint is highly appreciated. AI: Without loss of generality suppose that $f(x_0)\le g(x_0)$. It suffices to find a point $x_2$ such that $f(x_2) \ge g(x_2)$, for then the intermediate value theorem yields a point $x_1$ between $x_0$ and $x_2$ such that $f(x_1)=g(x_1)$. Consider $f(f(f(x_0)))$ and $g(f(f(x_0)))$. If $f(f(f(x_0))) \ge g(f(f(x_0)))$ then we are done by taking $x_2 = f(f(x_0))$. So suppose $f(f(f(x_0))) < g(f(f(x_0)))$. Then $$ f(g(f(x_0))) = g(f(f(x_0))) > f(f(f(x_0))) = f(g(g(x_0))) = g(f(g(x_0))), $$ and so we are done by taking $x_2 = f(g(x_0))$. (Under the hypotheses that $f$ and $g$ commute and that $f(f(x_0))=g(g(x_0))$, for any positive integer $n$, we see that of all possible compositions $h$ of $n$ functions each of which is either $f$ or $g$, there are only $2$ potentially different values of $h(x_0)$—one for the $h$s that have an odd number of $f$s, and one for the $h$s that have an even number of $f$s. These multiple identities of the iterates of $x_0$ make it easy to find points that can be interpreted as values of $f$ or as values of $g$.)
H: How do concepts such as limits work in probability theory, as opposed to calculus? When I am flipping a fair coin and say that as the number of trials approaches $\infty$ the number of heads approaches $50\%$, what do I really mean? Intuitively, I would associate it with the concept of a limit, as used in calculus: $$ \lim_{t \to \infty} \left(\frac{H}{H+T}\right)=0.5 \\ \text{Where $t$ = the number of trials, $H$ = the number of heads, and $T$ = the number of tails} $$ However, this intuition seems to break down when I use the formal definition of a limit: $$ \lim_{t \to \infty} \left(\frac{H}{H+T}\right)=0.5 \text{ if and only if}\\ \text{for every $\varepsilon>0$, there exists $N>0$ such that for all $t$} \\ \text{if $t>N$ then $|0.5-\frac{H}{H+T}|<\varepsilon$} $$ Well I don't know if there will be $N > 0$ that satisifes this definition! It all depends on what comes up. However many times I flip the coin, $\frac{H}{H+T}$ might just equal $0$. So how might I define terms such as "approaches" in probability theory if the conventional definition does not work? Edit: User nicomezi has pointed out that this a huge topic. Therefore, I will accept even a very short introduction to this subject as an answer. AI: There are several notions of convergence in probability. Your example is an instance of the law of large numbers which has a weak form and a strong form. The weak form states that $H/(H+T)$ "converges in probability" to $0.5$. Formally, for any $\epsilon > 0$, $$\lim_{t \to \infty}P\left(\left|\frac{H}{H+T} - 0.5\right| > \epsilon\right) = 0.$$ The strong form states that $H/(H+T)$ "converges almost surely" to $0.5$. Formally, $$P\left(\lim_{t \to \infty} \frac{H}{H+T} = 0.5\right) = 1.$$ Almost sure convergence implies convergence in probability, hence the strong law implies the weak law (but is harder to prove). Response to comment: If you imagine flipping a coin many many times, then you can keep track of the sequence $\frac{H}{H+T}$ as $t$ increases. In your words, this sequence is random, since it "depends on what comes up," so in different parallel universes this sequence will be different. You are correct that it is possible that you always flip tails, in which case the sequence is $0,0,\ldots$. But given a particular sequence, you simply have a sequence of real numbers, so the limit in $\lim_{t \to \infty} \frac{H}{H+T}$ is the usual limit you are familiar with (which may not exist for some sequences). The law of large numbers states that with probability $1$, this sequence not only has a limit, but that limit is $0.5$. So scenarios like the one you mentioned (always flipping tails) happen with probability $0$.
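To make the almost-sure statement feel concrete (not part of the original answer; a small Python simulation using only the standard library), one can watch the running proportion of heads settle near $0.5$ along a single simulated sequence of flips:

    import random

    random.seed(1)
    heads = 0
    for t in range(1, 100_001):
        heads += random.random() < 0.5   # one fair coin flip
        if t in (10, 100, 1_000, 10_000, 100_000):
            print(t, heads / t)

Each run corresponds to one "parallel universe"; the strong law says that, with probability $1$, the printed proportions converge to $0.5$ as $t \to \infty$ (though exceptional sequences such as all tails are still logically possible, they just have probability $0$).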