H: Show that every adherent point of $X$ is either a limit point or an isolated point of $X$, but cannot be both. Let $X\subseteq\Bbb R$ and $x\in\Bbb R$. Show that every adherent point of $X$ is either a limit point or an isolated point of $X$, but cannot be both. Conversely, show that every limit point and every isolated point of $X$ is an adherent point of $X$. MY ATTEMPT Let us try to prove $(\Leftarrow)$ first. To start with, let us reinforce the definition of adherent point. Given $X\subseteq\Bbb R$, we say that $x\in\Bbb R$ is an adherent point of $X$ if $\forall\varepsilon > 0$ there corresponds a $y\in X$ s. t. $|x-y|\leqslant \varepsilon$. If $x$ is a limit point of $X$, then it is an adherent point of $X\setminus\{x\}$. Thus there exists a sequence $x_n\in X\setminus\{x\}$ s. t. $x_n\to x$. In other words, $(\forall\varepsilon > 0)(\exists N_{\varepsilon}\in\Bbb N_0)$ s. t. $n\geqslant N_{\varepsilon}\implies 0 < |x_n - x| \leqslant\varepsilon$. In particular, $\forall\varepsilon > 0$ there is a term $x_{N_{\varepsilon}}\in X$ s. t. the definition of adherent point is satisfied. Similarly, we say that $x\in X$ is an isolated point if there is an $\varepsilon > 0$ s. t. $|x - y| > \varepsilon\ \forall y\in X\setminus\{x\}$. Nonetheless, $x$ is still an adherent point of $X$ in this case. This is because no matter which $\varepsilon > 0$ one chooses, it suffices to pick $y = x$ and the relation $|x - y| = 0 < \varepsilon$ always holds. Let us try to prove ($\Rightarrow$). At this point, I got stuck. Could someone please help me how to solve it? I also would like to know if such results keep valid in arbitrary metric spaces. Any comments on the wording of my solution is welcome. AI: In the first part there is no need to introduce sequences. If $x$ is a limit point of $X$, then $x$ is an adherent point of $X\setminus\{x\}$, so for each $\epsilon>0$ there is a $y\in X\setminus\{x\}$ such that $|x-y|\le\epsilon$; certainly $y\in X$, so $x$ is an adherent point of $X$. And if $x$ is an isolated point of $X$, then $x\in X$, and of course $|x-x|<\epsilon$ for every $\epsilon>0$, so $x$ is an adherent point of $X$. For the other direction I would show that if $x$ is an adherent point of $X$ that is not a limit point of $X$, then $x$ is an isolated point of $X$. If $x$ is not a limit point of $X$, there is an $\epsilon>0$ such that $|x-y|>\epsilon$ for each $y\in X\setminus\{x\}$. And if $x$ is an adherent point of $X$, then ... Yes, this is true for all metric spaces. In a slightly more general form it is true for all topological spaces: it says that a point $x$ is in the closure of a set $X$ if and only if it is in the closure of $X\setminus\{x\}$ or is an isolated point of $X$.
H: Parametrize cycloid in terms of its arc length Can someone share how I can parametrize $\mathbf r(t)=\langle t-\sin t, 1-\cos t\rangle$ over the interval $0\le t\le 2\pi$ in terms of its arc length? That is, $\mathbf r(s)=\langle ?,?\rangle$ in terms of $s$, its arc length. Thanks. AI: hint $$s(t)=\int_0^t\sqrt{x'^2(u)+y'^2(u)}du$$ $$=\int_0^t\sqrt{(1-\cos(u))^2+\sin^2(u)}du$$ $$=\int_0^t\sqrt{4\sin^2(\frac u2)}du$$ $$=2\int_0^t\sin(\frac u2)du=4(1-\cos(\frac t2))$$ then $$t=2\arccos(1-\frac s4)$$
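As a sanity check, one can compare the closed form $s(t)=4\bigl(1-\cos\frac t2\bigr)$ with direct numerical quadrature of the speed (a minimal illustrative sketch; the sample points are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# speed |r'(t)| of the cycloid r(t) = <t - sin t, 1 - cos t>
speed = lambda u: np.sqrt((1 - np.cos(u))**2 + np.sin(u)**2)

for t in [0.5, 1.0, np.pi, 5.0, 2*np.pi]:
    s_numeric, _ = quad(speed, 0, t)      # arc length by quadrature
    s_closed = 4*(1 - np.cos(t/2))        # closed form from the answer
    assert abs(s_numeric - s_closed) < 1e-7, (t, s_numeric, s_closed)
print("s(t) = 4(1 - cos(t/2)) agrees with the numerical arc length on [0, 2*pi]")
```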
H: Convergence of Series of Independent Poisson Random Variables Let $\{X_r\}_{r\ge1}$ be independent Poisson random variables with respective parameters $\{\lambda_r\}_{r\ge1}$. Show that $\sum_\limits{r\ge1} X_r$ converges or diverges almost surely according as $\sum_\limits{r\ge1} \lambda_r$ converges or diverges. I know this question has already been asked and has an answer outlined here but I have a question on the solution and given that the post is so old, I thought I would make a new post with a partial solution then pose my question. Since Poisson random variables take values on the set $\{0,1,2,3,..\}$ we have that: \begin{align} \sum_\limits{r\ge1} X_r < \infty \ \text{a.s} &\iff \mathbb{P}(X_r=0\ \ \text{eventually})=1\\ &\iff \mathbb{P}(X_r>0\ \ \text{i.o.})=0\\ &\iff \sum_\limits{r\ge1} \mathbb{P}(X_r>0)<\infty \quad \text{by Borell-Cantelli Lemmas} \end{align} Now note that: \begin{align} \sum_\limits{r\ge1} \mathbb{P}(X_r>0)=\sum_\limits{r\ge1} (1- \mathbb{P}(X_r=0))=\sum_\limits{r\ge1} (1-e^{-\lambda_r})\le\sum_\limits{r\ge1} \lambda_r \end{align} Thus, $\sum_\limits{r\ge1} \lambda_r < \infty \implies \sum_\limits{r\ge1}\mathbb{P}(X_r > 0)<\infty \implies \sum_\limits{r\ge1} X_r < \infty \ \text{a.s}$, and half the result is proved. Now, again since Poisson random variables take values on the set $\{0,1,2,3,..\}$ we have that: \begin{align} \sum_\limits{r\ge1} X_r = \infty \ \text{a.s} &\iff \mathbb{P}(X_r=0\ \ \text{eventually})=0\\ &\iff \mathbb{P}(X_r>0\ \ \text{i.o.})=1\\ &\iff \sum_\limits{r\ge1} \mathbb{P}(X_r>0)= \infty \quad \text{by Borell-Cantelli Lemmas} \end{align} Now note that: \begin{align} \sum_\limits{r\ge1} \mathbb{P}(X_r>0)=\sum_\limits{r\ge1} (1-e^{-\lambda_r}) \ge \sum_\limits{r\ge1} \Bigl(\frac{\lambda_r}{\lambda_r +1}\Bigr) \end{align} Now we want to claim that: $\sum_\limits{r\ge1} \lambda_r = \infty \implies \sum_\limits{r\ge1}\mathbb{P}(X_r > 0) = \infty \implies \sum_\limits{r\ge1} X_r = \infty \ \text{a.s}$, completing the proof. The only issue here is that this last claim is a little less obvious, so my question is does \begin{align} \sum_\limits{r\ge1} \lambda_r = \infty \implies \sum_\limits{r\ge1} \Bigl(\frac{\lambda_r}{\lambda_r +1}\Bigr)= \infty? \end{align} If so, is this an easy thing to see and I am just missing it or does it require a bit of unpacking? Or is there a better way to show the second half of the result? AI: If $\lambda \le 1$, then $\lambda/(1+\lambda) \ge \lambda/2$, while if $\lambda > 1$, then $\lambda/(1+\lambda) > 1/2$. If infinitely many $\lambda_i > 1$, then infinitely many $\lambda_i/(1+\lambda_i) > 1/2$ and the second sum diverges. If not, use a Limit Comparison Test.
H: Integrating $\sin(x)x^2$ by parts, why do we only add $C$ at the end? Let $f(x)=\sin(x)x^2$. If you were to do integration by parts you would get: $$\int \sin(x)x^2dx=\int\sin(x)dx\times x^2-\iint\sin(x)dx\times\frac{d}{dx}x^2dx$$ $$\int \sin(x)x^2dx=-\cos(x)x^2-\int-2\cos(x)xdx$$ $$\int\sin(x)x^2dx=-\cos(x)x^2+2(\int\cos(x)dx\times x-\iint\cos(x)dx\times\frac{d}{dx}xdx)$$ $$\int\sin(x)x^2dx=-\cos(x)x^2+2(\sin(x)x+\cos(x))+C$$ The part that I don't understand is why do we add the constant only at the end? For example, in row #$1$ you need to find the $\int \sin(x)dx$. We take it to be $-\cos(x)$ but in reality, it is $-\cos(x)+C_1$. If you were to compute this with $\int \sin(x)dx=-\cos(x)+C_1$, you would get the wrong result because $\int \sin(x)dx$ needs to be integrated again, and $C_1$ would turn in $x$, therefore you would get the wrong result. How do we know that $C_1=0$? I get that the solution will be correct that way, but why? AI: The familiar formula for integration by parts is $$\int udv= uv-\int vdu $$ Now if you like to add a constant to your $v$ you get $$\int udv= u(v+c)-\int (v+c)du = uv+uc-\int vdu -c\int du = $$ $$uv+uc-\int vdu -cu = uv- \int vdu $$ Which is exactly the same result due to cancelation of $cu$ and -$cu$
H: Showing $ \displaystyle\int_{t-1}^t \log{x}\mathrm dx < \log{t} $ How can I show that $$ \int_{t-1}^t \log{x}dx < \log{t} $$ I can calculate the integral of $\log{x}$ as, $x\log{x}-x$. But after calculating, definite integral and re-aranging, I am not able to get the desired result. Is it true that the above inequality holds true for any monotonic function ? AI: For each $x\in[t-1,t)$ , $\log(x)<\log(t)$ and $\log$ is a continuous function. Therefore\begin{align}\int_{t-1}^t\log(x)\,\mathrm dx&<\int_{t-1}^t\log(t)\,\mathrm dx\\&=\bigl(t-(t-1)\bigr)\log(t)\\&=\log(t).\end{align}
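A quick numerical spot check of the inequality for several values of $t>1$ (illustrative only; for $t\le 1$ the integrand is not defined on all of $[t-1,t]$):

```python
import numpy as np
from scipy.integrate import quad

for t in [1.1, 1.5, 2.0, 5.0, 100.0]:
    integral, _ = quad(np.log, t - 1, t)   # integral of log x over [t-1, t]
    assert integral < np.log(t), (t, integral)
print("the integral of log over [t-1, t] stays below log(t) at all sample points")
```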
H: Inequality for expectation-value of commutator While working on the derivation of the Heisenberg Uncertainty Principle, I'm getting stuck on showing that the following inequality holds true for the Hermitian operators $A$ and $B$ and the arbitrary quantum state $| \psi \rangle$: $$|\langle \psi |[A,B]| \psi \rangle|^2 \leq 4\langle \psi |A^2| \psi \rangle\langle \psi |B^2| \psi \rangle$$ Following (in)equalities are already given/proven by this point: $$|\langle \psi |[A,B]| \psi \rangle|^2 + |\langle \psi |\{A,B\}| \psi \rangle|^2 = 4|\langle \psi |AB| \psi \rangle|^2$$ $$\text{Cauchy-Schwarz:}\quad|\langle \psi |AB| \psi \rangle|^2 \leq \langle \psi |A^2| \psi \rangle\langle \psi |B^2| \psi \rangle$$ What is it that I'm missing? I've made it to $0 \leq \langle \psi |A^2| \psi \rangle\langle \psi |B^2| \psi \rangle + Re(\langle \psi |AB| \psi \rangle^2)$ which to me seems like a dead end. I've been looking into this way to long, so it might be very obvious and I just made a silly mistake... Thanks in advance. AI: From $$|\langle \psi |[A,B]| \psi \rangle|^2 + |\langle \psi |\{A,B\}| \psi \rangle|^2 = 4|\langle \psi |AB| \psi \rangle|^2$$ You have $$|\langle \psi |[A,B]| \psi \rangle|^2 =- |\langle \psi |\{A,B\}| \psi \rangle|^2 + 4|\langle \psi |AB| \psi \rangle|^2$$ Which means that $$|\langle \psi |[A,B]| \psi \rangle|^2 \leq 4|\langle \psi |AB| \psi \rangle|^2$$ Now, Using Cauchy-Schwarz the identity you wanted to prove is proven: $$|\langle \psi |[A,B]| \psi \rangle|^2 \leq 4|\langle \psi |AB| \psi \rangle|^2 \leq 4\langle \psi |A^2| \psi \rangle\langle \psi |B^2| \psi \rangle$$
H: Electronic device life span My question is - If the life span of electric component distributes uniformly During the year, and in one device there's $3$ electronic components that are parallel to each other. What is the expected value of the device (in years)? So If the components are parallel, that means that if one is broken, than all the device will not work. Meaning, the life span is the the life span of one electronic component? So I can say that the expected value of the device is $1$ year or am I missing something here? AI: This is false. So If the components are parallel, that means that if one is broken, than all the device will not work What you said is referred to devices connected in series (in a row). If the devices are parallel, the system will work until the latest is working. So the life of the system is equivalent to the $max(x)$ Now, you know that $f_X(x)=\mathbb{1}_{[0;1]}(x)$ and also $f_Z(z)=3z^2\mathbb{1}_{[0;1]}(z)$ Thus $$\mathbb{E}[Z]=\int_0^1 3z^3 dz=\frac{3}{4}$$
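A rough Monte Carlo check that the expected lifetime of the parallel system of three independent Uniform$(0,1)$ components is $3/4$ (illustrative sketch; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
lifetimes = rng.random((1_000_000, 3))   # three independent Uniform(0,1) component lifetimes
system_life = lifetimes.max(axis=1)      # a parallel system fails when the last component fails
print(system_life.mean())                # close to E[Z] = 3/4
```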
H: Whats the probability of getting 6 dots on die atleast once after throwing 10 times? Whats the probability of getting 6 dots on die atleast once after throwing 10 times? Most logical way to solve this is to use reverse event, right? : $A=$Getting $6$ atleast once $A'=$Not getting $6$ at all $$\omega=6^{10}$$ $$P(A)=1-P(A')$$ $$P=1- \left(\frac{5}{6}\right)^{10}$$ AI: Your answer is correct. Since probability of getting 6 dots, $\frac16$ and that of not getting 6 dots, $\frac{5}{6}$ both remain constant from one throw to the next. Therefore, you can also use Binomial distribution to find probability of getting $6$ dots on the die, at least once in 10 throws as follows $$1-\text{Probability of not getting 6 dots at all in 10 throws}$$ $$=1-^{10}C_0\left(\frac{1}{6}\right)^0\left(\frac{5}{6}\right)^{10}$$
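Numerically $1-(5/6)^{10}\approx 0.8385$, and a quick simulation agrees (illustrative sketch; seed and sample size are arbitrary):

```python
import numpy as np

exact = 1 - (5/6)**10
print(exact)                                   # about 0.8385

rng = np.random.default_rng(1)
throws = rng.integers(1, 7, size=(500_000, 10))
estimate = (throws == 6).any(axis=1).mean()    # fraction of trials with at least one six
print(estimate)                                # close to the exact value
```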
H: Completion of a separable metric space is separable One of my textbooks states that the completion of a metric space is separable when the metric space is separable. I am not sure why that is the case. Could someone explain this to me? AI: Because the original space is densely embedded in its completion. Therefore the countable dense subset of the original space is dense in the completion. Added: Let $\iota: X\to \overline X$ be the embedding and $\operatorname{cl} C=X$ with $C$ countable. Every non-empty open set $U\subseteq\overline X$ intersects $\iota[X]$. Hence $\iota^{-1}[U]$ is non-empty and open, and therefore $\iota^{-1}[U]$ intersects $C$. This means that $\iota[C]$ intersects $U$, establishing density of the countable set $\iota[C]$.
H: If $f$ and $g$ are both continuous, then the functions $\max\{f,g\}$ and $\min\{f,g\}$ are also continuous Let $X$ be a subset of $\textbf{R}$, and let $f:X\to\textbf{R}$ and $g:X\to\textbf{R}$ be functions. Let $x_{0}\in X$. Then if $f$ and $g$ are both continuous at $x_{0}$, then the functions $\max\{f,g\}$ and $\min\{f,g\}$ are also continuous at $x_{0}$. MY ATTEMPT Once one knows the identities \begin{align*} \max\{a,b\} = \frac{a + b + |a-b|}{2}\quad\wedge\quad\min\{a,b\} = \frac{a + b - |a-b|}{2} \end{align*} we can argue the continuity of the functions $\max\{f,g\}$ and $\min\{f,g\}$ as a consequence of composition of continuous functions. My question is: is there another way to prove it without appealing to such identities? (Or one way which deduces them at least). EDIT As commented by @GEdgar, the continuity of $|\cdot|$ has already been proved. AI: One way of deduce these identities: It is easy to see that $\max\{f,g\} + \min\{f,g\} = f+g$ and that $\max\{f,g\} - \min\{f,g\} = |f-g|$. Adding both equalities and dividing by $2$ we obtain the first identity, and substracting the second from the first one, and again dividing by $2$, we obtain the second identity.
H: How do I give a good sketch of the Artin-Wedderburn Theorem? I have an oral exam coming up, and it's definitely possible A-W comes as a question due to its importance. However, the proof of the theorem is rather long, it has many claims, it's pretty technical, etc. It's a difficult proof, and I have no clue how to give a good sketch of it. What should I keep in mind? The only lemma we use is: if $A$ is a minimal left ideal in a ring $R$ then $A^2 = 0$ or $A = Re$, with $e$ an idempotent in $R$. AI: Try this link that includes a short (less than two pages of pure proof content) and well-structured proof based on Wedderburn's Theorem for a simple ring with a minimal left ideal (with a short proof by Henderson based on Brauer's Lemma) and by introducing a weak finiteness condition in a ring (equivalent to the ring containing no infinite orthogonal set of idempotents) to obtain the extension to semiprime left Artinian rings.
H: Show that $n\int_0^{2\pi}\sin^n\theta d \theta = (n-1)\int_0^{2\pi}\sin^{(n-2)}\theta d\theta$ Let $I_n = \int_0^{2\pi}\sin^n\theta d\theta$ show that for $n\geq2$: $$nI_n = (n-1)I_{n-2}$$ What I've tried I think this could be done by applying integration by parts multiple times and rearranging, but I can't seem to get the right substitution. Substituting $u=\sin\theta, v'=\sin^{n-1}\theta$ in $\int uv' = uv - \int u'v$ gives: $-\sin^{n-1}\theta\cos\theta + \int(n-1)\sin^{n-2}\cos^2\theta d\theta$ but that doesn't seem any closer to showing the recursion. AI: Note that $$\int(n-1)\sin^{n-2}\theta\cos^2\theta d\theta =$$ $$\int(n-1)\sin^{n-2}\theta(1-\sin^2 \theta) d\theta =$$ $$\int(n-1)\sin^{n-2}\theta d\theta -\int(n-1)\sin^{n}\theta d\theta $$ you can take it from here.
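The resulting recursion $nI_n=(n-1)I_{n-2}$ can also be checked numerically for a few values of $n$ (a minimal sketch using scipy; the tolerance is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def I(n):
    # I_n = integral of sin^n(theta) over [0, 2*pi]
    value, _ = quad(lambda th: np.sin(th)**n, 0, 2*np.pi)
    return value

for n in range(2, 11):
    assert abs(n*I(n) - (n - 1)*I(n - 2)) < 1e-6, n
print("n*I_n = (n-1)*I_{n-2} holds numerically for n = 2, ..., 10")
```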
H: Brownian motion and stopping times (an exercise in LeGall) I am a beginner in Brownian motion and not a pro of probability theory and I would like to check my solution of Exercise 2.26 of the book by LeGall, stating: Let $B$ be a Brownian motion, for each $a\ge0$ set $T_a=\inf\{t\ge0: B_t=a\}$. Show that for $0\le a\le b$ the random variable $T_b - T_a$ is independent of $\sigma(T_c, 0\le c\le a)$ and it has the same distribution as $T_{b-a}$. My idea is to use the strong Markov property of BM and to look at $B_t^{(T_a)} = 1_{T_a<\infty}(B_{T_a + t} - B_{T_a})$, the BM "rebooted" at time $T_a$. The strong Markov property states that this is again a BM and that it is independent of $$\mathscr{F}_{T_a} = \{A\in\mathscr{F}_\infty:\forall t\ge0,\ A\cap\{T_a\le t\}\in\mathscr{F}_t\}\ ,$$ where $\mathscr{F}_t$ is our filtration. Now the $\sigma$-algebra $\sigma(T_c, 0\le c\le a)$ is generated by $\{T_c\le s\}\in\mathscr{F}_\infty$ for $0\le c\le a$ and $s\ge 0$. We notice that since $c\le a$ and the sample paths of the BM are continuous, if $T_c>t$ then we need have $T_a>t$. Therefore, it follows that $$\{T_c\le s\}\cap\{T_a\le t\} = \{T_c\le s\wedge t\}\cap\{T_a\le t\}$$ and since both sets are in $\mathscr{F}_t$, so is their intersection. It follows that $\{T_c\le s\}\in\mathscr{F}_{T_a}$, and thus that $\sigma(T_c, 0\le c\le a)\subseteq\mathscr{F}_{T_a}$. In particular, we have obtained that $B_t^{(T_a)}$ is independent of $\sigma(T_c, 0\le c\le a)$. Now we can conclude. Denote by $T_d^{(T_a)} = \inf\{t\ge0: B_t^{(T_a)}=a\}$. For $A\in\sigma(T_c, 0\le c\le a)$ we have \begin{align} P(T_b - T_a\mid A) ={}&P(T_{b-a}^{(T_a)} - T_0^{(T_a)}\mid A)\\ ={}&P(T_{b-a}^{(T_a)} - T_0^{(T_a)}) \end{align} where the first line is quite obvious from the definitions, and in the second line we used the fact that the "rebooted" stopping times are defined only in terms of the "rebooted" BM, which is independent of $A$. Now on the one hand the last line equals $P(T_b - T_a)$ by the same reasoning we already made in the first line of the above. On the other hand, we have that $T_0^{(T_a)} = 0$ a.s. and that $T_{b-a}^{(T_a)}$ has the same distribution as $T_{b-a}$ since the "rebooted" BM is again a BM. I think this fully proves the statement we wanted. Did I miss anything or was I imprecise at any step? Any help or comment is greatly appreciated. Thanks in advance! AI: I think your solution is fine, but your conclusion was a little bit hard for me to follow. I think it would be more clear to just argue that $T_b - T_a = T_{b-a}^{(T_a)}$ is measurable with respect to $\sigma(B_t^{(T_a)}, 0\le t)$ in the same way that you did and use the fact that $\sigma(B_t^{(T_a)}, 0 \le t)$ is independent of $\sigma(T_c, 0 \le c \le a)$ again. You use this fact anyway, but this avoids needing to use the conditional probabilities and streamlines the argument.
H: Find a vector $x\in\Bbb R^2$ s. t. $T(x)=b$. Let $$A=\begin{bmatrix}1&-3\\3&5\\ -1&7\\\end{bmatrix},b=\begin{bmatrix} 3\\2\\-5\\\end{bmatrix},c=\begin{bmatrix}3\\2\\5\\\end{bmatrix}, u =\begin{bmatrix}2\\-1\\\end{bmatrix}$$ and define a linear transformation of $$T:\Bbb R^2\to\Bbb R^3\ \ \ T(x):= Ax$$ (a) Find $T(u)$ (b) Find $a,b\in\Bbb R^2$ s.t. $T(a)=T(b)=b$. (c) Is there more than one $x$ s.t. $T(x)=b$? (d) Determine whether $c\in\operatorname{Im}T$ This question is very frustrating, stressful, and annoying, and I have spent literally $8\mathrm{h}$ trying to solve it, looking up stuff on the internet, as there is NOTHING like it in the note books. All of what we have is $\Bbb R^2\to\Bbb R^2$ with predefined "defined by" values. Asking the professor, he says to "Look it up.", which isn't helpful. What I know that I have to do is: $T(x+y)=T(x)+T(y)$ and $T(cx) = cT(x)$ I don't know where to go from there... Please help. Thanks. AI: You know how to multiply a matrix and a vector, so find $Au$ and that is $T(u)$ For the other parts you need to find a vector $v$ such that $Av$ is given. So you solve for the components of $v$ and see if the answer is unique or even if you have an answer.
H: Does genus uniquely determine a compact connected manifold? Given a smooth, compact, and connected 2-manifold $M$ with genus $n$. Is $M$ unique? Up to diffeomorphism, clearly. AI: Since you do not specify whether $M$ is oriented or not, $M$ is not unique. For example, the genus of the torus is $1$ and the genus of the projective plane is also $1$. https://en.wikipedia.org/wiki/Genus_(mathematics)
H: Rouché's theorem for $|z|>2$ Can someone help me find how many roots does this polynom : $p(z)=z^4-5z+1$ have on set $|z|>2$ using Rouché's theorem? I'm thinking like this: This polynomial has 4 roots(fundamental theorem of algebra) , and by applying Rouché's theorem I can find that $p$ has one root on unit disc $|z|<1$ , (since the largest coefficient is at $z$). In similar way , I got that $p$ has 4 roots on $|z|=2$ , since by using $g(z)=z^{4}$ ,I have $|p-g|=|-5z+1|<10+1=11<2^4=|g(z)|$ , so by Rouché's theorem $p$ has 4 roots in $|z|<2$. Meanwhile, I have to find number of roots in $|z|>2$ , and by the proof above, I would say that $p$ has NO roots on $|z|>2$ (since it has 4 in total , and I got that all of them are in the interior of this set). But,the result has to be 3-polynom $p$ has 3 roots on $|z|>2$. Can someone please help me with this?I don't know what I'm doing wrong... Thank you in advance! AI: What you have done is correct.If $|z| >2$ then $|p(z)| \geq |z^{4}-5z| -1=|z|(|z|^{3} -5)-1>2(8-5)-1 >0$ so there is no root for $|z| >2$.
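The conclusion (all four roots lie in $|z|<2$, so none in $|z|>2$) can also be confirmed numerically (an illustrative check):

```python
import numpy as np

roots = np.roots([1, 0, 0, -5, 1])        # roots of p(z) = z^4 - 5z + 1
print(np.abs(roots))                       # every modulus is below 2
print(sum(abs(r) > 2 for r in roots))      # 0 roots with |z| > 2
```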
H: What about sinusoidal waves makes the Fourier Series so useful? Using the Fourier series can allow you to decompose a function completely in a continuous interval into a set of harmonics. An increasingly better approximation can be found by taking more terms in the expansion. However, I could also do this trivially by approximating the function with a series of impulses. The more impulses I add, the better the approximation in the interval. What properties of the Fourier series make it a better decomposition? AI: The functions $\sin$ and $\cos$ are so-called eigenfunctions of $D^2$, the square of the differentiation operator, in the sense that $$D^2\sin x = -\sin x,\quad D^2\cos x=-\cos x.$$ Furthermore, we of course have $$D\sin x=\cos x,\quad D\cos x=-\sin x.$$ What this means is that it is much easier to work with Fourier series expansions when discussing differential equations. Suppose for instance we wanted to solve the ODE $$(D^2+3D+4)y=y''+3y'+4y=0.$$ One direct way could be to write the expansion $$y=\frac{a_0}{2}+\sum_{n\geq 1}a_n\cos(nx)+b_n\sin(nx)$$ so that $$ \begin{split} y' & = \sum_n-na_n\sin(nx)+nb_n\cos(nx)\\ y'' &= \sum_n-n^2a_n\cos(nx)-n^2b_n\sin(nx). \end{split} $$ Substituting back in, we obtain $$y''+3y'+4y=2a_0+\sum_n\bigl((4-n^2)a_n+3nb_n\bigr)\cos(nx)+\bigl((4-n^2)b_n-3na_n\bigr)\sin(nx)=0.$$ By uniqueness of the Fourier series, we can now find $a_n,b_n$ explicitly. Of course, this is a somewhat contrived example since there is a much more direct method to solve the ODE, but the idea applies in general: the fact that the derivatives of elements of the set $\{\sin,\cos,-\sin,-\cos\}$ remain in the set is incredibly useful when discussing ODEs and PDEs in general.
H: Newton-Coulomb differential equation solution I am trying to solve the following DE: $$ m \mathbf{\ddot{r}}=\frac{Q_1Q_2}{4\pi\epsilon_0\mathbf{r}\cdot\mathbf{r}}\mathbf{\hat{r}} $$ Which I know is a tough one, so I settled for the simpler version: $$ m \ddot{r}=\frac{Q_1Q_2}{4\pi\epsilon_0r^2} $$ But I still couldn't figure out an exact solution to this DE, and can't really recall any method I know that would help but analytical solution which is not the goal, so I inserted it to Mathematica and got this equation after setting $c_1=1$ and $c_2=0$ for sake of simplicity: $$ r \sqrt{1-\frac{\alpha }{r}}+\alpha \tanh ^{-1}\left(\sqrt{1-\frac{\alpha }{r}}\right)=t $$ where: $$ \alpha = \frac{Q_1Q_2}{2m\pi\epsilon_0} $$ but the program fails at solving this equation and I too can't figure out a way to extract an exact solution. So, if anyone can help, much appreciated! AI: That is an exact (albeit implicit) solution. Sorry, you're not going to get a closed form expression for $r(t)$ as a function of $t$. EDIT: The way to get the implicit solution is as follows. This being an autonomous second-order differential equation, you can write a first-order equation for $v = dr/dt$ as a function of $r$: $$ \dfrac{dv}{dr} = \dfrac{dv/dt}{dr/dt} = \dfrac{\alpha/(2r^2)}{v} = \dfrac{\alpha}{2 r^2 v}$$ This is a separable equation, so you get $$\eqalign{2 v \; dv & = \alpha \;\dfrac{ dr}{r^2} \cr v^2 &= -\frac{\alpha}{r} + c_1\cr v &= \pm \sqrt{c_1 - \frac{\alpha}{r}}}$$ Writing $v = dr/dt$ again, that becomes another separable equation, and so $$ t = \pm\int \dfrac{dr}{\sqrt{c_1 - \alpha/r}} + c_2 $$ and that rather complicated integral (in the case $c_1 = 1$) gives you Mathematica's solution. Most integrals, and most differential equations, don't have closed-form solutions. The question is not "why doesn't it?" but rather "why does it?" You're lucky to get even this much of a solution.
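One can spot-check that the implicit relation is consistent with the first-order equation $\dot r=\sqrt{1-\alpha/r}$ obtained in the derivation, by numerically differentiating $t(r)$ (a rough sketch; $\alpha=1$, the step size, and the sample radii are arbitrary choices):

```python
import numpy as np

alpha = 1.0
t_of_r = lambda r: r*np.sqrt(1 - alpha/r) + alpha*np.arctanh(np.sqrt(1 - alpha/r))

h = 1e-6
for r in [1.5, 2.0, 5.0, 10.0]:
    dt_dr = (t_of_r(r + h) - t_of_r(r - h)) / (2*h)    # numerical dt/dr
    expected = 1/np.sqrt(1 - alpha/r)                  # reciprocal of dr/dt = sqrt(1 - alpha/r)
    assert abs(dt_dr - expected) < 1e-5, (r, dt_dr, expected)
print("the implicit solution satisfies dr/dt = sqrt(1 - alpha/r) at the sample radii")
```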
H: $X\to Y\to Z$ is a closed immersion and $Y\to Z$ is separated, then $X\to Y$ is a closed immersion Let $X,Y,Z$ be schemes. I'm trying to prove that: If $X\to Y\to Z$ is a closed immersion and $Y\to Z$ is separated, then $X\to Y$ is a closed immersion Here's where I'm at: Let $f:X\to Y$, $g:Y\to Z$. Since $g\circ f$ is a closed immersion, in particular $f$ is injective. Furthermore, $(g\circ f)^\#_x=f_x^\#\circ g_x^\#$ is surjective, hence $f_x^\#$ is surjective for all $x$. The only thing left is to show that $f$ is closed. By assumption, if $F\subset X$ is closed, then $(g\circ f)(F)$ is closed. I don't know how to conclude $f(F)$ is closed using the fact that $g$ is separated. Is this even possible? Any suggestions are welcome, thank you! AI: First, the graph $\Gamma_f = (1_X,f) : X \to X \times_Z Y$ is the pullback of the diagonal $\Delta = (1_Y, 1_Y) : Y \to Y \times_Z Y$ along the map $f \times_Z 1_Y : X \times_Z Y \to Y \times_Z Y$ (check this directly). Since closed immersions are stable under base change and $\Delta$ is a closed immersion by assumption, $\Gamma_f$ is a closed immersion. Now, $f$ factors as $\pi_2 \circ \Gamma_f$, where $\pi_2 : X \times_Z Y \to Y$ is the projection on the second factor, so it suffices to show that $\pi_2$ is a closed immersion. To do this, note that $\pi_2$ can be identified with map $(g \circ f) \times_Z 1_Y : X \times_Z Y \to Z \times_Z Y$ via the canonical identification $Y \cong Z \times_Z Y$. But $g \circ f$ is a closed immersion by assumption, and closed immersions are stable under base change, so $\pi_2$ is a closed immersion as well. We conclude that $f = \pi_2 \circ \Gamma_f$ is the composition of two closed immersions, hence also a closed immersion.
H: Is a function of two independent GMMs still GMM? Consider two independent random variables $X, Y$ which are two Gaussian mixture model (GMM) in 1-dim space. Let $Z = f(X, Y)$, is R.V. $Z$ still a GMM? I randomly generated some GMMs and function $f(\cdot, \cdot)$ and it seems that the distribution of $Z$ is still GMM but I am not sure how to prove this (and also I am not sure the conclusion can be drawed). I appreciate a lot for any hint. AI: I don't think you can possibly prove this without further restrictions on $f$. For instance, suppose $f(x, y) = 0$. Then $Z$ is deterministic. Likewise you can easily design $f$ to be piecewise such that $Z$ is distributed as a Bernoulli variable with some parameter depending on $X,Y$. So in general $Z$ won't be distributed as a mixture of gaussians.
H: Showing one function is greater than another function (or proving an inequality) How would you show that $e^x \lt \frac{1}{1-x} $ for $ 0 \lt x \lt 1 $? I thought we could define a new function, $h=f-g$, where $f(x)=e^x$ and $g(x)=\frac{1}{1-x}$. Then, move on to calculate the derivative of h and then show that it is negative, implying that h is decreasing. But I did not know how to show that h is negative in $(0,1)$ AI: The desired inequality is, of course, equivalent to $$f(x):=e^x(1-x)<1$$ for $0<x<1$. We consider that $$f'(x)=-xe^x\leq0$$ on $[0,1]$, with $f'(x)=0$ exactly when $x=0$. It follows, therefore, that $f$ is monotonically decreasing on $[0,1]$, with a local maximum at $x=0$ where $f(0)=1$.
H: Proving $(A \cup B) - (A \cup C) \subset A \cup (B - C)$ I am trying to prove that $(A \cup B) - (A \cup C) \subset A \cup (B - C)$. Here is as far as I managed to get. Let $x \in (A \cup B) - (A \cup C)$. Then: \begin{align*} x \in (A \cup B) - (A \cup C) & \implies x \in (A \cup B) \text{ and } x \not \in (A \cup C) \\ & \implies (x \in A \text{ or } x \in B) \text{ and } (x \not \in A \text{ and } x \not \in C) \\ & \implies ((x \in A \text{ or } x \in B) \text{ and } x \not \in A) \text{ and } x \not \in C \\ & \implies (x \in B \text{ and } x \not \in A) \text{ and } x \not \in C \\ & \implies x \in B - A \text{ and } x \not \in C \end{align*} In other words, I cannot manage to get there, and have gone down a wildly peculiar path. Any help to get me on the right track would be appreciated. AI: Once you conclude that ($x \in A$ or $x \in B$) and ($x \notin A$ and $x \notin C$), consider two cases. If $x \in A$, then you are done. If $x \notin A$, then we must have $x \in B$ and $x \notin C$. Thus we have shown that $x \in A \cup (B-C).$
H: Taylor polynomial error of 3rd degree if $f$ is only 2 times differentiable. If $f$ is only twice differentiable and I know $f$'s Taylor polynomial of second degree, is the error even defined for that polynomial? Or is it equal to zero? AI: The error certainly exists, even if $f$ is only twice differentiable at $c$: $\begin{align*} R(x) &= f(x) - \left( f(c) + f'(c) (x - c) + \frac{1}{2} f''(c) (x - c)^2 \right) \end{align*}$ What you might not be able to do is to use e.g. Lagrange's form of the remainder: $\begin{align*} R(x) &= \frac{1}{3!} f'''(\xi) (x - c)^3 \end{align*}$ for some $\xi$ between $c$ and $x$. The derivation of this form of the remainder uses the mean value theorem for the derivative, and so assumes the relevant derivative is continuous in the relevant closed interval between $c$ and $x$.
H: All fractions which exponentiated by another fraction gives yet another fraction Consider $\left(\dfrac{a}{b}\right)^{\dfrac{c}{d}}=\dfrac{e}{f}$, where $a, b, c, d, e, f \in \mathbb{Z}$ (the fractions need not be irreducible). Which are all $a, b, c, d$? Note: I'm not interested in $e, f$. AI: This can give a rational result only if $a / b$ is an exact $d$-th power of a fraction, i.e., if the reduced fraction can be written: $\begin{align*} \frac{a}{b} &= \frac{u^d}{v^d} \end{align*}$ for integers $u, v$; the result is then just: $\begin{align*} \left(\frac{a}{b}\right)^{\frac{c}{d}} &= \frac{u^c}{v^c} \end{align*}$
H: Power series expansion of an expresion I hope you can help me with a problem where I'm stuck. I need to expand $\frac{k!}{(1-st)^{k+1}}$ into $\sum_{n=0}^{\infty} \frac{(n+k)!}{n!}(st)^n$ and I don't know where to start. Thank's you in advance. AI: Hint: If $f(x) = \dfrac{1}{1-x}$, then the $k^{th}$ derivative is $f^{(k)}(x) = \dfrac{k!}{(1-x)^{k+1}}.$ Now, use the geometric series to get a power series formula for $f(x)$ (valid of course only for $|x| < 1$).
H: I need a little help with a Calculus problem. Here is the original problem. Here is my work thus far. I know that the third graph is correct already. I can't figure out why 20/7 is incorrect though. I keep getting the same answer no matter how I attack the problem. AI: Goddard's method is correct. Another way to think about it is that the area inside the 7 by 9 rectangle is comprised of three sums: $$63 = 3 + \int_{\frac{1}{3}}^7 \frac{1}{x^2}\mathrm{d}x + R$$ where $63$ is the area of the 7 by 9 rectangle, 3 is the area of the $\frac{1}{3}$ by $9$ strip in the left of the rectangle, the integral is the area under the curve, and $R$ is the area of the region you desire. You've already calculated the integral as $\frac{20}{7}$, so plugging in we find $R = 60 - 20/7 = 400 / 7$.
H: $\textrm{GL}_2(\mathbb{Z}/p^2\mathbb{Z}) \to \textrm{GL}_2(\mathbb{Z}/p\mathbb{Z})$ has no section for $p > 3$ Let $p > 3$ be a prime. I want to show that the exact sequence $$1 \to \ker \pi \to \textrm{GL}_2(\mathbb{Z}/p^2\mathbb{Z}) \xrightarrow{\pi} \textrm{GL}_2(\mathbb{Z}/p\mathbb{Z}) \to 1$$ is not split. It's easy to check that $A = \ker\pi$ is abelian, thus $G = \textrm{GL}_2(\mathbb{Z}/p\mathbb{Z})$ acts on $K$ by conjugation, making $A$ a $G$-module. If $$s \colon \textrm{GL}_2(\mathbb{Z}/p\mathbb{Z}) \to \textrm{GL}_2(\mathbb{Z}/p^2\mathbb{Z})$$ is a set-theoretic section of $\pi$, then $$\alpha_s(g,h) = s(g)s(h)s(gh)^{-1}$$ defines a $2$-cocyle $G \times G \to A$ whose class in $H^2(G,A)$ is independent of the choice of section. Since $\alpha_s$ is trivial if $s$ is a homomorphism, it suffices to show that $\alpha_s$ is not a coboundary, i.e. there does not exist a function $\phi \colon G \to A$ such that $$s(g)s(h)s(gh)^{-1} = s(g)\phi(h)s(g)^{-1}\phi(gh)^{-1}\phi(h)$$ for all $g,h \in G$. I tried supposing such a $\phi$ exists and looking at the subgroup of $U$ unipotent matrices. Every element of $A$ is $p$-torsion, so $\alpha_s$ is a $p$-torsion elemeent of $H^2(G,A$). Since $U$ is the $p$-Sylow subgroup of $G$, the restriction map $H^2(G,A) \to H^2(U,A)$ is injective on the $p$-part of $H^2(G,A)$, so looking at the restriction of $\alpha_s$ to $U \times U$ should be sufficient. Choose the section $s$ so that it maps $U$ to unipotent matrices in $\textrm{GL}_2(\mathbb{Z}/p^2\mathbb{Z})$. After playing around with the coboundary condition, I found that $\phi(g)$ and $s(g)$ should commute for all $g \in U$. Since $s(g)$ is unipotent, this means $$\phi(g) = \begin{pmatrix} 1 + a_gp & b_gp \\ 0 & 1 + a_gp \end{pmatrix}$$ for some $a_g,b_g \in \mathbb{Z}/p^2\mathbb{Z}$, well-defined modulo $p$. If $s_g$ denotes the upper-right entry of $s(g)$ for $g \in U$, then the coboundary condition tells us that $g \to a_g$ is a homomorphism and $$(b_g - b_{gh} + b_h)p = s_g - s_{gh} + s_h$$ for all $g,h \in U$. So we get a system of linear equations over $\mathbb{Z}/p\mathbb{Z}$ with $p$ variables. It's not clear to me how to show that this system is inconsistent, or how to use the hypothesis that $p > 3$. Moreover, lifting $0,1,-1$ in $\mathbb{Z}/3\mathbb{Z}$ to $0,1,-1$ in $\mathbb{Z}/9\mathbb{Z}$, I checked that this system is inconsistent over $\mathbb{Z}/3\mathbb{Z}$. But I thought that the exact sequence was supposed to be split in the case $p = 3$. So I'm unsure where I went wrong and also unsure how to proceed. AI: Let $S=\pmatrix{1&1\\0&1}\in\text{GL}_2(\Bbb Z/p\Bbb Z)$. This has order $p$, and one's intuition suggests that any lifting to $\text{GL}_2(\Bbb Z/p^2\Bbb Z)$ should have order $p^2$, meaning that there's no section for $\pi$. But is this true? A lifting of $S$ has the form $S'=I+A$ where $$A=\pmatrix{ap&1+bp\\cp&dp}\in\text{GL}_2(\Bbb Z/p^2\Bbb Z).$$ Then $$A^2=\pmatrix{cp&(a+d)p\\0&cp},$$ $$A^3=\pmatrix{0&cp\\0&0}$$ and $A^4=0$. For $p\ge5$ then $$S'^p=I+pA+\binom p2A^2+\binom p3A^3=I+\pmatrix{0&p\\0&0}$$ which does mean that $S'$ does not have order $p$ in $\text{GL}_2(\Bbb Z/p^2\Bbb Z)$. This argument breaks down for $p\in\{2,3\}$. For instance with $p=3$ one has $$S'^3=I+3A+3A^2+A^3=I+\pmatrix{0&(c+1)p\\0&0}$$ so one can take $c=-1$. Of course this is somewhat shy of proving that in this case $\pi$ has a section, but it shows that this argument does not refute it.
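As a concrete check of the key step for $p=5$: no lift of $S$ to $\textrm{GL}_2(\Bbb Z/25\Bbb Z)$ has order $5$, which can be verified by brute force over all $5^4$ lifts (a small illustrative script):

```python
import itertools

p = 5
mod = p*p

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) % mod for j in range(2)] for i in range(2)]

def matpow(A, n):
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = matmul(R, A)
    return R

identity = [[1, 0], [0, 1]]
count = 0
# lifts of [[1,1],[0,1]] mod p^2: each entry may be shifted by an arbitrary multiple of p
for a, b, c, d in itertools.product(range(p), repeat=4):
    M = [[1 + a*p, 1 + b*p], [c*p, 1 + d*p]]
    if matpow(M, p) == identity:
        count += 1
print(count)   # 0: no lift of S satisfies M^p = I mod p^2, so S admits no order-p lift
```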
H: Is an infinitesimally small portion of a surface essentially a 2-D area? (Surface Integrals) I'm trying to derive the equation for surface flux $$\iint_{S}\vec F\cdot \hat n \;dS $$ So far, I understand that if we consider the vector field going through smooth surface. Then the flux on a small piece of the surface, or surface element, $\Delta S$, is given by the contribution of $\vec F$ in the direction of the unit normal $\hat n$, times the piece $\Delta S$ $$\vec F\cdot \hat n \Delta s$$ Stewart's Calculus then suggests that this portion $\Delta S$ is essentially $\Delta A$, a 2-D area. My question is if an infinitesimally small portion of a surface is essentially a 2-D area somewhat like a plane? Thanks. AI: This is fairly easy to think about in 3 dimensions, but more formality is needed when talking about higher dimensions. Yes, it is true that the surface of a solid is 2 dimensional, because we would need exactly 2 parameters to parameterize it. This is the same reason why a space curve is one dimensional even though it travels through 3D space - because we can parameterize some curve $C$ using a single variable function $\mathbf{r}: t \mapsto \mathbf{r}(t)$.
H: Prove that $\sqrt{n}$ is irrational unless $n = m^2$ for some natural number $m$. From Spivak's "Calculus": A fundamental theorem about integers, which we will not prove here, states that this factorization [factorization into a product of primes] is unique, except for the order of the factors. Thus, for example, 28 can never be written as a product of primes one of which is 3, nor can it be written in a way that involves 2 only once (now you should appreciate why 1 is not allowed as a prime). (b) Using this fact, prove that $\sqrt{n}$ is irrational unless $n = m^2$ for some natural number $m$. The solution is given as If $\sqrt{n} = a/b$, then $nb^2 = a^2$, so the factorization into primes of $nb^2$ and of $a^2$ must be the same. Now every prime appears an even number of times in the factorization of $a^2$, and of $b^2$, so the same must be true of the factorization of $n$. This implies that n is square. Does this solution really prove the original statement? The statement was $\sqrt{n}$ is irrational $\rightarrow \forall m \in \mathbb{N} (n \neq m^2)$ or, equivalently, $\exists m \in \mathbb{N} (n = m^2) \rightarrow \sqrt{n}$ is rational. It seems that they proved $\sqrt{n}$ is rational $\rightarrow \exists m \in \mathbb{N} (m^2 = n) $ AI: You are correct. That is what they proved. It is also equivalent to what (b) says in the problem statement. It may be useful to see that (as I alluded to in my comment) we can interpret "$A$ unless $B$" as "$A$ is generally true, except when $B$ is true." Therefore, if $A$ isn't true, that can only mean that $B$ is true. In the present problem, we have $A$ = "$\sqrt{n}$ is irrational" and $B$ = "$n$ is a perfect square." By our above reasoning, we can interpret this as "If $\sqrt{n}$ isn't irrational (i.e., it is rational), then $n$ must be a perfect square." And that is indeed what the given solution that you quoted shows. Incidentally, strictly speaking, "$A$ unless $B$" does not necessarily imply "If $B$, then not $A$." It is possible that $B$ is true, and yet $A$ remains true. It only says that if $A$ is false, than $B$ is true. Granted, it often does mean a strict opposition between $A$ and $B$ in ordinary discourse, and I'm not sure I'd want to prepare a legal argument predicated on this ambiguity, but logically, the strict opposition doesn't follow.
H: Show that $\int_{0}^{\pi/2} (\frac{1}{\sin^3{\theta}} - \frac{1}{\sin^2{\theta}})^{1/4} \cos{\theta} d\theta = \frac{(\Gamma(1/4))^2}{2\sqrt{\pi}}$ Well, I have shown that $B(n, n+1) = \frac{(\Gamma(n))^2}{2\,\Gamma(2n)}$. From there, taking $n=1/4$, I could deduce that $B(1/4, 5/4) = \frac{(\Gamma(1/4))^2}{2\sqrt{\pi}}$. I also know that $B(x, y) = \frac{(\Gamma(x))(\Gamma(y))} {\Gamma(x+y)} = 2 \int_{0}^{\pi/2} \sin^{2x-1}{\theta} \cos^{2y-1}{\theta} d\theta$. So I suppose I should reduce the given integral to a form similar to the one above. Is there any trigonometric identity that can help me with that, or am I seeing it wrong? AI: Since you already received a good answer, here is another possible solution, using the substitution proposed by @gt6989b in the comments. $$\int \left(\frac1{u^3} - \frac1{u^2}\right)^{1/4}\, du=2 \sqrt[4]{(1-u) u}+2 \sqrt[4]{u} \,\,\, _2F_1\left(\frac{1}{4},\frac{3}{4};\frac{5}{4};u\right)$$ $$\int_0^1 \left(\frac1{u^3} - \frac1{u^2}\right)^{1/4}\, du=2\,_2F_1\left(\frac{1}{4},\frac{3}{4};\frac{5}{4};1\right)=\frac{2 \sqrt{2 \pi }\, \Gamma \left(\frac{5}{4}\right)}{\Gamma \left(\frac{3}{4}\right)}=\frac{\left(\Gamma(1/4)\right)^2}{2\sqrt{\pi}}$$
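A numerical confirmation of the stated value (illustrative; scipy's quadrature copes with the integrable singularity at $\theta=0$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

integrand = lambda th: (1/np.sin(th)**3 - 1/np.sin(th)**2)**0.25 * np.cos(th)
value, _ = quad(integrand, 0, np.pi/2)
target = gamma(0.25)**2 / (2*np.sqrt(np.pi))
print(value, target)    # both are approximately 3.708
```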
H: proof $ \frac{z-z'\bar{z}}{1-z'} \in \Bbb R $ is real. How to prove that $$ \frac{z-z'\bar{z}}{1-z'} \in \Bbb R $$ given that $z', z \in \Bbb{C}$ , and $|z'|=1$. AI: For a nonzero complex number $w$ we have $\frac{1}{w} = \frac{\bar{w}}{|w|^2}$. Taking $w=1-z'$, we have $$\frac{z-z'\bar{z}}{1-z'} = \frac{(z-z'\bar{z})\overline{(1-z')}}{|1-z'|^2}.$$ Using $|z'|^2=1$, the numerator can be expanded as $z - z'\bar{z} - \overline{z'} z + \bar{z}$. Check that the imaginary part of this quantity is zero.
H: Locus of points and angles The angle bisector at A of triangle ABC cuts BC at L. If C describes a circle whose center is A and B remains fixed, what is the locus of the points L? I tried some sketches and some geogebra but I don't know how to catch and land my idea. Any hint? Thank you !! AI: As $AB$ is fixed and $AC=R$ doesn't change too, we can use $\frac{BL}{LC}=\frac{BA}{AC}$ thus it will be circle, homothetical to the given. $$\frac{BC}{BL}=\frac{LC}{BL}+1=\frac{AC}{BA}+1=\frac{R+AB}{AB},$$ $$\frac{BL}{BC}=\frac{AB}{R+AB},$$ $$BL=\frac{AB}{R+AB}BC,$$ so the center of homothety is $B$ and the coefficient is $\frac{AB}{R+AB}$.
H: Volume of solid generated by the region bounded by the curves $y=\sqrt{x},y=\frac{x-3}{2},y=0$ about the $x$ axis Use the shell method to find the volume of the solid generated by revolving the region bounded by $$y=\sqrt{x},y=\frac{x-3}{2},y=0$$ about the $x$ axis. What I try: Solving the two given curves simultaneously, $$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0$$ We have $x=1$ (Invalid) and $x=9$ (Valid). Putting $x=9$ in $y=\sqrt{x}$ we have $y=3$. Now the volume of the solid formed by rotation about the $x$ axis is $$=\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy$$ Is my volume integral right? If not, then how do I solve it? Help me please. AI: So I would instead split this up into two integrals: $$\pi\int_0^3{(\sqrt{x})^2}dx + \pi\int_3^9{(\sqrt{x})^2-\left(\frac{x-3}{2}\right)^2}dx$$. Using the shell method: $$\int_0^3{2\pi y(2y+3-y^2)}dy$$.
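Both set-ups give the same volume, $22.5\pi\approx 70.69$, which can be confirmed numerically (an illustrative check):

```python
import numpy as np
from scipy.integrate import quad

# washers/disks in x, as in the first formula
disks = np.pi*(quad(lambda x: x, 0, 3)[0]
               + quad(lambda x: x - ((x - 3)/2)**2, 3, 9)[0])

# cylindrical shells in y, as in the second formula
shells = 2*np.pi*quad(lambda y: y*(2*y + 3 - y**2), 0, 3)[0]

print(disks, shells)   # both are about 70.686, i.e. 22.5*pi
```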
H: Convert polynomial of fractional order to polynomial of integer order There is an equation: $p(x) = (\alpha \cdot x + 1)^{3/2}$ Are there ways to convert this equation into a polynomial of integer order with an arbitrary highest degree (with even or odd choice), for example: $p(x) = (\alpha \cdot x + 1)^{3/2} \xrightarrow{Transform} p(x) = c_0 + c_1 \cdot x^1+ ... + c_m \cdot x^m$ where $m$ - even or odd arbitrary degree of polynomial, and $c_i$ - polynomial coefficients. Here we are not talking about approximating this equation by polynomials and not about expanding it into a Taylor series. In general, this is a bad way for this equation, because with an increase in the $\alpha$ coefficient, the Runge phenomenon manifests itself more and more. This is a fractional order system, the structure is very similar to the polynomial equation, so I thought that there should be ways to convert the fractional order system into a similar integer order system that describes the same curve on any interval as the original fractional order system. Is there such a transformation? EDIT: I would like to specify the problem: There is the following ratio: $f(x) = \frac{(x+1)^4}{(\alpha \cdot x + 1)^{3/2}}$ It is necessary to eliminate the fractional degree from this ratio and at the same time observe the condition under which the order of the numerator should not exceed the order of the denominator. I will try to show why squaring is not a very good solution in this case: $f(x) = \frac{((x+1)^4)^2}{((\alpha \cdot x + 1)^{3/2})^2} = \frac{(x+1)^8}{(\alpha \cdot x + 1)^{3}}$ In this case, the degree of the numerator is greater than the order of the denominator. EDIT №2: Squaring helps only if the order of the denominator is greater than the order of the numerator from the very beginning. Therefore, my comment should not be taken as a complete rejection of this method. It seems to me that there should be a more elegant way. AI: Unfortunately, if I understand your question correctly, there is no polynomial that suits your needs. You are looking for some polynomial $P(x)$ so that $$P(x)=(\alpha x+1)^{3/2}$$ for all real $x$. Squaring this, we must certainly have that $$P(x)^2=(\alpha x+1)^3$$ for all $x$. However, now both sides are actually polynomials, so they must in fact be the same polynomial. However, the degree of the polynomial on the left side is $2\deg P$, while the degree of the right side is $3$ (unless $\alpha=0$). Since $3$ is not even, $2\deg P\neq 3$, and thus we have reached a contradiction. If you are satisfied with an approximation on an interval, take a look at the Stone--Weierstrass theorem. It tells you that, given any continuous function on any interval, you can always approximate it as well as you desire by a polynomial.
H: What is different between field , Vector function and plane? What is different between field , Vector function and plane , they seem all shown with same equation: $ \ r \left( t \right) = \left\langle { f \left( t \right),g \left ( t \right) , h \left( t \right)} \right\rangle$ AI: Almost everything you'll see in math is defined in terms of sets and functions. A function in mathematics has a very precise definition, but for now the following "informal" definition is good enough. A function consists of three pieces of data, $(f,A,B)$, and we write this as $f:A \to B$. Here, $A$ and $B$ are sets, called the domain and target space/ codomain respectively, and $f$ is a "rule", which assigns to each $a \in A$, an element $f(a) \in B$. One example is $f:\Bbb{R} \to \Bbb{R}$, $f(x) = x^2 \sin(x)$. Notice how I gave you three pieces of information; I told you the domain, the target space and the actual rule (for each real number $x$, $f(x)$ is the number obtained by squaring and then multiplying by the sine of the number). So, the answer to your question "what is the difference between field and vector function" is that the difference is in what the domain and target space of the function are. For example a scalar field.: Definition $1$. Let $S\subseteq \Bbb{R}^n$ be a non-empty set. A scalar field on $S$ is by definition a function $f:S \to \Bbb{R}$. By the way, this should not be confused with the concept of a field of scalars (or simply a field) as defined in abstract algebra. Next, for vector fields: Definition $2$. Let $S\subseteq \Bbb{R}^n$ be a non-empty set. A vector field on $S$ is by definition a function $f:S \to \Bbb{R}^n$. Another name as opposed to a "vector field on $S$" is "a vector-valued function defined on $S$", or more specifically, "an $\Bbb{R}^n$-valued function defined on the set $S$". Anyway, there are several ways of saying the same thing, and it doesn't matter what you call it, as long as you know what it is you're talking about. So, you see everything is a function. The only question you have to ask is "what is the domain and target space of the function". By the way, in more advanced math, we typically provide slightly more refined definitions, but I think for the moment, the definitions I presented here are sufficiently general, and they serve well enough to illustrate my main point that "everything" is a function, and that you just have to ask what is the domain and target space. Now, a plane, let's call it $\Pi$, in $\Bbb{R}^3$ is by definition a certain subset of $\Bbb{R}^3$. There are several ways of describing planes as you may have seen For example, \begin{align} \Pi = \{(x,y,z) = \mathbf{r} \in \Bbb{R}^3| \, \, \, \mathbf{r} \cdot \mathbf{n} = p\} \end{align} or equivalently, if we write $\mathbf{n} = (a,b,c)$, then \begin{align} \Pi &= \{(x,y,z) \in \Bbb{R}^3| \, \, ax + by + cz = p\} \end{align} Basically, a plane in $\Bbb{R}^3$ is a subset $\Pi$, and it turns out that in order to specify a plane, you can usually use an equation to describe it. For example, when we say something like: consider the plane $3x + 2y + z = 89$, what we mean is to consider the set of points $(x,y,z)$ which satisfy that equation, i.e \begin{align} \Pi := \{(x,y,z) \in \Bbb{R}^3| 3x + 2y + z = 89\} \end{align} Another way of describing a plane is to describe it as the image of a certain function. 
For example, consider the function $f: \Bbb{R}^2 \to \Bbb{R}^3$ defined by the rule \begin{align} f(s,t) &= (0,1,0) + s(1,3,5) + t(2,8,7) \\ &= (s+ 2t, 1 + 3s + 8t, 5s + 7t) \end{align} Then the image/range set of the function $f$ describes a plane. i.e we consider the set \begin{align} \Pi := \text{image}(f) \subset \Bbb{R}^3. \end{align} This is also a plane. Summary: A scalar field and vector field are both functions, the difference is in what the domain and target space are. A plane is a subset, and to tell me specifically what the plane is, there are several ways of doing so, but the main point is that a plane is a certain subset... how you choose to describe it isn't that important.
H: Prove that,a triangle can be formed using sides equal to the diagonals of any convex pentagon. A friend of mine recently gave me a problem saying, $\textbf{Question:}$ Take any convex pentagon.Show that, we can make a triangle whose side lengths will be distinct diagonals of the pentagon. I thought of using trigonometry but they all seemed to random.I couldn't even get started with this question. AI: Hint: Let $d$ be the longest diagonal and consider the two diagonals sharing an endpoint with $d$...
H: Infinite-dimensional inner product spaces: if $A^k = I$ for self-adjoint $A$ and for integer $k > 0$, then $A^2 = I$ Exercise 5(d), Sec 80, Pg 162, PR Halmos's Finite-Dimensional Vector Spaces: If $A^k = I$ where $A$ is a self-adjoint operator and $k > 0$ is some positive integer, show that $A^2 = I$. The underlying inner product space is not necessarily finite-dimensional. I see that it is rather straightforward to establish the result in finite-dimensional spaces (over both real and complex fields), using the Spectral Theorem for self-adjoint operators. However, I'm finding it difficult to extend the result to infinite dimensions for $k \geq 3$. Towards showing $A^2 = I$, my (unsuccessful) attempts so far have been around establishing $\Vert A^2x-x\Vert = 0$. Would appreciate some help. Thanks for reading. AI: For any inner product space, complete or not, let $v$ be any vector and consider the finite-dimensional subspace generated from $v,Av,\ldots,A^{k-1}v$. Then this space is $A$-invariant, so $A$ restricted to it is a symmetric matrix. As mentioned in the question, it is straightforward to show that $A^2=I$ in this space, that is, $A^2v=v$. Since this is true of any vector, $A^2=I$.
H: Null eigenvector assumption: is it always true that for this non positive matrix $A$, there exists $v$ such that $A_{ij} v^{j} = 0$? Let $M$ be a Riemannian manifold and $A_t$ a $(0, 2)$ positive definite tensor defined in $M$ for all $t \in [t_0, t_1)$ and suppose that there exists $p \in M$ and $v \in T_p M$ such that $A(v, v) = 0$ at $(p, t_1)$. Does this imply that there exists a vector $\tilde{v}$ satisfying $A_{ij}\tilde{v}^{j} = 0$? Context: when one proves the maximum principle for tensors, we define a certain $(0, 2)$ tensor that depends on time like that and it's necessary to prove that $A > 0$ for all $t \in [0, T)$ for a certain $T$. Proceeding by contradiction, we suppose that that's not true, so that there exists a point and instant $(p, t_1)$ such that $A > 0$ for all $t \in [t_0, t_1)$ but $A(v, v) = 0$ at $(p, t_1)$. I get that so far. But the problem is that $A(v, v) = A_{ij}v^{i} v^{j} \neq A_{ij} v^{j}$. I don't see how $A_{ij}v^{i}v^{j} = 0$ implies that $A_{ij}v^{j} = 0$ (of course, if it were the other way around this would be trivial, but it's not). AI: From what you have written I'm under the impression that you know that the map $w\mapsto A(w,w) $ has a minimum in $v$ (for fixed $(p, t_1)$). But then it's derivative in $v$ is zero, and that's just the map $w\mapsto 2 A(v, w)$. So this then implies that $A(v, .) $ is zero.
H: Change the order of integration I need to change the order of integration (hope that's a correct term in English)$$\int_{0}^{1}dx \int_{0}^{\sqrt{(1+x^2)}}f(x,y) \, dy$$ so we have the following intervals for $x$ and $y$: $$\begin{cases}0\leq x \leq 1 \\ 0\leq y \leq \sqrt{1+x^2}\end{cases}$$ Then we can draw a figure using these intervals.\ Now lets divide y interval into 2 parts: $$0\leq y \leq 1\, and \,1\leq y\leq \sqrt{2}$$ Doing the same thing with the interval x we get two corresponding intervals $$0\leq x \leq 1\, and \, 0\leq x \leq \sqrt{y^2-1}$$ As a result we get a sum of integrals $$\int_{0}^{1}dx \int_{0}^{\sqrt{(1+x^2)}}f(x,y) \, dy=\int_{0}^{1}dy \int_{0}^{1}f(x,y) \, dx + \int_{1}^{\sqrt{2}}dy \int_{0}^{\sqrt{(y^2-1)}}f(x,y) \, dx$$ What I want is to make sure that the procedure and the result are correct as it seems too easy to be correct... AI: There is one mistake: $y \leq \sqrt {1+x^{2}}$ becomes $y^{2} \leq 1 +x^{2}$. When $y^{2} \geq 1$ this becomes $x \geq \sqrt {y^{2}-1}$ not $x \leq \sqrt {y^{2}-1}$. So the second term is $\int_1^{\sqrt 2} dy \int _{\sqrt {y^{2}-1}} ^{1}f(x,y) dx $
H: Can the letter $\epsilon$ be defined as a parameter instead of the conventional meaning? If in some sections of my proof, $\epsilon$ represents the common meaning of the small variable approaching zero, while in another section of the proof, $\epsilon $ is defined as a parameter, would it be considered "unconventional notation" or "bad grammar"? AI: That is indeed an unconventional notation, but not bad grammar at all. On the other hand, it is not at all a good option to use, within a single proof, the same symbol for two different objects.
H: Distribution of $e^{\sum_{i=1}^N Z_i}$ with Gaussian $Z$ a) If $Z\sim N(\mu,\sigma^2)$, find the distribution of $e^{\sum_{i=1}^N Z_i}$. $X = \sum_{i=1}^N Z_i\sim N(n\mu,\sqrt{n}\sigma).$ But then what would the distribution of $e^X$ be? AI: If $X=\log Y$ follows a Gaussian distribution (which is exactly your case), then $Y$ is lognormal. PS: you also supposed that the $N$ Gaussians are independent (if they are jointly Gaussian but not independent, the sum is still Gaussian, only with different parameters). You also confused $n$ with $N$; it is not a big deal, perhaps only a typo.
H: Number of matrices with positive determinant whose entries are {1,-1} How many different matrices $3 \times 3$ matrix, whose every entry is 1 or -1, have positive determinant? My approach: I found out the possible values of determinant of all such $3 \times 3$ matrices whose entires are 1 or -1: {-4,-2,0,2,4}. The number of matrices with positive determinant = number of matrices with negative determinant by symmetry, (as we can just multiply each entry with -1, and sign of determinant would change as it's an odd degree). So the answer would be $\frac{2^9-n(0)}{2}$ where n(0) is the number of singular matrices(determinant 0). For n(0), any two rows/columns must be proportional to each other i.e. all 1 or all -1 or one of them 1 and the other -1.(4 possibilities for two such choosen rows/columns). So I tried to count it as follows: $({3 \choose 2} \cdot 4 \cdot 2^3)\cdot 2=192$ for rows and columns. However this includes cases with three rows /three column repeated as well, for which we must subtract the over counted cases. For all 3 rows/columns to be proportional there are $(3\cdot 2 +2) \cdot 2 = 16$ ways of which each is counted thrice. So a total of 48 repeated cases. So n(0)=192-48=144. This gives number of matrices with positive determinant as $\frac{512-144}{2}=184$, However the answer is given as $96$. Where am I going wrong? Also is there a more systematic way of doing this question? AI: For counting the singular cases, here's a geometric interpretation . . . Consider the solid cube $C=\big{\{}(x,y,z)\in \mathbb{R}^3\mid x,y,z\in [-1,1]\big\}$. Let $S$ be the set of $3{\,\times\,}3$ matrices with all entries in $\{\pm 1\}$. For $A\in S$ we can regard the $3$ rows of $A$ as vertices $V_1,V_2,V_3$ of $C$, hence $\det(A)=0$ if and only if the vectors $\vec{V_1},\vec{V_2},\vec{V_3}$ are coplanar. To count the number of matrices $A\in S$ such that $\det(A)=0$, consider $3$ cases . . . Case $(1)$:$\;V_1,V_2,V_3$ are all equal. The count for this case is $8$. Case $(2)$:$\;$Exactly two of $V_1,V_2,V_3$ are all equal. The count for this case is $8{\,\cdot\,}7{\,\cdot\,}3=168$, since there $8$ choices for the vertex with multiplicity $2$, $7$ choices for the non-duplicated vertex, and $3$ ways of arranging the rows of $A$. Case $(3)$:$\;\vec{V_1},\vec{V_2},\vec{V_3}$ are coplanar and $V_1,V_2,V_3$ are distinct. Then the plane containing the triangle with vertices $V_1,V_2,V_3$ must pass through the origin. There are $6$ such planes, each of which passes through two diagonally opposite edges of $C$. Each such plane contains $4$ vertices of $C$, so for each such plane there are $4$ choices for $\{V_1,V_2,V_3\}$. Thus there are $6{\,\cdot\,}4=24$ possible choices for $\{V_1,V_2,V_3\}$, and for each such choice there are $3!=6$ was of arranging the rows of $A$. Hence the count for this case is $24{\,\cdot\,}6=144$. Summing the counts for the $3$ cases, we get a total count of $8+168+144=320$ for the count of the number of singular matrices. Hence we get ${\Large{\frac{2^9-320}{2}}}=96$ for the count of the number of matrices $A\in S$ with $\det(A) > 0$.
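Since there are only $2^9=512$ such matrices, the count is easy to verify by brute force (an illustrative check):

```python
import itertools
import numpy as np

positive = negative = singular = 0
for entries in itertools.product([1, -1], repeat=9):
    d = round(np.linalg.det(np.array(entries, dtype=float).reshape(3, 3)))
    if d > 0:
        positive += 1
    elif d < 0:
        negative += 1
    else:
        singular += 1
print(positive, negative, singular)   # 96 96 320
```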
H: How are joint probability distributions constructed from product measures? I often see a construction in measure theory in regards to product measures. This is outlined below (taken from Wikipedia because it's very generic) Let $ (X_{1},\Sigma _{1})$ and $(X_{2},\Sigma _{2})$ be two measurable spaces, that is, $\Sigma _{1} $ and $\Sigma _{2}$ are sigma algebras on $X_{1}$ and $X_{2}$ respectively, and let $\mu _{1} $and $\mu _{2}$ be measures on these spaces. Denote by $\Sigma _{1}\otimes \Sigma_{2}$ the sigma algebra > on the Cartesian product $X_{1}\times X_{2} $ generated by subsets of the form $B_{1}\times B_{2}$, where $B_{1}\in > \Sigma _{1}$ and $B_{2}\in \Sigma _{2}.$ This sigma algebra is called the tensor-product σ-algebra on the product space. A product measure $\mu _{1}\times \mu _{2}$ is defined to be a measure on the measurable space $(X_{1}\times X_{2},\Sigma _{1}\otimes \Sigma _{2})$ satisfying the property $(\mu _{1}\times \mu _{2})(B_{1}\times B_{2})=\mu > _{1}(B_{1})\mu _{2}(B_{2})$ for all $B_{1}\in \Sigma _{1},\ B_{2}\in \Sigma _{2}.$ Questions: From the perspective of probability theory, this product measure construction looks a lot like the construction of a joint probability $P(X,Y)$ where the $X$ and $Y$ are independent. Is this assumption correct? If (1.) is correct, then how does the notion of correlation come into the structure of product measures? How is correlation built into measure theory so that it can pass onto probability theory? AI: You're correct that the statement of independence is, that if $X$ and $Y$ are (real-valued) random variables on a common probability space with respective distributions $X(P)$ and $Y(P)$, then $X$ and $Y$ are said to be independent exactly if the distribution of $(X,Y)$ (which is a random vector and thus, its distribution is a measure on $\mathbb{R}^2$) is the product measure $X(P)\otimes Y(P)$. 2. Correlation is not inherent to product measures since these are exactly the distributions of independent random variables - in particular, uncorrelated ones. However, when $X$ and $Y$ are not independent, then $(X,Y)(P)$ is not a product measure, but some other measure on $\mathbb{R}^2$. This allows for the covariance $$ \int_{\mathbb{R}^2} xy\; \textrm{d}(X,Y)(P)-\int_{\mathbb{R}^2} xy\;\textrm{d}X(P)\otimes Y(P) $$ to be non-trivial. 3. About the relation between product measures and other measures. One way that general measures do pop up from product measures is via conditional distributions (see, for instance, https://en.wikipedia.org/wiki/Regular_conditional_probability). If you have a good notion that "conditional on $Y=y$, then $X$ follows the distribution $\mu_y$," then you have a scheme as follows: Let $(U,Y)$ be an independent pair such that $Y$ is still $Y$ and $U$ is uniform on $[0,1]$. Then, if $F_y$ is the cumulative distribution function of $\mu_y$ with right-continuous generalised inverse $F_y^{-1}$, we have that $F_y^{-1}(U)$ follows the distribution $\mu_y$. Therefore, given such a pair $(U,Y)$, the transformation $(U,Y)\mapsto (F_Y(U),Y)$ yields a variable with the same distribution as $(X,Y)$. Thus, the latter distribution (which is not a product measure) can be constructed from the former (which is).
H: Determining that a certain diffeomorphism of $\Bbb R^n-\{0\}$ is orientation preserving or not Consider the diffeomorphism $f:\Bbb R^n-\{0\} \to \Bbb R^n-\{0\}$ (whose inverse is itself) given by $x\mapsto x/|x|^2$. How can we determine that $f$ is orientation preserving? For $n=1$ it is clearly orientation reversing, and also for $n=2$, but it seems not easy to compute its Jacobian determinant for large $n$, so I think there should be another method. Can I get a hint? AI: You have to calculate the derivative at one point only, since the sign of the Jacobian cannot change on the connected set on which the map is defined (because otherwise it would have a zero somewhere). Now note that the restriction of the map to the unit sphere is just the identity, and that its derivative, at $x$ with $|x|=1$, maps $x$ to $-x$. From this you can easily get a diagonal representation of the derivative at $x \in S^{n-1}$, which allows you to directly read off the sign of the determinant.
H: Let A and B be subsets of R, prove that int(A ∪ B) ⊂ int(A) ∪ int(B) $x\in int(A \cup B)$ implies that there exists $\epsilon>0$ such that $(x-\epsilon, x+\epsilon)\subset(A \cup B),$ hence, $(x-\epsilon,x+\epsilon)\subset A \cup (x-\epsilon,x+\epsilon)\subset B$, implies $x\in int(A)\cup int(B)$. Is this true? And how to prove that $int(A ∪ B)$ ⊂ $int(A)$ ∪ $int(B)$? AI: This is false. If $A$ is the set of all rational numbers and $B$ is the set of all irrational numbers then $A$ and $B$ have no interior but the interior of $A \cup B$ is $\mathbb R$.
H: $M\subseteq p\implies (M)\subseteq p$ where $p$ is a prime ideal, $(M)$ the ideal generated by $M$ I'm stuck on this result needed for a larger problem: Let $M\subseteq R$, $p$ be a prime ideal of $R$ then: $$M\subseteq p\implies (M)\subseteq p$$ where $(M)$ denotes the ideal generated by $M$. I attempted to show a contradiction using ideal properties, but I'm not getting anywhere. How can the inclusion be derived? AI: Well as @TokenToucan pointed out in the comments this is true for any ideal $I$. If $M \subset R $ which is contained in an ideal $I$ of $R$, then by definition $(M)$ is the intersection of all the ideals of $R$ which contain $M$. Now the ideal $I$ contains $M$ by hypothesis, hence $(M) \subset I$.
H: Where did my solution fail? In Buck's Advanced Calculus book, I saw an exercise that says Let $f$ be defined by $\displaystyle{f(x,y)=\frac{x^2y^2}{x^2+y^2},}$ with $f(0,0)=0.$ By checking various sequences, test this for continuity at $(0,0).$ Can you tell whether or not it is continuous there? If we let $x^2y^2=x^2+y^2,$ we will find that$$y^2(x^2-1)=x^2\implies y^2=\frac{x^2}{x^2-1}\implies y=\sqrt{\frac{x^2}{x^2-1}}$$and if we let $\displaystyle{p_n=(x_n,y_n)=\left(x_n,\sqrt\frac{x_n^2}{x_n^2-1}\right)}$ with $\lim x_n=0,$ $$\lim p_n=\left(0,\lim\sqrt\frac{x_n^2}{x_n^2-1}\right)=\left(0,\sqrt\frac{0}{-1}\right)=\left(0,0\right)$$ but because of $x_n^2y_n^2=x_n^2+y_n^2,$ $\lim f(p_n)$ will be equal to $1.$ Hence, $f$ is not continuous at $(0,0).$ But the book says it is. Where did my solution fail? AI: The expression $\sqrt{\frac{x^2}{x^2-1}}$ only makes sense if $|x|>1$. So, if $\lim_{n\to\infty}x_n=0$, it makes no sense to talk about $\sqrt{\frac{x_n^{\,2}}{x_n^{\,2}-1}}$ when $n\gg1$.
H: Show if $f(x)=x^4+2x^2+2$ is injective or surjective $$f(x)=x^4+2x^2+2$$ Is it injective: $$x^4+2x^2+2=0$$ $$x^2=u $$ $$u^2+2u+2=0 $$ $$u_{1,2}=\frac{-2\pm\sqrt{-4}}{2}$$ Doesn't have a real root. We can also try another way to see if it is injective: $$f^{-1}(f(x))=\text{identity of domain}$$ $$x=y^4+2y^2+2 = (y^2+1)^2+1$$ when we solve for y we get: $$\pm\sqrt{\pm\sqrt{x-1}-1}=f^{-1}(x)$$ So this means we can do the following: $$f^{-1}(f(x))=\pm\sqrt{\pm\sqrt{x^4+2x^2+1}-1}$$ I have a feeling that something went wrong here. I later also need to show for surjectivity(in lectures we have shown this way:$f(f^{-1}(x)=\text{identity of range}$). But it stops here. AI: It's not injective; $f(-1) = f(1) = 4$. It's also not surjective; $f(x) = (x^2+1)^2 + 1 \geq 1$ so it takes no negative values.
H: proof that $\lim \sup (a_n · b_n) = \lim \sup a_n · \lim \sup b_n $ With $a_n, b_n \geq 0$, i.e. non-negative, for all $n$ in $\mathbb N$. Prove that, for $n \to \infty$, $\lim \sup (a_n · b_n) = \lim \sup a_n · \lim \sup b_n$, if $a_n$ and $b_n$ converge. My start would be: Since $a_n$ converges and $\lim a_n = a$, so is $\lim \sup a_n = a$, and since $b_n$ converges and $\lim b_n = b$, so is $\lim \sup b_n = b$. So that, $\lim \sup (a_n · b_n) = a · b$. AI: Note first that the identity is false without the convergence assumption: take $a_n=\boldsymbol 1_{2\mathbb N}(n)$ and $b_n=\boldsymbol 1_{2\mathbb N+1}(n)$. Then $$\limsup (a_nb_n)=0$$ whereas $$\limsup(a_n)\limsup(b_n)=1.$$ If, however, both sequences converge (as in your statement), then your start is essentially the whole proof: $\limsup a_n=\lim a_n=a$, $\limsup b_n=\lim b_n=b$, the product $a_nb_n$ converges to $ab$, and hence $\limsup(a_nb_n)=\lim(a_nb_n)=ab=\limsup a_n\cdot\limsup b_n$. You only need to justify the two facts you are using: that a convergent sequence's $\limsup$ equals its limit, and that the limit of a product is the product of the limits.
H: Limits of radicals of polynomials Given a polynomial $$p(x)=(x+a_1)(x+a_2)...(x+a_n)$$ $$\lim_{x \to \infty} p(x)^{\frac{1}{n}}-x$$ This limit seems to equal to the average of the $a_n$’s. How do I prove this? I think it’s an inductive proof but I just don’t know how. It might involve l’hopital but I managed to prove quadratic and cubic without. AI: You have $$\begin{aligned} \log\left(p(x)\right)^{1/n} &= \frac{1}{n}\sum_{k=1}^n\log(x+a_k)\\ &=\frac{1}{n}\sum_{k=1}^n\left(\log x + \log\left(1+\frac{a_k}{x}\right)\right)\\ &= \log x + \frac{1}{nx}\sum_{k=1}^n a_k + \frac{\epsilon(x)}{x^2} \end{aligned}$$ where $\lim\limits_{x \to \infty } \epsilon(x) = 0$. Taking the exponential you indeed get that the expected limit is equal to the mean of the $a_k$.
H: Rotating/orienting in 3D space around any point and any axes I'm writing code that requires rotation of objects around any point in 3D space. I've found the methods for rotating objects by the euler angles, but they all seem to rotate around the origin. So to rotate around any point, must I first move the coordinate system by subtracting the coordinates of the rotation point from each point in the object, do the rotation, and then move the coordinate system again? Or are the simpler, more direct (and more computationally efficient) ways to do this? AI: A 3D rotation around an arbitrary point $(x_0, y_0, z_0)$ is described by $$\left[ \begin{matrix} x^\prime \\ y^\prime \\ z^\prime \end{matrix} \right] = \left[ \begin{matrix} X_x & Y_x & Z_x \\ X_y & Y_y & Z_y \\ X_z & Y_z & Z_z \end{matrix} \right] \left[ \begin{matrix} x - x_0 \\ y - y_0 \\ z - z_0 \end{matrix} \right] + \left[ \begin{matrix} x_0 \\ y_0 \\ z_0 \end{matrix} \right]$$ which, as OP noted, first subtracts the center point, rotates around the origin, then adds back the center point; equivalently written as $$\vec{p}^\prime = \mathbf{R} \left( \vec{p} - \vec{p}_0 \right) + \vec{p}_0$$ We can combine the two translations, saving three subtractions per point – not much, but might help in a computer program. This is because $$\vec{p}^\prime = \mathbf{R} \vec{p} + \left( \vec{p}_0 - \mathbf{R} \vec{p}_0 \right)$$ In other words, you can use the simple form, either $$\left[ \begin{matrix} x^\prime \\ y^\prime \\ z^\prime \end{matrix} \right] = \left[ \begin{matrix} X_x & Y_x & Z_x \\ X_y & Y_y & Z_y \\ X_z & Y_z & Z_z \end{matrix} \right] \left[ \begin{matrix} x \\ y \\ z \end{matrix} \right] + \left[ \begin{matrix} T_x \\ T_y \\ T_z \end{matrix} \right]$$ or, equivalently, $$\left[ \begin{matrix} x^\prime \\ y^\prime \\ z^\prime \\ 1 \end{matrix} \right] = \left[ \begin{matrix} X_x & Y_x & Z_x & T_x \\ X_y & Y_y & Z_y & T_y \\ X_z & Y_z & Z_z & T_z \\ 0 & 0 & 0 & 1 \end{matrix} \right] \left[ \begin{matrix} x \\ y \\ z \\ 1 \end{matrix} \right]$$ where $$\left[ \begin{matrix} T_x \\ T_y \\ T_z \end{matrix} \right] = \left[ \begin{matrix} x_0 \\ y_0 \\ z_0 \end{matrix} \right] - \left[ \begin{matrix} X_x & Y_x & Z_x \\ X_y & Y_y & Z_y \\ X_z & Y_z & Z_z \end{matrix} \right] \left[ \begin{matrix} x_0 \\ y_0 \\ z_0 \end{matrix} \right]$$ to apply a rotation $\mathbf{R}$ around a centerpoint $(x_0, y_0, z_0)$. The 4×4 matrix form is particularly useful if you use SIMD, like SSE or AVX, with four-component vectors; that's one reason why many 3D libraries use it. Another is that the same form can be used for projection.
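To see the combined affine form in action, here is a small Python/NumPy sketch; the particular rotation (about the z-axis), the angle, the centre point and the test point are arbitrary example values, not from the question:

```python
import numpy as np

def rotation_z(theta):
    """3x3 rotation matrix about the z-axis (used here only as an example R)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

R = rotation_z(np.deg2rad(30))          # example rotation
p0 = np.array([1.0, 2.0, 3.0])          # centre of rotation
T = p0 - R @ p0                         # combined translation term

p = np.array([4.0, -1.0, 0.5])          # some point to rotate

# "subtract, rotate, add back" vs. the combined affine form
direct   = R @ (p - p0) + p0
combined = R @ p + T

# 4x4 homogeneous form
M = np.eye(4)
M[:3, :3] = R
M[:3, 3] = T
homogeneous = (M @ np.append(p, 1.0))[:3]

print(np.allclose(direct, combined), np.allclose(direct, homogeneous))  # True True
```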
H: Hypothesis testing with an exponential distribution I have the following problem: Given the data $X_1, X_2, \ldots, X_{15}$ which we consider as a sample from a distribution with a probability density of $\exp(-(x-\theta))$ for $x\ge\theta$. We test $H_0: \theta=0$ against $H_1: \theta>0$. As test statistic $T$ we take $T = \min\{x_1, x_2, \ldots, x_{15}\}$. Big values for $T$ indicate $H_1$. Assume the observed value of $T$ equals $t=0.1$. What is the p-value of this test? Hint: If $X_1, X_2,\ldots,X_n$ is a sample from an $\operatorname{Exp}(\lambda)$ distribution, then $\min\{X_1, X_2,\ldots,X_n\}$ has an $\operatorname{Exp}(n\lambda)$ distribution. The solution says 0.22. I know that the first question you have to ask yourself regarding the p-value is: "What is the probability that $H_0$ would generate a sample θ>0?" So I assume $H_0$ is true and take θ = 0. The probability density function becomes: f(x) = exp(-x). I take up the hint, so I make it f(x) = exp(-nx). This is where I get stuck. I don't know how to proceed with the information given: Assume the observed value of T equals t=0.1. Can I have feedback on this problem? Thanks, Ter AI: If you are familiar with models having a Monotone Likelihood Ratio you will recognise why $T=\min$ is the natural test statistic here; in any case the p-value can be calculated directly under $H_0$. With $\theta=0$ each $X_i\sim\operatorname{Exp}(1)$, so by the hint $T=\min\{X_1,\dots,X_{15}\}\sim\operatorname{Exp}(15)$. Since large values of $T$ point towards $H_1$, the p-value is $$P(T\ge 0.1\mid\theta=0)=e^{-15\cdot 0.1}=e^{-1.5}\approx 0.22.$$
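A quick numerical sanity check of the $e^{-1.5}\approx 0.22$ value (my own addition, not part of the answer): under $H_0$ the minimum of 15 independent $\operatorname{Exp}(1)$ draws is $\operatorname{Exp}(15)$, and a short simulation agrees:

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# exact p-value under H0: P(min of 15 Exp(1) draws >= 0.1) = exp(-15 * 0.1)
exact = math.exp(-15 * 0.1)

# Monte Carlo check
n_sim = 200_000
mins = rng.exponential(scale=1.0, size=(n_sim, 15)).min(axis=1)
approx = np.mean(mins >= 0.1)

print(exact, approx)  # both about 0.223
```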
H: What does it precisely mean for the limit to not exist? I know a question with the exact same title has an answer, but it hasn't really answered mine, so please hear me out. I know what it means for the limit at a point to "not exist": the function doesn't approach a certain value as x approaches some value. But I have seen this phrase used in 2 different contexts. One where the one-sided limits are different, and the second where the function approaches infinity or negative infinity from both sides. The notation for the second, the "infinite limit", is always written out in the regular limit notation by the author, who says that it is "describing the way in which the limit does not exist". What are the differences in these 2 types of non-existence in the context of the precise definition of a limit not existing? Secondly, the author explains how the limit of a quotient cannot be computed by the quotient of the limits when the limit of the denominator is equal to zero and that of the numerator is positive, and proceeds to say that the "limit does not exist" without mentioning the limit notation used for the "infinite limits", so I assume that he's implying that this is the first kind of non-existence. Shouldn't this be the second type, since this is no longer an indeterminate form and should clearly approach a very large positive number? Sorry for not formatting, I post on a tablet and don't know how to format. AI: We say that the limit $\lim_{x\to a}f(x)$ is a real number $l$ when$$(\forall\varepsilon>0)(\exists\delta>0)(\forall x\in D_f):0<|x-a|<\delta\implies|f(x)-l|<\varepsilon,$$we say that $\lim_{x\to a}f(x)=\infty$ when$$(\forall M\in\Bbb R)(\exists\delta>0)(\forall x\in D_f):0<|x-a|<\delta\implies f(x)>M,$$and we say that $\lim_{x\to a}f(x)=-\infty$ when$$(\forall M\in\Bbb R)(\exists\delta>0)(\forall x\in D_f):0<|x-a|<\delta\implies f(x)<M.$$The limits $\lim_{x\to a^+}f(x)$ and $\lim_{x\to a^-}f(x)$ are similar, but then $0<|x-a|<\delta$ becomes $0<x-a<\delta$ and $-\delta<x-a<0$ respectively. In any case, $\lim_{x\to a}f(x)$ exists if and only if both limits $\lim_{x\to a^+}f(x)$ and $\lim_{x\to a^-}f(x)$ exist and are equal. When someone says that the limit $\lim_{x\to a}f(x)$ exists, that person should make it clear whether he or she is talking about existence in $\Bbb R$ or about existence in $\Bbb R\cup\{\pm\infty\}$. Assuming that the context here is just the existence in $\Bbb R$, how can a limit fail to exist? These are the possibilities: One of the limits $\lim_{x\to a^\pm}f(x)$ does not exist in $\Bbb R$. Both of them exist, but they are distinct.
H: Proving the continuity of $f(x)=\sum_{n=1}^{\infty}\frac{1}{\sqrt{n}}\bigl(\exp\bigl(-\frac{x^2}{n}\bigr)-1\bigr)$ I would like to prove the continuity of $f(x)=\sum_{n=1}^{\infty}\frac{1}{\sqrt{n}}\bigl(\exp\bigl(-\frac{x^2}{n}\bigr)-1\bigr)$ with $x\in\mathbb{R}$. The most obvious way to go about this is to first state that $\left\lvert \exp\left(-\frac{x^2}{n}\right)-1\right\rvert \leq \frac{x^2}{n} $. This then implies that $\left\lvert\frac{1}{\sqrt{n}}\left(\exp\left(-\frac{x^2}{n}\right)-1\right)\right\rvert \leq \frac{x^2}{n^{3/2}} $. Ideally then we would use a Weierstrass $M$-test to find a convergent majorant series as this would then grant us the continuity of $f(x)$ as desired. However the issue that I am finding is that the majorant series may NOT depend on $x$. Now I think I have found a way around this issue, but suddenly it feels like I am in fact merely showing regular convergence and not uniform convergence. My idea is as follows: Consider the restriction of $f(x)$ onto the interval $M$ with $x\in M$ and $M=]-x_0,x_0[$ Then the series $\sum_{n=1}^{\infty}\frac{x_0^2}{n^{3/2}}$ is a convergent majorant series for $f(x)$ restricted to $M$ and as such we have uniform convergence on the interval $M$ and thus specially uniform convergence in the point $x$ since $x\in M$, which then implies continuity in $x$. But we may generate this restriction for any $x$ that we desire so we are able to prove continuity in any general $x\in\mathbb{R}$ and thus we must have that $f(x)$ is continuous in all $x\in\mathbb{R}$. Is this proof correct? I have to admit that when I do my restriction and start proving continuity for a single point at a time it feels like I am no longer using uniform convergence, so I'm a little iffy on whether the math here holds up. I would deeply appreciate some comments on whether or not I've made any mistakes! AI: Your proof is correct. Your proof may be polished in a couple of ways: As pointed out by @Kavi Rama Murthy, for each given $x \in \mathbb{R}$, you may want to specify the interval on which the Weierstrass $M$-test will be applied. For this purpose, you may pick any bounded interval that contains $x$ as an interior point, such as $(x - 1, x + 1)$. Alternatively, you may apply the Pasting Lemma: Pasting Lemma. Let $X$ and $Y$ be topological spaces, and let $\{ A_{i} \}_{i \in I}$ be an arbitrary family of open subsets of $X$. If a function $f : \bigcup_{i\in I}A_i \to Y$ is continuous on each $A_i$, $i \in I$, then $f$ itself is also continuous. If you are not familiar with point-set topology, you may regard both $X$ and $Y$ in the above lemma as $\mathbb{R}$. Its proof is quite straightforward. Now if we choose the index set as $I = (0, \infty)$ and let $A_i = (-i, i)$ for each $i \in I$, then your argument shows the function $f(x) = \sum_{n=1}^{\infty}n^{-1/2}(e^{-x^2/n}-1)$ is continuous on each interval $A_i$, and so, $f$ is continuous on all of $\bigcup_{i\in I}A_i=\bigcup_{i>0}(-i,i)=\mathbb{R}$. Finally, let me conclude by proving the following claim: Claim. The defining sum for $f(x)$ does not converge uniformly on all of $\mathbb{R}$. Proof. By the local uniform convergence again, we find that $f$ is term-wise differentiable with $$ f'(x) = -\sum_{n=1}^{\infty} \frac{2x}{n^{3/2}}e^{-\frac{x^2}{n}}. $$ Now define $g : [0, \infty) \to \mathbb{R}$ by $$ g(t) = \begin{cases} 2t^{-3/2}e^{-1/t}, & t > 0; \\ 0, & t = 0. \end{cases} $$ Then $g$ is continuous, non-negative, monotone-increasing on $[0, \frac{2}{3}]$, and monotone-decreasing on $[\frac{2}{3}, \infty)$. Using this, it is not hard to prove that $$ -f'(x) = \sum_{n=1}^{\infty} g\left(\frac{n}{x^2}\right)\frac{1}{x^2} \xrightarrow[x\to\infty]{} \int_{0}^{\infty} g(t) \, \mathrm{d}t = 2\sqrt{\pi}. $$ Then by L'Hospital's Rule, $$ \lim_{x\to\infty} \frac{f(x)}{x} = \lim_{x\to\infty} f'(x) = - 2\sqrt{\pi}. $$ In particular, $f(x)$ is unbounded from below. Since each partial sum of $f(x)$ is bounded on all of $\mathbb{R}$, the convergence cannot be uniform.
H: spans of distinct sets of vector space basis Let basis of $V$ vector space $S = \{a_1,a_2,...,a_n\}$. Let's divide this set into two distinct sets $S_1$ and $S_2$. So are $Sp(S_1)$ and $Sp(S_2)$ distinct sets? Intuitively, i can say that $Sp(S_1)$ and $Sp(S_2)$ are distinct sets. But how can i prove that systematically ? AI: It is first worth noting that the span of any set of vectors contains the zero vector, so the spans of $S_1$ and $S_2$ are in fact never disjoint. Clearly other than this trivial case though, they should be. Well, if the two spans are not disjoint, they must have an element in common. But if there is a vector in the span of both, then a linear combination of vectors in $S_1$ can be written as a linear combination of vectors in $S_2$, so $$\alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_k a_k = \beta_1 a_{k+1} + \beta_2 a_{k+2} + \dots + \beta_m a_{n}$$ (with not all $\alpha_i, \beta_i = 0$) Subtracting the vectors in $S_2$ from both sides gives $$\alpha_1 a_1 + \alpha_2 a_2 + \dots + \alpha_k a_k - \beta_1 a_{k+1} - \beta_2 a_{k+2} - \dots - \beta_m a_{n} = 0$$ and then clearly linear independence of $S$ is explicitly violated because we can express the zero vector as a linear combination of the vectors with some nonzero coefficients. Hence, $S$ is not a basis and we have a contradiction. Another thing worth noting is that we really only need to assume linear independence of $S$ here. Whether or not $S$ is a basis for $V$ is irrelevant—it may span the whole space or not. We should really prove that linear independence implies this property by proving that its negation implies linear dependence, and your statement is the contrapositive of this. Edit: I just came across this question again and a simpler argument occurred to me. The two sets $S_1$ and $S_2$ are actually each a basis for a subspace of $V$. Hence, any vector in either subspace is expressible uniquely in the respective basis. So if we had a vector in the span of both, it would be expressible in two different ways (if we consider the original basis, the union of the two sets), which is clearly a contradiction. This is essentially the same argument as above—just less algebraic and perhaps slightly more intuitive.
H: Finding a matrix $A$'s eigenvalues from its square $A^2$'s eigenvalues In the proof I read, a $n \times n$ matrix $A^2$ has the eigenvalues $k^2$ with multiplicity $1$ and $k-1$ with multiplicity $n-1$. We also know that the matrix $A$ itself is symmetric. Then the proof says Since $A$ is symmetric and hence diagonalizable, we conclude that A has the eigenvalues $k$ (of multiplicity $1$) and $\pm \sqrt{k-1}$. What I didn't understand with this part of the proof is why did we only consider $k$, but not $-k$ as the eigenvalue of $A$. Can you please explain? Edit(Additional information in the proof that might be related): The matrix $A^2$ is all $k's$ in the main diagonal and $1's$ everywhere else. Or we can write it as, $A^2=(k-1)I+J$ where $I$ is the $n \times n$ identity matrix and $J$ is the $n \times n$ matrix of all $1's$. Also the matrix $A$ consists of only $1's$ and $0's$ with a main diagonal of all $0's$. Finally, we have $k+n-1=k^2$ AI: Let $n=2$, $k=1$, suppose $$A=\begin{bmatrix}-1 & 0 \\ 0 & 0\end{bmatrix}$$ then we have $$A^2 = \begin{bmatrix}1 & 0 \\ 0 & 0 \end{bmatrix}$$ As we can see, it is possible that the eigenvalue of $A$ is $-k$. Hence you are right unless there is other additional informations. Edit: Let the all one vector be $e$. Then we have $$A^2e=(k+(n-1))e=k^2e$$ Hence we have $$Ae=\pm k e$$ However, notice that $A$ is a nonnegative matrix, $Ae$ sums up all the columns and hence the result must be nonnegative. Hence $$Ae=|k|e$$ Since $A$ is symmetric, the eigenvalues are real, that is we need $\pm \sqrt{k-1}$ to be real, hence we have $k-1 \ge 0$, and $k$ is positive. Hence the eigenvalue is $k$ and not $-k$. Remark: The other eigenvectors for $A$ are $e_1-e_i$ where $i \in \{2, \ldots, n\}$.
H: How does summation work on big O notation In an article I'm reading (http://www.jmlr.org/papers/volume13/dekel12a/dekel12a.pdf) it says: "It is well-known that the optimal regret bound [...] is $O(\sqrt m)$" Then: "[in a network,] assuming that each node processes $m/k$ inputs, the expected regret per node is $O(\sqrt \frac{m}{k})$. Therefore, the total regret across all k nodes is $O(\sqrt km)$". The second part of this statement does not seem very obvious to me, can someone explain how they got this result? Thanks! AI: Formally, if you have $f\in\mathcal{O}(g)$, by definition $$\limsup_{n\rightarrow\infty}\left|\frac{f(n)}{g(n)}\right|<\infty\text{.}$$ You can trivially get $$\limsup_{n\rightarrow\infty}\left|\frac{\sum_{i=1}^kf(n)}{kg(n)}\right|<\infty$$ as the same limit, so adding $k$ copies of $f$ means you are now $\in\mathcal{O}(kg)$. Notably, this works even if $k$ itself varies. They're just summing expectations by linearity, $k\sqrt{\frac{m}{k}}=\sqrt{km}$.
H: Generalizing a Riemann sum by midpoints "Consider the function $f(x)=x$ on the interval $[0,a]$, where $a>0$. Let $P$ be an arbitrary partition of the interval $[0,a]$. Determine the Riemann sum associated with the midpoints $c_i$ of this partition." Assuming that we are partitioning this area into sub-intervals $\Delta x$ of equal length, then $$\Delta x = \frac{a-0}{n}=\frac{a}{n}$$ and the midpoint of each subinterval is $c_i=\frac{a}{2n}$. The Riemann sum is then $$S(P,f,c_i)=\frac{a}{n}\cdot \Bigg[f\Bigg(\frac{a}{2n}\Bigg)+f\Bigg(\frac{3a}{2n}\Bigg)+\cdots+f\Bigg(a-\frac{a}{2n}\Bigg)\Bigg].(1)$$ Is this $$\frac{a}{n}\sum_{k=1}^nf\Bigg[\Big(2k-1\Big)\cdot\Big(\frac{a}{2n}\Big)\Bigg]$$ the proper generalized form of $(1)$? AI: Yes, it is. You could also maybe replace $f(x)$: $$\frac an \sum_{k=1}^n (2k-1)\cdot\left(\frac{a}{2n}\right) \\=\frac{a^2}{n^2} \sum_{k=1}^n k -\frac{a^2}{2n^2}\sum_{k=1}^n1\\=\frac{a^2}{n}\cdot\frac{(n+1)}{2}-\frac{a^2}{2n}\\=\frac{a^2}{n} \left(\frac{n+1}{2}-\frac 12\right) \\=\frac{a^2}{2}$$
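A quick numerical check of the resulting value $a^2/2$ (my own addition; the values of $a$ and $n$ below are arbitrary):

```python
a, n = 3.0, 1000
dx = a / n
# midpoints are (2k - 1) * a / (2n) for k = 1..n, and f(x) = x
midpoint_sum = sum((2 * k - 1) * a / (2 * n) for k in range(1, n + 1)) * dx
print(midpoint_sum, a**2 / 2)  # both 4.5 (the midpoint sum is exact for f(x) = x, for every n)
```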
H: Span forming a basis Show that the vectors $v_1=(1,1,1)$, $v_2=(1,1,0)$, $v_3=(1,0,0)$, $v_4=(3,2,0)$ span $\mathbb{R}^3$ but do not form a basis for $\mathbb{R}^3$: I understand how to find different find the general solution through RREF using the augmented matrix, but I do not understand how the arbitrary vector $x=(x,y,z)$ can be written as $x=zv_1+(y-z)v_2+(x-y)v_3+0v_4$ AI: I think you're confused about how to "discover" the answer. I'll show that below. First of all, it should be obvious that $\{v_1,v_2,v_3\}$ span $\mathbb{R}^3$. The question is how to prove it. You need to prove that any $u=\begin{bmatrix}x\\y\\z\end{bmatrix}$ can be written in terms of $$v_1=\begin{bmatrix}1\\1\\1\end{bmatrix}, v_2=\begin{bmatrix}1\\1\\0\end{bmatrix}, v_3=\begin{bmatrix}1\\0\\0\end{bmatrix}.$$ Clearly you can now see that we should choose $zv_1$ as part of our answer, because that's the only way to make the third component of the vectors be equal. In maths notation, we want the coefficients $c_i$ such that $$\begin{bmatrix}x\\y\\z\end{bmatrix} = c_1\begin{bmatrix}1\\1\\1\end{bmatrix}+ c_2\begin{bmatrix}1\\1\\0\end{bmatrix}+ c_3\begin{bmatrix}1\\0\\0\end{bmatrix}.$$ In case you're unsure how to read this, this is actually three equations in the three unknowns $c_1, c_2$ and $c_3$, like so: $$x = c_1 \cdot 1 + c_2 \cdot 1 + c_3 \cdot 1$$ $$y = c_1 \cdot 1 + c_2 \cdot 1 + c_3 \cdot 0$$ $$z = c_1 \cdot 1 + c_2 \cdot 0 + c_3 \cdot 0$$ Now can you complete the task?
H: Is the function $x^4+2x^2+2$ limited on the interval $I=(-\infty,0]$ injective and what is it's inverse. What I tried: $$(y^2+1)^2 + 1=x$$ $$\pm\sqrt{\sqrt{x-1}-1}=f^{-1} $$ Is the correct answer positive square root or the negative. I think it's the negative since the interval that was for the domain is now for the range, but I am not sure if it really works this way. EDIT: The inverse of the limited function AI: Your intuition is correct. It should be the negative square root (due to the domain of the original function). Note that the inner square root is the positive square root too. (Can you see why?)
H: Density function - probability My question is - If I have this density: $$f_X(x) = \begin{cases} A, & 0 \le x \le 2 \\ B, & 2 < x \le 5 \\ 0, & \text{else} \end{cases}$$ and it is known that $E(X) = 3.05$, I need to find $P(0.05<X≤3|X>0.1)$. So how can I calculate A and B? Do I add them together to get 3.05, and then use the formula of Bayes' theorem? AI: Take a look at the information you have: $$\begin{cases} 2A+3B=1 \\ A\int_0^2x dx+B\int_2^5x dx=3.05 \\ \end{cases} $$ $$\begin{cases} 2A+3B=1 \\ 2A+10.5B=3.05 \\ \end{cases} $$ that means: $$\begin{cases} A=0.09 \\ B\approx 0.273 \\ \end{cases} $$ EDIT: rest of the solution. Sketch the graph of the density: it is a step function of height $A=0.09$ on $[0,2]$ and height $B\approx 0.273$ on $(2,5]$; as you can check, the total area is $1$ (up to rounding). Your probability is $$\mathbb{P}[0.05 <X \le 3\mid X>0.1]=\frac{\mathbb{P}[0.1 <X \le 3]}{\mathbb{P}[X>0.1]}=\frac{(2-0.1)\cdot 0.09+(3-2)\cdot 0.273}{(2-0.1)\cdot 0.09+(5-2)\cdot 0.273}\approx 0.448$$
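The same computation as a short Python sketch (my own addition): solve the $2\times 2$ linear system and recompute the conditional probability:

```python
import numpy as np

# 2A + 3B = 1        (total probability)
# 2A + 10.5B = 3.05  (mean)
M = np.array([[2.0, 3.0],
              [2.0, 10.5]])
rhs = np.array([1.0, 3.05])
A, B = np.linalg.solve(M, rhs)

num = (2 - 0.1) * A + (3 - 2) * B      # P(0.1 < X <= 3)
den = (2 - 0.1) * A + (5 - 2) * B      # P(X > 0.1)
print(A, B, num / den)                 # 0.09, 0.2733..., ~0.448
```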
H: Vertices and orthocentre You are given with a triangle ABC having orthocentre H. You know the coordinates of 2 vertices A and B and the ortho centre H. How would you proceed to find the third vertex ? What I've done : I assumed a triangle ABH and found the orthocentre for that triangle. The orthocentre comes out to be C. Is this a proper method or was I just lucky to get the answer ? Also if the method is correct, is there any shorter method to find the same either by formula or geometrically. AI: This is correct; the orthocentre of $ABH$ is always $C$. Indeed, we have $AC \perp BH$ (since $BH$ is an altitude) and $BC \perp AH$ (since $AH$ is an altitude). So the perpendiculars from $A$ to $BH$ and $B$ to $AH$ meet at $C$, and $C$ is the orthocentre of $ABH$.
H: Chebyshev's Inequality Produces Result with Negative Value I am trying to solve the following problem: The radius of a circle is a random variable X, with the mean value $\mu_x=10$, and variance $\sigma^2=5$. Compute a lower bound on the probability that the radius lies between 8 and 12 using Chebyshev's inequality. According to Chebyshev's inequality: $$P[|X-\mu_X| \le k] > 1-\frac{\sigma^2}{k^2} \tag{1}$$ This is equivalent to: $$P[\mu_X - k \le X \le \mu_X + k] > 1-\frac{\sigma^2}{k^2} \tag{2}$$ As we need to compute $P[8 \le X \le 12]$, therefore, comparing $P[\mu_X - k \le X \le \mu_X + k]$ and $P[8 \le X \le 12]$, $$\mu_X - k = 8 \tag{3}$$ $$\mu_X + k = 12 \tag{4}$$ As $\mu_X=10$, therefore, $k=2$. Using these values in (2) produces the following: $$P[8 \le X \le 12]> 1-\frac{\sigma^2}{k^2} = 1-\frac{5}{4} \tag{5}$$ $$P[8 \le X \le 12]> -\frac{1}{4} \tag{6}$$ This results in a negative value of the bound on the probability. What could be wrong? I would much appreciate any help with this. AI: As far as I can tell, nothing has gone wrong. Chebyshev's inequality doesn't tell you anything if what you're looking at is within one standard deviation of the mean -- and in this case, it is. A more illuminating example might be to consider what actually happens. Suppose that $$P(X=10-\sqrt{5})=\frac12=P(X=10+\sqrt{5})$$ The standard deviation of this data is, of course, $\sqrt{5}$, (with variance $5$), but $P[8\le X\le 12]=0.$ Of course, if the distribution were uniform or normal, then the probability will be greater, but this example demonstrates that Chebyshev, by itself, is absolutely useless in this scenario.
H: Calculating the volume of the mountain $(x^2+y^2+z-2)^2+z-1=0$ where $z\ge0$ So we can calculate the volume by $$\iiint_V1 dxdydz$$ using cylindrical coordinates that is equal to $$\iiint_{V'} r drd\theta dz$$ But I am not sure on how to extract the limits of the integral. AI: In cylindrical coordinates we have that $$(r^2+z-2)^2 + z - 1 = 0$$ Rearrange this to get $$(r^2+z-2)^2 = 1-z$$ then take square roots on both sides $$r^2 = 2-z \pm \sqrt{1-z}$$ This gives us the integral $$\int_0^{2\pi} \int_0^1 \int_{\sqrt{2-z - \sqrt{1-z}}}^{\sqrt{2-z + \sqrt{1-z}}}r\:dr\:dz\:d\theta$$ The bounds for $z$ come from the second equation. Since the left side is a square, $1-z$ must be nonnegative. Integrating gives us $$2\pi \int_0^1 \sqrt{1-z}\:dz = \frac{4\pi}{3}$$
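A numerical check of the $\frac{4\pi}{3}$ result (my addition): the cross-section at height $z$ is an annulus of area $\pi(r_{\text{out}}^2-r_{\text{in}}^2)=2\pi\sqrt{1-z}$, which can be integrated over $z\in[0,1]$ with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# area of the cross-section at height z: pi * (r_out^2 - r_in^2) = 2*pi*sqrt(1 - z)
volume, err = quad(lambda z: 2 * np.pi * np.sqrt(1 - z), 0, 1)
print(volume, 4 * np.pi / 3)  # both about 4.18879
```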
H: How to solve the equation $Ax^B+C = y$ I am doing a sensor calibration. For that I need to calculate $A, B, C$. The equation is of the format $$y = A x^B + C$$ For different values of the input $x$, the sensor will give me $y$. How can I calculate $A, B, C$? How many input/output values are required to get $A,B,C$? AI: You can eliminate $A$ and $C$ using three points $(x_0,y_0),(x_1,y_1),(x_2,y_2)$, by $$\frac{y_2-y_1}{y_1-y_0}=\frac{x_2^B-x_1^B}{x_1^B-x_0^B}.$$ This is a nonlinear equation in $B$ that will require a numerical solver; once $B$ is known, $A$ and $C$ follow from any two of the points, so three input/output pairs (with distinct $x_i$) are enough. If by chance you can choose the values of $x$, and adopt $x_1:=\sqrt{x_2x_0}$, you get $$\frac{x_2^B-\sqrt{x_2x_0}^B}{\sqrt{x_2x_0}^B-x_0^B}=\sqrt{\frac{x_2}{x_0}}^B$$ and you can solve for $B$ by logarithms.
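A sketch of the numerical-solver step in Python (my own addition; the data are synthetic, generated from assumed true values $A=2$, $B=1.7$, $C=5$, with $x_1=\sqrt{x_0x_2}$ chosen for convenience, and `brentq` is just one possible root finder):

```python
import numpy as np
from scipy.optimize import brentq

# synthetic calibration data from y = A*x^B + C with assumed A=2, B=1.7, C=5
x = np.array([1.0, 2.0, 4.0])          # note x1 = sqrt(x0 * x2)
y = 2.0 * x**1.7 + 5.0

ratio = (y[2] - y[1]) / (y[1] - y[0])

def h(B):
    # the nonlinear equation in B obtained after eliminating A and C
    return (x[2]**B - x[1]**B) / (x[1]**B - x[0]**B) - ratio

B = brentq(h, 0.1, 10.0)               # numerical root finding for B
A = (y[2] - y[1]) / (x[2]**B - x[1]**B)
C = y[0] - A * x[0]**B
print(B, A, C)                          # ~1.7, ~2.0, ~5.0
```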
H: Every PID is integrally closed I assumed a root $x$ of a monic polynomial $p(t)\in R[t]$ where $R$ is a PID. I need to show that $x$ lies in $R$ I assumed $p(t)=a_{0}+...+a_{n-1}t^{n-1}+t^{n}$ Then we have $a_{0}=-x^{n}-...-a_{1}x$ As $a_{0}\in R$ Therefore, RHS belongs to the PID $R$ i.e. $(-x^{n-1}-...-a_{1})x\in R$ I can't see a way to proceed further. How do I utilize that every ideal of R is principal? AI: Write $x=\frac{p}{q}$ where $q\neq 0$ and $\gcd (p,q)=1$. You can do this since $R$ is a $UFD$. Now substituting this you get $$a_0+a_1\frac{p}{q}+a_2\left (\frac{p}{q} \right )^2+\dots +\left (\frac{p}{q} \right )^n=0$$Clearing the denominator gives you $$a_0q^n +a_1pq^{n-1}+\dots +p^n=0$$$$\implies p^n=-\left (a_0q^n +a_1pq^{n-1}+\dots +a_{n-1}p^{n-1}q\right )$$ This shows $q| RHS \implies q| LHS $ i.e. $q|p^n$ but $\gcd(p^n,q)=1$ This forces $q$ to be a unit in $R$ and hence $x\in R$. Note: The above proof works for any $UFD$.
H: Existence of a null sequence that dominates an infinite family of null sequences Let us call $a =(a_n)_{n \geq 0}$ a $\textit{null sequence }$ if $a_n \geq 0$ for all $n \geq 0$ and $\lim_{n \rightarrow \infty} a_n =0$. For two null sequences $a$ and $b$, we say that $a$ dominates $b$ if there exists $n_0$ such that $a_n \geq b_n$ for all $n \geq n_0$. Given an infinite family of null sequences $(a^{(i)})_{i \in \mathbb{N}}$, is there a null sequence $a$ that dominates $a^{(i)}$ for all $i \in \mathbb{N}$? AI: Yes. Choose positive integers $N_1\lt N_2\lt N_3\lt\cdots$ so that $$\max\left(a^{(1)}_n,a^{(2)}_n,a^{(3)}_n,\dots,a^{(i)}_n\right)\le\frac1i\text{ for all }n\ge N_i.$$ Define a sequence $b=(b_n)$ so that $b_n=\frac1i$ when $N_i\le n\lt N_{i+1}$. You can easily verify that $b$ is a null sequence which dominates each of the given null sequences.
H: Order of convergence of derivative limit What is the order of convergence of this expression?: $\frac{f(t+h)-f(t)}{h}-f'(t)$ (How fast does the limit of the definition of derivative approach the derivative compared to $h$ asymptotically?) Can we say something in general? What if $f\in C^p$ or $f\in C^\infty$? Can someone point me in the right direction? It looks like for the polynomials, exponential and trigonometric functions the order is $h$ (just by looking at the proof of the derivative). AI: If $f$ is smooth enough, the difference is of order $n-1$, where $n\geq 2$ is the smallest index with $f^{(n)}(t)\neq 0$ (i.e. $f^{(k)}(t)=0$ for all $1<k<n$). You can see it by approximating the function using its Taylor polynomial of order $n$: $$f(t+h)-f(t)=f'(t)h+\frac{f^{(n)}(t)}{n!}h^n+o(h^n)$$ $$\frac{f(t+h)-f(t)}{h}-f'(t)=\frac{f^{(n)}(t)}{n!}h^{n-1}+\frac{o(h^n)}{h}=\frac{f^{(n)}(t)}{n!}h^{n-1}+o(h^{n-1})$$ In the generic case $f''(t)\neq 0$ this gives order $1$: the error behaves like $\frac{f''(t)}{2}h$, which matches what you observed for polynomials, exponentials and trigonometric functions.
H: Listing all the prime ideals of R[x] How to list down all the prime ideals of $\mathbb R[x]$ ($\mathbb R$ = the real numbers)? I know that $\mathbb R[x]$ is a PID, thus the ideal generated by every irreducible polynomial in $\mathbb R[x]$ is a prime ideal. But can we write it in set notation, just like we can for $\mathbb C[x]$ ($\mathbb C$ = the complex numbers)? AI: Let $f(X)$ be an irreducible polynomial in $\mathbb R[X]$. We shall show $\deg f(X)\leq 2$. Suppose $f$ is not linear; since $f$ is irreducible, it then has no real root. By the Fundamental Theorem of Algebra, $f$ has a root $\alpha \in \mathbb C$. Since $\alpha \notin \mathbb R$, $\bar \alpha$ is also a root of $f$. This shows $(X-\alpha)(X-\bar \alpha)$ divides $f(X)$ in $\mathbb C[X]$. But $(X-\alpha)(X-\bar \alpha )$ is a real polynomial. So we get $(X-\alpha)(X-\bar \alpha)$ divides $f(X)$ in $\mathbb R[X]$. Thus $$f=\mu(X-\alpha)(X-\bar \alpha)$$ where $\mu$ is a unit in $\mathbb R[X]$ i.e. $\mu \in \mathbb R^*$. Thus the irreducible polynomials are precisely the linear polynomials and the quadratics with no real roots. Since in a PID the nonzero prime ideals are exactly those generated by irreducible elements, the prime ideals of $\mathbb R[X]$ can be written in set notation as $$\{(0)\}\;\cup\;\{(X-a): a\in\mathbb R\}\;\cup\;\{(X^2+bX+c): b,c\in\mathbb R,\ b^2-4c<0\}.$$
H: Hypothesis testing $p$ value I have the following question: From a collection of objects numbered $\{1,2,....,K\}$, $20$ objects are picked and replaced. We want to test $H_0: K=100000$ against $H_1: K < 100000$, with the highest ranking number $M$ of our sample as test statistic. We find for our realisation of $M$ the value $81115$. What is the $P$ value? The correct answer is: $0.015$. I know that the definition of the $p$-value is: The $p$-value is the probability of getting the observed value of the test statistic or a value with even greater evidence against $H_0$, if the hypothesis is actually true, or in formula form $P (T \ge t)$. I have the following questions: What are $T$ and $t$? I think the distribution is uniform, but how do I calculate the $p$-value? Can I get feedback? AI: HINT: for a uniform distribution between the values $a$ and $b$, the cdf at $k \in [a,b]$ is $(k-a)/(b-a)$. Here $T=M$ is the maximum of the $20$ draws, and under $H_0$ each draw is uniform on $\{1,\dots,100000\}$, so $P(M\le m\mid K=100000)=(m/100000)^{20}$. Note that small values of $M$ (not large ones) are the direction pointing towards $H_1$, so the generic $P(T\ge t)$ becomes $P(M\le m)$ here, and the $p$-value is $P(M\le 81115\mid K=100000)=0.81115^{20}\approx 0.015$.
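A quick check of the $0.015$ value in Python (my own addition), both exactly and by simulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# p-value: P(M <= 81115 | K = 100000) = (81115 / 100000)^20
p_exact = (81115 / 100000) ** 20

# Monte Carlo check: maxima of 20 draws (with replacement) from {1, ..., 100000} under H0
n_sim = 200_000
maxima = rng.integers(1, 100_001, size=(n_sim, 20)).max(axis=1)
p_sim = np.mean(maxima <= 81115)

print(p_exact, p_sim)  # both about 0.015
```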
H: covering a rectangle with an ellipse of minimal area Given a rectangle $B$ I would like to find an ellipse $C$ such that $B$ is contained inside $C$. Of course this is always possible, but I was wondering can we do this in a way that the area of $C$ is controlled? More specifically, can we prove that there exists a constant $D>0$ such that given any $B$ there exists $C$ containing $B$ with $$ \frac{Area (C)}{Area(B)} < D? $$ AI: The smallest ellipse containing $B$ touches the rectangle, and by symmetry its axes are parallel to the sides of $B$. Scale the ellipse by $\alpha$ in the direction of an axis so that it becomes a circle $C'$. The rectangle remains a rectangle $B'$ with area scaled by $\alpha$ as well. Now, the largest area of a rectangle in a circle is that of a square, namely $2r^2$. So $$\frac{Area(B)}{Area(C)}=\frac{\alpha Area(B')}{\alpha Area(C')}\le\frac{2}{\pi},$$ i.e. $\frac{Area(C)}{Area(B)}\ge\frac{\pi}{2}$ for every ellipse $C$ containing $B$, so no constant smaller than $\frac{\pi}{2}$ can work. Conversely, if $\alpha$ is chosen to be the ratio of the sides of the rectangle, then $B'$ is a square, $C$ (the preimage of the circle circumscribing $B'$) is an ellipse with ratio of major/minor axes equal to $\alpha$, and $$\frac{Area(C)}{Area(B)}=\frac{\pi}{2}.$$ Hence any $D>\frac{\pi}{2}$ works.
H: Group operation used in decomposition of Fundamental Theorem of Finite Abelian Groups As I was taught, any finite abelian group $G$ can be represented up to isomorphism by a direct product of cyclic integer groups of prime power (there is some canonical fuss there, but that's only by convention). So then, one can write: $$G\sim \mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}\times\dots\times\mathbb{Z}_{p_n}$$ Now, on the Wikipedia page about abelian groups, I have two things confusing me. The first is that they sometimes write $\mathbb{Z}_{p_1}\times\mathbb{Z}_{p_2}$, and other times $\mathbb{Z}_{p_1}\oplus\mathbb{Z}_{p_2}$. I don't know whether that's a matter of notation, or that it has actual meaning (e.g. having addition vs. multiplication in the end result or in the individual factors). I was taught using $\times$ only. Secondly, it is stated that: For another example, every abelian group of order $8$ is isomorphic to either $\mathbb {Z} _{8}$ (the integers $0$ to $7$ under addition modulo $8$), $\mathbb {Z}_{4}\oplus \mathbb {Z} _{2}$ (the odd integers $1$ to $15$ under multiplication modulo $16$), or $\mathbb {Z} _{2}\oplus \mathbb {Z}_{2}\oplus \mathbb {Z} _{2}$. I take it that whatever is in parentheses is an example of such a group. However, how do I know which group operation is used in the factors of these decompositions, and in the end result? Neither the page nor my Abstract Algebra textbook mentions this explicitly, as far as I can tell. However, there is a big difference between $\langle\mathbb{Z}_p, +\rangle$ and $\langle\mathbb{Z}_p, \cdot\rangle$: For one, we don't actually consider the $0$ in $\langle\mathbb{Z}_p, \cdot\rangle$ to be part of the group, making it (or, if you will, the multiplicative group inside it) of order $p-1$ instead of $p$; there can't be a bijection between two finite groups of unequal order, so this would change the isomorphism the theorem says there is. For two, all $\langle\mathbb{Z}_n, +\rangle$, regardless of $n$ (generated by $1$) are cyclic. This is the case for $\langle\mathbb{Z}_p, \cdot\rangle$ if $p$ is prime, but canonically, we have structures like $\mathbb{Z}_4$ appear in the decomposition, which is not cyclic under multiplication. This also affects any isomorphism. Which operations are used for the factors and in the result, and am I correct that they matter after all? AI: The direct product and the direct sum always coincide on finite abelian groups, so you can use either. About your second question: the examples inside the brackets actually tell you what the operation is. For $\mathbb{Z}_8$ it says "addition modulo 8". Indeed, you can verify that $1$ is a generator (as $n=1+1+...+1$, $n$ times) and therefore that group is cyclic of order $8$. If you meant "how do I know in the general case", when you are given an abelian group you are given elements and a group operation. Every subgroup that appears in the decomposition inherits that group operation, so there is no ambiguity. Many cyclic groups can be realised as additive groups $(\mathbb{Z}_p, +)$ and also as multiplicative groups $(\mathbb{Z}_q^\times, \cdot)$. This is not a contradiction, as the group operation does not distinguish between $+$ and $\cdot$ when you consider the factors just up to isomorphism.
H: Let $Y=X^2$. Find the distribution function of $Y$. The density function is $f(x)=xe^{-x}$ if $x\geq 0$ and $0$ otherwise. Let X be a continuous random variable with density function $f(x)=xe^{-x}$ if $x\geq 0$ and $0$ otherwise. Then the distibution function of X is $F_X(x)=1-(x+1)e^{-x}$. Let $Y = X^2$. Find the cumulative distribution function of $Y$. My result is $F_Y (z)=P(Y≤z)=P(-\sqrt z≤X≤\sqrt z)=F_X (\sqrt z)-F_X (-\sqrt z)=((1-√z) e^{2\sqrt z}-√z-1) e^{-\sqrt z}$. But $((1-√z) e^{2\sqrt z}-√z-1) e^{-\sqrt z}$ takes negativ values. Isn't that a problem? AI: The distribution function $F_X(x)$ is zero for $x<0$ if its density is zero for $x<0$. So $F_X(-\sqrt{z})=0$ for $z\geq 0$
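Putting the two facts together gives $F_Y(z)=F_X(\sqrt z)=1-(1+\sqrt z)e^{-\sqrt z}$ for $z\ge 0$ (and $0$ for $z<0$), which is indeed nonnegative. A quick Monte Carlo check (my own addition; a density of $xe^{-x}$ on $[0,\infty)$ is the Gamma$(2,1)$ density):

```python
import numpy as np

rng = np.random.default_rng(2)

# X has density x * exp(-x) on [0, inf), i.e. X ~ Gamma(shape=2, scale=1)
X = rng.gamma(shape=2.0, scale=1.0, size=500_000)
Y = X**2

for z in (0.5, 1.0, 4.0, 9.0):
    empirical = np.mean(Y <= z)
    theoretical = 1 - (1 + np.sqrt(z)) * np.exp(-np.sqrt(z))
    print(z, empirical, theoretical)   # the two columns should agree closely
```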
H: Help with an integral involving complex number I have an integral expression involving complex variable $z(x)=|x|^3+i z_i(x)$: $$\int_{-\infty}^\infty e ^{z}dx=2 \text{Re}\int_0^\infty e^{z^+}dx, $$ where $z^+=x^3+i z_i$ and $z(-x)=\overline{z(x)}$. Note that $z$ is not analytic in the complex $x$ plane. The usefulness of the expression is that it converts the integral of a non-analytic function to that of an analytic one in the complex $x$ plane. I would appreciate any help in explaining the equality. AI: It's a special case of$$\int_{-\infty}^\infty f(x)dx=\int_0^\infty f(x)dx+\int_{-\infty}^0 f(x)dx=\int_0^\infty(f(x)+f(-x))dx,$$so in particular$$\int_{-\infty}^\infty g(|x|)dx=2\int_0^\infty g(x)dx.$$The real part of$$\int_{-\infty}^\infty\exp(|x|^3+iz_i)dx=\int_{-\infty}^\infty\exp|x|^3\cos z_idx+i\int_{-\infty}^\infty\exp|x|^3\sin z_idx.$$transforms as required (although it diverges). The imaginary part does too, because $z_i$ is odd, so$$\int_{-\infty}^\infty\exp|x|^3\sin z_idx=\int_0^\infty\exp(x^3)\underbrace{(\sin z_i(x)+\sin z_i(-x))}_{0}dx=0.$$
H: For which $a$ the solution is defined on the interval For which $a$ is the solution of $$\begin{cases} \frac{z'}{z^2}= e^{-x^2} \\ z(0)=a \end{cases}$$ defined on the interval $[0,\infty)$? My try: $$\frac{z'}{z^2}= e^{-x^2}$$ $$\int \frac{z'}{z^2} \,dx=\int e^{-x^2}\,dx$$ $$-\frac{1}{z(x)}+C=\int e^{-x^2}\,dx$$ $$z(x)=\frac{1}{\frac 1a -\int e^{-x^2}\,dx}$$ I think this has a solution wherever $z'$ exists, and $z'(x)=\frac{e^{-x^2}}{(\frac 1a - \int e^{-x^2}\,dx)^2}$, so it exists as long as $\frac 1a - \int e^{-x^2}\,dx$ does not reach $0$. However, I do not know how this interacts with the interval given in the task. Can you help me finish this? AI: We have $\int e^{-x^2}dx=\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)+\text{constant}$ (https://en.wikipedia.org/wiki/Error_function), and thus, using $z(0)=a$, we can write $z(x)=\frac{1}{\frac{1}{a}-\frac{\sqrt{\pi}}{2}\operatorname{erf}(x)}=-\frac{2a}{a\sqrt{\pi}\operatorname{erf}(x)-2}$. This is defined at a point $x$ exactly when $a\sqrt{\pi}\operatorname{erf}(x)-2\neq0$, i.e. as long as $\operatorname{erf}(x)$ never reaches the value $\frac{2}{a\sqrt{\pi}}$. Since $|\operatorname{erf}(x)|<1$ for all $x$, the solution is in fact defined on the whole of $(-\infty,\infty)$ whenever $|a|\le\frac{2}{\sqrt{\pi}}$ (play around with values of $a$ here https://www.desmos.com/calculator/wkqyixrgxc). For the question as asked: using the facts that $\operatorname{erf}(0)=0$ and $\operatorname{erf}(\infty)=1$, on $[0,\infty)$ the quantity $\operatorname{erf}(x)$ takes exactly the values in $[0,1)$, so the denominator vanishes somewhere on $[0,\infty)$ precisely when $0<\frac{2}{a\sqrt{\pi}}<1$, i.e. when $a>\frac{2}{\sqrt{\pi}}$. Hence the solution is defined on $[0,\infty)$ exactly for $a<0$ or $0<a\le\frac{2}{\sqrt{\pi}}$; for such $a$ it is increasing, with $a\le z(x)<\frac{2a}{2-a\sqrt{\pi}}$ whenever $a<\frac{2}{\sqrt{\pi}}$.
H: Improper integrals where both limits are infinite For an improper integral of the form $$\int_{-\infty}^{\infty}f(x) \, dx,$$ I'm told that I must set $$\int_{-\infty}^{\infty}f(x) \, dx =\lim_{c \to \infty} \int_k^cf(x) \, dx+\lim_{c \to -\infty}\int_c^kf(x) \, dx.$$ Why can I not set $$\int_{-\infty}^{\infty}f(x) \, dx =\lim_{a\to \infty}\int_{-a}^a f(x) \, dx \, ?$$ AI: The reason for the definition is: It could happen that $$ \lim_{a\to \infty}\int_{-a}^a f(x) \, dx \tag1$$ exists but one or both of $$ \lim_{c \to \infty} \int_k^cf(x)\; dx,\qquad\lim_{c \to -\infty}\int_c^kf(x) \, dx. \tag2$$ do not exist. The situation where both of $(2)$ exist is called "convergence" of the improper integral. The situation where $(1)$ exists is called the "principal value" of the integral. The limit in $(1)$ may sometimes be written $$ \text{P.V. }\int_{-\infty}^\infty f(x)\;dx . $$ The principal value can have bad properties that a convergent integral cannot. examples $$ \text{P.V. }\int_{-\infty}^\infty x \; dx = 0 $$ change variables $y=x+1$ to get $$ \text{P.V. }\int_{-\infty}^\infty (y+1) \; dy = \infty $$
H: Complex analysis residue calculus Would you know how to calculate: $\int_{0}^{\pi} \frac{1}{6+\sin^2(z)} dz$? Thanks. AI: An hint: Write $\sin z= \frac 1 2 (w-\frac{1}{w})$ and note that: $$2\int_{0}^{\pi} \frac{1}{6+\sin^2(z)} dz=\int_{0}^{2\pi} \frac{1}{6+\sin^2(z)} dz=\int_{\gamma}\frac{1}{6+(\frac 1 2 (w-\frac{1}{w}))^2}\frac{1}{iw}dw$$ where $\gamma(t)=e^{it}$ for $t \in [0,2\pi]$. Use finally the Residue Theorem to compute the last one.
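Carrying the suggested residue computation through should give $\pi/\sqrt{42}$ for the integral (my own claim, not stated in the hint above); here is a quick numerical cross-check:

```python
import numpy as np
from scipy.integrate import quad

value, err = quad(lambda z: 1.0 / (6.0 + np.sin(z)**2), 0.0, np.pi)
print(value, np.pi / np.sqrt(42))   # both about 0.4848
```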
H: Change of variable in integral. Why is it performed this way in the below example? On a book, I read, for a certain $y\in\mathbb{R}$: given the integral $\displaystyle\int\limits_{-y}^{+\infty}f(x)dx$, performing the substitution $x=-u$ yields: $$\displaystyle\int\limits_{-\infty}^{y}f(-u)du$$ As to the bounds of integration, substitution is ok to me. What sounds strange is the differential. That is, I would expect:$$\frac{du}{dx}=-1$$ hence $$dx=-du$$ hence, in conclusion $$\displaystyle\int\limits_{-\infty}^{y}-f(-u)du$$ and not $$\displaystyle\int\limits_{-\infty}^{y}f(-u)du$$ Could you please clarify this doubt? I'm getting crazy since I do not know if I am making some silly mistake or if I am not realizing some basic fact or if book is wrong. AI: $$\int_{-y}^\infty f(x)dx=\int_\color{blue}y^\color{red}{-\infty}-f(-x)dx\\ =\int_\color{red}{-\infty}^\color{blue}yf(-x)dx$$
H: Prove there exist two distinct points $\eta,\xi \in (a,b)$ such that $f'(\eta)f'(\xi)=\left[\frac{f(b)-f(a)}{b-a}\right]^2$. Suppose $f(x)$ is continuos over $[a,b]$ and differentiable over $(a,b)$. Prove there exist two distinct points $\eta,\xi \in (a,b)$ such that $f'(\eta)f'(\xi)=\left[\dfrac{f(b)-f(a)}{b-a}\right]^2$. For this purpose, if we can prove that, there exists $c\in(a,b)$ such that $$\frac{f(c)-f(a)}{c-a}\cdot\frac{f(b)-f(c)}{b-c}=\left[\frac{f(b)-f(a)}{b-a}\right]^2,\tag{*}$$then applying Lagrange MVT over $(a,c)$ and $(c,b)$ respectively, the conclusion is followed. But $(*)$ seems not to hold necessarily. AI: Construct function $$ g(x) = [f(x) - f(a)]^2 - (x-a)^2 \left[\frac{f(b)- f(a)}{b-a}\right]^2, $$ then we have $g(a) = g(b) = 0$. Hence, from Lagrange's MVT there exists $\xi \in (a, b)$ such that $g'(\xi) = 0$, i.e., $$ [f(\xi) - f(a)]f'(\xi) - (\xi - a) \left[\frac{f(b)- f(a)}{b-a}\right]^2 = 0. $$ Therefore, $$ \frac{f(\xi) - f(a)}{\xi - a} \cdot f'(\xi) - \left[\frac{f(b)- f(a)}{b-a}\right]^2 = 0. $$ Again, applying Lagrange's MVT, we can find $\eta \in (a, \xi)$ such that $$ \frac{f(\xi) - f(a)}{\xi - a} = f'(\eta), $$ which finishes the proof. Remark Following the same derivation and using mathematical induction, one can easily generalize the conclusion to any $n \geq 1$: There exist $n$ distinct points $\xi_1, \dots, \xi_n \in (a, b)$ such that $$ \prod_{i = 1}^n f'(\xi_i) = \left[\frac{f(b) - f(a)}{b - a}\right]^n. $$
H: Solve $\sin(x) = \cos(x)$ where $0°\leq x\leq 450°$ How many solutions are there: $\sin(x) = \cos(x)$ where $0°\leq x\leq 450°$? My solution: transform the equation to $\tan(x)=1$ , using the unit circle I see 3 solutions: $x_1 = π/4$ , $x_2 = 5π/4$, $x_3 = 9π/4$. The maths book says there are 7 solutions, where is my mistake? AI: You are correct, there are only $3$ solutions to the equation $\sin(x)=\cos(x)$ in the interval $[0,\frac{5}{2}\pi]$. Perhaps you might've missed something else in the question? A graphical approach shows that there are $3$ solutions at $\frac{\pi}{4},\frac{7\pi}{4}$ and $\frac{9\pi}{4}$.
H: Confusion in definition of union of Indexed family of sets Given $F$ = Family of Indexed sets and $A_i$ are those sets Given definition is : $ \cup F = ( x | \exists A (A \in F \land x \in A ) $ Why this cannot be written as $ \cup F = ( x | \exists A (A \in F \rightarrow x \in A ) $ As latter implies that there exists at least one set in $F$ which has $x$ in it AI: The condition $\exists A[A\in F\to x\in A]$ is satisfied for every $x$. Just take some fixed set $A_0$ that satisfies $A_0\notin F$ (such a set always exists). Then for every $x$ the following statement is vacuously true:$$A_0\in F\to x\in A_0$$ To make things more clear observe that the condition can also be written as:$$A_0\notin F\vee x\in A_0$$ edit: Let it be that $F$ is collection of sets that does not contain every set. Further let it be that: $$\mathcal C:=\{x\mid\exists A[A\in F\to x\in A]\}$$ It is our aim to prove that $\mathcal C$ is the collection of all sets. For this let it be that $y$ is some arbitrary set. It is enough to prove that this arbitrary set satisfies $y\in\mathcal C$. Proof: Proving that $y\in\mathcal C$ is exactly the same thing as showing that $\exists A[A\in F\to y\in A]$ is a true statement. When is $\exists A[A\in F\to y\in A]$ a true statement? If and only if a set $A_0$ can be found such that $A_0\in F\to y\in A_0$ is a true statement. Can we find such a set? Yes! Just take a set $A_0$ that satisfies $A_0\notin F$. Then the statement $A_0\in F$ is false so that the statement $A_0\in F\to y\in A_0$ is true. Proved is now that $y\in\mathcal C$ and we are ready. So $\mathcal C$ is the collection of all sets. This shows that $\mathcal C$ is not the same as the $\{x\mid\exists A[A\in F\text{ and } x\in A\}$.
H: $\sqrt{a+b} (\sqrt{3a-b}+\sqrt{3b-a})\leq4\sqrt{ab}$ I was training for upcoming Olympiads, working on inequalities, and the following inequality came up: $$\sqrt{a+b} (\sqrt{3a-b}+\sqrt{3b-a})\leq4\sqrt{ab}$$ with the obvious delimitations $3b\geq a;\: 3a\geq b.$ I've been pondering the question for quite a while, and tried the using CS among others but didn't find the solution, which, given the level of sophistication the problem should have, surprises me. Any help would be appreciated. AI: Presumably we also have the restriction $a,b\ge 0$. With that assumption we can proceed as follows . . . If $a+b=0$, then $a=b=0$, and for that case, the inequality clearly holds. So assume $a+b > 0$. Since the inequality is homogeneous, the truth of the inequality remains the same if $a,b$ are scaled by an arbitrary positive constant, hence without loss of generality, we can assume $a+b=1$. Replacing $b$ by $1-a$, it remains to prove $$ \sqrt{4a-1}+\sqrt{3-4a}\le 4\sqrt{a(1-a)} \qquad\qquad\;\, $$ for all $a\in \left[{\large{\frac{1}{4}}},{\large{\frac{3}{4}}}\right]$. From here it's just routine algebra . . . \begin{align*} & \sqrt{4a-1}+\sqrt{3-4a}\,\le 4\sqrt{a(1-a)}\\[4pt] \iff\;& \left(\sqrt{4a-1}+\sqrt{3-4a}\right)^2\le \left(4\sqrt{a(1-a)}\right)^2\\[4pt] \iff\;& 2+2\sqrt{(4a-1)(3-4a)}\,\le -16a^2+16a\\[4pt] \iff\;& \sqrt{(4a-1)(3-4a)}\,\le -8a^2+8a-1\\[4pt] \iff\;& (4a-1)(3-4a)\le \left(-8a^2+8a-1\right)^2\\[4pt] \iff\;& -16a^2+16a-3\le 64a^4-128a^3+80a^2-16a+1\\[4pt] \iff\;& 64a^4-128a^3+96a^2-32a+4\ge 0\\[4pt] \iff\;& 16a^4-32a^3+24a^2-8a+1\ge 0\\[4pt] \iff\;& (2a-1)^4\ge 0\\[4pt] \end{align*} which is true. Note:$\;$For the reverse implications, we need to have $-16a^2+16a\ge 0$ and $-8a^2+8a-1\ge 0$, both of which hold since $a\in \left[{\large{\frac{1}{4}}},{\large{\frac{3}{4}}}\right]$.
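Before (or after) working through the algebra it is reassuring to scan the inequality numerically over the admissible region; a short Python sketch of my own (sample size and range are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.uniform(0, 10, size=1_000_000)
b = rng.uniform(0, 10, size=1_000_000)

# keep only admissible pairs: 3a >= b and 3b >= a (a, b >= 0 by construction)
mask = (3 * a >= b) & (3 * b >= a)
a, b = a[mask], b[mask]

lhs = np.sqrt(a + b) * (np.sqrt(3 * a - b) + np.sqrt(3 * b - a))
rhs = 4 * np.sqrt(a * b)
print(np.max(lhs - rhs))   # <= 0 up to floating-point rounding (equality holds when a = b)
```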
H: A double integral question which I made up to test my understanding So I wish to find the value of the integral $\int_{C}f$, where $f = \sin(xy)$ and $C = ([-1,1] \times[-1,1]) \backslash \{(x,y):\|(x,y)\|<1 \}$. So $C$ is basically the closed unit cube minus the unit open ball. So the standard trick is to note that $C \subset A = [-1,1]\times [-1,1]$ and $f$ is certainly a continuous function on $A$. So certainly $\int_{A}f$ exists. Further the boundary of $C$ has measure zero. Therefore $\int_{C}f$ exists. Now if $\psi$ is the characteristic function of $C$, then \begin{equation} \int_{C}f = \int_{A}f\psi \end{equation} Now invoking Fubini's theorem, we get that $\int_{A} f \psi = \int_{-1}^{1}(\int_{-1}^1f\psi \ dy) \ dx$. Now the first integral boils down to the following \begin{equation} \int_{-1}^{1}f\psi \ dy = \int_{-1}^{1}\sin(xy)\psi\ dy = \int_{-1}^{-\sqrt{1-x^2}}\sin(xy)\ dy+ \int_{\sqrt{1-x^2}}^{1}\sin(xy)\ dx \end{equation} Now \begin{equation} \int_{\sqrt{1-x^2}}^{1}\sin(xy)\ dx = \frac{-\cos(xy)}{x} \Bigg|_{\sqrt{1-x^2}}^{1} = \frac{\cos(x\sqrt{1-x^2})}{x} - \frac{\cos(x)}{x} \end{equation} Similarly \begin{equation} \int_{-1}^{-\sqrt{1-x^2}}\sin(xy)\ dy = \frac{\cos(x\sqrt{1-x^2})}{x} - \frac{\cos(x)}{x} \end{equation} So \begin{equation} \int_{-1}^{1}f\psi \ dy = 2\left( \frac{\cos(x\sqrt{1-x^2})}{x} - \frac{\cos(x)}{x} \right) \end{equation} But how do I calculate \begin{equation} \int_{-1}^{1} \frac{\cos(x\sqrt{1-x^2})}{x} - \frac{\cos(x)}{x} \end{equation} If my calculations are right, this last integral is not defined. Where have I gone wrong? AI: For $x=0$ your $\int_{-1}^{-\sqrt{1-x^2}}\cdots+\int^{1}_{\sqrt{1-x^2}}\cdots$ part is just $0$. Unsurprisingly, it is also the case that $\frac{\cos(x\sqrt{1-x^2})}x-\frac{\cos x}x$ extends continuously to a function such that $g(0)=0$.
H: if differentiable and derivative is bound then total variation is bounded I am having problem with this question (from Erhan Cinlar's Probability and Stochastic Chapter 1 Section 5 Question 24b): Let $\mathscr{A}$ be a collection of disjoint intervals of the form $(,]$ whose union is $(s,t]$. Let $V_f(s,t)=\sup\limits_{\mathscr{A}}\sum\limits_{(u,v]\in\mathscr{A}}|f(v)-f(u)|$. If $f$ is differentiable and its derivative is bounded by $b$ on $[s,t]$, then $V_f(s,t)\leq(t-s)\cdot b$. My attempt: $\forall \mathscr{A}$ By Triangle Inequality, $\sum\limits_{(u,v]\in\mathscr{A}}(f(v)-f(u))\leq\sum\limits_{(u,v]\in\mathscr{A}}|f(v)-f(u)|$ Thus $\forall \mathscr{A}, f(t)-f(s)\leq\sum\limits_{(u,v]\in\mathscr{A}}|f(v)-f(u)|$, since union of intervals in $\mathscr{A}$ is $(s,t]$ Thus $f(t)-f(s)\leq V_f(s,t)$, by definition of supremum $\implies$ $\frac{f(t)-f(s)}{t-s}\leq\frac{V_f(s,t)}{t-s}$, since $t>s$ $\implies$ $f'(c)\leq \frac{V_f(s,t)}{t-s}$, By Mean Value theorem, $\exists c\in(s,t), f'(c)=\frac{f(t)-f(s)}{t-s}$ But I am unable to use the derivative is bounded since the inequality seems to be in the wrong direction Can I have a hint please? AI: $$\sum_{(u,v]\in \mathscr A} |f(u)-f(v)|=\sum_{(u,v]\in \mathscr A}|f'(c_{u,v})||u-v|$$$$\leq \sum_{(u,v]\in \mathscr A}M |u-v|\leq M(t-s).$$ Here, $|f'|\leq M$. Note that the intervals in $\mathscr A$ are disjoint and contained in $(s,t]$. So sum of lengths of the intervals in $\mathscr A$ is less or equals to length of $(s,t]$.
H: Proving $(A \cap B) \cup (A - B) = A$ I think I have figured out this proof, but was hoping someone could verify it. \begin{align*} x \in (A \cap B) \cup (A - B) & \iff x \in A \cap B \land x \in A - B \\ & \iff (x \in A \land x \in B) \lor(x \in A \land x \not \in B) \\ & \iff x \in A \land (x \in B \lor x \not \in B) \\ & \iff x \in A. \end{align*} The first line is the definition of union. The second is the definition of interection and set difference. The third uses the rule from propositional logic that $p \wedge (q \lor r) \equiv (p \wedge q) \lor (p \wedge r)$. Finally, $p \lor \neg p$ is a tautology that is always true, and $p \wedge T \equiv p$. AI: The retranscript from set operations to propositions is immediate: $$(a\land b)\lor(a\land\lnot b)$$ and this can be rewritten $$a\land(b\lor\lnot b)=a.$$ You can also use a "membership" table, $$\begin{array}{|c|c|c|c|c|} A&B&A\cap B&A\setminus B&(A\cap B)\cup(A\setminus B) \\\hline \in&\in&\in&\notin&\in \\\in&\notin&\notin&\in&\in \\\notin&\in&\notin&\notin&\notin \\\notin&\notin&\notin&\notin&\notin \\\hline \end{array}$$
H: Explicit formulas for natural (Hessenberg) and ordinal sum: Do those work? As is known, ordinal numbers have a natural mapping into the surreal numbers of the form $$f(\alpha) = \{f(\beta):\beta\in\alpha\mid\}$$ Moreover, surreal addition of those numbers corresponds to the natural (Hessenberg) addition of ordinal numbers. Now the sum of surreal numbers has an explicit recursive formula, while I haven't seen such a formula for the natural sum. Therefore I got the idea to simply translate the surreal number formula back to set theory. The most direct translation would be $$\alpha\oplus \beta = \{x\oplus\beta:x\in\alpha\}\cup\{\alpha\oplus x:x\in\beta\}$$ but that is easily checked not to work, as we would get e.g. $1\oplus 1 = \{1\} \ne \{0,1\}=2$. The reason of course is that in the surreal numbers we have the equivalence relations where we can freely add numbers to the left set as long as there's a larger number already in that set. An easy solution would be to add that “filling up downwards” explicitly to the definition, but that would somewhat defeat the goal of having an explicit formula. Therefore I thought about filling the hole with the operands themselves. That is, my guess at the explicit formula is: $$\alpha\oplus\beta = \alpha\cup\beta\cup\{x\oplus\beta:x\in\alpha\}\cup\{\alpha\oplus x:x\in\beta\}$$ However I failed to even prove that this is associative, let alone that it indeed in all cases gives an ordinal again. Seeing that the most obvious difference between natural and ordinal sum is the non-commutativity of the latter, I also guessed at a formula for the ordinal sum by simply de-symmetrizing the formula: $$\alpha + \beta = \alpha \cup \{\alpha + x:x\in\beta\}$$ Here I think I at least can prove associativity, which together with the easy to prove fact $\alpha+1=\alpha\cup\{\alpha\}$ means that if it eventually breaks, it does so at a limit ordinal for $\beta$. My question now is: Do those formulas indeed reproduce natural and ordinal addition of ordinals, and if not, where do they break down? AI: Consider the Cantor normal forms (in descending order) of two ordinals $\alpha= \omega^{\eta_1} m_1+...+\omega^{\eta_r} m_r$ and $\beta=\omega^{\eta_1} n_1 +...+\omega^{\eta_r} n_r$, where some $n_i,m_i$ may be zero. I assume you know how the normal form of $\alpha\oplus\beta$ is obtained from those two. I claim that your formula is valid. What you want to prove is that any ordinal $\mu$ strictly below $\alpha \oplus \beta$ is of the form $\mu \in \alpha \cup \beta$ or $\mu= \alpha\oplus\gamma$ for some $\gamma \in \beta$ or $\mu = \delta \oplus \beta$ for some $\delta \in \alpha$. Now considering the normal form of $\mu$, you can parse through these different cases by focusing on the largest exponent with non zero coefficient and seeing whether it appears in the normal form of $\alpha$ or $\beta$. I can give more details if needed.
H: Getting distance image Pleases take a look at the picture. AC = a, AB = b, I just made a distance from A point as k(CD = k). At that time how to get the lengths of DF(d) and BF(c)? AI: Let $CE=e$, $EB=f$, $ED=d_1$, $FE=d_2$. Then $\triangle ABC\sim\triangle CDE$, thus $\frac{CA}{CE}=\frac{AB}{CD}$, i.e. $e=\frac{ak}{b}$. From $\triangle CDE$ we get $d_1=\sqrt{e^2+k^2}$. From $\triangle EFB\sim\triangle ECD$, we get $d_2=\frac{ce}{k}$. From $\triangle ABC\sim\triangle FBE$, we get $\frac{a}{d_2}=\frac{e+f}{f}=\frac{e+\sqrt{d_2^2+c^2}}{\sqrt{d_2^2+c^2}}$, insert $d_2=\frac{ce}{k}$, then \begin{equation} c=\frac{\frac{ak}{e}\sqrt{\frac{e^2}{k^2}+1}-e}{\sqrt{\frac{e^2}{k^2}+1}}. \end{equation} Now you can calculate $d=d_1+d_2$ and $c$ in terms of $a,b,k$ by inserting $e=\frac{ak}{b}$.
H: Determine representation matrix of orthogonal projection Let $V=\mathbb C^2$, $\mathcal E$ be the standard basis of $V$ and let $$U = \left\{ \begin{pmatrix} z_1 \\ z_2 \end{pmatrix}\in V | (1+i)z_1 + (3+2i)z_2 = 0 \right\}.$$ Finally let $p:V\to V$ be the orthogonal projection from $V$ to $U$ with respect to the canonical scalar product on $V$. I'm trying to find $A:=M_{\mathcal E}^{\mathcal E}(p)$, and I really do not know how. I could imagine that $A$ is Hermitian, or that I have to somehow construct a basis of $U$. How do I go about doing this? This has given me a lot of trouble so far. AI: Note: Since the problem involves complex numbers, I assume everywhere you say "orthogonal" you mean "unitary." In this case the canonical scalar product between two vectors $\vec{v}, \vec{w}$ is $\vec{v}^* \vec{w}$, where the asterisk denotes the complex conjugate transpose of a matrix. $U$ is one-dimensional (i.e. all vectors in it point in the same direction), so finding an orthonormal basis for it is easy. Note that any vector of $U$ is of the form $$\begin{pmatrix} z_1 \\ z_2 \end{pmatrix} = t\begin{pmatrix} 3 + 2i \\ -1 - i \end{pmatrix}$$ for some $t \in \mathbb{C}$. This is obtained just by solving the equation that vectors in $U$ satisfy. To find an orthonormal basis for $U$, we must find a vector of unit norm in $U$; this occurs when $$t = \left(|3 + 2i|^2 + |-1 - i|^2\right)^{-1/2} = \frac{1}{\sqrt{15}}$$ From here, let us denote $$\vec{u} = \frac{1}{\sqrt{15}}\begin{pmatrix} 3 + 2i \\ -1 - i \end{pmatrix}$$ which is the sole element of an orthonormal basis for $U$. The orthogonal projection of some vector $\vec{v} \in V$ onto $U$ is then given by $$\text{proj}_{\vec{u}}(\vec{v}) = (\vec{u}^* \vec{v})\, \vec{u} = \vec{u} \vec{u}^* \vec{v}$$ And hence we can see that the matrix representing the projection is given by $$A = \vec{u} \vec{u}^* = \frac{1}{15} \begin{pmatrix} 13 & -5 + i \\ -5 - i & 2\end{pmatrix}$$
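A quick numerical sanity check of this matrix (a sketch using numpy; the test vector $\vec v$ is an arbitrary choice of mine): it verifies that $A=\vec u\vec u^*$ is Hermitian and idempotent, that $A\vec v$ satisfies the defining equation of $U$, and that $\vec v - A\vec v$ is orthogonal to $U$.

```python
import numpy as np

u = np.array([3 + 2j, -1 - 1j]) / np.sqrt(15)   # unit vector spanning U
A = np.outer(u, u.conj())                        # A = u u*

print(np.round(15 * A, 10))            # [[13, -5+1j], [-5-1j, 2]]
assert np.allclose(A, A.conj().T)      # Hermitian
assert np.allclose(A @ A, A)           # idempotent
assert np.allclose(A @ u, u)           # fixes U pointwise

v = np.array([1.0 + 2j, -3.0 + 0.5j])            # arbitrary test vector
w = A @ v
assert np.isclose((1 + 1j) * w[0] + (3 + 2j) * w[1], 0)  # A v lies in U
assert np.isclose(u.conj() @ (v - w), 0)                 # v - A v is orthogonal to U
```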
H: What does this Ring-Homomorphism do to constants: $\mathbb{Z}[X]\rightarrow \mathbb{Z}[i], X\mapsto i$? I have been given this definition for a ring homomorphism: $$\varphi:\mathbb{Z}[X]\rightarrow \mathbb{Z}[i], X\mapsto i$$ where $\mathbb{Z}[i]$ is the ring of Gaussian integers (complex numbers with integer components). With the properties of ring homomorphisms, I've found $$\varphi(X)=i, \varphi(X^2)=-1, \varphi(X+X^2)=i-1$$ and so on. But what does this homomorphism map $$\varphi(X^2+2), \varphi(X+5)$$ to? These are just examples, I don't need to calculate those specifically. Ultimately I need to show that this homomorphism induces an isomorphism $\mathbb{Z}[X]/(X^2+1)\cong\mathbb{Z}[i]$, and I don't see how I can do that without the definition of $\varphi$ for constants. AI: Any positive integer can be written as a sum of 1s: $$ n = \underbrace{1 + \cdots + 1}_{n\text{ times}}. $$ For example, $5 = 1 + 1 + 1 + 1 + 1$. Since $\varphi$ is a (unital) ring homomorphism, so that $\varphi(1) = 1$, you know that $$ \varphi(5) = \varphi(1 + \cdots + 1) = \varphi(1) + \cdots + \varphi(1) = 1 + \cdots + 1 = 5. $$ We also know that $\varphi(0) = 0$, and that $\varphi(-x) = -\varphi(x)$ for any $x$, and combining these facts we get that for any integer $n$ we have $\varphi(n) = n$. Thus in this case, a constant is just taken to itself (or, the version of itself as an element of $\mathbb Z[i]$). So now you can see what happens with an arbitrary polynomial: for instance, $$ \varphi(X^2 + 5) = \varphi(X^2) + \varphi(5) = \varphi(X)^2 + 5 = i^2 + 5 = 4. $$ Note that this doesn't work for general rings $R[X]$ if only $\varphi(X)$ is specified, but it works for $\mathbb Z$ because any element of $\mathbb Z$ can be written in terms of $1$ and the ring operations.
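For what it's worth, here is a tiny Python sketch of the same evaluation map, with polynomials in $\mathbb Z[X]$ represented as coefficient lists (constant term first); the helper names phi and mul are mine. It shows the constants behaving as above, the two examples from the question, that $X^2+1$ lands on $0$, and that the map respects products.

```python
def phi(p):
    """Evaluate an integer-coefficient polynomial (constant term first) at i."""
    return sum(c * 1j**k for k, c in enumerate(p))

def mul(p, q):
    """Multiplication in Z[X]."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

print(phi([5, 0, 1]))                  # X^2 + 5  ->  4
print(phi([2, 0, 1]), phi([5, 1]))     # X^2 + 2  ->  1,   X + 5  ->  5 + i
print(phi([1, 0, 1]))                  # X^2 + 1  ->  0  (the kernel generator)

p, q = [3, -1, 2], [7, 0, 0, 5]        # two arbitrary test polynomials
assert phi(mul(p, q)) == phi(p) * phi(q)
```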
H: Prove that the homomorphism $f$ is an isomorphism We are given that $f:(\mathbb{Z}/n\mathbb{Z},+)\rightarrow(U_n,\cdot)$ is a homomorphism, and we need to prove it is bijective. $f$ is defined by $f(\bar{k})=z^{k}$, and $U_n=\{ w\in\mathbb{C}\setminus \{0\} : w^{n}=1\}$. Proof. $f$ is injective: let $x,y \in\mathbb{Z}/n\mathbb{Z}$; then $x=\bar{k}$ and $y=\bar{k^\prime}$. If $f(x)=f(y)$ then $f(\bar{k})=f(\bar{k^\prime})$, hence $z^{k}=z^{k^\prime}$, so $k=k^\prime$. $f$ is surjective: let $y\in U_n$; then $\exists$ $k\in\mathbb{Z}/n\mathbb{Z}$ such that $y=z^{k}=f(\bar{k})$. I'm not really sure if I proved it surjective. Also, could anyone tell me how to write this better? AI: Hint: Since both groups have the same number of elements, it suffices to prove that the map is injective.
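A quick numerical illustration of the hint (only a sketch; it assumes, as in the course's definition of $f$, that $z=e^{2\pi i/n}$ is a primitive $n$-th root of unity): the $n$ images $z^0,\dots,z^{n-1}$ are pairwise distinct, and since $|\mathbb Z/n\mathbb Z|=|U_n|=n$, injectivity already forces surjectivity.

```python
import cmath

n = 12
z = cmath.exp(2j * cmath.pi / n)        # a primitive n-th root of unity
values = [z**k for k in range(n)]

# every image is an n-th root of unity ...
assert all(abs(v**n - 1) < 1e-9 for v in values)
# ... and the n images are pairwise distinct (injectivity), hence all of U_n
rounded = {(round(v.real, 9), round(v.imag, 9)) for v in values}
assert len(rounded) == n
print(f"k -> z^k hits {len(rounded)} distinct elements of U_{n}")
```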
H: Find all the endomorphisms of the multiplicative group $\mathbb R^+$. I am looking for the set $\operatorname{End}(\mathbb R^+)$, i.e. the set of all endomorphisms from $\mathbb R^{+}$ to itself, where the operation is multiplication. Can someone help me find them explicitly? I doubt whether they can be found explicitly or only shown to exist. Do we require Zorn's Lemma here? AI: $\mathbb R^+$ is isomorphic to the additive group $\mathbb R$ via the logarithm. This is a $\mathbb Q$-vector space. Any endomorphism of this as a group will be $\mathbb Q$-linear, since it is $\mathbb Z$-linear. Thus the set of endomorphisms is all $\mathbb Q$-linear maps. You won't find an explicit description of all of these, since we can't even write down a $\mathbb Q$-basis (Hamel basis) of $\mathbb R$ explicitly, but this is at least a complete description of them.
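To make at least the "easy" part of this description concrete: under the logarithm, multiplication by a fixed scalar $q$ (which is certainly $\mathbb Q$-linear when $q\in\mathbb Q$) pulls back to the power map $x\mapsto x^{q}$ on $\mathbb R^+$. A quick numeric check, just as a sketch (the exponent $3/5$ is an arbitrary choice of mine):

```python
import math, random

q = 3 / 5                       # any fixed rational exponent
f = lambda x: x ** q            # corresponds to t -> q*t under log

for _ in range(5):
    x, y = random.uniform(0.1, 10.0), random.uniform(0.1, 10.0)
    assert math.isclose(f(x * y), f(x) * f(y), rel_tol=1e-12)
print("x -> x**q is an endomorphism of (R^+, *)")
```

Endomorphisms not of the form $x\mapsto x^{c}$ correspond to $\mathbb Q$-linear maps that are not $\mathbb R$-linear, and producing one of those is exactly where a Hamel basis, hence Zorn's Lemma, comes in.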
H: Conditional expectation of maximum given the sum Let $X_1,X_2$ be i.i.d. random variables with uniform law on $\{ 1, \dots N \}$. I want to compute $$ E \left[ \max \{ X_1, X_2 \} \vert X_1 + X_2 \right].$$ How do I approach this? Do I need to consider particular cases? AI: Guide: For $k=2,3,\dots,2N-1,2N$ find the function $f$ prescribed by: $$f\left(k\right):=\mathbb{E}\left[\max\left\{ X_{1},X_{2}\right\} \mid X_{1}+X_{2}=k\right]=$$$$\sum_{i=1}^{N}\sum_{j=1}^{N}\max\left\{ i,j\right\} P\left(X_{1}=i,X_{2}=j\mid X_{1}+X_{2}=k\right)$$ Then: $$\mathbb{E}\left[\max\left\{ X_{1},X_{2}\right\} \mid X_{1}+X_{2}\right]=f\left(X_{1}+X_{2}\right)$$
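Following the guide, here is a short enumeration for a concrete case (N = 6, i.e. two fair dice; the choice of N is mine). It uses the fact that, given $X_1+X_2=k$, the pair $(X_1,X_2)$ is uniform over the admissible pairs, which is exactly what the double sum in the guide computes.

```python
from fractions import Fraction
from itertools import product

N = 6  # two fair dice, as an example

def f(k):
    """E[max(X1, X2) | X1 + X2 = k] by enumerating the equally likely pairs."""
    pairs = [(i, j) for i, j in product(range(1, N + 1), repeat=2) if i + j == k]
    return Fraction(sum(max(i, j) for i, j in pairs), len(pairs))

for k in range(2, 2 * N + 1):
    print(k, f(k))
# the conditional expectation is then the random variable f(X1 + X2)
```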
H: Continuous curves and Lebesgue measure Let $d\geq 2$, and let $\gamma: [a,b]\to \mathbb{R}^d$ be a continuously differentiable function. Show that the curve of this function $\{(t,\gamma(t)): a \leq t \leq b\}$ has Lebesgue measure zero. This problem was directly or indirectly discussed here, here and here. Nevertheless I have a small confusion regarding my own proof. I have proven the assertion as follows: by continuity assumption we have $$\forall \epsilon>0 \quad \exists \delta>0 \quad \forall x,y \in [a,b]: \quad d(x,y)\leq \delta \implies d(\gamma(x),\gamma(y)) \leq \epsilon$$ Choosing $\delta$ for our $\sqrt[d]{\epsilon}$ we partition the interval into $[a, a+\delta],\ldots, [a+(n-1)\delta, a + n\delta]$ where $n=\lceil (b-a)/\delta \rceil$. We can then contain our curve in the boxes (I abuse the notation a little bit) $$ [a, a+\delta]\times[\gamma(a)- \sqrt[d]{\epsilon}\cdot 1, \gamma(a)+ \sqrt[d]{\epsilon}\cdot 1],\ldots, [a+(n-1)\delta, a + n\delta] \times[\gamma(a+(n-1)\delta)- \sqrt[d]{\epsilon}\cdot 1, \gamma(a+(n-1)\delta)+ \sqrt[d]{\epsilon}\cdot 1]$$ which have total elementary measure $n\cdot \delta\cdot\epsilon \approx (b-a) \epsilon$, and this can get arbitrarily small. The problem is that I only used continuity and not the continuous differentiability. This is most certainly troubling, since in the remark Prof. Tao explicitly says that this result does not hold for merely continuous functions, due to the existence of space-filling curves. So at which step in the proof did I make a logical fallacy? AI: Small mistake: "by continuity assumption" should actually be "by uniform continuity". More seriously, as I understand it, you're trying to show that the image of $\gamma$ has measure zero, right? This would mean $\text{image}(\gamma) \subset \Bbb{R}^d$, and hence you should be covering it by cubes/rectangles in $\Bbb{R}^d$, but you've been covering it using rectangles in $\Bbb{R}^{d+1}$, which doesn't make sense. As a result, if you trace through your work, the total measure of the cubes in $\Bbb{R}^d$ is $n \cdot \epsilon$, not $n\cdot \delta\cdot\epsilon$. I'm sure you can see the problem here: as $\epsilon$ gets smaller, the $n$ could grow much, much larger, so the product $n\cdot \epsilon$ isn't small anymore. This is the flaw in your proof (another reason why you should suspect your proof is that you never made use of the hypothesis $d \geq 2$). The typical argument using continuous differentiability allows you to deduce that the function is locally Lipschitz (by the mean-value inequality), and also in this special case, your domain $[a,b]$ happens to be compact, so actually the function $\gamma$ is Lipschitz (not just locally). This Lipschitz condition gives you a more refined estimate on "how much" your function can grow, and this is exactly what you need to prove the claim.
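To see the Lipschitz bound at work numerically, here is a small sketch (the curve, the sample size and the grid sizes are arbitrary choices of mine): for the unit circle in $\mathbb R^2$, the number of $\varepsilon$-grid cells met by the image grows only like $1/\varepsilon$, so their total area is $O(\varepsilon)\to 0$, which is exactly the kind of estimate the Lipschitz argument produces in general.

```python
import numpy as np

# gamma(t) = (cos t, sin t) on [0, 2*pi]: a C^1 curve with Lipschitz constant 1.
t = np.linspace(0.0, 2.0 * np.pi, 2_000_000)
pts = np.stack([np.cos(t), np.sin(t)], axis=1)

for eps in [0.1, 0.01, 0.001]:
    cells = np.unique(np.floor(pts / eps).astype(np.int64), axis=0)
    print(f"eps={eps}: {len(cells)} grid cells met, total area ~ {len(cells) * eps**2:.4f}")
```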
H: Solving integral by Feynman technique Consider, $$ I = \int_{0}^{\infty} \frac{1}{ (1+ax^2)^{m+1}} dx$$ Then, $$ I'(a) = -(m+1) \int_{0}^{\infty} \frac{2ax}{(1+ax^2)^{2m+2} } dx$$ so that $$I'(a) = \frac{ m+1}{2(2m-1)} [ (1+ax^2)^{1-2m}]_{0}^{\infty}$$ Now what do I do? I am finding it difficult to proceed. AI: $I'(a)$ should really be $$I'(a) = -(m+1)\int_0^\infty \frac{x^2}{(1+ax^2)^{m+2}}\:dx$$ Then use integration by parts: $$I'(a) = \frac{x}{2a(1+ax^2)^{m+1}}\Bigr|_0^\infty - \frac{1}{2a}\int_0^\infty \frac{1}{(1+ax^2)^{m+1}}\:dx$$ The boundary term vanishes, which means that $$2aI' + I = 0$$ Can you take it from here? I'll still leave the general solution to you. However, one thing you'll immediately find is that the usual candidates for initial values don't tell us anything new: $I(a) \to \infty$ as $a \to 0^+$ and $I(a) \to 0$ as $a \to \infty$, and any solution of the differential equation has the same limiting behaviour, so these limits don't determine the constant. Instead we'll try to find $I(1)$: $$I(1) = \int_0^\infty \frac{1}{(1+x^2)^{m+1}}\:dx$$ The trick is to let $x = \tan \theta \implies dx = \sec^2 \theta \:d\theta$: $$I(1) = \int_0^\frac{\pi}{2} \cos^{2m}\theta\:d\theta$$ Since the power is even, we can use symmetry to say that $$\int_0^\frac{\pi}{2} \cos^{2m}\theta\:d\theta = \frac{1}{4}\int_0^{2\pi} \cos^{2m}\theta\:d\theta$$ Then use Euler's formula and the binomial expansion to get that $$ = \frac{1}{4^{m+1}}\sum_{k=0}^{2m}{2m \choose k} \int_0^{2\pi} e^{i2(m-k)\theta}\:d\theta$$ All of the integrals will evaluate to $0$ except when $k=m$, leaving us with the only surviving term being $$I(1)=\frac{2\pi}{4^{m+1}}{2m \choose m}$$
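Once the ODE $2aI'+I=0$ is solved (separation of variables gives $I(a)=I(1)\,a^{-1/2}$), the final result can be spot-checked numerically. A sketch with scipy (the function name closed_form and the sample values of $m$ and $a$ are mine):

```python
import numpy as np
from math import comb, pi, sqrt
from scipy.integrate import quad

def closed_form(a, m):
    # I(a) = I(1)/sqrt(a)  with  I(1) = 2*pi*C(2m, m)/4**(m+1)
    return 2 * pi * comb(2 * m, m) / 4 ** (m + 1) / sqrt(a)

for m in [1, 2, 5]:
    for a in [0.5, 1.0, 3.0]:
        numeric, _ = quad(lambda x: (1 + a * x * x) ** (-(m + 1)), 0, np.inf)
        assert abs(numeric - closed_form(a, m)) < 1e-7
print("numerical integration matches the closed form")
```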
H: Changing basis of linear map The matrix w.r.t. the standard basis of a given linear map is: $$[T]_B=\begin{bmatrix}-1&2&1\\3 & 1 & 0\\1&1&1\end{bmatrix}$$ I want to express this in the basis $B'=\{(0,1,2),(3,1,0),(0,1,1)\}$. Say I want to first find column one. Now, $$T(0,1,2)=T(0,1,0)+2 T(0,0,1)=\begin{bmatrix}4\\1\\3 \end{bmatrix}$$ To represent this in basis $B'$, I need to find the coefficients $\lambda_i$ such that: $$\begin{bmatrix}4\\1\\3\end{bmatrix}=\lambda_1\begin{bmatrix}0\\1\\2\end{bmatrix}+\lambda_2\begin{bmatrix}3\\1\\0\end{bmatrix}+\lambda_{3}\begin{bmatrix}0\\1\\1\end{bmatrix}$$ First of all, I would like to hear if this is the right procedure to find the transformation matrix in basis $B'$. If yes, what is the easiest way to find the coefficients (yep, I'm new at this)? Edit: I realized that my lecture notes state that if $\{x_i\}$ are the basis vectors of one basis and $\{y_i\}$ of another, then the transformation $Ax_k$ can be written in basis $\{y_i\}$ through: $$A x_{k}=\sum_{i} \alpha_{i k} y_{i}$$ In this case, $$Ax_1=[T]_Be_1=\begin{bmatrix}-1\\3\\1\end{bmatrix}=\alpha_{11} b_1+\alpha_{21} b_2+\alpha_{31} b_3=\alpha_{11}\begin{bmatrix}0\\1\\2 \end{bmatrix}+\alpha_{21}\begin{bmatrix}3\\1\\0\end{bmatrix}+\alpha_{31}\begin{bmatrix}0\\1\\1\end{bmatrix}$$ which leads to $$\alpha_{11}=-\frac{7}{3}, \quad \alpha_{21}=-\frac{1}{3}, \quad \alpha_{31}=\frac{17}{3}$$ Is this correct, or did I misunderstand something? AI: Yes, your thoughts are right. Note that $$\left[\begin{array}{c} 4 \\ 1 \\ 3 \end{array}\right]=\lambda_{1}\left[\begin{array}{c} 0 \\ 1 \\ 2 \end{array}\right]+\lambda_{2}\left[\begin{array}{c} 3 \\ 1 \\ 0 \end{array}\right]+\lambda_{3}\left[\begin{array}{c} 0 \\ 1 \\ 1 \end{array}\right]$$ is equivalent to $$\left[\begin{array}{c} 4 \\ 1 \\ 3 \end{array}\right]=\left[\begin{array}{c} 3\lambda_2 \\ \lambda_1+\lambda_2+\lambda_3 \\ 2\lambda_1+\lambda_3 \end{array}\right]$$ which gives you the linear system $$\left\{\begin{array}{rcrcrcc} & & 3\lambda_2 & & & = & 4 \\ \lambda_1 & + & \lambda_2 & + & \lambda_3 & = & 1 \\ 2\lambda_1 & & & + & \lambda_3 & = & 3 \end{array}\right.$$ Now you just need to solve it to find the first column of $[T]_{B'}$.
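One way to double-check the hand computation is the standard change-of-basis identity $[T]_{B'} = P^{-1}[T]_B P$, where the columns of $P$ are the $B'$ vectors written in the standard basis. A small numpy sketch (only a verification aid, not part of the exercise):

```python
import numpy as np

A = np.array([[-1, 2, 1],
              [ 3, 1, 0],
              [ 1, 1, 1]], dtype=float)   # [T]_B in the standard basis
P = np.array([[0, 3, 0],
              [1, 1, 1],
              [2, 0, 1]], dtype=float)    # columns are the B' basis vectors

T_Bprime = np.linalg.solve(P, A @ P)      # P^{-1} [T]_B P, i.e. [T]_{B'}
print(np.round(T_Bprime, 4))

# its first column must be the lambdas solving the 3x3 system in the answer
lam = np.linalg.solve(P, A @ np.array([0.0, 1.0, 2.0]))
assert np.allclose(T_Bprime[:, 0], lam)
```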
H: Find such points on the ellipsoid $x^2+2y^2+4z^2 = 8$ that are the farthest and nearest to the point $(0,0,3)$ So, I need to solve this optimisation exercise using Lagrange multipliers. I know how to do these sorts of exercises when a constraint function is given, but in this case there's only a point, so I don't know how to interpret it. Are there meant to be three constraint functions $x=0$, $y=0$ and $z=0$, or how should I interpret it? AI: The way to interpret this problem is to find extrema of the function $$f(x, y, z) = x^2 + y^2 + (z - 3)^2$$ which represents the squared distance between a point $(x, y, z)$ and the point $(0, 0, 3)$. Here you are subject to the constraint $$ g(x, y, z) = \frac{x^2}{8} + \frac{y^2}{4} + \frac{z^2}{2} - 1 = 0$$ which is the set of points on your ellipsoid (I've just divided by $8$ and subtracted $1$ on both sides). The method of Lagrange multipliers dictates that in a problem with one constraint $g = 0$, an extremum of $f$ is found when $$\nabla f = \lambda \nabla g$$ Plugging in the $f$ and $g$ we have, we obtain the following system of equations: $$\begin{cases} 2x = \lambda x/4 \\ 2y = \lambda y / 2 \\ 2z - 6 = \lambda z \\ g = 0\end{cases}$$ There are a few different cases we must consider here: The first is when $\lambda = 0$, in which case $z = 3$ (from the third equation). However, no point on the constraint has $z = 3$, so we can eliminate this case. Note that if $\lambda \neq 0$, then at least one of $x$ or $y$ must be zero (otherwise we would have $\lambda = 8$ from the first equation and $\lambda = 4$ from the second). If $x = 0$ and $y \neq 0$, then $\lambda = 4$, forcing $z = -3$ from the third equation. But no point on the constraint has $z = -3$, so we can rule out this case. If $y = 0$ and $x \neq 0$, then $\lambda = 8$, forcing $z = -1$. From the last equation, this would force $x^2 = 4 \iff x = \pm 2$, making the solutions in this case the points $(2, 0, -1)$ and $(-2, 0, -1)$. Finally, if both $x$ and $y$ are zero, then the fourth equation forces $z^2 = 2 \iff z = \pm \sqrt2$, with $\lambda = 2 \mp 3 \sqrt{2}$. So this case yields the points $(0, 0, \sqrt{2})$ and $(0,0, -\sqrt 2)$. The only Lagrange points left standing are the points $(\pm 2, 0, -1)$ and $(0, 0, \pm \sqrt{2})$. What remains is to compare the values of $f$ at each of these four points to determine which is a global minimizer/maximizer. Can you proceed from here?
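For a quick sanity check of the four candidates (the comparison is left as the last step above), here is a numpy sketch that evaluates $f$ at each one and compares with a brute-force scan over a parametrization of the ellipsoid; the grid resolution is an arbitrary choice of mine.

```python
import numpy as np

f = lambda x, y, z: x**2 + y**2 + (z - 3)**2        # squared distance to (0, 0, 3)

candidates = [(2, 0, -1), (-2, 0, -1), (0, 0, np.sqrt(2)), (0, 0, -np.sqrt(2))]
for p in candidates:
    print(p, "f =", round(f(*p), 4))

# brute force over x = sqrt(8) sin(v) cos(u), y = 2 sin(v) sin(u), z = sqrt(2) cos(v)
u, v = np.meshgrid(np.linspace(0, 2 * np.pi, 800), np.linspace(0, np.pi, 401))
x = np.sqrt(8) * np.sin(v) * np.cos(u)
y = 2 * np.sin(v) * np.sin(u)
z = np.sqrt(2) * np.cos(v)
vals = f(x, y, z)
print("grid min/max:", float(vals.min()), float(vals.max()))  # ~2.515 and ~20
```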
H: $\xi \frac{1 - \xi^{n+1}}{1 - (1 - \xi)} = 1 - \xi^{n+1}$ Let $ 0 < \xi < 1$. Then I've been told that this inequality holds true, but it doesn't seem obvious to me and I don't see how the equalities were reached. I think it might be a geometric series, but I am not sure. $$\xi + \xi(1-\xi) + \xi(1-\xi)^2 + \dots + \xi(1-\xi)^n = \frac{1 - \xi^{n+1}}{1 - (1 - \xi)} = 1 - \xi^{n+1}$$ AI: You should have: $$\xi + \xi(1-\xi) + \xi(1-\xi)^2 + \dots + \xi(1-\xi)^n$$ $$=\xi \frac{1-(1-\xi)^{n+1}}{1-(1-\xi)}$$ $$=1-(1-\xi)^{n+1}$$
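Since the corrected chain is an exact identity in $\xi$ and $n$, it can be checked with exact rational arithmetic; here is a tiny sketch (the sampled values of $\xi$ and $n$ are arbitrary choices of mine):

```python
from fractions import Fraction

# check:  xi + xi(1-xi) + ... + xi(1-xi)^n  ==  1 - (1-xi)^(n+1)
for xi in [Fraction(1, 3), Fraction(2, 7), Fraction(9, 10)]:
    for n in range(8):
        lhs = sum(xi * (1 - xi)**j for j in range(n + 1))
        assert lhs == 1 - (1 - xi)**(n + 1)
print("identity holds exactly for all sampled xi and n")
```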
H: When doing nested integration, which of these is the correct notation for the order of integration limits? If we wish to do a nested integration over a function of multiple variables, is there a (or, what is the) correct order to write the limits on the integral symbol? For example, to integrate the function $f(x,y)$ first with respect to $x$ over the range $(x=-1$ to $x=1)$, and then secondly with respect to $y$ over the range $(y=-2$ to $y=2)$, which of the following is the correct notation: Option 1: $$ \int_{-2}^{2} \int_{-1}^{1} f(x,y)\; dx\;dy $$ Option 2: $$ \int_{-1}^{1} \int_{-2}^{2} f(x,y)\; dx\;dy $$ Are the integration symbols "nested", starting with the inner ones and working outwards (as in option 1)? Or do they follow the same order as the differentials (as in option 2, i.e. the $x$ one comes first, then the $y$ one)? AI: Option 1 is the only convention I know (except the one kimchi wrote in the comments, but that is neither of your two options and is only used by physicists, as far as I know). When you write $$ \int_{-2}^{2} \int_{-1}^{1} f(x,y)\; dx\;dy $$ you can imagine brackets like in $$ \int_{-2}^{2}\left( \int_{-1}^{1} f(x,y)\; dx\right)\;dy $$ which are usually not written.
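The "imagined brackets" are exactly how one would compute such an integral numerically, which may make the convention easier to remember. A small scipy sketch (the integrand is an arbitrary example of mine): the inner integral pairs the inner limits with $dx$, and its result is then integrated in $y$.

```python
from scipy.integrate import quad

f = lambda x, y: x**2 * y**2 + 1                     # arbitrary example integrand

inner = lambda y: quad(lambda x: f(x, y), -1, 1)[0]  # innermost: x from -1 to 1
result = quad(inner, -2, 2)[0]                       # outermost: y from -2 to 2
print(result)  # for this f: (2/3)*(16/3) + 2*4 = 32/9 + 8, about 11.556
```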
H: Calculating future average I have an average of 3,5 over 16 data points. If I wish to increase this average to, say, 6, how many future data points must be of a certain value? Assume all future data points are of the same value. If the current average of 16 data points is 3,5, then four new data points each of value 7 would result in a new average of 4,2. I know how to calculate this by hand. What I'm looking for is some formula that I can plug in to my spreadsheet. Would it be possible to calculate how many data points are needed to reach an average of 6, given each new data point has a value of 7? Also, would it be possible to reverse the unknown(say I know there will be 20 more data points. What does each value have to be to reach a new average of 6)? Is there a term for this type of formula/problem? I tried searching for a solution but I'm not quite sure what I'm supposed to search for. AI: A simple way of thinking about this is to consider the current total sum in any situation. In the beginning, you have $16$ data points and their total sum is $3.5 \cdot 16 = 56$. Now, you wish to add some terms, let's say $n$ terms (a known number), of value $x$ so that the new average is $\mu_{new}$. This just means that the new total sum is $$ 56 + nx = (16+n)\mu_{new} $$ from which you can solve for $x$: $$ x = \frac{(16+n)\mu_{new} - 56}{n} $$ If instead you know $x$, you can solve for $n$: $$ n = \frac{ 16\mu_{new}-56 }{x -\mu_{new}} $$
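Turning the two formulas into reusable helpers (written in Python here, but they translate one-to-one into spreadsheet formulas; e.g. the first one becomes something like `=((16+B1)*6-56)/B1` if cell B1 holds the number of future points, where the cell name is only an example). The current count and average, 16 and 3.5, are parameters:

```python
def value_needed(cur_avg, cur_n, new_n, target_avg):
    """Value each of new_n future points must take to reach target_avg."""
    return ((cur_n + new_n) * target_avg - cur_n * cur_avg) / new_n

def points_needed(cur_avg, cur_n, value, target_avg):
    """Number of future points of the given value needed to reach target_avg."""
    return cur_n * (target_avg - cur_avg) / (value - target_avg)

print(value_needed(3.5, 16, 20, 6))   # 20 future points must each be 8.0
print(points_needed(3.5, 16, 7, 6))   # 40.0 points of value 7 are needed
```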
H: Is $ \mu (\{t:\lim_nf_n(t)\neq f(t) \text { or }\lim_n f_n (t) \text { does not exist }\})=0 $? Let $(E,\mathcal {A} ,\mu)$ be a finite measure space and $\{f_n\} $ be a sequence of integrable functions such that: for all $\epsilon \in]0,\frac {\sqrt{2}}{2}] $ $$ \lim_n\mu (\{t:\sup_{k\geq n}|f_k(t)-f(t)|>\epsilon\})=0 $$ Can we say that $$ \mu (\{t:\lim_nf_n(t)\neq f(t) \text { or }\lim_n f_n (t) \text { does not exist }\})=0 $$ AI: Set $A_n:=\{t:\sup_{k\geq n}|f_k(t)-f(t)|>\epsilon\}$ and note that $$ \lim_{n\to\infty}\mu (A_n)=\lim_{n\to\infty}\int \mathbf{1}_{A_n}\mathop{}\!d \mu\overset{(*)}{=}\int \lim_{n\to\infty}\mathbf{1}_{A_n}\mathop{}\!d \mu =\mu \Big(\bigcap_{n}A_n\Big)=0 $$ where in $(*)$ we used the monotone convergence theorem applied to the increasing sequence $1-\mathbf{1}_{A_n}$ (this is where the facts that $(A_n)$ is decreasing and that $\mu$ is finite are used); equivalently, it is continuity of $\mu$ from above. Since $\{t:\limsup_{n\to\infty}|f_n(t)-f(t)|>\epsilon \}\subseteq\bigcap_n A_n$, this set is null for every admissible $\epsilon$. Taking $\epsilon=\frac1k$ with $k\geq 2$ and using that a countable union of null sets is null, we get $\mu (\{t:\limsup_{n\to\infty}|f_n(t)-f(t)|>0 \})=0$, i.e. $\mu (\{t:\limsup_{n\to\infty}|f_n(t)-f(t)|=0 \})=\mu(E)$, so $f_n\to f$ almost everywhere.