H: What does "analytic" mean in the context of control systems? Consider an LTI, causal and finite-dimensional system $G$ and let $\hat{G}$ be its Laplace transform. We say that $\hat{G}$ is stable if it is analytic in the closed right half-plane (Re $s>0$). What does "analytic" mean here? I know that $\hat{G}$ is stable if it doesn't have poles in the RHP. I also found the Wikipedia definition of an analytic function, but I couldn't relate the two. AI: The Wikipedia definition is exactly right. Your system is a linear time-invariant system of differential equations in finitely many variables. The Laplace transform of such a system is a rational function, and thus is analytic except at its poles. It is stable if none of those poles lie in the closed right half-plane (which, by the way, is $\operatorname{Re}(s) \ge 0$, not $>0$).
H: Fast way to determine $P^{-1}$ for diagonalising a $3 \times 3$ matrix? Diagonalise the following matrix: $$A = \begin{pmatrix}-1&-2&2\\ \:4&3&-4\\ \:0&-2&1\end{pmatrix}$$ I know I have to represent the matrix $A$ as $A = PSP^{-1}$, where $S$ is a diagonal matrix with the eigenvalues of $A$ on the diagonal, and $P$ is the corresponding eigenvector matrix. I don't have any trouble with finding the eigenvectors and eigenvalues; $$S=\begin{pmatrix}3&0&0\\ \:0&-1&0\\ \:0&0&1\end{pmatrix}, P = \begin{pmatrix}1&0&1\\ \:-1&1&0\\ \:1&1&1\end{pmatrix}$$ However, I am having a tough time trying to find the inverse of $P$ in a quick way. The issue is that I already spent a long time trying to find $S$ and $P$, and finding $P^{-1}$ using either this method or this method takes quite some time. Therefore, I'm hoping that there is a quick way to determine $P^{-1}$. Essentially - is there another/quicker method to determine $P^{-1}$? AI: The fastest, reliable method for determining $P^{-1}$ is to use Gaussian elimination (the second method). I strongly recommend against using the first method. In some cases, including this one, it may be easier to use the following "trick" method (but please note it is not very reliable in the general case!). If the inverse is $P^{-1} = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \vec{v}_3 \end{bmatrix}$, then the $\vec{v}_i$ satisfy $P \vec{v}_i = \hat{e}_i$, so spotting a linear combination of the columns of $P$ that gives every basis vector will find the inverse. For example, we can observe without too much trouble that $$\hat{e}_1 = -\vec{u}_1 - \vec{u}_2 + 2 \vec{u}_3, ~~\hat{e}_2 = - \vec{u}_1 + \vec{u}_3, ~~\hat{e}_3 = \vec{u}_1 + \vec{u}_2 - \vec{u}_3 $$ where $\vec{u}_i$ is the $i$th column of $P$. The linear combination yielding the $i$th basis vector should be placed in the $i$th column of $P^{-1}$, i.e. $$P^{-1} = \begin{bmatrix} -1 & -1 & 1 \\ -1 & 0 & 1 \\ 2 & 1 & -1\end{bmatrix}$$ Note that here it was easier to spot these linear combinations as the matrix contained some zeroes.
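A quick NumPy sanity check of the matrices above (my own sketch; with the columns of $P$ being eigenvectors, the relation checked is $A = PSP^{-1}$):
Python
import numpy as np

A = np.array([[-1, -2, 2], [4, 3, -4], [0, -2, 1]], dtype=float)
S = np.diag([3.0, -1.0, 1.0])                              # eigenvalues on the diagonal
P = np.array([[1, 0, 1], [-1, 1, 0], [1, 1, 1]], dtype=float)     # columns are eigenvectors
P_inv = np.array([[-1, -1, 1], [-1, 0, 1], [2, 1, -1]], dtype=float)  # inverse from the "trick"

print(np.allclose(P @ P_inv, np.eye(3)))     # True: P_inv really is the inverse of P
print(np.allclose(A, P @ S @ P_inv))         # True: A = P S P^{-1}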
H: What is the technique I should use here $\phi_{1}(x)=e^{x^{2}}, y^{\prime \prime}-4 x y^{\prime}+\left(4 x^{2}-2\right) y=0$ I am asked to solve this differential equation given one of its solutions. I am not familiar with this kind of equation; it looks like an ODE, but the coefficients are not constant, so I cannot use the characteristic polynomial. $$\phi_{1}(x)=e^{x^{2}}, \quad y^{\prime \prime}-4 x y^{\prime}+\left(4 x^{2}-2\right) y=0$$ AI: As suggested in the comments, the given solution suggests the substitution $$y=ue^{x^2}$$ which gives $$y'=(2xu+u')e^{x^2}$$ $$y''=((2+4x^2)u+4xu'+u'')e^{x^2}$$ and hence the original equation is equivalent to $$((2+4x^2)u+4xu'+u'')e^{x^2}-4x(2xu+u')e^{x^2}+(4x^2-2)ue^{x^2}=0$$ which reduces significantly to $$u''=0\implies y=(C_1+C_2x)e^{x^2}$$
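A minimal SymPy check (my own sketch) that the family obtained above solves the ODE:
Python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2')
y = (C1 + C2*x) * sp.exp(x**2)               # general solution found above
residual = sp.diff(y, x, 2) - 4*x*sp.diff(y, x) + (4*x**2 - 2)*y
print(sp.simplify(residual))                 # prints 0, so the ODE is satisfied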
H: Create plot for random walk Is there any tool that can create such a plot? AI: The position of a randomly walking object is given by: $$ S_n=\sum_{i=1}^{n}X_i $$ where in its simplest form, it is assumed that the $X_i$ are iid and $\Pr\{X_i=1\}=\Pr\{X_i=-1\}={1\over 2}$. To sketch $S_n$ in terms of $n$, you can consider the following code blocks:
MATLAB
N = 70;                      % Number of steps
X = 2*(rand(1,N)<0.5)-1;     % For producing i.i.d. X random variables
S = cumsum(X);               % For the random walk sequence
stem(S)
title('Random walk vs time')
Python
import numpy
import matplotlib.pyplot as plt

N = 70                                   # Number of steps
X = 2*(numpy.random.rand(N) < 0.5) - 1   # For producing i.i.d. X random variables
S = numpy.cumsum(X)                      # For the random walk sequence
plt.stem(S)
plt.title('Random walk vs time', fontsize=14)
plt.show()                               # display the figure when run as a script
H: Conjugate of quaternion doesn't give expected result I'm using this site to play with quaternions. All of my quaternions are unit quaternions. I find quaternion of some Euler Angles(x, y, z) by using the website -inputs are degree and ZYX order Euler- and then by inputting the conjugate of the founded quaternion, I expect to see Conjugate of my Euler Angles (-x, -y, -z) My flow is: Euler(Input) => Quaternion(Result) => Conjugate of the Quaternion(Input) => Euler Conjugate (Which is original Euler multiplied by -1)(Result) In Degree and ZYX format, I input values x = 70 y = 30 z = 0 And resulting quaternion is [x, y, z, w(scalar)] [ 0.5540323, 0.2120121, -0.1484525, 0.7912401 ] When I input conjugate of this quaternion, which is vector parts multiplied by -1: [ -0.5540323, -0.2120121, 0.1484525, 0.7912401 ] Resulting Euler angles as (Z Y X) are: [ x: -72.5047593, y: -9.8465479, z: 28.4812339 ] Which is not related to my first angles (70, 30, 0). Shouldn't the conjugate of a quaternion give results of Euler angles multiplied by -1, (-70, -30, 0) I tested the result of (-70, -30, 0) degrees and the resulting quaternion is [ -0.5540323, -0.2120121, -0.1484525, 0.7912401 ] Which has x and y components multiplied by -1, but z component is preserved. What is the point I'm missing in this problem? AI: You're missing two things. First, the conjugate quaternion should give you the inverse rotation, that is, a rotation that undoes your original Euler-angle rotation. The inverse rotation is not obtained just by changing the signs of the Euler angles. You can invert a rotation by reversing the rotation around each individual axis, but only if you apply it to the axes in reverse order. Merely changing the three angles in your set of Euler angles, you are claiming you can apply the rotations to the axes in the same order as before, which does not work in general. Second, Euler-angle representations are not unique. You could input a set of angles, get a quaternion, and then try converting that quaternion back to Euler angles; you are not guaranteed to get the same Euler angles back even in that case, where you are dealing with only one rotation. For example, this answer deals with one particular interpretation of Euler angles and one particular set of conversion functions to and from quaternions, where the second set of Euler angles will be different from the first if the $y$-axis rotation in the original Euler angles was greater than a right angle.
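As a supplement to the answer above, here is a small numerical check using SciPy's Rotation class (SciPy is my own choice here, not something from the question; it assumes the same intrinsic ZYX convention, with the angles listed in the order z, y, x):
Python
from scipy.spatial.transform import Rotation as R

# Intrinsic ZYX Euler angles from the question: z = 0, y = 30, x = 70 (degrees)
r = R.from_euler('ZYX', [0, 30, 70], degrees=True)
print(r.as_quat())              # [x, y, z, w] ~ [0.554, 0.212, -0.148, 0.791]

r_inv = r.inv()                 # the conjugate quaternion / inverse rotation
print(r_inv.as_quat())          # vector part negated, scalar part unchanged

# Its ZYX Euler angles are NOT simply (-0, -30, -70):
print(r_inv.as_euler('ZYX', degrees=True))   # ~ [28.48, -9.85, -72.50] in (z, y, x) order

# Reversing the axis order AND negating the angles does recover the inverse:
print(R.from_euler('XYZ', [-70, -30, 0], degrees=True).as_quat())
# same rotation as r_inv (the two quaternions may differ by an overall sign)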
H: Norm of convolution of $f$ and $g$ where $f \in L^1(R)$ and $g \in L^p(R)$ Here is the question: For $f \in L^1(R)$ and $g \in L^p(R)$, define $f*g(x)=\int_{- \infty}^\infty f(x-y)g(y)dy$ . prove that $f*g\in L^p(R)$ and $||f*g||_p\le||f||_1||g||_p$ This question is from Real and Complex analysis by Walter Rudin, on Chp 9 exercise Q4. My thought: imitate the proof in the book. I first prove that $f*g$ is measurable. By consider the double integral on $|f(x-y)g(y)|$ , I can only reach $||f*g||_p\le||f||_p||g||_p$ instead of $||f*g||_p\le||f||_1||g||_p$ But p-norm and 1-norm of f are not related. Thanks for any help. AI: Your problem is also a particular case of Young's inequality: Let $1\leq r,p,q\leq\infty$ satisfy $\tfrac1r=\tfrac1p+\tfrac1q-1$. If $f\in\mathcal{L}_p(\mathbb{R}^n,\lambda_n)$ and $g\in\mathcal{L}_q(\mathbb{R}^n,\lambda_n)$, then $f*g\in\mathcal{L}_r(\lambda_n)$ and $$ \|f*g\|_r\leq\|f\|_p\|g\|_q $$ Here is a short proof: For any $s\geq1$, let $s'$ be its conjugate, that is $\frac1s+\frac1{s'}=1$. From $$ \frac1r+\frac{1}{q'}+\frac{1}{p'}=\frac1r+\Big(1-\frac{1}{q}\Big)+ \Big(1-\frac{1}{p}\Big)=1 $$ we get $$ \big(1-\frac{p}{r}\Big)q'=p\Big(\frac1p-\frac1r\Big)q'= p\Big(1-\frac1q\Big)q'=p\\ \big(1-\frac{q}{r}\Big)p'=q\Big(\frac1q-\frac1r\Big)p'= q\Big(1-\frac1p\Big)p'=q $$ If $1<r,p,r<\infty$, then by H"older's inequality \begin{aligned} &|(f*g)(x)|\leq \int\Big(|f(y)|^{p/r}|g(x-y)|^{q/r}\Big)|f(y)|^{1-p/r}|g(x-y)|^{1-q/r}\,dy\\ &\leq \Big(\int|f(y)|^p|g(x-y)|^q\,dy\Big)^{1/r} \Big(\int|f(y)|^{(1-p/r)q'}\,dy\Big)^{1/q'} \Big(\int|g(x-y)|^{(1-q/r)p'}\,dy\Big)^{1/p'}\\ &=\big(|f|^p*|g|^q(x)\big)^{1/r}\|f\|^{p/q'}_p\|g\|^{q/p'}_q \end{aligned} Hence $$ \int|f*g(x)|^r\,dx\leq\big(\int|f|^p*|g|^q(x)\,dx\big) \|f\|^{pr/q'}_p\|g\|^{qr/p'}_q\\ =\|f\|^p_p\|g\|^q_q\|f\|^{pr/q'}_p\|g\|^{qr/p'}_q =\|f\|^r_p\|g\|^r_q $$ If $r=\infty$ and $q=p'$, then a direct application of H"older's inequality and the symmetric and translation invariance properties of Lebesgue measure shows that $$ \|f*g(x)|\leq\|f\|_p\|g\|_q,\qquad x\in\mathbb{R}^n. $$ Hence $\|f*g\|_\infty\leq\|f\|_p\|g\|_q$ The particular case $r=p$, $q=1$ can also be proved by an application of the generalized Minkowski inequality (Minkowski's integral inequality): For any $1\leq p<\infty$, and any measurable function $\phi:(X,\mathscr{B},\mu)\otimes(Y,\mathscr{F},\nu)\rightarrow\mathbb{R}$, and $\mu$, $\nu$ are $\sigma$--finite, \begin{aligned} \Big(\int_X\Big|\int_Y \phi(x,y)\, \nu(dy)\Big|^p\,\mu(dx)\Big)^{\tfrac{1}{p}}\leq \int_Y \Big(\int_X |\phi(x,y)|^p\,\mu(dx)\Big)^{\tfrac{1}{p}}\,\nu(dy) \end{aligned} with $\nu(dy)=f(y)\,dy$, $\mu(dx)=dx$ and $\phi(x,y)=f(x-y)$. \begin{aligned}\Big(\int\Big|\int g(x-y)f(y)\,dy\Big|^p\,dx\Big)^{1/p}&\leq \int\Big(\int|g(x-y)|^p\,dx\Big)^{1/p}|f(y)|\,dy\\ &=\int\Big(\int|g(x)|^p\,dx\Big)^{1/p}|f(y)|\,dy=\|g\|_p\|f\|_1 \end{aligned}
H: Monopoly: pricing. The total costs of a monopolistic firm are $C_T = 10q + 2q^2$. Assuming that the firm decides to produce $q^* = 10$ and that for that level of production the price elasticity of demand is equal to $|\varepsilon_{q,p}| = 3$, determine the price charged by the company. My partial solution: marginal cost $= 10 + 4q$; having $q=10$ gives marginal cost $= 50$. The reference solution is: price $= 75$. How can I achieve this? Thank you for considering my request. AI: The general pricing rule (https://en.wikipedia.org/wiki/Lerner_index) is that $$ \dfrac{p-c'}{p} = \dfrac{1}{|\varepsilon|} $$ or $$ \dfrac{p-50}{p}=\dfrac{1}{3} $$ so that $$ 3p^* = 150 + p^* $$ or $$ p^* = 75. $$
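Restated numerically (my own sketch): solving the Lerner condition for the price, with marginal cost $c'=50$ and $|\varepsilon|=3$, gives $p = c'/(1 - 1/|\varepsilon|)$.
Python
mc, eps = 50, 3
p = mc / (1 - 1/eps)      # Lerner rule (p - mc)/p = 1/|eps| solved for p
print(p)                  # 75.0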
H: If $AX=BX$ where $A$, $B$, $X$ are all square matrices, when can I claim $A$ = $B$? My matrix knowledge is on the rusty end and this question has bothered me for a while now. I have always assumed that if $A$, $B$, $X$ are all square matrices, and when $AX=BX$ where $X$ is invertible, I can claim $A$ = $B$ by multiplying by $X^{-1}$ on both sides. This is obviously false in the case of the eigendecomposition of a square matrix: $AP = DP$ where $D$ is a diagonal matrix with the eigenvalues of $A$ on the diagonal and $P$ is a square matrix with its columns being the eigenvectors of $A$. In this case I obviously cannot claim $A = D$ just because $P$ is invertible. So what are the conditions under which I can apply this technique? AI: It is true that if $X$ is invertible then $AX = BX$ implies $A=B$. I think you're a bit confused in your expression for the diagonalizable matrix. It's $A P = PD$.
H: Let $a\in\mathbb{C}, |a|=1$ and $c$ an irrational real number. Prove: $a^c$ is dense in the unit circle. Let $a\in\mathbb{C}$ a complex number such that $|a|=1$ and $c$ an irrational real number. Prove: The set $a^c$ is dense in the unit circle. The problem is taken from Notes on Complex Function Theory by Donald Sarason. Would appreciate any help. AI: Let $c$ and $a$ be fixed as in the question. For any complex number $w$ with $|w|=1$ define $$ S_w:=\{w\exp(2\pi i nc):n\in\mathbb{N}\} $$ According to your specifications if $s=Arg(a)$ then the set you are asking about is precisely $S_w$ where we choose $w=\exp(ics)$. However we can show $S_w$ is dense for any choice of $w$ on the unit circle. There are two steps. Show $S_1$ is dense (details below). Given arbitrary $w$, write $S_w=w\cdot S_1$. So $S_w$ is a uniform rotation of the dense set $S_1$ and hence is dense itself. To do step 1, this is the usual "irrational rotation". There are many sources with this proof. Here are two from the network Dense set in the unit circle- reference needed Prove that the orbit of an iterated rotation of 0 (by (A)(Pi), A irrational) around a circle centered at the origin is dense in the circle. Added later: In light of the comments below the question, I can help clarify. While $a$ and $c$ are fixed, the intention seems to be to not fix a branch of logarithm, leading to a set of values for $a^c=\exp(c\log(a))$, in which $\log(a)$ takes any value $i(Arg(a)+2\pi n)$ as $n$ ranges over natural numbers. Since $c$ is irrational we obtain a countable set of values, and this is the set of values that one wants to show is dense. In other words If $z$ is an irrational number, then the countably infinitely many choices of $\log w$ lead to infinitely many distinct values for $w^z$. Source: https://en.wikipedia.org/wiki/Exponentiation#Complex_exponents_with_complex_bases Note that if $a=1$ then formally one can still obtain a set of values in the same way, which would be the set $S_1$ above. Though this is abusing the notation of complex exponentiation.
H: Taylor series for 1/(1-x). I am trying to write a Taylor series for $$ f(x)=\frac{1}{1-x}, \ x<1 \ .$$ In most sources, it is said, that this function can be written as a Taylor series, if $$ \left| x \right|<1. $$ However, I don't get the same condition for x. Because $$ f^{\left(n\right)}\left(x\right)=\frac{n!}{\left(1-x\right)^{n+1}},$$ the remainder term (or its abolute value) is $$ \left|\frac{x^{n+1}}{\left(1-c\right)^{n+2}}\right|=\frac{\left|x\right|^{n+1}}{\left|1-c\right|^{n+2}}{,}\ \mathrm{where} \ 0\le c\le x<1\ \mathrm{or}\ x\le c\le0.$$ If $$ 0 \le c \le x<1, $$ then $$ 0\ge-c\ge-x>-1\ \Leftrightarrow\ 1\ge1-c\ge1-x>0\ \Leftrightarrow\ 0<\left(1-x\right)^{n+2}\le\left(1-c\right)^{n+2}\le1\ \Leftrightarrow\ 1\le\frac{1}{\left(1-c\right)^{n+2}}\le\frac{1}{\left(1-x\right)^{n+2}} \Leftrightarrow\ \left| x^{n+1} \right|\le \left| \frac{x^{n+1}}{\left(1-c\right)^{n+2}} \right| \le \left| \frac{x^{n+1}}{\left(1-x\right)^{n+2}} \right|=\begin{cases} e^{\ln\frac{x}{1-x}\cdot n+\left(\ln x-2\ln\left(1-x\right)\right)}\ , \ 0<x<1&\\ 0\ , \ x=0& \end{cases} .$$ When $$ n \rightarrow \infty, $$ for remainder to converge to zero, x has to be such that $$ 0 \le x < \frac{1}{2}.$$ Then the lower bound of the remainder $$ \left|x^{n+1} \right| \rightarrow 0 $$ and the upper bound $$ \left| \frac{x^{n+1}}{\left(1-x\right)^{n+2}}\right| \rightarrow 0 .$$ If $$ x \le c \le 0 ,$$ then $$ \ -x\ge-c\ge0\ \Leftrightarrow\ 1-x\ge1-c\ge1\ \Leftrightarrow\ \left(1-x\right)^{n+2}\ge\left(1-c\right)^{n+2}\ge1\ \Leftrightarrow\ \frac{1}{\left(1-x\right)^{n+2}}\le\frac{1}{\left(1-c\right)^{n+2}}\le1\ \Leftrightarrow\ \left|x\right|^{n+1}\ge\left|\frac{x^{n+1}}{\left(1-c\right)^{n+2}}\right|\ge\left|\frac{x^{n+1}}{\left(1-x\right)^{n+2}}\right|=\frac{\left|x\right|^{n+1}}{\left|1-x\right|^{n+2}}=\begin{cases} e^{\ln\left|\frac{x}{1-x}\right|\cdot n+\left(\ln\left|x\right|-2\ln\left|1-x\right|\right){,}\, x<0}&\\ 0\ , \ x=0& \end{cases} . $$ When $$ n \rightarrow \infty, $$ for remainder to converge to zero, x has to be such that $$ -1 < x \le 0.$$ Then the upper bound of the remainder $$\left|x\right|^{n+1} \rightarrow 0 $$ and the lower bound $$ \frac{\left|x\right|^{n+1}}{\left|1-x\right|^{n+2}} \rightarrow 0. $$ Thus I would say, that function f can be written as Taylor series only when $$ -1 < x < \frac{1}{2} . $$ Isn't this true? AI: It is a well-known and indisputable (even elementary) fact that for all $x\ne1$ and all natural $n$ $$\sum_{k=0}^n x^k=\frac1{1-x}-\frac{x{^{n+1}}}{1-x},$$ implying that $\forall|x|<1$, $$\lim_{n\to\infty}\sum_{k=0}^n x^k=\frac1{1-x}.$$ This falsifies your claim. Needless to say, $$\left.\left(\frac1{1-x}\right)^{(k)}\right|_{x=0}=k!$$ Note that the remainder can be evaluated exactly as $$\sum_{k=n+1}^\infty x^k=\frac{x^{n+1}}{1-x}.$$ Comparing to the Taylor-Lagrange expression of the remainder, $$\frac{f^{(n+1)}(c)}{(n+1)!}x^{n+1}=\frac{x^{n+1}}{(1-c)^{n+2}},$$ we have $$c=1-\sqrt[n-2]{1-x}\le1.$$ This formula shows that a partial sum plus the remainder yield a correct evaluation of the function for all $x<1$ and cannot work for $x\ge1$ (because of the singularity). In fact, the study of the Taylor-Lagrange remainder cannot tell you about convergence.
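A quick numerical illustration of the standard fact quoted in the answer (my own sketch): the partial sums of $\sum_k x^k$ converge to $\frac{1}{1-x}$ for any $|x|<1$, including values of $x$ between $\frac12$ and $1$ where the question claimed the expansion fails.
Python
x = 0.9                                  # |x| < 1 but x >= 1/2
partial = sum(x**k for k in range(500))  # partial sum of the geometric series
print(partial, 1/(1 - x))                # both ~ 10.0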
H: positive semidefiniteness of the Hessian of $f\circ g$ Let $\Omega\subset\mathbb{R}^n$ be open, $g\in C^2(\Omega,\mathbb{R}^n)$ and $D^2g(x)$ be positive semidefinite for all $x\in\Omega$. Let furthermore $f\in C^2(\mathbb{R},\mathbb{R})$ with $f',f''\geq 0$. Show that $D^2(f\circ g)(x)$ is positive semidefinite, for all $x\in\Omega$. I've tried to compute this explicitly using matrices and got stuck: $y^TD^2(f\circ g)(x)\cdot y=y^TD(D(f\circ g)(x))(y)=y^T(D(Df(g(x))\circ Dg(x))(y))$. How do I continue from here (or is there an easier method)? AI: You can verify this using the chain rule. First, for $x \in \Omega$, we have $D(f \circ g)(x) = Df(g(x)) Dg(x)$. Differentiating again, making use of the product rule, we have $D^2(f \circ g)(x) =D^2f(g(x)) Dg(x)^\top Dg(x) + Df(g(x)) D^2g(x)$. Let $x \in \Omega, v \in \mathbb{R}^n$. Then: \begin{align*} v^\top D^2(f\circ g)(x)v &= v^\top[D^2f(g(x)) Dg(x)^\top Dg(x) + Df(g(x)) D^2g(x))]v \\ &= v^\top [D^2f(g(x)) Dg(x)^\top Dg(x)] v + v^\top [Df(g(x)) D^2g(x))]v \end{align*} Consider the first term. $$v^\top [D^2f(g(x)) Dg(x)^\top Dg(x)] v = D^2f(g(x)) v^\top Dg(x)^\top Dg(x) v = D^2 f(g(x)) (Dg(x)v)^2 \geq 0$$ since $D^2 f(g(x)) \geq 0$ as $f'' \geq 0$. Consider the second term. $$v^\top [Df(g(x)) D^2g(x))]v = Df(g(x)) (v^\top D^2g(x)v) \geq 0$$ since $Df(g(x)) \geq 0 $ as $f' \geq 0$ and $D^2g(x)$ is positive semidefinite. Putting the two inequalities together, we conclude $D^2(f\circ g)(x)$ is positive semidefinite.
H: Show that $g_n$ converges to $g$ uniformly. Problem Let $f:\Bbb{R}\times[0,1]\rightarrow\Bbb{R}$ be a continuous function and $\{x_n\}$ a sequence of reals converging to $x$. Define $g_n(y)=f(x_n,y),\hspace{0.5cm}0\le y\le1$ $g(y)=f(x,y),\hspace{0.9cm}0\le y\le1$. Show that $g_n$ converges to $g$ uniformly on $[0,1]$. My attempt From continuity of $f$, $g_n$ pointwise converges to $g$ on $[0,1]$. Now for given $\epsilon>0$ and $0\le y\le1$, there exists a positive integer $n_y$ such that $|g_n(y)-g(y)|<\epsilon$, for all $n\ge n_y$. Hence we obtain that $\{(g_{n_y}(y)-\epsilon,g_{n_y}(y)+\epsilon)\}_{0\le y\le1}$ is an open covering of the image of $g$. Now from continuity of $f$, $g$ is continuous. So $\{g(y):0\le y\le1\}$ is compact. Hence there are $y_1,y_2,\dots,y_k\in[0,1]$ such that $\{(g_{n_{y_i}}(y_i)-\epsilon,g_{n_{y_i}}(y_i)+\epsilon)\}_{1\le i\le k}$ covers the image of $g$. If we put $N=\operatorname{max}\{n_{y_i}:i=1,2,\dots,k\}$ then for all $y\in[0,1]$, $|g_n(y)-g(y)|<\epsilon$, for all $n\ge N$. Thus $g_n$ converges uniformly to $g$ on $[0,1]$. Is the proof correct? If not then please pinpoint or improve. Thank you. AI: Another approach: Note the set $\{x_1,x_2,\dots\}\cup \{x\}$ is bounded, hence is contained in $[-M,M]$ for some positive $M.$ Since $[-M,M]\times [0,1]$ is compact, $f$ is uniformly continuous there. Let $\epsilon >0.$ Then there exists $\delta>0$ such that $z,w\in[-M,M]\times [0,1],$ $ |z-w|< \delta,$ implies $|f(z)-f(w)|<\epsilon.$ Since $x_n\to x,$ we can choose $N$ such that $|x_n-x|<\delta$ for $n>N.$ For such $n,$ $|(x_n,y)-(x,y)| = |x_n-x|<\delta$ for all $y\in [0,1].$ Hence for $n>N,$ $|f(x_n,y)-f(x,y)|=|g_n(y)-g(y)|<\epsilon$ for all $y\in [0,1],$ as desired.
H: Solution to partial differential equation by separating variables Could someone please show me how to calculate this math problem? By separating the variables, find the solution to the partial differential equation $$\frac{\partial^{2} u}{\partial x^{2}}-\frac{1}{4} \frac{\partial^{2} u}{\partial t^{2}}-u=0$$ in the area $x \ge 0$ with the boundary conditions $$u(0, t)=\cos t, \quad \lim \limits_{x \rightarrow+\infty} u(x, t)=0$$ Thanks in advance! AI: The solution to this PDE is as follows $$ \frac{\partial^{2} u}{\partial x^{2}}-\frac{1}{4} \frac{\partial^{2} u}{\partial t^{2}}-u=0 $$ and $$ u(0, t)=\cos t, \quad \lim \limits_{x \rightarrow+\infty} u(x, t)=0 $$ The assumed solution is $u(x, t)=\xi(x)T(t)$ so that the PDE becomes $$ \xi''(x)T(t)-\frac{1}{4}\xi(x)T''(t)-\xi(x)T(t)=0 $$ Then, to separate the variables rearrange to see that $$ \frac{\xi''(x)}{\xi(x)}-\frac{T''(t)}{4T(t)}-1=0 $$ Next we realize that $\xi''(x)/\xi(x)$ and $T''(t)/T(t)$ must be constant because $x$ and $t$ are independent variables. We choose $T''(t)/T(t)=-\alpha^2$ to be negative because of the form of the boundary condition at $x=0$, and thus $\xi''(x)/\xi(x)=1-\alpha^2/4$. This leads us to $$ u(x, t)=\left(c_1 \sin\left(\alpha t\right)+c_2\cos\left(\alpha t\right)\right)\left(c_3 \exp\left(x\sqrt{1-\alpha^2/4}\right)+c_4\exp\left(-x\sqrt{1-\alpha^2/4}\right)\right) $$ Applying the BCs, we see that $\alpha=1$, $c_1=c_3=0$, $c_2c_4=1$, yielding the final solution $$ u(x, t)=\cos(t)\exp\left(-\frac{\sqrt{3}}{2}x\right) $$
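A short SymPy check (my own sketch) that the solution above satisfies the PDE and the boundary condition at $x=0$:
Python
import sympy as sp

x, t = sp.symbols('x t', real=True)
u = sp.cos(t) * sp.exp(-sp.sqrt(3)/2 * x)

pde = sp.diff(u, x, 2) - sp.Rational(1, 4)*sp.diff(u, t, 2) - u
print(sp.simplify(pde))        # 0, so the PDE is satisfied
print(u.subs(x, 0))            # cos(t), matching u(0, t)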
H: Continuity of a two variables function - $\epsilon-\delta$ definition Let's suppose we have a function $$u=u(x,t):[-\pi,\pi]\times(0,+\infty)\to \mathbb{R}$$ such that $u$ is continuous on $[-\pi,\pi]\times[t_0,+\infty)$ for every $t_0>0$. Can we use the $\epsilon-\delta$ definition (ie, definition of continuity between metric spaces) in order to prove that $u$ is continuous on the whole $[-\pi,\pi]\times(0,+\infty)$? Any hint would be really appreciated. Thanks a lot in advance. AI: Yes; let $(x,t) \in [- \pi, \pi] \times (0, \infty)$ and let $\epsilon > 0.$ Since we know $u$ is continous on $[- \pi, \pi] \times [t/2, \infty)$, we can find a $\delta' > 0$ such that $|(x', t') - (x, t)| < \delta'$ and $(x', t') \in [- \pi, \pi] \times [t/2, \infty)$ implies $|u(x', t') - u(x, t)| < \epsilon$. You can now verify that $\delta:= \text{min}\{\delta', t/2\}$ works on $[- \pi, \pi] \times (0, \infty)$.
H: Vector space over field span the same space . Let V be vector space over field $R$, and $v_1, v_2, v_3 \in V$. Prove that span(B) = span(A), when $B= \{v_1 + 2 v_2, v_1 + v_2 - v_3, 5v_3\}$ and $A = \{v_1, 4v_2, 6v_3\}$ It's clearly to prove that when $v_1, v_2, v_3$ are independent. But I don't have idea how to show without this assumption. AI: You don't need LI. Show that each of the vectors in $A$ can be expressed as a linear combination of vectors in $B$ and vice versa. It is clear for set $B$. But to express each vector in $A$ as a linear combination of vectors in $B$, consider the following: \begin{align*} v_1&=\color{red}{-1}(v_1+2v_2)+\color{red}{2}(v_1+v_2-v_3)+\color{red}{\frac{2}{5}}(5v_3)\\ 4v_2&=\color{red}{4}(v_1+2v_2)-\color{red}{4}(v_1+v_2-v_3)-\color{red}{\frac{4}{5}}(5v_3)\\ 6v_3&=\color{red}{\frac{6}{5}}(5v_3). \end{align*} Thus $A \subset \text{Span}(B)$, consequently $\text{Span}(A) \subseteq \text{Span}(B)$
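A small numerical check of the red coefficients above (my own sketch). It treats $v_1, v_2, v_3$ as coordinate directions purely to verify the arithmetic; the answer itself needs no independence assumption.
Python
import numpy as np

B = np.array([[1, 1, 0],
              [2, 1, 0],
              [0, -1, 5]], dtype=float)   # columns: v1+2v2, v1+v2-v3, 5v3 in (v1,v2,v3)-coordinates
targets = np.array([[1, 0, 0],
                    [0, 4, 0],
                    [0, 0, 6]], dtype=float)   # columns: v1, 4v2, 6v3

coeffs = np.linalg.solve(B, targets)       # solve B @ coeffs = targets, column by column
print(coeffs.T)
# rows: [-1, 2, 0.4], [4, -4, -0.8], [0, 0, 1.2] -- exactly the coefficients in the answer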
H: Simply Connected Complex Domains I am studying Complex Analysis using Sarason's text. In it he says that a domain $G \subset \mathbb{C}$ is simply connected if the extended complement $\bar{\mathbb{C}} \setminus G$ is a connected set. Throughout the text a domain is a nonempty, open connected set. Is there a way to prove that convex domains are simply connected using just this definition? Sarason provides a handful of other notions that are equivalent to simple connectedness (namely the Winding Number Criterion) but I'm having trouble wrapping my head around how an extended complement can be regarded as a connected set. AI: Let $ G \subseteq \mathbb{C} $ be a convex domain and take a point $ z \in \mathbb{C} \setminus G $. Since $ G $ is convex, its intersection with the horizontal line through $z$ is a convex set (an interval) not containing $z$, so either $ \{z + t \mid t \geq 0\} $ or $ \{z - t \mid t \geq 0\} $ is disjoint from $ G $. This ray connects $ z $ to the point at infinity in $ \bar{\mathbb{C}} $, so every point of $\bar{\mathbb{C}} \setminus G$ lies in the same connected component as $\infty$, and the extended complement is connected.
H: A problem concerning a parallelogram and a circle Sorry for the ambiguous title. If you can phrase it better, feel free to edit. "A parallelogram $ABCD$ has sides $AB = 16$ and $AD = 20$. A circle, which passes through the point $C$, touches the sides $AB$ and $AD$, and passes through sides $BC$ and $CD$ at points $M$ and $N$, such that $\frac{BM}{MC} = \frac{1}{8}$. Find $\frac{DN}{NC}$." Apparently, I'm supposed to solve this using triangle similarity, because that's the chapter's name (But I'm open to other answers too!). I've tried marking the center of the circle and going from there, creating triangles and seeking similarity. But couldn't really go far without it getting overly complicated. Here's the picture: AI: We have to be careful because the given picture is misleading, actually $K$ and $L$ lie outside $ABCD$. We have $BM\cdot BC = \frac{1}{9}BC^2 = \frac{400}{9}=BK^2$, hence $BK=\frac{20}{3}$ and $AL=AK=16+\frac{20}{3}=\frac{68}{3}$, such that $DL=\frac{8}{3}$. This gives $DN\cdot DC=\frac{64}{9}$, hence $DN=\frac{4}{9}$ and $\frac{DN}{NC}=\frac{4/9}{16-4/9}=\color{red}{\frac{1}{35}}$. There is a second solution with $\widehat{DAB}\approx 96.38^\circ$ and $\frac{DN}{NC}=\color{red}{\frac{4}{5}}$; in this case $K$ and $L$ properly lie on $AB$ and $AD$. This is probably the intended solution if we label the vertices of $ABCD$ counter-clockwise, as usually done.
H: Euclidean algorithm worst case: why is it never more than five times the number of digits (base 10)? I read the Euclidean algorithm page on Wikipedia (https://en.wikipedia.org/wiki/Euclidean_algorithm), but I was stuck at the worst-case proof. In the second paragraph, it says: For if the algorithm requires $N$ steps, then b is greater than or equal to $F_{N+1}$ which in turn is greater than or equal to $φ^{N−1}$, where $φ$ is the golden ratio. Since $b ≥ φ^{N−1}$, then $N−1 ≤ log_φb$. Since $log_{10}φ > 1/5$, $(N − 1)/5 < log_{10}φ×log_φb = log_{10}b$. Thus, $N ≤ 5×log_{10}b$. Thus, the Euclidean algorithm always needs less than $O(h)$ divisions, where $h$ is the number of digits in the smaller number $b$. I just can't understand why $b≥φ^{N−1}$. Is it because $F_{N+1}≥φ^{N−1}$? How do I prove the relation between the Fibonacci number $F_{N+1}$ and the golden ratio $φ^{N−1}$? AI: I don't quite follow your notation, but I guess what you really want to show is that $$F_{N}\ge \varphi^{N-2}$$ where $\varphi$ is the golden ratio. Since $\varphi^2=\varphi+1$, one can show easily by induction on $N$ that $$\varphi^N=F_N\varphi+F_{N-1}$$ Hence $$\varphi^N=F_N\varphi+F_{N-1}\le F_N\varphi+F_{N}=F_N(\varphi+1)=F_N\varphi^2$$ so you get the result.
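A small numerical illustration (my own sketch) of the two facts above: $F_N \ge \varphi^{N-2}$, and the resulting bound of at most five division steps per decimal digit of the smaller input, with consecutive Fibonacci numbers as the worst-case inputs.
Python
def gcd_steps(a, b):
    """Number of division steps Euclid's algorithm takes on (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

phi = (1 + 5**0.5) / 2
F = [1, 1]                                   # F[0] = F_1, F[1] = F_2
while len(F) < 40:
    F.append(F[-1] + F[-2])

for N in range(3, 40):
    assert F[N - 1] >= phi**(N - 2)          # F_N >= phi^(N-2)
    steps = gcd_steps(F[N], F[N - 1])        # worst case: consecutive Fibonacci numbers
    digits = len(str(F[N - 1]))              # digits of the smaller input b = F_N
    assert steps <= 5 * digits               # the five-steps-per-digit bound
print("all checks passed")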
H: Does the operator $(\hat{f}\cdot m )^\vee$ maps Schwartz in it self? Given $m \in L^\infty$ and $\phi \in \mathcal{S}$ a Schwartz function, is it true that $(\hat{f}\cdot m)^\vee$ is a Schwartz function?? I trying to prove this so I could conclude that operator of the form $(\hat{f}\cdot m)^\vee$ maps $\mathcal{S}$ to it self. Attempt: Given $\alpha, \beta$ multi-index, we have to prove that $$\sup_{x \in \mathbb{R^n}}|x^\alpha\partial^\beta(\hat{f}\cdot m)^\vee(x)| < \infty. $$ When $\alpha = 0$, using some properties of Fourier tranform, we get $$\partial^\beta(\hat{f}\cdot m)^\vee(x) = ((2\pi i \xi)^\beta\hat{f}(\xi)m(\xi))^\vee(x) = ((\partial^\beta f)^\wedge\cdot m)^\vee(x).$$ Then, taking the absolute value of the expression above and by definition of inverse Fourier transform, \begin{align*} |\partial^\beta(\hat{f}\cdot m)^\vee(x)| = & \left| \int_\mathbb{R^n} (\partial^\beta f)^\wedge(\xi)m(\xi) e^{2\pi i \xi\cdot x} d\xi \right| \\ \leq & \int_\mathbb{R^n} |(\partial^\beta f)^\wedge(\xi)||m(\xi)|d\xi \\ \leq &\|m\|_{L^\infty} \int_\mathbb{R^n} |(\partial^\beta f)^\wedge(\xi)|d\xi \\ =& \|m\|_{L^\infty} \|(\partial^\beta f)^\wedge\|_{L^1}. \end{align*} The $L^1$-norm of $(\partial^\beta f)^\wedge$ is finite, since this is a Schwartz function. My problem is for $\alpha \neq 0$. For simplicity and in view of properties of Fourier transform, I changed $x^\alpha$ for $(-2\pi i x)^\alpha$ and I want to show that the supreme of $|(-2\pi i x)^\alpha \partial^\beta(\hat{f}\cdot m)^\vee(x)|$ over all $x \in \mathbb{R^n}$ is finite: \begin{align*} |(-2\pi i x)^\alpha \partial^\beta(\hat{f}\cdot m)^\vee(x)| = & |(-2\pi i x)^\alpha ((\partial^\beta f)^\wedge\cdot m)^\vee(x)| \\ = & |[\partial^\alpha((\partial^\beta f)^\wedge \cdot m)]^\vee(x)|. \end{align*} How do I proceed from here?? Does it make sense the derivative $\partial^\alpha((\partial^\beta f)^\wedge \cdot m)$ ?? AI: If $(\hat{f} \cdot m )^\vee$ is Schwarz, then so is $\hat{f}\cdot m$ (as the Fourier transform sends Schwarz space into itself). However, $\hat{f} \cdot m$ needs not even to be continuous (as we only assume $m\in L^\infty$). So, no. The function $(\hat{f} \cdot m)^\vee$ needs not to be Schwarz.
H: Split $C[0,1]$ into direct sum of two infinite-dimensional subspaces How to split $C[0,1]$ in the form of a direct sum of two infinite-dimensional subspaces? AI: A more concrete decomposition. For every continuous function $f$ on $[0,1]$ let $f^*$ be the function $f(x)+f(1-x)$ and $f^{**}$ be the function $f(x)-f(1-x)$. Both sets of functions $f^*$ and $f^{**}$ are infinite-dimensional subspaces, their intersection consists of the zero function only, and their sum is the whole space of continuous functions on the unit interval, since $f=\frac12(f^*+f^{**})$ for every $f$.
H: Measure theory problem - Integrable functions- Showing f=0 a.e Suppose $f \in L^1(R) $ and satisfies $\limsup_{h\to0}\int_R |\frac{f(x+h)-f(x)}{h}| = 0$ then show that $f=0$ a.e. I'm not really sure how to approach this problem ( I have a feeling that we need to use the dominated convergence theorem but no idea how to apply it) and what's the significance of almost everywhere in this question? AI: It's not true. Consider $$f=1_{[0,1]} $$ the indicator function of the unit interval. Then we have for $0.1 > h>0$ that $$f(x+h)-f(x) = 1_{[-h,0)} - 1_{[1-h,1)} $$ and so we get $$\int_R f(x+h)-f(x) dx = h - h = 0$$ The same result we get for $-0.1 <h <0$ and so we get $$ \limsup_{h\to0}\int_R \frac{f(x+h)-f(x)}{h} dx = \limsup_{h\to0}\frac {1}{h}\int_R f(x+h)-f(x) dx = 0$$
H: Calculate $\lim_{n \rightarrow \infty} \phi_{\frac{S_n}{n}}(t)$ They give me $X_1, X_2, X_3, ...$ random variables that are independent and with the same distribution and they ask me to calculate $\lim_{n \rightarrow \infty} \phi_{\frac{S_n}{n}}(t)$. Note: $S_n = X_1 + ... + X_n$ and $E(X_i) = \mu$. Well my first idea is to see what $\phi_{\frac{S_n}{n}}(t)$ is: $\phi_{\frac{S_n}{n}}(t) = \phi_{S_n}(\frac{t}{n}) = (\phi_{X_1}(\frac{t}{n}))^n $ $\phi_{X_1}(\frac{t}{n}) = E(e^{\frac{itX}{n}}) = \int_{\Omega}e^{\frac{itX}{n}}dP = \int_{-\infty}^{\infty}e^{\frac{itx}{n}}f(x)dx$ But from here I don't know how I can continue. AI: By the strong law of large numbers you get $$\frac{S_n}{n} \to \mu \quad a.s. $$ and this implies $$ \phi_{\frac{S_n}{n}}(t) \to \phi_{\mu}(t) = e^{it\mu} $$ pointwise, by the dominated convergence theorem applied to $E\big(e^{itS_n/n}\big)$, which leads to the wanted result.
H: Self-adjoint operator $H$ is not in the $C^*$-algebra generated by unitary operator $U$? Let $U, H$ be operators on Hilbert space $L^2\left(\mathbb{T}, \frac{d \theta}{2\pi}\right)$($\mathbb{T}$ is unit circle $S^1$), for any $f\in L^2\left(\mathbb{T}, \frac{d \theta}{2\pi}\right)$, $$ (Uf)(e^{i\theta}) = e^{i\theta}f(e^{i\theta}), (Hf)(e^{i\theta}) = \theta f(e^{i\theta}), \theta\in [0,2\pi). $$ Why $H$ is not in the $C^*$-algebra generated by $U$? Clearly $U$ is unitary and $H$ is self-adjoint. It seems that $U=\sum_{n=0}^{\infty} \frac{(iH)^n}{n!}$. And we have a theorem(?): $C^*(U)=C(\sigma(U))$, where $C^*(U)$ is the $C^*$-algebra generated by $U$ and $I$, $C(\sigma(U))$ is the set of all continuous functions from $\sigma(U)\to \mathbb{C}$.(I am not quite sure...) Could you please help me? Thank you in advance! AI: Let's observe something first: $U$ is the multiplication operator by $u(e^{i\theta})=e^{i\theta}$ while $H$ is the multiplication operator by $h(e^{i\theta})=\theta$. We have a $*$-homomorphism (i.e. a map that preserves addition, multiplication and involutions) $$L^\infty(\mathbb{T})\to\mathcal{B}\big(L^2(\mathbb{T})\big)$$ given by $\varphi\mapsto M_\varphi$, where $M_\varphi:L^2(\mathbb{T})\to L^2(\mathbb{T})$ acts as $M_\varphi(f)=\varphi\cdot f$ (pointwise multiplication). So $U=M_u, H=M_h$. This $*$-homomorphism is actually isometric (I can add the details for this, but you can find it in other posts here as well). Thus the image of this $*$-homomorphism is actually a $C^*$-algebra, say $\mathcal{M}\subset \mathcal{B}\big(L^2(\mathbb{T})\big)$. Now our $*$-homomorphism is an isomorphism of $C^*$-algebras between $L^\infty(\mathbb{T})$ and $\mathcal{M}$. Your question, "is $H$ contained in $C^*(U)$?", is equivalent to "is $h$ contained in $C^*(u)$?" via this isomorphism. So we have to think about the $C^*$-algebra $L^\infty(\mathbb{T})$. Involution here is simply taking the conjugate function. The $C^*$-algebra generated by a normal element $u$ is the norm-closure of all the polynomials $p(u,u^*)$, where $p(z,w)\in\mathbb{C}[z,w]$. Now since $u$ is a unitary, we can immediately see that $$C^*(u)=\overline{\{p_1(e^{i\theta})+p_2(e^{-i\theta}): p_1(z),p_2(z)\in\mathbb{C}[z]\}}=\text{ the closure of trigonometric polynomials }$$ where the closure is taken in $\|\cdot\|_\infty$. This is equal to $C(\mathbb{T})$, the space of continuous function over the unit circle. Is $e^{i\theta}\mapsto \theta$ continuous over $\mathbb{T}$? No, because as $e^{i\theta}\to 1$, $\theta$ bumbles between $0$ and $2\pi$. So $h\not\in C^*(u)$.
H: The natural domain of $f(x)=\frac{\sqrt{5−x^2}}{(x−1)(2x−1)}$ Background As there is a radical in the numerator, this restriction would need to be applied first. $$f(x)=\frac{\sqrt{5−x^2}}{(x−1)(2x−1)}$$ Thus, $x=-\sqrt{5}$, and for the denominator, $x=1, x=\frac{1}{2}$ The natural domain would then be: $[-\sqrt{5},\frac{1}{2})∪(\frac{1}{2},1)∪(1,\sqrt{5}]$ Am I on the right path here? Also, am I correct in saying this natural domain ends at $\sqrt{5}$, rather than $∞$? AI: Your answer is correct, but your explanation could be clearer. You didn't say what "$x = -\sqrt{5}$" means, and what about $+\sqrt{5}$? I would say that from the radical, we know $5 - x^2 \geq 0$, so $x^2 \leq 5$, so $-\sqrt{5} \leq x \leq \sqrt{5}$. Similarly for the values where the denominator would be zero: I would either use words like "the denominator cannot be zero, which would happen at $x = 1$, $x = \frac 12$", or just write $x \neq 1, x \neq \frac 12$ to match the actual conclusion. For a way to double check whether the natural domain stops at $\sqrt{5}$ or extends to $+\infty$, imagine plugging in some really large positive number for $x$. If $x$ is large and positive, then $5-x^2$ is large and negative, and $\sqrt{5-x^2}$ is a problem. So very large positive numbers should not be in the domain.
H: Taylor series for function of two variables In my textbook the Taylor series for a function of two variables is written like this: $$ f(a+h,b+k)=f(a,b)+f_x(a,b)h+f_y(a,b)k+\frac 1 2 \left(f_{xx}(a,b)h^2+ 2hkf_{xy}(a,b)+f_{yy}(a,b)k^2\right)+\left(h^2+k^2\right)^{\frac{3}{2}}B(h,k) $$ in which $B(h,k)$ is a bounded function around the center. My question is where did $(h^2+k^2)^{\frac{3}{2}}$ come from? Why is $B(h,k)$ bounded, and what does it mean in this context that $B(h,k)$ is bounded? My textbook doesn't mention anything about this. Can someone please explain? AI: For a single variable function $f$, the Taylor polynomial $T_nf$ of degree $n$ has the property that $f(x_0+h)=T_nf(x_0;h)+\vert h\vert^{n+1} B_n(h)$ with $B_n$ bounded. This essentially says that $f$ is approximately equal to the Taylor polynomial plus some remainder which goes to 0 very quickly as $h\to0$. To be specific, it goes to $0$ quicker than $h^n$. This is guaranteed by the boundedness of $B_n$: The remainder is $h^{n+1}$ (so of higher order than $h^n$) multiplied by something bounded, so it doesn't slow down the speed with which the remainder goes to 0. This can be generalized to multi-variable functions by replacing $h$ with the vector $\vec h=(h_1,\dots,h_n)$ (or in 2d, you could also name the components $(h,k)$) and $\vert h\vert$ with the norm $\vert \vec h\vert=\sqrt{h_1^2+\dots+h_n^2}$, or in 2d with $\sqrt{h^2+k^2}$. What it should say to actually capture the intuition is something like this: $f(\vec a+\vec h)=T_2f(\vec a;\vec h)+\vert\vec h\vert^{2+1} B_2(\vec h)$, where $\vec a=(a,b)$, $\vec h=(h,k)$ and where $T_2f$ is just all the terms I've suppressed in this notation (so the actual Taylor polynomial itself). Now just plug in $\vert \vec h\vert=\sqrt{h^2+k^2}$ and you'll see where the $(\dots)^{\frac{3}{2}}$ comes from.
H: Is $f(p)=\beta p^{\alpha}$ the unique nonnegative function on $[0,1]$ satisfying $\frac{f(p)}{f(1-p)}=\left(\frac{p}{1-p}\right)^{\alpha}$? Let $f(p)\geq 0$ be a function on $[0,1]$. Suppose $$ \frac{f(p)}{f(1-p)}=\left(\frac{p}{1-p}\right)^{\alpha} $$ for some constant $\alpha>0$ and all $p\in [0,1]$. A specific solution to this equation is $$ f(p)=\beta p^{\alpha}. $$ The question is: is the power function given above unique? AI: If $g$ is any (non vanishing) function satisfying $g(p) = g(1-p)$ for every $p\in [0,1]$, then also $$ f(p) = g(p) p^\alpha $$ is a solution.
H: Prove there isn't an increasing $\omega_1$ sequence on real set Although I have read that it's quite easy to prove there isn't an $\omega_1$ increasing sequence on real set I spent a lot of time figuring out why it happens and finally I think I made it, but I'm not sure about it. Here is my approach. First of all, I supossed there was an increasing sequence $s: \omega_1\longrightarrow\mathbb{R}$ and I considered the following sets $D_n=\{\alpha<\omega_1\;\vert\;s(\alpha+1)-s(\alpha)<\frac{1}{n}\}$ After that, to demonstrate that there was a set $ D_n $ such that its cardinality was $\aleph_1 $, I assumed not, but that would mean that all sets $ D_n $ were countable and then their union would also be countable so that there is $ \aleph_1 $ ordinals less than $ \omega_1 $ so that each of them does not belong to any set $ D_n $ (due to the regularity of $ \omega_1 $), that would mean that for all of them $ s (\alpha + 1 ) -s (\alpha) \geq 1 $ messing up the injection of $ s $ because there would be $ \aleph_1 $ ordinals in the domain and $ \aleph_0 $ in the range. Finally, the existence of a $D_{n_{0}}$ set such that its cardinality is $\aleph_1$ is a contradiction for the following reason. Take a random $\alpha$ in $D_{n_{0}}$, then $s(\alpha+1)-s(\alpha)<\frac{1}{n_0}$ but also, by the archimedean postulate, there would be a natural number $m$ such that $\frac{1}{m}<s(\alpha+1)-s(\alpha)<\frac{1}{n_0}$ and that number $\frac{1}{m}$ would never reach $\frac{1}{n_0}$ ,because there are $\aleph_1$ numbers between them, and that would contradicts the archimedean postulate. Am I right up to this point? Thanks in advance for your help and time. PD: I'm a begginer Set theory student so sorry if I said something that didn't make sense. AI: I’m afraid that I don’t understand your argument that no $D_n$ can be uncountable; in particular, I can’t make any sense of the assertion that ‘$\frac1m$ would never reach $\frac1{n_0}$’. In any case, you’re working much too hard: all you have to do is note that each interval $\big(s(\alpha),s(\alpha+1)\big)$ must contain a rational number $q_\alpha$, and since these intervals are pairwise disjoint, $\{q_\alpha:\alpha\in\omega_1\}$ would then be an uncountable set of rational numbers, which is absurd.
H: Calculate $\int_{1}^{\phi}\frac{x^{2}+1}{x^{4}-x^{2}+1}\ln\left(x+1-\frac{1}{x}\right) \mathrm{dx}$ $$\int_{1}^{\phi}\frac{x^{2}+1}{x^{4}-x^{2}+1}\ln\left(x+1-\frac{1}{x}\right) \mathrm{dx}$$ Insane integral! So far I have tried to complete the square for the denominator then substitute and use taylor series for the natural logarithm about x=0. Is this integral possible?! AI: I assume that $\phi$ is the golden ratio. Consider $u=x-\frac{1}{x}$ so that $\frac{x^2}{x^2+1} du=dx$: $$I=\int_0^1 \frac{x^2}{x^2+1} \cdot \frac{x^2+1}{x^4-x^2+1} \ln{\left(1+u\right)} \; du$$ $$I=\int_0^1 \frac{x^2}{x^4-x^2+1} \ln{\left(1+u\right)} \; du$$ $$I=\int_0^1 \frac{1}{x^2-1+\frac{1}{x^2}} \ln{\left(1+u\right)} \; du$$ $$I=\int_0^1 \frac{1}{u^2+1} \ln{\left(1+u\right)} \; du$$ Now, let $u=\tan{t}$: $$I=\int_0^{\frac{\pi}{4}} \ln{\left(1+\tan{t}\right)} \; dt$$ Then, substitute $w=\frac{\pi}{4}-t$ $$I=\int_0^{\frac{\pi}{4}} \ln{\left(1+\tan{\left(\frac{\pi}{4}-w\right)}\right)} \; dw$$ Use the tangent angle addition formula: $$I=\int_0^{\frac{\pi}{4}} \ln{\left(1+\frac{1-\tan{w}}{1+\tan{w}}\right)} \; dw$$ $$I=\int_0^{\frac{\pi}{4}} \ln{2} \; dw-\int_0^{\frac{\pi}{4}} \ln{\left(1+\tan{w}\right)} \; dw$$ Remember that the second integral is $I$: $$2I=\int_0^{\frac{\pi}{4}} \ln{2} \; dw$$ $$I= \frac{\pi \ln{2}}{8}$$
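A quick numerical confirmation of the closed form above (my own sketch, using SciPy's quadrature):
Python
import numpy as np
from scipy.integrate import quad

phi = (1 + np.sqrt(5)) / 2
integrand = lambda x: (x**2 + 1) / (x**4 - x**2 + 1) * np.log(x + 1 - 1/x)

value, error = quad(integrand, 1, phi)
print(value)                   # ~ 0.272198...
print(np.pi * np.log(2) / 8)   # ~ 0.272198..., matching pi*ln(2)/8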
H: Calculating the arc length of a difficult radical function I am fairly new to calculus and self learning integration from home has been challenging so I'm sorry if I make any mistakes. I want to work out the arc length of: $y = \sqrt{7.2 (x-\frac {1}{7}}) - 2.023, [0.213, 0.1.27]$. I have used the definition of a definite integral and got $\int_{0.075}^{0.58} \sqrt{1+\left(\frac{3.54}{\sqrt{7x-1}}\right)²} dx$ =$\int_{0.075}^{0.58} \sqrt{1+\frac{12.5316}{7x-1}} dx$ So far which I think is correct. How would I proceed from here? Would I use u-substitution and then a trigonometric substitution to eliminate the exponent? Any help is appreciated. AI: First hint: let $C = 12.5316$, and never write any more decimals in the rest of your work. You can plug in $C$ at the end. Similarly, I'll use $a$ and $b$ for the limits of integration. So you have \begin{align} I &= \int_a^b \sqrt{1 + \frac{C}{7x-1}}~dx \end{align} First step is common denominator: \begin{align} I &= \int_a^b \sqrt{\frac{7x -1 + C}{7x-1}}~dx \end{align} Now let $u = 7x + 1, du = 7 dx$ to get \begin{align} I &= \int_a^b \sqrt{\frac{7x -1 + C}{7x-1}}~dx\\ &= \int_{x=a}^{x=b} \sqrt{\frac{u + C}{u}}~\frac{1}{7}~du\\ \text{so that}\\ 7I &= \int_{x=a}^{x=b} \sqrt{\frac{u + C}{u}}~du\\ \end{align} At this point...you'd really like to get rid of that $C$, so let's let $$ u = C\tan^2 v \\ du = 2C \tan v \sec^2 v ~ dv $$ so that \begin{align} 7I &= \int_{x=a}^{x=b} \sqrt{\frac{u + C}{u}}~du\\ &= \int_{x=a}^{x=b} \sqrt{\frac{C(\tan^2 v + 1)}{C \tan^2 v}}~2C\tan v \sec^2 v ~ dv\\ &= 2C\int_{x=a}^{x=b} \sqrt{\frac{\sec^2 v}{\tan^2 v}}\tan v \sec^2 v ~ dv; & \text{write out in sines/cosines...}\\ &= 2C\int_{x=a}^{x=b} \sqrt{\frac{\frac{1}{\cos^2 v}}{\frac{\sin^2 v}{\cos^2 v}}}\frac{\sin v}{\cos^3 v}~ dv; \\ &= 2C\int_{x=a}^{x=b} \sqrt{\frac{1}{\sin^2 v}}\frac{\sin v}{\cos^3 v}~ dv \\ &= 2C\int_{x=a}^{x=b} \frac{1}{\sin v}\frac{\sin v}{\cos^3 v}~ dv \\ &= 2C\int_{x=a}^{x=b} \frac{1}{\cos^3 v}~ dv \\ &= 2C\int_{x=a}^{x=b} \frac{\cos v}{\cos^4 v}~ dv \\ &= 2C\int_{x=a}^{x=b} \frac{\cos v}{(1-\sin^2(v))^2}~ dv; & \text{subst $t = \sin v$} \\ &= 2C\int_{x=a}^{x=b} \frac{1}{(1-t^2)^2}~ dt; \\ \end{align} which is now an ordinary partial-fractions integral; when you're done, you'll have to backsubstitute for $t$, and then $v$, and then $u$ to finally get an expression in $x$ into which you can plug the limits $a$ and $b$. This does feel like a long way around, and there's probably a far simpler approach...but this is the one that jumped out at me.
H: In Wikipedia's Statement of The Universal Approximation Theorem, is it taking identity activation on the output layer with no bias? See the Universal Approximation Theorem (arbitrary width) on wikipedia or below. The universal approximation theorem (arbitrary width) is talking about a neural network with 1 hidden layer (input, hidden, output). In the case of a 3 layers network (1 hidden), the activation function should be evaluated twice, once on the second layer (first hidden) and once on the output layer. Is this theorem assuming weights $v_i$ between the hidden and output layer with identity activation and no bias? If so, why do you think the authors felt no need to clarify this beyond the equation given? It seems strange that this wouldn't be mentioned, but just thrown into the formula. I looked at the paper referenced (I found the same paper elsewhere, as their link just leads to a paper behind a paywall) on the Wikipedia article, but it seemed to be lacking this detail as well. From Wikipedia: "Universal approximation theorem; arbitrary width. Let $\varphi:\mathbb{R}\to\mathbb{R}$ be any continuous function (called the activation function). Let $K \subseteq \mathbb{R}^n$ be compact. The space of real-valued continuous functions on $K$ is denoted by $C(K)$. Let $\mathcal{M}$ denote the space of functions of the form $$ F( x ) = \sum_{i=1}^{N} v_i \varphi \left( w_i^T x + b_i\right) $$ for all integers $N \in \mathbb{N}$, real constants $v_i,b_i\in\mathbb{R}$ and real vectors $w_i \in \mathbb{R}^m$ for $i=1,\ldots,N$. Then, if and only if $\varphi$ is polynomial, the following statement is true: given any $\varepsilon>0$ and any $f\in C(K)$, there exists $F \in \mathcal{M}$ such that $$ | F( x ) - f ( x ) | < \varepsilon $$ for all $x\in K$. In other words, $\mathcal{M}$ is dense in $C(K)$ with respect to the uniform norm if and only if$\varphi$ is nonpolynomial. This theorem extends straightforwardly to networks with any fixed number of hidden layers: the theorem implies that the first layer can approximate any desired function, and that later layers can approximate the identity function. Thus any fixed-depth network may approximate any continuous function, and this version of the theorem applies to networks with bounded depth and arbitrary width." AI: Most common activation functions, when applied at the final layer, will impose a restriction on the range of the function represented by the network. For example, if using a ReLU activation on the final layer, then the output of the network will lie in $\{x\in\mathbb{R}^n: x_i\ge 0, \forall i\}$. Similarly, if using a sigmoid on the final layer, then the output will lie in $\{x\in\mathbb{R}^n: |x_i|\leq 1, \forall i\}$. If the output of the network is constrained to a proper subset of $\mathbb{R}^n$, then the network cannot be a universal function approximator, so that is likely why the authors did not explicitly mention this.
H: Convergence of $e^x$ I am working with the Maclaurin series for $f(x)= e^x$. I am in the point of proving that the series converges to $f(x)$ for all $x$, using Taylor's theorem with remainder I have to show the following: $$ \mathop {\lim }\limits_{n \to \infty } \frac{{\left| x \right|^{n + 1} }}{{(n + 1)!}} = 0. $$ How do you work out the divergence test? AI: Are you familiar with the ratio test for infinite series? From this you can prove pretty easily that $\sum_{n = 1}^{\infty} \dfrac{x^n}{n!}$ converges absolutely for all $x \in \mathbb{R}$. Then by the divergence test, its general term $\dfrac{x^n}{n!}$ must tend to $0$ as $n \to \infty$.
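For concreteness, here is the ratio-test computation the answer refers to (a standard step, written out here): $$\lim_{n\to\infty}\left|\frac{x^{n+1}/(n+1)!}{x^{n}/n!}\right|=\lim_{n\to\infty}\frac{|x|}{n+1}=0<1\quad\text{for every fixed }x\in\mathbb{R},$$ so $\sum_{n\ge 0}\frac{x^n}{n!}$ converges absolutely for every $x$; by the divergence test its general term $\frac{x^n}{n!}$ then tends to $0$, which is exactly the remainder limit $\lim_{n\to\infty}\frac{|x|^{n+1}}{(n+1)!}=0$ being asked for.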
H: Calculating the value of a determinant $\begin{vmatrix} 1 & 2 & 1 & -2 & 1 & 4\\ -3 & 5 & 8 & 4 & -3 & 7 \\ 2 & 2 & 2 & -1 & -1 & -1 \\ 1 & 2 & 3 & 4 & 5 & 6 \\ 2 & 4 & 2 & -4 & 2 & 8 \\ 3 & 5 & 7 & 11 & 13 & 17 \\ \end{vmatrix} $ I tried to produce an upper or lower triangular matrix, for which the value of the determinant is the product of the diagonal entries. Using the third row I could also create a row containing only one non-zero element, but that didn't help much either. There is probably a trick that I still can't see. My goal is to find an easy, not merely mechanical, way to calculate the value of this determinant. AI: Notice the fifth row is twice the first row (viewed as row vectors); since the determinant is multilinear and alternating in the rows, two proportional rows force $\det A=0$.
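A short NumPy confirmation (my own sketch) that the fifth row is twice the first and that the determinant vanishes:
Python
import numpy as np

M = np.array([[ 1, 2, 1, -2,  1,  4],
              [-3, 5, 8,  4, -3,  7],
              [ 2, 2, 2, -1, -1, -1],
              [ 1, 2, 3,  4,  5,  6],
              [ 2, 4, 2, -4,  2,  8],
              [ 3, 5, 7, 11, 13, 17]], dtype=float)

print(np.allclose(M[4], 2 * M[0]))   # True: row 5 = 2 * row 1
print(np.linalg.det(M))              # ~ 0 (up to floating-point noise)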
H: When is a Radon measure finite on bounded sets? Let $(X,d)$ be a locally compact separable complete metric space (locally compact Polish metric space) and $\sigma$ a Radon measure i.e. a Borel measure finite on compact sets $\sigma(K)<\infty$ and regular: For every $A \in \mathcal{B}(X)$ $$\sigma(A)=\inf\{ \sigma(O): A \subset O \text{ open} \},$$ $$\sigma(A)=\sup\{ \sigma(K): A \supset K \text{ compact} \}$$ Can I guarantee in this case that $\sigma$ is finite on bounded sets? They are bounded w.r.t. the distance $d$. I know that this is true in $\mathbb{R}^d$ because closed balls are compact. In a metric space I can't use this fact. I proved that $\sigma$ is $\sigma$-finite because I can get an increasing sequence of compact sets $K_n$ such that $X = \cup_{n \geq 1} K_n$. Could you give me any hint? Thanks a lot! AI: For every metric $d(x, y)$ you can define $\bar{d}(x, y)=\min\{d(x, y), 1\}$, which is a metric inducing the same topology as $d(x, y)$ and for which every set is bounded. Hence I say no: otherwise every Radon measure would be finite on the whole space, which fails, for example, for Lebesgue measure on $\mathbb{R}$ equipped with the metric $\bar d$.
H: Submodule generated by $N\cup N'$ is subset of $N+N'$ I want to show that for submodules $M,M'$, the submodule generated by $M\cup M'$ is equal to the submodule $M+M'$. The inclusion $M+M'\subseteq$ is easy from the closedness of the submodules under addition. However, I don't have a clue how to do the reverse inclusion. How can I show that the submodule generated by $M\cup M'$ is contained in $M+M'$? Or is there an easier way to get this result? AI: The submodule generated by a set $S$ is the smallest submodule (by inclusion) which contains $S$. Now, $M+M'$ is a submodule which contains the set $M\cup M'$ (for example, if $x\in M$ then $x=x+0\in M+M'$; similarly for $x\in M'$), so it must contain the submodule which is generated by $M\cup M'$.
H: Using expected value of number of attendees to an event to calculate expected revenue In the problem you pre-sell 21 non-refundable tickets values at 50 dollars to an event and can only accommodate 20 people. But in the event the 21st person shows up you must pay the person who's out $100. Each person has a 2% of not showing up, independent of what anyone else does. After having seen the solution it makes perfect sense. I'm looking for insight (with a good dose of intuition or other examples if possible) why my initial instinct of how to go about the problem was faulty so as to avoid similar mistakes in the future. The actual solution conditions the expected payout on whether or not 21 people show. I think of this as a total law of expectation, in the same way we have a total law of probability. E(payout) = E(payout|21 people show)P(21 people show) + E(payout|20 or fewer show)(1-P(21 show)) Like I said this makes total sense as to why it works. Here's what my gut instinct immediately said to try however when I read the problem. Calculate the expected number of attendees. It's binomial with p=.98 and n=21 so E(attendees) = 20.58. So I just said you have the 21(50)-(.58)(100) = 992. My thinking was that on average since 20.58 people show up then you have to pay back on average 58% of the penalty each time. I've noticed this same sort of thing in a few contexts now where I've used expected value to calculate some number in the problem and then went on to base a payment off that number and it's been not quite right (but always kind of close), so I'm wanting to prevent this going forward. It may be just as simple as the revenue has two different means under two different scenarios and you can't try to cram them together. Thus you have to partition into two cases. Like I said, I understand why I'm wrong, but I'm looking for some kind of insight on what specifically breaks down in my method and whether my method could be tweaked to produce the correct answer. I hope that makes sense. AI: One way to see why your approach doesn't work is if we modify $p$ to be smaller, so that the expected value of the number of attendees is less than $20$. For example, suppose $p = 2/3$. Then if $X$ is the random number of attendees, $$\operatorname{E}[X] = np = 21(2/3) = 14 < 20.$$ By your calculation, there is no excess above $20$ in the expected number of attendees, so how do you account for this? Would you compute $21(50) - (0)(100)$? That is also obviously wrong because for any $p > 0$, even if it is small, there remains a positive probability that $X = 21$, thus the expected revenue must always be strictly less than $21(50) = 1050$. In the above case where $p = 2/3$, we have $$\Pr[X = 21] = \binom{21}{21}p^{21} (1-p)^{21-21} = p^{21} \approx 0.000200486.$$ While this numeric example gives us an idea of why there is a flaw in your approach, we still don't have a formal mathematical explanation. We can see that the expected number of attendees is not the meaningful quantity through which we can obtain the expected revenue. This is because the relationship between the random variable $X$ and the random revenue, say $Y$, is not a linear one. Specifically, we have $$Y = \begin{cases} 1050, & 0 \le X \le 20, \\ 950, & X = 21. 
\end{cases}$$ We could use some tricks to write this in other ways, for example $$Y = 1050 - 100 \max(0, X - 20).$$ And in fact, this is a good way to generalize the original question to the case where if there are only $s$ seats, and each ticket buyer that shows up above the seat limit must be refunded $100$. Then $$Y = 1050 - 100 \max(0, X - s)$$ and the original question sets $s = 20$. But as you can see from this formula, $$\operatorname{E}[Y] = 1050 - 100 \operatorname{E}[\max(0, X - 20)] \ne 1050 - 100 \max(0, \operatorname{E}[X] - 20).$$ In fact, the RHS is precisely what you tried to do. You tried to take the average number of attendees $\operatorname{E}[X]$, subtract $20$, and this excess is what you multiplied by $100$. And my counterexample at the beginning considered what happened when $\operatorname{E}[X] < 20$ so that the maximum of $0$ and a negative number is $0$, which clearly results in a wrong answer. So it's clear we can't do this because $$\operatorname{E}[g(X)] \ne g(\operatorname{E}[X])$$ for some general function $g$. For example, $\operatorname{E}[X^2] \ne (\operatorname{E}[X])^2$. Expectation is a linear operator, so if $g$ is a linear function, it does work: $$\operatorname{E}[aX + b] = a\operatorname{E}[X] + b,$$ for constants $a$, $b$. But it doesn't work when $g$ is nonlinear, as in this case. This brings us to the question of how we might evaluate an expression such as $$\operatorname{E}[\max(0, X-s)].$$ Well, this was originally written as a piecewise/casewise function, where the cases were whether $X > s$ or $X \le s$. So those are the outcomes on which we must condition the expectation: $$\operatorname{E}[\max(0, X-s)] = \operatorname{E}[0]\Pr[X-s \le 0] + \operatorname{E}[X-s \mid X - s > 0]\Pr[X-s > 0].$$ Since the first term is just $0$, the second term is $$(\operatorname{E}[X \mid X > s]-s)\Pr[X > s].$$ For $s = 20$, $n = 21$, $p = 0.98$, we get $$\begin{align} (\operatorname{E}[X \mid X > 20] - 20)\Pr[X > 20] &= (\operatorname{E}[X \mid X = 21] - 20)\Pr[X = 21] \\ &= (21-20)(0.98)^{21} \\ &\approx 0.654256, \end{align}$$ hence $$\operatorname{E}[Y] \approx 1050 - 100(0.654256) = 984.574.$$ As an exercise, what would be your expected revenue if $s = 19$? That is to say, if there were only $19$ seats available, and each attendee in excess needs to be refunded $100$?
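A short numerical check (my own sketch) of the expected-revenue formula derived above, using the exact binomial distribution; it also evaluates the closing exercise for $s = 19$ under the same assumptions ($n=21$ tickets at $50$, $p=0.98$, $100$ refund per attendee in excess of the seats).
Python
from math import comb

n, p, price, refund = 21, 0.98, 50, 100

def pmf(k):
    """Binomial(n, p) probability of exactly k attendees."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def expected_revenue(s):
    """E[Y] = n*price - refund * E[max(0, X - s)] when there are s seats."""
    expected_excess = sum((k - s) * pmf(k) for k in range(s + 1, n + 1))
    return n * price - refund * expected_excess

print(expected_revenue(20))   # ~ 984.57, matching 1050 - 100 * 0.98**21
print(expected_revenue(19))   # expected revenue with only 19 seats (the closing exercise)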
H: Clarification for maximizing the Rayleigh Quotient Suppose I have a Hermitian matrix $M$ with orthonormal eigenbasis $\{x_1, \ldots, x_n \}$; then for a unit vector $t$, I am reading about maximizing the Rayleigh quotient $R(M,t) =\frac{t^* M t}{t^*t} = t^*Mt = \sum_i \lambda_i t_i^2$, where $t_i$ is the $i$th coordinate of $t$ in the eigenbasis. The max value is stated in my notes to be obtained by choosing $t$ to be the eigenvector corresponding to the largest eigenvalue of $M$. It's clear to me that among the choices $x_i$, $R(M,x_i)$ is maximal for the $x_i$ corresponding to $\lambda_{max}$, since $M{x_i} = \lambda_{max}x_i$, but I'm not sure why there couldn't be some other vector $y$, a combination of the eigenvectors, that would give a larger output. Any insights appreciated. AI: Perhaps it might be easier to look at $\max_{\|t\|^2 = 1} t^*Mt$ and use Lagrange multipliers, which will give (since the gradient of the constraint is always nonzero and hence linearly independent) $Mt - \mu t = 0$ at a maximiser, and so $\mu$ must be an eigenvalue and $t$ must be a corresponding eigenvector. Since the cost is continuous and the constraint set compact, we know there is a maximiser. Since the cost at a solution of $Mt - \mu t = 0$ is $\mu$, we see it is maximised when $\mu$ is as large as possible, hence the maximum eigenvalue. I'm not sure if this was the point of confusion, but note that $\ker (M-\mu I)$ may be more than one dimensional.
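To address the worry about a combination of eigenvectors doing better, there is also a direct one-line bound (a standard argument complementing the Lagrange-multiplier approach above): writing a unit vector $t=\sum_i t_i x_i$ in the orthonormal eigenbasis, $$t^*Mt=\sum_i \lambda_i |t_i|^2\le \lambda_{\max}\sum_i |t_i|^2=\lambda_{\max},$$ with equality when $t$ is a unit eigenvector for $\lambda_{\max}$; so no combination of eigenvectors can exceed $\lambda_{\max}$.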
H: Ratio of area covered by four equilateral triangles in a rectangle The following puzzle is taken from social media (NuBay Science communication group). It asks to calculate the fraction (ratio) of colored area in the schematic figure below where the four colored triangles are supposed to be equilateral. The sides of the rectangle are not mentioned. At first one might think that the problem is not well-posed. However, it turns out that the fact that such a configuration exists for the rectangle at hand (note that for example for a square it is clearly impossible to have such a configuration) yields a condition on the proportions of the rectangle. This condition in turn allows to determine the ratio. The question here is to determine the condition on the proportions of the rectangle and the fraction (ratio) of the colored area. Both turn out to be unique and the problem is thus well-posed. Note: The question is self-answered, see this answer. AI: Immediately obvious is the fact that half the base of the green triangle equals the altitudes of each of the yellow and orange triangles, thus the green to yellow and orange triangle similarity ratio is $\sqrt{3}$, and by the same reasoning, the yellow and orange triangles to the red triangle have similar ratio $\sqrt{3}$. If the altitude of the red triangle is $1$, then the width of the rectangle is $2\sqrt{3}$ and the height is $1 + 3 = 4$, for an aspect ratio of $2 : \sqrt{3}$. If we look at the rectangle that encloses half the red and yellow triangles, the white triangle is equal in area to the full red triangle, and the half yellow triangle is half the area of the rectangle. Therefore, the colored areas in that rectangle comprise $4/6 = 2/3$ of the area of that rectangle. Since this relationship is the same regardless of the scale, the whole figure is shaded by $2/3$. It is worth entertaining a generalization of the given figure to acute isosceles triangles. Suppose the half-angle of the apex of the yellow triangle is $\theta$; then for $0 < \theta \le \pi/4$, the triangles are in similarity ratio $1 : \cot \theta : \cot^2 \theta$ from smallest to largest, and the rectangle has aspect ratio $$\frac{1 + \cot^2 \theta}{2 \cot \theta} = \csc 2\theta.$$ The ratio of the shaded area to the rectangle's area is simply $$\frac{1}{2}\sec^2 \theta.$$ For the equilateral case, $\theta = \pi/6$.
H: ODE $ y''-3y'+2y=4(2-a)e^{-ax}$ I have to solve the system $y'=-y-2z+2e^{-ax}$, $z'=3y+4z+e^{-ax}$. I expressed $z$ from the first equation and plugged it into the second equation. The result is the equation $y''-3y'+2y=4(2-a)e^{-ax}$. How to solve this equation? Any help is welcome. Thanks in advance. AI: $$y''-3y'+2y=4(2-a)e^{-ax}$$ Solve the homogeneous part first: $$y''-3y'+2y=0$$ The characteristic polynomial is $$r^2-3r+2=(r-1)(r-2)=0 $$ $$ \implies S_r=\{1,2\}$$ $$y_h=c_1e^x+c_2e^{2x}$$ The form of the particular solution depends on $a$. If $a \ne -1,-2$, try the particular solution: $$y_p=Ae^{-ax}$$ If $a = -1$ or $a=-2$ (so that $-a$ is a root of the characteristic polynomial), try the particular solution: $$y_p=Axe^{-ax}$$ Plug this into the original inhomogeneous DE and find the constant $A$; then the solution is $$y=y_h+y_p$$ Alternatively, you can add the two equations of the system and solve a first-order DE instead: $$(y+z)'=2(y+z)+3e^{-ax}$$ $$((y+z)e^{-2x})'=3e^{-(a+2)x}$$ This is easy to solve: $$y=-z +3e^{2x}\int e^{-(a+2)x}dx$$ The integral depends on the value of $a$, so you have to treat the cases $a \ne -2$ and $a=-2$ separately. Plug $y$ into the second equation $$z'=3y+4z+e^{-ax}$$ and you get a first-order DE in $z$ that's easy to integrate.
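A small SymPy sketch (my own check, with made-up symbol names) confirming the undetermined-coefficients computation in the generic case $a \ne -1,-2$:

```python
import sympy as sp

x, a, A = sp.symbols('x a A')
yp = A * sp.exp(-a * x)                      # trial particular solution for a != -1, -2
lhs = yp.diff(x, 2) - 3 * yp.diff(x) + 2 * yp
# lhs equals A*(a^2 + 3a + 2)*exp(-a*x); matching it with 4*(2-a)*exp(-a*x):
Aval = sp.solve(sp.Eq(A * (a**2 + 3*a + 2), 4 * (2 - a)), A)[0]
print(sp.factor(Aval))                                           # i.e. A = 4(2-a)/((a+1)(a+2))
print(sp.simplify(lhs.subs(A, Aval) - 4 * (2 - a) * sp.exp(-a * x)))   # 0, so the ODE is satisfied
```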
H: An ultranet $x_\lambda$ is frequently in $Y$ if and only if it is residually too. Definition If $x_\lambda$ is a net from a directed set $\Lambda$ into $X$ and if $Y$ is a subset of $X$ then we say that $x_\lambda$ is residually in $Y$ if there exists $\lambda_0\in\Lambda$ such that $x_\lambda\in Y$ for any $\lambda\ge\lambda_0$ Definition If $x_\lambda$ is a net from a directed set $\Lambda$ into $X$ and if $Y$ is a subset of $X$ then we say that $x_\lambda$ is frequently in $Y$ if for any $\lambda\in\Lambda$ there exists $\lambda_0\ge\lambda$ such that $x_{\lambda_0}\in Y$ What is shown below is a reference from "General Topology" by Stephen Willard. [Excerpt omitted.] So I want to discuss the claim that if an ultranet is frequently in $E$ then it is residually in $E$. Clearly if $x_\lambda$ is a net residually in $Y$ then there exists $\lambda_0$ such that if $\lambda\ge\lambda_0$ then $x_\lambda\in Y$; so for any $\overline{\lambda}\in\Lambda$, if we pick $\overline{\lambda}_0\in\Lambda$ such that $\overline{\lambda},\lambda_0\le\overline{\lambda}_0$ (we can do this since $\Lambda$ is a directed set), then it follows that $x_{\overline{\lambda}_0}\in Y$ and $\overline{\lambda}_0\ge\overline{\lambda}$, so that $x_\lambda$ is frequently in $Y$. Clearly any ultranet is a net, and so by what we have proved above, if an ultranet is residually in $E$ then it is frequently in $E$ too. However I can't prove the converse implication. Could someone help me, please? AI: Suppose that $\nu=\langle x_d:d\in D\rangle$ is an ultranet that is frequently in some set $E$. Then for each $d_0\in D$ there is a $d\in D$ such that $d_0\le d$ and $x_d\in E$, so $\nu$ cannot be residually in $X\setminus E$. But $\nu$ is an ultranet, so by definition it is either residually in $E$ or in $X\setminus E$, and since it is not residually in $X\setminus E$, it must be residually in $E$.
H: Help in solving $\lim_{x\to 0}\frac{x-e^x+\cos 2x}{x^2}$ The question is $$\lim_{x\to 0}\frac{x-e^x+\cos 2x}{x^2}$$ I tried solving it as follows (my work, "My method", was attached as an image). My answer is $-2$ but the actual answer is $-5/2$, so where is my method wrong? AI: The answers given above are all giving methods to solve the problem. But this is to tell you where you went wrong in your approach. \begin{align*} \lim_{x \to 0}\frac{x-e^x+\cos 2x}{x^2}&=\lim_{x \to 0}\frac{x-e^x+1+\cos 2x-1}{x^2}\\ &=\lim_{x \to 0}\frac{x-e^x+1-2\sin^2 x}{x^2}\\ &=\lim_{x \to 0}\frac{x-e^x+1}{x^2}-2\lim_{x \to 0}\frac{\sin^2 x}{x^2}\\ &=\color{red}{\lim_{x \to 0}\frac{1}{x}-\lim_{x \to 0}\frac{e^x-1}{x^2}}-2\\ \end{align*} This highlighted step is where you went wrong. The limit of $1/x$ doesn't exist, and you split the second term as $\frac{1}{x}\left(\frac{e^x-1}{x}\right)$ and then took the limit of only the parenthetical term, which again is incorrect: for $\lim_{x\to a} f(x)g(x)$ to equal $\lim_{x \to a} f(x) \, \lim_{x \to a} g(x)$, you need both limits to exist, whereas here $\lim_{x \to 0} 1/x$ doesn't.
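For completeness, a one-line SymPy confirmation of the correct value (my own check, not part of the answer above):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((x - sp.exp(x) + sp.cos(2 * x)) / x**2, x, 0))   # -5/2
```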
H: Show that in [0,1] with its usual topology there exists a net having no convergent strict subnet I only have difficulties in the final step: Show that {$x_y:y\in I$} has no convergent strict subnet. My efforts: With the construction, $I$ is a minimal uncountable well-ordered set. Thus it has the following properties: (1) Every countable subset of $I$ has an upper bound in $I$. (2) $I$ has no largest element. (3) For every $\alpha\in I$, the subset {$x|\alpha Wx$} is uncountable. Also {$x_y:y\in I$} is a monotonically increasing net in the $W$ sense. Assume {$x_y:y\in I$} has a convergent strict subnet {$x_z:z\in J\subset I,J$ is cofinal in $I$}, say, converging to $v\in [0,1]$. Then given any neighborhood of $v$ in the form of $(a,b)$ or $[0, b)$ or $(a, 1]$ in the usual sense, there is $\alpha\in J$ such that $x_\beta$ is in this neighborhood whenever $\beta\in J$ and $\alpha W\beta$. If $v\neq 0$, choose a neighborhood $(a,b)$ of $v$. Then I don't know how to continue. AI: HINT: For each $n\in\Bbb Z^+$ there is an $\alpha_n\in J$ such that $|x_\beta-v|<\frac1n$ whenever $\beta\in J$ and $\alpha_n W\beta$. Let $\alpha=\sup_n\alpha_n$. Do you see the problem here?
H: Subset of $M_2(\mathbb{R})$ isomorphic to a field? Consider matrices of the form $\begin{bmatrix} a & b \\ -b & a-b \end{bmatrix}$ with entries from $\mathbb{R}$, closed under addition and matrix multiplication (see Unital rings within matrices ). This forms a unital and commutative ring. Moreover, the determinant of such a matrix is of the form $x^2 + y^2 - xy = f(x,y)$ for $x,y \in \mathbb{R}$. Notice that $f_x = 2x - y$ and $f_y = 2y - x$, so the only possible extremum is at the critical point $(0,0)$, and for large $(x,y)$ we have positive $f(x,y)$, so I suspect that $f(x,y) > 0$ for $(x,y) \neq (0,0)$. So all nonzero matrices of this form are invertible, which means this ring is a field. Call it $M$. Is there a more well-known field $K$ such that $M \cong K$? I notice that $M \cong \mathbb{R}^2$ with multiplication defined as $(a,b) * (c,d) = (ac-bd, ad+bc-bd)$, which is close to defining it as $(ac-bd, ad+bc)$ for $\mathbb{C}$. EDIT: there are much simpler ways to prove $x^2+y^2-xy \neq 0$ if $(x,y) \neq (0,0)$. How can I prove that $xy\leq x^2+y^2$? AI: Are you familiar with $\Bbb{R}[\omega]=\{a+b\omega \, | \, a,b \in \Bbb{R}\}$, Eisenstein ring, where $\omega$ is the cube root of unity? Here the norm $|a+b\omega|$ is defined as $(a+b\omega)(a+b\omega^2)=a^2-ab+b^2$. If you are "determined" you can find the isomorphism:-)
H: How to find $\lim_{n \to \infty} \left( \frac{2^n - 1}{2^n} \right)^{\log_2 n}$ I am trying to show that something occurs with high probability, and my final expression is $$\lim_{n \to \infty} \left( \frac{2^n - 1}{2^n} \right)^{\log_2 n}$$ Based on trying some very large numbers it seems that this does indeed converge to $1$, but how do I prove so rigorously? I don't know any good lower bounds for $2^n - 1$ when $n$ is large. I also tried L'Hopital's rule (using technology of course), but the derivatives are monstrous and don't seem to provide any clue. Thanks in advance! AI: $(\frac{2^n-1}{2^n})^{\log_2n}=(1-\frac{1}{2^n})^{\log_2n}=((1-\frac{1}{2^n})^{2^n})^{\frac{\log_2n}{2^n}}\to (\frac{1}{e})^0=1$
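A quick numerical illustration of this limit (Python; the exact integer ratio is used, so even the small values of $n$ are meaningful):

```python
from math import log2

for n in [10, 50, 200, 1000]:
    print(n, ((2**n - 1) / 2**n) ** log2(n))   # tends to 1 very quickly
```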
H: Proving that $\tau=\bigcap \limits_{i\in I} \tau_i$ is a topology on $X$ In my general topology textbook one exercise asks me to prove the following: If $\tau_1,\tau_2,...,\tau_n$ are topologies on a set $X$, then $\tau = \bigcap \limits_{i=1}^n \tau_i$ is a topology on $X$. If for each $i \in I$, for some index set $I$, each $\tau_i$ is a topology on the set $X$, then $\tau=\bigcap\limits_{i\in I} \tau_i$ is a topology on $X$. In the first one I proved using mathematical induction that $\tau = \bigcap\limits_{i=1}^n \tau_i$ is a topology on $X$. Doesn't that proof apply also to the second statement? What is the difference between statement one and two? AI: i) $\emptyset$ and $X$ are in every topology so they are in $\tau$. ii) Let $U=\bigcup_{k\in K} A_k$, with each $A_k \in \tau$. Then $U$ is open in every topology $\tau_i$, since each $A_k$ lies in every $\tau_i$. Therefore, $U \in \tau$. iii) Let $V=A_1\cap\dots\cap A_n$ be a finite intersection of elements of $\tau$. Then $V$ is open in every topology $\tau_i$, since each $A_j$ lies in every $\tau_i$. Thus, $V \in \tau$.
H: The operator norm is defined based on the supremum or equivalently the maximum. The definition is $$\|A\| = \sup_{x \neq 0} \frac{\|Ax\|}{\|x\|} = \max \left\{ \|Ax\|: \|x\|=1 \right\}$$ but how is maximum coming in place? AI: I assume here you mean that you are working in a finite dimensional normed space. In that case the sphere $\{x: ||x||=1\}$ is a compact set, and since linear transformations in finite dimensional normed spaces are continuous, this implies that $||Ax||$ has a maximum on that sphere. This is no longer true in infinite dimensional normed spaces. There you really must write supremum, not maximum.
H: Evaluate $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}$ Evaluate $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}$ My attempt: Let $\sqrt{a+b+\sqrt{\left(2ab+b^2\right)}}=\sqrt{x}+\sqrt{y}$ Square both sides: $a+b+\sqrt{\left(2ab+b^2\right)}=x+2\sqrt{xy}+y$ Rearrange: $\sqrt{\left(2ab+b^2\right)}-2\sqrt{xy}=x+y-a-b$ That's where my lights go off. Any leads? Thanks in advance. AI: After taking squares, you can proceed as follows $$(a+b)+\sqrt{\left(2ab+b^2\right)}=(x+y)+2\sqrt{xy}$$ Compare corresponding (conjugate) parts on both the sides of above equation, we get $$x+y=a+b\tag 1$$ $$2\sqrt{xy}=\sqrt{2ab+b^2}\iff 4xy=2ab+b^2\tag 2$$ $$x-y=\pm\sqrt{(x+y)^2-4xy}=\pm\sqrt{(a+b)^2-(2ab+b^2)}=\pm a\tag3$$ Solving (1) & (3) we get $x$ & $y$ as follows $$x=a+\frac{b}{2}, \ y=\frac b2\ \ \ \text{OR}\ \ \ x=\frac b2, \ y=a+\frac{b}{2}$$
H: Schauder basis vs isomorphisms My doubt is about a space with a Schauder basis. If a space has a Schauder basis, can I say that it is isometrically isomorphic to some known space? For example: $c$ has a Schauder basis; can I find another space, say a dual space, to which it is isomorphic? I am thinking this because I know that $(c_0)'=l_1$, and I wonder whether I can see something like this for $c_0$. P.S.: I know that every separable space is isomorphic to some subspace of $l_{\infty}$, but in this case I think that is just a trivial isomorphism. Also: I know that $l_1'=l_{\infty}$ and $l_p=l_q'$, with $\frac{1}{p}+\frac{1}{q}=1$. Is there another space for which I can do this easily? AI: In general the Schauder basis won't give you a way to make an isometric isomorphism between different Banach spaces. For instance $\ell^2$ is not isometrically isomorphic to any $\ell^p$ if $p \neq 2$. One way to see this is that $\ell^2$ is a Hilbert space but $\ell^p$ for $p \neq 2$ isn't. You may be interested in the theorem which states that any two infinite-dimensional separable Hilbert spaces are isometrically isomorphic (in particular isometrically isomorphic to $\ell^2$).
H: Almost everywhere differentiable (composition)! Let $f:\mathbb{R}\to\mathbb{R}$ be a function differentiable almost everywhere and $g$ a function defined by $$g(t)=\arctan(f(t))$$ I read in a paper that $g$ is differentiable almost everywhere. Can someone tell me why $g$ is differentiable almost everywhere? AI: If $f$ is differentiable at $t$, then (because $\arctan$ is differentiable at $f(t)$) by the chain rule $g$ is also differentiable at $t$. If we state this contrapositively, we have \begin{align} \{t\in \Bbb{R}| \, \, \text{$g$ not differentiable at $t$}\} \subseteq \{t\in \Bbb{R}| \, \, \text{$f$ not differentiable at $t$}\} \end{align} Since the set on the right has measure zero, so does the set on the left.
H: Gradient in linear regression with weights From 3.3 in Pattern Recognition in Machine Learning, I am asked to obtain weights for a regression with a weighted square loss function. That is, $E(w,x) = \sum_{j=1}^n r_j(y_j - x_j^Tw)^2$ where $r_j$ is the weight for example $j$. I'm trying to formulate this as a vector problem and take its gradient. If we let $R =\begin{bmatrix} r_1 & 0 & \dots & 0 & 0 \\ 0 & r_2 & 0 & \dots &0 \\ 0 & \dots & \dots & \ddots & 0 \\ 0 & \dots 0 & \dots & 0 & r_n \end{bmatrix} $ be a diagonal matrix with weights on the diagonal, we can rewrite $E(w,x) = (y-Xw)^TR(y-Xw)$. Then $\nabla_w E(w,x) = 2RX^T(y-Xw)$ Solving for $w$, I don't get the desired answer of $X(X^TX)^{-1}\sqrt{R}y$ Where am I wrong? AI: The gradient is $-2X^\top R (y - Xw)$. (Note that $RX^\top$ does not make sense since $R$ is $n \times n$ and $X^\top$ is $p \times n$.) Setting this equal to zero yields $\hat{w} = (X^\top R X)^{-1} X^\top R y$, which is the solution for weighted least squares. Not sure where your "desired solution" comes from.
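A small NumPy sketch of the closed-form weighted least-squares solution from the answer (the data here is synthetic, just to check numerically that the gradient $-2X^\top R(y-Xw)$ vanishes at $\hat w=(X^\top RX)^{-1}X^\top Ry$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=n)
r = rng.uniform(0.5, 2.0, size=n)                    # per-example weights r_j
R = np.diag(r)

w_hat = np.linalg.solve(X.T @ R @ X, X.T @ R @ y)    # (X^T R X)^{-1} X^T R y
grad = -2 * X.T @ R @ (y - X @ w_hat)
print(w_hat, np.abs(grad).max())                     # gradient ~ 0 at the minimiser
```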
H: If a random variable $X$ is integrable, how do we show that $X^2$ is also integrable? If we know that a random variable $X$ is integrable, i.e. $\mathbb{E}(|X|) < \infty$, how do we show that $X^2$ is also integrable? AI: As pointed out in the comments, the claim isn't true. For example, we can consider a random variable $X$ with probability density function $$f_X(x):=\frac{3}{2x^{5/2}}\enspace \text{for }x\in [1,\infty).$$ Then you can verify that $$\mathbb{E}[X]=\int_{[1,\infty)}xf_X(x)=3<\infty$$ while $$\mathbb{E}[X^2]=\int_{[1,\infty)}x^2f_X(x)=\infty.$$
H: Is there a geometric analog of absolute value? I'm wondering whether there exists a geometric analog concept of absolute value. In other words, if absolute value can be defined as $$ \text{abs}(x) =\max(x,-x) $$ intuitively the additive distance from $0$ to $x$, is there a geometric version $$ \text{Geoabs}(x) = \max(x, 1/x) $$ which is intuitively the multiplicative "distance" from $1$ to $x$? Update: Agreed it only makes sense for $Geoabs()$ to be restricted to positive reals. To give some context on application, I am working on the solution of an optimization problem something like: $$ \begin{array}{ll} \text{minimize} & \prod_i Geoabs(x_i) \\ \text{subject to} & \prod_{i \in S_j} x_i = C_j && \forall j \\ &x_i > 0 && \forall i . \end{array} $$ Basically want to satisfy all these product equations $j$ by moving $x_i$'s as little as possible from $1$. Note that by the construction there are always infinitely many feasible solutions. AI: To make things easier I'll set $f(x)=\max\{x,-x\}$ and $g(x)=\max\{x,\frac{1}{x}\}$. So we understand that $f:\mathbb{R}\to \mathbb{R}^+$ and $g: \mathbb{R}^+\to \mathbb{R}^+$. Then $\exp(f(x))=g(\exp(x))$. So we can use this to translate some properties like the triangle inequality. $$ g(xy)=g(\exp(\log(xy)))=\exp(f(\log(xy)))=\exp(f(\log(x)+\log(y))) $$ $$ \leq \exp(f(\log x)+f(\log y))=\exp(f(\log x))\exp(f(\log y))=g(\exp(\log x))g(\exp(\log y)) $$ $$ =g(x)g(y) $$ So $g(xy)\leq g(x)g(y)$ and we have the multiplicative triangle inequality. Of course this is easier to show directly but the method emphasizes the "transfer". Another good sign is $g(x)=1$ if and only if $x=1$. All in all it looks like you're moving between $(\mathbb{R},+)$ and $(\mathbb{R}^+,\cdot)$ with $\log$ and $\exp$. So a nice question. I'm sure there's more to say.
H: Support Vector Machines (SVMs), unclear math steps I am studying the maths behind Support Vector Machines (SVMs), but there are two unclear steps. Following the video 16. Learning: Support Vector Machines (MIT OpenCourseWare, minute 14:24), we have the following steps $Width = (\bar{x}_{+}-\bar{x}_{-}) \cdot \frac{\bar{w}}{\left \| w \right \|} = \frac{(1-b)-(-1-b)}{\left \| w \right \|} = \frac{2}{\left \| w \right \|}$ then $Max = \frac{1}{\left \| w \right \|}$ therefore $Min = \left \| w \right \| = \frac{1}{2}\left \| w \right \|^2$ I don't understand (1) why max is $\frac{1}{\left \| w \right \|}$ and min is $\left \| w \right \|$, and (2) why $\left \| w \right \|$ = $\frac{1}{2}\left \| w \right \|^2$ AI: You are putting "equals signs" where there should not be any. (It is not "$\max = \frac{1}{\|w\|}$"; he means "$\max \frac{1}{\|w\|}$", i.e. "maximize the quantity $1/\|w\|$.") If the goal of SVM is to maximize the width, which is shown to be $2/\|w\|$, then it is equivalent to minimize $\|w\|$. (Choosing $w$ to make $2/\|w\|$ big will also make $\|w\|$ small.) If you want to minimize $\|w\|$, it is equivalent to minimizing $\frac{1}{2}\|w\|^2$ (making $\|w\|$ small will make $\frac{1}{2} \|w\|^2$ small). The particular choice of $\frac{1}{2} \|w\|^2$ is convenient for steps that appear later.
H: Union of intervals for $f(x)=\frac{\sqrt{7−x^2}}{(x−3)(4x−2)}$ Background From the radical, we know $7−x^2≥0$ so $x^2≤7$, so $−\sqrt{7}≤x≤\sqrt{7}$ $$x≠3, x≠\frac{1}{2}$$ The natural domain would then be: $[−\sqrt{7},\frac{1}{2})∪(\frac{1}{2},3)∪(3,\sqrt{7}]$ However, I checked a calculator and the solution was: $[−\sqrt{7},\frac{1}{2})∪(\frac{1}{2},\sqrt{7}]$ Where did I go wrong with my solution? Is there a difference between the two solutions, and why would the calculator leave out the 3? EDIT: the 1 was a typo. Apologies! AI: When you're first using them, functions are defined everywhere except where they aren't. In your case, this means: You can't divide by zero; this would happen at $x=3,x=1/2$. You can't take the square root of a negative number; this would happen if $x^2>7$. This means the domain is any point in $[-\sqrt{7},\sqrt{7}]$ except $x=1/2$, namely $[-\sqrt{7},1/2)\cup(1/2,\sqrt{7}]$. Note that $3>\sqrt{7}\approx 2.65$, so $x=3$ is already excluded by the square-root condition; that is why the calculator's answer does not split the interval at $3$.
H: Change of variables in basic differential equation from Hamming's Art of Doing Science and Engineering I'm reading Richard Hamming's "The Art of Doing Science and Engineering" and I'm trying to figure out whether there's an error in the book or I'm misapplying some change of variable rules. In Chapter 2 he derives the basic logistic growth curve in terms of an exponential growth model with a natural upper limit. He states: "Let $L$ be the upper limit. Then the next simplest growth equation seems to be $$ \begin{equation} \frac{dy}{dt} = ky(L -y) \end{equation}. $$ At this point we of course reduce it to a standard form that eliminates the constants. Set $y = Lz$ and $x = t/(kL^2)$, then we have $$ \frac{dz}{dx} = z(1 - z) $$ as the reduced form for the growth problem, where the saturation level is now 1." The change $y = Lz$ makes sense to me, but it seems like the change of variable for $t$ should actually be $x = t\times(kL)$. This gives us $\frac{dz}{dy} = \frac{1}{L}$ and $\frac{dt}{dx} = \frac{1}{kL}$ so that $$ \begin{align*} \frac{dz}{dx} &= \frac{dz}{dy} \cdot \frac{dy}{dt} \cdot \frac{dt}{dx} \\ &= \frac{1}{L} \cdot kL^2z(1 - z) \cdot \frac{1}{kL}\\ &= z(1 - z). \end{align*} $$ I don't see how the constants would cancel out using the substitution $x = t/(kL^2)$. So is this a typo in the book or am I misapplying the chain rule? AI: $$\begin{equation} \frac{dy}{dt} = ky(L -y) \end{equation}.$$ Change the variable $y=zL$: $$\frac{dz}{dt} =kL z(1 -z)$$ We have that: $$x = tkL$$ $$dx =kL dt$$ Therefore: $$\frac{dz}{dx} \dfrac {dx}{dt}= kLz(1 -z)$$ $$\frac{dz}{dx}\, kL= kLz(1 -z)$$ $$\frac{dz}{dx}= z(1 -z)$$ You applied the chain rule correctly; the substitution that works is $x = kLt$, so the book's $x = t/(kL^2)$ is indeed a typo.
H: easiest proof of the Prime Number Theorem to study and teach? I know there are several variants of proofs for the Prime Number Theorem. Which one is the easiest one to study and then re-teach? By easiest, I mean those that assume minimal knowledge beyond secondary school mathematics. For example, most school leavers having done maths will have calculus, and could stretch to understand concepts like asymptotic equivalence and integration in the complex plane, but won't have concepts like group theory. AI: You might check out D. Zagier, Newman's Short Proof of the Prime Number Theorem, which appears in The American Mathematical Monthly, Vol. 104, No. 8 (Oct., 1997), pp. 705-708 and is available online at http://www.jstor.org/stable/2975232. This is the easiest proof of which I am aware, but it's challenging nevertheless. Still, an undergrad with a solid background in analysis should be able to hack it.
H: Point of indeterminacy of the projection map (exercise I.4.3 Hartshorne) I am trying to prove that the morphism $\varphi : W = {\mathbb{P}}^2 \setminus \big\{ [0:0:1] \big\} \rightarrow {\mathbb{P}}^1$ given by $\varphi([a_0 : a_1 : a_2]) = [a_0 : a_1]$ cannot be extended to the point at infinity. My approach is assuming $\psi : {\mathbb{P}}^2 \rightarrow {\mathbb{P}}^1$ is a morphism such that $\psi \lvert_W = \varphi$ leads to a contradiction. Assuming $\psi([0:0:1]) = [a_0 : a_1] = \psi([a_0 : a_1 : a_2])$, I am trying to cook up a nice regular function which cannot be pulled back to a regular function. Now (w.l.g.) assuming $a_0 \neq 0$, a reasonable choice is $f : U_0 \rightarrow k$ where $U_0 := \big\{ [u_0 : u_1] ~:~ u_0 \neq 0 \big\} \subseteq {\mathbb{P}}^1$. Taking a small enough open subset $V \subseteq U_0$ where $f = g/h$ (a polynomial quotient), I need to show that $f \circ \psi : \psi^{-1}(V) \rightarrow k$ is not regular. This is where I am lost. I am not sure if it is the correct way to approach? This is final step to solve exercise I.4.3(b), Hartshorne. AI: I think there's an easier way to show this. I'll present a hint and then detail how to make it work under some spoiler text so you can try for yourself before looking at the answer. Hint: Consider the two lines $V(x)$ and $V(y)$ in $\Bbb P^2$. What does the projection map do to each of these lines? What can you infer about the possible extensions of this map to the point $[0:0:1]$ via these extensions? Projection sends $V(x)\setminus [0:0:1]$ to the point $[0:1]$ and $V(y)\setminus [0:0:1]$ to the point $[1:0]$. Since $V(x)\setminus [0:0:1]$ and $V(y)\setminus [0:0:1]$ are irreducible, we must have that an extension of these maps to $V(x)$ or $V(y)$ respectively must send $[0:0:1]$ into $\overline{\varphi(V(x)\setminus [0:0:1])}$ or $\overline{\varphi(V(y)\setminus [0:0:1])}$ respectively. But this gives that $\varphi([0:0:1])$ should simultaneously be equal to $[0:1]$ and $[1:0]$, a contradiction.
H: Showing that $(\mathbb{Q},+)$ has no maximal normal subgroup In A Course in Abstract Algebra by Vijay K. Khanna and S. K. Bhamri it is proven that: $\mathbb{Q}$ with $+$ has no maximal normal subgroup. Proof: Let $H$ be a maximal normal subgroup of $\mathbb{Q}$. Then $\mathbb{Q}/H$ is simple and so $\mathbb{Q}/H$ has no non trivial normal subgroup, i.e., it will have no non trivial subgroup ($\mathbb{Q}$ being abelian, all subgroups are normal). Thus $\mathbb{Q}/H$ is a cyclic group of prime order $p$. Let $x+H\in \mathbb{Q}/H$ be any element. Then $p(H+x)=H$ i.e., $H+px=H$ or that $px\in H$ for all $x\in \mathbb{Q}$. Let $y\in\mathbb{Q}$ then $\frac{y}{p}\in \mathbb{Q}$. If $\frac{y}{p}=x$ then $y=px\in H$ or that $\mathbb{Q}\subseteq H\subseteq \mathbb{Q}$ a contradiction. Question 1. Why is a group $G$ that is simple and abelian cyclic of prime order $p$? I proved the cyclic part. Indeed, let $G$ be simple abelian and let $e\neq g\in G$; then $\langle g\rangle$ is normal in $G$ because $G$ is abelian, and $\langle g\rangle=G$ because $G$ is simple. But why is $|\langle g\rangle |=p$ for some prime $p$? Question 2. Why is $px\in H$? Thanks in advance. AI: If $|g|=pq$ with $p,q>1$, then $|g^p|=q$, so $\langle g^p\rangle$ is a nontrivial proper subgroup; if $|g|$ is infinite, then $\langle g^2\rangle$ is a nontrivial proper subgroup. Either way this contradicts simplicity, so $|g|$ must be prime. For the second question, consider the quotient map $u:\mathbb{Q}\rightarrow\mathbb{Q}/H$; then $u(px)=pu(x)=0$ since the order of $\mathbb{Q}/H$ is $p$. This implies that $px\in H$.
H: Direct proof that integral of a function does not depend on the $\sigma$-algebra used to define it? If $\mathcal{G}\subset\mathcal{F}$ are two $\sigma$ algebras on a set $X$, $\mu$ is a nonnegative measure on $(X,\mathcal{F})$ and $f:X\to[0,+\infty]$ is $\mathcal{G}$-measurable, then there are two possible definitions of $\int f~d\mu$. The first is as an $\mathcal{F}$-measurable map: $$\int_\mathcal{F} f~d\mu:=\sup\left\{\int\varphi~d\mu~\Big|\:\begin{array}{c}0\leq\varphi\leq f\text{ simple}\\\mathcal{F}\text{-measurable map}\end{array}\right\}$$ the second is as a $\mathcal{G}$-measurable map: $$\int_\mathcal{G} f~d\mu:=\sup\left\{\int\varphi~d\mu~\Big|\:\begin{array}{c}0\leq\varphi\leq f\text{ simple}\\ \mathcal{G}\text{-measurable map}\end{array}\right\}$$ These two definitions coincide as follows for instance from an application of Fatou's lemma (for $\mathcal{F}$-measurable maps) and the dominated convergence theorem (for $\mathcal{G}$-measurable maps). Question: is there a direct argument proving $\int_\mathcal{G} f~d\mu=\int_\mathcal{F} f~d\mu$ that I'm overlooking? Maybe on the basis of a monotone class argument? Proof I had in mind: Note that by definition of both integrals the result is true for indicator functions $1_A$, $A\in\mathcal{G}$. Furthermore, by linearity of both $\int_\mathcal{G}\cdot~ d\mu$ and $\int_\mathcal{F}\cdot~ d\mu$, the result holds for all nonnegative $\mathcal{G}$-measurable simple functions ${\color{blue}{(1)}}$. Furthermore, since every $\mathcal{G}$-measurable simple function is a $\mathcal{F}$-measurable simple function, one always has $$ \int_\mathcal{G} f~d\mu\leq\int_\mathcal{F} f~d\mu. $$ Thus, if $\int_\mathcal{G} f~d\mu=+\infty$, the result is established. Now suppose $\int_\mathcal{G} f~d\mu<+\infty$. For every $n\in\Bbb{N}$ we set for instance $$ f_n = \sum_{i=1}^{n2^n}2^{-n}1_{[i/2^n\leq f]}. $$ Then $0\leq f_n\leq f$ is a nonnegative $\mathcal{G}$-measurable simple function which converges pointwise to $f$. We get that $f\in L^1(\Omega,\mathcal{F},\mu)$ and the reverse inequality since by Fatou's lemma $\color{red}{(2)}$ and dominated convergence $\color{green}{(3)}$ $$\begin{eqnarray} 0 \leq \int_\mathcal{F} f~d\mu = \int_\mathcal{F}\liminf_n f_n~d\mu & \overset{\color{red}{(2)}}\leq & \liminf_n\int_\mathcal{F} f_n~d\mu\\ & \overset{\color{blue}{(1)}}= & \liminf_n\int_\mathcal{G} f_n~d\mu\\ & \overset{\color{green}{(3)}}= & \int_\mathcal{G} f~d\mu. \end{eqnarray}$$ Better proof. As noted in @KaviRamaMurthy's answer, the right tool is the monotone convergence theorem. We know by the first point ${\color{blue}{(1)}}$ that both definitions agree on nonnegative simple $\mathcal{G}$-measurable functions. Taking the $0\leq f_n\leq f_{n+1}\leq f$ defined above, one has by two applications of the monotone convergence theorem that $$ \int_{\mathcal{F}} f~d\mu \overset{\text{MCT}}= \lim_n \int_{\mathcal{F}} f_n~d\mu \overset{{\color{blue}{(1)}}}= \lim_n \int_{\mathcal{G}} f_n~d\mu \overset{\text{MCT}}= \int_{\mathcal{G}} f~d\mu $$ AI: In the books I have used $\int fd\mu$ is defined using a specific construction of simple functions approximating $f$: $f_n(x)=\frac {i-1} {2^{n}}$ if $\frac {i-1} {2^{n}} \leq f <\frac i {2^{n}}$, $i \leq N2^{n}$ and $f_n(x)=N$ for $i >N2^{n}$. And $\int fd\mu=\lim \int f_n d\mu$. In the present case the same sequence is also a sequence of simple functions on $(X, \mathcal G)$ increasing to $f$ so the definition of $\int fd\mu$ is same whether you are using the sigma algebra $\mathcal G$ or $\mathcal F$.
H: What is the "most obtuse" triangle that can fit on a sphere? Often, when someone introduces the idea of non-euclidean geometry they give the examples of spherical and hyperbolic geometry. To help visualize these concepts, they'll usually compare the sum of the angles of ordinary triangles in each of these geometries, as well as comparing them to a triangle in euclidean geometry. I'm going to use the phrase "most obtuse" as meaning the triangle with the largest sum of its inner angles. Taking the example of a unit sphere, what is the most obtuse and non-self-intersecting triangle that can be inscribed on its surface? AI: The most obtuse triangle has angles that sum to $180^\circ\cdot3-\varepsilon$ and is just shy of being a hemisphere. Or, if you allow it, push that triangle past covering half of the sphere, then the angles are each greater that 180°. Keep pushing the triangle further and you almost cover the whole sphere, except for a small triangle and the angles sum to just under 900°
H: Separable first order differential equivalence: how can I get expected outcome? I am told a colony at any time has a population at time $t$ according to the rule: $$\frac{dB}{dt} = kB + I$$ Where $B$ is the colony size, $k$ is a positive constant, and $I$ is some constant. I want to show that the colony size at any time $t$ is: $$B(t) = \frac{1}{k}(Ae^{kt}-I)$$ Where $A = \pm e^C$. Here's my work so far: $$\begin{align} \frac{dB}{dt} &= kB+I \\ dB &= (kB+I)\ dt \\ \frac{dB}{kB+I} &= 1\ dt \\ \int \frac{dB}{kB+I} &= \int 1\ dt \\ \ln |kB+I| &= t + C \\ e^{\ln |kB+I|} &= e^{t+C} \\ |kB+I| &= e^{t+C} \\ &= e^t e^C \\ kB+I &= Ae^t \\ kB &= Ae^t - I \\ B &= \frac{1}{k}(Ae^t - I) \end{align}$$ Note the exponent $k$ is missing from the right side. I'm not particularly sure what I'm meant to be doing here, other than showing equivalence: $$\frac{dB}{dt} = kB+I \equiv B(t) = \frac{1}{k}(Ae^{kt}-I)$$ What are the steps to show this? AI: The error is between the fourth and fifth line of your work. $$ \int \frac{dB}{kB+I} = \frac1k \ln |kB+I| $$ The $\frac1k$ was left out in your work. (To see why it's there, simply do a u substitution with u=kB+I). From there onward, your work is all correct. Just carry the $k$ through each step and you will arrive at the desired expression.
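A quick SymPy verification that the claimed solution does satisfy the differential equation (my own check; the constant $I$ is renamed `I0` to avoid SymPy's built-in imaginary unit):

```python
import sympy as sp

t, k, A, I0 = sp.symbols('t k A I0')
B = (A * sp.exp(k * t) - I0) / k               # claimed solution B(t) = (A e^{kt} - I)/k
print(sp.simplify(B.diff(t) - (k * B + I0)))   # 0, so B' = kB + I holds
```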
H: A probability counterexample for the measure $Q(A) = \int_{\Omega} X \mathbb{1}_A \mathbb{1}_B \ \text{d} \mathbb{P}$ Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, $B \in \mathcal{A}$ and $X : \Omega \to (0,\infty)$ an integrable random variable. Define $$Q : \mathcal{A} \to [0,\infty) \quad \text{such that} \quad Q(A) = \int_{\Omega} X \mathbb{1}_A \mathbb{1}_B \ \text{d} \mathbb{P}$$ I've shown that $Q$ defines a measure on $(\Omega, \mathcal{A})$ by Beppo Levi (exchanging integral and sum). Then I have shown that for any random variable $Y \geq 0$, we have $$\int_{\Omega} Y \ \text{d} Q = \int_{\Omega} XY \mathbb{1}_B \ \text{d} \mathbb{P}$$ by first showing that it's true for $Y = \mathbb{1}_A$ then it's true by linearity for any simple function $\sum_{k=1}^{n} \alpha_k\mathbb{1}_{A_k}$ and finally true by MCT for a non-negative random variable that can be approximated by simple functions $\varphi_{n} \uparrow Y$. Assuming now that $\int_{\Omega} X \ \text{d} \mathbb{P} = 1$, does it follow that $Q$ is a probability measure, i.e $Q(\Omega) = 1$? I don't think so, since $Q(\Omega) = \int_{B} X \ \text{d} \mathbb{P} \leq \int_{\Omega} X \ \text{d} \mathbb{P} = 1$ but can't come up with a counterexample. Any help on what to consider? Thanks. AI: Just take $X=1$. Then $\int XdP=1$ but $Q(\Omega)=P(B)$ which may be less than $1$. For a specific counter-example take $\Omega=(0,1)$, $P=$ Lebesgue measure $B=(0,\frac 1 2)$ and $X=1$.
H: The angle between a line and a normal vector The problem I am trying to solve is below: What is the angle formed by the line $(1,2,0) + t(-1,2,1)$ and a normal vector of the plane $x+y-z = 4?$ Give your answer in degrees. I am having a little bit of trouble solving this problem. I have watched videos and looked at websites, as well as books I have, however, all of them aren't very helpful. I have also tried drawing a diagram, but I don't think I did it correctly. Any advice/answers? AI: HINT \begin{align*} \langle x,y\rangle = \|x\|\|y\|\cos(\theta) \end{align*}
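Applying the hinted formula to the data in the question (a quick numeric check; the direction vector $(-1,2,1)$ is read off the line's parametric form and the normal $(1,1,-1)$ off the plane's coefficients):

```python
import numpy as np

d = np.array([-1.0, 2.0, 1.0])    # direction vector of the line
n = np.array([1.0, 1.0, -1.0])    # normal vector of the plane x + y - z = 4

cos_theta = d @ n / (np.linalg.norm(d) * np.linalg.norm(n))
print(np.degrees(np.arccos(cos_theta)))   # 90.0, since the dot product is 0
```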
H: Calculating derivative of linear distance This is an exercise from Morris Kline's "Calculus: An Intuitive and Physical Approach". If an object moves along a circle of radius $R$, its position can be described by specifying the angle $\theta$ through which it has rotated. The derivative of $\theta$ with respect to $t$ (time) is called the angular velocity and is usually denoted by $\omega$; that is, $\theta ' = \omega$. The derivative of the angular velocity with respect to $t$ is called the angular acceleration and is usually denoted by $\alpha$; that is $\theta '' = \omega ' = \alpha$. The linear distance covered by the object is $s = R \theta$ if $\theta$ is measured in radians. Answer the following questions concerning circular motion:What is the linear velocity? Answer from key: since $s = R \theta$, $s' = R \theta '$What is the linear acceleration? Answer from key: from $s' = R \theta ' $, we have $s'' = R \theta '' = R \alpha$. It is clear to me why $s'$ is the linear velocity, but I am unsure of how $s'$ was actually computed in the answer. The linear velocity should be the instantaneous rate of change in linear distance with respect to time, which is $\frac{ds}{dt}$. The variable $t$ does not appear in the equation for $s$, so how was $s'$ calculated? This is fairly early on in the book, so nothing like the chain rule has been covered. AI: When the particle is moving the angle $\theta$ is changing with time so it is function of $t$. Thus $$\frac{ds}{dt}=\frac{d(R\theta)}{dt}=R\frac{d\theta}{dt}=R\omega.$$
H: Show $\det(F_n)=1$ for all $n$ Consider the $n\times n$ matrix $F_n= (f_{i,j})$ of binomial coefficients $$f_{i,j}=\begin{pmatrix}i-1+j-1\\i-1\end{pmatrix}$$ Prove that $\det(F_n)=1$ for all $n$. My current idea is to apply Leibniz formula for determinants and induction, but it seems too complicated. Any better ideas and suggestions are welcome. AI: Using the formula $\binom{n}{k}=\binom{n-1}{k}+\binom{n-1}{k-1}$ you get, by applying the column operations $$ \left\{ \begin{array}{lcl} C_n&\gets& C_n-C_{n-1} \\[1mm] C_{n-1}&\gets& C_{n-1}-C_{n-2} \\[1mm] &\vdots\\[1mm] C_2&\gets& C_2-C_{1} \end{array} \right. $$ that $$\det(F_n) = \det\left( \begin{array}{c|ccc} 1&0&\cdots&0\\\hline *\\ \vdots&&F_{n-1}\\ *\end{array}\right) = \det(F_{n-1})$$ So that the sequence $\det(F_n)$ is constant equalt to its first term $\det(F_1)=1$.
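A quick SymPy check of the claim for small $n$ (exact arithmetic; the indices are 0-based in the code, matching $f_{i,j}=\binom{i-1+j-1}{i-1}$ in the statement):

```python
import sympy as sp

for n in range(1, 8):
    F = sp.Matrix(n, n, lambda i, j: sp.binomial(i + j, i))
    print(n, F.det())   # 1 for every n
```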
H: Difference b/w $p$ and $P(X)$ i.e. output of Binomial Distribution ($BD$)? Is it possible to have $p=1$ but $BD=0$? I have a confusion with the Binomial Distribution. For the Binomial Distribution we use the formula: $P(x) = {n \choose x} \cdot p^x \cdot (1-p)^{n-x}$ Now let's suppose $p=1$; if we put this $1$ in $P(x)$, we will get $P(x)=0$. How can we define this situation? Zulfi. AI: Um, funny. If $p=1$, then when you repeat the experiment $n$ times, $P(X=n)=1$ and $P(X=k)=0$ for all other $k$. You can't use the formula to calculate $P(X=n)$, but for all other $P(X=k)$ the formula still works since $0^{n-k}=0$ if $k\ne n$. Edit: If you read Hogg and Craig's "Intro to Mathematical Statistics", you will find that they define the binomial distribution with a restriction of $0<p<1$.
H: Showing $f(h)=g^{-1}h^{-1}gh$ for any $g \in G, h\in H$ where $H$ is normal subgroup of $G$. Let $H$ be a finite normal subgroup of $G$. Show that for any $g\in G$, the map $f:H \to H$ defined by $f(h)=g^{-1}h^{-1}gh$ is bijective. Edit to problem: Let $H$ be a finite normal subgroup of $G$. Let $g\in G$ have order $n$ and the only element in $H$ that commute with $g$ is $e$, the identity element of $H$. Show that for any $g\in G$, the map $f:H \to H$ defined by $f(h)=g^{-1}h^{-1}gh$ is bijective. Here's my attempt. I can't prove the surjectivity yet. So, I try prove the injectivity. Let $a,b \in H$. Assume that $f(a)=f(b)$. Now, $g^{-1}a^{-1}ga = g^{-1}b^{-1}gb$ $a^{-1}ga = b^{-1}gb$ $a^{-1}gag^{-1} = b^{-1}gbg^{-1}$ $a^{-1}H = b^{-1}H$ $a^{-1} = b^{-1} \Leftrightarrow a=b$ Is that true? If no, how to prove it? Any idea to prove the surjectivity too? Thanks in advance. AI: You cannot conclude from $a^{-1}H = b^{-1}H$ that $a^{-1}=b^{-1}$. For one thing, since you are assuming that $a,b\in H$, then you always have $a^{-1}H=b^{-1}H = H$, whether $a$ and $b$ are equal or not. However, you are almost there in the modified problem, with $H$ finite and the assumption that no element of $H$ other than $e$ commutes with $g$. From $f(a)=g(b)$, you have $a^{-1}ga = b^{-1}gb$. This gives you $$\begin{align*} a^{-1}ga &= b^{-1}gb\\ ba^{-1}g &= gba^{-1}\\ (ba^{-1})g &= g(ba^{-1}). \end{align*}$$ Can you take it from there?
H: Evaluate $\lim\limits_{n \to \infty}\frac{|\sin 1|+2|\sin 2|+\cdots+n|\sin n|}{n^2}.$ It's well-known that $$\lim\limits_{n \to \infty}\frac{|\sin 1|+|\sin 2|+\cdots+|\sin n|}{n}=\frac{2}{\pi},$$which can be obtained by the uniform distribution. Can it be used directly to solve the present problem? AI: Yes, you may just invoke summation by parts. If you know that $$ s(n) = \sum_{k=1}^{n}\left|\sin k\right| = \frac{2}{\pi}n+O(1) $$ then $$ S(n) = \sum_{k=1}^{n}k\left|\sin k\right| = n s(n) - \sum_{k=1}^{n-1} s(k) $$ where $$ n s(n) = \frac{2}{\pi}n^2 + O(n), $$ $$ \sum_{k=1}^{n-1}s(k) = O(n)+\sum_{k=1}^{n-1}\frac{2}{\pi}k = \frac{1}{\pi}n^2+O(n), $$ so $$ S(n) = \frac{1}{\pi} n^2 + O(n).$$ If you start with the weaker $s(n)=\frac{2}{\pi}n+o(n)$ you end up with $S(n)=\frac{1}{\pi}n^2+o(n^2)$.
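A numerical check of the asymptotics (Python; since the error term is only $O(n)$ relative to $n^2$, the agreement with $1/\pi$ is already quite close for $n=10^6$):

```python
from math import sin, pi

n = 10**6
S = sum(k * abs(sin(k)) for k in range(1, n + 1))
print(S / n**2, 1 / pi)   # both ~0.3183
```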
H: Transversality of two mappings and diagonal I've never taken differential topology and am confused by the definition of transversality, and while trying to solve the following I got stuck. Given smooth manifolds and maps $f:M\to N$ and $g:P\to N$, show that $f$ and $g$ are transversal to each other if and only if $f\times g: M\times P\to N\times N$ is transversal to the diagonal $\Delta \subset N\times N$. AI: By definition $f \pitchfork g$ means that for every $x \in M$ and $y \in P$ such that $z\doteq f(x) = g(y)$, we have $${\rm d}f_x[T_xM] + {\rm d}g_y[T_yP] = T_zN.$$This is not required to be a direct sum. If $S \subseteq N$ is a submanifold, we write $f \pitchfork S$ instead to mean $f \pitchfork \iota_S$, where $\iota_S\colon S \hookrightarrow N$ is the inclusion map. It is easy to see that ${\rm d}(f\times g)_{(x,y)}[M\times P] = {\rm d}f_x[T_xM] \times {\rm d}g_y[T_yP]$. Also, we have that the tangent space to the diagonal is the diagonal of the tangent space, i.e., $T_{(z,z)}\Delta = \{ (w,w) \mid w \in T_zN\}$. $\implies:$ Assume that $f \pitchfork g$, and let's prove that $(f\times g) \pitchfork \Delta$. Assume that $(x,y) \in M\times P$ are such that $z \doteq f(x) = g(y)$ and take $(w_1,w_2) \in T_{(z,z)}(N\times N) = T_zN\times T_zN$. Our goal is to write $(w_1,w_2)$ as the sum of something in ${\rm d}f_x[T_xM]\times {\rm d}g_y[T_yP]$ with something in $T_{(z,z)}\Delta$. Well, $f \pitchfork g$ gives $u \in T_xM$ and $v\in T_yP$ such that $w_1-w_2 = {\rm d}f_x(u) - {\rm d}g_y(v)$. Now let $w = w_1 - {\rm d}f_x(u)$ and note that $w = w_2 - {\rm d}g_y(v)$ holds as well. Then we have that $$(w_1,w_2) = ({\rm d}f_x(u),{\rm d}g_y(v)) + (w,w),$$and this shows that $(f\times g)\pitchfork \Delta$, as wanted. $\impliedby:$ Assume that $(f\times g)\pitchfork \Delta$ and let's show that $f \pitchfork g$. So, take $w \in T_zN$ and look at $(w,0) \in T_{(z,z)}(N\times N)$. The assumption $(f\times g)\pitchfork \Delta$ provides $u \in T_xM$, $v \in T_yP$ and $w' \in T_zN$ such that $$(w,0) = ({\rm d}f_x(u), {\rm d}g_y(v)) + (w',w').$$It readily follows that $w = {\rm d}f_x(u)+{\rm d}g_y(-v)$, which shows that $f \pitchfork g$.
H: Convergence of real integral I'm trying to analyze the convergence of following integral: $$ \int_{0}^{1}\frac{dx}{\sqrt[3]{x(e^{x}-e^{-x})}} $$ Currently not being able to get a hint on how to proceed to do it, any help is really appreciated. So far I've tried: 1. $$ \frac{dx}{\sqrt[3]{x(e^{x}-e^{-x})}} >= \frac{dx}{\sqrt[3]{x(e-e^{-1})}} $$ But right expression converges, so it's not helping. 2. $$ \frac{dx}{\sqrt[3]{x(e^{x}-e^{-x})}} = \frac{dx}{\sqrt[3]{2xsh(x))}} $$ But I couldn't get anything useful. 3. I would expect this behaves as other easiest function which I could test that converges/diverges easily but I'm a bit lost on the transformation needed. Thanks! EDIT: Solution Proposed: $$ \sim \frac{1}{x^{\frac{2}{3}}} $$ Then comparisson by limits: $$ \lim_{x \to 0} \frac{\frac{1}{\sqrt[3]{x(e^{x}-e^{-x})}}}{\frac{1}{x^{\frac{2}{3}}}} = \sqrt[3]{\lim_{x \to 0}\frac{x}{(e^{x}-e^{-x})}} = \frac{1}{\sqrt[3]{2}} $$ Finally as the limit is $\neq 0, \neq \infty$, then it behaves as the proposed function which converges. Thank you! AI: $e^x - e^{-x} > 2x > x$ for $x> 0$ so we have the inequality $$\frac{1}{\sqrt[3]{x(e^x-e^{-x})}} \leq \frac{1}{x^{\frac{2}{3}}}$$ Thus the integral $$0 < \int_0^1 \frac{dx}{\sqrt[3]{x(e^x-e^{-x})}} \leq \int_0^1 x^{-\frac{2}{3}}\:dx = 3$$ converges.
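A quick numerical look at the comparison used in the answer (just tabulating the integrand against the dominating function $x^{-2/3}$; near $0$ their ratio tends to $2^{-1/3}$, the limit computed in the edit above):

```python
from math import exp

for x in [1e-6, 1e-3, 0.1, 0.5, 1.0]:
    integrand = (x * (exp(x) - exp(-x))) ** (-1/3)
    bound = x ** (-2/3)
    print(x, integrand, bound, integrand <= bound)   # the bound always dominates
```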
H: Suppose $\lim_{n\to\infty}f(i,n)=0$ for all $i$, does that mean that $\lim_{n\to\infty}f(n,n)=0$ as well? Suppose that $f:\mathbb{N}\times\mathbb{N}\rightarrow \mathbb{R}$ is a function such that for each $i\in\mathbb{N}$, as $n\rightarrow \infty$, $$ f(i,n) \rightarrow 0.$$ Does this imply that $f(n,n) \rightarrow 0$ as $n\rightarrow \infty$? AI: No. Let $$f(m,n)=\begin{cases} m,&\text{if }n\le m\\ \frac1n,&\text{if }n>m\;; \end{cases}$$ clearly $f(n,n)=n$ for all $n$, so the diagonal sequence actually blows up. Clearly one can modify the example to give $\langle f(n,n):n\in\Bbb N\rangle$ any desired behavior: $$\begin{array}{c|cc} m\backslash n&1&2&3&4&5&6&\ldots\\\hline 1&a_1&\frac12&\frac13&\frac14&\frac15&\frac16&\ldots\\ 2&1&a_2&\frac13&\frac14&\frac15&\frac16&\ldots\\ 3&1&1&a_3&\frac14&\frac15&\frac16&\ldots\\ 4&1&1&1&a_4&\frac15&\frac16&\ldots\\ 5&1&1&1&1&a_5&\frac16&\ldots\\ 6&1&1&1&1&1&a_6&\ldots \end{array}$$
H: Proving convexity of a set The problem Given a convex set $S \subset \mathbb{R}^n$, prove that $\overline{S}$ is also convex. What I've tried: To solve this problem I've tried using the fact that $\overline{S}$ is composed of the interior of $S$ together with its frontier. Since $S$ is convex, the interior of $S$ is already convex, so to prove that $\overline{S}$ is convex we would just need to prove that the frontier of $S$ is convex. Problem As you can see, I've come here because I can't prove this statement. So I ask: is my way of approaching this problem right? AI: I assume that $\overline{S}$ denotes the topological closure of $S$. Any element $p$ of $\overline{S}$ can be written as $p=\lim_n x_n$ where $x_n\in S$. Now, let $p=\lim_n x_n$ and $q=\lim_n y_n$ be elements of $\overline{S}$, where $x_n, y_n\in S$. We want to show that $\lambda p+(1-\lambda) q\in \overline{S}$, for $\lambda\in [0,1]$. Now, $\lambda p+(1-\lambda) q=\lim_n \lambda x_n+(1-\lambda)y_n$. By convexity of $S$, we have $\lambda x_n+(1-\lambda)y_n\in S$ for each $n$. Thus $\lambda p+(1-\lambda) q$ is a limit of a sequence of points in $S$, so by definition of the closure it lies in $\overline{S}$, as desired.
H: Polynomials of degree $n$ with $\Bbb{F}_p$ are always reducible in $\Bbb{F}_{p^n}$ This is a rather basic question, but I can't seem to find any reference here on StackExchange. Is it true that, given a polynomial $p(x) \in \Bbb{F}_p[x]$ of degree $n$, we have that $p(x)$ is always reducible in $\Bbb{F}_{p^n}[x]$? If it is not true, then I'm in particular more interested in the case where $n = 2$. AI: This is true. If $P(x)$ is irreducible over $\mathbb F_{p^n}$ then it is certainly irreducible over $\mathbb F_{p}$. But that means that $\mathbb F_p(\alpha)$ for some root $\alpha$ of $P$ is a degree $n$ extension of $\mathbb F_p$, and hence equal to $\mathbb F_{p^n}$, and so $P(x)$ would have a root in $\mathbb F_{p^n}$, contradicting the initial assumption that it was irreducible.
H: Can we say that , for a parabola , no such point exists inside the parabola which is midpoint of more than one chord? I was reading about the properties of parabola, amongst which one of the property was that parabola has no centre. I tried to prove it by considering four parametric points on the parabola i.e. $P_1(a(t_1)^2,2at_1), \,P_2(a(t_2)^2,2at_2), \\P_3(a(t_3)^2,2at_3), \,P_4(a(t_4)^2,2at_4)$ Further I equated the coordinates of midpoint of $P_1P_2$ and $P_3P_4$, after doing this I got that either $P1=P3$ and $P_2=P_4$ or $P_1=P_4$ and $P_2=P_3$, i.e. the two chords are coincident . So from the above observation can I conclude that for a parabola a point which lies inside the parabola cannot be the midpoint of more than one chord? AI: Let $(x,y)$ be a point satisfying $y > x^2$. Then the system $$\begin{align} x &= \frac{x_1 + x_2}{2}, \\ y &= \frac{x_1^2 + x_2^2}{2} \end{align}$$ has, up to permutation of $x_1, x_2$, the unique solution $$x_1 = x \pm \sqrt{y-x^2}, \quad x_2 = x \mp \sqrt{y-x^2}.$$ It follows that for each "interior" point of the parabola $y = x^2$, there is precisely one pair of points $(x_1, x_1^2)$, $(x_2, x_2^2)$ on the parabola whose midpoint is $(x,y)$. Since all nondegenerate parabolas are similar, the result follows.
H: How to prove $\tanh^{-1}(\sin x)=\sin^{-1}(\tan x)$ Here's what I attempted: $$ y =\tanh^{-1}(\sin x)$$ $$\tanh y=\sin x$$ But I don't know what to do after this. Please help me. AI: These two are not equal. Let $x = \pi/3$ for instance. Then $$\text{arctanh}(\sin(x)) \approx 1.32$$ (per Wolfram), while you can't even get a real-number result for the latter equation. This is because $$\arcsin(\tan(\pi/3)) = \arcsin( \sqrt 3)$$ but the domain for arcsine (for real outputs) is $[-1,1]$. In particular, Wolfram approximates the answer as about $1.57 - 1.15i$.
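A quick numerical confirmation that the two expressions disagree (and that the right-hand side is not even defined over the reals at $x=\pi/3$, as noted above):

```python
from math import sin, tan, atanh, asin, pi

for x in [0.1, 0.5, pi/6]:
    print(x, atanh(sin(x)), asin(tan(x)))    # the two values differ

try:
    asin(tan(pi/3))                          # tan(pi/3) = sqrt(3) > 1
except ValueError as err:
    print("asin(tan(pi/3)) is not a real number:", err)
```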
H: Is this norm equivalent to the $\ell_1$ norm? I am studying for my qualifying exams and was asked to prove or disprove that the following norm is equivalent to the $\ell_1$ norm: $$\lVert x \rVert' = 2\left\lvert \sum_{n=1}^{\infty}x_n \right\rvert + \sum_{n=2}^{\infty}\left(1+\frac{1}{n}\right) \lvert x_n\rvert$$ It was easy to show that $\lVert x\rVert' \leq 4\lVert x\rVert_1$ but I have been trying for quite a while to show there is a constant $C$ such that $C\lVert x\rVert_1 \leq \lVert x\rVert'$. It has been so difficult I am beginning to believe they are not equivalent, when originally I thought they were. AI: The $\sum_{n=2}^\infty |x_n|$ part is common to both norms, so we just need to control $x_1$. This can be done as follows- $$|x_1| = \left| \sum_{n=1}^\infty x_n - \sum_{n=2}^\infty x_n\right| \le \left| \sum_{n=1}^\infty x_n\right| + \sum_{n=2}^\infty |x_n|.$$ Hence, $$ \|x\|_{\ell_1} = |x_1| + \sum_{n=2}^\infty |x_n| \le \left| \sum_{n=1}^\infty x_n\right| + 2\sum_{n=2}^\infty |x_n| \le 2 \|x\|'.$$
H: Convergence of double series involving minimum Determine the convergence of the series $$\sum_{n,m\in\mathbb{Z}:|n-m|>10,|m-10|>0}\min\{|n|^{-10},|n-m|^{-10}\}.$$ I tried solving this using "an integral test", saying $$ \sum_{n,m\in\mathbb{Z}:|n-m|>10,|m-10|>0}\min\{|n|^{-10},|n-m|^{-10}\} \le \int_{|x-y|>10,|y-10|>0} \min \{|x|^{-10},|x-y|^{-10}\}dxdy .$$ However the integral to the right diverges (it's bounded from below by $\int_{-\infty}^{-10}\int_{20}^\infty \min\{...\} dxdy=\iint_{[20,\infty)\times[-\infty,-10)}|x|^{-10}dxdy$ which is divergent). It does not mean however anything as the sum may still be convergent. Does anybody has an idea how to show this series converges/diverges? AI: 1. Let me first convince you that the sum converges by investigating a much easier version: $$ \sum_{(n,m)\neq 0} |n|^{-10}\wedge|m|^{-10} $$ Decomposing the sum over the subregions divided by the lines $m = \pm n$, and exploiting the symmetry, this sum is bounded from above by $$ 4 \sum_{n=1}^{\infty} \sum_{m=-n}^{n} \frac{1}{n^{10}} = 4 \sum_{n=1}^{\infty} \frac{2n+1}{n^{10}}, $$ which converges. 2. Now we move on to the sum in the question. Since the sum $\sum_{|n-10|>10} |n|^{-10} \wedge |n-10|^{-10} $ converges, it suffices to study the convergence of $$ S := \sum_{|n-m|>10} |n|^{-10} \wedge |n-m|^{-10}. $$ By noting that $$ |n-m| \geq |n| \quad \Longleftrightarrow \quad (m \geq 0 \text{ and } n \leq 2m) \text{ or } (m \leq 0 \text{ and } n \geq 2m), $$ $\hspace{80pt}$ we get \begin{align*} S &\leq 2 \sum_{\substack{|n-m|>10 \\ m \geq 0}} |n|^{-10} \wedge |n-m|^{-10} \\ &\leq 2 \sum_{\substack{|n-m|>10 \\ m \geq 0 \\ n \geq 2m}} |n|^{-10} + 2 \sum_{\substack{|n-m|>10 \\ m \geq 0 \\ n \leq 2m}} |n-m|^{-10} \\ &\leq 2 \sum_{m \geq 0} \sum_{\substack{n \geq m/2 \\ n \neq 0}} |n|^{-10} + 2\sum_{l=11}^{\infty} \frac{\#\{ (n,m) : m=n+l, m \geq 0, n \leq 2m\}}{l^{10}}. \end{align*} Now by using the fact that $$ \sum_{\substack{n \geq m/2 \\ n \neq 0}} |n|^{-10} = \mathcal{O}(m^{-9}) $$ and $$ \#\{ (n,m) : m=n+l, m \geq 0, n \leq 2m\} = \mathcal{O}(l), $$ it follows that the above upper bound converges.
H: If a circle and a parabola touch each other and also have a common root, then what's the relationship between their coefficients? Actually this question is from physics (projectile motion) but I believe it's related more to maths. Here the equation of the parabola is $$y=ax-5x^2-5(ax)^2$$ and that of the circle is $$x^2+y^2=(a/5(1+a^2))^2$$ where $a=\tan \theta$ (link to the original question: https://physics.stackexchange.com/q/562407). Now they touch each other and also share a common root, and we have to find the values of $a$. This may seem like nonsense, but if we add a slider on the graph for $a$, they seem to meet the required condition at $\theta$ approximately equal to $73^\circ$. There is also no restriction on the initial velocity, so ignore the $40 \space m/s$ velocity in the graph; we can take it to be $1\ m/s$. [Graph of the parabola and circle, with a slider for $\theta$, omitted.] But how do we prove that when $\theta\approx 73^\circ$ the circle and parabola meet the required condition of touching each other and having a common root as shown in the graph? Please help me. AI: Let $r = \dfrac{a}{5(a^2+1)}$ be the radius of the circle (the way you wrote it is a bit ambiguous, but this is what you need for the parabola and circle to intersect at $x=r, y=0$). The resultant of $x^2 + y^2 - r^2$ and $y - (a x - 5 (1+a^2) x^2)$ with respect to $y$ is $${\frac { \left( \left( 5\,{a}^{2}+5 \right) x-a \right) \left( \left( 125\,{a}^{6}+375\,{a}^{4}+375\,{a}^{2}+125 \right) {x}^{3}+ \left( -25\,{a}^{5}-50\,{a}^{3}-25\,a \right) {x}^{2}+ \left( 5\,{a}^{2}+5 \right) x+a \right) }{ 25 \left( {a}^{2}+1 \right) ^{2}}} $$ This must be $0$ where the two curves intersect. The first factor in the numerator is $0$ at $x = r$, so the other factor in the numerator gives the other intersection. Now for the curves to be tangent at that point, we want this to have discriminant $0$ with respect to $x$. That discriminant turns out to be $$ (62500 (a^4-11 a^2-1)) (a^2+1)^6$$ so the value of $a$ must be the (positive real) root of $a^4 - 11 a^2 - 1$, which is $$ \sqrt{(11 + 5 \sqrt{5})/2} \approx 3.330190676 $$ This is $\tan\theta$, so $\theta = \arctan(a) \approx 1.279079822$ radians or $73.28587546$ degrees.
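A quick numerical check of the final value (Python; it just confirms that $a=\sqrt{(11+5\sqrt5)/2}$ is a root of $a^4-11a^2-1$ and converts the angle to degrees):

```python
from math import sqrt, atan, degrees

a = sqrt((11 + 5 * sqrt(5)) / 2)      # positive real root of a^4 - 11 a^2 - 1 = 0
print(a**4 - 11 * a**2 - 1)           # ~0 (floating-point round-off)
print(a, degrees(atan(a)))            # a ~ 3.3302, theta ~ 73.286 degrees
```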
H: Proof Verification for Baby Rudin's Chapter 4 Exercise 4 I'm trying to prove: $f: X \to Y$ is continuous, $g: X \to Y$ is continuous, $E \subset X$ is dense in $X \implies$ $f(E)$ is dense in $f(X)$ and $\forall p \in E, g(p) = f(p) \implies g(p) = f(p) \forall p \in X$. My attempt: First, we show that $f(E)$ is dense in $f(X)$. Note that it might be that some members of $f(X)$ also belong to $f(E)$ which means that we only need to show that those members of $f(X)$ that do not additionally belong to $f(E)$ are limits points of $f(E)$. To this end, let $y \in f(X)$ such that $y \notin f(E)$. Then, $y = f(p)$ for some $p \in X \setminus E$. Since $E$ is dense in $X$, $p$ is a limit point of $E$. Since $f$ is continuous at $p$, for any $x \in E$ satisfying $x \to p$, we have that $f(x) \to f(p) = y$. Since $y$ was arbitrarily chosen, $f(E)$ is dense in $f(X)$. My question: Is my proof for showing that "$f(E)$ is dense in $f(X)$" inaccurate in any way? AI: It seems fine to me, and assuming this is your first time learning analysis, I would try to be more explicit by saying this for the last part: since $p$ is a limit point of $E$, there exists a sequence of points $\{x_n\}\subset E$ with $x_n\rightarrow p$. Then by continuity we get a sequence of points $\{f(x_n)\}\subset f(E)$ with $f(x_n)\rightarrow f(p)=:y$. Hence every neighborhood of $y\in f(X)$ contains a point of $f(E)$ and $f(E)$ is dense in $f(X)$. $\boxed{}$
H: Confusion over solution to Linear Transformation from P2 to P3 I'm trying to understand the solution to the question below. I warrant I'm probably confused over the notation. In the question (attached below) it says that the transformation is $T(p)[x] = xp(x-3)$, with the standard bases for $P_2$ and $P_3$. I assume you need to break it down into: $P_0 = 1 + 0x + 0x^2$, $P_1 = 0 + 1x + 0x^2$, $P_2 = 0 + 0x + x^2$, and apply the transformation to each $P_n$. However, I have no idea how you would do that stepwise when the transformation is defined by $xp(x-3)$. Any help would be greatly appreciated! Thanks. [Image: Polynomial Question] AI: When you want to change basis, the columns of the transformation matrix are made up by applying the transformation to each element of the basis of your domain space; each result, expressed in the basis of $P_3$, gives the corresponding column of your transformation matrix. Now the standard basis of $P_2$ is $\{(1,0,0), (0,1,0), (0,0,1)\}$, where each element represents the coefficients of $1, x, x^2$ respectively. Now, let these "vectors" be $P_1, P_2, P_3$ (as each represents a polynomial). Hence, $T(P_1) = xP_1(x-3) = x = (0,1,0,0)$ $T(P_2) = xP_2(x-3) = x(x-3) = x^2-3x = (0,-3,1,0)$ $T(P_3) = xP_3(x-3) = x((x-3)^2) = x^3 - 6x^2 + 9x = (0,9,-6,1)$ Hence, we can write the standard matrix representation of the transformation T as $$\begin{bmatrix}0 & 0 & 0 \\ 1 & -3 & 9 \\ 0 & 1 & -6 \\ 0 & 0 & 1\end{bmatrix}$$
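The same computation can be automated; here is a small SymPy sketch (my own, just to confirm the matrix above) that applies $T(p)(x)=x\,p(x-3)$ to the standard basis and collects coefficients:

```python
import sympy as sp

x = sp.symbols('x')
basis_P2 = [sp.Integer(1), x, x**2]                     # standard basis of P_2
cols = []
for p in basis_P2:
    Tp = sp.expand(x * p.subs(x, x - 3))                # T(p)(x) = x * p(x - 3)
    cols.append([Tp.coeff(x, d) for d in range(4)])     # coefficients of 1, x, x^2, x^3
print(sp.Matrix(cols).T)                                # matches the 4x3 matrix above
```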
H: Can an extreme point of a polytope in $R^n$ be hidden from all $n-1$ dimensional coordinate projections? Given a polytope $X \subset R^n$, assume we have a nonempty extreme point set $P(X)$. We have $n$ different $(n-1)$-dimensional coordinate projections (namely, hiding one coordinate). We use $h(X, i)$ to indicate that the $i$th coordinate gets hidden. Would it be possible, for $n \geq 3$, that $\exists x \in P(X), h(x, i) \notin P(h(X, i)), \forall i \in \{1,\cdots, n\}$? The $n =2$ case is kind of trivial to prove: take $X$ as a pentagon, so there are 5 extreme points; projecting it to both the X and Y axes results in two line segments, which in total have at most 4 distinct extreme points, so there must be 1 extreme point missing. I'm having trouble proving it for the $n =3$ case. AI: Take the convex hull of $n+1$ points consisting of the standard basis vectors $e_1, \ldots, e_n$ together with $v_t = (t, t, \ldots, t)$ for some $t \in (0, 1/n)$. Then the projection of $v_t$ is in the interior of any coordinate projection.
H: Expected value of coin game You have four coins and I have four coins. We both throw our four coins, and if your number of heads equals mine, I will give you 2 dollars; otherwise you give me 1 dollar. Will you do it? I want to calculate the expected value of the game, and I'm not sure why this is wrong. Let X be the probability that we get the same number of heads (and so I win the game). X = P(I get 4H, You get 4H) + P(I Get 3H, You Get 3H) + P(I get 2H, You get 2H) + P(I get 1H, You get 1H) + P(I get 0H, you get 0H) X = ((1/2)^4 * 4C4)^2 + ((1/2)^4 * 4C3)^2 + ((1/2)^4 * 4C2)^2 + ((1/2)^4 * 4C1)^2 + ((1/2)^4 * 4C0)^2 I have the squares to account for both of us getting the outcome. The expected value should thus be 2*X - 1(1-X) = -.25 Why is this wrong? AI: Your calculation looks fine. I get $X=\frac {70}{256},$ which makes the value of the game $\frac{140-186}{256}=-\frac{46}{256}\approx -0.1797$. I don't want to play. Pay me $3$ on a win and I will play.
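A direct Python enumeration confirming both the matching probability and the value of the game (my own check of the numbers above):

```python
from math import comb

p_match = sum((comb(4, k) / 16) ** 2 for k in range(5))   # P(same number of heads)
value = 2 * p_match - 1 * (1 - p_match)
print(p_match, 70 / 256)     # 0.2734375 both ways
print(value, -46 / 256)      # ~ -0.1797, so the game favours the other player
```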
H: Sum of geometric series when exponent is $2n$, not $n$? I have a probability below which denotes the chance of catching a fish. $$P = \left(\frac{1}{4}\right) + \left(\frac{1}{4}\right)\left(\frac{3}{4}\right)^2 + \left(\frac{1}{4}\right)\left(\frac{3}{4}\right)^4 + \dots $$ I can find a generalized form of $P$ by assuming the first term is $\left(\frac{1}{4}\right)\left(\frac{3}{4}\right)^0$ to come up with an infinite series: $$P = \sum_{n=0}^\infty \left(\frac{1}{4}\right)\left(\frac{3}{4}\right)^{2n}$$ I'm learning about series now and I know that if $|r| \geq 1$ the series is divergent. Here I have $|r| = \dfrac{3}{4}$ so there is a sum. I also have $a = \dfrac{1}{4}$. In the examples, the series have an exponent of $n$ but none have $2n$. Were it to be $n$ instead, I could find the sum as: $$P = \frac{a}{1-r} = \frac{\frac{1}{4}}{1-\frac{3}{4}} = 1$$ But this is with $n$, not $2n$, where the answer I seek is $\dfrac{4}{7}$. How can I approach this number? How do I reconcile the $2n$ and is there a formula for this like there was when it was $n$? I can do partial sums and find an approximation but what is the sum as $n \to \infty$? I've marked the question calculus as this is a unit in Stewart's Early Transcendentals calculus text. AI: $$\sum_{n=0}^\infty \left(\frac{1}{4}\right)\left(\frac{3}{4}\right)^{2n}=\frac{1}{4}\sum_{n=0}^\infty\left(\frac{9}{16}\right)^{n}=\frac{1}{4} \frac{1}{1- \frac{9}{16}}=\frac{4}{7}.$$
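Numerically, the partial sums do settle at $\frac47$; a quick Python check using exact rational arithmetic (names are mine):

```python
from fractions import Fraction

a, r = Fraction(1, 4), Fraction(3, 4) ** 2   # the effective common ratio is (3/4)^2 = 9/16

partial = sum(a * r**n for n in range(30))
print(float(partial))      # ~0.5714285..., approaching 4/7
print(a / (1 - r))         # 4/7 exactly
```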
H: Probability with Combinations concept check When we calculate the probability of a random selection of $3$ students being all boys, from a group of $6$ boys and $4$ girls, then we can just multiply $\mathbb P(\text{1st being boy}) \times \mathbb P(\text{2nd being boy}) \times \mathbb P(\text{3rd being boy})$ i.e. $\frac 6 {10} \times \frac 5 9 \times \frac 4 8 $. But when we need the probability of two coin flips being TAILS in $5$ flips of a fair coin, we need to multiply $\mathbb P(T)\times\mathbb P(T)\times\mathbb P(H)\times\mathbb P(H)\times\mathbb P(H)\times{5\choose 2}$. Why do we need to multiply the probabilities by the number of combinations in the coin flips problem but not in the first? AI: Because there is only one way to get three boys. If you were calculating the probability of getting two boys and one girl, you could do $\frac 6{10} \cdot \frac 59 \cdot \frac 48$, which chooses two boys first, then one girl, and then multiply by $3$ because there are three orders in which you can choose the three children.
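Both counts can be confirmed by brute-force enumeration, which also makes the role of the $\binom52$ factor visible. A minimal Python sketch (the encoding of students as letters is mine):

```python
from itertools import product, permutations
from fractions import Fraction

# exactly two tails in five flips of a fair coin
flips = list(product("HT", repeat=5))
p_two_tails = Fraction(sum(1 for f in flips if f.count("T") == 2), len(flips))
print(p_two_tails)   # 5/16 = C(5,2) * (1/2)^5, ten orderings contribute

# ordered draw of 3 students from 6 boys and 4 girls, all boys
students = "B" * 6 + "G" * 4
draws = list(permutations(students, 3))
p_all_boys = Fraction(sum(1 for d in draws if d.count("B") == 3), len(draws))
print(p_all_boys)    # 1/6 = 6/10 * 5/9 * 4/8, only one pattern (BBB) to count
```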
H: Proof of the change of variables formula without using the Monotone Convergence Theorem I recently encountered the problem Exercise 36 in Tao's An Introduction to Measure Theory. The link of an online version of this problem is here. Now I quote this problem as follows: Exercise 36 (Change of variables formula) Let $(X, \mathcal{B}, \mu)$ be a measure space, and let $\phi: X \rightarrow Y$ be a measurable morphism (as defined in Remark 8) from $(X, \mathcal{B})$ to another measurable space $(Y, \mathcal{C})$. Define the pushforward $\phi_{*} \mu: \mathcal{C} \rightarrow[0,+\infty]$ of $\mu$ by $\phi$ by the formula $\phi_{*} \mu(E):=\mu\left(\phi^{-1}(E)\right)$ Show that $\phi_{*} \mu$ is a measure on $\mathcal{C},$ so that $\left(Y, \mathcal{C}, \phi_{*} \mu\right)$ is a measure space. If $f: Y \rightarrow[0,+\infty]$ is measurable, show that $\int_{Y} f d \phi_{*} \mu=\int_{X}(f \circ \phi) d \mu$ (Hint: the quickest proof here is via the monotone convergence theorem below, but it is also possible to prove the exercise without this theorem.) I am really eager to see how to prove the second statement WITHOUT the Monotone Convergence Theorem, in order to follow the procedure of the book. I tried hard and figured out only the case where $f$ is a simple function. How can we prove the case where $f$ is a general unsigned (nonnegative) function? The author has not provided a solution yet. Any help is appreciated. AI: Recall that, by definition, $\int_Y f \ \mathrm{d}\phi_*\mu:=\sup\left\{\int_Y s \ \mathrm{d}\phi_*\mu : s \text{ is simple and } 0 \leq s \leq f\right\}$. You already proved that $\int_Y s \ \mathrm{d}\phi_*\mu=\int_X (s\circ \phi) \ \mathrm{d}\mu$ for any simple function $s$. If $s$ is simple with $0\leq s \leq f$, then clearly $s \circ \phi$ is simple and $0 \leq (s \circ \phi)\leq (f \circ \phi)$. Thus, by the definition above we get $$ \int_Y f \ \mathrm{d}\phi_*\mu \leq \int_X (f\circ \phi) \ \mathrm{d}\mu. $$ EDIT: For the reverse inequality, my first two arguments were flawed. Here is a corrected one. Let $t: X \to [0, +\infty]$ be a simple function with $t \leq (f \circ \phi)$, written in canonical form $$ t:=\sum_{k=1}^n \alpha_k \chi_{E_k} $$ with the $E_k$ disjoint and measurable, so that $0 \leq \alpha_k = t(x) \leq f(\phi(x))$ when $x \in E_k$. Define $$ F_k:=\{ y \in Y: f(y)\geq \alpha_k\}. $$ Then each $F_k$ is measurable because $f$ is measurable. Define a simple function $s: Y \to [0,+\infty]$ as the pointwise maximum $$ s:=\max_{1\leq k\leq n} \alpha_k \chi_{F_k}, $$ with $s=0$ outside $\bigcup_k F_k$. By construction $0 \leq s \leq f$: if $s(y)=\alpha_k>0$ then $y\in F_k$, so $f(y)\geq \alpha_k=s(y)$. Moreover $t \leq s\circ\phi$: if $x\in E_k$ then $f(\phi(x))\geq t(x)=\alpha_k$, so $\phi(x)\in F_k$ and hence $s(\phi(x))\geq \alpha_k=t(x)$. Using monotonicity of the unsigned integral for simple functions, and then the simple-function case already proved, $$ \int_X t \ \mathrm{d}\mu \leq \int_X (s\circ \phi) \ \mathrm{d}\mu = \int_Y s \ \mathrm{d}\phi_*\mu \leq \int_Y f \ \mathrm{d}\phi_*\mu, $$ the last inequality holding by the supremum definition of $\int_Y f \ \mathrm{d}\phi_*\mu$, since $s$ is simple with $0\leq s\leq f$. Thus, taking the sup over all simple functions $t \leq (f \circ \phi)$ we conclude that $$ \int_X (f\circ \phi) \ \mathrm{d}\mu \leq \int_Y f \ \mathrm{d}\phi_*\mu. $$ This is the reverse inequality.
H: Find out the limit of $f(z)=(z-2)\log|z-2|, z\neq 2$ at the point $z_0=2$, or explain why it does not exist. Question: Find out the limit of $f(z)=(z-2)\log|z-2|, z\neq 2$ at the point $z_0=2$, or explain why it does not exist. My approach: Let $z=x+iy$, where $x,y\in\mathbb{R}$. This implies that $f(z)=f(x+iy)=(x+iy-2)\log|x+iy-2|=(x+iy-2)\log\left(\sqrt{(x-2)^2+y^2}\right).$ Therefore, $u:=\Re(f(z))=(x-2)\log\left(\sqrt{(x-2)^2+y^2}\right)$ and $v:=\Im(f(z))=y\log\left(\sqrt{(x-2)^2+y^2}\right).$ Now $$\lim_{z\to 2}u=\lim_{(x,y)\to (2,0)}u(x,y)=\lim_{(x,y)\to (2,0)}(x-2)\log\left(\sqrt{(x-2)^2+y^2}\right)$$ and $$\lim_{z\to 2}v=\lim_{(x,y)\to (2,0)}v(x,y)=\lim_{(x,y)\to (2,0)}y\log\left(\sqrt{(x-2)^2+y^2}\right).$$ Observe that both of these limits are of the indeterminate form $0.(-\infty)$. But, I cannot use L'Hopital's rule here, and I do not know if there is a multi-variable form of L'Hopital's rule or not. So, how to proceed after this and is there any other way to solve the problem? AI: By definition of limit the statement $$(z-2)\ln |z-2| \to 0$$ as $z \to 2$ is equivalent to the statement $$t \ln t \to 0$$ as $t \to 0+$. Do you know how to prove this? [We have to show that given $\epsilon >0$ there exists $\delta >0$ such that $|(z-2)\ln |z-2|| <\epsilon$ whenever $0<|z-2 | <\delta$. Just Put $t =|z-2|$ to see the equivalence. The fact that $t \ln t \to 0$ is proved by applying L'Hopital's Rule to $\frac {\ln t} {1/t}$].
H: Can the range of a linear transformation contain the null space? Let $V$ be a finite-dimensional vector space, and let $T$ be a linear transformation $T:V\rightarrow V$. If $\operatorname{null}(T)=\operatorname{span}\{\phi\}$, can $\operatorname{ran}(T)$ contain $\phi$, where $\phi$ is not the trivial vector? I know that $\operatorname{ran}(T)^0=\operatorname{null}(T^*)$ and $\operatorname{null}(T)^0=\operatorname{ran}(T^*)$, where $T^*$ is the dual operator $T^*:V^*\rightarrow V^*$. Let $\{\phi, e_1, e_2\}$ be a basis of $V$. Then, $\{T(e_1), T(e_2)\}$ spans $\operatorname{ran}(T)$ and there are unique numbers $a_i,b_i$ such that $T(e_1)=a_0\phi+a_1e_1+a_2e_2$ and $T(e_2)=b_0\phi+b_1e_1+b_2e_2$, because $\operatorname{ran}(T)\subset V$. Now let $\operatorname{null}(T^*)=\operatorname{span}\{\phi^*\}$; then $\phi^*(T(e_1))=\phi^*(T(e_2))=0$. If $\phi^*$ is an element of the dual basis such that $\phi^*(\phi)=1$, then $a_0$ and $b_0$ must be zero, and the range does not contain the null space. Moreover $V=\operatorname{null}(T)\oplus\operatorname{ran}(T)$. However, I do not know that $\phi^*(\phi)=1$ always holds. I have been stuck here. AI: In general, if $\ker(T)\subseteq TV$, then $\operatorname{rank}(T)\ge\operatorname{nullity}(T)$ and hence $\dim V\ge2\operatorname{nullity}(T)$. Conversely, if $K$ is any subspace of $V$ such that $n=\dim V\ge2\dim K=2k$, then $r:=n-k\ge k$. Let $\{u_1,u_2,\ldots,u_k\}$ be any basis of $K$. Complete it to a basis $\{u_1,u_2,\ldots,u_k,v_1,v_2,\ldots,v_r\}$ of $V$. Since $r\ge k$, we may define a linear transformation $T$ such that \begin{cases} T(u_i)=0,\\ T(v_i)=u_i&\text{ when }i\le k,\\ T(v_i)=v_i&\text{ when }i> k. \end{cases} Now $K=\ker(T)\subseteq TV$. In your case, since $K=\operatorname{span}(\phi)$ is one-dimensional, there exists a linear transformation $T:V\to V$ such that $K=\ker(T)\subseteq TV$ if and only if $\dim V\ge2$.
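A concrete instance of the answer's construction (for $\dim V=3$, $k=1$) can be written down explicitly. A minimal numpy sketch, with the matrix expressed in the basis $\{u_1,v_1,v_2\}$ (the coordinates are my own choice):

```python
import numpy as np

# T(u1) = 0, T(v1) = u1, T(v2) = v2, as a matrix in the basis {u1, v1, v2}
T = np.array([[0, 1, 0],
              [0, 0, 0],
              [0, 0, 1]])

phi = np.array([1, 0, 0])   # u1 spans the null space
v1  = np.array([0, 1, 0])

print(T @ phi)   # [0 0 0] : phi lies in null(T)
print(T @ v1)    # [1 0 0] : phi = T(v1), so phi also lies in ran(T)
```

So the range can indeed contain the null space; the dimension count $\dim V\ge2\operatorname{nullity}(T)$ is the only obstruction.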
H: Bessel processes and Brownian motion In class, we talked about the Bessel process as a process which solves the SDE: $$ dB=\frac{n-1}{2B}dt+\sum_{i=1}^n\frac{W_i}{B}dW_i $$ Where $W_1,W_2,...,W_n$ are independent, standard Brownian motions. We then showed that $B=||W||=(\sum_{i=1}^n(W_i)^2)^{1/2}$ solves this equation. As a side note, the professor said that $X_t=\intop_0^t\sum_{i=1}^n\frac{W_i}{B}dW_i$ is a BM and that we can show it using Levy's theorem. I tried to prove that myself but got stuck at showing that $X_t^2-t$ is a martingale. Wherever I looked (books, lecture notes, or other questions here) it was like "you can easily see that $X_t$ is BM by Levy's theorem". Well, it turned out it's not that easy for me. Can someone here please help me see that "easy" proof? Thank you! AI: I'll show $X_t^2-t$ is a martingale. Note $X_t$ satisfies $dX_t=\sum_{i=1}^n\tfrac{W_i(t)}{B(t)}dW_i(t)$. Since the $W_i$ are independent, $(dW_i)(dW_j)=\delta_{ij}dt$ where $\delta_{ij}$ is 1 if $i=j$ and 0 otherwise. It follows that \begin{align*} (dX_t)^2 = \sum_{i=1}^n \tfrac{W_i(t)^2}{B(t)^2}dt = dt, \end{align*} since $\sum_{i=1}^n\tfrac{W_i(s)^2}{B(s)^2}=1$ for all $s>0$ almost surely; in other words, the quadratic variation of $X$ is \begin{align*} \langle X\rangle_t = \int_0^t \sum_{i=1}^n\tfrac{W_i(s)^2}{B(s)^2}\,ds = t. \end{align*} By Itô's formula, $d(X_t^2)=2X_t\,dX_t+(dX_t)^2=2X_t\,dX_t+dt$, so \begin{align*} X_t^2 - t = 2\int_0^t X_s\,dX_s. \end{align*} The right-hand side is an Itô integral; by the Itô isometry $\mathbb{E}(X_s^2)=\mathbb{E}\langle X\rangle_s=s$, so $\mathbb{E}\int_0^t X_s^2\,ds=\int_0^t s\,ds<\infty$, and therefore $2\int_0^t X_s\,dX_s$ is a (square-integrable) martingale with respect to the filtration $\{\mathcal{F}_t\}$ generated by the Brownian motions. Hence for any $s<t$ \begin{align*} \mathbb{E}(X_t^2-t\vert\mathcal{F}_s) = 2\int_0^s X_u\,dX_u = X_s^2-s. \end{align*} Since $X_t^2-t$ is also integrable for all $t$ and adapted to $\{\mathcal{F}_t\}$, it is a martingale.
H: $(W_1\cap W_2)^{0}=W_1^0+W_2^0$ If $W_1$ and $W_2$ are subspaces of a finite dimensional vector space $V$, then $$(W_1\cap W_2)^{0}=W_1^0+W_2^0$$. Attempt Suppose $f\in W_1^0+W_2^0$. Then $f=f_1+f_2\in W_1^0+W_2^0$, where $f_1\in W_1^0$ and $f_2\in W_2^0.$ Now for $z\in (W_1\cap W_2)$, $f(z)=(f_1+f_2)(z)=f_1(z)+f_2(z)=0+0=0$. Therefore, $f\in (W_1\cap W_2)^0$. Thus $W_1^0+W_2^0\subseteq (W_1\cap W_2)^0$. How to prove the other part, $(W_1\cap W_2)^0\subseteq W_1^0+W_2^0$? Let $f\in (W_1\cap W_2)^0$; then $f(z)=0$ for $z\in (W_1\cap W_2)$. How to proceed next? Any hint? Thanks in advance. AI: Hint - Theorem If $W_1$ and $W_2$ are finite dimensional subspaces of a vector space $V$, then $W_1+W_2$ is finite dimensional and $$\operatorname{dim}W_1+\operatorname{dim}W_2=\operatorname{dim}(W_1\cap W_2)+\operatorname{dim}(W_1+W_2)$$. The proof of the above theorem gives an idea of how to complete the proof of your problem.
H: Can this inductive proof that $\sum_{i=0}^{n-1}2^{2i+1}=\frac23(4^n-1)$ be simplified? The general structure of equations I've used for the inductive step for proofs with a summation is something like: We'll prove that $\sum_{i = 0}^{n + 1} (\text{something}) = (\text{closed form expression})$ \begin{align} \sum_{i = 0}^{n + 1} (\text{something}) &= \sum_{i = 0}^n (\text{something}) + \text{last term} &\\ &= [\text{expression via I.H.}] + \text{last term} &\\ &= \text{do some work...} &\\ &= \text{some more work...} &\\ &= (\text{finally reach the closed form expression we want}) \end{align} This structure is very nice, since the equation is one-sided, and very easy to follow. However, I solved a problem that I couldn't solve with this one-sided structure, and I had to substitute the LHS with the closed form expression I'm trying to prove, so I could use some of its terms to simplify the RHS. This is fine and valid, but I'd like to know if there's a simpler way to perform this proof that doesn't employ that substitution. In other words, I couldn't figure out how to simplify $\frac{2}{3}(4^n - 1) + 2^{2n + 1}$ to get $\frac{2}{3}(4^{n + 1} - 1)$. The farthest I got was: \begin{align} &= \frac{2}{3}(4^n - 1) + 2^{2n + 1} &\\ &= \frac{2}{3}(4^n - 1) + 2\cdot 2^{2n} &\\ &= \frac{2}{3}(4^n - 1) + 2\cdot 4^n &\\ &= \frac{2}{3}(4^n - 1) + \frac{3 \cdot 2\cdot 4^n}{3} &\\ &= \frac{2}{3}(4^n - 1 + 3 \cdot 4^n) &\\ \end{align} AI: $$\frac{2}{3}(4^n - 1) + 2^{2n + 1}= \frac23\left(\color{red}{4^n}-1+\overbrace{\color{red}{3\cdot2^{2n}}}^{=3\cdot4^n}\right)=\frac23\left(\color{red}{4\cdot4^n}-1\right) = \frac{2}{3}(4^{n + 1} - 1)$$
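Both the closed form and the inductive step can be spot-checked numerically. A short Python sketch, using the indexing $\sum_{i=0}^{n-1}2^{2i+1}=\frac23(4^n-1)$, which is the form the step above uses:

```python
from fractions import Fraction

for n in range(1, 9):
    lhs = sum(2**(2*i + 1) for i in range(n))            # i = 0, ..., n-1
    rhs = Fraction(2, 3) * (4**n - 1)
    step = Fraction(2, 3) * (4**n - 1) + 2**(2*n + 1)     # the inductive step
    assert lhs == rhs and step == Fraction(2, 3) * (4**(n + 1) - 1)
print("closed form and inductive step check out for n = 1..8")
```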
H: What Is Bigger: $100^{100}$ or $\sqrt{99^{99} \cdot 101^{101}}$? Hello everyone, what is bigger: $100^{100}$ or $\sqrt{99^{99} \cdot 101^{101}}$? I tried squaring both and got $100^{200}$ versus $99^{99} \cdot 101^{101}$, and I don't have an idea how to continue. AI: Taking logarithms, we see that we want to compare $f(100)$ and $\frac12(f(99)+f(101))$, where $f(x) = x\log x$. But $f(x)$ is a convex function (its second derivative $\frac1x$ is always positive), which means that the secant line through $(99,f(99))$ and $(101,f(101))$ lies above the graph of the function. In particular, the fact that the midpoint of this secant line lies above the point $(100,f(100))$ on the graph is exactly the statement that $\frac12(f(99)+f(101)) > f(100)$, and so $\sqrt{99^{99}101^{101}} > 100^{100}$.
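Since Python integers are exact, the squared comparison can also be checked directly (a one-off sanity check, not a substitute for the convexity argument):

```python
lhs = 100**200               # (100**100)**2
rhs = 99**99 * 101**101
print(rhs > lhs)             # True
print((10000 * rhs) // lhs)  # 10100, so the ratio rhs/lhs is about 1.0100
```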
H: I want to write a certain function as the norm of the difference of two vectors. I want to see how $[\sin(x-y)]^2$ can be rewritten as the square of the norm of the difference of two vectors, something like the following (this is just a sample and it is not right): $$ \| (\sin(x) , \cos(x) \sin(x) ) - (\sin(y) , \cos(y) \sin(y) )\| $$ Please tell me how to approach this problem. It is Problem 6.6 of Foundations of Machine Learning (Mohri et al.). AI: Let $u=\frac{1}{2}(\sin 2x, \cos 2x)$ and $v=\frac{1}{2}(\sin 2y , \cos 2y)$. Then \begin{align*} 2(u-v)&=(\sin 2x-\sin 2y, \cos 2x-\cos 2y)\\ 4\|u-v\|^2&=(\sin 2x-\sin 2y)^2+ (\cos 2x-\cos 2y)^2\\ &=2-2(\sin 2x\sin 2y+\cos 2x\cos 2y)\\ &=2-2\cos(2x-2y)\\ &=2 (1-\cos(2x-2y))\\ &=4 \sin^2(x-y)\\ \|u-v\|^2&=\sin^2(x-y) \end{align*}
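The identity can be verified symbolically as well; a minimal sympy sketch (the variable names are mine):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.Matrix([sp.sin(2*x), sp.cos(2*x)]) / 2
v = sp.Matrix([sp.sin(2*y), sp.cos(2*y)]) / 2

expr = (u - v).dot(u - v) - sp.sin(x - y)**2
print(sp.simplify(sp.expand_trig(expr)))       # expected: 0
print(expr.subs({x: 0.3, y: 1.7}).evalf())     # ~0, numerical spot check
```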
H: How can I integrate $\int\frac{e^{2x}-1}{\sqrt{e^{3x}+e^x} } \mathop{dx}$? How can I evaluate this integral $$\int\dfrac{e^{2x}-1}{\sqrt{e^{3x}+e^x} } \mathop{dx}=\;\;?$$ My attempt: I tried using the substitution $e^x=\tan\theta$, $e^x\ dx=\sec^2\theta\ d\theta$, $dx=\sec\theta \csc\theta \ d\theta.$ $$\int\dfrac{\tan^2\theta-1}{\sqrt{\tan^3\theta+\tan\theta } }\ \sec\theta \csc\theta\ d\theta $$ $$=\int\dfrac{\tan^2\theta-1}{\sec\theta\sqrt{\tan\theta } }\ \sec\theta \csc\theta d\theta. $$ I used $\tan\theta= \dfrac{1}{\cot\theta}$ $$=\int\dfrac{1-\cot^2\theta}{\cot^{3/2}\theta }\csc\theta d\theta $$ $$=\int(\cot^{-3/2}\theta-\sqrt{\cot\theta} )\csc\theta d\theta. $$ I got stuck here. I can't see whether further substitution will work or not. Will integration by parts work? Please help me solve this integral. I am learning calculus. Thanks in advance. AI: I followed the same steps you did and stopped at $$I=\int \left(1-\cot ^2(\theta )\right) \sec (\theta )\sqrt{\tan (\theta )}\, d\theta$$ Rewrite it as $$I={\displaystyle\int}\dfrac{\sin^2\left(\theta\right)-\cos^2\left(\theta\right)}{\cos^\frac{3}{2}\left(\theta\right)\sin^\frac{3}{2}\left(\theta\right)}\,d\theta$$ Now let $$u=\cos\left(\theta\right)\sin\left(\theta\right)\implies du=\left(\cos^2\left(\theta\right)-\sin^2\left(\theta\right)\right)d\theta$$ so that $$I=-\int\dfrac{du}{u^\frac{3}{2}}=\dfrac{2}{\sqrt{u}}+C$$ Back to $\theta$: $$I=\frac 2{\sqrt{\sin(\theta)\cos(\theta)}}+C=\frac {2\sqrt 2}{\sqrt{\sin(2\theta)}}+C$$ and, since $\tan\theta=e^x$ gives $\sin(2\theta)=\dfrac{2e^x}{1+e^{2x}}$, back to $x$ this is $$I=2\sqrt{e^x+e^{-x}}+C.$$
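The closed form $2\sqrt{e^x+e^{-x}}+C$ can be sanity-checked by differentiating it and comparing with the integrand at a few points; a minimal sympy sketch:

```python
import sympy as sp

x = sp.symbols('x', real=True)
integrand = (sp.exp(2*x) - 1) / sp.sqrt(sp.exp(3*x) + sp.exp(x))
antiderivative = 2 * sp.sqrt(sp.exp(x) + sp.exp(-x))

difference = sp.diff(antiderivative, x) - integrand
for x0 in [-2, -0.5, 0.3, 1.7, 4]:
    print(difference.subs(x, x0).evalf())   # all of these are 0 up to rounding
```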
H: Is $\frac{f(x)}{g(x)}$ necessarily integrable when $g(x)\neq0$ and $f(x),g(x)$ are integrable? Is $\frac{f(x)}{g(x)}$ necessarily integrable (meaning $\int_{a}^{b}\frac{f(x)}{g(x)}dx$ is convergent) when $g(x)\neq0$ and $f(x),g(x)$ are integrable? When $g(x)\geq C>0$, $\frac{f(x)}{g(x)}$ is integrable. But if $g(x)$ merely gets arbitrarily close to $0$, there is no positive constant bounding it from below. I think in that case $\frac{f(x)}{g(x)}$ may tend to $\infty$, so it may not be integrable. Is my idea right? Or are there other ideas about it? Thank you! AI: On $(0,1)$ take $f(x)=1$ and $g(x)=x$. Then $\frac f g $ is not integrable.
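The counterexample is easy to see numerically: the partial integrals $\int_\varepsilon^1 \frac{dx}{x}=-\ln\varepsilon$ grow without bound as $\varepsilon\to0^+$. A two-line illustration in Python:

```python
import math

for eps in [1e-1, 1e-3, 1e-6, 1e-12]:
    print(eps, -math.log(eps))   # the integral of 1/x over (eps, 1); it blows up
```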
H: On characterisation of smooth $G$-equivariant morphisms between product manifolds with $G$-action In particular I am interested in the following. Let $M$ be a smooth manifold and $G$ be a Lie group. Let $\rho: (M \times G)\times G \rightarrow M \times G$ be the smooth action of $G$ on $M \times G$ given by $(m,g).g' \mapsto (m,g.g')$. Let $\phi: M \times G \rightarrow M \times G$ be a $G$-equivariant smooth map (that is, $\phi(m,g).g'=\phi(m,g.g')$) which is the identity on the first component. Then can we say that $\phi$ is always of the form $(x,g) \mapsto (x, \psi(x)^{-1}.g)$ for some map $\psi:M \rightarrow G$? If such a map $\psi$ exists, can we say $\psi$ is smooth? AI: By your assumption that $\phi$ is the identity on the first component we have $$\phi(m,g)=\big(m,\beta(m, g)\big)$$ for some $G$-equivariant $\beta:M\times G\to G$. Note that $\beta$ is smooth since $\phi$ is. Now equivariance gives $\beta(m,g)=\beta(m,1)g$, and therefore the map you are looking for is given by $\psi(m)=\beta(m,1)^{-1}$, which is smooth as well, since $m\mapsto\beta(m,1)$ is smooth and inversion in $G$ is smooth.
H: On $(0,\infty)$, the metrics $d(x,y)=|x-y|+|\frac1x-\frac1y|$ and $d_e(x,y)=|x-y|$ are equivalent. Problem Let $Y=(0,\infty)$ and define the metric $d(x,y)=|x-y|+|\displaystyle\frac1x-\displaystyle\frac1y|$ on $Y$. Let $d_e(x,y)=|x-y|$ be the usual Euclidean metric on $Y$; then show that the two metrics $d$ and $d_e$ are topologically equivalent on $Y$. What I want to show is that any $d$-open ball is $d_e$-open and vice versa. Now $B_d(x;r)\subseteq B_{d_e}(x;r)$ proves that any $d_e$-open ball is $d$-open. But for the converse, I could not prove it. Any hint please!! Or, is there any easy way to look at this problem? Because $d$ is a sum of two metrics. Thank you. AI: Consider the following: Let $f:X\to \Bbb R$ be continuous on a metric space $(X,d)$. Then $(X,\sigma)$, where $\sigma:X\times X\to [0,\infty)$ is defined by $\sigma(x,y)=d(x,y)+\big|f(x)-f(y)\big|$, is topologically equivalent to $(X,d)$. One side is clear, namely $B_\sigma(a,\epsilon)\subseteq B_d(a,\epsilon)$. To prove the other side, let $a\in X$ and $\epsilon>0$ be given. Using continuity of $f$ at $a$, there is $\delta$ with $\frac{\epsilon}{2}>\delta>0$ such that $d(x,a)<\delta\implies \big|f(x)-f(a)\big|<\frac{\epsilon}{2}$. Now, $B_d(a,\delta)\subseteq B_\sigma(a,\epsilon)$. This is because of the fact that $x\in B_d(a,\delta)\implies \sigma(x,a)=d(x,a)+\big|f(x)-f(a)\big|<\delta+\epsilon/2<\epsilon/2+\epsilon/2=\epsilon.$ So, we are done. In your problem, take $X=(0,\infty)$ with $d=d_e$ and $f(x)=\frac1x$, which is continuous on $(0,\infty)$.
H: a problem regarding projection maps on finite dimensional vector space Let $V$ be a finite dimensional vector space over a field $F$ of characteristic zero. Let $E_1 , E_2, ...,E_k$ be projections of $V$ such that $E_1+E_2+...+E_k=I$. Show that $E_iE_j = 0$ for all $i\neq j$. Hint: Use the trace function. Using the hint I got that $\operatorname{trace}(E_i)=\dim(\operatorname{range}(E_i))$ for all $i,\;1\le i\le k$. Again $I=E_1+E_2+\cdots+E_k \Rightarrow V=\operatorname{range}(E_1)+\operatorname{range}(E_2)+\cdots+\operatorname{range}(E_k)$ Combining both we get $V=\operatorname{range}(E_1)\oplus\operatorname{range}(E_2)\oplus\cdots\oplus\operatorname{range}(E_k)$ After this step I am not able to progress further. Please help me. Though this problem has already been discussed in an earlier post, there was no hint regarding this approach. So I am posting this problem again. Please do not mark this problem as duplicate. Thank you in advance. AI: First, $\text{Range}(E_i E_j) \subset \text{Range}(E_i)$. Also, since $E_i = I - \sum_{l \neq i} E_l$, we have $E_i E_j = E_j - \sum_{l \neq i} E_l E_j$, so $\text{Range}(E_i E_j) \subset \sum_{l \neq i} \text{Range}(E_l)$ (note that $j \neq i$, so $\text{Range}(E_j)$ is one of the summands). Now $\text{Range}(E_i) \cap \sum_{l \neq i} \text{Range}(E_l) = 0$ as the sum $V=\bigoplus_l \text{Range}(E_l)$ is direct, so $\text{Range}(E_i E_j) = 0$. Hence $E_iE_j = 0$.
H: Conjunctive Normal Form evaluates true when at least half of the clauses are true. This is an exam question. Which of the following is TRUE about formulae in Conjunctive Normal Form? -For any formula, there is a truth assignment for which at least half the clauses evaluate to true. -For any formula, there is a truth assignment for which all the clauses evaluate to true. -There is a formula such that for each truth assignment at most one fourth of the clauses evaluate to true. -None The answer given is: "For any formula, there is a truth assignment for which at least half the clauses evaluate to true." This I doubt very much! (Maybe it's just me.) This problem was asked earlier on this site itself, but the accepted answer is not at all satisfying to me. So here is what I interpreted, step by step. Interpretation of the CNF form: E=(a+b+c)(a+b'+c)(a+b'+c') (E is the expression/formula). In short, a product of sums of literals. Interpretation of the term 'clauses': in the above case there are 3 clauses, and in the total expression I have 3 literals a, b, c. Interpretation of the problem: the first choice says: "For any formula, there is a truth assignment for which at least half the clauses evaluate to true." What I understood: we are simply asked, for E = 1, how many clauses should evaluate to true. (As soon as I saw this question, the first answer that popped into my head was: 100%, and I still think so.) Why/how 100%? My answer would be: how on Earth would someone multiply 0's and get 1?! I feel all the clauses must evaluate to true! Analogy: π = X * Y * Z (π is simply the notation where the product is stored). This is a product of 3 bits. With 3 bits I can have 8 possible combinations: 000, 001, 010, 011, 100, 101, 110, 111. Now only 111 is the case when π would be 1 (which is very obvious, even for a school kid). Why? Because x=y=z=1, so π = 1 * 1 * 1 = 1. And in all 7 other cases the product would obviously be zero. That means that for a product of 3 terms to be nonzero, all three have to be 1! (Or am I wrong here too?) This is the basis for my answer being 100%. So, I think all the clauses must evaluate to true; unless that happens, E can't be 1, it would be 0. Please explain the option, and also comment on my interpretation: is it correct in any sense? Any help would be much appreciated! AI: As regards your attempt to interpret the problem, for your example $$E=(a+b+c)(a+b'+c)(a+b'+c')$$ there are $3$ clauses, namely $$a+b+c,\;\;a+b'+c,\;\;a+b'+c'$$ But the claim is not about assignments that make $E$ true. Rather, the claim is that there is some assignment of truth values to the variables that makes at least half of the $3$ clauses true. For this particular example, using $a=1$ and any truth values for $b,c$ makes all $3$ clauses true. If instead we take the example $$E=(a)(a')(a')$$ the truth value $a=0$ makes $2$ of the $3$ clauses true, but no truth value for $a$ makes all $3$ clauses true. Let's consider the actual claim . . . Claim: If $P$ is a $\text{CNF}$ expression of the form $P=P_1\land \cdots \land P_k$ with variables in $\{X_1,...,X_n\}$, there is some truth assignment to the variables $X_1,...,X_n$ such that at least half of $P_1,...,P_k$ evaluate to true. Proof: Choose any truth assignment for the variables. If at least half of $P_1,...,P_k$ evaluate to true, we're done.
If not, more than half of $P_1,...,P_k$ must evaluate to false, hence by negating the chosen truth value assignments for the variables, those of $P_1,...,P_k$ which evaluated to false will evaluate to true, so again we're done.
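The claim, and the complement-assignment trick used in the proof, can also be checked by brute force on small random CNF formulas; a minimal Python sketch (the clause encoding and helper name are mine):

```python
import random
from itertools import product

def num_satisfied(clauses, assignment):
    # a clause is a list of (variable index, sign); sign False means the literal is negated
    return sum(any(assignment[v] == s for v, s in clause) for clause in clauses)

random.seed(0)
n_vars, n_clauses = 6, 12
for _ in range(200):
    clauses = [[(random.randrange(n_vars), random.random() < 0.5)
                for _ in range(random.randint(1, 3))] for _ in range(n_clauses)]
    for a in product([False, True], repeat=n_vars):
        flipped = tuple(not b for b in a)
        # every clause falsified by `a` is satisfied by `flipped`, so one of the two
        # assignments satisfies at least half of the clauses
        assert max(num_satisfied(clauses, a), num_satisfied(clauses, flipped)) >= n_clauses / 2
print("no counterexample found: some assignment always satisfies at least half the clauses")
```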
H: if $(X,\tau)$ is a $T_1$-space, then every subset of $X$ is a saturated set In a general topology exercise I have to prove that if $(X,\tau)$ is a $T_1$-space, then every subset of $X$ is a saturated set (i.e. it is an intersection of open sets). My approach: Because $(X,\tau)$ is a $T_1$-space, every singleton is closed. Let $A \in \tau$; then we can write: $A=\bigcup \limits_{x\in A} \{x\}$. Therefore: $X \setminus A= X \setminus \bigcup \limits_{x\in A} \{x\}= \bigcap \limits_{x \in A} X\setminus \{x\}$. If we define a set $P\in\tau, P:= X \setminus A$, then we end up with: $$P=\bigcap \limits_{x\in X\setminus P} X\setminus\{x\}$$ And because every singleton is closed, every $X\setminus\{x\}$ is open, so this $P$ is an intersection of open sets. After writing the proof I realized that I assumed that $\forall P \in \tau,\exists A \in \tau: P=X\setminus A$. Is this true in this specific case? If so, how can I prove it? In $P=\bigcap \limits_{x\in X\setminus P} X\setminus\{x\}$, the intersection of those sets can be either finite or infinite. Is this a problem? Because for a set family to be a topology it only has to be closed under a finite number of intersections. In the definition of a saturated set, is it specified whether the number of intersections is finite or whether it can be infinite? AI: You want to show every set is saturated, so you can't restrict to $P \in \tau.$ In fact, what you have shown is not restricted to $P \in \tau.$ Let $P \subseteq X.$ Then $$P=X \setminus (X\setminus P)=X\setminus\left( \bigcup_{x \in X\setminus P}\{x\}\right)=\bigcap_{x \in X\setminus P}(X \setminus \{x\}).$$ Since $X$ is $T_1,$ $P$ is an intersection of open sets and hence saturated. This is essentially what you did, but there is no need to start with $A \in \tau$ or, in fact, to take $P \in \tau.$
H: Prove that $\frac{d^4\phi}{dx^4}=f(x)$ using integration by parts and the fundamental theorem A function $f$ is continuous on $[0,∞)$ and $\phi(x) = \frac{1}{3!}\int_0^x(x−t)^3f(t)dt, x ≥ 0$. Show that $\frac{d^4\phi}{dx^4}=f(x),\forall x\geq0$. I can prove this using the Newton–Leibniz theorem but can't see how to do it by only using the fundamental theorem and integration by parts. AI: Write $F_1(t)=\int_0^t f(s)\,ds$, $F_2(t)=\int_0^t F_1(s)\,ds$ and $F_3(t)=\int_0^t F_2(s)\,ds$; by the fundamental theorem these are differentiable with $F_1'=f$, $F_2'=F_1$, $F_3'=F_2$, and they all vanish at $0$. Integrating by parts with $u=(x-t)^3$ and $dv=f(t)\,dt$, $$\int_0^x(x-t)^3f(t)\,dt=\Big[(x-t)^3F_1(t)\Big]_{t=0}^{t=x}+3\int_0^x(x-t)^2F_1(t)\,dt=3\int_0^x(x-t)^2F_1(t)\,dt,$$ since the boundary term vanishes at both ends ($(x-x)^3=0$ and $F_1(0)=0$). Repeating this process two more times gives $$\int_0^x(x-t)^3f(t)\,dt=3\cdot2\int_0^x(x-t)F_2(t)\,dt=3!\int_0^xF_3(t)\,dt,$$ so $$\phi(x)=\int_0^xF_3(t)\,dt.$$ The result now follows by applying the fundamental theorem four times: $\phi'=F_3$, $\phi''=F_2$, $\phi'''=F_1$ and $\phi''''=f$, each application being legitimate because the integrand is continuous.
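For a concrete $f$ the identity can also be checked symbolically; a minimal sympy sketch (the sample $f$ is my own choice):

```python
import sympy as sp

x, t = sp.symbols('x t', positive=True)
f = sp.cos(t) + t**2                     # a sample continuous f

phi = sp.Rational(1, 6) * sp.integrate((x - t)**3 * f, (t, 0, x))
print(sp.simplify(sp.diff(phi, x, 4) - f.subs(t, x)))   # expected: 0
```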
H: Show that an operator is continuous Let $X := C^1 [0,1]$ and define $||f|| := |f(0)| + \sup_{0\le t \le 1} |f'(t)|$. Now, consider the operator $T: X \to \mathbb{R}$ defined as $$Tf = \int_0^1 f(t)dt$$ Show that T is continuous. I know this should be proved with the closed graph theorem, but I want to ask if this alternative proof holds too: Let's show $T$ is bounded: $$ \|T\| = \sup_{\|f\| = 1} |Tf| = \sup_{\|f\| = 1} \left| \int_0^1 f(t)\,dt \right|$$ Now, for all $f \in C^1 [0,1]$, we have $\|f\| = 1 \iff1 - |f(0)| = \sup_{0\le t \le 1} |f'(t)|$ We can deduce that $$0 \le \sup_{0\le t \le 1} |f'(t)| \le 1$$ So every function $f$ such that $\|f\| =1$ can't tilt more than the function $f(x) = x$. $$ \Rightarrow \| T \| = \sup_{\|f\| = 1} \left| \int_0^1 f(t)\,dt \right| \le \left| \int_0^1 t \,dt \right| = \frac{1}{2}$$ AI: Let $f\in X$, and let $M=\sup_{0\leq t\leq 1}|f'(t)|$. Then: $|T(f)|=|\int_0^1 f(t)dt|=|\int_0^1 (f(t)-f(0))dt+\int_0^1 f(0)dt|=|\int_0^1 (f(t)-f(0))dt+f(0)|\leq$ $\leq |f(0)|+\int_0^1 |f(t)-f(0)|dt\leq |f(0)|+M\int_0^1 tdt=|f(0)|+\frac{M}{2}\leq |f(0)|+M=||f||$ We use the mean value theorem here, for every $t$ we have $|f(t)-f(0)|\leq M|t-0|$.