H: Geometric Series of Matrices when $I-A$ is singular I need to understand the behavior of the following series as $n$ grows very large, where $A$ is an $n\times n$ matrix, $B$ is an $n\times 1$ vector, and $y_0$ is any vector in $\mathbb{R}^n$ $$ y_n = A^ny_0 + \sum_{k=0}^{n-1}A^kB $$ Doing a little bit of analysis on the matrix I have (which is a really complicated matrix that's a function of some small value $s$, which is why I left it out) I was able to see that the largest eigenvalue of $A$ is $1$, and all other eigenvalues satisfy $0\leq \lambda < 1$. Unfortunately that means that the geometric matrix series formula $$ \sum_{k=0}^{n-1}A^kB = (I-A)^{-1}(I-A^n)B $$ won't work because $(I-A)$ is singular. Does anyone know how to get around this issue? AI: Supposing that $\lambda_1 =1$ is simple, you should be able to use a common technique from Markov Chains. In particular, with $W := \mathbf v_1\mathbf u_1^*$ where these are the right and left eigenvectors associated with $\lambda_1$ and $\text{trace}(W)=1$, then $\Big(I+A+A^2+\cdots+ A^{n-1}\Big)\Big(I-A+W\Big) = I -A^n +nW$ and invert $\Big(I-A+W\Big)$; this is exercise 16 on p. 468 of Grinstead and Snell's free intro to probability book https://math.dartmouth.edu/~prob/prob/prob.pdf
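A quick numerical sanity check of the identity $\Big(I+A+\cdots+A^{n-1}\Big)\Big(I-A+W\Big)=I-A^n+nW$ (my addition, a sketch in Python; the small row-stochastic matrix below is just an assumed example, not the asker's matrix):

```python
import numpy as np

# Check (I + A + ... + A^{n-1})(I - A + W) = I - A^n + n*W
# for a 2x2 row-stochastic matrix, whose largest eigenvalue is 1 and is simple.
A = np.array([[0.5, 0.5],
              [0.2, 0.8]])
n = 7

v = np.ones((2, 1))                        # right eigenvector for eigenvalue 1
w, V = np.linalg.eig(A.T)                  # left eigenvectors of A are eigenvectors of A^T
u = np.real(V[:, np.argmax(w.real)]).reshape(1, 2)
u = u / (u @ v)                            # normalize so that u v = trace(W) = 1

W = v @ u
S = sum(np.linalg.matrix_power(A, k) for k in range(n))   # I + A + ... + A^{n-1}
lhs = S @ (np.eye(2) - A + W)
rhs = np.eye(2) - np.linalg.matrix_power(A, n) + n * W
print(np.allclose(lhs, rhs))               # True
```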
H: Is pointwise convergence permutation-invariant? I am interested in convergence between countable sequences of real numbers. (Perhaps the definitions to follow are nonstandard. Sorry!) Say that the sequence $\langle \langle x^1_1,x^1_2,x^1_3,...\rangle, \langle x^2_1,x^2_2,x^2_3,...\rangle, ...\rangle$ pointwise converges to $\langle x_1,x_2,x_3,...\rangle$ iff the sequence $\langle x^1_i, x^2_i, x^3_i,...\rangle$ converges to $x_i$ for all $i$. All $x^j_i$ are real numbers, so the notion of convergence for each "coordinate" is the standard one for real numbers. By a permutation of a set I mean a one-one function from that set to itself. Given these definitions, is the following statement true? If $f$ is a permutation of $\mathbb{N}$, and the sequence $\langle \langle x^1_1,x^1_2,x^1_3,...\rangle, \langle x^2_1,x^2_2,x^2_3,...\rangle, ...\rangle$ pointwise converges to $\langle x_1,x_2,x_3,...\rangle$, then also the sequence $\langle \langle x^1_{f(1)},x^1_{f(2)},x^1_{f(3)},...\rangle, \langle x^2_{f(1)},x^2_{f(2)},x^2_{f(3)},...\rangle, ...\rangle$ pointwise converges to $\langle x_{f(1)},x_{f(2)},x_{f(3)},...\rangle$. What if we make $f$ a finite permutation of $\mathbb{N}$ in the sense that $f$ is a permutation of $\mathbb{N}$ and $f(i)\neq i$ for finitely many $i$ at most? Any references would be great, too! Thanks! AI: Your conjectures are certainly true: if $\langle x^1_i, x^2_i, x^3_i,...\rangle$ converges to $x_i$ for every $i$, then $\langle x^1_{f(j)}, x^2_{f(j)}, x^3_{f(j)},...\rangle$ converges to $x_{f(j)}$, because every $f(j)$ equals $i$ for some $i$. Since it holds for all permutations, it necessarily holds for all finite permutations. The interesting example to consider is the one where $$x_i^j = \begin{cases} 0, &\text{when $i\ne j$}\\ 10^{10^i}, &\text{when $i= j$} \end{cases} $$ Under your definition, the sequences are converging pointwise to ${\bf 0} =\langle 0,0,0,\ldots\rangle$, while under any reasonable definition of “distance”, each one is much “farther away” from $\bf 0$ than the one before. This shows that pointwise convergence doesn't completely capture our intuitions about convergence in this space.
H: How can I justify that the open interval $(0,1)$ is the infinite union of closed intervals? I have to show that: $$\bigcup_{n=2}^\infty [\frac{1}{n},\frac{n-1}{n}] = (0,1)$$ The first part: $$\bigcup_{n=2}^\infty [\frac{1}{n},\frac{n-1}{n}] \subset (0,1)$$ is easy to show, but for the second part: $$(0,1) \subset \bigcup_{n=2}^\infty [\frac{1}{n},\frac{n-1}{n}]$$ I don't have any idea how to prove it. I can't use the Principle of Nested Intervals because it's clear that if $I_{k}=[\frac{1}{k},\frac{k-1}{k}]$ where $k$ is a natural number, $I_{k+1} \not\subset I_{k}$ and I have the union. AI: Given $x \in (0,1)$, choose a natural number $n > \max\{2,\frac{1}{x},\frac{1}{1-x}\}$; that number exists by the Archimedean principle. It follows that $x > \frac{1}{n}$ and that $x < \frac{n-1}{n}$ and so $x \in [\frac{1}{n},\frac{n-1}{n}]$.
H: sum of pairs and sum of squares $\left(\sum x_i\right)^2-2\sum_{cyc}x_ix_j=\sum x_i^2$ I was messing around today and came across the following which I believe to be true: $$\left(\sum x_i\right)^2-2\sum_{cyc}x_ix_j=\sum x_i^2$$ for a set of $x=(x_1,x_2,\ldots,x_n)$ where $n\ge2$, and it is easy to prove for small values manually, but how would I prove this rigorously? I thought about using the usual method of a base case, $n=k$, then $n=k+1$, but I am not quite sure how to do this. $$n=2$$ $$\sum x_i=x+y,\sum_{cyc}x_ix_j=xy,\sum x_i^2=x^2+y^2$$ now: $$(x+y)^2-2xy=x^2+y^2$$ is easily shown. I have done the same for $n=3$ but how should I go about doing the general case? AI: If you expand $$(x_1 + \cdots + x_n)(x_1 + \cdots + x_n)$$ you get $n^2$ addends: $x_1^2, \ldots, x_n^2$, as well as two each of $x_i x_j$ for $i \ne j$. The cyclic sum accounts for some of these latter terms, but misses terms like $x_1 x_3$ or $x_2 x_4$.
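To complement the answer (my addition): a quick symbolic check of the identity with the full sum over pairs $i<j$ in place of the cyclic sum, here for $n=5$, sketched with sympy:

```python
import sympy as sp

# Check (x1 + ... + x5)^2 - 2 * sum_{i<j} x_i x_j = x1^2 + ... + x5^2
xs = sp.symbols('x1:6')
pairs = sum(xs[i] * xs[j] for i in range(5) for j in range(i + 1, 5))
print(sp.expand(sum(xs)**2 - 2 * pairs - sum(x**2 for x in xs)))   # 0
```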
H: If $\int_E f\,d\mu=\int_E g\,d\mu$ then $f=g$ a.e? Let $(E,\mathcal{A},\mu)$ be a finite measure space. Let $f$ and $g$ be two real-valued measurable functions such that $$\int_E f\,d\mu=\int_E g\,d\mu$$ Can we say that $f=g$ a.e $(*)$? Or is it necessary that, for all $A\in \mathcal{A}$, $$\int_A f\,d\mu=\int_A g\,d\mu?$$ I ask this question because I did not understand Theorem 3.34 (page 187) in the book "Handbook of Multivalued Analysis Volume 1: Theory" by Shouchuan Hu: if $(*)$ does not hold, why did the author write this? AI: Definitely not. $$\int_{-1}^1 \sin(x) dx = \int_{-1}^1 x dx = 0$$ The second assertion is true though, but you should say what you tried before we can help you.
H: $I=\langle \{2,3,4,...\}\rangle $ is maximal ideal of $(P(\mathbb{N}),\Delta,\cap)$ This was given to me by my friend. Prove that $I=\langle \{2,3,4,...\}\rangle $ is a maximal ideal of $(P(\mathbb{N}),\Delta,\cap)$ My thoughts:- Let $A\in P(\mathbb{N})$. Then $A\cap \mathbb{N}=\mathbb{N}\cap A=A$, so $\mathbb{N}$ is the multiplicative identity. Similarly $\emptyset$ is the additive identity. Now I claim that the ideal $I$ contains all possible subsets of $\mathbb{N}\setminus \{1\}$. My trial proof of the claim:- $I$ is the principal ideal generated by $\mathbb{N}\setminus \{1\}$ and it contains all elements of the form $A\cap (\mathbb{N}\setminus \{1\})$ where $A\in P(\mathbb{N})$. Since $A$ is an arbitrary subset of $\mathbb{N}$, the ideal $I$ contains all subsets of $\mathbb{N}\setminus \{1\}$. Now this proof might be incomplete. If so, please help me complete it. To show $I$ is maximal, let us assume that it is properly contained in a maximal ideal $M$ (say). Again I claim that $M$ must contain a set (obviously as an element) larger than $\mathbb{N}\setminus \{1\}$, or otherwise if for every $X\in M$ (where $X \in P(\mathbb{N})$) we have $X \subseteq \mathbb{N}\setminus \{1\}$, then $M\subseteq I$, a contradiction. Hence $M$ must contain a set larger than $\mathbb{N}\setminus \{1\}$, but the only set larger than that is $\mathbb{N}$, which is the multiplicative identity, and hence $M=P(\mathbb{N})$. Thus, a contradiction. So $I$ must be maximal. Again, I am in doubt here. I need your expertise to check my proof or, if it is totally wrong, please give a proof or at least a hint. Thanks for your valuable time!! AI: It’s close, but not quite right. $M$ need not contain a set larger than $\Bbb N\setminus\{1\}$: it just needs to contain a set that is not contained in $\Bbb N\setminus\{1\}$. Thus, $M$ must contain some set $A$ such that $1\in A$. (For instance, $\{1\}$ might be an element of $M$.) You want to use $A$ to show that $\Bbb N\in M$. Now $\Bbb N\setminus A\subseteq\Bbb N\setminus\{1\}$, so $\Bbb N\setminus A\in I\subseteq M$; what is $A\triangle(\Bbb N\setminus A)$?
H: sequence of functions, what is wrong with my solution? Prove that the following series converges uniformly in $[0,\infty)$ $$\sum\limits_{n=1}^\infty\frac{(-1)^n}{x+n}$$ So, I need to prove that: $$\lim_{n\to\infty} \sup_{x\ge0}|S(x)-S_n(x)|=0$$ So, I found: $$\lim_{n\to\infty} \sup_{x\ge0} |S(x)-S_n(x)| \leq \lim_{n\to\infty} \sup_{x\ge0} \sum\limits_{k=n+1}^\infty\frac{1}{x+k}$$ but the latter doesn't converge to zero... how may I solve this? AI: The alternating series test clearly applies here for any given $x$: we have a series whose terms alternate in sign, but the absolute values of whom tend to $0$ monotonically. I'm guessing you probably are aware that the function converges, but there's a bit extra to the conclusion of the alternating series test that is not so well known: the remainder of the series is less than or equal to the absolute value of the next term. That is, $$\left|S(x) - S_n(x)\right| \le \frac{1}{x + n + 1} \le \frac{1}{n + 1},$$ when $x \ge 0$. See Wikipedia for more information.
H: Confusion about argument result As part of a larger problem, I wish to find the arguments $\theta$ for the complex numbers $-\frac{1}{2}\pm\frac{\sqrt{3}}{2}i$. This is $\theta = \arctan(-\sqrt{3})$, which is $-60^{\circ}=300^{\circ}$ for the positive and $\theta = \arctan(\sqrt{3})=60^{\circ}$ for the negative. However, it is then given in a solution to the problem that the possible arguments are only $120^{\circ}$ and $240^{\circ}$. I feel as though I am missing something painfully obvious, but I don't understand what I am missing. AI: The correct formula is $$\tan(\theta) = \frac{y}{x},$$ where $x$ and $y$ are the real and imaginary parts, respectively of a complex number $z$, and $\theta$ is its argument. However, students tend to make the following incorrect simplification: $$\theta = \tan^{-1}\left(\frac{y}{x}\right).$$ The above is wrong, as $\tan^{-1}$ can only produce angles between $-\pi/2$ and $\pi/2$, i.e. complex numbers in the first or fourth quadrants, i.e. complex numbers with positive real components. Try plotting the number. You'll see immediately that its argument is definitely not $-\pi/3$. Use the fact that $\tan$ has period $\pi$, and you'll see that instead the answer is $-\pi/3 + \pi = 2\pi/3$, or $120^\circ$; this angle produces the same $\tan$ value, and lies in the correct quadrant.
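As a quick check of the corrected argument (my addition, a Python sketch using the two-argument arctangent, which picks the correct quadrant automatically):

```python
import cmath, math

z = -0.5 + (math.sqrt(3) / 2) * 1j
# cmath.phase uses atan2(imag, real), so the quadrant is handled correctly.
print(math.degrees(cmath.phase(z)))              # approximately 120.0
print(math.degrees(cmath.phase(z.conjugate())))  # approximately -120.0, i.e. 240 degrees mod 360
```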
H: Proof that the nonnegativy of squares does not follow from the rest of the prepositive cone definition A prepositive cone $P$ is a subset of a field $F$ defined as exhibiting the following properties: Additive closure: $p+q\in P$; Multiplicative closure: $p\cdot q\in P$; No negative identity: $-1\notin P$; Nonnegativity of squares: if $x\in F$ then $x\cdot x\in P$. My intuition wants me to believe the fourth axiom is redundant. I wanted to try to prove this by contradiction, but all my ideas involved assuming that a field is ordered. If a field is ordered, then $x^2\ge0$, but this seems like circular reasoning. How can I prove that 4 does not follow from 1 through 3? AI: There are a lot of examples of subsets of fields that fulfill 1 to 3 but not 4. $$ F=\mathbb{R}\;\;,\;\; P=[1,\infty ) \;\;,\;\;\text{Axiom 4 fails for }x\in(-1,1) \\ F=\mathbb{R}\;\;,\;\; P=(0,\infty ) \;\;,\;\;\text{Axiom 4 fails for }x = 0 \phantom{xxxx}\\ $$ There are also more exotic examples like $F=\mathbb{Q}$ and $P$ is the set of all non-negative fractions with denominators that are not divisible by a given prime $p.$
H: General Solution of IVPs I was trying to obtain the general solution of the IVP (initial value problem) $$ y'=y^2-\frac{y}{x}-\frac{1}{x^2}, \quad x>0, \quad y_1(x)=\frac{1}{x}, \quad y(1)=2,$$ where $y_1(x)$ denotes a solution to the differential equation. But I was clearly stuck since I couldn't apply Bernoulli's method or the usual techniques. Does anyone have an idea how to go about it? AI: You have the structure for the substitution $u=xy$. Then $$ \frac{u'}{u^2-1}=\frac1x\implies \frac{u-1}{u+1}=Cx^2\implies u(x)=\frac{1+Cx^2}{1-Cx^2} $$ so that then finally $$ y(x)=\frac{1+Cx^2}{x(1-Cx^2)}. $$
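A quick symbolic check of the general solution, plus the constant forced by $y(1)=2$ (my addition, a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
C = sp.symbols('C')
y = (1 + C * x**2) / (x * (1 - C * x**2))

# Residual of y' = y^2 - y/x - 1/x^2 should simplify to 0,
# and the initial condition y(1) = 2 pins down C.
print(sp.simplify(sp.diff(y, x) - (y**2 - y / x - 1 / x**2)))   # 0
print(sp.solve(sp.Eq(y.subs(x, 1), 2), C))                      # [1/3]
```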
H: Do there exist $2k+1$ irrational numbers such that their product and sum are both rational? I recently found a problem saying "Find 2 irrational numbers such that their sum and product are both rational." After a while I noticed any pair like $(a+\sqrt{b},a-\sqrt{b})$ works. From this I could easily say this statement is true for any even integer. But then I thought whether the same is true for odd integers. I realized proving it for $3$ would imply the statement being true for all odd integers. I tried to work with a similar expression as above. I also tried to work with the cubic polynomial. But I couldn't make any significant progress. AI: You could take $\{j\:\sqrt[2k+1]{2}\mid 1\le j\le 2k\}\cup\{-k(2k+1)\:\sqrt[2k+1]{2}\}$. The sum is $0$; the product is $-2k\,(2k+1)!$.
H: Unboundedness of ODE with trigonometric coefficients The task I am working on involves proving that the solutions to the system $$\dot x = \begin{pmatrix}\cos{t} & \sin{t} \newline \sin{t} & -\cos{t}\end{pmatrix}x$$ are unbounded. To start I introduced the new variable $z = x_1 + ix_2$ and reformulated the system of ODE's to: $$\dot z = \bar z ·e^{it}$$ The solution can be found as shown by a colleague before, but it seems to be too complex for what is required from the task. I would be happy for any tips and suggestions that you can give me. AI: Here is a solution without complex numbers. Taking the magnitude squared on both sides we have that $$\dot{x}_1^2 + \dot{x}_2^2 = (x_1 \cos t + x_2 \sin t)^2 + (x_1 \sin t - x_2 \cos t)^2 = x_1^2 + x_2^2$$ which means $$|x| = |\dot x| \geq \dot{|x|}$$ This is true for any vector but is most easily seen in polar coordinates where $|\dot{x}| = \sqrt{\dot{r}^2+r^2\dot{\theta}^2}$ but $\dot{|x|} = \dot{r}$. The unboundedness result follows from Gronwall's inequality, either towards $+\infty$ for growth or $-\infty$ for decay.
H: Squeeze Theorem and Second Partial Derivative of a Piecewise function $$f(x,y)=\arctan((xy^3)/(x^2+y^2))$$ for $(x,y) \ne(0,0)$ and $$0$$ for $(x,y)=(0,0)$. I have tried to prove its continuity at the point $(0,0)$ by using the Squeeze Theorem for $$y=x$$ and $$y=mx$$ such that $$\lim_{x\to 0}\arctan(x^2/2)\le\arctan((xy^3)/(x^2+y^2))\le\lim_{x\to 0}\arctan((x^2m^3)/(1+m^2))$$ so that $$\arctan((xy^3)/(x^2+y^2))=0$$ by the squeeze theorem. I also have to calculate the second-order partial derivative $$f_{xy}(0,0)$$ at $(0,0)$; I couldn't see how to calculate it since the function is a piecewise one. Wouldn't it be just 0? Any help would be appreciated. AI: For each $x\in\Bbb R$, $|\arctan x|\leqslant|x|$ and therefore, if $(x,y)\ne(0,0)$,$$0\leqslant\left|\arctan\left(\frac{xy^3}{x^2+y^2}\right)\right|\leqslant\frac{|xy^3|}{x^2+y^2}=|xy|\frac{y^2}{x^2+y^2}\leqslant|xy|.$$So, since $\lim_{(x,y)\to(0,0)}|xy|=0$, it follows from the squeeze theorem that$$\lim_{(x,y)\to(0,0)}\arctan\left(\frac{xy^3}{x^2+y^2}\right)=0.$$
H: Are projections always bounded? Let $X$ be a Banach space and $P : \mathcal{D}(P) \subset X \to X$ such that $P\mathcal{D}(P)\subset \mathcal{D}(P)$ and that $P^2x = Px,~\forall x \in \mathcal{D}(P).$ Does this imply that $P$ is bounded? I really think that the answer is no. But could not handle an example so far. However, here is the intuition: suppose that $\{x_n\} \subset \mathcal{D}(P)$ converges to some $x\in \mathcal{D}(P)$. Then, $$P(P-I)x_n = 0,$$ and hence, for each $n$ there is $y_n \in \ker P$ satisfying $$Px_n = x_n + y_n.$$ Therefore, if $(y_n)$ does not converge, we are done. Any hints? AI: Take $f:X\to\mathbb C$ linear and unbounded. Fix $x_0\in X$ with $f(x_0)=1$. Now define $Px=f(x)\,x_0$. Then $P$ is an unbounded projection of rank one. You can make this natural if you don't require $P$ to be everywhere defined. For instance let $X=C[0,1]$ with the uniform norm, and let $\mathcal D(P)=C^1[0,1]$. Define $Pf$ to be the function $(Pf)(t)=f'(1)\,t$. Then $P$ is an unbounded projection.
H: How to Prove a Special Case of Stokes' Theorem? I am currently in Calculus 3, or Multivariable Calculus and need to prove this special case of Stokes' theorem. Please forgive me as I do need this simplified to the bones to understand the explanations. This version is below. $$ \int_{\partial S}\mathbf{F}(x,y,z)\cdot d \mathbf{r} = \iint_S(\nabla\times\mathbf{F})\cdot \mathbf{n} dS $$ The proof starts with the conditions of $ S= \{ (x,y,z)\vert z=f(x,y),(x,y)\in R \} $ where R is the region in the $ xy $ -plane with piecewise-smooth boundary $ \partial R $ , where $ f(x,y) $ has continuous first partial derivatives and for which $ \partial R $ is the projection of the boundary $ \partial S $ of the surface S onto the $ xy $ -plane. The first step called for the curl of F where $ F(x,y,z) = \langle M(x,y,z),N(x,y,z),P(x,y,z) \rangle $ which I found. $$ curl F = \nabla\times\mathbf{F} = \begin{vmatrix} \hat{i} & \hat{j} & \hat{k} \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ M(x,y,z) & N(x,y,z) & P(x,y,z) \\ \end{vmatrix} = (\frac{\partial P}{\partial y} -\frac{\partial N}{\partial z})\hat{i} + (\frac{\partial M}{\partial z} -\frac{\partial P}{\partial x})\hat{j} + (\frac{\partial N}{\partial x} -\frac{\partial M}{\partial y})\hat{k} $$ Of course, we're less than halfway done with the steps. The second step had the condition where $ G(x,y,z) = z - f(x,y) $ and called for the exterior unit normal vector $ \frac{\nabla G}{\vert \vert \nabla G \vert \vert} $ to any point on the surface S. Now this might be a great jump like a joke flying above my head but for some reason I keep on thinking this leads to what is seen below. $$ n = \frac{\nabla G}{\vert \vert \nabla G \vert \vert} = \frac{\langle 0,0,0 \rangle}{\sqrt{0^2+0^2+0^2}} = \text{undefined} $$ This is because one of the initial conditions is $ z=f(x,y) $ so I believe they cancel and I know this should not be the case because this would nullify the entire proof (unless I'm mistaken). I think this is a major oversight and yet I can't figure out why. If anybody could help fix this misconception, I would appreciate it. And I also have no idea as to why a separate function $ G(x,y,z) $ is necessary in order to prove this theorem. If anybody has extra time to aid me in solving the rest, I will list the next steps. The third step asks to express $ \int_{\partial S}\mathbf{F}(x,y,z)\cdot d \mathbf{r} = \iint_S(\nabla\times\mathbf{F})\cdot \mathbf{n} dS $ in terms of M, N, and P with a hint that $ dS = \vert \vert \frac{\partial \mathbf{r}}{\partial u} \times \frac{\partial \mathbf{r}}{\partial v} \vert \vert dA $ where $ \vert \vert \frac{\partial \mathbf{r}}{\partial u} \times \frac{\partial \mathbf{r}}{\partial v} \vert \vert = \sqrt{ (\frac{\partial z}{\partial x})^2 + (\frac{\partial z}{\partial y})^2 + 1} $ Having not done this yet, I believe the left side of the equation could be rewritten using the condition in the first step of the proof where $ F(x,y,z) = \langle M(x,y,z),N(x,y,z),P(x,y,z) \rangle $ so that $ \int_{\partial S}\mathbf{F}(x,y,z)\cdot d \mathbf{r} = \int_{\partial S} \left( M(x,y,z)\hat{i} + N(x,y,z)\hat{j} + P(x,y,z)\hat{k} \right)\cdot d \mathbf{r} $ which I do not believe can be simplified (correct me if I'm wrong). As for the right side of the equation, I simply do not remember how to manipulate it to be in terms of M, N, and P but I do believe the second step and finding the exterior unit normal vector $ n $ is quite important. 
The fourth step expects us to show that $ \int_{\partial S} M(x,y,z)dx = - \iint_R(\frac{\partial M}{\partial y} + \frac{\partial M}{\partial z}f_y) _{z=f(x,y)}dA $ , $ \int_{\partial S} N(x,y,z)dy = \iint_R(\frac{\partial N}{\partial x} + \frac{\partial N}{\partial z}f_x) _{z=f(x,y)}dA $ , and $ \int_{\partial S} P(x,y,z)dz = \iint_R(\frac{\partial P}{\partial x}f_y - \frac{\partial P}{\partial y}f_x) _{z=f(x,y)}dA $ . This comes with a hint to let the boundary of R be described parametrically by $ \partial R = \{ (x,y)\vert x=x(t),y=y(t),a \le t \le b \} $ which implies that the boundary of S is described parametrically by $ \partial S = \{ (x,y,z)\vert x=x(t),y=y(t),z=f(x(t),y(t)),a \le t \le b \} $ . Use Green's Theorem and the Chain Rule to prove the given equations. The fifth step (also the last) asks us to explain how the results prove Stokes' Theorem. As I said, I am not that fluent in the language of math and hope that you are able to break it down for me if possible. Thank you and I hope you are doing well! AI: This is because one of the initial conditions is $z=f(x,y)$ so I believe they cancel [...] $G(x,y,z)=z-f(x,y)=0$ on the surface (indeed, this is the definition of the surface - the set of points (x,y,z) on which $G$ vanishes), but is positive above it and negative below it, meaning that $\nabla G$ points perpendicularly away from the surface in the direction of increasing $z$. Explicitly, $$\nabla G = \left\langle -\frac{\partial f}{\partial x},-\frac{\partial f}{\partial y},1\right\rangle$$ $$\Vert \nabla G\Vert = \sqrt{1 + \left(\frac{\partial f}{\partial x}\right)^2 + \left(\frac{\partial f}{\partial y}\right)^2}$$ and the unit vector pointing away from the surface is given by $$\hat n = \frac{\nabla G}{\Vert \nabla G \Vert}$$
H: How would one find the value of this summation? How would one go about finding the value of this summation? ${\sum_{n=1}^{\infty}}\frac{(-1)^{n+1}}{n^2+1}$ AI: You can verify that if $s\notin \mathbb Z$, $$\cos{st} = \frac{\sin{\pi s}}{\pi s} \left [1+2 s^2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1} \cos{n t}}{n^2-s^2} \right ]$$ So if $t=0$, $$1 = \frac{\sin{\pi s}}{\pi s} \left [1+2 s^2 \sum_{n=1}^{\infty} \frac{(-1)^{n+1} }{n^2-s^2} \right ]$$ which gives, with $s=ia$ $$\sum_{n=1}^{\infty} \frac{(-1)^{n+1} }{n^2+a^2} = -\frac{1}{2a^2}\left(\frac{\pi i a}{\sin(\pi i a)}-1\right) $$ $$\boxed{\sum_{n=1}^{\infty} \frac{(-1)^{n+1} }{n^2+1} = \frac{1}{2}\left(1-\frac{\pi}{\sinh(\pi)}\right)}$$
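A quick numerical check of the boxed closed form (my addition; the partial-sum cutoff of $10^5$ terms is an arbitrary choice):

```python
import math

partial = sum((-1) ** (n + 1) / (n ** 2 + 1) for n in range(1, 100001))
closed = 0.5 * (1 - math.pi / math.sinh(math.pi))
print(partial, closed)   # both are approximately 0.36398
```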
H: Show then that the inequality $(z-x)\int_{y}^zf(u)du≥(z-y)\int_{x}^zf(u)du$ holds for any $0 ≤ x < y < z.$ QUESTION: Let $f : [0,∞) → \mathbb{R}$ be a non-decreasing continuous function. Show then that the inequality $$(z-x)\int_{y}^zf(u)du≥(z-y)\int_{x}^zf(u)du$$ holds for any $0 ≤ x < y < z.$ MY APPROACH: We observe that the integral on the L.H.S. represents the area under the curve $f(x)$ from $y$ to $z$, which is certainly smaller than ($\because$ the function is non-decreasing) that represented by the integral on the R.H.S., which is from $x$ to $z$ $(\because x<y<z)$. And obviously, $(z-x)>(z-y)$, which is in accordance with the given inequality. Now since $x,y,z$ are arbitrary, how do we know by how much one quantity is greater or smaller than the other? The inequality seems like- $$(greater)(smaller)≥(smaller)(greater)$$ How do I solve this? Any help will be much appreciated. Thank you so much. AI: By the mean value theorem \begin{align} \int_{y}^zf(u)\,du &=(z-y) f(c_1) \\ \int_{x}^zf(u)\,du &= (z-x) f(c_2) \end{align} where $c_1\in [y,z]$ and $c_2\in [x,z]$. Can you take it from here?
H: Is there a special way to solve a problem like this with the indicated setup? My Answer, and the Question My question is if the problem was converted to the following:\begin{align}\cos(x)y'+\sin(x)y&=2\sin(x)\\ u'y'+uy&=2u \end{align} How can I advance from here to solve the problem if substitution were my aim in answering the question (which of course is answered)? AI: Your approach, $$u'y'+uy=2u$$ $$uy'-u'y=-2u'\quad (\text{where, } \ \ u=\cos x)$$ $$\frac{y'}{u}-\frac{u'}{u^2}y=-2\frac{u'}{u^2}\quad (\text{dividing by } \ u^2)$$ $$d\left(\frac{y}{u}\right)=2 \ d\left(\frac{1}{u}\right)$$ $$\int d\left(\frac{y}{u}\right)=2\int d\left(\frac{1}{u}\right)$$ $$\frac yu=\frac2u+C$$ $$y=Cu+2$$ Alternatively, $$\cos(x)y'+\sin (x)y=2\sin(x)$$ $$y'+\tan (x)y=2\tan(x)$$ Multiplying $\sec x$ on both the sides, $$y'\sec (x)+\sec(x)\tan (x)y=2\sec(x)\tan(x)$$ $$\frac{d}{dx}\left(y\sec (x)\right)=2\sec(x)\tan(x)$$ $$\int d\left(y\sec (x)\right)=\int 2\sec(x)\tan(x)dx$$ $$y\sec(x)=2\sec(x)+C$$ $$y=C\cos (x)+2$$
H: Showing a set is closed in the product topology We have some topological space $X$ with continuous functions $f,g:X \rightarrow \mathbb{R}$ equipped with the usual topology I want to show that then set: $$E = \{(x,y):f(x)=g(y)\} \subset X \times X $$ is closed in $X \times X$ with the product topology. So I need to show that $(X \times X) \setminus E$ is open, but I’m not sure how. AI: Note that $h(x, y)=f(x)-g(y)$ is continuous because $f$ and $g$ are. Then $(X \times X) \setminus E = h^{-1}(\Bbb R \setminus \{0 \})$. Since $\Bbb R \setminus \{ 0 \}$ is open and $h$ is continuous, you have the result you need.
H: Understanding how $E(X)=\frac{91}{36}$ was calculated I've found an older question here and I just can't understand how they got these values. The question says: "We roll a fair die 2 times (independently): a) X denotes the number of the first throw; Y denotes the sum of the two throws. Calculate Cov(X,Y). Calculate the correlation coefficient ϕX,Y." I know the formula $cov(X,Y) = E[XY] - E[X]E[Y]$ and in the answers they calculated E[X] = 91/36. How did they reach this number? From my understanding the mean should be 1 * 1/6 + 2 * 1/6 + ... + 6 * 1/6 = 21/6. What am I missing? AI: If the question you found is this question, the expectation $E(X)=\frac{91}{36}$ is referring to part (b). You are correct that in part (a), $E(X)=\frac72$. To get the value $\frac{91}{36}$, construct a $6\times 6$ table of all outcomes of the experiment. The $36$ possible values for (first roll, second roll) are $$ \begin{matrix}(1,1)&(1,2)&(1,3)&(1,4)&(1,5)&(1,6)\\ (2,1)&(2,2)&(2,3)&(2,4)&(2,5)&(2,6)\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\\ (5,1)&(5,2)&(5,3)&(5,4)&(5,5)&(5,6)\\ (6,1)&(6,2)&(6,3)&(6,4)&(6,5)&(6,6) \end{matrix} $$ Here are the corresponding values for $X:=$ smaller of the two rolls: $$ \begin{matrix}1&1&1&1&1&1\\ 1&2&2&2&2&2\\ \cdots&\cdots&\cdots&\cdots&\cdots&\cdots&\\ 1&2&3&4&5&5\\ 1&2&3&4&5&6 \end{matrix} $$ Since all $36$ outcomes are equally likely, count up the occurrences of $1,2,\ldots,6$ to obtain $P(X=1)=\frac{11}{36}$, $P(X=2)=\frac{9}{36}$, $P(X=3)=\frac7{36}$, $P(X=4)=\frac5{36}$, $P(X=5)=\frac3{36}$, $P(X=6)=\frac1{36}$, and finally compute $E(X)=\frac{91}{36}$.
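A brute-force check of that expectation (my addition, a small Python sketch):

```python
from fractions import Fraction
from itertools import product

# Enumerate all 36 equally likely (first roll, second roll) outcomes
# and average the smaller of the two rolls.
values = [min(a, b) for a, b in product(range(1, 7), repeat=2)]
print(Fraction(sum(values), len(values)))   # 91/36
```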
H: Check if the last digit in a 4 digit number is a 1 Is there a way to check if the last digit of a 4-digit (or any digit) number is a 1? I am trying to make something that checks if the first and last digit of a number are both 1. Suppose our number is 1581. To check if the first digit is 1 is trivial. I can go.. if number - 2000 < 0 AND number - 999 > 0, then the first digit is always one. Is there a test similar to above to check if the last digit is 1? EDIT: Just to add more context to the question, we want to find an automated way to check if the number starts and ends with a 1. AI: If it has MOD then something like $$MOD(1481,10)=1$$. If it has FLOOR or INT to give the integer part, then $$1481-10×FLOOR(1481/10)=1$$
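If the check is being automated, the same MOD/FLOOR idea looks roughly like this (my addition, a hedged Python sketch; the function name is made up):

```python
def starts_and_ends_with_one(number):
    # Last digit: number mod 10.  First digit: keep applying FLOOR(number / 10)
    # until a single digit remains.
    last = number % 10
    first = number
    while first >= 10:
        first //= 10
    return first == 1 and last == 1

print(starts_and_ends_with_one(1581))   # True
print(starts_and_ends_with_one(1480))   # False
```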
H: Doubt about chromatic number proof I'm in a discrete math course and was trying to prove the following theorem: A graph G with $\Delta(G) = k$ ($\Delta(G)$ is the max vertex degree) is $(k+1)$-colorable. I've tried on my own, and I've read the answers in here, here and this but I still have a big doubt about the induction proof as shown in 2. In the induction step, when we delete a vertex v of the graph $G$ with $\Delta(G) = k$: What guarantees that once the vertex is deleted, $\Delta(G)$ is still $k$? As they show that $\Delta(G') = k$, what guarantees that? PS: pardon my English, I'm not a native speaker. AI: It’s entirely possible that after $v$ is removed, the maximum degree of the resulting graph $H$ is less than $k$. It doesn’t matter: the result being proved at that link is that if $\Delta(G)\color{red}{\le}k$, then $G$ is $(k+1)$-colorable. We assumed that $\Delta(G)\le k$, and removing a vertex cannot increase the maximum degree, so $\Delta(H)\le k$, and we can apply the induction hypothesis.
H: Is there a proper ideal of $B(H)$ that contains a proper projection Let $H$ be an infinite-dimensional separable Hilbert space and $\mathcal{I}$ be a proper closed two-sided ideal of $B(H)$. Can $\mathcal{I}$ contain a projection onto an infinite-dimensional proper closed subspace? If $H$ is not separable (or $\mathcal{I}$ not closed) while all other conditions remain the same, will the conclusion change? What if $\mathcal{I}$ is only one-sided (other conditions remain the same)? (Added) This question is inspired by Corollary 5.11 in Banach Algebra Techniques in Operator Theory written by Ronald G. Douglas. This corollary claims that in an infinite-dimensional separable Hilbert space the ideal of compact operators $\mathcal{K}$ is the only proper closed two-sided ideal in $B(H)$. The proof first assumes that $T \in \mathcal{I}$ and $T$ is NOT compact. Hence by the converse of Lemma 5.8, in the range of $T$ there is a closed infinite-dimensional subspace, say $M$. Define $ S_0: M \rightarrow H, S_0(Tv) = v\,\implies\,S_0 \in B(M)$ according to the closed mapping theorem. Define $S\,\vert_M = S_0, S\,\vert_{M^{\perp}} = 0\,\implies\,TS = P_M \in \mathcal{I}$. At last the proof claims that hence $\mathcal{I}$ contains the identity mapping. I am not sure if the last statement is correct. Lemma 5.8: $H$ is an $\infty$-dimensional Hilbert space. $T$ is a compact operator iff $ran(T)$ contains NO closed $\infty$-dimensional subspace. In the original text there is only the $\implies$ direction but the converse is also true. Let $H_{\leq 1}$ be the closed unit ball of $H$ and hence $\overline{TH_{\leq 1}}$ is a finite dimensional closed and bounded subspace and hence compact (with respect to the original topology). AI: Suppose $\mathcal{I}$ contains a projection $P$ onto an infinite-dimensional closed subspace $M$. In particular, $\dim M=\dim H=\aleph_0$ since $H$ is separable, so we can pick a Hilbert space isomorphism $U:M\to H$. Extend $U$ to an operator $T:H\to H$ which vanishes on $M^\perp$, and let $S:H\to H$ be the composition of $U^{-1}:H\to M$ with the inclusion $M\to H$. Then $TPS=1$, so $1\in\mathcal{I}$ and the ideal is not proper. This argument does not use the assumption that $\mathcal{I}$ is closed, but it does use the assumption that $H$ is separable and $\mathcal{I}$ is two-sided. If $H$ has uncountable dimension, then the set of operators whose range is separable forms a proper closed two-sided ideal and contains projections onto infinite-dimensional subspaces. If $\mathcal{I}$ is only a one-sided ideal, then it could be the ideal of all operators that vanish on $M^\perp$ (for a left ideal) or operators whose range is contained in $M$ (for a right ideal).
H: Field norm well-behaved with respect to minimal polynomial I'm not sure if this property is standard but this is what some examples suggested: Let $\alpha$ and $\beta$ be algebraic numbers. Is it true that $$|N_{\mathbb Q(\alpha) / \mathbb Q}(min_{\beta / \mathbb Q}(\alpha))| = |N_{\mathbb Q(\beta) / \mathbb Q}(min_{\alpha / \mathbb Q}(\beta))| ?$$ I would be really obliged if someone could provide a proof or counterexample. If the above conjecture is false, can something be said in the following special cases: When the extension $\mathbb Q(\beta) / \mathbb Q$ is Galois? When the extension $\mathbb Q(\beta) / \mathbb Q$ is a cyclotomic extension? AI: Let $S,T$ be the (finite) sets of embeddings of $\mathbb{Q}(\beta)$ and $\mathbb{Q}(\alpha)$ in $\mathbb{C}$. Let $n=|S|$, and, for each $0 \leq k \leq n$, $(-1)^kb_k$ be the $k$-th elementary symmetric polynomial in the elements of $S$. Then $min_{\beta/\mathbb{Q}}(\alpha)=\prod_{\sigma \in S}{(\alpha-\sigma(\beta))}=\sum_{k=0}^n{\alpha^kb_{n-k}}$. So the norm $\mathbb{Q}(\alpha)/\mathbb{Q}$ of the LHS is $\prod_{\tau \in T}{\sum_{k=0}^n{\tau(\alpha)^k b_{n-k}}}=\prod_{\tau \in T}\prod_{\sigma \in S}(\tau(\alpha)-\sigma(\beta))$. So actually, the quotient $LHS/RHS$ for you is equal to $(-1)^{|S||T|}$ and the equality holds iff $\alpha$ and $\beta$ are conjugates or $|S|$ or $|T|$ is even. Otherwise, there is a sign.
H: Application of calculus in kinematics Newton 2nd law As we all know, Newton's second law of motion describes the relationship between force and acceleration of a body as directly proportional. Now let's say a body in motion has a mass of 9 kg and its acceleration is $a(t)=-9t$. We are to find its displacement $y$ as a function of time $t$. I know from calculus that when you integrate acceleration you get velocity and when you integrate velocity you get displacement. I followed suit, but I want to see if this is a valid way of going about it or perhaps it is wrong. AI: Let $y(t)$ be the displacement at time $ t$. Then $$a=\frac{d^2y}{dt^2}=-9t$$ after a first integration, we find $$\frac{dy}{dt}=\int_{t_0}^t(-9u)du$$ $$=\Bigl[-\frac{9u^2}{2}\Bigr]_{t_0}^t$$ $$=-\frac{9t^2}{2}+V_0$$ the second integration gives $$y(t)=\int_{t_0}^t(-\frac 92u^2+V_0)du$$ $$=\Bigl[-\frac{3}{2}u^3+V_0u\Bigr]_{t_0}^t$$ $$=-\frac 32 t^3+V_0t+Y_0$$ $Y_0$ and $ V_0$ are respectively the position and the velocity at $t=0$.
H: Why does the number of possible probability distributions have the cardinality of the continuum? Wikipedia's article on parametric statistical models (https://en.wikipedia.org/wiki/Parametric_model) mentions that you could parameterize all probability distributions with a one-dimensional real parameter, since the set of all probability measures & $\mathbb{R}$ share the same cardinality. This fact is mentioned in the cited text (Bickel et al, Efficient and Adaptive Estimation for Semiparametric Models), but not proved or elaborated on. This is pretty neat to me. (If I'd been forced to guess, I would have guessed the set of possible probability distributions to be bigger, since pdfs are functions $\mathbb{R}\rightarrow\mathbb{R}$, and we're counting probability distributions that don't have a density, too. It's got to be countable additivity constraining the number of possible distributions, but how?) Where could I go to find a proof of this, or is it straightforward enough to outline in an answer here? Does its proof depend on AC or the continuum hypothesis? We need some kind of condition on the cardinality of the sample space that neither Wikipedia or Bickel mention, right (if it's too big, then the number of degenerate probability distributions is too big)? AI: A probability on $\mathbb{R}$, be it continuous or not, is given by its CDF $x \mapsto\mathbb{P}(X \leq x)$. A CDF is right-continuous, and the set of right-continuous functions has the cardinality of $\mathbb{R}$. To see this, you can for instance argue that the values of such a function are given by its values at the rational points, so it has at most the cardinality of a countable product of copies of $\mathbb{R}$, which has the cardinality of $\mathbb{R}$ as well.
H: Mean of a function of Binomial Distribution Let $X$ be a random variable following the Binomial Distribution with parameters $n$ and $p$. Show that $$ \mathbb{E}\left[\frac{1}{1+X}\right]=\frac{1-\left(1-p\right)^{n+1}}{p(n+1)}, $$ where $\mathbb{E}[\cdot]$ is the mean value function. My textbook had this exercise and I found it very interesting, but I can't solve it because I am not good at sums with the binomial coefficient. Any suggestion is appreciated. Thank you very much in advance. AI: Note that $$ E\frac{1}{1+X}=\sum_{k=0}^n\frac{1}{1+k}\binom{n}{k}p^k(1-p)^{n-k}= \sum_{k=0}^n\left[\binom{n}{k}p^k(1-p)^{n-k}\int_{0}^1t^k\,dt\right] $$ i.e. $$ E\frac{1}{1+X}=\int_{0}^1\sum_{k=0}^n \binom{n}{k}(pt)^k(1-p)^{n-k}\, dt=\int_0^1(1-p+pt)^n\,d t $$ where we have used the linearity of the integral and the binomial theorem. I leave it to you to compute the integral to yield the desired answer.
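An exact arithmetic check of the identity for one set of parameters (my addition; $n=6$ and $p=1/3$ are arbitrary choices):

```python
from fractions import Fraction
from math import comb

def lhs(n, p):
    # Direct expectation of 1/(1+X) for X ~ Binomial(n, p)
    return sum(Fraction(1, 1 + k) * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n + 1))

def rhs(n, p):
    return (1 - (1 - p)**(n + 1)) / (p * (n + 1))

n, p = 6, Fraction(1, 3)
print(lhs(n, p) == rhs(n, p))   # True
```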
H: $re^{i\phi} \rightarrow re^{2i\phi}$ not holomorphic over $\mathbb{C} \backslash \{0\}$ Try to solve this problem: Show that the function $ f: re^{i\phi} \rightarrow re^{2i\phi}$ is not holomorphic over $\mathbb{C} \backslash \{0\}$ My solution: We have $$f(z) =f(x+yi) = f(re^{i\phi}) = re^{2i\phi} = r\cos(2\phi) + i\,r\sin(2\phi)$$ and $u(x,y) = r\cos(2\phi), v(x,y) = r\sin(2\phi)$ After using the Cauchy–Riemann equations we have the following; I'm having a hitch here because I don't know what to do next. $$ \frac{\partial u}{\partial x} = , \frac{\partial u}{\partial y} = , \frac{\partial v}{\partial y} = , \frac{\partial v}{\partial x} = $$ UPD after comment: But if I have the C-R equations in polar form: $$ \left( \frac{\partial u}{\partial r}\right) = \frac{1}{r} \left( \frac{\partial v}{\partial \theta}\right) \ \ \ \ \ \text{and} \ \ \ \ \left(\frac{\partial v}{\partial r} \right) = \frac{-1}{r} \left( \frac{\partial u}{\partial \theta}\right) $$ We have: $u(r,\phi) = r\cos(2\phi), v(r,\phi) = r\sin(2\phi)$ and: $$ \frac{\partial u}{\partial r} = \cos(2\phi), \quad \frac{1}{r}\frac{\partial v}{\partial \theta} = 2\cos(2\phi), \quad \frac{\partial v}{\partial r} = \sin(2\phi), \quad \frac{-1}{r}\frac{\partial u}{\partial \theta} = 2\sin(2\phi) $$ so we have the system: $$ \begin{cases} \cos(2\phi) = 2\cos(2\phi) \\ \sin(2\phi) = 2\sin(2\phi) \end{cases} $$ AI: Observe that $f(z)=\frac {z^{2}} {|z|}$ for all $z \neq 0$. Let us prove by contradiction that this is not holomorphic in $\mathbb C \setminus \{0\}$. If $f$ is holomorphic then so is $\frac 1 {|z|}$ because $\frac 1 {|z|}$ is the product of $f$ and the holomorphic function $\frac 1 {z^{2}}$. But now $|z|$ itself is holomorphic since it is the reciprocal of a holomorphic function with no zeros. Can you finish the proof by showing that $|z|$ is not holomorphic? No real valued function other than a constant is holomorphic in any domain.
H: Does the property hold almost everywhere? Suppose a property holds in every compact subset of $I$. Does it follow that this property holds almost everywhere on $I$? I want to use the dominated convergence theorem but do not know almost anything about Lebesgue measure. I know that the function series converges on every compact subset of $(0,1)$; does it follow that this series converges almost everywhere on $(0,1)$? Actually it is my fault for not asking this question in a more proper way; I do not know much about Lebesgue measure. AI: Suppose there is a set of positive measure on which you do not have convergence. Then there exists a compact set $[ \epsilon, 1-\epsilon]$ such that its intersection with the set considered before has positive measure. But the series converges on $[ \epsilon, 1-\epsilon]$, and you have a contradiction. This should actually prove that the series converges everywhere on $(0,1)$, and it is based on the fact that you can view your open interval as a countable increasing union of compact sets.
H: Difficulty in understanding the proof of infinitude of primes in a certain arithmetic progression Let $m$ as a fixed odd prime. How to show there are infinitely many primes of the form $2km+1$ (for some positive integer $k$). Can someone please help? Any help would be appreciated. Thanks in advance. AI: The only one written out with sufficient detail for me to reverse-engineer the proof is the Landry one. Here is my take on it (in English): First fix $m$ as an odd prime. We will show there are infinitely many primes of the form $2km+1$ (for some positive integer $k$). Assume there are only finitely many (we reach a contradiction). Let $\theta$ be the largest one. Let $x$ be the product of all primes of the form $2km+1$. Then Claim 1: $x^m+1$ is not divisible by any primes of the form $2km+1$. Claim 2: $\frac{x^m+1}{x+1}$ is a positive integer. Claim 3: All prime divisors of $\frac{x^m+1}{x+1}$ are of the form $2km+1$. Assuming these three claims: From Claim 1 and 2 it follows that the integer $\frac{x^m+1}{x+1}$ is not divisible by any primes of the form $2km+1$. So $\frac{x^m+1}{x+1}$ must be divisible by some prime that is not of the form $2km+1$. But this contradicts Claim 3. $\Box$ Proof of Claim 1: Let $z$ be a prime of the form $2km+1$. By construction, $x^m$ is divisible by $z$. Thus, $x^m+1$ cannot be divisible by $z$. Proof of Claim 2: We have for any real number $y$: $$ (1+y+y^2+...+y^{m-1})(1-y) = 1-y^m$$ Choosing $y=-x$ (where $-x$ is an integer) gives $$ \underbrace{(1+(-x) +(-x)^2 + ... + (-x)^{m-1})}_{integer}(1+x)=1-(-x)^m = 1+x^m$$ where the final equality holds because $m$ is odd, so $(-1)^m=-1$. Proof of Claim 3: I don't know. But, nothing in the above has used the assumption that $m$ is prime (we have only used that $m$ is an odd positive integer). So, Claim 3 must somehow use the assumption that $m$ is prime.
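To see the statement in action (my addition, a small sketch; sympy's isprime and the cutoff of 200 are my choices, and this illustrates rather than proves anything), here is a listing of primes of the form $2km+1$ for the odd prime $m=7$:

```python
from sympy import isprime

m = 7   # a fixed odd prime
primes = [2 * k * m + 1 for k in range(1, 200) if isprime(2 * k * m + 1)]
print(primes[:8])   # [29, 43, 71, 113, 127, 197, 211, 239]
```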
H: $x \in \operatorname{Int}(A) \iff d(x, A^c) > 0$. In $\Bbb R$ with the usual metric $|\cdot|$, I have to show that $$x\in \text{Int}(A) \iff d(x,A^{c})>0$$ where $\text{Int}(A)$ is the interior of $A$. When I suppose that $d(x,A^{c})>0$, I have to show that $x\in \text{Int}(A)$. For this, let $x\in A$; I consider $\epsilon = d(x,A^{c})$, and I construct $B_{\epsilon} (x)=\{y\in \Bbb R : |x-y|<\epsilon\}$. Am I on the right track? And if I am, how can I conclude? When I suppose that $x\in \text{Int}(A)$, I know that $d(x,A^{c})\geq 0$, so $d(x,A^{c})=0$ or $d(x,A^{c})\gt 0$; but if I consider the case $d(x,A^{c})= 0$, that is, $d(x,A^{c})= \inf \{|x-y| : y \in A^{c} \}=0$, I know there exists some $\epsilon >0$ such that $B_{\epsilon} (x)\subset A$, and I don't know what follows. I accept any suggestions or comments. Thank you. AI: Let $r=d(x,A^{c}) >0$. Then $B(x,r) \subseteq A$ because if $d(y,x) <r$ but $y \in A^{c}$ then we get the contradiction $r=d(x,A^{c}) \leq d(y,x) <r$. Now suppose $B(x,r) \subseteq A$. Then $y \in A^{c}$ implies that $y \notin B(x,r)$ which means $d(y,x) \geq r$. Since this is true for all $y \in A^{c}$ we get $d(x,A^{c}) \geq r$ so $d(x,A^{c})>0$.
H: Priority of properties in Laplace Transform. Suppose I have a function $f(t)$ whose corresponding Laplace transform is $F(s)$. Now I want to calculate the Laplace transform of $f(a (t-b))$ using Laplace transform properties. If $L(g(t))= G(s)$ Property A: Time shift $L( g(t-a)) = e^{-a s} G(s)$ Property B: Time scaling $L(g(at)) = \frac{1}{a} G(\frac{s}{a})$ Then I have a question about the order in which I can use them. For example if I go A then B: $L(g(a(t-b))) = e^{-b s} L(g(at)) = e^{-b s} \frac{1}{a} G(\frac{s}{a}) $ If I go B then A: $L(g(a(t-b))) = \frac{1}{a} L( g(t-b) ) $ the transform must be evaluated with $\frac{s}{a}$ $\frac{1}{a} L( g(t-b) ) = \frac{1}{a} e^{-bs} L( g(t) ) = \frac{1}{a} e^{-b \frac{s}{a}} G(\frac{s}{a}) $ Which one is correct? Please correct me if anything is wrong, even language :D (not native here) AI: For the second one $$L(g(a(t-b))) =L(g(at-ab)) = \frac{1}{a} L( g(t-ab) )$$ the transform must be evaluated with $s/a$ $$L(g(a(t-b))) = \frac{1}{a} e^{-abs/a}G(\frac s a)$$ $$L(g(a(t-b))) = \frac{1}{a} e^{-bs}G(\frac s a)$$ It gives the same answer in both cases.
H: Can we always construct a matrix using its eigenvectors? In physics, a Hermitian matrix represents an observable and can be constructed using its eigenvalues and eigenvectors in the following way: $$ A = \sum_i \lambda_i v_iv_i^\dagger \qquad \qquad (1)$$ where $\lambda_i$ and $v_i$ are the $i^{th}$ eigenvalue and eigenvector and $v_i^\dagger$ is the transpose conjugate of $v_i$. The proof is the following: If the eigenvectors form an orthonormal basis, $\{v_i\}$, then we have: $$ \sum_i v_iv_i^\dagger =1$$ This must be true because we can write a vector $u$ in the $\{v_i\}$ basis by: $$ u = \sum_i v_i v_i^\dagger u $$ Therefore, we can apply this identity twice to $A$ and get: $$ A = \sum_i \sum_j v_iv_i^\dagger A v_jv_j^\dagger = \sum_i \sum_j v_iv_i^\dagger \lambda_j v_jv_j^\dagger= \sum_i \sum_j \lambda_j v_iv_i^\dagger v_jv_j^\dagger = \sum_i \sum_j \lambda_j v_i \delta_{ij} v_j^\dagger= \sum_i \lambda_i v_i v_i^\dagger$$ Is equation (1) only valid for matrices having eigenvectors that form a basis? Or can all matrices be constructed in this way? AI: If the eigenvalues of $A$ are real, but $A\neq A^\dagger$, then the right hand side of (1) is Hermitian, but the left hand side is not, so (1) fails. An example is $$\left(\begin{array}{cc} 1&1\\0&2\end{array}\right)$$.
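A numerical illustration of equation (1) for a randomly generated Hermitian matrix (my addition, a numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2                       # a Hermitian matrix

eigvals, eigvecs = np.linalg.eigh(A)           # columns of eigvecs are orthonormal eigenvectors
rebuilt = sum(lam * np.outer(v, v.conj())      # sum_i lambda_i v_i v_i^dagger
              for lam, v in zip(eigvals, eigvecs.T))
print(np.allclose(A, rebuilt))                 # True
```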
H: The representation theory of a group Let $V$ be the $\mathbb KG$-module. Denote by $V^G$ the subspace of $V$ consisting of all the invariant elements under the action of $G,$ i.e., $V^G = \{v\in V| g\cdot v = v\}.$ Consider $W = V/V^G.$ Prove that $W = W^G.$ Clearly, $W^G \subseteq W.$ How to prove $W \subseteq W^G$? I need some insight! AI: For an easy counterexample let $G = \mathrm{GL}_2(\mathbb R)$ and $V = \mathbb R^2$ considered as column vectors with the natural left action of matrix multiplication. Then $V^G = \{0\}$ so $W = V/\{0\} \simeq V$ is $2$-dimensional and $W^G = \{0\}$ again.
H: Finding irreducible polynomial in finite field I would like to find an irreducible polynomial of degree $3$ in $\mathbb{F}_4$, where $$\mathbb{F}_4 = \{a+b\alpha| \ a, b\in \mathbb{F}_2, \alpha^2 = \alpha + 1\}.$$ I first tried to find an irreducible polynomial of degree 2. Since $\mathbb{F}_4 = \mathbb{F_2[\alpha]}$, we know $f(x) = x^2 - x - 1$ is irreducible over $\mathbb{F}_2$ since $f(\alpha) = 0$ and its degree matches the degree of the simple extension. However, when it comes to finding a degree 3 irreducible polynomial, I feel it would be very difficult to argue whether a given polynomial is irreducible. Any suggestions on how to approach this? AI: Deciding whether a degree $3$ polynomial is irreducible over $\mathbb F_4$ is actually quite easy. If a degree $3$ polynomial factors then at least one of those factors must be degree $1$, i.e., your polynomial must have a root. So a degree $3$ polynomial factors if and only if it has a root. As your field has only $4$ elements this is straightforward to check.
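Following the answer's root-testing criterion, here is a brute-force check over the four elements of $\mathbb{F}_4$ (my addition, a Python sketch; the candidate $x^3+x+1$ is my example choice):

```python
# F_4 = {a + b*alpha : a, b in {0,1}}, alpha^2 = alpha + 1; elements encoded as pairs (a, b).
def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = ac + (ad + bc)*alpha + bd*(alpha + 1)
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

F4 = [(0, 0), (1, 0), (0, 1), (1, 1)]

def evaluate(coeffs, x):
    # coeffs = [c0, c1, c2, c3] for c0 + c1*x + c2*x^2 + c3*x^3, each ci in F4
    result, power = (0, 0), (1, 0)
    for c in coeffs:
        result = add(result, mul(c, power))
        power = mul(power, x)
    return result

# x^3 + x + 1: a cubic is irreducible over F_4 iff it has no root in F_4.
f = [(1, 0), (1, 0), (0, 0), (1, 0)]
print(all(evaluate(f, x) != (0, 0) for x in F4))   # True, so f is irreducible over F_4
```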
H: Complicated infinite sum convergence Do the following infinite sums $$ \sum_{n=0}^{\infty}\binom{m+n}{n}b^{n}\text{ and }\sum_{m=0}^{\infty}\binom{m+n}{n}a^{m} $$ converge? If so how to calculate their limit (the $m+n$ in the binomial coefficient confuses me)? AI: Depends on $b$ and $a$, obviously... and they are really the same sum, as: $$ \binom{m + n}{n} = \binom{m + n}{m} $$ If you expand by the extended binomial theorem: $\begin{align*} (1 + a)^{-r} &= \sum_{k \ge 0} \binom{-r}{k} a^k \\ &= \sum_{k \ge 0} (-1)^k \binom{r + k - 1}{r - 1} a^k \\ &= \sum_{k \ge 0} (-1)^k \binom{r - 1 + k}{k} a^k \end{align*}$ Your sum is just: $\begin{align*} (1 - a)^{-(m + 1)} &= \sum_{n \ge 0} (-1)^n \binom{(m + 1) - 1 + n}{n} (-a)^n \\ &= \sum_{n \ge 0} \binom{m + n}{n} a^n \end{align*}$ As for the binomial coefficient: $\begin{align*} \binom{-r}{k} &= \frac{(-r)^{\underline{k}}}{k!} \\ &= \frac{(-r) (-r - 1) \dotsm (-r - (k - 1))}{k!} \\ &= (-1)^k \frac{r (r + 1) \dotsm (r + k - 1)}{k!} \\ &= (-1)^k \frac{(r + k - 1)^{\underline{k}}}{k!} \\ &= (-1)^k \binom{r + k - 1}{k} \\ &= (-1)^k \binom{r + k - 1}{r - 1} \end{align*}$ (This uses Knuth's notation for falling factorial powers). Summing up: As the binomial series for powers that aren't positive integers converges only for $\lvert a \rvert < 1$, that is the range of validity for your series.
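A quick numerical check of the closed form (my addition; $m=3$, $a=0.4$ and the 200-term cutoff are arbitrary choices):

```python
from math import comb

m, a = 3, 0.4
series = sum(comb(m + n, n) * a**n for n in range(200))
print(series, (1 - a) ** (-(m + 1)))   # both are approximately 7.716
```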
H: $x$ is a member of $X$ if it is a member of $Y$--what is proof that $X=Y$? $x$ is a member of $X$ if it is a member of $Y$. From this fact can we get to the statement $X=Y$? AI: You can’t make this conclusion unless the converse holds as well: every member of $X$ must also be a member of $Y$. The given statement only says $Y\subseteq X$; to conclude $X=Y$ you also need $X\subseteq Y$.
H: A boundary point is a limit point of $M$ and $M^{c}$? I have been studying the basic concepts in topology and I'm wondering whether, given a boundary point, I can view it as a limit point of both $M$ and $M^{c}$, where $M\subset X$ and $(X,d)$ is a metric space. AI: That’s almost right: it’s a point $x$ that is in both the closure of $M$ and the closure of $X\setminus M$. Thus, every open nbhd of $x$ intersects both $M$ and $X\setminus M$, but $x$ need not actually be a limit point of both of them. For instance, $2$ is a boundary point of $M=[0,1]\cup\{2\}$, but it isn’t a limit point of $M$: it’s in the closure of $M$ simply because it’s in $M$. (However, it’s in the closure of $\Bbb R\setminus M$ because it really is a limit point of that set.) If $X$ is a metric space, though, you can say that a point $x$ is in the boundary of $M$ if and only if there are sequences in both $M$ and $X\setminus M$ that converge to $x$, so long as you bear in mind that one of those sequences may be constant at $x$.
H: Using the sum disturbance method, find a compact form of the following sums: I tried to solve these two examples without success; could someone help me, because I got stuck on them and don't understand how to solve them. Using the sum disturbance method, find a compact form of the following sums: \begin{align} &\textrm{(a)} &&\sum_{k=1}^n{(-1)^k \frac{k}{2^{k-1}}}\\ &\textrm{(b)} &&\sum_{k=1}^n{(1+k2^{k-1})^2} \end{align} The disturbance method expresses the sum $${s_{n+1} = a_{1} + a_2 + \dots + a_n + a_{n+1}}$$ in two ways. The first is obvious: $$s_{n+1} = s_n + a_{n+1}.$$ The second is to present $s_{n+1}$ in the form: $$s_{n+1} = a_1 + f(s_n),$$ where $f$ is a function. Then we get the equality $$s_n + a_{n+1} = a_1 + f(s_n).$$ This equality can be treated as an equation with one unknown, $s_n$. When it is solved for this unknown, we obtain a compact form of the sum. AI: For part (a), \begin{align} f(s_n) &=s_{n+1}-a_1\\ &=\sum_{k=2}^{n+1}{(-1)^k \frac{k}{2^{k-1}}}\\ &=\sum_{k=1}^n {(-1)^{k+1} \frac{k+1}{2^k}}\\ &=-\frac{1}{2}\sum_{k=1}^n (-1)^k \frac{k}{2^{k-1}} -\sum_{k=1}^n \left(-\frac{1}{2}\right)^k\\ &=-\frac{s_n}{2} -\frac{-1/2-(-1/2)^{n+1}}{1-(-1/2)}\\ &=-\frac{s_n}{2} +\frac{1-(-1/2)^n}{3}, \end{align} so $s_n + a_{n+1} = a_1 + f(s_n)$ becomes $$s_n + (-1)^{n+1} \frac{n+1}{2^n} = -1 -\frac{s_n}{2} +\frac{1-(-1/2)^n}{3},$$ which implies that $$s_n = \frac{2}{3}\left((-1)^n \frac{n+1}{2^n} -1 +\frac{1-(-1/2)^n}{3}\right) = \frac{2}{9}\left((3n+2)\left(\frac{-1}{2}\right)^n -2\right). $$
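A quick exact check of the closed form for part (a) over a few values of $n$ (my addition, a Python sketch):

```python
from fractions import Fraction

# s_n = sum_{k=1}^n (-1)^k k / 2^(k-1)  should equal  (2/9) * ((3n+2)(-1/2)^n - 2)
for n in range(1, 12):
    s = sum(Fraction((-1) ** k * k, 2 ** (k - 1)) for k in range(1, n + 1))
    closed = Fraction(2, 9) * ((3 * n + 2) * Fraction(-1, 2) ** n - 2)
    assert s == closed
print("closed form verified for n = 1..11")
```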
H: If $2a^2+8b^2+5c^2=2c(a+6b)$ then find $a:b:c$ It's a question involving three variables $a, b$ and $c$. One just has to find the ratio of the three variables. AI: $$ P^T H P = D $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & 0 \\ \frac{ 1 }{ 2 } & \frac{ 3 }{ 4 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 2 & 0 & - 1 \\ 0 & 8 & - 6 \\ - 1 & - 6 & 5 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 0 & \frac{ 1 }{ 2 } \\ 0 & 1 & \frac{ 3 }{ 4 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 2 & 0 & 0 \\ 0 & 8 & 0 \\ 0 & 0 & 0 \\ \end{array} \right) $$ We get zero for $v^T D v$ if and only if the column vector $v$ is of the form $(0,0,t)^T$ for some real number $t.$ Then $Pv$ is of the form $(t/2,3t/4,t)^T$ or, letting $t = 4s,$ of form $(2s, 3s, 4s)^T$ Your proportion notation would say $$ 2 : 3 : 4 $$
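The congruence diagonalization above amounts to completing two squares; here is a short symbolic check of that and of the resulting ratio (my addition, a sympy sketch):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
expr = 2*a**2 + 8*b**2 + 5*c**2 - 2*c*(a + 6*b)

# The form is a sum of two squares, so it vanishes iff a = c/2 and b = 3c/4, i.e. a:b:c = 2:3:4.
print(sp.expand(2*(a - c/2)**2 + 8*(b - 3*c/4)**2 - expr))   # 0
print(expr.subs({a: 2, b: 3, c: 4}))                          # 0
```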
H: Extending of a sheaf by zero (exercise II.1.19 (b) from Hartshorne) I'll reproduce part of Hartshorne's exercise II.1.19 (b) (the rest is not important to the question): Let $Z$ be a closed subset of a topological space $X$ an $i:Z\to X$ the inclusion. If $\mathcal{F}$ is a sheaf on $U:=X\setminus Z$, let $j_!\mathcal{F}$ be the sheaf on $X$ associated to the presheaf $V\mapsto \mathcal{F}(V)$ if $V\subset U$ and $V\mapsto 0$ otherwise. [...] Let $\mathcal{G}$ be the presheaf defined above. Suppose there is an open $V\subset X$ such that $V\supsetneq U$. By definition of $\mathcal{G}$, we have $\mathcal{G}(V)=0$. Consequently, applying the restriction we find that $\mathcal{G}(V\cap U)=0$, therefore $\mathcal{F}(U)=\mathcal{G}(U)=\mathcal{G}(V\cap U)=0$. Which ultimately means $\mathcal{F}=0$. This is very weird. Am I missing something? AI: The presheaf $\mathcal{G}$ has $$ \mathcal{G}(V)= \begin{cases} 0&V\cap Z\ne \varnothing\\ \mathcal{F}(V)&V\subseteq U. \end{cases} $$ I understand your concern as being that given $V$ strictly containing $U$ (such as $X$), we get that $\mathcal{G}(V)=0$. Then, you wish to apply the restriction morphism $\mathcal{G}(V)\to \mathcal{G}(V\cap U)=\mathcal{G}(U)$ to deduce that $\mathcal{G}(U)=0$, which ought not be true. I think you are assuming that the restriction map need be surjective, which is definitely not the case. Indeed, just look at the sheaf of holomorphic functions on $\mathbb{P}^1$. Indeed, $\mathcal{O}(\Bbb{P}^1)\to \mathcal{O}(U)$ is not surjective in general, as the former contains only constant functions.
H: Does polynomial equality hold for multi-variable polynomials? Let $$f(x,y)=a_0x^n+a_1x^{n-1}y+a_2x^{n-2}y^2+\cdots+a_{n-1}xy^{n-1}+a_ny^n$$ $$g(x,y)=b_0x^n+b_1x^{n-1}y+b_2x^{n-2}y^2+\cdots+b_{n-1}xy^{n-1}+b_ny^n$$ If $$f(x,y)=g(x,y)$$ for all $x$ and $y$, does $$a_i=b_i?$$ And, what if I include terms with power less than $n$? AI: Yes. Set $y=1$; then the problem is reduced to a single-variable polynomial. For your second question, you might need to work with a finite set of $y$'s to get the needed result.
H: Let $a$ & $b$ be non-zero vectors such that $a · b = 0$. Use geometric description of scalar product to show that Let $a$ & $b$ be non-zero vectors such that $a \cdot b = 0$. Use geometric description of scalar product to show that $a$ & $b$ are perpendicular vectors. What I've done so far is to state that $\cos 90$ (i.e. perpendicular) gives $0$ which would make the rest of the equation $|a||b|\cos \theta = 0$. I'm confused as to how set this out as a "formal proof"? Thanks in advance! This is from the Cambridge Specialist 11 textbook. AI: You have all the correct ideas. It's now about putting them together in a coherent way. You are told that $a \cdot b= 0$. You know that $a \cdot b= |a| |b| \cos \theta$. Therefore, $0= |a| |b| \cos \theta$. So one of $|a|, |b|, \cos \theta$ is zero. Use what was given about $a,b$ to explain why $|a|,|b|$ are not zero. Then have you know, $\cos \theta = 0$. Now $\theta$ is the angle between $a,b$. Because $\cos \theta=0$, what are the possibilities for $\theta$? What does this mean about $a,b$?
H: Proof that the field of rational numbers has the Archimedean property I was tasked by the question to prove that the field of rational numbers has the Archimedean property. Proof: Let $x, y \in \mathbb Q$ and $n$ be a positive integer. If $\mathbb Q$ does not have the Archimedean property, then $nx\leq y$. For the case where $nx=y$, then $n+1\gt y$; however, as $n+1$ is a positive integer, then by contrapositive $\mathbb Q$ must have the Archimedean property. Is this valid and how can I phrase this better? AI: Perhaps the following is a bit cleaner: Fix $x,y\in\mathbb{Q}$ with $x>0$. We need to produce a positive integer $n$ with $nx>y$. By definition of $\mathbb{Q}$, there are integers $a,b,c,d$ with $x=\frac{a}{b}$, $y=\frac{c}{d}$ and $b>0$ and $d>0$. As $x>0$, we have that $a>0$. Now, there are two cases: If $y\le 0$, then taking $n=1$ does the trick, since then $y\le 0<x=1x$. If $y>0$, then $c>0$. But then $$ y=\frac{c}{d}\le c \le ac=bcx<bcx+x=(bc+1)x $$ Since $bc+1$ is a positive integer, we are done.
H: Establishing a bijection Let $A,B$ be finite sets. Prove that $|A\cup B|=|A|+|B|-|A\cap B|$ by establishing a bijection from $A\cup B$ to $\{1,2,\ldots,|A|+|B|-|A\cap B|\}$. These are some hints that I got but I'm still confused on how to come up with a bijection. Prove for disjoint finite sets $A,B$ ( i.e. $A\cap B=∅$) that $|A\cup B|=|A|+|B|$ by coming up with a bijection N→A∪B. $A\cup B=(A∖B)\cup(B∖A)\cup(A\cap B)$ is a disjoint (!) decomposition of $A\cup B$. A=(A∖B)∪(A∩B) is a disjoint (!) decomposition of A B=(B∖A)∪(B∩A) is a disjoint (!) decomposition of B AI: For any finite set $F$, there is a bijection between $F$ and the set $I_n \doteq \{1, \cdots, n \}$ for some $n \in \mathbb{N}$. So, given two finite disjoint sets $A, B$, there exist bijections between $A$ and $I_m$ for some $m \in \mathbb{N}$ and between $B$ and $I_n$ for some $n \in \mathbb{N}$. Let's name them $f: I_m \to A$ and $g: I_n \to B$. Then one example of a bijection between $A \cup B$ and $I_k$ (where $k = m +n$) is given by $$I_k \ni x \mapsto \begin{cases} f(x) \text{ if } 1 \leq x \leq m \\ g(x-m) \text{ if } 1 \leq x - m \leq n\end{cases}$$ Writing $A \cup B$ as a disjoint union$$A \cup B = (A \setminus B) \cup B$$ we have a bijection between $A \cup B$ and $\{1, \cdots, |(A \setminus B) \cup B| \}$ by my previous remarks. All that remains to show is that: $$|(A \setminus B) \cup B| = |A| + |B| - |A \cap B|$$ Now, since we can write $A$ as a disjoint union $$A = (A \setminus B) \cup (A \cap B)$$ it's clear that $|A| - |A \cap B| = |A \setminus B|$. Therefore $$|(A \setminus B) \cup B| = |A \setminus B| + |B| =|A| - |A \cap B| +|B| = |A| + |B| - |A \cap B| $$ as desired.
H: Is there a notion of maps that can "expand" spaces "linearly"? For linear transformations, the dimension of the image is at most the dimension of the domain. More generally, given vectors $v_1, ..., v_n$ in the domain, the vectors $Tv_1, ..., Tv_n$ span the image. So, intuitively, we can only either preserve the dimension of our input space or "collapse" it into a space of smaller dimension. What I am asking is whether there is a notion of taking a smaller dimensional space and "expanding" it or "extending" it "linearly", e.g. mapping a line to a plane. Is this useful or practical? It seems quite natural. AI: If you are looking for a surjective linear map $A:V\to W$, where $\dim V<\dim W$ so that $A$ is a "linear extension" of $V$, then this is impossible by the rank-nullity theorem: if $A$ is surjective, then $\dim\operatorname{im}A=\dim W$ and $\dim\ker A=0$, so $\dim W=\dim V$, and in particular we cannot have $\dim V<\dim W$. More exotic surjective and even continuous maps from spaces of lower dimension to spaces of higher-dimension do exist (see space-filling curves).
H: How many length-$k$ ternary strings have evenly many of a given symbol? I write down a string of $k$ letters, where each letter is $X, Y, \text{or } Z.$ The letter $X$ appears an even number of times. How many different sequences of letters could I have written down? I think I need to start by setting up some cases, and building a recursion from that. I tried but arrived at a really weird form. May I have a start, please? AI: Let $A_k$ denote the number of sequences of length $k$ with an even number of $X$'s. For each of the $3^{k-1}$ sequences of length $k-1$, exactly one of the two choices, appending an $X$ (when the sequence contains an odd number of $X$'s) or appending a $Y$ (when it contains an even number), produces a length-$k$ sequence with an even number of $X$'s, so there are exactly $3^{k-1}$ valid sequences ending in $X$ or $Y$. Additionally, for each of the $A_{k-1}$ sequences of length $k-1$ with an even number of $X$'s, you may add a $Z$ to the end, to obtain a sequence of length $k$ with an even number of $X$'s. Thus $A_{k}=3^{k-1}+A_{k-1}$. As $A_0=1$ we have$$A_k=1+1+3+9+\cdots+3^{k-1}=\frac{3^k+1}2.$$
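If you want to sanity-check the closed form, a brute-force count is easy; the following is a minimal sketch in Python (standard library only, and the helper name is just an illustrative choice):

```python
from itertools import product

def count_even_x(k):
    # Count length-k strings over {X, Y, Z} with an even number of X's by brute force.
    return sum(1 for s in product("XYZ", repeat=k) if s.count("X") % 2 == 0)

for k in range(9):
    assert count_even_x(k) == (3**k + 1) // 2
print("formula (3^k + 1)/2 verified for k = 0..8")
```

It agrees with $\frac{3^k+1}{2}$ for every small $k$ tested.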
H: Equivalence relation proof, problem $x∼y \iff x^2 − 2x + 1 = y^2 + 4y + 4$ Determine if the following relationships are equivalent. If they are, determine the equivalence classes, give a set of indices and the quotient set. Sketch out the graph of each relationship (whether or not it is an equivalence one): In $\mathbb{R}$, we define the relationship “∼” as $x ∼ y$ ⇐⇒ $x^2$ − 2x + 1 = $y^2$ + 4y + 4$. I have a question about transitivity, how can i prove it? i need to show that x∼z I tried to do this: Suppose $x∼y$ and $y∼z$ thus: $x^2$ − 2x + 1 = $y^2$ + 4y + 4 and $y^2$ − 2y + 1 = $z^2$ + 4z + 4. so we equal to 0 $x^2$ − 2x + 1- ($y^2$ + 4y + 4)=0 and $y^2$ − 2y + 1-($z^2$ + 4z + 4)=0 then we equal the equations: $x^2$ − 2x + 1- ($y^2$ + 4y + 4)=$y^2$ − 2y + 1-($z^2$ + 4z + 4) we resolve and we have : (( $x^2$ − 2x + 1)+1)+(-$y^2$ + y)+7.5=((-1)($z^2$ + 4z + 4))+(-$y^2$ -y)-7.5 So i think this relation is transitive, but i dont know if i`m good, and i dont have any idea how to make the equivalence classes or set of indices AI: This relation is not transitive. A single counterexample suffices to prove that. $x\sim y\iff (x-1)^2=(y+2)^2$. Can you show $5\sim2$ and $2\sim -3$ but not $5\sim-3$?
H: Functional equation involving three different functions: $ f ( x + y ) = g ( x ) + h ( y ) $ If $ f , g , h : \mathbb R \to \mathbb R $ all are continuous functions such that $$ f ( x + y ) = g ( x ) + h ( y ) \text , \quad \forall x , y \in \mathbb R \text , $$ find $ f $, $ g $ and $ h $. I have literally no idea where to begin. I mean, what can I even say or claim and prove? How should I go about this? Also, a quick doubt I had was if a function is continuous over $\mathbb{R}$, can we safely say that it must be a linear function? If not, Why? AI: Note that $$g(x) + h(y) = f(x + y) = f(y + x) = g(y) + h(x) \implies h(x) - g(x) = h(y) - g(y)$$ for all $x, y \in \Bbb{R}$. That is, $h - g$ is a constant function, i.e. there exists some $k \in \Bbb{R}$ such hat $h(x) = g(x) + k$ for all $x \in \Bbb{R}$. This gives us the equivalent functional equation $$f(x + y) = g(x) + g(y) + k.$$ Note that, when $y = 0$, we simply see that $$f(x) = g(x) + g(0) + k,$$ hence $$g(x + y) + g(0) + k = g(x) + g(y) + k \implies g(x + y) + g(0) = g(x) + g(y).$$ Let $L(x) = g(x) - g(0)$. Then, the above equation simplifies to $$L(x + y) = L(x) + L(y),$$ which is Cauchy's functional equation. Since $g$ is continuous, so is $L$, and hence $L$ is linear. On $\Bbb{R}$, this means $L(x) = ax$ for some $a \in \Bbb{R}$. So, rebuilding, we have \begin{align*} g(x) &= ax + c \\ h(x) &= ax + c + k \\ f(x) &= ax + 2c + k, \end{align*} where $a, c, k \in \Bbb{R}$ are parameters. Checking this family of possible solutions, we get $$f(x + y) = a(x + y) + 2c + k = ax + c + ay + c + k = g(x) + h(y),$$ verifying that all functions of the above form are indeed solutions, yielding a complete family of solutions.
H: Proving $\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y)$ using differentiation While proving $\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y)$ by this $$\operatorname{sin}(x+y)=\operatorname{sin}(x)\operatorname{cos}(y)+\operatorname{cos}(x)\operatorname{sin}(y) \\ \text{differentiating both sides w.r.t } x \\ \operatorname{cos}(x+y) \left(1+\frac{dy}{dx}\right)=(\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y))\left(1+\frac{dy}{dx}\right)\\ \text{for $\frac{dy}{dx} \neq -1$}\\\operatorname{cos}(x+y)=\operatorname{cos}(x)\operatorname{cos}(y)-\operatorname{sin}(x)\operatorname{sin}(y) $$ Now I am confused what happens when$\frac{dy}{dx} = -1$ AI: If all you want to do is derive $$\cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y)$$ from $$\sin(x+y)=\sin(x)\cos(y)+\cos(x)\sin(y)$$ then you actually don't really care about the case where $\frac{dy}{dx}=-1$ - the implication only uses any single evaluation of this general derivative. You could get the same result by differentiating treating $x$ as constant or $y$ as constant or taking $y=x+c$. The only case that wouldn't work is if you tried to derive this taking $x+y$ to be constant. A bit more justified would be to take a total derivative of this expression; you would get: $$d\sin(x+y) = \cos(x+y)\,dx + \cos(x+y)\,dy$$ and $$d(\sin(x)\cos(y)+\cos(x)\sin(y))=\cos(x)\cos(y)\,dx-\sin(x)\sin(y)\,dy-\sin(x)\sin(y)\,dx+\cos(x)\cos(y)\,dy$$ which tells you how the expression must change no matter which direction you move in - geometrically, this tells you about the tangent plane to the bivariate function $\sin(x+y)$. Since these tangent planes are equal, you can actually just set the coefficient of $dx$ (or, equally well, $dy$) on either to be equal to get: $$\cos(x+y)=\cos(x)\cos(y)-\sin(x)\sin(y).$$ In more elementary terms, this is just the derivative of the given expression with respect to $x$ taking $y$ to be constant - but it's a bit more insightful to see that this is just half of an expression where the change in the given terms is written as a weighted sum of the change in $x$ and $y$.
H: Find the largest integer $n$ such that $n$ is divisible by all positive integers less than $\sqrt[3]{n}$ Find the largest integer $n$ such that $n$ is divisible by all positive integers less than $\sqrt[3]{n}$. 420 satisfies the condition since $7<$ $\sqrt[3]{420}<8$ and $420=\operatorname{lcm}\{1,2,3,4,5,6,7\}$ Suppose $n>420$ is an integer such that every positive integer less than $\sqrt[3]{n}$ divides $n .$ Then $\sqrt[3]{n}>7,$ so $420=\operatorname{lcm}(1,2,3,4,5,6,7)$ divides $n$ thus $n \geq 840$ and $\sqrt[3]{n}>9 .$ Thus $2520=\operatorname{lcm}(1,2, \ldots, 9)$ divides $n$ and $\sqrt[3]{n}>13$ now this pattern looks continues,but i am not able to prove that this pattern always continues.. AI: You have the basic right idea of showing the $\operatorname{lcm}$ values increase faster than the cube of the highest value used in the $\operatorname{lcm}$, with the following being one way to finish the solution. Define for any positive integer $m$, $$f(m) = \operatorname{lcm}(1,2,\ldots,m) \tag{1}\label{eq1A}$$ For some prime $m \gt 8$, consider if you have $$f(m) \gt 8m^3 \tag{2}\label{eq2A}$$ For any integer $n \ge m^3$, since $\sqrt[3]{n} \ge m$, you would need to include $m$ in the $\operatorname{lcm}$. However, \eqref{eq2A} shows you actually need $n \gt 8m^3$, so $\sqrt[3]{n} \gt 2m$. The less restrictive formulation of Betrand's postulate states there's always at least one prime $p$ where $m \lt p \lt 2m$, so since $p \gt 8$ and this prime $p$ must be multiplied into the $\operatorname{lcm}$ value, you have $$f(2m) \gt p(8m^3) \gt 8(8m^3) = 64m^3 \tag{3}\label{eq3A}$$ Thus, you actually have $n \gt 64m^3$, giving $\sqrt[3]{n} \gt 4m$. You can use the procedure above repeatedly, in an inductive type fashion, to get $n \gt (8^{k})m^{3}$ for $k = 1, 2, 3, \ldots$, showing there is no larger $n$ which works. As for the base case, note that $$f(11) = 27\text{,}720 \gt 10\text{,}648 = 8(11^3) \tag{4}\label{eq4A}$$ Since it seems you've checked the other cases for $m \lt 11$, this shows the largest $n$ which works is what you've already found, i.e., $420$.
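A finite brute-force search is consistent with this conclusion; it only checks $n$ up to a fixed bound, so it complements rather than replaces the argument above. A sketch in Python (the bound is an arbitrary choice):

```python
def ok(n):
    # True if every positive integer k with k^3 < n (i.e. k < n^(1/3)) divides n.
    k = 1
    while k**3 < n:
        if n % k != 0:
            return False
        k += 1
    return True

hits = [n for n in range(1, 100_000) if ok(n)]
print(max(hits))  # 420
```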
H: Need to simplify $(A ∩ \varnothing)' \cap (A \cup B)'$ I have the following expression I need to simplify: $$(A \cap \varnothing)' \cap (A \cup B)'$$ So, far my solution is to use DeMorgan's Law to simplify it as follows: $$(A' \cup \varnothing') \cap (A' \cap B')$$ But I'm not sure where to go from here. I was perhaps thinking of using Communicative Law to swap the $A'$ and $\varnothing'$ in the middle: $$A' \cup \varnothing' \cap A \cap B'$$ So, it becomes: $$(A' \cup A') \cap (\varnothing' \cap B)'$$ And then go from there. But I'm not sure if that's allowed. Is there another way to simplify this? AI: Better: $A \cap \emptyset = \emptyset.$ Let $X$ be the universal set. Then $$ X \cap (A \cup B)' = X \cap A' \cap B'$$ Now as A and B are in the universal set, so are their complements. This best simplifies as $A' \cap B'$.
H: If $R \cup S$ is an equivalence relation on $A$, then $R$ and $S$ are equivalence relations on $A$ Let $A$ be a non-empty set, and let $R$ and $S$ be relations on A. If $R \cup S$ is an equivalence relation on $A$, then $R$ and $S$ are equivalence relations on $A$. How can i prove it? I think it is not necessarily true but i dont know how to start. AI: This is not true. Here's an explicit example. Let $A = \{0, 1\}$, $R = \{(0, 0)\}$, $S = \{(1, 1)\}$. Then $R \cup S = \{(0, 0), (1, 1)\}$, which is an equivalence relation (just equality). Neither $R$ nor $S$ are equivalence relations as neither are reflexive.
H: do all matrices with $\det(A)=\pm 1$ form a group under multiplication? all matrices with determinant one form the special linear group. it is explained that because $\det(A) \det(B)=\det(AB)$ it is closed as $1*1=1$ and because the general linear group is a group, and special linear group is a part of the general one, and because all of the inverses must have determinant 1 and also be in the special linear group, the inversion axiom holds. doesn't this also hold for the group of matrices defined by $\det(A)= \pm 1$? $$(1)(1)=1,(-1)1=-1,1(-1)=-1,(-1)(-1)=1$$ so it is closed.can a similar argument to the special linear group prove this is a group? can anyone provide a counter example or prove this? AI: Yeah, it is a group for the same reasons: If $\det(A),\det(B)\in \{\pm 1\}$, then $\det(AB)=\det(A)\det(B)\in \{\pm 1\}$. If $\det(A)\in \{\pm 1\}$, then $\det(A^{-1})=\det(A)^{-1}\in \{\pm 1\}$. $\det(I_n)=1\in \{\pm 1\}$. This shows that $\{A\in GL_n(\mathbb{C}):\det(A)\in \{\pm 1\}\}$ is a subgroup of $GL_n(\mathbb{C})$.
H: About minimum and maximum inequality. Let $a_i , b_i \geqslant 0$ $\forall i \in \{1 , 2 , 3 , .... , n\}$ Prove that $$min\{\frac{a_i}{b_i}\} \leqslant \frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i} \leqslant max\{\frac{a_i}{b_i}\}$$ Given that $min\{...\}$ and $max\{...\}$ equals to the minimum and the maximum element in the set. I don’t know how to deal with minimum and maximum. I would like some hints (or solution) please. Thank you. AI: Here is yet another proof, which is really elementary. We begin by observing that \begin{align} \sum_{i=1}^n a_i = \sum_{i=1}^n \frac{a_i}{b_i}b_i \end{align} From this identity we have \begin{align} \sum_{i=1}^n a_i = \sum_{i=1}^n \frac{a_i}{b_i}b_i \geq \min\left\{\frac{a_i}{b_i}\right\}\sum_{i=1}^n b_i \end{align} and \begin{align} \sum_{i=1}^n a_i = \sum_{i=1}^n \frac{a_i}{b_i}b_i \leq \max\left\{\frac{a_i}{b_i}\right\}\sum_{i=1}^n b_i \end{align} Putting these two together we obtain \begin{align} \min\left\{\frac{a_i}{b_i}\right\}\leq \frac{\sum_{i=1}^n a_i}{\sum_{i=1}^n b_i} \leq \max\left\{\frac{a_i}{b_i}\right\} \end{align}
H: Can we deduce that $\mu(X)\leq \mu(Y)$? Given a measurable function $f: X \to Y$, if $f$ is injective and measure-preserving, meaning that $\mu(f(A))=\mu(A)$ for all subsets $A$, and $\mu$ is a probability measure, can we deduce that $\mu(X)\leq \mu(Y)$? Do we have $\mu(f(X))=\mu(X)\leq \mu(Y)$ since $f(X)\subset Y$? AI: The question is now very different from the one for which I posted this answer. $\mu(X)=\mu (f^{-1}(Y))=\mu (Y)$, so you have equality.
H: Example of the non-commutative ring with the set of units are commutative I was looking for an Example of the non-commutative ring with the set of units are commutative. it will be a great help. Thanks in advance. AI: Take the ring of noncommutative polynomials $R\langle X_1, X_2\rangle$ over any commutative ring $R$. The units will be the same as in R. NB: this is a special case of Mike Debellevue's answer, with $M$ being the free monoid on two generators.
H: Why is the expected value, $E(X)$, after $5$ games $0$? In a game the probability that you win is $1/2$. If you win the game, you get $1\$$ and if you lose the game you lose $1\$$. Let $X$ denote the total amount of money after $5$ games. The expected value $E(X)=0$. Why is that? AI: The chance you win is $50$ percent and the chance you lose is $50$ percent, and each outcome results in a gain or a loss of one dollar respectively. This implies that the expected value of the amount of money you will make after each game, $E(X_1)$, is $0$. The way this is calculated is you take the probability you win and multiply it by the gain you receive and add that to the product of the probability you lose and the money you give up when you lose. Mathematically this means: $E(X_1)= \frac{1}{2}(1)+\frac{1}{2}(-1) = 0$ Now as mentioned in the comment by Aravind, expected value is linear. This means that the expected value of the money you make from one game can be multiplied by 5 to get the expected value of the money you make from five games. Thus: $E(X)= E(X_1)*5 = 0*5 = 0$ To clarify the issue regarding your comment about the arithmetic mean argument: the expected value of playing three games, for example, is the arithmetic mean of every possible outcome of playing three games (each weighted equally in this case because the probability that each outcome happens is the same). So it is the mean of this quantity:
Situation 1: w, w, w = 1+1+1 = 3
Situation 2: w, w, l = 1+1-1 = 1
Situation 3: w, l, w = 1-1+1 = 1
Situation 4: w, l, l = 1-1-1 = -1
Situation 5: l, w, w = -1+1+1 = 1
Situation 6: l, w, l = -1+1-1 = -1
Situation 7: l, l, w = -1-1+1 = -1
Situation 8: l, l, l = -1-1-1 = -3
Note: w = win, l = loss, and a sequence w, w, l means win game 1, win game 2, lose game 3. The arithmetic mean of this is: $\frac{3+1+1-1+1-1-1-3}{8} = \frac{0}{8} = 0$
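You can also let a computer enumerate the equally likely outcomes; here is a small Python sketch for the five-game case (standard library only):

```python
from itertools import product

# All 2^5 win/loss sequences, each equally likely; +1 for a win, -1 for a loss.
totals = [sum(seq) for seq in product([1, -1], repeat=5)]
print(sum(totals) / len(totals))  # 0.0
```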
H: $f''(x) = g(x)$ and $g''(x) = f(x).$ Suppose also that $f(x)g(x)$ is linear in $x$ on $(a,b).$ Show that $f(x) = g(x) = 0$ for all $x ∈ (a,b).$ QUESTION: Let $f$ and $g$ be two non-decreasing twice differentiable functions defined on an interval $(a,b)$ such that for each $x ∈ (a,b), f''(x) = g(x)$ and $g''(x) = f(x).$ Suppose also that $f(x)g(x)$ is linear in $x$ on $(a,b).$ Show that we must have $f(x) = g(x) = 0$ for all $x ∈ (a,b).$ MY ANSWER: I have done the proof, but it's not a rigorous one.. this is what I did- Let $f(x)=x^k$, $(k>0)$ which is increasing on $(a,b)$. Now, $$f'(x)=kx^{k-1}$$$$f''(x)=k(k-1)x^{k-2}$$ According to the question, $g(x)=k(k-1)x^{k-2}$, therefore, $$g'(x)=k(k-1)(k-2)x^{k-3}$$$$g''(x)=k(k-1)(k-2)(k-3)x^{k-4}$$ Now according to the question again, $f(x)=g''(x)$, but $f(x)=x^k$, therefore our statement implies that, $$x^k=k(k-1)(k-2)(k-3)x^{k-4}$$ Also, it is said that $f(x)g(x)$ must be linear in $x$. Therefore, we observe that, $$k(k-1)(k-2)(k-3)x^kx^{k-4}$$ must be linear in $x$. Which clearly states that, $$k+(k-4)=1$$$$\therefore 2k-4=1$$$$\implies k=\frac{5}2$$ Putting $k=\frac{5}2$ in the previous equation, we get, $$x^4=\frac{5}2(\frac{5}2-1)(\frac{5}2-2)(\frac{5}2-3)$$$$\implies x^4=-\frac{15}{16}$$ which is clearly impossible for any $x$ in $\mathbb{R}$. Therefore, we may conclude that $$k\neq\frac{5}2$$ and the only way to satisfy both the above statements is to make $x=0$. Therefore, we can conclude that $f(x)=0$ and consequently $g(x)=0$ Note 1: we observe, if $k<4$ then the value of the derivatives becomes zero somewhere in between and our proof works. Note 2: if we had assumed the function more generally as $f(x)=x^k+c$ then too, it would have worked, only $c$ would have become zero at last (it is forced to become).. Now, there are a hell lot of non-decreasing functions out there (even trigonometric functions if defined in suitable intervals) and obviously this proof is not rigorous. Without assuming any function, how do I proceed to do this? Any help will be much appreciated. Thank you. AI: Since $f(x)g(x)$ is linear in $x$, its second derivative is $0$. So $f''(x)g(x)+2f'(x)g'(x)+g''(x)f(x) =0$, that is, $g^{2}(x)+f^2(x) =-2f'(x)g'(x)$. As $f,g$ are non-decreasing and differentiable, $f'(x)$ and $g'(x)$ cannot be negative, so the right-hand side is $\le 0$ while the left-hand side is $\ge 0$. Hence both sides vanish; in particular $f^2(x)+g^2(x)=0$, which forces $f(x)=g(x)=0$ for every $x ∈ (a,b)$.
H: Is the sets in density topology Euclidean $G_\delta$? It has been shown that every Borel subset of density topology X is d-$G_\delta$. I'm curious about its connection to the euclidean topology. For example, is the close/open set in the density topology a $G_\delta$ set in Euclidean topology? It seems false to me; for example, pick a non-Borel set S of Lebesgue measure 0, then it's closed in the density topology but definitely not a $G_\delta$ set in Euclidean topology. But I would like to know more about the connections between sets in density topology and its euclidean counterpart. Is there any related theorem on this subject? AI: The last chapter of Oxtoby's Measure and Category discusses category measure spaces and shortly discusses the density topology and proves the latter is regular. It is not normal (a reference for this is given). So earlier results in that chapter then imply that a countable union of nowhere dense sets (in the density topology) is still nowhere dense (and in fact being nowhere dense in this topology is equivalen to having Lebesgue measure $0$). So a nowhere dense Cantor set (a set homeomorphic to $2^{\Bbb N}$) with positive measure (which exist in $\Bbb R$) is Euclidean closed and has empty interior in the usual topology but in the density topology has non-empty interior. It's well known that the Euclidean topology is coarser (smaller) than the density topology, so this shows it's quite a bit smaller. The density topology is a pretty unusual topology, very non-metrisable. These papers might help you to gain more (set-theoretic) topological understanding of it. This entry and its references are more about the analysis side of things.
H: Residue at essential singularity of $\frac{\mathrm{e}^{\frac{1}{z}}}{z^2-2z+2}$ Find the residue at $z=0$ of the function $$f(z)=\frac{\mathrm{e}^{\frac{1}{z}}}{z^2-2z+2}.$$ This is a very small part of a previous exam question of complex analysis. One way of doing it is decomposing $\frac{1}{z^2-2z+2}$ into partial fractions, finding the corresponding Laurent series around $z=0$, then multiplying it with the Laurent series around $z=0$ of $\mathrm{e}^{\frac{1}{z}}$. I am wondering if there is an easier method. I would not start above method in exam conditions because of lack of time. AI: You can apply the residue theorem in the form $\sum\limits_{\omega\in\Omega}\operatorname*{Res}\limits_{z=\omega}f(z)=0$, where $\Omega$ is the set of singularities of $f$ (assuming it is finite) together with the point $z=\infty$, and $\operatorname*{Res}\limits_{z=\infty}f(z)$ is treated specially. In our case, $$\bbox[5pt,border:2pt solid]{\operatorname*{Res}_{z=0}f(z)+\operatorname*{Res}_{z=1+\mathrm{i}}f(z)+\operatorname*{Res}_{z=1-\mathrm{i}}f(z)=0,}$$ with $\operatorname*{Res}\limits_{z=1\pm\mathrm{i}}f(z)$ easier to compute. Another way to obtain the same is to see that the residue at $z=0$ is $$\frac{1}{2\pi\mathrm{i}}\oint_{|z|=r}f(z)\,dz=\frac{1}{2\pi\mathrm{i}}\oint_{|w|=1/r}\frac{f(1/w)}{w^2}\,dw$$ if $r>0$ is small enough; the second integral (obtained using $z=1/w$) is the sum of residues of its integrand at $w=1/(1\pm\mathrm{i})$; substituting $w=1/z$ back, we get the equality above.
H: Proof of closedness of discrete subgroup of a topological group Here is a lemma of John Ratcliffe's book: Lemma: If $G$ is a topological group with a metric topology, then every discrete subgroup of $G$ is closed in $G$. Proof. Let $\Gamma$ be a discrete subgroup of $G$ and suppose that $G − \Gamma$ is not open. Then there is a $g$ in $G − \Gamma$ and $g_n$ in $B(g, 1/n) \cap \Gamma$ for each integer $n > 0$. As $g_n \to g$ in $G$, we have that $g_ng_{n+1}^{-1} \to 1$ in $\Gamma$. But $\{g_ng_{n+1}^{-1}\}$ is not eventually constant, which contradicts Lemma 2. Therefore, the set $G − \Gamma$ must be open, and so $\Gamma$ is closed in $G$. Q1: Why is it clear that $\{g_ng_{n+1}^{-1}\}$ is not eventually constant? Q2: Is metric topology important here? (I think being Hausdorff suffices.) AI: If ${g_n g_{n+1}^{-1}}$ were eventually constant (let's say $g_n g_{n+1}^{-1} = g_m g_{m+1}^{-1}$ for all $n,m \geq N$, where $N$ is some fixed natural number), then the limit of the sequence (which is unique because metric topologies are Hausdorff) would be $g_N g_{N+1}^{-1}$. At the same time, the limit of the sequence is $1$, so $g_n g_{n+1}^{-1} = 1$ for all $n \geq N$. This means that $g_n = g_{n+1}$ for all $n \geq N$, and thus $g_n \to g_N$. Since $g_n \to g$, we have $g = g_N$. This is a contradiction because $g \in G - \Gamma$ but $g_N \in \Gamma$. Edit: to answer your second question, the metric is actually used in this argument to get the sequence $g_n$. But you're right that Hausdorff is all that you really need: suppose $G$ is Hausdorff and $\Gamma$ is a non-closed discrete subgroup. Then there is some limit point $g$ of $\Gamma$ such that $g \notin \Gamma$. Take a sequence $g_n \to g$ with each $g_n \in \Gamma$ and use the same argument as in this proof. It's perhaps a little harder to see that $\{g_n g_{n+1}^{-1}\}$ must be convergent, but it's still doable: Let $U$ be an open set containing $1$. Let $V = \{(x,y) \in G \times G : xy^{-1} \in U\}$. $V$ is also open because $G$ is a topological group. The sequence $\{(g_n, g_{n+1})\}$ converges to $(g,g)$, and $(g,g) \in V$ because $gg^{-1} = 1 \in U$, so there is some $N \in \mathbb{N}$ such that $(g_n, g_{n+1}) \in V$ for all $n \geq N$. This means that $g_n g_{n+1}^{-1} \in U$ for all $n \geq N$, and since $U$ was an arbitrary open neighborhood of $1$, we have that $g_n g_{n+1}^{-1} \to 1$. Edit 2: The first edit is wrong – the result is still true in general when $G$ is Hausdorff, but my proof assumes that "$\Gamma$ is not closed" implies "there exists a sequence in $\Gamma$ which converges to a point in $G - \Gamma$", which is not true in all topological spaces! We need to use a different argument; thanks to @HennoBrandsma for linking to this relatively simple approach.
H: ${\displaystyle \int_{0}^{7}f^{(7)}(x)(x-1)^6dx}$ where $f(x)=e^{-\frac{1}{x^2}}\sin(x)$ for $x>0$ and $f(0)=0$. How to find the below integral: $$\int_{0}^{1}f^{(7)}(x)(x-1)^6dx$$ where $f(x)=e^{-\frac{1}{x^2}}\sin(x)$ for $x>0$ and $f(0)=0$. My attempt: I tried to find the derivative $f^{(7)}(x)$, but at $f^{(3)}(x)$ itself the derivative becomes very complex and I think this will take too long to find the above integral. $$f^{(3)}(x)=-\frac{e^{-\frac{1}{x^2}}((6x^6-24x^4+36x^2-8)\sin(x)+(x^9+18x^5-12x^3)\cos(x))}{x^9}$$ Please help me in solving this question. Thanks in advance. AI: Integration by parts is definitely the way to go. We have an integral of the form $$\int_0^7 f^{(7)}(x)g(x) \; \mathrm{d}x,$$ where $g(x)=(x-1)^6$ is smooth and $g^{(7)}$ is constantly $0$. So, \begin{align*} \int_0^7 f^{(7)}(x) g(x) \; \mathrm{d}x &= [f^{(6)}(x)g(x)]_0^7 - \int_0^7 f^{(6)}(x) g'(x) \; \mathrm{d}x \\ &= [f^{(6)}(x)g(x)]_0^7 - [f^{(5)}(x)g'(x)]_0^7 + \int_0^7 f^{(5)}(x) g''(x) \; \mathrm{d}x \\ &= [f^{(6)}(x)g(x)]_0^7 - [f^{(5)}(x)g'(x)]_0^7 + [f^{(4)}(x)g''(x)]_0^7 - \int_0^7 f^{(4)}(x) g^{(3)}(x) \; \mathrm{d}x \\ &\vdots \\ &= [f^{(6)}(x)g(x)]_0^7 - [f^{(5)}(x)g'(x)]_0^7 + [f^{(4)}(x)g''(x)]_0^7 - [f^{(3)}(x)g^{(3)}(x)]_0^7 \\ &+ [f''(x)g^{(4)}(x)]_0^7 - [f'(x)g^{(5)}(x)]_0^7 + [f(x)g^{(6)}(x)]_0^7 - \underbrace{\int_0^7 f(x)g^{(7)}(x) \; \mathrm{d}x}_{= \;0}. \end{align*} The rest is tedious substitution.
H: PROPOSED PROOF for: If $p$ is a prime number such that $p|ab$, then $p|a$ or $p|b$. Since $p|ab$, then $\exists \alpha \in\ \mathbb{Z}$ such that: $$\alpha p=ab$$ By dividing both sides by $a$, we obtain: $$(\alpha \div b)p=a \Rightarrow\ p|a \ \ ( \text{Similarly to prove that} \ \ p|b)$$ Is this proof comprehensive? Does it fulfill what is required? AI: It's not complete because it's possible that $b$ might not divide $\alpha$. For example, consider $$p | \alpha p$$ with $p \nmid \alpha$. Then indeed $\alpha p = \alpha p$ but $\alpha \div p$ is not an integer number.
H: How to prove: $\:a\: if $0<\lambda <1$ and $\:a\:<\:x_1<b,a<x_2<b$ and $ \:x=\lambda x_1+\left(1-\lambda \right)x_2$ how to prove : $\:a\:<x<b$ AI: Notice that $$ x < \lambda b + (1-\lambda) b = b $$ and $$ x > a(1-\lambda) + a \lambda = a $$
H: For periodic $f,g$ (and continuous), does $\lim (f-g) = 0 $ imply $f=g$? Take $f, g: \mathbb{R} \to \mathbb{R} $ continuous and periodic and assume $\lim_{x \to \infty} (f(x)-g(x)) = 0$. Does it follow $f=g$? Prove or give counter example. I cant find any counterexample, so maybe we should prove it: we given that for any $\epsilon > 0$ we can find some $\delta > 0$ such that $x> \delta$ imply $|f(x)-g(x)| < \epsilon $ Let $H(x) = f(x) - g(x)$. and so $H$ better also be periodic and continous. Now, certainly for any $\delta > 0$ we have $|H|< \epsilon $ which implies $H=0$ or that $f(x) =g(x)$. But, we still need to prove for $x \leq \delta$ Now, this seems intuitively obvious as because periodicity and continuity: if after some $\delta > 0$, we have that $f(x) = g(x)$ then we can find some interval after $\delta> 0$, say $[a,b]$ for which $H(x)$ is identical in $[0,\delta]$ and thus $H(x) = 0$ for every $x \in \mathbb{R}$. Now, this argument seems fishy. Am I on the right track or I am completely off tracks? AI: If I correctly understand your reasoning, you're not far off. However I would put the proof the following way. If $f$ and $g$ are periodic with the same period $T$ (as you seem to assume in your reasoning), then $H$ is also periodic with period $T$. For any $x\in\mathbb{R}$ we have $\lim_{n\rightarrow\infty} (x+nT) = \infty$. Since $\lim_{x\rightarrow\infty} H(x) = 0$, that means that $$\lim_{n\rightarrow\infty} H(x+nT) = 0$$ However, because $H$ is periodic, it means that $H(x+nT)=H(x)$ and $$\lim_{n\rightarrow\infty} H(x) = 0$$ that is $$H(x) = 0$$ . If periods of $T_1$ and $T_2$ of functions $f$ and $g$ are different, but $T_1/T_2 \in\mathbb{Q}$, then you can find $T$ such that $T=nT_1=mT_2$ with $n,m\in\mathbb{N}$. $H$ will be periodic with period $T$, and the proof continues as in the previous case. If $T_1/T_2\notin\mathbb{Q}$, then there exist two sequences sequences $n_k,m_k\rightarrow\infty$, such that $$\lim_{k\rightarrow\infty} \left(n_kT_1/T_2 - m_k\right) = 0 $$ that is$$\lim_{k\rightarrow\infty} \left(n_kT_1 - m_k T_2\right) = 0 $$ Since $g$ is continuous, we have, for every $x\in\mathbb{R}$: $$ \lim_{k\rightarrow\infty} g\left(x+n_kT_1 - m_kT_2 \right)= g(x)$$ Then we have $$0 =\lim_{k\rightarrow\infty} H(x+n_kT_1) = \\ = \lim_{k\rightarrow\infty} \left(f(x+n_kT_1) - g(x+n_kT_1)\right) = \\ = \lim_{k\rightarrow\infty} \left(f(x) - g(x+n_kT_1-m_kT_2) \right) = \\ = f(x)-g(x)$$ .
H: If $a_1a_2 = 1, a_2a_3 = 2, a_3a_4 = 3 \cdots$ and $ \lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$, find $|a_1|$ If $a_1a_2 = 1, a_2a_3 = 2, a_3a_4 = 3 \cdots$ and $\displaystyle \lim_{n \to \infty} \frac{a_n}{a_{n+1}} = 1$. Find $|a_1|$ I could conclude that $\displaystyle \lim_{n \to \infty}a_n$ must be $\infty$. But couldn't get any further. Any help is welcome. AI: Note that $$ \frac{a_3}{a_1} = \frac{a_2a_3}{a_1a_2} = \frac21,\quad \frac{a_5}{a_3} = \frac{a_4a_5}{a_3a_4} = \frac43,\quad \frac{a_7}{a_5} = \frac{a_6a_7}{a_5a_6} = \frac65,\quad\dots $$ and therefore $$ \frac{a_{2k+1}}{a_1} = \frac{a_3}{a_1} \frac{a_5}{a_3}\cdots \frac{a_{2k+1}}{a_{2k-1}} = \frac21 \frac43 \cdots \frac{2k}{2k-1} = \frac{4^k(k!)^2}{(2k)!}. $$ Similarly, $$ \frac{a_{2k+2}}{a_2} = \frac{a_4}{a_2} \frac{a_6}{a_4} \cdots \frac{a_{2k+2}}{a_{2k}} = \frac32 \frac54 \cdots \frac{2k+1}{2k} = \frac{(2k+1)!}{4^k(k!)^2}. $$ Therefore $$ 1 = \lim_{k\to\infty} \frac{a_{2k+1}}{a_{2k+2}} = \lim_{k\to\infty} \frac{4^k(k!)^2 a_1/(2k)!}{(2k+1)!a_2/4^k(k!)^2} = \lim_{k\to\infty} \frac{a_1}{a_2} \frac{16^k(k!)^4}{(2k)!(2k+1)!} = \frac{a_1}{a_2} \frac\pi2 $$ (using Stirling's formula), which forces $a_1/a_2 = 2/\pi$; together with $a_1a_2=1$ this yields $a_1=\pm\sqrt{2/\pi}$. (One can reality check $\lim_{k\to\infty} \frac{a_{2k}}{a_{2k+1}}$ as well if desired.)
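A numerical experiment is consistent with this value of $a_1$. The sketch below (plain Python; the function name and step count are illustrative choices) iterates $a_{n+1}=n/a_n$ and prints the last computed ratio $a_n/a_{n+1}$:

```python
from math import pi, sqrt

def last_ratio(a1, steps=10**6):
    # Iterate a_{n+1} = n / a_n and return a_n / a_{n+1} at the final step.
    a = a1
    for n in range(1, steps + 1):
        a_next = n / a
        r = a / a_next
        a = a_next
    return r

print(last_ratio(sqrt(2 / pi)))  # close to 1
print(last_ratio(1.0))           # stays away from 1, so a_1 = 1 is inconsistent
```

Starting from any other value of $a_1$, the even- and odd-indexed ratios settle on two different limits, which matches the computation above.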
H: Examples where $1+I$ is inverted but $I$ is not mapped into the Jacobson radical Let $f:A\to B$ be a commutative ring morphism. Let $I\vartriangleleft A$ be an ideal. If $f(I)\subset \mathrm J(B)$ is contained in the Jacobson radical then $f(1+i)=1+f(i)\in B^\times$ is a unit so $1+I$ is inverted. On the other hand, if $1+f(i)\in B^\times$ it does not seem to follow that $f(i)\in \mathrm J(B)$ since perhaps $1+bf(i)\notin B^\times$ for some $b\in B$. Question. What are some examples where this implication fails i.e $f(1+I)\subset B^\times$ but $f(I)\nsubseteq\mathrm J(B)$? AI: There are some very basic examples. Let $f:\Bbb Z\to\Bbb Q$ be the inclusion map, and let $I$ be any non-trivial ideal, but for argument's sake say $I=3\Bbb Z$. Then $f(1+I)\subseteq\Bbb Q^\times$ but of course $J(\Bbb Q)=0$. Taking $I=3\Bbb Z$ we have another property: $1$ is the only invertible element of $1+I$ over $\Bbb Z$.
H: Why can WLOG be used in binary integer representation theorem? I was trying to understand the uniqueness portion of the proof for integer representation theorem. Then I saw this:https://math.stackexchange.com/a/607774/789305. He made an assumption that $r>s$, reached a contradiction and then claimed $r$ is not $>$ $s$. He immediately then claims $r=s$. This was the same case with the book I was using to read: Shouldn't they also cover the "Assume the $r<s$" part, repeat the same steps on reaching a contradiction and only then claim $r=s$? Edited: I understand that the part WLOG(without loss of generality) means that validity of proof is applicable in general despite narrowing the statement to a particular case(due to assumption). This means that I don't have to add the "Assume the $r<s$" part But the question is why is $r>s$ of WLOG. What makes that so obvious? AI: Let $r\ne s$. Then without loss of generality one can assume that $r<s$. This is perfectly fine. If $r>s$, then one could swap the numbers such that $r<s$ without loss of generality. So there is no need to handle the case $r>s$ separately.
H: Find $c$ such that $P(Z^2 > c) = 0.95$ I was wondering if any of you could help me with this statistics problem. AI: If $Z\sim N(0,\,1)$, you can proceed in one of two ways: Since $Z^2\sim\chi_1^2$, a $\chi_\nu^2$ table will tell you that for $\nu=1$ the value with CDF $1-0.95=0.05$ is $0.00393$. Let $\Phi$ denote the CDF of $Z\sim N(0,\,1)$. For $c\ge0$,$$P(Z^2\le c)=P(-\sqrt{c}\le Z\le\sqrt{c})=\Phi(\sqrt{c})-\Phi(-\sqrt{c})=2\Phi(\sqrt{c})-1,$$i.e. $P(Z^2>c)=1-P(Z^2\le c)=2(1-\Phi(\sqrt{c}))$. Let $\Phi^{-1}$ denote the inverse of $\Phi$, so$$\Phi(\sqrt{c})=1-\frac{0.95}{2}=0.525\implies c=(\Phi^{-1}(0.525))^2=0.00393.$$Again, a table - this time, for $N(0,\,1)$ - will tell you what you need, i.e. $\Phi^{-1}(0.525)=0.0627$.
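For reference, both routes are easy to check numerically, assuming SciPy is available:

```python
from scipy.stats import chi2, norm

# c with P(Z^2 > c) = 0.95 is the 5th percentile of a chi-square with 1 degree of freedom
print(chi2.ppf(0.05, df=1))    # ~0.003932
# the same value via the standard normal quantile function
print(norm.ppf(0.525) ** 2)    # ~0.003932
```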
H: Almost sure convergence of random variables with same mean and the difference goes to zero on the product Let $X_n$ be a sequence of independent real valued random variables on the same event space, with the same (finite) mean $\mu$. Suppose that for almost every couple of points $(\omega,\omega')$ in the squared space of events, we have $$ X_n(\omega) - X_n(\omega') \to 0. $$ Is it true that $X_n$ converges to $\mu$ almost surely? And if it is false, is it true at least that there exists a random variable $X$ that is an almost surely limit for the sequence? Another equivalent way we can formulate the question is as follows: consider two sequences of independent random variables $X_n$ and $Y_n$ where for every $n$, $X_n\equiv Y_n$ (meaning they have the same distribution), and all the variables have the same finite mean $\mu$. If $X_n-Y_n\to 0$ almost surely, is it true that $X_n\to\mu$ almost surely? or that there exists a random variable $X$ that is an almost sure limit for $X_n$? The idea here should be that every $X_n$ tends to be a constant function as $n$ goes to infinity, since the couples $(\omega,\omega')$ where $X_n$ differs more than $\varepsilon$ are few. Since the mean is the same for every $X_n$, then the constant it converges to should be the mean. The weak point is that the mean can actually be manipulated by big deviation on very small events, but it should be possible to handle from the fact that the variables are independent. AI: What we can say is that $X_n-y_n$ converges almost surely for some sequence $(y_n)$ of real numbers. (To say that $(X_n)$ itself converges you need some addition hypothesis). Sketch of proof: Let $X=(X_n)$ and $Y=(Y_n)$. These are independent $\mathbb R^{\infty}$ valued random vectors. Let $E$ be the set of all Cauchy sequences in $\mathbb R^{\infty}$. The $E$ is a measurable set and we have $P(X-Y\in E)=1$. Fubini's Theorem shows that there exists some sequence $y=(y_n)$ such that $P(X-y\in E)=1$. Hence $X_n-y_n$ converges almost surely.
H: Matrix of the differentiation operation Exercise: Find the matrix of the derivative operation $D$ related to the base $\{1, t, t^2,..., t^n\}$ $$D: \mathcal P_{n} \to \mathcal P_{n}$$ I found a possible solution to this exercise, given that $D(t^k)=kt^{k-1}$ $$ \begin{equation*} D_{n+1,n+1} = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 2 & \cdots & 0 \\ \vdots & \vdots & \vdots &\ddots & \vdots \\ 0 & 0 & 0 &\cdots & n \\ 0 & 0 & 0 &\cdots & 0 \\ \end{pmatrix} \end{equation*}$$Nevertheless, it doesn't convince me at all, because when multiplying the matrix with the vectors in $\mathcal P_{n}$, the exponent remains the same. Is this solution correct? AI: The numbers in your vectors represent the linear combination of the basis elements needed to form a polynomial. For example, \begin{equation*} v = \begin{pmatrix} 3 \\ 4 \\ \vdots \\ 6 \\ 7 \\ \end{pmatrix} \end{equation*} The polynomial represented by this vector is $3+4x+...6x^{n-1}+7x^n$. Now, for a polynomial like $p(x)=1$, it is represented by \begin{equation*} v = \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \\ 0 \\ \end{pmatrix} \end{equation*} This extracts out the first column of $D_{n+1, n+1}$ after left multiplying by $D$, which gives you the zero vector, corresponding to the polynomial $p'(x)=0$. For another example, if $p(x)=x$, the vector representing it is \begin{equation*} v = \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \\ 0 \\ \end{pmatrix} \end{equation*} After $D$ acts on it, the second column is returned, which is $p'(x)=1$. So $D$ works as expected.
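Here is a small numerical illustration of this, assuming NumPy is available (the helper name diff_matrix is just an illustrative choice):

```python
import numpy as np

def diff_matrix(n):
    # (n+1) x (n+1) matrix of d/dt on P_n in the basis {1, t, ..., t^n}.
    D = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        D[k - 1, k] = k          # D(t^k) = k t^(k-1)
    return D

D = diff_matrix(3)
p = np.array([5.0, 0.0, 2.0, 1.0])   # coefficients of 5 + 2t^2 + t^3
print(D @ p)                          # [0. 4. 3. 0.], i.e. 4t + 3t^2
```

The exponent does not "stay the same": multiplying by $D$ shifts each coefficient down one slot while scaling it by its old exponent, exactly as differentiation should.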
H: How does error analysis and significant figures actually work? I could never quite grasp the precise concept. For starters, some books seem to use absolute error to find the number of correct digits the approximation shares with the exact value whereas others use relative error. To me, it makes more intuitive sense to use the former, but even so, if you take, for example, the numbers 2.97 and 3.01, the abs error is 0.04 which is less than $$0.5*10^{-1}$$ which means they share at least one digit, but I don't see it. What digit is that? It's even more confusing to me when it comes to iterative methods since you don't have access to the exact value. I tried running this code to see if things would get a little more clear:
```python
def eulers_number():
    n = 2
    k = [2, 2.25]
    while abs(k[1] - k[0]) >= 0.5 * 10**(-12):
        n = n * 2
        k.append((1 + 1/n)**n)
        k = k[1:]
    print(k)
    return k[1]
```
The output is [2.7182818284584274 , 2.718281828458736], the last two iterations, so I should have 12 correct decimal places and yet my calculator gives me 2.718281828459045.... so my 12th digit is actually wrong (8 instead of 9). Why is that? I'm sorry if it all seems pretty basic, but it bugs me a lot and I would deeply appreciate any help. Thanks in advance! AI:
Digits and Distance
In your leading example, you get the same digits in the number $3.0$ if you round to one digit after the dot. In general it is not that easy to connect the digit sequence and the rounding error, one can always construct some counter-examples. However, in designing algorithms it is more natural to measure the convergence in terms of distance than of digits, so one uses that in most cases when two numbers are less than $0.5\cdot 10^{-k}$ apart, they share the same $k$ digits after the dot after rounding to these $k$ digits. In about half the cases (like $1.04$ and $1.06$ with $k=1$) the last digit may differ by 1. This is a good enough compromise, so that the digit-to-error correspondence is very often understood this way.
Increment and Limit
In the second part you make the wrong claim that the increment of a sequence is indicative of the distance to the limit of the sequence. Take as example the harmonic series. Its terms or increments $\frac1n$ fall below every threshold, however the series itself diverges to infinity. However, it is correct that in efficient numerical methods, where the convergence is at least linear, the distance to the limit is indeed proportional to the last increment, as one can see from the remainder formula of the geometric series. For quadratic or higher order convergence, the factor of this proportionality is close to $1$. See for example the Newton-Kantorovich theorem. For the given case $a_n=(1+\frac1n)^n$ using the sub-sequence with $n=2^m$, one gets indeed linear convergence. It is known that $$ a_n=\exp(n\ln(1+\tfrac1n))=\exp(1-\tfrac1{2n}+\tfrac1{3n^2}+...) =e\cdot (1-\tfrac1{2n}+\tfrac{11}{24n^2}+...) $$ Thus the increment is $a_{2n}-a_n=\frac{e}{4n}+O(n^{-2})$ while the distance to the limit $e-a_{2n}=\frac{e}{4n}+O(n^{-2})$ is of the same size. This distance pattern is quite faithfully represented in your calculated values, using the digits after the 10th, $a_n=...5842..., a_{2n}=...5873..., e=...5904...$ where the difference is both times $...0031...$ Thus rounding of $e$ and $a_{2n}$ to 12 digits is expected to have the same digit sequence. However, the distance of $e$ and $a_n$ is twice as large and thus can have a different last digit in the rounded number.
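To see the increment-versus-error relationship concretely for the sequence in the question, here is a short Python sketch (standard library only):

```python
from math import e

# Compare the increment a_{2n} - a_n with the true error e - a_{2n}
# for a_n = (1 + 1/n)^n along n = 2, 4, 8, ...
n, prev = 2, (1 + 1/2) ** 2
while n < 10**7:
    n *= 2
    curr = (1 + 1/n) ** n
    print(f"n={n:9d}  increment={curr - prev:.3e}  error={e - curr:.3e}")
    prev = curr
```

Both columns shrink at the same rate and track each other closely, which is why the stopping test $|a_{2n}-a_n|<0.5\cdot 10^{-12}$ only guarantees an error of roughly that same size, not a strictly smaller one; hence the last printed digit can still be off.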
H: All prime divisors of $\frac{x^m+1}{x+1}$ are of the form $2km+1$. Let $m$ be an odd prime and $x$ be the product of all primes of the form $2km+1$. Then all prime divisors of $\frac{x^m+1}{x+1}$ are of the form $2km+1$. What I know is that $\frac{x^m+1}{x+1}$ is an integer. Here is the link to the answer which prompted this question. Can anyone help me how to prove this. Any help would be appreciated. Thanks in advance. AI: We have $2 \nmid \frac{x^m+1}{x+1}$. Let odd prime $p$ divide $\frac{x^m+1}{x+1}$ : Case $1$ : $m \mid (p-1)$ We clearly have $p=mq+1$ for some $q \in \mathbb{N}$. As $p$ is an odd prime, $mq+1$ is odd, and thus, $mq$ is even. Moreover, $m$ is an odd prime, thus, $q=2k$ for some $k \in \mathbb{N}$. Substituting: $$p=2km+1$$ which proves that our prime divisor is of the required form. Case $2$ : $m \nmid (p-1)$ We have: $$p \mid (x^m+1) \implies p \mid(x^{2m}-1) \implies p \mid(x^{\gcd(2m,p-1)}-1)$$ by Fermat's Little Theorem. Since $m$ is an odd prime not dividing $p-1$, it follows: $$\gcd(2m,p-1)=\gcd(2,p-1)=2$$ This shows us that $p \mid (x^2-1)$. We thus either have $p \mid (x-1)$ or $p \mid (x+1)$. Subcase $1$ : $p \mid (x-1)$ We have: $$p \mid (x-1) \implies p \mid (x^m-1)$$ Since $p \mid (x^m+1)$, it follows that $(x^m+1)-(x^m-1)=2$ is also divisible by $p$ which is a contradiction as $p$ is an odd prime. Subcase $2$ : $p \mid (x+1)$ This is the same as $x \equiv -1 \pmod{p}$. But then: $$\frac{x^m+1}{x+1} \equiv x^{m-1}-x^{m-2}+\cdots+1 \equiv 1-(-1)+1-(-1)+\cdots+1 \equiv m \pmod{p}$$ As $p \mid \frac{x^m+1}{x+1}$, it follows that $p \mid m$. Since $p$ and $m$ are both odd primes, we must thus have $p=m$. However: $$p \mid (x^m+1) \implies m \mid (x^m+1)$$ Note that as all the prime factors of $x$ are $1 \pmod{m}$, we have $x \equiv 1 \pmod{m}$. Then: $$0 \equiv x^m+1 \equiv 1+1 \equiv 2 \pmod{m} \implies m \mid 2$$ and this is once again a contradiction since $m$ is an odd prime. Thus, we have proved that all prime divisors of $\frac{x^m+1}{x+1}$ are of the form $2km+1$.
H: Prove sum of $k^2$ using $k^3$ So the title may be a little bit vague, but I am quite stuck with the following problem. Asked is to first prove that $(k + 1)^3 - k^3 = 3k^2 + 3k + 1$. This is not the problem however. The question now asks to prove that $$ \sum_{k=1}^{n} k^2 = \frac{1}{6}n(n+1)(2n+1)$$ using the fact that $(k + 1)^3 - k^3 = 3k^2 + 3k + 1$. I have no idea however to start working on this. Anyone have any idea? Does this have anything to do with telescopic series? AI: Hint: take the sum for $k=1$ to $n$ of both sides of the equation $(k+1)^3 - k^3 = 3k^2 + 3k + 1$.
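For completeness, here is how the hint plays out, assuming the formula $\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$ is already known. Summing the identity over $k=1,\dots,n$, the left-hand side telescopes: $$\sum_{k=1}^{n}\left((k+1)^3-k^3\right)=(n+1)^3-1,$$ while the right-hand side is $3\sum_{k=1}^{n}k^2+3\cdot\frac{n(n+1)}{2}+n$. Equating the two and solving for the sum of squares gives $$3\sum_{k=1}^{n}k^2=(n+1)^3-1-\frac{3n(n+1)}{2}-n=\frac{n(n+1)(2n+1)}{2},$$ and dividing by $3$ yields $\sum_{k=1}^{n}k^2=\frac{1}{6}n(n+1)(2n+1)$.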
H: If a commutative ring with $1$ does not have a unit of order $2$, then $R$ does not have an element of order $2$. Let $R$ be a commutative ring with $1$. This is a simple question, but I can't see whether it is true or not. Suppose $R$ does not have a "unit" of (additive) order $2$, i.e. , a unit $u$ such that $u=-u$. Then it is necessarily true that $R$ does not have an element of order $2$? AI: This is not true. For example, consider the ring $\mathbb{Z}[X]/(2X)$. The units in this ring are only $\pm 1$, but $X+X=0$.
H: Show that there are no integer solutions to $2x^{11}+3y^{11}=6z^{11}$ I have managed to show that $x$ must be a multiple of $3$ and $y$ must be even, which produces the equation $$3^{10}s^{11}+2^{10}t^{11}=z^{11},$$ with $s=x/3$ and $t=y/2$. I have tried to approach this by applying Fermat's Little Theorem mod $11$ setting the tenth powers to $1$ but I've not managed to get any further. Any help is really appreciated, thanks! AI: Simply take modulo $23$. If $23 \nmid n$, then: $$n^{22} \equiv 1 \pmod{23} \implies n^{11} \equiv \pm 1 \pmod{23}$$ Thus, for all $n \in \mathbb{N}$, we have $n^{11} \equiv -1,0,1 \pmod{23}$. Try substituting these values in your equation. You would see that the only possibility is $x^{11} \equiv y^{11} \equiv z^{11} \equiv 0\pmod{23}$. Substituting $x=23x_1$, $y=23y_1$ and $z=23z_1$, we get: $$2(x_1)^{11}+3(y_1)^{11} = 6(z_1)^{11}$$ Now, you will get $23$ divides $x_1,y_1,z_1$, which will give a process of infinite descent. This is clearly a contradiction. Thus, your equation does not have solutions in positive integers.
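Both facts used above are easy to confirm with a direct computation; a short Python sketch:

```python
# Modulo 23, every 11th power is congruent to -1, 0 or 1 ...
residues = sorted({pow(n, 11, 23) for n in range(23)})
print(residues)   # [0, 1, 22], i.e. {0, 1, -1} mod 23

# ... and 2a + 3b = 6c (mod 23) with a, b, c in {-1, 0, 1} forces a = b = c = 0.
sols = [(a, b, c)
        for a in (-1, 0, 1) for b in (-1, 0, 1) for c in (-1, 0, 1)
        if (2*a + 3*b - 6*c) % 23 == 0]
print(sols)       # [(0, 0, 0)]
```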
H: Unique differentiable linear operator mapping $\mathbb{R} \rightarrow \mathbb{R}^N$? I am trying to find a mapping $\phi$ such that: $\phi$ uniquely maps $\mathbb{R}$ to the subspace of $\mathbb{R}^N$ (for bounded $N$), where every dimension of the vector is bounded between $-1$ and $1$. $\phi$ is differentiable $\phi$ forms a linear operator Does such a mapping exist? My intuition tells me that the first criterion would require some sort of nonlinearity, which perhaps would make the last criterion infeasible. Thanks! AI: Notice that if $\phi :\mathbb R \rightarrow \mathbb R^N$ is linear then $\phi$ is also differentiable so we need only investigate the first condition in detail. The zero map $\phi :\mathbb R \rightarrow \mathbb R^N : v \mapsto \phi(v) = 0$ is the only linear map such that $ -1 \leq \phi(v)_i \leq 1$ for all $v$. If $\phi$ is not the zero map, then $\exists v : \phi(v) \neq 0.$ Using linearity we have that $\phi(\lambda v) = \lambda \phi(v) \; \forall \lambda \in \mathbb R.$ Therefore by taking $\lambda$ large enough we can find a vector whose components are not all between $-1$ and $1.$
H: How to find the degree of the extension $[\mathbb{Q}(\sqrt[4]{3+2\sqrt{5}}):\mathbb{Q}]$? How to find the degree of extension for $[\mathbb{Q}(\sqrt[4]{3+2\sqrt{5}}):\mathbb{Q}]$? I believe that the minimal polynomial of $\sqrt[4]{3+2\sqrt{5}}$ is $x^8-6x^4-11$, but I don't know how to show that it is irreducible over $\Bbb{Q}$. I also tried to show that $x^4-(3+2\sqrt{5})$ is irreducible over $\Bbb{Q}(\sqrt{5})$, but it is still too complicated for me. AI: Here's a nice trick how to show that $f(x)=x^8-6x^4-11$ is irreducible over $\mathbb{Q}$. By Gauss' lemma it is irreducible over $\mathbb{Q}$ iff it is irreducible over $\mathbb{Z}$. Let $f=gh$ be a product of two polynomials with integer coefficients. Looking at the constant term of $f$ which is $-11$, you see that the constant terms of $g$ and $h$ have to be $\pm 1$ and $\pm 11$. Since the constant term is the product of all roots of the polynomial up to a sign, $g$ or $h$, and as a result $f$, has a root $\alpha$ with $\lvert \alpha \rvert \leq 1$. But then $\lvert \alpha^8 - 6 \alpha^4 -11 \rvert \geq 11-6-1 > 0$ and so $\alpha$ is not a zero of $f$, contradiction. Hence $f$ is irreducible.
H: 'Irregular' conformal mapping of square onto circle? Assume that I want to find the conformal mapping from a square onto the unit disk. In the regular case, where the four edge points are positioned along cardinal directions, this mapping seems to have a relatively simple solution. I have plotted this in the figure below (a). The scenario I am interested in is a bit more complicated. I still want to map from a square (or rectangle) onto a disk, but I want to place the edge points of the square irregularly on the target circle (figure b). Assume that I know the positions of these target points. Is it possible to find a mapping like this? If so, how would I go about solving this issue? AI: Your question is equivalent to asking if, given two quadruples $(z_1, z_2, z_3, z_4)$ and $(w_1, w_2, w_3, w_4)$ of pairwise distinct points on the unit circle, there is a conformal map from the unit disk onto itself which extends continuously to the closed disk, and maps $z_k$ to $w_k$ for $k=1, 2, 3, 4$. Conformal automorphisms of the unit disk are Möbius transformations, they are uniquely determined by the images of three distinct points, preserve the cross-ratio and the orientation. Therefore such a mapping exists exactly if the cross-ratios $(z_1, z_2; z_3, z_4)$ and $(w_1, w_2; w_3, w_4)$ are equal, and if the two quadruples have the same orientation with respect to the unit disk. In that case the mapping $T$ is given by $$ (T(z), w_2; w_3, w_4) = (z, z_2; z_3, z_4) \, . $$
H: Existence of a strange Group It seems to me that if $G$ is a finite group and $|G| \geq 3$ and odd then there exists an element $a \in G$ such that $a \neq a^{-1}$. I can easily show this for order $3$ and $5$ but for higher orders the computations become unwieldy. My attempt: I want to prove this by contradiction. I assumed the existence of such a group and proved that it must be abelian. Let $a_{1},a_{2},\dots,a_{n}$ be precisely the sequence of elements in $G$ I think that $a_{1}a_{2}\dots a_{n}\neq a_{k}$ for all $1 \leq k \leq n$ (This will show that the group operation is not closed) because if it's the case $a_{1}\dots a_{k-1}a_{k+1} \dots a_{n} = e$ where e is the identity element of the group $G$. How to obtain the contradiction from here onwards is where I am stuck. AI: Since $G$ has odd order $n$, there is a prime $p>2$ dividing $n$. By Cauchy's theorem, there exists an element $a\in G$ of order $p$. Hence $a^2\neq e$ so that $a\neq a^{-1}$.
H: How do I solve this problem using binomial theorem with square root in it? The question is: Find the coefficient of $\frac{1}{x\sqrt x}$ in the expansion of $(x^2 - \frac{1}{2\sqrt x})^{18}$. I have included a photo to make it easier to read because I do not know how to format the question. AI: Hint: Any term in the expansion would be of the form $${18\choose r} \cdot (x^2)^r \cdot\left(-\frac 12 x^{-1/2}\right)^{18-r} $$ You need $x^{2r -\frac{18-r}{2}}=\frac{1}{x\sqrt x} =x^{-3/2}$ or $$2r-\frac{18-r}{2} =-\frac 32$$ Solve for $r$ from this equation and just plug its value in the first expression, keeping track of the sign coming from the factor $-\frac12$.
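If you want to check where the hint leads, a direct computation with the Python standard library will do it (note that this reveals the final value):

```python
from fractions import Fraction
from math import comb

# Find r with 2r - (18 - r)/2 = -3/2 and evaluate that term's coefficient.
for r in range(19):
    if Fraction(2 * r) - Fraction(18 - r, 2) == Fraction(-3, 2):
        coeff = comb(18, r) * Fraction(-1, 2) ** (18 - r)
        print(r, coeff)   # 3 -51/2048
```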
H: $A=\{(x,y) : x = u+v,y = v,u^2+v^2 ≤ 1\}$ Derive the length of the longest line segment that can be enclosed inside the region $A$. QUESTION: Let $A$ be the region in the $xy$-plane given by $$A=\{(x,y) : x = u+v,y = v,u^2+v^2 ≤ 1\}$$Derive the length of the longest line segment that can be enclosed inside the region $A$. MY APPROACH: Firstly, I didn't quite get the meaning of the question. I mean, we are considering the rectangular coordinates with normal meaning of symbols. So $x$ stands for the $x$ axis and $y$ stands for the $y$ axis. In that case, what are $u$ and $v$ here? I tried to formulate an equation in $x$ and $y$. This is what I found out- $$x^2+2y^2-2xy-1≤0$$ Now I do not know how to proceed from here.. does $u$ and $v$ stand for a separate coordinate system altogether? If so, how? Pardon me, this question has been asked before. But the solution was too brief and incomplete. I didn't get that. Any help will be much appreciated. Thank you so much. AI: Here $u$ and $v$ are just parameters: a point $(x,y)$ belongs to $A$ exactly when it can be written as $(u+v,\,v)$ for some $(u,v)$ in the closed unit disk. Eliminating $u=x-y$, $v=y$ gives your inequality $(x-y)^2+y^2\le 1$, i.e. $x^2+2y^2-2xy-1\le 0$, so $A$ is the image of the unit disk under the linear shear $T(u,v)=(u+v,v)$; it is a convex elliptical region. Since the unit disk is symmetric about the origin and $T$ is linear, $A$ satisfies $(x,y)\in A \iff (-x,-y)\in A$. For such a convex region the longest segment passes through the origin: for any $P,Q\in A$, $$|P-Q|\le |P|+|Q|\le 2\max(|P|,|Q|),$$ and $2\max(|P|,|Q|)$ is exactly the length of the chord joining the farther of the two points to its antipode, which lies in $A$ by symmetry and convexity. So the required length is twice the largest distance from the origin to a point of $A$. Parametrize the boundary by $u=\cos t$, $v=\sin t$, so a boundary point is $(\cos t+\sin t,\ \sin t)$ and $$x^2+y^2=(\cos t+\sin t)^2+\sin^2 t=\frac32+\sin 2t-\frac12\cos 2t .$$ The maximum of $\sin 2t-\frac12\cos 2t$ is $\sqrt{1+\frac14}=\frac{\sqrt 5}{2}$, so $$\max_{(x,y)\in A}\left(x^2+y^2\right)=\frac{3+\sqrt 5}{2}=\left(\frac{1+\sqrt 5}{2}\right)^{2}.$$ Hence the farthest point of $A$ from the origin is at distance $\frac{1+\sqrt 5}{2}$, and the longest line segment enclosed in $A$ has length $$2\cdot\frac{1+\sqrt 5}{2}=1+\sqrt 5 .$$ Equivalently, $1+\sqrt 5=2\sigma_{\max}$, where $\sigma_{\max}$ is the largest singular value of the shear matrix $\begin{pmatrix}1&1\\0&1\end{pmatrix}$: the longest segment is the major axis of the ellipse.
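A numerical cross-check of this value, assuming NumPy is available:

```python
import numpy as np

# Longest chord = 2 * (largest singular value of the shear matrix).
T = np.array([[1.0, 1.0], [0.0, 1.0]])
print(2 * np.linalg.svd(T, compute_uv=False)[0])   # 3.23606797...

# Same value by sampling boundary points and doubling the largest distance to the origin.
t = np.linspace(0, 2 * np.pi, 100_000)
pts = np.column_stack((np.cos(t) + np.sin(t), np.sin(t)))
print(2 * np.linalg.norm(pts, axis=1).max())        # ~3.23607
print(1 + np.sqrt(5))                                # 3.23606797...
```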
H: Verifying that the identity functor is indeed a functor The identity functor $1_{\mathscr C}:\mathscr C \to \mathscr C$ has $1_{\mathscr C}(a)=a$, $1_{\mathscr C}(f)=f$. Verify that it is indeed a functor, specifically that it is a function that assigns: To each $\mathscr C$-object $a$, a $\mathscr C$-object $1_{\mathscr C}(a)$ To each $\mathscr C$-arrow $f:a\to b$ a $\mathscr C$-arrow $1_{\mathscr C}(f):1_{\mathscr C}(a)\to 1_{\mathscr C}(b)$ such that $1_{\mathscr C}(1_a)=1_{1_{\mathscr C}(a)}$ for all $\mathscr C$-objects $a$ $1_{\mathscr C}(g\circ f)=1_{\mathscr C}(g)\circ 1_{\mathscr C}(f)$, whenever $g\circ f$ is defined 2i is where I do not understand. LHS is supposedly an arrow (because it is a functor applied to an arrow), while RHS is supposed to be an object (because it is an arrow applied to a functor applied to $a$) - so how can an arrow be identical to an object? From the LHS we have $1_{\mathscr C}(1_a)\to 1_a$ by def. that $1_{\mathscr C}(f)=f$. From the RHS, we have $1_{1_{\mathscr C}(a)}\to 1_a$ by def. that $1_{\mathscr C}(a)=a$. This looks identical to what I have for LHS above...but I don't actually know what this is - is this supposed to be an arrow now? How did we go from dealing with an object to an arrow? AI: To avoid confusion it might be better to use another notation for identity arrows here. Doing so 2i states that:$$1_{\mathscr C}(\mathsf{id}_a)=\mathsf{id}_{1_{\mathscr C}(a)}$$ Note that both LHS and RHS are (identity) arrows.
H: How to calculate $\int_0^\frac{\pi}{2} \sum_{k=0}^\infty \frac{1-k\sin{kx}}{2^k} dx $? I have to find $\int_0^\frac{\pi}{2} \sum_{k=0}^\infty \frac{1-k\sin{kx}}{2^k} dx$, but can't figure out how to do it. If $\frac{1-k\sin{kx}}{2^k}$ converges it would be easy to calculate $\int_0^\frac{\pi}{2} \frac{1-k\sin{kx}}{2^k} dx$, but I don't know how to prove the convergence and how to find the sum. The answer of the integral should be $\frac{1}{2^k}\left(\frac{\pi }{2}+\cos \left(\frac{\pi k}{2}\right)-1\right)$ and I think the convergence can be proved with Weierstrass. If I'm correct the solution can be found with $\sum_{k=0}^\infty \frac{1}{2^k}\left(\frac{\pi }{2}+\cos \left(\frac{\pi k}{2}\right)-1\right)$. AI: For each $k\in\Bbb Z_+$ and each $x\in\left[0,\frac\pi2\right]$,$$\left|\frac{1-k\sin(kx)}{2^k}\right|\leqslant\frac{1+k}{2^k}.$$Since the series $\sum_{k=0}^\infty\frac{1+k}{2^k}$ converges (by, say, the ratio test), your series converges uniformly on $\left[0,\frac\pi2\right]$, by the Weierstrass $M$-test. So\begin{align}\int_0^{\pi/2}\sum_{k=0}^\infty\frac{1-k\sin(kx)}{2^k}\,\mathrm dx&=\sum_{k=0}^\infty\int_0^{\pi/2}\frac{1-k\sin(kx)}{2^k}\,\mathrm dx\\&=\sum_{k=0}^\infty\frac{-1+\pi/2+\cos\left(k\pi/2\right)}{2^k}\\&=\sum_{k=0}^\infty\frac{-1+\pi/2}{2^k}+\sum_{k=0}^\infty\frac{(-1)^k}{2^{2k}}\\&=-2+\pi+\frac1{1+1/4}\\&=\pi-\frac65.\end{align}
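A numerical cross-check, assuming NumPy is available (truncating at $k=60$ and the grid size are arbitrary choices; the tail terms decay like $k/2^k$, so the truncation error is negligible):

```python
import numpy as np

# Integrate a truncated version of the series on [0, pi/2] with the trapezoidal rule.
x = np.linspace(0, np.pi / 2, 20_001)
k = np.arange(60).reshape(-1, 1)
vals = np.sum((1 - k * np.sin(k * x)) / 2.0**k, axis=0)
dx = x[1] - x[0]
print(dx * (vals.sum() - 0.5 * (vals[0] + vals[-1])))   # ~1.9415926
print(np.pi - 6 / 5)                                     # 1.9415926535...
```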
H: Result of number with decimal point modulo $10$. This is a very simple question, very trivial to many, but I have to resolve this doubt! I have done the division $2.2/10$ by hand, and the result is $0.22$, without any remainder: But why if I do with calculator, I obtain this: $$2.2 \mod 10 = 2.2$$ why it is not equal to $0$, instead? AI: You can define modulo for real variables like this: $\quad {a \bmod b=a-\lfloor \frac ab\rfloor\times b}$ Here $\frac ab=0.22$ whose integer part is $0$, so basically for $0\le x<10$ then $x\bmod 10=x$.
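In Python, for instance, the % operator implements exactly this formula for non-integer operands:

```python
from math import floor

a, b = 2.2, 10
print(a % b)                 # 2.2
print(a - floor(a / b) * b)  # 2.2, the formula above
print(7.5 % 2)               # 1.5, since floor(7.5 / 2) = 3
```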
H: Prove that if $|f(z)| \geq |f(z_{0})|$ then $f(z_{0})=0$ Let $U \subseteq \mathbb{C}$ be an open connected subset and $f: U \rightarrow \mathbb{C}$ a holomorphic function. Let $z_{0} \in U$ and $r > 0$ with $B_{r}(z_{0}) \subseteq U$. If $\forall z \in B_{r}(z_{0})$ $|f(z)| \geq |f(z_{0})|$ then $f(z_{0})=0$. As $f$ is a holomorphic map $f$ has the mean value property and then $|f(z_{0})|\leq \max_{\partial B_{r}(z_{0})}|f|$ but I don't know how to derive that it is zero. Thanks in advance. AI: If $f(z_0)\not=0$, consider an open connected neighbourhood $V$ of $z_0$ contained in $B_{r}(z_{0})$ on which $f$ never vanishes, and apply the maximum modulus principle to $g:=\frac{1}{f|_V}$. Indeed, $|g|\leq |g(z_0)|$ on $V$, so $g$ attains its maximum modulus at the interior point $z_0$ and is therefore constant on $V$. The identity theorem then implies $f\equiv f(z_0)$ on $U$. So, unless $f$ is constant, we must have $f(z_{0})=0$.
H: An application of Fubini’s theorem on Fourier transform Given $f,g\in L^1(\mathbb{R}^n)$ and we denote the Fourier transform of $f$ by $\widehat{f}$. I want to prove that $$\int_{\mathbb{R}^n}\widehat{f}(x)g(x)~dx= \int_{\mathbb{R}^n}f(x)\widehat{g}(x)~dx.$$ Here’s my attempt: \begin{align} \int_{\mathbb{R}^n}\widehat{f}(x)g(x)~dx=&\int _{\mathbb{R}^n}\left\{\int _{\mathbb{R}^n} f(t)e^{-2\pi it\cdot x}~dt\right\}g(x)dx\\ =&\int _{\mathbb{R}^n}\left\{\int _{\mathbb{R}^n} g(x)f(t)e^{-2\pi it\cdot x}~dt\right\}dx\\ {\color{red}{=}}&\int _{\mathbb{R}^n}\left\{\int_{\mathbb{R}^n} g(x)f(t)e^{-2\pi it\cdot x}dx\right\}~dt\\ =&\int_{\mathbb{R}^n}\left\{\int_{\mathbb{R}^n} g(x)e^{-2\pi it\cdot x}dx\right\}f(t)~dt\\ =&\int_{\mathbb{R}^n}f(t)\widehat{g}(t)~dt\\ =&\int_{\mathbb{R}^n}f(x)\widehat{g}(x)dx. \end{align} In the third equation I used Fubini’s theorem. But here’s something that I’m not sure: if we want to apply Fubini’s theorem, then $F(x,t):= g(x)f(t)e^{-2\pi it\cdot x}$ must be $\mathbb{R}^n\times\mathbb{R}^n$ integrable. But I was stuck when trying to prove that $F(x,t)$ is integrable. Could you give me some help? Thanks! AI: $$\int_{\mathbb{R}^n \times \mathbb{R}^n}|F(x,t)| \lambda(dx,dt)$$ $$=\int_{\mathbb{R}^n \times \mathbb{R}^n}|g(x) f(t)| \lambda(dx,dt)$$ $$= \int_{\mathbb{R}^n}\int_{\mathbb{R}^n} |g(x)||f(t)| \lambda(dx) \lambda(dt)$$ $$= \Vert g \Vert_1 \Vert f \Vert_1 < \infty$$ by Fubini for positive functions.
H: Bias of uniform maximum Let $X_i$ be i.i.d. uniform random variables in $[0,\theta]$, for some $\theta>0$, and $M_n = \max_i(X_i)$. I am trying to find the bias of $M_n$ as an estimator of $\theta:$ $$E[M_n] - \theta = $$ Computing $E[M_n] = \frac{n}{n+1}$ I guessed the answer would be $\frac{n}{n+1} - \theta$ But this doesn't seem to be the final answer. What would be the next step? Is there another way? AI: The expected value you calculated is wrong: $$\mathbb{E}[M_n]=\int_0^{\theta}\frac{n}{\theta^n}t^n dt=\frac{n}{n+1}\theta$$ So your estimator is biased but asymptotically unbiased, with $\operatorname{Bias}(M_n)=-\frac{\theta}{n+1}$. It is asymptotically unbiased because the bias goes to zero as $n \rightarrow \infty$. If you are interested in an unbiased estimator for $\theta$, it is obviously $\hat{\theta}=\frac{n+1}{n}M_n$.
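A quick Monte Carlo check of both the bias and the corrected estimator, assuming NumPy is available (the values of $\theta$, $n$ and the number of replications are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000

M = rng.uniform(0, theta, size=(reps, n)).max(axis=1)
print(M.mean() - theta)                 # ~ -theta/(n+1) = -0.333...
print((n + 1) / n * M.mean() - theta)   # ~ 0 for the bias-corrected estimator
```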
H: Upper semicontinuous decomposition I'm reading a paper from Y. Ünlü called Lattices of compactifications of Tychonoff spaces. I've bumped into some definitions that I've never seen; while I find most of them understandable, there's one that I can't break down. Definition 1. A decomposition $\alpha$ of a space $X$ is a partition of $X$ into compact sets. So far, this one is easy to understand. Definition 2. A decomposition $\alpha$ of a set $X$ is called upper semicontinuous if for $K \in \alpha$, $K \subset V$ and $V$ is open in $X$, then there is an open set $W$ in $X$ such that $K \subset W \subset V$ and $W = \bigcup \{L \in \alpha\ \textrm{s.t.}\ L \cap W \neq \emptyset\}$. A set $W$ satisfying this condition is called $\alpha$-saturated. It's here where I come into trouble. In $W = \bigcup \{L \in \alpha\ \textrm{s.t.}\ W \cap L \neq \emptyset\}$ how can we express $W$ in terms of itself, if it hasn't already been defined? Could you please clarify this definition (maybe it's a typo and the author wanted to write $V$ instead of $W$, or maybe I'm not good enough to get it) and give me some equivalent statement? AI: A better formulation of the notion of upper semicontinuous decompositions is to demand If $V \subseteq X$ is open, then $W= \bigcup \{K: K \in \alpha: K \subseteq V\}$ is open in $X$ and this is equivalent to the quotient map $q$ (defined by the decomposition onto the set of classes $X{/}\alpha$ being an open map. See this survey for more explanation. Then if $\alpha$ satisfies this and we have (as in your situation) $K \in \alpha, K \subseteq V$, we take $W$ as promised, which is open (and clearly contains $\alpha$) and if $L \in \alpha$ intersects $W$, then $L \subseteq W$ (because we have a partition: some $x \in L$ is contained in an $L'$ that sits inside $W$ (by definition of $W$) but $L=L'$ as intersecting means equality in a partition.), and $W$ is saturated. And no, the definition as stated is just the definition of being saturated not the definition of $W$ itself. A set $S$ (in general) is saturated for a partition $\alpha$, iff $$\forall K \in \alpha: (K \cap S \neq \emptyset) \to K \subseteq S$$ (so if you have a point of a member you have the whole member; " if you eat them a bit, you eat them completely", to put it weirdly.)
H: Show that every contractible space is connected. So I'm trying to show every contractible topological space is connected. At the moment I've established that $X$ is contractible iff there is a homotopy equivalence $f:\{x_0\}\to X$ for some $x_0\in X$, but I'm having trouble showing that if $X$ and $Y$ are homotopy equivalent and $X$ is connected, then so is $Y$ (from which the result would follow, as $\{x_0\}$ is clearly connected). AI: Let $H: X \times I \to X$ be a contracting homotopy to $p \in X$, so that $H$ is continuous, $H(x,0)=x$ for all $x \in X$, and $H(x,1)=p$ for all $x \in X$. Then for each $x \in X$ we have a continuous path $p_x: [0,1] \to X$ such that $p_x(0)=x$ and $p_x(1)= p$, namely $p_x(t)=H(x,t)$; this is just $H$ restricted to $\{x\} \times I$, so $p_x$ is continuous because $H$ is. Then clearly $X$ is path-connected: if $x,y \in X$ define $p: I \to X$ by $$p(t)= \begin{cases} p_x(2t)& 0 \le t \le \frac12\\ p_y(2-2t) & \frac12 \le t \le 1\\ \end{cases}$$ which is continuous by the pasting lemma for two closed sets (note that $t=\frac12$ is consistent, as $p_x(1)= p = p_y(1)$, so the two pieces connect); we go from $x$ to $p$ via $p_x$ and then backwards from $p$ to $y$ via $p_y$. And a path-connected space is connected: $X=\bigcup_{x \in X} p_x[I]$ is a union of connected sets (continuous images of the connected interval $I$) that all contain the common point $p$, so $X$ is connected.
H: Statement true for all prime numbers -- can this be done by Math Induction? Let's say that you want to prove a statement is true for all prime numbers. Can this be done by Math Induction? AI: Let's say that you want to prove a statement is true for all prime numbers. Such as Fermat's Last Theorem: $$ X^p+Y^p=Z^p$$ has no non-trivial integer solutions for all primes $p>2$. Can this be done by Math Induction? This is not known: Fermat's Last Theorem has since been proved (by Wiles), but not by induction on the primes, and no inductive proof over the primes is known. You can of course list the primes $p_1<p_2<p_3<\cdots$ and try induction on the index $k$, but that only helps if the statement for $p_{k+1}$ can somehow be deduced from the statement for $p_k$, and for a statement like this no such passage is known.
H: Calculating Distance From a Line To Point Hello everyone. I have a point $P$ and a line $l$ in the plane, and I need to find all points $X$ such that the distance from $X$ to $P$ is half the distance from $X$ to $l$. I tried to use the distance formula but I didn't succeed. AI: You can very well use the distance formula. If we have $P(a,b)$ and $L: lx+my+n=0$, then any point $X(x,y)$ must satisfy $$\sqrt{(x-a)^2+(y-b)^2} = \frac 12 \cdot \frac{|lx+my+n|}{\sqrt{l^2+m^2}} $$ Now just square and clear denominators: $$4(l^2+m^2)\left[(x-a)^2+(y-b)^2\right]=(lx+my+n)^2.$$ Expanding and collecting terms, you get the locus of $X$ as $$x^2(3l^2+4m^2) +y^2(4l^2+3m^2) -x(8al^2+8am^2+2ln) -y(8bl^2+8bm^2+2mn) -xy(2lm) +4(l^2+m^2)(a^2+b^2) -n^2=0$$ (note the minus sign on the $xy$ term). Geometrically, this is the conic with focus $P$, directrix $l$ and eccentricity $\frac12$, i.e. an ellipse, provided $P$ does not lie on $l$.
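A symbolic check of the expansion above with SymPy; it verifies the identity in the parameters $a,b,l,m,n$ rather than for particular numbers. This is just a convenience sketch of the algebra.

```python
import sympy as sp

x, y, a, b, l, m, n = sp.symbols('x y a b l m n', real=True)

# Squared condition: 4*(l^2+m^2)*dist(X,P)^2 = (l*x + m*y + n)^2, brought to one side
lhs = 4*(l**2 + m**2)*((x - a)**2 + (y - b)**2) - (l*x + m*y + n)**2

# Expanded locus equation claimed in the answer
locus = (x**2*(3*l**2 + 4*m**2) + y**2*(4*l**2 + 3*m**2)
         - x*(8*a*l**2 + 8*a*m**2 + 2*l*n)
         - y*(8*b*l**2 + 8*b*m**2 + 2*m*n)
         - x*y*(2*l*m)
         + 4*(l**2 + m**2)*(a**2 + b**2) - n**2)

print(sp.expand(lhs - locus))  # expected output: 0
```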
H: summation of random variable into harmonic number In our textbook "Algorithm Design" we are given an example of a deck of $n$ cards and we have to guess the correct card, and every time a card is drawn we remember that card, so we are uniformly guessing only among the cards not yet seen. I don't understand the final part of the following summation, where the authors reduce the summation to a harmonic number... note: the random variable $X_i$ takes the value 1 if the $i$th guess is correct and 0 otherwise, and $X=\sum_i X_i$ is the number of correct guesses: $$E[X] = \sum\limits_{i=1}^n E[X_i] = \sum\limits_{i=1} ^n \frac {1}{n-i+1} = \sum\limits_{i =1}^n \frac {1}{i}$$ I don't see how they derived $\sum\limits_{i =1}^n \frac {1}{i}$, which I know becomes a harmonic number [Ideally someone might show this without calculus]... Thanks AI: They reindexed the sum. As $i$ runs from $1$ to $n$, $j=n-i+1$ runs from $n$ down to $1$, so the two sums have exactly the same (finitely many) terms, just listed in the opposite order: $$ \sum_{i=1}^n\frac1{n-i+1}=\sum_{j=1}^n\frac1j=\sum_{i=1}^n\frac1i\;. $$
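A two-line numerical check of the reindexing, for an arbitrarily chosen $n$:

```python
n = 10  # arbitrary
print(sum(1/(n - i + 1) for i in range(1, n + 1)))  # the sum as written in the book
print(sum(1/i for i in range(1, n + 1)))            # the harmonic number H_n; same value
```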
H: What do first derivatives, factorials, and alternating signs have to do with explicit and recursive forms of sequences? I'm a math teacher now, although a few years ago I was finishing up my M.Ed. As part of my studies, I was tasked with conducting my own study of high school level topics and finding unique results. As a student, I had always been fascinated by the idea of converting sequences in explicit form into recursive form, especially since nowhere could I find a general formula. The images below are my submission to my graduate school and an explanation of the formula I arrived at for converting polynomial sequences in explicit form into their recursive counterparts. My BIG question is why? Why does this formula involve the nth derivative, the nth factorial, and an alternating sign? AI: For an analytic function such as a polynomial, we have the Taylor series centred at $x$: $f(t)=\sum\limits_{n=0}^\infty \dfrac{f^{(n)}(x)}{n!}(t-x)^n$. Take $t=x-1$: $$f(x-1)=\sum_{n=0}^{\infty}\frac{(-1)^n\,f^{(n)}(x)}{n!},$$ and for a polynomial this is a finite sum, since the derivatives vanish beyond the degree. This expresses the previous term $f(x-1)$ in terms of $f(x)$ and its derivatives at $x$, which is where the $n$th derivatives, the factorials $n!$, and the alternating signs $(-1)^n$ in your formula come from.
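A small SymPy illustration of the identity $f(x-1)=\sum_{k\ge 0}(-1)^k f^{(k)}(x)/k!$ for a polynomial; the particular cubic used here is an arbitrary example.

```python
import sympy as sp

x = sp.symbols('x')
f = 2*x**3 - 5*x + 7                    # arbitrary example polynomial
deg = int(sp.degree(f, x))

# f(x-1) rebuilt from the derivatives of f at x, with factorials and alternating signs
rebuilt = sum((-1)**k * sp.diff(f, x, k) / sp.factorial(k) for k in range(deg + 1))

print(sp.expand(rebuilt))               # 2*x**3 - 6*x**2 + x + 10
print(sp.expand(f.subs(x, x - 1)))      # the same polynomial, computed directly
```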
H: Variance of the difference of Brownian Motions I have a question about the variance of the following expression: $Var(W(t) - \frac{t}{T}W(T-t))$, where $W(t)$ is a Brownian motion. I tried the following: $Var(W(t) - \frac{t}{T}W(T-t)) = E[W(t) - \frac{t}{T}W(T-t)]^2 - (E[W(t) - \frac{t}{T}W(T-t)])^2$. Now $E[W(t) - \frac{t}{T}W(T-t)] = E[W(t)] - \frac{t}{T}E[W(T-t)] = 0 - \frac{t}{T}*T = t$, and: $E[W(t) - \frac{t}{T}W(T-t)]^2 = E[W^2(t) - \frac{2t}{T}W(t)W(T-t) + \frac{t^2}{T^2}W^2(T-t)]$, this expectation consists of three terms: $E[W^2(t)] = Var(W(t)) + E[W(t)]^2 = t + 0 = t$ $\frac{t^2}{T^2}E[W^2(T-t)] = \frac{t^2}{T^2}(Var(W^2(T-t)) + E[W(T-t)]^2) = \frac{t^2}{T^2}(t + T)$ $\frac{2t}{T}E[W(t)W(T-t)] = $ Here I'm getting stuck; I'm not sure if this is even the correct approach or whether I should completely change my way of calculating the variance. Thanks in advance AI: Firstly, you have miscalculated the expectation. For any $s$, assuming that you consider a Brownian motion started from $0$, $\mathbb{E}[W(s)] = 0$. In particular, with $s = T-t$, we get that $$\mathbb{E}[W(T-t)] = 0$$ so that $$\mathbb{E}[W(t) - \frac{t}{T}W(T-t)] = 0 - \frac{t}{T} \cdot 0 = 0.$$ Now I move on to the variance. You have correctly identified the $3$ expectations you'll need to compute and computed the first of them correctly. For the second, you should get that $$\frac{t^2}{T^2}E[W(T-t)^2] = \frac{t^2}{T^2} (T-t)$$ by using the fact that the covariance of $W$ is given by $\mathbb{E}[W(t)W(s)] = t \wedge s$, here with $t = s = T-t$. Finally, for the third term, again use the known form of the covariance of Brownian motion to get that $$\frac{2t}{T}\mathbb{E}[W(t)W(T-t)] = \frac{2t}{T} \big(t \wedge (T-t)\big).$$ Putting the three pieces together (the mean is $0$, so the variance is just the second moment): $$\operatorname{Var}\!\Big(W(t) - \frac{t}{T}W(T-t)\Big) = t + \frac{t^2}{T^2}(T-t) - \frac{2t}{T}\big(t \wedge (T-t)\big).$$
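A Monte Carlo sanity check of the assembled variance formula. The particular values of $t$, $T$, and the number of sample paths are arbitrary; the simulation below only covers the case $t \le T-t$, since it builds $W(T-t)$ from $W(t)$ plus an independent increment.

```python
import numpy as np

rng = np.random.default_rng(1)
T, t, reps = 1.0, 0.3, 500_000        # arbitrary choices with t <= T - t

# Joint law of (W(t), W(T-t)) when t <= T-t:
# W(t) ~ N(0, t) and W(T-t) = W(t) + independent N(0, T-2t) increment.
Wt = rng.normal(0.0, np.sqrt(t), size=reps)
WTt = Wt + rng.normal(0.0, np.sqrt(T - 2*t), size=reps)

Z = Wt - (t/T) * WTt
print(Z.var())                                           # empirical variance
print(t + (t**2/T**2)*(T - t) - (2*t/T)*min(t, T - t))   # formula from the answer
```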
H: Computing a trace containing $\gamma$-matrices I want to compute the following trace $$Tr \Big( Y(\not{\!p_1'}+m) \Big) \ \ (1)$$ Where $$\not{\!A} := \gamma^{\alpha} A_{\alpha} \ \ (2)$$ $$Y:= 4 \not{\!f_1} \not{\!p} \not{\!f_1} + m[-16(pf_1)+16 f_1^2] + m^2 ( 4 \not{\!p} - 16 \not{\!f_1})+16 m^3 \ \ (3)$$ The answer is given to be: $$Tr \Big( Y(\not{\!p_1'}+m) \Big)=16 \Big( 2(f_1p)(f_1p') -f_1^2(pp')+m^2[-4(pf_1)+4f_1^2]+m^2 [(pp')-4(f_1 p')] +4m^4 \Big) \ \ (4)$$ We need to use the following properties 1) If $(\gamma^{\alpha}\gamma^{\beta}...\gamma^{\mu}\gamma^{\nu})$ contains an odd number of $\gamma$-matrices $$Tr(\gamma^{\alpha}\gamma^{\beta}...\gamma^{\mu}\gamma^{\nu})=0$$ 2) $$Tr(\not{\!A}\not{\!B})=4AB$$ 3) $$Tr(\not{\!A}\not{\!B}\not{\!C}\not{\!D})=4\Big( (AB)(CD)-(AC)(BD)+(AD)(BC) \Big)$$ 4) Given $A, B$ to be square matrices $$Tr(A+B)=Tr(A) + Tr(B)$$ Besides: I've assumed that the trace of a scalar is equal to itself. (i.e. $Tr(m)=m$) Applying such properties I get $$Tr \Big( Y(\not{\!p_1'}+m) \Big) = 16 \Big( (f_1 p)(f_1 p') - f_1^2(pp') + f_1p'p p' \Big) + 16m^2p^2-64 m^2 f_1 p + 16m^2[-pf_1+f_1^2]+16m^4 \ \ (5)$$ Which is not the provided solution $(4)$. What am I missing? Source: Second Edition QFT, Mandl & Shaw page 144. Any help is appreciated. AI: The trace of a scalar is not the scalar itself. In this context, the scalar $m$ represents $m\mathbb1$, where $\mathbb1$ is the $4\times4$ identity matrix. Thus $\operatorname{Tr}m=4m$.
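A quick numerical illustration of the point above (and of properties 2 and 3 from the question), using the Dirac representation of the $\gamma$-matrices and the metric $\eta=\operatorname{diag}(1,-1,-1,-1)$; the scalar $m$ and the four-vectors $A,B$ below are arbitrary numbers chosen for this sketch.

```python
import numpy as np

# Dirac representation of the gamma matrices, metric eta = diag(1,-1,-1,-1)
I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
gamma = [np.block([[I2, Z2], [Z2, -I2]]).astype(complex)]
gamma += [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(A):
    """A-slash = gamma^mu A_mu, with the index lowered by eta."""
    A_lower = eta @ A
    return sum(g * a for g, a in zip(gamma, A_lower))

m = 0.7                                    # "scalar" really means m times the 4x4 identity
print(np.trace(m * np.eye(4)))             # gives 4*m = 2.8, not m

A = np.array([1.0, 0.2, -0.5, 0.3])        # arbitrary four-vectors (upper components)
B = np.array([0.4, -1.0, 0.1, 0.8])
print(np.trace(slash(A) @ slash(B)).real)  # Tr(A-slash B-slash)
print(4 * A @ eta @ B)                     # 4 (A.B); the two numbers should match
```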
H: Show that there is no polynomial function such that $f(x) = \log (1+x)$. Does anyone have any idea about this problem? Let $f : \mathbb{R} \to \mathbb{R}$ be a polynomial function. Show that there is no such $f$ with $f(x) = \log (1+x)$. It's easy to show that we would need $a_0 = 0$ and $a_1 = 1$. But after that I don't have any idea. Any hints? Thanks! AI: Maybe there are many ways to show this, but what I have in mind is this: if $f$ is a polynomial of degree $k$, then its $(k+1)$-th derivative is identically zero. But this is not the case for $\log(1+x)$: its $(k+1)$-th derivative is $\dfrac{(-1)^{k}\,k!}{(1+x)^{k+1}}$, which never vanishes.
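A short SymPy check of the derivative formula quoted above, for an arbitrarily chosen $k$:

```python
import sympy as sp

x = sp.symbols('x')
k = 4  # arbitrary
diff_formula = sp.diff(sp.log(1 + x), x, k + 1) - (-1)**k * sp.factorial(k) / (1 + x)**(k + 1)
print(sp.simplify(diff_formula))  # expected output: 0
```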
H: When two algebraic vector bundles on a Noetherian quasi-affine scheme are equal in $K_0$ of the scheme Let $X$ be a (connected) Noetherian scheme and $K_0(X)$ denote the Grothendieck group of the category of algebraic vector bundles (coherent sheaves that are locally free and of constant rank, as $X$ is connected). My question is: if for two algebraic vector bundles $\mathcal F, \mathcal G$ on $X$ we have $[\mathcal F]=[\mathcal G]$ in $K_0(X)$, then is it necessarily true that $\mathcal F \oplus \mathcal O_X^{\oplus n}\cong \mathcal G \oplus \mathcal O_X^{\oplus n}$ for some integer $n\ge 0$? I know this is true if $X$ is affine, but I'm not sure what happens otherwise. I'm most interested in the case where $X$ is quasi-affine. AI: This is in general false. Since you are interested in quasi-affines, let me give such an example. Take $R=k[X_1,\ldots, X_n]$ with $n$ sufficiently large (say greater than $2$). Let $X=\operatorname{Spec} R\setminus V(X_1,\ldots, X_n)$, the complement of the origin. Take $M=(Re_1\oplus \cdots\oplus Re_n)/R\big(\sum_i X_ie_i\big)$ and restrict the associated sheaf to $X$ to get $F$; since the section $\sum_i X_ie_i$ is nowhere vanishing on $X$, $F$ is a vector bundle of rank $n-1$ there. The short exact sequence $0\to\mathcal O_X\to\mathcal O_X^{\,n}\to F\to 0$ gives $[F\oplus \mathcal{O}_X]=[\mathcal{O}^n_X]$ in $K_0(X)$, but $F\not\cong\mathcal{O}_X^{n-1}$.
H: Relation between squared norms and sets of orthonormal vectors having the same span I was asked to show the following claim but I'm stuck. It seems I can't find the right reasoning path. Given a matrix $M\in\mathbb{R}^{n\times d}$ and two sets of pairwise orthogonal unit vectors {$u_1, ..., u_k$} and {$v_1,..., v_k$} s.t. $span(\{u_1,...u_k\})=span(\{v_1,...v_k\})$. Show that $$\sum_{i=1}^{k}||Mu_i||^2=\sum_{i=1}^{k}||Mv_i||^2$$ Can anyone give me any hint? AI: Hints: a) Collect the vectors into $d\times k$ matrices $U=[u_1\ \cdots\ u_k]$ and $V=[v_1\ \cdots\ v_k]$; since both have orthonormal columns spanning the same subspace, $V=UQ$ for some orthogonal $k\times k$ matrix $Q$ (take $Q=U^\top V$). b) $\sum_{i}\|Mv_i\|^2=\operatorname{Tr}(V^\top M^\top M V)$, because $\|Mv_i\|^2=v_i^\top M^\top M v_i$ and the trace adds up these diagonal entries. c) $\operatorname{Tr}(Q^\top U^\top M^\top M\, U Q)=\operatorname{Tr}(U^\top M^\top M\, U)$, since $\operatorname{Tr}(Q^\top A Q)=\operatorname{Tr}(A\,QQ^\top)=\operatorname{Tr}(A)$ by the cyclic property of the trace.
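A quick numerical check of the claim with randomly generated data; the dimensions, the seed, and the way the second basis is produced (rotating the first one by a random orthogonal matrix) are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, k = 7, 5, 3                              # arbitrary sizes
M = rng.normal(size=(n, d))

# Two different orthonormal bases of the same k-dimensional subspace of R^d
U, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal basis of a random subspace
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))   # random k x k orthogonal matrix
V = U @ Q                                      # another orthonormal basis of the same span

print(sum(np.linalg.norm(M @ U[:, i])**2 for i in range(k)))
print(sum(np.linalg.norm(M @ V[:, i])**2 for i in range(k)))  # should agree
```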