H: Small-angle approximation of $ \frac{\sin^2 x}{x^2 \sqrt{1-\frac{\sin^2 x}{3}}} $ I need to show the following: $$ \frac{\sin^2 x}{x^2 \sqrt{1-\frac{\sin^2 x}{3}}} \approx 1-\frac{x^2}{6} $$ when $ x $ is small. I think this problem is trickier than most other questions like it because in the original source there is a comment saying "if you got $ 1+\frac{x^2}{6} $ [what I got] then think again!". My attempt was: When $ x $ is small, $ \sin x \approx x $ so $$ \frac{\sin^2{x}}{x^2 \sqrt{1-\frac{\sin^2 x}{3}}} = \frac{x^2}{x^2 \sqrt{1-\frac{x^2}{3}}} = \left ( 1 - \frac{x^2}{3} \right )^{-\frac{1}{2}} $$ Then using the binomial series approximation, $$ \left ( 1 - \frac{x^2}{3} \right )^{-\frac{1}{2}} \approx 1 - \frac{1}{2}\left ( -\frac{x^2}{3} \right ) + ... = 1 + \frac{x^2}{6} $$ ...and so it looks like I've fallen into whatever trap the question set. Where is my error? AI: $${\sin x\over x}\approx1-{1\over6}x^2$$ so $${\sin^2x\over x^2}\approx\left(1-{1\over6}x^2\right)^2\approx1-{1\over3}x^2$$ not just $1$. We get $${\sin^2x\over x^2\sqrt{1-{\sin^2x\over3}}}\approx\left(1-{1\over3}x^2\right)\left(1+{1\over6}x^2\right)\approx1-{1\over6}x^2$$
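A quick numerical check of this expansion, assuming NumPy is available:

```python
import numpy as np

x = np.array([0.05, 0.1, 0.2])
f = np.sin(x)**2 / (x**2 * np.sqrt(1 - np.sin(x)**2 / 3))
print(f)             # the function itself
print(1 - x**2 / 6)  # matches f to the shown digits...
print(1 + x**2 / 6)  # ...while this does not
```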
H: Infer the second isomorphism theorem from the first one I'm trying to infer the second isomorphism theorem on groups from the first one. Could you please verify if my attempt is fine or contains logical mistakes? Let $G$ be a group, $S \le G$, and $N \trianglelefteq G$. Then $(S N) / N \cong S /(S \cap N)$. My attempt: Consider the map $\phi: S N \to S /(S \cap N), \quad sn \mapsto s(S \cap N)$. Let $s_1, s_2 \in S$ and $n_1,n_2 \in N$ such that $s_1 n_1 = s_2 n_2$. We have $s_1 n_1 = s_2 n_2 \iff s_1^{-1} s_2 =n_1 n_2^{-1} \implies s_1^{-1} s_2 \in S \cap N$ $\iff s_1(S \cap N) = s_2(S \cap N)$. Hence $\phi$ is well-defined. Clearly, $\phi$ is surjective. It follows from $S \le G$ and $N \trianglelefteq G$ that $(S\cap N) \trianglelefteq S$. Hence $S /(S \cap N)$ is a group. Next, we show that $\phi$ is a homomorphism. Let $s_1, s_2 \in S$ and $n\in N$. It follows from $N \trianglelefteq G$ that $ns_2 = s_2 k$ for some $k \in N$. Hence $\phi ((s_1n)(s_2n)) = \phi (s_1 s_2kn) = (s_1s_2) (S \cap N) =$ $(s_1(S \cap N)) (s_2 (S \cap N))=\phi (s_1n) \phi (s_2n)$. It's not hard to verify that $\operatorname{ker} \phi := \{sn \in SN \mid (s,n) \in S \times N \text{ and } s (S \cap N) = S \cap N\} = N$. By first isomorphism theorem on groups, the result then follows. AI: It's perfect. Maybe it would be slightly easier to deduce the same starting from the other direction, with a map $S\to SN/N$.
H: Find $\theta$ such that $W = X \cosθ +Y \sinθ \text{ and } Z = X \cosθ −Y \sinθ$ are independent. Find $\theta$ such that $W = X \cosθ +Y \sinθ \text{ and } Z = X \cosθ −Y \sinθ$ are independent. It is given that $X$ and $Y$ are jointly normal, each with mean $0$ and variance $1$. I have shown that $$W \sim N(0, 1)$$ $$Z \sim N(0, 1)$$ Now, how can I proceed from here? AI: Since $X$ and $Y$ have a bivariate normal distribution, we can use the following theorem. Theorem: Let $(X_1,X_2,...,X_n)$ be an $n$-dimensional RV with a normal distribution. Let $Y_1,Y_2,...,Y_k$, $k\leq n$, be linear functions of $X_j (j = 1,2,...,n)$. Then $(Y_1,Y_2,...,Y_k)$ also has a multivariate normal distribution. $W$ and $Z$ are linear combinations of $X$ and $Y$, so by the theorem $(W,Z)$ is jointly normal, and since both have mean $0$, they are independent exactly when $\mathbb{E}[WZ] = 0$. Expanding the product (the $XY$ cross terms cancel): $$\mathbb{E}[WZ] = \mathbb{E}[X^2\cos^2\theta - Y^2\sin^2\theta] = 0$$ $$\tan^2\theta = {\mathbb{E}[X^2] \over \mathbb{E}[Y^2]}$$ $$\theta = \tan^{-1}\bigg[\sqrt{\mathbb{E}[X^2] \over \mathbb{E}[Y^2]}\bigg]$$ With $\mathbb{E}[X^2] = \mathbb{E}[Y^2] = 1$ this gives $\theta = \pi/4$.
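A Monte Carlo sanity check of the $\theta=\pi/4$ conclusion, assuming NumPy; the correlation $\rho$ between $X$ and $Y$ below is an arbitrary illustrative choice, since the cross terms cancel regardless of it:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.6, 10**6

# jointly normal X, Y with mean 0, variance 1, correlation rho
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

theta = np.pi / 4
w = x * np.cos(theta) + y * np.sin(theta)
z = x * np.cos(theta) - y * np.sin(theta)

print(np.corrcoef(w, z)[0, 1])  # ~0 up to Monte Carlo noise
```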
H: Trace norm of a trace class operator exercise in Conway Exercise in Conway's Functional Analysis book: Let $T$ be a trace class operator on a Hilbert space ${\cal H}$. Prove: $$\sup\{|\mbox{tr}(CT)|:\ C\ \mbox{is compact}, ||C||\leq 1\}=||T||_1.$$ Here, $||T||_1=\mbox{tr}[(T^*T)^{\frac{1}{2}}]$ is the trace norm. I can prove that $\leq$ holds. I can prove the equality in the finite dimensional case using polar decomposition. This led me to believe that a polar decomposition argument should also work for the infinite dimensional case. However, I am not sure how to use the compactness assumption. Any hints for the $\geq $ inequality? AI: Let $|T|=(T^*T)^{1/2}$. Then $\|\,|T|\,\|_1=\|T\|_1$. As $|T|$ is compact and positive, we can write via the Spectral Theorem $$ |T|=\sum_{j=1}^\infty\lambda_jP_j, $$ where $\{P_j\}$ are rank-one and $\lambda_j\geq0$ for all $j$. Note that $\operatorname{Tr}(|T|)=\sum_j\lambda_j$. Let $Q_k=\sum_{j=1}^kP_j$. Then $$ \operatorname{Tr}(Q_k|T|)=\sum_{j=1}^k\lambda_j. $$ So, given $\varepsilon>0$, we can choose $k$ big enough such that $\sum_{j>k}\lambda_j<\varepsilon$. Then $$ \operatorname{Tr}(Q_k|T|)>\operatorname{Tr}(|T|)-\varepsilon=\|T\|_1-\varepsilon. $$ Now using the Polar Decomposition, $T=V|T|$ for a partial isometry $V$, and $|T|=V^*T$. Then $$ \operatorname{Tr}(Q_kV^*T)>\|T\|_1-\varepsilon. $$ The operator $Q_kV^*$ is compact because $Q_k$ is finite-rank, and $\|Q_kV^*\|\leq\|Q_k\|\,\|V^*\|=1$.
H: Albert Einstein's quotation on the nature of mathematics How can it be that mathematics, being after all a product of human thought independent of experience, is so admirably adapted to the objects of reality$?$ The above quotation is by Albert Einstein. I was wondering what he really meant by that$?$ So if we consider a mathematical system or a mathematical model corresponding to a physical system and then we develop that mathematical system independent of that physical system, then still those two systems could be related$?$ Is it where the power of mathematics lies$?$ AI: I don't think anyone will be able to say with certainty exactly what Einstein meant, but we can take a stab at it, and in my opinion I think that what you said is essentially what he was trying to get at. I think it's what makes Mathematics so amazing. It's like a magic trick, but the magic doesn't wear off "just because you know how it's done". Mathematics has for many, many years now been developed independently of the Sciences. Originally, Mathematics would have been deep-rooted in classical, Physical thinking - for example, the invention of the Natural Numbers. It's very easy to see the usefulness in being able to count objects for trading things, for example (and hence so would being able to add and subtract!). But the idea of a number like $2$ "existing" as a separate thing would not have been a thing at all - it would have been purely a label. No one can go and point out the number $2$ and say "there it is!". It really is purely in your mind - a construct. Yet we still develop more and more theories, purely on imaginary objects that exist only within our imagination - and yet a lot of it can be used to describe the world around us. I like to think of Mathematics as a sort of "artificial sense" - you can see, hear, taste, smell - in the same way Mathematics allows you to logically deduce things about your environment. Things that are not calculable from our standard Biological senses. It really is quite absurd when you think about it in this way: $${\text{Reality}\rightarrow\text{Imaginary place in your mind}\rightarrow \text{Reality}}$$ A counterargument to this is "well, Mathematics is Fundamentally deep-rooted in reality, and so are we - so of course it works out nicely!". Which is true. But personally, this doesn't detract from the beauty of it at all for me. That all being said, Mathematics is separate from Physics. As Mathematicians, we consider problems and create things all the time that are not necessarily immediately Physically applicable. We do this because (a) it can lead to the creation of more Mathematics, and this could later be helpful in the real world (this is the beauty we believe Einstein is referring to) and (b) because it's a challenge, and physically applicable or not - the results are usually still beautiful.
H: Partition of $\mathbb{N}$ into APs A continuation of this question : How does one classify the set of all partitions (into disjoint parts) $\{AP_1, AP_2, AP_3\cdots\}$ of $\mathbb{N}$ into APs such that all the APs have a distinct common difference? One such example was given by @HagenVonEitzen in the linked post: $$\left\{\{(2k+1)2^{i-1} \mid k\in \{0,1,2,\cdots\}\}\ \mid\ i\in\{1,2,3\cdots\}\right \}$$ Naively decomposing numbers into powers of $k$ and a $k$−free part doesn't work because I then end up with APs that have the same common difference. AI: There must be one AP beginning at $1$, say $AP_1 = \{n: n \equiv 1 \mod m_1\}$. To make things nontrivial, $m_1 \ge 2$. The first natural number not in $AP_1$ is $2$, so it must be in $AP_2 = \{n: n \equiv 2 \mod m_2\}$. In order for this to be disjoint from $AP_1$, we need $\gcd(m_1, m_2) > 1$. We also want $m_2 \ne m_1$. This will also avoid making the process stop after two AP's, which it would if $m_2 = m_1 = 2$. Proceed inductively: after $k$ steps, we will have $AP_1, \ldots, AP_k$ with differences $m_1, \ldots, m_k$. Let $a_{k+1}$ be the first natural number not included in any of these. It will be in $AP_{k+1} = \{n: n \equiv a_{k+1} \mod m_{k+1}\}$. In order to avoid intersecting previous AP's, we need, for $1 \le j \le k$, $\gcd(m_j, m_{k+1})$ not to divide $a_{k+1}-a_j$. Note that $a_{k+1} - a_j$ is not a multiple of $m_j$ (since by assumption $a_{k+1} \notin AP_j$) so we can always take $m_{k+1}$ to be a multiple of $\text{lcm}(m_1, \ldots, m_k)$. Moreover, we want $m_{k+1}$ distinct from $m_1, \ldots, m_k$, and we want to make sure we haven't exhausted all of $\mathbb N$: if that would happen for a particular candidate for $m_{k+1}$, just take a larger candidate.
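A quick computational check of Hagen von Eitzen's example: every positive integer $n$ lies in exactly one of the APs $\{(2k+1)2^{i-1} : k\ge 0\}$, namely the one with $i-1$ equal to the 2-adic valuation of $n$, and the common differences $2^i$ are pairwise distinct. A short brute-force verification, assuming Python:

```python
# Each n in 1..N should lie in exactly one AP_i = {(2k+1)*2^(i-1) : k >= 0},
# i.e. n / 2^(i-1) should be an odd integer for exactly one i.
N = 10**4
for n in range(1, N + 1):
    hits = [i for i in range(1, n.bit_length() + 1)
            if n % 2**(i - 1) == 0 and (n // 2**(i - 1)) % 2 == 1]
    assert len(hits) == 1, n
print(f"each n in 1..{N} lies in exactly one AP")
```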
H: Are the singularities of $f(z) = \frac{z^2+1}{z^2(z+1)}$ removable? Looking at the function $f(z) = \frac{z^2+1}{z^2(z+1)}$, I have found the singularities to be at $z=0$ and $z=-1$. My question is if they are removable. I expanded this into the Laurent series $\frac{z^2+1}{z^2}-\frac{z^2+1}{z}+(z^2+1)-z(z^2+1)+...$, but from here I am not completely clear on how to tell if the singularity is removable. I believe that we are supposed to find the common coefficient, and if that coefficient is $0$ for all terms then a singularity is removable. If so, what is the common coefficient and which singularity does that make removable? $z=-1$? AI: The singularities at $z=0$ and $z=-1$ are not removable. They are pole singularities. If a function $f$ has a removable singularity at $z=z_0$, then there is an analytic function at $z_0$ that is equal to $f$ for all $z\ne z_0$. A point $z_0$ is a pole of $f$ if there exists a function $g$ that is analytic at $z_0$, with $g(z_0)\ne0$, such that $f(z) = g(z) / (z − z_0)^n$ for some positive integer $n$. The smallest $n$ for which $f(z)=g(z)/(z-z_0)^n$ is called the order of the pole. Here, the pole at $z=0$ is a second order pole while the pole at $z=-1$ is a first order pole. That is, we have $$\lim_{z\to 0}z^2 \frac{z^2+1}{z^2(z+1)}=1$$ and $$\lim_{z\to -1}(z+1)\frac{z^2+1}{z^2(z+1)}=2$$
H: Does $e^{-kx} = -ke^x$? I have been working on a homework problem where I need to integrate $-ke^x dx$. I decided to factor out the $-k$ and get $e^x + C$ and then multiply $-k$ back in. However, I noticed that the back of the book gets $e^{-kx} + C$. I think that if one was to differentiate this, they would in fact get $-ke^x$, so it also seems valid, but I can't figure out the steps or rules that I would use to get this as my answer. I can't think of any rule that I already have learned that says however that $e^{-kx}$ should equal $-ke^x$. Did I do something wrong in either of these integrations, or is this actually correct? it can make a pretty big difference when it is involved in the context of a larger problem. AI: Note that if you differentiate $e^{-kx}$ you get $-ke^{-kx}$, and not $-ke^{x}$. If the problem is stated as $\int -k e^{x}\,dx$ then your answer of $-ke^{x} + C$ is correct. If the problem is written as $\int -ke^{-kx}\,dx$ then the book's answer of $e^{-kx} + C$ is correct. My guess is you either misread the problem or there is a typo in the book somewhere. To answer your question in the title, no, $e^{-kx}$ and $-ke^{x}$ are not equal in general.
H: Absolute continuity of a function on $[-1,1]$ Show that $f(x)=x^2 \cos\left(\dfrac{\pi}{2x}\right)$ when $x\neq 0$, and $0$ when $x=0$, is absolutely continuous on $[-1,1]$. I'm honestly not sure how to get this one off the ground. I thought about maybe trying to prove that it's Lipschitz, but 1) I'm not even convinced that's true, and 2) if it is true, I can't get that off the ground either. Any thoughts would be greatly appreciated. Thanks in advance. AI: To prove that $f$ is Lipschitz on $[-1,1]$, you just need to show that the derivative $f'$ is bounded on $[-1,1]$. For $ x=0 $, $$f'(0)=\lim_{x\to 0}\frac{f(x)-0}{x}$$ $$=\lim_{x\to 0}x\cos(\frac{\pi}{2x})=0$$ For $ x\ne 0$, $$f'(x)=2x\cos(\frac{\pi}{2x})+\frac{\pi}{2}\sin(\frac{\pi}{2x})$$ $$\implies |f'(x)|\le 2|x|+\frac{\pi}{2}\le 2+\frac{\pi}{2}$$ Since $f$ is differentiable with $|f'|\le 2+\frac{\pi}{2}$ everywhere on $[-1,1]$, the Mean Value Theorem shows that $f$ is Lipschitz with this constant, and every Lipschitz function is absolutely continuous.
H: Howard Eves' Introduction to the History of Mathematics -- editions and years published? Can anyone point me towards where I may be able to find a list of all the editions of Howard Eves' Introduction to the History of Mathematics? I know little about this book; I understand that it's used in some American universities as a text for their undergraduate History of Mathematics courses. All the online sources may give the year of publication, or the edition, but apart from 6th edition 1990 I have no idea what was published when. AI: WorldCat appears to give $1953$ for the first edition, $1964$ for the revised edition, $1969$ for the third edition, $1976$ for the fourth edition, $1983$ for the fifth edition, and $1990$ for the sixth edition. There is one listing for a $1989$ sixth edition, but I suspect that it’s simply an error on someone’s part.
H: Euler ODE - particular solution problem I have this ODE: $$x^2y'' -xy' +y = 6x\ln(x)$$ Once I solve the homogenous part I get that $$y_h = xC_1 + x\ln(x)C_2$$ But I am having problems with the particular solution. When I try to find it using variation of constants: $$C_1'x + C_2'x\ln(x)=0$$ $$C_1' + C_2'(1+\ln(x))=6x\ln(x)$$ The Wronskian is $x$ and if I solve the system of equations I get that $C_1'=-6x\ln^2(x)$ and $C_2' = 6x\ln(x)$ If I integrate both and plug them in the upper solution and add them, I get $$\frac{3x^3(\ln(x)+1)}{2} + C_1x + C_2x\ln(x)$$ The solution provided by Wolfram Alpha is $$C_1x + C_2x\ln(x)+x\ln^3(x)$$ Where did I go wrong? I solved the system of equations and I checked it on symbolab, I got $C_1$ and $C_2$ correctly. What did I do wrong? Thanks! AI: You got the homogeneous and Wronskian correct. We have $$y'' - \dfrac{y'}{x} + \dfrac{y}{x^2} = \dfrac{6 \ln(x)}{x}$$ Using Variation of Parameters (notice that the form to apply VoP is $y'' + \ldots$), from the homogeneous solution and ODE, we have $$y_1 = x, y_2 = x \ln(x), f(x) = \dfrac{6 \ln(x)}{x}$$ So, we get $$\begin{align} w_1 &= -\int \dfrac{f(x) y_2}{W}~dx = -\int \dfrac{6 \ln^2(x)}{x}~dx = - 2 \ln^3(x) \\ w_2 &= \int \dfrac{f(x) y_1}{W}~dx = \int \dfrac{6 \ln(x)}{x}~dx = 3 \ln^2(x) \end{align}$$ The solution is given by $$y(x) = y_h(x) + y_1 w_1(x) + y_2 w_2(x)$$
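A quick symbolic cross-check of this solution, assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x**2 * y(x).diff(x, 2) - x * y(x).diff(x) + y(x), 6 * x * sp.log(x))
print(sp.dsolve(ode, y(x)))
# expect y(x) = C1*x + C2*x*log(x) + x*log(x)**3 (up to how the constants are grouped)
```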
H: Find the singularities of $f(z) =\frac{1}{(2\sin z - 1)^2}$. Find the singularities of $f(z) =\frac{1}{(2\sin z - 1)^2}$. I am just learning about singularities and I was wondering if someone could give me feedback on my work. So I think, for this function, that there are singularities at $z=\frac{\pi}{6}+2k\pi,\frac{5\pi}{6}+2k\pi$, would this be correct? Additionally, I am classifying them as essential and nonremovable. Is this also correct? To determine that they were nonremovable, I took the limit of the function approaching the singularities and found that they tended towards infinity. AI: This $f$ has a singularity anywhere \begin{align*} 2 \sin z - 1 &= 0 \text{,} \\ \sin z &= 1/2 \text{.} \end{align*} This has infinitely many solutions in the reals, \begin{align*} z \in &\{ \arcsin(1/2) + 2 \pi k : k \in \Bbb{Z} \} \cup \\ \quad &\{\pi - \arcsin(1/2) + 2 \pi k : k \in \Bbb{Z}\} \\ = & \{ \pi/6 + 2 \pi k : k \in \Bbb{Z} \} \cup \\ \quad &\{5\pi/6 + 2 \pi k : k \in \Bbb{Z}\} \text{.} \end{align*} It turns out these are the only two families of solutions, but you have done nothing to show that there are not more solutions. To do so, start with, for $x,y\in\Bbb{R}$, $$ \sin(x+\mathrm{i} \, y) = \sin(x) \cosh(y) + \mathrm{i} \cos(x) \sinh(y) \text{.} $$ Then talk about the zeroes of the real-valued cosine and hyperbolic sine (since $1/2$ has imaginary part $0$), which will restrict $y$ so that $\cosh = 1$, forcing the real solutions to be the only solutions. If you have shown that the limits are infinity approaching each of these infinitely many singularities, then you have shown that they are not removable. However, they are not essential either: $2\sin z - 1$ has a simple zero at each of these points, so $(2\sin z - 1)^2$ has a double zero there, and each singularity of $f$ is a pole of order $2$. I'm not convinced you have understood the definition/nature of essential singularities.
H: Sufficient to show the cases when $x = 0$, $y=0$, and $(x,y) \ne (0,0)$? For a fixed $k \in \mathbb{N}$, define $f_k: \mathbb{R}^2 \rightarrow \mathbb{R}$ by: $$ f_k(x,y)= \begin{cases} \dfrac{x^2(x+y^2)}{x^2+y^{2k}} &, (x,y)\neq (0,0)\\ 0 &, (x,y)=(0,0)\\ \end{cases} $$ Show that $f_1$ is not differentiable at $(0,0)$, but $f_k$ is differentiable at $(0,0)$ for each $k\geq 2$. (Hint: At some point it may help to separately consider the cases $|x|\geq |y|^k$ and $|x|\leq |y|^k$.) So I was able to determine that if $f_k$ is differentiable, the derivative is $(1,0)$. So after some algebra, I see that I have to show that $\left|\dfrac{x^2y^{2} - x y^{2k}}{(x^2 + y^{2k})\sqrt{x^2+y^2}}\right| \to 0$ as $(x,y) \to (0,0)$. When evaluating this limit, is it sufficient to show the cases when $x = 0$, $y=0$, and $(x,y) \ne (0,0)$? Why or why not? AI: So far you have shown that $$\lim\limits_{(x,y)\to(0,0)}|\frac{f_k(x,y)-f_k(0,0)-\frac{\partial f_k}{\partial x}(0,0)x-\frac{\partial f_k}{\partial y}(0,0)y}{\sqrt{x^2+y^2}}|=\lim\limits_{(x,y)\to(0,0)}|\frac{x^2y^2-xy^{2k}}{(x^2+y^{2k})\sqrt{x^2+y^2}}|$$ The function is differentiable at $(0,0)$ if and only if the above limit is zero. Note that for the case $k=1$ the function $f_1$ indeed isn't differentiable since along the path $x=y$ we have $$\lim\limits_{(y,y)\to(0,0)}|\frac{x^2y^2-xy^2}{(x^2+y^2)\sqrt{x^2+y^2}}|=\lim\limits_{(y,y)\to(0,0)}|\frac{y^4-y^3}{2\sqrt2y^3}|=\frac{1}{2\sqrt{2}}\neq0$$ Now suppose that $k\geq2$. Then, using the standard inequality $|\frac{x-y}{2}|\leq\sqrt{\frac{x^2+y^2}{2}}$, we have $$|\frac{x^2y^2-xy^{2k}}{(x^2+y^{2k})\sqrt{x^2+y^2}}|\leq\sqrt{2}\cdot\sqrt{\frac{x^4y^4+x^2y^{4k}}{(x^2+y^{2k})^2(x^2+y^2)}}\leq\sqrt{2}\cdot\sqrt{\frac{x^2y^4(x^2+y^{2k})}{(x^2+y^{2k})^2(x^2+y^2)}}\leq\sqrt{2}|y|$$ and so $$\lim\limits_{(x,y)\to(0,0)}|\frac{x^2y^2-xy^{2k}}{(x^2+y^{2k})\sqrt{x^2+y^2}}|\leq\lim\limits_{y\to0}\sqrt{2}|y|=0,$$ i.e. $f_k$ is differentiable at $(0,0)$ for $k\geq2$.
H: Increasing convergence of sequence bounded below. Assume that you have a measure space $(A,\mathcal{A},\mu)$ and a sequence of measurable functions $f_n : A \rightarrow \mathbb{R}$ that is increasing, with each function bounded below by a common value $-M$. Do we then have that $\lim\limits_{n \rightarrow \infty}\int\limits_{A}f_n(x)d\mu=\int\limits_{A}\lim\limits_{n \rightarrow \infty}f_n(x)d\mu$? I am able to prove this for a finite measure space by considering the non-negative and increasing sequence $\{f_n+M\}$ and using the monotone convergence theorem. But does it hold for a measure space with infinite measure? The reason I don't get it to work with a general measure space is that the integral of the constant function $M$ may not be finite, so I get into a situation where I can't cancel the parts. AI: Not true. On the real line with Lebesgue measure let $f_n(x)=-1$ for $x \geq n$ and $0$ for $x <n$. Then $f_n \geq -1, (f_n)$ is increasing and $\lim \int f_n=-\infty \neq 0 =\int \lim f_n$
H: Convex function on closed bounded Interval Implies Lipschitz counterexample I am considering a convex function $f:[a,b] \rightarrow \mathbb{R}$ and have been asked to show that $f$ is absolutely continuous on $[a,b]$. I've attempted to use the Chordal Slope Theorem to show that $f$ is Lipschitz on $[a,b]$, but have only been able to show that $f$ is Lipschitz for $[c,d] \subseteq (a,b)$. I'm beginning to think that the problem as stated is false, but can't think of a counterexample. I appreciate any guidance. AI: A convex function on $[0,1]$ that is not continuous (and therefore not absolutely continuous) is $$ f(x) = \cases{1 & if $x=0$\cr 0 & otherwise}$$
H: MSE for MLE of normal distribution's ${\sigma}^2$ So I've known $MLE$ for ${\sigma}^2$ is $\hat{{\sigma}^2}=\frac{1}{n}\sum_{i=1}^{n} (X_{i} -\bar{X})^2$, and I'm looking for $MSE$ of $\hat{{\sigma}^2}$. But I'm having trouble to get the result. What I tried goes like below: By definition, $MSE$ = $E[(\hat{{\sigma}^2}$ - ${\sigma}^2$)$^2$], which is = $Var(\hat{{\sigma}^2} - {\sigma}^2)+(E(\hat{{\sigma}^2} - {\sigma}^2))^2$ = $Var(\hat{{\sigma}^2})-Var({{\sigma}^2})+(E(\hat{{\sigma}^2} - {\sigma}^2))^2$. From here, I tried to find $Var(\hat{{\sigma}^2})$, which is = $Var(\frac{1}{n}\sum_{i=1}^{n} (X_{i} -\bar{X})^2$) = $\frac{1}{n^2}Var(\sum_{i=1}^{n} X_{i}^2 -n\bar{X}^2)$ = $\frac{1}{n^2}(\sum_{i=1}^{n} Var (X_{i}^2) -n^2Var(\bar{X}^2))$ But I'm not sure how to get $Var (X_{i}^2)$ and $Var(\bar{X}^2)$. I tried $Var (X_{i}^2)$ = $E(X_i^4) - (E(X_i^2))^2$, But I'm not quite sure what $E(X_i^4)$ would be. Could anyone help me with this? Am I on the correct path to solve this? Thanks in advance! AI: With $\displaystyle \widehat{\sigma^2} = \frac 1 n \sum_{i=1}^n \left( X_i - \overline X \right)^2$ you have $\displaystyle\frac{n\widehat{\sigma^2}}{\sigma^2} \sim \chi^2_{n-1},$ so $$ \operatorname{var}\left( \,\widehat{\sigma^2} \, \right) = \frac{\sigma^4}{n^2} \operatorname{var}(\chi^2_{n-1}) = \frac{\sigma^4}{n^2}\cdot 2(n-1). $$
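For completeness, the calculation can be finished from here: the same chi-squared fact gives $\operatorname E\left(\,\widehat{\sigma^2}\,\right)=\frac{(n-1)\sigma^2}{n}$, so the bias is $-\frac{\sigma^2}{n}$, and therefore $$ \operatorname{MSE}\left(\,\widehat{\sigma^2}\,\right)=\operatorname{var}\left(\,\widehat{\sigma^2}\,\right)+\left(\operatorname{bias}\right)^2=\frac{2(n-1)\sigma^4}{n^2}+\frac{\sigma^4}{n^2}=\frac{(2n-1)\sigma^4}{n^2}. $$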
H: Is it provable in ZF that there is no nontrivial elementary embedding $\pi: V\to V$? Is it open whether it can be proved in $ZF$ alone that if $\pi:V\to M$ is a nontrivial elementary embedding, then $M\not=V$? AI: The short version is that you're asking whether Reinhardt cardinals are consistent with $\mathsf{ZF}$, and this is totally open right now. Indeed, even stronger large cardinal hypotheses are currently not known to be inconsistent with $\mathsf{ZF}$ (e.g. super-Reinhardt, Berkeley, etc.). The longer version is that what you've written doesn't actually make sense in the rather restricted language of $\mathsf{ZF}$, since we can't refer to (let alone quantify over) class functions from the universe to itself. We can handle this in one of a few different ways: Work in an appropriate class theory which is a conservative extension of $\mathsf{ZF}$, as $\mathsf{NBG}$ is to $\mathsf{ZFC}$. This is in some ways the most natural approach, and matches if I recall correctly how the argument was originally phrased ("$\mathsf{NBG}$ proves that there is no nontrivial elementary embedding from $V$ to $V$"). Look at the "set-sized" version of Reinhardt cardinals, namely critical points of nontrivial elementary embeddings of the form $V_{\lambda+2}\rightarrow V_{\lambda + 2}$. (Note that the "combinatorial core" of the Kunen inconsistency argument is in fact a $\mathsf{ZFC}$-proof that there is no nontrivial elementary embedding from $V_{\lambda+2}$ into $V_{\lambda+2}$ for any $\lambda$.) While this shift may seem unnatural, note that it's actually quite useful in that it reveals new concepts as being of potential interest: e.g. in my opinion it's really only in light of the $V_{\lambda+2}$-analysis that the rank-into-rank cardinals emerge as natural objects. Work in a version "$\mathsf{ZF(j)}$" of $\mathsf{ZF}$ which has a new constant naming a putative nontrivial elementary embedding from $V$ into itself and try to derive a contradiction. The key subtlety here is to get the axioms of this theory right - it's crucial that we extend the Separation and Replacement schemes to formulas involving $j$. (Specifically, the non-extended version of $\mathsf{ZFC}(j)$ is consistent relative to $\mathsf{ZFC}$ + "$0^\sharp$ exists," which can look like a $\mathsf{ZFC}$-proof of the nonexistence of $0^\sharp$.) Whichever approach we choose, though, the answer is: it's currently open.
H: Independence of two binomial variables I am trying to figure out whether these two variables are independent or not, and why. In the situation of the problem I was given, the number of clients $N$ follows a Poisson distribution of mean $c$. The number of successful clients $X$ has a probability $p$. The number of events is unlimited and each client is independent. I understand that both the number of successful clients $X$ and the number of unsuccessful clients $Y$ follow a binomial distribution, and that the total number of clients $N$ is the addition of those two variables: $X\sim\operatorname{Bin}(N,p)$ ; $Y\sim\operatorname{Bin}(N,1-p)$ ; $N=X+Y$. Are $X$ and $Y$ independent, and why? AI: Rather than $X\sim\operatorname{Bin}(N,p)$ I would write $X\mid N\sim\operatorname{Bin}(N,p),$ and similarly $Y\mid N\sim\operatorname{Bin}(N,1-p).$ You have \begin{align} & \Pr(X=x\ \&\ Y=y) = \operatorname E(\Pr(X=x\ \&\ Y=y\mid N)) \\[8pt] \text{and } & \Pr(X=x\ \&\ Y=y\mid N=n) = 0 \text{ unless } n = x+y. \\[8pt] \text{So } & \operatorname E(\Pr(X=x\ \&\ Y=y\mid N)) \\[8pt] = {} & \sum_{n=0}^\infty \Pr(X=x\ \&\ Y=y\mid N=n)\Pr(N=n) \\[8pt] = {} & \Pr(X=x\ \&\ Y=y\mid N=x+y)\Pr(N=x+y) \\ & \text{(All other terms in the sum vanish.)} \\[8pt] = {} & \Pr(X=x\mid N=x+y) \Pr(N=x+y) \\ & \text{(since conditional on $N=x+y$, the} \\ & \phantom{(}\text{events $X=x$ and $Y=y$ are the same)} \\[8pt] = {} & \binom{x+y} x p^x(1-p)^y \cdot \frac{\lambda^{x+y} e^{-\lambda}}{(x+y)!} \\[8pt] = {} & \frac{(p\lambda)^x e^{-p\lambda}}{x!} \cdot \frac{((1-p)\lambda)^y e^{-(1-p) \lambda}}{y!} \end{align} So $X,Y$ are independent Poisson-distributed random variables with expected values $p\lambda$ and $(1-p)\lambda$ (here $\lambda$ is the Poisson mean of $N$, called $c$ in the question).
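A Monte Carlo sanity check of this Poisson-splitting fact, assuming NumPy; the values of $\lambda$ and $p$ are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, p, n = 3.0, 0.4, 10**6

N = rng.poisson(lam, n)
X = rng.binomial(N, p)   # successes among N clients
Y = N - X                # failures

# Marginals should be Poisson(p*lam) and Poisson((1-p)*lam), and X, Y uncorrelated
print(X.mean(), X.var())        # both ~ p*lam = 1.2
print(Y.mean(), Y.var())        # both ~ (1-p)*lam = 1.8
print(np.corrcoef(X, Y)[0, 1])  # ~ 0
```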
H: Composing trigonometric functions Let $f(x)=\sin(x)$. If $g$ and $h$ are functions on $\mathbb{R}$ such that $g(f(x))= h(f(x))$, can we conclude $g=h$ ? Can we actually compare $g$ and $h$? I am confused. Please, help me. AI: The range of $\sin(x)$ is only $[-1,1]$, not all of $\mathbb R$. The condition $g(f(x))=h(f(x))$ for all $x$ therefore only forces $g$ and $h$ to agree on $[-1,1]$, and it is easy to come up with functions $g$ and $h$ that coincide on $[-1,1]$ but differ elsewhere on $\mathbb R$, so the answer is no. For example, $g(x)=x$ and $h(x)=\max(-1,\min(1,x))$ agree on $[-1,1]$ but not for $|x|>1$. Note that both of these are continuous, so even assuming continuity of $g$ and $h$ would not change the answer: they are still only constrained on the range of $\sin$.
H: How to integrate $\frac{\int _{-w}^w\:e^{-\frac{x^2}{w^a}}dx}{\int _{-\infty \:}^{\infty \:}\:e^{-\frac{x^2}{w^a}}dx}$ I have found $\frac{\frac{\sqrt{\pi }}{2}\text{erf}\left(\sqrt{w}\right)-\frac{\sqrt{\pi }}{2}\text{erf}\left(-\sqrt{w}\right)}{\pi ^{\frac{1}{2}}}$ for $\frac{\int _{-w}^w\:e^{-\frac{x^2}{w}}dx}{\int _{-\infty }^{\infty }\:e^{-\frac{x^2}{w}}dx}$. However, I need to find how raising $w$ to the power of $a$ will change the solution. AI: $$ \frac{ \displaystyle \int_{-w}^w e^{-x^2/w^a} \, dx}{ \displaystyle \int_{-\infty}^\infty e^{-x^2/w^a} \, dx}$$ $$ \begin{align} & u = x/w^{a/2} \\ {} \\ & u^2 = x^2/w^a \\ {} \\ & w^{a/2} \, du = dx \end{align} $$ As $x$ goes from $-w$ to $w,\,\,\,$ $u$ goes from $-w^{1-a/2}$ to $w^{1-a/2}.$ Thus we have $$ \frac{\displaystyle\int_{-w^{1-a/2}}^{w^{1-a/2}} e^{-u^2}\, du}{ \displaystyle\int_{-\infty}^{+\infty} e^{-u^2} \, du }. $$ The factor $w^{a/2}$ in the numerator and denominator has canceled.
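For completeness, the last ratio is a standard error-function value: $$ \frac{\displaystyle\int_{-w^{1-a/2}}^{w^{1-a/2}} e^{-u^2}\, du}{\displaystyle\int_{-\infty}^{+\infty} e^{-u^2}\, du} = \frac{\sqrt{\pi}\,\operatorname{erf}\left(w^{1-a/2}\right)}{\sqrt{\pi}} = \operatorname{erf}\left(w^{1-a/2}\right), $$ which for $a=1$ reduces to the $\operatorname{erf}(\sqrt{w})$ already found in the question.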
H: Geometric intuition about continuous and uniform continuous functions I'm looking for a geometric way to identify functions that are continuous but not uniformly continuous without using the definition. I can't really put my hands on a concrete difference between the two. AI: If you do something like Robinson's non-standard analysis, allowing infinitely large and infinitely small numbers, you can say $x\mapsto e^x$ fails to be uniformly continuous because when $x$ is infinitely large, an infinitely small increase in $x$ can result in an increase in $e^x$ that is not infinitely small. For example, when $e^x$ increases by $1,$ since the derivative is infinite at that point, $x$ will increase by an infinitely small amount. Likewise when $x\ne0$ is infinitely small, then $\sin\dfrac 1 x$ can go all the way from $+1$ to $-1$ with an infinitely small change in $x,$ so this function of $x$ is not uniformly continuous. If $f(x)$ changes by an infinitely small amount whenever $x$ changes from a standard real number to a number differing from that by an infinitely small amount, then $f$ is continuous. If in addition, this works not only when $x$ is a standard real number but also whenever $x$ differs from a standard real number by an infinitesimal and also whenever $x$ is an infinitely large non-standard real number, then $f$ is uniformly continuous.
H: Checking if a function defined for a convergent sequence is Riemann Integrable Let $\{a_n\}$ be a sequence of real numbers that converges to $1$. Define $f:[0,2]\to \mathbb{R} $ by $$ f(x)= \begin{cases} 1&\text{if }\, x\in \{a_n\}\\ 0& \text{otherwise}\\ \end{cases} $$ Prove $f$ is Riemann integrable on $[0,2]$ I was trying to solve this question and got stuck on exactly how to prove the function is integrable. I tried to use the fact that a convergent sequence has content $0$, and that a bounded real valued function is Riemann integrable if its discontinuities have content $0$. However, I'm not sure where to go from there. Any help or hints is greatly appreciated! AI: Given $\varepsilon>0,$ all but finitely many members of the sequence lie in the interval $[1-\varepsilon/4,\,1+\varepsilon/4],$ which has length $\varepsilon/2.$ Each of the remaining finitely many members of the sequence in $[0,2]$ lies within an open interval whose length is so small that the sum of all of those finitely many lengths does not exceed $\varepsilon/2.$ Use a partition of the interval $[0,2]$ whose points include the endpoints of all of those intervals. Then the upper sum is no more than $\varepsilon$ (and the lower sum is $0,$ since every subinterval contains points outside the countable sequence). Thus no matter how small $\varepsilon>0$ is, there is a partition whose upper sum is not more than $\varepsilon,$ so $f$ is Riemann integrable with integral $0.$
H: Question about sequence convergent to a limit point and Axiom of (Countable) Choice I read this question about how the Axiom of Countable Choice is both necessary and sufficient to show the following: If a point $a$ in a metric space $X$ is a limit point of $A\subseteq X$, then there is a sequence of points in $A-\{a\}$ converging to $a$. Is the Axiom of Countable Choice also enough if we require that the sequence be a sequence of distinct points? AI: Yes. If $(a_n)$ is a sequence in $A-\{a\}$ converging to $a$, just let $(b_m)$ be the sequence obtained from $(a_n)$ by removing duplicates (so $b_m$ is the $m$th distinct term of $(a_n)$, keeping first occurrences). Note that $(a_n)$ must take infinitely many distinct values: otherwise some value $v$ would occur infinitely often, and the corresponding constant subsequence would force $v=a$, contradicting $a_n\in A-\{a\}$. Then $(b_m)$ still converges to $a$, since it is a subsequence of $(a_n)$.
H: What exactly does the definition of a nilpotent group mean? I'm studying nilpotent and solvable groups and find it pretty hard to tell what the definition of a nilpotent group is after. For example, a group is solvable iff it has a solvable series (that is, a subnormal series such that each factor is abelian). This equivalent definition tells something clearly about the structure of the group for me. Then what about a nilpotent group? Since it's a condition stronger than solvable, in which part does it strengthen the equivalent definition above? Is there a true proposition like "a group is nilpotent iff it has a subnormal series such that each factor is abelian and something else"? AI: There are several equivalent definitions of nilpotent groups. The one most similar to the definition of solvable groups given in the OP is this. A group is nilpotent iff there exists a normal series $$1=Z_0<Z_1<Z_2... <Z_n=G$$ such that $Z_i/Z_{i-1}$ is central in $G/Z_{i-1}$ for every $i=1,..., n$. In particular the series is subnormal and all factors are abelian (because the center of any group is abelian), so nilpotent groups are solvable.
H: 2 questions in Theorem 5 of Hoffman Kunze Linear Algebra I am studying linear algebra from textbook Hoffman and Kunze and I have a question in a Theorem of Section 9.3 ( Positive Forms) Image of Theorem: Questions : why in 2nd paragraph g(X, X)$\geq$ 0 holds. Clearly, g(X, X) is a 1×1 matrix but I am unable to understand how it must be non-negative. (2) How does in next line of above question invertibility of P and X$\neq$ 0 implies ${(PX)^{*} } PX$ >0 ? AI: If I understand the notation (especially assuming that $P^{*}$ is the conjugate transpose of $P$), $PX$ represents a $n \times 1$ vector, so then $\left(PX\right)^{*}\left(PX\right)$ is the squared magnitude of that vector, and thus must be $\ge 0$ in general. If $X \ne \mathbf{0}$, then the squared magnitude must be $\gt 0$, in light of the fact that $P$ is invertible, and thus only has $\mathbf{0}$ in its null-space. I hope this helps.
H: Vector addition and scalar multiplication If $Q$ is the set of positive real numbers, then $Q^2 = \{(x,y)\mid x, y \in Q\}$, with the operations of vector addition and scalar multiplication given by the formulas $(x_1, y_1) + (x_2, y_2) = (x_1x_2, y_1y_2)$ and $ c(x, y) = (x^c, y^c)$ where $c$ is a real number, can be shown to be a vector space. Find the following vectors in $Q^2$ : the negative of $(4, 2)$, the vector $c(x,y)$ where $c= 1/3$ and $(x, y) = (9, 15)$, and the zero vector. Now I assume the question is asking to show that vector addition and scalar multiplication work for all three of the things that need to be found. I can see this works for the zero vector if we let the components of $x$ and $y$ equal $0$, since then both scalar multiplication and addition would produce the zero vector. I know that under the usual operations the negative of $(4,2)$ is $ -(4, 2)$ and $ \dfrac13(9, 15) = (3, 5)$, but I can't see how both the given formulas for vector addition and scalar multiplication work for them. Am I missing something? AI: Rough hints: The zero vector is a vector $(x_0, y_0)$ such that for all $(x,y)\in Q^2$, we have $(x_0,y_0)+(x,y)=(x,y)$. So this means $x_0x=x$ and $y_0y=y$. What do you think these $x_0, y_0$ would be? Now the negative of a vector $(x,y)$ is a vector $(x',y')$ such that $(x,y)+(x',y')=(x_0,y_0)$. This implies $xx'=x_0$ and $yy'=y_0$. Given the values of $x_0$ and $y_0$ from the previous paragraph, what can you conclude about $x'$ and $y'$. And the definition of scalar multiplication is clear enough: $\frac13(9,15)=(9^{\frac13},15^{\frac13})$. If you are stuck somewhere, feel free to ask for more details. Hope this helps.
H: How does $e^x\cdot e^X$ equal $e^{x+X}$? I know that they equal each other, but when I'm trying to prove it, something doesn't match. Please mind the difference between the two equations, one is a lowercase $x$ and the other is an uppercase $X.$ I know that the series for $e^x$ is $\sum_{n=0}^{\infty}\frac{x^n}{n!}$. So I expand $e^x$ and it becomes $1+x+\frac{1}{2}x^2+\frac{1}{6}x^3+\cdots$ and the same for $e^X$. This image shows what I'm thinking, and what happens when I multiply them by each other. But I want to go deeper and show the full equation of the numbers shown, for example, showing $\frac{1}{6}x^3$ and others as well. If I did, there would be a total of 16 numbers shown that are arranged in order of degrees. So I try to prove it like this picture. But the problem shows up when I try to prove it further. I tried proving it more, but $\frac{1}{6}(x+X)^3$ won't work. Then what should I do to make it work? Or what formula should I use? AI: Perhaps you're confused about the arrangement. We should get $e^xe^x=e^{2x}$, which can be written $$e^{2x} = 1 +(2x) + \tfrac{(2x)^2}{2!} + \tfrac{(2x)^3}{3!}+\cdots$$ $$=1 + 2x + \tfrac{4x^2}{2!} + \tfrac{8x^3}{3!} + \cdots$$ So let's multiply series: $$e^x\cdot e^x = \left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)$$ $$= 1\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right) + x\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right) + \tfrac{x^2}{2!}{\cdot\left(1 + x + \tfrac{x^2}{2!} + \tfrac{x^3}{3!} + \cdots\right)}+\cdots$$ $$= (1) + (1\cdot x + x\cdot 1) + (1\cdot \tfrac{x^2}{2!} + x\cdot x + \tfrac{x^2}{2!}\cdot 1) + \cdots $$ $$= 1 + (2)x + (\tfrac1{2!} + 1 + \tfrac1{2!})x^2 + \cdots$$ $$=1 + 2x + \tfrac4{2!}x^2 + \cdots$$ You collect the products of a fixed degree $n$ as $$1\cdot \tfrac{x^n}{n!} + x\cdot\tfrac{x^{n-1}}{(n-1)!} + \tfrac{x^2}{2!}\cdot\tfrac{x^{n-2}}{(n-2)!} + \cdots + \tfrac{x^{n-2}}{(n-2)!}\cdot \tfrac{x^2}{2!} +\tfrac{x^{n-1}}{(n-1)!}\cdot x + \tfrac{x^{n}}{n!}\cdot 1$$ $$= \left(\tfrac1{n!} + \tfrac{n}{n!} + \tfrac{n(n-1)}{2!\,n!} + \tfrac{n(n-1)(n-2)}{3!\,n!} + \cdots\right)x^n$$ Your job is to show that this is $\tfrac{2^n}{n!}x^n$ (expand $(1+1)^n$ using the binomial formula). Retry: I think you are doing it right, you just didn't collect all terms correctly. You are really just using the distributive law, like $$(a + b + c+\cdots)(\textrm{terms}) = a\cdot(\textrm{terms}) + b\cdot(\textrm{terms}) + c\cdot(\textrm{terms})+\cdots$$ For each of the products on the RHS, you need to look for results of the same degree (we're really looking at the exponential series here, of course). Constant terms only occur as $1\cdot 1$, in the first product on the RHS. Degree 1 terms occur as $1\cdot x$ or $x\cdot 1$ (in the first and second products on the RHS). Degree 2 terms occur as $1\cdot x^2$, $x\cdot x$, or $x^2\cdot 1$ (in the first, second, and third products on the RHS). Degree 3 terms occur as $1\cdot x^3$, $x\cdot x^2$, $x^2\cdot x$, or $x^3\cdot 1$ (in the 1st, 2nd, 3rd, and 4th products on the RHS). And so on.
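For completeness, the general identity with two different variables follows from the same idea (the standard Cauchy product argument): collecting the degree-$n$ terms of the product of the two series and applying the binomial theorem gives $$ e^x\cdot e^X = \sum_{n=0}^{\infty}\sum_{k=0}^{n}\frac{x^k}{k!}\cdot\frac{X^{n-k}}{(n-k)!} = \sum_{n=0}^{\infty}\frac{1}{n!}\sum_{k=0}^{n}\binom{n}{k}x^kX^{n-k} = \sum_{n=0}^{\infty}\frac{(x+X)^n}{n!} = e^{x+X}. $$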
H: Integer exponent equation Show that $$(2^a-1)(2^b-1)=2^{2^c}+1$$ doesn't have a solution in positive integers $a$, $b$, and $c$. After expansion I got $$2^{a+b}-2^a-2^b=2^{2^c}\,.$$ Any hint will be appreciated. AI: $$2^{a+b}=2^a+2^b+2^{2^c}$$ Case $1$: If $a=b$, then $$2^{2a}=2^{a+1}+2^{2^c}$$ If $a+1$ and $2^c$ are distinct, then the hamming weight on the RHS is $2$ but the hamming weight on the LHS is $1$, a contradiction. Hence we must have $a+1=2^c$, so $$2^{2a}=2^{a+2}$$ Hence $2a=a+2$, hence $a=2$; but since $a+1=2^c$, we have $3=2^c$, which is a contradiction. Case $2$: $a \ne b$, say $a > b$. If $2^c$ is not equal to $a$ or $b$, then the hamming weight on the right is $3$ but the hamming weight on the left is $1$. In order for $2^a+2^b+2^{2^c}$ to have hamming weight $1$, we need $2^{2^c}$ to merge with the smaller term, i.e. $b=2^c$ (taking $2^c=a$ instead gives $2^{a+1}+2^b$ with $b<a+1$, still hamming weight $2$), and then $2^b+2^b=2^{b+1}$ must merge with $2^a$, forcing $a=b+1$. $$2^{2b+1}=2^{b+1}+2^b+2^b=2^{b+2}$$ $$2b+1=b+2$$ $$b=1$$ Hence $c=0$, which is not positive. Hence, we do not have a positive solution.
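A brute-force sanity check over a small search range (the bounds below are arbitrary):

```python
# Search for positive-integer solutions of (2^a - 1)(2^b - 1) = 2^(2^c) + 1
sols = [(a, b, c)
        for a in range(1, 30)
        for b in range(1, 30)
        for c in range(1, 6)
        if (2**a - 1) * (2**b - 1) == 2**(2**c) + 1]
print(sols)  # expect []
```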
H: Topology in an underlying set of $X.$ Let $X$ be a topological space and let $S$ be a fixed subset of $X$. Show that $$\tau= \{A \cup (B \cap S) \mid A,B \text{ open in } X \}$$ determines another topology on the underlying set of $X.$ My attempt: Using that $A \cup (B \cap S)= (A\cup B) \cap (A \cup S)$, we see that the union and intersection of subsets of $\tau$ remain in $\tau$, and with that we see that it is a topology. I am new to this; I would appreciate it if you could tell me whether the proof is correct or where it fails, and give me some hints for learning topology. Thank you. AI: To show that $\tau$ is a topology on $X$, you must show three things: $\varnothing,X\in\tau$; for any $U,V\in\tau$, $U\cap V\in\tau$ ($\tau$ is closed under intersection); and for any $\mathscr{U}\subseteq\tau$, $\bigcup\mathscr{U}\in\tau$ ($\tau$ is closed under taking arbitrary unions). Let $\tau_0$ be the original topology on $X$. Then by definition $$\tau=\{G\cup(H\cap S):G,H\in\tau_0\}\;.$$ Since $\tau_0$ is a topology on $X$, we know that $\varnothing\in\tau_0$, so for each $U\in\tau_0$ we know that $$U\cup(\varnothing\cap S)=U\cup\varnothing=U\in\tau\;;$$ this shows that $\tau_0\subseteq\tau$ and hence that $\varnothing,X\in\tau$ (since $\varnothing,X\in\tau_0$). Now let $U,V\in\tau$; there are $G_U,H_U,G_V,H_V\in\tau_0$ such that $U=G_U\cup(H_U\cap S)$ and $V=G_V\cup(H_V\cap S)$. Thus, $$\begin{align*} U\cap V&=\big(G_U\cup(H_U\cap S)\big)\cap\big(G_V\cup(H_V\cap S)\big)\\ &=\Big(G_U\cap\big(G_V\cup(H_V\cap S)\big)\Big)\cup\Big((H_U\cap S)\cap\big(G_V\cup(H_V\cap S)\big)\Big)\\ &=(G_U\cap G_V)\cup(G_U\cap H_V\cap S)\cup(H_U\cap S\cap G_V)\cup(H_U\cap S\cap H_V\cap S)\\ &=(G_U\cap G_V)\cup(G_U\cap H_V\cap S)\cup(G_V\cap H_U\cap S)\cup(H_U\cap H_V\cap S)\\ &=(G_U\cap G_V)\cup\Big(\big((G_U\cap H_V)\cup(G_V\cap H_U)\cup(H_U\cap H_V)\big)\cap S\Big)\\ &=G\cup(H\cap S)\;, \end{align*}$$ where $G=G_U\cap G_V$ and $H=(G_U\cap H_V)\cup(G_V\cap H_U)\cup(H_U\cap H_V)$. Both $G$ and $H$ are open in $\tau_0$, the original topology, so by definition $G\cup(H\cap S)\in\tau$, i.e., $U\cap V\in\tau$, and $\tau$ is therefore closed under intersection. Finally, let $\mathscr{U}\subseteq\tau$. For each $U\in\mathscr{U}$ there are $G_U,H_U\in\tau_0$ such that $U=G_U\cup(H_U\cap S)$. Now $$\begin{align*} \bigcup\mathscr{U}&=\bigcup_{U\in\mathscr{U}}\big(G_U\cup(H_U\cap S)\big)\\ &=\bigcup_{U\in\mathscr{U}}G_U\cup\bigcup_{U\in\mathscr{U}}(H_U\cap S)\\ &=\bigcup_{U\in\mathscr{U}}G_U\cup\left(\left(\bigcup_{U\in\mathscr{U}}H_U\right)\cap S\right)\\ &=G\cup(H\cap S)\;, \end{align*}$$ where $$G=\bigcup_{U\in\mathscr{U}}G_U\quad\text{and}\quad H=\bigcup_{U\in\mathscr{U}}H_U\;.$$ Being a topology, $\tau_0$ is closed under arbitrary unions, so $G,H\in\tau_0$, and therefore by definition $\bigcup\mathscr{U}\in\tau$, so $\tau$ is also closed under arbitrary unions. This completes the proof that $\tau$ is a topology on $X$. It is the smallest topology that contains both the original topology $\tau_0$ and the set $S$; that is, $\tau_0\cup\{S\}$ is a subbase for $\tau$. (To see that $S\in\tau$, just observe that $S=\varnothing\cup(X\cap S)$, where $\varnothing,X\in\tau_0$.)
H: Solve $2x^2+y^2-z=2\sqrt{4x+8y-z}-19$ I am trying to solve the following equation. $$ 2x^2+y^2-z=2\sqrt{4x+8y-z}-19 $$ To get rid of the square root, I tried squaring both sides which led to $$ (2x^2+y^2-z+19)^2=16x+32y-4z $$ which was too complex to deal with. Also, I have tried some substitutions to simplify the equation, but none of them were working. I believe that the equation could be solved with an appropriate substitution and factorization, yet I have no idea what to do. Any hint or help is appreciated. AI: It's $$2x^2+y^2-4x-8y+18+4x+8y-z-2\sqrt{4x+8y-z}+1=0$$ or $$2(x-1)^2+(y-4)^2+(\sqrt{4x+8y-z}-1)^2=0,$$ which gives $$x-1=y-4=\sqrt{4x+8y-z}-1=0.$$ Can you end it now?
H: Find the turning points of $(x^{2} + y^{2})^{2} = x^ {2} - y^{2}$ Using implicit differentiation I found the derivative of this function. Here is my working: but how would I evaluate the numerator as $0$ when $2x - 4x^3 - 4xy^2 = 0$? I tried moving all the $x$ terms to one side and all the $y$ terms to the other but to no avail. The answer is supposed to be $x = \pm \frac{\sqrt6}{4}$ and $y = \pm \frac{\sqrt2}{4}$. How do I get that? AI: Take out a factor of $2x$: $$2x(1 - 2x^2 - 2y^2) = 0$$ $$\Rightarrow 2x = 0, \ 1 - 2x^2 - 2y^2 = 0$$ $$\Rightarrow x = 0, x^2+y^2 = \frac{1}{2}.$$ The turning points must also lie on the original curve $(x^2+y^2)^2 = x^2 - y^2$. Can you continue?
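For completeness, finishing from the answer's hint: substituting $x^2+y^2=\frac12$ into $(x^2+y^2)^2 = x^2-y^2$ gives $x^2-y^2=\frac14$, and combining this with $x^2+y^2=\frac12$ yields $$ x^2=\frac38,\qquad y^2=\frac18,\qquad\text{i.e.}\qquad x=\pm\frac{\sqrt6}{4},\quad y=\pm\frac{\sqrt2}{4}. $$ (The other factor, $x=0$, forces $y=0$ on the curve, which is the singular point at the origin rather than a turning point.)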
H: Problem related to real monic quadratic polynomial Let $f(x)$ be a real monic quadratic polynomial. If ${x_1},{x_2},{x_3},{x_4},{x_5}$ be the $5$ points where $g(x) = |f(|x|)|$ is non-differentiable and $\sum_{i = 1}^5 {\left| {{x_i}} \right| = 8} $ then find the value of $\frac{1}{5}\mathop {\lim }\limits_{x \to \infty } \frac{{{x^2} - f\left( x \right)}}{x}$. My approach is as follow, real monic quadratic means that leading coefficient viz. the value of $a$ in $ax^2+bx+c=0$ is $1$. That is the equation is of the form $x^2+bx+c$, for real case $b^2-4c\ge0$. But not able to approach. AI: $g$ is non-differentiable at zeroes of $f$ and their reflections across $0$ and at $0$. So if $x_1=m>0$ and $x_2=n>0$ then $x_3=-m$, $x_4=-n$ and $x_5=0$. Thus $$2n+2m =8 \implies b= -(m+n)=-4$$ So $$f(x) = x^2-4x+c \implies \lim_{x\to \infty}{4x-c\over x} = 4$$ So the result is $4/5$.
H: Proof verification that $t(n+1)=t(n) + \pi$ using mathematical induction I am beginning to learn how to write proofs and I would like some verification on this simple proof I have done for the sum of the interior angles of a polygon. I thought this would be a good one to prove since it is pretty basic. So, I have the following hypothesized functional relation for the angle measures of a polygon being $$t(n+1)=t(n) + \pi$$ where $t(n)=(n-2)\pi$. I want to show that this is true for all values of $n\ge2$ in the natural numbers. If I have any incorrect wording or baseless assumptions or steps missing, I would appreciate it if you could point them out. Proof: Base Case: $t(3)=(3-2)\pi = \pi =t(2)+\pi=0+\pi=\pi$. Induction Step: Assume $t(n+1)=t(n) + \pi$ is true for any arbitrary value $n=k\ge2 \, \forall \, n,k \in \mathbb{N}$. Then $$\begin{align*} t(k+1) &= t(k)+\pi \\ &= (k-2)\pi+\pi \end{align*}$$ and $$\begin{align*} t(k+2) &= (k+2-2)\pi\\ & = ([k+1]-2+1)\pi \\ &= ([k+1]-2)\pi+\pi \\ &=t(k+1)+\pi \end{align*} $$ Sorry if this is too easy or trivial to be on here, just want some clarification on my wording and format. Also, is this the same as proving that the total interior angle measure of a polygon is equal to $(n-2)\pi$ for all natural numbers greater than or equal to 2? Edit: I believe in the first part of the induction step, I go in reverse when showing the expansion of $t(k+1)$. It should start as $t(k+1)=(k+1-2)\pi$. Also, is that step redundant? I could have just proved it from that step initially. I believe I am also missing the step where I assume $t(n)$ to be true for all specified values. AI: This might provide some additional clarity: In the "inductive step", you may assume that $t\left(k\right)$ is true, and then show that $t\left(k+1\right)=t\left(k\right)+\pi$: $$ t\left(k+1\right)=\left(\left(k+1\right)-2\right)\pi $$ from the definition of $t\left(n\right)$. This leads to: $$ t\left(k+1\right)=\left(\left(k-2\right)+1\right)\pi $$ $$ t\left(k+1\right)=\left(k-2\right)\pi+\pi $$ $$ t\left(k+1\right)=t\left(k\right)+\pi $$ I hope this helps.
H: How to solve gaussian integral for $x^2e^{-\frac{x^2}{w}}$? I am trying to find $\sigma=\displaystyle\sqrt{\int _{-\infty \:}^{\infty \:}x^2e^{-\frac{x^2}{w}}dx}$ for the function $f(x)=e^{-\frac{x^2}{w}}$. I have tried tabular integration by parts, but it quickly got messy and I stopped after the second integration $\sqrt{w}\frac{\sqrt{\pi }}{2}\text{erf}\left(\frac{x}{\sqrt{w}}\right)$. From some quick research no elementary function exists for the indefinite integral. So how would I find the definite integral in this case? I would be grateful for any help. AI: Starting with $\displaystyle \int_{-\infty}^{\infty}e^{- x^2}\,\mathrm{dx} = \sqrt{\pi}$, let $x \mapsto \sqrt{\lambda} x$ then $\displaystyle \int_{-\infty}^{\infty}e^{-\lambda x^2}\,\mathrm{dx} = \frac{\sqrt{\pi}}{\sqrt{\lambda}}$ Define $\displaystyle f(\lambda) := \int_{-\infty}^{\infty}e^{-\lambda x^2}\,\mathrm{dx} = \frac{\sqrt{\pi}}{\sqrt{\lambda}} $ then $\displaystyle f'(\lambda)=-\int_{-\infty}^{\infty}x^2 e^{-\lambda x^2}\,\mathrm{dx} = -\frac{\sqrt{\pi}}{2\lambda^{3/2}} $ so that $$\int_{-\infty}^{\infty}x^2 e^{-\frac{1}{w} x^2}\,\mathrm{dx} = \frac{1}{2}w^{3/2}\sqrt{\pi}.$$
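A quick symbolic verification, assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x', real=True)
w = sp.symbols('w', positive=True)

val = sp.integrate(x**2 * sp.exp(-x**2 / w), (x, -sp.oo, sp.oo))
print(sp.simplify(val))  # expect sqrt(pi)*w**(3/2)/2
```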
H: Finding the length of intersected line given a square of 3 equal rectangles and a perpendicular line This is probably very straight forward, but I'm drawing a blank.. I want to find the length of $A-I$. This isn't a trick question, if anything looks slightly off, it's just a bad drawing. It's essentially a square divided into three $1/3$ rectangles. Using the givens, I know $D-C = 316.227766$ and therefore, $D-H = 158.113883$, but I'm just drawing a blank on moving forward since it's been a while since I've done basic geometry.. AI: Using 'pure' geometry, notice that $H$ is the midpoint of square $ABEF$. Then point $I$ is a $90º$ rotation clockwise from point $D$. Furthermore, $DH = IH$ because $AB = BE = EF = FA$. Since $DHI$ is a right angle, $DI^2 = DH^2 + DH^2$ by Pythagoras' theorem. $DH^2 = 50^2 + 150^2$, so $DI^2 = 50000$. Triangle $ADI$ is again a right triangle, so $AI^2 + AD^2 = DI^2$. Therefore $AI = \sqrt{DI^2-DA^2} = \sqrt{50000 -200^2} = 100$.
H: Can this distribution be expressed as a known distribution? I have this density function $$f_{\theta}(x)=\left\{\begin{array}{ll}\frac{1}{x^{2} \theta} e^{-\frac{1}{\theta x}} & \text { if } x \geq 0 \\ 0 & \text { other case }\end{array}\right.$$ Im trying to express it as a known distribution as i want to make some calculus on probabilites. I tried with a exponential and gamma distribution but with no results, dont know what more to try. Any help would be nice AI: It's an inverse-gamma distribution with $\alpha=1,\,\beta=1/\theta$.
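For completeness, a quick check of the match: the inverse-gamma density with shape $\alpha$ and scale $\beta$ is $$ f(x)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,x^{-\alpha-1}e^{-\beta/x},\qquad x>0, $$ and setting $\alpha=1,\ \beta=1/\theta$ gives $\frac{1}{x^{2}\theta}\,e^{-\frac{1}{\theta x}}$, which is exactly $f_\theta$. Equivalently, $1/X$ is exponential with rate $1/\theta$, which often makes probability calculations easier.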
H: Intersection of infinite sets - question I am stuck at part (B), where $A_k$ denotes the set of non-negative multiples of $k$. Could anyone help me? AI: Let $n\in \bigcap_{k=1}^{\infty}A_k$ such that $n\geq1$. Then $n\in A_k$ for all $k\geq 1$. That implies $n$ is divisible by every $k\geq 1$, which is not possible. So $\bigcap_{k=1}^{\infty}A_k=\{0\}=A_0$.
H: Using algebra to solve a graph problem I’m reading a textbook about graph theory and its applications and suddenly I’m facing some problems understanding a part which needs algebra background. I’m going to skip the graph-related part; the only thing I should mention is that if $xy=yx$ then $x$ and $y$ are adjacent in the graph. The example is $S=\{(1),(1\ 2),(1\ 3),(2\ 3),(1\ 2\ 3),(1\ 3\ 2)\}$ So I have to see if $(1\ 3)(2\ 3)=(2\ 3)(1\ 3)$ The book’s answer is $(1\ 3)(2\ 3)=(1\ 3\ 2)$ And $(2\ 3)(1\ 3)=(1\ 2\ 3)$ I have no idea how it is calculated. I would appreciate it if you could link something to help me figure this out because, as I said, I have no algebra background. AI: This $S$ is the group of all bijections on the set $\{1,2,3\}$. Here $(13)$ is the map that sends: $1\mapsto 3, 2\mapsto2, 3\mapsto1 $. Then $(23)$ is the map that sends: $1\mapsto 1, 2\mapsto3, 3\mapsto2 $. Then $(123)$ is the map that sends: $1\mapsto 2, 2\mapsto3, 3\mapsto1 $. $(1)$ is the identity map. I hope now you will get how the mappings are written.
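To spell out the book's calculation (this assumes the book composes right to left, so $(1\ 3)(2\ 3)$ means "apply $(2\ 3)$ first, then $(1\ 3)$", which is consistent with its stated answers): $$ (1\ 3)(2\ 3):\quad 1\xrightarrow{(2\,3)}1\xrightarrow{(1\,3)}3,\qquad 2\xrightarrow{(2\,3)}3\xrightarrow{(1\,3)}1,\qquad 3\xrightarrow{(2\,3)}2\xrightarrow{(1\,3)}2, $$ so the composite sends $1\mapsto3,\ 3\mapsto2,\ 2\mapsto1$, which is the cycle $(1\ 3\ 2)$. Reversing the order, $(2\ 3)(1\ 3)$ sends $1\mapsto2,\ 2\mapsto3,\ 3\mapsto1$, i.e. $(1\ 2\ 3)$. Since the two results differ, $(1\ 3)$ and $(2\ 3)$ do not commute and are therefore not adjacent in the graph.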
H: 1 question in Theorem 10 of section Spectral Theory of Hoffman Kunze I am self studying Linear Algebra from the textbook by Hoffman and Kunze and I have a question in Theorem 10 of Chapter 9. Adding its image: How is it clear that for every $\alpha$ in $E_{j}V$, $f(T)\alpha = f(c_{j})\alpha$? Can anyone please tell the reasoning behind it. AI: If $\alpha\in E_jV$, then $\alpha=E_jv$ for some vector $v$. Therefore $$ f(T)\alpha=\sum_if(c_i)E_iE_jv=f(c_j)E_j^2v=f(c_j)\alpha, $$ using that the projections are orthogonal to each other: $E_iE_j=0$ for $i\neq j$, and $E_j^2=E_j$.
H: How to evaluate $\int_0^{\pi/2} \frac{\sin x}{\sin^{2n+1}x +\cos^{2n+1}x} dx$? I have an exercise to evaluate the following integral for all $n\geq 1 $ $$I(n)=\int_0^{\frac{\pi}{2}} \frac{\sin x}{\sin^{2n+1} x+\cos^{2n+1} x}dx$$ I attempted to find the closed form for the integral above in the following manner, where I used the integral identity $\int_a^bf(x)\,dx=\int_a^b f(a+b-x)\,dx$. $$I(\bar{n})=\int_0^{\frac{\pi}{2}}\frac{\cos x}{\cos^{2n+1} x+\sin ^{2n+1} x}dx$$ Adding $I(n)$ and $I(\bar{n})$ and using $I(n)=I(\bar{n})$, the integral reduces to $$\frac{1}{2}\int_0^{\frac{\pi}{2}}\frac{\cos x +\sin x}{\cos^{2n+1}x +\sin^{2n+1}x}dx$$ Using the algebraic identity $a^n+b^n=(a+b)(a^{n-1}-a^{n-2}b+a^{n-3}b^2-\cdots +b^{n-1})$ for odd integers $n$, I get $$\frac{1}{2}\int_{0}^{\frac{\pi}{2}}\frac{1}{\cos^{2n}x-\cos^{2n-1}x\sin x+\cdots +\sin^{2n}x}dx $$ I'm now stuck here. How can I continue now? Thanks in advance. AI: This integral appeared in the Jozsef Wildt International Mathematical Competition, proposed by Ovidiu Furdui and Alina Sintamarian, and I solved it a couple of months back in the following way. For all $n\geq 2$ we shall show that $$I(n)=\int_{0}^{\frac{\pi}{2}} \frac{\sin x}{\sin^{2n-1}x+\cos^{2n-1}x}dx=\frac{\pi}{2n-1}\sum_{k=0}^{n-2}{n-2\choose k}\operatorname{csc}\left(\frac{2\pi(n-k-1)}{2n-1}\right)$$ (note that the index here is shifted by one relative to the question: the question's exponent $2n+1$ corresponds to $2n-1$ here). Before we prove the above closed form we shall need a classical result. Lemma: For all $n>1$ and $m\in\mathbb{N}$ with $m<n$, the following holds: $$ \int_0^{\infty}\frac{x^{m-1}}{1+x^n}dx=\frac{\pi}{n}\operatorname{csc}\left(\frac{m}{n}\pi\right)$$ Proof: We substitute $\frac{1}{1+x^n} = y$ and the integral takes the form of a beta function, i.e.; $$ \frac{1}{n}\int_0^{1}y^{-\frac{m}{n}}(1-y)^{\frac{m}{n}-1}dy=\frac{1}{n}\Gamma\left(1-\frac{m}{n}\right)\Gamma\left(\frac{m}{n}\right)=\frac{\pi}{n}\operatorname{csc}\left(\frac{m}{n}\pi\right) $$ We evaluate the main integral $I(n)$ as follows $$\int_{0}^{\frac{\pi}{2}}\frac{\sin x\sec^{2n-1}x}{1+\tan^{2n-1} x}dx=\int_0^{\frac{\pi}{2}}\frac{\tan x (\sec^{2}x)^{n-2}\sec^2x}{1+\tan^{2n-1}x}dx$$ substitute $\tan x =u\implies \sec^{2}x dx=du$ and hence $$\int_0^{\infty}\frac{u(u^2+1)^{n-2} du}{1+u^{2n-1}} =\int_0^{\infty}\frac{u}{1+u^{2n-1}}\sum_{k=0}^{n-2}{n-2\choose k} u^{2(n-2-k)}du =\sum_{k=0}^{n-2}{n-2\choose k}\left(\int_0^{\infty}\frac{u^{2(n-k-1)-1}}{1+u^{2n-1}} du\right) \underbrace{=}_{Lemma}\frac{\pi}{2n-1}\sum_{k=0}^{n-2}{n-2\choose k}\operatorname {csc}\left(\frac{2(n-k-1)\pi}{2n-1}\right)$$ we are done. For $n=3$ we have a beautiful closed form for the above integral $$\int_0^{\frac{\pi}{2}}\frac{\sin x}{\sin ^{5} x+\cos^5 x}dx = \frac{2}{5} \sqrt{1+\frac{2}{\sqrt 5}}\pi\approx 1.729 $$
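A numeric cross-check of the $n=3$ case, assuming NumPy/SciPy:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x) / (np.sin(x)**5 + np.cos(x)**5), 0, np.pi / 2)
closed = (2 / 5) * np.sqrt(1 + 2 / np.sqrt(5)) * np.pi
print(val, closed)  # both ~ 1.7297
```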
H: Determine which of the following sets is a null set as defined in the question. A set $X\subseteq \Bbb R$ is said to be a null set if for every $\epsilon\gt0$ there exists a countable collection $\{(a_k,b_k)\}_{k=1}^\infty$ of open intervals such that $X\subseteq \bigcup_{k=1}^\infty(a_k,b_k)$ and $\sum_{k=1}^\infty(b_k - a_k)\le \epsilon$. Which of the following sets is not a null set? (1) every finite set; (2) $\Bbb Q^c$, the set of irrational numbers; (3) $\Bbb Q$, the set of rational numbers; (4) $\Bbb N$, the set of natural numbers. I haven't been able to grasp the question clearly. What I did notice is that among the sets given, only $\Bbb Q^c$ is uncountable. Is the definition of null set given in this question actually that of a countable set? AI: You are partially correct. Any countable set will be a null set. If your set $X$ consists of just the points $\{x_k\}_{k=1}^\infty$ then for any $\epsilon>0$ you may let: $$a_k=x_k-{\frac\epsilon{2^{k+1}}},\qquad b_k=x_k+{\frac\epsilon{2^{k+1}}}$$ Then for all $k$, we have $x_k\in (a_k,b_k)$. Thus $X\subseteq \bigcup_{k=1}^\infty(a_k,b_k)$. Further we have $\sum_{k=1}^\infty (b_k-a_k)=\sum_{k=1}^\infty \frac\epsilon{2^k}=\epsilon$. We may conclude that a countable subset of $\mathbb{R}$ does indeed satisfy the condition of being a null set. That is enough to answer the multiple choice question. However the definition given for null set is not the same as for countable set. Not every null set is countable. For example the Cantor set is a null set, but not countable. That is, the Cantor set consists of real numbers in $[0,1]$ which may be written out in ternary expansion using only the digits $0$ and $2$. Clearly this is not countable (such expansions correspond to the binary expansions of the real numbers in $[0,1]$). However the Cantor set may be contained in a union of $2^n$ open intervals each of size as close as you like to $\frac{1}{3^{n}}$ for any $n$. For any $\epsilon>0$ you may choose $n$ sufficiently large that $\left(\frac{2}3\right)^n<\epsilon$, so the sum of the lengths of the $2^n$ intervals will be less than $\epsilon$.
H: How can I solve for $a$ in $0=\frac{\sqrt{\frac{a}{2}}-\sqrt{2a}}{a^{2}}+\frac{E_{\alpha}-E_{\beta}}{4}\exp\left(-\frac{a}{4}\right)$ I am trying to find the local maximum of $\frac{\sqrt{2a}}{a}-\left(E_{\alpha}-E_{\beta}\right)\exp\left(-\frac{a}{4}\right)$, where $E_{\alpha}$ and $E_{\beta}$ are constants: $$\displaystyle\frac{d}{da}\left[\frac{\sqrt{2a}}{a}-\left(E_{\alpha}-E_{\beta}\right)\exp\left(-\frac{a}{4}\right)\right]=\frac{\sqrt{\frac{a}{2}}-\sqrt{2a}}{a^{2}}+\frac{E_{\alpha}-E_{\beta}}{4}\exp\left(-\frac{a}{4}\right)$$ But when setting this to $0$ to find critical points, I run into problems. I tried taking the constant term $\frac{E_{\alpha}-E_{\beta}}{4}$ onto the LHS and taking the log of both sides. However, I can't seem to isolate $a$. I would grateful for any help. AI: Let $f(x)=\sqrt{2/x}-4Ke^{-x/4}$ where $K$ is a constant. Then $$f'(x)=-\frac1{\sqrt 2x^{3/2}}+Ke^{-x/4}=0\implies K\sqrt 2x^{3/2}e^{-x/4}=1$$ so the maximum is one of the solutions of $4K^4x^6e^{-x}=1\implies x=-6W\left(-\frac1{6(2K^2)^{1/3}}\right)$, and substitute $K=(E_\alpha-E_\beta)/4$. This simplifies a bit to give $x=-6W\left(-\frac1{3(E_\alpha-E_\beta)^{2/3}}\right)$. For a more detailed explanation of how the Lambert $W$ function is derived, let us solve the more general equation $ax^be^{-x}=1$. Write $$a^{1/b}xe^{-x/b}=1\implies a^{1/b}\left(-\frac xb\right)e^{-x/b}=-\frac1b\implies\left(-\frac xb\right)e^{-x/b}=-\frac1{ba^{1/b}}.$$ Taking Lambert $W$ of both sides yields $$-\frac xb=W\left(-\frac1{ba^{1/b}}\right)\implies x=-bW\left(-\frac1{ba^{1/b}}\right).$$ In the context of the question, replace $a:=4K^4$ and $b=6$.
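A numeric check of the Lambert-$W$ formula, assuming SciPy; the value of $E_\alpha-E_\beta$ below is an arbitrary illustrative choice, and this uses the principal branch, which picks out one of the (possibly two) critical points:

```python
import numpy as np
from scipy.special import lambertw

E_diff = 2.0  # hypothetical value of E_alpha - E_beta
K = E_diff / 4

# critical point from the formula x = -6 W(-1 / (3 (E_a - E_b)^(2/3)))
x = -6 * lambertw(-1 / (3 * E_diff**(2 / 3))).real

# it should satisfy the first-order condition K*sqrt(2)*x^(3/2)*exp(-x/4) = 1
print(x, K * np.sqrt(2) * x**1.5 * np.exp(-x / 4))  # second value ~ 1.0
```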
H: Does this ses $0\rightarrow \mathbb{Z}/p\mathbb{Z}\rightarrow \mathbb{R}/p\mathbb{Z}\rightarrow \mathbb{R}/\mathbb{Z}\rightarrow 0$ split? Does the following short exact sequence of $\mathbb{Z}$-modules split ? $$0\rightarrow \mathbb{Z}/p\mathbb{Z}\rightarrow \mathbb{R}/p\mathbb{Z}\rightarrow \mathbb{R}/\mathbb{Z}\rightarrow 0,$$ where all maps are the natural maps. AI: No, this sequence does not split. Indeed, a splitting would give rise to a morphism $\mathbb{R}/p\mathbb{Z} \to \mathbb{Z}/p\mathbb{Z}$ which when post-composed with the map $\mathbb{Z}/p\mathbb{Z} \to \mathbb{R}/p\mathbb{Z}$ gives the identity on $\mathbb{Z}/p\mathbb{Z}$. But $\mathbb{R}$ is a divisible abelian group, and therefore so is any quotient, e.g. $\mathbb{R}/p\mathbb{Z}$. Hence, any group homomorphism $\mathbb{R}/p\mathbb{Z} \to \mathbb{Z}/p\mathbb{Z}$ is the zero map. (Precompose with multiplication by $p$ on $\mathbb{R}/p\mathbb{Z}$, which is surjective.)
H: Prove that $\tan^{-1}\frac{\sqrt{1+x^2}+\sqrt{1-x^2}}{\sqrt{1+x^2}-\sqrt{1-x^2}}=\frac{\pi}{4}+\frac 12 \cos^{-1}x^2$ Let the above expression be equal to $\phi$ $$\frac{\tan \phi +1}{\tan \phi-1}=\sqrt{\frac{1+x^2}{1-x^2}}$$ $$\frac{1+\tan^2\phi +2\tan \phi}{1+\tan^2 \phi-2\tan \phi}=\frac{1+x^2}{1-x^2}$$ $$\frac{1+\tan^2\phi}{2\tan \phi }=\frac{1}{x^2}$$ $$\sin 2\phi=x^2$$ $$\phi=\frac{\pi}{4}-\frac 12 \cos^{-1}x^2$$ Where am I going wrong? AI: Because for $x\neq0$ and $-1\leq x\leq1$ it is easy to see that: $$0<\frac{\pi}{4}+\frac 12 \cos^{-1}x^2<\frac{\pi}{2}$$ and we obtain: $$\tan\left(\frac{\pi}{4}+\frac 12 \cos^{-1}x^2\right)=\frac{1+\tan\frac{1}{2}\arccos{x^2}}{1-\tan\frac{1}{2}\arccos{x^2}}=$$ $$=\frac{\cos\frac{1}{2}\arccos{x^2}+\sin\frac{1}{2}\arccos{x^2}}{\cos\frac{1}{2}\arccos{x^2}-\sin\frac{1}{2}\arccos{x^2}}=\frac{\sqrt{\frac{1+x^2}{2}}+\sqrt{\frac{1-x^2}{2}}}{\sqrt{\frac{1+x^2}{2}}-\sqrt{\frac{1-x^2}{2}}}=\frac{\sqrt{1+x^2}+\sqrt{1-x^2}}{\sqrt{1+x^2}-\sqrt{1-x^2}}.$$ Your mistake is in the last line. Indeed, since $$\frac{\sqrt{1+x^2}+\sqrt{1-x^2}}{\sqrt{1+x^2}-\sqrt{1-x^2}}>1$$ and from here $$\frac{\pi}{4}<\phi<\frac{\pi}{2},$$ we obtain: $$2\phi=\pi-\arcsin{x^2}=\frac{\pi}{2}+\arccos{x^2}.$$
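A quick numerical spot-check of the identity (a sketch; for $0<x<1$ both sides lie in $(\pi/4,\pi/2)$, so comparing values on a grid suffices):

```python
# Verify arctan((sqrt(1+x^2)+sqrt(1-x^2))/(sqrt(1+x^2)-sqrt(1-x^2)))
#        == pi/4 + arccos(x^2)/2   for x in (0, 1).
import numpy as np

x = np.linspace(0.01, 0.99, 99)
lhs = np.arctan((np.sqrt(1 + x**2) + np.sqrt(1 - x**2))
                / (np.sqrt(1 + x**2) - np.sqrt(1 - x**2)))
rhs = np.pi / 4 + 0.5 * np.arccos(x**2)
print(np.allclose(lhs, rhs))    # True
```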
H: Simple question related to Correlation & Covariance $X$ and $Y$ are two random variables with $\Bbb E[X] = \Bbb E[Y] = 1$ and $\Bbb E[X^2] = \Bbb E[Y^2] = 2$. Which of the following is not possible: $\Bbb E[XY] > 0$ $\Bbb E[XY] < 0$ $\Bbb E[XY] = 0$ $\Bbb E[XY] \le 2$ I reached the following conclusions: $\operatorname{Cov}(X,Y) = \Bbb E[XY] - \Bbb E[X]\Bbb E[Y] = \Bbb E[XY]-1$ $\operatorname{Var}(X) = \Bbb E[X^2] - \Bbb E[X]^2 = \operatorname{Var}(Y) = 1$ $\operatorname{Corr}(X,Y) = \operatorname{Cov}(X,Y)$ I noticed that this value is the maximum value for both correlation and covariance since $$\operatorname{Corr}(X,Y)=\frac{\operatorname{Cov}(X,Y)}{\sqrt{\operatorname{Var}(X)\operatorname{Var}(Y)}}$$ and $\operatorname{Corr}(X,Y) = \Bbb E[XY]-1$ but I still did not reach a conclusion regarding $\Bbb E[XY]$. Any help is appreciated! AI: Since the correlation $\rho=\Bbb E[XY]-1$ and $|\rho|\le1$, we must have $0\le\Bbb E[XY]\le2$ so the second option is not possible.
H: Series + number theory problem from a JEE exam. My approach: I used variables for the given conditions and made some equations. I got the AP as $3,5,7$ and the GP as $1,3,27$, but it does not match the condition given in the question. I would really appreciate your help. Thank you. AI: We have that $$a_2=\frac{a_1+a_3}{2}=\frac{15-a_2}{2}\implies a_2=5, a_1+a_3=10$$ but $a_1,a_3$ need not be integers. Similarly, $$b_2^2=b_1b_3=\frac{27}{b_2}\implies b_2=3, b_1b_3=9$$ but $b_1,b_3$ need not be integers. Moreover $$(a_1+b_1)(a_3+b_3)=(a_2+b_2)^2=(5+3)^2=64=2^6$$ Therefore $a_1+b_1=2^{6-n}\leq a_3+b_3=2^n$ with $n=3,4,5,6$. By checking the four cases, it turns out that the good one is $n=6$ and that the maximum value is $$a_3=\frac{73+7\sqrt{61}}{2}\approx 63.836.$$ Hence the given answers are both confirmed.
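Since the original problem statement is an image, the constraints below ($a_1+a_3=10$, $b_1b_3=9$, and $a_i+b_i$ being powers of $2$ with product $64$) are inferred from the answer itself; under those assumptions, a brute-force check over the four cases reproduces the stated maximum:

```python
# Check the cases n = 3..6: a1+b1 = 2^(6-n), a3+b3 = 2^n, a1+a3 = 10,
# b1*b3 = 9.  Eliminating b1 and b3 leaves a quadratic in a1:
#   (2^(6-n) - a1) * (2^n - 10 + a1) = 9
import math

best = None
for n in (3, 4, 5, 6):
    s1, s3 = 2 ** (6 - n), 2 ** n
    A, B, C = -1.0, s1 - s3 + 10.0, s1 * (s3 - 10.0) - 9.0
    disc = B * B - 4 * A * C
    if disc < 0:
        continue                         # no real solution for this case
    for a1 in ((-B + math.sqrt(disc)) / (2 * A),
               (-B - math.sqrt(disc)) / (2 * A)):
        cand = 10.0 - a1                 # the corresponding a3
        if best is None or cand > best:
            best = cand
print(best)    # 63.836..., matching (73 + 7*sqrt(61)) / 2
```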
H: Does $F/N_{1}\cong F/N_{2}$ implies $N_{1}\cong N_{2}$ and vice versa? It would be great for me to see an example of finite group $F$ and two epimorphisms $\varphi_{i}:F\twoheadrightarrow G$ from $F$ onto a group $G$, with $N_{1}\ncong N_{2}$, where $N_{i}=ker\varphi_{i}$, $i=1,2$ (e.g., finite group which contains two non-isomorphic subgroups each of index $2$). However, I am interested more in the case $F=F(a,b)$ the free group on the set $\{a,b\}$. That is, the case we have two isomorphic presentations $\langle a,b|R_{1}\rangle\cong\langle a,b|R_{2}\rangle$ but with non-isomorphic normal closures $N_{i}=N_{F}(R_{i})$, $i=1,2$. Note that if $G_{1}$ (or $G_{2}$) is finite, then the ranks of $N_{1}$ and $N_{2}$ are equal, so they are isomorphic (since they are free by Nielsen-Schreier). What about the converse: if $N_{1}\cong N_{2}$ are two isomorphic normal subgroups of the free group $F=F(a,b)$, can we deduce that $F/N_{1}\cong F/N_{2}$? Thanks. AI: For example, let $G=D_8$ of order $8$, generated by $a$ of order $4$ and $b$ of order $2$. Then $\langle a\rangle$ and $\langle a^2,b\rangle$ are normal subgroups of index $2$, non-isomorphic. (One is cyclic of order $4$, one is Klein four.) For free groups, every subgroup of a free group is free, so there are three options for $N_1$. It has finite rank, infinite rank, or is trivial. It has finite rank if and only if it has finite index. Thus if $|F/N_1|$ is finite then $F/N_1\cong F/N_2$ implies $N_2$ has finite index. The rank of $N_i$ is given by a formula in terms of index and number of generators of the free group ($n(d-1)+1$, where $d$ is the number of generators and $n$ the index), and so $N_1\cong N_2$ in this case. If $N_1\cong N_2$ then all you know is that $F/N_1$ and $F/N_2$ have the same order. If $N_1$ is trivial then $F/N_1\cong F$. As Derek Holt says, this means that $N_2=1$ as $F$ is not isomorphic to a proper quotient of itself. The final case is when $N_1$ is not finitely generated, or equivalently, $N_1$ has infinite index (and is non-trivial). Then all such $N_i$ are isomorphic (as $F$ is countable). Thus if $F/N_1\cong F/N_2$ then $N_1\cong N_2$, and if $N_1\cong N_2$ then $F/N_1$ and $F/N_2$ have the same order.
H: Prove that if ${(v_1- v_2) } \in U$ where $U$ is a subspace of $V$ then $v_1 + U$ = $v_2 + U$ where $v_1 , v_2$ belong in $V$ I assumed $v_1 - v_2$ to be equal to some $u \in U$. Then I wrote $u$ = $u_2 - u_1$. So now $v_1 - v_2 = u_2 - u_1$ for any $u_2,u_1 \in U$. Therefore $v_1 + u_1= v_2 + u_2$ for any $u_2,u_1 \in U$ Thus $v_1 + U_1= v_2 + U_2$ I still got this wrong on my test. The correct solution was to write $v_1 + u= v_2 + (v_1 - v_2) + u$. Now since $(v_1 - v_2) \in U$, $(v_1+u) \in (v_2 + U)$. Now you can go in the other direction and prove that $(v_2+u) \in (v_1+ U)$. Thus $v_1 + U$ = $v_2 + U$. Now I understand that the second proof looks more formal but I don't know why my proof seems incomplete according to my professor AI: Can we do it this way? Let's consider a map $T$ $\in$ $\mathcal{L}(V, V/U)$. Where the quotient space is defined as $$ V/U := \{ v + U~:~v \in V \},\quad\text{the set of cosets of } U \text{ in } V $$ We know that the map is defined as \begin{align*} T :&~ V \to V/U\\ & v \mapsto v + U \end{align*} And we know that $\text{ker}(T) = U$. According to the question $v_{1} - v_{2}$ $\in$ $U = \text{ker}(T)$, where $v_{1}, v_{2}$ $\in$ $V$. Therefore we have \begin{align*} &T(v_{1} - v_{2}) = 0\\ \implies & T(v_{1}) - T(v_{2}) = 0 \quad [\text{as } T \in \mathcal{L}(V, V/U)]\\ \implies & T(v_{1}) = T(v_{2})\\ \implies & v_{1} + U = v_{2} + U \end{align*} Is this approach wrong?
H: How to divide an interval into geometrically increasing sub-intervals It'll be obvious when I see the answer, but I'm too tired/thick to figure it out. If I have a certain time period, and I want to divide it into a given number of geometrically increasing (or decreasing) durations so that the ratio of interval(n)/interval(n-1) is constant, then how do I figure out the ratio and/or the starting duration of the first interval? For example if you wanted to divide a 15-second period into 4 increasing durations you might get the geometric series 1, 2, 4, 8 (where the ratio is 2, and start duration is 1 second) I think it's something to do with log(T)/N, where T is the total time and N is the number of divisions, but that's as far as I've got. AI: Say you have an interval of length $L$ and you want to divide it into $N$ intervals $I_k$ s.t. $I_k/I_{k-1} =q$ for $k=1,\dots,N-1$. Then the length of each interval can be written as $cq^k$, $k=0,\dots,N-1$, where $c>0$ is some constant and $$ L= \sum_{k=0}^{N-1} cq^k = c \frac{1-q^N}{1-q}$$ (valid for $q\neq1$). If the ratio $q$ is given, this solves directly for the starting duration, $c=L\frac{1-q}{1-q^N}$; if instead $c$ is given and $q$ is sought, you must solve this polynomial equation for $q$, which is not trivial when $N>2$.
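A minimal sketch of the formula above, reproducing the 15-second example from the question (given $q$, solve directly for $c$):

```python
# Split a period of length L into N geometrically increasing durations
# with common ratio q; c is the first duration.
L, N, q = 15.0, 4, 2.0
c = L * (1 - q) / (1 - q ** N)          # assumes q != 1; use L/N when q == 1
lengths = [c * q ** k for k in range(N)]
print(lengths, sum(lengths))            # [1.0, 2.0, 4.0, 8.0] 15.0
```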
H: Find all positive integers $x$ and $y$ for which $\frac{1}{x} + \frac{1}{y} = \frac{1}{p}.$ Let $p$ be a prime number. Find all positive integers $x$ and $y$ for which $$\dfrac{1}{x} + \dfrac{1}{y} = \dfrac{1}{p}.$$ Multiplying the given expression by $xy$ results in $y+x = \dfrac{xy}{p} \Rightarrow p(x+y) = xy$. I was suggested to get this to the form $(x-p)(y-p) = p^2$, which seems a bit weird to me since I usually with these kinds of questions I would find a system which would lead to finding the correct $(x, y)$, but having the term $p^2$ is not something I've seen before. Is the idea behind this so that I would want to have an expression where I have something of the form $(x-k)(y-n) = p$ and could deduce from here that either $(x-k) = 1$, $(y-n) = p$ or the other way around since $p$ can only have the factors $1$ and $p$? AI: Once you get to $(x-p)(y-p)=p^2$ you use unique factorisation of integers to state that either $x-p=y-p=p$, leading to $x=y=2p$, or $x-p=p^2$ and $y-p=1$, leading to $x=p^2+p$ and $y=p+1$ (or symmetrically $x-p=1$ and $y-p=p^2$, giving $x=p+1$ and $y=p^2+p$). The cases with negative factors, such as $x-p=-p^2$ and $y-p=-1$, are to be excluded because $x$ and $y$ are assumed to be positive integers (each negative factorisation forces $x\le 0$ or $y\le 0$).
H: Jordan normal form powers Let $A$ be an $n\times n$ matrix such that $A=PBP^{-1}$ where $B$ is in Jordan normal form with $\lambda_i(k)_j$, where $i$ is the size, $k$ is the eigenvalue and $j$ the order. If $A$ were diagonal ($i=1$) then $A^n$ in Jordan form has $\lambda_1(k^n)_j$. If the Jordan form has Jordan blocks bigger than 1, how do we find $A^n$ in Jordan form? AI: A Jordan block of size $i$ and eigenvalue $k$ has the form $(kI+t)$ where $t^i=0$. Thus $(kI+t)^n$ will just be the truncated binomial expansion:$$\sum_{r=0}^{i-1} {n \choose r} k^{n-r}t^r$$
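A numerical check of the truncated binomial expansion (a sketch for a single $3\times3$ Jordan block with eigenvalue $2$, raised to the power $5$; the binomial sum works because $kI$ and $t$ commute):

```python
# J = k*I + t with t nilpotent (t**i == 0); verify J**n equals the
# truncated binomial sum from the answer.
import numpy as np
from math import comb

i, k, n = 3, 2.0, 5
t = np.diag(np.ones(i - 1), 1)          # ones on the superdiagonal
J = k * np.eye(i) + t
lhs = np.linalg.matrix_power(J, n)
rhs = sum(comb(n, r) * k ** (n - r) * np.linalg.matrix_power(t, r)
          for r in range(i))
print(np.allclose(lhs, rhs))            # True
```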
H: Why is $\mathbb{R}-\mathbb{Q}$ an uncountable set and how can I prove it? I am now starting to prepare for a discrete mathematics class. On a test, I came across the following question: Which of the following sets are countable? $$\mathbb{Z},\mathbb{R}, \mathbb{R-Q}, \{31,2,2019\} $$ The only countable sets are: $\mathbb{Z}$ (easily proved) and $\{31,2,2019\}$ as it is a finite set. Using Cantor's diagonal method we prove that there is no bijective function between $\mathbb{N}$ and $\mathbb{R}$, so $\mathbb{R}$ is not countable. So there is only $\mathbb{R-Q}$, which is the set of irrational numbers. Can anyone suggest a proper way for me to prove that this is an uncountable set? AI: First prove that the union of two countable sets is countable. Let $\mathbb{R}\setminus\mathbb{Q}=X$. If $X$ is countable then $\mathbb{R}=X\cup \mathbb{Q}$ is countable, which it isn't.
H: Showing $\sum_{\alpha=0}^k \binom{k+15}{\alpha + 10}\binom{k}{\alpha}=\frac{2^{2k+15}(k+6)(k+7)\left(\frac{2k+15}{2}\right)!}{\sqrt{\pi} (k+10)!}$ I want to simplify $\sum_{\alpha=0}^k \binom{k+15}{\alpha + 10}\binom{k}{\alpha}$ but this can't be directly simplified using the Chu-Vandermonde identity. Wolfram shows a nice closed form expression but I can't get to it. My attempt: $\sum_{\alpha=0}^k \binom{k+15}{5+k-\alpha}\binom{k}{\alpha}=\binom{2k+15}{5+k}$, by thinking of this sum as choosing a set of 5+k objects from 2k+15 objects in total, but I feel that this isn't correct as some terms are missing AI: Prove that $$\binom{2k+15}{k+5}=\frac{2^{2k+15}(k+6)(k+7)\left(\frac{2k+15}{2}\right)!}{\sqrt{\pi} (k+10)!}$$ Using $\left(\frac{2k+15}{2}\right)!=\Gamma\left(k+8+\frac12\right)=\frac{(2k+15)!!}{2^{k+8}}\,\Gamma\left(\frac12\right)$, so that a factor of $2^{k+8}$ is extracted from $2^{2k+15}$, the RHS becomes \begin{align*} &\frac{2^{k+7}(k+6)(k+7)(2k+15)!!\ \Gamma\left(\frac12\right)}{\sqrt{\pi}(k+10)!}\\ &=\frac{2^{k+5}(k+5)!(2k+12)(2k+14)(2k+15)!!}{(k+10)!(k+5)!}&\left(\because \Gamma\left(\frac12\right)=\sqrt{\pi}\right)\\ &=\frac{(2k+10)!!(2k+12)(2k+14)}{(k+10)!(k+5)!}\frac{(2k+15)!}{(2k+14)!!}\\ &=\frac{(2k+15)!}{(k+10)!(k+5)!}\\ \end{align*} PS: I think that Wolfram Alpha's algorithm doesn't apply the common logic of choosing $k+5$ people from $2k+15$.
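A brute-force check of the identity for small $k$ (a sketch; $\Gamma\left(\frac{2k+17}{2}\right)$ plays the role of $\left(\frac{2k+15}{2}\right)!$, and the last assertion confirms the Vandermonde closed form as well):

```python
from math import comb, gamma, pi, isclose

for k in range(8):
    lhs = sum(comb(k + 15, a + 10) * comb(k, a) for a in range(k + 1))
    rhs = (2 ** (2 * k + 15) * (k + 6) * (k + 7)
           * gamma((2 * k + 17) / 2) / (pi ** 0.5 * gamma(k + 11)))
    assert isclose(lhs, rhs, rel_tol=1e-9)
    assert lhs == comb(2 * k + 15, k + 5)   # Chu-Vandermonde closed form
print("identity verified for k = 0..7")
```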
H: Holomorphic maps preserve Hausdorff dimension. In a paper I read there is the following claim: Let $f:\mathbb{C}\to \mathbb{C}$ be a non-constant entire transcendental function (essential singularity at infinity) and $A\subset \mathbb{C}$ a set in the complex plane. Then $f^{-1}(A)$, $A$ and $f(A)$ have the same Hausdorff dimension. I know that bi-Lipschitz maps preserve Hausdorff dimension but I don't see why entire maps in the complex plane should too. Perhaps because entire maps are locally bi-Lipschitz away from critical points. But do locally bi-Lipschitz maps preserve the dimension? Can someone prove this or provide a reference for a proof? AI: It's exactly as you said: away from its critical points $f$ is locally bi-Lipschitz, and the critical points, being the zeros of the non-zero entire function $f'$, are isolated and hence countable, so they and their images contribute only sets of Hausdorff dimension $0$, which cannot affect the dimension. Then if $A_n=A\cap \tilde{D}(0,n)$, where $\tilde{D}(0,n)$ is the closed disk of radius $n$, then $\dim_H(f(A_n))=\dim_H(f^{-1}(A_n))=\dim_H(A_n)$, $\forall \ n\in\mathbb{N}$. Now use that in general, $\dim_H(\cup_{n\geq 1} B_n)= \sup_{n\geq 1} \dim_H(B_n)$.
H: Mapping a set with the function $z^2$ in the complex plane I have the following set: $G=\{ z \in \mathbb{C} : \Im{(z)}>0, \Re{(z)}<0 \}$ $f(z) = z^2$ I need to draw $ f(G) $ but I don't get a good answer using $ z=x+iy $ and trying to understand the complex plane with information on the cartesian plane. How do I go about it? I understand that G is the upper left quarter of the complex plane and I know the answer is the lower half plane but I need to see the steps. after that I need to write $ f(G) $. Would like to see that too. Follow up question: This time the function is $f(z) = log(z)$ and the set is: $G=\{ z \in \mathbb{C} : |z|>0, -\pi < arg(z) < \pi \}$ I don't have an intuition of what the log function does to the given set so I'd appreciate a calculation. AI: We may rewrite $$G=\{z\in\Bbb{C},|z|\gt 0,\,{\pi\over 2}\lt\arg{z}\lt\pi\}$$ Squaring means squaring the modulus and doubling the argument. So $$f(G)=\{z\in\Bbb{C},|z|\gt 0,\,\pi\lt\arg{z}\lt 2\pi\}$$ And so $f(G)$ is the lower half plane. For the follow up consider the following: $$\log\left(r e^{i\theta}\right)=\log{r}+i \theta$$ This means that the image under the logarithm of $G$ (the plane slit along the negative real axis) is the horizontal strip $\{z\in\Bbb{C} : -\pi\lt\operatorname{Im}{z}\lt\pi\}$
H: Power Series and Analyticity of a complex function I was studying about the connection of analytic functions and their power series representation. Finally, I came to an understanding that, if I am given a function, analytic at some point 'a', then I will be able to write a power series representation of that function, where that power series representation is convergent in some circle centered around that 'a'. Now, what about the behavior at points outside this circle of convergence? Can the function remain analytic at those points? In short, is it true that if a function having a power series representation about a point is not convergent at a point outside the radius of convergence, then we cannot say anything about the analyticity of that function at that point? Is my understanding correct? Or am I still missing the essence of the power series expansion? AI: Your understanding is correct. Suppose that you define$$\begin{array}{rccc}f\colon&\Bbb C&\longrightarrow&\Bbb C\\&z&\mapsto&\begin{cases}\frac1{1-z}&\text{ if }|z|<1\\0&\text{ otherwise.}\end{cases}\end{array}$$Then, on $D(0,1)$, you have$$f(z)=1+z+z^2+\cdots$$and the radius of convergence of the series $1+z+z^2+\cdots$ is $1$. But $f$ is not analytic on all of $\Bbb C$; it is not even continuous on the unit circle.
H: How does $A = P\cdot (1+r)^{n}$ become a graph function of $A = P\cdot (1+rt)$ The accumulated simple interest function is: $A = P\cdot (1+rt)$ When I tried to graph it, it failed. After I read some articles, they say that to graph the interest function one uses a different equation: $A = P\cdot (1+r)^{n}$ where $A$ is on the Y-axis and $n$ is on the X-axis. And it works. My question is: How do I work out a graphable function from a given function? AI: If you want to graph $A=P(1+rt)$ you multiply out the brackets first. $A(t)=P+Pr\cdot t$. Here $t$ is on the $x$-axis and $A$ on the $y$-axis. This is a linear function with the slope $Pr$ and the intercept $P$. (Note that $A=P(1+r)^n$ is the compound-interest formula, which is why it produces a curve rather than a straight line.) I have added the linear function (simple interest) to your graph.
H: Alternate way to solve $\lim\limits_{x \to 0} (\sin x) ^x$? My solution: $$ \lim_{x \to 0} (\sin x)^x = \lim_{x \to 0} e^{(x)(\ln \sin x)} = \exp \left( \lim_{x \to 0} (x) (\ln \sin x) \right)$$ Now we have $\lim_{x \to 0} (x) (\ln\sin x)$. Now we can say that the limit is $0$, as $\ln \sin x$ decreases (to $-\infty$) more slowly than $x$ goes to $0$. Question: Is there any other method to solve the above limit without using the arguments saying one function decreases slower? AI: I would write $x \ln (\sin x)=x \ln (\frac {\sin x} x) +x\ln x$ (note the limit must be taken as $x\to0^+$, so that $\sin x>0$). The first term tends to $(0)(0)=0$ so we only have to find $\lim x \ln x$. For this we can apply L'Hopital's Rule to $\lim \frac {\ln x} {1/x}$.
H: Solving $\frac{x^4}{4}+\frac{1}{y}=c$ for $y$ is giving a wrong answer I have a simple equation where I want to solve for $y$ but I am getting the wrong answer using my steps, with some slight sign changes. I know I am making a silly mistake somewhere but it's frustrating that I can't find it. Kindly take a look. NOTE: I know I can see steps in Symbolab but I want to know where I am wrong in my steps. Thanks $$\cfrac{x^4}{4}+\cfrac{1}{y} = c $$ The answer should be: $$y=-\cfrac{4}{x^4-4c}$$ but, as shown below, I am getting: $$y = \frac{4}{4c-x^4}$$ AI: Your solution is: $$y=\frac{4}{4c-x^4}$$ Let's factor out a negative sign from the denominator to get: $$y=-\frac{4}{x^4-4c}$$ which is exactly what Symbolab has. Your solution is identical.
H: If $\operatorname{Spec} A=\operatorname{Spec} B$, then $A$ is isomorphic to $B$ Proposition II.$3.2$ in Hartshorne Regarding to this question, I wonder why $\operatorname{Spec} A=\operatorname{Spec} B$, then $A$ is isomorphic to $B$. To be more precise, Let $A,B$ are rings, and suppose the schemes $\operatorname{Spec} A$ and $\operatorname{Spec} B$ are isomorphic as schemes. Then, I want to show $A$ and $B$ are isomorphic as rings. Of course, if we see $\operatorname{Spec} A$ and $\operatorname{Spec} B$ as just a set of prime ideals, then we cannot say $A$ and $B$ are isomorphic as rings. For example, a pair of another fields is an example. Thank you. AI: I would avoid saying that things are "equal", but rather that they are isomorphic via a named isomorphism $\newcommand{\Spec}{\operatorname{Spec}}\phi\colon\Spec A\to\Spec B$. If you consider the corresponding map on structure sheaves, and take global sections, you get the morphism $\phi^\#\colon B \to A$. By assumption, $\phi$ has a bilateral inverse, which yields a bilateral inverse to $\phi^\#$. Regarding your example concerning the spectra of two fields, it is misleading probably because you use a statement of the form "$\Spec k= \Spec k'$", parsing it as an equality of sets. If you instead say that $\Spec k \stackrel{\sim}\to \Spec k'$, via a scheme morphism $(\phi,\phi^\#)$ which admits an inverse, then your issue is resolved. Motto: «Equality evil, isomorphism good».
H: If $m\;|\;p^2k$ and $pk \lt m$, then $m$ must be $p^2k$. Let $p$ be a prime and $1\le k$ be an integer. If $m\;|\;p^2k$ and $pk \lt m$, how can I prove that $m$ must be $p^2k$? I looks like there is no other option for $m$ since $p$ is a prime, but I can't write a formal proof. Thanks for helping. AI: This is false. Take $m=18, p=3, k=4$. You have $m| p^2k$ and $m>pk$.
H: A question about order and prime. (group) Let $G$ be a group and let $a\in G$ have order $pk$ for some prime $p$, where $1\le k$. Prove that if there is $x \in G $ with $x^p=a$, then the order of $x$ is $p^2k$. My attempt: If $m$ is the order of $x$, then $m\;|\;p^2k$, which means there is an integer $1\le n$ such that $mn=p^2k$. Then neither $n\;|\;p$ nor $n\;|\;k$. If I want to prove that $n=1$, what should I do next? Thanks for helping!! AI: Since $a=x^p\in\langle x\rangle$, the order $pk$ of $a$ divides the order of $x$, which in turn divides $p^2k$ (because $x^{p^2k}=a^{pk}=1$). If the order of $x$ were $pk$ itself, then $a=x^p$ would have order $pk/\gcd(pk,p)=k<pk$, a contradiction. By the fundamental theorem of arithmetic, the only multiple of $pk$ that divides $p^2k$ and exceeds $pk$ is $p^2k$, so the order of $x$ is $p^2k$.
H: How to prove $|\Omega|^{-1/p}||u||_p\leq |\Omega|^{-1/q}||u||_q $? How to prove $|\Omega|^{-1/p}||u||_p\leq |\Omega|^{-1/q}||u||_q $? $u\in L^q(\Omega)$. I guess using Hölder's inequality the above inequality is true. But I could not properly arrange the terms to get what is required. Please give me a hint so that I can prove the above inequality? Any help/hint will be appreciated. AI: This is true only for $0<p<q$. By Hölder's inequality $\int |u|^{p} =\int (1) (|u|^{p})\leq (\int 1^{r})^{1/r} (\int |u|^{q})^{p/q} =|\Omega|^{1/r} (\int |u|^{q})^{p/q}$ where $\frac 1 r +\frac p q=1$. Hence $(\int |u|^{p})^{1/p} \leq |\Omega|^{1/rp} (\int |u|^{q})^{1/q}$. Just compute $r$ to finish the proof.
H: Given the vertices of a convex polytope, calculate its centroid I would like to calculate the centroid (center of mass in case of homogeneous materials) of a convex polytope (equivalent of polyhedron in $n-$dimensional space). The vertices are given and I can use a general programming language (without special mathematical libraries) for this purpose. I outlined the following concept: (1) Divide the polytope into mutually exclusive and collectively exhaustive simplices (equivalent of tetrahedron in n-dimensional space) the following way: (1.1) Take any of the vertices. (1.2) Find $n$ (count of dimensions of the space) neighbouring vertices (if there are more, it does not matter which ones). (1.3) Process the simplex determined by these $n+1$ points. (1.4) Remove the vertex chosen in 1.1 from the set. (1.5) Continue with any other vertex (taking one of those found in 1.2 improves efficiency, however, since you will have to find only one additional neighbouring vertex). (1.6) When only n+1 vertices are left, it is the last simplex; and after it you have covered the whole polytope. (2) Calculate the volume of the simplex with the formula: $\frac{\left|\det \begin{bmatrix} v_{0} & \cdots & v_{n} \\ 1 & \cdots & 1 \end{bmatrix}\right|}{n!}$ (3) Calculate the centroid of the simplex with the formula: $\frac{\sum_{i=0}^{n} v_{i}}{n+1}$ (4) Calculate an average from the values produced in 3 using the values received in 2 as weights. Based on a $2-$dimensional pilot this may work, however, imagining things in n-dimensional space is not my strength. Does this method determine the centroid? Is there a simpler or faster one (the number of vertices is a few thousand)? AI: This method fails in three dimensions and higher. Consider a regular octahedron. Choose a vertex. (By symmetry, it makes no difference which one you choose.) This vertex has four neighbors. Choose three of them. (Again by symmetry, it makes no difference which ones you choose.) The four vertices (your original choice plus the three neighbors) determine a tetrahedron. If you remove the interior of this tetrahedron from the interior of the octahedron, what is left is the interior of a non-convex polyhedron. What your algorithm does in step 1.4 is to remove not only the volume of the selected tetrahedron, but also the volume of the tetrahedron formed by your original chosen point, the neighbor that you did not choose, and the two common neighbors of these two points. All that is left is a square pyramid. If your next choice is a vertex on the base of the pyramid (as it will be if you choose the next vertex by "taking one of those found in 1.2"), you will end up processing all of the volume of the pyramid, but since you missed the volume of a tetrahedron you will find the centroid of the non-convex polyhedron formed by the three tetrahedra you processed, not the center of the octahedron. If your next choice is the apex of the pyramid, you will process only one more tetrahedron and will lose the volume of the remaining part of the pyramid when you delete the apex. You will reach step 1.6 because only four vertices are left, but actually they are four vertices of a square, not four vertices of a tetrahedron. The idea of "taking one of those found in 1.2" does not work as expected, however, because the original $n$ neighbors of the first vertex are not always neighbors of each other. You can find the centroid of the polytope by partitioning it into simplexes and taking a weighted average of their centroids using the formulas in the question. 
But you will need to ensure that the simplexes and their interiors contain the entire polytope and its interior and that there are no two simplexes whose interiors intersect.
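A small sketch of steps (2)-(4) from the question, applied to a decomposition that is known to be valid (two triangles tiling the unit square); note the absolute value around the determinant, since the determinant alone gives signed volume:

```python
import numpy as np
from math import factorial

def simplex_volume_and_centroid(verts):
    """verts: (n+1) points in R^n forming one simplex."""
    v = np.asarray(verts, dtype=float)
    mat = np.vstack([v.T, np.ones(len(v))])     # vertices as columns + row of 1s
    vol = abs(np.linalg.det(mat)) / factorial(v.shape[1])
    return vol, v.mean(axis=0)                  # centroid = vertex average

simplices = [[(0, 0), (1, 0), (1, 1)],
             [(0, 0), (1, 1), (0, 1)]]          # unit square as two triangles
vols, cents = zip(*(simplex_volume_and_centroid(s) for s in simplices))
print(np.average(cents, axis=0, weights=vols))  # [0.5 0.5]
```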
H: Is the spectral radius of $DA$ less than the one of $A$? Is the spectral radius of $DA$ less than the spectral radius of $A$ when $D$ is diagonal where all diagonal entries are nonnegative and less than 1? This is true when $A$ is normal, since $$ \rho(DA) \le \|DA\|\le \|D\| \| A\| \le \|A\| = \rho(A) $$ My guess is that it is false in general. Notice that it is enough to prove $\|(DA)^k\|\le \|A^k\|$ definitively in $k$. If we let $D$ have negative values, then it would imply that any sign change in any row does not change the spectral radius, that is preposterous. AI: This is false for every $n\ge2$. Pick any two vectors $u$ and $v$ such that $u_iv_i<0<u_jv_j$ for some $i\ne j$. Let $D=\operatorname{diag}(\operatorname{sign}(u_1v_1),\ldots,\operatorname{sign}(u_nv_n))$. Then $$ v^TDu=\sum_i\operatorname{sign}(u_iv_i)u_iv_i=\sum_i|u_iv_i|>\left|\sum_iu_iv_i\right|=|v^Tu|. $$ Therefore, when $A=uv^T$, we have $\rho(DA)=|v^TDu|=v^TDu>|v^Tu|=\rho(A)$. By the continuity of spectral radius, we may reduce the diagonal entries of $D$ and perturb $A=uv^T$ to obtain other counterexamples such that $|d_{ii}|$ can be smaller than $1$ and $\operatorname{rank}(A)$ can be any number ranging from $1$ to $n$. In fact, counterexamples exist even with nonnegative diagonal entries less than $1$: for $A=\begin{pmatrix}2&1\\-3&-2\end{pmatrix}$, which has eigenvalues $\pm1$, and $D=\operatorname{diag}(1-\epsilon,\,0)$ with small $\epsilon>0$, we get $\rho(DA)=2(1-\epsilon)>1=\rho(A)$. However, it is true that $\rho(DA)\le\rho(A)$ when we also have $A\ge0$ entrywise. This is because $(DA)^k\le A^k$ entrywise for every positive integer $k$, so that $\rho(DA)=\lim_{k\to\infty}\|(DA)^k\|_1^{1/k}\le\lim_{k\to\infty}\|(A)^k\|_1^{1/k}=\rho(A)$ by Gelfand's formula.
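A numerical check of the nonnegative-$D$ instance mentioned above (a sketch; the concrete matrices are the ones named in the answer):

```python
import numpy as np

A = np.array([[2.0, 1.0], [-3.0, -2.0]])    # eigenvalues +1 and -1
D = np.diag([0.9, 0.0])                     # nonnegative entries, all < 1
rho = lambda M: max(abs(np.linalg.eigvals(M)))
print(rho(A), rho(D @ A))                   # 1.0  1.8  -> rho(DA) > rho(A)
```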
H: How do you generate a normal subgroup from relations? From Rotman's Algebraic Topology: A group $G$ is defined by generators $X = \{x_k : k \in K\}$ and relations $\Delta = \{r_j = 1 : j \in J\}$ if $G \cong F / R$, where $F$ is the free group on $X$ and $R$ is the normal subgroup of $F$ generated by $\{r_j : j \in J\}$. The ordered pair $(X, \Delta)$ is called a presentation of $G$. What are the elements of $R$? What does it mean to generate a normal subgroup from a set of relations? Since I see "generated by $\{r_j : j \in J\}$" I'm assuming it means $\langle r_j : j \in J\rangle$, where $a \in \langle r_j : j \in J\rangle$ iff $a$ is a product of powers of the $r_j$. But since $r_j = 1, \forall j$, then doesn't every element reduce to the identity? AI: The elements $r_j$ are not equal to $1$ in the free group $F$. But by including them in $R$ they become the identity in the quotient $F/R$. To avoid confusion in notation, let me use $S$ for the set of relations $r_j$. So $S=\{r_j:j\in J\}$. So the set $S$ is a subset of the free group $F$ on $X$. These are "words" in $X$ that you wish to be the identity in the quotient $F/R$. In order to have well-defined quotient group, we need $R$ to be: a subgroup normal. So this is why we take the normal subgroup generated by $S$. By definition this is the subgroup generated by all conjugates $gr_jg^{-1}$ of the relations $r_j$ in $S$. So $$ R=\langle gr_jg^{-1}:j\in J,\space g\in F\rangle $$ Sometimes $S^F$ denotes the set of $F$-conjugates of elements in $S$, and so $R=\langle S^F\rangle$. For example, see https://proofwiki.org/wiki/Definition:Generated_Normal_Subgroup
H: Calculate $\exp \left[\begin{smallmatrix} 4 & 3 \\ -1 & 2 \end{smallmatrix}\right]$ Given $$ A= \begin{bmatrix} 4 & 3 \\ -1 & 2 \end{bmatrix} $$ how do I calculate $e^A$? I know the following formula: If $X$ is a $2 \times 2$ matrix with trace $0$, then $$e^X = \cos{\sqrt{\det(X)}} I_2 + \dfrac{\sin{\sqrt{\det(X)}}}{\sqrt{\det(X)}} X$$ where we interpret the coefficient of $X$ as $1$ if $\det(X)=0$. The hint that I got in order to calculate $e^A$ is to reduce the calculation to the case with trace $0$. I don't know how to do that (to reduce the calculation to the traceless case). Can anyone please help, I cannot proceed. AI: One result that will get you the final part of the way is the following: If $M$ and $N$ are square matrices that commute, then$$e^{M+N}=e^Me^N$$ And since any matrix commutes with multiples of the identity, can you think of a way to write $A$ as the sum of a traceless matrix and a multiple of the identity matrix?
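For reference, here is where the hint leads, checked numerically (a sketch; it goes beyond the hint and spells out the decomposition): $A=3I+B$ with $\operatorname{tr}B=0$ and $\det B=2$, so $e^A=e^3\left(\cos\sqrt2\,I+\frac{\sin\sqrt2}{\sqrt2}B\right)$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[4.0, 3.0], [-1.0, 2.0]])
B = A - 3.0 * np.eye(2)                  # traceless part; 3 = tr(A)/2
s = np.sqrt(np.linalg.det(B))            # sqrt(det B) = sqrt(2)
formula = np.exp(3.0) * (np.cos(s) * np.eye(2) + np.sin(s) / s * B)
print(np.allclose(expm(A), formula))     # True
```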
H: If the collection of subgroups of $G$ forms a chain, then $G$ is cyclic? If $G$ is a group such that for any two subgroups $H,K$ either $H\le K$ or $K\le H$ holds, then $G$ is cyclic. I am going to prove this is true for finite groups and false for infinite groups. Proof (for finite groups) Let $G$ be a finite group satisfying the condition. Step 1: To show $G$ must be a $p$-group. Let $|G|$ have at least two prime factors $p$ and $q$ (say, with some powers); then neither a Sylow-$p$ subgroup nor a Sylow-$q$ subgroup is a subgroup of the other, and so the condition does not hold, a contradiction. So if $G$ is finite and satisfies the condition then it is a $p$-group. Step 2: To show $G$ is cyclic. Now, let $|G|=p^n$ where $n\ge 2$. Then let $H$ be a subgroup of order $p^{n-1}$, which exists by Sylow's theorem. Then let $a\in G\setminus H$ and $K=\langle a \rangle$. Now $K\le H$ is not possible since $a\notin H$. So, $H\le K$ must hold. Then either $|K|=p^{n-1}$ or $p^n$. If $|K|=p^{n-1}$ then $H=K$, which again contradicts $a\notin H$. So we must have $|K|=p^{n} \Rightarrow G=K$, which proves $G$ is cyclic. So the result follows. An example for falsity in infinite groups Let $G=\{z \in \mathbb{C} : z^{2^n}=1 , n\in \mathbb{N}\}$, i.e. $G$ consists of all square roots, fourth roots of unity, etc. $G$ forms a group with respect to multiplication. Proof: Let $z_1,z_2 \in G$. Then $z_i^{2^{n_i}}=1,i=1,2$ for some $n_1,n_2 \in \mathbb{N}$. Then $(z_1z_2)^{2^{n_1+n_2}}=(z_1^{2^{n_1}}) ^{2^{n_2}}(z_2^{2^{n_2}}) ^{2^{n_1}} =1$. Of course, there is the identity $1$. Let us define $A_n =\{z\in \mathbb{C} : z^{2^n}=1\}$ where $n=1,2,3,\dots$ These $A_n$'s form a chain of subgroups given by $A_1 \subset A_2 \subset A_3\subset\cdots$ I just need to show there are no other proper subgroups. Let $A\lt G$ be finite. Let $n=\max\{i(z) : z\in A\}$, where $i(z)$ denotes the least $i$ with $z^{2^i}=1$. Then $A_n \subseteq A$ since $A$ is a group. The reverse inclusion $A\subseteq A_n$ is also true by the above chain and the nature of $n$. So $A=A_n$. Now, if $A$ is infinite then the set $\{i(z) : z \in A\}$ is unbounded above, and hence $A$ must contain $A_n$ for every $n$; but then $A$ is nothing but $G$. But evidently $G$ is not cyclic, not even finitely generated. Is there any logical fallacy in my work? Do you have a better proof/solution? Is there any generalisation of this result? Sorry if my work is clumsy as I am not used to writing sophisticated proofs. Thanks a lot for your time. AI: Your proof for finite groups is correct. However, it might be easier to point out that if $G$ is generated by $x_1,\dots,x_n$ then the subgroups $\langle x_i\rangle$ form a chain. Thus all but one of the $x_i$ are unnecessary, and $G=\langle x\rangle$. Your example is also correct. My proof above shows that, in such a group, every finitely generated subgroup is cyclic. Such groups are called locally cyclic. The easiest example of a locally cyclic group is the rationals under addition, but as you can see from that, not every locally cyclic group has your property. I'm not sure that I would accept your proof that $A=A_n$ though, as you don't seem to have demonstrated it beyond saying that it's clear. "$A_n\subseteq A$ since $A$ is a group" doesn't seem to make sense. You need to note that $A_n$ contains all elements of $G$ whose order divides $2^n$. 
Then $A\leq A_n$ by choice of $n$, and since $A_n$ is cyclic (and generated by any element of $A_n\setminus A_{n-1}$) $A$ contains all of $A_n$, since it contains a generator of $A_n$, again by choice of $n$. Notice that you may replace $2$ by $p$ in your example. These groups are called Prüfer groups. Edit: Notice that you can describe your group as a quotient of the dyadic rationals. Indeed, back in the 1930s, Baer proved that every locally cyclic group (which is necessarily abelian) is a quotient of a subgroup of the rational numbers.
H: Convergence/divergence of the improper integral $ \intop_{1}^{\infty}\sin\left(x^{p}\right)dx $ I have to determine if the improper integral $ \intop_{1}^{\infty}\sin\left(x^{p}\right)dx $ is convergent/divergent for any $ 0<p \in \mathbb{R} $ Here's what I've done: We can substitute $ x^{p}=y $ and then we'll get $ \intop_{1}^{\infty}\sin\left(x^{p}\right)dx=\frac1p\intop_{1}^{\infty}y^{\frac{1-p}{p}}\sin\left(y\right)dy $ Thus, for $ p>1 $ the integral will converge by Dirichlet's theorem. My intuition is that for $ p<1 $ the integral diverges. I'll write it again, let $ \frac{1-p}{p}=\alpha $ How do we prove that $ \intop_{1}^{\infty}x^{\alpha}\sin\left(x\right)dx $ diverges for $ \alpha > 0 $ ? I tried to show that $ x^{\alpha}\sin\left(x\right) $ will not follow Cauchy's condition but it got complicated. Thanks in advance. AI: $$ \int_{\pi}^{n\pi} x^{\alpha}\sin(x)dx=\sum_{k=1}^{n-1}\int_{k\pi}^{(k+1)\pi}x^{\alpha}\sin(x)dx=\sum_{k=1}^{n-1}(-1)^k\int_{0}^{\pi}(u+k\pi)^{\alpha}\sin(u)du $$ The integral $\int_{0}^{\pi}(u+k\pi)^{\alpha}\sin(u)du$ does not converge to $0$ as $k\rightarrow +\infty$ because, using $\sin(x)\geqslant\frac{2}{\pi}x$ for $x\in[0,\pi/2]$, $$ \int_{0}^{\pi}(u+k\pi)^{\alpha}\sin(u)du\geqslant\int_0^{\pi/2}(u+k\pi)^{\alpha}\sin(u)du\geqslant\frac{2}{\pi}\int_0^{\pi/2}(u+k\pi)^{\alpha}udu\geqslant\frac{2}{\pi}\int_0^{\pi/2}u^{\alpha+1}du\geqslant\frac{ (\pi/2)^{\alpha+1}}{\alpha+2} $$ Thus the integral $$\int_1^{\infty}x^{\alpha}\sin(x)dx$$ diverges for $\alpha>0$.
H: Function series I'm seeking assistance in correcting my inevitable mistakes and answers to my questions. $\textbf{Problem}$ a) Let $f_{k}(x) = 1, k\leq x \leq k+1$ and $0$ for all other values of $x$. Show that $(f_{k})$ converges uniformly to $0$ on all intervals $0<t\leq x \leq T < \infty$ b) Is it true that $\lim_{k \to \infty }\int_{0}^{\infty} f_{k}(x)dx = 0?$ c) Let $(f_{k})$ be a sequence of functions on the interval $(0,\infty)$ that converges uniformly towards $f$ on every interval $0<t\leq x \leq T < \infty$. Suppose furthermore that there exists a function $g$ on $(0,\infty)$ such that $|f_{k}(x)|\leq g(x)$ for all $k$ and $x$ and that $\int_{0}^{\infty}g(x)dx < \infty$. Show that $\lim_{k \to \infty} \int_{0}^{\infty}f_{k}(x)dx = \int_{0}^{\infty}f(x)dx$ $\textbf{Answers}$ a) I do not know how to prove this but when $k$ gets bigger and bigger the interval at which $f_k(x) = 1$ decreases which implies that $f_{k}(x) \to f(x) = 0, \text{when} \: k \to \infty$. And thus $||f_{k}(x)-0|| = max(0,\text{sup}_{x\in I} |f_{k}(x)|) = 0$ and thus $f_{k}$ converges uniformly on $I$. (Here $I$ is the interval stated above). $\textit{Is this reasoning correct? If so, how can I make it more rigorous?}$ b) Yes, since $f_{k}(x)$ converges uniformly on $I$. $\lim_{k \to \infty }\int_{0}^{\infty} f_{k}(x)dx = \int_{0}^{\infty} \text{lim}_{k \to \infty } f_{k}(x)dx = \int_{0}^{\infty}0dx = 0$ $\textit{Would this be true with the bounds replaced with any given interval}$ $J \subset I$? c) The fact that $(f_{k})$ is uniformly convergent implies that $\lim_{k \to \infty }\int_{0}^{\infty} f_{k}(x)dx = \int_{0}^{\infty} \text{lim}_{k \to \infty } f_{k}(x)dx = \int_{0}^{\infty} f(x)dx$ and since $|f_{k}(x)|\leq g(x)$ then $\int_{0}^{\infty} f(x)dx \leq \int_{0}^{\infty} |f(x)|dx \leq \int_{0}^{\infty} g(x)dx \leq \infty$ Have I done any mistakes, am I just incorrect in my reasoning or does everything look fine? Thankful for any assistance! AI: (a) Fix an interval $[t,T]$. Then eventually $k>T$, and thus $f_k\equiv 0$ on $[t,T]$ for all such $k$. Thus the sequence converges uniformly to $0$ there. (b) The second one is not correct, since $\int_{0}^{\infty}f_{k}(x)dx= \int_{k}^{k+1}f_{k}(x)dx=\int_{k}^{k+1}1dx=1$. Thus $\int_{0}^{\infty}f_{k}(x)dx=1\to1\neq 0 =\int_{0}^{\infty}0dx$. (c) See the dominated convergence theorem.
H: Negation of the Universal Subset definition I am trying to understand the use of quantifiers within the definition of a subset. The definition of a subset is: $$ A \subseteq B \equiv \forall x(x \in A \rightarrow x \in B) $$ I am confused about what happens when you negate the statement of a subset into an existential quantifier: $$ \neg\,\forall x(x \in A \rightarrow x \in B) \equiv \exists x(x \in A \land x \notin B) $$ How do these two have the same meaning? I am struggling to get to grips with the negated version. To clarify, I understand the conversion and can do that applying the relevant laws but in idiomatic English it makes no sense. For the Universal Quantifier, it means: for all $x$, if $x$ is a member of $A$, then $x$ is a member of $B$. However, for the Existential Quantifier: there exists some value of $x$ such that $x$ is a member of $A$ but is not a member of $B$? That surely goes against the principle of a subset? AI: Answer: My understanding was incorrect. When you negate a statement that has a quantifier, you look to say the opposite of the original meaning: the quantifier flips and the inner statement is negated, i.e. $\neg\forall x\,P(x)\equiv\exists x\,\neg P(x)$, together with $\neg(p\to q)\equiv p\land\neg q$. So the existential statement is not meant to describe a subset; it describes exactly the situation in which $A$ is not a subset of $B$. Quite surprising how I forgot about this.
H: Does automata theory have many interesting open questions? I'm studying Automata Theory and I love it. My question is, is there much more research to discover in automata theory, aside from quantum automata? Also, as a side question, are quantum automata the place to be for research in this field? AI: You will find a series of open problems in J.-É. Pin, Open problems about regular languages, 35 years later, The Role of Theory in Computer Science, Essays Dedicated to Janusz Brzozowski, World Scientific (2017), 153-176. Another famous open problem in automata theory is the Černý conjecture.
H: Distance between set and point, confused about partial derivatives. Let $H = \{(x,y,z)\ \in \mathbb{R}^{3}: x^2+y^2 - z^2 + 4 = 0\}$ Compute the shortest distance between H and point $p=(2,4,0)$. I am a bit confused because I tried a direct approach. $$ x^2+y^2 + 4 = z^2$$ Let $D(H,p) = \sqrt{(2-x)^{2}+(4-y)^{2} + x^{2} + y^{2}+4}$ So I tried to compute $$\frac{\partial D}{\partial x} = (\sqrt{2} (-1 + x))/\sqrt{12 - 2 x + x^2 - 4 y + y^2}$$ $$\frac{\partial D}{\partial y} = (\sqrt{2} (-2 + y))/\sqrt{12 - 2 x + x^2 - 4 y + y^2}$$ It seems not nice to compare with zero. Do you have another idea? AI: You are after the minimum of $(x-2)^2+(y-4)^2+z^2$ (no, I did not use the square root) under the restriction that $x^2+y^2-z^2=-4$. It is natural to use here the method of Lagrange multipliers. So, one has to solve the system$$\left\{\begin{array}{l}2(x-2)=2\lambda x\\2(y-4)=2\lambda y\\2z=-2\lambda z\\x^2+y^2-z^2=-4.\end{array}\right.$$Its only solutions are $(x,y,z,\lambda)=(1,2,\pm3,-1)$. So, the distance from $H$ to $(2,4,0)$ is$$\sqrt{(1-2)^2+(2-4)^2+(\pm3)^2}=\sqrt{14}.$$
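A numerical cross-check of the result (a sketch; the constraint $z^2=x^2+y^2+4$ is used to reduce to an unconstrained minimization, as in the question's direct approach):

```python
import numpy as np
from scipy.optimize import minimize

def dist2(v):                            # squared distance to (2, 4, 0)
    x, y = v
    return (x - 2) ** 2 + (y - 4) ** 2 + (x ** 2 + y ** 2 + 4)

res = minimize(dist2, x0=[0.0, 0.0])
print(res.x, np.sqrt(res.fun))           # [1. 2.]  3.7416... = sqrt(14)
```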
H: Does a collection of functions on $\mathbb{R}_+$ refer to the domain or the range? I do not fully understand a hint from Erhan Cinlar's Probability and Stochastics (it's on page 67). This is the hint: Let $F$ be the collection of all functions $f$ on $\bar{\mathbb{R}}_+$ having the form $f(x)=\sum\limits_{i=1}^{n}c_ie^{-r_ix}$ for some integer $n\geq 1$, constants $c_1,...,c_n$ in $\mathbb{R}$, and constants $r_1,...,r_n$ in $\mathbb{R}_+$. My question is, is $\bar{\mathbb{R}}_+$ the domain, or the range of the functions $f$ (in the collection $F$)? AI: To answer the question in one word: the domain. In the phrase "functions on $\bar{\mathbb{R}}_+$", the set $\bar{\mathbb{R}}_+$ is the domain; each $f\in F$ takes $x\in\bar{\mathbb{R}}_+$ as its input. The constants $c_1,\ldots,c_n$ are coefficients ranging over $\mathbb{R}$, and $r_1,\ldots,r_n$ are parameters ranging over $\mathbb{R}_+$; they single out which functions belong to $F$, and are not part of the domain or range of any individual $f$. I hope this helps!
H: Invariance and symmetric groups My (naïve) idea of symmetry is that it requires some form of invariance under transformation, but I struggle to see how this survives the idea of the symmetric group $ S_n$. Beyond the bijective property what, if anything, is invariant in a permutation? And if the answer is nothing, what separates a permutation from, say, the rotation of an equilateral triangle through an arbitrary angle? AI: For an equilateral triangle you can exchange any two points and it will still look the same. You could do the same thing with a tetrahedron, but you need 4 dimensions. With a square (4 points, 4 sides) it looks different if you swap two adjacent corners without also swapping the opposite corners. However if you also had the diagonals drawn for the square, then you could, as the points and all the connections would still look the same. The groups of symmetries generalise this. Imagine a number of points arranged equally spaced around a circle. Suppose there are lines joining every pair of points. If you swap any two points (moving the lines appropriately) it will still look the same. You actually don't need any particular arrangement, but the circle may make it a little easier to visualise. It is the collection of points that remains invariant. If you rotate a triangle through an arbitrary angle the vertices will probably not occupy the same 3 places.
H: What is the indefinite integral of $\int \left(-\frac{1}{5x}\right) dx$? Why does $\int \left(-\frac{1}{5x}\right) dx = -\frac{1}{5}\ln(x)+c$ and not $-\frac{1}{5}\ln(5x)+c$ instead? When I differentiate both solutions I get the same answer, $-\frac{1}{5x}$. Can anyone give me an explanation as to why? Thank you. AI: Both are correct: on one side, $$ \int -\frac{1}{5x}dx = -\frac15 \int \frac{1}{x}dx = -\frac15\ln(x) + c,$$ because of the linearity of integrals $\int af(x) dx = a\int f(x) dx$ for every constant number $a$ (in particular $a = -\frac15$ in this case). On the other side, you might also change variable, and set $u = 5x$, if you wish, but in that case $du = 5dx$ and therefore $$ \begin{split} \int - \frac{1}{5x} dx &= - \int \frac{1}{u} \frac{du}{5} \\ &= -\frac{1}{5} \ln(u) + c' \\ &= -\frac15\ln(5x) + c' = -\frac15 \ln(x) - \frac15\ln(5) + c' \end{split} $$ Since $\ln(5x) = \ln(5) + \ln(x)$ it's just a matter of choosing a different constant $c$ or $c'$. They clearly both give the same function if you differentiate, for they differ by the constant value $\ln(5)$ whose derivative is zero.
H: If $x$ and $y$ are integer variables that have value $0$ or $1$, then what does the expression $x + y - xy$ mean? Answer Options: $1.$ Logical AND $2.$ Logical OR $3.$ Nothing, it makes no sense $4.$ Logical implies I'm stuck between Logical OR and "Nothing, it makes no sense" because when I create a truth table for $x + y - xy$, it is equivalent to Logical OR. But also, "$-$" doesn't really mean anything in boolean logic, right? Please help me out here. AI: You're right that "$-$" doesn't really have an interpretation as a binary operation of booleans, but, as the truth table shows, this whole expression does make sense even if it contains a component that does not. Subtraction does make sense in the unary operation that $1-x$ is "not $x$" in booleans. Really, what is going on here is that we have the boolean identity: $$x\text{ or }y=\text{not }((\text{not } x)\text{ and }(\text{not }y))$$ and the right hand side encodes as $$1-(1-x)(1-y)$$ where all the subtractions have a clear interpretation. This just happens, in the world of integers, to be the same as $x+y-xy$ after distributing - even though this doesn't really make sense in the world of booleans.
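The truth table mentioned in the question, enumerated directly (a quick sketch):

```python
# Compare x + y - x*y against logical OR for all 0/1 inputs.
for x in (0, 1):
    for y in (0, 1):
        print(x, y, x + y - x * y, x | y)   # last two columns always agree
```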
H: Let $T:\mathbb{R}^n\to\mathbb{R}^m$ be a linear transformation. Prove that the range of T is a subspace of $\mathbb{R}^m$ with dimension at most $n$ I'm having some trouble with the second part of this question. I tried using the result that any $n+1$ vectors in $\mathbb{R}^n$ are linearly dependent, but no luck here. The first part is easy and here is my solution. The range of $T$ can be written as $S=\{A\mathbf{v}:\mathbf{v}\in\mathbb{R}^n\}$ where $A$ is the standard matrix of $T$. Then, $$\mathbf{0}\in S \:\text{since} \:\mathbf{0}=A\mathbf{0}$$ $$\text{Let}\: A\mathbf{u},\,A\mathbf{v}\in S\implies A\mathbf{u}+A\mathbf{v}=A\mathbf{u+v}\in S \:\text{since} \: \mathbf{u+v}\in\mathbb{R}^n$$ $$\text{Let}\,A\mathbf{v}\in S,\: c\in\mathbb{R}\implies c(A\mathbf{v})=A(c\mathbf{v})\in S \:\text{since} \: c\mathbf{v}\in\mathbb{R}^n$$ Please provide some hints for the second part of the question regarding the upper limit of the dimension of $S$. I think the proof wouldn't be very tedious or long, but instead I need to be able to use some key concepts. (I hope that the tips/ hints doesn't involve eigenvalues and stuff because I haven't learnt it yet. Thanks a lot!) AI: Hint: If $e_1,...,e_n$ are the vectors of a basis of $\mathbb{R^n}$ then show that the set $\{T(e_1),...,T(e_n)\}$ spans $S$.
H: A fiber bundle with trivial structure group is (isomorphic to) a trivial bundle. How do I show that a fiber bundle with trivial structure group is (isomorphic to) a trivial bundle? I say let $\phi$ and $\phi'$ be two charts over $U$. Then $\phi=\phi'$ since there is a continuous function $\theta_{\phi,\phi'}:U\to \{e\}$ such that $\phi'(u,f)=\phi(u,\theta_{\phi,\phi'}(u)\cdot f)=\phi(u,e\cdot f)=\phi(u,f)$ for all $u\in U$ and $f\in F $. How does this help me? AI: Let's say we've covered the base space of $p: Z\rightarrow X$ with a collection of trivializations, $f_i:p^{-1} (U_i ) \rightarrow U_i \times F$, and that all the transition functions are trivial. Recall that to get the total space back from the information of the transition functions, we glue $U_i \times F$ to $U_j \times F$ by the prescription $(x,t)\sim(x,\theta_{j,i}(x)\cdot t)$ for all pairs $(i,j)$. Since the transition functions are trivial we are gluing via the identity. We may define the function $T: Z \rightarrow X \times F$ via $z$ goes to $z'$, where we get $z'$ by taking a trivialization $f_i$ around $p(z)$ and letting $z'=f_i(z)$. This has an inverse defined in a similar way. The fact that this is well defined follows from the fact that we glued via the identity.
H: Relationship Between Moment-Generating Functions and Laplace Transforms According to Wikipedia, the moment-generating function $M_X (t)$ of a probability distribution $f_X (x)$ is given by $M_X (t) = \int_{-\infty}^\infty e^{tx}f_X (x)\ dx$. Is $t$ time? If so, why does it appear in the output of this transform rather than the input? In Differential Equations, the Laplace Transform transforms the time domain into the frequency domain. In this scenario, is the $x$ domain "one hop in the direction opposite the frequency domain" relative to the time domain, so that the Laplace Transform transforms the $x$ domain to the time domain, and a composition of two Laplace Transforms would transform the $x$ domain to the frequency domain? Why is $t$ negated relative to the Laplace Transform? My understanding from the derivation of a Laplace Transform from a power series is that making the exponent negative is a substitution choice. Doesn't choosing to make the exponent positive make the integral much less likely to converge? AI: In an MGF, $t$ isn't time or any other physical quantity, but a "dummy variable" whose dimension is inverse to that of $X$ (so if $X$ is dimensionless $t$ is too, but we certainly wouldn't pair a spatial length with time). The symbol $t$ is arbitrary; $k$ is often used instead, as is $s$. Similarly, a Laplace transform doesn't require a physical interpretation such as the time/frequency one you gave, but note again this relates two physical quantities whose product is dimensionless. (This requirement exists because we exponentiate such a product; $e^z=1+z+z^2/2+\cdots$ only makes sense for dimensionless $z$.) Any physical examples, such as time and frequency or length and wavenumber, result from some physical context in which a Laplace transform naturally arises. Actually, this is even more commonplace with Fourier transforms. The Laplace transform of a function $f(x)$ is defined as $\int_0^\infty e^{-sx}f(x)dx$, thus a function of $s$ (and if you prefer, change $x$ and $s$ to whichever letters you like). Note the integral is only over $x\ge0$, so the $e^{-sx}$ factor helps convergence if $\Re s$ is large enough. For example, the choice $f(x)=e^{ax}$ leads to a convergent Laplace transform provided $\Re s>\Re a$.
H: Limit of $\lim _{\left(x,y\right)\to \left(0,0\right)}\left(\left(xy\right)\ln \left(x^2+y^2\right)\right)$ I want to calculate limit of $\lim _{\left(x,y\right)\to \left(0,0\right)}\left(\left(xy\right)\ln \left(x^2+y^2\right)\right)$ using Squeeze theorem or using definition of limit. please help AI: HINT: Note that $|xy|\le \frac12(x^2+y^2)$ so that $$\left|xy\log(x^2+y^2)\right|\le (x^2+y^2)\left|\log\left(\sqrt{x^2+y^2}\right)\right|$$ Can you finish now?
H: Minimum distance of intersection between two spheres along a specified vector I have two spheres $(S_1, S_2)$ within a Cartesian space defined by their centroids $(p_1,p_2)$ and their radii $(r_1,r_2)$. $p_1$ is located at the origin of the coordinate frame such that $x_1^2 + y_1^2 + z_1^2 = r_1^2$. $p_2$ is located at the point $(x_0, y_0, z_0)$ such that $(x_2 - x_0)^2 + (y_2 - y_0)^2 + (z_2 - z_0)^2 = r_2^2$. $S_2$ lies in the path of $S_1$ as it is translated along the unit vector $V$ $=$ $[0$ $0$ $-1]$. Note that the centroids of the spheres are not coincident along the vector $V$ (see the attached figure), otherwise the solution would be trivial. I am trying to determine the translation $p_1 - t$ at which $S_1$ will 'collide' (i.e. intersect) with $S_2$ (see the linked image). I am happy to use a solution that assumes that $V$ is aligned to the coordinate axis (i.e. my example above) rather than any arbitrary vector. I know that the equivalent problem in two dimensions can be solved as follows $d = \min(p_y \pm \sqrt{(r_1+r_2)^2 - p_x^2})$, where $p_x = x_1 - x_0$, $p_y = y_1 - y_0$ and $d$ is the minimum distance along $V$ that $S_1$ intersects with $S_2$ Any help would be greatly appreciated! AI: Parametrize the center of the second sphere along the translation vector as $t$, for example $$\vec{c} = \vec{c}_0 + t \vec{c}_V$$ and solve for $t$ in $$\lVert \vec{c} \rVert = r_1 + r_2$$ Typically, it is easier to solve for the squared distance, i.e. $$\lVert \vec{c} \rVert^2 = \left(r_1 + r_2\right)^2$$ which is allowed because both sides are nonnegative anyway. If we assume $\vec{c}_0 = (x_0 , y_0 , z_0)$ and $\vec{c}_V = (0, 0, 1)$, then $$\lVert \vec{c} \rVert^2 = x_0^2 + y_0^2 + (z_0 + t)^2$$ and therefore $$\begin{aligned} x_0^2 + y_0^2 + (z_0 + t)^2 &= (r_1 + r_2)^2 \\ t^2 + 2 z_0 t + x_0^2 + y_0^2 - (r_1 + r_2)^2 &= 0 \\ t &= - z_0 \pm \sqrt{ (r_1 + r_2)^2 - x_0^2 - y_0^2 } \\ \end{aligned}$$
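A minimal sketch of the resulting formula (the function name and test values are illustrative, not from the original exchange):

```python
import math

def collision_t(x0, y0, z0, r1, r2):
    """Times t at which the spheres first/last touch, for motion of the
    relative center along +z, per the quadratic solved above."""
    disc = (r1 + r2) ** 2 - x0 ** 2 - y0 ** 2
    if disc < 0:
        return None                      # the spheres never touch
    root = math.sqrt(disc)
    return -z0 - root, -z0 + root        # entry and exit times

print(collision_t(1.0, 1.0, -5.0, 1.0, 1.0))   # (~3.586, ~6.414)
```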
H: How $\{am + pn : m, n \in \mathbb{Z}\}=\langle 1 \rangle$? I don't understand how $\{am + pn : m, n \in \mathbb{Z}\}$ is equal to $\langle 1 \rangle$; doesn't $\langle 1 \rangle$ contain all the integers of $\mathbb{Z}$? The passage I got it from: Prime ideal property. If $p$ is prime and the ideal $\langle p \rangle$ contains $\langle ab \rangle$, then $\langle p \rangle$ contains $\langle a \rangle$ or $\langle p \rangle$ contains $\langle b \rangle$ Suppose $\langle a \rangle \nsubseteq \langle p \rangle$, so we have to prove $\langle b \rangle \subseteq \langle p \rangle$. Proof: Since the ideal $\{am + pn : m, n \in \mathbb{Z}\}$ contains both $\langle p \rangle$ and $\langle a \rangle$, and $\langle a \rangle \nsubseteq \langle p \rangle$, $\{am +pn : m,n \in \mathbb{Z}\}$ can only equal $\langle 1 \rangle$. This means $1 = am+pn$ for some $m, n \in \mathbb{Z}...$ AI: First $\langle1\rangle=\Bbb{Z}$. Next $\langle a\rangle\not\subseteq \langle p\rangle$ implies $p\not\mid a$. Thus $a$ and $p$ are relatively prime. So there exists integers $x$ and $y$ such that $ax+py=1$. This gives any integer $n$ can be written as integral linear combination of $a$ and $p$, since $n=anx+pny$. Thus $\Bbb{Z}\subseteq \{am+pn:m,n\in\Bbb{Z}\}$. And $\{am+pn:m,n\in\Bbb{Z}\}\subseteq \Bbb{Z}$ very easily. So $\Bbb{Z}=\langle1\rangle= \{am+pn:m,n\in\Bbb{Z}\}$.
H: Let $X$ be a Banach space and $E$ a linear subspace. Show there exists a surjective isometry $\phi : E^* \rightarrow \overline{ E}^* $ Let $X$ be a Banach space and $E$ a linear subspace of $X$. Show there exists a surjective isometry $\phi : E^* \rightarrow \overline{ E}^* $ I think it could be a derived fact of the Goldstine lemma or something related to the weak topology but I am really lost at it. Could you please give some guidelines? AI: Define $$\phi: E ^* \to \overline{E}^*: f \mapsto (\phi_f: \overline{E} \to \Bbb{K}: e \mapsto \lim_n f(e_n))$$ where $(e_n)\subseteq E$ with $e= \lim_n e_n$. Note first that $\lim_n f(e_n)$ exists because $f$ is bounded and linear, hence uniformly continuous, so $(f(e_n))_n$ is a Cauchy sequence in the complete field $\Bbb{K}$. Next, we show that this is well-defined: if $e= \lim_n e_n = \lim_n e_n'$, then $\lim_n (e_n-e_n') =0$, so $\lim_n (f(e_n)-f(e_n'))=0$. Hence $\lim_n f(e_n) = \lim_n f(e_n')$. Check that $\phi$ satisfies the required properties.
H: Question on showing invertibility of function $\Phi: \mathcal{L}(V,W) \to M_{m \times n}(F)$ in regards to establishing an isomorphism. My question comes from a proof of establishing that the function $\Phi: \mathcal{L}(V,W) \to M_{m\times n}(F)$ is an isomorphism. The statement of the theorem comes from Linear Algebra by Friedberg, Insel, and Spence. What follows is the statement of the theorem and its proof. I understand the uniqueness portion of this proof. What I'm not grasping is how showing the existence of this matrix establishes that $\Phi$ is one-to-one and onto? AI: For injectivity: assume $\Phi(T)=0$; then $[T]^\gamma_\beta=0 \Rightarrow T=0$. So $N(\Phi)=\{0\}$ $\Rightarrow$ $\Phi$ is injective. For surjectivity: take a matrix and define a linear transformation from it. How are the two related? The linear transformation must be defined (on the basis $\beta$) in such a way that its matrix relative to the bases $\beta$ and $\gamma$ is the matrix that you started with. Can you show that $\Phi$ is a linear transformation?
H: $G= \langle a, b : a^{7} = b^{3} = 1,\ b^{-1}ab = a^{2} \rangle$ and commutator group Consider the $a$ and $b$ the following permutation in $S_{7}$: $$a = (1\ 2\ 3\ 4\ 5\ 6\ 7 ),\ b = (2\ 3\ 5)(4\ 7\ 6)$$ Consider the group $G = \langle a, b \rangle$. I know that $a^{7} = b^{3} = 1$ and $b^{-1}ab = a^{2}$. Moreover, with these relations we can say that $|G| = 21$. Well, is simple to see that $\langle a \rangle \leq G'$, because $$[a, b] = a^{-1}b^{-1}ab = a^{-1}a^{2} = a$$ I'd like to know that $G' = [G, G] = \langle a \rangle$. Someone can help me? Thank you. AI: To see the other inclusion, consider the map $G\rightarrow\mathbb{Z}_3$ sending $a$ to $0$ and sending $b$ to $1$. This assignment preserves the relations, and so it induces a homomorphism from $G$ to $\mathbb{Z}_3$. Its kernel coincides with $\langle a\rangle$. Therefore $G/\langle a \rangle\cong\mathbb{Z}_3.$ By definition of the commutator subgroup (as the smallest normal subgroup whose quotient is an abelian group), we obtain $G'\leq\langle a\rangle$, and together with the inclusion $\langle a\rangle\leq G'$ above, $G'=\langle a\rangle$.
H: Proving $\frac{1}{2^n}\sum_{z\in\{0,1\}^n} (-1)^{z\cdot (x\oplus y)}=\delta_{xy}$, where $x\oplus y$ is the bitwise sum In quantum algorithms I often find this identity: $$\frac{1}{2^n}\sum_{z\in\{0,1\}^n} (-1)^{z\cdot (x\oplus y)}=\delta_{xy}$$ where $x\oplus y$ is the bitwise sum. I am not able to prove in general the case in which $x\neq y$, I can only show that it works for particular sets (e.g. $\{0,1\}^2$, $\{0,1\}^3$). AI: I am not able to prove in general the case in which $x\ne y$ Suppose $x\ne y$, and let $a:=x\oplus y.$ Then $a$ has a leftmost $1$-bit, say in position $p$. For any $z$ whose $p$th bit is $1$ (i.e. matching $a$ at that bit), let $f_a(z)$ be the result of flipping that bit in $z$ (so as to not match $a$ at that bit). Now, among the $2^n$ $z$ values, exactly half of them match the leftmost $1$-bit of $a$ and half do not; furthermore, $z\cdot a$ and $f_a(z)\cdot a$ have opposite parity, hence $$\begin{align}\sum_{z\in\{0,1\}^n} (-1)^{z\cdot a}&=\sum_{z\in\{0,1\}^n:\text{$z$ matches the leftmost $1$-bit in $a$}} \left((-1)^{z\cdot a}+(-1)^{f_a(z)\cdot a}\right)=0. \end{align}$$
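A brute-force verification of the identity for $n=3$ (a sketch; the sum is compared against $2^n\delta_{xy}$ to keep the arithmetic in integers):

```python
from itertools import product

n = 3
def dot(z, a):                          # integer dot product of bit tuples
    return sum(zi * ai for zi, ai in zip(z, a))

for x in product((0, 1), repeat=n):
    for y in product((0, 1), repeat=n):
        a = tuple(xi ^ yi for xi, yi in zip(x, y))      # x XOR y bitwise
        s = sum((-1) ** dot(z, a) for z in product((0, 1), repeat=n))
        assert s == (2 ** n if x == y else 0)
print("identity verified for n =", n)
```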
H: Is there a formula $A(x_1,\ldots,x_k)$ with k variables that's true for a truth assignment o exactly when a prime number of variables are true under o Answer Options: Yes, such a formula exists in principle, but currently, no one knows how to construct it Yes, one can construct such a formula, though it would get quite unwieldy when k is large No, there are far too many primes for this to exist I have no idea what the answer is. I thought that primality was a concept that is too complicated for propositional logic. Please help. AI: HINT: Yes, primality in general is not something you want to use propositional logic for, but given any specific $k$, there can only be a finite number of primes less than or equal to $k$. For each such prime $p$, one can write a disjunction over all $p$-element subsets of the variables of the conjunction asserting that exactly the variables in the subset are true; the disjunction of these over all primes $p\le k$ is the desired (very unwieldy) formula.
H: How do I find the local max and min I'm stuck on this question and have no idea how to find the local max and min of $$F(x)=\int_a^x \frac{t^2+2t-24}{1+\cos^{2}t}\, dt$$ I'm not sure where to start with this one since the integral doesn't have an exact boundary. AI: You can use the fundamental theorem of calculus, which says $$\frac{d}{dx}\left\{\int_{a}^{x} f(t)\,dt\right\} = f(x)$$ For us this means $$F'(x) = \frac{x^2+2x - 24}{\cos^2(x)+1}$$ The denominator is always $\geq 1$, so the derivative can be $0$ only when $x^2+2x-24=0$, which by completing the square is when $$(x+1)^2-25 = ((x+1)-5)((x+1)+5)=0$$ In other words $\cases{x_1 = -6\\x_2 = 4}$ Now it is enough to note that the coefficient of $x^2$ is positive to conclude that the derivative is negative between the roots and positive outside them. This means the first point must be a local maximum and the second a local minimum. What remains is to evaluate the integral at these two points, if you need the function values there, that is.
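A numerical sanity check (my sketch; I take $a=0$ for concreteness, since the lower limit only shifts $F$ by a constant):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda t: (t**2 + 2*t - 24) / (1 + np.cos(t)**2)
F = lambda x: quad(integrand, 0.0, x)[0]

h = 1e-6
for x in (-6.0, 4.0):
    print(x, (F(x + h) - F(x - h)) / (2 * h))   # both ~ 0: critical points
print(F(-6.0), F(4.0))                          # F(-6) > F(4): max, then min
```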
H: For which values of $\alpha$ is the matrix $ B = \Big(\begin{matrix} A & -A\\ -A & \alpha A \end{matrix}\Big)$ (strictly) positive definite? I'm trying to solve the following problem: given $A \in \mathbb{R}^{n \times n}$ symmetric positive definite, I have to find the values of $\alpha \in \mathbb{R}$ such that the matrix $ B = \Big(\begin{matrix} A & -A\\ -A & \alpha A \end{matrix}\Big)$ is positive definite. I also noticed that $B$ is symmetric, so I tried to find its Cholesky decomposition by block products, but I wasn't able to do so. AI: $$ \left( \begin{array}{rr} I & I \\ I & 0 \\ \end{array} \right) \left( \begin{array}{rr} A & -A \\ -A & \alpha A \\ \end{array} \right) \left( \begin{array}{rr} I & I \\ I & 0 \\ \end{array} \right) = \left( \begin{array}{cc} (\alpha-1)A & 0 \\ 0& A \\ \end{array} \right) $$ Since the outer matrix $P=\left(\begin{smallmatrix} I & I \\ I & 0 \end{smallmatrix}\right)$ is symmetric and invertible, this is a congruence $P^TBP$, which preserves positive definiteness. Hence $B$ is positive definite if and only if both diagonal blocks $(\alpha-1)A$ and $A$ are, i.e. if and only if $\alpha > 1$.
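A quick numerical illustration of the conclusion (my sketch, with a randomly generated symmetric positive definite $A$):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)          # symmetric positive definite

def is_pd(X):
    return bool(np.all(np.linalg.eigvalsh(X) > 0))

for alpha in (0.5, 1.0, 1.5, 3.0):
    B = np.block([[A, -A], [-A, alpha * A]])
    print(alpha, is_pd(B))           # True exactly when alpha > 1
```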
H: Show that $P(A\mid B) > P(A\mid B^{c}) \implies P(B\mid A) > P(B\mid A^{c})$ My initial thought is to start from $P(A\mid B)$ and $P(A\mid B^{c})$ and transform them to expressions involving $P(B\mid A)$ and $P(B\mid A^{c})$ respectively. $P(A\mid B) = \dfrac{P(A)P(B\mid A)}{P(B)}$ $P(A\mid B^{c}) = \dfrac{P(A) - P(B) + P(A^{c})P(B\mid A^{c})}{P(B^{c})}$ The expressions get quite messy when I try to compare them, and I am wondering if there is a better way. AI: To begin with, notice that \begin{align*} P(A|B) - P(A|B^{c}) > 0 & \Longleftrightarrow \frac{P(A\cap B)}{P(B)} - \frac{P(A\cap B^{c})}{P(B^{c})} > 0\\\\ & \Longleftrightarrow \frac{P(B^{c})P(A\cap B) - P(B)P(A\cap B^{c})}{P(B)P(B^{c})} > 0\\\\ & \Longleftrightarrow P(A\cap B) - P(B)(P(A\cap B) + P(A\cap B^{c})) > 0\\\\ & \Longleftrightarrow P(A\cap B) - P(B)P(A) > 0 \end{align*} Applying the same manipulation to the difference we want to bound, we obtain \begin{align*} P(B|A) - P(B|A^{c}) & = \frac{P(A\cap B)}{P(A)} - \frac{P(A^{c}\cap B)}{P(A^{c})}\\\\ & = \frac{P(A^{c})P(A\cap B) - P(A)P(A^{c}\cap B)}{P(A)P(A^{c})}\\\\ & = \frac{P(A\cap B) - P(A)(P(A\cap B) + P(A^{c}\cap B))}{P(A)P(A^{c})}\\\\ & = \frac{P(A\cap B) - P(A)P(B)}{P(A)P(A^{c})} \end{align*} which is positive due to the first result.
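A concrete numerical check never hurts (the joint probabilities below are my own choices, not from the original):

```python
# joint table: P(A and B), P(A and B^c), P(A^c and B), P(A^c and B^c)
pAB, pABc, pAcB, pAcBc = 0.3, 0.2, 0.1, 0.4
pA, pB = pAB + pABc, pAB + pAcB

print(pAB / pB > pABc / (1 - pB))    # P(A|B) > P(A|B^c): True
print(pAB / pA > pAcB / (1 - pA))    # P(B|A) > P(B|A^c): True, as proved
```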
H: Construct a set of the real line such that the nth derived set is empty. I am self-studying from the book Elementary Real and Complex Analysis but need some help with the following exercise. Let $A'$ denote the set of all limit points of a given subset $A$ of a metric space $M$ and let $A^{(n)}=(A^{(n-1)})'$. Given any $n$, construct a set $A$ on the real line such that $A^{(n)}$ is nonempty while $A^{(n+1)}$ is empty. First, I noticed that for $A= \{\frac{1}{m} \mid m \in \mathbb{N}\}$, we have $A'=\{0\}$ and $A'' = \emptyset$. Therefore, I proposed the set $A=\{\frac{1}{m_1} + \cdots + \frac{1}{m_n} \mid m_i \in \mathbb{N} \}$ in order to reach the simple case after some steps. While it is clear that any sum $\frac{1}{m_1} + \cdots + \frac{1}{m_{n-1}}$ is a limit point of $A$, I cannot prove that there are no other limit points. I also failed to prove this by induction. AI: You do want to start with a simple sequence. Then make each point of the sequence the limit of another simple sequence. Then make each of the newly added points the limit of yet another simple sequence. Keep going. Here’s one way to do it all at once; I’ll leave it to you to verify the details. For $n\ge 1$ let $\Sigma_n$ be the set of $n$-tuples of positive integers. For each $\langle k_1,\ldots,k_n\rangle\in\Sigma_n$ let $$x_{\langle k_1,\ldots,k_n\rangle}=2^{-k_1}+2^{-(k_1+k_2)}+\ldots+2^{-(k_1+k_2+\ldots+k_n)}\;.$$ Let $L_n=\{x_{\langle k_1,\ldots,k_n\rangle}:\langle k_1,\ldots,k_n\rangle\in\Sigma_n\}$. Finally, for $n\ge 0$ let $$A_n=\{0\}\cup\bigcup_{k=1}^{n+1}L_k\;.$$ Show that $A_0'=\{0\}$, and $A_n'=A_{n-1}$ for $n\ge 1$. I suggest drawing pictures of the first two or three of these sets to get a better idea of what’s going on here. You will find it useful to show that $x_{\langle k_1,\ldots,k_n,k\rangle}<x_{\langle k_1,\ldots,k_n-1\rangle}$ for all $k\ge 1$ and $k_n\ge 2$.
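To get a feel for the picture, here is a tiny script (my sketch) printing the first two layers: the points of $L_2$ accumulate at each point of $L_1$, and $L_1$ itself accumulates at $0$.

```python
L1 = sorted(2.0 ** -k for k in range(1, 7))
L2 = sorted(2.0 ** -k1 + 2.0 ** -(k1 + k2)
            for k1 in range(1, 7) for k2 in range(1, 7))
print(L1)
print([x for x in L2 if abs(x - 0.5) < 0.1])   # cluster at 1/2 from above
```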
H: What is the maximum possible value of $y$ Let $x$ be a prime number and $y$ be an integer. If the following expression is an integer, then find the maximum possible value of $y$. $$\frac{1}{xy}+\frac{2}{xy}+\frac{3}{xy}+\frac{4}{xy} + \cdots + \frac{54}{xy}+\frac{55}{xy}$$ My answer: $$\frac{1}{xy}+\frac{2}{xy}+\frac{3}{xy}+\frac{4}{xy}+\cdots+\frac{54}{xy}+\frac{55}{xy}=\frac{2^2 \times 5 \times 7 \times 11}{xy}$$ Obviously $x\in\{2,5,7,11\}$, and for $x \neq 2$, $y$ can't be a factor of $x$ because then the expression is not an integer. So $y \neq 5,7,11$, and for $x=2$, for the same reason, $y \neq 4$. So the maximum of $y$ should be $2$. Is $y=2$ really the maximum value of $y$, or am I missing something? Thank you! AI: The correct value should be $y = 770$. You have almost solved the problem. Once you see that the sum is equal to $(2^2 \times 5 \times 7 \times 11)/xy,$ you can just set $x = 2$ so that $y = 11 \times 7 \times 5 \times 2 = 770$. This satisfies the desired constraints, and it's clear that we can't do any better (we've exhausted all of the prime factors available, and changing the value of $x$ will only decrease the value of $y$). Note that the expression is still an integer: $$\frac{2^2 \times 5 \times 7 \times 11}{xy} = \frac{1540}{2 \times 770} = 1.$$
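The arithmetic is easy to double-check (my addition):

```python
s = sum(range(1, 56))              # 1 + 2 + ... + 55
print(s)                           # 1540
print(s == 2**2 * 5 * 7 * 11)      # True
print(s // (2 * 770))              # x = 2, y = 770 gives the integer 1
```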
H: Is there a transformation matrix for a given mapping of vectors I have vectors $v_1, v_2, v_3$ in $\Bbb{R}^3$ and vectors $w_1, w_2, w_3$ in $\Bbb{R}^4$. There are three mappings: from $v_1$ to $w_1$, from $v_2$ to $w_2$, and from $v_3$ to $w_3$. The question is, is there a linear map $\phi$ that maps those vectors in that way? My first thought is, if $v_1, v_2, v_3$ are linearly independent and $w_1, w_2, w_3$ are linearly independent, then we can have just the transformation matrix $4 \times 3$ with the vectors $w$ as columns. But what if the $v$-vectors are not linearly independent? What if the $w$-vectors are not linearly independent? How do I find a transformation matrix? AI: If $v_1,v_2,v_3$ are linearly independent then, irrespective of the choice of $w_1,w_2,w_3$, there is such a linear map $\phi$. But if $v_1,v_2,v_3$ are not linearly independent, then it depends on $w_1,w_2,w_3$. Edit Suppose $v_1=(1,0,0)$ and $v_2=(2,0,0)$; then $v_1$ and $v_2$ are linearly dependent because $v_2=2v_1$. So now if there exists such a linear map $\phi$, then $\phi(v_2)=\phi(2v_1)=2\phi(v_1)$. This forces $w_2$ to be equal to $2w_1$. So in this example, if your chosen $w_2\neq 2w_1$, then there will be no such linear map $\phi$.
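A note on actually computing the matrix (my sketch, with made-up vectors): the matrix has the $w_i$ as columns only when the $v_i$ happen to be the standard basis; in general, when the $v_i$ are linearly independent, one solves $\Phi V = W$ column-wise, i.e. $\Phi = W V^{-1}$ with $V = [v_1\ v_2\ v_3]$ and $W = [w_1\ w_2\ w_3]$.

```python
import numpy as np

V = np.array([[1, 0, 1],           # columns are v1, v2, v3
              [1, 1, 0],
              [0, 1, 1]], dtype=float)
W = np.array([[1, 0, 2],           # columns are w1, w2, w3 (arbitrary)
              [0, 1, 0],
              [3, 1, 1],
              [0, 0, 5]], dtype=float)

Phi = W @ np.linalg.inv(V)         # 4x3; V is invertible since det(V) = 2
assert np.allclose(Phi @ V, W)     # phi(v_i) = w_i for each i
```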
H: Trouble proving the equality when asked to compute the operator norm $\phi : \ell^{2} \to \mathbb R$ where $\phi(x)=\sum \frac{x_{n}}{n}$ Compute the operator norm of $\phi : \ell^{2} \to \mathbb R$ where $\phi(x)=\sum\limits_{n \in \mathbb N} \frac{x_{n}}{n}$ My proof so far: $\lvert \phi(x)\rvert=\lvert\sum\limits_{n \in \mathbb N} \frac{x_{n}}{n}\rvert\leq\sum\limits_{n \in \mathbb N} \lvert\frac{x_{n}}{n}\rvert \leq (\sum\limits_{n \in \mathbb N} x_{n}^2)^{\frac{1}{2}}\cdot (\sum\limits_{n \in \mathbb N} \frac{1}{n^2})^{\frac{1}{2}}\implies \lvert \lvert\phi\rvert\rvert_{*}\leq (\sum\limits_{n \in \mathbb N} \frac{1}{n^2})^{\frac{1}{2}}=:M$ Now, it is generally difficult for me to prove the reverse inequality, as I always need to normalize the sequence, i.e. take $(x^{n})_{n}\subseteq B_{1}^{\lvert \lvert \cdot \rvert \rvert_{2}}(0)$. I cannot find a way to satisfy the restriction to the unit ball while still approximating $M$. In other spaces like $\ell^{1}, \ell^{\infty}$ this is a lot easier. Any ideas/hints? AI: Consider the sequence $\left(\frac1{Mn}\right)_{n\in\Bbb N}$. Then its norm is $1$ and $\phi\left(\left(\frac1{Mn}\right)_{n\in\Bbb N}\right)=M$. So $\|\phi\|\geqslant M$.
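As a numerical cross-check (my sketch): $M=\left(\sum_n 1/n^2\right)^{1/2}=\pi/\sqrt{6}$, and the normalized sequence $x_n = 1/(Mn)$ attains it.

```python
import math

N = 10 ** 6                                    # truncation level
M = math.sqrt(sum(1 / n**2 for n in range(1, N + 1)))
print(M, math.pi / math.sqrt(6))               # both ~ 1.2825

x = [1 / (M * n) for n in range(1, N + 1)]
print(math.sqrt(sum(t * t for t in x)))        # ||x||_2 ~ 1
print(sum(t / n for t, n in zip(x, range(1, N + 1))))   # phi(x) ~ M
```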
H: Solving $\cos(z) = \frac{5}{2}$ I'm given $$\cos(z) = \frac{5}{2}$$ and I'm trying to solve for $z$ but I keep going in circles. I know $\cos z = 5/2 = \frac12(e^{iz}+e^{-iz})$ so then $e^{iz}+e^{-iz} = 5$, but then I'm stuck. AI: Taking $t=e^{iz}$ we get $$t+\frac{1}{t}=5 \implies t^2-5t+1=0 \implies t_{1,2} = \frac{5 \pm \sqrt{21}}{2}$$ Both roots are real, positive, and reciprocals of each other (their product is $1$). Since $e^{iz}=t>0$, we get $iz=\ln t+2k\pi i$, i.e. $z = 2k\pi - i\ln t$ with $k\in\mathbb{Z}$, so the solutions are $$z = 2k\pi \pm i\ln\frac{5+\sqrt{21}}{2}, \qquad k\in\mathbb{Z}.$$
H: Spectral norm, eigenvalues range I stumbled upon a property in the solutions of some exercises which stated that if the Hessian of a possibly non-convex function $f(x)$ is bounded in spectral norm, $$ \|\nabla^2f(x)\|_2 \leq L, $$ then its eigenvalues lie in the interval $[-L, L]$. I fail to understand, or rather I am unable to find, where this property comes from. I looked through many materials about the spectral norm and the spectral radius, and I think at this point I am completely confused. I know that the spectral norm is the maximal singular value of a matrix. In this case, does it mean that the Hessian is symmetric, so the singular values are just the absolute values of the eigenvalues? How do we go further with that to get the interval? I get the upper bound of the interval — it's obvious — but why the lower bound? Thank you in advance for pointing me to the right sources or directly answering. AI: If $A v = \lambda v$, then $\|Av\| =|\lambda| \|v\|$, and so if $\|A\|$ is the corresponding induced norm we have $\| A\| \ge |\lambda|$ for any eigenvalue. Note that the spectral norm is induced by the Euclidean norm. Since the Hessian is real symmetric, the eigenvalues are real, and so for any eigenvalue $\lambda$ we have from the above that $\lambda \in [-\|A\|,\|A\|]$.
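A quick numerical illustration (my sketch) with a random symmetric matrix standing in for the Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((5, 5))
H = (H + H.T) / 2                      # symmetric, like a Hessian
L = np.linalg.norm(H, 2)               # spectral norm = largest singular value
lam = np.linalg.eigvalsh(H)
print(L, lam, bool(np.all(np.abs(lam) <= L + 1e-12)))   # True
```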
H: Example of a ring with a unique two-sided maximal ideal which is not a local ring (that is, it has more than one left or right maximal ideal). Let $R$ be a ring (possibly non-commutative). Definition: $R$ is called a local ring if it has a unique left (and equivalently right) maximal ideal. I am looking for an example of a ring (obviously non-commutative) which has a unique two-sided maximal ideal but is not local. AI: The ring of $2×2$ matrices over the reals is simple, so its only maximal two-sided ideal is $0$. But it has more than one maximal left (right) ideal — for instance, the matrices whose first column is zero and the matrices whose second column is zero form two distinct maximal left ideals — so it is not local.
H: Derivative of a matrix with respect to elements of another matrix I am trying to take the "derivative" of a matrix $M = ABB^TA^T$ with respect to the inner matrix $B$, where $A$ is $n\times n$ and $B$ is $n\times m$ with $m \leq n$. In other words, looking for a closed form solution to $\frac{\partial M_{ab}}{\partial B_{ij}}$ for all $1\leq a,b,i \leq n$ and $1\leq j \leq m$ Eventually, the goal would be to use this to find the derivative of the sum of all elements of $M$ with respect to the elements of $B$. So if $\mathbf{1}$ is the $n\times 1$ vector of $1$'s, then let: $f(B) = \sum_{a=1}^n \sum_{b=1}^n M_{ab} = \mathbf{1}^T A B B^T A^T \mathbf{1}$ And my end goal is to find $\frac{\partial f(B)}{\partial B_{ij}}$. Of course, since the derivative of a sum is the sum of derivatives, I should be able to find the full solution from a closed-form expression for $\frac{\partial M_{ab}}{\partial B_{ij}}$ This seems like it should have a simple solution but I have been stuck on it for a while. Any help is welcome and appreciated. AI: Use a colon to denote the trace/Frobenius product, i.e. $$A:B = {\rm Tr}(A^TB) = {\rm Tr}(AB^T) = B:A$$ and use it to rewrite the cost function. Then calculate its differential and gradient. $$\eqalign{ f &= {\rm Tr}({\tt1}^TABB^TA^T{\tt1}) \\ &= {\rm Tr}(A^T{\tt1}{\tt1}^TABB^T) \\ &= A^T{\tt1}{\tt1}^TA:BB^T \\ df &= A^T{\tt1}{\tt1}^TA:(dB\,B^T+B\,dB^T) \\ &= 2A^T{\tt1}{\tt1}^TA:dB\,B^T \\ &= 2A^T{\tt1}{\tt1}^TAB:dB \\ \frac{\partial f}{\partial B} &= 2A^T{\tt1}{\tt1}^TAB \\ }$$ NB:   The properties of the trace allow terms in a colon product to be shuffled around, e.g. $$A:BC \;=\; AC^T:B \;=\; B^TA:C \;=\; etc.$$ Also recall that the trace of a scalar quantity is equal to the quantity itself.
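A finite-difference check of the result (my sketch with random data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
one = np.ones((n, 1))

f = lambda B: float(one.T @ A @ B @ B.T @ A.T @ one)
G = 2 * A.T @ one @ one.T @ A @ B             # the gradient derived above

eps = 1e-6
num = np.zeros_like(B)
for i in range(n):
    for j in range(m):
        E = np.zeros_like(B)
        E[i, j] = eps
        num[i, j] = (f(B + E) - f(B - E)) / (2 * eps)
print(np.max(np.abs(num - G)))                # tiny finite-difference error
```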
H: What is the probability distribution of the maximum cycle length in a permutation game? There is a "classic" counterintuitive scenario, in which you have $N$ boxes, $N$ players. Player $i$ has a dollar bill tagged with the number $i$. Each player places their dollar bill into a box at random, where each box is tagged with a distinct number between $1$ and $N$. Now each of the $N$ players gets to examine $n$ boxes ($n<N$), and if all of them find their associated bill, then each player receives $R>1$ dollars. Otherwise they all lose their starting bill. The players can coordinate before the game starts but cannot communicate after they begin opening boxes. The "winning strategy", at least if $n$ is not too much smaller than $N$, is for player $i$ to open box $i$, then the box whose number is printed on the bill they found, etc. The idea is that if you connect $i$ to $j$ if and only if box $i$ contains bill $j$, then you get a decomposition of $\{ 1,2,\dots,N \}$ into cycles. Now if all players are using this strategy, then they win if and only if the largest cycle contains at most $n$ boxes. This condition is sufficient for victory, because player $i$ will find bill $i$ when the procedure would instruct them to open box $i$ a second time. This condition is also necessary, because for every cycle there exists at least one player that will traverse it, and they will only win if they can get all the way to the end of the cycle before being stopped. What is counterintuitive is that, if all players use this strategy, then the group wins with a probability far greater than $(n/N)^N$; for example if $n=50,N=100$ then the winning probability is about 0.31 while the "choose randomly" strategy wins with probability $2^{-100} \approx 8 \cdot 10^{-31}$. The usual intuitive explanation for how this can happen even though each player separately has only a probability of $n/N$ to find their bill is that the "winning strategy" makes it so that the players tend to either all find their bill or else many of them don't. And indeed, in support of that explanation, one can see that if any players don't find their bill, then at least $n+1$ players don't. The usual question is, given $n,N,R$, and teammates that are perfect rational agents, do you take the bet? The additional information needed to answer that is the probability to win. With that in mind, my question is: is there an explicit formula for the probability that the players win this game as a function of $n$ and $N$? My question can be rephrased in math jargon as: given a random directed graph on $N$ vertices where each vertex has out-degree $1$ and in-degree $1$, what is the probability distribution of the size of the largest cycle in the graph? AI: While not an explicit formula, there is a very good asymptotic estimate for this number; it was proved by Goncharov in 1944 (see for example Section 1.4 of this paper by Granville). In that notation, let's think about the probability that a randomly chosen permutation of $N$ numbers has all of its cycles less than $N/u$ in size. (In the OP's formulation, $n=N/u$ and hence $u=N/n$.) As $N$ gets large, the probability that a random permutation of $N$ numbers has all cycles smaller than $N/u$ approaches a particular constant $\rho(u)$ (the Dickman rho function). The exact function $\rho(u)$ is pretty ugly, but there are two observations we can make about it. First, $\rho(2) = 1-\ln 2 \approx 0.306853$, which explains the $31\%$ the OP observed; if $N=1000$ and $n=500$, we would be even closer to $1-\ln2$. 
Second, $\rho(u)$ is roughly $u^{-u}$ when $u$ is large; in the OP's notation, that's about $(N/n)^{-N/n} = (n/N)^{N/n}$, which is the $n$th root of the "choose random" probability (that is, way higher). One way to view this is that when $N$ is large and $n$ is rather smaller than $N$, the probability of winning one "choose randomly" game is approximately the same as that of winning $n$ consecutive "winning strategy" games! That's how much better this strategy is than choosing randomly.
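A Monte Carlo simulation makes the $31\%$ figure easy to reproduce (my sketch):

```python
import random

def longest_cycle(perm):
    seen = [False] * len(perm)
    best = 0
    for i in range(len(perm)):
        length, j = 0, i
        while not seen[j]:
            seen[j] = True
            j = perm[j]
            length += 1
        best = max(best, length)
    return best

N, n, trials = 100, 50, 20_000
perm = list(range(N))
wins = 0
for _ in range(trials):
    random.shuffle(perm)
    wins += longest_cycle(perm) <= n
print(wins / trials)        # ~ 0.307, close to 1 - ln 2 = 0.3069...
```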