H: Evaluating $\int (x^6+x^3)\sqrt[3]{x^3+2}dx$ I am trying to evaluate: $$\int (x^6+x^3)\sqrt[3]{x^3+2} \ \ dx$$ My solution: $$\int (x^5+x^2)\sqrt[3]{x^6+2x^3} \ \ dx$$ Let $$(x^6+2x^3) = t^3 \ \ \text{and} \ \ (x^5+x^2) \ \ dx = \frac{1}{2}t^2 \ \ dt$$ $$\frac{1}{2}\int t^2\cdot t \ \ dt = \frac{1}{2}\cdot\frac{t^4}{4}+C $$ So $$\int (x^5+x^2)\sqrt[3]{x^6+2x^3} \ \ dx = \frac{1}{8}(x^6+2x^3)^{{4}/{3}}+C$$ Is that right? And is there a different way? AI: The $t$ stuff is not necessary. You can directly let $u=x^6+2x^3$. Then $(x^5+x^2)\,dx=\frac{1}{6}\,du$. But the initial step was the key one.
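As a sanity check on the final antiderivative, here is an illustrative sympy spot-check (not part of the original exchange; it assumes $x>0$ so that $x\sqrt[3]{x^3+2}=\sqrt[3]{x^6+2x^3}$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
integrand = (x**6 + x**3) * (x**3 + 2)**sp.Rational(1, 3)
F = sp.Rational(1, 8) * (x**6 + 2*x**3)**sp.Rational(4, 3)

# F'(x) - integrand should vanish; spot-check numerically at a few points
residual = sp.diff(F, x) - integrand
for x0 in (sp.Rational(1, 2), 1, 3):
    print(residual.subs(x, x0).evalf())  # each ~0
```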
H: Formally prove/disprove that $\sqrt{n}o(\sqrt{n}) = o(n)$ I'm wondering how to formally show that $\sqrt{n}o(\sqrt{n}) = o(n)$. The problem I'm having is that I don't really know how to formally resolve the multiplication on the LHS. It would be straightforward to show the result for $\sqrt{n}O(\sqrt{n}) = O(n)$ by simply replacing the $O(\sqrt{n})$ with $c\sqrt{n}$ for some constant $c$. $f(n) \in o(\sqrt{n})$ only tells me that $\lim_{n\rightarrow\infty}f(n)/\sqrt{n}=0$, which doesn't seem to be very helpful here. Is there a way to treat $o(\sqrt{n})$ analogously to $O(\sqrt{n})$? AI: Suppose $f(n)\in o(\sqrt{n})$. We want to show $\sqrt{n}f(n)\in o(n)$, i.e. that $\lim\limits_{n\to\infty} \sqrt{n}f(n)/n=0$. All we need to do is observe that $\sqrt{n}f(n)/n=f(n)/\sqrt{n}$ and we're done.
H: Does $N(z)=\pm 1$ imply $z$ is a unit in $\mathbb{Z}[\sqrt{10}]$? I've been trying to prove that $\mathbb{Z}[\sqrt{10}]$ is not factorial. I did this by defining the norm $N(a+b\sqrt{10})=a^2-10b^2$. I was able to show for myself that $N(z)=\pm 2$ and $N(z)=\pm 5$ have no solutions, and my idea is to show that $2,5,\sqrt{10}$ are irreducible, and then $2\cdot 5=10=\sqrt{10}\sqrt{10}$ gives two factorizations of $10$ into nonassociate irreducibles. The only hitch is I need to show $N(z)=\pm 1$ implies $z$ is a unit, from which it will follow that $2,5,\sqrt{10}$ are irreducibles since $N(2)=4$, $N(5)=25$, and $N(\sqrt{10})=-10$, but norms never take values $\pm 2,\pm 5$. I know that if $z$ is a unit, then $N(z)=\pm 1$, but I don't know how to show the converse. If there is a better solution, I'd be happy to see that instead. Thanks. AI: Since $N(z) = z\overline{z}$, where $\overline{a+b\sqrt{10}} = a-b\sqrt{10}$, and $z\in\mathbb{Z}[\sqrt{10}]$ implies $\overline{z}\in\mathbb{Z}[\sqrt{10}]$, it follows that if $N(z)=\pm 1$, then $z$ is a unit, with $z^{-1}=\pm\overline{z}$. Conversely, if $z$ is a unit, then multiplicativity of the norm map shows that $N(z)$ must be a unit in $\mathbb{Z}$. Thus, $z\in\mathbb{Z}[\sqrt{10}]$ is a unit if and only if $N(z)=\pm 1$.
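As a purely numerical complement (an illustrative Python search; the bound $50$ is arbitrary, and the empty output for $\pm 2,\pm 5$ is suggestive rather than a proof, which needs the mod-$5$ argument):

```python
# Small solutions of a^2 - 10 b^2 = n. Units of norm +-1 exist
# (e.g. 3 + sqrt(10) has norm -1), while n = +-2, +-5 turn up empty.
for n in (1, -1, 2, -2, 5, -5):
    sols = [(a, b) for a in range(-50, 51) for b in range(-50, 51)
            if a * a - 10 * b * b == n]
    print(n, sols[:4])
```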
H: Name for a type of subgraph that comes from identification of vertices? Is there a special name for the kind of subgraphs you get by taking some sequence of the following operation: Pick two vertices and identify them so all edges going to either vertex get sent to the new vertex. AI: You don't get a subgraph when you do that, but what you might call a quotient graph instead (the natural map goes the other way). The operation is called vertex contraction.
H: How to find convergence region of $\sum_{n\geqslant 0, m \geqslant 0} x^n y^m \binom{n+m}{n}^2$ The following two series are special cases of Appell $F_3$ and $F_4$, namely: $$ \mathcal{S}_1 = \sum_{n \geqslant 0, m \geqslant 0} \frac{x^n y^m}{\binom{n+m}{n}} $$ and $$ \mathcal{S}_2 = \sum_{n \geqslant 0, m \geqslant 0} \binom{n+m}{n}^2 x^n y^m $$ How would one establish that $\mathcal{S}_1$ converges for $\{ (x,y)\colon -1<x<1, -1<y<1 \}$, and $\mathcal{S}_2$ converges for $\{ (x,y) \colon \sqrt{|x|} + \sqrt{|y|} < 1\}$? AI: You can use the ratio test for double series; look up the ratio-test theorem for double series in a standard reference on multiple series.
H: Is it possible to have a point $P_1$ not $\chi$-semistable but $P_2$ $\chi$-semistable with these two points in the same orbit? Let $G$ be a group acting on an affine variety $X\subseteq \mathbb{A}_{\mathbb{C}}^n$. Suppose $P_1$ and $P_2$ are two points in $X$ such that $g\circ P_1=P_2$ for some $g\in G$. This means that $P_2$ is in the $G$-orbit of $P_1$, and vice versa. Let $\chi$ be a character of $G$. We say $P$ is $\chi$-semistable if there exists a function $f:X\rightarrow \mathbb{C}$ so that $f(g\circ P)=\chi(g)f(P)$ for all $g\in G$ with $f(P)$ nonvanishing. $\mathbf{Question}:$ Is it possible for $P_1$ to not be $\chi$-semistable but $P_2$ is $\chi$-semistable? That is, is it possible to have the following: there doesn't exist a $\chi$-semistable function for $P_1$ but there exists a $\chi$-semistable function for $P_2$ even though they are in the same $G$-orbit? I am currently having a somewhat difficult time coming up with such an example (I'm trying to come up with such example by taking $G=\mathbb{C}^{\times}$ or $GL_n(\mathbb{C})$ acting on some affine $n$-space). AI: No, any $\chi$-semistable function $f$ for $P_1$ is also one for $P_2$, and vice versa. Indeed, suppose $f(g'\circ P)=\chi(g')f(P)$ for all $g'\in G$ and $f(P_1)\neq 0$. Then $f(P_2)=f(g\circ P_1)=\chi(g)f(P_1)\neq 0$, because the character value $\chi(g)$ is a nonzero scalar. So semistability is constant along $G$-orbits.
H: How would I calculate the area of the shaded region of a circle with radius $6$ and length of chord $AB=6. $ How would I calculate the area of the shaded region of a circle with radius 6 and length of chord AB is 6. AI: Hint: Join the center of the circle to the points A and B. You'll obtain a triangle. What type of triangle is it?
H: What is a good technique to decide step size in sub-gradient method for dual decomposition? I am looking at the following paper to implement dual decomposition for my algorithm: http://www.csd.uoc.gr/~komod/publications/docs/DualDecomposition_PAMI.pdf On Pg.29 they suggest setting the step size for the sub-gradient method by taking the difference of the best primal solution and current dual solution and dividing by the L2-norm of the sub-gradient at current iteration. My doubt is the following: Do I use sub-gradients for each slave problem and compute a different step-size for each slave problem? Or is there some way I can compute the sub-gradient for the combined dual problem? AI: Step-sizes are the crucial and difficult point when using subgradient methods. Basically you need the step-sizes to tend to zero, but not too fast. If one uses a-priori step-sizes (e.g. of the form $1/k$) then the method provably converges, but in practice it will slow down so much that you will not observe convergence. The dynamic rule they suggest in the paper (in equation (40)) looks like so-called Polyak step-sizes with estimation of the true optimal values (obtained by values of the dual problem). One can prove convergence with these step-sizes under special conditions. I do not know a good reference off the top of my head but many books on (nonsmooth) convex optimization should treat this.
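For concreteness, a minimal sketch of a subgradient method with Polyak-type steps (illustrative Python; the toy objective is arbitrary, and the paper's rule additionally replaces the exact optimal value $f^\star$ with a running estimate built from primal/dual values):

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, iters=200):
    # Polyak-type rule: step = (f(x) - f_star) / ||g||^2, where f_star
    # is the optimal value (or an estimate of it, as in the paper).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = np.asarray(subgrad(x), dtype=float)
        sq = float(g @ g)
        if sq == 0.0:            # 0 is a subgradient, so x is optimal
            break
        x = x - (f(x) - f_star) / sq * g
    return x

# Toy check on f(x) = |x_1| + |x_2|, whose minimum value is 0
f = lambda x: np.abs(x).sum()
print(polyak_subgradient(f, np.sign, [3.0, -2.0], f_star=0.0))  # ~[0, 0]
```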
H: Does $\sum\limits_{n=1}^\infty \frac{n^n}{3^n n!}$ converge? Test the convergence of the series $$\sum_{n=1}^\infty \frac{n^n}{3^n n!}$$ I know that if the $n$th term does not tend to $0$ then the series is divergent. Also I'm familiar with some tests, e.g. the ratio test, d'Alembert's test, the comparison test, etc. But I could not solve it properly. I know that as $n$ increases, $n^n$ increases more rapidly than $n!$ or $3^n$, but I have no idea what happens when they are multiplied together. AI: I don't see a problem in solving it by using d'Alembert's ratio test. You need to find the convergence of $$\sum_{n=1}^\infty \dfrac{n^n}{3^n n!}$$ So let $u_n= \dfrac{n^n}{3^n n!}$, which implies $u_{n+1}= \dfrac{(n+1)^{n+1}}{3^{n+1} (n+1)!}$. Hence $$\dfrac{u_n}{u_{n+1}}=\dfrac{n^n}{3^n n!} \cdot \dfrac{3^{n+1} (n+1)!}{(n+1)^{n+1}}$$ $$\dfrac{u_n}{u_{n+1}}=\dfrac{n^n}{3^n n!} \cdot \dfrac{3^{n}\cdot 3 \cdot(n+1)\cdot n!}{(n+1)^{n}\cdot(n+1)}$$ $$\dfrac{u_n}{u_{n+1}}=\dfrac{3\cdot n^n}{(n+1)^n}$$ $$\dfrac{u_n}{u_{n+1}}=\dfrac{3}{(1+\dfrac{1}{n})^n}$$ $$\lim_{n\to \infty} \dfrac{u_n}{u_{n+1}}= \dfrac{3}{e} > 1 \qquad \left(\text{since } \lim_{n \to \infty } \left(1+\dfrac{1}{n}\right)^n = e\right)$$ Hence by d'Alembert's ratio test, since $l=\lim_{n\to \infty} \dfrac{u_n}{u_{n+1}} > 1$, the series $\sum_{n=1}^\infty \dfrac{n^n}{3^n n!}$ converges. Thank you.
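A quick numerical check of this ratio (illustrative Python; note it computes the reciprocal ratio $u_{n+1}/u_n \to e/3 < 1$):

```python
import math

def u(n):
    # u_n = n^n / (3^n n!), computed via logs to avoid overflow
    return math.exp(n * math.log(n) - n * math.log(3) - math.lgamma(n + 1))

for n in (10, 100, 1000):
    print(n, u(n + 1) / u(n))   # ratio u_{n+1}/u_n -> e/3 < 1
print("e/3 =", math.e / 3)
```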
H: Multiple choice question - number of real roots of $x^6 − 5x^4 + 16x^2 − 72x + 9$ The equation $x^6 − 5x^4 + 16x^2 − 72x + 9 = 0$ has (A) exactly two distinct real roots (B) exactly three distinct real roots (C) exactly four distinct real roots (D) six distinct real roots AI: You have: $f(x)=x^6-5x^4+16x^2-72x+9$ $f'(x)=6x^5-20x^3+32x-72$ $f''(x)=30x^4-60x^2+32$ If you notice that $$f''(x)=30(x^4-2x^2+1)+2=30(x^2-1)^2+2 \ge 2 >0,$$ you can see that $f'(x)$ is strictly increasing. Together with $\lim\limits_{x\to-\infty} f'(x)=-\infty$ and $\lim\limits_{x\to\infty} f'(x)=\infty$ this implies that there is exactly one root $x_0$ of $f'(x)$. Thus $f(x)$ is decreasing on $(-\infty,x_0)$ and increasing on $(x_0,\infty)$. Since $\lim\limits_{x\to\infty} f(x)=\infty$ and $f(1)=1-5+16-72+9=-51$, we see that $f(x)$ has both positive and negative values. Thus $f(x)$ must have two real roots, one of them in the interval $(-\infty,x_0)$ and another one in the interval $(x_0,\infty)$. You can check the behavior of $f(x)$ and the behavior of $f'(x)$ at WolframAlpha.
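A numerical cross-check of the two-real-roots conclusion (illustrative Python; the tolerance for discarding complex roots is an arbitrary choice):

```python
import numpy as np

# Coefficients of x^6 - 5x^4 + 16x^2 - 72x + 9 (degree 6 down to 0)
coeffs = [1, 0, -5, 0, 16, -72, 9]
roots = np.roots(coeffs)
real_roots = sorted(roots[np.abs(roots.imag) < 1e-8].real)
print(real_roots)  # exactly two values, one in (0,1) and one in (2,3)
```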
H: Formula obtained by using a trigonometric approximation for a triangle with a very small side I am reading a paper on the force between 't Hooft–Polyakov monopoles, but I am completely baffled by one of the 'elementary trigonometric' equations they obtained using an approximation. Consider a triangle, say triangle $ABC$. The author says that when $A$ is very close to $B$, i.e. when $\cos{C}\approx 1$, we get $\cos{C}-1=-\frac{1}{2a^2}c^{2}\sin^2{B}$. Can anyone tell me how this has been done? I am extremely sorry if this is a silly question. AI: EDIT Hmm, the question changed while I was answering. See the second revision. When $A$ is close to $B$, we get a small $\theta_2$. So a truncated Taylor series would look like $$\cos{\theta_2}-1=-\frac{1}{2}\theta_2^2\; .$$ Now $\theta_2\approx \tan \theta_2=\frac{r_1\cos \theta_1}{s}$, where $r_1\cos \theta_1$ is the projection of $r_1$ on the opposite side in a right triangle $AB'C$, with $B'$ being in $\overline{BC}$, such that $AB' \perp BC$.
H: Find the approximate change in $y$ as $x$ increases from 2 to 2.02 Find the approximate change in $y$ as $x$ increases from 2 to 2.02. The equation of a curve is $y=4x^3-8x^2+10$ a) Find $\frac{dy}{dx}$ $\frac{dy}{dx}=12x^2-16x$ But I don't know how to answer part b): "Find the approximate change in $y$ as $x$ increases from 2 to 2.02" I have tried $12(2)^2-16(2) = 16/2.02=$ not right $4(2)^3-8(2)^2+10=10/2.02=$ not right $12(2.02)^2-16(2.02)=16.644/2=$ not right etc.... The right answer is $0.32$. Help me out, thanks. AI: We have $y=4x^3-8x^2+10$, and therefore $\frac{dy}{dx}=12x^2-16x$. We use the tangent line approximation, also known as the linear approximation. The derivative at $x=2$ is equal to $16$. Therefore, if $\Delta x$ represents the change in $x$, and $\Delta y$ represents the change in $y$, we have $$\Delta y \approx (16)\Delta x.$$ Remarks: One important way to get insight about the linear approximation is geometric. Let $f(x)=4x^3-8x^2+10$. The idea is that the tangent line at $x=2$ is close to the curve when $x$ is close to $2$, that the tangent line kisses the curve at $x=2$. A tiny bug, sitting on the curve $y=f(x)$ at $x=2$, would think she was sitting on a straight line, the tangent line. Recall that the tangent line at $x=a$ has equation $$y-f(a)=f'(a)(x-a).$$ In our case, the tangent line has equation $$y-f(2)=16(x-2).$$ Because the tangent line is close to the curve when $x$ is close to $2$, we have $$f(2.02)-f(2)\approx (16)(2.02-2).$$ This says that the change in $y$ is approximately $(16)(0.02).$ Another way of thinking about it is kinematic, in terms of motion. So let us use the letter $t$ instead of $x$. A particle is moving along the $y$-axis. At any time $t$, the displacement of the particle is $4t^3-8t^2+10$. Then the velocity at time $2$ is the derivative of $4t^3-8t^2+10$, evaluated at $t=2$. If time changes from $2$ to $2.02$, then the change in $y$ (the change in displacement) is approximately the velocity at time $2$ times the elapsed time. So the change in $y$ is approximately $(16)(0.02)$. The reason that the approximation is reasonable is that the velocity does not change very much from time $2$ to time $2.02$, so the velocity remains close to $16$. If the velocity were exactly $16$, then the change in displacement would be exactly $(16)(0.02)$. Since velocity does change a little, the approximation is not exact. It is worthwhile to do an explicit numerical calculation to check how good the tangent line approximation is in this case. The calculator says that $f(2.02)$ is nearly equal to $10.326432$, so to calculator accuracy, the change in $y$ is about $0.326432$. The linear approximation we made predicts a change of approximately $0.32$. Pretty close! Finally, we can think of our calculation in terms of the definition of the derivative. Recall that $$f'(2)=\lim_{h\to 0} \frac{f(2+h)-f(2)}{h}.$$ So if $h$ is kind of close to $0$, like $h=0.02$, then we should have $$\frac{f(2+h)-f(2)}{h}\approx f'(2).$$ This can be written as $f(2+h)-f(2) \approx (f'(2))h$.
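A quick numerical check of the comparison made above (illustrative Python):

```python
def f(x):
    return 4 * x**3 - 8 * x**2 + 10

exact = f(2.02) - f(2)                # true change in y: ~0.326432
approx = (12 * 2**2 - 16 * 2) * 0.02  # f'(2) * delta x = 0.32
print(exact, approx)
```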
H: Question about proof of Going-down theorem I have written a proof of the Going-down theorem that doesn't use some of the assumptions so it's false but I can't find the mistake. Can you tell where it's wrong? *Going-down*$^\prime$: Let $R,S$ be rings such that $S \subset R$ and $R$ is integral over $S$. Let $q_1$ be a prime ideal in $R$ such that $p_1 = q_1 \cap S$ is a prime ideal in $S$. Let $p_2 \subset p_1$ be another prime ideal in $S$. Then there exists a prime ideal $q_2$ in $R$ s.t. $q_2 \cap S = p_2$. (Note: the missing assumptions are that $R,S$ are integral domains and $S$ is integrally closed) Theorems we use for the false proof: 1.(5.10) Let $A \subset B$ be rings, $B$ integral over $A$, and let $p$ be a prime ideal of $A$. Then there exists a prime ideal $q$ of $B$ such that $q \cap A = p$. 2.(5.6.)If $S$ is a multiplicatively closed subset of $A$ and $B$ is integral over $A$ then $S^{-1}B$ is integral over $S^{-1}A$. 3.(3.11 iv)) The prime ideals of $S^{-1}A$ are in one-to-one correspondence ($p \leftrightarrow S^{-1}p$) with prime ideals of $A$ which don't meet $S$. False proof: By (5.6), $R_{q_1}$ is integral over $S_{p_1}$. By (3.11), $\overline{p_2} = {p_2}_{p_1}$ is prime in $S_{p_1}$ since $p_2 \subset p_1$ by assumption. By (5.10) there exists a prime ideal $\overline{q_2}$ in $R_{q_1}$ such that $\overline{q_2} \cap S_{p_1} = \overline{p_2}$. We know that the following diagram commutes: $$ \begin{matrix} S & \xrightarrow{i} & R \\ \left\downarrow{\psi}\vphantom{\int}\right. & & \left\downarrow{\varphi}\vphantom{\int}\right.\\ S_{p_1}& \xrightarrow{i_{p_1}} & R_{q_1} \end{matrix} $$ We claim that $q_2 = \varphi^{-1}\overline{q_2}$ is an ideal such that $q_2 \subset q_1$ and $q_2 \cap S = p_2$. The claim $q_2 \subset q_1$ follows by construction (or from (3.11)). We also have $$ \varphi^{-1} (\overline{q_2}) \cap S = i^{-1}\varphi^{-1}(\overline{q_2}) = \psi^{-1}i^{-1}_{p_1}(\overline{q_2}) = \psi^{-1}(S_{p_1} \cap \overline{q_2} ) = \psi^{-1}(\overline{q_2}) = p_2$$ AI: You begin your proof with "By (5.6), $R_{q_1}$ is integral over $S_{p_1}$" But this recourse to (5.6) is illegitimate: the multiplicative set here is $\Sigma=S\setminus \mathfrak p_1 $ . However $R_{\mathfrak q_1}=T^{-1}R $ where $T=R\setminus \mathfrak {q}_1$ and we only have $\Sigma \subset T$, not $\Sigma=T$. (Don't think, draw a Venn diagram!) So $S_{\mathfrak p_1}\to \Sigma^{-1} R$ is indeed integral, but $S_{p_1}\to R_{q_1}$ has no reason to be integral. Edit In Exercise 4 of the same chapter 5 the authors give an example where indeed $R_{q_1}$ is not integral over $S_{p_1}$, so that it is not just that one cannot apply (5.6), but the result (unsurprisingly) is impossible to deduce from the too weak hypothesis.
H: A property of non-Archimedean metrics I have recently been reading about non-Archimedean metrics on fields (in Koblitz: $p$-adic Numbers, $p$-adic Analysis, and Zeta-Functions), and came across the exercise: Prove that a norm $\|.\|$ on a field $F$ is non-Archimedean if and only if $$\{x\in F : \|x\| < 1 \} \cap \{x\in F : \|x-1\| < 1 \} = \emptyset.$$ In one direction, the proof was trivial, and in the other direction somewhat harder, but my question is really about where this question comes from. If I were trying to think up exercises on this topic, I don't think I would have thought of this one in a million years. I am (gradually) getting used to the "eccentricities" of non-Archimedean metrics, but if someone could give me some idea of the intuition that lies behind this particular property, I would be grateful. AI: The defining property of ultrametrics is that in every triangle the two longer sides are equal: more precisely, if $ABC$ is a triangle (= a triple of points) and $|AB|\ge |BC|\ge |AC|$ then $|AB|=|BC|$. Now, the exercise asks about the existence of a triangle in which one side has length $1$ while the other two are strictly shorter: it is designed to contradict the definition.
H: characterization of functions I have a question that concerns the characterization of all functions $f(x)$ and all constants $k\in\mathbb{R}$ satisfying: $$f:\mathbb{R}^+\rightarrow (0,1)$$ $$k-\int_4^x\frac{f(t)}{t}dt\leq\log(2)-\frac{1}{2}\log(x),\ \ \forall x\geq 4$$ Does someone have an idea about the second inequality? Thanks a lot for your help! I have found a sufficient condition. If we express $\log(x)$ as an integral, then we have $$k-\int_4^x\frac{f(t)}{t}dt\leq-\int_4^x\frac{1}{2t}\,dt,\ \ \forall x\geq 4$$ which leads to a sufficient condition $$k\leq 0$$ $$f(x)\geq\frac{1}{2}$$ Does someone have another idea to make this characterization more accurate? Thanks a lot AI: This inequality can only hold if $k\leqslant0$, as the limit when $x\to4^+$ shows, and it holds for $k=0$ as soon as, for every $x\geqslant4$, $$ \int_4^x2f(t)\,\mathrm dt\geqslant x-4. $$ This condition is strictly weaker than the condition that $2f(x)\geqslant1$ for every $x\geqslant4$. For any given $k\lt0$, a weaker condition is to ask that, for every $x\geqslant4$, $$ \int_4^x2f(t)\,\mathrm dt\geqslant x-4+8k. $$ Finally, for any given $k\lt0$, a more general condition is to ask that, for every $x\geqslant4$, $$ \int_4^x2f(t)\,\mathrm dt\geqslant x-4+8k+g(x), $$ where the function $g$ is such that $$ \inf\limits_{x\geqslant4}\left(g(x)+x\int_4^x\frac{g(t)}{t^2}\,\mathrm dt\right)\geqslant0. $$
H: Limit of a sequence involving root of a factorial: $\lim_{n \to \infty} \frac{n}{ \sqrt [n]{n!}}$ I need to check if $$\lim_{n \to \infty} \frac{n}{ \sqrt [n]{n!}}$$ converges or not. Additionally, I wanted to show that the sequence is monotonically increasing in $n$, so that the limit exists. Any help is appreciated. I tried taking logs and manipulating the sequence, but I could not prove monotonicity this way. AI: Use Stirling's approximation: $ n! \sim \sqrt{2 \pi n} \left(\frac{n}{e}\right)^n $ and you'll get $$ \lim_{n \rightarrow \infty} \frac{n}{(n!)^{1/n}} =\lim_{n \rightarrow \infty} \frac{n}{(\sqrt{2 \pi n} \left(\frac{n}{e}\right)^n)^{1/n}} =\lim_{n \rightarrow \infty} \frac{n}{({2 \pi n})^{1/2n} \left(\frac{n}{e}\right)} =\lim_{n \rightarrow \infty} \frac{e}{({2 \pi n})^{1/2n} }=e, $$ because $\lim_{n\to \infty} ({2 \pi n})^{1/2n}= \lim_{n\to \infty} n^{1/n}=1$.
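A quick numerical check of the limit (illustrative Python; $\log n! = \operatorname{lgamma}(n+1)$ keeps the computation stable for large $n$):

```python
import math

for n in (10, 100, 1000, 10000):
    # n / (n!)^(1/n), with log(n!) = lgamma(n+1) to avoid overflow
    print(n, n / math.exp(math.lgamma(n + 1) / n))
print("e =", math.e)  # the values creep up toward e
```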
H: Transition probabilities for a nonlinear state space model I am trying to compute the transition probabilities of the model given by $X_{n+1} = f_{n+1}(X_n,W_{n+1})$ where $X_n's, W_n's$ are $R^k$ valued random variables for $n \geq 0$, $W_n's$ are independent and $f_n's$ are measurable. Also, define $\mathcal{F}_n=\sigma(X_0,W_1,\cdots,W_n)$. (From this definition we get that $X_n$ is $\mathcal{F}_n$ measurable). From these notes, the transition probabilites are defined as $$p_{n+1}(X_n,B) := P[X_{n+1}\in B|\mathcal{F}_n]$$ Is it true that $$ P[X_{n+1}\in B|\mathcal{F}_n] (\omega) = P[f(X_n(w),W_{n+1})\in B]~a.s. $$ If so, what is methodology for the proof and if not, what are the transition probabilites for this model? Thanks for the help. AI: Consider $g_{n+1}(x)=\mathrm P(f_{n+1}(x,W_{n+1})\in B)$, then $\mathrm P(X_{n+1}\in B\mid\mathcal F_n)=g_{n+1}(X_n)$ almost surely. Notes: (1.) One should add the hypothesis that $(W_n)_{n\geqslant1}$ is independent of $X_0$. (2.) I simply do not understand the RHS of your formula (random variable? real number?). (3.) A congenial reference on these matters is the small blue book by David Williams called $\color{blue}{\textit{Probability with martingales}}$. Get it and read it!
H: Category Theory usage in Algebraic Topology First my question: How much category theory should someone studying algebraic topology generally know? Motivation: I am taking my first graduate course in algebraic topology next semester, and, up to this point, I have never taken the time to learn any category theory. I've read that category theory helps one to understand the underlying structure of the subject and that it was developed by those studying algebraic topology. Since I do not know the exact content which will be covered in this course, I am trying to find out what amount of category theory someone studying algebraic topology should generally know. My university has a very general outline for what the course could include, so, to narrow the question a bit, I will give the list of possible topics for the course. Possible Topics: unstable homotopy theory spectra bordism theory cohomology of groups localization rational homotopy theory differential topology spectral sequences K-theory model categories All in all, I am well overdue to learn the language of categories, so this question is really about how much category theory one needs in day to day life in the field. Update I emailed the professor teaching the course and he said he hopes to cover the following (though maybe it is too much): homotopy, homotopy equivalences, mapping cones, mapping cylinders fibrations and cofibrations, and homotopy groups, and long exact homotopy sequences. classifing spaces of groups. Freudenthal theorem, the Hurewicz and the Whitehead theorem. Eilenberg-MacLane spaces and Postnikov towers. homology and cohomology theories defined by spectra. AI: The list of possible topics that you provide vary in their categorical demands from the relatively light (e.g. differential topology) to the rather heavy (e.g. spectra, model categories). So a better answer might be possible if you know more about the focus of the course. My personal bias about category theory and topology, however, is that you should mostly just learn what you need along the way. The language of categories and homological algebra was largely invented by topologists and geometers who had a specific need in mind, and in my opinion it is most illuminating to learn an abstraction at the same time as the things to be abstracted. For example, the axioms which define a model category would probably look like complete nonsense if you try to just stare at them, but they seem natural and meaningful when you consider the model structure on the category of, say, simplicial sets in topology. So if you're thinking about just buying a book on categories and spending a month reading it, I think your time could be better spent in other ways. It would be a little bit like buying a book on set theory before taking a course on real analysis - the language of sets is certainly important and relevant, but you can probably pick it up as you go. Many topology books are written with a similar attitude toward categories. All that said, if you have a particular reason to worry about this (for instance if you're worried about the person teaching the course) or if you're the sort of person who enjoys pushing around diagrams for its own sake (some people do) then here are a few suggestions. Category theory often enters into topology as a way to organize all of the homological algebra involved, so it might not hurt to brush up on that. 
Perhaps you've already been exposed to the language of exact sequences and chain complexes; if not then that would be a good place to start (though it will be very dry without any motivation). Group cohomology is an important subject in its own right, and it might help you learn some more of the language in a reasonably familiar setting. Alternatively, you might pick a specific result or tool in category theory - like the adjoint functor theorem or the Yoneda lemma - and try to understand the proof and some applications.
H: $n$ points forming a convex $n$-gon Suppose I am given a collection of $n$ points, any four of which form a convex quadrilateral. I wish to establish that these $n$ points form a convex $n$-gon. I am thinking about using induction. The case $n=4$ is trivial. If the result is assumed for $n-1$ how do I establish it for $n$? Thanks. AI: Consider the convex hull. If it has fewer than $n$ points, then there's a point in its interior. Use that point to form a non-convex quadrilateral - do you see a way to do this?
H: If $N\lhd H\times K$ then $N$ is abelian or $N$ intersects one of $H$ or $K$ nontrivially I am thinking about this problem: If $N\lhd H\times K$ then either $N$ is abelian or $N$ intersects one of $H$ or $K$ nontrivially. I assume $N$ is not abelian, so there are $(n,n')$ and $(m,m')$ in $N$ such that $([n,m],[n',m'])\neq 1$. But I can’t go further. Hints are appreciated. Thanks. AI: Remember: in general, $\,N\lhd G\Longrightarrow [G,N]\leq N$ , so in your case: $$N\lhd H\times K\Longleftrightarrow [H\times K,N]\leq N\Longrightarrow \,\,\text{in particular}\,\,[H,N]\,,\,[K,N]\leq N$$ where we identify $\,H\cong H\times 1\,\,,\,K\cong 1\times K\,$ Now suppose $\,N\,$ intersects both $\,H\,,\,K\,$ trivially, so $\,[H,N]\subset H\cap N =1\,$ and etc...can you take it from here?
H: General form of Integration by Parts This is a question just out of interest to know the power of integration by parts. There are various levels of integration by parts. What are some of the most general forms of integration by parts? I have encountered it very often in PDEs. I look forward to gaining more insights on it. Thank you for your ideas, help and discussions. AI: All versions of integration by parts that I have seen boil down to two things. Stokes' Theorem: if $\omega$ is an $(n-1)$-form on $M$, an $n$-manifold with boundary $\partial M$, then $$ \int_M \mathrm{d}\omega = \int_{\partial M} \omega $$ Leibniz rule for differential forms: $$ \mathrm{d}(\eta \wedge \omega) = \mathrm{d}\eta \wedge \omega + (-1)^{\text{degree}(\eta)}\eta\wedge \mathrm{d}\omega $$ The only other ingredient that is sometimes needed is some basic housecleaning coming from Riemannian and/or differential geometry: things like how covariant or coordinate partial derivatives relate to the exterior derivative, and how to write the divergence of a vector field as the exterior derivative of its dual $(n-1)$-form.
H: Advantage of accepting non-measurable sets What would be the advantage of accepting non-measurable sets? I personally feel that non-measurable sets only exist because of the infamous Banach-Tarski paradox... AI: One correction to your question is that non-measurable sets were actually proved to exist by Vitali in 1905; his construction of a non-measurable set is now called a Vitali set (assuming the axiom of choice). The Banach-Tarski paradox appeared about two decades later in 1923. There is no immediate advantage in accepting the existence of non-measurable sets. In fact it "harms" us in some way: it means that we have to be more careful in how we define measure and so on. However there is a great advantage in accepting the axiom of choice, or at least the ultrafilter lemma (which is a weakened version of choice), both implying the existence of non-measurable sets. In fact much weaker claims than the axiom of choice imply the existence of non-measurable sets. To name a few: the [weak] ultrafilter lemma; the Hahn-Banach theorem (which also implies the Banach-Tarski paradox); the real numbers can be well-ordered; every family of pairs has a choice function. To read more, you can try Herrlich's wonderful chapter about measurability in his book The Axiom of Choice. Whether or not to accept such existence boils down, in essence, to what you are trying to do. If you want to do finitistic mathematics, dealing with finitely generated objects and a limited collection of their subsets, then there is no harm in not assuming the axiom of choice. However if you wish to deal with infinitely generated objects, such as $\ell_2(\mathbb N)$ or other measure-theoretic necessities, then the axiom of choice is usually needed to allow a "smooth" transition from finitely generated objects to infinitely generated objects. The key problem is provability: a lot of properties depend on the axiom of choice and we simply cannot prove their truth value without it. So you end up having to assume a lot more than simply saying "assume choice". In this aspect, assuming the axiom of choice helps both to decide a lot of properties (but not all, of course) and to allow immediate generalizations of the proofs to higher cardinalities. To read more: Advantage of accepting the axiom of choice; Is Banach-Alaoglu equivalent to AC?; Foundation for analysis without axiom of choice?; Axiom of choice and calculus; Number Theory in a Choice-less World; Can one construct a non-measurable set without Axiom of choice?
H: 2 quick notation questions re: vectors and transformations Is it customary to omit one pair of parentheses and write $T \begin{pmatrix} 1\\2\\1\\1 \end{pmatrix} $ instead of $T \begin{pmatrix} \begin{pmatrix} 1\\2\\1\\1 \end{pmatrix} \end{pmatrix}$ to indicate the image of $\begin{pmatrix} 1\\2\\1\\1 \end{pmatrix}$ under T ? When writing it out by hand, do you prefer the straight or curly underscore underneath "v" to indicate that v is a vector (as opposed to a scalar) ? AI: For (1), I would specify something like $v = \begin{pmatrix} 1\\2\\1\\1 \end{pmatrix}$, and then write $T(v)$. I think that omitting the parenthesis could be unambiguous, but writing $T \begin{pmatrix} \begin{pmatrix} 1\\2\\1\\1 \end{pmatrix} \end{pmatrix}$ frequently is going to get frustrating quickly. For (2), in my experience, vector and matrix nomenclature depends very much on the field (as in subject matter, not algebraic structure). Personally, in handwriting I prefer a harpoon over the symbol indicating that it is a vector. This is a personal preference question though, so probably best suited for community wiki.
H: Contractible homotopy fibre for CW complexes, categorial construction of the homotopy inverse Let $f:X\to Y$ be a map of topological spaces. Assume further that the homotopy fibre is contractible. We get a long exact sequence on the homotopy groups and if $X$ and $Y$ are connected $f$ is a weak equivalence. If $X$ and $Y$ are CW complexes, then $f$ is automatically a homotopy equivalence. Question: Can we show the existence of the homotopy inverse just using categorial methods (i.e. the homotopy pullback property) or do we really have to use Whitehead? Moreover: If we don't assume that the spaces are CW-complexes we can't hope for $f$ to be a homotopy equivalence, can we? Edit: Strangely enough I feel that the first part should work while the second statement also feels true. However if the first part works I don't see a reason why it should only work for CW-complexes. AI: There is a map from $S^1$ to the pseudocircle which is a weak equivalence, but not a homotopy equivalence. In fact, because $S^1$ is Hausdorff there are no nonconstant functions from the pseudocircle to $S^1$. This answers the second question positively; the range being a CW-complex is important to being able to construct a homotopy inverse in general. I don't think that there is a "purely" categorical proof, because being a CW-complex isn't really a purely categorical property. It does let you describe a CW-complex by taking iterated pushouts along the maps $S^{n-1} \to D^n$, and so you can construct a homotopy inverse by inductively using the pushout property and the fact that the relative homotopy groups are trivial (essentially an obstruction theory argument). Whether you call this purely categorical is perhaps a matter of taste, and this is basically how you prove the Whitehead theorem anyway.
H: Down-sets in posets and directed sets Let P be a poset and let us say that a subset A of P is a down-set if: $$x \in A, y < x \implies y \in A.$$ A directed set is a poset P such that for every two elements, $a,b \in P$ we can find $c \in P$ such that $c \geq a $ and $c \geq b$. Now, I am trying to prove the following statement: A poset P is directed if and only if for every finite down-set D we can find a $c \in P$ such that $c \geq d, \forall d \in D$. One direction is very easy, namely, to show that if a poset is directed, we can find such an upper bound for every finite down-set. The other direction seems a bit more tricky, but I suspect there's something obvious I am missing here. How could I show the other implication? Is the statement really true? What about say $\mathbb{Z} \amalg \mathbb{Z}$, with the usual order on each factor? Are there any non-empty finite down-sets here? For what I can see, $\mathbb{Z} \amalg \mathbb{Z}$ is not directed, yet if there are no other finite down-sets than the empty set, then it satisfies the statement above. AI: Your counterexample is correct and shows that the statement you're trying to prove is wrong. There's not much else to say!
H: area between polar equation $r = \sin\theta$ and $r = \cos\theta$ Below is the exact question and answer from my textbook: Find the area of the region enclosed between the two curves $C_{1}$ and $C_{2}$ where $C_{1}$ has the polar equation $r = \sin\theta$ and $C_{2}$ has the polar equation $r = \cos\theta$. The answer is $\frac{\pi}{8} - \frac{1}{16}$. I spent some time figuring this out... At first I tried to find the intersection (i.e. $\sin\theta = \cos\theta$) of the two equations, but that alone didn't get me anywhere. But then how would I find the lower and upper limits $[a,b]$ to use in the formula area $= \int_a^b \frac{1}{2}(\sin\theta-\cos\theta)^2 \,d\theta$? I assume the textbook is asking for the area of the lens-shaped region common to both circles. AI: The sine and the cosine are equal when $\theta=\pi/4$. The two curves are actually circles with radius $1/2$ and centers $(0,1/2)$ and $(1/2,0)$ for the $\sin$ and $\cos$ respectively. By symmetry about the line $\theta=\pi/4$, you can find the area by computing $$2\int_0^{\pi/4} \tfrac{1}{2}(\sin\theta)^2\, d\theta=\int_0^{\pi/4} (\sin\theta)^2\, d\theta,$$ where the factor $2$ from symmetry cancels the $\tfrac{1}{2}$ in the polar area formula.
H: How many points does Stone-Čech compactification add? I would like to know how Stone-Čech compactification works with simple examples, like $(0,1)$, $\mathbb{R}$, and $B_r(0)$ (the open ball of $\mathbb{R}^2$). I've studied the one-point compactification and this is way more difficult to understand. All the texts I've found till now start immediately with functions and closures of functions. If somebody could give me some "visible" ideas, I would bear those in mind and understanding the theory would be a little easier. AI: There are spaces like $[0,\omega_1)$ with the order topology where the Stone-Čech compactification is the same as the one-point compactification. On the other hand, the Stone-Čech compactification of the natural numbers $\mathbb N$ has cardinality $2^{\mathfrak c}$. So the answer depends a lot on the space.
H: The difference between $\frac{\partial^2 y}{\partial x^2}$ and $\frac{\partial y^2}{\partial x^2}$ The question is $y''=2y^3$. I know I can substitute $y'=p$. My question is whether I can separate $x$ and $y$ and integrate both sides twice. AI: $$\dfrac{d^2y}{dx^2}=\dfrac{d(\frac{dy}{dx})}{dx}$$ So you have $$\frac{d(\frac{dy}{dx})}{dx}=2y^3\implies d(\frac{dy}{dx})=2y^3 dx$$ Separation of variables won't work directly here. However if you multiply each side by $\frac{dy}{dx}$: $$\frac{dy}{dx}d(\frac{dy}{dx})=2y^3 dx\cdot(\frac{dy}{dx})=2y^3dy$$ and now you can integrate all you like. Keep in mind that looking at separation of variables as multiplying and/or cancelling is a bit 'handwavy', though it does work.
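As a concrete check of where this leads: integrating once gives the first integral $(y')^2=y^4+C$, and the $C=0$ branch separates to $y=1/(c-x)$. An illustrative sympy check of that particular solution family (not the general solution):

```python
import sympy as sp

x, c = sp.symbols('x c')
y = 1 / (c - x)  # the C = 0 branch of (y')^2 = y^4 + C
print(sp.simplify(y.diff(x, 2) - 2 * y**3))  # 0, so y'' = 2 y^3 holds
```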
H: Convergence of a function in the continuous functions metric space with infinite norm induced metric. I know that $(C[0, 2], d_{\infty})$ is a complete metric space, where $C[0, 2]$ is the set of continuous functions on the closed interval $[0, 2]$ and $d_\infty$ is the metric induced by the sup norm, i.e., $d_\infty(f, g) = \|f - g\|_\infty = \sup \{|f(x) - g(x)|\}$ with $x \in [0, 2]$. Given the following sequence of functions: $f_n(x) = \begin{cases} 0 & \mbox{if } 0 \leq x \leq 1 - \frac{1}{n}, \\ nx + 1 - n & \mbox{if } 1 - \frac{1}{n} < x \leq 1, \\ 1 & \mbox{if } 1 < x \leq 2, \end{cases}$ I would like to know if, in this metric space, this sequence is a Cauchy sequence, and if so, to which limit it converges. The origin of this question is that, using the norm-2 induced metric, this sequence converges to the discontinuous step function. This shows that $(C[a, b], d_{2})$ is not complete. But when I tried to calculate the limit of this sequence for $d_\infty$, I found myself lost. I would really appreciate any hint. Thanks in advance! AI: Draw a picture of the difference between $f_n$, $f_m$. This should give a hint. More explicitly: Suppose $n>m$, and let $x = 1-\frac{1}{n}$. Then $f_n(x)=0$, and $f_m(x) = 1-\frac{m}{n}$. Thus $d_{\infty}(f_n,f_m) \geq 1-\frac{m}{n}$, for all $n>m$. In particular, we can choose $n=2m$, and get $d_{\infty}(f_{2m},f_m) \geq \frac{1}{2}$, so the sequence $f_n$ cannot be Cauchy.
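A numerical illustration of this lower bound (illustrative Python; the grid resolution is arbitrary):

```python
import numpy as np

def f(n, x):
    # 0 for x <= 1 - 1/n, the ramp n*x + 1 - n on (1 - 1/n, 1], then 1
    return np.clip(n * x + 1 - n, 0, 1)

x = np.linspace(0, 2, 200001)
for m in (5, 50, 500):
    print(m, np.abs(f(2 * m, x) - f(m, x)).max())  # stays at 1/2
```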
H: Does an uncountable intersection of sets with probability one also have probability one? (in connection with the ergodic theorem) Let $(\Omega, {\cal F},P)$ be a complete probability space and $T$ a measure-preserving transformation on $\Omega$ that is ergodic. The point-wise ergodic theorem states that for any $f\in L^1(P)$, $$\frac{1}{N}\sum_{j=0}^{N-1}f(T^j \omega) \to \int_{\Omega}f(\omega)P(d\omega) \quad P\text{-a.e.}$$ This is equivalent to saying that the set $$A_{f}:=\left\{\omega\in\Omega; \frac{1}{N}\sum_{j=0}^{N-1}f(T^j \omega) \to \int_{\Omega}f(\omega)P(d\omega) \;\; \text{holds}\right\}$$ has probability $1$. Notice here that the set of probability $1$ may be different for a different $f\in L^1(P)$. My desired goal is to make a set with probability $1$ for which the above convergence of the Cesaro means holds regardless of the choice of $f\in L^1(P)$. I think one way to achieve this is to take an intersection of $A_{f}$ over $f\in L^1(P)$. Then the dependence on $f$ disappears, i.e., for $\omega\in \cap A_{f}$ we have the above convergence for every $f\in L^1(P)$. My question is: is the intersection $\cap A_{f}$ measurable, and does it have probability 1? (I think that if the intersection is taken over a countable index set, then the question is trivial and the answer is affirmative.) Is there any need for appropriate additional assumptions in order to ensure that the set $\cap A_{f}$ is measurable and has probability one? Or are there other approaches than taking the intersection of $A_{f}$? AI: Given any $\omega \in \Omega$, define a function $f_\omega$ by $f_\omega(T^k(\omega))=1$ for every $k\geq 0$, and $f_\omega(\alpha)=0$ for all other $\alpha$. Then $$\int f_\omega = \sum_{k=0}^\infty P(T^k(\omega))=\sum_{k=0}^\infty P(\omega)$$ since $T$ is measure-preserving. Since $\Omega$ is a probability space, this last sum cannot diverge and thus must vanish. But by construction $\frac{1}{N}\sum_{k=0}^{N-1} f_\omega(T^k(\omega))=1$ for all $N$. So $\omega \not\in A_{f_{\omega}}$. Since $\omega$ was arbitrary, $\cap A_f$ is empty. In other words, you can't remove the $f$-dependence from $A_f$; given any point, you can always concoct some function that's badly behaved (ergodically speaking) at that point.
H: Evaluating $\int(2x^2+1)e^{x^2}dx$ $$\int(2x^2+1)e^{x^2}dx$$ The answer of course: $$\int(2x^2+1)e^{x^2}\,dx=xe^{x^2}+C$$ But what kind of techniques should we use with problems like this? AI: You can expand the integrand, and get $$2x^2e^{x^2}+e^{x^2}=$$ $$x\cdot 2x e^{x^2}+1\cdot e^{x^2}=$$ Note that $x'=1$ and that $(e^{x^2})'=2xe^{x^2}$ so you get $$=x\cdot (e^{x^2})'+(x)'\cdot e^{x^2}=(xe^{x^2})'$$ Thus your integral is $xe^{x^2}+C$. Of course, the above is integration by parts in disguise, but it is good to develop some observational skills with problems of this kind.
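A one-line symbolic confirmation of the product-rule identity (illustrative sympy):

```python
import sympy as sp

x = sp.symbols('x')
# Product rule: (x e^{x^2})' = e^{x^2} + 2x^2 e^{x^2} = (2x^2 + 1) e^{x^2}
print(sp.simplify(sp.diff(x * sp.exp(x**2), x) - (2 * x**2 + 1) * sp.exp(x**2)))
# prints 0
```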
H: Using a Bivariate Gaussian Distribution to Predict Range of Movement I am currently attempting to use a bivariate normal distribution to identify the most likely range of movement for a blob in computer vision. This itself is not the problem, however; I do not understand how $\sigma$ plays a role in finding discrete probability contours. I am not permitted to post images yet since my reputation is too low. The graph in question is a sample contour plot from Mathematica which displays the bivariate probability density function with $\sigma_X = 0.27$ and $\sigma_Y = 0.54$, $\mu_X = 0$, $\mu_Y = 0$, and $\rho = 0$. I would appreciate it very much if someone could explicate what determines the contour ellipses and how I would go about calculating them for functions of variable $\sigma_X$ and $\sigma_Y$. AI: I am not a Mathematica expert, but it seems as though the values for the level curves were selected such that the level curves represent five even steps from the peak at $(0,0)$ to where the surface "levels out." Regardless, let's look at the bivariate Gaussian distribution for $\rho = 0$, which implies that $X$ and $Y$ are uncorrelated. You can write the PDF of this distribution as $f(x,y) = \frac{1}{2\pi\sigma_x\sigma_y} \exp \left(-\frac{1}{2}\left[\frac{(x-\mu_x)^2}{\sigma_x^2}+\frac{(y-\mu_y)^2}{\sigma_y^2}\right]\right).$ You can compute this surface in a straightforward manner, and use any contour-curve-generating algorithm to plot those curves.
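To make the contours concrete: for $\rho=0$ and zero means, the level curves are the ellipses $x^2/\sigma_X^2+y^2/\sigma_Y^2=r^2$, and since the squared Mahalanobis radius of a bivariate Gaussian is $\chi^2$-distributed with 2 degrees of freedom, the ellipse enclosing probability mass $p$ has $r^2=-2\ln(1-p)$. An illustrative Python sketch (the probability levels chosen here are arbitrary):

```python
import numpy as np

def contour_semi_axes(sigma_x, sigma_y, p):
    """Semi-axes of the ellipse enclosing probability mass p for an
    uncorrelated zero-mean bivariate Gaussian (r^2 = -2 ln(1 - p))."""
    r = np.sqrt(-2.0 * np.log(1.0 - p))
    return r * sigma_x, r * sigma_y

for p in (0.50, 0.90, 0.99):
    print(p, contour_semi_axes(0.27, 0.54, p))
```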
H: N-point compactifications I know that the Alexandroff compactification is unique, and if the Alexandroff compactifications of two spaces are not homeomorphic, then the spaces can't be. Does uniqueness hold for $n$-point compactifications? And what does a homeomorphism (or lack of one) between the compactifications tell us about the original spaces? Finally, if $A$ has a 2-point compactification and $B$ doesn't (maybe only a 1-point compactification), can we say $A$ and $B$ are not homeomorphic? AI: Uniqueness does not hold: for example, $(0,1)\cup (2,3)$ can be 2-point compactified into a circle (by adding $0=3$ and $1=2$) or into two disjoint circles (by adding $0=1$ and $2=3$). However this is of no consequence for topological invariance. As long as a property is defined in terms of topology on $X$, it is invariant under homeomorphism. If you wish to make this more precise, you can rephrase the definition: $X$ having an $n$-point compactification means that $X$ is homeomorphic to some space of the form $Y\setminus \{y_1,\dots,y_n\}$ where $Y$ is compact Hausdorff and $y_i\in Y$ are distinct.
H: Why is an alternating $2$-form decomposable if and only if its self-wedge vanishes? Given a vector space $V$, and a $2$-tensor $w$ in the second exterior power $\Lambda^2 V$. Assume that $w \wedge w=0$. Why is $w$ decomposable? Thanks for your help! AI: There is a canonical form: there is a basis $e_1,\dots,e_n$ of $V$ and a $k$ such that $w=e_1\wedge e_2+\dots +e_{2k-1}\wedge e_{2k}$. Notice that if $k>1$ then $w\wedge w$ contains the term $2e_1\wedge e_2\wedge e_3\wedge e_4$, hence $w\wedge w\neq 0$. We can conclude that if $w\wedge w=0$ then the canonical form is $w=e_1\wedge e_2$.
H: Whats the probability a subset of an $\mathbb F_2$ vector space is a spanning set? Let $V$ be an $n$-dimensional $\mathbb F_2$ vector space. Note that $V$ has $2^n$ elements and $\mathcal P(V)$ has $2^{2^n}$. I'm interested in the probability (under a uniform distribution) that an element of $\mathcal P(V)$ is a spanning set for $V$. Equivalently a closed form formula (or at least one whose asymptotics as $n\rightarrow \infty$ are easy to work out) for the number of spanning sets or non-spanning sets. It's not hard to show that the probability is greater than or equal to $1/2$, since any subset of size greater than $2^{n-1}$ must span the space. I calculated the proportion of spanning sets of size $n$ for $n$ up to $200$ which seems to be going to a number starting with $.2887$. This leads me to believe that the probability exceeds $1/2$. I couldn't nail down a formula for arbitrary sized subsets though to continue experimental calculations. I feel like this is something that's been done before, but googling I've mostly found things concerning counting points on varieties over finite fields or counting subspaces of finite fields. Any references would be appreciated. AI: A random subset of $V$ is overwhelmingly likely to span $V$. Let's look at how hard it is for a random subset not to span $V$. In order not to span $V$, there must be an $(n-1)$-dimensional subspace that contains the entire subset. There are exactly $2^n-1$ such subspaces, since they are in bijective correspondence with the nontrivial linear maps $V\to\mathbb F_2$ (each subspace is the kernel of exactly one map). For each fixed $(n-1)$ dimensional subspace, the probability for a random subset to stay within that subspace is $2^{-2^{n-1}}$, since the $2^{n-1}$ vectors outside the subspace must all randomly decide not to be in the subset. So the probability for a random set not to span is at most $(2^n-1)2^{-2^{n-1}} < 2^{-(2^{n-1}-n)}$ (and this is slightly too high, because there are a few non-spanning subsets that have more than one proper subspace they fit into and so are counted twice here). Everything else spans. Even for $n$ as small as $5$ the probability for a random subset to span $V$ is above 99.9%, and the number of 9's increases exponentially with larger $n$s.
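An exhaustive check of a tiny case (illustrative Python; exact enumeration is only feasible for very small $n$):

```python
from itertools import combinations

def rank_f2(vectors):
    """Rank over F_2; each vector is packed into an int bitmask."""
    pivots = {}  # leading-bit position -> reduced row
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in pivots:
                v ^= pivots[lead]   # reduce by the existing pivot row
            else:
                pivots[lead] = v
                break
    return len(pivots)

n = 3
vecs = range(2 ** n)                 # all 8 vectors of F_2^3 as bitmasks
spanning = sum(rank_f2(s) == n
               for k in range(2 ** n + 1)
               for s in combinations(vecs, k))
print(spanning / 2 ** 2 ** n)  # 0.71875, above the 1 - 7/16 bound
```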
H: Does this kind of matrix have a name? Are these kinds of matrices generally known in mathematics? Do they have a name? $$ \left[\begin{array}{rrr} A & B \\ B & A \\ \end{array}\right] $$ $$ \left[\begin{array}{rrr} A & B & C \\ C & A & B \\ B & C & A \\ \end{array}\right] $$ $$ \left[\begin{array}{rrr} A & B & C & D \\ D & A & B & C \\ C & D & A & B \\ B & C & D & A \\ \end{array}\right] $$ $$ \left[\begin{array}{rrr} A & B & C & D & E \\ E & A & B & C & D \\ D & E & A & B & C \\ C & D & E & A & B \\ B & C & D & E & A \\ \end{array}\right] $$ The main thing is that each letter appears in each column/row just once. I'm trying to do some combination calculations with big matrices following this pattern, so knowing effective ways to generate and compute these would help. (The pattern here is that the next row is made by shifting the previous row one to the right.) AI: These are circulant matrices: each row is the previous row cyclically shifted one step to the right, so the whole matrix is determined by its first row. (Since each symbol also occurs exactly once in every row and column, they are in particular Latin squares, which is the combinatorial name for that weaker property.) Circulant matrices are diagonalized by the discrete Fourier transform, which is what makes generating them and computing with them efficient.
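Assuming the shift pattern described in the question, here is an illustrative Python sketch of generating such a matrix and exploiting the structure for fast multiplication (the numeric labels 1-5 stand in for A-E):

```python
import numpy as np
from scipy.linalg import circulant

# scipy builds the circulant from its first *column*; the column
# [1, 5, 4, 3, 2] yields first row [1, 2, 3, 4, 5] (A B C D E) with
# each subsequent row shifted one step to the right.
C = circulant([1, 5, 4, 3, 2])
print(C)

# Multiplication by a circulant is a circular convolution, so the
# FFT diagonalizes it: C @ x == ifft(fft(first_column) * fft(x)).
x = np.arange(5, dtype=float)
fast = np.fft.ifft(np.fft.fft(C[:, 0]) * np.fft.fft(x)).real
print(np.allclose(C @ x, fast))  # True, and it costs O(n log n)
```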
H: Multiple-choice question regarding $\lim\limits_{n \to \infty} \sum\limits_{k = 1}^n \left| e^{\frac{2\pi ik}{n}} - e^{\frac{2\pi i(k-1)}{n}} \right|$ The limit $$\lim_{n \to \infty} \sum_{k = 1}^n \left| e^{\frac{2\pi ik}{n}} - e^{\frac{2\pi i(k-1)}{n}} \right|$$ is (A) $2$ (B) $2e$ (C) $2\pi$ (D) $2i$. I can't solve this problem. Do I need to use $$e^{i\theta} = \cos \theta + i \sin \theta$$ or do I need some other formula to proceed? I don't understand whether I need to interchange the limit and summation. Please help me. This is a multiple choice question from a sample test paper of the ISI MSTAT examination. AI: Route 1: Geometrically, the $n$th roots of unity $e^{2\pi i k/n}$ form a regular $n$-gon in the complex plane $\Bbb C$, so the distances between consecutive vertices $|e^{2\pi ik/n}-e^{2\pi i(k-1)/n}|$ are the side lengths and the sum is the perimeter of the $n$-gon, which will approximate the unit circle as $n\to\infty$. What is the circumference of the unit circle? [Figure: regular polygons inscribed in the unit circle for $n=5$ and $n=12$, approximating the circle.] Route 2: We have $$\sum_{k=1}^n|e^{2\pi ik/n}-e^{2\pi i(k-1)/n}|=\sum_{k=1}^n|e^{2\pi ik/n}||1-e^{-2\pi i/n}| \\[5pt] =n|1-e^{-2\pi i/n}|.$$ The limit of this as $n\to\infty$ can be evaluated analytically by invoking a Taylor series expansion of the exponential function, $e^x\approx 1+x$ as $x\approx 0$ (formally, $e^x=1+x+O(x^2)$). Specifically, $$\lim_{n\to\infty}n|1-(1-2\pi i/n+\cdots)|=? $$
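A quick numerical check of Route 2 (illustrative Python):

```python
import cmath
import math

# n * |1 - e^{-2 pi i / n}|, the n-gon perimeter from Route 2
for n in (6, 60, 600, 6000):
    print(n, n * abs(1 - cmath.exp(-2j * math.pi / n)))
print("2*pi =", 2 * math.pi)  # the perimeters converge to 2*pi
```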
H: Application of the Chebyshev inequality I was revising my material for my stochastics exam and came across this question that I couldn't figure my way around. Let $X_1,\ldots,X_n$ be independent, identically distributed random variables with $E(X_1) =a$ and $$S_n = \frac{1}{n} \sum_{i = 1}^n X_i$$ Using the Chebyshev inequality, give the smallest possible value of $x$, where $\mathbb{P}(\left | S_{100} - a \right | \geq x) \leq 0.01$, in the case where $X_{1}$ with the parameters $p \in [0,1]$ is: a) binomially distributed ($B(10,p)$) b) geometrically distributed I gathered that $0.01 =\frac{Var(X)}{x^2}$ owing to the Chebyshev inequality and figured that if I could derive $Var(X)$, I could then get $x$. Using the given distributions (binomial or geometric), I attempted to solve for $Var(X) = E(X^2) - (E(X))^2$, although I couldn't proceed any further with the finding of $E(X^2)$. First and foremost, am I heading down the right track? If so, how should I go about finding $E(X^2)$? Thanks for the help, as always! AI: I think this is the right track. Since $S_{n}$ is a sum of iid r.v.s we have \begin{equation} \text{E}(S_n) = \frac{1}{n} \text{E}(X_1+\ldots+X_n) =\frac{1}{n}\left[ \text{E}(X_1)+\ldots + \text{E}(X_n) \right] = \text{E}(X_1) = a \end{equation} This can be used to relate $a$ to the mean of the distributions you want. Similarly, \begin{equation} \text{Var}(S_n) = \text{Var}\left(\frac{X_1+\ldots+X_n}{n}\right) = \frac{1}{n^2} \text{Var}(X_1+\ldots+X_n) = \frac{n}{n^2} \text{Var}(X_1) = \frac{\text{Var}(X_1)}{n} \end{equation} This follows since, being iid, the $X_i$ are uncorrelated. Now, noting that $0.01 = 1/n$ the result becomes \begin{equation} x^2 = \text{Var}(X_1) \end{equation} Thus, 1) For $X_i \sim Bin(10,p)$, $\text{Var}(X_1) = 10 p(1-p)$ 2) For $X_i \sim Geom(p)$, $\text{Var}(X_1) = (1-p)/p^2$
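A quick simulation of case a) (illustrative Python; $p$, the seed, and the number of trials are arbitrary choices). The empirical tail probability sits far below $0.01$, as expected since Chebyshev is only an upper bound:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.3, 100, 100_000

a = 10 * p                        # E(X_1) for Binomial(10, p)
x = np.sqrt(10 * p * (1 - p))     # x^2 = Var(X_1), from the answer above
S = rng.binomial(10, p, size=(trials, n)).mean(axis=1)
print(np.mean(np.abs(S - a) >= x))  # ~0, far below the 0.01 bound
```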
H: Double-Well Delta Potentials - Schrödinger Equation Page 177 of Davies' book, Spectral Theory of Differential Operators, contains the following computation problem: Calculate the negative eigenvalues and the corresponding eigenfunctions of the following operator: $H:= -\frac{d^2 }{dx^2 } -\delta_{-r} -2\delta_{r} $ . The book gives the calculation for the operator $$-\frac{d^2 }{dx^2 } -2\delta_{-r} -2\delta_{r} \tag{*},$$ but there are a few things I really need to understand before trying to solve the exercise: 1) In the symmetric potential case, it's valid to assume the eigenfunctions are even or odd... But can we assume the same thing in the asymmetric case? 2) Can we deduce something from writing our new operator as $ H = (-\frac{d^2 }{dx^2 } -2\delta_{-r} -2\delta_{r} )+ \delta_{-r} $ ? 3) In his calculation, Davies says that the operator (*) has exactly two negative eigenvalues... How can he see that? Can someone explain to me how he got the boundary conditions $ f'(r+)-f'(r-)=-2f(r) $ , $f'(-r+ ) - f'(-r- )=-2f(-r) $ ? (I understand we have a jump discontinuity of the first derivative (and the second), but how does that imply these boundary conditions?) Thanks in advance AI: 1) No, you can't assume that the eigenfunctions are even or odd. You can only do that when the Hamiltonian commutes with the parity operator. 2) I guess you could try deducing something from that, e.g. using perturbation theory with $\delta_{-r}$ as the perturbation, but I wouldn't go down that road; it doesn't seem very promising. 3) I don't know whether he "saw" that the operator has exactly two negative eigenvalues – I presume he says that because he performed the calculation. Regarding the boundary conditions: The jump discontinuity is in the first derivative, not the second; the second derivative has a delta peak at that point, since it has to cancel the delta peak from the potential in the Schrödinger equation. Integrating the second derivative yields the first derivative, and integrating over $-2\delta f$ yields a jump of height $-2f$; since there's no jump in $f$ itself, these two have to be equal, so the first derivative must have a jump of height $-2f$. [Edit in response to the comment:] Away from the delta peaks, for negative eigenvalues the solution is a superposition of two exponentials decaying towards positive and negative $x$ values, respectively. There are three regions, left, right and centre. In the left region there can be no leftward increasing component, and in the right region there can be no rightward increasing component. That leaves four unknown amplitudes, of which we can arbitrarily set one to $1$ since the wavefunction will be normalized: $$ f(x)= \begin{cases} \mathrm e^{\lambda x}&x \le -r\;,\\ b_+\mathrm e^{\lambda x}+b_-\mathrm e^{-\lambda x}&-r \lt x \le r\;,\\ c\mathrm e^{-\lambda x}&r \lt x\;.\\ \end{cases} $$ As you wrote, in the symmetric case, we can assume that the wavefunction has definite parity. For positive parity, we get $$ f_+(x)= \begin{cases} \mathrm e^{\lambda x}&x \le -r\;,\\ b\mathrm e^{\lambda x}+b\mathrm e^{-\lambda x}&-r \lt x \le r\;,\\ \mathrm e^{-\lambda x}&r \lt x\;.\\ \end{cases} $$ The continuity condition is $$ \mathrm e^{-\lambda r} = b\mathrm e^{\lambda r}+b\mathrm e^{-\lambda r}\;, $$ and the jump condition that you wrote is $$ -\lambda\mathrm e^{-\lambda r}-\left(\lambda b\mathrm e^{\lambda r}-\lambda b\mathrm e^{-\lambda r}\right)=-2\mathrm e^{-\lambda r}\;. $$ (There's only one of each now because of the symmetry.)
We can solve the first condition for $b$ and substitute it into the second: $$b=\frac{\mathrm e^{-\lambda r}}{\mathrm e^{\lambda r}+\mathrm e^{-\lambda r}}\;,$$ $$ -\lambda\mathrm e^{-\lambda r}-\lambda\frac{\mathrm e^{-\lambda r}}{\mathrm e^{\lambda r}+\mathrm e^{-\lambda r}}\left(\mathrm e^{\lambda r}-\mathrm e^{-\lambda r}\right)=-2\mathrm e^{-\lambda r}\;. $$ Dividing by $-\lambda\mathrm e^{-\lambda r}$ yields $$ \begin{align} 1+\frac{\mathrm e^{\lambda r}-\mathrm e^{-\lambda r}}{\mathrm e^{\lambda r}+\mathrm e^{-\lambda r}} &= \frac2\lambda\;,\\\\ \frac{\mathrm e^{\lambda r}}{\mathrm e^{\lambda r}+\mathrm e^{-\lambda r}} &= \frac1\lambda\;,\\\\ 1+\mathrm e^{-2\lambda r}&=\lambda\;. \end{align} $$ Since the left-hand side is strictly decreasing and the right-hand side is strictly increasing, this equation has exactly one solution, which for $r=1$ Wolfram|Alpha locates at $\lambda \approx1.10886$. You can do the same thing for negative parity to find the second eigenvalue. In the asymmetric case, you'll have to do a little more work, since you have to keep all three constants if you can't use symmetry to simplify. P.S.: If you're wondering how come $\lambda$ occurs without $r$ in the equation even though $r$ seems to be the only length scale in the problem: There's a hidden length scale because the jump in $f'$ should have units of inverse length, so the delta strength $2$ introduces a characteristic length $1/2$.
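Numerically, the even-parity condition is easy to solve (illustrative Python; the bracket $[1,2]$ comes from the monotonicity argument above):

```python
import math
from scipy.optimize import brentq

# Even-parity condition for r = 1: 1 + exp(-2*lam) = lam
lam = brentq(lambda L: 1 + math.exp(-2 * L) - L, 1.0, 2.0)
print(lam)  # ~1.10886, matching the Wolfram|Alpha value quoted above
```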
H: multiple choice summation problem Let $$X = \frac{1}{1001} + \frac{1}{1002} + \frac{1}{1003} + \cdots + \frac{1}{3001}.$$ Then (A) $X < 1$ (B) $X > 3/2$ (C) $1 < X < 3/2$ (D) none of the above holds. I assume that the answer is the third choice, $1<X<3/2$: I integrated $1/x$ over the interval $(1001, 3001)$ and got a result that satisfies only choice C. Is this a Riemann sum? Please help. AI: With respect to your Riemann sum approach: the idea is that for positive, decreasing functions, the Riemann sum and the integral closely approximate each other, more or less as in the proof of the integral test for convergence. If you'd like another sort of approach, we could approach it naively. Separate the sum into $250$ element blocks, $\frac{1}{1001}$ to $\frac{1}{1250}$ in the first block $B_1$, $\frac{1}{1251}$ to $\frac{1}{1500}$ in the second block $B_2$, and so on. We'll have $12$ blocks. Note that $\frac{1}{5} = \frac{250}{1250} \leq B_1 \leq \frac{250}{1000} = \frac{1}{4}$. Similarly, we get that $\frac{1}{i + 4} \leq B_i \leq \frac{1}{i+3}$ for all of our blocks. This means that our sum has upper and lower bounds: $$ 1 < \frac{1}{5} + \frac{1}{6} + \dots + \frac{1}{16} \leq B_1 + \dots + B_{12} \leq \frac{1}{4} + \dots + \frac{1}{15}< \frac{3}{2}$$ And this gives the desired inequality.
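A direct exact computation (illustrative Python) confirms choice (C):

```python
from fractions import Fraction

X = sum(Fraction(1, k) for k in range(1001, 3002))
print(float(X))  # ~1.0988 (close to ln 3), so indeed 1 < X < 3/2
```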
H: Proving $A\cap(B-C)$ is equal to $(A\cap B)-(A\cap C)$ Prove that: $A\cap (B-C) = (A\cap B)-(A\cap C)$. I tried to prove this using the algebra of classes: I used the idempotent property on $A$ as well as double negation on $A$, but it still didn't work. AI: If an element is in the left side, then it is in $A$, in $B$, and not in $C$. If an element is in the right side, then it is in $A\cap B$ but not in $A\cap C$, which is the same thing as saying "it's in A and B, but not in C". To fully justify: An element on the left is in A and B, meaning that it's in the $A\cap B$ on the right. It's not in C though, which excludes it from the $A\cap C$ on the right. Those together mean that it's in the right-hand side. So if it's in the left, it's in the right. In other words, $$A\cap(B-C)\subseteq (A\cap B)-(A\cap C)$$ Can we run this process in reverse? Yes! If an element is in the right-hand side, it's in $A\cap B$, so it's in $A$ and in $B$. However it's not in $A\cap C$. If it's in $A$ (as we've determined) but not in $A\cap C$, it must not be in $C$. If it's not in $C$ in the first place, but it's in $B$, then it must still be in $B-C$. It's also in $A$, so it must be in $A\cap (B-C)$ as well. In other words, $$A\cap(B-C)\supseteq (A\cap B)-(A\cap C)$$ These two taken together imply that both sets are equal.
H: What is $\frac{dy}{dx}|_{y=-1}$ for $(xy^3 + x^2y^7)\frac{dy}{dx} = 1$ given that $y \left(\frac{1}{4}\right)=1$ Suppose a solution of the differential equation $$(xy^3 + x^2y^7)\frac{dy}{dx} = 1$$ satisfies the initial condition $y \left(\frac{1}{4}\right)=1$. Then the value of $\dfrac{dy}{dx}$ when $y = −1$ is (A) $4/3$ (B) $−4/3$ (C) $16/5$ (D) $−16/5$. This is not a homogeneous equation, so I can't solve it that way. Do I need to know some special procedure to solve this problem? If the main differential equation is solved then I can solve the problem. AI: Since you want to know $x$ at given values of $y$, it's more convenient to switch dependent and independent variables and write the problem as $$ \dfrac{dx}{dy} = xy^3+x^2 y^7, \ x(1) = 1/4 $$ Now this Bernoulli differential equation actually does have a closed-form solution $$ x \left( y \right) = \left( {\rm e}^{(1-y^4)/4}+4-{y}^{4} \right) ^{-1} $$ But even without that, the fact that the right side of the differential equation is an odd function of $y$ shows that all solutions that pass through $y=0$ must be even functions of $y$. Thus we must have $x(-1) = x(1) = 1/4$. And then you can calculate $dy/dx$ at the point $(x=1/4, y=-1)$ from the original differential equation. In terms of the original differential equation, of course, we can't have $y$ as a function of $x$ with both $y(1/4) = 1$ and $y(1/4) = -1$. What it means is that the two solutions with initial conditions $y(1/4) = 1$ and $y(1/4) = -1$ collide at a singularity on the line $y=0$, but nevertheless we can regard them as two branches of the same integral curve.
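A SymPy spot-check of the closed form and of the final value (my own addition; I am assuming `simplify` reduces the residual, which it should for this rational-exponential expression):

    import sympy as sp

    y = sp.symbols('y')
    x = 1 / (sp.exp((1 - y**4) / 4) + 4 - y**4)   # the closed form above
    # residual of dx/dy = x*y^3 + x^2*y^7 should simplify to zero
    print(sp.simplify(sp.diff(x, y) - (x * y**3 + x**2 * y**7)))  # 0
    # dy/dx at y = -1, read off from the original equation
    print((1 / (x * y**3 + x**2 * y**7)).subs(y, -1))  # -16/5

This confirms choice (D).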
H: Difference between Norm and Distance I'm now studying metric spaces. I don't understand why my book gives separate definitions of distance and norm in Euclidean space. I understand the difference between the two concepts when I'm working in a non-Euclidean space, but is there any even slight difference between these two concepts when the space is $\mathbb{R}^k$? AI: All norms can be used to create a distance function as in $d(x,y) = \|x-y\|$, but not all distance functions have a corresponding norm, even in $\mathbb{R}^k$. For example, a trivial distance that has no equivalent norm is $d(x,x) = 0$, and $d(x,y) = 1$, when $x\neq y$. Another distance on $\mathbb{R}$ that has no equivalent norm is $d(x,y) = | \arctan x - \arctan y|$. However, in general, when working in $\mathbb{R}^k$ the distance used is one induced by a norm, and 'unusual' distances are typically used to illustrate other mathematical concepts (e.g., the $\arctan$ distance gives an example of an incomplete metric space).
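Addendum (mine): any norm-induced distance must be translation invariant, $d(x+t,\,y+t)=d(x,y)$, and homogeneous, $d(cx,cy)=|c|\,d(x,y)$. The $\arctan$ distance fails both, which is a concrete way to see it has no corresponding norm:

    import math

    d = lambda x, y: abs(math.atan(x) - math.atan(y))
    print(d(0, 1))      # 0.7853...
    print(d(100, 101))  # about 1e-4: the same gap, translated, so not invariant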
H: Group inclusion - Quotients inclusion Given a group $G$ , and two subgroups $ G_1, G_2 $ such that $ G_1 \subseteq G_2 $ , is it true that $ G/G_2 \subseteq G/G_1 $ ? Thanks in advance ! AI: What you probably had in mind is the following (assuming $G_1$ and $G_2$ are normal in $G$, so that the quotients make sense): $$G/G_2\cong\left(G/G_1\right)/\left(G_2/G_1\right)$$ by the second (or, in some numberings, third) isomorphism theorem. So the inclusion you ask about cannot hold in general; instead, $\,G/G_2\,$ is a quotient of $\,G/G_1\,$ , i.e.: there exists a surjective group homomorphism $$G/G_1\longrightarrow G/G_2\,\,,\,\,xG_1\to xG_2$$
H: Cardinality of the complex numbers in ZF As you all know, cardinality of $\mathbb{R} = 2^{\aleph_0}$ can be proved in ZF, since cardinality of $\mathbb{N} \times \mathbb{N} = \aleph_0$ can be proved in ZF. I know that the statement 'For any infinite set $A$, $|A\times A|=|A|$ is weaker than A.C. I wonder if there is a way to prove in ZF that $|A\times A|=|A|$ when $|A|=2^{\aleph_0}$ specifically. AI: Yes, you can prove this in ZF. One way of seeing this is to note that $2^{\aleph_0}$ is the size of $A=\{0,1\}^{\mathbb N}$ (the set of functions from ${\mathbb N}$ to $\{0,1\}$), and $A\times A$ is easily seen to be in bijection with $\{0,1\}^{\mathbb N\sqcup\mathbb N}$, where $\sqcup$ denote disjoint union. But $\mathbb N\sqcup\mathbb N$ is in bijection with $\mathbb N$ (think even and odd numbers). In fact, Cantor's classical proofs that $\mathbb R^2$ and $\mathbb R$ are in bijection do not use choice. For example, $\mathbb R$ and $(0,1)$ are easily seen to be in bijection (think $\arctan$ or somesuch), and one can find a bijection between $(0,1)^2$ and $(0,1)$ by looking at decimal expansions and intertwining. (Usually one needs to treat a small (countable) set a bit differently in these arguments.) In general, the cardinal $2^\kappa$ is the size of the set of functions from a set of size $\kappa$ to $\{0,1\}$, so $2^\kappa\times 2^\kappa$ is $2^{\kappa+\kappa}$ (as with exponentiation of finite numbers), where the sum denotes the size of a disjoint union of two sets of size $\kappa$. So, it is enough to know that $\kappa+\kappa=\kappa$ to conclude that $2^\kappa\times 2^\kappa =2^\kappa$. It is consistent with ZF to have infinite sets $A$ such that $A\times A$ is not in bijection with $A$. In fact, as already pointed out in Martin's answer, if there are no such exceptions, choice holds. It is also consistent to have sets $A$ such that $A\sqcup A$ and $A$ are in bijection, but $A\times A$ and $A$ are not. In that case, ${\mathcal P}(A)$ and ${\mathcal P}(A)\times{\mathcal P}(A)$ would be in bijection, as explained above. Note that if $A$ is infinite and $A\times A$ and $A$ are in bijection, then so are $A\sqcup A$ and $A$ (by Schröder-Bernstein, which does not need choice). For the case you are asking, already $\mathbb N\times\mathbb N$ is in bijection with $\mathbb N$, so things work out nicely.
H: Antisymmetric's Opposite (If existant) I am learning of Equivalence Relations and for something to be one is has to be: Reflexive (i.e., $aRa$) Symmetric (i.e., $aRb$ $\Rightarrow$ $bRa$) Transitive (i.e., $aRb$ & $bRc$ $\Rightarrow$ $aRc$) Where we let $R \subseteq A\times A$ be a relation on a non-empty set $A$. I see that Irreflexive is the 'opposite' of Reflexive. Similarly that Symmetric and Asymmetric are opposites. Would Antisymmetric be the opposite of Transitive? Why is that the case, if so? If not, is it a complete separate property of relations and can it be true that something can be an equivalence relation and also Antisymmetric? AI: Well, let's consider what such a property would entail. $R$ is said to be antisymmetric iff $$\forall a,b\in A(a\:R\:b\:\wedge\:b\:R\:a\:\Rightarrow\: a=b).$$ Observe that if $p,q$ are statements, then $p\Rightarrow q$ is logically equivalent to $q\vee\neg p$, and its negation would be $p\wedge\neg q$. In this case, by "opposite", it seems you're intending the opposed universal statement, rather than simply the negation. That is, we're only negating the latter part, e.g: $\forall p(q)$ becomes $\forall p(\neg q)$. Thus, $R$ is the "opposite" of antisymmetric iff $$\forall a,b\in A(a\:R\:b\:\wedge\:b\:R\:a\:\wedge\:a\neq b).$$ Consider that statement, though. The only set $A$ on which it could hold would be $A=\emptyset$. (Why?) Thus, only the empty relation is the "opposite" of antisymmetric. But the empty relation is also antisymmetric, so this isn't really a useful or particularly relevant property. As to the latter part of your question, an equivalence relation can be antisymmetric. Consider the identity relation $R:=\bigl\{\langle a,b\rangle\in A\times A:a=b \bigr\}$, for example. In fact, this is the only antisymmetric equivalence relation on $A$. Proof: Let $S$ be an antisymmetric equivalence relation on $A$. It suffices to show that for all $a,b\in A,$ we have $a\:S\:b$ if and only if $a\:R\:b$. By reflexivity, we have that $a\:R\:b$ implies $a\:S\:b$. On the other hand, if $a\:S\:b$, then $b\:S\:a$ by symmetry, and then by antisymmetry we have $a\:R\:b$.
H: Period of a finite binary sequence Let $G:N\to\{0,1\}$, and let $L$ be some period of $G$, so that $G(i+kL)=G(i)$. What's a good way to find the smallest period of $G$? I mean an algorithm that takes ($G$,$L$) and outputs the smallest period. AI: Let $G[m,n]$ denote the string formed by the values of $G$ between indices $m$ and $n$ (inclusive). (Here all indices are $1$-based.) Use a string matching algorithm to locate the first occurrence of the string $G[1,L]$ within the string $G[2,2L]$. The index of this occurrence (counting from $1$ within $G[2,2L]$) is equal to the minimal period. You can easily find linear-time string matching algorithms off the shelf (for instance, Knuth-Morris-Pratt can be implemented in a dozen lines of code). With a bit more care you could probably read off the period from the prefix table rather than performing an explicit query.
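As a concrete sketch (my addition): the whole algorithm fits in a few lines of Python, with `str.find` standing in for the string matcher (swap in KMP for guaranteed linear time):

    def minimal_period(bits):
        """Smallest period, given one full period bits = G[1,L] as a string.

        bits + bits equals G[1,2L]; the 0-based index of the first occurrence
        of bits at a shift >= 1 is exactly the minimal period (the search
        always succeeds at shift L at the latest)."""
        doubled = bits + bits
        return doubled.find(bits, 1)

    print(minimal_period("010010"))  # 3
    print(minimal_period("0110"))    # 4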
H: Bound on unit vectors Could someone help me with this simple problem. As always with homework, hints are especially welcome. Let $v=(v_1,v_2)$ be a two-dimensional unit vector with complex coefficients. If $|v_1|<a$ and $|v_2|<a$ then $|v_1|+|v_2|\geq \frac{1}{a}$. AI: I think I got it. Since $|v_i|<a$, we have $\frac{|v_i|}{a}<1$, hence \begin{equation} |v_1|+|v_2|\geq \frac{|v_1|}{a}|v_1|+\frac{|v_2|}{a}|v_2| = \frac{|v_1|^2+|v_2|^2}{a}=\frac{1}{a}, \end{equation} using $|v_1|^2+|v_2|^2=1$ in the last step.
H: A random walk on $\mathbb{Z}$ with a twist I am trying to decide whether the following random walk is recurrent or not. Intuitively, I think it is - but I am not familiar with techniques of proving it. My random walk is the following: on each point $i$, I can turn to $i-1$ with probability $\frac{1}{3}$, to $i+1$ with probability $\frac{1}{3}$, and with probability $\frac{1}{3}$ I jump to some recurrent graph $G_i$ (meaning $G_i$ contains the vertex $i$ and other vertices not in $\mathbb{Z}$). For example, the $G_i$'s can be copies of $\mathbb{Z}$, and then we can think of my walk as a walk on $\mathbb{Z}^2$ where $(i,j)$ is connected to $(i',j')$ iff $j=j'=0,i' = i\pm 1$ or $i'=i, j'=j\pm 1$. This graph is recurrent (since it's a subgraph of the standard $\mathbb{Z}^2$ graph, which is recurrent). So, is my intuition correct and the walk should be recurrent? And if not, what stronger conditions can I ask from the $G_i$'s for it to be recurrent? AI: Let $(X_n)_{n\geqslant0}$ denote this random walk. Define recursively a sequence $(t(n))_{n\geqslant0}$ of stopping times by $t(-1)=-1$ and, for every $n\geqslant0$, $t(n)=\inf\{k\geqslant1+t(n-1)\mid X_k\in\mathbb Z\}$. Every graph $G_i$ is recurrent hence every $t(n)$ is almost surely finite. Let $Y_n=X_{t(n)}$. Then $(Y_n)_{n\geqslant0}$ performs a random walk on $\mathbb Z$ which, when at $i$, jumps to $i-1$, $i$ or $i+1$, with equal probabilities $\frac13$. Thus, $(Y_n)_{n\geqslant0}$ is recurrent, hence there exist infinitely many finite times $(s(n))_{n\geqslant0}$ such that $Y_{s(n)}=0$, say. For every $n$, $X_{t(s(n))}=0$; in particular $(X_n)_{n\geqslant0}$ is recurrent. Note that as soon as one graph $G_i$ is transient, the argument above breaks down, and, in fact, $(X_n)_{n\geqslant0}$ becomes transient.
H: Combinatorial Interpretation of Fractional Binomial Coefficients My question is a bit imprecise - but I hope you like it. I even strongly think it has a proper answer. The binomial coefficient $\binom{\frac{1}{2}}{n}$ is strongly related to Catalan numbers - the expression $(1-4x)^{\frac{1}{2}}$ appears when calculating the generating function of the Catalan numbers and solving a quadratic equation. I am trying to find some combinatorial interpretation of $\binom{\frac{1}{k}}{n}$ for non-zero integer $k$. I feel it must exist - I don't know if it is because of intuition or because I've seen something similar and forgot. I want an elementary interpretation, maybe related to trees (since Catalan numbers count binary trees). So, can anyone find a combinatorial interpretation of those coefficients (possibly multiplied by some power such as $k^n$)? My motivation: I can show p-adically that $\binom{\frac{1}{k}}{n}$ is $p$-integral for any prime $p$ not dividing $k$. I am looking for a combinatorial proof of this property, and a combinatorial interpretation of $k^m \binom{\frac{1}{k}}{n}$ (for some integer $m$) will suffice for this. AI: Propp's Exponentiation and Euler Measure contains such an interpretation.
H: How to compare a sum of uniform RVs with a uniform RV? Let $(X_n)_{n\geq1}$ be a sequence of i.i.d. $\sim \text{Uni}([0,1])$ distributed random variables. I want to show that $$\mathbb{P}\big(n^2 X_{n+1} < \sum\limits_{k=1}^n X_k ~\text{for infinitely many } n \in \mathbb{N} \big)=1 $$ This cries out for Borel Cantelli, but I don't know how to use this theorem when there is more than one random variable involved. By what means can I compare the two sides? AI: This is an application of Lévy's conditional form of Borel-Cantelli lemma. The result is often stated as follows. Consider a sequence of events $(A_n)_n$ which is adapted to a given filtration $(\mathcal F_n)_n$. Then the random series $\sum\limits_n\mathbf 1_{A_n}$ converges/diverges almost surely if and only if the random series $\sum\limits_n\mathrm P(A_{n+1}\mid\mathcal F_{n})$ converges/diverges almost surely. Here, consider $\mathcal F_n=\sigma(X_k;k\leqslant n)$ and $A_{n+1}=[n^2X_{n+1}\leqslant S_n]$ with $S_n=X_1+\cdots+X_n$. Then $\mathrm P(A_{n+1}\mid\mathcal F_{n})=\frac1{n^2}S_n$. By the strong law of large numbers, $\frac1nS_n\to\mathrm E(X_1)=\frac12$ hence, almost surely, $S_n\gt\frac14n$ for every $n$ large enough. This proves that $\sum\limits_n\frac1{n^2}S_n$ diverges almost surely. Hence, almost surely, infinitely many events $A_n$ occur, QED.
H: The Vector Space over another Vector Space Is it possible to consider a vector space over another vector space instead of over a field, as usual? Where does the requirement that the scalars form a field come into play? And in such a vector space, the vectors could be represented as tuples from $V^k$, where V is the vector space from which the other space is built, and so the vectors are tuples of vectors (which could be considered as matrices). Is something wrong with such constructions? AI: First of all, you can consider the space of matrices (of a particular size) to be a vector space over the scalar field directly, since you can add matrices and multiply a matrix by a scalar. There's a good chance that this will actually be sufficient for whatever concrete application (if any) you have in mind. Another concept that may align somewhat with what you have in mind is that of a tensor product, which will allow you to view a matrix as a sum of various "row vectors" multiplied by each of a basis of "column vectors", or vice versa. That has more of the features you seem to be thinking of, but also a somewhat steeper learning curve. However, if you want an actual "vector space over $V$", you run into the problem that the right-hand side of the vector space axiom $$ a\cdot(b\cdot v) = (ab) \cdot v $$ makes sense only if you can multiply scalars, and a general vector space $V$ does not have a concept that fits into that position. Of course, if you can somehow define a multiplication operation on your vector space, you can plug it into the "scalar field" slot of the definition of vector space and see what happens. How such a multiplication should look depends on the vector space, and on your whims and desires. However, if you want the result to behave even slightly like a vector space, you'll want your multiplication to satisfy some reasonable rules -- at least associativity and distributivity over the vector addition, and possibly also commutativity. (Note in passing that requiring associativity rules out using the cross product in $\mathbb R^3$ here). A vector space equipped with an associative and distributive multiplication is called an algebra, or a commutative algebra if the multiplication is commutative. You can speak of a "vector space" over an algebra, except that for historical reasons such a thing is called a module over the algebra rather than a "vector space". Since we're not assuming that things have multiplicative inverses or identities, modules can be a bit wilder than vector spaces in general. If your multiplication is so nicely behaved that the original vector space happens to be a field (for example, $\mathbb C$ can be thought of as $\mathbb R^2$ equipped with a particular multiplication operation that makes it a field), you can of course have true vector spaces over that.
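To make the last paragraph concrete, here is a tiny sketch (my own addition) of $\mathbb R^2$ equipped with a multiplication that turns it into the field $\mathbb C$:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class C:
        """R^2 with complex multiplication: (a,b)*(c,d) = (ac-bd, ad+bc)."""
        re: float
        im: float

        def __add__(self, other):
            return C(self.re + other.re, self.im + other.im)

        def __mul__(self, other):
            return C(self.re * other.re - self.im * other.im,
                     self.re * other.im + self.im * other.re)

    i = C(0.0, 1.0)
    print(i * i)  # C(re=-1.0, im=0.0): the pair (0, 1) squares to -1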
H: Proof of Chebyshev Inequality I was going through the proof of the Chebyshev Inequality here . And I seem to be facing some trouble in the approximation stage. I can't seem to follow how $\epsilon$ has been approximated to $(t-\mu)$. AI: It is an inequality. The text in that document breaks up the flow slightly. It should read like this: $\int_{-\infty}^{\mu-\epsilon}(t-\mu)^2f_X(t)dt+\int_{\mu+\epsilon}^{\infty}(t-\mu)^2f_X(t)dt \ge \int_{-\infty}^{\mu-\epsilon}\epsilon^2f_X(t)dt+\int_{\mu+\epsilon}^{\infty}\epsilon^2f_X(t)dt$. They did not replace $(t-\mu)^2$ with $\epsilon^2$. Rather, they exploited the inequality $(t-\mu)^2 \ge \epsilon^2$ to achieve the inequality I typed above.
H: Help me evaluate $\int_0^1 \frac{\log(x+1)}{1+x^2} dx$ I need to evaluate this integral: $\int_0^1 \frac{\log(x+1)}{1+x^2} dx$. I've tried $t=\log(x+1)$, $t=x+1$, but to no avail. I've noticed that: $\int_0^1 \frac{\log(x+1)}{1+x^2} dx = \int_0^1\log(x+1) \arctan'(x)dx =\left. \log(x+1)\arctan(x) \right|_{x=0}^{x=1} - \int_0^1\frac{\arctan(x)}{x+1}dx$ But can't get further than this. Any help is appreciated, thank you. AI: Going a little round-about way. Consider, for $ s \geqslant 0$, a parametric modification of the integral at hand: $$ \mathcal{I}(s) = \int_0^1 \frac{\log(1+s x)}{1+x^2} \mathrm{d} x $$ The goal is to determine $\mathcal{I}(1)$. Now: $$ \begin{eqnarray} \mathcal{I}(1) &=& \int_0^1 \mathcal{I}^\prime(s) \mathrm{d} s = \int_0^1 \left( \int_0^1 \frac{x}{1+s x} \frac{\mathrm{d} x}{1+x^2} \right) \mathrm{d} s \\ &=& \int_0^1 \left.\left( - \frac{1}{1+s^2} \log(1+s x) + \frac{s}{1+s^2} \arctan(x) + \frac{1}{2} \frac{\log(1+x^2)}{1+s^2} \right) \right|_{x=0}^{x=1} \mathrm{d} s \\ &=& \int_0^1 \left( \color{green}{ -\frac{\log(1+s)}{1+s^2}} + \frac{1}{4} \frac{\pi s+\log(4)}{1+s^2}\right) \mathrm{d} s = - \mathcal{I}(1) + \frac{1}{4} \pi \log(2) \end{eqnarray} $$ Hence $$ \mathcal{I}(1) = \frac{\pi}{8} \log(2) $$
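A quadrature sanity check of the closed form, $\frac{\pi}{8}\log 2\approx 0.272198$ (my own addition, assuming SciPy is available):

    import math
    from scipy.integrate import quad

    val, err = quad(lambda x: math.log(1 + x) / (1 + x * x), 0, 1)
    print(val, math.pi / 8 * math.log(2))  # both ~0.2721982...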
H: The meaning of matrix powers If a matrix can represent a system of equations, what is the meaning of the square of that matrix? It represents another system? What is the relation with the final system and the first? AI: I'm not sure if this clarifies or obscures. Write your set of equations as: $$Ax = b$$ Where $b$ is a vector and $A$ is an $n\times n$ matrix. Then solving $$A^2 x = b$$ Is the same as solving: $$A x = y$$ $$A y = b$$ Therefore, if we think of the $y$ as another $n$ unknowns, we can write the $2n\times 2n$ matrix for this set of $2n$ equations in $2n$ unknowns as: $$\left(\begin{array}{rrr} A & -I \\ 0 & A \\ \end{array}\right)\left(\begin{array}{rrr} x\\ y \end{array}\right) = \left(\begin{array}{rrr} 0\\ b \end{array}\right)$$ That this "reduces" to solving an equation of the form $A^2 x = b$ is the property of $A^2$. We can actually do much the same thing for any product of two matrices.
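A small NumPy illustration (my own addition) that the $2n\times 2n$ block system recovers the solution of $A^2x=b$:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 3
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    x_direct = np.linalg.solve(A @ A, b)  # solve A^2 x = b directly

    # block system [[A, -I], [0, A]] [x; y] = [0; b], i.e. Ax = y and Ay = b
    M = np.block([[A, -np.eye(n)], [np.zeros((n, n)), A]])
    xy = np.linalg.solve(M, np.concatenate([np.zeros(n), b]))
    print(np.allclose(x_direct, xy[:n]))  # True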
H: functions over dependent random variables Say we have a set of identically distributed integer-valued random variables: $\{ A_i \}_{i=1}^n$, such that they are not independent. Say we have another set of identically distributed integer-valued random variables $\{ B_i \}_{i=1}^n$, such that they are not independent (different dependence than that of the $A_i$'s). If we have $A_i \sim B_i$, that is, $\lim A_i / B_i = 1$, what can we say about a function of the random variables such as: $f(A_1, \dots, A_n) = \max_i A_i$ and $f(B_1, \dots, B_n) = \max_i B_i$ Does it follow that $f(A_1, \dots, A_n) \sim f(B_1, \dots, B_n)$? Thanks! AI: The limit, $\lim \frac{A_{i}}{B_{i}} = 1$, is ill-defined as stated, but seems to mean that $A_{i}$ and $B_{i}$ are essentially the same to leading order. I am assuming the limit $\displaystyle\lim_{n \to\infty} \frac{A_{i}}{B_{i}} = 1$ holds $\forall i = 1, \dots, n$, and that the random variables $A_{i}$ and $B_{i}$ depend on $n$. The question breaks down to the following: Is $\mathbb{P}\left(f(A_1, \dots, A_n) = f(B_1, \dots, B_n)\right) = 1$? (almost sure convergence) For the given functions, $f(A_1, \dots, A_n) = \max_i A_i$ and $f(B_1, \dots, B_n) = \max_i B_i$, we have: Is $\mathbb{P}\left(\max_i A_i = \max_i B_i\right) = 1$? Intuitively, yes. Let me try to give a (semi)-rigorous proof. Proof: Let us explicitly represent $A_i$'s as a function of $n$, i.e. $A_i(n)$. Also, without loss of generality, let us take these random variables to be positive. Let $\max_i A_i(n) = A_j(n)$ for some $j \in \{1, \dots, n\}$. Then, by the relationship between $A_j(n)$ and $B_j(n)$ in the form of the above limit, for every $\epsilon > 0$, there exists a large enough $n_0$ such that for all $n \geq n_0$, $$A_j(n)/\left(1 - \epsilon\right) \geq B_j(n) \geq A_j(n)/\left(1 + \epsilon\right)\hspace{20 mm}(1)$$ Hence, if $A_j(n)$ is the maximum, as assumed above, then it stands to reason that $B_j(n)$ is the maximum among the $B_i(n)$'s. Let me prove this rigorously. Assume that $B_j(n)$ is not the maximum among the $B_i(n)$'s. Let us also assume that there is only one $A_j(n)$ satisfying $\max_i A_i(n) = A_j(n)$. Consider $\max_i B_i(n) = B_k(n)$ for some $k \in \{1, \dots, n\}$, $k \neq j$. Then, for every $\epsilon > 0$, there exists some $n_1$ such that for all $n > n_1$: $$ A_k(n)/\left(1 - \epsilon\right) \geq B_k(n) \geq A_k(n)/\left(1 + \epsilon\right) \hspace{20 mm} (2) $$ This implies that $B_k(n)$ is arbitrarily close to $A_k(n)$ as $ n \to \infty$. Similarly, eq. (1) tells us that $B_j(n)$ is arbitrarily close to $A_j(n)$ as $n \to \infty$. However, $\max_i A_i(n) = A_j(n)$ and hence $A_j(n) \geq A_k(n)$. We, however, assumed that there exists only one $j$ that satisfies $\max_i A_i(n) = A_j(n)$. Hence, clearly, $B_j(n) > B_k(n)$ for sufficiently large $n > \max(n_0, n_1)$. Clearly, we have a contradiction and therefore, $B_j(n)$ is indeed the maximum among the $B_i(n)$ in the limit. The case where $A_j(n) = A_k(n)$ (for sufficiently large $n > \max(n_0, n_1)$) is trivially taken care of, since in this case $B_j(n) = B_k(n)$, and hence, in either case the limit $$\lim_{n \to\infty} \frac{\max_i A_{i}}{\max_i B_{i}} = \lim_{n \to\infty} \frac{A_{j}}{B_{j}} = 1 \hspace{20 mm} (3)$$ where $j$ is as defined above. To finish it off, note that the limits mentioned in eq. (3) are the definition of almost sure convergence, and hence: $\displaystyle \mathbb{P}\left(A_i = B_i\right) = 1$, $\forall i$.
Therefore, $\displaystyle \mathbb{P}\left(\max_i A_i = \max_i B_i\right) = 1$, both maxima being attained at the same index $j$ as above. Q.E.D. By the way, I am not quite sure whether the same reasoning leads to a similar outcome for arbitrary functions $f(A_1, \dots, A_n)$ and $f(B_1, \dots, B_n)$.
H: Find a Jordan Canonical Matrix Similar to a real matrix A If A is a matrix, Find a Jordan Canonical matrix similar to A: $ c(x)=\text{det}(xI-A)=(x-3)^{5}(x-2)^{4} $ The information given about A is: $ \text{rank}(A-3I)=7 $ $ \text{rank}(A-3I)^{2}=5 $ $ \text{rank}(A-3I)^{3}=4 $ $ \text{rank}(A-3I)^{4}=4 $ $ \text{rank}(A-2I)=7 $ $ \text{rank}(A-2I)^{2}=5 $ $ \text{rank}(A-2I)^{3}=5 $ I cannot find an example that will show me how to apply the theorems in the textbook to help me answer this question. :( The textbook is: "Matrices and Linear Transformations" by Cullen. I was googling, and came across something on wikipedia. I'm not sure if this is correct or useful: $ rank(A-3I)=7 \therefore \text{there are 7 Jordan blocks of size 2} $ is this correct? AI: The eigenvalues are $3$ (with multiplicity 5) and $2$ (with multiplicity 4). The number of Jordan blocks associated to $3$ is equal to the dimension of the eigenspace of $3$, namely, to $\mathrm{nullity}(A-3I)$. By the Rank-Nullity Theorem, since $A$ is $9\times 9$ and the rank of $A-3I$ is $7$, the nullity is $2$, so there will be two Jordan blocks associated to $3$. How many blocks are there of size at least 1? The nullity of $A-3I$. How many blocks are there of size at least 2? The initial vectors and second vectors of every block will be annihilated by $(A-3I)^2$; so you can figure out how many blocks have size at least $2$ by comparing the nullity of $A-3I$ and the nullity of $(A-3I)^2$. Every "extra" dimension you get in $(A-3I)^2$ gives you a block of size at least $2$. We have enough information to know that the nullity of $(A-3I)^2$ is 4; that means that there are 4 vectors in a Jordan canonical basis that are annihilated by $(A-3I)^2$. Two of them are the eigenvectors we already had, so that leaves $2$ vectors that correspond to blocks of size at least $2$. Continuing this way, we see that since the nullity of $(A-3I)^3$ is $5$, we get one more vector than before. That means that there is one block of size at least $3$ (four of the vectors were already accounted for, so you just add one more); the nullity of $(A-3I)^4$ is also $5$, so there are no new vectors added. There are no blocks of size at least $4$. So we have two Jordan blocks associated with $3$: one block of size $2$, one block of size $3$. Do something similar with $\lambda=2$. This can all be visualized with the dot diagram of $A$ associated to $3$. This is an array of dots where there is a column for each block, and the number of rows in each column represents the length of the corresponding cycle. If column $i$ has $p_i$ rows, then we will have $p_1\geq p_2\geq\cdots\geq p_k$ (where $k$ is the number of columns). Let $r_1\leq r_2\leq\cdots\leq r_{p_1}$ be the number of dots in each row. You can reconstruct the diagram on the basis of the $p_i$ or on the basis of the $r_j$. The following formulas give you the $r_i$: $$\begin{align*} r_1 &= \dim(V) - \mathrm{rank}(T-\lambda I)\\ r_j &= \mathrm{rank}((T-\lambda I)^{j-1}) - \mathrm{rank}((T-\lambda I)^j)\quad\text{if }j\gt 1. \end{align*}$$ In your example, for $\lambda=3$, we would have $$\begin{align*} r_1 &= 9-7 = 2\\ r_2 &= 7-5 = 2\\ r_3 &= 5-4 = 1\\ r_4 &= 4-4 = 0. \end{align*}$$ So the dot diagram associated to $\lambda=3$ is: $$\begin{array}{ccc} \bullet &\quad & \bullet\\ \bullet & \quad & \bullet\\ \bullet\\ \end{array}$$ The first column tells you there is a block of length $3$; the second column that there is a block of length $2$; the lack of a third column tells you there are only two blocks. 
The $\bullet$ correspond to vectors in Jordan canonical basis. If $v_1$ and $v_2$ are the end vectors of the Jordan cycles associated to the two blocks, the dots correspond as follows: $$\begin{array}{lcl} \bullet\ (A-3 I)^{3-1}(v_1) &\quad&\bullet\ (A-3 I)^{2-1}(v_2)\\ \bullet\ (A-3 I)^{3-2}(v_1) &\quad&\bullet\ v_2\\ \bullet\ v_1 \end{array}$$
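Side note (my own addition): carrying out the same computation for $\lambda=2$ gives $r_1=9-7=2$, $r_2=7-5=2$, $r_3=5-5=0$, i.e. two blocks of size $2$. One can double-check the whole answer by assembling the candidate Jordan matrix and recomputing the ranks numerically (the ranks are small integers, so floating point is safe here):

    import numpy as np
    from scipy.linalg import block_diag

    def jordan_block(lam, k):
        return lam * np.eye(k) + np.diag(np.ones(k - 1), 1)

    # lambda = 3: blocks of sizes 3 and 2; lambda = 2: blocks of sizes 2 and 2
    J = block_diag(jordan_block(3, 3), jordan_block(3, 2),
                   jordan_block(2, 2), jordan_block(2, 2))

    for lam, top in [(3, 4), (2, 3)]:
        N = J - lam * np.eye(9)
        print([np.linalg.matrix_rank(np.linalg.matrix_power(N, p))
               for p in range(1, top + 1)])
    # prints [7, 5, 4, 4] and [7, 5, 5], matching the given rank data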
H: inequality on inner product Let $x \in \Bbb R^n$ and $Q \in M_{n \times n}(\Bbb R)$, where $Q$ is Hermitian and negative definite. Let $(\cdot,\cdot)$ be the usual Euclidean inner product. I need to prove the following inequality: $$(x,Qx) \le a(x,x),$$ where $a$ is the maximum eigenvalue of $Q$. Any idea? AI: Since $Q$ is Hermitian, it can be written as $Q=U\Lambda U^\text{H}$, where $\Lambda$ is the diagonal matrix of eigenvalues and $U$ is unitary ($U^HU=I$). Writing $y = U^Hx$, \begin{equation} (x,Qx) = x^H U\Lambda U^H x = y^H \Lambda y = \sum_i \lambda_i |y_i|^2 \end{equation} We are done, actually: since every eigenvalue satisfies $\lambda_i \leq a$, we have \begin{equation} y^H \Lambda y \leq y^H A y \end{equation} where $A=\text{diag}(a,a,\ldots,a)=aI$. So \begin{equation} (x,Qx) \leq a\, y^H y = a\|y\|^2 = a\|x\|^2 = a(x,x) \end{equation} Here I also used the fact that, since $U$ is unitary, it does not change the length of vectors.
H: Behavior of the spectral radius of a convergent matrix when some of the elements of the matrix change sign I want to prove (or disprove) the following statement: If $A$ is a square matrix with non-negative elements that has spectral radius less then $1$, then any matrix obtained from $A$ by arbitrarily changing the sign of the elements has the same property. This problem appeared recently when studying the convergence of some matrices and I would like to believe that is true. AI: This is true. Recall that if $\|A\|$ is the operator norm of an $n\times n$ matrix $A$ and $\|A\|_\infty$ is the maximum absolute value of the entries of $A$, then, $$\|A\|_\infty \le \|A\| \le n\|A\|_\infty.$$ If $B$ is an arbitrarily signed version of the non-negative matrix $A$, the triangle inequality shows that $\|B^k\|_\infty \le \|A^k\|_\infty$ for any $k\ge 1$. By Gelfand's formula we have: $$ \rho(B) = \lim_{k\to\infty} \|B^k\|^{1/k} \le \lim_{k\to\infty} (n\|B^k\|_\infty)^{1/k} \le \lim_{k\to\infty} (n\|A^k\|_\infty)^{1/k} \le \lim_{k\to\infty} (n\|A^k\|)^{1/k} = \rho(A).$$
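A quick randomized illustration of the statement (my own addition):

    import numpy as np

    rng = np.random.default_rng(1)
    rho = lambda M: max(abs(np.linalg.eigvals(M)))

    A = rng.random((5, 5))   # non-negative entries
    A *= 0.9 / rho(A)        # rescale so that rho(A) = 0.9 < 1

    signs = rng.choice([-1.0, 1.0], size=A.shape)
    B = signs * A            # arbitrary sign flips
    print(rho(A), rho(B), rho(B) <= rho(A) + 1e-12)  # ... True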
H: Degree of Hessian surface invariant under linear transformations? Given a surface $V(f) \subset \mathbb{P}^n$ for a homogeneous polynomial $f$ of degree $d$ on $\mathbb{P}^n$ and a linear transformation $g \in SL(n+1)$. Is the degree of the Hessian $H_f = V(\det (\frac{\partial f}{\partial x_i\partial x_j}))$ of $f$ equal to the degree of the Hessian $H_{f\circ g} = V(\det (\frac{\partial (f\circ g)}{\partial x_i\partial x_j}))$ of $f\circ g$? If so, why? If not, are there any restrictions under which this holds? Many thanks in advance! AI: If $f(x_0,...,x_n)$ is a homogeneous polynomial of degree $d$, its second partial derivatives $\frac{\partial ^2f}{\partial x_i\partial x_j}$ have degree $d-2$, so that the Hessian determinant $Hess(f)=det(\frac{\partial^2 f}{\partial x_i\partial x_j})$ is homogeneous of degree $(d-2)(n+1)$ or is zero. Since $f\circ g$ is also homogeneous of degree $d$, we see that $Hess(f\circ g)$ is homogeneous of degree $(d-2)(n+1)$ or is zero. Conclusion Your varieties $H_f=V(Hess(f))$ and $H_{f\circ g}=V(Hess(f\circ g))$ are equal to $\mathbb P^n$ or are hypersurfaces of degree $(d-2)(n+1)$.
H: Proving that if $\mathrm{char}(F)=p>0$ then if $g(x)\in F[x]$ is irreducible then $g(x)$ has multiple roots iff $g'(x)=0$ I am going over my lecture notes in my Field theory class and I saw the following statement without a proof: if $\mathrm{char}(F)=p>0$ then if $g(x)\in F[x]$ is irreducible then $g(x)$ has multiple roots iff $g'(x)=0$. I believe I can prove that if $g(x)$ has multiple roots then $g'(x)=0$, but I am not sure, and I am unable to prove that the converse is also true. My reasoning is as follows: $g(x)$ having multiple roots implies there is an extension $K/F$ and $\alpha\in K$ s.t. $g(\alpha)=g'(\alpha)=0$ (since if the multiplicity of $\alpha$ in $g(x)$ is $m>1$ (since it is not a simple root) then the multiplicity of $\alpha$ in $g'(x)$ is at least $m-1\gt 0$). Since $g(x)$ is irreducible and we may assume WLOG that $g(x)$ is monic, it follows that the minimal polynomial of $\alpha$ over $F$ is $g(x)$, but $g'(\alpha)=0$ and if $g'\neq0$ then $\deg(g')<\deg(g)$ and this is a contradiction. Is my argument correct, and how can I prove the converse? Help is appreciated! AI: Your argument for necessity is correct. For sufficiency, if $g'(x)=0$ and $g(x)$ is irreducible, then it is not constant (constant polynomials are units, hence not irreducible by definition). If we write $g(x) = a_nx^n+ a_{n-1}x^{n-1}+\cdots + a_0$, with $n\gt 0$ and $a_n\neq 0$, then we conclude that for every $i$ such that $a_i\neq 0$, we must have $ia_i=0$; therefore, $i$ is a multiple of $p$. Thus, $g(x)$ is a polynomial in $x^p$. Therefore we have that $g(x) = a_0 + a_1x^p + \cdots +a_kx^{kp}$. Passing to an extension of $F$ where each coefficient is a $p$th power, if necessary, we can write $g(x)$ as $$\begin{align*} g(x) &= a_0 + a_1x^p + \cdots + a_kx^{kp}\\ &= r_0^p + r_1^px^p + \cdots + r_k^px^{pk} \\ &= (r_0 + r_1x + r_2x^2 + \cdots + r_kx^k)^p \end{align*}$$ hence $g(x) = h(x)^p$ for some $h(x)$, and hence $g(x)$ must have repeated roots.
H: Intuitive interpretation of limsup and liminf of sequences of sets? What is an intuitive interpretation of the 'events' $$\limsup A_n:=\bigcap_{n=0}^{\infty}\bigcup_{k=n}^{\infty}A_k$$ and $$\liminf A_n:=\bigcup_{n=0}^{\infty}\bigcap_{k=n}^{\infty}A_k$$ when $A_n$ are subsets of a measured space $(\Omega, F,\mu)$. Of the first it should be that 'an infinite number of those events is verified', but I don't see how to explain (or interpret this). Thanks for any help! AI: Try reading it piece by piece. Recall that $A\cup B$ means that at least one of $A$, $B$ happens and $A\cap B$ means that both $A$ and $B$ happen. Infinite unions and intersections are interpreted similarly. In your case, $\bigcup_{k=n}^{\infty}A_k$ means that at least one of the events $A_k$ for $k\geq n$ happens. In other words "there exists $k\geq n$ such that $A_k$ happens". Now, let $B_n=\bigcup_{k=n}^{\infty}A_k$ to simplify notation a bit. This gives us $\bigcap_{n=0}^{\infty}\bigcup_{k=n}^{\infty}A_k = \bigcap_{n=0}^{\infty}B_n$. This is interpreted as "all of the events $B_n$ for $n\geq 0$ happen" which is the same as "for each $n\geq 0$ the event $B_n$ happens". Combined with the above interpretation, this tells us that that $\limsup A_n$ means "for each $n\geq 0$ it happens that there is a $k\geq n$ such that $A_k$ happens". This is precisely the same as saying that infinitely many of the events $A_k$ happen. The other one is interpreted similarly: $\bigcap_{k=n}^{\infty}A_k$ means that for all $k\geq n$ the event $A_k$ happens. So, $\bigcup_{n=0}^{\infty}\bigcap_{k=n}^{\infty}A_k$ says that for at least one $n\geq0$ the event $\bigcap_{k=n}^{\infty}A_k$ will happen, i.e.: there is a $n\geq 0$ such that for all $k\geq n$ the event $A_k$ happens. In other words: $\liminf A_n$ is the event that from some point on, every event happens. Edit: As requested by Diego, I'm adding a further explanation. Sets are naturally ordered by inclusion $\subseteq$. This is a partial order, even a lattice. (Putting aside the fact that the universe of sets is not a set.) In fact, every family of sets has an $\inf$ and $\sup$ with respect to $\subseteq$, which can be defined by: $$\inf_{\lambda\in\Lambda}A_\lambda =\bigcap_{\lambda\in\Lambda}A_\lambda$$ and $$\sup_{\lambda\in\Lambda}A_\lambda =\bigcup_{\lambda\in\Lambda}A_\lambda.$$ Now, the usual definition of $\limsup$ and $\liminf$ (of sequences of real numbers) can be rephrased in terms of infima and suprema as follows: $$\liminf_{n\to\infty}a_n=\sup_{n\geq 0}\inf_{k\geq n} a_n$$ and $$\limsup_{n\to\infty}a_n=\inf_{n\geq 0}\sup_{k\geq n} a_n.$$ We can now use the same definition for sets: $$\liminf_{n\to\infty}A_n=\sup_{n\geq 0}\inf_{k\geq n} A_n$$ and $$\limsup_{n\to\infty}A_n=\inf_{n\geq 0}\sup_{k\geq n} A_n.$$ Rewriting this in terms of $\bigcup$ and $\bigcap$, we get precisely the definitions from the question.
H: Proving:$\tan(20^{\circ})\cdot \tan(30^{\circ}) \cdot \tan(40^{\circ})=\tan(10^{\circ})$ How to prove that $\tan20^{\circ}\cdot\tan30^{\circ}\cdot\tan40^{\circ}=\tan10^{\circ}$? I know how to prove $ \frac{\tan 20^{\circ}\cdot\tan 30^{\circ}}{\tan 10^{\circ}}=\tan 50^{\circ}, $ in this way: $ \tan{20^\circ} = \sqrt{3}\cdot\tan{50^\circ}\cdot\tan{10^\circ}$ $\Longleftrightarrow \sin{20^\circ}\cos{50^\circ}\cos{10^\circ} = \sqrt{3}\cdot\sin{50^\circ}\sin{10^\circ}\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{2}\sin{20^\circ}(\cos{60^\circ}+\cos{40^\circ}) = \frac{\sqrt{3}}{2}(\cos{40^\circ}-\cos{60^\circ})\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{4}\sin{20^\circ}+\frac{1}{2}\sin{20^\circ}\cos{40^\circ} = \frac{\sqrt{3}}{2}\cos{40^\circ}\cos{20^\circ}-\frac{\sqrt{3}}{4}\cos{20^\circ}$ $\Longleftrightarrow \frac{1}{4}\sin{20^\circ}-\frac{1}{4}\sin{20^\circ}+\frac{1}{4}\sin{60^\circ} = \frac{\sqrt{3}}{4}\cos{60^\circ}+\frac{\sqrt{3}}{4}\cos{20^\circ}-\frac{\sqrt{3}}{4}\cos{20^\circ}$ $\Longleftrightarrow \frac{\sqrt{3}}{8} = \frac{\sqrt{3}}{8}$ Could this help to prove the first one, and how? Do I just need to know that $ \frac{1}{\tan\theta}=\tan(90^{\circ}-\theta) $ ? AI: In a word, yes. You already know that (in degrees) $\tan 20\cdot\tan30=\tan10\cdot\tan50$ so $$\tan20\cdot\tan30\cdot\tan40 = \tan10\cdot\tan50\cdot\tan40$$ and your observation that $$\frac{1}{\tan40}=\tan50$$ is all you need.
H: Describe all the compact subsets of this space Consider the topological space $(X,\mathscr{U})$, where $X=\mathbb{R}^2$ and the topology $\mathscr{U}$ is generated by the collection of sets $\{(0,0)\}\cup \{I_a\}$ where $I_a$ are the open intervals on the rays departing from the origin. We then make a quotient of this space, lets call it $Y$, by identifying the points on the closed disk $D^2$ centred at the origin of radius 1. Intuitively I can see some closed subsets that are compact (essentially closed segments on the rays) and others which are not (any subset extending over a "continuity" of rays). But, being the question "Describe all the compacts", can you suggest me a way to find them all? Is there a procedure someone can follow in this or in similar cases? Are there some results achievable by finding which separation axiom hold in this space? Finally, what is the role of the identification in this particular problem? I can't find anything relevant. AI: The answer provided below should be sufficient to walk you through the proof. I've left some places for you to fill in the proof since this is homework. I would start by describing the compact sets in the prequotient space. From here, images of compact sets in the prequotient space are compact in the quotient space (most of the time, these are not the only compact sets). In general, there is not a particularly good method of describing all the compact subsets of a given topological space. As BenjaLim pointed out, the space you described is Hausdorff, so any compact set must be closed. Let $(\hat X,\mathscr{U})$ denote the topological space $\hat X=\mathbb{R}^2$ with $\mathscr{U}$ the topology generated by the set $U=\{(0,0)\}\cup\{I_a\}$, where $I_a$ are the open intervals on the rays departing from the origin. Let $(X,\mathscr{V})$ denote the topological space obtained by identifying all the points on the unit disk in the space $(X,\mathscr{U})$. Let $p:\hat X\to X$ denote the quotient map. Now what is the space $(\hat X,\mathscr{U})$ like? Well, notice that each open ray from the origin is naturally homeomorphic to $\mathbb{R}$ with the standard topology. So, it is not difficult to see that $(\hat X,\mathscr{U})\cong \coprod_{a\in[0,1]} (X_a,U_a)$, where $\coprod$ denotes the disjoint union space with the disjoint union topology, and the topological spaces $(X_a,U_a)$ are all $\mathbb{R}$ with the standard topology, except $(X_0,U_0)$, which is the one point space. Finally, note that compact sets in the disjoint union topology are precisely those sets which are finite unions $\bigcup_{i=1}^n K_{a_i}$ where $K_{a_i}\subset X_{a_i}$ is compact in the $U_{a_i}$ topology (You should prove this, it's not particularly difficult). So, the compact sets in the prequotient are precisely those which are finite unions of "closed and bounded" (in the subspace topology, which is homeomorphic to $\mathbb{R}$) subsets of the open rays extending from the origin, possibly along with the point $(0,0)$. Hence, the images of these under the map $p$ are all compact by the continuity of $p$. Now, what about the quotient space $(X,\mathscr{V})$? The purpose of this identification is to make it so that the space is no longer obviously a disjoint union of simpler spaces. Now, let $b=p((0,1))$, that is, $b$ is the image of the unit disk under $p$. 
Now, $p$ is a closed map (you could prove this), and so restricting it to the set on which it is injective we get a homeomorphism from $\mathbb{R}^2\setminus\overline{\mathbb{D}}$ (with the subspace topology inherited from $(\hat X,\mathscr{U})$) onto $X\setminus\{b\}$ (with its subspace topology). Thus, any set $K\subset X$ such that $b\notin K$ is compact in $X$ if and only if it is the image under $p$ of a compact subset of $\hat X$. From the previous paragraph, we have reduced the problem to describing the compact sets of $X$ that contain the point $b$. Let $K\subset X$ be compact with $b\in K$. Suppose that there exist points $a_n\in K$ with $a_n\not=b$ such that $\hat{a}_n=p^{-1}(a_n)$ have the property that $\hat{a}_n$ and $\hat{a}_m$ lie on different rays if $m\not=n$. Now we form saturated open sets $\hat{V}_n$ in $\hat X$ such that $\hat{a}_k\in \hat{V}_n$ if and only if $k\le n$, and such that $\bigcup \hat{V}_n = \hat X$. Writing $V_n=p(\hat{V}_n)$ (which is open in $X$ because $\hat{V}_n$ is saturated and $p$ is a quotient map), we have $K\subset\bigcup V_n$, but $K$ is not contained in any finite subcollection, which contradicts the compactness of $K$. Hence $K$ is the image of a set that is contained in finitely many of the rays extending from the origin. I leave it to you to construct some appropriate $\hat{V}_n$ sets (I know these exist, I have constructed them). The previous paragraph proves that any compact set in $X$ must be the image of a compact set in $\hat{X}$. This completes the proof.
H: Can this function be rewritten to improve numerical stability? I'm writing a program that needs to evaluate the function $$f(x) = \frac{1 - e^{-ux}}{u}$$ often with small values of $u$ (i.e. $u \ll x$). In the limit $u \to 0$ we have $f(x) = x$ using L'Hôpital's rule, so the function is well-behaved and non-singular in this limit, but evaluating this in the straightforward way using floating-point arithmetic causes problems when $u$ is small: then the exponential is quite close to 1, and when we subtract it from 1, the difference doesn't have great precision. Is there a way to rewrite this function so it can be computed more accurately both in the small-$u$ limit and for larger values of $u$ (i.e. $u \approx x$)? Of course, I could add a test to my program that switches to computing just $f(x) = x$, or perhaps $f(x) = x - ux^2/2$ (second-order approximation) when $u$ is smaller than some threshold. But I'm curious to see if there's a way to do this without introducing a test. AI: We once had this (or at least something very similar) in my first numerical analysis class: There it was proposed to calculate $f$ in terms of $y = \exp(-ux)$ instead. I.e. first set $y$ and then calculate $$f(x) = \frac{1-e^{-ux}}{u} = -x\frac{1-y}{\log(y)}$$ I have never understood why this should have any effect on the calculation, however... (It may only have an effect, if we make some assumptions on how the values get calculated inside the computer). Edit: J.M. gave a very nice link in the comments, where it is explained why the above works! If you make some numerical tests, let me know what you get (I'd be interested whether this actually does what they claimed it would)! ;)
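Addendum (mine): the link J.M. gave explains why the trick works (the rounding errors in $1-y$ and $\log y$ cancel). For completeness, most modern math libraries also expose `expm1`, which computes $e^z-1$ accurately for small $z$, so today one can simply write $f(x)=-\operatorname{expm1}(-ux)/u$. A Python comparison of the three forms:

    import math

    def f_naive(u, x):
        return (1.0 - math.exp(-u * x)) / u

    def f_trick(u, x):           # the y-rewrite proposed in the answer above
        y = math.exp(-u * x)
        return x if y == 1.0 else -x * (1.0 - y) / math.log(y)

    def f_expm1(u, x):
        return -math.expm1(-u * x) / u

    u, x = 1e-12, 1.0
    print(f_naive(u, x))  # relative error around 1e-4: catastrophic cancellation
    print(f_trick(u, x))  # accurate: the two rounding errors cancel
    print(f_expm1(u, x))  # accurate

The `y == 1.0` guard returns the limit value $x$ when $ux$ underflows below machine precision, where $\log y$ would be exactly zero.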
H: Concept check with Probability. Classic birthday problem The solution to the problem is (a) 365 days for the sample space. (b) $$\frac{365 \times 1 \times 1}{365^3} = \frac{1}{365^2}$$ I understand (a), there are 365 days in a year.... But I don't understand the reasoning of (b). To compute, I think it's easier to ask the probability that they all won't have the same birthday and then take one minus that. So for person A, there are 365/365 days I could choose, for person B there are 364/365 days, and for person C there are 363/365 days. Hence why shouldn't the probability be $$1 - \frac{365 \times 364 \times 363}{365^3}$$ AI: To put all of this in one place: a. The sample space is by definition all of the possible things that could happen. The collection of possible birthdays (blah, blah, no leap years, assuming that each birthday is equally likely) for three people is (Person A's birthday, Person B's birthday, Person C's birthday). This is a Cartesian product, but ignore that for the moment. Each birthday may be represented by a number $1,\dots 365$ so the sample space of all possible birthdays is the collection of all possible triples $(a, b, c)$ with $1\le a, b, c \le 365$. Some possible birthdays, then, are $(19, 2, 350), (41, 41, 41), (200, 15, 200)$ and so on. Since the birthday for a person has no bearing on the birthday of another, the size of the sample space is $365\text{ (for person A) }\times 365\text{ (for person B) }\times 365\text{ (for person C) } = 365^3$. b. The answer you were given comes from the observation that the number of ways all three people could have the same birthday is to pick a date for person A (365 ways) and then use the same date for person B (one way) and for person C (again, one way), leading to the probability $$ \frac{365\times1\times1}{365\times365\times365}=\frac{1}{365^2} $$ The problem with your calculation is that you were counting one minus the probability that all three people had different birthdays. That's a perfectly reasonable event, but it ignores the other way three people could fail to have all their birthdays on the same day, since you didn't count the possibility that two had the same birthday and the remaining one didn't.
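The two quantities discussed, side by side (my addition):

    p_all_three_same = 365 / 365**3
    print(p_all_three_same, 1 / 365**2)    # equal: about 7.5e-06

    p_some_two_share = 1 - (365 * 364 * 363) / 365**3
    print(p_some_two_share)                # about 0.0082: a different event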
H: How many rolls until probability of a 5 is at least 1/2? Problem: Jak rolls dice and wants the probability of rolling at least one 5 to be at least $\frac{1}{2}$. How many dice should Jak roll? Basically, I got the same answer via two different approaches. Can someone tell me why one of them could be wrong? Solution 1 The chance of getting a 5 is $\frac{1}{6}$, and rolling $n$ dice (or rolling $n$ times, the same thing in this situation) gives $\frac{n}{6}$. Now I want the probability to be at least $\frac{1}{2}$. So I have $\frac{n}{6} > \frac{1}{2} \implies n > 3 \implies n = 4$ That is, I want to roll four dice. Solution 2 This one is a bit more formal. It basically thinks about the probability of NOT getting that 5. So I have $\frac{5}{6}$. Now rolling $n$ times, I get $(\frac{5}{6})^n$ Thus, the event of getting at least one five would be $1 - (\frac{5}{6})^n$ and we want this to be greater than 1/2. $$1 - (\frac{5}{6})^n > \frac{1}{2} \implies n > 3.80 \implies n = 4 $$ Both give me $n = 4$, but clearly the first one is informal. Could someone tell me why the first one could be wrong? AI: Unfortunately, the first one is in fact wrong because you can't add probabilities in this way. If you could, we could set up a ridiculous situation. Say that, instead of a die, you are using a fair coin and you want to guarantee getting heads. If probabilities simply added, that means that the probability of getting heads after two trials would be $2\cdot\frac{1}{2}=1$. This is clearly not true. Your first method would be correct, however, if there was something to prevent the same event from happening twice, but this isn't the case in the problem. Your second method is correct. You are finding the probability $P$ that the opposite of the event happens every time. $P$ is the complement of the event happening at least once, and therefore $1-P$ is the probability you are looking for. Edit: I should have been clearer about this. $n=4$ is the right answer, and you justified it perfectly the second time. The first time just happens to be right; it's not a valid method of finding the answer to this problem.
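Tabulating the (correct) second formula makes the crossover visible (my addition):

    for n in range(1, 6):
        print(n, 1 - (5 / 6) ** n)
    # n = 3 gives 0.4213..., n = 4 gives 0.5177...: four dice are needed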
H: Triangle Inside Circle If the radius of the circle is equal to the length of the chord $AB$, what is the value of $x$? (From the figure: $C$ lies on the circle, $\angle BAC = 50^\circ$, and $x=\angle ACB$.) How would I solve this problem? AI: Without trigonometry: Let $O$ be the center of the circle. In $\triangle OAB$, $AB=OA=OB=$ radius, implying $\triangle OAB$ is an equilateral triangle, so $\angle OAB = 60^\circ$. Thus, $\angle OAC = \angle OAB - \angle BAC = 10^\circ$. Again, in $\triangle OAC$, $OA=OC$, so it is an isosceles triangle, thus $\angle OAC= \angle OCA=10^\circ$. Now, using the central angle theorem, $\angle COB = 2\times \angle BAC= 100^\circ$. $\triangle OBC$ is also isosceles (as $OB=OC$), thus $\angle OBC= \angle OCB=\frac 12 (180^\circ-\angle COB)=40^\circ$. Now, $\angle OCB = \angle OCA + \angle ACB\implies \angle ACB = x = 30^\circ$
H: Map Surjective on a Disk I've got another question from a student that has stumped me: Let $D^{n+1}$ be the $n+1$-disk, with boundary sphere $S^n$. Suppose $f:D^{n+1}\longrightarrow \mathbb{R}^{n+1}$ is a map such that $f(S^n)\subseteq S^n$. Furthermore, suppose that $f|_{S^n}$ has nonzero degree. Show that $f(D^{n+1})$ contains $D^{n+1}$. I have to admit, I'm at a loss to even start this problem. AI: Alright, under the general heading of the Hopf Degree Theorem, we have the Extension Theorem. I'm looking at Guillemin and Pollack, pages 145-146, in the smooth category actually. BUT see WOOKIE for the continuous case: A map $f: \mathbb S^n \rightarrow \mathbb S^n$ is extendable to a map $F: \mathbb D^{n+1} \rightarrow \mathbb S^n$ if and only if $\deg(f)=0.$ The Extension Theorem is the same with the disk and its boundary sphere replaced by any compact connected oriented $W$ with boundary $\partial W$, of dimensions $n+1$ and $n.$ Anyway, the $f|_{S^n}$ you are given has nonzero degree, so there is no extension $F$ that maps all of the closed ball to the sphere. Meanwhile, assume that there is a point $U$ in the open ball that is not in the image of $f.$ Compose $f$ with central projection from $U$ onto $\mathbb S^n.$ This new map takes $\mathbb D^{n+1}$ to $\mathbb S^n$ and agrees with $f$ on $S^n$, so it is an extension. This is a contradiction. Note that it was not really necessary to have the missing point $U$ be at the origin, as the ball is star-shaped around any point of the open ball, so central projection makes sense from any interior point. Furthermore, as the original $f\left(\mathbb D^{n+1}\right)$ is compact, the distance from it to $U$ is bounded from below. This seems necessary for concluding that the composed map is continuous.
H: Efficient method to evaluate the following series: $\sum_{n=1}^\infty \frac{n^2\cdot (n+1)^2}{n!}$ How do I calculate the infinite series: $$\frac{1^2\cdot 2^2}{1!}+\frac{2^2\cdot 3^2}{2!}+\dots \quad?$$ I tried to find the nth term $t_n$. $$t_n=\frac{n^2\cdot (n+1)^2}{n!}.$$ So, $$\sum_{n=1}^{\infty}t_n=\sum_{n=1}^{\infty}\frac{n^4}{n!}+2\sum_{n=1}^{\infty}\frac{n^3}{n!}+\sum_{n=1}^{\infty}\frac{n^2}{n!}$$ after expanding. But I do not know what to do next. Thanks. AI: You are right, now you need to expand them separately and express each of them in form of $e$: $$ \sum \limits_{n=1}^{\infty}\frac{n^2}{n!}= \sum \limits_{n=1}^{\infty} \frac{n+(n-1)n}{n!} = \sum \limits_{n=1}^{\infty} \frac 1{(n-1)!} +\frac 1{(n-2)!} = 2e $$ Similarly, we can show that, $$\sum \limits_{n=1}^{\infty}\frac{n^3}{n!}= \sum \limits_{n=1}^{\infty} \frac{n+(n^2-1)n}{n!} = 5e$$ and, $$\sum \limits_{n=1}^{\infty}\frac{n^4}{n!}= 15e$$
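Combining the three pieces gives $15e + 2\cdot 5e + 2e = 27e$; a partial-sum check (my addition):

    import math

    s = sum(n**2 * (n + 1)**2 / math.factorial(n) for n in range(1, 30))
    print(s, 27 * math.e)  # both 73.3936...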
H: Expanding fractions as powers of $z$ I'm reading through complex functions in Boas' book, and there's a part when discussing Laurent series where she says: "Now, for $0 <|z|<1$, we expand each of the fractions in the parenthesis in powers of $z$." The equation she refers to is the following: $$f(z) = \frac {4}{z} \left({\frac{1}{1+z}}+ {\frac{1}{2-z}}\right).$$ As a result of the expansion, she gets: $$f(z)=-3+9z/2-15z^2/4+33z^3/8+ \cdots +6/z.$$ I have no clue how she got the second equation from the first. Specifically, I don't know what she means by "expand each of the fractions in the parenthesis in powers of $z$". An explanation would be appreciated. AI: Basically the author is using the geometric series expansion in each of the two terms inside the parenthesis: $$\frac{1}{a \pm z} = \frac{1}{a} \frac{1}{1 \pm \dfrac{z}{a}} = \frac{1}{a}\sum_{n = 0}^{\infty} \left( \frac{\mp z}{a} \right )^n, \qquad |z|<|a| $$ (note the sign flip: $\frac{1}{1+w}=\sum_{n\ge0}(-w)^n$, while $\frac{1}{1-w}=\sum_{n\ge0}w^n$). Here $a=1$ for the first term and $a=2$ for the second; adding the two series and multiplying by $\frac{4}{z}$ produces the expansion quoted above.
H: what is the meaning of 100% of 100%? When we say a% of b, we mean 'a' parts out of the 100 parts of 'b'. e.g. 1% of 200 means divide 200 in 100 equal parts and select 1 out of it, i.e. 2 So, what does it mean when we say any percentage of any percentage? like, what is the significance of 100% of 100%? AI: 100 percent. percent=per hundred. 100% is 100 per 100=100/100=1. So 100% of 100% is 100% of 1, which is 1. 50% of 50% is 50% of .5, or half of 50%, which is .25, or 25%. You convert them to their numeric representation and multiply. For contrast, there is the (less common) per mil, which is 1/1000. 1000 per mil = 100 per cent = 1. You're really working with decimal numbers the whole time, just with units of 1/100 or 1/1000.
H: Proof: For all integers $x$ and $y$, if $x^3+x = y^3+y$ then $x = y$ I need help proving the following statement: For all integers $x$ and $y$, if $x^3+x = y^3+y$ then $x = y$ The statement is true, I just need to know the thought process, or a lead in the right direction. I think I might have to use a contradiction, but I don't know where to begin. Any help would be much appreciated. AI: We have \begin{eqnarray*} x^3+x=y^3+y&\Longleftrightarrow& (x^3-y^3)+(x-y)=0\\ &\Longleftrightarrow& (x-y)(x^2+y^2+xy+1)=0. \end{eqnarray*} Since $x^2+y^2+xy+1=(x+\frac{y}{2})^2+\frac{3}{4}y^2+1>0$, we get $x=y$. The hypothesis that $x,y$ are integers is redundant: the argument works for all real $x$ and $y$.
H: What is the answer for the $\lim\limits_{n\rightarrow \infty} \frac{\sin(nt)}{\sin(t)}$? Let $t\in (0,\pi)$ and let $n$ range over the natural numbers. I am wondering what is the answer to the following limit. $$\lim_{n\rightarrow \infty} \frac{\sin(nt)}{\sin(t)}.$$ Thank you. AI: For a fixed $t\in(0,\pi)$ the limit does not exist pointwise, since $\sin(nt)$ keeps oscillating. In the sense of distributions, however, $\ \frac {\sin(nt)}{\pi t}\to \delta(t)\ $ as $n\to\infty\ $ (equation (9)), so that (since $\frac t{\sin(t)}\to1$ as $t\to0$): $$\lim_{n\to \infty} \frac{\sin(nt)}{\sin(t)}=\pi \frac t{\sin(t)}\delta(t)=\pi\delta(t)$$
H: General relationships of variables in expression I am asked to determine how certain modifications to the variables in Coulomb's equation will affect the resultant force: $$F=k\frac{Q_1Q_2}{r^2}$$ The question asks me what will happen when $Q_1$ doubles, and I determine $F$ is doubled. Then I am asked what happens when $Q_1$ and $Q_2$ are doubled, and I get $F$ is quadrupled. But then I am asked what happens when $r^2$ is tripled and I come across some confusion. I calculate $F$ is one third, using $1m$ for $r$ and then solving the equation once for $r=1^2$ and then $r=(1^2)3$, and comparing the resulting values in magnitude. The answer states that $F$ is a ninth. Additionally, I am asked what will happen to $F$ when $Q_1$ and $Q_2$ are doubled and $r^2$ is halved. I am hoping that someone can hint me in the right direction. AI: Let $F(Q_1,Q_2,r) = k \frac{Q_1 Q_2}{r^2}$. I'm not sure if you intend for $r$ or $r^2$ to be halved. If $r$ is halved, then $F(2Q_1, 2Q_2, \frac{1}{2} r) = k \frac{2Q_1 2Q_2}{(\frac{1}{2} r)^2} = k \frac{4Q_1 Q_2}{\frac{1}{4} r^2} = k \frac{16Q_1 Q_2}{r^2} = 16\, F(Q_1,Q_2,r)$. If $r^2$ is halved, then $F(2Q_1, 2Q_2, \frac{1}{\sqrt{2}} r) = k \frac{2Q_1 2Q_2}{(\frac{1}{\sqrt{2}} r)^2} = k \frac{4Q_1 Q_2}{\frac{1}{2} r^2} = k \frac{8Q_1 Q_2}{r^2} = 8\, F(Q_1,Q_2,r)$. The same distinction resolves your tripling confusion: if $r$ itself is tripled, then $r^2$ becomes $9r^2$ and $F$ becomes one ninth, which is the book's answer; your computation of one third corresponds to tripling $r^2$, i.e. replacing $r^2$ by $3r^2$.
H: Finding the Laurent series and pole for $f(z)=\frac {z^2+z+1}{(z-1)^2}$ How do I (a) find the Laurent series for the following: $$ f(z)=\frac {z^2+z+1}{(z-1)^2}$$ (b) Find its pole and its order. I suppose finding the Laurent series would make it easy to find the latter, but I think there's a short cut to finding the pole? Anyhow, I'm interested in both part (a) and part (b) above. The only way I know to solve for part (a) is to use partial fractions, but in such a case I would still have a $(z-1)^2$ in one of the denominators since the above would split up as $$f(z)=(z^2+z+1)\left[\frac{A}{z-1} +\frac{B}{(z-1)^2}\right]$$ Not sure how to proceed. AI: $$ f(z)=\frac {z^2+z+1}{(z-1)^2} =\frac {(z-1)^2 + 3z}{(z-1)^2} = 1+ \frac {3z}{(z-1)^2} = 1+ \frac {3}{z-1}+\frac {3}{(z-1)^2}$$ This is already the Laurent series of $f$ about $z=1$ (all other terms vanish). In particular, the principal part is $\frac{3}{z-1}+\frac{3}{(z-1)^2}$, so $f$ has a single pole, at $z=1$, of order $2$.
H: Proof of $(A - B) - C = A - (B \cup C)$ I have several of these types of problems, and it would be great if I can get some help on one so I have a guide on how I can solve these. I tried asking another problem, but it turned out to be a special case problem, so hopefully this one works out normally. The question is: Prove $(A - B) - C = A - (B \cup C)$ I know I must prove both sides are equivalent to each other to complete this proof. Here's my shot: We start with the left side. If $x \in C$, then $x \notin A$ and $x \in B$. So $x \in (B \cup C)$ So $A - (B \cup C)$ Is this the right idea? Should I then reverse the proof to prove it the other way around, or is that unnecessary? Should it be more formal? Thanks! AI: The normal way to prove equalities like this one is in two steps. First, show that if some $x \in$ LHS, then it must also be $\in$ RHS. Then, you must show that if $x\in$ RHS, then it must also be $\in$ LHS. These two together are enough to prove that LHS = RHS. So, in this specific example, step 1 looks like this. $x\in (A\backslash B)\backslash C$ $\Rightarrow ( x\in A$ and $x\notin B )$ and $x\notin C$ $\Rightarrow x\in A$ and not $(x\in B $ or $x\in C)$ $\Rightarrow x\in A$ and $x\notin B \cup C$ $\Rightarrow x\in A \backslash ( B \cup C )$ So that's the first half. I have left the second half as an exercise for you to solve yourself. Note that there are other ways of solving such questions, but this is probably the most straightforward. Good luck.
H: Are there any elegant methods to classify of the Gaussian primes? Out of curiosity, are there any relatively quick classifications of all the Gaussian primes, the primes in $\mathbb{Z}[i]$? I found a classification here, but the process comes off as rather tedious. No doubt the end classification is nice, but is there a quicker and cleaner process to classify them all? If it matters at all, I have a decent grasp on elementary number theory/quadratic reciprocity and ring/field theory but not much in the way of algebraic number theory yet. AI: Lattice theorem for rings: For any ring homomorphism $\psi:A\to B$ and any ideal $I$ of $B$, the set $\psi^{-1}(I)$ is an ideal of $A$. This map of ideals is inclusion-preserving, i.e. if $I\subseteq J$ are ideals of $B$, then $\psi^{-1}(I)\subseteq \psi^{-1}(J)$. Furthermore, for any prime ideal $P\subset B$, we have that $\psi^{-1}(P)$ is a prime ideal of $A$. We have an inclusion homomorphism $j:\mathbb{Z}\to\mathbb{Z}[i]$. Our strategy is to classify the (non-zero) prime ideals $P$ of $\mathbb{Z}[i]$ according to the "value" of $j^{-1}(P)=P\cap \mathbb{Z}$. First, we will re-express the ring $\mathbb{Z}[i]$ in a more convenient form: Consider the ring homomorphism $\operatorname{ev}:\mathbb{Z}[x]\to\mathbb{Z}[i]$ defined by $\phi(f)=f(i)$. This function is surjective, and the kernel of $\operatorname{ev}$ is the ideal $(x^2+1)$ of $\mathbb{Z}[x]$. Therefore, by the first isomorphism theorem, $\operatorname{ev}$ descends to an isomorphism $\phi:R\to\mathbb{Z}[i]$, where $R=\mathbb{Z}[x]/(x^2+1)$. For the remainder of this post, we will now be interested in classifying the prime ideals of $R$. The lattice theorem guarantees that $\phi$ sets up a bijective, order-preserving correspondence between the prime ideals of the two rings. I can explain more about why this is okay if you'd like. Consider the inclusion homomorphism $k:\mathbb{Z}\to R$ (of course $k=\phi^{-1}\circ j$); we will pretend for the sake of expediency that $\mathbb{Z}$ is actually contained as a subset of $R$. For each (non-zero) prime ideal $(q)$ of $\mathbb{Z}$, we want to classify the prime ideals $Q$ of $R$ such that $k^{-1}(Q)=(q)$, i.e. the prime ideals $Q$ of $R$ that contain $(q)$. Note that a prime ideal $Q$ of $R$ will contain the $\mathbb{Z}$-ideal $(q)$ if and only if $Q$ contains the element $q$, which is the case if and only if $Q$ contains the $R$-ideal $qR$. By the lattice theorem, the prime ideals of $R$ containing $qR$ are in order-preserving bijection with the prime ideals of $$R/qR=(\mathbb{Z}[x]/(x^2+1))/q(\mathbb{Z}[x]/(x^2+1))\cong \mathbb{Z}[x]/(q,x^2+1)\cong \mathbb{F}_q[x]/(x^2+1).$$ If $q=2$, then in $\mathbb{F}_2$ we have $x^2+1=(x+1)^2$, so that $$R/2R\cong \mathbb{F}_2[x]/(x+1)^2$$ has one prime ideal, namely $(x+1)\mathbb{F}_2[x]/(x+1)^2$. Unwinding our various isomorphisms, this corresponds to $i+1$ in $\mathbb{Z}[i]$. If $q\equiv 1\bmod 4$, then $x^2+1$ is reducible (by quadratic reciprocity there is some $a\in \mathbb{F}_q$ such that $a^2=-1$ in $\mathbb{F}_q$), so that $$R/qR\cong \mathbb{F}_q[x]/(x-a)(x+a)$$ which has two prime ideals, $(x-a)\mathbb{F}_q[x]/(x-a)(x+a)$ and $(x+a)\mathbb{F}_q[x]/(x-a)(x+a)$. Unwinding our various isomorphisms, this corresponds to a conjugate pair of Gaussian primes $\pi$, $\overline{\pi}$ with norm $q\equiv 1\bmod 4$ in $\mathbb{Z}[i]$. 
Lastly, if $q\equiv 3\bmod 4$, then $x^2+1$ is irreducible (by quadratic reciprocity there is no $a\in \mathbb{F}_q$ such that $a^2=-1$ in $\mathbb{F}_q$), so that $$R/qR\cong \mathbb{F}_q[x]/(x^2+1)\cong\mathbb{F}_{q^2}$$ is a field and therefore has one prime ideal, namely the zero ideal. Unwinding our various isomorphisms, this corresponds to a Gaussian prime $q\equiv 3\bmod 4$ that lives in $\mathbb{Z}$. Needless to say, this method is well-known and not original to me; for example I'm pretty sure that Neukirch does precisely this at the outset of his book. I'll see if I can find some good references later. Also, I'd just like to comment that algebro-geometrically, what's happening is we're looking at the fibers of $\operatorname{Spec}\mathbb{Z}[i]$ over $\operatorname{Spec}\mathbb{Z}$; look at the curve corresponding to the ideal $(x^2+1)$ in Mumford's famous sketch of $\operatorname{Spec}\mathbb{Z}[x]$ (the sketch itself is not reproduced here). I have to get to sleep, so I'm signing off for now, but I'll respond to any comments or questions when I can.
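As a computational footnote (not part of the ideal-theoretic argument), the trichotomy above can be checked numerically: a rational prime $q$ is the norm of a Gaussian prime exactly when $q=2$ or $q\equiv 1\bmod 4$, which is equivalent to $q$ being a sum of two squares. A minimal sketch, assuming sympy is available:

```python
# A rational prime q splits or ramifies in Z[i] iff q is a sum of two squares,
# which should happen exactly for q = 2 and q = 1 (mod 4).
from sympy import primerange
from math import isqrt

def sum_of_two_squares(q):
    return any(isqrt(q - a*a)**2 == q - a*a for a in range(isqrt(q) + 1))

for q in primerange(2, 100):
    assert sum_of_two_squares(q) == (q == 2 or q % 4 == 1)
print("q is a norm of a Gaussian prime exactly when q = 2 or q = 1 mod 4")
```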
H: Prove that $1.462 \le \int_0^1 e^{{x}^{2}}\,dx\le 1.463$ Prove the following integral inequality: $$1.462 \le \int_0^1 e^{{x}^{2}}\,dx\le 1.463$$ This is a high school problem. So far I did manage to prove that the integral is bigger than $1.462$ by using a Taylor expansion, namely: $$1.462\le 1.4625\le\int_0^1 \left(1+x^2+\frac{x^4}{2}+\frac{x^6}{6}+\frac{x^8}{24}+\frac{x^{10}}{120}\right)dx\le \int_0^1 e^{{x}^{2}}\,dx$$ For the right bound I'm still looking for a way. However, I wonder if there is an elegant way to handle both sides. AI: Here is how to find the upper bound. Integration by parts gives $$\int_0^1 e^{x^2}\,dx = e - \int_0^1 2x^2 e^{x^2}\,dx.$$ Using the fact that $$2x^2 + 2x^4 + x^6 + \tfrac13 x^8 + \tfrac1{12}x^{10} + \tfrac1{60}x^{12} \le 2x^2 e^{x^2},$$ and since both functions are positive, integrating gives $$\int_0^1 \left(2x^2 + 2x^4 + x^6 + \tfrac13 x^8 + \tfrac1{12}x^{10} + \tfrac1{60}x^{12}\right)dx \le \int_0^1 2x^2 e^{x^2}\,dx.$$ Multiplying both sides of the above inequality by $-1$ reverses it, and adding $e$ to both sides then gives $$e - \int_0^1 \left(2x^2 + 2x^4 + x^6 + \tfrac13 x^8 + \tfrac1{12}x^{10} + \tfrac1{60}x^{12}\right)dx \ge e - \int_0^1 2x^2 e^{x^2}\,dx.$$ Evaluating the integral of the truncated power series gives the upper bound $$\int_0^1 e^{x^2}\,dx = e - \int_0^1 2x^2 e^{x^2}\,dx \le 1.462863\ldots < 1.463.$$
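For readers who want to confirm the bounds numerically, here is a minimal pure-Python sketch using Simpson's rule (the step count of 1000 is an arbitrary choice, easily sufficient for this smooth integrand):

```python
# Numerical sanity check of 1.462 <= int_0^1 exp(x^2) dx <= 1.463.
from math import exp

def simpson(f, a, b, n=1000):            # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

value = simpson(lambda x: exp(x * x), 0.0, 1.0)
print(value)                              # ~1.46265, indeed between the bounds
assert 1.462 <= value <= 1.463
```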
H: Non-aleph infinite cardinals I'm now confused with the concept of $\aleph$. 1. $\aleph$ is a cardinal number that is well-ordered in ZF (defined as an initial ordinal, i.e. an ordinal not equipotent with any smaller ordinal). Does that mean $\aleph_x$ in ZF may NOT be equal to $\aleph_x$ in ZFC? 2. I don't know how to define $\aleph$ in ZF. Here's what I tried. Do we call $A$ the class of alephs? That is, let $[\alpha]$ = {$\beta \in OR$|$\beta \simeq \alpha$} Since OR is well-ordered (the class of von Neumann ordinals), $[\alpha]$ has a least element $\alpha_l$. Let $A$={$\alpha_l \in OR$|$\alpha \in OR$} Let $V$ be the union of every $a \in A$. Since $V$ is a subset of OR, $V$ is well-ordered. I'm trying to show that the power set of $V$ is well-ordered, thus if $V$ is a set, $V$ and the power set of $V$ are equipotent (since every well-ordered set is isomorphic with some $b\in OR$), which is a contradiction, to show that $V$ is a proper class, hence $A$ is a proper class. - I don't know how to show that the power set of $V$ is well-ordered. Last question is, how do I define the 'class of cardinals', since there might be some sets equipotent with none of the alephs in ZF? AI: The class of $\aleph$ numbers is the same class of cardinals that you know in a model of ZFC. Namely, initial ordinals. The definitions are exactly the same. Furthermore, by definition the $\aleph$ cardinals are ordinals, so they correspond to well-ordered sets. On the other hand, if $A$ is not a well-orderable set, then $|A|$ corresponds to the set $$\{B\mid \exists f\colon A\to B\text{ a bijection}\land\operatorname{rank}(B)\text{ is minimal}\}$$ where the $\operatorname{rank}$ operator is the von Neumann rank of $B$. This set is not an ordinal, clearly, and it may lack any internal structure. The class of cardinals, therefore, is combined from two parts: The $\aleph$ numbers, which are "ordinals which cannot be put in bijection with any of their elements". We can see that the $\aleph$ numbers do not form a set directly: suppose they did; then there would be an ordinal $\gamma$ such that the set of $\aleph$s has von Neumann rank $\gamma$. In particular all of its elements have rank $<\gamma$. Let $\kappa$ be the first ordinal above $\gamma$ such that $\kappa$ is not in bijection with any of its elements; then $\kappa$ is an $\aleph$, but its von Neumann rank is $\kappa>\gamma$, a contradiction. Cardinals of sets which are not well-orderable. These are described as sets $A$ such that "Every two members of $A$ have a bijection between them, all the elements of $A$ have the same von Neumann rank, and no set of lower rank has a bijection with any element of $A$, and if there is a $B$ of the same von Neumann rank as a member of $A$, and they are in bijection, then $B$ is an element of $A$ as well" Yes, it is a bit clumsy and unclear, but set theory without choice may get like that often. It is immediate that the class of cardinals is a proper class since it contains all the $\aleph$-cardinals. Much like in ZFC the cardinals form a proper class, and the arguments carry over to this case as well. Lastly, you cannot prove that the power set of a well-ordered set is well-ordered, because if the axiom of choice fails this is simply not true. Furthermore, $A$ itself is a class, as it contains elements of unbounded rank, so we need to be more careful with "the union over $A$", as it is not a set either; that is, $V$ itself is a class. As $V$ is a class its power "set" is not a set and does not exist, and as I remarked, power sets of well-ordered sets need not be well-orderable.
See also: Defining cardinality in the absence of choice There's non-Aleph transfinite cardinals without the axiom of choice? How do we know an $ \aleph_1 $ exists at all? (this asserts that $\aleph_1$ exists, even without choice, and the argument carries over to higher cardinals)
H: How to solve this equation involving $()^x$? I have the equation: $\left (\sqrt{3+2\sqrt{2}} \right )^x- \left (\sqrt{3-2\sqrt{2}} \right )^x=\frac{3}{2}$ I wrote the left side of the equation as square roots. $(1+\sqrt{2})^x-(1-\sqrt{2})^x=\frac{3}{2}$ How do I find the final solution? Thank you very much! P.S. The answers I can choose from are: a) $x=1$ b) $x=2$ c) $x=\frac{2\lg2}{\lg(3+2\sqrt2)}$ d) $x=\frac{2\lg2}{\lg(3-2\sqrt2)}$ e) no solution f) $x=2\lg2$ AI: First of all, $\sqrt{3-2\sqrt{2}}$ is $\sqrt{2} - 1$, not $1-\sqrt{2}$. Now, let $y = (\sqrt{2}+1)^x$. Since $(\sqrt{2}-1)^x = 1/y$, the equation becomes $y - \frac{1}{y} = \frac{3}{2}$, so $2y^2 - 3y - 2 = 0$ and $y$ is $2$ or $-\frac{1}{2}$. As $y = (\sqrt{2}+1)^x$, $y$ must be positive and thus equal to $2$. From $(\sqrt{2}+1)^x = 2$ we get $x = \frac{\log(2)}{\log(\sqrt{2}+1)}$, which is the same as option c) here, as you can see by simplifying the expression in c) (note that $3+2\sqrt2=(\sqrt2+1)^2$). I guess they are also taking $\lg$ instead of $\log$, but as you can see it makes no difference whatsoever.
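A quick numerical check of option c) (purely illustrative):

```python
# Verify that x = log 2 / log(1 + sqrt 2) solves the original equation.
from math import sqrt, log

x = log(2) / log(1 + sqrt(2))            # same as 2*log(2)/log(3 + 2*sqrt(2))
lhs = sqrt(3 + 2*sqrt(2))**x - sqrt(3 - 2*sqrt(2))**x
print(x, lhs)                             # lhs should be 1.5 up to rounding
assert abs(lhs - 1.5) < 1e-12
```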
H: Evaluating $\lim\limits_{z \to 0} \frac{z\cdot \cos(z)}{\sin(z)}$ I'm reading Boas' chapter on functions of a complex variable, and she's talking here about finding residues. However, I don't understand the evaluation of the limit (below). How is it that we can take $\cos(0)$ out of the limit while $\sin(z)$ and $z$ stay within the limit? $$ \lim_{z \to 0} \frac{z\cdot\cos(z)}{\sin(z)}=\cos (0)\cdot \lim_{z \to 0}\frac{z}{\sin(z)}=1\cdot 1=1 $$ AI: You can take a factor out of a limit whenever that factor is continuous at the point, provided the limit of the remaining factor exists. The justification is the product rule for limits: if $\lim f$ and $\lim g$ both exist, then $\lim fg = \lim f\cdot\lim g$. We can write an intermediate step, $$\lim_{z \to 0} \frac{z\cos(z)}{\sin(z)} = \lim_{z\to 0}\cos(z)\cdot\lim_{z\to 0}\frac{z}{\sin(z)},$$ where the first limit evaluates to $\cos(0)=1$. We cannot split off $z$ and $\sin(z)$ individually in the same way: although both are smooth, they each evaluate to zero at $z=0$, and we would be left with the indeterminate form $0/0$.
H: Can a subset of a group, which does not contain the identity element, be a group Let $G$ be a group. Let $1 \in G$ be the identity element of $G$. Let $S \subset G$, with $1 \notin S$. Is it possible for $S$ to be a group, with some other element playing the role of the identity in $S$? AI: A subset $S$ of $G$ cannot be a subgroup with respect to the multiplication in $G$: if $S$ were a group under that multiplication, its identity $e$ would satisfy $e\cdot e=e$ in $G$, and the only idempotent element of a group is the identity, so $e=1\notin S$, a contradiction. (I think Dylan explained this well in his comment.) If you are asking whether $S$ can be a group with a different multiplication than the multiplication in $G$, then take for example $S=\{g\}$ for some $g\in G$ and define $g\bigstar g=g$.
H: Count all degree 2 monic irreducible and not irreducible polynomials There's this exercise that really has kept me stuck for a day by now; will you please help me figure it out: let's consider polynomials in $\mathbb Z_3$: characterize degree 2 not irreducible monic polynomials. How many are they? characterize degree 2 irreducible monic polynomials. How many are they? how many degree 4 monic polynomials have no roots and are not irreducible? I have no idea how to characterize those polynomials. I just figured that all polynomials with $\Delta = 0$ or $\Delta = 1$ are not irreducible, and if $\Delta = 2$ then the polynomial is actually irreducible. I couldn't help but enumerate all monic degree 2 polynomials and check one by one whether they met the condition I've set. Is there a better way to count them without being forced to find them all? What about the last question? That really left me with no idea. AI: As I wrote in my comment, it is easier to count the reducible degree two polynomials. Such a polynomial is a product of two monic degree one polynomials. There are three such ($x$, $x-1$ and $x-2$). There are six different products of two of them, hence there are six reducible degree two polynomials. As there are nine monic degree two polynomials altogether, there are three irreducible ones. To count the reducible monic degree four polynomials without zeros, we argue along the same lines: Such a polynomial is a product $p(x)q(x)$ where $p$, $q$ are monic irreducible degree two polynomials. As we found above, there are three such, so we get six reducible, monic, degree four polynomials.
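If you want to double-check these counts by brute force, here is a short Python sketch (the helper names has_root and mul are ad hoc). It uses the fact that a degree 2 polynomial over a field is reducible iff it has a root, and that a monic quartic with no roots is reducible iff it is a product of two monic irreducible quadratics:

```python
# Brute-force counts over F_3; polynomials are tuples (c0, c1, ..., cn) of
# coefficients, with c_i the coefficient of x^i.
from itertools import product

P = 3

def has_root(coeffs):
    return any(sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P == 0
               for x in range(P))

deg2 = [(c0, c1, 1) for c0, c1 in product(range(P), repeat=2)]
irred2 = [f for f in deg2 if not has_root(f)]    # degree 2: irreducible iff no root
print(len(deg2) - len(irred2), len(irred2))      # 6 reducible, 3 irreducible

def mul(f, g):                                   # polynomial product mod 3
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] = (h[i + j] + a * b) % P
    return tuple(h)

quartics = {mul(f, g) for f in irred2 for g in irred2}   # no roots, reducible
print(len(quartics))                             # 6
```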
H: Chain rule for Hessian matrix Given $f\colon \mathbb{R}^n\rightarrow \mathbb{R}$ smooth and $\phi \in GL(n)$. What is the Hessian matrix $H_{f\circ \phi} = \left(\frac{\partial ^2 (f\circ \phi)}{\partial x_i\partial x_j}\right)_{ij}$? AI: Denote by $H_g(x)$ the Hessian matrix of a function $g$, and write $g=f\circ \phi$. Since $\phi\in GL(n)$ is linear, $D\phi(x)=\phi$ for every $x$, so the chain rule gives $$Dg(x)\cdot h=Df(\phi (x))\cdot D\phi (x)\cdot h=Df(\phi (x))\cdot \phi\cdot h,$$ hence $Dg(x)=Df(\phi (x))\cdot \phi$. In particular, writing $a_{kj}$ for the $(k,j)$-th entry of the matrix $\phi$, $$\partial_j g(x)=\sum_{k=1}^n\partial_kf(\phi (x))\,a_{kj}.$$ We can do the same, for a fixed $k$, for the map $x\mapsto \partial_kf(\phi (x))$. We get \begin{align} \partial_{ij}g(x)&=\sum_{k,l=1}^n(H_f(\phi (x)))_{lk}\,a_{li}\,a_{kj}\\ &=\sum_{k=1}^n(\phi^t H_f(\phi (x)))_{ik}\,a_{kj}\\ &=(\phi^t H_f(\phi (x))\,\phi)_{ij}, \end{align} that is, $H_{f\circ\phi}(x)=\phi^t\, H_f(\phi (x))\,\phi$.
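The formula $H_{f\circ\phi}(x)=\phi^t H_f(\phi(x))\,\phi$ is easy to verify numerically. The sketch below (numpy assumed available; the test function $f$ is an arbitrary choice whose Hessian is known in closed form) compares a finite-difference Hessian of $f\circ\phi$ against the right-hand side:

```python
# Finite-difference check of H_{f o phi}(x) = phi^T H_f(phi x) phi, phi linear.
import numpy as np

n = 4
rng = np.random.default_rng(0)
phi = rng.normal(size=(n, n))                # generic, almost surely invertible
x = rng.normal(size=n)

f = lambda u: np.sum(u**3) + np.sum(u)**2    # arbitrary smooth test function
H_f = lambda u: np.diag(6.0*u) + 2.0*np.ones((n, n))   # its exact Hessian

def num_hessian(g, x, h=1e-5):               # central second differences
    H = np.zeros((n, n))
    I = np.eye(n)
    for i in range(n):
        for j in range(n):
            H[i, j] = (g(x + h*I[i] + h*I[j]) - g(x + h*I[i] - h*I[j])
                       - g(x - h*I[i] + h*I[j]) + g(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

lhs = num_hessian(lambda u: f(phi @ u), x)
rhs = phi.T @ H_f(phi @ x) @ phi
print(np.max(np.abs(lhs - rhs)))             # small, up to finite-difference error
```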
H: Example of non-finitely generated $R$-algebra By definition, an $R$-algebra is a ring homomorphism $f: R \to S$. For example, if $R=\mathbb Z$ and $S= \mathbb Z / n \mathbb Z$ then the projection $k \mapsto k \mod n$ is a ring homomorphism so that $\mathbb Z / n \mathbb Z$ is a $\mathbb Z$-algebra. I think the point of an algebra is that it's a bit like a module in that we extend its structure by adding a ring that is acting on it. In the case of modules, we start with an abelian group and in the case of algebras we start with a ring. Now for my question: I've been trying to come up with a non-finitely generated $R$-algebra but couldn't. Can someone help me and give me an example? Thank you. AI: You may find the following example illustrative of the questions involved. If $K$ is a field, then the polynomial ring $K[X]$, although it is a (countably) infinite dimensional vector space over $K$, is finitely generated (by $X$ alone) as a $K$-algebra. However the field $K(X)$ of rational functions (quotients of polynomials) is not finitely generated as a $K$-algebra, since any finite set of generators can only produce finitely many irreducible factors in the denominators. The argument is similar to $\mathbf Q$ being not finitely generated as a $\mathbf Z$-algebra (see the comment by Dylan Moreland), because of the infiniteness of the set of prime factors one needs for denominators. One can also prove (can you do it?) that the ring $K[[X]]$ of formal power series is not finitely generated as a $K$-algebra. Nor is $\mathbf R$ finitely generated as a $\mathbf Q$-algebra; in these examples the sheer (uncountable) dimension as a vector space already excludes finite generation.
H: Inverse function theorem in Banach space to prove short time existence of PDE (explanation of statements) Let $$X = C^{k+2, \alpha}(S(T)),$$ $$Y = C^{k, \alpha}(S(T)),$$ where $S(T) = S^1 \times [0,T]$. Don't think of $T$ as fixed, but varying. So these Banach spaces contain functions with different time intervals. Suppose there is a map $F:X \to Y$ with $$F(u) = u_t - a(x,t,u,u_x,u_{xx})$$ where $a(x,t,z,p,q)$ is smooth in its arguments. We want to show that there is a unique $u^*$ such that $F(u^*) = 0.$ To do this, we can show that the derivative at $u$ $$DF(u)v = v_t - a_z(u)v - a_p(u)v_x - a_q(u)v_{xx}$$ is invertible (or bijective or a linear isomorphism) at a particular function $u$. It is invertible, and we also know that the inverse mappings $DF(u)^{-1}$ vary continuously and are uniformly bounded (for bounded $u$) regardless of the time interval in the domain. Now can someone please explain these points I don't understand: If there is a $u^0 \in X$ such that $F(u^0)$ is small, then the inverse function theorem implies that for all small $s \in Y$, there exists a unique $u$ such that $F(u) = s$, and $u$ depends continuously on $s$. Is that right? By "small", I guess the author means close to zero. My understanding is, if $F(u^0)$ is in a neighbourhood of zero, for all functions $s$ in that same neighbourhood of $0$, we can find a $u$ in some neighbourhood of $u^0$ such that $F(u) = s$. Is that correct? Why does $u$ depend continuously on $s$? Now if $u^0 = a(x,t,0,0,0)t \in X$, provided $T$ is small enough, $F(u^0)$ is as close to $0$ as required. This is the point when we take the time interval $[0,T]$ to be short. It is true that $F(u^0) \to 0$ as $t \to 0$. But this is pointwise convergence; don't we need convergence in the $Y$ norm? Also, how can we be sure that $0$ in fact lies in the neighbourhood of $Y$ that becomes invertible? AI: Your interpretation is not quite right. The inverse function theorem (under your assumptions) states that for $u^0$, there exists a neighborhood $N^0 \ni F(u^0)$ such that $F$ is invertible on $N^0$ with continuous inverse (in fact continuously differentiable). By the uniform bound on $DF^{-1}$ you can take the size of $N^0$ to be uniform: in particular there exists a constant $\delta$ such that $N^0 \supseteq F(u^0) + B_\delta$. So if $\|F(u^0)\| < \delta$, then $0\in N^0$ and there exists $\epsilon$ such that $B_\epsilon \subseteq N^0$, meaning that for $s\in B_\epsilon \subseteq N^0$ we have a continuous map $F^{-1}: B_\epsilon \to X$. Yes, you need convergence in the $Y$ norm. Note that (writing $a(x,t) = a(x,t,0,0,0)$) $$F(ta(x,t)) = a(x,t) + ta_t(x,t) - a(x,t,ta, ta_x, ta_{xx})$$ The first and third terms combine to give (using the differentiability of $a$) a quantity that is $O(t)$. This shows in particular that all $x$ derivatives of $F(ta(x,t))$ are $O(t)$ and hence can be made as small as you want by making $t$ small. There is, however, a problem with the $t$ derivatives: if you take $F(ta(x,t))_t |_{t = 0}$ you get $$ a_t(x,0) + a_t(x,0) - a_t(x,0) - a_z(x,0) a(x,0) - a_p(x,0) a_x(x,0) - a_q(x,0) a_{xx}(x,0) $$ which is not in general $0$. So whichever source you are quoting from is incomplete in its argument.
H: Possibly false proof in AM Here is the excerpt of the book where I suspect a mistake (page 66; the excerpt itself is not reproduced here). Where they say "The restriction to $A$ of the natural homomorphism $A^\prime \to k^\prime$" I think we don't want a restriction. We start with the quotient map $\pi: A[x^{-1}] \to A[x^{-1}] /m$ where $m$ is a maximal ideal containing $x^{-1}$. We take an algebraic closure $\Omega$ of the field $A[x^{-1}] /m$ and consider the map $i \circ \pi: A[x^{-1}] \to \Omega$. Then by the previous theorem, (5.21), we can extend $i \circ \pi$ to some valuation ring $B$ of $K$ containing $A[x^{-1}]$: $g: B \to \Omega$ such that $g|_{A[x^{-1}]} = i \circ \pi$. Then $g(x^{-1}) = 0$. Hence $x^{-1} \in \ker(g)$ and since the kernel is a proper ideal of $B$, $x^{-1}$ is not a unit in $B$ and hence $x$ is not in $B$. Do you agree with my version and that what is written in Atiyah-Macdonald is not correct? Thank you. AI: Congratulations for spotting the difficulty and correcting it, Clark: you are absolutely right and I completely agree with your version! As a slightly different formulation for the proof that $x\notin B$, I would just remark that if we had $x\in B$, we would deduce the absurd conclusion $$1=g(1)=g(x\cdot x^{-1})=g(x)\cdot g(x^{-1})=g(x)\cdot 0=0$$
H: Trigonometric equation inversion I am trying to invert the following equation to have it with $\theta$ as the subject: $y = \cos \theta \sin \theta - \cos \theta -\sin \theta$ I tried both standard trig as well as trying to reformulate it as a differential equation (albeit I might have chosen an awkward substitution). Nothing seems to stick and I keep on ending up with pretty nasty expressions. Any ideas? AI: $$y = \cos \theta \sin \theta - \cos \theta -\sin \theta$$ $$y - \cos \theta \sin \theta = -\cos \theta -\sin \theta$$ $$y^2 - 2y\cos \theta \sin \theta +\cos^2\theta\sin^2\theta= 1+2\cos \theta \sin \theta$$ $$y^2 - y\sin(2\theta) +\frac14\sin^2(2\theta)= 1+\sin(2\theta)$$ This is a quadratic equation for $\sin(2\theta)$ that you can solve. Note that squaring both sides can introduce extraneous solutions, so any candidate $\theta$ recovered this way should be checked against the original equation.
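A quick numerical confirmation of the derived identity (illustrative only; it checks the algebra, not the final inversion step):

```python
# Check y^2 - y*s + s^2/4 == 1 + s with s = sin(2*theta) for random theta.
from math import sin, cos, pi
import random

for _ in range(1000):
    t = random.uniform(0.0, 2.0 * pi)
    y = cos(t)*sin(t) - cos(t) - sin(t)
    s = sin(2.0 * t)
    assert abs(y*y - y*s + s*s/4.0 - (1.0 + s)) < 1e-9
print("squared identity holds; solve it for s, then check each candidate theta")
```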
H: Another question about a proof in Atiyah-Macdonald I have a question about the following proof in Atiyah-Macdonald (the excerpt itself is not reproduced here). 1. Why is $\Omega$ infinite? Are all algebraically closed fields infinite? 2. How does the existence of $\xi$ follow from $\Omega$ being infinite? Thanks. AI: 1.) Yes, algebraically closed fields are infinite: For, if $\mathbb F$ is a finite field, we can consider the polynomial $p(X) = 1 + \prod_{a \in \mathbb F}(X-a)$, which has no zeros as $p(\alpha) = 1 + 0 = 1$ for each $\alpha \in \mathbb F$, so $\mathbb F$ is not algebraically closed. 2.) $\xi$ is chosen such that $\xi$ is not a zero of $q(X) = \sum_{i=0}^n f(a_i)X^{n-i}$; as $f(a_0) \ne 0$, $q$ has degree $n$ and hence at most $n$ different zeros. As $\Omega$ has, by 1.), infinitely many elements, there is a $\xi \in \Omega$ with $0 \ne q(\xi) = \sum_{i=0}^n f(a_i)\xi^{n-i}$.
H: What is the limit for the following sequence of products? $\lim\limits_{n \to \infty}\displaystyle\frac{q \cdot n +1}{q \cdot n} \cdot \frac{q \cdot n +p+1}{q \cdot n +p} \cdot \ldots \cdot \frac{q \cdot n +n \cdot p +1}{q \cdot n + n \cdot p}$, for $q > 0, p \geq 2$. Thanks a lot! AI: Introducing the parameters $a=q/p$ and $b=1/p$, the $n$th ratio is $$ R_n=\prod_{k=0}^n\frac{an+b+k}{an+k}=\frac{\Gamma(an+b+n+1)\cdot\Gamma(an)}{\Gamma(an+b)\cdot\Gamma(an+n+1)}. $$ One knows that $\Gamma(x+b)\sim x^b\cdot\Gamma(x)$ when $x\to+\infty$. Applying this twice, one gets $$ \frac{\Gamma(an+b+n+1)}{\Gamma(an+n+1)}\sim (an+n+1)^b\sim (a+1)^bn^b, \qquad \frac{\Gamma(an+b)}{\Gamma(an)}\sim a^bn^b, $$ hence $$ \lim\limits_{n\to\infty}R_n=\left(\frac{a+1}a\right)^b=\left(\frac{q+p}q\right)^{1/p}. $$
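A numerical illustration for one arbitrary choice of $p$ and $q$ (a sanity check of the closed form, not a proof):

```python
# Compare the finite product R_n against the claimed limit ((q+p)/q)**(1/p).
def R(n, p, q):
    r = 1.0
    for k in range(n + 1):
        r *= (q*n + k*p + 1.0) / (q*n + k*p)
    return r

p, q = 3, 2.5
print(R(10**6, p, q), ((q + p) / q) ** (1.0 / p))   # the two values should agree
```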
H: probability of random points on perimeter containing center related question: probablity of random pick up three points inside a regular triangle which form a triangle and contain the center What is the probability that a (possibly degenerate) triangle made by three randomly chosen points on the perimeter of an n-gon contains the centre of the n-gon? For a square, there is a $\frac{1}{16}$ chance that the points are in configuration a, $\frac{3}{16}$ for configuration b, and $\frac{3}8$ each for c and d. The probability that the points contain the center is $0$ for a and c, $\frac{1}3$ for b (since the center is contained iff one point is on each side of the line TF1, and arbitrarily taking the square to have unit sides yields $2\int_0^1 (a-a^2) \,\mathrm{d} a=\frac{1}3$) and $\frac{1}2$ for d (center contained iff B1 is on the opposite side of the line through D1H1 to C1, $\int_0^1 a \,\mathrm{d}a=\frac{1}2$). Therefore, if I have somehow not made an error, the probability is $\frac{1}4$. [edited] The limiting case of a circle is $\frac{2}{\tau}\int_0^{\frac{\tau}2}\frac{a}{\tau}\mathrm{d}a=\frac{1}4$ (using $\tau=2\pi$ just to be controversial) AI: The answer is in fact always $\frac14$, independent of $n$; you forgot to multiply by $2$ for symmetry and divide by $\tau$ for normalization in the circle result. Wherever the first point is chosen, the "diameter" on which it lies divides the $n$-gon into two symmetric halves, and the second and third points must be in opposite halves. The line connecting them must also lie above the centre (as seen from the first point), and if the second point is at a distance $x$ from the first point along the perimeter (in units of the length of the perimeter), there's an admissible range of length $\frac12-x$ for the third point. Thus the probability is $$2\int_0^{1/2}\left(\frac12-x\right)\mathrm dx=2\int_0^{1/2}x\,\mathrm dx=\frac14\;,$$ where the factor of $2$ is for symmetry because the second and third point can be interchanged.
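The claim is easy to test by simulation. Here is a Monte Carlo sketch for the unit square (the helpers perimeter_point and contains are ad hoc names; the sign test handles both triangle orientations):

```python
# Estimate the probability that a triangle with vertices uniform on the unit
# square's perimeter contains the centre (0.5, 0.5).
import random

def perimeter_point(t):                      # t in [0, 4): walk around the square
    s, u = int(t), t - int(t)
    return [(u, 0.0), (1.0, u), (1.0 - u, 1.0), (0.0, 1.0 - u)][s]

def contains(a, b, c, p):                    # p inside (or on) triangle abc?
    def cross(o, u, v):
        return (u[0]-o[0])*(v[1]-o[1]) - (u[1]-o[1])*(v[0]-o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    return (d1 >= 0 and d2 >= 0 and d3 >= 0) or (d1 <= 0 and d2 <= 0 and d3 <= 0)

N = 10**6
hits = 0
for _ in range(N):
    a, b, c = (perimeter_point(random.uniform(0.0, 4.0)) for _ in range(3))
    hits += contains(a, b, c, (0.5, 0.5))
print(hits / N)                              # should be close to 0.25
```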
H: Value of $P(12)+P(-8)$ if $P(x)=x^{4}+ax^{3}+bx^{2}+cx+d$, $P(1)=10$, $P(2)=20$, $P(3)=30$ What will be the value of $P(12)+P(-8)$ if $P(x)=x^{4}+ax^{3}+bx^{2}+cx+d$ provided that $P(1)=10$, $P(2)=20$, $P(3)=30$? I put these values and got three simultaneous equations in $a, b, c, d$. What is the smarter way to approach these problems? AI: Two remarks, to avoid almost every computation: The polynomial $P(x)-10x$ has roots $1$, $2$ and $3$, hence there exists a polynomial $Q$ such that $P(x)-10x=(x-1)(x-2)(x-3)Q(x)$. The polynomial $P(x)-10x$ has degree $4$ and leading coefficient $1$, hence $Q(x)=x+z$ for some unknown constant $z$ whose value will be irrelevant. Thus, $P(12)+P(-8)=10\cdot(12-8)+11\cdot10\cdot9\cdot(12+z)+9\cdot10\cdot11\cdot(8-z)$, that is, $P(12)+P(-8)=10\cdot4+11\cdot10\cdot9\cdot(12+z+8-z)=40+990\cdot20=19840$.
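A quick symbolic confirmation (assuming sympy is available) that the unknown constant $z$ really drops out:

```python
# Build P(x) = (x-1)(x-2)(x-3)(x+z) + 10x and evaluate P(12) + P(-8).
from sympy import symbols, expand

x, z = symbols('x z')
P = expand((x - 1)*(x - 2)*(x - 3)*(x + z) + 10*x)
print(P.subs(x, 1), P.subs(x, 2), P.subs(x, 3))   # 10, 20, 30 as required
print(expand(P.subs(x, 12) + P.subs(x, -8)))      # 19840, independent of z
```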
H: $(B/m)[x] = B[x]/M$? Assume $K$ is a field and $B$ is a subring of $K$ and $x \in K$. Let $m$ be a maximal ideal of $B$. Let $m^e$ denote the extension of $m$ in $B[x]$. Let $M$ be a maximal ideal in $B[x]$ containing $m^e$. Can someone explain to me why $(B/m)[\bar{x}] = B[x]/M$ where $\bar{x}$ is the image of $x$ in $B[x]/M$? Thank you. AI: I'm afraid your question doesn't really make sense because $\mathfrak m^e$ might be equal to $B[x]$, and then of course no maximal ideal $M\supset \mathfrak m^e$ exists. Here is an example of such failure: Let $K=\mathbb Q, B=\mathbb Z, x=\frac{1}{2}$ and $\mathfrak m=(2)\subset B=\mathbb Z$. Then $\mathfrak m^e=(1)=B[ x]=\mathbb Z[\frac{1}{2}]$ because $2\cdot \frac{1}{2} =1\in \mathfrak m^e$ . Thus no maximal ideal $M$ with $\mathfrak m^e\subset M\subset B[x]$ can exist. Edit However if $M$ exists then $M\cap B=\mathfrak m$ (since $M\cap B$ is a proper ideal of $B$ containing the maximal ideal $\mathfrak m$) and you have an injective morphism $$B/\mathfrak m\to B[x]/M \quad (INJ)$$ An element of $B[x]/M$ is the class $\overline {\Sigma b_jx^j}$ of an element $\Sigma b_jx^j\in B[x]$ and thus can be written $ \Sigma \overline {b_j}\cdot \overline {x} ^j\in (B/\mathfrak m) [\bar x]$ as you wish, once you notice that $(INJ)$ allows you to see $B/\mathfrak m$ as a subring of $B[x]/M$. With this identification you have indeed $ B[x]/M=(B/m)[\bar{x}]$
H: Equation involving prime numbers Given the equation: $$p^2+\phi=q$$ where $p$ and $q$ are prime numbers and $\phi$ a constant, it seems the equation doesn't have solutions for $\phi=1,2,3$, but it has solutions for $\phi=4$. Is it possible to show why? Or maybe, there are solutions that I am not able to find also for $\phi=1,2,3$? Thanks for any suggestions. AI: Consider divisibility of $p^2+\phi$ by $2$ (for $\phi=1,3$) and by $3$ (for $\phi=2$): for odd $p$, both $p^2+1$ and $p^2+3$ are even and greater than $2$, hence composite; and for $p\ne 3$ we have $p^2\equiv 1\pmod 3$, so $p^2+2$ is divisible by $3$ and greater than $3$. This leaves only the small solutions $p=2$ (giving $q=5$ for $\phi=1$ and $q=7$ for $\phi=3$) and $p=3$ (giving $q=11$ for $\phi=2$); no argument of this kind applies when $\phi=4$.
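A brute-force sweep (using sympy, purely illustrative) makes the hint concrete: for $\phi=1,2,3$ only the isolated small solutions survive, while $\phi=4$ admits many.

```python
# For each phi, list the primes p < 10^4 with p^2 + phi also prime.
from sympy import isprime, primerange

for phi in (1, 2, 3, 4):
    sols = [p for p in primerange(2, 10**4) if isprime(p*p + phi)]
    print(phi, sols[:8])
# Expected: phi=1 -> [2], phi=2 -> [3], phi=3 -> [2], phi=4 -> [3, 5, 7, 13, ...]
```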
H: The "need" for cohomology theories In many surveys or introductions, one can see sentences such as "there was a need for this type of cohomology" or "X succeeded in inventing the cohomology of...". My question is: why is there a need to develop cohomology theories ? What does it bring to the studies involved ? (I have a little background in homological algebra. Apart from simplicial homology and the fact that it allows to "detect holes", assume that I know nothing about more complicated homology or cohomology theories) AI: I am not an expert, but I think of a cohomology theory as a description of obstructions to solving some sort of equation. For example, if the first simplicial cohomology of a simplicial complex vanishes, there is no obstruction to assigning weights to edges and faces so that some equations relating these weights are satisfied. (see chapter 3 of Hatcher's Algebraic Topology for more). So to answer your question, one might hope that a new cohomology theory would describe a new obstruction to solving some type of equation on a particular space. Solving equations on topological spaces is pretty useful, so cohomology theories are as well. An example from complex analysis: let $f$ be a holomorphic on some open set $U \subset \mathbb{C}$. Perhaps you'd like to find a holomorphic antiderivative for $f$, i.e. a function $g$ which is holomorphic on $U$ and satisfies $$\frac{\partial}{\partial z} g = f.$$ It's a basic result in complex analysis that if $U$ is simply-connected, we can always find such a $g$. Let's look instead at $\mathbb{C} \setminus {0}$, the complex plane without the origin. Let $f = \frac{1}{z}$. This function is holomorphic on the punctured plane, but (again from basic complex analysis) it has no holomorphic antiderivative. So one cannot always solve the equation $\frac{d}{dz}g = f$ on this space. This can be seen by looking at the "sheaf cohomology of the sheaf of holomorphic differential forms." Moral: sheaf cohomology, a separate theory from simplicial homology, is useful.
H: Finite measure spaces with a total closed set Endow $\mathbb R$ with a finite Borel measure $m$. How can one find a closed set of full measure such that every closed proper subset of it has strictly smaller measure? AI: Let $\mathcal O:=\{O\subset \Bbb R, m(O)=0, O\mbox{ open}\}$ and $S':=\bigcup_{O\in\mathcal O}O$. $S'$ is open hence separable, and by the Lindelöf property we can extract a countable subcover, which proves that $S'$ is open and of measure $0$. Let $S:=\Bbb R\setminus S'$ (the support of $m$). It has full measure. Let $F\subset S$ be a closed subset and assume that $m(S)=m(F)$. We have $S'=S^c\subset F^c$ and since $m$ is finite, $m(F^c)=m(S^c)=0$. So $F^c\in\mathcal O$ and $F^c\subset S'$, which gives $S\subset F$. So if a closed subset of $S$ has full measure it must be $S$ itself. Note that this works for any Lindelöf topological space, since that is the only feature of the real line we have used.
H: Projection matrices I have found these two apparently contradicting remarks about projection matrices: A matrix $P$ is idempotent if $PP = P$. An idempotent matrix that is also Hermitian is called a projection matrix. $P$ is a projector if $PP = P$. Projectors are always positive which implies that they are always Hermitian. Which of the two is correct? Is a matrix $P$ that satisfies $PP=P$ always Hermitian? AI: Let $A:=\pmatrix{1&1\\0&0}$. We have $$A\cdot A=\pmatrix{1&1\\0&0}\cdot\pmatrix{1&1\\0&0}=\pmatrix{1&1\\0&0}=A,$$ but $A$ is not Hermitian. So the first remark is the correct one: $PP=P$ alone does not force $P$ to be Hermitian. The second remark implicitly restricts to orthogonal projectors; a general idempotent is an oblique projector.
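For completeness, the counterexample checked numerically (numpy assumed available):

```python
# A is idempotent but not Hermitian.
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
print(np.allclose(A @ A, A))         # True: idempotent
print(np.allclose(A, A.conj().T))    # False: not Hermitian
```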
H: How to transform data distributed around zero to make it closer to normal? I have data that ranges continuously from $-1$ to $+1$, with lots of zeros in the middle. I want to transform the data to a normal distribution. How would I do this? My normal approach with data containing zeros is to add $1$ then transform ($\log_{10}(\bullet)$, $\sqrt{\bullet}$, etc). However, if I add $1$ to my data, the values that were $-1$ become zero and the transformation produces infinite values. Is it valid to add a number greater than $1$? AI: You can certainly add any number you want to the data. That will shift it horizontally, but it will not make the distribution any different except for changing the center point. In particular, it will be no closer to a normal distribution. Recall that the normal distribution has pdf $\frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$; adding a constant to all your data just adds the same constant to $\mu$. Your later transform can also be adapted to deal with the zeros: instead of taking $\log(\text{data})$, you can take $\log(\text{data}+c)$ for any constant $c>1$; adding a number greater than $1$ is perfectly valid, and it keeps every argument of the log strictly positive.
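A small demonstration (with made-up skewed toy data; scipy assumed available) that a shift leaves the shape statistics untouched, while a log transform after a sufficiently large shift does change them:

```python
# Shifting only relocates the data; a log transform actually changes the shape.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
data = rng.beta(0.5, 2.0, size=10_000) * 2.0 - 1.0   # skewed toy data in [-1, 1]

print(skew(data), skew(data + 2.0))    # essentially identical: shift is cosmetic
print(skew(np.log(data + 1.001)))      # offset > 1 keeps every log argument > 0
```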
H: Show that $x^2+y^2=z^5+z$ has infinitely many relatively prime integral solutions How can one show that the equation $$x^2+y^2=z^5+z$$ has infinitely many relatively prime integral solutions? AI: The number $z^4+1=(z^2)^2+1^2$ is a sum of two relatively prime squares. Let $z$ be a sum of two relatively prime squares. Then the product $(z^4+1)z$ is a sum of two squares, by the Brahmagupta identity $$(s^2+t^2)(u^2+v^2)=(su\pm tv)^2+(sv\mp tu)^2.$$ Now we take care of the relatively prime part. Suppose that $m$ has a representation as a sum of two squares, but no primitive representation. Then $m$ is divisible by $4$ or by some prime of the form $4k+3$. In our case primes of the form $4k+3$ are irrelevant, since every prime divisor of a sum of two relatively prime squares is $2$ or of the form $4k+1$, and both $z$ and $z^4+1$ are such sums. And if $z$ is a sum of two relatively prime squares, then $(z^4+1)z$ cannot be divisible by $4$. Since there are infinitely many primes of the form $4k+1$, and each of them is a sum of two relatively prime squares, this produces infinitely many solutions: pick for example $z$ a power of $5$, or a prime of the form $4k+1$.
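As a companion illustration (not needed for the proof), a short exhaustive search turns up primitive solutions for small $z$:

```python
# Find a primitive representation x^2 + y^2 = z^5 + z for each small z that has one.
from math import isqrt, gcd

for z in range(1, 60):
    m = z**5 + z
    for x in range(1, isqrt(m) + 1):
        y2 = m - x*x
        y = isqrt(y2)
        if y*y == y2 and y > 0 and gcd(x, y) == 1:
            print(f"{x}^2 + {y}^2 = {z}^5 + {z}")
            break
```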