H: 3xy + 14x + 17y + 71 = 0 need some advice $$3xy + 14x + 17y + 71 = 0$$ Need to find both $x$ and $y$. If there were only one variable then this would be an easy problem. Have tried: $$\begin{align}3xy &= -14x - 17y - 71 \\ x &= \frac{-14x - 17y - 71}{3y}\end{align}$$ Then tried to put this expression everywhere instead of $x$, but it took forever to find both $x$ and $y$. I don't even know how to get on the right track. Please give any advice. Thanks. AI: If $x$ and $y$ can be real numbers, with one equation in two unknowns you will have one dimension of freedom. Solving for $x$, for example, $x=- \frac {17y+71}{3y+14}$. You can substitute in any value for $y$ you want except $\frac {-14}3$ and find $x$. If $x$ and $y$ are integers you can use divisibility testing to restrict the options.
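If you want the integer solutions explicitly, here is a minimal brute-force sketch of the divisibility approach (my own addition; the search window of ±1000 is arbitrary, and in fact $3y+14$ must divide $25$, so the three solutions found are all of them):

solutions = []
for y in range(-1000, 1001):
    d = 3*y + 14                     # denominator; y = -14/3 is not an integer anyway
    if d != 0 and (17*y + 71) % d == 0:   # divisibility test from the answer
        solutions.append((-(17*y + 71) // d, y))
print(solutions)  # [(-6, -13), (-14, -5), (-4, -3)]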
H: Examples of 'almost'-vector spaces/modules, where distributivity fails? I'm wondering what things go horribly wrong by not requiring distributivity in the definition of an $R$-module, or $k$-vector space. A 'concrete' example for $\mathbb{R}$ would be quite satisfying. EDIT: Arturo Magidin points out there are two distributivity laws, which completely slipped my mind. I had the law $\alpha(x+y)=\alpha x + \alpha y$ in mind when asking this. He also gave a concrete answer, but I'd love to see other examples of this, and any references would be lovely, should they exist. AI: There are two "distributivity" laws in an $R$-module/vector space: For all $a\in R$, $x,y\in M$, $a(x+y) = ax+ay$; For all $a,b\in R$, $x\in M$, $(a+b)x = ax+bx$. An example in which all axioms of a vector space except for (1) above hold is: Take $V=\mathbb{C}^2$ with its usual addition; define scalar multiplication by: $$\alpha(x,y) = \left\{\begin{array}{ll} (\alpha x,\alpha y) &\text{if }x\neq 0;\\ (0,\overline{\alpha}y) &\text{if }x=0. \end{array}\right.$$ With an arbitrary ring/field, any nontrivial automorphism will do instead of complex conjugation. If you don't mind other axioms failing, you can take $V=\mathbb{R}^2$ over $\mathbb{R}$, with $\alpha(x,y) = (0,0)$ if $\alpha\neq 1$, and $1(x,y)=(x,y)$. Note, however, that this does not satisfy associativity: if $\alpha\neq 0,1$ and $\beta=\frac{1}{\alpha}$, then $\alpha(\beta (x,y)) =(0,0)$, but $(\alpha\beta)(x,y) = (x,y)$. An example in which all axioms of a vector space except for (2) above hold is: take $V=\mathbb{R}$ with its usual addition, and define scalar multiplication by $\alpha\cdot x = \alpha^2x$. For a longer discussion of the independence of the sundry vector space axioms, see here.
H: What are differences between semidirect product and direct product? Given two groups $A, B$, we can construct the direct product $A \times B$ whose elements are of the form $(a, b), a \in A, b\in B$. If $A, B$ are subgroups of a group $G$ and $A \cap B =\{1\}$, then we can construct the semidirect product $A \rtimes B$ whose elements are of the form $ab, a\in A, b\in B$. In this case, is the semidirect product $A \rtimes B$ the same as the direct product $A \times B$? Is the order of $AB$ equal to $|A||B|$? For example, is the order of $(\mathbb{Z}/2\mathbb{Z})^n \rtimes S_n$ equal to $2^n \cdot n!$? Here $S_n$ is the symmetric group on $n$ letters. Thank you very much. AI: In your second sentence, $A$ is required to be a normal subgroup. The semidirect product $A \rtimes B$ will always have the same order as the direct product $A \times B$, but in a direct product $B$ will be normal too. Take $A$ to be of order 3, $B$ to be of order 2, and consider the two non-isomorphic groups of order 6. The cyclic group of order 6 will contain $A \times B$, but the symmetric group of degree 3 will contain an $A \rtimes B$ that is not a direct product. Weyl group example The hyperoctahedral group $B_n$ is defined to be a semidirect product $(\mathbb{Z}/2\mathbb{Z})^n \rtimes S_n$ and can be explicitly represented as the group of all integer matrices such that (1) all non-zero entries are $\pm 1$, and (2) each row and each column has exactly one nonzero entry. The subgroup $A=(\mathbb{Z}/2\mathbb{Z})^n$ corresponds to the diagonal matrices with diagonal entries all being $\pm 1$, so $2^n$ different possibilities. This is a normal subgroup of the hyperoctahedral group. The subgroup $B$ of permutation matrices, the group of all integer matrices such that (1) all non-zero entries are 1, and (2) each row and each column has exactly one nonzero entry, is also a subgroup of the hyperoctahedral group. However, it is not normal. You can see this from the presentation of the hyperoctahedral group on its Coxeter generators: $$\left\langle s_1, s_2, \ldots, s_n : s_1^2 = s_i^2 = 1, (s_1 s_2)^4 = (s_i s_{i+1})^3 = (s_i s_j)^2 = 1 \mid 2 \leq i \leq n, i+2 \leq j, j \leq n \right\rangle$$ where the generator $s_1$ is the diagonal matrix with a single $-1$ in its top left entry, and 1s on the rest of the diagonal, and each generator $s_i$ ($2 \leq i \leq n$) is the permutation matrix formed from the identity by switching both the $(i-1)$st and $i$th rows, and the corresponding columns. If this were a direct product, then we would need $n$ generators for $A$. In the semidirect product, the elements of $B$, especially $s_i$ for $2 \leq i \leq n$, move the first generator $s_1$ of $A$ to the other $n$ generators needed. In the direct product $(s_1 s_2)^2$ would be the identity, but in the hyperoctahedral group it is the matrix whose first row has a $-1$ in the second position, whose second row has a 1 in the first position, and in all other rows the 1 is in the same place as for an identity matrix.
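As a quick sanity check on the order $2^n \cdot n!$ asked about in the question, one can enumerate the signed permutation matrices directly for a small $n$ (an added sketch; the enumeration scheme is mine):

import math
from itertools import permutations, product
import numpy as np

n = 3
# one matrix per (sign pattern, permutation): flip signs of the rows of a permutation matrix
mats = [np.diag(signs) @ np.eye(n, dtype=int)[list(p)]
        for signs in product((-1, 1), repeat=n)
        for p in permutations(range(n))]
print(len(mats), 2**n * math.factorial(n))  # 48 48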
H: How do you solve this radical equation $\sqrt{2x+5} + 2\sqrt{x+6} = 5$? I have a radical equation $$ \sqrt{2x+5} + 2\sqrt{x+6} = 5 $$ and I am having trouble calculating an answer. I keep on getting weird numbers that are not correct as my answer. How do you solve this? A step-by-step procedure would be highly appreciated. AI: In fact my answer is the practical explanation of Old John's answer: $$ [\sqrt{2x+5}+2 \sqrt{x+6}]^{2} = (2x+5)+4\sqrt{(2x+5)(x+6)} +4(x+6) = 25$$ $$ -4-6x = 4\sqrt{(2x+5)(x+6)}$$ $$ 16+48x+36{x}^{2} = 32{x}^{2}+272x+480$$ $$ x= -2 $$ $$ x= 58 $$ Then you check if they are solutions to the initial equation. You will get $x=-2$
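Both candidates survive the squaring steps, so the final check is essential; a two-line numeric verification (my own addition):

import math
for x in (-2, 58):
    print(x, math.sqrt(2*x + 5) + 2*math.sqrt(x + 6))
# -2 5.0   (a genuine solution)
# 58 27.0  (an extraneous root introduced by squaring)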
H: Word Problem - Adding an amount after a certain limit I can't solve this word problem: Courier charges to a certain destination are $65$ cents for the first $250$ grams and $10$ cents for each additional $100$ grams or part thereof. What could be the weight of a package for which the charge is $\$ 1.55$? I am solving it as: $155$ cents = $65 + 90$ = $250$ grams + $900$ grams (since $10$ cents is for $100$ grams) I get the answer $1150$ but the answer is supposed to be $1145$. AI: We use your analysis. We paid an extra $90$ cents. For that, we could ship a package that weighs $250+ \frac{90}{10}(100)$ grams, that is, $1150$ grams. But note that the fine print says that we pay $10$ cents for every $100$ grams or part thereof. So if we are "over" the basic $250$ grams, say by $802$ grams, we pay $80$ cents for the $800$ grams, and an extra $10$ cents for the measly $2$ extra grams over $800$. In effect, we are paying as if our package weighed $250+900$. (So we might as well open the package and put a couple of cookies in. The shipping cost won't change.) Mathematically, all one can say is that if we paid $\$1.55$ to ship the package, then the weight $w$ of the package satisfies the inequality $250+800 \lt w \le 250+900$, that is, $$1050 \lt w \le 1150.$$ If the question was a multiple choice question, and $1145$ was the only "answer" supplied that is in the above interval, then $1145$ is the right answer.
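The "or part thereof" clause is just a ceiling function; a small sketch (the function name charge is my own) makes the admissible interval explicit:

import math

def charge(w):
    """Courier charge in cents for a package weighing w grams (w > 0)."""
    if w <= 250:
        return 65
    return 65 + 10 * math.ceil((w - 250) / 100)

print(charge(1050), charge(1051), charge(1145), charge(1150))  # 145 155 155 155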
H: Prove $\frac{1-\sin(2A)}{\cos(2A)}=\frac{1-\tan A}{1+\tan A}$ How would I prove the following double angle identity? $$\frac{1-\sin(2A)}{\cos(2A)}=\frac{1-\tan A}{1+\tan A}$$ My work thus far is $$\frac{1-2\sin A\cos A}{\cos^2A-\sin^2A}$$ $$\frac{1-2\sin A\cos A}{(\cos A+\sin A)(\cos A-\sin A)}$$ Sadly I am stuck. AI: $$\frac{1-2\sin A\cos A}{\cos^2A-\sin^2A}$$ $$=\frac{\sin^2A+\cos^2A-2\sin A\cos A}{(\cos A+\sin A)(\cos A-\sin A)}$$ $$=\frac{(\cos A-\sin A)^2}{(\cos A+\sin A)(\cos A-\sin A)}$$ $$=\frac{\cos A-\sin A}{\cos A+\sin A}$$ assuming ${\cos A-\sin A} ≠ 0$ $$=\frac{1-\tan A}{1 +\tan A}$$ assuming ${\cos A} ≠ 0$
H: Prove $\cot A\sin 2A=1+\cos 2A$ How would I prove the following two trigonometric identity. $$\cot A\sin 2A=1+\cos 2A$$ This is my work so far $$\frac{\cos A}{\sin A}(2\sin A \cos A)=1+\cos 2A$$ I am not sure what I would do next to make them equal. AI: $$\cot A\sin 2A=\frac{\cos A}{\sin A}(2\sin A\cos A)=\cos A(2\cos A)=2\cos^2 A=1+\cos 2A$$ This should be everything.
H: Is this correct for this expression? I have seen this product identity somewhere and I tried to express it again as \begin{align} \prod _{k=1}^K\left(1-x_{k}\right)=\sum _{k=1}^K \frac{(-1)^k}{k!}\underbrace{\sum _{n_1=1}^K \ldots \sum _{n_k=1}^K}_{n_1\neq n_2\neq \ldots \neq ~ n_k} \prod _{t=1}^k x_{n_t} \end{align} However, I am not sure if this expression is correct. Could you give me a hint? AI: The correct expression would be (don't forget $k=0$!) $$ \prod_{k=1}^K(1-x_k)=\sum_{k=0}^K(-1)^k\sum_{1\leq i_1<\ldots <i_k\leq K}x_{i_1}\ldots x_{i_k} = \sum_{k=0}^K\frac{(-1)^k}{k!}\sum_{1\leq i_1\neq \ldots \neq i_k\leq K}x_{i_1}\ldots x_{i_k}. $$ You can see it directly by expanding the product and playing with permutations, or by using the relation between the coefficients and the roots of a polynomial via symmetric functions, namely plugging $z=1$ into $$ \prod_{i=1}^K(z-x_i)=\sum_{k=0}^Kz^{K-k}(-1)^k\sum_{1\leq i_1<\ldots <i_k\leq K}x_{i_1}\ldots x_{i_k}. $$
H: Show matrix $A+5B$ has an inverse with integer entries given the following conditions Let $A$ and $B$ be 2×2 matrices with integer entries such that each of $A$, $A + B$, $A + 2B$, $A + 3B$, $A + 4B$ has an inverse with integer entries. Show that the same is true for $A + 5B$. AI: First note that a matrix $X$ with integer entries is invertible and its inverse has integer entries if and only if $\det(X)=\pm1$. Let $P(x)=\det(A+xB)$. Then $P(x)$ is a polynomial of degree at most $4$, with integer coefficients, and $P(0),P(1),P(2),P(3),P(4) \in \{ \pm 1 \}$. Claim: $P(0)=P(1)=P(3)=P(4)$. Proof: It is known that $b-a \mid P(b)-P(a)$ for $a,b$ integers. Then $3\mid P(4)-P(1)$, $3\mid P(3)-P(0)$ and $4\mid P(4)-P(0)$. Since each of these differences is $0$ or $\pm 2$, each must be zero. This proves the claim. Now, $P(x)-P(0)$ is a polynomial of degree at most four which has the roots $0,1,3,4$. Thus $$P(x)-P(0)=ax(x-1)(x-3)(x-4) \,,$$ hence $P(2)=P(0)+4a$. Since $P(2), P(0) \in \{ \pm 1 \}$, it follows that $a=0$, and hence $P(x)$ is the constant polynomial $1$ or $-1$. Extra One can actually deduce further from here that $\det(A)=\pm 1$ and $A^{-1}B$ is nilpotent. Indeed $A$ is invertible, and since $\det(A+xB)=\det(A)\det(I+xA^{-1}B)$ you get from here that $\det(I+xA^{-1}B)=1$. It is trivial to deduce next that the characteristic polynomial of $A^{-1}B$ is $x^2$.
H: How to get a new point of a vector when rotated. I want to obtain the new point of a vector after I rotate it as shown. [figure omitted] When I rotate it, I have the angle of rotation. I want to know $x$ and $y$; it rotates about the reference point $(0,0)$. Thanks AI: To give a general answer, you take your position vector $\vec{v}\in\mathbb{R}^{n}$, and you multiply it by the appropriate rotation matrix ${\bf M}\in\mathbb{R}^{n\times n}$. So we have: $$\vec{v}'={\bf M}\vec{v}$$ This will give you the position vector under the rotation described by ${\bf M}$. So let's take your example, of the vector $\vec{v}\in\mathbb{R}^{2}$, where $\vec{v}=\left[55,0\right]$, and multiply it by the matrix ${\bf M}\in\mathbb{R}^{2\times2}$, where ${\bf M}=\left[\begin{smallmatrix}\cos{30^{\circ}} & -\sin{30^{\circ}} \\ \sin{30^{\circ}} & \cos{30^{\circ}} \end{smallmatrix}\right]$. So we have: $$\vec{v}'=\underbrace{\begin{bmatrix}\frac{\sqrt{3}}{2} & -\frac{1}{2} \\ \frac{1}{2} & \frac{\sqrt{3}}{2}\end{bmatrix}}_{\bf M}\underbrace{\left[55\atop 0\right]}_{\vec{v}}\approx\left[47.63 \atop 27.5\right]$$ Hope this helps!
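The same computation in numpy, matching the worked example (an added sketch):

import numpy as np

theta = np.radians(30)
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # counter-clockwise rotation by 30 degrees
v = np.array([55.0, 0.0])
print(M @ v)  # approximately [47.63 27.5]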
H: The count of functions from $X$ to $Y$? Let $X$ be a set with $N$ elements, and $Y$ a set with $M$ elements. What is the count of possible functions from $X$ to $Y$? If the answer is $M^N$, then why can't it be $N \times M$? That is, why not count, for each $x \in X$, the possible mappings from $x$ to each $y\in Y$? AI: Say you need to determine what you will drink with your meals at home tomorrow. There are two meals, breakfast and dinner. And you have three options of drink for each meal: milk, water, and beer. Now, this is the same as having a function $f\colon\{$ breakfast, dinner$\}\to\{$ milk, water, beer$\}$. How many different combinations of drinks can you have? It's more than $2\times 3=6$ combinations; it's $3^2=9$ combinations: Milk at both breakfast and dinner; Milk at breakfast, water at dinner; Milk at breakfast, beer at dinner; Water at breakfast, milk at dinner; Water at both breakfast and dinner; Water at breakfast, beer at dinner; Beer at breakfast, milk at dinner; Beer at breakfast, water at dinner; Beer at both breakfast and dinner. As you see: three choices for what to do for breakfast, and three choices for what to do at dinner, for a total of $3\times 3 = 3^2$ possible outcomes. If you had to plan for $N$ different meals, and you had $M$ choices for each meal? $M$ possible choices for the first, $M$ possible choices for the second, and so on. In the end, do you have $N\times M$, or do you have $M^N$?
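To see the count concretely, itertools can enumerate all such functions for the meal example (an illustrative sketch I am adding):

from itertools import product

meals = ["breakfast", "dinner"]        # N = 2
drinks = ["milk", "water", "beer"]     # M = 3
functions = list(product(drinks, repeat=len(meals)))  # one drink chosen per meal
print(len(functions))  # 9 == 3**2, not 2*3 = 6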
H: Find the area of the region bounded by two curves I need to find the area of the region that is bounded by $y=x^2-4$ and $y=2x-1$. I think I solved it, but I don't know what the right answer is so I'm not sure! I got: $$da = wl$$ $$=(x^2-4)-(2x-1)dy$$ $$=(x^2-2x-3)dy$$ $$\int{}da = \int{(x^2-2x-3)dy}$$ $$a = \frac{x^3}{3}-x^2-3x+c$$ AI: I take it you mean $y=x^2-4$ and $y=2x-1$. Draw a picture. We get a familiar parabola, and a straight line. The straight line $y=2x-1$ meets the parabola where $x^2-4=2x-1$. This can be rearranged to $x^2-2x-3=0$. The quadratic factors as $(x-3)(x+1)$, so the meeting points are at $x=-1$ and $x=3$. Note that the finite region caught between the two has the line above the curve. Thus our area is $$\int_{-1}^3\left((2x-1)-(x^2-4)\right)\,dx.$$ Before integrating, simplify the integrand a bit.
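Finishing the suggested computation with sympy (an added check, not part of the original answer):

import sympy as sp

x = sp.symbols('x')
area = sp.integrate((2*x - 1) - (x**2 - 4), (x, -1, 3))
print(area)  # 32/3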
H: Integral of unimodal functions $f>0$. Suppose $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is positive almost everywhere and integrable. We know that if $n=1$ and if $f$ is unimodal then the integral $F(x)=\int_{[-\infty,x]} f$ is convex for all $x<a$ and concave for all $x>a$, where $a$ is the point of unimodality. Suppose now that one extends the definition of unimodality in the following manner: $\{x\ |\ f(x)\ge a\}$ is a convex set for every $a>0$. I'm wondering if this result holds as well, in the sense that the distribution function $$F(x_1,\ldots,x_n)=\int_{[-\infty,x_i]} f(x_1,\ldots,x_n)dx_1 \ldots dx_n$$ is eventually concave (i.e., in some region $\cap \{x_i\ge t_i\}$). Intuitively, the restriction on the density function should yield this kind of outcome, since as soon as one moves away from the mode, the function starts to decrease. Am I getting the right idea? Can anyone refer me to a reference they know of? Thanks in advance. AI: With $n=2$, consider $f(x,y) = \sum_{j=1}^\infty 2^{-j} f_j(x,y)$ where $f_j(x,y) = 1$ for $-j \le x,y \le j$ and $0$ otherwise. Thus each nonempty $\{(x,y): f(x,y) \ge a \}$ is a square. $F(x,y) = \sum_{j=1}^\infty 2^{-j} F_j(x,y)$ where $F_j(x,y) = (x+j)(y+j)$ for $-j < x,y < j$, while $F_j(x,y)$ is constant for $x,y > j$. Since the Hessian matrix of $(x+j)(y+j)$ is $\pmatrix{0 & 1\cr 1 & 0\cr}$ which is indefinite, $F$ is not concave or convex on $k < x,y < k+1$ for any $k$.
H: The relation between a metric space $(X,d)$ and the topological space that arises from it. Consider the topological space $(\Bbb R,\mathfrak I)$ that arises from the metric space $(\Bbb R,d)$, with $d(x,y)=|x-y|$. I want to prove that $\partial(a,b)=\partial[a,b]=\{a,b\}$. I have that $$x\in \partial A \iff d(x,A)=0\wedge d(x,X\setminus A)=0$$ [This used to be a much longer and tortuous question, but since I can't delete it, I'll just leave what might interest other users, though it wasn't my main concern. When I find a suitable way to ask about my concern, I'll edit] AI: If you want to use $$\partial A = \overline{A}\cap \overline{X-A}$$ you can proceed as follows: Step 1. If $x\in (a,b)$, then $x\notin\partial{(a,b)}$ and $x\notin\partial{[a,b]}$. Proof. Let $\epsilon = \min\{x-a,b-x\}$. Then $\{ y\mid d(x,y)\lt\frac{\epsilon}{2}\}\subseteq (a,b)$ and is open, so $x\notin\overline{X-(a,b)}$; since $\overline{X-[a,b]}\subseteq\overline{X-(a,b)}$, it follows that $x\notin\overline{X-[a,b]}$. Step 2. If $x\in (-\infty,a)$ then $x\notin\partial{(a,b)}$ and $x\notin\partial[a,b]$. Proof. Let $\epsilon =a-x$. Then $\{y\mid d(x,y)\lt \frac{\epsilon}{2}\}\subseteq (-\infty,a)\subseteq X-[a,b]$ and is open, so $x\notin\overline{[a,b]}$. Since $\overline{(a,b)}\subseteq\overline{[a,b]}$, it follows that $x\notin\overline{(a,b)}$. Step 3. If $x\in (b,\infty)$, then $x\notin\partial{(a,b)}$ and $x\notin\partial[a,b]$. Proof. Similar to that in step 2. Step 4. $a\in\partial(a,b)\cap\partial[a,b]$. Proof. Let $\epsilon\gt 0$. Let $\delta=\frac{1}{2}\min\{b-a,\epsilon\}$. Then $a+\delta\in (a,b)\cap \{x\mid d(x,a)\lt\epsilon\}$, and $a-\delta\in (X-[a,b])\cap \{x\mid d(x,a)\lt\epsilon\}$. Thus, every open ball containing $a$ intersects $(a,b)$, so $a\in\overline{(a,b)}\subseteq \overline{[a,b]}$; and every open ball containing $a$ intersects $X-[a,b]$ and so $a\in\overline{X-[a,b]}\subseteq\overline{X-(a,b)}$. Thus, $a\in \partial (a,b)\cap\partial[a,b]$. Step 5. $b\in\partial(a,b)\cap\partial[a,b]$. Proof. Similar to step 4.
H: Cauchy Riemann Equations for $g(v(x,y))=u(x,y)$. I have that $f(z)=u(x,y)+iv(x,y)$ for $z=x+iy$ is analytic on an open, connected set $U$. Suppose there is a function $g: \mathbb{R} \longrightarrow \mathbb{R}$ such that $g(v(x,y))=u(x,y)$. Prove that $f$ is a constant function. I am having trouble applying the Cauchy Riemann equations to the composition of functions above. AI: The Cauchy-Riemann equations imply $$\begin{eqnarray*} \frac{\partial g}{\partial v} \frac{\partial v}{\partial x} &=& \frac{\partial v}{\partial y} \\ \frac{\partial g}{\partial v} \frac{\partial v}{\partial y} &=& -\frac{\partial v}{\partial x}. \end{eqnarray*}$$ Therefore, $$\left[ \left(\frac{\partial g}{\partial v}\right)^2 + 1\right] \frac{\partial v}{\partial y} = 0.$$ But $g$ is real-valued, so $\left(\frac{\partial g}{\partial v}\right)^2 + 1 > 0$, and hence $\frac{\partial v}{\partial y} = 0$. Use this fact, along with the Cauchy-Riemann equations and the identity $g(v) = u$, to argue that $u$ and $v$ are constant functions.
H: Showing $\sum_{n=1}^{\infty}\left\Vert x\right\Vert ^{n} $ does not converge uniformly. Prove that the series $\sum_{n=1}^{\infty}\left\Vert x\right\Vert ^{n} $, $x\in\mathbb{R}^{n} $, does not converge uniformly on the unit ball $\left\{ x\in\mathbb{R}^{n}\mid\left\Vert x\right\Vert <1\right\} $. I am not sure how to show this. What I got to is that the given series is a geometric series and hence $$f\left(x\right)=\sum_{n=1}^{\infty}\left\Vert x\right\Vert ^{n}=\frac{\left\Vert x\right\Vert }{1-\left\Vert x\right\Vert }$$ on the unit ball, which is continuous (on the unit ball). But this doesn't tell us anything. AI: Suppose this converges uniformly to $\frac{\|x\|}{1-\|x\|}$. Then for any $\epsilon>0$, we have some $N$ such that $$\frac{\|x\|}{1-\|x\|}-\sum\limits_{n=1}^N \|x\|^n<\epsilon, \forall x\in B_1(0)$$ but this cannot be the case, as $$\lim\limits_{\|x\|\to 1}\frac{\|x\|}{1-\|x\|}-\sum\limits_{n=1}^N \|x\|^n=\infty>\epsilon$$ since the sum is bounded by $N$ yet the fraction diverges to $\infty$. Thus the series does not converge uniformly on the unit ball $B_1(0)$.
H: What function satisfies $x^2 f(x) + f(1-x) = 2x-x^4$? What function satisfies $x^2 f(x) + f(1-x) = 2x-x^4$? I'm especially curious if there is both an algebraic and calculus-based derivation of the solution. AI: We start from $$x^2f(x)+f(1-x)=2x-x^4.$$ Replace $x$ by $1-x$. Then $1-x$ gets replaced by $x$. So $$(1-x)^2f(1-x)+f(x)=2(1-x)-(1-x)^4.$$ Two linear equations in two unknowns, $f(x)$ and $f(1-x)$. Solve for $f(x)$.
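The elimination can also be done mechanically; a sympy sketch (the symbols fx and f1x are my stand-ins for $f(x)$ and $f(1-x)$):

import sympy as sp

x = sp.symbols('x')
fx, f1x = sp.symbols('fx f1x')   # stand-ins for f(x) and f(1-x)
eq1 = sp.Eq(x**2*fx + f1x, 2*x - x**4)
eq2 = sp.Eq((1 - x)**2*f1x + fx, 2*(1 - x) - (1 - x)**4)
sol = sp.solve([eq1, eq2], [fx, f1x])
print(sp.simplify(sol[fx]))  # 1 - x**2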
H: Is composition of piecewise linear functions again a piecewise linear function? I have some piecewise linear (not necessarily continuous) functions (also, in case it matters, in my specific case 'a' is larger than 0 in all functions). Is the composition of those functions again a piecewise linear (not necessarily continuous) function? If yes, what about the slightly more complicated scenario? The functions are still piecewise, but now the pieces are not ideal linear functions anymore but include a small amount of 'noise': f(x)= a*x+b+randomNoise (randomNoise is different for every call to f(), but always smaller than a). Is the composition of such functions again a piecewise linear function of the form f(x)= a*x+b+randomNoise for all pieces? AI: Yes, the composition of piecewise linear functions is piecewise linear. However, your "linear plus random noise" class is not closed under composition, at least under reasonable standard assumptions about what "random noise" is allowed to look like. Consider $$f(x) = \max(0,x) + 0\cdot\mathit{noise} $$ $$g(x) = 0 + 1\cdot\mathit{noise} $$ Then $f(g(x))$ is $0$ plus some one-sided noise, which probably isn't what you were expecting to get.
H: Finding the expectation of a truncating event given a particular outcome? Approaching the following problem: Gambles are independent, and each one results in the player being equally likely to win or lose 1 unit. Let $W$ denote the net winnings of a gambler whose strategy is to stop gambling immediately after his first win. Find $E[W]$. Realizing that the expectation is not just $E[ W] = \sum\limits_{x} i \left(\frac{ 1}{ 2}\right)$ I am unsure of how to approach the problem. Letting $W_L$ be the value of $W$ accumulated by losses and $W_W$ be the value of $W$ accumulated by wins, I am inclined to believe we are looking at $E[ W_L] = \sum\limits_{i = 0}^\infty -i \left(\frac{ 1}{ 2}\right)^i$ and $E[ W_W] = 1$. However, 1) I do not know if this is the correct approach and 2) should this indeed be the correct approach, I do not understand [conceptually] how to merge these two summations? AI: Let $W$ be the "winnings," and $X$ the number of trials until the first win. Then $W=-(X-1)+1=2-X$. Now use the fact that $E(X)=\frac{1}{1/2}$ and the fact that $E(2-X)=2-E(X)$. Here we used the fact that if $p$ is the probability of success on any trial, and $p\ne 0$, then $E(X)=\frac{1}{p}$. (The random variable $X$ has geometric distribution.) There are various ways to prove the required result. For example, when $p=1/2$, the probability that $X=n$ is equal to $\frac{1}{2^n}$. So $$E(X)=1\cdot \frac{1}{2}+2\cdot \frac{1}{2^2}+3\cdot \frac{1}{2^3}+\cdots.\tag{$1$}$$ Multiply by $2$. We get $$2E(X)=1+2\cdot \frac{1}{2}+3\cdot \frac{1}{2^2}+4\cdot\frac{1}{2^3}+\cdots.\tag{$2$}$$ Subtract $(1)$ from $(2)$. We get $$E(X)=1+\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3}+\cdots.$$ The series on the right is an infinite geometric series with sum $2$. Another way: We find $E(W)$ directly. With probability $\frac{1}{2}$ we win immediately, ending up with $1$ dollar. With probability $\frac{1}{2}$ we "win" $-1$ dollar on the first throw, and in effect the game starts all over again, and our expected net gain is $E(W)$. It follows that $$E(W)=\frac{1}{2}\cdot 1+\frac{1}{2}\bigl(-1+E(W)\bigr).$$ On the assumption that $E(W)$ is finite, we can now solve the above equation for $E(W)$, and find that $E(W)=0$.
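A Monte Carlo check of the second argument (my own addition):

import random

def net_winnings():
    w = 0
    while True:
        if random.random() < 0.5:   # win 1 unit and stop
            return w + 1
        w -= 1                       # lose 1 unit and keep playing

trials = 10**6
print(sum(net_winnings() for _ in range(trials)) / trials)  # ≈ 0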
H: Proving $\frac{-\theta + \theta^2}{2}$ is an algebraic integer in $K = \mathbb{Q}(\theta)$, given that $\theta^3 + 11\theta - 4 = 0$ As the title says, given that $\theta^3 + 11\theta - 4 = 0$, I'm trying to prove that $\frac{-\theta + \theta^2}{2}$ is an algebraic integer in $K = \mathbb{Q}(\theta)$. I know that $x^3 + 11x -4$ is irreducible in $\mathbb{Q}[x]$ since it's irreducible in $\mathbb{F}_3[x]$. I also know that the set of algebraic integers forms an integral domain and thus I know that $-\theta + \theta^2$ is an algebraic integer, unfortunately that's the best I can do with that method since $\frac{1}{2}$ is specifically not an algebraic integer. Clearly I need to somehow use the polynomial to solve this, but I can't see how, can anyone point me in the right direction? Thanks. AI: The minimal polynomial of $(\theta^2-\theta)/2$ is ${z}^{3}+11\,{z}^{2}+36\,z+4$. One way to get this is: if $t = (\theta^2-\theta)/2$, express $t^3 + b t^2 + c t + d$ as a rational linear combination of $1$, $\theta$ and $\theta^2$, and solve the system of equations that say that the coefficients of $1$, $\theta$ and $\theta^2$ are all $0$.
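sympy can confirm the minimal polynomial directly (an added verification, not the intended hand computation):

import sympy as sp

x, z = sp.symbols('x z')
theta = sp.CRootOf(x**3 + 11*x - 4, 0)   # the real root of x^3 + 11x - 4
t = (theta**2 - theta) / 2
print(sp.minimal_polynomial(t, z))        # z**3 + 11*z**2 + 36*z + 4

Since the output is monic with integer coefficients, $\frac{-\theta+\theta^2}{2}$ is indeed an algebraic integer.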
H: A Couple of Normal Bundle Questions We are working through old qualifying exams to study. There were two questions concerning normal bundles that have stumped us: $1$. Let $f:\mathbb{R}^{n+1}\longrightarrow \mathbb{R}$ be smooth and have $0$ as a regular value. Let $M=f^{-1}(0)$. (a) Show that $M$ has a non-vanishing normal field. (b) Show that $M\times S^1$ is parallelizable. $2$. Let $M$ be a submanifold of $N$, both without boundary. If the normal bundle of $M$ in $N$ is orientable and $M$ is nullhomotopic in $N$, show that $M$ is orientable. More elementary answers are sought. But, any kind of help would be appreciated. Thanks. AI: Some hints. For $1$: (a) consider $\nabla f$. (b) Show that $TM\oplus \epsilon^1\cong T\mathbb{R}^{n+1}|_M$ is a trivial bundle, then analyze $T(M\times S^1)$. For $2$: try to use the homotopy to construct an orientation of $TN|_M$. Let $F:M\times[0,1]\rightarrow N$ be a smooth homotopy map s.t. $F_0$ is an embedding and $F_1$ maps to a point $p$; then pull back (my method is parallel transportation) the orientation of $T_p N$ to $TN|_M$, and then use $TN|_M\cong TM\oplus T^\perp M$.
H: The isomorphism of quotient groups If $X$ is an abelian group and $A,B$ are its subgroups with $A\cong B$, is the quotient group $X/A$ isomorphic to the quotient group $X/B$? AI: Let $X$ be the group of integers under addition. Let $A$ be the group of even integers, $B$ the group of integers divisible by 3. Then $X$, $A$, and $B$ are isomorphic to one another; they are all infinite cyclic groups. However $X/A$ is the group of integers mod 2, a cyclic group of order 2, and $X/B$ is the group of integers mod 3, a cyclic group of order 3.
H: What is the average distance of a combination set? I'm working on a genetic algorithm and would like to map each function to a set of "codons". So + -> 011. Given this, I would like to figure out how easy it would be for any given codon set to mutate into another codon set. If you were to take two combinations, say 012 and 111, they would have a distance of 2, because it would take 2 mutations to move from one to the other. E.g. 012 -> 011 -> 111. So, here's my question: given an (n, k) combination set, what is the average distance between any two combinations? I think you might be able to model this as a small world network, but if there is a better approach I'm all ears. AI: You seem to be talking about $k$-tuples with $n$ possible values. If two of those are randomly chosen, the probability that they differ in the $j$'th position is $1 - 1/n$, and the expected Hamming distance between them is $k(1-1/n)$.
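A quick simulation of the $k(1-1/n)$ formula (an added sketch; here each of the $k=3$ positions takes one of $n=3$ values):

import random

n, k, trials = 3, 3, 10**5
total = sum(sum(random.randrange(n) != random.randrange(n) for _ in range(k))
            for _ in range(trials))
print(total / trials, k * (1 - 1/n))  # ≈ 2.0  2.0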
H: Evaluate $\tan^{2}(20^{\circ}) + \tan^{2}(40^{\circ}) + \tan^{2}(80^{\circ})$ Evaluate $\tan^{2}(20^{\circ}) + \tan^{2}(40^{\circ}) + \tan^{2}(80^{\circ})$. Can anyone help me with this? Thank You! AI: Method $1:$ We know $$ \tan^2A=\frac{1-\cos2A}{1+\cos2A} $$ Let us find the cubic equation whose roots are $\cos40^\circ, \cos80^\circ, \cos160^\circ$. As $\cos(3\cdot 40^{\circ})=\cos120^{\circ}=-\frac{1}{2}$ or, $4\cos^340^{\circ} -3\cos40^{\circ}=-\frac{1}{2}$. So, $\cos40^{\circ} $ is a root of $$ 4x^3-3x=-\frac12\implies 8x^3-6x+1=0 $$ Similarly, $\cos80^{\circ},\cos160^{\circ}$ are also the roots of $ 8x^3-6x+1=0 $ (Another derivation can be found at the bottom) If we replace $x$ with $\dfrac{1-y}{1+y}$, the sum of the roots of the new equation in $y$ will give us the desired value. Method $2:$ (Inspired by Zarrax's answer) Observe that $\tan(3\cdot20^\circ)=\tan60^\circ=\sqrt3$ $\tan(3\cdot40^\circ)=\tan120^\circ=\tan(180^\circ-60^\circ)=-\tan60^\circ=-\sqrt3$ $\iff \tan\{3(-40^\circ)\}=\sqrt3$ and $\tan(3\cdot80^\circ)=\tan240^\circ=\tan(180^\circ+60^\circ)=\tan60^\circ=\sqrt3$ $$\text{As }\tan3\theta=\frac{3\tan\theta-\tan^3\theta}{1-3\tan^2\theta}$$ $$\text{the roots of the equation } t^3-3\sqrt3t^2-3t+\sqrt3=0 (\text{ Putting } \tan3\theta=\sqrt3)$$ will be $\tan20^\circ,\tan(-40^\circ)=-\tan40^\circ, \tan80^\circ$ Using Vieta's formulas, $$\tan20^\circ+(-\tan40^\circ)+\tan80^\circ=\frac{3\sqrt3}1$$ $$\text{and } \tan20^\circ(-\tan40^\circ)+\tan20^\circ\cdot\tan80^\circ+\tan80^\circ(-\tan40^\circ)=-3$$ $$\text{So,}\tan^220^\circ+\tan^240^\circ+\tan^280^\circ =(\tan20^\circ)^2+(-\tan40^\circ)^2+(\tan80^\circ)^2$$ $$=\{\tan20^\circ+(-\tan40^\circ)+\tan80^\circ\}^2$$ $$-2\{\tan20^\circ(-\tan40^\circ)+\tan20^\circ\cdot\tan80^\circ+\tan80^\circ(-\tan40^\circ)\}$$ $$=(3\sqrt3)^2-2(-3)=33$$ [ Applying the following identities, $$\begin{align*} \cos 2A+\cos 2B&=2\cos(A+B)\cos(A-B),\\ \sin2A&=2\sin A\cos A,\\ 2\cos A\cos B&=\cos(A-B)+\cos(A+B) \end{align*}$$ we get $$\begin{align*} \cos40^{\circ} + \cos80^{\circ} + \cos160^{\circ}&=0\\ \cos40^{\circ}\cos80^{\circ} + \cos80^{\circ}\cos160^{\circ} + \cos160^{\circ}\cos40^{\circ}&=-\frac{3}{4}\\ \end{align*}$$ $$\text{ and } \cos40^{\circ} \cos80^{\circ} \cos160^{\circ}=-\frac{1}{8}$$ Then the cubic equation whose roots are $\cos40^{\circ}, \cos80^{\circ}, \cos160^{\circ}$ is $$ x^3-\frac{3}{4}x+\frac{1}{8}=0 $$ ]
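A numerical sanity check (added; not part of either method):

import math

value = sum(math.tan(math.radians(a))**2 for a in (20, 40, 80))
print(value)  # 33.0 up to roundoff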
H: Proof of Gauss's Lemma (Riemannian Geometry version) I was self-learning Do Carmo's Riemannian Geometry; there is a step in the proof of Gauss's Lemma that I can't quite figure out. Since $d\,\exp_p$ is linear and, by the definition of $\exp_p$, $$ \langle (d\,\exp_p)_v(v),(d\,\exp_p)_v(w_T)\rangle=\langle v,w_T\rangle. $$ So I went on Wikipedia hoping to find something that can help me figure this out. I did find something. HERE. It says that $(d\,\exp_p)_v(v)=v$. In order to do that, it constructs a curve $\alpha(t)$ with $\alpha(0)=v$, $\alpha'(0)=v$. And it gives $\alpha(t)=(t+1)v$. I agree with all of this. Then it argues that you can view $\alpha (t)=vt$ since it's just a shift of parametrization. Okay. I am okay with that. But in this case, should $(d\,\exp_p)_v(v)$ be ${d\over dt}(\exp_p\circ \alpha)(t)\big|_{t=1}$, instead of evaluating the derivative at $t=0$ as argued in the wiki? Also, I found myself really uncomfortable with all the abuse of notation in Differential Geometry. Like here, the identification of $T_p M$ and $T_v(T_p M)$ freaks me out. Does this mean that when a local coordinate system is picked, $T_p M$ under the natural basis will be the same as $R^n$, and likewise for $T_v (T_p M)$? AI: The equation you are citing, in combination with the reasoning you are citing, is used in do Carmo only for $w_T$ parallel to $v$. Otherwise the equation would still be correct (that is the content of the Gauss lemma) but requires a different justification (as presented later on in do Carmo). If $w_T$ is parallel to $v$ then $\exp_p(tw)$ is just a parametrization of the geodesic through $p$ in direction $v$ with constant speed $|w|$ (this is where the definition of $\exp$ is used) and the derivative in your formula can be computed as the derivative of this geodesic with respect to $t$, so it is simply the tangent vector to that geodesic of length $|w|$ at the point under examination. 'Abuse' of notation in the way you describe it is quite common in Differential Geometry whenever possible, because otherwise equations are often quite clumsy. A (slightly) more formal notation can be found, e.g., in Klingenberg's Riemannian Geometry.
H: Product rule for partial derivatives I am going through the solution for a problem (1.7 from Goldstein's Classical Mechanics) where it says: [the two-line derivation is shown as an image, omitted here] I don't understand why the right-hand side of the second line only contains 4 terms when there should be 5. The very last term on line 1 has been expanded into 1 term on line 2 using the product rule, but according to the product rule there should be 2 terms. AI: $q$, $\dot{q}$ and $\ddot{q}$ are being treated here as separate variables, so $\dfrac{\partial \ddot{q}}{\partial \dot{q}} = 0$.
H: Derivative iteration and factorials. I'm in year 11 at high school and have just learned about calculus and the derivative of a function. I found that if I iterate the function $f(x) = x^n$ through the derivative process $k$ times, where $k \leq n$ and $f(x)$ is $k=1$, $f'(x)$ is $k=2$, I get: $[n(n-1)(n-2)...(n-(k-2))]x^{n-k}$. i.e.: $$ \begin{align} &k = 1: f(x) = x^n\\ &k = 2: f(x) = nx^{n-1}\\ &k = 3: f(x) = n(n-1)x^{n-2} \end{align} $$ The interesting stuff starts to happen at $k=3$: if I expand out the $n$ terms in front of $x$ I get $n^2 - n$. Now if I take the derivative of this I get $2n-1$, and ignoring constants I get $2n$; if I sub the value for $k$ in (3), I get 6 for both of them, as $3^2 - 3 = 9-3 = 6$ and $2*3 = 6$. This is also interesting, as this is the factorial of 3. The same thing happens at $k=4$ and onwards: all the derivatives are equal for the value of the $(k-1)$th derivative of $x^n$, and the product of these derivatives is the factorial value for $k$. My question is: What have I discovered, and is it interesting or new? AI: Yes, there is actually a nice general formula (which you should be able to prove by induction): $$\frac{\mathrm d^k}{\mathrm dx^k}x^n=\frac{n!}{(n-k)!}x^{n-k}=k!\binom{n}{k}x^{n-k}$$
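The general formula is easy to check with sympy for any particular $n$ and $k$ (an added verification):

import sympy as sp

x = sp.symbols('x')
n, k = 7, 3
lhs = sp.diff(x**n, x, k)                                # k-th derivative of x^n
rhs = sp.factorial(n) / sp.factorial(n - k) * x**(n - k)
print(sp.simplify(lhs - rhs))  # 0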
H: How to write this in sigma notation? Newton's formula for interpolation is $$P(x)=c_1+c_2(x-x_1)+c_3(x-x_1)(x-x_2)+c_4(x-x_1)(x-x_2)(x-x_3)+\cdots$$ I prefer sigma notation, when it is possible. Can this be written in sigma notation? AI: You can write $\displaystyle P(x)= \sum\limits_{i=1}^{+ \infty} c_i \prod\limits_{j=1}^{i-1} (x-x_j)$, with the convention $\prod\limits_{j=1}^0 (x-x_j)=1$.
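The nested (Horner-like) reading of this sum also gives a compact way to evaluate it; here is a sketch (the function name newton_eval is mine):

def newton_eval(c, xs, x):
    """Evaluate c[0] + c[1](x - xs[0]) + c[2](x - xs[0])(x - xs[1]) + ...
    c has exactly one more entry than xs."""
    result = c[-1]
    for ci, xi in zip(reversed(c[:-1]), reversed(xs)):
        result = result * (x - xi) + ci
    return result

# Example: P(x) = 1 + 2(x - 1) + 3(x - 1)(x - 2), so P(3) = 1 + 4 + 6 = 11
print(newton_eval([1, 2, 3], [1, 2], 3))  # 11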
H: Show $1 + 2 \sum_{n=1}^N \cos n x = \frac{ \sin (N + 1/2) x }{\sin \frac{x}{2}}$ for $x \neq 0$ For $x \neq 0$, $$ 1 + 2 \sum_{n=1}^N \cos n x = \frac{ \sin (N + 1/2) x }{\sin \frac{x}{2}} $$ AI: Here is a well known trigonometric trick $$ 1+2\sum\limits_{n=1}^N\cos (nx)= 1+\frac{1}{\sin(x/2)}\sum\limits_{n=1}^N 2\cos (nx)\sin (x/2)=\\ 1+\frac{1}{\sin (x/2)}\sum\limits_{n=1}^N(\sin (nx+x/2)-\sin (nx-x/2))=\\ 1+\frac{1}{\sin (x/2)}(\sin (Nx+x/2)-\sin (x/2))=\\ 1+\frac{\sin (Nx+x/2)}{\sin (x/2)}-1= \frac{\sin (N+1/2)x}{\sin (x/2)} $$ And this is a complex analysis approach $$ 1+2\sum\limits_{n=1}^N\cos(nx)= e^{i0x}+\sum\limits_{n=1}^N(e^{inx}+e^{-inx})= $$ $$ \sum\limits_{n=-N}^N e^{inx}= \frac{e^{-iNx}(e^{i(2N+1)x}-1)}{e^{ix}-1}= \frac{e^{i(N+1)x}-e^{-iNx}}{e^{ix}-1}= $$ $$ \frac{e^{i(N+1/2)x}-e^{-i(N+1/2)x}}{e^{ix/2}-e^{-ix/2}}= \frac{2i\sin(N+1/2)x}{2i\sin(x/2)}= \frac{\sin(N+1/2)x}{\sin(x/2)} $$
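A numeric spot check of the identity (added):

import math

N, x = 10, 0.7
lhs = 1 + 2*sum(math.cos(n*x) for n in range(1, N + 1))
rhs = math.sin((N + 0.5)*x) / math.sin(x/2)
print(lhs, rhs)  # the two values agree (≈ 2.554)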
H: Martingale Problem and PDE's Let $X$ be a RCLL Markov Process with generator $A$. Then I know that $$ M^f = f(X)-f(X_0)-\int Af(X_s)ds $$ is a martingale for every $f\in \mathcal{D}_A$. If we suppose that $Af=0$, we see that $f(X)-f(X_0)$ is a martingale. Further I know from the Markov property that $$E[f(X_{t+h})|\mathcal{F}_t]=P_hf(X_t)$$ where $(P_t)$ is the transition semigroup. Let $X_0=x$ be the starting point. If we assume that $f(X)-f(x)$ is a martingale, then $$P_tf(x)-f(x)=E[f(X_t)-f(x)]=0$$ Why is this equation true? I wanted to use the property above with conditional distribution, without success. AI: If $Af=0$, then, for every $t\geqslant0$, $\mathrm e^{tA}f=f+\displaystyle\sum_{n\geqslant0}\frac{t^{n+1}}{(n+1)!}A^n(Af)=f$. By definition of $A$, $\mathrm e^{tA}=P_t$. Hence $P_tf=f$.
H: How to prove that $R/I \otimes_R M \cong M / IM$ Possible Duplicate: Showing that if $R$ is local and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$. In one of the answers to one of my previous questions the following claim was mentioned: $R/I \otimes_R M \cong M / IM$ So I tried to prove it. Can you help me finish my proof? Thanks! We recall that $M \otimes_R -$ is a covariant right-exact functor that it is exact if $M$ is flat and we observe that the following is an exact sequence: $$ 0 \to IM \xrightarrow{i} M \xrightarrow{\pi} M / IM \to 0$$ $R/I$ is an $R$ module since $R/I$ is a subring of $R$ closed with respect to multiplication from $R$ but it is not necessarily flat hence we only get exactness on one side: $$ (R/I) \otimes_R IM \xrightarrow{id \otimes i} (R/I) \otimes_R M \xrightarrow{id \otimes \pi} (R/I) \otimes_R M / IM \to 0$$ Then $$ \mathrm{Im(id \otimes \pi)} = (R/I) \otimes_R M / IM \cong (R/I) \otimes_R M / \mathrm{Ker} (id \otimes i) $$ so we want to show two things: (i) that $\mathrm{Ker} (id \otimes i) = \{0\}$. To this end let $r + I \otimes im \in (R/I) \otimes_R IM $ and assume $ id \otimes i(r + I \otimes im ) = r + I \otimes im = 0 + I \otimes 0$. I'm not sure how to proceed from here. (ii) and that $M /IM \cong (R/I) \otimes_R M / IM $ Can you help me? Thanks. AI: Consider $$ 0 \to I \to R \to R/I \to 0 $$ Apply the right exact functor $-\otimes_R M$, and you get $$ I\otimes_R M \to R\otimes_R M \to (R/I)\otimes_R M \to 0 $$ But $R\otimes_R M$ is canonically identified with $M$ by $a\otimes m \mapsto am$. Then $I\otimes_R M \to R\otimes_R M = M$ is $a\otimes m \mapsto am$ so, by definition, its image is $IM$. So the exactness of the sequence tells you that $$ M/IM \cong (R/I)\otimes_R M $$
H: Moving points along a curve on sphere. I have two points on a unit sphere. I also have their coordinates.

theta=linspace(0,2*pi,20);
phi=linspace(0,pi,20);
[theta,phi]=meshgrid(theta,phi);
rho=1;
x=rho*sin(phi).*cos(theta);
y=rho*sin(phi).*sin(theta);
z=rho*cos(phi);
mesh(x,y,z)
xyz=randn(3,2);
xyz=bsxfun(@rdivide,xyz,sqrt(sum(xyz.^2,1)));
a=xyz(:,1)';
b=xyz(:,2)';
plot3([a(1) b(1)],[a(2) b(2)],[a(3) b(3)],'r'); % Connect ab

Now I want to move both the points a and b towards each other on this unit sphere by a fixed parameter (i.e. 5% of the distance between them) every time. I don't know how to do this. Please help using the code I have written, as that will help me understand better. AI: You can calculate the vector to rotate each point about with the cross product, and you can calculate the distance between the two points with the norm, which can be used as a measure for the angle; you can do the rotation using Rodrigues' rotation formula, see http://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula

figure(1);clf
theta=linspace(0,2*pi,20);
phi=linspace(0,pi,20);
[theta,phi]=meshgrid(theta,phi);
rho=1;
x=rho*sin(phi).*cos(theta);
y=rho*sin(phi).*sin(theta);
z=rho*cos(phi);
mesh(x,y,z);hold on
% xyz=randn(3,2);
% xyz=bsxfun(@rdivide,xyz,sqrt(sum(xyz.^2,1)));
% a=xyz(:,1)';
% b=xyz(:,2)';
plot3([a(1) b(1)],[a(2) b(2)],[a(3) b(3)],'r','linewidth',4);hold on % Connect ab
angle = norm(a-b)/20;                 % 5 percent of the chord length, used as the rotation angle
v = cross(a,b)/norm(cross(a,b));      % unit rotation axis, perpendicular to both a and b
plot3([0 2*v(1)],[0 2*v(2)],[0 2*v(3)],'k');shg % draw the rotation axis
set(gca,'dataaspectratio',[1 1 1])
a_new = a*cos( angle) + cross(v,a)*sin( angle) + v*dot(v,a)*(1-cos( angle)); % Rodrigues; last term is zero here since v is orthogonal to a
b_new = b*cos(-angle) + cross(v,b)*sin(-angle) + v*dot(v,b)*(1-cos(-angle)); % Rodrigues
plot3([a_new(1) b_new(1)],[a_new(2) b_new(2)],[a_new(3) b_new(3)],'b','linewidth',4);shg % Connect the moved points

This is checked; sorry, I made a mistake in the previous edit.
H: Evaluating $\int ^\frac{\pi}{2}_{0} \sin\left(2x+\frac{\pi}{4}\right)\ dx$ Find the exact value of the following definite integral: $$\int ^\frac{\pi}{2}_{0} \sin\left(2x+\frac{\pi}{4}\right)\:dx=\left[-\frac{1}{2}(2x+\frac{\pi}{4})\right]^\frac{\pi}{2}_{0}$$ $$=-\frac{1}{2}\left(2\frac{\pi}{2}+\frac{\pi}{4}\right)+\frac{1}{2}\left(2\cdot 0+\frac{\pi}{4}\right)$$ $$=-\frac{1}{2}\left(\pi+\frac{\pi}{4}\right)+\frac{1}{2}\left(\frac{\pi}{4}\right)=-\frac{\pi}{2}$$ but the right answer is: $$\int_{0}^{\frac{\pi}{2}}{\sin{\left(2x+\frac{\pi}{4}\right)\:dx}}=\frac{\sqrt{2}}{2}$$ Help me out! thanks! AI: You forgot the $\cos$ when you took the antiderivative. The solution: $$\int^\frac{\pi}{2}_{0}\sin\left(2x+\frac{\pi}{4}\right)\:dx=\left[-\frac{1}{2}\cos\left(2x+\frac{\pi}{4}\right)\right]^\frac{\pi}{2}_{0}$$ $$=-\frac{1}{2}\cos\left(2\frac{\pi}{2}+\frac{\pi}{4}\right)+\frac{1}{2}\cos\left(2\cdot 0+\frac{\pi}{4}\right)$$ $$=-\frac{1}{2}\cos\left(\pi+\frac{\pi}{4}\right)+\frac{1}{2}\cos\left(\frac{\pi}{4}\right)=\frac{\sqrt{2}}{2} \quad \blacksquare$$
H: Proving convexity of this set in $\ell^2$ This is a follow-up to the question I posted earlier this week. Consider, for a fixed sequence $(a_n)_n\in\ell^2$ the subspace $$C=\{(x_n)_{n}\in\ell^2 : |x_n|\le a_n\text{ for all }n\in\mathbb{N}\}\subset\ell^2.$$ Is this set convex in $\ell^2$? According to the book [An introduction to Nonlinear Analysis, by Martin Schechter, page 175] it is true, but there is no proof. Can someone help me out? AI: Let $x=(x_n), y=(y_n) \in C$. We need to show that for all $t \in [0,1]$ we have $(1-t)x+ty \in C$. So let $t \in [0,1]$; then $$(1-t)x+ty = ((1-t)x_n + ty_n)$$ So you need to prove that $\left| (1-t)x_n + ty_n \right| \le a_n$ for all $n \in \mathbb{N}$. This follows immediately from the following properties of $\left| - \right|$: $\left| a+b \right| \le \left| a \right| + \left| b \right|$ $\left| a b \right| = \left| a \right| \left| b \right|$ Also noting that if $t \in [0,1]$ then $1-t \ge 0$ and $t \ge 0$.
H: Question about proof of $A[X] \otimes_A A[Y] \cong A[X, Y] $ As far as I understand universal properties, one can prove $A[X] \otimes_A A[Y] \cong A[X, Y] $ where $A$ is a commutative unital ring in two ways: (i) by showing that $A[X,Y]$ satisfies the universal property of $A[X] \otimes_A A[Y] $ (ii) by using the universal property of $A[X] \otimes_A A[Y] $ to obtain an isomorphism $\ell: A[X] \otimes_A A[Y] \to A[X,Y]$ Now surely these two must be interchangeable, meaning I can use either of the two to prove it. So I tried to do (i) as follows: Define $b: A[X] \times A[Y] \to A[X,Y]$ as $(p(X), q(Y)) \mapsto p(X)q(Y)$. Then $b$ is bilinear. Now let $N$ be any $A$-module and $b^\prime: A[X] \times A[Y] \to N$ any bilinear map. I can't seem to define $\ell: A[X,Y] \to N$ suitably. The "usual" way to define it would've been $\ell: p(x,y) \mapsto b^\prime(1,p(x,y)) $ but that's not allowed in this case. Question: is it really not possible to prove the claim using (i) in this case? AI: I can see your problem. As Marlu suggested and in some of the comments above, the trick is to treat the elements $x^iy^j$ as a "basis" for the polynomial ring $A[x,y]$. In fact, this is the trick: suppose you take some $\sum_{i,j} a_{ij}X^iY^j \in A[X,Y]$, and suppose hypothetically that you already have a linear map $L : A[X,Y] \rightarrow N$. Then the action of $L$ on this polynomial is $\sum_{i,j}a_{ij}L(X^iY^j)$, so the image of any polynomial in $A[X,Y]$ is in fact completely determined by the action of $L$ on the $X^iY^j$. Let us keep this idea in mind and consider the diagram below. [commutative diagram omitted] Because we want the diagram to commute, concentrating on $X^iY^j$ we should have $$\begin{eqnarray*} b'(X^i,Y^j) &=& \ell \circ b(X^i,Y^j) \\ &=&\ell(X^iY^j) \end{eqnarray*} $$ Now from what I said in the first paragraph, you can extend $\ell$ additively. Let us check that $\ell$ is compatible with scalar multiplication. Take any $a \in A$. Then $$\begin{eqnarray*} \ell(aX^iY^j) &=& b'(aX^i,Y^j)\\ &=& ab'(X^i,Y^j) \\ &=& a\ell(X^iY^j) \end{eqnarray*}$$ I could take the $a$ out of $b'(\cdot, \cdot)$ because we are now considering $A[X]$ and $A[Y]$ as $A$-modules and so $b'$ is $A$-bilinear. We have now completed the check that $\ell$ is linear, and uniqueness should be obvious. It follows that you have shown that $A[X,Y]$ satisfies the universal property of the tensor product $A[X] \otimes_A A[Y]$, from which it follows that $$A[X,Y] \cong A[X] \otimes_A A[Y].$$ $$\hspace{6in} \square$$
H: What is the difference between "probability density function" and "probability distribution function"? What's the difference between a probability density function and a probability distribution function? AI: The relation between the probability density function $f$ and the cumulative distribution function $F$ is... if $f$ is discrete: $$ F(k) = \sum_{i \le k} f(i) $$ if $f$ is continuous: $$ F(x) = \int_{y \le x} f(y)\,dy $$
H: A circle on the plane Possible Duplicate: Parametric Equation of a Circle in 3D Space? I know that, for example, if a circle is in the plane with counter-clockwise orientation, center $(a,b)$ and radius $R$, it has the parametrization $$r(t)=(a + R \cos{t},\; b + R \sin{t}) \quad 0 \leq t \leq 2\pi$$ and with clockwise orientation $$r(t)=(a + R \sin{t},\; b + R \cos{t}).$$ I also know forms of the circle parametrization if it lies in a horizontal plane $z=c$ with center $O(a,b,c)$, or in a plane $x=c$; I know how to parametrize the circle in these cases. I am interested in what happens if the circle does not lie in any plane parallel to the coordinate planes. AI: Let $\mathbf{u},\mathbf{v}$ be any two orthonormal vectors in $\mathbb{R}^n$, let $\mathbf{a} \in \mathbb{R}^n$ and let $R > 0$ be a positive real number. Then the circle of radius $R$ with centre $\mathbf{a}$ lying in the plane through $\mathbf{a}$ which is parallel to $\mathbf{u}$ and $\mathbf{v}$ is given by $$\mathbf{r}(t) = \mathbf{a} + (R\cos t)\mathbf{u} + (R\sin t)\mathbf{v}$$ where $\mathbf{r}(t)$ denotes the locus of the points on the circle. So given a plane $\Pi \subseteq \mathbb{R}^3$, calculate $\mathbf{u}$ and $\mathbf{v}$ and substitute into the above.
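The answer's formula translates directly into code; a numpy sketch (the function name circle_points is mine):

import numpy as np

def circle_points(a, u, v, R, num=200):
    """Circle of radius R centred at a, in the plane through a spanned by
    the orthonormal vectors u and v (all shape-(3,) arrays)."""
    t = np.linspace(0, 2*np.pi, num)
    return a[None, :] + R*np.cos(t)[:, None]*u + R*np.sin(t)[:, None]*v

# Example: unit circle about (1, 1, 1) in the plane spanned by the x- and y-axes
pts = circle_points(np.array([1., 1., 1.]), np.array([1., 0., 0.]),
                    np.array([0., 1., 0.]), 1.0)
print(pts.shape)  # (200, 3)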
H: Compute $ I_{n}=\int_{-\infty}^\infty \frac{1-\cos x \cos 2x \cdots \cos nx}{x^2}\,dx$ I'm very curious about the ways I may compute the following integral. I'd be very glad to know your approaches to this integral: $$ I_{n} \equiv \int_{-\infty}^\infty {1-\cos\left(x\right)\cos\left(2x\right)\ldots\cos\left(nx\right) \over x^{2}} \,{\rm d}x $$ According to W|A, $I_1=\pi$, $I_2=2\pi$, $I_3=3\pi$, and one may be tempted to think that it's about an arithmetical progression here, but things change (unfortunately) from $I_4$, which is $\frac{9 \pi}{2}$. This problem came to my mind when I was working on a different problem. AI: First note that $$\int_{-\infty}^{\infty} \frac{1-\cos ax}{x^2} \; dx = \left[ -\frac{1-\cos ax}{x}\right]_{-\infty}^{\infty} + a \int_{-\infty}^{\infty} \frac{\sin ax}{x} \; dx = \pi \, |a|,$$ by the Dirichlet integral. Also, by mathematical induction we can easily prove that $$ \prod_{k=1}^{n} \cos \theta_k = \frac{1}{2^n} \sum_{\mathrm{e}\in S} \cos\left( e_1 \theta_1 + \cdots + e_n \theta_n \right),$$ where the summation runs over the set $S = \{ -1, 1\}^n$. Thus we have $$ \begin{align*} I_n = \int_{-\infty}^{\infty} \frac{1-\cos x \cdots \cos nx}{x^2} \; dx &= \frac{1}{2^n} \sum_{\mathrm{e}\in S} \int_{-\infty}^{\infty} \frac{1-\cos(e_1 x + \cdots + e_n nx)}{x^2} \; dx \\ &= \frac{\pi}{2^n} \sum_{\mathrm{e}\in S} \left|e_1 + \cdots + e_n n\right|. \end{align*}$$ For example, if $n = 3$, we have $\left|\pm 1 \pm 2 \pm 3\right| = 0, 0, 2, 2, 4, 4, 6, 6$ and hence $$I_3 = \frac{\pi}{8}(0 + 0 + 2 + 2 + 4 + 4 + 6 + 6) = 3\pi.$$ Denote the summation part by $$ A_n = \sum_{\mathrm{e}\in S} |e_1 + \cdots + e_n n|.$$ The first 10 terms of $(A_n)$ are given by $$ \left(A_n\right) = (2, 8, 24, 72, 196, 500, 1232, 2968, 7016, 16280, \cdots ), $$ and thus the corresponding $(I_n)$ are given by $$ \left(I_n\right) = \left( \pi ,2 \pi ,3 \pi ,\frac{9 \pi }{2},\frac{49 \pi }{8},\frac{125 \pi }{16},\frac{77 \pi }{8},\frac{371 \pi }{32},\frac{877 \pi }{64},\frac{2035 \pi }{128} \right).$$ So far, I was unable to find a simple formula for $(A_n)$, and I guess that it is not easy to find one. p.s. The probability distribution of $S_n = e_1 + \cdots + e_n n$ is bell-shaped, and fits quite well with the corresponding normal distribution $X_n \sim N(0, \mathbb{V}(S_n))$. Thus it is not bad to conjecture that $$ \frac{A_n}{2^n} = \mathbb{E}|S_n| \approx \mathbb{E}|X_n| = \sqrt{\frac{n(n+1)(2n+1)}{3\pi}},$$ and hence $$ I_n \approx \sqrt{\frac{\pi \, n(n+1)(2n+1)}{3}}.$$ Indeed, numerical experiment agrees well with this approximation. I was able to prove a much weaker statement: $$ \lim_{n\to\infty} \frac{I_n}{n^{3/2}} = \sqrt{\frac{2\pi}{3}}. $$ First, we observe that for $|x| \leq 1$ we have $$ \log \cos x = -\frac{x^2}{2} + O\left(x^4\right).$$ Thus in particular, $$ \sum_{k=1}^{n} \log\cos\left(\frac{kx}{n}\right) = \sum_{k=1}^{n}\left[-\frac{k^2 x^2}{2n^2} + O\left(\frac{k^4x^4}{n^4}\right)\right] = -\frac{nx^2}{6} + O\left(x^2 \vee nx^4\right).$$ Now write $$ \begin{align*}\frac{1}{n^{3/2}} \int_{-\infty}^{\infty} \frac{1 - \prod_{k=1}^{n}\cos (kx)}{x^2} \; dx &= \frac{1}{\sqrt{n}} \int_{-\infty}^{\infty} \frac{1 - \prod_{k=1}^{n}\cos \left(\frac{kx}{n}\right)}{x^2} \; dx \qquad (nx \mapsto x) \\ &= \frac{1}{\sqrt{n}} \int_{|x|\leq 1} + \frac{1}{\sqrt{n}} \int_{|x| > 1} =: J_n + K_n.
\end{align*}$$ For $K_n$, we have $$ \left|K_n\right| \leq \frac{1}{\sqrt{n}} \int_{1}^{\infty} \frac{2}{x^2}\;dx = O\left(\frac{1}{\sqrt{n}}\right).$$ For $J_n$, the substitution $\sqrt{n} x \mapsto y$ gives $$ \begin{align*} J_n &= \frac{1}{\sqrt{n}} \int_{|x|\leq 1} \left( 1 - \exp\left( -\frac{nx^2}{6} + O\left(x^2 \vee nx^4\right) \right) \right) \; \frac{dx}{x^2} \\ &= \int_{|y|\leq\sqrt{n}} \left( 1 - \exp\left( -\frac{y^2}{6} + O\left(\frac{y^2}{n}\right) \right) \right) \; \frac{dy}{y^2} \\ &\xrightarrow[]{n\to\infty} \int_{-\infty}^{\infty} \frac{1 - e^{-y^2/6}}{y^2} \; dy \\ &= \left[-\frac{1-e^{-y^2/6}}{y}\right]_{-\infty}^{\infty} + \frac{1}{3} \int_{-\infty}^{\infty} e^{-y^2/6} \; dy = \sqrt{\frac{2\pi}{3}}. \end{align*}$$ This completes the proof.
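As a footnote to the sequence $(A_n)$ above: it can be enumerated directly by brute force (a sketch of mine; the enumeration is exponential in $n$):

from itertools import product

def A(n):
    return sum(abs(sum(e*k for k, e in enumerate(es, start=1)))
               for es in product((-1, 1), repeat=n))

print([A(n) for n in range(1, 8)])  # [2, 8, 24, 72, 196, 500, 1232]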
H: Surface Area of the simple figure There is a question which states: The solid brick figure shown [figure omitted] is made of small bricks of side 1. When the large brick is disassembled into its component small bricks, the total surface area of all small bricks is how much greater than the surface area of the larger brick? Here is how I am attempting it: Surface Area of Large Brick = $6x^2=6(2)^2=24$ Surface Area of 1 small Brick = $6x^2=6(1)^2=6$ Total Small Bricks = 12, so Total Surface Area = $72$ Difference = $72-24=48$ But the difference is supposed to be $40$. How? (EDIT: Image Edited) AI: For the brick you've drawn, the large brick has two $3\times3$ faces and four $2\times 3$ faces, so its surface area is $2(9)+4(6)=42$. There are $3\times3\times2 = 18$ smaller bricks, each of which has surface area $6$, so the total surface area of the $18$ small bricks is $18(6)=108$. The difference of the areas, then, is $108-42=66$. On the other hand, if the brick has dimension $2\times2\times3$, then the large brick has two $2\times2$ faces and four $2\times 3$ faces, so its surface area is $2(4)+4(6)=32$. There are $2\times2\times3 = 12$ smaller bricks, each of which has surface area $6$, so the total surface area of the $12$ small bricks is $12(6)=72$. The difference of the areas, then, is $72-32=40$. This is likely what the original problem was, as mentioned in the comments.
H: A connected k-regular bipartite graph is 2-connected. I've been struggling with this exercise; all ideas have been unfruitful, leading to dead ends. It is from Balakrishnan's A Textbook of Graph Theory, in the connectivity chapter: Prove that a connected k-regular bipartite graph is 2-connected. (That is, deletion of one vertex alone is not enough to disconnect the graph). I think the objective is to make use of Whitney's theorem according to which a graph (with at least 3 vertices) is 2-connected iff any two of its vertices are connected by at least two internally disjoint paths. But I'll welcome any ideas or solutions. Thank you! AI: Let $G = (V_{1}\cup V_{2},E)$ be a connected, $k$-regular bipartite graph where $V_{1}$ and $V_{2}$ are the partite vertex sets. As the case $k=1$ is trivial, we may assume that $k \geq 2$ and therefore $|V_{1}\cup V_{2}|\geq 4$. Assume for contradiction that $G$ is not $2$-connected. As $G$ is connected but not $2$-connected, there exists a vertex $v$ whose removal disconnects the graph. Without loss of generality we may assume that $v \in V_{1}$. Then $G-v = \uplus_{i\in [1,a]} G_{i}$ where each $G_{i}$ is a connected component and $a \geq 2$. As $a \geq 2$, there exists some component $G_{b}$ such that $|V_{1}\cap V(G_{b})| \geq |V_{2}\cap V(G_{b})|$. (Indeed, $k$-regularity forces $|V_{1}| = |V_{2}|$, so $\sum_{i} |V_{1}\cap V(G_{i})| = |V_{1}| - 1 = \sum_{i} |V_{2}\cap V(G_{i})| - 1$; if every component had strictly fewer $V_{1}$-vertices than $V_{2}$-vertices, the left-hand sum would fall short of the right-hand sum by at least $a \geq 2$.) For convenience denote $L = V_{1}\cap V(G_{b})$ and $R = V_{2}\cap V(G_{b})$. As $G_{b}$ is a connected component and $G$ was connected, and $v \in V_{1}$, at least one vertex in $R$ was adjacent to $v$, and therefore has degree less than $k$. However, the vertices in $L$ have lost no edges, since every edge incident to $v$ joins $v$ to $V_{2}$. Then we have $$ \sum_{u\in R}\deg(u) < k\cdot|R| \leq k\cdot|L| = \sum_{w \in L}\deg(w) $$ However as $G[L \cup R]$ forms a bipartite graph, we know $$ \sum_{u\in R}\deg(u) = \sum_{w \in L}\deg(w) $$ Thus we have a contradiction, so $G$ must be at least $2$-connected.
H: An infinite series of a product of three logarithms I was told this interesting question today, but I haven't managed to get very far: Evaluate $$\sum_{n=1}^\infty \log \left(1+\frac{1}{n}\right)\log \left(1+\frac{1}{2n}\right)\log \left(1+\frac{1}{2n+1}\right).$$ I am interested in seeing at least a few solutions. AI: Here is a solution I just found. Notice that $$\log\left(1+\frac{1}{2n+1}\right)=\log\left(1+\frac{1}{n}\right)-\log\left(1+\frac{1}{2n}\right)$$ so that our series becomes $$\sum_{n=1}^{\infty}\left(\log\left(1+\frac{1}{n}\right)^{2}\log\left(1+\frac{1}{2n}\right)-\log\left(1+\frac{1}{n}\right)\log\left(1+\frac{1}{2n}\right)^{2}\right).$$ Since $$\log\left(1+\frac{1}{2n+1}\right)^{3}=\log\left(1+\frac{1}{n}\right)^{3}-3\log\left(1+\frac{1}{n}\right)^{2}\log\left(1+\frac{1}{2n}\right)+3\log\left(1+\frac{1}{n}\right)\log\left(1+\frac{1}{2n}\right)^{2}-\log\left(1+\frac{1}{2n}\right)^{3},$$ we see that our series equals $$\frac{1}{3}\left(\sum_{n=1}^{\infty}\log\left(1+\frac{1}{n}\right)^{3}-\log\left(1+\frac{1}{2n}\right)^{3}-\log\left(1+\frac{1}{2n+1}\right)^{3}\right),$$ and the above telescopes and equals $$\frac{\left(\log2\right)^{3}}{3}.$$
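A numeric check of the closed form (added; the terms decay like $1/(4n^3)$, so the truncated series converges quickly):

import math

s = sum(math.log(1 + 1/n) * math.log(1 + 1/(2*n)) * math.log(1 + 1/(2*n + 1))
        for n in range(1, 10**6))
print(s, math.log(2)**3 / 3)  # both ≈ 0.111008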
H: Exponential function: Change base to exp Why is the following true? $$\left[\frac{N - it}{N}\right]^{j+1} = \exp\left(-\frac{ijt}{N}\right)$$ $i,j$ are integers less than $N$. Is there any theorem which allows me to get this result? I tried with $$N=2^{32},\quad i=j=2^8,\quad t=2^{16}$$ and it's almost true. AI: As has been made clear in the comments, the result is false as stated. However, it is approximately true in the limit as $it/N\to0$ (and $j$ is not too large). Here is one way to get a rigorous estimate of this nature. First, note that $\ln(1-x)=-x+O(x^2)$ as $x\to0$ (you can easily give rigorous upper bounds on the $O(x^2)$ term if needed). With $x=it/N$, we take exponentials, then raise both sides to the $j$th power to get $$\Bigl(1-\frac{it}{N}\Bigr)^j=\exp\biggl(-\frac{ijt}{N}+jO\Bigl(\Bigl(\frac{it}{N}\Bigr)^2\Bigr)\biggr)$$ where the big O refers to the limit $it/N\to0$. The missing factor $1-it/N$ on the left hand side (because of $j$ replacing $j+1$) is clearly close to $1$ in this case.
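Plugging in the values from the question shows how close "almost" is (an added check):

import math

N, i, j, t = 2**32, 2**8, 2**8, 2**16
print(((N - i*t) / N)**(j + 1))   # ≈ 0.36573
print(math.exp(-i*j*t / N))       # ≈ 0.36788, i.e. e**-1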
H: Do such sequences exist? I wish to know if there are real sequences $(a_k)$, $(b_k)$ (and if there are, how to construct such sequences) such that: $b_k<0$ for each $k \in \mathbb{N}$ with $\lim\limits_{k \rightarrow \infty} b_k=-\infty,$ $$\sum_{k=1}^\infty |a_k| |b_k|^n< \infty, \space\forall n\in\mathbb{N}\cup\{0\}$$ $$\sum_{k=1}^\infty a_k b_k^n=1, \space \forall n\in\mathbb{N}\cup\{0\}$$ AI: For any unbounded sequence $(b_k)$ and any sequence $(\lambda_n)$, you can find a sequence $(a_k)$ such that for any $n$, $\sum_{k=1}^\infty a_k b_k^n$ converges absolutely to $\lambda_n$: Suppose you have constructed $a_k$ up to some index $k_0$, such that for all $m < n$, $\sum_{k=1}^{k_0} a_k b_k^m = \lambda_m$. The goal now is to extend this up to some index $k_1$ such that for all $m \le n$, $\sum_{k=1}^{k_1} a_k b_k^m = \lambda_m$, with "arbitrarily small coefficients". This means we have to add $y = \lambda_n - \sum_{k=1}^{k_0} a_k b_k^n$ to the partial sum of the $n$th series, and $0$ to the first $n-1$ series. In order to do that, look at the next $n-1$ distinct values of $b_k$ (for $k > k_0$), say they are $c_1,c_2, \ldots c_{n-1}$, look at the Vandermonde matrix $$ M(X) = \begin{pmatrix} c_1^1 & \ldots & c_{n-1}^1 & X^1 \\ \vdots & & \vdots & \vdots \\ c_1^n & \ldots & c_{n-1}^n & X^n \end{pmatrix}$$ and solve for the vector $A(X)$ in the equation $M(X) A(X) = (0,0,\ldots,0,y)$. We get that $A(X)$ has to be $y$ times the $n$th column of $M(X)^{-1}$. Thus $A(X)$ is $y/\det(M(X))$ times the transpose of the $n$th row of the comatrix of $M(X)$. If you look carefully at this row, each entry there is a polynomial in $X$ of degree $n-1$, except the last one which is a constant. Furthermore, the determinant of $M(X)$ is of degree $n$, so the entries of the vector $A(X)$ are rational fractions of degrees $-1$ and $-n$. In particular, for $X$ large enough, $\sum_{1 \le i,j \le n-1} |A_i(X)c_i^j| < 2^{-n}.$ Thus, if you pick $k_1$ such that $b_{k_1}$ is large enough, you will get from $A(b_{k_1})$ the coefficients to put in front of $c_1^n, c_2^n, \ldots, c_{n-1}^n, b_{k_1}^n$ such that all the partial sums are what we want, and the absolute contribution of what we added to each partial sum is less than $2^{-n-1}$. Now, repeat this procedure for all $n$, and you get a suitable sequence $(a_k)$.
H: How to prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m? m, t, k are Natural numbers. How can I prove that if $m = 10t+k $ and $67|t - 20k$ then 67|m ? AI: From the first equation, $k=m-10t$. Substituting into the divisibility condition, $67$ divides $t-20(m-10t)$, that is, $67$ divides $201t-20m$. Since $201=3\cdot 67$, the term $201t$ is divisible by $67$, so $67$ must divide $20m$. Because $67$ is prime and does not divide $20$, we have $\gcd(67,20)=1$, and therefore $67\mid m$.
H: Decay for the tail of a series. Let $p>1$. I would like to have an estimate for the decay of the sequence $s_{n}=\sum_{k=n}^{\infty}k^{-p}$. Does anyone know of a bound of this type in the literature? Thanks! AI: Look at the proof of the integral test of convergence; since $x \mapsto x^{-p}$ is decreasing, comparing the sum with upper and lower Riemann sums of integrals gives the bounds: $$ \int_{n}^\infty x^{-p} \ dx \leq \sum_{k=n}^\infty \frac{1}{k^p} \leq \int_{n-1}^\infty x^{-p} \ dx \qquad (n \geq 2). $$ Evaluating the integrals, we then have $$ \frac{1}{p-1} \frac{1}{n^{p-1}} \leq \sum_{k=n}^\infty \frac{1}{k^p} \leq \frac{1}{p-1} \frac{1}{(n-1)^{p-1}}, $$ so $s_n \sim \frac{1}{(p-1)\,n^{p-1}}$ as $n \to \infty$.
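A quick numerical check of these bounds (a Python sketch; the tail is truncated at a large cutoff, which changes it only negligibly here):

    p, n = 2.0, 10
    tail = sum(k**-p for k in range(n, 10**6))   # truncated tail sum
    lower = 1/((p-1) * n**(p-1))
    upper = 1/((p-1) * (n-1)**(p-1))
    print(lower, tail, upper)    # 0.1 <= ~0.10517 <= 0.1111...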
H: Partial Fractions Expansion of $\tanh(z)/z$ I have seen the following formula in papers (without citations) and in Mathematica's documentation about Tanh[]: $$ \frac{\tanh(z)}{8z}=\sum_{k=1}^{\infty} \frac{1}{(2k-1)^2 \pi^2+4z^2} $$ I have no idea how to prove it and I have also encountered in my research similar sums involving, for instance, $\mathrm{coth}$. It would be nice to have a general method for working with these problems; any suggestions? AI: There is the infinite product representation $$\cosh\,z=\prod_{k=1}^\infty \left(1+\frac{4z^2}{\pi^2(2k-1)^2}\right)$$ Taking logarithms gives $$\log\cosh\,z=\sum_{k=1}^\infty \log\left(1+\frac{4z^2}{\pi^2(2k-1)^2}\right)$$ If we differentiate both sides, we have $$\tanh\,z=\sum_{k=1}^\infty \frac{\frac{8z}{\pi^2(2k-1)^2}}{1+\frac{4z^2}{\pi^2(2k-1)^2}}$$ which simplifies to $$\tanh\,z=\sum_{k=1}^\infty \frac{8z}{4z^2+\pi^2(2k-1)^2}$$ Note that the infinite product that we started with is the factorization of $\cosh$ over its (imaginary) zeroes. Here is a related question.
H: Explain this code to compute $\log(1+x)$ It's well known that you need to take care when writing a function to compute $\log(1+x)$ when $x$ is small. Because of floating point roundoff, $1+x$ may have less precision than $x$, which can translate to large relative error in computing $\log(1+x)$. In numerical libraries you often see a function log1p(), which computes the log of one plus its argument, for this reason. However, I was recently puzzled by this implementation of log1p (commented source): def log1p(x): y = 1 + x z = y - 1 if z == 0: return x else: return x * (log(y) / z) Why is x * (log(y) / z) a better approximation to $\log(1+x)$ than simply log(y)? AI: The answer involves a subtlety of floating point numbers. I'll use decimal floating point in this answer, but it applies equally well to binary floating point. Consider decimal floating point arithmetic with four places of precision, and let $$x = 0.0001234$$ Then under floating point addition $\oplus$, we have $$y = 1 \oplus x = 1.0001$$ and $$z = y \ominus 1 = 0.0001000$$ If we now denote the lost precision in $x$ by $s = 0.0000234$, and the remaining part by $\bar{x}$, then we can write $$x = \bar{x} + s$$ $$y = 1 + \bar{x}$$ $$z = \bar{x}$$ Now, the exact value of $\log(1+x)$ is $$\log(1+x) = \log(1+\bar{x}+s) = \bar{x}+s + O(\bar{x}^2) = 0.0001234$$ If we compute $\log(y)$ then we get $$\log(1+\bar{x}) = \bar{x} + O(\bar{x}^2) = .0001000$$ On the other hand, if we compute $x \times (\log(y)/z)$ then we get $$(\bar{x}+s) \otimes (\log(1+ \bar{x}) \div \bar{x}) = (\bar{x}+s)(\bar{x}\div \bar{x}) = \bar{x}+s = 0.0001234$$ so we keep the digits of precision that would have been lost without this correction.
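The effect is easy to reproduce in ordinary double precision (a Python sketch; math.log1p serves as the reference value):

    from math import log, log1p

    x = 1e-15
    y = 1 + x
    z = y - 1                      # x with its low-order digits lost
    naive = log(y)                 # suffers the same loss as z
    corrected = x * (log(y) / z)   # the trick from the code above
    print(naive, corrected, log1p(x))

The naive value comes out near 1.1102e-15, while the corrected value matches log1p's 1e-15 almost exactly.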
H: To show if $O(G)=p^{n}$ then $Z(G)\neq \{e\}$. Possible Duplicate: Normal and central subgroups of finite $p$-groups I want to show that if $O(G)=p^{n}$ then $Z(G)\neq \{e\}$, where $p$ is a prime number and $Z(G)=\{a\in G | ax=xa, \forall x\in G\}$, which is also known as the center of the group $G$. I think I have to use the Lagrange theorem, which states that in a finite group $G$, if $H$ is a subgroup of $G$ then $O(H)|O(G)$. But I don't see the right way to use this to prove the given result. AI: Hint. $$ o(G) = o(Z(G)) + \sum_{C(a)\neq G} \frac {o(G)} {o(C(a))} $$ where $C(a)$ is the centralizer of $a$ (this is the class equation).
H: How to find estimated total time, based on time elapsed and bytes downloaded? I'm doing a C programming exercise for college, and couldn't figure out how to calculate the following value based on four given values (two times and two byte counts). The exercise is to write a program for a progress bar for downloading a file. Given values/variables: time: the current time startTime: the download start time bytes: the number of bytes downloaded totalBytes: the total file size in bytes The program needs to calculate the Elapsed_Time, Percent_Completed, Download_Speed, Total_Time and Remaining_Time. I can't figure out the formula for Total_Time. That is the estimated total time to download the file. I used this, but it didn't work: (time - startTime) / (totalBytes - bytes) Any help is appreciated. Cheers AI: You have spent time-startTime downloading bytes. So you would estimate the rate of download as $\frac {\text{bytes}}{\text{time-startTime}}$. Then the total estimated time is totalBytes divided by this. $\text {Total\_Time}=\frac{\text{totalBytes(time-startTime)}}{\text{bytes}}$
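Putting this formula together with the other requested quantities (a Python sketch for brevity, though the exercise is in C; the same arithmetic carries over directly, and it assumes bytes and the elapsed time are nonzero):

    def progress(time, startTime, bytes, totalBytes):
        elapsed = time - startTime
        percent = 100.0 * bytes / totalBytes
        speed = bytes / elapsed                   # bytes per unit time
        total = totalBytes * elapsed / bytes      # estimated total time
        remaining = total - elapsed
        return elapsed, percent, speed, total, remaining

    print(progress(time=30, startTime=0, bytes=250, totalBytes=1000))
    # (30, 25.0, 8.33..., 120.0, 90.0)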
H: Finding a point along a line a certain distance away from another point! Let's say you have two points, $(x_0, y_0)$ and $(x_1, y_1)$. The gradient of the line between them is: $$m = (y_1 - y_0)/(x_1 - x_0)$$ And therefore the equation of the line between them is: $$y = m (x - x_0) + y_0$$ Now, since I want another point along this line, but a distance $d$ away from $(x_0, y_0)$, I will get an equation of a circle with radius $d$ with a center $(x_0, y_0)$ then find the point of intersection between the circle equation and the line equation. Circle Equation w/ radius $d$: $$(x - x_0)^2 + (y - y_0)^2 = d^2$$ Now, if I replace $y$ in the circle equation with $m(x - x_0) + y_0$ I get: $$(x - x_0)^2 + m^2(x - x_0)^2 = d^2$$ I factor it out, simplify, and get: $$x = x_0 \pm d/ \sqrt{1 + m^2}$$ However, upon testing this equation out it seems that it does not work! Is there an obvious error that I have made in my theoretical side or have I just been fluffing up my calculations? AI: Another way, using vectors: Let $\mathbf v = (x_1,y_1)-(x_0,y_0)$. Normalize this to $\mathbf u = \frac{\mathbf v}{||\mathbf v||}$. The point along your line at a distance $d$ from $(x_0,y_0)$ is then $(x_0,y_0)+d\mathbf u$, if you want it in the direction of $(x_1,y_1)$, or $(x_0,y_0)-d\mathbf u$, if you want it in the opposite direction. One advantage of doing the calculation this way is that you won't run into a problem with division by zero in the case that $x_0 = x_1$.
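The vector version translates directly into code (a Python sketch; math.hypot computes the Euclidean norm, and the two points must be distinct):

    import math

    def point_at_distance(x0, y0, x1, y1, d):
        vx, vy = x1 - x0, y1 - y0
        norm = math.hypot(vx, vy)         # ||v||, assumed nonzero
        ux, uy = vx / norm, vy / norm     # unit vector toward (x1, y1)
        return x0 + d * ux, y0 + d * uy

    print(point_at_distance(0, 0, 3, 4, 10))   # (6.0, 8.0)

Note this also handles the vertical-line case $x_0 = x_1$, where the slope formula breaks down.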
H: Confusing double angle identity How would I prove the following double-angle identity? $$\cos^4x=\frac{3}{8}+\frac{1}{2}\cos(2x)+\frac{1}{8}\cos(4x)$$ So far my work is $$\frac{3}{8}+\frac{2\cos^2x-1}{2}+\frac{1}{8}(2\cos^2(2x)-1)$$ But how should I proceed? AI: Notice that \begin{eqnarray} \cos(2x)&=& \cos^2 x - \sin^2 x \\ &=& 2 \cos^2 x - 1.\\ \end{eqnarray} Then \begin{equation} \cos^2 x = \dfrac{1}{2}(1+\cos(2x)). \end{equation} Hence, \begin{eqnarray} \cos^4 x &=& (\cos^2 x)^2\\ &=& \left[\dfrac{1}{2}(1 + \cos(2x))\right]^2\\ &=& \dfrac{1}{4}(1 +2 \cos(2x)+ \cos^2(2x))\\ &=& \dfrac{1}{4} +\dfrac{1}{2} \cos(2x) + \dfrac{1}{4}\dfrac{1}{2}(1+\cos(4x))\\ &=& \dfrac{3}{8} + \dfrac{1}{2} \cos(2x) +\dfrac{1}{8} \cos(4x). \end{eqnarray}
H: How to show that $h(x^p) \equiv h(x)^p \pmod{p}$? Possible Duplicate: Why $g(x^{p})=(g(x))^{p}$ in the reduction mod $p$? Let $h(x) \in \mathbb{Z}[x]$ and $p$ be a prime. We know that for any integer $\alpha$ we have that $\alpha^p \equiv \alpha \pmod{p}$. How can we use this to show that $h(x^p) \equiv h(x)^p \pmod{p}$? It seems to me that we have to reduce the indeterminate $x$ modulo $p,$ which does not make sense. AI: Imagine expanding $$\left(a_n x^n +a_{n-1}x^{n-1} +\cdots +a_0\right)^p$$ by using the Multinomial Theorem. All of the "mixed" multinomial coefficients are divisible by $p$, so modulo $p$ we end up with $$a_n^p (x^n)^p +a_{n-1}^p(x^{n-1})^p +\cdots + a_0^p,$$ which, modulo $p$, is the same polynomial as $$a_n(x^p)^n +a_{n-1}(x^p)^{n-1}+\cdots +a_0.$$
H: Does the monoid of sets of numbers with addition have a name? The monoid is the set of all sets of integers (but reals or complex numbers could work too). Addition between two elements is defined as $a+b = \{\ x+y\ |\ x \in a,\ y \in b\ \}$. As far as I can tell, the only category of algebraic structures this fits into is monoids, since it's not a group. Is this structure known or used anywhere, and does it have a name? AI: This operation is called Minkowski addition. I'm not aware of any special name for the algebraic structure of $\mathcal{P}(\mathbb{Z})$ (or whatever) equipped with Minkowski addition though.
H: points of $X$ with non trivial stabilizers are discrete So far I understand the statement as follows: let $p_i$, $i=1,\dots,n$, have non-trivial stabilizers, i.e. $S_{p_1}=\{g\in G: g.p_1=p_1\}\neq\{e\}$ is a non-trivial subgroup of $G$ for $p_1$, and so forth up to $p_n$, where we get $S_{p_n}$; so we need to show $\{p_1,\dots,p_n\}$ is discrete. Could anyone explain the 2nd line of the proof to me? And in the 3rd line $g$ is continuous; how can a point $g\in G$ be continuous? AI: Second line: $G$ is finite, so there must be at least one $g\in G$ that fixes infinitely many of the $p_i$; otherwise there could only be finitely many $p_i$. These $p_i$ fixed by that $g$ form a subsequence all of whose terms are fixed by the same nontrivial element $g$. Third line: This is a slight abuse of notation; $g$ here is not the element of $G$ but the map induced by it, which is continuous since $G$ acts holomorphically.
H: Laplace transform of a product of Modified Bessel Functions Working with a scalar field in 2 dimensions I've come to the following integral, from which I can extract the proper ultraviolet behavior ($a \ll 1$) of the theory: $\int_0^\infty e^{-(4+a^2)x}\left[I_0(2x)\right]^2 dx$. It is obvious to me that this is the Laplace transform of $\left[I_0(2x)\right]^2$ evaluated at $s = (4+a^2)$. From Wikipedia I got the formula $\int_0^\infty e^{-sx} f(x)g(x) dx = \frac{1}{2\pi i} \lim_{T \to \infty} \int_{c-iT}^{c+it} F(\sigma)G(s-\sigma) d\sigma$, where $F(\sigma)$ and $G(\sigma)$ are the Laplace transforms of $f(x)$ and $g(x)$, respectively. I'm encountering some trouble trying to get an analytical result for that integral. What I actually need is its $a \approx 0$ behavior, but a full analytical answer would be great. Thanks in advance for any help! AI: Given that $I_0(2 x)^2 = 1 + 2 x^2 + \frac{3}{2} x^4 + \frac{5}{9}x^6 + o(x^6)$ we see that it is a hypergeometric function: $$ I_0(2x)^2 = {}_1F_2\left(\frac{1}{2}; 1,1; 4 x^2\right) = \sum_{n=0}^\infty \frac{\left(\frac{1}{2}\right)_n}{(1)_n (1)_n} \frac{(4 x^2)^n}{n!} = \sum_{n=0}^\infty \left(\frac{x^{n}}{n!} \right)^2 \binom{2n}{n} $$ Now, integrate term-wise: $$ \int_0^\infty \mathrm{e}^{-k x} [ I_0(2x) ]^2 \mathrm{d} x = \sum_{n=0}^\infty \frac{(2n)!}{n!^4} \int_0^\infty x^{2n} \mathrm{e}^{-k x} \mathrm{d} x = \sum_{n=0}^\infty \frac{(2n)!}{n!^4} \frac{(2n)!}{k^{2n+1}} = \frac{1}{k} \sum_{n=0}^\infty \left( \binom{2n}{n} \frac{1}{k^n} \right)^2 $$ The sum is again hypergeometric, since the ratio of subsequent summands is $\frac{4 (2n+1)^2}{k^2 (n+1)^2} = \frac{16}{k^2} \frac{\left( n+ \frac{1}{2}\right)^2}{(n+1)^2} $, thus the sum equals: $$ \int_0^\infty \mathrm{e}^{-k x} [ I_0(2x) ]^2 \mathrm{d} x = \frac{1}{k} \cdot {}_2F_1\left( \frac{1}{2}, \frac{1}{2}; 1; \frac{16}{k^2}\right) = \frac{2}{ \pi k} K\left(\frac{16}{k^2}\right) $$ where $K(m)$ is a complete elliptic integral: $$K(m) = \int_0^{\pi/2} \frac{\mathrm{d} \phi}{\sqrt{1-m \cdot \sin^2(\phi)}} $$
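A numerical cross-check of the final closed form (a sketch assuming SciPy is available; scipy.special.ellipk uses the same parameter-$m$ convention as the $K(m)$ defined above):

    import numpy as np
    from scipy.integrate import quad
    from scipy.special import i0, ellipk

    k = 5.0                              # e.g. a^2 = 1, so k = 4 + a^2 > 4
    numeric, _ = quad(lambda x: np.exp(-k*x) * i0(2*x)**2, 0, np.inf)
    closed = 2/(np.pi*k) * ellipk(16/k**2)
    print(numeric, closed)               # both ~0.254

Note that $k>4$ is needed for convergence, since $I_0(2x)^2$ grows like $e^{4x}/x$ for large $x$.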
H: hints on solving $ \sin^2 x {d^2y \over dx^2} = 2 y$ How do I solve this differential equation? $$\sin^2 x {d^2y \over dx^2} = 2 y$$ I don't know how to begin. Can it be any simpler than this? AI: Cleaning up Maple's solution, I get $$ y \left( x \right) ={\frac {c_{{1}} \left( \cos \left( 2\,x \right) +1 \right) }{\sin \left( 2\,x \right) }}+{\frac {c_{{2}} \left( x\cos \left( 2\,x \right) -\sin \left( 2\,x \right) +x \right) }{\sin \left( 2\,x \right) }} $$ I can't imagine that anyone would assign that differential equation as homework to be solved by hand. Are you sure the assignment requires solving the differential equation in closed form?
H: Idempotents in $\mathbb Z_n$ An element $a$ of the ring $(P,+,\cdot)$ is called idempotent if $a^2=a$. An idempotent $a$ is called nontrivial if $a \neq 0$ and $a \neq 1$. My question concerns idempotents in rings $\mathbb Z_n$, with addition and multiplication modulo $n$, where $n$ is a natural number. Obviously, when $n$ is a prime number there is no nontrivial idempotent. If $n$ is not prime it may still happen, for example for $n=4$ or $n=9$, that there is none. Is it known, in general, for which $n$ there are nontrivial idempotents, and what form such idempotents take? AI: As often happens when dealing with $\mathbf{Z}_n$, the Chinese remainder theorem is your friend. If the prime factorization of $n$ is $$ n=\prod_i p_i^{a_i}, $$ then by CRT we have an isomorphism of rings $$ \mathbf{Z}_n\cong\bigoplus_i \mathbf{Z}_{p_i^{a_i}}. $$ Observe that the isomorphism maps the residue class of an integer $m$ (modulo $n$) to a vector with all the components equal to the residue class of $m$ (this time modulo various prime powers): $$ \overline{m}\mapsto(\overline{m},\overline{m},\ldots,\overline{m}). $$ So the residue class of $m$ is an idempotent if and only if it is an idempotent modulo all the prime powers $p_i^{a_i}$. Let us look at the case of a prime power modulus $p^t$. The congruence $x^2\equiv x\pmod{p^t}$ holds, iff $p^t$ divides $x^2-x=x(x-1)$. Here only one of the factors, $x$ or $(x-1)$, can be divisible by $p$, so for the product to be divisible by $p^t$ the said factor then has to be divisible by $p^t$. Thus we can conclude that $x\equiv 0,1 \pmod{p^t}$ are the only idempotents modulo $p^t$. Therefore we require that $$ m\equiv 0,1\pmod{p_i^{a_i}} $$ for all $i$. By CRT these congruences are independent for different $i$, so the number of pairwise non-congruent idempotents is equal to $2^\ell$, where $\ell$ is the number of distinct prime factors $p_i$ of $n$.
H: Spectrum of a field Let $F$ be a field. What is $\operatorname{Spec}(F)$? I know that $\operatorname{Spec}(R)$ for a ring $R$ is the set of prime ideals of $R$. But a field doesn't have any non-trivial ideals. Thanks a lot! AI: As you say $\mathrm{Spec}(R)$ is defined to be the set of all prime ideals of $R$. If $R$ is a field, the only proper ideal is $0$ hence you get $\mathrm{Spec}(F) = \{0\}$. It gets more interesting if your space is a ring that is not a field, like for example $R = \mathbb Z$. Then you can endow it with the following topology: each closed set in the space corresponds to an ideal $J$ of $R$, defined as $C(J) = \{ p \mid p \text{ a prime ideal of } R \text{ such that } J \subset p\}$. Now what does $\mathrm{Spec}(\mathbb Z)$ endowed with this topology look like? Well, first of all, the points in our space correspond to prime ideals and since $\mathbb Z$ is a principal ideal domain, each point looks like $p\mathbb Z$. Note that the zero ideal $\{0\}$ is prime if and only if the ring is an integral domain, so in this case, zero is also a point in our space. Next we want to know what closed sets look like. For this, let's stick a prime ideal into $C(\cdot)$ and see what comes out: $C(p\mathbb Z) = \{ p \mathbb Z \}$, which means that every singleton $\{p\mathbb Z\}$ with $p$ prime is closed (the singleton $\{\langle 0 \rangle\}$ is not closed, as the edit below shows). Now what are the closed sets corresponding to non-prime ideals? Well, $n$ has only a finite number of prime divisors and each point in $C(n\mathbb Z)$ corresponds to a divisor of $n$: $C(n\mathbb Z) = \{ p \mathbb Z \mid p \mathbb Z \text{ a prime ideal containing } n \mathbb Z \}$. Now we know that every closed set other than the whole space (which is $C(\{0\})$) is finite. Edit (I apologise for the blunder kindly pointed out by Rene and t.b.) You need to be careful about what open sets, i.e. complements of closed sets look like. You can easily trick yourself into believing that since a set is closed if and only if it's finite, $\mathrm{Spec}(\mathbb Z)$ has the cofinite topology. But this is false. To see this note that if we indeed had the cofinite topology, $\mathrm{Spec}(\mathbb Z) \setminus \{\langle 0 \rangle \}$ would be open. But for this to be true, $\langle 0 \rangle$ would have to be closed which means that we would have to have an ideal $I$ in $\mathbb Z$ such that the only prime ideal containing it is $\langle 0 \rangle$. But this implies that $I = \langle 0 \rangle$ which implies that $I$ is contained in every prime ideal. Hence there is no ideal $I$ such that $C(I) = \{ \langle 0 \rangle \}$. As pointed out in Rene's answer, it boils down to the fact that every nonempty open set contains zero: the complement of a closed set, $C(n\mathbb Z)^c$, is the set of all prime ideals not containing $n \mathbb Z$, and for $n \neq 0$ this always includes the zero ideal (which is prime since we're in an integral domain).
H: "Simplifying" an extension of scalars Let $A$ be a commutative $\mathbb Z$-algebra and $M$ be a $\mathbb Z\oplus \mathbb Z$-module. Then $A\otimes_{\mathbb Z} M$ is an $A\oplus A$-module. Is it true that $(A\oplus A)\otimes_{\mathbb Z\oplus \mathbb Z} M \cong A\otimes_{\mathbb Z} M$ as $A\oplus A$-modules? It seems to me that the map given by $$(a_1, a_2)\times m \mapsto a_1\otimes_{\mathbb Z} (1,0)m + a_2\otimes_{\mathbb Z} (0,1) m$$ defines a map in one direction, and that the map $$a\times m\rightarrow (a,a)\otimes_{\mathbb Z\oplus \mathbb Z} m$$ defines its inverse. More generally, is it true/ is there a slick reason that given commutative unital $R$-algebras $A$, $B$ and $C$ such that $B\cong A\otimes_{R} C$, and a $C$-module $M$, we have $B\otimes_C M\cong A\otimes_{R} M$ as $B$-modules? AI: Yes. You ask if in general $$ (A \otimes_R C) \otimes_C M = A \otimes_R M, $$ when both sides make sense. This is true; the map is $(a \otimes c) \otimes m\mapsto a\otimes cm$ with inverse map $$ a \otimes m \mapsto (a \otimes 1) \otimes m. $$ Of course there's a lot to check here before you believe me.
H: Independence of Rotation Matrix Definitions I am trying to solve a system of non-linear equations. I know that 9 of my variables put together form a 3x3 rotation matrix $$ A = \left( \begin{matrix} a_{11}& a_{12}& a_{13}\\ a_{21}& a_{22}& a_{23}\\ a_{31}& a_{32}& a_{33} \end{matrix} \right) $$ There are many properties of rotation matrices for example $AA^{T} = I$ $\det(A) = 1$ $\mathrm{magnitude}(a_{11},a_{21},a_{31}) = 1\\ \mathrm{magnitude} (a_{12},a_{22},a_{32}) = 1\\ \mathrm{magnitude}(a_{13},a_{23},a_{33}) = 1 $ I'm sure some of you could provide me with various others. My question is how many of them are independent, and which ones are independent? If these are 9 unknowns, how many useful equations does this constraint give me? AI: $A A^T = I$ already imposes $6$ independent constraints: $3$ saying that the rows should be orthogonal to each other, and $3$ saying that each row should be a vector of length $1$ (in particular the last three conditions you wrote down are redundant). These conditions also imply that $$\det(A) \det(A^T) = \det(A)^2 = 1$$ hence that $\det(A) = \pm 1$. The result we get is a $(9 - 6 = 3)$-dimensional manifold, namely the group $\text{O}(3)$ of rotations and reflections. The final condition $\det(A) = 1$ is an independent constraint but does not drop the dimension; it just singles out the connected component containing the rotations.
H: float result for two smallest integer division I want to find two integers whose quotient is a given float. For example, x / y = 1.333333333...: $x$ and $y$ can be 8, 6 or 4, 3 ... I need x = 4 and y = 3. For the next example my number is 1.41. What are x and y? How can I find them? AI: As in J.D.'s edit, the best method is to use continued fractions: At each step (for the evaluation of $x$) : note the integer part $j\leftarrow \lfloor x\rfloor$ (illustrated in blue) compute the new fraction $f$ as illustrated : write the previous fraction multiply the numerator and denominator by $j$ add the numerator and denominator of the previous previous fraction (starting with $\frac 10$) evaluate the fractional part $x\leftarrow x-j$ stop when $x$ becomes $0$ or at least very small (depending on the precision of your evaluation) or whenever you decide to... else compute $x$'s multiplicative inverse $x\leftarrow \frac 1x$ and repeat (as an alternative you could keep the 'rounded' integer and subtract it instead of the integer part) Complete process for $x=1.41$ : the succession of fractions is $\frac 10\to\frac 11\to\frac 32\to \frac 75\to \cdots\to \frac {141}{100}$ (note how these fractions 'shift' in the second fraction from the right) $ \begin{array} {l|r|ccccc} x&j&&&&&f\\ \hline\\ 1.41 & \color{#0000ff}{1} & \color{#0000ff}{1}&=&\frac {\color{#0000ff}{1}}1&=&\frac 11\\ 1/0.41=2.43902\cdots & \color{#0000ff}{2} &1+\cfrac 1{\color{#0000ff}{2}}&=& \frac {1\cdot \color{#0000ff}{2}+1}{1\cdot \color{#0000ff}{2} +0}&=&\frac 32\\ 1/0.4390\cdots=2.2777\cdots & \color{#0000ff}{2}&1+\cfrac 1{2+\cfrac 1{\color{#0000ff}{2}}}&=& \frac {3\cdot \color{#0000ff}{2}+1}{2\cdot \color{#0000ff}{2} +1}&=&\frac 75\\ 1/0.2777\cdots=3.6 &\color{#0000ff}{3}&1+\cfrac 1{2+\cfrac 1{2+\cfrac 1{\color{#0000ff}{3}}}}&=& \frac {7\cdot \color{#0000ff}{3}+3}{5\cdot \color{#0000ff}{3} +2}&=&\frac {24}{17}\\ 1/0.6=1.6666\cdots & \color{#0000ff}{1}&\cdots&=&\frac {24\cdot \color{#0000ff}{1}+7}{17\cdot \color{#0000ff}{1} +5}&=&\frac {31}{22}\\ 1/0.6666\cdots=1.5\cdots & \color{#0000ff}{1}&\cdots&=& \frac {31\cdot \color{#0000ff}{1}+24}{22\cdot \color{#0000ff}{1} +17}&=&\frac {55}{39}\\ 1/0.5=2 & \color{#0000ff}{2}&\cdots&=& \frac {55\cdot \color{#0000ff}{2}+31}{39\cdot \color{#0000ff}{2}+22}&=&\frac {141}{100}\\ \text{stop!} &\\ \end{array} $ Since we started with a rational number the process ended in a finite number of steps (returning the exact $\frac {141}{100}$ here). For a quadratic number (like $\sqrt{3}+1$) the $j$ integers produced will eventually repeat. For a number that is neither rational nor quadratic, the sequence of $j$'s produced will never repeat (up to machine precision of course!). Hoping this helped too,
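The whole procedure fits in a few lines (a Python sketch of the convergent recursion illustrated above; the eps cutoff for floating-point noise is a tunable assumption):

    def to_fraction(x, max_steps=20, eps=1e-9):
        # convergents h/k satisfy h = j*h1 + h0, k = j*k1 + k0
        h0, k0, h1, k1 = 0, 1, 1, 0
        for _ in range(max_steps):
            j = int(x)
            h0, k0, h1, k1 = h1, k1, j*h1 + h0, j*k1 + k0
            x -= j
            if x < eps:
                break
            x = 1 / x
        return h1, k1

    print(to_fraction(1.41))     # (141, 100)
    print(to_fraction(4/3))      # (4, 3)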
H: Finiteness of an integral w.r.t. a finite Borel measure. Suppose $\mu$ is a finite Borel measure on $\mathbb{R}^3$. Define $h : \mathbb{R}^3 \rightarrow \mathbb{R}$ by $$h(x) = \int_{\mathbb{R}^3} \dfrac{d\mu(y)}{\|x - y\|}.$$ Question 1: Must $h(x)$ be finite for almost every $x$, w.r.t. Lebesgue measure? Question 2: Suppose $h(x)$ is indeed finite for (Lebesgue) almost every $x$. Does this imply that $\mu$ and $m^3$ (the Lebesgue measure on $\mathbb{R}^3$) are mutually singular? I'm also interested in any comments about interesting conditions that could imply a.e. finiteness for $h$. Since interesting is somewhat vague, I'm not stating this as a formal question. Thanks. AI: Your function $h$ is the Newtonian potential of the measure $\mu$. When the measure $\mu$ is finite, this function is superharmonic ([1], Theorem 6.3), and thus finite almost everywhere with respect to Lebesgue measure ([1], Theorem 4.10). In addition, $h$ is Lebesgue integrable over every compact subset of $\mathbb{R}^3$. Reference: [1] Introduction to Potential Theory by L.L. Helms (Wiley, 1969) Added: That last statement gives a clue on how to find a direct proof. Let $K$ be a compact subset of $\mathbb{R}^3$ and integrate $h$ over $K$: $$\int_Kh(x)\,dx=\int_K \int_{\mathbb{R}^3} {1\over\|x-y\|}\,\mu(dy)\,dx = \int_{\mathbb{R}^3} \int_K {1\over\|x-y\|}\,dx \,\mu(dy).$$ Let's show that $g(y):= \int_K {1\over\|x-y\|}\,dx$ is a bounded function. For $y$ with distance greater than 1 from $K$ we have $g(y)\leq \lambda(K)$. On the other hand, there is a fixed radius $R$ so that for all other $y$, the set $K$ is contained in the ball $B(y,R)$. So, for such $y$, $$g(y)\leq \int_{B(y,R)} {1\over\|x-y\|}\,dx = \int_{B(0,R)} {1\over\|x\|}\,dx = 4\pi \int_0^R {1\over r}\, r^2\,dr = 2\pi R^2.$$ Combining these bounds shows that $g$ is a bounded function, and since $\mu$ is finite, this means that the integral of $h$ over $K$ is finite. This implies that $h$ is finite Lebesgue almost everywhere.
H: Sampling a combination randomly I want to sample a combination of $N$ elements (without replacement) from a list of $M$ elements where $M\gg N$. There are algorithms to do this when each element is picked with uniform probability. I want to do the same for the non-uniform case. Let each specific element $i$ is associated with a positive constant $p_i$. Then, say for $N=2$, I want the probability of sampling the combination $\{i,j\}$ to be equal to $\frac{p_i+p_j}{Z}$, where $Z$ is the normalization constant, i.e. $Z=\sum_{k,l}p_k+p_l\;\;\forall k,l$. Any hints, pointers for an efficient and unbiased sampling algorithm are much appreciated. AI: In general, you want to pick $M$ distinct elements where the probability of result $A$ (a subset of $\{1,\ldots,N\}$ is $p(A) = \sum_{i \in A} p_i/Z$, $Z = \sum_A \sum_{i \in A} p_i = {N-1 \choose M-1} \sum_i p_i$. Let $S(A) = \sum_{i \in A} p_i$ and $S = \sum_{i=1}^N p_i$. Let ${\cal P}_k(B)$ denote the collection of all $k$-element subsets of $B$. The probability of choosing item number $1$ is $$\eqalign{P(1) &= \sum_{A \in {\cal P}_{M-1}(\{2,\ldots,N\})} p(\{1\} \cup A) = \sum_{A \in {\cal P}_{M-1}(\{2,\ldots,N\})} \dfrac{p_1 + S(A)}{Z}\cr &= \dfrac{p_1}{S} + \sum_{A \in {\cal P}_{M-1}(\{2,\ldots,N\})} \sum_{j \in A} \dfrac{p_j}{{{N-1} \choose {M-1}} S} = \dfrac{p_1}{S} + \sum_{j=2}^N \dfrac{{{N-2} \choose {M-2}} p_j}{{{N-1} \choose {M-1}} S}\cr &= \dfrac{p_1}{S} + \dfrac{M-1}{N-1} \dfrac{S - p_1}{S} = \dfrac{N-M}{N-1} \dfrac{p_1}{S} + \dfrac{M-1}{N-1} } $$ You can use a sequential procedure: first decide (using this probability) whether or not to choose item $1$. Depending on whether you choose $1$ or not, you have $M-1$ or $M$ items to be chosen from the remaining $N-1$. The conditional probability of choosing $2$ given this first choice is then $$ \dfrac{\sum_{A \in {\cal P}_{M-2}(\{3,\ldots,N\})} p(\{1,2\} \cup A)}{P(1)} \ \text{or} \ \dfrac{\sum_{A \in {\cal P}_{M-1}(\{3,\ldots,N\})} p(\{2\} \cup A)}{1 - P(1)}$$ which can be obtained by a similar calculation. After deciding whether or not to choose $2$ using this conditional probability, you calculate the probability for $3$, and continue in this way until all $M$ items are chosen.
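The closed form for $P(1)$ is easy to verify against brute-force enumeration over all $M$-subsets (a Python sketch with arbitrary example weights):

    from itertools import combinations

    p = [0.5, 1.0, 2.5, 3.0, 4.0]            # the weights p_i
    N, M = len(p), 3
    S = sum(p)

    # brute force: P(item 0 chosen) = sum of p(A) over subsets A containing 0
    Z = sum(sum(p[i] for i in A) for A in combinations(range(N), M))
    brute = sum(sum(p[i] for i in A)
                for A in combinations(range(N), M) if 0 in A) / Z

    closed = (N - M)/(N - 1) * p[0]/S + (M - 1)/(N - 1)
    print(brute, closed)                     # both 0.52272...

The sequential procedure then repeats this kind of computation with the conditional probabilities described above.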
H: Almost a perfect cuboid While reading a very old book on diophantine equations, I came across this exercise: Find an infinite number of positive integer solutions of the equations $$x^2 + y^2 = u^2$$ $$y^2 + z^2 = v^2$$ $$z^2 + x^2 = w^2$$ I have found a few solutions by hand, for example $x=240$, $y = 117$, $z = 44$, and trivially multiples will also produce solutions, but I assume that the book is really asking for solutions where there is no common factor of $x$, $y$, $z$. I have spent a few hours trying to get something from the standard parametric solutions of $x^2 + y^2 = z^2$ without success, and wondered if anyone has any insight they can offer. Clearly this is (slightly) connected with the problem of finding an integer cuboid with all the face diagonals integral, and the main diagonal also integral, which I assume is still an open problem. AI: MathWorld gives an answer constructed from any Pythagorean triangle, and remarks that Euler found two families, but that they don't exhaust all the possibilities.
H: How is exponent notation related to this example I am not sure what is meant by exponent notation and therefore how to answer this question is baffling me. Rewrite this in exponent notation: $\sqrt[3]{x^2y(z-X)^5}$ AI: Yes, exponent notation just means writing the radical as a fractional power, so it is literally a case of doing this: $\left({x^2y(z-X)^5}\right)^{\frac{1}{3}}$. In general, $\sqrt[n]{a}=a^{1/n}$.
H: About Poisson Equation. I want to solve $ - \Delta u = f$ in $\Omega$ with $u = \phi $ on $ \partial \Omega$. But if I have the solutions of (1) and (2) below : $$ - \Delta u_1 = f \; \text{in } \Omega , \; u_1 = 0 \; \text{on } \partial \Omega \tag{1}$$ $$ - \Delta u_2 = 0 \; \text{in } \Omega , \; u_2 = \phi \; \text{on } \partial \Omega \tag{2}$$ Then how can I solve the problem $ - \Delta u = f$ in $\Omega$ with $u = \phi $ on $ \partial \Omega$ by using $u_1 , u_2 $ ? AI: The solution is $v=u_1+u_2$. In fact, we have $$ -\Delta v= -\Delta u_1 - \Delta u_2 = f \quad \mbox{in} \Omega$$ and $$ v= 0 +\phi = \phi\quad \mbox{on} \quad \partial \Omega.$$
H: A very simple Expected Value question Can someone explain to me why is $P(Y = 1) = P(X = -1)+P(X = 1)$? Why is P(Y = 1) the sum of P(X = -1) and P(X= 1)? I don't see how $Y = X^2$ comes into play? I am very new to this stuff. AI: We have $Y=1$ iff $X^2=1$ iff $X=1$ or $X=-1$. The only way that $X^2$ can be $1$ is $X=1$ or $X=-1$. The probability that $X=1$ is $0.3$, the probability that $X=-1$ is $0.2$, so the probability that one or the other happens is $0.3+0.2$. You can draw a Venn diagram. Divide the "world" into $3$ non-intersecting pieces, label one of them $X=1$, another $X=-1$, and the third $X=0$. The probability that $X^2=1$ is the sum of the probabilities of the first two pieces.
H: Good book on evaluating difficult definite integrals (without elementary antiderivatives)? I am very interested in evaluating difficult definite integrals without elementary antiderivatives by manipulating the integral somehow (e.g. contour integration, interchanging order of integration/summation, differentiation under the integral sign, etc.), especially if they have elegant solutions. However, I simply cannot seem to find a good book that covers many ways to evaluate these. The single book that I know that covers some good techniques is the Schaum's Advanced Calculus. What is another good book that explains methods and techniques of integration of these fun integrals? AI: I've only skimmed it, but Irresistible Integrals by George Boros and Victor H. Moll seems worth a look.
H: Are the matrix products $AB$ and $BA$ similar? Given two matrices $A,B.$ On what conditions does $AB \sim BA$ hold? AI: If $A$ is invertible, then $AB = A(BA)A^{-1}$ which shows that $AB$ and $BA$ are similar. Similar (no pun intended) proof if $B$ is invertible.
H: How can I prove Stokes theorem using Green's formula? $$ \int_{\partial \Omega} (u ~dx + v ~dy) = \iint_{\Omega} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) ~dx ~dy $$ Then I want to prove that$$ \int_{\partial \Omega} w = \iint_{\Omega} ~dw, \;(w = u ~dx + v ~dy) $$ Would you give me an elementary proof for this? AI: $\def\d{\mathrm{d}} \def\w{\omega}$Green's theorem is a special case of Stokes' theorem, not the other way around. Let $\w$ be the differential one-form $u \d x + v \d y$. The exterior derivative of $\w$ is $$\d \w = \left(\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\d x\wedge \d y.$$ Stokes' theorem takes the form $$\int_{\partial\Omega} (u \d x + v \d y) = \int_\Omega \left(\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y}\right)\d x\wedge \d y.$$ Since the manifold is $\mathbb{R}^2$ this can be rewritten as $$\int_{\partial \Omega} (u dx + v dy) = \int_{\Omega} \left( \frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) dx dy.$$ This is Green's theorem.
H: Continuous Dice rolls A die is being rolled repeatedly. Suppose you roll a 3 (or any number). What is the probability that you will get the next "3" exactly after n rolls? AI: I will interpret "after $n$ rolls" as meaning that we want the probability that our next $n-1$ rolls do not yield a $3$, but the $n$-th roll does give a $3$. The probability of a non-$3$ on any toss is $\frac{5}{6}$. So the probability of $n-1$ consecutive non-$3$'s followed by a $3$ is $$\left(\frac{5}{6}\right)^{n-1}\frac{1}{6}.$$ The reason that we simply multiply is that the results on successive tosses of the die are independent. The die does not remember what results it has come up with in the past. We can also get the result by using a counting argument. Imagine tossing a die $n$ times, and recording the results. So a record of $4$ tosses might read $4,5,4,1$. There are $6^n$ possible records of the results of $n$ tosses, all equally likely. There are $5^{n-1}$ records in which the first $n-1$ results are a non-$3$ and the last result is a $3$. So our probability is $$\frac{5^{n-1}}{6^n}.$$
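A quick simulation agrees with the formula (a Python sketch using the standard library):

    import random

    trials = 10**6
    counts = {}
    for _ in range(trials):
        n = 1
        while random.randint(1, 6) != 3:   # roll until the next 3
            n += 1
        counts[n] = counts.get(n, 0) + 1

    for n in range(1, 6):
        print(n, counts[n]/trials, (5/6)**(n-1) * (1/6))

This is just the geometric distribution with success probability $1/6$.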
H: question about Skolem theories Right now I am reading a proof of Downward Löwenheim-Skolem theorem in Hodges, but I am slightly confused about a proof Hodges makes. Let me write down some of the definitions. Definition: Let $T$ be a first-order theory in a first-order language $\scr{L}$. >Then a skolemisation of $T$ is a theory $T^+\supseteq T$ in a first order >language $\scr{L}^+\supseteq \scr{L}$ such that Any model of $T$ can be expanded to a model of $T^+$. For every formula $\phi(\bar{x}, y)$ of $\scr{L}^+$ with $\bar{x}$ a nonempty tuple, there is a term $t$ of $\scr{L}^+$ such that $$ T^+\vdash \forall \bar{x}(\exists y\phi(\bar{x}, y)\to \phi(\bar{x}, t(\bar{x}))). $$ A theory $T$ is said to be a Skolem Theory or to have Skolem functions if $T^+:=T$ and $\scr{L}^+ = \scr{L}$ is a skolemisation of $T$, i.e. $T$ is a skolemisation of itself. Hodges goes on to say that for a Skolem theory $T$, and $A$ a model of $T$ and $X$ a subset of $A$, if the skolem hull $B:=\langle X\rangle_A$ is nonempty, then it is an elementary substructure of $A$. He proves this by invoking the Tarski-Vaught Criterion for elementary substructures. In particular, if $\phi(\bar{x}, y)$ is a $\scr{L}$-formula and $\bar{a}$ is a tuple from $B$ with $A\models \exists y \phi(\bar{a}, y)$, then $A\models \phi(\bar{a}, t(\bar{a}))$ for the appropriate term $t$. But $t^A(\bar{a})\in B$ as $B$ is closed under the functions, and he concludes that $B$ satisfies the Tarski-Vaught Criterion and so $B\preceq A$. Here is my question, what if $\phi(\bar{x}, y)$ is actually $\phi(y)$? Then $\exists y\phi(y)$ is a sentence and so does not have a Skolem function. So if $A\models \exists y\phi$, how can we guarantee that $B$ possesses a witness to $\exists y\phi$? My attempt to work around this is to consider the formula $$ \psi(x, y): = \ulcorner \phi(y)\wedge x=x\urcorner. $$ Then $\exists y\phi(y)$ is logically equivalent to $\exists y \psi(x, y)$. As $T$ is a skolem theory, there is a term $t(x)$ such that $$ T\vdash \forall x(\exists y\psi(x, y)\to \psi(x, t(x))). $$ Then for any $a\in B$, $t^A(a)$ will be an element of $B$ witnessing $A\models \exists y\phi$. Is this a correct resolution of my question, or is there something else I am missing? Thanks for the help! AI: Yes, your solution works. A less "cheap" way to resolve things is to remove the word "nonempty" from Hodges' definition, so that $(\exists y)\phi(y)$ has a zero-ary Skolem function, that is, a "Skolem constant" $c_\phi$. That is the definition of a Skolemization some others use. (I am not familiar with Hodges' definitions and conventions; he might not require that a variable shown in a formula name must actually appear, in which case he could write "$(\exists y)[y = y]$" as $\phi(x)$.)
H: Prime-base products: Plot For each $n \in \mathbb{N}$, let $f(n)$ map $n$ to the product of the primes that divide $n$. So for $n=112$, $n=2^4 \cdot 7^1$, $f(n)= 2 \cdot 7 = 14$. For $n=1000 = 2^3 \cdot 5^3$, $f(1000)=10$. Continuing in this manner, I arrive at the following plot (figure omitted: a scatter plot of the values $f(n)$). Essentially: I would appreciate an explanation of this plot, "in the large": I can see why it contains near-lines at slopes $1,2,3,4,\ldots,$ etc., but, still, somehow the observational regularity was a surprise to me, due, no doubt, to my numerical naivety—especially the way it degenerates near the $x$-axis to such a regular stipulated pattern. I'd appreciate being educated—Thanks! AI: I cannot tell how much you know. If a number $n$ is squarefree then $f(n) = n.$ This is the most frequent case, as $6/\pi^2$ of natural numbers are squarefree. Next most frequent are 4 times an odd squarefree number, in which case $f(n) = n/2,$ a line of slope $1/2,$ as I think you had figured out. These numbers are not as numerous, a count of $f(n) = n/2$ should show frequency below $6/\pi^2.$ All your lines will be slope $1/k$ for some natural $k,$ but it is not just a printer effect that larger $k$ shows a less dense line. Anyway, worth calculating the actual density of the set $f(n) = n/k.$ Meanwhile, note that a computer printer does not actually join up dots in a line into a printed line, that would be nice but is not realistic. There are optical effects in your graph that suggest we are seeing step functions. If so, there are artificial patterns not supported mathematically. Alright, I am seeing an interesting variation on frequency that I did not expect. Let us define $$ g(k) = \frac{\pi^2}{6} \cdot \mbox{frequency of} \; \left\{ f(n) = n/k \right\}. $$ Therefore $$ g(1) = 1. $$ What I am finding is, for a prime $p,$ $$ g(p) = \frac{1}{ p \,(p+1)}, $$ $$ g(p^2) = \frac{1}{ p^2 \,(p+1)}, $$ $$ g(p^m) = \frac{1}{ p^m \,(p+1)}. $$ Furthermore $g$ is multiplicative, so when $\gcd(m,n) = 1,$ then $$g(mn) = g(m) g(n). $$ Note that it is necessary that the frequency of all possible events be $1,$ so $$ \sum_{k=1}^\infty \; g(k) = \frac{\pi^2}{6}.$$ I will need to think about the sum, it ought not to be difficult to recover this from known material on $\zeta(2).$ EDIT: got it. see EULER. For any specific prime, we get $$ G(p) = g(1) + g(p) + g(p^2) + g(p^3) + \cdots = \left( 1 + \frac{1}{p(p+1)} + \frac{1}{p^2(p+1)} + + \frac{1}{p^3(p+1)} + \cdots \right) $$ or $$ G(p) = 1 + \left( \frac{1}{p+1} \right) \left( \frac{1}{p} + \frac{1}{p^2} + \frac{1}{p^3} + \cdots \right) $$ or $$ G(p) = \frac{p^2}{p^2 - 1} = \frac{1}{1 - \frac{1}{p^2}}. $$ Euler's Product Formula tells us that $$ \prod_p \; G(p) = \zeta(2) = \frac{\pi^2 }{6}. $$ The usual bit about unique factorization and multiplicative functions is $$ \sum_{k=1}^\infty \; g(k) = \prod_p \; G(p) = \zeta(2) = \frac{\pi^2}{6}.$$
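Both the radical $f$ and the claimed densities are easy to tabulate (a Python sketch; the smallest-prime-factor sieve and the cutoff $10^6$ are arbitrary choices of mine):

    import math

    LIMIT = 10**6
    spf = list(range(LIMIT + 1))        # smallest-prime-factor sieve
    for i in range(2, int(LIMIT**0.5) + 1):
        if spf[i] == i:
            for j in range(i*i, LIMIT + 1, i):
                if spf[j] == j:
                    spf[j] = i

    def rad(n):                         # f(n): product of distinct primes
        r = 1
        while n > 1:
            p = spf[n]
            r *= p
            while n % p == 0:
                n //= p
        return r

    g = {1: 1, 2: 1/6, 3: 1/12, 4: 1/12}    # g(k) from the formulas above
    counts = dict.fromkeys(g, 0)
    for n in range(1, LIMIT + 1):
        k = n // rad(n)                 # rad(n) always divides n
        if k in counts:
            counts[k] += 1
    for k in g:
        print(k, counts[k]/LIMIT, (6/math.pi**2) * g[k])
        # empirical vs predicted: ~0.608, ~0.101, ~0.051, ~0.051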
H: If $p$ is a factor of $m^2$ then $p$ is a factor of $m$ I'm a complete beginner and not sure where to go with this proof of Euclid's lemma. Any help would be greatly appreciated. If $m$ is a positive integer and a prime number $p$ is a factor of $m^2,$ then $p$ is a factor of $m.$ So far I have: Since we know that $m$ is a positive integer, then $m^2$ must also be positive. We also know that $p$ is positive integer, since it is a prime number. So $m^2 = p*k$ where $k$ is positive since both $m^2$ and $p$ are positive. Therefore, $k$ is greater than or equal to $1.$ ...? AI: If you know about prime factorization.. the fundamental theorem of arithmetic states that every non-zero integer has a unique (up to ordering) factorization into products of prime powers. That is, $$ m = \pm p_1^{k_1} p_2^{k_2} \cdots p_n^{k_n} $$ where $p_i$'s are primes and $k_i > 0$ for all $i.$ ($\pm$ based on the sign of $m.$) Now what is the prime factorization of $m^2$? It's $$ m^2 = p_1^{2k_1} p_2^{2k_2} \cdots p_n^{2k_n}. $$ Can you now use this information to show that $p \ | \ m^2 \implies p \ | \ m$?
H: Are minimal prime ideals in a graded ring graded? Let $A=\oplus A_i$ be a graded ring. Let $\mathfrak p$ be a minimal prime in $A$. Is $\mathfrak p$ a graded ideal? Intuitively, this means the irreducible components of a projective variety are also projective varieties. When $A$ is Noetherian, I can give a proof, as follows. There is some filtration of $A$, as an $A$ module, $$0=M_0\subset M_1\subset\cdots\subset M_n=A$$ such that $M_i/M_{i-1}\cong A/\mathfrak p_i$, for some graded prime ideal $\mathfrak p_i$. Then I claim that the nilradical is $\cap\mathfrak p_i$. This is because $$x^n=0 \Rightarrow x^nA=0 \Rightarrow x^nM_i\subset M_{i-1}\forall i \Rightarrow x^n\in \cap \mathfrak p_i \Leftrightarrow x\in \cap \mathfrak p_i $$ and $$ x\in \cap \mathfrak p_i \Rightarrow xM_i\subset M_{i-1},\forall i \Rightarrow x^nA=0 \Rightarrow x^n=0.$$ Hence the minimal primes are just the minimal elements in $\{\mathfrak p_i\}$. I would like to know if this assertion is still true if we drop the Noetherian condition, or if anyone has some more direct proofs. Thanks! AI: Yes, the minimal primes of a graded ring are graded. If $\mathfrak{p}$ is any prime, then the ideal $\mathfrak{p}^h$ generated by the homogeneous elements of $\mathfrak{p}$ is also prime, and certainly graded. So if $\mathfrak{p}$ is minimal, $\mathfrak{p}=\mathfrak{p}^h$ is graded.
H: Does a disjoint set forest have multiple distinct "upwards closed" partitions? The following is an excerpt from a powerpoint on the role of the inverse Ackermann function in determining the complexity of path compression. Dissection of a disjoint set forest $F$ with node set $X$ Partition of $X$ into “top part” $X_t$ and “bottom part” $X_b$ so that top part $X_t$ is “upwards closed” i.e. $x∈X_t ⇒$ every ancestor of $x$ is in $X_t$ also Using the definition of a "upwards closed" partition mentioned above, aren't there several distinct $X_t$ partitions that meet this criteria? Consider the following partitions that all appear to meet the aforementioned definition: The first distinct partition is only the root. The second distinct partition is the root and its children The third distinct partition is the root, its children and its grandchildren The remaining distinct partitions consist of the root and up to the $n^{th}$ generation of children Do there exist multiple distinct "upwards closed" partitions in a disjoint set forest? AI: Yes, of course. This is just the definition of a "dissection"; there are several partitions that satisfy this definition. The upper part of a dissection need not contain all nodes at a given level; for example, it could consist of the root, just one of its children, and all of the descendants of that child. The next few slides basically argue that you can analyze the effect of a series of path compressions on a forest $F$ by analyzing a corresponding sequence of path compressions in an arbitrary dissection of $F$. Then later, the slides show that by choosing a particular dissection (and arguing recursively), one obtains the inverse-Ackermann amortized time bound.
H: Evaluating $\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$ I've been trying to evaluate the following integral from the 2011 Harvard PhD Qualifying Exam. For all $n\in\mathbb{N}^+$ in general: $$\int\limits_0^\infty \! \frac{x^{1/n}}{1+x^2} \ \mathrm{d}x$$ However, I'm not quite sure where to begin, even. There is a possibility that it is related to complex analysis, so I tried going at it with Cauchy's and also with residues, but I still haven't managed to get any further in solving it. AI: Beta function I see @robjohn beat me to it. The substitution is slightly different, so here it is. Here's a simple approach using the beta function. First, notice the integral diverges logarithmically for $n=1$, since the integrand goes like $1/x$ for large $x$. Let $t=x^2$. Then $$\begin{eqnarray*} \int_0^\infty dx\, \frac{x^{1/n}}{1+x^2} &=& \frac{1}{2}\int_0^\infty dt\, \frac{t^{(1-n)/(2n)}}{1+t} \\ &=& \frac{1}{2} \textstyle B\left(\frac{1}{2} + \frac{1}{2n}, \frac{1}{2} - \frac{1}{2n}\right) \\ &=& \frac{1}{2} \textstyle\Gamma\left(\frac{1}{2} + \frac{1}{2n}\right) \Gamma\left(\frac{1}{2} - \frac{1}{2n}\right) \\ &=& \frac{\pi}{2 \sin\left(\frac{\pi}{2} + \frac{\pi}{2n}\right)} \\ &=& \frac{\pi}{2} \sec \frac{\pi}{2n}. \end{eqnarray*}$$ Some details Above we use the integral representation for the beta function $$B(x,y) = \int_0^\infty dt\, \frac{t^{x-1}}{(1+t)^{x+y}}$$ for $\mathrm{Re}(x)>0$, $\mathrm{Re}(y)>0$. We also use Euler's reflection formula, $$\Gamma(1-z)\Gamma(z) = \frac{\pi}{\sin\pi z}.$$ Addendum: A method with residue calculus Let $t = x^{1/n}$. Then $$\begin{eqnarray*} I &=& \int_0^\infty dx\, \frac{x^{1/n}}{1+x^2} \\ &=& n\int_0^\infty dt\, \frac{t^n}{t^{2n}+1} \end{eqnarray*}$$ Notice the last integral has no cuts for integral $n$. The residues are at the roots of $t^{2n}+1=0$. Consider the pie-shaped contour with one edge along the positive real axis, another edge along the line $e^{i\pi/n}t$ with $t$ real and positive, and the "crust" at infinity. (Figure: the pie-shaped contour, of opening angle $\pi/n$.) There is one residue in the contour at $t_0 = e^{i\pi/(2n)}$. The integral along the real axis is just $I$. The integral along the other edge of the pie is $$\begin{eqnarray*} I' &=& n\int_\gamma dz\,\frac{z^n}{z^{2n}+1} \\ &=& n \int_\infty^0 dt e^{i\pi/n} \frac{t^n e^{i\pi}}{t^{2n}+1} \\ &=& -e^{i(n+1)\pi/n} I. \end{eqnarray*}$$ The integral along the crust goes like $1/R^{2n-1}$ as the radius of the pie goes to infinity, and so vanishes in the limit. Therefore, $$\begin{eqnarray*} I + I' &=& 2\pi i \,\mathrm{Res}_{t=t_0}\, \frac{t^n}{t^{2n}+1} \\ &=& 2\pi i \frac{t_0^n}{2n t_0^{2n-1}}. \end{eqnarray*}$$ Using this and the formula for $I'$ in terms of $I$ above we find $$I = \frac{\pi}{2} \sec \frac{\pi}{2n},$$ as before.
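A numerical check of the result (a sketch assuming SciPy is available; $n=1$ is excluded since, as noted above, the integral diverges there):

    import numpy as np
    from scipy.integrate import quad

    for n in range(2, 6):
        numeric, _ = quad(lambda x: x**(1/n) / (1 + x**2), 0, np.inf)
        closed = np.pi / (2 * np.cos(np.pi/(2*n)))
        print(n, numeric, closed)    # e.g. n = 2 gives ~2.2214 for both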
H: Proof that if $s_n \leq t_n$ for $n \geq N$, then $\liminf_{n \rightarrow \infty} s_n \leq \liminf_{n \rightarrow \infty} t_n$ This is half of Theorem 3.19 from Baby Rudin. Rudin claims the proof is trivial. What I've come up with so far doesn't seem trivial, however, and is probably also wrong (my problem with it is pointed out below). This makes me wonder whether I'm overlooking some useful fact and/or using an unprofitable approach. Theorem. If $s_n \leq t_n$ for $n \geq N$, where $N$ is fixed, then $$ \liminf_{n \rightarrow \infty} s_n \leq \liminf_{n \rightarrow \infty} t_n. $$ Proof. Suppose that $s_n \leq t_n$ if $n \geq N$, but that $$ \liminf_{n \rightarrow \infty} s_n > \liminf_{n \rightarrow \infty} t_n. $$ Let $E_s$ denote the set of all subsequential limits of $\{s_n\}$, and let $E_t$ denote the set of all subsequential limits of $\{t_n\}$. Then $$ \inf E_s > \inf E_t. $$ This implies that there exists some $x \in E_t$ such that $\inf E_s > x > \inf E_t$, since otherwise $\inf E_t$ would not be the greatest lower bound of $E_t$. Hence some subsequence of $\{t_n\}$, say $\{t_{n_i}\}$, converges to $x < \inf E_s$. Lemma. (from Rudin) If $x < \liminf_{n \rightarrow \infty} s_n$, then there exists an integer $N$ such that if $n \geq N$, then $s_n > x$. By the lemma, there exists an integer $N_0$ such that if $n \geq N_0$, then $s_n > x$. Now, let $\epsilon = \inf_{n \geq N_0} \{s_n - x\}$. This is where I think my proof breaks down. Can't $\epsilon$ be zero? Then, since $\{t_{n_i}\}$ converges to $x$, there exists an integer $N_1$ such that if $n_i \geq N_1$, then $|t_{n_i} - x| < \epsilon$. But this means that, if $n_i \geq \max\{N, N_0, N_1\}$, we have both $$ s_{n_i} > t_{n_i}, $$ as well as $s_{n_i} \leq t_{n_i}$, a contradiction. AI: You are making it too hard for yourself. Throughout, we may take $n \geq N$, since discarding finitely many terms changes neither $\liminf$. By definition, $\liminf_{n \to \infty} a_n = \lim_{n \to \infty} \inf_{k\geq n} a_k$. Also, notice that the sequence $\inf_{k\geq n} a_k$ is non-decreasing. So, if $s_n \leq t_n$, then clearly $\inf_{k\geq n} s_k \leq t_n$, for any $n$. From this it follows that $\inf_{k\geq n} s_k \leq \inf_{k\geq n} t_k$, for any $n$ (if not, then $\inf_{k\geq n} s_k > t_{k'}$ for some $k' \geq n$, contradicting $\inf_{k\geq n} s_k \leq s_{k'} \leq t_{k'}$). Since both sides are non-decreasing, we have $\inf_{k\geq n} s_k \leq \lim_{n' \to \infty} \inf_{k\geq n'} t_k = \liminf_{n \to \infty} t_n$, and then we have $\lim_{n' \to \infty} \inf_{k\geq n'} s_k = \liminf_{n \to \infty} s_n \leq \liminf_{n \to \infty} t_n$, as desired.
H: Identity related to binomial distribution? While writing a (non-math) paper I came across the following apparent identity: $$N \cdot \sum_{i = 1}^N \frac{1}{i}\binom{N-1}{i-1}p^{i-1}\left(1-p\right)^{N-i} = \frac{1 - \left(1-p\right)^N}{p}$$ where $N$ is a positive integer and $p$ is a nonzero probability. Based on intuition and some manual checks, this looks like it should be true for all such $N$ and $p$. I can't prove this, and being mostly ignorant about math, I don't know how to learn what I need to prove this. I'd really appreciate anything helpful, whether a quick pointer in the right direction or the whole proof (or a proof or example that the two aren't identical). Note also that $$1 - \left(1-p\right)^N = \sum_{i=1}^N \binom{N}{i}p^i\left(1-p\right)^{N-i}$$ and that $p = 1 - \left(1-p\right)^1$. For background, see the current draft with relevant highlightings here. AI: Some manipulation gives the desired result. Mostly, one has to note that $$\frac{N}{i}\binom{N-1}{i-1}=\binom{N}{i}.$$ For $\binom{N-1}{i-1}=\frac{(N-1)!}{(i-1)!((N-1)-(i-1))!}=\frac{(N-1)!}{(i-1)!(N-i)!}.$ Multiply the top by $N$, the bottom by $i$, and we get $\frac{N!}{i!(N-i)!}$, which is just $\binom{N}{i}$. So our sum is $$\sum_{i=1}^N\binom{N}{i}p^{i-1}(1-p)^{N-i}.$$ Multiply the inside by $p$, and divide by $p$ on the outside. We get $$\frac{1}{p}\sum_{i=1}^N\binom{N}{i}p^i(1-p)^{N-i}.$$ You have written down enough facts to take it the rest of the way. Our expression is equal to $$\frac{1}{p}\left(\sum_{i=0}^N\binom{N}{i}p^i(1-p)^{N-i} -\binom{N}{0}p^0(1-p)^N \right).$$ The $\sum_{i=0}^N$ stuff is just the binomial expansion of $(p+(1-p))^N$, so it is equal to $1$. Or alternately it is the sum of the binomial probabilities, so it is $1$. Finally, the term $\binom{N}{0}p^0(1-p)^N$ is an awkward way of writing $(1-p)^N$.
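A direct numerical check of the identity (a Python sketch; math.comb requires Python 3.8+, and the values of $N$ and $p$ are arbitrary):

    from math import comb

    N, p = 12, 0.3
    lhs = N * sum(comb(N-1, i-1) * p**(i-1) * (1-p)**(N-i) / i
                  for i in range(1, N+1))
    rhs = (1 - (1-p)**N) / p
    print(lhs, rhs)    # equal up to rounding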
H: Has this operator $0$ as an eigenvalue / where is my error? I know of a theorem that tells me, that every compact linear operator on an infinitedimensional Hilbert space has to have the eigenvalue $0$. On the other hand I have the operator \begin{eqnarray*} & T:\ell^{2}\rightarrow\ell^{2}\\ & \left(x_{1},x_{2},\ldots\right)\mapsto\left(\lambda_{1}x_{1},\lambda_{2}x_{2},\ldots\right), \end{eqnarray*} where $\left(\lambda_{n}\right)_{n}$ is a sequence of real nonnegative numbers, tending to $0$. Then this mapping can't have $0$ as an eigenvalue, since if that were the case, there had to be a $\left(y_{1},y_{2},\ldots\right)\in\ell^{2}$ with not all $y_{n}$'s being zero, such that $\lambda_{n}y_{n}=0$ for all $n\in\mathbb{N}$. Since $\lambda_{n}\neq0$, that would imply that all $y_{n}$'s are there. Where is my error ? The operator $T$ is compact and $\ell^{2}$ is infinitedimensional, so this should be a counterexample to the theorem above. AI: $0$ being in the spectrum means that $T$ isn't invertible, which in infinite-dimensional space no longer means that it's not injective. You should be able to show that $T$ isn't surjective.
H: If both $a$ and $b$ $\not \equiv 0 \pmod{p}$ then $ab \not\equiv 0 \pmod{p}$ Any help with this proof would be great. Not even sure where to begin. I'm pretty much a total newbie. If $a$ is not congruent to $0 \pmod{p}$ and $b$ is not congruent to $0 \pmod{p},$ where $p$ is a prime number, then $a*b$ is not congruent to $0 \pmod{p}.$ Also, not sure why it is necessary to assume that p is a prime number...? AI: Let $Q=\{q_1,\ldots,q_m\}$ be the primes that occur in the prime factorisation of $a$, and $R=\{r_1,\ldots,r_n\}$ be the primes that occur in the prime factorisation of $b$. Then $Q\cup R$ is the set of primes that occur in the prime factorisation of $ab$. Your premise is that neither $a$ nor $b$ is a multiple of $p$. So $p\notin Q$ and $p\notin R$. Therefore $p\notin Q\cup R$ and $ab$ is also not a multiple of $p$.
H: Under what conditions does the expression $(x+y)^4$ equal $x^4+y^4$? In problem 16(c) of chapter 1 of Calculus, Spivak asks the reader to determine the conditions under which the expression $(x + y)^4$ equals $x^4 + y^4$. Clearly, $$ (x + y)^4 = x^4 + y^4 \Leftrightarrow x = 0 \vee y = 0 \vee 4x^2 + 6xy + 4y^2 = 0 $$ From the preceding problem, we know that $0 \leq 4x^2 + 6xy + 4y^2$. If $x = 0$ and $y = 0$ then $4x^2 + 6xy + 4y^2 = 0$. If either $x = 0$ and $y \neq 0$ or $x \neq 0$ and $y = 0$ then $0 < 4x^2 + 6xy + 4y^2$. I want to show that if $x \neq 0$ and $y \neq 0$ then $0 < 4x^2 + 6xy + 4y^2$, which is intuitively true. In order to show that $0 < 4x^2 + 6xy + 4y^2$, it suffices to show that $6xy < 4x^2 + 4y^2$, but I'm not sure how to demonstrate that this inequality is true. I would presumably derive it from the ordered field axioms in conjunction with the local assumptions of the problem. AI: Use the identity $$4x^2+6xy+4y^2=x^2+y^2+(x+y)^2+(x+y)^2+(x+y)^2.$$ Any square $w^2$ is $\ge 0$, with equality iff $w=0$. The sum of objects that are $\ge 0$ is $\ge 0$, with equality only when all the objects are $0$. This forces $x=y=0$. Remark: The approach above is minimalist in that we use only facts true in all ordered fields. If we are willing to use properties such as existence of square roots of positive numbers, then we can complete the square in the traditional way.
H: Count the number of n-bit strings with an even number of zeros. I am currently self-studying introductory combinatorics by reading Introduction to combinatorial mathematics. I am currently in the first chapter, and I have a question regarding one of the examples. The question was asking to count the number of n-bit strings with an even number of zeros. The answer is of course $2^{n-1}$. The author gave 2 solutions. I however didn't completely understand what I think is the straightforward one. The solution I got was that he took out 1 bit, leaving $(n-1)$ bits, if the number of zeros is even in the $(n-1)$-bit number, then he will just append a 1, if not then he will append a zero. So in the end we just needed to count the number of $(n-1)$-bit strings. The other solution (the straightforward one) that I didn't understand examined the symmetry that half of the $2^n$ must have an even number of zeros, and the other half will have an odd number of zeros. I just don't get why this property must hold. I can understand that half of the $2^n$ numbers will have even parity, but I can't see how it holds for the parity of the number of zero or one bits. If anyone can show me how that property holds, I'd be very grateful. I'd also be interested to see different explanations and proofs if possible. Thank you. AI: Divide the $2^n$ strings into two groups, one with an odd number of zeros and one with an even number of zeros. If you take anything from the "odd" group, and flip the first bit, you will get something in the "even" group. Similarly, flipping the first bit of anything in the "even" group will produce something in the "odd" group. Once you realize that there is no chance of overlap (that is, flipping two different strings cannot give the same result), it means that the two groups have to be exactly the same size.
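Brute force confirms the count for small $n$ (a Python sketch with itertools):

    from itertools import product

    for n in range(1, 8):
        count = sum(1 for bits in product('01', repeat=n)
                    if bits.count('0') % 2 == 0)
        print(n, count, 2**(n-1))    # the two counts agree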
H: Does this sequence of operators in Hilbert space, given by an algorithm, terminate Let $H$ be an infinitedimensional Hilbert space and $T$ a compact selfadjoint operator in it. Consider the following Algorithm: Let $$ H_{1}=H,\ T_{1}=T $$ and let $\lambda_{1}$ be that eigenvalue of $T_{1}$ whose absolute value equals $\left\Vert T_{1}\right\Vert $ (there is a theorem that tells me that compact selfadjoint operators $T$ always possess an eigenvalue $\lambda$, such that $\left|\lambda\right|=\left\Vert T\right\Vert $) and $f_{1}$ the associated normed eigenvector. Now let $$ H_{2}=\left\{ f_{1}\right\} ^{\perp},\ T_{2}=T\Bigr|_{H_{2}} $$ (one can check that setting $T_{2}=T\Bigr|_{H_{2}}$ is welldefined) and $\lambda_{2}$ be again the eigenvalue of $T_{2}$ such that $\left|\lambda_{2}\right|=\left\Vert T_{2}\right\Vert $ and $f_{2}$ be again its corresponding normed eigenvector. Continuing let $$ H_{3}=\left\{ f_{1},f_{2}\right\} ^{\perp},\ T_{3}=T\Bigr|_{H_{3}} $$ and so on... This algorithm shall terminate if $T_{n}$ is the zero operator for some $n\in\mathbb{N}$. Now my question is: If $T$ isn't a finite rank operator, is it possible that this algorithm stops after a finite number of steps? If yes, can one please provide me with a detailed example of such an operator $T$ (or otherwise a proof that this algorithm never terminates)? AI: If the algorithm stops after $n$ iterations, then $T_n$ is the zero operator on $H_n$. Since $H = \operatorname{span}\{f_k\}_{k=1}^{n-1} \bigoplus H_n$, if $x \in H$, then $x = \sum_{k=1}^{n-1} \alpha_k f_k + y$, where $y \in H_n$. Then you have $$Tx = \sum_{k=1}^{n-1} \alpha_k T f_k +T y = \sum_{k=1}^{n-1} \alpha_k T f_k +T_n y = \sum_{k=1}^{n-1} \alpha_k T f_k \in \operatorname{span}\{f_k\}_{k=1}^{n-1}.$$ Hence $T$ is a finite rank operator.
H: Fibonacci Sequence Variants I learnt about finding the $n$th Fibonacci number using matrix exponentiation in $\log n$ time. Then I tried finding similar formula for sequences of the form $$S_{n} = S_{n-1} + S_{n-2} + a n + b$$ in terms of Fibonacci sequence. But I could not find expression except for $a = 0$, in which case it is $$S_n = F_n + b(F_n-1)$$ Is there an expression for the general case or is there any method to find the $S_n$ in terms of $F_n$, in which case I can calculate $S_n$ in $\log n$ time? AI: Let $T_n=S_n+an+3a+b$ for every $n\geqslant0$, then $(S_n)_{n\geqslant0}$ solves the recursion you are interested in if and only if $(T_n)_{n\geqslant0}$ solves the Fibonacci recursion $T_{n+2}=T_{n+1}+T_n$ for every $n\geqslant0$. The Fibonacci sequence $(F_n)_{n\geqslant0}$ starts from $(F_0,F_1)=(0,1)$, and the shifted Fibonacci sequence $(F_{n+1})_{n\geqslant0}$ starts from $(F_1,F_2)=(1,1)$, hence $T_n=T_0F_{n+1}+(T_1-T_0)F_n$ for every $n\geqslant0$. Finally, for every $n\geqslant0$, $$ S_n=(S_0+3a+b)F_{n+1}+(S_1-S_0+a)F_n-an-3a-b. $$ Edit: Here are some explanations on the introduction of $(T_n)_{n\geqslant0}$. Consider the transformation $\Phi:x\to\Phi(x)$ defined on the space of sequences $x=(x_n)_{n\geqslant0}$ by $\Phi(x)_n=x_{n+2}-x_{n+1}-x_n$ for every $n\geqslant0$. Then $\Phi$ is linear and one wants to solve $\Phi(x)=z^{a,b}$ with $z^{a,b}=(z^{a,b}_n)_{n\geqslant0}$ and $z^{a,b}_n=a(n+2)+b$ for every $n\geqslant0$. The Fibonacci sequence $F=(F_n)_{n\geqslant0}$ and the shifted Fibonacci sequence $G=(F_{n+1})_{n\geqslant0}$ are linearly independent and in the kernel of $\Phi$. Every sequence $x=(x_n)_{n\geqslant0}$ in the kernel of $\Phi$ is determined by $x_0$ and $x_1$ hence the kernel of $\Phi$ is the vector space generated by $F$ and $G$. Furthermore $\Phi(z^{a,b})=z^{-a,a-b}$ hence $\Phi(z^{-a,-a-b})=z^{a,b}$. This shows that every sequence $x$ such that $\Phi(x)=z^{a,b}$ is $x=\alpha F+\beta G+z^{-a,-a-b}$ for some scalar $\alpha$ and $\beta$. (Note that the $n$th term of $x-z^{-a,-a-b}$ is $x_n+an+3a+b$.)
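Putting the closed form together with fast-doubling Fibonacci gives the desired $O(\log n)$ computation (a Python sketch; the small loop at the end checks it against the direct recursion):

    def fib_pair(n):
        # fast doubling: returns (F(n), F(n+1)) in O(log n) steps
        if n == 0:
            return 0, 1
        a, b = fib_pair(n // 2)
        c = a * (2*b - a)          # F(2k)
        d = a*a + b*b              # F(2k+1)
        return (d, c + d) if n % 2 else (c, d)

    def S(n, s0, s1, a, b):
        fn, fn1 = fib_pair(n)
        return (s0 + 3*a + b)*fn1 + (s1 - s0 + a)*fn - a*n - 3*a - b

    # check against the recursion S_n = S_{n-1} + S_{n-2} + a n + b
    s0, s1, a, b = 1, 2, 3, 4
    seq = [s0, s1]
    for n in range(2, 10):
        seq.append(seq[-1] + seq[-2] + a*n + b)
    print(seq == [S(n, s0, s1, a, b) for n in range(10)])   # True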
H: Change of Variable for Lebesgue Integral on $\mathbb{R}^n$ Let $A=(a_{ij})$ be a real symmetric, positive definite $n \times n$ matrix and set $$ F(x_1,x_2,\ldots,x_n)= \sum_{i,j}a_{ij}x_ix_j.$$ I am trying to show that for any non-negative measurable function $\alpha$ on the real line $$ \int_{\mathbb{R}^n} \alpha(F(x_1,x_2,\ldots,x_n))\,dm= \frac{1}{\sqrt{\det A}}\int_{\mathbb{R}^n} \alpha(x_1^2+x_2^2+\ldots +x_n^2)\,dm,$$ where $dm$ is the Lebesgue measure on $\mathbb{R}^n$. My thinking was the following: somehow the change of variables theorem for $\mathbb{R}^n$ should be used, and for that we need the Jacobian matrix of a suitable transformation. Is $A$ the matrix representation of $F$? How do we write the Jacobian for $F$? Moreover, $A$ is diagonalizable and all of its eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_n$ are positive. Therefore, $D= P^{-1}A P$, where $D$ is the diagonal $n \times n$ matrix with the eigenvalues on the diagonal and $P$ is the orthogonal $n \times n$ matrix with the eigenvectors as column vectors. It then follows that $$\det A=\det D=\lambda_1\lambda_2\ldots\lambda_n.$$ How can we combine all these facts? Can you please help me? Thank you! AI: As you noticed, we can write $A=P^tDP$, where $D$ is diagonal with positive entries and $P$ is orthogonal. The LHS is $$I:=\int_{\Bbb R^n}\alpha(x^tP^tDPx)\,dx_1\dots dx_n.$$ Do the change of variables $y=Px$; since $P$ is orthogonal, the absolute value of the Jacobian determinant is $1$, hence $$I=\int_{\Bbb R^n}\alpha(y^tDy)\,dy_1\dots dy_n=\int_{\Bbb R^n}\alpha\left(\sum_{j=1}^n\lambda_jy_j^2\right)dy_1\dots dy_n.$$ Since $\lambda_j>0$, let $t_j:=\sqrt{\lambda_j}y_j$ for $1\leq j\leq n$. The Jacobian determinant of the inverse of this transformation is $\frac 1{\sqrt{\det D}}$, which is what we want since $\det D=\det A$.
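As a sanity check (not a proof), both sides can be compared numerically. Here is a Python sketch for $n=2$ with $\alpha(t)=e^{-t}$ and an illustrative positive definite $A$ of my own choosing; in this case both integrals should equal the Gaussian value $\pi/\sqrt{\det A}$.

```python
import numpy as np

# Illustrative 2x2 symmetric positive definite matrix; alpha(t) = exp(-t).
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
alpha = lambda t: np.exp(-t)

# Crude grid quadrature over a large box (the tails are negligible here).
g = np.linspace(-8, 8, 801)
X, Y = np.meshgrid(g, g)
dA = (g[1] - g[0]) ** 2                      # area element of the grid

F = A[0, 0]*X**2 + 2*A[0, 1]*X*Y + A[1, 1]*Y**2   # the quadratic form x^t A x
lhs = np.sum(alpha(F)) * dA
rhs = np.sum(alpha(X**2 + Y**2)) * dA / np.sqrt(np.linalg.det(A))

print(lhs, rhs, np.pi / np.sqrt(np.linalg.det(A)))  # all three ~2.3748
```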
H: Some Simple Algebra \begin{align*} x &= \frac 12 js + \frac 12 is \\ y &= \frac 14 is - \frac 14 js \end{align*} How can I find a \begin{align*} i &= \\ j &= \end{align*} conversion of this? Edit: I am not happy with the moderators' assumption about my syntax. x = (j * s / 2) + (i * s / 2) y = (i * s / 4) - (j * s / 4) is the proper format. AI: Looking at the two equations, they look pretty damn similar. There must be some relationship between $x$ and $2y$. Calculate $x+2y$ and $x-2y$: $$x+2y=is$$ $$x-2y=js$$ And you have $i$ and $j$: assuming $s\neq 0$, $$i=\frac{x+2y}{s},\qquad j=\frac{x-2y}{s}.$$ It's easy to take it from here.
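For completeness, the inversion can be checked symbolically (a SymPy sketch; the symbol names simply mirror the question):

```python
import sympy as sp

x, y, i, j, s = sp.symbols('x y i j s')
eq1 = sp.Eq(x, j*s/2 + i*s/2)   # x = (j*s/2) + (i*s/2)
eq2 = sp.Eq(y, i*s/4 - j*s/4)   # y = (i*s/4) - (j*s/4)
print(sp.solve([eq1, eq2], [i, j]))   # {i: (x + 2*y)/s, j: (x - 2*y)/s}
```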
H: When does a probability space have no null sets other than the empty set? I remember my lecturer saying that in some cases there will be no null set other than the trivial one (the empty set), but I can't remember exactly what the condition was. I've been thinking and convinced myself that for a finite set, equipped with the power set and a probability measure defined through the counting measure, the statement is pretty obviously true, but what about a countable state space? My reasoning is that this is a big deal for conditional expectations, since we would then not have to work with versions of the conditional expectation. Hope someone can help give me some insight, Henrik AI: Edit: I am assuming that all singletons are measurable. See the comments below. (This is a reasonable assumption. In many cases, the probability measure lives on a topological space in which singletons are closed and all Borel sets are measurable.) There is only the trivial null set iff every singleton has positive measure. So in the countable case, if the underlying space is $\mathbb N$ (including $0$), you could assign to each $\{n\}$, $n\in\mathbb N$, the measure $2^{-n-1}$. This generates a probability space with no non-trivial set of measure $0$. If your space is uncountable, you will always have a singleton of measure $0$, since there cannot be uncountably many pairwise disjoint measurable sets of positive measure. As for filtering away null sets: yes, you can always consider the $\sigma$-algebra of measurable sets and factor out the ideal of sets of measure $0$. This gives you the measure algebra of the space, and the only measure-$0$ element of this algebra is the equivalence class of the empty set, but this process doesn't give you a probability space as such, just a complete Boolean algebra.
H: Wiener process question When I look up the definition of 'Wiener process' on Wikipedia, it tells me: $W(0) = 0$ and $W(t) - W(s) \sim N(0, t-s)$. When I try to simulate this in matlab, I get different results when I define a vector $W1$ as $W1 = cumsum(dW)$, where $dW(j) \sim N(0, dt)$, and a vector $W2$ as $W2(0) = 0$ and $W2(j) \sim N(0, dt\cdot j)$. $W2$ apparently doesn't look like a Brownian motion, but it still seems compliant with the requirements. How come? AI: In $W_2$ you're plotting a sample from a fresh, independent instance of the Wiener process for each $j$. The successive values are uncorrelated with each other, so even though increments from time $0$, like $W_2(17)-W_2(0)$, have the distribution specified by the definition, $W_2(17)-W_2(16)$ doesn't; the definition requires every increment $W(t)-W(s)$ to be $N(0,t-s)$, not just increments from the origin.
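For reference, here is a minimal NumPy translation of the two constructions (the step count and $dt$ are illustrative choices): $W1$ accumulates independent $N(0,dt)$ increments and is a genuine discretized Wiener path, while $W2$ draws each value from a fresh $N(0, j\,dt)$ with no correlation across $j$, which is why its plot looks like noise rather than a path.

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt = 1000, 0.01
t = np.arange(1, N + 1) * dt

# W1: cumulative sum of independent N(0, dt) increments -> a Wiener path.
dW = np.sqrt(dt) * rng.standard_normal(N)
W1 = np.cumsum(dW)

# W2: each value drawn independently from N(0, t_j) -> correct marginal
# distributions, but no correlation between successive values, hence not
# a Wiener process (its increments W2(j) - W2(j-1) are not N(0, dt)).
W2 = np.sqrt(t) * rng.standard_normal(N)
```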
H: Solving this pie chart Hi, could anyone please guide me on how I would go about calculating the percentage of a specific sector from this pie chart? I think I am supposed to use data from the bar chart and apply it to the pie chart, but I don't really know how. The question is: According to the data provided, in 2005 the total expenditure (outgoings) on Advertising and Sales and Distribution was closest to which of the following amounts? a) 50,000,000 b) 35,000,000 c) 30,000,000 d) 5,000,000 e) 3,000,000. The answer is (a). AI: Ignore the bar graph. Looking at the relevant sections of the pie chart, estimate their area as a fraction of the whole. I would estimate it at about $1/3$ of the total area, but in any case it is certainly more than $1/4$ and less than $1/2$. Now compare the possible answers with the total of $165$ million: the target lies between $165/4 \approx 41$ million and $165/2 \approx 82$ million, and only answer (a), $50{,}000{,}000$, falls within that range.
H: Proving "$C^1([−1,1])$ is dense in the given space with the given norm" Define $$E = \left \{ f \in W^{1,2} (-1,1) \; | \; \| f \|_E := \left( \int_{-1}^1 (1-x^2 ) | f' (x) |^2 dx + \int_{-1}^1 | f(x) |^2 dx \right)^{\frac{1}{2}} < \infty \right \}.$$ How can I prove that $ C^1 ([-1,1]) $ is dense in $E$? Is this related to a Sobolev inequality? AI: I assume that Ann wants the space $$E = \left \{ f \in L^2(-1,1) \; \bigg| \; \| f \|_E := \left( \int_{-1}^1 (1-x^2 ) | f' (x) |^2 dx + \int_{-1}^1 | f(x) |^2 dx \right)^{\frac{1}{2}} < \infty \right \}.$$ That is, $f\in E$ if it belongs to $L^2(-1,1)$ and has a weak derivative $f^\prime$ on $(-1,1)$ satisfying $$ \int_{-1}^1 (1-x^2 ) | f' (x) |^2 dx <\infty.$$ Now for $f\in E$, let $p$ be a polynomial making $ \int_{-1}^1 (1-x^2 )\, |p(x)-f'(x)|^2 dx$ small, and then let $q$ be the polynomial with $q(0)=f(0)$ and $q^\prime=p$. From the aside below (applied to $f-q$), we see that $q$ is close to $f$ in $E$-norm, which shows that polynomials are dense in $E$. Since polynomials belong to $C^1([-1,1])$, this gives the density of $C^1([-1,1])$ in $E$. For a more general, multivariable version of this result, see "Are polynomials dense in Gaussian Sobolev space?" Aside: Suppose that $f\in E$ and $f(0)=0$. Then for $0\leq x\leq 1$ we have $f(x)=\int_0^x f^\prime(y)\,dy$. Cauchy-Schwarz gives $f(x)^2\leq x\int_0^x (f^\prime(y))^2\,dy,$ and integrating in $x$ we find $$\int^1_0 f(x)^2\,dx\leq \int_0^1 x \int_0^x (f^\prime(y))^2\,dy\,dx={1\over 2}\int_0^1 (1-y^2) (f^\prime(y))^2\,dy.$$ Adding a similar contribution from negative $x$-values, we conclude that $$\int^1_{-1} f(x)^2\,dx\leq {1\over 2}\int_{-1}^1 (1-y^2) (f^\prime(y))^2\,dy.$$
H: Prove that this is a Banach space Let $I=[0,1]$ and let $\displaystyle X:=\left\{f: I\times \mathbb R\to \mathbb R\colon \sup_{(t,x)}\frac{|f(t,x)|}{1+|x|}<\infty\right\}$. Prove that $X$, equipped with the norm $\displaystyle \|f\|:=\sup_{(t,x)}\frac{|f(t,x)|}{1+|x|}$, is a Banach space. My first attempt was to use the characterization that $X$ is a Banach space if and only if every absolutely convergent series converges, but with no success. Then I noticed that I can prove convergence on every ball of arbitrarily large radius, but I still cannot conclude on the whole of $I\times \mathbb R$. Maybe Ascoli-Arzelà, but, honestly, I don't know. Hope you can help me. Thank you. AI: The fact that $X$ is a normed space follows from the properties of the supremum. Take a Cauchy sequence $\{f_k\}\subset X$. In particular, for each $(t,x)\in [0,1]\times \Bbb R$, the sequence $\{f_k(t,x)\}$ is Cauchy, hence converges to some $f(t,x)$. Fix $\varepsilon>0$, and take $N=n(\varepsilon)$ such that the norm of $f_k-f_j$ is $\leq \varepsilon$ whenever $j,k\geq N$. We have for all $(t,x)\in I\times \Bbb R$ and $k,j\geq N$ that $$|f_j(t,x)-f_k(t,x)|\leq (1+|x|)\varepsilon.$$ Letting $k\to\infty$ in the last inequality shows both that $f_j-f$, and hence $f$, belongs to $X$, and that $\|f_j-f\|\leq\varepsilon$ for all $j\geq N$; this gives the convergence of $f_j$ to $f$ in $X$. An alternative way is the following. First show that if $S$ is a non-empty set, then the set $B(S)$ of bounded real-valued functions defined on $S$, endowed with the supremum norm, is a Banach space. In this particular case, note that $f\mapsto \frac{f(t,x)}{1+|x|}$ is an isometric linear bijection of $X$ onto $B(I\times\Bbb R)$, so completeness carries over.
H: Minimize sum of smallest and largest among integers on the real line. Suppose there are 3 non-negative integers $x$, $y$ and $z$ on the real line. We are told that $x + y + z = 300$. Without loss of generality, assume $x$ to be the smallest integer, and $z$ to be the largest. How do I minimize $(x + z)$? Attempt: $x + z = 300 - y$, so for a start I should maximize $y$. This occurs at $y = z - 1$. So, we have $x + 2z = 301$. Now, $z = \dfrac {301}2 - \dfrac x2$. $\dfrac {dz}{dx} = -\dfrac 12$. Increasing $x$ by $1$ decreases $z$ only by $\dfrac12$. So, I should pick the smallest possible $x$, which is $1$. Then, $z = 150$. $\min (x+z) = 151$. Questions Is my logic correct? Is there a systematic way to solve questions of this kind? i.e. given non-negative numbers on the real line that sum up to a fixed value, how to minimize the sum of the largest and smallest of them? AI: Your logic is fine. Working with integers, there are sometimes "end effects". You seem to be requiring that $y \ne z$, and an alternate solution is $(0,149,151)$, but that has the same sum of $x+z$. Without the restriction that $y \ne z$ you could have $(0,150,150)$ for a sum of $150$. Your approach is quite systematic. If you had to have $7$ different non-negative integers summing to $300$ and wanted to minimize the sum of smallest plus largest, you would argue the same way: you want the middle ones to be as large as possible, so you have six numbers that add to $300$ (or a little less), so the average one is $50$; they are $(47,48,49,50,51,52)$, and you need to add a $3$ to make $300$, so the sum is $3+52=55$. You don't really need the derivative here.
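A brute-force check confirms the end effects discussed above (a quick Python sketch; the `distinct` flag toggles whether the three integers must all differ):

```python
def min_xz(total=300, distinct=False):
    """Minimum of x + z over non-negative integers x <= y <= z with x+y+z = total."""
    best = None
    for x in range(total + 1):
        for y in range(x, total + 1):
            z = total - x - y
            if z < y:
                continue                       # enforce y <= z
            if distinct and (x == y or y == z):
                continue                       # require x < y < z
            if best is None or x + z < best:
                best = x + z
    return best

print(min_xz(distinct=True))   # 151, attained e.g. at (1, 149, 150) or (0, 149, 151)
print(min_xz(distinct=False))  # 150, attained at (0, 150, 150)
```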
H: Function $f(x)$ similar to $\exp(x)$ where $-f(x)$ is approximately $f(-x)$ I am wondering if there is a function $f(x)$ "similar" to the exponential function $\exp(x)$ such that: $-f(x) \approx f(-x)$. I would also like $f(x)$ to have the following property: $\frac{{f(a)}}{{f(b)}} = f(a - b)$, or alternatively, $\frac{{f(a)}}{{f(b)}} \approx f(a - b)$. AI: You might be interested in the hyperbolic sine "sinh". It is antisymmetric (in fact $-\sinh(x)=\sinh(-x)$ holds exactly, not just approximately), and its asymptotic behaviour for $x\to\infty$ is similar to that of the exponential function: $\sinh(x)\sim \frac{e^x}{2}$.
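A quick numerical illustration of these two claims (Python; the test point $x=5$ is an arbitrary choice):

```python
import math

x = 5.0
print(math.sinh(-x), -math.sinh(x))    # identical: sinh is exactly odd
print(math.sinh(x), math.exp(x) / 2)   # 74.2032... vs 74.2066...: sinh(x) ~ e^x / 2
```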
H: Prove $\cos x= 2 \cos^2{\frac{x}{2}}-1=1-2\sin^2{\frac{x}{2}}$ How would I prove the following trig identity? $$\cos x= 2 \cos^2{\frac{x}{2}}-1=1-2\sin^2{\frac{x}{2}}$$ I am not sure where to begin; any help would be useful. AI: I am sure you know the formula $\cos(a+b) = \cos a \cos b - \sin a \sin b$. Let $a=b = \frac{x}{2}$, which gives $\cos x = (\cos \frac{x}{2})^2 - (\sin \frac{x}{2})^2$. Since $(\cos \frac{x}{2})^2 + (\sin \frac{x}{2})^2 = 1$, this gives $\cos x = (\cos \frac{x}{2})^2 + (\cos \frac{x}{2})^2 -1 = 2(\cos \frac{x}{2})^2 - 1$, which is your first formula above. The other follows a similar approach, except you replace the $(\cos \frac{x}{2})^2$ term instead of the $(\sin \frac{x}{2})^2$ term. Here is the second part explicitly: We already have $\cos x = (\cos \frac{x}{2})^2 - (\sin \frac{x}{2})^2$. Since $(\cos \frac{x}{2})^2 + (\sin \frac{x}{2})^2 = 1$, this gives $(\cos \frac{x}{2})^2 = 1-(\sin \frac{x}{2})^2$. Substituting gives $\cos x = 1-(\sin \frac{x}{2})^2 - (\sin \frac{x}{2})^2 = 1-2 (\sin \frac{x}{2})^2$.
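A quick numerical spot check of the two identities (Python; the angle is arbitrary):

```python
import math

x = 1.234  # arbitrary test angle in radians
print(math.cos(x))                   # all three lines print the same value
print(2 * math.cos(x / 2) ** 2 - 1)
print(1 - 2 * math.sin(x / 2) ** 2)
```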
H: If $\lim_{n \to \infty }n \ln\left ( \frac{a_{n}}{a_{n+1}} \right )=g>1$, then $\sum_{n=1}^{\infty }a_{n}$ is convergent Consider the series $\sum_{n=1}^{\infty }a_{n}$ where $a_{n}> 0$ for all $n\in \mathbb{N}$. Assume that $\lim_{n \to \infty }n \ln\left ( \frac{a_{n}}{a_{n+1}} \right )=g$. I need to prove that if $g> 1$, then $\sum_{n=1}^{\infty }a_{n}$ converges. Similarly, if $g< 1$, then $\sum_{n=1}^{\infty }a_{n}$ diverges. Here is the solution to this problem as given to me; however, I couldn't understand the last part of the proof. If someone has an idea, please share. In the solution to this problem, we need the following lemma (which I will not prove): Lemma: Let $\sum_{n=1}^{\infty }a_{n}$ and $\sum_{n=1}^{\infty }b_{n}$ be two series of positive terms that satisfy the following inequality: $\frac{a_{n+1}}{a_{n}}\leq \frac{b_{n+1}}{b_{n}}$ for $n\geq K$. One can prove that if $\sum_{n=1}^{\infty }b_{n}$ converges, then $\sum_{n=1}^{\infty }a_{n}$ also converges. Suppose $g>1$, and let $\epsilon > 0$ be small enough that $g-\epsilon>1$. Then, for sufficiently large $n$: $n\ln\left ( \frac{a_{n}}{a_{n+1}} \right )>g-\epsilon$. Also, from the inequality $\left ( 1+\frac{1}{n} \right )^{n}< e< \left ( 1+\frac{1}{n} \right )^{n+1}$, it follows that $n\ln\left ( 1+\frac{1}{n} \right )<1$. So, $n\ln\left ( 1+\frac{1}{n} \right )<1<g-\epsilon <n\ln\left ( \frac{a_{n}}{a_{n+1}} \right ).$ I can understand everything up to this part. The part I couldn't get is the following: The solution says that it follows from the above inequality that $$\frac{a_{n+1}}{a_{n}}<\frac{\frac{1}{(n+1)^{g-\epsilon }}}{\frac{1}{n^{g-\epsilon }}}.\tag{*}\label{lab} $$ Then we use the previous lemma to conclude the convergence of the series. Can anyone tell me how to derive the inequality \eqref{lab}? Thanks. AI: Let $u=g-\epsilon$; then $n\log(a_n/a_{n+1})\gt u$, hence $a_{n+1}/a_n\lt\mathrm e^{-u/n}$. Since $\mathrm e\gt\left(1+\frac1n\right)^n$, this yields $\mathrm e^{-u/n}\lt\left(1+\frac1n\right)^{-u}$. The RHS is the odd-looking ratio $\frac{\frac1{(n+1)^u}}{\frac1{n^u}}$ in the RHS of $(*)$. Edit: The other case is $g\lt1$; then $n\log(a_n/a_{n+1})\lt1$ for every $n$ large enough, hence $a_{n+1}/a_n\gt\mathrm e^{-1/n}$. Since $\mathrm e\lt\left(1+\frac1{n-1}\right)^{n}$, $\mathrm e^{-1/n}\gt\frac{n-1}{n}$. Thus, the sequence $(na_{n+1})_n$ is nondecreasing for $n$ large enough. In particular, $a_{n+1}\gt C/n$ for every $n$ large enough, for some positive $C$, and this implies that the series $\sum\limits_na_n$ diverges.
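A numerical illustration of the criterion (a Python sketch; the test sequences $a_n = n^{-p}$ are my own examples): for $a_n=1/n^2$ the limit is $g=2>1$ and the series converges, while for $a_n=1/\sqrt n$ the limit is $g=1/2<1$ and it diverges.

```python
import math

def g_approx(a, n):
    """Approximate g = lim n * ln(a_n / a_{n+1}) by evaluating at a large index n."""
    return n * math.log(a(n) / a(n + 1))

for p in (2.0, 0.5):                      # a_n = 1/n**p has g = p
    a = lambda n, p=p: 1.0 / n ** p
    print(p, g_approx(a, 10 ** 6))        # ~2.0 (g>1: converges), ~0.5 (g<1: diverges)
```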
H: Proof of convergence of a sum of mean-consistent estimators After a few weeks off I am back at my self-study of measure-theoretic probability. As always, I thank the community for any details and answers they can provide as I try to work myself through these exercises. Suppose $\theta_n$ and $\phi_n$ are mean-consistent estimators for $\theta,\phi$. Prove: $\theta_n +\phi_n\rightarrow^{\mathcal{L}_1}\theta +\phi$ and $\max(\theta_n,\phi_n)\rightarrow^{\mathcal{L}_1}\max(\theta,\phi)$. Thus far I have gathered/surmised that the definition of mean-consistent estimators essentially says: $\theta_n\rightarrow^{\mathcal{L}_1}\theta$ and $\phi_n\rightarrow^{\mathcal{L}_1}\phi$. Even under the assumption that $\theta_n\rightarrow\theta$ a.s., I am somewhat confused about how to proceed. AI: Integrate the inequalities $$ |(\theta_n+\phi_n)-(\theta+\phi)|\leqslant|\theta_n-\theta|+|\phi_n-\phi|, $$ and $$ |\max\{\theta_n,\phi_n\}-\max\{\theta,\phi\}|\leqslant|\theta_n-\theta|+|\phi_n-\phi|; $$ both right-hand sides have expectations tending to $0$. The second inequality follows from the identity $\max\{a,b\}=\frac12(a+b+|a-b|)$ together with the reverse triangle inequality $\big||\theta_n-\phi_n|-|\theta-\phi|\big|\leqslant|\theta_n-\theta|+|\phi_n-\phi|$.
H: Word problem using a bar chart I cannot manage to solve this problem; any suggestions on how to solve it? If in 2006 Pharmacom spent the same dollar amount on administration (administration outgoings) as in 2005, but the total outgoings increased by ten percent, approximately what fraction of the total outgoings in 2006 would administration outgoings represent? a) 1/2 b) 1/3 c) 1/4 d) 1/10 e) 1/20. The answer is (d). AI: Just a hint. Call $a_n$ the administrative expenses in year $n$ and $t_n$ the total expenses. From the question it is given that: $$1.1t_{2005}=t_{2006}$$ and that: $$a_{2005}=a_{2006}.$$ Ergo: $$\frac{a_{2005}}{1.1t_{2005}}=\frac{a_{2006}}{t_{2006}}$$ We can read off $\displaystyle\frac{a_{2005}}{t_{2005}}$ from the pie chart. The question is: find $\displaystyle\frac{a_{2006}}{t_{2006}}$. (For instance, if the chart shows administration at about $11\%$ of the 2005 total, then $\frac{0.11}{1.1}=0.10=\frac1{10}$, matching answer (d).)
H: Prove $\sin x=2\sin\frac{x}{2}\cos\frac{x}{2}$ How would I prove the following trig identity? $$\sin x=2\sin\frac{x}{2}\cos\frac{x}{2}$$ I know $$\sin(a+b)=\sin a\cos b+\cos a \sin b.$$ So I wrote $$\sin\frac{x}{2}\cos\frac{x}{2}+\cos\frac{x}{2}\sin\frac{x}{2}.$$ But what technique would I have to use to finish the problem? AI: Since $\sin(a+b) = \sin a \cos b + \cos a \sin b$, letting $a = b = \frac{x}{2}$ gives $\sin x = \sin \frac{x}{2} \cos \frac{x}{2} + \cos \frac{x}{2} \sin \frac{x}{2} = 2 \sin \frac{x}{2} \cos \frac{x}{2}$, which is the identity.
H: Show that $E(|X|)<\infty$ and $E(X_n)\rightarrow E(X)$ After a few weeks off I am back at my self-study of measure-theoretic probability. As always, I thank the community for any details and answers they can provide as I try to work myself through these exercises. Perhaps this is an application of Levi's (monotone convergence) theorem? The question is: Suppose $X_1,X_2,\ldots$ is a sequence of random variables, not necessarily nonnegative, and $X_n\uparrow X$ a.s. Also assume $\sup_n E(|X_n|)<\infty$. Show that $E(|X|)<\infty$ and $E(X_n)\rightarrow E(X)$. AI: First, Fatou's lemma gives $E (\liminf_n |X_n|) \leq \liminf_n E |X_n| \leq \sup_n E |X_n| < \infty$. Since $\liminf_n |X_n| = |X|$ a.e., this shows that $E |X| < \infty$. Next, because the sequence is nondecreasing, we have $0 \leq X - X_n \leq X - X_1$ a.e. We know that $E |X-X_1| \leq E |X| + E |X_1| < \infty$, so apply the DCT (with dominating function $X - X_1$) to get $\lim_n E (X - X_n) = 0$, from which the result follows.