H: Intersection of $|z_1 - x|=r$ and $|z_2 - y|=r$ Let $x,y \in \mathbb{R}^k$ ($k\geq 3$), $|x-y|=d>0$ and $r>0$. Prove that if $2r>d$, then there are infinitely many $z\in \mathbb{R}^k$ such that $|z-x|=|z-y|=r$. Here's what I have proved: the existence of such $z$, and that $|z-x|=|z-y|=r$ iff $(z-(x+y)/2)\cdot(x-y)=0$ and $|z-(x+y)/2|=\sqrt{r^2 - d^2/4}$. I know exactly what's happening here and that there are infinitely many such $z$, but cannot show this logically. This problem is from Rudin's analysis book, so no topology please. Edited: The only thing I need to prove here is that there exist infinitely many $w\in \mathbb{R}^k$ such that $(x-y)\cdot w=0$ and $|w|=1$ (writing $w$ instead of $d$, since $d$ already denotes the distance). AI: First, take the hyperplane $\mathbf{P} = \{z : |z-x| = |z-y|\}$; it passes through the midpoint $u=\displaystyle\frac{x+y}{2}$ and has normal $x-y$. Now let $\mathbf{C} = \{z : |z-u| = r'\}$, with $r' = \sqrt{r^2-d^2/4}$ (real and positive precisely because $2r>d$), be the hypersphere of center $u$ and radius $r'$, exactly as in your own derivation. All we have to prove now is that $\mathbf{S}=\mathbf{C}\cap\mathbf{P}$ is infinite. We have $\dim \mathbf{P}=k-1\geq2$, so the subspace $(x-y)^{\perp}$ has an orthonormal basis $(v_1,\dots,v_{k-1})$. Define $$\mathbf{K}=\left\{z_\theta = u+r'(v_1\cos\theta+v_2\sin\theta) : \theta\in[0,2\pi)\right\}.$$ Distinct values of $\theta$ give distinct points (since $v_1,v_2$ are orthonormal), so $\mathbf{K}$ is infinite. We can verify that every $z_\theta\in\mathbf{K}$ satisfies $|z_\theta-x|= |z_\theta-y| = r$: indeed $$z_\theta-x=\frac{x+y}{2}-x+r'(v_1\cos\theta+v_2\sin\theta) = \frac{y-x}{2}+r'(v_1 \cos\theta+v_2\sin\theta).$$ Now $y-x$ is orthogonal to $\mathbf{P}$, and hence to $v_1$ and $v_2$, so by Pythagoras $$|z_\theta-x|^2 = \frac{|y-x|^2}{4} + r'^2\,|v_1\cos\theta + v_2\sin\theta|^2 = \frac{d^2}{4}+r'^2=r^2,$$ and the same computation shows $|z_\theta-y|=r$.
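A quick numerical sanity check of this construction; the particular $x$, $y$, $r$ and the orthonormal vectors below are arbitrary choices for $k=3$:

```python
import math

# Arbitrary test data: x, y in R^3 with d = |x - y| = 1, and r = 1 (so 2r > d)
x, y, r = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), 1.0
d = math.dist(x, y)
u = tuple((xi + yi) / 2 for xi, yi in zip(x, y))   # midpoint of x and y
rp = math.sqrt(r**2 - d**2 / 4)                    # radius r' of the circle S
v1, v2 = (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)          # orthonormal, both ⊥ (x - y)

def z(theta):
    """Point z_theta = u + r'(v1 cos θ + v2 sin θ) on the intersection."""
    return tuple(ui + rp * (a * math.cos(theta) + b * math.sin(theta))
                 for ui, a, b in zip(u, v1, v2))

for k in range(8):                                 # infinitely many in principle
    p = z(2 * math.pi * k / 8)
    assert abs(math.dist(p, x) - r) < 1e-12
    assert abs(math.dist(p, y) - r) < 1e-12
```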
H: Proving every element in $1+8\cdot \mathbb{Z}_{2}$ is a square Let $\mathbb{Z}_{p}$ denote the ring of $p$-adic numbers. How can I prove that every element of $1+8\cdot \mathbb{Z}_{2}$ is a square? I am not comfortable working in the $\mathbb{Z}_{p}$'s, so a detailed solution would be of great help, and I would learn for the future how to deal with such problems. Thanks. AI: Indeed this is a Hensel's Lemma calculation, in fact a very standard one in the theory of local fields. It often goes under the name Local Square Theorem. For a statement and proof, see e.g. Lemma 2.11 of these notes.
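The Hensel lifting here is concrete enough to run by hand: if $x^2 \equiv u \pmod{2^k}$ with $x$ odd and $k \ge 3$, then exactly one of $x$, $x + 2^{k-1}$ squares to $u$ modulo $2^{k+1}$, since $(x+2^{k-1})^2 \equiv x^2 + 2^k \pmod{2^{k+1}}$. A sketch (the function name is my own):

```python
def two_adic_sqrt(u, prec):
    """Return x with x*x ≡ u (mod 2**prec), for u ≡ 1 (mod 8).

    Lifting step: if x*x ≡ u (mod 2**k) fails modulo 2**(k+1), replace
    x by x + 2**(k-1); for odd x and k ≥ 3 this toggles the defect,
    because (x + 2**(k-1))**2 ≡ x**2 + 2**k (mod 2**(k+1)).
    """
    if u % 8 != 1:
        raise ValueError("u must be ≡ 1 (mod 8)")
    x = 1                                # 1*1 ≡ u (mod 8) already
    for k in range(3, prec):
        if (x * x - u) % 2**(k + 1) != 0:
            x += 2**(k - 1)
    return x % 2**prec

# 17 ≡ 1 (mod 8), so 17 is a square in Z_2; here is a root mod 2**40
root = two_adic_sqrt(17, 40)
assert (root * root - 17) % 2**40 == 0
```

Each pass through the loop fixes one more binary digit of the root, which is exactly the 2-adic convergence that Hensel's Lemma packages abstractly.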
H: Best known bounds for Ramsey numbers I realize a similar question has been asked before, but what I want to know is a little different and is not answered by the link in the answer to that question. I am interested in knowing the best known general upper and lower bounds (non-asymptotic) for an arbitrary Ramsey number $R(k,l)$, and similarly the best known general upper and lower bounds for an arbitrary diagonal Ramsey number $R(k,k)$. It would be good if someone could also tell me the asymptotic bounds in these cases. (I am not sure whether Wikipedia is up to date.) Thanks. AI: I suggest you take a look at this paper by Radziszowski. This site is also up to date with new results about $R(k,l)$.
H: How to get upper-left, upper-right, lower-left and lower-right corners XY coordinates from a rectangle shape. How can I get the upper-left, upper-right, lower-left and lower-right corners' XY coordinates of a rectangle shape when I have the following data available to me? positionX positionY width height rotation Is there an easy way of doing this? Clarification: The rectangle is being rotated at positionX and positionY, the upper-left corner when no rotation is applied (rotation=0). AI: First, let me introduce shorter notation: positionX = $x$, positionY = $y$, width = $w$, height = $h$, rotation = $\theta$. Thus, our top-left point is $(x,y)$. The other 3 points will be (without rotation): $(x+w, y)$, $(x+w, y-h)$ and $(x, y-h)$. Since we are rotating the complete geometry counterclockwise about the point $(x,y)$ by an angle of $\theta$, the horizontal edge turns to direction angle $\theta$ and the downward edge to direction angle $\frac{3\pi}{2}+\theta$, so we'll have new points given as: $(x, y)$ $(x + w\cos(\theta),\ y + w\sin(\theta))$ $(x + w\cos(\theta) + h\cos(\frac{3\pi}{2}+\theta),\ y + w\sin(\theta) + h\sin(\frac{3\pi}{2}+\theta))$ $(x + h\cos(\frac{3\pi}{2}+\theta),\ y + h\sin(\frac{3\pi}{2}+\theta))$ which, on simplification (using $\cos(\frac{3\pi}{2}+\theta)=\sin(\theta)$ and $\sin(\frac{3\pi}{2}+\theta)=-\cos(\theta)$), give us the coordinates $(x, y)$ $(x + w\cos(\theta),\ y + w\sin(\theta))$ $(x + w\cos(\theta) + h\sin(\theta),\ y + w\sin(\theta) - h\cos(\theta))$ $(x + h\sin(\theta),\ y - h\cos(\theta))$ KEY: 1. -> Top-Left corner, 2. -> Top-Right Corner, 3. -> Bottom-Right Corner and 4. -> Bottom-Left Corner.
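A small sketch of the computation, assuming the mathematical convention ($y$ increases upward, positive $\theta$ is counterclockwise); for screen coordinates with $y$ down, flip the sign of $\theta$:

```python
import math

def rect_corners(x, y, w, h, theta):
    """Corners of a w-by-h rectangle, top-left at (x, y), rotated
    counterclockwise by theta about (x, y); y axis points up."""
    c, s = math.cos(theta), math.sin(theta)
    def rot(dx, dy):
        # rotate the offset (dx, dy) by theta, then translate by (x, y)
        return (x + dx * c - dy * s, y + dx * s + dy * c)
    return [rot(0, 0),      # 1. top-left
            rot(w, 0),      # 2. top-right
            rot(w, -h),     # 3. bottom-right
            rot(0, -h)]     # 4. bottom-left

# A quarter turn of a 2x1 rectangle about the origin:
corners = rect_corners(0, 0, 2, 1, math.pi / 2)
# corners ≈ [(0, 0), (0, 2), (1, 2), (1, 0)]
```

The `rot` helper is just the standard rotation matrix applied to the corner's offset from the pivot, which matches the simplified formulas above.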
H: Find monic degree 3 polynomial in $\mathbb Z_p[x]$ then factorize Let $f = 15x^4+22x^3-x$ be a polynomial in $\mathbb Z_p[x]$; find the first prime value $p$ for which $f$ becomes monic of degree 3. Then factorize $f$ in $\mathbb Z_3[x]$ as a product of irreducible factors. Find the $p$ value In order for $f$ to be monic of degree 3, I need to find a $p$ value so that: $$\begin{aligned} 15 \equiv_p 0 \end{aligned}$$ $$\begin{aligned} 22 \equiv_p 1 \end{aligned}$$ It's easy to see that $p = 3$, so I will be working in $\mathbb Z_3[x]$ and $f = x^3+2x$. Factorizing $f$ I need to find all the solutions to $x^3+2x=0$ in order to get it factorized as a product of irreducible factors. $$\begin{aligned} x^3+2x = x(x^2+2) = 0 \Leftrightarrow x = 0 \vee (x^2+2) = 0 \end{aligned}$$ Let's examine $\Delta = b^2-4ac$ of $(x^2+2)$: $$\begin{aligned} \Delta = 0 - 8 = -8 \equiv_3 1 \Rightarrow \end{aligned}$$ $$\begin{aligned} \frac{-b \pm \sqrt{\Delta}}{2a} = \frac{0 \pm \sqrt{1}}{2} \Rightarrow x_1 = +\frac{1}{2} \equiv_3 2 \text{, } x_2=-\frac{1}{2} \equiv_3 1 \end{aligned}$$ So $f = x^3+2x= x(x+2)(x+1)$. As my algebra exam day is fast approaching I really am in need of knowing if I've got this right or not. Sorry if I included trivial calculations in it, but I have to be extra sure I have understood everything. I have a question, though. Is it wrong to assume $x^3 = 1$ as we are dealing with $\mathbb Z_3$ elements? Doesn't the modulo apply to the exponents as well? AI: Your calculation seems right, though you could have avoided using the quadratic formula by noting that $x^2+2 = x^2-1 = (x+1)(x-1)$ in $\mathbb Z/3\mathbb Z$. To answer your question on reducing exponents: in $\mathbb Z/3\mathbb Z$ every nonzero element $x$ satisfies $x^2 = 1$, so you can reduce exponents modulo $2$ (not modulo $3$); in particular $x^3 = x$ for nonzero $x$, not $1$. In general, if $m$ is an integer and $x$ is coprime to $m$ then $x^i \equiv x^j \mod m$ if $i \equiv j \mod {\varphi(m)}$. Here, $\varphi$ is Euler's totient function.
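The factorization can be confirmed by comparing coefficients. Note that checking values at $x = 0, 1, 2$ would prove nothing here: both $x^3+2x$ and $x(x+1)(x+2)$ vanish at every point of $\mathbb Z_3$ (by Fermat, $x^3 + 2x \equiv 3x \equiv 0$), yet they are nonzero polynomials. A small sketch:

```python
def polymul(a, b, p):
    """Multiply polynomials given as coefficient lists (constant term
    first), reducing coefficients mod p."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

# x * (x + 1) * (x + 2) over GF(3)
prod = polymul(polymul([0, 1], [1, 1], 3), [2, 1], 3)
assert prod == [0, 2, 0, 1]          # coefficients of 2x + x^3 = x^3 + 2x

# as *functions* on Z_3, both sides are identically zero, which is why
# a value check alone would be inconclusive:
assert all((x**3 + 2*x) % 3 == 0 == (x * (x + 1) * (x + 2)) % 3
           for x in range(3))
```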
If $m=p$ is a prime then $\varphi(p) = p-1$, so $x^{p-1} \equiv 1 \mod p$ if $p\nmid x$ (this is Fermat's little theorem).
H: Prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$ For an odd prime $p$, prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$. So I have assumed $r$ has order $k$ modulo $p$, so $k\mid p-1$. Then if I am able to show that $p-1\mid k$ I am done, but I haven't been able to show that. Can anybody help me with this method? Any other type of proof is also welcome. AI: For any $a$ relatively prime to $p$, $a$ is also relatively prime to $p^n$, so since $r$ is a primitive root of $p^n$ there is an integer $k$ such that $r^k\equiv a \pmod{p^n}$, and hence such that $r^k \equiv a\pmod{p}$. Thus every residue class coprime to $p$ is a power of $r$ modulo $p$, i.e. $r$ is a primitive root of $p$.
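A brute-force check of the statement for a small case ($p = 5$, $n = 2$ is an arbitrary choice):

```python
from math import gcd

def mult_order(a, m):
    """Multiplicative order of a modulo m (a must be coprime to m)."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

def primitive_roots(m, group_order):
    """Residues mod m whose order equals the full group order."""
    return {a for a in range(1, m)
            if gcd(a, m) == 1 and mult_order(a, m) == group_order}

p, n = 5, 2
roots_pn = primitive_roots(p**n, p**(n - 1) * (p - 1))  # order φ(p^n)
roots_p = primitive_roots(p, p - 1)                     # order φ(p)

assert roots_pn                                   # sanity: some exist
assert all(r % p in roots_p for r in roots_pn)    # each reduces to one mod p
```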
H: Group homomorphism to the multiplicative subgroup of a field Let $G$ be a finite group and let $\varphi:G \rightarrow F^{\times}$ be a homomorphism where $F$ is a field. $H$ is a subgroup of $G$ that contains $Ker(\varphi)$. Prove that $H \lhd G$ and that $G/H$ is cyclic. I have no idea how to prove this. Thank you. AI: Note that the finiteness of $G$ is essential here. Let $K := \ker \varphi$. Then $K \leq H \leq G$ and $K$ is normal in $G$. By the isomorphism theorem, $G/K \cong \mathrm{im}\, \varphi$. This is a finite subgroup of $F^\times$, hence cyclic. Consider $H \hookrightarrow G \twoheadrightarrow G/K$. The kernel of this map is $H \cap K = K$, so $K$ is normal in $H$ and $H/K$ is a subgroup of $G/K$. Since $G/K$ is cyclic, hence abelian, we have $H/K \unlhd G/K$. Using this, we can show $H \unlhd G$: Let $x \in G$ and $h \in H$. Since $H/K \unlhd G/K$, there is an $h' \in H$ s.t. $x^{-1}hx \equiv h' \mod K$, i.e. $x^{-1}hx = h'k$ for some $k \in K$. The right side is an element of $HK = H$, hence $x^{-1}Hx \subseteq H$. This shows $H \unlhd G$. Now, by another isomorphism theorem, $G/H \cong (G/K)/(H/K)$. This is a quotient of a cyclic group, hence cyclic.
H: Where's the problem in this equation? Resulting in $4 = 5$ I just saw this derivation and I can't find where the problem is: $$25-45 = 16-36$$ $$25- 2 \cdot 5 \cdot \frac{9}{2} = 16- 2\cdot4\cdot\frac{9}{2}$$ $$25 - 2\cdot 5\cdot \frac{9}{2} + \frac{81}{4} = 16 - 2\cdot 4 \cdot \frac{9}{2} + \frac{81}{4}$$ $$\left( 5-\frac{9}{2} \right) ^2 = \left (4-\frac{9}{2} \right) ^2$$ $$5-\frac{9}{2} = 4 - \frac{9}{2}$$ $$5=4$$ AI: $a^2 = b^2$ does not imply that $a = b$; it only implies $a = \pm b$. Here $5-\frac{9}{2} = \frac{1}{2}$ and $4-\frac{9}{2} = -\frac{1}{2}$: their squares are indeed equal, but the numbers themselves differ in sign, so the step from the fourth line to the fifth is the invalid one.
H: Normal to the plane under the condition describes the cone The plane $lx+my+nz=0$ moves in such a way that its intersections with the planes $ax+by+cz+d=0$ and $a'x + b'y + c'z+d'=0$ are perpendicular. Show that the normal to the plane through the origin describes, in general, a cone of the second degree, and find its equation. My analysis Here the given plane $lx+my+nz=0$ passes through the origin, so considering a normal dropped from the origin is an incorrect term Where am I going wrong? Soham AI: The intersection with the first plane gives a line with the following direction ratios (arrive at this just by linear algebra: the direction is the cross product of the two normals): $$\frac x{cm-bn}=\frac y{an-cl}=\frac z{bl-am}$$ Similarly the intersection with the second plane is: $$\frac x{c'm-b'n}=\frac y{a'n-c'l}=\frac z{b'l-a'm}$$ For these lines to be perpendicular the direction ratios' inner product should be 0, i.e., $$(cm-bn)(c'm-b'n)+(an-cl)(a'n-c'l)+(bl-am)(b'l-a'm)=0$$ i.e., $$(bb'+cc')l^2+(cc'+aa')m^2+(aa'+bb')n^2-(ab'+a'b)lm-(bc'+b'c)mn-(ac'+a'c)ln=0$$ Since the normal through the origin has direction ratios $(l,m,n)$, replacing $(l,m,n)$ by the coordinates $(x,y,z)$ of a point on that normal gives a homogeneous second-degree equation; try to show that this is the equation of a cone.
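The expansion from the product form to the quadratic form can be double-checked numerically; a sketch (the sample coefficient values are arbitrary, and `a2, b2, c2` stand for $a', b', c'$):

```python
def lhs(a, b, c, a2, b2, c2, l, m, n):
    # inner product of the two line directions (cross products of normals)
    return ((c*m - b*n) * (c2*m - b2*n)
            + (a*n - c*l) * (a2*n - c2*l)
            + (b*l - a*m) * (b2*l - a2*m))

def rhs(a, b, c, a2, b2, c2, l, m, n):
    # the expanded second-degree form from the answer
    return ((b*b2 + c*c2)*l*l + (c*c2 + a*a2)*m*m + (a*a2 + b*b2)*n*n
            - (a*b2 + a2*b)*l*m - (b*c2 + b2*c)*m*n - (a*c2 + a2*c)*l*n)

samples = [(1, 2, 3, 4, 5, 6, 7, 8, 9),
           (2, -1, 5, 3, 0, -2, 1, 4, -3),
           (0, 1, 0, 1, 0, 1, 2, 3, 4)]
for s in samples:
    assert lhs(*s) == rhs(*s)
```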
H: Find the magnitude of the acute angle between the lines $2y+3x=4$ and $x+y=5$. Find the magnitude of the acute angle between the lines $2y+3x=4$ and $x+y=5$. I have no idea how to start on the above question. I tried to draw the graphs of $2y+3x=4$ and $x+y=5$ on the calculator but nothing showed up. The formula provided in the textbook is: gradient of $l_{1}=m_{1}=\tan \theta_{1}$, gradient of $l_{2}=m_{2}=\tan \theta_{2}$. Help me out! Thanks. AI: The equations can be rewritten as: $$y = -\frac{3}{2}x + 2$$ $$y = -x + 5$$ So, the slopes are: $\tan(\theta_1) =m_1 = -1.5$ and $\tan(\theta_2) = m_2 = -1$. Therefore, the acute angle $\phi$ between the two lines is given by: $$\tan(\phi) = \tan(|\theta_2 - \theta_1|) = \left|\frac{\tan(\theta_2) - \tan(\theta_1)}{1 + \tan(\theta_1)\tan(\theta_2)}\right|$$ or, $$\tan(\phi) = \left|\frac{m_2 - m_1}{1 + m_1m_2}\right|$$ $$ = \left|\frac{(-1) - (-1.5)}{1 + (-1.5)(-1)}\right|$$ $$ = \left|\frac{0.5}{2.5}\right| = \frac{1}{5}$$ Hence, the angle will be: $$\phi = \tan^{-1}\left(\frac{1}{5}\right) = 11.309932^\circ$$ or $\phi = 11.31^\circ$
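A quick numeric check of the computation:

```python
import math

m1, m2 = -3/2, -1          # slopes of 2y + 3x = 4 and x + y = 5
tan_phi = abs((m2 - m1) / (1 + m1 * m2))
phi_deg = math.degrees(math.atan(tan_phi))

assert abs(tan_phi - 1/5) < 1e-15
assert abs(phi_deg - 11.3099) < 1e-4
```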
H: Understanding compact subsets of metric spaces Please help me understand the following definition: Let $(X,d)$ be a metric space; a subset $S \subseteq X$ is called compact if any infinite sequence $\{x_{n}\}_{n\in\Bbb N}$ in $S$ has a sub-sequence with a limit in $S$. What does "if any infinite sequence" mean? Maybe: At least one, all? What does "has a sub-sequence" mean? Maybe: At least one? Exactly one? I am no mathematician and I don't understand the (practical) relevance of this property. Please explain it to me. AI: "Any" here means every. The definition can be reformulated as follows: "$S$ is compact if for all infinite sequences $\{ x_{n}\}_{n=1}^{\infty}\subseteq S$ the following holds: $\{ x_{n}\}_{n=1}^{\infty}$ has at least one convergent subsequence, and the limit of that subsequence lies in $S$." There may be many convergent subsequences, possibly with different limits; the definition only demands that at least one exists and that every such limit stays in $S$. The practical side of compactness is that a compact set contains its "edge": you cannot leave it by taking limits. For example, $S=(0,1]$ is not compact, because the sequence $x_n = 1/n$ lies in $S$ but every subsequence converges to $0 \notin S$; the closed interval $[0,1]$, by contrast, is compact. If any definition seems vague to you, try to rewrite it using relevant notions with which you are more familiar.
H: Suppose I have a function $y=x+1$, then is this function the same as $y=\frac{ x^2+x}{x } $? Suppose I have a function $y=x+1$. Then is this function the same as $y=\frac{ x^2+x}{x } $? The domain of $x$ in the first function is $\mathbb R$ and in the second function is $x\neq 0$. AI: In the strict sense: they are not the same. By definition a function is a triple $(f,X,Y)$ such that $X,Y$ are sets and $f$ is a subset of $X\times Y$ with the property that for each $x$ etc. Two functions $(f,X,Y)$ and $(g,Z,W)$ are - again by definition - equal if $f=g$, $X=Z$ and $Y=W$. If you have the functions $$ f:\mathbb R \rightarrow \mathbb R \quad f(x)=1 $$ and $$ g:\mathbb R\setminus \{0\} \rightarrow \mathbb R \quad g(x)={x\over x}=1 $$ then $f$ and $g$ are not the same function. All you can say is that $$ f(x)=g(x) \quad\text{for all } x \in \mathbb R\setminus \{0\} $$ Just keep in mind that equality of functions is a tiny bit more than $f(x)=g(x)$. The same applies to your pair: $x+1$ and $\frac{x^2+x}{x}$ agree wherever both are defined, but their domains differ, so they are different functions.
H: Invariant subspace under orthogonal matrix Let $V=\mathbb{R}^{n}$ and $T\,:V\to V$ be defined by $Tv=Av$ where $A\in M_{n}(\mathbb{R})$ is an orthogonal matrix. My lecturer wrote that if $W\subset V$ is a subspace of $V$ and $W$ is $A$-invariant, then $W^{\perp}$ is also $A$-invariant. What I do know is that if $W$ is $A$-invariant then $W^{\perp}$ is also $A^*=A^{t}$-invariant, but I could not deduce from this that it is also $A$-invariant. Is this 'fact' true? I couldn't prove it (I tried writing a proof similar to the case I know, using inner products, and failed); help is appreciated! AI: Yes, it is true. Since $A$ is orthogonal it is invertible (its columns are orthonormal, hence linearly independent), so $A$ restricted to $W$ is injective; as $W$ is finite-dimensional, this forces $A(W)=W$. In other words, for any $w'\in W$ there exists $w\in W$ with $Aw=w'$, namely $w=A^{-1}w'$. Thus $W$ is $A$-invariant iff it is $A^{-1}$-invariant, and $A^{-1}=A^{t}=A^{*}$. Now apply the fact you already know ($W$ is $B$-invariant $\Rightarrow$ $W^{\perp}$ is $B^{*}$-invariant) with $B=A^{*}$: since $W$ is $A^{*}$-invariant, $W^{\perp}$ is $(A^{*})^{*}=A$-invariant.
H: Seeking clarification of Lebesgue definition given for $\int _{0}^{1}x^{-a}dx$ I came across the example "Show that $\int _{0}^{1}x^{-a}dx$ exists as a Lebesgue integral, and is equal to $1/(1-a)$, if $0 < a < 1$; but is infinite if $a\geq 1$. The Lebesgue definition of the integral is $$\lim _{n\rightarrow \infty }\left\{ \int _{0}^{n^{-\frac {1} {a}}}ndx+\int _{n^{-\frac {1} {a}}}^{1}x^{-a}dx\right\} $$ and the results are the same as in the elementary theory. To put my question bluntly, I just do not understand why and how the author determined to split the original integral. I am aware that if we take the limit, the first integral's upper bound would become 0 and the second integral's lower limit would be 0, and the second integral would look the same as the one we were originally presented with. I suppose I do not quite understand the motivation behind the step. Any light shed on this matter would be much appreciated. Edit: As per request, the definition provided in the book is: The Lebesgue integral of $f(x)$ over $(a, b)$ is the common limit of the sums $s$ and $S$ when the number of division-points $y_\nu$ is increased indefinitely, so that the greatest value of $y_{\nu+1} - y_\nu$ tends to zero, where $$s=\sum _{\nu=0}^{n}y_{\nu}\mu\left( e_{\nu }\right) $$ and $$S=\sum _{\nu=0}^{n}y_{\nu+1}\mu\left( e_{\nu }\right) $$ AI: In modern parlance, Titchmarsh (The Theory of Functions, Second edition, Section 10.7) defines the integral of an unbounded nonnegative measurable function $f$ as the limit of the integrals of $f_n=\min\{f,n\}$, where the integral of a bounded measurable function was defined earlier in the book along the lines of your Edit (which only applies to bounded functions, and not to the general case as one might be led to believe by your Edit; see Section 10.4 in the book). Other, perhaps more convincing, treatments of the Lebesgue integral exist, which yield the same result for the integral of $f$ as this one.
Nevertheless, if one wants to stick to this definition, the job is to compute, in the case at hand, that is, for $f(x)=x^{-a}$ on $(0,1)$, $$ \int_0^1f_n(x)\,\mathrm dx=n\cdot \left[x\right]_{x=0}^{x=n^{-1/a}}+\frac1{1-a}\cdot\left[x^{1-a}\right]_{x=n^{-1/a}}^{x=1}=\frac{1-a\cdot n^{1-1/a}}{1-a}, $$ with the obvious modification when $a=1$, and to study the limit, if any, of the RHS when $n\to\infty$. To wit, each LHS is the integral of a bounded continuous function $f_n$ and, for these, one knows the result is the value of the Riemann integral.
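The computation is easy to sanity-check numerically; a sketch with the arbitrary choice $a = 1/2$, $n = 10$, where the closed form gives $(1 - 0.5\cdot 10^{-1})/0.5 = 1.9$:

```python
a, n = 0.5, 10
N = 200_000                      # midpoint Riemann sum over (0, 1)
h = 1.0 / N
riemann = h * sum(min(((i + 0.5) * h) ** (-a), n) for i in range(N))
closed = (1 - a * n ** (1 - 1 / a)) / (1 - a)

assert abs(closed - 1.9) < 1e-12
assert abs(riemann - closed) < 1e-3

# and as n grows, the truncated integrals approach 1/(1-a) = 2
n_big = 10**6
big = (1 - a * n_big ** (1 - 1 / a)) / (1 - a)
assert abs(big - 2) < 1e-5
```

The midpoint sum works here because each $f_n$ is bounded (by $n$) and monotone, so the left and right Riemann sums bracket the integral within $h\,(n-1)$.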
H: What is the rotation axis and rotation angle of the composition of two rotation matrices in $\mathbb{R}^{3}$ I was told in class that a rotation matrix is defined by a rotation angle and rotation axis; if we call the rotation axis $v$ and take a basis of $\mathbb{R}^{3}=\operatorname{span}\{v\}\oplus\{v\}^{\perp}$ then the matrix is similar by an orthogonal matrix to a matrix of the form $$\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\\ & & 1 \end{pmatrix}$$ I asked myself the following question: If I rotate in the $xy$ plane (i.e. rotation axis is $z$) by angle $\theta$, and then rotate in the $yz$ plane (i.e. rotation axis is $x$) by angle $\varphi$, what rotation matrix do I get? I tried multiplying the corresponding matrices but that did not produce anything useful, and I also can't think of a vector $v\in\mathbb{R}^{3}$ that is invariant under the composition... What are the rotation axis and the rotation angle of the composition of these two rotations? Help is appreciated! AI: When composing two rotations, it is useful to know that a rotation about $\alpha$ about an axis $\ell$ can be written as the composition of two reflections in planes containing $\ell$, the first being chosen arbitrarily and the second being at an (oriented) angle $\frac\alpha2$ with respect to the first. Now in the composition of $4$ reflections you get, you can make your choices so that the second and third planes of reflection (the second reflection for the first rotation and the first reflection for the second rotation) are both equal to the unique plane passing through the two axes. Then poof!, those second and third reflections annihilate each other, and you are left with the composition of the first and the fourth reflection, which is a rotation with axis the intersection of those planes, and angle twice the angle between those planes.
If you want to calculate the axis and angle in terms of the original angles, formulas get a bit complicated (even for very easy choices of initial axes as in the question), but such is life, the concrete answer isn't really very easy to write down or remember.
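To see the result concretely, here is a numeric sketch composing a 90° rotation about $z$ with a 90° rotation about $x$; the axis is read off from the antisymmetric part of the product and the angle from its trace (both standard facts about rotation matrices):

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

# rotate by 90° about z, then by 90° about x
R = matmul(rot_x(math.pi / 2), rot_z(math.pi / 2))

# axis direction from the antisymmetric part, angle from the trace
axis = (R[2][1] - R[1][2], R[0][2] - R[2][0], R[1][0] - R[0][1])
trace = R[0][0] + R[1][1] + R[2][2]
angle = math.acos((trace - 1) / 2)

# for this choice: axis ∝ (1, -1, 1) and angle = 2π/3, a 120° rotation
```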
H: Do we always have $f(A \cap B) = f(A) \cap f(B)$? Suppose $A$ and $B$ are subsets of a topological space $X$ and $f$ is any function from $X$ to another topological space $Y$. Do we always have $f(A \cap B) = f(A) \cap f(B)$? Thanks in advance AI: Let $y \in f(A\cap B)$. Then there is an $x \in A\cap B$ with $f(x) = y$. Obviously $x \in A$, so $y = f(x) \in f(A)$; also $x \in B$, so $y = f(x) \in f(B)$. This proves that $f(A\cap B) \subseteq f(A) \cap f(B)$. The reverse inclusion can fail: as an example, say that $f: \mathbb{R} \to \mathbb{R}$ and $A = [0,1]$ and $B = [2,3]$, so that $A \cap B = \emptyset$ and the left side is empty — can you find a simple $f$ for which the right side is not?
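A concrete instance of the failure, with $f(x) = x^2$ and sets chosen so that both sides are easy to see:

```python
def image(f, s):
    return {f(x) for x in s}

f = lambda x: x * x
A, B = {-1, 0}, {0, 1}

left = image(f, A & B)                 # f({0}) = {0}
right = image(f, A) & image(f, B)      # {1, 0} ∩ {0, 1} = {0, 1}

assert left == {0}
assert right == {0, 1}
assert left < right                    # strict inclusion: equality fails
```

The inclusion breaks because $f$ can glue distinct points of $A$ and $B$ to a common value ($f(-1) = f(1) = 1$) even though those points are not in $A \cap B$.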
H: Is there a simple method to prove that the square of the Sorgenfrey line is not normal? Is there a simple method to prove that the square of the Sorgenfrey line is not normal? The method in the book is a little complex. Could someone help me? AI: I always use Jones' lemma. It's a handy tool to show non-normality of other spaces as well. You need some basic facts: Suppose $X$ is normal. For every pair $A$, $B$ of closed disjoint non-empty subsets in $X$, there is a continuous function $f: X \rightarrow [0,1]$ such that $f(x) = 0$ for $x \in A$ and $f(x) = 1$ for $x \in B$. This is often called Urysohn's lemma. If $f,g: X \rightarrow Y$ are continuous, and $Y$ is Hausdorff, and for some dense subset $D$ of $X$ we have $f(x) = g(x)$ for all $x \in D$, then $f(x) = g(x)$ for all $x \in X$. (Proof sketch: if not for some $x$, pull back disjoint open neighbourhoods of $f(x)$ and $g(x)$, both of these intersect $D$ and $f$ and $g$ cannot agree on those points.) This implies: 2') The function $R$ that maps a continuous function $f$ from $X$ to $Y$ to a continuous function $R(f)$ from $D$ to $Y$ by restricting $f$ to $D$, is 1-1. Now, Jones' Lemma: If $X$ is normal and $D$ is dense and infinite in $X$ and $C$ is closed and discrete (in the subspace topology) in $X$ then (as cardinal numbers) $2^{|C|} \le 2^{|D|}$. Proof: for every non-trivial subset $A$ of $C$, $A$ and $C \setminus A$ are disjoint, closed in $X$ (both are closed in $C$, as $C$ is discrete, and closed subsets of a closed set are closed in the large set.), so by 1) there is a continuous function $f_A$ on $X$ that maps $A$ to $0$ and $C \setminus A$ to $1$. Note that this defines a family of distinct continuous functions (if $A \neq B$ then we can find a point in $A\setminus B$ or $B \setminus A$ that shows that $f_A \neq f_B$) from $X$ to $[0,1]$. 
But from 2' we know that there is a 1-1 mapping from the set of all continuous functions from $X$ to $[0,1]$ to the set of all continuous functions from $D$ to $[0,1]$, and the latter set has cardinality at most $\left|[0,1]^D\right| = \left(2^{|\mathbb{N}|}\right)^{|D|} = 2^{|\mathbb{N}|\cdot|D|} = 2^{|D|}$, where the last step holds as $D$ is infinite. As we have a family of size $2^{|C|}$ (all non-trivial, i.e. non-empty, non-$C$, subsets of $C$) we conclude that $2^{|C|} \le 2^{|D|}$, and this concludes the proof. Applications: a) The Sorgenfrey plane: using the antidiagonal $C = \{(x, -x): x \in \mathbb{R} \}$ and $D = \mathbb{Q} \times \mathbb{Q}$ as dense subset. As $2^{|C|} = 2^\mathfrak{c} > \mathfrak{c} = 2^{|D|}$, Jones' lemma says that $X$ cannot be normal. b) The Niemytzki plane (or Moore plane) (see e.g. here) is not normal, with a similar computation, using $C$ the $x$-axis and $D$ the rational points in the upper half-plane.
H: About the inverse of the Jacobian matrix I have a doubt on Jacobian matrices. Consider the nonlinear transformation $$ \left[ \begin{array}{c} x\\ y\\ z \end{array}\right] = \mathbf{G}\left( \left[ \begin{array}{c} \hat{x}\\ \hat{y}\\ \hat{z} \end{array}\right] \right) = \left[ \begin{array}{c} \hat{x}g(\hat{z})\\ \hat{y}g(\hat{z})\\ \hat{z} \end{array}\right] $$ whose Jacobian reads $$ \text{J} = \left[ \begin{array}{ccc} g & 0 & \hat{x}g'\\ 0 & g & \hat{y}g'\\ 0 & 0 & 1 \end{array} \right] $$ If I invert this matrix I get $$ \text{J}^{-1} = \left[ \begin{array}{ccc} 1/g & 0 & -\hat{x}g'/g\\ 0 & 1/g & -\hat{y}g'/g\\ 0 & 0 & 1 \end{array} \right] $$ which I thought should be the same as the Jacobian of the inverse transformation. However, solving for $\hat{x}, \hat{y}, \hat{z}$ in the definition of the transformation, I get $$ \left[ \begin{array}{c} \hat{x}\\ \hat{y}\\ \hat{z} \end{array}\right] = \mathbf{G}^{-1}\left( \left[ \begin{array}{c} x\\ y\\ z \end{array}\right] \right) = \left[ \begin{array}{c} x/g(z)\\ y/g(z)\\ z \end{array}\right] $$ whose Jacobian now reads $$ \text{J}^{-1} = \left[ \begin{array}{ccc} 1/g & 0 & -xg'/g^2\\ 0 & 1/g & -yg'/g^2\\ 0 & 0 & 1 \end{array} \right] $$ which is slightly different. My question is: which one is the correct Jacobian for the inverse? Weren't they supposed to be the same? If so, where's my mistake? Thank you in advance! AI: Both are correct; they are just evaluated at different points. $G$ maps a point $p$ to $G(p)$, and its Jacobian maps tangent vectors at $p$ to tangent vectors at $G(p)$; the inverse of that matrix is therefore the Jacobian of $G^{-1}$ at $G(p)$, not at $p$. So in the second formula substitute $x = \hat{x}\,g(\hat{z})$, $y = \hat{y}\,g(\hat{z})$, $z = \hat{z}$ (i.e. evaluate it at $G(p)$): the entry $-xg'(z)/g(z)^2$ becomes $-\hat{x}\,g'(\hat{z})/g(\hat{z})$, recovering the first formula. All consistent, well done!
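A numeric sketch of this consistency, with the arbitrary choice $g(z) = 1 + z^2$ and an arbitrary point $p$:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

g = lambda z: 1 + z * z
dg = lambda z: 2 * z

xh, yh, zh = 1.0, 2.0, 0.5                 # the point p (hat coordinates)
x, y, z = xh * g(zh), yh * g(zh), zh       # the point G(p)

J = [[g(zh), 0, xh * dg(zh)],
     [0, g(zh), yh * dg(zh)],
     [0, 0, 1]]                            # Jacobian of G at p

Jinv = [[1 / g(z), 0, -x * dg(z) / g(z) ** 2],
        [0, 1 / g(z), -y * dg(z) / g(z) ** 2],
        [0, 0, 1]]                         # Jacobian of G^{-1} at G(p)

P = matmul(J, Jinv)                        # should be the identity
for i in range(3):
    for j in range(3):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```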
H: Why does "separable" imply the "countable chain condition"? Why does "separable" imply the "countable chain condition"? Thanks for any help. AI: Let $D$ be a dense subset of $X$ and let $\{ U_i : i \in I \}$ be a pairwise disjoint family of non-empty open sets indexed by $I$. Define a map $f: I \rightarrow D$ by picking $f(i) \in U_i \cap D$, which can be done as each $U_i$ is non-empty and open and $D$ is dense. For a countable $D$ we can choose the one with minimal index in some fixed enumeration of $D$, for definiteness. The function $f$ is 1-1, because if $i \neq j$ then $f(i) \in U_i$ and $f(j) \in U_j$ but as $U_i \cap U_j = \emptyset$, $f(i) \neq f(j)$. Hence we have an injection from $I$ into $D$ and so $|I| \le |D|$ as cardinal numbers. If $X$ is separable we can fix some countable dense subset $D$ and this then shows that all pairwise disjoint families of non-empty open sets are at most countable, or $X$ is ccc.
H: Solution of functional equation $f(x)=-f(x-a)$ I have a problem with finding the solution. I suppose it will be something like $f(x) =G(x)\,\Re(e^{\frac{i\pi x}{a}})$, where $\Re$ is the real part of a complex number and $G(x)$ is a periodic function with period $\frac{a}{n}$, $n$ a natural number. Can you help me? Thanks a lot for any help. :) AI: We may assume $a=\pi$. Simple examples that come to mind are the sine and the cosine function. Unfortunately these functions have zeros, but a combination of the two allows the following construction: If $$f(x)\equiv-f(x-\pi)\qquad (*)$$ then the function $g(x):=e^{ix}f(x)$ is a $\pi$-periodic complex-valued function (check it!). Conversely: If $x\mapsto g(x)\in{\mathbb C}$ is an arbitrary $\pi$-periodic function then $f(x):=e^{- ix} g(x)$ satisfies the functional equation $(*)$. Now I assume you are interested in solutions of $(*)$ that are real-valued for $x\in{\mathbb R}$. In this respect note that the real part ${\rm Re}f(x)$ of a solution of $(*)$ automatically is a solution of $(*)$ as well. Doing the computations one can say the following: Any real solution of $(*)$ can be written in the form $$f(x)=a(x)\cos x+b(x)\sin x\ ,$$ where the functions $a(\cdot)$ and $b(\cdot)$ are real-valued and $\pi$-periodic; but this representation is not unique.
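A numeric check of this representation, with arbitrarily chosen $\pi$-periodic coefficient functions $a(\cdot)$, $b(\cdot)$ (and $a=\pi$ in the functional equation):

```python
import math

a_coef = lambda x: 2 + math.sin(2 * x)   # π-periodic
b_coef = lambda x: math.cos(4 * x)       # π-periodic
f = lambda x: a_coef(x) * math.cos(x) + b_coef(x) * math.sin(x)

# f(x) = -f(x - π): the coefficients repeat while cos and sin flip sign
for x in [0.1, 1.3, 2.7, -4.2, 10.0]:
    assert abs(f(x) + f(x - math.pi)) < 1e-9
```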
H: Inclusion of $\mathbb{L}^p$ spaces, reloaded I have a follow-up from this question. It was proved that, if $X$ is a linear subspace of $\mathbb{L}^1 (\mathbb{R})$ such that: $X$ is closed in $\mathbb{L}^1 (\mathbb{R})$; $X \subset \bigcup_{p > 1} \mathbb{L}^p (\mathbb{R})$, then $X \subset \mathbb{L}^p (\mathbb{R})$ for some $p>1$. I was wondering whether one could find a subspace $X$ satisfying these hypotheses and which is infinite-dimensional. It turns out this is possible. If one chooses a bump function, and considers the closure for the $\mathbb{L}^1 (\mathbb{R})$ norm of the space generated by the translates by integers of this bump function, one can emulate the $\ell^1$ space. The resulting $X$ will be closed, and included in $\mathbb{L}^p (\mathbb{R})$ for all $p>0$. To avoid this phenomenon, I'll restrict myself to smaller spaces. Is there a linear, closed, infinite-dimensional subspace $X$ of $\mathbb{L}^1 ([0,1])$ which is included in $\mathbb{L}^p ([0,1])$ for some $p>1$? The problem is that any obvious choice of countable basis will very easily generate all of $\mathbb{L}^1 ([0,1])$ (polynomials, trigonometric polynomials...), or $\mathbb{L}^1 (A)$ for some $A \subset [0,1]$, or at least one function which is in $\mathbb{L}^1$ but not in $\mathbb{L}^p$ for $p>1$... AI: An infinite-dimensional example can be obtained as follows: Let $(Y_n)_{n=1}^\infty \subset L^2[0,1]$ be a sequence of independent standard Gaussians, that is random variables with density $\frac{1}{\sqrt{2\pi}}e^{-t^2/2}$. Let $X$ be the closed linear span of $(Y_n)_{n=1}^\infty$ in $L^2[0,1]$. I claim that $X \subset L^p$ for all $1 \leq p \lt \infty$. Consider a finite linear combination $S_N = \sum_{n=1}^N a_n Y_n$. Then $S_N$ is a normal random variable, has mean zero and variance $\sigma^2 = E(|S_N|^2) = \sum_{n=1}^N |a_n|^2$, so $\frac{1}{\sigma} S_N$ is a standard Gaussian, too. This shows in particular that the space $X$ is isometrically isomorphic to $\ell^2$. 
Moreover, we can compute for the $L^p$-norm of $S_N$ as above that $$ \begin{align*} E(|S_N|^p) & = \sigma^p \frac{2}{\sqrt{2\pi}} \int_{0}^\infty t^p e^{-t^2/2}\,dt = \sigma^p \frac{2}{\sqrt{2\pi}} 2^{(p-1)/2} \int_{0}^\infty s^{(p-1)/2} e^{-s}\,ds \\ &= \sigma^p \sqrt{\frac{2^p}{\pi}} \Gamma\left(\frac{p+1}{2}\right), \end{align*} $$ so $\|S_N\|_p = C_p \cdot \|S_N\|_2$ for all $1 \leq p \lt \infty$. This shows that $X$ is a closed subspace of all spaces $L^p[0,1]$, and up to a constant factor depending only on $p$, its norm is the same as the $L^2$-norm.
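The constant $C_p = \|S_N\|_p / \|S_N\|_2$ comes from the absolute moments of a standard Gaussian, and the closed form above can be sanity-checked numerically:

```python
import math

def gauss_abs_moment_numeric(p, N=200_000, T=12.0):
    """E|Z|^p for a standard normal Z, via the midpoint rule on [0, T]
    (the tail beyond T = 12 is negligible)."""
    h = T / N
    s = sum(((i + 0.5) * h) ** p * math.exp(-(((i + 0.5) * h) ** 2) / 2)
            for i in range(N))
    return 2 / math.sqrt(2 * math.pi) * s * h

def gauss_abs_moment_closed(p):
    # sqrt(2^p / pi) * Gamma((p + 1) / 2), as derived above
    return math.sqrt(2 ** p / math.pi) * math.gamma((p + 1) / 2)

for p in (1, 2, 3, 4):
    assert abs(gauss_abs_moment_numeric(p) - gauss_abs_moment_closed(p)) < 1e-6

# familiar special cases: E|Z|^2 = 1 and E|Z|^4 = 3
assert abs(gauss_abs_moment_closed(2) - 1) < 1e-12
assert abs(gauss_abs_moment_closed(4) - 3) < 1e-12
```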
H: Prove that there exists a natural number n for which $11\mid (2^{n} - 1)$ I'm thinking putting it into modulo form: there exists a natural number $n$ for which $$2^{n}\equiv 1 \pmod {11}$$ but I don't know what to do next and I'm still confused how to figure out remainders when doing modulos, like $2^n\equiv \;?? \pmod{11}$. Is there some pattern to find $??$ or you would have to use specific numbers for $??$ which is divisible by $11$? AI: A natural number $n$ will have the property that $11\mid 2^n-1$ precisely when $n$ is a multiple of $10$. $$\begin{array}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|} n & \!\!\!\!\!& 1 & 2& 3& 4& 5& 6& 7&8&9&\mathbf{\Large 10}&11&12&13&14\\\hline\\ 2^n\bmod 11 & \!\!\!\!\!& 2 & 4& 8 &5 &10 &9 & 7&3 &6&\mathbf{\Large 1 }&2&4&8&5 \end{array}\;\;\cdots\;\;\begin{array}{|c|c|}19 &\mathbf{\Large20}\\\hline\\ 6&\mathbf{\Large1}\end{array}\;\;\cdots$$ To be even more explicit, Here is a proof that there exists a natural number $n$ such that $2^n\equiv 1\bmod 11$. Consider $n=10$: $$2^{10}-1=1024-1=1023=3\times \fbox{11}\times 31$$ so that $11\mid 2^{10}-1$. Thus by definition $2^{10}-1\equiv0\bmod 11$, and therefore $2^{10}\equiv 1\bmod 11$. and Here is a proof that there exists a natural number $n$ such that $2^n\equiv 1\bmod 11$. Consider $n=20$: $$2^{20}-1=1,048,576-1=1,048,575=3\times 5^2\times \fbox{11}\times 31\times 41$$ so that $11\mid 2^{20}-1$. Thus by definition $2^{20}-1\equiv0\bmod 11$, and therefore $2^{20}\equiv 1\bmod 11$.
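Brute force confirms both the claim and the period:

```python
def mult_order(a, m):
    """Least n ≥ 1 with a**n ≡ 1 (mod m); a must be coprime to m."""
    n, x = 1, a % m
    while x != 1:
        x = x * a % m
        n += 1
    return n

assert mult_order(2, 11) == 10           # so n = 10 works ...
assert (2 ** 10 - 1) % 11 == 0
hits = [n for n in range(1, 41) if (2 ** n - 1) % 11 == 0]
assert hits == [10, 20, 30, 40]          # ... and exactly the multiples of 10 do
```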
H: Shortest distance between two shapes This is the scenario of my problem. I have an image of two objects ( of arbitrary shape, not convex, not touching or crossing each other, kept a few space apart). And I am supposed to find the shortest distance between these two shapes. First thing that came to my mind was to use bruteforce methods, ie find all the elements of shape A's perimeter (Let it be set X) and same for B's perimeter (Let it be Y ). Then find distance between all possible combinations ( excluding repetitions) of elements of X and Y and take the minimum value in it. But I am sure it will take a lot of time. Is there any other better way to do this ? AI: You can find lots of literature via the key phrase collision detection. For example, the 2004 paper, "Efficient Collision Detection between 2D Polygons" (PDF download link), or another 2004 paper, "Kinetic collision detection for two simple polygons" (author link). I even ran across a recent patented(!) algorithm: "Collision detection of concave bodies using art gallery problem and cube maps," 2010 (patent link). And here is some older literature: "Efficient distance computation between non-convex objects," 1994 (IEEE link). From any of these papers, you could search forward in time via Google Scholar.
H: Counting zero-digits between 1 and 1 million I just remembered a problem I read years ago but never found an answer to: Find how many 0-digits exist in natural numbers between 1 and 1 million. I am a programmer, so a quick brute-force would easily give me the answer, but I am more interested in a pen-and-paper solution. AI: Just to show there is more than one way to do it: How many zero digits are there in all six-digit numbers? The first digit is never zero, but if we pool all of the non-first digits together, no value occurs more often than the others, so exactly one-tenth of them will be zeroes. There are $9\cdot 10^5$ six-digit numbers, each contributing $5$ non-first digits, so $9\cdot 5\cdot 10^5$ such digits in all, and a tenth of them is $9\cdot 5 \cdot 10^4$. Repeating that reasoning for each possible length, the number of zero digits we find between $1$ and $999999$ inclusive is $$\sum_{n=2}^6 9(n-1)10^{n-2} = 9\cdot 54321 = 488889 $$ To that we may (depending on how we interpret "between" in the problem statement) need to add the 6 zeroes from 1,000,000 itself, giving a total of 488,895.
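Since the asker mentions brute force, here is the one-liner check that the counting argument is right:

```python
# zero digits in 1 .. 999999, then including 1,000,000 itself
count = sum(str(n).count('0') for n in range(1, 1_000_000))
total = count + str(1_000_000).count('0')

assert count == 488_889
assert total == 488_895
```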
H: Does $\sum_{n\ge1} \sin (\pi \sqrt{n^2+1}) $ converge/diverge? How would you prove convergence/divergence of the following series? $$\sum_{n\ge1} \sin (\pi \sqrt{n^2+1}) $$ I'm interested in more ways of proving convergence/divergence for this series. Thanks. EDIT I'm going to post the solution I've found here: $$a_{n}= \sin (\pi \sqrt{n^2+1})=\sin (\pi (\sqrt{n^2+1}-n)+n\pi)=(-1)^n \sin (\pi (\sqrt{n^2+1}-n))=$$ $$ (-1)^n \sin \frac{\pi}{\sqrt{n^2+1}+n}$$ The sequence $b_{n} = \sin \frac{\pi}{\sqrt{n^2+1}+n}$ monotonically decreases to $0$. Since our series is an alternating series then it converges. AI: $$ \sum_{n\ge1} \sin\left(\pi\sqrt{n^2+1}\right) = \sum_{n\ge1} \pm\sin\left(\pi\left(\sqrt{n^2+1}-n\right)\right) $$ (Trigonometric identity. Later we'll worry about "$\pm$".) Now $$ \sqrt{n^2+1}-n = \frac{1}{\sqrt{n^2+1}+n} $$ by rationalizing the numerator. So we have the sum of terms whose absolute values are $$ \left|\sin\left(\frac{\pi}{\sqrt{n^2+1}+n}\right) \right| \le \frac{\pi}{\sqrt{n^2+1}+n} \to0\text{ as }n\to\infty.\tag{1} $$ But the signs alternate and the terms decrease in size, so this converges. (They decrease in size because sine is an increasing function near $0$ and the sequence inside the sine decreases.) It does not converge absolutely, since $\sin x\ge x/2$ for $x$ small and positive, and the sum of the terms asserted to approach $0$ in $(1)$ above diverges to $\infty$.
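A quick numerical illustration (not a proof) of the identity used in the solution, and of the monotone decrease of the absolute values:

```python
import math

# Check sin(pi*sqrt(n^2+1)) = (-1)^n * sin(pi/(sqrt(n^2+1)+n)) for small n,
# and record the absolute values, which should decrease toward 0.
max_err = 0.0
vals = []
for n in range(1, 50):
    lhs = math.sin(math.pi * math.sqrt(n * n + 1))
    rhs = (-1) ** n * math.sin(math.pi / (math.sqrt(n * n + 1) + n))
    max_err = max(max_err, abs(lhs - rhs))
    vals.append(abs(lhs))
print(max_err)  # tiny (~1e-13): the two sides agree to rounding error
```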
H: Contour Integration of Square Root with Laurent Series Recently I've been working on branch cuts of square root functions, and come to problems like this: Find a single-valued analytic branch $f$ of $\sqrt{z^2+z}$ on the set $\{ z \in \mathbb{C}: |z| >1 \}$ such that $f(2) = -\sqrt{6}$. Evaluate the integral of $f$ around the contour $|z| =2$ (positively oriented) using the Laurent series of the square root. I can find an $f$: the function $-\sqrt{z}\sqrt{z+1}$ where the square roots are the principal branch of the square roots function. However, the only example of using Laurent series I've seen (in these notes) for contour integrals like this works by factoring a $z$ out of a square root; i.e. writing $$ (z^2+z)^{1/2} = z(1+1/z)^{1/2} $$ and then using the Laurent expansion for the (principal branch) square root $\sqrt{z}$ about $z=1$ to expand $(1+1/z)^{1/2} $. My question is about the signs here. The approach above assumes that $\sqrt{z^2} = z$, but it seems that in my branch of the square root, I have $\sqrt{z^2} = -z$. And then the square root I'm left with in the factorization is still defined based on the branch of $\sqrt{z^2+z} \;$ I'm using, so would I use the Laurent expansion for $- \sqrt{z}$ instead? I understand that if both signs need to be negative, there's no net effect on the result; I'm just trying to understand the details of the computation. Any insight would be appreciated. AI: Concretely expand $\sqrt{z^2+z}=-z\sqrt{1+1/z}\ $ that verifies $f(2)=-\sqrt{6}$ (the $+$ possibility doesn't apply). The cut should be chosen between the points $-1$ and $0$ as in your linked file because $f(2)=-\sqrt{6}$ is indicated without additional specification (compare to 'the top side of the cut' in the pdf) so that the function must be smooth at point $z=2$. This point would indeed be on the cut had we chosen the cut outside of $(-1,0)$. 
Here is a picture of the argument of your function as considered (the minus adds $\pi$ to the argument) : note that for $z$ real with $z<-1$ the arguments become equal to 0 $\pmod {2\pi}$ The same kind of problem was handled in this other thread with the cut shown outside the points or inside. Note that there is a link to the handling of the general case at the end (just set a=−1 and b=0 there).
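As a numerical sanity check on the branch (not a substitute for the Laurent-series computation the problem asks for), one can evaluate $f(z)=-z\sqrt{1+1/z}$ with the principal square root, which is single-valued on $|z|=2$ because $1+1/z$ stays in the right half-plane there, and integrate with a midpoint rule. The expansion $-z(1+1/z)^{1/2}=-z-\tfrac12+\tfrac1{8z}-\cdots$ has $1/z$-coefficient $\tfrac18$, so the integral should come out to $2\pi i\cdot\tfrac18=\pi i/4$:

```python
import cmath, math

def f(z):
    # The branch constructed above: f(2) = -sqrt(6).
    return -z * cmath.sqrt(1 + 1 / z)

assert abs(f(2) + math.sqrt(6)) < 1e-12

# Midpoint rule on z = 2 e^{it}, dz = 2i e^{it} dt; spectrally accurate
# for smooth periodic integrands.
N = 4096
total = 0j
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    z = 2 * cmath.exp(1j * t)
    total += f(z) * 1j * z
total *= 2 * math.pi / N
print(total)  # ~0.7853981j, i.e. pi*i/4
```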
H: Convention on non-negative singular values? In the literature I have at my disposal it is stated that singular values are non-negative, and that, for a symmetric matrix $A$, the SVD and EVD coincide. This would mean that the singular values of $A$ are the eigenvalues of $A$, but the eigenvalues of $A$ can be negative, regardless of $A$ being symmetric. So, I wonder if the choice of singular values being exclusively non-negative is some kind of convention? If so, how does that square with the claimed equivalence of SVD and EVD for symmetric matrices? AI: You can factor a (not necessarily square) matrix as orthogonal times diagonal times orthogonal, and the diagonal entries need not all be non-negative. But multiplying a row or a column of an orthogonal matrix by $-1$ still gives an orthogonal matrix, and you can do that and change a minus to a plus in the diagonal matrix. In that way, the two orthogonal matrices can be chosen so that the diagonal entries in that matrix are all non-negative. Those are what are taken to be the singular values. It's a convention to define it that way. But I suspect there are theorems that say that's the only way to define it that makes it have specified nice properties, and those theorems would not be mere conventions.
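A small NumPy illustration with a made-up symmetric matrix that has a negative eigenvalue: the singular values are the absolute values of the eigenvalues, with the sign absorbed into the orthogonal factors.

```python
import numpy as np

A = np.diag([1.0, -2.0])            # symmetric, eigenvalues 1 and -2
eigvals = np.linalg.eigvalsh(A)     # ascending: [-2.  1.]
U, s, Vt = np.linalg.svd(A)         # singular values, descending: [2.  1.]
print(eigvals, s)

# The sign flip has moved into the orthogonal factors:
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
```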
H: How to get an absolute total from percentage and $\pm$ total? So, for a site that a friend is developing, we need to work with a legacy plugin that counts votes as either positive or negative totals, but doesn't already provide an absolute total. We can, however, get the percentage out of the plugin. So my question is: using the percentage, and either the positive or negative vote total, can we get the absolute total of votes cast on a particular item? AI: If $P$ the number of positive votes, $N$ the negative votes, $T=P+N$ the total, $p=100 \times \frac{P}{T}$ the positive percentage and $n=100 \times \frac{N}{T}$ the negative percentage then you can use any of $$T=100 \times \frac{P}{p} =100 \times \frac{N}{n} =100 \times \frac{P}{100-n} =100 \times \frac{N}{100-p}$$ so long as the denominator is not $0$.
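With made-up numbers for illustration (30 positive votes reported as 75% positive), the first formula gives:

```python
P, p = 30, 75.0          # positives and positive percentage from the plugin
T = round(100 * P / p)   # total votes; round() guards against float noise in p
N = T - P                # negative votes, if needed
print(T, N)              # 40 10
```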
H: Finite p-group with a cyclic Frattini subgroup. I have a question about the following theorem that I found in some research. Is it possible that $E$ is the identity? I just found this elaborated proof that might help. AI: Yes, $E$ can be the identity, as anon points out. The dihedral group with $8$ elements and the quaternion group of order $8$ are examples where this happens. Another general set of examples is when $Q$ is an extra-special group of order $p^{2n+1}$, $p$ any prime, and $n$ any positive integer. In that case, $Z(Q) = [Q,Q] = \Phi(Q)$ has order $p,$ and these exist for every $p$ and $n$ (in fact, there is always more than one isomorphism type of extraspecial group of those orders). The existence of extraspecial groups of order $p^{3}$ is easy: we have seen two above when $p = 2.$ For $p$ odd, take (for example), $Q = \langle x,y : x^{p} = y^{p^{2}} = 1, x^{-1}yx = y^{1+p} \rangle$. For general $n,$ take a central product of $n$ extra-special groups of order $p^{3}$. The central product of two groups $A$ and $B$ which each have center of order $p$ may be realised as the direct product, with the "diagonal subgroup" of the two (isomorphic) centers factored out.
H: Is every algebraic curve birational to a planar curve Let $X$ be an algebraic curve over an algebraically closed field $k$. Does there exist a polynomial $f\in k[x,y]$ such that $X$ is birational to the curve $\{f(x,y)=0\}$? I think I can prove this using the Noether Normalization Lemma. Is this correct? If yes, is it too much? That is, is there an easier argument? AI: Two curves are birational if and only if their function fields are isomorphic. $k(X)$ has transcendence degree $1$, so pick a transcendental element $x \in k(X)$. Then $k(X)$ is a finite extension of $k(x)$. By the primitive element theorem (in this context a birational version of Noether normalization), there exists a primitive element $y \in k(X)$ such that $k(X) = k(x)[y]$; $y$ satisfies a minimal polynomial $f(x, y) = 0$ over $k[x]$, and the conclusion follows. Edit: Above I implicitly assumed that the extension $k(x) \to k(X)$ is separable. This is automatic if $k$ has characteristic $0$. If $k$ has characteristic $p$ we need to choose $x$ more carefully and I am not sure how to do this without going through the characteristic-$p$ proof of Noether normalization.
H: How to solve the differential equation $dN/dt=aN-\mu t$ in terms of $t$, $a$, $\mu$ and $N(0)$ The number, $N$, of animals of a certain species at time $t$ years increases at a rate of $aN$ per year by births, but decreases at a rate of $\mu t$ per year by deaths, where $a$ and $\mu$ are positive constants. Modelled as continuous variables, $N$ and $t$ are related by the differential equation: $$dN/dt=aN-\mu t$$ Given that $N=N(0)$ when $t=0$, find $N$ in terms of $t$, $a$, $\mu$ and $N(0)$. AI: You can use an integrating factor. $$e^{-at}N'(t) - ae^{-at}N(t) = -\mu t e^{-at}$$ Now undo the product rule. $$\left(e^{-at} N(t)\right)' = -\mu te^{-at}$$ Now integrate to see that $$ e^{-at}N(t) - N(0) = -\mu \int_0^t se^{-as}\,ds.$$ To finish, integrate by parts and solve for $N$.
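Carrying out the integration by parts yields $N(t) = \left(N(0)-\frac{\mu}{a^2}\right)e^{at} + \frac{\mu}{a}t + \frac{\mu}{a^2}$. A numerical spot-check of this closed form with arbitrary constants:

```python
import math

a, mu, N0 = 0.5, 0.3, 10.0  # arbitrary illustrative values

def N(t):
    # Closed form obtained from the integrating-factor computation above.
    return (N0 - mu / a**2) * math.exp(a * t) + mu * t / a + mu / a**2

# Central finite differences confirm N' = a*N - mu*t at a few sample times.
h = 1e-6
for t in (0.0, 0.7, 2.0):
    deriv = (N(t + h) - N(t - h)) / (2 * h)
    assert abs(deriv - (a * N(t) - mu * t)) < 1e-5
print(N(0.0))  # 10.0, matching the initial condition
```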
H: convergence of alternating series — weakening a hypothesis A comment below this answer inspires this question. Suppose $a_n\in\mathbb{R}$ for $n=1,2,3,\ldots$ and $|a_n|\to0$ as $n\to\infty$. Further suppose the terms alternate in sign. If moreover the sequence $\{|a_n|\}_{n=1}^\infty$ is decreasing, then $\displaystyle\sum_{n=1}^\infty a_n$ converges. How much can the hypothesis that it is decreasing be weakened while still being strong enough that the sum must converge? And are there any interesting or useful weaker hypotheses? AI: One may ask that $a_n=b_n+c_n$ where $(b_n)$ is alternating (that is, $|b_n|$ decreases to $0$ and $(-1)^nb_n$ has constant sign) and $(c_n)$ is absolutely summable (that is, $\sum\limits_n|c_n|$ finite). This applies readily to show that $\sum\limits_n\dfrac{(-1)^n}{n^\alpha+(-1)^n}$ converges if and only if $\alpha\gt\frac12$. Note that the absolute value of the general term is not monotonic when $\alpha\leqslant1$.
H: $B(V,W)$ is complete if $W$ is Let $B(V,W)$ be the space of bounded linear maps from $V$ to $W$. Then it is complete with respect to the operator norm. Can you tell me if my proof is correct? Thanks. It's easy to verify that the operator defines a norm. Let $T_n$ be Cauchy in $B(V,W)$ with respect to $\|\cdot\|$. Let $\varepsilon > 0$. We want to show that $T_n \to T$ for some $T \in B(V,W)$. We have that $T_n v$ is a Cauchy sequence in $W$ since $\|T_nv - T_mv\|_W \leq \|T_n-T_m\| \|v\|_V < \varepsilon$ for $n,m$ large enough since $T_n$ is Cauchy with respect to $\|\cdot\|$ by assumption. Since $W$ is complete, the pointwise limit $Tv$ of $T_nv$ is in $W$ for all $v$. Now we need to show that $T$ is linear, bounded and $T_n \to T$ in the operator norm. (i) $T(\alpha v + \beta w) = \lim_{n \to \infty} T_n(\alpha v + \beta w) = \alpha \lim_{n \to \infty} T_nv + \beta \lim_{n \to \infty} T_n w = \alpha T v + \beta T w $ (ii) $\|T\|=\sup_{\|v\|\leq 1}\|Tv\| = \sup_{\|v\| \leq 1} \|Tv + T_n v - T_n v\| \leq \sup_{\|v\| \leq 1} \|Tv - T_n v\| + \| T_n v\|$ (iii) $\|(T-T_n)v \|_W < \varepsilon $ for all $v$ if $n$ large enough, hence for $n$ large enough, $\|T-T_n\| < \varepsilon $ AI: In your $(ii)$ the first term should be $$||T||:=\sup_{||v||\leq 1}||Tv||$$ instead of $||Tv||$. And your $(iii)$ should be $||(T_m-T_n)v||\leq \varepsilon ||v||$ for all $v\in V$ and for $m,n\geq \nu_{\varepsilon},$ therefore $$||(T-T_n)v||\leq \varepsilon||v||, \forall v\in V,\forall n\geq\nu_{\varepsilon}$$
H: Subtraction by 1 when solving terms of sequences? If Arithmetic and Geometric Sequences are simply Linear and Exponential functions respectively. Why then do we subtract the n variable by 1 when solving for certain terms in these sequences? $$t_n=d(n-1)+a$$ $$t_n=a\cdot r^{n-1}$$ I've tried exploring this question from a graphing perspective and it's clear this results in the sequence index starting at 1 rather then zero but is that really the only reason? Personally I'd rather start sequence indices at 0 as I've found going back and forth between linear equation techniques and sequence term formulas makes it easy to make off by 1 errors if one isn't careful. AI: There's nothing wrong with using $n$ instead of $n-1$. It'll give you the same sequence, only shifted by 1. For example, suppose that we have two sequences $$ t_n=3(n-1)+2\qquad s_n=3n+2 $$ then we'll have \begin{array}{ccccccc} n & 1 & 2 & 3 & 4 & 5 &\dots\\ \hline t_n & 2 & 5 & 8 & 11 & 14 &\dots\\ s_n & 5 & 8 & 11 & 14 & 17 \end{array} There's no real difference between the two. Some sequences (especially those defined recursively) have properties that are more tidily defined when they start at a particular index, but as long as you're careful you can shift the terms in a sequence (or a series, for that matter) by any amount you wish.
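The shift is easy to see by listing a few terms, using the same example as the table above:

```python
d, a = 3, 2  # common difference 3, first term 2

t = [d * (n - 1) + a for n in range(1, 6)]  # formula with the n-1 shift
s = [d * n + a for n in range(1, 6)]        # same rule without the shift
print(t)  # [2, 5, 8, 11, 14]
print(s)  # [5, 8, 11, 14, 17]
print(s[:-1] == t[1:])  # True: the same sequence, shifted by one position
```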
H: How does a homeomorphism map sets' boundaries? I'm at the end of my first course on general topology, but this topic was not well developed. I can tell that a homeomorphism preserves the property of a point being a boundary point for a subset of a topological space. In particular, from space X to space Y, one only needs a function to be continuous (or something else? I think there's not even a need for bijectivity). But what happens when we talk about the boundary as a whole? What happens in terms of connected components, et cetera? Am I right to say we can distinguish two subspaces by their boundaries even when the boundary is not included in the subspace? (Especially when considering the topology on an open subset.) AI: I'm not sure I understand your question, but perhaps this example will help: Consider the continuous function $f:\mathbb{R}\to\mathbb{R}$ defined by $f(x)=x^2$. Let our set be $S=[-2,1]$. Then $\partial S=\{-2,1\}$, but the image of $S$ is $f(S)=[0,4]$, which has boundary $\partial f(S)=\{0,4\}$, so that $f(1)$ is not in the boundary of $f(S)$ even though $1$ was in the boundary of $S$. Regarding "distinguishing two subspaces by their boundaries": let $X=\{a,b,c\}$ with the topology $T=\{\varnothing,\{a\},\{b\},\{a,b\},X\}$. Let $A=\{a\}$ and $B=\{b\}$. Then $$\partial A=\overline{A}\setminus A^o=\{a,c\}\setminus \{a\}=\{c\}\quad\text{and}\quad \partial B=\overline{B}\setminus B^o=\{b,c\}\setminus\{b\}=\{c\}$$ so that $\partial A=\partial B$, even though $A\neq B$, and both $A$ and $B$ are open subsets of $X$ that do not contain their boundaries. Consider the unit circle $\mathbb{S}^1=\{(x,y)\in\mathbb{R}^2\mid x^2+y^2=1\}$. Let $A,B\subseteq\mathbb{S}^1$ be $$A=\{(\cos(2\pi t),\sin(2\pi t))\mid t\in (0,\tfrac{1}{2})\}\quad\text{ and }\quad B=\{(\cos(2\pi t),\sin(2\pi t))\mid t\in (0,1)\}.$$ Then $A$ and $B$ are homeomorphic to each other, even though $$\partial A=\{(1,0),(-1,0)\}\quad\text{ and }\quad \partial B=\{(1,0)\}$$ are not homeomorphic to each other.
H: Why is the inradius of any triangle at most half its circumradius? Is there any geometrically simple reason why the inradius of a triangle should be at most half its circumradius? I end up wanting the fact for this answer. I know of two proofs of this fact. Proof 1: The radius of the nine-point circle is half the circumradius. Feuerbach's theorem states that the incircle is internally tangent to the nine-point circle, and hence has a smaller radius. Proof 2: The Steiner inellipse is the inconic with the largest area. The Steiner circumellipse is the circumconic with the smallest area, and has 4 times the area of the Steiner inellipse. Hence the circumcircle has at least 4 times the area of the incircle. These both feel kind of sledgehammerish to me; I'd be happier if there were some nice Euclidean-geometry proof (or a way to convince myself that no such thing is likely to exist, so the sledgehammer is necessary). EDIT for ease of future searching: The internet tells me this is often known as "Euler's triangle inequality." AI: So Proof #1 can be modified to be completely elementary. First, it is easy to show that the incircle is the smallest circle meeting all three sides: for any point inside the triangle, the distances $d_1,d_2,d_3$ to the three side lines satisfy $ad_1+bd_2+cd_3=2\,\mathrm{Area}$, so $\max_i d_i\geq 2\,\mathrm{Area}/(a+b+c)=r$; hence any circle meeting all three sides has radius at least $r$ (the same identity, taken with signed distances, handles centers outside the triangle). The circle passing through the midpoints (the nine-point circle) meets all three sides and has radius half the circumradius, so $r\leq R/2$. No need to invoke Feuerbach's theorem for this. Cheers, Rofler
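A Monte-Carlo sanity check of the resulting inequality $R\geq 2r$ (Euler's inequality), using the standard formulas $r=\mathrm{Area}/s$ and $R=abc/(4\,\mathrm{Area})$:

```python
import math, random

random.seed(1)
ratios = []
for _ in range(1000):
    pts = [(random.random(), random.random()) for _ in range(3)]
    (x1, y1), (x2, y2), (x3, y3) = pts
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2
    if area < 1e-6:
        continue  # skip nearly degenerate triangles
    a = math.dist(pts[1], pts[2])
    b = math.dist(pts[0], pts[2])
    c = math.dist(pts[0], pts[1])
    s = (a + b + c) / 2
    r, R = area / s, a * b * c / (4 * area)
    ratios.append(R / (2 * r))
print(min(ratios))  # >= 1, with equality only for equilateral triangles
```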
H: Problems regarding exponents Write each of the following expressions in the form $ca^pb^q$ where $c$, $p$, $q$ are numbers: $\dfrac{(2a^2)^3}{b}$ solved $\sqrt{9ab^3}$ solved $\dfrac{a(2/b)}{3/a}$ solved $\dfrac{ab-a}{b^2-b}$ I tried and got to, $(ab-a)(b^2-b)^{-1}$. I know I'm supposed to bring $b^2-b$ to the top somehow because the answer calls for no fractions. That's all I have for that one. $\dfrac{a^{-1}}{b^{-1}\sqrt{a}}$ I've figured out that $\sqrt(a) = a^{\frac{1}{2}}$. I also brought $b$ to the top and $a$ to the bottom to acquire; $1b^1/1a^1(a^{\frac{1}{2}})$. That's as far as I've gotten on that problem. $\left(\dfrac{a^{2/3}}{b^{1/2}}\right)^2 \cdot \dfrac{b^{3/2}}{a^{1/2}}$ I am completely clueless on this one. Any help would be accepted. AI: For your first one, $(a^m)^n=a^{mn}$ and $\frac 1 {a^n}=a^{-n}$ For your second, $\sqrt{a}=a^{1/2}$ Show us some attempts at solutions using those. I almost guarantee that you won't get help if you don't explain what you've tried. Partly because it's harder to explain a solution if we don't know where your understanding is lacking, and partly because we really don't want to become some kind of homework database. EDIT: fantastic! Now we can get somewhere: 4: you can factor the numerator and denominator. Do that first. 5: this one's really gross. Write $a^{-1}$ and $b^{-1}$ as $\frac 1 a$ and $\frac 1 b$ and simplify the resulting fraction. then apply exponent rules. Namely $a^ma^n=a^{m+n}$. 6: $\dfrac{b^{3/2}}{a^{1/2}}=\dfrac{a^{-1/2}}{b^{-3/2}}$. Writing it like this (i.e. introducing negatives on purpose) makes the numerators and denominators play nice together. Then use $(a^m)^n=a^{mn}$ and $a^ma^n=a^{m+n}$. Welcome to math.SE!
H: $\kappa$-complete, $\lambda$-saturated ideal properties Kunen, II.56. Having trouble proving the properties of the following: The definition: $S(\kappa,\lambda,\mathbb{I})$ is the statement that $\kappa > \omega$ and $\mathbb{I}$ is a $\kappa$-complete ideal on $\kappa$ which contains each singleton and which is $\lambda$-saturated, meaning: there is no family $\{X_\alpha : \alpha < \lambda\}$ such that each $X_\alpha \notin \mathbb{I}$ but $\alpha \ne \beta \rightarrow (X_\alpha \cap X_\beta) \in \mathbb{I}$. Need to show that: a) $\exists\lambda\exists\mathbb{I} S(\kappa,\lambda,\mathbb{I}) \rightarrow \kappa$ is regular. b) $S(\kappa,\lambda,\mathbb{I}) \land \lambda < \lambda' \rightarrow S(\kappa,\lambda',\mathbb{I})$ c) $\exists\mathbb{I}S(\kappa,\kappa,\mathbb{I}) \rightarrow \kappa$ is weakly inaccessible. I think the main problem for me here is not knowing which sets are in or out of $\mathbb{I}$. I'll appreciate any help. Thanks in advance. AI: For (a), suppose that $\mathscr{I}$ is a $\kappa$-complete ideal on $\kappa$ that contains the singletons. Then if $\mathscr{A}\subseteq\mathscr{I}$, and $|\mathscr{A}|<\kappa$, $\bigcup\mathscr{A}\in\mathscr{I}$. Thus, $\mathscr{I}$ contains every subset of $\kappa$ of cardinality less than $\kappa$. Suppose that $\operatorname{cf}\kappa=\mu<\kappa$. Then $\kappa$ is the union of $\mu$ sets of cardinality less than $\kappa$, each of which is in $\mathscr{I}$, and $\mathscr{I}$ is $\kappa$-complete, so $\kappa\in\mathscr{I}$, contradicting Definition 6.2(a). Part (b) is pretty trivial, as Asaf notes in the comments. For (c), use Theorem 6.11 to conclude that if $\kappa$ were a successor cardinal, then there would be pairwise disjoint sets $X_\alpha\subseteq\kappa$ for $\alpha<\kappa$ such that each $X_\alpha\notin\mathscr{I}$. $\{X_\alpha:\alpha<\kappa\}$ would then be a witness to $\mathscr{I}$ not being $\kappa$-saturated, since $X_\alpha\cap X_\beta=\varnothing\in\mathscr{I}$ whenever $\alpha<\beta<\kappa$.
Thus, $\kappa$ must be a limit cardinal, and by (a) it must be regular.
H: Why operator systems contain an abundant of positive elements? I am reading Sec33 of Conway's A Course in Operator Theory, according to his definition, An operator system is a linear manifold $\mathcal{S}$ in a $C^*$-algebra such that $1\in\mathcal{S}$ and $\mathcal{S}=\mathcal{S}^*$. Then he makes the comment that operator systems have an abundance of positive elements. His argument is that for every hermitian element $a\in\mathcal{S}$, $|a|$ also lies in $\mathcal{S}$ and hence the positive parts and negative parts lie in $\mathcal{S}$ and hence $\mathcal{S}$ is spanned by its positive elements. However, I do not know why $|a|$ lies in $\mathcal{S}$ since $\mathcal{S}$ is only assumed to be a linear manifold, not an algebra. Can somebody give a hint? Thanks! AI: I assume "linear manifold" means "subspace." I also do not understand the given argument (if $|a|$ denotes the absolute value), but here's another one: the second condition implies that $S$ is spanned by its self-adjoint elements. Let $a$ be such an element. If $\lambda \in \mathbb{R}$ is greater than the spectral radius of $a$ then $a + \lambda$ is positive, so $a = (a + \lambda) - \lambda$ is a difference of positive elements.
H: Calculating the points of tangency for two circles given a picture I have two circles with the same radius and I want to calculate the points of tangency. For example, in the picture below, I want to calculate $(x_3, y_3)$ and $(x_4,y_4)$. I have the radius and the distance between the two circles as shown below: AI: The gradient of the tangent to any circle is given by $$-\frac{(x'-a)}{(y'-b)}$$ where $(x',y')$ is the point of tangency and $(a,b)$ is the center of the circle. Now the gradient of the line joining the centers of the two circles is the same as the gradient of the tangent. Hence in this case this essentially translates to the following equation $$ -\frac{(x_3-x_1)}{(y_3-y_1)}=\frac{(y_2-y_1)}{(x_2-x_1)}$$ The other equation is $$ (x_3-x_1)^2+(y_3-y_1)^2= R^2$$ Solving the above two equations you will get two points for $(x_3,y_3)$. This shows the existence of two parallel tangents. Similarly you can solve for $(x_4,y_4)$.
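Plugging in a made-up configuration (equal circles of radius $R=1$ centered at $(0,0)$ and $(4,0)$) and solving the two equations with SymPy; the slope equation is cross-multiplied so that nothing blows up when the centers are horizontally aligned:

```python
import sympy as sp

x1, y1, x2, y2, R = 0, 0, 4, 0, 1
x3, y3 = sp.symbols('x3 y3', real=True)

# -(x3-x1)/(y3-y1) = (y2-y1)/(x2-x1), cross-multiplied:
slope_eq = sp.Eq(-(x3 - x1) * (x2 - x1), (y2 - y1) * (y3 - y1))
circle_eq = sp.Eq((x3 - x1) ** 2 + (y3 - y1) ** 2, R ** 2)
sols = sp.solve([slope_eq, circle_eq], [x3, y3])
print(sols)  # two symmetric points, (0, 1) and (0, -1): horizontal tangents
```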
H: supremum norm and submultiplicativity If $f$, $g \in C(S)$ where $S$ is a compact set in $\mathbb{R}^n$ then it is true that $$\lVert fg \rVert \leq \lVert f \rVert \lVert g \rVert$$ where the norm is the usual supremum norm. Why is this not true if $S$ is not compact? What other conditions can $S$ satisfy so that this is true? AI: It is true if $S$ is not compact, but the issue is that continuous functions on noncompact spaces may not be bounded, and so the supremum will not exist (although it could be defined to be $\infty$, but this is not a real number) and so the supremum norm is not a norm.
H: A little integration paradox The following integral can be obtained using the online Wolfram integrator: $$\int \frac{dx}{1+\cos^2 x} = \frac{\tan^{-1}(\frac{\tan x}{\sqrt{2}})}{\sqrt{2}}$$ Now assume we are performing this integration between $0$ and $2\pi$. Hence the result of the integration is zero. On the other hand when looking at the integrand, $\displaystyle \frac{1}{1+\cos^2 x}$, we see that it is a periodic function that is never negative. The fact that it is never negative guarantees that the result of the integration will never be zero (i.e., intuitively there is a positive area under the curve). What is going on here? AI: An unusual substitution, of the "$\tan(z/2)$" type, is being done here, and you must take a careful look at what you are integrating: the antiderivative $\frac{1}{\sqrt{2}}\tan^{-1}\left(\frac{\tan x}{\sqrt{2}}\right)$ is discontinuous at $x=\pi/2$ and $x=3\pi/2$, where $\tan x$ blows up, so the fundamental theorem of calculus cannot be applied across those points. Splitting the interval at the discontinuities gives the correct value $\pi\sqrt{2}$. Here is a reference on this substitution.
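The discrepancy is easy to exhibit numerically: the naive endpoint evaluation gives $0$, while direct quadrature returns $\pi\sqrt{2}\approx 4.4429$:

```python
import math

F = lambda x: math.atan(math.tan(x) / math.sqrt(2)) / math.sqrt(2)
print(F(2 * math.pi) - F(0))   # ~0 (up to rounding): the naive, wrong answer

# Midpoint rule on [0, 2*pi]; the integrand is smooth and periodic.
N = 20000
total = sum(1 / (1 + math.cos(2 * math.pi * (k + 0.5) / N) ** 2)
            for k in range(N)) * 2 * math.pi / N
print(total, math.pi * math.sqrt(2))  # both ~4.442882938
```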
H: Application of Banach Separation theorem Let $(\mathcal{H},\langle\cdot,\cdot\rangle)$ be a Hilbert Space, $U\subset \mathcal{H},U\not=\mathcal{H}$ be a closed subspace and $x\in\mathcal{H}\setminus U$. Prove that there exists $\phi\in\mathcal{H}^*$, such that\begin{align}\text{Re } \phi(x)<\inf_{u\in U}\text{Re }\phi(u) \end{align} Hint: Observe that $\inf_{u\in U}\text{Re }\phi(u)\leq0$. This seems like an application of the Banach Separation theorem. But the way I know it is not directly applicable. I know that for two disjoint convex sets $A$ and $B$ of which one is open there exists a functional separating them. Is there anything special in this problem about $\mathcal{H}$ being Hilbert and not some general Banach space? AI: There is a more general result whose proof you can find in Rudin's Functional Analysis: Let $A$ and $B$ be disjoint convex subsets of a topological vector space $X$. If $A$ is compact and $B$ is closed then there exists $\varphi\in X^*$ such that $$ \sup\limits_{x\in A}\mathrm{Re}(\varphi(x))<\inf\limits_{x\in B}\mathrm{Re}(\varphi(x)) $$ Your result follows if we take $X=\mathcal{H}$, $A=\{x\}$ and $B=U$. Of course this is a sledgehammer for such a simple problem, because in the case of a Hilbert space we can explicitly say that the functional $$ \varphi(z)=\langle z, \mathrm{Pr}_U(x)-x\rangle $$ will fit, where $\mathrm{Pr}_U(x)$ is the unique orthogonal projection of the vector $x$ on the closed subspace $U$.
H: Why doesn't $2\pi\int_{-1}^1\sqrt{1-x^2}dx$ give the surface area of a sphere of radius $1$? Possible Duplicate: Areas versus volumes of revolution For fun I decided to derive the surface area of a sphere of radius $1$ from the formula for the perimeter of a circle. This integral is what I came up with: $$2\pi\int_{-1}^1\sqrt{1-x^2}dx = \pi^2$$ Unfortunately the desired value is $4\pi$. My rationale was simply to stack infinitely thin 'hula-hoops' whose radii followed the curvature of the sphere. I can't readily see where my conceptual misunderstandings are, can someone help elucidate them for me? Thanks. AI: You need to use the surface-area formula for a surface of revolution, $$2\pi\int \rho\,ds,$$ where $ds$ is an element of arc length. The smart way to go is to parametrize the semicircle as follows, $x(t) = \cos(t)$, $y(t) = \sin(t)$. We have $$ds = \sqrt{x'(t)^2 + y'(t)^2}\,dt = dt. $$ The quantity $\rho$ is the radius of the surface of revolution, which, in this case, is $y = \sin(t)$. For the sphere $$\sigma = 2\pi\int_0^\pi \sin(t)\,dt = 2\pi(2) = 4\pi.$$
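The two computations differ exactly by the factor $ds/dx = 1/\sqrt{1-x^2}$, the slant of the sphere's surface relative to the axis, which is what the hula-hoop picture drops. Numerically:

```python
import math

N = 200000
dx = 2.0 / N
wrong = correct = 0.0
for k in range(N):
    x = -1.0 + dx * (k + 0.5)
    y = math.sqrt(1.0 - x * x)
    wrong += 2 * math.pi * y * dx          # hoop circumference times axial width
    correct += 2 * math.pi * y * (dx / y)  # times slant width ds = dx/y instead
print(wrong, math.pi ** 2)    # ~9.8696: the flawed integral
print(correct, 4 * math.pi)   # ~12.566: the true surface area
```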
H: A sequence with infinitely many radicals: $a_{n}=\sqrt{1+\sqrt{a+\sqrt{a^2+\cdots+\sqrt{a^n}}}}$ Consider the sequence $\{a_{n}\}$, with $n\ge1$ and $a>0$, defined as: $$a_{n}=\sqrt{1+\sqrt{a+\sqrt{a^2+\cdots+\sqrt{a^n}}}}$$ I'm trying to prove here 2 things: a). the sequence is convergent; b). the sequence's limit when n goes to $\infty$. I may suppose that there must be a proof for this general case. I saw this problem with the case $a=2$ (where it was required to prove only the convergence), but this is just a particular case. The generalization seems to be much more interesting. AI: Here is a full answer to part (a) and a partial answer to part (b). Call $(a_n(a))_{n\geqslant1}$ the sequence when the value of the parameter is $a$. One has $a_0(1)=1$ and $a_{n+1}(1)=u(a_n(1))$ for every $n\geqslant0$ with $u(x)=\sqrt{1+x}$. Hence the usual technique shows that the sequence $(a_n(1))_{n\geqslant0}$ is increasing to $a_\infty(1)=\alpha$ where $\alpha$ solves the equation $\alpha=u(\alpha)$, that is, $\color{red}{\alpha=\frac12(1+\sqrt5)}$. When $a\lt1$, $a_n(a)\leqslant a_n(1)$ and $(a_n(a))_{n\geqslant0}$ is increasing hence $(a_n(a))_{n\geqslant0}$ converges to a finite limit $a_\infty(a)$ with $\color{red}{\sqrt{1+\sqrt{a}}\lt a_\infty(a)\leqslant \alpha}$. When $a\gt1$, $\sqrt{1+\sqrt{aa_{n-1}(1)}}\leqslant a_n(a)\leqslant\sqrt{1+\sqrt{a}a_{n-1}(1)}$ and $(a_n(a))_{n\geqslant0}$ is increasing hence it converges to a finite limit $a_\infty(a)$ with $\color{red}{\sqrt{1+\sqrt{\alpha a}}\lt a_\infty(a)\leqslant\sqrt{1+\alpha\sqrt{a}}}$. To show the upper bound on $a_n(a)$, one carries over every power of $a$ to the left until it reaches the position of $\sqrt{a}$. Crossing a square root sign halves the exponent and $a\gt1$ hence the power of $a$ which just crossed a square root sign is smaller than the preceding one. 
For example, the first step of the proof uses $a^{n/2}\leqslant a^{n-1}$ to deduce $$ \sqrt{a^{n-1}+\sqrt{a^n}}=\sqrt{a^{n-1}+a^{n/2}\sqrt1}\leqslant\sqrt{a^{n-1}(1+\sqrt1)}=a^{(n-1)/2}\sqrt{1+\sqrt1}, $$ the second step uses $a^{(n-1)/2}\leqslant a^{n-2}$, and so on, until $a^{3/2}\leqslant a^2$ and $a^{2/2}\leqslant a$. A similar reasoning yields the lower bound. Finally, the map $a\mapsto a_\infty(a)$ is nondecreasing from $\color{red}{a_\infty(0)=1}$ to $\color{red}{a_\infty(+\infty)=+\infty}$.
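Evaluating the nested radical from the inside out confirms the bounds; shown for $a=2$, where the limit ($\approx 1.7832$) indeed lies between $\sqrt{1+\sqrt{2\alpha}}\approx 1.673$ and $\sqrt{1+\alpha\sqrt{2}}\approx 1.813$:

```python
import math

def a_n(a, n):
    # Innermost radical first: sqrt(a^n), then sqrt(a^k + ...) down to k = 1,
    # and finally the outer sqrt(1 + ...).
    t = math.sqrt(a ** n)
    for k in range(n - 1, 0, -1):
        t = math.sqrt(a ** k + t)
    return math.sqrt(1 + t)

alpha = (1 + math.sqrt(5)) / 2
a = 2.0
vals = [a_n(a, n) for n in range(1, 30)]
limit = vals[-1]
print(limit)  # ~1.7832, well inside the two bounds
```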
H: Calculate the total error percentage Below is the table which contains the Actual Count and Error Count for each ID.

USER_ID   | Actual_Count | Error_Count
----------+--------------+------------
1345653   | 5            | 4
534140349 | 5            | 0
682527813 | 4            | 0
687612723 | 3            | 0
704318001 | 5            | 4

So if you look at the above scenario: the ID 1345653 has 4 errors out of 5, and likewise 704318001 has 4 errors out of 5. All the other IDs have no errors, as their Error_Count is zero. So in general, how would I calculate the total error percentage? Is it even possible to calculate it? AI: You have a total of 8 errors out of 22 tries, for an error percentage of about 36.4%.
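In code form (Python for illustration): sum both columns and divide.

```python
rows = {1345653: (5, 4), 534140349: (5, 0), 682527813: (4, 0),
        687612723: (3, 0), 704318001: (5, 4)}

total_tries = sum(actual for actual, _ in rows.values())    # 22
total_errors = sum(errors for _, errors in rows.values())   # 8
error_pct = 100 * total_errors / total_tries
print(f"{error_pct:.1f}%")  # 36.4%
```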
H: The analogous generalization for the commutativity of unions. Let $\{I_j\}$ be a family of sets indexed by $J$ and let $$K=\bigcup_{j\in J}I_j$$ Then let $\{A_k\}$ be a family of sets indexed by $K$. The generalization of the associative law for unions is that $$\bigcup_{k\in K}A_k=\bigcup_{j\in J}\left(\bigcup_{i\in I_j}A_i \right)$$ What I interpret this as is: "To take the union over $K$, pick an $I_j \in K$, perform the union of all $A_i$ such that $i\in I_j$, and for each $j\in J$ unite all this possible unions to get $\bigcup_{k\in K}A_k$. What this is saying is that the order in which the $j$ and thus the $I_j$ are picked is of no importance in the ultimate union. The above is a generalization of $$(A\cup B)\cup C=A\cup (B\cup C)$$ How can I find the analogous generalization for $$A \cup B=B \cup A?$$ AI: Let $K$ be an index set, and $S(K)$ be group of bijections from $K$ to $K$. Then one can say that for $\sigma\in S(K)$ and arbitrary family $\{A_k:k\in K\}$ we have $$ \bigcup\limits_{k\in K} A_k=\bigcup\limits_{k\in K} A_{\sigma(k)} $$
H: Finite non-abelian $p$-group cannot split over its center Show that a finite non-abelian $p$-group cannot split over its center. I'd be happy for some clues. AI: This definition is not very common, so it may be worth mentioning here: Definition. Let $G$ be a group. A subgroup $K$ of $G$ is said to be co-central in $G$ if $G=Z(G)K$. Theorem. Let $G$ be a group. If $K$ is co-central in $G$, then $Z(K)\subseteq Z(G)$. Proof. Let $z\in Z(K)$, and $x\in G$. We need to show that $zx=xz$. Since $G=Z(G)K$, there exists $a\in Z(G)$ and $k\in K$ such that $x=ak$. Then $$\begin{align*} zx &= zak\\ &= azk &&\text{(since }a\in Z(G)\text{)}\\ &= akz &&\text{(since }z\in Z(K)\text{ and }k\in K\text{)}\\ &= xz. \end{align*}$$ Thus, $z\in Z(G)$, as claimed. $\Box$ Now assume that we can write a $p$-group $P$ as $P=Z(P)H$ with $H$ a subgroup. Then by the lemma, $Z(H)\subseteq Z(P)$. If $P$ were split over $Z(P)$, then we would have $Z(P)\cap H = \{e\}$, hence $Z(H)=\{e\}$. Why is that a very big problem for $H$, which can only be solved if $H=\{e\}$?
H: Solving $\cos^2 \theta + \cos \theta = 2$ Solve the following for $\theta$: $\cos^2 \theta + \cos \theta = 2$ [Hint: There is only one solution.] I started this out by changing $\cos^2\theta$ to $\dfrac{1+\cos(2\theta)}{2}+\cos\theta=2$ $1+\cos(2\theta)$ turns into $1+\cos^2\theta-\sin^2\theta$ which all becomes $\dfrac{1+\cos^2\theta-\sin^2\theta}{2}+\cos\theta=2$ Not too sure what to do after this. I was going to try a power-reducing rule for $\sin^2\theta$ but that would make $\dfrac{1+\cos^2\theta- \left(\frac{1-\cos(2\theta)}2 \right)}2+\cos\theta=2$. Please do help. AI: Replacing $\cos^2\theta$ with an expression involving $\cos2\theta$ is not necessarily a good idea; then you have to deal with cosines of two different angles. A better approach is to realize that what we have is a quadratic equation: let us define $y$ to be $y=\cos\theta$. Then we can rewrite the equation as $$y^2 + y = 2$$ or $y^2 + y - 2 = 0$. We know how to solve quadratic equations: the solutions are $$\begin{align*} y_1 &= \frac{-1+\sqrt{1+8}}{2} = \frac{-1+3}{2} = 1\\ y_2 &= \frac{-1-\sqrt{1+8}}{2} = \frac{-1-3}{2} = -2. \end{align*}$$ However, now we remember that $y$ is actually $\cos\theta$, so now we want to find the solutions to $\cos\theta = 1$ and to $\cos\theta=-2$. Since $-1\leq\cos\theta\leq 1$, the latter equation has no solutions. So the answer is that the solutions are exactly the $\theta$ for which $\cos(\theta)=1$. (Which we could have figured out cleverly by making the observation made by David Mitra in comments, but I wanted to give you an idea of how to approach this kind of equation if the answer is not so obvious.)
H: Quantified definition of the derivative How do you quantify: A function $f:\mathrm{dom}(f) \longrightarrow \mathrm{codom}(f)$ is differentiable at every $x$ contained in $\mathrm{dom}(f)$ if the limit $$\lim_{h \to 0}\frac{f(x+h)-f(x)}{h}$$ exists. I have looked everywhere for a quantified definition of the above and have only found quantified versions that don't utilize the limit as $h$ approaches zero. Here is my attempt: $$(\forall \varepsilon > 0)(\exists \delta > 0)(\forall x)\left( |h| < \delta \Rightarrow \left| f'(x) - \frac{f(x + h) - f(x)}{h} \right| < \varepsilon \right).$$ I believe this to be correct however as I am teaching myself analysis, I am being extra cautious with everything. AI: One problem is that $f'(x)$ is free in your formula; a second problem is that you need to exclude the limit point $0$ (that is, you need to exclude $h=0$); and a third is that $h$ itself is never quantified. Since the definition says that a certain thing exists, the correct quantification should start with an existential, followed by the definition of "the limit is equal to"; since the limit may depend on the $x$, the existential must follow the $x$. $$(\forall x)(\exists L)(\forall\epsilon\gt 0)(\exists\delta\gt 0)(\forall h)\left( 0\lt |h|\lt \delta\implies \left|\frac{f(x+h)-f(x)}{h} - L\right|\lt\epsilon\right).$$ That is: for every $x$, there is a real number $L$ such that the corresponding limit exists and is equal to $L$. Alternatively, you can rewrite "$0\lt |h|\lt \delta$" as "$(|h|\lt \delta\land h\neq 0)$", or any formula that amounts to "the absolute value of $h$ is smaller than $\delta$ and $h$ is not zero".
H: How to define Mach Subsonic by the Mach Supersonic? I read the book Mechanics of Fluids by Shames and I find this relationship: $$\frac{1+kM_1^2}{1+kM_2^2} =\frac{M_1}{M_2} \left ( \frac{1+\dfrac{(k-1)}{2}M_1^2}{1+\dfrac{(k-1)}{2}M_2^2} \right )^{0.5}$$ where $M_1$ is the Mach number of supersonic flow and $M_2$ is the Mach number for subsonic flow. How can I find $M_2$ as a function of $M_1$, say $M_2 = f(M_1)$? Sorry for my English. AI: A small amount of work will show that squaring the expression above gives: $$(M_1-M_2)(M_1+M_2)(2kM_1^2 M_2^2-k (M_2^2+M_1^2)+M_1^2+M_2^2-2) = 0.$$ Two solutions are obvious (and presumably uninteresting): $M_2 = \pm M_1$. The other two are also straightforward (assuming I haven't made a mistake): $$M_2 = \pm \sqrt{\frac{1+\frac{k-1}{2} M_1^2}{k M_1^2+\frac{1-k}{2}}}.$$ The original equation rules out the negative solutions.
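As a sanity check (my own illustration, not from the answer), one can plug the positive root back into the original relation; with the common value $k=1.4$ and $M_1=2$ it recovers the familiar normal-shock value $M_2\approx0.577$:

```python
import math

def m2_subsonic(m1, k=1.4):
    # The positive root singled out in the answer.
    return math.sqrt((1 + (k - 1) / 2 * m1**2) / (k * m1**2 + (1 - k) / 2))

def both_sides(m1, m2, k=1.4):
    # Left and right sides of the original relation.
    lhs = (1 + k * m1**2) / (1 + k * m2**2)
    rhs = (m1 / m2) * math.sqrt((1 + (k - 1) / 2 * m1**2)
                                / (1 + (k - 1) / 2 * m2**2))
    return lhs, rhs

m2 = m2_subsonic(2.0)           # ~0.5774
lhs, rhs = both_sides(2.0, m2)  # both sides agree
```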
H: Verifying some trigonometric identities: $\frac{\csc\theta}{\cot\theta}-\frac{\cot\theta}{\csc\theta}=\tan\theta\sin\theta$ Prove the following: 46. $\dfrac{\csc\theta}{\cot\theta}-\dfrac{\cot\theta}{\csc\theta}=\tan\theta\sin\theta$ I got as far as Right Side: $\tan\theta\sin\theta$ to $\dfrac{\sin\theta}{\cos\theta}\dfrac{\sin\theta}{1}$ and then; $\dfrac{\sin^2\theta}{\cos\theta}$ Left Side: $$\begin{align*} \dfrac{\csc\theta}{\cot\theta}-\dfrac{\cot\theta}{\csc\theta} &= \dfrac{\frac{1}{\sin^2\theta}-{\frac{\cos^2\theta}{\sin^2\theta}}}{\frac{\cos\theta}{\sin\theta}-{\frac{1}{\sin^2\theta}}}\\ &= \dfrac{\frac{1-\cos^2\theta}{\sin^2\theta}}{\frac{\cos\theta}{\sin^2\theta}}\\ &= \dfrac{\frac{\sin^2\theta}{\sin^2\theta}}{\frac{\cos\theta}{\sin^2\theta}}\\ &= \frac{1}{\frac{\cos\theta}{\sin^2\theta}}\\ &= \frac{\sin^2\theta}{\cos\theta} \end{align*}$$ Thanks a lot! AI: There is no "cross cancelling". You are subtracting the fractions, not multiplying them. $$\begin{align*} \frac{\csc\theta}{\cot\theta} - \frac{\cot\theta}{\csc\theta} & = \frac{\csc^2\theta - \cot^2\theta}{\cot\theta\csc\theta}\\ &= \frac{\quad\frac{1}{\sin^2\theta} - \frac{\cos^2\theta}{\sin^2\theta}\quad}{\frac{\cos\theta}{\sin\theta}\frac{1}{\sin\theta}}\\ &= \frac{\quad\frac{1 - \cos^2\theta}{\sin^2\theta}\quad}{\frac{\cos\theta}{\sin^2\theta}}. \end{align*}$$ Can you take it from there?
H: Are the square and the maximum of distribution functions a distribution function? Let $F$ and $G$ be (one dimensional) distribution functions. Decide which of the following are distribution functions. (a) $F^2$, (b) $H$, where $H(t) = \max \{F(t),G(t)\}$. Justify your answer. I know the definition and properties of distribution functions but I could not solve the problem in a rigorous way. AI: A function $F$ is a cumulative probability distribution function on $\mathbb{R}$ if and only if the following are true: $F(x)\to0$ as $x\to-\infty$; $F(x)\to1$ as $x\to+\infty$; $F$ is non-decreasing, i.e. whenever $a<b$ then $F(a)\le F(b)$. $F$ is right-continuous. So ask yourself whether those are true of $F^2$ and of $\max\{F,G\}$ if they are true of $F$ and $G$.
H: Why use the derivative and not the symmetric derivative? The symmetric derivative is always equal to the regular derivative when it exists, and still isn't defined for jump discontinuities. From what I can tell the only differences are that a symmetric derivative will give the 'expected slope' for removable discontinuities, and the average slope at cusps. These seem like extremely reasonable quantities to work with (especially the former), so I'm wondering why the 'typical' derivative isn't taken to be this one. What advantage is there to taking $\lim\limits_{h\to0}\frac{f(x+h)-f(x)} h$ as the main quantity of interest instead? Why would we want to use the one that's defined less often? AI: The symmetric derivative being defined at more places isn't a good thing. In my mind, the main point of differentiation is to locally approximate a function by a linear function. That is, the heart of saying that the derivative $f'(a)$ exists at a point $a$ is the statement that $$f(x) = f(a) + f'(a) (x - a) + o(|x - a|)$$ as $x \to a$, and if I were the King of Calculus this is how the derivative would actually be defined. (Among other things, this definition generalizes smoothly to higher dimensions.) Removable discontinuities are a non-issue as they should just be removed, but at a cusp we do not have this property for any possible value of $f'(a)$, so we shouldn't be talking about derivatives at such points at all. (We can talk about left or right derivatives, but this is something different.) The symmetric derivative at $a$ is not a natural definition. It has the utterly strange property that any weirdness in a neighborhood of $a$ is ignored if it happens to be canceled by equivalent weirdness after reflecting around $a$. Let me give an example. Consider the function $f(x) = 1_{\mathbb{Q}}(x)$ which is equal to $1$ if $x$ is rational and $0$ otherwise. The symmetric derivative of $f$ at any rational point exists and is equal to $0$! 
Is there any reasonable sense in which $f$ is differentiable at a rational point? The ordinary derivative, on the other hand, is sensitive to weirdness around $a$ because it compares all of that weirdness to $f(a)$.
H: Trigonometric Identities To Prove $\tan\theta+\cot\theta=\dfrac{2}{\sin2\theta}$ Left Side: $$\begin{align*} \tan\theta+\cot\theta={\sin\theta\over\cos\theta}+{\cos\theta\over\sin\theta}={\sin^2\theta+\cos^2\theta\over\cos\theta\sin\theta} = \dfrac{1}{\sin\theta\cos\theta} \end{align*}$$ Right Side: $$\begin{align*} \dfrac{2}{\sin2\theta}=\dfrac{2}{2\sin\theta\cos\theta}=\dfrac{1}{\sin\theta\cos\theta} \end{align*}$$ I got it now. Thanks! AI: $$\tan(\theta) + \cot(\theta) = {\sin(\theta)\over \cos(\theta)} + {\cos(\theta)\over \sin(\theta)} = {\sin^2(\theta) + \cos^2(\theta) \over\cos(\theta)\sin(\theta)} = {1\over\sin(\theta)\cos(\theta)}.$$ Now avail yourself of the fact that $$\sin(2\theta) = 2\cos(\theta)\sin(\theta).$$
H: Nontrivial subring with unity different from the whole ring? Is there an example of a ring $R$ with unity and a nontrivial subring $J$, such that $1_J \ne 1_R$? AI: If to you, "ring" means "ring with unity", then the definition of "subring" requires that the unity be the same as that of the larger ring, just like "submonoid" requires that the identity be the same as that of the larger monoid. So under this definition, the answer is "no", because the definition of "subring" requires that if $R$ is a subring of $S$, then $1_S\in R$, and so $1_R=1_S$. If by "ring" you don't require unity, and you are asking if it is possible to have rings $R$ and $S$, with $R\subseteq S$, and where both $R$ and $S$ happen to have a unity and $1_R\neq 1_S$, then yes: take $S=\mathbb{Z}\times\mathbb{Z}$, and $R=\mathbb{Z}\times\{0\}$. Then $1_S=(1,1)$ and $1_R=(1,0)$. In fact, every time you write a ring with unity $S$ as $S=R_1\times R_2$, you have that $1_S=(1_{R_1},1_{R_2})$. The converse holds in part; this is the notion of "central idempotents", which is connected with the decomposition of rings into direct products: Proposition. Let $S$ be a ring with unity, and suppose that $R\subseteq S$ is a subgroup that is closed under multiplication, and such that there exists $e_R\in R$ that is central in $S$ ($e_Rs=se_R$ for all $s\in S$) such that $e_Rr=re_R=r$ for all $r\in R$. Then $S\cong R\times T$, where $T$ is a ring with unity. Proof. Let $T=S(1_S-e_R)$. Then $T$ is an ideal of $S$: it is trivially a left ideal; and since $1_S-e_R$ is central, $S(1_S-e_R) = (1_S-e_R)S$ which is trivially a right ideal. In particular, $T$ is a subgroup that is closed under multiplication. Moreover, $1_S-e_R$ is idempotent: note that $(1_S-e_R)(1_S-e_R) = 1_S - e_R - e_R+e_Re_R$. But $e_Re_R=e_R$ since $e_R$ is an identity for $R$, so $(1_S-e_R)^2=1_S-e_R$.
Thus, for every $t\in T$, there exists $s\in S$ such that $t=s(1_S-e_R)$, so $$t(1_S-e_R) = s(1_S-e_R)^2 = s(1_S-e_R) = t$$ and since $1_S-e_R$ is central, this proves $1_S-e_R$ is a unity for $T$. Note also that $$\begin{align*} e_R(1_S-e_R) &= e_R-e_Re_R = e_R-e_R = 0,\\ \text{and}\qquad (1_S-e_R)e_R &= e_R-e_Re_R = e_R-e_R=0. \end{align*}$$ Now consider the map $S\to R\times T$ given by $s\mapsto (se_R,s(1_S-e_R))$. Note that the map is one-to-one: if $se_R= te_R$ and $s(1_S-e_R) = t(1_S-e_R)$, then $$s = s(e_R+1_S-e_R) = se_R + s(1_S-e_R) = te_R+t(1_S-e_R) = t(e_R+1_S-e_R) = t.$$ And the map is onto: given $r\in R$ and $t\in T$, there exist $s,s'\in S$ such that $r=se_R$ and $t=s'(1_S-e_R)$. Let $u=se_R + s'(1_S-e_R)$. Then $$\begin{align*} ue_R &= se_Re_R + s'(1_S-e_R)e_R = se_R + s'0 = se_R = r\\ u(1_S-e_R) &= se_R(1_S-e_R) + s'(1_S-e_R)(1_S-e_R) = s0+s'(1_S-e_R) = s'(1_S-e_R)=t. \end{align*}$$ Thus, the image of $u$ is $(ue_R,u(1_S-e_R)) = (r,t)$. So the map is onto. It is easy to verify that it is both additive and multiplicative, so we get an isomorphism of rings. $\Box$
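The $\mathbb{Z}\times\mathbb{Z}$ example is easy to play with concretely; this small sketch (my own illustration, not part of the answer) checks that $e_R=(1,0)$ is idempotent, acts as a unity on $R=\mathbb{Z}\times\{0\}$ but not on all of $S$, and that the complementary idempotent $1_S-e_R$ generates the other factor:

```python
# Elements of S = Z x Z as pairs with componentwise multiplication.
def mul(a, b):
    return (a[0] * b[0], a[1] * b[1])

one_S = (1, 1)
e_R = (1, 0)                        # unity of the subring R = Z x {0}

assert mul(e_R, e_R) == e_R         # e_R is idempotent
assert mul(e_R, (7, 0)) == (7, 0)   # identity on R ...
assert mul(e_R, (7, 7)) == (7, 0)   # ... but not on S
assert e_R != one_S

# The complementary central idempotent 1_S - e_R generates T = {0} x Z.
comp = (one_S[0] - e_R[0], one_S[1] - e_R[1])
assert comp == (0, 1) and mul(comp, comp) == comp
assert mul(e_R, comp) == (0, 0)     # the two idempotents are orthogonal
```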
H: Trigonometric Identities: $\frac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$ $\dfrac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$ Right Side: $1-\cos\theta$ either stays the same, or can be $1-\dfrac{1}{\sec\theta}$ Left Side: $$\begin{align*} &= \dfrac{\sin^2\theta}{1+\cos\theta}\\ &= \dfrac{1-\cos^2\theta}{1+\cos\theta}\\ &= \dfrac{(1-\cos\theta)(1+\cos\theta)}{1+\cos\theta}\\ &= 1-\cos\theta \end{align*}$$ Is this correct? AI: Perhaps slightly simpler and shorter (FYI, what you did is correct): $$\frac{\sin^2x}{1+\cos x}=1-\cos x\Longleftrightarrow \sin^2x=(1-\cos x)(1+\cos x)\Longleftrightarrow \sin^2x=1-\cos^2x$$ And since the last equality is just the trigonometric Pythagoras Theorem we're done.
H: $(\sin\theta+\cos\theta)^2=1+\sin2\theta$ 49) $(\sin\theta+\cos\theta)^2=1+\sin2\theta$ Left Side: \begin{align*} (\sin\theta+\cos\theta)^2=\sin^2\theta+2\cos\theta\sin\theta+\cos^2\theta=1+2\cos\theta\sin\theta \end{align*} This can either be $1$ or I can power reduce it. I don't know. Right Side: \begin{align*} 1+\sin2\theta=1+2\sin\theta\cos\theta \end{align*} Thank you! AI: Expand the square and use: $$(1)\,\,\sin^2x+\cos^2x=1$$ $$(2)\,\,\sin 2x=2\sin x\cos x$$
H: Looking for a 'second course' in logic and set theory (forcing, large cardinals...) I'm a recent graduate and will likely be out of the maths business for now - but there are a few things that I'd still really like to learn about - forcing and large cardinals being two of them. My background is what one would probably call a 'first graduate course' in logic and set theory (some intro to ZFC, ordinals, cardinals, and computability theory). Can you recommend any books or online lecture notes which are accessible to someone with my previous knowledge? Thanks a lot! AI: Kunen's "Set Theory: An Introduction to Independence Proofs" is a really well written introduction to, well, independence proofs. It doesn't do a lot with large cardinals, at least not the really large ones, but it does do a thorough treatment of forcing. It also develops Godel's constructible universe in proving the consistency of AC and GCH with ZF, along with other basic methods used in proofs of independence or consistency. I think it finds a good balance between being a gentle introduction, but also efficiently getting through the material. I would highly recommend it for a second set theory course.
H: patterns for u-shaped graphs When the equation is $Ax + By = C$, you know it will be a straight line. Is there a specific pattern to know (without plotting $x$ and $y$ yet) that the graph will be u-shaped? For example, the equation $y = x^2 - 9x – 12$ forms a u-shape. But how would you know that by looking at it? How would you know that, for example, it's not a L shape or something else? Is there an equation, as there is in the straight line? Thanks for any response. AI: In addition to Alex' excellent answer, I'd like to contribute the following which may be more accessible if you haven't heard about limits before and gives a slightly more concrete criterion for the 'u-shapes'. I will also assume that by u-shape you mean things that actually look a bit like a 'u' rather than just becoming large on both sides. The u-shape you describe is called a parabola. And indeed, you can recognize many of these by their equation: First look at the graph of $y=x^2$, the simplest example of such a parabola. Now, if you have any equation like $y=Ax^2+Bx+C,\ A>0$, you can complete the square: $Ax^2+Bx+C=A(x^2+\frac{B}{A}x+\frac{B^2}{4A^2}+\frac{C}{A}-\frac{B^2}{4A^2})=A(x+\frac{B}{2A})^2 + C-\frac{B^2}{4A}$, so this is the simple parabola you saw before, moved to the left by a distance of $\frac{B}{2A}$, stretched in $y$-direction by a factor of $A$ and finally moved upwards by a distance of $C-\frac{B^2}{4A}$. If $A<0$, your parabola is turned upside down. For equations with higher powers of $x$, it is more complicated to find out what its graph looks like. As Alex said, odd degrees (highest powers) never give u-shapes, while even degrees can give u-shapes but also 'w-shapes' - consider for example $x^4-3x^2+1$ and more intricate shapes.
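The completed-square identity used in the answer can be spot-checked numerically at a few points (illustrative script of my own, using the question's example $y=x^2-9x-12$):

```python
# Spot-check A*x^2 + B*x + C == A*(x + B/(2A))**2 + (C - B**2/(4A))
# for the question's example y = x^2 - 9x - 12.
A, B, C = 1, -9, -12

def original(x):
    return A * x**2 + B * x + C

def completed(x):
    return A * (x + B / (2 * A))**2 + C - B**2 / (4 * A)

samples = [-3, 0, 2.5, 4.5, 10]
diffs = [abs(original(x) - completed(x)) for x in samples]

# The vertex of the 'u' sits at x = -B/(2A) = 4.5, height C - B^2/(4A).
vertex_value = completed(4.5)
```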
H: Minimizers of an expression with little O notation Suppose that $f(x) = o(\sqrt{x})$ as $x\rightarrow\infty$ and let $x^*(a)$ denote the minimizer of $f(x) + a^{3/2}/x$, that is, the value of $x$ that minimizes said expression (assuming such a value exists). As $a\rightarrow\infty$, is it true that $x^*(a) = \omega(a)$, i.e. that the minimizer grows super-linearly in $a$? AI: Yes. It suffices to show that the minimum is $o(\sqrt{a})$: indeed, if $x^*(a)\le Ca$ along some sequence, then the minimum is at least $a^{3/2}/x^*(a)\ge \sqrt{a}/C$ on that sequence, which is not $o(\sqrt{a})$. Suppose the minimum is not $o(\sqrt{a})$. Then there is $c>0$ and a sequence $a_n\to\infty$ such that $f(x)+a_n^{3/2}/x\ge c\sqrt{a_n}$ for all $n$ and all $x$. But for large $x$ we have $f(x)<c^{3/2}\sqrt{x}/1000$. Setting $x=100a_n/c$ yields a contradiction.
H: Write each expression in the form $ca^pb^q$ Write each expression in the form $ca^pb^q$ c) $\dfrac{a\left(\frac{2}{b}\right)}{\frac{3}{a}}$ \begin{align*} &= \frac{a\left(\frac{2}{b}\right)}{1}*\frac{\left(\frac{a}{3}\right)}{3}=\dfrac{a^2\left(\frac{2}{b}\right)}{3}=\frac{a^2}{1}*\frac{2}{b}*\frac{1}{3}=\frac{2a^2}{3b}*\frac{b}{1}=\frac{2a^2b}{3}=\frac{2}{3}a^2b^1 \end{align*} e) $\dfrac{a^{-1}}{(b^{-1})\sqrt{a}}$ \begin{align*} &= \frac{1}{(b^{-1})a\sqrt{a}}=\frac{1b}{1a^1a^{\frac{1}{2}}}=\frac{1b^1}{1a^{\frac{2}{3}}}=1a^{\frac{-2}{3}}b^1 \end{align*} These are my steps. Any corrections help. AI: On what grounds did you move $b$ to the top on (c)? It's incorrect. You cannot just multiply by $\frac{b}{1}$ because it pleases you to do so. And, after you suddenly create a factor of $\frac{b}{1}$ ex nihilo, it would have cancelled with the denominator. So both the penultimate and antepenultimate equality signs are incorrect. $$\begin{align*} \frac{a(\frac{2}{b})}{\frac{3}{a}} &= \frac{2a}{b}\frac{a}{3}\\ &= \frac{2}{3}\frac{a^2}{b}\\ &= \frac{2}{3}a^2b^{-1}. \end{align*}$$ (e) is almost correct, except that $1+\frac{1}{2}=\frac{3}{2}$, not $\frac{2}{3}$. So the exponent of $a$ is incorrect.
H: Use equalities to derive important trigonometric functions The trigonometric functions I must know: (A) $\sin(-x)=-\sin x$ (B) $\cos(-x)=\cos x$ (C) $\cos(x+y)=\cos x\cos y-\sin y\sin x$ (D) $\sin(x+y)=\sin x\cos y+\cos x\sin y$ $\sin^2x+\cos^2x=1$ (Use (C) and $\cos0=1$) Can anyone help me just understand what the first one is asking. It's giving me a property and I'm supposed to derive the identities. But I don't even know where to begin. Any help? I will post the other problems once I figure this one out. AI: Hint: Note that by (A) and (B) $$\cos^2(x)+\sin^2(x)=\cos(x)\cos(-x)-\sin(-x)\sin(x)$$ and apply (C).
H: A isometric map in metric space is surjective? Possible Duplicate: Isometries of $\mathbb{R}^n$ Let $X$ be a compact metric space and $f$ be an isometric map from $X$ to $X$. Prove $f$ is a surjective map. AI: Here is an alternative to the proof linked to in the comments: Suppose there existed $x \in X\setminus f(X)$. Then $x$ has positive distance $d$ from the compact set $f(X)$. Now consider the recursively defined sequence $$x_0 := x, \qquad x_n := f(x_{n-1}) \quad \forall \, n>0$$ We have $d(x_0, x_n)\ge d$ for all $n>0$, by assumption on $x$. This implies that we also have $d(x_k, x_{k+n}) = d(x_0, x_n) \ge d$ for all $k,n>0$ (here we use that $f$ is an isometry). Therefore $d(x_n, x_m) \ge d$ for all $m\ne n$, which is in contradiction to sequential compactness of $X$.
H: Is it valid to consider average rate of change as a 3 variable function? The average rate of change can be modeled as a function: $f:D^2 \to \mathbb{R}$ where $D$ is the domain of the primary function in consideration. It maps two variables - the ends of the interval- to the average rate of change of that particular interval. The derivative is $f$ restricted to the set: $\{(x,x): x \in D\}$, essentially making it a "slice". Is this a valid interpretation? AI: If you have a function $f\colon I\to\mathbb R$ defined on an interval $I$, and you consider its "average rate of change" as a function $g\colon I^2\to\mathbb R$ defined as $$g(x,y) = \frac{f(y)-f(x)}{y-x},$$ hoping to say that $f'(x) = g(x,x)$, then you immediately run into the problem that $g$ is not even defined at $(x,x)$ because you'll be dividing by zero. You'll have to consider the limit of $g(u,v)$ as $(u,v)$ tends to $(x,x)$ instead. That should be fine -- I'd wager that the limit is defined and equal to $f'(x)$ exactly where $f$ is differentiable. But really, you've just replaced the one-variable limit in the definition of the derivative with a two-variable limit (and those are much more complicated), so I don't see this as a very useful interpretation.
H: How many elements of a given order in a finite group Let $G$ be a finite group and $n_k$ the number of elements of order $k$ in $G$. Show that $n_3$ is even and $o(G) - n_2$ is odd. By Lagrange's Theorem, if $k$ does not divide $o(G)$, there are no elements of order $k$ in $G$. That implies $$3\!\not|\;o(G)\Longrightarrow n_3=0\Longrightarrow n_3\text{ even}$$ and $$2\!\not|\;o(G)\Longrightarrow o(G)\text{ odd}\wedge n_2=0\Longrightarrow o(G)-n_2\text{ odd}\;.$$ How must I proceed to calculate $n_3\!\!\mod2$ when $3$ divides the order of $G$ and $o(G)-n_2\equiv n_2$ $\!\!\!\mod2$ when $2$ divides it? AI: To see that $n_3$ is even, note that each element $a\in G$ of order $3$ is associated with a subgroup $\{e,a,a^{-1}\}$, and that there are exactly two elements of order $3$ corresponding to each such subgroup. To see that $o(G)-n_2$ is odd, do something similar. For each $a\in G$ not of order $2$, $a^{-1}$ is also not of order $2$, and is thus distinct from $a$ except in the single case $a=e$.
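Both parity facts are easy to confirm by brute force in a concrete group; the sketch below (my own illustration, not part of the answer) counts orders in the symmetric group $S_4$, where $n_3=8$ is even and $o(G)-n_2=24-9=15$ is odd:

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations stored as tuples.
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    # Smallest n >= 1 with p^n equal to the identity.
    identity = tuple(range(len(p)))
    q, n = p, 1
    while q != identity:
        q = compose(p, q)
        n += 1
    return n

G = list(permutations(range(4)))            # S_4, o(G) = 24
n2 = sum(1 for p in G if order(p) == 2)     # 9: six 2-cycles, three 2+2-cycles
n3 = sum(1 for p in G if order(p) == 3)     # 8: the 3-cycles, in inverse pairs
```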
H: The Frobenius endomorphism Let $\mathbf F$ be a field of prime characteristic $p$. It is known that the Frobenius map $c\phi=c^p~~\forall c\in\mathbf F$ is an endomorphism of $\mathbf F$. Moreover, since the only ideals of $\mathbf F$ are $\{0\}$ and $\mathbf F$, we know that $\ker(\phi)=\{0\}$. This implies that $\phi$ is injective. It is known that $\phi$ is not surjective in general. However, if we consider the following argument below, it seems to me that $\phi$ must be an automorphism (i.e., a bijective endomorphism). "We claim that $\mathbf F/\{0\}\cong\mathbf F$. Consider the homomorphism $\theta:\mathbf F/\{0\}\rightarrow\mathbf F$ given by $(\{0\}+r)\theta=r$, where $r\in\mathbf F$. Then it is easy to show that it is both injective and surjective; and so our claim holds. However, we also know by the First Ring Isomorphism Theorem that $\mathbf F/\ker(\phi)\cong \text{im}(\phi)$. So we conclude that $\text{im}(\phi)\cong\mathbf F$. But $\text{im}(\phi)\subseteq\mathbf F$ so necessarily $\text{im}(\phi)=\mathbf F$. Therefore $\phi$ is an automorphism of $\mathbf F$." Could someone tell me where the mistake lies? AI: Your mistake comes in assuming that because $\textrm{im} \phi \subseteq \Bbb{F}$ and $\textrm{im} \phi \cong \Bbb{F}$ then $\textrm{im} \phi = \Bbb{F}$. Here's a good example to consider. I will produce for you an example of two fields $E,H$ one of which is contained in the other with $E \cong H$ (as rings) but $E \neq H$. Consider $F = \Bbb{Z}/2\Bbb{Z}$ and $t$ an indeterminate. Let $E = F(t)$, $H = F(t^2)$. Then you can see that $H \subseteq E$ and $H \cong E$ (as rings) by the map $f$ that is constant on $F$ and sends $t \mapsto t^{2}$. However clearly $H \neq E$. To see this begin by noticing that $F(t^2)$ is the fraction field of $F[t^2]$. If $t \in F(t^2)$, we have that $\frac{p(t^2)}{q(t^2)} = t$ for some polynomials $p$ and $q$. Then you have $tq(t^2) = p(t^2)$. 
But then the left-hand side has only odd powers of $t$ while the right-hand side has only even powers, a contradiction.
H: Why does $\{1\cdot a\! \! \pmod p, 2\cdot a\! \! \pmod p,\ldots, (p-1)\cdot a\! \! \pmod p\}$ $= \{1, 2,\ldots, p-1\}$ when $a$ and $p$ are coprime? Why is it that $\{1\cdot a \pmod p, 2\cdot a \pmod p,\ldots, (p-1)\cdot a \pmod p\} = \{1, 2,\ldots, p-1\}$ (albeit in a different order) when a and p are coprimes? I can't figure this out and I've been beating my head for the whole weekend. Googling around I've found mention of Fermat's Little Theorem (e.g. here), but I can't see how it helps me. I've verified it by hand, it seems perfectly believable to me (mostly because I find myself thinking of the way the circle of fifths works), but I can't come up with a good proof. Any help, pretty please? Thanks a lot. P.S.: Pardon my English. I'm from the land of pizza and mandolins. AI: Suppose $ra$ and $sa$ are the same, modulo $p$. Then $sa-ra$ is a multiple of $p$. So $(s-r)a$ is a multiple of $p$. But by hypothesis $a$ and $p$ are coprime, so $s-r$ is a multiple of $p$. But if $r$ and $s$ are between 1 and $p-1$, inclusive, then $s-r$ can't be a multiple of $p$ unless $r=s$. This shows that all the numbers in the first set in your question are different. Since zero doesn't appear in that set, and since there are $p-1$ numbers in that set, they must be the same as the numbers in the second set in your question.
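A brute-force check (my own illustration, not part of the proof) makes the claim concrete, and also shows how it fails when $a$ and the modulus are not coprime:

```python
def residues(a, p):
    # {1*a mod p, 2*a mod p, ..., (p-1)*a mod p}
    return {(k * a) % p for k in range(1, p)}

p = 7
for a in range(1, p):                  # every such a is coprime to the prime 7
    assert residues(a, p) == set(range(1, p))

# The hypothesis matters: with a shared factor the set collapses.
assert residues(2, 6) == {0, 2, 4}     # not {1, ..., 5}
```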
H: Understanding of Derivatives via Limits The book I'm reading introduces derivatives via limits. It gives the following example: $f(x) = 12x-x^3$ $f'(x)=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$ $=\lim_{\Delta x\rightarrow 0}\frac{12(x+\Delta x)-(x+\Delta x)^3-(12x-x^3)}{\Delta x}$ $=\lim_{\Delta x\rightarrow 0}\frac{12x+ 12\Delta x -x^3 -3x^2\Delta x -3x(\Delta x)^2-(\Delta x)^3-12x+x^3}{\Delta x}$ $=\lim_{\Delta x\rightarrow 0}(12 - 3x^2 - 3x\Delta x - (\Delta x)^2)$ $=12-3x^2$ I'm having trouble with how they got from step 2 to step 3. Where did $\Delta x$ on the bottom go? AI: You have: $$=\lim_{\Delta x\rightarrow 0}\frac{12x+ 12\Delta x -x^3 -3x^2\Delta x -3x(\Delta x)^2-(\Delta x)^3-12x+x^3}{\Delta x}$$ Note that $\color{#bb0000}{12x} + 12\Delta x - \color{#00bb00}{x^3} - 3x^2\Delta x - 3x (\Delta x)^2 - (\Delta x)^3 - \color{#bb0000}{12x} + \color{#00bb00}{x^3} $ $= 12 \Delta x - 3x^2 \Delta x - 3x (\Delta x)^2 - (\Delta x)^3$ So they just cancel out a $\Delta x$ everywhere.
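A numerical sanity check (my own illustration; here $f(x)=12x-x^3$, matching the worked expansion and the final answer $12-3x^2$) shows the difference quotient approaching the derivative as $\Delta x\to0$:

```python
def f(x):
    return 12 * x - x**3

def quotient(x, dx):
    # The difference quotient before taking the limit.
    return (f(x + dx) - f(x)) / dx

def fprime(x):
    return 12 - 3 * x**2    # the book's limit

# Shrinking dx drives the quotient toward 12 - 3x^2.
errs = [abs(quotient(2.0, dx) - fprime(2.0)) for dx in (1e-2, 1e-4, 1e-6)]
```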
H: Factorize $f$ as product of irreducible factors in $\mathbb Z_5$ Let $f = 3x^3+2x^2+2x+3$, factorize $f$ as product of irreducible factors in $\mathbb Z_5$. First thing I've used the polynomial reminder theorem so to make the first factorization: $$\begin{aligned} f = 3x^3+2x^2+2x+3 = (3x^2-x+3)(x+1)\end{aligned}$$ Obviously then as second step I've taken care of that quadratic polynomial, so: $x_1,x_2=\frac{-b\pm\sqrt{\Delta}}{2a}=\frac{1\pm\sqrt{1-4(9)}}{6}=\frac{1\pm\sqrt{-35}}{6}$ my question is as I've done calculations in $\mathbb Z_5$, was I allowed to do that: as $-35 \equiv_5 100 \Rightarrow \sqrt{\Delta}=\sqrt{-35} = \sqrt{100}$ then $x_1= \frac{11}{6} = 1 \text { (mod 5)}$, $x_2= -\frac{3}{2} = 1 \text { (mod 5)}$, therefore my resulting product would be $f = (x+1)(x+1)(x+1)$. I think I have done something illegal, that is why multiplying back $(x+1)(x+1)$ I get $x^2+2x+1 \neq 3x^2-x+3$. Any ideas on how can I get to the right result? AI: If $f(X) = aX^2 + bX + c$ is a quadratic polynomial with roots $x_1$ and $x_2$ then $f(X) = a(X-x_1)(X-x_2)$ (the factor $a$ is necessary to get the right leading coefficient). You found that $3x^2-x+3$ has a double root at $x_1 = x_2 = 1$, so $3x^2-x+3 = 3(x-1)^2$. Your mistakes were You forgot to multiply by the leading coefficient $3$. You concluded that a root in $1$ corresponds to the linear factor $(x+1)$, but this would mean a root in $-1$. The right linear factor is $(x-1)$.
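The factorization $f=3(x-1)^2(x+1)$ over $\mathbb{Z}_5$ can be verified by multiplying the factors back together modulo $5$ (illustrative sketch of my own, not part of the derivation):

```python
def polymul(p, q, mod):
    # Multiply coefficient lists (lowest degree first) modulo `mod`.
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % mod
    return r

MOD = 5
f = [3, 2, 2, 3]                       # 3 + 2x + 2x^2 + 3x^3

x_minus_1 = [4, 1]                     # x - 1 = x + 4 (mod 5)
x_plus_1 = [1, 1]
product = polymul([3], polymul(polymul(x_minus_1, x_minus_1, MOD),
                               x_plus_1, MOD), MOD)
assert product == f                    # 3(x-1)^2 (x+1) == f in Z_5
```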
H: Computing operator norm exercise I did the following exercise (given in my notes); can you tell me if my answer is correct? Thanks. Exercise: Compute the operator norm of the continuous map $f \mapsto f$ when viewed: (a) as a map $T: C^1([0, 1]) \to C([0, 1])$ (b) as a map $S: C([0, 1]) \to L^1_\mu([0, 1])$, where $\mu$ is Lebesgue measure on $[0, 1]$ (c) Compute the operator norm of the composition of the maps from (a) and from (b). (d) Now restrict the maps in (a), (b) and (c) to the space of functions $f$ with $f(0) = 0$, and compute the operator norms again. My answer: (a) $\|T\| = \sup_{\|f\|_{C^1} \leq 1} \|f\|_\infty \leq \sup_{\|f\|_\infty \leq 1} \|f\|_\infty = 1$ since for example $f(x) = x \in C^1[0,1]$ and $\|f\|_\infty = 1$. OTOH, $\|T\| \geq 1$ since $\|Tf\| = \|x\| = 1$. (b) $\|S\| = \sup_{\|f\|_\infty \leq 1} \|f\|_1 = 1$ since $\|f\|_1 \leq \int_{[0,1]} 1 d \mu$ so $\sup_{\|f\|_\infty} \|f\|_1 \leq 1$. But $f(x) = 1 \in C[0,1]$ and $\|1\|_\infty \leq 1$ and $\|1\|_1 = 1$. (c) $\|ST\| \leq \|S\| \|T\| = 1$. We have $\|ST\| \geq 1$ since for $f(x) = 1$, $\| STf\| = \|Sf\| = 1$. (d) (d.a) $\|T\| = 1$ by the same argument as in (a). (d.b) $\|S\| = 1$ because the sequence $f_n$ where $f_n$ is the function that is $nx$ on $[0, \frac1n]$ and $1$ on $[\frac1n, 1]$ is in $C[0,1]$ and $\|f_n\|_\infty \leq 1$ and $\|f_n\|_1 \to 1$ (by monotone convergence theorem) (d.c) $\|ST\| \leq 1$ (same argument as (c)). Unfortunately, $f_n$ from (d.b) are not in $C^1[0,1]$. So I'm not sure what to do. Is $\|ST\| = 1$ in this case too? Or perhaps not? AI: (Using the norm $\|f\|_{C^1}=\|f\|_\infty+\|f'\|_\infty$.) (d.a) By the mean value theorem, we have $\|f\|_\infty\le\|f'\|_\infty$, since for all $x$, $|f(x)-f(0)|\le\|f'\|_\infty|x-0|\le\|f'\|_\infty$; so $2\|f\|_\infty\le\|f\|_\infty+\|f'\|_\infty=1$, and hence $\|T\|\le1/2$. But $f$, defined by $f(x)=\frac{x}{2}$, satisfies $\|f\|_{C^1}=1$ and $\|f\|_\infty=1/2$, so $\|T\|=1/2$. (d.c) Let $f\in C^1$ with norm $1$.
By the previous argument, $\|f\|_\infty\le1/2$. Write $m=\|f\|_\infty$ and $M=\|f'\|_\infty$, so that $m+M\le1$ and, since $f(0)=0$, the mean value theorem gives $|f(x)|\le\min(m,Mx)$ for all $x\in[0,1]$. Hence $$\|STf\|_1\le\int_0^1\min(m,Mx)\,dx=m-\frac{m^2}{2M},$$ and maximizing the right-hand side subject to $m+M=1$ (the maximum occurs at $m=1-1/\sqrt3$) yields $\|STf\|_1\le2-\sqrt3$. The bound is sharp: $C^1$ smoothings of the piecewise linear function that rises with slope $1/\sqrt3$ to height $1-1/\sqrt3$ and then stays constant have $C^1$-norm $1$ and $L^1$-norms approaching this value. From this, we have $\|ST\|=2-\sqrt3$.
H: Find the acute angle of intersection of the curves $y=\cos x$ and $y=e^{-x}$ at the point $(0,1)$. Find the acute angle of intersection of the curves $y=\cos x$ and $y=e^{-x}$ at the point $(0,1)$. My method: $y=\cos(x)$ $(0,1)$ $1=\cos(0)$ $=0$ $\frac{dy}{dx}=-\sin(x)$ $=-\sin(0)$ $=0$ I did the above step exactly from the example given in the text book, but I can't get the answer. The answer is $45^{\circ}$ Help me out by step by step solution. thanks AI: $$f(x)=\cos x\Longrightarrow f'(x)=-\sin x\Longrightarrow f'(0)=0=:m_1$$ $$g(x)=e^{-x}\Longrightarrow g'(x)=-e^{-x}\Longrightarrow g'(0)=-1=:m_2$$ So you have that the functions' tangent lines at $\,(0,1)\,$ have slopes $\,0\,$ (this means the tangent line of $\,f\,$ at this point is horizontal) and $\,-1\,$, so what's the acute angle between two lines with these slopes? Yup, it is $\,45^\circ\,$, as you can readily check. Of course, you can use the formula $$\alpha = \arctan\left|\frac{m_1-m_2}{1+m_1m_2}\right|=\arctan 1=\frac{\pi}{4}\ \text{radians}=45^\circ$$ with $\,\alpha\,=$ the angle between the curves.
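A one-line numeric check of the angle formula (my own illustration, not from the answer), using the two slopes at $(0,1)$:

```python
import math

m1 = 0.0     # slope of y = cos(x) at x = 0
m2 = -1.0    # slope of y = exp(-x) at x = 0

# Acute angle between the two tangent lines, in degrees.
alpha = math.degrees(math.atan(abs((m1 - m2) / (1 + m1 * m2))))
```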
H: Average run lengths for large numbers of trials: Intuition and proof This article states that the formula for the average run lengths for large numbers of trials is:$$\frac{1}{1-Pr(event\ in\ one\ trial)}.$$ My questions What is the intuition behind this formula? Do you know an elementary proof for this result? AI: This is one of the workhorses of first courses in probability. Assume one repeats an experiment whose probability to happen in one trial is $p$, that the results of the successive trials are independent, and that one wishes to estimate the mean number $R$ of consecutive successes before the next failure (that is, a run of successes) once a success occurred. For every $r\geqslant1$, the event $[R\geqslant r]$ corresponds to $r-1$ supplementary successes after the first one, hence $\mathrm P(R\geqslant r)=p^{r-1}$. Thus the distribution of $R$ is geometric with parameter $p$ and the mean length of a run is the expectation of $R$, that is, $$ \langle R\rangle=\sum_{r\geqslant1}r\cdot\mathrm P(R=r)=\sum_{r\geqslant1}\mathrm P(R\geqslant r)=\sum_{s\geqslant0}p^s=\frac1{1-p}. $$ Edit Alternatively, note that either a failure occurs immediately after the first success and then $R=1$, which happens with probability $1-p$, or a second success occurs immediately after the first success and then $R=1+R'$ where $R'$ is distributed like $R$, which happens with probability $p$. Taking expectations, one sees that $\langle R\rangle=(1-p)\cdot1+p\cdot(1+\langle R\rangle)$, that is, $\langle R\rangle=1/(1-p)$.
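The closed form is easy to compare against a simulation (illustrative sketch of my own, with an assumed success probability $p=0.7$):

```python
import random

def run_length(p, rng):
    # Length of a run of successes, counting the success that starts the run.
    n = 1
    while rng.random() < p:
        n += 1
    return n

p = 0.7
rng = random.Random(0)          # fixed seed for reproducibility
trials = 200_000
mean = sum(run_length(p, rng) for _ in range(trials)) / trials
expected = 1 / (1 - p)          # 10/3, about 3.33
```

The sample mean lands close to $1/(1-p)$, matching the geometric-distribution calculation in the answer.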
H: If two sets have the same sum and xor are they necessarily the same? Let $A = \{A_1, A_2, A_3, \cdots, A_n\}$ and $B = \{B_1, B_2, B_3,\cdots, B_n\}$. where $A_i\in \mathbb{Z}$ and $B_i\in \mathbb{Z}$. Say, $$S_{1} = A_1 + A_2 + A_3 + \cdots + A_n = \sum_{i=1}^{n}{A_{i}} \\ S_{2} = B_1 + B_2 + B_3 + \cdots + B_n = \sum_{i=1}^{n}{B_{i}}$$ And, $$X_1 = A_1 \oplus A_2 \oplus A_3 \oplus \cdots \oplus A_n = \bigoplus_{i=1}^{n}{A_{i}} \\ X_2 = B_1 \oplus B_2 \oplus B_3 \oplus \cdots \oplus B_n = \bigoplus_{i=1}^{n}{B_{i}}$$ Where $\oplus$ is the XOR operator. If $S_{1} = S_{2}$ and $X_{1}=X_{2}$, does this imply that $A$ and $B$ contain the same set of integers? AI: No. Counterexample: $$\begin{align*} A &= \{ 1, 6, 8, 48 \} \\ B &= \{ 3, 4, 24, 32 \} \end{align*}$$ More generally, any sets of integers of the form $$ A = \{ 2^{a_k}, 2^{b_k} + 2^{c_k} \}_{k = 1,2,\ldots} \qquad\qquad B = \{ 2^{a_k} + 2^{b_k}, 2^{c_k} \}_{k = 1,2,\ldots} $$ where the sequences $a_k, b_k, c_k$ never repeat and also don't have any elements in common, will be a counterexample. This can be generalised to any sequence of non-overlapping binary vectors, in which there are more vectors with Hamming weight 2 or greater than Hamming weight 1, interpreted as integers in binary notation.
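Both the specific counterexample and the general pattern are easy to verify mechanically (illustrative script of my own, not part of the answer):

```python
from functools import reduce
from operator import xor

A = [1, 6, 8, 48]
B = [3, 4, 24, 32]

assert set(A) != set(B)
assert sum(A) == sum(B) == 63
assert reduce(xor, A) == reduce(xor, B) == 63

# The general pattern: A gets 2^a and 2^b + 2^c, B gets 2^a + 2^b and 2^c,
# with all exponents distinct so the binary vectors never overlap.
a, b, c = 7, 9, 11
A2 = [2**a, 2**b + 2**c]
B2 = [2**a + 2**b, 2**c]
assert set(A2) != set(B2)
assert sum(A2) == sum(B2)
assert reduce(xor, A2) == reduce(xor, B2)
```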
H: Lie algebra representation induced from homomorphism between spin group and SO(n,n) Consider the spin group, we know it is a double cover with the map: $\rho: Spin(n,n)\longrightarrow SO(n,n)$ s.t $\rho(x)(v)= xvx^{-1}$ where $v$ is an element of the $2n$-dimensional vector space V and $x$ is an element of the spin group (multiplications are Clifford multiplication). I read that this map induces a Lie algebra representation given by: $d\rho:so(n,n) \longrightarrow so(n,n)$ s.t $d \rho_{x}(v)=xv-vx$ here $x$ is an element of $so(n,n)$ and $v$ is again an element of V. I cannot understand the derivation of this Lie algebra representation. Can anyone help me? :) AI: This basically comes from the product rule of differentiation. Recall that for a general Lie group homomorphism $\rho : G\to H$ you can compute its derivative $d\rho: Lie(G) \to Lie(H)$ by the formula $$ d\rho(X) = \frac{d}{dt}\vert_{t=0} \rho(\exp(tX)). $$ In the case you have, this gives $$ d\rho(x) (v) = \frac{d}{dt}\vert_{t=0} \exp(tx) v \exp(-tx), $$ where $x \in so(n,n)$, which is identified with the second filtration of the Clifford algebra. Now you can think of the right hand side as taking place in the Clifford algebra, where the product rule of differentiation holds, so you get $$ d\rho(x)(v) = \frac{d}{dt} \vert_{t=0} \exp(tx) v \exp(0x) + \exp(0x) v \frac{d}{dt}\vert_{t=0} \exp(-tx) = xv +v(-x). $$ Notice that this formula does not make sense if you think of $so(n,n)$ in terms of $2n \times 2n$ matrices since right multiplying a column vector by a matrix doesn't make sense. The $xv - vx$ is taking place in the Clifford algebra.
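The product-rule computation can be illustrated numerically. The identity $\frac{d}{dt}\big\vert_{t=0}e^{tx}\,v\,e^{-tx}=xv-vx$ holds in any associative algebra, so here is a sketch with plain $2\times 2$ matrices standing in for Clifford-algebra elements; `expm` is a hand-rolled truncated power series, and the particular $X$, $V$, and step $t$ are arbitrary choices of mine:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def expm(A, terms=25):
    """Truncated power series for the 2x2 matrix exponential."""
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = scale(1.0 / k, mul(term, A))
        result = add(result, term)
    return result

X = [[0.0, 1.0], [-1.0, 0.0]]
V = [[2.0, 0.5], [1.0, -1.0]]

t = 1e-5
conj = mul(mul(expm(scale(t, X)), V), expm(scale(-t, X)))
deriv = scale(1.0 / t, add(conj, scale(-1.0, V)))   # (e^{tX} V e^{-tX} - V)/t
bracket = add(mul(X, V), scale(-1.0, mul(V, X)))    # XV - VX

err = max(abs(deriv[i][j] - bracket[i][j]) for i in range(2) for j in range(2))
print(err < 1e-3)  # True
```

The finite-difference quotient agrees with the commutator up to an $O(t)$ error, exactly as the product-rule derivation predicts.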
H: Prove if $x^{2}-5xy-3$ is even, then $x+y$ is odd, where $x,y \in\mathbb{Z}$ I know for you this is easy but for me it is not. I gave it my best shot but it was no use, so I need someone to teach me this material. As I tried to solve this one, I came up with this answer: Suppose $x^2-5xy-3$ is even, then $x=2a + 1$ and $y=2b$ for some integers $a,b \in\mathbb{Z}$. Thus, $$\begin{aligned}x^2-5xy-3&=(2a+1)^2-5(2a+1)(2b)-3 \\ &=(4a^2+4a+1)-20ab-10b-3 \\ &=2(2a^2+2a)-20ab-10b-2 \\ &=\;?\end{aligned}$$ And I don't know what the next step is. I know there's something wrong with my procedure. I also have plenty of other questions that need to be answered. I've already attempted these problems but I can't solve them. If $m$ is odd and $n$ is even, then $m^2-5mn+n^2+1$ is even, where $m,n\in\mathbb{Z}$. If $x-y$ is even, then $x^2+3xy-5$ is odd, where $x,y \in\mathbb{Z}$. Let $a,b \in\mathbb{Z}$. If $2b^2-3ab+1$ is even, then $2a-b$ is odd. Let $m,n \in\mathbb{Z}$. Prove that if $m^2+1$ is even, then $2n+m$ is even. Even though I'm not good at math, I know that in the future I will be good at math by practicing and with your help. This is not an assignment; I'm practicing solving problems like this to get good at math. AI: Here is how to do the first problem you mentioned within the question (the one where you showed your work) - it should give you the general idea. Key fact A: An integer $t$ is even if and only if $t=2k$ for some integer $k$; and similarly, $t$ is odd if and only if $t=2k+1$ for some integer $k$. Key fact B: $$\begin{array}{c|c|c|} + & \bf\text{odd} & \bf\text{even}\\\hline \bf\text{odd} & \text{even} & \text{odd}\\\hline \bf\text{even} & \text{odd} & \text{even}\\\hline \end{array}$$ Key fact C: $$\begin{array}{c|c|c|} \times & \bf\text{odd} & \bf\text{even}\\\hline \bf\text{odd} & \text{odd} & \text{even}\\\hline \bf\text{even} & \text{even} & \text{even}\\\hline \end{array}$$ Now suppose that, for some $x,y\in\mathbb{Z}$, the quantity $$x^2-5xy-3=(x^2-5xy)+(-3)$$ is even. 
Because $-3$ is odd, this is only possible if $x^2-5xy$ is odd (look at key fact B). We can factor $x^2-5xy$ as $(x)\times(x-5y)$. The only way that $x^2-5xy$ can be odd is if both $x$ and $x-5y$ are odd (look at key fact C). Because $x$ is odd and $x-5y=(x)+(-5y)$ is odd, we can see that $-5y$ must be even (key fact B). Because $-5y=(-5)\times (y)$ is even and $-5$ is odd, it must be the case that $y$ is even (key fact C). Thus, we have shown, starting from the knowledge that $x^2-5xy-3$ is even, that $x$ must be odd and $y$ must be even. By key fact A, there must be an integer $a$ such that $x=2a+1$, and there must be an integer $b$ such that $y=2b$. Lastly, using key fact B, the fact that $x$ is odd and $y$ is even implies that $x+y$ is odd.
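The claim can be checked exhaustively over a range of integers (a brute-force sanity check, not a proof):

```python
# exhaustively check: whenever x^2 - 5xy - 3 is even, x + y is odd
ok = all(
    (x + y) % 2 == 1
    for x in range(-50, 51)
    for y in range(-50, 51)
    if (x * x - 5 * x * y - 3) % 2 == 0
)
print(ok)  # True
```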
H: An homeomorphism between $\mathbb{R}-\mathbb{Q}$ and $(\mathbb{R}-\mathbb{Q})\cap (0,1)$? Are $\mathbb{R}-\mathbb{Q}$ and $(\mathbb{R}-\mathbb{Q})\cap (0,1)$ homeomorphic? My claim is they are and I'm trying using this function:$$f:(\mathbb{R}-\mathbb{Q})\cap (0,1) \rightarrow (\mathbb{R}-\mathbb{Q})\cap (0,\infty)\;\;\;\; \;f(x)=\frac{1}{x}-1$$ which is a restriction of $g=1/x-1$. Proven this, then it would be easy to prove it for ($-\infty$,$+\infty$). So I think I now need to show that $f$ is well defined, which is true because $g$ transform rational numbers into rational and irrational into irrational. So $f$ is well defined, it's bijective, but is it continuous in the subspace topology? I believe it is using the same argument I exposed two lines above. Is my claim false, and/or the proof? AI: Yes, they are homeomorphic. They are both homeomorphic to the Baire space $\omega^\omega$ of all sequences of natural numbers, which is a classical result in descriptive set theory. Your argument seems correct. Continuity follows from the fact that it is a restriction of a rational function (and rational functions are continuous where defined), and rational functions with rational coefficients preserve rationality. As the inverse of $f$ (that is, $1/(y+1)$) is well-defined and clearly continuous and preserves rationality (implying $f$ preserves irrationality), it is enough. ==edit== I just noticed that you intended to show homeomorphism with $\mathbf R\setminus \mathbf Q$ and not $\mathbf R_{>0}\setminus \mathbf Q$. In this case you should extend your argument a little, like so for example: $(0,1)\setminus \mathbf Q$ is easily homeomorphic with $(-1,1)\setminus \mathbf Q$ (by $h(x)=2x-1$), and then $f$ defined in the same way for positive numbers and separately as $-f(-x)$ for negative numbers yield a homeomorphism onto $\mathbf R\setminus\mathbf Q$. Continuity is still not hard to see.
H: Counting the number of graphs on n vertices I want to count the number of simple graphs on $n$ vertices where it is given that there is a fixed $K_k$ among those $n$ vertices. The way I am reasoning is this: the edges within the $n-k$ non-$K_k$ vertices can be filled out in $2^{\tbinom{n-k}{2}}$ ways, and for each of the $k$ vertices $v_1,\cdots v_k$, the edges from $v_i$ to the $n-k$ non-$K_k$ vertices can be filled out in $2^{n-k}$ ways. This yields a total of $2^{k(n-k)}2^{\tbinom{n-k}{2}}$. Is this correct? Thanks. AI: This is correct, but it seems a bit more complicated than necessary. You have a mixed term $kn$ in the exponent that cancels when you spell out the binomial coefficient. To avoid introducing it in the first place, you could argue that there are $\binom n2$ edges in all and $\binom k2$ of them are already determined, so the exponent is $\binom n2-\binom k2$. By the way, in a case like this I'd specify that you want to count the number of labeled graphs.
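For small $n$ and $k$ the count $2^{\binom n2-\binom k2}$ can be confirmed by brute force over all labeled graphs (the helper name is mine; the fixed $K_k$ is taken on the first $k$ vertices):

```python
from itertools import combinations
from math import comb

def count_graphs_with_fixed_clique(n, k):
    edges = list(combinations(range(n), 2))
    clique_edges = set(combinations(range(k), 2))
    count = 0
    for mask in range(1 << len(edges)):          # every labeled graph on n vertices
        chosen = {e for i, e in enumerate(edges) if mask >> i & 1}
        if clique_edges <= chosen:               # contains the fixed K_k
            count += 1
    return count

n, k = 4, 3
print(count_graphs_with_fixed_clique(n, k), 2 ** (comb(n, 2) - comb(k, 2)))  # 8 8
```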
H: Prove that: $ \int_{0}^{1} \ln \sqrt{\frac{1+\cos x}{1-\sin x}}\,dx\le \ln 2$ I plan to prove the following integral inequality: $$ \int_{0}^{1} \ln \sqrt{\frac{1+\cos x}{1-\sin x}}\,dx\le \ln 2$$ Since we have to deal with a convex function on this interval, I thought of considering the area of the trapezoid formed by joining the points $(0, f(0))$ and $(1, f(1))$, where $f(x) =\ln \sqrt{\frac{1+\cos x}{1-\sin x}}$, but things get ugly even if the method itself isn't complicated. So, I'm looking for something better if possible. AI: Here's a (hopefully) corrected proof that uses convexity along with the trapezoid rule: You can rewrite what you're trying to prove as $$ \int_{0}^{1} \ln {\frac{1+\cos x}{1-\sin x}}\,dx\le 2\ln 2$$ Let $f(x) = \ln {\frac{1+\cos x}{1-\sin x}} = \ln(1 + \cos x) - \ln (1 - \sin x)$. Then $$f'(x) = -\frac{\sin x}{1 + \cos x} + \frac{\cos x}{1 - \sin x}$$ Using the tangent half-angle formulas, this is the same as $$-\tan(x/2) + \tan(x/2 + \pi/4)$$ Therefore $$f''(x) = -(1/2)\sec^2(x/2) + (1/2)\sec^2(x/2 + \pi/4)$$ Since $\sec$ is increasing on $(0,1/2 + \pi/4)$, we see that $f''(x) > 0$. So the integrand is convex. When applied to a convex function, the trapezoid rule always gives a result larger than the integral. But already with $2$ pieces, the trapezoid rule here gives $$\frac14\Big(\ln(1 + \cos(0)) - \ln(1 - \sin(0)) + 2\big(\ln(1 + \cos(1/2)) - \ln(1 - \sin(1/2))\big)$$ $$ +\ln(1 + \cos(1)) - \ln(1 - \sin(1)) \Big)$$ $$= 1.3831395912690787...$$ This is slightly less than $2\ln2 = 1.3862943611198906...$, so the original integral is less than $\ln 2$ as needed.
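The chain of numerical claims at the end, that the integral is below the 2-piece trapezoid value, which in turn is below $2\ln 2$, can be reproduced with a quick midpoint-rule computation (the step count is an arbitrary choice):

```python
import math

def f(x):
    return math.log((1 + math.cos(x)) / (1 - math.sin(x)))

# trapezoid rule with 2 pieces on [0, 1]
trap = 0.25 * (f(0) + 2 * f(0.5) + f(1))

# fine midpoint approximation of the integral itself
n = 100_000
integral = sum(f((k + 0.5) / n) for k in range(n)) / n

print(integral < trap < 2 * math.log(2))  # True
```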
H: Proof that operator is compact Prove that the operator $T:\ell^1\rightarrow\ell^1$ which maps $x=(x_1,x_2,\dots)$ to $\left(x_1,\frac{x_2}{2},\frac{x_3}{3},\dots\right)$ is compact. For an arbitrary sequence $x^{(N)}\in\ell^1$ one would have to extract a convergent subsequence of $T x^{(N)}$. Maybe via the diagonal argument? AI: Define $$T_j(x):=\left(x_1,\frac{x_2}2,\dots,\frac{x_j}j,0,\dots,0\right).$$ It's a compact operator (because it has finite rank) and $$T(x)-T_j(x)=\left(0,\dots,0,\frac{x_{j+1}}{j+1},\dots\right),$$ hence $$\lVert T(x)-T_j(x)\rVert_{\ell^1}=\sum_{k=j+1}^{+\infty}\frac{|x_k|}{k}\leq \frac 1{j+1}\sum_{k=j+1}^{+\infty}|x_k|\leq \frac 1{j+1}\lVert x\rVert_{\ell^1},$$ which proves that $\lVert T-T_j\rVert\leq \frac 1{j+1}$. To conclude, notice that a norm limit of compact operators is compact.
H: How to prove that $a^2(1+b^2)+b^2(1+c^2)+c^2(1+a^2)\geq6abc$ Help me prove $a^2(1+b^2)+b^2(1+c^2)+c^2(1+a^2)\geq6abc$ AI: Since $(a-bc)^2\geq 0$, $(b-ac)^2\geq 0$, $(c-ab)^2\geq 0$, we have $a^2+b^2c^2\geq 2abc$, $b^2+a^2c^2\geq 2abc$, $c^2+a^2b^2\geq 2abc$. Adding these three inequalities gives $a^2+b^2c^2+b^2+a^2c^2+c^2+a^2b^2\geq6abc$, that is, $a^2+a^2b^2+b^2+b^2c^2+c^2+a^2c^2\geq6abc$, which is exactly the left-hand side expanded.
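A quick exhaustive check over a grid of integers (a sanity check, not a proof) confirms the inequality, which in fact holds for all real $a,b,c$ since each squared term above is nonnegative:

```python
def lhs(a, b, c):
    return a * a * (1 + b * b) + b * b * (1 + c * c) + c * c * (1 + a * a)

rng = range(-8, 9)
holds = all(lhs(a, b, c) >= 6 * a * b * c for a in rng for b in rng for c in rng)
print(holds)  # True
```

Note that equality is attained, e.g. at $a=b=c=1$, where both sides equal $6$.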
H: minimum requirement to be $f=g$ , $f$, $g$ are holomorphic Given that $f,g:\mathbb{C}\rightarrow \mathbb{C}$ are holomorphic, $A=\{x\in\mathbb{R}:f(x)=g(x)\}$. The minimum requirement for $f=g$ is $A$ is uncountable $A$ has positive Lebesgue measure $A$ contains a nontrivial interval $A=\mathbb{R}$ By the identity theorem, for $f=g$ we just need a limit point inside $A$. If $A$ has positive Lebesgue measure then it will contain an interval, which will have a limit point, so is $2$ correct? AI: The minimum in the list is requirement 1. The discrete subsets of $\Bbb C$ are necessarily countable, since $\Bbb C$ is separable (*). So if $A$ is uncountable, it's not a discrete set and it has a limit point, which proves that $f\equiv g$. (*) Let $D\subset\Bbb C$ be discrete. For $z\in D$, consider $r_z$ such that $B(z,r_z)\cap (D\setminus \{z\})=\emptyset$. Then by separability, extract from the open cover $(B(z,r_z),z\in D)$ a countable subcover of $D$.
H: Automorphism of an Infinite cyclic group $\newcommand{\Id}{\operatorname{Id}}$ $f$ is an automorphism of an infinite cyclic group $G$ then 1.$f^n\neq \Id_G$ 2.$f^2=\Id_G$ 3.$f=\Id_G$ if $f^n=\Id_G$ then every element of $G$ will have finite order but in an infinite cyclic group only identity element has finite order, same for 2, so $3$ is correct? AI: Why would $f^n=\text{id}_G$ imply that every element of $G$ will have finite order? Just because $$f^n(a)=\underbrace{f(f(\cdots f}_{n\text{ times}}(a)))=a$$ does not mean that $a^n=a$. Hint: An infinite cyclic group is isomorphic to $\mathbb{Z}$. Which individual elements of $\mathbb{Z}$ generate $\mathbb{Z}$? Any group homomorphism $f:\mathbb{Z}\to G$ is determined by what it does to generators. Where can a homomorphism $f:\mathbb{Z}\to\mathbb{Z}$ send a generator if it is to be an automorphism?
H: lim sup of sequence of continuous functions from $[0,1]\rightarrow [0,1]$ Let $f_n:[0,1]\to [0,1]$ be continuous functions and let $f:[0,1]\to [0,1]$ be defined by $$f(x)=\operatorname{lim\;sup}\limits_{n\rightarrow\infty}\; f_n(x)$$ Then $f$ is continuous and measurable continuous, but need not be measurable measurable, but need not be continuous need not be measurable or continuous. I guess $3$ is correct, but I'm not able to prove it. AI: Note that $$\lim\sup f_n(x)=\inf_{n\geq 1} \sup_{k\geq n} f_k(x)$$ Let $g_n(x)=\sup_{k\geq n}f_k(x)$, then $$g_n^{-1}(-\infty,a]=\lbrace x :g_n(x)\leq a\rbrace=\lbrace x :\sup_{k\geq n}f_k (x)\leq a\rbrace=\lbrace x: f_k(x)\leq a\mbox{ for all } k\geq n\rbrace$$ Hence $$g_n^{-1}(-\infty,a]=\bigcap_{k\geq n}f_k^{-1}(-\infty,a]$$ It follows that $g_n^{-1}(-\infty,a]$ is a measurable set (being an intersection of measurable sets) and so $g_n$ is measurable. In a similar fashion we can prove that $\inf_{n\geq 1}g_n$ is also measurable (try), so we have that $\lim\sup f_n(x)$ is measurable. For the second part, $f_n(x)=x^n$ converges pointwise to $g$ where $g(x)=0$ when $0\leq x<1$ and $g(1)=1$, and surely $g$ is not continuous.
H: upper bound for a sum of inverse index-distances Consider the following sum: $$\sum_{i=1}^{n} \sum_{\substack{j=1\\ j\neq i}}^{n} \frac{1}{\vert i-j \vert ^{1/2}} \leq const \; n^\alpha$$ What is a good (i.e. also easy to achieve) and the best possible estimate (i.e. $\alpha$ being as small as possible) for this sum? AI: Compare sums to integrals. As mentioned by @Davide Giraudo, the sum to estimate is $2S_n$ with $$ S_n=n\sum\limits_{k=1}^n\frac1{\sqrt{k}}-\sum\limits_{k=1}^n\sqrt{k}. $$ For every $k\geqslant1$, $$ \frac1{\sqrt{k}}\leqslant\int_{k-1}^k\frac{\mathrm dx}{\sqrt{x}},\qquad\sqrt{k}\geqslant\int_{k-1}^k\sqrt{x}\mathrm dx, $$ hence, summing these, $$ S_n\leqslant \int_{0}^n\left(\frac{n}{\sqrt{x}}-\sqrt{x}\right)\mathrm dx=\left[2n\sqrt{x}-\frac23x\sqrt{x}\right]_{x=0}^{x=n}=\frac43n\sqrt{n}. $$ Likewise, for every $k\geqslant1$, $$ \frac1{\sqrt{k}}\geqslant\int_{k}^{k+1}\frac{\mathrm dx}{\sqrt{x}},\qquad\sqrt{k}\leqslant\int_{k}^{k+1}\sqrt{x}\mathrm dx, $$ hence, summing these, $$ S_n\geqslant \int_{1}^{n+1}\left(\frac{n}{\sqrt{x}}-\sqrt{x}\right)\mathrm dx=\left[2n\sqrt{x}-\frac23x\sqrt{x}\right]_{x=1}^{x=n+1}=\frac43n\sqrt{n}-R_n, $$ with $R_n\geqslant0$ and $R_n\ll n\sqrt{n}$. This proves the optimal upper bound $Cn^\alpha$ of $2S_n$ is reached for $C=\frac83$ and $\alpha=\frac32$.
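The bound $2S_n\leq\frac83 n^{3/2}$, and the fact that the constant $\frac83$ is approached from below as $n$ grows, can be observed numerically (the helper name is mine; $n=1000$ is a million terms and takes on the order of a second in pure Python):

```python
def double_sum(n):
    return sum(1 / abs(i - j) ** 0.5
               for i in range(1, n + 1)
               for j in range(1, n + 1) if i != j)

for n in (10, 100, 1000):
    print(n, double_sum(n) / n ** 1.5)  # the ratio climbs toward 8/3 from below
```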
H: Find a Nonsingular matrix in Jordan Form Let $$ A= \begin{pmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 1\\ \end {pmatrix} $$ Find a nonsingular matrix $P$ such that $P^{-1}AP$ is in Jordan form. The course I am taking uses the textbook "Matrices and Linear Transformation" by Cullen. The example in the book explains how to find $P$ if I know the characteristic polynomial of $A$. When I tried to find the characteristic polynomial of this matrix, I got TWO eigenvalues: 0 and 1. According to the example, I need to first find the matrix J which A is similar to. Theorem 5.12 in my textbook states:If $A \in F_{n\times n}$ has characteristic polynomial $c(x)=\det(xI-A)=\prod^{r}_{i=1}(x-\lambda_{i})^{s_{i}}$ then $A$ is similar to a matrix $J$ with the $\lambda_{i}$ on the diagonal, zeros and ones on the subdiagonal, and zeros elsewhere. Am I correct in saying $$J= \begin{pmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 0 & 1\\ \end{pmatrix} $$ Please help. AI: Yes, you're correct. The Jordan blocks in your case are $$J_2(0) = \pmatrix{0 & 0\\ 1 & 0} \\ J_1(1) = \pmatrix{1}.$$ You need to look at Theorem 5.13 in your book.
H: Interpretation of $f(n) \in o(n)$ Suppose that some function $f(n)$ is in $o(n)$. Is it fomally correct to say that there exists an $N$ such that for all $n \ge N$ it holds that $$f(n) \le \frac{c n}{g(n)}$$ where $c>0$ is a constant and $g(n)$ is a strictly increasing function of $n$ ? My reasoning is that $f(n) \in o(n)$ implies that $\lim_{n\rightarrow\infty} \frac{f(n)}{n} = 0$, so the above should follow directly from $f(n) \in o(n)$, right? AI: You can reformulate your question as: is it true that if $f(n)\to 0$, then there is an increasing function $g(n)$ such that for $n$ large enough $f(n)\leq c/g(n)$? I think the answer to this question is positive. As Yury pointed out, it should be enough to take $$g(n)=h(n)\text{inf}\left\{\frac{1}{f(k)},k\geq n\right\}$$ where $h(n)$ is any positive increasing function smaller than one.
H: $\tbinom{2p}{p}-2$ is divisible by $p^3$ The problem is as follows: Let $p>3$ be a prime. Show that $\tbinom{2p}{p}-2$ is divisible by $p^3$. The only thing I can think of is that $(2p)!-2(p!)^2$ is divisible by $p^2$ which doesn't help me much. Can someone point me in the right direction? Is there a combinatorial approach to this problem? Thanks AI: $${2p\choose p}=\frac{(2p)(2p-1)\ldots (2p-(p-1))}{p!}=\frac{2(2p-1)\ldots (2p-(p-1))}{(p-1)!}=2{2p-1\choose p-1}$$ Now by Wolstenholme's theorem $${2p\choose p}\equiv 2\cdot1\equiv 2\mod p^3$$
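The statement, and the necessity of the hypothesis $p>3$, are easy to confirm computationally (`math.comb` requires Python 3.8+):

```python
from math import comb

for p in (5, 7, 11, 13, 101):
    assert (comb(2 * p, p) - 2) % p**3 == 0
print("holds for the primes tested")

# the hypothesis p > 3 matters: for p = 3 the claim fails
print((comb(6, 3) - 2) % 27)  # 18, not 0
```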
H: Simple Linear Algebra Problem I am working on a question from the first chapter of a Linear Algebra textbook I'm reading. Let $A=(1,1,-1)$, $B=(-3,2,-2)$, and $C=(2,2,-4)$. Prove that $\Delta ABC$ is a right-angled triangle. I know that the angle between $\overrightarrow{AB}$ and $\overrightarrow{AC}$ must be $90°$. In other words, $\overrightarrow{AB} \cdot \overrightarrow{AC}=0$. My trouble is I can't translate these given points into vectors in order to show the necessary calculations. I'm having the same trouble with other questions. Any help would be appreciated. AI: $$\vec{AB} = (-3,2,-2) - (1,1,-1) = (-4,1,-1)$$ $$\vec{BC} = (2,2,-4) - (-3,2,-2) = (5,0,-2)$$ $$\vec{CA} = (1,1,-1) - (2,2,-4) = (-1,-1,3)$$ Now look at $\vec{AB} \cdot \vec{CA}$ to conclude that the right angle is at $A$.
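The arithmetic above is small enough to script (the helpers `sub` and `dot` are mine):

```python
def sub(u, v): return tuple(a - b for a, b in zip(u, v))
def dot(u, v): return sum(a * b for a, b in zip(u, v))

A, B, C = (1, 1, -1), (-3, 2, -2), (2, 2, -4)
AB, BC, CA = sub(B, A), sub(C, B), sub(A, C)

print(dot(AB, CA))           # 0, so the right angle is at A
print(dot(AB, BC), dot(BC, CA))  # the other two angles are not right
```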
H: Why do these conditions make this map open? If $A \subset \mathbb{R}^n$ is an open set and $g: A \to \mathbb{R}^n$ is an injective continuously differentiable function such that $\forall x \in A, \, \det g'(x) \neq 0$, is $g(U)$ open for each open $U \subset A$? Why? This is about p. 67 of Spivak's Calculus on Manifolds (bottom of the page), where he says: "the collection of all $g(U)$ is an open cover of $g(A)$". AI: For any $u \in U$, $g'(u)$ is invertible so the inverse function theorem says $g$ is a homeomorphism from a neighbourhood of $u$ to a neighbourhood of $g(u)$. In particular $g(V)$ is open for some open set $V \subseteq U$ containing $u$. Then $g(U)$ is the union of these open sets, so it is open.
H: Derived Set of a given subset of Real Line. Let $A = \{a +\pi b : a , b \in \mathbb{Z}\}$ is a subset of $\mathbb{R}$. What will be the derived set of it? AI: $A$ is an additive subgroup of $\Bbb R$, which is not discrete since $\pi$ is irrational. A known result about additive sub-groups of $\Bbb R$ shows that $A$ is dense in $\Bbb R$ for the usual topology. Each point which is not in $A$ is a limit point. For the points which are in $A$, we use irrationality criteria (proposition 4 in the link).
H: Why does synthetic division work? Synthetic division is commonly taught, but I have never actually had a proof/explanation shown to me. Why does it work? Work So Far I related the "$x$" to powers of 10, and then proceeded to relate synthetic division to non-polynomial division, but couldn't seem to find the correlation. Research So Far My teacher doesn't seem to have a valid explanation for why it works. A Google search doesn't provide any good results either. All I seem to get is a Yahoo answers link with a badly formatted proof that makes it hard to understand and a physics forum link that links synthetic division to "normal division" by relating the "$x$" to 10, a conclusion I have already arrived at. AI: Per request, I post my comment here. Synthetic division is simply the polynomial long division algorithm optimized for the case when the divisor is linear (degree $1$). Said Wikipedia pages both do the same example. If you place these pages side-by-side and compare the associated steps then it should be clear how the optimization works.
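The optimization is easy to see in code: synthetic division is just the "bring down, multiply, add" loop below. This is a sketch for a monic linear divisor $x-r$; the example is the standard one, $(x^3-12x^2-42)\div(x-3)$:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (coefficients in descending order of degree)
    by (x - r); return (quotient_coeffs, remainder)."""
    out = [coeffs[0]]                 # bring down the leading coefficient
    for c in coeffs[1:]:
        out.append(c + r * out[-1])   # multiply by r, add the next coefficient
    return out[:-1], out[-1]

# (x^3 - 12x^2 - 42) / (x - 3)  ->  x^2 - 9x - 27, remainder -123
q, rem = synthetic_division([1, -12, 0, -42], 3)
print(q, rem)  # [1, -9, -27] -123
```

As a cross-check, the remainder theorem gives the same remainder: evaluating the polynomial at $x=3$ yields $27-108-42=-123$.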
H: $\int\frac{\sin\left(x\right)}{\cos\left(x\right)}\,\mathrm{d}x$ by substitution I'm trying to solve the following integral: $$\int\frac{\sin\left(x\right)}{\cos\left(x\right)}\,\mathrm{d}x$$ Using the substitution method with the substitution $u = \sin\left(x\right)$. The exercise has two parts: the first one is using the substitution $u = \cos\left(x\right)$. No problem. I'm having difficulties with the second part, which is using the substitution $u = \sin\left(x\right)$. I spent a couple of hours with the exercise before asking here, and after some trials I got this: $$\int f\left(g\left(x\right)\right)g'\left(x\right)\,\mathrm{d}x = \int f\left(u\right)\,\mathrm{d}u$$ $$g\left(x\right) = \sin\left(x\right)$$ $$g'\left(x\right) = \cos\left(x\right)$$ $$f\left(x\right) = \frac{x}{\cos^2\left(\arcsin(x)\right)} = \frac{x}{1 - x^2}$$ $$\int f\left(u\right)\,\mathrm{d}u = -\frac{1}{2}\log|1 - u^2| + C = -\frac{1}{2}\log|1 - \sin^2\left(x\right)| + C$$ $$1 - \sin^2\left(x\right) = \cos^2\left(x\right)$$ $$\int\frac{\sin\left(x\right)}{\cos\left(x\right)}\,\mathrm{d}x = -\frac{1}{2}\log|\cos^2\left(x\right)| + C = -\log|\cos\left(x\right)| + C$$ But it feels too complicated, $f\left(x\right)$ was really hard for me to discover. What am I missing? AI: $\displaystyle \int\left(\frac{\sin\left(x\right)}{\cos\left(x\right)}\right)\,\mathrm{d}x$ As you noticed, it's easy to let $u = \cos (x)$, and that's how I would recommend it. But suppose we weren't being as slick as you were, but we still wanted to let $u = \sin (x)$. Then the problem comes from $du = \cos (x)$, which isn't there. But $\dfrac{1}{\cos x} = \dfrac{\cos x}{\cos^2 x}$, and combining $u = \sin x$ with $\cos^2 x = 1 - \sin^2 x$, we get that $\dfrac{\cos x }{\cos^2 x} = \dfrac{du}{1 - u^2}$. Thus $\displaystyle \int \dfrac{\sin x}{\cos x}dx = \int \dfrac{udu}{1-u^2}$, which is what you called $\int f(x)$.
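Both substitutions lead to the same antiderivative $-\ln|\cos x|$, which can be confirmed numerically (midpoint rule; the step count is an arbitrary choice):

```python
import math

n = 200_000
a, b = 0.0, 1.0
# midpoint rule for the integral of tan(x) on [0, 1]
approx = sum(math.tan(a + (k + 0.5) * (b - a) / n) for k in range(n)) * (b - a) / n
# fundamental theorem of calculus with antiderivative -ln|cos x|
exact = -math.log(abs(math.cos(b))) + math.log(abs(math.cos(a)))

print(abs(approx - exact) < 1e-6)  # True
```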
H: Remembering exact sine cosine and tangent values? There exists a common trick to remember exact sine cosine and tangent values. The trick is relatively long, so instead of reposting it, please refer to my answer on this page. Although I have used this trick for a while, I've never understood why it works. I understand the tangent values (sine/cosine according to right angle ratios) and the why the cosine values are in "the opposite order" of sine values (due to the cosine function being a $90^\circ$ phase shift on the sine function) but why does doing the above trick provide the correct values for sine (and cosine in opposite order)? AI: This is just a briefer restatement of the same rule, which I find easier to remember. I don't think there is a particular reason why it works. (If there were, there would be some nice generalization, which there does not seem to be.) $\theta \hskip2cm 0^\circ \hskip 1cm 30^\circ \hskip 1cm 45^\circ \hskip1cm 60^\circ \hskip1cm 90^\circ $ $\sin(\theta) \hskip1cm {\sqrt{0} \over 2}\hskip1cm {\sqrt{1}\over 2}\hskip1cm {\sqrt{2}\over 2}\hskip1cm {\sqrt{3}\over 2}\hskip1cm {\sqrt{4}\over 2}$ $\cos(\theta) \hskip1cm {\sqrt{4} \over 2}\hskip1cm {\sqrt{3}\over 2}\hskip1cm {\sqrt{2}\over 2}\hskip1cm {\sqrt{1}\over 2}\hskip1cm {\sqrt{0}\over 2}$ Edit: You might say that these values correspond to looking at triangles where the Pythagorean theorem becomes one of the following: $$4=0+4=1+3=2+2=3+1=4+0$$ and that these triangles happen to have very nice angles.
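The $\sqrt k/2$ pattern in the restated table is easy to verify (`abs_tol` is needed because $\cos 90^\circ$ is computed as roughly $6\times 10^{-17}$ rather than exactly $0$):

```python
import math

for k, deg in enumerate([0, 30, 45, 60, 90]):
    s, c = math.sin(math.radians(deg)), math.cos(math.radians(deg))
    assert math.isclose(s, math.sqrt(k) / 2, abs_tol=1e-12)
    assert math.isclose(c, math.sqrt(4 - k) / 2, abs_tol=1e-12)
print("all five angles match the sqrt(k)/2 pattern")
```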
H: If $f:D\to \mathbb{R}$ is continuous and exists $(x_n)\in D$ such as that $x_n\to a\notin D$ and $f(x_n)\to \ell$ then $\lim_{x\to a}f(x)=\ell$? Assertion: If $f:X\setminus\left\{a\right\}\to \mathbb{R}$ is continuous and there exists a sequence $(x_n):\mathbb{N}\to X\setminus\left\{a\right\}$ such as that $x_n\to a$ and $f(x_n)\to \ell$ prove that $\lim_{x\to a}f(x)=\ell$ I have three questions: 1) Is the assertion correct? If not, please provide counter-examples. In that case can the assertion become correct if we require that $f$ is monotonic, differentiable etc.? 2)Is my proof correct? If not, please pinpoint the problem and give a hint to the right direcition. Personally, what makes me doubt it are the choices of $N$ and $\delta$ since they depend on another 3)If the proof is correct, then is there a way to shorten it? My Proof: Let $\epsilon>0$. Since $f(x_n)\to \ell$ \begin{equation} \exists N_1\in \mathbb{N}:n\ge N_1\Rightarrow \left|f(x_n)-\ell\right|<\frac{\epsilon}{2}\end{equation} Thus, $\left|f(x_{N_1})-\ell\right|<\frac{\epsilon}{2}$ and by the continuity of $f$ at $x_{N_1}$, \begin{equation} \exists \delta_1>0:\left|x-x_{N_1}\right|<\delta_1\Rightarrow \left|f(x)-f(x_{N_1})\right|<\frac{\epsilon}{2} \end{equation} Since $x_n\to a$, \begin{equation} \exists N_2\in \mathbb{N}:n\ge N_2\Rightarrow \left|x_n-a\right|<\delta_1\end{equation} Thus, $\left|x_{N_2}-a\right|<\delta_1$ and by letting $N=\max\left\{N_1,N_2\right\}$, \begin{gather} 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N+x_N-a\right|<\delta_1\Rightarrow \left|x-x_N\right|-\left|x_N-a\right|<\delta_1\\ 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N\right|<\delta_1+\left|x_N-a\right| \end{gather} By the continuity of $f$ at $x_N$, \begin{equation} \exists \delta_3>0:0<\left|x-x_N\right|<\delta_3\Rightarrow \left|f(x)-f(x_N)\right|<\frac{\epsilon}{2} \end{equation} Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 
0<\left|x-a\right|<\delta\Rightarrow \left|x-x_N\right|<\delta\Rightarrow \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\Rightarrow \left|f(x)-\ell\right|-\left|f(x_N)-\ell\right|<\frac{\epsilon}{2}\\ 0<\left|x-a\right|<\delta\Rightarrow\left|f(x)-\ell\right|<\left|f(x_N)-\ell\right|+\frac{\epsilon}{2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon \end{gather} We conclude that $\lim_{x\to a}f(x)=\ell$ Thank you in advance EDIT: The proof is false. One of the mistakes is in this part: "Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 0<\left|x-a\right|<\delta{\color{Red} \Rightarrow} \left|x-x_N\right|<\delta{\color{Red} \Rightarrow} \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\end{gather}" AI: Your assertion is wrong. A counterexample is for instance given by the sign function, $sgn : \mathbb R \rightarrow \mathbb R$. The sign function is continuous on $\mathbb R\backslash \{0\}$, but $$ \lim_{n\rightarrow \infty} sgn(1/n) = 1, $$ and $$ \lim_{n\rightarrow \infty} sgn(-1/n) = -1. $$ Here $(1/n)$ and $(-1/n)$ are both sequences that converge to zero, but the sequences $(sgn(1/n))$ and $(sgn(-1/n))$ are very much different. The mistake in your proof is that the distance between an arbitrary point $x$ close to $a$ and the members of the sequence does not become arbitrarily small, so you don't have something like: for all $\delta_3$ there is an $N_3$ such that $$ \vert x-x_n\vert≤\delta_3, ~~\text{for } n≥N_3. $$ But your proof would need something like this. In our counterexample with the function $sgn$ this more or less means that if the sequence is given by $-(1/n)$ then I know something about $sgn(x)$ for negative $x$, but I cannot say anything about the function values for positive $x$.
H: Find a nonsingular matrix P given that A is similar to a Jordan matrix Given ${\bf A}$ is similar to a Jordan matrix find a nonsingular matrix $\bf P$ such that ${\bf P}^{-1}{\bf AP}={\bf J}$ $$ {\bf A}= \begin{bmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 1\\ \end{bmatrix} $$ I have worked out $$ {\bf J}= \begin{bmatrix} 0&0&0\\ 1&0&0\\ 0&0&1\\ \end{bmatrix} $$ The textbook I am using shows an example where ${\bf A}$ has one eigenvalue. I am unsure how to apply this example to the question I have. I am using "Matrices and Linear Transformations" by Cullen. The example I was looking at is on page 204. Please help. AI: $A$ has eigenvalue $0$ with algebraic multiplicity $2$ and eigenvalue $1$ with multiplicity $1$. This tells you the general form of the Jordan form. Looking at $\ker A$ gives the $(0,1,-1)^T$ vector, $\ker (A-I)$ gives $(0,0,1)^T$, and looking at $\ker A^2$ yields the $(1,0,-1)^T$ vector. After that, it is a matter of permuting these vectors to get $P$. Try $P=\begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ -1 & -1 & 1 \end{bmatrix}$. Then $P^{-1} A P = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix}$, which is a Jordan form.
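Since $P^{-1}AP=J$ is equivalent to $AP=PJ$, the suggested $P$ can be verified with exact integer arithmetic and no matrix inversion (using the $J$ with ones on the superdiagonal, as in the answer's final display):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, 0, 0], [1, 0, 0], [0, 1, 1]]
P = [[0, 1, 0], [1, 0, 0], [-1, -1, 1]]
J = [[0, 1, 0], [0, 0, 0], [0, 0, 1]]

# P^{-1} A P = J  is equivalent to  A P = P J  (no inversion needed)
print(matmul(A, P) == matmul(P, J))  # True
```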
H: Question about proof that multiplication in Banach algebra is continuous Here's the proof in my notes: Where does the last inequality come from? If I want to show that it's continuous at $((x,y)$ I can use the inverse triangle inequality to get $$ (\|x^\prime\| + \|y\|)\varepsilon \leq (\|x\| + \|y \| + \varepsilon)\varepsilon$$ Thanks. AI: Since $\varepsilon<1$, $$\color{red}{\lVert x'\rVert}+\lVert y\rVert\leq \color{red}{\lVert x'-x\rVert+\lVert x\rVert} +\lVert y\rVert\leq\color{red}{\varepsilon}+\lVert x\rVert+\lVert y\rVert\leq \color{red}{1}+\lVert x\rVert+\lVert y\rVert.$$
H: Why is $\log(b,n) = \lfloor \log_b(n) \rfloor$ primitive recursive? I read in an introduction to primitive recursive functions and Wikipedia that $$\log(b,n) = \lfloor \log_b(n) \rfloor$$ is primitive recursive. But how can that be? Is there any easy proof (and therefore a definition of the function using only constants, projection, composition and primitive recursion)? Thanks in advance! AI: I'll try and sketch a construction. Firstly, note that a function defined by primitive recursive cases from primitive recursive functions is still primitive recursive (rather easy to prove). So, we attempt to define this by recursion on $n$. Since $\log_b(0)$ is not something we want to consider, we'll assume $b,n > 0$. Let $$\log(b,1) = 0$$ Which is certainly primitive recursive. Then define $$\log(b,n+1) = F(\log(b,n),b,n)$$ Where $F$ is the following function, defined by cases. $F$ takes $\log(b,n)+1$, and checks if $$b^{(\log(b,n) + 1)} > n+1$$ In other words, it sees if $\log(b,n) + 1$ is still shooting too high. If it is, then we stick with what we've got: $\log(b,n)$. Otherwise, the time has finally come to move on up to $\log(b,n) + 1$, and so $F$ outputs that. It's not hard to see that $F$ is defined by primitive recursive cases from primitive recursive functions, and so we have shown that $\log(b,n)$ is primitive recursive.
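The recursion translates directly into a loop. Below is a sketch: `plog` mirrors the case-defined $F$, and `ref` is an independent integer reference that deliberately avoids floating-point `log` (which can misround, e.g. near exact powers of the base):

```python
def plog(b, n):
    """floor(log_b n), built by the primitive recursion on n described above."""
    val = 0                           # base case: log(b, 1) = 0
    for m in range(1, n):             # define log(b, m+1) from log(b, m)
        if b ** (val + 1) <= m + 1:   # is val + 1 no longer shooting too high?
            val += 1
    return val

def ref(b, n):
    """Independent reference: the largest k with b**k <= n."""
    k = 0
    while b ** (k + 1) <= n:
        k += 1
    return k

print(all(plog(b, n) == ref(b, n) for b in (2, 3, 10) for n in range(1, 500)))  # True
```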
H: Entropy of Order Statistic Consider $n$ independent and identically distributed random variables $ \{X_i\}_{i=1,...n} $ with support on some interval $[a,b]$ and its $n$'th order statistic $\max_{i \in \{1,...n\}} X_i$ . The following "entropy-looking" measure of dispersion of the maximum is $$ - \int_a^b F^n(x) \ln F^n(x) dx ,$$ where $F(x)= \Pr (X \le x) $. It seems natural that the "entropy" should be decreasing in $n$ (just think about $n$ very large). Is this a known result? I did in fact prove that the entropy is monotone, but the proof turned out to be lengthy and messy. I would expect that there is a simple argument. Does anyone know? AI: No, the entropy is not monotone. For example, consider $F_X(x) = x^{1/N}$ on $[0,1]$. Then $\max(X_1,\ldots,X_N)$ is uniform on $[0,1]$. The entropy of $\max(X_1,\ldots,X_n)$ increases as a function of $n$ for $1 \le n \le N$, reaching $0$ at $n=N$, then decreases after that.
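Applying the suggested family $F_X(x)=x^{1/N}$ to the question's integral $-\int_0^1 F^n\ln F^n\,dx$ shows numerically that the measure rises up to $n=N$ and falls afterwards (in closed form it equals $t/(1+t)^2$ with $t=n/N$), so it is indeed not monotone in $n$:

```python
import math

N = 4  # F(x) = x**(1/N) on [0, 1]; the max of N iid samples is then uniform

def measure(n, steps=50_000):
    """-int_0^1 F(x)^n * ln(F(x)^n) dx, by the midpoint rule."""
    h = 1.0 / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        Fn = x ** (n / N)
        total -= Fn * math.log(Fn) * h
    return total

vals = [measure(n) for n in range(1, 9)]
print(vals)  # rises until n = N = 4, then falls: not monotone
```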
H: if $x^2 \bmod p = q$ and I know $p$ and $q$, how to get $x$? if $x^2 \bmod p = q$ and I know $p$ and $q$, how to get $x$? I'm aware this has to do with quadratic residues but I do not know how to actually solve it. $p$ is a prime of the form $4k+3$ AI: Euler's criterion says that $\left(\frac{q}{p}\right) \equiv q^{\frac{p-1}{2}} \bmod p$. On the other hand, assuming there is a solution, $\left(\frac{q}{p}\right) = 1$. So you have $q^{\frac{p+1}{2}} \equiv q\cdot q^{\frac{p-1}{2}} \equiv q \bmod p$. Since $p+1$ is divisible by $4$, this gives solutions $$x \equiv \pm q^{\frac{p+1}{4}} \bmod p.$$
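The recipe is a one-liner with Python's three-argument `pow` (the helper name is mine; the example values $p=23$, $q=5^2\bmod 23=2$ are arbitrary):

```python
def mod_sqrt(q, p):
    """Square root of q mod p for a prime p = 3 (mod 4), assuming q is a residue."""
    assert p % 4 == 3
    x = pow(q, (p + 1) // 4, p)
    assert (x * x - q) % p == 0, "q is not a quadratic residue mod p"
    return x

p = 23                 # 23 = 4*5 + 3
q = pow(5, 2, p)       # q = 2, a residue by construction
x = mod_sqrt(q, p)
print(x, (x * x) % p)  # 18 2  (the two roots are 5 and 23 - 5 = 18)
```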