H: How does a class group measure the failure of unique factorization? I have been stuck with a severe problem for the last few days. I had developed some intuition for myself in understanding the class group, but I lost track of it, and now I am facing a wall. The class group is given by $\rm{Cl}(F)=$ {Fractional ideals of $F$} / {Principal fractional ideals of $F$} (where $F$ is a quadratic number field), so we are actually quotienting out the principal fractional ideals there (that's what I understood by the quotient group). But how can that class group measure the failure of unique factorization? For example, a common example that can be found in many textbooks is $\mathbb{Z}[\sqrt{-5}]$, in which we can factorize $6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5})$, so it fails to have unique factorization. Now can someone kindly clarify these points? How can one construct $\rm{Cl}(\sqrt{-5})$ by using the quotient groups? What are the elements of $\rm{Cl}(\sqrt{-5})$? What do those elements indicate? (I think they must somehow indicate the residues that are preventing $\mathbb{Z}[\sqrt{-5}]$ from having unique factorization.) What does $h(n)$ indicate? (The class number.) When $h(n)=1$ it implies that unique factorization holds, but what does the $1$ in $h(n)=1$ indicate? It means that there is one element in the class group, but doesn't that prevent unique factorization? EDIT: I am interested in knowing whether there are any polynomial-time algorithms that list all the numbers that fail unique factorization within a number field. I expect that the class group might have something to do with this. By using the class group of a number field, can we extract all such numbers? For example, if we plug in $\mathbb{Z}[\sqrt{-5}]$ then we should get $6$ and the other numbers that don't admit a unique factorization. Please do answer the above points and save me from confusion. Thank you. AI: For your third bullet, if there is only one element in the class group, then unique factorization holds, because then all the fractional ideals are principal; in particular the ring of integers is a principal ideal domain, which is equivalent to unique factorization for Dedekind rings. As to the first two, I'll only state that the class group of $\mathbb{Z}[\sqrt{-5}]$ is $\mathbb{Z}/2\mathbb{Z}$. Maybe someone else can help you out with the details of the computation, but we can be sure the group isn't trivial, due to examples like $(1+\sqrt{-5},1-\sqrt{-5})$. You can see that this ideal squares to $(2)$, and the fact that the class group is $\mathbb{Z}/2\mathbb{Z}$ just tells us that every ideal is either principal or has a principal square. Unfortunately, you can see we lose a lot of information in passing to the class group, and in particular it doesn't tell us anything at all about which elements are obstacles to unique factorization. The intuition, rather, is that a more complicated class group means we're further from unique factorization.
H: Evaluate $\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}$. Possible Duplicate: Finding the limit of $\frac {n}{\sqrt[n]{n!}}$ Evaluate $$\lim_{n \to \infty }\frac{(n!)^{1/n}}{n}.$$ Can anyone help me with this? I have no idea how to start. Thank you. AI: Let's work it out elementarily by wisely applying the Cauchy-d'Alembert criterion: $$\lim_{n\to\infty} \frac{(n!)^{\frac{1}{n}}}{n}=\lim_{n\to\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}} = \lim_{n\to\infty} \frac{(n+1)!}{(n+1)^{(n+1)}}\cdot \frac{n^{n}}{n!} = \lim_{n\to\infty} \frac{n^{n}}{(n+1)^{n}} =\lim_{n\to\infty} \frac{1}{\left(1+\frac{1}{n}\right)^{n}}=\frac{1}{e}. $$ Also notice that by applying the Stolz–Cesàro theorem you get the celebrated limit: $$\lim_{n\to\infty} (n+1)!^{\frac{1}{n+1}} - n!^{\frac{1}{n}} = \frac{1}{e}.$$ The sequence $L_{n} = (n+1)!^{\frac{1}{n+1}} - n!^{\frac{1}{n}}$ is called the Lalescu sequence, after the great Romanian mathematician Traian Lalescu. Q.E.D.
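A quick numerical sanity check of the limit above (my own illustration, not part of the answer; `math.lgamma` gives $\log n! = \operatorname{lgamma}(n+1)$, which avoids overflow):

```python
import math

# (n!)^(1/n) / n should approach 1/e ~ 0.36788 as n grows.
for n in (10, 100, 1000, 10000):
    print(n, math.exp(math.lgamma(n + 1) / n) / n)
print("1/e =", 1 / math.e)
```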
H: Calculate $I(\alpha, x,y)=\int_0^1 v^{\alpha - 1}(1 - vx)^{\alpha - 1}e^{vy}\,dv$ for $0 < \alpha ,x,y < 1$. I want to calculate this integral, which has a singularity: $$I(\alpha, x,y)=\int_0^1 v^{\alpha - 1}(1 - vx)^{\alpha - 1}e^{vy}\,dv,\qquad 0 < \alpha ,x,y < 1. $$ I hope to obtain a closed formula via special functions, maybe some hypergeometric functions. A related differential equation for solving $I(\alpha, x,y)$ as a function of $x$, $y$ would also be interesting. AI: Expand in series with respect to $y$ and integrate term-wise: $$ I(\alpha, x,y) = \sum_{n=0}^\infty \frac{y^n}{n!} \int_0^1 v^{n+\alpha-1} (1-x v)^{\alpha-1} \mathrm{d} v = \sum_{n=0}^\infty \frac{y^n}{n!} \frac{1}{n+\alpha} {}_2 F_1\left(n+\alpha, 1-\alpha; n+\alpha+1 ;x\right) $$ From here you see that this is the confluent bivariate hypergeometric (Horn) function $\Phi_1$: $$ I(\alpha, x, y) = \frac{1}{\alpha} \Phi_1\left(\alpha, 1-\alpha, \alpha+1, x, y\right) $$
H: Complex inequality How can I show the inequality $\sqrt{2}|z|\geq |\mathrm{Re} (z)|+|\mathrm{Im}(z)|$? Please give me some hint. Which result is useful to show this? Please help me out. Thanks in advance. AI: You need to think about the geometric representation of complex numbers. Given a rectangle with sides $a = |\Im(z)|$ and $b = |\Re(z)|$, show that its diagonal is not shorter than $\frac{a+b}{\sqrt 2}$.
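For completeness, one algebraic way to finish the hint (my own addition, writing $a=|\Im(z)|$ and $b=|\Re(z)|$): since $(a-b)^2\ge 0$ gives $2ab\le a^2+b^2$, $$\left(|\mathrm{Re}(z)|+|\mathrm{Im}(z)|\right)^2 = a^2+b^2+2ab \le 2(a^2+b^2) = 2|z|^2,$$ and taking square roots yields $|\mathrm{Re}(z)|+|\mathrm{Im}(z)| \le \sqrt{2}\,|z|$.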
H: Uniqueness of prime-power fields I'm still stuck on the proof of the following theorem. I've asked two questions so far to get to where I am even at this point. Theorem: Let $p$ be a prime and let $n\in\mathbb{Z}^{+}$. If $E$ and $E'$ are fields of order $p^{n}$, then $E\cong E'$. Proof: Both $E$ and $E'$ have $\mathbb{Z}_{p}$ as prime fields (up to isomorphism). By Corollary 33.6, $E$ is a simple extension of $\mathbb{Z}_{p}$ of degree $n$, so there exists an irreducible polynomial $f(x)$ in $\mathbb{Z}_{p}[x]$ such that $$E\cong \mathbb{Z}_{p}[x] / \langle f(x)\rangle$$ The next line stumps me again. "Since the elements of $E$ and $E'$ are exactly the roots of $x^{p^{n}} - x$, $f(x)$ is a factor of $x^{p^{n}} - x$ in $\mathbb{Z}_{p}[x]$." The rest of the proof: "Because $E'$ also consists of zeros of $x^{p^{n}} - x$, we see that $E'$ also contains zeros of the irreducible $f(x)$ in $\mathbb{Z}_{p}[x]$. Thus because $E'$ also contains $p^{n}$ elements, $E'$ is also isomorphic to $\mathbb{Z}_{p}[x] / \langle f(x)\rangle$." It follows from work already done that elements of $E$ and $E'$ are zeros of $x^{p^{n}} - x$. But I don't follow how we get $f(x)$ as a factor of $x^{p^{n}} - x$ from this. This is actually what prompted my question about algebraic field extensions, as I thought that might be relevant; but it turns out not to be the reason. Thanks very much for any help you can give. AI: Are you missing the following bits? The polynomial $f(x)\in \mathbb{Z}_p[x]$ of degree $n$ has a root in the field $E$. Because the field extension $E/\mathbb{Z}_p$ is Galois, and $f(x)$ is irreducible, all the roots of $f(x)$ are distinct and in $E$. So $$ f(x)=(x-a_1)(x-a_2)\cdots (x-a_n) $$ for some elements $a_1,a_2,\ldots,a_n\in E$. The elements of $E$ are exactly the zeros of the polynomial $p(x)=x^{p^n}-x$. In other words $$ p(x)=\prod_{a\in E}(x-a). $$ The troubling claim follows from this. The zeros $a_i,i=1,2,\ldots,n,$ are among the zeros of $p(x)$, so $f(x)\mid p(x)$. In particular, the polynomial $f(x)$ also has $n$ zeros in $E'$, because $p(x)$ has $p^n$ roots there, and the roots of $f(x)$ are among those. Edit: Proving my first claim. This depends heavily on the properties of the so-called Frobenius homomorphism $F:E\to E, x\mapsto x^p$. This is a homomorphism, because obviously $F(1)=1$ and $F(xy)=(xy)^p=x^py^p=F(x)F(y)$ for all $x,y\in E$. Less obvious is that $F$ respects addition as well, i.e. $$ F(x+y)=(x+y)^p=x^p+y^p=F(x)+F(y) $$ for all $x,y\in E$. This follows from the binomial formula together with the observation that the binomial coefficients ${p\choose i}$ are all divisible by $p$ when $1\le i\le p-1$. From little Fermat it follows that $F(x)=x^p=x$ for all the elements $x$ of the subfield $\mathbb{Z}_p$. We need to also make the observation that $x^p=x$ only when $x\in\mathbb{Z}_p$. This is because the polynomial equation $x^p-x=0$ can have at most $p$ solutions in the field $E$, and we have already found $p$ solutions. So we assume that $f(x)=x^n+f_{n-1}x^{n-1}+f_{n-2}x^{n-2}+\cdots+f_1x+f_0\in \mathbb{Z}_p[x]$ is irreducible, and has a root $a_1$ in $E$ (= the coset of $x$ in $\mathbb{Z}_p[x]/\langle f(x)\rangle$). In other words $a_1\notin\mathbb{Z}_p$ and $$ a_1^n+f_{n-1}a_1^{n-1}+\cdots+f_1a_1+f_0=0. $$ Let's apply the mapping $F$ to this equation. Remember that $F(f_i)=f_i$ for all $i$. We get $$ a_1^{pn}+f_{n-1}a_1^{p(n-1)}+\cdots+f_1a_1^p+f_0=0, $$ or, upon inspection, $f(a_1^p)=0$. Because $a_1\notin\mathbb{Z}_p$, $a_1^p\neq a_1$.
Therefore we have found another zero $a_2=a_1^p$ of $f(x)$ in $E$. We can repeat the argument and keep finding roots of $f(x)$: $a_3=a_2^p$, $a_4=a_3^p$ et cetera. Because $f(x)$ can have at most $n$ roots in $E$, this sequence of roots will have to start repeating at some point. Because $F$ is injective (its kernel is trivial), the repetition must start from $a_1$, in other words $a_1=a_1^{p^k}$ for some $k, 2\le k\le n$. The polynomial $$ g(x)=(x-a_1)(x-a_1^p)\cdots (x-a_1^{p^{k-1}}) $$ is stable under $F$, so its coefficients are in $\mathbb{Z}_p$. Furthermore, $g(x)\mid f(x)$. But $f(x)$ was irreducible, so we must have $g(x)=f(x)$, and $k=n$. But all the roots of $g(x)$ are distinct and in $E$ by construction. Therefore the same holds for $f(x)=g(x)$.
H: Understanding why the roots of a homogeneous difference equation must be eigenvalues There is some obvious relationship between the root solutions of a homogeneous difference equation (as a recurrence relation) and eigenvalues which I'm trying to see. I have read over the wiki article 3.2, 3.4 and the eigenvalues ($\lambda$) are hinted at as the roots, but I'm still not sure why these must be eigenvalues of some matrix, say $A_0$, and what the meaning of $A_0$ may be. It seems that to solve a homogeneous linear difference equation we find the "characteristic polynomial" by simply factoring one difference equation. However, typically to find the "characteristic polynomial" I would solve the characteristic equation for some matrix, $A_0 = \begin{pmatrix} 1 & 0 & 0\\ 0 & -2 & 0 \\ 0 & 0 & 3 \\ \end{pmatrix}$ $(A_0 - \lambda I)\mathbf x = \mathbf 0$, then set the determinant equal to $0$, and then solve for each $\lambda$, e.g. $ \det(A_0 - \lambda I) = 0$ $(1 - \lambda)(2 + \lambda)(3 - \lambda) = 0$ Now suppose this also happens to be a solution to some linear difference equation, and so here the characteristic polynomial is $\lambda^3 - 2\lambda^2 - 5\lambda + 6 = 0$, and the difference equation is $y_{k+3} - 2y_{k+2} - 5y_{k+1} + 6y_k = 0 $. Then, for example, $\lambda = 3$ is a solution for all $k$. Now, given we have found this solution to this difference equation, how can we explain some special relationship to $A_0$, other than $\lambda = 3$ happening to be an eigenvalue of $A_0$? Is there any meaning to make of $A_0$? (cf. 4.8, Linear Algebra 4th, D. Lay) AI: Note first that the characteristic polynomial indeed expands to $\lambda^3-2 \lambda^2-5 \lambda +6$. To see the connection between this polynomial and the matrix $A_0$, it helps to reduce the difference equation down to a first order equation in many variables. Let $x_k^1 = y_k, x_k^2 = y_{k+1}, x_k^3 = y_{k+2}$. Then the difference equation becomes $$x_{k+1}^1 = x_k^2$$ $$x_{k+1}^2 = x_k^3$$ $$x_{k+1}^3 = -6 x_k^1 + 5 x_k^2 + 2 x_k^3,$$ or, in matrix terms, with $x_k = (x_k^1,x_k^2,x_k^3)^T$: $$ x_{k+1} = A x_k = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ -6 & 5 & 2 \end{bmatrix} x_k.$$ Note the connection between the characteristic polynomial coefficients and the bottom row of the matrix (this is called the controllable canonical form in control circles). If you work out the eigenvalues of $A$ you will find that they are $\{-2,1,3\}$. In fact, the characteristic polynomial of $A$ is $\lambda^3-2 \lambda^2-5 \lambda +6$. Hence $A$ is diagonalizable by some matrix $V$, and you have $A_0 = V^{-1} A V$, where $A_0$ has the form above. They share the same characteristic polynomial because $\det (\lambda I - A_0) = \det (\lambda I - V^{-1} A V) = \det V^{-1} \det (\lambda I - A ) \det V = \det (\lambda I - A)$. Relevant links: http://en.wikipedia.org/wiki/Companion_matrix http://en.wikipedia.org/wiki/State_space_representation#Canonical_realizations
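As a check on the eigenvalue claim above (a sketch of mine, assuming numpy is available):

```python
import numpy as np

# Companion (controllable canonical) form of y_{k+3} = 2y_{k+2} + 5y_{k+1} - 6y_k;
# its eigenvalues should be the roots {-2, 1, 3} of l^3 - 2l^2 - 5l + 6.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [-6, 5, 2]], dtype=float)
print(np.sort(np.linalg.eigvals(A).real))  # approximately [-2.  1.  3.]
```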
H: show that $x_n$ converges and find the limit Let $\left\{ x_n \right\}_{n\geq0}$ be a sequence of real numbers such that $$x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1},\ n\geq 1,$$ for some $0<\lambda<1$. (a) Show that $x_n=x_0+(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k$ (b) Hence, or otherwise, show that $x_n$ converges and find the limit. Note that $x_{n+1}=\lambda x_n+(1-\lambda)x_{n-1}$ $$\implies x_{n+1}-x_n=(\lambda-1)(x_n-x_{n-1})=\cdots=(\lambda-1)^n(x_1-x_0),\ \forall n.$$ Hence we get $x_n-x_0=(x_n-x_{n-1})+(x_{n-1}-x_{n-2})+\cdots+(x_1-x_0)=(\lambda-1)^{n-1}(x_1-x_0)+(\lambda-1)^{n-2}(x_1-x_0)+\cdots+(x_1-x_0)=(x_1-x_0)\sum_{k=0}^{n-1}(\lambda -1)^k.$ Help me with the convergence part. AI: Hint: The geometric series $1+r+r^2+r^3+\cdots$ converges to $\frac{1}{1-r}$ if $|r|\lt 1$. This is a fact that is undoubtedly already familiar to you. It can be proved by showing that $$1+r+r^2+\cdots +r^{n-1}=\frac{1-r^n}{1-r},\tag{$1$}$$ and then noting that if $|r|\lt 1$, then $r^n\to 0$ as $n \to \infty$. One way to prove $(1)$ is to show that $$(1-r)(1+r+r^2+\cdots +r^{n-1})=1-r^n.$$ This can be done by multiplying out the left-hand side and observing the mass cancellation. In our case, $r=\lambda-1$ and therefore $\frac{1}{1-r}=\frac{1}{2-\lambda}$.
H: Simplifying Equivalent Functions Given two functions in closed form such that $f(x)$ is the same for all $x$ for both functions, is there always a way to manipulate either function to make it so they are written exactly the same, or can you have two functions that can be proven equivalent yet neither can be simplified to look the same as the other? AI: Your question is equivalent to asking if there is some normal form for every closed-form expression that we can reduce equivalent expressions to. Now the answer depends on the kind of formal system you use, i.e. what exactly you call closed form. Surely when we restrict ourselves to few enough operations, there is. Take e.g. arithmetic terms containing natural numbers, $+$, $\cdot$ and variables: we can completely simplify the terms, thus yielding a normal form. However for some not-too-complicated systems like the λ-calculus, which just consists of function creation and application, it has been proven that deciding whether two terms are equivalent is already undecidable, so there is no normal form.
H: What is the chance of an event happening a set number of times or more after a number of trials? Assuming every trial is independent from all the others and the probability of a successful run is the same every trial, how can you determine the chance of a successful trial occurring a set number of times or more? For example, you run 20 independent trials and the chance of a "successful" independent trial each time is 60%. How would you determine the chance of 3 or more "successful" trials? AI: If the probability of success on any trial is $p$, then the probability of exactly $k$ successes in $n$ trials is $$\binom{n}{k}p^k(1-p)^{n-k}.$$ For details, look for the Binomial Distribution on Wikipedia. So to calculate the probability of $3$ or more successes in your example, let $p=0.60$ and $n=20$. Then calculate the probabilities that $k=3$, $k=4$, and so on up to $k=20$ using the above formula, and add up. A lot of work! It is much easier in this case to find the probability of $2$ or fewer successes by using the above formula, and subtracting the answer from $1$. So, with $p=0.60$, the probability of $3$ or more successes is $$1-\left(\binom{20}{0}p^0(1-p)^{20}+\binom{20}{1}p(1-p)^{19}+\binom{20}{2}p^2(1-p)^{18} \right).$$ For the calculations, note that $\binom{n}{k}=\frac{n!}{k!(n-k)!}$. In particular, $\binom{20}{0}=1$, $\binom{20}{1}=20$ and $\binom{20}{2}=\frac{(20)(19)}{2!}=190$.
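To make the computation concrete (an illustration of the formula above, not part of the original answer):

```python
from math import comb

# P(3 or more successes) = 1 - P(0) - P(1) - P(2) for n = 20, p = 0.6.
n, p = 20, 0.6
tail = 1 - sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(tail)  # very close to 1
```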
H: On the definition of limit of a sequence I apologize in advance if this turns out to be a trivial question, but when we define the limit of a sequence $x_n$ (if it exists) as the number $a$ such that $\forall \delta > 0\ \exists r : n \ge r \Rightarrow | x_n - a | < \delta$, is there a particular reason why we use $< \delta $ instead of $\le\ \delta$? I first thought of this when reading the proof of the uniqueness of the limit. And indeed that proof does not hold if we use the "less or equal" form. But then I failed at trying to construct a sequence that under that new definition, would have two distinct limits. So I'm thinking that must not be the reason. So why is it that the definition uses the $< \delta $ form? Is it just a matter of convention? Thanks in advance. AI: It makes no difference. The version $|x_n-a|\lt \delta$ is more traditional. Since it makes no difference, one might as well go along with tradition. By the way, what you wrote would be slightly easier to read if instead of $\delta$ you used everyone's default favourite little guy $\epsilon$. Remark: For proving the uniqueness, it is true that the wording of the proof might change slightly. If $a$ and $b$ are distinct, let $\delta=\frac{|b-a|}{3}$. By the definition of limit, if $n$ is large enough then $|x_n-a|\le \delta$ and $|x_n-b|\le \delta$, which contradicts the Triangle Inequality. If we use the more common $\lt $ version of the definition of limit, we can get away with $\delta=\frac{|b-a|}{2}$. But the geometric idea does not change: we can't be simultaneously real close to $a$ and real close to $b$. The rest is minor detail.
H: Area of a polygon Possible Duplicate: How quickly we forget - basic trig. Calculate the area of a polygon Calculate area of a figure based on vertices I saw a formula in a book, $$\mathrm{area}=\frac{1}{2}\left|\sum_{i}(x_iy_{i-1}-x_{i-1}y_i)\right|,$$ where $(x_i,y_i)$ are the vertices of the polygon. Since it was an exercise in the book (no proof) I would very much like to see a proof, or maybe an outline? I can't come up with anything. This is no homework, I just got very curious and wanted to know. AI: On OP's request: This is called the Shoelace Formula or the Shoelace method and it works as long as your polygon is simple (non-selfintersecting). There is a nice explanation on the Wikipedia page and it refers (among other things) to the article Bart Braden, The Surveyor’s Area Formula, The College Mathematics Journal, September 1986, Volume 17, Number 4, pp. 326–337 which looks pretty nice at first glance. Some related questions on this site: How quickly we forget - basic trig. Calculate the area of a polygon How to calculate the area of a polygon? Calculate area of a figure based on vertices
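A direct implementation of the shoelace formula from the question (my own sketch; the vertex order may be clockwise or counter-clockwise, since the absolute value is taken):

```python
def shoelace_area(vertices):
    """Area of a simple polygon given as a list of (x, y) pairs."""
    s = 0.0
    for i in range(len(vertices)):
        x_i, y_i = vertices[i]
        x_prev, y_prev = vertices[i - 1]  # index -1 wraps to the last vertex
        s += x_i * y_prev - x_prev * y_i
    return abs(s) / 2.0

print(shoelace_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0 for a 4x3 rectangle
```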
H: Classifying groups of order 56: problems with the semidirect product While I was doing an exercise about the classification of groups of order 56, I had some problems concerning the semidirect product. Let $G$ be a group of order 56 and let us suppose that the 7-Sylow is normal (let's call it $H$). Then we want to construct the non-abelian group whose 2-Sylow $S$ is $S \cong \mathbb Z_8$. First of all, we have to determine the homomorphism $\phi \colon \mathbb Z_8 \to \text{Aut}(\mathbb Z_7)$. It is known that $\text{Aut}(\mathbb Z_7) \cong \mathbb Z_6$ and the isomorphism is given by $$ \begin{split} & \mathbb Z_6 \to \text{Aut}(\mathbb Z_7) \\ & a \mapsto \psi_a \colon \mathbb Z_7 \ni n \mapsto an \in \mathbb Z_7 \end{split} $$ So we can start by finding the homomorphisms $\mathbb Z_8 \to \mathbb Z_6$. There are exactly $(6,8)=2$ such homomorphisms. Which are they? Simply the one which sends everything to $0$, and the multiplication by $3$ (which is the only nonzero element in $\mathbb Z_6$ whose order - 2 - divides 8). In multiplicative terms, they are the homomorphism which sends everything to $1$ and the homomorphism which sends $n \mapsto 6^n=(-1)^n$. So, by composition, we have the two homomorphisms $$ \begin{split} \phi_1 \colon & \mathbb Z_8 \to \text{Aut}(\mathbb Z_7)\\ & n \mapsto \text{id} \end{split} $$ and $$ \begin{split} \phi_2 \colon & \mathbb Z_8 \to \text{Aut}(\mathbb Z_7)\\ & n \mapsto \psi_n \colon \mathbb Z_7 \ni x \mapsto 6^nx = (-1)^nx \in \mathbb Z_7 \end{split} $$ Am I right? Now, if we take $\phi_1$ we simply get the direct product. What if we take $\phi_2$? For the sake of simplicity, let's assume additive notation (this is stupid, I know, but it has helped me somehow to understand). If I'm not wrong, we obtain that $H \rtimes_{\phi_2} \mathbb Z_8$ is the set $H \times \mathbb Z_8$ with the operation given by $$ (a,b) + (c,d) = (a+(-1)^bc,b+d) $$ Now if I do $(0,k)+(h,0)-(0,k) = ((-1)^k h, 0) = \phi_k(h)$, which is exactly what I want. Now I must pass to the much more comfortable multiplicative notation: so let $C_7=\langle s \rangle$ and $C_8=\langle r \rangle$ be the cyclic groups of order 7 and 8. Then we define the automorphisms $$ \begin{split} \phi_n \colon & C_7 \to C_7 \\ & x \mapsto x^{(-1)^n} \end{split} $$ and the homomorphism $$ \begin{split} \psi \colon & C_8 \to \text{Aut}(C_7) \\ & n \mapsto \phi_n \end{split} $$ In other words, we can simply say that $\psi$ is the homomorphism which sends the generator $r$ to the inversion $x \mapsto x^{-1}$. Am I right so far? Well, now $C_7 \rtimes_{\psi} C_8$ is the set $C_7 \times C_8$ with the operation given by $$ (a,b)(c,d) = (ac^{(-1)^b},bd) $$ I do the calculation again: $(1,k)(h,1)(1,k)^{-1}=(h^{(-1)^k},1) = \phi_k(h)$, which is exactly what I want (also according to ineff's answer). Are there any mistakes? May I ask one more question? What is this mysterious group I've built up? Is it isomorphic to some other (simpler) group? How can I write down a presentation? I thank you in advance for your kind help. AI: I suppose your confusion is due to the double representation of the semidirect product: internal vs external. The semidirect product theorem states that if you have a group $G$ having two subgroups $H,K < G$ such that $H$ is normal in $G$, $H \cap K = \{1_G\}$ and $G=HK$, then there's an isomorphism $G \cong H \rtimes_\psi K$, for a certain $\psi \colon K \to \text{Aut}(H)$. By the theorem we can represent every element of $G$ as a pair $(h,k) \in H \times K$ (which is the support of the group $H \rtimes K$).
Consider the two subgroups $\bar H = \{(h,1_K) | h \in H\} \leq H \rtimes K$ and $\bar K = \{(1_H,k)|k \in K\} \leq H \rtimes K$; these subgroups correspond, via the isomorphism, to the subgroups $H$ and $K$ of $G$. In $H \rtimes K$ we have that for every $h \in H$ and $k \in K$ $$(1_H,k) * (h,1_K) *(1_H,k)^{-1} = (\psi_k(h),1_K)$$ If we identify every $h \in H$ with its corresponding element $(h,1_K)$ and every $k \in K$ with $(1_H,k)$, then this equality becomes (internally in $G$) $$k*h*k^{-1}=\psi_k(h)$$ The $\psi$ which determines the operation in the semidirect product is exactly the homomorphism sending every $k \in K$ to the (restriction to $H$ of the) automorphism $\psi_k \colon H \to H$ which sends $h \in H$ to $khk^{-1}$ (this is clearly well defined because $H$ is normal in $G$). Hope this helps.
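To address the presentation the asker requests (my own addition, to be checked against the construction above): since $\psi$ sends the generator $r$ of $C_8$ to the inversion automorphism of $C_7=\langle s\rangle$, the relations of the group built here should be $$C_7 \rtimes_{\psi} C_8 \cong \langle r, s \mid s^7 = r^8 = 1,\ r s r^{-1} = s^{-1} \rangle,$$ a non-abelian group of order 56 in which $r$ acts on $s$ by inversion, much as the order-2 generator does in a dihedral group.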
H: Does $A^nx=\lambda ^n x$ apply for $n$ smaller than $-1$ (assuming $A$ is invertible)? I am able to show (by proving 3 separate cases) that $Ax=\lambda x$ for nonzero $x$ and invertible $A$ implies that $A^nx=\lambda ^n x$ for all integers $n$ greater than or equal to $-1$. I was trying to extend this theorem to the rest of the negative integers, but I ran into a hitch because $A$ invertible doesn't imply $A^n$ invertible. So it seems my original proof was as general as it can be. Is my reasoning correct though? Thanks. AI: If $A$ is invertible and $A x = \lambda x$ for some $x\neq 0$, then $\lambda \neq 0$, and $A^{-1} x = \frac{1}{\lambda} x$. It follows that $A^{-n} x = \lambda^{-n} x$ for all integers $n$.
H: Proof: $X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$ As the title states, the problem at hand is proving the following: $X\ge 0, r>0\Rightarrow E(X^r)=r\int_0^{\infty}x^{r-1}P(X>x)dx$ Attempt/thoughts on a solution I am guessing this is an application of Fubini's Theorem, but wouldn't that require writing $P(X>x)$ as an expectation? If so, how is this accomplished? Thoughts and help are appreciated. AI: Proof: Consider the expectation of the identity $$ X^r=r\int_0^{X}x^{r-1}\,\mathrm dx=r\int_0^{+\infty}x^{r-1}\mathbf 1_{X>x}\,\mathrm dx. $$ Taking expectations and using Tonelli's theorem to swap $\mathrm E$ and the integral, and noting that $\mathrm E(\mathbf 1_{X>x})=\mathrm P(X>x)$, yields the claimed formula.
H: Proving a relation between 2 sets as antisymmetric Let $U = \{1,...,n\}$ and let $A$ and $B$ be partitions of the set $U$ such that $$\bigcup A = \bigcup B = U$$ and $|A|=s, |B|=t$. Let's define a relation between the sets $A$ and $B$ as follows: $$B \succ A \iff \forall_{1 \leq i \leq t}, \exists_{1 \leq j \leq s}: B_i \subseteq A_j$$ Now, we want to prove that $\succ$ is an antisymmetric relation. Thus, we want to prove that $$\forall_{A,B},A \succ B \land B \succ A \implies A=B$$ How would one prove this? I'm pretty much stuck after laying out the conditions, and I don't have a clue on what assumptions I should make that will lead me anywhere. AI: First of all, for what you write to make sense, you're not picking two partitions and defining a relation between those particular two partitions. You're defining a single relation on the set of all partitions. Second, I think you're overcomplicating things by considering a partition to be an indexed family of subsets of $U$. Indeed, if you consider the assignment of indices to be part of the partition, then what you want to prove is not true, because then the two partitions $$ A_1 = \{1\}, A_2 = \{2,3,\ldots,n\} $$ and $$ B_1 = \{2,3,\ldots,n\}, B_2 = \{1\} $$ would satisfy $A\succ B\succ A$, but $A\ne B$. So we need to work with partitions being unordered collections of subsets of $U$, and your relation should then be defined as $$ B\succ A \quad\iff \forall b\in B\; \exists a\in A: b\subseteq a$$ In order to prove that this is antisymmetric, we assume $A\succ B\succ A$ and seek to prove that $A \subseteq B$. (Then, since $A$ and $B$ were arbitrary, and we also have $B\succ A\succ B$, the same argument shows $B\subseteq A$, so $A = B$). In order to prove this, it is crucial that $A$ and $B$ are partitions, which requires among other things that (1) any two different members of $A$ must be disjoint, and (2) the empty set is not in $A$. Now, we're assuming that $A\succ B\succ A$. To prove $A\subseteq B$ consider any $x\in A$. One of the $\succ$s gives us $y\in B$ such that $x\subseteq y$, and the other gives $z\in A$ such that $x\subseteq y\subseteq z$. But because $x$ and $z$ are both in $A$, they must be either equal or disjoint. Since $x$ is non-empty and $x\subseteq z$ they can't be disjoint, so $x=z$. But then $x\subseteq y \subseteq x$, and $y$ must equal $x$. Since $y$ was in $B$, we have proved $x\in B$.
H: Combinations of characteristic functions: $\alpha\phi_1+(1-\alpha)\phi_2$ Suppose we are given two characteristic functions: $\phi_1,\phi_2$ and I want to take a weighted average of them as below: $\alpha\phi_1+(1-\alpha)\phi_2$ for any $\alpha\in [0,1]$ Can it be proven that the result is also a characteristic function? If so, I am guessing this result could extend to any number of combinations $\alpha_i$ as long as $\sum_i\alpha_i=1$ Secondly, if $\phi$ is again a characteristic function, then $\mathfrak{R}e\phi(t)=\frac12(\phi(t)+\phi(-t))$ is also a characteristic function. I don't even know how to begin attempting this proof as I am not sure what the $\mathfrak{R}$ represents. Lastly, regarding the symmetry of characteristic functions, $\phi$ is symmetric about zero iff it is real-valued iff the corresponding distribution is symmetric about zero. Once again, my lack of familiarity with the complex plane leaves me in the dark here. Why can a complex-valued function not be symmetric about zero? AI: To prove that these are characteristic functions, using random variables yields simpler, more intuitive, proofs. In the first case, assume that $\phi_1(t)=\mathrm E(\mathrm e^{itX_1})$ and $\phi_2(t)=\mathrm E(\mathrm e^{itX_2})$ for some random variables $X_1$ and $X_2$ defined on the same probability space and introduce a Bernoulli random variable $A$ such that $\mathrm P(A=1)=\alpha$ and $\mathrm P(A=0)=1-\alpha$, independent of $X_1$ and $X_2$. Then: The function $\alpha\phi_1+(1-\alpha)\phi_2$ is the characteristic function of the random variable $AX_1+(1-A)X_2$. The extension to more than two random variables is direct. Assume that $\phi_k(t)=\mathrm E(\mathrm e^{itX_k})$ for every $k$, for some random variables $X_k$ defined on the same probability space and introduce an integer valued random variable $A$ such that $\mathrm P(A=k)=\alpha_k$ for every $k$, independent of $(X_k)_k$. Then: The function $\sum\limits_k\alpha_k\phi_k$ is the characteristic function of the random variable $X_A=\sum\limits_kX_k\mathbf 1_{A=k}$. In the second case, assume that $\phi(t)=\mathrm E(\mathrm e^{itX})$ for some random variable $X$ and introduce a Bernoulli random variable $A$ such that $\mathrm P(A=1)=\mathrm P(A=-1)=\frac12$, independent of $X$. Then: The function $t\mapsto\frac12(\phi(t)+\phi(-t))$ is the characteristic function of the random variable $AX$. Regarding the notation in your last two bullets: $\mathfrak{R}e\,\phi(t)$ denotes the real part of $\phi(t)$. One always has $\phi(-t)=\overline{\phi(t)}$, so $\phi$ is real-valued exactly when $\phi(-t)=\phi(t)$ for all $t$, and since $t\mapsto\phi(-t)$ is the characteristic function of $-X$, this happens exactly when $X$ and $-X$ have the same distribution, i.e. when the distribution is symmetric about zero.
H: Calculate BPM from history of hand positions I calculate the beats per minute (BPM) out of Kinect hand positions tracked from a conductor. I do that by finding the last and the second-to-last minimum in my history data. I then calculate the time difference between these two minimums and extrapolate this difference to a minute to get the current BPM. However, I am struggling to find the correct mathematical equation for that. How would I start? It is basically just the following, but how would I use the $history$ instead of firstLow and secondLow? And have a function like $bpm(x) = ?$ or similar. $$bpm = 60 / (firstLow - secondLow)$$ To show what my data set looks like, I added a simple graphic which should give you a few more details. Thank you very much for your help. AI: Long story short, you don't want to look for peaks and the distance between them, for many reasons. First, it is heuristically difficult to do this with real data. Second, you will get all sorts of BPM variation due to small effects -- essentially small variations will corrupt your data. What you want to do is draw a threshold at around 40% of a recent maximum, and then count all rising (or falling) crossings of that threshold. It's much easier to compute the crossing of a threshold than it is to determine a maximum/minimum in real time, and it is far less sensitive to noise. This works because if you assume periodic behavior, which is usually a fair assumption, then the period between peaks is exactly the same as the period between rising crossings of a given threshold. In practice, to get reliable results, you might want an adaptive-type estimator, because of possible variations in the baseline of the signal. This is a similar algorithm (almost exactly the same, really) to that used in detection of heart rate based on pulse oximetry data. However, if you are comfortable with your method, then it is simple to compute BPM. Let $\Delta t := t_{2}-t_{1}$ be the time between the two minimums. Then you have $BPM = \frac{1\ \mathrm{beat}}{\Delta t\ \mathrm{sec}} \cdot \frac{60\ \mathrm{sec}}{1\ \mathrm{min}}$ (in other words, your equation was correct).
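A minimal sketch of the threshold-crossing idea (function names and the sampling setup are my assumptions, not from the answer):

```python
def rising_crossings(samples, threshold):
    # Count how often the signal rises through the threshold.
    return sum(1 for prev, cur in zip(samples, samples[1:])
               if prev < threshold <= cur)

def bpm(samples, seconds):
    # Threshold at ~40% of a recent maximum, as suggested above.
    threshold = 0.4 * max(samples)
    return rising_crossings(samples, threshold) * 60.0 / seconds
```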
H: Newton's method - finding suitable starting point I have some trouble solving a problem in my textbook: Given the following function: $$f(x) = x^{-1} - R$$ Assume $R > 0$. Write a short algorithm to find $1/R$ by Newton's method applied to $f$. Do not use division or exponentiation in the algorithm. For positive $R$, what starting points are suitable? OK, so I've managed to solve the first part of the problem. Using Newton's method and rearranging terms I have gotten: $$x_{n+1} = x_{n}(2 - Rx_{n})$$ This is correct according to the book, and I can just use my standard algorithm for Newton's method with this expression. So far so good. The problem is the second part, where I am supposed to find suitable starting points. I figured that if $x_{1} = -x_{0}$, then the iterations cycle. So then I get: $$\begin{align*} -x_{0} &= x_{0}(2 - Rx_{0})\\ -3x_{0} &= - Rx_{0}^2\\ -3 &= -Rx_{0}\\ x_{0} &= 3/R \end{align*}$$ Thus my answer would be that we must have $x_{0} < 3/R$. My book, however, says: If $R > 1$ then the starting point $x_{0}$ should be close to $0$ but smaller than $1/R$. So what is wrong with my reasoning here? If anyone can help me out, I would really appreciate it! AI: The function $f(x) = x^{-1}$ is monotonically decreasing for $x > 0$. Newton's method works by following a tangent line of the function at a certain point to the x-axis, computing the zero-crossing of that tangent line, and repeating the process at that new value. What happens if you pick a large initial value for $x_0$, say $x_0 > 1/R$? Draw that tangent line, and follow it back up to the $x$-axis. It may not cross the $x$-axis at any point $x > 0$. That puts you in a whole different region of the function $1/x$, and you may not ever converge to your root.
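For illustration, the division-free iteration derived in the question, in code (a sketch of mine; the convergence remark in the comment follows from the identity $1-Rx_{n+1}=(1-Rx_n)^2$):

```python
def reciprocal(R, x0, iterations=20):
    # Newton's method for f(x) = 1/x - R:  x <- x * (2 - R * x).
    # The error e_n = 1 - R*x_n squares each step, so any 0 < x0 < 2/R
    # converges; the book's "close to 0 but smaller than 1/R" is a safe subset.
    x = x0
    for _ in range(iterations):
        x = x * (2 - R * x)
    return x

print(reciprocal(7.0, 0.1))  # approximately 0.142857... = 1/7
```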
H: Example of a meromorphic function in $\mathbb{C}$ but not in $\mathbb{C}_{\infty}$ I need to produce an example of a meromorphic function on $\mathbb{C}$ but not meromorphic on the Riemann sphere $\mathbb{C}_{\infty}$. Will this work: $f(z)=e^z-1/z$? Other examples are welcome. Thank you. AI: I just wanted to note that your example $e^z-\frac1z$ works but so does the simpler $e^z$. (You can add in the $-\frac1z$ in order to put a pole in $\mathbb C$ if you like, but typically holomorphic functions are considered particular instances of meromorphic functions).
H: Local solutions of a Diophantine equation I am trying to prove that the equation $$3x^3 + 4y^3 +5z^3 \equiv 0 \pmod{p}$$ has a non-trivial solution for all primes $p$. I am sure that this is a standard exercise, and I have done the easy parts: treating $p=2, 3, 5$ as special cases (very simple), and then for $p\geq 7$, those for which $p \equiv 2 \pmod{3}$ are also straightforward, as everything is a cubic residue $\pmod{p}$, but I am having a mental block about the remaining cases where $p \equiv 1 \pmod{3}$ and only $(p-1)/3$ of the integers $\pmod{p}$ are cubic residues. I am hoping to eventually show that the original equation has non-trivial solutions in $\mathbb{Q}_p$, and this congruence might be an easy first step towards the $p$-adic case. Any pointers, or references to a proof (I am sure there must be some in the literature) would be most gratefully received. AI: I think Selmer's example is in some book I own but I cannot find it. It would be a natural footnote in any of my quadratic forms books, but there you go. Here is some stuff from a book you may not be looking at, page 79, second edition of $p$-adic Numbers by Fernando Q. Gouvea. Related to your example is Problem 121: show that the same conditions hold for $$ (x^2 -2) (x^2 - 17) (x^2 - 34) = 0, $$ which can be checked with the rational roots theorem. Hmmm. Then $$ x^4 - 2 y^2 = 17. $$ He says non-existence of rational solutions is the hard part in this one. I think this one is accessible from stuff in Mordell's book Diophantine Equations. The one I wanted to get to is how $x^2 + y^2 + z^3 = n$ has a solution in $\mathbb Z$ for every $n,$ both $n,z$ allowed to be negative when needed, but $$ x^2 + y^2 + z^9 \neq 216 p^3 $$ for positive prime $p \equiv 1 \pmod 4,$ see Integers of the form $a^2+b^2+c^3+d^3$ and MEEEEE
H: Finding the probability generating function for $P\left\{ X > n+1\right\}$ I am trying to find the generating function for $P\left\{ X > n+1\right\}$. Let $X$ be a random variable assuming the values $0, 1, 2, ...$. The notation both for the distribution of $X$ and for its tails is $P\left\{ X = j\right\} = p_j$, $P\left\{ X > j\right\} = q_j$. So the generating functions of the sequences $\{p_j\}$ and $\{q_j\}$ are $$P(s) = p_0 + p_1s+ p_2s^2 + p_3s^3+...$$ and $$Q(s) = q_0 + q_1s+ q_2s^2 + q_3s^3+...$$ As $P(1) = 1$, the series for $P(s)$ converges absolutely at least for $-1 \leq s \leq 1$. The coefficients of $Q(s)$ are less than unity, and so the series for $Q(s)$ converges at least in the open interval $-1 < s < 1$. Also for $-1 < s < 1$ there is a known identity which provides the relation $$Q\left( s\right) =\dfrac {1-P\left( s\right) } {1-s}$$ In most similar problems the approach I have been taking is to convert the desired probability into some algebraic expression in $P\left\{ X = j\right\}$, $P\left\{ X > j\right\}$ and maybe $1$, then substitute into one of the known generating functions and solve for an expression for the generating function of the desired probability. I am unsure this idea works this time, as I can't seem to shake off the tail due to the $+1$ on the $n$. $$P\left\{ X > n+1\right\} = 1-P\left\{ X \leq n+1\right\} $$ $$=1 -P\left\{ X = n+1\right\}-P\left\{ X < n+1\right\}$$ I am unsure if I should further expand the $P\left\{ X < n+1\right\}$ part, as it would keep unfolding recursively. Any help would be much appreciated. AI: Let $H(s)=\sum\limits_{n=0}^{+\infty}\mathrm P(X\gt n+1)\,s^n$, then $$ sH(s)=\sum_{n=1}^{+\infty}\mathrm P(X\gt n)\,s^{n}=\sum_{n=0}^{+\infty}\mathrm P(X\gt n)\,s^{n}-\mathrm P(X\gt0)\cdot1=Q(s)-q_0,$$ hence $$H(s)=\frac{Q(s)-q_0}{s}=\frac{1}{s}\left(\frac{1-P(s)}{1-s}-(1-p_0)\right).$$
H: What does the following notation mean: $\Bbb Q[x,y]$? I came across this notation while at this MathOverflow thread and I could not find its meaning. My best guess is that $f(x,y) \in \Bbb Q[x,y]$ represents a continuous function on the interval $[x,y]$ with $x,y \in \Bbb Q$, and thus that $\Bbb Q[x,y]$ is a set of such functions. However, I am not sure. AI: It means a polynomial in the two variables $x$ and $y$ with coefficients in the rational numbers $\mathbb Q.$
H: Complex contour integral I am trying to calculate this integral, where $z$ is a complex number. I think I need to find the residue because there is a singularity (the center of the contour is actually the singularity $z = -1 + i$). So I believe I should find the residue. Is this all correct? And if it is, how do I find the residue? $$c = -1 + i + e^{it}, \qquad 0 < t \leq 2\pi$$ $$\int_c \frac1{(z^4 + 4)} dz$$ AI: Recall that if $z_1$ is a simple singularity of $f(z)$, $$\operatorname{Res}_{z=z_1} f(z)= \lim_{z \to z_1} (z-z_1)f(z)$$
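Carrying the hint through for this contour (a worked sketch of my own): $-1+i$ is a simple zero of $z^4+4$, and it is the only zero inside the unit circle $c$ centered at $-1+i$ (the other roots $1+i$, $-1-i$, $1-i$ lie at distance at least $2$). Hence $$\operatorname{Res}_{z=-1+i}\frac{1}{z^4+4}=\lim_{z\to -1+i}\frac{z-(-1+i)}{z^4+4}=\frac{1}{4z^3}\bigg|_{z=-1+i}=\frac{1}{8(1+i)}=\frac{1-i}{16},$$ and the residue theorem gives $$\int_c \frac{dz}{z^4+4} = 2\pi i\cdot\frac{1-i}{16}=\frac{\pi(1+i)}{8}.$$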
H: number of zeros of a complex polynomial's leading term Let $p(z)=a_n z^n + a_{n-1} z^{n-1}+...$ be a polynomial of degree $n$. Prove that in a disc of sufficiently large radius, $p(z)$ and $r(z)=a_n z^n$ have the same number of zeros. AI: The number of zeros of $p$, counted with multiplicity, is finite, equal to the degree. Consider a disc of radius the largest modulus of a root, plus an epsilon: it contains all $n$ zeros of $p$, while $r(z)=a_n z^n$ has a zero of multiplicity $n$ at the origin.
H: Is equality proof valid if terms are moved across equality? For any two events A and B with $Pr(B) > 0$, prove that $Pr(A^c|B) = 1− Pr(A|B)$. Is there a valid way to finish a proof if a step moves a term across the equality as follows? Show: $Pr(A^c|B) = 1− Pr(A|B)$ $Pr(A^c|B) + Pr(A|B) = 1 - Pr(A|B) + Pr(A|B)$ $1 = (Pr(A\cap B) + Pr(A^C \cap B))/Pr(B) $ $= Pr(B \cap (A \cup A^C)) / P(B) $ $= Pr(B)/Pr(B) = 1$ Therefore $Pr(A^c|B) + Pr(A|B) = 1$, and $Pr(A^c|B) = 1− Pr(A|B)$ AI: Why not just drop the second line (the one that adds $Pr(A|B)$ to both sides of what you want to show)? Everything after that is correct and proves the proposition: it establishes $Pr(A^c|B) + Pr(A|B) = 1$ directly, from which the claim follows by rearranging. Good job!
H: $f(x)$ absolutely continuous $\implies e^{f(x)}$ absolutely continuous, for $x \in [a,b]$? If $f(x)$ is absolutely continuous (a.c.) on [a,b], is the function $e^{f(x)}$ also absolutely continuous on [a,b] ? thanks AI: It is true. Suppose $g$ is Lipschitz with rank $L$ on $[a,b]$, and $f$ is AC. Then $g \circ f$ is also AC. To see this, suppose $f$ is AC, and let $\epsilon>0$. Choose $\delta>0$ such that if $(y_k,x_k)$ are a finite collection of pairwise disjoint intervals in $[a,b]$ with $\sum |y_k-x_k| < \delta$, then $\sum |f(y_k)-f(x_k)| < \frac{\epsilon}{L}$. Now consider $\sum |g \circ f(y_k)-g \circ f(x_k)| = \sum |g(f(y_k))-g ( f(x_k))| \leq L \sum |f(y_k)-f(x_k)| < \epsilon$. Hence $g \circ f$ is AC. Since $x \mapsto e^x$ is smooth, it is Lipschitz on any compact interval, hence the function $ x \mapsto e^{f(x)}$ is AC.
H: When can one use logarithms to multiply matrices If $a,b \in \mathbb{Z}_{+}$, then $\exp(\log(a)+\log(b))=ab$. If $A$ and $B$ are square matrices, when can we multiply $A$ and $B$ using logarithms? If $A \neq B^{-1}$, should $A$ and $B$ be symmetric? AI: If $A$ and $B$ commute, i.e. $AB=BA$, then the same identity holds for an appropriate definition of $\exp$ and $\log$. If $A$ and $B$ do not commute, then things are much more complicated.
H: Showing that a set of trigonometric functions is linearly independent over $\mathbb{R}$ I would like to determine under what conditions on $k$ the set $$ \begin{align} A = &\{1,\cos(t),\sin(t), \\ &\quad \cos(t(1+k)),\sin(t(1+k)),\cos(t(1−k)),\sin(t(1−k)), \\ &\quad \cos(t(1+2k)),\sin(t(1+2k)),\cos(t(1−2k)),\sin(t(1−2k))\}, \end{align}$$ is linearly independent, where $k$ is some arbitrary real number. As motivation, I know that the set defined by $$ \{1, \cos wt, \sin wt\}, \quad w = 1, \dots, n $$ is linearly independent on $\mathbb{R}$, which one generally proves by computing the Wronskian. I thought that I could extend this result to the set in question, but I haven't found a proper way to do so. My intuition tells me that $A$ will be linearly dependent when the arguments of the trig functions coincide, which will depend on the value of $k$. Though, I'm at a loss for proving this is true. Computing the Wronskian for this set required an inordinate amount of time-- I stopped running the calculation after a day. Is there perhaps a way to reduce the set in question so that the Wronskian becomes manageable? I'm interested in any suggestions/alternative methods for proving linear independence that could help my situation. Note that I'd like to have a result that holds for any $m = 0, \dots, n,$ where $n \in \mathbb{Z}$ if possible. Thanks for your time. EDIT: The set originally defined in the first instance of this post was incorrectly cited. My sincere apologies. AI: The set is linearly dependent exactly when one of the frequencies $1\pm k$, $1\pm 2k$ vanishes or two of the frequencies $1, 1\pm k, 1\pm 2k$ agree up to sign, which happens precisely for $k = 0, \pm \frac{1}{2}, \pm \frac{2}{3}, \pm 1, \pm 2$; for every other real $k$ the set is linearly independent. This follows from the following result. Claim: The functions $\{ 1, \sin rt, \cos rt \}$ for $r$ a positive real are linearly independent over $\mathbb{R}$. Proof 1. Suppose that $\sum s_r \sin rt + \sum c_r \cos rt = 0$ is a nontrivial linear dependence. Consider the largest positive real $r_0$ such that $c_{r_0} \neq 0$. Take a large even number of derivatives until the coefficient of $\cos r_0 t$ is substantially larger than the remaining coefficients of the other cosine terms and then substitute $t = 0$; we obtain a number which cannot be equal to zero, which is a contradiction. So no cosines appear. Similarly, consider the largest positive real $r_1$ such that $s_{r_1} \neq 0$. Take a large odd number of derivatives until the coefficient of $\cos r_1 t$ is substantially larger than the remaining coefficients of the other cosine terms (which come from differentiating sine terms) and then substitute $t = 0$; we obtain a number which cannot be equal to zero, which is a contradiction. So no sines appear. So $1$ is the only function which can appear in a nontrivial linear dependence, and so there are no such linear dependences. Proof 2. It suffices to prove that the functions are all linearly independent over $\mathbb{C}$. Using the fact that $$\cos rt = \frac{e^{irt} + e^{-irt}}{2}, \sin rt = \frac{e^{irt} - e^{-irt}}{2i}$$ it suffices to prove that the functions $\{ e^{irt} \}$ for $r$ a real are linearly independent. This can be straightforwardly done by computing the Wronskian, and in fact shows that the functions $\{ e^{zt} \}$ for $z$ a complex number are linearly independent. Proof 3. Begins the same as Proof 2, but we do not compute the Wronskian.
Instead, let $\sum c_z e^{zt} = 0$ be a nontrivial linear dependence with a minimal number of terms and differentiate to obtain $$\sum z c_z e^{zt} = 0.$$ If $z_0$ is any complex number such that $z_0 \neq 0$ and $c_{z_0} \neq 0$ (such a number must exist in a nontrivial linear dependence), then $$\sum (z - z_0) c_z e^{zt} = 0$$ is a linear dependence with fewer terms; contradiction. So there are no nontrivial linear dependences.
H: What is the units digit of $13^4\cdot17^2\cdot29^3$? What is the units digit of $13^4\cdot17^2\cdot29^3$? I saw this on a GMAT practice test and was wondering how to approach it without using a calculator. Thanks. AI: If you compute modulo $10,$ then you'll get $$13^4\, 17^2\, 29^3 \equiv 3^4\cdot 7^2\cdot (-1)^3\equiv 1\cdot(-1)\cdot(-1)\equiv 1 (\mathrm{mod}~10),$$ since $3^4=81\equiv 1$ and $7^2=49\equiv -1 (\mathrm{mod}~10)$. Thus the last digit is $1.$
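A one-line verification of the arithmetic (illustration only):

```python
print((13**4 * 17**2 * 29**3) % 10)  # 1
```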
H: Show that $A$ is non-singular (a) Let $A$ be an $n × n$ real matrix such that $(A + I)^4 = 0$ where $I$ denotes the identity matrix. Show that $A$ is non-singular. (b) Give an example of a nonzero $2×2$ real matrix $A$ such that $x′Ax = 0$ for all real vectors $x$. (a) Note that $(A + I)^4 = 0 \Rightarrow A^4+4A^3+6A^2+4A+I=0$. Then I am stuck. (b) $\begin{pmatrix} 0 & 1\\-1 & 0 \end{pmatrix}$ is an example. I got the matrix from an intuitive idea. Is there another way to get such a matrix? AI: a) Notice that $p(A) = 0$ where $p(x) = \left(x+1\right)^4$. It follows that the minimal polynomial $\mu$ divides $p$. Since $\lambda$ is an eigenvalue if and only if $(x-\lambda)\mid \mu$ it follows that $-1$ is the only eigenvalue. More specifically, $0$ is not an eigenvalue and hence $A$ is invertible. b) If we multiply the equation out we obtain $$x^\mathrm{T}Ax = 0$$ $$\begin{pmatrix} x & y \end{pmatrix}\begin{pmatrix} a & b\\c & d \end{pmatrix}\begin{pmatrix} x\\y \end{pmatrix} = 0$$ $$ax^2 + dy^2 + (b+c)xy = 0$$ Taking $y = 0$ gives $a = 0$, and taking $x = 0$ gives $d = 0$. It is easily seen then that $$a = d = b+c = 0$$ is necessary and sufficient, i.e. all such examples are simply scalar multiples of the matrix you gave.
H: Am I right in thinking $\frac{x^{2}}{ax+b}$ is an improper rational expression? Am I right in thinking $\dfrac{x^{2}}{ax+b}$ is an improper rational expression? If so, can someone help me figure out how to write it as the sum of a polynomial and proper rational expression? I have not a clue. AI: I'll do the first few steps; here's hoping you'll catch on to what I'm doing: $$\begin{align*} \frac{x^2}{ax+b}&=\frac{ax^2}{a(ax+b)}\\ &=\frac{ax^2}{a(ax+b)}+\frac{bx}{a(ax+b)}-\frac{bx}{a(ax+b)}\\ &=\frac{(ax+b)x}{a(ax+b)}-\frac{bx}{a(ax+b)}\\ &=\frac{x}{a}-\frac{b}{a}\frac{x}{ax+b}\\ &=\frac{x}{a}-\frac{b}{a}\left(\frac{ax}{a(ax+b)}+\frac{b}{a(ax+b)}-\frac{b}{a(ax+b)}\right)\\ \end{align*}$$ Can you take it from here?
H: which of the following statements are true and which are false Let $f$ and $g$ be continuous functions such that $f(x) ≤ g(x)$ for all $x ∈ [0, 1]$. Determine which of the following statements are true and which are false: $$ \begin{align} (a) & {}\quad \int_0^x |f(t)|~dt \leq\int_0^x |g(t)|~dt ~\forall~x ∈ [0, 1]\\ (b) & {}\quad \int_0^x (|f(t)|+f(t))~dt \leq\int_0^x \left(|g(t)|+g(t)\right)~dt ~\forall~x ∈ [0, 1]\\ (c) & {}\quad \int_0^x (|f(t)|-f(t))~dt \leq\int_0^x \left(|g(t)|-g(t)\right)~dt ~\forall~x ∈ [0, 1] \end{align}$$ For any statement which you believe to be true, you need to give a proof and for any statement which you believe to be false, you need to give a counter example. Where do I start and how do I proceed? Please help me. AI: Some hints. a) Consider $f$ negative and $g$ positive with $f$ having a larger absolute value at all points. b) Note that $|f(x)| + f(x)$ is $2f(x)$ for $f(x) \ge 0$ and $0$ for $f(x) < 0$. So consider the sign of the functions. When $f$ is positive, then certainly $g$ is as well; what can you say about the integrands? If $f(x)$ is negative, then $|f(x)| + f(x)$ is $0$; what can you say about the integrand $|g(x)| + g(x)$? c) The idea is similar to b).
H: Is the result always n+1? I'm reading a book on algorithms by Kurt Mehlhorn and Peter Sanders. On page 2, the following Theorem is stated: The addition of two n-digit integers requires exactly n primitive operations. The result is an n+1-digit integer. Is the result always n+1 digits? What if you have two 2-digit integers like 11 and 22, or others like that? AI: It simply says that n+1 digits are sufficient to represent the sum. Keep in mind that you are reading a book on algorithms; the algorithm simply needs to be able to make an assumption that holds. If you were to check whether to use n or n+1 digits each time, you could save a few bits in some cases, but the worst-case space usage remains the same and you are increasing the time taken unnecessarily. On that line, even when it refers to n-digit integers here, it does not mean that the leading digit in any of them is necessarily non-zero. In fact, the text here makes this same point right before starting the section on addition: "We can artificially turn any n-digit integer into an m-digit integer for any m≥n by adding additional leading zeros. Concretely, “425” and “000425” represent the same integer. We shall use a and b for the two operands of an addition or multiplication and assume throughout this section that a and b are n-digit integers. The assumption that both operands have the same length simplifies the presentation without changing the key message of the chapter. We shall come back to this remark at the end of the chapter. We refer to the digits of a as an−1 to a0, with an−1 being the most significant digit (also called leading digit) and a0 being the least significant digit, and write a = ($a_{n−1} . . . a_0$). The leading digit may be zero." To see why n+1 digits are sufficient here, if you are working in base b, then any n-digit number is less than $b^n$, so the sum of two such numbers is less than $2b^n$, which is less than or equal to $b^{n+1}$ if b is at least 2. Being less than $b^{n+1}$ means that n+1 digits are sufficient to represent it.
H: Show that $g$ is differentiable on $(0,\infty)$ Let $f:\mathbb{R}\to \mathbb{R}$ be a bounded continuous function. Define $g:[0,\infty)\to \mathbb{R}$ by $$g(x)=\int_{-x}^{x}(2xt+1)f(t) ~dt.$$ Show that $g$ is differentiable on $(0,\infty)$ and find the derivative of $g$. I can find the derivative. I can check differentiability by equating the left-hand limit and right-hand limit at one point. How can I check differentiability on an interval? Please help. AI: If $h$ is continuous on $\mathbb{R}$, then the function $\phi(x) = \int_0^x h(t) dt$ is differentiable on $\mathbb{R}$, with derivative $\phi'(x) = h(x)$. To see this, choose $ x \in \mathbb{R}$, and let $\epsilon>0$. Since $h$ is continuous, there exists a $\delta>0$ such that if $|y-x| < \delta$, then $|h(y)-h(x)| < \epsilon$. Then for $|\eta| < \delta$ you have $$|\phi(x+\eta) - \phi(x) - h(x) \eta| = |\int_{x}^{x+\eta} (h(t)-h(x)) dt| \leq \epsilon |\eta|,$$ from which the desired result follows. Since $x \in \mathbb{R}$ was arbitrary, it follows that $\phi$ is differentiable on $\mathbb{R}$. Now consider the function $\eta(x) = \int_{-x}^x h(t) dt$. Since we can write $\eta(x) = \int_0^x h(t) dt - \int_0^{-x} h(t) dt$, it is clear that $\eta$ is differentiable, and $\eta'(x) = h(x)+h(-x)$ (the latter term involves the chain rule). Now let $x \in (0,\infty)$ and write $$g(x) = 2 x \int_{-x}^x t f(t) dt + \int_{-x}^x f(t) dt.$$ Since the functions $f$ and $t \mapsto t f(t)$ are continuous, it follows that $g$ is differentiable at $x$. Since $x$ was arbitrary, $g$ is differentiable everywhere on $(0, \infty)$. The derivative is straightforward to compute using the product rule as: $$g'(x) = 2x(x f(x)+(-x)f(-x))+2 \int_{-x}^x t f(t) dt +f(x)+f(-x).$$
H: Bézier approximation of an Archimedes spiral? As part of an iOS app I’m making, I want to draw a decent approximation of an Archimedes spiral. The drawing library I’m using (CGPath in Quartz 2D, which is C-based) supports arcs as well as cubic and quadratic Bézier curves. What is a good method of approximating an Archimedes spiral using either of these path types? For example the wikipedia exemplar image says it was “drawn as a series of minimum-error Bézier segments.” How would one generate such segments? My math background takes me through Calculus III plus some stuff I picked up from a classical mechanics class, but it’s been a couple of years so I’m rusty. What I have so far: For a spiral r = a + b $\theta$, I used the information from this page to find that the cartesian slope at any point (r, $\theta$) is equal to $$\frac{dy}{dx}=\frac{b\sin\theta\space+\space(a + b\theta)\cos\theta}{b\cos\theta\space-\space(a + b\theta)\sin\theta}$$ From here, I could use point-slope to find the equation of a tangent line at any point, but how do I go about finding the proper lengths of the handles (i.e. the positions of the middle two points) for the curve? Or would an approximation with circular arc segments be better/easier/faster? If I can’t figure it out, I’ll just use a static image in the app, but it occurs to me that I don’t even know of a way to generate a high-quality image of an Archimedes spiral! The Spiral tool in Illustrator, for example, does only logarithmic spirals. AI: So it looks like the Wikipedia reference image uses 45-degree sections of these curves. You can use the equation for the spiral to give you the tangent line at the beginning and end of each curve section. Evaluate the derivative at these two points to get the tangent line slopes and then shift your lines appropriately to pass through the points used. The intersection of these two lines should be your control point. Once you have found your control point you can put it in the function 'CGPathAddQuadCurveToPoint' for the cx, cy (I think) along with the point you want to go to (also from the spiral equation). For reference--check out the animation under 'quadratic curves' here For extra speed, you only have to find 8 tangent lines max--just shift them out for the next cycle of the spiral and reuse them.
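A sketch of the control-point construction in code (function names and parameters are mine, not from the answer; the control point of each quadratic Bézier segment is the intersection of the endpoint tangents):

```python
import math

def spiral_point(a, b, theta):
    r = a + b * theta
    return (r * math.cos(theta), r * math.sin(theta))

def spiral_tangent(a, b, theta):
    # d/dtheta of (r cos(theta), r sin(theta)) with r = a + b*theta
    r = a + b * theta
    return (b * math.cos(theta) - r * math.sin(theta),
            b * math.sin(theta) + r * math.cos(theta))

def control_point(a, b, t0, t1):
    # Intersect the tangent lines p0 + s*d0 and p1 + u*d1.
    (x0, y0), (dx0, dy0) = spiral_point(a, b, t0), spiral_tangent(a, b, t0)
    (x1, y1), (dx1, dy1) = spiral_point(a, b, t1), spiral_tangent(a, b, t1)
    det = dx0 * dy1 - dy0 * dx1
    s = ((x1 - x0) * dy1 - (y1 - y0) * dx1) / det
    return (x0 + s * dx0, y0 + s * dy0)

# One 45-degree segment: endpoints at t0, t1, control point between them.
print(control_point(1.0, 0.2, 0.0, math.pi / 4))
```

The start point, this control point, and the end point then feed directly into a quadratic-curve call such as CGPathAddQuadCurveToPoint.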
H: A question about probability I have met an interesting question: If it rains today, the probability that it rains tomorrow is $0.6.$ If it doesn't rain today, the probability that it rains tomorrow is $0.2.$ Given that it rained on Tuesday, what's the probability that it rained on Monday? I have no idea how to solve this. If I make the question a bit more complicated: Given that it rained on Tuesday, what's the probability that it rained on the Sunday just before? AI: If there is some constant probability $p$ of rain on any given day, in the absence of information about any other day's rainfall, then we have $p = 0.6p + 0.2(1-p)$, which can be solved to give $p=1/3$. So, probability of rain on Monday and Tuesday (if we don't already know that it rained on Tuesday) is $p \cdot 0.6 = 1/5$. Probability of rain on Tuesday but not Monday (if we don't already know that it rained on Tuesday) is $(1-p) \cdot 0.2 = 2/15$. But these two cases together make up the event that it rained on Tuesday; so the probability that there's rain on Monday given that it rained on Tuesday is $\dfrac{1/5}{1/5+2/15} = \dfrac{3}{5}$. However, it's possible that there is no such constant probability $p$. In this case, as time goes on, the probability of rain from day to day will approach 1/3 in the limit. This is only an issue if the world has existed for a finite number of days. In this case, though, there is not enough information to solve the problem.
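The arithmetic in code, plus the two-day (Sunday) variant the answer leaves open (my own illustration, under the same stationary assumption):

```python
p = 1 / 3                       # solves p = 0.6*p + 0.2*(1 - p)

# P(rain Monday | rain Tuesday)
both = p * 0.6
tue_only = (1 - p) * 0.2
print(both / (both + tue_only))  # 0.6 = 3/5

# P(rain Sunday | rain Tuesday): two-step transition probabilities
rain_rain = 0.6 * 0.6 + 0.4 * 0.2   # Sun rain -> Tue rain = 0.44
dry_rain = 0.2 * 0.6 + 0.8 * 0.2    # Sun dry  -> Tue rain = 0.28
print(p * rain_rain / (p * rain_rain + (1 - p) * dry_rain))  # 0.44
```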
H: Trivial solution when solving in integers Suppose we want to solve $4(x+y)^{2}-3xy-6(x+y)=0$ where $x$ and $y$ are both integers. Why do we only get the trivial solution? AI: The equation can be arranged as a quadratic equation in $x$: $4x^2+x(5y-6)+(4y^2-6y)=0$. The discriminant is $D=(5y-6)^2-4\cdot4\cdot(4y^2-6y)=36+36y-39y^2$. Now $D$ needs to be $\geq 0$ as $x$ is real: $36+36y-39y^2 \geq 0 \iff 39y^2-36y-36 \leq 0 \iff \frac{6-8\sqrt{3}}{13}\leq y\leq\frac{6+8\sqrt{3}}{13},$ since $(t-a)(t-b)\leq 0$ (where $a\leq b$) forces $a\leq t\leq b$. Now, $\frac{6-8\sqrt{3}}{13}>-1$ and $\frac{6+8\sqrt{3}}{13}<2$ (by observation), so $-1 < y < 2$, and $y$, being an integer, can only be $0$ or $1$ to keep $x$ real. Also $D$ needs to be a perfect square as $x$ is an integer. If $y=0$, $D=36$, a perfect square, and the quadratic gives $x=\frac{6\pm6}{8}$, of which only $x=0$ is an integer. If $y=1$, $D=33$, which is not a perfect square. So $y$ must be $0$ and $x=0$: only the trivial solution. Observation: the function on the LHS of the equation is symmetric w.r.t. $x,y$. The calculation and conclusion won't change if we interchange $x,y$.
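A brute-force confirmation over a modest search box (illustration only):

```python
# Search integer solutions of 4(x+y)^2 - 3xy - 6(x+y) = 0.
solutions = [(x, y) for x in range(-100, 101) for y in range(-100, 101)
             if 4 * (x + y) ** 2 - 3 * x * y - 6 * (x + y) == 0]
print(solutions)  # [(0, 0)]
```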
H: Why $A=\{\langle M_1,M_2,M_3 \rangle : L(M_1) \cap L(M_2) \ne L(M_3)\}$ is not in $RE$. I'm trying to figure out what's wrong with the following Turing machine, which would show that the language A=$\{\langle M_1,M_2,M_3 \rangle : L(M_1) \cap L(M_2) \ne L(M_3)\}$ is in $RE$. I said that we can build a Turing machine that runs all inputs in lexicographic order, in parallel: For an input $x$: we run it on $M_1$ and $M_2$; if one of them rejects the input, we skip this input and don't use it. If both of them accept, we run $x$ on $M_3$: if it rejects, we return $true$; if it accepts, we skip this input and don't use it. We may also loop forever: both $M_1$ and $M_2$ loop while checking $x$, or one of them accepts and the other is in an infinite loop, or $M_3$ loops while checking $x$; if we reach this part, the machine is in an infinite loop. What is not correct? I accept if I reach an $x$ which satisfies the condition, or I'm in an infinite loop. AI: Suppose $M_1$ and $M_2$ are two Turing Machines that do not halt on anything. Suppose $M_3$ is a Turing Machine that halts only on some $x$. If you run your algorithm and try to test $\langle M_1, M_2, M_3 \rangle$ by running $M_1$ and $M_2$ on $x$, your algorithm will not halt, because $M_1$ and $M_2$ do not halt. Because $L(M_1) = L(M_2) = \emptyset$ and $L(M_3) = \{x\}$, you cannot skip this step, because this is the only difference between the languages. The essential problem is the above and the last line, where you said "I accept if I reach an $x$ which satisfies the condition or I'm in an infinite loop". How do you ever know that you are in an infinite loop? After running some Turing Machine for 1000 steps, how do you know that it won't halt on the 1001st step? However, the above only shows that your algorithm does not prove that $A$ is RE. You have not proven that $A$ is not RE. To prove $A$ is not RE, one possible method is to reduce a language known to be not RE to $A$. Let $K$ denote the Halting Problem, which is not computable. Let $\bar{K}$ be the complement of $K$. Hence $\bar{K}$ is not RE. Now reduce $\bar{K}$ to $A$. (I leave the details to you.) If $A$ were RE, this reduction would prove that $\bar{K}$ is RE. Contradiction.
H: Compute $\lim_{x\to\infty} \frac{{(x!)}^{\frac{1}{x}-1} (x\Gamma(x+1) \psi^{(0)}(x+1)-x! \log(x!))}{x^2}$ What's the strategy one may use when facing a limit like this one? I think it's more important to know the possible ways to go than the answer itself. It's a problem that came to my mind again when I was working on a different problem. $$\lim_{x\to\infty} \frac{{(x!)}^{\frac{1}{x}-1} (x\Gamma(x+1) \psi^{(0)}(x+1)-x! \log(x!))}{x^2}$$ Any suggestions or hints are very welcome. AI: Note that $$ \left(\Gamma(x+1)^{1/x}\right)'= \frac{(\Gamma(x+1))^{\frac{1}{x}-1} \left(x\Gamma(x+1) \psi^{(0)}(x+1)-\Gamma(x+1) \log\Gamma(x+1)\right)}{x^2}, $$ which is exactly the expression under the limit. So recalling L'Hopital's rule we see that it is enough to find $$ \lim\limits_{x\to\infty}\frac{\Gamma(x+1)^{1/x}}{x} $$ We know the following asymptotic $$ \Gamma(x+1)\sim\left(\frac{x}{e}\right)^x\sqrt{2\pi x}\quad\text{ when }\quad x\to\infty $$ then $$ \lim\limits_{x\to\infty}\frac{\Gamma(x+1)^{1/x}}{x}= \lim\limits_{x\to\infty}\frac{\frac{x}{e}(2\pi x)^{1/(2x)}}{x}= \lim\limits_{x\to\infty}\frac{1}{e}(2\pi x)^{1/(2x)}=\frac{1}{e} $$ For another approach to the limit $$ \lim\limits_{x\to\infty}\frac{(x!)^{1/x}}{x} $$ see this question.
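As a numerical sanity check (my own addition, assuming SciPy is available), one can evaluate the expression in the overflow-free form $(x!)^{1/x}\bigl(x\psi^{(0)}(x+1)-\log\Gamma(x+1)\bigr)/x^2$, using $(x!)^{\frac{1}{x}-1}\cdot x!=(x!)^{1/x}$:

    from math import exp, lgamma, e
    from scipy.special import digamma  # psi^(0)

    for x in [10.0, 100.0, 1000.0, 10000.0]:
        # (x!)^(1/x) computed via exp(lgamma(x+1)/x) to avoid overflow
        val = exp(lgamma(x + 1) / x) * (x * digamma(x + 1) - lgamma(x + 1)) / x**2
        print(x, val)

    print("1/e =", 1 / e)  # the printed values should approach this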
H: Calculating the extinction probability I am trying to solve the following problem. In a branching process the number of offspring per individual has a binomial distribution with parameters 2, p. Starting with a single individual, calculate the extinction probability. I believe the solution to such a problem is evaluated using the equation $z=P(z)$, where of course $P(z)$ is the pgf of the size of the nth generation: $$z = (p(z-1) + 1)^n$$ Due to a lack of appropriate examples I am unsure how to proceed from here. I believe, due to the convolution of each generation depending on the last, we have a recursive equation. So would it be correct to solve for $z_2$ using $z_2 = (p(z_1-1) + 1)^2$ and $z_1 = (p(z_0-1) + 1)^1$? AI: Here's a more direct solution: You have two attempts not to go extinct. Each succeeds if a) a descendant is produced with probability $p$ and b) that descendant's branch survives with probability $q$. So your survival probability $q$ must satisfy $$q=(pq)^2+2pq(1-pq)=pq(2-pq)\;.$$ One solution is $q=0$, the other is $q=(2p-1)/p^2$. The crossover occurs when the two solutions coincide, i.e. at $p=1/2$. For $p\le1/2$, the survival probability is $0$ (which makes sense, since in that case the expected number of descendants is $\le1$), whereas for $p\gt1/2$ it is $(2p-1)/p^2$, so the extinction probability is $1-(2p-1)/p^2=(p^2-2p+1)/p^2=((1-p)/p)^2$.
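To see the formula in action, here is a small Monte Carlo sketch of my own (the parameter choice $p=0.7$ is hypothetical) that simulates the Binomial$(2,p)$ branching process and compares the empirical extinction frequency with $((1-p)/p)^2$:

    import random

    def extinct(p, max_gen=200, cap=10**6):
        """Simulate one Binomial(2, p) branching process from one individual;
        return True if the line dies out within max_gen generations."""
        pop = 1
        for _ in range(max_gen):
            if pop == 0:
                return True
            if pop > cap:            # population exploded; treat as survival
                return False
            pop = sum(1 for _ in range(2 * pop) if random.random() < p)
        return False

    p = 0.7
    trials = 20000
    est = sum(extinct(p) for _ in range(trials)) / trials
    print("simulated:", est, " formula ((1-p)/p)^2 =", ((1 - p) / p) ** 2)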
H: $\alpha < \beta$ implies that $\gamma+\alpha<\gamma+\beta$ and $\alpha+\gamma\le\beta+\gamma$ for ordinals This is an exercise from Kunen's book. Show that $\alpha < \beta$ implies that $\gamma+\alpha<\gamma+\beta$ and $\alpha+\gamma\le\beta+\gamma$. Give an example to show that the "$\le$" cannot be replaced by "$<$". Also show: $$\alpha \le \beta \rightarrow \exists!\delta (\alpha+\delta=\beta).$$ What I've tried: I can give an example showing that the "$\le$" cannot be replaced by "$<$": $1+\omega = 2+\omega=\omega$, where $\alpha=1$, $\beta=2$ and $\gamma=\omega$. I can also see that these two inequalities are obviously right, but I cannot see how to use the language of sets to show them. For the last question, I want to let $\delta=\{\gamma:\alpha+\gamma\le\beta\}$ to prove it. Am I right? Could anybody help me? Thanks ahead:) AI: $\alpha<\beta$ $\Rightarrow$ $\gamma+\alpha<\gamma+\beta$ $\alpha<\beta$ means: if $A$ has ordinal type $\alpha $ and $B$ has ordinal type $\beta$, then $A$ is isomorphic to some initial segment $B_a$ of the set $B$. Using the embedding $f \colon A\to B$, can you obtain a similar embedding for sets of ordinal types $\gamma+\alpha$ and $\gamma+\beta$? Note: If you work with von Neumann's definition of ordinals, you could use $A=\alpha$, $B=\beta$. (For this construction, you have $\alpha\le\beta$ $\Leftrightarrow$ $\alpha\subseteq\beta$ and $\alpha<\beta$ $\Leftrightarrow$ $\alpha\subsetneqq\beta$; moreover you know that $\alpha$ is an initial segment of $\beta$.) A different solution. Suppose we already have shown the second part. Then $\alpha<\beta$ implies that there is $\delta>0$ such that $\beta=\alpha+\delta$. And then $\gamma+\beta=\gamma+\alpha+\delta>\gamma+\alpha$. Transfinite induction: The starting point of the induction is $\beta=\alpha+1$ (the smallest ordinal greater than $\alpha$). We have $\gamma+\alpha<\gamma+\alpha+1$. Inductive step - successor ordinal: We combine $\gamma+\alpha<\gamma+\beta$ and $\gamma+\beta<\gamma+\beta+1$. Inductive step - limit ordinal: If $\beta=\sup_{i\in I}\beta_i$ and $\gamma+\alpha<\gamma+\beta_i$ for each $i\in I$, then $\gamma+\alpha<\gamma+\beta$. (This was induction on the well-ordered set $\{\beta\text{ is an ordinal}; \beta>\alpha\}$.) $\alpha\le\beta$ $\Rightarrow$ $(\exists\delta) \beta=\alpha+\delta$ You know that $\alpha\subseteq\beta$. What about trying the ordinal type of the well-ordered subset $\beta\setminus\alpha$ for $\delta$? (If you work with the naive definition of ordinals and their inequality, you'll have $B\setminus A$ instead, where $A$ and $B$ have ordinal types $\alpha$ and $\beta$, respectively.) Transfinite induction: Similar to the first part. We will use $\alpha+\sup_{i\in I}\beta_i=\sup_{i\in I}(\alpha+\beta_i)$. (Right-continuity of the addition.) $\alpha\le\beta$ $\Rightarrow$ $\alpha+\gamma\le\beta+\gamma$ One simple solution seems to be: Fix $\alpha$, $\beta$ and show this by transfinite induction on $\gamma$. It is clear for $\gamma=0$. For successor ordinals you basically need to know that $\alpha\le\beta$ $\Rightarrow$ $\alpha+1\le\beta+1$. The inductive step for limit ordinals requires showing that $\alpha_i\le \beta_i$ $\Rightarrow$ $\sup_{i\in I}\alpha_i \le \sup_{i\in I}\beta_i$. (Which is easy if you're working with von Neumann's definition: the supremum is just the union of the ordinals.) The first two parts seem to be doable by transfinite induction, too (if someone prefers such a solution).
H: Why is $\int_{-\infty}^{\infty} f(x)\sin(tx)dx$ continuous? This is embarrassing. I asked this question several months ago: Let $f$ be a Lebesgue integrable function in $\mathbb{R}$ ($\int_{\mathbb{R}}|f| < \infty$). Let: $$g(t) := \int_{-\infty}^{\infty} f(x)\sin(tx)dx.$$ Show that $g$ is continuous and that: $$ \lim_{|t| \rightarrow \infty} g(t) = 0. $$ I wrote there that I proved that $g$ is continuous, but I can't for the life of me remember my proof, come up with a new one, or find my notes. Help? AI: Show that for each $t$ and each sequence $\{t_n\}$ which converges to $t$ we have $g(t_n)\to g(t)$. To see this, define $F_n(x):=f(x)\sin(t_nx)$ and $F(x)=f(x)\sin(tx)$. By continuity of $y\mapsto \sin y$, we have $F_n(x)\to F(x)$ for each $x$. We have $|F_n(x)|\leq |f(x)|$ and $f$ is integrable. Hence we can apply the dominated convergence theorem.
H: Prove that: $\lim_{n\to\infty} (f(n+1) - f(n)) = \lim_{x\to\infty} f'(x) $ I conjecture that under certain conditions a differentiable function satisfies the following equality: $$\lim_{n\to\infty} (f(n+1) - f(n)) = \lim_{x\to\infty} f'(x) $$ However, I'm not sure yet what exactly those conditions are, so as to know precisely where I may apply this rule and where not. If you take a look at my posted problem here you'll immediately notice that this rule applies in that case. I'd really appreciate your help in clarifying this. AI: I believe that the minimal assumptions are: $f \colon [0,+\infty) \to \mathbb{R}$ is differentiable; $\lim_{x \to +\infty} f'(x)$ exists. Then you easily check that $$\lim_{n \to +\infty} f(n+1)-f(n)=\lim_{x \to +\infty} f'(x),$$ since you can apply Lagrange's theorem (the mean value theorem): $$f(n+1)-f(n)=f'(\xi_n)$$ for a suitable $\xi_n \in (n,n+1)$. I think that the two limits are not equivalent to each other, since it may be impossible to control $f'$ by using the values of $f$ at discrete points.
H: Factoring for extremely large numbers that are a power of 2. This is a variation of this question. I want to find the number of factors of a given large integer that I already know to be a power of 2. Given that the number is a power of 2, does that help by eliminating most scenarios? E.g.: factors other than 1 cannot be odd; at least one number of a factor pair has to be a power of 2 itself. Question: What other properties do the powers of 2 have that I can use to find factors more efficiently? How can I represent this in the form of an equation or function? AI: If you already know the number is a power of 2, then all the factors are also powers of 2. So, if $n=2^k$, then the factors are $1, 2, 2^2, \dots, 2^k$, and there are exactly $k+1$ of them.
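In code this is immediate; for instance in Python (my own illustration, not from the answer), the divisor count of $n=2^k$ is just $k+1$, which for a power of two coincides with the bit length of $n$:

    n = 2 ** 100
    assert n & (n - 1) == 0        # confirm n really is a power of two
    print(n.bit_length())          # 101 divisors: 1, 2, 4, ..., 2**100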
H: What is the cokernel of this map? Consider the matrix $$\begin{pmatrix} -12 & 6 &0\\ 58 & 34 & 18 \\ 18 & 12 & 6\end{pmatrix}$$ The problem asks me to decompose the kernel and cokernel of this matrix (regarded as a linear map) into cyclic $\mathbb{Z}$-modules (abelian groups). I immediately found the one-dimensional kernel, but I do not know how to deal with the cokernel. It should have rank one as well, but it might also contain a direct sum of finite cyclic groups. Since this question has appeared in all kinds of prelim tests I feel there must be a universal way to solve it. AI: The universal way to solve this kind of problem is provided by the Smith normal form.
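To see the normal form explicitly, here is a sketch using SymPy (assuming a SymPy version that ships smith_normal_form); computing by hand, the invariant factors should come out as $2$, $6$, $0$, so the cokernel would be $\mathbb{Z}/2 \oplus \mathbb{Z}/6 \oplus \mathbb{Z}$ and the kernel $\mathbb{Z}$:

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    A = Matrix([[-12, 6, 0], [58, 34, 18], [18, 12, 6]])
    # gcd of entries = 2, gcd of 2x2 minors = 12, det = 0,
    # so the invariant factors should be 2, 12/2 = 6, and 0.
    print(smith_normal_form(A, domain=ZZ))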
H: What is the relationship between $\pi_{2}\overline{X}$ and $\pi_{2}(X)$? From a Harvard qualifying exam, 1990. Consider the space $X=\mathbb{S}^{1}\vee \mathbb{S}^{2}$; equivalently, view it as a sphere with its north and south poles connected. I was asked: 1) for the relationship between $\pi_{2}X$ and $\pi_{2}\overline{X}$, where $\overline{X}$ is $X$'s universal cover; 2) to calculate $\pi_{2}(X)$. I think $\overline{X}$ is just $\mathbb{R}^{2}\vee \mathbb{R}^{1}$, and its second homotopy group should be $0$. But I do not see any nontrivial relationship between $\pi_{2}X$ and $\pi_{2}\overline{X}$, for the deck transformation argument does not extend to spheres (as opposed to loops). For the second question, my guess is $\pi_{2}(X)=\mathbb{Z}$; again I need a proof. Does the relationship $$\pi_{2}(X\vee Y)=\pi_{2}(X)\oplus \pi_{2}(Y)$$ hold, in analogy with fundamental groups? I do not see a nontrivial fibration $\overline{X}\rightarrow X$ that would let me use the long exact sequence of homotopy groups. So I ask here. AI: I think your mistake is the following: the universal cover of $\mathbb S^2$ is not $\mathbb R^2$. It is just $\mathbb S^2$ because $\mathbb S^2$ is already simply connected.
H: How do I prove the existence of infinite union in ZFC? Given an infinite set of sets $A$, how can I prove in ZFC that the union of all the elements of $A$ exists? AI: It is the axiom of union which asserts this. The axiom states that if $A$ is a set, then there exists a set $B$ such that $B=\bigcup A$, that is to say: for every $x$, $x\in B$ if and only if there exists $y$ such that $y\in A$ and $x\in y$.
H: Hölder continuity and uniformly convergent subsequences Let $\alpha \in (0,1]$. A function $f: [0,1]\rightarrow \mathbb{R}$ is defined to be $\alpha$-Hölder continuous if $$ N_{\alpha}(f)=\sup\{ \frac{|f(x)-f(y)|}{|x-y|^\alpha} : x,y\in[0,1] \ \ \ \ x\neq y \} < \infty $$ (a) Suppose $\{f_n\}$ is a sequence of functions from $[0,1]$ to $\mathbb{R}$ such that for all $n=1, 2, 3, \cdots $ we have $N_{\alpha}(f_n)\leq 1$ and $|f_n(x)|\leq 1$ for all $x\in [0,1]$. Show that $f_n$ has a uniformly convergent subsequence. (b) Show that (a) is false if the condition "$N_{\alpha}(f_n)\leq 1$" is replaced by "$N_{\alpha}(f_n) < \infty$". End of question. My problem is that part (b) confuses me. For part (a), $f_n$ is uniformly bounded on the compact set $[0,1]$ and equicontinuous, hence the Arzelà–Ascoli theorem can be applied and the conclusion follows. The way I showed equicontinuity is this: given $\epsilon >0$, take $\delta < \epsilon^{\frac{1}{\alpha}}$; hence if $|x-y|<\delta$, then $|f_n(x)-f_n(y)|<\epsilon$ for all $n\in \mathbb{N}$ and $x\in[0,1]$. Now suppose that in part (b) we let $N_{\alpha}(f_n)=M<\infty$; isn't it true that if we take $\delta< (\frac{\epsilon}{M})^{\frac{1}{\alpha}}$ then equicontinuity still holds and the Arzelà–Ascoli theorem still applies? Why not? What is the example in (b)? I am really confused. Hope someone can help me with this. Thanks. AI: I think you misinterpreted (b): if $N_\alpha(f_n)\le M$ for all $n$ with a single constant $M$, then after rescaling this is just a special case of (a), and your argument works. But in (b) each $f_n$ is only required to satisfy $N_\alpha(f_n)<\infty$ individually; the constants $M_n=N_\alpha(f_n)$ may blow up with $n$, and then the family need not be equicontinuous. Consider a sequence of functions $f_n$ such that $f_n(k2^{-n})=0$ for each $k$ and $f_n((2k+1)2^{-n-1})=1$, with linear interpolation. Then $\lVert f_n\rVert_{\infty}=1$ and $N_{\alpha}(f_n)=2^{(n+1)\alpha}$. No subsequence can be uniformly equicontinuous (hence none can converge uniformly). Indeed, let $\{n_k\}$ be an increasing sequence of integers. The condition of equicontinuity fails for $\varepsilon=1/2$: for $f_{n_k}$ we can take at most $\delta=2^{-(n_k+1)}$, but that won't work for $f_{n_{k+1}}$.
H: Transformation under rotation of Riemann sphere Suppose the Riemann sphere $S$ is rotated by the angle $\phi$ round the diameter whose end points have $a,-1/\bar{a} $ (which have antipodal preimages) as stereographic projections. Suppose moreover, $z$ and $\zeta$ are stereographic projections of points corresponding to each other under this rotation. What is the relationship between $z$ and $\zeta$ ? AI: The given rotation of $S$ appears as a Moebius transformation $T$ of the closed complex plane $\bar{\mathbb C}$. This $T$ will have the complex numbers $a$ and $-1/\bar a$ as fixed points. Therefore $z$ and $\zeta:=T(z)$ are related by a formula of the form $${\zeta -a\over \zeta+1/\bar a}=\lambda\ {z -a\over z+1/\bar a}\ ,\qquad(*)$$ where $\lambda$ is a certain complex constant. Since neither of the two fixed points is attracting one necessarily has $\lambda=e^{i\alpha}$. In order to determine $\alpha$ we look at things in the immediate neighborhood of $a$. When $z\to a$ then the image point $\zeta=T(z)$ approaches $a$ as well, and we have $$T'(a)=\lim_{z\to a}{\zeta -a\over z-a}= e^{i\alpha}\lim_{z\to a}\ {\zeta+1/\bar a\over z+1/\bar a}=e^{i\alpha}\ .$$ As the stereographic projection is conformal we should have $\arg T'(a)=\phi$, and this leads to the conclusion that in fact $\alpha=\phi$. Of course you now can solve $(*)$ for $\zeta$ in terms of $z$.
H: Does $L=\{(\langle M \rangle,k)| M$ is a TM and $\exists w\in \sum^*$ s.t when $M$ runs on $w$, $M$ visits some state at least $k$ times$\} \in R$? I'd like your help with understanding how come the following language is decidable (in $R$): $L=\{(\langle M \rangle,k)| M$ is a TM and $\exists w\in \sum^*$ such that when $M$ runs on $w$, $M$ visits some state at least $k$ times$\}$ I tried to think of a Turing machine which decides the problem, but I didn't reach any smart conclusions. AI: The language really is in $R$. To see whether a TM with $N$ states visits one of them at least $k$ times, you only need to run it for $N \cdot k$ steps: by the pigeonhole principle, if some state is visited at least $k$ times, this already happens within the first $N \cdot k$ steps. In $N \cdot k$ steps only the first $N \cdot k$ tape cells can be read, so only words of length at most $N \cdot k$ need to be considered, and there are only finitely many such words. In the end, roughly $|\Sigma|^{N \cdot k}\cdot N \cdot k$ simulation steps suffice to decide whether $(\langle M \rangle,k)$ belongs to $L$.
H: Is this CRC calculation correct? I am currently studying for an exam and trying to check a message (binary) for errors using a polynomial. I would like somebody to verify that my results below are (in)valid. Thanks. Message: 11110101 11110101. Polynomial: $x^4 + x^2 + 1$. Divisor (derived from polynomial): 10101. Remainder: 111. Result: there is an error in the above message? Also, I had a link to an online calculator that would do the division but can't relocate it; any links to a calculator would be greatly appreciated. Thanks. AI: Carrying out the long division of 1111010111110101 by 10101 over GF(2), with the running remainder on the left ($\oplus$ is XOR):

    11110 ⊕ 10101 = 01011, shift in the next bit (1) → 10111
    10111 ⊕ 10101 = 00010, shift in 0, 1, 1        → 10011
    10011 ⊕ 10101 = 00110, shift in 1, 1           → 11011
    11011 ⊕ 10101 = 01110, shift in 1              → 11101
    11101 ⊕ 10101 = 01000, shift in 0              → 10000
    10000 ⊕ 10101 = 00101, shift in 1, 0           → 10110
    10110 ⊕ 10101 = 00011, shift in the last bit (1) → 00111

The quotient is 110010111010 and the remainder is 111, so you are right! There is an error in the message!
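Since the online calculator link was lost, here is a small stand-in in Python (my own sketch, not an existing tool) that performs the GF(2) polynomial division and reproduces the remainder:

    def crc_remainder(bits, divisor):
        """Polynomial division over GF(2); bits and divisor are '0'/'1' strings."""
        msg = [int(b) for b in bits]
        div = [int(b) for b in divisor]
        for i in range(len(msg) - len(div) + 1):
            if msg[i]:                       # leading coefficient is 1: subtract
                for j, d in enumerate(div):
                    msg[i + j] ^= d
        return ''.join(map(str, msg[-(len(div) - 1):]))

    print(crc_remainder('1111010111110101', '10101'))  # -> 0111, i.e. non-zero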
H: the measure of a set.. $C(n)$ is defined as the set that remains after removing from $[0, 1]$ an open interval of length $1/n$ centered at $1/2$, then an open interval of length $1/n^2$ from the center of each of the two remaining intervals, then open intervals of length $1/n^3$ from the centers of each of the remaining $4=2^2$ intervals, and so on. How would I show that the measure of $C(n)$ is $\frac{n-3}{n-2}$? Any hints greatly appreciated. By drawing out examples for small $n$, I would think that the measure is $0$ since we are continuously removing open intervals and keeping the endpoints. I also see that $C(3)$ has measure $0$, since $C(3)$ is the Cantor set. AI: After $k\geqslant0$ steps of the procedure, there remain $2^k$ disjoint intervals (and not $k+1$, as you seem to believe) and one removes an interval of length $1/n^{k+1}$ from each of them. If $n\gt2$, there is no overlap of the pieces that are removed, hence the total length of the removed intervals is $$ \sum_{k=0}^{+\infty}\frac{2^k}{n^{k+1}}=\frac1{n-2}, $$ and therefore $m(C(n))=1-\frac1{n-2}=\frac{n-3}{n-2}$.
H: How to integrate $\frac{1}{\sqrt{1+x^2}}$ using substitution? How do you integrate $\frac{1}{\sqrt{1+x^2}}$ using the following substitution? $1+x^2=t$ $\Rightarrow$ $x=\sqrt{t-1} \Rightarrow dx = \frac{dt}{2\sqrt{t-1}}$... Now I'm stuck. I don't know how to proceed using the substitution rule. AI: By the substitution you suggested you get $$ \int \frac1{2\sqrt{t(t-1)}} \,dt= \int \frac1{\sqrt{4t^2-4t}} \,dt= \int \frac1{\sqrt{(2t-1)^2-1}} \,dt $$ Now the substitution $u=2t-1$ seems reasonable. However, your original integral can also be solved by $x=\sinh t$ and $dx=\cosh t\, dt$, which gives $$\int \frac{\cosh t}{\cosh t} \, dt = \int 1\, dt=t=\operatorname{arcsinh} x = \ln (x+\sqrt{x^2+1})+C,$$ since $\sqrt{1+x^2}=\sqrt{1+\sinh^2 t}=\cosh t$. See hyperbolic functions and their inverses. If you are comfortable manipulating the hyperbolic functions, then $x=a\sinh t$ is worth trying whenever you see the expression $\sqrt{a^2+x^2}$ in your integral ($a$ being an arbitrary constant).
H: $HAM$-$NO$-$HAM$ problem: How do I show a reduction from the $HAMcircuit$ problem to the following problem? Consider the following language: $HAM\text{-}NO\text{-}HAM=\{(G_1,G_2)\mid G_1, G_2$ are undirected graphs, $G_1$ has a Hamiltonian circuit and $G_2$ doesn't have one$\}$. I need to decide whether it's in $P$, $NP$, $coNP$ or none of the above. If I show reductions from $HAMcircuit$ and from $NO$-$HAMcircuit$ I will get that $HAM$-$NO$-$HAM$ is both $NPC$ and $coNPC$. What is the best way to show a reduction from $HAMcircuit$ to $HAM$-$NO$-$HAM$? I want somehow to copy the input graph, and then I would get a graph which does or doesn't have a Hamiltonian circuit, but I need to copy it again so it would fit the condition for the second graph. Any suggestion? AI: In order to reduce from HAM (or noHAM), just take the input graph and pair it with a fixed graph that you already know not to have (respectively, to have) a Hamiltonian circuit. However, what these reductions prove is just that the problem is (co)NP-hard, not that it is complete. To be complete, it must both be hard and actually be in (co)NP. And those same reductions show that your problem cannot be in NP or in coNP unless NP=coNP, which is unknown but believed to be false.
H: Floating point binary arithmetic question I'm taking a basic class on computer architecture and we delve into floating point arithmetic. I'm not looking for someone to solve my homework; I'm actually just going through old exams and I'm kind of stuck on one exercise. So here it goes: 1,010010*2^(-9) - 1,000101*2^(-6). Here's what I tried: I brought both terms to the same exponent, 1010,010*2^(-6) - 1,000101*2^(-6), and subtracted, with the following result: 1001,000101*2^(-6). The problem now is, none of this checks out with the possible answers I was given. (The original post included a picture of the question and the possible answers.) Any idea what I'm doing wrong here? AI: You shifted the comma the wrong way: moving the comma three places to the right must be compensated by lowering the exponent by $3$, so $1{,}010010\cdot 2^{-9}$ is not $1010{,}010\cdot 2^{-6}$. Instead, shift the comma in the other number: since $2^{-6}=1000\cdot 2^{-9}$ in base 2, you have $1{,}000101\cdot 2^{-6} = 1000{,}101\cdot 2^{-9}$, and then you can subtract.
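One way to double-check the arithmetic is to do it exactly with rationals; a small Python sketch of my own (not from the answer):

    from fractions import Fraction

    def bin_to_frac(mantissa, exp):
        """Parse a binary mantissa like '1.010010' times 2**exp exactly."""
        intpart, frac = mantissa.split('.')
        val = Fraction(int(intpart, 2))
        for i, b in enumerate(frac, start=1):
            val += Fraction(int(b), 2 ** i)
        return val * Fraction(2) ** exp

    a = bin_to_frac('1.010010', -9)
    b = bin_to_frac('1.000101', -6)
    print(a - b)         # exact rational result
    print(float(a - b))  # about -0.0143432617; normalized, this is
                         # -1.1101011 * 2**-7 in binary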
H: is there a nice way to find the fourier transform of... I am looking for a nice way to calculate the FT of the following function: $f(x)=\biggl(\sum_{n=1}^{c}~a_n~e^{-\frac{i}{2}~x~b_n}\biggr)^d$, where $d,c>0$, the $a_n$ and $b_n$ are real coefficients, strictly monotonically increasing in $n$, $x$ is the free variable, and $c$ might go to $\infty$. I used Mathematica to calculate it, but without a specific value of $d$, and when $c\to\infty$, there is no way the program will do it. Any helpful ideas? Thanks!! AI: If $d$ is a positive integer, then you can use the multinomial theorem to expand your expression and then take the Fourier transform term by term, with the appropriate condition on $\sum b_k$.
H: Calculating powers of 2 on a 2D grid without factoring. Consider the following 2D infinitely large grid, where the dots indicate that the grid continues to infinity:

1 2 3 4 5 6 7 8 9 10 ...
2 4 6 8 10 12 14 16 18 20 ...
3 6 9 12 15 18 21 24 27 30 ...
4 8 12 16 20 24 28 32 36 40 ...
5 10 15 20 25 30 35 40 45 50 ...
6 12 18 24 30 36 42 48 54 60 ...
7 14 21 28 35 42 49 56 63 70 ...
8 16 24 32 40 48 56 64 72 80 ...
9 18 27 36 45 54 63 72 81 90 ...
10 20 30 40 50 60 70 80 90 100 ...
.. ... ... ... ... ... ... ... ... ... ...

Column and row numbers start at 1 and continue on to infinity. The value at each cell is the product of x and y: (x, y) = (x * y). Now consider all the numbers on this grid that are a power of 2, e.g. 2, 4, 8, etc. Each number appears more than once depending on how many factorizations it has, e.g. 16 = (1, 16), (16, 1), (2, 8), (8, 2), (4, 4). I am not sure whether the answer to my question lies in number theory or graph theory, but here is the pattern I am looking for: given some random (x, y) coordinate, where both x and y are extremely large integers, I want to find out if a power of 2 exists on any diagonal cell of (x, y), where a diagonal cell is any (x +/- k, y +/- k) for integer k. Since the grid is infinite in size, I cannot loop through each value on the diagonal. (The original post included an image highlighting all powers of 2 in yellow and the diagonal cells in gray.) AI: There is a power of $2$ on a diagonal cell of $(x,y)$ if and only if the binary representation of $|x-y|$ consists of any number of $1$s (could be none) followed by any number of $0$s, or the binary representation of $x+y$ contains at most two $1$s. For the main diagonals $(x+k,y+k)$ with $k\in\mathbb Z$, we want $(x+k)(y+k)=2^n$ with $n\in\mathbb N_0$, which implies that $x+k=2^{n_x}$ and $y+k=2^{n_y}$ with $n_x,n_y\in\mathbb N_0$. Without loss of generality assume $x\ge y$. Subtracting the two equations yields $x-y=2^{n_x}-2^{n_y}$. Thus $x-y$ is the difference of two powers of two; its binary representation consists of $n_x-n_y$ $1$s followed by $n_y$ $0$s. For the minor diagonals $(x+k,y-k)$ with $k\in\mathbb Z$, we want $(x+k)(y-k)=2^n$ with $n\in\mathbb N_0$, which implies that $x+k=2^{n_x}$ and $y-k=2^{n_y}$ with $n_x,n_y\in\mathbb N_0$. Adding the two equations yields $x+y=2^{n_x}+2^{n_y}$. Thus $x+y$ is the sum of two powers of two; its binary representation either has one $1$, if $n_x=n_y$, or otherwise two $1$s, in the $n_x$-th and $n_y$-th digits.
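The characterization above turns into a constant-time test on the binary representations. Here is a Python sketch of my own (the function names are mine), together with a brute-force cross-check on a small corner of the grid:

    def is_pow2(v):
        return v > 0 and v & (v - 1) == 0

    def power_of_two_on_diagonal(x, y):
        d = abs(x - y)
        if d:
            d >>= (d & -d).bit_length() - 1    # strip trailing zeros
        ones_then_zeros = (d & (d + 1)) == 0   # remaining bits all 1s (or d was 0)
        at_most_two_ones = bin(x + y).count('1') <= 2
        return ones_then_zeros or at_most_two_ones

    # Brute force over a window of k large enough for this range of x, y.
    def brute(x, y):
        main = any(is_pow2((x + k) * (y + k))
                   for k in range(1 - min(x, y), 2 * (x + y)))
        anti = any(is_pow2((x + k) * (y - k)) for k in range(1 - x, y))
        return main or anti

    assert all(power_of_two_on_diagonal(x, y) == brute(x, y)
               for x in range(1, 40) for y in range(1, 40))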
H: Can we descend field extensions of prime degree of number fields to number fields of the same degree Let $K$ be a number field and let $p$ be a prime number. Let $L$ be a degree $p$ field extension of $K$. Does there exist a degree $p$ field extension $M$ of $\mathbf{Q}$ such that $$M\otimes_{\mathbf{Q}} K = L?$$ If not, when does this happen? Does it happen almost always? I would be willing to hear about any positive thing that can be done in this direction. I'm thinking about Weil restrictions of varieties. That's where this question came up. AI: The question can be reformulated as follows. Let $L$ be a number field and $p$ a prime. Let $K$ be a subfield of $L$ of index $p$. Does there exist a subfield $M$ of $L$ of degree $p$ such that $MK = L$? (One direction of the equivalence is clear; in the other direction, if $MK = L$ then the natural map $M \otimes_{\mathbb{Q}} K \to MK$ is an isomorphism by comparing dimensions.) The answer is not always. If $L$ is Galois over $\mathbb{Q}$ with Galois group $C_4$, then $L$ has a unique quadratic subfield, which we'll select as $K$, so it cannot admit a second quadratic subfield $M$. As an explicit example take $L = \mathbb{Q}(\zeta_5)$ and $K = \mathbb{Q}(\sqrt{5})$. More generally, suppose that $L$ is Galois over $\mathbb{Q}$ with Galois group $G$ and write $|G| = pn$. Then by Cauchy's theorem $G$ has an element of order $p$, so the fixed field of this element is a suitable choice for $K$. In order for $M$ to exist, $G$ needs to have subgroups of index $p$, but it may not have any. For example, suppose $L$ has full Galois group $S_n$ (this is the generic case in the sense that the splitting field of a generic irreducible polynomial of degree $n$ has this property) and $n \ge 5$. By the simplicity of $A_n$ it follows that the minimal index of a subgroup of $S_n$ which is not $A_n$ is $n$, so if $p < n$ is an odd prime then $S_n$ has no subgroups of index $p$ and consequently $L$ has no subfields of degree $p$. On the other hand, we have the following corollary of the fundamental theorem of Galois theory. Proposition: Let $M, K$ be subextensions of a Galois extension $L$ (we do not name the base field as it is irrelevant here) with Galois group $G$. Then $MK = L$ if and only if $\text{Gal}(L/M) \cap \text{Gal}(L/K) = \{ e \}$ (where the intersections take place in $G$). Thus if $|G| = pn$ where $p \nmid n$, then to find $M$ it is necessary and sufficient to find a subgroup of $G$ of index $p$. For example, this is possible if $n = q^k$ for some prime $q \neq p$ by the first Sylow theorem. More generally, it is necessary and sufficient to find a subgroup of index $p$ with trivial intersection with $\text{Gal}(L/K)$ (hence to write $G$ as an internal Zappa-Szép product of a cyclic group of order $p$ and some other subgroup).
H: How to solve the inequality $2^x\ge a+bx$? Let $a$ and $b$ be real constants where $b$ is positive. What is the smallest real number $x_0>0$ such that $$2^x \ge a+bx$$ holds from $x_0$ onward? Here is what I have tried. Suppose $a=0$. Fix a positive integer $n$. Then $2^x=e^{(\log 2)x}=\left(e^{\frac{\log 2}{n}x}\right)^n\ge \left(1+\frac{\log 2}{n}x\right)^n\gt\left(\frac{\log 2}{n}x\right)^n$. The inequality $\left(\frac{\log 2}{n}x\right)^n\ge bx$ is solved as $x\ge \left[\frac{b}{(\frac{\log 2}{n})^n}\right]^{1/(n-1)}=\frac{b^{1/(n-1)}}{(\log 2)^{n/(n-1)}} n^{n/(n-1)}$. Therefore $$x_0\ge \inf_n \frac{b^{1/(n-1)}}{(\log 2)^{n/(n-1)}} n^{n/(n-1)}.$$ Can one do better than this or simplify this expression? AI: You're talking about a transcendental equation, so the solutions (if any) can be determined only if the parameters $a, b$ take particular values, that is, when the equation is a "nice" one and leads to some "trick" for expressing its roots in terms of elementary functions. Other than that, you can only use numerical methods to find arbitrary-precision approximations. Addendum: of course, the answer using Lambert's W function, although simplifying matters, does not change anything at all: just define the $\diamond$-function to be $\diamond(a) := F^{-1}(a)$, where $F(t) = 2^t - t$, and have fun writing a $\mathtt{Mathematica}$ package. No offense to anyone; the point of this is that the solution to $2^x = a + bx$ is not expressible through elementary functions, since the OP seemed to be wanting an explicit $x_0$.
H: What rule do I use to differentiate this function? I am new to calculus, and thought I had my head round the product, quotient and chain rules, but I can't work out how to tackle this: $$ \frac 1{x(x+1)^2} $$ Apparently, the first step of the solution is $$ f'(x) = - \frac {(x+1)^2 + 2x(x+1)} { x^2(x+1)^4} $$ but I can't work out how this was arrived at. The denominator of the derivative is the square of the denominator of the function, which suggests use of the quotient rule, but by what steps was the numerator of the derivative arrived at? I expect it is my poor algebra and manipulation techniques letting me down. If anyone can recommend a good tool for improving these (as well as answering my question!) I'd be grateful. AI: Apply the quotient rule with $u=1$, $v=x(x+1)^2$; then $$\left(\frac 1 v\right)' = \frac{(1)'v-1v'}{v^2} = \frac{0-v'}{v^2}$$ because $(1)'=0$. Now for the derivative of $v$ in the numerator, use the product rule.
H: Cauchy's Integral Theorem I am trying to understand Cauchy's Integral Theorem which states $$ \int_\gamma f(z)\,dz = 0. $$ If function $f(z)$ is holomorphic (has no singularities) within the area contained by the contour $\gamma$. I understand the proof comes from Green's theorem, but I don't understand conceptually why this is true. What exactly does the complex contour integral measure? It's not area, is it? AI: No, it's not area. You can think of it this way: for each infinitesimal segment of the curve, you multiply (as complex numbers) the displacement $dz$ by the function $f(z)$, and then you add those all up. This is made rigorous by a definition in terms of Riemann sums. They are really the same Riemann sums that you saw in calculus, but here there is no interpretation in terms of area.
H: Submodules of direct sums of simple modules I'm reading these online notes on representation theory, and I don't fully understand this: Isn't $V\oplus(\bigoplus_{i\in I}S_i)$ a direct sum by definition, so we'd get $I=\{1,\cdots,n\}$? Can we assume the $S_i$ are pairwise disjoint submodules of $U$ based on how the lemma is phrased (or does that follow necessarily from them each being simple)? Intuitively, I feel the condition should be that $W$ as an external direct sum maps into the internal direct sum situated within $U$ in an obvious way, and we choose $I$ maximal so that this map has no kernel. Is this understanding correct? AI: Your notes have a typo: the proof should begin "Choose a subset $I$ maximal subject to the condition that $V + \sum S_i$ is a direct sum." Of course, as you've noted, $V\oplus \bigoplus S_i$ is a direct sum by definition, but it might be considerably bigger than $U$, as Mariano's example shows. We can assume that the $S_i$'s pairwise intersections are either $0$ or some $S_i$. If the latter case happens, $S_i \cap S_j=S_i$, we know $S_i=S_j$, since otherwise $S_i$ would be a proper submodule of $S_j.$ Regarding your comments, the definition of a direct sum is not that the summands have pairwise trivial intersections, but that each element of the sum can be written uniquely in terms of the summands. So $(e_1) + (e_2)=(e_1) \oplus (e_2)=V$ in some 2-dimensional vector space $V$, but $(e_1)+(e_2)+(e_1+e_2)=V$ while $(e_1)\oplus (e_2)\oplus (e_1+e_2)$ is three-dimensional and not in $V$ at all. Note, though, that these three modules have pairwise trivial intersection. On the other hand, if a sum is direct, this does imply the summands have trivial intersection: otherwise anything in any intersection could be written in at least two ways.
H: Determine $\alpha >0$ for which $\iint_Af(x,y)^\alpha dx \, dy < +\infty$ Let $A=\{(x,y)\in \mathbb R^2: 0<x<1, 0< y < \sqrt{x}\}$ and $f \colon A \to\mathbb R$ a continuous function s.t. $$ \frac{1}{x^2+y^2} \le f(x,y) \le\frac{2}{x^2+y^2} $$ for every $(x,y) \in A$. Determine the set of values $\alpha >0$ such that $$ \iint_Af(x,y)^\alpha dx \, dy < +\infty. $$ The function $t\mapsto t^\alpha$ is monotone (increasing) when $\alpha>0$. Therefore, using the hypothesis we get $$ \frac{1}{(x^2+y^2)^\alpha} \le f(x,y)^\alpha \le\frac{2^\alpha}{(x^2+y^2)^\alpha} $$ for every $(x,y) \in A$. Integrating we get $$ \iint_A \frac{dx \, dy}{(x^2+y^2)^\alpha} \le \iint_A f(x,y)^\alpha dx \, dy\le\iint_A\frac{2^\alpha \, dx \, dy}{(x^2+y^2)^\alpha} $$ If we pass to polar coordinates, we have $$ \iint_A \frac{1}{\rho^{2\alpha-1}} \le \iint_A f(\rho \cos{\vartheta},\rho \sin{\vartheta})^\alpha \rho \, d\rho \, d\vartheta \le \iint_A \frac{2^\alpha}{\rho^{2\alpha-1}} \, d\rho \, d\vartheta $$ Now we have to write the set $A$ using polar coordinates, but this is quite difficult. What can we do? I think that the first and the third integrals are improper in $0$ with respect to $\rho$. Therefore I think we should ask at least $2\alpha-1<1$, i.e. $\alpha<1$. I think $\alpha=1$ doesn't work: indeed, we have $$ \begin{split} \iint_A f(x,y) dx \, dy & \ge \iint_A\frac{\, dx \, dy}{(x^2+y^2)} \\ & = \int_0^1 dx \, \int_0^{\sqrt{x}} \frac{dy}{x^2+y^2} = \\ & = \int_0^1 dx \, \frac{1}{x^2}\int_0^{\sqrt{x}}\frac{dy}{1+(\frac{y}{x})^2} =\\ & = \int_0^1 \frac{1}{x}\arctan{\left( \frac{1}{\sqrt{x}}\right)}dx = +\infty \end{split} $$ because $$ \frac{1}{x}\arctan{\left( \frac{1}{\sqrt{x}}\right)} \sim_{x=0} \frac{c}{x} $$ whose integral in $0$ diverges. What do you think? Is it correct? How can we prove it formally? Thanks a lot. AI: If $\alpha <1$ we have $2\alpha<2$, and since $A$ is contained in a ball $B$ around the origin (e.g. of radius $\sqrt 2$), \begin{equation} \int_{A} f(x,y)^{\alpha} \le \int_{B} \dfrac{2^\alpha}{|X|^{2 \alpha}} < \infty. \end{equation} If $\alpha >1$ we have $2\alpha>2$, and considering the sector $B'=\{(x,y): 0<x<1,\ 0<y<x\} \subset A$ we have \begin{equation} \int_{A} f(x,y)^{\alpha} \ge \int_{B'} \dfrac{1}{|X|^{2 \alpha}} = \infty. \end{equation} For $\alpha=1$, your computation in the question already shows divergence; for one more idea see the question "Example that $u\in W^{1,2}$, but $u \notin W^{1,3}$".
H: Why does this function converge almost everywhere and not pointwise? I am working on the following question. I believe I am almost done, but I still have a hole in my solution: Let $\{r_k\}_{k=1}^{\infty}$ be a counting of $\mathbb{Q}$. For every $k \in \mathbb{N}$, let: $$ f_k(x):= \begin{cases} (x - r_k)^{-1/2} &\text{ if } r_k < x \leq r_k + 1 \\ 0 &\text{ else } \end{cases} $$ Prove that $\sum_{k = 1}^{\infty} 2^{-k}f_k$ converges almost everywhere on $\mathbb{R}$ to an integrable function, $f$, such that for every interval $(a, b)$, and every M, $$ m((a,b) \cap \{x:f(x) \geq M\}) > 0. $$ My question is: why does $\sum_{k = 1}^{\infty} 2^{-k}f_k$ converge to $f$ almost everywhere and not simply pointwise? This is what I have so far: $\frac{1}{\sqrt{x}}$ is integrable, hence $f$ is integrable (because its integral on $(0,1)$ equals 2, and $f$'s integral on $\mathbb{R}$ is $2 \times \sum_{k = 1}^{\infty} 2^{-k} < \infty$ by monotone convergence). For every $(a, b)$ there's a $k$ such that $r_k \in (a,b)$, and there is a $\delta$ such that for every $x \in (0, \delta)$, $\frac{1}{\sqrt{x}} > M\times 2^{-k}$, and from this the second claim follows. AI: The series need not converge everywhere; here is an enumeration for which it diverges at a point. Let $\{s_m:m\in\mathbb{N}\}$ be an enumeration of $\mathbb{Q}\setminus\{-32^{-m}:m\in\mathbb{N}\}$ (here $32^{-m}$ means $1/32^m$). Define an enumeration of $\mathbb{Q}$: $$ r_k= \begin{cases} -32^{-m}&\quad\text{ if }\quad k=2m\\ s_m&\quad\text{ if }\quad k=2m-1 \end{cases} $$ Then $$ f(0)=\sum\limits_{k=1}^\infty 2^{-k}f_k(0)\geq \sum\limits_{m=1}^\infty 2^{-2m}f_{2m}(0)= \sum\limits_{m=1}^\infty 2^{-2m}(0-r_{2m})^{-1/2}\boldsymbol{1}_{(r_{2m},r_{2m}+1]}(0) $$ $$ =\sum\limits_{m=1}^\infty 2^{-2m}(32^{-m})^{-1/2}= \sum\limits_{m=1}^\infty 2^{-2m}\cdot 2^{5m/2}= \sum\limits_{m=1}^\infty 2^{m/2}=+\infty. $$ Thus you see that for a particular enumeration of $\mathbb{Q}$ there are points $x$ (in our case $x=0$) at which the series $\sum\limits_{k=1}^\infty 2^{-k}f_k(x)$ diverges, so pointwise convergence everywhere cannot be expected. On the other hand, since the integral of the sum is finite (as you computed, by monotone convergence), the sum is finite almost everywhere, which is the most one can guarantee.
H: Definition of Grothendieck group I'm reading the Wiki article about the Grothendieck group. What's the reason we define $[A] - [B] + [C] = 0 $ rather than $[A] + [B] - [C] = 0 $ (or something else) for every exact sequence $0 \to A \to B \to C \to 0$? What is the property we obtain if we define it this way? I suppose it has something to do with exactness at $B$ but what? AI: To get a feel for this kind of relation, consider a short exact sequence $0 \rightarrow V_1 \rightarrow V_2 \rightarrow V_3 \rightarrow 0$ of finite-dimensional vector spaces over a field. What is the relation between their dimensions?
H: Is concave quadratic + linear a concave function? Basic question about convexity/concavity: is the difference of a concave quadratic function $f(X)$ of a matrix $X$ and a linear function $l(X)$ a concave function? I.e., is $f(X)-l(X)$ concave? If so (or if not), what are the required conditions to check for? AI: A linear function is both concave and convex (so here $-l$ is concave), and the sum of two concave functions is concave.
H: Evaluate $\lim\limits_{n\to \infty}\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{6n}$ Show that $$\lim_{n\to \infty}\left(\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{6n}\right)=\log 6$$ Here I need to use the definition of the integral, but I am having trouble with the range of integration. Please help. AI: Maybe it is intended that you mention Riemann sums explicitly. Rewrite our sum as $$\frac{1}{n}\left(\frac{1}{1+\frac{1}{n}} + \frac{1}{1+\frac{2}{n}} + \frac{1}{1+\frac{3}{n}}+\cdots +\frac{1}{1+\frac{5n}{n}} \right).$$ We recognize this as a (right) Riemann sum for the integral $$\int_0^5 \frac{dx}{1+x},$$ which has value $\log 6$. For imagine dividing the interval from $x=0$ to $x=5$ into equal subintervals of width $\frac{1}{n}$. The right ends of these subintervals are at $\frac{1}{n}$, $\frac{2}{n}$, $\frac{3}{n}$, and so on up to $\frac{5n}{n}$. So if $f(x)=\frac{1}{1+x}$, then our summands are precisely the function values at these right endpoints. The limit of the Riemann sum as $n \to\infty$ is therefore $\log 6$.
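A quick numerical check of my own that the sums indeed approach $\log 6$:

    from math import log

    def s(n):
        return sum(1 / k for k in range(n + 1, 6 * n + 1))

    for n in (10, 100, 10000):
        print(n, s(n))
    print("log 6 =", log(6))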
H: Proof strategy for a pointwise converging sequence of Riemann integrable functions that does not converge uniformly I am wondering about a proof strategy to show that a sequence of Riemann integrable functions which converges pointwise to a function may not actually converge uniformly to it. If it makes the argument simpler, I also know that the function $f$ to which the sequence converges pointwise is not Riemann integrable. Although, on second thought, is it possible to build such an argument without the knowledge that $f$ is not Riemann integrable? Any help would be much appreciated. AI: For $n\in\mathbb N$, let $f_n$ be the indicator function of the set $\{n,n+1,n+2,\ldots\}$, i.e. we have $$f_n(x)= \begin{cases} 1 &\text{if } x\in \{n,n+1,n+2,\ldots\}\\ 0 &\text{otherwise}\end{cases}$$ and note that each $f_n$ is Riemann-integrable on every bounded interval (with integral $0$), and that $f_n\to 0$ pointwise but not uniformly.
H: Show that $k\ln k \in \Theta(n)$ implies $k \in \Theta(n/\ln(n))$? It is exercise (3.2-8) from the Introduction to Algorithms book. I need help solving it. I am confused by the fact that there are two parameters, because usually one parameter is used. There is a related exercise which may be helpful. Thanks. AI: I wouldn't call this rigorous, but here's an idea: if $f \in\Theta(g)$, write $f \sim g$. So we have $n \sim k\ln k$ and want to show $n/\ln n \sim k$. Well, if $n \sim k\ln k$, then $\ln n \sim \ln (k \ln k) \sim \ln k + \ln \ln k \sim \ln k$. Note: this requires rigorous justification, and I'm not exactly sure it's right. So $n/ \ln n \sim k \ln k / \ln k \sim k$.
H: How to show x and y are equal? I'm working on a proof to show that $f: \mathbb{R} \to \mathbb{R}$ defined by $f(x) = x^3 - 6x^2 + 12x - 7$ is injective. Here is the general outline of the proof as I have it right now: Proof: For a function to be injective, whenever $x,y \in A$ and $x\neq y$, then $f(x) \neq f(y)$; i.e., where $A$, $B$ are finite sets, every two elements of $A$ must have distinct images in $B$, which also implies that there must be at least as many elements in $B$ as in $A$, so that the cardinality of $A$ is at most the cardinality of $B$. We shall prove the contrapositive: if $f(x) = f(y)$, then $x=y.$ Let $x^3 - 6x^2 + 12x - 7 = y^3 - 6y^2 + 12y - 7$. Then by addition and some algebra, we get $x(x^2 - 6x + 12) = y(y^2 - 6y + 12)$. This feels dumb to ask, but how do I continue to finally get the result that $x = y$? AI: Note that $w^3-6w^2+12w-8=(w-2)^3$. So $w^3-6w^2+12w-7=(w-2)^3+1$. So we want to show that if $(x-2)^3+1=(y-2)^3+1$ then $x=y$. Equivalently, we want to show that if $(x-2)^3=(y-2)^3$ then $x=y$. This is easy, the cube function is increasing. Remark: We can use the basic algebra of ordered fields to show that if $s^3=t^3$ then $s=t$. For $s^3-t^3=(s-t)(s^2+st+t^2)$. But $$2(s^2+st+t^2)=(s+t)^2+s^2+t^2,$$ so $s^2+st+t^2$ can only be $0$ when $s=t=s+t=0$.
H: Exponential Distribution Maximum Likelihood I found the following question in a past exam paper and I would like to ask how to solve it, as I can't find anything in the notes related to it: If three samples taken from Exponential($\lambda$) are 0.1, 0.5 and 0.9, what's the MLE for $\lambda$? I don't really understand how I'm supposed to deduce it from so little information. AI: Consider the definition of the likelihood function for a statistical model. Here, $\theta = \lambda,$ the unknown parameter of the distribution in question. Assuming your samples $X_1 = 0.1, X_2 = 0.5, X_3 =0.9$ are independent, we have that the likelihood function is $f_{\lambda } (X_1, X_2, X_3) = \lambda^3 e^{-\lambda (X_1 + X_2 + X_3)}.$ Taking logarithms gives the log-likelihood function of the data, $\mathcal{L}_3 (\lambda ) = 3\log \lambda - \lambda \cdot 3\overline{x},$ where $\overline{x} = 0.5$ is the sample mean. The function is maximized at $\hat{\lambda } = \frac{1}{\overline{x}} = 2.$
H: Some naive questions about embeddings First, if I may, I would like to ask for help in getting an intuitive understanding regarding embeddings. Wikipedia gives examples such as $\mathbb N$ in $\mathbb Z$. My first question is: were they not there already? And my second question is: what advantage (or advantageous constraint) do you get by embedding? This general question is motivated by a theorem in Appendix 2 of Marcus's "Number Fields", on p. 259. He states: We are interested in the embeddings of L in $\mathbb C$ which fix K pointwise. (L and K are subfields of $\mathbb C$ and K $\subset$ L.) Clearly such an embedding sends $\alpha \in$ L to one of its conjugates over K. As a further question: how does an "embedding" of something that is already there, $\alpha \in L \subset \mathbb C$, get sent to a conjugate, rather than calling it an injective map or permutation? Lastly, in the proof of the theorem, he says: let $\sigma$ be an embedding of K in $\mathbb C$ ... and f be the monic irreducible polynomial for $\alpha$ over K. He then goes on to apply $\sigma$ to the coefficients of f to get g. But I thought (as above) that the embeddings fix K, so why would he then say that g is irreducible (no problem) over $\sigma$K? Shouldn't that just be K? As always, thanks for your help and patience with what is probably a pretty obvious question. AI: On your first question: One way to define $\mathbb Z$ is as the set of equivalence classes of pairs of natural numbers under the equivalence relation $(k,l)\sim(m,n)\Leftrightarrow k+n=l+m$. The natural numbers are "not there already"; they have to be embedded in this structure via $f:\mathbb N\hookrightarrow\mathbb Z:n\mapsto[(n,0)]$.
H: For a regular expression $E$ and a context-free grammar $G$, why is deciding whether $L(G)\subseteq L(E)$ a recursive problem? I'd love your help with understanding why the following problem is recursive. Input: a regular expression $E$ and a context-free grammar $G$. Question: $L(G)\subseteq L(E)$? I tried to think of an algorithm showing that this problem is decidable, but I didn't manage to find one, or to reduce it to a recursive problem. Thanks a lot! AI: Fix a DFA for $L(E)$, and consider the set $A$ of assertions of the form: nonterminal $S$ in $G$ generates at least one string that takes DFA state $s_1$ to $s_2$. Each production of $G$ induces a rule that proves some assertions in $A$ given other ones, and every true assertion of this form arises from a finite number of applications of such rules (namely, corresponding to a parse tree for the string it speaks of). Thus, start with the empty subset of $A$, and repeatedly apply the rules corresponding to all productions of $G$ until you reach a fixpoint. (This must happen sooner or later because $A$ is finite.) Then check whether $A$ says that the starting symbol can generate a string that takes the DFA from the initial state to a non-accepting one; $L(G)\subseteq L(E)$ holds exactly when it cannot.
H: If $\sum \limits_{i=0}^{\infty}{|x_i|}<\infty$, prove that $\sum \limits_ {i=0}^{\infty}{x_i^2}<\infty$ I have been trying to prove this for the last three days. It's for my Time Series Econometrics homework. I think you'll notice I'm not good at math, as I can't even express my solution very well, and I'm sorry for that. At first I thought that if: (1) every element is finite; (2) there is one element that is greater than or equal to any other in the sequence; (3) no element repeats itself more than a finite number of times, then we would have $\sum \limits_{i=0}^{\infty}{|x_i|}<\infty$, because the terms would converge to zero (sooner or later, hehe). Believing that $x_i^2$ is a monotonic transformation of $|x_i|$, all the properties that guaranteed the convergence of a series would remain unaltered, and thus the proof would be given that $\sum \limits_{i=0}^{\infty}{x_i^2}<\infty$. But now I've just seen that the convergence of the terms to zero does not guarantee the convergence of a series, and I got pretty confused about the correct approach to the problem. AI: Here is an alternative proof. First note that since $\sum |x_n|$ converges, the sequence $\{|x_n|\}$ is bounded. The least upper bound of $\{|x_n|\}$ is usually denoted by $\sup_k |x_k|$. Secondly, we use $|x_n|^2\leq \sup_k|x_k|\cdot|x_n|$ to reach $$\left|\sum x_n^2\right|\leq\sum |x_n|^2\leq \sup|x_k|\cdot\sum |x_n|,$$ which not only implies convergence, but also gives a general estimate. Edit: Perhaps the convergence is a bit hidden: the key is that the above estimate shows that the tail of the series tends to 0, that is $$ \left|\sum_{n=N}^\infty x_n^2\right|\leq\sum_{n=N}^\infty |x_n|^2\leq \sup|x_k|\cdot\sum_{n=N}^\infty |x_n| \to0\qquad \text{as $N\to\infty$.} $$
H: Newton's method - determine accuracy in calculation I have almost managed to solve a problem (I think), but I am a bit unsure if my procedure is correct, and my answer is not quite the correct one. I would appreciate any input! The problem is as follows: If Newton's method is used with $f(x) = x^2 - 1$ and $x_0 = 10^{10}$, how many steps are required to obtain the root with accuracy $10^{-8}$? Solve analytically, not experimentally. (Hint: restart Newton's algorithm when you know that $e_n < 1$.) OK. My solution is as follows: with $x_0 = 10^{10}$, using Newton's algorithm we find that $x_1 \approx \frac{x_0}{2}$ and $x_2 \approx \frac{x_1}{2}$ (I tried this for the first few terms). Thus, we have approximately: $$x_{n} = \frac{x_0}{2^n}$$ Using the hint, and knowing that the closest root is $x = 1$, and $e_n = x_n - r$, we want to find the first value $x_n$ such that $|x_n - 1| < 1$, i.e. $0 < x_n < 2$. My next step is therefore: $$\frac{10^{10}}{2^n} < 2$$ $$10^{10} < 2^{n+1}$$ Solving this for $n$ yields $n > 32.2$. So we must take $33$ steps to get to this point. OK, now, again using the hint, we restart Newton's algorithm. Set $n = 0$ again; we now know that $e_0 < 1$. We have that: $$e_{n+1} = \frac{e_{n}^2}{2(e_{n} + 1)} < \frac{e_{n}^2}{2}$$ Then: $$e_1 < \frac{e_{0}^2}{2} \leq \frac{1}{2}, e_2 < \frac{e_{1}^2}{2} \leq \frac{1}{2^3}, e_3 < \frac{e_{2}^2}{2} \leq \frac{1}{2^7}, \text{etc.}$$ In general: $$e_{n} < \frac{1}{2^{2^{n} -1}}$$ We want $e_n < 10^{-8}$, and this is found when $n = 6$. Thus we need to use a total of $33 + 6 = 39$ steps. According to my book, however, the total number of steps should be $40$. So am I making a mistake here somewhere? If someone can check whether my procedure is correct, and perhaps spot my mistake, I would be very, very grateful! AI: Since $f$ is convex, you can show that if $x_0>1$ then $x_n \geq 1$ for all $n$. So we can bound the term $\frac{1}{x_n} \leq 1$. The Newton update for $f$ is $x_{n+1} = \frac{1}{2} (x_n + \frac{1}{x_n})$. So we have the bound $x_{n+1} \leq \frac{1}{2} x_n + \frac{1}{2}$. Working through the details gives $x_n \leq \frac{1}{2^n} x_0 + \frac{1}{2^n} + \cdots + \frac{1}{2} = \frac{1}{2^n} (x_0-1) + 1$. To estimate the number of iterations to get an error of less than 1, we want to find the smallest $n$ such that $2>\frac{1}{2^n} (x_0-1) + 1$, or equivalently, $n > \log_2 (x_0-1) \approx 33.2$. Hence, using this estimate, it will take (assuming I have made no mistakes) 34 iterations to get within an error of 1. This is an elaboration on the bound above: Since $x_{n+1} \leq \frac{1}{2} x_n + \frac{1}{2}$, we have $x_{1} \leq \frac{1}{2} x_0 + \frac{1}{2}$. For the induction step, suppose $x_n \leq \frac{1}{2^n} x_0 + \frac{1}{2^n} + \cdots + \frac{1}{2}$. Then we have $x_{n+1} \leq \frac{1}{2} x_n + \frac{1}{2} \leq \frac{1}{2} (\frac{1}{2^n} x_0 + \frac{1}{2^n} + \cdots + \frac{1}{2}) + \frac{1}{2} = \frac{1}{2^{n+1}} x_0 + \frac{1}{2^{n+1}} + \cdots + \frac{1}{2}$, hence the formula is true for all $n$. Summing the geometric series, we have $\frac{1}{2^n} + \cdots + \frac{1}{2} = \frac{1}{2}(\frac{1-\frac{1}{2^n}}{1-\frac{1}{2}}) = 1-\frac{1}{2^n}$, which gives $x_n \leq \frac{1}{2^n} x_0 + \frac{1}{2^n} + \cdots + \frac{1}{2} = \frac{1}{2^n} x_0 + 1-\frac{1}{2^n} = \frac{1}{2^n} (x_0-1) + 1$.
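As an empirical cross-check (my own sketch; the question's analysis gives 39 and the book claims 40), one can simply run the iteration with enough precision and count the steps:

    from decimal import Decimal, getcontext
    getcontext().prec = 50                  # plenty of precision near the root

    x = Decimal(10) ** 10
    steps = 0
    while abs(x - 1) >= Decimal(10) ** -8:
        x = (x + 1 / x) / 2                 # Newton update for f(x) = x^2 - 1
        steps += 1
    print(steps)                            # prints the observed step count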
H: Am I talking right? I'm trying to describe expected value. My paragraph goes: From probability theory we have $E[f(x)] = \int{f(x)p(x)dx}$. That is, the expected value of $f(x)$ is equal to the sum of infinitesimals $f(x)dx$ weighted by the probability that $f$ should take on those values $f(x)$ at each $x$. Am I speaking about infinitesimals correctly? I'm thinking here of the integral summing infinitesimal products $f(x)p(x)dx$, but I'm not sure if I've made a no-no reference to the use of infinitesimals in my notation or conception. AI: If you have a probability density $p(x)$, then I would actually write: "The expected value of $f(X)$ is equal to the sum of the values $f(x)$ weighted by the infinitesimal probability that $X\in[x,x+dx]$, which is $p(x)dx$." So there are two things going on here. First, $p(x)$ is a probability density, in that $p(x)$ has units of probability per unit $x$ ($x$ could be length, age, whatever). In other words, $p(x)$ by itself is NOT a probability in the usual sense; it needs to be multiplied by the $dx$ term to become an infinitesimal probability. This is sort of a wishy-washy physics explanation of probability density. Next, when writing $E[f(X)]$ one has two choices. The first choice is to express $E[f(X)]$ in terms of the probability density of $X$, which is to say $p(x)$. In this sense, we have $E[f(X)]=\int f(x)p(x)dx$. In other words, we are first querying the domain of $f$ as a random variable and then applying $f$. The other option is to write the expected value in terms of the density of $f(X)$ as a random variable. In this case we would define $g(y)$ as the density of $f(X)$, so that $g(y)\,dy$ is the probability that $f(X)\in[y,y+dy]$. Here we are querying the range of $f$ as a random variable, which inherits its randomness from the random domain $X$. So for any event $A$, $P(f(X)\in A)=P(X\in f^{-1}(A))$, where the right-hand side is a known quantity since we know the distribution of $X$. To wit, we get $E[f(X)]=\int yg(y)dy$.
H: Proving :$ \frac{\textrm{d}}{\textrm{d}x}\int^{g(x)}_{h(x)}f(t)\textrm{d}t =f(g(x))g'(x)-f(h(x))h'(x). $ How can one prove that: $$ \frac{\textrm{d}}{\textrm{d}x}\int^{g(x)}_{h(x)}f(t)\textrm{d}t =f(g(x))g'(x)-f(h(x))h'(x)? $$ AI: Let us first assume that $f$ has a primitive, which we shall refer to as $F$. By the fundamental theorem of calculus, we have: $$\int_{h(x)}^{g(x)}{f(t)\:dt}=F(g(x))-F(h(x))$$ By the chain rule, we have: $$\frac{d}{dx}\left(F\circ g\right)=F'(g(x))g'(x)$$ As we know that $\frac{d}{dx}F(x)=f(x)$, we have: $$\frac{d}{dx}\left(F(g(x))-F(h(x))\right)=F'(g(x))g'(x)-F'(h(x))h'(x)\\=f(g(x))g'(x)-f(h(x))h'(x)$$ Which means that: $$\frac{d}{dx}\int_{h(x)}^{g(x)}f(t)\:dt=f(g(x))g'(x)-f(h(x))h'(x)$$ Q.E.D.
H: Simultaneous equations, two unknowns I really should've paid more attention to maths in school... I have some fairly simple simultaneous equations in the following format. VMax = DMax + (DMax - DMin) * GMax VMin = DMin - (DMax - DMin) * GMin Knowns are VMax, VMin, GMax, GMin Unknowns are DMax, DMin All values are real numbers Given the above, how can I re-arrange the equations to solve and determine the values of DMin and DMax? AI: First, expand the multiplications: $VMax=DMax+DMax\cdot GMax-DMin\cdot GMax$ $VMin=DMin-DMax\cdot Gmin+DMin\cdot GMin$ Solve the first equation for DMax $DMax=\frac {VMax+DMin\cdot GMax}{1+GMax}$ Insert this value in the second equation, leaving you one equation in $DMin$, which you solve the same way. Then insert the value you find for $DMin$ into this one.
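If you'd rather let software do the rearranging, here is a sketch with SymPy (my own addition; by Cramer's rule the closed forms should be DMax = (VMax(1+GMin) + GMax*VMin)/(1+GMax+GMin) and DMin = (VMin(1+GMax) + GMin*VMax)/(1+GMax+GMin), valid whenever 1+GMax+GMin is nonzero):

    from sympy import symbols, solve

    VMax, VMin, GMax, GMin, DMax, DMin = symbols('VMax VMin GMax GMin DMax DMin')

    eqs = [VMax - (DMax + (DMax - DMin) * GMax),   # first given equation
           VMin - (DMin - (DMax - DMin) * GMin)]   # second given equation
    print(solve(eqs, [DMax, DMin]))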
H: Compute the limit of $\sqrt{1-a}\sum\limits_{n=0}^{+\infty} a^{n^2}$ when $a\to1^-$ I need some suggestions, hints for the limit when $a \to 1^{-}$ of $$\sqrt{\,1 - a\,}\,\sum_{n = 0}^{\infty}a^{n^{2}}.$$ AI: Note that $\frac1{\sqrt{1-a}}=\sum\limits_{k=0}^{+\infty}c_ka^k$ with $c_k=\frac1{4^k}{2k\choose k}\sim\frac1{\sqrt{\pi k}}$. Since $k\mapsto a^k$ is decreasing, $$ b_{i,j}\cdot a^j\leqslant\sum_{k=i}^jc_ka^k\leqslant b_{i,j}\cdot a^i,\qquad b_{i,j}=c_i+\cdots+c_j. $$ Using this for $i=n^2+1$ and $j=(n+1)^2$ with $n\to\infty$, one gets $b_{i,j}\sim\frac2{\sqrt\pi}$. These estimates can be made rigorous to show that $$ \frac1{\sqrt{1-a}}\sim\frac2{\sqrt\pi}\cdot\sum\limits_{n=0}^{+\infty}a^{n^2}, $$ hence $$ \lim\limits_{a\to1,a\lt1}\sqrt{1-a}\cdot\sum\limits_{n=0}^{+\infty}a^{n^2}=\frac{\sqrt\pi}2. $$ The same method shows more generally that, for every $c\geqslant1$, $$ \lim\limits_{a\to1,a\lt1}(1-a)^{1/c}\cdot\sum\limits_{n=0}^{+\infty}a^{n^c}=\Gamma\left(1+\frac1c\right). $$
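A quick numerical check of the constant (my own addition); note the convergence is fairly slow, since the next-order term decays only like $\sqrt{1-a}$:

    from math import sqrt, pi

    def g(a, terms=10000):
        return sqrt(1 - a) * sum(a ** (n * n) for n in range(terms))

    for a in (0.9, 0.99, 0.999, 0.9999):
        print(a, g(a))          # slowly approaches sqrt(pi)/2
    print("sqrt(pi)/2 =", sqrt(pi) / 2)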
H: Basics of probability - Independent Events. I read that the probability of two events, provided they are independent, is obtained by the following formula: $Probability _ {Independent~ Events} = Probability_{1st Event} \times Probability_{2nd Event} \times ...$ Now it states that the probability of getting a Tails then a Heads (T-H) or getting a Heads then a Tails (H-T) when a coin is flipped twice in a row is given by: $P(H-T) + P(T-H) = (\frac{1}{2}\times \frac{1}{2}) + (\frac{1}{2}\times \frac{1}{2}) = \frac{1}{2}$ Now my question is: why isn't it $P(H-T) \times P(T-H)$ instead of $P(H-T) + P(T-H)$, since after all the two events are independent of each other? I would appreciate it if someone could clear this up. AI: Well, independence and dependence have nothing to do with the fact that you are adding up the probabilities. You are asking for the probability that either THIS happens OR THAT happens. When you see the word "or", that means you are supposed to add up the probabilities (given that they do not overlap). Now the statement you made is true, and it partially applies to your question (particularly to each individual event). Your statement applies to situations like "what is the probability that THIS will happen AND THAT will happen". Let's take the first set of events: H-T. Whatever side you get on the second flip is completely independent of the first flip. The chance of getting heads on the first flip is 1/2, and the chance of getting tails on the second flip is 1/2, so you multiply them together to get 1/4.
H: There is no $C^1$ function $f$ mapping an open interval in $\mathbb{R}$ onto an open ball in $\mathbb{R}^2$ We know that there are no $C^1$ functions which map an open $E\subset \mathbb{R}^2$ injectively INTO $\mathbb{R}$. (Actually we can drop the $C^1$ requirement and just use continuity.) A nice related question is: there is no $C^1$ function $f$ mapping an open interval in $\mathbb{R}$ ONTO an open ball in $\mathbb{R}^2$. But I can't prove it; any suggestions? AI: For the first question: if $f: E \to \mathbb R$ is continuous and one-to-one, let $B$ be an open ball with $\overline{B} \subset E$. If $x \in \partial B$, then since $x$ is a limit point of $B$ and of $E \backslash \overline{B}$, $f(x)$ must be in $\partial f(B)$. But $B$ is connected, so $f(B)$ is connected (i.e. an interval in $\mathbb R$), and an interval has only two boundary points, while $\partial B$ is infinite and $f$ is one-to-one on it: contradiction. For the second question: a $C^1$ function $f$ is locally Lipschitz, and therefore for $E \subseteq \mathbb R$ the Hausdorff dimension of $f(E)$ is no greater than the Hausdorff dimension of $E$, and thus at most $1$. EDIT: Here's a somewhat less sophisticated version. Consider a closed bounded interval $J$ on which your $C^1$ function $f$ is defined. Then $|f'|$ is bounded on $J$, say by $K$, and for any $x,y \in J$ we have $|f(x) - f(y)| \le K |x - y|$. Let $J$ have length $L$. For any positive integer $n$, we can split up $J$ into $n$ subintervals $J_j$ of length $L/n$, and the images of these under $f$ are $n$ sets of diameter at most $KL/n$. But for any $r > 0$ and positive integer $m$, a square of side $r$ in ${\mathbb R}^2$ contains $m^2$ points (forming a regular grid) whose distance from each other is at least $r/m$. If $n$ is sufficiently large, we can take $m$ so $KL/n < r/m$ but $m^2 > n$, and thus $f(J)$ can't contain any square. Therefore $f(J)$ can't contain any nonempty open set. To complete the proof, you can use the Baire Category Theorem: if $I$ is an open interval, $f(I)$ is the union of countably many $f(J)$ for closed bounded intervals $J$, but $f(J)$ is closed and nowhere dense, so the union of these can't contain any nonempty open set.
H: How to classify 3-sheeted covering spaces for $S_{1}\vee S_{1}$? This might be a duplicate. This question also feels routine (it is exercise 10, page 88 in Hatcher). From the Harvard qualifying exam, 1990. Let $X$ be the figure eight. 1) How many 3-sheeted, connected covering spaces are there for $X$, up to isomorphism? 2) How many of these are normal (i.e. Galois) covering spaces? There are uncountably many covering spaces $Y$ of $X$ in general (one can check the corresponding page in Hatcher, page 58). The question is how to classify them nicely. I know that $p_{*}\pi_{1}(Y)$ has index 3 in $\pi_{1}(X)=\mathbb{Z}* \mathbb{Z}$ (the free group on two generators), but I do not know how to find all index-3 subgroups of $\mathbb{Z}*\mathbb{Z}$. On the other hand, if $H$ is normal in $\mathbb{Z}*\mathbb{Z}$, then the above question can be greatly simplified, but I still do not know how to solve it precisely. I tried to think in terms of deck transformations, etc., but did not get anywhere. AI: Hint: Instead of thinking about index-3 subgroups of $\Bbb Z \star \Bbb Z$, consider what connected 3-fold covers of $S_1 \vee S_1$ look like. Any such cover is a connected graph on 3 vertices of valence 4, with edges oriented and labeled by the two loops of the figure eight (why?), and there are only finitely many such graphs. Then use deck transformations to check whether each cover is regular.
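As a computational cross-check on the hint (my addition, not from the original answer): connected $n$-sheeted covers of the figure eight correspond to pairs of permutations $(\sigma, \tau)$ in $S_n$, the monodromy of the two loops, such that the subgroup they generate acts transitively on the sheets; covers are isomorphic exactly when the pairs are simultaneously conjugate, and a cover is normal exactly when the generated subgroup has order $n$ (so that it acts simply transitively). Assuming that dictionary, a brute-force Python sketch:

```python
# Brute force over monodromy data: pairs of permutations of {0,1,2}, one per
# loop of the figure eight, up to simultaneous conjugation. (Sketch, not a proof.)
from itertools import product, permutations

n = 3
perms = list(permutations(range(n)))
identity = tuple(range(n))

def compose(p, q):                       # (p.q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def inverse(p):
    return tuple(sorted(range(n), key=lambda i: p[i]))

def generated(gens):                     # subgroup of S_n generated by gens
    group = {identity}
    while True:
        new = {compose(g, h) for g in group for h in gens} - group
        if not new:
            return group
        group |= new

def transitive(group):                   # connected cover <=> transitive action
    return {g[0] for g in group} == set(range(n))

def conjugate(g, p):                     # g p g^{-1}
    return compose(compose(g, p), inverse(g))

pairs = {(s, t) for s, t in product(perms, repeat=2)
         if transitive(generated([s, t]))}

classes = []
while pairs:
    s, t = next(iter(pairs))
    classes.append((s, t))
    pairs -= {(conjugate(g, s), conjugate(g, t)) for g in perms}

normal = [st for st in classes if len(generated(list(st))) == n]
print(len(classes), len(normal))         # answers to 1) and 2)
```

Running it prints the counts asked for in 1) and 2); the representatives collected in `classes` can then be matched by hand against the labeled valence-4 graphs from the hint.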
H: Prove $\cos^2 x \,\sin^3 x=\frac{1}{16}(2 \sin x + \sin 3x - \sin 5x)$ How would I prove the following? $$\cos^2 x \,\sin^3 x=\frac{1}{16}(2 \sin x + \sin 3x - \sin 5x)$$ I do not know how to do the problem. I do know $\sin(3x)$ can be written as $\sin(2x+x)$ and such, yet I am not sure how to begin. AI: Using $\sin 2\theta=2\sin\theta\cos\theta$ and $\sin^2\theta=(1-\cos2\theta)/2$ we get $$\cos^2x ~\sin^3x=(\cos x\sin x)^2\sin x=\left(\frac{\sin 2x}{2}\right)^2\sin x=\frac{1}{4}\cdot\frac{1-\cos 4x}{2}\sin x. \tag{$\circ$}$$ From the sum rule $\sin(\alpha\pm\beta)=\sin\alpha\cos\beta\pm\sin\beta\cos\alpha$ we have the identity $$\frac{\sin(\alpha+\beta)-\sin(\alpha-\beta)}{2}=\sin\beta\cos\alpha.$$ Apply this with $\alpha=4x$ and $\beta=x$, i.e. $\sin x\cos 4x=\frac{\sin 5x-\sin 3x}{2}$, to $(\circ)$: $$\frac{1}{8}(1-\cos 4x)\sin x=\frac{1}{8}\sin x-\frac{1}{16}(\sin 5x-\sin 3x)=\frac{1}{16}(2\sin x+\sin 3x-\sin 5x).$$
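For readers who want a machine check of the identity, here is a short SymPy verification (my addition; it assumes SymPy is installed):

```python
# Symbolic verification that cos(x)^2*sin(x)^3 equals the stated combination.
import sympy as sp

x = sp.symbols('x')
lhs = sp.cos(x)**2 * sp.sin(x)**3
rhs = sp.Rational(1, 16) * (2*sp.sin(x) + sp.sin(3*x) - sp.sin(5*x))
print(sp.simplify(sp.expand_trig(lhs - rhs)))   # 0
```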
H: Function written as two functions having IVP I heard this problem and I am a bit stuck. Given a function $f : I \rightarrow \mathbb{R}$, where $I \subset \mathbb{R}$ is an open interval, $f$ can be written as $f=g+h$ where $g,h$ are defined on the same interval and have the Intermediate Value Property. I tried to construct the first function arbitrarily at two points and then tried to define it in a way that has the IVP, but I cannot manage to control the other function; as I try to fix the one I destroy the other, and I cannot seem to be certain I have enough points to define both in a way that they have the IVP. Any help appreciated! Thank you. AI: Edit: In fact, all the information I give below (and more) is provided in another question in a much more organized way; I just found it. My original post: The Intermediate Value Property is also called the Darboux property, and Sierpinski first proved this theorem. The problem is treated, in much more generality, in a blog post of Beni Bogosel, a member of our own community: http://mathproblems123.files.wordpress.com/2010/07/strange-functions.pdf It is also proved (as I found from Wikipedia) in Bruckner, Andrew M.: Differentiation of Real Functions, 2nd ed., page 6, American Mathematical Society, 1994.
H: Boundary of $L^1$ space Is there any rigorous or heuristic notion of a boundary of $L^1$ that is studied? I mean something loosely like the collection of functions or distributions defined by $$\left\{f\notin L^1: f_n\to f \text{ a.e. as } n\to \infty \text{ for some } f_n\in L^1\right\}$$ And what kind of characterizations or properties of this "surface" are known? Edit: Changed to pointwise convergence. AI: I believe the set is just the set of all (non-integrable) measurable functions with $\sigma$-finite support. In particular, if the space in question is $\sigma$-finite (like $\mathbf R$ with Lebesgue measure), then all measurable functions are pointwise limits of integrable functions. Edit: cleaned up a bit. In one direction: clearly it is enough to show this for nonnegative functions. Let $f$ be a nonnegative measurable function, and $S_n$ an increasing sequence of sets of finite measure such that $\mathrm{supp}f\subseteq \bigcup_nS_n$. Let $A=\lbrace x\mid f(x)=\infty\rbrace$, $A_n=\lbrace x\mid f(x)<n\rbrace$. Then for each $n$ put $f_n(x)=f(x)$ on $A_n\cap S_n$, $f_n(x)=n$ on $A\cap S_n$, and $f_n(x)=0$ otherwise. Then the $f_n$ are integrable and $f_n\to f$ pointwise. In the other direction, we show that a pointwise limit of a sequence of integrable functions has $\sigma$-finite support: take an arbitrary sequence of integrable functions $f_n$. Any integrable function $f_n$ has $\sigma$-finite support $B_n=\bigcup_m\lbrace x\mid \lvert f_n(x)\rvert>1/m\rbrace$. The support of the limit of the $f_n$ is contained in $\bigcup_n B_n$ (because if at some point none of the functions is nonzero, neither is the limit), and is hence $\sigma$-finite, so we're done.
H: Show in the form $A\sin(x+c)=\sin x - \cos x$? Not sure what identity I should be using here. My gut tells me to use the sine sum formula, $\sin(x+y) = \sin(x)\cos(y) + \cos(x)\sin(y)$, but I can't figure out how. AI: Yes, you want to use that formula. You have: $$A\sin(x+c)=A\sin x \cos c + A\sin c \cos x=\sin x - \cos x$$ So that means that $$A\cos c=1, \quad A\sin c=-1$$ Squaring the equations and adding them, we see that $$A^2\cos^2 c+A^2\sin^2 c=2$$ $$A^2=2$$ $$A=\pm\sqrt{2}$$ Let's take $A=\sqrt{2}$. Putting this into our first two equations implies that: $$\sqrt{2}\cos c=1$$ $$\cos c=\frac 1 {\sqrt 2}$$ And similarly $$\sin c=\frac{-1}{\sqrt 2}$$ $c=-\pi/4$ solves both of these. So our answer is $$\sin x - \cos x = A\sin(x+c)=\sqrt{2}\sin\left(x-\frac{\pi}4\right)$$ Note that there are other solutions that will work, but this is probably the simplest.
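A quick numerical spot-check of the final answer (my addition, standard library only):

```python
# Evaluate both sides of sin(x) - cos(x) = sqrt(2)*sin(x - pi/4) at a few points.
import math

for x in (0.0, 0.7, 1.3, 2.9, -1.1):
    lhs = math.sin(x) - math.cos(x)
    rhs = math.sqrt(2) * math.sin(x - math.pi / 4)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("identity holds at all sample points")
```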
H: Equivalent characterizations of ordinals of the form $\omega^\delta$ Let $\alpha$ be a limit ordinal. Show the following are equivalent: (1) $\forall \beta, \gamma<\alpha\ (\beta+\gamma<\alpha)$; (2) $\forall \beta<\alpha\ (\beta+\alpha=\alpha)$; (3) $\forall X\subset \alpha\ (\text{type}(X)=\alpha \text{ or } \text{type}(\alpha-X)=\alpha)$; (4) $\exists \delta\ (\alpha=\omega^\delta)$. What I've tried: I've shown that $1\rightarrow2$ and $1\rightarrow3$. Could anybody help me? Thanks ahead:) AI: I would recommend showing the implications 1⇒3⇒2⇒1, and 1⇒4⇒1. The following are (fairly complete) outlines of those you haven't completed. (3⇒2) Given $\beta < \alpha$, note that as $\operatorname{type}(\beta) = \beta < \alpha$, by (3) it follows that $\operatorname{type}(\alpha \setminus \beta) = \alpha$. Then $\alpha = \operatorname{type} ( \beta + ( \alpha \setminus \beta ) ) = \operatorname{type} (\beta) + \operatorname{type}(\alpha \setminus \beta) = \beta + \alpha$. (2⇒1) Recall that ordinal addition is strictly monotone in the right summand, and so using (2) it follows that given $\beta , \gamma < \alpha$ we have $\beta + \gamma < \beta + \alpha = \alpha$. (1⇒4) Given a limit ordinal $\alpha$ satisfying (1), define $$\delta = \min \{ \delta \in \mathbf{On} : \alpha < \omega^{\delta+1} \}.$$ Note that $\omega^\delta \leq \alpha$, and so there are unique ordinals $\gamma , \zeta$ such that $\zeta < \omega^\delta$ and $\omega^\delta \cdot \gamma + \zeta = \alpha$. By applying (1) it follows that $\zeta = 0$ and $\gamma = 1$. (4⇒1) By transfinite induction on $\delta > 0$ we can show that $\beta + \gamma < \omega^\delta$ whenever $\beta , \gamma < \omega^\delta$. The base case $\delta = 1$ just says that the sum of two finite ordinals is a finite ordinal. If $\delta = \zeta + 1$, suppose that $\beta , \gamma < \omega^\delta = \omega^{\zeta+1} = \omega^\zeta \cdot \omega = \sup_{n < \omega} \omega^\zeta \cdot n$. Then there must be an $n < \omega$ such that $\beta , \gamma < \omega^\zeta \cdot n$, and we then have $$ \beta + \gamma < \omega^\zeta \cdot n + \omega^\zeta \cdot n = \omega^\zeta \cdot (n+n) < \omega^\zeta \cdot \omega = \omega^\delta. $$ If $\delta > 1$ is a limit ordinal, and $\beta , \gamma < \omega^\delta = \sup_{\zeta < \delta} \omega^\zeta$, then there must be a $\zeta < \delta$ such that $\beta , \gamma < \omega^\zeta$, and by the induction hypothesis it follows that $\beta + \gamma < \omega^\zeta \leq \omega^\delta$.
H: Is there a trick to finding the number of odd numbers b/w two values? I know you could find the number of even numbers (since they are a multiple of two). For example, the number of even numbers between $11$ and $30$ will be $$n= \frac{28-12}{2} + 1 = 9 $$ I wanted to know: is there a similar way to find the number of odd numbers between two extremes? AI: You can use the same kind of exact formula. What is the number of odd numbers between $11$ and $30$? $$n = (29-13)/2 + 1 = 16/2 + 1 = 9$$ Let's list them out to make sure: $13, 15, 17, 19, 21, 23, 25, 27, 29.$ There are $9$ of them, so that is correct. Your even-number count works out the same way: $$n = (28-12)/2 + 1 = 16/2 + 1 = 8 + 1 = 9 \quad (12, 14, 16, 18, 20, 22, 24, 26, 28)$$
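If you want the rule as a reusable recipe, here is a small Python helper (my addition; `count_between` is a name I made up) that counts integers of either parity strictly between two values, checked against brute-force enumeration:

```python
def count_between(lo, hi, parity):
    """Count integers strictly between lo and hi with the given parity (0=even, 1=odd)."""
    first = lo + 1 + ((parity - (lo + 1)) % 2)   # smallest candidate > lo
    last = hi - 1 - ((hi - 1 - parity) % 2)      # largest candidate < hi
    return 0 if first > last else (last - first) // 2 + 1

print(count_between(11, 30, 0), count_between(11, 30, 1))   # 9 9
assert count_between(11, 30, 0) == len([k for k in range(12, 30) if k % 2 == 0])
assert count_between(11, 30, 1) == len([k for k in range(12, 30) if k % 2 == 1])
```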
H: Show that $\overline{U\cap \overline{A}}=\overline{U\cap A}.$ Show that for every open set $U$ in a topological space $X$ and every $A\subset X$ we have $$\overline{U\cap \overline{A}}=\overline{U\cap A}.$$ A simple or new proof is welcome. Thanks for any help. AI: Clearly $\overline{ U \cap \overline{A} } \supseteq \overline{ U \cap A }$, so we need only show the reverse inclusion. Suppose that $x \in \overline{ U \cap \overline{ A } }$, and let $V$ be any open neighbourhood of $x$. Then $V \cap ( U \cap \overline{ A } ) \neq \emptyset$. As $V \cap U$ is open, it then follows that $( V \cap U ) \cap A \neq \emptyset$, and so $V \cap ( U \cap A ) \neq \emptyset$. Therefore $x \in \overline{ U \cap A }$. (The only non-trivial step depends on the following fact, easily proved: if $U$ is an open subset of a topological space $X$ and $A \subseteq X$ is arbitrary, then $U \cap A = \emptyset$ implies $U \cap \overline{A} = \emptyset$.)
H: Probability that a man will hit the target The question is: A man can hit a target once in $4$ shots. If he fires $4$ shots in succession, what is the probability that he will hit his target? Here is how I am solving it: since the probability of the man hitting the target is $\frac{1}{4}$, for four consecutive shots it will be $(\frac{1}{4})^4 = \frac{1}{256}$, which is wrong. Now the book takes a different approach: it finds the probability that he will not hit the target in one shot, $1 - \frac{1}{4} = \frac{3}{4}$; therefore the probability that he will not hit the target in 4 shots is $(\frac{3}{4})^4$, and thus the probability that he will hit the target in at least one of the four shots is $1- (\frac{3}{4})^4$. Although I understand the book's approach, I wanted to know why my approach is wrong. Doesn't it also calculate the probability of hitting the target in 4 shots? AI: Your first calculation finds the probability that the person hits the target $4$ times in a row. That is very different (and much smaller) than the probability that the person hits at least once. Let's do the problem in another way, much too long, but it will tell us what is going on. What is the probability of hitting at least once? The event "at least one hit" could happen in several ways: (i) exactly $1$ hit; (ii) exactly $2$ hits; (iii) exactly $3$ hits; (iv) exactly $4$ hits. (i) The probability of exactly one hit is $\binom{4}{1}(1/4)(3/4)^3$. This is because the hit could happen in any one of $4$ (that is, $\binom{4}{1}$) places. Write H for hit and M for miss. The probability of the pattern HMMM is $(1/4)(3/4)(3/4)(3/4)$. Similarly, the probability of MHMM is $(3/4)(1/4)(3/4)(3/4)$. You will notice this probability is the same as the probability of HMMM. We get the same probability for MMHM and for MMMH, for our total of $\binom{4}{1}(1/4)(3/4)^3$. (ii) Similarly, the probability of exactly $2$ hits is $\binom{4}{2}(1/4)^2(3/4)^2$. (iii) The probability of $3$ hits is $\binom{4}{3}(1/4)^3(3/4)$. (iv) The probability of $4$ hits is $\binom{4}{4}(1/4)^4$. This is the $(1/4)^4$ that you calculated. Add up. We get the required answer. However, that approach is a lot of work. It is much easier to find the probability of no hits, which is the probability of getting MMMM. This is $(3/4)^4$. So the probability that the event "at least one hit" doesn't happen is $(3/4)^4$. So the probability that the event "at least one hit" does happen is $1-(3/4)^4$.
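The two approaches can be checked against each other numerically; this Python snippet is my addition, not part of the book's solution:

```python
# The complement method agrees with summing the binomial probabilities
# for exactly 1, 2, 3, or 4 hits.
from math import comb

p = 1 / 4
complement = 1 - (1 - p) ** 4
direct = sum(comb(4, k) * p**k * (1 - p) ** (4 - k) for k in range(1, 5))
print(complement, direct)   # both 0.68359375 = 175/256
```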
H: Can you prove why consecutive diagonal intersection points show decreasing fractions inside a rectangle? When I was in third grade, I was playing with rectangles and diagonal lines, and discovered something very interesting with fractions. I've shown several math teachers and professors over the years, and never got an answer, just a few "Wow, that's neat!" responses. Draw a rectangle. Draw a line from the top left corner to the bottom right corner. Then draw a line from the top right corner to the bottom left corner. The intersection obviously lies at $1/2$ of the rectangle's width. Now draw a line straight down from the last intersection to the bottom edge of the rectangle, and then from that point to the top right corner of the rectangle. The new intersection lies at $1/3$ of the rectangle's width. Keep doing this and the denominator of the fraction increases by one each time, to infinity. Why does this happen? I don't know how to prove why this happens, but it would be interesting if someone could. Can you? I never became a mathematician to prove it, but if it's easy, please forgive my mathematical ignorance. I tried this several years ago with AutoCAD and it does in fact work out. AI: If we take the top-right corner as the origin and the $x$ and $y$ axes pointing leftward and downward, respectively, and take the side lengths of the rectangle as the respective units, you're intersecting the lines $y=nx$ with the line $y=1-x$. You get the intersections by equating the two right-hand sides, $nx=1-x$, which yields $(n+1)x=1$, and thus $x=1/(n+1)$. The construction produces exactly these lines: if the current intersection lies at $x=1/(n+1)$, dropping to the bottom edge gives the point $(1/(n+1),1)$, and the line from there to the top-right corner (the origin) is $y=(n+1)x$, whose intersection with $y=1-x$ is at $x=1/(n+2)$.
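The recursion in this answer is easy to run with exact rational arithmetic (my addition), reproducing the fractions observed in AutoCAD:

```python
# Track the x-coordinate of each intersection with the diagonal y = 1 - x,
# in the coordinates described above.
from fractions import Fraction

x = Fraction(1, 2)              # intersection of the two main diagonals
for _ in range(6):
    slope = 1 / x               # line through the origin and the point (x, 1)
    x = 1 / (slope + 1)         # its intersection with y = 1 - x
    print(x)                    # 1/3, 1/4, 1/5, 1/6, 1/7, 1/8
```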
H: Group with an automorphism of order 2 (Jacobson BA1) I am having trouble with Exercise 11, Section 1.10 of Basic Algebra 1 by Nathan Jacobson (pub. Freeman & Co. 1985). The statement to prove is: Let $G$ be a finite group and $\phi$ an automorphism of $G$. Let $$ I = \{ g \in G : \phi g = g^{-1} \} $$ If $|I| > {3\over4} |G|$, $G$ is abelian. If $|I| = {3\over4} |G|$, $G$ has an abelian subgroup of index 2. I'm trying to attack item 2 first, thinking there will be a way from 2 to 1, but I am not even at a point where that matters. Facts I can see: $\phi^2 = id_G$, because the set of elements fixed by $\phi^2$ is a subgroup containing $I$. Since $|G|$ is even (working on item 2!), so must be the order of $K$, where $$ K = \{ k \in G : \phi k = k \} $$ because we can partition $G$ into classes $\pi_k = \{ k, \phi k \}$ of size either 2 or 1, and $K$ is the union of all the singleton classes. Hence, $K$ contains an element $i$ of order 2, so $\phi i = \phi i^{-1} = i^{-1}$, i.e. $$ 1 \neq i \in K \cap I $$ That's already some nice information, but I still have no clue where to look for the abelian subgroup :-( AI: For part 2) Let $x\in I$, $y\in I \cap x^{-1} I.$ Then $y\in C_G (x),$ as can be seen by applying $\phi$ to $xy.$ Now $|x^{-1} I| = |I| = \frac{3}{4} |G|,$ so applying inclusion-exclusion gives $|C_G (x) | \ge |I \cap x^{-1}I| = \frac{3}{2} |G| - |I \cup x^{-1} I| \ge \frac{|G|}{2}.$ Hence $[G: C_G(x) ] \le 2$ for all $x\in I.$ Now if $C_G(x)$ is properly contained in $G$ for some $x\in I$ we are done. Otherwise, we have $I \subseteq Z(G)$; by the size constraint on $I,$ this forces $G$ to be abelian. From here it is easy to show that $I$ is necessarily a subgroup of $G,$ contradicting Lagrange's Theorem. The techniques used for part 1) are quite similar, except that towards the end you need to shift your argument towards the center of the group.
H: Reference request for examples of probabilistic heuristics, help put some examples in a broader context. I was thinking about how probability is used in heuristic arguments, an example being the argument that there are an infinite number of twin primes: the probability that $n$ is the first of two twin primes is about $\frac{1}{(\log n)^2}$, and $\prod_n\left(1-\frac{1}{(\log n)^2}\right) \rightarrow 0$, so the probability that there are an infinite number of twin primes is $1$. (Another example provided by @joriki is the heuristic for the Collatz conjecture.) I then wondered if there are any heuristic arguments yielding probabilities strictly between $0$ and $1$, and I considered this example predicate: $A(x,n)$ := "the $n^{th}$ bit of the binary expansion of $x$ is $1$". Given certain assumptions there is a sense in which $A(\pi, n)$ is "true with probability $\frac{1}{2}$". Now this isn't a very useful notion, since it is either true or not, and we can find out by computing the $n^{th}$ bit of $\pi$, although perhaps there is some utility if $n$ is very large. However it is a second-class sort of heuristic because it only gives a hint to an answer we could find definitively with more work. But next I thought of this example: $B(x,n)$ := "the binary expansion of $x$ starting from offset $n$, when interpreted as an Iota program, halts". In a similar way we can lazily argue that $B(\pi, n)$ is "true with probability $\Omega_{\mathrm{Iota}}$". But in this case, under similar assumptions, there are values of $n$ for which $B(\pi, n)$ is not even decidable: we can construct a Gödel sentence in our working theory, write an Iota program which searches for its proof, find that program in the binary expansion of $\pi$, and then construct $B(\pi, n)$ using its offset $n$. For that reason, it seems to be a first-class sort of heuristic (despite giving us a probability strictly between $0$ and $1$), in the same league as those that give us almost-certain conclusions to open problems. One of the unstated assumptions (along with the 2-normality of $\pi$ and such) is that there is no "conspiracy" between the bits of $\pi$ and the semantics of Iota. It is equally believable that no such conspiracy exists between $\pi$ and $e$, or between $e$ and Iota, or amongst all three. And since the probability of the conjunction of independent events is equal to the product of their respective probabilities, it is just as reasonable to say that $B(\pi,n) \wedge B(e,n)$ is "true with probability $\Omega_{\mathrm{Iota}}^2$". On the other hand, the same doesn't apply to $B(\pi,n) \wedge B(\pi,n+1)$: since the programs substantially overlap, we should expect some correlation in their halting status and therefore a different "truth probability" for the conjunction. So here are my questions: Are there other (hopefully more natural) examples of problems (open or not) with simple heuristic arguments suggesting a particular probability of truth (other than $0$ or $1$)? Are my (admittedly vague) arguments above at least vaguely correct, or are there major mistakes and conceptual problems? What is a good way of describing the dependencies between different heuristics? Can the idea of quantum probability amplitude be applied here? What are some other examples of heuristics that are not independent (e.g. two statements that are both heuristically true but contradict each other)? Any references to related topics that might help me further develop or discard this idea?
I have read overviews of fuzzy logic and probabilistic logic but I don't know how to apply either here. AI: I don't think a heuristic argument suggests a particular probability of truth for a conjecture. I'd view it in a Bayesian light – it suggests a particular probability of truth under the assumption that there is no systematic reason for the result to hold or not to hold, and you should use that to update your a priori assessment of the conjecture. (Of course often you won't have any strong beliefs about the conjecture independent of the numerical data, and in that case the a posteriori probability will be very close to the heuristically determined one.) For the Goldbach conjecture, the heuristic probability is very close to $1$, but it's still strictly between $0$ and $1$. The same is true for other problems for which extensive numerical searches have been done. In such cases, it's typically not the total heuristic probability for the existence of counterexamples that's very close to $0$, only the part that's left after the extensive search. For an example of a search that only leads to a moderate update of the belief in a conjecture, see Irregular Primes to 163 million by Buhler and Harvey, in which the expected number of counterexamples to the Kummer–Vandiver conjecture in the searched range was increased from $0.674$ to $0.748$. [Edit:] Your comments made me realize that we should actually make further distinctions among the cases where it's possible to gather evidence and give definite answers (i.e. neither your $A$ nor $B$). The twin-prime conjecture can't be decided numerically; no amount of twin primes following the expected distribution can prove it and no gap or deviation from the distribution can disprove it. All we can do numerically is gather more data and reason about how this should quantitatively affect our belief in the conjecture. This reasoning requires more sophisticated hypothesis testing than just calculating probabilities for counterexamples and then going to look for them; it requires a measure for how unlikely the encountered empirical twin prime distribution would be under the conjectured distribution. On the other hand, the Goldbach conjecture and the Kummer-Vandiver conjecture can be disproved by a single counterexample, and indeed their numerical investigation proceeds through a search for counterexamples. But the heuristic probability for counterexamples to exist may either be $1$ or less. (Perhaps in some weird continuous cases it might even be zero without this constituting a proof of non-existence.) In the case of the Goldbach conjecture, it is less than $1$, so at any point in the search for a counterexample there is a certain nonzero probability left that one might be found. Not only is the heuristic probability nonzero, but if you happen to believe that there's no systematic reason for the Goldbach conjecture to hold, then your actual assessment of the probability is close to the small nonzero heuristic probability. In the case of the Kummer-Vandiver conjecture, there is disagreement on the heuristic arguments. The arguments proposed by Williams and cited by Buhler and Harvey suggest an expected number $\frac12\log\log x$ of counterexamples up to $x$. This would correspond to a probability of $(2x\log x)^{-1}$ for $x$ to be a counterexample, and the probability for there not to be any counterexamples would thus be $\prod_x(1-(2x\log x)^{-1})$, which converges to zero. 
Thus, no matter how far we search for counterexamples without finding any, the heuristics would still predict that there are an infinite number of counterexamples yet to come and that the probability of finding at least one is $1$. In this case it's not the heuristic probability for a counterexample that decreases with the search, but our belief in the heuristics, since the lack of counterexamples in the searched range makes it seem more likely that there's a systematic reason for it. Stated differently, in the Goldbach case it's the prospects for the search yet to come that make us believe that we won't find any counterexamples, whereas in the Kummer-Vandiver case (following Williams) it's the results of the search already carried out that make us doubt that we'll find counterexamples, even though the heuristic prospects for the search yet to come haven't actually changed. Mihăilescu (in the paper I linked to in a comment), on the other hand, offers different arguments and claims that they might imply that there are $O(1)$ counterexamples, which would make this similar to the Goldbach case. The Collatz case, contrary to what I wrote in a comment, is actually an interesting mixture in that there are two possible types of counterexamples, ascending chains and cycles, and the heuristic probability for ascending chains is zero whereas I don't know of any heuristics for cycles and would expect them to heuristically occur with non-zero probability. Certain kinds of cycles have been proved not to exist, so if the heuristic probability for the existence of cycles is strictly between $0$ and $1$, it's conceivable that a proof might, by eliminating the possibility of cycles altogether, convert the Collatz case from the Goldbach category to the twin-prime category, since the existence of ascending chains can't be numerically decided either way. I'm really just thinking out loud here; as I wrote in a comment, I had been meaning to ask a similar question; so I hope others will chime in and throw some more light on this surprisingly diverse heuristic zoo.
H: Dealing with "at least" in Permutation For the following question (which I pulled off the internet): A five-member committee is to be selected from among four Math teachers and five English teachers. In how many different ways can the committee be formed under the following circumstances? A) Anyone is eligible to serve on the committee. B) The committee must consist of $3$ Math teachers and $2$ English teachers. C) The committee must contain at least three Math teachers. D) The committee must contain at least three English teachers. (Answers: $126$, $40$, $45$, $81$.) How would I go about solving it when the requirement is at least $3$ Math teachers? Any suggestions? I know that when it was $3$ Math and $2$ English teachers, I simply took $r=3$ for Math and $r=2$ for English in the combination formula. AI: Since you know how to do it for exactly $3$ math teachers, you also know how to do it for exactly $4$ math teachers. There are only $4$ math teachers, so those are the only two possibilities; you just have to add them up.
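All four answers can be reproduced with `math.comb`; this check is my addition:

```python
# Direct computation of the four committee counts.
from math import comb

total = comb(9, 5)                                                # A) 126
exactly_3m_2e = comb(4, 3) * comb(5, 2)                           # B) 40
at_least_3m = sum(comb(4, m) * comb(5, 5 - m) for m in (3, 4))    # C) 45
at_least_3e = sum(comb(5, e) * comb(4, 5 - e) for e in (3, 4, 5)) # D) 81
print(total, exactly_3m_2e, at_least_3m, at_least_3e)             # 126 40 45 81
```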
H: Canonical Isomorphism Between $\mathbf{V}$ and $(\mathbf{V}^*)^*$ For the finite-dimensional case, we have a canonical isomorphism between $\mathbf{V}$, a vector space with the usual addition and scalar multiplication, and $(\mathbf{V}^*)^*$, the "dual of the dual of $\mathbf{V}$." This canonical isomorphism means that the isomorphism is always the same, independent of additional choices. We can define a map $I : \mathbf{V} \to (\mathbf{V}^*)^*$ by $$x \mapsto I(x) \in (\mathbf{V}^*)^* \ \text{ where } \ I(x)(f) = f(x) \ \text{for any } \ f \in \mathbf{V}^*$$ My Question: what can go wrong in the infinite-dimensional case? The notes I am studying remark that if $\mathbf{V}$ is finite-dimensional, then $I$ is an isomorphism, but in the infinite-dimensional case things can go wrong. How? AI: Let $V$ be any vector space over a field $k$, and let $\{e_i\}_{i \in J}$ be a basis for $V$ (writing $J$ for the index set, to avoid a clash with the canonical map $I$). For each $i \in J$, there is a unique linear functional $f_i: V \rightarrow k$ such that $f_i(e_j) = \delta_{ij}$: that is, $f_i(e_i) = 1$ and for every other basis element $e_j$, $f_i(e_j) = 0$. CLAIM: The set $\{f_i\}_{i \in J}$ is linearly independent in $V^{\vee}$, and thus $\dim V \leq \dim V^{\vee}$. It is a basis if and only if $V$ is finite-dimensional (if and only if $V^{\vee}$ is finite-dimensional). The linear independence is easy: if $a_1 f_{i_1} + \ldots + a_n f_{i_n} = 0$, then just by evaluating at $e_{i_1},\ldots,e_{i_n}$ we find $a_1 = \ldots = a_n = 0$. In the finite-dimensional case -- say $J = \{1,\ldots,n\}$ -- we may write any linear $g: V \rightarrow k$ as $g = g(e_1) f_1 + \ldots + g(e_n) f_n$, which shows that $f_1,\ldots, f_n$ is a basis for $V^{\vee}$ (and implies $\dim V = \dim V^{\vee}$). However, in the infinite-dimensional case the $\{f_i\}_{i \in J}$ do not form a basis, essentially because in an abstract vector space all our sums must be finite sums rather than infinite sums! Indeed the subspace spanned by the $f_i$'s is precisely the set of linear functionals which are zero at all but finitely many basis elements $e_i$, whereas to give a linear functional $f$ on $V$ the values $f(e_i)$ can be absolutely arbitrary. Concretely, the functional $f$ with $f(e_i) = 1$ for all $i \in J$ does not lie in the span of the $f_i$'s. (Remark: In fact whenever $\dim V$ is infinite, we have $\dim V^{\vee} > \dim V$. That is, not only is $\{f_i\}_{i \in J}$ not a basis, there is no basis for the dual space of cardinality equal to that of $J$. This is actually not so easy to prove, and it is not needed to answer the question.) Now we come back to the canonical map $I: V \rightarrow V^{\vee \vee}$ given by $I(x): f \mapsto f(x)$. CLAIM: a) $I$ is always injective. b) $I$ is surjective if and only if $V$ is finite-dimensional. To prove a), let $x$ be a nonzero element of $V$ and choose a basis $\{e_i\}_{i \in J}$ for $V$ in which $x$ is one of the basis elements, say $x = e_1$. Then $f_1$ is a linear functional which does not vanish at $x$, so $I(x)$ is a nonzero element of $V^{\vee \vee}$. To prove b) we first use the fact that if $V$ is finite-dimensional, $\dim V = \dim V^{\vee} = \dim V^{\vee \vee}$. Thus $I: V \rightarrow V^{\vee \vee}$ is an injection between two vector spaces of the same finite dimension, so it must be an isomorphism. Finally, if $V$ is infinite-dimensional, then one can see by choosing bases, dual sets and dual dual sets as above that $I$ is not surjective.
(A good first step here is to confirm that in the finite-dimensional case, if we choose a basis $e_1,\ldots,e_n$ for $V$, a dual basis $f_1,\ldots,f_n$ for $V^{\vee}$ and then a dual dual basis $g_1,\ldots,g_n$ for $V^{\vee \vee}$, then the map $I$ is precisely the one which maps $e_i$ to $g_i$ for all $i$.) I can supply more details upon request. Note also that if we are willing to make use of the above parenthetical fact that $\dim V^{\vee} > \dim V$ when $\dim V$ is infinite, then we see that $\dim V^{\vee \vee} > \dim V^{\vee} > \dim V$, and thus not only is $I: V \rightarrow V^{\vee \vee}$ not an isomorphism, but moreover there is no isomorphism of vector spaces from $V$ to $V^{\vee \vee}$. Again though, this lies significantly deeper. Added: Let me say a bit about the more ambitious approach of showing $\operatorname{dim}_k V^{\vee} > \operatorname{dim}_k V$ for any infinite-dimensional vector space $V$. Let $J$ be a basis for $V$, so $V \cong \bigoplus_{J} k$. As mentioned above, to give a linear functional on $V$ it is necessary and sufficient to assign to each basis element an arbitrary value in $k$, whence an isomorphism $V^{\vee} \cong k^J = \prod_{J} k$. Thus dualization replaces a direct sum over $J$ with a direct product over $J$. When $J$ is finite there is no difference, so we recover $V \cong V^{\vee}$. However, when $J$ is infinite I claim that $$ \operatorname{dim}_k V^{\vee} = \operatorname{dim}_k k^J = \# k^{\# J} \geq 2^{\# J} > \# J = \operatorname{dim}_k V.$$ I know of almost no standard texts which include a proof of this result, and indeed some cleverness / real ideas seem to be required (unlike the above discussion of the non-surjectivity of $I$ in the infinite-dimensional case which is, while somewhat lengthy to write out in detail, really very straightforward). However, by coincidence I just found on the web a very nice proof of this result which deduces it from Dedekind's Linear Independence of Characters. Please see this note of France Dacar.
H: Is a 2-dimensional subspace always called a plane no matter what the dimension of the space is? Is a 2-dimensional subspace in a 7-dimensional space still called a plane? I know that a 6-dimensional subspace of a 7-dimensional space is called a hyperplane because the difference between the dimensions of the space and the subspace is 1. The answer should be easily googlable, but for some reason it's eluding me. Thanks! AI: I guess this is all convention, but I feel safe to say: for linear (i.e. not curved) spaces: dim = 1: line; dim = 2: plane; codim = 1: hyperplane. When the space is not linear: dim = 1: curve; dim = 2: surface; codim = 1: hypersurface. Codimension is just a name for that difference in dimension you mentioned. So a hyperplane in a 2-dimensional space is in fact a line; even weirder, a hyperplane in a 1-dimensional space is a point... When the hypersurface is given by a polynomial of degree $d$, it is common to refer to it as a quadric ($d=2$), cubic ($d=3$), etc. Edit: I claim no knowledge of terminology when the spaces are infinite-dimensional.
H: Image of commutative diagram is commutative under functor? The Wikipedia article for Functor ( http://en.wikipedia.org/wiki/Functor ) claims: Two important consequences of the functor axioms are (where $F \colon C \to D$ is a covariant functor between categories $C$ and $D$) F transforms each commutative diagram in C into a commutative diagram in D; if f is an isomorphism in C, then F(f) is an isomorphism in D. The second is obvious. The first one seems plausible, but I can't seem to prove it. QUESTION 1: Is the first claim even true? It seems like the following is a counterexample, but I could be missing something. The general construction is a functor that is non-injective on objects and introduces nontrivial homology (thinking of objects and morphisms as 0- and 1-cells in a CW-complex). Define categories $C$ and $D$ by $$\mathrm{Ob}(C) := \{a_0,b_0,c_0,d_0,a_1,b_1,c_1,d_1\},$$ $$\mathrm{Ob}(D) := \{a,b,c,d\},$$ $$\mathrm{Mor}(C) := \{f\colon a_0 \to b_0, g\colon c_0 \to d_0, x\colon a_1\to c_1, y\colon b_1\to d_1\},$$ $$\mathrm{Mor}(D) := \{\phi\colon a\to b, \psi\colon b\to d, \theta\colon a\to c, \omega\colon c\to d, \mu,\nu\colon a\to d\},$$ where $\psi \circ \phi := \mu$ and $\omega \circ \theta := \nu$, and of course the identity morphisms in each category are understood to exist. Define a functor $F\colon C \to D$ by $F(a_i) := a$, $F(b_i) := b$, $F(c_i) := c$, $F(d_i) := d$, $F(f) := \phi$, $F(g) := \omega$, $F(x) := \theta$, and $F(y) := \psi$. Again, the functor is understood to take identity morphisms to identity morphisms. Using the entire category $C$ as the commutative diagram, the image is the category $D$ without the morphisms $\mu$ or $\nu$, and is certainly not a commutative diagram (because the two different paths from $a$ to $d$ yield the two different morphisms $\mu$ and $\nu$). QUESTION 2: Is there an error in this construction? If the original claim is not true in general, it seems like adding the requirement that the functor be injective on objects would be sufficient. AI: Some explanations of why the claim holds were already given. To show what is wrong with your counterexample: the image of the diagram consisting of the four disjoint arrows $f, g, x, y$ is the diagram consisting of the four arrows $\phi, \omega, \theta, \psi$ with no composites asserted; it is not the square diagram in which one additionally asserts $\psi \circ \phi = \omega \circ \theta$. In other words, the result claims nothing about that last diagram. The claim says that whenever something commutes in the first category, its image must commute in the second category. E.g. $f\circ id_{a_0}=f$, hence also $F(f)\circ F(id_{a_0})=F(f)$. To be able to say something about $F(y)\circ F(f)$ you would need some condition on $y\circ f$ in $C$; no such condition can be given here, since these two morphisms cannot even be composed.
H: why is $0=0$ not possible? Hi, one of my friends showed me this "proof": $2^2 - 2^2 = 10 - 10$; $(2+2) (2-2) = 5 (2-2)$; dividing both sides by $(2-2)$: $(2 + 2) = 5$. I know this is wrong, as both the LHS and the RHS of the first line are $0$, and you cannot directly cancel in an equation $0=0$ because $\frac{0}{0} \neq 1$, but I cannot explain this. Can anyone give a solid reason why we cannot cancel in $0=0$? Or is there any other reason for this to be wrong? AI: You can't divide both sides by $(2-2)$, because $(2-2)$ is zero, and you cannot divide by zero. The technical reason for this is that zero does not have a multiplicative inverse in the field of rational numbers (or real numbers, or complex numbers, or any field), because the existence of such an inverse would be inconsistent with the field axioms.
H: Regular Polyhedrons In $\mathbb{R}^3$, there are five regular polyhedra (up to similarity), and they can be parametrized by the numbers of vertices, edges and faces. What is the number of regular polyhedra in $\mathbb{R}^n$, and what is their parametrization? Please suggest reference(s) also. (Thanks in advance.) AI: In short, what happens is the following. The $n$-dimensional analogue of a Platonic solid is called a regular polytope. In any dimension you are guaranteed three "boring" regular polytopes: the $n$-dimensional version of the tetrahedron (the $n$-simplex), the $n$-dimensional hypercube, and its dual, the $n$-dimensional version of the octahedron. In three dimensions, as you know, there are two others. In four dimensions there are others as well, called the 24-cell, 120-cell and 600-cell. In dimensions five and above the boring regular polytopes are the only ones that exist. The wiki pages http://en.wikipedia.org/wiki/Regular_polytopes and http://en.wikipedia.org/wiki/Convex_regular_4-polytope are good places to start. Coxeter's book Regular Polytopes is very comprehensive. Another approach is to look at these things through their reflection symmetry groups: Coxeter's book is a good source for this too; see also http://en.wikipedia.org/wiki/Coxeter_group
H: Partial trace of a system with isolated evolution Let $\rho_{AB}$ be the state of a composite quantum system with state space $H_A\otimes H_B$ (two finite-dimensional Hilbert spaces). Now assume that $A$ and $B$ are isolated and undergo unitary evolutions given by $U_A$ and $U_B$. If we measure the system $A$, then the probability of observing $x$, one of the eigenvalues of the measurement operator, is: \begin{equation} P(x)=tr((Pr_x\otimes I)(U_A\otimes U_B)\rho_{AB}(U_A^*\otimes U_B^*)) \end{equation} where $Pr_x$ is the projector on the eigenspace associated with $x$ and $I$ the identity on $H_B$. I would like to prove that $P(x)$ is independent of the evolution of system $B$ and in particular satisfies \begin{equation} P(x)=tr_A(Pr_xU_Atr_B(\rho_{AB})U_A^*) \end{equation} where $tr_A$ and $tr_B$ are the partial traces over systems $A$ and $B$. It is not exactly homework; it is just an unproven statement in a textbook. AI: $$ \begin{align} \def\tr{\operatorname{tr}} P(x) &= \tr((Pr_x\otimes I)(U_A\otimes U_B)\rho_{AB}(U_A^*\otimes U_B^*)) \\ &= \tr_A(\tr_B((Pr_x\otimes I)(U_A\otimes U_B)\rho_{AB}(U_A^*\otimes U_B^*))) \\ &= \tr_A(Pr_x\tr_B((U_A\otimes U_B)\rho_{AB}(U_A^*\otimes U_B^*))) \\ &= \tr_A(Pr_x\tr_B((U_A\otimes I)\rho_{AB}(U_A^*\otimes I))) \\ &= \tr_A(Pr_xU_A\tr_B(\rho_{AB})U_A^*)\;. \end{align} $$ The steps use, in order: $\tr = \tr_A \circ \tr_B$; the identity $\tr_B((X\otimes I)M)=X\,\tr_B(M)$; the invariance $\tr_B((I\otimes U_B)M(I\otimes U_B^*))=\tr_B(M)$; and finally $\tr_B((U_A\otimes I)M(U_A^*\otimes I))=U_A\,\tr_B(M)\,U_A^*$.
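Here is a numerical sanity check of the computation on random data (my addition, not from the textbook; it assumes NumPy and takes $\dim H_A = 2$, $\dim H_B = 3$):

```python
# Verify P(x) computed with the full evolution equals the reduced-state formula.
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 3

def random_unitary(d):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, _ = np.linalg.qr(z)
    return q                                   # unitary; its distribution is irrelevant here

M = rng.normal(size=(dA*dB, dA*dB)) + 1j * rng.normal(size=(dA*dB, dA*dB))
rho = M @ M.conj().T
rho /= np.trace(rho)                           # random density matrix on H_A (x) H_B

UA, UB = random_unitary(dA), random_unitary(dB)
v = rng.normal(size=dA) + 1j * rng.normal(size=dA)
P = np.outer(v, v.conj()) / np.vdot(v, v)      # rank-1 projector Pr_x on H_A

def ptrace_B(r):                               # tr_B via reshape + einsum
    return np.einsum('ijkj->ik', r.reshape(dA, dB, dA, dB))

U = np.kron(UA, UB)
lhs = np.trace(np.kron(P, np.eye(dB)) @ U @ rho @ U.conj().T)
rhs = np.trace(P @ UA @ ptrace_B(rho) @ UA.conj().T)
print(np.allclose(lhs, rhs))                   # True
```

The `reshape`/`einsum` pair implements $\operatorname{tr}_B$ under NumPy's row-major Kronecker convention, in which the composite index of `np.kron(A, B)` is $(a, b) \mapsto a\,d_B + b$.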