H: Selfadjoint compact operator with finite trace I have a compact selfadjoint operator $T$ on a separable Hilbert space. For some fixed orthonormal basis, the operator's diagonal is in $\ell^1(\mathbb{N})$. Can we conclude that $T$ is trace class? AI: No, we cannot conclude that the operator is trace class. For example, let a Hilbert space have orthonormal basis $e_1,f_1,e_2,f_2,e_3,f_3,\ldots$, and let $T$ interchange $e_i,f_i$, while multiplying both by a positive real $\lambda_i$. That is, in these coordinates, the matrix of $T$ is a list of diagonal blocks, with the $i$-th diagonal block being anti-diagonal with entries $\lambda_i,\lambda_i$. For $\lambda_i\rightarrow 0$, the operator is compact, almost from the definition. All the diagonal entries are $0$. The operator is self-adjoint because the matrix is symmetric real. However, the operator is not trace class unless $\sum_i |\lambda_i|<\infty$, which easily fails for many sequences of positive reals $\lambda_i\rightarrow 0$. Edit: It is noteworthy that the analogous characterization (I pointedly don't say "definition") of "Hilbert-Schmidt" does not depend on choice of basis. Thus, "defining" trace-class as composition of two Hilbert-Schmidt operators is sometimes usefully more intrinsic, less basis/coordinate-dependent.
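A quick numerical illustration of this counterexample (a minimal sketch, with the arbitrary choice $\lambda_i = 1/i$): the diagonal of every truncation is identically zero, yet the singular values come in pairs $\lambda_i,\lambda_i$, so the trace norm of the truncations grows like $2\ln N$.

    import numpy as np

    # Truncate the counterexample to N blocks with lambda_i = 1/i.
    for N in (10, 100, 1000):
        lam = 1.0 / np.arange(1, N + 1)
        T = np.zeros((2 * N, 2 * N))
        for i, l in enumerate(lam):
            T[2 * i, 2 * i + 1] = l  # T e_i = lambda_i f_i
            T[2 * i + 1, 2 * i] = l  # T f_i = lambda_i e_i
        s = np.linalg.svd(T, compute_uv=False)
        # diagonal sum stays 0; sum of singular values ~ 2 ln N, unbounded
        print(N, np.trace(T), round(s.sum(), 2))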
H: Closure of image of diagonal morphism of S-scheme Let $X$ be an $S$-scheme with structural morphism given by $f : X \to S$. The image of the diagonal morphism $\Delta : X \to X \times_S X$ is contained in the subset $Z := \{ z \in X \times_S X : p(z) = q(z) \} \subset X \times_S X$ where $p, q$ are the projection maps. Is $Z$ closed in general? Is it furthermore the closure of $\Delta(X)$? AI: In general, $Z$ will not be closed. As an example, consider $X=\mathbb{A}^1_k$, $S=\text{Spec}(k)$, where $k$ is a field. Then we have $X\times_S X=\mathbb{A}^2_k$ is the affine plane over $k$. Let $C$ be an irreducible plane curve which is not a vertical or horizontal line, or the diagonal. The curve $C$ has a generic point $x_C\in X\times_S X$, which is not closed. One can check that $p(x_C)=q(x_C)$ is the generic point of $X$, so $x_C \in Z$. However, the closure of $\{x_C\}$ contains all of the (closed) points lying on the curve $C$, so by hypothesis, contains a point not on the diagonal. This point will not be in $Z$, so $Z$ is not closed.
H: Continuous variables and proportionality I have another problem in a statistics past paper that goes as follows: Let $X$ be a continuous random variable, taking values in the range $[0,1]$ with pdf given up to proportionality by $f(x) \sim x^4$, $0 \le x \le 1$. What is the value of $E(X)$? Normally $E(X)$ should be $$\int_0^1 xf(x) ~dx,$$ right? In which case $E(X)$ would be $1/6$. But for some reason that's not the answer in the marking scheme. Does the keyword "proportionality" have anything to do with it? If so, what? AI: Begin as follows. The density is of the form $x\mapsto cx^4$. It must integrate to 1, so $$1 = \int_0^1 cx^4\,dx = {c\over 5}.$$ We conclude that $c = 5,$ so the density is $$f(x) = 5x^4 \qquad 0\le x\le 1.$$ Now you can compute the expectation.
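For completeness (the original answer leaves this last step to the reader), the normalized density gives $$E(X)=\int_0^1 x\cdot 5x^4\,dx=\int_0^1 5x^5\,dx=\frac{5}{6},$$ rather than the $1/6$ obtained when the normalizing constant is ignored.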
H: Zeros of a complex polynomial The question is: Show that $$ P(z) = z^4 + 2z^3 + 3z^2 + z +2$$ has exactly one root in each quadrant of the complex plane. My initial thought was to use Rouche's Theorem (since that's generally what I use to find how many roots a complex polynomial has), but the more I think about it, the more I'm not sure how to make it work. Here is my attempt: First, pick a radius for a circle that can encompass all four roots of the polynomial. For simplicity's sake (in my opinion), I went with $|z| = 5$. Setting $$f(z) = z^4$$ and $$g(z) = 2z^3 + 3z^2 + z + 2$$ I get $|f(z)| = 625$ and $|g(z)| \le 332$ on this circle, so by Rouche's Theorem we have four roots in the disc. Now, my thought was that I could somehow separate the quadrants by breaking up my circle into quarter-circles (like four slices of pie) and applying Rouche's Theorem again on each of these new domains. However, finding the place on the boundary where the value hits its max for some $f(z)$ or $g(z)$ would be messy (at best), since if I just use $|z| = 5$, I'm right back where I started. There's also the issue that these zeroes may occur on the boundary, i.e. on the real or imaginary axis, which wouldn't be what I'm trying to show. So now, I'm just stuck, so if anyone can see how to tackle this, it would be greatly appreciated. AI: First dispose of real roots: e.g. $P(z) = z^2 (z+1)^2 + 2 (z+1/4)^2 + 15/8$. Let's look at what happens to $P(z)$ as $z$ goes around a contour around part of the first quadrant. As $z$ goes from $0$ to some large positive $R$ on the real axis, $P(z)$ increases from $2$ to $P(R) \gg 0$. Then go on the quarter-arc of the circle $|z| = R$ from $R$ to $iR$: $P(z)$ goes almost in a circle, ending at $P(iR)$, which is in the fourth quadrant. Now come back in to the origin on the imaginary axis. Note that $\text{Re}(P(it)) = t^4 - 3 t^2 + 2 = 0$ at $t = 1$ and $t=\sqrt{2}$, while $\text{Im}(P(it)) = - 2 t^3 + t = 0$ at $t=0$ and $t = \sqrt{2}/2$. So you hit the negative imaginary axis at $t=\sqrt{2}$ and again at $t=1$, then the positive real axis at $t=\sqrt{2}/2$ and $t=0$, but not the negative real or positive imaginary axis. Thus as $z$ goes around this contour, the winding number of $P(z)$ around $0$ is $1$, indicating that there is exactly one zero of $P(z)$ inside the contour. [The original answer concluded with a plot of the image of this contour for the case $R=1.6$.]
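The winding-number argument can be checked numerically; here is a minimal sketch (not part of the original answer) that traces the boundary of the quarter-disk and counts how many times $P(z)$ winds around $0$:

    import numpy as np

    def P(z):
        return z**4 + 2*z**3 + 3*z**2 + z + 2

    R, n = 5.0, 100000
    t = np.linspace(0, 1, n)
    contour = np.concatenate([
        R * t,                           # 0 -> R along the real axis
        R * np.exp(1j * np.pi / 2 * t),  # quarter arc from R to iR
        1j * R * (1 - t),                # iR -> 0 down the imaginary axis
    ])
    w = P(contour)
    # total change of arg(P) around the closed contour, in units of 2*pi
    winding = np.sum(np.diff(np.unwrap(np.angle(w)))) / (2 * np.pi)
    print(round(float(winding), 3))  # ~1.0: one root in the first quadrant

By the reflection symmetry $P(\bar z)=\overline{P(z)}$ there is then also exactly one root in the fourth quadrant, and since there are no real roots, the remaining conjugate pair accounts for the second and third quadrants.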
H: Pointwise order of the Cartesian product of two preordered chains Definitions: (From Categories for Types by Roy L. Crole.) A preorder on a set $X$ is a binary relation $\leq$ on $X$ which is reflexive and transitive. A preordered set $(X, \leq)$ is a set equipped with a preorder.... Where confusion cannot result, we refer to the preordered set $X$ or sometimes just the preorder $X$. If $x \leq y$ and $y \leq x$ then we shall write $x \cong y$ and say that $x$ and $y$ are isomorphic elements. Given two preordered sets $A$ and $B$, the point-wise order on the Cartesian product $A \times B$ is defined as $(a,b) \le (a',b')$ if and only if $a \le a'$ and $b \le b'$. The result is a preorder. A subset $C$ of a preorder $X$ is called a chain if for every $x,y \in C$ we have $x \leq y$ or $y \leq x$.... We shall say that a preorder $X$ is a chain ... if the underlying set $X$ is such. (p.8) Exercise: Let $C$ and $C'$ be chains. Show that the set of pairs $(c, c')$, where $c \in C$ and $c' \in C'$, with the pointwise order is also a chain just in case at most one of $C$ or $C'$ has more than one element. (p.9) Proposed Solution: Suppose $C$ is a preorder with more than one element such that for every $a, b \in C$ we have $a \cong b$. Then by the definition given above, $C$ is a chain. Now suppose that $C'$ is a chain (without any additional properties). I claim that $C \times C'$ is a chain. Proof: Let $(c_1, c'_1), (c_2, c'_2) \in C \times C'$. Then $c_1 \cong c_2$ and ($c'_1 \le c'_2$ or $c'_2 \le c'_1$). So $c_1 \le c_2$ and $c_2 \le c_1$. If $c'_1 \le c'_2$, then $(c_1, c'_1) \le (c_2, c'_2)$. If $c'_2 \le c'_1$, then $(c_2, c'_2) \le (c_1, c'_1)$. Question: This seems to be a counterexample to the statement I am asked to prove. Is the question in error or am I missing something here? AI: Chains usually come up in the context of partial orders, and in that case your definition is equivalent to a chain being a totally ordered subset. This nLab page defines a chain as a totally ordered subset even in the context of preorders. And the statement you're asked to prove is true if chains are defined that way. So I wonder whether there's simply an "either" missing in the definition of a chain: "... for every $x,y\in C$ we have either $x\le y$ or $y\le x$."
H: Triangulate rectangular parallelepiped in $\mathbb{R}^{n}$ I need to triangulate an $n$-dimensional rectangular parallelepiped in $\mathbb{R}^{n}$ into a set of $n$-simplices. Could you suggest any known algorithm for that, or maybe an extension of Delaunay triangulation to the cases $n > 2$? AI: The construction outlined by Ross suffices to dissect an $n$-cube into $n!$ congruent $n$-simplices known as Schläfli orthoschemes. One might ask if $n!$ is the smallest number of simplices needed to dissect an $n$-cube, and it is not in general. For example the $3$-cube may be dissected into as few as five tetrahedra (not all congruent). See Lower Bounds for the Simplexity of the $n$-Cube. The exact minimum number needed to dissect a $4$-cube is known to be $16$ rather than $4! = 24$.
H: Use a Jacobian matrix to differentiate between linear and non-linear transformations When determining whether or not a map/transformation is linear or non-linear, how can the Jacobian matrix be used? A linear equation in two variables is one that may be written in the form y = ax + b, but how do I know if it is non-linear? AI: If the Jacobian is constant, your function is affine, i.e. a linear map plus a constant shift (like your $y = ax + b$); it is linear in the strict sense when, in addition, it sends $0$ to $0$. If the Jacobian varies from point to point, the function is non-linear.
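A small symbolic check of this criterion, as a sketch with two made-up maps:

    import sympy as sp

    x, y = sp.symbols('x y')

    f = sp.Matrix([2*x + 3*y + 1, x - y])  # affine: constant Jacobian
    g = sp.Matrix([x**2, x*y])             # non-linear: Jacobian varies

    for h in (f, g):
        J = h.jacobian([x, y])
        # a constant Jacobian has no free symbols left after differentiation
        print(J, 'constant' if not J.free_symbols else 'varies with (x, y)')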
H: Does multiplying by $dt$ have any meaning? Consider, for example, the equation $x'=x$, then it is usually solved by writing $\frac{dx}{dt}=x\implies\frac{dx}{x}=dt\implies\int\frac{dx}{x}=\int dt$ ... I know that there is a theorem in ODE that justify $x'=x\implies\int\frac{dx}{x}=\int dt$ ,but my question is about the intermediate step: $x'$ at some point $x_{0}$ is defined via a limit. $\frac{dx}{dt}$ is, as far as I understand, a notation for the function $x'$ - so we can not multiply by $dt$ since it has no meaning, it is a part of the notation. My question is as follows: Although the last step is indeed correct and can be justified, does the intermediate step (multiplying by $dt$) have any meaning, or is it just an easy way to remember and get to the last step ? AI: It's really a formalism supported by the fundamental theorem of calculus and the chain rule.
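Concretely, filling in the step the answer alludes to: one never actually multiplies by $dt$. Starting from $\frac{1}{x}\frac{dx}{dt}=1$ and integrating both sides with respect to $t$, $$\int \frac{1}{x(t)}\frac{dx}{dt}\,dt=\int 1\,dt \quad\Longrightarrow\quad \int\frac{dx}{x}=\int dt,$$ where the left-hand equality is the substitution rule, i.e. the chain rule read backwards together with the fundamental theorem of calculus. The "multiply by $dt$" step is shorthand for exactly this.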
H: What is the Psi(x) variable binding operator? The Free Variable article on Wikipedia lists several variable-binding operators. I have seen all of them during my math studies, except for the psi operator. What does $\psi x$ mean in this context? AI: I don't know what was intended by $\psi$ here, but some Wikipedia archaeology reveals that it was introduced in this edit, and the same user tried to remove it again one minute later. They made a mess out of the removal, and Michael Hardy undid the mess, leaving in the $\psi x$ with no explanation. None is likely forthcoming, because the user who added the $\psi x$, back in August 2008, has not been back to Wikipedia since. In short, it is most likely a piece of Wikipedia nonsense. Unless someone posts a definitive answer here, I will shortly remove the $\psi x$ from the Wikipedia article. As a consolation prize, here are some other variable binding operators you may not be familiar with, which are not in the Wikipedia article: Robin Milner's $\pi$-calculus uses "$\nu x. F$" to denote an expression $F$ in which $x$ is instantiated to a "new" variable that has never been used before within the scope of the current computation. Whitehead and Russell use "$\iota x.\Phi(x)$" to denote the unique $x$ satisfying some description $\Phi(x)$. For example, "$\iota x. x$ is the King of Swaziland" is an expression denoting the King of Swaziland. Hilbert used "$\epsilon x.\phi(x)$" to denote "some value for which $\phi$ is true". That is, for any property $\phi$, if $\exists x.\phi(x)$ is true, then so is $\phi(\epsilon x.\phi(x))$. Similarly, Hilbert used $\mu x.\phi(x)$ to denote the smallest natural number for which $\phi$ is true. Category theorists often use $\exists!x.\phi(x)$ as an abbreviation for $\exists x\forall y.\phi(x) \land (\phi(y)\implies y=x)$, or some equivalent variation of it. It says that there is exactly one $x$ for which $\phi(x)$ holds. Commonly-used quantifiers that do not seem to have any standard compact notation include "almost everywhere" and "all but finitely many".
H: What's the equivalent of the adjacency relation for a directed graph? I've found several sources describing a relation notated $\sim$ signifying adjacency in an undirected graph, but nothing explicitly describing an equivalent for a directed graph. I've been using $\overset{\mathit{member}}{\longrightarrow}$, mainly because I have different types of edges so I need a long symbol to overset with the type of the edge linking the nodes. Is this appropriate? Is there a more usual notation? AI: An arrow is absolutely appropriate to use in this case. For example, see Shimon Even's Graph Algorithms. In fact, I think it would be questionable to use anything but an arrow for a directed edge. Note that Even places the name of the edge over the arrow; your notation of putting the edge type over the arrow is fine too. Or you might put the type underneath, to save room above for the name in case you might need it later.
H: Am I allowed to move around an operator like this? Can I take this product: $$\frac{dL}{dt}\frac{d L}{d \dot{x}}$$ And factor out one of the $L$'s to get: $$L\frac{d}{dt} \left( \frac{d L}{d \dot{x}}\right)$$ Where the operator $\frac{d}{dt}$ now operates on $\frac{d L}{d \dot{x}}$? Is this allowed? Thanks AI: This is only allowed if $L$ is not a function of $t$; if $L$ depends on $t$, it is not. Note that this is not really factoring, but an application of the identity $$\frac{d}{d\,x}(cf(x))=c\frac{d}{d\,x}f(x),$$ which holds only when $c$ is a constant.
H: Finding a pair of elements to satisfy an inequation Let $F$ be a field of characteristic 2 with more than 2 elements. Show that there are elements $a$ and $b$ in $F$ such that $(a+b)^3 \not= a^3 + b^3$. $F$ couldn't possibly have less than 2 elements, and if it had exactly 2 (that is, $F = \mathbb Z_2$), $(a+b)^3$ would actually always be $a^3+b^3$. With $\#F>2$, how do I find $a$ and $b$ to violate that? All I could do so far is simplify the inequation, using the characteristic and the commutative property: $$\begin{align} (a+b)^3 &\not= a^3 + b^3\\ a^3 + 3a^2b + 3ab^2 + b^3 &\not= a^3 + b^3\\ a^3 + a^2b + ab^2 + b^3 &\not= a^3 + b^3\\ a^2b + ab^2 &\not= 0\\ ab(a + b) &\not= 0\quad. \end{align}$$ AI: In a field, a product is $0$ iff at least one of the factors is $0$. Here, if $a\neq0$, $b\neq 0$ and $a\neq -b$, then $ab(a+b)\neq 0$. So choose non-zero elements of the field $F$ such that one is not the additive inverse of the other. Note that in characteristic $2$ we have $-b=b$, so this just means choosing two distinct non-zero elements $a\neq b$, and such a pair exists precisely because $\#F>2$.
H: Evaluating $\int_{-1}^1\sqrt{1-x}\,dx$ Evaluate using the Fundamental Theorem of Calculus: $$\int_{-1}^1 \sqrt{1-x} \, dx$$ I'm not sure what to do with the $\sqrt{1-x}$. Most of the problems I'm doing for this assignment use substitution but I just end up getting $u = 1-x$ and of course $-du = dx$. Where do I go from here? AI: Put $u=1-x\implies du=-dx$. Substituting into the integral and changing the limits according to the relation $u=1-x$ converts your integral to $$-\int_2^0 \sqrt u \,du=\left.\frac{-2u^{3/2}}{3}\right|_2^0=\frac{4\sqrt 2}{3}$$
H: evaluating $\int_{7}^{10} \frac{xdx}{\sqrt{x-6}}$ Evaluate using the Fundamental Theorem of Calculus: $$\int_{7}^{10} \frac{xdx}{\sqrt{x-6}}$$ I am stuck with the $x$ on top. I substitute $u = x-6$ and then $du = dx$. So... $$\int_{1}^{4} (u)^{-1/2}x\,du$$ Is it okay to still have that $x$ there, or do I need to substitute something else? Something maybe like... $x(x-6)^{-1/2}?$ AI: Substitute $x = u + 6$ for the remaining $x$; you get: $$\int_1^4 \frac{u + 6}{\sqrt{u}}\, du = \int_1^4 \sqrt{u}\, du + \int_1^4 \frac{6}{\sqrt{u}}\, du.$$
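For what it's worth, a completion of this hint (not part of the original answer): evaluating the two integrals gives $$\int_1^4 \sqrt u\,du + \int_1^4 \frac{6}{\sqrt u}\,du = \left.\tfrac{2}{3}u^{3/2}\right|_1^4 + \left.12\sqrt u\,\right|_1^4 = \tfrac{14}{3} + 12 = \tfrac{50}{3}.$$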
H: "Base for a neighborhood system at a point" vs. "base at a point" For a topological space X do the terms "base for a neighbourhood system at a point" and "base at a point" have the same meaning? AI: Yes. Usually these two things mean the same.
H: Does $z ^k+ z^{-k}$ belong to $\Bbb Z[z + z^{-1}]$? Let $z$ be a non-zero element of $\mathbb{C}$. Does $z^k + z^{-k}$ belong to $\mathbb{Z}[z + z^{-1}]$ for every positive integer $k$? Motivation: I came up with this problem from the following question. Maximal real subfield of $\mathbb{Q}(\zeta )$ AI: Let's go by induction as in @Arturo Magidin's answer. The result holds for $k=0,1$. Assume $z^k+z^{-k} \in \mathbb{Z}[z+z^{-1}]$ for $0\le k\le n$. But $$z^{n+1}+z^{-(n+1)} = (z^n+z^{-n})(z+z^{-1}) - (z^{n-1} + z^{-(n-1)}),$$ and so $z^{n+1}+z^{-(n+1)} \in \mathbb{Z}[z+z^{-1}]$.
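One can watch the induction produce honest integer polynomials; here is a minimal sympy sketch (not part of the original answer) that builds $p_k(w)$ with $p_k(z+z^{-1})=z^k+z^{-k}$ via the recurrence:

    import sympy as sp

    z, w = sp.symbols('z w')

    p = [sp.Integer(2), w]  # p_0 = 2, p_1 = w
    for k in range(2, 6):
        p.append(sp.expand(w * p[-1] - p[-2]))  # p_{k+1} = w*p_k - p_{k-1}
    for k, pk in enumerate(p):
        # verify p_k(z + 1/z) = z^k + z^(-k)
        assert sp.simplify(pk.subs(w, z + 1/z) - (z**k + z**-k)) == 0
    print(p)  # [2, w, w**2 - 2, w**3 - 3*w, w**4 - 4*w**2 + 2, ...]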
H: Does this question involve series? While solving the problem Half the people on a bus get off at each stop after the first and no one gets on after the first stop. If only one person gets off at stop number 7, how many people got on at the first stop? Here is how I am solving it, but I guess I am wrong. Let the number of people who got on be $x$, so $x + \frac{x}{2} + \frac{x}{4}+ \frac{x}{8}+ \frac{x}{16}+ \frac{x}{32}+ 1 = 0 $ Any suggestions? AI: Let us do it in your style. Say that there are $x$ people on the bus after the first stop. At the second stop, $\frac{x}{2}$ get off, so $\frac{x}{2}$ are left. At the third stop, $\frac{x}{4}$ get off, and $\frac{x}{4}$ are left. At the fourth stop, $\frac{x}{8}$ get off, $\frac{x}{8}$ are left. At the fifth stop, $\frac{x}{16}$ get off, $\frac{x}{16}$ are left. At the sixth stop, $\frac{x}{32}$ get off, $\frac{x}{32}$ are left. At the seventh stop, $\frac{x}{64}$ get off, and $\frac{x}{64}$ are left. But $1$ person is left. So $\frac{x}{64}=1$, and therefore $x=64$. Alternately, there were $2$ people on the bus just before the $7$th stop, so $4$ just before the sixth, so $8$ just before the fifth, so $16$ just before the fourth, so $32$ just before the third, so $64$ just before the second stop. Or else (but this is too much work): $\frac{x}{2}$ got off at the second stop, and so on, leaving $1$ person on the bus after the seventh. This gives us the equation $$x=\frac{x}{2}+\frac{x}{4}+\frac{x}{8}+\frac{x}{16}+\frac{x}{32}+\frac{x}{64}+1.$$ Solve for $x$. Maybe to make life easier first multiply both sides by $64$. Remark: Does this question involve series? Well, if we solve it my third way, which is fairly inefficient, then indeed we end up finding the sum of a finite geometric series. But there are more efficient ways that just involve a sequence.
H: Localization and Noetherian property From page 101 in Atiyah-MacDonald: "Two of the important properties of localization are that it preserves exactness and the Noetherian property...." I remember proving that it preserves exactness, it's proposition 3.3. on page 39. But what is meant by "Noetherian property"? Thanks. AI: It means that if $M$ is Noetherian then so is $S^{-1}M$. If you first prove that for any submodule $N'$ of $S^{-1}M$ one has $N' = S^{-1}N$, where $N$ is the inverse image of $N'$ in $M$, then this follows quickly.
H: Solving absolute value inequalities. Prove: If $z,\alpha \in \mathbb{C}$ with $|z|<1, |\alpha|<1$ then $\dfrac{|z|^2+|\alpha|^2}{1+|\alpha z|^2}<1$ AI: Here, $|\alpha|^2+|z|^2-|\alpha z|^2=|\alpha|^2+|z|^2-|\alpha|^2|z|^2-1+1=1-(1-|\alpha|^2)(1-|z|^2)$. Now, since $|\alpha|\lt 1$ and $|z|\lt 1$, we have $(1-|\alpha|^2)(1-|z|^2)\gt 0\implies 1-(1-|\alpha|^2)(1-|z|^2)\lt 1$ $$\implies |\alpha|^2+|z|^2-|\alpha z|^2\lt 1$$ $$\implies |\alpha|^2+|z|^2\lt|\alpha z|^2 + 1$$ $$\implies \frac{|\alpha|^2+|z|^2}{|\alpha z|^2 + 1}\lt 1$$ The inequality remains unchanged upon division since $|\alpha z|^2 + 1\gt 0$.
H: Why are $\sin$ and $\cos$ (and perhaps $\tan$) "more important" than their reciprocals? (My personal "feel" is that $\sin$ and $\cos$ are first-class citizens, $\tan$ is "1.5th-class," and the rest are second-class; I'm sure there are others who feel the same.) Main question(s): From a purely high-school-geometric/"ninth-century-geometer's" standpoint, is there any reason why this should be so? Given the usual elementary knowledge of triangles when one is first introduced to these functions, I think it appears pretty arbitrary. How should I convince a high school student that $\sin$, $\cos$, and $\tan$, instead of their reciprocals, should be our main objects of study? How did history decide on their superiority? Of course, with real analysis goggles, things look quite a bit different: $\sin$ and $\cos$ are the only ones that are continuous everywhere; $f'' = -f$ characterizes all their linear combinations; they have much nicer series representations; etc. But I suspect this is all hindsight. (I don't pretend to know enough about complex analysis, but I suspect even more nice things happen there with $\sin$ and $\cos$, and even more ugly things happen with the other four. In any case, I doubt history chose $\sin$ and $\cos$ to be first-class citizens because of their complex properties.) Secondary questions: Is there any reason why $\sin$ is the "main" function and $\cos$ is "only" its complement, or is this arbitrary as well? Is there any reason why $\tan$ is preferred to $\cot$? AI: Historically, trigonometric functions were originally computed in terms of "chords" (the angles subtended by a circular arc), and the sine was computed from a bisected chord (so a "half-chord"). In fact, the word "sine" originates from a mis-translation (from Arabic) of the Sanskrit word "jyā" which literally means "bow-string". (The word "jyā" became "jība" in Arabic, and then subsequently "jaib". A cognate of "jaib" in Arabic has the meaning of "bosom", and jaib was mistakenly rendered into Latin as "sinus". So yes, the word "sine" was originally not safe for work). The importance of the cosine seems also to have been first recognized by Indian mathematicans, who called it "koṭi-jyā" or "kojyā" meaning (roughly) "sine of the extreme angle" ("koṭi" means "the extreme end of a bow" or "extremity" in general). So the sine as the "main" function, and cosine as the "adjunct" function goes back at least 16 centuries (in the Surya Siddhanta, written some time in the 4th century).
H: Expected value of a continuous random variable: interchanging the order of integration I have come across a proof of the following in Ross's book on Probability - For a non-negative continuous random variable Y with a probability density function $f_Y$ $$ \mathrm{E} [Y] = \int_0^\infty P[Y \geq y]dy $$ The author proves it by using $$ \int_0^\infty \int_y^\infty f_Y(x)dxdy = \int_0^\infty (\int_0^x dy) f_Y(x)dx $$ He refers to it as "interchanging the order of integration". I have studied a fair amount of Calculus from Apostol's books (Vol 1 & 2). But I still can't seem to provide a proof of this equation. How does one go about proving this last equation? AI: We have \begin{align*} \int_{[0,\infty)}\int_{[y,\infty)} f_Y(x)\; dx\, dy &= \int_{[0,\infty)}\int_{[0,\infty)}\chi_{[y,\infty)}(x)f_Y(x)\;dx\,dy\\ &= \int_{[0,\infty)}\int_{[0,\infty)}\chi_{[y,\infty)}(x)f_Y(x)\;dy\,dx\\ &= \int_{[0,\infty)}\int_{[0,\infty)}\chi_{[y,\infty)}(x)\;dy\cdot f_Y(x)\;dx\\ &= \int_{[0,\infty)} \int_{[0,\infty)} \chi_{[0,x]}(y)\; dy\cdot f_Y(x)\; dx\\ &= \int_{[0,\infty)} xf_Y(x)\; dx\\ &= E(Y) \end{align*} where $\chi_A$ denotes the indicator function of a set $A$. The interchange of the $x$- and $y$-integrations in the second equality is justified by Tonelli's theorem, since the integrand is non-negative; this is exactly the step Ross calls "interchanging the order of integration".
H: Intersection of all neighborhoods of zero is a subgroup Let $G$ be a topological abelian group. Let $H$ be the intersection of all neighborhoods of zero. How is $H = \mathrm{cl}(\{0\})$? Isn't the closure of a set $A$ the smallest closed set containing $A$ which is the same as the intersection of all closed sets containing $A$? But neighborhoods in $G$ are not necessarily closed. Thanks. (Edit) To see that $H$ is a subgroup: First note that by construction $H$ contains $0$. Furthermore, $f: x \mapsto -x$ is continuous and its own inverse so that $f$ is also open. Hence $U$ is a neighborhood of $0$ if and only if $-U$ is. Now let $x \in H$. Then $x$ is in every neighborhood $U$ of $0$. Hence $x$ is in every neighborhood $-U$ of $0$. Hence $-x$ is in $U$ and hence in $H$. Alternatively one can verify it as follows: $$x \in H \iff x \in \bigcap_{U \text{ nbhd of } 0} U \iff x \in \bigcap_{U \text{ nbhd of } 0} -U \iff -x \in \bigcap_{U \text{ nbhd of } 0} U \iff -x \in H$$ To see that $x+y$ is in $H$ if $x,y \in H$, note that $g: (x,y) \mapsto x+y$ is continuous. Now let $V$ be an arbitrary neighborhood of $0$. Then since $g$ is continuous there exists a neighborhood $N \times M$ of $(0,0)$ such that $g(N \times M) \subset V$. Since $G \times G$ has the product topology, $N \times M$ is a neighborhood of $0$ if and only if $N$ and $M$ are neighborhoods of $0$. Hence $x,y \in N$ and $x,y \in M$ and hence $g((x,y)) = x + y \in V$ since $g(N \times M) \subset V$. AI: $\def\cl{\mathop{\mathrm{cl}}}$ For $x \in G$, let $\mathcal U_x$ denote the set of all neighbourhoods of $x$. Then we have that $x - \mathcal U_0 = \mathcal U_x$ for each $x \in G$. It follows \begin{align*} x \in \cl\{0\} &\iff \forall U \in \mathcal U_x : U \cap \{0\} \ne \emptyset\\ &\iff \forall V \in \mathcal U_0: (x - V) \cap \{0\} \ne \emptyset\\ &\iff \forall V \in \mathcal U_0: 0 \in x - V\\ &\iff \forall V \in \mathcal U_0 : x \in V\\ &\iff x \in \bigcap \mathcal U_0. \end{align*}
H: Double Integral Claim How can I prove the following inequality: (where $f $ is nice enough) - Given a function $ f(x,y) : \Omega_1 \times \Omega_2 \to \mathbb{R} $ , and $\alpha,C_1,C_2 $ are some constants, ( $\Omega_i$ is equipped with a probability measure $ \mu_i $ respectively) , then there exists a constant $C_3 $ such that $$ \begin{multline}C_1^2 \int_{\Omega_1 } \left| \int_{\Omega_2} f(x,y) d \mu _2 - \alpha\right| ^2 d\mu_1 + C_2^2 \int_{\Omega_2 } \left| \int_{\Omega_1} f(x,y) d \mu _1 - \alpha\right|^2 d\mu_2 \\ \geq C_3^2 \int_{\Omega_1}\int_{\Omega_2} |f(x,y)-\alpha|^2 d\mu_1 d\mu_2\end{multline}$$ is true. It seems like it's kind of triangle's inequality. Have you got an idea? AI: As written it is not true (if you meant, as I assumed, that $C_1, C_2, C_3$ are positive constants; otherwise just setting $C_3 = 0$ settles all cases with $C_1, C_2 \geq 0$). Let $\Omega_1 = \Omega_2 = [0,1] \subset \mathbb{R}$ with the Lebesgue measure (making them probability spaces). Let $f(x,y) = \sin(2\pi x) \sin(2\pi y)$. We have that $$ \int_0^1 f(x,y) \mathrm{d}y = 0 = \int_0^1 f(x,y)\mathrm{d}x $$ Hence if you take $\alpha = 0$ the left hand side of your desired inequality is identically 0. But the right hand side is given by $$ \iint_{[0,1]\times[0,1]} \sin(2\pi x)^2 \sin(2\pi y)^2 ~ \mathrm{d}x ~\mathrm{d}y = \frac14 $$
H: Height of triangle inside a parallelogram I am stumped on the following question: PQRS is a parallelogram and ST=TR. What is the ratio of the area of triangle QST to the area of the parallelogram? (Ans 1:4) I need the height of the triangle; how would I get that? Any suggestions would be appreciated. AI: $$A_T=\frac{1}{2}\overline{ST}\,\overline{QH}=\frac{1}{2}\,\frac{1}{2}\overline{SR}\,\overline{QH}=\frac{1}{4}A_P$$ where $H$ is the orthogonal projection of $Q$ on the line containing $S$ and $R$. Here $\overline{ST}=\frac{1}{2}\overline{SR}$ because $ST=TR$ makes $T$ the midpoint of $SR$, and $\overline{QH}$ is both the height of the triangle and the height of the parallelogram, so $A_P=\overline{SR}\,\overline{QH}$.
H: Complex torus has topological genus one Could anyone give me a hint on how to show that a complex torus has topological genus one, by constructing an explicit homeomorphism to $S^1\times S^1$? Complex torus: $\mathbb{C}/L$, where $L=\mathbb{Z}\omega_1+\mathbb{Z}\omega_2$. Thank you. AI: For convenience, I will identify $S^1$ with the topological group $\mathbb{R} / \mathbb{Z}$. Let $\Lambda$ be the lattice $\mathbb{Z} \omega_1 + \mathbb{Z} \omega_2$. Since $\omega_1$ and $\omega_2$ are not parallel, there exists a choice of (real!) coordinates $(x, y)$ on $\mathbb{C}$ such that $\omega_1 = (1, 0)$ and $\omega_2 = (0, 1)$. It is clear that there is a continuous (indeed, smooth) group homomorphism $\mathbb{C} \to S^1 \times S^1$ given by the formula $$(x, y) \mapsto (x + \mathbb{Z}, y + \mathbb{Z})$$ and $\Lambda$ lies in the kernel of this homomorphism, so it factors as a homomorphism $\mathbb{C} / \Lambda \to S^1 \times S^1$. It is not hard to check that this is bijective. On the other hand, $\mathbb{C} / \Lambda$ is compact and $S^1 \times S^1$ is Hausdorff, so this is not just a continuous bijection but a homeomorphism. There is a much more exciting answer involving Weierstrass $\wp$-functions, but for that one should take a different definition of "genus one" and/or "complex torus"...
H: $\mathbb{CP}^1$ is compact? $\mathbb{CP}^1$ is the set of all one-dimensional subspaces of $\mathbb{C}^2$: if $(z,w)\in \mathbb{C}^2$ is non-zero, then its span is a point $[z:w]\in\mathbb{CP}^1$, with $[z:w]=[\lambda z:\lambda w]$ for $\lambda\in\mathbb{C}^{*}$. Let $U_0=\{[z:w]:z\neq 0\}$ and $U_1=\{[z:w]:w\neq 0\}$, with the map $\phi_0:U_0\rightarrow\mathbb{C}$ defined by $$\phi_0([z:w])=w/z$$ and the map $\phi_1:U_1\rightarrow\mathbb{C}$ defined by $$\phi_1([z:w])=z/w$$ Now, could anyone tell me why $\mathbb{CP}^1$ is the union of two closed sets $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$, where $D$ is the closed unit disk in $\mathbb{C}$, and why $\mathbb{CP}^1$ is compact? AI: The maps you gave are the coordinate charts on $\mathbb{C}\mathbb{P}^1$ that make it into a manifold; in particular, they are homeomorphisms of the $U_i$ onto $\mathbb{C}$. If we take $\phi_0^{-1}(D)$, we get all points $[1:z]$ with $z\in D$. Similarly, $\phi_1^{-1}(D)$ is all points of the form $[z:1]$ with $z \in D$. Together, these sets cover $\mathbb{C}\mathbb{P}^1$: for any $[z:w]$, at least one of $|w/z|\le 1$ or $|z/w|\le 1$ holds. The inverse maps $\phi_i^{-1}$ are continuous (the charts are homeomorphisms), so $\phi_0^{-1}(D)$ and $\phi_1^{-1}(D)$ are compact, as the image of a compact set under a continuous map is compact. It's not hard to show that if a space is a union of two compact sets, it is compact, so we are done.
H: Determining and (dis)proving if $ \sum_{n = 1}^{\infty} (-1)^{n + 1} \left( 1 - n \log \left( \frac{n + 1}{n} \right) \right) $ converges I am trying to determine if $ \sum_{n = 1}^{\infty} (-1)^{n + 1} \left( 1 - n \log \left( \frac{n + 1}{n} \right) \right) $ converges using an alternating series test. The test in question requires me to prove $ 1 - n \log \left( \frac{n + 1}{n} \right) $ is decreasing and that $ \lim_{n \to \infty} \left( 1 - n \log \left( \frac{n + 1}{n} \right) \right) = 0 $ to prove this series is convergent. My intuition says that this series is convergent because $ n \log \left( \frac{n + 1}{n} \right) $ will tend towards 1 as $n$ goes to infinity (due to the definition of $e$). I am having trouble proving the sequence $ 1 - n \log \left( \frac{n + 1}{n} \right) $ is decreasing. I set up the inequality $ 1 - (n + 1) \log \left( \frac{n + 2}{n + 1} \right) \leq 1 - n \log \left( \frac{n + 1}{n} \right) $ but I feel like I am stuck on some simple algebra. Any hints for proving the sequence is nonincreasing? Or am I just wrong? EDIT: Could this be done using absolute convergence? EDIT2: I am seeing some really great answers, but I am trying to prove this without calculus. (No derivatives or Taylor series expansions.) AI: Using the Taylor expansion of $\log(1+x)$ we see that $$ 1-n\log\Bigl(\frac{n+1}{n}\Bigr)=1-n\Bigl(\frac{1}{n}-\frac{1}{2\,n^2}+O(n^{-3})\Bigr)=\frac{1}{2\,n}+a_n,\quad a_n=O(n^{-2}). $$ Then $$ \sum_{n=1}^\infty(-1)^{n+1}\Bigl(1-n\log\Bigl(\frac{n+1}{n}\Bigr)\Bigr)=\sum_{n=1}^\infty(-1)^{n+1}\frac{1}{2\,n}+\sum_{n=1}^\infty(-1)^{n+1}a_n. $$ The first series on the right hand side converges by Leibniz's test, and the second is absolutely convergent since $|(-1)^{n+1}a_n|\le C\,n^{-2}$ for some constant $C>0$.
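Numerically the convergence is easy to see (a quick sketch, not a proof): the partial sums settle down at the rate expected of an alternating series with terms of size about $\frac{1}{2n}$.

    import math

    def a(n):
        return 1 - n * math.log((n + 1) / n)

    s = 0.0
    for n in range(1, 100001):
        s += (-1) ** (n + 1) * a(n)
        if n in (10, 100, 1000, 10000, 100000):
            print(n, s)  # successive values agree to more and more digits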
H: When is the integral of a periodic function periodic? I'm attempting some questions from Zwiebach - A First Course in String Theory, and have got stuck. I've proved that a function $h'(u)$ is periodic. The question then asks me to show that $h(u)=au+f(u)$ where $a$ is a constant and $f(u)$ a periodic function. I can't see how to do this directly from the periodicity of $h'$. Is this possible, or true? Many thanks! AI: We may assume without loss of generality that the period of $h'$ is $1$, so that $h'(u + 1) = h'(u)$. Consider $h(u + 1) - h(u)$. Its derivative $h'(u+1) - h'(u)$ vanishes by periodicity, so $h(u + 1) - h(u) = a$ for some constant $a$. One now guesses that $h(u) = a u + f(u)$ for some periodic function $f$, i.e. one sets $f(u) = h(u) - au$. So, $$f(u + 1) - f(u) = h(u + 1) - h(u) - a = 0$$ hence $f$ is indeed periodic with period $1$.
H: Cauchy sequences in metric spaces Let $(X,d)$ be a metric space and let $(x_n)_{n\in\mathbb{N}}$ be a Cauchy sequence in $X$, i.e. $d(x_n,x_m)$ goes to $0$ when $n,m\rightarrow\infty$. The sequence does not necessarily have a limit in $X$, however. I'm wondering if for fixed $k$, the sequence $d(x_k,x_l)$ has a limit in $\mathbb{R}$ when $l\rightarrow\infty$? I know that $\lim\sup_{l\to\infty}d(x_k,x_l)$ exists (this is always true for Cauchy sequences), but what about the limit? Thank you very much in advance :) AI: Given any point $w\in X$ the function "distance from $w$" $$ d(w,\cdot):X\longrightarrow\Bbb R $$ is continuous and transforms Cauchy sequences (in $X$) into Cauchy sequences (in $\Bbb R$): indeed, the triangle inequality gives $|d(w,x_n)-d(w,x_m)|\le d(x_n,x_m)$. But $\Bbb R$ is complete!
H: What is the unit of the FFT output? Consider a signal, f(t), with impulse samples taken N times, i.e f[0],f[1],f[2],...f[N-1] Let us perform FFT on it. Now, we have the amplitude on the y-axis and the frequency on the x-axis. I want to know if the unit of the quantity on the y-axis remains the same. If yes, why? If no, what happens to it? Example: If we consider a voltage signal. What will be the unit of the quantity on y-axis after FFT of f(t)? AI: It's still a voltage. If you do a continuous Fourier transform, you go from signal to signal integrated over time, which is signal per frequency, but in a discrete Fourier transform you're just summing discrete voltages with coefficients, and the result is still a voltage. Of course if you want you can multiply it by the time interval between sample points to get a voltage per frequency unit.
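A quick numerical illustration (a sketch with made-up numbers, not from the original answer): sample a 2 V sine and check the DFT scaling.

    import numpy as np

    fs, N, A = 1000.0, 1000, 2.0        # sample rate (Hz), samples, volts
    t = np.arange(N) / fs
    v = A * np.sin(2 * np.pi * 50 * t)  # 50 Hz: integer number of periods
    V = np.fft.fft(v)
    print(np.abs(V).max())              # ~ A*N/2 = 1000: volts, scaled by N
    print(2 * np.abs(V).max() / N)      # ~ 2.0: amplitude recovered in volts
    # multiplying by the sample interval 1/fs instead approximates the
    # continuous transform, with units of volts per hertz (V*s)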
H: Distance between discrete random variables Let $X_1, \ldots,X_k$ represent k integers between $1$ and $n$ drawn randomly and without replacement (i.e., there are never repeated numbers). What is the PMF of $Y$, the random variable representing the nearest neighbor distance between draws? This kind of problem falls into the general setting of computing distances between random variables. Is there any general formalism to work with? I have no idea how to start. Thanks in advance. AI: Update: I have modified the answer using a more sensible definition of $Y$. In terms of the gaps, we only care that $g_j\geq r$ for $2\leq j\leq k$ and we don't care about the size of $g_1$ or $g_{k+1}$. In this case, the number of suitable compositions is $n-(r-1)(k-1)\choose k$, and $$\mathbb{P}(Y\geq r)={{n-(r-1)(k-1)\choose k}\over {n\choose k}}.$$ I have interpreted the problem as follows: you are interested in $Y$, the smallest gap between the sampled values $X_1,\dots,X_k$. If this isn't what you want, you may want to clarify the question. Let's look more closely at the gaps between the $X$'s. In addition to the gaps $G_1=X_{(1)}$ and $G_j=X_{(j)}−X_{(j−1)}$ for $2≤j≤k$, it is convenient to introduce the final gap $G_{k+1}=(n+1)−X_{(k)}$. Here the bracket notation refers to order statistics. The random vector $(G_1,G_2,\dots,G_{k+1})$ is a random composition of the number $n+1$. That is, all outcomes $(g_1,g_2,\dots,g_{k+1})$ with $g_1+g_2+\cdots+g_{k+1}=n+1$, $g_j≥1$ are equally likely. There are $n\choose k$ such compositions, as found using stars and bars. Similarly, the number of compositions with all the $g_j\geq r$ is $n-(r-1)(k+1)\choose k$. Therefore, $$\mathbb{P}(Y\geq r)={{n-(r-1)(k+1)\choose k}\over {n\choose k}}.$$ From this you can work out any property of the random variable $Y$. See this question as well: What is the distribution of gaps?
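The updated formula is easy to sanity-check by simulation; here is a minimal sketch (with the arbitrary choices $n=30$, $k=5$), comparing the theoretical tail $\mathbb{P}(Y\geq r)$ with empirical frequencies:

    import random
    from math import comb  # Python >= 3.8

    n, k, trials = 30, 5, 200000

    def min_gap():
        xs = sorted(random.sample(range(1, n + 1), k))
        return min(b - a for a, b in zip(xs, xs[1:]))

    counts = {}
    for _ in range(trials):
        g = min_gap()
        counts[g] = counts.get(g, 0) + 1

    # tail P(Y >= r): formula vs. empirical frequency
    for r in range(1, 6):
        theo = comb(n - (r - 1) * (k - 1), k) / comb(n, k)
        emp = sum(c for g, c in counts.items() if g >= r) / trials
        print(r, round(theo, 4), round(emp, 4))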
H: A compact operator in $L^2(\mathbb R)$ Let $g \in L^{\infty}(\mathbb R)$. Consider the operator $$ \begin{split} T_g\colon & L^2(\mathbb R)\to L^2(\mathbb R) \\ & f \mapsto gf \end{split} $$ Prove that $T_g$ is compact (i.e., the image under $T_g$ of bounded closed sets is compact) if and only if $g=0$ a.e. I do not know how to start and I'm very puzzled. I know very little about compactness in $L^p$: of course they are complete metric spaces, therefore a subspace is compact if and only if it is closed (complete) and totally bounded. A singleton is of course totally bounded and I think it is closed: therefore I can say that if $g=0$ a.e. then the image of every subspace is $\{0\}$ which is compact, so the operator is compact. What about the inverse direction? It seems hard to prove. Would you help me, please? Thanks. AI: Suppose that $g$ is not zero a.e. and let $\epsilon>0$ be such that $E=\{x:|g(x)|>\epsilon\}$ has nonzero measure. Consider the orthogonal projection $p=T_{\chi_{E}}$. Assume that $T_{g}$ is a compact operator. The operator $T_{g}$ naturally induces a continuous linear operator of $L^2(\mathbb{R})$ into the Hilbert subspace $pL^{2}(\mathbb{R})=\{\xi \in L^{2}(\mathbb{R}):p\xi=\xi\}$, which is a Banach space. This map is onto, since $\xi \in pL^{2}(\mathbb{R})$ is mapped to by the $L^{2}$ function equal to $\xi(x)/g(x)$ on $E$ and zero elsewhere. By the open mapping theorem we have that this induced map is open, and therefore the image of the unit ball of $L^{2}(\mathbb{R})$ under this induced map contains an open ball $B$, and therefore the closure contains a closed ball $\overline{B}$. This closed ball is not compact, since $pL^2(\mathbb{R})$ is infinite-dimensional. But the image of the unit ball of $L^2(\mathbb{R})$ under a compact operator must have compact closure, and closed subsets of compact sets must be compact, so this is a contradiction.
H: Is there a known closed form number for $\prod\limits_{k=2}^{ \infty } \sqrt[k^2]{k}$ $f(x)=\sum\limits_{k = 2 }^ \infty e^{-kx} \ln(k) $ $\int\limits_0^{\infty}\int\limits_x^{\infty}\, f(\gamma)\, d\gamma dx=\sum\limits_{k = 2 }^ \infty \frac{1}{k^2} \ln(k) $ $\int\limits_0^{\infty}\int\limits_x^{\infty} f(\gamma)\, d\gamma dx=\sum\limits_{k=2}^ \infty\ln(k^{\frac{1}{k^2}})=\ln(\prod\limits_{k=2}^{\infty}k^{\frac{1}{k^2}}) $ $\prod\limits_{k=2}^{ \infty }k^{\frac{1}{k^2}}=\prod\limits_{k=2}^{ \infty } \sqrt[k^2]{k}=e^{\int\limits_0^{\infty}\int\limits_x^{\infty} f(\gamma) \,d\gamma dx}$ $f(x)=\sum\limits_{k = 2 }^ \infty e^{-kx} \ln(k) $ $f(x)=\sum\limits_{k = 1 }^ \infty e^{-(k+1)x} \ln(k+1) $ $f(x)=e^{-x}\sum\limits_{k = 1 }^ \infty e^{-kx} \ln(k+1) $ $f(x)=e^{-x}\sum\limits_{n = 1 }^ \infty \frac{(-1)^{n+1}}{n} \sum\limits_{k = 1 }^ \infty k^n e^{-kx}$ We know that $\sum\limits_{k = 1 }^ \infty e^{-kx}= \frac{1}{e^{x}-1} $ $\sum\limits_{k = 1 }^ \infty k^n e^{-kx}= (-1)^n\frac{d^n}{dx^n}(\frac{1}{e^{x}-1}) $ $f(x)=e^{-x}\sum\limits_{n = 1 }^ \infty \frac{(-1)^{n+1}}{n} \sum\limits_{k = 1 }^ \infty k^n e^{-kx} = e^{-x}\sum\limits_{n = 1 }^ \infty \frac{(-1)^{n+1}}{n} (-1)^n\frac{d^n}{dx^n}(\frac{1}{e^{x}-1})$ $f(x)=-e^{-x}\sum\limits_{n = 1 }^ \infty \frac{1}{n} \frac{d^n}{dx^n}(\frac{1}{e^x-1})$ $\int\limits_0^{\infty}\int\limits_x^{\infty} f(\gamma) \,d\gamma dx= -\int\limits_0^{\infty}\int\limits_x^{\infty} e^{-\gamma}\sum\limits_{n = 1 }^ \infty \frac{1}{n} \frac{d^n}{d\gamma^n}(\frac{1}{e^{\gamma}-1})\, d\gamma dx$ I have lost my way after that. Is it possible to find a closed form in my way? or I need to follow a different way. I need your mathematical sense. Thanks a lot for answers and advice. AI: Yes $\boxed{\displaystyle e^{-\zeta'(2)}}$ I think. To prove it start with : $$\zeta(2-x)=\sum_{k=1}^\infty \frac {k^x}{k^2}$$ and compute the derivative! The trick is that the derivation will create a $\ln(k)$ term at the numerator. At the end take the limit as $x\to 0$.
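Spelled out, for whoever wants the hint completed: differentiating termwise, $$\frac{d}{dx}\,\zeta(2-x)=-\zeta'(2-x)=\sum_{k=1}^\infty \frac{\ln k}{k^2}\,k^x,$$ and letting $x\to 0$ gives $\sum_{k=2}^\infty \frac{\ln k}{k^2}=-\zeta'(2)$, so $$\prod_{k=2}^\infty \sqrt[k^2]{k}=e^{-\zeta'(2)}\approx 2.55.$$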
H: Question about $p$-adic numbers and $p$-adic integers I've been trying to understand what $p$-adic numbers and $p$-adic integers are today. Can you tell me if I have it right? Thanks. Let $p$ be a prime. Then we define the ring of $p$-adic integers to be $$ \mathbb Z_p = \{ \sum_{k=m}^\infty a_k p^k \mid m \in \mathbb Z, a_k \in \{0, \dots, p-1\} \} $$ That is, the $p$-adic integers are a bit like formal power series with the indeterminate $x$ replaced with $p$ and coefficients in $\mathbb Z / p \mathbb Z$. So for example, a $3$-adic integer could look like this: $1\cdot 1 + 2 \cdot 3 + 1 \cdot 9 = 16$ or $\frac{1}{9} + 1 $ and so on. Basically, we get all natural numbers, fractions of powers of $p$ and sums of those two. This is a ring (just like formal power series). Now we want to turn it into a field. To this end we take the field of fractions with elements of the form $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ for $\sum_{k=r}^\infty b_k p^k \neq 0$. We denote this field by $\mathbb Q_p$. Now as it turns out, $\mathbb Q_p$ is the same as what we get if we take the ring of fractions of $\mathbb Z_p$ for the set $S=\{p^k \mid k \in \mathbb Z \}$. This I don't see. Because then this would mean that every number $$ \frac{\sum_{k=m}^\infty a_k p^k}{\sum_{k=r}^\infty b_k p^k}$$ can also be written as $$ \frac{\sum_{k=m}^\infty a_k p^k}{p^r}$$ and I somehow don't believe that. So where's my mistake? Thanks for your help. AI: To define $\mathbb{Z}_p$ the summations should start at $k = 0$. In particular, it contains no negative powers of $p$. As for your second question, it suffices to show that the inverse of a $p$-adic integer of the form $1 + a_1 p^1 + a_2 p^2 + ...$ is a $p$-adic integer. I'll write this as $1 - pz$ where $z$ is another $p$-adic integer. Then $$\frac{1}{1 - pz} = 1 + pz + p^2 z^2 + p^3 z^3 + ...$$ and this is a $p$-adic integer because only finitely many terms contribute to the coefficient of $p^k$ for any particular $k$. (I really am allowed to take this infinite sum because it converges $p$-adically.)
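The geometric-series inversion can be watched numerically by truncating mod $p^N$ (using $\mathbb Z_p/p^N\mathbb Z_p\cong \mathbb Z/p^N\mathbb Z$); a small sketch with the arbitrary choices $p=5$, $z=3$:

    p, z, N = 5, 3, 10
    mod = p ** N
    inv = pow(1 - p * z, -1, mod)                    # true inverse mod p^N (Python >= 3.8)
    geo = sum((p * z) ** k for k in range(N)) % mod  # partial geometric sum
    print(inv == geo)  # True: (1 - pz) * geo = 1 - (pz)^N = 1 mod p^N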
H: Should we consider multiplicity while solving this problem? I am trying to solve the problem: A single fence is to be constructed from posts 6 inches wide, separated by lengths of chain 5 feet long. If a certain fence begins and ends with a post, which of the following could be the length of the fence in feet? a) 17 b) 28 c) 35 d) 39 e) 50. (Ans: a, b, d and e) Since the total space from one post to the next would be 6 ft, I believe the fence length should be a multiple of 6, but none of the numbers here is a multiple of 6. How did they get the answer above? AI: Let $n$ be the number of posts. Then there are $n-1$ lengths of chain, since there has to be a post at both ends. Because each post is $\frac{1}{2}$ of a foot wide, the total length of the fence, in feet, is $$\frac{1}{2}n +5(n-1).$$ This is $5.5n -5$. So we want to check which of our numbers is of the shape $5.5n-5$, for some integer $n$. For example, can we have $5.5n-5=17$? Sure, $22$ is an integer multiple of $5.5$. In general, we are interested in whether, for given $k$, there is an integer $n$ such that $5.5n-5=k$, or equivalently $11n=2k+10$. So what we care about is whether $11$ divides $2k+10$.
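A one-line check of the five options (a sketch, not part of the original answer):

    # k is achievable iff 5.5n - 5 = k for some integer n, i.e. 11 | 2k + 10
    print([k for k in (17, 28, 35, 39, 50) if (2 * k + 10) % 11 == 0])
    # -> [17, 28, 39, 50], i.e. options (a), (b), (d), (e)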
H: Summing Lerch Transcendents The Lerch transcendent is given by $$ \Phi(z, s, \alpha) = \sum_{n=0}^\infty \frac { z^n} {(n+\alpha)^s}. $$ While computing $\sum_{m=1}^{\infty} \sum_{n=1}^{\infty} \sum_{p=1}^{\infty}\frac{(-1)^{m+n+p}}{m+n+p}$, the expression $$ -\sum_{k=1}^{\infty} \Phi(-1, 1, 1+k) $$ came up. Is there an (easy?) way to calculate that? Writing it down, it gives: $$ -\sum_{k=1}^{\infty} \Phi(-1, 1, 1+k)= \sum_{k=1}^{\infty} \sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}} $$ Is changing the summation order valid? There is a relation to the Dirichlet $\eta$ function $$ \eta(s) = \sum_{n=1}^{\infty}{(-1)^{n-1} \over n^s} = \frac{1}{1^s} - \frac{1}{2^s} + \frac{1}{3^s} - \frac{1}{4^s} + \cdots $$ but (how) can I use that? The double series then reads $$ \sum_{k=1}^{\infty} \sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}= -\sum_{k=1}^{\infty} \eta(k+1). $$ Interestingly, among the values for $\eta$ given at the WP, you'll find $\eta(0)=1/2$, related to Grandi's series, and $\eta(1)=\ln(2)$; both show up in my attempt to prove the convergence of the triple product given there. AI: The double series $\displaystyle\sum_{k=1}^{\infty} \sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}$ diverges. To see this, one can imitate the strategy used in the answer to the other question, and use the identity $$ \frac1{n^{k+1}}=\int_0^{+\infty}\mathrm e^{-ns}\frac{s^k}{k!}\,\mathrm ds. $$ Thus, for every $k\geqslant1$, $$ \sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}=\int_0^{+\infty}\sum_{n=1}^\infty(-1)^n\mathrm e^{-ns}\frac{s^k}{k!}\,\mathrm ds=\int_0^{+\infty}\frac{-\mathrm e^{-s}}{1+\mathrm e^{-s}}\frac{s^k}{k!}\,\mathrm ds. $$ Since $1+\mathrm e^{-s}\leqslant2$ uniformly on $s\geqslant0$, $$ \sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}\leqslant-\frac12\int_0^{+\infty}\mathrm e^{-s}\frac{s^k}{k!}\,\mathrm ds=-\frac12. $$ This proves that the double series diverges. Edit: More directly, each series $\displaystyle\sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}$ is alternating, hence it converges and the value of its sum is between any two successive partial sums. For example, $\displaystyle\sum_{n=1}^\infty \frac { (-1)^n} {n^{k+1}}\leqslant\sum_{n=1}^2 \frac { (-1)^n} {n^{k+1}}=-1+\frac1{2^{k+1}}\leqslant-\frac34$ for every $k\geqslant1$. QED.
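The divergence is also visible numerically: $\eta(k+1)\to 1$, so the terms of $\sum_k \eta(k+1)$ do not tend to $0$. A quick sketch using mpmath's built-in Dirichlet eta:

    from mpmath import altzeta  # altzeta is the Dirichlet eta function

    for k in (1, 2, 5, 10, 20):
        print(k, altzeta(k + 1))  # values climb toward 1, never toward 0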
H: Generating functions of discrete random variable I am trying to understand the solution of a problem. Let $X_1,X_2,\ldots$ be a sequence of independent random variables with the same probability distribution, and let $N$ be a random variable taking its values in $\mathbb{N}$. Considering $Z=\sum_{i=1}^N X_i$, we have: $$G_Z(s)=E(s^Z)=E(s^{\sum_{i=1}^N X_i})=\sum_kE(s^{\sum_{i=1}^N X_i}|N=k)P(N=k)$$ I don't understand how we can pass from $E(s^{\sum_{i=1}^N X_i})$ to $\sum_kE(s^{\sum_{i=1}^N X_i}|N=k)P(N=k)$. AI: For every measurable partition $(A_n)_{n\geqslant0}$ of the space $\Omega$ such that $\mathrm P(A_n)\ne0$ for every $n\geqslant0$, and every integrable random variable $Y$, $$ \mathrm E(Y)=\sum_{n=0}^{+\infty}\mathrm E(Y:A_n)=\sum_{n=0}^{+\infty}\mathrm E(Y\mid A_n)\cdot\mathrm P(A_n). $$ Apply this to $Y=s^Z$ and $A_n=[N=n]$.
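For what it's worth (the standard next step in this problem, not asked here): if $N$ is additionally independent of the $X_i$, then $E(s^{X_1+\cdots+X_k}\mid N=k)=G_X(s)^k$, and summing the partition formula yields the classical compound identity $$G_Z(s)=\sum_k G_X(s)^k\,P(N=k)=G_N\bigl(G_X(s)\bigr).$$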
H: Irreducibility of $x^5 -x -1$ by reduction mod 5 Is there a quick way of deducing that $x^5-x-1 \in \mathbb{Z}[x]$ is irreducible by reducing it mod 5, other than verifying that it has no roots in $\mathbb{Z}_5$ and no factorization as the product of a factor of order 2 and a factor of order 3? AI: Hint $ $ If it had an irreducible quadratic factor $\rm\:f(x)\:$ then $\rm\:\Bbb F_5[x]/(f(x)) = \Bbb F_5[w] = \Bbb F_{25}\,$ so $$\rm w^5 = w\!+\!1,\ \ \color{#C00}w = w^{25} = (w^5)^5 = (w\!+\!1)^5 = w^5\! +\! 1^5 = \color{#C00}w\!+\!2\:\Rightarrow\: \color{#C00}0 = 2\:\Rightarrow\Leftarrow$$ Remark $\ $ The same proof works generally. Let $\rm\:p\:$ be prime. Working in $\rm \,\Bbb F_p[x]\,$ suppose that $\rm\:g = x^p\! -\!x\! -\! 1\:$ has an irreducible factor $\rm\:f\:$ of degree $\rm\:n< p.\,$ Then in $\rm\:\Bbb F_p[x]/(f) = \Bbb F_p[w] = \Bbb F_{p^n},\:$ $\rm\:w^p = w\!+\!1\:$ so the linear Frobenius map $\rm\:z\to z^p\:$ is just the shift $\rm\:S\!: w\to w\!+\!1,\,$ $\rm\, k\to k\in\Bbb F_p.\:$ Iterated $\rm\:n\:$ times yields $\rm\: w\!+\!n = S^n\, w = w^{p^n}\! =w,\:$ thus $\rm\: 0 = n < p\:$ in $\rm\,\Bbb F_p,\,$ a contradiction. Alternatively, equivalently, one can instead show that $\rm\:gcd(x^p\!-\!x\!-\!1,\,x^{p^n}\!-\!x) = 1.$ For example, see the answer by Thomas Andrews. Once one learns some Galois theory these and related arguments will become much more intuitive, e.g. see here for a simple example.
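A quick machine check of the conclusion (a sketch, not part of the original answer):

    import sympy as sp

    x = sp.symbols('x')
    # factoring over GF(5): the quintic stays in one piece
    print(sp.factor(x**5 - x - 1, modulus=5))
    print(sp.Poly(x**5 - x - 1, x, modulus=5).is_irreducible)  # True

Irreducibility mod $5$ then gives irreducibility of the (primitive) polynomial in $\mathbb{Z}[x]$.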
H: A curve that intersects every plane in finitely but arbitrarily many points Does there exist a piecewise smooth curve in $\mathbb{R}^3$ such that every plane intersects the curve at finitely many points and the number of intersection points can be arbitrary large? If the number of intersection points for each plane is bounded then there is an example: $\gamma(t)=(t,t^3,t^5)$ intersects every plane at most five points. AI: Try $(x,y,z) = (t + \sin(t^2), t^2, t^3)$. For any $a,b,c$ (not all $0$) and $d$, $|a (t+\sin(t^2)) + b t^2 + c t^3 - d| \to \infty$ as $t \to \pm \infty$ so the intersections are in a finite interval. And since $a (t+\sin(t^2)) + b t^2 + c t^3 - d$ is analytic and not constant, it has finitely many zeros in a compact set. So any plane has only finitely many intersections with the curve. The curve intersects the plane $x = \sqrt{2 m \pi}$ (where $m$ is a positive integer) when $t + \sin(t^2) = \sqrt{2 m \pi}$. For $t = \sqrt{2m \pi}+s$ that says $$s + \sin((\sqrt{2m\pi}+s)^2) = s + \sin(2 \sqrt{2m\pi} s + s^2) = 0$$ In the interval $-1/2 < s < 1/2$, $2 \sqrt{2m\pi}s +s^2$ runs from $-\sqrt{2m\pi} + 1/4$ to $+\sqrt{2m\pi} + 1/4$, and thus passes through approximately $\sqrt{2m/\pi}$ odd multiples of $\pi/2$, at which the sine is alternately $\pm 1$, and thus $ s + \sin(2 \sqrt{2m\pi} s + s^2)$ has approximately $\sqrt{2m/\pi}$ sign changes. Thus the number of intersection points is unbounded.
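A numerical sketch of the last claim (not in the original answer): counting sign changes of $s+\sin(2\sqrt{2m\pi}\,s+s^2)$ on $(-1/2,1/2)$ gives a lower bound on the number of intersections with the plane $x=\sqrt{2m\pi}$, and the count indeed grows like $\sqrt{2m/\pi}$.

    import numpy as np

    for m in (1, 10, 100, 1000):
        c = np.sqrt(2 * m * np.pi)
        s = np.linspace(-0.5, 0.5, 200001)
        v = s + np.sin(2 * c * s + s ** 2)
        # each sign change of v is one intersection with the plane
        changes = np.count_nonzero(np.sign(v[1:]) != np.sign(v[:-1]))
        print(m, changes, round(float(np.sqrt(2 * m / np.pi)), 1))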
H: Increasing by doubling a number when it is negative The question is: Let $x=y-\frac{50}{y}$, where $x$ and $y$ are both greater than $0$. If the value of $y$ is doubled in the equation above, the value of $x$ will a) decrease b) remain the same c) increase four fold d) double e) increase to more than double - Ans) e Here is how I am solving it: $x = \frac{y^2-50}{y}$; by putting $y=1$ I get $x=-49$, and by putting $y=2$ (doubling) I get $x=-24$. Now how is $-24$ more than double of $-49$? Correct me if I am wrong: a "double increment" of $-49$ would be $-49+49 = 0$, which would be greater than $-24$. AI: Well, the problem statement explicitly restricts itself away from cases where $x<0$, so I think your concern is eliminated by that restriction. At any rate, consider $x_1 = y-\frac{50}{y}$ and $x_2 = 2y-\frac{50}{2y}$. Then, compute $\frac{x_2}{x_1}$ symbolically, and verify what it can be.
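Carried out symbolically, as a completion of the suggested computation: with $x_1=y-\frac{50}{y}$ and $x_2=2y-\frac{50}{2y}$, $$x_2-2x_1=\Bigl(2y-\frac{25}{y}\Bigr)-\Bigl(2y-\frac{100}{y}\Bigr)=\frac{75}{y}>0,$$ so $x_2>2x_1$; since the problem guarantees $x_1>0$, doubling $y$ increases $x$ to more than double, which is option (e).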
H: Can a meromorphic function be written as ratio of holomorphic function? Well, I want to know whether a meromorphic function can be written as ratio of two holomorphic function on $\mathbb{C}$ or on a Riemann surface. Thank you for help. AI: a) On a compact Riemann surface $X$ holomorphic functions are constant so that the the quotients of holomorphic functions are just the constants too. In formulas: $$\mathcal O(X)=\mathbb C \; ,\quad \text{Frac} (\mathcal O(X))=\mathbb C$$ However a deep theorem (Riemann's Existence Theorem) assures us that there exists a non-constant meromorphic function on $X$ and that these meromorphic functions form a finitely generated field of transcendence degree one over $\mathbb C$ ($\;trdeg_ \mathbb C\mathcal M(X)=1$), so that the answer to your question is negative for compact Riemann surfaces: $$\text{Frac} (\mathcal O(X))=\mathbb C \subsetneq \mathcal M(X)$$ b) On a non-compact Riemann surface $Y$ however another difficult theorem, first proved only in 1948 by Behnke and Stein, says that indeed every meromorphic function is the quotient of two holomorphic functions . In formula: $$ \text{Frac} (\mathcal O(Y))= \mathcal M(Y) $$ The modern point of view is that this is an easy consequence of the difficult result that $Y$ is a Stein manifold, the analogue in complex-analytic geometry of an affine algebraic variety. Bibliography As usual, Forster's Lectures on Riemann Surfaces is the best reference for these questions.
H: $\lim_{n\to\infty}\frac{a_{n-1}}{a_{n}}$ If $\{a_{n}\}_{n\geq 1}$ is a decreasing sequence of real numbers, $a_{n}\in (0,1)$ and $\lim_{n\to \infty} a_{n}=0$, what can we say about $$\lim_{n\to\infty}\frac{a_{n-1}}{a_{n}}$$ AI: Probably not much. Let $a_n = b^{-n}$ for any constant $b > 1$, so that $a_n \in (0, 1)$. Then, $$\lim_{n \to \infty} \frac{a_{n-1}}{a_n} = \lim_{n \to \infty} \frac{b^{-n + 1}}{b^{-n}} = \lim_{n \to \infty} b = b.$$ So, any real number greater than 1 is possible. And, if $a_n = \frac{1}{n}$, then a limit of 1 is also possible. And, using $a_n = b^{-n^2}$, we see $\infty$ is also a possible limit. The limit, when it exists, would have to be at least 1, since $\frac{a_{n-1}}{a_n}\ge 1$ for a decreasing sequence; note also that the limit need not exist at all.
H: Polynomial Approximation of an Integral I require a polynomial $p(x)$ such that $$\left|p(x) - \int_0^x \cos{(t^2)} dt\right| < \frac{1}{10!}$$ for all $x \in [-1, 1]$. I know that I should probably use the fact that if $$m\leq f^{n+1} {(t)} \leq M$$ for $t$ in an interval containing the point $a$, then $$m \frac{(x-a)^{(n+1)}}{(n+1)!} \leq E_n(x)\leq M \frac{(x-a)^{(n+1)}}{(n+1)!}$$ for $x > a$ and $$m \frac{(a-x)^{(n+1)}}{(n+1)!} \leq (-1)^{(n+1)}E_n(x)\leq M \frac{(a-x)^{(n+1)}}{(n+1)!}$$ for $x < a$, where $E_n(x)$ is the Taylor remainder. These facts combined with the Taylor series for $\cos {w}$ followed by the appropriate substitution should give me the desired polynomial, although I keep getting stuck. Any clarification would be helpful. Thanks. AI: Write down the ordinary Taylor expansion of $\cos(t^2)$ about $t=0$. This is done by recalling that $\cos w=1-\frac{w^2}{2!}+\frac{w^4}{4!}-\frac{w^6}{6!}+\cdots $ and substituting $w=t^2$. Now integrate term by term from $0$ to $x$. Note that the series we get is, for $|x|\le 1$, an alternating series: the terms alternate in sign, decrease in absolute value, and approach $0$. So the error made in cutting off somewhere is less, in absolute value, than the first "neglected" term. That criterion works efficiently, and does not require the complicated manipulations and notation that you are using. Remark: The various expressions for the remainder are important theoretical* tools. They are often much less useful as practical tools for making estimates.
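For the record, one concrete polynomial that this procedure produces: integrating the series termwise gives $$\int_0^x \cos(t^2)\,dt=\sum_{k=0}^\infty \frac{(-1)^k x^{4k+1}}{(4k+1)(2k)!},$$ and for $|x|\le 1$ the alternating-series bound says the truncation error is at most the first omitted term. Since $\frac{1}{21\cdot 10!}<\frac{1}{10!}$, the polynomial $$p(x)=x-\frac{x^5}{10}+\frac{x^9}{216}-\frac{x^{13}}{9360}+\frac{x^{17}}{685440}$$ (the terms $k=0,\dots,4$) meets the required tolerance.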
H: Calibration of an eye tracking device: transformation from known gaze points I am creating a calibration system for an eye tracking device. This calibration involves having the user look at five points on a screen. The eye tracker then reports where it believes the user was looking. The result is a map of five co-ordinates that are likely to be stretched, twisted and translated with respect to the actual co-ordinates. Something like this: So, I now know where the eye tracker thinks the user is looking for each of those five points. From this, it should be possible to calculate where the user is really looking for any co-ordinates, so long as they lie within the calibrated zone. The way I do this at present is by treating both the X and Y axes separately. I plot the real vs. measured X co-ordinates on a scatterplot and find the linear regression equation, and do the same for the Y co-ordinates. Thus, I end up with a 'y = mx + c' equation for both the horizontal and vertical axes (i.e. a 'scale' and 'intercept' value for each axis). In order to then find out where the user is actually looking for any measured co-ordinates, I simply transform the X and Y axis data separately using these scale and intercept values. However; I am not a mathematician. I have recently come across the concept of 'eigenvectors' and wonder if this (or another approach) could provide a more robust method of ensuring I am translating my calibration correctly. In other words, I think I'm doing this correctly, but I really think I ought to run it by someone who is likely to know for sure whether this is likely to work (given that there can be stretch, twist and translation). Any wisdom would be gratefully received. AI: Eigenvectors are a big thing, so the question as to whether they can be used is quite apt, and also quite difficult to answer. Eigenvectors are used in all sorts of applications, so the question as to whether you could use them is probably "yes." (The question of whether you should use them is another matter). The first thing to note is that an eigenvector is simply a vector that solves the equation $(A-\lambda I)v = 0$. The key is to understand what your matrix $A$ is, and that depends a lot on how you approach the problem. One thing that you could do is compute a Principal Component Analysis of the calibration data, which would align the data that has the most variance (say, $x$-offset) along one principal axis, and the data with the second most along another axis. These axes are related to the eigenvectors of the correlation matrix of the data. The result is somewhat similar to performing a least-squares analysis of your data. This is probably overkill, however. Really, the appropriate solution depends on a few things: the accuracy demanded by your solution, the character and the generality of your data (do you need to calibrate for each user, or for all users?), and so forth. Really, I think that your approach is on the right track (I have used a similar computation in the head-tracking work that I have done). Essentially, you have an affine transform. You want to compute the rotation of the scene, the scale of the scene, and how much it has shifted from center. I do not think you would gain appreciable accuracy in a single calibration by attempting a more advanced solution. I would only advocate exploring more complicated statistical analysis techniques if you have to calibrate to a large population size.
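To make the suggestion concrete, here is a minimal least-squares affine fit in numpy; this is a sketch with made-up sample points, not a drop-in implementation. Unlike per-axis regression, it recovers rotation and shear as well as the per-axis scales and offsets:

    import numpy as np

    # measured gaze points (hypothetical) and the true on-screen targets
    M = np.array([[0.10, 0.20], [0.90, 0.25], [0.50, 0.50],
                  [0.12, 0.80], [0.88, 0.85]])
    T = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.5],
                  [0.0, 1.0], [1.0, 1.0]])

    X = np.hstack([M, np.ones((len(M), 1))])   # augment with a bias column
    A, *_ = np.linalg.lstsq(X, T, rcond=None)  # 3x2: linear part + offset

    def correct(point):
        """Map a raw eye-tracker reading to calibrated screen coordinates."""
        return np.append(point, 1.0) @ A

    print(correct([0.5, 0.5]))

Five calibration points give ten equations for the six affine parameters, so the system is overdetermined and the least-squares fit averages out some measurement noise.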
H: Probability of Getting a pair of cards I was wondering what the probability of getting at least a pair of cards is when you draw 6 cards at random from a fresh deck of cards. I calculated it as: 3/51 * 48/50 * 44/49 * 40/48 * 36/47 * 15 (6C2) = 0.48. Let me know if it is correct? AI: What's the probability of not getting a pair of cards after drawing n cards?

n = 1: p = 1
n = 2: p = 1 * (48/51)
n = 3: p = 1 * (48/51) * (44/50)
etc...

You want 1 - p. For n = 6 this gives p = (48/51)(44/50)(40/49)(36/48)(32/47) ≈ 0.3452, so the probability of at least one pair is about 0.655; in particular, the 0.48 in the question is not correct. You could also verify that the result is roughly correct by running many simulations and using this to estimate the probability:

    import random

    got_pair = 0
    n = 6
    trials = 10000
    deck = [value for suit in 'CDHS' for value in 'A23456789TJQK']
    for i in range(trials):
        random.shuffle(deck)
        if len(set(deck[:n])) < n:  # fewer than n distinct ranks -> a pair
            got_pair += 1
    print(float(got_pair) / trials)

Result: 0.6542. Note that the result is only an approximation, but it can be a useful aid to check that you didn't make an error with the mathematics.
H: Direct products in the category Rel Please describe direct products in the category Rel. AI: As Dylan has already mentioned, the category-theoretic product in $\textbf{Rel}$ is the disjoint union of sets. We can verify this by hand: \begin{align} \textbf{Rel}(X, Y \amalg Z) & = \mathscr{P}(X \times (Y \amalg Z)) \\ & \cong \mathscr{P}((X \times Y) \amalg (X \times Z)) \\ & \cong \mathscr{P}(X \times Y) \times \mathscr{P}(X \times Z) = \textbf{Rel}(X, Y) \times \textbf{Rel}(X, Z) \end{align} This isn't too surprising, since $\textbf{Rel}$ is isomorphic to $\textbf{Rel}^\textrm{op}$ and behaves a bit like what one expects for the category of (free) vector spaces over the field of one element. There is a categorical description of the cartesian product of sets within $\textbf{Rel}$, however. To avoid confusion, let us now write $X \otimes Y$ for the cartesian product of $X$ and $Y$. It's not hard to check that this makes $\textbf{Rel}$ into a symmetric monoidal category. Moreover, $\textbf{Rel}(X, Y) = \mathscr{P}(X \otimes Y)$, hence, $$\textbf{Rel}(X \otimes Y, Z) \cong \textbf{Rel}(X, Y \otimes Z)$$ so $\textbf{Rel}$ is even a monoidal closed category! Of course, this means $\textbf{Rel}$ is enriched over itself, with the internal hom being given, confusingly, by the cartesian product. (Note that the representable functor $\textbf{Rel}(1, -)$ is not the forgetful functor!) Thus, we may characterise the cartesian product as follows: it is the unique monoidal product on $\textbf{Rel}$ that has unit $1$ and admits an internal hom. (This is because every set is a coproduct of copies of $1$.) [But how does one characterise $1$...? It's not the terminal object anymore!]
H: Conjugate of matrix Over an arbitrary ring $R$ with unit, is the matrix\begin{pmatrix} a & 0 & b & 0\\ 0 & 0 & 0 &0\\c & 0 & d &0\\ 0 &0&0&0 \end{pmatrix} conjugate over $GL(R)$ to \begin{pmatrix} a & b & 0&0\\ c & d & 0&0\\0 & 0 &0 &0\\0&0&0&0 \end{pmatrix} ? AI: View the matrices as linear maps to see that they are the same (one is the matrix of the other when we switch the second and third vectors). Thus take $$P:=\pmatrix{\mathbf 1_R&0&0&0\\ 0&0&\mathbf 1_R&0\\ 0&\mathbf 1_R&0&0\\ 0&0&0&\mathbf 1_R},$$ where $\mathbf 1_R$ is the unit of $R$.
H: Having trouble understanding proof of a theorem involving limits of functions and sequences I found this theorem in a book: Theorem: Let $F: A \subset \mathbb{R}^n \to \mathbb{R}^m.$ Let $P \in \bar{A}$ and $L \in \mathbb{R}^m$. Then the following assertions are equivalent: $ \lim\limits_{X\to P} F(X) = L $ For all ${P_k} \subset A$ such that $P_k \neq P\ \forall k $ and $P_k \to P$, $\lim\limits_{k \to \infty}F(P_k) = L$. Proof: $1 \implies 2)$ [...] $2 \implies 1)$ Suppose that $\lim\limits_{X\to P} F(X) \neq L$. This would mean that there exists $\epsilon > 0$ such that for all $\delta > 0$ there exists $X \in A$ such that $0 < \|X-P\| < \delta$ and $\| F(X) - L \| \ge \epsilon $. Take $\delta = \frac1{k}$ with $k \in \mathbb{N}$ and let $P_k$ be the $k$-th term of the sequence. We see that $P_k \to P$ and $P_k \neq P$, but $\| F(P_k)-L\| \ge \epsilon$, which would mean that $\lim\limits_{k \to \infty} F(P_k) \neq L$, which contradicts the hypothesis. It's the second part of the proof I'm interested in. I understand that asserting that $\lim\limits_{k \to \infty} F(P_k) \neq L$ means that if I make a ball around $P$, no matter how small, there will be some $X$ inside that ball such that $F(X)$ stays far away from $L$. But I don't understand the proof. I understand each sentence separately, but I don't get how to connect them. Could anyone help me understand the whole argument? AI: Since you understand each sentence separately, the only missing ingredient is the overall logical shape: proving $2 \Rightarrow 1$ is equivalent to proving that the negation of (1) implies the negation of (2), and that is exactly what the displayed argument does. Assuming (1) fails gives one fixed $\epsilon > 0$ that works against every $\delta$: for each $\delta > 0$ there is a point $X \in A$ with $0 < \|X - P\| < \delta$ but $\|F(X) - L\| \ge \epsilon$. Applying this with $\delta = 1/k$ for $k = 1, 2, 3, \dots$ and calling the resulting point $P_k$ produces a sequence with $P_k \neq P$ and $\|P_k - P\| < 1/k$, so $P_k \to P$; yet $\|F(P_k) - L\| \ge \epsilon$ for every $k$, so $F(P_k) \not\to L$. That sequence is precisely a counterexample to (2), i.e. the negation of (2) holds, which completes the contrapositive.
H: Support of the divisor of zeros of a global section of an invertible sheaf This is related to Hartshorne's book Algebraic Geometry, Lemma II.7.8, page 158. Let $X$ be a nonsingular projective variety over an algebraically closed field $k$ and let $\mathcal{L}$ be an invertible sheaf on $X$. For any $s \in \Gamma(X, \mathcal{L})$, we have $\mathrm{Supp}(s)_{0} = (X_{s})^{c}$, where $\mathrm{Supp}(s)_{0}$ means the union of the prime divisors of the divisor of zeros $(s)_{0}$ of $s$ and $X_{s} = \{ P \in X \mid s_{P} \not\in \mathfrak{m}_{P}\mathcal{L}_{P} \}$. Thanks. AI: Since checking that two subsets of $X$ are equal can be done locally, we may assume that $X=Spec(A)$ is affine and that $\mathcal L=\mathcal O_X$, since $\mathcal L$ is an invertible sheaf i.e. a locally free sheaf of rank one. But then $s\in \Gamma(X,\mathcal O)=A$ is just an ordinary regular function on $X$ and to say that $s_P\notin \mathfrak m_P \mathcal L_P=\mathfrak m_P \mathcal O_P= \mathfrak m_P$ means that $s(P)=s_P \:\text{mod }\mathfrak m_P\neq0\in \kappa(P)=\mathcal O_P/\mathfrak m_P=k$. So we see that $X_s=D(s)$ and $(X_s)^c=V(s)$. On the other hand the prime divisors of $(s)_0$ are the irreducible components of $V(s)$, so that their union $\mathrm{Supp}(s)_{0}$ is also equal to $V(s)$. Thus we have the required equality $\mathrm{Supp}(s)_{0} = (X_{s})^{c} $.
H: Sizes of Sumsets and Dilates Let $A$ be a finite subset of an abelian group $G$. For an integer $n$, define the dilate $n \cdot A = \{na \mid a \in A\}$ and, given another $B \subseteq G$, the sumset $A + B = \{a + b \mid a \in A, b \in B\}$. I would like to show that $$ \frac{|n \cdot A + (nm) \cdot A|}{|n \cdot A|} = \frac{|A + m \cdot A|}{|A|} $$ for any $n,m \in \mathbb{Z}$. AI: Note that $nA+(nm)A = n(A+mA)$. To verify this, if $x\in nA+(nm)A$, then there exists $a,a'\in A$ such that $x = na + nma' = n(a+ma')$; and $a+ma'\in A+mA$. Conversely, $n(a+ma') = na+nma'\in nA+(nm)A$, if $a,a'\in A$. So we are asking whether $$\frac{|n(A+mA)|}{|nA|} = \frac{|A+mA|}{|A|}.$$ If $\langle A\rangle$ has no $n$-torsion, then the result follows since multiplication by $n$ is one-to-one (if $na = na'$ then $n(a-a')=0$, so $a-a'\in\langle A\rangle$ is $n$-torsion, so $a=a'$; the same holds for $A+mA\subseteq \langle A\rangle$). However, if $A$ has $n$-torsion, then you are in trouble. Consider for example $n=5$, $m=3$, $G=\mathbb{Z}/10\mathbb{Z}$, $A=\{1,2\}$. Then $A+3A = \{1,2\}+\{3,6\} = \{4,5,7,8\}$, so the right hand side is $2$. On the other hand, $5(A+3A) = 5\{4,5,7,8\} = \{0,5\}$ and $5A = \{0,5\}$, so the left hand side is $1$.
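The counterexample is quick to verify mechanically (a small sketch using sets modulo 10):

    def dilate(n, A, mod):
        return {(n * a) % mod for a in A}

    def sumset(A, B, mod):
        return {(a + b) % mod for a in A for b in B}

    A, mod, n, m = {1, 2}, 10, 5, 3
    nA = dilate(n, A, mod)
    lhs = len(sumset(nA, dilate(n * m, A, mod), mod)) / len(nA)
    rhs = len(sumset(A, dilate(m, A, mod), mod)) / len(A)
    print(lhs, rhs)  # 1.0 versus 2.0 -- the ratio identity fails with 5-torsion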
H: Find the integer solution of $ a^b = 2^{2 c + 1} + 2^c + 1 $ Find the possible number of integer solutions of this equation with $ b>1$: $$ a^b = 2^{2 c + 1} + 2^c + 1 $$ Searching computationally from $1$ to $1000$, I found $ \{a = 2,\ b = 2,\ c=0\} $ and $ \{a = 23,\ b = 2,\ c=4\} $. Are there any other possible solutions? How can one show analytically that the above two are its only solutions? AI: Assume $c>2$ for simplicity's sake, so that $a$ is odd among other things. Firstly, we note that $a^b\equiv1$ mod $2^c$. Since the order of the multiplicative group mod $2^c$ is $2^{c-1}$, we note that either $b$ is even or that $a\equiv1$ mod $2^c$, but we can rule out the latter fairly easily: $1+2\cdot 2^c$ is already too big for $b=2$, $1$ is always too small, and $1+2^c$ is too small for $b=2$ and too big for $b>2$. Since $b$ is even, $a^b$ will be a square, so all we need to do is show that $a=23, c=4$ is the only solution for $b=2$ and we will be done. Rewrite the equation as: $$(a-1)(a+1)=2^c(2^{c+1}+1)$$ $\gcd(a-1,a+1)=2$ since $a$ is odd, so it must be the case that $2^{c-1}$ divides $a-1$ or $a+1$. Let's assume $2^{c-1}\mid a+1$ for now. Then there exist odd numbers $u$ and $v$ such that $$a-1=2u \\ a+1=2^{c-1}v\\ uv=2^{c+1}+1$$ Combining the first two we get $2^{c-2}v-u=1$, which we can plug into the third to obtain $$(2^{c-2}v-1)v=2^{c+1}+1$$ We know $v$ is odd. If $v\geq5$, then the left hand side will certainly be bigger than the right hand side ($(2^{c-2}v-1)v>2^{c-3}v^2\geq\frac{25}{16}2^{c+1}>2^{c+1}+1$) and if $v=1$, then the left hand side will definitely be smaller. So $v=3$ and $$9\cdot 2^{c-2}-3=8\cdot 2^{c-2}+1\\ 2^{c-2}=4\\ c=4$$ which is exactly the solution you found. Working out the case $2^{c-1}\mid a-1$ is essentially the same except you find that there are no solutions then.
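A brute-force search supports the claim that these are the only small solutions (a sketch; `integer_nthroot` from sympy performs an exact integer root test, and the bound on $c$ is an arbitrary choice):

    from sympy import integer_nthroot

    for c in range(60):
        rhs = 2**(2 * c + 1) + 2**c + 1
        b = 2
        while 2**b <= rhs:
            a, exact = integer_nthroot(rhs, b)
            if exact:
                print(a, b, c)
            b += 1
    # prints only: 2 2 0 and 23 2 4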
H: How to prove that $\frac{10^{\frac{2}{3}}-1}{\sqrt{-3}}$ is an algebraic integer As the title says, I'm trying to show that $\frac{10^{\frac{2}{3}}-1}{\sqrt{-3}}$ is an algebraic integer. I suppose there's probably some heavy duty classification theorems that give one line proofs of this, but I don't have any of that at my disposal, so basically I'm trying to construct a monic polynomial over $\mathbb{Z}$ which has this complex number as a root. My general strategy is to raise both sides of the equation $$x = \frac{10^{\frac{2}{3}}-1}{\sqrt{-3}}$$ to the $n^{th}$ power and then break up the resulting sum in such a way as to resubstitute back in smaller powers of $x$. Also since this root is complex I know it must come in a conjugate pair for the coefficients of my polynomial to be real, thus I know that $$x = -\frac{10^{\frac{2}{3}}-1}{\sqrt{-3}}$$ must also be a root of my polynomial. Hence from this I obtain: $$3x^2 = (10^{\frac{2}{3}} -1 )^2$$ However since my root is pure imaginary I don't really get any more information from this, so I'm a bit stumped, I tried raising both sides of $3x^2 -1 = 10^{\frac{4}{3}} - (2)10^{\frac{2}{3}}$ to the third power but it doesn't look like it's going to break up correctly, can anyone help me with this? Thanks. AI: Your general strategy is workable, but you'll be a lot happier if you start by eliminating the cube root. Start with $$x \sqrt{-3}+1=10^{2/3} \, .$$ Cube this equation, expand the left-hand side, collect all the factors of $\sqrt{-3}$ on one side, and square the result, and you'll be left with a sextic (hopefully the same one Robert came up with, up to a constant multiple).
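If you want to check the elimination by machine, sympy can compute the minimal polynomial directly (a sketch; carrying out the cubing-and-squaring by hand should give the same sextic, $x^6 + x^4 + 67x^2 + 363$, which is monic with integer coefficients, hence the number is an algebraic integer):

    from sympy import Rational, sqrt, Symbol, minimal_polynomial

    x = Symbol('x')
    alpha = (10**Rational(2, 3) - 1) / sqrt(-3)
    print(minimal_polynomial(alpha, x))  # x**6 + x**4 + 67*x**2 + 363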
H: About Perimeter - Circumference of Quarter Circle ASB is a quarter circle. PQRS is a rectangle with sides PQ=8 and PS=6. What is the length of arc AQB? Ans $5\pi$ Here is how I am solving it: Radius of quarter circle = diagonal of rectangle = $\sqrt {100} = 10$ For a full circle: $Length_{Arc}=\frac{Arc_{Angle}}{360} \times Circumference$--->A Now the circumference of the complete circle will be: $2\pi10 = 20\pi$ Circumference of $\frac{1}{4}$ of circle = $5\pi + 10 + 10 =5\pi + 20$ Now for the quarter circle $Length_{Arc}= \frac{90}{360} \times (5\pi + 20)$ EDIT: I realize that I could have gotten the answer by simply not adding the 20 in the circumference. But according to this link, circumference is the same as perimeter, and we should add the radius twice when determining the perimeter/circumference of a quarter circle (refer to example 2 on the bottom of the page). Since the formula above (Equ. A) requires the circumference, I wanted to know why we shouldn't be adding the 20, since after all isn't it a part of the circumference of the circle portion? For convenience I have posted an image of the example from that site AI: Once you get the circumference $ = 20\pi$, just divide by $4$ to get the quarter arc. The formula A takes a fraction of the full circle's circumference, so the two radii never enter: they belong to the quadrant's perimeter but not to its arc, which is why adding the $20$ double-counts. Answer is $20\pi / 4 = 5\pi$. Simple. However, the title of the question seems to be misleading. The question in the body seems to have nothing to do with the area and the circumference (perimeter?) of the quadrant.
H: What's the analogue of Sierpinski triangle to disk? What's the (closest) analogue of Sierpinski triangle to disk? AI: You could replace each disk with seven disks of $1/3$ the radius packed inside it, like this: After four iterations, it looks like this:
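Since the original images are gone, here is a short matplotlib sketch of the construction, assuming the intended packing is one central disk plus a hexagonal ring of six (the ring centers sit at distance $2r/3$, which makes neighboring disks of radius $r/3$ exactly tangent):

    import numpy as np
    import matplotlib.pyplot as plt

    def disks(cx, cy, r, depth):
        if depth == 0:
            yield cx, cy, r
            return
        yield from disks(cx, cy, r / 3, depth - 1)  # central disk
        for k in range(6):                          # hexagonal ring of six
            a = np.pi * k / 3
            yield from disks(cx + 2 * r / 3 * np.cos(a),
                             cy + 2 * r / 3 * np.sin(a), r / 3, depth - 1)

    fig, ax = plt.subplots(figsize=(6, 6))
    for cx, cy, r in disks(0.0, 0.0, 1.0, 4):       # four iterations
        ax.add_patch(plt.Circle((cx, cy), r, color='k'))
    ax.set_aspect('equal'); ax.axis('off')
    ax.set_xlim(-1.05, 1.05); ax.set_ylim(-1.05, 1.05)
    plt.show()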
H: Inverse limit by example I'm trying to understand inverse limits. For this I am looking at the example (mentioned in Atiyah-Macdonald, page 102): We start with the topological abelian group $G = \mathbb Z$ (endowed with the topology induced by $|\cdot|_p$) and observe that we have an inverse system, that is, a sequence of groups $G_n$ and group homomorphisms $f_{ji}: G_i \to G_j$ ($j \leq i$) such that (i) $f_{ii} = id$ (ii) $f_{kj} \circ f_{ji} = f_{ki}$ given by $G_n = \mathbb Z / p^n \mathbb Z$ and surjective homomorphisms $f_{ji}: \mathbb Z / p^i \mathbb Z \twoheadrightarrow \mathbb Z / p^j \mathbb Z, \bar{x} \mapsto x \mod p^j$. Then, by definition, the inverse limit is $$\lim_{\longleftarrow_{n \in \mathbb N}} G_n = \{(x_n) \in \prod_{n \in \mathbb N} G_n \mid f_{ji} (x_i) = x_j , \text{ for all } j \leq i \}$$ Now since $f_{ji}$ are the "projections" (they are $\mod p^j$ really) we see that the requirement $f_{ji} (x_i) = x_j$ is nicely fulfilled if $x_{i+1}$ only adds stuff times $p^{i+1}$ since then the added stuff gets deleted when projecting down. Hence we "see" (we don't really but we already know what it should be) that sequences in this inverse limit look like $p$-adic numbers, since $x_i = \sum_{k=0}^i a_k p^k$. Question 1: Do I have it right? Question 2: Why do we need a topology on the group? I don't see where we used the topology. And question 3: Would someone give me another (very easy!) example of an inverse limit, please? AI: Yes. The topology happens to mesh well with the inverse limit, but is not needed. Four easy examples: Consider the system with $G_i = k[x]/(x^i)$ and the $f_{ij}$ again are the projections. Again, since things are surjective, you can only add multiples of $x^i$, etc., so we get that the inverse limit is the ring of formal power series $k[\![x]\!]$. Its (unneeded) topology is given by $|\cdot|_x$, which, in analogy with $|\cdot|_p$, measures how divisible an element is by $x$. Consider the system where all $G_i$ are equal, and $f_{ij}$ are identity maps. Then the inverse limit is also $G_i$. I don't believe any topology is implicit. Consider the inverse limit where all the $f_{ij}$ are zero maps. Then the inverse limit is zero. Consider the inverse limit where there aren't any maps. Then the inverse limit is the direct product. The topology is the product topology.
H: An inequality in a proof of Kunen's Inconsistency This may be a silly question, but I keep coming back to it. Let $j:V\prec M$ be a non-trivial elementary embedding with $M$ a transitive class and $\kappa$ the critical point of $j$. Define the critical sequence for $j$ as usual, setting $\kappa_0=\kappa$ and $\kappa_n=j^n(\kappa)$ (i.e., the $n^{th}$ iterate of $j$ evaluated at $\kappa$). Finally, let $\lambda=\sup_{n<\omega}(\kappa_n)$. Then $j(\lambda)=\lambda$ and $\lambda$ is a strong limit of cofinality $\omega$. The trouble I'm having is the claim that $j(\lambda^+)=\lambda^+$. The claim follows from the string of inequalities: $\lambda^+\leq j(\lambda^+)=(\lambda^+)^M\leq\lambda^+$ However, I don't understand why the final inequality is true. I feel as if I'm just overlooking something obvious. For reference, the string of inequalities is from proof 2 (due to Woodin) of Kunen's Inconsistency on page 320 in Kanamori's The Higher Infinite. AI: We have indeed that $j(\lambda)=\lambda$. Since $j$ is elementary, $j(\lambda^+)$ is the successor of $j(\lambda)=\lambda$ in $M$, so indeed $j(\lambda^+)=(\lambda^+)^M$. However, if $M\subseteq V$ is an inner model then $(\alpha^+)^M\leq(\alpha^+)^V$ for trivial reasons: if $\beta<(\alpha^+)^M$, then $M$ contains an injection of $\beta$ into $\alpha$, and that injection is also an element of $V$, so $\beta<(\alpha^+)^V$. Therefore we have the equality $\lambda^+=j(\lambda^+)$.
H: Which infinite cardinals can be defined using partition relations? Several types of infinite cardinals are easily defined in terms of partition relations. For instance, if $\kappa > \omega$ then $\kappa$ is weakly compact if $\kappa$ satisfies $\kappa\to(\kappa)^2_2$ $\kappa$ is Ramsey if $\kappa$ satisfies $\kappa\to(\kappa)^{<\omega}_2$ I have not seen definitions of cardinals larger than Ramsey cardinals in terms of similar partition relations. The definitions of measurable cardinals (and higher) typically make use of other notions such as critical points of elementary embeddings, ultrafilters with particular properties, or even more intricate ideas. Can partition relations of the form $\kappa\to(\lambda)^{\mu}_{\nu}$ be used to define larger cardinals than Ramsey cardinals? If yes, is there an upper-limit? (where?) If no, does that mean that as soon as we start using elementary-embeddings, etc., we've gone beyond the scope of what partition relations can capture? Thanks in advance for any answers, comments or references. AI: The reason you have not seen such a characterization is that $\kappa\to(\kappa)^\omega_2$ is inconsistent with the axiom of choice. One hint towards the proof would be to well-order $[\kappa]^\omega$ by some $\prec$ and consider the coloring $F(x)=1$ if and only if $\forall y\subsetneq x:y\nprec x$. Now prove that a homogeneous set of color $0$ would correspond to a decreasing sequence of ordinals, and it would be impossible to have a homogeneous set of color $1$ due to the definition of the coloring. On the other hand, in some models without the axiom of choice $\omega_1\to(\omega_1)^{\omega_1}_2$, but in such models $\omega_1$ carries a $\sigma$-complete ultrafilter (i.e. it is a measurable cardinal). There are "finer" notions of partition-defined cardinals, e.g. Erdős cardinals which are weaker than Ramsey cardinals in general. Both Kanamori's The Higher Infinite and Jech's Set Theory, $3$rd millennium edition have chapters dedicated to partition principles, including the aforementioned results and cardinals, and both are considered the standard books for these topics, I believe.
H: Finding the closest number to the power of 2 for x. What is the fastest way to calculate x given y as a large integer? $y = 100$ $z = 2$ $x = 64$ (Power of z and smaller than or equal to y). $x = f(100, 2) = 64$ $x = f(128, 2) = 128$ $x = f(90, 3) = 81$ AI: If you want to avoid a loop you may use: $$\displaystyle x=z^{\lfloor\frac{\ln(y)}{\ln(z)}\rfloor}$$ Otherwise, repeatedly multiply $1$ by $z$ until the product exceeds $y$, and return the previous value.
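Here is a sketch of both approaches. Note that for very large integers the floating-point logarithm can land on the wrong side of an exact power (e.g. $\ln(y)/\ln(z)$ coming out as $6.999\ldots$ for $y = z^7$), so the integer loop is the safe choice for big inputs:

    import math

    def largest_power_le(y, z):
        """Largest power of z that is <= y (assumes y >= 1, z >= 2)."""
        x = 1
        while x * z <= y:
            x *= z
        return x

    def largest_power_le_float(y, z):
        return z ** int(math.log(y) / math.log(z))  # fast but can misround

    assert largest_power_le(100, 2) == 64
    assert largest_power_le(128, 2) == 128
    assert largest_power_le(90, 3) == 81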
H: How to show that this set is compact in $\ell^2$ Let $(a_n)_{n}\in\ell^2:=\ell^2(\mathbb{R})$ be a fixed sequence. Consider the subspace $$C=\{(x_n)_{n}\in\ell^2 : |x_n|\le a_n\text{ for all }n\in\mathbb{N}\}.$$ According to the book [Dunford and Schwartz, Linear operators part I, page 453] $C$ is compact in the $\ell^2$-norm, but there is no proof. How can I show that $C$ is indeed compact in $\ell^2$ ? AI: The set $C$ is the image of the compact set $X = [-1,1]^{\mathbb N}$ (with the product topology) under the map $F: X \to \ell^2$, where $F(x)_j = x_j a_j$. Since the image of a compact Hausdorff space under a continuous map is compact, we just need to show that $F$ is continuous. Note that $\Vert F(x) - F(y)\Vert^2 = \sum_{j=1}^\infty (x_j - y_j)^2 a_j^2$. Given $\varepsilon > 0$, there is $N\in\mathbb{N}$ such that $\sum_{j=N+1}^\infty a_j^2 < \varepsilon/8$. If $x, y \in X$ with $|x_j - y_j|^2 < \varepsilon/(2\|a\|^2)$ for $1 \le j \le N$, we have $\Vert F(x) - F(y)\Vert ^2 < \varepsilon$.
H: Computing a complex integral potentially using residues The question is: Compute: $$\mbox{p.v.}\int_{-\infty}^{\infty}\frac{x\sin4x}{x^2-1}dx$$ Initially I thought it was straightforward and I could just use residues. However, the Residue Theorem requires the poles to be in the upper plane ($y > 0$), and in this case, that is not the case. So, now I have no idea what to do since I cannot use residues. AI: Let $C_R$ denote the counterclockwise semicircular arc extending from $R$ to $-R$ in the upper half plane and let $C_{\rho_1}, C_{\rho_2}$ be the counterclockwise semicircular arcs extending from $-1 - \rho_1$ to $-1 + \rho_1$ and from $1 - \rho_2$ to $1 + \rho_2$, respectively. Note that $\frac{x\sin 4x}{x^2 -1} = \Im \frac{xe^{4ix}}{x^2 -1}$. Consider also that $f(z) = \frac{ze^{4iz}}{z^2 -1}$ has simple poles at $-1$ and $1$. By Jordan's Lemma, $$\left|\int_{C_R} \frac{ze^{4iz}}{z^2-1}\,dz\right| \le \frac{\pi}{4}\cdot\frac{R}{R^2-1},$$ and the right hand side of this inequality tends to zero as $R\to\infty$. The indented contour encloses no poles, so Cauchy's theorem gives $$\int_{C_R} f + \int_{-C_{\rho_1}} f + \int_{-C_{\rho_2}} f +\int_{-R}^{-1 -\rho_1} \frac{xe^{4ix}}{x^2-1}\, dx +\int_{-1 +\rho_1}^{1 - \rho_2} \frac{xe^{4ix}}{x^2-1}\, dx +\int_{1 + \rho_2}^{R} \frac{xe^{4ix}}{x^2-1}\, dx = 0.$$ Letting $R$ tend to infinity and $\rho_1, \rho_2$ tend to zero, each small semicircle contributes $\pi i$ times the residue at its pole (half of a full turn), so $$\textrm{pv }\int_{-\infty }^{\infty } \frac{xe^{4ix}}{x^2 -1 }dx = \pi i\, [\operatorname{Res} (f, -1) + \operatorname{Res} (f, 1) ] = \pi i\,\frac{e^{-4i}+e^{4i}}{2} = \pi i \cos 4.$$ Taking imaginary parts, $$\textrm{pv } \int_{-\infty }^{\infty } \frac{x\sin 4x}{x^2 -1 }dx = \pi \cos 4.$$
H: The group of roots of unity in an algebraic number field Is the following proposition true? If yes, how would you prove this? Proposition. Let $K$ be an algebraic number field. The group of roots of unity in $K$ is finite. In other words, the torsion subgroup of $K^*$ is finite. Motivation. Let $A$ be the ring of algebraic integers in $K$. A root of unity in $K$ is a unit (i.e. an invertible element of $A$). It is important to determine the structure of the group of units in $K$ to investigate the arithmetic properties of $K$. Remark. Perhaps, the following fact can be used in the proof. Every conjugate of a root of unity in $K$ has absolute value 1. Related question: The group of roots of unity in the cyclotomic number field of an odd prime order Is an algebraic integer all of whose conjugates have absolute value 1 a root of unity? AI: The degree of $e^{2\pi i/n}$ goes to infinity with $n$. If $K$ had an infinity of roots of unity, it would have elements of arbitrarily high degree, and thus would not be of finite degree over the rationals, and thus would not, in fact, be an algebraic number field.
H: Limit of a complex valued integral The question is: Compute $$\lim_{p\rightarrow0^{+}}\int_{C_p}\frac{e^{3iz}}{z^{2}-1}dz$$ Where $$C_p: z = 1 + pe^{i\theta}$$ My initial thought was to use residues, yet the poles are -1 and 1, so they're on the real line (thus the Residue Theorem does not apply). My next thought was to find some way to make the integral work with the Cauchy Integral Formula, but I can't find a way to do that since a partial fraction decomposition won't work in this case. So, I am stuck. AI: Your contour $C_p$ is a circle of radius $p \to 0$ centred at the point $z=1$. This means that there is a singularity inside the contour (not on its path). Because of this, we may use the residue theorem (at the singularity $z=1$) to evaluate this integral. If $$f(z)=\frac{\exp(3iz)}{z^2-1}=\frac{\exp(3iz)}{(z+1)(z-1)}$$ we see that $$\operatorname{Res}_{z=1}f(z)=\lim_{z \to 1} (z-1) \frac{\exp(3iz)}{(z+1)(z-1)} = \frac{\exp(3i)}{2}$$ Then, by the residue theorem $$\oint_{C_p} f(z)\, dz=2\pi i \operatorname{Res}_{z=1} (f(z)) = 2 \pi i \frac{\exp(3i)}{2}=\pi i \exp(3 i)$$ Since this value is the same for every $0<p<2$ (the other pole $z=-1$ stays outside the contour), the limit as $p\to0^{+}$ is $\pi i \exp(3i)$ as well.
H: Solve system of nonlinear differential equations I am trying to solve a large system of differential equations. Ideally, I would like to solve it exactly, but if not, can anyone suggest me a numerical method? In all its generality, the system I am trying to solve is like this: (here, $x = x(t) \in R^n$, and $\dot x = dx/dt$) $$ (a_i + P_ix/\Vert P_ix \Vert)^T \dot x = -\Vert P_ix \Vert $$ for $i = 1,\ldots,n$. Here all $P_i$ are positive definite matrices, and the set of $a_i$ is linearly independent. Also, $\Vert . \Vert$ is the 2-norm. It would help me a great deal if someone can help me to solve even a highly restricted special case of it, where $n=2$, $a_i = e_i$ (the $i$-th vector of the canonical basis), and $P_i = I$ for all $i$. Namely, this system: $$ (e_i + x/\Vert x \Vert)^T \dot x = -\Vert x \Vert $$ for all $i$. Thanks a lot, Daniel. AI: Solutions are likely to hit a singularity when the matrix $M$ with rows $(a_i + P_i x/\|P_i x\|)^T$ becomes singular or when $x$ approaches the origin. In your $n=2$ example with $u = x/\|x\|$, I get $\det(M) = 1 + \sum_j u_j$, and you'll get a singularity if that hits $0$. That does happen, e.g. with initial conditions $x_1(0)=1$, $x_2(0)=0$, at approximately $t= 1.2464504$ according to Maple's dsolve(..., numeric). EDIT: Hmm, in fact $x_1 - x_2$ is constant in this system, and you get a singularity when $x_1 = 0$, $x_2 < 0$. If $d = x_1 - x_2$, the system has closed-form implicit solutions $$ t+\ln \left( \left( 2 x_1 \left( t \right) +d \right) \sqrt {2}/2+\sqrt {2\, \left( x_1 \left( t \right) \right) ^{2}+2\,x_1 \left( t \right) d+{d}^{2}}/2 \right) \sqrt {2 }/2 +\ln \left( 2\, \left( x_1 \left( t \right) \right) ^{2}+ 2\,x_1 \left( t \right) d+{d}^{2} \right)/2 +c=0 $$
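On the numerical side: at each state $x$ the system is linear in $\dot x$, so one can assemble the matrix $M$ with rows $(a_i + P_i x/\|P_i x\|)^T$, solve $M\dot x = -(\|P_1 x\|, \ldots, \|P_n x\|)^T$, and hand the result to any standard ODE integrator. A sketch for the special case above, using scipy (tolerances are arbitrary choices; we stop before the singularity near $t \approx 1.246$ for the initial condition $(1,0)$ noted in the answer):

    import numpy as np
    from scipy.integrate import solve_ivp

    def rhs(t, x):
        r = np.linalg.norm(x)
        M = np.eye(2) + np.outer(np.ones(2), x / r)  # rows: (e_i + x/||x||)^T
        return np.linalg.solve(M, -r * np.ones(2))   # solve M x' = -||x|| (1,1)^T

    sol = solve_ivp(rhs, (0.0, 1.2), [1.0, 0.0], rtol=1e-9, atol=1e-12)
    print(sol.y[:, -1])  # x1 - x2 should still be (close to) 1, as derived above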
H: Gradient And Hessian Of General 2-Norm Given $f(\mathbf{x}) = \|\mathbf{Ax}\|_2 = (\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{1/2}$, $\nabla f(\mathbf{x}) = \frac {\mathbf{A}^\mathrm{T} \mathbf{Ax}} {\|\mathbf{Ax}\|_2} = \frac {\mathbf{A}^\mathrm{T} \mathbf{Ax}} {(\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{1/2}}$ $\nabla^2 f(\mathbf{x}) = \frac { (\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{1/2} \cdot \mathbf{A}^\mathrm{T} \mathbf{A} - (\mathbf{A}^\mathrm{T} \mathbf{Ax})^\mathrm{T} (\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{-1/2} \mathbf{A}^\mathrm{T} \mathbf{Ax} } {(\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} ) } = \frac { \mathbf{A}^\mathrm{T} \mathbf{A} } { (\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{1/2}} - \frac {\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{A} \mathbf{A}^\mathrm{T} \mathbf{Ax} } { (\mathbf{x}^\mathrm{T} \mathbf{A}^\mathrm{T} \mathbf{Ax} )^{3/2} }$ I guess I am looking for confirmation that I have done the above correctly. The dimensions match up except for the second term of the Hessian is a scalar, which makes me think that something is missing. Edit: Also, the last equality reduces to $\nabla^2 f(\mathbf{x}) = \frac {\mathbf{A}^\mathrm{T} \mathbf{A} - \nabla f(\mathbf{x})^\mathrm{T} \nabla f(\mathbf{x})} {\|\mathbf{Ax}\|_2}$ AI: It is easier to work with $\phi(x) = \frac{1}{2} f^2(x)$. Just expand $\phi$ around $x$. $\phi(x+\delta) = \frac{1}{2} (x + \delta)^T A^T A (x + \delta) = \phi(x) + x^TA^TA \delta + \frac{1}{2} \delta^T A^T A \delta$. It follows from this that the gradient $\nabla \phi(x) = A^T A x$, and the Hessian is $H = A^TA$. To finish, let $g(x) = \sqrt{2x}$, and note that $f = g \circ \phi$. To get the first derivative, use the composition rule to get $D f(x) = Dg(\phi(x)) D \phi(x)$, which gives $Df(x) = \frac{1}{\sqrt{2 \phi(x)}} x^T A^T A = \frac{1}{\|Ax\|} x^T A^T A$. Let $\eta(x) = \frac{1}{\|Ax\|}$, and $\gamma(x) = x^T A^T A$, and note that $D f(x) = \eta(x) \cdot \gamma(x)$, so we can use the product rule. Let $h(x) = Df(x)$ then the product rule gives $D h(x) (\delta) = (D \eta(x) (\delta)) \gamma(x) + \eta(x) D \gamma(x) (\delta)$. Expanding this yields: $Dh(x)(\delta) = (- \frac{1}{\|Ax\|^2} \frac{1}{\|Ax\|} x^T A^T A \delta) x^T A^T A + \frac{1}{\|Ax\|} \delta^T A^T A $. Noting that $x^T A^T A \delta = \delta^T A^T A x$, we can write this as: $$Dh(x)(\delta) = \delta^T(\frac{1}{\|Ax\|} A^T A - \frac{1}{\|Ax\|^3 } A^T A x x^T A^T A),$$ or alternatively: $$D^2 f(x) = \frac{1}{\|Ax\|} A^T A - \frac{1}{\|Ax\|^3 } A^T A x x^T A^T A .$$ The only difference with the formula given in the question is that the latter dyad was written incorrectly (instead of the dyad $g g^T$, you have the scalar $g^T g$).
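A quick finite-difference sanity check of the corrected formulas (a sketch with random data; the tolerances are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    x = rng.standard_normal(3)

    f = lambda v: np.linalg.norm(A @ v)
    g = A.T @ A @ x / f(x)                    # gradient
    H = (A.T @ A - np.outer(g, g)) / f(x)     # Hessian, with the dyad g g^T

    eps = 1e-6
    E = np.eye(3)
    grad = lambda v: A.T @ A @ v / f(v)
    g_fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps) for e in E])
    H_fd = np.array([(grad(x + eps * e) - grad(x - eps * e)) / (2 * eps) for e in E])
    print(np.allclose(g, g_fd, atol=1e-6), np.allclose(H, H_fd, atol=1e-4))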
H: Vectors in Clifford Algebra I'm studying Clifford Algebra $\mathcal{Cl}_2$ and got stuck in an exercise: Let $\mathbf{a}=e_2+e_{12},\quad \mathbf{b}=(1/2)(1+e_1).$ Compute $\mathbf{ab}$. The answer is zero, but I can't get to it and I'm having trouble putting $\mathbf{b}$ in the form $\mathbf{b}=b_1e_1+b_2e_2$. Important information: $(1, e_1, e_2, e_{12})$ form a basis for the Clifford algebra $\mathcal{Cl}_2$ The Clifford product of two vectors $\mathbf{a}=a_1e_1+a_2e_2\text{ and }\mathbf{b}=b_1e_1+b_2e_2$ is defined as $\mathbf{ab}=a_1b_1+a_2b_2+(a_1b_2-a_2b_1)e_{12}$ And the following multiplication table: $$ \begin{array}{cccc} & \mathbf{e_1} & \mathbf{e_2} & \mathbf{e_{12}} \\\\ \mathbf{e_1}& 1 & e_{12} & e_2 \\\\ \mathbf{e_2}& -e_{12} & 1 & -e_1 \\\\ \mathbf{e_{12}}& -e_2 & e_1 & -1 \end{array} $$ AI: $$\begin{eqnarray*} 2{\bf a}{\bf b} &=& (e_2+e_{12})(1+e_1) \\ &=& e_2+e_{12}+e_2 e_1+e_{12}e_1 \\ &=& e_2 + e_{12} +(-e_{12}) + (-e_2) \\ &=& 0 \end{eqnarray*}$$
H: Matrix Multiplication in 3 Dimensions Possible Duplicate: Is there a 3-dimensional “matrix” by “matrix” product? Is matrix multiplication of 3-dimensional matrices defined? I cannot wrap my mind around how it would even work. Equivalently, is matrix multiplication only defined for 2-dimensional matrices? Edit: The linked question cleared things up for me. I was so used to coding in Matlab with multi-dimensional "matrices" that I didn't think about what I was really doing (eg. 3-dimensional element-wise multiplication isn't really matrix multiplication). Investigating tensors is probably the mathematically-correct answer to my less than precise question. The open-ended-ness of tensor contraction though, means that there isn't one solution to my question. AI: A matrix encodes a linear transformation $T:V\rightarrow W$ in terms of a basis $\{v_1,...,v_n\}$ of $V$ and a basis $\{w_1,...,w_m\}$ of W as follows. The $i^{th}$ column of the matrix is obtained by arranging the $a_{ik}$, $k=1,...,m$ in the column, where $Tv_{i}=a_{i1}w_1+...+a_{im}w_m$. Matrix multiplication tells us how to relate the matrix coefficients of a composition of two linear maps of compatible dimension to the coefficients of the matrices of the composed maps. A more basic question to ask (still interesting) along your lines is: what type of map between (what type of) vector spaces does a 3-dimensional "matrix block" encode? We could go and launch into tensors here, but it would be nice to sneak up on this in an elementary way like the above linear transformation picture. Perhaps it would be fun to think about bilinear maps $T:V \times V \rightarrow \mathbb{R}$ written in terms of bases, for starters. Anyhow, this is a long comment, but I think there are some fun things to do here.
H: evaluate the integral: $\int{\frac{8y}{4-y^2}dy}$ $$\int{\frac{8y}{4-y^2}dy}$$ The answer isn't in the back of my book, so I have no way to see if I'm right! (I'm about 99% sure I'm wrong though) AI: $$\int\frac{8y}{4-y^2}\,dy=-4\int\frac{d(4-y^2)}{4-y^2}=-4\log|4-y^2|+K\,\,(constant)$$
H: Area/Arc Length of a Hyperbolic Segment I'm given a hyperbolic segment, similar to the parabolic segment shown here: http://mathworld.wolfram.com/ParabolicSegment.html I know the height of the segment ("h" in the wolfram article), and the length of the line segment joining the endpoints of the hyperbola ("2a" in the wolfram article). Is it possible to find the area of the segment? Also, does there exist an approximation formula, or rapidly converging method to determine the approximate arc length of the given hyperbola? AI: It is not possible to find an answer just from the information supplied. Consider the hyperbola with equation $$\frac{(y-h)^2}{d^2} -\frac{x^2}{c^2}=1.$$ One branch of this has shape roughly similar to the parabola illustrated in your picture. In particular, it has "height" $h$. In order for the $x$-intercepts to be at $\pm a$ as in the picture, the relevant condition is $c\sqrt{h^2-d^2}=da$. There are infinitely many hyperbolas for specified $h$ and $a$. The areas are not all the same for these hyperbolas, and neither are the arclengths. Once the hyperbola is completely specified, arclength, though somewhat unpleasant, can be handled by setting up the usual integral. It is one of the relatively rare cases where the integration can be carried out explicitly in terms of elementary functions.
H: About finite dimensional vector spaces Would you tell me why the statement below holds? A vector space $V$ has a basis if and only if $0 < \dim V < \infty.$ AI: The claim is false, whether or not one accepts the Axiom of Choice. Consider the vector space $V$ (over the reals) of all polynomials $P(x)$ with real coefficients. Addition of polynomials, and multiplication by a scalar, are defined in the usual way. The space $V$ is infinite-dimensional, but has basis $\{1,x,x^2,x^3,x^4,\dots, x^n, \dots\}$. Remark: It is not difficult to show, without using the Axiom of Choice, that any finite dimensional space does have a basis, so the implication in one direction is true.
H: evaluate the integral: $\int{(3\csc(x)\cot(x) - 5x^7 +\frac{4}{x} + 3)dx}$ $$\int{(3\csc(x)\cot(x) - 5x^7 +\frac{4}{x} + 3)dx}$$ I know this is a simple problem, but I don't have the answer for it and I just want to make sure that I'm correct! AI: By trig identities $$\csc(x) \cot(x) = \frac{\cos(x)}{\sin^2(x)}.$$ Now if we use $u = \sin(x)$ then $du = \cos(x)dx,$ and $\sin^2(x) =u^2.$ So $$ \int \frac{\cos(x)}{\sin^2(x)} dx = \int \frac{1}{u^2} du = -\frac{1}{u} + C = -\csc(x) + C.$$ Can you take it from here? Edit: the complete integral is then $$ \int{(3\csc(x)\cot(x) - 5x^7 +\frac{4}{x} + 3)\ \text{d}x} = -3\csc(x) - \frac{5}{8}x^8 + 4 \log(|x|) + 3x + \text{const}. $$
H: How many elements in the finite field $F_{256}$ satisfy $x^{103}=x$? How many elements of the finite field $\mathbb{F}_{256}$ with 256 elements satisfy $x^{103}=x$? AI: We will remember about the solution $x=0$ at the end. For the others, we want to solve $x^{102}=1$. The multiplicative group of non-zero elements of our field is cyclic. Let $g$ be a generator. Note that $x^{102} =1$ iff $x^{\gcd(102,255)}=1$. This $\gcd$ is $51$. Let $x=g^k$, where $0\le k \le 254$. Then $x^{51}=1$ iff $g^{51k}=1$. This is the case iff $255$ divides $51k$, that is, iff $5$ divides $k$. So we need only count the multiples of $5$ between $0$ and $254$. Then add $1$.
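A brute-force check over one concrete model of $\mathbb{F}_{256}$ (a sketch using the AES modulus $x^8+x^4+x^3+x+1$; any other irreducible octic gives an isomorphic field, so the count is the same):

    def gmul(a, b):                      # multiplication in GF(2^8)
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B                # reduce by x^8 + x^4 + x^3 + x + 1
            b >>= 1
        return p

    def gpow(a, n):                      # square-and-multiply
        r = 1
        while n:
            if n & 1:
                r = gmul(r, a)
            a = gmul(a, a)
            n >>= 1
        return r

    print(sum(1 for x in range(256) if gpow(x, 103) == x))  # 51 nonzero solutions, plus 0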
H: evaluate the integral: $\int_0 ^\sqrt5 \frac{4x}{\sqrt{x^2+4}}dx$ $$\int_0 ^\sqrt5 \frac{4x}{\sqrt{x^2+4}}dx$$ I got $4(5^\frac{1}{4})$ but I'm not sure if thats right AI: Your computation is incorrect. The substitution $u=x^2+4$ gives $du = 2x\,dx$. When $x=0$, we have $u=4$; when $x=\sqrt{5}$, we get $u=9$. So $$\begin{align*} \int_0^{\sqrt{5}}\frac{4x}{\sqrt{x^2+4}}\,dx &= 2\int_0^{\sqrt{5}}\frac{2x\,dx}{\sqrt{x^2+4}}\\ &= 2\int_4^9\frac{du}{\sqrt{u}}\\ &= 2 \int_4^9 u^{-1/2}\,du\\ &= 4u^{1/2}\Bigm|_4^9\\ &= 4(9^{1/2} - 4^{1/2})\\ &= 4(3-2)\\ &= 4. \end{align*}$$
H: Integral inequality - does this look correct? I have been studying for an upcoming exam and this question was on a previous exam: Prove that for $1 < p < \infty, \alpha > 1/p$ $$ \int_0^{\infty} x^{-\alpha p} \left| \int_0^x f(t)dt\right|^pdx \leq C_p \int_0^{\infty} |f(x)x^{1-\alpha}|^p dx. $$ So this is what I wrote down: After a change in variables, $t \mapsto xt$, we get: $$ \left( \int_0^{\infty} x^{-\alpha p} \left| \int_0^x f(t) dt \right|^p dx \right)^{1/p} = \left( \int_0^{\infty} \left| \int_0^x x^{-\alpha}f(t) dt \right|^p dx \right)^{1/p} = \left( \int_0^{\infty} \left| \int_0^1 x^{1-\alpha} f(xt) dt \right|^p dx \right)^{1/p}. $$ Minkowski's inequality for integrals yields: $$ \left( \int_0^{\infty} \left| \int_0^1 x^{1-\alpha} f(xt) dt \right|^p dx \right)^{1/p} \leq \int_0^1 \left( \int_0^{\infty} |x^{1-\alpha}f(xt)|^p dx\right)^{1/p} dt. $$ Another change in variables $x \mapsto x/t$ yields: $$\int_0^1 \left( \int_0^{\infty} |x^{1-\alpha}f(xt)|^p dx\right)^{1/p} dt = \int_0^1 \left( \int_0^{\infty} |t^{\alpha-1}x^{1-\alpha} t^{-1}f(x)|^p dx\right)^{1/p} dt = \int_0^1 t^{\alpha-2} \left( \int_0^{\infty} |x^{1-\alpha} f(x)|^p dx\right)^{1/p} dt.$$ And this last term has the obvious bound: $$\int_0^1 t^{\alpha-2} \left( \int_0^{\infty} |x^{1-\alpha} f(x)|^p dx\right)^{1/p} dt \leq C_p \left( \int_0^{\infty} |x^{1-\alpha} f(x)|^p dx\right)^{1/p}$$ with $C_p = \int_0^1 t^{\alpha-2} < \infty$ since $\alpha > 1/p > 1$, and the result follows after taking the $p$-th power. Now what confuses me is why they would put $\alpha > 1/p$ where I only used $\alpha > 1$. This makes me think I made a mistake. This property is later reinforced with the question: Is there a $p \in (1, \infty)$ such that this inequality remains true for $\alpha = 1/p$. Well, if I did everything correctly, the answer would be it holds for all $p \in (1,\infty)$. Will some one please give me a sanity check here? AI: There is a mistake in your second change of variable. You put the $t^{-1}$ inside the absolute value, and so you are taking a $p$ power of it when you shouldn't. Without that mistake, the exponent in your $C_p$ is $\alpha-1-1/p$, and you require $\alpha>1/p$ for $C_p$ to be finite.
H: Lipschitz Functions Does uniform convergence on a closed and bounded interval preserve Lipschitz functions? (Assume that the sequence of functions has a common Lipschitz constant $K$). AI: It's late at night where I am, so maybe I'm missing something obvious, but.... If $f_n\colon [a,b] \to \mathbb{R}$ each satisfy $|f_n(x) - f_n(y)| \leq K|x-y|$ for all $x, y \in [a,b]$, then just by taking the (pointwise) limit as $n \to \infty$, we obtain $|f(x) - f(y)| \leq K|x-y|$. This reminds me of the following fact: If $\{f_n\}$ is a sequence of (uniformly) equicontinuous functions $[a,b]\to \mathbb{R}$, then $\{f_n\}$ converges pointwise if and only if $\{f_n\}$ converges uniformly.
H: $\sum a_n$ converges absolutely, does $\sum (a_n + \cdots + a_n^n)$ converge Suppose $\sum_{n=1}^\infty a_n$ converges absolutely. Does this imply that the series $$\sum_{n=1}^\infty (a_n + \cdots + a_n^n)$$ converges? I believe the answer is yes, but I can't figure out how to prove it. Any help would be appreciated. Thanks. AI: After a while, $|a_k| \lt \frac{1}{2}$. From that point on, $$|a_k+a_k^2+a_k^3+\cdots +a_k^k| \le |a_k|+\frac{1}{2}|a_k|+\frac{1}{4}|a_k|+\cdots+\frac{1}{2^{k-1}}|a_k|\le 2|a_k|.$$ So by comparison our series converges absolutely.
H: The Star Trek Problem in Williams's Book This problem is from the book Probability with martingales by Williams. It's numbered as Exercise 12.3 on page 236. It can be stated as follows: The control system on the starship has gone wonky. All that one can do is to set a distance to be traveled. The spaceship will then move that distance in a randomly chosen direction and then stop. The object is to get into the Solar System, a ball of radius $r$. Initially, the starship is at a distance $R>r$ from the sun. If the next hop-length is automatically set to be the current distance to the Sun.("next" and "current" being updated in the obvious way). Let $R_n$ be the distance from Sun to the starship after $n$ space-hops. Prove $$\sum_{n=1}^\infty \frac{1}{R^2_n}<\infty$$ holds almost everywhere. It has puzzled me a long time. I tried to prove the series has a finite expectation, but in fact it's expectation is infinite. Does anyone has a solution? AI: $R^2_{n+1}/R^2_n$ has the same distribution as $|u+V|^2$ where $u$ is a fixed unit vector in ${\mathbb R}^3$ and $V$ is a random unit vector (uniform on the unit sphere). I would guess (although I haven't computed it) that $E [\log (|u+V|^2)] > 0$, and the Central Limit Theorem will tell you that almost surely $R_n > c^n$ for some $c > 1$. (EDIT: for sufficiently large $n$; Strong Law of Large Numbers is actually more relevant). EDIT: Yes, $E[\log(|u+V|^2)] = \int_0^{\pi} \sin(\theta) \log(2 - 2 \cos(\theta))\ d\theta/2 = 2 \log(2) - 1 > 0$. In the two-dimensional version, however, $E[\log(|u+V|^2)] = \int_0^\pi \log(2 - 2 \cos(\theta))\ d\theta/\pi = 0$, so the Law of Large Numbers doesn't tell you whether $R_n$ will go to $0$ or $\infty$. However, the Central Limit Theorem will say $P(R_n < 1) \to 1/2$ as $n \to \infty$. Thus it's certainly not the case in the two-dimensional version that $R_n \to 0$ with probability $1$, nor does $R_n \to \infty$ with probability $1$.
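A quick simulation (a sketch; seed and hop count are arbitrary) matches the computed drift: the per-hop increment of $\log R_n$ should average $(2\log 2 - 1)/2 \approx 0.193$, so $R_n$ grows roughly like $c^n$ with $c = 2/\sqrt{e} \approx 1.21$, and $\sum 1/R_n^2$ converges almost surely:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.array([10.0, 0.0, 0.0])       # start at distance R = 10
    log_r0 = np.log(np.linalg.norm(x))
    hops = 2000
    for _ in range(hops):
        v = rng.standard_normal(3)
        v /= np.linalg.norm(v)           # uniform direction on the sphere
        x = x + np.linalg.norm(x) * v    # hop length = current distance
    drift = (np.log(np.linalg.norm(x)) - log_r0) / hops
    print(drift, (2 * np.log(2) - 1) / 2)  # the two numbers should be close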
H: Tracing diagonal numbers on a 2D grid or matrix. I need to test a cell on a 2D grid/matrix to see if its diagonal numbers have a power of 2 in them. Example: C0 C1 C2 C3 C4 C5 C6 C7 C8 C9 R0: 00 01 02 03 04 05 06 07 08 09 R1: 10 11 12 13 14 15 16 17 18 19 R2: 20 21 22 23 24 25 26 27 28 29 R3: 30 31 32 33 34 35 36 37 38 39 R4: 40 41 42 43 44 45 46 47 48 49 R5: 50 51 52 53 54 55 56 57 58 59 The size of the grid is known. The values in each cell are simply ((R*W)+C) where: R=Row Number W=Width of Grid C=Column Number Now let's pick two random cells: C3R4 = 43. Has diagonal values of (10, 21, 32, 54) & (07, 16, 25, 34, 52) out of which 16 and 32 are powers of 2. C8R1 = 18. Has no diagonal values that are powers of 2. Required Result: f(x, y) = True or False. Question: Is there a way to detect the above without looping? If this is not possible in a linear way, what would be the most efficient way to loop? AI: Write the cell as row $r$, column $c$, so its value is $rW+c$ (with $W=10$ in your example). The other cells on the main diagonal through it are $(r+z,\,c+z)$, carrying the values $rW+c+z(W+1)$, and the cells on the anti-diagonal are $(r+z,\,c-z)$, carrying the values $rW+c+z(W-1)$, where $z\neq 0$ ranges over all integers that keep the cell inside the grid. In an $H$-row grid each diagonal has at most $\min(W,H)$ cells, so at most $2\min(W,H)-2$ values need testing: linear in the side of the grid, not in the number of cells. Testing a single value $v$ for being a power of two is $O(1)$ via the bit trick $v>0$ and $v \mathbin{\&} (v-1) = 0$.
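A sketch of the linear scan (0-indexed rows and columns; the cell itself is excluded, matching your examples):

    def has_power_of_two_on_diagonals(r, c, width, height):
        def is_pow2(v):
            return v > 0 and v & (v - 1) == 0
        for dr, dc in ((1, 1), (1, -1)):        # main diagonal, anti-diagonal
            for sign in (1, -1):                # both directions from the cell
                z = 1
                while True:
                    rr, cc = r + sign * dr * z, c + sign * dc * z
                    if not (0 <= rr < height and 0 <= cc < width):
                        break
                    if is_pow2(rr * width + cc):
                        return True
                    z += 1
        return False

    assert has_power_of_two_on_diagonals(4, 3, 10, 6) is True    # cell 43
    assert has_power_of_two_on_diagonals(1, 8, 10, 6) is False   # cell 18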
H: Possibilities of x in a right angle triangle I am stumped on the following question: Which of the following could be the value of x in the diagram a)10 b)20 c)30 d)40 e)50 (Ans b and c) Any suggestions relating to solving this problem? AI: The angle must be between $90$ and $180$ degrees and is equal to $5x.$ All options for $x$ other than b) and c) imply $5x$ is out of this range, so b) and c) are the only possibilities. Without any more information, the angle could indeed take on the value of either 100 or 150 degrees, so both b) and c) are attainable.
H: Basic question about fractions I'm solving some exercises about fields and am trying to find the inverse for $a_1 + \sqrt{2}b_1$, i.e. $\frac{1}{a_1 + \sqrt{2}b_1}$. This means I need to split the fraction into something of the form $x_1 + \sqrt{2}x_2$ but I can't seem to remember how to do such a basic thing! Can anyone help me out? AI: Hint: $\displaystyle \frac{1}{a_1+\sqrt{2} b_1}= \frac{a_1- \sqrt{2}b_1}{(a_1+\sqrt{2}b_1)(a_1- \sqrt{2}b_1)}$, and the denominator $(a_1+\sqrt{2}b_1)(a_1-\sqrt{2}b_1)=a_1^2-2b_1^2$ is rational. It is the same idea as for complex numbers.
H: $p$-adic completion of integers I'm trying to do the following exercise: Let $p$ be a prime and for $n\geq 1$ let $\alpha_n :\mathbb Z/p \mathbb Z \to \mathbb Z/p^n \mathbb Z$ be the injection of abelian groups given by $1 \mapsto p^{n−1}$. Consider the direct sum $\alpha : A \to B$ of these maps where $A$ is a countable direct sum of copies of $\mathbb Z/p \mathbb Z$ and $B$ is the direct sum of the groups $\mathbb Z/p^n \mathbb Z$. Show that the $p$-adic completion of $A$ is just $A$ but that the completion of $A$ for the topology induced from the $p$-adic topology on $B$ is the direct product of the $\mathbb Z/p \mathbb Z$. Deduce that $p$-adic completion is not a right exact functor on the category of all $\mathbb Z$-modules. At first I thought $A$ was just the normal integers but it's not since for example for $p=2$, $-1 = 01111\dots$ is not in the space. The direct sum are all things with only finitely many non-zero terms, so for example the sequence $a_0 = 10000\dots a_1 = 110000\dots, a_2 = 111000\dots$ is a sequence in $A$ with a limit not in $A$. I guess I am confused about what "$p$-adic completion" means: I assumed it meant that I take the equivalence classes of Cauchy sequences (Cauchy with respect to $|\cdot|_p$) where two sequences are equivalent if their difference tends to zero. But if that was what "$p$-adic completion" really meant then the sequence $a_k$ I gave above would be Cauchy and didn't have a limit in $A$ which is a counter example to what the exercise asks me to show. Would someone explain to me what "$p$-adic completion" means? Thanks. Edit I'm bumping this question because the answerer is on holiday and I still have a bunch of questions. Thanks for your help. AI: The $p$-adic completion should mean “complete with respect to the ideal generated by $p$”. I guess the discussion is restricted to groups, so you could instead think about completing with respect to the subgroup filtration $(p^nG)$. I would forget about the metric for the time being. For $A$ we have $pA = 0$, so to complete we're taking the inverse limit \[ A \leftarrow A \leftarrow \cdots \] in which all of the transition maps are the identity. This is clearly isomorphic to $A$. Now we want to see what happens if we complete $A$ with respect to the filtration $(C_n) = (p^nB \cap A)$. So what you should do is write out $p^nB$ and $\alpha(A)$, and then compute these $C_n$. It should come out that $A/C_n = \bigoplus_{i = 1}^n \mathbb Z/p\mathbb Z$, and that the transition map $A/C_{n + 1} \to A/C_n$ forgets the $(n + 1)$-th component. Does the result seem plausible now?
H: How does the Strong Law of Large Numbers follow from Birkhoff's Ergodic Theorem? We want to prove the strong law of large numbers with Birkhoff's ergodic theorem. Let $X_k$ be an i.i.d. sequence of $\mathcal{L}^1$ random variables. This is a stochastic process with measure-preserving operation $\theta$ (the shift operator). From Birkhoff's ergodic theorem, we obtain $\frac{X_0 + \dotsb + X_{n-1}}{n} \to Y$ a.s., with $Y=\mathbb{E}[X_1 \mid \mathcal{J}_{\theta}]$ a.s. Now, if $Y$ is constant a.s., then $Y= \mathbb{E}[X_1]$ a.s., and we would have the desired result. But why is $Y$ constant a.s.? AI: The transformation $\theta$ on $\Omega^{\Bbb N}$ is ergodic. Indeed, it's enough to show that for each pair of cylinders $A$ and $B$, we have $$\frac 1n\sum_{k=0}^{n-1}\mu(\theta^{-k}A\cap B)\to \mu(A)\mu(B),$$ where $\mu$ is the measure on the product $\sigma$-algebra. If $A=\prod_{j=0}^NA_j\times \Omega\times\dots$ and $B=\prod_{j=0}^NB_j\times \Omega\times\dots$, we have for $k>N$ \begin{align} \theta^{-k}A\cap B&=\{(x_j)_{j\geq 0}, (x_{j+k})_{j\geq 0}\in A, (x_j)_{j\geq 0}\in B\}\\ &=\{(x_j)_{j\geq 0},x_{j+k}\in A_j, 0\leq j\leq N, x_j\in B_j,0\leq j\leq N\}\\ &=B_0\times \dots\times B_N\times \Omega\times\dots\times \Omega\times A_0\times\dots\times A_N\times \Omega\times\dots, \end{align} and we use the definition of the product measure $\mu$ on cylinders (the first $N$ terms don't matter). Since $\theta$ is ergodic, $\mathcal J_{\theta}$ consists only of events of measure $0$ or $1$. The conditional expectation with respect to such a $\sigma$-algebra is necessarily constant.
H: Uniform convergence of functions, Spring 2002 The question I have in mind is (see here, page 60, the solution is at page 297): Assume $f_{n}$ is a sequence of functions from a metric space $X$ to $Y$. Suppose $f_{n}\rightarrow f$ uniformly and each $f_n$ has inverse $g_{n}$. Now assume $f$'s inverse $g$ is uniformly continuous on $Y$. Prove that $g_{n}\rightarrow g$ uniformly. I could not prove it using standard techniques as I do not know how to bound $|g_{n}(y)-g(y)|$ when $n$ becomes very large. The authors argue that the convergence of $g_{n}(y)\rightarrow g(y)$ is similar to $f(g_{n}(y))\rightarrow f(g(y))$ because the mapping by a uniformly convergent function series keeps uniform convergence. Thus they give the following argument that $$d(f(g(y)),f(g_{n}(y)))=d(y,f(g_{n}(y)))\le d(y,f_{n}(g_{n}(y)))+d(f_{n}(g_{n}(y)),f(g_{n}(y)))=d(f_{n}(g_{n}(y)),f(g_{n}(y)))$$ So since $f_{n}\rightarrow f$ uniformly by hypothesis the statement is proved. My question is: Is the step of substituting $|g_{n}(y)-g(y)|$ by $|f(g(y))-f(g_{n}(y))|$ really justified? I could not get the "keep uniform convergence" thing the author is talking about. But I also could not come up with a better proof. AI: If $h_n\colon S\to S'$ converges uniformly to $h$ on $S$ and $\varphi\colon S'\to S''$ is uniformly continuous on $S'$, fix $\varepsilon>0$. There is a $\delta>0$ such that if $d_{S'}(x,y)\leq \delta$ then $d_{S''}(\varphi(x),\varphi(y))\leq\varepsilon$. Now, we use the fact that there is an integer $n_0$ such that for all $n\geq n_0$, we have $$\sup_{x\in S}d_{S'}(h_n(x),h(x))\leq \delta.$$ Then $$\sup_{x\in S}d_{S''}(\varphi(h_n(x)),\varphi(h(x)))\leq\varepsilon.$$ Now, we apply this result with $\varphi=g$ and $h_n:=f\circ g_n$: the displayed inequality in the question shows that $f\circ g_n$ converges uniformly to the identity on $Y$, so $\varphi\circ h_n = g\circ f\circ g_n = g_n$ converges uniformly to $\varphi\circ h = g$.
H: Convergence of a series of reciprocal prime numbers If $p$ is a prime number, and $q$ is its twin prime, the sum of the reciprocals of the twin primes is convergent, and its value is Brun's constant. Now, if we consider the prime numbers $x=p+\alpha$ where $\alpha$ is a constant such that $x$ is also a prime, we can consider the sum: $$S=\sum_{k=1}^{N}\frac{1}{x_k}$$ The question is: is it possible to show for what values of $\alpha$ the series is convergent? Thanks in advance. AI: For any non-zero integer $\alpha$, the Selberg sieve can be used to show that the number of primes $p \le n$ such that $p + \alpha$ is prime is at most $C_\alpha n/(\log^2 n)$. The value of $\alpha$ does have an effect: if $\alpha$ is divisible by many small primes, then the constant in the upper bound will be higher. On the other hand if $\alpha$ is odd, there is at most one prime satisfying the condition. Either way, it is easy to show by partial summation that the reciprocal sum converges. Brun's original proof gave a slightly weaker upper bound, but one that is still strong enough to obtain convergence.
H: what is the behaviour of moving dot with 50% chance to go left or right? If a dot is moving (from zero) left or right, by one, with 50% chance to go left or right - is it going to go to the +inf or -inf when it has infinite moves? AI: This is the simple symmetric random walk on the integers. It is well known that this walk is recurrent, and so visits every point infinitely often with probability one. In particular, it has probability zero of converging to either $+\infty$ or $-\infty$. Proofs can be found here, or in most introductory textbooks on random processes.
H: Finite family of infinite sets / A.C. Let $\{A_i\mid i\in n\}$ be a finite family of infinite sets. (That is, $A_i$ is infinite for every $i\in n$, where $n\in \mathbb{N}$.) Here, we can choose a representative $a_i$ from each $A_i$ and construct $\{a_i\mid i\in n\}$. Does this process really avoid the Axiom of Choice? How do I write this process down formally (in mathematical language)? AI: $\newcommand{\Zobr}[3]{#1 \colon #2 \to #3}$ Claim: Let $\{A_i; i\in n\}$ be a system of non-empty sets, $n\in\omega$. Then there is a choice function $\Zobr fn{\bigcup A_i}$. Proof. Induction on $n$. $1^\circ$ For $n=0$ put $f=\emptyset$. For $n=1$: Since $A_0$ is non-empty, we know that $(\exists a\in A_0)$. For every such $a$ the function $f=\{(0,a)\}$ has the required property. We have used existential instantiation here. $2^\circ$ Suppose that there is a choice function $\Zobr fn{\bigcup A_i}$. We want to extend it to a new function $g$ defined on $n+1=n\cup\{n\}$ in such a way that $g(n)\in A_n$. Again we know that there exists $a\in A_n$ and we put $g=f\cup\{(n,a)\}$. (Again, this is an application of existential instantiation.)
H: Higher variance implies larger spread? Suppose a configuration of points is given in $X\in\mathbb{R}^{n\times 2}$. Given that the configuration has zero column means, the variance of each axis, $x$ and $y$, can be expressed as $$\mathrm{var}(x)=\frac{1}{n}x^Tx\qquad\text{and}\qquad\mathrm{var}(y)=\frac{1}{n}y^Ty.$$ If $\mathrm{var}(x)>\mathrm{var}(y)$, does that mean that the configuration has larger spread along the $x$ axis, i.e. the configuration is stretched along $x$? AI: Discussion in the comments has revealed that the intended question is whether $\operatorname{var}(x) > \operatorname{var}(y)$ implies that $\operatorname{range}(x) > \operatorname{range}(y)$. The answer is no; for an extreme counterexample, let $$\begin{align} x &= [\underbrace{-1, -1, \ldots, -1}_{\text{$n/2$ elements}}, \underbrace{1, 1, \ldots, 1}_{\text{$n/2$ elements}}],\\ y &= [-k, \underbrace{0, 0, \ldots, 0}_{\text{$n-2$ elements}}, k], \end{align}$$ Then $x$ has variance $1$ and range $2$, while $y$ has variance $2k^2/n$ and range $2k$. Pick any $k$ between $1$ and $\sqrt{n/2}$, and you get $\operatorname{var}(x) > \operatorname{var}(y)$ but $\operatorname{range}(x) < \operatorname{range}(y)$. The point is that the variance averages over all the elements, while the range only looks at the two most extreme ones, so they can disagree when the two extreme points are not representative of the rest. (In the counterexample in my comment, I had switched $x$ and $y$ by mistake.)
H: P[random x is composite | $2^{x-1}$ mod $x = 1$ ]? Select a uniformly random integer $n$ between $2^{1024}$ and $2^{1025}$ (Q) What is the probability that n is composite given that $2^{n-1}$ mod $n = 1$ ? How did you calculate this? More info: One way to calculate this would be if you had the following two variables: $$P_Q(n) = 1 - { P_{prime}(n) \over P_{cong}(n) }$$ Where: $P_{prime}(n)$ is the probability n is prime $P_{cong}(n)$ is the probability that $2^{n-1}$ mod $n = 1$ So answering the following two questions would be sufficient to answer the main one: What is $P_{prime}(n)$ equal to? What is $P_{cong}(n)$ equal to? (This holds because the probability that n is prime if the congruence is false is 0.) Based on the Poulet-number formulae given below: exp((ln(2^1025))^(5/14))-exp((ln(2^1024))^(5/14)) = 123 and (2^1025)*exp(-ln(2^1025)*ln(ln(ln(2^1025))) / (2*ln(ln(2^1025)))) - (2^1024)*exp(-ln(2^1024)*ln(ln(ln(2^1024))) / (2*ln(ln(2^1024)))) = 9.82e263 So it's between 123 < x < 9.82e263 ?? And so $P_Q$ is: 3.29e-306 < P_Q < 2.63e-44 AI: Due to Fermat's Little Theorem, $$ a^{p-1} \bmod p=1 $$ whenever $p$ is prime and $a$ is coprime to $p$. In your case $a=2$ and $p$ ranges over all odd primes, as pointed out by tomasz in the comment. But, as I realized just now, you are looking for composite $x$, so you are looking for Fermat pseudoprimes. Their distribution can be found here. A file with pseudoprimes up to $10^{15}$ can be found here. They are also called Poulet numbers, and according to the MathWorld page, for large $x$ their number below $x$ is given by $$ \exp((\ln(x))^{5/14})<P_2(x)<x\exp\left(-\frac{\ln x \ln \ln \ln x}{2 \ln \ln x}\right). $$ If $P_2(x)$ is the same as your $P_{\text{cong}}(x)$, you have something to work with. But see my edit below for some other possible meanings of $P_2(x)$. Concerning your edit: You can approximate $\pi(x)$, the number of primes below $x$, by $x/\log x$. So in the given range there are $$ \pi(2^{1025})- \pi(2^{1024})\approx \frac{2^{1025}}{1025\log 2}- \frac{2^{1024}}{1024\log 2}\approx 2.5\cdot 10^{305} $$ primes. EDIT Here are some more facts about Poulet numbers: Rotkiewicz Theorem: If $n>19$, there exists a Poulet number between $n$ and $n^2$. The theorem was proved in 1965. Let $p$, $q$, $\ell$ be odd primes, and let $P_2(x)$ be the counting function for odd pseudoprimes with two distinct prime factors, $P_2(x) := \#\{n \leq x : n = pq, p < q, psp(n)\}$ (where $psp(n)$ means $n$ is pseudoprime). Then $P_2(x)\sim C\sqrt{x} / \ln^2(x)$ as $x\to \infty$. For more read here. The notation $P_2(x)$ (the counting function for odd pseudoprimes with $2$ distinct prime factors) seems to be a little confusing. In the last link they also state a result by Erdős, saying $c_1\log x\leq P_k(x)\le c_2\frac x{\log^k x}$. Maybe you need to sum the $P_k(x)$ to get to what you need.
H: How can I prove $\lim_{n \to \infty} \int_{0}^{\pi/2} f(x) \sin ((2n+1) x) dx =0 $? For continuous $f$, $f \in L^2$, prove that $$\lim_{n \to \infty} \int_{0}^{\pi/2} f(x) \sin ((2n+1) x) dx =0 $$ AI: Show it when $f$ is a polynomial (integrating by parts once gives a bound of order $1/n$). Then argue by density, using the Stone-Weierstrass theorem.
H: Will this problem be solved using Thales' theorem for triangles I am stumped on the following question: In triangle ABC, AD=DB, DE is parallel to BC. The area of triangle ABC is 40. What is the area of triangle ADE? I know Thales' theorem must be applied here, but I can't figure out how. Any suggestions? AI: Since $DE$ is parallel to $BC$, $\triangle ABC$ is similar to $\triangle ADE$. Now $AD=DB \implies AB=2AD$, so the altitude from $A$ to $DE$ is half of the altitude from $A$ to $BC$, and $BC=2DE$. Given Area$(\triangle ABC)=40$, we get $\frac{1}{2}(\text{altitude from }A\text{ to }BC)(BC)=40\implies \frac{1}{2}(2\cdot\text{altitude from }A\text{ to }DE)(2\cdot DE)=40\implies \frac{1}{2}(\text{altitude from }A\text{ to }DE)(DE)=10$. Hence Area$(\triangle ADE)=10$.
H: Structures on torus Quotienting $\mathbb R^2$ by different lattices isomorphic to $\mathbb Z^2$, we get different tori. Somehow I think of the tori as having different "structures", but thinking more about it, I am not quite sure what different structures I am really thinking of. Two structures I am guessing at are complex structures and metrics. Could someone explain how these differ? Also, am I thinking of the right kind of structure? Are there other structures which vary with the lattice chosen? AI: You're mostly right. The relation between complex structures and metrics comes from their common passion about angles. Basically, a complex structure on a Riemann surface is just a procedure for turning tangent vectors 90° counterclockwise. (Well, actually, that is an almost complex structure but they are the same thing in (complex) dimension 1). But that is one of the things a metric (with an orientation) allows you to do! So a metric on a surface defines canonically a complex structure. Of course, many metrics give the same structure (example 1: you can rescale the metric; example 2: the sphere $S^2$ has a lot of metrics but only one complex structure.) The good notion is the notion of conformally equivalent metrics. Roughly speaking, two metrics are conformally equivalent if they define the same notions of angles between two tangent vectors. That means that they are proportional to each other (the ratio being a positive smooth function on the manifold.) So you get an important fact about surfaces: complex structures and conformal classes of metrics are basically the same thing. So, most of the Riemann surface theory can be stated equivalently in the holomorphic world or in the conformal world. (Example: the uniformisation theorem says either “Any Riemann surface is a quotient of $S^2$, $\mathbb C$ or $\mathbb H$” or “Any Riemannian metric on a surface is conformally equivalent to a metric of constant curvature.”) This polyvalence clearly is one of the riches of Riemann surface theory and explains partially the huge number of dedicated textbooks, as there is room for lots of different approaches. So, here are two possible answers to your question: the relevant structures are “complex structures” or “conformal Riemannian structures”. This gives the same notion of “equivalent” lattices: two of them are equivalent if there is an affine map sending one to the other. (One can imagine variants: for example, one could choose to call two lattices equivalent if there is a volume-preserving map sending one to the other. In that case, one has to enrich the relevant structures on the quotient. Of course, the finer the equivalence relation on the lattices, the richer the structure, but affine equivalence is a popular choice, because the two structures I mentioned are very important and very natural.)
H: About the orthogonal decomposition of $L^2 (-\pi , \pi) $ For any $f \in L^2 (-\pi, \pi)$, prove that there exists a unique orthogonal decomposition into even and odd functions: $$ L^2 ( -\pi , \pi) = L^2 _{odd} (-\pi , \pi ) \oplus L^2_{even} (-\pi , \pi).$$ AI: You can also think this way. Suppose that $f = g + h$, where $g$ is even and $h$ is odd. Then you have $$f(x) = g(x) + h(x)$$ and $$f(-x) = g(-x) + h(-x) = g(x) - h(x),$$ since $g$ is even and $h$ is odd. Adding and subtracting these two equations, you get $$f(x) + f(-x) = 2g(x)$$ and $$ f(x) - f(-x) = 2h(x).$$ Hence you have $$g(x) = {f(x) + f(-x)\over 2} $$ and $$h(x) = {f(x) - f(-x)\over 2}.$$ It is easy to check that $g$ is in fact even and that $h$ is in fact odd. We see that, algebraically, $L^2$ decomposes as a direct sum into even and odd parts. Now if $g$ is any even function and $h$ is odd, then $h\overline g$ is odd. So $$\langle h, g\rangle_{L^2} = \int_{-\pi}^{\pi} h(x){\overline g}(x)\, dx = 0.$$ The decomposition is orthogonal. This is where Norbert's answer comes from.
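A quick numerical illustration of this decomposition and its orthogonality (a sketch in Python; the test function and grid are arbitrary choices, not part of the problem):

```python
import numpy as np

# Split an arbitrary test function into even and odd parts on a
# symmetric grid over (-pi, pi) and check that the parts are orthogonal.
x = np.linspace(-np.pi, np.pi, 20001)   # symmetric grid, so f[::-1] samples f(-x)
f = np.exp(np.sin(3 * x)) + x ** 3      # arbitrary test function
g = (f + f[::-1]) / 2                   # even part (f(x) + f(-x)) / 2
h = (f - f[::-1]) / 2                   # odd part  (f(x) - f(-x)) / 2

dx = x[1] - x[0]
print(np.allclose(g, g[::-1]), np.allclose(h, -h[::-1]))  # True True
print((g * h).sum() * dx)               # <g, h> ~ 0 up to quadrature error
```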
H: Probability with card game I need help calculating the chances of winning this strange game, which I'll explain now: You have a deck of 52 cards (4 suits, Ace to King). All you have to do is turn the cards over one by one, counting one, two, three as you go. If you get an Ace when you count one, or a two when you count two, or a three when you count three, you lose. For example, if you get: 2 (one), K (two), 6 (three), 3 (one), Q (two), 3 (three), you lose, because you got a 3 when you counted three. The only way I could think of to solve this problem is to calculate the chance of losing and then: \begin{equation} P(W)=1-P(L) \end{equation} where $ P(W) $ is the chance of winning and $ P(L) $ is the chance of losing. But how do I calculate $ P(L) $? I've tried this, but I'm almost sure it's wrong: $P(L)=$ the chance of getting an ace in the first position, or a 2 in the second position, or a 3 in the third position, or an ace in the fourth position, and so on... So: \begin{equation} P(L)=\frac{4}{52}+\frac{4}{51}+\frac{4}{50}+\frac{3}{49}+\frac{3}{48}+\frac{3}{47}+\frac{2}{46}+\frac{2}{45}+\frac{2}{44}+\frac{1}{43}+\frac{1}{42}+\frac{1}{41} \end{equation} Thanks everybody :) AI: Hint: Think of the initial configuration of the deck. You will count one $18$ times, and the aces need to be among the other $34$ positions. You will count two $17$ times, and the twos need to be among the other $35$ positions. You will count three $17$ times, and the threes need to be among the other $35$ positions. So the chance that no ace lands on a "one" count is $\frac {34}{52}\cdot \frac {33}{51}\cdot \frac{32}{50}\cdot \frac{31}{49}$, and you would continue similarly for the twos and threes. The odds are not good. As a rough approximation, you would expect each of $12$ cards to kill you $\frac 13$ of the time, so you will win about $\left( \frac 23 \right)^{12}\approx 0.77 \%$ of the time. It is actually a bit better than this, as the aces take up spaces that would otherwise let the twos and threes kill you. The product just described is not exactly correct, though: it ignores the correlations between where the aces go and the places available to the twos and threes. To see the problem, consider a deck having just one ace, one two, and one three. There are two winning decks, 23A and 3A2, for a probability of $\frac 13$. The product argument would say the ace can go two places, chance $\frac 23$ (correct), and the two can go two places, but the ace may have taken one of them. I think a proper answer would have to enumerate how many "two" slots are taken by aces, then how many "three" slots are taken by aces and twos. It is a lot of work. The approximate value is probably pretty close.
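For intuition, here is a Monte Carlo sketch of the game in Python (an estimate only, not the exact combinatorial answer; the trial count is an arbitrary choice):

```python
import random

def wins() -> bool:
    # Values 1..13 (ace = 1), four of each; you lose if a card's value
    # matches the count, which cycles one, two, three through the deck.
    deck = [v for v in range(1, 14) for _ in range(4)]
    random.shuffle(deck)
    return all(card != i % 3 + 1 for i, card in enumerate(deck))

trials = 200_000
print(sum(wins() for _ in range(trials)) / trials)  # roughly 0.008
```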
H: How to simplify an expression like this: $(x^2+x^{-2}-2)^{1/2}$ Sorry, I am not sure how to do the maths mark-up on this site but hopefully the question will make sense. I should know how to do this, but I have got myself stuck! Can anyone help? $(x^2+x^{-2}-2)^{1/2}$ AI: $$\left(x-\frac{1}{x}\right)^2= \dots?$$
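Spelling the hint out: $x^2 + x^{-2} - 2 = \left(x-\frac{1}{x}\right)^2$, so for real $x \neq 0$, $$\left(x^2+x^{-2}-2\right)^{1/2} = \left|x - \frac{1}{x}\right|$$ (the absolute value matters, since $x - 1/x$ can be negative).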
H: A proof about complex numbers If $a, b, c\in \mathbb{C}$ and $|a|=|b|=|c|=1$, prove that $(a+b)(b+c)(c+a)/(abc)\in \mathbb{R}$. I have thought about this question for a long time, but I can only get something long and troublesome, not the answer. Can anyone help me please? THANK YOU! ~ AI: Let us denote the quantity by $q$. Since $a$, $b$ and $c$ are all of unit modulus, $\overline a = 1/a$ and likewise for $b$ and $c$, so $${\overline q} = \overline{(a+b)(a+c)(b+c)/(abc)} = (1/a + 1/b)(1/a + 1/c)(1/b + 1/c)(abc). $$ Now add the fractions and get $${\overline q} = (abc){a + b\over ab}{a + c\over ac}{b + c\over bc} = (a+b)(b+c)(a+c)/(abc) = q.$$ Since $q = \overline q$, the quantity $q$ is real.
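A numerical spot check of the claim (a sketch in Python; the random sampling is purely illustrative):

```python
import cmath
import random

def unit() -> complex:
    """A random complex number of modulus 1."""
    return cmath.exp(1j * random.uniform(0, 2 * cmath.pi))

for _ in range(5):
    a, b, c = unit(), unit(), unit()
    q = (a + b) * (b + c) * (c + a) / (a * b * c)
    print(q.imag)  # ~0 up to floating-point error, so q is real
```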
H: Proving that the general linear group is a differentiable manifold We know that the general linear group is defined as the set $\{A\in M_n(\mathbb{R}): \det A \neq 0\}$. I have a homework problem asking me to prove that it is a smooth manifold. So far my only idea is that we can think of each matrix, say $A$, in that group as an $n^2$-dimensional vector. So I guess that every neighborhood of $A$ is homeomorphic to an open ball in $\mathbb{R}^{n^2}$ (however, I don't know how to prove this). Now, I'm asking for help: could anyone give me a hint on how to prove that the general linear group is a smooth manifold, since I really don't have an idea of how to do this? (By the way, honestly, I don't really understand what a $C^{\infty}$-smooth structure means, which is essential to the definition of a smooth manifold.) Your help will be greatly appreciated. :) AI: Construct a map $f:M_n(\mathbb{R}) \rightarrow \mathbb{R}$ by taking each matrix to its determinant, where $M_n(\mathbb{R})$ is the set of all $n \times n$ matrices. Since the determinant is a polynomial in the $n^2$ matrix entries, $f$ is continuous. Now $f^{-1}(\mathbb{R}\backslash\{0\})=GL_n(\mathbb{R})$, and $\mathbb{R}\backslash\{0\}$ is an open subset of $\mathbb{R}$. Therefore, $GL_n(\mathbb{R})$ is an open subset of $M_n(\mathbb{R})\cong\mathbb{R}^{n^2}$, and an open subset of a smooth manifold is itself a smooth manifold. I'll leave the rest to you.
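A numerical illustration of this openness (a sketch in Python; the matrix, perturbation size, and seed are arbitrary choices):

```python
import numpy as np

# Invertibility is an open condition: small perturbations of an
# invertible matrix keep the determinant away from zero.
rng = np.random.default_rng(0)
A = np.eye(3)                              # det(A) = 1
for _ in range(5):
    E = 1e-3 * rng.standard_normal((3, 3))
    print(np.linalg.det(A + E))            # stays near 1, hence nonzero
```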
H: measuring distance between probability measures only at the tail Is there any official (i.e., to be found in probability books) metric for the distance between two probability measures, defined only on a subset of their support? Take, for example, the total variation distance: $$TV(\mu,\nu)=\sup_{A\in\mathcal{F}}|\mu(A)-\nu(A)|.$$ If $X$ and $Y$ are two real positive continuous random variables with densities $f_X$ and $f_Y$, then their total variation distance is, if I understand correctly: $$TV(\mu_X,\mu_Y)=\int_{0}^{\infty}|f_X(z)-f_Y(z)|dz.$$ Would it make any sense to calculate a quantity, for $\tau>0$, let's call it partial distance, like this: $$PV(\mu_X,\mu_Y;\tau)=\int_{\tau}^{\infty}|f_X(z)-f_Y(z)|dz\;\;\;?$$ If this does not make any sense (sorry, I really cannot tell, as I am not that good with measure theory...), can anyone think of a measure that would make sense? What I want to use this for is to compare the closeness of two PDFs (or other functions describing a distribution: CDF, CCDF...) $f_X(t)$, $f_Y(t)$ to a third one $f_Z(t)$. I know that both $f_X$ and $f_Y$ "eventually" ($t\to\infty$) converge to $f_Z$, but I would like to show that one of them gets closer, sooner than the other one... AI: I gave you this same answer over at MathOverflow. Looking at your response to Michael Chernick, you probably do want to consult Dudley, as the Prohorov metric and its follow-up in Proposition 11.3.2 refer directly to metrics on random variables, which could be defined for the tail only as you request. You may want to check out Real Analysis and Probability by R. M. Dudley (2002, Cambridge University Press). Chapters 9-11 discuss several metrics on probability measures and random variables (laws), and since restricting your support would be equivalent to some random variable on the measure, you should be able to use something like the metrics discussed in section 11.3 in particular.
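For concreteness, here is one way the proposed $PV$ could be computed numerically (a sketch in Python; the two densities, the truncation point $T$ standing in for $\infty$, and the grid size are illustrative assumptions, not from the question):

```python
import numpy as np
from scipy import stats

def partial_variation(pdf_x, pdf_y, tau, T=50.0, n=200_000):
    # Riemann-sum approximation of the integral of |f_X - f_Y|
    # over [tau, T]; both tails are assumed negligible beyond T.
    z = np.linspace(tau, T, n)
    dz = z[1] - z[0]
    return np.abs(pdf_x(z) - pdf_y(z)).sum() * dz

f_X = stats.gamma(a=2.0).pdf   # illustrative density
f_Y = stats.expon().pdf        # illustrative density
for tau in (0.0, 1.0, 5.0):
    print(tau, partial_variation(f_X, f_Y, tau))  # shrinks as tau grows
```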
H: About completeness of the Fourier series. The Fourier series of a function is given by $$ \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos n \theta + \sum_{n=1}^\infty b_n \sin n \theta . $$ Here, what does the statement "$\sum_{n=1}^\infty b_n \sin n \theta $ is complete" mean? And could you tell me how I can prove this statement? AI: I don't think it makes sense to call a Fourier series complete. But what you can call complete is the basis of your space. For example, if the functions you're computing Fourier series of are in $L^2(G)$, where $G$ is a compact Abelian topological group, one can say that the characters $\chi$ of $G$ are complete, and it means two things: (i) that for every function $f$ in $L^2$ there exist coefficients $a_k$ and continuous characters $\chi_k \in \mathrm{Hom}(G, S^1)$ such that $\|f - \sum_{k=0}^n a_k \chi_k \|_{L^2} \xrightarrow{n \to \infty} 0$, and (ii) that the $\chi_n$ are orthonormal, that is, $\langle \chi_n , \chi_m \rangle = \delta_{nm}$, where $\langle \cdot, \cdot \rangle$ is the inner product of $L^2$, namely $\langle f, g \rangle = \int_G f \;\overline{g}\; d \mu$ (where $\overline{g}$ denotes complex conjugation).
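To see this notion of completeness in action, one can watch the $L^2$ error of the partial sums shrink (a sketch in Python, using $f(x)=x$ on $(-\pi,\pi)$ and its classical sine series $\sum_{n\ge 1} 2(-1)^{n+1}\sin(nx)/n$ as the example):

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = x
dx = x[1] - x[0]
for N in (1, 4, 16, 64, 256):
    # Partial sum of the Fourier series of f(x) = x
    s = sum(2 * (-1) ** (n + 1) * np.sin(n * x) / n for n in range(1, N + 1))
    err = np.sqrt(((f - s) ** 2).sum() * dx)
    print(N, err)   # L^2 error decreases as N grows
```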
H: Presence of identity element and abelian groups I. N. Herstein in Topics in Algebra defines a group as a set having a special element $i$ such that: $a,i\in A(S)$, which satisfies $i\cdot a = a\cdot i = a$. Since commutativity thus holds for the identity element, doesn't that contradict the non-necessary condition that $a \cdot b \neq b \cdot a$? In a similar vein, is the set of integers under subtraction a group? (My reasoning is no, it isn't, because $a-0 = a \neq 0-a$.) Soham AI: The non-necessary condition is $\forall a,b \in A, a \cdot b \not= b \cdot a$. This doesn't rule out that it may be true for some $a, b \in A$. In fact, it could be true for all elements in the group. Then we call it an Abelian group, which is still a group, nonetheless. And you are correct, the integers (or rationals or real numbers) with subtraction do not form a group. You gave one reason. You could also check associativity.
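To make the associativity failure concrete: $(1-2)-3 = -4$ while $1-(2-3) = 2$, so subtraction on $\mathbb{Z}$ is not associative; and as you observed, $0$ is only a right identity, since $a - 0 = a$ but $0 - a = -a \neq a$ for $a \neq 0$.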
H: Application of the Schwarz Lemma, what if $f(f(z))=cz$? I have an analytic function $f$ mapping the (unit) disc to the (unit) disc, with $f(0)=b$ in $D$, and $f(b)=0$. Part (a) was to show that $|f'(b)f'(0)|\leq 1$, which I have already done by applying the Schwarz lemma to $f(f(z))$. Next, part (b) states that Suppose $f'(b)f'(0)=1$. Find all possible functions $f$. I am stuck. AI: The version of the Schwarz Lemma I know says that for holomorphic $g:D\rightarrow D$ with $g(0)=0$, if in addition you have $|g(c)| = |c|$ for at least one $c\neq 0$, or $|g'(0)|=1$, then $g$ is a rotation. You may apply the latter again to $g(z)=f\circ f(z)$: here $g'(0)=f'(f(0))f'(0)= f'(b)f'(0)=1$. So $g(z)=az$ for some complex number $a$ with $|a|=1$; in this case clearly $a=1$. (Your heading seems to indicate you figured that out already, but I'm not sure.) Consequently, $f$ is onto and one-to-one in $D$ (why?), hence $f(z)$ is a biholomorphic map $D\rightarrow D$. These maps are well known, and I assume you know them too, since this is what you usually prove using the Schwarz lemma and the reason to introduce the lemma. They are of the form $$f(z)= \frac{\alpha z+\beta}{\bar{\beta}z + \bar{\alpha}} \,\,, \quad \alpha,\beta \in \mathbb{C},\ |\alpha|^2 -|\beta|^2 =1.$$ If you know this, the rest is just calculation.
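A quick numerical check of this normal form (a sketch in Python; the parameters are arbitrary choices satisfying $|\alpha|^2 - |\beta|^2 = 1$):

```python
import cmath
import math
import random

# f(z) = (alpha z + beta) / (conj(beta) z + conj(alpha)) with
# |alpha|^2 - |beta|^2 = 1 should map the unit circle to itself.
t = 0.7  # any t > 0 gives a valid parameter pair via cosh/sinh
alpha = math.cosh(t) * cmath.exp(1j * random.uniform(0, 2 * math.pi))
beta = math.sinh(t) * cmath.exp(1j * random.uniform(0, 2 * math.pi))

for _ in range(5):
    z = cmath.exp(1j * random.uniform(0, 2 * math.pi))  # a point with |z| = 1
    w = (alpha * z + beta) / (beta.conjugate() * z + alpha.conjugate())
    print(abs(w))  # ~1 up to floating-point error
```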
H: What is the formula for the difference between CI and SI? If principal, time and rate are given, how do I find the difference between compound interest and simple interest? $P = 12{,}000$, $n = 1\tfrac12$ years, $R = 10\%$ per year. Formulae that I know: CI $-$ SI for 2 years $= P(R/100)^2$; CI $-$ SI for 3 years $= P(R/100)^2 (R/100 + 3)$. But neither of these will work for 1 and a half years, so what formula do I use? Or how do I use these formulae in this context? AI: Formula for the simple-interest amount: $$P_{SI} = P \left(1 + \frac{nR}{100}\right)$$ Formula for the compound-interest amount: $$P_{CI} = P \left( 1+\frac{R}{100} \right)^n$$ Therefore their difference is $$P_{CI} - P_{SI} = P \left( \left(1+\frac{R}{100}\right)^n - \left(1+\frac{nR}{100}\right)\right)$$ If you substitute $n=2$ and $n=3$ into this formula and expand out the brackets, you will get the formulae you quoted in your question.
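Plugging the question's numbers into this difference formula (a sketch in Python; note it assumes the compound amount uses the fractional exponent $n = 1.5$ directly, rather than the convention of compounding one full year and then adding simple interest for the half year):

```python
P, R, n = 12_000, 10, 1.5

ci_amount = P * (1 + R / 100) ** n   # compound amount with fractional exponent
si_amount = P * (1 + n * R / 100)    # simple-interest amount
print(ci_amount - si_amount)         # ~44.27
```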