H: What is the formula of the sequence? And how to deduce the formula? Does this sequence have a formula? 66 90 117 150 195 264 360 450 540 690 870 1,020 1,260 1,500 1,830 2,160 2,580 3,000 3,510 4,080 4,770 5,490 If it has, please tell me how to find a formula for this kind of sequence, are there any general ways? thanks AI: I can't tell the answer to your sequence, but I can tell some techniques to find a natural pattern in a finite sequence. First you should understand what it means to "differentiate" finite sequences. The idea I will use is similar to "integrating the derivative" to find the function. Define $x_n =$ the $n^{\text{th}}$ term of your sequence and let $\Delta x_n = x_{n+1} - x_n$. Then the sequence $\Delta x_n$ looks like this: $$ 24 \, 27 \, 33 \, 45 \, 69 \, 96 \, 90 \, 90 \, 150 \, 180 \, 150 \, 240 \, 240 \, 330 \, 330 \,... $$ In this case it doesn't help, but it usually does. You can repeat this process until you find a nice pattern (it might not work), and then compute your sequence's formula inductively using $x_{n+1} = x_n + \Delta x_n$.
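The differencing step described above is easy to automate; here is a quick sketch in Python (the sequence values are copied from the question, and the helper name `differences` is just for illustration):

```python
def differences(seq):
    """Finite 'derivative': successive differences of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

x = [66, 90, 117, 150, 195, 264, 360, 450, 540, 690, 870, 1020,
     1260, 1500, 1830, 2160, 2580, 3000, 3510, 4080, 4770, 5490]

d1 = differences(x)   # first differences, Delta x_n
d2 = differences(d1)  # second differences, and so on

print(d1)
```

Repeated application corresponds to taking $\Delta$ again; the original sequence is recovered inductively via $x_{n+1} = x_n + \Delta x_n$.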
H: Equivalent of $ u_{n}=\sum_{k=1}^n (-1)^k\sqrt{k}$ I'm trying to show that $$ u_{n}=\sum_{k=1}^n (-1)^k\sqrt{k}\sim_{n\rightarrow \infty} (-1)^n\frac{\sqrt{n}}{2}$$ when $n\rightarrow\infty$ How can I first show that $$u_{2n}\sim_{n\rightarrow \infty} \frac{\sqrt{2n}}{2}$$ and then deduce the equivalent of $u_{n}$? AI: Pair consecutive terms together: $$u_{2n} = \sum_{k=1}^{2n} (-1)^k \sqrt{k} = \sum_{k=1}^n \left( \sqrt{2k}-\sqrt{2k-1}\right) .$$ Since $\displaystyle \sqrt{1 - \frac{1}{2k}} = 1 - \frac{1}{4k} + \mathcal{O}(k^{-2})$ and $\displaystyle \sum_{k=1}^n k^p = \frac{n^{p+1}}{p+1} + \frac{n^p}{2} + \mathcal{O}(n^{p-1})$ we get $$u_{2n}= \sum_{k=1}^n \left(\frac{\sqrt{2}}{4\sqrt{k}} + \mathcal{O}(k^{-3/2})\right)= \frac{\sqrt{2n}}{2} + \mathcal{O}(1).$$ Now $u_{2n+1} = u_{2n} - \sqrt{2n+1}$ so the result that actually holds is $\displaystyle u_n \sim (-1)^n \frac{\sqrt{n}}{2}.$
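As a numerical sanity check of the claimed equivalent (a sketch, not part of the original answer), the ratio $u_n / \left((-1)^n \sqrt{n}/2\right)$ should approach $1$ for both even and odd $n$:

```python
import math

def u(n):
    """Partial sum u_n = sum_{k=1}^n (-1)^k sqrt(k)."""
    return sum((-1) ** k * math.sqrt(k) for k in range(1, n + 1))

n = 10000
ratio_even = u(n) / (math.sqrt(n) / 2)          # n even: should be near 1
ratio_odd = u(n + 1) / (-math.sqrt(n + 1) / 2)  # n odd: should be near 1
print(ratio_even, ratio_odd)
```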
H: Joint distribution gives two marginal In the following exercise I got two different distributions for $Z.$ I want to know where my mistake is. Every hint or comment is appreciated. The exercise goes as follows: Let $(X,Y)$ be a random vector with values in $\mathbb{R}^2$ such that it has a joint density given by: $$f(x,y)=\frac{1}{x}\exp(-x)\chi_{\{0<y<x\}}$$ where $\chi_{\{0<y<x\}}$ is the indicator fct. on $\{0<y<x\}.$ Let $Z:=\frac{X}{Y}$. Compute the distribution of $(X,Z)$ and $Z$. Now my computations: First computation: \begin{align*} \mathbb{E}[f(X,Z)]&=\int_{\{0<y<x\}}f(x,\frac{x}{y})\frac{1}{x}\exp(-x)d(x,y)\\ &=\int_{\mathbb{R}_{>0}\times\mathbb{R}_{>1}}f(x,z)\frac{1}{x}\frac{x}{z^2}\exp(-x)d(x,y). \end{align*} Where I used the change of variables $\phi:\mathbb{R}_{>0}\times\mathbb{R}_{>1} \rightarrow \{(x,y) \in \mathbb{R}^2 : 0<y<x\}; (x,z) \mapsto (x,\frac{x}{z}).$ Hence $d\mathbb{P}_{(X,Z)}=\frac{1}{z^2}\exp(-x)\cdot \chi_{\mathbb{R}_{>0}\times\mathbb{R}_{>1}}$. Thus $\mathbb{P}(Z\leq \alpha)=\int_{1}^{\alpha}\frac{1}{z^2}\int_{0}^{\infty}\exp(-x)dxdy=\int_{1}^{\alpha}\frac{1}{z^2}dy$, leading to $$d\mathbb{P}_{Z}=\frac{1}{z^2}\chi_{\mathbb{R}_{>1}}.$$ Second computation: \begin{align*} \mathbb{P}(Z\leq \alpha)=\mathbb{P}(\frac{X}{Y}\leq \alpha)&=\int_{\{\frac{X}{Y}\leq \alpha\}}\frac{1}{x}\exp(-x)\cdot \chi_{\{0<y<x\}}d(x,y)\\ &=\int_{\mathbb{R}^2}\frac{1}{x}\exp(-x)\cdot \chi_{\{0<y<x\}}\cdot \chi_{\{x\leq \alpha y\}}d(x,y)\\ &=\int_{0}^{\infty}\frac{1}{x}\exp(-x)\int_{0}^{\alpha x}dydx\cdot \chi_{\{0<\alpha<1\}}\\ &=\alpha\chi_{\{0<z<1\}} \end{align*} Hence $Z$ is uniformly distributed on $(0,1)$. Now, where is my mistake? (I am always uneasy when doing change of variables so I fear there is my problem...). Thanks in advance. AI: I don't see how you got from $\int_{\mathbb{R}^2}\chi_{\{0<y<x\}}\cdot \chi_{\{x\leq \alpha y\}}\,d(x,y)$ to $\int_{0}^{\infty}\int_{0}^{\alpha x}dy\,dx\cdot \chi_{\{0<\alpha<1\}}$. As $y$ goes from $0$ to $\alpha x$, how is the constraint $x\leq\alpha y$ maintained?
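For what it's worth, the first computation can be confirmed by simulation (a sketch, not part of the original exchange). Integrating the given joint density over $y$ shows that $X$ is exponential with rate $1$, and given $X=x$, $Y$ is uniform on $(0,x)$; the first computation then predicts $\mathbb{P}(Z\leq 2)=\int_1^2 z^{-2}\,dz = 1/2$, while the (incorrect) second computation would give $1$:

```python
import random

random.seed(0)
N = 200_000
count_le_2 = 0
for _ in range(N):
    x = random.expovariate(1.0)   # X ~ Exp(1), the marginal of the given density
    y = random.uniform(0.0, x)    # Y | X = x ~ Uniform(0, x)
    z = x / y
    if z <= 2.0:
        count_le_2 += 1

print(count_le_2 / N)  # should be near 1/2, matching the density 1/z^2 on (1, inf)
```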
H: Residue theorem The residue theorem Let $\Omega\subseteq \mathbb{C}$ open, $f$ meromorphic on $\Omega$ and $A$ be the set of the poles of $f$. If $\Gamma$ is a cycle in $\Omega\backslash A$ with $\mathrm{ind}_{\Gamma}(\alpha)=0$ for $\alpha\in \mathbb{C}\backslash \Omega$, we have: $$ \frac{1}{2\pi i}\int_{\Gamma}f(z)\mathrm{d}z=\sum_{a\in A}{\rm Res}(f,\ a)\cdot \mathrm{ind}_{\Gamma}(a) $$ Usually, there is a sketch of the proof on Wikipedia. But, there isn't one for this theorem. Can someone point me to, or explain, an outline of the proof of this one? Just the most important steps. AI: Basically, you slice your contour into bits, each containing one pole, which is possible because poles are isolated. Then the integral over $\Gamma$ is split into the sum of integrals over the parts. Note that the integrals along the cuts cancel out. By Cauchy's theorem or otherwise, show that this is equivalent to integrating around small circles surrounding the poles. Now expand $f$ in a Laurent series and note that only the terms of power $-1$ survive, since $$\int_C\frac{dz}{(z-a)^n}=0,\quad n\ne 1$$ $$\int_C\frac{dz}{z-a}=2\pi i\cdot \operatorname{ind}_C(a)$$ That's a very rough indication of the proof.
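The two key integrals at the end can be checked numerically on a small circle (a sketch; the helper `contour_integral` is just an illustration):

```python
import cmath, math

def contour_integral(f, a, r=1.0, steps=4000):
    """Riemann-sum approximation of the integral of f(z) dz over |z - a| = r."""
    total = 0.0 + 0.0j
    for i in range(steps):
        t = 2 * math.pi * i / steps
        z = a + r * cmath.exp(1j * t)
        dz = 1j * r * cmath.exp(1j * t) * (2 * math.pi / steps)
        total += f(z) * dz
    return total

a = 0.3 + 0.2j
I1 = contour_integral(lambda z: 1 / (z - a), a)       # expect 2*pi*i
I2 = contour_integral(lambda z: 1 / (z - a) ** 2, a)  # expect 0
print(I1, I2)
```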
H: Poisson summation formula and Schwartz functions I am reading a proof of the Poisson summation formula which states that (with my version of the Fourier transform - I think they sometimes vary by a constant factor) for $f$ a Schwartz function on $\mathbb{R}$ (that is, a smooth function with all derivatives of $f(x)$ of all orders decay faster than any polynomial function as $|x| \to \infty$), the following relationship holds: $$\sum \limits_{n \in \mathbb{Z}}f(n) = \sum \limits_{n \in \mathbb{Z}}\hat{f}(2\pi n)$$ where $\hat{f}$ denotes the Fourier transform. The proof goes as follows: define 2 functions on $\mathbb{T} = \{z: |z| = 1\}$ by $F, G: \mathbb{T} \to \mathbb{C}$ by $F(\theta) = \sum \limits_{n \in \mathbb{Z}}\hat{f}(2\pi n) e^{2 \pi i n \theta}$, and $G(\theta) = \sum \limits_{k \in \mathbb{Z}}f(\theta + k)$. Then take Fourier transforms and show they are equal. After showing also that $F$ and $G$ are both Schwartz functions on $\mathbb{T}$, we apply uniqueness of Fourier series to show that $F=G$. Now uniqueness here relies very much on the fact that both $F$ and $G$ are Schwartz functions on $\mathbb{T}$: but a Schwartz function on $\mathbb{T}$ is simply an element of $C^\infty (\mathbb{T})$, i.e. a smooth function on $\mathbb{T}$. So, my question is this: how do we know (or show) that $F$ and $G$ are smooth? It is clear both are periodic, so it will suffice to show smoothness within their respective periods I suppose. $f$ is Schwartz on $\mathbb{R}$ which means it is smooth, but we are taking an infinite sum of translations of $f$, so it is not manifestly clear that either $F$ or $G$ will remain smooth. How do we show this? I guess we need to make use of the fact that at large values $f$ is very fast-decaying to show that most terms are "insignificant" in the sums, but whenever I tried to prove the smoothness formally it became messy. Is there a nice trick to showing $F$ and $G$ are smooth on $\mathbb{T}$? 
I would be very grateful for a proof of the fact. Many thanks in advance. AI: The following result is relevant here: If $f_n : \mathbb{R} \to \mathbb{R} $ is a sequence of continuously differentiable functions, uniformly convergent to $f$, and the sequence $f'_n$ converges uniformly to $g$, then $f$ is differentiable and $f'=g.$ So we can check that $F$ and $G$ are smooth by checking that we can keep on differentiating them, which we do by applying the above theorem, the Weierstrass M-test and the fact that Schwartz functions decay very quickly.
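As a concrete sanity check of the summation formula itself (a sketch, not part of the original answer): take the Gaussian $f(x)=e^{-x^2}$, which is Schwartz; under the stated convention $\hat f(\xi)=\int f(x)e^{-i\xi x}\,dx$ its transform is $\sqrt{\pi}\,e^{-\xi^2/4}$, so both sides of $\sum f(n) = \sum \hat f(2\pi n)$ can be computed directly:

```python
import math

# f(x) = exp(-x^2);  with  f^(xi) = integral of f(x) exp(-i xi x) dx
# the transform is   f^(xi) = sqrt(pi) * exp(-xi^2 / 4)
lhs = sum(math.exp(-n ** 2) for n in range(-20, 21))
rhs = sum(math.sqrt(math.pi) * math.exp(-(2 * math.pi * n) ** 2 / 4)
          for n in range(-20, 21))
print(lhs, rhs)  # agree to machine precision
```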
H: Baire sigma-algebra Hello everyone. I would like to ask how to solve this question, which I extracted from Cohn's book on measure theory. Let $X$ be a compact Hausdorff space, and let $C(X)$ be the set of all real-valued continuous functions on $X$. Then $B_{o}(X)$, the Baire $\sigma$-algebra on $X$ is the smallest $\sigma$-algebra on $X$ that makes each function in $C(X)$ measurable; the sets that belong to $B_{o}(X)$ are called the Baire subsets of $X$. A Baire measure on $X$ is a finite measure on $(X,B_{o}(X))$ (i) Show that $B_{o}(X)$ is the $\sigma$-algebra generated by the closed $G_{\delta}$'s in $X$. (ii) Show that if the compact Hausdorff space $X$ is second countable, then $B_{o}(X)=B(X)$ Note: $B(X)$ is the Borel $\sigma$-algebra. Other question not from the text: (i) What is the distinction between a locally compact Hausdorff space and a compact Hausdorff space? AI: For the first part, you must show two things: (1) that every closed $G_\delta$ subset of $X$ is an element of $B_0(X)$, and (2) that if any $\sigma$-algebra $A$ on $X$ has every closed $G_\delta$ subset of $X$ as an element, then $B_0(X)\subseteq A$. For the latter, it suffices that every continuous function is $A$-measurable, by definition of $B_0(X)$. For the second part, note that any $G_\delta$ subset of $X$ is an element of $B(X)$--since it is generated by the open sets, and so every countable intersection of open sets is $B(X)$-measurable--so in particular, every closed $G_\delta$ subset of $X$ is an element of $B(X)$, and so $B_0(X)\subseteq B(X)$ by the first part. Note that we didn't even use the fact that $X$ is compact, here, so in general we can say $B_0(X)\subseteq B(X)$. 
To show the other containment, we must show that every closed subset of the second countable compact Hausdorff space $X$ is an element of $B_0(X)$--again, since $B(X)$ is generated by the closed subsets of $X$--which (as Asaf points out in the comments) follows from the fact that closed sets are $G_\delta$ in this context. Hopefully that's enough to get you started.
H: Can one construct a "Cayley diagram" that lacks only an inverse? My group theory text asks for an example of a Cayley-like diagram that exhibits all the properties of a group except (only) that at least some elements lack an inverse. Is it possible to construct such a diagram? Nathan Carter p. 24 Question 2.15. in Visual Group Theory (in the context of this question) defines a group as a "collection of actions" that satisfies four rules: There is a predefined list of actions [generators] that never changes. Every action is reversible. Every action is deterministic. Any sequence of consecutive actions is also an action. Clearly (2) will be violated in any diagram that is constructed by adding a new node, $n$, to the diagram for a group $G$ via one-way arrows for each of the generators in $G$, since it provides no way to reach $n$ from any other nodes, so that paths starting at $n$ (only) cannot be reversed. This will be the case regardless of what nodes in $G$ each of the arrows from $n$ leads to. For example, starting with $G=D_4$: But, while this diagram satisfies rules (1) and (4), doesn't it also violate (3) because, for example, $r^4=e$ and (starting from $n$) $r^4=r^{-1}$ even though $r^{-1}\neq e$? EDIT: As discussed below, this figure does not, in fact, violate (3): the $r^{-1}$ mentioned above (the one starting at $n$) does not exist. Also, it does matter where the arrows from $n$ are connected: they must be connected to $G$ in a way that follows $G$'s rules. In the diagram above, for example, if the dotted path had been chosen for $f$ instead of the one indicated, the diagram would violate (3). Rule (3) will always be satisfied in a diagram where the connections from the added node replicate outgoing connections from a node in $G$ (here $m$). The answer provided in the book's key focuses on the fact that (2) requires that any diagram ...cannot have two arrows of the same type pointing to the same destination from two different sources. 
However, while this is certainly a property of (all?) diagrams (including the one above) that violate (2), it is not sufficient to violate (2), and can result in the violation of other rules instead. For example, simply "rewiring" one of the cyclic actions in $D_4$ so that it points "to the same destination from two different sources" can produce a diagram that satisfies (2) but violates (3): The example offered in the key violates not only (2), but (3) and (4) as well: Is it possible to construct a diagram that satisfies rules (1), (3), and (4) while violating (2)? AI: Edit: To construct such a diagram, we can consider the multiplication table of $\mathbb{Z}_n$ for any $n>1$. (Considering $\mathbb{Z}_n$ as an additive group, of course.) Consider in particular $\Bbb Z_4$. A Cayley diagram corresponding to it as an additive group would be: Above, the arrows clearly indicate the action of addition by $1,$ modulo $4$, which generates the group. However, the diagram for multiplication would be: Above, the red arrows indicate multiplication by $2,$ modulo $4;$ the blue line indicates a two-directional swap that occurs under multiplication by $3,$ modulo $4.$ These two actions readily generate all four actions in the monoid. Now, this diagram violates reversibility in two ways, since there's no way back from $0$, at all, and there is no way to get from $2$ back to either $1$ or $3$. (The corresponding diagram for $\Bbb Z_n$ violates reversibility in exactly one way if and only if $n$ is prime.) This does not, however, violate determinism, as a perusal of Carter $\S1.2,$ pages $4-5$ shows. To elucidate this further, consider the following example, which (arguably) violates only determinism. Suppose we have a thin metal disk with rounded edges (such that it cannot balance on edge), one black face, and one white face. 
We will put this disk in a large, flat, circular area with up-curved edges (like a large, flat-bottomed bowl), so that it can't escape or fetch up against a wall. Our generating action is to put the disk on edge in the center of the area and spin it until it comes to rest again. The available states are black-side-up and white-side-up. This yields the following Cayley-like diagram: Note in particular that for every instance of the action, there is a chance of remaining in the pre-spin state or switching states--in this uncertainty, determinism is violated. However, due to our construction, no matter how many times we spin it, there are no other states in which the disk can be, so any sequence of consecutive actions is again an action. The finicky part is reversibility. The argument is that the disk will eventually switch sides with probability $1,$ which (in real-world terms) means we'll always return to a previous state, if we simply act sufficiently-many times. The situation is, of course, rather contrived (and arguably doesn't really work), but the Cayley-like diagram shows what happens when determinism fails: there is at least one action that "branches" at at least one node. This doesn't happen with multiplication in $\Bbb Z_4$, and so determinism is not violated in that case.
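The multiplicative $\Bbb Z_4$ example can be checked mechanically (a sketch, not from the original answer): encode each action as the tuple of its values on $\{0,1,2,3\}$, close the two generators under composition, and verify that the diagram is deterministic and closed but not reversible:

```python
# Actions on {0, 1, 2, 3}; generators are "multiply by 2" and "multiply by 3"
# modulo 4, encoded as tuples (image of 0, image of 1, image of 2, image of 3).
def times(m):
    return tuple((m * x) % 4 for x in range(4))

def compose(f, g):
    """Do g first, then f."""
    return tuple(f[g[x]] for x in range(4))

gens = [times(2), times(3)]

# Close the generators under composition (rule 4: sequences of actions).
actions = set(gens)
while True:
    new = {compose(f, g) for f in actions for g in gens} - actions
    if not new:
        break
    actions |= new

print(sorted(actions))  # all four multiplication maps, including the identity
```

Each action is a function (deterministic, rule 3), the closure is finite (rule 4), yet multiplication by $2$ sends both $1$ and $3$ to $2$, and no action moves $0$ anywhere, so reversibility (rule 2) fails.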
H: $t > 2n^2 \implies t!>n^t$ for $n,t \in \mathbb{N}$ I have come across this in a proof: If $t>2n^2$ then, $$t!>(n^2)^{t-n^2}=n^tn^{t-2n^2}>n^t$$ Obviously, this is very helpful for determining the relationship between factorials and exponentials, but I fail to see the motivation behind the initial assumption. Is there a different way to think about this or derive the same result? AI: Suppose $t > 2n^2$, and for simplicity assume $t$ is even. Then $t/2 > n^2$, so all numbers between $t$ and $t/2$ are bigger than $n^2$. And surely the numbers between $t/2 - 1$ and $1$ are all at least $1$. So $$t! = \left[t (t-1) (t-2) \ldots \left(\frac{t}{2} + 1\right) \frac{t}{2}\right] \cdot \left(\frac{t}{2} - 1\right)! > \left[n^2 n^2 \cdots n^2\right] \cdot 1 > n^t.$$
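The inequality is easy to spot-check numerically (a sketch; the helper name `bound_holds` is just for illustration):

```python
import math

def bound_holds(n, t):
    return math.factorial(t) > n ** t

# check the claim  t > 2 n^2  ==>  t! > n^t  on a small grid
for n in range(1, 8):
    for t in range(2 * n * n + 1, 2 * n * n + 30):
        assert bound_holds(n, t), (n, t)
print("verified for n = 1..7")
```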
H: What are operators, commutators and anticommutators in algebra? What is the proof for the fact that the product of two operators is generally not commutative? $$\hat A\hat {\vphantom{A}B}\not=\hat{\vphantom{A}B} \hat A.$$ What is the difference between $\hat A\hat {\vphantom{A}B}$ and $\hat {\vphantom{A}B} \hat A$? What are the commutator and anticommutator of two operators $\hat A$ and $\hat {\vphantom{A}B}$? AI: What are your operators? There is no reason that they should commute in general, because it's not in the definition. A linear operator $\hat{A}$ is a mapping from a vector space into itself, i.e. $\hat{A}:V\to V$ (actually an operator isn't always defined by this fact; I have seen it defined this way, and I have seen it used just as a synonym for map). A matrix is a representation of an operator in a particular basis of the vector space. For example, consider the differential operator $\frac{d}{dx}$ and the multiplication-by-$x$ operator, $\hat{x}$, where the vector space $V=$ all polynomials over the real numbers. For example, $x^2\in V$, and $\frac{d}{dx}x^2=2x$, $\hat{x}x^2=x^3$. That is how these operators act on that particular polynomial. The commutator of $\hat{A}$ and $\hat{B}$ is $\hat{A}\hat{B}-\hat{B}\hat{A}$. Using now the polynomial $1$ for example, $(\frac{d}{dx}\hat{x}-\hat{x}\frac{d}{dx})1=\frac{d}{dx}(\hat{x}1)-\hat{x}(\frac{d}{dx}1)=\frac{d}{dx}x=1,$ and so $\hat{x}\frac{d}{dx}\ne\frac{d}{dx}\hat{x}$ since the commutator isn't $0$. In fact for any polynomial $p(x)$, $(\frac{d}{dx}\hat{x}-\hat{x}\frac{d}{dx})p(x)=p(x)$, and so the commutator is just the identity operator. The anticommutator of $\hat{A}$ and $\hat{B}$ is $\hat{A}\hat{B}+\hat{B}\hat{A}$.
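The commutator $[\frac{d}{dx},\hat{x}]$ acting as the identity on polynomials can be checked concretely by representing polynomials as coefficient lists (a toy sketch; the helper names are illustrative only):

```python
# Polynomials as coefficient lists: [a0, a1, a2] represents a0 + a1 x + a2 x^2.
def D(p):
    """Differentiation operator d/dx."""
    return [i * c for i, c in enumerate(p)][1:] or [0]

def X(p):
    """Multiplication-by-x operator."""
    return [0] + p

def sub(p, q):
    """Coefficient-wise subtraction, padding with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def commutator(p):
    """[d/dx, x] applied to p, i.e. D(X(p)) - X(D(p))."""
    return sub(D(X(p)), X(D(p)))

p = [5, 0, -3, 2]          # 5 - 3x^2 + 2x^3
print(commutator(p))       # the commutator returns p itself
```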
H: How to Prove ($\mathbb{C}\langle x, y \rangle$, $\|\cdot\|$) is a Banach Space Let $\mathbb{C}\langle x,y\rangle$ be the group ring of the complex numbers over the free group in $x,y$. Let $len : \langle x,y \rangle \rightarrow \mathbb{N}$ denote the standard word norm and let $\varphi=exp \circ len: \langle x, y \rangle \rightarrow [1,\infty)$. Define the following norm for $\displaystyle \alpha=\sum_{g \in \langle x,y\rangle} a_{g}g \in \mathbb{C}\langle x,y \rangle$: $$\|\alpha\|=\sum_{g \in \langle x, y \rangle} |a_{g}|\varphi(g)$$ It's not hard to show that $\|\cdot\|$ is indeed a norm and in fact for all $\alpha, \beta \in \mathbb{C}\langle x, y \rangle$ we have $\|\alpha\beta\| \le \|\alpha\|\cdot\|\beta\|$. The natural involution $\displaystyle ^{*}:\sum_{g \in \langle x, y \rangle}a_{g}g \mapsto \sum_{g \in \langle x, y \rangle} \overline{a_{g}}g^{-1}$ has the properties that would make $(\mathbb{C}\langle x, y \rangle,\|\cdot\|,^{*})$ into a Banach *-algebra, including $\|\alpha\|=\|\alpha^{*}\|$. My question is this: Is $\mathbb{C}\langle x, y \rangle$ complete with respect to $\|\cdot\|$? I stumbled upon this while working on an undergrad research project but haven't had any functional analysis, so I'm not sure how to go about proving/disproving this. Any help/references would be greatly appreciated. AI: There are no countable-dimensional Banach spaces. This is a corollary of the Baire category theorem: if $e_1, e_2, ...$ were a basis for a Banach space $B$, then $B$ would be the countable union of the sets $\text{span}(e_1), \text{span}(e_1, e_2), ...$, all of which are nowhere dense, which contradicts BCT3 (in Wikipedia's terminology). There is an obvious and more naive argument which doesn't work: if I had $e_1, e_2, ...$ as above, normalized to have norm $1$, then isn't it obvious that, say, $\sum \frac{e_n}{2^n}$ doesn't lie in the span of the $e_i$? 
This follows in the special case that the $e_i$ lie in a Hilbert space and are taken to be orthonormal, but in general you can't conclude this. Why? The naive continuation of the naive argument is that since $B = \text{span}(e_1, e_2, ...)$ there is a linear functional $$e_j^{\ast} : B \to \mathbb{C}$$ which, given a finite sum $\sum c_i e_i$, takes the value $c_j$, so we ought to have $$e_j^{\ast} \left( \sum \frac{e_n}{2^n} \right) = \frac{1}{2^j}.$$ Indeed there is such a linear functional, but it is not guaranteed to be continuous! So the last step fails. (When $H$ is a Hilbert space the inner product can be used to write down this functional and it is of course continuous in this case.)
H: Undefined natural logarithm? The natural logarithm is the logarithm to the base $e$, where $e$ is an irrational and transcendental constant. $$e=\lim_{n\to \infty}\left(1+\frac {1}{n}\right)^n.$$ $$\ln a=\log_{e} a.$$ I know that $\ln {(AB)}=\ln {(A}) + \ln {(B)}$ and $\ln {(A^B)}=B \ln {(A)}$. Is there any difference between $\ln {(AB)}$ and $\ln {(A\cdot B)}$ ? Is there any other way to solve $\ln {(A+B)}$ just like $\ln {(AB)}$ ? AI: There is no difference between $\ln (AB)$ and $\ln(A\cdot B)$. Just like there is no difference between "$2x$" and "$2\cdot x$". Multiplication is often denoted by juxtaposition. No, there is no way to "solve" or simplify $\ln(A+B)$ in a manner similar to that of the product.
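The two valid identities, and the absence of one for $\ln(A+B)$, can be illustrated numerically (a sketch; the sample values are arbitrary):

```python
import math

A, B = 3.7, 12.25
assert math.isclose(math.log(A * B), math.log(A) + math.log(B))
assert math.isclose(math.log(A ** B), B * math.log(A))

# There is no analogous identity for log(A + B):
print(math.log(A + B), math.log(A) + math.log(B))  # visibly different values
```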
H: response of unit step input in harmonically oscillating system As far as I've understood (or misunderstood), in the constant-coefficient second-order differential equation $$\frac{d^2y}{dt^2} + b \frac{dy}{dt} +cy = ef(t)$$ $b$, $c$ are constants, $f(t)$ is the input to the system, and $y$ is the response of the system. Let $$\frac{d^2y}{dt^2} +\omega_0^2y = u(t)k$$ such that $k$ is non zero be a system and $u(t)$ a unit step function. How do I physically visualize this system (or the input to it)? Isn't $u(t)$ like applying a constant force to the system? Will it not bring the system to a halt after a certain time? Like if we keep poking a mass-spring system with a constant force only in one direction? But the solution seems different. Please help me clear up this simple misconception. Thank you!! AI: Yes, after time $t = 0$, the step function $ku(t)$ is like applying a constant force $k$ to the system, but no, it will not bring the system to a halt. In fact, it merely shifts the equilibrium position of the system, but the system will continue to oscillate about the new equilibrium position just as before. Since we only care about $t \ge 0$, let's just assume the force is a constant $k$. Observe that if $\newcommand{\d}{\mathrm d}$ $$\frac{\d^2y}{\d t^2} + \omega^2y = k,$$ this is equivalent to $$\frac{\d^2y}{\d t^2} + \omega^2\left(y - \frac k{\omega^2}\right) = 0,$$ and if you let $\tilde y = y - k/\omega^2$, you get back the equation of the simple harmonic oscillator centered at $0$, $$\frac{\d^2\tilde y}{\d t^2} + \omega^2\tilde y = 0.$$ So the system with a constant force behaves exactly like the unforced system, only shifted by $k/\omega^2$. Perhaps you're imagining the constant force to be like holding the oscillator and pushing it to one side. But when you do that in real life, you're also opposing the relative motion of the oscillator with respect to your hand, and that is what damps out the motion of the system. 
A constant force is not like that; it continues to push in one direction, no matter whether the oscillator is above or below its new equilibrium, no matter whether it is moving towards or away from it. Or just think of a spring held up at one end, with a weight at the other end being pulled down by gravity. It's not the gravitational force that makes it eventually come to a stop, it's the friction in the spring itself.
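A small numerical illustration (a sketch, not part of the original answer): with initial conditions $y(0)=y'(0)=0$ the solution of $y''+\omega^2 y = k$ is $y(t) = \frac{k}{\omega^2}(1-\cos\omega t)$, which satisfies the equation and swings forever between $0$ and $2k/\omega^2$ around the shifted equilibrium $k/\omega^2$:

```python
import math

omega, k = 2.0, 4.0
eq = k / omega ** 2   # new equilibrium position k / omega^2

def y(t):
    """Solution of y'' + omega^2 y = k with y(0) = y'(0) = 0."""
    return eq * (1.0 - math.cos(omega * t))

# verify the ODE at a few times via a central finite difference
h = 1e-4
for t in [0.3, 1.7, 5.0, 42.0]:
    ypp = (y(t + h) - 2.0 * y(t) + y(t - h)) / h ** 2
    assert abs(ypp + omega ** 2 * y(t) - k) < 1e-4

# the motion never halts: it oscillates between 0 and 2*eq around eq
samples = [y(0.01 * i) for i in range(100_000)]
print(min(samples), max(samples), eq)
```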
H: Tensor operation on a vector space From the various definitions provided in the article https://en.wikipedia.org/wiki/Tensor, the tensor seems always to be defined, even in the more abstract forms, as a multilinear map, from a product of vector and dual spaces to the underlying field. However, in applied mathematics, one often comes across a tensor when used in the form that maps elements of vector and dual spaces to elements of vectors and dual spaces, like this, for example: $\theta^j=\mu_l^j dx^l$ No operation is defined in the article between a tensor and a dual vector, which gives a dual vector. But since other operations are defined on tensors, should the previous notation actually be read in two steps: a tensor product: $\mu_l^j dx^k$ followed by a contraction of the indices $k$ and $l$ AI: You seem to be confused by notational issues. With some luck I can deconfuse you, with less luck your confusion may increase ;-) Usually - in classical differential geometry - one would write something like $\mu^j_l dx^l\otimes \frac{\partial}{\partial x^j}$ to define locally a $(1,1)$ tensor or tensor field. The $\mu_l^j$ are just the coefficients, the $dx^l$ form a base of the cotangent space and the $\frac{\partial}{\partial x^j} $ a base of the tangent space. The tensor products $dx^k \otimes \frac{\partial}{\partial x^j}$ form a base of the space of $(1,1)$ tensors. The $\theta^j$ from your example are just coefficients of a vector field, the base vectors are omitted for some reason. If $v= v^j \frac{\partial}{\partial x^j}$ is, in this formalism, a vector field and $\omega = \omega_k dx^k$ a one-form, then you may look at the $(1,1)$ tensor field $v\otimes \omega$ with coefficients $ v^j\omega_k$. Written out more explicitly, the $(1,1)$ tensor field is $$v\otimes \omega= \omega_k v^j dx^k \otimes \frac{\partial}{\partial x^j}$$ This is just writing down a tensor field in a local base representation. 
Applying a contraction to that tensor field means kind of inserting the vector field $v^j\frac{\partial}{\partial x^j}$ into the one form $\omega_kdx^k$ resulting in the sum $v^j\omega_j$, since $dx^j(\frac{\partial}{\partial x^k})=\delta^j_k$, which is just a scalar. In a coordinate free notation you'd write $$ C(v\otimes \omega) = \omega(v)$$ ($C$ denoting contraction) that is, you apply the linear functional to the vector. A general $(1,1)$ tensor is, locally, written as $\mu^j_k\frac{\partial}{\partial x^j}\otimes dx^k$. 'Applying' this to a one-form $\sigma_l dx^l$ means forming the tensor product and contracting the upper index against the one-form: $$C\left(\sigma_l dx^l \otimes \mu^j_k\frac{\partial}{\partial x^j}\otimes dx^k\right) = \mu^j_k\sigma_l\, dx^l\!\left(\frac{\partial}{\partial x^j}\right) dx^k = \mu^j_k\sigma_j\, dx^k,$$ which is again a one-form.
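In components, the two-step reading (tensor product, then contraction) is just a sum over the paired index, i.e. a matrix-vector product. A toy sketch in two dimensions (the sample numbers are arbitrary):

```python
# A (1,1) tensor mu[j][k] (index j up, k down) applied to a one-form sigma:
# (result)_k = sum over j of mu[j][k] * sigma[j]  -- tensor product + contraction.
mu = [[1.0, 2.0],
      [3.0, 4.0]]        # mu[j][k]
sigma = [5.0, 7.0]       # sigma_j

result = [sum(mu[j][k] * sigma[j] for j in range(2)) for k in range(2)]
print(result)  # components of the resulting one-form
```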
H: Why does $\cos (\pi\cos (\pi \cos (\log (20+\pi)))) \approx -1$ I read on Wikipedia that $$\cos (\pi\cos (\pi \cos (\log (20+\pi)))) \approx -1$$ to a high degree of accuracy. Why is this true? Is this pure coincidence or is there some mathematical background? AI: It is a well known coincidence that $$e^{\pi}-\pi \approx 20$$ Using this, we find $$e^{\pi}-\pi \approx 20 \implies \pi\approx \log ( 20+\pi)$$ then $$-1 =\cos (\pi) \approx \cos(\log ( 20+\pi))$$ $\cos (-\pi)=-1$, so a closer approximation of $-1$ can be found with $$-1 =\cos(\pi\cos (\pi)) \approx \cos(\pi\cos(\log ( 20+\pi)))$$ and again $$-1 =\cos(\pi \cos(\pi\cos (\pi))) \approx \cos(\pi\cos(\pi\cos(\log ( 20+\pi))))$$ In fact, if $x_0 \approx -1$ and $x_n=\cos (\pi x_{n-1})$ then $$\lim_{n \to \infty}x_n=-1$$
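Both the underlying coincidence and the sharpening effect of each $\cos(\pi\,\cdot)$ wrap are easy to verify numerically (a sketch, not part of the original answer):

```python
import math

print(math.exp(math.pi) - math.pi)      # 19.9990999..., the near-integer coincidence

x = math.cos(math.log(20 + math.pi))    # already very close to cos(pi) = -1
x = math.cos(math.pi * x)               # each wrap with cos(pi * .) sharpens it
x = math.cos(math.pi * x)
print(x)                                # -1 to machine precision
```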
H: An irreducible polynomial of degree coprime to the degree of an extension is irreducible over this extension I'm having a hard time showing this: If $K$ is an extension of $\mathbb{Q}$ with degree $m$ and $f(x)$ an irreducible polynomial over the rationals with degree $n$, such that $\gcd(m, n)=1$, then $f(x)$ is irreducible over $K$. I have tried it by writing $f(x)=a(x)b(x)$ and then looking at the coefficients of those polynomials (some of them must belong to $K-\mathbb{Q}$ which could possibly result in a contradiction) to no success. I have no idea where to use the hypothesis of $m$ and $n$ being relatively prime. Any help would be appreciated. AI: Let $u$ be a root of $f(x)$. Then $[\mathbb{Q}(u):\mathbb{Q}] = n$. Now consider $K(u)$. We know that $[K(u):K]\leq n$, since the monic irreducible of $u$ over $K$ divides $f$. Since $$[K(u):\mathbb{Q}] = [K(u):K][K:\mathbb{Q}] = m[K(u):K]\leq mn$$ and $$[K(u):\mathbb{Q}] = [K(u):\mathbb{Q}(u)][\mathbb{Q}(u):\mathbb{Q}] = n[K(u):\mathbb{Q}(u)]$$ then $[K(u):\mathbb{Q}]$ is at most $nm$, is a multiple of $n$, and is a multiple of $m$. Since $\gcd(n,m)=1$, it follows that $[K(u):\mathbb{Q}]=nm$, so the degree $[K(u):K]$ is equal to $n$. What does that imply about the monic irreducible of $u$ over $K$ and about $f$?
H: For System dependent on normally distributed parameter, are deviations added or variations? Say, A and B are two normally distributed parameters with their variations being $\sigma^2_a$ and $\sigma^2_b$. Now for system C, which is linearly dependent on these parameters, is its $\sigma^2_c=\sigma^2_a+\sigma^2_b$, or $\sigma_c=\sigma_a+\sigma_b$. To me adding parameter deviations seems natural. But what is the actual behaviour (and why)? AI: If $A$ and $B$ are independent normally distributed random variables with variances $\sigma_a^2$ and $\sigma_b^2$ respectively, then $C=A+B$ is normally distributed with variance $\sigma_a^2+\sigma_b^2$ (and mean equal to the sum of the means). So $\sigma_c=\sqrt{\sigma_a^2+\sigma_b^2}$, and not $\sigma_a+\sigma_b$. The reason is that for independent random variables the variance of the sum is the sum of the variances.
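A quick simulation makes the distinction concrete (a sketch; $\sigma_a=3$, $\sigma_b=4$ are arbitrary sample values, chosen so that $\sqrt{\sigma_a^2+\sigma_b^2}=5$ while $\sigma_a+\sigma_b=7$):

```python
import random, statistics

random.seed(1)
N = 200_000
sigma_a, sigma_b = 3.0, 4.0
c = [random.gauss(0, sigma_a) + random.gauss(0, sigma_b) for _ in range(N)]

var_c = statistics.pvariance(c)
print(var_c)  # near sigma_a^2 + sigma_b^2 = 25, so sigma_c is near 5, not 7
```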
H: isomorphism or bijection? I have a little problem and confusion. I will write something that is maybe wrong. Let $\alpha$ and $\beta$ be two ordinals and $f$ a bijection between $\alpha$ and $\beta$. $f$ is not necessarily an isomorphism between $(\alpha,<)$ and $(\beta,<)$ (if it is then we have $\alpha=\beta$). But $f$ is an isomorphism between $(\alpha,\subseteq)$ and $(\beta,\subseteq)$ (each element of $\alpha$ is a subset of $\alpha$ ...). So a bijection between ordinals can be seen as an isomorphism between ordinals... Thanks. AI: Let's consider an explicit example. Take $\beta=\omega$, and $\alpha=\omega\cup\{\omega\}$. Define $f\colon\alpha\to\beta$ as $f(n)=n+1$ if $n\in\omega$, and $f(\omega)=0$. The direct image function $\overline{f}\colon\mathcal{P}(\alpha)\to\mathcal{P}(\beta)$ determined by $f$ from $\alpha$ to $\beta$ does not agree with $f$ itself: $f(\omega)=0$, but the direct image function has $\overline{f}(\{0,1,\ldots,\}) = \{1,2,3\ldots\}\neq 0$. In fact, the values of the direct image function are not elements of $\beta$: for instance, $\overline{f}(\{0\}) = \{1\}$, which is not an element of $\beta$ (since it is not an ordinal). So the direct image function does not give a bijection between $\alpha$ and $\beta$: it doesn't even map $\alpha$ to $\beta$; so the fact that $x\subseteq y$ implies $\overline{f}(x) \subseteq \overline{f}(y)$ is not really relevant here: you are not using the direct image function. You are using your original $f$ on the elements of $\alpha$ and $\beta$, it's just that you are looking at $\alpha$ as ordered via $\subseteq$ instead of ordered via $\in$. And if $x,y\in\alpha$ are such that $x\lt y$ but $f(x)\not\lt f(y)$, then you have $x\subseteq y$ but you still don't have $f(x)$ (the element of $\beta$) contained in $f(y)$.
H: Primes of the form $n\pm k$ Given some arbitrary natural number $n$, can we always find a $k$ such that $n+k$ and $n-k$ are both prime? Has there been any work on finding an upper bound for $k$? AI: Being able to find such a $k$ for any $n$ is equivalent to the Goldbach conjecture, since it would imply that any even number $2n$ is a sum of two primes, $n+k$ and $n-k$. So we don't yet know if there is always such a $k$, much less an upper bound for the smallest such $k$ (other than something trivial like $k\leq n-2$).
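The equivalence with Goldbach can be illustrated by searching for the smallest such $k$ over a small range of $n$ (a sketch; the helper names are illustrative only):

```python
def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def smallest_k(n):
    """Smallest k >= 0 with n - k and n + k both prime, or None if none exists."""
    for k in range(n - 1):           # k <= n - 2 keeps n - k >= 2
        if is_prime(n - k) and is_prime(n + k):
            return k
    return None

ks = {n: smallest_k(n) for n in range(2, 200)}
print(ks[100])  # e.g. for n = 100: 97 and 103 are both prime
```

Finding a $k$ for every $n$ here is exactly Goldbach for the even numbers $2n$ in this range, where the conjecture is long verified.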
H: Simple Laplace transform I am trying to find the Laplace transform of this function: $$4-4t+2t^2$$ What I am doing: $$\frac{4}{s}-\frac{4}{s^2}+\frac{4}{s^3}$$ $$\frac{4s^2-4s}{s^3}+\frac{4}{s^3}$$ $$\frac{4s^2-4s+4}{s^3}$$ But I am getting the wrong answer, can you please tell me what I am doing wrong? AI: Using the linearity of the Laplace transform, we have $$\mathcal{L}(4-4t+2t^2)=4\mathcal{L}(1)-4\mathcal{L}(t)+2\mathcal{L}(t^2)=\frac{4}{s}-\frac{4}{s^2}+\frac{4}{s^3}$$ The lowest common denominator is $s^3$, thus $$\frac{4}{s}-\frac{4}{s^2}+\frac{4}{s^3}=\frac{4s^2}{s^3}-\frac{4s}{s^3}+\frac{4}{s^3}=\frac{4s^2-4s+4}{s^3}=\frac{4(s^2-s+1)}{s^3}$$
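The result can be double-checked against the defining integral $\mathcal{L}\{f\}(s)=\int_0^\infty f(t)e^{-st}\,dt$ by numerical quadrature (a sketch; the truncation point and step count are arbitrary choices):

```python
import math

def f(t):
    return 4 - 4 * t + 2 * t ** 2

def laplace_numeric(s, T=60.0, steps=20_000):
    """Trapezoidal approximation of the integral of f(t) e^{-s t} over [0, T]."""
    h = T / steps
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for i in range(1, steps):
        t = i * h
        total += f(t) * math.exp(-s * t)
    return total * h

for s in [1.0, 2.0, 3.0]:
    exact = 4 / s - 4 / s ** 2 + 4 / s ** 3
    assert abs(laplace_numeric(s) - exact) < 1e-3, s
print("matches 4/s - 4/s^2 + 4/s^3")
```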
H: A trigonometric series Let $\alpha$ be a real number. I'm asked to discuss the convergence of the series $$ \sum_{k=1}^{\infty} \frac{\sin{(kx)}}{k^\alpha} $$ where $x \in [0,2\pi]$. Well, I show you what I've done: if $\alpha \le 0$ the series cannot converge (its general term does not converge to $0$ when $k \to +\infty$) unless $x=k\pi$ for $k=0,1,2$. In other words, if $\alpha \le 0$ there is pointwise convergence only in $x=0,\pi,2\pi$. if $\alpha \gt 1$, I can use the Weierstrass M-test to conclude that the series is uniformly convergent hence pointwise convergent for every $x \in [0,2\pi]$. Moreover the sum is a continuous function in $[0,2\pi]$. Would you please help me in studying what happens for $\alpha \in (0,1]$? Are there any useful criteria that I can use? Does the series converge? And what kind of convergence is there? In case of non uniform but pointwise convergence, is the limit function continuous? Thanks. AI: The main problem (at least for me) is to prove that the sum of the series $$ f_\alpha(x)=\sum\limits_{k=1}^\infty\frac{\sin kx}{k^\alpha},\quad x\in[0,2\pi] $$ is discontinuous for $\alpha\in(0,1]$. Lemma 1. For $\alpha\in(0,1]$ we have $$ \sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}\frac{\sin(xt)}{k^\alpha}dt= \int\limits_{1/2}^\infty\frac{\sin(xt)}{t^\alpha}dt+\varphi(x) $$ where $|\varphi(x)|\leq 2^{1-\alpha}$ for all $x\in[0,2\pi]$. Proof. It is enough to show that the difference between this sum and this integral is bounded by the stated constant. 
Now, we estimate: $$ \left|\sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}\frac{\sin(xt)}{k^\alpha}dt- \int\limits_{1/2}^\infty\frac{\sin(xt)}{t^\alpha}dt\right|= \left|\sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}\left(\frac{\sin(xt)}{k^\alpha}- \frac{\sin(xt)}{t^\alpha}\right)dt\right|\leq $$ $$ \sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}|\sin(xt)|\left|\frac{1}{k^\alpha}-\frac{1}{t^\alpha}\right|dt\leq \sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}\left|\frac{1}{k^\alpha}-\frac{1}{t^\alpha}\right|dt= $$ $$ \sum\limits_{k=1}^\infty\left(\int\limits_{k-1/2}^{k}\left(\frac{1}{t^\alpha}-\frac{1}{k^\alpha}\right)dt+\int\limits_{k}^{k+1/2}\left(\frac{1}{k^\alpha}-\frac{1}{t^\alpha}\right)dt\right)= \sum\limits_{k=1}^\infty\left(\int\limits_{k-1/2}^{k}\frac{1}{t^\alpha}dt-\int\limits_{k}^{k+1/2}\frac{1}{t^\alpha}dt\right)\leq $$ $$ \sum\limits_{k=1}^\infty\left(\int\limits_{k-1/2}^{k}\frac{1}{(k-1/2)^\alpha}dt-\int\limits_{k}^{k+1/2}\frac{1}{(k+1/2)^\alpha}dt\right)= \sum\limits_{k=1}^\infty\left(\frac{1}{2(k-1/2)^\alpha}-\frac{1}{2(k+1/2)^\alpha}\right)=2^{\alpha-1} $$ since the last sum telescopes. Lemma 2. For $\alpha\in(0,1]$ we have $$ \sum\limits_{k=1}^\infty\frac{\sin kx}{k^\alpha}= \frac{x^\alpha}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y^\alpha}dy+\frac{x\varphi(x)}{2\sin(x/2)} $$ where $|\varphi(x)|\leq 2^{\alpha-1}$ for all $x\in[0,2\pi]$. Proof.
Note that $$ \int\limits_{k-1/2}^{k+1/2}\sin(xt)dt= -\frac{1}{x}\cos(xt)\biggl|_{k-1/2}^{k+1/2}= \frac{2\sin(kx)\sin(x/2)}{x} $$ so, $$ \sin(kx)=\frac{x}{2\sin(x/2)}\int\limits_{k-1/2}^{k+1/2}\sin(xt)dt $$ Hence from lemma 1 we conclude $$ \sum\limits_{k=1}^\infty\frac{\sin kx}{k^\alpha}= \sum\limits_{k=1}^\infty\frac{x}{2k^\alpha\sin(x/2)}\int\limits_{k-1/2}^{k+1/2}\sin(xt)dt= \frac{x}{2\sin(x/2)}\sum\limits_{k=1}^\infty\int\limits_{k-1/2}^{k+1/2}\frac{\sin(xt)}{k^\alpha}dt= $$ $$ \frac{x}{2\sin(x/2)}\left(\int\limits_{1/2}^\infty\frac{\sin(xt)}{t^\alpha}dt+\varphi(x) \right) $$ Making the substitution $y=tx$ we get $$ \sum\limits_{k=1}^\infty\frac{\sin kx}{k^\alpha}= \frac{x}{2\sin(x/2)}\left(\frac{1}{x^{1-\alpha}}\int\limits_{x/2}^\infty\frac{\sin y}{y^\alpha}dy+\varphi(x) \right)= \frac{x^\alpha}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y^\alpha}dy+\frac{x\varphi(x)}{2\sin(x/2)} $$ Corollary 3. For $\alpha\in(0,1]$ the function $f_\alpha$ is discontinuous at $0$. Proof. Obviously $f_\alpha(0)=0$. Let $\alpha\in(0,1)$, then from the formula proved in lemma 2 we see that $$ \lim\limits_{x\to +0}f_\alpha(x)=\lim\limits_{x\to +0}\left(\frac{x^\alpha}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y^\alpha}dy+\frac{x\varphi(x)}{2\sin(x/2)}\right) $$ Since $\varphi$ is bounded, the second term is bounded, while the first tends to infinity. Hence the last limit is $\lim\limits_{x\to +0}f_\alpha(x)=+\infty$. If $\alpha=1$, then $|\varphi(x)|\leq 1$ and since $$ \lim\limits_{x\to+0}\frac{x}{2\sin(x/2)}=1, $$ then $$ \left|\frac{x\varphi(x)}{2\sin(x/2)}\right|<\frac{\pi}{3} $$ for some $\delta_1>0$ and all $x\in(0,\delta_1)$. Since $$ \lim\limits_{x\to+0}\frac{x}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y}dy= \int\limits_{0}^\infty\frac{\sin y}{y}dy=\frac{\pi}{2} $$ then $$ \frac{x}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y}dy>\frac{2\pi}{5} $$ for some $\delta_2>0$ and all $x\in(0,\delta_2)$.
Thus for all $x\in(0,\min(\delta_1,\delta_2))$ we see that $$ f_\alpha(x)=\frac{x^\alpha}{2\sin(x/2)}\int\limits_{x/2}^\infty\frac{\sin y}{y^\alpha}dy+\frac{x\varphi(x)}{2\sin(x/2)}>\frac{2\pi}{5}-\frac{\pi}{3}>0 $$ In both cases $\lim\limits_{x\to+0}f_\alpha(x)\neq 0$, hence $f_\alpha$ is discontinuous at $0$. Corollary 4. For $\alpha\in(0,1]$ the series $$ \sum\limits_{k=1}^\infty\frac{\sin kx}{k^\alpha} $$ does not converge uniformly on $[0,2\pi]$. Proof. Assume that this series converges uniformly. Since the series is a sum of continuous functions and converges uniformly, its sum $f_\alpha$ must be a continuous function on $[0,2\pi]$. This contradicts corollary 3, hence our series is not uniformly convergent on $[0,2\pi]$. Remark 5. Despite the above, this series converges uniformly on $[\delta,2\pi-\delta]$ for all $\delta\in(0,\pi]$. You can use the Dirichlet test to prove this.
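A quick numeric illustration of Corollary 3 (an independent sketch, using the classical Fourier identity $\sum_{k\ge1}\sin(kx)/k=(\pi-x)/2$ for $0<x<2\pi$, which is not proved above; the sample point $x=0.05$ and truncation are arbitrary): partial sums of $f_1$ at a small $x>0$ land near $\pi/2$, far from $f_1(0)=0$.

```python
import math

# Partial sums of f_1(x) = sum sin(kx)/k at a small positive x should be
# close to (pi - x)/2, exhibiting the jump discontinuity at 0.
def partial_sum(x, N):
    return sum(math.sin(k * x) / k for k in range(1, N + 1))

x = 0.05
approx = partial_sum(x, 200000)
limit_value = (math.pi - x) / 2
print(approx, limit_value)
```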
H: Theorem of Liouville Consider two entire functions with no zeroes and having a ratio equal to unity at infinity. Use Liouville’s Theorem to show that they are in fact the same function. My attempt Consider $h(z) = f(z)/g(z)$. First of all, $h$ is entire, since $f$ and $g$ are entire, and $g(z)$ is nonzero for all $z$ in $\mathbb{C}$. The fact that $\lim_{z→\infty} h(z) = \lim_{z→\infty} f(z)/g(z) = 1$ suggests that $h(z)$ is bounded as well. Why: Since $\lim_{z→\infty} f(z)/g(z) = 1$, there exists $N > 0$ such that $|f(z)/g(z) - 1| < 1$ for all $|z| > N$. (Note that I am taking $\epsilon = 1$ for concreteness.) Then, $|f(z)/g(z) - 1| \geq ||f(z)/g(z)| - 1|$ $\implies ||f(z)/g(z)| - 1| < 1$ $\implies |f(z)/g(z)| - 1 < 1$ or $1 - |f(z)/g(z)| < 1$ $\implies 0 < |f(z)/g(z)| < 2$. That is, $|f(z)/g(z)|$ is bounded above by $2$ for $|z| > N$. Moreover, we know that $|f(z)/g(z)|$ has a maximum $M$ on $|z| \leq N$ by maximum modulus principle (or simply from $|z| \leq N$ being compact). Hence, $|f(z)/g(z)|$ is bounded above by $\max \{2, M\}$ for all $z$ in $\mathbb{C}$. Hence, $h(z)$ is constant by Liouville's Theorem, i.e. $h(z) = f(z)/g(z) = c$ for some constant $c$. Since this is true for all $z$ in $\mathbb{C}$, taking the limit as $z\to \infty$ yields $c = 1$. Hence $f(z) = g(z)$ for all $z$ in $\mathbb{C}$. Is my work correct? AI: Yes, your argument is correct; in brief: Since both are entire functions without zeros, $h(z) = \frac{f(z)}{g(z)}$ is an entire function. $\lim_{z \rightarrow \infty} \frac{f(z)}{g(z)} = 1$, where $\infty$ can be interpreted as the point at infinity, implies that $h$ is bounded (by the argument you gave: bounded near infinity by the limit, bounded on the remaining compact set by continuity). A bounded entire function is a constant $c$. $h(z) = c$ implies that $f(z) = c g(z)$. However $\lim_{z \rightarrow \infty} h(z) = 1$ implies that $c = 1$.
H: Evaluating $ \lim_{n\rightarrow\infty} n \int_{0}^{1} \frac{{x}^{n-2}}{{x}^{2n}+x^n+1} \mbox {d}x$ Evaluating $$L = \lim_{n\rightarrow\infty} n \int_{0}^{1} \frac{{x}^{n-2}}{{x}^{2n}+x^n+1} \mbox {d}x$$ AI: Marvis showed that $$ I_n = \int_0^1 \dfrac{nx^{n-2}}{x^{2n} + x^n + 1} dx = \int_0^1 \dfrac{dt}{t^{1/n}(t^2 + t + 1)}. $$ At this point one can use monotone convergence as Davide commented, but since Chris is interested in a more direct approach, here is one. We have $$ I_n \geq \int_0^1 \dfrac{dt}{t^2 + t + 1}=\frac{\pi}{3\sqrt3}, $$ and for $n>1$ and for any small $\delta>0$ $$ I_n\leq \int_0^\delta \dfrac{dt}{t^{1/n}} + \int_\delta^1 \dfrac{dt}{\delta^{1/n}(t^2 + t + 1)}=\frac{\delta^{1-1/n}}{1-1/n}+\frac1{\delta^{1/n}}\Big(\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}\Big). $$ An inspection of the limit as $n\to\infty$ of the right hand side shows that $$ I_n\leq \delta+\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}+E(\delta,n), $$ where $E(\delta,n)\to0$ as $n\to\infty$ for any fixed $\delta>0$. Since $\arctan(z)$ is continuous at $z=\frac1{\sqrt3}$ with $\arctan(\frac1{\sqrt3})=\frac\pi6$, for any given $\varepsilon>0$, we can choose $\delta>0$ so small that $$ \delta+\frac{2\pi}{3\sqrt3}-\frac{2\arctan(\frac{2\delta+1}{\sqrt3})}{\sqrt3}<\frac{\pi}{3\sqrt3}+\frac\varepsilon2. $$ For sufficiently large $n$, we can also ensure $E(\delta,n)<\frac\varepsilon2$, implying that there is a threshold value $N$ such that $$ \frac{\pi}{3\sqrt3}\leq I_n<\frac{\pi}{3\sqrt3}+\varepsilon, $$ for all $n>N$.
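Using the substituted form of $I_n$ quoted at the start of the answer, here is a numerical sanity check (a sketch only; $n=500$ and the midpoint-rule resolution are arbitrary choices) that $I_n$ approaches $\pi/(3\sqrt3)$:

```python
import math

# I_n = \int_0^1 dt / (t^{1/n} (t^2 + t + 1)); the midpoint rule sidesteps
# the mild t^{-1/n} singularity at t = 0.
def I(n, m=200000):
    h = 1.0 / m
    total = 0.0
    for k in range(m):
        t = (k + 0.5) * h
        total += 1.0 / (t ** (1.0 / n) * (t * t + t + 1.0))
    return total * h

limit_value = math.pi / (3 * math.sqrt(3))
approx = I(500)
print(approx, limit_value)
```

Consistent with the lower bound in the answer, the approximation sits slightly above $\pi/(3\sqrt3)\approx 0.6046$.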
H: Laplace transform of multiplication of two terms I have the following expression to get its Laplace transform: $$e^{2t}(3t-3t^2)$$ Is it ok to just calculate the transform of each term then multiply the results? I calculated the expression above like this but it is different than the answer in my book: $${\frac{1}{s-2}}*\frac{3}{s^2}-\frac{6}{s^3}$$ $$\frac{1}{s-2}*\frac{3s^3-6s^2}{s^5}$$ $$\frac{3s^3-6s^2}{(s-2)s^5}$$ AI: We have $$e^{2t}(3t-3t^2)=3e^{2t}(t-t^2)$$ $$\mathcal{L}(t-t^2)=\mathcal{L}(t)-\mathcal{L}(t^2)=\frac{1}{s^2}-\frac{2}{s^3}$$ You cannot simply multiply the transforms as you wanted (try it on $t^2=t \cdot t$ as a counter example). However, using David Mitra's shift rule, we have $$3\mathcal{L}(e^{2t}(t-t^2))=3\left(\frac{1}{(s-2)^2}-\frac{2}{(s-2)^3}\right)=\frac{3(s-4)}{(s-2)^3}$$
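Again a hedged numerical check (not part of the answer; $s=3$ and the quadrature parameters are ad hoc, and $s>2$ is needed for the integral to converge): the Laplace integral of $e^{2t}(3t-3t^2)$ should match $3(s-4)/(s-2)^3$.

```python
import math

def laplace_numeric(f, s, T=60.0, n=200000):
    # Trapezoidal approximation of \int_0^T e^{-s t} f(t) dt; valid here
    # because e^{(2 - s) t} decays for s > 2.
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

s = 3.0
numeric = laplace_numeric(lambda t: math.exp(2 * t) * (3 * t - 3 * t * t), s)
closed_form = 3 * (s - 4) / (s - 2) ** 3   # both should be close to -3
print(numeric, closed_form)
```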
H: Recommendations for probability books i do IT work, and the "it" thing these days is to throw the occasional probability question out there. The last time i stumbled on this, i'd just sat the GMAT and had probability somewhat down... still, it is the one region in maths that has always left me the most confounded. I have not stumbled across any killer probability books either - but i am sure they exist! Or lecture notes/something online? Any suggestions/ideas? UPDATE: i did hons maths pricing black-scholes etc back in uni (10 years ago), so i'm not shabby (but also def not brilliant) at maths. So that's me. As for the questions in particular, you know, a good book that started at the beginning (simple coin toss stuff) and then rocketed off to at least P implies A, with a thorough set of questions and answers would be good. Does that help? AI: The best textbook I know on probability is the wonderful classic from which I learned the subject: Introduction To Probability Theory by Hoel, Port and Stone. It is concise and rigorous, covers an enormous amount of ground in basic probability, has outstanding exercises, and requires only a good grounding in multivariable calculus and some linear algebra. Make sure you get the original 1971 hardcover; it's a bit pricey but completely worth it. It comes with the downside that there's no solutions manual, but I might be able to help you with that if you email me. I also found the book by DeGroot and Schervish very readable and complete, particularly for mathematical statistics. It also comes with a fairly complete solutions manual. Those are my recommendations, but be leery: more than half the introductory probability textbooks today aren't worth the paper they're printed on, and most of them are ridiculously expensive. Both of these are very good and should be helpful to you!
H: Explicit expression for eigenpairs of Laplace-Beltrami operator In $R^n$, the Laplace-Beltrami operator is just the Laplacian, and its eigenstructure is well known. There are also explicit expressions for the eigenvalues/eigenvectors of the Laplace-Beltrami operator on the sphere. Question: Are there any other nontrivial surfaces for which explicit expressions for the eigenvalues/eigenvectors of the Laplace-Beltrami operator have been worked out ? I was unable to even find anything for an ellipsoid. I also wanted to emphasize that I'm looking for closed form expressions. AI: The answer is nice for flat tori in every dimension. Let's write such a torus as $V/\Gamma$ where $V$ is a finite-dimensional real vector space of dimension $n$ with inner product $\langle \cdot, \cdot \rangle$ and $\Gamma$ is a lattice (a discrete subgroup isomorphic to $\mathbb{Z}^n$ which spans $V$). Any twice-differentiable eigenfunction $f : V/\Gamma \to \mathbb{C}$ of the Laplacian is in particular a bounded eigenfunction of the Laplacian on $V$, so we can take it to have the form $$f_w(v) = e^{2 \pi i \langle w, v \rangle}$$ for some $w \in V$ (for reasons to be described later). We also need to impose the constraint that $f_w$ is invariant under $\Gamma$, hence that $$e^{2\pi i \langle w, v \rangle} = e^{2 \pi i \langle w, v + g \rangle}$$ for every $g \in \Gamma$. This condition is satisfied if and only if $w$ belongs to the dual lattice $\Gamma^{\vee}$, which consists of all vectors $w$ such that $\langle w, g \rangle \in \mathbb{Z}$ for all $g \in \Gamma$. Moreover, $$\Delta f_w = - 4 \pi^2 \| w \|^2$$ so the eigenvalues of the Laplacian on $V/\Gamma$ are just $- 4\pi^2$ times the squares of the lengths of the vectors in $\Gamma^{\vee}$.
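To make the eigenvalue claim concrete, here is a small finite-difference sketch for the square torus $\mathbb{R}^2/\mathbb{Z}^2$ only (so $\Gamma=\mathbb{Z}^2$ and the dual lattice is again $\mathbb{Z}^2$); the dual vector $w$, base point $v$, and step size are arbitrary choices. The function $f_w(v)=e^{2\pi i\langle w,v\rangle}$ should satisfy $\Delta f_w = -4\pi^2\|w\|^2 f_w$.

```python
import cmath
import math

w = (2.0, 1.0)                      # a vector in the dual lattice Z^2
v = (0.3, 0.7)
h = 1e-4                            # finite-difference step

def f(x, y):
    return cmath.exp(2j * math.pi * (w[0] * x + w[1] * y))

# Standard 5-point discrete Laplacian applied to f at v.
lap = (f(v[0] + h, v[1]) + f(v[0] - h, v[1])
       + f(v[0], v[1] + h) + f(v[0], v[1] - h)
       - 4 * f(v[0], v[1])) / h ** 2
eigenvalue = -4 * math.pi ** 2 * (w[0] ** 2 + w[1] ** 2)
ratio = lap / f(v[0], v[1])
print(ratio.real, eigenvalue)
```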
H: Coin Toss Probability Question (Feller) I'm working out of Feller's "Introduction to Probability and its Application (Vol I.)" textbook and I'm stuck on a coin toss problem. I'll list the full problem and show where I'm having trouble. A coin is tossed until for the first time the same result appears twice in succession. To every possible outcome requiring $n$ tosses attribute probability $1/2^{n-1}$. Describe the sample space. Find the probability of the following events: a.) the experiment ends before the sixth toss, b.) an even number of tosses is required. Alright so I'm not having any trouble describing the sample space and completing part a. This first part was solved by creating a possibility tree and adding up the probabilities (answer: 15/16). However, I'm stuck on part b and I don't understand how the $1/2^{n-1}$ given in the problem is to be interpreted because if you toss the coin twice it makes it seem like HH, and TT each have a probability of 1/2 which is not the case. The sample space of two tosses would be {HH, HT, TH, TT} and each would have a probability of 1/4 and following this logic I arrived at 15/16 so I believe this is the correct thinking which makes the problem even more confusing. The answer to part b is 2/3 so I'm not sure if that will help. Thanks for any help. AI: I think you have interpreted the question slightly incorrectly. The probability that the game ends immediately after the $n^{th}$ toss is $\dfrac1{2^{n-1}}$, and this is the probability that has been given in the problem. The sample space is $$\Omega = \{\underbrace{AA}_{1/2},\underbrace{A\bar{A}\bar{A}}_{1/4},\underbrace{A\bar{A}AA}_{1/8},\underbrace{A\bar{A}A\bar{A}\bar{A}}_{1/16},\underbrace{A\bar{A}A\bar{A}AA}_{1/32},\ldots\}$$ where $\bar{A}$ denotes the outcome which is not $A$.
Hence, the probability that the experiment ends before the $6^{th}$ toss is $$\dfrac12 + \dfrac1{2^2} + \dfrac1{2^3} + \dfrac1{2^4} = \dfrac{15}{16}$$ For the second part, the probability that the game ends after an even number of tosses is \begin{align} \sum_{n=2,4,6,\ldots} \dfrac1{2^{n-1}} & = \dfrac1{2} + \dfrac1{2^3} + \dfrac1{2^5} + \dfrac1{2^7} + \cdots = \dfrac12 \left( 1 + \dfrac14 + \dfrac1{4^2} + \dfrac1{4^3} + \cdots\right) = \dfrac12 \dfrac1{\left(1- \dfrac14 \right)}\\ & = \dfrac12 \times \dfrac1{3/4} = \dfrac4{2 \times 3} = \dfrac23. \end{align}
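Both numbers can be reproduced exactly with rational arithmetic (a small sketch; the game cannot end at toss 1, so the end-time distribution starts at $n=2$):

```python
from fractions import Fraction

# P(game ends exactly at toss n) = 2^{-(n-1)} for n >= 2.
p_end = lambda n: Fraction(1, 2 ** (n - 1))

# (a) the experiment ends before the sixth toss:
before_sixth = sum(p_end(n) for n in range(2, 6))

# (b) even number of tosses: geometric series, first term 1/2, ratio 1/4.
even_tosses = Fraction(1, 2) / (1 - Fraction(1, 4))

print(before_sixth, even_tosses)
```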
H: Partial derivative of trace of an inverse matrix I have the following vector function $f(\mathbf{x})=\operatorname{Tr}[(\mathbf{A}+\operatorname{diag}(\mathbf{x}))^{-1}]$ where $\operatorname{diag}(\mathbf{x})$ is the diagonal matrix with values from $n\times 1$ vector $\mathbf{x}$ on the diagonal, and $\mathbf{A}$ is an $n\times n$ matrix (assume that $\mathbf{A}+\operatorname{diag}(\mathbf{x})$ is invertible). I know that one can express the trace as the sum of quadratic forms involving orthonormal basis vectors $\mathbf{e}_i$, thus we can also write $f(\mathbf{x})=\sum_{i=1}^n\mathbf{e}_i^T(\mathbf{A}+\operatorname{diag}(\mathbf{x}))^{-1}\mathbf{e}_i$. I am interested in $\frac{\partial f}{\partial x_i}$. Is there a way to express it in terms of $\mathbf{A}$, $\mathbf{x}$ and $\mathbf{e}_i$? AI: Here is what you can try: Using the definition of partial derivative, consider $f(x+t e_i)-f(x)$ for some $t \in \mathbb{R}$. We have $$ f(x+t e_i) = \text{Tr}((A + \text{diag}(x) + t e_i e_i^T)^{-1}).$$ You can now apply matrix inversion lemma: http://en.wikipedia.org/wiki/Woodbury_matrix_identity You should be able to isolate $f(x)$ in the expansion and it should not be hard to get the desired partial derivative from the leftover. EDIT: Let me elaborate some more. Let $\Gamma = A + \text{diag}(x)$. Apply Woodbury identity with $A = \Gamma$, $U = t e_i$, $C = 1$ (the scalar 1) and $V = e_i^T$. You get $$ f(x + te_i) = \text{Tr} \Big( \Gamma^{-1} - t \frac{\Gamma^{-1} e_i e_i^T \Gamma^{-1}}{1 + t e_i^T \Gamma^{-1} e_i} \Big). $$ Now, apply the trace to obtain (using invariance of trace under cyclic permutation), $$ f(x + te_i) = f(x) + \frac{-t \,e_i^T \Gamma^{-2} e_i}{1+ t \,e_i^T \Gamma^{-1} e_i}. $$ Hence, $$ \frac{1}{t} [ f(x + te_i) - f(x) ] \to - e_i^T \Gamma^{-2} e_i $$ as $t \to 0$, which is the desired partial derivative (if I haven't made any mistake.) EDIT2: Let me also add this for a general problem. 
You can avoid Woodbury identity and instead use von Neumann expansion of $(I-B)^{-1} = I + B +B^2 + \dots$ for $\|B\| < 1$. So, in this problem for $t$ small enough, we have (using $(CD)^{-1}= D^{-1} C^{-1}$) \begin{align*} f(x + te_i) &= \text{Tr} [( I + t \Gamma^{-1} e_i e_i^T)^{-1} \Gamma^{-1} ]\\ &=\text{Tr} \big[ \big\{ I - t \Gamma^{-1} e_i e_i^T + o(t)\big\} \Gamma^{-1} \big] \\ &= \text{Tr} \big[ \Gamma^{-1} - t \Gamma^{-1} e_i e_i^T \Gamma^{-1} + o(t)\big] \\ &= f(x) - t (e_i^T \Gamma^{-2} e_i) + o(t) \end{align*} which is the desired result. This approach seems more general and more straightforward. It also produces the entire Taylor expansion of the function. (The previous one using Woodbury can also give you this, as you end up with a function of one variable $t$ as the leftover which can be expanded using usual Taylor series.)
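The derived formula is easy to spot-check numerically (a sketch on a hypothetical $2\times 2$ instance; the matrix $A$, the point $x$, and the finite-difference step are arbitrary choices, with the inverse hand-coded to stay stdlib-only):

```python
# Finite-difference check of d f / d x_i = - e_i^T Gamma^{-2} e_i
# for f(x) = Tr((A + diag(x))^{-1}) on a 2x2 example.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def f(A, x):
    G = [[A[0][0] + x[0], A[0][1]], [A[1][0], A[1][1] + x[1]]]
    Ginv = inv2(G)
    return Ginv[0][0] + Ginv[1][1]           # trace of Gamma^{-1}

A = [[3.0, 1.0], [2.0, 5.0]]
x = [0.5, -0.2]
i = 0                                         # differentiate in x_0
t = 1e-6
fd = (f(A, [x[0] + t, x[1]]) - f(A, x)) / t   # forward difference

G = [[A[0][0] + x[0], A[0][1]], [A[1][0], A[1][1] + x[1]]]
Ginv2 = matmul(inv2(G), inv2(G))
formula = -Ginv2[i][i]                        # - e_i^T Gamma^{-2} e_i
print(fd, formula)
```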
H: Help Understanding Proof of Replacement Theorem? Sorry if this is a trivial question. The book is Linear Algebra Done Right by Axler, page 25-26. Theorem: In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors. Proof: Suppose that $(u_1 ,\ldots, u_m)$ is linearly independent in $V$ and that $(w_1,\ldots ,w_n)$ spans $V$. We need to prove that $m \leq n$. We do so through the multistep process described below; note that in each step we add one of the $u$'s and remove one of the $w$'s. Step 1: The list $(w_1,\ldots, w_n)$ spans $V$, and thus adjoining any vector to it produces a linearly dependent list. In particular, the list $(u_1,w_1, \ldots,w_n)$ is linearly dependent. Question: Why is $(u_1,w_1, \ldots,w_n)$ linearly dependent? AI: Since $\{w_1,\ldots,w_n\}$ spans $V$, and $u_1\in V$, there exist $a_i$ such that $u_1=a_1w_1+\cdots+a_nw_n$. So $(-1)u_1+a_1w_1+\cdots+a_nw_n=0$ and therefore the adjoined set is linearly dependent.
H: Orientation and simplicial homology I'm reading Chapter 2 of Hatcher's Algebraic Topology, and I just can't figure out the computations of the boundary homomorphism for the examples provided. To provide some context, I reproduced the figure for the torus from the book below: As I understand it, to compute $\partial U$ we follow the faces (which are edges) counter-clockwise, negating an edge if the oriented arrow is "facing us." But starting in the top right corner and working around $U$ results in $\partial U = (-1)^0 (-b) + (-1)^1 (-a) + (-1)^2 c = a - b + c$, which contrasts with the book's result of $\partial U = a + b - c$. I seem to be making some critically flawed assumptions. What am I not understanding? AI: I don't understand your description of the boundary map, or your explicit calculation. Here is how the calculation goes: Imagine that you are standing in $U$, making a counterclockwise pivot, and looking at the boundary (in the naive sense) as you do so. Let's begin facing the top right corner. As we turn, our field of vision sweeps out $b$, but in the opposite direction to its arrow (we sweep out $b$ from right to left, while the arrow points from left to right), then $a$, again in the opposite direction to its arrow, and finally $c$, in the same direction as its arrow. So $\partial U = -b - a + c$. (If the text instead writes it as $a + b - c$, it must use the opposite orientation on $U$, i.e. a clockwise, rather than counterclockwise, orientation.) The same procedure applied in $L$, starting by facing the lower right corner, yields $\partial L = a - c + b$, which is $- \partial U$, as one would expect. (When you glue the $U$ and $L$ into a torus, the boundaries get glued together, and so "cancel" one another.)
H: If $K = \mathbb{F}_p(\alpha)$ where $\alpha^n \in \mathbb{F}_p$ and $n$ is the minimal such $n$. Does this imply that $[K : \mathbb{F}_p] = n$? If $K = \mathbb{F}_p(\alpha)$ where $\alpha^n \in \mathbb{F}_p$ and $n$ is the minimal such $n$. Does this imply that $[K : \mathbb{F}_p] = n$? If not, is there a condition on $\alpha$ where this is the case? AI: No, it does not imply that $[K:\mathbb{F}_p]=n$. Let $[K:\mathbb{F}_p]=d$. We can choose an $\alpha\in K$ such that $\alpha$ generates the multiplicative group $K^\times$, so that the smallest $s$ such that $\alpha^s=1$ is $s=p^d-1$, and therefore the smallest $n$ such that $\alpha^n\in\mathbb{F}_p$ has to be $n\geq \frac{p^d-1}{p-1}$, but in general we have that $$\frac{p^d-1}{p-1}> d$$
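A concrete instance of the answer with $p=2$, $d=2$ can be checked by hand-rolled arithmetic in $K=\mathbb{F}_4=\mathbb{F}_2[t]/(t^2+t+1)$ (a sketch; the pair encoding is just one convenient representation):

```python
# Represent a + b*alpha as the pair (a, b) with a, b in {0, 1}, where
# alpha is the class of t, so alpha^2 = alpha + 1 over F_2.
def mul(u, v):
    a, b = u
    c, d = v
    # (a + b*alpha)(c + d*alpha) = (ac + bd) + (ad + bc + bd)*alpha
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

alpha = (0, 1)
powers = []
p = (1, 0)                   # alpha^0 = 1
for _ in range(3):
    p = mul(p, alpha)
    powers.append(p)         # alpha^1, alpha^2, alpha^3

in_F2 = [b == 0 for (a, b) in powers]   # in F_2 iff the alpha-part vanishes
print(powers, in_F2)
```

Here $\alpha$ generates $K^\times$, the minimal $n$ with $\alpha^n\in\mathbb{F}_2$ is $n=3=\frac{2^2-1}{2-1}$, yet $[K:\mathbb{F}_2]=2$, so the answer's inequality is sharp in this case.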
H: Evaluating $\int \frac{x+4}{x^2 + 2x + 5}dx$ I have not encountered a problem like this before. $$\int \frac{x+4}{x^2 + 2x + 5}dx$$ I do not know how to factor the bottom so I am not sure what to do. AI: Note that $(x^2+2x+5)' = 2x+2$. So we can break up the integral into two integrals, one that can be solved by substitution, and one that needs a bit more work: $$\int \frac{x+4}{x^2+2x+5}\,dx = \int\frac{x+1+3}{x^2+2x+5}\,dx = \int\frac{x+1}{x^2+2x+5}\,dx + \int\frac{3}{x^2+2x+5}\,dx.$$ The first integral can be done with the substitution $u=x^2+2x+5$; then $du = (2x+2)\,dx = 2(x+1)\,dx$, so $$\begin{align*} \int\frac{x+1}{x^2+2x+5}\,dx &= \int\frac{\frac{1}{2}\,du}{u}\\ &= \frac{1}{2}\int\frac{du}{u}\\ &= \frac{1}{2}\ln |u| + C\\ &= \frac{1}{2}\ln|x^2+2x+5| + C\\ &= \frac{1}{2}\ln(x^2+2x+5) + C, \end{align*}$$ with the last equality because $x^2+2x+5$ is always positive. The second integral can be done by first completing the square, then doing a change of variable to get an integral of $\frac{1}{u^2+1}$, which can be solved directly. We have $x^2+2x+5 = (x^2+2x+1)+4 = (x+1)^2+4$. Letting $w=x+1$, we get $$w^2 + 4 = 4\left(\frac{w^2}{4} + 1\right) = 4\left(\left(\frac{w}{2}\right)^2 +1\right).$$ Finally letting $u=\frac{w}{2}$, we get: $$\begin{align*} \int\frac{3}{x^2+2x+5}\,dx &= 3\int\frac{1}{x^2+2x+5}\,dx\\ &= 3\int\frac{1}{(x+1)^2+4}\,dx\\ &= 3\int\frac{1}{w^2+4}\,dw &\text{(letting }w=x+1\text{)}\\ &= 3\int\frac{1}{4((\frac{w}{2})^2+1)}\,dw\\ &= \frac{3}{4}\int\frac{1}{(\frac{w}{2})^2+1}\,dw\\ &= \frac{3}{4}\int \frac{2}{u^2+1}\,du &\text{(letting }u=\frac{w}{2}\text{)}\\ &= \frac{3}{2}\int\frac{1}{u^2+1}\,du\\ &= \frac{3}{2}\arctan(u) + C\\ &= \frac{3}{2}\arctan\left(\frac{w}{2}\right) + C\\ &= \frac{3}{2}\arctan\left(\frac{1}{2}(x+1)\right) + C\\ &= \frac{3}{2}\arctan\left(\frac{1}{2}x + \frac{1}{2}\right) + C. 
\end{align*}$$ Finally, putting it all together: $$\begin{align*} \int\frac{x+4}{x^2+2x+5}\,dx &= \int\frac{x+1}{x^2+2x+5}\,dx + 3\int\frac{1}{x^2+2x+5}\,dx\\ &= \frac{1}{2}\ln(x^2+2x+5) + \frac{3}{2}\arctan\left(\frac{1}{2}x + \frac{1}{2}\right) + C. \end{align*}$$ Since that was a bit complicated, we can double check by differentiating our answer: $$\begin{align*} &\frac{d}{dx}\left(\frac{1}{2}\ln(x^2+2x+5) + \frac{3}{2}\arctan\left(\frac{1}{2}x + \frac{1}{2}\right)\right)\\ &= \frac{1}{2}\left(\frac{(x^2+2x+5)'}{x^2+2x+5}\right) + \frac{3}{2}\left(\frac{1}{(\frac{1}{2}x+\frac{1}{2})^2 + 1}\right)\left(\frac{1}{2}x + \frac{1}{2}\right)'\\ &= \frac{2x+2}{2(x^2+2x+5)} + \frac{3}{2}\left(\frac{1}{\frac{1}{4}x^2 + \frac{1}{2}x + \frac{1}{4} + 1}\right)\left(\frac{1}{2}\right)\\ &= \frac{x+1}{x^2+2x+5} + \frac{3}{4}\left(\frac{1}{\frac{1}{4}(x^2 + 2x + 5)}\right)\\ &= \frac{x+1}{x^2+2x+5} + \frac{3}{x^2+2x+5}\\ &= \frac{x+4}{x^2+2x+5}. \end{align*}$$ Note. This is a standard method for solving an integral that has an irreducible quadratic in the denominator and a linear term in the numerator. Such fractions occur often when doing integrals of rational functions, since they may be summands that show up in the partial fraction decomposition. It is imperative to be familiar with the two parts: (i) the basic algebra to break up the integral into two, one of which can be done by simple substitution; and (ii) the technique of completing the square and doing appropriate change of variables to solve an integral of the reciprocal of an irreducible quadratic.
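Beyond differentiating, one can also check the antiderivative numerically (a sketch; the interval $[0,1]$ and Simpson resolution are arbitrary choices): the definite integral computed by quadrature should match $F(1)-F(0)$ for the antiderivative found above.

```python
import math

def F(x):
    # The antiderivative obtained above (constant of integration omitted).
    return 0.5 * math.log(x * x + 2 * x + 5) + 1.5 * math.atan(0.5 * x + 0.5)

def integrand(x):
    return (x + 4) / (x * x + 2 * x + 5)

# Composite Simpson's rule on [0, 1] as an independent check.
n = 1000                               # even number of subintervals
h = 1.0 / n
s = integrand(0.0) + integrand(1.0)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(k * h)
numeric = s * h / 3

print(numeric, F(1.0) - F(0.0))
```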
H: Cute Determinant Question I stumbled across the following problem and found it cute. Problem: We are given that $19$ divides $23028$, $31882$, $86469$, $6327$, and $61902$. Show that $19$ divides the following determinant: $$\left| \begin{matrix} 2 & 3&0&2&8 \\ 3 & 1&8&8&2\\ 8&6&4&6&9\\ 0&6&3&2&7\\ 6&1&9&0&2 \end{matrix}\right|$$ AI: Multiply the first column by $10^4$, the second by $10^3$, third by $10^2$ and fourth by $10$ - this will scale the value of the determinant by $10^{4+3+2+1}=10^{10}$, which is coprime to $19$. Now add the last four columns to the first one - this will not change the value of the determinant. Finally notice the first column now reads $23028, 31882, 86469, 6327$, and $61902$: each is a multiple of $19$ so we can factor a nineteen cleanly out of the determinant.
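For the skeptical, the conclusion can be verified directly (a sketch; cofactor expansion is fine at this size):

```python
# The 5x5 matrix from the problem and the five given multiples of 19.
M = [[2, 3, 0, 2, 8],
     [3, 1, 8, 8, 2],
     [8, 6, 4, 6, 9],
     [0, 6, 3, 2, 7],
     [6, 1, 9, 0, 2]]

def det(m):
    # Exact integer determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

d = det(M)
rows_as_numbers = [23028, 31882, 86469, 6327, 61902]
print(d, d % 19, [r % 19 for r in rows_as_numbers])
```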
H: Existence of the Pfaffian? Consider a square skew-symmetric $n\times n$ matrix $A$. We know that $\det(A)=\det(A^T)=(-1)^n\det(A)$, so if $n$ is odd, the determinant vanishes. If $n$ is even, my book claims that the determinant is the square of a polynomial function of the entries, and Wikipedia confirms this. The polynomial in question is called the Pfaffian. I was wondering if there was an easy (clean, conceptual) way to show that this is the case, without mucking around with the symmetric group. AI: Here is an elaboration of Qiaochu's comment above: A $2n\times 2n$ matrix $A$ induces a pairing (say on column vectors), namely $$\langle v,w \rangle := v^T A w.$$ Thus we can think of $A$ as being an element of $(V\otimes V)^*$ (which is the space of all bilinear pairings on $V$), where $V$ is the space of $2n$-dimensional column vectors. If $A$ is skew-symmetric, then this pairing is anti-symmetric, and so we can actually regard $A$ as an element of $\wedge^2 V^*$. We can then take the $n$th exterior power of $A$, so as to obtain an element of $\wedge^{2n} V^*$. This latter space is $1$-dimensional, and so if we fix some appropriately normalized basis for it, the $n$th exterior power of $A$ can be thought of just as a number. This is the Pfaffian of $A$ (provided we chose the right basis for $\wedge^{2n} V^*$). How does this compare to the usual description of determinants via exterior powers? For this, we regard $A$ as an endomorphism $V \to V$, which induces an endomorphism $\wedge^{2n} V \to \wedge^{2n} V$, which is a scalar (being an endomorphism of a $1$-dimensional space); this is $\det A$. So now we see where the formula $\det(A) =$ Pf$(A)^2$ comes from: computing the determinant involves taking a $2n$th exterior power of $A$, while computing the Pfaffian involves only taking an $n$th exterior power (because we use the skew-symmetry of $A$ to get an exterior square "for free", so to speak). Sorting out the details of all this should be a fun exercise.
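In the smallest nontrivial even case the identity can be checked concretely (a sketch; the entries $1,\ldots,6$ are arbitrary, and the $4\times 4$ closed form $\operatorname{Pf}(A)=a_{12}a_{34}-a_{13}a_{24}+a_{14}a_{23}$ is the classical one, not derived above):

```python
# Check det(A) = Pf(A)^2 on a 4x4 skew-symmetric example.
a12, a13, a14, a23, a24, a34 = 1, 2, 3, 4, 5, 6
A = [[0,    a12,  a13,  a14],
     [-a12, 0,    a23,  a24],
     [-a13, -a23, 0,    a34],
     [-a14, -a24, -a34, 0]]

def det(m):
    # Exact integer determinant by Laplace expansion along the first row.
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * e * det([r[:j] + r[j + 1:] for r in m[1:]])
               for j, e in enumerate(m[0]))

pf = a12 * a34 - a13 * a24 + a14 * a23
print(det(A), pf ** 2)
```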
H: Why does $\mathrm{ord}_p(n!)=\sum_{i=1}^k a_i(1+p+\cdots+p^{i-1})$?. Suppose $$ n=a_0+a_1p+\cdots+a_kp^k\qquad 0\leq a_i<p $$ is the base $p$ (for $p$ a prime) representation of an integer $n$. I'm trying to prove to myself that $$ ord_p(n!)=\sum_{i=1}^k a_i(1+p+\cdots+p^{i-1}). $$ I know the formula that $\displaystyle ord_p(n!)=\left\lfloor\frac{n}{p}\right\rfloor+\left\lfloor\frac{n}{p^2}\right\rfloor+\cdots$. Substituting the representation of $n$ base $p$, I find something like $$ \left\lfloor\frac{a_0}{p}+a_1+\cdots a_kp^{k-1}\right\rfloor+\cdots+\left\lfloor\frac{a_0}{p^k}+\cdots+\frac{a_{k-1}}{p}+a_k\right\rfloor+\left\lfloor\frac{a_0}{p^{k+1}}+\cdots+\frac{a_k}{p}\right\rfloor+\cdots $$ Pulling out the integer terms, this can be rewritten as $$ \left\lfloor\frac{a_0}{p}\right\rfloor+\cdots+\left\lfloor\frac{a_0}{p^{k+1}}+\cdots+\frac{a_k}{p}\right\rfloor+\cdots+\sum_{i=1}^k a_i(1+p+\cdots+p^{i-1}). $$ So I have the formula I want at the right, and it amounts to showing that all the terms on the left are actually $0$. For instance, it's clear $\left\lfloor\frac{a_0}{p}\right\rfloor=0$, but I don't know what to do for the other terms. For $\left\lfloor\frac{a_0}{p^2}+\frac{a_1}{p}\right\rfloor$, I can only bound $$ \frac{a_0}{p^2}+\frac{a_1}{p}=\frac{a_0+a_1p}{p^2}\leq\frac{p+p^2}{p^2} $$ which doesn't give anything conclusive. How can one conclude the leftmost terms are all $0$ to get the alternative formula? Edit: I wouldn't mind seeing the alternative with induction on $n$ instead if it's less messy. 
AI: EDIT: It appears the main point of confusion relates to these: $$ \left\lfloor \frac{a_0}{p^2} + \frac{a_1}{p} \right\rfloor \leq \left\lfloor \frac{p-1}{p^2} + \frac{p-1}{p} \right\rfloor = \left\lfloor \frac{p^2-1}{p^2} \right\rfloor = 0.$$ $$ \left\lfloor \frac{a_0}{p^3} + \frac{a_1}{p^2} + \frac{a_2}{p} \right\rfloor \leq \left\lfloor \frac{p-1}{p^3} + \frac{p-1}{p^2} + \frac{p-1}{p} \right\rfloor = \left\lfloor \frac{p^3-1}{p^3} \right\rfloor = 0.$$ ORIGINAL: Need to be more careful about the floor function. With $$ 0 \leq a_j \leq p-1, \; \; a_k \neq 0, $$ we get $$ \left\lfloor \frac{n}{p} \right\rfloor = a_1 + a_2 p + a_3 p^2 + \cdots + a_k p^{k-1}, $$ $$ \left\lfloor \frac{n}{p^2} \right\rfloor = a_2 + a_3 p + \cdots + a_k p^{k-2}, $$ $$ \left\lfloor \frac{n}{p^3} \right\rfloor = a_3 + a_4 p + \cdots + a_k p^{k-3}, $$ $$ \cdots $$ $$ \left\lfloor \frac{n}{p^{k-1}} \right\rfloor = a_{k-1} + a_k p, $$ $$ \left\lfloor \frac{n}{p^{k}} \right\rfloor = a_k. $$ So, the coefficient of $a_1$ is $1,$ the total coefficient of $a_2$ is $1+p,$ the total coefficient of $a_3$ is $1+p + p^2,$ and so on. What we get is the sum of $$ a_j \frac{p^j - 1}{p - 1}. $$ Let us look at that a little more. $$ v_p (n!) = \sum_{j=0}^k a_j \frac{p^j - 1}{p - 1} = \sum_{j=0}^k a_j \frac{p^j }{p - 1} \; - \; \sum_{j=0}^k a_j \frac{1 }{p - 1}, $$ $$ v_p (n!) = \sum_{j=0}^k \frac{ a_j p^j }{p - 1} - \sum_{j=0}^k \frac{a_j }{p - 1} = \frac{\sum_{j=0}^k a_j p^j }{p - 1} \; - \; \frac{\sum_{j=0}^k a_j }{p - 1}. $$ So, if we name $$ S_p(n) = \sum_{j=0}^k a_j, $$ we get $$ v_p (n!) = \frac{n }{p - 1} \; - \; \frac{ S_p(n) }{p - 1} = \frac{n - S_p(n) }{p - 1}. $$ Let's see, $v_p$ means the same as $\mbox{ord}_p.$ The $v$ stands for valuation. The quantity $S_p(n)$ can be called the sum of the digits of $n$ when written in base $p.$ So, $$ v_p (n!) \; \; = \; \; \frac{n - S_p(n) }{p - 1}. $$
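The final identity $v_p(n!) = \frac{n - S_p(n)}{p-1}$ is easy to test against a direct count of prime factors (a sketch; the sampled values of $p$ and $n$ are arbitrary):

```python
# Compare three expressions for ord_p(n!): a direct factor count,
# Legendre's sum of floors, and the digit-sum formula.
def vp_factorial_direct(n, p):
    count = 0
    for m in range(2, n + 1):
        while m % p == 0:
            count += 1
            m //= p
    return count

def digit_sum(n, p):
    s = 0
    while n:
        s += n % p
        n //= p
    return s

checks = []
for p in (2, 3, 5, 7):
    for n in (1, 10, 50, 100):
        legendre = sum(n // p ** i for i in range(1, 20))   # floors vanish past p^i > n
        digits = (n - digit_sum(n, p)) // (p - 1)
        checks.append(vp_factorial_direct(n, p) == legendre == digits)
print(all(checks))
```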
H: Identification of $T_v(T_pM)$ with $T_pM$ In some passages of Do Carmo's Riemannian geometry book he identifies $T_v(T_pM)$ with $T_pM$; my questions: How does one see $T_pM$ as a manifold? What is the atlas? What is the expression of a vector $x \in T_v(T_pM)$ in local coordinates? AI: Since $T_pM$ is a real vector space, it is isomorphic to $\mathbb{R}^n$ (where $n = \dim M$). The atlas on $T_pM$ is just the atlas on the standard $\mathbb{R}^n$. A choice of local coordinates induces an isomorphism $T_pM\leftrightarrow\mathbb{R}^n$, and under this isomorphism you see that a vector $x\in T_v(T_pM)$ is just a vector from multivariable calculus or real analysis with tail at $v$. The important point here is that $T_pM$ is a finite-dimensional real vector space, which carries a natural manifold structure.
H: Measure-theoretic view of expectation of a Bernoulli sequence Problem: I have a good understanding of basic Bernoulli and Binomial RVs, but this was foundational work in statistics. I am attempting to try and apply my (minimal but increasing) knowledge of measure theory to a tangible concept. I have been working with simple functions, etc. and am trying to utilize only these tools to find expectation: if $f=\sum_{i=1}^m c_i1_{A_i}$ has distinct, finite c's and disjoint A's, then $\int f d\mu=\sum_{i=1}^m c_i\mu(A_i)$ and if $f$ is measurable and $f_n \uparrow f$ then $\int f d\mu=\lim_{n\rightarrow\infty}\int f_n d\mu$ I want to try and practice (read: learn how to) utilize these ideas on a measure space of an infinite number of Bernoulli trials. I define my space below: Work Take $(\Omega,\mathcal{B})=(\{0,1\}^{\infty},\mathcal{B}(\{0,1\}^{\infty}))$ and define an event $\omega\in\Omega$ as $\omega=(x_1,x_2,...)$ Then I defined a probability measure on cylinder sets: $P(\{x_1\}\times\cdots\times\{x_n\}\times\{0,1\}\times\{0,1\}\times\cdots)=\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}$. From here I want to find the expectation of RVs such as: 1- $Z(\omega)=\sum_{i=1}^nx_i$ 2- $Y=e^{sZ}$ (moment-generating function), and 3- $V(\omega)=\sum_{n=1}^{\infty}r^nx_n$ for positive r. Using comments below: $Z(\omega)=\sum_{i=1}^n 1_{A_i}, A_i\subset\Omega, A_i=\{(x_k)_{k\ge 1}\in\Omega|x_i=1\}$ $Y_n(x_1,...,x_n)=exp \left(s\sum_{i=1}^n x_i \right )$ $E_n(Y_n)=\sum_{(x_k)\in\Omega_n}exp \left(s\sum_{i=1}^n x_i \right )\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}$ $E(Y)=(1-p+pe^s)^n$ $E(V)=\sum_{i=1}^n r^i p, \forall n$ $V(\omega)_n \uparrow V(\omega)\rightarrow E(V(\omega)_n)\uparrow E(V(\omega))$ I showed these to a friend and he had the following comments: 1-Each RV needs to be represented as a simple function, a limit of a nondecreasing sequence of non-negative functions, or a difference of two such limits (whose product is zero).
For example, on $Z$, you need to compute $E(Z)=\sum_{i=1}^n P(A_i)$ and show how each $P(A_i)$ is derived from $P$ as being a unique probability measure satisfying the equality. Also, if the RV is a limit of simple functions, you have to find the expectation of the simple function in the sequence and take the limit. Given that I am learning this on my own from scratch, any explicit help would be greatly appreciated. No detail is too much! AI: The only case requiring measure theory is 3. Note that $Y$, $Z$ and $V_n:\omega\mapsto\sum\limits_{i=1}^nr^ix_i$ could all be defined on the space $(\Omega_n,2^{\Omega_n},\mathrm P_n)$ where $\Omega_n=\{0,1\}^n$ and $\mathrm P_n$ is the probability measure one can guess. In particular $\mathrm E(V_n)=\mathrm E_n(W_n)$ where $W_n(x_1,\ldots,x_n)=\sum\limits_{i=1}^nr^ix_i$ hence $\mathrm E(V_n)=\sum\limits_{i=1}^nr^ip$ for every $n$. Since $(V_n)_n$ is nondecreasing to $V$, $\mathrm E(V_n)\to \mathrm E(V)$ and you are done. Edit: To compute $\mathrm E(Y)$ from first principles, note that $\mathrm E(Y)=\mathrm E_n(Y_n)$ where $Y_n$ is defined on $\Omega_n$ by $Y_n(x_1,\ldots,x_n)=\exp\left(s\sum\limits_{i=1}^nx_i\right)$. Hence, $$ \mathrm E_n(Y_n)=\sum\limits_{(x_k)\in\Omega_n}\exp\left(s\sum\limits_{i=1}^nx_i\right)\prod\limits_{i=1}^np^{x_i}(1-p)^{1-x_i}=\left(\sum\limits_{z=0}^1\mathrm e^{sz}p^z(1-p)^{1-z}\right)^n, $$ and $$ \mathrm E(Y)=(1-p+p\mathrm e^s)^n. $$
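The finite-dimensional reduction in this answer is easy to sanity-check by brute force. Below is a sketch in Python (the values of $n$, $p$, $s$ are arbitrary test choices, not from the post) that enumerates $\Omega_n=\{0,1\}^n$ and compares $E_n(Y_n)$ with the closed form $(1-p+pe^s)^n$:

```python
import itertools
import math

# Brute-force check of E(Y) = (1 - p + p*e^s)^n by enumerating all of
# Omega_n = {0,1}^n; n, p, s are arbitrary test values, not from the post.
n, p, s = 6, 0.3, 0.5

expectation = sum(
    math.exp(s * sum(x)) * math.prod(p**xi * (1 - p)**(1 - xi) for xi in x)
    for x in itertools.product((0, 1), repeat=n)
)
closed_form = (1 - p + p * math.exp(s))**n
print(expectation, closed_form)  # the two values agree
```

The same enumeration idea verifies $E(V_n)=\sum_{i=1}^n r^i p$ by swapping in the integrand $\sum_i r^i x_i$.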
H: Find $f$ such that the following integral equation is satisfied. Find $f$ such that the following integral equation is satisfied: $$\int_0^x \lambda f(\lambda) ~d\lambda= \int_x^0(\lambda^2 + 1)f(\lambda) ~d\lambda + x$$ I attempted it in the following way: $\int_0^x \lambda f(\lambda) ~d\lambda= -\int_0^x(\lambda^2 + 1)f(\lambda) ~d\lambda + x$ $\int_0^x\lambda^2f(\lambda) ~d\lambda + \int_0^x \lambda f(\lambda) ~d\lambda + \int_0^xf(\lambda) ~d\lambda - x = 0$ which kind of looks like a quadratic, but I wasn't sure how to use this property. Can I use the fundamental theorem of calculus here somehow? I wasn't sure how to use it with the $\lambda$ terms in front of the $f(\lambda)$ functions. Am I on the right track, or how should I approach this problem? AI: Yes. The fundamental theorem of calculus is your friend here. Note that we can rewrite what you have as $$\int_0^x \lambda f(\lambda) d \lambda + \int_0^x (\lambda^2 + 1) f(\lambda) d\lambda =x$$ $$\int_0^x (\lambda + \lambda^2 + 1) f(\lambda) d \lambda = x$$ Differentiating both sides, the fundamental theorem of calculus now gives us $$(x + x^2 + 1)f(x) = 1$$ Hence, $$f(x) = \dfrac{1}{1 + x + x^2}$$
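A quick numeric sanity check of the solution in Python (the Simpson helper and the test point $x=0.7$ are my own choices):

```python
import math

# Numeric check (Simpson's rule) that f(x) = 1/(1 + x + x^2) satisfies
# the integral equation at an arbitrary test point x = 0.7.
def f(t):
    return 1.0 / (1.0 + t + t * t)

def simpson(g, a, b, n=1000):  # composite Simpson's rule; n must be even
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * g(a + i * h)
                       for i in range(n + 1))

x = 0.7
lhs = simpson(lambda t: t * f(t), 0.0, x)
rhs = -simpson(lambda t: (t * t + 1) * f(t), 0.0, x) + x
print(lhs, rhs)  # both sides agree
```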
H: Block Determinants This is a nice question I recently found in Golan's book. Problem: Let $A,B,C,D$ be $n\times n$ matrices over $\mathbb{R}$ with $n\ge 2$, and let $M$ be the $2n\times 2n$ matrix \begin{bmatrix} A & B \\ C & D\\ \end{bmatrix} If all of the "formal determinants" $AD-BC$, $AD-CB$, $DA-CB$, and $DA-BC$ are nonsingular, is $M$ necessarily nonsingular? If $M$ is nonsingular, must all of the formal determinants also be nonsingular? AI: Take $$A=\begin{bmatrix} 1&3\\ 1&2 \end{bmatrix}\quad B=\begin{bmatrix} 2&4\\ 2&1 \end{bmatrix}\quad C=\begin{bmatrix} 1&0\\ 1&5 \end{bmatrix}\quad D=\begin{bmatrix} 2&0\\ 2&6 \end{bmatrix}$$ Then $$|AD-BC|=20, \quad |AD-CB|=102, \quad |DA-BC|=18, \quad |DA-CB|=8,$$ so all four formal determinants are nonsingular. But the third column of the combined matrix $M$ is twice the first column, so $M$ has determinant $0$ and is singular.
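The counterexample is easy to verify by direct computation; here is a self-contained sketch in Python (with tiny hand-rolled 2×2 matrix helpers rather than a library):

```python
# Verify the counterexample: the four formal determinants are nonzero,
# yet M is singular because the third column of M is twice the first.
def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(2)] for i in range(2)]

def det2(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

A = [[1, 3], [1, 2]]; B = [[2, 4], [2, 1]]
C = [[1, 0], [1, 5]]; D = [[2, 0], [2, 6]]

dets = [det2(mat_sub(mat_mul(A, D), mat_mul(B, C))),   # |AD - BC|
        det2(mat_sub(mat_mul(A, D), mat_mul(C, B))),   # |AD - CB|
        det2(mat_sub(mat_mul(D, A), mat_mul(B, C))),   # |DA - BC|
        det2(mat_sub(mat_mul(D, A), mat_mul(C, B)))]   # |DA - CB|
print(dets)  # [20, 102, 18, 8]

M = [A[0] + B[0], A[1] + B[1], C[0] + D[0], C[1] + D[1]]
print(all(row[2] == 2 * row[0] for row in M))  # True: column 3 = 2 * column 1
```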
H: How to calculate these summations? How to find the values of these kind of summations: $$\large\sum_{i=0}^6(6-i)\;\ast\;\sum_{j=1}^6(7-j)\;\ast\;\sum_{k=2}^7(8-k)\;\ast\;\sum_{\ell=3}^8(9-\ell)$$ AI: Use that $$\begin{align}\sum_{t=a}^b(c-t)&=\left(\sum_{t=a}^bc\right)-\left(\sum_{t=a}^b t\right)\\\\&=\left(\sum_{t=a}^bc\right)-\left(\sum_{s=0}^{b-a} (s+a)\right)\\\\ &=\left(\sum_{t=a}^bc\right)-\left(\sum_{s=0}^{b-a} s\right)-\left(\sum_{s=0}^{b-a}a\right)\\\\ &=(b-a+1)c-(b-a+1)a-\left(\sum_{s=0}^{b-a} s\right)\\\\&=(b-a+1)(c-a)-\left(\sum_{s=0}^{b-a} s\right)\\\\&=(b-a+1)(c-a)-\frac{(b-a)(b-a+1)}{2}\\\\&=(b-a+1)(c-a-\tfrac{b}{2}+\tfrac{a}{2})\\\\&=(b-a+1)(c-\tfrac{b}{2}-\tfrac{a}{2})\end{align}$$ for each term. For example, $$\sum_{i=0}^6(6-i)=6+5+4+3+2+1=\fbox{21}=7\cdot 3=(6-0+1)(6-\tfrac{6}{2}-\tfrac{0}{2})\qquad\checkmark$$
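Each of the four sums can be evaluated directly, which also confirms the closed form $(b-a+1)(c-\tfrac{b}{2}-\tfrac{a}{2})$ derived above; a quick check in Python:

```python
# Direct evaluation of the four sums and their product; each sum equals 21.
s1 = sum(6 - i for i in range(0, 7))
s2 = sum(7 - j for j in range(1, 7))
s3 = sum(8 - k for k in range(2, 8))
s4 = sum(9 - m for m in range(3, 9))
print(s1, s2, s3, s4, s1 * s2 * s3 * s4)  # 21 21 21 21 194481
```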
H: $\{1,1\}=\{1\}$, origin of this convention Is there any book that explicitly contains the convention that a representation of a set containing repeated elements is the same as the one without repeated elements? Like $\{1,1,2,3\} = \{1,2,3\}$. I have looked in a few books and none mentions such a thing. (Wikipedia has it, but does not cite a source.) In my years learning mathematics in both the US and Hungary, this convention was known and applied. However, I recently noticed that some Chinese students claim they have never seen it before, and I don't remember seeing it in any book either. I have never found a book that explicitly says what the rules are for how $\{a_1,a_2,a_3,\ldots,a_n\}$ specifies a set. Some people believe it can only specify a set if $a_i\neq a_j \Leftrightarrow i\neq j$. The convention shows that this doesn't have to be satisfied. AI: It all ties back to how this specification of sets is defined. An unordered tuple $\{a_1,a_2,a_3,a_4,\dots\}$ is defined as $\{x:x=a_1 \lor x=a_2 \lor x=a_3 \lor x=a_4 \lor\dots\}$. So, by this convention, $\{1,1\} = \{x:x=1 \lor x=1 \}$. This is equal to $\{ x : x = 1 \}$ by the idempotency of $\lor$, so $\{1,1\} = \{1\}$.
H: A formula with only one $0$ that evaluates to a given integer Just found this math puzzle online: Original puzzle: Using one zero (and no other numbers) and mathematical (including trigonometric and advanced) operations, how can you achieve a final result of 6? Advanced one: Using only trigonometric functions and a single instance of the number zero, derive a formula to calculate any positive integer n. I am sure there is an answer somewhere online but I can't find one - very interesting. And the answer I can think of for the original one is $$\lim_{x\to0}\;(a^x+ a^x+ a^x+ a^x+ a^x+ a^x)$$ AI: This may not be the most efficient solution for 6, but it works in general. First, note that $\cos(0) = 1$, so we can start with $1$. The important formulas are then (a) $\tan(\cot^{-1}(x)) = \frac{1}{x}$ (b) $\sin(\cot^{-1}(x)) = \frac{1}{\sqrt{1 + x^2}}$ These can be checked by drawing a triangle with the appropriate side lengths. Then, starting with $\sqrt{n}$, applying (b) and then (a) gives $\sqrt{n+1}$. Thus, starting with $\sqrt{n} = n = 1$, repeated applications of (a) and (b) can give the square root of any natural number. Since every natural number is the square root of another natural number, we can get any natural number this way. Edit: After looking at the link Eugene gave below, this same process can be done more simply by iterating $\sec( \tan^{-1}(x)) = \sqrt{x^2+1}$
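The iteration from the Edit is easy to watch numerically. A Python sketch (using $\sec = 1/\cos$ and `math.atan` for $\tan^{-1}$) starting from $\cos(0)=1$:

```python
import math

# Iterating x -> sec(arctan(x)) = sqrt(x^2 + 1), starting from cos(0) = 1,
# produces sqrt(2), sqrt(3), sqrt(4) = 2, ... : every sqrt(n) is reachable.
x = math.cos(0.0)                      # 1
for _ in range(3):
    x = 1.0 / math.cos(math.atan(x))   # sec(arctan(x)) = sqrt(x^2 + 1)
print(x)  # ≈ 2.0 (= sqrt(4))
```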
H: Lagrange's method to find min/max question I have a doubt when using this method to find the min/max of a function. When I can find two (or more) solutions of the system of equations, I can easily tell that the function has a max and a min. But the problem is: what if the system I solve has only one solution? I can determine whether it is a max or a min (by plugging in an arbitrary point and comparing values); suppose it is a min. Can I then say this function has no max? Thanks :) AI: If you are looking at a continuous function, say $f(x,y)$ (there could be more variables) over a compact set (closed and bounded), then there will always be a min and a max. But the min and/or max may occur on the boundary of the region, in which case setting partial derivatives equal to $0$ may (and usually will) fail to find it. This is the analogue in dimensions $2$ and higher of the fact that if we are looking at a continuous function over a closed interval, the min and/or max may occur at an endpoint. For example, if we are trying to find the min or max of $x^2+y^2$ on or inside the square with corners $(1,1)$, $(-1,1)$, $(-1,-1)$, and $(1,-1)$, setting partial derivatives equal to $0$ finds the minimum, which occurs at $(0,0)$, but does not find the maximum, which is at the four corners of the region. If we have a function, say $f(x,y)$, with continuous partial derivatives, and we are interested in finding the min and max of $f$ over all of $\mathbb{R}^2$, there may not be such a min or max. If there is a single place at which the partials are both $0$, then at least one of min or max fails to exist. But even if the partials are both $0$ at several places, there may not be a min or max. For example, let $f(x,y)=x^3-3x^2+y^3-3y^2$. Then the partials are equal to $0$ at $(0,0)$, $(0,2)$, $(2,0)$, and $(2,2)$ but $f$ has neither a min nor a max on $\mathbb{R}^2$.
H: If $X$ has an exponential distribution, prove the hazard function is constant $X$ has an exponential distribution, $Pr(X>0)=1$, p.d.f is $f$, c.d.f is $F$. $h(x)=\frac{f(x)}{1-F(x)}$ for $x>0$. Prove that $h(x)$ is constant for $x>0$. AI: We have $f(x)=\lambda e^{-\lambda x}$ (for $x\ge 0$). For the cumulative density function $F(x)$, we have $$F(x)=\int_0^x \lambda e^{-\lambda t}\,dt=1-e^{-\lambda x}$$ (if $x\ge 0$). So $1-F(x)=e^{-\lambda x}$, and therefore $$h(x)=\frac{f(x)}{1-F(x)}=\frac{\lambda e^{-\lambda x}}{e^{-\lambda x}}=\lambda,$$ a constant, (if $x\ge 0$).
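Since the algebra above cancels exactly, a floating-point check is almost trivial; here is a sketch in Python with an arbitrary test rate $\lambda = 1.7$:

```python
import math

# Numeric check that the hazard f(x) / (1 - F(x)) of an exponential
# distribution is the constant rate lam; lam = 1.7 is an arbitrary choice.
lam = 1.7
f = lambda x: lam * math.exp(-lam * x)       # p.d.f.
F = lambda x: 1 - math.exp(-lam * x)         # c.d.f.
hazards = [f(x) / (1 - F(x)) for x in (0.1, 1.0, 5.0)]
print(hazards)  # each entry ≈ 1.7
```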
H: How do I solve a certain characteristic system? I am studying PDEs and have the following (seemingly simple) problem: Find a surface that passes through the curve $$x^2+y^2=z=1$$ and is orthogonal to the family of surfaces $$z(x+y)=c(3z+1)\qquad(c\in\Bbb R)$$ After writing down the orthogonality condition (assuming my calculations are correct $(*)$), this yields the following equation: $$u(3u+1)(u_x+u_y)-x-y=0$$ We usually solve such equations by using the method of characteristics, which tells us (using assumption $(*)$ again) to solve the following characteristic system: $$\begin{align}\dot x=&u(3u+1)\\\dot y=&u(3u+1)\\\dot u =&x+y\end{align}$$ Differentiating the last equation of this system with respect to $t$ gives us $\ddot u=\dot x+\dot y$, which using the first two equations gives us $$\ddot u = 2u(3u+1)$$ After staring at this equation for some time, I decided to ask Wolfram|Alpha. The result seems pretty ugly, so the following questions arise: Did I make a mistake/am I missing something? Is my approach correct? How do I proceed? Thanks. AI: Here's what I do (I'm probably wrong, but we can compare our procedures. I'm not using the comment box because there's not enough space in it). The intersection of the surfaces would be the set of points $(x,y,z)$ such that \begin{cases}u(x,y)-z=0\\z(x+y)-3cz-c=0,\end{cases} that is, those that lie both on the solution surface and the given surface (determined by $c$, and this must hold for every $c$). The surfaces are orthogonal means (I'm guessing) their respective gradients at the intersection points must be orthogonal. The gradient of the solution surface is $(u_x,u_y,-1)$, and the gradient of the given surface(s) is $(z,z,x+y-3c)$. So the condition is $zu_x+zu_y-x-y+3c=0$. 
Since this occurs when $z=u$ we can write it as $$uu_x+uu_y=x+y-3c.$$ We can eliminate $c$ using the intersection equations: $c=\frac{(x+y)u}{1+3u}$, and plugging this into the last equation gives us $$uu_x+uu_y=\frac{x+y}{1+3u}.$$ So we agree there. As for what follows, the method of characteristics, I think you proceeded correctly.
H: Combinatorics: endless series I have the following problem: In an urn, you have 1 blue and 9 white balls. You pull out one ball at a time; if it is the blue one, you win. If it is white, you throw it back in and pull again. Imagine 2 people are playing this game. Who has a better chance to win, the person who goes first or the second? AI: There is a $\frac{1}{10}$ chance that the first player wins on the first pull. There is a $\frac{9}{10}\cdot\frac{1}{10}=\frac{9}{100}$ chance that the second player wins on their first pull. Finally, there is an $\frac{81}{100}$ chance that the game returns to the "start", i.e. it is the first player's turn again. So the probability $P$ that the first player wins satisfies $$P=\frac{1}{10}+\frac{81}{100}P$$ and therefore $$\frac{19}{100}P=\frac{1}{10}$$ $$P=\frac{10}{19}$$ Thus the first player has a slight advantage (the probability that the second player wins is only $\frac{9}{19}$).
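The fixed-point equation solves exactly with rational arithmetic; a quick check in Python:

```python
from fractions import Fraction

# Exact check: P = 1/10 + (81/100) P  =>  P = 10/19 for the first player.
P = Fraction(1, 10) / (1 - Fraction(81, 100))
print(P, 1 - P)  # 10/19 9/19
```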
H: Proving an interesting feature of any $1000$ different numbers chosen from $\{1, 2, \dots,1997\}$ Assume you choose $1000$ different numbers from the set $\{1, 2, \dots,1997\}$. Prove that among the $1000$ chosen numbers there is a pair whose sum is $1998$. I defined: pigeonholes: possible sums. pigeons: the $1000$ different numbers. Is this definition good, or is there something better? AI: Look at the pairs $(1,1997)$, $(2, 1996)$, and so on up to $(998,1000)$, together with the singleton $999$. These are the pigeonholes. Every number belongs to exactly one pigeonhole. If we choose $1000$ numbers, then since there are only $998$ pairs and $1$ singleton, at least $2$ of our numbers end up in the same pigeonhole, that is, adding up to $1998$.
H: Triangles inside a square I have a question with a figure of a triangle inside a square. The base of the triangle is on the base of the square, and the peak of the triangle touches the top of the square. It then asks for the ratio of the area of the triangle to the area of the square. According to the book the answer is 1/2, but I am confused, since I believe I need more information. Would the ratio be any different if the alignment or type of the triangle were changed? (Currently it looks like an isosceles triangle; what happens if it were a right isosceles triangle inside the square - would the result be any different?) I can't help but feel that I am missing some key concept here. AI: I assume that the base of the triangle is the entire bottom side of the square. Then it doesn't matter where the top vertex of the triangle is along the top edge of the square: the area of the triangle will be half the area of the square. Suppose that the sides of the square have length $s$. Clearly the area of the square is $s^2$. The base of the triangle is $s$, and the height is also $s$: it's the distance from the base of the square to the top edge of the square. Thus, the area of the triangle is $$\frac12\cdot\text{base}\cdot\text{height}=\frac12s^2\;,$$ half the area of the square. In fact, the top vertex of the triangle could be anywhere on the straight line containing the top edge of the square, even beyond the edges of the square, and the triangle would still have base $s$, height $s$, and area $\frac12s^2$. In the figure below, for instance, triangles $\triangle ABC,\triangle ABD,\triangle ABE,\triangle ABF$, and $\triangle ABG$ all have area $\frac12s^2$, because they all have base $s$ and height $s$.
H: Examples of infinite groups such that all their respective elements are of finite order. I am in need of examples of infinite groups such that all their respective elements are of finite order. AI: Here is one. Let $(\mathbb{Q},+)$ denote the group of rational numbers under addition, and consider its subgroup $(\mathbb{Z},+)$ of integers. Then every element of the quotient group $\mathbb{Q}/\mathbb{Z}$ has the form $\frac{p}{q} + \mathbb{Z}$, which is of order at most $q$. Hence every element is of finite order. Another example is the group of all roots of unity in $\mathbb{C}^{\times}$. Here is a link from MathOverflow which might prove helpful. https://mathoverflow.net/questions/57493/is-there-an-infinite-group-whose-elements-all-have-finite-order
H: Laplace transform of a product I tried to find the Laplace transform of the product below: $$3t\sin(6t)$$ but it seems that taking the transform of each factor and multiplying the results does not lead to a correct answer: $$\frac{3}{s^2}\frac{6}{s^2+36}$$ How does one solve such transforms? AI: The Laplace transform is not multiplicative, so transforming each factor separately cannot work. Instead, we know that if $L(f(t))=F(s)$ then $L(t\,f(t))=-F'(s)$, where $F'(s)=\frac{dF}{ds}$. Here you just need to differentiate $\frac{6}{s^2+36}$ with respect to $s$, negate, and multiply the result by $3$: $$L(3t\sin(6t)) = -3\,\frac{d}{ds}\left(\frac{6}{s^2+36}\right) = \frac{36s}{(s^2+36)^2}.$$
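A numeric cross-check of $L\{3t\sin(6t)\}(s) = \frac{36s}{(s^2+36)^2}$, sketched in Python by approximating $\int_0^\infty 3t\sin(6t)e^{-st}\,dt$ at the arbitrary test point $s=2$ with a hand-rolled Simpson rule (truncated at $T=30$, where the $e^{-st}$ tail is negligible):

```python
import math

# Numeric check of L{3 t sin(6t)}(s) = 36 s / (s^2 + 36)^2 at s = 2.
def simpson(g, a, b, n):  # composite Simpson's rule; n must be even
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * g(a + i * h)
                       for i in range(n + 1))

s = 2.0
numeric = simpson(lambda t: 3 * t * math.sin(6 * t) * math.exp(-s * t),
                  0.0, 30.0, 60000)
closed = 36 * s / (s * s + 36) ** 2   # = 72/1600 = 0.045
print(numeric, closed)
```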
H: Explain a statement about math induction base. I was reading an article on Wikipedia about mathematical induction: http://en.wikipedia.org/wiki/Mathematical_induction And there is a sentence: "Note that the first quantifier in the axiom ranges over predicates rather than over individual numbers." This is said of the axiom of mathematical induction. As I understand it, the first quantifier is P(0), i.e. the induction base. What does it mean that the induction base ranges over predicates rather than over individual numbers? AI: No, the first quantifier is the $\forall$ at the very beginning of the expression. It quantifies $P$, which can be any predicate describing natural numbers. For example, $P(n)$ could be ‘$n$ is even’, or ‘$n$ is prime’, or $\exists p(p\text{ is prime and }p^2\mid n)$. The second and third quantifiers are the $\forall$’s in $\forall k\in\Bbb N$ and $\forall n\in\Bbb N$: they range over elements of $\Bbb N$, i.e., over natural numbers. $P(0)$ is simply a sentence saying ‘the number $0$ has the property $P$’; there is no quantifier here at all (unless, of course, the predicate $P$ itself contains quantifiers, as in my third example above).
H: If $\gcd(a,35)=1$ then show that $a^{12} \equiv 1 \pmod{35}$ If $\gcd(a,35)=1$, then show that $a^{12}\equiv1\pmod{35}$ I have tried this problem beginning with $a^6 \equiv 1 \pmod{7}$ and $a^4 \equiv 1 \pmod{5}$ (Fermat's Theorem) but couldn't get far enough. Please help. AI: Since $\gcd(a,7) =\gcd(a,5) = 1$, from Fermat's theorem, $$a^6\equiv 1\pmod7 \quad \text{ and } \quad a^4\equiv1\pmod5. $$ Hence, $$ a^{12}\equiv 1\pmod7 \quad \text{ and } \quad a^{12}\equiv1\pmod5. $$ This means that $$7\mid a^{12}-1 \quad\text{ and } \quad 5\mid a^{12}-1. $$ Since $\gcd(7,5)=1$, $$35\mid a^{12}-1, $$ that is, $$ a^{12}\equiv1\pmod{35}. $$
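The congruence is easy to confirm exhaustively in Python over one period of residues:

```python
import math

# Brute-force check: a^12 ≡ 1 (mod 35) for every a with gcd(a, 35) = 1.
units = [a for a in range(1, 35) if math.gcd(a, 35) == 1]
print(all(pow(a, 12, 35) == 1 for a in units), len(units))  # True 24
```

There are $\varphi(35)=24$ such residues, and every one of them satisfies the congruence.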
H: What is a fast way to evaluate $\int P(x)e^{ax}dx$? Undoubtedly, this question is easy, but I'd like to ask it. We know that the way in which indefinite integrals like $\int P(x)e^{ax}dx$ and $\int P(x)\sin(bx)dx$, wherein $P(x)$ is an arbitrary polynomial of $x$ and $a, b\in \mathbb R$, are evaluated is integration by parts, $\int u\,dv=uv-\int v\,du$. So we take $u$ and $dv$ repeatedly until the whole integral becomes an easy one. Is there a faster method for doing the above integrals without using the classic formula? Thank you for the help. AI: I know two tricks. For the $\sin(bx)$ type of integral, notice that $P(x)\sin(bx)$ is the imaginary part of $P(x)e^{ibx}$, which you can integrate more easily, and then extract the imaginary part. For the $e^{ax}$ type, notice that $x^ne^{ax} = \frac{\partial^n}{\partial a^n}(e^{ax})$. So then, integrating from $0$ to $\infty$ (which requires $a<0$), $$\int_0^\infty x^n e^{ax}\,dx = \int_0^\infty \frac{\partial^n}{\partial a^n}e^{ax}\,dx = \frac{\partial^n}{\partial a^n} \int_0^\infty e^{ax}\,dx = \frac{\partial^n}{\partial a^n}\left(-\frac{1}{a}\right) = \frac{(-1)^{n+1}\,n!}{a^{n+1}}.$$
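A numeric spot-check of the closed form $\int_0^\infty x^n e^{ax}\,dx = (-1)^{n+1}\,n!/a^{n+1}$ for $a<0$, sketched in Python with a hand-rolled Simpson rule and the arbitrary test values $n=3$, $a=-2$ (truncating the integral at $T=40$, where the tail is negligible):

```python
import math

# Spot-check of the closed form at n = 3, a = -2, where it gives 3!/2^4 = 0.375.
def simpson(g, a, b, n):  # composite Simpson's rule; n must be even
    h = (b - a) / n
    return h / 3 * sum((1 if i in (0, n) else 4 if i % 2 else 2) * g(a + i * h)
                       for i in range(n + 1))

n_, a_ = 3, -2.0
numeric = simpson(lambda x: x ** n_ * math.exp(a_ * x), 0.0, 40.0, 4000)
closed = (-1) ** (n_ + 1) * math.factorial(n_) / a_ ** (n_ + 1)
print(numeric, closed)  # both ≈ 0.375
```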
H: Cohomology of sheaves : reference-request I need a good reference book where I can learn the cohomology of sheaves through the approach of Čech cohomology. Hartshorne's book, for example, doesn't help me a lot because he chose the "derived functors approach". AI: The number one account is still in Serre's legendary Faisceaux Algébriques Cohérents (Chapitre I, §§3,4), of which you can find an English translation here. [Grothendieck had not yet introduced his more abstract version of sheaf cohomology at the time but was soon to do so] You can also look at sections 7.8 and 7.9 of Taylor's textbook. Another excellent textbook is that by Fritzsche-Grauert, where you will find in Chapter IV, §3 not only Čech cohomology for sheaves but also its relation to classical singular cohomology. A more technical account for algebraic geometers is in Chapter VII of Mumford-Oda's notes. [I think these notes are a reworking of a draft for a mythical book projected by Mumford, which was to revise and extend his famous red book. The book never materialized because of Mumford's scientific reconversion to theoretical computer science. The online notes are available by courtesy of Professor Chai]
H: Epsilon-delta proof of continuity Prove that $f(x,y,z)=x^4+y^4+z^4$ is continuous at the point $(x,y,z)=(0,0,0)$ using epsilon-delta. I argue as follows: if $$\lim_{(x,y,z) \to (0,0,0)} f(x,y,z) = f(0,0,0)$$ then the function is continuous, and $$\lim_{(x,y,z) \to (0,0,0)} x^4+y^4+z^4 = 0^4+0^4+0^4=0$$ But how do I prove this with $\epsilon$-$\delta$? AI: Hints: $g(x) = x^4$ is continuous at $0$; in fact, choosing $\delta < \epsilon^{1/4}$ works for $g$. The triangle inequality is really useful here: $|x^4+y^4+z^4| \le |x|^4+|y|^4+|z|^4$, so apply the one-variable bound with $\epsilon/3$ in each variable.
H: Continuous root map of the coefficients of a polynomial I have a family of polynomials $P_t(z)= z^n+ a_{n-1}(t)z^{n-1}+\cdots+ a_0(t)$ which depends on a real parameter $t \in [a,b]$, where $a_{n-1}(t),\ldots, a_0(t)$ are real continuous functions. May I say that there exists a continuous map $\theta(t)$ such that $\theta(t)$ is a root of $P_t$ (for all $t$)? I mean, I know that the roots of a polynomial depend continuously on the coefficients and that the Viète map descends to a homeomorphism $w:C^n/S_n\to C^n$, but can I 'choose' a root? Or do I need the axiom of choice to affirm that there exists a map $C^n/S_n\to C^n$? In that case, can I arrange for such a map to be continuous? Any bibliography reference for all this? AI: Great question. If the roots are all always real the answer is yes (this comes from the fact that in $\mathbb R$ you can order the roots from the lowest to the highest). If the roots or the coefficients may be complex, the answer is in general negative. Take for example the polynomial $t^2-z \in \mathbb{C}[t]$, with $z \in \mathbb C$. However there is a deep theorem (by Kato) that may help you: it states that if the roots of your polynomial depend on only one real parameter $t \in \mathbb {R}$, then you have $N$ continuous functions that describe the roots. Anyway, I suggest you take a look at Kato, Perturbation theory for linear operators, Springer (Theorem 5.2, p. 109 in my edition).
H: What does the following statement mean? If $\mu(A) <\infty $, then from almost everywhere convergence follows the convergence in the measure . I don't understand what the "convergence in the measure" means. Waiting for your explanation. I am trying to understand proposition 2 from the following link : http://medvegyev.uni-corvinus.hu/CeuAdv2.pdf. Thanks. AI: The proposition says that if $(X,\mu)$ is a measure space, and $\mu(X)$ is finite, then: Whenever $f_n$ is a sequence of real valued functions such that for almost all $x\in X$ we have $\lim f_n(x)=f(x)$ for some $f$, Then for every $\varepsilon>0$ the measure of the sets $E_n=\left\{x\in X: |f_n(x)-f(x)|>\varepsilon\right\}$ approach zero, that is $\lim\limits_n\mu(E_n)=0$. The first type of convergence is called "almost everywhere convergence" and the second type is called "convergence in measure". The proposition tells us that for finite-measure spaces one implies the other.
H: $p$ a prime number of the form $p=2^m+1$. Prove: if $(\frac{a}{p})=-1$ then $a$ is a primitive root modulo $p$ Let $p$ be an odd prime number of the form $p=2^m+1$. I'd like your help proving that if $a$ is an integer such that $(\frac{a}{p})=-1$, then $a$ is a primitive root modulo $p$. If $a$ is not a primitive root modulo $p$, then $Ord_{p}(a)=t$, where $t<p-1=2^m$ and $t\mid 2^m$ since $Ord_{p}(a)\mid p-1$. I also know that there are no solutions to the congruence $x^2\equiv a \pmod p$. How can I use this in order to reach a contradiction? Thanks a lot. AI: Ok, well, we know by Fermat's little theorem that $a^{p-1} \equiv 1$ mod $p$, i.e. $a^{2^m} \equiv 1$ mod $p$. Now $a$ must have order $2^i$ for some $0\leq i\leq m$ by Lagrange's theorem. But the fact that the Legendre symbol is $-1$ coupled with Euler's criterion tells us that $a^{\frac{p-1}{2}} \equiv \left(\frac{a}{p}\right)$ mod $p$, i.e. $a^{2^{m-1}} \equiv -1$ mod $p$. So $a$ cannot have order $2^i$ for $0\leq i < m$ (otherwise repeated squaring would give $a^{2^{m-1}} \equiv 1$ mod $p$, contradicting this congruence). Thus $a$ has order $2^m$, and so is a primitive root.
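For a concrete Fermat prime one can confirm the statement by brute force. A Python sketch for $p=17=2^4+1$ (my choice of test prime; `order` is a naive helper):

```python
# Check the claim for p = 17 = 2^4 + 1: every quadratic nonresidue a
# (Euler's criterion: a^8 ≡ -1 mod 17) is a primitive root mod 17.
p = 17

def order(a, p):  # naive multiplicative order of a modulo p
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

nonresidues = [a for a in range(1, p) if pow(a, (p - 1) // 2, p) == p - 1]
print(all(order(a, p) == p - 1 for a in nonresidues), len(nonresidues))  # True 8
```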
H: what does that statement mean about the relation? what does this mean about P? $$\forall x \exists y (p(x,y) \rightarrow p(y,x)) $$ i know that $$\forall x \forall y (p(x,y) \rightarrow p(y,x)) $$ means that P symmetric but what does the first statement means? and what does the last statement mean about the relation P? $$\forall y (\exists x P(x,y) \rightarrow \exists x P(y,x)) $$ AI: $\forall x \exists y (p(x,y) \rightarrow p(y,x))$ says that for each $x$ there is a $y$, which may depend on $x$, such that if $p(x,y)$, then also $p(y,x)$. Let me write $xPy$ to mean $p(x,y)$. Suppose that the statement is true, and fix a particular $x$. Then there’s some $y$, perhaps only one, such that $xPy\to yPx$. Under what circumstances is this true? The only time it’s false is when $xPy$ is true and $yPx$ is false; it’s true whenever $xPy$ is false, or $yPx$ is true, or both. Thus, if you can find a $y$ such that $x(\lnot P)y$, that $y$ works: $xPy\to yPx$ is vacuously true, because $xPy$ is false. What if $xPy$ for every $y$, so that you can’t find such a $y$? If you can find a $y$ such that $yPx$, you can use it, because then $xPy\to yPx$ is true. The only time you’re out of luck is when $xPy$ for every $y$, and there is no $y$ such that $yPx$: then no matter what $y$ you pick, $xPy$ is true and $yPx$ is false, so $xPy\to yPx$ is false. $\forall x \exists y (p(x,y) \rightarrow p(y,x))$ therefore says that for each $x$ there is a $y$ such that either $x(\lnot P)y$, or $yPx$ (or both). Alternatively, it says that there is no $x$ such that for every $y$, $xPy$ and $y(\lnot P)x$: there is no $x$ that is related to everything but not from anything. In fact, though, this is trivially true. Pick any $x$; if $xPx$, then $xPx\to xPx$, and we can simply let $y=x$. If $x(\lnot P)x$, then again $xPx\to xPx$, this time vacuously (because the antecedent is false), and we can let $y=x$. 
Thus, $\forall x \exists y (p(x,y) \rightarrow p(y,x))$ is true of any relation, because for each $x$ we can take $y=x$ and have $p(x,y) \rightarrow p(y,x)$. The statement $\forall y (\exists x P(x,y) \rightarrow \exists x P(y,x))$ is quite different. In terms of relations it says that each $y$ that is related from something is also related to something, but not necessarily the same thing. Consider, for instance, the relation $P$ on $\{0,1,2,3,4\}$ defined by $0P1P2P3P4P0$; that is, under $P$ the elements $0,1,2,3$, and $4$ form a circular chain. This relation satisfies the statement in question: for each $k\in\{0,1,2,3,4\}$, if there is some $m$ such that $mPk$, then there is some $n$ such that $kPn$. I could throw $5$ into the set without changing $P$, and $P$ would still have the property: there is no $m$ such that $mP5$, so the statement $\forall y (\exists x P(x,y) \rightarrow \exists x P(y,x))$ says nothing about $5$. But if I also add $3P5$, say, in order to keep the property I have to add $5Pk$ for at least one $k$.
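The vacuous-truth argument above (take $y=x$) can be confirmed exhaustively on a small domain. A sketch in Python (the 3-element domain is an arbitrary choice) checks the first formula against all $2^9=512$ relations:

```python
import itertools

# Brute-force check that ∀x∃y (P(x,y) → P(y,x)) holds for EVERY relation P
# on a 3-element domain, as the answer argues via the choice y = x.
X = range(3)
pairs = list(itertools.product(X, X))

def holds(P):  # P is a set of pairs; uses (A → B) ≡ (¬A ∨ B)
    return all(any((x, y) not in P or (y, x) in P for y in X) for x in X)

all_relations = [set(itertools.compress(pairs, bits))
                 for bits in itertools.product((0, 1), repeat=len(pairs))]
print(len(all_relations), all(holds(P) for P in all_relations))  # 512 True
```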
H: Simplify this expression with nested radical signs My question is- Simplify: $$\frac1{\sqrt{12-2\sqrt{35}}}-\frac2{\sqrt{10+2\sqrt{21}}}-\frac1{\sqrt{8+2\sqrt{15}}}$$ AI: $$ \begin{align} & {}\quad \frac1{\sqrt{12-2\sqrt{35}}}-\frac2{\sqrt{10+2\sqrt{21}}}-\frac1{\sqrt{8+2\sqrt{15}}}\\[10pt] & =\frac {1}{\sqrt{ 12-2 \sqrt {35}}} \frac {\sqrt{ 12+2 \sqrt {35}}}{\sqrt{ 12+2 \sqrt {35}}}- \frac {2 }{\sqrt{ 10+2 \sqrt {21}}} \frac {\sqrt{ 10-2 \sqrt {21}}}{\sqrt{ 10-2 \sqrt {21}}}- \frac{1}{\sqrt {8+2\sqrt {15}}}\frac{\sqrt {8-2\sqrt {15}}}{\sqrt {8-2\sqrt {15}}}\\[10pt] & =\frac {\sqrt{ 12+2 \sqrt {35}}}{2}-\frac {\sqrt{ 10-2 \sqrt {21}}}{2}-\frac {\sqrt {8-2\sqrt {15}}}{2}\\[10pt] & =\frac {{\sqrt{ 12+2 \sqrt {35}}}- {\sqrt{ 10-2 \sqrt {21}}}- {\sqrt {8-2\sqrt {15}}}}{2}\\[10pt] & = \frac {{{|\sqrt 5 + \sqrt 7|}}- {{|\sqrt 3 - \sqrt 7|}}- {{|\sqrt 3 - \sqrt 5|}}}{2}=\sqrt 3 \end{align} $$
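A floating-point check in Python that the whole expression is indeed $\sqrt 3$:

```python
import math

# Numeric check that the nested-radical expression simplifies to sqrt(3).
value = (1 / math.sqrt(12 - 2 * math.sqrt(35))
         - 2 / math.sqrt(10 + 2 * math.sqrt(21))
         - 1 / math.sqrt(8 + 2 * math.sqrt(15)))
print(value, math.sqrt(3))  # the two values agree
```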
H: How to formally justify that $\int o(x) \, dx\sim o(x^2)$? I'm trying to evaluate the following limit: $$\lim_{x\to 0}\frac{\sin\left(\int_{x^3}^{x^2}\left(\int_0^t g(s^2) \, ds\right) \, dt\right)}{x^8}$$ for a differentiable function $g:[-1,1]\to\mathbb{R}$ such that $g(0)=0$, $g'(0)=1$. I developed $g$'s Taylor expansion near $0$ and found that $g(x)=x+o(x)$, where $o(x)$ is a function such that $o(x)/x \to 0$ as $x\to 0$. Now, intuitively $o'(x^n)\sim o(x^{n-1})$ and $\int o(x^n) \, dx\sim o(x^{n+1})$, but how can I formally justify this? I feel that I don't understand Taylor's theorem well enough. By the way, I calculated (using my unjustified intuitions) that the limit above exists and equals $\frac{1}{12}$. Thanks for your help! AI: You can't differentiate little-o expressions and expect the right answer. (Standard counterexample: $x^n\sin(1/x)$ and variations on this theme.) However, it works with integration: Say that $f(x)=o(x)$ as $x\to0$. This means: For any $\varepsilon>0$, there is $\delta>0$ so that $\lvert f(x)\rvert\le\varepsilon\lvert x\rvert$ whenever $\lvert x\rvert<\delta$. Assuming $\varepsilon$ is given and $\delta$ is chosen accordingly, whenever $\lvert x\rvert<\delta$ you find $$\left|\int_0^x f(t)\,dt\right|\le\left|\int_0^x\lvert f(t)\rvert\,dt\right|\le\left|\int_0^x\lvert \varepsilon t\rvert\,dt\right|=\tfrac12\varepsilon x^2,$$ and it follows that $$\int_0^x f(t)\,dt=o(x^2).$$
H: $m!n! < (m+n)!$ Proof? Prove that if $m$ and $n$ are positive integers then $m!n! < (m+n)!$ Given hint: $m!= 1\times 2\times 3\times\cdots\times m$ and $1<m+1, 2<m+2, \ldots , n<m+n$ It looks simple but I'm baffled and can't reach the correct proof. Thank you. AI: Notice that $m!n!$ and $(m+n)!$ both have the same number of terms. Let's compare them: $$m!n! = (1 \times 2 \times \ldots \times m) \times (1 \times 2 \times \ldots \times n)$$ $$(m+n)! = (1 \times 2 \times \ldots \times m) \times ((m+1) \times (m+2) \times \ldots \times (m+n))$$ Both expressions have the same first $m$ terms, but after that each term in the second expression is bigger than the corresponding term in the first: $m+1 > 1$, etc.
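The inequality is cheap to confirm exhaustively for small arguments; a Python check over $1\le m,n\le 20$:

```python
import math

# Exhaustive check of m! n! < (m+n)! for 1 <= m, n <= 20; this mirrors the
# term-by-term comparison in the answer (the last n factors strictly grow).
ok = all(math.factorial(m) * math.factorial(n) < math.factorial(m + n)
         for m in range(1, 21) for n in range(1, 21))
print(ok)  # True
```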
H: There exists a unique isomorphism $M \otimes N \to N \otimes M$ I want to show that there is a unique isomorphism $M \otimes N \to N \otimes M$ such that $x\otimes y\mapsto y\otimes x$. (Prop. 2.14, i), Atiyah-Macdonald) My proof idea is to take a bilinear $f: M \times N \to N \otimes M$ and then use the universal property of the tensor product to get a unique linear map $l : M \otimes N \to N \otimes M$. Then show that $l$ is bijective. Can you tell me if my proof is correct: Let $M,N$ be two $R$-modules. Let $(M \otimes N, b)$ be their tensor product. Then $$ \varphi: M \times N \to N \otimes M$$ defined as $$ (m,n) \mapsto n \otimes m$$ and $$ (rm , n) \mapsto r(n \otimes m)$$ $$ (m , rn) \mapsto r(m \otimes n)$$ is bilinear. Hence by the universal property of the tensor product there exists a unique $R$-module homomorphism ($\cong$ linear map) $l: M \otimes N \to N \otimes M$ such that $l \circ b = \varphi$. $l$ is bijective: $l$ is surjective: Let $n \otimes m \in N \otimes M$. Then $l(m \otimes n) = l(b(m,n)) = \varphi (m,n) = n \otimes m$. $l$ is injective: Let $l(m\otimes n) = l(b(m,n)) = 0 = \varphi(m,n) = n \otimes m$. Then $n \otimes m = 0$ implies that either $n$ or $m$ are zero and hence $m \otimes n = 0$. AI: It is not true that $n\otimes m = 0$ implies either $n$ or $m =0$ (see example below). To prove injectivity you should define a map going the other way and show that these maps are inverse. example: $\bar1\otimes \bar2 \in \mathbb{Z}/2\mathbb{Z}\otimes_\mathbb{Z}\mathbb{Z}/3\mathbb{Z}$ satisfies $\bar1\otimes \bar2=\bar1\otimes (2\cdot\bar1)=(\bar1\cdot 2)\otimes \bar1= \bar0\otimes \bar1=0$ but $\bar1\in\mathbb{Z}/2\mathbb{Z}$ and $\bar2\in\mathbb{Z}/3\mathbb{Z}$ are not zero.
H: Asymptotics for sum of binomial coefficients from Concrete Mathematics Concrete Mathematics EXERCISE 9.25: Supposing \[ S_n = \sum_{k=0}^n \binom{3n}k \] Prove that \[ S_n = \binom{3n}{n}\left(2-\frac4n+O\left(\frac1{n^2}\right)\right) \] This sequence also appears in OEIS A066380 I have been trying to understand the answer to the problem, but failed: \[S_n\left/\binom{3n}n\right. = \sum_{k=0}^n \frac{n\cdots(n-k+1)}{(2n+1)\cdots(2n+k)}\tag1\] We may restrict the range of summation to $0 \le k \le (\log n)^2$, say. In this range $n\cdots(n-k+1) = n^k\left(1-\binom k2/n+O(k^4/n^2)\right)$ and $(2n+1)\cdots(2n+k) = (2n)^k\left(1+\binom{k+1}2/2n+O(k^4/n^2)\right)$, so the summand is \[ \frac1{2^k}\left(1-\frac{3k^2-k}{4n}+O\left(\frac{k^4}{n^2}\right)\right) \tag2 \] Hence the sum over $k$ is \[ 2-4/n+O(1/n^2) \tag3 \] Q.E.D. The formula (1) is acceptable, because \[ \left. \binom{3n}{n-k} \right/ \binom{3n}{n} = \frac{n\cdots(n-k+1)}{(2n+1)\cdots(2n+k)} \] The equation (2) may hold for $0 \le k \le (\log n)^2$, but formula (3) seems too strange (notice that $k$ is restricted, not ranging over all integers in $[0..n]$). How can we conclude that? I have tried to consider equation (2) as the partial sum of a power series (the Taylor series in $n^{-1}$), but there seems to be no evidence that the corresponding power series of (2) or (3) converges. Now OP has understood the answer. A trivial trick is necessary. OP will look for someone clever to give a complete solution and set his/her answer as the accepted answer.
AI: The claim is that $$\sum_{k=0}^{m} \frac1{2^k}\left(1-\frac{3k^2-k}{4n}+O\left(\frac{k^4}{n^2}\right)\right) = 2-4/n+O(1/n^2) $$ where $m=\lfloor \log_2^2 n \rfloor.$ Computing one term at a time: $ \displaystyle A(m)= \sum_{k=0}^m 1/2^k = 2 - 2^{-m}= 2- \frac{1}{n^{\log n}}= 2 + \mathcal{O}(n^{-2}).$ This far into the book you should know how to compute $\displaystyle \sum_{k=0}^m \frac{3k^2-k}{2^k} = \frac{ 2^{m+4} -3m^2-11m-16}{2^m}.$ (In case you forgot, try differentiating $\sum x^m/2^m$.) The only thing that survives the $\mathcal{O}(n^{-2})$ war is $2^4=16$ so the second term contributes $-4/n + \mathcal{O}(n^{-2}).$ And finally, $\displaystyle \sum_{k=1}^{\infty} \frac{k^4}{2^k}$ is convergent so the last terms contribution is certainly $\mathcal{O}(n^{-2}).$ Hence the result.
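The quoted closed form, and the limiting value $16$ that produces the $-4/n$ term, can be checked mechanically with exact rational arithmetic. A minimal sketch (the helper names are mine, not from the answer):

```python
from fractions import Fraction

def partial_sum(m):
    """sum_{k=0}^m (3k^2 - k)/2^k, computed term by term."""
    return sum(Fraction(3 * k * k - k, 2 ** k) for k in range(m + 1))

def closed_form(m):
    """The quoted closed form (2^{m+4} - 3m^2 - 11m - 16)/2^m."""
    return Fraction(2 ** (m + 4) - 3 * m * m - 11 * m - 16, 2 ** m)

# The two expressions agree for every m, and the sum tends to 16,
# which after division by 4n is exactly the -4/n term.
assert all(partial_sum(m) == closed_form(m) for m in range(60))
assert abs(float(partial_sum(60)) - 16) < 1e-9
```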
H: Proving that the function $f(x,y)=\frac{x^2y}{x^2 + y^2}$ with $f(0,0)=0$ is continuous at $(0,0)$. How would you prove or disprove that the function given by $$f(x,y) = \begin{cases} \dfrac{x^2y}{x^2 + y^2} & (x,y) \neq (0,0) \\ 0 & (x,y) = (0,0) \end{cases}$$ is continuous at $(0,0)$? AI: Observe that $$ \left| \frac{x^2y}{x^2+y^2} \right| \leq \frac{x^2 |y|}{x^2} = |y| $$ provided $x \neq 0$; when $x = 0$ the function vanishes, so in fact $|f(x,y)| \le |y|$ for all $(x,y) \neq (0,0)$. Since $|y| \to 0$ as $(x,y) \to (0,0)$, the squeeze theorem gives $f(x,y) \to 0 = f(0,0)$, so $f$ is continuous at $(0,0)$.
H: Area of Square - Comparing squares The question is: If the area of a parallelogram $JKLM$ is $n$ and if the length of $KN$ is $n+(1/n)$, then find the length of $JM$. (The answer is $n^2 /( n^2+1 )$.) How would I go about solving this problem? AI: The area of a parallelogram (or see on Wikipedia) is the base times the height. The base here is $JM$ and the height is $KN$, so the area is $$KN \cdot JM = n$$ So you have $$ \left(n + \frac{1}{n}\right)\cdot JM = n $$ Then you solve for $JM$: dividing through gives $JM = \dfrac{n}{n+\frac{1}{n}} = \dfrac{n^2}{n^2+1}$.
H: Evaluating $\int_{0}^{1} \frac{x^{2} + 1}{x^{4} + 1 } \ dx$ How do I evaluate $$\int_{0}^{1} \frac{x^{2} + 1}{x^{4} + 1 } \ dx$$ I tried using substitution but I am getting stuck. If there was $x^3$ term in the numerator, then this would have been easy, but this one doesn't. AI: Hints: Try dividing the numerator and denominator by $x^{2}$. Then you get $\displaystyle \int\frac{1 + \frac{1}{x^2}}{x^{2}+\frac{1}{x^2}} \ dx$. Write $\displaystyle x^{2} +\frac{1}{x^2}$ as $\displaystyle \biggl(x-\frac{1}{x}\biggr)^{2} + 2$
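Following the hints to the end: with $u = x - \frac1x$ the integral becomes $\int \frac{du}{u^2+2}$, whose antiderivative is $\frac{1}{\sqrt2}\arctan\frac{u}{\sqrt2}$, and the value comes out to $\frac{\pi}{2\sqrt2}$. A quick numerical sketch (my own check, simple midpoint rule) agrees:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

integrand = lambda x: (x * x + 1) / (x ** 4 + 1)
approx = midpoint(integrand, 0.0, 1.0)
exact = math.pi / (2 * math.sqrt(2))   # the value the hints lead to
assert abs(approx - exact) < 1e-8
```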
H: Wedge product of 1-Forms I'm trying to write down the wedge product of 2 1-forms on an n-dimensional Manifold. $\alpha = \alpha_1 dx^1 + \alpha_2 dx^2 + \cdots + \alpha_n dx^n$ and $\beta = \beta_1 dx^1 + \beta_2 dx^2 + \cdots + \beta_n dx^n$ I know how to do this for the 2 and 3 dimensional case. But I'm having a problem with the n-dimensional case. More specifically, what sign do the individual 2-forms get? What I mean by this is, let's consider a few terms of $\alpha \wedge \beta$: $\cdots \alpha_1 \beta_2 dx^1\wedge dx^2 + \alpha_2 \beta_1 dx^2 \wedge dx^1 \cdots$ Now the problem I'm having is, I think $dx^2\wedge dx^1 = -dx^1\wedge dx^2$. Which allows me to combine the above two terms. For 2 and 3 dimensions, this seems easy, as I can just look at the permutations, but in n-dimensions, (1) what does this permutation look like? Or is it always minus if I switch the $dx$? (2) Does this mean that $\alpha\wedge\alpha = 0$ always for 1-forms? AI: You get $$\alpha\wedge\beta = \sum_{1\le i<j \le n}(\alpha_i\beta_j-\beta_i\alpha_j)dx^i\wedge dx^j $$ For (1): yes, switching any two $dx$'s always introduces a minus sign, $dx^j\wedge dx^i=-dx^i\wedge dx^j$; this is what lets you collect $\alpha_i\beta_j\,dx^i\wedge dx^j + \alpha_j\beta_i\,dx^j\wedge dx^i$ into the single coefficient above. For (2): yes, setting $\beta=\alpha$ makes every coefficient $\alpha_i\alpha_j-\alpha_j\alpha_i$ vanish, so $\alpha\wedge\alpha=0$ for every $1$-form $\alpha$.
H: Confusion about unique isomorphism $M \otimes N \to N \otimes M$ This is a follow up question to my previous question here. I'm confused about the following: in Atiyah-Macdonald they state that there exists a unique isomorphism $M \otimes N \to N \otimes M$, $m \otimes n \mapsto n \otimes m$. I'm not sure why AM write unique: If I already know the map, $m \otimes n \mapsto n \otimes m$, then there is no other map that is exactly the same. So I think I get uniqueness for free and all I have to show is that $m \otimes n \mapsto n \otimes m$ is an isomorphism. On the other hand, and that's the way I understood the proposition in the book, if I want to show that there exists a unique isomorphism (without knowing what it is in advance) then I can use the universal property of the tensor product to get uniqueness and the map will turn out to be $m \otimes n \mapsto n \otimes m$ which is an isomorphism. What am I missing? AI: You are misreading Atiyah-Macdonald. You are reading: "There is only one linear map $M \otimes N \rightarrow N \otimes M$. Furthermore this map has the property that $m \otimes n \mapsto n \otimes m$. They mean: "Consider the set of all linear maps $M \otimes N \rightarrow N \otimes M$ which satisfy the property $m \otimes n \mapsto n \otimes m$. There is only one such map." The former can't possibly be right because you could precompose with any endomorphism of $M \otimes N$. The only times you'll have a unique linear map between those two spaces is if one of M and N is the zero vector space (otherwise you have this map and the zero map).
H: Prove that $4^{2n} + 10n -1$ is a multiple of 25 Prove that if $n$ is a positive integer then $4^{2n} + 10n - 1$ is a multiple of $25$ I see that proof by induction would be the logical thing here so I start with trying $n=1$ and it is fine. Then assume statement is true and substitute $n$ by $n+1$ so I have the following: $4^{2(n+1)} + 10(n+1) - 1$ And I have to prove that the above is a multiple of 25. I tried simplifying it but I can't seem to get it right. Any ideas? Thanks. AI: Here is a proof by induction. Suppose $4^{2n}+10n-1=25k$. $$4^{2(n+1)}+10(n+1)-1$$ $$=16\cdot 4^{2n}+10n+9$$ $$=16\cdot 4^{2n}+160n-16-150n+25$$ $$=16(4^{2n}+10n-1)-150n+25$$ $$=16(25k)-25\cdot 6n+25$$ $$=25(16k-6n+1)$$
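Before (or after) proving it, the claim is easy to sanity-check by brute force:

```python
# Direct check of 25 | 4^(2n) + 10n - 1 for the first few hundred n.
assert all((4 ** (2 * n) + 10 * n - 1) % 25 == 0 for n in range(1, 300))
```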
H: Finding the maximal value of a function on an ellipse How would you find the maximal value of $$f(x,y) = x - y^2$$ on $K = \left\{ (x,y) : \frac{x^2}{4} + \frac{y^2}{9} = 1 \right\}$? AI: Because $\frac{x^2}4 + \frac{y^2}9 = 1$, we know $y^2 = 9\left(1 - \frac{x^2}4\right)$. Substituting this into $x-y^2$ gives a function of $x$ only, whose maximum is easy to find (remember that $x\in[-2,2]$).
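Carrying the hint out gives $g(x)=x-9+\frac94x^2$ on $[-2,2]$, an upward parabola, so the maximum sits at an endpoint: $g(2)=2$, attained at $(2,0)$. A brute-force scan over the ellipse (my own numerical check) agrees:

```python
import math

# Parametrize the ellipse as x = 2 cos t, y = 3 sin t and scan f = x - y^2.
best = max(2 * math.cos(t) - (3 * math.sin(t)) ** 2
           for t in (2 * math.pi * i / 100_000 for i in range(100_000)))
assert abs(best - 2.0) < 1e-9
```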
H: Prove $\binom{p-1}{k} \equiv (-1)^k\pmod p$ Prove that if $p$ is an odd prime and $k$ is an integer satisfying $1\leq k \leq p-1$,then the binomial coefficient $$\binom{p-1}{k} \equiv (-1)^k\pmod p$$ I have tried basic things like expanding the left hand side to $\frac{(p-1)(p-2).........(p-k)}{k!}$ but couldn't get far enough. AI: Hint: $(p-1)(p-2)\cdots(p-k)\equiv(-1)(-2)\cdots(-k)$ because $p\equiv 0$.
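A quick computational check of the congruence over a few odd primes (my own snippet):

```python
from math import comb

# C(p-1, k) ≡ (-1)^k (mod p) for every odd prime p and 1 <= k <= p-1.
for p in (3, 5, 7, 11, 13, 101):
    for k in range(1, p):
        assert comb(p - 1, k) % p == (-1) ** k % p
```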
H: Determinant called Grammian Famously, functions $f_1,f_2,…,f_n$, each of which possesses derivatives up to order $n-1$, are linearly independent on the interval $I$ if $$ \det\left( \begin{array}{ccccc} f_1 & f_2 & f_3 &… &f_n \\ f'_1 & f'_2 & f'_3 &... &f'_n \\ ⋮ & ⋮ & ⋮ &⋮ &⋮ \\ f_1^{(n-1)} & f_2^{(n-1)} & f_3^{(n-1)} &... &f_n^{(n-1)} \end{array} \right) $$ called the Wronskian of $f_1,f_2,…,f_n$, is nonzero for at least one point in the interval $I$. Equivalently, if functions $f_1,f_2,…,f_n$ possess at least $n-1$ derivatives and are linearly dependent on $I$ then $W(f_1,f_2,…,f_n)(x)=0$ for every $x\in I$. So this equivalent statement gives just a necessary condition for dependency of the above functions on the interval. Fortunately, there is a necessary and sufficient condition for dependency of a set of functions $f_1(x),f_2(x),…,f_n(x)$ on $I=[a,b]$: A set of functions $f_1(x),f_2(x),…,f_n(x)$ is linearly dependent on $[a,b]$ iff the determinant below (the Gram determinant) is zero: $$ \det\left( \begin{array}{cccc} \int_{a}^{b} f_1^2 dx& \int_{a}^{b} f_1f_2 dx&… &\int_{a}^{b}f_1f_ndx \\ \int_{a}^{b}f_2f_1dx & \int_{a}^{b}f_2^2 dx &... &\int_{a}^{b}f_2f_ndx \\ ⋮ & ⋮ & ⋮ &⋮ \\ \int_{a}^{b}f_nf_1dx & \int_{a}^{b}f_nf_2dx&... &\int_{a}^{b}f_n^2dx \end{array} \right) $$ It seems to be a great practical theorem, but I couldn't find its proof. I really appreciate your help. AI: If we assume that the $f_j$ are continuous, then we define an inner product by $\langle f,g\rangle=\int_a^bf(x)g(x)dx$. Consider an orthonormal family $g_1,\ldots,g_m$ with $\operatorname{span}\{g_1,\ldots,g_m\}=\operatorname{span}\{f_1,\ldots,f_n\}$ (so $m\le n$). We can write $f_i=\sum_{j=1}^m\alpha_{ij}g_j$, and if $\alpha$ denotes the $n\times m$ matrix whose entries are $\alpha_{ij}$, we have that $G=\alpha \alpha^T$, where $G$ is the last matrix of the OP. The matrix $G$ is invertible if and only if $\alpha$ has rank $n$, i.e. if and only if the $f_i$ are linearly independent, which gives the result. 
Indeed, if $\sum_{k=1}^n\beta_k\alpha_{k,j}=0$ for every $j$, with the $\beta_k$ not all $0$, then $\sum_k\beta_kf_k=0$.
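A numerical illustration of the criterion (my own example functions, with the integrals approximated by a midpoint rule): the Gram determinant of a dependent family is zero, while that of an independent family is not.

```python
def gram_det3(fs, a=0.0, b=1.0, n=4000):
    """3x3 Gram determinant with entries int_a^b f_i f_j dx (midpoint rule)."""
    h = (b - a) / n
    xs = [a + (i + 0.5) * h for i in range(n)]
    G = [[h * sum(fi(x) * fj(x) for x in xs) for fj in fs] for fi in fs]
    return (G[0][0] * (G[1][1] * G[2][2] - G[1][2] * G[2][1])
            - G[0][1] * (G[1][0] * G[2][2] - G[1][2] * G[2][0])
            + G[0][2] * (G[1][0] * G[2][1] - G[1][1] * G[2][0]))

dependent = [lambda x: 1.0, lambda x: x, lambda x: 2 + 3 * x]  # f3 = 2 f1 + 3 f2
independent = [lambda x: 1.0, lambda x: x, lambda x: x * x]

assert abs(gram_det3(dependent)) < 1e-7    # singular up to rounding
assert gram_det3(independent) > 1e-5       # nonzero (exact value is 1/2160)
```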
H: Simplify these expressions with radical sign 2 My question is 1) Rationalize the denominator: $$\frac{1}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$$ My answer is: $$\frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{18}$$ My question is 2) $$\frac{1}{\sqrt{2}+\sqrt{3}-\sqrt{5}}+\frac{1}{\sqrt{2}-\sqrt{3}-\sqrt{5}}$$ My answer is: $$\frac{1}{\sqrt{2}}$$ I would also like to know whether my solutions are right. Thank you, AI: 1. Your answer is almost correct: the denominator should be $12$, not $18$. Multiplying by $$\frac{\sqrt{2}+\sqrt{3}-\sqrt{5}}{\sqrt{2}+\sqrt{3}-\sqrt{5}}$$ and simplifying gives $$\frac{\sqrt{2}+\sqrt{3}-\sqrt{5}}{2 \sqrt 6}=\frac{\sqrt{12}+\sqrt{18}-\sqrt{30}}{12}$$ 2. Your answer is correct. Multiply the first fraction by $$\frac{\sqrt{2}+\sqrt{3}+\sqrt{5}}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$$ and the second by $$\frac{\sqrt{2}-\sqrt{3}+\sqrt{5}}{\sqrt{2}-\sqrt{3}+\sqrt{5}}$$
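Both identities are easy to confirm numerically; a quick sketch:

```python
from math import sqrt, isclose

# Part 1: the rationalization (with denominator 12).
lhs1 = 1 / (sqrt(2) + sqrt(3) + sqrt(5))
rhs1 = (sqrt(12) + sqrt(18) - sqrt(30)) / 12
assert isclose(lhs1, rhs1, rel_tol=1e-9)

# Part 2: the sum collapses to 1/sqrt(2).
lhs2 = 1 / (sqrt(2) + sqrt(3) - sqrt(5)) + 1 / (sqrt(2) - sqrt(3) - sqrt(5))
assert isclose(lhs2, 1 / sqrt(2), rel_tol=1e-9)
```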
H: Why is the following map well defined? Let $H\leq G=\operatorname{Gal}(K/F)$ ($K/F$ is a finite galois extension), why is the following map well defined: $\varphi:G/H\to\Gamma_F(K^H,K)$ defined by $\sigma H\mapsto\sigma|_{K^H}$ ,where $\Gamma_{F}(K^H,K)$ denotes all homomorphisms from $K^H$ to $K$ that fixes $F$. My lecture wrote : Let $\sigma\in G$ ,if $\tau\in H$ then $\tau|_{K^H}=\operatorname{Id}_{K^H}$hence $\sigma\tau|_{K^H}=\sigma|_{K^H}$. Why does this imply (what I understand that need to be shown): $\sigma_1|_{K^H}=\sigma_2|_{K^H}\implies\sigma_2^{-1}\sigma_1\in H$ ? Please note I was not told $H$ is a normal subgroup of $G$. AI: Note that $G/H$ represents the left cosets of $H$ in $G$. Let $\sigma\in G$. Then for every $\tau\in H$ we have that the value of $\sigma$ and of $\sigma\tau$ on $K^H$, the field fixed by $H$, is the same. Thus, every element of $\sigma H$ determines the same map $K^H\to K$. This means that the map $G\to\mathrm{Hom}_F(K^H,K)$ given by restriction actually factors through the cosets. Moreover, if $\sigma_1,\sigma_2\in G$ are such that $\sigma_1|_{K^H} = \sigma_2|_{K^H}$, then for every $a\in K^H$ we have $\sigma_1(a)=\sigma_2(a)$, hence $\sigma_2^{-1}\sigma_1(a) = a$. Thus, $\sigma_2^{-1}\sigma_1$ fixes every $a\in K^H$, hence lies in $H$ (since the extension is Galois, the stabilizer of $K^H$ is exactly $H$). Thus, $\sigma_1H = \sigma_2H$. Hence, the restriction map factors through the cosets but not through any larger subgroup.
H: Most Probable Sum Possible Duplicate: Probability of dice sum just greater than 100 A fair die is rolled and the outcome of the face is summed up each time. We stop rolling when the sum becomes greater than 100. Which of the following is the most probable sum? 103 102 100 101 All have equal probability How best do I approach these types of problems? AI: The sum $100$ is not possible if we stop when the sum first becomes greater than $100$. Maybe you meant "greater than or equal to $100$." But we solve the problem as it stands. Look at where we are just before we "go over." Maybe we are at $100$. Then $101$, $102$, $103$ are equally likely. Maybe we are at $99$. Again, $101$, $102$, $103$ are equally likely. Maybe we are at $98$; same thing. Maybe we are at $97$; same thing. Maybe we are at $96$. Now $103$ next is impossible, but $101$, $102$ are equally likely. Maybe we are at $95$. Then only $101$ is possible among the three. It is certainly possible that the sum just before we go over is $95$ or $96$. So $101$ is the most likely of $101$, $102$, $103$. And $102$ is next. If the problem meant to say we stop when our sum is $\ge 100$, the same reasoning shows $100$ is the most likely "first over" number of our four choices. Added Let $p_{100}$, $p_{99}$, $p_{98}$, down to $p_{95}$ be the probabilities that we are respectively at $100$, $99$, and so on down to $95$ just before we go over. These $p_k$ are not equal, and would be fairly messy to compute. But we don't need to know them. The probability that we end up at $103$ is $\frac{1}{6}\left(p_{100}+p_{99}+p_{98}+p_{97}\right)$. The probability that we end up at $102$ is $\frac{1}{6}\left(p_{100}+p_{99}+p_{98}+p_{97}+p_{96}\right)$, clearly bigger. And the probability we end up at $101$ is even bigger.
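The ranking can be confirmed exactly with a small dynamic program over the running total (my own verification code, not part of the original answer):

```python
from fractions import Fraction

# f[s] = probability that the running total ever equals s (for s <= 100);
# the walk never stops at or below 100, so every such state is transient.
f = [Fraction(0)] * 101
f[0] = Fraction(1)
for s in range(1, 101):
    f[s] = sum(f[s - d] for d in range(1, 7) if s - d >= 0) / 6

# P(first total exceeding 100 equals t) for t = 101..106.
final = {t: sum(f[t - d] for d in range(1, 7) if 0 <= t - d <= 100) / 6
         for t in range(101, 107)}

assert final[101] > final[102] > final[103]   # 101 is the most likely
assert sum(final.values()) == 1               # the walk stops with probability 1
```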
H: When does this sequence start repeating itself? Given the sequence $a_j = b^j \mod q$, where $1 < b, q < 2^n$, how can I prove that the sequence starts repeating itself at some term $a_k$ where $k \leq n$? I have been looking at this problem for hours and am completely stuck on how to do it :/ Any help would be appreciated! AI: Note that if $\gcd(b,q)=1$, then (by Euler's Theorem) there exists $n_0$ such that $b^{n_0}\equiv1\mod q$, so, in this case, $a_1=a_{n_0+1}$, and $k=1$. Otherwise, note that since $q<2^n\le p^n$ for every prime $p$, each prime in the factorization of $q$ appears with exponent at most $n-1$. So if $q$ and $b$ have exactly the same primes in their prime factorizations (we can assume $b<q$, since otherwise we replace $b$ by $b_0=b\mod q$), then each such prime occurs in $b^{n-1}$ with exponent at least $n-1$, so $q$ divides $b^{n-1}$, i.e. $b^{n-1}\equiv 0\mod q$, and the sequence is constant from $k=n-1<n$ on. Otherwise, for $p=(n-1)+l$, for any $l\ge0$, we have $$ a_p=b^{n-1}b^l\mod q $$ Define $m'=\gcd(b^{n-1},q), q'=q/m'>1,b'=b^{n-1}/m'$. In this case, $\gcd(b,q')=1$ (any prime dividing both $b$ and $q$ has exponent in $b^{n-1}$ at least its exponent in $q$, so it is absorbed into $m'$), so there exists $n_1$ such that $b^{n_1}\equiv 1\mod q'$, that is $$ b'b^{n_1}\equiv b'\mod q' $$ So (multiplying through by $m'$): $$ b^{n-1}b^{n_1}\equiv b^{n-1}\mod q $$ and so, $a_{n-1}=a_{(n-1)+n_1}$, so if we set $k=n-1<n$ the sequence starts repeating, as required.
H: How to prove $641|2^{32}+1$ Possible Duplicate: To show that Fermat number $F_{5}$ is divisible by $641$. How to prove that $641$ divides $2^{32}+1$? What would be a technical way to do this? I want to teach it to my students. Any help. :-) AI: In light of Peter's comment: we have: $2^2=4$, $2^4=16, 2^8=256,$ $2^{16}=256^2=65536=641k_1+154,$ $2^{32}=641k_2+154^2=641k_3+640$ the rest is very easy.
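The same repeated-squaring computation can be mirrored in a couple of lines:

```python
# Repeated squaring mod 641, mirroring the hand computation above.
x = 2
for _ in range(5):               # 2 -> 2^2 -> 2^4 -> 2^8 -> 2^16 -> 2^32
    x = x * x % 641
assert x == 640                  # i.e. 2^32 ≡ -1 (mod 641)
assert (2 ** 32 + 1) % 641 == 0
```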
H: Additional insights when converting sums to products Given some sum, $$\displaystyle\sum \ln x_i = k $$ We have $$\ln \prod x_i = k$$ I've always found this relation to be really interesting. I saw it used once in a linear algebra proof but I haven't seen it since. Are there any other interesting uses of this trick? AI: Yes, finding product expansions for sums, especially infinite sums, can sometimes shed new light on them. For example, consider the zeta function, defined for complex numbers $s$ with $\Re(s)>1$ by $$\zeta(s)=\sum_{n=1}^\infty \frac{1}{n^s}.$$ It is not hard, though not trivial, to show this "factors" as one would expect, giving the Euler product $$\zeta(s)=\prod_p \left(1-\frac{1}{p^s}\right)^{-1}$$ where the product is taken over all primes $p$. This shows that the zeta function has no zeros for $\Re(s)>1$, something which is not obvious from examining the sum representation.
H: Cardinality of the set of ultrafilters on an infinite Boolean algebra Let $\mathfrak B$ be a Boolean algebra with an infinite power $\kappa$. My question is how many ultrafilters does it have? $\kappa$ or $2^\kappa$? Or even smaller? AI: It can be at least those two options: Example I: Consider the algebra $\{A\subseteq\mathbb R\mid A\text{ finite, or }\mathbb R\setminus A\text{ is finite}\}$, that is finite co-finite subsets of $\mathbb R$. It is not hard to verify that this is indeed a Boolean algebra of size $2^{\aleph_0}$. Suppose that $U$ is an ultrafilter over this Boolean algebra, if it does not contain any singleton then it has to be the collection of co-finite sets; if it contains a singleton then it is principal. We have continuum many principal ultrafilters and one free. Therefore a Boolean algebra of size continuum with continuum many ultrafilters. Example II: On the other hand consider $\mathcal P(\mathbb N)$. We know that there are $2^{2^{\aleph_0}}$ many ultrafilters over this Boolean algebra. So we have a Boolean algebra of size $2^{\aleph_0}$ with $2^{2^{\aleph_0}}$ many ultrafilters. For a further discussion on this sort of example see: The set of ultrafilters on an infinite set is uncountable
H: Question on uniform integrability Consider a probability measure $m$ over $W \subseteq \mathbb{R}^m$, so that $m(W) = 1$. Consider a function $f: X \times W \rightarrow \mathbb{R}_{\geq 0}$, with compact $X \subset \mathbb{R}^n$, such that the following proposition holds true. For any $\epsilon > 0$ there exists $c > 0$ such that $m(\{w \in W \mid f(x,w) \geq c \}) < \epsilon \ $ for all $x \in X$. In other words, the measure of $\{f\geq c\}$ can be made arbitrarily small, uniformly on $X$. What are the (weakest) conditions to have the family $\{f(x,\cdot)\}_{x \in X}$ Uniformly Integrable? AI: [Old answer] It is convenient to consider $\epsilon=2^{-n}$. Let $c_n$ be the corresponding $c$. You need the sum $\sum_{n}2^{-n}c_n$ to converge. [New answer] It is convenient to consider $\epsilon_n=\sup_{x\in X} m(\{w\in W\colon f(x,w)\ge 2^n\})$. We know that $\epsilon_n\to 0$ as $n\to \infty$. I claim that the condition $\sum_{n=0}^\infty 2^n \epsilon_n<\infty$ is sufficient for uniform integrability. Fix $x$ and consider the sets $W_n=\{w\in W\colon 2^n\le f(x,w)< 2^{n+1}\}$, $n=0,1,2,\dots$. Note that $m(W_n)\le \epsilon_n$. Since $$\int_W f(x,w)\,dm(w) = \int_{\{f<1\}} f(x,w)\,dm(w) + \sum_{n=0}^\infty \int_{W_n} f(x,w)\,dm(w) $$ we can estimate the integral $$\int_W f(x,w)\,dm(w) \le 1 + \sum_{n=0}^\infty 2^{n+1}\epsilon_n<\infty$$ Moreover, if instead of $W$ we integrate over the set $\{f\ge 2^N\}$, the estimate becomes $$\int_{f\ge 2^N} f(x,w)\,dm(w) \le \sum_{n=N}^\infty 2^{n+1}\epsilon_n<\infty$$ which is uniformly small.
H: Primitive element theorem - why any finite and separable extension is simple I have it in my lecture notes that the claim: Let $K/F$ be a finite and separable extension; then $K$ is a simple extension of $F$ follows immediately from the theorem: Let $K/F$ be a finite extension; then it is simple iff $K/F$ has a finite number of subfields. My question is why? I know that the extension is finite and separable, hence $K=F(a_1,...,a_n)$ where the $a_i$ are all separable, but why are there only a finite number of subfields between $F$ and $K$? AI: Hint: Consider the Galois closure of $K$.
H: Why can't a model "say" of itself that it is countable? Why can't a (standard?) model of ZFC "say of itself" that it is countable? That is, why is there no bijection $f \in \frak M$ between $\frak M$ and $\omega^{\frak M}$? (I've read that it fails regularity, or even without regularity we get Cantor's paradox. But a direct answer to the question would be most helpful.) Thanks. AI: If $\frak M$ is a [standard] model of ZFC then we know several things: $\frak M$ thinks that $\{x\mid x\notin x\}=\frak M$ is not a set. If $f\in\frak M$ and $\frak M$ thinks that $f$ is a function, then the range of $f$ is a set in $\frak M$. (This is an instance of the axiom schema of replacement) $\omega^\frak M$ is a set in $\frak M$. These combined tell us that if $\frak M$ knew about a function from its own $\omega$ onto its entire universe it would violate the second thing in the list above, and will not be a model of ZFC. In the case of a standard model, we can also have the contradiction from the fact that if such $f$ was in $\frak M$ then we would have $\frak M\in\frak M$ and that, as you said, would contradict the axiom of regularity (both in the universe and in $\frak M$) but this is in addition to the above argument.
H: Why does it always take n numbers to characterize a point in n-dimensional space (or does it)? I don't know if this is obvious and a dumb question or not, but, here we go. To characterize a point in 2-d space we can use standard $x,y$ coordinates or we can use polar coordinates. There are probably other ways to do it other than those two as well. It's very interesting to me that those somehow both require exactly two numbers—either an $x$ and a $y$ or an $r$ and a $\theta$. It seems like a magical coincidence to me that these two completely different ways to describe a point require the same number of numbers. Then moving into 3-d space, there's the same thing. We can use $(x,y,z)$ or $(\rho,\phi ,z)$ (cylindrical coordinates) or $(r,\theta ,\phi)$ (spherical coordinates). These coordinate systems seem to be to function in vastly different ways, and yet they all take three numbers. It's a conspiracy. So I mean on the one hand, it's intuitive that it should take three numbers to describe three dimensional space. On the other hand, I can't figure out why this should be true. So question a) why is this the case and question b) can we imagine a world where there were points in n dimensions and two coordinate systems that took different numbers of numbers to characterize points? P.S. I don't really know what to tag this as. AI: Well, in a silly way, it only takes one number to characterize a point in $n$-dimensional space. It will be easiest for me to explain how to do this with an $n$-dimensional box $[0, 1]^n$. If we write out $n$ numbers $(x_1, ... x_n)$ describing a point in this box using their decimal expansions, e.g. 
take $n = 5$ and $$x_1 = 0.12345...$$ $$x_2 = 0.33333...$$ $$x_3 = 0.52525...$$ $$x_4 = 0.31415...$$ $$x_5 = 0.27182...$$ then I can write down a single number that describes all of them by interweaving their digits: $$y = 0.1353223217335414321853552...$$ (Strictly speaking this is a small lie because I have not described what to do with numbers like $0.1 = 0.0999...$ which have two decimal expansions, but this turns out to be easily fixable so I'll gloss over it.) However, this is not a useful way to describe points in $n$-dimensional space; small changes in $y$ may result in large changes in the $x_i$ and it is a huge hassle to have to deal with this. More precisely, the map above fails pretty badly to be continuous, and in particular it is not differentiable, so we can't use calculus in a way compatible with this map (e.g. we can't compare integrals in the two coordinate systems using the multivariate change of variables formula). If we want a differentiable map that allows us to go back and forth between two coordinate systems, those coordinate systems need to be describable using the same number of numbers. This follows from the fact that their derivatives (Jacobian matrices) need to do the same thing to the tangent spaces. To really understand this, first take a course in linear algebra, then a course in multivariable calculus. (I do not understand why these are usually taught in the other order.) If we only want a continuous map in both directions, then it is not at all obvious that it still takes $n$ numbers to describe $n$-dimensional space, but this turns out to be true by a difficult theorem called invariance of domain.
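The interweaving map from the answer, restricted to finite decimal strings, can be sketched directly (the helper name `interleave` is mine):

```python
def interleave(decimals, digits=5):
    """Interweave the first `digits` decimal digits of each number in turn.
    (Finite decimal strings only -- the 0.1 = 0.0999... subtlety is ignored.)"""
    strs = [s.split(".")[1][:digits] for s in decimals]
    return "0." + "".join(s[i] for i in range(digits) for s in strs)

xs = ["0.12345", "0.33333", "0.52525", "0.31415", "0.27182"]
assert interleave(xs) == "0.1353223217335414321853552"
```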
H: Determining all Sylow $p$-subgroups of $S_n$ up to isomorphism? I'm trying to understand a classification of all Sylow $p$ subgroups of $S_n$. Let $Z_p$ be the subgroup of $S_p$ generated by $(12\cdots p)$. Then $Z_p\wr Z_p$ has order $p^p\cdot p=p^{p+1}$, and is isomorphic to a subgroup of $S_{p^2}$. Define inductively $Z_p^{\wr r}$ by $Z_p^{\wr 1}=Z_p$ and $Z_p^{\wr k+1}=Z_p^{\wr k}\wr Z_p$. It is easy to show by induction that $Z_p^{\wr r}$ has order $p^{(p^{r-1}+p^{r-2}+\cdots+1)}$, and since by inductively assuming $Z_p^{\wr r-1}$ is isomorphic to a subgroup of $S_{p^{r-1}}$ and $Z_p$ isomorphic to a subgroup of $S_p$, then $Z_p^{\wr r}$ is isomorphic to a subgroup of $S_{p^r}$. However, I can't make the jump that if $n=a_0+a_1p+\cdots+a_kp^k$ is the base $p$ expansion, then any Sylow $p$-subgroup is isomorphic to $$ \underbrace{Z_p^{\wr 1}\times\cdots\times Z_p^{\wr 1}}_{a_1}\times \underbrace{Z_p^{\wr 2}\times\cdots\times Z_p^{\wr 2}}_{a_2}\times\cdots\times \underbrace{Z_p^{\wr k}\times\cdots\times Z_p^{\wr k}}_{a_k}. $$ I know this group has order $$ (p)^{a_1}(p^{p+1})^{a_2}\cdots(p^{(p^{k-1}+p^{k-2}+\cdots+1)})^{a_k}=p^{\sum_{i=1}^k a_i(1+\cdots+p^{i-1})}=p^{\nu_p(n!)} $$ which is the order of any Sylow $p$-subgroup of $S_n$, based on the formula here. However, I couldn't find an epimorphism from any Sylow $p$-subgroup onto this product, or vice versa. Is it clear how this is isomorphic to a subgroup of $S_n$? Then I understand that the isomorphism would just follow from the Sylow theorems. Thanks. AI: Given two permutation groups H on n points and K on m points, there is a permutation group called H × K that acts on n + m points. The group is abstractly the direct product of the two groups, and the action is very simple: an element like $(h,k)$ acts on the first n points exactly like h did on its n points, and on the last m points exactly like k did on its m points. 
For instance the Sylow 3-subgroup of Sym(6) is $\langle (1,2,3) \rangle \times \langle (4,5,6) \rangle$.
H: Finding what $\langle(135)(246),(12)(34)(56)\rangle\subset S_{6}$ is isomorphic to I am doing an exercise that asks me to find what $\langle(135)(246),(12)(34)(56)\rangle\subset S_{6}$ is isomorphic to. (I am allowed to use only the groups $D_n, S_n, \mathbb{Z}_n$ and their direct sums, where $S_n$ is the permutation group on $n$ elements and $D_n$ is the dihedral group of order $2n$.) I have noted that the first element is of order $3$ and that the second one is of order $2$. I also noted that these elements commute, hence generate an abelian group. I can also say that this group is of order at least $6$ since $\gcd(2,3)=1$. How can I identify $\langle(135)(246),(12)(34)(56)\rangle\subset S_{6}$? If there were a good argument that this group is of order at most $6$, then I could claim that since the only groups of order $6$ are $S_3$ and $\mathbb{Z}_6$, and $S_3$ is non-abelian, this group is isomorphic to $\mathbb{Z}_6$. Can someone please help with this problem? AI: You already did much of the work when you calculated the product of the two elements. $(135)(246)(12)(34)(56)=(145236)$, which clearly has order $6$. On the other hand, if this permutation is $\pi$, it’s easy to check that $\pi^3=(12)(34)(56)$ and $\pi^4=(135)(246)$, so $\pi$ generates the same subgroup.
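The subgroup can also be enumerated by machine to confirm that its order is exactly $6$ and that it is abelian, hence $\mathbb{Z}_6$ (my own scaffolding; composition applies the right factor first, and permutations are 0-indexed tuples):

```python
def compose(p, q):
    """(p∘q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(q)))

# 0-indexed images: (135)(246) sends 1->3, 3->5, 5->1, 2->4, 4->6, 6->2.
a = (2, 3, 4, 5, 0, 1)   # (135)(246)
b = (1, 0, 3, 2, 5, 4)   # (12)(34)(56)

def closure(gens):
    """Generate the subgroup spanned by gens via repeated composition."""
    elems = {tuple(range(6))}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {compose(p, g) for p in elems for g in gens} - elems
    return elems

G = closure([a, b])
assert len(G) == 6                                                # order 6
assert all(compose(p, q) == compose(q, p) for p in G for q in G)  # abelian
# An abelian group of order 6 is cyclic, so G is isomorphic to Z_6.
```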
H: Evaluating a sum to infinity I'm looking for a way that allows me to work out the following sum: $$\sum\limits_{k=1}^{\infty} \sin^2\left(\frac{1}{k}\right)$$ Any hint/suggestion is welcome. Thanks. AI: It may be too much to ask for a closed form. We find an equivalent series that converges very fast. We have $$\begin{eqnarray*} \sum_{k=1}^\infty \sin^2\frac{1}{k} &=& \sum_{k=1}^\infty \frac{1}{2}\left(1-\cos \frac{2}{k}\right) \\ &=& \frac{1}{2} \sum_{k=1}^\infty \sum_{j=1}^\infty \frac{(-1)^{j+1}}{(2j)!} \left(\frac{2}{k}\right)^{2j} \\ &=& \frac{1}{2} \sum_{j=1}^\infty \frac{(-1)^{j+1} 2^{2j}}{(2j)!} \zeta(2j) \\ &=& \frac{1}{4} \sum_{j=1}^\infty \frac{(4\pi)^{2j}}{[(2j)!]^2} B_{2j} \end{eqnarray*}$$ where $\zeta(2j)$ is the zeta function and $B_{2j}$ are the Bernoulli numbers. Interchanging the sums is allowed by Fubini's theorem. The ratio of successive terms goes like $1/j^2$ for $j$ large. Below we give the partial sums to $25$ digits. $$\begin{array}{ll} N & \frac{1}{4} \sum_{j=1}^N \frac{(4\pi)^{2j}}{[(2j)!]^2} B_{2j}\\\hline 1 & 1.644934066848226436472415\cdots \\ 2 & 1.284159655611180372633747\cdots \\ 3 & 1.329374902810489223287726\cdots \\ 4 & 1.326187355647956066654778\cdots \\ 5 & 1.326328589450443236755002\cdots \\ 6 & 1.326324312838454339066804\cdots \\ 7 & 1.326324406812557661734373\cdots \\ 8 & 1.326324405246394595313185\cdots \\ 9 & 1.326324405266867080420232\cdots \\ 10 & 1.326324405266651581194045\cdots \\ 11 & 1.326324405266653446986876\cdots \\ 12 & 1.326324405266653433466641\cdots \\ 13 & 1.326324405266653433549842\cdots \\ 14 & 1.326324405266653433549402\cdots \\ 15 & 1.326324405266653433549404\cdots \end{array}$$
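A slow but direct check of the limit reached by the fast series above (my own numerical sketch):

```python
import math

# Direct partial sum; the tail behaves like sum_{k>N} 1/k^2 ~ 1/N,
# so N = 10^6 pins the value down to about six digits.
N = 10 ** 6
s = sum(math.sin(1.0 / k) ** 2 for k in range(1, N + 1))
assert abs(s - 1.326324405266653) < 1e-5
```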
H: Prove that the intersection of all subfields of the reals is the rationals I'm reading through Abstract Algebra by Hungerford and he makes the remark that the intersection of all subfields of the real numbers is the rational numbers. Despite considerable deliberation, I'm unsure of the steps to take to show that the subfield is $\mathbb Q$. Any insight? AI: First note that $\mathbb Q$ is itself a subfield of $\mathbb R$, so the intersection of all subfields must be a subset of the rationals. Second note that $\mathbb Q$ is a prime field, that is, it has no proper subfields. This is true because if $F\subseteq\mathbb Q$ is a field then $1\in F$, deduce that $\mathbb N\subseteq F$, from this deduce that $\mathbb Z\subseteq F$ and then the conclusion. Third, conclude the equality.
H: Comparison Test about the series $ \sum_{n=1}^\infty \frac{a^n}{n^b} $ When does this series converge? $$ \sum_{n=1}^\infty \frac{a^n}{n^b} $$ I want to know the conditions on $a$ and $b$. AI: Hint: I assume we are working over the reals. For $|a|\lt 1$, use the Ratio Test. For $|a|\gt 1$, the terms don't go to $0$ (or use the Ratio Test). This leaves $a=1$ and $a=-1$. For $a=1$, comparison with the harmonic series if $b \le 1$, and the Integral Test for $b \gt 1$. For $a=-1$, the terms don't go to $0$ if $b\le 0\,$, alternating series if $b\gt 0$.
H: To define a measure, is it sufficient to define how to integrate continuous function? Let me make my question clear. I want to define a measure $\mu$ on a space $X$. But instead of telling you what value I assign for some subset of $X$ (measurable sets that form a $\sigma$-algebra), I tell you that for each $f$ continuous, what $\int_X f(x)d\mu (x)$ is. Then, is this measure uniquely determined? I know if I tell you how to integrate all measurable functions, then this measure is of course uniquely determined. Because integrate characteristic functions will give you measure of that respective set. But is it also true if I only define integration with continuous functions? AI: In general this is false. Here are some examples to think about: If the $\sigma$-algebra on $X$ is not the Borel $\sigma$-algebra, there is generally no hope. (What if $X$ has the trivial topology but the $\sigma$-algebra is not trivial?) Hence you should restrict your attention to Borel measures. Take $X = \{a,b\}$ with the topology $\tau = \{\emptyset, \{a\}, \{a,b\}\}$. The Borel $\sigma$-algebra is $2^X$ but the only continuous functions $f : X \to \mathbb{R}$ are constant, so $\mu_1 = \delta_a$ and $\mu_2 = 2 \delta_a - \delta_b$ agree on all continuous functions. Thus you probably want a Hausdorff space. Take $X = \mathbb{R}$. Let $\mu$ be counting measure and $\nu = 2\mu$. So you probably want to look at $\sigma$-finite measures. As I mentioned in the above comment, on $X = \omega_1 + 1$ (which is compact Hausdorff), one can find two distinct finite measures which agree on all continuous functions. However, here is a positive result. Proposition. Let $\mu, \nu$ be finite Borel measures on a metric space $(X,d)$. If $\int f d\mu = \int f d\nu$ for all bounded continuous $f$, then $\mu = \nu$. Proof. Let $E$ be a closed set, and let $f_n(x) = \max\{1 - n d(x,E), 0\}$. You can check that $f_n$ is continuous and $f_n \downarrow 1_E$ as $n \to \infty$. 
So by dominated convergence, $\mu(E) = \nu(E)$, and $\mu, \nu$ agree on all closed sets. Now we apply Dynkin's $\pi$-$\lambda$ theorem. Let $\mathcal{P}$ be the collection of all closed sets; $\mathcal{P}$ is closed under finite intersections, and $\sigma(\mathcal{P})$ is the Borel $\sigma$-algebra $\mathcal{B}$. Let $\mathcal{L} = \{ A \in \mathcal{B} \colon \mu(A) = \nu(A)\}$. Using countable additivity, it is easy to check that $\mathcal{L}$ is a $\lambda$-system, and we just showed $\mathcal{P} \subset \mathcal{L}$. So by Dynkin's theorem, $\mathcal{B} = \sigma(\mathcal{P}) \subset \mathcal{L}$, which is to say that $\mu,\nu$ agree on all Borel sets, and hence are the same measure.
H: Reductions for regular languages? To reason about whether a language is R, RE, or co-RE, we can use many-one reductions to show how the difficulty (R, RE, or co-RE-ness) of one language influences the difficulty of another. To reason about whether a language is in P, NP, or co-NP, we can use polynomial-time many-one reductions to show how the difficulty (P, NP, or co-NP-ness) of one language influences the difficulty of another. Is there a similar type of reduction we can use for regular languages? For example, is there some type of reduction $\le_R$ such that if $L_1 \le_R L_2$ and $L_2$ is regular, then $L_1$ is regular? Clearly we could arbitrarily define a very specific class of reductions such that this property holds, but is there a known type of reduction with this property? Thanks! AI: There is a very natural model of finite-state reduction, namely the most general finite-state transducer -- one input tape, one output tape, non-deterministic, transitions can be labelled with arbitrary regular sets (with empty strings) on both the input and output side. This can be shown equivalent to Henning's single-symbol operations, but allows for much more intuitive reductions, still within the finite-state realm. The ambiguity Henning speaks of is just the non-determinism. You can even allow such a transducer to have secondary storage (like a Turing machine, pushdown automaton, etc.) as long as there is a uniform constant bound on the size of the secondary storage. Taking that a step further, you can use transformations that do arbitrary computations, but again show that the size of memory needed over all inputs is uniformly bounded, that is, there's a $k$ not depending on the input that limits the size of all memory used. 
Thus you can use pseudo-code, Java, or whatever formalism you like, including forking (that is, non-determinism) -- as long as you have: (i) one input and one output tape/stream, (ii) both streams processed in a single pass, and (iii) total memory uniformly bounded across all forks/threads. In other words, you don't have to model finite-state transformations with transitions on a finite graph, which is a very brittle and finicky programming model. You can use any convenient programming formalism or model, with any structuring of memory you like, as long as it satisfies those criteria. In fact, I propose that as a sort of finite-state equivalent of the Turing-Church thesis. Not quite as crisp as the Turing-Church thesis in the world of recursive functions, but very useful.
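To make the "single pass, uniformly bounded memory" criteria concrete, here is a toy sketch in Python (the languages and the transducer are my own invented example, not from the answer): a one-bit transducer witnessing $L_1=\{w : w \text{ has an even number of a's}\}\;\le_R\;L_2=\{w : w \text{ ends in } 1\}$.

```python
def parity_transducer(stream):
    # A finite-state transduction in the sense above: one input stream,
    # one output stream, a single pass, and exactly one bit of memory.
    even = True
    yield '1'                       # the empty prefix has an even count
    for ch in stream:
        if ch == 'a':
            even = not even
        yield '1' if even else '0'  # emit the running parity

# w has an even number of 'a's  iff  the transduced string ends in '1',
# so this maps L_1-membership questions to L_2-membership questions
assert ''.join(parity_transducer("abaab")).endswith('0')  # three 'a's
assert ''.join(parity_transducer("abab")).endswith('1')   # two 'a's
```

Since regular languages are closed under inverse finite-state transductions, a reduction of this shape does preserve regularity, which is the property asked for in the question.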
H: I can't find differences between $P(1+r)^n$ and $P(2.71828)^{rn}$ They told me $P(1 + r)^n$ can be used to calculate interest on money. Example: you invest \$15,000.00 at 20% annual interest for 3 years: $15,000(1 + 0.20)^3 = 25{,}920$. And that $P(2.71828)^{rn}$ can be used to calculate population growth. Example: we have 500 bacteria growing at a rate of 25% monthly for 16 months: $(500)(2.71828)^{(0.25)(16)} \approx 27{,}299$. I can't tell when to use one over the other, other than by spotting the keywords money and population. How do I tell which one to use? As far as I can tell, they both involve the same ingredients: a starting value, a percentage, and a period of time. AI: The difference is whether you model each discrete step or choose a continuous model. If you are dealing with money, you know that your interest is credited annually, so the discrete model is exact. If you have bacteria or a population, you have no control over the number of babies or bacteria born at any particular time. You just know that you have enough people or bacteria that a statistical approach to the growth rate applies. This means that there are so many bacteria or people that you model the growth in a continuous way. But you have to take care: in the continuous model, $r=0.25$ in your formula does not mean that you have a quarter more bacteria at the end of the month. This is a different $r$ from the one in the discrete model.
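Both formulas are one-liners, so the two examples from the question can be checked directly in Python (using `math.exp`, i.e. the full value of $e$ rather than the truncated 2.71828, which changes the result only slightly):

```python
import math

# discrete (annual) compounding: interest is credited once per period
money = 15_000 * (1 + 0.20) ** 3
assert abs(money - 25_920.0) < 1e-6

# continuous compounding: growth happens "all the time"
bacteria = 500 * math.exp(0.25 * 16)
assert round(bacteria) == 27_299

# with the same r, continuous compounding always grows faster,
# since (1 + r)^n < e^(rn) for r > 0
assert (1 + 0.25) ** 16 < math.exp(0.25 * 16)
```

The last assertion is the quantitative version of the answer's warning: the $r$ in the continuous model and the $r$ in the discrete model are not interchangeable.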
H: Condition for frame of $L_2$ Let $f$ be a continuous, real-valued, compactly supported function in $L_2$ with exactly one maximum. Form the functions $$ f_{m,k}=f^m(x-2^k) $$ Under which conditions would $\{f_{m,k}\}$ be a frame? (A function $f\in L_2(\mathbb{R})$ is said to generate a frame $\{f_{m,k}\}$ of $L_2(\mathbb{R})$ if for some $A,B>0$ we have $A\|g\|^2_2\le \sum_{m, k \in \mathbb{Z}}|\langle g, f_{m,k}\rangle|^2\le B\|g\|^2_2$ for every $g\in L_2(\mathbb{R})$.) Thank you. AI: My impression is: never. A frame sequence must be bounded in $L^2$, so we must have $|f|\le 1$ a.e. Also, the set where $|f|=1$ must have measure zero; otherwise the characteristic function of this set will violate the lower bound. Since $f$ is continuous and compactly supported, $\sup |f|<1$... But the main issue is that neither $k$ nor $m$ increases the frequency of oscillation of $f$, which is necessary to handle highly oscillatory inputs such as $g_N(x)=\sin(Nx)\chi_{[0,2\pi]}$. Indeed, the family $f_{m,k}$ is equicontinuous, and as $N\to\infty$ this makes $\langle g_N,f_{m,k}\rangle$ small, breaking the lower frame bound.
H: Difference between sample mean and true mean of a Gaussian Assume I have a Gaussian distribution $\mathcal{N}(\mu, C)$ with mean $\mu$ and covariance $C$. I'm drawing $n$ random numbers from this distribution. Let $m$ be the mean of these numbers. Is there some formula that gives the probability that the distance $d = ||\mu - m||$ is at least $x$, i.e. $P(d \ge x)$? The background here is that in a recent simulation, the results seemed to cluster around a very slightly different point than expected, and I'd like to calculate the probability of this happening by chance. AI: $x=m-\mu$ follows a normal distribution $\mathcal N(0,C/n)$. I don't think there is a closed form for the probability that its norm is at least $d$, but I don't think this is the right statistical test either. You probably should transform your data so that $C$ becomes the identity matrix, and then $\sum_i x_i^2$ will follow a well-known $\chi^2$ distribution: you will easily find a $p$-value to test your hypothesis. Edit: In the univariate case, this is of course much easier: $$P(\|x\|\le a)=\mathrm{erf}\left(\frac{a}{\sqrt{2C/n}}\right)$$
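In the univariate case, the erf formula above turns directly into a p-value. A sketch in Python (the sample numbers at the end are my own illustration, not from the question):

```python
import math

def p_value(mu, m, C, n):
    # probability that |sample mean - mu| >= |m - mu| when the sample
    # mean is N(mu, C/n), using  P(|x| <= a) = erf(a / sqrt(2*C/n))
    a = abs(m - mu)
    return 1.0 - math.erf(a / math.sqrt(2.0 * C / n))

# e.g. true mean 0, variance C = 1, n = 100 draws, observed mean 0.25:
p = p_value(0.0, 0.25, 1.0, 100)
assert 0.01 < p < 0.02   # unlikely to happen by chance at the 5% level
```

For the multivariate case one would whiten the data first and compare $\sum_i x_i^2$ against a $\chi^2$ distribution, as the answer suggests.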
H: Is "algebraic-variety" a relative concept? Let $A$, $B$ be two non-isomorphic finitely generated $k$-algebras. Is it possible that they are isomorphic as abstract commutative unital rings? (Any concrete examples?) If the answer is yes, what assumptions should one add to preserve the non-isomorphism? AI: Yes. For example, let $k$ be $\mathbb{Q}(x)$, let $A$ be $k$, and let $B$ be $k\left(\sqrt{x}\right)$. As abstract fields, both $A$ and $B$ are isomorphic to a rational function field in one variable over $\mathbb{Q}$ (send a transcendental generator to $x$ or to $\sqrt{x}$, respectively), but they are not isomorphic as $k$-algebras, since $B$ has dimension $2$ over $k$ while $A$ has dimension $1$.
H: Converting from Spherical to Rectangular I need to convert $\rho \sin\phi=2\cos\theta$ into rectangular form. Attempt: I tried using the standard relations: $$x=\rho\sin\phi\cos\theta \\y=\rho\sin\phi\sin\theta\\z=\rho\cos\phi$$ and $\rho^2=x^2+y^2+z^2$ and $\cos\phi=\frac{z}{\sqrt{x^2+y^2+z^2}}$. I cannot find a way to get rid of the $\rho$'s and the sines and cosines. Any hints please. AI: Multiply both sides by $\rho\sin\phi$, and note that $\rho^2\sin^2\phi=x^2+y^2$ while $\rho\sin\phi\cos\theta=x$.
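Following the hint to its conclusion gives $x^2+y^2=2x$. A quick numeric spot-check in Python (the sample angles are my own choice; $\rho$ is then forced by the given equation):

```python
import math

# pick theta and phi, then rho is forced by rho*sin(phi) = 2*cos(theta)
theta, phi = 0.7, 1.1
rho = 2.0 * math.cos(theta) / math.sin(phi)

x = rho * math.sin(phi) * math.cos(theta)
y = rho * math.sin(phi) * math.sin(theta)

# multiplying the equation by rho*sin(phi) gives
# rho^2 sin^2(phi) = 2 rho sin(phi) cos(theta),  i.e.  x^2 + y^2 = 2x
assert abs(x * x + y * y - 2 * x) < 1e-12
```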
H: Evaluating $\int \sqrt{5 + 4x - x^2}dx$ $$\int \sqrt{5 + 4x - x^2}dx$$ I am pretty certain what I need to do in this problem is complete the square and turn it into a trig substitution, but I have no idea how to complete the square with a $-x^2$, or really with this problem at all; I just can't make it work. I tried to see if I could make the problem be the same in any way by just pulling out a negative, but that didn't seem to work. I got the problem up to $$\int \sqrt{ -1(x-2)^2 - 1}dx$$ But I do not think that does me any good. What I think I need to do is have a difference of squares with a square in it or something; I just have to get rid of the $4x$ term somehow. AI: Firstly, it should be $$ \int \sqrt{5 + 4 + (-4) + 4x - x^2}\, dx = \int \sqrt{5 + 4 - (x^2 - 4x + 4)}\, dx = \int \sqrt{9 - (x - 2)^2}\,dx $$ Next a hint. Let $3\sin \theta = x-2$.
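After completing the square, $y=\sqrt{9-(x-2)^2}$ is the upper half of a circle of radius 3 centred at $x=2$, so the integral over the full support $[-1,5]$ should be the area of a semicircle, $9\pi/2$. A numeric check of that (the quadrature is my own sketch, not part of the answer):

```python
import math

def f(x):
    # 5 + 4x - x^2 = 9 - (x - 2)^2; clamp tiny negatives at the endpoints
    return math.sqrt(max(9.0 - (x - 2.0) ** 2, 0.0))

# midpoint rule over the support [-1, 5]: area of a radius-3 semicircle
n = 200_000
h = 6.0 / n
area = sum(f(-1.0 + (i + 0.5) * h) for i in range(n)) * h
assert abs(area - 9 * math.pi / 2) < 1e-3
```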
H: Every $2k$-regular graph contains a 2-factor I need to prove that given a graph which is $2k$-regular, I can find a 2-factor. Meaning, there is a subgraph of the above graph which contains all vertices and is 2-regular. I must say I have no idea where to start with this, so help would be greatly appreciated :) Thanks. AI: Each component of the graph contains an Eulerian cycle, since every vertex has even degree. Now, split each vertex in two, and keep incident to one of the copies all the incoming edges of the Eulerian cycle and to the other all the outgoing ones. This gives a bipartite graph that is $k$-regular. This graph contains a 1-factor as a consequence of the marriage theorem. Glue the split vertices back together, and this 1-factor turns out to be a 2-factor of the original graph.
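The proof is completely constructive, so it can be run on a small example. A sketch in Python (the circulant test graph, Hierholzer's algorithm for the Euler circuit, and Kuhn's augmenting-path matching are my own choices of standard ingredients):

```python
from collections import defaultdict

def euler_circuit(adj, start):
    # Hierholzer's algorithm; adj maps each vertex to a list of neighbours
    # (each undirected edge appears in both lists) and is consumed as we walk
    adj = {u: list(vs) for u, vs in adj.items()}
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)        # delete the reverse copy of the edge
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit                  # consecutive pairs = directed edges

def perfect_matching(right_of):
    # Kuhn's augmenting-path algorithm on the split (bipartite) graph;
    # right_of[u] lists the "in" copies reachable from the "out" copy of u
    match = {}                      # in-copy -> out-copy
    def augment(u, seen):
        for v in right_of[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False
    for u in list(right_of):
        assert augment(u, set())    # k-regular bipartite => matching exists
    return match

# test graph: a 4-regular (k = 2) circulant on 7 vertices, offsets 1 and 2
n, adj = 7, defaultdict(list)
for u in range(n):
    for d in (1, 2):
        adj[u].append((u + d) % n)
        adj[(u + d) % n].append(u)

circ = euler_circuit(adj, 0)        # exists: every degree is even
right_of = defaultdict(list)        # orient each edge along the circuit
for u, v in zip(circ, circ[1:]):
    right_of[u].append(v)

match = perfect_matching(right_of)
factor = [(u, v) for v, u in match.items()]   # glue the copies back together

deg = defaultdict(int)
for u, v in factor:
    deg[u] += 1
    deg[v] += 1
assert all(deg[u] == 2 for u in range(n))     # a 2-factor, as promised
```

Each vertex is matched once as an "out" copy and once as an "in" copy, which is exactly why gluing yields degree 2 everywhere.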
H: Evaluating $\int \sqrt{x^2 + 2x}dx$ $$\int \sqrt{x^2 + 2x}dx$$ I have no clue what to do on this problem. It is in the trig substitution chapter, so I know I have to use that somehow. I know that I cannot complete the square because both terms are positive and will not give me a difference of squares. I know $u$-substitution will not work because I get leftover $x$ terms. I know that I basically have to manipulate this problem algebraically before I can work with it, but I just do not know how to do that. I tried to factor out an $x$ or $-x$, but neither makes progress. AI: Same idea as the last one. Use $$ \int \sqrt{x^2 + 2x }\ \ dx = \int \sqrt{x^2 + 2x + 1 - 1}\ \ dx = \int \sqrt{(x + 1)^2 - 1}\ \ dx $$ and use the hint $\sec \theta = x+1$.
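Carrying the sec substitution through leads to the standard table result $\int\sqrt{u^2-1}\,du = \tfrac12 u\sqrt{u^2-1} - \tfrac12\ln\left|u+\sqrt{u^2-1}\right| + C$ with $u = x+1$ (this closed form is not stated in the answer; I am supplying it). A finite-difference check that it really differentiates back to the integrand:

```python
import math

def F(x):
    # candidate antiderivative with u = x + 1:
    # (1/2) * (u*sqrt(u^2 - 1) - ln(u + sqrt(u^2 - 1)))
    u = x + 1.0
    s = math.sqrt(u * u - 1.0)
    return 0.5 * (u * s - math.log(u + s))

def f(x):
    return math.sqrt(x * x + 2.0 * x)

# verify F' = f by central differences at points where x^2 + 2x > 0
for x in (0.5, 1.0, 3.0):
    h = 1e-6
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-6
```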
H: Family of Self-Adjoint Operators that are Multiplications on a Common $L^2(\mu)$? Suppose that $H$ is some (complex) Hilbert space and that $\{T_\alpha: \alpha \in I\}$ is some collection of bounded self-adjoint operators on $H$. A version of the spectral theorem states that for each $\alpha$, there exists a measure space $(X,M,\mu)$, a bounded real-valued function $\phi$ and a unitary operator $U: H \to L^2(\mu)$ such that $$T_\alpha = U^*M_\phi U$$ where $M_\phi$ denotes the multiplication operator induced by $\phi$. I was wondering under which conditions on $\{T_\alpha\}$ we can find a measure space $(X,M,\mu)$ such that all of the $T_\alpha$ are unitarily equivalent to a multiplication on $L^2(\mu)$? Even better, under what conditions can we choose the same $U$ to work for all $T_\alpha$? For example (although these are not bounded) all constant coefficient partial differential operators on $L^2(\mathbb{R}^n)$ are Fourier multipliers. What makes them special? AI: The situation that's usually considered is when you want to use the same $U$ for all $T_\alpha$. In this case, it's easy to see that it's necessary for the $T_\alpha$s to commute. As I recall, this is also sufficient: there's a spectral theorem for commuting families of self-adjoint operators. Unfortunately, I can't immediately find a reference so I could be missing a hypothesis. I've made this CW so somebody else could edit one in. I don't know about the possibility of using a different $U_\alpha$ for each $T_\alpha$. It's not really clear to me what such a representation would be good for.
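A finite-dimensional toy version of the "same $U$" statement: commuting real symmetric matrices are simultaneously diagonalized by one orthogonal matrix, i.e. both become multiplication (diagonal) operators in the same basis. A sketch with NumPy (the matrices are my own example, and I give the first one distinct eigenvalues so that its eigenbasis is forced):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # a random orthogonal basis

# two self-adjoint operators that are diagonal in the same (hidden) basis
T1 = Q @ np.diag([1.0, 2.0, 3.0]) @ Q.T
T2 = Q @ np.diag([3.0, 5.0, 7.0]) @ Q.T
assert np.allclose(T1 @ T2, T2 @ T1)           # a commuting family

# diagonalizing T1 (distinct eigenvalues) recovers a common eigenbasis U,
# and T2 is automatically diagonal in it as well, since it commutes with T1
_, U = np.linalg.eigh(T1)
M = U.T @ T2 @ U
assert np.allclose(M, np.diag(np.diag(M)))
```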
H: Induction troubles proving a formula. I'm having a tough time proving the following formula. Suppose $a,b\in R$ for some ring $R$. Define $a^{(0)}=a$, $a^{(1)}=[a,b]\equiv ab-ba$, and then $a^{(k)}=[a^{(k-1)},b]$. Then $$ \sum_{i=0}^k b^iab^{k-i}=\sum_{j=0}^k\binom{k+1}{j+1}b^{k-j}a^{(j)}. $$ I wanted to do this with induction on $k$, but the presence of $k$ in each term of the sum is making it difficult to get the right form of things. I had something like $$ \begin{align*} \sum_{j=0}^{k+1}\binom{k+2}{j+1}b^{k+1-j}a^{(j)} &= \sum_{j=0}^k \binom{k+2}{j+1}b^{k+1-j}a^{(j)}+a^{(k+1)}\\ &= \sum_{j=0}^k\left[\binom{k+1}{j}+\binom{k+1}{j+1}\right]b^{k+1-j}a^{(j)}+a^{(k+1)}\\ &=b\sum_{j=0}^k\binom{k+1}{j}b^{k-j}a^{(j)}+b\sum_{j=0}^k\binom{k+1}{j+1}b^{k-j}a^{(j)}+a^{(k+1)}\\ &=b\sum_{j=0}^k\binom{k+1}{j}b^{k-j}a^{(j)}+b\sum_{i=0}^k b^iab^{k-i}+a^{(k+1)} \end{align*} $$ but this is still far from what I want. I tried doing induction in the other direction, but had similar problems. I don't see how the recursive definition of $a^{(k)}$ comes into play. How can the formula be properly derived? Thanks. AI: Denote the LHS terms as $$S_k=\sum_{i=0}^k b^i a b^{k-i}.$$ Notice this satisfies the recurrence $$S_{k+1}=\sum_{i=0}^{k+1} b^i a b^{k+1-i}=\left(\sum_{i=0}^k b^iab^{k+1-i}\right)+b^{k+1}ab^{k+1-(k+1)}=S_kb+b^{k+1}a.$$ Once you establish the base case ($k=0$ is immediate, since both sides equal $a$), we need only prove that the RHS also satisfies this recurrence, so that the two sides are the same sequence (the recurrence has a unique solution given the initial term). It helps to first note that $a^{(k)}b=ba^{(k)}+a^{(k+1)}$ using the inductive definition. Now take the RHS of the desired claim, right-multiply by $b$ and add $b^{k+1}a$: from there you want to rearrange to make it the RHS with $k+1$. In order to do this, split and reindex and incorporate $$\binom{n}{m}=\binom{n-1}{m}+\binom{n-1}{m-1}.$$
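Since the identity lives in an arbitrary ring, it can be sanity-checked with $2\times 2$ integer matrices, which do not commute. A brute-force check in Python (the particular matrices are my own choice):

```python
from math import comb
from functools import reduce

def mmul(*Ms):
    # product of 2x2 matrices given as tuples of row-tuples
    def mul2(A, B):
        return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(2))
                           for j in range(2)) for i in range(2))
    return reduce(mul2, Ms)

def madd(A, B):
    return tuple(tuple(A[i][j] + B[i][j] for j in range(2)) for i in range(2))

def scale(c, A):
    return tuple(tuple(c * A[i][j] for j in range(2)) for i in range(2))

I = ((1, 0), (0, 1))
a, b = ((1, 2), (3, 4)), ((0, 1), (5, 2))   # a generic non-commuting pair

def mpow(M, n):
    return I if n == 0 else mmul(mpow(M, n - 1), M)

# iterated commutators: a^(0) = a, a^(j) = [a^(j-1), b]
comm = [a]
for _ in range(5):
    c = comm[-1]
    comm.append(madd(mmul(c, b), scale(-1, mmul(b, c))))

for k in range(6):
    lhs = rhs = ((0, 0), (0, 0))
    for i in range(k + 1):
        lhs = madd(lhs, mmul(mpow(b, i), a, mpow(b, k - i)))
    for j in range(k + 1):
        rhs = madd(rhs, scale(comb(k + 1, j + 1),
                              mmul(mpow(b, k - j), comm[j])))
    assert lhs == rhs   # the identity holds for k = 0..5
```

Of course this is evidence, not a proof, but it is a quick way to catch an index error in an attempted induction.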