H: 1-form is exact Let $\omega = f_1 dx_1 +f_2dx_2 + \cdots + f_ndx_n$ be a closed $C^{\infty}$ $1$-form on $\mathbb R^n$. Define a function $g$ by $\displaystyle{ g(x_1, x_2,\cdots, x_n) = \int_{0}^{x_1} f_1(t,x_2 , x_3, \cdots ,x_n)\,dt +\int_{0}^{x_2} f_2(0,t, x_3, x_4, \cdots ,x_n)\,dt + \int_{0}^{x_3} f_3(0,0,t, x_4 ,x_5, \cdots ,x_n)\,dt + \cdots +\int_{0}^{x_n} f_n(0,0, \cdots ,t)\,dt }$ Show that $dg =\omega$. AI: $\omega$ being closed gives $\partial_i f_j = \partial_j f_i$. So we have \begin{align*} \partial_i g(x) &= \sum_{k = 1}^{i-1} \int_0^{x_k} \partial_i f_k(0,\ldots, 0, t, x_{k+1}, \ldots, x_n)\,dt + f_i(0,\ldots, 0, x_i, \ldots, x_n)\\ &= \sum_{k=1}^{i-1} \int_0^{x_k} \partial_k f_i(0,\ldots, 0, t, x_{k+1}, \ldots, x_n)\,dt + f_i(0,\ldots, 0, x_i, \ldots, x_n)\\ &= \sum_{k=1}^{i-1} \bigl(f_i(0,\ldots, x_k, \ldots, x_n) - f_i(0,\ldots, 0, x_{k+1}, \ldots, x_n)\bigr)+ f_i(0,\ldots, 0, x_i, \ldots, x_n)\\ &= f_i(x) - f_i(0, \ldots, 0, x_i,\ldots, x_n) + f_i(0,\ldots, 0, x_i,\ldots, x_n)\\ &= f_i(x) \end{align*} So $dg = \sum_i \partial_i g\,dx_i = \sum_i f_i dx_i = \omega$.
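The construction can be sanity-checked numerically. Below is a sketch (not part of the original question) for $n=2$ with the hypothetical closed form $\omega = d(\sin(xy))$, i.e. $f_1(x,y)=y\cos(xy)$ and $f_2(x,y)=x\cos(xy)$; the trapezoid quadrature and the sample point are arbitrary choices:

```python
import math

# A closed 1-form omega = f1 dx + f2 dy obtained as d(sin(x*y)).
def f1(x, y):
    return y * math.cos(x * y)

def f2(x, y):
    return x * math.cos(x * y)

def integrate(func, a, b, steps=2000):
    """Composite trapezoid rule on [a, b]."""
    h = (b - a) / steps
    total = 0.5 * (func(a) + func(b))
    for i in range(1, steps):
        total += func(a + i * h)
    return total * h

def g(x, y):
    # g(x, y) = int_0^x f1(t, y) dt + int_0^y f2(0, t) dt
    return (integrate(lambda t: f1(t, y), 0.0, x)
            + integrate(lambda t: f2(0.0, t), 0.0, y))

# Check dg = omega at a sample point via central differences.
x0, y0, h = 0.7, 1.3, 1e-5
dg_dx = (g(x0 + h, y0) - g(x0 - h, y0)) / (2 * h)
dg_dy = (g(x0, y0 + h) - g(x0, y0 - h)) / (2 * h)
print(abs(dg_dx - f1(x0, y0)), abs(dg_dy - f2(x0, y0)))  # both tiny
```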
H: Calculate: $\sum_{k=0}^{n-2} 2^{k} \tan \left(\frac{\pi}{2^{n-k}}\right)$ Calculate the following sum for integers $n\ge2$: $$\sum_{k=0}^{n-2} 2^{k} \tan \left(\frac{\pi}{2^{n-k}}\right)$$ I'm trying to obtain a closed form if that is possible. AI: We have this nice identity $$\tan(\theta) = \cot(\theta)-2 \cot(2 \theta)$$ Making use of this, and denoting $\displaystyle \sum_{k=0}^{m} 2^k \tan(2^k \theta)$ as $S$, we get that \begin{align}S & = \tan(\theta) + 2 \tan(2 \theta) + 4 \tan(4 \theta) + \cdots + 2^m \tan(2^m \theta)\\ & = \cot(\theta) -2 \cot(2 \theta) +2 \cot(2\theta) - \cdots + 2^m \cot(2^m \theta) - 2^{m+1} \cot(2^{m+1} \theta)\\ & = \cot(\theta) - 2^{m+1} \cot(2^{m+1} \theta) \end{align} In your case, $\theta = \dfrac{\pi}{2^n}$ and $m= n-2$. Hence, we get the sum to be $$S = \cot \left(\dfrac{\pi}{2^n} \right) - 2^{n-1} \cot \left( 2^{n-1} \cdot \dfrac{\pi}{2^n}\right) = \cot \left(\dfrac{\pi}{2^n} \right) - 2^{n-1} \cot \left( \dfrac{\pi}2\right) = \cot \left(\dfrac{\pi}{2^n} \right)$$ Proof for $\tan(\theta) = \cot(\theta)-2 \cot(2 \theta)$ $$\cot(\theta) - \tan(\theta) = \dfrac{\cos(\theta)}{\sin(\theta)} - \dfrac{\sin(\theta)}{\cos(\theta)} = \dfrac{\cos^2(\theta) - \sin^2(\theta)}{\sin(\theta) \cos(\theta)}= 2\dfrac{\cos(2\theta)}{\sin(2 \theta)} = 2 \cot(2 \theta)$$
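The closed form $\cot(\pi/2^n)$ is easy to spot-check numerically; a quick sketch (the range of $n$ tested is an arbitrary choice):

```python
import math

def lhs(n):
    # The sum from the question: sum_{k=0}^{n-2} 2^k tan(pi / 2^(n-k))
    return sum(2**k * math.tan(math.pi / 2**(n - k)) for k in range(n - 1))

def rhs(n):
    # The claimed closed form: cot(pi / 2^n)
    return 1.0 / math.tan(math.pi / 2**n)

for n in range(2, 11):
    print(n, lhs(n), rhs(n))
```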
H: Eigenvalues and Diagonalization The typical definition given for a diagonalizable matrix is: Given $A\in M^F_{n\times n}$, $A$ is diagonalizable $\iff$ $A$ has $n$ linearly independent eigenvectors. Is it also true that $A$ is diagonalizable $\iff$ $A$ has $n$ distinct eigenvalues? AI: No. It is not true that $A$ is diagonalizable $\iff$ $A$ has $n$ distinct eigenvalues. For instance, the identity matrix is diagonalizable but all of its eigenvalues are the same, namely $1$. However, one direction of your claim is true: $A$ has $n$ distinct eigenvalues $\implies$ $A$ is diagonalizable.
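Both points are easy to illustrate with NumPy (the matrices below are my own examples, not from the question):

```python
import numpy as np

# The identity is diagonalizable yet its eigenvalue 1 is repeated:
# having n distinct eigenvalues is not necessary.
I = np.eye(3)
assert np.allclose(np.linalg.eigvals(I), 1.0)

# A matrix with n distinct eigenvalues is diagonalizable:
# its eigenvector matrix V is invertible, so A = V D V^{-1}.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])        # eigenvalues 2 and 3
w, V = np.linalg.eig(A)
assert np.allclose(A, V @ np.diag(w) @ np.linalg.inv(V))

# Contrast: a Jordan block has a repeated eigenvalue and is NOT
# diagonalizable; its computed eigenvectors are linearly dependent.
J = np.array([[1.0, 1.0],
              [0.0, 1.0]])
w2, V2 = np.linalg.eig(J)
print(abs(np.linalg.det(V2)))  # ~ 0: no basis of eigenvectors
```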
H: Expectation value of a product of an Ito integral and a function of a Brownian motion this problem has come up in my research and is confusing me immensely, any light you can shed would be deeply appreciated. Let $B(t)$ denote a standard Brownian motion (Wiener process), such that the difference $B(t)-B(s)$ has a normal distribution with zero mean and variance $t-s$. I am seeking an expression for $$E\left[ \cos(B(t))\int\limits_0^t \sin(B(s))\,\textrm{d}B(s) \right],$$ where the integral is a stochastic It$\hat{\textrm{o}}$ integral. My first thought was that the expectation of the integral alone is zero, and that the two terms are statistically independent, hence the whole thing gives zero. However, I can't prove this. To give you a little background: this expression arises as one of several terms in a calculation of the second moment of the integral $$\int\limits_{0}^{t}\cos(B(s))\,\textrm{d}s,$$ after applying It$\hat{\textrm{o}}$'s lemma and squaring. I can simulate this numerically, so I should know when I get the right final expression! Thanks. AI: This addresses the question cited as a motivation. For every $t\geqslant0$, introduce $X_t=\int\limits_{0}^{t}\cos(B_s)\,\textrm{d}s$ and $m(t)=\mathrm E(\cos(B_t))=\mathrm E(\cos(\sqrt{t}Z))$, where $Z$ is standard normal. Then $\mathrm E(X_t)=\int\limits_{0}^{t}m(s)\,\textrm{d}s$ and $\mathrm E(X_t^2)=\int\limits_{0}^{t}\int\limits_{u}^{t}2\mathrm E(\cos(B_s)\cos(B_u))\,\textrm{d}s\textrm{d}u$. For every $s\geqslant u\geqslant0$, one has $2\cos(B_s)\cos(B_u)=\cos(B_s+B_u)+\cos(B_s-B_u)$. Furthermore, $B_s+B_u=2B_u+(B_s-B_u)$ is normal with variance $4u+(s-u)=s+3u$ and $B_s-B_u$ is normal with variance $s-u$. Hence, $2\mathrm E(\cos(B_s)\cos(B_u))=m(s+3u)+m(s-u)$, which implies $$ \mathrm E(X_t^2)=\int\limits_{0}^{t}\int\limits_{u}^{t}(m(s+3u)+m(s-u))\,\textrm{d}s\textrm{d}u. 
$$ Since $m(t)=\mathrm e^{-t/2}$, this yields after some standard computations, $\mathrm E(X_t)=2(1-\mathrm e^{-t/2})$ and $$ \mathrm E(X_t^2)=2t-\frac13(1-\mathrm e^{-2t})-\frac83(1-\mathrm e^{-t/2}). $$ Sanity check: When $t\to0^+$, $\mathrm E(X_t^2)=t^2+o(t^2)$. To compute the integral $J_t=\mathrm E\left[ \cos(B_t)\int\limits_{0}^{t} \sin(B_s)\,\textrm{d}B_s \right]$, one can start with Itô's formula $$ \cos(B_t)=1-\int\limits_{0}^{t} \sin(B_s)\,\textrm{d}B_s-\frac12\int\limits_{0}^{t} \cos(B_s)\,\textrm{d}s, $$ hence $$ J_t=\mathrm E(\cos(B_t))-\mathrm E(\cos^2(B_t))-\frac12\int\limits_{0}^{t} \mathrm E(\cos(B_t)\cos(B_s))\,\textrm{d}s, $$ and it seems each term can be computed easily.
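The "standard computations" can be delegated to a CAS; a SymPy sketch verifying the stated moments (the small-$t$ check mirrors the sanity check in the answer):

```python
import sympy as sp

t, s, u = sp.symbols('t s u', positive=True)
m = lambda x: sp.exp(-x / 2)   # m(t) = E[cos(B_t)] = e^{-t/2}

# E[X_t] = int_0^t m(s) ds = 2(1 - e^{-t/2})
EX = sp.integrate(m(s), (s, 0, t))
assert sp.simplify(EX - 2 * (1 - sp.exp(-t / 2))) == 0

# E[X_t^2] = int_0^t int_u^t (m(s + 3u) + m(s - u)) ds du
inner = sp.integrate(m(s + 3 * u) + m(s - u), (s, u, t))
EX2 = sp.simplify(sp.integrate(inner, (u, 0, t)))
claimed = (2 * t - sp.Rational(1, 3) * (1 - sp.exp(-2 * t))
           - sp.Rational(8, 3) * (1 - sp.exp(-t / 2)))
assert sp.simplify(EX2 - claimed) == 0

# Sanity check from the answer: E[X_t^2] = t^2 + o(t^2) as t -> 0+
assert sp.simplify(sp.series(EX2, t, 0, 3).removeO() - t**2) == 0
```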
H: Quantile and percentile terminology Note: This is answered by user974514 below, but there was some discussion outside of the "answer", so I paraphrased the final answers inline here. I've asked around for the exact usages of the terms "quantile" and "percentile" and "rank" and I'm getting conflicting answers from my colleagues. I'll just pose a set of questions to narrow down the exact terminology. 1) Given: a probability CDF. $\mathbb{P}(X \leq A) = B$. $X$ is the random variable. Domain of $B$ is obviously real number in interval $[0, 1)$, and $A$ can be any real number (or other totally-ordered set). Is "quantile" or "percentile" applicable to either $A$ or $B$ (which one)? If $A$ is called a "quantile" (or "percentile"), than what is $B$ called - a "[quantile/percentile] rank", or just "cumulative probability"? A: $A$ is a quantile. $B$ is the cumulative probability. If $B$ is a multiple of 1/100, then the quantile is a percentile (a 100-quantile). When discussing $n$-quantiles in the CDF for various $n$ that have the same value $A$, it's helpful to be explicit with term "value", i.e. $A$ is the "quantile value". 2) Is the term "quantile" only applicable for equally-sized intervals in a CDF? Or can it be for any arbitrary $A$ (or $B$; depending on answer to above question) such that $\mathbb{P}(X \leq A) = B$? For example, what if the probability $B$ is an irrational number, so that it's impossible for it be an interval boundary of equally-sized intervals in a CDF? A: For a CDF, quantiles must be points on equally-sized intervals of that CDF. 3) Given: list of $N$ sample datums (of a totally-ordered set). If I want a value $A$ such that $B$ datums are at most $A$, and assuming that $A$ is a value in the $N$ sample datums, where do the terms "percentile" and "quantile" and "rank" fit in here? For example, if I want the value such that 50% datums are at most this value, can that value be called any of: 50th percentile, 2nd quartile, median? 
Or does it have to be in terms of $N$ quantiles, e.g. 3rd $N$-quantile? (I'm aware that if $A$ were not in the list of sample datums, there would have to be some rounding or interpolation, but that's not important to these questions.) A: Rephrased: given $N$ sample datums, if $A$ is the $k$-th $N$-quantile, then $A$ is not less than $\frac{k}{N}$ values out of the $N$ values. Assuming the samples are taken randomly, $\frac{k}{N}$ is the cumulative probability. $\frac{k}{n} \cdot N$ (with rounding) is the rank for $n$-quantiles, e.g. $\frac{k}{100} \cdot N$ is the percentile rank. 3a) Can I define percentiles in term of fractions, and if so, what is that fraction called? e.g. in the above example, if the 50-th percentile is called the "percentile", then what is the 0.5 called? This is analogous to what I called "cumulative probability" in the CDF case. A: Assuming samples are taken randomly, the "fraction" (see above) is the cumulative probability. 4) Along the same lines, when I have a value that is called the "P90", what exactly is that - the 90th percentile? Can it be called the 9th decile too? How about 0.9 [something]? A: Except for "0.9", they are all equivalent - just different $n$-quantiles with the same quantile value (and cumulative probability). The 0.9 is the cumulative probability. 5) Is it valid to have a non-integral quantile/percentile? e.g. 55.55th percentile or 2.5th quartile? A: Yes, but given non-integral $k$ and given $k$-th $n$-quantile, you'd typically scale both to some $(k \cdot m)$-th $(n \cdot m)$-quantile, where $(k \cdot m)$ is integral. For example, 2.5-th quartile is the same as the 5-th 8-quantile. A common exception would be the "Px" notation, where things like "P99.9" are common. AI: From wiki we have. 
If we want to find the $k$-th $q$-quantile, we have to find $x$ such that $\Pr[X < x] \le k/q$ (or, equivalently, $\Pr[X \ge x] \ge 1-k/q$) and $\Pr[X \le x] \ge k/q$ (or, equivalently, $\Pr[X > x] \le 1-k/q$). 1) Quantiles are actually what you defined as $A$: if you want to find the third quartile, you have to calculate $A$ such that $P(X\leq A) = 0.75=\frac{3}{4}$, and in this case you also get your $B$. There is no standard quantile-related name for $B$ that I know of, but finding that $A$ is above $0.75=\frac{3}{4}$ of all the data makes it the 3rd quartile, i.e. the 3rd 4-quantile, which is also the 75th percentile, i.e. the 75th 100-quantile. So $B$ can actually determine $A$: if $P(X\leq A)= B = \frac{k}{n}$, then $A$ is the $k$-th $n$-quantile. 2) Yes, only equally-sized intervals. Concerning your example: suppose $B$ equals $\frac{\sqrt{2}}{2}$. Then you would have a 2-quantile, but it would exceed the remaining part, since $\frac{\sqrt{2}}{2}\geq 1 - \frac{\sqrt{2}}{2}$, and that is a contradiction. 3) Suppose you have a sample of $N$ datums. The $k$-th $n$-quantile is a value $B$ that is not less than a $\frac{k}{n}$ fraction of all the data; in your case that means $\lfloor \frac{k}{n} \cdot N \rfloor$ values will be less than or equal to $B$. Now line up all of your data in non-descending order and enumerate the points from $1$ to $N$; this is the so-called ranking procedure. If $N$ is odd, you just take the $(\lfloor \frac{N}{2} \rfloor +1)$-th datum, and it will be not less than half of all the data: the so-called median. The same goes for the 50th percentile, so the 50th percentile = 2nd quartile = median, and "median" is the most standard name for it. 4) Yes, of course: the 90th percentile corresponds to $\frac{90}{100}=\frac{9}{10}$, i.e. the 9th decile. 5) Non-integer quantiles may be defined, but they make no sense in terms of the equally-sized-intervals definition.
But you can convert them to proper quantiles: a $2.5$-th quartile means $\frac{2.5}{4}=\frac{5}{8}$, so it is the 5-th 8-quantile. I hope this makes things clearer; if not, feel free to ask.
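These conventions map onto Python's standard library; a small sketch with made-up sample data (`statistics.quantiles` returns the $n-1$ cut points of the $n$-quantiles):

```python
import statistics

data = sorted([12, 5, 8, 21, 3, 17, 9, 14, 11, 6])   # N = 10 datums

med = statistics.median(data)
quartiles = statistics.quantiles(data, n=4)      # the 3 quartile cut points
percentiles = statistics.quantiles(data, n=100)  # the 99 percentile cut points

# 50th percentile = 2nd quartile = median, as in answer 3).
print(med, quartiles[1], percentiles[49])

# The "cumulative probability" of a quantile value A within the sample:
A = quartiles[1]
B = sum(1 for d in data if d <= A) / len(data)
print(B)
```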
H: How to define a bijection between $(0,1)$ and $(0,1]$? How to define a bijection between $(0,1)$ and $(0,1]$? Or any other open and closed intervals? If the intervals are both open like $(-1,2)\text{ and }(-5,4)$ I do a cheap trick (don't know if that's how you're supposed to do it): I make a function $f : (-1, 2)\rightarrow (-5, 4)$ of the form $f(x)=mx+b$ by \begin{align*} -5 = f(-1) &= m(-1)+b \\ 4 = f(2) &= m(2) + b \end{align*} Solving for $m$ and $b$ I find $m=3\text{ and }b=-2$ so then $f(x)=3x-2.$ Then I show that $f$ is a bijection by showing that it is injective and surjective. AI: Choose an infinite sequence $(x_n)_{n\geqslant1}$ of distinct elements of $(0,1)$. Let $X=\{x_n\mid n\geqslant1\}$, hence $X\subset(0,1)$. Let $x_0=1$. Define $f(x_n)=x_{n+1}$ for every $n\geqslant0$ and $f(x)=x$ for every $x$ in $(0,1)\setminus X$. Then $f$ is defined on $(0,1]$ and the map $f:(0,1]\to(0,1)$ is bijective. To sum up, one extracts a copy of $\mathbb N$ from $(0,1)$ and one uses the fact that the map $n\mapsto n+1$ is a bijection between $\mathbb N\cup\{0\}$ and $\mathbb N$.
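The construction is concrete enough to code up. A sketch using the particular choice $x_n = 1/(n+1)$ (any injective sequence in $(0,1)$ would do):

```python
from fractions import Fraction

# Take x_n = 1/(n+1) for n >= 1 (distinct points of (0,1)) and x_0 = 1.
# f shifts along the sequence, f(x_n) = x_{n+1}, and is the identity
# elsewhere; this is the bijection (0,1] -> (0,1) from the answer.
def f(x):
    if x == 1:                                    # x_0 = 1 -> x_1 = 1/2
        return Fraction(1, 2)
    if x.numerator == 1 and x.denominator >= 2:   # x_n = 1/(n+1) -> 1/(n+2)
        return Fraction(1, x.denominator + 1)
    return x                                      # identity off the sequence

samples = [Fraction(1), Fraction(1, 2), Fraction(1, 3),
           Fraction(2, 3), Fraction(3, 7)]
images = [f(x) for x in samples]
print(images)   # all land in (0,1), no repeats
```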
H: Compute: $\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$ I am trying to solve the following sum: $$\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \frac{1}{k^2n+2nk+n^2k}$$ I'm very curious about the possible ways of approaching it. I'm not experienced with these sums, and any hint or suggestion is very welcome. Thanks. AI: I think one way of approaching this sum would be to use the partial fraction decomposition $$ \frac{1}{k^2n+2nk+n^2k} = \frac{1}{kn(k+n+2)} = \frac{1}{2}\Big(\frac{1}{k} + \frac{1}{n}\Big)\Big(\frac{1}{k+n} - \frac{1}{n+k+2}\Big)$$ to rewrite your sum in the form $$\sum_{n=1}^{\infty}\sum_{k=1}^{\infty} \frac{1}{k^2n+n^2k+2kn} = \frac{1}{2}\sum_{k=1}^{\infty}\sum_{n=1}^{\infty} \Big( \frac{1}{n(k+n)} - \frac{1}{n(k+n+2)} + \frac{1}{k(k+n)} - \frac{1}{k(k+n+2)}\Big)$$ Since the sum on the right telescopes in one of the summation variables, it should be straightforward to find the answer from here (it ends up being $7/4$, I think).
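A numerical check supports $7/4$. In the sketch below, the single sum is what remains if one actually carries out the telescoping (summing the inner series over $n$ after symmetrizing in $k$ and $n$; this reduction is mine, not spelled out in the answer):

```python
# Brute-force truncation of the double sum.
N = 800
brute = sum(1.0 / (k * n * (k + n + 2))
            for k in range(1, N + 1) for n in range(1, N + 1))

# After symmetrizing in k, n and telescoping the inner sum over n,
# the double sum collapses to sum_k (1/k) * (1/(k+1) + 1/(k+2)).
single = sum((1.0 / k) * (1.0 / (k + 1) + 1.0 / (k + 2))
             for k in range(1, 10**6))

print(brute, single)  # both approach 7/4 = 1.75
```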
H: Equality of Voronoi diagram What can we say about two sets $A$ and $B$ if both of them have the same Voronoi diagram? At first I thought that if the Voronoi diagrams are equal then the sets should also be equal, but by definition a Voronoi diagram is determined by distances to a specified family of objects (subsets) in the space, so do the same distances mean the same sets? Is $A = B$? Or $\left | A \right | = \left | B \right |$? AI: It is easy to see that you cannot conclude $A = B$. Consider $2$ points: the Voronoi diagram consists of a single line (the perpendicular bisector of the $2$ points), and there are infinitely many pairs of points having the same perpendicular bisector. If all the points are distinct, however, the sizes of the sets are equal: since each region in the Voronoi diagram corresponds to exactly one point from the set, the sizes of the sets must be equal.
H: Is there a topology on a countable set which makes the space not first countable but of countable pseudocharacter? I would like to know whether there is a topology on a countable set which makes the space not first countable but of countable pseudocharacter. Thanks for any help :) AI: The pseudo-character is defined as $$\psi(p, X) = \min\{ |\mathcal A| : \mathcal A \subseteq \tau_X\text{ is a pseudo-base at }p\text{ (i.e. }\bigcap \mathcal A = \{p\})\},$$ $$\psi(X)=\sup\{\psi(p,X) : p\in X\}+\omega,$$ see e.g. here. The space $S_2^-$ from this answer is one possible example of a space which has countable pseudo-character and is not first countable. It is not sequential, hence it is not first countable. Each point has a countable pseudo-base. This is obvious for isolated points. If we use the same notation as in the linked answer then $\{\omega\}=\bigcap_{n=0}^\infty (\{\omega\}\cup \{n,n+1,\dots\}\times\omega)$. So we see that every point is a countable intersection of open sets. This example is very similar to the Arens-Fort space, so you can try to have a look at that space if you're more familiar with it. Another standard example of a space which is not first countable is the quotient space $\mathbb R/\mathbb N$. Again, since $\mathbb N= \bigcap_{k\in\mathbb N} \bigcup_{n\in\mathbb N} \left(n-\frac1k,n+\frac1k\right)$, this space has countable pseudo-character. EDIT: Sorry, I had forgotten that you want countable spaces only; $\mathbb R/\mathbb N$ is of course not countable. But $\mathbb Q/\mathbb N$ should work for the same reasons.
H: generalised inverse function Let $f:\mathbb{R} \rightarrow [0, 1]$ be increasing (edit: i.e., non-decreasing). Define $f^-(y) = \inf \{x \in \mathbb{R} : f(x) \geq y \}$, $y \in [0, 1]$. Is the following line true? $$x \leq f^-(y) \quad\leftrightarrow\quad f(x) \leq y$$ AI: The definition doesn't make sense for $y$ less than or equal to the infimum of the image of $f$ (for example, $0$), unless you allow $-\infty$. Even where defined, what you said is not true in general; it already fails when $f$ is constant on some interval. Choose $f(x)=1$ for $x\geq 0$, and $f(x)=e^{x}$ for $x<0$. Then $f^-(1)=0$, and $1>0$, but $f(1)=1\leq 1$, so $\leftarrow$ fails. (The direction $\rightarrow$ does hold whenever $x < f^-(y)$: if we had $f(x)\geq y$, then $x$ would belong to the set, forcing $f^-(y)\leq x$. At $x = f^-(y)$ itself it can fail when $f$ jumps there, e.g. for $f=\mathbf 1_{[0,\infty)}$ and $y=\tfrac12$.)
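The failure of $\leftarrow$ for this $f$ can be probed numerically; a rough sketch that approximates $f^-$ by scanning a grid (grid bounds and step are arbitrary choices):

```python
import math

def f(x):
    # The counterexample from the answer.
    return 1.0 if x >= 0 else math.exp(x)

def f_minus(y, lo=-20.0, hi=5.0, step=1e-4):
    # Crude approximation of inf{x : f(x) >= y}: first grid point
    # (scanning left to right) where f(x) >= y.
    x = lo
    while x <= hi:
        if f(x) >= y:
            return x
        x += step
    return None

fm = f_minus(1.0)
print(fm)  # approximately 0 = f^-(1)
# x = 1 satisfies f(1) = 1 <= 1, yet 1 > f^-(1): "<-" fails.
assert f(1.0) <= 1.0 and 1.0 > fm
```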
H: norm of a variant of Fejer's kernel Let $K_N$ be the Fejer kernel on $\mathbb{T}$. Let $l$ be a positive integer. Let $Q$ be the function defined by $$ Q(t)=K_N(lt). $$ In Hewitt/Ross "Abstract Harmonic Analysis 2", page 438, I read that if $1<p<2$ we have $$ ||Q||_{L_p}=||K_N||_{L_p}. $$ Why? AI: In general, if you have a function $f$ on $\mathbb{T}$ and an integer $k$, $$ \int_{\mathbb{T}}f(t)\,\mathrm{d}t=\int_{\mathbb{T}}f(kt)\,\mathrm{d}t $$ The integral on the right is just $k$ copies of the integral on the left compressed $k$ times (so each copy has $1/k$ times the integral on the left). Applying this with $f=|K_N|^p$ and $k=l$ gives $\|Q\|_{L_p}^p = \int_{\mathbb{T}}|K_N(lt)|^p\,\mathrm{d}t = \int_{\mathbb{T}}|K_N(t)|^p\,\mathrm{d}t = \|K_N\|_{L_p}^p$.
H: How did Ramanujan get this result? We know Ramanujan got this result $$\sqrt{1+2\sqrt{1+3\sqrt{1+\cdots }}}=3$$ and he used the formula $$x+n+a=\sqrt{ax+{{(n+a)}^{2}}+x\sqrt{a(x+n)+{{(n+a)}^{2}}+(x+n)\sqrt{\cdots }}}$$ where $x=2,n=1,a=0$ gives the first result, but I don't know how to prove it; can you help me? AI: $$(x+n+a)^2 = x^2 + n^2 + a^2 + 2an + 2ax + 2nx$$ $$ = ax + (n+a)^2 + x(x + a + 2n)$$ so $x + n + a = \sqrt{ax + (n+a)^2 + x\bigl((x+n) + n + a\bigr)}$, and you can substitute the same identity for $(x+n) + n + a$ again and again to get the sequence of iterated roots. Convergence holds because the sequence of truncations is monotone increasing but bounded above by $x+n+a$ for $n > 0$, $a,x \ge 0$ (after enough iterations, say $k$ of them, $x + kn + a$ will be greater than $1$).
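Truncating the radical shows the monotone convergence; a sketch with $x=2, n=1, a=0$ (the seed value for the innermost tail is an arbitrary choice):

```python
import math

def nested(depth, x=2.0, n=1.0, a=0.0, seed=1.0):
    # Unwind the radical from the inside out, truncated after `depth`
    # levels; level k uses x_k = x + k*n, and the innermost tail is `seed`.
    t = seed
    for k in range(depth, -1, -1):
        xk = x + k * n
        t = math.sqrt(a * xk + (n + a) ** 2 + xk * t)
    return t

for d in (0, 5, 10, 20, 40):
    print(d, nested(d))   # increases towards x + n + a = 3
```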
H: "Theorem of Witt" for modules For modules $M_1 \oplus N \cong M_2 \oplus N$, why is $M_1 \cong M_2$, if $Hom(M_1,N)=\{0\}=Hom(M_2,N)$? It seems similar to Witt's cancellation theorem for quadratic forms. Regards, Khanna AI: Let $\phi\colon M_1 \oplus N \to M_2 \oplus N$ be an isomorphism. Denote by $i_1\colon M_1 \to M_1 \oplus N$, $i_2\colon N \to M_1 \oplus N$, $j_1\colon M_2 \to M_2 \oplus N$ and $j_2 \colon N \to M_2 \oplus N$ the injections and by $p_1\colon M_1 \oplus N \to M_1$, $p_2 \colon M_1 \oplus N \to N$, $\pi_1\colon M_2 \oplus N \to M_2$ and $\pi_2 \colon M_2 \oplus N \to N$ the projections. Define $\psi \colon M_1 \to M_2$ by $\psi := \pi_1\phi i_1$. We will prove that $\psi$ is an isomorphism. Let $m_1 \in M_1$ such that $\psi(m_1) = 0$; then $\pi_1\phi(m_1,0) = 0$. As $\pi_2 \phi i_1 \colon M_1 \to N$ and $Hom(M_1,N)=\{0\}$, we have $\pi_2\phi i_1 = 0$, hence $\pi_2 \phi(m_1,0) = 0$. So $\phi(m_1,0) = 0$, which implies $m_1 = 0$. So $\psi$ is a monomorphism. Let $m_2 \in M_2$. We have that $p_2\phi^{-1}j_1\colon M_2 \to N$, so $p_2\phi^{-1}j_1 = 0$. Set $m_1 := p_1\phi^{-1}j_1(m_2)$; then $\phi^{-1}(m_2,0) = (m_1,0)$, or $\phi(m_1,0) = (m_2,0)$, which implies $\psi(m_1) =m_2$. So $\psi$ is onto.
H: Quotient of a Regular local ring. Is the quotient of a regular local ring by a prime ideal Cohen-Macaulay? If so, how can we see this; if not, is there a counterexample? We know that a regular local ring is a UFD, so $0$ is a prime ideal, and in this case the quotient is a regular local ring and hence a CM ring. AI: Any complete local noetherian ring is a quotient of a regular local ring. This is Cohen's structure theorem for complete local noetherian rings (see for instance Matsumura, Commutative Ring Theory, Theorem 29.4). So any complete local noetherian integral domain is a quotient of a regular local ring by a prime ideal. To construct a counterexample to your question, it is thus enough to exhibit a complete local noetherian domain which is not Cohen-Macaulay. The ring $k[X^4,X^3Y,XY^3,Y^4]$ (which is a non-CM domain) is usually a good starting point. In the positive direction, your comment can be generalized to taking the quotient by any regular sequence (not necessarily a system of parameters). This gives a complete intersection ring, hence a CM ring.
H: Question about functions in $L^2$ Let $u(x,y),v(x,y)\in L^2$. What can we say about $\int_{-\infty}^\infty \int_{-\infty}^\infty \frac{d}{dy}(uv) \, dy \, dx$? Does it equal zero? If so, why? AI: I think your integral might not make sense. First of all, I would write $$\lim_{a\to\infty}\left[u(x,y)v(x,y)\right]_{y=-a}^{y=a}$$ rather than $$\left(u\left(x,y\right)v\left(x,y\right)\right)\Big|_{y=-\infty}^{y=\infty}$$ but that's just style. More importantly, this limit might not exist. The fact that a function is in $L^1$ (like $uv$ in your case) does not imply that it has a limit (which, if any, has to be $0$) at infinity. Indeed, consider the function $$f(x) = \begin{cases}n\qquad \text{ if }n-\dfrac{1}{n^3}\leq x\leq n\\0\qquad\text{else}\end{cases}$$ for $n\in \mathbb{N}\setminus\{0\}$. This function is in $L^1(\mathbb{R})$, since $$\int_{-\infty}^{+\infty}f(x)dx=\int_0^{+\infty}f(x)dx = \lim_{m\to\infty}\int_0^m f(x)dx = \lim_{m\to\infty}\sum_{n=1}^{m}\int_{n-\frac{1}{n^3}}^n n\,dx = \lim_{m\to\infty}\sum_{n=1}^{m}\dfrac{1}{n^2}=\sum_{n=1}^{\infty}\dfrac{1}{n^2}=\dfrac{\pi^2}{6}$$ However, $f$ does not have a limit as $x$ goes to infinity. All you can say for a nonnegative integrable (over $\mathbb{R}$) function is that $$\liminf_{|x|\to\infty}f(x)=0$$ which is not enough to compute the limit that appears in your question. If your functions have more regularity, namely they are not just $L^2$ but also $H^1$, then your integral exists and is equal to zero, since the derivative cannot grow at infinity, so there can't be spikes whose amplitude does not go to zero. As a consequence, the $\limsup$ is also zero, which tells you that the limit exists and it is zero.
H: Do filters on a Boolean algebra also make a Boolean algebra? Let $\mathfrak{B}=(B,\bot,\top,\lnot,\wedge,\vee)$ be a Boolean algebra, and let $B_F$ be the set of all filters on $\mathfrak B$. For all filters $F$, $G$, define $F \wedge_{B_F} G \colon= \mathbf C(F \cup G)$, in which $\mathbf C$ denotes the filter closure operator; $F \vee_{B_F} G \colon= F \cap G$; $0 \colon= B$, $1 \colon= \{\top\}$. Then $(B_F,0,1,\wedge_{B_F},\vee_{B_F})$ forms a complete lattice. My question is: can we add a negation in order to make it a Boolean algebra? AI: It is relatively easy to see that it does not matter whether we work with filters or with ideals. The following is taken verbatim from Steven R. Givant, Paul Richard Halmos: Introduction to Boolean Algebras, p. 167: The ideals of a Boolean algebra form a complete, distributive lattice, but they do not, in general, form a Boolean algebra. To give an example, it is helpful to introduce some terminology. An ideal is maximal if it is a proper ideal that is not properly included in any other proper ideal. We shall see in the next chapter that an infinite Boolean algebra $B$ always has at least one maximal ideal that is not principal. Assume this result for the moment. A "complement" of such an ideal $M$ in the lattice of ideals of $B$ would be an ideal $N$ with the property that $$M\wedge N=\{0\} \qquad\text{and}\qquad M\vee N=B.$$ Suppose the first equality holds. If $q$ is any element in $N$, then $p \wedge q = 0$, and therefore $p \le q'$, for every element $p$ in $M$, by Lemma 1. In other words, the ideal $M$ is included in the principal ideal $(q')$. The two ideals must be distinct, since $M$ is not principal. This forces $(q')$ to equal $B$, by the maximality of $M$. In other words, $q' = 1$, and therefore $q = 0$. What has been shown is that the meet $M\wedge N$ can be the trivial ideal only if $N$ itself is trivial. In this case, of course, $M \vee N$ is $M$, not $B$.
Conclusion: a maximal, non-principal ideal does not have a complement in the lattice of ideals. The existence of maximal non-principal ideals, which was used in the above excerpt, is guaranteed by the Boolean prime ideal theorem.
H: Galois Group over Finite Field I am having a bit of difficulty trying to answer the following question: What is the Galois group of $X^8-1$ over $\mathbb{F}_{11}$? So far I have factored $X^8-1$ as $$X^8-1=(X+10)(X+1)(X^2+1)(X^4+1).$$ I know $X^2+1$ is irreducible over $\mathbb{F}_{11}$ since $10$ is not a square modulo $11$. Also, $X^4+1$ is irreducible over $\mathbb{F}_{11}$. The roots of $X^2+1$ and $X^4+1$ over $\mathbb{Q}$ are $\pm i$ and $\pm \frac{\sqrt{2}}{2} \pm \frac{\sqrt{2}}{2} i$, respectively. We also see that $\sqrt{2} \not \in \mathbb{F}_{11}$ since no element squared is equal to $2$. I would then think that $\mathbb{F}_{11}(i, \sqrt{2})$ is a splitting field for $x^8-1$ over $\mathbb{F}_{11}$, which is clearly Galois. If all this were true, I would then venture that the Galois group is $V_4$. I have the feeling, however, that I have made many mistakes in my reasoning. How should one approach a problem like this? AI: An extension of finite fields is always cyclic: the Galois group must be cyclic. So the Galois group certainly cannot be $V_4$. Note that $F_{11}(i)$ does have a square root of $2$: $(3i)^2 = -9\equiv 2\pmod{11}$. So once you adjoint $i$ to $F_{11}$, you also get $\sqrt{2}$. Thus, $F_{11}(i,\sqrt{2}) = F_{11}(i)$. Likewise, $X^2+1$ is reducible over $F_{11}(\sqrt{2})$, since $(4\sqrt{2})^2 = 32\equiv -1\pmod{11}$. That is, $F_{11}(i) = F_{11}(\sqrt{2})$. The key to remember is that there is a unique field of order $121=11^2$; so any extension you get from $F_{11}$ by adjoining the square root of a nonquadratic residue is the same. So $F_{11}(i) = F_{11}(\sqrt{2}) = F_{11}(\sqrt{6}) = F_{11}(\sqrt{7}) = F_{11}(\sqrt{8})$. Your other mistake is that $x^4+1$ is not irreducible over $F_{11}$: it splits as a product of two irreducible quadratics. You can figure this out by replacing $\frac{\sqrt{2}}{2}$ with $7i$ and $\frac{\sqrt{2}}{2}i$ with $-7$ (the values in $F_{121}$), or by solving a system of simple equations. 
Either way, you get $$x^4 + 1 = (x^2-3x-1)(x^2+3x-1)$$ in $F_{11}$. (Had $x^4+1$ been irreducible, then your extension would have been of degree $4$: you need an extension of degree $4$ to get a root for $x^4+1$, that one already contains the quadratic extension, and so you would get all the roots you need. In that case, the Galois group would have been cyclic of order $4$).
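The claimed factorization over $\mathbb F_{11}$ can be verified by expanding the product mod 11 (a sketch in pure Python; polynomials are coefficient lists, lowest degree first):

```python
P = 11  # working over F_11

def polymul(a, b):
    # Multiply polynomials given as coefficient lists (lowest degree
    # first), reducing coefficients mod P.
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out

# (x+10)(x+1)(x^2+1)(x^2-3x-1)(x^2+3x-1) should equal x^8 - 1 over F_11.
factors = [[10, 1], [1, 1], [1, 0, 1], [-1 % P, -3 % P, 1], [-1 % P, 3 % P, 1]]
prod = [1]
for fac in factors:
    prod = polymul(prod, fac)
print(prod)  # [10, 0, 0, 0, 0, 0, 0, 0, 1], i.e. x^8 + 10 = x^8 - 1 mod 11

# The answer's starting point: neither 2 nor -1 is a square mod 11.
squares = {(k * k) % P for k in range(P)}
print(2 in squares, (P - 1) in squares)  # False False
```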
H: Homework help with projectile motion I would please like help on the following question related to projectile motion. A horizontal drainpipe 6 metres above sea level empties stormwater into the sea. If the water comes out horizontally and reaches the sea 2 metres out from the pipe, find the initial velocity of the water, correct to 1 decimal place. Let $g$ be $10\ \mathrm{m\,s^{-2}}$ and neglect air resistance. AI: The vertical speed component when the water exits the pipe is zero, so for the vertical movement $$ s_v = \dfrac{a \cdot t^2}{2} $$ You know $a$ and $s_v$, so you can find $t$. Then, since the horizontal speed is constant, $$ s_h = v \cdot t $$ You just calculated $t$, and you know $s_h$, so you can find $v$.
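Plugging in the numbers finishes the computation the answer sets up (a quick sketch):

```python
import math

g = 10.0    # m/s^2, as the question instructs
s_v = 6.0   # vertical drop in metres
s_h = 2.0   # horizontal distance in metres

t = math.sqrt(2 * s_v / g)   # from s_v = g*t^2/2
v = s_h / t                  # from s_h = v*t
print(round(t, 3), round(v, 1))  # t ≈ 1.095 s, v ≈ 1.8 m/s
```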
H: Multiple singular values In a lot of texts I have seen involving the singular value decomposition, it only says that there are as many nonzero singular values as the rank of the matrix $A$ which is to be decomposed. Now I have looked at different examples throughout the net and everywhere these singular values are distinct. But as I understood the proof of the singular value decomposition, there are indeed $r$ singular values (if $\operatorname{rank}(A)=r$), but they may appear more than once: to be specific, they appear as often as the algebraic multiplicity of the eigenvalue which corresponds to this singular value (since the sum of the algebraic multiplicities of all nonzero eigenvalues is $r$). Is this correct? Could anyone give me an example of a matrix $A$ such that this happens, i.e. such that $A^*A$ has eigenvalues with algebraic multiplicity $>1$? AI: You can build a matrix from its singular values by starting with a diagonal matrix with non-negative entries. Concretely, for any choice $\alpha_1,\ldots,\alpha_n\in[0,\infty)$, the matrix $$ A=\begin{bmatrix}\alpha_1 \\ & \ddots \\ & & \alpha_n\end{bmatrix} $$ has singular values $\alpha_1,\ldots,\alpha_n$. And of course you can choose them with as many repetitions as you want. They can even be all equal, as in Gerry's example: $\alpha_1=\cdots=\alpha_n=1$.
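A NumPy illustration of a repeated singular value (the diagonal entries are an arbitrary choice with one repetition):

```python
import numpy as np

# Singular values of a non-negative diagonal matrix are its entries;
# here 3 appears twice.
A = np.diag([3.0, 3.0, 1.0])
sv = np.linalg.svd(A, compute_uv=False)
print(sv)  # [3. 3. 1.]

# Correspondingly, A^* A has the eigenvalue 9 with multiplicity 2.
eigs = np.linalg.eigvalsh(A.T @ A)
print(np.sort(eigs))  # [1. 9. 9.]
```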
H: Has anyone ever tried to develop a theory based on a negation of a commonly believed conjecture? I know that plenty of theorems have been published assuming the Riemann hypothesis to be true. I understand that the main goal of such research is to have a theory ready when someone finally proves the Riemann hypothesis. A secondary goal seems to be to have a chance of spotting a contradiction, thus proving the conjecture false. This must be a secondary goal, since most mathematicians believe the conjecture to be true. I wonder if it would be a good idea to do the opposite. It is widely believed that $e+\pi\not\in\mathbb Q,$ however no one seems to have any idea how to prove it. I wonder if it would be a good idea to try to build a theory on the statement that $e+\pi\in\mathbb Q$. I don't mean just trying to prove the conjecture by contradiction. I mean really, straightforwardly assuming we live in a universe in which $e+\pi\in\mathbb Q$ and trying to do maths in this universe. I'm not sure if the distinction is clear, but I hope it is. The ultimate goals mentioned above would switch places in this approach. Now the primary goal would be to spot a contradiction, since the theorems proved would all be very likely to be vacuous. The conjecture is considered very difficult to prove, so perhaps it wouldn't be bad to admit our blindness and just move a bit at random and hope we're moving ahead. The theorems proved would of course be likely to be vacuous, so it could seem too focused an approach: it may only serve to prove a single statement, that $e+\pi\not\in\mathbb Q.$ However, I think the techniques developed could still turn out useful in proving other things. The question whether it's a good idea is probably not a good question on this site, so it is not my main question.
I assume it hasn't been employed often, because I have never heard of it except in one case, so I would also like to know why it hasn't. (The one case is the work on the parallel postulate, which was thought by many to follow from the other axioms of Euclidean geometry, and which was later shown not to when considering the alternatives resulted in finding consistent geometries violating this axiom only.) AI: I think one of the examples is Fermat's Last Theorem. Frey noted that if there exists a counterexample to Fermat's Last Theorem, i.e. if there exists a non-trivial solution $(a,b,c)$ to $x^p+y^p=z^p$ where $p>2$, then the elliptic curve $y^2 = x(x - a^p)(x + b^p)$ cannot be modular. And this would violate the Taniyama–Shimura conjecture. Wiles then proved Fermat's Last Theorem by proving a weaker version of the Taniyama–Shimura conjecture. More can be found here.
H: Dedekind complete ⇒ Sequentially complete Let $F$ be an ordered field with the least upper bound property. 1. Let $\alpha: \mathbb{N} \to F$ be a Cauchy sequence. Being Cauchy, $\alpha$ is bounded both above and below. 2. By assumption and its dual, $A=\{\alpha(n)\mid n\in \mathbb{N}\}$ has an infimum $a_0$ and a supremum $b_0$. 3. $F$ is Archimedean. 4. If a subsequence of a Cauchy sequence converges to $a\in F$, then the Cauchy sequence converges to $a\in F$. These are all I know. How do I prove all Cauchy sequences are convergent in $F$? Please consider my level; I want a fairly direct proof, not mentioning any topology or Cauchy nets. *The comment button is not available to me now (I don't know why), so I write this here. I just proved it using the facts that (i) every Cauchy sequence is convergent in the set of Cauchy reals, (ii) there exists a bijective homomorphism between any two Dedekind complete fields, and (iii) the set of Cauchy reals is Dedekind complete. Let $x:i\mapsto x(i):\mathbb{N}\to F$ be a Cauchy sequence in the Dedekind complete field $F$. Then use the bijective homomorphism $f$ to show that $x':i\mapsto f(x(i))$ is a Cauchy sequence in the set of Cauchy reals. By fact (i), $x'$ is convergent. Since the inverse of $f$ is also a homomorphism, use this to show that $x$ is convergent. AI: Let $c_0 = \frac{a_0 + b_0}2$ (note that an ordered field has characteristic 0, so $2 = 1+1$ is invertible in $F$). If $[a_0, c_0]$ contains infinitely many $\alpha(n)$, then let $a_1 = a_0$, $b_1 = c_0$; otherwise $a_1 = c_0$, $b_1 = b_0$. Now set $c_1 = \frac{a_1 + b_1}2$. Continuing inductively, we obtain sequences $(a_n)$, $(b_n)$ such that, for each $n$: $[a_n, b_n]$ contains infinitely many $\alpha(k)$; $b_n - a_n = \frac{b_0 - a_0}{2^n}$; $a_{n} \le a_{n+1} \le b_{n+1} \le b_n$. Now let $a^* = \sup_n a_n$ (note that $a_n \le b_0$, so the sup exists) and $b_* = \inf_n b_n$. Then $a_n \le a^* \le b_* \le b_n$ for each $n$, so $b_* - a^* \le b_n - a_n = \frac{b_0 - a_0}{2^n}$.
As $F$ is Archimedean, $b_* - a^* = 0$, i.e. $b_* = a^*$. Set $k_0 := 1$, and for each $n$, given $k_n$ choose $k_{n+1} > k_n$ with $\alpha(k_{n+1}) \in [a_n, b_n]$ (which is possible since the latter interval contains infinitely many $\alpha(k)$). We will prove that $\alpha(k_n) \to a^*$ (which, by 4., suffices). So let $\epsilon > 0$. As $F$ is Archimedean, there is some $N$ with $2^{-N}(b_0 - a_0) \le \epsilon$. For $n > N$ we now have $\alpha(k_n) \in [a_N, b_N]$ and $a^* \in [a_N, b_N]$, hence $|\alpha(k_n) - a^*| \le b_N - a_N \le 2^{-N}(b_0 - a_0) \le \epsilon$.
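The halving argument above is easy to watch in action. Here is a small Python sketch (my own illustration, not part of the proof) that runs the bisection on the Cauchy sequence of partial sums $s_n=\sum_{k\le n}1/k^2$; since this particular example is monotone increasing, "the left half contains infinitely many terms" can be tested simply by checking whether the tail of the computed terms stays below the midpoint.

```python
import math

# Partial sums s_n = sum_{k=1}^n 1/k^2: a Cauchy sequence with limit pi^2/6.
N = 2000
terms = []
s = 0.0
for k in range(1, N + 1):
    s += 1.0 / k**2
    terms.append(s)

lo, hi = 0.0, 2.0          # [a_0, b_0]: bounds for the bounded Cauchy sequence
for _ in range(50):        # repeated halving, as in the proof
    mid = (lo + hi) / 2
    # Proxy for "the left half contains infinitely many terms": since this
    # example is monotone increasing, that holds iff the tail stays <= mid.
    if terms[-1] <= mid:
        hi = mid
    else:
        lo = mid

estimate = (lo + hi) / 2
print(estimate, math.pi**2 / 6)
```

After 50 halvings the interval has width $2/2^{50}$, and the midpoint is close to the true limit up to the truncation of the sequence itself.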
H: what is the use of derivatives Can anyone explain to me what the use of derivatives is in real life? When and where do we use derivatives? I know they can be used to find a rate of change, but why? My thinking was that in real life most of the functions we deal with are not linear, and derivatives help turn a real-life function into a linear one, e.g. converting a parabola into a linear function: $x^{2}\rightarrow 2x$. But then I found this: the derivative of $\sin{x}$ is $\cos{x}$; why can't we use $\sin{x}$ itself to solve the equation? What's the purpose of using its derivative $\cos{x}$? Please forgive me if I have asked a stupid question; I want to improve my fundamentals in calculus. AI: My example is from the real-life situation of war. From experiments in physics we know that the acceleration due to gravity of a particle near the Earth's surface is about $-9.8 \frac{m}{s^2}$. If you're an artilleryman in an army, you want your artillery shells to hit the enemy, or else theirs may hit you and kill you (game over). So you need to know how to angle your artillery gun and which direction to point it in so that when the shell lands, it blows up your enemy (rather than missing). All you know is that you can control the direction your cannon points and the angle you fire it at -- after it's fired, gravity takes over and that $-9.8 \frac{m}{s^2}$ governs the situation. Calculus shows us that if $x(t)$ is the function representing the position of the artillery shell at time $t$, then its first derivative is the shell's velocity and its second derivative is the shell's acceleration. We know the acceleration from physics (it's that $-9.8 \frac{m}{s^2}$ we had earlier)! We write this as $x''(t) = -9.8 \frac{m}{s^2}$. From this equation (involving derivatives) you can calculate (using "integration") the position function $x(t)$ of the shell given the direction you fire it in and the angle you fire it at. Why do you care about that?
Because knowing $x(t)$ will tell you where your shell lands and thus whether your shot will kill the enemy or not. So you can do a quick calculation to determine which direction and which angle to fire in to ensure that your shell hits your target. The side that does this computation first and gets the shell in the air first will kill the other side, helping win the war.
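To make the integration step concrete, here is a rough Python sketch (the muzzle speed, angle, and step size are my own made-up values): integrating $x''(t)=-g$ twice gives the familiar closed-form range, and a crude Euler simulation of the same ODE lands in essentially the same place.

```python
import math

g = 9.8                      # m/s^2, magnitude of gravitational acceleration
v0 = 100.0                   # muzzle speed in m/s (made-up value)
theta = math.radians(30)     # firing angle (made-up value)

# Closed form obtained by integrating x''(t) = -g twice:
# y(t) = v0*sin(theta)*t - g*t**2/2, so the shell lands at
# t_land = 2*v0*sin(theta)/g, at horizontal distance v0^2*sin(2*theta)/g.
range_exact = v0**2 * math.sin(2 * theta) / g

# The same thing by crude numerical integration of the ODE (Euler steps).
dt = 1e-4
x = y = 0.0
vx = v0 * math.cos(theta)
vy = v0 * math.sin(theta)
while True:
    x += vx * dt
    y += vy * dt
    vy -= g * dt             # the only physics input: x''(t) = -g
    if y < 0:                # the shell has come back to the ground
        break
print(range_exact, x)
```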
H: How do I handle image gradient calculation at the edge of images? The image gradient is the rate of change over any given pixel of an image, either in the horizontal or vertical direction. An image can be thought of as a large matrix of values [0, 255]. A common horizontal matrix for taking an image gradient is [1, 0, -1], or the value to the left of our pixel minus the value to the right of our pixel. I understand this conceptually, but in practice I'm not sure how to apply it to the edges of an image (e.g. the value to the left or the right of the pixel doesn't exist). http://en.wikipedia.org/wiki/Image_gradient#Math AI: Check out the documentation for the MATLAB function 'imfilter'. The first box on that page has the title 'Boundary Options'. It lists four options, for example: 'replicate' Input array values outside the bounds of the array are assumed to equal the nearest array border value. This is a standard problem when filtering an image (and you are approximating the gradient with a filter), and there are many ways to deal with it that are more or less equally good.
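The 'replicate' option is easy to implement by hand. Here is a minimal pure-Python sketch (my own, mirroring the idea rather than MATLAB's actual implementation) for a 1-D signal: pad by repeating the border values, then apply the [1, 0, -1] kernel at every position, including the edges.

```python
def replicate_pad(signal, k):
    """Pad a 1-D list by repeating its border values k times on each side."""
    return [signal[0]] * k + list(signal) + [signal[-1]] * k

def gradient_1d(signal):
    """Gradient with the [1, 0, -1] kernel, using replicate padding
    so the edge pixels are well defined."""
    padded = replicate_pad(signal, 1)
    # value to the left minus value to the right, per the question
    return [padded[i - 1] - padded[i + 1] for i in range(1, len(padded) - 1)]

row = [10, 10, 20, 40, 40]
grads = gradient_1d(row)
print(grads)
```

Note that with replicate padding the gradient at a flat edge comes out as 0, which is usually the desired behavior; zero-padding would instead produce a spurious jump at the border.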
H: Polynomials irreducible over $\mathbb{Q}$ but reducible over $\mathbb{F}_p$ for every prime $p$ Let $f(x) \in \mathbb{Z}[x]$. If we reduce the coefficients of $f(x)$ modulo $p$, where $p$ is prime, we get a polynomial $f^*(x) \in \mathbb{F}_p[x]$. Then if $f^*(x)$ is irreducible and has the same degree as $f(x)$, the polynomial $f(x)$ is irreducible. This is one way to show that a polynomial in $\mathbb{Z}[x]$ is irreducible, but it does not always work. There are polynomials which are irreducible in $\mathbb{Z}[x]$ yet factor in $\mathbb{F}_p[x]$ for every prime $p$. The only examples I know are $x^4 + 1$ and $x^4 - 10x^2 + 1$. I'd like to see more examples; in particular an infinite family of polynomials like this would be interesting. How does one go about finding them? Has anyone ever attempted classifying all polynomials in $\mathbb{Z}[x]$ with this property? AI: There are two crucial results here. Dedekind's theorem: Let $f$ be a monic irreducible polynomial over $\mathbb{Z}$ of degree $n$ and let $p$ be a prime such that $f$ has distinct roots $\bmod p$ (this is true for precisely the primes not dividing the discriminant). Suppose that the prime factorization of $f \bmod p$ is $$f \equiv \prod_{i=1}^k f_i \bmod p.$$ Then the Galois group $G$ of $f$ contains an element of cycle type $(\deg f_1, \deg f_2, ...)$. In particular, if $f$ is irreducible $\bmod p$, then $G$ contains an $n$-cycle. Frobenius density theorem: The density of the primes with respect to which the factorization of $f \bmod p$ has the above form is equal to the density of elements of $G$ with the corresponding cycle type. In particular, for every cycle type there is at least one such prime $p$. It follows that $f$ is reducible $\bmod p$ for all $p$ if and only if $G$ does not contain an $n$-cycle. The smallest value of $n$ for which this is possible is $n = 4$, where the Galois group $V_4 \cong C_2 \times C_2$ has no $4$-cycle.
Thus to write down a family of examples it suffices to write down a family of irreducible quartics with Galois group $V_4$. As discussed for example in this math.SE question, if $$f(x) = (x^2 - a)^2 - b$$ is irreducible and $a^2 - b$ is a square, then $f$ has Galois group $V_4$. In particular, taking $b = a^2 - 1$ the problem reduces to finding infinitely many $a$ such that $$f(x) = x^4 - 2ax^2 + 1$$ is irreducible. We get your examples by setting $a = 0, 5$. By the rational root theorem, the only possible rational roots of $f$ are $\pm 1$, so by taking $a \neq 1$ we already guarantee that $f$ has no rational roots. If $f$ splits into two quadratic factors, then they both have constant term $\pm 1$, so we can write $$x^4 - 2ax^2 + 1 = (x^2 - bx \pm 1)(x^2 + bx \pm 1)$$ for some $b$. This gives $$2a = b^2 \mp 2.$$ Thus $f$ is irreducible if and only if $2a$ cannot be written in the above form (and also $a \neq 1$). Classifying polynomials $f$ with this property seems quite difficult in general. When $n = 4$, it turns out that $V_4$ is in fact the only transitive subgroup of $S_4$ not containing a $4$-cycle, but for higher values of $n$ there should be lots more, and then one has to tell whether a polynomial has one of these as a Galois group... (Except if $n = q$ is prime; in this case $q | |G|$ so it must have a $q$-cycle.)
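The claim that $x^4+1$ factors modulo every prime is easy to check computationally for small primes. Here is a brute-force Python sketch (my own check, no computer algebra system assumed): a monic quartic over $GF(p)$ is reducible iff it has a root or a monic quadratic factor, and for $x^4+1$ the quadratic case reduces to matching coefficients.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def quartic_x4_plus_1_reducible_mod(p):
    """Check whether x^4 + 1 factors over GF(p), by brute force."""
    # linear factor: a root of x^4 + 1
    if any((r**4 + 1) % p == 0 for r in range(p)):
        return True
    # quadratic factor: match (x^2+bx+c)(x^2+ex+f) = x^4 + 1 coefficientwise
    for b in range(p):
        for c in range(p):
            e = (-b) % p             # x^3 coefficient: b + e = 0
            f = (b * b - c) % p      # x^2 coefficient: c + f + b*e = 0
            # remaining conditions: x^1 and x^0 coefficients
            if (b * f + c * e) % p == 0 and (c * f) % p == 1:
                return True
    return False

primes = [p for p in range(2, 60) if is_prime(p)]
results = {p: quartic_x4_plus_1_reducible_mod(p) for p in primes}
print(results)
```

As the theory above predicts (the Galois group of $x^4+1$ is $V_4$, which has no $4$-cycle), the check succeeds for every prime tried.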
H: A sufficient condition for order isomorphism of posets? Let $\mathfrak{A}$ be a poset. For $a, b \in \mathfrak{A}$ we will denote $a \not\asymp b$ if and only if there is a non-least element $c$ such that $c \leqslant a \wedge c \leqslant b$. Let $\mathfrak{A}$, $\mathfrak{B}$ be posets. I call a pointfree funcoid a pair $\left( \alpha ; \beta \right)$ of functions $\alpha : \mathfrak{A} \rightarrow \mathfrak{B}$, $\beta : \mathfrak{B} \rightarrow \mathfrak{A}$ such that $$ \forall x \in \mathfrak{A}, y \in \mathfrak{B}: \left( y \not\asymp^{\mathfrak{B}} \alpha \left( x \right) \Leftrightarrow x \not\asymp^{\mathfrak{A}} \beta \left( y \right) \right) . $$ Conjecture If $\left( \alpha ; \beta \right)$ is a pointfree funcoid and $\alpha$ is a bijection $\mathfrak{A} \rightarrow \mathfrak{B}$, then $\alpha$ is an order isomorphism $\mathfrak{A} \rightarrow \mathfrak{B}$. A weaker conjecture: Conjecture If $\left( \alpha ; \beta \right)$ is a pointfree funcoid and $\alpha$ is a bijection $\mathfrak{A} \rightarrow \mathfrak{B}$ and $\beta$ is a bijection $\mathfrak{B} \rightarrow \mathfrak{A}$, then $\alpha$ is an order isomorphism $\mathfrak{A} \rightarrow \mathfrak{B}$. If these conjectures are false, what additional conditions may we add to make them true? (Maybe these are true for lattices? distributive lattices?) AI: Both conjectures are false for infinite posets. Take for instance any bijections of sets $\alpha,\beta$ ($\beta=\alpha^{-1}$ works fine) between $\mathbb{Z}$ and $\mathbb{Q}$. Because for all $m,n\in\mathbb{Z}$ and for all $r,s\in\mathbb{Q}$ we always have $m\not\asymp n$ and $r\not\asymp s$ (neither poset has a least element, and any two elements have a common lower bound), there is really no condition imposed on $\alpha$ and $\beta$, but neither can be an order isomorphism. Maybe the answer is affirmative if you look at finite posets.
H: Can anyone give any insight on this group given these generators and relations? $G = \langle x,y | x^3 = 1, y^3 = 1, (xy)^3 = 1, (xy^2)^n = 1 \rangle$ I am studying this group and I can't seem to get anywhere with it. I've tried making a Cayley table, but it's getting pretty big. This makes me think I'm doing something wrong, though that doesn't have to be the case. Maybe the group is larger than I expected. I am assuming it's nonabelian. My specific questions are: Is this a relatively common group? Does it have a name? What is the order of $G$? (Note: an earlier version of this question accidentally left out the $n$ in $(xy^2)^n$.) AI: Set $a=xy^2$ and $b=a^x = y^2x$, so that $ab= xyx$ and $ba = y^2 x^2 y^2$. But $xyxyxy = 1$ so $xyx = y^{-1} x^{-1} y^{-1} = y^2 x^2 y^2$, so $a$ and $b$ commute, and so form a normal abelian subgroup $A$ that is a quotient of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$, and which together with either $x$ or $y$ generates $G$. Check that the semidirect product of $\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$ with $x(a) = b$, $x(b)= (ab)^{-1}$ satisfies the relations so that $G$ is a generalization of the alternating group of order 12, having in general order $3n^2$ instead of $3\cdot 2^2$. If you omit the last relation (set $n=0$), then you get the following faithful integral matrix representation of the group. To include the last relation, just interpret the matrices mod $n$ to get a faithful matrix rep over $\mathbb{Z}/n\mathbb{Z}$. $$ x = \left[\begin{smallmatrix} 0 & 1 & 0 \\ -1 & -1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right] \quad y = \left[\begin{smallmatrix} 0 & 1 & -1 \\-1 & -1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right] \quad a = \left[\begin{smallmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{smallmatrix}\right] \quad b = \left[\begin{smallmatrix} 1 & 0 & -1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{smallmatrix}\right] $$
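The claimed representation mod $n$ can be sanity-checked by machine. A quick Python sketch (my own, with the arbitrary choice $n=5$) verifies the four defining relations and confirms by brute-force closure that the two matrices generate exactly $3n^2$ elements:

```python
n = 5  # interpret the matrices mod n (arbitrary small choice)

def mat_mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(3)) % n
                       for j in range(3)) for i in range(3))

I = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
x = ((0, 1, 0), ((-1) % n, (-1) % n, 0), (0, 0, 1))
y = ((0, 1, (-1) % n), ((-1) % n, (-1) % n, 0), (0, 0, 1))

def power(A, k):
    R = I
    for _ in range(k):
        R = mat_mul(R, A)
    return R

xy = mat_mul(x, y)
xy2 = mat_mul(x, mat_mul(y, y))

# the four defining relations: x^3 = y^3 = (xy)^3 = (xy^2)^n = 1
rels = [power(x, 3), power(y, 3), power(xy, 3), power(xy2, n)]

# order of the group generated by x and y, via breadth-first closure
group = {I}
frontier = [I]
while frontier:
    new = []
    for g in frontier:
        for h in (x, y):
            gh = mat_mul(g, h)
            if gh not in group:
                group.add(gh)
                new.append(gh)
    frontier = new
print(len(group))
```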
H: $PGL_2(q)$ acts on $\Omega$ $3$-transitively? Anyone who studies permutation groups will encounter the following definition: a group $G$ acting on a set $\Omega$ is said to be “sharply $m$-transitive” iff $$\forall (a_1,a_2,\ldots,a_m) , (b_1,b_2,\ldots,b_m) \in \Omega^{m}:\ \exists! g \in G ,\ a_i^g=b_i,\ 1\leq i\leq m.$$ While reviewing my class notes about the group $PGL_2(q)$, I came across the fact that $PGL_2(q)$ acting on the set $\Omega=GF(q)\cup\{\infty\}$ is sharply $3$-transitive. The idea for justifying that is as follows: since $PGL_2(q)=\{f \mid f:\Omega\longrightarrow\Omega,\ f(z)=\frac{az+b}{cz+d},\ ad-bc\neq 0;\ a,b,c,d\in GF(q)\}$, we have $PGL_2(q)_{\infty}=\{f \mid f:\Omega\longrightarrow\Omega,\ f(z)= az+b ,\ a\neq 0;\ a,b\in GF(q)\}$, $PGL_2(q)_{\infty,0}=\{f \mid f:\Omega\longrightarrow\Omega,\ f(z)= az ,\ a\neq 0;\ a\in GF(q)\}$ and $PGL_2(q)_{\infty,0,1}=\{f \mid f:\Omega\longrightarrow\Omega,\ f(z)= z\}=\{\mathrm{id}\}$. Knowing that $|PGL_2(q)|=q(q^2-1)$, we get $|PGL_2(q)_{\infty}|=q(q-1)$ and then $|PGL_2(q)_{\infty,0}|=q-1$. Now, since $|PGL_2(q):PGL_2(q)_{\infty}|=q+1=|\Omega|$, $PGL_2(q)$ acts transitively on $\Omega$; and because $|PGL_2(q)_{\infty}:PGL_2(q)_{\infty,0}|=q=|\Omega-\{\infty\}|$, $PGL_2(q)$ acts $2$-transitively on $\Omega$. Finally, it was concluded in class that since $|PGL_2(q)_{\infty,0,1}|=1$, $PGL_2(q)$ acts sharply $3$-transitively on $\Omega$. I should confess I can't reach the last result, and the above definition is no help to me. My question is: why is the group sharply $3$-transitive on $\Omega$? Thanks, and sorry for my long question.
For that, it is sufficient to show that the action of the stabilizer of two points, i.e. $PGL_2(q)_{\infty,0}$, is transitive on $\Omega\setminus\{\infty,0\}$. Indeed, if $x,y\in GF(q)\setminus\{0\}$, then take $a=yx^{-1}$ and you have $f(z)=az$ such that $f(x)=y$.
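For a small prime this can be verified exhaustively. Here is a Python sketch for $q=5$ (my own check): enumerate all Möbius maps with $ad-bc\neq 0$ on $GF(5)\cup\{\infty\}$, then confirm that there are $q(q^2-1)=120$ distinct maps and that each of the $(q+1)q(q-1)=120$ ordered triples of distinct points arises as the image of $(\infty,0,1)$ under exactly one map.

```python
from itertools import product

q = 5                      # a small prime, so GF(q) is just Z/qZ
INF = 'inf'
points = list(range(q)) + [INF]

def inv(a):
    return pow(a, q - 2, q)     # inverse in GF(q) by Fermat's little theorem

def mobius(a, b, c, d, z):
    """Evaluate (a*z + b)/(c*z + d) on GF(q) together with the point inf."""
    if z == INF:
        return (a * inv(c)) % q if c % q else INF
    num = (a * z + b) % q
    den = (c * z + d) % q
    return (num * inv(den)) % q if den else INF

# collect the distinct maps with ad - bc != 0 (scalar multiples coincide)
maps = set()
for a, b, c, d in product(range(q), repeat=4):
    if (a * d - b * c) % q:
        maps.add(tuple(mobius(a, b, c, d, z) for z in points))

# sharp 3-transitivity: every ordered triple of distinct points is the
# image of (inf, 0, 1) under exactly one group element
i_inf, i0, i1 = points.index(INF), points.index(0), points.index(1)
images = [(m[i_inf], m[i0], m[i1]) for m in maps]
print(len(maps), len(set(images)))
```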
H: Division by $2p+1$ Can $\left\lfloor{\dfrac{x}{2p+1}} \right\rfloor$ be expressed in terms of $\left\lfloor{\dfrac{x}{p}} \right\rfloor$ for prime $p$? How to divide by $2p+1$ by only using division by $p$? EDIT: The above formulation is wrong. I meant "expressed in terms" in a sense broader that "a function that takes $\left\lfloor{\dfrac{x}{p}} \right\rfloor$ as an argument. Different version: let $0\leq a,b < 2p+1$ ($a,b$ known integers) and $x=ab$. How to divide $x$ by $2p+1$ in a way cheaper than just dividing by $2p+1$? Dividing by $p$ is cheaper than dividing by $2p+1$. It doesn't have to be a formula, algorithm is also ok. AI: $$ \frac{x}{2p+1} = \frac{x}{2p} \frac{1}{1+1/(2p)} = \sum_{j=0}^\infty (-1)^j \frac{x}{(2p)^{j+1}}$$ For $\left\lfloor \dfrac{x}{2p+1} \right\rfloor$, you can stop the series if you come to a point where further terms can't make a difference (which should happen unless $x$ is an integer multiple of $2p+1$). Thus if $$S_N = \sum_{j=0}^N (-1)^j \dfrac{x}{(2p)^{j+1}}$$ $\left\lfloor \dfrac{x}{2p+1} \right\rfloor = \left\lfloor S_N \right\rfloor$ if $N$ is even and $x/(2p)^{N+2} < S_N - \lfloor S_N \rfloor$ or $N$ is odd and $x/(2p)^{N+2} < \lceil S_N \rceil - S_N$.
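Here is a Python sketch of the resulting algorithm (exact arithmetic via `fractions`; the packaging of the stopping rule is my own). The odd `max_terms` makes the fallback partial sum slightly overestimate the true value, which handles the case where $x$ is an exact multiple of $2p+1$ and the stopping test never fires.

```python
from fractions import Fraction
from math import floor

def floor_div_series(x, p, max_terms=63):
    """floor(x/(2p+1)) via the alternating series sum_j (-1)^j x/(2p)^(j+1),
    stopping as soon as the remaining terms cannot change the floor."""
    twop = 2 * p
    S = Fraction(0)
    term = Fraction(x, twop)               # x / (2p)^(j+1) for j = 0
    for j in range(max_terms):
        S += term if j % 2 == 0 else -term
        nxt = term / twop                  # magnitude of the next term
        frac = S - floor(S)
        if j % 2 == 0 and nxt < frac:      # S_N overestimates the true value
            return floor(S)
        if j % 2 == 1 and nxt < 1 - frac:  # S_N underestimates it
            return floor(S)
        term = nxt
    return floor(S)   # reached only when x is a multiple of 2p+1

checks = [(x, p) for p in (2, 3, 5) for x in range(200)]
ok = all(floor_div_series(x, p) == x // (2 * p + 1) for x, p in checks)
print(ok)
```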
H: Group presentation for semidirect products If $G$ and $H$ are groups with presentations $G=\langle X|R \rangle$ and $H=\langle Y| S \rangle$, then of course $G \times H$ has presentation $\langle X,Y | xy=yx \ \forall x \in X \ \text{and} \ y \in Y, R,S \rangle$. Given two group presentations $G=\langle X|R \rangle$ and $H=\langle Y| S \rangle$ and a homomorphism $\phi: H \rightarrow \operatorname{Aut}(G)$, what is a presentation for $G \rtimes H$? Is there a nice presentation, as in the direct product case? Thanks! AI: Let $G = \langle X \mid R\rangle$ and $H = \langle Y \mid S\rangle$, and let $\phi\colon H\to\mathrm{Aut}(G)$. Then the semidirect product $G\rtimes_\phi H$ has the following presentation: $$ G\rtimes_\phi H \;=\; \langle X, Y \mid R,\,S,\,yxy^{-1}=\phi(y)(x)\text{ for all }x\in X\text{ and }y\in Y\rangle $$ Note that this specializes to the presentation of the direct product in the case where $\phi$ is trivial.   For example, let $G = \langle x \mid x^n = 1\rangle$ be a cyclic group of order $n$, let $H = \langle y \mid y^2=1\rangle$ be a cyclic group of order two, and let $\phi\colon H \to \mathrm{Aut}(G)$ be the homomorphism defined by $\phi(y)(x) = x^{-1}$. Then the semidirect product $G\rtimes_\phi H$ is the dihedral group of order $2n$, with presentation $$ G\rtimes_\phi H \;=\; \langle x,y\mid x^n=1,y^2=1,yxy^{-1}=x^{-1}\rangle. $$
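The dihedral example can be checked concretely with permutations. A Python sketch (my own, with the arbitrary choice $n=7$): realize $x$ as the rotation $i\mapsto i+1$ and $y$ as the reflection $i\mapsto -i$ on $\mathbb{Z}/n\mathbb{Z}$, verify the three relations of the presentation, and confirm that the generated group has order $2n$.

```python
n = 7  # any n >= 3; made-up choice for illustration

def compose(p, q):
    """(p o q)(i) = p[q[i]]: composition of permutations of range(n)."""
    return tuple(p[q[i]] for i in range(n))

identity = tuple(range(n))
r = tuple((i + 1) % n for i in range(n))   # x: rotation, order n
s = tuple((-i) % n for i in range(n))      # y: reflection, order 2

def power(p, k):
    out = identity
    for _ in range(k):
        out = compose(out, p)
    return out

r_inv = power(r, n - 1)
# the three defining relations of C_n x| C_2 (s is its own inverse)
rel1 = power(r, n) == identity
rel2 = compose(s, s) == identity
rel3 = compose(compose(s, r), s) == r_inv      # s r s^{-1} = r^{-1}

# brute-force closure: the generated group is dihedral of order 2n
group = {identity}
frontier = [identity]
while frontier:
    new = []
    for g in frontier:
        for h in (r, s):
            gh = compose(g, h)
            if gh not in group:
                group.add(gh)
                new.append(gh)
    frontier = new
print(rel1, rel2, rel3, len(group))
```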
H: Diagonal Lemma justification Given the Diagonal Lemma stated as follows: Diagonal Lemma. Let $\mathfrak{T}$ be a theory which is capable of representing the primitive recursive functions, with a codification schema for formulas in $\mathfrak{T}$ such that $\ulcorner \phi \urcorner$ is the codification of $\phi$. For every formula $\psi(x)$ where $x$ is its only free variable, there is a sentence $\delta$ such that $\mathfrak{T} \vdash \delta \leftrightarrow \psi(\ulcorner \delta \urcorner)$. Why is it named after Cantor's diagonal argument? Justifying the name for the standard diagonalization technique demonstrating that the set of reals isn't countable is straightforward, but for this lemma it isn't. AI: Are you asking why it is named "Diagonal Lemma"? It doesn't seem to be named directly after Cantor, but I suspect Cantor's argument was an early, influential use of arguments that make use of the diagonal elements of some structure. The term "diagonal" is now often used to refer to the mapping $i \mapsto A(i,i)$ whenever $A(i,j)$ parametrizes some natural collection of objects (or just the image of this mapping). Another analogy is that of the diagonal of a matrix. The Diagonal Lemma can be viewed as constructing an object on a certain diagonal, hence the name.
H: Lower bound for $\|A-B\|$ when $\operatorname{rank}(A)\neq \operatorname{rank}(B)$, both $A$ and $B$ are idempotent Let's first focus on $k$-by-$k$ matrices. We know that rank is a continuous function for idempotent matrices, so when we have, say, $\operatorname{rank}(A)>\operatorname{rank}(B)+1$, the two matrices cannot be close in norm topology. But I wonder whether there is an explicit lower bound of the distance between two idempotent matrices in terms of their difference in their ranks. Thanks! AI: The rank of an idempotent matrix is its trace, and $|\text{Tr}(A-B)| \le k \|A-B\|$, so if $\|A-B\| < 1/k$ they must have the same rank. EDIT: Actually if $\|A - B\| < 1$ (where $\|\cdot\|$ is an operator norm) they must have the same rank. Suppose $\text{rank}(A) < \text{rank}(B)$ and $B$ is idempotent. Let $V = \text{Ran}(A)$ and $W = \text{Ran}(B)$. Then the restriction of $A$ to $W$ maps $W$ into $V$. Since $W$ has higher dimension than $V$, this map can't be one-to-one, so there is some nonzero $w \in W$ such that $Aw = 0$. But $Bw = w$, so $(B-A)w = w$ and $\|B-A\| \ge 1$.
H: Countably Compact vs Compact vs Finite Intersection Property There is this exercise: Show that countable compactness is equivalent to the following condition. If ${C_n}$ is a countable collection of closed sets in S satisfying the finite intersection hypothesis, then $\bigcap_{i=1}^\infty C_i$ is nonempty. Definitions: A Space S is countably compact if every infinite subset of S has a limit point in S. A space S has the finite intersection property provided that if ${C_\alpha}$ is any collection of closed sets such that any finite number of them has a nonempty intersection, then the total intersection $\bigcap_\alpha C_\alpha$ is non-empty A family of closed sets, in any space, such that any finite number of them has a nonempty intersection, will be said to satisfy the finite intersection hypothesis. Now there is also a related theorem in the book: Compactness is equivalent to the finite intersection property. Sounds to me countable compactness and compactness are pretty much the same. I am not asking for a solution to the exercise. My question is this: What is the difference between Compactness and Countable Compactness in terms of closed Collections? Both things sound to me like this: Given a collection of closed sets, when a finite number of them has a nonempty intersection, all of them have a nonempty intersection. BTW: the definitions, the theorems and the exercise are from Topology by Hocking/Young. AI: The difference is that if $X$ is compact, every collection of closed sets with the finite intersection property has a non-empty intersection; if $x$ is only countably compact, this is guaranteed only for countable collections of closed sets with the finite intersection property. In a countably compact space that is not compact, there will be some uncountable collection of closed sets that has the finite intersection property but also has empty intersection. An example is the space $\omega_1$ of countable ordinals with the order topology. 
For each $\xi<\omega_1$ let $F_\xi=\{\alpha<\omega_1:\xi\le\alpha\}=[\xi,\omega_1)$, and let $\mathscr{F}=\{F_\xi:\xi<\omega_1\}$. $\mathscr{F}$ is a nested family: if $\xi<\eta<\omega_1$, then $F_\xi\supsetneqq F_\eta$. Thus, it certainly has the finite intersection property: if $\{F_{\xi_0},F_{\xi_1},\dots,F_{\xi_n}\}$ is any finite subcollection of $\mathscr{F}$, and $\xi_0<\xi_1<\ldots<\xi_n$, then $F_{\xi_0}\cap F_{\xi_1}\cap\ldots\cap F_{\xi_n}=F_{\xi_n}\ne\varnothing$. But $\bigcap\mathscr{F}=\varnothing$, because for each $\xi<\omega_1$ we have $\xi\notin F_{\xi+1}$. This space is a standard example of a countably compact space that is not compact. Added: Note that neither of them says: Given a collection of closed sets, when a finite number of them has a nonempty intersection, all of them have a nonempty intersection. The finite intersection property is not that some finite number of the sets has non-empty intersection: it says that every finite subfamily has non-empty intersection. Consider, for instance, the sets $\{0,1\},\{1,2\}$, and $\{0,2\}$: every two of them have non-empty intersection, but the intersection of all three is empty. This little collection of sets does not have the finite intersection property. Here is perhaps a better way to think of these results. In a compact space, if you have a collection $\mathscr{C}$ of closed sets whose intersection $\bigcap\mathscr{C}$ is empty, then some finite subcollection of $\mathscr{C}$ already has empty intersection: there is some positive integer $n$, and there are some $C_1,\dots,C_n\in\mathscr{C}$ such that $C_1\cap\ldots\cap C_n=\varnothing$. In a countably compact space something similar but weaker is true: if you have a countable collection $\mathscr{C}$ of closed sets whose intersection $\bigcap\mathscr{C}$ is empty, then some finite subcollection of $\mathscr{C}$ already has empty intersection.
In a countably compact space you can’t in general say anything about uncountable collections of closed sets with empty intersection.
H: Changing order of summation I would like to rewrite the sum $$\sum_{i=1}^K \sum_{l=-\infty}^\infty \sum_{j=-\infty}^\infty f(i+lK;j-l)$$ In the form $$ \dots\sum_{s=-\infty}^\infty \sum_{w=-\infty}^\infty f(s,w)$$ where $s=i+lK$, $w=j-l $. How do I do it? AI: I think that the sum is exactly $$\sum_{s=-\infty}^\infty \sum_{w=-\infty}^\infty f(s,w)$$ Indeed, consider an ordered pair $(s,w)\in\mathbb Z\times\mathbb Z$. The integer $s\in \mathbb Z$ has a unique representation $s=i+lK$ for some $i\in\{1,\dots,K\}$ and $l\in\mathbb Z$. Since $l$ is already determined, from $w=j-l$ we get $j=w+l$. Thus, $(s,w)$ appears in the original sum exactly once.
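For a function of finite support the identity can be checked directly. A small Python sketch (the particular $f$ and the window sizes are my own, chosen so the finite windows cover the whole support of $f$):

```python
K = 3

def f(s, w):
    """Toy function with finite support, so both sums are finite."""
    if abs(s) <= 3 and abs(w) <= 3:
        return (s + 5) * (w + 11)   # arbitrary nonzero values on the support
    return 0

# original triple sum: i in 1..K, with l and j running over windows wide
# enough that every support point of f is reached (and reached exactly once,
# since each integer s has a unique representation s = i + l*K)
lhs = sum(f(i + l * K, j - l)
          for i in range(1, K + 1)
          for l in range(-5, 5)
          for j in range(-10, 11))

# reindexed double sum over s = i + l*K, w = j - l
rhs = sum(f(s, w) for s in range(-3, 4) for w in range(-3, 4))
print(lhs, rhs)
```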
H: Why is the matrix representing a non-degenerate sesquilinear form invertible? Let's consider a finite-dimensional vector space $E$ on the field $\mathbb{K}$ (where $\mathbb{K}=\mathbb{C} \ \text{or}\ \mathbb{R}$) and a sesquilinear (or bilinear if $\mathbb{K}=\mathbb{R}$) form $q:E\times E \rightarrow \mathbb{K}$. The definition for a non-degenerate form is that $q(x,y)=0\ \forall y\in E$ implies $x=0$. Now if we represent $q(x,y)$ with a matrix, so $q(x,y) =x^HAy$, why does the condition that the form be non-degenerate impose that $A$ is non-singular? I tried to see it using the dual space as $M(x,A)=x^HA\in E^*$, so that $M:E\times L(E,E)\rightarrow E^*$, where $L(E,E)$ is the vector space of all linear transformations from $E$ to $E$ and playing with the nullspace of $A$, but I just can't see it AI: Let $q$ be a sesquilinear form on a vector space $E$, given by a matrix $A$. The following statements are equivalent: $q$ is degenerate. There exists a nonzero vector $x\in E$ so that $q(x,y)=0$ for all $y\in E$. There exists a nonzero vector $x\in E$ so that $x^H A y = 0$ for all $y\in E$. There exists a nonzero vector $x\in E$ so that $x^H A$ is the zero (row) vector. The left nullspace of $A$ is non-trivial. The matrix $A$ is singular. It should be clear that $(1)\Leftrightarrow(2)\Leftrightarrow(3)\Leftrightarrow(4)\Leftrightarrow(5)\Leftrightarrow(6)$.
H: Primitive element of $\mathbb{Q}(\sqrt{2}+i,\sqrt{3}-i)/\mathbb{Q}$ Is there a clever way to determine a primitive element of the finite extension $$F=\mathbb{Q}(\sqrt{2}+i,\sqrt{3}-i)/\mathbb{Q} \text{ ?}$$ On simpler examples, I've been able to find one by determining all field morphisms $\sigma: F\to\mathbb{C}$ such that $\sigma|_\mathbb{Q}=\text{id}$ and finding $x\in F$ with a different image under each such $\sigma$. But here, it seems quite painful since the degree is between 8 and 16... AI: It is relatively straightforward to show that $F=\mathbb{Q}(i,\sqrt2,\sqrt3)$, because it is easy to see that $\mathbb{Q}(\sqrt2+i)=\mathbb{Q}(\sqrt2,i)$, and similarly $\mathbb{Q}(\sqrt3-i)=\mathbb{Q}(\sqrt3,i)$. Thus $[F:\mathbb{Q}]=8$, this extension is Galois, and the Galois group is elementary 2-abelian. An automorphism $\sigma$ is uniquely determined once we specify $\sigma(i)=\pm i$ (two choices), $\sigma(\sqrt2)=\pm\sqrt2$ and $\sigma(\sqrt3)=\pm\sqrt3$ (again two choices). If an element $z\in F$ is not fixed by any of these automorphisms, it is not in any of the subfields, so for example $$ z=2i+3\sqrt2+5\sqrt3 $$ will be a primitive element.
H: Probability a coin comes up heads more often than tails I am told that a fair coin is flipped $2n$ times and I have to find the probability that it comes up heads more often than it comes up tails. Please, how do I find the required probability? AI: Note that we have $$P(\text{# Heads} > \text{#Tails}) + P(\text{# Heads} = \text{#Tails}) + P(\text{# Heads} < \text{#Tails}) = 1$$ Assuming you are tossing a fair coin, by symmetry, we also have that $$P(\text{# Heads} > \text{#Tails})= P(\text{# Heads} < \text{#Tails})$$ If we want to get $k$ heads in $2n$ tosses, where the probability of getting a head is $p$, then the probability is $$\dbinom{2n}k p^k (1-p)^{2n-k}$$ In our case, if we want the number of heads to be the same as the number of tails, then $k = n$, and if we are tossing a fair coin then $p=1/2$. Hence, we get $$P(\text{# Heads} = \text{#Tails}) = \dbinom{2n}n \left(\dfrac12 \right)^n \left(\dfrac12 \right)^n = \dfrac1{2^{2n}} \dbinom{2n}n$$ Hence, we get that $$P(\text{# Heads} > \text{#Tails})= P(\text{# Heads} < \text{#Tails}) = \dfrac{1-\dfrac{\dbinom{2n}{n}}{2^{2n}}}2 = \dfrac12 - \dfrac{\dbinom{2n}n}{2^{2n+1}}$$
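The closed form from the symmetry argument can be confirmed by brute force for a small $n$; a quick Python sketch (my own check, with the arbitrary choice $n=3$):

```python
from itertools import product
from math import comb

n = 3          # 2n = 6 tosses of a fair coin

# brute-force enumeration of all 2^(2n) equally likely outcomes
outcomes = list(product('HT', repeat=2 * n))
more_heads = sum(1 for o in outcomes if o.count('H') > o.count('T'))
prob_enumerated = more_heads / len(outcomes)

# closed form: 1/2 - C(2n, n) / 2^(2n+1)
prob_formula = 0.5 - comb(2 * n, n) / 2 ** (2 * n + 1)
print(prob_enumerated, prob_formula)
```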
H: Calculus, Problem. $$\large f\left(x\right) = \int\limits_{\cos x}^{\sin x} e^{t^2+xt}dt.$$Compute $f'\left(0\right)$. I can't get it right -sigh- :/ AI: If we have $$f(x) = \int_{a(x)}^{b(x)} g(t,x) dt,$$ then for "nice enough" $g(t,x)$ $$f'(x) = \int_{a(x)}^{b(x)} \dfrac{\partial g(t,x)}{\partial x} dt + g(b(x),x) \dfrac{db(x)}{dx} - g(a(x),x) \dfrac{da(x)}{dx}$$ In your case, $g(t,x) = \exp(t^2 + xt)$, $a(x) = \cos(x)$ and $b(x) = \sin(x)$. $$\dfrac{\partial g(t,x)}{\partial x} = t\exp(t^2+xt), \dfrac{db(x)}{dx} = \cos(x), \dfrac{da(x)}{dx} = -\sin(x)$$ Hence, $$f'(0) = \int_{a(0)}^{b(0)} t\exp(t^2) dt + g(b(0),0) \cos(0) - g(a(0),0) (-\sin(0))$$ $$ = \int_{1}^{0} t\exp(t^2) dt + g(0,0) \times 1 + g(1,0) \times 0$$ $$ = \left. \dfrac{\exp(t^2)}{2} \right \vert_1^0 + 1 = 1 + \left(\dfrac{1 - \exp(1)}2 \right) = \left(\dfrac{3 - \exp(1)}2 \right)$$
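The value $(3-e)/2$ can be sanity-checked numerically; a rough Python sketch (midpoint-rule quadrature plus a central difference; the step sizes are my own choices):

```python
import math

def integrand(t, x):
    return math.exp(t * t + x * t)

def f(x, steps=4000):
    """f(x) = integral of e^(t^2 + x*t) dt from cos(x) to sin(x),
    approximated by the midpoint rule."""
    a, b = math.cos(x), math.sin(x)
    dt = (b - a) / steps
    return dt * sum(integrand(a + (k + 0.5) * dt, x) for k in range(steps))

h = 1e-3
fprime0 = (f(h) - f(-h)) / (2 * h)   # central difference at x = 0
exact = (3 - math.e) / 2
print(fprime0, exact)
```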
H: Eigenvectors of $P^{-1}AP$ Let $A\in M_{n}(\mathbb{C})$ and assume that $A$ is diagonalizable, let $P\in M_{n}(\mathbb{C})$ be an invertible matrix. My question is: what are the eigenvectors of $P^{-1}AP$? I think it's probably something like $P$(eigenvectors of $A$), but I don't remember... I appreciate any help. AI: Assume $Ax = \lambda x$. Then if you take $v = P^{-1}x$, you have $$ P^{-1}AP(v) = P^{-1}A P(P^{-1}x) = P^{-1} Ax = P^{-1} \lambda x = \lambda P^{-1} x = \lambda v. $$ Therefore the vectors you are looking for are $P^{-1}$ times the eigenvectors of $A$. (Because conversely, an eigenvector of $P^{-1}AP$ gives eigenvectors for $A = (P^{-1})^{-1}(P^{-1}AP)(P^{-1})$ by letting $x = (P^{-1})^{-1} v = Pv$. The function $x \mapsto P^{-1}x = v$ is therefore a bijection from the eigenvectors of $A$ to the eigenvectors of $P^{-1}AP$.) Hope that helps,
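A tiny concrete instance (my own numbers, exact arithmetic with `fractions`) makes the computation above tangible: take a diagonal $A$, an invertible $P$, and check that $v=P^{-1}x$ is indeed an eigenvector of $P^{-1}AP$ with the same eigenvalue.

```python
from fractions import Fraction as F

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[F(2), F(0)], [F(0), F(3)]]        # eigenvalues 2 and 3
P = [[F(1), F(1)], [F(0), F(1)]]
P_inv = [[F(1), F(-1)], [F(0), F(1)]]   # inverse of P (verified in the test)

x = [F(0), F(1)]       # eigenvector of A for eigenvalue 3
v = mat_vec(P_inv, x)  # claimed eigenvector of P^{-1} A P

M = mat_mul(mat_mul(P_inv, A), P)
print(mat_vec(M, v), [3 * c for c in v])
```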
H: Understanding torsion from a presentation Let $F_2 = \langle a,b \rangle$ be the free group on two generators, and for each word $w \in F_2$, let $G(w) = \langle a, b \ | \ w \rangle$. Is the following statement true? $G(w)$ is torsion-free if and only if for all $k \geq 2$ and for all $v \in F_2$, $w \neq v^k$ In other words, is it true that $G(w)$ is torsion-free unless there is an obvious reason why it is not? AI: Yes, this is true. It is proved as Theorem 4.12 in "Combinatorial Group Theory", by W. Magnus, A. Karrass and D. Solitar. (It's on p. 266 in the Dover reprint.) Alternatively, see Propositions 5.17 and 5.18 in "Combinatorial Group Theory", by R.C. Lyndon and P.E. Schupp. It is true without the restriction on the rank of the free group, though you can reduce to that case by an embedding. The most straight-forward proof uses induction on the length of the relator.
H: Minimize $\| ACE \|$ by geometrical means I have the following figure where $AB=10$m, $BD=12$m and $DE=12$m. The point $C$ can slide along the segment $BD$. Now the problem is to minimize the distance from $A$ to $D$ going along the dashed line. The problem can be solved using simple analysis and differentiation. Let $BC=x$; then $$\|ACE\| = f(x) = \sqrt{10^2-x^2}+\sqrt{(12-x)^2+12^2}.$$ In order to find the minimum, which we know exists, one would have to solve $f'(x)=0$ to obtain the solution $x=60/11$. My question, however, is: can one prove without analysis that $x=60/11$ is the point of $BD$ that minimizes the distance $\|ACE\|$? AI: According to your math, you're minimizing $|AC|+|CE|$, not $|AC|+|CD|$. Reflect $E$ about the line $BC$ to $E'$. Observe that $|CE| = |CE'|$, so the problem is equivalent to minimizing $|AC| + |CE'|$. Where should $C$ be? (Hint: The answer is not actually the midpoint.) You can think of $ACE$ as a light ray reflected in the mirror $BD$. From one point of view, the solution to this problem is actually the reason why mirrors reflect light that way in the first place.
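Assuming coordinates $A=(0,10)$, $B=(0,0)$, $D=(12,0)$, $E=(12,12)$ (my reading of the figure), the reflection sends $E$ to $E'=(12,-12)$, and the optimal $C$ is where the straight segment $AE'$ crosses $BD$, i.e. at $x=60/11$. A quick Python check (my own sketch) confirms this against a grid search:

```python
import math

def g(x):
    """|AC| + |CE| with A = (0, 10), C = (x, 0), E = (12, 12);
    equals |AC| + |CE'| with the reflected point E' = (12, -12)."""
    return math.sqrt(x * x + 100) + math.sqrt((12 - x) ** 2 + 144)

# reflection argument: the minimizing C is where segment A-E' meets BD,
# i.e. where the line from (0, 10) to (12, -12) crosses y = 0
x_star = 12 * 10 / (10 + 12)     # = 60/11

# crude grid search over [0, 12] to confirm numerically
xs = [i * 12 / 100000 for i in range(100001)]
x_best = min(xs, key=g)
print(x_star, x_best)
```

At the optimum the total length equals the straight-line distance $|AE'|=\sqrt{12^2+22^2}$, which the test also verifies.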
H: If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$. Let $R$ be a UFD and $P$ a prime ideal. Here we are defining a UFD with primes and not irreducibles. Is the following true, and what is the justification? If $a$ in $R$ is prime, then $(a+P)$ is prime in $R/P$. AI: The bijection between ideals of $\,R/I\,$ and ideals of $R$ containing $I$ restricts to prime ideals, hence $\,(a+P)\,$ is prime in $\,R/P\,$ iff $\,I = (a)+P\,$ is prime in $\,R.\,$ But generally this is not true, e.g. $\,I=(1)\,$ when $\,a\,$ and $\,p\,$ are distinct primes in $\,\Bbb Z\,$ with $\,P=(p),\,$ or $\,I = (4\!-\!x)+(x) = (4,x),\,$ which is not prime in $\,\Bbb Z[x],\,$ by $\,2^2\in I\,$ but $\,2\not\in I.$
H: Does commutativity imply Associativity? Does commutativity imply associativity? I'm asking this because I was trying to think of structures that are commutative but non-associative but couldn't come up with any. Are there any such examples? NOTE: I wasn't sure how to tag this so feel free to retag it. AI: Consider the operation $(x,y) \mapsto xy+1$ on the integers.
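A quick mechanical check of the proposed counterexample (my own sketch): the operation is symmetric in its arguments, so commutativity holds by inspection, and a single triple already breaks associativity.

```python
def star(x, y):
    """The operation (x, y) -> x*y + 1 on the integers."""
    return x * y + 1

pairs = [(a, b) for a in range(-5, 6) for b in range(-5, 6)]
commutative = all(star(a, b) == star(b, a) for a, b in pairs)

# a single witness breaks associativity: (1*2)*3 vs 1*(2*3)
left = star(star(1, 2), 3)    # (1*2 + 1)*3 + 1 = 10
right = star(1, star(2, 3))   # 1*(2*3 + 1) + 1 = 8
print(commutative, left, right)
```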
H: For any sequence in $L^2$ there is a function in $L^2$ s.t. is not orthogonal to any point of the sequence How to prove that for any sequence $(f_n) \subset L^2[0,1]\setminus \{0\}$ there is a function $g \in L^2[0,1]$ such that $$\int f_n g dx \neq0\ \forall n\geq 1?$$ I tried to use a weak limit of $sign(f_n)$ or by argument by contradiction but I didn't get the result. AI: If you know the Baire category theorem, the result falls out at once: $L^2$ cannot be written as a countable union of proper subspaces $\{f_n\}^\perp$.
H: How to get $ \cot(\theta/2)$ from $ \frac {\sin \theta} {1 - \cos \theta} $? According to wolfram alpha, $\dfrac {\sin \theta} {1 - \cos \theta} = \cot \left(\dfrac{\theta}{2} \right)$. But how would you get to $\cot \left(\dfrac{\theta}{2} \right)$ if you're given $\dfrac {\sin \theta} {1 - \cos \theta}$? AI: All you need are the following formulas: $$\sin(2A) = 2 \sin(A) \cos(A) \text{ and } \cos(2A) = 1 - 2 \sin^2(A)$$ Now take $A = \theta/2$ to get that $$\sin(\theta) = 2 \sin(\theta/2) \cos(\theta/2) \text{ and } \cos(\theta) = 1 - 2 \sin^2(\theta/2)$$ Now plugging this into your equation, we get that $$\dfrac{\sin(\theta)}{1-\cos(\theta)} = \dfrac{2 \sin(\theta/2) \cos(\theta/2)}{2 \sin^2(\theta/2)} = \dfrac{\cos(\theta/2)}{\sin(\theta/2)} = \cot(\theta/2)$$
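A numerical spot check of the identity (my own sanity test; the sample angles avoid the singularities at multiples of $2\pi$, where $1-\cos\theta=0$):

```python
import math

def lhs(theta):
    return math.sin(theta) / (1 - math.cos(theta))

def rhs(theta):
    return 1 / math.tan(theta / 2)   # cot(theta/2)

samples = [0.3, 0.7, 1.0, 2.0, 3.0, -1.2]
diffs = [abs(lhs(t) - rhs(t)) for t in samples]
print(max(diffs))
```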
H: Intersection of compositum of fields with another field Let $F_1$, $F_2$ and $K$ be fields of characteristic $0$ such that $F_1 \cap K = F_2 \cap K = M$, the extensions $F_i / (F_i \cap K)$ are Galois, and $[F_1 \cap F_2 : M ]$ is finite. Then is $[F_1 F_2 \cap K : M]$ finite? AI: No. First, the extension $\mathbb{C}(z)/\mathbb{C}$ is Galois for any $z$ that is transcendental over $\mathbb{C}$: if $p(z)$ is a nonconstant polynomial, and $\alpha$ is a root and $\beta$ is a nonroot, then the automorphism $\sigma\colon z\mapsto z+\alpha-\beta$ maps $z-\alpha$ to $z-\beta$, so that $p(z)$ has a factor of $z-\alpha$ and no factor of $z-\beta$ but $\sigma p$ has a factor of $z-\beta$, so $p\neq\sigma p$. Thus, no polynomial is fixed by all automorphisms. If $\frac{p(z)}{q(z)}$ is a rational function with $q(z)$ nonconstant, then a similar argument shows that we can find a $\sigma$ that "moves the zeros" of $q$, so that $p(z)/q(z)$ will have a pole at $\alpha$ but no pole at $\beta$, while $\sigma p/\sigma q$ has a pole at $\beta$. Thus, the fixed field of $\mathrm{Aut}(\mathbb{C}(z)/\mathbb{C})$ is $\mathbb{C}$, hence the extension is Galois. Now, let $x$ and $y$ be transcendental over $\mathbb{C}$. Take $F_1=\mathbb{C}(x)$, $F_2=\mathbb{C}(y)$, $K=\mathbb{C}(xy)$, all subfields of $\mathbb{C}(x,y)$. Then $M=\mathbb{C}$, so $F_i/M$ is Galois; $F_1\cap F_2=\mathbb{C}$, so $[F_1\cap F_2\colon M]=1$. But $F_1F_2\cap K = \mathbb{C}(x,y)\cap \mathbb{C}(xy) = \mathbb{C}(xy)$, and $\mathbb{C}(xy)$ is of infinite degree over $M=\mathbb{C}$
H: What should a PDE/analysis enthusiast know? What are the cool things someone who likes PDE and functional analysis should know and learn about? What do you think are the fundamentals and the next steps? I was thinking it would be good to know how to show existence, or at least to know where to start showing existence, for any non-linear PDE I come across. For example, I only recently found out about how people can use the inverse function theorem to prove existence for a non-linear PDE. This involved Frechet derivatives, which I have never seen before. And I don't fully appreciate the link between the ordinary derivative, the Gateaux derivative and the Frechet derivative. So I wondered how many other things I have no idea about in PDEs. And PDEs on surfaces are interesting (but I'm just learning differential geometry, so it will be a long wait till I look at that in detail), though the subject seems studied to death. So anyway, what do you think is interesting in this field? I am less interested in constructing solutions to PDEs and more into existence. PS: you can assume the basic knowledge (Lax-Milgram, linear elliptic and parabolic existence and uniqueness, etc.). AI: Perhaps you should study some more advanced analysis, since that's when Frechet derivatives come up. A good (and legally free) reference is Applied Analysis by John Hunter and Bruno Nachtergaele. After that, perhaps Analysis by Elliott H. Lieb and Michael Loss? It's more advanced, so be sure you understand Hunter and Nachtergaele first. For more intense partial differential equations, UC Davis' upper-division PDE courses are available online too (with homework and solutions) from when Nordgren taught them: Math 118A: Partial Differential Equations, and 118B. There are probably more advanced (free) references out there, but these are the ones I use... Addendum: For references on functional analysis specifically, perhaps you should be comfortable with Eidelman et al's Functional Analysis: An Introduction (Graduate Studies in Mathematics 66; American Mathematical Society, Providence, RI, 2004); I've heard good things about J.B. Conway's Functional Analysis, although I have yet to read it...
H: Is my proof that $(p \wedge \neg p) \Rightarrow q$ correct? I was asked by a professor a while ago to prove $(p \wedge \neg p)$ implies $q$. Whether through laziness or cleverness, I came up with the following proof: $p \wedge \neg p$ (by assumption). Assume by way of contradiction $\neg q$. $p \wedge \neg p$, therefore $p$ (I forget what this rule is called). Similarly, $p \wedge \neg p$, therefore $\neg p$. We have both $p$ and $\neg p$, a contradiction. Therefore, our assumption, $\neg q$, must have been false, i.e., $q$ is true, the desired result. Is this right? I keep going back and forth about it. On the one hand, it definitely feels circular. On the other hand, it seems to contain all the correct elements of a contradiction proof. EDIT: many commenters seem concerned about what I am allowed or not allowed to "use". I am allowed to use whatever you feel mathematically is correct, and not allowed to use things that aren't. The context of the exercise was not a course in logic, or even in mathematics. It was a computer science class in algorithms. I think the professor just wanted to get a feel for our understanding of logic, and also wanted to demonstrate why inconsistent assumptions can be used to prove anything. I don't even remember if the assignment was graded, or, if it was, what grade I got. I do remember that when I presented it on the board it (obviously) wasn't quite what the professor was looking for. AI: Given the tools you have chosen to work with, you have a valid proof of $p\land \neg p\Rightarrow \neg\neg q$. Formally you would need an extra step where you point out that $\neg\neg q$ is the same as $q$, but that's not the real problem here. The reason why it feels circular is that what you're being asked to prove is one of several possible logical axioms that can be used to justify proof by contradiction in the first place, so the difference between the rule you use and the conclusion is only very slight.
It is then natural to wonder whether you're supposed to prove it from more primitive ingredients than proof by contradiction. However, what those particular ingredients might be varies considerably between formalizations of logic. Basically one can choose any of quite a lot of possible logical foundations to use as axioms, and then prove that all of the other ones follow as logical consequences of the axioms we have chosen. (For pedants in the audience, this is assuming that the axiom sets in question are all for classical first-order logic, blah blah blah). In ordinary mathematical reasoning one tacitly allows all of those standard consequences as single proof steps that need no explanation because the reader is always expected to be able to formalize them in his own favorite logic if he wants to. However, that won't work here, because then there wouldn't be anything for you to prove! So it is necessary to be explicit about which proof steps one allows here, because otherwise your exercise just implodes. If one happens to choose an axiom set that includes $p\land \neg p\Rightarrow q$ as an axiom, then there's nothing to prove -- a full and complete proof would consist of the words "it's an axiom". But if you choose not to make it an axiom, then what counts as a valid proof depends critically upon what you do consider axioms. If you do not have a particular formal system you must do your proof in, then I suspect that the expected solution is simply by truth tables -- for each of the possible 4 combinations of whether $p$ and $q$ are true or false, show that $p\land \neg p\Rightarrow q$ evaluates to true. Then you have proved "$p\land \neg p\Rightarrow q$ is always true" which is just a more convoluted way of saying that you have proved $p\land \neg p\Rightarrow q$ itself.
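The truth-table proof suggested in the last paragraph is easy to mechanize; the helper `implies` below encodes the classical material conditional.

```python
# Check all four truth assignments of (p, q): (p and not p) -> q holds.
from itertools import product

def implies(a, b):
    return (not a) or b  # classical material conditional

tautology = all(implies(p and not p, q)
                for p, q in product([False, True], repeat=2))
assert tautology
```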
H: "Best practice" for finding the language of a formal grammar If I've been given a formal grammar like $$ \begin{eqnarray} S & \Rightarrow & \lambda & | & 0A & | & 1B \\ A & \Rightarrow & 1S & | & 0AA \\ B & \Rightarrow & S & | & 0S & | & 1BB \end{eqnarray} $$ what is "the best" (or just a "good") way to find the language of that grammar? I do have a problem with that kind of exercises. I play around with the production rules and sometimes I find a solution - sometimes I don't. Could you please help me to find a "blueprint" for that kind of excercise? Or could you tell me how you would approach this one? Thanks in advance! AI: This probably shouldn't be posted as an answer to your question, but it's likely going to turn out to be way too long to be a comment. Your sample grammar was well-chosen, since none of the suggestions I'll propose seem to apply. It appears as if you have had sufficient exposure to grammars and languages to realize that simple grammars can sometimes generate languages that are very hard to describe in any succinct form and, conversely, that some really messy and complicated grammars generate languages that are quite easy to describe. As for a "blueprint", depending on what metrics you put on the description of a language I'd be pretty confident in saying there's no possible algorithm that will take a grammar and provide a "reasonable" description of the language it generates, other than the completely uninteresting description "it's the language generated by this grammar". That said, there are some techniques I use (and I suspect you know already) that can sometimes be helpful. In no particular order: Does the grammar look like anything you've worked on before? Yeah, that's obvious, but it points out the fact, particularly comforting to us older theorists, that sometimes experience can be even better than brilliance. Is the grammar in a particularly nice form? Obviously, a left-linear grammar, for example, will make your job a lot easier. 
Failing that, perhaps it's in a normal form like Chomsky or Greibach or can be transformed to be one. That may or may not help. Is it even a context-free grammar? If it's not, your task is often going to be really hairy. Can you start with some "base" production $A\rightarrow \alpha_1\, |\, \alpha_2\, | \dots$ for some sentential forms $\alpha_i$ and get a simple description of the strings generated by $A$ and then use those in the other productions with $A$ on the right side? Related to the above, are there any productions that aren't (directly or indirectly) recursive? Concentrate your efforts on them. Are there any mostly-recursive productions like $P\rightarrow aP\, |\, bP\, |\, Q$? Sometimes these generate easily-described strings. There are probably many others that I've missed, but I'm going home to eat before hypoglycemia sets in. Good luck!
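One concrete way to "play around with the production rules" systematically is to enumerate, by breadth-first search over sentential forms, every terminal string the question's grammar can derive up to some length bound, and conjecture the language from the output. This is an illustrative sketch; all names are ours.

```python
# Enumerate all terminal strings derivable from the question's grammar,
# up to a length bound, to help conjecture the language. Names are ours.
from collections import deque

RULES = {
    "S": ["", "0A", "1B"],   # "" plays the role of lambda
    "A": ["1S", "0AA"],
    "B": ["S", "0S", "1BB"],
}

def derive(start="S", max_len=4):
    out, seen = set(), {start}
    queue = deque([start])
    while queue:
        form = queue.popleft()
        # Find the leftmost nonterminal, if any.
        idx = next((i for i, c in enumerate(form) if c in RULES), None)
        if idx is None:
            out.add(form)
            continue
        for rhs in RULES[form[idx]]:
            new = form[:idx] + rhs + form[idx + 1:]
            # Prune: terminals already present in `new` can only grow.
            terminals = sum(1 for c in new if c not in RULES)
            if terminals <= max_len and new not in seen:
                seen.add(new)
                queue.append(new)
    return sorted(out, key=lambda w: (len(w), w))

words = derive()
print(words)
```

Staring at the output for increasing `max_len` (which strings with many 0s are missing?) is often enough to guess a characterization worth proving.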
H: Null space of a matrix I was referring to this lecture http://www.stanford.edu/class/ee364a/videos/video05.html (about 0:38:10) related to convex optimization, and for optimization it had a certain affine equality constraint like $$Ax=b$$ The lecturer then obtained an equivalent optimization problem by removing the equality constraint and replacing $x$ with $Fz+x_0$: $$Ax=b \iff x=Fz+x_0\text{ for some }z.$$ According to the lecturer $x_0$ is a solution of the equality $Ax=b$ and $F$ is obtained from the null space of matrix $A$. I didn't get this null space thing and how this $$x = Fz+x_0$$ is derived. Can anyone please explain? AI: The null space of matrix $A$ consists of all those vectors $z$ with $Az=0$, where $0$ is the zero vector (same dimensions as $x$ and $b$). These vectors form a vector space, which can be described by a basis $B$. And that basis $B$ can be used to form a matrix $F$ - just pad with zero columns until $F$ operates on vectors of the same size $A$ does. Now $Fz$, for any vector $z$, is just a linear combination of the basis vectors of $A$'s null space, so in short $A(Fz)=0$. If for $x_0$ we know $Ax_0=b$ we can add a zero and are done: $Ax_0=b\Leftrightarrow0+Ax_0=b\Leftrightarrow A(Fz)+Ax_0=b\Leftrightarrow A(\underbrace{Fz+x_0}_{=:x})=b$ No matter what $z$ is, an $x$ defined like that is a solution. EDIT: Example: $$A=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]$$ Now to find the null space one has to solve $Ax=0$. For that, use row reduction until the matrix is in an "easily readable" form. $$A\longrightarrow\left[\begin{matrix}1&0&0\\0&2&-2\\0&3&-3\end{matrix}\right]\longrightarrow\left[\begin{matrix}1&0&0\\0&2&-2\\0&0&0\end{matrix}\right]=:\bar A$$ Row reduction ensures $Ax=0\Leftrightarrow\bar Ax=0$, and in the latter matrix the rows "read" as follows: 1. $1x_1+0x_2+0x_3=0\quad$ i.e., $x_1$, the first component of $x$, must be $0$. 2. $2x_2-2x_3=0\quad$ i.e., $x_2=x_3$. 3. Basically: whatever.
So the single basis vector for $A$'s null space is $$B_1=\left[\begin{matrix}0\\1\\1\end{matrix}\right]$$ To get $F$, pad with zero columns: $$F=\left[\begin{matrix}0&0&0\\1&0&0\\1&0&0\end{matrix}\right]$$ For an arbitrary $z$ we get $$A(Fz)=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]\left(\left[\begin{matrix}0&0&0\\1&0&0\\1&0&0\end{matrix}\right]\left[\begin{matrix}z_1\\z_2\\z_3\end{matrix}\right]\right)=\left[\begin{matrix}1&0&0\\3&2&-2\\3&3&-3\end{matrix}\right]\left[\begin{matrix}0\\z_1\\z_1\end{matrix}\right]=\left[\begin{matrix}0\\2z_1-2z_1\\3z_1-3z_1\end{matrix}\right]=\left[\begin{matrix}0\\0\\0\end{matrix}\right]$$ Again, if for $x_0$ we know $Ax_0=b$, it follows for any $z$ that $x:=Fz+x_0$ yields $Ax=b$.
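To make the example tangible, here is a pure-Python check: pick any particular solution $x_0$, define $b = Ax_0$, and confirm that $x = Fz + x_0$ solves $Ax = b$ for several choices of $z$. The vector $x_0$ and the helper names are our own choices.

```python
# Verify A(Fz) = 0 and A(Fz + x0) = b for the worked 3x3 example.

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1, 0, 0],
     [3, 2, -2],
     [3, 3, -3]]
F = [[0, 0, 0],
     [1, 0, 0],
     [1, 0, 0]]

x0 = [1, 2, 5]        # an arbitrary particular solution; it defines b
b = matvec(A, x0)

for z in ([0, 0, 0], [7, -1, 3], [2, 2, 2]):
    Fz = matvec(F, z)
    assert matvec(A, Fz) == [0, 0, 0]   # F z lies in the null space of A
    x = [u + v for u, v in zip(Fz, x0)]
    assert matvec(A, x) == b            # so x = F z + x0 solves A x = b
```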
H: Find $n$ in $n \log_2 n = c$ I'm trying to find the value for $n$ in the following equation. $$n \log_2 n = c$$ what is $n$? thanks, Tim AI: There is no closed solution formula for such equations. You will have to find the solution numerically -- that is, by trial and error, bisection, Newton iteration or the like. (One can write down a solution in terms of the Lambert W function, but from a practical point of view that "solution" just amounts to giving a fancy name to our inability to get an exact solution using ordinary algebra. It doesn't actually help with calculating the solution).
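For completeness, here is one of the numerical approaches mentioned (bisection), as an illustrative sketch assuming $c > 0$; the function name is ours. Since $n \log_2 n$ is increasing for $n \ge 1$, bisection on a bracketing interval converges.

```python
# Solve n * log2(n) = c for n >= 1 by bisection (assumes c > 0).
import math

def solve_nlogn(c, tol=1e-12):
    f = lambda n: n * math.log2(n) - c
    lo, hi = 1.0, 2.0
    while f(hi) < 0:          # grow the bracket until f(hi) >= 0
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_nlogn(8))  # 4 * log2(4) = 8, so the root is n = 4
```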
H: Embedding of standard model of arithmetic to PA-model I am working on the following problem: Let $ S_{Arithmetic} = \{+, *, 0, 1\}, \mathfrak{M} $ a model for PA (first-order peano axioms) }, and $ \mathbb{N} = (\mathbb{N},+ ^{\mathbb{N}}, *^{\mathbb{N}},0^{\mathbb{N}},1^{\mathbb{N}} )$. Construct an embedding $f : \mathbb{N} \rightarrow |\mathfrak{M}| $ and show that f is unique. So, for $f$ to be an embedding, the following has to be true (correct me if I'm wrong): $ 0^\mathfrak{M} = f(0^\mathbb{N}) $ $ 1^\mathfrak{M} = f(1^\mathbb{N}) $ $ +^\mathfrak{M} (f(a_0),f(a_1)) = f(+^\mathbb{N}(a_0,a_1)) $ $ *^\mathfrak{M} (f(a_0),f(a_1)) = f(*^\mathbb{N}(a_0,a_1)) $ $ f $ injective Now, if I set $f$ to $f(x) := x$, it seems to me that all these properties are satisfied, but I am not sure what to do to prove this assignment and what to do to show that $f$ is unique. I would be glad if someone could point me in the right direction. AI: First note that $|\frak M|$ need not have $\mathbb N$ as a subset. Why is this important? Because a function is still a subset of $\mathbb N\times|\frak M|$, and $f(x)=x$ would require $\mathbb N\subseteq|\frak M|$. Now to show that $f$ can be defined in such way, we already know what $f(0)$ and $f(1)$ are. Recall that $f(n)=f((n-1)+1)=f(n-1)+f(1)$. So we really have only one way to extend $f$ to the rest of the natural numbers. To show uniqueness suppose that $g$ is another embedding with these properties and use induction to show that $g(n)=f(n)$ for all $n\in\mathbb N$.
H: finding a solution of a heat equation Let $f\in C^0(\mathbb R^n)\cap L^\infty(\mathbb R^n)$ and $\alpha\in\mathbb R$. How can you find a solution $u$ of $$\begin{cases}\frac{\partial u}{\partial t}(x,t)-\Delta u(x,t)=-\alpha\, u(x,t) &\text{ in }]0,\infty[\times\mathbb R^n\\ u(\cdot,0)=f&\text{ on }\mathbb R^n\end{cases}$$ ? AI: Construct the solution $u$ of the usual heat equation $$\frac{\partial}{\partial t} u = \Delta u, \quad \quad u(0,x) = f(x)$$ in your favorite manner. Now consider $w(t,x) = e^{-\alpha t}u(t,x)$. Then by the product rule $$\frac{\partial}{\partial t} w(t,x) = e^{-\alpha t}\frac{\partial}{\partial t}u(t,x) - \alpha e^{-\alpha t}u(t,x)$$ $$ = e^{-\alpha t}\Delta u(t,x) - \alpha w(t,x) = \Delta (e^{-\alpha t}u(t,x)) - \alpha w(t,x)$$ $$= \Delta w(t,x) - \alpha w(t,x)$$ and $$w(0,x) = e^{-\alpha \cdot 0}u(0,x) = u(0,x) = f(x)$$ so that $w$ is the desired solution. For the construction of solutions to the usual heat equation, it would be easier if you imposed some decay conditions on $f$, say, $f \in L^2$. In that case, one way of doing it is by using the Fourier transform. Consider the Laplacian $\Delta: H^2 \to L^2$. Conjugating by the Fourier Transform yields $$(F\Delta F^{-1})u(x) = -\|x\|^2u(x)$$ and thus in Fourier space, the heat equation reads $$\frac{\partial}{\partial t}\hat{u}(t) = -\|\xi\|^2\hat{u}(t)$$ which yields $$\hat{u}(t) = e^{-\|\xi\|^2t}\hat{u}(0) = e^{-\|\xi\|^2t}\hat{f}$$ Transforming back yields $$u(t) = (F^{-1}e^{-\|\xi\|^2t}F)f$$
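A quick finite-difference sanity check of the substitution $w = e^{-\alpha t}u$, in one dimension with the standard heat kernel; the sample point, step size and value of $\alpha$ are our own arbitrary choices.

```python
# With u(t,x) = exp(-x^2/(4t))/sqrt(4*pi*t), which solves u_t = u_xx in 1D,
# check numerically that w = exp(-alpha*t)*u satisfies w_t = w_xx - alpha*w.
import math

alpha = 0.3  # arbitrary sample value

def u(t, x):
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def w(t, x):
    return math.exp(-alpha * t) * u(t, x)

t0, y, h = 1.0, 0.5, 1e-3
w_t = (w(t0 + h, y) - w(t0 - h, y)) / (2 * h)
w_xx = (w(t0, y + h) - 2 * w(t0, y) + w(t0, y - h)) / h**2
residual = w_t - w_xx + alpha * w(t0, y)
assert abs(residual) < 1e-5   # zero up to discretization error
```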
H: Doubt on displacement of a parabola Find the equation which is a displacement of $x^2 - 3x + 4$ and passes through the points $(-3, 3)$ and $(2, 8)$. I've already set up a simple system of equations, which looks obvious: $$\begin{align} 8 &= 4a + 2b + c \\ 3 &= 9a - 3b + c \end{align}$$ but I'm stuck here; does someone know where to go now? Thanks in advance. AI: We could think that since the new parabola is only a displacement of the first one, it must have the same leading coefficient, i.e., $a=1$ (otherwise we'd also be stretching, shrinking or reflecting it). Then we can solve the resulting system of two equations in two unknowns: $\begin{cases} 8 =4 + 2b +c \\ 3 = 9 - 3b + c \end{cases} \Longleftrightarrow \begin{cases} 4 = 2b +c \\ -6 = - 3b + c \end{cases} $ Subtracting the last two equations gives $10 = 5b$, so $b=2$, and substituting back we get $c=0$. So the equation of the desired parabola would be $y= x^2 +2x$.
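A short numerical check of the solution above (the variable names are ours):

```python
# Solve the reduced system (2b + c = 4, -3b + c = -6) and verify the points.
b = (4 - (-6)) / 5   # subtracting the equations: 5b = 10
c = 4 - 2 * b        # back-substitute into 2b + c = 4
assert (b, c) == (2.0, 0.0)

f = lambda x: x * x + b * x + c   # the candidate parabola y = x^2 + 2x
assert f(-3) == 3 and f(2) == 8   # both given points check out
```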
H: Famous papers in algebraic geometry I'm reading the Mathoverflow thread "Do you read the masters?", and it seems the answer is a partial "yes". Some "masters" are mentioned, for example Riemann and Zariski. In particular, a paper by Zariski is mentioned, but not its title nor where it was published, so I have been unable to locate it (on "simple points"). What are some famous papers by the masters that should (and could) be read by a student learning algebraic geometry? I'm currently at the level of the first three chapters of Hartshorne (that is, I know something about varieties, schemes and sheaf cohomology). Edit: I should probably add that I'd like specific titles. The advice "anything by Serre" is unfortunately not very helpful, considering Serre's productivity. AI: Serre's Faisceaux Algébriques Cohérents (=FAC) has the unique status of being: a) Arguably the most important article in 20th-century algebraic geometry: it introduced sheaf-theoretic methods into algebraic geometry, including their cohomology, characterization of affine varieties by vanishing of said cohomology for coherent sheaves, twisting sheaves $\mathcal O (k)$ on projective varieties,... Dieudonné and Grothendieck write in their Introduction to EGA that Chapters I and II of their treatise (and the first two paragraphs of chapter III) can essentially be considered as easy transpositions ("transpositions faciles") of Serre's results in FAC (and of his posterior GAGA article). b) Still very readable. Quoting Grothendieck and Dieudonné again: "sa lecture peut constituer une excellente préparation à celle de nos Eléments" (reading it may constitute an excellent preparation to reading our Eléments). And do not think that modern books or articles are necessarily simpler: I remember M.S.
Narasimhan (a pioneer in the construction of moduli spaces for vector bundles) explaining to students (admittedly some time ago) that FAC was still the best place to look for a proof that if in a short exact sequence two sheaves were coherent, so was the third. Edit I have just checked that the result above on coherent sheaves is not proved in EGA (which refers to FAC), nor in Hartshorne (who doesn't even give the general definition of coherent), nor in Iitaka, nor in most books on algebraic geometry. Actually the only such book I can think of that proves the result is Miyanishi's Algebraic Geometry. (There are also books on complex geometry that prove it) I'm not claiming that this theorem on sheaves is especially important, but want to emphasize how relevant FAC still is. Second Edit Here is a translation of FAC into English.
H: What consitutes an exponential function? I was recently having a discussion with someone, and we found that we could not agree on what an exponential function is, and thus we could not agree on what exponential growth is. Wikipedia claims it is $e^x$, whereas I thought it was $k^x$, where k could be any unchanging number. For example, when I'm doing Computer Science classes, I would do everything using base 2. Is $2^x$ not an exponential function? The classical example of exponential growth is something that doubles every increment, which is perfectly fulfilled by $2^x$. I'd also thought $10^x$ was a common case of exponential growth, that is, increasing by an order of magnitude each time. Or am I wrong in this, and only things that follow the natural exponential are exponential equations, and thus examples of exponential growth? AI: $e^x$ is the exponential function, but $c\cdot k^x$ is an exponential function for any $k$ ($> 0, \ne1$) and $c$ ($\ne 0$). The terminology is a bit confusing, but is so well settled that one just has to get used to it.
H: How do I differentiate this? (logarithm & chain) I keep getting wrong results when trying to differentiate this: ${\partial \over \partial x} \ln{(x - \sqrt{x^2+a^2})}$ Thanks for hints! AI: There may be a typo in the question, since the thing inside the logarithm is $\le 0$. So we solve a different problem, finding the derivative of $\ln(|x-\sqrt{x^2+a^2}|)$, or more simply of $\ln(\sqrt{x^2+a^2}-x)$. Differentiate, using the Chain Rule (twice). We get $$\frac{\frac{x}{\sqrt{x^2+a^2}}-1}{\sqrt{a^2+x^2}-x}.$$ Bring the top to a common denominator, and simplify. We get $$-\frac{1}{\sqrt{x^2+a^2}}.$$ Another variant that makes sense in the reals is finding the derivative of $\ln(x-\sqrt{x^2-a^2})$. The same method yields $-\frac{1}{\sqrt{x^2-a^2}}$.
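To guard against sign slips, the closed form can be compared with a central finite difference at a sample point; the choice $x = 1$, $a = 2$ is ours.

```python
# Compare d/dx ln(sqrt(x^2 + a^2) - x) with the claimed -1/sqrt(x^2 + a^2).
import math

a = 2.0
f = lambda x: math.log(math.sqrt(x * x + a * a) - x)
claimed = lambda x: -1 / math.sqrt(x * x + a * a)

x, h = 1.0, 1e-5
numeric = (f(x + h) - f(x - h)) / (2 * h)
assert abs(numeric - claimed(x)) < 1e-8
```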
H: Thompson's Conjecture I have heard that the following is a conjecture due to Thompson: The number of maximal subgroups of a (finite) group $G$ does not exceed the order $|G|$ of the group. My question is: did Thompson really conjecture this? If so, is there any literature on the subject? AI: I've seen this conjecture attributed to Wall (1961), for example: On a conjecture of G.E. Wall. This is a recent article (journal version appeared in 2007), and it gives a bunch of references. The conjecture remains open. Here is a very recent article which does not attack the conjecture itself, but uses it as an inspiration for a different conjecture.
H: Formal language homework problem - extend(L) This is my attempt to solve an exercise from a formal languages class. Consider the following definition: extend(L) = { w $\in$ $\Sigma^*$ | $\exists$ x $\in$ L, y $\in$ $\Sigma^*$ . w = xy } In words: extend(L) is the set of all strings with some prefix in L. 1- Are CFLs (Context-Free Languages) closed under 'extend'? (If L is a CFL, then extend(L) is also a CFL?) Justify your answer. Answer: Yes. Let $G_L$ be the CFG for L. We can construct a new grammar $G_E$ for extend(L) as follows: S' $\Rightarrow$ SE, E $\Rightarrow$ $a_1$E | $a_2$E | $\cdots$ | $a_n$E | $\epsilon$, plus all other productions from $G_L$, where S' is the starting symbol for $G_E$, S is the starting symbol for $G_L$, and $a_1, \ldots, a_n$ are all the symbols of $\Sigma$. Is this correct? 2- Are Recursively Enumerable Languages closed under 'extend'? Answer: False. I feel like it is not RE because I can't create a TM for extend(L), but I don't know how to prove it. Reducing another well known undecidable problem to it, perhaps... but how, and which one? Thanks for any help! AI: The first part looks okay. For the second, though, $\mathrm{extend}(L)$ appears to be the concatenation of $L$ and $\Sigma^*$, and the r.e. languages are closed under concatenation. Consider a Turing machine $M$ that lists $L$. Modify it so that after listing the $n$-th word of $L$, it systematically lists all of the extensions of the first $n$ words by a maximum of $n$ letters.
H: Finding the recurrence relation This is actually a very simple problem, but I am going to type everything out in case I really overlooked something. I am trying to find a recurrence relation in the series solution I got. The ODE is $y'' + xy'- y = 0$. $\begin{align} y'' + xy' - y & = \sum_{n=2}^{\infty} n(n - 1)a_nx^{n-2} + x\sum_{n=1}^{\infty} na_nx^{n-1} - \sum_{n=0}^{\infty} a_nx^n \\ & = \sum_{n=2}^{\infty} n(n - 1)a_nx^{n-2} + \sum_{n=1}^{\infty} na_nx^n - \sum_{n=0}^{\infty} a_nx^n \\ & = \sum_{n=0}^{\infty} (n+2)(n+1)a_{n+2}x^n + \sum_{n=0}^{\infty} na_nx^n - \sum_{n=0}^{\infty} a_nx^n\\ & = \sum_{n=0}^{\infty} x^n[(n+1)(n+2)a_{n+2} - a_n + na_n ] \\ & = 0 \end{align}$ The power series is 0 iff $(n+1)(n+2)a_{n+2} - a_n(1- n) = 0 \iff a_{n+2} = \dfrac{a_n (1 - n)}{(n+1)(n+2)}$ So testing out various values of n = 1, 2, 3... led me to $a_2 = \dfrac{a_0}{2}$ $a_3 = 0$ $a_4 = \dfrac{-a_2}{12} = -\dfrac{a_0}{24}$ $a_5 = 0$ $a_6 = -\dfrac{a_4}{10} = \dfrac{a_0}{240}$ Okay so clearly the odd terms all go to 0 and there will be no odd terms. I know that the even terms alternate back and forth in sign (so I will get an alternating series). I have no idea how the denominator can jump from 2 to 24 and then to 240 and then to a stunning 240*56. I see no pattern in how the denominator jumps at all; it jumps too fast if you ask me. AI: Rewriting the recurrence $$a_{n+2}=\frac{a_n(1-n)}{(n+1)(n+2)}=-\frac{a_n(n-1)}{(n+1)(n+2)}$$ as $$a_{n+2}(n+1)=-\frac{a_n(n-1)}{n+2}$$ suggests the substitution $b_n=a_n(n-1)$, $b_0=-a_0$. Then we have simply $$b_{n+2}=-\dfrac{b_n}{n+2}\;.$$ Sticking to the even-numbered terms, we get $b_0,b_2=-\dfrac{b_0}2,b_4=\dfrac{b_0}{2\cdot4},b_6=-\dfrac{b_0}{2\cdot4\cdot6}$, and it’s apparent (and easily proved by induction) that $$b_{2n}=\frac{(-1)^nb_0}{2^nn!}\;.$$ This translates back to $$a_{2n}=\frac{(-1)^{n+1}a_0}{(2n-1)2^nn!}\;.$$
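The closed form can be checked exactly against the recurrence with rational arithmetic; taking $a_0 = 1$, $a_1 = 0$ for the even branch is our own illustrative setup.

```python
# Iterate a_{n+2} = a_n (1-n) / ((n+1)(n+2)) and compare the even terms
# with the closed form a_{2n} = (-1)^(n+1) a_0 / ((2n-1) 2^n n!).
from fractions import Fraction
from math import factorial

a = [Fraction(1), Fraction(0)]          # a_0 = 1, a_1 = 0
for n in range(20):
    a.append(a[n] * (1 - n) / ((n + 1) * (n + 2)))

for n in range(11):
    closed = Fraction((-1) ** (n + 1), (2 * n - 1) * 2**n * factorial(n))
    assert a[2 * n] == closed           # exact equality, no rounding
```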
H: Recommend a space to analyze the bearing of the vector between any two points In Euclidean space, given any two points, the vector connecting them can be characterized by length (distance) and direction (bearing). Now I am only interested in the bearing part, and I find it inconvenient to analyze the bearing property because in Euclidean space distance is always involved in the analysis. Is there a space I can map Euclidean space to so that I can analyze the bearing between any two points better? Say some kind of projection space or something, I don't know. Any suggestions? Here is an example. Consider a triangle in Euclidean space. What I am interested in is the shape, i.e., the interior angles of the triangle. I don't care about the scale, i.e., the lengths of the sides. How can I map this triangle into another space where I need not care about distance? AI: You should become familiar with the notion of a configuration space or moduli space in mathematics (unfortunately these both have more precise technical meanings that at this point will distract from the underlying idea). If you are interested in a family of objects, you can often construct a space whose points describe those objects in some way. Moreover this space should have a topology so that you can make sense of a "continuous family" of objects. In your first example, the space of all directions in $\mathbb{R}^n$ is the sphere $S^{n-1}$. There is also a configuration space $F_2(\mathbb{R}^n)$ describing ordered pairs of distinct points in $\mathbb{R}^n$ and consequently a natural map $F_2(\mathbb{R}^n) \to S^{n-1}$ sending a pair of vectors $u, v$ to $\frac{u-v}{|u-v|}$. In your second example, the space of all triangles in $\mathbb{R}^n$ can be identified with the configuration space $C_3(\mathbb{R}^n)$ describing unordered triples of distinct points in $\mathbb{R}^n$ (namely the vertices).
There is a natural map which translates the triangle until its vertices $a, b, c$ satisfy $a + b + c = 0$ (presumably you don't care about translation either) and another one which scales $a, b, c$ so that the triangle has unit area.
H: Statistics: How to measure how accurately probabilities are reported? If you roll a six-sided die a bunch of times and count how many times the number 1 shows up, you'd expect it to show up about 1/6 of the time. Now if you roll this die 1000 times and the number 1 shows up 600 times, you'd know something is amiss (maybe somebody tampered with the die). I'm designing an AI for a computer game, and part of this AI requires it to try to calculate the probability of some binary (true/false) event occurring. After having calculated the probability, it gets feedback on the outcome of this probability event. The catch is the program's prediction may be wrong -- for example, it might falsely calculate an event to have a 1/6 chance of occurring while the actual probability of it occurring is 0.6. How can I determine to what extent the probability engine is accurate? For instance, for the event that should occur with probability 0.6, an engine reporting the probability to be 0.7 would be rated much more accurate than one reporting the probability as 0.1. Here's a sample of my data. So if the engine is accurate, 90% of the entries of the value 0.90 should be in the success category, and only 10% of them in the failure category. AI: One possible statistic that you could use to check this: if the probability of success for event $i$ is reported as $p_i$, let $Y_i = 1 - p_i$ in case of success and $-p_i$ in case of failure, and let $S = \sum_{i=1}^n Y_i$ (where there are $n$ events). Assuming the probabilities are accurate and the events are independent, $S$ has mean $0$ and standard deviation $\sigma = \sqrt{\sum_{i=1}^n p_i (1-p_i)}$; if $n$ is large and at least some positive fraction of the $p_i$ are not too close to $0$ or $1$, the distribution should be close to a normal distribution with that mean and standard deviation. The null hypothesis (that the probabilities are accurate) would be rejected at the 5% significance level if $|S| > 1.96 \sigma$.
For your data I get $S = -5.46$ and $\sigma = 3.122627099$, so $S/\sigma = -1.748527707$. That's not enough to reject the null hypothesis.
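The statistic is easy to compute from the (reported probability, outcome) pairs; a minimal sketch with our own function name and a tiny synthetic example:

```python
# S, sigma, and the z-score S/sigma for a list of (p_i, succeeded) pairs.
import math

def calibration_z(events):
    S = sum((1 - p) if ok else -p for p, ok in events)
    sigma = math.sqrt(sum(p * (1 - p) for p, _ in events))
    return S, sigma, S / sigma

# Two 50/50 predictions, one success and one failure: perfectly balanced.
S, sigma, z = calibration_z([(0.5, True), (0.5, False)])
assert S == 0.0 and abs(sigma - math.sqrt(0.5)) < 1e-12 and z == 0.0
# Reject "the reported probabilities are accurate" at 5% when |z| > 1.96.
```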
H: What is the thing inside a sum called? You know how the "thing" inside an integral, we call that an integrand. Does any know what the $a_n$ in a typical $\sum a_n$ is called? Or do we only have names if it is an infinite series? I could've sworn there is a name or there really should be a name AI: Any element of a sum is called a summand. For example: $2$, $4$ and $6$ are summands in $2+4+6 = 12$, and in $\sum a_i$ each of the $a_i$ are summands, that is, $a_1$, $a_2$, $a_3$, etc. are summands. However, in $\sum a_i$, we can also call $a_i$ the general term of the sequence $(a_n)$, but that's specific to sequences and series, because it makes a reference to the role of $a_i$ in the sequence, unlike the name summand, which only refers to the fact that a quantity is being summed (actually, that's the etymology of the word: sum, with a gerund ending), or is part of a sum. The name term applies to both concepts, and probably to more general settings too.
H: On the space of ultrafilters on $N$ I have encountered the space $X$ of ultrafilters on $N$ with the topology generated by sets of the form $\{p\}\cup A$, where $A\in p \in X$. I can't understand the definition of the topology. Are the points of $N$ all isolated? Could someone help me to understand this space? Any help will be appreciated :) AI: The points of $\Bbb N$ are isolated, so $\Bbb N$ forms a discrete open subset of $X$. $X\setminus\Bbb N$ is also discrete: if $p$ is a free (= non-principal) ultrafilter on $\Bbb N$, then $\Bbb N\in p$, so $\{p\}\cup\Bbb N$ is an open nbhd of $p$ that contains no other point of $X\setminus\Bbb N$. Thus, $X\setminus\Bbb N$ is a closed discrete subset of $X$. (And it’s a large one: there are $2^{2^\omega}=2^\mathfrak{c}$ free ultrafilters on $\Bbb N$.) However, points of $X\setminus\Bbb N$ are not isolated: $\varnothing\notin p$, so $\{p\}=\{p\}\cup\varnothing$ is not an open nbhd of $p$. $X$ is separable, because $\Bbb N$ is a countable dense subset of $X$: every point of $X\setminus\Bbb N$ is a limit point of $\Bbb N$. However, the points of $X\setminus\Bbb N$ are limit points only of $\Bbb N$: each of them has a local base of nbhds that exclude the other points of $X\setminus\Bbb N$. Here’s a little more on the space; it’s simple stuff, but perhaps it will help you to get a better picture. $X$ is Hausdorff. If $m,n\in\Bbb N$ with $m\ne n$, then $\{m\}$ and $\{n\}$ are disjoint open nbhds of $m$ and $n$. If $p\in X\setminus\Bbb N$ and $n\in\Bbb N$, then $\Bbb N\setminus\{n\}\in p$, so $\{n\}$ and $\{p\}\cup\big(\Bbb N\setminus\{n\}\big)$ are disjoint open nbhds of $n$ and $p$. If $p,q\in X\setminus\Bbb N$ with $p\ne q$, then there is some $A\subseteq\Bbb N$ such that $A\in p$ and $A\notin q$. But $q$ is an ultrafilter, so $\Bbb N\setminus A\in q$. Thus, $\{p\}\cup A$ and $\{q\}\cup\big(\Bbb N\setminus A\big)$ are disjoint open nbhds of $p$ and $q$.
Added: I should mention that the definition that you’ve been given is a little sloppy. Either $X$ should be described as the union of $\Bbb N$ and the set of free ultrafilters on $\Bbb N$, which is how I’ve treated it above, or the topology has to be defined a little differently. Specifically, if $X$ really is the set of all ultrafilters on $\Bbb N$, then the definition of the topology needs to distinguish two cases. If $p_n=\{A\subseteq\Bbb N:n\in A\}$ is the principal ultrafilter at $n\in\Bbb N$, then $\{p_n\}$ is a local base at $p_n$. Otherwise, if $p$ is free, then basic open nbhds of $p$ are the sets of the form $\{p\}\cup\widehat A$ for $A\in p$, where $\widehat A=\{p_n:n\in A\}$. It is usually easier just to identify $p_n$ with $n$ and describe $X$ as the union of $\Bbb N$ and the set of free ultrafilters on $\Bbb N$. Correction, 21 May 2020: My original answer said that each basic open set in $X$ is clopen, so that $X$ is zero-dimensional and hence Tikhonov. This is of course false. For instance, the closure of the basic open nbhd $\{p\}\cup\Bbb N$ of any $p\in X\setminus\Bbb N$ is clearly $X$, since $\Bbb N$ is in every free ultrafilter on $\Bbb N$. And in fact as PatrickR pointed out in this question, no basic open nbhd of a point $p\in X\setminus\Bbb N$ is closed. If $A\in p$, so that $\{p\}\cup A$ is such a nbhd, partition $A$ into two infinite subsets $B$ and $C$; exactly one of them, say $B$, belongs to $p$, and clearly every $q\in X\setminus\Bbb N$ such that $C\in q$ is in the closure of $\{p\}\cup A$. It also follows easily from this that $X$ is not regular.
H: The smallest possible value of $x^2 + 4xy + 5y^2 - 4x - 6y + 7$ I have been trying to find the smallest possible value of $x^2 + 4xy + 5y^2 - 4x - 6y + 7$, but I do not seem to have been heading in any direction which is going to give me an answer I feel certain is correct. Any hints on how to algebraically approach finding this value would be appreciated. I prefer not to be told what the value is. AI: Note that $x^2 + 4xy + 5y^2 - 4x - 6y + 7=(x+2y)^2+y^2-4x-6y+7$. Let $u=x+2y$. Write our expression in terms of $u$ and $y$, and complete the squares. Remark: The approach may have looked a little ad hoc, but the same basic idea will work for any quadratic polynomial $Q(x,y)$ that has a minimum or maximum. For quadratic polynomials $Q(x_1,x_2,\dots, x_n)$, one can do something similar, but it becomes useful to have more theory. You may want to look into the general diagonalization procedure.
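As a quick sanity check of the rewriting above (without spoiling the minimum), one can verify that the two expressions agree identically; this short Python snippet, written for this edit, tests the identity at random rational points:

```python
from fractions import Fraction
import random

def q(x, y):
    # the original quadratic
    return x**2 + 4*x*y + 5*y**2 - 4*x - 6*y + 7

def q_rewritten(x, y):
    # the completed-square form used in the hint
    return (x + 2*y)**2 + y**2 - 4*x - 6*y + 7

random.seed(0)
for _ in range(100):
    x = Fraction(random.randint(-50, 50), random.randint(1, 10))
    y = Fraction(random.randint(-50, 50), random.randint(1, 10))
    assert q(x, y) == q_rewritten(x, y)
```

The same substitution $u=x+2y$ then turns the remaining cross-free expression into two separate squares in $u$ and $y$.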
H: When does an "infinite polynomial" make sense? Suppose I pick a collection $A \subset \mathbb{C}$ of points in the complex plane and attempt to construct a "polynomial" with those roots via $$f(z):=\prod_{\alpha \in A} (z-\alpha).$$ If $A$ is finite, we get a polynomial. If $A=\{n\pi:n \in \mathbb{Z}\}$, according to Euler we get $f(x)=\sin(x)$. Edit: this example is not right as Qiaochu has pointed out; see his answer for more details. What about other subsets of the complex plane? Other countable subsets without accumulation points? Countable subsets with accumulation points like $A=\{1/n:n \in \mathbb{Z},\ n\neq 0\}$? Uncountable subsets?? When does the product converge, and if it does, how does the spatial distribution of $A$ affect the properties of $f$? This question was motivated by the question here: Determining the density of roots to an infinite polynomial AI: That is not what Euler's product expansion of the sine looks like. It is very much supposed to be in the form $$\frac{\sin z}{z} = \prod_{n \ge 1} \left( 1 - \frac{z^2}{\pi^2 n^2} \right).$$ The product you've written down does not converge for $A = \{ n \pi \}$ unless $z \in A$. Indeed, its factors don't go to $1$, which is a necessary condition exactly analogous to the condition for infinite series that the terms need to go to $0$. In fact one can switch between infinite sums and infinite products using the logarithm, which can be used to prove the following. (First I need to mention that the theorem below requires the convention wherein a product which tends to $0$ is said to diverge. This is because the logarithm of such a product diverges to $-\infty$.) Theorem: Let $a_n \in \mathbb{C}$ be a sequence such that $\sum |a_n|^2$ converges. Then $\prod (1 + a_n)$ converges if and only if $\sum a_n$ converges. Sketch. Use the fact that $\log (1 + a_n) = a_n + O(|a_n|^2)$. 
So we can make sense of the "infinite polynomial" $$\prod_{\alpha \in A} \left( 1 - \frac{z}{\alpha} \right)$$ for countable $A$ such that $\sum_{\alpha \in A} \frac{1}{|\alpha|^2}$ and $\sum_{\alpha \in A} \frac{1}{\alpha}$ both converge. See also the Weierstrass factorization theorem. Note that by the identity theorem, the zeroes of a holomorphic function are isolated, so if you want your product to be holomorphic with $A$ as its zero set, $A$ needs to be discrete. Infinite sums and products do not behave well for uncountably many terms, the basic reason being the following. Theorem: Let $S$ be an uncountable set of positive real numbers. Then for any positive real $r$, there is a finite subset of $S$ whose sum is greater than $r$. (In other words, no sum with uncountably many terms can converge absolutely.) Proof. The sets $S_{\epsilon} = \{ s : s \in S, s > \epsilon \}$ for $\epsilon$ a positive rational are a countable collection of sets whose union is $S$. Since a countable union of countable sets is countable, it follows that there exists $\epsilon$ such that $S_{\epsilon}$ is uncountable. Then the result is clear.
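Numerically, the convergence is easy to see in action. Here is a rough Python check (added in editing; the truncation level is arbitrary) that the partial products of Euler's factorization approach $\sin z/z$, consistent with the factors being $1 - O(1/n^2)$:

```python
import math

def sin_over_z_product(z, N):
    """Partial product of Euler's factorization of sin(z)/z up to n = N."""
    p = 1.0
    for n in range(1, N + 1):
        p *= 1.0 - z**2 / (math.pi**2 * n**2)
    return p

z = 1.0
approx = sin_over_z_product(z, 100000)
exact = math.sin(z) / z
# the omitted tail factors contribute only about z^2 / (pi^2 N) to the log
assert abs(approx - exact) < 1e-4
```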
H: Does localization preserve dimension? Does localization preserve dimension? Here's the problem: Let $C=V(y-x^3)$ and $D=V(y)$ be curves in $\mathbb{A}^{2}$. I want to compute the intersection multiplicity of $C$ and $D$ at the origin. Let $R=k[x,y]/(y-x^3,y)$. The intersection multiplicity is by definition the dimension as a $k$-vector space of $R$ localized at the ideal $(x,y)$. Note now that: $R \cong k[y]/(y) \otimes _{k} k[x]/(x^3)$ Clearly $k[y]/(y) \cong k$ so the above tensor product is isomorphic to $k[x]/(x^3)$. Therefore $R$ has dimension $3$. However I want to compute the dimension of $R$ localized at the ideal $(x,y)$. Does localization preserve dimension, or how do we proceed? I would really appreciate it if you could explain this example in full detail. AI: You have the $k[x,y]$-module $k[x]/(x^3)$, on which $y$ acts as $0$. If you're not able to compute the localization of this very concrete module at the ideal $(x,y)$, then you will need to practice with localizations. The general answer to your question is, in any case, "not always" (although in your particular example it does). You want to try some one-variable cases first. E.g. consider $M = k[x]/(x^2)$ and $M = k[x]/(x^2 -x)$. The former is its own localization at the prime ideal $(x)$, and so the dimension doesn't change after we localize at that prime. The latter module has one-dimensional localization at $(x)$. Now to give another two-dimensional example: $V(x^2 - x) \cap V(y)$ correspond to the ring $k[x,y]/(x^2 - x,y) \cong k[x]/(x^2 - x)$. This is two-dimensional, but the localization at $(x,y)$ is just one-dimensional. Try drawing a picture to see if you can understand why this holds geometrically.
H: $G_{1}/N_{1} \cong G_{2}/N_{2}$ and $N_{1} \cong N_{2} \Rightarrow G_{1} \cong G_{2}$? 1) Suppose $G_{1}$ and $G_{2}$ are groups with respective normal subgroups $N_{1}$ and $N_{2}$. Suppose $G_{1}/N_{1} \cong G_{2}/N_{2}$ and $N_{1} \cong N_{2}$. Does this imply that $G_{1} \cong G_{2}$? 2) Suppose $G/N \cong H$ and it is known that $N$ and $H$ are both finite. Does this imply that $G$ is finite? Can't think of any counterexamples. I'm trying to get some information about a group with only knowledge about its subgroups and quotients. AI: The first implication is false. Consider $G_1=\mathbb{Z/9 Z}$ and $G_2=\mathbb{(Z/3Z)}^2$. Take $N_1=N_2=\mathbb{Z/3Z}$. The second question is indeed true, by set-theoretic considerations: $G$ is a finite union of $|H|$ many copies of a set of size $|N|$ (to see this fact note that if $\varphi$ is the surjective homomorphism from $G$ onto $H$ then all the fibers have the same size, namely $|\ker\varphi|=|N|$).
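The counterexample can be confirmed computationally; this small Python check (written for this edit) verifies that $\mathbb{Z}/9$ and $(\mathbb{Z}/3)^2$ are not isomorphic by comparing element orders, even though both have a subgroup isomorphic to $\mathbb{Z}/3$ with quotient of order $3$:

```python
def additive_order(elt, add, zero):
    """Order of a nonzero element in a finite additive group."""
    k, x = 1, elt
    while x != zero:
        x = add(x, elt)
        k += 1
    return k

# Z/9Z
z9 = list(range(9))
orders_z9 = [additive_order(g, lambda a, b: (a + b) % 9, 0) for g in z9 if g != 0]

# (Z/3Z)^2
z3sq = [(a, b) for a in range(3) for b in range(3)]
orders_z3sq = [additive_order(g, lambda a, b: ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3), (0, 0))
               for g in z3sq if g != (0, 0)]

# Z/9Z has an element of order 9, (Z/3Z)^2 does not: not isomorphic
assert max(orders_z9) == 9
assert max(orders_z3sq) == 3
```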
H: Is the diagonalization of $A$ invertible? There is a theorem which says that, given a diagonalizable matrix $A$ with $P^{-1}AP=D$, if $D$ is invertible then $A$ is invertible. I suspect that the other direction isn't true, but I can't think of a counterexample. AI: You know that your matrix $P$ is invertible. Working with determinants, you not only know that $A$ is invertible, but that it even has the same determinant as $D$: $$\det(D)=\det(P^{-1}AP)=\det(P^{-1})\det(A)\det(P)=\det(P^{-1})\det(P)\det(A)$$ $$\det(P^{-1})\det(P)\det(A)=\det(P^{-1}P)\det(A)=1\cdot\det(A)=\det(A)$$ In particular, since $\det(A)=\det(D)$, each matrix is invertible exactly when the other is, so the other direction holds as well and there is no counterexample.
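A concrete instance of the determinant identity, in exact rational arithmetic (the particular $P$ and $D$ below are just an illustrative choice made for this edit):

```python
from fractions import Fraction as F

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(m):
    d = det2(m)
    return [[ m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d,  m[0][0] / d]]

P = [[F(1), F(2)], [F(3), F(5)]]   # invertible: det = -1
D = [[F(4), F(0)], [F(0), F(7)]]   # diagonal, det = 28
A = mul2(mul2(P, D), inv2(P))      # A = P D P^{-1}, i.e. P^{-1} A P = D

# similar matrices have equal determinants, so A is invertible iff D is
assert det2(A) == det2(D) == 28
```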
H: Topology for beginners Possible Duplicate: best book for topology? Please Suggest some good books on Topology and Functional Analysis. It would be good if somebody can post links of video lectures related to these. Thanks in advance!! AI: I really liked Topology by Munkres. It covers a lot of general topology. For functional analysis I used Introductory Functional Analysis with Applications by Kreyszig. It is easy to use for beginners, but maybe other books cover more material.
H: Simple Property of GCD and Modular Arithmetic I'm stuck on proving a rather elementary property, as I'm not really sure how to start off the approach. Suppose $g^a\equiv 1$ mod $m$ and $g^b\equiv 1$ mod $m$. Does this imply that $g^{\gcd(a,b)}\equiv 1$ mod $m$? Here's my attempt: By definition, we know that $m\mid g^a-1$ and $m|g^b-1$, so there exists some $x,y\in\mathbb{Z}$ such that $mx=g^a-1$ and $my=g^b-1$. Then \begin{align} m(x+y)&=g^{\gcd(a,b)}g^{\frac{a}{\gcd(a,b)}}+g^{\gcd(a,b)}g^{\frac{b}{\gcd(a,b)}}-2\\ &=g^{\gcd(a,b)}\Big(g^{\frac{a}{\gcd(a,b)}}+g^{\frac{b}{\gcd(a,b)}}\Big)-2. \end{align} However, I feel like this approach is only making the problem more complicated than it actually is, as the terms become harder and harder to manipulate to get our desired result $mz=g^{\gcd(a,b)}-1$ for some $z\in\mathbb{Z}.$ Any help would be appreciated! AI: We have by the extended Euclidean algorithm that there exists $x, y \in \Bbb{Z}$ such that $ax + by = gcd(a,b)$. So $$ g^{gcd(a,b)} = g^{ax+by} = g^{ax} \cdot g^{by} = (g^a)^x \cdot (g^b)^y \equiv 1^x\cdot 1^y = 1 \pmod{m}. $$
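The Bézout step can be made concrete in a few lines of Python (the numbers $m=31$, $g=2$, $a=10$, $b=15$ are just an illustrative example, and `ext_gcd` is a helper written for this edit):

```python
from math import gcd

m, g = 31, 2          # ord(2) mod 31 is 5, so 2^a = 1 (mod 31) whenever 5 | a
a, b = 10, 15
assert pow(g, a, m) == 1 and pow(g, b, m) == 1

def ext_gcd(a, b):
    """Extended Euclid: returns (d, x, y) with a*x + b*y = d = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    d, x, y = ext_gcd(b, a % b)
    return d, y, x - (a // b) * y

d, x, y = ext_gcd(a, b)
assert d == gcd(a, b) and a * x + b * y == d

# g^d = g^(ax+by) = (g^a)^x (g^b)^y = 1 (mod m); a negative exponent is fine
# because g is invertible mod m (pow with negative exponent needs Python 3.8+)
assert pow(pow(g, a, m), x, m) * pow(pow(g, b, m), y, m) % m == 1
assert pow(g, d, m) == 1
```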
H: Proving Linear Independence of Sequences Good morning, my question is about proving the linear independence of sequences. In the theory of linear difference equations, one needs a fact, that for all distinct $\lambda_1,\ldots,\lambda_m \in \mathbb{C}$ and all $k_1,\ldots,k_m \in \mathbb{N}$, the functions (sequences) $$n^{0} \lambda_1^n, n^1 \lambda_1^n, \ldots, n^{k_1 - 1} \lambda_1^n,$$ $$n^{0} \lambda_2^n, n^1 \lambda_2^n, \ldots, n^{k_2 - 1} \lambda_2^n,$$ $$\ldots,$$ $$n^{0} \lambda_m^n, n^1 \lambda_m^n, \ldots, n^{k_m - 1} \lambda_m^n$$ are linearly independent. Although this fact is extensively used in literature, I haven't found any satisfactory proof and my attempts to prove this by myself weren't successful as well. I know how to prove that $\lambda_1^n,\lambda_2^n,\ldots,\lambda_m^n$ are linearly independent for distinct $\lambda_1,\ldots,\lambda_m \in \mathbb{C}$ (using Vandermonde determinant). Moreover, I know how to prove that $n^0 \lambda_i^n, n^1 \lambda_i^n, \ldots, n^{k_i - 1} \lambda_i^n$ are linearly independent (if one supposes that they are dependent, one gets an equation that says that a polynomial with at least one non-zero coefficient is zero for all $n \in \mathbb{N}$, i.e., a contradiction). But I don't know how to combine these results. Simply the fact that both sets of sequences are linearly independent does not prove that their union is linearly independent. Any ideas would be helpful. Thank you in advance. AI: You can get linear independence quite easily by looking at growth rates (analytic considerations rather than algebraic identities). Your list is very easily seen to fall into the category below. Any set of (say, positive) functions $f_1(n), \ldots, f_k(n)$ satisfying $f_i = o(f_j)$ for all $1 \le i<j\le k$ must be linearly independent. Suppose there were a non-trivial linear combination $\sum_{i=1}^k c_i f_i = 0$, and WLOG $c_k=1$. Then choose $n$ large enough so that $f_k(n) > \sum_{i<k} |c_i|\, f_i(n)$. Contradiction. 
EDIT: As pointed out by anon, the question calls for $\lambda_i \in \mathbb C$, in which case it's quite possible for members of this family to be comparable in size. However, this argument allows one to decompose the entire family into small sets with matching growth rates, and those can be handled by Vandermonde.
H: Do there exist infinitely many primes $p$ such that $a^{p-1}\equiv 1$ $\text{mod } p^2$ for fixed $a$? I noticed that Hardy and Wright in their "An Introduction to the Theory of Numbers" (sixth edition) have asked the following: Is it ever true that $$2^{p-1}\equiv 1 \bmod p^2 \tag{*}\;\;\;?$$ They have pointed out that for $p=1093$ there is a solution to $(*)$, but they have stated that such $p$ are sparse. Question: Do there exist infinitely many primes $p$ such that $a^{p-1}\equiv 1$ $\text{mod } p^2$, for some fixed $a\in \mathbb{Z}^{+}$ with $a>2$? Sorry if my question is absolutely trivial. AI: Your question is related to Wieferich primes, which represent the case $a=2$: Despite a number of extensive searches, the only known Wieferich primes to date are 1093 and 3511 (sequence A001220 in OEIS). You asked for the more general case $a>2$. See here for Generalized Wieferich primes. Here's the table of known examples: \begin{eqnarray} a& p& \text{OEIS sequence}\\ 2 & 1093, 3511 & A001220\\ 3& 11, 1006003& A014127\\ 5& 2, 20771, 40487, 53471161, 1645333507, 6692367337, 188748146801 & A123692\\ 7& 5, 491531 & A123693\\ 11& 71 & \\ 13& 2, 863, 1747591 & A128667\\ 17& 2, 3, 46021, 48947, 478225523351 & A128668\\ 19& 3, 7, 13, 43, 137, 63061489 &A090968\\ 23& 13, 2481757, 13703077, 15546404183, 2549536629329 & A128669\\ \end{eqnarray} Gottfried Helms had a paper on that: Fermat-/Euler-quotients $(a^{p-1}-1)/p^k$ with arbitrary $k$.
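Searching for such primes is straightforward with fast modular exponentiation; this sketch (added in editing, with arbitrary search limits) recovers the small entries of the table:

```python
def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return [i for i in range(2, n + 1) if sieve[i]]

def wieferich_like(a, bound):
    """Primes p <= bound with p not dividing a and a^(p-1) = 1 (mod p^2)."""
    return [p for p in primes_up_to(bound)
            if a % p != 0 and pow(a, p - 1, p * p) == 1]

assert wieferich_like(2, 4000) == [1093, 3511]   # the two known Wieferich primes
assert wieferich_like(3, 2000) == [11]           # next one for a = 3 is 1006003
```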
H: A $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ Could anyone help me show that a $C^{\infty}$ function from $\mathbb{R}^2$ to $\mathbb{R}$ cannot be injective? AI: $\mathbb{R}^2\setminus\{x\}$ is connected for any $x\in\mathbb{R}^2$. If $f$ were injective, its image would be a nondegenerate interval; choose $x$ so that $f(x)$ is an interior point of that interval. Then $f$ maps the connected set $\mathbb{R}^2\setminus\{x\}$ onto the disconnected set $f(\mathbb{R}^2)\setminus\{f(x)\}$, a contradiction. Only continuity is required for the argument.
H: if $G$ is a finite group of an odd order, then the product of all the elements of $G$ is in $G' $ Let $G$ be a finite group of an odd order, and let $x$ be the product of all the elements of $G$ in some order. Prove that $x \in G' $ My proof: (1) If $G$ is abelian then it is very simple to prove. (2) If $G$ is not abelian: $G/G'$ is abelian, therefore by (1), the product of all the elements of $G/G'$ is in $(G/G')'$. But $(G/G')'=1$. So we have: $1=(aG')(bG')\cdots(nG')=ab\cdots n\,G'$ and finally we conclude $ab\cdots n \in G'$ Does my proof make any sense? Any other solutions? AI: I'll summarize the OP's solution, following the points commented by Gerry and Geoff, so that this question won't remain unanswered: If $\,G\,$ is abelian then we can pair each non-identity element in the group with its inverse. Since no element other than the identity can equal its own inverse (why? if $a=a^{-1}$ then $a^2=1$, impossible in a group of odd order unless $a=1$), the non-identity elements split into such pairs, and $$x=\prod_{a\in G}a=1\cdot (aa^{-1})(bb^{-1})\cdot\ldots =1\cdot\ldots\cdot 1 =1\in \{1\}=G'$$ If $\,G\,$ is not abelian, let us take $\,G/G'\,$ , which is abelian, and let us denote $\,\overline{a}:=aG'\in G/G'\,$ . By Lagrange's theorem , $\,|gG'|=|G'|\,,\,\forall\,g\in G\,$ , so by the first part we get$$\left(\prod_{a\in G}a\right)G'=\left(\prod_{\overline{a}\in G/G'}\overline{a}\right)^{|G'|}\in \left(G/G'\right)'=\overline{\{1\}}\leq G/G'\Longrightarrow \left(\prod_{a\in G}a\right)\in G'$$ Please check whether the above accurately reflects what was commented.
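One can test the statement on the smallest nonabelian group of odd order, the Frobenius group of order $21$, realized as $\mathbb{Z}_7 \rtimes \mathbb{Z}_3$ with $\mathbb{Z}_3$ acting by $a \mapsto 2a$ (since $2$ has order $3$ mod $7$); its commutator subgroup is the $\mathbb{Z}_7$ factor. This Python simulation (written for this edit) multiplies all $21$ elements in random orders and checks that the product always lands in $G'$:

```python
import random

# elements are pairs (a, b) with a mod 7, b mod 3;
# multiplication: (a1, b1)(a2, b2) = (a1 + 2^b1 * a2 mod 7, b1 + b2 mod 3)
def mul(x, y):
    return ((x[0] + pow(2, x[1], 7) * y[0]) % 7, (x[1] + y[1]) % 3)

G = [(a, b) for a in range(7) for b in range(3)]
commutator_subgroup = {(a, 0) for a in range(7)}   # G' for this group

random.seed(1)
for _ in range(50):
    order = G[:]
    random.shuffle(order)
    prod = (0, 0)
    for g in order:
        prod = mul(prod, g)
    assert prod in commutator_subgroup
```

The first coordinate of the product depends on the chosen order, but the second is always $0$ mod $3$, exactly as the quotient argument predicts.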
H: How do I prove exchangeable modularity? How do I prove that, considering all numbers natural, and p and i relatively prime, $mp+n \not \equiv 0 \pmod i$ is the same as $m-x \not \equiv 0 \pmod i$ considering x a natural number and the solution of $xp+n \equiv 0 \pmod i$ ? AI: Since $p$ and $i$ are relatively prime, we know that $p$ has a multiplicative inverse mod $i$, which is to say there is a number $p^{-1}$ such that $p\,p^{-1}\equiv 1\pmod i$. If $x$ is a solution to $$ xp+n\equiv 0\pmod i $$ we can multiply both sides of the congruence above by $p^{-1}$ to obtain $$ x+p^{-1}n\equiv 0 \pmod i $$ Since we are also given that $m \not \equiv x\pmod i$ we have $$ m+p^{-1}n \not \equiv 0 \pmod i $$ Now, multiplying both sides of this by $p$ we obtain $$ mp+n\not \equiv 0 \pmod i $$ as required. This argument is reversible, so the two conditions in the original probem are equivalent.
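The equivalence is easy to check exhaustively for sample parameters (Python 3.8+ for the modular inverse `pow(p, -1, i)`; the specific values are arbitrary):

```python
p, i, n = 5, 7, 3              # p and i relatively prime
x = (-n * pow(p, -1, i)) % i   # the solution of x*p + n = 0 (mod i)
assert (x * p + n) % i == 0

# m*p + n != 0 (mod i)  <=>  m - x != 0 (mod i), for every m
for m in range(200):
    assert ((m * p + n) % i != 0) == ((m - x) % i != 0)
```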
H: a question on Pixley-Roy topology Let $X$ be a $T_1$ space and let $F[X]$ be $\{x\subset X:\text{is finite}\}$ with Pixley-Roy topology. If $X$ is not discrete, how to prove $F[X]$ is not a Baire space? Thanks ahead:) Definition of Pixley-Roy topology: Basic neighborhoods of $F\in F[X]$ are the sets $$[F,V]=\{H\in F[X]; F\subseteq H\subseteq V\}$$ for open sets $V\supseteq F$, see e.g. here. I don't know in the theorem 2.2 why each $Z \cap F_n[X]$ is closed, nowhere dense subspace of $Z$? AI: (explaining the theorem 2.2 in the paper linked by Martin Sleziak) Let $X$ not be discrete, and $p$ be a limit point. Put $F_n[X]=\lbrace A\subseteq X\vert \lvert A\rvert\leq n\rbrace$, $Z=[\lbrace p\rbrace,X]$ If $F[Z]$ were Baire, so would $Z$. But $Z=\bigcup_n (Z\cap F_n[X])$, and each of $Z\cap F_n[Z]$ is nowhere dense and closed: it is closed because its complement $\bigcup_{\lvert A\rvert >n} (Z\cap[A,X])$ is open. it is nowhere dense, because any nonempty basic open subset of $Z$ is of the form $[A,U]$ for $p\in A\subseteq U$, so in particular $U$ is infinite (as a neighbourhood of $p$), so it has an $n+1$-element subset $B$. Then $[A\cup B,U]$ is a subset of $[A,U]$ disjoint from $F_n[X]\cap Z$
H: Ways to compute the limit of $\sum_{k=1}^\infty \frac{k^{n-1}}{(k+1)^n}$ as $n\to\infty$? Consider the sum $$S(n) = \frac{1^{n-1}}{2^n} + \frac{2^{n-1}}{3^n} + \frac{3^{n-1}}{4^n} + \cdots \infty = \sum_{k=1}^\infty \frac{k^{n-1}}{(k+1)^n}$$ How do I find the value of $\lim_{n\to\infty}S(n)$? I am guessing it would be zero. But then again that's a guess! ;) AI: Since $n$ is fixed, as $k \to \infty$, $$\left({k\over k+1}\right)^n \to 1.$$ As a result, $${k^{n-1}\over (k+1)^n} \sim {1\over k}\qquad {\rm as}\; k\to\infty.$$ By the limit comparison test with the harmonic series, the sum $S(n)$ diverges for every fixed $n$. So $S(n) = \infty$ for all $n$, and the limit is $+\infty$, not $0$.
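Numerically, the partial sums do behave like the harmonic series. The snippet below (parameters arbitrary, added in editing) shows that going from $N=10^4$ to $N=10^5$ terms adds roughly $\ln 10 \approx 2.3$, just as $\sum 1/k$ would:

```python
import math

def partial_sum(n, N):
    return sum(k ** (n - 1) / (k + 1) ** n for k in range(1, N + 1))

n = 3
s1 = partial_sum(n, 10 ** 4)
s2 = partial_sum(n, 10 ** 5)
# terms behave like 1/k, so the increment should be close to ln(10)
assert abs((s2 - s1) - math.log(10)) < 0.05
```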
H: A problem in combinatorics 8 subjects need to be given to 4 students. In how many ways can it be done so that the third student gets an odd number of subjects? I tried combination with repetition $9 \choose 7$+$7 \choose 5$+$5 \choose 3$+$3 \choose 1$ but I'm not quite sure this is the right way to solve the problem. Can anyone help me? Thanks AI: Hint: If you start with the third student, there are $\binom 81$ ways to give one subject, then $3^7$ ways to give the remaining $7$ to the other three, as each of the other $7$ subjects can be given to any one of them. Can you extend this to other numbers of subjects for the third student?
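Carrying the hint to completion gives $\sum_{j\text{ odd}}\binom{8}{j}3^{8-j} = \frac{4^8-2^8}{2}$, which a brute-force count over all $4^8$ assignments confirms (Python sketch written for this edit):

```python
from itertools import product
from math import comb

# assign each of the 8 subjects to one of 4 students; index 2 is the "third"
brute = sum(1 for assignment in product(range(4), repeat=8)
            if assignment.count(2) % 2 == 1)

closed_form = sum(comb(8, j) * 3 ** (8 - j) for j in range(1, 9, 2))

assert brute == closed_form == (4 ** 8 - 2 ** 8) // 2 == 32640
```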
H: question on complete vector fields Could anyone help me to solve these two problems on vector fields? (1) Any $C^{\infty}$ vector field on a compact manifold is complete. (2) Is every vector field on $\mathbb{R}$ complete? AI: The 1st is true, not the 2nd. The flow associated to $\dot{x}=x^2$ is given by $\phi_t(x)=x/(1-tx)$, and it obviously does not exist for every $t$.
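The claimed flow can be checked against the ODE with a finite-difference derivative, and the finite-time blow-up at $t = 1/x$ is visible numerically (a sketch added in editing; step sizes are arbitrary):

```python
def flow(t, x):
    return x / (1 - t * x)

x0, t, h = 0.5, 0.3, 1e-6
# phi satisfies d(phi)/dt = phi^2 with phi(0) = x0
dphi_dt = (flow(t + h, x0) - flow(t - h, x0)) / (2 * h)
assert abs(dphi_dt - flow(t, x0) ** 2) < 1e-6
assert flow(0.0, x0) == x0

# the solution blows up as t -> 1/x0 = 2, so it is not defined for all time
assert flow(1 / x0 - 1e-9, x0) > 1e8
```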
H: how many distinct sets can be formed QUESTION (Edited to make it more readable) If A and B are two different nonempty sets, how many distinct sets can be formed with these sets using as many unions,intersections,complements and parentheses as desired. EDIT: My question was actually about generalising the homework question above, i.e. the the "n" number of sets case. We are debating if the answer is $2^{2^n - 1}$ or $2^{2^{n}}$, but neither of us are sure of our explanations. AI: The question you ask is specifically about two sets $A$ and $B$. Draw a general Venn diagram with a universe $U$ (the usual rectangle), and two intersecting sets $A$ and $B$, say disks. The Venn Diagram divides the universe into $4$ parts. The only subsets we can make using the allowed tools are a union of $0$ or more of these parts. By listing, we can see that there are $16$ such subsets. But what about if we start with $3$ sets, $A$, $B$, and $C$? We go directly to the general case, where instead of starting with $2$ sets $A$ and $B$, we start with $n$ sets $A_1, A_2,\dots,A_n$. I believe you were solving the general case, without explicitly saying so. If that is so, the answer you got is correct. Let the $A_i$ be subsets of a "universe" $U$. Imagine drawing the associated Venn Diagram. The diagram divides the universe into pairwise disjoint parts. These parts are obtained by looking successively at $A_1$, $A_2$, $A_3$, and so on, and saying yes or no. There are $2^n$ ways to do this. For some choices of $A_1$ to $A_n$, some of the resulting sets will be empty. But (if our universe is large enough) we can find $A_i$ such that each of the $2^n$ subsets we obtain in this way is non-empty. Then the number of sets that can be constructed using allowed tools is as large as possible. The $2^n$ parts of the Venn Diagram are the "atoms" from which our subsets are constructed by union. We cannot split $U$ into finer parts than these atoms by using a combination of allowed operations. 
Now we can build all the achievable subsets by saying yes or no to each atom, and taking the union of the atoms we say yes to. If we have the full number $2^n$ of atoms, the number of ways to do this is $2^{(2^n)}$. If $U$ is large enough, then by appropriate choice of the $A_i$ we can arrange to have exactly $k$ atoms, where $k$ is any number between $0$ and $2^n$. If we have $k$ atoms, then $2^k$ subsets can be constructed using allowed tools.
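The count $2^{(2^n)}$ can be confirmed by brute-force closure under union, intersection and complement (a small Python experiment written for this edit; the universe is chosen so that every atom is nonempty):

```python
def closure(sets, universe):
    """Smallest family containing `sets` closed under union, intersection, complement."""
    fam = {frozenset(s) for s in sets}
    U = frozenset(universe)
    while True:
        new = {U - s for s in fam}
        new |= {a | b for a in fam for b in fam}
        new |= {a & b for a in fam for b in fam}
        if new <= fam:
            return fam
        fam |= new

# n = 2: universe of 4 points, one per atom of the Venn diagram
fam2 = closure([{0, 1}, {1, 2}], range(4))
assert len(fam2) == 16           # 2^(2^2)

# n = 3: universe of 8 points, A_i = {k : bit i of k is set}
A = [{k for k in range(8) if k >> i & 1} for i in range(3)]
fam3 = closure(A, range(8))
assert len(fam3) == 256          # 2^(2^3)
```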
H: Solve the equation: $e^x=mx^2$ I need to find out the maximum possible number of real roots of the equation: $$e^x=mx^2$$ where m is a real parameter. I'm interested in some easy approaches. Moreover, is it possible to solve it without using derivatives at all? Thanks. AI: Provided $m > 0$ there is always a negative solution. We now turn our attention to solutions on $(0,\infty)$. Put $$g(x) = {e^x\over x^2}$$ for $x > 0$. Differentiating, you get $$g'(x) = {(x-2)e^x\over x^3}.$$ If you draw the sign chart for g', you will see it is negative if $x < 2$ and positive if $x > 2$. Therefore there is a global minimum at $x = 2$. We conclude that $${e^x\over x^2} \ge {e^2\over 4}$$ for $x > 0$. So, if $m < e^2/4$, no solution on $(0,\infty)$ exists. If $m > e^2/4$, two solutions exist, one to the left of 2 and one to the right. If $m = e^2/4$, there is one solution. In total, the maximum possible number of real roots is three, attained exactly when $m > e^2/4$.
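A crude sign-change scan (Python, added in editing; the scan window and step are arbitrary) illustrates the threshold $m = e^2/4 \approx 1.85$:

```python
import math

def count_roots(m, lo=-5.0, hi=6.0, step=1e-3):
    """Count sign changes of f(x) = e^x - m x^2 on [lo, hi]."""
    count, x = 0, lo
    prev = math.exp(x) - m * x * x
    while x < hi:
        x += step
        cur = math.exp(x) - m * x * x
        if prev * cur < 0:
            count += 1
        prev = cur
    return count

assert count_roots(1.0) == 1   # m < e^2/4: only the negative root
assert count_roots(3.0) == 3   # m > e^2/4: negative root plus two positive roots
```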
H: Lifting additive characters Let $K$ be a finite extension of $\mathbb{Q}_p$ ($p$ prime different from 2) and let $G_K$ be the absolute Galois group of $K$. Let $\bar{u} : G_K \longrightarrow \mathbb{F}_p$ be a continuous additive character. Is it always possible to lift $\bar{u}$ to an additive character $u : G_K \longrightarrow \mathbb{Z}_p$? I know the answer is yes when $K$ does not contain the $p^{th}$ roots of unity. What happens in the other case? AI: By local class field theory, such a character corresponds to a character $K^{\times} \to \mathbb F_p$. Now $K^{\times} \cong \mathbb Z \times \mathcal O_K^{\times}.$ Certainly a character $\mathbb Z \to \mathbb F_p$ can be lifted to a character $\mathbb Z \to \mathbb Z_p$. So the question is whether $\mathcal O_K^{\times} \to \mathbb F_p$ can be lifted. (This corresponds to the restriction of your Galois character to inertia.) Now $\mathcal O_K^{\times} \cong \mu \times \Gamma$, where $\Gamma$ is isomorphic to a product of copies of $\mathbb Z_p$, and $\mu$ is the subgroup of roots of unity in $K$. Again, a character $\Gamma \to \mathbb F_p$ can always be lifted, so the question is whether a character $\mu \to \mathbb F_p$ can be lifted to a character $\mu \to \mathbb Z_p$. Since $\mu$ is finite, this is possible if and only if $\mu \to \mathbb F_p$ is trivial. This will be automatic if and only if $\mu$ contains no elements of order $p$, i.e. if and only if $K$ contains no $p$-power roots of unity. So, if $K$ contains $p$-power roots of unity, then you have to check whether or not your given character $K^{\times} \to \mathbb F_p$ is trivial on these roots of unity. It lifts to a character $K^{\times} \to \mathbb Z_p$ if and only if it is trivial on them.
H: Boundedness of the solution of the heat equation The general solution of the heat equation $$\left\{\begin{array}{rcl} \partial_tu-\Delta u &=& 0\\ u(x,0)&=& f \end{array} \right.$$ is given by $$u(x,t)=\int\limits_{\mathbb R^n}\Phi(x-y,t)f(y)\mathrm dy$$ with the fundamental solution $\Phi$ (wikipedia). So why is the solution $u\in C^0([0,\infty)\times\mathbb R^n)\cap C^\infty((0,\infty)\times\mathbb R^n)$ bounded if f is bounded? AI: The fundamental solution is positive, integrable on $\mathbb{R}^n$ and $\int_{\mathbb{R}^n}\Phi(x,t)\,dx=1$ for all $t>0$. Then $$ |u(x,t)|\le\int_{\mathbb{R}^n}\Phi(x-y,t)|f(y)|\,dy\le\sup_{y\in\mathbb{R}^n}|f(y)|\int_{\mathbb{R}^n}\Phi(x-y,t)\,dy=\sup_{y\in\mathbb{R}^n}|f(y)|<\infty. $$
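A quick numerical illustration in one dimension makes both facts visible (trapezoidal quadrature sketched in Python for this edit; with $f(y)=\sin y$ the exact solution is $e^{-t}\sin x$, since $\sin$ is an eigenfunction of $\Delta$):

```python
import math

def phi(x, t):
    """1-D heat kernel."""
    return math.exp(-x * x / (4 * t)) / math.sqrt(4 * math.pi * t)

def u(x, t, f, L=30.0, h=0.01):
    """Trapezoid-rule approximation of the convolution with the heat kernel."""
    steps = int(round(2 * L / h))
    ys = [-L + k * h for k in range(steps + 1)]
    vals = [phi(x - y, t) * f(y) for y in ys]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

t, x = 1.0, 0.7
mass = u(x, t, lambda y: 1.0)      # integral of the kernel itself
approx = u(x, t, math.sin)

assert abs(mass - 1.0) < 1e-6      # kernel has total mass 1 for every t > 0
assert abs(approx) <= 1.0          # |u| <= sup|f| = 1, as in the bound above
assert abs(approx - math.exp(-1) * math.sin(0.7)) < 1e-5
```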
H: A good commutative algebra book Possible Duplicate: Reference request: introduction to commutative algebra I'm looking for a good book on commutative algebra covering most of (but not limited to) : Basic Galois theory and Module algebra Primary decomposition of ideals Zariski topology Nullstellensatz, Hauptidealsatz Noether's normalization Ring extensions "Going up" and "Going down" The emphasis is on the approach, as I would like a book giving a good geometric intuition of ring theory that I could use as a solid basis to start learning algebraic geometry. All in all, do you remember a book that gave you a deeper geometric insight of commutative algebra ? AI: My top 3 : Commutative Algebra: with a View Toward Algebraic Geometry, by D. Eisenbud, definitely. As Dylan said in the comments, “some will call it overly chatty but the geometry discussed there is worth everything”. To learn, nothing is too chatty, but to serve as a handbook, yes, this book might be a bit too chatty. Commutative Algebra, by Bourbaki, exhaustive, once you will be confortable, not to learn. Commutative Algebra I & II, by Zariski and Samuel, slightly old fashioned, but very pedagogic, and feature very interesting points of view, aimed at geometry.
H: If $R_2$ is an $R_1$-algebra, then is $R_2 \otimes_{R_1} M$ an $R_2$-module? If we have a ring homomorphism $f\colon R_{1}\rightarrow R_{2}$, and if $M$ is an $R_{1}$-module, my question is: Can we show that the $R_{1}$-module $R_{2}\otimes_{R_{1}}M$ is somehow also an $R_{2}$-module? AI: Yes. This is called extending scalars to $R_2$ and the $R_2$-module structure is as Prof Magidin has described. To be completely formal, if $r \in R_2$ then the map $t_r\colon R_2 \to R_2$ that is multiplication by $r$ is $R_1$-linear and you can define for $x \in R_2 \otimes_{R_1} M$ that $rx = (t_r \otimes \operatorname{id}_M)(x)$. On the scheme side of things, this corresponds to the concept of base change.
H: Square with variable $x$ inside I am learning about how to calculate the length of a path with integration. The integrand is: $$\sqrt{1+\Big (\frac{dy}{dx} \Big)^2} $$ So I have to integrate it between $a$ and $b$. In my book I have an example, I understand it but don't know how he solved this: $$\sqrt{\frac{1}{4} \times (x^4 + 2 + \frac{1}{x^4})}$$ for which the result after eliminating the square root is: $$1/2 \times (x^2 + \frac{1}{x^2})$$ I know that $\sqrt{1/4} = 1/2$ but I can't understand how the rest is solved. AI: This is a special case of the familiar identity $$a^2+2ab+b^2=(a+b)^2,$$ with $a=x^2$ and $b=\frac{1}{x^2}$. That makes the "middle" term $2ab$ equal to $2$. The easiest way to see things is probably to work backwards, and expand $$\left(x^2+\frac{1}{x^2}\right)^2.$$ Remark: In most cases, if we take a function $y$, and calculate $\sqrt{1+\left(\frac{dy}{dx}\right)^2}$, we get something ugly that cannot be integrated in terms of elementary functions. So arclength problems often involve fairly artificial functions $y$ for which $\sqrt{1+\left(\frac{dy}{dx}\right)^2}$ magically happens to simplify.
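The identity behind the simplification, with $a=x^2$ and $b=1/x^2$, can be checked exactly in Python with rational arithmetic (a verification written for this edit):

```python
from fractions import Fraction

def inside_sqrt(x):
    return Fraction(1, 4) * (x ** 4 + 2 + x ** -4)

def claimed_root(x):
    return Fraction(1, 2) * (x ** 2 + x ** -2)

for k in range(1, 20):
    x = Fraction(k, 7)                          # any nonzero rational will do
    assert claimed_root(x) ** 2 == inside_sqrt(x)
    assert claimed_root(x) > 0                  # so it really is the square root
```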
H: What is this automorphism-related subgroup? Let $G$ be a group, let $H\leq G$ and let $\phi\in\operatorname{Aut}(G)$. Then what "is" the subgroup $K=\langle h^{-1}(h\phi): h\in H\rangle$? Does it, or its normal closure, have a name? Does it have any interesting properties? Stuff seems to get interesting if you assume $H\phi=H$, but what about in the general case? (Really, I just want to know if it has a name so I can search for stuff about it - but any information would be nice. I'm not fussy!) AI: The group you are talking about is $[H, \phi].$ This group, though it does not really have a special name, is well known to group-theorists and goes back much further than the reference given by Arturo. More generally, if $A$ is any group of automorphisms of the group $G,$ then the group $[G,A]$ is the group generated by $\{ g^{-1}g^{a}: g \in G, a \in A \}.$ Here the result of the automorphism $a$ on $g$ is denoted by $g^{a},$ which is consistent with thinking of $[G,A]$ as a subgroup of the semidirect product $GA.$ Denoting, as is standard, $g^{-1}g^{a}$ by $[g,a],$ it is easy to check that $[g,a]^{h}[h,a] = [gh,a]$ for all $a \in A$ and $g,h \in G$ while also $[g,ab] = [g,b][g,a]^{b}$ for all $g \in G$ and $a,b \in A.$ This shows that for any $a \in A$, $[G,a]$ is a normal subgroup of $G$ (so $[G,A] \lhd G$ also), and $[G,A]$ is $A$-invariant. This (sketchy) discussion is all at least implicit in D. Gorenstein's 1968 book "Finite Groups", for example. Since $H$ need not be $\phi$-invariant in your question, we need to be a little more careful. Since $[xy,\phi] = [x,\phi]^{y}[y,\phi]$ for $x,y \in H,$ it does follow that $[H,\phi]$ is normalized by $H$, though it need not be a subgroup of $H.$ Similarly, the equation $[h, \phi^{2}] = [h,\phi][h,\phi]^{\phi}$ implies that $[H, \langle \phi \rangle ]$ is a $\phi$-invariant subgroup. 
For the moment, I am unsure whether $[H, \phi] = [H, \langle \phi \rangle]$ in general, though this is true when $H$ is $\phi$-invariant. (Added later: indeed, as Jack Schmidt points out, $[H,\phi] \neq [H, \langle \phi \rangle ]$ in general.)
H: Hyperplane in projective space Let $P_0,P_1,\ldots,P_r$ be distinct points in $\mathbb{P}^n$. Why there is a hyperplane $H$ in $\mathbb{P}^n$ passing through $P_0$ but not through any of $P_1,\ldots,P_r$? AI: By projective duality, your question is equivalent to asking why, given a finite collection of distinct hyperplanes in $\mathbb P^n$, there is a point lying on exactly one of them. Does that make it any easier?
H: Point belonging to plane In $\Bbb R^4$, I have a plane (given by its cartesian equation) and a point (given by its coordinates). How can I check if it belongs to the plane? AI: Some related cases: If the equation of a hyperplane is in the form $a_1x_1 + a_2x_2 + a_3x_3 + a_4x_4 = b$, to check whether a point $(p_1,p_2,p_3,p_4)$ is in the hyperplane, we have to see if the point satisfies the equation, i.e., if when we replace $(x_1,x_2,x_3,x_4)$ by $(p_1,p_2,p_3,p_4)$ we get a valid equation, that is, if $a_1p_1 + a_2p_2 + a_3p_3 + a_4p_4 = b$. An example in $\mathbb{R}^3$: to see whether the point $(1,3,2)$ belongs to the (hyper)plane $x-y+2z = 3$, we see that $1-3+2\cdot 2 = 2 \neq 3$, so it doesn't belong to the plane; however, $(1,0,1)$ does, because $1-0+2\cdot 1= 3$. In general, if we have a Cartesian equation for some subset of $\mathbb{R}^n$ (or any set $S$) given by $f(x) = 0$, this actually means that the subset is $\{x \in S : f(x) = 0\}$, and what we have to do to decide membership of any $x_0 \in S$ is be to check whether $f(x_0) =0$. If we have more than one equation describing a set, an element has to satisfy all the equations in order to belong to it. In our case, the set is a plane and is described by two equations, so when we want to decide whether an element belongs to it, we check if the element satisfies both equations. Otherwise, it doesn't belong to the plane.
H: What would be the value of $\sum\limits_{n=0}^\infty \frac{1}{an^2+bn+c}$ I would like to evaluate the sum $$\sum_{n=0}^\infty \frac{1}{an^2+bn+c}$$ Here is my attempt: Letting $$f(z)=\frac{1}{az^2+bz+c}$$ The poles of $f(z)$ are located at $$z_0 = \frac{-b+\sqrt{b^2-4ac}}{2a}$$ and $$z_1 = \frac{-b-\sqrt{b^2-4ac}}{2a}$$ Then, using L'Hôpital's rule, $$ b_0=\operatorname*{Res}_{z=z_0}\,\pi \cot (\pi z)f(z)= \lim_{z \to z_0} \frac{(z-z_0)\pi\cot (\pi z)}{az^2+bz+c}= \lim_{z \to z_0} \frac{\pi\cot (\pi z)+(z_0-z)\pi^2\csc^2 (\pi z)}{2az+b} $$ Continuing, the limit is $$ \lim_{z \to z_0} \frac{\pi\cot (\pi z)+(z_0-z)\pi^2\csc^2 (\pi z)}{2az+b}= \frac{\pi\cot (\pi z_0)}{2az_0+b} $$ for $z_0 \notin \mathbb{Z}$. Similarly, we find $$b_1=\operatorname*{Res}_{z=z_1}\,\pi \cot (\pi z)f(z)=\frac{\pi\cot (\pi z_1)}{2az_1+b}$$ Then $$\sum_{n=-\infty}^\infty \frac{1}{an^2+bn+c} = -(b_0+b_1)=\\ -\pi\left( \frac{\cot (\pi z_0)}{2az_0+b} + \frac{\cot (\pi z_1)}{2az_1+b}\right)= -\pi\left( \frac{\cot (\pi z_0)}{\sqrt{b^2-4ac}} + \frac{\cot (\pi z_1)}{-\sqrt{b^2-4ac}}\right)= \frac{-\pi(\cot (\pi z_0)-\cot (\pi z_1))}{\sqrt{b^2-4ac}}= \frac{\pi(\cot (\pi z_1)-\cot (\pi z_0))}{\sqrt{b^2-4ac}} $$ Then we have $$\sum_{n=0}^\infty \frac{1}{an^2+bn+c} = \frac{\pi(\cot (\pi z_1)-\cot (\pi z_0))}{2\sqrt{b^2-4ac}}$$ Is this correct? I feel like I made a mistake somewhere. Could someone correct me? Is there an easier way to evaluate this sum? AI: This is almost correct, but I believe the original sum needs to range from $-\infty$ to $\infty$ instead of $0$ to $\infty$. The solution that follows considers the sum $\sum_{n=-\infty}^\infty \frac{1}{an^2+bn+c}$, and throughout I will write $\sum_{n=-\infty}^\infty f(n)$ to mean $\lim_{N\rightarrow \infty}\sum_{n=-N}^N f(n)$.
Factoring the quadratic, with your definition of $z_{0},\ z_{1}$, we have $$\sum_{n=-\infty}^\infty \frac{1}{an^2+bn+c}=\frac{1}{a}\sum_{n=-\infty}^{\infty}\frac{1}{\left(n-z_{0}\right)\left(n-z_{1}\right)}.$$ Assume that neither $z_0$ nor $z_1$ is an integer, since otherwise a $\frac{1}{0}$ term would appear in the sum. Applying partial fractions, and remembering that $z_{0}-z_{1}=\frac{\sqrt{b^{2}-4ac}}{a}$, we get $$\frac{1}{\sqrt{b^{2}-4ac}}\sum_{n=-\infty}^{\infty}\left(\frac{1}{n-z_{0}}-\frac{1}{n-z_{1}}\right).$$ By the cotangent identity $\pi\cot\left(\pi x\right)=\sum_{n=-\infty}^{\infty}\frac{1}{n+x},$ we conclude that $$\sum_{n=-\infty}^\infty \frac{1}{an^2+bn+c}=\frac{\pi\cot\left(\pi z_{1}\right)-\pi\cot\left(\pi z_{0}\right)}{\sqrt{b^{2}-4ac}}.$$
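The closed form is easy to sanity-check numerically. The sketch below uses $a=1$, $b=0$, $c=-\tfrac12$ (so the roots $z_{0,1}=\pm 1/\sqrt 2$ are real and non-integer, and no denominator vanishes) and compares a large symmetric partial sum against the formula; the choice of parameters is mine.

```python
import math

a, b, c = 1.0, 0.0, -0.5
disc = math.sqrt(b * b - 4 * a * c)   # sqrt(b^2 - 4ac) = sqrt(2)
z0 = (-b + disc) / (2 * a)            # 1/sqrt(2)
z1 = (-b - disc) / (2 * a)            # -1/sqrt(2)

def cot(x):
    return math.cos(x) / math.sin(x)

closed = (math.pi * cot(math.pi * z1) - math.pi * cot(math.pi * z0)) / disc

# Symmetric partial sum, matching the lim_{N->oo} sum_{-N}^{N} convention.
N = 200_000
partial = sum(1.0 / (a * n * n + b * n + c) for n in range(-N, N + 1))

print(closed, partial)   # both approximately 3.38
```

The truncation error of the partial sum is of order $2/N$, so the two numbers should agree to a few decimal places.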
H: Prove that if $n \in \mathbb{N}$, then $n\ge 1$ As the title says. I encountered this problem in Bernd Schroeder's book "Mathematical Analysis: A Concise Introduction", p. 15. It essentially characterizes the natural numbers using the axioms for the real numbers, i.e. the axioms of addition and multiplication of $\mathbb{R}$ as a field. My attempt is the following, using the Principle of Induction. Let $S:=\{n \in \mathbb{N}: n \ge 1\}$. Check that $1 \in S$: since $1 \ge 1$ holds, $1 \in S$. Suppose $n \in S$; we want to show $n+1 \in S$. $n+1 \ge 1 \Rightarrow n \ge 0$. However, from the induction step we have $n \ge 1 > 0$, therefore $n+1 \in S$. By the Principle of Induction, $S=\mathbb{N}$. $\Box$ EDIT1: There is a mistake in the proof, as pointed out by talmid. EDIT2: The author of the above-mentioned book constructs the natural numbers, integers, and rationals from the Completeness Axiom. A similar approach can be found in Royden's Real Analysis. AI: I think the inductive step is a bit fuzzy, because you assume $n + 1\geq 1$ (which is what you want to prove, so this is fallacious), deduce that $n \geq 0$, and since this holds you conclude $n+1 \geq 1$, which would be affirming the consequent (i.e., $(P \rightarrow Q) \wedge Q \vdash P$), also fallacious. Maybe your $\Rightarrow$ is backwards. Also, I would add: you seem to have tried to arrive at a contradiction between $n \geq 0$ and $n \geq 1$ ("$n \geq 0$. However [...] $n \geq 1$"), but you can perfectly well have an $n$ greater than $0$ and greater than $1$ at the same time. In these cases, try to convince yourself, formally and informally, that you actually arrived at a contradiction, for example by trying to come up with counterexamples; I think you'll find it very easy to find an $n$ greater than both $0$ and $1$. Instead, you could say that $n\geq 1$ (inductive hypothesis), and then $n+1 \geq n \geq 1$, so $n + 1\geq 1$ (this assumes that you know that $n + 1 \geq n$, i.e., $1 \geq 0$).
H: Evaluating $\int_0^{\sqrt{3}}{\frac{\sqrt{1+x^2}}{x}}\,dx$ Could someone please show me how to evaluate this integral (ideally showing all the steps)? $$\int_0^{\sqrt{3}}{\frac{\sqrt{1+x^2}}{x}}\,dx$$ I would prefer that you avoid the method used by WolframAlpha (with $\csc$, $\sec$, etc.). This is what I have tried so far: substituting $\sqrt{1+x^2} = u$, I obtained $$\int{\frac{u}{\sqrt{u^2-1}}\frac{u}{\sqrt{u^2-1}}}\,du = \int{\frac{u^2}{u^2-1}}\,du$$ But, not knowing how to continue, I tried another substitution, $u^2 - 1 = s$, and obtained $$\int{\frac{s+1}{s} \frac{1}{2\sqrt{s+1}} }\,ds = \frac{1}{2}\int{\frac{s+1}{s\sqrt{s+1}}}\,ds$$ But, again not knowing how to continue, I decided to ask here. Thanks in advance for the help! AI: (Note first that the integral as written diverges at the lower limit, since the integrand behaves like $1/x$ near $0$; presumably a positive lower limit is intended. The essential step either way is your integral $\int\frac{u^2}{u^2-1}\,du$.) For every $u \in \mathbb{R}\setminus\{-1,1\}$ we have $$ \frac{u^2}{u^2-1}=\frac{u^2-1+1}{u^2-1}=1+\frac{1}{u^2-1} =1+\frac{1}{2}\left(\frac{1}{u-1}-\frac{1}{u+1}\right), $$ and so $$ \int\frac{u^2}{u^2-1}du=\int\left[1+\frac{1}{2}\left(\frac{1}{u-1}-\frac{1}{u+1}\right)\right]du=u+\frac{1}{2}\ln\left|\frac{u-1}{u+1}\right| +C. $$
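A quick finite-difference check that the antiderivative above really differentiates back to $\frac{u^2}{u^2-1}$; the step size and test point are arbitrary choices of mine.

```python
import math

def F(u):
    # Antiderivative derived above: u + (1/2) ln |(u-1)/(u+1)|
    return u + 0.5 * math.log(abs((u - 1) / (u + 1)))

def f(u):
    # The integrand u^2 / (u^2 - 1)
    return u * u / (u * u - 1)

u, h = 2.0, 1e-5
numeric = (F(u + h) - F(u - h)) / (2 * h)   # central difference for F'(u)
print(numeric, f(u))                         # both close to 4/3
```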
H: Limit of exponentials Why is $n^n (n+m)^{-{\left(n+m\over 2\right)}}(n-m)^{-{\left(n-m\over 2\right)}}$ asymptotically equal to $\exp\left(-{m^2\over 2n}\right)$ as $n,m\to \infty$? AI: By Stirling's approximation we have $$ \binom{2n}{n+m}= \frac{(2n)!}{(n+m)!(n-m)!} \sim \frac{\sqrt{2\pi n} (2n/e)^{2n}}{\sqrt{2\pi(n+m)} \left( \frac{n+m}{e} \right)^{n+m} \sqrt{2\pi(n-m)} \left( \frac{n-m}{e} \right)^{n-m} }= \frac{1}{\sqrt{2\pi (n^2-m^2)}} \cdot \frac{(2n)^{2n} }{ (n+m)^{n+m} (n-m)^{n-m}} .$$ Now if $m$ is small compared to $n$, then $$n^n (n+m)^{-(n+m)/2} (n-m)^{-(n-m)/2} \sim \frac{1}{2^n} \sqrt{ \sqrt{2\pi (n^2-m^2)}\binom{2n}{n+m} }.$$ We make the assumption that $m$ is small compared to $n$ precise by taking $m=o(n^{2/3})$, so that we can apply the refined entropy formula found at equation 8 of the link, which gives the result.
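Working in logarithms to avoid overflow, the claimed asymptotic can be checked directly; the particular $n$ and $m$ below are my own choices, picked so that $m = o(n^{2/3})$ holds comfortably.

```python
import math

n, m = 10**6, 100   # m = 100 is far below n**(2/3) = 10**4

# log of  n^n (n+m)^{-(n+m)/2} (n-m)^{-(n-m)/2}
lhs = (n * math.log(n)
       - 0.5 * (n + m) * math.log(n + m)
       - 0.5 * (n - m) * math.log(n - m))

# log of  exp(-m^2 / (2n))
rhs = -m * m / (2 * n)

print(lhs, rhs)   # both approximately -0.005
```

Expanding $\log(n\pm m)=\log n+\log(1\pm m/n)$ shows the two sides differ by terms of order $m^4/n^3$ and $m^2/n^2$, which is tiny in this regime.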
H: Variational distance basic properties The variational distance between two probability distributions $X$ and $Y$ taking values on the same alphabet $\mathcal A$ is defined as $$ \delta (X,Y)=\frac{1}{2}\sum_{a\in \mathcal A} |p_X(a)-p_Y(a)| $$ There are two very basic claims with regard to the variational distance that I would like to formally prove. 1) It cannot increase by the application of a function: $$ \delta (X,Y)\geq \delta (f(X),f(Y)) $$ 2) $$ \frac{1}{2}\sum_{a\in \mathcal A} |p_X(a)-p_Y(a)| = 1 -\sum_{a\in \mathcal A} \min (p_X(a),p_Y(a)) $$ AI: $\def\abs#1{\left|#1\right|}$ Let $f\colon \mathcal A \to \mathcal A$. Then \begin{align*} \delta\bigl(f(X), f(Y)\bigr) &= \frac 12 \sum_{a \in \mathcal A} \abs{p_{f(X)}(a) - p_{f(Y)}(a)}\\ &= \frac 12 \sum_{a \in \mathcal A} \abs{\sum_{b\in f^{-1}(a)} \bigl(p_X(b) - p_Y(b)\bigr)}\\ &\le \frac 12 \sum_{a \in \mathcal A} \sum_{b \in f^{-1}(a)} \abs{p_X(b) - p_Y(b)}\\ &= \frac 12 \sum_{b \in \mathcal A} \abs{p_X(b) - p_Y(b)}\\ &= \delta(X,Y) \end{align*} Using did's hint from below we have for 2) \begin{align*} \delta(X, Y) &= \frac 12 \sum_{a \in \mathcal A} \abs{p_X(a) - p_{Y}(a)}\\ &= \sum_{a \in \mathcal A} \frac 12 p_X(a) + \frac 12 p_Y(a) - \min\{p_X(a), p_Y(a)\}\\ &= \frac 12 \sum_{a \in \mathcal A} p_X(a) + \frac 12 \sum_{a \in \mathcal A} p_Y(a) - \sum_{a\in \mathcal A} \min\{p_X(a), p_Y(a)\}\\ &= 1 - \sum_{a\in \mathcal A} \min\{p_X(a), p_Y(a)\}. \end{align*}
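Both identities are easy to test on a toy alphabet; the distributions and the (non-injective) map $f$ below are made up for illustration.

```python
def tv(p, q):
    """Variational distance between two pmfs given as dicts over an alphabet."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in keys)

p = {'a': 0.5, 'b': 0.3, 'c': 0.2}
q = {'a': 0.2, 'b': 0.3, 'c': 0.5}

# Identity 2: tv(p, q) = 1 - sum_a min(p(a), q(a))
alt = 1.0 - sum(min(p[a], q[a]) for a in p)
print(tv(p, q), alt)   # both 0.3

# Identity 1: pushing forward through a function cannot increase the distance.
f = {'a': 'x', 'b': 'x', 'c': 'y'}

def push(dist):
    out = {}
    for a, mass in dist.items():
        out[f[a]] = out.get(f[a], 0.0) + mass
    return out

print(tv(push(p), push(q)) <= tv(p, q) + 1e-12)   # True
```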
H: How to get the angle in the right triangle? I have two coordinates which represent the mouse position with respect to the center of the screen ([0, 0] meaning the center; y increases downwards). So, [0, 0] is one corner of the triangle, and mousePos is another. Now, the position of the mouse should determine the direction of a small sprite representing the player (in radians). When mousePos is, say, [0, -100], right above the player (remember, y increases downwards), then the player's direction is 0. When it's [100, 0], directly to the right of the player, the direction should be PI/2. How do I get this? I know how to do it in a very long way, which would be very inefficient for the computer. What is the standard way of computing the angle? AI: You want to use the atan2 function if you have it. Since different languages treat it in different ways, you may need to experiment: atan2(-y,x) or atan2(x,-y) will probably give you what you want.
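A sketch of the suggested atan2 call, using the screen convention from the question (y grows downward, angle 0 points up, clockwise positive); the function name is mine.

```python
import math

def facing_angle(mouse_x, mouse_y):
    """Angle in radians: 0 when the mouse is straight above the player,
    pi/2 when it is straight to the right (screen y grows downward)."""
    return math.atan2(mouse_x, -mouse_y)

print(facing_angle(0, -100))   # 0.0       (mouse above the player)
print(facing_angle(100, 0))    # 1.5707... (pi/2, mouse to the right)
```

Note that atan2 returns values in $(-\pi, \pi]$, so a mouse below the player gives $\pm\pi$ rather than a value near $2\pi$.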
H: Math books pointing towards Probability Theory I work as a professional composer, and I also program most of my own software. I failed every year of math in high school. I am studying Bayesian probability in reference to music, and while I understand most of what is being said, I can't help but feel progress would be greatly enhanced by a stronger foundation in the basics. Similarly, I have also been studying deep learning, restricted Boltzmann machines, and such. What books are well regarded for concisely covering the foundations that lead to these disciplines? AI: I think a beautiful source is Gian-Carlo Rota's and Kenneth Baclawski's Introduction to Probability and Random Processes. (Made available online by David Ellerman with the permission of Professor Baclawski.) Rota was a brilliant combinatorialist, and his writing combines a friendly, clear style with a deep, well-developed organization.
H: Mlodinow. The Drunkard's Walk. An example from the book. This excerpt is from Leonard Mlodinow's book The Drunkard's Walk: And although Fortune is fair in potentialities, she is not fair in outcomes. That means that if each of 10 Hollywood executives tosses 10 coins, although each has an equal chance of being the winner or the loser, in the end there will be winners and losers. In this example, the chances are 2 out of 3 that at least 1 of the executives will score 8 or more heads or tails. How can it be proved with math? AI: First let us look at the probability of one executive getting $8$ or more heads/tails when tossing a coin $10$ times. $$ P(\text{# of heads}\geq 8 \text{ or # of tails} \geq 8) = 2 \times \dfrac1{2^{10}} \left(\dbinom{10}8 + \dbinom{10}9 +\dbinom{10}{10} \right) = \dfrac7{64}$$ Hence, the probability that one executive will score $7$ or fewer heads/tails is $1 - \dfrac7{64} = \dfrac{57}{64}$. The probability that at least one of the executives will score $8$ or more heads/tails is $1$ minus the probability that all the executives score $7$ or fewer heads or tails, and the latter is $\left(\dfrac{57}{64} \right)^{10}$. Hence, the desired probability is $$1 - \left(\dfrac{57}{64} \right)^{10} \approx 0.685986 \approx \dfrac23$$
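The arithmetic is easy to reproduce exactly with rational numbers; a small sketch (not from the book):

```python
from fractions import Fraction
from math import comb

# One executive, 10 tosses: P(at least 8 heads or at least 8 tails).
p_extreme = Fraction(2 * (comb(10, 8) + comb(10, 9) + comb(10, 10)), 2**10)
print(p_extreme)               # 7/64

# At least one of 10 executives scores 8+ heads or 8+ tails.
p_at_least_one = 1 - (1 - p_extreme)**10
print(float(p_at_least_one))   # about 0.686, i.e. roughly 2/3
```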
H: Product of spheres embeds in Euclidean space of 1 dimension higher This problem was given to me by a friend: Prove that $\prod_{i=1}^m \mathbb{S}^{n_i}$ can be smoothly embedded in a Euclidean space of dimension $1+\sum_{i=1}^m n_i$. The solution is apparently fairly simple, but I am having trouble getting a start on this problem. Any help? AI: EDIT: I cannot delete this post as it has already been accepted :( The proof is carried out by induction on $m$. The case $m=1$ is trivial by choosing coordinates $(x^{(1)}_0,x^{(1)}_1,...,x^{(1)}_{n_1})$ with $\sum_{j=0}^{n_1} (x_j^{(1)})^2=1$, so let $m=2$. Then $S^{n_1} \times S^{n_2}$ is similarly embedded in $\mathbb{R}^{n_1+n_2+2}$, and in fact lies in the hypersurface $H$ with equation $\sum_{j=0}^{n_1} (x_j^{(1)})^2+\sum_{j=0}^{n_2} (x_j^{(2)})^2=2.$ In fact, $H$ is diffeomorphic to $S^{n_1+n_2+1}.$ Now, the embedding misses at least one point, for example $p=(0,...,0,1,1) \in H$, and by stereographic projection $S^{n_1+n_2+1} \setminus \{p\}$ is diffeomorphic to $\mathbb{R}^{n_1+n_2+1}.$ Suppose that the assertion holds for fewer than $m$ factors; then $(S^{n_1}\times...\times S^{n_{m-1}}) \times S^{n_m}$ is embedded diffeomorphically in $\mathbb{R}^{n_1+...+n_{m-1}+1} \times \mathbb{R}^{n_m+1}\cong \mathbb{R}^{n_1+...+n_{m}+2}$, hence following the same argument you can reduce the dimension by $1$, giving the result.
H: Is this map of domains a Jordan homomorphism? Let $\phi\colon D\to D'$ be a map of division rings, such that $\phi$ is a homomorphism of the additive groups, respects unity, and if $a\neq 0$, then $\phi(a)\neq 0$ and $\phi(a)^{-1}=\phi(a^{-1})$. It's a theorem of L.K. Hua that $\phi$ is either a homomorphism or an anti-homomorphism, but I'm struggling to prove it. It's a result of Jacobson and Rickart that any Jordan homomorphism of a ring into a domain is either a homomorphism or an anti-homomorphism (Theorem 2 of this paper), so I think it's sufficient to show $\phi$ is a Jordan homomorphism, that is, $\phi(aba)=\phi(a)\phi(b)\phi(a)$ for all $a,b\in D$, as the other two properties of a Jordan homomorphism are already assumed. I was able to derive Hua's identity: if $a,b,ab-1$ are invertible elements of a ring, then $$ ((a-b^{-1})^{-1}-a^{-1})^{-1}=aba-a. $$ Now $\phi(aba)=\phi(a)\phi(b)\phi(a)$ holds if $a=0$ or $b=0$, or if $ab-1=0$, so I'm only trying to verify the other case, where $a,b,ab-1$ are all units. Applying $\phi$ to $aba$ and using the above identity, I get something bad: $$ ((\phi(a)-\phi(b)^{-1})^{-1}-\phi(a)^{-1})^{-1}+\phi(a). $$ What's the correct way to tell that $\phi$ is a homomorphism or anti-homomorphism? Thanks. AI: You are basically done. Just show your work more clearly, and the answer is clear. $$ aba = ((a-b^{-1})^{-1}-a^{-1})^{-1} + a $$ $$ \phi(aba) \stackrel{1}{=} ((\phi(a)-\phi(b)^{-1})^{-1}-\phi(a)^{-1})^{-1}+\phi(a) \stackrel{2}{=} \phi(a)\phi(b)\phi(a). $$ The first equality applies $\phi$ to Hua's identity, using that it respects addition, subtraction, and inverses. The second equality uses Hua's identity with $A=\phi(a)$ and $B=\phi(b)$.
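Hua's identity itself can be spot-checked with exact rational arithmetic; the sample values below are arbitrary, chosen so that $a$, $b$, and $ab-1$ are all invertible (rationals commute, so this only checks the commutative shadow of the identity).

```python
from fractions import Fraction

def hua_lhs(a, b):
    # ((a - b^{-1})^{-1} - a^{-1})^{-1}
    return 1 / (1 / (a - 1 / b) - 1 / a)

for a, b in [(Fraction(2), Fraction(3)),
             (Fraction(5, 7), Fraction(-3, 2)),
             (Fraction(-4), Fraction(1, 3))]:
    assert hua_lhs(a, b) == a * b * a - a

print("Hua's identity holds on all samples")
```

For instance, with $a=2$, $b=3$: $((2-\tfrac13)^{-1}-\tfrac12)^{-1} = (\tfrac35-\tfrac12)^{-1} = 10 = 2\cdot3\cdot2 - 2$.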
H: What are some (or even one) interesting examples of (non-group) semigroups? I'm going to give a lecture on Alon and Schieber's Tech Report on computing semigroup products (Optimal Preprocessing for Answering On-Line Product Queries). Basically, given a list of elements $a_1,\ldots,a_n$ and a bound $k$, they show how to preprocess the elements so that later it is possible to efficiently compute the product of any arbitrary sublist $a_i\cdot\ldots\cdot a_j$ by performing only $k$ products. To motivate this algorithm I am looking for examples of semigroups where: 1. It is plausible that one would have a list of elements and want to compute products of sublists. 2. The semigroup is not a group (since the problem is trivial for groups). 3. The space complexity of the product does not overwhelm the time complexity of computing it directly (e.g., in string concatenation, just copying the result to the output takes the same time as computing it even with no preprocessing). So far I only have $\min$ and $\max$ and matrix multiplication as examples. I'm looking for something more "sexy", preferably related to Computer Science (perhaps something to do with graphs or trees). Also, for $\min$ and $\max$ there is a better algorithm, so I can't really use them to motivate this algorithm. AI: I'm not sure exactly what you're looking for, but here are some semigroup operations: Multiplication of polynomials. Conjunction of logical expressions which are in disjunctive normal form. Cartesian product of graphs, or other graph products. Unions of finite sets. Greatest common divisor or least common multiple of integers. Do any of these help? Other than being related to computer science, what properties do you want this semigroup to have?
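Gcd is a handy concrete case from the list above: associative, inverse-free, and range products are meaningful. A minimal sketch (my own code, not from the Alon and Schieber report) of a sparse table answering range-gcd queries with one product each; note this one-product query works only because gcd, like min and max, is idempotent (overlapping blocks are harmless), which is exactly why these operations admit better algorithms than the general semigroup bound.

```python
import math

def build_sparse_table(a, op):
    """table[k][i] = op-product of a[i : i + 2**k]."""
    n = len(a)
    table = [list(a)]
    k = 1
    while (1 << k) <= n:
        prev = table[-1]
        table.append([op(prev[i], prev[i + (1 << (k - 1))])
                      for i in range(n - (1 << k) + 1)])
        k += 1
    return table

def query(table, op, i, j):
    """op-product of a[i..j] inclusive, using two overlapping power-of-two
    blocks; valid only for idempotent op (gcd, min, max)."""
    k = (j - i + 1).bit_length() - 1
    return op(table[k][i], table[k][j - (1 << k) + 1])

a = [12, 18, 30, 48, 6, 9]
t = build_sparse_table(a, math.gcd)
print(query(t, math.gcd, 1, 3))   # gcd(18, 30, 48) = 6
print(query(t, math.gcd, 0, 5))   # gcd of the whole list = 3
```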