H: question on symmetric and nilpotent matrix a) Let $A$ be nilpotent of degree $k$; then according to the condition ${A}^{k}={B}^{2k}=0$. But I can't say anything about the existence of such a $B$. I have no idea on $b$, $c$, please help. AI: (a) Can you show that if $\,A\,$ is an $\,n\times n\,$ nilpotent matrix, say $\,A^k=0\,$, then $\,k\leq n\,$? This answers this part, as $\,B\,$ is nilpotent, but then $\,B^2=0\,$, so if $\,A\neq 0\,$ then the claim isn't true. (b) Since $\,A\,$ is symmetric positive definite, its eigenvalues are positive reals, so we can write (with respect to some basis of eigenvectors) $$A=\begin{pmatrix}a_1&&0\\&\ddots&\\0&&a_n\end{pmatrix}\,\,,\,a_i>0$$ Well, finding $\,B\,$ now is easy...
H: A transform function from $(−∞,∞)$ to $(0,1)$? I want to convert an integral from $(0, 1)$ range to $(-\infty, \infty)$ range by change of variable. What is the best transform function to do this - one that is simple, monotonic with $f(-\infty)=0$ and $f(\infty)=1$? Thanks. AI: $\newcommand{\logit}{\operatorname{logit}}$ $$ z=\logit p = \log \frac{p}{1-p}. $$ As $p$ goes from $0$ to $1$, $z$ goes from $-\infty$ to $+\infty$ (provided the base of the logarithmic function is more than $1$; most often it is $e$). The logit of $1/2$ is $0$. The graph is symmetric about $(1/2,0)$, so that, for example, $\operatorname{logit}\ 0.2= -\operatorname{logit}\ 0.8$. This function is used in statistics. The first syllable is pronounced with the "long o" sound as in "low"; the "g" like the "j" in "jet". The inverse is the logistic function $$ p = \frac{e^z}{1+e^z}\ = \frac{1}{1+e^{-z}}. $$ If you're applying this to an integral, then which of many such functions should be chosen would depend on what integral it is. Later edit: It occurs to me that if one wishes to be aware of the logit function, one should know this fact about probability: $$ \logit \Pr(A\mid D) = \logit \Pr(A) + \log\frac{\Pr(D\mid A)}{\Pr(D\mid \text{not }A)} $$ (and the letter $D$ may be taken to stand for "data").
H: Non-isomorphic abelian groups of order $19^5$ I am trying to classify abelian groups of order $19^5$ up to isomorphism. Can anyone provide any approaches or hints? AI: From the Fundamental Theorem for Fin. Gen. Abelian groups, it follows that we must take the partitions of 5 (all this can be googled easily): $$\begin{align*}5=&5\\5=&4+1\\5=&3+2\\5=&3+1+1\\5=&2+2+1\\5=&2+1+1+1\\5=&1+1+1+1+1\end{align*}$$ Since there are 7 such partitions, there are 7 non-isomorphic groups of order $\,19^5\,$, which are (notation: $\,C_k=\,$ the cyclic group of order $\,k\,$): $$C_{19^5}\,\,,\,\,C_{19^4}\times C_{19}\,\,,\,\,C_{19^3}\times C_{19^2}\,\,,\,\,C_{19^3}\times C_{19}\times C_{19}\,\,,...\text{you've got the idea}$$
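As a quick sanity check of the count, here is a small C++ sketch (mine, not from the thread; the function name partitions is made up): it enumerates the partitions of 5 with parts in non-increasing order and prints the corresponding direct product of cyclic groups for each, listing exactly the 7 groups.

#include <cstdio>
#include <vector>

// Print one abelian group C_{19^a1} x ... x C_{19^ar} per partition of n.
void partitions(int n, int maxPart, std::vector<int>& parts) {
    if (n == 0) {
        std::printf("C_{19^%d}", parts[0]);
        for (std::size_t i = 1; i < parts.size(); ++i)
            std::printf(" x C_{19^%d}", parts[i]);
        std::printf("\n");
        return;
    }
    for (int p = (n < maxPart ? n : maxPart); p >= 1; --p) {
        parts.push_back(p);
        partitions(n - p, p, parts);   // remaining parts are at most p
        parts.pop_back();
    }
}

int main() {
    std::vector<int> parts;
    partitions(5, 5, parts);   // prints 7 lines, one per partition of 5
    return 0;
}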
H: $e^x(\ln x-c) =\sum \limits_{k=0}^\infty \frac{ x^{k} \Gamma'(k+1)}{ (k!)^2}$ Is this a correct result? $e^x=\sum \limits_{k=0}^\infty \frac{x^k}{k!}$ We can write $e^x=\sum \limits_{k=0}^\infty \frac{x^k}{ \Gamma(k+1)}$, where $\Gamma(x)$ is the Gamma function. $\Gamma(k+1)=k\Gamma(k)$ $\frac{\Gamma(k+1)}{\Gamma(k)}=k$ $\frac{\Gamma(1)}{\Gamma(0)}=0$ $\frac{1}{\Gamma(0)}=0$ $\frac{\Gamma(-1+1)}{\Gamma(-1)}=\frac{\Gamma(0)}{\Gamma(-1)}=-1$ $\frac{1}{\Gamma(-1)}=\frac{-1}{\Gamma(0)}=-1\cdot\frac{1}{\Gamma(0)}=0$ If we continue in that way, we get, for every non-positive integer $m$, $\frac{1}{\Gamma(m)}=0$. Thus we can write $e^x=\sum \limits_{k=-\infty}^\infty \frac{x^k}{ \Gamma(k+1)}$ Then we extend $e^x$ for integer $n$ (Equation 1): $$e^x=\sum \limits_{k=-\infty}^\infty \frac{x^{k+n}}{ \Gamma(k+n+1)}$$ $n \in \mathbb{Z}=\{...,-2,-1,0,1,2,...\}$ It is possible to extend the definition to $z \in \mathbb{C}$: $f(x)=\sum \limits_{k=-\infty}^\infty \frac{x^{k+z}}{ \Gamma(k+1+z)}$ $$\frac{d(f(x))}{dx}=\frac{d}{dx}\left(\sum \limits_{k=-\infty}^\infty \frac{x^{k+z}}{ \Gamma(k+1+z)}\right)= \sum \limits_{k=-\infty}^\infty (k+z)\frac{x^{k+z-1}}{ (k+z)\Gamma(k+z)}= \sum \limits_{k=-\infty}^\infty \frac{x^{k+z-1}}{ \Gamma(k+z)}=\sum \limits_{k=-\infty}^\infty \frac{x^{k+z}}{ \Gamma(k+1+z)}=f(x)$$ $$\frac{d(f(x))}{dx}=f(x)$$ $$f(x)=c(z)e^x$$ $c(z)e^x=\sum \limits_{k=-\infty}^\infty \frac{x^{k+z}}{ \Gamma(k+1+z)}$ According to Equation 1, $c(z) = 1$ for $z \in \mathbb{Z}$, but I noticed I need to find what $c(z)$ is for $z \in \mathbb{C}$. (Thanks to Norbert for his contribution.) After that we can find the result: $$\frac{\partial(c(z)e^x)}{\partial z}=\sum \limits_{k=-\infty}^\infty \frac{\partial}{\partial z}\left(\frac{x^{k+z}}{ \Gamma(k+1+z)}\right)$$ $$c'(z)e^x=\sum \limits_{k=-\infty}^\infty \frac{\partial}{\partial z}\left(\frac{x^{k+z}}{ \Gamma(k+1+z)}\right)$$ $$c'(z)e^x=\sum \limits_{k=-\infty}^\infty \left(\frac{\ln x \cdot x^{k+z}}{ \Gamma(k+1+z)}\right)-\sum \limits_{k=-\infty}^\infty \left(\Gamma'(k+1+z)\frac{ x^{k+z}}{ \Gamma^2(k+1+z)}\right)$$ $$c'(z)e^x=\ln x \sum \limits_{k=-\infty}^\infty \left(\frac{ x^{k+z}}{ \Gamma(k+1+z)}\right)-\sum \limits_{k=-\infty}^\infty \left(\Gamma'(k+1+z)\frac{ x^{k+z}}{ \Gamma^2(k+1+z)}\right)$$ $$e^x(c(z)\ln x-c'(z)) =\sum \limits_{k=-\infty}^\infty \left(\frac{ x^{k+z} \Gamma'(k+1+z)}{ \Gamma^2(k+1+z)}\right)$$ If we take $z=0$, we get an interesting result: $$e^x(c(0)\ln x -c'(0)) =\sum \limits_{k=0}^\infty \frac{ x^{k} \Gamma'(k+1)}{ (k!)^2}$$ $c(0)=1$ according to Equation 1. Thus $$e^x(\ln x -c'(0)) =\sum \limits_{k=0}^\infty \frac{ x^{k} \Gamma'(k+1)}{ (k!)^2}$$ I do not know what $c'(0)$ is. According to Norbert's answer, $c'(0) \approx -0.596347$. I have not seen this result elsewhere. Is it a known result? Please let me know whether my results are correct. Can we extend in this way all such functions that have $\Gamma(x)$ in the denominator? Thanks for advice. AI: Setting aside issues of convergence for the moment, there seems to be a mistake in the formal computation. The function $\frac{\Gamma'(1+z)}{\Gamma(1+z)^2}$ extends to an entire function, whose value at $z=-n$ is $(-1)^n\Gamma(n)$. This means that, when $z=0$, we cannot ignore the terms in the sum with $k<0$. So, maybe the question should be whether $$ e^x\log(x)=\sum_{k=0}^\infty \frac{x^k\Gamma'(k+1)}{(k!)^2}+\sum_{n=1}^\infty (-1)^n (n-1)! x^{-n} $$ At this point, this is all formal. The second piece of the above series does not converge for any $x\neq 0$. However, there is a technique called Borel summation (check out W. Balser's book "From divergent power series to analytic functions", or http://en.wikipedia.org/wiki/Borel_summation), which allows one to assign a "sum" to a power series whose coefficients grow factorially fast. Basically, the theory of Borel summation says that among all analytic functions in some sector having $$ \sum_{n=1}^\infty (-1)^n(n-1)!x^{-n} $$ as an asymptotic expansion at infinity, there is at most one which is approximated maximally well by this expansion. This analytic function, if it exists, is called the Borel sum of the series. If we define $$ g(x)=\sum_{k=0}^\infty \frac{x^k\Gamma'(k+1)}{(k!)^2}, $$ and define $h(x)$ to be the Borel sum of $$ \sum_{n=1}^\infty (-1)^n (n-1)! x^{-n}, $$ then it makes sense to ask whether $e^x\log(x)=g(x)+h(x)$. I do not know the answer to this. In our case, we can compute explicitly that $$ h(x)=-e^x\int_x^\infty\frac{e^{-t}}{t}\,dt $$ It seems like we need to do the same thing when we write $$ e^x=\sum_{k=-\infty}^{\infty}\frac{x^{k+z}}{\Gamma(k+1+z)} $$ When $z$ is not an integer, this series does not converge for any $x$ (we have that $\Gamma(k+1+z)$ decays like $\frac{1}{|k|!}$ as $k\to-\infty$). This series can be separated, the divergent part Borel summed, and again I do not know if we get the function $e^x$ in the end. I have checked that this is the case when $z$ is a half-integer (I did this computation a long time ago, and I'm only about 80% confident in its accuracy)
H: Check if two 3D vectors are linearly dependent I would like to determine with code (C++ for example) if two 3D vectors are linearly dependent. I know that if I could determine that the expression $v_1 = k \cdot v_2$ is true then they are linearly dependent; they are linearly independent otherwise. I've tried to construct an equation system to determine that, but since there could be zeros anywhere it gets very tricky and could end with divisions by zero and similar. I've also thought about using some matrices/determinants, but since the matrix would look like: $$ \begin{pmatrix} x_1 & y_1 & z_1\\ x_2 & y_2 & z_2\\ \end{pmatrix} $$ I don't see an easy way to check for linear dependency... any ideas? Thanks! AI: Here is the portion of the code you need: if((x1*y2 - x2*y1) != 0 || (x1*z2 - x2*z1) != 0 || (y1*z2 - y2*z1) != 0) { //Here you have independent vectors } else { //Here the vectors are linearly dependent }
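The three 2x2 minors tested above are exactly the components of the cross product $v_1\times v_2$, which vanishes precisely when the vectors are dependent. If the coordinates are floating-point rather than exact integers, comparing against exact zero is fragile; here is a minimal runnable sketch with a tolerance (the function name linearlyDependent and the eps default are mine, not from the thread):

#include <cmath>

// True when (x1,y1,z1) and (x2,y2,z2) are linearly dependent.
// The three 2x2 minors are the components of the cross product,
// which is zero exactly when the vectors are parallel (or one is zero).
// 'eps' absorbs floating-point round-off; eps = 0 gives the exact test.
bool linearlyDependent(double x1, double y1, double z1,
                       double x2, double y2, double z2,
                       double eps = 1e-12)
{
    return std::fabs(x1*y2 - x2*y1) <= eps &&
           std::fabs(x1*z2 - x2*z1) <= eps &&
           std::fabs(y1*z2 - y2*z1) <= eps;
}

For vectors of very different magnitudes, a relative tolerance (scaling eps by the product of the norms) behaves better than an absolute one.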
H: Low-Rank Approximations Book I am looking for a source (book, online book, etc.) where I can find the theory behind low-rank approximations of matrices. In particular, I am interested in low-rank approximations used in optimization problems, such as minimizations of the Euclidean and Frobenius norms. I have some background in basic and numerical linear algebra, but I am looking for a source that is simple and straightforward. Sincerely, Amit. AI: The book Low Rank Approximation: Algorithms, Implementation, Applications by Ivan Markovsky might be of interest here. The table of contents is available in the Amazon link.
H: Independent events and Dependent events I have a question regarding these strikingly similar problems with contradicting solutions. This is somewhat long, so prepare. Problem 1 Consider a bag of ten coins: nine are fair, but one is weighted with both sides heads. You randomly select a coin and toss it five times. Let $2s$ denote the event of selecting the weighted coin (that is, the 2-sided coin), $N$ be the event that you select a regular coin, and $5H$ be the event of getting five heads in a row. What is a) $P(5H | 2s)$ b) $P(5H | N)$ c) $P(5H)$ d) $P(2s | 5H)$ Solution 1 a) Simply 1 b) $\frac{1}{2^5}$ c) $\frac{1}{2^5}\frac{9}{10}+ \frac{1}{10} = \frac{41}{320}$ d) $P(2s|5H) = \dfrac{P(5H|2s)P(2s)}{P(5H)} = \frac{32}{41}$ From Solution 1, it seems that $P(2s|5H) \neq P(2s)P(5H)$. That is, the event of picking out the weighted coin affects the probability of getting $5H$. Here is part of my question: isn't there also some tiny probability of getting $5H$ from picking the normal one as well? It doesn't make sense to me why the events of picking the coin and getting $5H$ are dependent. Read on to the next question. Problem 2 A diagnostic test for an eye disease is accurate 88% of the time, and 2.4% of the population actually has the disease. Let $ED$ be the event of having the eye disease and $p$ be the event of testing positive. Find the probability that a) the patient tests positive b) the patient has the disease and tests positive Solution 2 Here is a tree diagram a) $0.02112 + 0.11712 = 0.13824$ b) $P(ED | p) = \dfrac{P(\text{ED and p})}{P(p)} =\frac{0.02112}{0.13824 }\approx 0.1528$ From Solution 2, it looks like $P(\text{ED and p}) = P(\text{ED})P(p)$, which means that having the eye disease and testing positive are independent events? After trying out the same formula from Problem 1, it also seems that $$P(\text{ED | p}) = \dfrac{P(\text{ED and p})}{P(p)} = \dfrac{P(\text{p | ED})P(ED)}{P(p)} \approx 0.1528$$ Also, when the question asks "the patient has the disease and tests positive", how do I know that it is $P(ED | p)$ and not $P(p | ED)$? I am very confused in general with this. Could anyone clarify for me? Thanks AI: From Solution 1, it seems that $P(2s|5H) \ne P(2s)P(5H)$. That is, the event of picking out the weighted coin affects the probability of getting $5H$. Here is part of my question: isn't there also some tiny probability of getting $5H$ from picking the normal one as well? It doesn't make sense to me why the events of picking the coin and getting $5H$ are dependent. The fact that there is some tiny probability of getting $5H$ from picking the normal coin is exactly why the event $5H$ is dependent on the event $2s$. In other words, the probability of getting five heads depends on which coin you pick. Also, when the question asks "the patient has the disease and tests positive", how do I know that it is $P(ED|p)$ and not $P(p|ED)$? I read $P(ED|p)$ as "the probability that the patient has the eye disease given that the test comes back positive." On the other hand, I read $P(p|ED)$ as "the probability that the test comes back positive given that the patient has the eye disease." In practical terms however, the patient wants to know the former. That is, when I go get a test from the doctor and it comes back positive, I want to know what the probability is that the result of the test is accurate. I believe this is what the homework question is asking you to find.
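The tree numbers above are easy to double-check mechanically; a minimal C++ sketch (mine, not from the thread; it assumes "88% accurate" means $P(p\mid ED)=P(\text{not }p\mid \text{not }ED)=0.88$):

#include <cstdio>

int main() {
    const double pDisease = 0.024;  // P(ED)
    const double accuracy = 0.88;   // P(p | ED) = P(not p | not ED)

    double pPosDiseased = pDisease * accuracy;              // 0.02112
    double pPosHealthy  = (1 - pDisease) * (1 - accuracy);  // 0.11712
    double pPos = pPosDiseased + pPosHealthy;               // 0.13824

    std::printf("P(p)      = %.5f\n", pPos);
    std::printf("P(ED | p) = %.4f\n", pPosDiseased / pPos); // ~0.1528
    return 0;
}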
H: Projective and injective modules; direct sums and products I need two counterexamples. First, a direct sum of $R$-modules is projective iff each one is projective. But I need an example to show that, “an arbitrary direct product of projective modules need not be a projective module.” If I let $R= \mathbb Z$ then $\mathbb Z$ is a projective $R$-module, but the direct product $\mathbb Z \times \mathbb Z \times \cdots$ is not free, hence it is not a projective module. We have a theorem which says that every free module over a ring $R$ is projective. Am I correct? Second, a direct product of $R$-modules is injective iff each one is injective but I need an example to show that the direct sum of injective modules need not be injective. AI: As for the first question: yes, $P = \prod_{i=1}^{\infty} \mathbb{Z}$ is a direct product of free $\mathbb{Z}$-modules which is not free. Since $\mathbb{Z}$ is a PID, $P$ is also not projective. The proof that $P$ is not free is nontrivial, but I believe it has already been given either here or on MathOverflow. As for the second question: the Bass-Papp Theorem asserts that a commutative ring $R$ is Noetherian iff every direct sum of injective $R$-modules is injective. Thus every non-Noetherian ring carries a counterexample. The proof of the result -- given for instance in $\S 8.9$ of these notes -- is reasonably constructive: if $I_1 \subsetneq I_2 \subsetneq \ldots \subsetneq I_n \subsetneq \ldots$ is an infinite properly ascending chain of ideals of $R$, then for all $n$ let $E_n = E(R/I_n)$ be the injective envelope (see $\S 3.6.5$ of loc. cit.) of $R/I_n$, and let $E = \bigoplus_{n=1}^{\infty} E_n$. Then $E$ is a direct sum of injective modules and (an argument given in the notes shows) that $E$ is not itself injective.
H: A free group on the non-empty set $X$ is solvable iff $|X| =1$ Let $X$ be a non-empty set. Prove that $F_X$, the free group on $X$ is solvable if and only if $|X| = 1$. We can see that if $|X| = 1$, then $F_X$ is abelian, and hence solvable. However, the other direction stumps me. Any suggestions? AI: Yes: a quotient of a solvable group is solvable. Every free group of rank greater than $1$ has the free group of rank $2$ as a quotient (just kill all but two of the free generators), so it suffices to find a nonsolvable group which can be generated by two elements. Try $A_5$ for instance...
H: Z-index of an arbitrary point on a flattened 3-dimensional triangle I have a triangle in a 3-dimensional coordinate system that I want to draw to a screen. I'm able to flatten the triangle to 2 dimensions and determine whether an arbitrary point on the screen falls within the flattened object. What I don't know how to do is determine the Z-index of that point. As an example, let's say the triangle is bounded by the points $(2, 2, 0)$, $(2, 8, 5)$, and $(8, 5, 10)$ (where higher Z values are behind lower ones, in case that's not clear) and mapped to a $10 \times 10$ screen. The flattened points, according to the system I'm using, are $(2, 2)$, $(3, 7)$, and $(6.5, 5)$. The arbitrary point $(5, 3)$ falls within this flattened triangle; how can I determine the Z-index of that point on the original triangle? AI: You can calculate the barycentric coordinates of the point (as described by Wikipedia) and then calculate the $z$ index as the weighted average of the $z$ indices of the vertices, with the barycentric coordinates as weights.
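A minimal C++ sketch of the suggested computation (mine, not from the answer; the function name depthAt is made up, and linear interpolation of $z$ is exact only when the flattening is affine — a perspective projection would need perspective-correct interpolation instead):

// Barycentric interpolation of depth over a flattened triangle.
// (x1,y1)..(x3,y3): projected vertices with original depths z1..z3;
// (px,py): the query point, assumed inside the projected triangle.
double depthAt(double x1, double y1, double z1,
               double x2, double y2, double z2,
               double x3, double y3, double z3,
               double px, double py)
{
    // Standard barycentric coordinates; det != 0 for a non-degenerate triangle.
    double det = (y2 - y3)*(x1 - x3) + (x3 - x2)*(y1 - y3);
    double w1  = ((y2 - y3)*(px - x3) + (x3 - x2)*(py - y3)) / det;
    double w2  = ((y3 - y1)*(px - x3) + (x1 - x3)*(py - y3)) / det;
    double w3  = 1.0 - w1 - w2;
    return w1*z1 + w2*z2 + w3*z3;   // weighted average of the vertex depths
}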
H: Elements of $\mathbb{F}_p$ having cube roots in $\mathbb{F}_p$ Let $p$ be a prime number, and let $\mathbb{F}_p$ be the field with $p$ elements. How many elements of $\mathbb{F}_p$ have cube roots in $\mathbb{F}_p$? I had this question on an exam and after reviewing I am still not sure. Any help would be appreciated. AI: 1) If $\,p=3\,$ then $\,a^3=a\,\,\,,\,\forall a\in\Bbb F_p\,$, by Fermat's Little Theorem 2) If $\,3\nmid (p-1)\,$ then $\,f:\Bbb F_p^*\to \Bbb F_p^*\,\,\,,\,f(x):=x^3\,$ is an automorphism (can you see why? Check $\,\ker f\,$...) 3) Finally, if $\,3\mid (p-1)\,$ then the map $\,f\,$ above cannot be an automorphism, and since $\,\Bbb F_p^*\,$ is a cyclic group it then has one single subgroup of any order dividing $\,p-1\,$, so... (optional: add $\,0\,$)
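A brute-force check of the three cases, as a small C++ sketch (mine, not from the answer): it counts the distinct cubes in $\mathbb{F}_p$, which should come out to $p$ when $3\nmid(p-1)$ (every element has a cube root) and $(p-1)/3+1$ when $3\mid(p-1)$ (the image subgroup has index 3, plus the element 0).

#include <cstdio>

int countCubes(int p) {               // assumes p < 1000
    bool seen[1000] = {false};
    int count = 0;
    for (int x = 0; x < p; ++x) {
        int c = (x * x % p) * x % p;  // x^3 mod p without overflow
        if (!seen[c]) { seen[c] = true; ++count; }
    }
    return count;
}

int main() {
    int primes[] = {3, 5, 7, 11, 13};
    for (int p : primes)
        std::printf("p = %2d: %d cubes\n", p, countCubes(p)); // 3 5 3 11 5
    return 0;
}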
H: Fastest numeric method for ODE I am currently using Euler method and it is 4-8 times too slow. Which method will be fastest? I need it to compute Turing's reaction-diffusion system. AI: Euler's method is the fastest possible single-step, explicit, non-adaptive method for a given fixed step size $h$. If this is too slow, I would do the following things, in this order. (1) Perform a fundamental analysis on your code. Insert breakpoints. Ensure that you're not doing things like repeatedly re-sizing arrays. Replace loops with better code (a first-order explicit ODE solver should have one "loop": one walk through the code, from $t=0$ to $t=T$). (2) Increase step size. Compute what your acceptable error tolerance is, and make $h$ as large as possible while keeping your solution within that error bound. (3) Implement an adaptive-step-size solver. These are usually faster for a given accuracy, but are more complicated. Dormand-Prince 4(5) pairs are a standard adaptive RK method. I'm willing to bet that you can solve your speed problem with step (1) (hint: if you're iterating through 256x256 elements, you don't necessarily need to encapsulate the method in a double loop).
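For reference, a minimal single-loop fixed-step Euler driver in C++ (my sketch, not from the answer; the interface where f fills a preallocated derivative buffer is an assumption): the state and the derivative vector are allocated once, so the walk from $t=0$ to $t=T$ does no re-sizing, in the spirit of point (1).

#include <vector>

template <typename F>
std::vector<double> eulerSolve(F f, std::vector<double> y, double T, double h) {
    std::vector<double> dy(y.size());        // allocated once, reused
    for (double t = 0.0; t < T; t += h) {
        f(t, y, dy);                         // caller writes dy = f(t, y)
        for (std::size_t i = 0; i < y.size(); ++i)
            y[i] += h * dy[i];               // one explicit Euler step
    }
    return y;
}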
H: How to compare big numbers that are outcomes of different functions. What is the best way to compare big numbers? They are results of two functions with different asymptotic growth. For example: a googolplex, which is $10^{{10}^{100}}$, versus $1000!$ AI: $10^{googol}$ compared to $1000!$: $1000!=1000\times999\times998\cdots<1000^{1000}$ $1000^{1000}=(10^3)^{1000}=10^{3000}$ Since a googol is drastically larger than $3000$, the first number is much, much greater. In general logarithms (equivalently, converting to a base and comparing exponents) are a great way of comparing large numbers. For example: whether $2^{523} < 3^{228}$ may not be obvious, but even knowing very rounded values for $\log(2)$ and $\log(3)$ will let you compare $523\log(2)$ and $228\log(3)$ quite easily, which is an equivalent problem.
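The $2^{523}$ vs. $3^{228}$ comparison from the answer, done the logarithm way in a few lines of C++ (my sketch, not from the thread):

#include <cstdio>
#include <cmath>

int main() {
    double a = 523 * std::log10(2.0);   // ~157.4
    double b = 228 * std::log10(3.0);   // ~108.8
    std::printf("log10(2^523) = %.2f, log10(3^228) = %.2f\n", a, b);
    std::printf("so %s is larger\n", a > b ? "2^523" : "3^228");
    return 0;
}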
H: Is there a way to calculate the probabilities without a tree diagram Could someone tell me how on earth they calculated $P(A_1\text{ and }A_3)$ etc. without drawing a tree diagram? Thank you very much AI: These are independent events and you want to know $$\,P(A_1\cap A_3)=P(A_1)\cdot P(A_3)=\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{4}$$ since I assume the probability of the coin being T or H is the same. Btw, for any $\,1\leq i\neq j\leq 4\,\,,\,\,P(A_i\cap A_j)=P(A_1\cap A_3)=\frac{1}{4}\,$. Why didn't you ask about these ones?
H: What real numbers are in the Mandelbrot set? The Mandelbrot set is defined over the complex numbers and is quite complicated. It's defined by the complex numbers $c$ that remain bounded under the recursion: $$ z_{n+1} = z_n^2 + c,$$ where $z_1 = 0$. If $c$ is real, then the above recursion will remain real. So for what values of $c$ does the recursion remain bounded? AI: The Wikipedia page gives the intersection of the set with the real axis as $[-2,0.25]$ Added: You can verify easily that $-2$ is in the set, and that for any more negative number the iterates grow in magnitude without bound. For the positive end, each iteration is greater than the one before. To hit a limit, you must have $z=z^2+c$, which has the solution $z=\frac{1-\sqrt{1-4c}}2$, which becomes imaginary at $c \gt \frac 14$
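A quick experimental confirmation of the endpoints (my sketch, not from the answer): iterate $z\mapsto z^2+c$ for real $c$ and use the standard escape criterion $|z|>2$.

#include <cstdio>

// Does the orbit of 0 under z -> z^2 + c stay bounded?
bool bounded(double c, int maxIter = 1000) {
    double z = 0.0;
    for (int i = 0; i < maxIter; ++i) {
        z = z * z + c;
        if (z > 2.0 || z < -2.0) return false;   // |z| > 2 guarantees escape
    }
    return true;
}

int main() {
    double tests[] = {-2.01, -2.0, -1.0, 0.0, 0.25, 0.2501};
    for (double c : tests)
        std::printf("c = %7.4f : %s\n", c, bounded(c) ? "bounded" : "escapes");
    return 0;
}

Near $c=1/4$ the escape is very slow (roughly $\pi/\sqrt{c-1/4}$ iterations), which is why a reasonably large iteration cap is needed to see $c=0.2501$ escape.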
H: Is there a better counter-example? (problem involving limit of composition of functions) This question arises from one of the Berkeley Problems in Mathematics, fall 1981. It asks me to find a counterexample to the following claim: Suppose $\lim\limits_{x\rightarrow x_{0}}f(x)=a$ and $\lim\limits_{t\rightarrow a}g(t)=b$; prove or find a counterexample for the claim $\lim\limits_{x\rightarrow x_{0}}g(f(x))=b$. I feel this should not be true, for we can select functions with infinitely many 'singularities' near the origin, like $x\sin(1/x)$. Then for each value $x=\frac{2}{k\pi}$ we should have the value equal to $x$. But this is not a counterexample. Looking it up in the book, the authors give the following example: $f=g=0$ when $t\not=0$, $f=g=1$ when $t=0$. The authors then claimed that $\lim_{x\rightarrow 0}g(f(x))=1$ while $\lim_{t\rightarrow 0}g=0$. Unless I am confused with the definition of the limit — that for any $\epsilon>0$ there exists $\lambda>0$ such that if $0<|x-x_{0}|\le \lambda$ then $|f(x)-A|\le \epsilon$ — I do not see why $\lim_{t\rightarrow 0}g=0$, since $t=0$ is a removable singularity for $g$. So assuming I am wrong somewhere, I want to ask: 1) why is the limit of $g$ at $0$ equal to $0$? 2) are there better counterexamples? AI: I think you just made a careless error. The limit of $g$ at $0$ really is $0$: a limit never depends on the value of the function at the point itself, so the removable discontinuity at $0$ (where $g(0)=1$) has no effect on it. Here $x_0 = 0$, so that $a = \lim_{x\rightarrow 0}f(x) = 0$, and therefore $b = \lim_{t\rightarrow a}{g(t)} = \lim_{t\rightarrow 0}{g(t)} = 0$. On the other hand, $g(f(x)) = g(0) = 1$ for every $x\neq 0$, so $\lim_{x\rightarrow x_0}{g(f(x))} = \lim_{x\rightarrow 0}{g(f(x))} = 1 \neq b$. This is probably as simple a counterexample as one could hope for.
H: How can I bound this equation? This is a problem I was stuck on in Berkeley Problems in Mathematics, fall 1983: Let $x(t)=(x_{1}(t),\ldots,x_{n}(t))$ be a differentiable function from $\mathbb{R}$ to $\mathbb{R}^{n}$. It satisfies a differential equation of the form $$x'(t)=f(x(t))$$ where $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ is a continuous function. Assuming that $f$ satisfies the condition $$\langle f(y),y\rangle\le |y|^{2},$$ derive an inequality showing that $|x(t)|$ grows at most exponentially. I am thinking about decomposing $f(y)$ into the $y$ direction and the directions orthogonal to $y$, but this does not allow me to integrate $x'(t)$ from $t_{0}$ to $t$. If I could show $|f(y)|\le K|y|$ then the inequality would be immediate; but this is false, as $f(y)$ can be orthogonal to $y$ at every point $y=x(t)$ and have arbitrarily large norm. So I do not know how to solve this in an elegant way. I can solve it in the most radical cases that $x'(t)\cdot x(t)=0$ for all $t$ and that $x'(t)=K(t)x(t)$, but I do not know how to deal with arbitrary curves. AI: Let $V(t) = \frac{1}{2} \|x(t)\|^2$. Then $\dot{V}(t) = \langle x(t), \dot{x}(t) \rangle = \langle x(t), f(x(t)) \rangle \leq \|x(t)\|^2 = 2 V(t)$. Now consider $\eta(t) = e^{-2t}V(t)$; we have $\dot{\eta}(t) \leq 0$, from which we get $\eta(t) \leq \eta(0)$, which is equivalent to $V(t) \leq V(0) e^{2t}$. Multiplying by $2$ and taking square roots gives $\|x(t)\| \leq \|x(0)\| e^t$.
H: Scaling a square puzzle Found this on the net today and am lost. On a table you have a square made of 4 coins at the corners, at distance 1; so the square is of size 1×1. In a valid move, you can choose any two coins; let's call them mirror and jumper. You move the jumper to a new position which is its mirror image with respect to the mirror. That is, imagine that the mirror is the centre of a circle and the jumper is on the periphery; you move the jumper to the diametrically opposite point on that circle. With any number of valid moves, can you form a square of size 2×2? If yes, how? If no, why not? I don't think it's possible, since if it were, the reverse would also have to be true, i.e. scaling down to a 1/2 × 1/2 square; but that is only the idea of an anti-proof. How does one prove mathematically that 2×2 is not a possibility? AI: No, it should not be possible. Think of the coins as positioned on the points (0,0), (0,1), (1,0), (1,1). The number of even x-coordinates (0) is the same as the number of odd ones (1). Any reflection that you have defined preserves the parity of the coordinates (it adds an even number to them: the jumper at $j$ moves to $2m-j$, where $m$ is the mirror). So in the end you can't get all the x-coordinates to be of the same parity (as required by a 2×2 square).
H: Hausdorff Measure Given the Hausdorff measure, is it true that $H^1(\text{line})$ = length of the line? How can one prove it? AI: On $\mathbb{R}^n$, and thus in particular on $\mathbb{R}$, the $n$-dimensional Hausdorff measure equals the $n$-dimensional Lebesgue measure. This is not completely trivial for general $n$, and not too difficult for $n=1$. One source for this is L. C. Evans, R. F. Gariepy, 'Measure theory and fine properties of functions'. The equality of the measures is Theorem 2 in chapter 2.3. The proof for $n= 1$ is in section 2.1, Theorem 2 (or, of course, Federer's monograph :-)) Edit: another source is William P. Ziemer, 'Weakly differentiable functions', Theorem 1.4.2. Another edit: the wikipedia page you used as a reference uses a definition of Hausdorff measure which differs by a dimension-dependent factor from the one in the sources I cited. Consequently the Hausdorff measure, using that definition, is a factor times the Lebesgue measure. This is also stated on that page, and the factor is explicitly given.
H: Finding second derivative I am asked to find the second derivative of the function: $$h(x)=\sqrt{x^2+1}$$ $$h(x)=(x^2+1)^\frac{1}{2}$$ $$h'=\frac{1}{2}(x^2+1)^\frac{-1}{2} 2x$$ $$h'=\frac{x}{\sqrt{x^2+1}}$$ $$h''=\frac{\sqrt{x^2+1} - x\left(\frac{1}{2}(x^2+1)^{-\frac12}\cdot 2x\right)}{(\sqrt{x^2+1})^2}$$ And this is as far as I get. My question is regarding how to proceed algebraically to solve this question. I am having a hard time seeing what the next step is in simplifying this equation. AI: After the most obvious simplifications you have $$\begin{align*} h''(x)&=\frac{\sqrt{x^2+1} - x\left(\frac{1}{2}(x^2+1)^{-\frac12}(2x)\right)}{(\sqrt{x^2+1})^2}\\ &=\frac{\sqrt{x^2+1}-x^2(x^2+1)^{-\frac12}}{x^2+1}\\ &=\frac{\sqrt{x^2+1}-\dfrac{x^2}{\sqrt{x^2+1}}}{x^2+1}\;. \end{align*}$$ You can now either multiply the fraction directly by $\frac{\sqrt{x^2+1}}{\sqrt{x^2+1}}$ or continue simplifying the numerator. If you do the latter, you get $$\begin{align*} \frac{\sqrt{x^2+1}-\dfrac{x^2}{\sqrt{x^2+1}}}{x^2+1}&=\frac{\dfrac{x^2+1-x^2}{\sqrt{x^2+1}}}{x^2+1}\\ &=\frac{\dfrac1{\sqrt{x^2+1}}}{x^2+1}\\ &=\frac1{(x^2+1)\sqrt{x^2+1}}\\ &=\frac1{(x^2+1)^{3/2}}\\ &=(x^2+1)^{-\frac32}\;; \end{align*}$$ if you multiply by $\frac{\sqrt{x^2+1}}{\sqrt{x^2+1}}$ directly, you end up with the same result after similar calculations. Note that you could also have used the product rule to calculate $h''(x)$, leaving $h'(x)$ in the form $x(x^2+1)^{-1/2}$. Then $$\begin{align*} h''(x)&=x\left(-\frac12(x^2+1)^{-3/2}(2x)\right)+(x^2+1)^{-1/2}\\ &=-\frac{x^2}{(x^2+1)^{3/2}}+\frac1{(x^2+1)^{1/2}}\\ &=-\frac{x^2}{(x^2+1)^{3/2}}+\frac{x^2+1}{(x^2+1)^{3/2}}\\ &=\frac1{(x^2+1)^{3/2}}\;, \end{align*}$$ with perhaps a little less work.
H: Asking solutions for the integral equations This is from Berkeley Problems in Mathematics, Spring 86. It asks, for $\lambda\in \mathbb{R}$, to find all solutions of the following two equations: $$\phi(x)=e^{x}+\lambda\int^{x}_{0}e^{x-y}\phi(y)dy; \psi(x)=e^{x}+\lambda\int^{1}_{0}e^{x-y}\psi(y)dy$$ My thought is to take the derivative; thus we have $\frac{d}{dx}\phi=e^{x}+\lambda\phi(x)$, because we have $$\frac{d}{dx}\lambda \int^{x}_{0}\frac{e^{x}}{e^{y}}\phi(y)dy=\lambda \frac{e^{x}}{e^{x}}\phi(x)=\lambda \phi(x),$$ and similarly for the equation for $\psi$. Thus the difference equation would be $$\frac{d}{dx}(\phi-\psi)=\lambda(\phi-\psi)$$ which implies $\phi-\psi=Ce^{\lambda x}$ for some $C$. But I do not know how to use this to solve the original equation. The seemingly simple equation $$\frac{d}{dx}\phi=e^{x}+\lambda\phi(x)$$ is also not easy to solve. I do not know how to treat the inhomogeneous term $e^{x}$ or find a particular solution for this. AI: The derivative of $\phi(x)=e^{x}+\lambda\int^{x}_{0}e^{x-y}\phi(y) \,dy$ is $\phi'(x) = e^{x}+\lambda \phi(x) + \lambda\int^{x}_{0}e^{x-y}\phi(y)\, dy$; since $\lambda\int^{x}_{0}e^{x-y}\phi(y)\,dy=\phi(x)-e^{x}$, this says, more clearly, $\phi' = (1+\lambda) \phi$. The derivative of $\psi(x)=e^{x}+\lambda\int^{1}_{0}e^{x-y}\psi(y) \, dy$ is $\psi'(x) = e^{x} +\lambda\int^{1}_{0}e^{x-y}\psi(y) \, dy$, or, more clearly, $\psi' = \psi$. The solutions are $\phi(x) = \phi(0) e^{(1+\lambda) x}$ and $\psi(x) = \psi(0) e^x$. Since $\phi(0)=1$, this gives $\phi(x)=e^{(1+\lambda)x}$. Writing $\psi(x)=Ce^{x}$ and substituting back into the integral equation gives $C=1+\lambda C$, so $\psi(x)=\frac{1}{1-\lambda}e^{x}$ when $\lambda\neq 1$, and there is no solution when $\lambda=1$.
H: Request for a $\mathrm{Hom}$ functor example Let $$ P_2 \xrightarrow{d_2} P_1 \xrightarrow{d_1} P_0 \xrightarrow{d_0} M \to 0$$ be an exact sequence of $R$-modules. Consider $$ (*) \hspace{1 cm} P_2 \xrightarrow{d_2} P_1 \xrightarrow{d_1} P_0 \to 0$$ that is, the sequence with $M$ removed. Then this resulting sequence should still be exact at $P_1$ since we did not change the maps $d_2, d_1$. Now apply the left exact functor $\mathrm{Hom}(-, N)$ (for some $R$-module $N$) to get $$ (**) \hspace{1 cm} 0 \to \mathrm{Hom}(P_0, N) \xrightarrow{\overline{d_1}} \mathrm{Hom}(P_1, N) \xrightarrow{\overline{d_2}} \mathrm{Hom}(P_2, N) $$ Clearly, $\overline{d_1}$ does not have to be injective since $(*)$ was not exact at $P_0$. What I'm not so clear about is why, even though $(*)$ was exact at $P_1$, we also don't necessarily get exactness at $\mathrm{Hom}(P_1, N)$ anymore. Can you give me a simple example with concrete $R$-modules $P_1, M, N$ such that $(*)$ is exact at $P_1$ but $(**)$ not exact at $\mathrm{Hom}(P_1, N)$? Thanks. AI: Let's take $R = \mathbb{Z}$, $M = \mathbb{Z} / 2 \mathbb{Z}$, $P_0 = \mathbb{Z}$, $P_1 = 2 \mathbb{Z}$, $P_2 = 0$. There is an evident exact sequence $$0 \longrightarrow 2 \mathbb{Z} \longrightarrow \mathbb{Z} \longrightarrow \mathbb{Z} / 2 \mathbb{Z} \longrightarrow 0$$ and applying $\textrm{Hom}(-, \mathbb{Z} / 2 \mathbb{Z})$ to the truncation yields the sequence $$\mathbb{Z} / 2 \mathbb{Z} \stackrel{0}{\longrightarrow} \mathbb{Z} / 2 \mathbb{Z}\longrightarrow 0$$ which fails to be exact anywhere at all.
H: floor division remainder and quotients may vary I have a question about the behavior of floor division. If I take $-7/3=-3$ with remainder $2$, but also $-7/3=-4$ with remainder $5$, it can have many results, and when we tally them, all check out. Then what is the use of floor division if it does not stick to one solution the way truncated division does? Where can floor division be applied — is there an example? AI: The Division Theorem says that if $a$ and $b$ are integers, $b\ne0$, then there are unique integers $q$ and $r$ such that $$a=bq+r,\quad0\le r\lt|b|$$ So there is only one result, with that condition on the remainder, $r$. The Division Theorem is the first step in the most frequently seen proof of the Unique Factorization Theorem. Generalizations of the Division Theorem can be used to prove uniqueness of factorization in many other settings, for example, unique factorization of polynomials with real coefficients.
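Since the question contrasts the two conventions in programming terms, here is a small C++ sketch (mine, not from the answer): C and C++ built-in integer division truncates toward zero, while the helpers below (the names floordiv/floormod are made up) implement the floored convention, whose remainder satisfies $0\le r<|b|$ for $b>0$, exactly the Division Theorem's normalization.

#include <cstdio>

int floordiv(int a, int b) {
    int q = a / b;                                   // truncates toward zero
    if ((a % b != 0) && ((a < 0) != (b < 0))) --q;   // fix up when signs differ
    return q;
}

int floormod(int a, int b) { return a - floordiv(a, b) * b; }

int main() {
    std::printf("truncated: -7/3 = %d, -7%%3 = %d\n", -7 / 3, -7 % 3); // -2, -1
    std::printf("floored:   -7/3 = %d, -7%%3 = %d\n",
                floordiv(-7, 3), floormod(-7, 3));                     // -3,  2
    return 0;
}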
H: Is it possible to extend an arbitrary smooth function on a closed subset of $R^n$ to a smooth function on $R^n$? Assume that $K$ is a closed (or compact if necessary) subset of $\mathbb{R^n}$ and $f:K \rightarrow \mathbb{R}$ is a smooth function in the following sense: for each $x \in K$ there exists a neighbourhood $V_x$ in $\mathbb{R}^n$ of $x$ and a function $F_x: V_x \rightarrow \mathbb{R}$ of class $C^\infty(V_x)$ such that $F_x| (V_x \cap K)=f|(V_x \cap K)$. Is it possible to extend $f$ to a function of class $C^\infty$ on $\mathbb{R}^n$? AI: Yes, you may extend $f$ to a $C^\infty$ function on $\mathbb R^n$. The local conditions you give on the function $f:K\to \mathbb R$ say exactly that $f$ is a section on $K$ of the sheaf $ C^\infty_{\mathbb R^n}$, i.e. $f\in \Gamma (K, C^\infty_{\mathbb R^n})$. This sheaf is fine, a translation of the existence of partitions of unity on $\mathbb R^n$, hence soft (see here and here), and by the definition of softness this implies that the restriction map $$\Gamma (\mathbb R^n, C^\infty_{\mathbb R^n})=C^\infty (\mathbb R^n) \to \Gamma (K, C^\infty_{\mathbb R^n}):F\mapsto F\mid K$$ is surjective, answering your question in the affirmative. Remarks 1) The closed set $K$ does not have to be compact. In fact the result and its proof generalize word for word to the case of a closed subset of a paracompact differential manifold. 2) One could unpack everything I wrote so as to eschew the use of sheaves, but I would rather consider that the ease with which you can solve such a question is good propaganda for sheaves, which are a very easy notion anyway (at least as long as cohomology is not introduced).
H: Proving there are no subfields I am trying to solve Q11 on pg. 582 of the book Abstract Algebra by Dummit and Foote. The question is: Let $f\in\mathbb{Z}[x]$ be an irreducible quartic whose splitting field has Galois group $S_{4}$ over $\mathbb{Q}$. Let $\theta$ be a root of $f$ and denote $K=\mathbb{Q}(\theta)$. Prove that $[K:\mathbb{Q}]=4$ and that $K$ has no proper subfields strictly between $\mathbb{Q}$ and $K$. My efforts: $f$ is irreducible over $\mathbb{Q}$, otherwise the splitting field of $f$ would have been of degree $\leq3!$; hence $[K:\mathbb{Q}]=4$. I can't figure out the second part: if such a subfield exists then it is of degree $2$ over $\mathbb{Q}$ and is of the form $\mathbb{Q}(\sqrt{a})$ where $a\in\mathbb{Q}$. Since $[K:\mathbb{Q}(\sqrt{a})]=2$ it also holds that $\theta=\sqrt{b}$ where $b\in\mathbb{Q}(\sqrt{a})$. This is where I am stuck. How do I prove that $K$ has no proper subfield? Help is appreciated! AI: If $F$ were an intermediate field of $K/\mathbb Q$, then $[K:F]=[F:\mathbb Q]=2$. Therefore the minimal polynomial of $\theta$ over $F$ is of the form $g(X) = X^2 + aX + b$. But then $f(X) = g(X) h(X)$ with $g, h \in F[X]$ of degree $2$. Adjoining $\theta$ as a root of $g$ and a root $\beta$ of $h$, we get an extension $F(\theta, \beta)$ in which $g$ and $h$ split, hence $f$ splits in $F(\theta, \beta)$. Now on the one hand $$[F(\theta, \beta):F] = [F(\theta, \beta):F(\beta)]\cdot[F(\beta):F]\le 4$$ implies $[F(\theta, \beta):\mathbb Q] \le 8< 24 = 4!$. On the other hand, $F(\theta, \beta)$ contains the splitting field of $f$. This is clearly not possible if the splitting field has Galois group $S_4$ of order $4!$. Btw: I don't think that it is necessarily true that $\theta = \sqrt{b}$ for some $b\in F$. The general form is rather $\theta = a \pm \sqrt{b}$ for $a,b\in F$, I believe.
H: Working with indices, where Einstein summation applies Problem: Let $$H:={1\over 2m}\vec p\cdot \vec p -{1\over r}$$, where $r=(\vec r\cdot \vec r)^{1\over 2}, \,\,\,\vec r= (x_1,x_2,x_3)^T$ and $$\vec R:= {1\over m}\vec p\times \vec L -\hat r$$. I wish to show that $${\partial H\over \partial p_i}{\partial R_j\over \partial x_i}-{\partial H\over \partial x_i}{\partial R_j\over \partial p_i}=0$$ I have worked on it and got $${\partial H\over \partial p_i}={1\over m}p_i$$ $${\partial H\over \partial x_i}={x_i\over r}$$ $${\partial R_j\over \partial x_i}={1\over m} (p_np_n\delta_{ij}-p_ip_j)-{1\over r}\delta_{ij}+{1\over r^3}x_ix_j$$ $${\partial R_j\over \partial p_i}={1\over m} (\epsilon_{jik}L_k+x_jp_i-x_kp_k\delta_{ij})$$ So substituting these into $I:={\partial H\over \partial p_i}{\partial R_j\over \partial x_i}-{\partial H\over \partial x_i}{\partial R_j\over \partial p_i}$, I get $$I=({1\over m}p_i)({1\over m} (p_np_n\delta_{ij}-p_ip_j)-{1\over r}\delta_{ij}+{1\over r^3}x_ix_j)-({x_i\over r})({1\over m} (\epsilon_{jik}L_k+x_jp_i-x_kp_k\delta_{ij}))$$ $$={1\over m} ({1\over m} (p_np_np_j-p_ip_ip_j)-{1\over r }p_j+{1\over r^3} x_ip_ix_j-{1\over mr} (\epsilon_{jik}L_kx_i+x_jp_ix_i-x_kp_kx_j))$$ Out of the $7$ terms, the first $2$ and the last $2$ vanish. So we are left with $$I={1\over m}(-{1\over r }p_j+{1\over r^3} x_ip_ix_j-{1\over r} (x_ip_ix_j-x_ix_ip_j))$$ Unfortunately, I don't see why this should equal to $0$ and I can't find my mistake either. Any suggestions? Thanks. AI: $\displaystyle {\partial H\over \partial x_i}={x_i\over r}$ is wrong: $\displaystyle{\partial r\over \partial x_i}={x_i\over r}$, but the term in $H$ is $-1/r$, so you get another $1/r^2$. You can catch this sort of mistake by checking dimensions. If you include the factor $1/r^2$, the remaining terms in your result cancel if you use $x_ix_i=r^2$.
H: An example of finite groups Is there any example of a finite group $G$ with the following properties? 1) There is a prime divisor $p$ of the order of $G$ such that the number of cyclic subgroups of order $p$ is $p+1$. 2) The order of a Sylow $p$-subgroup of $G$ is $p^{2}$. 3) $G$ is not a $p$-group. AI: Let $G$ be a group with all the properties stated in your question, and let $P$ be a Sylow $p$-subgroup. If $P$ is not cyclic, then $P$ must be elementary Abelian, hence already contains $p+1$ subgroups of order $p.$ Then $P$ must be normal, otherwise a conjugate $Q$ of $P$ different from $P$ will contain another subgroup of order $p$ not contained in $P.$ On the other hand, any group with a (proper) normal Sylow $p$-subgroup which is elementary Abelian of order $p^{2}$ will satisfy your conditions. Suppose then that $P$ is cyclic of order $p^{2}.$ I will derive a contradiction. Notice that the conjugation action of $G$ on its subgroups of order $p$ gives a homomorphism from $G$ to the symmetric group $S_{p+1}.$ Since $S_{p+1}$ does not contain elements of order $p^{2},$ $P$ must intersect the kernel of this homomorphism non-trivially. Let $Q$ be the unique subgroup of $P$ of order $p.$ Then $Q$ normalizes each subgroup of $G$ of order $p.$ If $R$ is one of these, but $R \neq Q,$ then $RQ$ is a group of order $p^{2}$ containing more than one subgroup of order $p,$ so must be elementary Abelian of order $p^{2},$ so must be a Sylow $p$-subgroup of $G$, hence conjugate to $P,$ a contradiction, as $RQ$ is not cyclic while $P$ is. Hence $Q$ must be the only subgroup of $G$ of order $p,$ again a contradiction.
H: Computing $10101011 \cdot 1025$ I'm trying to compute the following expression: $10101011 \cdot 1025$ in a simple, easy way, without using a calculator or the elementary-school algorithm. I realized that $1025=2^{10}+1$, so I now need to compute $10101011 \cdot 2^{10}+10101011$. How should I proceed? AI: Forget binary and use base $10$! $$10101011\cdot 1000= 10101011000$$ $$10101011\cdot 25=10101011\cdot 100 / 4=1010101100/4=505050550/2=252525275$$ Add: $$10101011000+252525275=10353536275$$
H: Trace of a $227\times 227$ matrix over $\mathbb{Z}_{227}$ Well, I know that the trace is the negative of the coefficient of $x^{226}$ in the characteristic polynomial of the matrix, but I don't know what the characteristic polynomial looks like in this case. Please give me some hint. Do I have to work in the splitting field of the characteristic polynomial and add the eigenvalues to get the trace? I don't know how. AI: Let's do something smaller. Use 3 instead of 227. The matrix $$\pmatrix{0&0&0\cr0&1&0\cr0&0&2\cr}$$ has distinct eigenvalues and trace zero. The matrix $$\pmatrix{1&0&0\cr0&0&1\cr0&2&0\cr}$$ has distinct eigenvalues $1,i,-i$, where $i$ is a square root of minus one in an extension field, and it has trace 1. So if the eigenvalues are allowed to be in an extension field, the answer is not determined by the information given. Of course, if the eigenvalues have to be in the field, the problem is easy.
H: checking whether certain numbers have an integral square root I was wondering if it is possible to find out all $n \geq 1$, such that $3n^2+2n$ has an integral square root, that is, there exists $a \in \mathbb{N}$ such that $3n^2+2n = a^2$ Also, similarly for $(n+1)(3n+1)$. Thanks for any help! AI: $3n^2+2n=a^2$, $9n^2+6n=3a^2$, $9n^2+6n+1=3a^2+1$, $(3n+1)^2=3a^2+1$, $u^2=3a^2+1$ (where $u=3n+1$), $u^2-3a^2=1$, and that's an instance of Pell's equation, and you'll find tons of information on solving those in intro Number Theory textbooks, and on the web, probably even here on m.se. Try similar manipulations for your other question.
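A brute-force search is a nice way to see the Pell structure emerge; a small C++ sketch (mine, not from the answer):

#include <cstdio>
#include <cmath>

int main() {
    // Find all n <= 10^6 with 3n^2 + 2n a perfect square.
    for (long long n = 1; n <= 1000000; ++n) {
        long long v = 3*n*n + 2*n;
        long long a = std::llround(std::sqrt((double)v));
        if (a * a == v)
            std::printf("n = %lld: 3n^2+2n = %lld^2\n", n, a);
    }
    return 0;
}

This prints $n = 2, 32, 450, 6272, 87362$; the ratio of consecutive solutions approaches $7+4\sqrt 3=(2+\sqrt 3)^2$, the growth factor of the fundamental solution of $u^2-3a^2=1$.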
H: Countable union of countable sets (ZF) Let $\{E_n\}_{n\in \mathbb{N}}$ be a sequence of sets such that every $E_n$ is countable. Let $g_n : \mathbb{N} \to E_n$ be a bijection for every $n\in \mathbb{N}$, and let $\alpha (n,k) = g_n(k)$. Let $A$ be the union of the $E_n$'s. Then $\alpha : \mathbb{N} \times \mathbb{N} \to A$ is a surjective function. Since $\mathbb{N} \times \mathbb{N}$ is equipotent with $\mathbb{N}$, there exists a surjective function $f: \mathbb{N}\to A$. Let $[n]=\{m\in \mathbb{N}\mid f(m)=f(n)\}$ for every $n\in \mathbb{N}$. Since $f$ is surjective, for every $n\in \mathbb{N}$, $[n]\ne \emptyset$. Since $[n] \subset \mathbb{N}$, $[n]$ is well-ordered. Let $l_n$ designate the least element of $[n]$, and let $B=\{l_n \in \mathbb{N} \mid n\in \mathbb{N}\}$. Then the restriction $f|_B : B\to A$ is a bijection. Since $B\subset \mathbb{N}$, $B$ is at most countable. Since $A$ is infinite, $B$ is countable, hence $A$ is countable. I don't know where I used AC in my argument. Help! AI: You used countable choice when you chose $g_n$. It is true that for every countable set there is an injection from the said set into $\mathbb N$, however to choose exactly one for every set requires choice. If, however, you are given the injections then the union is in fact countable, since there is no need to choose bijections. Note that if we only wish to take a union over finitely many countable sets then we can choose finitely many injections and the argument follows. Similarly the finite product of countable sets is countable and non-empty, whereas infinite products could be empty even if all sets are finite. See also the last part in this answer.
H: Why is choosing elements in equivalence classes not a choice? This is Asaf's answer from this link: How do we know an $ \aleph_1 $ exists at all? I don't understand this sentence that is; From each equivalence class choose the representative which is an ordinal (which does not require any form of choice, as the equivalence classes can be described without the axiom of choice, as well as being an ordinal). The set of representatives is a set of ordinals, we take its union. Here's what I think this means. Please tell me I'm following this argument correctly. Let $X$ be the class of all the well orderings of $\omega$ Let $[G]$={$F \in X$|$F$ is isomorphic with $G$} for every $G\in X$. Then we 'choose' representatives from each $[G]$ and take a union. I see this is definitely a choice since there might be infinitely many [G]'s. Why is this not a choice?? AI: Since your choice comes from the collection of ordinals, you can specify that you want the least such ordinal that satisfies the condition. This ordinal exists and is unique, so there is no choice involved. (Of course as Asaf's answer says it turns out that there is only one ordinal that satisfies the property, but you can also make this selection without knowing that fact)
H: Is this metric space complete: $d(f,g)=\max\limits_{t\in[0,1]} |f(t)-g(t)|$ for $C^1$-functions? The space $\mathcal C^1[0,1]$ with the metric $$d(f,g)=\max_{t\in[0,1]} |f(t)-g(t)|$$ No, it is not a complete metric space: by the Stone-Weierstrass theorem we know that $|x|$ can be uniformly approximated by a sequence of polynomials, which are clearly $\mathcal{C}^1[0,1]$, but $|x|$ is not $\mathcal{C}^1$. Is my argument correct? AI: [Added: By $C[0,1]$ I mean the set of continuous functions $f: [0,1] \rightarrow \mathbb{R}$ endowed with the metric $d(f,g) = \max_{x \in [0,1]} |f(x)-g(x)|$. This is a complete metric space by the Cauchy Criterion for Uniform Convergence.] Yes. To recap it: you have a complete metric space, $\mathcal{C}[0,1]$, and a subspace, $\mathcal{C}^1[0,1]$, which is not closed (rather, it is proper and dense). Therefore $\mathcal{C}^1[0,1]$ cannot be complete. Added: As t.b. points out, the absolute value function is $C^1$ on the interval $[0,1]$, so you should pick something else (e.g. what t.b. says). I also agree that Weierstrass Approximation is much more than you need here. For instance, in Example 7 of these notes I show -- in an intentionally clunky, hands-on fashion -- that the absolute value function is a uniform limit of $C^1$-functions on $[-1,1]$.
H: Why does $ \| v_1 \cdots v_k \|_2 \leqslant \|v_k \|_2 \|v_1 \|_\infty \cdots \| v_{k-1} \|_\infty$ hold? For $ v_i \in L^\infty \cap L^2 $, each with compact support, $v_i : \mathbb R^n \to \mathbb R$, does $$ \| v_1 \cdots v_k \|_2 \leqslant \|v_k \|_2 \|v_1 \|_\infty \cdots \| v_{k-1} \|_\infty$$ hold? And why? AI: By definition, $|v_i|\leq\|v_i\|_\infty=\operatorname{ess\,sup}\limits_{x\in\mathbb{R}^n}\;|v_i(x)|$ almost everywhere. Thus $$\int (v_1v_2\cdots v_k)^2\,dm\leq\int\|v_1\|_\infty^2\|v_2\|_\infty^2\cdots\|v_{k-1}\|_\infty^2 v_k^2\,dm=\|v_1\|_\infty^2\|v_2\|_\infty^2\cdots\|v_{k-1}\|_\infty^2\int v_k^2\,dm$$ and then taking the square root of both sides we arrive at $$\|v_1\cdots v_k\|_2\leq\|v_1\|_\infty\|v_2\|_\infty\cdots\|v_{k-1}\|_\infty\|v_k\|_2$$
H: Distance between a point and an m-dimensional subspace in n-dimensional space ($m<n$) I am trying to find a method with a low computational cost to compute the distance of a point $P$ from a space $S$ that is defined by the origin $O$ and $m$ vectors $v_1, v_2, ..., v_m$ in an $n$-dimensional space ($m<n$). The vectors are not restricted by any means other than that they are not 0. Furthermore, I would like to identify the point in $S$ that is closest to $P$. This calculation is part of a 'fitting function' for a machine learning problem and thus has to be executed rather often and should be fast. The input to the function is as defined above, $P$ and $v_1, v_2, ..., v_m$. This is just for context and I am happy with a mathematical solution and can of course do the implementation myself. Thanks in advance and please let me know if I need to specify anything in more detail. AI: I think the most efficient way is to compute the projection $\mathrm{Pr}_L(p)$ of the vector $p$ onto the linear subspace $L$ spanned by the vectors $v_1,\ldots,v_m$ and then find the length of the vector $p-\mathrm{Pr}_L(p)$. Using the modified Gram-Schmidt orthogonalization process you can find an orthonormal basis of $L$, call it $\{e_1,\ldots,e_m\}$, and then compute $$ \mathrm{Pr}_L(p)=\sum\limits_{i=1}^m \langle p,e_i\rangle e_i $$ The desired distance is $$ d(p,L)=\left\Vert p-\sum\limits_{i=1}^m \langle p,e_i\rangle e_i\right\Vert $$ There is an elegant, though for your purposes useless, formula for the distance $d(p,L)$ from a point $p\in\mathbb{R}^n$ to the linear subspace $L$ spanned by vectors $v_1,\ldots, v_m$. It can be found by the formula $$ d^2(p,L)=\frac{G(v_1,\ldots,v_m,p)}{G(v_1,\ldots,v_m)}\tag{1} $$ where $$ G(v_1,\ldots,v_k)=\det \begin{pmatrix} \langle v_1,v_1 \rangle & \ldots & \langle v_1,v_k \rangle\\ \vdots & \ddots & \vdots\\ \langle v_k,v_1 \rangle & \ldots & \langle v_k,v_k \rangle \end{pmatrix} $$ is a Gram determinant.
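A direct C++ sketch of the recommended method (mine, not from the answer): modified Gram-Schmidt builds an orthonormal basis of the span, the projection gives the closest point, and the residual norm is the distance.

#include <cstdio>
#include <vector>
#include <cmath>

using Vec = std::vector<double>;

static double dot(const Vec& a, const Vec& b) {
    double s = 0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Distance from p to span{v_1,...,v_m}; writes the closest point too.
double distToSpan(const std::vector<Vec>& v, const Vec& p, Vec& closest) {
    std::vector<Vec> e;                  // orthonormal basis (modified G-S)
    for (Vec w : v) {                    // copy: w is modified in place
        for (const Vec& u : e) {
            double c = dot(w, u);        // uses the already-reduced w
            for (std::size_t i = 0; i < w.size(); ++i) w[i] -= c * u[i];
        }
        double norm = std::sqrt(dot(w, w));
        if (norm > 1e-12) {              // drop (near-)dependent vectors
            for (double& x : w) x /= norm;
            e.push_back(w);
        }
    }
    closest.assign(p.size(), 0.0);       // projection Pr_L(p)
    for (const Vec& u : e) {
        double c = dot(p, u);
        for (std::size_t i = 0; i < p.size(); ++i) closest[i] += c * u[i];
    }
    Vec r = p;                           // residual p - Pr_L(p)
    for (std::size_t i = 0; i < p.size(); ++i) r[i] -= closest[i];
    return std::sqrt(dot(r, r));
}

int main() {
    std::vector<Vec> v = {{1,0,0}, {1,1,0}};   // spans the xy-plane in R^3
    Vec p = {0,0,1}, closest;
    std::printf("distance = %g\n", distToSpan(v, p, closest));  // prints 1
    return 0;
}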
H: Reduced frequency range FFT Generally when one takes the FFT of a signal it "works" over the whole bandwidth, dividing up the spectrum into chunks given by the resolution. If the bandwidth of the signal is 10kHz and your resolution is 1000, then each "frequency" represents a chunk of 10Hz (each bin is 10Hz in size). The problem with this method is that it gives the same "size" to each bin even though lower frequencies lose resolution: e.g., a 10Hz bin around a 25Hz frequency contains much more information than 10Hz at 8kHz. This issue is not hard to fix with the analytical FT since it is just a matter of scale. Is it possible to adapt the FFT to have a frequency-dependent bin size? Essentially, when we divide up the frequency range into n chunks we want lower frequencies to have more accuracy, since they are lower frequencies. E.g., I might want a resolution of 0.1Hz at the lower frequencies and 10Hz at the higher frequencies. The problem with the current FFT is that we must use the highest resolution overall. That is, because I want a 0.1Hz resolution at the lower frequencies I MUST have a 0.1Hz resolution at the higher frequencies too. This means I'll require a much higher n-point transform than I really need. AI: It's called the nonequispaced FFT or non-uniform discrete Fourier transform: Code can be found here
H: analytic on a disc with a hole I am reading a proof of Cauchy's Integral Formula. In the proof, the author lets $\phi (z,w)=[f(z)-f(w)]/(z-w)$ if $z\neq w$ and $f'(z)$ otherwise, and leaves the readers to prove that $g(z)=\phi (z,w)$ is analytic for fixed $w$. I solved this problem by considering the power series around $w$, but I suspect that we can get a more general result. Let $f:B(a;R) \rightarrow \mathbb{C}$ be continuous and let $f$ be analytic on $B(a;R)-\{a\}$, where $B(a;R)$ is the open disc of radius $R$ with centre at $a$. Is it true that $f$ is analytic on $B(a;R)$? AI: This can be proved using Morera's theorem, which states that if $f$ is a continuous function defined on an open set $D$ such that the integral around every simple closed curve is 0, then the function is holomorphic on $D$. To show that the integral around any simple closed curve is zero, it is sufficient to consider only very simple curves such as rectangles or triangles. Furthermore, it is clear that we only have to show this is true for curves which either pass through $a$ or go around $a$, because we know $f$ is holomorphic on $B(a;R)\setminus\{a\}$. There are some different cases to consider, but the argument goes like this... Choose $\epsilon>0$ and let $$M=\max_{z\in \overline{B(a;R/2)}}\{|f(z)|\}.$$ Continuity of $f$ ensures that this is well defined. Now choose $r$ such that $0<r<\min(R/2,\epsilon/(6M))$. Let $\Gamma$ be any triangle contained in $B(a; r)$; its perimeter is at most $6r$, so it follows that $$\left|\int_\Gamma f(z)\,dz\right|\leq M(6r)<\epsilon.$$ Thus, by rewriting the integral around any simple closed curve as a sum of integrals around simple closed curves contained in $B(a;R)\setminus\{a\}$ and one integral around an appropriately small triangle around $a$, this shows that the integral is zero, and Morera's theorem finishes the argument.
H: Functional and Linear Functional May I know what the distinction is between functional analysis and linear functional analysis? I did a search online and came to the conclusion that linear functional analysis is not functional analysis, and am getting confused by this. When I look at the book Linear Functional Analysis published by Joan Cerdà with the AMS, there is not much difference in its content compared to other titles with just the heading Functional Analysis. Hope someone can clarify for me. Thank you. AI: It is mostly a matter of taste. Traditionally functional analysis has dealt mostly with linear operators, whereas authors would typically title their books nonlinear functional analysis if they consider the theory of differentiable (or continuous or more general) mappings between Banach (or more general function) spaces. But it seems more recently the subject of "nonlinear functional analysis" has been gaining traction and many more people now adhere to the philosophy "classifying mathematical problems as linear and nonlinear is like classifying the universe as bananas and non-bananas." So instead of labeling all the non-bananas, some people now agree that it makes more sense to instead label the bananas...
H: About the convergence of $f_n:=\sin{\sqrt{t+4n^2\pi^2}}$ Let $f_n \in C([0,+\infty))$ be defined by $f_n(t):=\sin{\sqrt{t+4n^2\pi^2}}$, for $n \in \mathbb N$ and $t \ge 0$. Prove that $f_n$ converges pointwise to $f \in C([0,+\infty))$ and determine $f$; study the uniform convergence of the sequence on bounded intervals and on $[0,+\infty)$. Well, I've got some problems and I need your kind help. First of all, I've noted that $f_n(0)=0$ for every $n \in \mathbb N$. My guess is that the pointwise limit is $f \equiv 0$. But how can I prove it rigorously? I think that one should note that $4n^2\pi^2=(2n\pi)^2$, so there must be something related to the periodicity of the function $\sin(\cdot)$... Thanks in advance. AI: $\sin(\sqrt{t+4n^2\pi^2})=\sin\left(2n\pi\sqrt{1+\frac{t}{4n^2\pi^2}}\right)\approx \sin\left(2n\pi+\frac{t}{4n\pi}\right)=\sin\left(\frac{t}{4n\pi}\right)\approx\frac{t}{4n\pi}\rightarrow 0$, so $f(t)=0$. To make this rigorous, use periodicity: $f_n(t)=\sin\left(\sqrt{t+4n^2\pi^2}-2n\pi\right)$, and $$0\le\sqrt{t+4n^2\pi^2}-2n\pi=\frac{t}{\sqrt{t+4n^2\pi^2}+2n\pi}\le\frac{t}{4n\pi},$$ so, since $|\sin(x)|\le|x|$, we get $|f_n(t)|\le\frac{t}{4n\pi}$. On any bounded interval $[0,T]$ this gives $\sup_{t\in[0,T]}|f_n(t)|\le\frac{T}{4n\pi}\rightarrow 0$, so the convergence is uniform there. It is not uniform on $[0,+\infty)$: taking $t_n=(2n\pi+\pi/2)^2-4n^2\pi^2$ gives $f_n(t_n)=1$.
H: Which statement is true regarding the centroid? From a point $P$ on the circle $x^{2} + y^{2} = 4r^{2}$, tangents are drawn to $x^{2} + y^{2} = r^{2}$ at $Q$ and $R$. Then which of the following statement(s) are true regarding the centroid of $ΔPQR$ $A.$ is at a distance $r$ from chord $QR$.; $B.$ lies on $x^{2} + y^{2} = r^{2}$ $C.$ is at a distance $1.5r$ from $P$ $D.$ doesn't lie on the line joining the origin and $P$ My take: $B$ Please help. AI: Let $O$ be the centre of the circles. By symmetry the centroid lies on the line $OP$. Where on $OP$? Join $Q$ and $R$, and let $M$ be the point where $OP$ meets $QR$. Then by properties of the centroid, the centroid lies on $PM$, one-third of the way up from $M$ towards $P$. Note that $PO=2r$ and $OQ=r$. So $\angle QPO$ is $30^\circ$. Computation (special angles or Pythagorean Theorem) shows that $PM=\frac{3}{2} r$. The point $X$ where the little circle meets $OP$ is therefore $1/3$ of the way up from $M$, since $XP=r$. It follows that $X$ is the centroid.
H: The necessary and sufficient conditions for the solution of the equation $\frac{dy}{dx} = f(y)$ to be locally unique. $$\frac{\mathrm{d}y}{\mathrm{d}x} = f(y)$$ where $f(y)$ is continuous on $|y-a|\leq \epsilon$, and $f(y)=0$ iff $y=a$. To prove: for an initial value on $y=a$, the equation has a locally unique solution iff $\left|\int_a^{a+\epsilon}\frac{\mathrm{d}y}{f(y)}\right|= \infty$. How do I prove that local uniqueness $\Rightarrow$ $\left|\int_a^{a+\epsilon}\frac{\mathrm{d}y}{f(y)}\right|= \infty$? AI: It's not true. Consider e.g. $$\dfrac{dy}{dx} = - y^{1/3}, y(0) = 0$$ Even though $\int_0^{\epsilon} - y^{-1/3}\ dy$ is finite, the only solution for $x \ge 0$ is $y = 0$ (the sign of $f$ pushes solutions back towards $0$). What is true is this. Suppose $f(y) > 0$ for $a+\epsilon > y > a$ and $\displaystyle\int_{a}^{a+\epsilon} \dfrac{dy}{f(y)} = b < \infty$. Then besides the constant solution $y = a$, there is a solution defined implicitly by $\displaystyle\int_a^y \dfrac{ds}{f(s)} = x$ for $0 < x < b$, with $y = a$ for $x \le 0$. Similarly, if $f(y) < 0$ for $a-\epsilon < y < a$ and $\displaystyle \int_{a-\epsilon}^a \dfrac{dy}{f(y)} = c > -\infty$, there is another solution in $c < x < 0$.
H: Question about factor rings Assume $m_i$ are maximal ideals in a ring $R$. Then I have $m_1 \cdot \dots m_{k}$ is an ideal in $m_1 \cdot \dots m_{k-1}$ hence I can quotient to get a factor ring $m_1 \cdot \dots m_{k-1} / m_1 \cdot \dots m_{k}$. This ring is of course also an abelian group. Now I'd like to turn this into a vector space. How do I see that the field of scalars I am looking for is $R/m_k$? AI: If $M$ is an $R$-module and $I$ is an ideal of $R$, then $IM$ is a submodule of $M$, and there is a natural $R/I$ module structure on $M/IM$: namely, given $r+I\in R/I$ and $m+IM$ in $M/IM$, define $(r+I)(m+IM) = rm+IM$. This is well-defined: if $r-s\in I$ then $(r-s)m\in IM$, so $rm+IM = sm+IM$. And if $n-m\in IM$, then we can express $n-m$ as a sum $$n-m = a_1m_1+\cdots +a_km_k,\qquad a_i\in I, m_j\in M$$ so $$r(n-m) = ra_1m_1+\cdots + ra_km_k\in IM$$ since $ra_i\in I$ for all $i$. Here, you have $M=m_1\cdots m_{k-1}$, $I=m_k$, so there is a natural $R/m_k$ module structure on $M/IM = m_1\cdots m_{k-1}/m_1\cdots m_k$. Since $m_k$ is maximal, $R/m_k$ is a field, so this is actually a vector space structure. So this is a natural field over which to give $M/IM$ a vector space structure, but by no means the only one.
H: Boundary of product manifolds such as $S^2 \times \mathbb R$ Simple question but I am confused. What is the boundary of $S^2\times\mathbb{R}$? Is it just $S^2$? What would be the general way to evaluate the boundary of a product manifold? Thanks for the replies! AI: If you are interested in differentiable manifolds, a variation of what joriki said is still true. If $X$ is a manifold without boundary and $Y$, a manifold with boundary $\partial Y$, then $X\times Y$ is a manifold with boundary $X\times \partial Y$. For more details, you can look at many places, e.g. Guillemin and Pollack Differential Topology, Chapter 2.
H: Proof using Reductio ad absurdum (RAA) Note: $\neg$ means 'not', $\rightarrow$ is 'conditional', $\land$ is 'and', $\lor$ means '(inclusive) or'. Prove: $[\neg D \lor (A \land B)] \rightarrow[(J \rightarrow \neg A) \rightarrow (D \rightarrow \neg J)]$ using Reductio ad absurdum (RAA) or conditional proof (CP). 1. $\neg[(J \rightarrow \neg A) \rightarrow (D \rightarrow \neg J)]$ (Assume for RAA) 2. $(J \rightarrow \neg A) \land \neg(D \rightarrow \neg J)$ (the conditional in equation 1 is only false if the left is true and the right is false) 3. $\neg(D \rightarrow \neg J)$ (simplification of equation 2, I think) 4. $D \land \neg\neg J$ (the only way for eq 3 to be true is if $D$ is true and $\neg J$ is false) 5. $D$ (simplification of 4) 6. $\neg\neg J$ (simplification of 4) 7. $J$ (double negation of 6) 8. $J \rightarrow \neg A$ (simplification of the left term of equation 2) 9. $\neg A$ (eq 7, 8: $J$ is true, so $\neg A$ is true) 10. $\neg(A \land B)$ (if $A$ is false, $A$ and anything is false; negate that to get a truth) 11. $\neg[\neg D \vee (A \land B)]$ ($A\land B$ is false from eq 10, and $D$ is true from eq 5, so $\neg D$ is false; false or false is false) 12. $\neg [(J \rightarrow \neg A) \rightarrow (D \rightarrow \neg J)] \rightarrow \neg[\neg D \lor (A \land B)]$ (1-11, RAA) 13. $[\neg D \lor (A \land B)] \rightarrow [(J \rightarrow \neg A) \rightarrow (D \rightarrow \neg J)]$ (contrapositive of eq 12) What's wrong with the above proof? The teacher said it did not use RAA or CP and gave 0 points. Why not? AI: In a proof by reductio ad absurdum, you begin by assuming the negation of the proposition to be proven; in this case, you would begin by assuming $$\neg\Bigl([\neg D \lor (A \land B)] \longrightarrow[(J \rightarrow \neg A) \rightarrow (D \rightarrow \neg J)]\Bigr).$$ The objective is to deduce an absurdity (something of the form $P\land \neg P$). As you can see, you don't do either of those (you neither start with the negation of the proposition to be proven, nor do you deduce an absurdity), so you did not give a proof by reductio ad absurdum. In a "conditional proof" of $P\to Q$, you assume $P$, and then you proceed to deduce $Q$. From this proof, thanks to the Deduction Metatheorem, you can conclude that there is a proof of $P\to Q$. In the case at hand, your proof would have to begin by assuming $$\neg D\lor (A\land B)$$ and finish by deducing $$(J\to \neg A)\to(D\to\neg J).$$ Again, as you can see, you did not do this. So I think that "what is wrong" with your proof is that it did not follow the instructions given; you were asked to produce a proof by one of two methods, and you did not. That is not to say your proof is invalid: from what I can tell, it is a valid proof. You provided a conditional proof of the contrapositive of the proposition you want to establish. Unfortunately, a conditional proof of the contrapositive is neither a conditional proof, nor a reductio ad absurdum proof. As to partial marks, that's up to your instructor. Now, don't despair: it is fairly easy to turn your proof into a proof by reductio ad absurdum. Start with the negation of the proposition you have; you will be able to deduce from this assumption your line 1. Having deduced your line 1, you will end up, with your development, deducing $\neg(\neg D\lor (A\land B))$. Now, from the original assumption you should also be able to deduce $(\neg D\lor (A\land B))$, and now you are in the situation you wish: you will have $P\land \neg P$, with $P\equiv (\neg D\lor (A\land B))$.
H: What is the formula to find the count of (sets of) maps between sets that equal the identity map? Given two finite sets $A$ and $B$, what is the formula to find the number of pairs of mappings $f, g$ such that $g \circ f = 1_A$? AI: We look at the problem when $A$ and $B$ are finite. Suppose that $A$ has $a$ elements, and $B$ has $b$ elements. A function $f$ has the desired property iff $f$ is injective. Thus we need $a \le b$. For any given $f$, the function $g$ is determined on $f(A)$, but not on the rest of $B$. There are $b(b-1)(b-2)\cdots (b-a+1)$ injections from $A$ to $B$. For every such injection, the value of $g$ on the complement of $f(A)$ is arbitrary, so there are $a^{b-a}$ choices for $g$. Multiply. The total number of ordered pairs $(f,g)$ with the desired property is $$b(b-1)(b-2)\cdots(b-a+1)(a^{b-a}),$$ or equivalently $\frac{b!}{(b-a)!}a^{b-a}$.
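As a quick sanity check of the formula above, here is a minimal brute-force sketch (an illustration in Python, not part of the original answer) with $A=\{0,\dots,a-1\}$ and $B=\{0,\dots,b-1\}$:

    from itertools import product
    from math import factorial

    def count_pairs(a, b):
        count = 0
        for f in product(range(b), repeat=a):        # all maps f: A -> B as tuples
            for g in product(range(a), repeat=b):    # all maps g: B -> A
                if all(g[f[x]] == x for x in range(a)):  # g(f(x)) = x for every x in A
                    count += 1
        return count

    for a, b in [(1, 2), (2, 3), (2, 4), (3, 3)]:
        formula = factorial(b) // factorial(b - a) * a ** (b - a)
        print(a, b, count_pairs(a, b), formula)      # the two counts agree

The enumeration is exponential in $a$ and $b$, so this only serves to confirm the closed form on small sets.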
H: Limit of a subsequence I studied a definition, and I didn't find it in any other book (besides those I use). It's like a point of closure for sequences. We call $a$ a "value of closure" of $(x_n)$ when $a$ is the limit of a subsequence of $(x_n)$. The question is: For a real number $a$ to be a "value of closure" it is necessary and sufficient that $\forall \epsilon >0$ and $\forall k \in \mathbb{N}$ given, there is $n > k$ such that $|x_n -a|< \epsilon$. I could do the first part ($a \Rightarrow |x_n - a|<\epsilon$) but not the $\Leftarrow$. Thanks for any help! AI: To expand the comment, fix $n_1=1$ for example. We can find $n_2>1$ such that $|x_{n_2}-a|<\frac 12$. Assume that $n_1<n_2<\dots<n_k$ have been constructed. For $\varepsilon=2^{-(k+1)}$, we can find $n_{k+1}>n_k$ such that $$|x_{n_{k+1}}-a|\leq 2^{-(k+1)}.$$ Hence we have constructed a subsequence $\{x_{n_k}\}$ such that $|x_{n_k}-a|\leq 2^{-k}$ for every integer $k$. This proves that $a$ is a value of closure of $\{x_n\}$.
H: Continuous Actions and Homomorphisms I am learning about the compact-open topology and have a small proposition I am struggling to prove. Let $G$ be a topological group, $X$ a compact, Hausdorff space, and $H(X)$, the homeomorphisms of $X$, have the compact open topology. I want to show that an action of $G$ on $X$, call it $\gamma_1:G\times X\rightarrow X$ is continuous iff its associated homomorphism $\gamma_2:G\rightarrow H(X)$, where $g\mapsto \phi_g$ (left-translation by $g$), is continuous. I can do the $\Leftarrow$ direction: assuming $\gamma_2$ is continuous, the map $(g,x)\mapsto (\phi_g,x)$ is continuous. Since the evaluation map $(\phi_g,x)\mapsto \phi_g(x)$ is continuous, and $\gamma_1$ is the composition of these two, $\gamma_1$ is continuous. The other direction is where I'm not sure how to proceed. We can take $\phi_g\in H(X)$ and a subbasis set $S(C,U)=\{f:f(C)\subseteq U\}$ for $C$ compact, $U$ open, such that $\phi_g\in S(C,U)$. So $\gamma_2^{-1}(S(C,U))=\{h\in G:h\cdot C\subseteq U\}$. This is all true, but I am not sure it is helpful. I am not sure what the right approach is; in particular I don't see how/when to leverage the fact that $\gamma_1$ is continuous. Thanks for any hints, direction, insight, etc. AI: I think it is quite important to carefully write down what one is supposed to do in a situation like this one, as it is really easy to get confused. You are given a subbasis element $S(C,U)$ with $C\subset X$ compact, and $U\subset X$ open. To prove continuity of $\gamma_2$ it suffices to show that for any $g\in G$ satisfying $g(C)\subset U$ there exists a neighbourhood $V\subset G$ of $g$ such that $$ h\in V \implies h(C) \subset U$$ Luckily we are only given one piece of information about the action of $G$ on $X$, so we immediately know where to start: Fix $g\in G$ as above. We know that for any $x\in C$ we have $g(x) \in U$. Also fix $x$ for the moment. By assumption on the continuity of $\gamma_1$ there are neighbourhoods $W_x \subset X$ of $x$ and $V_x\subset G$ of $g$ such that $$(h,y)\in V_x\times W_x \implies h(y) = \gamma_1(h,y)\in U$$ Now finitely many of these $W_x$ cover $C$, say $C\subset W_{x_1}\cup\dots\cup W_{x_n}$. Let $V = V_{x_1}\cap \dots \cap V_{x_n}$. Then $V$ is an open neighbourhood of $g$ in $G$ and I'll leave it to you to verify that $\gamma_2(V)\subset S(C,U)$.
H: Schwarz inequality for unital completely positive maps I came across the following form of Schwarz inequality for completely positive maps in Arveson's paper: Let $\delta:\mathcal{A}\to\mathcal{B}$ be a unital completely positive linear map between two $C^*$-algebras, then \begin{equation}\delta(A)^*\delta(A)\le \delta(A^*A),\end{equation} which is a crucial part in proving proposition 3. I do not know this inequality and thus searched on Wikipedia, where something related is (Kadison-Schwarz): If $\phi$ is a unital positive map, then \begin{equation}\phi(a^*a)\ge\phi(a^*)\phi(a)\end{equation} for all normal elements $a$. However, I cannot find a proof of Kadison-Schwarz. Also, since Kadison-Schwarz works only for normal elements, there seems to be a gap between Kadison-Schwarz and Schwarz. I wonder where I can find a proof of the first inequality. Thanks! AI: Note that Kadison-Schwarz is stated for positive (not necessarily cp) maps; that's why the restriction to normals is required (the C$^*$-algebra generated by a normal is abelian, and then any positive map is completely positive). Also, for the Schwarz inequality all that is required is 2-positivity (which of course is implied by complete positivity). This is the proof of the inequality as in Paulsen's book: We have $$ \begin{bmatrix}1&a \\ a^*& a^*a\end{bmatrix}=\begin{bmatrix}1&a\\ 0&0\end{bmatrix}^*\begin{bmatrix}1&a\\ 0&0\end{bmatrix}\geq0. $$ Then $$ 0\leq\delta^{(2)}\left(\begin{bmatrix}1&a \\ a^*& a^*a\end{bmatrix}\right) =\begin{bmatrix}1&\delta(a) \\ \delta(a)^*& \delta(a^*a)\end{bmatrix} $$ Applying this in particular to the vector $\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix}$, we get $$ 0\leq\left\langle\begin{bmatrix}1&\delta(a) \\ \delta(a)^*& \delta(a^*a)\end{bmatrix}\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix},\begin{bmatrix}-\delta(a)\eta\\ \eta\end{bmatrix}\right\rangle=\langle(\delta(a^*a)-\delta(a)^*\delta(a))\eta,\eta\rangle. $$ As the vector $\eta$ can be chosen arbitrarily, we conclude that $\delta(a^*a)-\delta(a)^*\delta(a)\geq0$.
H: Strange application of Cauchy's Integral Theorem According to my book, Riemann's Zeta Function, Cauchy's Integral Formula is applicable to the following integral for all negative values of $s$: $$-\frac{\Pi(-s)}{2\pi i}\int_{|z|=\epsilon}(-2\pi in - z)^{s-1}\frac{z}{e^z - 1}\frac{dz}{z} = -\Pi(-s)(-2\pi in)^{s-1}$$ where $\Pi(-s) := \Gamma(-s+1)$. Could someone explain to me how exactly this works? I can't seem to figure it out. Thanks. AI: Notice that for $n\neq 0,$ the factor $(-2\pi in-z)^{s-1}\frac{z}{e^z-1}$ is holomorphic near $z=0.$ To see this, expand $\frac{z}{e^z-1}=\frac{z}{(1+z+z^2/2+\cdots) -1}=\frac{1}{1+z/2+\cdots}.$ Then we can apply Cauchy's integral formula to see that the integral equals $2\pi i \times (-2\pi in - 0)^{s-1}\times\frac{1}{1+0+\cdots},$ which gives the result.
H: The closure of $\overline{\{x\}}$ being irreducible and relating the generic point to its associated irreducible scheme If $x$ is a point in $X$ where $X$ is a scheme, we write $\overline{\{ x\}}$ for the closure of $x$ in $X$. $\mathbf{Question \;1}$: I am a bit confused why $\overline{\{ x\}}$ is irreducible. According to some lecture notes, this scheme $\overline{\{ x\}}$ is irreducible since an open subset of $\overline{\{ x\}}$ that doesn't contain $x$ also doesn't contain any point of the closure of $x$ since the complement of an open set is closed. Therefore, every nonempty open subset of $\overline{\{ x\}}$ contains $x$, and is therefore dense in $\overline{\{x \}}$. So how does every nonempty open subset of $\overline{\{x \}}$ being dense relate to $\overline{\{x \}}$ being irreducible? $\mathbf{Question \;2}$: Suppose $X$ is a scheme and $U=\operatorname{Spec }A$ is a nonempty irreducible subset of $X$. Then $U$ has a unique generic point $\xi$ corresponding to the minimal prime of $A$. Why is it that $\{ \xi\}\not=U$, but instead $\overline{\{\xi \}}=U$? AI: $1)$ If $\overline{\{x\}}$ were reducible, you would have a non-trivial decomposition $\overline{\{x\}}=F\cup F'$ into a union of two closed subsets. The complement of, say, $F$ then gives a nonempty non-dense open subset. $2)$ $U$ can certainly be equal to $\{\xi \}$, but when $U$ is called a subscheme of $X$, this usually refers to closed subschemes (just as with subvarieties).
H: If a subset of $\mathbb{R}$ is closed and bounded with respect to a metric equivalent to the Euclidean metric, must it be compact? Two different metrics $d$ and $\hat d$ in a space $X$ are said to be equivalent iff the topologies generated by them are the same, in other words $U\subseteq X$ is $d$-open iff it is $\hat d$-open. So by definition $(c)$ is correct, am I right? AI: No. Even if $d$ and $\hat{d}$ induce the same topology (in which case a set is $d$-closed $\iff$ it is $\hat{d}$-closed), it need not be the case that $\hat{d}$-bounded $\implies$ $d$-bounded, so one cannot conclude from the Heine-Borel theorem ($d$-closed and $d$-bounded $\implies$ $d$-compact for subsets of $\mathbb{R}^n$) that $\hat{d}$-closed and $\hat{d}$-bounded $\implies$ $\hat{d}$-compact for subsets of $\mathbb{R}^n$. BenjaLim has given the example $\hat{d}(x,y)=\min\{d(x,y),1\}$. The other standard example is $$\hat{d}(x,y)=\frac{d(x,y)}{1+d(x,y)}.$$ One can show that for any metric space $(X,d)$, both of these definitions of $\hat{d}$ induce the same topology as $d$, but that $X$ is bounded under these $\hat{d}$ even if it was not under $d$. Remember that boundedness is strictly a property that comes from a metric, i.e. boundedness is not a topological property.
H: Is $\pmb{\eta}\cdot\pmb{\omega_1} = (\pmb{\eta} + \pmb{1})\cdot\pmb{\omega_1}$? $\pmb{\eta}$ - order type of $\mathbb{Q}$. $\pmb{1}$ - order type of a singleton set. $\pmb{\omega_0}$ - order type of $\mathbb{N}$. $\pmb{\omega_1}$ - order type of the first uncountable ordinal. It is easy to see that $\pmb{\eta}\cdot\pmb{\omega_0} = (\pmb{\eta} + \pmb{1})\cdot\pmb{\omega_0}$, in fact, both sides are $\pmb{\eta}$. Question: Is $\pmb{\eta}\cdot\pmb{\omega_1} = (\pmb{\eta} + \pmb{1})\cdot\pmb{\omega_1}$? AI: Here is a proposed solution; I have not verified all the details, but I believe it should work. Let $\mathbb Q^\ast$ be the rational numbers plus an endpoint, and let $A,B$ be a partition of this set into intervals such that $A$ has order type $\eta$. In $\mathbb Q$ fix some partition into two parts $X,Y$ such that both are intervals and $X$ is of order type $\eta+1$. For $\alpha<\omega_1$ we write $A_\alpha,B_\alpha,X_\alpha,Y_\alpha$ for the corresponding parts in the $\alpha$-th copies of $\mathbb Q,\mathbb Q^\ast$. Now we define by induction: For $\alpha=0$ simply send $A_0+B_0$ into $X_0$, and $A_1$ into $Y_0$. If $\alpha$ is a limit ordinal, do the same. Namely $A_\alpha+B_\alpha$ into $X_\alpha$ and $A_{\alpha+1}$ into $Y_\alpha$. If $\alpha=\beta+1$, send $B_\alpha$ into $X_\alpha$ and $A_{\alpha+1}$ into $Y_\alpha$. It is clear that arriving at any limit ordinal $\alpha$ we have an isomorphism of $(\eta+1)\cdot\alpha$ into $\eta\cdot\alpha$, so the step taken at the limit ordinal itself is well-defined (we do not need to worry about embedding $A_\alpha$ in a prior step). It is clearly an order isomorphism, and it is a bijection for obvious reasons too.
H: Proving $\int_{1}^{\infty}\frac{\sin x}{\left(\log x\right)^{\frac{1}{2}}}dx$ converges Prove that the following improper integral converges $$\int_{1}^{\infty}\frac{\sin x}{\left(\log x\right)^{\frac{1}{2}}}dx.$$ I see that you can show this using Dirichlet's Convergence Test, but how would you show it not using this test? AI: You have two issues to address. First, is $\int_1^{1.1}\frac{\sin x}{\sqrt{\log x}}\,dx$ convergent? Second, is $\int_{1.1}^{\infty}\frac{\sin x}{\sqrt{\log x}}\,dx$ convergent? The first integral blows up at its left end point. But we can show that for some constant $C$, $\frac{\sin x}{\sqrt{\log x}}<\frac{C}{\sqrt{x-1}}$ for $x$ in $(1,1.1)$. Then by direct comparison, this piece is convergent. The numerator $\sin(x)$ is clearly positive and less than $1$. So we'd need to show that $\log x>k(x-1)$ for some constant $k$, for all $x\in(1,1.1)$. You can do this in the standard way by noting both are equal at $x=1$ and comparing derivatives. The second integral breaks up into a sequence of alternating integrals $$\int_{1.1}^{\pi}\frac{\sin x}{\sqrt{\log x}}\,dx+\sum_{n=1}^{\infty}\int_{n\pi}^{n\pi+\pi}\frac{\sin x}{\sqrt{\log x}}\,dx$$ By the alternating test for integrals, we need to check that $\lim_{n\to\infty}\left|\int_{n\pi}^{n\pi+\pi}\frac{\sin x}{\sqrt{\log x}}\,dx\right|=0$. This is true, since $$\begin{align} \left|\int_{n\pi}^{n\pi+\pi}\frac{\sin x}{\sqrt{\log x}}\,dx\right|&<\frac{1}{\sqrt{\log(n\pi)}}\left|\int_{n\pi}^{n\pi+\pi}\sin(x)\,dx\right|\\ &=\frac{2}{\sqrt{\log(n\pi)}}\end{align}$$ Joriki reminded me that the alternating series test also requires that the absolute values of the integrals be decreasing. (Otherwise you could have, say, the positive integrals going down like $1/n$ with the negative ones going down like $1/2^n$, and the whole thing diverges.) So we should check that $$\begin{align} \left|\int_{n\pi}^{n\pi+\pi}\frac{\sin x}{\sqrt{\log x}}\,dx\right|&>\left|\int_{n\pi+\pi}^{n\pi+2\pi}\frac{\sin x}{\sqrt{\log x}}\,dx\right|\end{align}$$ which is true since the numerator function just repeats another period with the same values, while the denominator function becomes larger.
H: Research done by high-school students I'm giving a talk soon to a group of high-school students about open problems in mathematics that high-school students could understand. To inspire them, I would like to give them examples of high-school students who have made original contributions in mathematics. One example I have is the 11th-grader from Hawai'i named Kang Ying Liu who in 2010 "discover[ed] nine new geometric formulas for describing triangle inequalities." Do you have any other examples of high-school students who have made original contributions in mathematics? AI: I'm not sure this is really what you're looking for, but Britney Gallivan, then $16$, disproved the famous claim that it was impossible to fold a piece of paper in half ten times, by folding one twelve times. She also came up with a model that correctly explained the limit, and predicted how big the original paper would have to be to be folded $n$ times. Archive of page about Gallivan from the Pomona Historical Society
H: If the covariance matrix is $\Sigma$, the covariance after projecting in $u$ is $u^T \Sigma u$. Why? I read in this answer that: If the covariance matrix is $\Sigma$, the covariance after projecting in $u$ is $u^T \Sigma u$. I fail to see this: how do I get the covariance of a set of points after projecting those points along the direction $u$, as a function of $u$ and $\Sigma$? AI: The covariance matrix for a vector quantity $x$ is $\langle xx^\top\rangle-\langle x\rangle\langle x^\top\rangle$. The covariance for the projection $u^\top x$ is $$\langle u^\top xx^\top u\rangle-\langle u^\top x\rangle\langle x^\top u\rangle=u^\top\langle xx^\top\rangle u-u^\top\langle x\rangle\langle x^\top\rangle u=u^\top\left(\langle xx^\top\rangle-\langle x\rangle\langle x^\top\rangle\right)u\;.$$ The point is basically that you can pull $u$ out of all the expectation values because it's a constant.
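A quick numerical sanity check (an illustration in Python with numpy, not part of the original answer): sample correlated points, project them on a unit direction $u$, and compare the scalar variance of the projections with $u^T\Sigma u$.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=(10000, 3)) @ np.array([[2.0, 0, 0], [1, 1, 0], [0, 0, 3]])
    Sigma = np.cov(x, rowvar=False)     # 3x3 sample covariance of the points
    u = np.array([1.0, 2.0, -1.0])
    u /= np.linalg.norm(u)              # unit direction to project onto

    proj = x @ u                        # the scalars u^T x
    print(np.var(proj, ddof=1))         # sample variance of the projections
    print(u @ Sigma @ u)                # u^T Sigma u -- matches up to sampling noise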
H: Complex equation in maxima I relied on this tutorial. After issuing a command with the "solve" function: %i2 solve((a-b-sqrt(-c^2+2*c*y-y^2+r^2))^2+(d-y)^2=2*r^2*(1-cos(e)),y); The output is: Why is there an unknown quantity "y" on the right side? P.S. There's no "Maxima" tag, what a pity! However, I was redirected here by a stackoverflow moderator, so I assume it is not offtopic. Maxima is a computer algebra system based on a 1982 version of Macsyma. It is written in Common Lisp and runs on all POSIX platforms such as Mac OS X, Unix, BSD, and GNU/Linux as well as under Microsoft Windows. It is free software released under the terms of the GNU General Public License. source: http://en.wikipedia.org/wiki/Maxima_%28software%29 AI: The general solve command in Macsyma has limited capabilities for dealing with algebraic functions. You can work around this by using the rational function package as in the code below. There I define w to be the sqrt, then solve for w, square, then solve for y. (algebraic:true, tellrat( w^2 = -c^2+2*c*y-y^2+r^2 )); solve(rat(solve(rat((a-b-w)^2+(d-y)^2 = 2*r^2*(1-cos(e))),w)^2),y)
H: How does this game work? (Number game: subtract prime) Problem Alice and Bob play the following game. They choose a number N to play with. The rules are as follows: 1. Bob plays first and the two players alternate. 2. In his/her turn, a player can subtract from $N$ any prime number less than $N$ or the number 1. The result thus obtained is the new $N$. 3. The person who cannot make a move in his/her turn loses the game. Assuming both play optimally, who wins the game? The answer is if $N \equiv 1 \pmod{4}$, then "Alice" will win, otherwise Bob. But I couldn't prove it mathematically correct. The way I obtained this answer is just brute force for some particular $N$. I noticed several rules but I just can't find a way to deduce from there. Here is what I have: If $N = P + 1$, where $P$ is prime, then whoever takes this turn will win. If $N = 5$, then whoever takes this turn will lose. The biggest problem I'm facing is primes because there is no general rule to generate one. I wonder if anyone could shed some light on this problem? Any suggestion would be greatly appreciated. AI: It sounds like the key point is that 1, 2, and 3 are always legal subtractions (once they are less than $N$), but nothing equivalent to $0$ mod 4 ever is, since 1 and the primes all avoid that residue class. (In other words, the primes are a distraction.) Thus, if $N \not\equiv 1$ (mod 4), Bob can subtract either $1$, $2$, or $3$ from $N$ so that $N' \equiv 1$ (mod 4), where $N'$ is the new $N$. However, if $N \equiv 1$ (mod 4), then Bob is forced to make $N' \not\equiv 1$, and Alice can reply by making the next number equivalent to 1 (mod 4). (I'm assuming that the endgame is such that if you start with $N = 1$, you lose.)
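A short dynamic-programming check of this pattern (an illustration, assuming Python with sympy installed, and using the endgame convention stated in the answer: $N=1$ is a loss for the player to move):

    from sympy import primerange

    LIMIT = 200
    win = [False] * (LIMIT + 1)   # win[N]: the player to move at N can force a win
    for N in range(2, LIMIT + 1):
        moves = [1] + list(primerange(2, N))      # subtract 1 or any prime p < N
        win[N] = any(not win[N - m] for m in moves)

    print(all(win[N] == (N % 4 != 1) for N in range(1, LIMIT + 1)))   # True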
H: Finding the divisors of the number $p^3q^6$ My text says that $p^3q^6$ has 28 divisors. Could anyone please explain to me how they got 28 here? Edit: $p$ and $q$ are distinct prime numbers Sorry for the late addition.. AI: A divisor can contain 0-3 factors of $p$, so there are 4 ways for that to occur. It can contain 0-6 factors of $q$, so there are 7 ways for that to occur. The two choices are independent, so multiply: $4 \times 7 = 28$ divisors.
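For a concrete instance (a Python illustration, with the hypothetical choice $p=2$, $q=3$), counting divisors directly confirms the 28:

    n = 2**3 * 3**6           # p = 2, q = 3 as one concrete choice
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    print(len(divisors))      # 28 = (3 + 1) * (6 + 1)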
H: Farey sequences for polynomials? Does a notion of Farey sequence (or something equivalent) exist for polynomials over finite fields? AI: The terms in the Farey sequence exhaust the rationals, which form a field. The polynomials over a field don't form a field. The terms in a Farey sequence are listed in increasing order. The polynomials over a field don't have a natural order. So it's hard to think what properties you are expecting such a sequence to have. I suppose you could do this. For the field of $p$ elements, the "Farey sequence" of order $n$ could be all the polynomials of degree at most $n$, ordered by increasing value when evaluated at $1/p$. The coefficients would be restricted to the range 0 to $p-1$. E.g., there are 27 polynomials of degree at most 2 over the field of 3 elements: $0,1,2,x,x+1,x+2,2x,2x+1,2x+2,x^2,x^2+1,x^2+2,x^2+x,x^2+x+1,x^2+x+2,x^2+2x,x^2+2x+1,x^2+2x+2, 2x^2,2x^2+1,2x^2+2,2x^2+x,2x^2+x+1,2x^2+x+2,2x^2+2x,2x^2+2x+1,2x^2+2x+2.$ You can evaluate each of these at $x=1/3$, getting 27 different numbers, which you could then put in increasing order, and then you could order the polynomials accordingly. Doesn't look very exciting. EDIT: I'm surprised to find that there is a paper by William Webb, titled The Farey series of polynomials over a finite field. The citation is Elem. Math. 41 (1986), no. 1, 6–11, MR0880238 (88j:11086). The review begins, Denote by $R$ the ring of polynomials in $X$ with coefficients in the field of $q$ elements. The Farey series of order $n$ for $R$ is denoted $F_n$ and defined by $$F_n=\{{P/Q:P,Q\in R,\deg P\lt\deg Q\le n,(P,Q)=1,Q{\rm\ monic}\}}$$ The author develops the elementary theory of $F_n$ along the lines of that of the classical Farey series. What I find most surprising is that I wrote that review.
H: Probability of Survival Let $p(x)_{t}$ be the probability that a person aged $x$ survives until at least age $x+t$. Suppose we are given the following: $p(x)_{1} = 0.99$ $p(x+1)_{1} = 0.985$ $p(x+1)_{3} = 0.95$ $q(x+3)_1 = 0.02$ Note that $q(x)_{t} = 1-p(x)_{t}$. What is $p(x+1)_{2}$? So we want to find the probability that a person aged $x+1$ survives until at least age $x+3$. So $$p(x+1)_{2} = p(x+1)_{1} \cdot p(x+2)_{1}$$ But we don't know those values. That is, we don't know $ p(x+2)_{1}$. AI: $p(x+1)_3 = p(x+1)_1 p(x+2)_1 p(x+3)_1$, so $p(x+1)_1 p(x+2)_1 = \frac{p(x+1)_3}{p(x+3)_1} = \frac{0.95}{0.98} \approx 0.97$.
H: Are there "one way" integrals? If we suppose that we can start with any function we like, can we work "backwards" and differentiate the function to create an integral that is hard to solve? To define the question better, let's say we start with a function of our choosing, $f(x)$. We can then differentiate the function with respect to $x$ do get $g(x)$: $$g(x) = f'(x)$$ This, in turn, implies, under appropriate conditions, that the integral of $g(x)$ is $f(x)$: $$\int_a^b { g(x) dx } = [f(x)]_a^b$$ I'm wondering what conditions are appropriate to allow one to easily get a $g(x)$ and $f(x)$ that assure that $f(x)$ can't be easily found from $g(x)$. SUMMARY OF THE QUESTION Can we get a function, $g(x)$, that is hard to integrate, yet we know the solution to? It's important that no one else should be able to find the solution, $f(x)$, given only $g(x)$. Please help! POSSIBLE EXAMPLE This question/integral seems like it has some potential. DEFINITION OF HARDNESS The solution to the definite integral can be returned with the most $n$ significant digits correct. Then it is hard to do this if the time it takes is an exponential number of this time. In other words, if we get the first $n$ digits correct, it would take roughly $O(e^n)$ seconds to do it. AI: I interpret your question as follows: Is there any differentiable function $f$ and pair of real numbers $a,b$ such that computing the integral $\int_a^b f'(x)dx$ to $n$ bits of precision given $a,b,f'$ is substantially harder than computing $f(b)-f(a)$ to the same precision? The answer to this is no, in the sense that if $f(b)-f(a)$ can be computed in $O(h(n))$ then the integral can be computed in $O(h(n))$ as well, assuming $\lim\limits_{n\to\infty} h(n)=\infty$. One way to do this would be to simply enumerate all functions with finite-length definitions and differentiate them until one is found with derivative $f'$. One might object that it is impossible to determine whether two strings of symbols produce the same function, but there are only countably many algorithms for symbolic differentiation. Any algorithm used to differentiate $f$ must be a provably correct implementation of differentiation, and one can enumerate these by enumerating the list of all algorithms and of all proofs using the standard pairing function and checking each pair (algorithm,proof) to see if the proof proves the correctness of the algorithm. We can thus enumerate all pairs (function, provably correct differentiation algorithm). Thus we get the function $f$. This is obviously done in constant time w.r.t. $n$, and so if we can compute $f(b)-f(a)$ in $O(h(n))$ we can compute the integral in $O(h(n))+O(1)=O(h(n))$.
H: What is the difference between normal and perpendicular? What is the difference when a line is said to be normal to another and a line is said to be perpendicular to another? AI: There are different kinds of contexts in which the term normal is used in mathematics. You often use perpendicular in two- or three-dimensional geometry, and in higher-dimensional spaces (e.g. infinite-dimensional) the term orthogonal is more common. On the other hand, in the context of vectors, normal usually also means that the vector is of unit length; however, this is not a must (but it is, for example, if you speak about an orthonormal basis). There is yet another related meaning in computer graphics, where the normal is the direction you would use to reflect the light. This can be a vector perpendicular to the rendered face, but for nice effects you use one perpendicular to the original surface, not its approximation. Also, there is the bump mapping technique that lets you change the normal vectors and achieve the look of wrinkles and bumps. Finally, e.g. in topology or group theory the term normal means something completely different; however, I will leave those and others out as I suspect this was not the scope of your question. Hope this helps ;-)
H: Could someone explain this proof to me? Probability proof If event $A$ and $B$ are events such that $P(A)$ and $P(B)$ are either $0$ or $1$ and $A$ is subset of $B$, then $A$ and $B$ are dependent events. Proof: Since $A\subset B$, we have $A\cap B=A$ and so $P(A\cap B)=P(A)$. $\therefore$ $P(A\cap B)-P(A)P(B)=P(A)-P(A)P(B)=P(A)[1-P(B)]$ Since $P(A)>0$ and $P(B)<1$, (original image) Where did that last step come from, involving $P(A \cap B) - P(A)P(B)$? How did the solution come up with that? AI: The result has clearly been misstated or miscopied. The proof is correct for the following result: If event $A$ and $B$ are events such that $P(A)$ and $P(B)$ are neither $0$ nor $1$, and $A$ is subset of $B$, then $A$ and $B$ are dependent events. The reason for looking at $P(A\cap B)-P(A)P(B)$ is that by definition, $A$ and $B$ are independent if and only if $P(A\cap B)=P(A)P(B)$, i.e., if and only if $P(A\cap B)-P(A)P(B)=0$. But the hypothesis that $A\subseteq B$ implies that $P(A)=P(A\cap B)$, so $$P(A\cap B)-P(A)P(B)=P(A)-P(A)P(B)=P(A)\big(1-P(B)\big)\;,$$ which is $0$ if and only if either $P(A)=0$ or $1-P(B)=0$, i.e., if and only if either $P(A)=0$ or $P(B)=1$. These possibilities are ruled out by the corrected version of the hypothesis, so it must be the case that $P(A\cap B)-P(A)P(B)\ne 0$, $P(A\cap B)\ne P(A)P(B)$, and hence by definition $A$ and $B$ are not independent (which of course means that they are dependent). The answer by copper.hat shows why the stated version is wrong.
H: Is there a name for the "most square" factorization of an integer? For the definition that follows, I'm curious to know if there's a known name (to enable a literature search relating to algorithms). Definition. Given an integer $n$, the maximally square factorization consists of the integers $\{a,b\}$ such that $n=ab$ and the difference $|a-b|$ is minimized. Formally: $$\{a,b\} = \arg \min_{\{x,y : x|n, y|n, xy=n\}} |x-y|.$$ Examples: For $n=16$, we get $\{a,b\}=\{4,4\}$. For $n=1300$, we get $\{a,b\}=\{26,50\}$. Questions: 1) Does this concept have a name? Any closely related concepts are also of interest. Number theory is not my strength. 2) Is there some clever way for finding $\{a,b\}$ that doesn't require fully factorizing $n$ and taking the two integers closest to $\sqrt{n}$? AI: Erdos conjectured that almost all integers have a pair $d,d'$ of divisors satisfying $d\lt d'\le2d$. ["Almost all" means that if you take the number of integers less than $n$ having this property, and divide by $n$, and then let $n$ go to infinity, the quotient will approach 1.] This was proved, in a strengthened form, in Maier and Tenenbaum, On the set of divisors of an integer, Invent. Math. 76 (1984), no. 1, 121–128, MR0739628 (86b:11057). Many other papers have taken the Maier-Tenenbaum paper as a starting point for further investigations. There's a whole book, Hall and Tenenbaum, Divisors, Cambridge Tracts in Mathematics, 90, Cambridge University Press, Cambridge, 1988, MR0964687 (90a:11107).
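For question 2, the straightforward baseline (an illustration in Python; this is exactly the trial-division-up-to-$\sqrt n$ approach that the question hopes to improve on, with worst case $O(\sqrt n)$ divisions):

    from math import isqrt

    def most_square(n):
        # scan downward from floor(sqrt(n)); the first divisor found gives the pair
        for a in range(isqrt(n), 0, -1):
            if n % a == 0:
                return a, n // a

    print(most_square(16))    # (4, 4)
    print(most_square(1300))  # (26, 50)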
H: What is the possible value of $x$ in the following case? $$2^{y}+2(3^{y}) > 3(4^{y})$$ and $$y=3x^2+2x-2$$ Which of the following is a possible value of $x$? $A -1.5 $ $B-2.5 $ $C -0.5 $ $D+0.7 $ $E+1.2$ $********$ I could just conclude from the $1st$ inequality that $$y∊(-∞, 0)$$ Should I put values of $y$ e.g. $-1, -2$ and then check or is there some other way? I couldn't approach further. Please help. AI: The roots of the quadratic are $\frac{-2\pm\sqrt{28}}{6}$. So $y$ is negative only for $\frac{-2-\sqrt{28}}{6}\lt x \lt \frac{-2+\sqrt{28}}{6}$. The smaller root is about $-1.2$, and the larger is about $0.5$. Combine this with your observation that $y<0$, and note that only one of the given values of $x$ lies in the right range, namely $x=-0.5$ (option C).
H: Is every set a subset? Is every set a subset of a larger set? In other words, for an arbitrary set S, can one always construct a set S' such that S is a proper subset of S'? Is this question even meaningful? AI: Yes, one can: $S\cup\{S\}$ is a proper superset of $S$, since $S\in S\cup\{S\}$, but $S\notin S$. Thus, $S\subseteq S\cup\{S\}$, but $S\ne S\cup\{S\}$.
H: Definite integral with a complex number in Euler form Well... I spent an hour trying to figure out how to go from lhs to rhs: $$\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\left( \int _{ k }^{ +\infty } e^{ -iux }dx \right) du=\frac { 1 }{ 2 } +\frac { 1 }{ \pi } \int _{ 0 }^{ +\infty } \Re \left[ \frac { \phi _{ T }(u)e^{ -iuk } }{ iu } \right] du$$ The $\phi$ being a characteristic function. What I get is: $$\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac { 1 }{ u } \left[ \sin(ux) \right] _{ k }^{ +\infty }du+\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac { i }{ u } \left[ \cos(ux) \right] _{ k }^{ +\infty }du$$ which seems to me to go nowhere, since I don't see a way to evaluate the improper integrals with the cos and sin... Would appreciate a hint, thanks. AI: For the internal integral you get: $$ \int _{ k }^{ +\infty } e^{ -iux }dx=\biggr[\frac{e^{-iux}}{-iu}\biggr]_{k}^{\infty}, $$ whatever that means... So you have $$ \begin{eqnarray} \frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\left( \int _{ k }^{ +\infty } e^{ -iux }dx \right) du&=&\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\biggr[\frac{e^{-iux}}{-iu}\biggr]_{k}^{\infty} du\\ &=&\lim_{z\to \infty}\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\biggr[\frac{e^{-iuz}}{-iu}-\frac{e^{-iuk}}{-iu}\biggr] du\\ &=&\lim_{z\to \infty}\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac{e^{-iuz}}{-iu} du\\ &&-\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac{e^{-iuk}}{-iu} du \end{eqnarray} $$ Let's assume $ \lim_{z\to \infty}\frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac{e^{-iuz}}{-iu} du =\frac12 $ for the moment. Now split the remaining integral into 2 parts $\int_{-\infty}^0 \dots du$ and $\int_0^{\infty} \dots du$ to get: $$ \frac { 1 }{ 2\pi } \int _{ -\infty }^{ +\infty } \phi _{ T }(u)\frac{e^{-iuk}}{-iu} du= \frac { 1 }{ 2\pi } \int _{ -\infty }^{ 0 } \phi _{ T }(u)\frac{e^{-iuk}}{-iu} du+ \frac { 1 }{ 2\pi } \int _{ 0 }^{ +\infty } \phi _{ T }(u)\frac{e^{-iuk}}{-iu} du. $$ Now substitute $u=-u'$, use $\phi_T(u)=\phi_T(-u)^*$ to get $$ \frac { 1 }{ 2\pi } \int _{ -\infty }^{ 0 } \phi _{ T }(u)\frac{e^{-iuk}}{-iu} du= \frac { 1 }{ 2\pi } \int _{ 0 }^{ \infty } \phi _{ T }(-u')\frac{e^{iu'k}}{iu'} du'= \frac { 1 }{ 2\pi } \int _{ 0 }^{ \infty } \left(\phi _{ T }(u')\frac{e^{-iu'k}}{-iu'}\right)^* du' $$ Write $u$ for $u'$ again since it doesn't matter and combine the integrands: $$\frac { 1 }{ 2\pi } \int _{ 0 }^{ \infty } \left(\phi _{ T }(u)\frac{e^{-iuk}}{-iu}\right)^* + \phi _{ T }(u)\frac{e^{-iuk}}{-iu}\, du= \frac { 1 }{ 2\pi } \int _{ 0 }^{ +\infty }2 \Re \left[ \frac { \phi _{ T }(u)e^{ -iuk } }{ -iu } \right] du$$ Remember that this whole term carries a minus sign from the earlier step; since $-\Re \left[ \frac { \phi _{ T }(u)e^{ -iuk } }{ -iu } \right]=\Re \left[ \frac { \phi _{ T }(u)e^{ -iuk } }{ iu } \right]$, it contributes $+\frac { 1 }{ \pi } \int _{ 0 }^{ +\infty } \Re \left[ \frac { \phi _{ T }(u)e^{ -iuk } }{ iu } \right] du$, which together with the assumed limit $\frac12$ gives exactly the right-hand side.
H: 16 digit numbers divisible by 17 I want to know about 16-digit numbers that are divisible by $17$ such that, when the 16-digit number is broken into groups of $4$ digits, each group of four is also divisible by $17$, together with a check to verify their occurrence. Emma. AI: Use a divisibility rule: Subtract 5 times the last digit from the rest, e.g. $221: 22 − 1\times 5 = 17.$
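Note that if each 4-digit group is divisible by 17, the whole number automatically is, since it is a sum of terms of the form $(\text{group})\cdot 10^{4k}$. A small checker for both conditions (an illustration in Python, not part of the original answer):

    def check(n):
        s = str(n).zfill(16)
        groups = [int(s[i:i + 4]) for i in range(0, 16, 4)]
        return n % 17 == 0 and all(g % 17 == 0 for g in groups)

    sample = int("1020" * 4)      # 1020 = 17 * 60, repeated four times
    print(check(sample))          # True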
H: Need to find the recurrence equation for coloring a 1 by n chessboard So the question asks me to find the number of ways H[n] to color a 1 by n chessboard with 3 colors - red, blue and white such that the number of red squares is even and number of blue squares is at least one. I am doing it in this way - 1.If the first square is white then the remaining n-1 squares can be colored in H[n-1] ways. 2.If the first square is red then another red will be needed in the n-1 remaining squares and the rest n-2 can be colored in H[n-2] ways. (i.e (n-1)*H[n-2]) 3.And now is the problem with blue. If I put blue in the first square and say that the rest n-1 squares can be colored in H[n-1] ways that will be wrong as I already have a blue and may not need any more(while H[n-1] requires one blue at least). I thought of adding H'[n-1] to H[n] = H[n-1] + (n-1)*H[n-2] which gives H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1] where H'[n] is the number of ways to fill n squares with no blue squares(so H'[n] = (n-1)*H'[n-2] + H'[n-1]). So now I'm kind of really confused how to solve such an equation -> H[n] = H[n-1] + (n-1)*H[n-2] + H'[n-1]. (I am specifically asked not to use exponential generating function to solve problem). AI: I wouldn’t actually use a recurrence to solve this problem. Let $c_n$ be the number of ways of coloring the $1\times n$ board with an even number of red cells, and let $b_n$ be the number of these that have no blue cells. Then $h_n=c_n-b_n$, where $h_n$ is the number of colorings with an even number of red cells and at least one blue cell, $$c_n=\sum_k\binom{n}{2k}2^{n-2k}\;,$$ and $$b_n=\sum_k\binom{n}{2k}\;.$$ Then $$\begin{align*} c_{n+1}&=\sum_k\binom{n+1}{2k}2^{n+1-2k}\\ &=\sum_k\left(\binom{n}{2k}+\binom{n}{2k-1}\right)2^{n+1-2k}\\ &=2c_n+\sum_k\binom{n}{2k-1}2^{n-(2k-1)}\\ &=2c_n+\sum_k\binom{n}k2^{n-k}-\sum_k\binom{n}{2k}2^{n-2k}\\ &=2c_n+\sum_k\binom{n}k1^k2^{n-k}-c_n\\ &=c_n+3^n \end{align*}$$ Clearly $c_0=1$, so $$c_n=1+\sum_{k=0}^{n-1}3^k\;,$$ which is easy to evaluate in closed form. And $b_n$ is just the number of subsets of $\{1,\dots,n\}$ of even cardinality, so it’s also easy to evaluate in closed form. However, if you want a recurrence, you can probably get one without too much trouble by working backwards from this solution.
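As a sanity check on the closed forms implied by the answer ($c_n=(3^n+1)/2$ from $c_{n+1}=c_n+3^n$ with $c_0=1$, and $b_n=2^{n-1}$ for $n\ge1$, hence $h_n=(3^n+1)/2-2^{n-1}$), here is a brute-force enumeration (a Python illustration):

    from itertools import product

    def h_brute(n):
        # count colorings with an even number of reds and at least one blue
        return sum(1 for c in product("RBW", repeat=n)
                   if c.count("R") % 2 == 0 and "B" in c)

    for n in range(1, 9):
        print(n, h_brute(n), (3**n + 1) // 2 - 2**(n - 1))   # the columns agree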
H: Detail in definition of stochastic independence for families of events In a probability theory script, I read a definition of independence where I don't understand one detail (Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space) "A family of events $(A_i)_{i \in I}$, $A_i \in \mathcal{F}$ is called independent if $\{A_i, \Omega\}$ [*] with $i\in I$ is independent." (def. 7.2.1c) Just before there is definied: "Subsets $\mathcal{E}_i$, $i \in I$ of $\mathcal{F}$ are called independent if all finite combinations of them are independent." (which comes down to the usual formula $\mathbb{P}(A \cap B) = \mathbb{P}(A) \mathbb{P}(B)$) Why do I have to introduce the $\Omega$ in [*] ? (The source is this script: http://www.wias-berlin.de/people/koenig/www/WTSkript.pdf) AI: The usual definition of independence of sets is the following. Definition: The sets $(A_i)_{i\in I}$ is said to be independent if that for every $1\leq n\leq |I|$ and choice of indices $i_1,\ldots,i_n \in I $ we have that $$ P(A_{i_1}\cap\cdots\cap A_{i_n})=\prod_{k=1}^n P(A_{i_k}) \qquad (*). $$ If one were to check the definition in $(*)$, this would involve computing products with $2,3,\ldots, |I|$ terms. Therefore it is sometimes more convenient to introduce an equivalent characterization. The sets $(A_i)_{i\in I}$ are independent if and only if $$ P(B_1\cap\cdots \cap B_{N})=\prod_{k=1}^N P(B_k), \qquad (\text{here } N=|I|), $$ for every choice $B_1,\ldots,B_N$ where $B_i\in \{A_i,\Omega\}$. Note that this is again equivalent to saying that the sets $(\{A_i,\Omega\})_{i\in I}$ are independent.
H: How to find the inverse metrics? I know one can calculate the inverse of a metric tensor $g$ in coordinates as the inverse of its matrix $g_{ij}$. However, what I really liked about differential geometry is how one can actually avoid writing matrices; one just uses expressions like $$g = dx \otimes dx + dy \otimes dy + \frac{1}{2} \left( dx \otimes dy + dy \otimes dx \right)$$ Honestly, in the case of matrices I was never able to remember whether $i$ in $g_{ij}$ is a row or a column, though for a metric it doesn't matter since it is symmetric. The next best thing is that all the skills needed to do many of the transformations are in essence polynomial handling and algebraic differentiation. So I ask if there is a technique to invert the tensor above without dealing with the matrix explicitly? Just some fancy rewritings, substitutions, applications to certain vectors etc. P.S. I've just realized that the way I wrote $g$ here can be viewed as a sparse matrix representation, whereas the ordinary notation as a table is more like a dense representation. AI: The inverse metric is defined by the property that: $$ g\circ g^{-1} = \delta $$ where the Kronecker delta is defined to be the section of $T^*M\otimes TM$ that acts as the identity on $T^*M$: $$ \delta := \sum_{i = 1}^{n} \mathrm{d}x^i \otimes \partial_{x^i} $$ In other words, if there were a general way to compute the inverse metric without computing the inverse matrix, then the method could be applied to the case of a constant metric on a linear space (that is, a simple inner product space) to compute the inverse matrix of a matrix without, erh, computing the inverse matrix. (And I think you see the problem with this...) Now, in your special case your metric can be written as the following: $$ g = \frac12 \left( \mathrm{d}x+\mathrm{d}y\right)\otimes \left(\mathrm{d}x + \mathrm{d}y\right) + \frac12 \left( \mathrm{d}x\otimes\mathrm{d}x + \mathrm{d}y \otimes \mathrm{d}y\right) $$ the second half is the "identity matrix". So we see that your metric can be easily put into diagonal form $$ g = \frac34 \mathrm{d}(x + y)^2 + \frac14\mathrm{d}(x - y)^2 $$ for which inversion is easy. The OP asks: But every symmetric matrix is diagonalizable, right? Then one can try to rewrite the metric in such a simple form. Yes, every symmetric matrix is diagonalisable. But that is beside the point. When we speak of symmetric matrices being diagonalisable, we are usually speaking in the context of the spectral theorem, namely diagonalising an operator $A:V\to V$ from a vector space $V$ to itself. Diagonalisation is the process of choosing a basis of $V$ consisting of only eigenvectors of $A$. The statement "$A$ is symmetric" is interpreted as: "$A$ is symmetric with respect to some positive definite inner product on $V$". In this case diagonalisation of $A$ means writing $A$ as $$ A = \sum_{i = 1}^K \lambda_i e_i \otimes (e_i)^*~. $$ In the context of the metric tensor you are dealing with a "symmetric bilinear form" on the vector space $V$. It is a simple algebraic fact that if $B$ is a symmetric bilinear form on a vector space $V$, there exists a basis of $V$ such that $B$ is diagonal. In this context $B$ is written as $$ B = \sum_{i = 1}^K \nu_i (e_i)^* \otimes (e_i)^*~.$$ The algebraic content of the two statements is very, very different.
With that said: given an arbitrary (Riemannian) metric tensor field $$ g = \sum g_{ij}(x) \mathrm{d}x^i \otimes \mathrm{d}x^j $$ by the above fact it is true that you can find a family of vector fields $\vec{E_\mu} = \sum_i E_\mu^i \partial_i$ such that $$ g(E_\mu,E_\nu) = \delta_{\mu\nu} $$ However, the vector fields are constructed at each point $x$ by finding an orthogonal basis for $g$ at the point $x$, and there are two major problems: The vector fields need not be continuous (this, at least, in practice can be solved locally). The vector fields need not be integrable: it is a necessary condition that a family of coordinate vector fields $\partial_i$ be commutative: that is $[\partial_i,\partial_j] = 0$ where $[,]$ is the Lie bracket. (This is just the statement that coordinate partial derivatives must commute.) In general there is absolutely no reason that your chosen vector fields $E_\mu$, or some $x$-dependent multiples thereof, will satisfy this condition. Given a (pseudo)Riemannian metric, whether one can find a local coordinate system in which the metric can be diagonalised is a very classical problem. In two dimensions, smooth Riemannian and Lorentzian metrics can always be diagonalised: the former by isothermal (conformal) coordinates, the latter by null coordinates. It is also known that in three dimensions any Riemannian metric can be diagonalised locally. In four and higher dimensions, metrics which admit a coordinate system in which they are diagonal must satisfy certain algebraic conditions on the Weyl curvature tensor and its derivatives. For references on these facts and more, please see this paper of Paul Tod's. Even in the two dimensional case the question can be difficult: finding isothermal coordinates given an arbitrary smooth metric is a hard problem in general. Your case admits a simple solution because your metric is "constant" in the coordinates you gave, which automatically implies that it is flat, and hence admits a linear transformation of the coordinates making it isometric to the standard Euclidean space.
H: A simple property of the norm of an cyclotomic integer Let $l$ be an odd prime number and $\zeta$ be a primitive $l$-th root of unity in $\mathbb{C}$. Let $\mathbb{Q}(\zeta)$ be the cyclotomic field. Let $A$ be the ring of algebraic integers of $\mathbb{Q}(\zeta)$. Let $\alpha \in A$. Let $N(\alpha)$ be the norm of $\alpha$. My question: How can we prove that $N(\alpha) \equiv 0$ or $\equiv 1$ (mod $l$)? AI: Observe that $\zeta^n-\zeta\in (1-\zeta)$ if $l$ does not divide $n$. This implies that all the conjugates of $\alpha$ are congruent mod $(1-\zeta)$, hence $N\alpha\equiv \alpha^{l-1}$ mod $(1-\zeta)$. Since $A/(1-\zeta)\cong\mathbb{F}_l$, we have $N\alpha\equiv0$ or $\equiv1$ mod $l$.
H: Gauss' proof of the irreducibility of a cyclotomic polynomial Let $l$ be an odd prime number. Let $f(X) = 1 + X + ... + X^{l-1} \in \mathbb{Z}[X]$. Probably Gauss was the first man who proved that $f(X)$ is irreducible. I wonder how he proved it. AI: The first proof presented here is a proof by Gauss. The original is in Gauss' magnum opus Disquisitiones Arithmeticae.
H: Finding the values of $\cos \frac{n\pi}{2}$ and $\sin \frac{n\pi}{2}$. I know that the values of $\cos n\pi=(-1)^{n}$ and $\sin n\pi=0$. Now I want to know the general expressions for $\cos \frac{n\pi}{2}$ and $\sin \frac{n\pi}{2}$. AI: There are two cases: $n$ is even, write it as $2k$ and then you have $\cos(k\pi)$ and $\sin(k\pi)$ which you already know. $n$ is odd, write it as $2k+1$ and then you have $\cos(k\pi+\frac\pi2)$ and $\sin(k\pi+\frac\pi2)$. Recall that $\cos(x+\frac\pi2)=-\sin(x)$ and $\sin(x+\frac\pi2)=\cos(x)$, and deduce from the previous case that $\cos(k\pi+\frac\pi2)=-\sin(k\pi)=0$ and $\sin(k\pi+\frac\pi2)=\cos(k\pi)=(-1)^{k}$. In short, as $n$ runs through $0,1,2,3,\dots$ the values of $\cos\frac{n\pi}{2}$ cycle through $1,0,-1,0$ and those of $\sin\frac{n\pi}{2}$ cycle through $0,1,0,-1$, both with period $4$.
H: Mahalanobis Distance using Eigen-Values of the Covariance Matrix Given the formula of Mahalanobis Distance: $D^2_M = (\mathbf{x} - \mathbf{\mu})^T \mathbf{S}^{-1} (\mathbf{x} - \mathbf{\mu})$ If I simplify the above expression using Eigen-value decomposition (EVD) of the Covariance Matrix: $S = \mathbf{P} \Lambda \mathbf{P}^T$ Then, $D^2_M = (\mathbf{x} - \mathbf{\mu})^T \mathbf{P} \Lambda^{-1} \mathbf{P}^T (\mathbf{x} - \mathbf{\mu})$ Let the projections of $(\mathbf{x}-\mu)$ on all eigen-vectors present in $\mathbf{P}$ be $\mathbf{b}$; then: $\mathbf{b} = \mathbf{P}^T(\mathbf{x} - \mathbf{\mu})$ And, $D^2_M = \mathbf{b}^T \Lambda^{-1} \mathbf{b}$ $D^2_M = \sum_i{\frac{b^2_i}{\lambda_i}}$ The problem that I am facing right now is as follows: The covariance matrix $\mathbf{S}$ is calculated on a dataset in which the number of observations is less than the number of variables. This causes some zero-valued eigen-values after EVD of $\mathbf{S}$. In these cases the above simplified expression does not result in the same Mahalanobis Distance as the original expression, i.e.: $(\mathbf{x} - \mathbf{\mu})^T \mathbf{S}^{-1} (\mathbf{x} - \mathbf{\mu}) \neq \sum_i{\frac{b^2_i}{\lambda_i}}$ (for non-zero $\lambda_i$) My question is: Does the simplified expression still functionally represent the Mahalanobis Distance? P.S.: The motivation to use the simplified expression of the Mahalanobis Distance is to calculate its gradient w.r.t. $\mathbf{b}$. AI: As indicated in Erick's comment, your problem is not that the two calculations yield different results for a singular covariance matrix, but that $\mathbf S$ is singular (and hence not invertible) if some of the eigenvalues are zero, so that neither calculation is well-defined. This is a conceptual problem, not a computational one; the Mahalanobis distance is simply not well-defined in this case. This paper suggests what it calls a regularized Mahalanobis distance to deal with this problem.
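Two standard workarounds, sketched in Python with numpy (an illustration only: the Moore-Penrose pseudo-inverse keeps exactly the terms $b_i^2/\lambda_i$ with $\lambda_i>0$, and the ridge term is a generic regularization, not necessarily the scheme of the paper cited above):

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5, 10))       # 5 observations, 10 variables
    mu = X.mean(axis=0)
    S = np.cov(X, rowvar=False)        # 10x10 but rank at most 4: singular

    x = rng.normal(size=10)
    d2_pinv = (x - mu) @ np.linalg.pinv(S) @ (x - mu)   # keeps only lambda_i > 0

    eps = 1e-6                          # small ridge to make S invertible
    d2_ridge = (x - mu) @ np.linalg.inv(S + eps * np.eye(10)) @ (x - mu)
    print(d2_pinv, d2_ridge)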
H: How to get principal argument of complex number from complex plane? I am just starting to learn calculus and the concepts of radians. Something that is confusing me is how my textbook is getting the principal argument ($\arg z$) from the complex plane. i.e. for the complex number $-2 + 2i$, how does it get $\frac{3\pi}{4}$? (I get $\frac{\pi}{4}$). The formula is $\tan^{-1}(\frac{b}{a})$, and I am getting $\frac{\pi}{4}$ when I calculate $\tan^{-1}(\frac{2}{-2})$. When I draw it I see that the point is in quadrant 2. So how do you compute the correct value of the principal argument? AI: The principal value of $\tan^{-1}\theta$ is always between $-\frac{\pi}2$ and $\frac{\pi}2$. The principal value of $\arg z$, on the other hand, is always in the interval $(-\pi,\pi]$. Thus, for $z$ in the first quadrant it’s between $0$ and $\frac{\pi}2$; for $z$ in the second quadrant it’s between $\frac{\pi}2$ and $\pi$; for $z$ in the third quadrant it’s between $-\frac{\pi}2$ and $-\pi$; and for $z$ in the fourth quadrant it’s between $0$ and $-\frac{\pi}2$. This means that the $\tan^{-1}$ function gives you the correct angle only when $z$ is in the first and fourth quadrants. When $z$ is in the second quadrant, you have to find an angle between $\frac{\pi}2$ and $\pi$ that has the same tangent as the angle $\theta$ returned by the $\tan^{-1}$ function, which satisfies $-\frac{\pi}2<\theta\le 0$. The tangent function is periodic with period $\pi$, so $\tan(\theta+\pi)=\tan\theta$, and $$\frac{\pi}2=-\frac{\pi}2+\pi<\theta+\pi\le0+\pi=\pi\;,$$ so $\theta+\pi$ is indeed in the second quadrant. When $z$ is in the third quadrant, you have to find an angle between $-\pi$ and $-\frac{\pi}2$ that has the same tangent as the angle $\theta$ returned by the $\tan^{-1}$ function, which satisfies $0\le\theta<\frac{\pi}2$. This time subtracting $\pi$ does the trick: $\tan(\theta-\pi)=\tan\theta$, and $$-\pi=0-\pi<\theta-\pi<\frac{\pi}2-\pi=-\frac{\pi}2\;.$$ There’s just one slightly tricky bit. If $z$ is a negative real number, should you consider it to be in the second or in the third quadrant? The tangent is $0$, so the $\tan^{-1}$ function will return $0$. If you treat $z$ as being in the second quadrant, you’ll add $\pi$ and get a principal argument of $\pi$. If instead you treat $z$ as being in the third quadrant, you’ll subtract $\pi$ and get a principal argument of $-\pi$. But by definition the principal argument is in the half-open interval $(-\pi,\pi]$, which does not include $-\pi$; thus, you must take $z$ to be in the second quadrant and assign it the principal argument $\pi$.
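In computation, this whole case analysis is packaged into the two-argument arctangent: $\operatorname{atan2}(b,a)$ returns the principal argument of $a+bi$ directly. A small Python illustration (not part of the original answer):

    import cmath
    from math import atan2, pi

    z = complex(-2, 2)
    print(atan2(z.imag, z.real))   # 2.356194..., i.e. 3*pi/4
    print(cmath.phase(z))          # identical; phase does the quadrant handling for you
    print(3 * pi / 4)              # 2.356194...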
H: Ill posed PDE problem using Fourier transform. I got this question in my exam and I was not able to solve it. The hint I was given was to use the Fourier transform and solve it, but I couldn't. Can anyone help me? Thank you. AI: Let $u$ and $\bar u$ be the solutions with initial values $f$ and $\bar f$ respectively. Then $\bar u-u$ is the solution with initial value $n^{-1}\sin(n\,\pi\,x/L)$. Computing this solution (by separation of variables) we get $$ \bar u(x,t)-u(x,t)=\frac1n\,e^{\tfrac{k\,\pi^2\,n^2}{L^2}t}\sin\Bigl(\frac{n\,\pi\,x}{L}\Bigr). $$ Then $$ \sup_{0\le x\le L}|\bar f(x)-f(x)|\to0\quad\text{as}\quad n\to\infty, $$ while for any $T>0$ (and assuming $k>0$) $$ \sup_{0\le x\le L,0\le t\le T}|\bar u(x,t)-u(x,t)|\to\infty\quad\text{as}\quad n\to\infty $$
H: Inequality problems I have a Maths test tomorrow and was just doing my revision when I came across these two questions. Would anyone please give me a nudge in the right direction? $1)$ If $x$ is real and $$y=\frac{x^2+4x-17}{2(x-3)},$$ show that $|y-5|\geq2$ $2)$ If $a>0$, $b>0$, prove that $$\left(a+\frac1b\right)\left(2b+\frac1{2a}\right)\ge\frac92$$ AI: For the first problem: Write it as $$ \begin{eqnarray} \left(y-5\right)^2-4&=&\left(\frac{x^2+4x-17}{2(x-3)}-5\right)^2-4\\ &=&\left(\frac{x^2+4x-17-10x+30}{2(x-3)}\right)^2-4\\ &=&\left(\frac{x^2-6x+13}{2(x-3)}\right)^2-4\\ &=&\frac{(x^2-6x+13)^2 - 16(x-3)^2}{4(x-3)^2}\\ &=&\frac{169-156 x+62 x^2-12 x^3+x^4 - 16x^2+96x-144}{4(x-3)^2}\\ &=&\frac{x^4 -12x^3+46x^2 -60x+25}{4(x-3)^2}\\ &=&\frac{(x^2 -6x+5)^2}{4(x-3)^2}\\ &=&\frac{(x-5)^2(x-1)^2}{4(x-3)^2}\ge 0 \end{eqnarray} $$ So only squares show up, hence it's positive; this gives $(y-5)^2\ge4$, i.e. $|y-5|\ge2$. For the second problem, expand the product: $\left(a+\frac1b\right)\left(2b+\frac1{2a}\right)=2ab+\frac1{2ab}+\frac52$; by AM-GM in the form $t+\frac1t\ge2$ with $t=2ab>0$, the product is at least $2+\frac52=\frac92$.
H: Proof of the divisibility rule of 17. Rule: Subtract 5 times the last digit from the rest of the number; if the result is divisible by 17, then the number is also divisible by 17. How does this rule work? Please give the proof. AI: Write your number as $10a+b$, where $b$ is the last digit. Then because 10 and 17 are relatively prime, $$17\mid a-5b \iff 17\mid 10a-50b \iff 17\mid 10a+b$$ The last equivalence is because $10a+b-(10a-50b) = 51b$ is always a multiple of 17.
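Since the proof establishes an equivalence in both directions, it can be confirmed by brute force (a Python illustration, not part of the original answer):

    def rule(n):
        rest, last = divmod(n, 10)             # split off the last digit
        return (rest - 5 * last) % 17 == 0

    print(all((n % 17 == 0) == rule(n) for n in range(1, 100001)))   # True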
H: Prove that $\lim\limits_{(x,y) \to (0,0)} \frac{{x{y^2}}}{{{x^2} + {y^4}}} = 0$ $$\lim_{(x,y) \to (0,0)} \frac{{x{y^2}}}{{{x^2} + {y^4}}} = 0$$ Please, could anyone suggest a way to approach this? Thanks. AI: If indeed the limit were zero, then along every way we approach $(0,0)$ the limit would have to be $0$. However, if we take the path $x=y^2$ we have: $$\lim_{x\to 0}\frac{x^2}{x^2+x^2}=\frac12\neq 0$$ So the limit cannot be zero. Maybe it could be something else, but then it would have to be $\frac12$. Taking $y=0$, we have: $$\lim_{x\to 0}\frac0{x^2}=0\neq\frac12$$ Therefore the limit does not exist.
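A numeric illustration of the path dependence (not a proof, just a Python sketch): approach the origin along $x=y^2$ versus along $y=0$ and watch the values disagree.

    def f(x, y):
        return x * y**2 / (x**2 + y**4)

    for t in [0.1, 0.01, 0.001]:
        print(f(t**2, t), f(t, 0.0))   # along x = y^2: always 0.5; along y = 0: always 0.0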
H: Does the identity $\det(I+g^{-1})\det(I+g)=|\det(g-I)|^2$ hold for $g \in U(n)$? In a paper (corollary 1, p.14) the following identity is used: Let $g$ be a unitary matrix. Then: $$\det(I+g^{-1})\det(I+g)=|\det(g-I)|^2 \text{ for }g \in U(n)$$ Now my question is why this holds. I calculated: $$\det(I+g^{-1})\det(I+g)=\overline{\det(I+g^t)}\det(I+g)=\overline{\det(I+g)}\det(I+g)=|\det(I+g)|^2$$ Here the second equality holds since $I+g^t=(I+g)^t$ (as $I^t=I$, $I$ being of course the unit matrix) and a matrix has the same determinant as its transpose. But this is not the same as on the right side. (I also thought that maybe there was a typo on the left side where there should be minus signs. However, in the paper itself it is needed that there are plus signs.) Thanks for any hints. Edit: This equality was in the scope of an integral: $$\int_{U(n)}\prod_{l=1}^{k}\det(I+g^{-1})\prod_{l=1}^{k}\det(I+g)dg=\int_{U(n)}|\det(g-I)|^{2k}dg$$ With a change of variable it was solved with my calculation done above. See Giuseppe's answer. AI: Dear AndreasS, I have had a look at the paper. I think that the calculation $$\det(I+g^{-1})\det(I+g)=|\det(I+g)|^2 \text{ for }g \in U(n)$$ is correct. But in order to obtain the result stated in Corollary 1, you just need the change of variable $g\mapsto -g$ in the integral over $U(n)$, so that $$\int_{U(n)}|\det(g-I)|^{2k}dg=\int_{U(n)}|\det(I+g)|^{2k}dg.$$ Then in the paper you find how to factorize $|\det(I+g)|^{2k}.$ I hope that it helps.
H: 2x2 Matrices and Differences of Fractions Consider the difference of two arbitrary fractions, $\frac{a}{b}$ and $\frac{c}{d}$. $$\frac{a}{b}-\frac{c}{d}=\frac{ad-bc}{bd}$$ The numerator is the determinant of the 2x2 matrix $$ \left( \begin{array}{ccc} a & c \\ b & d \\ \end{array} \right)$$ Is there any reason for this? Are the two related in any way? AI: Think of the determinant as an expression for the (signed) area spanned by the vectors $(a,b)^T$ and $(c,d)^T$. If the ratios, which represent the directions of the vectors, are equal, i.e. $a/b=c/d$, then the vectors are parallel and the area is $0$; correspondingly, the difference of the fractions vanishes exactly when the determinant does.
H: generating function for $\sum\limits_k \frac{x^k}{k^2}$? Does anyone know the generating function $f$ of $$f(x) = \sum_{k=1}^{\infty} \frac{x^k}{k^2}$$ How can we get it? Thanks! AI: If you're asking for a function which has the Taylor series in your question when expanded around $x=0$, the answer is the dilogarithm $\operatorname{Li}_2(x)$.
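For reference, the dilogarithm also has the integral representation $\operatorname{Li}_2(x)=-\int_0^x\frac{\ln(1-t)}{t}\,dt$. A numeric cross-check (an illustration, assuming Python with mpmath installed):

    from mpmath import polylog, mpf, nsum, inf

    x = mpf("0.5")
    series = nsum(lambda k: x**k / k**2, [1, inf])   # sum_{k>=1} x^k / k^2
    print(series)          # 0.5822405...
    print(polylog(2, x))   # Li_2(1/2) = pi^2/12 - ln(2)^2/2, the same value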
H: Arithmetic sequence A sequence $(a_n)$ satisfies: $|a_{m+n}-a_{m}-a_{n}|<\frac{1}{m+n}$ for all positive integers $m,n$ Show that $(a_n)$ is an arithmetic progression. I tried to solve it this way: by induction $|a_{km}-ka_m|\le \frac 1m (\frac 12 +\frac 13 +...+\frac 1k)=\frac{H_k-1}{m}$. Therefore $|\frac{a_m}{m}-\frac{a_n}{n}|=\frac{1}{mn}|na_m-ma_n|<\frac{1}{mn}(\frac{H_n-1}{m}+\frac{H_m-1}{n})$. Therefore $\frac{a_n}{n}=const =a$ and $a_n=na.$ AI: I think it would be better to show more steps in the proof. You could lead up to the first by saying $|a_{2m}-2a_m|\le \frac 1{2m}$, $|a_{3m}-3a_m|\le |a_{3m}-a_{2m}-a_m|+|a_{2m}-2a_m| \le \frac 1{2m}+\frac 1{3m}$ to show what you are thinking. Similarly for the second, you can say $\frac 1{mn} |na_m-ma_n|\le\frac 1{mn}|na_m-mna_1|+\frac 1{mn} |mna_1-ma_n|\lt \ldots$ (by the triangle inequality). Finally you want to say you can take $n$ and $m$ very large and the right side goes to zero to get that $\frac {a_n}n=\frac {a_m}m$. The approach is fine, but you are making the reader work a bit.
H: Segment of $\mathbb{R}^2$? I don't understand this sentence; The segment $(a,b)$ can be regarded as both a subset of $\mathbb{R}^2$ and an open subset of $\mathbb{R}^1$. If $(a,b)$ is a subset of $\mathbb{R}^2$, it is not open, but it is an open subset of $\mathbb{R}^1$. What is 'segment $(a,b)$ in $\mathbb{R}^2$'? AI: That sentence is a good example of poor use of mathematical statements. I guess they meant the set $$\{(x,y_0)\in\Bbb R^2\;:\;a< x< b\,\,,\,y_0\in\Bbb R\,\,\text{fixed}\}$$ Taking $\,y_0=0\,$ above gives an interval on the x-axis in the plane...
H: Are the consequences of contradictions avoidable? In common natural languages, there are two interpretations of the word "or". Can you construct a formal logic based on the excluding notion of "or", such that from a contradictory ($A$ and $\mathbb{not}(A)$ is true simultaneously) it doesn't follow, that all formulas are true? That logic doesn't have to be very strong, but should still look like something which can be used to compute intuitive conclusion rules from some axioms. AI: Graham Priest's book In Contradiction: A Study of the Transconsistent is fascinating, very readable, and discusses this exact question in exhaustive detail.
H: False proof of $R$ Noetherian, $I$ irreducible hence $I$ prime Can you tell me what's wrong with my proof? Thanks. Claim: If $R$ is a Noetherian ring and $I$ is an irreducible ideal in $R$ then $I$ is prime Proof: Let $xy \in I$. We want to show that either $x\in I$ or $y \in I$. By contradiction assume neither $x\in I$ nor $y \in I$. Then $I + \langle x \rangle$ and $I + \langle y \rangle$ are two ideals properly containing $I$ and $(I + \langle x \rangle) \cap (I + \langle y \rangle) = I + \langle xy \rangle = I$, i.e. $I$ is reducible which is a contradiction to our assumption. AI: How do you know that: $$(I + \langle x \rangle) \cap (I + \langle y \rangle) = I + \langle xy \rangle$$? That seems like a big leap, especially since it is not true in general that $\langle x\rangle\cap\langle y\rangle = \langle xy \rangle$ (which is the case $I=\{0\}$.)
H: Prove that an odd degree polynomial must cross any bounded continuous function in $\mathbb R$ Let $p (x)$ be an odd degree polynomial in one variable with coefficients from the set $\newcommand{\R}{\mathbb R} \R$ of real numbers. Let $g : \R → \R$ be a bounded continuous function. Prove that there exists an $x_0 ∈ \R$ such that $p (x_0) = g (x_0)$. AI: Let $f\colon x\mapsto p(x)-g(x)$ and $M$ be such that $|g(x)|\leq M$ for all real numbers $x$. The degree of $p$ is odd, hence there are two cases. First case, $\lim_{x\to +\infty}p(x)=+\infty$: in this case $f(x)\geq p(x)-M$, hence $\lim_{x\to +\infty}f(x)=+\infty$. We have $\lim_{x\to -\infty}p(x)=-\infty$ by the condition on the degree, and $f(x)\leq p(x)+M$, so $\lim_{x\to -\infty}f(x)=-\infty$. In particular, the continuous map $f$ takes negative and positive values, hence vanishes (by the intermediate value theorem) at some point $x_0$. Second case, $\lim_{x\to +\infty}p(x)=-\infty$: apply the previous case to $-p$ and $-g$ (which is continuous and bounded).
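A concrete numeric illustration of the argument (a Python sketch with a hypothetical choice of $p$ and $g$, not part of the proof): take $p(x)=x^3$ and the bounded $g(x)=10\sin x$, bracket a sign change of $f=p-g$, and bisect.

    from math import sin

    def f(x):
        return x**3 - 10 * sin(x)      # f = p - g for p(x) = x^3, g(x) = 10*sin(x)

    lo, hi = 0.1, 10.0                 # f(0.1) < 0 < f(10), so a root lies between
    for _ in range(60):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:        # sign change stays in the left half
            hi = mid
        else:
            lo = mid
    print(lo)                          # ~2.065, where x0^3 = 10*sin(x0)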
H: Question about primary decomposition in Noetherian rings I have a question about the following proof: How do I get that $\mathfrak a$ is reducible? I thought perhaps one can argue that $\mathfrak a \cap \mathfrak a = \mathfrak a$ is a finite intersection hence $\mathfrak a$ can't be irreducible. But this feels stupid so it must be wrong. Thanks for your help. AI: If $\mathfrak a$ were irreducible, then it would trivially be a finite intersection of irreducible ideals (namely, $\mathfrak a = \mathfrak a$, or $\mathfrak a = \mathfrak a \cap \mathfrak a$ if you like your intersections to have more than one intersectee). Since $\mathfrak a$ is an element of the collection of all ideals which have no such representation, $\mathfrak a$ must be reducible, which further leads to the contradiction exposed in the proof. Sometimes things are as simple as they appear!
H: Composition of two polynomials How does one form the composition of two polynomials? According to this page: If $ P = (x^3 + x) $, $ Q = (x^2 + 1) $ then $ P\circ Q = P\circ (x^2 + 1) = (x^2 + 1)^3 + (x^2 + 1) = x^6 + 3 x^4 + 4 x^2 + 2 $ It seems that in $ (x^3 + x) $ each occurrence of $x$ is replaced by the whole of $Q$: the $x^3$ becomes $(\;\;\;)^3$ with $Q$ inside, giving $(x^2 + 1)^3$, and likewise for the $+\,x$ term. I'm just not sure if my interpretation is correct. I'm also aware that I may not be using the right terms to describe this, but it's what I have now. AI: Looks fine. Maybe it becomes even clearer when you write it like: $$ P\circ Q = (x^3 + x)\circ Q= Q^3+Q=(x^2 + 1)^3 + (x^2 + 1) = x^6 + 3 x^4 + 4 x^2 + 2 $$
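One can also check the expansion with a computer algebra system; a minimal sketch, assuming SymPy is installed (the variable names are mine):

```python
from sympy import symbols, expand

x = symbols('x')
P = x**3 + x
Q = x**2 + 1

# P o Q: substitute Q for every occurrence of x inside P
composition = expand(P.subs(x, Q))
print(composition)  # x**6 + 3*x**4 + 4*x**2 + 2
```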
H: How can I prove that $V$ is irreducible as a representation of $O(V)$? Let $V$ be a finite-dimensional vector space over a field $k$. Let $g(\cdot,\cdot)$ be a nondegenerate symmetric bilinear form on $V$. Let $O(V)$ be the subgroup of $GL(V)$ that preserves $g$. Then $V$ can be viewed as a representation of $O(V)$. I guess this representation is irreducible. Would anyone please give a proof or a counterexample? Thanks very much! I don't know what conditions should be imposed on the underlying field $k$. Maybe you need to assume that $k$ is algebraically closed with characteristic 0. AI: It is just a matter of showing that, given a nontrivial proper subspace $V' \subset V$, you can find an orthogonal transformation that does not stabilize $V'$. If $\operatorname{char} k \neq 2$ you could use orthogonal reflections.
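For reference, the orthogonal reflections mentioned above: for an anisotropic vector $u$ (one with $g(u,u)\neq 0$), the reflection $$s_u(v)=v-2\,\frac{g(v,u)}{g(u,u)}\,u$$ satisfies $g(s_u(v),s_u(w))=g(v,w)$, fixes the hyperplane $u^{\perp}$ pointwise, and sends $u$ to $-u$; the assumption $\operatorname{char} k\neq 2$ guarantees $u\neq -u$, so $s_u$ is not the identity.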
H: What do level curves signify? Suppose I have a function $z=f(x,y)$, say $z=\sqrt{x^2+y^2}$. By fixing some value for $z$ and varying $x$ and $y$ over all possibilities, we get a level curve of $z=f(x,y)$. By changing the value of $z$, one gets different level curves. For $z=\sqrt{x^2+y^2}$, the level curves are concentric circles. My question is: what do level curves signify? What conclusions can be drawn about the function $z=f(x,y)$ just from its level curves? AI: In addition to the applied examples from the physical sciences that appear in @drak's comment, here are some mathematical applications of level curves. You can infer all sorts of data from level curves, depending on your function. The spacing between level curves is a good way to estimate gradients: level curves that are close together represent areas of steeper descent/ascent. If the function is a bivariate probability distribution, level curves can give you an estimate of variance. If the function is a classification boundary in a data-mining application, level curves can define the classification boundary between inclusion and exclusion. Level curves can show you boundaries of constant flux in some types of flow problems. Level curves can show you areas where temperature, stress, or concentrations are within some interval. Finally, level curves are useful if your function is sufficiently complicated that it is difficult to visualize a 3-D rendering of the surface that it makes. I am community wiki'ing this answer, because certainly other folks might have more to add.
H: What exactly is the difference between a derivative and a total derivative? I am not too grounded in differentiation, but today I was posed a supposedly easy question: $w = f(x,y) = x^2 + y^2$ where $x = r\sin\theta$ and $y = r\cos\theta$, asking for $\partial w / \partial r$ and $\partial w / \partial \theta$. I simply solved the former using the trig identity $\sin^2 \theta + \cos^2 \theta = 1$, resulting in $\partial w / \partial r = 2r$. However, I was told that this solution could not be applied here because I should be solving for the total derivative. I could not find any good resource online that clearly explains the difference between an ordinary partial derivative and a total derivative, and why my solution was considered wrong. Could anyone explain the difference using a practical example? Thanks! AI: The key difference is that when you take a partial derivative, you operate under a sort of assumption that you hold one variable fixed while the other changes. When computing a total derivative, you allow changes in one variable to affect the other. So, for instance, if you have $f(x,y) = 2x+3y$, then when you compute the partial derivative $\frac{\partial f}{\partial x}$, you temporarily assume $y$ constant and treat it as such, yielding $\frac{\partial f}{\partial x} = 2 + \frac{\partial (3y)}{\partial x} = 2 + 0 = 2$. However, if $x=x(r,\theta)$ and $y=y(r,\theta)$, then the assumption that $y$ stays constant when $x$ changes is no longer valid. Since $x = x(r,\theta)$, if $x$ changes, then at least one of $r$ or $\theta$ changes. And if $r$ or $\theta$ changes, then $y$ changes. And if $y$ changes, then it has some effect on the derivative, and we can no longer treat it as constant. In your example, you are given $f(x,y) = x^2+y^2$, but what you really have is $f(x,y) = f(x(r,\theta),y(r,\theta))$. So if you compute $\frac{\partial f}{\partial x}$, you cannot assume that the change in $x$ has no effect on a change in $y$. What you need to compute instead are the total derivatives $\frac{\mathrm{d} f}{\mathrm{d}\theta}$ and $\frac{\mathrm{d} f}{\mathrm{d} r}$, the first of which is: $\frac{\mathrm{d} f}{\mathrm{d}\theta} = \frac{\partial f}{\partial \theta} + \frac{\partial f}{\partial x}\frac{\partial x}{\partial \theta} + \frac{\partial f}{\partial y}\frac{\partial y}{\partial \theta}$, where the first term accounts for any explicit dependence of $f$ on $\theta$ (zero in this example).
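Worked out for the question's example (with $x=r\sin\theta$, $y=r\cos\theta$, and $w=x^2+y^2$ having no explicit dependence on $r$):
$$\frac{\mathrm{d} w}{\mathrm{d} r}=\frac{\partial w}{\partial x}\frac{\partial x}{\partial r}+\frac{\partial w}{\partial y}\frac{\partial y}{\partial r}=2x\sin\theta+2y\cos\theta=2r\sin^2\theta+2r\cos^2\theta=2r,$$
so the shortcut via $\sin^2\theta+\cos^2\theta=1$ gives the same value here; the chain-rule computation is the one that generalizes when no such simplification is available.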
H: Evaluating $\int_{0}^{\infty}\frac{\arctan (a\,\sin^2x)}{x^2}dx$ This is the sequel of my previous question $$I(a)=\int_{0}^{\infty}\frac{\arctan (a\,\sin^2x)}{x^2}dx$$ I want to use differentiation under the integral sign with respect to parameter "a" but so far without success. Any hint? AI: Thanks for the nice question. The answer is $$ I(a) = \frac{\pi}{\sqrt{2}} \cdot \frac{a}{ \sqrt{1 + \sqrt{1+a^2}}} $$ The sketch of the proof: expand $\arctan$ in series, and integrate term-wise (can do this for small enough $a$, since the sine is bounded): $$ \arctan\left(a \sin^2(x)\right) = \sum_{n=0}^\infty \frac{(-1)^n a^{2n+1}}{2n+1} \sin^{4n+2}(x) $$ This gives $$ \int_0^\infty \frac{\sin^{4n+2}(x)}{x^2} \mathrm{d} x = \frac{1}{\binom{2n}{\tfrac{1}{2}}} = \frac{\sqrt{\pi}}{2} \cdot \frac{\Gamma(2n+\frac{1}{2})}{(2n)!} $$ The summation is easy, since the summand is a hypergeometric term: $$ I(a) = \frac{\sqrt{\pi}}{2} \sum_{n=0}^\infty \frac{(-1)^n a^{2n+1}}{2n+1} \frac{\Gamma(2n+\frac{1}{2})}{(2n)!} = \frac{\pi a}{2} \cdot {}_2F_1\left(\frac{1}{4}, \frac{3}{4}; \frac{3}{2}; -a^2\right) = \frac{\pi}{\sqrt{2}} \cdot \frac{a}{ \sqrt{1 + \sqrt{1+a^2}}} $$ Added: The hard part is to prove that $S_n = \int_0^\infty \frac{\sin^{4n+2}(x)}{x^2} \mathrm{d} x$ is a hypergeometric term as claimed above. This can be done using: $$\begin{eqnarray} \sin^{4n+2}(x) &=& \left(\frac{\mathrm{e}^{ix} - \mathrm{e}^{-i x}}{2i}\right)^{4n+2} = -\frac{1}{4} \cdot \frac{1}{16^n} \sum_{m=0}^{4n+2} \binom{4n+2}{m} (-1)^m \mathrm{e}^{i (4n+2-2m)x} \\ &\stackrel{\text{symmetry}}{=}& -\frac{1}{4} \cdot \frac{1}{16^n} \sum_{m=0}^{4n+2} \binom{4n+2}{m} (-1)^m \underbrace{\cos((4n+2-2m)x)}_{1-2 \sin^2((2n+1-m)x)} \\ &=& \frac{1}{2} \cdot \frac{1}{16^n} \sum_{m=0}^{4n+2} \binom{4n+2}{m} (-1)^m \sin^2((2n+1-m)x) \\ &\stackrel{\text{symmetry}}{=}& \frac{1}{16^n} \sum_{m=0}^{2n} \binom{4n+2}{m} (-1)^m \sin^2((2n+1-m)x) \end{eqnarray} $$ Now: $$\begin{eqnarray} S_n &=& \frac{1}{16^n} \sum_{m=0}^{2n} \binom{4n+2}{m} (-1)^m \int_0^\infty \frac{\sin^2\left((2n+1-m) x\right)}{x^2} \mathrm{d} x \\ &=& \frac{1}{16^n} \sum_{m=0}^{2n} \binom{4n+2}{m} (-1)^m \frac{\pi}{2} \left(2n+1-m\right) \\ & \stackrel{m \to 2n-m}{=}& \frac{1}{16^n} \frac{\pi}{2} \sum_{m=0}^{2n} \binom{4n+2}{2n+2+m} (-1)^m \left(m+1\right) \end{eqnarray} $$ The latter sum readily yields to telescoping method, establishing the claim: $$ S_n = \frac{\pi}{2} \cdot \frac{n+1 }{4 n+1} \cdot \frac{1}{16^n} \binom{4 n+2}{2 n+2} = \frac{\sqrt{\pi}}{2} \frac{\Gamma\left(2n+\frac{1}{2}\right)}{(2n)!} $$
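The closed form is easy to test numerically; a minimal sketch, assuming NumPy and SciPy are available (the oscillatory $1/x^2$ tail converges slowly, hence the large subdivision limit, and the quadrature may still emit accuracy warnings):

```python
import numpy as np
from scipy.integrate import quad

def closed_form(a):
    # I(a) = pi/sqrt(2) * a / sqrt(1 + sqrt(1 + a^2))
    return np.pi / np.sqrt(2) * a / np.sqrt(1.0 + np.sqrt(1.0 + a * a))

def numeric(a):
    # The integrand tends to a as x -> 0, so it is bounded near the origin
    val, _ = quad(lambda x: np.arctan(a * np.sin(x) ** 2) / x ** 2,
                  0.0, np.inf, limit=1000)
    return val

for a in (0.5, 1.0, 2.0):
    print(a, numeric(a), closed_form(a))
```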
H: Sequences, subsequences, and continuity of functions It's been a few years since I studied point-set topology, and I'm a bit rusty on the basics. Would appreciate help with the following question. Suppose $f:X\rightarrow Y$ is a map between two topological spaces, and I know that for any sequence $x_n\rightarrow x$ in $X$, there is a subsequence $x_{n_k}$ such that $f(x_{n_k})\rightarrow f(x)$. Does it follow that $x_n\rightarrow x\Rightarrow f(x_n)\rightarrow f(x)$? In the case when $X$ is first countable, the first condition is enough to imply that $f$ is continuous (I believe), and so the second condition must hold. I suspect this isn't true for general $X$, but I can't come up with a counterexample. Thanks. AI: Yes, it follows. If $x_n\to x$ but $f(x_n)$ does not converge to $f(x)$, take a neighborhood $V$ of $f(x)$ such that $f(x_n)\notin V$ for infinitely many $n$. These infinitely many indices give a subsequence $x_{n_k}$ with $f(x_{n_k})\notin V$ for every $k$. We still have $x_{n_k}\to x$, but no subsequence of $\{f(x_{n_k})\}$ can converge to $f(x)$, because all of its terms lie outside the neighborhood $V$. This contradicts the hypothesis applied to the sequence $x_{n_k}$, so $f(x_n)\to f(x)$.
H: Subgroups written as products Suppose a finite group $G$ is the product of two of its proper subgroups, $G=AB$. Assume also that $A\lhd G$ and that $A,B$ have relatively prime orders. Isn't it true that any subgroup $H$ of $G$ can be written as $H=(H\cap A)(H\cap B)$? AI: While Qiaochu's answer settles the original question, it might be worth noting that every subgroup $H$ of $AB$ under these hypotheses can be written in the form $(H \cap A)(H \cap B^{x})$ for some $x \in A.$ Let $\pi$ be the set of prime divisors of $|A|$. Then $H/(H \cap A)$ is isomorphic to a subgroup of $B$, so it is a $\pi^{\prime}$-group. By the Schur-Zassenhaus theorem, we have $H = (H \cap A)C$ for some subgroup $C$ of $H$ with $(H \cap A) \cap C = 1.$ By Schur-Zassenhaus again, we have $C^{g} \leq B$ for some $g \in G.$ Write $g = ab$ for some $a \in A, b \in B.$ Then $C^{a} \leq B.$ Hence $H^{a} = (H \cap A)^{a}C^{a} \leq (H^{a} \cap A)(H^{a} \cap B) \leq H^{a}.$ Setting $x = a^{-1}$, conjugating by $x$ gives $H = (H \cap A)(H \cap B^{x}).$
H: Expectation and median (Jensen’s inequality) of spacial functions Let $f:S^n \to \mathbb{R}$ be a 1-Lipschitz function, where $S^n$ is equipped with the geodesic distance $d$ and with the uniform (probability) measure $\mu$. How can I show that such an $f$ satisfies the inequality $(\int_{S^n} f \,d\mu)^2 \leq \int_{S^n} f^2 \,d\mu$? In addition, is it true that in such a case we have $\sqrt{\int_{S^n} f^2 d\mu} \leq m$, where $m$ is a number satisfying $\mu(f \geq m) \geq 0.5$ and $\mu(f \leq m) \geq 0.5$? AI: The first inequality is called the Cauchy–Schwarz inequality, rather than Jensen's inequality. Its proof is simple and very general: one considers $g=(f-a)^2$ with $a=\int\limits_{S^n}f\,\mathrm d\mu$. Then $g\geqslant0$ everywhere, hence $\int\limits_{S^n}g\,\mathrm d\mu\geqslant0$. Expanding this by linearity and using the fact that the total mass of $\mu$ is $1$ yields the result. The second inequality you suggest is odd. If $f(x)=x_1-1$, then $m=-1$, which ruins every chance to get a nonnegative quantity $\leqslant m$. More generally, $ \sqrt{ \int\limits_{S^n} f^2\,\mathrm d\mu } \geqslant \int\limits_{S^n} f\,\mathrm d\mu$ and, as soon as $f$ is symmetric around one of its medians $m$, the RHS is $m$. To sum up, no such comparison can hold, and if one did, it would go in the opposite direction to the one you suggest.
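Writing out the expansion by linearity, with $a=\int_{S^n}f\,\mathrm d\mu$ and $\mu(S^n)=1$:
$$0\leqslant\int_{S^n}(f-a)^2\,\mathrm d\mu=\int_{S^n}f^2\,\mathrm d\mu-2a\int_{S^n}f\,\mathrm d\mu+a^2=\int_{S^n}f^2\,\mathrm d\mu-a^2,$$
which is exactly the claimed inequality $\left(\int_{S^n}f\,\mathrm d\mu\right)^2\leqslant\int_{S^n}f^2\,\mathrm d\mu$.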
H: Estimation Theory - Maximum Likelihood Estimation The below homework question comes from Larsen and Marx, 4th edition. Is the maximum likelihood estimator for $\sigma^{2}$ in a normal pdf, where both $\mu$ and $\sigma^{2}$ are unknown, asymptotically unbiased? I think I understand the notion that an estimator $\hat{\theta}_{n}$ is asymptotically unbiased if the limit of its expected value as $n$ goes to infinity is $\theta$, but I'm really not sure where to go with trying to answer the above question. Any idea where to start? AI: If $X_1,\ldots,X_n$ are iid $\mathcal{N}(\mu,\sigma^2)$ variables, then the maximum likelihood estimator of $\sigma^2$ is given by $$ \hat{\sigma}_n^2=\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2, $$ where $\bar{X}=\frac{1}{n}\sum_{i=1}^n X_i$ is the sample mean. Recall that we usually do not use this estimator because it is biased. Instead we often use $$ s^2_n=\frac{1}{n-1}\sum_{i=1}^n (X_i-\bar{X})^2, $$ because $E[s^2_n]=\sigma^2$. Now because $\hat{\sigma}^2_n = \frac{n-1}{n}s^2_n$ it follows that $$ E[\hat{\sigma}_n^2]=\frac{n-1}{n}\sigma^2\to \sigma^2 $$ as $n\to\infty$, and hence the maximum likelihood estimator is asymptotically unbiased.
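A quick Monte Carlo check of the bias factor $\frac{n-1}{n}$ derived above; a sketch assuming NumPy, with arbitrary choices of sample size and parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 200_000
mu, sigma2 = 2.0, 9.0

# Each row is one sample of size n from N(mu, sigma2)
x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))

# MLE: divide by n, with the sample mean in place of the unknown mu
sigma2_mle = ((x - x.mean(axis=1, keepdims=True)) ** 2).mean(axis=1)

print(sigma2_mle.mean())        # approximately 8.1
print((n - 1) / n * sigma2)     # 8.1, the theoretical E[sigma^2_hat]
```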
H: Why does this equation converge to 1? The following simple formula takes in an $N$-length vector of (real) numbers and spits out a (real) number between 0 and 1. (I believe this means that it is a transformation mapping $\mathbb{R}^N \rightarrow \mathbb{R}$.) It has the property that the output converges to 1 as the input elements become more and more alike. Anyway, here is the formula: $\bf x$ is the $N$-dimensional input vector, $y$ is the scalar output, and $e$ is the base of the natural exponential. $$ y = \frac{e^{\frac{1}{N}\displaystyle\sum_{n=1}^N \log_e(x[n])}}{\frac{1}{N}\displaystyle\sum_{n=1}^N x[n]} $$ (Notice how the denominator is just the mean of $\bf x$.) This function returns $y=1$ if all the elements of the vector $\bf x$ are equal to each other. I am trying to do two things: 1) First, I would like to 'translate' the English statement "Show that this equation converges to 1, as all the elements in the vector $\bf x$ become more and more alike", into a mathematical statement. (Italics stressing what I want translated). For example, in English we might say "As the variable $c$ approaches infinity", and we use $\displaystyle\lim_{c \to +\infty}$ to denote that mathematically. 2) After that, I would like to show (prove) that the expression actually does converge to 1 as the elements of $\bf x$ become more and more similar. (I know that it does indeed converge to 1, but would like to prove it.) AI: First of all, note that in order to get real values, you want all $x_n > 0$. The equation can be simplified as $$ y = \dfrac{\prod_{n=1}^N x_n^{1/N}}{\frac{1}{N} \sum_{n=1}^N x_n}$$ (the numerator is the geometric mean of the entries, the denominator the arithmetic mean). You could say that for any $c > 0$, $$ \lim_{{\bf x} \to [c,\ldots,c]} \dfrac{\prod_{n=1}^N x_n^{1/N}}{\frac{1}{N} \sum_{n=1}^N x_n} = 1 $$ This is true because the expression is a continuous function on $(0,\infty)^N$ and, as you noted, its value is $1$ when all $x_n$ are equal to the same positive number.
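A small numerical illustration of the convergence; a sketch assuming NumPy, with the perturbation scale `eps` an arbitrary choice of mine:

```python
import numpy as np

def ratio(x):
    """Geometric mean divided by arithmetic mean; requires x > 0."""
    x = np.asarray(x, dtype=float)
    return np.exp(np.log(x).mean()) / x.mean()

rng = np.random.default_rng(1)
for eps in (1.0, 0.1, 0.01, 0.001):
    x = 5.0 + eps * rng.standard_normal(1000)   # entries become more alike
    print(eps, ratio(x))                        # ratio -> 1 as eps -> 0
```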
H: Examples of preorders in which meets and joins do not exist Exercise 1.2.8 (Part 1), p.8, from Categories for Types by Roy L. Crole Definition: Let $X$ be a preordered set and $A \subseteq X$. A join of $A$, if such exists, is a least element in the set of upper bounds for $A$. A meet of $A$, if such exists, is a greatest element in the set of lower bounds for $A$. Exercise: Make sure you understand the definition of meet and join in a preorder $X$. Think of some simple finite preordered sets in which meets and joins do not exist. AI: Let $X = \{a, b\}$. Define a preorder on $X$ by $a \le a$ and $b \le b$ (and no other relations). Now suppose that $\vee X$ is a join of $X$. Then $\vee X$ is an upper bound of $X$, so $a \le \vee X$ and $b \le \vee X$. Since the only relations are the reflexive ones, this forces $\vee X = a$ and $\vee X = b$, which is a contradiction. Therefore, $\vee X$ does not exist. A similar proof shows that $X$ does not have a meet either.
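The emptiness of the set of upper bounds can even be checked mechanically; a toy sketch in Python (the encoding of the preorder as a set of pairs is my own):

```python
X = {'a', 'b'}
le = {('a', 'a'), ('b', 'b')}   # the discrete preorder: only reflexivity

# An upper bound u of X must satisfy x <= u for every x in X
upper_bounds = [u for u in X if all((x, u) in le for x in X)]
print(upper_bounds)   # [] -- no upper bounds at all, hence no join of X
```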