H: Integral and series convergence intuition I have this problem I ran into while studying for the upcoming exam: I don't feel I have the intuition of whether a series or an integral converges or not. What are the things I should look for when looking at a closed form expression, and thinking whether it converges or not? It's not as intuitive to me as in sequences, which usually come much easier. Thanks in advance for any advice! Also, not sure if this question deserves the 'soft question' tag. Feel free to add it if you think it's needed. AI: Something that I find helps my intuition is to have ready examples of series that converge and diverge. After all, the only way you can "see" if a series converges is if it "looks" like a convergent series. Series that converge which you should memorize: $$ \sum_{n=1}^\infty \frac{1}{n^2} \qquad \text{(converges)} $$ $$ \sum_{n=1}^\infty \frac{1}{n^p} \qquad \text{(converges if $p>1$)} $$ $$ \sum_{n=0}^\infty ar^n \qquad \text{(converges if $|r|<1$)} $$ Series which diverge that you should memorize: $$ \sum_{n=1}^\infty \frac{1}{n} \qquad \text{(diverges)} $$ $$ \sum_{n=2}^\infty \frac{1}{\ln(n)} \qquad \text{(diverges)} $$ Knowing these can help your intuition a lot. Something else to keep in mind is the rate of growth of different functions. In the long run we have: $$ \ln(n) \lt n \lt n^2 \lt 5^n \lt n!$$ which means that $$ \frac{1}{\ln(n)} \gt \frac{1}{n} \gt \frac{1}{n^2} \gt \frac{1}{5^n} \gt \frac{1}{n!}.$$ If you don't already know it, you should learn the limit comparison test. This is a free licence for your intuition. For instance, when determining the convergence of the following series: $$ \sum_{n=1}^\infty \frac{5n^3-3n^2+177}{6n^4-n^2+1} $$ we can squint our eyes and see that the top is basically $5n^3$ and the bottom is $6n^4$, telling us that $$\frac{5n^3-3n^2+177}{6n^4-n^2+1} \sim \frac{5n^3}{6n^4} = \frac{5}{6n} \sim \frac{1}{n}.$$ So our intuition tells us this series should diverge because it's really just a harmonic series in disguise. We justify that intuition by applying the limit comparison test. Another example: $$\sum_{n=1}^\infty \frac{2^n}{n!}$$ We have a ratio of $2^n$ and $n!$. Recall that $n!$ grows much faster than $2^n$, so its contribution is much more significant than the contribution of $2^n$. On its own, the $n!$ in the denominator would make the series converge, therefore we can guess that this series will still converge. We verify our guess by applying the ratio test (this should be your go-to convergence test when factorials are involved): $$\left\vert \frac{a_{n+1}}{a_n} \right\vert = \frac{2^{n+1}\,n!}{(n+1)!\,2^n} = \frac{2}{n+1} \rightarrow 0 \text{ as } n \rightarrow \infty$$ Since the ratio of consecutive terms approaches $0$, which is less than $1$, as $n$ approaches $\infty$, we can conclude that the series converges.
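As a quick numeric illustration of the two examples above (an editorial addition, not part of the original answer; assumes only the Python standard library):

```python
# Partial sums of the two example series: one creeps upward forever,
# the other settles almost immediately.
from math import factorial

def partial_sum(term, N):
    """Sum term(n) for n = 1..N."""
    return sum(term(n) for n in range(1, N + 1))

# "Harmonic series in disguise": partial sums keep growing.
diverging = lambda n: (5*n**3 - 3*n**2 + 177) / (6*n**4 - n**2 + 1)
# Ratio-test example: partial sums converge quickly.
converging = lambda n: 2**n / factorial(n)

for N in (10, 100, 1000):
    print(N, partial_sum(diverging, N), partial_sum(converging, N))
# The first column grows roughly like (5/6)ln(N) + const, while the
# second approaches e^2 - 1 = 6.389...
```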
H: Restrictions of automorphisms to elementary substructures Suppose that I have structures $M \preceq M'$ (in some first-order language). I have a set $A$, with $M \subseteq A \subseteq M'$, and an automorphism $f$ of $M'$. Is it always possible to find an $M''$, with $M \preceq M'' \preceq M'$, and $A \subseteq M''$, such that $f$ restricts to an automorphism of $M''$? If so, and supposing that $M$ and $A$ are countable, can I also arrange for $M''$ to be countable? Thanks! AI: Consider $M_0$ to be the elementary submodel generated by $A\cup f[A]\cup f^{-1}[A]$. Now define by induction $M_{n+1}$ to be the elementary submodel generated by $M_n\cup f[M_n]\cup f^{-1}[M_n]$. Let $M''$ be the union of the $M_n$'s. An increasing union of elementary submodels is an elementary submodel, and it is not hard to see that if $m\in M''$ then $f(m)\in M''$ as well, and if $f(m)\in M''$ then $m\in M''$. Furthermore, by the usual Löwenheim–Skolem arguments we have that $|M_n|=|A|+\aleph_0$, and therefore if $A$ is countable, so is $M''$.
H: Determining the structure of the $\mathbb{Z}$-module $\mathbb{Z}^3/K$, with $K=\langle (2,1,-3),(1,-1,2)\rangle$ Well, this is the exercise: Determine the structure of $\mathbb{Z}^{(3)}/K$ where $K$ is generated by $f_1=(2,1,-3)$, $f_2=(1,-1,2)$. Looking at the proof of the fundamental structure theorem for finitely generated modules over a PID, I tried to find the normal form of $$ A=\begin{pmatrix} 2 & 1 &-3 \\ 1 & -1 & 2 \end{pmatrix} $$ then, by multiplication with elementary matrices (at each $\rightarrow$ I multiplied $A$ by elementary matrices, so $A$ is equivalent to all these matrices) $$ A \rightarrow\begin{pmatrix} 1 & -1 & 2 \\ 2 & 1 &-3 \end{pmatrix}\rightarrow \begin{pmatrix} 1 & 0 & 0 \\ 2 & 3 & -7 \end{pmatrix} \rightarrow\begin{pmatrix} 1 &0 & 0\\ 0 & 3&-7 \end{pmatrix}\rightarrow \begin{pmatrix} 1&0&0 \\ 0&3&-1 \end{pmatrix}\rightarrow \begin{pmatrix} 1&0&0 \\ 0&-1&3 \end{pmatrix}\rightarrow \begin{pmatrix} 1 & 0 &0\\ 0&-1&0 \end{pmatrix} $$ and I got that $$ A\sim \begin{pmatrix} 1 & 0 & 0\\ 0 & -1 & 0 \end{pmatrix} $$ But I found this (a linked question with a different answer). So, where is my mistake? And I'd appreciate it if someone could explain to me the reasoning in the above link. EDIT: I should add that, if my calculations are correct, I still don't see how it can be that both invariant factors are units. I think (please correct me if I'm wrong) that if both were units, then $\mathbb{Z}^3/K$ would be isomorphic to $\{0\}$, and that cannot be. AI: You are right, and something in the calculation in the linked question went wrong. Observe that $A$ has a $2\times2$ minor $$ \left|\begin{array}{rr}2&-3\\1&2\end{array}\right|=7. $$ The g.c.d. of the $2\times2$ minors is equal (up to sign) to the product of the two smallest invariant factors. But $3\nmid7$, so $3$ cannot appear in the Smith normal form, i.e. the other poster (or I!) made an error. The role of the diagonal entries in the Smith normal form appears in the following theorem (a.k.a. the stacked bases theorem): Assume that $R$ is a PID and $M\subseteq N$ are finitely generated free $R$-modules. Let $u_1,u_2,\ldots,u_n$ be a basis of $N$, and let $v_1,v_2,\ldots,v_m$, $m\le n$, be generators of $M$. Let $A=(a_{ij})$ be the $m\times n$ matrix determined by the equations¹ $$ v_i=\sum_{j=1}^na_{ij}u_j. $$ If the diagonal entries of the Smith normal form of $A$ are $d_1\mid d_2\mid\cdots\mid d_m$, then there exist bases (these are the stacked bases) $e_1,e_2,\ldots,e_n$ of $N$ and $f_1,f_2,\ldots,f_m$ of $M$ such that $$ f_i=d_ie_i\quad\text{for all $1\le i\le m$}. $$ Consequently $$ N/M\cong (R/d_1R)\oplus (R/d_2R)\oplus\cdots\oplus (R/d_mR)\oplus R^{n-m}. $$ ¹In other words, the $i$-th row of the matrix $A$ is the element $v_i$ written in the basis given by $u_1, u_2, \dots, u_n$ (just like a coordinate vector in the context of vector spaces). There is a proof of this result in Jacobson's Basic Algebra I at least, and probably in most other textbooks. It does follow relatively easily from the characterization of the Smith normal form. In your case this implies that $$ \mathbb{Z}^3/K\cong \mathbb{Z}/(1\cdot\mathbb{Z})\oplus\mathbb{Z}/((-1)\cdot\mathbb{Z})\oplus\mathbb{Z}^{3-2}\cong\mathbb{Z}. $$
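As a cross-check, SymPy can compute the Smith normal form directly (an editorial addition; assumes a reasonably recent SymPy, where `smith_normal_form` lives in `sympy.matrices.normalforms`):

```python
# Smith normal form of the matrix from the question.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 1, -3], [1, -1, 2]])
print(smith_normal_form(A, domain=ZZ))
# Matrix([[1, 0, 0], [0, 1, 0]]): both invariant factors are units, so
# Z^3/K is Z/1 + Z/1 + Z^(3-2), i.e. Z, matching the answer above.
```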
H: Closure Operator and Set Operations In Engelking's General Topology one finds the following exercise: Show that for any sequence $A_1, A_2, \ldots$ of subsets of a topological space we have $$ \overline{\bigcup_{i=1}^{\infty} A_i} = \bigcup_{i=1}^{\infty} \overline{A_i} \cup \bigcap_{i=1}^{\infty} \overline{\bigcup_{j=0}^{\infty} A_{i+j}}. $$ I have no idea how to show this; I can't even show that the right side yields a closed set, because when I take the complement I get infinite intersections, which aren't necessarily open. Any hints on how to solve this exercise? AI: $\newcommand{\cl}{\operatorname{cl}}$For each $i\in\Bbb Z^+$ $$\cl\bigcup_{j\ge 0}A_{i+j}\subseteq\cl\bigcup_{j\ge 1}A_j\;,$$ so $$\bigcap_{i\ge 1}\cl\bigcup_{j\ge 0}A_{i+j}\subseteq\cl\bigcup_{i\ge 1}A_i\;,$$ and therefore $$\cl\bigcup_{i\ge 1}A_i\supseteq\bigcup_{i\ge 1}\cl A_i\cup\bigcap_{j\ge 1}\cl\bigcup_{i\ge 0}A_{i+j}\;.$$ Suppose that $$x\in\left(\cl\bigcup_{i\ge 1}A_i\right)\setminus\bigcup_{i\ge 1}\cl A_i\;.$$ For any $n\in\Bbb Z^+$ we know that $$\cl\bigcup_{i=1}^nA_i=\bigcup_{i=1}^n\cl A_i\;,$$ so for each $n\in\Bbb Z^+$ we have $$x\in\left(\cl\bigcup_{i\ge 1}A_i\right)\setminus\cl\bigcup_{i=1}^nA_i=\left(\cl\bigcup_{i\ge n+1}A_i\cup\cl\bigcup_{i=1}^nA_i\right)\setminus\cl\bigcup_{i=1}^nA_i\subseteq\cl\bigcup_{i\ge n+1}A_i\;;$$ do you see that this implies that $$x\in\bigcap_{j\ge 1}\cl\bigcup_{i\ge 0}A_{i+j}$$ and hence that $$\cl\bigcup_{i\ge 1}A_i\subseteq\bigcup_{i\ge 1}\cl A_i\cup\bigcap_{j\ge 1}\cl\bigcup_{i\ge 0}A_{i+j}\;?$$
H: $\sum \limits_{n=1}^{\infty}{a_n^2}$ converges $\implies \sum \limits_{n=1}^{\infty}{\frac{a_n}{n}}$ I need some help to solve the following problem: Show that if $\sum \limits_{n=1}^{\infty}{a_n^2}$ converges, then $\sum \limits_{n=1}^{\infty}{\dfrac{a_n}{n}}$ converges. I tried to solve it by using the Ratio Test, but got nowhere. Regards and thanks a lot. AI: Apply Hölder's Inequality (Cauchy-Schwarz if you are not familiar with Hölder): $$ \sum_{n=1}^\infty \left|a_n\cdot \frac1n \right | \leq \left(\sum_{n=1}^\infty a_n^2\right)^{1/2} \left(\sum_{n=1}^\infty\frac1{n^2} \right)^{1/2} < \infty $$ so the series is absolutely convergent.
H: Uniform convergence of $\sin\left ( {\frac{1}{n^{3}x}} \right )$ I ran into this question and I'm not sure how to solve it: Check uniform convergence of: $$f_{n}(x)=\sin\left ( {\frac{1}{n^{3}x}} \right )\quad \;x\in (0,1]$$ I tried finding the supremum of the function and then taking the limit as $n \to\infty$. I got the answer $1$, which means that the convergence is not uniform, but this argument makes no use of the specific restriction to $x\in (0,1]$, which would seem to rule out uniform convergence on any domain. On the other hand, I found that when $x\in [1,\infty)$ there is uniform convergence. Please help me understand where the specific restriction to $x\in (0,1]$ is used. Thanks in advance. AI: The pointwise limit of $f_n(x)$ is $f(x) = 0$. For $N \in \mathbb N$, let $x_N = 1/N^3$. This is always possible since $x \in (0, 1]$. We have $f_N(x_N) = \sin(1)$. By letting $\epsilon < \sin(1)$ we see that $f_n(x)$ cannot converge uniformly on $x \in (0, 1]$. On the other hand, if $x \in [1, \infty)$, we have $0 \le \dfrac{1}{n^3 x} \le \dfrac{1}{n^3} \le 1$. Hence: $$ \left|f_n(x)\right| = \left|\sin\left(\frac{1}{n^3 x}\right)\right| \le \left|\sin\left(\frac{1}{n^3}\right)\right| $$ For any $\epsilon > 0$, we can make $n$ large enough so that: $$ \left|\sin\left(\frac{1}{n^3}\right)\right| < \epsilon $$ Thus, $f_n(x)$ converges uniformly on $x \in [1, \infty)$.
H: Laplace transform of sin(at) Given $f(t)= \sin (at)$, I want to calculate the Laplace transform of $f(t)$. I have determined, by using integration by parts twice, that the answer should be $$F(s)= \frac{a}{s^2+a^2}$$ Now I want to recalculate it by using just that $$\sin (at)= \frac{1}{2i} \left( {e^{ati}-e^{-ati}}\right) $$ So the integral becomes $$\frac{1}{2i}\int_{0}^{\infty} e^{-st}\left( {e^{ati}-e^{-ati}}\right)dt = $$ $$ \frac{1}{2i}\int_{0}^{\infty} e^{t(ai-s)}-e^{-t(ai+s)}dt =$$ $$ \frac{1}{2i} \left[\frac{1}{ai-s}e^{t(ai-s)} \right]_0^{\infty} +\frac{1}{2i} \left[\frac{1}{ai+s}e^{-t(ai+s)} \right]_0^{\infty} $$ My question now is: How to determine the limit of $e^{t(ai-s)}$ as $t$ goes to $\infty$? AI: The traditional way to handle this is as follows: In order to proceed, we need to have some more information about $s$; otherwise, we can't be sure that the integral will converge. So, we assume that $s$ is in the "region of convergence" (ROC) for this function. In this particular integral, we suppose that $\text{Re} \{s\}>0$, so that $\lim_{t\to\infty}e^{t(-s\pm ai)}=0$. You should be able to handle the rest. Explicitly, $$ \left|\lim_{t\to \infty} e^{t(-s\pm ai)}\right| = \lim_{t\to \infty} \left|e^{t(-s\pm ai)}\right| = \lim_{t\to \infty} \left|e^{-st}\right|\cdot\left|e^{\pm ati}\right| = \lim_{t\to \infty} \left|e^{-st}\right|\cdot 1=0. $$
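With the ROC assumption in place, the full transform can be cross-checked symbolically (an editorial addition; assumes SymPy's `laplace_transform`):

```python
# SymPy cross-check of F(s) = a/(s^2 + a^2).
from sympy import symbols, sin, laplace_transform

t, s, a = symbols('t s a', positive=True)
print(laplace_transform(sin(a*t), t, s))
# (a/(a**2 + s**2), 0, True): the transform together with its
# convergence abscissa Re(s) > 0.
```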
H: The principle of duality for sets The Wikipedia article on the algebra of sets briefly mentions the following: These are examples of an extremely important and powerful property of set algebra, namely, the principle of duality for sets, which asserts that for any true statement about sets, the dual statement obtained by interchanging unions and intersections, interchanging U and Ø and reversing inclusions is also true. A statement is said to be self-dual if it is equal to its own dual. The article doesn't talk about how this is proven and the linked article wasn't particularly enlightening to me. This is a surprising theorem to me and I am interested in finding out how to prove it. What kind of math would be involved in proving it? How would we prove this? AI: As written, the statement is false. For example, the statement $\neg (\exists x)(x\in \varnothing)$ won't flip around that way. The actual point, however, is that the set of all subsets of a set $U$ forms a lattice with join $\cup$, meet $\cap$, bottom $\varnothing$, top $U$, and ordering $\subseteq$. Thus the usual lattice dualities apply.
H: How to prove that normal matrix with property $A^2=A$ is Hermitian? I am given a matrix $A\in M(n\times n, \mathbb{C})$ that is normal (in matrix form $AA^*=A^*A$) and satisfies $A^2=A$. The task is to prove that the matrix is Hermitian. But when I try something like $A^*=\,\,\ldots$, I can't reach $A$, because I can't "get rid of the star" in the expression. Also, it is not enough to show $BA=BA^*$ for some $B$, since matrices don't form a field, and I haven't got any other thoughts. Thanks in advance! AI: Hint: by the spectral theorem, a normal matrix is Hermitian if and only if all its eigenvalues are real. What complex numbers have the property that they are equal to their squares?
H: I do not understand this integral, please help... $$\int_0^{\infty} P(y > z) \, dz = \int_0^{\infty} \int_z^{\infty} h(y) \, dy \, dz = \int_0^{\infty} \int_0^y \, dz \, h(y) \, dy$$ Why do we have the last equality? I used Fubini and derived the following: $$ \int_z^{\infty} \int_0^{\infty} h(y) \, dz \, dy,$$ not the above result. Thank you very much for your help! AI: The middle integral gives us $$0\le z<\infty\;,\;\;z\le y<\infty$$ If you now change the integration order you get $$0\le z\le y<\infty\implies 0\le y<\infty\;,\;0\le z\le y$$ which is precisely what you have in the third integral in the first line...
H: Can you solve this problem with functions? We are given $f(x)=ax^2+2x+b$ with $a\neq 0$ and domain $D_f=\mathbb R$, and $f\circ g=g\circ f$, where the equation $g(x)=x$ has exactly one solution $x_0$. Then we have to show that $ab\leq 1/4$. AI: If $x_0$ is a solution for $g(x)=x$, then we also have $$g(f(x_0))=f(g(x_0))=f(x_0)$$ so $f(x_0)$ is also a solution of $g(x)=x$. By uniqueness, we have $x_0=f(x_0)$, i.e., $$x_0=ax_0^2+2x_0+b\\ 0=ax_0^2+x_0+b\,.$$ And, as $x_0$ is already a root of this polynomial $ax^2+x+b$, it must have nonnegative discriminant, i.e. $1-4ab\ge 0$, which is exactly $ab\le\frac14$.
H: Can one apply the classifying space functor $B$ more than once? For a topological monoid $M$, the classifying space $BM$ is at least a pointed topological space as far as I know. From where to where is the construction $B$ actually a functor? Can I plug in an $A_\infty$ space $M$ or even an $H$-space $M$? What do I get in those cases? In particular: Can I apply the $B$ construction again to get $BBM$? AI: One can form $BM$ for any $A_\infty$-space and it's more or less the delooping ($A_\infty$-structure on a connected $H$-space $M$ is more or less the same thing as an equivalence $M\cong \Omega X$ for some $X$; the proof is more or less that $M\cong\Omega BM$). (AFAIR this can be generalized further, but then there is a question of what properties you want from $BM$.) If $M$ is abelian, $BM$ is again a monoid which is abelian enough, so the construction can be iterated (basically one gets $B^nM=M[S^n]$, i.e. the configuration space of points on the $n$-sphere with labels in $M$). In particular, if $G$ is a discrete abelian group, one can define $B^nG$ and it has the homotopy type of $K(G,n)$ (and so we get a very explicit description of $K(G,n)$, namely $G[S^n]$). On the other hand, if $G$ is a non-abelian group, $BG$ is not (in general) even an $H$-space (say, any surface is a $BG$, but if $g>1$ it's not an $H$-space).
H: How do I create a sigmoid-esque function with the following properties? For a range of $x$ values between $A$ and $B$ I would like $f(x) \rightarrow x$. For values less than $A$ I would like $f(x)$ to exhibit a sigmoid-esque convergence to $A'$, where $A'$ is $A - \delta$ for some small $\delta$. Similarly, for values greater than $B$ I would like $f(x)$ to converge to $B'$, where $B'$ is $B+\delta$ for some small $\delta$. Typical values will be $A = 0.5$, $B = 1.5$, $A' = 0.4$, $B' = 1.6$. AI: This image from Wikipedia shows several sigmoid-type functions, each with derivative 1 at the origin [image omitted]. By taking any of these (choose one, and call it $g(x)$), translating, and rescaling, you get $B + \delta g((x-B)/\delta)$. This will serve as the upper end of your function. The "$B +$" is the vertical translation, the "$- B)$" is the horizontal translation, the factor of $\delta$ on the outside rescales $g$ vertically, and the factor of $\delta$ on the inside scales $g$ horizontally (making the derivative correct, so that you don't have a singular point in your piecewise function). See this example that I typed into Wolfram Alpha with $g = \tanh$, $B = 1.5$, $\delta = 0.1$: http://www.wolframalpha.com/input/?i=1.5+%2B+0.1+*+tanh%28%28x-1.5%29%2F0.1%29 You can work out the lower end of your function analogously. $$ f(x) = \left\{ \begin{array}{lr} A + \delta\tanh((x-A)/\delta), & x \leq A \\ x, & A < x < B \\ B + \delta\tanh((x-B)/\delta), & x \geq B \\ \end{array} \right. $$ is one such function, but you don't need to use $\tanh$. If it's unclear how I arrived at this, it could be helpful to take each of the four transformations I performed on $\tanh$ and plot them individually to see exactly how the function changes, and how the transformations act together to give you what you need.
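For concreteness, here is a small sketch implementing the piecewise $f$ above with the typical values from the question (an editorial addition; assumes NumPy, with $\tanh$ as the chosen $g$):

```python
# Piecewise sigmoid-capped identity: identity on [A, B], tanh tails outside.
import numpy as np

def f(x, A=0.5, B=1.5, delta=0.1):
    x = np.asarray(x, dtype=float)
    return np.where(
        x <= A, A + delta * np.tanh((x - A) / delta),
        np.where(x >= B, B + delta * np.tanh((x - B) / delta), x),
    )

xs = np.linspace(-1.0, 3.0, 9)
print(f(xs))  # tends to A - delta = 0.4 on the left, B + delta = 1.6 on the right
```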
H: Which axioms of ZFC or PA are known to not be derivable from the others? Which, if any, axioms of ZFC are known to not be derivable from the other axioms? Which, if any, axioms of PA are known to not be derivable from the other axioms? AI: There are several interesting issues here. The first is that there are different axiomatizations of PA and ZFC. If you look at several set theory books you are likely to find several different sets of axioms called "ZFC". Each of these sets is equivalent to each of the other sets, but they have subtly different axioms. In one set, the axiom scheme of comprehension may follow from the axiom scheme of replacement; in another set of axioms it may not. That makes the issue of independence harder to answer in general for ZFC; you have to really look at the particular set of axioms being used. PA has two different common axiomatizations. For the rest of this answer I will assume the axiomatization from Kaye's book Models of Peano Arithmetic, which is based on the axioms for a discretely ordered semiring. The second issue is that both PA and ZFC (in any of their forms) have an infinite number of axioms, because they both have infinite axiom schemes. Moreover, neither PA nor ZFC is finitely axiomatizable. That means, in particular, that given any finite number of axioms of one of these theories, there is some other axiom that is not provable from the given finite set. Third, just to be pedantic, I should point out that, although PA and ZFC are accepted to be consistent, if they were inconsistent, then every axiom would follow from a minimal inconsistent set of axioms. The practical effect of this is that any proof of independence has to either prove the consistency of the theory at hand, or assume it. Apart from these considerations, there are other things that can be said, depending on how much you know about PA and ZFC. In PA, the axiom scheme of induction can be broken into infinitely many infinite sets of axioms in a certain way using the arithmetical hierarchy; these sets of axioms are usually called $\text{I-}\Sigma^0_0$, $\text{I-}\Sigma^0_1$, $\text{I-}\Sigma^0_2$, $\ldots$. For each $k$, $\text{I-}\Sigma^0_k \subseteq \text{I-}\Sigma^0_{k+1}$. The remaining non-induction axioms of PA are denoted $\text{PA}^-$. Then the theorem is that, for each $k$, there is an axiom in $\text{I-}\Sigma^0_{k+1}$ that is not provable from $\text{PA}^- + \text{I-}\Sigma^0_k$. This is true for both common axiomatizations of PA. In ZFC, it is usually more interesting to ask which axioms do follow from the others. The axiom of the empty set (for the authors who include it) follows from an instance of the axiom scheme of separation and the fact that $(\exists x)[x \in x \lor x \not \in x]$ is a formula in the language of ZFC that is logically valid in first order logic, so ZFC trivially proves that at least one set exists. In ZFC, there are some forms of the axiom scheme of separation that follow from the remainder of ZFC when particular forms of the axiom of replacement are used. The axiom of pairing is also redundant from the other axioms in many presentations. There are likely to be other redundancies in ZFC as well, depending on the presentation. One reason that we do not remove the redundant axioms from ZFC is that it is common in set theory to look at fragments of ZFC in which the axiom of powerset, the axiom scheme of replacement, or both, are removed. So axioms that are redundant when these axioms are included may not be redundant once these axioms are removed.
H: Lebesgue Measure of a k-cell Working through Rudin's RCA construction (Theorem 2.20, p. 53) of the Lebesgue measure using the Riesz Representation Theorem. Rudin constructs a linear functional $\Lambda$ on $\operatorname{C}_c(\mathbb{R}^k)$ such that $$\Lambda f := \lim\limits_{n \to \infty} 2^{-nk} \sum\limits_{x \in P_n} f(x)$$ where $P_n$ is the set of all vectors of the form $x = (a_1/2^n,...,a_k/2^n)$ for $a_1,...,a_k \in \mathbb{Z}$. Now let $W$ be an open $k$-cell. Rudin considers the set $S_r = \{ Q \in \Omega_r: \overline{Q} \subset W\}$, where $\Omega_r$ is the set of all boxes of the form $Q = \{ x: a_i \leq x_i < a_i + 2^{-r}, a_i \in P_r, 1 \leq i \leq k\}$. He then defines $$E_r = \bigcup\limits_{Q \in S_r} Q$$ and applies Urysohn's Lemma to obtain a function $0 \leq f_r \leq 1$ such that $f_r \equiv 1$ on $\overline{E}_r$ and $\operatorname{supp}(f_r)\subseteq W$ (so in particular $\overline{E}_r \subseteq \operatorname{supp}(f_r)$). Note that $\overline{E}_r $ is compact. He then asserts without proof that $$\operatorname{vol}(E_r) \leq \Lambda f_r \leq \Lambda g_r \leq \operatorname{vol}(W)$$ where $g_r := \max\{f_i: 1 \leq i \leq r\}$. How do we establish this inequality? It would seem a priori that it would arise out of demonstrating $\operatorname{vol}(E_r) \leq \Lambda_n f_r$ for all $n$ (writing $\Lambda_n$ for the $n$-th approximating sum above), but I have so far been unsuccessful in this matter. Using Urysohn's Lemma we obtained that $\chi_{\overline{E}_r} \leq f_r \leq \chi_W$, which may yield another route. As a secondary question, are there any other good sources for a construction of the Lebesgue measure on $\mathbb{R}^k$ using the Riesz Representation Theorem? All of the other standard texts, e.g. Royden, on the subject construct an outer Lebesgue measure and extend using the results of Caratheodory. AI: $E_r$ is a union of $P_r$-boxes, and since $W$ is a $k$-cell, so is $E_r$. For $s \geqslant r$, consider $$\Lambda_s(f_r) = 2^{-sk}\sum_{x \in P_s} f_r(x) \geqslant 2^{-sk} \sum_{\substack{x \in P_s\\x+[0,\,2^{-s})^k \subset E_r}} f_r(x) = 2^{-sk}\sum_{\substack{x \in P_s\\x+[0,\,2^{-s})^k \subset E_r}} 1 = \operatorname{vol}(E_r),$$ since the $P_s$-boxes contained in $E_r$ are disjoint, their union is $E_r$, and each has volume $2^{-sk}$. Thus $\Lambda_s(f_r) \geqslant \operatorname{vol}(E_r)$ for all $s \geqslant r$, hence the same holds for $\Lambda(f_r)$. I can't answer your secondary question; Rudin is the only source I know that takes this approach.
H: local diffeomorphism on $\mathbb{R}$ and on manifolds. I find that the proof of diffeomorphism in Guillemin & Pollack's Differential Topology 1.3.3 is more or less independent of the fact that the manifold happens to be $\mathbb{R}$, and therefore the two arguments are essentially the same. So I am asking whether my two proofs (primarily the latter one) are correct. GP 1.3.3: Let $f: \mathbb{R} \rightarrow \mathbb{R}$ be a local diffeomorphism. Prove that the image of $f$ is an open interval and that, in fact, $f$ maps $\mathbb{R}$ diffeomorphically onto this interval. Proof: Since local diffeomorphism implies $f$ and $f^{-1}$ are globally smooth, it suffices to show that the local diffeomorphism $f: \mathbb{R} \to$ image$f$ is a bijection. We know it's surjective since we restricted to its image. But if it is not injective, by the mean value theorem there is some point $x \in \mathbb{R}$ such that $f'(x) = 0$. However, the pushforward $df_{U_x}$ at $x$ must be a linear isomorphism. This is a contradiction. And right after: GP 1.3.5: Prove that a local diffeomorphism $f: X \rightarrow Y$ is actually a diffeomorphism of $X$ onto an open subset of $Y$, provided that $f$ is one-to-one. Proof: Since local diffeomorphism implies $f$ is globally smooth and $f^{-1}$ is globally smooth within the image of $f$, it suffices to show that the local diffeomorphism $f: X \to$ image$f \subseteq Y$ is a bijection. We know it's surjective since we restricted to its image. Given $f$ is one-to-one, it is bijective. Hence a diffeomorphism onto an open subset of $Y$. AI: The problem with your first proof is this: Why is $f$ an open map? Why is the image an interval? Also, tautologically $f$ is surjective onto its image, so what's going on with the MVT in 1.3.3? Well, the first actually follows immediately from $f$ being a local diffeomorphism! For any $f(y) \in f(\Bbb{R})$, $f$ a local diffeomorphism implies there is $U$ open about $y$ such that $f(U)$ is open in $\Bbb{R}$ about $f(y)$. Necessarily $f(U) \subseteq f(\Bbb{R})$ and so immediately this means $f(\Bbb{R})$ is open. For 2) the image is an interval because $f(\Bbb{R})$ is an open connected set, which is an interval from basic real analysis. To complete the problem, you just need to tell us why $f$ is injective: If $x,y$ with $x\neq y$ are such that $f(x) = f(y)$, the MVT implies there is $d$ between $x$ and $y$ so that $f'(d) = 0$. Now how does this contradict what we have? Exercise: Complete the proof above for why $f$ has to be injective with the following hints: Let $M$ be a smooth manifold. For any $U \subseteq M$ open and $p\in U$, $T_pM \cong T_pU$. A functor always takes isomorphisms to isomorphisms. If $f'(d) = 0$ then the induced map on tangent spaces at $d$ is the (.....) map.
H: domain of square root What is the domain and range of $\sqrt{3-t} - \sqrt{2+t}$? I took the domain to be the intersection of the two separate domains of each square root. My domain is $[-2,3]$. Is it right? Are there methods for figuring out the domain and range in this kind of problem? AI: You are right about the domain. As to the range, use the fact that as $t$ travels from $-2$ to $3$, $\sqrt{3-t}$ is decreasing, and $\sqrt{2+t}$ is increasing, so the difference is decreasing.
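A quick numeric confirmation of the monotonicity argument (an editorial addition; assumes NumPy):

```python
# On [-2, 3] the function decreases from sqrt(5) at t = -2 to -sqrt(5) at t = 3.
import numpy as np

t = np.linspace(-2, 3, 1001)
f = np.sqrt(3 - t) - np.sqrt(2 + t)
print(f[0], f[-1])                    # 2.2360..., -2.2360...  (i.e. +/- sqrt(5))
print(bool(np.all(np.diff(f) < 0)))   # True: strictly decreasing
# So the range is [-sqrt(5), sqrt(5)].
```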
H: Examples of types of mathematical models I am a student currently doing a course on modelling and simulation. I came across the classifications of mathematical models and studied that they can be classified as static or dynamic, deterministic or stochastic, and as discrete or continuous. This means any mathematical model may belong to one of the 8 categories, as shown in the picture below [image omitted]. Although I am able to understand every classification, I am unable to find real world examples for each type of model. Can someone give good examples for each of the 8 classifications shown here? AI: Deterministic-Static-Discrete: Clock cycles for a computer program to run on a given input. Deterministic-Static-Continuous: Amount of fluid a pipe can hold before breaking. Deterministic-Dynamic-Discrete: CPU percentage upon startup Deterministic-Dynamic-Continuous: Arguably everything part of the classical physical model Stochastic-Static-Discrete: Dice roll outcomes Stochastic-Static-Continuous: Distance from bullseye on a dart throw (could be considered continuous, especially if the quantity is being compared by competing players) Stochastic-Dynamic-Discrete: Gambler's Running Total Stochastic-Dynamic-Continuous: Weather
H: Where to find Rudin's references. Principles of Mathematical Analysis (Rudin) comes with various references to proofs and results that have appeared in several journals, such as those of the AMS and the American Mathematical Monthly, but I have visited their pages and I actually don't know how to look for the articles that the book quotes. AI: If you have access to JSTOR, MathSciNet, or other websites which contain compilations of journal articles, you can log on with your university access if you are a student. If not, you can likely find a specific reference by searching for the title followed by "free pdf" or "pdf download". You can also sometimes find a reference by searching for the author's arXiv.org account or their university website, where some professors post their papers free of charge.
H: Derivative of the maximum of two random variables For any two real numbers $a$ and $b$ and any two random variables (with no mass points in their distributions) $x$ and $y$, why is it that the derivative of $E[\max\{a+x,b+y\}]$ with respect to $a$ is $\Pr(a+x>b+y)$? AI: Warning: The statement is not true as you have asked it. We will only be able to show that the derivative exists at points $a$ where $P[a+x=b+y]=0$. I give a counterexample below for when this condition is not met. Let $|h_n| \downarrow 0$. Consider that for all $\omega \in \Omega,$ for all $n$ large (depending on $\omega$), we will have $$\max\{a+h_n+x(\omega),b+y(\omega)\}-\max\{a+x(\omega),b+y(\omega)\} $$ $$ = \left\{\begin{array}{cc} h_n, & a+x(\omega)>b+y(\omega)\\ 0,& a+x(\omega)< b+y(\omega)\\ h_n 1_{h_n>0},&a+x(\omega)=b+y(\omega) \end{array} \right. $$ Thus the difference quotient satisfies $$ \lim_{n \to \infty} \frac{1}{h_n}(\max\{a+h_n+x(\omega),b+y(\omega)\}-\max\{a+x(\omega),b+y(\omega)\} ) = 1_{a+x(\omega) > b+y(\omega)} $$ for all $\omega$ where $a+x(\omega)\neq b+y(\omega)$. We see that the derivative will exist at $a$ if $P(a+x=b+y)=0$ by taking expectations and applying dominated convergence (the difference quotients are bounded!). Doing so gives $$\partial_a E[\max\{a+x,b+y\}] = E[1_{a+x > b+y}] = P[a+x > b+y].$$ Note that we did not need independence of $x$ and $y$. Example of the derivative not existing if $P[a+x = b+y]>0$. Take a fair 2-coin flip space, with $x$ being the indicator of the first flip being heads and $y$ being the indicator of the second flip being heads. Take $a=b=0$. Of course $P[x=y]=\frac12$. Then the difference quotient turns out to look like $$ \frac{\max (0,h)+\max (0,h+1)+\max (1,h)+\max (1,h+1)-3}{4 h} $$ as a function of $h$ [plot omitted]. In fact, Expectation[ Max[x + a, y], {Distributed[x, BernoulliDistribution[1/2]], Distributed[y, BernoulliDistribution[1/2]]}] gives $$ \frac{1}{4} (\max (0,a)+\max (0,a+1)+\max (1,a)+\max (1,a+1)) $$ which is piecewise linear in $a$ with a corner at $a=0$ (slope $\frac14$ from the left and $\frac34$ from the right) [plot omitted]. Thus the derivative does not exist for $a=0$.
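For the continuous case, a quick Monte Carlo sketch makes the identity plausible (an editorial addition; the standard normal distributions are just an illustrative choice, not from the original):

```python
# Difference quotient of E[max(a+x, b+y)] vs. P[a + x > b + y],
# with x, y independent standard normals.
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal((2, 10**6))
a, b, h = 0.3, 0.0, 1e-3

quotient = (np.maximum(a + h + x, b + y) - np.maximum(a + x, b + y)).mean() / h
prob = (a + x > b + y).mean()
print(quotient, prob)  # both approximately Phi(a/sqrt(2)) = 0.584...
```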
H: Convergence of $\int_0^\infty x \sin e^x \, dx$ I'm trying to demonstrate the convergence/divergence of a couple of integrals. They are $\int_0^\infty x \sin e^x\,dx$ and $\int_0^{\pi/2} \sin(\sec x)\,dx$. There was a previous exercise similar to the first one, $\int_0^\infty e^x \sin e^x \, dx$. I concluded that this one diverges, since by substitution I was able to change it to $\int_1^\infty \sin u\ du$. But I haven't been as successful in substitution for the first one: if we try $u=e^x$, we get $x \sin e^x \, dx = \frac{\ln u}{u}\,\sin u \,du$, but now we can't make the limits work. For the second one, obviously the problem is that $\sec$ goes to infinity as $x\to \pi/2$, so $\sin$ oscillates wildly. How can we investigate the convergence here? AI: For the first function, we are integrating from $0$ to $M$, and then making $M$ get big. Rewrite our function as $xe^{-x} e^x\sin(e^x)$. Integrate by parts, letting $u=xe^{-x}$ and $dv=e^x \sin(e^x)\,dx$. Then $du=(-xe^{-x}+e^{-x})\,dx$ and we can take $v$ to be $-\cos(e^x)$. Now everything behaves very nicely for large $M$ because of the killer $e^{-x}$, and we get convergence.
H: $\overline{A}\cap B\neq\emptyset$ or $\overline{B}\cap A\neq\emptyset$ implies $\operatorname{dist}(A,B)=0$ $\overline{A}\cap B\neq\emptyset$ or $\overline{B}\cap A\neq\emptyset$ implies $\operatorname{dist}(A,B)=0$. I have tried to prove this but was unable to find a decent method. Any help will be appreciated. AI: HINT: Prove the contrapositive: if $\operatorname{dist}(A,B)>0$, then $(\operatorname{cl}A)\cap B=\varnothing=A\cap\operatorname{cl}B$.
H: Solving Differential equation with partial fraction decomposition I am a little rusty with some calculus and need some help with the following equation: \begin{equation} \int\dfrac{f'(x)}{f(x)}dx = \int\dfrac{1}{-x+x^2y}dx \end{equation} Where $y$ is a constant. My idea is to use some kind of $u$-substitution, as I know $\int\frac{dv}{v} = \ln(v)$. This gives: \begin{align} \ln(f(x)) =& \int\dfrac{1}{-x+x^2y}dx \\ \end{align} Then I see to solve for $f(x)$ I can exponentiate. To evaluate the integral on the right I would first do partial fraction decomposition, which gives $A=-1$ and $B=y$. Then I have: \begin{align} \ln(f(x)) = \int\dfrac{-1}{x}dx + \int\dfrac{y}{-1+xy}dx \\ \end{align} Then I get: \begin{equation} f(x) = x+-1+xy = -1+(y+1)x \end{equation} Is this correct? AI: Everything you've done looks correct, except from the second-to-last to the last lines. After the integration you get: $$\ln(f(x)) = -\ln(x) + \ln(-1+xy) + C$$ (Don't forget the +C) Exponentiating both sides, we get ($\exp(x)$ is shorthand for $e^x$): $$\begin{align} f(x) &= \exp(-\ln(x) + \ln(-1+xy) + C)\\ &=\exp(-\ln(x))\cdot\exp(\ln(-1+xy))\cdot\exp(C)\\ &=\left(\frac{1}{x}\right)\left(xy-1\right)\cdot(C)\\ &=\left(y - \frac{1}{x}\right)\cdot C \end{align}$$ Note that $\exp(a+b)\ne\exp(a)+\exp(b)$. EDIT: To show that this satisfies the original equation: $$\frac{d f}{dx} = \frac{C}{x^2}$$ $$\begin{align} \frac{f'(x)}{f(x)} &= \frac{\frac{C}{x^2}}{\left(y - \frac{1}{x}\right)\cdot C}\\ &=\frac{1}{x^2\left(y - \frac{1}{x}\right)}\\ &=\frac{1}{x^2y-x} \end{align}$$ Thus: $$\int\frac{f'(x)}{f(x)}\,dx = \int\frac{1}{x^2y-x}\,dx$$
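The corrected answer is easy to sanity-check symbolically (an editorial addition; assumes SymPy):

```python
# Verify that f(x) = C*(y - 1/x) satisfies f'(x)/f(x) = 1/(x^2*y - x).
from sympy import symbols, simplify

x, y, C = symbols('x y C')
f = C * (y - 1/x)
print(simplify(f.diff(x)/f - 1/(x**2*y - x)))  # 0
```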
H: Verifying statements for non-zero matrix Let $N$ be a non-zero $3 \times 3$ matrix with the property $N^2=0$. Which of the following is true? (A) $N$ is not similar to a diagonal matrix. (B) $N$ is similar to a diagonal matrix. (C) $N$ has one non-zero eigenvector. (D) $N$ has three linearly independent eigenvectors. AI: $N\ne0$ implies its minimal polynomial is $x^2$, whose roots are not distinct. Therefore $N$ is not diagonalizable. Consequently (B) and (D) are false and (A) is true. Since $N$ has the eigenvalue $0$, (C) is true by definition.
H: Combinatorially showing $\lim_{n\to \infty}{\frac{2n\choose n}{4^n}}=0$ I am trying to show that $\lim_{n\to \infty}{\frac{2n\choose n}{4^n}}=0$. I found that using Stirling's approximation, I can get: $$ \lim_{n\to \infty}{\frac{2n\choose n}{4^n}}= \lim_{n\to \infty}{\frac{(2n)!}{(n!)^2\cdot 2^{2n}}}= \lim_{n\to \infty}{\frac{(\frac{2n}{e})^{2n}\sqrt{2\pi (2n)}}{((\frac{n}{e})^n\sqrt{2\pi n})^2\cdot 2^{2n}}} =\lim_{n\to \infty}{\frac{2\sqrt{\pi n}}{2\pi n}} =0 $$ But this seems inelegant, and it feels like there should be a more elegant, combinatorial proof. Is there an easier way? AI: Not a combinatorial approach, but a fun way of doing it is showing that $$\frac{\binom{2n}{n}}{4^n}=\frac{1}{2\pi}\int_0^{2\pi} \cos^{2n} x\; dx$$ Basically, writing $\cos x = \frac12\left(e^{ix}+e^{-ix}\right)$, we see that $\cos^{2n} x$ has constant term $\binom{2n}{n}/4^n$. Since $\cos^{2n} x\to 0$ for almost all $x$, it is pretty easy to show that the above integral tends to zero as $n\to\infty$. Specifically, on the intervals $(\varepsilon,\pi-\varepsilon)\cup(\pi+\varepsilon,2\pi-\varepsilon)$, $\cos^{2n}x\to 0$ uniformly.
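The integral identity is easy to check numerically (an editorial addition; assumes NumPy, and approximates the average over one period by a uniform-grid mean):

```python
# Average of cos(x)^(2n) over [0, 2*pi] vs. C(2n, n)/4^n, plus the
# 1/sqrt(pi*n) decay rate from the Stirling computation.
import numpy as np
from math import comb, pi

x = np.linspace(0, 2*pi, 200000, endpoint=False)
for n in (1, 3, 10, 50):
    avg = (np.cos(x) ** (2*n)).mean()
    print(n, avg, comb(2*n, n) / 4**n, 1/np.sqrt(pi*n))
# The first two columns agree; the third column tracks their decay.
```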
H: I want to prove an inequality between limits. I want to prove that if $f:\mathbb R \rightarrow \mathbb R$ is continuous and $x_n$ is a bounded sequence, then $\liminf_{n\rightarrow \infty}f(x_n) \leq f(\liminf_{n\rightarrow \infty}x_n)$. Suppose $\liminf_{n\rightarrow \infty}x_n=a$ $$f\text{ continuous }\Rightarrow \forall \varepsilon>0,\ \exists \delta>0,\quad |x_n - a|<\delta \Rightarrow |f(x_n)-f(a)|<\varepsilon. \tag{1}$$ It follows from the definition of limit inferior that there is a subsequence $x_{n_k}$ for which $x_{n_k}\rightarrow a$. In other words, $\exists k_0 \in \mathbb N$, such that $\forall k>k_0$, $|x_{n_k}-a|<\delta$. Now, by (1) we get $|x_{n_k} -a|<\delta \Rightarrow |f(x_{n_k}) - f(a)|<\varepsilon$, $\forall k>k_0$. My question is: May I infer from all this that $\liminf_{n\rightarrow \infty}f(x_n) \leq f(\liminf_{n\rightarrow \infty}x_n)$? Why? AI: Let $a = \liminf_n x_n$, then there is some subsequence $x_{n_k} \to a$. Since $f$ is continuous, $f(x_{n_k}) \to f(a)$. Hence $\liminf_n f(x_n) \le \liminf_k f(x_{n_k}) = \lim_k f(x_{n_k}) = f(a)$. Note: The only tricky part here is showing $\liminf_n f(x_n) \le \liminf_k f(x_{n_k})$. Let $I = \{ n_k \}$. Then we have $\inf_{k \ge n} f(x_k) \le \inf_{k \ge n, k \in I} f(x_k)$, and since both sides are non-decreasing, we have $\inf_{k \ge n} f(x_k) \le \lim_n \inf_{k \ge n, k \in I} f(x_k)$ followed by $\lim_n \inf_{k \ge n} f(x_k) \le \lim_n \inf_{k \ge n, k \in I} f(x_k)$. Hence $$\liminf_n f(x_n) = \lim_n \inf_{k \ge n} f(x_k) \le \lim_n \inf_{k \ge n, k \in I} f(x_k) = \liminf_k f(x_{n_k})$$
H: Help with graphing a piecewise function What would be the graph and domain of this function? My domain is $(- \infty, \infty)$. I am stuck on graphing $-2x$. $$g(x)=\begin{cases} x+9 & \text{if }x<-3,\\ -2x & \text{if }|x|\leq 3,\\ -6 & \text{if }x>3. \end{cases}$$ AI: $-2x$ is a straight line. Your domain for this piece is $[-3,3]$. At $x=-3$, $-2x=6$. At $x=3$, $-2x=-6$. So draw a straight line between $(-3,6)$ and $(3,-6)$.
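A short plotting script for the whole piecewise function (an editorial addition; assumes NumPy and Matplotlib):

```python
# Plot the three pieces of g(x).
import numpy as np
import matplotlib.pyplot as plt

x1 = np.linspace(-8, -3, 200, endpoint=False)  # x < -3: the line x + 9
x2 = np.linspace(-3, 3, 200)                   # |x| <= 3: the line -2x
x3 = np.linspace(3, 8, 200)[1:]                # x > 3: the constant -6

plt.plot(x1, x1 + 9, 'b', x2, -2 * x2, 'b', x3, np.full_like(x3, -6.0), 'b')
plt.show()
# Note the pieces actually meet at (-3, 6) and (3, -6), so the graph is connected.
```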
H: Prove that $(x+1)^{\frac{1}{x+1}}+x^{-\frac{1}{x}}>2$ for $x > 0$ Let $x>0$. Show that $$(x+1)^{\frac{1}{x+1}}+x^{-\frac{1}{x}}>2.$$ Do you have any nice method? My idea: let $F(x)=(x+1)^{\frac{1}{x+1}}+x^{-\frac{1}{x}}$; then we have $F'(x)=\cdots$ But it's ugly. Do you have nicer methods? Thank you. By the way, I have seen this similar problem: for $0<x<1$ we have $$x+\dfrac{1}{x^x}<2,$$ and this problem has a nice method, because $$\dfrac{1}{x^x}=\left(\dfrac{1}{x}\right)^x\cdot 1^{1-x}<x\cdot\dfrac{1}{x}+(1-x)\cdot 1=2-x.$$ AI: First, note that $f(x)=x^{\frac{1}{x}}$ attains its maximum at $x=e$, so $f$ is increasing on $(0,e)$ and decreasing on $(e,\infty)$; in particular, when $x<e-1$ we have $f(x+1)>f(x)$, and when $x>e$ we have $f(x)>f(x+1)$. The left-hand side of the inequality is $f(x+1)+f(x)^{-1}$. For $x<e-1$ the claim is easy: since $f(x+1)>f(x)$, $$f(x+1)+f(x)^{-1}>f(x)+f(x)^{-1}\ge 2,$$ using $t+\tfrac1t\ge2$ for $t>0$. When $e-1\le x\le e$: $f$ is monotone increasing there, so $f(x)^{-1}\ge f(e)^{-1}$, and $f$ is monotone decreasing on $[e,e+1]$, so $f(x+1)\ge f(e+1)$; thus $$f(x+1)+f(x)^{-1}\ \ge\ (e+1)^{\frac{1}{e+1}}+\dfrac{1}{e^{\frac{1}{e}}}\approx 2.12>2.$$ For $x>e$, let $g(x)=(x+1)^{\frac{1}{x+1}}+x^{-\frac{1}{x}}$. I will prove $g(x)$ is monotone decreasing, so that $g(x)>\lim_{x\to\infty}g(x)=2$. We compute $$g'(x)=x^{-2-\frac{1}{x}}(\ln x-1)-(1+x)^{-2+\frac{1}{1+x}}(\ln(1+x)-1).$$ Since $\ln(1+x)-1>\ln x-1>0$ for $x>e$, it suffices to prove $x^{-2-\frac{1}{x}}\le(1+x)^{-2+\frac{1}{1+x}}$, i.e. $$\dfrac{x^2\,x^{\frac{1}{x}}\,(x+1)^{\frac{1}{x+1}}}{(x+1)^2}\ge1.$$ Because $x^{\frac{1}{x}}>(x+1)^{\frac{1}{x+1}}$ for $x>e$, this in turn follows from $$\left(\dfrac{x\,(x+1)^{\frac{1}{x+1}}}{x+1}\right)^2\ge1 \iff \dfrac{x\,(x+1)^{\frac{1}{x+1}}}{x+1}\ge1 \iff \dfrac{1}{x+1}\ln(x+1)\ge\ln(x+1)-\ln(x) \iff (x+1)\ln(x)\ge x\ln(x+1) \iff x^{\frac{1}{x}}\ge(x+1)^{\frac{1}{x+1}},$$ which indeed holds for $x>e$. So $g'(x)<0$ when $x>e$. Done
H: If an arithmetic progression starts with 4, what is the common difference if the sum of the first 12 terms is twice the sum of the first 8 terms? An arithmetic progression (AP) has 4 as its first term. What is the common difference if the sum of the first 12 terms is 2 times the sum of the first 8 terms? AI: Let $d$ be the common difference. The sum of the first $12$ terms is then $$4 + (4 + d) + (4 + 2d) + \ ... \ + (4 + 11d)$$ Recall now the key fact that the sum of the first $n$ integers is $\frac{(n+1)n}{2}$. Obviously you can write the expression for the sum of the first $8$ terms similarly and it should now reduce just to arithmetic.
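For completeness, here is the arithmetic carried out (an editorial addition, not part of the original hint, using the standard formula $S_n=\frac n2\bigl(2a+(n-1)d\bigr)$ with $a=4$): $$S_{12}=2S_8 \implies 6(8+11d)=2\cdot 4(8+7d) \implies 48+66d=64+56d \implies 10d=16 \implies d=\frac85.$$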
H: Are the strong limit cardinals precisely those of the form $\beth_\lambda$, where $\lambda$ is a limit ordinal or $0$? I know that $\aleph_\lambda$ is a weak limit cardinal iff $\lambda$ is a limit ordinal or $0$. In the absence of GCH, can we similarly prove that $\kappa$ is a strong limit cardinal iff $\kappa=\beth_\lambda$ with $\lambda$ is a limit ordinal or $0$? I'm guessing 'no', because I imagine that ZFC has models wherein the beth numbers 'skip over' some of the strong limit cardinals. AI: I’m assuming the axiom of choice. If $\lambda$ is a limit ordinal, then by definition $\beth_\lambda=\bigcup_{\alpha<\lambda}\beth_\alpha$. Let $\kappa=\operatorname{cf}\lambda$, and let $\langle\alpha_\xi:\xi<\kappa\rangle$ be an increasing sequence cofinal in $\lambda$. Then $\langle\beth_{\alpha_\xi}:\xi<\kappa\rangle$ is cofinal in $\beth_\lambda$, so for any cardinal $\mu<\beth_\lambda$ there is a $\xi<\kappa$ such that $\mu<\beth_{\alpha_\xi}$ and hence $2^\mu\le\beth_{\alpha_\xi+1}<\beth_\lambda$. It follows that $\beth_\lambda$ is a strong limit cardinal. Added: Conversely, suppose that $\kappa>\omega$ is a strong limit. If $\beth_\alpha<\kappa$ for some ordinal $\alpha$, then $\beth_{\alpha+1}<\kappa$. Let $\lambda=\sup\{\alpha:\beth_\alpha<\kappa\}$; then $\beth_\lambda\le\kappa$, but if $\beth_\lambda<\kappa$, then $\beth_{\lambda+1}<\kappa$, contradicting the choice of $\lambda$. Thus, $\kappa=\beth_\lambda$.
H: gauss map takes geodesics to geodesics Let $S$ be a regular surface, and let $\gamma: I \to S$ be a geodesic. Let $ N: S \to S^2 $ be the Gauss map. Then $ \beta(s) = N(\gamma(s))$ is a curve $\beta : I \to S^2$ (where $S^2$ denotes the unit sphere). I want to prove that $\beta$ is also a geodesic, and I would also like to know a more general result for other kinds of maps, not only the Gauss map $N$. AI: As Ted observes in a comment, if the Gauss map sends geodesics in your surface $S$ to geodesics in the sphere, then geodesics in $S$ are plane curves. It is a classic result that a (connected!) surface all of whose geodesics are plane curves is in fact (part of) a plane or a sphere. This last fact is a simple exercise. To start, show that, in general, a geodesic which is a plane curve is a line of curvature. Since in our surface all geodesics are lines of curvature, all of its points are umbilical. Next show that if a surface has all its points umbilical, it is part of a plane or a sphere. These two observations are standard exercises, and should be in pretty much every textbook on the geometry of surfaces.
H: Given a day of the week and the day of the month, what is the range of time within which it will uniquely specify a single date? In other words, given "2 Tues" (e.g. today, 2 July 2013), for how long must I wait until it is Tuesday on a 2nd of the month again? How does this interval change for each week? Is it constant or does it fluctuate? If it fluctuates, is there a minimum bound on this duration of "day/day-of-week certainty"? P.S. I actually have a real use for the answer. You see, in my current zsh shell command prompt on my terminal on my computers, I print out the day of the week (so I can reasonably tell how recently I did something), and also the day of the month (because something more than 7 days old would be ambiguous without at least this info). The goal was to produce a format that is easy to parse and helpful for quick relative calculations (e.g. "Tues" is more helpful than "7/1/2013", if for no other reason than the fact that I'm more apt to know the day of the week than the day of the month). I can't think of a practical reason not to also stick the month in there (as it'd take up only two or three extra characters), so clearly the practical importance is low, but it did get me thinking about whether adding the month really would reduce the uncertainty period by a factor of exactly 12 or not. Edit: You know, I'm not sure how e.g. 28 days in February (and how that is divisible by 7) slipped past me. Feeling stupid right about now. AI: Note that (except in leap years) February 1 and March 1 must obviously fall on the same day of the week, and so too will February $n$ and March $n$ for any $n$, since February is exactly 4 weeks long. Similarly, September $n$ and December $n$ always fall on the same day of the week, because September + October + November adds up to 30+31+30 = 91 days = exactly 13 weeks. Similar analysis, or staring at a calendar, shows that in the same way, April and July's dates always fall on the same days of the week (as do January's in leap years), and also March and November (and February, in non-leap years), October and January in non-leap years, and August and February in leap years. May and June, however, are perfectly safe. If I tell you that I got married last year on Saturday the 26th, you know it must be May, because May is the only month last year whose 26th day fell on a Saturday. May dates are always unambiguous in this way, and so are June dates. Additionally, October's dates are unambiguous in leap years, and August's in common years. So it is quite ambiguous: a given date will be ambiguous roughly ¾ of the time. The basic problem is that there are twelve possible months that a particular date could fall in, and the seven days of the week cannot give enough information to disambiguate these. Or put another way, there are twelve months and only seven weekdays, so you know some of the months must be doubling up.
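The month pairings are easy to verify by brute force (an editorial addition; uses only the Python standard library):

```python
# For each year, find which months contain no (day-of-month, weekday)
# combination that also occurs in another month of the same year.
from datetime import date

def weekday_map(year):
    """Map (day-of-month, weekday) -> set of months in which it occurs."""
    out = {}
    for month in range(1, 13):
        for day in range(1, 32):
            try:
                key = (day, date(year, month, day).weekday())
            except ValueError:
                continue  # this day doesn't exist in this month
            out.setdefault(key, set()).add(month)
    return out

for year in (2013, 2012):  # a common year and a leap year
    ambiguous = {k: v for k, v in weekday_map(year).items() if len(v) > 1}
    months = set().union(*ambiguous.values())
    print(year, sorted(set(range(1, 13)) - months))
# 2013 [5, 6, 8] and 2012 [5, 6, 10]: May/June/August are safe in common
# years, May/June/October in leap years.
```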
H: Why do we care about specifying events in a probability space? Why aren't probability spaces just defined as $(\Omega, p)$ pairs with $\Omega$ as the sample space, $\sum_{\omega \in \Omega}p(\omega) = 1$, and for a subset $A \subseteq \Omega$, $\Pr(A) := \sum_{\omega \in A}p(\omega)$? Said another way, why aren't all $(\Omega, \mathcal{A}, p)$ probability spaces of the form $(\Omega, \mathcal{P}(\Omega), p)$? What do we gain by giving ourselves the freedom to exclude certain subsets of $\Omega$ from $\mathcal{A}$ ? AI: That's a good question. An answer is that there are many probability spaces $(\Omega,\mathcal{A},p)$ where the probability function $p$ cannot be extended to all of $\mathcal{P}(\Omega)$. For example, consider the probability space where $\Omega=[0,1]$, $\mathcal{A}$ is the Lebesgue $\sigma$-algebra on $[0,1]$, and $p=\lambda$ is the Lebesgue measure. Then there is no way of extending $p$ to have a value when given the Vitali set (the standard example of a non-Lebesgue measurable subset of $[0,1]$).
H: Probability (arranging around a circle) What is the probability that when arranging n people around a circle, two people with the same birthday (assume no leap years) will be adjacent to each other? AI: Your probability model is not quite clear. If you have $n$ given people and arrange them randomly, the probability you ask for is $1$ if the $n$ people all have the same birthday, $0$ if they all have different birthdays. I don't think that's what you meant to ask. Reading between the lines, I'm guessing that the $n$ people at your round table are an independent random sample from an infinite population where all $365$ birthdays are equally likely. In that case, the number of possible outcomes is $365^n$, while the number of outcomes where no two people in adjacent seats have the same birthday is $364^n+364(-1)^n$. Hence the probability that no two people with the same birthday are sitting next to each other is $\dfrac{364^n+364(-1)^n}{365^n}$, and your answer is $1-\dfrac{364^n+364(-1)^n}{365^n}$. The numerator is obtained by setting $t=365$ in the expression $(t-1)^n+(t-1)(-1)^n$, the chromatic polynomial of the cycle graph $C_n$.
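The cycle-graph counting formula used above is easy to verify by brute force for a small "year" of $t$ days (an editorial addition):

```python
# Count birthday assignments around a cycle with no two adjacent seats
# sharing a birthday, and compare with (t-1)^n + (t-1)*(-1)^n.
from itertools import product

def count_good(n, t):
    return sum(
        all(seq[i] != seq[(i + 1) % n] for i in range(n))
        for seq in product(range(t), repeat=n)
    )

t = 5
for n in (3, 4, 5, 6):
    print(n, count_good(n, t), (t - 1)**n + (t - 1) * (-1)**n)
# The two columns agree for every n.
```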
H: Is there a transitive set that is non empty and doesn't contain the empty set? Sorry for being so naive, and this may be a silly question, but all the examples I can think of contain the empty set, and it's not clear to me whether this makes sense at least intuitively. For example, let's suppose that we have a set $A\neq \emptyset$ that is transitive. Then there's $x\in A$. But then every element $x_{i}$ of $x$ is an element of $A$. Then every element $x_{j}$ of $x_{i}$ is in $A$. And so on. This process might be infinite, and hopefully $A$ doesn't contain the empty set... AI: Assuming the Axiom of Foundation, no there isn't. For example, assume $t \neq \emptyset$ is transitive. Let $x$ be an $\in$-minimal element of $t$. Then $x \cap t = \emptyset$. But since $t$ is transitive, $x \subseteq t$ so that $x = x \cap t = \emptyset$. Edit 1: As you point out, possibly that chain of elements could be infinite, but the Axiom of Foundation says that $\in$ is well-founded, and that is equivalent to saying that there is no descending chain $ \cdots \in x_1 \in x_0$. Edit 2: As Andres points out below, not assuming Foundation it is consistent that there is a set $A = \{ A \}$, i.e. $A \in A$, so that $A \neq \emptyset$. But then $A$ is transitive because $\forall y \in A (y \subseteq A)$ ($A \in A$ and surely $A \subseteq A$).
H: Proofs: the running in the sun conjecture (I made it up - explained below). Is it true and how can it be proven? This is just for fun. It might actually turn out to be easy - or it might be hard - I'm not sure. I'm an engineer, not a mathematician, but I think I can learn something from this ... I conjecture that if I go for a run at a constant pace along any path where I end at the same spot where I started, and if we assume the sun has stood completely still throughout my run, then I'll have gotten exactly as much sun on the front of my body as I did on the back of my body. More precisely, I conjecture that if I take any spot on my body, it will have gotten the same amount of sun as the opposite spot (in reference to a vertical axis going through my center; so the "opposite" of the tip of my left shoulder is the tip of my right shoulder at the same height, etc.). Maybe we can just assume I'm a cylinder for simplicity (?). Let's definitely assume the sun is at an "infinite distance" (i.e. the rays are parallel over the entire region where I'm running). First of all, is my conjecture true? Second of all, how would we approach trying to prove it if it's true? My intuition tells me something related to the standard integral theorems of vector calculus might be a way to go: Stokes' theorem, etc. For simplicity, we can start by assuming a simple closed path that doesn't cross itself ... But what if we want to prove it for a path that crosses itself? Any thoughts? AI: I don't know if it's a problem to answer my own question, but I think I've pretty much figured it out just now: Essentially, we're asking whether the line integral of a constant vector field always sums to zero along any arbitrary closed path. Here it is - Line integral: http://en.wikipedia.org/wiki/Line_integral#Path_independence The "path independence" part of the link above is what enables it. We can represent the constant vector field $F$ of the sun's rays as the gradient of a linear scalar field $G$ (one that changes linearly with $x$, $y$, and $z$). Then, the path integral is equal to $G(x_0) - G(x_0) = 0$ due to path independence of the line integral of a gradient field ... (see link). If the body is a cylinder, then the normal vector on any portion of the surface is always at a constant angle from the vector facing forward along the running path (because we assume we're a rigid body - a cylinder or any other shape) ... So, although the path integral "dots" the vector field with the forward direction along the path, all side directions keep the same angle with the path throughout the entire path, so their integral is a constant times the same closed path integral, which is zero. So, we've proved it. In fact, we've proved it for an arbitrary three-dimensional rigid body shape, along any planar path (paths in three dimensions or higher introduce new considerations and are not as straightforward). If the path crosses itself, as someone pointed out in one of the other answers, we can break it down into two simple paths that don't cross themselves and prove the same way we just did that the integral around each of the two simple paths sums to zero - so the integral around the path that crosses itself also sums to zero. Actually, we set out to ask the question for a constant vector field, but the argument seems to hold for any kind of field that satisfies $F = \nabla G$ ... ($F$ is the gradient of $G$ - where $G$ is a scalar field).
So, if $F$ meets this condition, then, around a closed path, one side of our body will end up getting as much of $F$ as the opposite side of our body will. However, we should actually note that this ignores one of the implicit, unstated assumptions of the question: what if indeed we take the sun as a spherical source that's a finite distance away, and we walk around it in a circle with the center of the circle right below the sun? The rays will then still be representable as a gradient field of a scalar function (since they follow an inverse power law). So, path independence of the line integral will work in that case too, because on either side of the body we're exposed to the same gradient field. The assumption being violated, however, is that one side of the body will block the rays from reaching the opposite side - i.e. we are an opaque object. Because of this, it seems to work only for parallel constant rays ... To clarify, this works if the path is planar and the vectors normal to the body are parallel to that plane. For example, sun exposure of the top of the head and one's back will not be the same, but exposure of the left shoulder will be the same as that of the right shoulder, and the front will be exposed the same as the back, regardless of where the sun is and regardless of the specific path ... It may or may not work in general with non-planar paths, but if it does, we would need certain conditions on the orientation of the rigid body as it follows the path, and it still won't be true for all surfaces of the body (comments on this point are welcome).
H: How to integrate these integrals $$\int^{\frac {\pi}2}_0 \frac {dx}{1+ \cos x}$$ $$\int^{\frac {\pi}2}_0 \frac {dx}{1+ \sin x}$$ It seems that substitutions make things worse: $$\int \frac {dx}{1+ \cos x} ; t = 1 + \cos x; dt = -\sin x dx ; \sin x = \sqrt{1 - \cos^2 x} = \sqrt{1 - (t-1)^2} $$ $$ \Rightarrow \int \frac {-\sqrt{1 - (t-1)^2}}{t} = \int \frac {-\sqrt{t^2 + 2t}}{t} = \int \frac {-\sqrt t \cdot \sqrt t \cdot \sqrt{t + 2}}{\sqrt t \cdot t} $$ $$= \int \frac {- \sqrt{t + 2}}{\sqrt t } = \int - \sqrt{1 + \frac 2t} = ? $$ What next? I don't know. Also, I've tried another "substitution", namely $1 + \cos x = 2 \cos^2 \frac x2$: $$ \int \frac {dx}{1+ \cos x} = \int \frac {dx}{2 \cos^2 \frac x2} = \int \frac 12 \cdot \sec^2 \frac x2 \,dx = ? $$ And failed again. Help me, please. AI: HINT: $$\text{As }\frac{d(\tan mx)}{dx}=m\sec^2mx,$$ $$\int\sec^2mx\,dx= \frac{\tan mx}m+C$$ here $m=\frac12$, or using the Weierstrass substitution $\tan \frac x2=t,$ $\frac x2=\arctan t\implies dx=2\frac{dt}{1+t^2}$ and $\cos x=\frac{1-t^2}{1+t^2}$ $$\int \frac{dx}{1+\cos x}=\int dt=t+K=\tan\frac x2+K$$
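Both definite integrals come out to exactly $1$, which is easy to confirm symbolically (an editorial addition; assumes SymPy, which handles these standard rational-trig integrals):

```python
# Symbolic check that both definite integrals equal 1.
from sympy import symbols, integrate, cos, sin, pi

x = symbols('x')
print(integrate(1/(1 + cos(x)), (x, 0, pi/2)))  # expected: 1
print(integrate(1/(1 + sin(x)), (x, 0, pi/2)))  # expected: 1
```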
H: Which of these statements regarding metric spaces are true? The following are a few statements from various metric-space multiple-choice questions that I couldn't figure out whether they are true or false. Please offer some help in answering them. Let $(X,d)$ be a metric space. 1) If $ A,B \subseteq X $ and $ A,B $ are bounded, then $\mathrm{dist}(A,B)>0 \Rightarrow A \cap B = \emptyset$. 2) If $ A \subseteq X $ and $A$ is nowhere dense, then $X \setminus \overline A $ is dense. Now let $ \emptyset \neq S \subseteq X $. 3) If $A \subseteq S$ and $A$ is closed in $S$, then $A$ is closed in $X$. 4) If $A \subseteq X$, then $\overline {A \cap S} $ (closure with respect to $S$) equals $\overline A \cap S$ (the closure of $A$ with respect to $X$). Finally, suppose $ d_1$ and $d_2$ are metrics on $X$ and $ \emptyset \neq A \subseteq X$. 5) If $d_1(x,y) \le d_2(x,y)$ for each $x,y \in X$, then $G$ is $ d_2$-open for each $d_1$-open subset $G$ of $X$. AI: HINTS: 1) is completely trivial, but here's a hint anyway: if $x\in A$, then $\operatorname{dist}(x,A)=0$. 2) just requires you to know the definition of nowhere dense: if $A$ is nowhere dense, then $\operatorname{cl}A$ does not contain any non-empty open set. 3) In the real line, $[1,2)$ is closed in $(0,2)$. 4) What if $A=(0,1)$ and $S=[1,2]$ in $\Bbb R$? 5) If $G$ is $d_1$-open and $x\in G$, then there is an $\epsilon>0$ such that $B_{d_1}(x,\epsilon)\subseteq G$. Show that $B_{d_2}(x,\epsilon)\subseteq G$.
H: How to generate sequence like this? Can you tell what algorithm can generate sequence $x_1, x_2, x_3, x_4, ...$ satisfying: $x_n$ is real, and always $0<x_n<1$. Every change between $x_n$ and $x_{n+1}$, such as increase or decrease and their amount, can be controlled by a variable with values, say, $+ε$ or $-ε$ where $ε$ is not necessarily between 0 and 1. This sequence are intended for me to be probabilities, which can be changed step by step. Thank you. AI: Let $\displaystyle c_n=\tan \left(\tau\left(\frac{x_n}2-\frac{1}4\right)\right )$. Each real $c_n$ corresponds to a unique real $0<x_n<1$ Take $c_{n+1}=c_n\pm \epsilon$ and solve for $x_{n+1}$.
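A minimal Python sketch of this update rule (the function names and the demo step sizes are my own illustration, assuming $\tau=2\pi$):

```python
import math

TAU = 2 * math.pi

def to_c(x):                 # (0, 1) -> R, the answer's change of variables
    return math.tan(TAU * (x / 2 - 0.25))

def to_x(c):                 # R -> (0, 1), the inverse map
    return 2 * (math.atan(c) / TAU + 0.25)

def step(x, eps):
    """One update: move by +/- eps on the unbounded c-scale, then map
    back, so the result always stays strictly between 0 and 1."""
    return to_x(to_c(x) + eps)

x = 0.5
for eps in [+1.0, +1.0, -3.0, +0.5]:   # arbitrary demo steps
    x = step(x, eps)
    print(round(x, 4))
```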
H: Why does $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=0$ imply $\nabla\times\mathbf{E}=\mathbf{0}$? I am looking at Griffith's Electrodynamics textbook and on page 76 he is discussing the curl of electric field in electrostatics. He claims that since $$\oint_C\mathbf{E}\cdot{d}\boldsymbol\ell=0$$ then $$\nabla\times\mathbf{E}=\mathbf{0}$$ I don't follow this logic. Although I know that curl of $\mathbf{E}$ in statics is $\mathbf{0}$, I can't see how you can simply apply Stokes' theorem to equate the two statements. If we take Stokes' original theorem, we have $\oint\mathbf{E}\cdot{d}\boldsymbol\ell=\int\nabla\times\mathbf{E}\cdot{d}\mathbf{a}=0$. How does this imply $\nabla\times\mathbf{E}=\mathbf{0}$? Griffiths seem to imply that this step is pretty easy, but I can't see it! AI: Suppose $\nabla \times {\bf E}$ is a well-behaved function and $\nabla \times {\bf E}\neq 0$ in some region. Then you could find a surface $S$ through which $\int_S \nabla \times {\bf E} \cdot d{\bf a}\neq0$ by making that surface very small and close to the aforementioned region. This contradicts Stokes's theorem, so it must be that $\nabla \times {\bf E}=0$ everywhere.
H: Ways of enumerating a countable set My questions are about enumerating countable sets. Say we have a countable set $X$. How many ways are there of enumerating X? It seems to me, just based on the examples i've seen in maths so far, that most of the time, it doesn't matter too much how you enumerate a set if it's countable, when you're trying to later do something with the set. Are there any situations in mathematics when the actual enumeration might matter? The only situation I can think of that is sortof related to this has to do with decidability and effective procedures in logic. Say we have a set of expressions $A$ in some language $L$, and say we have an effective procedure that, given any expression $\epsilon$, produces the answer "yes" iff $\epsilon \in A$. If we want to create a listing of $A$, we can enumerate all expressions in our language. Say this enumeration is $\epsilon_{1}, \epsilon_{2}, \cdots$. Spend one minute testing $\epsilon_{1}$. Then spend 1 minute testing $\epsilon_{2}.$ Go back to testing $\epsilon_{1}$ again for a minute. Then $\epsilon_{2}$, then $\epsilon_{3}$, then back to $\epsilon_{1}$, (etc)...Whenever our procedure outputs "yes" for an expression, put it in our list. In this way, anything in $A$ will eventually appear in our list. (this is in Enderton's "mathematical intro to logic) Thanks for any help/examples! Sincerely, Vien AI: If $X$ is countably infinite, there are $2^\omega=\mathfrak{c}$ bijections from $\omega$ to $X$ and hence $2^\omega$ different enumerations of $X$. There are times when you want an enumeration that has some specific property. For instance, suppose that you want to enumerate all finite strings over some finite alphabet of symbols. There are countably infinitely many such strings, but you might want specifically an enumeration $\{\sigma_n:n\in\Bbb N\}$ such that if $\sigma_m$ is shorter than $\sigma_n$, then $m<n$. In other words, the empty string must be $\sigma_0$; if there are $k$ symbols, $\sigma_1$ through $\sigma_k$ must be the one-symbol strings in some order; and so on.
H: Show that $\int_{0}^{\infty }\frac {\ln x}{x^4+1}\ dx =-\frac{\pi^2 \sqrt{2}}{16}$ I could prove it using the residues but I'm interested to have it in a different way (for example using Gamma/Beta or any other functions) to show that $$ \int_{0}^{\infty}\frac{\ln\left(x\right)}{x^{4} + 1}\,{\rm d}x =-\frac{\,\pi^{2}\,\sqrt{\,2\,}\,}{16}. $$ Thanks in advance. AI: One possible way is to introduce $$ I(s)=\frac{1}{16}\int_0^{\infty}\frac{y^{s-\frac34}dy}{1+y}.\tag{1}$$ The integral you are looking for is obtained as $I'(0)$ after the change of variables $y=x^4$. Let us make in (1) another change of variables: $\displaystyle t=\frac{y}{1+y}\Longleftrightarrow y=\frac{t}{1-t},dy=\frac{dt}{(1-t)^2}$. This gives \begin{align} I(s)&=\frac{1}{16}\int_0^1t\cdot\left(\frac{t}{1-t}\right)^{s-\frac74}\cdot \frac{dt}{(1-t)^2}=\\ &=\frac{1}{16}\int_0^1t^{s-\frac34}(1-t)^{-s-\frac{1}{4}}dt=\\& =\frac{1}{16}B\left(s+\frac14,-s+\frac34\right)=\\& =\frac{1}{16}\Gamma\left(s+\frac14\right)\Gamma\left(-s+\frac34\right)=\\ &=\frac{\pi}{16\sin\pi\left(s+\frac14\right)}. \end{align} Differentiating this with respect to $s$, we indeed get $$I'(0)=-\frac{\pi^2\cos\frac{\pi}{4}}{16\sin^2\frac{\pi}{4}}=-\frac{\pi^2\sqrt{2}}{16}.$$
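As a sanity check on the final value (a hedged numerical sketch; `quad` copes with the integrable $\ln x$ singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(x) / (x**4 + 1), 0, np.inf)
print(val, -np.pi**2 * np.sqrt(2) / 16)  # both ~ -0.8724
```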
H: A problem on countability and families of sets Let $X$ be a non-empty set and $(A_{\lambda})_{\lambda\in \Delta } $ be a family of subsets of $X$. a) $ \Delta $ is countable and $(A_{\lambda}$ is countable for each $\lambda\in \Delta) \implies \prod_{\lambda\in \Delta} A_{\lambda} $ is countable b) $ \Delta $ is countable and $(A_{\lambda}$ is countable for each $\lambda\in \Delta) \implies \bigcup_{\lambda\in \Delta} A_{\lambda} $ is countable c) $\bigcup_{\lambda\in \Delta} A_{\lambda} $ is countable $\implies \Delta$ is countable d) $\prod_{\lambda\in \Delta} A_{\lambda} \neq \emptyset \implies A_{\lambda} \neq \emptyset $ for each $\lambda \in \Delta $ e) $\rho = \{(x,y)\mid x,y \in A_{\lambda} \text{ for some $\lambda \in \Delta$}\}$ is an equivalence relation Which of these statements are true? I have very little knowledge on countability as it was barely taught so please help on this. I can only get that b) is true and also think e) is true AI: (I will be assuming the axiom of choice throughout, as it's clear from context that you're expected to do so.) You are correct in thinking that (b) is true. (a) is not necessarily true. Take $\Delta=\Bbb N$ and $A_n=\{0,1\}$ for each $n\in\Bbb N$. Then $\prod_{n\in\Bbb N}A_n$ is the set of functions from $\Bbb N$ to $\{0,1\}$. There is a bijection between this set of functions and $\wp(\Bbb N)$, the set of subsets of $\Bbb N$: a set $S\subseteq\Bbb N$ corresponds to its indicator (or characteristic) function $\chi_S$. Since $|\wp(\Bbb N)|>|\Bbb N|$, in this case the product set is uncountable. (c) is not necessarily true even if the sets $A_\lambda$ are required to be non-empty: we might have $\Delta=\Bbb R$, an uncountable set, and $A_\lambda=\{0\}=X$ for each $\lambda\in\Bbb R$, in which case $\bigcup_{\lambda\in\Bbb R}A_\lambda=\{0\}$, which is certainly countable even though $\Bbb R$ is not. For that matter, we could let $X=\Bbb N$, $\Delta=\wp(\Bbb N)$, and $A_\lambda=\lambda$ for each $\lambda\subseteq\Bbb N$. Then $\bigcup_{\lambda\in\Delta}A_\lambda=\Bbb N$, which is countable, but $\Delta=\wp(\Bbb N)$ is uncountable. (d) is true: if even one $A_\lambda$ is empty, then so is $\prod_{\lambda\in\Delta}A_\lambda$. This is because an element $x$ of $\prod_{\lambda\in\Delta}A_\lambda$ is by definition a function with domain $\Delta$ such that $x(\lambda)\in A_\lambda$ for each $\lambda\in\Delta$. If some $A_\lambda=\varnothing$, there clearly are no such functions. (e) is true if, for instance, the sets $A_\lambda$ are pairwise disjoint, but in general it need not be true. For instance, let $\Delta=\{0,1\}$, let $A_0=\{0,1\}$, and let $A_1=\{1,2\}$. Then $$\rho=\{\langle 0,0\rangle,\langle 1,1\rangle,\langle 2,2\rangle,\langle 0,1\rangle,\langle 1,0\rangle,\langle 1,2\rangle,\langle 2,1\rangle\}\;.$$ This relation is certainly reflexive and symmetric, as indeed it will always be, but in this case it's not transitive: $\langle 0,1\rangle\in\rho$ and $\langle 1,2\rangle\in\rho$, but $\langle 0,2\rangle\notin\rho$.
H: Did I solve this system of differential equations right? My problem is this given system of differential equations. $$y_{1}^{\prime}=5y_{1}+2y_{2} \\ y_{2}^{\prime}=-2y_{1}+y_{2}$$ I am looking for the solution. According to one of my earlier questions, I tried the method on my own. Now I fear the solution could be wrong (especially the eigenvectors). My approach was: again, I analyze that it must be an ordinary, linear system of equations, both of first order. Then I built the corresponding matrix as follows: $$\underbrace{\pmatrix{ y_1^{\prime} \\ y_2^{\prime}}}_{\large{ {\vec y^{\prime}}}} = \underbrace{\pmatrix{5 & 2 \\ -2 & 1}}_{\large{\mathbf A}}\underbrace{\pmatrix{y_1\\y_2}}_{\large{\vec y}}$$ that's why: $$\vec y^{\prime} = \pmatrix{5 & 2 \\ -2 & 1}\vec y$$ Then I determined the eigenvalues: they are $r_1 = 3$ and $r_2=3$. Knowing them, I can build the corresponding eigenvectors: they are $\vec v_1 = \pmatrix{ -1 \\ +1}$ and $\vec v_2 = \pmatrix{ 0 \\ 0}$. Now I plug into the equation: $$\vec{x} = c_1e^{r_1t}\vec{v_1}+c_2e^{r_2t}\vec{v_2} \\ \vec{x} = c_1e^{3t}\pmatrix{-1 \\ 1}+c_2e^{3t}\pmatrix{0 \\ 0}$$ this led to my result: $$y_1 = -c_1e^{3t} + 0c_2e^{3t}\\ y_2 = c_1e^{3t} + 0c_2e^{3t} \\ \\ y_1 = -c_1e^{3t}\\ y_2 = c_1e^{3t}$$ But I doubt it's correct. My suspects are the eigenvectors; I really don't know if they are correct. And this could have led to a wrong solution. P.S.: Edits were made to improve language and latex AI: Hint: $$X '(t)=AX(t)$$ Your eigenvalue computation is fine: $\lambda=3$ is a repeated eigenvalue. But $\vec v_2=\pmatrix{0\\0}$ is not an eigenvector (the zero vector never is); here $A-3I$ has rank one, so there is only one independent eigenvector, $v=\pmatrix{-1\\1}$. For such a defective repeated eigenvalue you also need a generalized eigenvector $w$ satisfying $(A-3I)w=v$, for instance $w=\pmatrix{0\\-1/2}$. The general solution is then $$X(t)=c_1e^{3t}v+c_2e^{3t}\left(t\,v+w\right),$$ that is, $$y_1 = -c_1e^{3t}-c_2\,t\,e^{3t},\qquad y_2 = c_1e^{3t}+c_2\left(t-\tfrac12\right)e^{3t}.$$
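A hedged SymPy sketch checking that this general solution really satisfies $X'=AX$ (the generalized eigenvector $w$ is the one chosen above):

```python
import sympy as sp

t, c1, c2 = sp.symbols('t c1 c2')
A = sp.Matrix([[5, 2], [-2, 1]])
v = sp.Matrix([-1, 1])                  # eigenvector for the repeated eigenvalue 3
w = sp.Matrix([0, sp.Rational(-1, 2)])  # generalized eigenvector: (A - 3I) w = v
X = sp.exp(3 * t) * (c1 * v + c2 * (t * v + w))
print((X.diff(t) - A * X).applyfunc(sp.simplify))  # Matrix([[0], [0]])
```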
H: Trace, Kronecker and vec relations I'm reading a paper and got stuck on one of the simplifications that was done without any elaboration. I've taken a course on Linear Algebra, but this is a little out of reach for me... The simplification done is this (note that $T$ is not the transpose but a constant) where it's the second step I don't follow: $$ L(\mathbf{Y}|\mathbf{A})\propto |\mathbf{A_0}|^T\exp\left[-0.5tr(\mathbf{ZA})^\prime(\mathbf{ZA})\right]\propto |\mathbf{A_0}|^T\exp\left[-0.5 vec(\mathbf{A})^\prime(\mathbf{I}\otimes \mathbf{Z}^\prime\mathbf{Z})vec(\mathbf{A})\right] $$ where $\mathbf{Z}=\begin{bmatrix} \mathbf{Y} & -\mathbf{X}\end{bmatrix}$ is $T\times (m+k)$ and $\mathbf{A}=\begin{bmatrix}\mathbf{A}_0 & \mathbf{A}_+\end{bmatrix}^\prime$ is $(m+k)\times m$. It looks quite trivial and it bothers me that I don't even know where to start. I know that $tr(X^\prime Y)=vec(X)^\prime vec(Y)$, but that requires equal size of $X$ and $Y$, does it not? Otherwise $vec(X)^\prime vec(Y)$ wouldn't be possible. (I guess though that if the dimension of $X$ is $k\times p$ and the dimension of $Y$ is $m\times n$, if $kp=mn$ then it'd still work, but that's not the case here). Can anyone point me in the right direction? It's from "Bayesian Methods for Dynamic Multivariate Models" by Christopher Sims and Tao Zha if anyone is interested in the source. AI: $\def\tr{\mathop{\rm tr}}\def\vec{\mathop{\rm vec}}$We have, as $ZA$ and $ZA$ have equal size \begin{align*} \tr\bigl((ZA)^tZA\bigr) &= \vec(ZA)^t\vec(ZA) \end{align*} Now note that $\vec(ZA) = (\mathrm{Id} \otimes Z)\vec(A)$, hence $\vec(ZA)^t = \vec(A)^t(\mathrm{Id}\otimes Z^t)$, giving \begin{align*} \tr\bigl((ZA)^tZA\bigr) &= \vec(ZA)^t\vec(ZA)\\ &= \vec(A)^t(\mathrm{Id}\otimes Z^t)(\mathrm{Id} \otimes Z)\vec(A)\\ &= \vec(A)^t(\mathrm{Id} \otimes Z^tZ)\vec(A) \end{align*} as wanted.
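A small NumPy check of the identity (a sketch with made-up dimensions standing in for $T$ and $m+k$; note that $\operatorname{vec}$ is column-stacking, i.e. `order='F'`):

```python
import numpy as np

rng = np.random.default_rng(0)
T, mk, m = 7, 5, 3                      # hypothetical sizes: Z is T x (m+k), A is (m+k) x m
Z = rng.standard_normal((T, mk))
A = rng.standard_normal((mk, m))

lhs = np.trace((Z @ A).T @ (Z @ A))
vecA = A.flatten(order='F')             # column-stacking vec(A)
rhs = vecA @ np.kron(np.eye(m), Z.T @ Z) @ vecA
print(np.allclose(lhs, rhs))            # True
```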
H: Determine the sets on which $f$ is continuous and discontinuous. Let $f:\mathbb R\to\mathbb R$ be defined by $$f(x):=\begin{cases} x, &\text{if } x\in\mathbb Q\;;\\ -x, &\text{if } x\in\mathbb R\setminus\mathbb Q.\end{cases}$$ Determine the sets on which $f$ is continuous and discontinuous. Prove your answer. I know that I should use sequential criterion for continuity to prove this, but I don't even know in which set will $f$ be continuous or discontinuous. Please give me some ideas. Thank you. AI: By the equality $|f(x)|=|x|$ we can see that $f$ is continuous at $0$. Now for $x\neq 0$, if $x\in\mathbb Q$ there's a sequence $(x_n)$ of irrational numbers such that $x_n\to x$ and if $f$ is continuous at $x$ we have $f(x_n)=-x_n\to-x=f(x)=x$ which is a contradiction. The same method if $x\in \mathbb R\setminus \mathbb Q$.
H: Characterization of positive elements in unital C*-algebra Let $\mathcal{A}$ be a unital C*-algebra (not necessarily commutative) and let $A^*=A\in \mathcal{A}$ be a self-adjoint element with $\vert\vert A \vert\vert \leq 2$. I want to show that $\vert\vert \mathbb{1}-A \vert\vert \leq 1 \Leftrightarrow \sigma(A)\subset [0,\infty)$ (i.e. $A$ is positive), but don't really see the connection. How can I approach this problem? AI: Let $\mathcal B\subseteq \mathcal A$ be the sub-$C^*$-algebra generated by $1$ and $A.$ Then $\mathcal B$ is commutative, so $\mathcal B\simeq C(X).$ Let $f_A\in C(X)$ be the function corresponding to $A.$ The following implications hold: \begin{gather*} ||1-A||\leq 1 \Longleftrightarrow ||1-f_A||_\infty\leq 1 \Longleftrightarrow |1-f_A|\leq 1\ \mbox{on}\ X\\ \Longleftrightarrow 0\leq f_A\leq 2\ \mbox{on}\ X\Longleftrightarrow 0\leq A\leq 2\ (\mbox{in } \mathcal B)\Longleftrightarrow 0\leq A\leq 2\ (\mbox{in } \mathcal A) \end{gather*} (Here $||\cdot||_\infty$ is the supremum norm on $X,$ i.e. norm in $C(X)$) It solves your problem.
H: Question on different definitions of upper (hemi)semicontinuity for set-valued maps In this thesis(page $8-10$), it is asserted, two definitions are equivalent, if the set-valued map $f$ maps to a compact space. Definition $1$:$f : X \to 2^Y$ is upper semicontinuous if: $f(x)$ is compact for all $x \in X$, and for any $x \in X$, given any $\epsilon > 0$, there exists a $\delta > 0$ such that if $z \in N_{\delta}(x) \cap X$, then $f(z) \subset N_{\epsilon}(f(x))$ Here, $N_{\epsilon}(f(x))$ is a neighbourhood of “radius” $\epsilon$ of the set $f(x)$. For any set $A$, we define a neighbourhood of radius $\epsilon$ of a set as follows:$$N_{\epsilon}(A) = \bigcup_{a \in A}N_{\epsilon}(a)$$ Definition $2$:$f : X \to 2^Y$ is upper semicontinuous if: for all $x \in X$, if $x$ is in the upper inverse of an open set then so is a neighbourhood of $x$. An upper inverse of $E$ under a set valued map $f$ is $$f^{+}(E)=\{x \in X: f(x) \subset E\}$$ My difficulty is in how to show that definition $1$ implies definition $2$. I can't understand in the Fourth Line, why there must exist an $\epsilon$ such that $N_{\epsilon}(f(x)) \subset E$. Unlike definition $1$, definition $2$ doesn't require $f(x)$ is always compact. What if $f(x) = E$? Or did I misunderstand something? It seems to me the precondition "set-valued map $f$ maps to a compact space" means $f(X)$ is compact, not that $f(x)$ is compact-valued, though I can't find where compactness is invoked. AI: The correspondence $\phi:[0,1]\to 2^{[0,1]}$ given by $\phi(x)=(1/3,2/3)$ for all $x\in[0,1]$ is upper-hemicontinuous under Definition 2, but not under Definition 1. The proof fails if we do not also assume in Definition 2 that the correspondence is compact-valued.
H: How can we prove $\pi =1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}-\frac{1}{5}+\frac{1}{6}+\frac{1}{7}+\cdots\,$? I saw a beautiful result in Wikipedia which was proved by Euler; but I do not know how it can be proved: $$\pi =1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} - \frac{1}{5} + \frac{1}{6} + \frac{1}{7} + \frac{1}{8} + \frac{1}{9} - \frac{1}{10} + \frac{1}{11} + \frac{1}{12} - \frac{1}{13} + \cdots $$ After the first two terms, the signs are determined as follows: If the denominator is a prime of the form $4m - 1$, the sign is positive; if the denominator is a prime of the form $4m + 1$, the sign is negative; for composite numbers, the sign equals the product of the signs of its factors. There is reference in this Wikipedia page to Carl B. Boyer's A History of Mathematics, Chapter 21., p. 488-489. I found the book on the internet but there is no proof in the book. Thanks a lot for your help. AI: We want to compute $$S=\sum_{m=1}^{\infty}\frac{(-1)^{s(m)}}{m},$$ where $s(m)$ counts the number of appearances of primes of the form $4k+1$ in the prime decomposition of $m$. Note that $$S=\sum_{n=0}^{\infty}\frac{(-1)^{s(2n+1)}}{2n+1}+\frac{S}{2}\quad\Longrightarrow \quad \frac{S}{2}=\sum_{n=0}^{\infty}\frac{(-1)^{s(2n+1)}}{2n+1}.\tag{1}$$ But the latter sum can be written as $$\sum_{n=0}^{\infty}\frac{(-1)^{s(2n+1)}}{2n+1}=\prod_{k=2}^{\infty}\left(1+\dfrac{(-1)^{\frac{p_{{k}}-1}{2}}}{p_{k}} \right )^{-1},\tag{2}$$ where the product on the right is taken over odd primes. To show the equality, expand each factor on the right into geometric series and multiply them. Further, as shown by answers to this question, this product is equal to $\pi/2$. Being combined with (1), this gives $S=\pi$.
H: Unitary linear map So my professor gave me this question: Let $V$ be a vector space. Let $e_{1},\ldots,e_{n}$ be an orthonormal basis for $V$, $T\colon V\to V$ be a linear map and $\forall 1\leq i \leq n$ $\|T(e_{i})\|=1$. Is $T$ unitary? So I know that there is a counterexample. But I have a proof and I would like to know what is wrong in it. So we know that for all $i$ $$\sqrt{\langle T(e_{i}),T(e_{i})\rangle}=1$$ therefore $$\langle T(e_{i}),T(e_{i})\rangle=1$$ therefore $$\langle e_{i},T^{*}(T(e_{i}))\rangle=1$$ and we know that $\langle e_{i},e_{i}\rangle=1$, therefore $T^{*}T=I_{n}$, and in the same way I can prove that $TT^{*}=I_{n}$. And that is it. I proved $T$ is unitary. Where is my mistake? Thanks in advance! AI: The problem is, from $\def\lr#1{\langle #1\rangle}$$\lr{e_i, T^*Te_i}=1$ and $\lr{e_i,e_i} = 1$ for all $i$, you cannot conclude that $T^*Te_i = e_i$! We have that $\lr{e_i, T^*Te_i - e_i} = 0$, which means that $T^*Te_i - e_i$ must be orthogonal to $e_i$ (all $i$). But it doesn't have to be 0; for that we need the stronger assumption $\lr{e_j, T^*Te_i - e_i} = 0$ for all $j$ (not just for $j=i$), or $\lr{e_j, T^*Te_i} = \lr{e_j, e_i}$, for all $i$ and $j$.
H: Derivative of tan(x) with product and chain rules instead of quotient rule So I usually just use the product and chain rules for quotient functions, because I can never remember which product to substract from which in the numerator. But somehow I'm doing it wrong for $\tan(x)$. Say $a$ and $b$ are functions of $x$, and $f$ is the quotient of those functions. I take $a$ and multiply it with the derivative of $b^{-1}$ and add the derivative of $a$ multiplied with $b^{-1}$. I am doing something wrong here but can't see what exactly yet. \begin{gather} f=ab^{-1},\quad f'=a'b^{-1}-b^{-2}ab'\\[2ex] f(x)=\tan(x)=\frac{\sin(x)}{\cos(x)}=\sin(x)\cos^{-1}(x)\\[2ex] f'(x)=\cos(x)\cos^{-1}(x)+\sin^2(x)\cos^{-2}(x) \end{gather} This is completely wrong, as I should get $f'(x)=\frac{1}{\cos^2(x)}$ instead of $f'(x)=1+\tan^2(x)$. I know I'm applying this wrong, but I'm not sure where. Thanks. AI: $$\frac{1}{\cos^2x}=\frac{\cos^2x+\sin^2x}{\cos^2x}=1+\tan^2 x.$$
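A quick SymPy confirmation that the product/chain-rule result and $\sec^2 x$ are the same function (a hedged sketch, just to reassure the asker that nothing was applied wrong):

```python
import sympy as sp

x = sp.symbols('x')
d = sp.diff(sp.tan(x), x)
print(sp.simplify(d - (1 + sp.tan(x)**2)))  # 0
print(sp.simplify(d - 1 / sp.cos(x)**2))    # 0
```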
H: find the radius of convergence of $(1+(-1)^{n}2^{1+n})x^{n}$ I ran into this question and I don't really know how to find the radius of convergence. The power series is: $$\sum_{n=0}^\infty(1+(-1)^{n}2^{1+n})x^{n}$$ Thanks in advance. AI: HINT: $$(1+(-1)^n2^{1+n})x^n=x^n+2(-2x)^n$$ Using this, $\sum_{0\le n<\infty} x^n$ is convergent iff $|x|<1\iff -1<x<1$ and $\sum_{0\le n<\infty} (-2x)^n$ is convergent iff $|-2x|<1\iff -\frac12< x<\frac12$. So we need to fulfil both conditions, which gives the radius of convergence $\frac12$.
H: exercise on uniform integrability I cannot figure out the following exercise: Let $F$ be the family of functions $f$ on $[0,1]$, each of which is (Lebesgue) integrable over $[0,1]$ and has $\int_a^b|f|\le b-a$ for all $[a,b]\subseteq[0,1]$. Is $F$ uniformly integrable over $[0,1]$? AI: The answer is 'yes'. To prove this, we first show, as suggested by Did, that for any $f$ in $F$ one has $|f|\le 1$ a.e. on $[0,1]$. For that, denote $A_n=\{x\in[0,1] : |f(x)|>1+\frac{1}{n} \}$, and $A=\{x\in [0,1]:|f(x)|>1\}$. Obviously, $A_n\subseteq A_{n+1}$ for all $n$, and $m(A)=\lim_{n\to\infty}m(A_n)$. We prove that $m(A_n)=0$ for every value of $n$, which we fix from now on. On one hand, $$ \int_{A_n}|f|\ge m(A_n)\cdot(1+\tfrac1n) $$ by Chebyshev's inequality. On the other hand, for any $\epsilon>0$ one can find an open cover $U_\epsilon\supseteq A_n$ such that $m(A_n)\le m(U_\epsilon)\le m(A_n)+\epsilon$. Being an open set, $U_\epsilon$ is a disjoint countable union of open intervals: $U_\epsilon=\bigsqcup_{k=1}^\infty I_k$. By the countable additivity of the Lebesgue integral, $$\int_{A_n}|f|\le\int_{U_\epsilon}|f|=\sum_{k=1}^\infty\int_{I_k}|f|\le \sum_{k=1}^\infty\ell(I_k)=m(U_\epsilon)\le m(A_n)+\epsilon.$$ So we have: $$ m(A_n)\cdot(1+\tfrac1n)\le\int_{A_n}|f|\le m(A_n)+\epsilon. $$ As $n$ is fixed and $\epsilon$ can be made arbitrarily small, we conclude that $m(A_n)=0$, and hence $m(A)=\lim_{n\to\infty}m(A_n)=0$. This proves that $|f|\le 1$ almost everywhere on $[0,1]$. Now, if $B$ is a measurable subset of $[0,1]$ with $m(B)<\delta$, one has $\int_B|f|\le\int_B1=m(B)<\delta$, so one can set $\delta=\epsilon$ in the definition of the uniform integrability.
H: Expressing polynomial roots expression in terms of coefficients This is my first question on MSE. Apologies in advance for any textual or LaTeX errors. I'm stuck with this problem: Given $x^3 - bx^2 + cx - d = 0$ has roots $\alpha$, $\beta$, $\gamma$, find an expression in terms of $b$, $c$ and $d$ for:   (i) $\alpha^2 + \beta^2 + \gamma^2$   (ii) $\alpha^3 + \beta^3 + \gamma^3$   (iii) $(1 + \alpha^3)(1 + \beta^3)(1 + \gamma^3)$ I had no trouble with (i) or (ii), but got stuck on (iii) as follows: Expanding, $$\begin{align*} (1 + \alpha^3)(1 + \beta^3)(1 + \gamma^3) & = (1 + \alpha^3 + \beta^3 + \alpha^3\beta^3)(1 + \gamma^3)\\ & = 1 + (\alpha^3 + \beta^3 + \gamma^3) + (\alpha^3\beta^3 + \beta^3\gamma^3 + \gamma^3\alpha^3) + \alpha^3\beta^3\gamma^3 \end{align*}$$ The first, second and fourth RHS terms are no problem, leaving us with: $$\alpha^3\beta^3 + \beta^3\gamma^3 + \gamma^3\alpha^3 = \left(\frac{1}{\gamma^3} + \frac{1}{\alpha^3} + \frac{1}{\beta^3} \right)\alpha^3\beta^3\gamma^3$$ So now we are left with the term in brackets. My next thought was to transform the original polynomial to one with roots $\frac{1}{\alpha}$, $\frac{1}{\beta}$ and $\frac{1}{\gamma}$ and then use the answer to (ii) above. Will this work? Or is there a better approach? AI: Let $y=1+\alpha^3$ Again, $\alpha^3=b\alpha^2-c\alpha+d$ $$\implies y-1-d=b\alpha^2-c\alpha$$ Cubing we get $$ (y-1-d)^3=b^3\alpha^6-c^3\alpha^3-3bc\alpha^2\cdot \alpha(b\alpha^2-c\alpha)$$ $$(y-1-d)^3=b^3(y-1)^2-c^3(y-1)-3bc(y-1)(y-1-d)$$ as $\alpha^3=y-1,b\alpha^2-c\alpha=y-1-d$ Arrange as $y^3+By^2+Cy+D=0$ whose roots are $1 + \alpha^3,1 + \beta^3,1 + \gamma^3$ Using Vieta's formulas, $$(1 + \alpha^3)(1 + \beta^3)(1 + \gamma^3)=-D$$
H: If a sequence of natural numbers satisfies $\gcd(a_{i+1},a_{i})>a_{i-1}$, then $a_{n}>2^n$ Given a sequence $\{a_{n}\}$ in $\mathbb{N}$ such that $\gcd(a_{i+1},a_{i})>a_{i-1},$ for any $i\ge 2$, show that $a_{n}>2^{n-1}$. Thank you everyone, my friend asked me about this problem, and I feel this problem is really interesting. AI: (Note that $\mathbb N$ does not include $0$ in my answer.) This seems like a very interesting problem. As @MarcvanLeeuwen pointed out in his comment, the problem as you stated it is not true. In fact, we can prove that the counterexample he gave is the worst possible: Theorem: Let $(a_n)_{n\in\mathbb N}$ be a sequence in $\mathbb N$, such that $a_i<(a_{i+1},a_{i+2})$ for all $i\in\mathbb N$, then $a_i\geq 2^{i-1}$ for all $i\in\mathbb N$. Proof: First, note that $(a_n)$ is strictly increasing, since: $$a_i<(a_{i+1},a_{i+2})\leq a_{i+1}$$ Also, we will repeatedly use the fact $$ (*1):\qquad \forall a,b\in\mathbb Z,\ a\neq b: (a,b)\leq |a-b|$$ (this is true as $(a,b)|a,b$ implies $(a,b)|a-b$). We prove that the theorem is true for $i=1,2,3$: $a_1\in\mathbb N$ implies $a_1\geq 1$. $a_2>a_1\geq 1$ implies $a_2\geq 2$. $a_3>a_2\geq 2$, so $a_3\geq 3$. If $a_3=3$, then $a_2=2$ and $a_1=1$ and $1=a_1<(a_2,a_3)=(2,3)=1$ is false. So $a_3\geq 4$. For $i+1>3$, we prove the theorem by induction, so assume that it is true for $i, i-1$ and $i-2$. There are $k,l\in\mathbb N$, such that $$(a_i,a_{i+1})\cdot k=a_{i+1} \qquad (a_i,a_{i+1})\cdot l=a_i$$ $k\leq 2$: Use ($*1$): $$a_{i+1}=k\cdot (a_i, a_{i+1})\leq k\cdot ( a_{i+1} - a_i ) \leq 2 \cdot ( a_{i+1} - a_i )$$ So $a_{i+1}\geq 2a_i=2\cdot 2^{i-1}=2^i$ by the induction hypothesis. $k\geq 4$: This follows immediately by applying the induction hypothesis. $$a_{i+1} = k \cdot (a_i,a_{i+1}) \geq 4 \cdot (a_i,a_{i+1}) > 4a_{i-1} = 4\cdot 2^{i-2} = 2^i$$ $k=3$ and $l=1$. Then $a_{i+1}=3a_i\geq 3\cdot 2^{i-1} \geq 2^i$. As $a_i<a_{i+1}$ implies $l<k$, the only remaining case is the following: $k=3$ and $l=2$. Denote $m:=(a_i,a_{i+1})$, so $a_i=2m$ and $a_{i+1}=3m$. $m\geq \frac{8}{3}a_{i-2}:$ Then $$a_{i+1}=3m\geq 3\cdot \frac{8}{3}a_{i-2}=8a_{i-2}\geq 8\cdot 2^{i-3}=2^i$$ $m\geq \frac{4}{3}a_{i-1}:$ Same as before: $$a_{i+1}=3m\geq 3\cdot \frac{4}{3}a_{i-1}=4a_{i-1}\geq 4\cdot 2^{i-2}=2^i$$ $m<\frac{8}{3}a_{i-2}$ ($*2$) and $m<\frac{4}{3}a_{i-1}$ ($*3$): Then $$a_{i-1}<(a_i,a_{i+1})=(2m,3m)=m$$ Using this and ($*1)$, we obtain: $$a_{i-2}<(a_{i-1},a_i)=(a_{i-1},a_i-a_{i-1})\leq |a_i-2a_{i-1}|=|2m-2a_{i-1}|=2(m-a_{i-1})$$ Regroup this equation to get: $$(*4):\quad m > a_{i-1} + \frac{1}{2} a_{i-2}$$ Then: $$m\overset{(*4)}{>} a_{i-1} + \frac{1}{2} a_{i-2}\overset{(*3)}{>} \frac{3}{4}m+\frac{1}{2} a_{i-2} \Rightarrow \frac{1}{4}m>\frac{1}{2}a_{i-2} \Rightarrow m > 2a_{i-2}$$ Using this and ($*4$) again: $$\frac{3}{2}a_{i-2} < \frac{3}{4}m \overset{(*3)}{<} a_{i-1} \overset{(*4)}{<} m - \frac{1}{2} a_{i-2} \overset{(*2)}{<} \frac{8}{3}a_{i-2} - \frac{1}{2}a_{i-2} = \frac{13}{16}a_{i-2} < \frac{3}{2} a_{i-2}$$ This is a contradiction, so this case is not possible. EDIT: I edited the post to give the above proof of the tight bound. I did not delete the previously proven weaker bounds, as they are simple proofs and someone might find them interesting. Weaker bounds, i.e. proofs for $a_i\geq b^{i-1}$ for some $b<2$: Lemma: Let $(a_n)_{n\in\mathbb N}$ be a sequence in $\mathbb N$, such that $a_i<(a_{i+1},a_{i+2})$ for all $i\in\mathbb N$, then $a_{i+2}>a_{i+1}+a_i$ for all $i\in\mathbb N$. 
Proof: The sequence is strictly increasing since $a_i<(a_{i+1},a_{i+2})\leq a_{i+1}$. $(a_{i+1},a_{i+2})$ divides both $a_{i+1}$ and $a_{i+2}$, hence also their difference $a_{i+2}-a_{i+1}$. Since both are positive, this implies $(a_{i+1},a_{i+2})\leq a_{i+2}-a_{i+1}$ and this yields: $$a_{i+2}=(a_{i+2}-a_{i+1})+a_{i+1}\geq (a_{i+2},a_{i+1}) + a_{i+1} > a_i + a_{i+1}$$ Theorem: Let $(a_n)_{n\in\mathbb N}$ be a sequence in $\mathbb N$, such that $a_i<(a_{i+1},a_{i+2})$ for all $i\in\mathbb N$, then $a_i\geq F_i$ for all $i\in\mathbb N$. (where $F_i$ denotes the $i$-th Fibonacci number) Proof: This is true for $i=1,2$. For larger indices conclude by induction: $$F_{n+2}=F_n+F_{n+1}\leq a_n+a_{n+1}<a_{n+2}$$ As $F_n\approx \frac{\varphi^n}{\sqrt{5}}$, where $\varphi\approx 1.61$ is the golden ratio, we could find some lower bound in the above spirit with $b$ slightly below $\varphi$. Weaker, but simpler is $b=\sqrt{2}$: Theorem: Let $(a_n)_{n\in\mathbb N}$ be a sequence in $\mathbb N$, such that $a_i<(a_{i+1},a_{i+2})$ for all $i\in\mathbb N$, then $a_i\geq (\sqrt{2})^{i-1}$ for all $i\in\mathbb N$. Proof: As $a_n$ is strictly increasing, we have $a_1\geq 1$ and $a_2\geq 2$, so this is true for $i=1,2$. For larger indices again induction: $$a_{n+2} > a_{n+1}+a_n > a_n + a_n = 2a_n \geq 2\cdot (\sqrt{2})^{n-1}= (\sqrt{2})^{n+1}$$
H: Sufficient condition for self-adjoint subset of bounded linear operators on a Hilbert space being irreducible Let $H$ be a Hilbert space and denote as $B(H)$ the bounded linear operators on $H$. Let $M$ be a subset of $B(H)$, s.t. for $A \in M$, also $A^* \in M$. How can one show that if the commutant has the form $M'=\{\lambda \mathbb{1} : \lambda \in \mathbb{C}\}$, then $M$ is irreducible, i.e. under the action of $M$, the only closed invariant subspaces $G\subset H$ are $G=\{0\}$ and $G=H$? AI: Suppose $M$ is reducible and $G\subset H$ is a nontrivial closed subspace invariant under all $A\in M.$ Since $M=M^*,$ $G^\perp$ is also invariant under $M$. Denote by $P:H\to G$ the orthogonal projection. Then $P$ is nonscalar and $P\in M'.$ Indeed, let $A\in M.$ Then if $\varphi\in G$ then $A\varphi\in G$ and $AP\varphi=A\varphi=PA\varphi,$ if $\varphi\in G^\perp$ then $A\varphi\in G^\perp$ and $AP\varphi=0=PA\varphi.$ Hence $AP\varphi=PA\varphi$ for all $\varphi\in H.$
H: How to check if a normal vector of a plane points towards or away from a certain point The problem is as follows: I have a plane defined by three points. I also have a fourth point. I now want to calculate the normalvector of the plane defined by the first three points, but i want the Normalvector to point towards the side of the plane, where the fourth point is. My idea is to just calculate any normalvector of the plane, then drop a perpenicular from point four to the plane, and then check if the two vectors are parralell or antiparralell. Is the there a better solution without the extra step of droping the perpendicular? AI: One way of doing this is to ensure that the dot product of your normal and any vector joining a point in the plane to the fourth point is positive.
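A short NumPy sketch of this sign test (the function name and the example points are my own illustration):

```python
import numpy as np

def oriented_normal(p1, p2, p3, p4):
    """Unit normal of the plane through p1, p2, p3, flipped if needed
    so that it points toward the side of the plane containing p4."""
    n = np.cross(p2 - p1, p3 - p1)      # any normal of the plane
    n /= np.linalg.norm(n)
    if np.dot(n, p4 - p1) < 0:          # the dot-product test from the answer
        n = -n                          # p4 was on the other side: flip
    return n

p1, p2, p3 = np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])
print(oriented_normal(p1, p2, p3, np.array([0., 0, -2.])))  # -> [0, 0, -1]
```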
H: Statements regarding relations in R Suppose $\rho$ is a relation on $R$. I want to verify whether the following statements are true. Looks simple but proving them seems to be difficult for me. $\rho\circ\rho$ is a subset of $\rho$ $\rho\circ\rho=\rho$ implies $\rho=i_{D(\rho)}$ ($D(\rho)$ being the domain of $\rho$) $\rho=\rho^{-1}$ implies $\rho=i_{D(\rho)}$ I believe the second point is false considering the counter example constant function but I need help with the other statements. AI: All of them are false. For the first, let $\rho = \{(a,b),\,(b,c)\}$ For the second, let $\rho = \{(1,1),\,(3,3),\,(1,3),\,(3,1)\}$ For the third, let $\rho = \{(0,1),\,(1,0)\}$
H: Prove that graph without independent set is not $4$-colorable Let $G$ be an undirected graph with $n$ vertices so that $n\geq35$ and there isn't an independent set (IS) of size $4$ in $G$. (If $S$ is a group of vertices from graph $G$, $S$ is an independent set if there isn't an edge in graph $G$ between each two vertices from $S$) Prove that $G$ isn't $4$-colorable. There was a clue in addition to the question that implied the use of Ramsey Theory, specifically the upper bound on Ramsey Numbers. Is there a way to prove it using Ramsey Theory? AI: Assume there is a 4-coloring of $G$; then there must be a set $S$ of vertices of the same color of size at least $\lceil 35/4\rceil=9$. But since $S$ cannot be an independent set, there is an edge between two vertices of $S$, which contradicts the fact that they are of the same color. EDIT: alternative proof using Ramsey Theory Ramsey theory gives us the existence of big cliques or big independent sets if the graph is big enough. Here we know that there is no big independent set, so by Ramsey there must be a big clique, which prevents small colorings. To show that it's not $4$-colorable, we would need a $5$-clique, so the relevant Ramsey number is $R(4,5)=25$ (see Wikipedia). Since $35\geq 25$ and we know that there is no $4$-independent set, such a $5$-clique is guaranteed to exist. But this proof is unnecessarily complicated...
H: Harmonic Series is $\Theta(\ln n)$ How does one prove that $1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = H_n = \Theta(\ln n)$ by using Riemann Sums? I have seen in the MIT OCW 6.042 that if $f$ is continuous and increasing then $$f(1) + \int_1^n f(x) \, dx < f(1) + f(2) + \cdots + f(n) < f(n) + \int_1^n f(x) \, dx$$ But besides the intuition behind it, I do not know how to prove it. It can be obviously used here to prove my question about harmonic series. AI: Hint: If $f(x)$ is strictly increasing, $$f(k)<\int_k^{k+1} f(x)\;dx<f(k+1)$$
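For the decreasing function $f(x)=1/x$ the displayed inequalities flip, giving $\ln n + \tfrac1n \le H_n \le \ln n + 1$; a quick numeric illustration (a hedged sketch, not a proof):

```python
import math

for n in [10, 100, 10000, 10**6]:
    H = sum(1.0 / k for k in range(1, n + 1))
    print(n, math.log(n) + 1.0 / n, H, math.log(n) + 1.0)  # lower bound, H_n, upper bound
```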
H: Lang $SL_2$: fin-dim irreducible subspace for abelian group has dim < 2 Lang $SL_2(\mathbb R)$ p. 24, Theorem 2 : Let $\pi$ be an irreducible representation of G on a Banach space H. Let $H_n$ be the subspace of vectors v s.t. $$\pi(r(\theta))v = e^{in\theta}v.$$ If dim $H_n$ is finite, then dim $H_n$ = 0 or 1. This is always the case if $\pi$ is unitary irreducible. We know that $H_n$ is irreducible for $\pi^1(S_{n,n})$ and finite dimensional linear algebra shows that dim $H_n = 0$ or 1 since $S_{n,n}$ is commutative. 1) Well I think I have a very simple counter example: let $$\pi\colon \mathbb R \to GL(\mathbb R^2)$$ by $$\theta \mapsto \text{rotation by }\theta.$$ Clearly, $\mathbb R^2$ is an irreducible space for this representation of dim > 1. The proof goes on: On the other hand, if $\pi$ is unitary, and $f\in S_{n,n}$, then $\pi^1(f)^* = \pi^1(f^*)$, where $f^*(x) = f(x^{-1})$. It is immediately verified that $f^*\in S_{n,n}$. Hence, $\pi^1(S_{n,n})$ is *-closed and Schur's lemma implies that dim $H_n$ = 0 or 1. 2) Schur's lemma says that if If M and N are two simple modules over a ring R, then any homomorphism f: M → N of R-modules is either invertible or zero. Here, I understand that simple means irreducible. But why do we need *-closedness? Isn't $\pi^1$ as representation with the structure of a group only already good enough? AI: 1) As Marc van Leeuwen already suggested, Lang considers only complex representations, so what you give does not count as a counterexample. (As you say, Lang does not state this assumption explicitly, but I think—besides its being standard in this topic—this is implicit in Ch. I, §2.) How does the proof go now? Just use the fact that a complex finite-dimensional vector space which is irreducible for a commuting family of endomorphisms must have dimension $0$ or $1$. 2) What Lang refers here as Schur's lemma is the one in his Appendix 1, §1. More precisely, one utilizes the corollary on p. 363, and closure under taking adjoints is needed to be able to do so.
H: Confused about harmonic series and Euler product So Euler argued that $$1 + \frac{1}{2} + \frac{1}{3} + \frac {1}{4} + \cdots = \frac {2 \cdot 3 \cdot 5 \cdot 7 \cdots} {1 \cdot 2 \cdot 4 \cdot 6 \cdots} $$ which you can rearrange to $$ \left( \frac {1 \cdot 2 \cdot 4 \cdot 6 \cdots} {2 \cdot 3 \cdot 5 \cdot 7 \cdots} \right) \left( 1 + \frac{1}{2} + \frac{1}{3} + \frac {1}{4} + \cdots \right) = 1$$ which in turn you might write as $$\prod_{n=1}^\infty \frac{p_n-1}{p_n} \times \sum_{n=1}^\infty \frac{1}{n} = 1.$$ I'm confused about how this works rigorously. How do the primes 'align themselves' with the naturals? For example, is the correct statement something like $$\prod_{n=1}^z \frac{p_n-1}{p_n} \times \sum_{n=1}^z \frac{1}{n} \to 1 \text{ as } z \to \infty$$ where you're taking the first $z$ primes and the first $z$ naturals, or is it something like $$\prod_{p \le z} \frac{p-1}{p} \times \sum_{n=1}^z \frac{1}{n} \to 1 \text{ as } z \to \infty$$ where you're taking all the primes less than or equal to $z$ and the first $z$ naturals, or is it something else entirely? I tried to get a clue with numerical programming but didn't get very far. AI: $$\prod_1^r{p_n\over p_n-1}=\prod_1^r\left(1-{1\over p_n}\right)^{-1}=\prod_1^r\left(1+{1\over p_n}+{1\over p_n^2}+\cdots\right)=\sum{1\over m}$$ where the sum is over all $m$ divisible by no primes other than $p_1,\dots,p_r$. Formally, the limit as $r\to\infty$ gives the first displayed equation in the question. Rigorously, it can be shown that $$\prod_1^{\infty}\left(1-{1\over p_n^s}\right)^{-1}=\sum_1^{\infty}{1\over n^s}$$ for $s\gt1$ (or even for real part of $s$ exceeding $1$).
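For the rigorous $s>1$ version one can watch the two sides converge together numerically, e.g. at $s=2$, where both tend to $\zeta(2)=\pi^2/6$ (a hedged sketch using SymPy's prime generator; the cutoffs are arbitrary):

```python
from sympy import primerange

s = 2  # any s > 1 makes both sides finite
prod = 1.0
for p in primerange(2, 100000):
    prod *= 1 / (1 - p ** (-s))

partial_sum = sum(1 / n ** s for n in range(1, 100000))
print(prod, partial_sum)  # both ~ 1.644934 = pi^2 / 6
```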
H: Conditional or absolute convergence of an integral $\int_{0}^{\infty}\frac{\sin x}{1+x^{2}}dx$ I ran into a few problems where I had to check absolute/conditional convergence of a few integrals. I'm sure there's a method to check this, I just can't find the trick. I want help with one of the problems so that I can get the rest on my own. Check absolute or conditional convergence of the integral: $$\int_{0}^{\infty}\frac{\sin x}{1+x^{2}}dx$$ Thanks in advance, yaron AI: Compare with $$\int_0^\infty {dx\over 1 + x^2}:$$ since $\left|\frac{\sin x}{1+x^2}\right|\le\frac{1}{1+x^2}$ and $\int_0^\infty\frac{dx}{1+x^2}=\frac\pi2<\infty$, your integral converges absolutely.
H: Asymptotic solutions for inequalities How do I determine the order (big O) of $\omega$ in $e^{-\omega/\epsilon}\leq10^{-9}$ and $e^{-\omega/\epsilon}\leq\epsilon$, where $\epsilon$ is a small parameter. AI: In the first case, taking logs yields $$-\omega/\epsilon \leq -9 \ln(10)\\ \omega \geq 9 \epsilon \ln(10)\\ \omega = \Omega(\epsilon)$$ and similarly in the second case $-\omega/\epsilon \leq \ln(\epsilon)$, i.e. $\omega \geq -\epsilon \ln (\epsilon) = \epsilon\ln(1/\epsilon)$, so $\omega = \Omega(\epsilon\ln (1/\epsilon))$.
H: Forcing and antichains What would be a good way to show: If $p \Vdash (\exists\alpha) \phi(\alpha)$, then there is an antichain $A$ maximal below $p$, and a set of ordinal $\{\gamma_{q} | q \in A\}$ s.t. $(\forall q \in A$) $q \Vdash \phi(\gamma_{q})$ AI: Use the mixing lemma to conclude that there exists a name $\dot x$ such that $p\Vdash\dot x\in\check{\sf Ord}$ such that $p\Vdash\phi(\dot x)$. Now consider $\{q\leq p\mid\exists\alpha\in{\sf Ord}:q\Vdash\check\alpha=\dot x\}$. This is a dense [and open] set, so we immediately have that there is only a set of possible ordinal values for $\dot x$, and by thinning it out to a maximal antichain we have as wanted.
H: What does $f: A \times A \to A$ mean? What does $f: A \times A \to A$ mean? Can you give some examples please? AI: Normally this means that $f$ is a function with domain $A\times A$ and codomain $A$. One way to represent a function is with ordered pairs. For this example, it would be a set of ordered pairs $((a,b),c)$, where $(a,b)\in A\times A$ and $c\in A$. Formally, $$f\subseteq (A\times A)\times A$$ To be a function, each element of $A\times A$ must be represented exactly once as the first element of the ordered pairs contained in $f$. A concrete example would be for $A=\mathbb{R}$ and $f:(x,y)\rightarrow x+y$.
H: A problem on open sets in $\mathbb R$ and expressing them as pairwise disjoint intervals Let $G$ be an open subset of $\Bbb R$. Define the relation $x \sim y $ on $G$ such that $x \sim y$ iff there exists an open interval $I$ such that $x,y \in I$ and $I \subseteq G $. Verify that '$\sim$' is an equivalence relation on $G$. Deduce that $G$ can be expressed as a union of a countable number of pairwise disjoint open intervals. Can someone please help me on this, especially on how to deduce the result? Is it by using equivalence classes? If so, how do you find the equivalence classes? AI: The only nontrivial part in proving this is an equivalence relation is transitivity. Recall that if $x\sim y$ and $y\sim z$ (and without loss of generality $x<y<z$) then there are $a,b,c,d$ such that $a<x<y<b$ and $c<y<z<d$. What can you conclude on the relation between $b$ and $c$? Can you find an interval containing $x$ and $z$ now? As for the second part, first note: what is an equivalence class? It is an open interval. But you can't really write down the equivalence classes because you're not given the exact $G$. If $G$ is an interval you will only have one equivalence class, but if $G$ is a much more complicated open set then you will have infinitely many equivalence classes. However you can still prove that there are only countably many of them: show that you can pick one rational number from each interval. So there is a function from the open intervals into the rational numbers; try to prove it is injective.
H: Question on a detail of the proof of Schauder-Tychonoff fixed point theorem I'm trying to understand the proof of the Schauder-Tychonoff fixed point theorem on pages $96-97$ of Fixed Point Theory and Applications, Ravi P. Agarwal, Maria Meehan, Donal O'Regan, which can be found here in googlebooks. Let $F: C \to E$ be a continuous function from $C$, which is a convex subset of $E$, to $E$. $E$ is a Hausdorff space, so $C$ is automatically Hausdorff. I'm confused with the following statement on page $97$. I'm not sure what $V_x(x)$ and $W_x(F(x))$ actually are. One guess is that they're just neighbourhoods of $x$ and $F(x)$ respectively. The other is that they're both neighbourhoods of $0$ in the first place, and they also contain $x$ and $F(x)$ respectively. But either way I see no need to invoke the Hausdorff condition to guarantee the existence of the two neighbourhoods. Because we can just let $W_x(F(x)) = E$ and then $(8.5)$ and $(8.6)$ will always hold. So what is the role of the Hausdorff condition here? AI: $E$ is a topological vector space. So the topology is translation-invariant, for any $x \in E$, the neighbourhoods of $x$ are the sets of the form $x + V = \{x + v \colon v \in V\}$ where $V$ is a neighbourhood of $0$. $V(x)$ is just another notation for $x + V$, so for $V_x$ a neighbourhood of $0$, $V_x(x)$ is indeed a neighbourhood of $x$. So by $x \neq F(x)$ and the Hausdorff condition, you have two neighbourhoods $U_1$ of $x$ and $U_2$ of $F(x)$ with $U_1 \cap U_2 = \varnothing$. By the continuity of $F$, for every neighbourhood $U_3$ of $F(x)$, there is a neighbourhood $U_4$ of $x$ such that $F(C\cap U_4) \subset U_3$. Now pick $U_3 = U_2$ and $W_x = U_3 - x$, and $V_x = (U_1 \cap U_4) - x$. I'm rather convinced the $V_x(x) \cap W_x(F(x)) \neq \varnothing$ is a typo and should be $V_x(x) \cap W_x(F(x))= \varnothing$.
H: If $a_n$ is divergent, then $f(a_n)$ is divergent Suppose $f(x)$ is strictly increasing on $\mathbb R$ and $a_n$ is divergent. Then sequence $f(a_n)$ is also divergent. How to prove that this is false? Actually, a counterexample will suffice. AI: If $a_n$ is increasing, then $f(a_n)$ is increasing thus convergent if and only if bounded. So you need $f(x)$ to be bounded. This leads to $f(x)=\arctan(x), a_n=n$.
H: Hilbert space on line bundle Suppose that $L$ is a complex line bundle on a manifold $M$ with measure $\mu$, How can we prove, $L^2(M,L,\mu)$ is Hilbert space? AI: You must be leaving something out of the question here - you need an inner product in order to say that something is a Hilbert space. Probably the situation is something like this: you have a Hermitian metric on $L$, i.e. an inner product $\langle \cdot , \cdot \rangle_x$ on $L_x$ for each $x \in M$, such that the inner products vary smoothly with $x$. Then you can define an inner product on smooth (or even continuous) sections of $L$ by $$ \langle \phi, \psi \rangle = \int_M \langle \phi(x), \psi(x) \rangle_x d \mu(x). $$ If $M$ is not compact then you will need to restrict to compactly supported sections in order for this integral to make sense. Now the question is, how do you define $L^2(M,L,\mu)$? There are essentially two options: the first is that you define it to be the completion of the space of sections with respect to the inner product just defined, in which case it is automatically a Hilbert space; the other option is to define an equivalence relation on sections saying that two sections are equivalent if they are equal almost everywhere, then define $L^2(M,L,\mu)$ to be the set of equivalence classes of square-integrable sections, in which case the proof of completeness is essentially identical to that for $L^2(M,\mu)$.
H: Translation-invariant operator Let $T$ be a translation invariant bounded linear operator $L^p(\mathbb{R}^d)\rightarrow L^q(\mathbb{R}^d)$, i.e. $T(\tau_c f)=T f$ where $\tau_cf(x)=f(x+c)$ for $c\in\mathbb{R}^d$. Then I have read in this marvellous post by Tao that necessarily $q\ge p$ ("the larger exponents are always on the left"). He says one can see this by considering $$f(x)=\sum_{n=1}^N g(x+x_n)$$ where $g$ is say, smooth and compactly supported. Then supposedly $$\|Tf\|_q\sim N^{1/q} \|Tg\|_q$$ But when I apply $T$ I get $Tf(x)=N\cdot Tg(x)$ and hence $$\|Tf\|_q=N\|Tf\|_q$$ Where is my mistake? Please help me, I'm utterly confused. AI: i.e. $T(\tau_cf)=Tf$ where You're misunderstanding translation-invariance here. A translation-invariant operator $T \colon L^p(\mathbb{R}^d) \to L^q(\mathbb{R}^d)$ is an operator that commutes with translations, i.e. $T \circ \tau_c = \tau_c \circ T$ for all $c \in \mathbb{R}^d$. Then, with a bit of ho-humming (that can be made precise, see below), when the $x_k$ are spread far enough apart, the $T(\tau_{x_k}g) = \tau_{x_k}Tg$ have the bulk of their weights separated, so $$\lVert Tf\rVert_q = \lVert \sum \tau_{x_k}Tg\rVert_q \approx \left(\int \sum \left\lvert (Tg)(x-x_k)\right\rvert^q\,dx \right)^{1/q} \approx \left(\sum \int \lvert(Tg)(x-x_k)\rvert^q\,dx \right)^{1/q} \approx N^{1/q} \lVert Tg\rVert_q$$ For $f$ itself, with the assumption of compact support there is no problem seeing $\lVert f\rVert_p = N^{1/p}\lVert g\rVert_p$. To conclude that that implies $q \geqslant p$, one needs $T \neq 0$. Making the ho-humming precise: Let $$\chi_r(x) = \begin{cases} 1,\quad \lVert x\rVert \leqslant r\\ 0,\quad \lVert x\rVert > r. \end{cases}$$ Fix a (smooth) $g \in L^p(\mathbb{R}^d)$ with compact support in $B_R(0)$. Then, for $f = \sum\limits_{k = 1}^N \tau_{x_k}g$ and $r > 0$, if $\lVert x_i - x_j\rVert \geqslant 2r$ for all $i \neq j$, you have $$\begin{align} \lVert Tf\rVert_q &= \left\lVert\left(\sum_{k=1}^N \tau_{x_k}\bigl(\chi_r\cdot(Tg)\bigr)\right) + \left(\sum_{k=1}^N \tau_{x_k}\bigl((1-\chi_r)\cdot(Tg)\bigr)\right) \right\rVert_q\\ &\leqslant \left\lVert\left(\sum_{k=1}^N \tau_{x_k}\bigl(\chi_r\cdot(Tg)\bigr)\right)\right\rVert_q + N\cdot \lVert (1-\chi_r)(Tg)\rVert_q\\ &= \left(\int \sum_{k=1}^N \left\lvert\chi_r(x-x_k)(Tg)(x-x_k)\right\rvert^q\,dx\right)^{1/q} + N\cdot \lVert (1-\chi_r)(Tg)\rVert_q\\ &= \left(N\cdot \lVert \chi_r\cdot (Tg)\rVert^q\right)^{1/q} + N\cdot \lVert (1-\chi_r)(Tg)\rVert_q\\ &= N^{1/q} \lVert \chi_r\cdot (Tg)\rVert_q + N\cdot \lVert (1-\chi_r)(Tg)\rVert_q\\ &\leqslant N^{1/q} \lVert Tg\rVert_q + N\cdot \lVert (1-\chi_r)(Tg)\rVert_q. \end{align}$$ The first inequality follows from the triangle inequality for $\lVert\cdot\rVert_q$. The next equalities follow from the disjointness of the supports of the $\tau_{x_k}\bigl(\chi_r\cdot(Tg)\bigr)$ and the translation-invariance (note: since the result is a number, and not a function, translation-invariance means $\lambda(\tau_c M) = \lambda(M)$ for all $c$ and measurable $M$ here) of the Lebesgue measure. The final inequality from $\lVert\chi_r\cdot h\rVert_q \leqslant \lVert h\rVert_q$ for all $h$. 
Using the triangle inequality in the form $\lVert a + b \rVert \geqslant \lVert a\rVert - \lVert b\rVert$ for the split between the $\chi_r(Tg)$ and $(1-\chi_r)(Tg)$, we obtain $$\lVert Tf\rVert_q \geqslant N^{1/q} \lVert \chi_r\cdot (Tg)\rVert_q - N\cdot \lVert (1-\chi_r)(Tg)\rVert_q$$ Now, given $g$, $N$, and an arbitrary $\varepsilon > 0$, by the dominated convergence theorem, you can choose $r_0 > 0$ so that $\lVert (1-\chi_r)\cdot(Tg)\rVert_q < \varepsilon/N$ for all $r \geqslant r_0$. Choose additionally $r_0 > 2R$, so that the $\tau_{x_k}g$ have disjoint support. If the $x_k$ are then chosen far enough apart, you find $$N^{1/q}\lVert Tg\rVert_q - \frac{\varepsilon}{N^{1-1/q}} - N\cdot\frac{\varepsilon}{N} \leqslant \lVert Tf\rVert_q \leqslant N^{1/q}\lVert Tg\rVert_q + \varepsilon,$$ so $\lVert Tf\rVert_q \approx N^{1/q}\lVert Tg\rVert_q$, and $$N^{1/q}\lVert Tg\rVert_q - 2\varepsilon \leqslant \lVert Tf\rVert_q \leqslant \lVert T\rVert \cdot \lVert f\rVert_p = N^{1/p}\lVert T\rVert\cdot\lVert g\rVert_p.$$ Since $\varepsilon$ could be arbitrarily chosen, $$N^{1/q}\lVert Tg\rVert_q \leqslant N^{1/p}\lVert T\rVert\cdot \lVert g\rVert_p$$ for all $N$ and (smooth) $g$ with compact support. If $T \neq 0$, a smooth $g$ with compact support and $Tg \neq 0$ exists ($\mathscr{C}_c(\mathbb{R}^d)$ is dense in $L^p(\mathbb{R}^d)$ for $p < \infty$). Then, taking the limit for $N \to \infty$ would lead to a contradiction if $p > q$.
H: Jensen inequality Does Jensen's inequality, which is $\mathbb{E}(g(X)) \geq g(\mathbb{E}X)$ if $g$ is convex, assume that $\mathbb{E}X$ (the expected value of the random variable $X$) must belong to $R(X)$ (the range of the random variable $X$)? Thank you. AI: The answer is negative: consider the easy example $X:\{\omega_1,\omega_2\}\rightarrow R(X)=\{x_1,x_2\}$, with $X(\omega_i)=x_i$ and $P(X=x_1)=p$, $P(X=x_2)=1-p$. Then, for any convex $g$: $$g(p x_1+(1-p)x_2)=g(E[X])\leq p g(x_1)+(1-p)g(x_2)=E[g(X)].$$ Note that $E[X]\not\in R(X)$ in general (take $0<p<1$ and $x_1\neq x_2$): the range here is just a finite set, not a linear space.
H: Prove by using the integral test Prove, by using the integral test, the correctness of these results: $$\sum_{n=1}^{\infty} \frac{1}{n^p} : \text{convergent for } p>1 , \text{ divergent for } p\leq1$$ $$\sum_{n=2}^{\infty} \frac{1}{n\ln^q(n)} : \text{convergent for } q>1, \text{ divergent for } q\leq1$$ Do I need to split the integral into cases for $p$ and $q$? For example, if $p\leq1$ or $q\leq1$, there is the sub-case where the exponent is a fraction between $0$ and $1$, and the sub-case $p,q\leq0$. Do I have to treat those sub-cases separately when doing the integral, or can I ignore that split? Thanks. AI: Hint: If we assume that $p \neq 1$, then: $$ \int_1^\infty \dfrac{1}{x^p}dx = \lim_{k\to\infty} \left[\dfrac{x^{-p+1}}{-p+1} \right]_1^k= \lim_{k\to\infty} \left[\dfrac{x^{-(p-1)}}{1-p} \right]_1^k = \dfrac{1}{1-p}\lim_{k\to\infty} \left[\dfrac{1}{k^{p-1}} - 1 \right] $$ Now fill in the question marks: $$ \lim_{k\to\infty}\dfrac{1}{k^{p-1}}=\begin{cases} ?? & \text{if } p < 1 \\ ?? & \text{if } p > 1 \\ \end{cases} $$ For the special case when $p=1$, we have: $$ \int_1^\infty \dfrac{1}{x}dx = \lim_{k\to\infty} \left[\ln|x| \right]_1^k = \lim_{k \to \infty} [\ln|k| - 0] = \infty $$
H: Matrix $I + 2 A A^T$ is nonsingular for any A Suppose A is $m\times n$ matrix with real entries. Could you prove that $\det (I + 2 A A^T) \neq 0$ AI: Suppose $S$ is symmetric and positive semi-definite. Then $S=U\Lambda U^T$ for some orthogonal $U$ and diagonal $\Lambda$. Then $I+S = I+U \Lambda U^T= U (I+\Lambda) U^T$. Hence $\det (I+S) = \det (I+ \Lambda) = \prod_k (1+\lambda_k)$, where the $\lambda_k$ are diagonal elements of $\Lambda$. Since $S$ is positive semi-definite, $\lambda_k \ge 0$, hence $\det (I+S) \ge 1$. In this example, $S=2A A^T$, which is easily seen to be symmetric and positive semi-definite.
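A quick NumPy illustration of the conclusion $\det(I+2AA^T)\ge 1$ for a few random shapes of $A$ (a hedged sketch, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(5):
    m, n = rng.integers(1, 6, size=2)              # random small dimensions
    A = rng.standard_normal((m, n))
    print(np.linalg.det(np.eye(m) + 2 * A @ A.T))  # always >= 1
```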
H: Name for grid system Is there a name for a type of grid you might find in Battleship? Where coordinates don't relate to points on a grid but rather the squares themselves? AI: "Grid" is as good a name as any: See Regular Grid in Wikipedia: In particular, see the "related" grid: the Cartesian Grid: "A Cartesian grid is a special case where the elements are unit squares or unit cubes [cubes in the case of a 3-D grid], and the vertices are integer points." [brackets, bold-face mine]. You could also refer to this sort of "playing field" in a game like battleship as an incidence matrix of sorts: where a cell in the ith row and jth column might be occupied, using "$1$", or not occupied, using "$0$".
H: Is the multiplication of two complex numbers with $|z|=1$ a complex number with modulus 1? If we have two complex numbers $a, b \in \mathbb{C}$ such that $|a|=1$ and $|b|=1$ is $|a\cdot b|=1$ as well? I am trying to determine if the set $\left(\{z\in\mathbb{C}:|z|=1\},\cdot\right)$ is a group. I am not sure if it is closed under the binary operation $\cdot$. My intuition is that it is not closed under this operation. I may be misunderstanding something here, but any clarification and help would be helpful. AI: \begin{eqnarray} |(x+iy)(u+iv)|^2 &=& |xu-yv+i(xv+yu)|^2 \\ &=& (xu-yv)^2+(xv+yu)^2 \\ &=& v^2 y^2+u^2 y^2+v^2 x^2+u^2 x^2 \\ &=& (x^2+y^2)(u^2+v^2) \\ &=& |x+iy|^2|u+iv|^2 \end{eqnarray}
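A throwaway numeric illustration of the closure property (random points on the unit circle multiply to another point on the unit circle):

```python
import cmath
import random

for _ in range(3):
    a = cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    b = cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
    print(abs(a * b))   # always 1.0 up to floating-point error
```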
H: What is the definition of a flat morphism? When we say that a morphism $f: E \rightarrow M $ between two algebraic varieties (over $\mathbb{C}$) is a flat morphism, what does it mean? Does it mean that that the "dimension" of every fiber $f^{-1}(x)$ is the same for all $x$? Or do we also have to check some compatibility conditions? The specific example I have in mind is the following: Let $\mathcal{D} \approx \mathbb{P}^{\delta_d}$ be the space of non-zero homogeneous degree $d$ polynomials in three variables upto scaling, where $\delta_d = \frac{d(d+3)}{2}$ (basically degree $d$-curves in $\mathbb{P}^2$). Let $$ \mathcal{V} := \{ [f]\in \mathcal{D}: f([1,0,0]) =0, ~~\nabla f|_{[1,0,0] =0} \} $$ ie it is the space of curves having $[1,0,0]$ as a singular point. Define $$ \mathcal{C} := \{ ([f], p) \in \mathcal{V} \times \mathbb{P}^2: f(p) =0 \} $$ I think that the "morphism" $$ \pi_{\mathcal{V}}: \mathcal{C} \rightarrow \mathcal{V} $$ is "flat", although it is not a fiber bundle (in the sense of differential topology). Why is this a "flat morphism"? Secondly I believe that the morphism $$ \pi_{\mathbb{P^2}}: \mathcal{C} \rightarrow \mathbb{P}^2 $$ is not flat. Why is that? Is it because the fiber over $[1,0,0]$ is of a larger "dimension"? $\textbf{EDIT:}$ One further question about terminology, using this example. Define $$ \mathcal{V}^* := \{ [f]\in \mathcal{D}: f([1,0,0]) =0, ~~\nabla f|_{[1,0,0]} =0, ~~det \nabla^2 f|_{[1,0,0]} \neq 0 \}, $$ $$ \mathcal{C}^* := \{ ([f], p) \in \mathcal{V}^* \times \mathbb{P}^2: f(p) =0 \} $$ I believe that the morphism $$ \pi_{\mathcal{V}^*}: \mathcal{C}^* \rightarrow \mathcal{V}^* $$ is smooth of relative dimension one. I assume it is of relative dimension one, since the fiber at each point is one dimensional. But why is it "smooth"? I don't think $\mathcal{C}^*$ is even a smooth manifold. AI: There is a formal definition of flatness, which you can find for example in Hartshorne section III.9. The definition is maybe unsatisfying, because it is algebraic in nature and its geometric meaning is not so clear. But it has some nice geometric consequences, for example Corollary 9.6 in the same section: if $X \rightarrow Y$ is a surjective flat morphism of irreducible algebraic varieties, then (every component of) every fibre has the same dimension. So your guess at a definition is indeed a consequence of flatness. In fact, under some additional assumptions --- $Y$ regular and $X$ Cohen--Macaulay --- it is in fact equivalent to flatness: see Hartshorne Exercise III.10.9. (But keep in mind that this is not the general definition.) This should show why one of your morphisms is flat, but not the other.
H: graph of complicated equation Graph of the equation $(x+y) (x^2 + y^2 -1) = 0$ is just the line $y=-x$ and the circle $x^2 + y^2 = 1$. Is it generally true that the graph of $f(x,y) \cdot g(x,y) = 0$ may be drawn as union of graphs of $f(x,y) =0 $ and $g(x,y) = 0$? AI: Note that the equation you've written is an equation, and not a function. And the graph is the union of the equations $$y = -x \;\; \cup\;\; x^2 + y^2 = 1$$ Yes indeed: $$f(x, y)\cdot g(x,y) = 0 \iff f(x, y) = 0 \;\;\cup\;\; g(x, y) = 0$$
H: Steinhausen set in $[0,1]$ Does every Steinhausen set have positive Lebesgue measure? A Steinhausen set is a set $A\subseteq [0,1]$ such that $0\in \operatorname{int}(A-A)$ and $\mu(A)\geq 0$. AI: No. The Cantor set has measure zero, yet $0$ is an interior point of its difference set (in fact, the difference set is $[-1,1]$). A proof of the latter fact can be found in Gelbaum and Olmsted's Counterexamples in Analysis. See here.
H: How to solve this partial differential equation: a uniform string, stretched between the points $(0,0)$ and $(\ell,0)$, starts in its equilibrium position (hint: $C_n=0$ since it starts at equilibrium). AI: I take it you mean the wave equation should be solved here, which it more or less has been. Your job is to match the given initial conditions. You can start by taking the time derivative of your given solution: $$\dot{y}(x,t) = \sum_{n=1}^{\infty} \frac{n \pi a}{\ell} \sin{\left ( \frac{n \pi x}{\ell} \right)} \left [-C_n \sin{\left ( \frac{n \pi a t}{\ell} \right)} + D_n \cos{\left ( \frac{n \pi a t}{\ell} \right)}\right ] $$ This means that $$\dot{y}(x,0) = \sum_{n=1}^{\infty} \frac{n \pi a}{\ell} D_n \sin{\left ( \frac{n \pi x}{\ell} \right)} = g(x)$$ This is a Fourier series, so you find the coefficients as follows: $$\frac{n \pi a}{\ell} D_n = \frac{\displaystyle \int_0^{\ell} dx \, g(x) \, \sin{\left ( \frac{n \pi x}{\ell} \right)}}{\displaystyle \int_0^{\ell} dx \, \sin^2{\left ( \frac{n \pi x}{\ell} \right)}}$$ Now, I suspect your definition of $g$ is not right because the dimensions don't line up; I will take $$g(x) = \begin{cases} \frac{a x}{\ell} & 0 \le x \le \frac{\ell}{2} \\ a \left ( 1-\frac{x}{\ell}\right) & \frac{\ell}{2} \lt x \le \ell \end{cases} $$ I will let you handle the details of the integration; I get $$D_n = \begin{cases} (-1)^{(n-1)/2}\,\dfrac{4\ell}{n^3\pi^3} & n \text{ odd} \\ 0 & n \text{ even} \end{cases}$$ (the even coefficients vanish because this $g$ is symmetric about $\ell/2$). This leaves $C_n$; this should clearly be zero for all $n$ because the string is initially in the equilibrium position. Thus the solution is $$y(x,t) = \frac{4\ell}{\pi^3} \sum_{m=0}^{\infty} \frac{(-1)^m}{(2m+1)^3} \sin{\left ( \frac{(2m+1) \pi x}{\ell} \right)} \sin{\left ( \frac{(2m+1) \pi a t}{\ell} \right)} $$
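A hedged numerical cross-check of the coefficients, comparing quadrature against the closed form above (the values $\ell=a=1$ are illustrative; the check is independent of units):

```python
import numpy as np
from scipy.integrate import quad

ell = a = 1.0

def g(x):
    return a * x / ell if x <= ell / 2 else a * (1 - x / ell)

for n in range(1, 8):
    num, _ = quad(lambda x: g(x) * np.sin(n * np.pi * x / ell), 0, ell,
                  points=[ell / 2])
    den = ell / 2                              # integral of sin^2 over [0, ell]
    D_n = (num / den) * ell / (n * np.pi * a)  # solve (n*pi*a/ell) D_n = num/den
    closed = 0.0 if n % 2 == 0 else (-1) ** ((n - 1) // 2) * 4 * ell / (n * np.pi) ** 3
    print(n, round(D_n, 6), round(closed, 6))  # the two columns agree
```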
H: Do equivalent norms preserve dual spaces? Suppose that $X^*$ is the dual space of a normed space $X$. If we renorm the space $X^*$ with a new norm equivalent to the first one, is this new normed space the dual of $X$ as well? (I think it suffices to prove that a functional $f$ is continuous with respect to norm 1 if and only if it is continuous with respect to norm 2, where norm 1 and norm 2 are two equivalent norms. This seems to be obvious!) Thanks for the help. AI: No. Since you don't change the vector space $X^*$ by renorming it, it remains the dual in the sense of being the set of continuous linear functionals on $X$. But it is no longer the dual of $X$ as a normed vector space, since that means precisely $\|f\|=\sup_{\|x\|\leq 1} |f(x)|$, which is completely determined by the norm on $X$. Along these lines, what is true is: if you put an equivalent norm on $X$, then $X$ has the same bounded (= continuous) linear functionals as before, so the vector space $X^*$ remains the same. And both induced norms on $X^*$ are equivalent as well. Maybe that's what you meant to ask. To prove the statement of that last paragraph, assume first that $\|x\|_1\leq C\|x\|_2$ for some $C>0$. Then $\|x\|_1\leq 1$ whenever $\|Cx\|_2\leq 1$, whence for every linear functional on $X$ $$ C\|f\|_1=C\sup_{\|x\|_1\leq 1}|f(x)|\geq \sup_{\|Cx\|_2\leq 1}|f(Cx)|= \sup_{\|y\|_2\leq 1}|f(y)|=\|f\|_2. $$ In particular, $f$ is $\|\cdot\|_2$ bounded whenever it is $\|\cdot\|_1$ bounded. The result follows by symmetry.
H: A function/distribution which satisfies an integral equation (sounds bizarre) I think I need a function $f(x)$ such that $\large{\int_{x_1}^{x_2}f(x)\,d{x} = \frac{1}{(x_2-x_1)}}$ $\forall x_2>x_1>0$. Has such a function been used or studied by someone, or is it just more than insane to ask for such a function? I guess it has to be some distribution, and if so, I'd like to know more about it. AI: Note that no such function can exist, because additivity of the integral would fail: for any $x_1 < x_2 < x_3$ we would need $$\int_{x_1}^{x_2} f(x) \, dx + \int_{x_2}^{x_3} f(x) \, dx = \int_{x_1}^{x_3} f(x) \, dx,$$ i.e. $$\frac{1}{x_2-x_1}+\frac{1}{x_3-x_2}=\frac{1}{x_3-x_1},$$ which is impossible, since the left side already exceeds the right (for instance, $x_1=1$, $x_2=2$, $x_3=3$ would require $1+1=\tfrac12$). Actually, the same argument shows that there exists no measure $m$ such that $$ m([x_1,x_2])=\frac{1}{x_2-x_1} \,,$$ so in particular there is no absolutely continuous measure satisfying that relation.
H: Quick question about binary strings Determine the unambiguous expression which generates every string in this set: the set of all binary strings which contain 001111 as a substring. I am thinking that the answer should be {0,1}$^*${001111}{0,1}$^*$ But the answer says that it should be {0,1}$^*$\{1}$^*$({00}{0}$^*${1,11,111}∪{0}{1}{1}$^*$)$^*${0}$^*$ Which is basically to take all the strings, and remove those that do not contain 001111 as a substring. Is my answer correct also, or is the 2nd one more correct? Sorry if this is in the wrong section, as I couldn't find the Binary strings tag AI: Your expression generates the correct set, but it is not unambiguous: if a binary string contains more than one copy of 001111, then it can be represented in your form in more than one way. (For instance, in the string 001111001111 the first occurrence of 001111 could serve as the required middle block, with the second occurrence absorbed into the trailing {0,1}$^*$; or the second occurrence could be the required one, with the first absorbed into the leading {0,1}$^*$.)
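To see the ambiguity concretely, here is a tiny Python sketch (the example string is my own) that finds both occurrences of the pattern; each occurrence yields a different derivation under the first expression:

```python
s = "001111001111"        # contains the pattern twice
pattern = "001111"
starts = [i for i in range(len(s)) if s.startswith(pattern, i)]
print(starts)  # [0, 6] -> two distinct ways to parse s as {0,1}* 001111 {0,1}*
```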
H: Transforms to orthogonal basis Given a function $f(x)$ we can use the Fourier Transform to find $F(w)$, which represents how we can build up the signal from $\sin$/$\cos$ as we find the coefficients, frequencies and phases. (I realise this is very simplified; I just wanted to fit it in a sentence.) My question is: could we have represented this signal by any set of orthogonal basis functions? I know we can have a $\operatorname{sinc}$ transform. If we can, how would one go about deriving the transfer function? Thank you in advance! :) AI: Yes, there are various sets of orthogonal functions that can be used for such an expansion, provided that their inner product (the integral of the product of any two of them) gives Kronecker's delta (for orthonormal families) or a scalar multiple of it (for orthogonal ones). There are many such functions, particularly polynomial functions. You get the transfer function as the response (convolution integral) to a Dirac impulse at the input, as usual. Whether you will get a simple relation for the frequency-domain transfer function, as for Fourier analysis, depends on the mathematical properties of these new transforms. Most likely, this will be much more complicated than when using complex exponentials as in the F.T.
H: Convergence to $0$ of Jacobi theta function I'm trying to prove that a function $$f(y) = \sum_{k=-\infty}^{+\infty}{(-1)^ke^{-k^2y}}$$ is $O(y)$ while $y$ tends to $+0$. I have observed that $f(y) = \vartheta(0.5,\frac{iy}{\pi})$ where $\vartheta$ is a Jacobi theta function. It seems that these functions are very well studied but I am not too familiar with this area. Any useful links or suggestions are very appreciated. AI: UPD: the previous version contained a square which shouldn't be there. Actually, your function is even more simply expressed in terms of the $\vartheta_4$-function. Also, I prefer this notation in which $$f(y)=\vartheta_4(0,e^{-y})=\vartheta_4\Bigl(0\Bigr|\Bigl.\frac{iy}{\pi}\Bigr).$$ I.e. I use the convention $\vartheta_k(z,q)=\vartheta_k(z|\tau)$. Then, to obtain the asymptotics as $y\rightarrow 0^+$, we need two things. First, Jacobi's imaginary transformation, after which the transformed nome and half-period behave as $q'\rightarrow0$, $\tau'\rightarrow i\infty$ (instead of $q\rightarrow1$, $\tau\rightarrow0$): $$\vartheta_4\Bigl(0\Bigr|\Bigl.\frac{iy}{\pi}\Bigr)=\sqrt{\frac{\pi}{y}}\vartheta_2\Bigl(0\Bigr|\Bigl.\frac{i\pi}{y}\Bigr).$$ Second, series representations for theta functions (e.g. the formula (8) at the first link), which imply that $$\vartheta_2(0,q')\sim 2(q')^{\frac14}$$ as $q'\rightarrow 0$. Note that you can also obtain an arbitrary number of terms in the asymptotic expansion if you want. Taking into account the two things above, we obtain that the leading asymptotic term is given by $$f(y\rightarrow0)\sim 2\sqrt{\frac{\pi}{y}} \exp\left\{-\frac{\pi^2}{4y}\right\}.$$
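You can sanity-check the leading term numerically with mpmath, which implements the Jacobi theta functions (a sketch; the sample values of $y$ are arbitrary):

```python
from mpmath import mp, jtheta, exp, sqrt, pi

mp.dps = 30
for y in [mp.mpf('0.5'), mp.mpf('0.1'), mp.mpf('0.05')]:
    exact = jtheta(4, 0, exp(-y))                   # f(y) = theta_4(0, e^{-y})
    approx = 2 * sqrt(pi / y) * exp(-pi**2 / (4 * y))
    print(y, exact / approx)                        # ratio tends to 1
```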
H: Finding $\lim\limits_{x\to0}x^2\ln (x)$ without L'Hospital I am preparing for a calculus resit and I encountered a limit problem: $\lim\limits_{x\to0}x^2\ln (x)$. I am not allowed to use L'Hospital. Please help me; I have been stuck for almost an hour now. AI: Note that we should rather consider $$\lim_{x\to 0^+}x^2\ln x.$$ Substitute $x$ with $e^{-t}$ (this is possible for $x>0$) to get $$ \lim_{x\to 0^+}x^2\ln x=\lim_{t\to+\infty}(-t)(e^{-t})^2=\lim_{t\to+\infty}\frac{-t}{(e^t)^2}$$ and use our favorite estimate for the exponential function, $e^t\ge 1+t$, to get $$ 0\le\left|\frac{-t}{(e^t)^2}\right|=\frac{t}{(e^t)^2}\le \frac{t}{(1+t)^2}\xrightarrow[t\to+\infty]{}0,$$ so by the squeeze theorem the limit is $0$.
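A quick numerical sketch agrees with the squeeze argument:

```python
import math

for x in [1e-1, 1e-3, 1e-6, 1e-9]:
    print(x, x**2 * math.log(x))
# the products shrink to 0 as x decreases, matching the limit above
```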
H: How and what to teach on a first year elementary number theory course? In the late 80's and early 90's there was the idea of 'calculus reform', and some emphases and syllabi changed. The order of doing things in calculus also changed with the advantage of technology. Similarly, in linear algebra there was a linear algebra curriculum study group which produced some really good ways of teaching linear algebra and also highlighted curriculum changes. This was published in the January 1993 College Mathematics Journal. Has any similar work been done in number theory? I am looking for what the important topics to cover are, and for any work or research on the teaching of number theory. AI: You might be interested in looking at some of the work of the Mathematical Association of America's Special Interest Group (SIGMAA) on RUME: Research in Undergraduate Mathematics Education. See also MAA Online: RUME for more information on the group, and abundant resources to sort through, as well as links to the RUME Community Website, where you may be able to get targeted answers and suggestions from folks who know the current research very well and have ready access to effective syllabi for undergraduate mathematics courses across topics.
H: Trig substitution $\int x^3 \sqrt{1-x^2} dx$ $$\int x^3 \sqrt{1-x^2} dx$$ $x = \sin \theta $ $dx = \cos \theta d \theta$ $$\int \sin^3 \theta d \theta$$ $$\int (1 - \cos^2 \theta) \sin \theta d \theta$$ $u = \cos \theta$ $du = -\sin\theta d \theta$ $$-\int u^2 du$$ $$\frac{-u^3}{3} $$ $$\frac{\cos^3 \theta}{3}$$ With the triangle trick I get: $$\frac{-\sqrt{1-x^2}^3}{3}$$ This is wrong but I am not sure where I went wrong. AI: Here is where you went wrong: under $x=\sin\theta$ the factor $\sqrt{1-x^2}$ becomes $\cos\theta$ while $dx=\cos\theta\,d\theta$, so the integrand is $\sin^3\theta\cos^2\theta$, not $\sin^3\theta$. Let $x=\sin{\theta}$, then $dx = \cos{\theta} \, d\theta$; the integral becomes $$\int d\theta \, \sin^3{\theta} \, \cos^2{\theta} = \int d\theta \, \sin^3{\theta} -\int d\theta \, \sin^5{\theta} $$ $$\int d\theta \, \sin^3{\theta} = \int d\theta \, \sin{\theta} (1-\cos^2{\theta}) = -\int d(\cos{\theta}) (1-\cos^2{\theta})= -\cos{\theta} + \frac13 \cos^3{\theta}+C$$ Similarly $$\int d\theta \, \sin^5{\theta} = -\int d(\cos{\theta}) (1-\cos^2{\theta})^2 = -\cos{\theta} + \frac{2}{3} \cos^3{\theta}-\frac15 \cos^5{\theta}+C'$$ Subtracting the two, I get $$\int d\theta \, \sin^3{\theta} \, \cos^2{\theta} = -\frac13 \cos^3{\theta}+\frac15 \cos^5{\theta}+C$$ Then use $x=\sin{\theta}$ and get $$\int dx \, x^3 \, \sqrt{1-x^2} = \frac{1}{15} (3 x^4-x^2-2) \sqrt{1-x^2}+C$$ EDIT I see that the answer can be simplified further to $$-\frac{1}{15} (1-x^2)^{3/2} (3 x^2+2) + C$$
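A sympy sketch verifying the simplified antiderivative (differentiate and compare with the integrand):

```python
import sympy as sp

x = sp.symbols('x')
F = -sp.Rational(1, 15) * (1 - x**2)**sp.Rational(3, 2) * (3*x**2 + 2)
print(sp.simplify(sp.diff(F, x) - x**3 * sp.sqrt(1 - x**2)))  # expect 0
```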
H: Difference between two definitions of Manifold I've been studying Differential Geometry from Spivak's Differential Geometry book. Since Spivak just works with notions of metric spaces and analysis, I'm doing fine. The point is that Spivak presents the following definition of a manifold: A manifold $M$ is a metric space such that for every $p \in M$ there's some neighbourhood $V$ of $p$ and some integer $n \geq 0$ such that $V$ is homeomorphic to $\Bbb R^n$ Now, there's another definition, usually given in texts that assume the reader knows general topology, and the definition is: A manifold $M$ is a topological space such that: $M$ is Hausdorff; $M$ has a countable basis for its topology; $M$ is locally Euclidean. For now I'm happy with Spivak's definition because I've not seen general topology yet, but I'm curious about one thing: are these two definitions equivalent? In other words, is every topological space with those three properties metrizable, so that it can be put in terms of the first definition? Is there any other way in which these definitions can be said to be equivalent? Thanks very much in advance for the help. AI: In other words, every topological space with those three properties is metrizable, so that it can be put in terms of the first definition? Yes, that's right. Properties 1. and 3. give that $M$ is Hausdorff and locally compact, hence regular by a standard exercise in general topology. Together with Property 2. this gives that $M$ is regular and second countable, hence metrizable by Urysohn's Theorem. Thus your second definition implies Spivak's definition with metrizable in place of metric. (It is slightly strange to define a manifold as a metric space in the absence of a Riemannian metric. It is unlikely that the specific metric is ever used.) Conversely, suppose $M$ is a metrizable locally Euclidean space. Then of course $M$ is Hausdorff. However, it need not be second countable. A cheap way for it not to be is just to take a direct sum of uncountably many connected components. However it is also possible for a connected Hausdorff locally Euclidean space not to be second countable. Here the condition is equivalent to something called paracompactness and the standard counterexample of a connected Hausdorff locally Euclidean space which is not paracompact is the long line. In fact I am pretty sure I learned about this from an Appendix to Volume I of Spivak's Comprehensive Introduction to Differential Geometry! The equivalence of paracompactness, second countability and metrizability for connected, Hausdorff locally Euclidean spaces should also be found there, if memory serves. Let me also say that absolutely everyone agrees that a "topological manifold" should be a locally Euclidean topological space. Most people agree that it should also be Hausdorff, but the minority who does not has its reasons: as soon as you start taking quotients of manifolds by group actions you will start to meet non-Hausdorff guys. Whether one should impose paracompactness or (stronger) second-countability is really not standard. I would say that neither should probably figure in the definition of a topological manifold but one should expect that these hypotheses will often be imposed in practice.
H: Complexity of all nearest neighbours problem Given a set $P$ of points $P=\left\{p_1,p_2,\dots p_n\right\}\subset\mathbb{R}^2$. What I want to show is that every correct algorithm which finds for every $p_i$ the nearest neighbour, i.e. the point $p_j$ such that $dist(p_i,p_j)\le dist(p_i,p_k)$ for every $k\neq i$, takes $\Omega(n\log(n))$ steps in the worst case. Assume I have already found the nearest neighbour of every point; is it possible to construct the Voronoi diagram of $P$ in linear time? In that case there could not be an algorithm which finds all nearest neighbours faster than $\Omega(n\log(n))$, because I already know that the construction of the Voronoi diagram takes at least $\Omega(n\log(n))$ steps. AI: Have a look at the Element distinctness problem, which has a known lower bound of $\Omega(n \log n)$ under the algebraic decision tree model. The point set contains two equal points iff the nearest neighbour of some point lies at distance $0$, so any correct all-nearest-neighbours algorithm decides element distinctness with only $O(n)$ extra work, and the lower bound transfers.
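To make the reduction explicit, here is a sketch (the `all_nearest_neighbours` oracle is hypothetical, standing for any correct algorithm for your problem): embed the $n$ reals on the $x$-axis; the input has a repeated element iff some point's nearest neighbour coincides with it. The embedding and the final scan take $O(n)$, so the $\Omega(n\log n)$ bound for element distinctness carries over.

```python
def has_duplicate(values, all_nearest_neighbours):
    # reduction from element distinctness to all-nearest-neighbours
    points = [(v, 0.0) for v in values]      # embed the reals on the x-axis
    nn = all_nearest_neighbours(points)      # nn[i] = index of i's nearest neighbour
    return any(points[i] == points[nn[i]] for i in range(len(points)))
```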
H: Proving there are no solutions in diophantine equations I recently saw a question that I couldn't answer, so I decided to write a little C code to test whether some solutions were possible, but I got nothing. The problem is: If $m,n, p \in \mathbb{Z}^+$, give the number of solutions of $$4mn-m-n=p^2$$ I couldn't find any answer up to $100$ for $m$ or $n$, and it really surprises me that an equation with so many degrees of freedom apparently doesn't have solutions. Any help is greatly appreciated. AI: Multiplying your equation by four and regrouping allows us to rewrite it in the form $$ (4m-1)(4n-1)=4p^2+1. $$ Let $q$ be any prime factor of the r.h.s. Clearly $q$ is odd. As $$ (2p)^2=4p^2\equiv-1\pmod{q}, $$ we see that $-1$ is a quadratic residue modulo $q$. For odd primes this is known to imply that $q\equiv1\pmod4$. But the left hand side manifestly also has prime divisors congruent to $-1\pmod4$: both $4m-1$ and $4n-1$ are $\equiv3\pmod4$, so each must have at least one such prime factor. This is a contradiction. Therefore there are no solutions with $m,n,p$ positive.
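A brute-force sweep agrees with the proof (a sketch; the search bound of 200 is arbitrary):

```python
import math

hits = [(m, n) for m in range(1, 201) for n in range(1, 201)
        if math.isqrt(4*m*n - m - n)**2 == 4*m*n - m - n]
print(hits)  # [] -- 4mn - m - n is never a perfect square in this range
```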
H: Integration trig substitution $\int \frac{dx}{x\sqrt{x^2 + 16}}$ $$\int \frac{dx}{x\sqrt{x^2 + 16}}$$ With some magic I get down to $$\frac{1}{4} \int\frac{1}{\sin\theta} d\theta$$ Now is where I am lost. How do I do this? I tried integration by parts but it doesn't work. AI: HINT: $$\int\frac{d\theta}{\sin\theta}=\int\csc\theta d\theta=\int\frac{\csc\theta(\csc\theta+\cot\theta)}{\csc\theta+\cot\theta}d\theta$$ Now what’s the derivative of that last denominator?
H: How to check if the given lines are coplanar? I have two lines: $$\frac{x-1}{3}=\frac{y+2}{-2}=\frac{z}{1} = L_1$$ $$ \frac{x+1}{4} = \frac{y-3}{1}=\frac{z}{\alpha} =L_2$$ How can I find the value of $\alpha$ for which these two lines lie on the same plane? Just notice that this is not a homework question. I took it from an exam held a few years ago, and had no idea about it. Thanks! AI: Examine both lines in parametric form. If their direction vectors are parallel then they are certainly coplanar. If their direction vectors are not parallel, two lines are coplanar if and only if they intersect; otherwise, they are skew. A convenient algebraic test: take the points $P_1=(1,-2,0)$ on $L_1$ and $P_2=(-1,3,0)$ on $L_2$, and the direction vectors $\vec d_1=(3,-2,1)$ and $\vec d_2=(4,1,\alpha)$; the lines are coplanar exactly when the scalar triple product vanishes, $$(\vec{P_2}-\vec{P_1})\cdot(\vec d_1\times\vec d_2)=\begin{vmatrix}-2&5&0\\3&-2&1\\4&1&\alpha\end{vmatrix}=22-11\alpha=0,$$ which gives $\alpha=2$.
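A numeric check of the criterion with numpy (a sketch):

```python
import numpy as np

alpha = 2.0
p1, d1 = np.array([1., -2., 0.]), np.array([3., -2., 1.])
p2, d2 = np.array([-1., 3., 0.]), np.array([4., 1., alpha])
print(np.dot(p2 - p1, np.cross(d1, d2)))  # 0.0 -> the lines are coplanar
```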
H: how to solve this first-order nonlinear ode How to solve this differential equation: $$A\,\frac{dT(x)}{dx}\,\bigl(1873.382+2.2111\,T(x)\bigr)=90457.5-2.149\cdot10^{-10}\,T(x)^4$$ where $A$ is a constant. Thank you. AI: It looks like your equation takes the form $$A \frac{dT}{dx} (B + C T) = D - E T^4$$ where the constants are all positive. Then this equation may be turned around to produce $$\int dT \frac{B+C T}{D-ET^4} = \frac{x}{A} + \text{constant}$$ A good way to attack that integral is to note that $$(D-E T^4) = (\sqrt{D}-\sqrt{E} T^2) (\sqrt{D}+\sqrt{E} T^2)$$ and then use partial fractions: $$\frac{1}{D-E T^4} = \frac{1}{2 \sqrt{D}}\left(\frac{1}{\sqrt{D}-\sqrt{E} T^2} + \frac{1}{\sqrt{D}+\sqrt{E} T^2}\right)$$ Using substitution, you may see that the resulting integral is a sum of terms involving arctangents and logarithms. This then gives you $x(T)$, which you would need to invert to get $T(x)$.
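The partial-fraction identity is easy to confirm with sympy (a sketch; $D$ and $E$ are kept symbolic and positive):

```python
import sympy as sp

T, D, E = sp.symbols('T D E', positive=True)
lhs = 1 / (D - E*T**4)
rhs = (1 / (2*sp.sqrt(D))) * (1/(sp.sqrt(D) - sp.sqrt(E)*T**2)
                              + 1/(sp.sqrt(D) + sp.sqrt(E)*T**2))
print(sp.simplify(lhs - rhs))  # 0
```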
H: Solving an integral of the form $ F(x)=\int (2x-1)e^{2x}\ dx $ I have this integral on a worksheet; please help me solve it: $$ F(x)=\int (2x-1)e^{2x}\ dx $$ AI: Let $u = 2x - 1$ and $dv = e^{2x}dx$. Now continue with integration by parts.
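If you want to check your by-parts result afterwards, sympy can do the integral directly (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate((2*x - 1) * sp.exp(2*x), x))  # equivalent to (x - 1)*exp(2*x)
```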
H: When to begin a new paragraph? When writing a mathematical article, when should someone begin a new paragraph? Is there some specific rule or convention? And more generally, what rules are there about articles-writing? AI: Mathematical writing, like all writing, is best when it reads smoothly. A good way to test the smoothness of any piece of writing is to read it out loud to yourself (I realize that this test works best for those mathematicians who write in their native languages). Sentence and paragraph breaks should be at natural points in the flow of an argument. Certainly you should (at minimum) start a new paragraph any time you move from one argument (lemma, definition, remark) to another. There are special considerations, of course, when writing in any specific discipline. It is rare for mathematicians to read an article straight through, at least on the first pass, so it is very important that your writing is skimmable. Paragraph breaks and other formatting are one of the key ways to achieve this. Short, precise sentences, in subject-verb-object order, are preferable to long convoluted sentences, even when it makes the text sound slightly choppy. (I often fail at my own advice, and produce mathematical writing that is too wordy, with long blocks of text.) Actually, paragraph breaks are just the beginning: you should clearly label each block of a few paragraphs for what it is (a definition, a remark, a proof), and headers can be very helpful ("Definition (gizmo): A gizmo is a thingamabob equipped with a compatible doohickey."). It benefits the reader if you indent and italicize (and so on) the main theorems and definitions. Of course, these things also depend in part on the venue. Writing for math.stackexchange is slightly different from writing an article (although still definitely "mathematical writing"). Writing reviews is again different. As was mentioned in the comments, your question here has perfect paragraph breaks. In fact, glancing over your other questions on m.se, it looks like your paragraph breaks are perfectly placed.
H: How do you show that isometry is an equivalence relation among metric spaces? OK, to start: I am new to metric spaces. I have studied equivalence relations in algebra, but I am unfamiliar with equivalence relations between metric spaces. Here is my question: We say that metric spaces $(X,d_X)$ and $(Y,d_Y)$ are isometric if there is an isometry $f:X \to Y$. Write this as $(X,d_X) \backsimeq (Y,d_Y)$ and show that $\backsimeq$ is an equivalence relation on the collection of metric spaces. I know that I need to show the following: $x \backsimeq x$, but I don't know what "$x$" looks like; is it $d_X (a,a) \backsimeq d_Y (x,x)$, and if so, where does "$x$" come from? Or, how do I define $\backsimeq$? $x \backsimeq y \implies y \backsimeq x$: same problem as above. $x \backsimeq z$ and $z \backsimeq y \implies x \backsimeq y$: which is an even bigger problem, since I don't know where "$z$" comes from. I am told that if I can show $f^{-1}$ preserves distance, then I am done. If anyone can tell me what I need to show, I should be good, i.e. what "$x$" is and how to define $\backsimeq$. PROOF: (As of right now. I am still overthinking this problem.) The reflexive property can be shown by the identity map. Let $id:X\to X$; then $id$ is bijective and we are done? Symmetric property: want to show $f:X\to Y$ is bijective, which will then show there is an inverse to $f$, i.e. $f^{-1}:Y\to X$, which is bijective. Let $(X,d_X)\backsimeq (Y,d_Y)$; then $f:X\to Y$ is an isometry, which implies that for every $y\in Y$ $\exists x\in X$ s.t. $f(x)=y$. Since $(X,d_X)$ and $(Y,d_Y)$ are isometric we have $d_X (x_1,x_2)=d_Y (y_1,y_2)=d_Y (f(x_1),f(x_2))$. Thus, for every $x\in X$ and $y\in Y$ we have $d_X(x_1,x_2)=d_Y(f(x_1),f(x_2))$. Thus, $f$ is bijective, which implies there exists an $f^{-1}$. Therefore, $(Y,d_Y)\backsimeq (X,d_X)$. AI: Like you said, "$(X,d_X)\backsimeq(Y,d_Y)$" means that "$(X,d_X)$ is isometric to $(Y,d_Y)$". In particular, "$(X,d_X)\backsimeq(X,d_X)$" means that "$(X,d_X)$ is isometric to $(X,d_X)$". In order to prove that the relation $\backsimeq$ is reflexive, you have to show that every metric space $(X,d_X)$ is isometric to itself. In other words, given an arbitrary metric space $(X,d_X)$, you must somehow concoct a function $f:X\to X$ which is an isometry. To prove that $\backsimeq$ is symmetric means that, given two metric spaces $(X,d_X)$ and $(Y,d_Y)$, and given that $(X,d_X)\backsimeq(Y,d_Y)$, you have to prove that $(Y,d_Y)\backsimeq(X,d_X)$. Now, the fact that $(X,d_X)\backsimeq(Y,d_Y)$ means that there is an isometry $f:X\to Y$. From that you must somehow contrive an isometry $g:Y\to X$. Now for transitivity. You don't have to know where "$z$" comes from, any more than you have to know where "$x$" and "$y$" come from; "$x$", "$y$", and "$z$" are given. You are given three metric spaces $(X,d_X),(Y,d_Y),(Z,d_Z)$, and you are given that $(X,d_X)\backsimeq(Z,d_Z)$ and $(Z,d_Z)\backsimeq(Y,d_Y)$, i.e., you are given isometries $g:X\to Z$ and $h:Z\to Y$; somehow or other you have to manufacture an isometry $f:X\to Y$, thereby showing that $(X,d_X)\backsimeq(Y,d_Y)$.
H: Physical meaning of "probability density" Is there some way of describing the co-domain of probability density functions? Does it relate in some way to something physically meaningful? I was given that question today, and I was at a loss. Density, for me, is the co-domain of pdfs: a scalar dimension with values from zero to infinity. For instance, I fit a normal distribution to a probability vector on N values (a probability mass function). This vector contains normalised values of the actual frequency counts in a histogram with N bins. The normalisation is done by dividing each frequency count by the discrete integral "area" of the pmf, so that the pmf values sum to 1. The pmf now seems to be scaled similarly to the estimated pdfs. Is probability density a measure of probability? I am sure that the necessary definitions must be hidden somewhere deep down in the guts of measure theory, which is why I have included that as a tag. AI: To add to Kirill's comment, you can write $P_0 = P(x_0\leq X \leq x_0+{\rm d}x) = p(x_0)dx$, so the value of the pdf $p(x)$ at $x_0$ is the height of an infinitesimally narrow interval ('slice') of width ${\rm d}x$ fitting below this function at $x_0$ that will give you a probability value $P_0$ of finding the random variable within this interval ${\rm d}x$. You could also think of it, perhaps more intuitively, in terms of the derivative of the cdf: the pdf at $x_0$ is then the 'speed' or rate, ${\rm d}P/{\rm d}x$, at which the probability will increase when going from $x_0$ to $x_0+{\rm d}x$ (for positive ${\rm d}x$). A "physical" meaning is more difficult, because probability is mathematical... Perhaps if you think of the analogy of "mass density" (~pdf) as a measure of concentration of "mass" (~cdf), that may help?
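A tiny numeric sketch with the standard normal makes the "rate" picture concrete: the difference quotient of the cdf over a small interval reproduces the pdf value at that point.

```python
from math import erf, exp, pi, sqrt

def cdf(x):  # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def pdf(x):  # standard normal pdf
    return exp(-x**2 / 2) / sqrt(2 * pi)

x0, dx = 1.0, 1e-6
print((cdf(x0 + dx) - cdf(x0)) / dx, pdf(x0))  # nearly identical values
```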
H: Differential equation with two solutions, how so? I tried to solve $7x^3y'=4\sqrt{y}$ with $y(1)=1$. I thought that Picard–Lindelöf would tell me that there is a unique solution (at least locally around $x=1$), but unfortunately I found two: $y(x)=\left(-\frac{1}{7x^2}+\frac{8}{7}\right)^2$ and $y(x)=\left(-\frac{1}{7x^2}-\frac{6}{7}\right)^2$. Can somebody explain to me why this is the case here? AI: The second one isn't a solution to the differential equation. Clearly the RHS is non-negative. However $\displaystyle y_2'(x)=-\frac{4(6x^2+1)}{49x^5}$ (courtesy of WA), therefore the LHS is $\displaystyle -\frac{4(6x^2+1)}{7x^2}\color{grey}{\leq 0}$. It is a spurious root introduced by squaring: solving for $\sqrt{y}$ and then squaring loses the sign, and the second candidate in fact solves $7x^3y'=-4\sqrt{y}$ instead.
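You can see this by plugging both candidates into the ODE at a few sample points (a sympy sketch):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y1 = (-1/(7*x**2) + sp.Rational(8, 7))**2
y2 = (-1/(7*x**2) - sp.Rational(6, 7))**2

for y in (y1, y2):
    r = 7*x**3*sp.diff(y, x) - 4*sp.sqrt(y)      # residual of the ODE
    print([r.subs(x, x0) for x0 in (1, 2, 3)])
# y1 gives [0, 0, 0]; y2 gives strictly negative residuals
```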
H: Show that the set of functions under composition is isomorphic to $S_3$ Show that the set $\{f_1, f_2, f_3, f_4, f_5, f_6\}$ of functions $\mathbb{R}-\{0, 1\}\rightarrow \mathbb{R}-\{0,1\}$ under composition is isomorphic to $S_3$, where $$f_1(x)= x\\ f_2(x) = 1-x\\f_3(x)=1/x\\f_4(x)=1-1/x\\f_5(x)=1/(1-x)\\f_6(x)=x/(x-1)$$ I am unsure how to start with this. I want to understand how to go about doing this and not just the answer. Any pointers would be appreciated. AI: Hint: You need to construct a bijection between your $6$ functions and the $3!=6$ elements of $S_3$. Try writing them down in separate columns and matching them together. Can you spot the identity element? Compose each function with itself until you get the identity function; this will help you figure out its order. Once you've done that, you should find that $3$ functions have order $2$. Arbitrarily assign these $3$ functions to the three $2$-cycles in $S_3$. To figure out how to assign the remaining $2$ functions of order $3$, pick any two distinct $2$-cycles in $S_3$ and multiply them to obtain a $3$-cycle. Match this $3$-cycle to the function of order $3$ obtained by composing the corresponding functions of order $2$.
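Here is a quick computational confirmation (a sketch in Python; exact rational arithmetic avoids round-off, and three sample points suffice to tell these maps apart). It builds the full composition table, which closes on the six functions and is non-abelian, exactly like $S_3$:

```python
from fractions import Fraction as Fr

fs = {
    'f1': lambda x: x,
    'f2': lambda x: 1 - x,
    'f3': lambda x: 1 / x,
    'f4': lambda x: 1 - 1 / x,
    'f5': lambda x: 1 / (1 - x),
    'f6': lambda x: x / (x - 1),
}
samples = [Fr(2), Fr(1, 3), Fr(-5)]   # points of R \ {0, 1}

def name_of(g):
    # identify a map among f1..f6 by its values on the sample points
    for name, f in fs.items():
        if all(f(s) == g(s) for s in samples):
            return name
    return None

table = {(a, b): name_of(lambda x, f=fs[a], g=fs[b]: f(g(x)))
         for a in fs for b in fs}
assert None not in table.values()                 # closure under composition
print(table[('f2', 'f3')], table[('f3', 'f2')])   # f4 f5 -> non-abelian
```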