H: an exercise book for probability theory recommendation request I'm looking for a good exercise book for probability theory, preferably at least partially with solutions. I want it to be detailed, not trivial, providing solid fundamentals in the topic to be developed in the future. I'd prefer more of an "applied" technical-university approach than the highly "abstract" one tailored for pure mathematics students, though it need not be so. The topics I would like it to cover are more or less those in part one of the book "Probability, Random Variables and Stochastic Processes" by Papoulis and Pillai (table of contents). Thank you in advance. AI: One Thousand Exercises in Probability by Grimmett and Stirzaker, Oxford University Press, 2001 might suit your needs.
H: A sensible/systematic way to deal with the following equation Given that $y=x\varphi(z)+\psi(z)$ where $z$ is an implicit function of $x,y$, and $x\cdot\varphi'(z)+\psi'(z)\neq0$. Try to prove that $$\frac{\partial^2z}{\partial x^2}\cdot\left(\frac{\partial z}{\partial y}\right)^2-2\cdot\frac{\partial z}{\partial x}\cdot\frac{\partial z}{\partial y}\cdot\frac{\partial^2 z}{\partial x\partial y}+\frac{\partial^2 z}{\partial y^2}\cdot\left(\frac{\partial z}{\partial x}\right)^2=0$$ The outline of the proof from the book by Григорий Михайлович Фихтенгольц: differentiate $y=x\varphi(z)+\psi(z)$ to obtain $\dfrac{\partial^2z}{\partial x^2}$, $\dfrac{\partial^2z}{\partial x\partial y}$, $\dfrac{\partial^2z}{\partial y^2}$, and then combine them with suitable coefficients to get the answer. I wonder whether there are sensible ways to check these equations, or even better, systematic ways to produce such equations. Any help? Thanks a lot! AI: Let $H=\begin{pmatrix}z_{xx}&z_{xy}\\z_{xy}&z_{yy}\end{pmatrix}$ be the Hessian matrix of $z(x,y)$. For any unit vector $v$ the expression $v^THv$ is the second directional derivative of $z$ along $v$. The expression we are given is exactly of this form, with $v=\begin{pmatrix}-z_y \\ z_x\end{pmatrix}$, which we recognize as the gradient $\nabla z$ rotated by 90 degrees. In other words, the formula we are asked to prove simply says that the second directional derivative of $z(x,y)$ vanishes in the direction tangent to the level curve of $z$. This sounds like the level curve is not allowed to have positive curvature, and indeed, a glance at the implicit equation tells us that the level curves of $z(x,y)$ are lines. Naturally, all directional derivatives of $z$ vanish along these lines, including the second one. Additional comments: The graph of $z$ is a special kind of ruled surface: it is ruled by horizontal lines. Not sure if these have a name. If we instead require that the second derivative vanishes in the direction of $\nabla z$ (without rotating the gradient by 90 degrees), we get $(\nabla z)^T\, H\, \nabla z=0$, the $\infty$-Laplace equation, a recently popular subject in PDE. Григорий Михайлович Фихтенгольц died almost exactly 53 years ago, June 26th. I'm sure that he expected the students to actually carry out the computations, and that the students did.
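As a concrete sanity check (a minimal sketch with the arbitrary choice $\varphi(z)=z$, $\psi(z)=z^2$, so that $z$ can be solved for explicitly), sympy confirms the identity symbolically:

    import sympy as sp

    x, y = sp.symbols('x y', positive=True)
    # Concrete instance of y = x*phi(z) + psi(z) with phi(z) = z, psi(z) = z**2:
    # then z**2 + x*z - y = 0, and we take the root with the + sign.
    z = (-x + sp.sqrt(x**2 + 4*y)) / 2

    zx, zy = sp.diff(z, x), sp.diff(z, y)
    expr = (sp.diff(z, x, 2) * zy**2
            - 2 * zx * zy * sp.diff(z, x, y)
            + sp.diff(z, y, 2) * zx**2)
    print(sp.simplify(expr))  # prints 0, as the answer predicts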
H: If $V$ is a neighbourhood of $a$ and $E= D \cap V$, show that $a \in E'$ I'm having a little problem in understanding this question... Let $f \colon D \rightarrow \mathbb{R}$ with $D \subset \mathbb{R}$ and $a \in D'$. If $V$ is a neighbourhood of $a$ and $E= D \cap V$, show that $a \in E'$. The exercise also asks to show that if $f|_E$ has a limit at the point $a$ then $f$ also has a limit at the point $a$. More specifically, I don't understand how to analyze the restricted function... Thanks for the attention! (This is an exercise from my real analysis list of exercises. I spent a good amount of time trying to do it, but I'm stuck... If anyone could give me a light... I'm studying for my final exam...) AI: Expanded, given that apparently it was unclear. Because $a\in D'$, every neighborhood of $a$ must contain points of $D$ other than $a$. You want to prove that every neighborhood $U$ of $a$ must contain points of $E$ other than $a$ itself; so let $U$ be a neighborhood of $a$. Then $U\cap V$ is also a neighborhood of $a$, so we know that $((U\cap V)\cap D)-\{a\}$ is not empty. But $(U\cap V)\cap D = U\cap (V\cap D) = U\cap E$. Hence... For the second part, let us suppose that $f|_E$ has a limit at $a$, say $r$. This means that for every open set $W$ that contains $r$, there exists an open set $U$ that contains $a$ and such that $f((U\cap E)-\{a\})\subseteq W$. To show that $f$ has a limit at $a$, we need to show that there exists an open set $\mathscr{O}$ that contains $a$ and such that $f((\mathscr{O}\cap D)-\{a\})\subseteq W$. Fix $W$, an open set containing $r$, and let $U$ be an open set that contains $a$ and such that $f((U\cap E)-\{a\})\subseteq W$. Now let $\mathscr{O} = U\cap \mathrm{int}(V)$, where $\mathrm{int}(V)$ is the interior of $V$. This is an open set that contains $a$ (since $U$ is open and contains $a$, and $V$ is a neighborhood of $a$ so its interior contains $a$). Will that work?
H: errors, random and approximations Hello good evening all! If a reading is reported as R = 200.045 + 0.001 or 200.045 - 0.001 Ohm, does +0.001 or -0.001 Ohm represent a systematic or random error? Thanking you. AI: It represents random error. Random error is a bit of 'spreading out', but systematic error means the data is centered around the wrong spot. Put another way: say your wife asks you your anniversary. If you guess the date slightly wrong, that's random error. You know roughly what's going on, but the data isn't exact. If you give the correct date of her birthday instead, that's systematic error. Precise, but at the same time completely off the mark. Having both is either awful science or grounds for divorce. Random error can be calculated through the standard deviation. The wiki page on variance gives a good sense of where this formula comes from. Systematic error is different. Say I measure your height and get a bunch of values within a few millimetres of each other. These are just numbers, and so I can compute how spread out they are just fine. However, nothing about the numbers can tell me that you didn't take your shoes off. Systematic error is when the value you get is wrong. What makes it wrong can't be determined mathematically; avoiding it can only be ensured by being very careful about how you run your experiment.
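To make the "random error as spread" point concrete, here is a minimal sketch with invented resistance readings; the sample standard deviation quantifies the ± scatter, while no statistic computed from the numbers alone can detect a constant bias:

    import numpy as np

    readings = np.array([200.044, 200.046, 200.045, 200.043, 200.047])  # invented data
    print(readings.mean())       # best estimate of the resistance
    print(readings.std(ddof=1))  # sample standard deviation ~ the random error
    # A miscalibrated meter adding a fixed 0.5 ohm to every reading would
    # shift the mean but leave the standard deviation unchanged.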
H: combinatorics: counting sets with one element having values in another set Counting the number of subsets of an n-set is easy: it's $ 2 ^n -1 $ (assuming we don't want the empty subset); it's the sum of $ n \choose i $, $i>0$. Now imagine I have a set s=(a, b, x), where x can have 2 possible values x1, x2. The result wanted: (a), (b), (x1), (x2), (a,b), (a,x1), (a,x2), (b,x1), (b,x2), (a,b,x1), (a,b,x2). I would like to generalize with s of length n, and x having k possible values => counting the subsets of $ (a_1, a_2, a_3, ...,a_{n-1}, \begin{matrix} x_1\\ x_2 \\ \vdots\\ x_k \end{matrix} ) $ AI: We include the empty set. You can remove it if you wish: I don't want to be accused of anti-emptyism. My understanding is that you have a set $A$ of $n-1$ elements, and another (disjoint) set $X$ of $k$ elements. We want to count the subsets of $A\cup X$ that have no more than one object from $X$. Take any subset of $A$ (there are $2^{n-1}$ of these) and append a single object from $X$, or nothing ($k+1$ choices). That gives $(k+1)2^{n-1}$ possibilities.
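A quick brute-force check of the count $(k+1)2^{n-1}$ on the worked example ($n = 3$, $k = 2$), enumerating the subsets that use at most one value of x:

    from itertools import combinations

    A = ['a', 'b']     # the n-1 ordinary elements
    X = ['x1', 'x2']   # the k possible values of x

    subsets = []
    for r in range(len(A) + 1):
        for base in combinations(A, r):
            subsets.append(base)             # use no value of x
            for v in X:
                subsets.append(base + (v,))  # or exactly one value of x

    print(len(subsets))      # 12 = (k+1) * 2**(n-1), including the empty set
    print(len(subsets) - 1)  # 11, matching the list in the question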
H: Is an open linear map closed (to some extent)? Suppose we have a surjective bounded linear operator acting between Banach spaces. By the Open Mapping Theorem it maps open sets in the domain to open sets in the codomain. Must the image of a closed linear subspace of the domain be closed then? AI: A construction very similar to that seen in the question Do there exist closed subspaces $X$, $Y$ of Banach space, such that $X+Y$ is not closed? applies here. If $f:E\to F$ is any bounded linear operator such that $f(E)$ is not closed, then $g:E\oplus F\to F$ defined by $$g(x+y)=f(x)+y$$ is surjective but maps $E\oplus 0$ to the non-closed subspace $f(E)$.
H: Cauchy in norm and weakly convergent implies norm convergent Let $X$ be a normed space and $(x_n)$ a Cauchy sequence in the norm sense. Also assume that $x_n \rightarrow x_0 $ weakly. Then $x_n \rightarrow x_0 $ in norm. What I did: Take $ \varepsilon >0 $. Since $(x_n)$ is Cauchy there is an $n_0$ such that $ \| x_n-x_m \|< \varepsilon$ for all $n,m \geq n_0$. By Hahn–Banach there are $x^{*} _n\, \in X^{*}$ such that $\| x_n-x_0 \| = | x^{*}_n (x_n-x_0)|$ and $\| x^{*}_n \|=1$. Hence \begin{align} \| x_n-x_0 \| &= | x^{*}_n (x_n-x_0)|\\ &=| x^{*}_n (x_n-x_m+x_m-x_0)| \\ &\leq | x^{*}_n (x_n-x_m)|+| x^{*}_n (x_m-x_0)|\\ &\leq \varepsilon+| x^{*}_n (x_m-x_0)|. \end{align} Since the last inequality holds for all $m \geq n_0$, we can take the limit with respect to $m$ and then we get $\| x_n-x_0 \| \leq \varepsilon $ (since $x_n \rightarrow x_0 $ weakly). And then by definition we are done. Where I saw this exercise there was a hint. Hint: Observe that $x_n \in x_m +\varepsilon B_X$ and $x_m+\varepsilon B_X$ is weakly closed. How do we proceed from there? AI: Fix $\varepsilon>0$. Since $\{x_n:n\in\mathbb{N}\}$ is a Cauchy sequence we can find $N\in\mathbb{N}$ such that $n\geq m\geq N$ implies $x_n \in x_m+\varepsilon B_X$. Since $\{x_n:n\geq N\}\subset x_m+\varepsilon B_X$ and $x_m+\varepsilon B_X$ is weakly closed, then $$ x_0=w\lim\limits_{n\to\infty} x_n\in x_m+\varepsilon B_X. $$ Thus $x_0\in x_m+\varepsilon B_X$, which can be reformulated as $\Vert x_m-x_0\Vert\leq \varepsilon$. Finally, for all $\varepsilon>0$ there exists $N\in\mathbb{N}$ such that $m\geq N$ implies $\Vert x_m-x_0\Vert\leq \varepsilon$, hence $$x_0=\lim\limits_{m\to\infty} x_m$$
H: Does the sequence $\frac{n!}{2^n}$ converge or diverge? Does the following sequence $\{a_n\}$ converge or diverge? $$a_n=\dfrac{n!}{2^n}$$ AI: Consider writing "out" the sequence: $$\tag 1\frac{{n!}}{{{2^n}}} = \frac{n}{2}\frac{{n - 1}}{2}\frac{{n - 2}}{2} \cdots \frac{4}{2}\frac{3}{2}\frac{2}{2}\frac{1}{2}$$ Note that in passing from $a_n$ to $a_{n+1}$ we multiply by $\dfrac{n+1}{2}$, so we're making the sequence larger and larger each time (for $n \geq 2$). In particular, we can see that every factor in the factorization in $(1)$ is greater than or equal to $1$, except the last factor $1/2$, so that $$\frac{{n!}}{{{2^n}}} = \frac{n}{2}\frac{{n - 1}}{2}\frac{{n - 2}}{2} \cdots \frac{4}{2}\frac{3}{2}\frac{2}{2}\frac{1}{2} \geqslant \frac{1}{2}\frac{n}{2}=\frac{n}{4}$$ What does this tell you about the limit of the sequence? Another approach would be d'Alembert's criterion (ratio test), which gives: $${a_n} = \frac{{n!}}{{{2^n}}}$$ So $$\mathop {\lim }\limits_{n \to \infty } \frac{a_{n + 1}}{a_n} = \mathop {\lim }\limits_{n \to \infty } \frac{\left( {n + 1} \right)!}{2^{n + 1}}\frac{2^n}{n!} = \mathop {\lim }\limits_{n \to \infty } \frac{n + 1}{2}\frac{n!}{n!} = \mathop {\lim }\limits_{n \to \infty } \frac{n + 1}{2} $$ What can you say about that limit? Then, what does this tell you about $$\mathop {\lim }\limits_{n \to \infty } {a_n} \; \;?$$
H: Finding a conformal map to the upper half plane Let $$ \Omega = \{z \in \mathbb{C} : |z| > 1, z \notin \mathbb{R}_{< -1}, z \notin \mathbb{R}_{\ge 2}\}. $$ Find a conformal map which maps the region $\Omega$ to the upper half plane. I would want to know what I am supposed to do first. I tried to shift by $1$ to the right $(z+1)$ and then used $1/z$, but I was not sure how the points around $z=-1$ and $z=2$ moved under these two maps, or whether this is a good way to start this problem. Thank you in advance. AI: To start, apply the map $f_1(z)=z+1/z$, which collapses the unit circle onto the segment $[-2,2]$; the exterior of the unit circle is mapped conformally onto the complement of that segment. Then you have a plane with two slits, and things should become clearer.
H: Calculating Points in Circles I am trying to calculate the points around a circle with a certain distance in between each other. Below is a graphical representation of what I am attempting to do. The diameter is 70 (radius 35), and I want to find all the points that are 5 apart from each other. AI: Do you want the points $5$ apart around the circle, or $5$ apart on a straight line? Around the circle, you won't come out even, as the circumference is $70 \pi \approx 219.91149$. But you can come pretty close, as this is close to $220$. The angle at the center then is $\frac 5{35}=\frac 17$ radian $=\frac {180^\circ}{7\pi} \approx 8.18511^\circ$. So you can put them at $(35\cos 8.18511k^\circ,35\sin 8.18511k^\circ)$ for $k$ from $0$ to $43$. For straight lines, the central angle is $2 \arcsin \frac 1{14} \approx 8.192^\circ$. Again you don't come out even, but you can put them at $(35\cos 8.192k^\circ,35\sin 8.192k^\circ)$ for $k$ from $0$ to $43$.
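A short sketch generating the 44 around-the-circle points (using a central angle of exactly $2\pi/44$ instead of $1/7$ radian makes the spacing come out even, at the cost of each arc gap being about 4.998 instead of exactly 5):

    import math

    R = 35.0                 # radius (diameter 70)
    n = 44                   # round(2*pi*R / 5): nearest whole number of points
    theta = 2 * math.pi / n  # central angle per step

    points = [(R * math.cos(k * theta), R * math.sin(k * theta)) for k in range(n)]
    print(R * theta)         # ~4.998, the actual arc spacing between neighbours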
H: Find all ordered pair of integers $(x,y)$ Obtain all ordered pair of integers $(x,y)$ such that $$x(x + 1) = y(y + 1)(y + 2)(y + 3)$$ I'm getting 8, (0, 0), (0, -1), (0, -2), (0, -3) (-1, 0), (-1, -1), (-1, -2), (-1, -3) Please confirm my answer. AI: Hint: It is easily proved that the product of four consecutive integers, plus $1$, is a perfect square. But $x(x+1)+1$ is hardly ever a perfect square! Added: To prove that $y(y+1)(y+2)(y+3)+1$ is a perfect square, note that $$y(y+1)(y+2)(y+3)=y(y+3)(y+1)(y+2)=(y^2+3y)(y^2+3y+2)=z(z+2),$$ where $z=y^2+3y$. And clearly $z(z+2)+1=(z+1)^2$.
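A brute-force search over a modest window (the window size is my arbitrary choice; the hint's perfect-square argument is what rules out solutions outside any window) recovers exactly the eight pairs listed:

    sols = [(x, y)
            for x in range(-100, 101)
            for y in range(-100, 101)
            if x * (x + 1) == y * (y + 1) * (y + 2) * (y + 3)]
    print(len(sols))  # 8
    print(sols)       # x in {-1, 0}, y in {-3, -2, -1, 0}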
H: Explanation of Proof of Zorn's lemma in Halmos's Book I have been reading Halmos's book on naive set theory on my own and have got stuck in the Chapter on Zorn's lemma. The 2nd and 3rd paragraphs are not very clear to me. Here is the text: Zorn's lemma. If $X$ is a partially ordered set such that every chain in $X$ has an upper bound, then $X$ contains a maximal element. Proof. The first step is to replace the abstract partial ordering in $X$ by the inclusion order in a suitable collection of sets. More precisely, we consider, for each element $x \in X$, the weak initial segment $\bar{s}(x)$ consisting of $x$ and all its predecessors. The range $\mathscr{S}$ of the function $\bar{s}$ (from $X$ to$\wp(X)$) is a certain collection of subsets of $X$, which we may, of course, regard as (partially) ordered by inclusion. The function $\bar{s}$ is one-to-one, and a necessary and sufficient condition that $\bar{s}(x)\subseteq \bar{s}(y)$ is that $x\leq y$. In view of this, the task of finding a maximal element in $X$ is the same as the task of finding a maximal set in $\mathscr{S}$. The hypothesis about chains in $X$ implies (and is, in fact, equivalent to) the corresponding statement about chains in $\mathscr{S}$. Let $\mathscr{X}$ be the set of all chains in $X$; every member of $\mathscr{X}$ is included in $\bar{s}(x)$ for some $x \in X$. The collection $\mathscr{X}$ is a non-empty collection of sets, partially ordered by inclusion, and such that if $\mathscr{C}$ is a chain in $\mathscr{X}$, then the union of the sets in $\mathscr{C}$ (i.e., $\bigcup_{A \in \mathscr{C}}A$) belongs to $\mathscr{X}$. Since each set in $\mathscr{X}$ is dominated by some set in $\mathscr{S}$, the passage from $\mathscr{S}$ to $\mathscr{X}$ cannot introduce any new maximal elements. One advantage of the collection $\mathscr{X}$ is the slightly more specific form that the chain hypothesis assumes; instead of saying that each chain $\mathscr{C}$ has some upper bound in $\mathscr{S}$, we can say explicitly that the union of the sets of $\mathscr{C}$, which is clearly an upper bound of $\mathscr{C}$, is an element of the collection $\mathscr{X}$. Another technical advantage of $\mathscr{X}$ is that it contains all the subsets of each of its sets; this makes it possible to enlarge non-maximal sets in $\mathscr{X}$ slowly, one element at a time. Now we can forget about the given partial order in $X$. In what follows we consider a non-empty collection $\mathscr{X}$ of subsets of a non-empty set $X$, subject to two conditions: every subset of each set in $\mathscr{X}$ is in $\mathscr{X}$, and the union of each chain of sets in $\mathscr{X}$ is in $\mathscr{X}$. Note that the first condition implies that $\varnothing\in\mathscr{X}$. Our task is to prove that there exists in $\mathscr{X}$ a maximal set. and the proof continues... In the 2nd paragraph: 'Since each set in $\mathscr{X}$ is dominated by some set in $\mathscr{S}$, the passage from $\mathscr{S}$ to $\mathscr{X}$ cannot introduce any new maximal elements.' Here I am able to prove that every maximal element of $\mathscr{S}$ has to be a maximal element of $\mathscr{X}$, but surely there can be maximal elements of $\mathscr{X}$ which are not maximal elements of $\mathscr{S}$ and hence 'extra'- please explain.. In the 3rd para- The author considers a set $\mathscr{X}$ with the given properties, and states that the problem of finding a maximal element in $X$ is equivalent to finding a maximal set in $\mathscr{X}$. How come? 
Detailed but simple answers would be much appreciated. AI: (This is a fairly major re-write of the original answer.) For your first question, the proper answer is that the statement doesn't matter too much. But to go through it, we begin with the following two facts: Fact 1: If $\bar{s} (x)$ is a maximal set in $\mathscr{S}$, then $x$ is a maximal element of $X$. Proof. If $z \geq x$ holds for some $z \in X$, then $\bar{s}(x) \subseteq \bar{s}(z)$. By maximality of $\bar{s}(x)$ it follows that $\bar{s}(z) = \bar{s}(x)$, and therefore $z \in \bar{s}(x)$, meaning that $z \leq x$. Thus $z = x$. $\dashv$ Fact 2: If $Y$ is a maximal set in $\mathscr{X}$, then $Y$ contains a maximum element $x$ (i.e. $y \leq x$ holds for all $y \in Y$), and $x$ is a maximal element of $X$. Proof. As $Y$ is a chain in $X$, it has an upper bound $x$. We show that $x \in Y$ (and as it is an upper bound of $Y$ it follows that $x$ is the maximum element of $Y$) and that $x$ is a maximal element of $X$. Suppose that $z \geq x$ holds for some $z \in X$. Note that $Y \cup \{ z \}$ is also a chain in $X$, and so by maximality of $Y$ it must be that $Y \cup \{ z \} = Y$, and therefore $z \in Y$. (In particular, as $x \geq x$ we have $x \in Y$.) As $x$ is an upper bound of $Y$ we have $z \leq x$, and so $z = x$. Thus $x$ is a maximal element of $X$. $\dashv$ Thus starting with maximal sets in either $\mathscr{S}$ or $\mathscr{X}$ will lead you to maximal elements of the partially ordered set $X$. It might appear, at first glance, that as chains are "more general" than weak initial segments, you get more maximal elements of $X$ by starting with maximal sets in $\mathscr{X}$ than you do by starting with maximal sets in $\mathscr{S}$. Halmos's statement is that this is not the case. More precisely, one can show that if $Y$ is a maximal set in $\mathscr{X}$ with maximum element $x$, then $\bar{s} (x)$ is a maximal set in $\mathscr{S}$. Therefore, the maximal element $x$ of $X$ obtained by beginning with a maximal set in $\mathscr{X}$ could also have been obtained by beginning with a maximal set in $\mathscr{S}$. I honestly wouldn't worry about this too much. The real crux of the proof lies in Fact 2. Your second question should (now) find its answer in the statement and proof of Fact 2, above. Additional notes: Note that in the 2nd paragraph Halmos mentions that the family $\mathscr{X}$ of all chains in the partially ordered set $X$ satisfies the following conditions: $\mathscr{X}$ is non-empty; every subset of an element of $\mathscr{X}$ is also an element of $\mathscr{X}$; and the union of every chain of elements of $\mathscr{X}$ is an element of $\mathscr{X}$. He then makes the following claims: Claim 1: If there is a maximal set in $\mathscr{X}$, then there is a maximal element of $X$. (This was established in Fact 2, above.) Claim 2: Suppose $\mathscr{Z}$ is any family of subsets of some non-empty set $Z$ satisfying the following conditions: $\mathscr{Z}$ is non-empty; every subset of an element of $\mathscr{Z}$ is also an element of $\mathscr{Z}$; and the union of every chain of elements of $\mathscr{Z}$ is an element of $\mathscr{Z}$. Then there is a maximal set in $\mathscr{Z}$. (This is what the remainder of Halmos's proof establishes, except that -- in possibly bad form -- he uses $X$ and $\mathscr{X}$ (which symbols are already used in the proof) in place of my $Z$ and $\mathscr{Z}$, respectively.)
Once these two claims are established, the proof of Zorn's Lemma is complete: since the family $\mathscr{X}$ of all chains in $X$ has the desired properties mentioned in the premise of Claim 2, it follows that $\mathscr{X}$ contains a maximal set. By Claim 1 it then follows that the partially ordered set $X$ contains a maximal element.
H: Common factors of the ideals $(x - \zeta_p^k)$, $x \in \mathbb Z$, in $\mathbb Z[\zeta_p]$ I'm trying to understand a proof of the following lemma (regarding Catalan's conjecture): Lemma: Let $x\in \mathbb{Z}$, let $p$ and $q$ be distinct primes with $p,q>2$, let $G:=\text{Gal}(\mathbb{Q}(\zeta_p):\mathbb{Q})$, and suppose $x\equiv 1\pmod{p}$ and $\lvert x\rvert >q^{p-1}+q$. Then the map $\phi:\{\theta\in\mathbb{Z}:(x-\zeta_p)^\theta\in\mathbb{Q}(\zeta_p)^{*q}\}\rightarrow\mathbb{Q}(\zeta_p)$, $\ $ $\phi(\theta)=\alpha$ such that $\alpha^q=(x-\zeta_p)^\theta$, is injective. I don't understand the following step in the proof: The ideals $(x-\sigma(\zeta_p))_{\sigma \in G}$ in $\mathbb{Z}[\zeta_p]$ have at most the factor $(1-\zeta_p)$ in common. Since $x\equiv 1\pmod{p}$ the ideals do have this factor in common. Could you please explain to me why these two statements are true? AI: In the world of ideals the gcd is the sum. Pick any two ideals $(x-\sigma(\zeta_p))$ and $(x-\sigma'(\zeta_p))$. Then $\sigma(\zeta_p)-\sigma'(\zeta_p)$ will be in the sum ideal. This is a difference of two distinct powers of $\zeta_p$, so it generates the same ideal as one of the $1-\zeta_p^k$, $0<k<p$. All of these generate the same prime ideal as $(1-\zeta_p)$, because the quotient $$\frac{1-\zeta_p^k}{1-\zeta_p}=\frac{1-\zeta_p^k}{1-(\zeta_p^k)^{k'}}=\left(\frac{1-(\zeta_p^k)^{k'}}{1-\zeta_p^k}\right)^{-1}$$ is manifestly a unit in the ring $\mathbb{Z}[\zeta_p]$ (here $kk'\equiv1\pmod p$). Because the ideal $(1-\zeta_p)$ is a non-zero prime ideal, hence maximal, this gives the first claim you asked about. The second claim follows from the well known fact that up to a unit factor (i.e. as a principal ideal) $(p)$ is the power $(1-\zeta_p)^{p-1}$. If $x=mp+1$, $m$ a rational integer, then $(1-\zeta_p)$ divides both $mp$ and $(1-\sigma(\zeta_p))$, and therefore also $x-\sigma(\zeta_p)$.
H: Is it possible to prove that the Grz axiom is valid in a modal frame iff the frame is reflexive and transitive? We need to prove (or disprove?) that $ \square (\square (A \rightarrow \square A) \rightarrow A) \rightarrow A $ is valid in the Kripke modal frame $ F = <S, R> $ iff $R$ is transitive and reflexive. I think that more is required of $R$ for this to be correct. Am I right? AI: Consider the following structure: $ U=\{ w_{1},w_{2},w_{3}\}$, where $R$ consists of all reflexive pairs together with $(w_{3},w_{2})$ and $(w_{2},w_{3})$ (this $R$ is reflexive and transitive), with the following valuation: $v(w_{1},A)=0, v(w_{2},A)=1, v(w_{3},A)=0$. It's easy to check that the Grz axiom is false at $w_3$ in this structure with respect to the given valuation: the antecedent $\square (\square (A \rightarrow \square A) \rightarrow A)$ holds at $w_3$ while $A$ does not. In general, the method of analytic tableaux (http://en.wikipedia.org/wiki/Method_of_analytic_tableaux) is helpful for such issues.
H: Finding $f'(0)$ when $f(x)=\int\limits_0^x\sin\left(\frac{1}{t}\right)dt$ I need to show that $f'(0)=0$ for $$ f(x)=\int\limits_0^x\sin\left(\frac{1}{t}\right)dt $$ But the fundamental theorem of calculus is inapplicable here. What should I do? AI: Here is one approach: Use integration by parts to show that $f(x)=x^2\cos(1/x)-\int_0^x 2t\cos(1/t)dt$ for $x\neq 0$. Use this to show that $\left|\dfrac{f(x)}{x}\right|\leq 2|x|$.
H: Which of these statements about $Q=\frac{1}{100}+\frac{1}{101}+\cdots+\frac{1}{1000}$ is correct? Pick the correct option regarding $Q$. $$Q=\frac{1}{100}+\frac{1}{101}+\cdots+\frac{1}{1000}$$ Pick one option: $Q>1\qquad$ 2. $Q\leq \frac{1}{3}\qquad$ 3. $\frac{1}{3}<Q\leq \frac{2}{3}\qquad$ 4. $\frac{2}{3}<Q\leq 1$ My approach : $$Q = \frac{1}{100}+\frac{1}{101}+\cdots+\frac{1}{1000} > \frac{1}{1000}+\frac{1}{1000}+\cdots+\frac{1}{1000} = \frac{901}{1000}$$ $$\implies \frac{1}{100}+\frac{1}{101}+\cdots+\frac{1}{1000} > \frac{9}{10}$$ So, $Q > 1$. Option 1 is correct. Now, my question is: can this be proved with some other approach? AI: Divide the sum into two parts: $$\frac1{100}+\ldots+\frac{1}{500}+\frac1{501}+\ldots+\frac1{1000}$$ The first part, $100$ to $500$ is such that: $$\frac1{100}+\ldots+\frac{1}{500}>\frac1{500}+\ldots+\frac{1}{500}=\frac{401}{500}$$ The second part, $501$ to $1000$ is such that: $$\frac1{501}+\ldots+\frac{1}{1000}>\frac1{1000}+\ldots+\frac{1}{1000}=\frac{500}{1000}$$ Therefore the sum of $100$ to $1000$ is larger than $\frac{401}{500}+\frac{500}{1000}$, which itself is larger than $1$.
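Note that the bound in the question, $Q > \frac{901}{1000}$, is not by itself enough to conclude $Q>1$; the two-block split in the answer is what pushes the lower bound past $1$. A direct numerical evaluation (a sanity check, not a proof) shows how much room there is:

    Q = sum(1 / n for n in range(100, 1001))
    print(Q)  # ~2.308, comfortably greater than 1, so option 1 is correct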
H: About the stabilizer of an element, $G_\alpha$ After reading some notes about permutation groups, I have come up against this really simple question; I hope it is not a ridiculous one. :) From the first chapters of any book about these kinds of groups, and knowing how a group $G$ acts on a set $\Omega$, we are faced with an important structure, named $G_\alpha$, where $\alpha \in \Omega$. Why do we frequently regard $G_\alpha$ as acting on the set $\Omega-\{\alpha\}$? Why this set, $\Omega-\{\alpha\}$? Of course, with this assumption a majority of theorems can be carried out properly and desirably. Thanks. AI: Consider two transitive groups on the same set: $G = \langle (1,2,3) \rangle$ and $H=\langle (1,2), (1,2,3) \rangle$ both acting on $\Omega = \{1,2,3\}$. How well do $G$ and $H$ swirl the points of $\Omega$? Well, they both do a very thorough job! They are both transitive groups. If the boss wants 1 moved to 3, no problem, $(1,2,3)^2$ will do the job in both $G$ and $H$. Now move 3 back to 2? No problem! $(1,2,3)^2$ again will do the trick. What if the boss goes mad with the power? What if he wants you to move 1 to 2 and at the same time leave 3 alone? In $G$ this cannot be done. Every element other than the identity moves all the points ($G$ is regular). In $H$ this is easy, $(1,2)$ does the job. What is the difference between these groups? They are both transitive, but once you start saying where one point goes, they react very differently about what they can do to the rest. In other words, $G_3=\{()\}$, but $H_3=\langle (1,2) \rangle$. The first group cannot do anything, but the second is transitive on $\Omega \setminus \{3\}$. To understand a group action, it is not enough to just ask if it is transitive. We often want to know how the group can move pairs of points. Consider the actions of $G$ and $H$ on $\Omega \times \Omega$, the set of ordered pairs $\{ (i,j) : 1 \leq i,j \leq 3 \}$. Clearly these actions are not transitive, because $(i,i)^g = (i^g, i^g)$. One orbit is always the "diagonal orbit" $\{ (i,i) : i \in \Omega \}$. What about the rest of $\Omega \times \Omega$? Well, for $G$ there are two orbits: $\{ (1,2), (2,3), (3,1) \}$ and $\{ (2,1), (3,2), (1,3) \}$. $G$ moves them in a circle, and does not change clockwise versus counter-clockwise. We say $G$ is a rank three permutation group. For $H$ there is only one orbit: any $(i,j)$ with $i \neq j$ can be sent to any other such pair. We say $H$ is a 2-transitive permutation group. For groups acting on graphs, we restrict what part of $\Omega \times \Omega$ is allowed and of concern, and "transitive" becomes "vertex-transitive" while "2-transitive" becomes "edge-transitive".
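The contrast between $G$ and $H$ can be reproduced with sympy's permutation groups (points relabeled 0, 1, 2, since sympy acts on $\{0,\ldots,n-1\}$):

    from sympy.combinatorics import Permutation, PermutationGroup

    G = PermutationGroup([Permutation([1, 2, 0])])                           # <(0 1 2)>
    H = PermutationGroup([Permutation([1, 0, 2]), Permutation([1, 2, 0])])   # S_3

    print(G.is_transitive(), H.is_transitive())  # True True
    print(G.stabilizer(2).order())               # 1: the stabilizer in G is trivial
    print(H.stabilizer(2).order())               # 2: <(0 1)>, transitive on the rest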
H: Binomial sum - generating functions Find a closed form for $a_n:=\sum_{k=0}^{n}\binom{n}{k}(n-k)^n(-1)^k$ using generating functions. AI: We have $$\begin{eqnarray*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^n &=& \sum_{k=0}^{n}(-1)^k \binom{n}{n-k}(n-k)^n \\ &=& \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} k^n \\ &=& \left.\sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} (x D)^n x^k\right|_{x=1} \\ &=& \left.(x D)^n \sum_{k=0}^{n} (-1)^{n-k} \binom{n}{k} x^k\right|_{x=1} \\ &=& \left.(x D)^n (x-1)^n\right|_{x=1}, \end{eqnarray*}$$ where $D = \partial/\partial x$. But $$(x D)^n = x^n D^n + (\mathrm{const}) x^{n-1}D^{n-1} + \ldots$$ and $D^k(x-1)^n|_{x=1} = 0$ unless $k\ge n$. Therefore, $\left.(x D)^n (x-1)^n\right|_{x=1} = D^n(x-1)^n|_{x=1} = n!$, and so $$\begin{equation*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^n = n!.\tag{1} \end{equation*}$$ The argument above immediately implies that $$\sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^m = 0$$ if $m\in\mathbb{N}$ and $m<n$. It also gives us a method to calculate the sum for $m>n$. Sums of this type are related to the Stirling numbers of the second kind, $$\begin{eqnarray*} \sum_{k=0}^{n}(-1)^k \binom{n}{k}(n-k)^m &=& \sum_{k=0}^{n}(-1)^{n-k} \binom{n}{k}k^m \\ &=& n! \left\{m\atop n\right\}. \end{eqnarray*}$$ The operator $(x D)^n$ and its connection to the Stirling numbers has been discussed here.
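The identity (1), and the vanishing for $m < n$, are easy to spot-check numerically:

    from math import comb, factorial

    def S(n, m):
        return sum((-1)**k * comb(n, k) * (n - k)**m for k in range(n + 1))

    print(S(6, 6), factorial(6))  # 720 720, matching (1)
    print(S(6, 4))                # 0, since m < n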
H: How to define rational composition of functions, appropriately? Let $f: X \rightarrow X$ and $f^{1/n}:X \rightarrow X$ such that $f$ is equal to $n$ times composition of $f^{1/n}$, i.e. $$f^{1/n} \circ f^{1/n}\circ \cdots \circ f^{1/n} = f $$ then $f^{1/n}$ is an $n$-th root of $f$. Such a function may well not exist in general, and if it does, might not be unique. But is there a universal way to construct unique n-th roots for arbitrary $f$? AI: No. For a complex number $a$, let $f_a : \mathbb{C} \to \mathbb{C}$ be defined by $z \mapsto az$. Then the nice $n^{th}$ roots of $f_a$ are given by $f_b$ where $b^n = a$ and there's no canonical way to choose a root of this polynomial. You can ask your question more generally: if $M$ is a monoid, when is it possible to define rational powers of elements of $M$ in a nice way? Depending on what you mean by "nice way" this is equivalent to $M$ being a vector space over $\mathbb{Q}$ (switching to additive notation). If you only want to demand that $n^{th}$ roots are unique and in addition $M$ is an abelian group, this is equivalent to $M$ being torsion-free, and if you only want to demand that $n^{th}$ roots exist (and in addition $M$ is an abelian group), this is equivalent to $M$ being divisible.
H: Is it true that the Laplace-Beltrami operator on the sphere has compact resolvents? We consider the Riemannian structure on the sphere $\mathbb{S}^n$ seen as a submanifold of $\mathbb{R}^{n+1}$ and the Laplace-Beltrami operator defined on $C^\infty(\mathbb{S}^n)$ by the equation $$\Delta f= -\operatorname{div}\operatorname{grad} f = -\frac{1}{\sqrt{g}}\frac{\partial}{\partial u^i}\left(\sqrt{g}g^{ij}\frac{\partial f}{\partial u^j}\right).$$ We regard $C^{\infty}(\mathbb{S}^n)$ as a dense subspace of the Hilbert space $L^2(\mathbb{S}^n)$. Question Is it true that $\Delta$ has compact resolvents, meaning that there exists $\lambda \in \mathbb{R}$ such that the closure of $\Delta-\lambda$ is invertible and its inverse operator is compact? I think that we can easily work out the special case $n=1$: in this case the equation $\Delta u-\lambda u = v$ reduces to the standard Sturm-Liouville problem $$\begin{cases} -\frac{d^2}{dt^2}u-\lambda u = v & t\in (-\pi, \pi) \\ {}\\ u(-\pi)=u(\pi) \\ u'(-\pi)=u'(\pi)\end{cases}$$ which admits Green's function for, say, $\lambda=-1$ (actually any $\lambda \notin \{0, 1, 4, 9 \ldots\}$ will do). So the inverse of $-d^2/dt^2+1$ is an integral operator and in particular it is compact. I suspect that, similarly, the operator $\Delta_{\mathbb{S}^n}+1$ admits Green's function in any dimension $n$, but I am unable to prove (or disprove) this. Thank you for reading. AI: This might be a stupid way to argue, but anyway. The solution of $-\mathrm{div}\ \mathrm{grad}\ f+f=v$ is the unique minimizer of the energy $E_v(f)=\int_{S^n}|\mathrm{grad}\ f|^2+|f-v|^2$. Since $0$ is admissible in minimization, the minimizer satisfies $E_v(f)\le E_v(0)=\int_{S^n}|v|^2$. Hence, every function $f\in L^2$ such that $-\mathrm{div}\ \mathrm{grad}\ f+f=v$ with $\|v\|_{L^2}\le 1$ satisfies $\int_{S^n}|\mathrm{grad}\ f|^2\le 1$. By the Rellich-Kondrachov theorem, the set of such $f$ is precompact in $L^2$.
H: Are Complex Substitutions Legal in Integration? This question has been irritating me for awhile so I thought I'd ask here. Are complex substitutions in integration okay? Can the following substitution used to evaluate the Fresnel integrals: $$\int_{0}^{\infty} \sin x^2\, dx=\operatorname {Im}\left( \int_0^\infty\cos x^2\, dx + i\int_0^\infty\sin x^2\, dx\right)=\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)$$ Letting $ix^2=-z^2 \implies x=\pm\sqrt{iz^2}=\pm \sqrt{i}z \implies z=\pm \sqrt{-i} x \implies dx = \pm\sqrt{i}\, dz$ Thus the integral becomes $$\operatorname {Im}\left(\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(z^2)\, dz\right)$$ This step requires some justification, and I am hoping someone can help me justify this step as well: $$\pm \sqrt{i}\int_0^{\pm\sqrt{-i}\infty} \exp(z^2)\, dz=\pm\sqrt{i}\int^\infty_0\exp(z^2)\, dz=\pm\sqrt{i}\left(\frac{\sqrt{\pi}}{2}\right)$$ Thus $$\operatorname {Im}\left(\int_0^\infty \exp(ix^2)\, dx\right)=\operatorname {Im}\left(\pm\frac{\sqrt{i\pi}}{2}\right)=\operatorname {Im}\left(\pm\frac{(1+i)\sqrt{\pi}}{2\sqrt{2}}\right)=\pm\frac{1}{2}\sqrt{\frac{\pi}{2}}$$ We find that the correct answer is the positive part (simply prove the integral is positive, perhaps by showing the integral can be written as an alternating sum of integrals). Can someone help justify this substitution? Is this legal? AI: Let's consider the legality of doing an actual u-substitution, such as $z = \sqrt{i} x$. Not only must the integrand be rewritten, so must the limits of integration. In the original definite integral you have $x$ going from $0$ to $\infty$. Of course this then gives a path of integration for $z$, but it's not sufficient to have just limits $0$ to $\infty$ in the complex plane to specify that path. So this would be a gray area where the limitations of your notation could let you down! In the complex plane there are many paths from $0$ to $\infty$, even many straight such paths.
H: Does the series converge or diverge? $\sum_{n=1}^\infty\frac{4^n+n}{n!}$ Does the following series converge or diverge? $$\sum_{n=1}^\infty\frac{4^n+n}{n!}$$ AI: $$\sum_{n=1}^{\infty} \dfrac{4^n + n}{n!} = \sum_{n=1}^{\infty} \dfrac{4^n}{n!} + \sum_{n=1}^{\infty} \dfrac{n}{n!} = \exp(4)-1 + \sum_{n=1}^{\infty} \dfrac1{(n-1)!} = \exp(4)-1 + \exp(1)$$
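A partial-sum check of the closed form (terms beyond $n = 50$ are negligible):

    from math import exp, factorial

    partial = sum((4**n + n) / factorial(n) for n in range(1, 51))
    print(partial)               # 56.3164...
    print(exp(4) - 1 + exp(1))   # same value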
H: Perfect square modulo $n = pq$ I've been stuck on this problem for a while. Any insights to the problem would be great! We start with $n = pq$, where $p, q$ are distinct odd primes. In addition, $\gcd(a,n) =1$. If $x^2 \equiv a \pmod n$ has any solutions, then it has four solutions (where the book does not specify what $a$ is) So far, we have that $a \equiv x^2 \pmod {pq}$. This implies that $a \equiv x^2 \pmod p$ and $a \equiv x^2 \pmod q$. I am stuck at this point. So far my thoughts are if $a \equiv x^2 \pmod {pq}$ has a solution, then $a \equiv x^2 \pmod p$ and $a \equiv x^2 \pmod q$ have solutions (not sure if this is true & if it is true, I couldn't find anything to refer to in my textbook). From here, we can say that $x \equiv \pm c_1 \mod p$ and $x \equiv \pm c_2 \mod q$. This part of my class is dealing with the Chinese Remainder Theorem. Thanks again. Edit: I added a little more stuff on what I got so far. AI: Hint $\rm\ (\pm x,\pm x)^2 \equiv (a,a)\ mod\ (p,q).\ $ There are $4$ possible sign combinations.
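A tiny numeric illustration with arbitrarily chosen $p = 7$, $q = 11$, $a = 15$ (a square modulo both primes):

    p, q = 7, 11
    n = p * q
    a = 15  # gcd(15, 77) = 1; 15 % 7 == 1 and 15 % 11 == 4, both squares

    roots = sorted(x for x in range(n) if x * x % n == a)
    print(roots)  # [13, 20, 57, 64]: four solutions, paired as x and n - x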
H: Simple equation solving/transformation I'm really stuck here, and feel quite dumb, because it looks so simple. However, I'm sure I'm overlooking something here. I have this equation: $x=c_0y + c_1z$ where $x,y,z$ are variables and $c_0,c_1$ are constants. My goal is to get an equivalent equation of the form $x-y = ?$ I went ahead and tried moving the components around, but whatever I do, I don't get it in the desired form: $x - c_0y = c_1z$ $\frac{x}{c_0} - y = \frac{c_1z}{c_0}$ ... The constants just won't move to the other side... I just can't get my head around this. Any help or hint is very appreciated! Thanks in advance! AI: Starting with $$x=c_0 y +c_1 z \; , \; c_0 \neq 1$$ the best you can do is $$x-y=(c_0-1)y+c_1 z$$ There is no way to obtain $x-y$ isolated on one side only (unless obviously $c_0=1$). Think about it more concretely, with $$x=2y+az$$ $$x=-4y+az$$ $$x=\pi y+az$$ and similar examples.
H: Given k % y, how can I adjust the dividend (k) to preserve the modulo when the divisor (y) is incremented by one? In a programming algorithm, I'm using the result of k % y. I need to understand how to adjust k when the value of y is incremented by one to preserve the same modulo result. In other words, solve for x: (k + x) % (y + 1) = k % y Empirically, I see that I can always adjust k by adding a constant that depends on both the value of k and the value of y. I believe the solution involves computing some sort of "distance across y" between k % y and k % (y + 1). Hoping this is easy for someone who studied mathematics, rather than comp sci, as I'm out of my league here. Thanks! AI: You have $k=my+r$ for some $m,r$ and $k\%y=r=k-my$. Now if you want $k'\%(y+1)=r$, you need $k'=m'(y+1)+r$. One way to assure this is to have $m'=m, k'=k+m$. Unfortunately, just having $k\%y$ you don't know $m$. Another way to assure this is to let $k=r$ but it seems you want $k'$ to depend upon $k$ somehow. Yet a third way is to set $k'=k(y+1)+r$ Do any of these meet your needs?
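In code, the third option might look like this (the function name adjust is my own):

    def adjust(k, y):
        """Return k2 with k2 % (y + 1) == k % y, via k2 = k*(y + 1) + (k % y)."""
        r = k % y
        k2 = k * (y + 1) + r
        assert k2 % (y + 1) == r
        return k2

    print(17 % 5, adjust(17, 5) % 6)  # 2 2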
H: Tricky derivative with chain rule. Let $z = 1/x$ and $y = f(z)$, find $\dfrac{d^2y}{dx^2}$ So the answer was $$\frac{\mathrm{d} y}{\mathrm{d} x} = \frac{\mathrm{d} y}{\mathrm{d} z}\frac{\mathrm{d}z }{\mathrm{d}x }$$ Where $$\frac{\mathrm{d} y}{\mathrm{d} x} = \frac{\mathrm{d} y}{\mathrm{d} z}\frac{\mathrm{d}z }{\mathrm{d}x } =-\frac{\mathrm{d} y}{\mathrm{d} z}\frac{1}{x^2} = -\frac{\mathrm{d} y}{\mathrm{d} z}z^2$$ $$\dfrac{d^2y}{dx^2} = \frac{\mathrm{d} }{\mathrm{d} x}\left ( \frac{\mathrm{d} y}{\mathrm{d} x} \right ) = \frac{\mathrm{d} }{\mathrm{d} x}\left ( \frac{\mathrm{d} y}{\mathrm{d} z}\frac{\mathrm{d}z }{\mathrm{d}x }\right ) = \frac{\mathrm{d} ^2y}{\mathrm{d} z^2}\left (\frac{\mathrm{d} z}{\mathrm{d} x} \right )^2 + \frac{\mathrm{d} y}{\mathrm{d} z}\frac{\mathrm{d} ^2z}{\mathrm{d} x^2} $$ Could someone explain to me how on earth did $\frac{\mathrm{d} ^2y}{\mathrm{d} z^2}\left (\frac{\mathrm{d} z}{\mathrm{d} x} \right )^2$ appear? EDIT: I am going to show what I did. $$\frac{\mathrm{d} }{\mathrm{d} x}\left ( \frac{\mathrm{d} y}{\mathrm{d} z}\frac{\mathrm{d}z }{\mathrm{d}x }\right ) = \frac{d}{dx}\left(\frac{dy}{dz}\right) \frac{dz}{dx}+\frac{d ^2z}{dx^2}\frac{dy}{dz} = \frac{d}{dx}\left(\frac{-1}{z^2} \frac{dy}{dx}\right) \frac{dz}{dx}+\frac{d ^2z}{dx^2}\frac{dy}{dz}$$ Basically I don't understand how $$\frac{d}{dx}\left(\frac{-1}{z^2} \frac{dy}{dx}\right)$$ could turn into $$\frac{\mathrm{d} ^2y}{\mathrm{d} z^2}\left (\frac{\mathrm{d} z}{\mathrm{d} x} \right )^2$$ In fact I got $$\frac{d}{dx}\left(\frac{-1}{z^2} \frac{dy}{dx}\right) = 2z^{-3}\frac{\mathrm{d} z}{\mathrm{d} x}\frac{\mathrm{d} y}{\mathrm{d} x} - \frac{1}{z^2}\frac{\mathrm{d} ^2 y}{\mathrm{d} x^2}$$ AI: You need to use the product rule: $$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(\frac{dy}{dz}\frac{dz}{dx}\right) $$ $$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(\frac{dy}{dz}\right) \frac{dz}{dx}+\frac{d}{dx}\left(\frac{dz}{dx}\right)\frac{dy}{dz} $$ So that $$\frac{d^2y}{dx^2}=\frac{d}{dx}\left(\frac{dy}{dz}\right) \frac{dz}{dx}+\frac{d ^2z}{dx^2}\frac{dy}{dz} $$ Note that you're dealing implicitly with the derivatives, so if you substitute for any of the functions, you'll crash the whole manipulation (you can do it afterwards, but first get the expression, understand?) So you need to find: $$D=\frac{d}{{dx}}\left( {\frac{{dy}}{{dz}}} \right)$$ By the chain rule, this "simply" is: $$\frac{d}{{dx}}\left( {\frac{{dy}}{{dz}}} \right) = \frac{d}{{dz}}\left( {\frac{{dy}}{{dz}}} \right)\frac{{dz}}{{dx}} = \frac{{{d^2}y}}{{d{z^2}}}\frac{{dz}}{{dx}}$$ So you get $$\frac{{{d^2}y}}{{d{x^2}}} = \frac{{{d^2}y}}{{d{z^2}}}{\left( {\frac{{dz}}{{dx}}} \right)^2} + \frac{{{d^2}z}}{{d{x^2}}}\frac{{dy}}{{dz}}$$ Now you can calculate all those expressions separately and get what you want.
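If the abstract manipulation still feels slippery, sympy can verify the final formula for a concrete choice of $f$ (here $\sin$, an arbitrary pick):

    import sympy as sp

    x, w = sp.symbols('x w', positive=True)
    z = 1 / x

    zx, zxx = sp.diff(z, x), sp.diff(z, x, 2)
    # rhs: f''(z)*(dz/dx)**2 + f'(z)*(d^2z/dx^2), with f = sin
    rhs = (sp.diff(sp.sin(w), w, 2).subs(w, z) * zx**2
           + sp.diff(sp.sin(w), w).subs(w, z) * zxx)
    lhs = sp.diff(sp.sin(z), x, 2)  # direct second derivative of sin(1/x)
    print(sp.simplify(lhs - rhs))   # 0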
H: Finding a function and a sequence of recursively defined sets that gives a counterexample. Let $X$ be a set and $f:X\rightarrow X$. Define the sequence $(A_n)$ recursively by $A_1=X$ and $A_n=f(A_{n-1})$ for $n>1$. Let $A=\bigcap_{n\in N}A_n$. My problem is asking me to show $f(A)⊊A$. I have already shown $f(A)\subset A$, but I'm having a hard time finding a function $f$ and sequence $(A_n)$ such that $f(A)\neq A$. One of my attempts: Let $X=A_1=[0,2]$, and for $n>1$ take $f(A_n)=\left[0,\frac{1}{2}+\frac{1}{n}\right]$. *(not sure if I can do this next step) "Take $f\left(\left[0,\frac{1}{2}\right]\right)=\{0\}$". We have $A=\left[0, \frac{1}{2}\right]$. Therefore, $f(A)=\{0\}\neq \left[0,\frac{1}{2}\right]=A$. All of my counterexamples led to a similar argument above, but I'm not sure if it is a valid argument. My only hope is that I am taking $f$ to be a dilation fixing 0 each time. AI: I think you have to define an $f:X\to X$, and not define only the images (because such an $f$ does not necessarily exist). But in your example, write $[0,2]=\left[0,\frac{1}{2}\right]\cup\left(\bigcup_{n\in\mathbb{N}}\left(\frac{1}{2}+\frac{1}{n+1},\frac{1}{2}+\frac{1}{n}\right]\right)\cup \left(\frac{3}{2},2\right]$, and define $f:X\to X$: $f(x)=0,\forall x\in\left[0,\frac{1}{2}\right];$ (That is, $f$ sends $\left[0,\frac{1}{2}\right]$ to $\{0\}$) For each $n\in\mathbb{N}$, $$f(x)=\left(\frac{1}{2}+\frac{1}{n+1}\right)\left(\frac{x-\left(\frac{1}{2}+\frac{1}{n+1}\right)}{\frac{1}{n}-\frac{1}{n+1}}\right),\forall x\in\left(\frac{1}{2}+\frac{1}{n+1},\frac{1}{2}+\frac{1}{n}\right]$$ Note: That is, $f$ sends $\left(\frac{1}{2}+\frac{1}{n+1},\frac{1}{2}+\frac{1}{n}\right]$ to $\left(0,\frac{1}{2}+\frac{1}{n+1}\right]$ $f(x)=3\left(x-\frac{3}{2}\right),\forall x\in\left(\frac{3}{2},2\right]$ (That is, $f$ sends $\left(\frac{3}{2},2\right]$ to $\left(0,\frac{3}{2}\right]$) So, $f$ satisfies your requirements, and then $f(A)\ne A$.
H: Find $\lim\limits_{x \to \infty} \frac{\sqrt{x^2 + 4}}{x+4}$ $$\lim\limits_{x \to \infty} \frac{\sqrt{x^2 + 4}}{x+4}$$ I have tried multiplying by $\frac{1}{\sqrt{x^2+4}}$ and it's reciprocal, but I cannot seem to find the solution. L'Hospital's doesn't seem to work either, as I keep getting rational square roots. AI: Hint $$\mathop {\lim }\limits_{x \to \infty } \dfrac{{\sqrt {{x^2} + 4} }}{{x + 4}} = \mathop {\lim }\limits_{x \to \infty } \frac{{\dfrac{{\sqrt {{x^2} + 4} }}{x}}}{{\dfrac{{x + 4}}{x}}} = \mathop {\lim }\limits_{x \to \infty } \dfrac{{\sqrt {\dfrac{{{x^2} + 4}}{{{x^2}}}} }}{{1 + \dfrac{4}{x}}} = \mathop {\lim }\limits_{x \to \infty } \frac{{\sqrt {1 + \dfrac{4}{{{x^2}}}} }}{{1 + \dfrac{4}{x}}}$$
H: Smallest number in a set A is the set of seven consecutive two-digit numbers, none of them being a multiple of 10. Reversing the numbers in set A forms the numbers in set B. The difference between the sum of elements in set A and those in set B is 63. The smallest number in set A can be: I tried to write some sets, reverse them and calculate their values, but I am not able to arrive at the answer. AI: Let $A = \{10a+b,10a+b+1,10a+b+2,10a+b+3,10a+b+4,10a+b+5,10a+b+6\}$, where $a \in \{1,2,\ldots,9\}$ and, since none of them is divisible by $10$, we have that $b\in\{1,2,3\}$. Then $$B =\{10b+a,10b+a+10,10b+a+20,10b+a+30,10b+a+40,10b+a+50,10b+a+60\}$$ The sum of elements in $A$ is $70a+7b+21$ and the sum of elements in $B$ is $70b+7a+210$. We are given $$\vert 63(a-b) - 189 \vert = 63 \implies \vert (a-b) - 3 \vert = 1$$ If $b=1$, we get that $\vert a-4\vert = 1 \implies a=3,5$. If $b=2$, we get that $\vert a-5\vert = 1 \implies a=4,6$. If $b=3$, we get that $\vert a-6\vert = 1 \implies a=5,7$. The smallest number in $A$ is obtained by choosing $a=3$ and $b=1$. Hence, the smallest number in the set $A$ is $31$.
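The case analysis is confirmed by exhaustive search (reverse2 is a helper name of my choosing):

    def reverse2(n):
        return (n % 10) * 10 + n // 10

    starts = []
    for s in range(10, 94):
        A = list(range(s, s + 7))
        if any(n % 10 == 0 for n in A):
            continue  # skip sets containing a multiple of 10
        B = [reverse2(n) for n in A]
        if abs(sum(A) - sum(B)) == 63:
            starts.append(s)

    print(starts)       # [31, 42, 51, 53, 62, 73]
    print(min(starts))  # 31, the smallest number in set A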
H: Is $M$ the midpoint of this line? $${{\bullet\!\!\! -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\bullet\!\!\! -\!\!\!-\!\!\!-\!\!\!-\!\!\!-\!\!\!\bullet}\atop O \;\quad\quad M\quad\quad\; P}$$ Given that $OM = x + 8$, $MP = 2x - 6$, $OP = 44$, is $M$ the midpoint of $OP$? AI: HINT $$OM + MP = OP$$ Note that $$OM + MP = OP$$ and hence we get that $3x+2 = 44 \implies x =14$. Hence, $OM = 14 + 8 = 22 = \dfrac{44}2 = \dfrac{OP}2$. Hence, $M$ is indeed the midpoint.
H: Please recommend books on calculus, linear algebra, statistics for someone trying to learn Probability Theory and Machine Learning? I am tackling some topics in Probability Theory and Machine Learning and, while I have plenty of resources dedicated to those disciplines, I am lacking a good basic math foundation. Does anyone know any good, concise math books that can help introduce the foundations (calculus, linear algebra, statistics) of these disciplines to someone whose exposure to math is very limited? Of particular interest would be a book that could relate these concepts to someone familiar with programming, to leverage that mode of thinking to relate the essential ideas. AI: You may want to try some online videos, which may be helpful. Here is a link to a free linear algebra book with solutions: http://joshua.smcvt.edu/linearalgebra/
H: Powers of $2 \times 2$ matrices expressed in linear form I recently reopened an old high school math textbook and came upon the matrices unit. Some of the questions were those rewrite-in-linear-form problems: given, say, $M^2 = 2M - I$, express in linear form ($aM+bI$) the matrices $M^3$ and $M^4$. The method of solution would be to express $M^n$ as $M^{n-1}M$ and continue expanding each consecutive set of powers until you end up substituting in the original $M^2 = 2M-I$, for answers of $M^3 = 3M-2I$ and $M^4 = 4M-3I$. Further, induction proof problems could also be posed, such as proving that $M^n = nM - (n-1)I$ for all $n \in \mathbb{Z}^+$. (Technically, it could also be for $n \in \mathbb{Z}$ if you consider $M^{-n} \equiv (M^{-1})^n \equiv (M^n)^{-1}$, which is relevant.) I decided to take this a step further, and see if I could take powers of $M$ where $M^2 = aM + bI$. After some experimentation, one can define two related sequences $(a_n)$ and $(b_n)$ which describe linear form coefficients of $M^n$. Trivially, $M^1 = M = 1M + 0I$ and $M^0 = MM^{-1} = I = 0M + 1I$, which defines $a_0$, $a_1$, $b_0$, and $b_1$ as 0, 1, 1 and 0, respectively. From this, and by the properties of expanding powers through $M^2 = a_2M + b_2I$, $$\begin{align} &\text{Basic} \\ a_0 &= b_1 = 0 \\ a_1 &= b_0 = 1 \\ \\ &\text{Following} \\ a_{n+1} &= a_2 a_n + b_n \\ b_{n+1} &= b_2 a_n \\ \\ &\text{Preceding} \\ a_{n-1} &= \frac{1}{b_2}b_n \\ b_{n-1} &= a_n - \frac{a_2}{b_2} b_n \\ \end{align}$$ With a little playing around, one finds that $$a_n = a_2 a_{n-1} + b_2 a_{n-2}$$ which looks a little like the Fibonacci sequence, and since $b_n$ is already defined in terms of $a_n$, the crux of the problem lies in a non-recurring equation for $a_n$, given $a_2$ and $b_2$ as parameters. Is there a formula for $a_n$, and subsequently, $b_n$? Am I on the right track with this method or is there a more elegant way to exploit these sequences? AI: Nice observation! What you have here are indeed generalized Fibonacci sequences, and they do indeed have closed forms generalizing Binet's formula. Since you're already taking powers of matrices, learn just a little more linear algebra and you'll understand how to do this when those matrices are diagonalizable. It would also be a good idea to learn about the Cayley-Hamilton theorem. The general statement is a little messy, but in the special case that the equation $t^2 = at + b$ has distinct roots $r_1, r_2$, then $$a_n = \frac{r_1^n - r_2^n}{r_1 - r_2}$$ and $b_n$ has a similar expression. The idea is to look at the vector space of sequences $c_n$ satisfying $$c_{n+2} = a c_{n+1} + b c_n.$$ This vector space has dimension $2$ because such a sequence is uniquely determined by $c_0$ and $c_1$, but on the other hand $c_n = r_1^n$ and $c_n = r_2^n$ both clearly lie in it. It follows that these two sequences in fact form a basis of this vector space, so any solution can be expressed as a linear combination of them.
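A numeric comparison of the recurrence against the closed form (taking $a_2 = b_2 = 1$, which makes $a_n$ the Fibonacci numbers):

    a2, b2 = 1.0, 1.0
    disc = (a2**2 + 4 * b2) ** 0.5
    r1, r2 = (a2 + disc) / 2, (a2 - disc) / 2

    a = [0.0, 1.0]                           # a_0, a_1
    for _ in range(2, 11):
        a.append(a2 * a[-1] + b2 * a[-2])    # a_n = a2*a_{n-1} + b2*a_{n-2}

    closed = [(r1**n - r2**n) / (r1 - r2) for n in range(11)]
    print(a)       # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    print(closed)  # same values, up to floating-point error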
H: how to find maximal linearly independent subsets Given a set of vectors, we can compute the number of independent vectors by calculating the rank of the set, but my question is how to find a maximal linearly independent subset. Thanks! AI: One method is: Place the vectors as columns of a matrix. Call this matrix $A$. Use Gaussian elimination to reduce the matrix to row-echelon form, $B$. Identify the columns of $B$ that contain the leading $1$s (the pivots). The columns of $A$ that correspond to the columns identified in step 3 form a maximal linearly independent set of our original set of vectors. Another method is: Let $A$ be your multiset of vectors, and let $B=\varnothing$, the empty set. Remove from $A$ any repetitions and all zero vectors. If $A$ is empty, stop. This set is a maximal linearly independent subset of $A$. Otherwise, go to step 4. Pick a vector $\mathbf{v}$ from $A$ and test to see if it lies in the span of $B$. If $\mathbf{v}$ is in the span of $B$, replace $A$ with $A-\{\mathbf{v}\}$, and do not modify $B$; then go back to step 3. If $\mathbf{v}$ is not in the span of $B$, replace $A$ with $A-\{\mathbf{v}\}$ and replace $B$ with $B\cup\{\mathbf{v}\}$. Then go back to step 3. When step 3 instructs you to stop, $B$ contains a maximal linearly independent subset of $A$.
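The first method in code, using sympy's rref to find the pivot columns (the example vectors are an arbitrary choice, with the second a multiple of the first):

    import sympy as sp

    vectors = [sp.Matrix([1, 2, 3]),
               sp.Matrix([2, 4, 6]),   # 2 * the first vector: dependent
               sp.Matrix([0, 1, 1])]

    A = sp.Matrix.hstack(*vectors)     # vectors as columns
    _, pivots = A.rref()
    print(pivots)                      # (0, 2)
    maximal = [vectors[i] for i in pivots]  # a maximal independent subset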
H: paradox of square matrix I remember that every embedding is injective, and every projection is surjective. For square matrices, they're both embeddings and projections, which means they're bijective, so they should be invertible. But obviously, not every square matrix is invertible. I don't know which part is wrong in my logic. Is not every projection surjective? Or are square matrices not embeddings or projections? AI: Consider the simplest example: $$M: \mathbb{R}^2 \longrightarrow \mathbb{R}^2$$ defined by the matrix $M = \begin{bmatrix} 0 & 0 \\ 0 & 0\end{bmatrix}$ is neither injective nor surjective. So square matrices needn't be either injective or surjective.
H: Self-Linking Number on 3-Manifolds We can assign a framing to a knot $K$ (in some nice enough space $M$) in order to calculate the self-linking number $lk(K,K)$. But of course it is not necessarily canonical, as added twists in your vector field can remove/add crossings. Two things are stated in Witten's QFT paper on the Jones polynomial, which I do not quite see: 1) On $S^3$ we do have a canonical framing of knots, by requesting that $lk(K,K)=0$. Why? I must be understanding this incorrectly, because if we decide the framing by requiring $lk(K,K)=0$, so that the framing has whatever twists it needs to accomplish this, then aren't we making a choice?? We could have simply required $lk(K,K)=n$ for any integer $n$. If $n> 0$ does there then exist multiple possible framings? 2) For general 3-manifolds, we can have $lk(K,K)$ ill-defined or it can be a fixed fraction (modulo $\mathbb{Z}$) so that any choice of framing won't make it $0$. What are some examples? When is it possible to set a fixed fraction? Is there a relation between the 3-manifold $M$ and the fractional value you can assign to $lk(K,K)$? AI: I see my answer wasn't so clear. Let me try again. Your first comment says that given a knot $K:S^1\subset M$ in a 3-manifold, with a choice of framing, i.e. normal vector field to $K$, you can calculate the self-linking number. My first remark is that this is only possible if the knot is nullhomologous, i.e. represents $0$ in $H_1(M)$. For example, there is no self-linking number of the core $0\times S^1\subset D^2\times S^1$ of a solid torus, no matter how you frame it. If $K$ is nullhomologous, then depending on how you think of homology you see that there is a 2-chain with boundary the knot $K$. It is true, but a bit more work, to see that in fact there exists an oriented embedded surface $C\subset M$ with boundary $K$ (so you can take the 2-chain to be the sum of the triangles in a triangulation of $C$). Then given any other knot $L$ disjoint from $K$ (for example a push-off of $K$ with respect to some framing), the intersection of $C$ with $L$ is by definition $lk(K,L)$, and it is an integer. You may worry about whether it is independent of the choice of $C$, and the answer is yes if $L$ is also nullhomologous, or more generally torsion (i.e. finite order) in $H_1(M)$; moreover in this case it is also symmetric, $lk(K,L)=lk(L,K)$. Notice that no framing of $K$ or $L$ was used to define $lk(K,L)$. Now to answer your questions. Since $H_1(S^3)=0$, every knot in $S^3$ is nullhomologous. Thus any two-component link in $S^3$ has a well defined integer linking number. You are considering the 2-component link determined by $K$ and a normal framing: the normal framing is used to push off $K$ to get $L=f(K)$. As you note, changing the framing changes the linking number, and in fact by twisting once over a small arc in $K$ you can change it by $\pm1$. Thus there is some framing $f$ so that $lk(K, f(K))=0$; this is the canonical framing (typically called the "0-framing"). It makes sense in $S^3$, or any 3-manifold with $H_1(M)=0$. For your second question, you are referring to a slightly different concept, which is the linking pairing $lk:Torsion(H_1(M))\times Torsion(H_1(M))\to Q/Z$. It is defined as follows: given torsion classes $a,b$, some integer $n$ (for example the order of $Torsion(H_1(M))$) has the property that $na$ is nullhomologous. Thus $na$ is represented by a nullhomologous knot, call it $K$.
$b$ is also represented by a knot say $L$, which can be perturbed to be disjoint from $K$. Then define $lk(a,b)$ to be $(1/n) lk(K,L)$ mod $Z$, with $lk(K,L)$ as above. For example, if $P$ is a knot in a lens space $M=L(p,q)$ with $H_1(M)=Z/p$, you could take $a=[P]$ and $b=[P]$ in $H_1(M)$, and then $lk(a,b)=q n^2/p$ for some integer $n$ that depends on $a$. Note that the answer (mod Z!) is independent of how you push $P$ off itself, in particular, the framing of $P$ is irrelevant, and you'll never get $0$ unless $a=0$ (i.e. $P$ is nullhomologous). Note also that if you don't mod out by the integers then the linking pairing is not well defined.
H: Proving $\sup\left\{ r\in\mathbb{Q}:r^{2}<3\right\}=\sqrt{3}$ Let $E=\left\{ r\in\mathbb{Q}:r^{2}<3\right\}$. Prove that $\sup E=\sqrt{3}$. Since $E$ is bounded from above by $\sqrt{3}$ and is nonempty, $\alpha:=\sup E$ must exist by the Least Upper Bound Principle. Now I am stuck. Suppose $\alpha<\sqrt{3}$. Then what? Edit: It just hit me. If $\alpha < \sqrt{3}$ then since the rationals are dense in $\mathbb{R}$, there exists $q\in\mathbb{Q}$ such that $\alpha < q <\sqrt{3}$ and hence $\alpha^2 <q^2 <3$, which contradicts the fact that $\alpha$ is the sup of $E$. AI: Let $r\in E$, $r = \dfrac m n $. Then we need to prove that there always exists a $\dfrac{p}{q}$ such that $$ \frac{m}{n}<\frac p q < \sqrt 3$$ By exploiting the special properties of $\sqrt 3 $, we can do it: We start with $$\eqalign{ & r < \sqrt 3 \cr & r + 1 < \sqrt 3 + 1 \cr & \frac{1}{{r + 1}} > \frac{1}{{\sqrt 3 + 1}} \cr & \frac{1}{{r + 1}} > \frac{{\sqrt 3 - 1}}{2} \cr & \frac{{r + 3}}{{r + 1}} > \sqrt 3 \cr} $$ But now we've gotten a number greater than $\sqrt 3 $. So what we'll do is apply the process again, and the relation will be reversed: $$\frac{{Q + 3}}{{Q + 1}} < \sqrt 3 $$ Letting ${Q = \frac{{r + 3}}{{r + 1}}}$ will give $$\frac{{2r + 3}}{{r + 2}} < \sqrt 3 $$ All we need to do now is prove that $$r < \frac{{2r + 3}}{{r + 2}}$$ But since $r+2>0$ we get that $$\eqalign{ & {r^2} + 2r < 2r + 3 \cr & {r^2} < 3 \cr} $$ which is true by hypothesis. Thus, given any rational $r$, there exists another rational $q$ such that $$r<q<\sqrt 3$$ where $$q = \frac{{2r + 3}}{{r + 2}}$$ Just to make things clear, I'll add some extra information. The recursion defined as $r_0=1$ $$r_{n+1}=\frac{2 r_n+3}{r_n +2} $$ converges monotonically (increasing) to $\sqrt 3 $. This in particular means that given any $\epsilon>0$ there is an $r \in E$ such that $$\sqrt 3 -\epsilon < r$$ which means $\sqrt 3 $ is the supremum of the set. This also plays the role of showing that $3$ is irrational.
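The recursion at the end converges quickly; a check with exact rational arithmetic:

    from fractions import Fraction

    r = Fraction(1)
    for _ in range(6):
        r = (2 * r + 3) / (r + 2)
        print(r, float(r))  # increasing, approaching sqrt(3) ~ 1.7320508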
H: Silly question about Fourier Transform What is the Fourier Transform of: $$\sum_{n=1}^N A_ne^{\large-a_nt} u(t)~?$$ This is a time-domain function; how can I find its Fourier Transform (continuous, not discrete)? AI: Tips: The Fourier transform is linear; $$\mathcal{F}\left\{\sum_l a_lf_l(t)\right\}=\sum_l a_l\mathcal{F}\{f_l(t)\}.$$ Plug $e^{-ct}u(t)$ into $\mathcal{F}$ and then discard part of the region of integration ($u(t)=0$ when $t<0$): $$\int_{-\infty}^\infty e^{-ct}u(t)e^{-2\pi i st}dt=\int_0^\infty e^{-(c+2\pi i s)t}dt=? $$ Now put these two together.
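A numeric spot-check with arbitrary test parameters, under the $e^{-2\pi ist}$ convention used above (for $\operatorname{Re} c > 0$ the remaining integral evaluates to $1/(c+2\pi is)$, which is a standard result rather than anything specific to this question):

    import numpy as np

    c, s = 1.5, 0.3                         # arbitrary test values, Re c > 0
    t = np.linspace(0, 60, 600001)          # e^{-1.5*60} is negligible at the cutoff
    dt = t[1] - t[0]
    f = np.exp(-(c + 2j * np.pi * s) * t)   # e^{-ct} u(t) e^{-2*pi*i*s*t}, t >= 0
    print((f * dt).sum())                   # ~0.2585 - 0.3248j (Riemann sum)
    print(1 / (c + 2j * np.pi * s))         # closed form, same value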
H: How to solve the maximum subset sum problem? I've just read this article about solving the minimum subset sum problem below using the dynamic programming technique. Given a list of $N$ coins, their values $(V_1, V_2, ... , V_N)$, and the total sum $S$, find the minimum number of coins the sum of which is $S$ (we can use as many coins of one type as we want), or report that it's not possible to select coins in such a way that they sum up to $S$. How can I use the same technique to find the maximum subset of coins that sums up to $S$? And what is the recurrence relation for this 2nd problem? AI: The pseudocode for minimum subset sum given in the article is as follows:

    Set M[i] equal to infinity for all of i
    M[0]=0
    For i = 1 to S
        For j = 0 to N - 1
            If (Vj<=i AND M[i-Vj]+1<M[i])
                Then M[i]=M[i-Vj]+1
    Output M[S]

The inner loop over $j$ is just computing $M[i] := \min\limits_{V_j \le i} M[i-V_j] + 1$. To compute the maximum subset sum, you just have to replace $\min$ with $\max$, which you can do by initializing $M$ to $-\infty$ instead of $\infty$, and changing the less-than sign in the comparison of $M$ to a greater-than sign.
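Here is a minimal Python sketch of the maximum variant just described (the function name and the test values are my own, not from the article):

    # M[i] = maximum number of coins summing exactly to i, or -inf if unreachable
    def max_coins(values, S):
        NEG = float('-inf')
        M = [NEG] * (S + 1)
        M[0] = 0
        for i in range(1, S + 1):
            for v in values:
                if v <= i and M[i - v] != NEG and M[i - v] + 1 > M[i]:
                    M[i] = M[i - v] + 1
        return M[S]  # -inf means S cannot be formed

    print(max_coins([1, 3, 5], 11))  # 11: use eleven coins of value 1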
H: Flirtatious Primes Here's a possibly interesting prime puzzle. Call a prime $p$ flirtatious if the sum of its digits is also prime. Are there finitely many flirtatious primes, or infinitely many? AI: These are tabulated at the Online Encyclopedia of Integer Sequences. It appears to be known that there are infinitely many, and a link is given to a recent paper of Harman. Some high-powered math is involved.
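If you want to experiment, a few lines of Python list the small flirtatious primes (purely an illustration):

    def is_prime(n):
        if n < 2:
            return False
        i = 2
        while i * i <= n:
            if n % i == 0:
                return False
            i += 1
        return True

    # primes below 100 whose digit sum is also prime
    flirty = [p for p in range(2, 100)
              if is_prime(p) and is_prime(sum(int(d) for d in str(p)))]
    print(flirty)  # [2, 3, 5, 7, 11, 23, 29, 41, 43, 47, 61, 67, 83, 89]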
H: For field extensions $F\subsetneq K \subset F(x)$, $x$ is algebraic over $K$ Let $x$ be an element not algebraic over $F$, and $K \subset F(x)$ a subfield that strictly contains $F$. Why is $x$ algebraic over $K$? Thanks a lot! AI: The thing is you need to understand what the subfields in between $F$ and $F(x)$ are. If you have a subfield $K$ of $F(x)$ that contains $F$, then for $K$ to contain "more" elements than $F$ (i.e. to be distinct from $F$, but lie inside $F(x)$), $K$ must contain some element of $F(x)$ that is not in $F$, say $u=\frac{p(x)}{q(x)} \in K \setminus F$. But then $x$ is a root of $p(t) - q(t)\,u \in K[t]$, which is a nonzero polynomial (if it vanished identically, comparing coefficients would force $u\in F$), hence $x$ is algebraic over $K$. EDIT : @Aspirin : I assumed in my question that $F \subset K \subset F(x)$ because I thought it was clear from the context, but perhaps it is not. I think that in this case, if you assume that $K$ is not a subfield of $F$, you can replace $F$ by $F' = K \cap F$ so that $F' \subsetneq K \subsetneq F'(x)$. You clearly have $F' \subsetneq K$ by assumption on $K$ in this case (i.e. that $K$ is not contained in $F$) and you get $K \subsetneq (K \cap F)(x) = K(x) \cap F(x)$ since $K \subseteq K(x)$ and $K \subsetneq F(x)$, hence is in the intersection; but if you had equality, it would mean $K = K(x) \cap F(x)$, and since $x \in K(x) \cap F(x)$, that would mean $x \in K$. In that case OP's question is trivial, so we can suppose we are not in that case. Also note that since $x$ is not algebraic over $F$ it is certainly not algebraic over $F'$. Therefore $F' \subsetneq K \subsetneq F'(x)$ and you can use the arguments above to show that $x$ is algebraic over $K$. EDIT2 : I am leaving the preceding edit there because I still don't understand why it breaks down in the case $K \subseteq F$, which shouldn't work if we are still sane around here. I am more convinced that you need to assume $F \subsetneq K$ than that my edit gives something good. EDIT3 : After all those edits and reading the comments, I modified the first EDIT so that the case where $K \subseteq F$ is clearly not possible. The most general case where it would work is when $K \subseteq F(x)$ but $K$ is not a subfield of $F$. A slight detail escaped my eye and Dylan Moreland caught it, thanks to him. Note that this also proves that $x$ is algebraic over $K$ if and only if $K$ is not a subfield of $F$ (assuming $K$ is described as in OP's question). Hope that helps,
H: $f(x)=x^n+5x^{n-1}+3$ can't be expressed as the product of two polynomials Let $f(x)=x^n+5x^{n-1}+3$ where $n\geq1$ is an integer. Prove that $f(x)$ can't be expressed as the product of two polynomials each of which has all its coefficients integers and degree $\geq1$. If the condition that each polynomial must have all its coefficients integers were not there, then I would only need to show that $f(x)$ is irreducible over the real numbers. But since this condition is given, we can't exclude the case when $f(x)$ is reducible over the real numbers but not as a product of polynomials with integer coefficients. Anyone with an idea how to proceed? Thanks in advance!! AI: The notation $[x^t]P(x)$ denotes the coefficient of $x^t$ in the polynomial $P(x)$. Suppose that $f(x)=g(x)h(x)$, where $g(x),h(x)$ are monic polynomials, $g(x),h(x)\in\Bbb Z[x]$ and $\deg g>0$, $\deg h>0$, $\deg g+\deg h=n$. Let $g(x)=\sum_k\alpha_kx^k$, $h(x)=\sum_k\beta_kx^k$, where $\alpha_k,\beta_k$ are integers for all $k\ge0$. First, we have $\alpha_0\beta_0=3$, so $|\alpha_0|=3$ and $|\beta_0|=1$, or $|\alpha_0|=1$ and $|\beta_0|=3$. WLOG, suppose that $|\alpha_0|=3$ and $|\beta_0|=1$. Let $m$ be the smallest positive integer such that $3\nmid\alpha_m$ (such an $m$ exists because $g(x)$ is monic). Now we have $$[x^m]f(x)=\alpha_0\beta_m+\alpha_1\beta_{m-1}+\cdots+\alpha_m\beta_0\equiv\alpha_m\beta_0\not\equiv0\pmod3$$ Since the only coefficients of $f$ not divisible by $3$ sit in degrees $n-1$ and $n$, we get $m\ge n-1$, and $\deg g\ge n-1$, thus $\deg h\le 1$, hence $\deg h=1$, and $h(x)=x+\beta_0$, where $\beta_0=\pm1$, so $h(-\beta_0)=0$, and $f(-\beta_0)=0$; but it's obvious that $f(\pm1)\neq0$, Q.E.D.
H: deciding whether $125$ is a primitive root modulo $529$ I'd like your help with the following: I need to show that $5$ is a primitive root modulo $23^m$ for all natural $m$ and to decide if $125$ is a primitive root modulo $529$. For the first part I need to show that $5$ is a primitive root for $23$ (I showed that) and that $5^{22} \not\equiv 1\pmod{23^2}$, in order to use the theorem which says that if $g$ is a primitive root modulo $p$ satisfying this condition, then it is also a primitive root modulo $p^l$ for all natural $l$. But how can I compute $5^{22}$ modulo $23^2$ and show that it does not equal $1$? Also, I'd love help with the second part of the question: what can help me determine whether $5^3$ is a primitive root modulo $529$? I couldn't find any theorem or simple way of showing it. If I remember correctly, I read somewhere that $\operatorname{ord}_m(g^c)= \frac{\operatorname{ord}_m g}{\gcd(c,\operatorname{ord}_m g)}$. Is this correct and the best way to determine the claim? If so, how can I prove it? Thanks a lot! AI: Here's one way of doing it with pencil and paper calculations. Essentially I do a test run of the algorithm implicit in Gerry's answer. $$5^4=625=529+96\equiv 96\pmod{529}.$$ This implies $$5^5\equiv5\cdot96=480\equiv-49\pmod{529}.$$ Squaring this gives $$ 5^{10}\equiv(-49)^2=2401=2116+285\equiv285\pmod{529}. $$ So $$ 5^{11}\equiv5\cdot285=1425=1058+367\equiv367\pmod{529}. $$ As a check (scanning for errors is a good habit to acquire!) we observe that $367\equiv-1\pmod{23}$. At this point we are done. We know from general theory that $\mathbb{Z}_{529}^*$ is cyclic of order $\phi(529)=506$. Therefore there are exactly two residue classes $x$ modulo $529$ that are solutions of the equation $x^2=1$, namely the cosets of $\pm1$. As $367$ is not one of them, we know that $$5^{22}=(5^{11})^2\equiv(367)^2\not\equiv1\pmod{529}.$$ For the last part I would first check whether the following basic fact about cyclic groups might help: If $G=\langle g\rangle$ is a cyclic group of order $n$, then $g^m$ is another generator of $G$ if and only if $\gcd(m,n)=1$. This is part of the result that you quoted. In a typical first course of abstract algebra that result is proved shortly after cyclic groups and subgroups have both been introduced. I would be very surprised if it was not covered in your course. The proof uses the division algorithm of integers (=a single step in Euclid's algorithm).
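For a pencil-free double check, here is a small Python verification (the helper function is my own; it uses the fact that the order of any element divides the group order $\phi(529)=506$):

    from math import gcd

    def order(g, m, phi):
        # the order of g mod m divides phi, so scan the divisors of phi
        return min(d for d in range(1, phi + 1)
                   if phi % d == 0 and pow(g, d, m) == 1)

    m, phi = 529, 506
    print(pow(5, 22, m))       # not 1, as computed above
    print(order(5, m, phi))    # 506, so 5 is a primitive root mod 529
    print(gcd(3, phi))         # 1, so 125 = 5^3 generates as well
    print(order(125, m, phi))  # 506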
H: Do I need to check solution while solving an equation? I am facing a strange problem in solving the following equation for $x$ and $y$: $$\frac{4x^2+(x^2+y^2-1)^2}{(x^2+(y-1)^2)^2}=1$$ $$4x^2+(x^2+y^2-1)^2=(x^2+(y-1)^2)^2$$ On expanding the terms in brackets, we get $$4y^3-8y^2+4y+4x^2y=0$$ $$4y(y^2-2y+1+x^2)=0$$ $$4y[(y-1)^2+x^2]=0$$ $$y=0 \quad\text{or}\quad (y-1)^2+x^2=0$$ So one solution is $y=0$. $$\text{If}\quad (y-1)^2+x^2=0,\quad\text{then}\quad y=1\ \text{and}\ x=0.$$ But if I try to put $y=1$ and $x=0$ in the original equation, I get $\frac{0}{0}$, which is not $1$. So I want to ask: is $(y=1$, $x=0)$ a solution or not? AI: $(x,y)=(0,1)$ is not a solution. \begin{align*} &\frac{4x^2+(x^2+y^2-1)^2}{\left(x^2+(y-1)^2\right)^2}=1\\ \iff&\begin{cases}4x^2+(x^2+y^2-1)^2=\left(x^2+(y-1)^2\right)^2\\\left(x^2+(y-1)^2\right)^2\neq0\end{cases}\\ \iff&\begin{cases}y\left(x^2+(y-1)^2\right)=0\\\left(x^2+(y-1)^2\right)^2\neq0\end{cases}\\ \iff&\begin{cases}y=0\\x\neq0\hbox{ or }y\neq1\end{cases}\iff y=0 \end{align*}
H: Quadratic equation: Two solutions or one solution? I have an equation to solve for $y$: $$\frac{y^2}{y}=1$$ Normally, I would cancel out one $y$ and get $y=1$ as the single solution. But if I think of it as a quadratic equation $$y^2=y$$ $$y^2-y=0$$ $$y(y-1)=0$$ $$y=0 \quad \text{or} \quad y=1$$ it has two solutions. But when I put $y=0$ in the original equation, I get $\frac{0}{0}$, so is $y=0$ a solution or not? If yes, then I get $\frac{0}{0}$. If no, then how come this quadratic equation has one solution? AI: This equation has two solutions, $y=0$ and $y=1$: $$y^2=y \tag{1}$$ This equation has one solution, $y=1$: $$\frac{y^2}{y} = 1 \tag{2}$$ The reason that it has one solution is that either of the operations "Cancel a $y$ from the top and bottom of the fraction" and "Multiply both sides of the equation by $y$" is only valid when $y\neq 0$. In particular, if you want to manipulate (2) to look like (1), you first have to assume that $y\neq 0$, which rules out one of the solutions of the quadratic equation. A more sophisticated answer realises that when we are asked to "solve" an equation, what we are really doing is looking for a root of a particular function (i.e. a value of the argument at which evaluating the function gives zero). In the case of (1), the function is $f(y)=y^2-y$, and in the case of (2) the function is $f(y)=y^2/y-1$. Now, functions have to have a domain. In the case of (1) the domain is $\mathbb{R}$ (the real numbers) which includes both 0 and 1. In the case of (2), the domain is $\mathbb{R}-\{0\}$ (the real numbers without 0) which doesn't include 0.
H: What does this mean: $\mathbb{Z_{q}^{n}}$? I can't understand the notation $\mathbb{Z}_{q}^{n} \times \mathbb{T}$ as defined below. As far as I know $\mathbb{Z_{q}}$ comprises all integers modulo $q$. But with $n$ as a power symbol I can't understand it. Also: $\mathbb{R/Z}$, what does it denote? "... $ \mathbb{T} = \mathbb{R}/\mathbb{Z} $ the additive group on reals modulo one. Denote by $ A_{s,\phi} $ the distribution on $ \mathbb{Z}^n_q \times \mathbb{T}$ ..." AI: $\mathbb{Z}_q^n$ is the set of vectors of length $n$ over $\mathbb{Z}_q$, i.e. $n$-tuples of integers modulo $q$. $\times$ stands for the cartesian product. $\mathbb{R}/\mathbb{Z}$ is the quotient of the set $\mathbb{R}$ modulo the set $\mathbb{Z}$: two reals are identified when they differ by an integer, so in this specific case each class has a unique representative in the set $\{ x \mid x \in \mathbb{R}\wedge 0\leq x < 1\}$.
H: Is the product of square singular and non-singular matrices always singular? Given $A,B\in \mathbb{R}^{n\times n}$ such that $A$ is singular and $B$ is non-singular, is $AB$ always singular? If so, how do I prove it? AI: The easiest way to see this is by looking at the determinant: since $\det(AB) = \det(A)\det(B)$ and a matrix $A$ is singular iff $\det(A) = 0$, we get $\det(AB)=0\cdot\det(B)=0$, so $AB$ (and likewise $BA$) is always singular.
H: Sum of the series $\frac{1}{n!}$ I wish to find the sum of a finite and infinite series $$\sum \frac{1}{n!}$$ I am aware that this is a standard series and thus has a straightforward (well-known) answer BUT I am not recollecting it. A hint please. AI: Do you remember that $$e^x = \sum_{n=0}^\infty \frac{x^n}{n!}\ ?$$ In particular, at $x=1$ this gives $\sum_{n=0}^\infty \frac{1}{n!}=e$, so a sum starting at $n=1$ equals $e-1$. Edit: This is the answer for the infinite series. Edit: The convergence is easy: just apply the ratio test.
H: Why is $f(x) \equiv 0 \;\bmod \; p$ for all $p \in \mathbb{P}$? I try to find a reason/proof for the following statement: Let $f(x)=x^2+x$ be an integer polynomial. Why is $$x^2+x \equiv 0 \pmod p$$ for all $p \in \mathbb{P}$? I made a list for the first primes and obviously it's true, but I can't find a proof for it. Any help would be great. AI: $x^2+x\equiv0\pmod p\implies p\mid x(x+1)$; but since $p$ is a prime, $p\mid x$ or $p\mid(x+1)$, giving the two solutions $x\equiv0\pmod p$ and $x\equiv-1\pmod p$.
H: All subgroups are normal All subgroups of an abelian group are normal. But the converse is not true. If every subgroup of a group is normal, then what more can we say about the group? AI: If $G$ is a finite non-abelian group where all subgroups are normal, then $$G \cong Q_8 \times A \times B$$ where $A$ is an elementary abelian 2-group (i.e., all non-identity elements have order 2), $B$ is abelian of odd order and $Q_8$ is the quaternion group of order 8. Groups in which every subgroup is normal are called Dedekind groups, and the non-abelian ones among them are called Hamiltonian groups. A proof can be found in for example Berkovich's Groups of Prime Power Order I believe.
H: Fundamental Theorem of Calculus question While revising, I came across this question: Let $h(x)=\int_0^{x^2}e^{x+t}dt$. Find $h'(1)$. I tried using the substitution $t=u^2$, then $dt=2u\,du$. The integral becomes $h(x)=\int_0^x{e^{x+u^2}}\cdot 2u\,du$. Then by the Fundamental Theorem of Calculus, $h'(x)=e^{x+x^2}\cdot 2x$, and hence $h'(1)=2e^2$. However, the correct answer is $3e^2-e$. May I know what is wrong with the above approach? Thanks a lot. AI: The integrand depends on $x$ as well as on the variable of integration, so the plain Fundamental Theorem of Calculus does not apply; you need Leibniz's rule for differentiating under the integral sign: $$\frac{d}{dx}\int_{a(x)}^{b(x)} f(x,t)\,dt= f(x,b(x))\,b'(x)-f(x,a(x))\,a'(x)+\int_{a(x)}^{b(x)}\frac{\partial}{\partial x}f(x,t)\,dt.$$ Putting $f(x,t)=e^{x+t}$, this gives $h'(x)=\left(e^{x+x^2}\cdot 2x\right)-(0)+\left(e^{x+x^2}-e^x\right)\implies h'(x)=e^{x+x^2}(2x+1)-e^x\implies h'(1)=3e^2-e$. The term your approach missed is exactly the last integral, which comes from the $x$-dependence of the integrand.
H: What does the negative in this answer depict For the following question: Peter lives 12 miles west of school and Bill lives north of school. Peter finds the direct distance from his house to Bill's is 6 miles shorter than the distance by way of school. How many miles north of school does Bill live? For this, after constructing a right triangle, I got -12x = 108, so x = -9 (distance from school to Bill). Does the - in the answer depict anything? Edit: My perpendicular was x and my hypotenuse was x-6 AI: The sides of the right triangle are $x,12,x+6$: the distance by way of school is $x+12$, so the direct distance (the hypotenuse) is $(x+12)-6=x+6$. Using the Pythagorean theorem, $x^2+12^2=(x+6)^2 \implies 144=12x+36 \implies 12x=108 \implies x=9$. There is no negative sign. You made a sign mistake in your setup: you took the hypotenuse to be $x-6$ ($6$ less than Bill's distance alone) instead of $x+6$ ($6$ less than the whole route through school), which flips the sign of $x$; the negative does not depict anything physical.
H: An introductory textbook on functional analysis and operator theory I would like to ask for some recommendation of introductory texts on functional analysis. I am not a professional mathematician and I am totally new to the subject. However, I found out that some knowledge of functional analysis and operator theory would be quite helpful to my work... What I am searching for is some accessible and instructive text not necessarily covering the subject in great depth, but explaining the main ideas. I am not searching for a text for engineers; some amount of mathematical rigor would be fine. But I found myself unable to read some standard textbooks covering in great depth a large amount of issues in the theory of Banach spaces, etc. I am looking for something that proceeds to the most important topics (e.g., spectral theory) faster than most textbooks, but not at the expense of rigor. I.e., something that covers rigorously the main topics, but concentrates only on the main ideas. Simply an accessible introductory text for a fast orientation in the subject. Moreover, I would prefer a text that does not require any background in measure theory and similar disciplines. And another question: is there any functional analysis book that deals primarily with sequence spaces? It need not fulfill the description above. Thank you for your recommendations! AI: I think that Kreyszig's book is a good introduction, though not very short. Spaces of sequences are classical Banach spaces, and there are books that study their properties systematically. A classic book is this one. But also Larsen's book has plenty of examples from $\ell^p$, $c_0$, $c_{00}$.
H: Is there any difference between the definition of a commutative ring and field? Is a commutative ring a field? That is, a set equipped with addition and multiplication, commutative in both operations, in which multiplication distributes over addition? AI: A key difference between an ordinary commutative ring and a field is that in a field, all non-zero elements must be invertible. For example: $\Bbb{Z}$ is a commutative ring but $2$ is not invertible in there, so it can't be a field, whereas $\Bbb{Q}$ is a field and every non-zero element has an inverse. Examples of commutative rings that are not fields: The ring of polynomials in one indeterminate over $\Bbb{Q}, \Bbb{R}$, $\Bbb{C}$, $\Bbb{F}_{11}$, $\Bbb{Q}(\sqrt{2},\sqrt{3})$ or $\Bbb{Z}$. The quotient ring $\Bbb{Z}/6\Bbb{Z}$. $\Bbb{Z}[\zeta_n]$ - elements in here are linear combinations of powers of $\zeta_n$ with coefficients in $\Bbb{Z}$ (in fact this is also a finitely generated $\Bbb{Z}$-module). The direct sum of rings $\Bbb{R} \oplus \Bbb{R}$, which also has the additional structure of being a 2-dimensional $\Bbb{R}$-algebra. Let $X$ be a compact Hausdorff space with more than one point. Then $C(X)$, the ring of all continuous real-valued functions on $X$, is an example of a commutative ring. The localisation of $\Bbb{Z}$ at the prime ideal $(5)$: the resulting ring, $\Bbb{Z}_{(5)}$, is the set of all $$\left\{\frac{a}{b} : \text{$b$ is not a multiple of 5} \right\}$$ and is a local ring, i.e. a ring with only one maximal ideal. I believe when $G$ is a cyclic group, the endomorphism ring $\textrm{End}(G)$ is an example of a commutative ring. Examples of fields: $\Bbb{F}_{2^5}$. $\Bbb{Q}(\zeta_n)$. $\Bbb{R}$. $\Bbb{C}$. The fraction field of an integral domain. More generally, given an algebraic extension $E/F$, for any $\alpha \in E$ we have $F(\alpha)$ being a field. The algebraic closure $\overline{\Bbb{Q}}$ of $\Bbb{Q}$ in $\Bbb{C}$.
H: Finding an orthogonal basis from a column space I'm having issues with understanding one of the exercises I'm making. I have to find an orthogonal basis for the column space of $A$, where: $$A = \begin{bmatrix} 0 & 2 & 3 & -4 & 1\\ 0 & 0 & 2 & 3 & 4 \\ 2 & 2 & -5 & 2 & 4\\ 2 & 0 & -6 & 9 & 7 \end{bmatrix}.$$ The first question was to find a basis of the column space of $A$; clearly this is simply the first $3$ column vectors (by reducing it to row echelon form, and finding the leading $1$'s). However, then I had to find an orthogonal basis out of the column space of $A$, and here is where I get lost. I started off with finding the first vector: $$u_1 = \begin{bmatrix}0\\0\\2\\2\\\end{bmatrix}.$$ Then I thought I would find the second vector like this: $$u_2 = \begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}-\left(\begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}\cdot\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix}\right)*\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix} = \begin{bmatrix}2\\0\\2\\0\\\end{bmatrix}-4*\begin{bmatrix}0\\0\\2\\2\\\end{bmatrix} = \begin{bmatrix}2\\0\\-6\\-8\\\end{bmatrix}.$$ However, according to the result sheet we were given, instead of having a $4$, I should have $\frac{4}{8}$. I somehow can not figure out what I am missing, since the dot product of the two vectors clearly is $4$. Also, as a second question: if I had to find an orthonormal basis, would I only have to take the orthogonal vectors found here and multiply them by $1$/length? AI: Your basic idea is right. However, you can easily verify that the vectors $u_1$ and $u_2$ you found are not orthogonal by calculating $$\langle u_1,u_2\rangle = (0,0,2,2)\cdot \left( \begin{matrix} 2 \\ 0 \\ -6 \\ -8 \end{matrix} \right) = -12-16 = -28 \neq 0$$ So something is going wrong in your process. I suppose you want to use the Gram-Schmidt Algorithm to find the orthogonal basis. I think you skipped the normalization part of the algorithm because you only want an orthogonal basis, and not an orthonormal basis. However, even if you don't want an orthonormal basis, you have to take care about the normalization of your projections. If you only do $u_i\langle u_i,v_j\rangle$ it will go wrong. Instead you need to normalize and take $u_i\frac{\langle u_i,v_j\rangle}{\langle u_i,u_i\rangle}$. If you do the normalization step of the Gram-Schmidt Algorithm, of course $\langle u_i,u_i\rangle=1$, so it's usually left out. The Wikipedia article should clear it up quite well. Update Ok, you say that $v_1 = \left( \begin{matrix} 0 \\ 0 \\ 2 \\ 2 \end{matrix} \right), v_2 = \left( \begin{matrix} 2 \\ 0 \\ 2 \\ 0 \end{matrix} \right), v_3 = \left( \begin{matrix} 3 \\ 2 \\ -5 \\ -6 \end{matrix} \right)$ is the basis you start from. As you did, you can take the first vector $v_1$ as it is. So your first basis vector is $u_1 = v_1$. Now you want to calculate a vector $u_2$ that is orthogonal to this $u_1$. Gram-Schmidt tells you that you receive such a vector by $$u_2 = v_2 - \text{proj}_{u_1}(v_2)$$ And then a third vector $u_3$ orthogonal to both of them by $$u_3 = v_3 - \text{proj}_{u_1}(v_3) - \text{proj}_{u_2}(v_3)$$ You did take this approach. What went wrong is your projection. You calculated it as $$ \text{proj}_{u_1}(v_2) = u_1\langle u_1,v_2\rangle$$ but this is incorrect. The true projection is $$ \text{proj}_{u_1}(v_2) = u_1\frac{\langle u_1,v_2\rangle}{\langle u_1,u_1\rangle}$$ As I tried to point out, some textbooks will skip the division by $\langle u_1,u_1\rangle$ in the explanation of Gram-Schmidt, but this is because in most cases you want to construct an orthonormal basis. In that case you normalize every $u_i$ before proceeding to the next step.
Then $\langle u_i,u_i\rangle = 1$, and the division can be skipped. So what you need to change is to divide by $\langle u_2,u_2\rangle = 8$ in your projection.
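If it helps, here is a minimal NumPy sketch of the procedure (my own illustration, with the division by $\langle b,b\rangle$ in place), applied to the three basis vectors above:

    import numpy as np

    def gram_schmidt(vectors):
        basis = []
        for v in vectors:
            u = v.astype(float).copy()
            for b in basis:
                u -= (u @ b) / (b @ b) * b  # subtract the projection onto b
            basis.append(u)
        return basis

    v1 = np.array([0, 0, 2, 2])
    v2 = np.array([2, 0, 2, 0])
    v3 = np.array([3, 2, -5, -6])
    u1, u2, u3 = gram_schmidt([v1, v2, v3])
    print(u1 @ u2, u1 @ u3, u2 @ u3)  # all (numerically) zero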
H: How do I combine two matrix equations into one? I have a discretely sampled 2D function:

    S = 1 2 3 4
        1 2 3 4
        1 2 3 4

I want to find finite difference matrices, DX and DY, such that $S_x=DX*S$, $S_y=DY*S$, where the subscript denotes partial differentiation with respect to that variable. Finding DY was easy. Using central differences:

    DY = -1.0000   1.0000        0
         -0.5000        0   0.5000
               0  -1.0000   1.0000

Test:

    >> DY*S
    ans =
         0 0 0 0
         0 0 0 0
         0 0 0 0

Finding DX seems more difficult. Matrix product is "row x column". Since x is along the rows of S, I try to find DX such that $S_x=S*DX$ instead:

    DX = -1.0000  -0.5000        0        0
          1.0000        0  -0.5000        0
               0   0.5000        0  -1.0000
               0        0   0.5000   1.0000

Test:

    >> S*DX
    ans =
         1 1 1 1
         1 1 1 1
         1 1 1 1

Now I have the two matrix equations $Sx = S*DX$, $Sy = DY*S$. How can I combine these two into one matrix equation? The thing is that I know Sx and Sy and am trying to solve for S using a least squares approach. I read that the conjugate gradient method does this. If I can cast my two equations into the shape of one matrix equation $\mathbf{A}\mathbf{x}=\mathbf{b}$, I could use the conjugate gradient method to find a least squares solution. I guess I will have to reshape S into a vector:

    s=S(:)
    s = 1 1 1 2 2 2 3 3 3 4 4 4

(shown transposed here to save space). Next I will have to find a 12x12 matrix dx such that $sx = dx*s$, where sx now is also reshaped as a vector. Using my example data and solving for dx I find that sx/s gives a 12x12 matrix consisting of twelve identical rows:

    >> sx/s
    ans =
         0 0 0 0 0 0 0 0 0 0.2500 0 0
         (and the same row repeated eleven more times)

How is this related to DX? I do not understand how to generate this matrix when s is unknown. BTW I am using Matlab. Thanks in advance for any answers! AI: I'll show you the code for doing this with square matrices, and leave the generalization to rectangular matrices to you. First we'll generate a data matrix S:

    >> n = 4;
    >> S=reshape(1:n*n, [n n]);

Now define the shift-by-one matrix on vectors:

    >> E = sparse(2:n, 1:n-1, 1, n, n);

and the corresponding central difference matrix:

    >> D = (E' - E) / 2;

Now you can define X and Y gradient matrices, which act on matrices (suitably reshaped as vectors) by:

    >> I = speye(n); % identity matrix
    >> Dx = kron(I,D);
    >> Dy = kron(D,I);

and use them in the following way:

    >> Sx = reshape( Dx * S(:), [n n]);
    >> Sy = reshape( Dy * S(:), [n n]);

i.e. whenever you want to calculate a gradient, you first reshape the matrix S into a vector, apply the appropriate derivative operator, and then reshape the result back into a matrix. Note that I've defined E, D etc as sparse matrices, for computational efficiency. If you want to see what any of these matrices look like, you can convert them to dense matrices using the function full, for example:

    >> full(D)
    ans =
         0        0.5000        0        0
        -0.5000        0   0.5000        0
         0       -0.5000        0   0.5000
         0        0       -0.5000        0
H: Algebra of Functions There seems to be an interesting algebra of functions. Does it already exist in the literature? Given functions $f_1 : X_1 \to Y$ and $f_2 : X_2 \to Y$, if $f_1(x) = f_2(x)$ for all $x \in X_1 \cap X_2$, then their sum is defined as $(f_1 + f_2) : X_1 \cup X_2 \to Y$, where $$ (f_1 + f_2)(x) = \begin{cases} f_1(x) & x \in X_1\\ f_2(x) & x \in X_2. \end{cases} $$ Given functions $f : X \to Y$ and $g : Y \to Z$, their product is defined as $fg : X \to Z$, where $$ (fg)(x) = g(f(x)). $$ We immediately have the following: Addition is commutative and associative. The empty function with domain the empty set $\emptyset$ is the additive identity. Multiplication is associative and distributes over addition. The identity function is the multiplicative identity. These properties seem so nice that this algebra must have been investigated before - does anyone have some references for this? AI: I think this setup is pretty interesting, but having addition and multiplication only partially defined does not make this into an algebra in the ring theoretic sense. (I guess you are using it in the universal algebra sense :) ) The general feeling I get is that algebraic objects with partially defined operations are important in mathematical logic (which I know next to nothing about, really!) So, let me tell you about my luck searching. A google search for ("partially defined" addition multiplication) immediately returned an arXiv paper with the addition operation you mentioned on the first page. Another hit I got was this talk about "ringoids" which seems to discuss partially defined addition and multiplication. Around this time I became convinced that "partial algebra" was the correct search term, but I had trouble finding an explanation that was both clear and trustworthy... until I found my answer in this paper by M.L. Reyes! He explains what I think you're looking for clearly, and he points us to Kochen and Specker in the references for more information (reference [11]). I hope this helps. If I were you I'd start with Manny's paper.
H: Help understanding a $3n+1$ problem programming project I was going through this website where I found the 3n + 1 problem. I was not able to understand what the output should be. I understood that if the input is 1 and 10, then the numbers between 1 and 10 need to be considered, and the output will be 20. But I am wondering how it can be 20, whereas for 10 the sequence is 10 5 16 8 4 2 1 (total 6 excluding 16). Then how can it be 20? Could you please help me understand the problem and what it really wants me to solve? Please tell me if I put a wrong tag. AI: If the input is $i=1$ and $j=10$ then you need to consider the full cycle for each starting value $i=1\le n \le 10=j$, consider all of their "cycle lengths" and return the largest cycle length. For $n=10$ the cycle length is 7, since you need to include 16. Given $i=1,j=10$ as input, the cycle length for $n=1$ is $1$, for $n=2$ is 2, for $n=3$ is $8$, and for $n=4,5,6,7,8,9,10$ is respectively $3,6,9,17,4,20,7$. The largest value in this range is 20, which should be your answer. Similarly, if $i=100,j=200$, then the longest cycle length is $125$, which you find starting with $n=171$ and then considering the full cycle $C=\{171,514,257,772,\ldots,4,2,1\},\vert C\vert=125$.
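A direct Python sketch of the required computation (assuming the usual $3n+1$ rule, with the cycle length counting both endpoints):

    def cycle_length(n):
        length = 1
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            length += 1
        return length

    def max_cycle_length(i, j):
        lo, hi = min(i, j), max(i, j)
        return max(cycle_length(n) for n in range(lo, hi + 1))

    print(max_cycle_length(1, 10))     # 20
    print(max_cycle_length(100, 200))  # 125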
H: Section of a circle I need some help with some of my homework, I can't figure it out. I have the radius of a circle and a height from the circle to the chord. I found this formula $$ h=r \left(1-\cos \frac{v}{2} \right) $$ And isolated it to $$ v = \arccos \left( \frac{h/r -1}{2}\right) $$ Not sure if that's correct. Then I input it in this formula $$ A=r^2((\pi v)/360-(\sin v)/2) $$ I have tried with a radius of 1000 and a height from the circle to the chord of 1000. But it gives a wrong result. AI: The problem is that your inversion is incorrect: the factor $2$ ended up in the wrong place. From $$h=r\left(1-\cos\frac{v}{2}\right)$$ we get $\cos\frac{v}{2}=1-\frac{h}{r}$, and therefore $$v=2\arccos\left(1-\frac{h}{r}\right).$$ Also watch the units: $\arccos$ returns radians, so either convert $v$ to degrees before using your area formula $A=r^2\left(\frac{\pi v}{360}-\frac{\sin v}{2}\right)$ (which assumes $v$ in degrees), or stay in radians and use $A=\frac{r^2}{2}\left(v-\sin v\right)$.
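A worked check with the numbers from the question ($r = 1000$, $h = 1000$), keeping everything in radians (a quick sketch, not part of the homework answer):

    import math

    r, h = 1000.0, 1000.0
    v = 2 * math.acos(1 - h / r)       # central angle in radians
    A = r**2 / 2 * (v - math.sin(v))   # area of the circular segment
    print(math.degrees(v), A)          # 180 degrees: a half-disc, pi*r^2/2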
H: Best statistical method for determining likely subset length Let's say I have a string set such as "AAAABBBBAAAABAAAA" and I want to have some quantitative measure of most likely subset length. In the above example (4 sets of length 4, one set of length 1), it is "humanly" evident the next set will be 4 in length, but just making an average yields length of 3.4. What would be best method to get "the answer closer to 4" ? AI: It's not evident to me that the next group will have length 4. What if the next elements are "BAAAA" again? It happened once, so why not again? If you want the "most likely" length, then choose the mode, which is exactly what you said: the most likely length seen so far, in this case 4.
H: What are the subsemigroups of $(\mathbb N,+)?$ While trying to solve a somewhat bigger problem, I realized that I don't know what the subsemigroups of one of the most important semigroups, $(\mathbb N,+)$, are. (I assume $0\not\in\mathbb N$.) I've tried to characterize them but I haven't managed to do it fully. What follows are the facts I have proven so far, without most of the proofs but with lemmas, which hopefully show how the proofs go. I'm not posting all the proofs not because I'm sure my proofs are correct, but because I don't want to make this post too long. I will add the proof of anything below on request. Notation. $\langle a_1,\ldots,a_n\rangle$ will denote the subsemigroup generated by $a_1,\ldots,a_n\in\mathbb N$. If $X\subseteq \mathbb N$, then $\langle X\rangle$ will denote the subsemigroup generated by $X$. Lemma 1. $\langle a_1,\ldots,a_n\rangle = \{k_1a_1+\ldots+k_na_n\,|\,k_i\geq 0,\,\sum k_i>0\}.$ Lemma 2. If $a_1,\ldots,a_n\in\mathbb N$ and $\gcd(a_1,\ldots,a_n)=1,$ then there exists $x\in \langle a_1,\ldots,a_n\rangle$ such that for every $m\geq x,\,m\in\mathbb N,$ we have that $m\in \langle a_1,\ldots,a_n\rangle.$ Notation. For $n\in\mathbb N$ and $X\subseteq \mathbb N$, $nX$ will denote $\{nx\,|\,x\in X\}.$ Lemma 3. For every finitely generated subsemigroup $S=\langle a_1\ldots,a_n\rangle$ of $\mathbb N$, there exists a finitely generated subsemigroup $T$ of $\mathbb N$ whose generators are coprime and such that $S=\gcd(a_1,\ldots,a_n)\,T$. Proposition 4. Every finitely generated subsemigroup $S=\langle a_1,\ldots,a_n\rangle$ of $\mathbb N$ eventually becomes an infinite arithmetic progression with difference $d=\gcd(a_1,\ldots,a_n)$. That is, there exists $x\in S$ such that $S\cap\{n\in\mathbb N\,|\,n\geq x\}=\{x+kd\,|\,k\geq 0\}.$ (It has to be noted that $d|x.$) Lemma 5. If $X\subseteq \mathbb N$, then there exists a unique $\gcd(X),$ that is a number $d\in\mathbb N$ such that for all $x\in X$ we have $d|x,$ and if for all $x\in X$ we have $c|x$, then $c|d.$ There also exists a finite subset $Y\subseteq X$ such that $\gcd (Y)=\gcd(X).$ Proposition 6. Every subsemigroup of $\mathbb N$ is finitely generated. Proof. Let $S$ be a subsemigroup of $\mathbb N$. Let $d=\gcd (S).$ Then there exists $Y\subseteq S$ such that $\gcd(Y)=d.$ Surely $\langle Y\rangle\subseteq S.$ There exists $x\in\langle Y\rangle$ such that $$\langle Y\rangle\cap\{n\in\mathbb N\,|\,n\geq x\}=\{x+kd\,|\,k\geq 0\}.$$ Thus, beginning from $x$, all numbers divisible by $d$ are in $\langle Y\rangle.$ Therefore, in particular, all elements of $S$ greater than or equal to $x$ are in $\langle Y\rangle.$ It follows that $S=\langle Y\cup (S\cap \{n\in\mathbb N\,|\,n<x\})\rangle.$ So $S$ is finitely generated. $\square$ QUESTION. From the above facts (which are hopefully true), I know what subsemigroups of $\mathbb N$ look like eventually. They become arithmetic progressions whose difference divides their elements. But can we describe their initial behavior in a usable way? For example, this is a subsemigroup: $$\{3,5,6,7,8,9,\ldots\};$$ this is a subsemigroup: $$\{3,5,6,8,9,\ldots\};$$ but this is not, since it is missing $8=3+5$: $$\{3,5,6,7,9,\ldots\}.$$ My general question is this: can we usefully characterize subsemigroups of $\mathbb N$ among subsets of $\mathbb N$? AI: You are looking at what is sometimes called the Frobenius problem.
The two-generator case is already interesting: if $a,b$ are coprime then the semigroup they generate has every positive integer from $N=(a-1)(b-1)$ on, and exactly half of the integers between zero and $N-1$, inclusive; moreover, it contains $m$ in that range if and only if it doesn't contain $N-1-m$. These are all good exercises to prove, if this is new to you. The $n=3$ case is much harder, bigger values of $n$ are harder still.
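If you want to experiment with the two-generator case, here is a small Python sketch (counting $0$, the empty sum, as reachable, which is the convention behind the "exactly half" statement):

    def semigroup_up_to(a, b, limit):
        reachable = {0}
        for x in range(1, limit + 1):
            if (x >= a and x - a in reachable) or (x >= b and x - b in reachable):
                reachable.add(x)
        return reachable

    a, b = 4, 7                    # coprime, so N = (a-1)(b-1) = 18
    S = semigroup_up_to(a, b, 40)
    gaps = [m for m in range(18) if m not in S]
    print(gaps)                    # 9 gaps: exactly half of 0..17
    print(all(n in S for n in range(18, 41)))  # True: everything from N on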
H: Defining a Limit Point of A Set Limit Point is defined as: Wolfram MathWorld: A number $x$ such that for all $\epsilon \gt 0$, there exists a member of the set $y$ different from $x$ such that $|y-x| \lt \epsilon$. Proof Wiki: Some sources define a point $ x \in S$ to be a limit point of $A$ iff every open neighbourhood $U$ of $x$ satisfies: $$ A \cap (U \smallsetminus \{x\}) \neq \emptyset $$ What I don't understand is what prevents the above definitions from calling interior points (points which lie in the interior of the boundaries?) as limit points? For instance, George Simmons defines the sequence $\{1, \frac{1}{2}, \frac{1}{3} \cdots \}$ and states that $0$ is the limit point and $0$ is the ONLY limit point. If I select $\frac{1}{2}$, every neighbourhood of $\frac{1}{2}$ (minus the point $\frac{1}{2}$) has a non-zero intersection with the set $A$. Why not call $\frac{1}{2}$ the limit point? AI: Interior points are indeed limit points. For example, consider the set $S$ which contains all numbers $x$ such that $1<x<2$. Then every element of $S$ is a limit point of $S$. (As you noted below, 1 and 2 are also limit points of $S$.) In your example, $T=\{1, \frac12, \frac13\ldots\}$. Here $\frac12$ is not a limit point of $T$ because we need, for every positive $\epsilon$, there is a point $y$ of $T$ different from $\frac12$ with $|y-\frac12| < \epsilon$. But when $\epsilon < \frac16$, there is no such point $y$. Similarly, the ProofWiki definition says that $\frac12$ will be a limit point of $T$ if every neighborhood of $\frac12$ intersects $T\setminus\left\{\frac12\right\}$. But as before, a sufficiently small neighborhood of $\frac12$, say $\left(\frac5{12}, \frac7{12}\right)$ as suggested by Mr. Mastragostino, does not intersect $T\setminus\left\{\frac12\right\}$.
H: Cauchy sequence of functions and the limit inferior I'm trying to understand a step in a proof. I don't get a special trick that is used several times in the book I am reading, so this does not get out of my head. I try to explain the prerequisites and what I don't understand: Let $Y$ be a Banach space and let $S$ be a set. For a mapping $f:S\to Y$ let $\|f\| := \sup_{x\in S}|f(x)|$. Let $B(S;Y)$ be the set of all mappings $g:S\to Y$ with $\|g\| < \infty$. I don't understand a step in the proof of the fact that this is a complete metric space. Let $(f_k)_{k\in\mathbb{N}}$ be a Cauchy sequence in $B(S;Y)$. Let $x\in S$. Since $|f_k(x)-f_l(x)| \rightarrow 0$ for $k,l\rightarrow \infty$, we can define the pointwise limit $f$ of the sequence by $f(x) := \lim_{k\to\infty}f_k(x)$. Now here is the step that I don't understand: The author of the book (and the proof) states that $\lim_{l\to\infty}|f_l(x)-f_k(x)| \le \liminf_{l\to\infty}\|f_l-f_k\|<\infty$. Why is that? What does the $\liminf$ have to do here? And why is this finite? I think that it has something to do with subsequences of Cauchy sequences, but I don't understand it. This $\liminf$-trick is used several times in the book, so it seems a bit important to me. Thank you very much in advance. I'm glad for every help. This does not get out of my head. AI: Assume we are given two sequences $\{a_r:r\in\mathbb{N}\}$ and $\{b_r:r\in\mathbb{N}\}$ such that for all $r\in\mathbb{N}$ we have $a_r\leq b_r$. Fix some $l\in\mathbb{N}$ and consider the last inequality for all $r\geq l$. After taking the infimum on the left side we get $$ \inf\limits_{r\geq l}a_r \leq b_r\qquad\text{for all }r\geq l $$ Then we take the infimum on the right side over all $r\geq l$, and we get $$ \inf\limits_{r\geq l}a_r \leq \inf\limits_{r\geq l}b_r $$ Note that the sequences $\{\inf\limits_{r\geq l}a_r:l\in\mathbb{N}\}$ and $\{\inf\limits_{r\geq l}b_r:l\in\mathbb{N}\}$ are non-decreasing, hence they have limits (finite or infinite). Let's take these limits; then we get $$ \lim\limits_{l\to\infty}\inf\limits_{r\geq l}a_r \leq \lim\limits_{l\to\infty}\inf\limits_{r\geq l}b_r $$ It is known that $\liminf\limits_{l\to\infty}x_l=\lim\limits_{l\to\infty}\inf\limits\limits_{r\geq l}x_r$ (sometimes this equality is taken as the definition of $\liminf$), so $$ \liminf\limits_{l\to\infty}a_l \leq \liminf\limits_{l\to\infty}b_l\tag{1} $$ Another interesting fact: if a sequence $\{x_l:l\in\mathbb{N}\}$ has a limit then it is equal to the limit inferior and the limit superior: $$ \lim\limits_{l\to\infty}x_l=\liminf\limits_{l\to\infty}x_l=\limsup\limits_{l\to\infty}x_l\tag{2} $$ Now we proceed to your question. Fix $k\in\mathbb{N}$ and consider the sequences $$ a_l=|f_l(x)-f_k(x)|\qquad b_l=\Vert f_l-f_k\Vert\qquad\text{ where }\qquad l\in\mathbb{N} $$ Since $a_l\leq b_l$ for all $l\in\mathbb{N}$, from (1) we get $$ \liminf\limits_{l\to\infty}|f_l(x)-f_k(x)| \leq \liminf\limits_{l\to\infty}\Vert f_l-f_k\Vert $$ By construction the sequence $\{a_l:l\in\mathbb{N}\}$ is convergent. Indeed $$ \lim\limits_{l\to\infty} a_l= \lim\limits_{l\to\infty} |f_l(x)-f_k(x)|= |\lim\limits_{l\to\infty}f_l(x)-f_k(x)|= |f(x)-f_k(x)| $$ hence from $(2)$ we get $$ \lim\limits_{l\to\infty}|f_l(x)-f_k(x)|=\liminf\limits_{l\to\infty}|f_l(x)-f_k(x)| \leq \liminf\limits_{l\to\infty}\Vert f_l-f_k\Vert $$ Finally, the right-hand side is finite because a Cauchy sequence is bounded: for fixed $k$, the numbers $\Vert f_l-f_k\Vert$ are bounded in $l$, hence so is their limit inferior.
H: first 1 in a bitmask using log2 I am trying to get the last 1 in a bitmask. More mathematically speaking, I have a number k, that can be written in its binary form as a sequence of 1 and 0. I want the "weight" or "index" of the last 1 in that number (from the right). As an example, for the number 0b 10 0100 (36 in decimal), I want to get 3 (because the rightmost 1 is on 3rd position from the right) I can find the first 1 easily (in my case 6), as the size of the number, or its base-2 log (rounded down +1), but what about the last one? For information, I will use this in a dirty trick in MS Excel, described in https://stackoverflow.com/questions/5065432/excel-find-last-value-in-an-array/11246728#11246728 AI: If $x$ is your original number, (~x + 1) & x will have only one bit set, the same as the least significant set bit of $x$. For example, if $x$ is 10101000, then (~x + 1) & x = 00001000. Perhaps that helps? You can then use a lookup table, or one of the techniques in Which bit is set to figure out which bit is still set. You might also look at the references given the the OEIS entry for this function.
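A quick check of the trick in Python, where integers behave as if in two's complement so that -x equals ~x + 1 (my own illustration):

    x = 0b100100          # 36
    low = x & -x          # isolates the lowest set bit: 0b100 = 4
    print(low, low.bit_length())  # 4 3, so the rightmost 1 is in position 3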
H: How to solve $x'(t)=\frac{x+t}{2t-x}$? I wish to solve $x'(t)=\frac{x+t}{2t-x}$ with the initial condition $x(1)=0$. I noted that $x'(t)=f(\frac{x}{t})$ where $f(y)=\frac{y+1}{2-y}$ so I denoted $y(t)=\frac{x(t)}{t}$ and got that $y'(t)=\frac{f(y)-y}{t}$ so I can write something like $\frac{dy}{dt}=\frac{f(y)-y}{t}$ so $\frac{dy}{f(y)-y}=\frac{dt}{t}$ . Here I am a bit stuck, I know I should do something like take integrals on both sides, but I am having trouble with the initial condition. In this exercise I can leave integrals in the answer so I would like to know how to get the solution with the integral, I think this requires computing the boundaries of some integral (maybe of $\frac{f(y)-y}{t}$ ?) How can I continue to get the solution with an integral that satisfies the initial condition? Help is appreciated! AI: Your differential equation is homogeneous of order one. You did it right in finding $\frac{dy}{dt}=\frac{f(y)-y}{t}$. Now integrate both sides of the separated equation, the left side with respect to $y$ and the right side with respect to $t$, until your solution is achieved. Then you can substitute $x(t)=y(t)t$ into your solution and apply the initial condition. I hope it helps. :)
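To make the last step concrete: since $f(y)-y=\frac{y+1}{2-y}-y=\frac{y^2-y+1}{2-y}$, separating variables and integrating from the initial point $t=1$, $y(1)=x(1)/1=0$, gives the implicit solution $$\int_0^{x(t)/t}\frac{2-u}{u^2-u+1}\,du=\int_1^t\frac{ds}{s}=\ln t,$$ and the integral on the left may be left unevaluated, as the exercise allows.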
H: Asymptotic behavior of the expression: $(1-\frac{\ln n}{n})^n$ when $n\rightarrow\infty$ The well-known result states that $\lim_{n\rightarrow \infty}(1-\frac{c}{n})^n=(1/e)^c$ for any constant $c$. I need the following limit: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n$. Can I prove it in the following way? Let $x=\frac{n}{\ln n}$, then we get: $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\lim_{x\rightarrow \infty}(1-\frac{1}{x})^{x\ln n}=(1/e)^{\ln n}=\frac{1}{n}$. So, $\lim_{n\rightarrow \infty}(1-\frac{\ln n}{n})^n=\frac{1}{n}$. I see that this is wrong, since an expression with $n$ cannot remain after the limit. But how to show that the asymptotic behavior is $1/n$? Thanks! AI: According to the comments, your real aim is to prove that $x_n=n\left(1-\frac{\log n}n\right)^n$ has a non degenerate limit. Note that $\log x_n=\log n+n\log\left(1-\frac{\log n}n\right)$ and that $\log(1-u)=-u+O(u^2)$ when $u\to0$ hence $n\log\left(1-\frac{\log n}n\right)=-\log n+O\left(\frac{(\log n)^2}n\right)$ and $\log x_n=O\left(\frac{(\log n)^2}n\right)$. In particular, $\log x_n\to0$, hence $x_n\to1$, that is, $$ \left(1-\frac{\log n}n\right)^n\sim\frac1n. $$ Edit: In the case at hand, one knows that $\log(1-u)\leqslant-u$ for every $u$ in $[0,1)$. Hence $\log x_n\leqslant0$ and, for every $n\geqslant1$, $$ \left(1-\frac{\log n}n\right)^n\leqslant\frac1n. $$
H: lego geometry: Segments bent to a given angle make an n sided polygon I swear this isn't homework. I'm actually ordering lego pieces from Pick-a-Brick. They have pipe segments that bend at 180 degrees (straight), 157.5, 135, 112.5, and 90 degrees. I need to know the number of sides of a polygon with those internal angles so that I know how many segments to order. If someone can show me the answer that's great, and if someone can show me how to find the answer that is better. I think I may want to make oval-like elongated polygons by mixing angles (I'm making a zeppelin :) AI: Note that all the angles differ from $180^\circ$ by multiples of $22.5^\circ$, which is $\frac{1}{16}$ of the required total rotation of $360^\circ$. So you can label the pieces as follows: $0$ ($180^\circ$), $1$ ($157.5^\circ$), $2$ ($135^\circ$), $3$ ($112.5^\circ$), $4$ ($90^\circ$). A complete rotation is produced by joining together pieces whose labels sum to $16$, e.g., $1111111111111111$ or $4444$ or $13444$, with all angles rotating in the same direction. You can also produce non-convex polygons by using some counter-rotating angles, in which case their labels should be subtracted rather than added, e.g., $443(-3)443(-3)$. For a given arrangement of $N$ angles, if the polygon is intended to close, then there will be $N-2$ free length parameters left to assign. For instance, a triangle with fixed angles has a single scale parameter; four right angles produce a rectangle, for which two side lengths can be chosen freely.
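A small brute-force helper (my own sketch) to enumerate the convex options: multisets of bend labels $1$ through $4$ summing to $16$. Straight pieces (label $0$) can then be inserted anywhere without affecting closure.

    from itertools import combinations_with_replacement

    closed = [c for k in range(4, 17)
              for c in combinations_with_replacement([1, 2, 3, 4], k)
              if sum(c) == 16]
    print(len(closed))  # number of convex label multisets
    print(closed[:5])   # the shortest ones first, starting with (4, 4, 4, 4)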
H: Period of the sum/product of two functions Suppose that the period of $f(x)$ is $T$ and the period of $g(x)$ is $S$. I am interested in what the period of $f(x) g(x)$ is, and the period of $f(x)+g(x)$. What I have tried is to search on the internet, and I found the following link for this. Also I know that the period of $\sin(x)$ is $2\pi$, but what about $\sin^2(x)$? Does it again have period $\pi n$, or? An example is the following function: $y=\frac{\sin^2(x)}{\cos(x)}$. I can do the following thing: namely, we know that $\sin(x)/\cos(x)=\tan(x)$ and the period of the tangent function is $\pi$, so I can represent $y=\sin^2(x)/\cos(x)$ as $y=\tan(x)\times\sin(x)$, but how can I calculate the period of this? Please help me. AI: We make a few comments only. Note that $2\pi$ is a period of $\sin x$, or, equivalently, $1$ is a period of $\sin(2\pi x)$. But $\sin x$ has many other periods, such as $4\pi$, $6\pi$, and so on. However, $\sin x$ has no (positive) period shorter than $2\pi$. If $p$ is a period of $f(x)$, and $H$ is any function, then $p$ is a period of $H(f(x))$. So in particular, $2\pi$ is a period of $\sin^2 x$. However, $\sin^2 x$ has a period which is smaller than $2\pi$, namely $\pi$. Note that $\sin(x+\pi)=-\sin x$, so $\sin^2(x+\pi)=\sin^2 x$. It turns out that $\pi$ is the shortest period of $\sin^2 x$. For sums and products, the general situation is complicated. Let $p$ be a period of $f(x)$ and let $q$ be a period of $g(x)$. Suppose that there are positive integers $a$ and $b$ such that $ap=bq=r$. Then $r$ is a period of $f(x)+g(x)$, and also of $f(x)g(x)$. So for example, if $f(x)$ has $5\pi$ as a period, and $g(x)$ has $7\pi$ as a period, then $f(x)+g(x)$ and $f(x)g(x)$ each have $35\pi$ as a period. However, even if $5\pi$ is the shortest period of $f(x)$ and $7\pi$ is the shortest period of $g(x)$, the number $35\pi$ need not be the shortest period of $f(x)+g(x)$ or $f(x)g(x)$. We already had an example of this phenomenon: the shortest period of $\sin x$ is $2\pi$, while the shortest period of $(\sin x)(\sin x)$ is $\pi$. Here is a more dramatic example. Let $f(x)=\sin x$, and $g(x)=-\sin x$. Each function has smallest period $2\pi$. But their sum is the $0$-function, which has every positive number $p$ as a period! If $p$ and $q$ are periods of $f(x)$ and $g(x)$ respectively, then any common multiple of $p$ and $q$ is a period of $H(f(x), g(x))$ for any function $H(u,v)$, in particular when $H$ is addition and when $H$ is multiplication. So the least common multiple of $p$ and $q$, if it exists, is a period of $H(f(x),g(x))$. However, it need not be the smallest period. Periods can exhibit quite strange behaviour. For example, let $f(x)=1$ when $x$ is rational, and let $f(x)=0$ when $x$ is irrational. Then every positive rational $r$ is a period of $f(x)$. In particular, $f(x)$ is periodic but has no shortest period. Quite often, the sum of two periodic functions is not periodic. For example, let $f(x)=\sin x+\cos 2\pi x$. The first term has period $2\pi$, the second has period $1$. The sum is not periodic. The problem is that $1$ and $2\pi$ are incommensurable. There do not exist positive integers $a$ and $b$ such that $(a)(1)=(b)(2\pi)$.
H: Measure of "boundary" of a nowhere dense set Suppose $E$ is a nowhere dense set. For simplicity, assume it is in $R$. Is it true that the Lebesgue measure of $\overline{E}-E$ is zero? I.e., $m(\overline{E}-E)=0$. The statement is not true in general. If $E$ is allowed to be open then take the complement of a fat cantor set. This is also not the same set as what is often defined to be the boundary. If $\overline{E}-E^o$ is the boundary of a set, then fat cantor sets have positive measure boundaries. AI: Let $P$ be a closed nowhere dense set in $\mathbb R$ with positive measure and let $E$ be a countable subset of $P$ that is dense in $P.$ For example, $E$ could be the set of endpoints of the complementary open intervals for $P.$ Then $m(\overline{E}-E)=m(P)$ is positive.
H: Fireworks under inverse-cube gravity What is the path of a projectile under an inverse-cube gravity law? Imagine that the law of gravity was changed overnight from $F(r) = G m_1 m_2 / r^2$ to $F(r) = G' m_1 m_2 / r^3$. To be specific, suppose $G' = G r_E$ where $r_E$ is the radius of the Earth, so that the force at the Earth's surface is unchanged. I am wondering how would the arc of a fireworks rocket compare to the parabolic path it would follow under the inverse-square law. (In the U.S. at this time of year, the evening sky is full of fireworks as we approach the 4th of July.) Presumably, the same rocket would travel a bit higher and cover a bit more distance horizontally, but what is the precise path it would follow? It is known that the solutions to a central force that is inverse-cubic is a Cotes' Spiral, which comes in three varieties:       But I am uncertain which of the three would apply here, and how to compute the relevant constants. Perhaps a piece of an epispiral, something like this?       It would be instructive to see the inverse-square parabola and the inverse-cubic Cotes's spiral, for the same projectile initial conditions, plotted together... Addendum. After retrieving Arnol'd's book as per Mark Dominus's recommendation, I wanted to share one interesting fact (p.37): The only central-force laws in which all the bounded orbits are closed are the inverse-square and inverse-cubic laws! AI: The path of a ballistic projectile in an inverse-square field is not a parabola, but an ellipse, assuming that the projectile does not reach escape velocity, which is certainly true for fireworks rockets. But fireworks don't go high enough for the $r$ in the denominator to vary significantly, so we can approximate this elliptical segment very closely by a parabolic segment instead. Even if you change the $r^2$ to an $r^3$, the same will be true: the rocket doesn't go far enough from the earth for the $r$ distance to vary significantly, so the path will still look like a parabolic segment. Moreover, the parabola it looks like will be almost exactly the same as the parabola it would have been with inverse-square gravity. (To put it another way, the path is a parabola if the Earth is an infinite flat plane rather than a sphere. So you only have to distinguish between elliptical and parabolic paths if the rocket is sufficiently far away that it can notice the curvature of the Earth. Fireworks don't get anything like that far away.)
H: How to find $\lim\limits_{n\rightarrow \infty}\frac{(\log n)^p}{n}$ How to solve $$\lim_{n\rightarrow \infty}\frac{(\log n)^p}{n}$$ AI: Note that $\displaystyle\frac{(\log n)^p}{n}=p^p\cdot\left(\frac{\log k}k\right)^p$, where $k=n^{1/p}\to\infty$ when $n\to\infty$. Since $\frac{\log k}{k}\to 0$, the limit is $0$ for every fixed $p>0$ (and for $p\le 0$ the limit is obviously $0$ as well).
H: Normal subgroup of prime index Generalizing the case $p=2$ we would like to know if the statement below is true. Let $p$ the smallest prime dividing the order of $G$. If $H$ is a subgroup of $G$ with index $p$ then $H$ is normal. AI: This is a standard exercise, and the answer is that the statement is true, but the proof is rather different from the elementary way in which the $p=2$ case can be proven. Let $H$ be a subgroup of index $p$ where $p$ is the smallest prime that divides $|G|$. Then $G$ acts on the set of left cosets of $H$, $\{gH\mid g\in G\}$ by left multiplication, $x\cdot(gH) = xgH$. This action induces a homomorphism from $G\to S_p$, whose kernel is contained in $H$. Let $K$ be the kernel. Then $G/K$ is isomorphic to a subgroup of $S_p$, and so has order dividing $p!$. But it must also have order dividing $|G|$, and since $p$ is the smallest prime that divides $|G|$, it follows that $|G/K|=p$. Since $|G/K| = [G:K]=[G:H][H:K] = p[H:K]$, it follows that $[H:K]=1$, so $K=H$. Since $K$ is normal, $H$ was in fact normal.
H: Decimal expression of reals Let $x>0$ be real. Then $A_1=\{n\in \mathbb{N}\mid x<n\}$ is nonempty since $\mathbb{R}$ is Dedekind complete. Since $\mathbb{N}$ is well ordered, $A_1$ has a least element $k$. Thus $k-1$ is the largest element of $A_2=\{n\in \mathbb{Z}\mid n\leq x\}$. Thus for every $0<x\in \mathbb{R}$, there exists a largest integer $n\in \mathbb{Z}$ such that $n\leq x$. Let $n_0$ be the largest integer such that $n_0\leq x$. If $n_0<x$, $10(x-n_0)>0$. Let $n_1$ be the largest integer less than or equal to this element. If I continue this process, I get $x=n_0 + n_1/10 + \cdots$ I don't know how to legitimize this process, since I can define finite decimals by this method but can't define infinite decimals. I think the existence of all the $n_k$'s should be guaranteed simultaneously, but I don't know how.. AI: From your work I think you will be able to formalize this much: For every $i\in\mathbb{N}$ and $x\in\mathbb{R}$ and $y\leq x$, there exists a digit $0\leq d<10$ such that $x\in [y+d\cdot 10^i,y+(d+1)\cdot10^{i})$. Note that this requires $x-y< 10^{i+1}$. Establish that there is a $k\in\mathbb{N}$ such that $10^k\leq x<10^{k+1}$. Set $x_0=x$ and find $d_0$ where $x_0-d_010^k<10^{k+1}$. Inductively define $x_{i+1}:=x_{i}-d_i10^{k-i}$ where $d_i$ is a digit, ensuring $x_{i+1}<10^{k-i+1}$ as discussed above. Now define the series $s=\sum_{i=0}^\infty d_i10^{k-i}$. The only thing left to show is that $s$ actually converges to $x$. By definition, $x-\sum_{i=0}^nd_i10^{k-i}=x_n-d_n10^{k-n}<10^{k-n+1}$. Since this is true for any $n$, the partial sums converge to $x$, and so $\sum_{i=0}^\infty d_i10^{k-i}$ is a decimal representation of $x$. There is, no doubt, an easier way, and perhaps I overlooked something crass... but I enjoyed giving it a shot for the first time.
H: expected value for this question A manufacturer buys an item for 1600 dollars and sells it for 2000 dollars. The probabilities for a demand of 0, 1, 2, 3, 4, "5 or more" items are 0.05, 0.15, 0.30, 0.25, 0.15, 0.10 respectively. How many items must he stock to maximize his expected profit? AI: There is often no magic formula to solve a problem. So we need to do some preliminary exploration to see what's going on. We also need to make some assumptions. In the real world, unsold items can probably be returned to the supplier, probably with some not very large "restocking" fee. Or else we can keep the unsold item, and perhaps sell it next month. Again, there will be a cost associated with this (warehousing, interest). Absolutely no information has been supplied about these matters. So we are forced to make the extremely unrealistic assumption that any unsold item has to be junked, like a fish that goes bad. Under that assumption, we start to solve the problem. Start with the easy stuff. If we stock $0$ items, our expected profit is $0$. Suppose that we stock $1$ item. With probability $0.05$, we won't sell it. Then our "profit" is $-1600$. With probability $0.95$, we will sell it. Our profit is then $400$. So our expected profit is $(-1600)(0.05)+(400)(0.95)$. Compute. We get $300$. Suppose that we stock $2$ items. With probability $0.05$, we sell none, profit $-3200$. With probability $0.15$, we sell one, profit $400-1600=-1200$. And with probability $0.8$ we sell both, profit $800$. The expected profit is therefore $(-3200)(0.05)+(-1200)(0.15)+(800)(0.8)$. Compute. Suppose that we stock $3$ items. In the same way, we can find the expected profit. Probably we should also do the calculation for $4$ items. For stocking $5$ or more items, we will need to do some mathematics. If we stock $n$ items, where $n \ge 5$, the expected number of items sold is at most $(1)(0.15)+(2)(0.30)+(3)(0.25)+(4)(0.15)+(n)(0.10)=2.1+0.1n$, so the expected profit is at most $2000(2.1+0.1n)-1600n=4200-1400n$, which is negative for every $n\ge5$. So the optimal stock is among $n\le4$. There are not many details for you to fill in.
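For completeness, a short Python sketch of the expectation at each stocking level $n$ (treating "5 or more" as selling all $n$ items, the optimistic reading):

    probs = {0: 0.05, 1: 0.15, 2: 0.30, 3: 0.25, 4: 0.15, 5: 0.10}

    def expected_profit(n):
        total = 0.0
        for demand, p in probs.items():
            sold = n if demand == 5 else min(demand, n)  # key 5 means ">= 5"
            total += p * (2000 * sold - 1600 * n)
        return total

    for n in range(7):
        print(n, expected_profit(n))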
H: How do I know which method of revolution to use for finding volume in Calculus? Is there any easy way to decide which method I should use to find the volume of a revolution in Calculus? I'm currently in the middle of my second attempt at Calculus II, and I am getting tripped up once again by this concept, which seems like it should be rather straight forward, but I can't figure it out. If I have the option of the disk/washer method and the shell method, how do I know which one I should use? AI: The first thing to understand is that you don’t directly choose the method of integration: you determine what kind of integration will be easier, based on the shape of the region in question, and that determines which method you’ll use. Draw the region that’s being revolved. Then ask yourself: does it slice up nicely into vertical strips, or do horizontal strips work better? If the region has boundaries of the form $y=f(x)$, $y=g(x)$, $x=a$, and $x=b$, the answer is almost always that vertical strips are simpler: for each $x$ from $a$ to $b$ you have a strip of length $f(x)-g(x)$ or $g(x)-f(x)$, depending on which of $f(x)$ and $g(x)$ is larger. Similarly, if the region has boundaries of the form $x=f(y)$, $x=g(y)$, $y=a$, and $y=b$, the answer is almost always that horizontal strips are simpler: for each $y$ from $a$ to $b$ you have a strip of length $f(y)-g(y)$ or $g(y)-f(y)$, depending on which of $f(y)$ and $g(y)$ is larger. The case of a region bounded by $y=f(x)$ and $y=g(x)$, where you have to solve for the points of intersection of the two curves in order to find the horizontal extent of the region, is really just special case of (1). Similarly, if the region is bounded by $x=f(y)$ and $x=g(y)$, and you have to solve for the vertical extent of the region, you’re looking at a special case of (2). What you want is a way of slicing the region into vertical or horizontal strips whose endpoints are defined in the same way. Take, for instance, the region bounded by $y=x$ and $y=x(x-1)$. If you slice it into vertical strips, each strip runs from the parabola at the bottom to the straight line at the top, so the strip at each $x$ between $0$ and $2$ has its bottom end at $x(x-1)$ and its top end at $x$. If you were to slice it into horizontal strips, the ones between $y=0$ and $y=2$ would have their left ends on the straight line and their right ends on the parabola, but the ones between $y=-1/4$ and $y=0$ would have both ends on the parabola. Thus, you’d need a different calculation to handle the part of the region below the $x$-axis from the one that you’d need for the part above the $x$-axis. Whether the boundaries are given in the form $y=f(x)$ or the form $x=f(y)$ is often a good indication: the former tends to go with vertical slices and the latter with horizontal slices. It’s far from infallible, however, and in some problems some boundaries are given in one form and some in the other. You should always look at a picture of the region. You want slices whose endpoints are defined consistently, and you want slices that don’t have any gaps in them. Once you’ve decided which way to slice up the region, sketch in the axis of revolution. If it’s parallel to your slices, each slice will trace out a cylindrical shell as it revolves about the axis. If, on the other hand, it’s perpendicular to your slices, each slice will trace out a washer or disk as it revolves about the axis. In either case the proper method of integration has automatically been determined for you.
H: Solving a complex integral I need help solving an integral from John Conway's book. Let $\alpha$ be a complex number with $|\alpha|\neq 1$; find the integral $$\int\frac{dx}{1-2\alpha\cos{x}+{\alpha}^2}$$ from 0 to $2\pi$, by integrating $$(z-\alpha)^{-1}(z-\frac{1}{\alpha})^{-1}$$ around the unit circle. AI: Substitute: $z = e^{i x}$ then: $$\cos x = \frac{1}{2} \left( z + \frac{1}{z} \right), \; dx = \frac{1}{i} \cdot \frac{dz}{z}$$ and we can rewrite the integral as: $$ i \int_{|z|=1} \frac{dz}{a (z-a)(z - \frac{1}{a})}$$ There are two cases. If $|a| < 1$, then only $z=a$ is a pole inside the circle and the residue is: $${\rm res}_{z=a} \frac{1}{a (z-a)(z - \frac{1}{a})} = \lim_{z \to a} \frac{z-a}{a (z-a)(z - \frac{1}{a})} = \frac{1}{a^2 - 1}$$ hence the result is: $$2 \pi i \cdot \frac{i}{a^2 - 1} = \frac{2 \pi}{1 - a^2}$$ If $|a| > 1$, similarly only $1/a$ lies inside the circle and the residue is: $${\rm res}_{z=1/a} \frac{1}{a (z-a)(z - \frac{1}{a})} = \lim_{z \to 1/a} \frac{z-\frac{1}{a}}{a (z-a)(z - \frac{1}{a})} = \frac{1}{1 - a^2}$$ hence the result is: $$2 \pi i \cdot \frac{i}{1 - a^2} = \frac{2 \pi}{a^2 - 1}$$
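A numerical sanity check of both cases (my addition, assuming SciPy; real values of $a$ only, since `quad` integrates real-valued functions):

```python
import numpy as np
from scipy.integrate import quad

for a in (0.3, -0.7, 2.5, -4.0):   # |a| != 1
    val, _ = quad(lambda t: 1/(1 - 2*a*np.cos(t) + a**2), 0, 2*np.pi)
    expected = 2*np.pi/(1 - a**2) if abs(a) < 1 else 2*np.pi/(a**2 - 1)
    assert abs(val - expected) < 1e-8
```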
H: What is this digital operation called? I've recently come across an operation that has interesting ties to the digital root operation. Instead of finding the sum of all digits, one would find the difference between each pair of adjacent digits, repeating this until a single number is obtained. One would then find the digital root of this single number. For example: $1937$ $9-1=8, 3-9=-6, 7-3=4$ $(-6)-8=-14, 4-(-6)= 10$ $10-(-14)=24$ Now for the digital root of the single number, $24$: $2+4=6$ Is there a specific name for this operation? If not, what would be a proper mathematical description? AI: Call your function $f$ and the digital root $d$. Let's "letter the digits" of the argument. (ex. 1937 would be $abcd$ with $a=1$, $b=9$, $c=3$ and $d=7$) If we just plug in our digits and simplify we get... $$ f(a) = d(a) \\ f(ab) = d(b-a) \\ f(abc) = d(c-2b+a) \\ f(abcd) = d(d-3c+3b-a) $$ Binomial coefficients with alternating sign?
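A short script (my own, not part of the answer) confirming both the worked example and the alternating binomial pattern, which arises because each pass is the finite difference operator, and $k$ passes give $\Delta^k$:

```python
from math import comb

def collapse(n):
    """Repeatedly replace the digit list by consecutive differences."""
    row = [int(c) for c in str(n)]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
    return row[0]

def binomial_form(n):
    # Alternating-sign binomial combination of the digits, as conjectured above.
    d = [int(c) for c in str(n)]
    k = len(d) - 1
    return sum((-1)**(k - i) * comb(k, i) * d[i] for i in range(k + 1))

def digital_root(n):
    n = abs(n)
    return n if n < 10 else digital_root(sum(int(c) for c in str(n)))

assert collapse(1937) == binomial_form(1937) == 24
assert digital_root(collapse(1937)) == 6
assert all(collapse(n) == binomial_form(n) for n in range(1, 10000))
```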
H: Partial fraction with same denominator Is the following fraction (actually a Laplace transform) a kind of partial fraction? $$\frac{4s+3}{{s^2}+3}$$ Can this be solved this way? $$\frac{A}{s}+\frac{B}{s+{\frac{3}{s}}}$$ If not, can you please tell me how to find the inverse transform? AI: If you want to keep everything real, this is already decomposed into partial fractions. For the inverse Laplace transform, just split it as $$ \frac{4s+3}{s^2+3} = 4 \frac{s}{s^2+3} + 3\frac{1}{s^2+3}. $$ You should be able to invert each term separately.
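For completeness (a routine step I am adding, using the standard transform pairs for $t\ge 0$): since $\mathcal L^{-1}\left\{\frac{s}{s^2+a^2}\right\}=\cos(at)$ and $\mathcal L^{-1}\left\{\frac{1}{s^2+a^2}\right\}=\frac{1}{a}\sin(at)$, taking $a=\sqrt{3}$ gives $$\mathcal L^{-1}\left\{\frac{4s+3}{s^2+3}\right\}=4\cos(\sqrt{3}\,t)+\sqrt{3}\,\sin(\sqrt{3}\,t).$$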
H: Direct sum of submodules I'm trying to prove that the following statements are equivalent for a commutative ring $R$: A: $_RR={}_RN \oplus {}_RM$ for some submodules $_RN$, $_RM\subseteq {}_RR$ B: There exists an element $e=e^2\in R$ such that $N=Re$ and $M=R(1-e)$ I have no idea how to show $A \implies B$, but $B\implies A$ seems easy: every $r \in R$ can be written as $re+r(1-e)$ and for $r\in N\cap M$ $r=ae=b(1-e)$ for some $a,b\in R$, so $re=ae^2=b(1-e)e=b(e-e^2)=0$. Now if $R$ is a domain, $e=0$ and $M=R$, or $r=0$ and $N\cap M=\{0\}$. But what if it's not a domain? AI: For $B\implies A$, since $(1-e)e=0$, you see that $ae\in R(1-e)$ implies $ae=ae^2=0$. Thus the intersection is zero. For $A\implies B$, look at $1=m+n\in M\oplus N$. Clearly $n=1-m$. So $m$ and $n$ are pretty good candidates for idempotents: check! (Hint: one might start by multiplying $1=m+n$ on the left and right by $n$ to prove something about $mn$ and $nm$.)
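Filling in the hint (a sketch, using the commutativity assumed in the question): write $1=m+n$ with $m\in M$, $n\in N$. Since $M$ and $N$ are ideals, $mn\in M\cap N=\{0\}$, so $n=n\cdot 1=nm+n^2=n^2$; thus $e:=n$ is idempotent, and $m=1-e$. Moreover $N=Re$: clearly $Re\subseteq N$, and any $x\in N$ satisfies $x=xm+xn=xn\in Re$ because $xm\in M\cap N=\{0\}$. The same argument gives $M=R(1-e)$.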
H: Properties of complex function Let $f$ be a complex-valued function on a region $\Omega$. Can someone help me prove that if $f$ and $f^2$ are harmonic in $\Omega$, then $f$ and $f^2$ are holomorphic in $\Omega$? AI: $f$ is harmonic if $f_{z\bar z}=0$ (you should check this if you haven't seen this before). Another way to phrase this: $f$ is harmonic if and only if $f_z$ is holomorphic. We are told that $f_z$ and $(f^2)_z$ are holomorphic. By the chain rule, $(f^2)_z=2ff_z$, and since $f_z$ is holomorphic, you should be able to conclude that $f$ is holomorphic as well. [Added] Or antiholomorphic, in case $f_z\equiv 0$. Thanks, @Andrew.
H: Proofs with steps of division and setting things not equal to zero I am self-studying from A Book of Abstract Algebra by Charles C. Pinter. In chapter 3 problem set B problem 4 it asks to show whether $(a,b)\star(c,d)=(ac-bd,ad+bc)$ on the set $ \{(x,y) \in \mathbb R^2 | (x,y) \not= (0,0)\} $ is a group. Currently I am in the middle of showing that there is an inverse. This is the equation I am trying to solve to show that there is an inverse. $(a,b) \star (a',b')=(1,0)=(aa' - bb',ab' + ba')$ I am pretty sure I have the answer for the first entry of the inverse element, but I pass through steps with division. In particular I am wondering whether $[1]$ to $[2]$ is a valid step and, if it is, why. In other words, if I exclude $0$ at one step, do I have to keep $0$ excluded for the rest of the proof? If it's not legal, would I just have to split the problem into a couple of cases? $[1]$ $\frac{aa'-1}{b} =\frac{-ba'}{a}$ where $a\not=0$ and $b\not=0$ $[2]$ $a^2 a' - a = -b^2 a'$ (note I did not exclude $0$) AI: You want $a'$ and $b'$ such that $aa'-bb'=1$ and $ab'+ba'=0$. In effect you want to show that the system $$\left\{\begin{align*} &ax-by=1\\ &bx+ay=0 \end{align*}\right.\tag{1}$$ always has a solution with $\langle x,y\rangle\ne\langle 0,0\rangle$ provided that $\langle a,b\rangle\ne\langle 0,0\rangle$. You can do this in several ways. If you already know the relevant linear algebra, you can note that $$\det\pmatrix{a&-b\\b&a}=a^2+b^2\ge 0\;,$$ with equality iff $a=b=0$; so whenever $\langle a,b\rangle\ne\langle 0,0\rangle$ the system $(1)$ has a unique solution, which clearly (by virtue of the first equation of $(1)$) cannot be $x=y=0$. Alternatively, you can work out the solution by whatever technique appeals to you and then show directly that it exists and is a non-$\langle 0,0\rangle$ solution whenever $\langle a,b\rangle\ne\langle 0,0\rangle$. Your approach, however, does require you to consider separate cases, since it’s entirely possible that one side of $[1]$ is undefined.
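For reference, solving $(1)$ explicitly (the computation the answer leaves to the reader) gives $$x=\frac{a}{a^2+b^2},\qquad y=\frac{-b}{a^2+b^2},$$ which is defined, and different from $\langle 0,0\rangle$, precisely when $\langle a,b\rangle\ne\langle 0,0\rangle$. Under the identification $(a,b)\leftrightarrow a+bi$ this is just the usual inverse for complex multiplication.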
H: Exact sequence of sheaves in Beauville's "Complex Algebraic Surfaces" On the first pages of Beauville's "Complex Algebraic Surfaces", he has a surface $S$ (smooth, projective) and two curves $C$ and $C'$ in $S$. He defines $\mathcal{O}_S(C)$ as the invertible sheaf associated to $C$. I'm assuming that if $C$ is given as a Cartier divisor by $(U_\alpha,f_\alpha)$, then $\mathcal{O}_S(C)(U_\alpha)$ is generated by $1/f_\alpha$ (following Hartshorne's notation); this assumption is justified as Beauville says that $\mathcal{O}_S(-C)$ is simply the ideal sheaf that defines $C$. The part I don't understand is that he then takes a non-zero section $s\in H^0(\mathcal{O}_S(C))$ (and the same for $s'$) and says that it vanishes on $C$. Isn't this the definition of a global section of $\mathcal{O}_S(-C)$ though (according to the previous notation)? He then writes the exact sequence (which I don't really understand) $$0\to\mathcal{O}_S(-C-C')\stackrel{(s',-s)}{\to}\mathcal{O}_S(-C)\oplus\mathcal{O}_S(-C')\stackrel{(s,s')}{\to}\mathcal{O}_S\to\mathcal{O}_{C\cap C'}\to 0.$$ I need to have the definitions clear in order to be able to understand the exact sequence. Can anybody help me out? AI: Beauville is right. a) The only global section of $\mathcal{O}_S(-C)$ is zero: indeed $\mathcal{O}_S(-C)=\mathcal I_C\subset \mathcal{O}_S$ is the sheaf of holomorphic functions on $S$ vanishing on $C$. In particular since global holomorphic functions on the projective surface $S$ are constant, the only such function vanishing on $C$ is zero: $ H^0(S,\mathcal O_S(-C))=0\subset H^0(S,\mathcal{O}_S)=\mathbb C$ b) The part about a section of $\mathcal O_S(C)$ vanishing on $C$ is devilishly subtle and, alas, not sufficiently explained in books. Pragmatically, the point is that sections $s_\alpha \in H^0(U_\alpha,\mathcal O_S(C))$ are meromorphic functions on $U_\alpha$ of the form $\frac{h_\alpha }{f_\alpha}$ with $h_\alpha\in \mathcal O_S(U_\alpha )$. But the vanishing set of $s_\alpha$ is that of $h_\alpha $! Also, it is not true that every section $s \in H^0(S,\mathcal O_S(C))$ vanishes on $C$. What is true is that there is a canonical section $s_0 \in H^0(S,\mathcal O_S(C))$ vanishing exactly on $C$. And that section is... the constant function $1$, seen as a section $1=s_0\in H^0(S,\mathcal O_S(C))$! Indeed on $U_\alpha $ write $1=\frac{f_\alpha }{f_\alpha }$ and according to the pragmatic recipe above, the zero locus of that section is that of $f_\alpha$, namely $C \cap U_\alpha$.
H: Find max $x\cdot y$ where $x\in S= \{(x_1,\ldots, x_n)\in \mathbb{R}^n : |x_1|^p+\ldots+|x_n|^p=1\}$ Let $S= \{(x_1,\ldots, x_n)\in \mathbb{R}^n : |x_1|^p+\ldots+|x_n|^p=1\}$, where $p>1$ is real (and fixed), consider a fixed $y\in\mathbb{R}^n$ and $T:\mathbb{R}^n\rightarrow\mathbb{R}$ such that $T(x) = x\cdot y$, where $x\cdot y = x_1y_1+\ldots+x_ny_n$. I'm having a hard time finding $\max_{x\in S}\ T(x)$. I already noticed a few things but it's still really difficult to do something useful. 1) $\forall (x_1\ldots, x_n)\in S, |x_1|^p+\ldots+|x_n|^p\leq |x_1|+\ldots+|x_n|$; 2) Taking the norm $\Vert(x_1,\dots, x_n)\Vert=(|x_1|^p+\ldots+|x_n|^p)^{\frac{1}{p}}$ and the ball $B(0,1)$, with center $0\in\mathbb{R}^n$ and radius $1$, we have $S=\partial B$; 3) $\forall i=1\ldots n, T(e_i) = y_i$. Also, I'm trying to avoid Hölder's inequality. AI: Define $\newcommand{\sgn}{\operatorname{sgn}}$ $$ F(x)=\sum_{k=1}^n|x_k|^p\tag{1} $$ then $$ \nabla F(x)=\left(p\sgn(x_k)|x_k|^{p-1}\right)_{k=1}^n\tag{2} $$ You want to find a point on the surface where $\nabla F\,||\,y$. Therefore, $$ x=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{-\frac{1}{p}}\left(\sgn(y_k)|y_k|^{\frac{1}{p-1}}\right)_{k=1}^n\tag{3} $$ should be the point where $T(x)$ is the greatest. Computing $T(x)$ yields $$ T(x)=\left(\sum_{k=1}^n|y_k|^{\frac{p}{p-1}}\right)^{\frac{p-1}{p}}\tag{4} $$
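Note that $(4)$ is exactly the dual norm $\|y\|_q$ with $\frac1p+\frac1q=1$, which is what Hölder's inequality would predict. A quick numerical check of $(3)$ and $(4)$ (my own sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1)
y = rng.normal(size=5)

# The candidate maximiser (3), normalised to lie on S
x = np.sign(y) * np.abs(y) ** (1 / (p - 1))
x /= np.sum(np.abs(x) ** p) ** (1 / p)
claimed = np.sum(np.abs(y) ** q) ** (1 / q)   # formula (4): the l^q norm of y

assert abs(x @ y - claimed) < 1e-9

# Random points on S never do better than the claimed maximum
z = rng.normal(size=(100_000, 5))
z /= np.sum(np.abs(z) ** p, axis=1, keepdims=True) ** (1 / p)
assert (z @ y).max() <= claimed + 1e-9
```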
H: doubt on $C^{\infty}$ vector field Warner page-37: Theorem: Let $X$ be a $C^{\infty}$ vector field on a differentiable manifold $M$. For each $m\in M$ there exist $a(m)$ and $b(m)$ in the extended real line, and a smooth curve $$\gamma_m:(a(m),b(m))\rightarrow M$$ such that a) $0\in (a(m),b(m))$ and $\gamma_m(0)=m$ b) $\gamma_m$ is an integral curve of $X$ c) If $\mu:(c,d)\rightarrow M$ is a smooth curve satisfying conditions $a$ and $b$ then $(c,d)\subseteq (a(m),b(m))$ and $\mu=\gamma_m|(c,d)$ My question: I am confused about the dependence of $a$ and $b$ on $m$, and I have no feeling for this definition. Could anyone give a simple example satisfying the above conditions? Next is based on the above: Definition: $\forall t\in \mathbb{R}$ we define a transformation $X_t$ with domain $$D_t=\{m\in M:t\in (a(m),b(m))\}$$ by setting $$X_t(m)=\gamma_m(t),$$ I understand that this is a map from $M$ to $M$, but I am not able to visualize what exactly is going on. I need some example on $\mathbb{R}$ or $\mathbb{R}^n$; I would be really happy if someone helped me understand these. AI: All $(a(m), b(m))$ is doing is defining a "maximal" integral curve starting at the point $m$. The easiest example is if the vector field is complete (e.g. take the manifold to be compact). Then $a(m)=-\infty$ and $b(m)=\infty$ for all $m$. In other words, you can flow along the vector field for all time through any point. One way to think about this is given some point $m\in M$ you could certainly define some infinitesimally small curve $(-\epsilon, \epsilon)\to M$ through $m$ whose tangent vector field is $X$ along the curve. This will probably extend to some larger domain because this small curve segment is probably a part of some larger integral curve. This theorem tells you that in fact there is a well-defined maximum. The map $X_t$ is flowing along the vector field for time length $t$. If $t=0$, then $X_0$ is just the identity. You don't flow for any time. If $t=1$, then $X_t(m)$ just means take the point $m$ and move it along the integral curve for a time length of $1$. In $\mathbb{R}^2$ you could take the constant vector field $\frac{\partial}{\partial x}$. It just points to the right with magnitude $1$ everywhere. Intuitively flowing along this is going to move a point to the right with the proper velocity, and this is true. For example, $X_4((x,y))=(x+4, y)$. The integral curves are defined for all time by $\gamma_{x_0, y_0}(t)=(x_0+t, y_0)$, so $X_t(x,y)=(x+t, y)$. I think of it like water flowing over the manifold. The vector field tells you the direction and magnitude of the flow of the water at a given point. The integral curve says if you start at some point $m$ it is like dropping something in the water at that point and letting it move with the flow of the water for the length of time specified. The flow $X_t$ is sort of the global picture of doing this with all points simultaneously.
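A concrete example where $a(m)$ and $b(m)$ genuinely depend on $m$ (my own illustration, not from Warner): take $M=\mathbb{R}$ and $X=x^2\frac{d}{dx}$. Solving $\gamma'=\gamma^2$ with $\gamma(0)=m$ gives $\gamma_m(t)=\frac{m}{1-mt}$, so for $m>0$ the maximal interval is $(a(m),b(m))=(-\infty,1/m)$: the integral curve escapes to infinity in finite time, and the larger $m$ is, the sooner it blows up. Consequently, for $t>0$ the domain of the flow is $D_t=\{m\in\mathbb{R}:m<1/t\}$, a proper subset of $M$ that shrinks as $t$ grows, and on it $X_t(m)=\frac{m}{1-mt}$.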
H: How many times do you need to double the previous result to get at least $10^{82}$? This is pretty straightforward, but I'd like to learn how to find out how many times you need to double the previous result of a calculation to reach some value, for example $10^{82}$: $1\times 2 = 2$ $2\times 2 = 4$ $4\times 2 = 8$ $8\times 2 = 16$ n. $x\times 2 \geq 10^{82}$ AI: The $n$th step is $2^n$. If you want $2^n\geq A$, then you want $$n = \log_2(2^n) \geq \log_2(A) = \frac{\ln(A)}{\ln(2)} = \frac{\log_{10}A}{\log_{10}(2)}.$$ So the first $n$ at which $2^n\geq A$ will be the least positive integer greater than or equal to $\log_2(A)$, which is denoted $$\left\lceil \log_2A \right\rceil.$$
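Concretely, for $A=10^{82}$ (my own check, using exact integer arithmetic to confirm the formula):

```python
from math import ceil, log2

A = 10 ** 82
n = ceil(log2(A))              # 273
assert 2 ** (n - 1) < A <= 2 ** n
print(n)                       # 273 doublings of 1 are needed
```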
H: Linear combination of cosines: so close to periodic. I'm interested in linear combinations of cosines: $$f(x) = \alpha_1 \cos(2\pi \theta_1 x) + \alpha_2 \cos(2\pi \theta_2 x) + \cdots + \alpha_k \cos(2\pi \theta_k x)\enspace,$$ where $\alpha_i \in \mathbb{Z}$ and the $\theta_i$'s are irrational numbers that are linearly dependent over the rationals (when they are rational numbers, or irrational and linearly independent, I have less trouble, as in the former case the function is periodic, and in the second, any value of each $\cos$ can be attained independently of the others). When plotting functions like this, one sees that they are not periodic, but still, they are close to being so. In particular, it seems that there are values $p$ and $\epsilon$ such that if $f(x) > \epsilon$, then $f(x+n\times p) > \epsilon$, at least for the first few $n$'s. I have two questions: Is this true? Is there a name for the pseudo-periodicity that $f$ seems to enjoy? Thanks! AI: You could be interested in almost periodic functions. A function $f\colon \Bbb R\to \Bbb C$ is said to be almost periodic if for all $\varepsilon>0$, the set $$P(f,\varepsilon)=\{T\in\Bbb R\mid\forall x\in \Bbb R, |f(x+T)-f(x)|\leq \varepsilon\}$$ is relatively dense, that is, we can find $L>0$ such that each interval of length $L$ has a non-empty intersection with $P(f,\varepsilon)$. It can be shown that if $f\colon\Bbb R\to\Bbb C$ is continuous, the following properties are equivalent: (i) $f$ is almost periodic; (ii) for all $\varepsilon>0$, we can find an integer $N$, real numbers $\theta_1,\dots,\theta_N$ and complex numbers $a_1,\dots,a_N$ such that $$\forall x\in\Bbb R,\quad \left|f(x)-\sum_{j=1}^Na_je^{i\theta_j x}\right|\leq \varepsilon;$$ (iii) the family $\{\tau_t f,t\in\Bbb R\}$ has a compact closure for the uniform norm on the continuous bounded functions over $\Bbb R$, where $\tau_tf(x)=f(x+t)$. This concept was introduced by Harald Bohr. A good reference is Corduneanu's book Almost periodic functions.
H: Angle between vectors when their dot product and norm of cross product are equal. I had a question in the final exam that asked what the angle between vectors $\vec a$ and $\vec b$ is if: $$\vec a \cdot \vec b=|\vec a\times\vec b| $$ Any hints please. AI: It is not hard to show that $$\|u\times v\| = \|u\|\|v\| \sin(\theta),$$ where $\theta\in[0,\pi]$ is the angle between $u$ and $v$. This should lead you right to a solution.
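To finish from the hint (spelling out the standard step): since also $\vec a\cdot\vec b=\|\vec a\|\|\vec b\|\cos(\theta)$, the given condition becomes $\|\vec a\|\|\vec b\|\cos(\theta)=\|\vec a\|\|\vec b\|\sin(\theta)$; for nonzero vectors this forces $\tan(\theta)=1$, and since $\theta\in[0,\pi]$, the angle is $\theta=\pi/4$.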
H: Is the arithmetic most mathematicians use modelled within first- or second-order logic? I often read that arithmetic in first order logic has problems and you really want to do it in second order logic. However, aren't the Zermelo–Fraenkel axioms written down in the language of first order logic? AI: Note that ZFC is a theory strong enough to interpret second-order arithmetic. So if you agree to take ZFC as your foundational point, taking second-order PA for arithmetic should not pose any problems. This is one of the reasons set theory is a good foundational basis for mathematics, since it allows second-order (and higher-order) logic to work via first-order formulas in the universe of set theory. Further reading: First-order logic advantage over second-order logic? what is the relationship between ZFC and first-order logic? First-Order Logic vs. Second-Order Logic
H: Do we have $(R/I)/(J/I) \cong (R/J)/(I/J)$? Do we have $(R/I)/(J/I) \cong (R/J)/(I/J)$ where $R$ is a ring and $I,J$ are ideals? If $I \subset J$ then it follows from the third isomorphism theorem. In $R= \mathbb Z$ with $I = 3 \mathbb Z$ and $J = 5 \mathbb Z$ we have $I/J = \{ \bar{0}, \bar{3}, \bar{6}, \bar{9}, \bar{12} \} \cong R/J$ and $J/I = \{ \bar{0}, \bar{5}, \bar{10} \} \cong R/I$. Then $(R/I) / (J/I) = 0 = (R/J) / (I/J)$ so it seems to hold also. I assume it doesn't hold in general. Or does it? Does it hold for principal ideal domains? Thanks. AI: You can only speak of $I/J$ if $J\subseteq I$, so your question of whether $$(R/I)/(J/I) \cong (R/J)/(I/J)$$ only makes sense if both $I\subseteq J$ and $J\subseteq I$, or equivalently $I=J$. Of course, when that is the case, then the above statement is trivially true. Let's use your specific example, where $R=\mathbb{Z}$, $I=3\mathbb{Z}$, and $J=5\mathbb{Z}$. There is no such thing as $I/J$, but you can talk about $$I/(I\cap J)=3\mathbb{Z}/15\mathbb{Z}=\{\bar{0},\bar{3},\bar{6},\bar{9},\bar{12}\}\cong \mathbb{Z}/5\mathbb{Z},$$ and $$J/(I\cap J)=5\mathbb{Z}/15\mathbb{Z}=\{\bar{0},\bar{5},\bar{10}\}\cong \mathbb{Z}/3\mathbb{Z}.$$ However, note that (by the third isomorphism theorem) $$(R/(I\cap J))/(I/(I\cap J))\cong R/I$$ $$(R/(I\cap J))/(J/(I\cap J))\cong R/J$$ and there is no reason for those two rings to be isomorphic in general.
H: Highschool math: $w=\frac{x+jy}{x+jy-1}$ separate (x and y) and j terms. I'm stuck on the first step again: $$w=\frac{x+jy}{x+jy-1}$$ I need to separate out the (x and y) terms and the j terms. The only thing I can come up with at this stage is: $$w=\frac{x}{x+jy-1}+\frac{jy}{x+jy-1}$$ but that doesn't help me at all. Aaaaargh!!! AI: Write $$ w = {x + jy\over (x-1) + jy}$$ Multiply both numerator and denominator by $(x-1)-jy$ to obtain $$ w = {(x + jy)(x-1-jy)\over (x-1)^2 + y^2}$$ Multiply out the numerator and separate into real and imaginary terms.
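Carrying out the multiplication (just filling in the arithmetic the answer leaves off): $(x+jy)(x-1-jy)=x(x-1)+y^2+j\bigl(y(x-1)-xy\bigr)=(x^2-x+y^2)-jy$, so $$w=\frac{x^2-x+y^2}{(x-1)^2+y^2}-j\,\frac{y}{(x-1)^2+y^2},$$ which cleanly separates the $(x,y)$ part from the $j$ term.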
H: Absoluteness and categories From the wikipedia article on the Skolem paradox: A central goal of early research into set theory was to find a first order axiomatisation for set theory which was categorical, meaning that the axioms would have exactly one model, consisting of all sets. Skolem's result showed this is not possible, creating doubts about the use of set theory as a foundation of mathematics. I would like to know if the term "categorical" for that property here led to the naming of category theory. Maybe because all the problems of non-absoluteness don't happen there? What is the relation? AI: This is not the same category as in category theory. This is the model-theoretic concept of categoricity. We say that a theory is categorical if it has exactly one model up to isomorphism. A first-order theory can be categorical only if its unique model is finite: by the Löwenheim–Skolem theorems, a first-order theory with an infinite model has models of arbitrarily large infinite cardinality. So for first-order theories we have a weaker notion, $\kappa$-categorical, namely that all models of cardinality $\kappa$ are isomorphic. Skolem's paradox showed that ZFC is not categorical, in the model-theoretical sense of the word, although historically I am not sure if the term was coined at the time. On the other hand, category theory deals with notions of categorizing mathematical objects, such as "groups" or "compact topological rings".
H: What happens to this theorem if we change the dimension of the spaces? We have the following theorem: Let $T:X\rightarrow Y$ be a compact operator between the Hilbert spaces $X,Y$. Then there exist (possibly finite) orthonormal bases $\{e_1,e_2,\ldots \}$ and $\{f_1,f_2,\ldots \}$ of $X,Y$ and (possibly finitely many) numbers $s_1,s_2,\ldots$ with $s_n \rightarrow 0$ (if there are countably many numbers), such that $$ Tx=\sum\limits_{n=1}^{\infty}s_n \left<x,e_n \right>f_n $$ This is the singular value decomposition for operators between Hilbert spaces. My questions are: If $X$ and $Y$ are finite dimensional, is the theorem as stated above still true? If $X$ is finite dimensional, both orthonormal bases become finite, since then we work in $T(X)$, which is a finite dimensional subspace of the (possibly infinite dimensional) space $Y$. But what happens if $X$ is infinite dimensional but $Y$ is not? Are the orthonormal bases in that case still finite? AI: If one space is infinite-dimensional and the other is finite-dimensional, you have finite orthonormal sets, but obviously not an orthonormal basis of the infinite-dimensional one. Similarly if $X$ and $Y$ are finite-dimensional but with different dimensions, you don't get a basis of the larger one. Even in the case where $X=Y$, it's not at all clear to me that $\{e_i\}$ and $\{f_i\}$ can both be taken to be bases, rather than just orthonormal sets.
H: Differentiable structure on the real line The usual differentiable structure on the real line was obtained by taking ${F}$ to be the maximal collection containing the identity map. Let ${F_1}$ be the maximal collection containing $t\mapsto t^3$. I need to show $F_1\neq F$, but that $(\mathbb{R},F)$ and $(\mathbb{R},F_1)$ are diffeomorphic. Well, first of all, what is meant by a maximal collection? I am not clearly getting that, and how to define a diffeomorphism between these ordered pairs is also not clear to me. Please help. AI: The maximality of $F$ means that if $(U, \varphi)$ [here $U$ is an open subset of our manifold; in this case, of $\mathbb R$] is another chart which is compatible with each $(V, \psi) \in F$ — in the sense that each transition map $$\varphi \circ \psi^{-1}\colon \psi(U \cap V) \to \varphi(U \cap V)$$ is a diffeomorphism in the sense of calculus on $\mathbb R$ — then $\varphi$ is actually in $F$. So take $U = \mathbb R$ and $\varphi(t) = t^3$. Is this compatible with $F$? For the second part, note that it follows from all of these definitions that if a function $f\colon (\mathbb R, F) \to (\mathbb R, F_1)$ is a diffeomorphism with respect to a choice of global coordinates on both sides, then it is a diffeomorphism of manifolds.
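To make both parts explicit (a standard completion of the hint, worth verifying yourself): the chart $\varphi(t)=t^3$ is not compatible with $F$, because the transition map $t\mapsto t^{1/3}$ against the identity chart fails to be differentiable at $0$; hence $F_1\neq F$. On the other hand, $f\colon(\mathbb{R},F)\to(\mathbb{R},F_1)$ given by $f(t)=t^{1/3}$ is a diffeomorphism: read through the chart $t\mapsto t^3$ of $F_1$ on the target, it becomes $t\mapsto(t^{1/3})^3=t$, the identity, which is smooth with a smooth inverse.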
H: Algorithms and generalisation of functions I admit I'm a little weak with functions in mathematics, but I'm in a real hurry to get this riddle worked out. How to express $$x(n)=x(n-1)+x(n-2)+1,$$ where $n>1$ and $x(0)=0$ and $x(1)=1$, in terms of the function $$y(n)=y(n-1)+y(n-2),$$ where $n>1$ and $y(0)=0$ and $y(1)=1$? I found the answer $x(n)=y(n+2)-1$ in a PDF about AVL trees, for the minimal number of nodes $n_{\min}(h)$ of an AVL tree of height $h$. Please explain. AI: The Fibonacci numbers are defined by $f_0=0,f_1=1$, and $f_n=f_{n-1}+f_{n-2}$ for $n>1$. Your sequence $x$ is defined by $x_0=0,x_1=1$, and $x_n=x_{n-1}+x_{n-2}+1$ for $n>1$. We wish to prove that $x_n=f_{n+2}-1$ for $n\ge 0$. We first observe that $x_0=0=1-1=f_2-1$ and $x_1=1=2-1=f_3-1$; this gets the induction off the ground. Now for the induction step let $n>1$ be arbitrary, and assume as the induction hypothesis that $x_k=f_{k+2}-1$ for $k=0,\dots,n-1$. Then $$\begin{align*} x_n&=x_{n-1}+x_{n-2}+1\tag{1}\\ &=\Big(f_{(n-1)+2}-1\Big)+\Big(f_{(n-2)+2}-1\Big)+1\tag{2}\\ &=f_{n+1}+f_n-1\tag{3}\\ &=f_{n+2}-1\tag{4}\;, \end{align*}$$ where $(1)$ follows from the definition of the $x$’s, $(2)$ from the induction hypothesis, $(3)$ by elementary arithmetic, and $(4)$ from the definition of the Fibonacci numbers. The result now follows by induction: we’ve shown that it’s true for $n=0$ and $n=1$, and we’ve shown that if it’s true for all non-negative $k<n$, then it’s true for all $k<n+1$.
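A quick check that the two sequences line up (my own sketch, with $y$ the Fibonacci recurrence, matching the proof above):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def x(n):
    return n if n < 2 else x(n - 1) + x(n - 2) + 1

@lru_cache(maxsize=None)
def y(n):                      # Fibonacci: y(0)=0, y(1)=1
    return n if n < 2 else y(n - 1) + y(n - 2)

assert all(x(n) == y(n + 2) - 1 for n in range(30))
```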
H: Directed colimit in a concrete category I recently found myself at a spot that I never believed I'd reach (or at least not that soon in my career). I ran into a problem which seems to be best answered via categories. The situation is this: I have a directed system of structures and the maps are all inclusion maps, that is $X_i$ for $i\in I$ where $(I,\leq)$ is a directed set; and if $i\leq j$ then $X_i$ is a substructure of $X_j$. Suppose that the direct limit of the system exists. Can I be certain that this direct limit is actually the union? Namely, what sort of categories would ensure this, and what possible counterexamples are there? I asked several folks around the department today, some assured me that this is the case for concrete categories, while others assured me that a counterexample can be found (although it won't be organic, and would probably be manufactured for this case). The situation is such that the direct system is originating from forcing, so it's quite... wide and probably immune to some of the "thinkable" counterexamples (by arguments of [set theoretical] genericity from one angle or another), and so any counterexample which is essentially a linearly ordered system is not going to be useful as a counterexample. Another typical counterexample which is irrelevant here is finitely-generated things, for example we can take a direct system of f.g. vector spaces whose limit is not f.g. but this aspect is also irrelevant to me; although I am not sure how to accurately describe this sort of irrelevance. Last point (which came up with every person I discussed this question today), if we consider: $$\mathbb R\hookrightarrow\mathbb R^2\hookrightarrow\ldots$$ Then we consider those to be actually increasing sets in inclusion and not "natural identifications" as commonly done in categories. So the limit of the above would actually be $\mathbb R^{<\omega}$ (all finite sequences from $\mathbb R$). AI: Your question essentially amounts to asking, "when does the forgetful functor $U : \mathcal{C} \to \textbf{Set}$ create directed colimits?" More generally, one could replace "directed colimit" by "filtered colimit". There is, as far as I know, no general answer. Here is one reasonably general class of categories $\mathcal{C}$ for which there is such a forgetful functor. Let us consider a finitary algebraic theory $\mathfrak{T}$, i.e. a one-sorted first-order theory with a set of operations of finite arity and whose axioms form a set of universally-quantified equations. For example, $\mathfrak{T}$ could be the theory of groups, or the theory of $R$-modules for any fixed ring $R$. Then, the category $\mathcal{C}$ of models of $\mathfrak{T}$ will be a category in which filtered colimits are, roughly speaking, obtained by taking the union of the underlying sets. This can be proven "by hand", by showing that the obvious construction has the required universal property: the key lemma is that filtered colimits commute with finite limits in $\textbf{Set}$ – so, for example, $\varinjlim_{\mathcal{J}} X \times \varinjlim_{\mathcal{J}} Y \cong \varinjlim_{\mathcal{J}} X \times Y$ if $\mathcal{J}$ is a filtered category. Mac Lane spells out the details in [CWM, Ch. IX, §3, Thm 1]. Addenda. Fix a one-sorted first-order signature $\Sigma$. Consider the directed colimit of the underlying sets of some $\Sigma$-structures: notice that the colimit inherits a $\Sigma$-structure if and only if the operations and predicates of $\Sigma$ are all finitary.
Qiaochu's counterexample with $\{ 0 \} \subset \{ 0, 1 \} \subset \{ 0, 1, 2 \} \subset \cdots$ can be pressed into service here as well. So let us assume $\Sigma$ only has finitary operations and predicates. The problem is now to establish an analogue of Łoś's theorem for directed colimits. Let $\mathcal{J}$ be a directed set and let $X : \mathcal{J} \to \Sigma \text{-Str}$ be a directed system of $\Sigma$-structures. Let us say that a logical formula $\phi$ is "good" just if $X_j \vDash \phi$ for all $X_j$ implies $\varinjlim X \vDash \phi$ (where $\varinjlim X$ is computed in $\textbf{Set}$ and given the induced $\Sigma$-structure). It is not hard to check that universally quantified equations and atomic predicates are good formulae. The set of good formulae is closed under conjunction and disjunction. The set of good formulae is closed under universal quantification. The set of good formulae is not closed under existential quantification: the formula $\forall x . \, x \le m$ (with free variable $m$) is a good formula in the signature of posets, but $\exists m . \, \forall x . \, x \le m$ is clearly not preserved by direct limits. However, a quantifier-free good formula is still a good formula when prefixed with any number of existential quantifiers. In particular, the set of good formulae is not closed under negation: the property of being unbounded above can be expressed as a good formula in the signature of posets with inequality, but its negation is the property of being bounded above. Section 6.5 of [Hodges, Model theory] seems to have some relevant material, but I haven't read it yet. The point, I suppose, is that there are some fairly strong conditions that the theory in question must satisfy before the directed colimit in $\textbf{Set}$ is even a model of the theory, let alone be a directed colimit in the category of models of the theory.
H: How to show growth without bound in only certain cases and not in others? I encountered the following problem. (We're working in a finite-dimensional real vector space, here.) Suppose $$A=\frac{1}{2}\left(\begin{array}{cc}-2 & 4\\1 & 1\end{array}\right).$$ Find a vector, $y$, so that $\lVert A^nx\rVert\to\infty$ as $n\to\infty$, except when $x$ is perpendicular to $y$. Explain your reasoning. My first thought was that we should try to express $A=zy^t$ for some vectors $y,z$, whence, if $x$ is perpendicular to $y$, we would have $$Ax=z(y^tx)=z\langle y,x\rangle=0,$$ and so $\lVert A^nx\rVert$ cannot grow without bound. Then, I'd hoped to show that for any other $x$, it did grow without bound. Unfortunately, when I assumed such $y,z$ existed and attempted to see what I could discern about them, I ended up with a $0\neq 0$ contradiction. It's possible I made a mistake, but I'm not finding one, and if there isn't a mistake, then that approach is not going to work. I also found the eigenvalues of $A$--namely $\frac{-1}{8}\left(1\pm\sqrt{257}\right)$--both of which have modulus greater than $1$, so clearly $\lVert A^n\rVert$ grows without bound. Unfortunately, I'm not sure what else I can do from here. Any hints? As a secondary question, for which matrices $A$ can we say that $A=zy^t$ for some $y,z$? (Of course, if it's all of them, then I goofed, and my first thought works after all.) AI: Hint: Recalculate the eigenvalues. They are much nicer than you think. Then find a basis of eigenvectors. The problem will fall apart.
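Carrying the hint through (my own completion, worth re-deriving): the characteristic polynomial of $A$ is $\lambda^2+\frac12\lambda-\frac32$, so the eigenvalues are $\lambda=1$ and $\lambda=-\frac32$, with eigenvectors $(1,1)^T$ and $(4,-1)^T$ respectively. Writing $x=\alpha(1,1)^T+\beta(4,-1)^T$ gives $A^nx=\alpha(1,1)^T+\beta\left(-\frac32\right)^n(4,-1)^T$, so $\lVert A^nx\rVert\to\infty$ exactly when $\beta\neq0$, i.e. unless $x\in\operatorname{span}\{(1,1)^T\}$; that span is the orthogonal complement of $(1,-1)^T$, so $y=(1,-1)^T$ (up to scaling) answers the problem. As for the secondary question: $A=zy^t$ with $z,y\neq0$ holds precisely when $A$ has rank $1$, and this $A$ has rank $2$ (its determinant is $-\frac32$), which is why the first approach led to a contradiction.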
H: variance of multiple regression coefficients If I consider universal kriging (or multiple spatial regression) in matrix form as: ${\bf{V = XA + R }}$ where $\bf{R}$ is the residual and $\bf{A}$ are the trend coefficients, then the estimate of ${\bf{\hat A}}$ is: ${\bf{\hat A}}=(\bf{X^{T}C^{-1}X)^{-1}X^{T}C^{-1}V}$ (as I understand it), where $\bf{C}$ is the covariance matrix, if it is known. Then, the variance of the coefficients is: $\text{VAR}({\bf{\hat A}})=(\bf{X^{T}C^{-1}X)^{-1}}$??? I am getting this from here. How does one get from the estimate of ${\bf{\hat A}}$ to its variance? I.e., how can I derive that variance? AI: $$\newcommand{\var}{\operatorname{var}}$$ First, recall that $$ \var(MV) = M\Big(\var(V)\Big)M^T, $$ so $$ \begin{align} & \var((X^T C^{-1}X)^{-1} X^T C^{-1}V) \\[10pt] & = (X^T C^{-1}X)^{-1} X^T C^{-1}\Big(\var{V}\Big)\Big( (X^T C^{-1}X)^{-1} X^T C^{-1} \Big)^T. \tag{1} \end{align} $$ Then, recall that $(AB)^T$ (with $A$ to the left of $B$) is equal to $B^T A^T$ (with $A$ to the right of $B$). With $X^T C^{-1} X$, one cannot invert all three matrices and multiply in the opposite order, since $X$ is not a square matrix. But that matrix is symmetric, i.e. it is its own transpose. And $C$ is also symmetric, and so is $C^{-1}$. So we get: $$ \Big( (X^T C^{-1}X)^{-1} X^T C^{-1} \Big)^T = C^{-1}X(X^TC^{-1}X)^{-1}. $$ Then $(1)$ becomes $$ \begin{align} & (X^T C^{-1}X)^{-1} X^T C^{-1}\Big(\var{V}\Big) C^{-1}X(X^TC^{-1}X)^{-1} \\[10pt] & = (X^T C^{-1}X)^{-1} X^T C^{-1}\Big( C \Big) C^{-1}X(X^TC^{-1}X)^{-1} \\[10pt] & = (X^T C^{-1}X)^{-1} X^T C^{-1} X(X^TC^{-1}X)^{-1} \\[10pt] & = (X^T C^{-1}X)^{-1}. \end{align} $$
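A Monte Carlo sanity check of the identity (my own sketch, assuming NumPy; all names are illustrative): simulate $V=XA+R$ with residual covariance $C$, form the estimator repeatedly, and compare the sample covariance of the estimates with $(X^TC^{-1}X)^{-1}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 40, 3, 20000

X = rng.normal(size=(n, k))
L = np.eye(n) + 0.1 * rng.normal(size=(n, n))
C = L @ L.T                                   # a known residual covariance
A_true = np.array([1.0, -2.0, 0.5])

Ci = np.linalg.inv(C)
G = np.linalg.inv(X.T @ Ci @ X)               # claimed VAR(A_hat)
Lc = np.linalg.cholesky(C)

est = np.empty((trials, k))
for t in range(trials):
    V = X @ A_true + Lc @ rng.normal(size=n)  # residual with covariance C
    est[t] = G @ (X.T @ (Ci @ V))

print(np.cov(est, rowvar=False))              # approaches G as trials grows
print(G)
```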
H: finding the rational number which the continued fraction $[1;1,2,1,1,2,\ldots]$ represents I'd really love your help with finding the rational number which the continued fraction $[1;1,2,1,1,2,\ldots]$ represents. With the recursion for continued fractions $(p_0=a_0, q_0=1, p_{-1}=1, q_{-1}=0)$, $q_s=a_sq_{s-1}+q_{s-2}$, $p_s=a_sp_{s-1}+p_{s-2}$, I found $p_k=1,2,5,7,12,31$ and $q_k=1,1,3,4,7,18$ (anything special about these sequences? perhaps I made a mistake?), and I know that $r=\lim c_k=\lim\frac{p_k}{q_k}$, but I can't see anything special about $p_k, q_k$ or the relation between them. Any help? Thanks a lot! AI: We have that $$\Large x=1+\frac{1}{1+\frac{1}{2\,+\,\frac{1}{1\,+\,\frac{1}{1\,+\,\frac{1}{2\,+\,\cdots}}}}}=1+\frac{1}{1+\frac{1}{2+\frac{1}{x}}}$$ so that $$\frac{1}{x-1}=1+\frac{1}{2+\frac{1}{x}}$$ hence $$\frac{1}{\frac{1}{x-1}-1}=\frac{1}{\frac{1}{x-1}-\frac{x-1}{x-1}}=\frac{1}{\frac{2-x}{x-1}}=\frac{x-1}{2-x}=2+\frac{1}{x}$$ and thus $$x^2-x=2x(2-x)+(2-x)$$ $$x^2-x=-2x^2+4x+2-x$$ $$3x^2-4x-2=0$$ The roots of this equation are $$x=\frac{4\pm\sqrt{40}}{6}=\frac{4\pm 2\sqrt{10}}{6}$$ but we know it can't be $x=\frac{4-2\sqrt{10}}{6}$ since that number is negative, so we can conclude that $x=\frac{4+2\sqrt{10}}{6}$. Note that this number is not rational.
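Numerically (my own check): evaluating a long truncation of $[1;1,2,1,1,2,\ldots]$ from the bottom up converges to $\frac{2+\sqrt{10}}{3}\approx 1.720759$, in agreement with the (simplified) root $\frac{4+2\sqrt{10}}{6}$ found above.

```python
from fractions import Fraction
from math import sqrt

a = [1, 1, 2] * 10                    # partial quotients 1,1,2,1,1,2,...
v = Fraction(a[-1])
for q in reversed(a[:-1]):
    v = q + 1 / v
print(float(v), (2 + sqrt(10)) / 3)   # both print 1.7207592...
```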
H: Is there an analysis conjecture proven to be unprovable, or whose proof has been shown not to exist? Is there a conjecture in analysis that has been proven to be unprovable (independent of the usual axioms), or whose proof has been shown not to exist? And was such a result ever a milestone in math history? AI: There is a large number of statements in analysis that have been proved independent of the axioms of ZFC. For a partial list, see this. A fair number of them are questions that had already been asked, and worked on seriously, long before they were proved independent. Added: ZFC is Zermelo-Fraenkel set theory, with Axiom of Choice added. It is currently, and has been for quite a while, the standard "background theory" for most of mathematics.
H: Direct sum of Hilbert Space subspaces - Notation? This is a really basic question, sorry, I just need to make sure I have my understanding correct. Given an infinite dimensional Hilbert space $\mathcal{H}$ and two subspaces, also Hilbert spaces, $\mathcal{H}_1$ and $\mathcal{H}_2$. If one can write $\mathcal{H}=\mathcal{H}_1 \bigoplus \mathcal{H}_2$, this means that $\mathcal{H}$ is isomorphic to the direct sum, not 'equal', right? I'm just a little confused as the definition of the direct sum of two Hilbert spaces (see http://en.wikipedia.org/wiki/Hilbert_space#Direct_sums) does not look to have the same 'form' as $\mathcal{H}$, so it seems a little strange to say they're equal. I'm fairly sure, if they are isomorphic, the definition used for the direct sum isn't all that important, but I feel like I may be missing something. AI: The $\oplus$ sign can mean several things in several contexts, and I think one should always say what is meant. I usually indicate this by writing things like $H=H_1\oplus H_2$ (internal algebraic direct sum), $H=H_1\oplus H_2$ (external topological direct sum), $H=H_1\oplus H_2$ (orthogonal direct sum), etc. Anyway, I think your confusion is an algebraic one, and is about the difference between the 'internal direct sum' and 'external direct sum'. So let's forget about Hilbert spaces and just work with vector spaces. Correct me if I am wrong about this. For two vector spaces $A,B$ one can define the vector space $A\oplus B$ as $A\times B$ (set-product) with componentwise operations; this is the 'external direct sum'. If $A,B$ happen to be subspaces of a bigger vector space $V$, then to say that $V$ is the internal direct sum of $A,B$ means precisely that the map $A\oplus B\to V$ given by $(a,b)\mapsto a+b$ (where the domain is the external direct sum just constructed) is a bijection. Equivalently, this means $A,B$ intersect trivially (the injective part) and $A+B=V$ (the surjective part). In conclusion: if by $A\oplus B$ you mean the external direct sum (e.g. constructed as $A\times B$), then the $=$ sign in $V=A\oplus B$ (with $A,B$ subspaces of $V$) should really be an $\cong$ sign: set-theoretically there is no equality, since $V$ does not consist of pairs $(a,b)$.
H: Unital nonabelian Banach algebra where the only closed ideals are $\{0\}$ and $A$ This is a problem in exercise one of Murphy's book: Find an example of a nonabelian unital Banach algebra $A$, where the only closed ideals are $\{0\}$ and $A$. But does such an algebra exist at all? My argument is the following: Let $a$ be an arbitrary nonzero element in $A$; if $a$ is not invertible, then $a$ is contained in a maximal ideal, which is closed. However, there is no such ideal, thus every nonzero element must be invertible. Then Gelfand-Mazur says $A$ is the complex numbers and thus must be abelian. What is the problem with my argument? Thanks! AI: The problem is that not being invertible doesn't imply being contained in a maximal two-sided ideal if $A$ is noncommutative. For example, there are plenty of elements of $\mathcal{M}_n(\mathbb{C})$ that are not invertible but since this algebra is simple (which incidentally implies that it's already an answer to your question when $n \ge 2$) all nonzero elements generate the unit two-sided ideal. What is true is that not being left-invertible is equivalent to being contained in a maximal left ideal and not being right-invertible is equivalent to being contained in a maximal right ideal.