H: Best method to determine if a first order formula is logically valid? I know there isn't a "standard method" to determine if a formula is logically valid, but I would like to know if there is something you can always try first of all to determine it. Not all formulas look obviously valid like $\forall x P(x) \rightarrow \exists x P(x)$. This can be proven with natural deduction, if it's required. For example, I think $(\forall x P(x) \rightarrow \forall x Q(x)) \rightarrow \forall x (P(x) \rightarrow Q(x))$ is valid, and I am trying to prove it by contradiction, assuming there is some interpretation in which it is false, but I can't figure out how to do it. So, any advice? AI: First order logic is undecidable, so there is (as you note) no mechanical method for determining first order validity. So what's the next best thing to a mechanical method for trying to establish (in)validity? The trouble with looking for natural deduction proofs is that the introduction rules allow you to deduce more and more complex wffs, and thus don't stop you spinning off down blind alleys (even when there is a proof to be discovered). Better, then, to use a method without introduction rules, where as you grow a proof the length of wffs at least doesn't grow and ideally decreases. Two options spring to mind. One is downward-branching tableau proofs [as in Richard Jeffrey's lovely Formal Logic: Its Scope and Limits, or my Jeffrey-for-Dummies, officially titled An Introduction to Formal Logic]. The other option is certain sequent systems. The tableau system in particular is very student-friendly, and with a bit of practice, on "nice" examples you'll usually find a closed tableau if there is one to be found, and find an open tableau and be able to read off a valuation which falsifies the wff being tested if the wff is invalid. This is perhaps your best buy! For example, let's use a tableau to test $(\forall x P(x) \rightarrow \forall x Q(x)) \rightarrow \forall x (P(x) \rightarrow Q(x))$. We start by assuming that's false (i.e. its negation is true), which comes to assuming $$\forall x P(x) \rightarrow \forall x Q(x)$$ $$\neg\forall x (P(x) \rightarrow Q(x))$$ The latter is equivalent to $$\exists x\neg (P(x) \rightarrow Q(x))$$ So, if that's true there must be an object in the domain which satisfies the condition, which we can dub $a$, so $$\neg (P(a) \rightarrow Q(a))$$ whence $$P(a)$$ $$\neg Q(a)$$ Now look at the first line. We have to consider cases. We have either the negated antecedent or the consequent: $$\neg\forall x(Px)\quad\quad|\quad\quad \forall xQ(x)$$ On the right hand branch we immediately have a contradiction, however, because we can instantiate the universal $\forall xQ(x)$ with $a$ to get $Q(a)$. So look at the left branch. We can continue that $$\exists x\neg P(x)\quad\quad\quad\quad\quad\quad$$ $$\neg P(b)\quad\quad\quad\quad\quad\quad$$ where we have to use a new name to dub the witness for the existential. And now we are stymied. So the assumption that the original wff is false doesn't lead to contradiction. So it can't be valid. In fact, we can read off the open branch of the tableau the interpretation with just two elements in the domain -- call them $a$ and $b$ -- where $P(a)$, $\neg P(b)$, $\neg Q(a)$ and it doesn't matter for $Q(b)$. And that interpretation will falsify the test formula! [For a more careful exploration of this kind of thing, if it is unfamiliar to you, see my book.]
H: Let $|\langle u,v\rangle|=\|u\| \cdot \|v\|.$ How to show that $u,v$ are linearly dependent? Let $|\langle u,v\rangle|=\|u\|\cdot \|v\|.$ How to show that $u,v$ are linearly dependent? Without loss of generality let $u\ne0,v\ne0.$ Then, $\displaystyle\left\langle\frac{u}{\|u\|},\frac{v}{\|v\|}\right\rangle=1=\left\langle\frac{u}{\|u\|},\frac{u}{\|u\|}\right\rangle\\\implies\displaystyle\left\langle\frac{u}{\|u\|},\frac{u}{\|u\|}-\frac{v}{\|v\|}\right\rangle=0$ I don't know what to do next. Added: A thought that just occurred to me: $\displaystyle\left\langle\frac{u}{\|u\|},\frac{u}{\|u\|}-\frac{v}{\|v\|}\right\rangle=0=\left\langle \frac{v}{\|v\|},0\right\rangle\\\implies \displaystyle\left\langle\frac{u}{\|u\|}-\frac{v}{\|v\|},\frac{u}{\|u\|}-\frac{v}{\|v\|}\right\rangle=0\\\implies \dfrac{u}{\|u\|}-\dfrac{v}{\|v\|}=0$ Does it work? AI: Write $\lambda=\langle u,v\rangle/\Vert v\Vert^2$. Let $$ w=u-\lambda v. $$ Then $$ \begin{aligned} \langle w,w\rangle&=\langle u,u\rangle-\overline{\lambda}\langle u,v\rangle-\lambda\langle v, u\rangle+|\lambda|^2\langle v,v\rangle\\ &=\Vert u\Vert^2-2\frac{|\langle u,v\rangle|^2}{\Vert v\Vert^2}+\frac{|\langle u,v\rangle|^2}{\Vert v\Vert^4}\Vert v\Vert^2\\ &=\Vert u\Vert^2-2\frac{\Vert u\Vert^2\cdot\Vert v\Vert^2}{\Vert v\Vert^2}+ \frac{\Vert u\Vert^2\cdot\Vert v\Vert^2}{\Vert v\Vert^2}\\ &=\Vert u\Vert^2(1-2+1)=0. \end{aligned} $$ So $w=0$ and $u$ and $v$ must be dependent.
H: A little confusion about extensions $E(-,-)$ and $\mathrm{Ext}(-,-)$ If we want to calculate $E(\mathbb{Z}/p\mathbb{Z},\mathbb{Z})$, i.e. equivalence classes of short exact sequences $\mathbb{Z}\rightarrow E\rightarrow\mathbb{Z}/p\mathbb{Z}$, we have $\mathbb{Z}/p\mathbb{Z}\cong\mathrm{Ext}(\mathbb{Z}/p\mathbb{Z},\mathbb{Z})\cong E(\mathbb{Z}/p\mathbb{Z},\mathbb{Z})$. To get the explicit representative, we form the pushout as shown in the diagram: $$ \begin{array}{llllllllllll} \mathbb{Z}&\overset{\times p}{\rightarrow}&\mathbb{Z}&\rightarrow&\mathbb{Z}/p\mathbb{Z}\\ \downarrow\times a &&\downarrow &&{=}\\ \mathbb{Z}&\rightarrow&E&\rightarrow&\mathbb{Z}/p\mathbb{Z} \end{array} $$ The pushout is calculated as $\mathbb{Z}\oplus\mathbb{Z}/((a,p))$. When $(a,p)=1$, we have $ax+py=1$, so the invertible matrix $\begin{bmatrix}x & y\\-p & a\end{bmatrix}$ will take $(a,p)$ to $(1,0)$, which shows $E\cong\mathbb{Z}$. And $(1,0)$ is taken to $(x,-p)$, which is then equivalent to $-p$ in $E\cong\mathbb{Z}$, so the lower column is again a $\mathbb{Z}\overset{\times p}{\rightarrow}\mathbb{Z}\rightarrow\mathbb{Z}/p\mathbb{Z}$ whenever $(a,p)=1$. But for different $a=1,2,...,p-1$, the sequence should be different, as $\times a$ are different elements in $\mathrm{Ext}(\mathbb{Z}/p\mathbb{Z},\mathbb{Z})$, which is a contradiction. Surely I am wrong somewhere, but I just cannot find it out. Can anyone help me? Thanks. AI: They are actually different. $(0,1)$ is taken to $(-p,a)$. So they are actually $\mathbb{Z}\overset{\times p}{\rightarrow}\mathbb{Z}\overset{(\times a) \mod p}{\rightarrow}\mathbb{Z}/p\mathbb{Z}$. If we want an isomorphism: $$ \begin{array}{llllllllllll} \mathbb{Z}&\overset{\times p}{\rightarrow}&\mathbb{Z}&\overset{(\times a) \mod p}{\rightarrow}&\mathbb{Z}/p\mathbb{Z}\\ = &&\downarrow &&{=}\\ \mathbb{Z}&\overset{\times p}{\rightarrow}&\mathbb{Z}&\overset{(\times b) \mod p}{\rightarrow}&\mathbb{Z}/p\mathbb{Z} \end{array} $$ The unit of the upper and lower $\mathbb{Z}$ will be mapped to $a$ and $b$, respectively, which means $a\equiv b\mod p$.
H: Find the number of ways the books can be kept together. The number of ways that $5$ Spanish, $3$ English & $3$ German books can be arranged if the books of each language are to be kept together is ... My try: $5!+3!+3!$, which is the wrong answer. Where am I going wrong? AI: In enumeration problems like this, you should think of addition as "or" and multiplication as "and". With that in mind, $5! + 3! + 3!$ says "arrange the Spanish books or the English books or the German books", which is not appropriate here. We want "and" throughout, which gives $5!3!3!$ ways to arrange the books within each group. Finally, we have to arrange the order in which the groups appear, which can be done in $3!$ ways. All together, that gives $5!3!3!3!$ arrangements.
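A quick brute-force confirmation of the counting pattern (a Python sketch, not part of the original answer): it checks the "$k_1!\,k_2!\,k_3!\,3!$" formula exhaustively on a smaller instance, then evaluates it for $5,3,3$.

```python
from itertools import permutations
from math import factorial

# Small instance: 2 Spanish, 2 English, 1 German book, all distinguishable.
books = ["S1", "S2", "E1", "E2", "G1"]

def blocks_together(arrangement):
    # each language's books must occupy consecutive positions
    for lang in "SEG":
        pos = [i for i, b in enumerate(arrangement) if b.startswith(lang)]
        if pos and pos[-1] - pos[0] != len(pos) - 1:
            return False
    return True

brute = sum(blocks_together(p) for p in permutations(books))
formula = factorial(2) * factorial(2) * factorial(1) * factorial(3)
print(brute, formula)                   # both 24
print(factorial(5) * factorial(3)**3)   # 25920 for the 5, 3, 3 case above
```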
H: At what angle do $y=x^2$ and $y=\sqrt{x}$ intersect at $(1,1)$? Problem: at what angle do $y=x^2$ and $y=\sqrt{x}$ intersect at $(1,1)$? Solution: As far as I know, the angle of intersection of two curves is the angle of intersection of their tangent lines. Also, the coefficient of $x$ in the tangent line equals the tangent of the angle between the line and the $x$-axis. $$y=x^2 \rightarrow y'=2x=2 \rightarrow \tan a=2$$ $$ y=\sqrt{x} \rightarrow y'=\frac1{2\sqrt{x}}=\frac12 \rightarrow \tan b=\frac{1}{2}$$ EDIT: Hence, my angle $= \arctan(2) - \arctan(0.5)$. The new question: is there any way to simplify it? AI: The only identity $\tan\theta$ has to satisfy is $$\sec^2\theta=1+\tan^2\theta$$ and $\sec\theta$ is real for any real value of $\tan\theta$. As $\tan(A-B)=\frac{\tan A-\tan B}{1+\tan A\tan B}$, if the angle of intersection is $\theta$ then $$\theta=\arctan 2-\arctan \frac12=\arctan\left(\frac{2-\frac12}{1+2\cdot\frac12}\right)=\arctan \frac34$$
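A quick numerical check of the simplification (a small Python sketch, not part of the original answer):

```python
from math import atan, isclose

lhs = atan(2) - atan(0.5)
rhs = atan(3 / 4)
print(lhs, rhs, isclose(lhs, rhs))  # both ~0.6435 radians, True
```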
H: Probability that $n$ vectors drawn randomly from $\mathbb{R}^n$ are linearly independent Let's take $n$ vectors in $\mathbb{R}^n$ at random. What is the probability that these vectors are linearly independent? (i.e. they form a basis of $\mathbb{R}^n$) (of course the problem is equivalent to "taking a matrix at random from $M_{\mathbb{R}}(n,n)$, what is the probability that its determinant $\neq 0$") Don't know if this question is difficult to answer or not. Please share any information about it! :-) (the $n$ vectors are meant to have real entries, but I'm also interested in solutions over $\mathbb{N}$ or $\mathbb{Q}$ or whatever fields you like) AI: As others have pointed out the main problem is what "taking a vector at random" means. Probability theory requires that one specifies a certain probability measure on ${\mathbb R}^n$ before one can make any predictions about outcomes of experiments concerning chosen vectors. E.g., if it is totally unlikely, meaning: the probability is zero, that a vector with $x_n\ne 0$ is chosen, then the probability that $n$ vectors chosen independently are linearly independent is $\>=0$, since with probability $1$ they all lie in the plane $x_n=0$. A reasonable starting point could be installing a rotationally invariant probability measure. As the length of the $n$ chosen vectors does not affect their linear dependence or independence, this means that we are choosing $n$ independent vectors uniformly distributed on the sphere $S^{n-1}$. (This informal description has a precise mathematical meaning.) Under this hypothesis the probability that the $n$ chosen vectors $X_k$ are linearly independent is $=1$. Proof. The first vector $X_1$ is linearly independent with probability $1$, as $|X_1|=1$. Assume that $1< r\leq n$ and that the first $r-1$ vectors are linearly independent with probability $1$. Then with probability $1$ these $r-1$ vectors span a subspace $V$ of dimension $r-1$, which intersects $S^{n-1}$ in an $(r-2)$-dimensional "subsphere" $S_V^{r-2}$. This subsphere has $(n-1)$-dimensional measure $0$ on $S^{n-1}$. Therefore the probability that $X_r$ lies in this subsphere is zero. It follows that with probability $1$ the vectors $X_1$, $\ldots$, $X_{r-1}$, $X_r$ are linearly independent.
H: How to find an area? I have this question: A farmer plans to enclose a rectangular pasture adjacent to a river. The pasture must contain $320,000$ square meters in order to provide enough grass for the herd. What dimensions will require the least amount of fencing if no fencing is needed along the river? $$x = m$$ $$y = m$$ That's what I did; I'm not sure what I'm doing wrong: $$xy=320000$$ $$2(x+y)=P$$ $$x+\frac{320,000}{x}=\frac{P}{2}$$ $$1-\frac{320,000}{x^2}=0$$ $$x=400$$ $$y=400$$ My answer is wrong. I think $X$ needs to be larger than $Y$. AI: You set up the area constraint correctly, but since no fencing is needed along the river, the amount of fencing is $$2x + y = P$$ $$P = 2x+\dfrac{320,000}{x} = \frac{2x^2 + 320000}{x}$$ We want to minimize the perimeter needed, so we need to find the derivative of $P(x)$ and set that equal to zero, then solve for the zeros: the $x$ values that solve $P'(x) = 0$. Using the quotient rule: $$P'(x) = \dfrac{4x\cdot x - (2x^2 + 320000)}{x^2} $$ Simplify and set equal to zero: $$P'(x) = \dfrac{2x^2 - 320000}{x^2} = 0$$ $P'(x)$ will equal zero when the numerator is equal to zero. We certainly don't want a rectangle where $x = 0$, so we can affirm $x \neq 0 $ and hence the denominator will not be zero. So, we simply solve for $x$: $$2x^2 - 320000 = 0 \iff x^2 - 160000 = (x + 400)(x - 400) = 0$$ So $$x = 400, y = \dfrac{320000}{400} = 800.$$
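A sanity check on the optimum (a Python sketch, not from the original answer): with $y$ fixed by the area constraint, the fencing needed is $P(x)=2x+320000/x$, and $x=400$, $y=800$ should beat every other candidate.

```python
def area_y(x):
    return 320000 / x          # y determined by the area constraint xy = 320000

def fence(x):
    return 2 * x + area_y(x)   # two sides of length x plus one side of length y (no fence on the river side)

print(fence(400), area_y(400))   # 1600.0 metres of fence, y = 800.0
print(all(fence(400) <= fence(x) for x in range(1, 2001)))  # True: x = 400 is best among these
```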
H: Evaluating $\int^1_0 \frac2{\sqrt{2-x^2}} dx$ $$\int^1_0 \frac2{\sqrt{2-x^2}} dx$$ using substitution $x=\sqrt 2 \sin \theta$ $$\int^{\pi/4}_0 \frac{2\cos \theta d\theta}{\sqrt{2-2\sin^2 \theta}} = \int^{\pi/4}_0 \frac{2\cos\theta d\theta}{\sqrt2 \cos\theta} = \int^{\pi/4}_0 \frac{2d\theta}{\sqrt2} = \int^{\pi/4}_0 \frac{2\cdot\sqrt2 d\theta}{\sqrt2\cdot\sqrt2} = \int^{\pi/4}_0 \sqrt2d\theta = \sqrt2 \theta = \sqrt2 (\frac\pi4 - 0) = \sqrt2 \frac\pi4 $$ The problem is that the answer is $\frac\pi2$. Where did I make a mistake? UPDATE: using substitution $x=\sqrt 2 \sin\theta \rightarrow dx=\sqrt2\cos\theta \, d\theta$ $$\int^{\pi/4}_0 \frac{2\sqrt2\cos\theta \, d\theta}{\sqrt{2-2\sin^2\theta}} = \int^{\pi/4}_0 \frac{2\sqrt2\cos\theta \, d\theta}{\sqrt2 \cos\theta} = \int^{\pi/4}_0 \frac{2\sqrt2}{\sqrt2} \, d\theta = \int^{\pi/4}_0 2 \, d\theta = 2 \theta = 2 (\frac\pi4 - 0) = \frac\pi2 $$ AI: HINT: Putting $x=\sqrt2\sin\theta, dx=\sqrt2\cos\theta d\theta$ (you missed this $\sqrt2$) $$\int^1_0 \frac2{\sqrt{2-x^2}}dx=2\int_0^{\frac\pi4}\frac{\sqrt2\cos\theta d\theta}{\sqrt2\cos\theta}=2\int_0^{\frac\pi4}d\theta$$
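A quick numerical check of the value $\pi/2$ (a Python sketch, not part of the original answer; it assumes SciPy is available):

```python
from math import pi, sqrt
from scipy.integrate import quad

value, abserr = quad(lambda x: 2 / sqrt(2 - x**2), 0, 1)
print(value, pi / 2)   # both ~1.5707963...
```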
H: matrix representations of linear transformation I have a indexing problem about the matrix representation of linear transformation. Let $V$ be a $3$ dimensional vector space over a field $F$ and fix $(\mathbf{v_1},\mathbf{v_2},\mathbf{v_3})$ as a basis. Consider a linear transformation $T: V \rightarrow V$. Then we have $$T(\mathbf{v_1})=a_{11}\mathbf{v_1}+a_{21}\mathbf{v_2}+a_{31}\mathbf{v_3}$$ $$T(\mathbf{v_2})=a_{12}\mathbf{v_1}+a_{22}\mathbf{v_2}+a_{32}\mathbf{v_3}$$ $$T(\mathbf{v_3})=a_{13}\mathbf{v_1}+a_{23}\mathbf{v_2}+a_{33}\mathbf{v_3}$$ So that we can identify $T$ by the matrix $$\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\\ \end{pmatrix}$$ But then when I read several linear algebra book, it said if $T(\mathbf{v_i})=\sum_j a_{ij}\mathbf{v_j}$, then we can identify $T$ by the matrix $(a_{ij})$. My problem is: isn't the matrix is $(a_{ji})$ instead of $(a_{ij})$? Could someone please explain the subtle difference, thanks in advance. AI: Your book has a typo (as did I); it should be $$T(\mathbf{v}_i)=\sum_ja_{ji}\mathbf{v}_j.$$ Here is an example: in the basis $\{\mathbf{v}_1,\mathbf{v}_2,\mathbf{v}_3\}$, we know that $\mathbf{v}_1$ corresponds to the column vector $$\begin{pmatrix} 1\\ 0\\ 0\end{pmatrix}.$$ Applying the algorithm for matrix multiplication, $$\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{21}&a_{22}&a_{23}\\ a_{31}&a_{32}&a_{33}\\ \end{pmatrix}\begin{pmatrix} 1\\ 0\\ 0\end{pmatrix}=\begin{pmatrix} a_{11}\!\\ a_{21}\!\\ a_{31}\!\end{pmatrix}=a_{11}\mathbf{v}_1+a_{21}\mathbf{v}_2+a_{31}\mathbf{v}_3=\sum_{j=1}^3a_{j1}\mathbf{v}_j.$$ I had instinctively been thinking of the formula $$(AB)_{ij}=\sum_k A_{ik}B_{kj}$$ for multiplying two matrices together, where the index of the summation appears on the right in the part of the expression for $A$ - that is, $A_{i\hspace{0.02cm}\large\mathbf{k}}$. However, this is expressing the entries of $AB$; to express a column of $AB$ itself as a summation of basis vectors, the index variable would be the one on the left.
H: Probability of k triangles in a random graph Let $G_{n,p}, n\in \mathbb{N}, p\in(0,1)$ be the binomial random graph, i.e. a graph on $n$ vertices where an edge is in $G_{n,p}$ with probability $p$ and denote as $V$ its vertex set. Let $j\in \binom{V}{3}$ be a set of 3 vertices and denote as $\mathcal{E}_j$ the event that $j$ is a triangle. In terms of $p$, what is the the following probability? $$\mathbb{P}\left(\bigcap_{j=1}^k \mathcal{E}_j\right),$$ i.e. the probability that $k$ such sets of 3 vertices are all triangles. I have trouble counting the cases where edges are in several triangles... AI: If I understood correctly, you are given a vertex set, e.g. $V = \{1,2,3,4,5,6,7\}$. You pick $j_1 = \{1,2,4\}$ (for example), and $j_2 = \{2,1,3\}$. The edge set $\{12,14,24\}$ forms the first triangle, and the edge set $\{12,23,13\}$ forms the second triangle. The cardinality of the union of the two edge sets is $5$, and hence $P(\mathcal{E}_{j_1}\cap\mathcal{E}_{j_2}) = p^5$. Since you allow to pick $\mathcal{E}_j$s arbitrarily, I see no way but to calculate the edge sets, take their unions and then calculate the cardinality, raise $p$ to that cardinality.
H: Power series in complex analysis We derived from Cauchy's integral formula that a holomorphic function is locally given by a convergent power series. Now we have covered the identity theorem, and I wanted to know whether I can conclude from this that the power series of a function is uniquely determined and converges on the whole domain of the function (which should be assumed to be connected and open), and not just locally anymore? AI: Consider the function $f(z)=\frac{1}{z}$. It is analytic in $\mathbb C - \{0\}$. This function can't be expressed as a power series on its entire domain of definition, for the following reason. Suppose $f$ is given as a power series expansion at $z=0$. This is impossible, since such a power series can be extended to $z=0$, while the function $f(z)$ has a non-removable singularity (a pole) at $z=0$. So, suppose $f$ is given as a power series expansion about $z=z_0\ne 0$. The assumption is thus that the power series converges for every $w\ne 0$, and that it gives $f(w)$. But then the radius of convergence of the power series is $\infty $, and thus, again, the function $f(z)$ should be extendable to $z=0$, which is false.
H: Problem with the Analytic theorem The analytic theorem claims that: If $f(t)$ is a bounded and locally integrable function on $t \geq 0$, and $g(z) = \int_0^\infty f(t)e^{-zt}\,dt$ is analytic for $\Re(z) > 0$ and extends holomorphically to an analytic function on $\Re(z) \geq 0$, then $\int_0^\infty f(t) dt$ exists and equals $g(0)$. The proof starts with: Let $R>0$ be large and $C$ the boundary of the region $\{z \in \mathbb{C} : |z| \leq R, \ \Re(z) \geq -\delta \}$ where $\delta>0$ is small enough so that $g(z)$ is holomorphic in and on $C$. ... But isn't $g$ only assumed to be holomorphic on $\Re(z) \geq 0$? Why can $g$ be holomorphic even when $-\delta<\Re(z)<0$? Thank you AI: A function is defined to be holomorphic at a point if it is differentiable in some neighborhood of that point. From this it is clear that if $g$ is holomorphic at $z$, it is holomorphic in a neighborhood of $z$. Thus for each $z$ such that $\Re(z)=0$, we have some open ball $B_z$ centered at $z$ on which $g$ is holomorphic. Clearly together these balls cover the set $\{z\in\mathbb C:|z|\leq R,\Re(z)=0\}$, and since this is compact we have some finite subcover. Let $\delta$ be smaller than the smallest radius of any of these (finitely many) balls. Then $g$ is holomorphic on $\{z\in\mathbb C:|z|\leq R,\Re(z)\geq -\delta\}$.
H: Exercise of a basis in the product of two metric spaces Let $(E_1,d_1)$, $(E_2,d_2)$ be two metric spaces and $E=E_1\times E_2$ the product metric space. Consider the following metrics on $E$: \begin{eqnarray*} d^{(\infty)}(\mathbf{x},\mathbf{y}) & = & \max(d_1(x_1,y_1),d_2(x_2,y_2)) \\ d^{(1)}(\mathbf{x},\mathbf{y}) & = & d_1(x_1,y_1)+d_2(x_2,y_2) \\ d^{(2)}(\mathbf{x},\mathbf{y}) & = & \sqrt{d_1^2(x_1,y_1)+d_2^2(x_2,y_2)} \\ \end{eqnarray*} I have already shown that the topologies induced on $E$ by the three metrics are equal, which I did by showing that the metrics are equivalent. Now, if $\mathcal{O}_1$ is the collection of all open sets of $E_1$ and $\mathcal{O}_2$ is the collection of all open sets of $E_2$, is it true that the set $$\mathcal{G}=\{A_1\times A_2\subseteq E_1\times E_2:A_1\in \mathcal{O}_1,\,\,A_2\in \mathcal{O}_2\}$$ is a basis for the product topology? How can I prove this, or how can I refute it? I was given a suggestion: show that the topology induced by any of the metrics $d^{(\infty)}, d^{(1)},d^{(2)}$ is the same as the product topology. But I do not understand why proving the suggestion proves my exercise, and I also do not know how to prove the suggestion. Definition of product topology: the product topology is the collection of subsets of $E$ that are unions of products of the form $U_1\times U_2$ with $U_1\in \mathcal{O}_1,\,\,U_2\in \mathcal{O}_2$. AI: I agree with @tomasz: I think that you’re supposed to show that each of the three metrics generates the product topology. If you’ve already shown that the metrics are equivalent, it will be enough to show that one of them generates the product topology. I would use $d^{(\infty)}$, since for any $p=\langle x,y\rangle\in E$ and $\epsilon>0$ we have $$B_{d^{(\infty)}}(p,\epsilon)=\{\langle u,v\rangle\in E:d_1(x,u)<\epsilon\text{ and }d_2(y,v)<\epsilon\}=B_{d_1}(x,\epsilon)\times B_{d_2}(y,\epsilon)\;.$$ Added in response to comments: Suppose that $p=\langle x,y\rangle\in G=U_1\times U_2\in\mathscr{G}$. $U_1$ is open in $E_1$, so there is an $\epsilon_1>0$ such that $B_{d_1}(x,\epsilon_1)\subseteq U_1$. Similarly, there is an $\epsilon_2>0$ such that $B_{d_2}(y,\epsilon_2)\subseteq U_2$. Let $\epsilon=\min\{\epsilon_1,\epsilon_2\}$. What can you say about $B_{d^{(\infty)}}(p,\epsilon)$?
H: Quotient topology is homeomorphic to nonnegative reals I'm having some difficulty solving this: Consider the following equivalence relation on $\mathbb R^2$: $(x_1,x_2)\sim (y_1,y_2)$ if $x_1^2 + x_2^2 = y_1^2 + y_2^2$. Show that the quotient space $\mathbb R^2/\sim$ is homeomorphic to the set $\mathbb R_+ := \{x \in \mathbb{R} \mid x \geq 0\} \subset \mathbb{R}$, provided with the subspace topology. I tried to find some continuous bijective function (with continuous inverse) between those two spaces but had no success. AI: The intuition was given to you by Brian M. Scott: this quotient contracts the circles centered at $0$ to points. Here is how you can prove that these spaces are homeomorphic, formally. The mapping $\phi:(x,y)\longmapsto x^2+y^2$ is continuous from $\mathbb{R}^2$ to $\mathbb{R}_+$. Note that $\phi(x,y)=\phi(x',y')$ whenever $(x,y)\sim (x',y')$. By universal property of a quotient space, the map $\phi$ factors uniquely through the quotient into a continuous map $\widetilde{\phi}:\mathbb{R}^2/\sim\longrightarrow \mathbb{R}_+$. It is clear that $\widetilde{\phi}$ is bijective. In many similar cases, we can deduce that $\widetilde{\phi}$ is a homeomorphism by compactness of the domain. But it is not the case here. So we need a little more work. Note that the map $\theta:r\longmapsto (\sqrt{r},0)$ is continuous from $\mathbb{R}_+$ to $\mathbb{R}^2$. Composing $\theta$ with the quotient map $q:\mathbb{R}^2\longrightarrow \mathbb{R}^2/\sim$ yields $\widetilde{\theta}:\mathbb{R}_+\longrightarrow \mathbb{R}^2/\sim$ continuous. A routine verification shows that $\widetilde{\theta}$ is the inverse of $\widetilde{\phi}$, which is therefore a homeomorphism from $\mathbb{R}^2/\sim$ onto $\mathbb{R}_+$.
H: Visualizing the Area of a Parallelogram I've seen how to visualize the formula for getting the area of a parallelogram. The first picture shows 2 ways which give the same result of Area = base * height. http://tinypic.com/view.php?pic=2na1kr7&s=5 (link to first pic) However, the first time I thought about it before being show the actual formula, I thought it was Area = side * side. http://tinypic.com/view.php?pic=2rhpbvk&s=5 (link to second pic) The 2 sides of the parallelogram are 2 vectors, x and y. If you place copies of vector y length of x times along the vector x, and then you add all lengths of vector y's (which is the same as length of vector y * length of vector x), it seems like you also get the area of the parallelogram. I know that you'll end up with a different result since the height is always less than the length of the vector y (unless they were parallel) but the method in the second picture seems correct. I also haven't seen the second picture's method being taught online so I'm guessing it's actually incorrect. What I want to know is what makes it incorrect because it really seems like it's correct despite giving a different answer from the usual base * height formula. Thanks in advance for any help. AI: To visualise this, we must think of everything in 2 dimensions. We cannot use 1 dimensional constructions (lines) to make up the areas. Imagine a parallelogram and we want to find its area similar to the method you described here If you place copies of vector y length of x times along the vector x, and then you add all lengths of vector y's (which is the same as length of vector y * length of vector x), it seems like you also get the area of the parallelogram. Construct a very small rectangle, height $\delta$ and width the base of the parallelogram. It will have a bit of the left (or right) edge sticking out, but this is countered by the fact the right (or left) edge doesn't quite fit in. Now, to use this to find the area, imagine stacking all of these rectangle on top of one another each one shifted slightly to best fit inside the parallelogram. Note that the number of rectangles needed is actually related to the height of the parallelogram, not the length of the other edge and that number is $h/\delta$, where $h$ is the height of the parallelogram.
H: How much slower is a Turing Machine if you only give it one end of the tape to work with? Turing Machines start with the input string and tape head in the "middle" of a tape that extends infinitely in either direction. Suppose instead that the tape head starts at the "far left" of the tape: the tape extends infinitely to the right, but the tape head can never move further left than its starting position. This "one-sided Turing Machine" is clearly Turing complete, but I wonder if this new model ever has inferior time complexity to a regular Turing machine. Are there any languages that require asymptotically more time to decide on a one-sided Turing machine than a regular Turing machine? If so, what is the maximum asymptotic speedup required on a one-sided TM? AI: It's not clear to me whether you care about constant factors. The complexity can be worse by at most a factor of two. Given a one-ended Turing machine, simulate a standard turing machine by letting the odd positions be the ones on the right and the even positions the ones on the left. Then moving one position on the simulated machine corresponds to moving two on the one-ended machine, which is a factor of two worse. Crossing between left and right at the center is obviously cheap. I don't know if there are languages actually requiring a factor of two. Edit: I think I missed a bit there, since the string to be recognized is put on the tape all together, rather than spread out, which means you could potentially need an additional $4n$ steps to spread it out before starting the simulation. There may be a better general way, of course.
H: Roulette outcome probability I have a question about the probability of a certain Roulette outcome. In the American Roulette wheel with double zero pockets, what are the chances of rolling black and red alternately for a total of nine times consecutively (i.e. B,R,B,R,B,R,B,R,B)? And also how many rolls will it take so that the probability of (B,R,B,R,B,R,B,R,B) in that order will happen? AI: On any trial, the probability of black is $\frac{18}{38}$, as is the probability of red. This is because there are $38$ slots, of which $18$ are black and $18$ are red. The results of individual trials are independent, so you multiply the individual probabilities. There is some ambiguity in the problem. Is RBRBRBRBR rolling black and red alternately? If no, then our answer is $\left(\frac{18}{38}\right)^{9}$. If yes, then our answer is $2\left(\frac{18}{38}\right)^{9}$. Added: In a comment, it is mentioned that red first is fine too. Then the answer under yes is the right one. Numerically, it turns out to be approximately $0.0024$, so approximately $0.24\%$.
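The arithmetic, plus a small Monte Carlo cross-check (a Python sketch, not from the original answer), under the "yes" interpretation where either colour may start the alternation:

```python
import random

p_exact = 2 * (18 / 38) ** 9          # either colour may start the alternation
print(p_exact)                        # ~0.0024, matching the answer

wheel = "B" * 18 + "R" * 18 + "G" * 2  # 18 black, 18 red, 2 green pockets

trials = 200_000
hits = 0
for _ in range(trials):
    seq = "".join(random.choice(wheel) for _ in range(9))
    if seq in ("BRBRBRBRB", "RBRBRBRBR"):
        hits += 1
print(hits / trials)                  # should land near 0.0024
```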
H: What is an effective means to make divisibility tests a mathematical 'habit', particularly for algebra? Divisibility tests are a useful problem-solving technique, particularly for dealing with larger numbers (thousands etc) and algebraic problems. However, I have always found that many students will just reach for the calculator, many not even realising the simpler tests, such as anything divisible by 2, 5 or 10. What is an effective means to help 'train' these skills in students as a problem solving and 'thinking' skill? AI: The students are the smart ones. If the calculator gives the easiest way to get the answer, of course the best thing is to use the calculator. But what would happen if the problem is so designed that a calculator would be of little or no help? Calculators don't help much to show that $n(n+1)$ is even. That $a+10b+100c+1000d$ is divisible by $9$ if and only if $a+b+c+d$ is divisible by $9$. That $3^{3^{3^{3^{3^{3}}}}}-1$ is even. ... and these are still not clever exercises. Addendum: Because it is relevant, I would like to advocate against training students to mindlessly use some tricks, techniques, procedures. Students in their brute state are critical enough by nature. Pushing them to do some task for which there is not a clear purpose is what turns them into bad students. In mathematics we often focus too much on teaching the techniques, but what is more important is the purpose of those techniques. That is why you often see students that compulsively 'simplify' solutions, even partial solutions, or expand brackets for no reason. This applies to any technique. Give them exercises in which the purpose of a technique is clear. Being able to identify a purpose, not knowing lots of procedures, is what makes them able to solve new problems. Procedures and techniques can be googled most of the time.
H: Is $\int_0^\infty \left (\int_0^\infty f(k) k \sin kr \, \mathrm dk \right) \mathrm dr = \int_0^\infty f(k) \, \mathrm dk$ correct? I am a physicist, and as a physicist I have proved the following equality: $$ \int_0^\infty \left (\int_0^\infty f(k) k \sin kr \,dk \right) dr = \int_0^\infty f(k) dk, $$ where $f$ is a rapidly decaying function or even a Schwartz function. The method is the following: by interchanging the order of integration one obtains: \begin{align*} &\int_0^\infty \left (\int_0^\infty f(k) k \sin kr \,dk \right) dr = -\int_0^\infty f(k)\left (\int_0^\infty \partial_r\cos kr\, dr \right) dk = \\ & -\int_0^\infty f(k)\Big |\cos kr \Big |_{r=0}^{r=\infty} dk = \int_0^\infty f(k)\, dk - \lim_{r\to \infty} \int_0^\infty f(k)\cos kr \,dk. \end{align*} The limit is null due to a simple generalization of the Riemann-Lebesgue lemma. However I am not sure that the conditions for the Fubini theorem are satisfied and that interchanging the order of integration is allowed. Can somebody suggest me how the above result can be rigorously proved or disproved? * EDIT After an interaction with Patrick Da Silva I had this idea: set $$ \int_0^\infty \left (\int_0^\infty f(k) k \sin kr \,dk \right) dr =\lim_{R \to \infty} \int_0^R \left (\int_0^\infty f(k) k \sin kr \,dk \right) dr. $$ Now the integral with $R$ satisfies the conditions for Fubini, and the calculation proceeds as above. Is it correct? AI: The Fourier transform of a Schwartz functions is a Schwartz function. The key is that the inner integral in the LHS is the derivative of the Fourier transform of $f$ evaluated at $r$, up to a minus sign. Hence the LHS is $-(\lim_{r\to \infty}\widehat f(r)-\widehat f(0))$, and the fast decay of the Fourier transform gives that this is equal to $\widehat f(0)$. Notice that we assumed WLOG that the support of $f$ is contained in $[0,+\infty)$.
H: Can you explain to me the statement of this problem? Let $n$ a natural number and $A=(a_{ij})$, where $a_{ij}=\left(\begin{array}{c} i+j \\ i \end{array}\right)$, for $0 \leq i,j < n $. Prove that $A$ has an inverse matrix and that all the entries of $A^{-1}$ are integers. What I don't understand is the definition of $a_{ij}$, I was thinking that it is just a column vector but that would mean that $A$ is a $(2n) \times n$ matrix, so how could it be invertible if it's not a square matrix? AI: The number $\dbinom{i+j}{j}$ is a binomial coefficient. It is the number of ways to choose $j$ objects from $i+j$ objects. One expression for it is $\dfrac{(i+j)!}{i!j!}$.
H: Find the four elements of $M(S)=\{\pi, \rho, \sigma, \theta\}$ Question from Marcel Finan's A Semester Course in Basic Abstract Algebra. Consider the set $S=\{a,b\}$. Find the four elements of $$M(S)=\{\pi,\rho,\sigma,\theta\},$$ where $M(S)$ is the set of mappings from $S$ to $S$. I'm just confused. So these mappings are from S to S but S only has 2 elements. AI: Given $\;S = \{a, b\},$ Each mapping $m: S \to S$ (where $m \in M(S)$) must send $a$ to either $a$ or $b$ and must send $b$ to $a$ or to $b$: So we have $2⋅2=4$ mappings which correspond to the four different mappings $$M(S)=\{\pi,\rho,\sigma,\theta\},$$ where $M(S)$ is the set of mappings from $S = \{a, b\}$ to $S = \{a, b\}$. So we can arbitrarily assign each of the four distinct mappings a distinct "name": $$ (\pi): \quad a \to a,\; b\to a \\ (\rho): \quad a\to a,\; b\to b \\ (\sigma): \quad a\to b,\; b\to a \\ (\theta): \quad a\to b,\; b\to b $$
H: Calculate $\sum\limits_{n=1}^{\infty}na(1-a)^{n-1}$, where $a \in (0,1)$. Calculate $a\displaystyle \sum_{n=1}^{\infty}n(1-a)^{n-1}$ where $a \in (0,1)$. AI: For fun (and this is probably not the solution you are expected to give) we give a mean proof. A certain coin has probability $a$ of landing heads, and probability $1-a$ of landing tails. The coin is tossed repeatedly. Let random variable $X$ be the number of tosses until we get the first head. The probability that $X=1$ is $a$, the probability that $X=2$ is $(1-a)a$, the probability that $X=3$ is $(1-a)^2a$, and so on. It follows that the infinite sum we are asked to evaluate is $E(X)$, the mean of $X$. Let us calculate this mean another way. Let the mean be $\mu$. Toss the coin. With probability $a$, we get a head on the first toss, and $X=1$. With probability $1-a$ we get a tail on the first toss, and we have wasted a toss. The mean number of additional tosses needed in that case is $\mu$. Thus $$\mu=(a)(1)+(1-a)(1+\mu).$$ Solve this linear equation for $\mu$. It turns out that $\mu=\dfrac{1}{a}$.
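A quick numerical check of the value $1/a$ (a Python sketch, not part of the original answer):

```python
for a in (0.2, 0.5, 0.9):
    # partial sum with enough terms for the tail to be negligible
    partial = sum(n * a * (1 - a) ** (n - 1) for n in range(1, 5000))
    print(a, partial, 1 / a)   # partial sums agree with 1/a
```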
H: How do I prove this map is a two-sided inverse I'm trying to solve this exercise: I almost solved it, to prove the bijectivity, we have to show that $\theta^{-1}=f^{-1}$. Since $f$ is an epimorphism, we have $\theta(\theta^{-1}(G))=\theta(f^{-1}(G))=f(f^{-1}(G))=G$, for any $G\in S_{M'}$. My problem is to prove that $\theta^{-1}$ is a left inverse of $\theta$, I need help in this part. Thanks a lot. AI: Well, $\theta^{-1}(\theta(G)) = f^{-1}(f(G))$ by the definition of $\theta$ and $\theta^{-1}$. Now, $f$ is a homomorphism, and for every homomorphism and any set $T \subset M$, one has $$f^{-1}(f(T)) = \{m \in M \colon \bigl(\exists t \in T\bigr)(f(t) = f(m))\} = \{m \in M \colon \bigl(\exists t \in T\bigr)(m-t \in \ker f)\} = T + \ker f$$ By assumption, $G$ is a submodule with $\ker f \subset G$, so $G + \ker f = G$.
H: I need help solving an ODE by converting it to a system of linear differential equations $y'''+4y''+5y'+2y=10\cos(t)$ It is subject to the conditions $y''(0)=3, y'(0)=0, y(0)=0$. OK, I'm having a very hard time solving this because if one is going to solve it using a system of linear differential equations, don't we need a forcing term? I have solved for the Yc which is $Y_c=c_1v_1e^{-t}+c_2(v_1+v_2t)e^{-t}+c_3v_3e^{-2t} $ and my $\lambda = -1,-1,-2$ and my vectors are $v_1=[1,-1,1]$, $v_2=[1,0,-1]$, $v_3=[1,-2,4]$, but for the Yp I don't know how to solve without forcing terms. Am I doing something wrong? Help would be much appreciated. Thank You! AI: I will guide you through hints and see if you can fill in the blanks using two methods and recommend a third. Method 1 (Complementary and Particular Solution) For the complementary solution, we have: $m^3 + 4m^2 + 5m + 2 = 0 \rightarrow m_1 = -1, m_2 = -1, m_3 = -2$ This gives us a complementary solution (when using the ICs) of: $$y_c = 3 e^{-t} (t-1) + 3 e^{-2 t}$$ To find the particular solution, we guess at a solution in the form of: $$y_p = a \cos t + b \sin t$$ Differentiate and substitute into ODE and solve for constants. You should end up with: $$y(t) = e^{-2 t} \left(-2 e^t (t-1)+2 e^{2 t} \sin t-1\right)-\cos t $$ Method 2 (Matrix DEQ) We need to rewrite the system as a set of first order equations, so we let $x_1=y$ and have: $x_1' = y' = x_2$ $x_2' = y'' = x_3$ $x_3' = y''' = -4y'' - 5y' -2y + 10 \cos t = -4x_3 - 5x_2 -2x_1 + 10 \cos t$ Our initial conditions are now expressed as $x_1(0) = 0, x_2(0) = 0, x_3(0) = 3$. Please make a note that we have time $t_0 = 0$ as we need that for later. Now, we have a system of three equations and we can rewrite it using matrix equations as: $$x'(t) = Ax(t) + F(t), \text{with IC vector}~~ C$$ To solve this matrix equation with forcing function we have: $$\tag 1 \displaystyle x(t) = e^{At}C + \int_{t_0}^{t} e^{A(t-s)}F(s)~ds$$ So, we have: $$\tag 2 x'(t) = \begin{bmatrix}x_1'(t)\\x_2'(t)\\x_3'(t) \end{bmatrix} = \begin{bmatrix}0 & 1 & 0\\0 & 0 & 1\\ -2 & -5 & -4\end{bmatrix}\begin{bmatrix}x_1(t)\\x_2(t)\\x_3(t) \end{bmatrix} + \begin{bmatrix}0\\0\\ 10\cos t \end{bmatrix}, C = \begin{bmatrix}0\\0\\3 \end{bmatrix}, t_0 = 0$$ We have a generalized eigenvector for $v_3$ due to the eigenvalue with multiplicity of two. We have eigenvalues/eigenvectors as (I changed the order you used, but it does not matter): $\lambda_1 = -2, ~v_1 = (1, -2, 4)$ $\lambda_2 = -1, ~v_2 = (1, -1, 1)$ $\lambda_3 = -1, ~v_3 = (2, -1, 0)$ Let me kick start you, to solve $(2)$, from $(1)$, we get: $$e^{At}C = \begin{bmatrix}3 e^{-t} (t-1)+3 e^{-2 t}\\-3 e^{-t} (t-2)-6 e^{-2 t}\\3 e^{-t} (t-3)+12 e^{-2 t} \end{bmatrix}$$ Next do the integral (it is not bad), so see if you can do the others from this since you know the result. Method 3: Variation of Parameters w/Green's Function This method is very useful because it makes it very easy to change the forcing function with little work. Give it a go if you are bored!
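A numerical cross-check of the closed form quoted in Method 1 (a Python sketch, not part of the original answer; it assumes SciPy is available):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y''' + 4y'' + 5y' + 2y = 10 cos t rewritten as a first-order system, as in Method 2
def rhs(t, x):
    x1, x2, x3 = x
    return [x2, x3, -2 * x1 - 5 * x2 - 4 * x3 + 10 * np.cos(t)]

sol = solve_ivp(rhs, (0, 10), [0, 0, 3], dense_output=True, rtol=1e-10, atol=1e-12)

def y_closed(t):
    # closed-form solution stated in the answer above
    return np.exp(-2 * t) * (-2 * np.exp(t) * (t - 1) + 2 * np.exp(2 * t) * np.sin(t) - 1) - np.cos(t)

ts = np.linspace(0, 10, 101)
print(np.max(np.abs(sol.sol(ts)[0] - y_closed(ts))))   # maximum discrepancy is tiny (numerical error only)
```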
H: Proving that $\Pr[|X| > T\sqrt{n}/2] \geq \Pr[|X-\mathbb EX| < T\sqrt{n}/2]$ I am reading a paper Revealing information while preserving privacy and I am stuck in a step in the proof of Lemma 4. I'll write the relevant details below so you do not need to extract them from the paper. Let $x,d \in [0,1]^n$ be constant vectors. Let $X_1,\dots, X_n$ be independent random variables such that $X_i$ takes value $x_i-d_i$ with probability $1/2$ and $0$ with probability $1/2$, and let $X=\sum_i X_i$. Let $T$ be a positive constant, and assume $\mathbb EX \geq T\sqrt{n}$. The step I am stuck is: $$ \Pr[|X| > T\sqrt{n}/2] \geqslant \Pr[|X-\mathbb EX| < T\sqrt{n}/2] $$ This is what I have so far: $$ \Pr\left[ \lvert X \rvert > T\sqrt{n}/2 \right] = \Pr\left[ T\sqrt{n}-\lvert X \rvert < T\sqrt{n}/2 \right] \\ \geqslant \Pr\left[ \mathbb EX-\lvert X \rvert < T\sqrt{n}/2 \right] \\ \geqslant \Pr\left[ \rvert \lvert X \rvert-\mathbb EX \rvert < T\sqrt{n}/2 \right] $$ I am not sure how to proceed after that. AI: $$\Pr[\vert X\vert>T\sqrt{n}/2]=\Pr[\vert X - \mathbb{E}X +\mathbb{E}X\vert>T\sqrt{n}/2]\\ \geq \Pr[-\vert X - \mathbb{E}X\vert+\vert\mathbb{E}X\vert>T\sqrt{n}/2]\\ = \Pr[\vert X - \mathbb{E}X\vert<\vert\mathbb{E}X\vert-T\sqrt{n}/2]\\ \geq \Pr[\vert X - \mathbb{E}X\vert<T\sqrt{n}-T\sqrt{n}/2=T\sqrt{n}/2]$$
H: For $a_n$ alternating and $b_n \to c \neq 0$, don't we have $\sum a_n b_n$ converges iff $\sum a_n$ converges? I'm reading the following set of notes on Taylor series and big O-notation, written by a professor at Columbia: http://www.math.columbia.edu/~nironi/taylor2.pdf. He repeatedly refers to what he calls "limit comparison", by which he means the theorem that for $a_n, b_n$ sequences of positive real numbers such that $b_n \to C>0$, we have that $\sum a_n <\infty$ iff $\sum a_n b_n < \infty$. On page 13, he is manipulating a series using O notation, and he ends up with $$\sum\limits_{n=1}^{\infty}(-1)^n\left(\frac{1}{n}\right)\frac{\frac{7}{12}+O(1/n^2)}{-\frac{1}{6}+O(1/n^2)},$$at which point he says "At this point we might be tempted to use limit comparison and conclude that the series is convergent; but limit comparison cannot be applied to an alternating series. Instead of using limit comparison we try to separate the part of the series that converges but not absolutely from the part that converges absolutely." Now, in reference to the theorem he calls "limit comparison", can't we strengthen this theorem to say that if $a_n$ and $b_n$ are sequences of reals, and $b_n \to C \neq 0$, then $\sum a_n <\infty \iff \sum a_n b_n < \infty$? If not, what is a relevant counter-example? And if so, can't we use this to conclude that the above series converges? My second question is that I do not understand the algebraic manipulation he does directly after the sentence quoted above. I suppose he is trying to "separate the part of the series that converges but not absolutely from the part that converges absolutely", but I don't know what this means, or what he is doing. This is all on page 13. Thanks for your help. AI: So here's all that's going on with that series. Remember that $O(1/n^2)$ refers to a term that is bounded by a multiple of $1/n^2$. Recall that $\dfrac1{1+x} = 1-x+x^2 + \dots$ for $|x|<1$. In particular, $\dfrac1{1+O(1/n^2)} = 1+ O(1/n^2)$. So \begin{align*} \frac{\frac7{12} + O(1/n^2)}{-\frac16+O(1/n^2)} &= \frac{\frac7{12} + O(1/n^2)}{-\frac16\big(1+O(1/n^2)\big)} = \big(-\tfrac72 + O(1/n^2)\big)\big(1+O(1/n^2)\big) \\ &= -\tfrac72 + O(1/n^2)\,, \end{align*} the last step following by distributing. This means that his original series is of the form $$\sum \left((-1)^{n+1}\frac72\cdot\frac 1n + O(1/n^3)\right)\,.$$ This is a sum of two convergent series, hence convergent. The first, however, is conditionally convergent, whereas the second is absolutely convergent. This means that the entire series is conditionally convergent. (He comments that you could see this directly by removing the $(-1)^n$ and then using limit comparison with the harmonic series.)
H: Permutation with Repetition How many different four-letter words can be formed from the word "ENHANCEMENT" if the first and last letters are A and T respectively? I am confused. Do we need to include A and T in the 2 letters? Possible positions: _ _ with repetition, so 11 * 11 [do we need to eliminate E-3 times, N-3 times...] $^{11}P_2$ or $^9P_2$? I don't get the concept. [Beginner] Can someone help me understand. AI: So we really only need to count the $2$-letter words that we will stick in the middle, between A and T. Apart from A and T, which are not available, we have, I think, $3$ E's, $3$ N's, and $1$ each of H, C, and M. It is best I think to forget temporarily about ${}_nP_k$ and ${}_nC_k$, and just count. We have $5$ different letters. First count the $2$-letter words that have distinct letters. The first letter can be picked in $5$ ways, and for each way the second can be picked in $4$ ways, for a total of $20$. Then there are the $2$-letter words EE and NN. That's it, a total of $22$. The $(5)(4)$ that we got could have been written as ${}_5P_2$. However, it is best to do things concretely, particularly when numbers are small. Then you retain control over what's going on.
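A brute-force confirmation of the count of $22$ (a Python sketch, not from the original answer):

```python
from itertools import permutations

# letters of ENHANCEMENT left over once the single A and single T are used up
middle_pool = "EEENNNHCM"
middle_words = {"".join(p) for p in permutations(middle_pool, 2)}
print(len(middle_words))                            # 22 distinct middles
print(len({"A" + w + "T" for w in middle_words}))   # 22 four-letter words of the form A _ _ T
```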
H: Manipulating integral with u substitution I have one of those stupid homework questions that is obscure and not examples given. $f$ is a continuous function and these values are known: $$\int_0^1 f(x) dx = 5$$ $$\int_{-1}^1 f(x)dx = 3$$ $$\int_0^2 f(x)dx = 8$$ $$\int_0^4 f(x)dx = 11$$ Evaluate: $$\int_2^3 x f(8-x^2) dx$$ I know that I should use u substitution because that is what the worksheet says. Ok no thinking about it, just do it. That is what math is all about. $u = 8-x^2$ $du = -2x dx$ $dx = \frac{-1}{2x}du$ $$\frac{-1}{2} \int_4^{-1} f(u)du$$ $$\frac{1}{2} \int_{-1}^4 f(u)du$$ What good does this do me? I am stuck, I have $f(u)$ not $f(x)$ and I have done nothing because I can reintroduce $x$ but then I get what I just had. AI: You are doing fine, but I will compute separately. Let $u=8-x^2$. Then $-2x\,dx=du$, so $x\,dx=-\frac{1}{2}\,du$. Substitute. We get $$\int_{x=2}^3 xf(8-x^2)\,dx=\int_{u=4}^{-1} -\frac{1}{2}f(u)\,du$$ The integral on the right is running the wrong way. We can change the order by changing the sign, and find that we want $$\frac{1}{2}\int_{u=-1}^4 f(u)\,du.$$ Note that the variable of integration in a definite integral is a dummy variable, we can replace it by any other letter. The letter $x$ sounds good, we want $$\frac{1}{2}\int_{-1}^4 f(x)\,dx.$$ From the first two items of information we were given, we can find the integral from $-1$ to $0$. For $$\int_{-1}^1 f(x)\,dx=\int_{-1}^0 f(x)\,dx+\int_0^1 f(x)\,dx.$$ We know two of these items, so we know the third, and find that $$\int_{-1}^0f(x)\,dx=3-5=-2.$$ We have also been told the integral from $0$ to $4$, plus a third useless fact. So now we can finish.
H: Prove that $(g \circ f)^* = f^* \circ g^*$ Assume $g: F \mapsto G$ is a linear map Prove that $(g \circ f)^* = f^* \circ g^*$ My solution $(g \circ f)^* = g^* \circ f^*$ by the properties of associativity in linear maps. If we assume that $g^* \circ f^* = f^* \circ g^* $ then $g$ and $f$ are inverse functions of each other. By the properties of linear maps the only way that $g^* \circ f^* = f^* \circ g^*$ is if they are inverse functions. therefore $(g \circ f)^* = f^* \circ g^*$ is there anything wildly wrong with my reasoning? AI: Following anon's interpretation, $(g\circ f)^\star$ is a map from $G^\star$, the dual space to $G$, to $F^\star$. We have $(g\circ f)^\star (\phi)=\phi\circ (g\circ f)$. By associativity of composition, this equals $(\phi \circ g)\circ f$, which equals $f^\star(\phi \circ g)=f^\star(g^\star(\phi))=(f^\star \circ g^\star)(\phi)$. Hence we have proved $(g\circ f)^\star=f^\star \circ g^\star$ on every element of the domain; hence the functions are equal.
H: Is there a shorter way to prove this? Let $n$ a natural number and $A=(a_{ij})$, where $a_{ij}=\left(\begin{array}{c}i+j \\i\end{array} \right)$, for $0≤i,j<n$. Prove that A has an inverse matrix and that all the entries of $A^{−1}$ are integers. I tried to prove this like this: (but I'm not sure if it's correct or formal enough) Also if you could give me a hint to prove the second aasertion it would be great. Thank you. AI: By the rules of Pascal's Triangle, $$ \binom{i+1+j}{j}-\binom{i+j}{j}=\binom{i+j}{j-1} $$ which means that row $i+1$ minus row $i$ gives row $i+1$ shifted to the right. $$ a_{i+1,j}-a_{i,j}=a_{i+1,j-1} $$ This can be done to shift row $n$ to the right, then shift row $n-1$, up to row $2$. We can repeat the process to shift rows $n$ through $3$ to the right. We can continue this until we have all the $1$s on the diagonal and $0$s in the lower left triangle. Subtracting one row from another does not change the determinant, so the original determinant was $1$. Using Cramer's Rule, we get that the inverse is an integer matrix. Subtract row $3$ from row $4$: $$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ \binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\ \binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\ \binom{3}{0}&\binom{4}{1}&\binom{5}{2}&\binom{6}{3}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ \binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\ \binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} $$ Subtract row $2$ from row $3$: $$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ \binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\ \binom{2}{0}&\binom{3}{1}&\binom{4}{2}&\binom{5}{3}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ \binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} $$ Subtract row $1$ from row $2$: $$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ \binom{1}{0}&\binom{2}{1}&\binom{3}{2}&\binom{4}{3}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} $$ Subtract row $3$ from row $4$: $$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&\binom{3}{0}&\binom{4}{1}&\binom{5}{2}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&0&\binom{3}{0}&\binom{4}{1}\\ \end{bmatrix} $$ Subtract row $2$ from row $3$: $$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&\binom{2}{0}&\binom{3}{1}&\binom{4}{2}\\ 0&0&\binom{3}{0}&\binom{4}{1}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&0&\binom{2}{0}&\binom{3}{1}\\ 0&0&\binom{3}{0}&\binom{4}{1}\\ \end{bmatrix} $$ Subtract row $3$ from row $4$: 
$$\small \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&0&\binom{2}{0}&\binom{3}{1}\\ 0&0&\binom{3}{0}&\binom{4}{1}\\ \end{bmatrix} \rightarrow \begin{bmatrix} \binom{0}{0}&\binom{1}{1}&\binom{2}{2}&\binom{3}{3}\\ 0&\binom{1}{0}&\binom{2}{1}&\binom{3}{2}\\ 0&0&\binom{2}{0}&\binom{3}{1}\\ 0&0&0&\binom{3}{0}\\ \end{bmatrix} $$ The determinant is $\binom{0}{0}\binom{1}{0}\binom{2}{0}\binom{3}{0}=1$.
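A quick computational check of both claims, determinant $1$ and an integer inverse (a Python sketch using SymPy for exact arithmetic; not part of the original answer):

```python
from math import comb
from sympy import Matrix

n = 5
A = Matrix(n, n, lambda i, j: comb(i + j, i))   # a_ij = C(i+j, i), 0 <= i, j < n
print(A.det())                                  # 1
print(A.inv())                                  # every entry is an integer
print(all(x == int(x) for x in A.inv()))        # True
```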
H: $x^4-3x^3-9x^2+2=0$, why does WolframAlpha give complex solutions when they are real? Although the graph of $y=x^4-3x^3-9x^2+2$ crosses the $x$-axis $4$ times (this is shown in the graph), WolframAlpha gives me complex solutions. Why does this happen? Thanks. AI: Algorithms for computing explicit roots of general polynomials of degree 3 or 4 require complex numbers. Even for a polynomial whose roots are all real, these complex numbers cancel out in the end. However, due to numerical precision issues, evaluating these computations on any finite precision computer will leave an error. The complex parts in the answers are on the order of the machine epsilon (around $2.22 \times 10^{-16}$ for double precision), so that's basically what's happening here.
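A numerical illustration (a Python sketch, not part of the original answer): a floating-point root finder may report vanishingly small imaginary parts for the four real roots, and anything on the order of $10^{-16}$ should be read as zero.

```python
import numpy as np

coeffs = [1, -3, -9, 0, 2]            # x^4 - 3x^3 - 9x^2 + 0x + 2
roots = np.roots(coeffs)
for r in roots:
    print(r, abs(r.imag) < 1e-10)     # imaginary parts are negligible (possibly exactly 0)
print(np.polyval(coeffs, roots.real)) # residuals near 0 when evaluated at the real parts
```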
H: Repetition in piece-wise function In my pre-calculus course lesson, I have this word problem: Amy's electric bill can be represented by the piecewise function: $$\begin{cases} 8.25+0.0705x, &x≤400 \\ 36.45+0.0605x, &x>400 \end{cases}$$ where $x$ is the number of kilowatt hours used. Use this function to determine the cost of her electric bill if she used $825$ kilowatt hours. Round your answer to the nearest cent. If I calculate the first line, "Amy's" bill is $\$36.45$ for the first 400 kilowatt hours, as we see. In the second line, it takes 36.45 and adds it to the reduced rate, right? Makes sense, except that the $0.0605x$ rate is applied to the entire bill of 825 hours, not just the remaining 425 hours? Why does the piece-wise function apply the reduced rate to the entire bill instead of just the remaining hours? Does this mean that I'm getting billed twice for some of my hours from my electric company (assuming this is a real-world accurate function)? :P AI: You are right that this is a very strange function. The cost for $400$ kilowatt hours is $\$36.45$, while the cost for $401$ kilowatt hours is $\$60.71$. Probably the author intended to write a function that does the following: $\$8.25$ service charge $\$0.0705$ per hour for the first $400$ kilowatt hours $\$0.0605$ per hour for kilowatt hours beyond $400$ The first piece of the function is correct, and the slope of the second piece is correct. As you point out, however, you are being charged again for the first $400$ kilowatt hours, which we need to subtract off somehow. We could fix this error by solving $$ b + 0.0605 \cdot 400 = 36.45, $$ since any reasonable function should give us the same cost at $400$ hours in both pieces. We get $b = 12.25$, which means the second piece of the function should have been $$ 12.25 + 0.0605x \text{ if } x > 400. $$
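Here is the arithmetic for $825$ kilowatt hours under both readings (a Python sketch, not part of the original answer), using the repaired second piece $12.25+0.0605x$ derived above alongside the function as printed:

```python
def bill_as_printed(x):
    return 8.25 + 0.0705 * x if x <= 400 else 36.45 + 0.0605 * x

def bill_repaired(x):
    # second piece fixed so that both pieces agree at x = 400
    return 8.25 + 0.0705 * x if x <= 400 else 12.25 + 0.0605 * x

print(round(bill_as_printed(825), 2))   # 86.36 (the printed function charges the first 400 hours twice)
print(round(bill_repaired(825), 2))     # 62.16
print(round(bill_repaired(400), 2), round(bill_as_printed(400), 2))  # both 36.45 at the breakpoint
```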
H: integration of differential equation for wind profile I'm trying to calculate the vertical wind speed gradient with some equations and am having trouble with the integration. From the book I'm using, the wind profile is calculated according to $$\frac{kz}{u_*}\frac{\partial u}{\partial z}=\phi_m(\zeta),$$ where $k$ is the von Karman constant, $z$ is height, $u_*$ is the air friction velocity, $\partial u/\partial z$ is the change in wind speed with change in height, i.e. the wind profile, and $\phi_m(\zeta)$ is an atmospheric stability function, where $$\zeta=\frac{z}{L}$$ and $L$ is the Monin-Obukhov length scale, which is just an expression of atmospheric stability. The book states that this equation can be integrated between $z_1$ and $z_2$, i.e. two different heights, to obtain the wind gradient as $$u_2-u_1=\frac{u_*}{k}\left[\ln\left(\frac{z_2}{z_1}\right)-\psi_m\left(\frac{z_2}{L}\right)+\psi_m\left(\frac{z_1}{L}\right)\right]$$ where $$\psi_m(\zeta)=\int_{z_0/L}^{\zeta}\frac{[1-\phi_m(x)]}{x}\text{d}x$$ I'm finding it really hard to make the link between these equations and come up with a step-by-step solution for the wind profile. Could anyone explain what the author has done and what the step-by-step process should look like? I haven't posted this on the physics Stack Exchange since it is a problem I am having with the mathematical procedure, not with understanding the physics. AI: The equation for the wind profile should be $$\frac{kz}{u_*}\frac{\partial \bar{U}}{\partial z}=\phi_m(\zeta),$$ where $\bar{U}$ is the mean wind-speed profile. The trick will be writing $\phi_m=1-(1-\phi_m)$. Thus, dividing by $z$ and integrating gives: $$\frac{k}{u_*}\left.\bar{U}\right|_{z_1}^{z_2}=\int_{z_1}^{z_2}\frac{\phi_m(\zeta)}{z}dz=\int_{z_1}^{z_2}\frac{1-(1-\phi_m(\zeta))}{z}dz=\int_{z_1}^{z_2}\frac{dz}{z}-\int_{z_1}^{z_2}\frac{(1-\phi_m(\zeta))}{z}dz$$ so upon substituting $z=x L$ we get: $$\int_{z_1}^{z_2}\frac{(1-\phi_m(z/L))}{z}dz=\int_{z_1/L}^{z_2/L}\frac{1-\phi_m(x)}{xL}Ldx=\psi_m(z_2/L)-\psi_m(z_1/L)$$ where you now use the correct definition of the momentum influence function: $$\psi_m(\zeta):=\int_{z_{0,m}/L}^\zeta\frac{1-\phi_m(x)}{x}dx$$ where $z_{0,m}$ is some base value. Then $$\frac{k}{u_*}(u_2-u_1)=\ln\left(\frac{z_2}{z_1}\right)-\psi_m\left(\frac{z_2}{L}\right)+\psi_m\left(\frac{z_1}{L}\right).$$ As an aside, the point of writing $\psi_m$ is to quantify the difference in the wind profile between the stable and unstable regimes. For example, if $\phi_m(x)=1$, then you are in the perfectly stable regime, $\psi_m=0$ so you get back the usual log-wind profile law.
H: system of equations - when does it have a solution? Find all values of $t$ for which the system of equations $$\begin{array}{c} 2x_1 + x_2 + 4x_3 + 3x_4 = 1\\ x_1 + 3x_2 + 2x_3 - x_4 = 3t\\ x_1 + x_2 + 2x_3 + x_4 = t^2 \end{array}$$ has a solution. I was given a theorem that the system has a solution when the column vector of the RHS lies in the subspace spanned by the column vectors of the LHS. If we take the respective column vectors and notice that the third is a scalar multiple of the first column, we get three linearly independent vectors. What I don't get is why there should be particular $t$'s for which the system doesn't have a solution, since if we have three linearly independent (column) vectors, they should span $\mathbb{R}^3$, and thus we could find a solution for any $t$. I suppose I'm wrong, but where's the mistake, and how should I check then which $t$'s suffice? AI: Using Angela's observation, the system reduces to $$2x_1+x_2=1\\x_1+3x_2=3t\\x_1+x_2=t^2$$ Subtracting the third line from the first, we get $x_1=1-t^2$. The second line then gives $3x_2=3t-(1-t^2)=t^2+3t-1$, while the third line gives $x_2=t^2-(1-t^2)=2t^2-1$. Hence for the system to have a solution these must agree, i.e. $\frac{1}{3}(t^2+3t-1)=2t^2-1$ or $t^2+3t-1=6t^2-3$. This is a quadratic equation $5t^2-3t-2=0$ with two solutions, and copper.hat kindly found them explicitly.
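A computational check (a Python sketch with NumPy, not part of the original answer): the system is consistent exactly when $\operatorname{rank}(A)=\operatorname{rank}([A\mid b])$, which happens precisely at the two roots of $5t^2-3t-2=0$, namely $t=1$ and $t=-2/5$.

```python
import numpy as np

A = np.array([[2, 1, 4, 3],
              [1, 3, 2, -1],
              [1, 1, 2, 1]], dtype=float)

def consistent(t):
    b = np.array([1, 3 * t, t**2], dtype=float)
    aug = np.column_stack([A, b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(aug)

for t in (1.0, -0.4, 0.0, 2.0):
    print(t, consistent(t))   # True, True, False, False
```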
H: Solutions for integer $n$, given $ \exp(m-\frac{2}{\pi}n)=n^{b/{\pi}}. $ Let $m, n$ be integers. Let $b \in \mathbb R$. Solve the following equation for $n$. $$ \exp(m-\frac{2}{\pi}n)=n^{b/{\pi}}. $$ Thank you. AI: Let's play around and see what happens. We have $\exp(m-\frac{2}{\pi}n)=n^{b/{\pi}}$, or $\exp(m)/\exp(n)^{2/\pi}=n^{b/{\pi}}$. Raise to the $\pi$ power and we get $\exp(m\pi)/\exp(n)^{2}=n^{b}$ or $\exp(m\pi)= n^b e^{2n}$, or $e^{m\pi} = e^{b \ln n} e^{2n}$ or $m\pi = b \ln n + 2n$. If $m$ and $n$ are integers, it seems very unlikely to me that this would have a solution unless $b$ were chosen to make this true for some $m$ and $n$. I don't see how to go further without using the W-function.
H: If $\lim_{x \rightarrow \infty} f(x)$ is finite, is it true that $ \lim_{x \rightarrow \infty} f'(x) = 0$? Does finite $\lim_{x \rightarrow \infty} f(x)$ imply that $\lim_{x \rightarrow \infty} f'(x) = 0$? If not, could you provide a counterexample? It's obvious for a constant function. But what about others? AI: Simple counterexample: $f(x) = \frac{\sin x^2}{x}$. UPDATE: It may seem that such an answer is an inexplicable lucky guess, but it is not. I strongly suggest looking at Brian M. Scott's answer to see why. His answer reveals exactly the reasoning that should first happen in one's head. I started thinking along the same lines, and then I just replaced those triangular bumps with $\sin x^2$, which oscillates more and more quickly as $x$ goes to infinity.
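For what it's worth, here is a small numerical illustration of this counterexample (my own sketch, not part of the answer): $f(x)=\sin(x^2)/x$ tends to $0$, while $f'(x)=2\cos(x^2)-\sin(x^2)/x^2$ keeps oscillating with amplitude close to $2$.

```python
import numpy as np

f  = lambda x: np.sin(x**2) / x
fp = lambda x: 2*np.cos(x**2) - np.sin(x**2) / x**2   # derivative of f

for x0 in [10.0, 100.0, 1000.0]:
    xs = x0 + np.linspace(0.0, 10.0, 100001)          # a window to the right of x0
    print(x0, np.max(np.abs(f(xs))), np.max(np.abs(fp(xs))))
# max|f| shrinks roughly like 1/x0, but max|f'| stays near 2,
# so f -> 0 while f' has no limit.
```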
H: Solution of Lagrangian Could you give me advice how to solve the following Lagrangian? $$L=x^3+y^3 - \lambda (x^2-xy+y^2-5)$$ $$ \left\{ \begin{array}{c} \frac{\partial L}{\partial x} = 3x^2 - \lambda (2x-y) = 0 \\\frac{\partial L}{\partial y} = 3y^2 - \lambda (-x+2y) = 0 \\ \frac{\partial L}{\partial \lambda} = x^2-xy+y^2=5 \end{array} \right. $$ $$ \left\{ \begin{array}{c} 3x^2 / (2x-y)= \lambda \\ 3y^2 / (-x+2y) = \lambda \\ x^2-xy+y^2=5 \Rightarrow x^2 = xy-y^2+5 \end{array} \right. $$ $$\Rightarrow \lambda = 3x^2 / (2x-y)= 3y^2 / (-x+2y) \Rightarrow x^2 = \frac{(2x-y)(y^2)}{2y-x} \Rightarrow \frac{(2x-y)(y^2)}{2y-x} = xy-y^2+5 \Rightarrow (2x-y)(y^2) = (xy-y^2+5) (2y-x) \Rightarrow 2xy^2 - y^3 = 2xy^2 - 2y^3 +10y - x^2y +xy^2 -5x \Rightarrow 0 = - y^3 +10y - x^2y +xy^2 -5x $$ I don't know how to transform the above in order to find x,y. AI: First $$\lambda = \frac{3x^2}{2x-y} = \frac{3y^2}{-x+2y}\implies -x^3 +2x^2y = 2y^2x - y^3.$$ Using the factoring formula for $x^n -y^n$ when $n$ is an odd number: $$ x^n - y^n = (x-y)(x^{n-1} + x^{n-2}y + \cdots + xy^{n-2} + y^{n-1}). $$ Hence we have $$ x^3 -y^3 + 2y^2x - 2x^2y = 0\implies (x-y)(x^2-xy+y^2) = 0. $$ I believe you can take it from here.
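If it helps, here is a short SymPy check of the critical points (my own sketch, not part of the question or answer): it solves the three stationarity equations directly and confirms that the only real solutions have $x=y=\pm\sqrt5$, in line with the factor $(x-y)$ above.

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
L = x**3 + y**3 - lam*(x**2 - x*y + y**2 - 5)

eqs = [sp.diff(L, v) for v in (x, y, lam)]     # dL/dx = dL/dy = dL/dlambda = 0
for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print(sol, '  objective =', (x**3 + y**3).subs(sol))
# Real critical points: (sqrt(5), sqrt(5)) and (-sqrt(5), -sqrt(5)).
```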
H: Proving that $f ^{*} $ is a linear map Here's another that I want to see if my reasoning was correct. Let $f:E \rightarrow F$ be a linear map and let $$f^* : \text{Hom}_K(F,K) \rightarrow \text{Hom}_K(E,K) $$ if we define $f^*(u) = u \circ f$, prove that $f^*$ is a linear map. Originally my reasoning was that since $f^*(u) = u \circ f$ then by definition the two functions were inverse of each other (f and u I mean). f was onto and one-to-one and that's the only way that equality works But I think now that could be wrong. AI: No that isn't quite right. First note: $\text{Hom}_K(F, K)$ are linear maps from $F$ to $K$ (these maps are called linear functionals). Similarly, for $\text{Hom}_K(E,K)$. Moreover, these each form vector spaces over $K$ of their own! So the "vectors" in these spaces are the linear functionals. $\text{Hom}_K(F, K)$ is called the dual space to $F$ and sometimes denoted $F^*$ (that is where the $"*"$ is coming from in the problem) see here. So now $f^*$ in the question is a linear map that is mapping a linear functional in $\text{Hom}_K(F, K)$ to a linear functional in $\text{Hom}_K(E, K)$. So the $u$ in the problem is a linear functional in $\text{Hom}_K(F, K)$ and $f^*(u)$ is a linear functional in $\text{Hom}_K(E, K)$. This means that $f^*(u) : E \to K$ i.e. its domain is $E$ and notice that also $f : E \to F$ so that its domain is also $E$ Now, $f^*(u) = u \circ f$ means that they agree on all points of their domain (notice that we do have the same domain). In other words, for all $v \in E$ we have that $$f^*(u)(v) = u \circ f (v) = u(f(v))$$ So now that we understand the maps a bit more, we want to check the conditions of being a linear map. That is, we want for all $u_1, u_2 \in \text{Hom}_K(F, K)$ $$f^*(u_1 + u_2) = f^*(u_1) +f^*(u_2)$$ and for all $\alpha \in K$ $$f^*(\alpha u) = \alpha f^*(u)$$ I will do the first one and maybe you can do the second. Let $u_1, u_2 \in \text{Hom}_K(F, K)$. By the above to show $f^*(u_1 + u_2) = f^*(u_1) +f^*(u_2)$ we need to show they agree on all points of their domain, so let $v \in E$. Then we have $$f^*(u_1 + u_2)(v) = (u_1 + u_2) \circ f(v) = (u_1 + u_2)(f(v)) = u_1(f(v)) + u_2(f(v))$$ $$= u_1 \circ f(v) + u_2 \circ f(v) = f^*(u_1)(v) + f^*(u_2)(v)$$ which is what we wanted. Do you see why $(u_1 + u_2)(f(v)) = u_1(f(v)) + u_2(f(v))$? Remember, $u_1, u_2$ are functions . . . how do we define addition of functions?
H: Interesting Ordinary Differential Equation Problem Consider the DE $y'=\alpha x$, $x>0$, where $\alpha$ is a constant. Show that: 1) if $\phi(x)$ is any solution and $\psi(x)=\phi(x)e^{-\alpha x}$, then $\psi(x)$ is a constant; 2) if $\alpha<0$, then every solution tends to zero as $x \rightarrow 0$. I tried the normal method of first solving the given ODE to find its solution and considering it to be $\phi(x)$; I found $\psi(x)$ and tried differentiating it to show that it's constant, but it didn't work out. AI: There are two problems with the question statement. First the equation should be $$ y' = \alpha \color{red}{y}.\tag{1} $$ Secondly, it should concern the limiting behavior of $y$, namely $$ y\to 0\text{ when } x\to \color{red}{+\infty}.\tag{2} $$ Hints: If $\psi(x)$ is a constant, then $$ \psi'(x) = \phi'(x)e^{-\alpha x}+ \phi(x)(-\alpha e^{-\alpha x}) $$ must be zero; is this true if $\phi(x)$ solves (1)? The solution to (1) is $y = y(0)e^{\alpha x}$, where $y(0)$ is the initial value you assigned; take any real number $a = y(0)$: is (2) true?
H: Proving a projection is a linear map OK, I think I am on the right track but I am trying to figure out if I am going wrong somewhere. Let $V$ be a vector space over a field $K$. Let $B = \{x_1, x_2, x_3, \ldots , x_n\}$ be a base of $V$ over $K$. Then for all $x \in V$ there exist unique scalars $\lambda_1, \lambda_2,\ldots, \lambda_n$ in $K$ such that $\sum_{i=1}^n \lambda_i x_i = x$. We call the $\lambda_i$ the $i$-th component of $x$ with respect to base $B$. Let $p_i(x)$ denote the $i$-th component of $x$ with respect to base $B$, where $p_i$ is a projection map. Prove the following: That $p_i: V \to K$ is a linear map and therefore $p_i \in \operatorname{Hom}_k(V, K)$. Show that $\{p_1, p_2,\ldots, p_n\}$ is a base of $\operatorname{Hom}_K(V,K)$. Show that there is an isomorphism $\Psi : V \to \operatorname{Hom}_K(V,K)$ such that $\Psi(x_i) = p_i$ for all $1 \le i \le n$. I got through part of this. My reasoning was that since $p_i$ is a projection onto $V$ then by definition any vector $p_i(x) \in V$ which means that like any other vector in the space it can be expressed as a linear combination of some scalar (call it $\mu$) such that $p_i(x) =\{\mu_1 x_1 + \mu_2 x_2 + \cdots + \mu_n x_n\} = \sum^n_{i=1}\mu_i x_i$. Since it can be expressed that way, that means that if we pick any other vector in $V$, say $y$, that can also be expressed as a linear combination of the base components (say, $\lambda_ix_i$). So if we add $x$ and $y$ and plug them into the projection map we get $p_i(x+y) = \sum^n_{i=1}\mu_ix_i + \lambda_ix_i$ which with a little algebra can be separated into $\sum_{i=1}^n \mu_i x_i + \sum_{i=1}^n \lambda_i x_i$. A similar process can prove that it preserves scalar multiplication. Is this reasoning correct? Because if it is, my next step was to say that given that $p_i$ is a linear map, and that it is from $V$ to $V$, the number of dimensions it has is going to be the same as the dual space. But I am not convinced that's a good reason. Any input would be welcome. AI: First let's prove that $p_i : V \to K$ is a linear map. Let $v, w \in V$, where $v = a_1x_1 + \dots + a_nx_n$ and $w = b_1x_1 + \dots + b_nx_n$. Since $v + w = (a_1 + b_1)x_1 + \dots + (a_n + b_n)x_n$, we have $p_i(v + w) = a_i + b_i = p_i(v) + p_i(w)$. Now suppose $c \in K$. Clearly, $cv = (ca_1)x_1 + \dots + (ca_n)x_n$, and thus $p_i(cv) = ca_i = cp_i(v)$. For the second question, if you can use the fact that $\operatorname{Hom}_K(V, K)$ has dimension $n$, it suffices to show that $\{p_1, \dots, p_n\}$ is linearly independent. Assume $c_i \in K$ such that $$c_1p_1 + \dots + c_np_n = 0 \in \operatorname{Hom}_K(V, K)$$ If $c_i \ne 0$ for some $i$, we have $$(c_1p_1 + \dots + c_np_n)(x_i) = c_1p_1(x_i) + \dots + c_np_n(x_i) = c_ip_i(x_i) = c_i \in K$$ in which case $c_1p_1 + \dots + c_np_n$ is not zero. Alternatively, it's not too hard to show that $\{p_1, \dots, p_n\}$ is a spanning set for $\operatorname{Hom}_K(V, K)$. Any $T \in \operatorname{Hom}_K(V, K)$ is completely determined by its value at the basis elements, so say we have $T(x_i) = \lambda_i$. Then $$(\lambda_1p_1 + \dots + \lambda_np_n)(x_i) = \lambda_1p_1(x_i) + \dots + \lambda_np_n(x_i) = \lambda_ip_i(x_i) = \lambda_i $$ Since both maps agree on basis elements, we conclude that $\lambda_1p_1 + \dots + \lambda_np_n = T$. For 3, I don't think there is much to show. Once you define $\Psi: V \to \operatorname{Hom}_K(V, K)$ on the basis elements by $\Psi(x_i) = p_i$, this extends $K$-linearly to a well-defined linear map $V \to \operatorname{Hom}_K(V, K)$, which is clearly an isomorphism.
H: Unintentional Negative Sign in Limit Evaluation I've been working on evaluating the following limit: $$\lim_{x\to 0} \left(\csc(x^2)\cos(x)-\csc(x^2)\cos(3x) \right)$$ According to my calculator, the limit should end up being 4. Though I've tried using the following process to find the limit, I continue to get -4: $$\begin{align} \lim_{x\to 0} \left(\frac{\cos(x)-\cos(3x)}{\sin(x^2)}\right)&=\lim_{x\to 0} \left(\frac{\cos(x) - \cos(2x+x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{\cos(x) - \cos(2x)\cos(x) - \sin(2x)\sin(x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{\cos(x) - \left(1-2\sin^2(x)\right)\cos(x) - 2\sin^2(x)\cos(x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{\cos(x) - \cos(x)-2\sin^2(x)\cos(x)-2\sin^2(x)\cos(x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{\cos(x) - \cos(x)-4\sin^2(x)\cos(x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{-4\sin^2(x)\cos(x)}{\sin(x^2)}\right)\\ &=\lim_{x\to 0} \left(\frac{\frac{d}{dx}\left(-4\sin^2(x)\cos(x)\right)}{\frac{d}{dx}\left(\sin(x^2)\right)}\right)\\ &=\lim_{x\to 0} \left(\frac{-4\cos^2(x)\cos(x)-\sin^2(x)\sin(x)}{\cos(x^2)}\right)\\ &=\frac{-4\cdot1 - 0}{1} \\ &=-4 \end{align}$$ Is there a specific place in my steps where I am going wrong with respect to negative sign notation or distribution? (Also, apologies for the extra steps... and possible incorrect ordering of labels, functions, and symbols.) AI: $$ \bigg(\frac{\cos(x) - \cos(2x+x)}{\sin(x^2)}\bigg)=\bigg(\frac{\cos(x) - \cos(2x)\cos(x) + \sin(2x)\sin(x)}{\sin(x^2)}\bigg)$$
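A quick way to confirm the intended value (my own check, not part of the answer) is to let a CAS evaluate the limit directly:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.cos(x) - sp.cos(3*x)) / sp.sin(x**2)
print(sp.limit(expr, x, 0))   # prints 4, matching the calculator
```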
H: How to show that a set $A$ is closed iff it is covered by a family $\mathcal{P}$ of open sets $U$, where $A \cap U$ is closed in $U$? How to show that a set $A$ is closed if and only if it is covered by a family $\mathcal{P}$ of open sets, where $A \cap U$ is closed in $U$ for each open set $U$ in $\mathcal{P}$? Thank you very much. AI: Let $X$ be the space. For the easy direction, note that $X$ itself is an open set. The other direction isn’t true as stated: Ink’s answer provides a counterexample. It becomes true, however, if $\mathscr{U}$ is an open cover of $X$ rather than just of $A$. In that case suppose that $x\in X\setminus A$. There is a $U\in\mathscr{U}$ such that $x\in U$, and $U\setminus A$ is then an open nbhd of $x$ disjoint from $A$. It follows that $X\setminus A$ is open and hence that $A$ is closed.
H: Summing a series with a changing power $$\sum_{r=0}^{k-1}4^r$$ Hi, I was wondering whether anyone could explain how to work this out. I know the end result is $\frac{4^k-1}{3}$, but I don't know why or how to get there. Thank you :D AI: Multiply it by $(4-1)$. Expand without turning it into $4-1=3$. A lot of the powers of $4$ will cancel, leaving only $4^k$ and $-1$. Afterwards divide by $4-1=3$.
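A tiny brute-force check of the closed form (my own addition) for small $k$:

```python
# Verify sum_{r=0}^{k-1} 4^r == (4^k - 1) / 3 for k = 1, ..., 10.
for k in range(1, 11):
    assert sum(4**r for r in range(k)) == (4**k - 1) // 3
print("formula verified for k = 1..10")
```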
H: Probability of getting a "double" in at least one of two throws of two dice We play a game where we throw two distinct dice twice. A player wins $\$3$ if he gets a double in at least one of the two throws, and loses $\$1$ if he doesn't get a double in either throw. What is the expected value of the player's winnings in a game? I am having a bit of a problem knowing what is the probability of getting a double in at least one of the two throws. I thought about it like this: Our $\Omega$ is all the sequences on $\{1,2,3,4,5,6\}$ with length 4 (for the two throws). $| \Omega | = 6^4$. $Pr(\text{to get a double in at least one throw}) = 1 - Pr(\text{to not get a double at any of the throws}) = 1 - \frac{36}{6^4} = \frac{35}{36}$ which is obviously wrong. Any help will be appreciated. P.S.: (Can we solve it using an Indicator Random Variable?) AI: The probability of rolling a double on 2 six-sided dice is $\frac{1}{6}$. So the probability of not rolling one is $\frac{5}{6}$. The probability of this happening twice is $\left(\frac{5}{6}\right)^2=\frac{25}{36}$. The expected payoff is therefore $3\cdot\frac{11}{36}-1\cdot\frac{25}{36}=\frac{8}{36}\approx \$0.22$ - deal me in!
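As a sanity check (my own addition, not part of the answer), the exact expectation and a small hypothetical Monte Carlo simulation agree with the $\approx\$0.22$ figure:

```python
import random
from fractions import Fraction

p_win = 1 - Fraction(5, 6)**2          # P(at least one double) = 11/36
ev = 3*p_win - 1*(1 - p_win)           # 8/36 = 2/9
print(ev, float(ev))                   # 2/9 ~ 0.222

def game():
    """One play: two throws of two dice; win 3 on any double, else lose 1."""
    win = any(random.randint(1, 6) == random.randint(1, 6) for _ in range(2))
    return 3 if win else -1

n = 200_000
print(sum(game() for _ in range(n)) / n)   # close to 0.222
```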
H: Determining whether a coin is fair I have a dataset where an ostensibly 50% process has been tested 118 times and has come up positive 84 times. My actual question: IF a process has a 50% chance of testing positive and IF you then run it 118 times What is the probability that you get AT LEAST 84 successes? My gut feeling is that, the more tests are run, the closer to a 50% success rate I should get and so something might be wrong with the process (That is, it might not truly be 50%) but at the same time, it looks like it's running correctly, so I want to know what the chances are that it's actually correct and I've just had a long string of successes. AI: The total number of successes in $n=118$ runs is binomial $(n,\frac12)$ hence the probability $p_n(k)$ to get at least $k=84$ successes is $$ p_n(k)=2^{-n}\sum_{i=k}^n{n\choose i}. $$ When $k$ is significantly larger than $\frac{n}2$, $p_n(k)$ is very small and an estimation of how small $p_n(k)$ is is obtained through a large deviations estimate. This says that $p_n(k)\leqslant p_n^*(k)$ with $$ p^*_n(k)=2^{-n}\inf\{(1+s)^ns^{-k}\,;\,s\geqslant1\}. $$ For every $k\gt\frac{n}2$, the infimum is reached at $s=\frac{k}{n-k}$, hence $$ p^*_n(k)=2^{-n}n^nk^{-k}(n-k)^{-(n-k)}=\left(I\left(\tfrac{k}n\right)\right)^{-n},\quad I(t)=2t^t(1-t)^{1-t}. $$ For example, if $k=84$ and $n=118$, then $t=.712$ hence $I(t)\approx1.09710$ and $$ p^*_{118}(84)\approx(1.09710)^{-118}\approx10^{-5}. $$ Numerically, $p_{118}(84)\approx2.36224\cdot10^{-6}$ and $p^*_{118}(84)\approx1.78153\cdot10^{-5}$.
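For reference, the exact tail probability and the large-deviation bound from the answer can be reproduced with SciPy (my own sketch):

```python
from scipy.stats import binom

n, k = 118, 84
print(binom.sf(k - 1, n, 0.5))   # P(X >= 84) for X ~ Bin(118, 1/2), about 2.36e-6

t = k / n                        # the bound p*_n(k) = I(t)^(-n) from the answer
I = 2 * t**t * (1 - t)**(1 - t)
print(I**(-n))                   # about 1.78e-5
```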
H: Finding the DE of family of curves Find the DE of the family of circles in the XY plane passing through the points $(-1,1)$ and $(1,1)$. AI: Replace $(\pm1,1)$ by $(\pm1,0)$ for the moment. By symmetry it is sufficient to consider a point $P=(x,y)$ in the (interior of) the first quadrant. The circle through $(\pm1,0)$ and $(x,y)$ has its center at $M=(0,c)$ with $$c={x^2+y^2-1\over 2y}\ .$$ The vector $MP$ has slope ${y-c\over x}$; therefore the tangent to the circle through $P$ with center $M$ has slope $$-{x\over y-c}={2xy\over x^2-y^2-1}$$ at $P$. It follows that the DE of "my" family of circles is given by $$y'={2xy\over x^2-y^2-1}\ .$$ In order to obtain the DE for "your" family of circles we have to move this "slope field" one unit up, so that we end with $$y'={2x(y-1)\over x^2-(y-1)^2-1}\ .$$
H: Is $2^{\aleph_0}=c$? When I search about the cardinality of real number set in Wiki (http://en.wikipedia.org/wiki/Cardinality_of_the_continuum) I found: By the Cantor–Bernstein–Schroeder theorem we conclude that $c=|P(\mathbb{N})|=2^{\aleph_0}$ And the Cantor–Bernstein–Schroeder state that if $A\preceq B$ and $B\preceq A$, then $A \sim B$. How can I use it to prove that $c=|P(\mathbb{N})|=2^{\aleph_0}$. How to create a bijection function mapping $P(\mathbb{N})$ onto $\mathbb{R}$? AI: Proof by Cantor-Bernstein-Schroder. To use Cantor-Bernstein-Schroder, you only need to find an injection from $\mathbb{R}$ to $P(\mathbb{N})$ and vice-versa. This is fairly easy to do. For the $\mathbb{R}$ to $P(\mathbb{N})$ direction, we work as follows. It is easy to show that $\mathbb{R}$ is equipollent to $(0, 1)$. I give an explicit bijection below. Using such a bijection, first map $\mathbb{R}$ to $(0, 1)$. Now, we exploit the following fact: the subsets of $\mathbb{N}$ can be put into one-to-one correspondence with (finite and infinite) binary strings that do not end in zeroes as follows. Write out the elements of $\mathbb{N}$ in their usual order. Cross out the elements of the subset. Below each element of $\mathbb{N}$, write a “0” if it is not crossed out, and a “1” if it is. Read out the resulting binary string, and chop off trailing zeroes (I.e. what happens when the set is finite). (The empty set maps to the empty string $\epsilon$.) Reversing this rule gives a mapping from such strings to subsets. Then, map $(0, 1)$ injectively into $P(\mathbb{N})$ by binary-expanding each member of $(0, 1)$, minding to use a consistent conevntion of whether to take the terminating or "11111..."-ending expansion when applicable, taking the fractional part, and then chopping off any trailing zeroes and applying the above rule in reverse to get a subset of $\mathbb{N}$. This gives an injection from $\mathbb{R}$ to $P(\mathbb{N})$. Now, for an injection going the other way. Map all infinite members of $P(\mathbb{N})$ into $(0, 1)$ using the inverse of the map just given. Then map all finite members by interpreting their binary strings as integers after appending a leading 1. These integers will be 1 or more, hence won't conflict with anything in $(0, 1)$. This shows $\mathbb{R}$ and $P(\mathbb{N})$ are equipollent by the Cantor-Bernstein-Schroder Theorem. But you also asked for something else: an explicit bijection from $\mathbb{R}$ to $P(\mathbb{N})$. We can do this too. Proof by Explicit Construction. We can build up a bijection $f: \mathbb{R} → P(\mathbb{N})$ in stages, as follows. The first step is to construct one from $[0, 1)$ to $P(\mathbb{N})$. Call it $\beta$. We exploit the fact that a subset of $\mathbb{N}$ can be encoded as a binary string without trailing zeroes that we just mentioned. Expand a real in $[0, 1)$ in binary. Exclude the set of reals whose binary terminates (or can be represented as a repeating binary ending in “111111....”). Take the fractional parts of the binary expansions of these reals and map them to subsets of $\mathbb{N}$ via the procedure just discussed. The next stage is to fill in the reals we just excluded. We cannot simply use the natural correspondence of these reals to their binary expansions, for their binary expansions are not unique. Instead, we can do something like this. Let such a real be expanded as $0.a_1a_2...a_n$, $a_n = 1$. Then, let the rational that said binary represents be given by $\frac{p}{2^n}$, $p = a_1 2^{n-1} + a_2 2^{n-2} + … + 2a_{n-1} + a_n$. 
Now for the trick. Do not map the real $\frac{p}{2^n}$ to the binary $a_1a_2...a_n$ and corresponding subset. Instead, map $\frac{p}{2^{n+1}}$ to this binary, and map $\frac{p + 2^n}{2^{n+1}}$ to the binary $a_1a_2...a_{n-1}011111....$ and its corresponding subset. Map 0 to the null set, and map $1/2$ to $\mathbb{N}$. That this last mapping puts the terminating-binary members of $[0, 1)$ in one-to-one correspondence with the remaining subsets of $\mathbb{N}$ should be easy to see: each such terminating-binary rational can be represented as $\frac{p}{2^n}$ for an odd, unique $p$ and unique $n > 0$ with $p < 2^n$. All odd $p$ are represented. For each $n > 1$, all values for $p$ from $1$ to $2^n – 1$ are given a binary string/subset. The case where $n = 1$ is taken care of by special case, as is $0$. So every such rational is given a unique string, and every string a unique such rational. This completes the construction of bijection $\beta$. We now consider the construction of bijection $Б: (0, 1) \rightarrow [0, 1)$. Such a bijection can be constructed in a number of ways. One way is as follows. When $x \le 1/2$, let $Б(x) = 1/2 – x$. This maps $(0, 1/2]$ to $[0, 1/2)$. When $x > 1/2$ (so is in $(1/2, 1)$ ), let $Б(x) = \frac{Б(2x – 1)}{2} + \frac{1}{2}$. $2x – 1$ stretches $(1/2, 1)$ to $(0, 1)$. $Б$ then is called upon to recursively map $(0, 1)$ to $[0, 1)$. Dividing by 2 shrinks $[0, 1)$ to $[0, 1/2)$. Adding $\frac{1}{2}$ translates $[0, 1/2)$ to $[1/2, 1)$, the missing interval. Note that each of these steps is reversible, so $Б$ is indeed bijective. Finally, we need a bijection from $\mathbb{R}$ to $(0, 1)$. This is the simplest of all. Call this bijection $b$. We can use something like $$b(x) = \frac{\mathrm{arccot}(x)}{\pi}$$. Or $$b(x) = \frac{\tanh(x) + 1}{2}$$ or any other sigmoid function ranging in $(0, 1)$. Then, $$f = \beta \circ Б \circ b$$ is a bijection from $\mathbb{R}$ to $P(\mathbb{N})$. Thus we prove by explicit construction that $\mathbb{R}$ and $P(\mathbb{N})$ are equipollent. So, it is definitely true that $$2^{\aleph_0} = \mathfrak{c}$$ .
H: Surjective endomorphisms of Noetherian modules are isomorphisms. I'm trying to solve this question: I didn't understand why the hint is true and how to apply it. I really need help, because it's my first question on this subject and my experience on this field is zero. I need some help. Thanks a lot AI: For any $m\in M$, if you have $u(m)=0$, then you have $$(u\circ u)(m)=u(u(m))=u(0)=0.$$ Therefore $\ker(u)\subseteq\ker(u\circ u)$. By the same argument, you have $$\ker(u)\subseteq\ker(u\circ u)\subseteq\ker(u\circ u\circ u)\subseteq\cdots$$ which should look familiar if you have just read a chapter on Noetherian modules.
H: A slight modification to the recursion theorem I always come across this version of the recursion theorem that is frequently used without justification: Given a function $g:A\longrightarrow A$ there exists exactly one funcion such that, $f(0)=a_{0}, f(1)=a_{1},...,f(k)=a_{k}$ $f(n+k)=g(f(n))$ I guess of course that the original proof of the standard recursion theorem, meaning by using only the conditions $f(0)=a_{0}$ and $f(n+1)=g(f(n))$ would also work in this case. So my question is if it's possible to deduce this theorem not writing the proof again but supposing the standard theorem as proved. AI: Define $$G:A^k\to A^k:\langle a_1,\dots,a_k\rangle\mapsto \langle g(a_1),\dots,g(a_k)\rangle$$ and use the ordinary result to define $F$ so that $F(0)=\langle a_0,\dots,a_{k-1}\rangle$ and $F(n+1)=G(F(n))$. From $F$ you can easily extract $f$: $$\begin{align*} F(1)&=\langle g(a_0),\dots,g(a_{k-1})\rangle=\langle f(k),\dots,f(2k-1)\rangle\;,\\ F(2)&=\langle g(f(k)),\dots,g(f(2k-1))\rangle=\langle f(2k),\dots,f(3k-1)\rangle\;, \end{align*}$$ and so on.
H: Question regarding GWP (Kesten-Stigum setup) Let $(Z_n)_{n\in \mathbb{N}}$ be a GWP with $Z_0=1$ and mean of offspring distribution $m\in (1,\infty)$. Define $W_n=Z_n/m^n$ and denote its limit by $W$ (i.e. setup as in the Kesten-Stigum theorem). I already know that $\{W>0\} \subset \{Z_n \rightarrow \infty\}$, but I am not sure how to show the opposite, i.e. $$\mathbb{P}(W>0 \vert Z_n \rightarrow \infty) = 1$$ How can one prove this statement? AI: This result is indeed called Kesten-Stigum theorem. It holds (if and) only if the offspring distribution is L log L integrable (hence integrable is not enough). A simple approach to prove it is expanded in the paper Conceptual Proofs of L log L Criteria by Russell Lyons, Robin Pemantle and Yuval Peres, available here.
H: The difference between closed and open sets of the product topology I was tackling this problem from Munkres: If Y is compact, then the projection map of $X \times Y$ is a closed map. And I thought the same things as Akt904. After reading Brian M. Scott's comment I was nearly convinced, but what about the open subset $\{(x,y): x^2+y^2<1\}$ of $\Bbb{R}\times\Bbb{R}$? Since it is open it is a union of basis elements, but can it be written as $A\times B$ where both $A$ and $B$ are open subsets of $\Bbb{R}$? Or is it not needed for open sets to be written as $A\times B$? AI: Open sets need not be products of open sets. Note that if $U\subseteq X\times Y$ can be written as $A\times B$ then its projections onto $X$ and $Y$ are $A$ and $B$, respectively. Note that if we project the open unit ball of $\Bbb R^2$ onto $\Bbb R$ we get $(-1,1)$ in both coordinates; their product is the open unit square, rather than the ball. Therefore it is not the product of any two subsets of $\Bbb R$, but rather the union of such sets.
H: Can a basis for a vector space $V$ can be restricted to a basis for any subspace $W$? I don't understand why this statement is wrong: $V$ is a vector space, and $W$ is a subspace of $V$. $K$ is a basis of $V$. We can manage to find a subset of $K$ that will be a basis of $W$. Sorry if my English is bad... and if you can show me an example of something that contradicts it, it'd be great. AI: Take for example $V = \mathbb{R}^2$, $W = \{(x,y)\in\mathbb{R}^2 : x = y\}$, and $K = \{(1,0),(0,1)\}$. $W$ is a subspace of $V$, but there is no subset of $K$ that gives a basis for $W$.
H: In $\mathbb{C}[x]$ is it true that $F_{a,b}=\{p\in\mathbb{C}[x] : p(a)=p(b)\}$ for $a\neq b$ is a maximal subring? The problem is in the title. It is clear that $F_{a,b}$ is a ring, but it is not so clear to me that it is maximal in $\mathbb{C}[x]$. I tried to consider it as a vector space and show that it has codim=1 but I didn't go far with that. Does anyone have any ideas? AI: This answer is based on the hints and insights of Qiaochu Yuan. We can consider the rings and subrings as vector spaces. If $T\leq \mathbb{C}[x]$ then $T$ is a subspace of the vector space $\mathbb{C}[x]$. So, in order to prove that $T$ is a maximal subring of $\mathbb{C}[x]$, we can just prove that $\operatorname{codim}T = 1$. Indeed, suppose by contradiction that $\operatorname{codim}T = 1$ and $T$ is not a maximal subring of $\mathbb{C}[x]$. In this case we have that there exists $V$ a subring of $\mathbb{C}[x]$ such that $T\subsetneq V\subsetneq \mathbb{C}[x]$. But in the context of vector spaces, that means that $\operatorname{codim}T> \operatorname{codim}V\geq 1$ which implies that $\operatorname{codim}T>1$. Contradiction! So it is true that $$\operatorname{codim}T = 1 \Rightarrow T \text{ is a maximal subring of } \mathbb{C}[x].$$ In our case we choose the linear functional $f:\mathbb{C}[x]\rightarrow \mathbb{C}$ defined by $f(p)=p(a)-p(b)$. Clearly $F_{a,b} \equiv \ker f$. Furthermore $\operatorname{Im}f=\mathbb{C}$. So $$\operatorname{codim}\ker f=\dim\mathbb{C}[x]/\ker f=\dim\operatorname{Im}f = \dim\mathbb{C}=1.$$ From what we proved above, since $\operatorname{codim}F_{a,b}=1$ we have that $F_{a,b}$ is a maximal subring of $\mathbb{C}[x].$
H: When is there a ring structure on an abelian group $(A,+)$? Given an abelian group $(A,+)$, what are conditions on $A$ that ensure there is or isn't a unitary ring structure $(A,+,*)$? That is, an associative bilinear operation $* : A^2 \to A$ with an identity $1_A \in A$. A possible follow-up question would be conditions for a nontrivial rng structure on $A$ (indeed, the multiplication $ab = 0$ always gives a rng structure). For example, if $A$ is finitely generated, then by the structure theorem on finite type abelian groups, $A \simeq \mathbb Z^r \times \prod \mathbb Z/p_i^{a_i} \mathbb Z$ and it has a product ring structure. On the other hand, $\mathbb Q / \mathbb Z$ doesn't: if it did, the unit $u=[p/q]$ would have finite order $q$, and $\forall x \in \mathbb Q / \mathbb Z, qx = q(u*x) = (qu)*x = 0$, but $q(1/2q) = 1/2 \neq 0$. More generally, by this question, if every element has finite order but the orders are not bounded, then there is no ring structure on $A$. Is this condition also necessary? I doubt it: I have trouble envisioning a ring structure on $(\mathbb R / \mathbb Z, +) = (S^1, \cdot)$, but not every element has finite order in $S^1$. The argument also completely fails for rngs. (Note that my question is different from the one I linked: this one simply asked for a counterexample, I'm asking for general conditions) AI: In a unitary ring you always have the fundamental concept of "characteristic". If you have a torsion Abelian group $G$, and you want a unitary ring structure on $G$, then first of all you need to choose a unit. If $g\in G$ is such a unit, then you are forced to have a ring of characteristic $n=|g\mathbb Z|$. This tells you that $nh=0$ for all $h\in G$, so if $G$ does not have a finite exponent, you cannot have a ring structure on $G$. If you want a source for many torsion-free counterexamples you can proceed as follows. You can use a famous theorem of Saharon Shelah to find, for any infinite cardinal $\alpha$, $2^\alpha$ many non-isomorphic torsion-free abelian groups of cardinality $\alpha$ and whose endomorphism ring is a subring of the rationals. In particular, given such a $G$, $\operatorname{End}(G)$ is countable. Notice that, if you have a ring structure on $G$, then there is an injection $G\to\operatorname{End}(G)$ sending $g$ to left-multiplication by $g$. Let me add that these matters are nicely discussed in Chapter XVII of the second volume of Infinite Abelian Groups by Laszlo Fuchs (the title of the chapter is "Additive groups of rings"). EDIT: I briefly sum up some of the matters treated by Fuchs. Let me start by quoting his final "note". The problem of defining ring structures on an additive group was raised by Beaumont who considered rings on direct sums of cyclic groups. Nearly at the same time, Szele investigated zero-rings, and Rédei and Szele and Beaumont and Zuckerman described the rings on subgroups of the rationals. A rather systematic study of constructing rings on a group appeared in Fuchs where the fundamental role of the basic subgroup was pointed out. More satisfactory results have been obtained for torsion-free groups of finite rank by Beaumont and Pierce [...]. It would be a serious mistake to expect too much from a study of the additive structures of rings, as far as ring theory is concerned. In many important cases the additive structures are too trivial [e.g., torsion-free divisible or an elementary $p$-group] to give any real information about the ring structure.
This especially applies to the torsion-free case, where a close interrelation between the additive and the multiplicative structures can be expected only if the additive group is more complicated. One should, however, remember that there are intriguing questions even if the additive group is too easy to describe; for instance, we do not know of any uncountable Noetherian ring whose additive group is free. Let me say that your question is open in general but there are answers for particular classes of groups, such as torsion-free groups of rank one, where one can not only say when a ring structure exists but also encode somehow all the possible ring structure, even non-associative or non-unitary. The fact is that one of the tools for studying ring structures on a group $G$ is the group of multiplications on $G$, $\operatorname{Mult}(G)$. This is a group in general only if you include non-associative multiplications. The chapter starts studying general properties of $\operatorname{Mult}(G)$ (for example one can show that $\operatorname{Mult(G)}\cong\operatorname{Hom}(G\otimes G,G)$). After that it passes to general conditions on torsion and torsion-free groups (especially divisible, or finite rank, or even rank $1$ for the best results). Notice that the main focus is not exactly on your question but it is on an even more delicate matter, that is, given a group $G$ try to classify all the rings that have $G$ as underlying group. Proceeding with the chapter more particular questions are investigated. For example, one can give a very precise form for the additive groups of Artinian rings (see Theorem 122.4). There are also precise results for regular rings and the chapter ends (as all the chapters of this book) asking some intriguing questions on the matter. It may happen that this classical book is not up to date as it goes back to 1973 but, in my experience, you can be quite sure that, if something related to Abelian groups was known before 1973, then it is in this book. I was looking for more recent information about your question something like 3 years ago but, as far as I remember, I could not find anything relevant after Fuchs' book.
H: Continuous extension of a Bounded Holomorphic Function on $\mathbb{C}\setminus K$ Let $f:\mathbb{C}\setminus K\rightarrow\mathbb{D}$ be a holomorphic map, where $K$ is a compact set with empty interior. My question: Prove or disprove that: $f$ extends continuously on $\mathbb{C}.$ Remark: Observe that if $K$ is discrete then by the Riemann Removable Singularity Theorem we know that infact there is a holomorphic extension. AI: Here is an example from Which sets are removable for holomorphic functions? The function $f=z+z^{-1}$ maps the punctured unit disk $\mathbb D$ bijectively onto $\mathbb C\setminus [-2,2]$. The inverse $g=f^{-1}$ is holomorphic in $\mathbb C\setminus [-2,2]$ and is bounded by $1$, but has no holomorphic (or even continuous) extension to $\mathbb C$. Indeed, $g(z)$ approaches both $i$ and $-i$ as $z\to 0$. A compact set $K$ is removable for bounded holomorphic functions if and only if its analytic capacity is zero. A simple sufficient condition was given by Painlevé: if the $1$-dimensional Hausdorff measure of $K$ is zero, then it's removable for bounded holomorphic functions. That is, every bounded holomorphic function on $\mathbb C\setminus K$ extends to a holomorphic function on $\mathbb C$ (which is necessarily constant by Liouville's theorem).
H: Contour integration of $\int \frac{dx} {(1+x^2)^{n+1}}$ I want to compute $$ \int_{-\infty}^\infty \frac 1{ (1+x^2)^{n+1}} dx $$ for $n \in \mathbb N_{\geq 1}$. If I let $$ f(z) := \frac 1 {(z+i)^{n+1}(z-i)^{n+1}} $$ then I see that $f$ has poles of order $n+1$ in $\pm i$. Initially I thought that $$ [-R,R] \cup \{R \exp(i \theta) \mid \theta \in [0,\pi] \} $$ would be a good contour but for the residue in $i$ I get $(-1)^n (n+1) 2 i$. I know that the result must be $$ \frac {1 \cdot 3 \cdot 5 \cdots (2n-1)}{2 \cdot 4 \cdots 2 n} \pi $$ which looks quite different than my residue. What goes wrong here? AI: I think that both of you made a mistake in calculating the derivatives. $$\underset{z=i}{\operatorname{res}}f(z)=\underset{z=i}{\operatorname{res}}\frac{1}{\left(z+i\right)^{n+1}\left(z-i\right)^{n+1}}=\frac{1}{n!}\underset{z\to i}\lim \frac{d^n}{dz^n}\left(\frac{\left(z-i\right)^{n+1}}{\left(z-i\right)^{n+1}\left(z+i\right)^{n+1}}\right)=\\ =\frac{1}{n!}\underset{z\to i}\lim \frac{d^n}{dz^n}\left(\frac{1}{\left(z+i\right)^{n+1}}\right)=\frac{1}{n!}\underset{z\to i}\lim \frac{d^{n-1}}{dz^{n-1}}\left(\frac{(-1)(n+1)}{\left(z+i\right)^{n+2}}\right)=\\ =\frac{1}{n!}\underset{z\to i}\lim \frac{d^{n-2}}{dz^{n-2}}\left(\frac{\left(-1\right)^2(n+1)(n+2)}{\left(z+i\right)^{n+3}}\right)=\cdots=\frac{1}{n!}\underset{z\to i}\lim \left(\frac{\left(-1\right)^n(n+1)(n+2)\cdots 2n}{\left(z+i\right)^{2n+1}}\right)=\\ =\frac{1}{n!}\left(\frac{\left(-1\right)^n(n+1)(n+2)\cdots 2n}{2^{2n+1}i^{2n+1}}\right)$$
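One can also let SymPy confirm the closed form for a small concrete $n$ (my own check; note that $\pi\binom{2n}{n}/4^n$ is just the double-factorial expression rewritten):

```python
import sympy as sp

x = sp.symbols('x')
n = 3                                       # any small concrete value
lhs = sp.integrate(1/(1 + x**2)**(n + 1), (x, -sp.oo, sp.oo))
rhs = sp.pi * sp.binomial(2*n, n) / 4**n    # = pi * (2n-1)!! / (2n)!!
print(lhs, rhs, sp.simplify(lhs - rhs))     # the difference is 0
```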
H: Norms in extended fields Let's have some notation to start with: $K$ is a number field and $L$ is an extension of $K$. Let $\mathfrak{p}$ be a prime ideal in $K$ and let its norm with respect to $K$ be denoted as $N_{\mathbb{Q}}^K(\mathfrak{p})$. My question is this: If $|L:K|=n$, what is $N_{\mathbb{Q}}^L(\mathfrak{p})$? I would like to think that $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^n$, i.e. if $L$ is a quadratic extension of $K$, then $N_{\mathbb{Q}}^L(\mathfrak{p})=\left(N_{\mathbb{Q}}^K(\mathfrak{p})\right)^2$. Is this right? I feel that the proof would involve using the definition that $N_K^L(x)$ is the determinant of the multiplication-by-$x$ matrix (here, $K$ and $L$ are arbitrary). Thanks! AI: The norm is the product over all conjugates and there are $[L:K]$ times as many conjugates (i.e. $[L:K]$ conjugates over each conjugate of $K$), so your guess is correct.
H: $G$ is a 2-vertex-connected graph if and only if every 2 vertices of $G$ lie on a simple cycle. $G$ is a $2$-vertex-connected graph if and only if every $2$ vertices of $G$ lie on a simple cycle. $\implies$ If $G$ is a $2$-vertex-connected graph, then every $v \in V(G)$ has $\deg(v) \ge 2$: if some $v_1$ had $\deg(v_1) = 1$, we could erase its neighbouring vertex and the graph would no longer be connected, so it would not be $2$-vertex-connected. So since every $v \in V(G)$ has $\deg(v) \ge 2$, there exists a cycle: starting at $v$, we have to come back to this vertex eventually. $\impliedby$ I don't know. AI: I think your forwards implication is a little vague. Take two vertices $u$ and $v$. We want to find a simple cycle containing them. As you've noted, each vertex must have at least two neighbours. So let $u_1$ and $u_2$ be two neighbours of $u$, and consider the graph we get from deleting $u$. We know by assumption that it is still connected, so we can find paths from $u_1$ to $v$ and $u_2$ to $v$. Joining these together will give us a simple cycle containing $u$ and $v$, so we're done — provided the two paths don't intersect. Can every such path have an intersection? Explain why not. For the reverse implication, take a vertex $x$ which we want to delete, and two vertices $a$ and $b$ which we want to connect in the resulting graph. Using the fact that we can conjure up a simple cycle containing $a$ and $b$, show that we can get a path in $G \setminus \{ x \}$ containing $a$ and $b$.
H: How to get the radius of an ellipse at a specific angle by knowing its semi-major and semi-minor axes? How to get the radius of an ellipse at a specific angle by knowing its semi-major and semi-minor axes? Please take a look at this picture : AI: The polar form of the equation for an ellipse with "horizontal" semi-axis $a$ and "vertical" semi-axis $b$ is $$r = \frac{ab}{\sqrt{a^2\sin^2\theta+b^2\cos^2\theta}}$$ Here, $\theta$ represents the angle measured from the horizontal axis ($30.5^\circ$ in your case), and $r$ is the distance from the center to the point in question (the radius you seek).
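In code this is a one-liner; here is a small sketch (the semi-axes 5 and 3 below are made-up values, only the $30.5^\circ$ angle comes from the picture):

```python
import math

def ellipse_radius(a, b, theta_deg):
    """Distance from the center to the ellipse at angle theta from the major axis."""
    t = math.radians(theta_deg)
    return a * b / math.sqrt((a * math.sin(t))**2 + (b * math.cos(t))**2)

print(ellipse_radius(5.0, 3.0, 30.5))   # radius at 30.5 degrees for a=5, b=3
```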
H: Prime Mean of Primes 1) Are there infinitely many primes $p_1,p_2$ such that $\frac{p_1+p_2}{2}$ is also prime? 2) What can we say about the more general problem : Are there infinitely many primes $p_1,p_2,\cdots, p_n$ such that $\frac{p_1+p_2+\cdots+p_n}{n}$ is also a prime for $n\geq 1$? $\mathbf{Remark}$: Note that for $n=1$, the problem 2 is solved by Euclid's theorem. AI: Both cases follow from the fact that there exist progressions in primes of length $k,$ for each $k\ge 2.$ This is Green-Tao Theorem. Remark: case $n=2$ was proved by K.Roth.
H: How to find asymptotics of integrand? Let $ f \in C ([0, \infty)) $ be s. t. $$f(x) \int_0^x f(t)^2 dt \to 1, x \to \infty.$$ How to prove that $f(x) \sim \left( \frac 1 {3x} \right)^{1/3} $ as $x \to \infty?$ AI: Define $$ F(x)=\int_0^xf(t)^2\,\mathrm{d}t\tag{1} $$ For any $\epsilon\gt0$ there is an $M$, so that if $x\ge M$, $$ 1-\epsilon\le f(x)F(x)\le1+\epsilon\tag{2} $$ Squaring yields $$ (1-\epsilon)^2 \le F(x)^2\frac{\mathrm{d}}{\mathrm{d}x}F(x) \le(1+\epsilon)^2\tag{3} $$ Thus, $$ 3(1-\epsilon)^2 \le\frac{\mathrm{d}}{\mathrm{d}x}F(x)^3 \le3(1+\epsilon)^2\tag{4} $$ and so, $$ \left(3(1-\epsilon)^2x+C\right)^{1/3}\le F(x)\le\left(3(1+\epsilon)^2x+C\right)^{1/3}\tag{5} $$ and finally, $$ \frac{1-\epsilon}{\left(3(1+\epsilon)^2x+C\right)^{1/3}} \le f(x) \le\frac{1+\epsilon}{\left(3(1-\epsilon)^2x+C\right)^{1/3}}\tag{6} $$ Therefore, by the Squeeze Theorem, $$ \lim_{x\to\infty}f(x)(3x)^{1/3}=1\tag{7} $$
H: An Orlicz norm is a norm I had asked a question pertaining to Orlicz norms here. However, in the book I was reading, it said (and I paraphrase) "It's not difficult to show it is a norm on the space of integrable random variables and for which $\|X\|_{\psi}$ is finite". So I decided to prove this. To recap, if $\psi$ is a monotone nondecreasing, convex function with $\psi(0) = 0$, the Orlicz norm of an integrable random variable X wrt $\psi$ is given by $$\|X\|_{\psi} = \inf \left\{u>0: \mathbb{E}\left[\psi\left(\frac{|X|}{u}\right) \right] \leq 1\right\}$$ Here is what I got so far: 1) $\|aX\|_{\psi} = |a|\|X\|_{\psi} \quad \forall a \in \mathbb{R}$ $$ \|aX\|_{\psi} = \inf \left\{u>0: \mathbb{E}\left[\psi\left(\frac{|aX|}{u}\right) \right] \leq 1\right\}$$ Replace $u$ with $|a|v$ $$ = \inf \left\{|a|v>0: \mathbb{E}\left[\psi\left(\frac{|X|}{v}\right) \right] \leq 1\right\}$$ $$ = |a|\inf \left\{v>0: \mathbb{E}\left[\psi\left(\frac{|X|}{v}\right) \right] \leq 1\right\}$$ $$ = |a|\|X\|_{\psi}$$ 2) $\|X\|_{\psi} = 0 \iff X = 0 \quad a.e$ ($\Leftarrow$) This directly follows from substitution. You will however have to split the expectation into two parts, one where $X=0$ and the other where it isn't. But I got it nonetheless. Now for ($\Rightarrow$). If $\|X\|_{\psi} = 0$, then $\forall u > 0$, $$ \mathbb{E}\left[\psi\left(\frac{|X|}{u}\right) \right] \leq 1$$ Now suppose, the implication were false, then let $A = \left\{\omega \subset \Omega : X(\omega) \neq 0\right\}$. Now by Jensen's Inequality, $$\psi \left[\mathbb{E}\left(\frac{|X|1_A}{u}\right) \right] \leq \mathbb{E}\left[\psi\left(\frac{|X1_A|}{u}\right) \right]$$ Now unless I assume that $\psi$ is unbounded, I don't know how to proceed. Incidentally, in the book, the $\psi$ they use are all unbounded. If it is unbounded, then for a sufficiently small $u$, I can "cross" 1, which will be a desired contradiction. So I need help for the bounded case. 3) Triangle inequality I tried several convexity tricks, but I didn't get this. Essentially, we have to show $$\|X+Y\|_{\psi} \leq \|X\|_{\psi} +\|Y\|_{\psi}$$ Perhaps there is a trick I haven't tried... Anyway I would appreciate any hints, tips and answers for the parts I didn't get. Thanks for your time. AI: For triangle inequality, take $X,Y$ for which $\lVert\cdot\rVert_{\psi}$ is finite, and for a fixed $\varepsilon$, consider $u$ and $v$ such that $u-\lVert X\rVert_\psi<\varepsilon$, $v-\lVert Y\rVert_\psi<\varepsilon$, and $$\max\left\{E\left[\psi\left(\frac{|X|}u\right)\right],E\left[\psi\left(\frac{|Y|}v\right)\right]\right\}\leqslant 1.$$ Then \begin{align} \psi\left(\frac{|X+Y|}{u+v}\right)&\leqslant\psi\left(\frac{|X|+|Y|}{u+v}\right)&\phi\mbox{ is non-decreasing}\\ &=\psi\left(\frac u{u+v}\frac{|X|}u+\frac v{u+v}\frac{|Y|}v\right)&\\ &\leqslant \frac u{u+v}\psi\left(\frac{|X|}u\right)+\frac v{u+v}\psi\left(\frac{|Y|}v\right)&\mbox{ by convexity}. \end{align} This is what we do when we have the definition of the Orlicz norm. Once can check that actually, the infimum is attained (taking a sequence $(u_n,n\geqslant 1)$ approaching the infinimum, and using Fatou's lemma). This will avoid the reasoning with $\varepsilon$.
H: Kernel of a linear map One often says that the kernel of a linear functional has codimension 1. Okay, consider $\delta_1\colon C[0,1]\to \mathbb{R}$ given by $\delta_1(f)=f(1)$. Then for each monomial $f_n(t)=t^n$ we have $\delta_1(f_n)=1$ and $\{f_n\colon n\in \mathbb{N}\}$ are linearly independent... What am I confusing here? AI: Take a look at the kernel: $\forall n,\,\forall m\, f_m-f_n\in \text{Ker}\, \delta_1$.
H: Example of a non-affine irreducible scheme What are basic examples for irreducible schemes which are not affine? What happens if I also demand the scheme to be Noetherian and/or locally Noetherian? AI: I think the plane minus the origin is an example, and it's Noetherian if you take it over a field: It is covered by the open sets $V(x)^c$ and $V(y)^c$ where $V(x) = \text{Spec} k[x,y]/(x)$ and similarly for $V(y)$. Taking their complements gives open sets in $\text{Spec}k[x,y] = \mathbb{A}_k^2$ and so around any point in the plane we have an open set which is an affine scheme (exercise II.2.1 in Hartshorne shows that $D(f) \simeq \text{Spec} A_f$ as schemes, so in this case, $V(x)^c = D(x) \simeq \text{Spec} k[x,y]_{x}$ and similarly for $D(y)$). This scheme is irreducible by example I.1.1.3 in Hartshorne, since the plane minus a point is a nonempty open subset of an irreducible space, $\mathbb{A}^2_k$ (the origin is closed, being the variety cut out by $(x,y)$).
H: Prove that $\frac{1}{2\pi}\frac{xdy-ydx}{x^2+y^2}$ is closed I would like to prove that $\alpha = \frac{1}{2\pi} \frac{xdy-ydx}{x^2+y^2}$ is a closed differential form on $\mathbb{R}^2-\{0\}$ . However when I apply the external derivative to this expression (and ignore the $\frac{1}{2\pi}\cdot\frac{1}{x^2+y^2}$ factor ), I get: \begin{equation} d \alpha = \frac{\partial x}{\partial x}dx\wedge dy + \frac{\partial x}{\partial y}dy\wedge dy - \frac{\partial y}{\partial x}dx\wedge dx - \frac{\partial y}{\partial y}dy\wedge dx \end{equation} \begin{equation} d \alpha = dx\wedge dy - dy\wedge dx \end{equation} \begin{equation} d \alpha = 2 dx\wedge dy \end{equation} Which is not closed on $\mathbb{R}^2-\{0\}$. Where is my mistake ? AI: Well, let us write $\alpha=f\cdot \omega $ with $f(x,y)=\frac{1}{x^2+y^2}$ and $\omega=xdy-ydx$. Then \begin{align} d\alpha=df\wedge \omega+fd\omega&=-\frac{1}{(x^2+y^2)^2}\left(2xdx+2ydy\right)\wedge\omega+ \frac{1}{x^2+y^2}2dx\wedge dy=\\ &=-\frac{1}{(x^2+y^2)^2}\left(2x^2 dx\wedge dy-2y^2dy\wedge dx\right)+ \frac{1}{x^2+y^2}2dx\wedge dy=\\ &=-\frac{1}{(x^2+y^2)^2}\cdot 2\left(x^2+y^2\right) dx\wedge dy+ \frac{1}{x^2+y^2}2dx\wedge dy= \\ &=0. \end{align}
H: Metric on a set Can someone provide a hint for solving the following. Show that $d:(R^{\infty})^2\to R_+$ is a metric. $$d(x, y)=\sqrt{\sum_{i=0}^{\infty}{(x_i-y_i)^2}}$$ I need a hint for showing that $d$ satisfies the triangle inequality AI: $\mathbf{Hint:}$ Use Cauchy-Schwarz inequality.
H: If $M/G_1$ and $M/G_2$ are Noetherians, then $M/(G_1\cap G_2)$ is Noetherian. I'm trying to prove if $M$ is a module, $G_1$, $G_2$ submodules of $M$ and $M/G_1$ and $M/G_2$ Noetherians, then $M/(G_1\cap G_2)$ is Noetherian. I've tried by brute force (writing down explicitly an ascending chain), by second fundamental theorem of isomorphisms, etc... without success. I really need help. Thanks a lot. AI: Consider the canonical map $$\Phi \colon M \to M/G_1 \times M/G_2; \quad \Phi(m) = ([m]_{G_1},\, [m]_{G_2}).$$ The kernel is $\ker \Phi = G_1 \cap G_2$, so $\Phi$ induces an embedding $$\varphi \colon M/(G_1 \cap G_2) \hookrightarrow M/G_1 \times M/G_2.$$ $P = M/G_1 \times M/G_2$ is Noetherian (why?), $M/(G_1\cap G_2)$ is isomorphic to a submodule of $P$. A submodule of a Noetherian module is Noetherian (why?).
H: Functions whose derivatives can be written as a function of themselves? For what kinds of functions $f: \mathbb{R} \to \mathbb{R}$ can the derivative be written as a function of $f$ itself, i.e. $f'(x) = g(f(x))$ for some function $g$? If $f$ is given, can $g$ be solved in terms of the symbol $f$ (not in terms of specific $f$), if $g$ exists? My question is related to part 3 of another question of mine, which asks about when the variance can be represented as a function of the mean, both as functions of a distribution parameter, and in particular, when the variance is the derivative of the mean. Thanks! AI: A necessary and sufficient condition is that $[f(x_1)=f(x_2)\implies f'(x_1)=f'(x_2)]$. When this condition is met, one can define $g$ as follows: If $t$ is not in $f(\mathbb R)$, then $g(t)=0$. If $t$ is in $f(\mathbb R)$, then $g(t)=f'(x)$ for any $x$ such that $t=f(x)$. The condition above is what is needed for the second part of this definition to be independent of the choice of $x$. Thus, strictly monotone (smooth) functions $f$ are all right but $f=\cos$ is not.
H: How do we know $p/q$ can be expressed as a terminating fraction in base $B$ only if prime factors of $q$ are prime factors of $B$? On cs.stackexchange I asked a math question: How to demonstrate only 4 numbers between two integers are multiples of .01 and also writable as binary. Yuval Filmus answered with an explanation depending on knowledge that "a (reduced) rational number $p/q$ can be represented exactly in base $B$ if and only if all prime factors of $q$ are prime factors of $B$." I've tried googling to find how that's known and I've found a couple other posts that mention it as a known fact. It's not self-evident to me, should it be? Is it practical to demonstrate to someone with no background in number theory, or is it a theorem with a recognized name? Or just one of those things that's theorem 8.2 in one textbook and Theorem 24 in another? I'm just asking out of curiosity - the original question arises in talking about why not to store currency in variables of type double or float. Thanks AI: It is an immediate fact. If $\frac{p}{q}$ terminates after $N$ digits in base $b$, then $\frac{p}{q} \times b^N$ is an integer $M$. Hence, $pb^N = q M$. Since $\gcd(p,q)=1$ (coprime), by Euclid's Lemma (or obviously) $q | b^N$. So the prime factors of $q$ must be prime factors of $b$. Conversely, if every prime factor of $q$ divides $b$, then $q | b^N$ for $N$ large enough, so $\frac{p}{q} = \frac{p\,(b^N/q)}{b^N}$ has a terminating expansion with at most $N$ digits after the point.
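The criterion is also easy to test mechanically; here is a small sketch (my own addition) that strips from the denominator every factor shared with the base and checks whether anything is left:

```python
import math
from fractions import Fraction

def terminates_in_base(frac: Fraction, base: int) -> bool:
    """True iff frac (in lowest terms) has a finite expansion in the given base."""
    q = frac.denominator
    g = math.gcd(q, base)
    while g > 1:                 # strip the factors q shares with the base
        while q % g == 0:
            q //= g
        g = math.gcd(q, base)
    return q == 1                # nothing left => every prime factor of q divides base

print(terminates_in_base(Fraction(1, 8), 10))    # True:  0.125
print(terminates_in_base(Fraction(1, 3), 10))    # False: 0.333...
print(terminates_in_base(Fraction(3, 100), 2))   # False: 3/100 is not a dyadic rational
```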
H: Finitely many non-convergent ultrafilters I am trying to prove that if a space $X$ has finitely many non-convergent ultrafilters, then every non-convergent ultrafilter $\mathcal U$ contains a set $A$ that is not contained in any of other non-convergent ultrafilters. I honestly have no idea why this should be true, why anyone would think of it, and how to go about proving it. AI: This has nothing to do with convergence. Given any finite family of distinct ultrafilters, say $U_1,\dots,U_n$, on a set $X$, each $U_i$ contains a set $A$ that is in none of the others. To prove it, fix $i$ and do the following for each $j\neq i$. Since $U_i\neq U_j$, there is a set $B_j$ that is in one of $U_i$ and $U_j$ but not the other. If it's in $U_i$ but not $U_j$, set $C_j=B_j$; if, on the other hand, $B_j$ is in $U_j$ but not $U_i$, then set $C_j=X-B_j$. Either way, $C_j$ is in $U_i$ but not in $U_j$. Now set $A=\bigcap_{j\neq i}C_j$. This $A$ is in $U_i$ because it's the intersection of finitely many sets from $U_i$. If $j\neq i$ then $A\notin U_j$ because $A\subseteq C_j$ and $C_j\notin U_j$.
H: For real numbers $x$ and $y$, show that $\frac{x^2 + y^2}{4} < e^{x+y-2} $ Show that for $x$, $y$ real numbers, $0<x$ , $0<y$ $$\left(\frac{x^2 + y^2}{4}\right) < e^{x+y-2}. $$ Someone can help me with this please... AI: From the well known inequality: $1+z\leq e^z$, we replace $z$ by $\frac{z}{2}-1$ to get $\frac{z}{2}\leq e^{\frac{z}{2}-1}$. Square both sides to get $\frac{z^2}{4}\leq e^{z-2}$. Now let $z=x+y$, to get: $\frac{(x+y)^2}{4}\leq e^{x+y-2}$. Hence: $$\frac{x^2}{4}+\frac{y^2}{4}<\frac{(x+y)^2}{4}\leq e^{x+y-2}$$ Note: $\frac{x^2}{4}+\frac{y^2}{4}< \frac{(x+y)^2}{4}$ because $x,y> 0$
H: In how many ways can $3$ balls be tossed into $3$ boxes? $3$ balls are tossed into $3$ boxes. In how many ways can that be done? Well, I did it in the following way. We have $5$ objects: $3$ balls and $2$ walls of boxes. So a configuration, for example: 00|0| meaning: $2$ balls are in the left box, one ball is in the center box and one box is empty. Thus, to count all ways I took ${5 \choose 2} = 10$. But the answer is $3^3$. Where am I wrong? AI: Let's consider the general situation with $n$ balls and $k$ boxes. Then you have four different cases: If both the balls and the boxes are distinguishable (what the comments and the other answer termed "labelled"), then it matters which ball goes to which box, and you have a total of $k^n$ possibilities, for three balls and three boxes that is $3^3$. That's obviously what was assumed for the given answer. If the boxes are distinguishable, but the balls are not, then your reasoning is exactly the correct one, and you get a total of ${n+k-1\choose n} = {n+k-1 \choose k-1}$ possibilities. For $n=k=3$, ${5 \choose 2}$. If the balls are distinguishable, but the boxes are not, I don't know a general formula, but you can calculate the number as follows: The first ball has only a single choice, since all boxes are indistinguishable. The second ball has two choices (unless there's only one box total): Either it goes into the same box as the first, or it goes to a new one. As long as there's more than one unoccupied box, any time a ball goes to a new box, the number of choices for the later balls grows by one. As soon as all but one box contain a ball, the boxes are effectively labelled by the balls inside, and therefore the rest of the balls (if any) follow the rules of labelled boxes. For three balls and three boxes, this way we get five possibilities: 123||, 12|3|, 13|2|, 1|23|, 1|2|3 Edit: For $n=k$, the number is given by Sloane's A000110 If neither the balls nor the boxes are distinguishable, the number of possibilities equals the number of ways to write sums of exactly $k$ nonnegative integers adding up to $n$. As was noted in the comments, for three balls and three boxes this gives just three possibilities: ooo||, oo|o|, o|o|o
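For the concrete $3$-ball, $3$-box case, all four counts can be checked by brute force (my own sketch):

```python
from itertools import product

n, k = 3, 3
assignments = list(product(range(k), repeat=n))      # ball i goes into box assignments[i]

# Case 1: balls and boxes both labelled.
print(len(assignments))                                                   # 27 = 3^3

# Case 2: boxes labelled, balls not: only the occupancy vector matters.
print(len({tuple(a.count(j) for j in range(k)) for a in assignments}))    # 10 = C(5,2)

# Case 3: balls labelled, boxes not: only the partition of the balls matters.
def forget_boxes(a):
    blocks = [tuple(i for i, box in enumerate(a) if box == j) for j in range(k)]
    return tuple(sorted(b for b in blocks if b))
print(len({forget_boxes(a) for a in assignments}))                        # 5

# Case 4: nothing labelled: only the sorted box sizes matter.
print(len({tuple(sorted(a.count(j) for j in range(k))) for a in assignments}))  # 3
```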
H: Proof Complex positive definite => self-adjoint I am looking for a proof of the theorem that says: A is a complex positive definite endomorphism and therefore is A self-adjoint. Does anybody of you know how to do this? AI: In general, for $A:H\longrightarrow H$ bounded linear operator on a Hilbert space $H$, $A\geq 0$ implies $A^*=A$. Where $A\geq 0$ means $(Ax,x)\geq 0$ for every $x\in H$. Note that we could more generally simply assume that $(Ax,x)\in \mathbb{R}$ for every $x$ in $H$. By assumption, $(Ax,x)\in\mathbb{R}$ whence $$ (Ax,x)=\overline{(Ax,x)}=(x,Ax)=(A^*x,x)\quad \Rightarrow \quad ((A-A^*)x,x)=0\quad \forall x\in H. $$ So it boils down to the following key property, which is false in the real case. Fact: if $T$ is a (not necessarily bounded) linear operator on a complex Hilbert space $H$ such that $(Tx,x)=0$ for every $x\in H$, then $T=0$. Proof: the usual polarization tricks, assuming semi-linearity in the first variable. With $x+y$, we get $$ 0=(T(x+y),x+y)=(Tx,y)+(Ty,x). $$ And with $x+iy$, $$ 0=(T(x+iy),x+iy)=i(Tx,y)-i(Ty,x). $$ It follows that $(Tx,y)=0$ for every $x,y$, whence $Tx=0$ for every $x$. $\Box$.
H: limit question: $\lim\limits_{n\to \infty } \frac{n}{2^n}=0$ $$ \lim_{n\to\infty}\frac n{2^n}=0. $$ I know how to prove it by using the trick, $2^n=(1+1)^n=1+n+\frac{n(n-1)}{2}+\text{...}$ But how to prove it without using this? AI: Let's do something different!! Note that the sequence $\{\frac{n}{2^n}\}_{n\geq 1}$ is decreasing ( easy to prove) and bounded from below by $0$, thus $\lim_{n\rightarrow \infty}\frac{n}{2^n}$ exists. Call it $L$. $$L=\lim_{n\rightarrow \infty}\frac{n}{2^n}=\lim_{n\rightarrow \infty}\frac{(2n)}{2^{(2n)}}=\lim_{n\rightarrow\infty}(\frac{1}{2^{n-1}}\frac{n}{2^n})=[\lim_{n\rightarrow\infty}\frac{1}{2^{n-1}}][\lim_{n\rightarrow \infty}\frac{n}{2^n}]=(0)(L)=0$$
H: Show that $|z-z_1|^2 + |z-z_2|^2 = |z_1 - z_2|^2$ The problem is: if $z$ lies on a circle with diameter having endpoints $z_1$ and $z_2$ then show that $|z-z_1|^2 + |z-z_2|^2 = |z_1 - z_2|^2$ where $z, z_1, z_2 \in \mathbb{C}$. The angle subtended by the diameter on any point on the circle is a right angle and thus $|z-z_1|$, $|z-z_2|$ and $|z_1-z_2|$ are the lengths of a right-angled triangle. The equation above then follows from the Pythagorean Theorem. Now the equation for $z$ can also be written as $\left|z - \left(\dfrac{z_1+z_2}{2}\right)\right| = \dfrac{|z_1-z_2|}{2}$ since $\left(\dfrac{z_1+z_2}{2}\right)$ is the centre and $\dfrac{|z_1-z_2|}{2}$ is the radius of the circle. But since the locus of both equations is the same, I figured that it should be possible to prove them equal. So here's what I did: $$\begin{align} \left|z - \left(\dfrac{z_1+z_2}{2}\right)\right| = \dfrac{|z_1-z_2|}{2} &\iff |2z - (z_1+z_2)| =|z_1-z_2| \\ &\iff |(z - z_1)+ (z-z_2)|^2=|z_1-z_2|^2 \end{align}$$ Using $|z_1+z_2|^2=|z_1|^2 + |z_2|^2 + 2\Re{(z_1\overline{z_2})}$, $|z - z_1|^2+|z-z_2|^2 + 2\Re{((z-z_1)\overline{(z-z_2)})}=|z_1-z_2|^2$. Comparing it with what I have to show, it seems like I have to prove $\Re{((z-z_1)\overline{(z-z_2)})} = 0$, but I can't think of a way of doing that. Thank you in advance! Edit: To sum it up, my question is how to show that $$\left|z - \left(\dfrac{z_1+z_2}{2}\right)\right| = \dfrac{|z_1-z_2|}{2}\iff|z-z_1|^2 + |z-z_2|^2 = |z_1 - z_2|^2$$ $\,\,\,\,\,\,\,\,\,\,\,\,\,\,$where $z, z_1, z_2 \in \mathbb{C}$. AI: After reading the comment, the question seems to be If $z_1$ and $z_2$ are at different ends of the diameter of a circle, and $z$ is also on that circle, prove $$ \mathrm{Re}((z-z_1)(\overline{z-z_2}))=0 $$ Method 1: One way to see this is to note that $\mathrm{Re}(z\bar{w})=z\cdot w$ (dot product as vectors). Since $z-z_1$ and $z-z_2$ are perpendicular, $(z-z_1)\cdot(z-z_2)=0$. Method 2: Another way of looking at this, is to let $c$ be the center of the circle and let $w=z-c$, $w_1=z_1-c$, and $w_2=z_2-c$. Then $w_2=-w_1$ and $|w|=|w_1|=|w_2|=r$ $$ \begin{align} (z-z_1)(\overline{z-z_2}) &=(w-w_1)(\overline{w-w_2})\\ &=w\bar{w}+w_1\bar{w}_2-w\bar{w}_2-\bar{w}w_1\\ &=w\bar{w}-w_1\bar{w}_1+(w\bar{w}_1-\bar{w}w_1)\\ &=r^2-r^2+2i\,\mathrm{Im}(w\bar{w}_1)\\ &=2i\,\mathrm{Im}(w\bar{w}_1) \end{align} $$ which is pure imaginary. Therefore, $$ \mathrm{Re}((z-z_1)(\overline{z-z_2}))=0 $$
H: Angle limit problem I have been trying to interpret orientation angle data retrieved from a sensor device. It returns the angle, in radians, towards North that the device is measuring at the moment. The problem I am having right now is that these measurements are noisy and need some filtering, so I use a moving average of, say, 50 measurements. But since the angles lie in the $[-\pi, \pi]$ interval, when the direction approaches the limit the average is affected in an undesired way: readings just below $\pi$ and just above $-\pi$ cancel each other. How can I achieve a moving-average orientation signal that isn't affected by these frequent limit transitions? The resulting average may "jump around" when near the limits; this is expected and acceptable, since it ends up being equivalent. AI: If you are willing to assume the heading doesn't change too much between readings, you should just add or subtract $2 \pi$ to make the new reading be as close to the previous reading (or the current average) as possible. If the sensor is rotating slowly clockwise the reading will slowly increase. As it reaches $\pi$ you will start getting readings that are almost $-\pi$. Adding $2 \pi$ will make them slightly more than $+\pi$, which is what you want.
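A minimal sketch of this idea in Python (the function names and the window size of 50 are illustrative, not part of the original answer): unwrap each new reading by a multiple of $2\pi$ so it lies closest to the previous one, then average the unwrapped signal. The averages may drift outside $[-\pi,\pi]$; wrap them back at the end if needed.

```python
import math

def unwrap(prev, new):
    # shift `new` by a multiple of 2*pi so it lies as close as possible to `prev`
    k = round((prev - new) / (2 * math.pi))
    return new + 2 * math.pi * k

def smoothed_headings(readings, window=50):
    # readings: raw angles in [-pi, pi]; returns moving averages of the unwrapped signal
    unwrapped, averages = [], []
    for r in readings:
        r = r if not unwrapped else unwrap(unwrapped[-1], r)
        unwrapped.append(r)
        recent = unwrapped[-window:]
        averages.append(sum(recent) / len(recent))
    return averages
```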
H: Are $p$-groups Engel groups? A group $G$ is termed Engel if whenever $x, y \in G$, there exists an integer $n$ (depending on $x$ and $y$) such that $[x,y,\dots,y]=1$, where $y$ occurs $n$ times. Is it true that every infinite $p$-group is Engel? AI: Well, nilpotent groups are almost trivially $n$-Engel, with $n$ equal to the group's nilpotency class. Since finite $p$-groups are nilpotent, they are Engel groups. About infinite $p$-groups I can't be sure, but I think in general the answer is negative, since there are simple (infinite) $p$-groups...
H: Number of combinations when you can choose none or multiple options for a question. I would like to know both the formula and the math name for such a combination. Simple example: 1 question with 3 options; you can choose none, one or multiple options. How can I calculate the number of combinations in such a case? AI: Consider a set $S$ with options $\{a,b, c\}$. The power set of $S$ is the set of all possible subsets, viz $\{\varnothing, \{a\}, \{b\},\{c\}, \{a,b\}, \{a,c\}, \{b,c\}, \{a,b, c\}\}$. If there are $n$ options, then there are $2^n$ choices. Proof: Consider the set $S$. For each option, we have $2$ choices: we can either include it or not include it. By the fundamental principle of counting, we multiply these to get that there are $2^{|S|}$ choices.
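A quick way to see the count concretely (a small Python check, not part of the original answer):

```python
from itertools import combinations

options = ["a", "b", "c"]
# enumerate all subsets of every size, from the empty set up to the full set
subsets = [s for r in range(len(options) + 1) for s in combinations(options, r)]
print(len(subsets))  # 8 == 2**3
print(subsets)       # (), ('a',), ('b',), ('c',), ('a', 'b'), ('a', 'c'), ('b', 'c'), ('a', 'b', 'c')
```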
H: Can $\Phi^{-1}(x)$ be written in terms of $\operatorname{erf}^{-1}(x)$? Can the inverse CDF of a standard normal variable $\Phi^{-1}(x)$ be written in terms of the inverse error function $\operatorname{erf}^{-1}(x)$, and, if so, how? This seems like an easy question, but I am struggling with it. I know that $\Phi(x)=\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]$ simply by the change of variables in the definitions of $\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^xe^{-t^2/2}dt$ and $\operatorname{erf}(x)=\frac{1}{\sqrt{\pi}}\int_{-x}^xe^{-t^2}dt$, however I can't figure out the inverses. AI: $\Phi(x)=\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]$, which means $\Phi^{-1}\left(\frac{1}{2}\left[1+\operatorname{erf}\left(\frac{x}{\sqrt{2}}\right)\right]\right)=x$ Now the goal is to make what is inside $\Phi^{-1}(\cdot)$ "equal to" $x$. For this purpose, let us put $\sqrt{2}\mathrm{erf}^{-1}(x)$ instead of $x$: $\Phi^{-1}\left(\frac{1}{2}\left[1+x\right]\right)=\sqrt{2}\mathrm{erf}^{-1}(x)$ Now put $2x-1$ instead of $x$: $\Phi^{-1}\left(x\right)=\sqrt{2}\mathrm{erf}^{-1}(2x-1)$. These can be done in a single line of course.
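If it helps to double-check the identity numerically, here is a short sketch using SciPy (assuming SciPy is available; not part of the original answer):

```python
import numpy as np
from scipy.special import erfinv
from scipy.stats import norm

# Phi^{-1}(p) should equal sqrt(2) * erfinv(2p - 1)
for p in (0.01, 0.25, 0.5, 0.9, 0.999):
    print(p, norm.ppf(p), np.sqrt(2) * erfinv(2 * p - 1))  # last two columns agree
```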
H: If $m$ and $n$ are distinct positive integers, then $m\mathbb{Z}$ is not ring-isomorphic to $n\mathbb{Z}$ Show that if $m$ and $n$ are distinct positive integers, then $m\mathbb{Z}$ is not ring-isomorphic to $n\mathbb{Z}$. Can I get some help to solve this problem? AI: Assume you have an isomorphism $\phi: m\mathbb{Z} \rightarrow n\mathbb{Z}$, $m\neq n$. Since $m$ is a generator of $m\mathbb{Z}$, $\phi$ is determined by its value on $m$, which must be $\pm n$ if $\phi$ is to be a bijection. How can you derive a contradiction from this?
H: Set of permutation matrices I'm stuck on this problem. Prove the set $P$ of $n×n$ permutation matrices spans a subspace of dimension $(n−1)^2+1$ AI: Remark that if $M\in \mathit{Span}(P)$, then there is $\lambda$ such that for every row or column $x$ of $M$, the sum of the elements of $x$ is $\lambda$. Let $V$ be the space of matrices satisfying this condition; we have $\mathit{Span}(P)\subseteq V$, and every $M\in V$ is uniquely determined by the entries $M(i,j)$ with $1\leq i,j\leq n-1$ (an $(n-1)\times(n-1)$ block), together with the sum $\lambda$. This implies that $\mathrm{dim}(\mathit{Span}(P))\leq\mathrm{dim}(V)= (n-1)^2+1$. For the other direction, it suffices to show that $\mathrm{dim}(\mathit{Span}(P))\geq (n-1)^2+1$. For this, you can verify that the following set of matrices is independent: $\mathit{Id}$, the transpositions $\tau_{1,x}$, and the $3$-cycles $(1,x,y)$, viewed as permutation matrices, where $1\neq x\neq y\neq 1$. The last argument is from here.
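As an empirical sanity check of the dimension formula (a Python sketch, not part of the original answer), one can stack all $n!$ permutation matrices as flattened vectors and compute the rank of the resulting matrix:

```python
import itertools
import numpy as np

for n in range(2, 6):
    rows = []
    for perm in itertools.permutations(range(n)):
        M = np.zeros((n, n))
        M[np.arange(n), list(perm)] = 1   # permutation matrix sending i to perm[i]
        rows.append(M.ravel())
    rank = np.linalg.matrix_rank(np.array(rows))
    print(n, rank, (n - 1) ** 2 + 1)      # the last two numbers agree
```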
H: Justify conditional If there is an interpretation $I$ in which $A=\forall x (P(x)\rightarrow Q(x))$ is true, then $(P(x)\rightarrow Q(x))$ is not true and is not false in $I$. I need to justify that this is false. So, if $A$ is true for $I$, this means that every valuation on $I$ satisfies $A$. A little more formally, valuation $v$ satisfies $\forall x (P(x)\rightarrow Q(x))$ iff $v$ satisfies $P(x)\rightarrow Q(x)$ and every x-equivalent valuation $v'$ satisfies $P(x)\rightarrow Q(x)$. Therefore, it doesn't exists a valuation on $I$ in which $P(x)\rightarrow Q(x)$ is false, then $P(x)\rightarrow Q(x)$ is true for $I$. Is this ok? AI: I think you did a fine job with the proof. So the only suggestion I'll make is simply a matter of "parsing"/choice of words, not directly related to the "content" of the proof. I'll italicize the word choices, then follow each italicization with a suggested replacement, which you can "take or leave." "... [M]ore formally, valuation $v$ satisfies $\forall x (P(x)\rightarrow Q(x))$ iff [if and only if] $v$ satisfies $P(x)\rightarrow Q(x)$, and every x-equivalent valuation $v'$ satisfies $P(x)\rightarrow Q(x)$. Therefore, it doesn't exists [there does not exist] a valuation on $I$ in [for?] which $P(x)\rightarrow Q(x)$ is false, then [and hence it follows that] $P(x)\rightarrow Q(x)$ is true for $I$."
H: Show that $\int_{(0,1)\times (0,1)} \frac{1}{1-xy} dxdy = \sum_{n=1}^{\infty} \frac{1}{n^2}$ I'm looking for a clever way to show that $$ \int\limits_{(0,1)\times (0,1)} \frac{1}{1-xy} dxdy = \sum_{n=1}^{\infty} \frac{1}{n^2}.$$ All suggestions will be appreciated! AI: Hint: $$\frac{1}{1-xy}=1+(xy)+(xy)^2+....$$ then integrate term by term. (This is not a full solution though and there are some important details that must be included in order to have a rigorous argument)
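If you want to see the term-by-term integration at work, here is a small SymPy sketch (assuming SymPy is available; not part of the original hint):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
k = sp.symbols('k', positive=True, integer=True)

# integrating (xy)^n over the unit square gives 1/(n+1)^2 ...
for n in range(4):
    print(n, sp.integrate((x * y) ** n, (x, 0, 1), (y, 0, 1)))  # 1, 1/4, 1/9, 1/16

# ... and summing these terms recovers the Basel sum
print(sp.summation(1 / k ** 2, (k, 1, sp.oo)))  # pi**2/6
```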
H: Can any finite group be realized as the automorphism group of a directed acyclic graph? We are given a finite group $G$ and wish to find a DAG (directed acyclic graph) $(V,E)$ whose automorphism group is exactly $G$ (a graph automorphism of a graph is a bijective function $f:V\to V$ such that $(u,v)\in E \iff (f(u),f(v))\in E$). A similar (positive) result for undirected graphs is known: Frucht's theorem. My uneducated guess is that the answer to my question is negative, i.e. the automorphism groups of DAG's have some special properties. For directed trees the problem is very simple, and one can show that even $\mathbb{Z}_3$ is not realizable as the automorphism group of a directed tree. However, I can't find a counterexample for DAG's. AI: Given an undirected graph $X=(V,E)$ with automorphism group $G$, we form the directed graph $\hat{X}=(\hat{V},\hat{E})$ in the following way by setting $$\hat{V}=V\cup\{v_e|e\in E\}$$ $$\hat{E}=\{(v,v_{\{v,w\}})\mid \{v,w\}\in E\}.$$ Clearly each $v\in V$ has zero indegree, and each $v\in\hat{V}\setminus V$ has indegree two and zero outdegree. It follows that $\hat{X}$ has no cycles and so is a DAG. It should be clear that every automorphism $g$ on $X$ induces an automorphism $\hat{g}$ on $\hat{X}$ by simply mapping $v$ to $g(v)$ and mapping $v_e$ to $v_{g(e)}$. Because the $v_{\{w_1,w_2\}}$ are 'trapped' between the vertices $w_1$ and $w_2$, it should also be clear that no new automorphisms can act on $\hat{X}$, as the image of $v_{\{w_1,w_2\}}$ is fully determined by the images of $w_1$, $w_2$ and a possible reordering of the edges in $E$ between $w_1$ and $w_2$, which is all taken into account by the above induced action.
H: Prove that the perpendiculars to the sides at these points meet in a common point if and only if $ BP^2 + CQ^2 + AR^2 = PC^2 + QA^2 + RB^2 $ $P, Q, R$ are points on the sides $BC,CA,AB $ of triangle $ABC$. Prove that the perpendiculars to the sides at these points meet in a common point if and only if $ BP^2 + CQ^2 + AR^2 = PC^2 + QA^2 + RB^2 $ I can't seem to prove $ BP^2+CQ^2+AR^2 = PC^2+QA^2+RB^2 \Rightarrow $ "the perpendiculars to the sides at these points meet in a common point". I have proved the other implication using the Pythagorean theorem. AI: Let $X$ be the intersection of the perpendiculars drawn at $Q$ and $R$. We have $$XB^2 - XC^2 = XB^2-XA^2 +XA^2-XC^2\\ =RB^2-RA^2+QA^2-QC^2\\ =PB^2-PC^2$$ where the second equality uses the Pythagorean theorem for the perpendiculars at $R$ and $Q$, and the last equality uses the given relation. From this you can show that $XP$ is perpendicular to $BC$.
H: Block matrix and invariant subspaces I was wondering what the exact relationship between invariant subspaces and a block matrix is. Is it correct to say: each diagonal block matrix "creates a vector space decomposition" and vice versa? If this is so, I would be interested in understanding how one gets from the diagonal block matrix to the vector space decomposition, e.g. $ \begin{pmatrix} 1 & 0&0 \\ 0 & 2 &3 \\ 0 & 4 &5 \\ \end{pmatrix}$. Obviously, this one has two blocks: the $1$, and the $2\times 2$ block. How does this give me a vector space decomposition? AI: Working from your example, $$ \begin{pmatrix} 1 & 0&0 \\ 0 & 2 &3 \\ 0 & 4 &5 \\ \end{pmatrix}\begin{pmatrix}x\\0\\0\end{pmatrix}=\begin{pmatrix}x\\0\\0\end{pmatrix}$$ $$ \begin{pmatrix} 1 & 0&0 \\ 0 & 2 &3 \\ 0 & 4 &5 \\ \end{pmatrix}\begin{pmatrix}0\\y\\z\end{pmatrix}=\begin{pmatrix}0\\2y+3z\\4y+5z\end{pmatrix}$$ Notice how the two subspaces are invariant under multiplication by the given matrix. Blocks identify invariant subspaces, which makes for easy direct sum decomposition. There is a lot of good information out there on this topic; here is a good start.
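The same observation in a couple of lines of Python (purely illustrative; the test vectors are chosen arbitrarily, not part of the original answer):

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 2, 3],
              [0, 4, 5]])

e1 = np.array([1, 0, 0])     # spans the first subspace
v = np.array([0, 7, -2])     # an arbitrary vector in span{e2, e3}
print(A @ e1)                # [1 0 0]   -- stays a multiple of e1
print(A @ v)                 # [ 0  8 18] -- first coordinate stays 0
```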
H: Show that the equation of a line can be given as ℑm(αz+β)=0 I've just started a non-Euclidean Geometry course and the book we are using has a very brief (and not-so-helpful) section on complex numbers that we sort of went over in class. One of the questions is this: Given that α and β are complex constants and z = x + iy, show that ℑm(αz+β)=0 is the equation of a straight line. The book doesn't even talk about this form "ℑm(αz+β)" until the questions, so I'm already a bit confused with the new symbols. This is what I THINK the question is saying: If the imaginary part of a complex function (αz+β) is 0, then then function is a straight line on the Cartesian plane because if the iy of z is zero, that leaves us with just the real part, x. But what is beta then--a y-intercept? And how can I show this? Any help would be much appreciated! AI: The imaginary part is not taken of $z$, but of $\alpha z+\beta$. Since $\alpha$ and $\beta$ are complex, we can write them as $\alpha = \alpha_r + \mathrm i\alpha_i$ and $\beta = \beta_r + \mathrm i\beta_i$ where $\alpha_r,\alpha_i,\beta_r,\beta_i\in\mathbb R$. Now $\alpha z + \beta = (\alpha_r + \mathrm i\alpha_i)(x + \mathrm i y) + (\beta_r +\mathrm i \beta_i)$. Expand the product, using the fact that $\mathrm i^2=-1$, and then you can easily take the imaginary part of that (it's everything that has an $\mathrm i$ as factor after expansion). Note that it contains both $x$ and $y$. I assume you know what a general equation for a straight line in the $x$-$y$ plane looks like. Just compare the imaginary part you got from the above calculation with the general form. As a bonus, you can also consider the real part. You'll find that this also describes a straight line, which is related to the line from the imaginary part (How?)
H: Finding limit function $\lim_{n \rightarrow \infty} n ((x^2 +x + 1)^{1/n} -1)$ \begin{align} f(x) &= \lim_{n \rightarrow \infty} n ((x^2 +x + 1)^{1/n} -1) \\&= \lim_{n \rightarrow \infty} n ((\infty)^{1/n} -1) \\&= \lim_{n \rightarrow \infty} n (1 -1)\\& = \lim_{n \rightarrow \infty} n \cdot 0 \\&= 0 \end{align} Did I solve it correctly?? AI: Put $\,a:=x^2+x+1\,$. Note that $\,x\in\Bbb R\implies a>0\;$ (why?). Thus, you want $$\lim_{n\to\infty} n\left(\sqrt[n] a-1\right)$$ Let us define, for a continuous variable $t$, $$t>0\;,\;\;f(t):= t(\sqrt[t]a-1)=\frac{\sqrt[t]a-1}{\frac1t}$$ Now, you can apply l'Hospital when $\,t\to\infty\,$ (why?), so $$\lim_{t\to\infty}f(t)\stackrel{\text{l'H}}=\lim_{t\to\infty}\frac{-\frac1{t^2}a^{1/t}\log a}{-\frac1{t^2}}=\lim_{t\to\infty}a^{1/t}\log a=\log a$$ Thus.... (so no, the proposed computation is not correct: $x$ is held fixed here, so $x^2+x+1$ cannot be replaced by $\infty$, and the limit is $\log(x^2+x+1)$ rather than $0$.)
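A quick numerical check of this limit at, say, $x=1$ (so $a=3$); a Python sketch, not part of the original answer:

```python
import math

x = 1.0
a = x ** 2 + x + 1
for n in (10, 10 ** 3, 10 ** 6):
    print(n, n * (a ** (1.0 / n) - 1))   # approaches log(a)
print(math.log(a))                        # log 3 = 1.0986...
```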
H: How can I solve this system of first-order differential equations? My problem is this given system of differential equations: $$\dot{x}=8x+18y$$ $$\dot{y}=-3x-7y$$ I am looking for a general solution. My approach was: I can see this is a system of linear, ordinary differential equations. Both are of first order, because the highest derivative is the first. But now I am stuck; I have no idea how to solve it. A transformation into matrix form should lead to this expression: $$\overrightarrow{y}=\left( \begin{array}{cc} 8 & 18 \\ -3 & -7 \end{array} \right)\cdot x$$ or is this correct: $$\overrightarrow{x}=\left( \begin{array}{cc} 8 & 18 \\ -3 & -7 \end{array} \right)\cdot y\text{ ?}$$ But I don't know how to determine the solution from this point on. AI: I'm going to rename your variables. Instead of $x$ and $y$, I will use $x_1$ and $x_2$ (respectively). Now, let's look at the system: $$\begin{cases} \dot x_1 = 8x_1+18x_2\\ \dot x_2 = -3x_1 -7x_2 \end{cases}$$ To change this into matrix form, we rewrite as $\dot {\vec x} = \mathbf A \vec x$, where $\mathbf A$ is a matrix. This looks like: $$\underbrace{\pmatrix{\dot x_1 \\ \dot x_2}}_{\large{\dot {\vec x}}} = \underbrace{\pmatrix{8 & 18 \\ -3 & -7}}_{\large{\mathbf A}}\underbrace{\pmatrix{x_1\\x_2}}_{\large{\vec x}}$$ To solve the system, we find the eigenvalues of the matrix. These are $r_1 = 2$ and $r_2 = -1$. Two corresponding eigenvectors are $\vec \xi_1 =\pmatrix{3 \\ -1}$ and $\vec \xi_2 =\pmatrix{2 \\ -1}$, respectively. We now plug these into the equation: $$\vec{x} = c_1e^{r_1t}\vec{\xi_1}+c_2e^{r_2t}\vec{\xi_2}$$ This yields: $$\vec{x} = c_1e^{2t}\pmatrix{3 \\ -1}+c_2e^{-t}\pmatrix{2 \\ -1}$$ So, your individual solutions are: $$x_1 = 3c_1e^{2t} + 2c_2e^{-t}\\ x_2 = -c_1e^{2t} -c_2e^{-t}$$
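If you want to double-check the eigenvalues and eigenvectors numerically, here is a short NumPy sketch (not part of the original answer):

```python
import numpy as np

A = np.array([[8.0, 18.0],
              [-3.0, -7.0]])
vals, vecs = np.linalg.eig(A)
print(vals)   # approximately [ 2. -1.]
print(vecs)   # columns are normalized eigenvectors, proportional to (3, -1) and (2, -1)
```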
H: Property of Banach algebra with involution Let $\mathcal{B}$ be a Banach algebra with involution *. Is it always true that $\forall A \in \mathcal{B}: \| A \|^2 \geq \| A^* A \| $? (motivation: I read a proof that bounded linear operators on a Hilbert space form a C*-algebra, but for the C*-property they only proved $\| A \|^2 \leq \| A^* A \|$ (after having established that it is a Banach algebra with involution), so I wondered if the other direction is obvious...) AI: With the direction established in the motivation above, the other direction follows. Using the submultiplicativity of the norm, one gets for $0\neq A\in \mathcal{B}$ $$\vert\vert A \vert\vert^2 \leq \vert\vert A^* A \vert\vert \leq \vert\vert A^* \vert\vert \;\vert\vert A\vert\vert$$ and thus $$\vert\vert A \vert\vert \leq \vert\vert A^* \vert\vert$$ Exchanging the roles of $A^*$ and $A$ one obtains $$\vert\vert A \vert\vert \geq \vert\vert A^* \vert\vert$$ Thus, the original inequality becomes $$\vert\vert A \vert\vert^2 \leq \vert\vert A^* A \vert\vert \leq \vert\vert A\vert\vert^2$$ which establishes the C*-property.
H: Proof of Hilbert's Basis Theorem: won't $\deg (f_{i})$ be a strictly decreasing sequence? Say we have an ideal $I\subset R[X]$. We select a set of polynomials $f_{1},f_{2},f_{3},\dots$ such that $f_{i+1}$ has minimal degree in $I\setminus (f_{1},f_{2},f_{3},\dots f_{i})$. Can't $\deg (f_{i})$ be a strictly decreasing sequence? For example, let the set $I\setminus (f_{1},f_{2},f_{3},\dots f_{i})$ contain polynomials of degree $2$ or greater. Say it contains the polynomials $x^2+2$ and $(x^2+2)^2+(x+1)$. Then $f_{i+1}=x^2+2$ and $f_{i+2}=x+1$! Motivation: In this proof of Hilbert's Basis Theorem, we have constructed a polynomial $g=u_{1}f_{1}x^{n_{1}}+\dots+u_{N}f_{N}x^{n_{N}}$, where $n_{i}=\deg (f_{N+1})-\deg(f_{i})$. I feel that then $n_{i}$ should be negative; and as $x$ raised to negative powers is not defined in $R[X]$, I'm having trouble understanding how $g=u_{1}f_{1}x^{n_{1}}+\dots+u_{N}f_{N}x^{n_{N}}$ has been defined. AI: No. If $\deg f_{i+1}<\deg f_i$, then you would have chosen $f_{i+1}$ in place of $f_i$ in the previous step. Let us go through the example in the OP. Assume that the polynomials $p_1=(x^2+2)$ and $p_2=(x^2+2)^2+(x+1)$ belong to the ideal $I$, but do not belong to the smaller ideal $J_{i-1}=(f_1,f_2,\ldots,f_{i-1})$ generated by the polynomials $f_i$ picked earlier. The polynomial $p_3=x+1=p_2-p_1^2$ belongs to the ideal $I$. There are two possibilities. I) If $p_3$ does not belong to the ideal $J_{i-1}$, then we will not be allowed to pick $f_i=p_1$, because $p_3$ has a lower degree and does not belong to the ideal $J_{i-1}$ either. II) If $p_3$ belongs to the ideal $J_{i-1}$, then it will never be picked. Not now, not in the future. So the scenario that we might pick $p_1$ in this round, and $p_3$ in the next, is impossible.
H: Maclaurin series problem I can't seem to understand the full solution. Why did they plug in $x=0.2$? Thanks in advance. AI: The reason they plug $0.2$ into the function is that it gives them the answer they seek. If you define $f(x) = \sqrt{1 + x}$, then $f(0.2) = \sqrt{1+0.2} = \sqrt{1.2}$, as desired.
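To see the numbers concretely, here is a small Python sketch (not from the original solution) that sums the first few Maclaurin terms of $\sqrt{1+x}$ at $x=0.2$ and compares with $\sqrt{1.2}$:

```python
import math

def binom_half(k):
    # generalized binomial coefficient C(1/2, k)
    c = 1.0
    for i in range(k):
        c *= (0.5 - i) / (i + 1)
    return c

x = 0.2
partial = sum(binom_half(k) * x ** k for k in range(6))
print(partial, math.sqrt(1.2))   # 1.0954375 vs 1.0954451...
```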
H: Lower bound on the probability that the maximum of a sequence of $n$ i.i.d. standard normal r.v.'s exceeds $x$ Let $X_{\max}=\max(X_1,X_2,\ldots,X_n)$ where $n$ is large and the $X_i$ are i.i.d. standard normal random variables, i.e. $X_i\sim\mathcal{N}(0,1)$. Is there a lower bound on the probability $P(X_{\max}\geq x)$ using elementary functions in terms of $n$ and a constant $x$ independent of $n$? My guess is that there is a lower bound of the form $P(X_{\max}\geq x)\geq e^{-x/\sqrt{\log n}}$; however, I am wondering whether that is true, and, if it is, how to prove it. I am pretty new to extreme value theory, and would appreciate any help. AI: $$ \Pr(\max>x) = 1-\Pr(\max\le x) = 1-\Pr(\text{for }i=1,\ldots, n,\quad X_i\le x) $$ $$ = 1-\Big(\Pr(X_1\le x)\Big)^n = 1 - \Big(\Phi(x)\Big)^n. $$ If you mean a computationally efficient lower bound, it might make sense to ask about computationally efficient upper bounds on $\Phi(x)$. Things like that are known.
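The exact expression is easy to evaluate, e.g. with SciPy (a small sketch with an arbitrary choice of $x$, not part of the original answer):

```python
from scipy.stats import norm

x = 2.0
for n in (10, 100, 1000, 10000):
    print(n, 1 - norm.cdf(x) ** n)   # exact P(max of n iid N(0,1) exceeds x)
```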
H: Show that the statements are equivalent I have to prove that these statements are equivalent: (i) $f: X \rightarrow Y$ is continuous (ii) $f(A') \subset \overline {f(A)} , \forall A\subset X$ (iii) $Fr(f^{-1} (B)) \subset f^{-1} (Fr(B)) , \forall B \subset Y$ I could only show (i) implies (ii). I don't know what I'm missing to show the rest. $Fr$ is the boundary. I didn't find anything that relates to the boundary. Hints are much appreciated. AI: To see that (3) implies (1), take a closed set $B$ in $Y$, and note that closedness is equivalent to $B=\overline B$ and that $\overline B=B\cup\partial B=B\cup B'$. Then $f^{-1}(B)=f^{-1}(B)\cup f^{-1}(\partial B)$. By (3) this contains $\partial f^{-1}(B)$. Can you go on from here? For (2) implies (1), note that $f(A')\subseteq\overline{f(A)}$ is equivalent to $f(\overline{A})\subseteq\overline{f(A)}$. Now let $B$ be closed, i.e. $B=\text{cl}B$. Deduce that $B\supseteq\text{cl}f(f^{-1}(B))\supseteq f(\text{cl}f^{-1}(B))$. Conclude that $f^{-1}(B)$ is closed.
H: divergence of $\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$ I ran into this question and I've been sitting on it for a long time. Why does this integral diverge: $$\int_{2}^{\infty}\frac{dx}{x^{2}-x-2}$$ Thank you very much in advance. Yaron. AI: $$\frac1{x^2-x-2}=\frac13\left(\frac1{x-2}-\frac1{x+1}\right)\implies$$ $$\int\limits_2^\infty\frac{dx}{x^2-x-2}=\left.\lim_{b\to\infty\,,\,\epsilon\to 0^+}\frac13\log\frac{x-2}{x+1}\right|_{2+\epsilon}^b=\frac13\left[\lim_{b\to\infty}\log\frac{b-2}{b+1}-\lim_{\epsilon\to 0^+}\log\frac{\epsilon}{3+\epsilon}\right]=\infty$$
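One can also see the divergence numerically: the integral from $2+\epsilon$ to $\infty$ grows like $\frac13\log\frac3\epsilon$ as $\epsilon\to 0^+$. A short SciPy sketch (not part of the original answer; quad may warn about slow convergence for the smallest $\epsilon$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (x ** 2 - x - 2)
for eps in (1e-2, 1e-4, 1e-6):
    val, _ = quad(f, 2 + eps, np.inf)
    print(eps, val, np.log(3 / eps) / 3)   # the last two columns roughly agree
```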
H: Determine coefficients of a Fourier series Given the $2\pi$-periodic function $f(t)=t^2$ such that $-\pi \le t \le \pi$, I want to determine the coefficients $f_k$ of the fourier series of this signal. Therefore I use $$f_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} t^2 e^{-ikt}\,dt$$ Is that true? Because I'm looking at the answers right now and I don't see the minus sign in the integral. Maybe there is a reason for this that I don't know? Shortly, the 'correct anwer' according to my textbook should be $$f_k = \frac{1}{2\pi} \int_{-\pi}^{\pi} t^2 e^{ikt}\,dt$$ (and this worked out of course). AI: You write $$ f(t) = t^2 = \sum_{n \in \mathbb Z} c_n e^{int} $$ where $$ c_k = \frac 1{2\pi} \int_{-\pi}^{\pi} f(t) e^{-ikt} \, dt. $$ The reason why this makes sense is that if you make the substitution, assuming $$ f(t) = \sum_{n \in \mathbb Z} c_n e^{int} $$ for some coefficients $c_n \in \mathbb C$, then you have $$ \int_{-\pi}^{\pi} \left( \sum_{n \in \mathbb Z} c_n e^{int} \right) e^{-ikt} \, dt = \sum_{n \in \mathbb Z} c_n \int_{-\pi}^{\pi} e^{int} e^{-ikt} \, dt = 2\pi c_k, $$ because $$ \int_{-\pi}^{\pi} e^{i(n-k)t} \, dt = \begin{cases} 2\pi & \text{ if } n=k \\ 0 & \text{ otherwise }. \end{cases} $$ (I didn't justify the switch of the summation and integral but you can still understand that your minus sign needs to be there by looking at that.) You're saying you got the answer right by putting a minus sign where it shouldn't be, so I expect a sign error in your computations. Can we see the details? Or maybe you can look them up yourself. (I am VERY confident that this sign should be there ; the idea behind Fourier analysis is all about using the inner product : $$ \langle f,g \rangle = \int_{-\pi}^{\pi} f(t) \overline{g(t)} \, dt $$ and the statement that "Fourier analysis works" is just that $\{ e^{int} \, | \, n \in \mathbb Z \}$ is an Hilbert basis of the $L^2$ integrable functions over $[-\pi,\pi]$ under this inner product, hence the $-$ in the decomposition because $\overline{e^{int}} = e^{-int}$.) Hope that helps,
H: Show that $\lim\limits_{n\to\infty} x_n$ exists for $0 \le x_{n+1} \le x_n + \frac1{n^2}$ Let $x_1, x_2,\ldots$ be a sequence of non-negative real numbers such that $$ x_{n+1} ≤ x_n + \frac 1{n^2}\text{ for }1≤n. $$ Show that $\lim\limits_{n\to\infty} x_n$ exists. Help please... AI: The sequence is bounded from above and below, hence both $$ \ell = \liminf x_n $$ and $$ L = \limsup x_n $$ are finite. Pick $\varepsilon$ and a very large $n$ at which we have both $$x_n < \ell + \varepsilon$$ and $$\sum_{k \geq n} \frac{1}{k^2} < \varepsilon$$ Then for any $m > n$ using the assumption, we get $$ x_m < x_n + \sum_{k \geq n} \frac{1}{k^2} \leq \ell + 2\varepsilon $$ Taking $m$ to infinity along a sequence such that $x_m \rightarrow L$ we get $L \leq \ell + 2\varepsilon$. Taking $\varepsilon$ to zero we get $L \leq \ell$. Since trivially $\ell \leq L$ we conclude that $\ell = L$ and therefore the limit exists.
H: Converting Maximum TSP to Normal TSP Consider the Travelling Salesman Problem: given $N$ cities connected by edges of varying weights and a city $A$, what is the shortest tour that visits all the cities exactly once and returns back to $A$? And the Max Travelling Salesman Problem: given $N$ cities connected by edges of varying weights and a city $A$, what is the longest such tour? Notice the following: if we take all the edge lengths of regular Travelling Salesman and compute their reciprocals (for example, if we have edge lengths 1, 2, 3, we form a new problem with corresponding edge lengths 1, 1/2, 1/3), we can then solve regular Travelling Salesman with Max Travelling Salesman. Does this work? An equivalent notion is the following: given a set of numbers $Q = \{x_1, x_2, x_3, \ldots, x_n\}$, if a $k$-element minimum of the set is $\{y_1, y_2, y_3, \ldots, y_k\}$, where the $y_i$ are all members of $Q$, then, for the set of numbers $Q' = \{\frac{1}{x_1},\frac{1}{x_2},\frac{1}{x_3},\ldots,\frac{1}{x_n}\}$, the $k$-element maximum of the set is $\{\frac{1}{y_1},\frac{1}{y_2},\frac{1}{y_3},\ldots,\frac{1}{y_k}\}$. How do you prove that? AI: Here is one solution, I believe. Take the set $Q = \{x_1,x_2,x_3,\ldots,x_n\}$ and sort it in ascending order. Now the $k$-element minimum of the set is simply the first $k$ elements, since those are the $k$ smallest elements. Now note that for positive numbers $x_i < x_j$ if and only if $\frac{1}{x_i} > \frac{1}{x_j}$; that is, taking reciprocals reverses the order. Therefore, if we assume that $Q$ contains only positive elements, the $k$-element maximum of the set $\{\frac{1}{x_1},\frac{1}{x_2},\frac{1}{x_3},\ldots,\frac{1}{x_n}\}$ consists of the reciprocals of the $k$-element minimum of the set $Q = \{x_1,x_2,x_3,\ldots,x_n\}$.
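A tiny Python illustration of the set statement (the values are chosen arbitrarily, not part of the original answer):

```python
Q = [5.0, 1.0, 8.0, 2.0, 4.0]
k = 3

k_min = sorted(Q)[:k]                                          # k smallest elements of Q
k_max_recip = sorted((1.0 / x for x in Q), reverse=True)[:k]   # k largest reciprocals
print(k_min)                           # [1.0, 2.0, 4.0]
print([1.0 / r for r in k_max_recip])  # [1.0, 2.0, 4.0]  -- the same elements
```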
H: Prove there is no such analytic function Please help prove that there is no function $f$ analytic at $z=0$ such that $$ n^{-\frac3 2}<\left|f\left(\frac1 n\right)\right|<2n^{-\frac3 2}$$ for every natural $n$. AI: By the continuity of $f$ and the upper bound, we must have $f(0) = 0$. By the lower bound, $f$ cannot be identically $0$. Since $f(0) = 0$ and $f$ is not identically zero, we can find another function $g$ that is analytic at $z = 0$ so that $f(z) = z^k g(z)$ for some positive integer $k$ and $g(0) \ne 0$. It follows that for every $n \in \mathbb N$: $$ \frac{1}{n^{3/2}} < \left|\left(\frac{1}{n}\right)^k g\left(\frac{1}{n}\right)\right| < \frac{2}{n^{3/2}} $$ Thus: $$ n^{k - 3/2} < \left|g\left(\frac{1}{n}\right)\right| < 2n^{k - 3/2} $$ If $k = 1$, we must have $g(0) = 0$ by the second inequality, a contradiction. If $k > 1$, we must have $\lim_{n\to\infty} g(1/n) = \infty$ by the first inequality, another contradiction. It follows that such a function $f$ cannot exist.
H: Vector in the line of intersection of two planes In the context of Geometric Algebra, in $ \mathbb{R}^3 $: Let $A$ and $B$ be bivectors (representing planes). Show that $ (\langle AB \rangle_2)^* $ is a vector in the line of intersection of the two planes, where the $ ^* $ represents the dual. My approach: $ A^*, B^* $ are vectors orthogonal to $A$, $B$; $ (A^* \wedge B^*) $ represents the plane orthogonal to both $A$ and $B$, and $ (A^* \wedge B^*)^* $ is a vector orthogonal to this plane, thus lying in the line of intersection of $A$ and $B$. Using the duality of the inner and outer products: $$ (A^* \wedge B^*)^* = A^*\cdot B^{**} = A^*.(-B) = - A^*\cdot B$$ But not quite sure how to proceed from here. Is there a general commutativity property of the dot product of a vector and a bivector? Also, what is the geometric interpretation of this result? AI: In general, vector-bivector contractions anti-commute. You can use this to break the contraction down into geometric products. You have, within a minus sign, $$(IA) \cdot B = \frac{1}{2} (IAB - BIA)$$ where $I$ is the 3d pseudoscalar. You should use at this point that $I$ commutes with all blades in 3d, which allows you to factor it out, and you should then be able to recognize the rest of the expression is in fact $\langle AB \rangle_2 = A \times B$, the commutator product (MacDonald has the commutator product defined, I hope, right?). Edit: remember, the dot product of a vector $c$ and a bivector $D$ gives the vector in $D$ that is orthogonal to $c$. Edit edit: as always, if you don't know whether a given product commutes, you can use grade projection and associativity instead. $$\langle IAB \rangle_1 = \langle (IA) B \rangle_1 = (IA) \cdot B$$ but using different associative grouping, $$\langle IAB \rangle_1 = \langle I(AB) \rangle_1 = I \langle AB \rangle_2$$ $AB$ has only grade-0 and grade-2 terms, so this is exhaustive.