H: Simplifying easy expression $\frac{u-2}{u^{1/2}}=u^{1/2}-2u^{-1/2}$ How does this work? I know that it adds up, but I'm just not sure about the mechanics at play. Are there any similar cases? Thank you. AI: It’s a matter of splitting the fraction into a difference of two fractions, each of which is then simplified using the laws of exponents: $$\frac{u-2}{u^{1/2}}=\frac{u}{u^{1/2}}-\frac2{u^{1/2}}=u^1\cdot u^{-1/2}-2\cdot u^{-1/2}=u^{1-\frac12}-2u^{-1/2}=u^{1/2}-2u^{-1/2}$$
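A one-line numerical check (Python here is my own addition, not part of the answer) makes it easy to convince yourself the two sides agree for any $u>0$:

```python
import math

u = 5.0  # any positive test value
lhs = (u - 2) / math.sqrt(u)           # (u-2)/u^(1/2)
rhs = math.sqrt(u) - 2 / math.sqrt(u)  # u^(1/2) - 2u^(-1/2)
print(abs(lhs - rhs) < 1e-12)  # → True
```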
H: Eigenvalues of the product of two symmetric matrices This question may seem a little dumb, but I really googled around with no success. Let $A$ and $B$ be two square matrices of size $n$ whose (real) eigenvalues are denoted by $$\lambda_1(A) \leq \lambda_2(A) \leq \cdots \leq \lambda_n(A)$$ $$\lambda_1(B) \leq \lambda_2(B) \leq \cdots \leq \lambda_n(B)$$ What can I say about $\lambda_i (AB)$? AI: Here are two results (this is exercise III.6.14 in Matrix Analysis by R. Bhatia): Let $\lambda^\uparrow(A)$ be the vector of eigenvalues of $A$ in increasing order and $\lambda^\downarrow(A)$ be the vector of eigenvalues of $A$ in decreasing order. Then, if $A$ and $B$ are positive definite: $$\lambda^\downarrow(A)\lambda^\uparrow(B) \prec \lambda(AB) \prec \lambda^\downarrow(A)\lambda^\downarrow(B),$$ where the products of vectors are taken entrywise. The $\prec$ symbol denotes majorization: for $n$-dimensional vectors $x,y$, $x \prec y$ means that for $1 \leq k < n$: $$\sum_{i = 1}^k x_i^\downarrow \leq \sum_{i = 1}^k y_i^\downarrow$$ and $$\sum_{i = 1}^n x_i = \sum_{i =1}^n y_i,$$ i.e., if we sum the $k$ largest elements in each vector, the sum of elements from $x$ will be smaller than or equal to the sum of elements from $y$, but if we sum all the elements in each vector, the sums are equal. More generally, for any Hermitian (symmetric, if $A$ and $B$ are real) matrices $A,B$: $$\langle \lambda^\downarrow(A), \lambda^\uparrow(B) \rangle \leq \operatorname{tr} AB \leq \langle \lambda^\downarrow(A), \lambda^\downarrow(B) \rangle$$ and we have in general, for any square matrix $X$, that $$\operatorname{tr} X = \sum_{i=1}^n \lambda_i(X).$$ Special case if $AB$ is symmetric An interesting case occurs if the product $AB$ is also symmetric. This happens if and only if $AB = BA$. Since $A$ and $B$ are diagonalizable (since they are symmetric), this means that $A$ and $B$ are simultaneously diagonalizable, i.e.
there exists $T$ such that $$\begin{align*} A &= TD_AT^{-1} \\ B &= TD_BT^{-1} \end{align*}$$ and hence $$AB = TD_AT^{-1}TD_BT^{-1} = TD_AD_BT^{-1}$$ and the product $D_AD_B$ will be a diagonal matrix, containing the eigenvalues of $AB$. Hence an eigenvalue of $AB$ will be a product of one eigenvalue from $A$ and one eigenvalue from $B$.
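As a sanity check on the trace bounds, here is a small numerical experiment (Python/NumPy; this is my addition, and the random seed and matrix size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X, Y = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A, B = X + X.T, Y + Y.T                           # random real symmetric matrices

lam_A_dn = np.sort(np.linalg.eigvalsh(A))[::-1]   # eigenvalues of A, decreasing
lam_B_up = np.sort(np.linalg.eigvalsh(B))         # eigenvalues of B, increasing
lam_B_dn = lam_B_up[::-1]                         # eigenvalues of B, decreasing

t = np.trace(A @ B)
# <lambda_dn(A), lambda_up(B)>  <=  tr(AB)  <=  <lambda_dn(A), lambda_dn(B)>
print(lam_A_dn @ lam_B_up - 1e-9 <= t <= lam_A_dn @ lam_B_dn + 1e-9)  # → True
```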
H: Learning Complex Geometry - Textbook Recommendation Request I wish to learn Complex Geometry and am aware of the following books: Huybrechts, Voisin, Griffiths-Harris, R. O. Wells, Demailly. But I am not sure which one or two to choose. I am interested in learning both complex analytic and complex algebraic geometry. Could somebody please advise me which of these books deal with the same or similar aspects of the subject? If I am not mistaken, Huybrechts and Voisin deal with similar aspects, and Huybrechts would be relatively elementary, or maybe a preparation for Voisin? I would like to select two books (preferably from the above-listed ones, though other suggestions are welcome) such that the intersection between their contents is minimal and the union maximal. Any comments about the above-mentioned books will be very helpful. AI: Well, you'll really want to read them all at some point. To start with, take Griffiths-Harris for geometric insight and Huybrechts for company (his chapter 1.2 is amazing). Voisin is very good and at first covers the same ground as Huybrechts, but is more advanced (do read the introduction to Voisin's book early, it sets the scene quite well). Demailly's book is where all the details are; you'll want that one for proofs of the main theorems like Hodge decomposition, Kodaira vanishing, etc. There's also a new book by Arapura that looks very user-friendly. And now for some clearly false generalities: The books by Huybrechts, Voisin and Arapura have very algebraic points of view; they were written by people who are mainly algebraic geometers and (to simplify greatly) think in Spec of rings. By contrast, Demailly and Griffiths-Harris have more differential-geometric points of view and use metrics and positivity of curvature as their main tools. I'll take the opportunity to also recommend Zheng's wonderful "Complex differential geometry" for an alternative introduction to that point of view.
You'll need to know how to use all of these tools (as do all those people, of course). So, to sum things up: $$ \begin{array}{ccc} & \hbox{introduction} & \hbox{advanced} \\ \hbox{algebraic} & \hbox{Arapura, Huybrechts} & \hbox{Voisin}\\ \hbox{metric} & \hbox{Griffiths-Harris, Zheng} & \hbox{Demailly} \\ \end{array} $$
H: Let $f,g \in {\mathscr R[a,b]}.$ If $\int^{b}_{a}f^2=0,$ then $\int^{b}_{a}fg=0.$ Let $f,g \in {\mathscr R[a,b]}.$ If $\int^{b}_{a}f^2=0,$ then $\int^{b}_{a}fg=0.$ I have shown that $2|\int^{b}_{a}fg|\leq t\int^{b}_{a}f^2 + \frac{1}{t}\int^{b}_{a}g^2, t>0.$ How do I then show $\int^{b}_{a}fg=0?$ Thank you. AI: HINT: Since $\int^{b}_{a}f^2=0,$ your bound reads $2\left|\int^{b}_{a}fg\right|\leq \frac{1}{t}\int^{b}_{a}g^2$ for every $t>0$; now let $t\to\infty$...
H: Finding the shortest distance between a point and a circle The question is "Find the shortest distance from the origin of the graph of the circle $x^2-14x+y^2-18y+81=0$". I found the circle in the following form: $(x-7)^2+(y-9)^2=7^2$ Then I found the line that connects the origin $(0,0)$ and the center $(7,9)$, and it was $y=(9/7)x$ Now I want to find a point that is both on the circle AND on the line mentioned above, then find the distance between that point and $(0,0)$, but I don't know how to find that point. My knowledge level is Preparatory mathematics. Thanks :) AI: Solve the equation: $$(x-7)^2+\left[\underbrace{\left(\frac 97x\right)}_{y = \frac 97x}-9\right]^2 = 7^2$$ to find where the circle intersects your line (two points, one of which is closest to the origin). Call each point $(x_0, y_0)$. To find the distance from the origin to each point, we know: $$d = \sqrt{(x_0 - 0)^2 + (y_0 - 0)^2} = \sqrt{x_0^2 + y_0^2}$$ Of the two points of intersection, choose the one for which $d$ is smallest.
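Since the closest point lies on the segment from the origin to the center, the answer can also be computed as (distance to center) minus (radius), i.e. $\sqrt{130}-7$. A quick cross-check of both routes (Python, my addition, not part of the answer):

```python
import math

cx, cy, r = 7.0, 9.0, 7.0
d_center = math.hypot(cx, cy)       # distance origin -> center = sqrt(130)
shortest = d_center - r             # sqrt(130) - 7, approximately 4.40

# cross-check: the nearer intersection of the circle with the line y = (9/7)x
t = 1 - r / d_center                # fraction of the way from the origin to the center
x0, y0 = t * cx, t * cy
assert abs((x0 - cx)**2 + (y0 - cy)**2 - r**2) < 1e-9   # the point lies on the circle
print(abs(math.hypot(x0, y0) - shortest) < 1e-12)       # → True
```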
H: Cotangent bundle of an $n$-dimensional differentiable manifold is a $2n$-dimensional manifold How to prove that the cotangent bundle of an $n$-dimensional differentiable manifold is a $2n$-dimensional manifold? Detailed explanation is welcome. Thanks in advance. AI: I suppose you are given a definition of the tangent space at every point (with curves or in an algebraic way). As a set, the cotangent bundle is defined as $\bigcup_{p \in M} T_{p}^{*}M$, where the union is disjoint. Now consider the atlas you have, call it $\lbrace \left( U_{i} , \varphi_{i} \right) \rbrace_{i \in I}$. First, you define the topology on the cotangent bundle over each open set of the covering. For every $i$ you have a map $\left( \varphi_{i}, d\varphi_{i} \right): T^{*}U_{i} \rightarrow \varphi_{i}\left(U_{i}\right)\times \mathbb{R}^{n}$. Since you know it is a bijection, you simply impose that it is a homeomorphism. Then you impose that the whole topology is the one generated by the sets which are "obliged" to be open by the maps $\left( \varphi_{i}, d\varphi_{i}\right)$. It is easy to show it is Hausdorff, since you separate covectors which "live" at different points with the topology of the manifold itself, while if two covectors "live" at the same point you separate them in the $\mathbb{R}^{n}$ where they live, since you have endowed every cotangent space with the euclidean topology. Then the maps we have used to define the topology are themselves the charts, since $\varphi_{i}\left(U_{i}\right) \times \mathbb{R}^{n}$ is an open set in $\mathbb{R}^{2n}$. What is left is the change of charts. By construction the atlas is $\lbrace \left( V_{i} , \left(\varphi_{i}, d\varphi_{i}\right) \right) \rbrace_{i \in I}$, with the projection on the manifold $\pi : V_{i} \rightarrow U_{i}$. By construction $V_{i} \bigcap V_{j} \neq \emptyset$ iff $U_{i} \bigcap U_{j} \neq \emptyset$.
Now the change of charts is $\left(\varphi_{i} \circ\varphi_{j}^{-1},d\varphi_{i} \circ d \varphi_{j}^{-1}=d\left(\varphi_{i} \circ\varphi_{j}^{-1} \right)\right):\varphi_{j}\left(U_{j}\right) \times \mathbb{R}^{n} \rightarrow \varphi_{i}\left(U_{i}\right) \times \mathbb{R}^{n}$. You already know $\varphi_{i} \circ\varphi_{j}^{-1}$ is smooth; the map on the second $n$ coordinates is linear in those coordinates, since it is the differential of a smooth function between open sets of $\mathbb{R}^{n}$, and it depends smoothly on the base point. Hence the change of charts is smooth itself and the structure of a manifold is given.
H: If $x_1, \ldots, x_6$ are positive real numbers that add up to $2$. Show that: If $x_1,x_2,x_3,x_4,x_5$ and $x_6$ are positive real numbers that add up to $2$, then: $$2^{12} \leq \left(1+\dfrac{1}{x_1}\right) \left(1+\dfrac{1}{x_2}\right)\left(1+\dfrac{1}{x_3}\right)\left(1+\dfrac{1}{x_4}\right)\left(1+\dfrac{1}{x_5}\right)\left(1+\dfrac{1}{x_6}\right) .$$ Factoring out the $\dfrac{1}{x_1}, \ldots, \dfrac{1}{x_6}$ from each term, I get $$\dfrac{1}{x_1}(x_1 + 1)\dfrac{1}{x_2}(x_2 + 1)\dfrac{1}{x_3}(x_3 + 1)\dfrac{1}{x_4}(x_4 + 1)\dfrac{1}{x_5}(x_5 + 1)\dfrac{1}{x_6}(x_6 + 1)$$ We know that: $x_1 + \cdots + x_6 = 2$ Also, I see that $x_1, \ldots, x_6$ are small numbers and $1$ over something small will give something big. How can I keep going? AI: The inequality is equivalent to $$ \frac{1}{ \left(1+\dfrac{1}{x_1}\right) \left(1+\dfrac{1}{x_2}\right)\left(1+\dfrac{1}{x_3}\right)\left(1+\dfrac{1}{x_4}\right)\left(1+\dfrac{1}{x_5}\right)\left(1+\dfrac{1}{x_6}\right)} \leq \frac{1}{2^{12}}$$ or $$\sqrt[6]{\prod \frac{x_i}{x_i+1}} \leq \frac{1}{4}$$ Now, by AM-GM we have $$\sqrt[6]{\prod \frac{x_i}{x_i+1}} \leq \frac{\sum \frac{x_i}{x_i+1}} {6}$$, so it suffices to prove $$\sum \frac{x_i}{x_i+1} \leq \frac{3}{2}$$ This is equivalent to $$6- \sum \frac{1}{x_i+1} \leq \frac{3}{2} \Leftrightarrow \sum \frac{1}{x_i+1} \geq \frac{9}{2}$$ But this is just the Cauchy–Schwarz inequality: $$6^2 \leq \left( \sum \frac{1}{x_i+1} \right) \left(\sum (x_i+1)\right)=\left( \sum \frac{1}{x_i+1} \right) \left(2+6\right),$$ which gives $\sum \frac{1}{x_i+1} \geq \frac{36}{8} = \frac{9}{2}$, as required.
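Note the constant $2^{12}$ is sharp: the equality case of AM-GM is $x_1=\dots=x_6=\frac13$, which gives $(1+3)^6 = 4^6 = 2^{12}$ exactly. A brute-force check of both facts (Python, my addition; the helper `P` is just shorthand for the product):

```python
import random
from math import prod

def P(xs):
    # the product (1 + 1/x_1)...(1 + 1/x_6)
    return prod(1 + 1/x for x in xs)

# equality case: all x_i = 1/3
print(abs(P([1/3] * 6) - 2**12) < 1e-6)   # → True

# random positive 6-tuples summing to 2 never go below 2^12
random.seed(1)
ok = True
for _ in range(1000):
    raw = [random.random() for _ in range(6)]
    xs = [2 * r / sum(raw) for r in raw]  # rescale so the sum is exactly 2
    ok &= P(xs) >= 2**12 - 1e-9
print(ok)   # → True
```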
H: How do I calculate the marginal probability density function of Y? Let $f(x, y) = x$ if $0< y < \frac{1}{x}$ (with $0 < x < 1$), and $f(x,y)=0$ otherwise. I need to calculate the marginal pdf of $Y$. I know I need to integrate out $x$, but I'm having a hard time seeing what to integrate to. Please help, I've been stuck on this problem for $3$ days! AI: Graphical hint: Try drawing the region you're integrating over. Draw the curve $y = 1/x$ for $0 < x < 1$ and realize that the region you're integrating over will be the region bounded below this curve (in the first quadrant between $0<x<1$). Now if you are going to integrate out the $x$ variable, fix some positive $y$ value and ask when you enter and exit the region as you travel parallel to the $x$-axis at the height of $y$. If you choose $y$ with $0 < y \leq 1$, then you enter the region at $x=0$ and exit at $x=1$; otherwise, if you choose $y$ with $y > 1$, then you enter the region when $x =0$ and exit when $x=1/y$. Therefore you find your marginal PDF for $Y$ will be calculated as $$ \begin{cases} \int_0^1 f(x,y)dx & 0 < y \leq 1 \\ \int_0^{1/y} f(x,y) dx & 1 < y \end{cases} $$
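A symbolic check of the two branches (SymPy, my addition): it evaluates both integrals with $f(x,y)=x$ and confirms the resulting marginal has total mass $1$.

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

f1 = sp.integrate(x, (x, 0, 1))       # marginal on 0 < y <= 1
f2 = sp.integrate(x, (x, 0, 1/y))     # marginal on y > 1

print(f1)                                   # → 1/2
print(sp.simplify(f2 - 1/(2*y**2)) == 0)    # → True, i.e. f2 = 1/(2 y^2)

# the marginal integrates to 1, so it is a valid density
total = sp.integrate(f1, (y, 0, 1)) + sp.integrate(f2, (y, 1, sp.oo))
print(sp.simplify(total))                   # → 1
```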
H: Subset, Not a subset, And Elements Question $1$) Write $\subseteq$ or $\not\subset$: $\Bbb N\underline{}\Bbb Q$ $\Bbb N\underline{}\wp(\Bbb R)$ $\varnothing\underline{}\Bbb Z$ $\sqrt2\underline{}\Bbb R$ $\Bbb Z \cup [-1,1]\underline{}[-2,2]$ Question $2$) Write $\subseteq$ or $\in$: $(3,5)\underline{}[3,5]$ $[-1,4]\underline{}(-1,4)$ $\{\varnothing\}\underline{}\wp(\Bbb Q)$ $\Bbb N \times\Bbb Z\underline{}\Bbb Z \times\Bbb N$ I'm not sure if my answers are correct: Question $1$: $\subseteq,\not\subset,\subseteq,\subseteq,\not\subset$ Question $2$: $\subseteq,\in,\subseteq,\subseteq$ AI: 1a) Every natural number $N \in \mathbb{N}$ is also a rational: we can just write it as $\frac{N}{1}$. So $\mathbb{N} \subseteq \mathbb{Q}$. 1b) $\mathbb{N} \nsubseteq \wp(\mathbb{R})$, because every element of $\wp(\mathbb{R})$ is a set, and this would imply every $n \in \mathbb{N}$ is a set, which is not true. However, $\mathbb{N} \in \wp(\mathbb{R})$. 1c) The empty set is a subset of any set. (You should prove or look up this fact.) So $\emptyset \subseteq \mathbb{Z}$. 1d) $\sqrt{2} \nsubseteq \mathbb{R}$. $\sqrt{2}$ is not a set. It is, however, a member of the set of real numbers: $\sqrt{2} \in \mathbb{R}$. 1e) $\mathbb{Z} \cup [-1,1] \nsubseteq [-2,2]$. The union of $\mathbb{Z}$ and $[-1,1]$ obviously contains elements not contained in $[-2,2]$. 2a) $(3,5) \subseteq [3,5]$. You should read about open and closed intervals. 2b) $[-1,4]\underline{}(-1,4)$ does not make sense given the options provided. $[-1,4]\nsubseteq(-1,4)$ because $[-1,4]$ contains $-1$ and $4$, which are not contained in the open interval $(-1,4)$. However, the interval $[-1,4]$ also obviously cannot be an element of the interval $(-1,4)$. 2c) $\{\emptyset\} \subseteq \wp(\mathbb{Q})$. The set containing the empty set is always contained in any power set. You should play with this statement to prove its truth. 2d) $\mathbb{N} \times \mathbb{Z} \nsubseteq \mathbb{Z} \times \mathbb{N}$.
The definition of the cartesian product $A\times B = \{(a,b) \mid a \in A, b \in B\}$ should provide the necessary intuition. (Obviously, $\mathbb{N} \times \mathbb{Z} \notin \mathbb{Z} \times \mathbb{N}$ - so this statement also does not make sense given the options provided.)
H: Equivalent definitions of quasi-projective algebraic sets. I'm trying to prove this equivalence, which gives two definitions of quasi-projective algebraic sets: $X\subset \mathbb P^n$ is an open subset of its closure $\Leftrightarrow$ $X\subset \mathbb P^n$ is an open subset of a closed subset of $\mathbb P^n$. One direction of this equivalence is easy; I need help with the converse. I have another question: the quasi-projective algebraic sets in Hartshorne's book are the open subsets of $\mathbb P^n$, and I don't know why the definition above is equivalent to the definition in Hartshorne. I really need help. I would be very grateful if someone could help me. Thanks a lot. AI: Equivalences have directions, not sides, so I'm not sure which one you're asking about. Is it "$\Leftarrow$"? If $X$ is open in $Y$, which is closed in $\mathbf{P}^n$, then $X$ is open in $Y\cap \overline{X}$. As $Y$ is closed and contains $X$, $Y$ contains $\overline{X}$, so $Y\cap \overline{X} = \overline{X}$; hence $X$ is open in $\overline{X}$. I don't know which Hartshorne book you're looking at, but the one I'm looking at defines on p. 10 a quasi-projective variety as an open subset of a projective variety, not of projective space.
H: Lebesgue measure - $\alpha$ be defined for arbitrary subsets of $X$ I'm trying to solve the following question: Let $X$ be a set and let $\alpha$ be defined for arbitrary subsets of $X$, with values in $\mathbb{R}$, and satisfy $0\leq\alpha(E)\leq\alpha(E\cup F)\leq \alpha(E)+\alpha(F)$ when $E$ and $F$ are subsets of $X$. Let $S$ be the collection of all subsets $E$ of $X$ such that $\alpha(A)=\alpha(A\cap E)+\alpha(A\cap E^{c})$ for all $A\subset X$. If $S$ is non-empty, it is an algebra and $\alpha$ is additive on $S$. I'm just having trouble showing that the empty set belongs to $S$. I get the inequality $\alpha(A)\leq\alpha(A\cap \varnothing)+\alpha(A\cap \varnothing^{c})$, but I'm not getting the opposite inequality. Can anybody help me? AI: From the 'measurability' property you have, for $E \in S$, $\alpha(A)=\alpha(A\cap E)+\alpha(A\cap E^{c})$ for all $A$. Set $A = \emptyset$, and let $E \in S$ (such an $E$ exists, since $S$ is non-empty by assumption). Then the above gives $\alpha(\emptyset)=\alpha(\emptyset)+\alpha(\emptyset)$, which gives $\alpha(\emptyset)=0$. Hence $\alpha(A\cap \varnothing)+\alpha(A\cap \varnothing^{c})=\alpha(\varnothing)+\alpha(A)=\alpha(A)$, so $\varnothing\in S$.
H: Finite calculus: Apply difference operator to generalized falling factorial $(ax+b)^{\underline m}$ The $m$th falling factorial power of $x$ is defined as $x^{\underline m}:=x(x-1)...(x-m+1),$ and the difference operator as $\Delta f(x) := f(x+1)-f(x).$ One fundamental statement in finite calculus is the identity $\Delta x^{\underline m} = m x^{\underline {m-1}}.$ However, there is a more general version (to be found, e.g. here, page 68, eq.(6)): Let $a,b\in\mathbb{Z}$, then $\Delta (ax+b)^{\underline m}=am(ax+b)^{\underline {m-1}}.$ Unfortunately (to my understanding) the author gives no clear justification. Is there a simple way to derive this from the previously mentioned $(a,b)=(1,0)$-case? The shift $b$ seems to be trivial, but I have no idea how to understand $a\neq 1$. Edit: As @BrianM.Scott pointed out correctly, the statement is false. As a final remark I would like to provide the correct version of the statement, which is true, to solve the misunderstanding of the book cited above. Define the more general $m$th falling factorial power with step $s$ as $x^{\underline {m,s}}:=x(x-s)(x-2s)...(x-(m-1)s),$ then $\Delta (ax+b)^{\underline {m,a}} = am(ax+b)^{\underline {m-1,a}}.$ This can be easily proven by induction on $m$. AI: It’s not true in general: $$\begin{align*} \Delta(2x)^{\underline2}&=(2(x+1))^{\underline2}-(2x)^{\underline2}\\ &=(2x+2)(2x+1)-2x(2x-1)\\ &=(4x^2+6x+2)-(4x^2-2x)\\ &=8x+2\\ &\ne 8x\\ &=2\cdot2(2x)^{\underline 1} \end{align*}$$
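Both the counterexample and the corrected step-$s$ identity are easy to verify symbolically (SymPy; this block is my addition, and `falling`/`delta` are illustrative helper names implementing the definitions above):

```python
import sympy as sp

x = sp.symbols('x')

def falling(expr, m, s=1):
    # generalized falling factorial with step s: expr (expr - s) ... (expr - (m-1)s)
    return sp.Mul(*[expr - j * s for j in range(m)])

def delta(f):
    # forward difference: f(x+1) - f(x)
    return f.subs(x, x + 1) - f

# the counterexample: Delta (2x)^{underline 2} = 8x + 2, not 2*2*(2x)^{underline 1} = 8x
print(sp.expand(delta(falling(2*x, 2))))   # → 8*x + 2

# the corrected identity, sampled at a = 2, b = 3, m = 3
a, b, m = 2, 3, 3
lhs = sp.expand(delta(falling(a*x + b, m, s=a)))
rhs = sp.expand(a * m * falling(a*x + b, m - 1, s=a))
print(lhs == rhs)   # → True
```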
H: Prove that $x$ is in the boundary of $A$ iff $x$ is an accumulation point of the complement of $A$, given $x$ is an isolated point of $A$ The complete question is the following: "Let $A$ be a subset of a metric space $X$ and let $x$ be an isolated point of $A$. Show that $x$ is a boundary point of $A$ iff $x$ is an accumulation point of $A^C$." I am having trouble with my proof. I have looked at the proof posted at isolated point in boundary and don't understand why the poster's statement "Using the latter fact, we see $x_0\in\operatorname{Acc}(A^c)$" is true. That is, why would having $B_r(x)\cap A^C \neq \emptyset$ imply $(B_r(x)\setminus\{x\})\cap A^C \neq \emptyset$? Thanks for any help! AI: You have to start by using the fact that $x$ is in $A$. In particular, there is an $R>0$ so that $B_r(x) \cap A = \{x\}$ whenever $r\leq R$ (i.e. $x$ is the only point in this ball that is in $A$; everything else is in $A^c$). But, because we're assuming $x \in \partial A$ for this direction, we also know that for every $r > 0$, $B_r(x)$ contains points in both $A$ and $A^c$. So, combined with the earlier observation, this means that the rest of the points in $B_R(x)$ (besides $x$) are in $A^c$. In particular, $(B_R(x)\setminus\{x\}) \cap A^c \neq \emptyset$.
H: Is a group cyclic with its generators A group (S, $\odot$) is called cyclic if there exists g $\in$ S such that for every a $\in$ S there exists an integer n such that a = g $\odot$ g ... $\odot$ g (n times). If such a g $\in$ S exists, it is called a generator. Is the group $\mathbb{Z}^{*}_{13}$= {1, 2 ... 11, 12} together with multiplication modulo 13 then cyclic? I can't seem to find the generators AI: Without some background in number theory, all you can do is try various numbers. Obviously $1$ doesn't work. So try $2$. We get lucky, $2$ works. For let us find the various positive powers of $2$, reduced modulo $13$. We get, in order, $$2,4,8,3,6,12,11,9,5,10,7,1.$$ Remark: There are other generators, a total of $4$ of them. It turns out that they are $2^1$, $2^5$, $2^7$, and $2^{11}$ (modulo $13$). At this early stage, if you want all the generators, it is best to compute. Let us test $3$. The various powers, modulo $13$, are $3$, $9$, $1$, and now things start all over again, so we certainly won't get everything. Next let's try $4$. The powers of $4$ will be the even powers of $2$, so we can look back on the work we did with $2$ and see that we will only get $4$, $3$, $12$, $9$, $10$, and $1$. Quite a few left. Let's try $5$. It turns out that $5$ is no good, because $5^4$ gives $1$. Continue.
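Brute force is quick in code, too (Python, my addition; `order` is an illustrative helper computing the multiplicative order of an element). Generators are exactly the elements of order $12$:

```python
p = 13

def order(g, p):
    # multiplicative order of g modulo p (g in 1..p-1, p prime)
    x, n = g, 1
    while x != 1:
        x = x * g % p
        n += 1
    return n

print([pow(2, k, p) for k in range(1, 13)])
# → [2, 4, 8, 3, 6, 12, 11, 9, 5, 10, 7, 1]

print([g for g in range(1, p) if order(g, p) == p - 1])
# → [2, 6, 7, 11]   (these are 2^1, 2^5, 2^7, 2^11 mod 13)
```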
H: Differentiating under integral for convolution I have a function $f\in L^1(\mathbb{R})$ and $g(x)=\dfrac{1}{2\sqrt{\pi t}}e^{-\frac{(at+x)^2}{4t}}$, where $a,t\in\mathbb{R}$, $t>0$. I want to show that $$\dfrac{d}{dx}\int_{-\infty}^\infty f(y)g(x-y)dy=\int_{-\infty}^\infty f(y)\dfrac{d}{dx}g(x-y)dy$$ Leibniz doesn't work since there's no continuity assumption on $f$. I'm thinking about using the dominated convergence theorem, but how would the proof go? AI: Since you multiply it with an $L^1$ function, it is sufficient to show that $g'(x)$ is bounded, say $\lvert g'(x)\rvert \leqslant M$. Then the dominated convergence theorem can be applied to the difference quotients $$\frac{(f\ast g)(x+h) - (f\ast g)(x)}{h} = \int_{-\infty}^\infty f(y) \frac{g(x+h-y)-g(x-y)}{h}\,dy,$$ whose integrands, by the mean value theorem, satisfy $\left\lvert f(y)\,\frac{g(x+h-y)-g(x-y)}{h}\right\rvert \leqslant M\lvert f(y)\rvert$ and hence are uniformly dominated by the integrable function $M\cdot\lvert f\rvert$.
H: Is this horse proof by induction okay? Let $P(n)$ be the statement "all horses in a set of n horses are of the same colour." Basis Step: Clearly, $P(1)$ is true. Inductive Hypothesis: Suppose that $P(k)$ is true for some arbitrary integer $k\geq 1$; that is, all horses are of the same colour. Inductive Step: We now prove that $P(k+1)$ is true. Consider any $k+1$ horses. Number these horses as $1,2,3,...,k+1$. By the inductive hypothesis, horses $1,2,...,k$ have the same colour. Also, by the inductive hypothesis, horses $2,3,...,k+1$ have the same colour. Because the set of the first $k$ horses and the last $k$ horses overlap, all $k+1$ horses must be of the same colour and we have shown $P(k+1)$ is true. Therefore, all horses in a set of $n$ horses are of the same colour, for all integers $n\geq1$. AI: Have you read this? It's from Wikipedia. Explanation: The argument above makes the implicit assumption that the two subsets of horses to which the induction assumption is applied have a common element. This is not true when n = 1, that is, when the original set (prior to either removal) only contains n+1 = 2 horses. Let the two horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same color (only horse B remains). If horse B is removed instead, this leaves a different set containing only horse A, which may or may not be the same color as horse B. The problem in the argument is the assumption that because each of these two sets contains only one color of horses, the original set also contained only one color of horses. Because there are no common elements (horses) in the two sets, it is unknown whether the two horses share the same color. The proof forms a falsidical paradox; it seems to show something manifestly false by valid reasoning, but in fact the reasoning is flawed
H: Approximating the Indicator Function Using Continuous Functions How do you approximate the indicator function $1_{A}(x)$ (which equals $1$ if $x\in A$ and $0$ if $x \notin A$) using continuous functions? AI: EDIT: I realized I was assuming that you want to approximate a simple function $\sum_i c_i \chi_{E_i}$ by a continuous function $f$, though that may not be the case. Still, if you want to do this, the method I know is to replace each jump by a line segment: near the right endpoint of each $E_i$, join the constant value $c_i$ on $E_i$ to the constant value $c_{i+1}$ on $E_{i+1}$ by a short line segment, and likewise join $c_i$ to $c_{i-1}$ near the left endpoint.
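As a concrete instance of that idea (Python, my addition; `ramp_indicator` is an illustrative helper name): for a single interval $[a,b]$, replace each jump of $\chi_{[a,b]}$ by a linear ramp of width $\varepsilon$. As $\varepsilon \to 0$ these continuous functions converge to the indicator pointwise away from the endpoints, and in $L^1$.

```python
def ramp_indicator(a, b, eps):
    """Continuous approximation of the indicator of [a, b]:
    equals 0 outside (a - eps, b + eps), 1 on [a, b], linear in between."""
    def f(x):
        if x <= a - eps or x >= b + eps:
            return 0.0
        if a <= x <= b:
            return 1.0
        if x < a:
            return (x - (a - eps)) / eps   # ramp up on [a - eps, a]
        return ((b + eps) - x) / eps       # ramp down on [b, b + eps]
    return f

f = ramp_indicator(0.0, 1.0, 0.1)
print(f(-1.0), f(0.5), f(2.0))   # → 0.0 1.0 0.0
print(f(-0.05))                  # halfway up the ramp, approximately 0.5
```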
H: Conclusion about limit definition of e^a for a sequence of real numbers {a_n} converging to a? I have seen this fact used in several demonstrations, but have never seen a proof of it. I believe the statement is: If $\{a_n\}$ is a sequence of real numbers such that $a_n \rightarrow a$ finite, then $(1 + \frac{a_n}{n} )^n \rightarrow e^a $. Any help or references appreciated! AI: A simple approach starts from the fact that $\mathrm e^{x-x^2}\leqslant1+x\leqslant\mathrm e^x$ for every $x\geqslant-\frac12$ and that $\frac{a_n}n\geqslant-\frac12$ for every $n$ large enough, hence, applying this to $x=\frac{a_n}n$ for every $n$ such that $\frac{a_n}n\geqslant-\frac12$ yields $$ \mathrm e^{nx-nx^2}\leqslant\left(1+\frac{a_n}n\right)^n\leqslant\mathrm e^{nx}, $$ that is, $$ \mathrm e^{a_n-a_n^2/n}\leqslant\left(1+\frac{a_n}n\right)^n\leqslant\mathrm e^{a_n}. $$ Since $a_n\to a$ and $\frac{a_n^2}n\to0$, the LHS and the RHS both converge to $\mathrm e^a$.
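A quick numerical illustration (Python, my addition; the sequence $a_n = a + 1/n$ is just an arbitrary example converging to $a$):

```python
import math

a = -0.7
errs = []
for n in (10**2, 10**4, 10**6):
    a_n = a + 1.0 / n                      # a sample sequence with a_n -> a
    errs.append(abs((1 + a_n / n)**n - math.exp(a)))
print(errs[0] > errs[1] > errs[2])         # → True: the error shrinks as n grows
```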
H: Compact metric spaces is second countable and axiom of countable choice Why we need axiom of countable choice to prove following theorem: every compact metric spaces is second countable? In which step it's "hidden"? Thank you for any help. AI: Usually the proof would go like this: For every $n$ define $\mathcal U_n=\{B(x,\frac1n)\mid x\in X\}$, this is clearly an open cover so it has a finite subcover $\mathcal V_n$. Finally we can show that $\bigcup\mathcal V_n$ is a basis for the topology. Here used twice countable choice: We chose a finite subcover for every $n$. We took the union of countably many finite sets, each with more than one element. Then we claim that this union is countable. Both things are a consequence of the axiom of countable choice, and cannot be proved in general without it. In the following paper the authors show that for compact metric spaces the statement that the space is separable is equivalent to the statement that it is second-countable. This makes it easier to find a counterexample, as non-separable compact metric spaces are easier to come by. Keremedis, Kyriakos; Tachtsis, Eleftherios. "Compact metric spaces and weak forms of the axiom of choice." MLQ Math. Log. Q. 47 (2001), no. 1, 117–128. Models where these fail (i.e. there is a compact metric space which is not second-countable) are also given.
H: Inverse eigenvalue of a linear transformation $T$ is an invertible linear transformation, and $\lambda$ is an eigenvalue of $T$. How do I prove that $\lambda^{-1}$ is an eigenvalue for $T^{-1}$? I know for a matrix, I can use the fact that $Av=\lambda v$, but how does a linear transformation work? AI: If $\lambda\ne0$ (and we know that the eigenvalues of an invertible matrix are all different from $0$) then $$Av=\lambda v\iff A^{-1}A v=A^{-1}\lambda v\iff \frac{1}{\lambda }v=A^{-1} v$$ hence $\frac{1}{\lambda }$ is an eigenvalue for $A^{-1}$. The same computation works verbatim for an abstract linear transformation: apply $T^{-1}$ to both sides of $Tv=\lambda v$ and divide by $\lambda$.
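In matrix form this is easy to check numerically (NumPy, my addition; the $2\times 2$ example matrix is arbitrary):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])               # triangular, so eigenvalues are 2 and 3

eig_A = np.linalg.eigvals(A)
eig_Ainv = np.linalg.eigvals(np.linalg.inv(A))

# the eigenvalues of A^{-1} are exactly the reciprocals of those of A
print(np.allclose(np.sort(eig_Ainv), np.sort(1 / eig_A)))   # → True
```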
H: How to show $|(1-z)e^z| \geq e^{-|z|^2}$ for all $|z| \leq 1/2$. I'm trying to prove the above inequality, but keep running into difficulties. It seems breaking up the RHS into $e^{-x^2}e^{-y^2}$ doesn't help much, and similarly writing LHS $\geq |e^{z}| - |z||e^{z}|$ isn't useful. Any help would be appreciated. AI: We can take the logarithm of $(1-z)e^z$ using the principal branch, which gives $$\log \left((1-z)e^z\right) = z + \log (1-z) = z - \sum_{k=1}^\infty \frac{z^k}{k} = - \sum_{k=2}^\infty \frac{z^k}{k}.$$ Now, to get the logarithm of the modulus of $(1-z)e^z$, we take the real part. Bounding the series termwise by absolute values, we find $$\begin{align} \log \left\lvert (1-z)e^z\right\rvert &= \operatorname{Re} \log \left((1-z)e^z\right)\\ &= - \operatorname{Re}\sum_{k=2}^\infty \frac{z^k}{k}\\ &\geqslant - \sum_{k=2}^\infty \frac{\lvert z\rvert^k}{k}\\ &\geqslant - \frac{\lvert z\rvert^2}{2} \sum_{\nu=0}^\infty \lvert z\rvert^\nu\\ &= - \frac{\lvert z\rvert^2}{2} \frac{1}{1-\lvert z\rvert}\\ &\geqslant - \lvert z\rvert^2, \end{align}$$ since $\dfrac{1}{1-\lvert z\rvert} \leqslant 2$ for $\lvert z\rvert \leqslant \frac12$. Exponentiating gives $\lvert(1-z)e^z\rvert \geq e^{-\lvert z\rvert^2}$.
H: Help with limit of trigonometric function $$\lim_{x \to 0}\frac{x \csc 10x}{\cos20x}$$ I'm unsure of how to solve this. I think if I were to simplify it, it would be: $$\lim_{x \to 0}\frac{x \csc 10x}{\cos20x} = \lim_{x \to 0}(\frac{1}{x\sin10x})\div(\frac{1}{\cos20x})$$ $$\implies\lim_{x \to 0}\frac{1}{x\sin 10x}\times\frac{\cos20x}{1}=\lim_{x \to 0}\frac{\cos20x}{x\sin 10x}.$$ Is this correct? How do I proceed? Thanks. AI: Not quite: $\csc 10x=\frac{1}{\sin 10x}$, so $x\csc 10x=\frac{x}{\sin 10x}$, not $\frac{1}{x\sin 10x}$. With that corrected, $$\lim_{x \to 0}\frac{x \csc 10x}{\cos20x}=\lim_{x \to 0}\frac{x}{\sin10x\cos20x}=$$ $$=\lim_{x \to 0}\frac{10x}{\sin10x}\cdot\frac{1}{10\cos20x}=\frac{1}{10}$$
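Numerically the limit $\frac1{10}$ is visible immediately (Python, my addition):

```python
import math

f = lambda x: x / (math.sin(10 * x) * math.cos(20 * x))
for x in (1e-2, 1e-4, 1e-6):
    print(f(x))   # the values approach 1/10 as x -> 0
```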
H: Show that for any $n \in \mathbb{Z}$, $n^3$ is congruent to $0,1,-1$ modulo $9$. Having a little difficulty with this one. Tried thinking of going down the line of even/odd proofs, but couldn't get anywhere. AI: Any integer has exactly one of the forms $$3k,\qquad 3k+1,\qquad 3k-1.$$ Cubing each gives $$27k^3,\qquad 27k^3 + 27k^2 + 9k + 1,\qquad 27k^3 - 27k^2 + 9k -1.$$ In each case the terms $27k^3$, $27k^2$ and $9k$ are divisible by $9$, so the cube is congruent to $0$, $1$ or $-1$ modulo $9$.
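A direct check over a range of integers (Python, my addition) confirms the only cubic residues mod $9$ are $\{0, 1, 8\}$, i.e. $0$, $1$, $-1$:

```python
# cubes of integers from -50 to 50, reduced mod 9 (Python's % is always non-negative)
residues = sorted({n**3 % 9 for n in range(-50, 51)})
print(residues)   # → [0, 1, 8]   (and 8 ≡ -1 mod 9)
```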
H: Existence of differential form on a manifold I have a fundamental question about the existence of differential forms on manifolds. A $k$-form on a manifold in local coordinates looks like $f(x_1,...,x_n)dx_{i_1}...dx_{i_k}$, where $f$ is a smooth function. Given any smooth function $f$, is $f(x_1,...,x_n)dx_i$ a $1$-form for every $i$? In general does the existence of a smooth function ($0$-form) imply the existence of a $k$-form for all $k<n+1$? I can just multiply the function by the desired number of $dx$'s. Thanks AI: Unfortunately we can't multiply a given form or a function globally by $dx_j$. For one, this is because such a form is nowhere zero: in local coordinates it would be equal to $dx_j$, which is most certainly nonzero. However, there are manifolds on which any differential $1$-form must be zero at some point, like the $2$-dimensional sphere (see the hairy ball theorem). In general, the existence of a nowhere zero differential form $u$ on a manifold $M$ means that its cotangent bundle contains a trivial subbundle: If $T^*_M$ is the cotangent bundle then it must contain the trivial line bundle $\mathbb R u$. Similarly, if we have $k$ differential forms $u_1, \ldots, u_k$ that are linearly independent everywhere on $M$, then $T^*_M$ has a trivial subbundle of rank $k$. Similar things apply to degree-$p$ nonzero forms, with $\bigwedge^p T^*_M$ instead of $T^*_M$. Now, it is true that any manifold admits plenty of differential forms of any degree. For a cheap example, take a local coordinate chart $U$ of your manifold. Construct a smooth function $\theta$ supported on $U$, i.e. a smooth function that is not identically zero but whose support is contained in $U$. We can then extend $\theta$ by zero as a smooth function to all of $M$. Similarly, we can extend the smooth $p$-form $\theta(x) dx_1 \wedge \ldots \wedge dx_p$ defined on $U$ by zero to a smooth form on all of $M$.
This may look like cheating, but we do get in this way forms that are nonzero on most of $M$, by covering $M$ with charts on which we have these kinds of forms and taking linear combinations thereof. For less artificial examples, you can look at the volume forms induced by (pseudo)-Riemannian metrics, symplectic forms on symplectic manifolds, Kahler and curvature forms on complex manifolds and so on.
H: Showing a matrix can't be factored into unit lower triangular matrix and upper triangular matrix I'm trying to show the following matrix cannot be factored into the product of a unit lower triangular matrix and an upper triangular matrix. $$\pmatrix{ 2 & 2 & 1 \\ 1 & 1 & 1 \\ 3 & 2 & 1}$$ I'm trying different row operations, attempting to get A into an upper triangular matrix, and can't. Is there an explicit way to show that it can't be factored into those two matrices? AI: $$M=\pmatrix{ 2 & 2 & 1 \\ 1 & 1 & 1 \\ 3 & 2 & 1}$$ Suppose it can be factored $$M=\begin{pmatrix}1&&\\a&1&\\b&c&1\end{pmatrix}\begin{pmatrix}d&e&f\\&g&h\\&&i\end{pmatrix}=\begin{pmatrix}x_{i,j}\end{pmatrix}$$ $2=x_{1,1}=d$ so $d = 2$ $1=x_{2,1}=ad$ so $a=\frac{1}{2}$ $2=x_{1,2}=e$ so $e=2$ $1=x_{2,2}=ea+g=2\cdot\frac{1}{2}+g=1+g$ so $g=0$ This is absurd: the factorization would force $\det M = \det L \cdot \det U = d\,g\,i = 0$, but in fact $\det M = 1 \not= 0$.
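An alternative way to see this (my addition): for an invertible matrix, an LU factorization with unit lower triangular $L$ exists iff all leading principal minors are nonzero, and here the $2\times 2$ leading minor vanishes. Checking both determinants numerically (NumPy):

```python
import numpy as np

M = np.array([[2.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [3.0, 2.0, 1.0]])

print(np.linalg.det(M[:2, :2]))        # → 0.0   (2*1 - 2*1: the obstruction)
print(round(np.linalg.det(M)))         # → 1     (M itself is invertible)
```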
H: Quotients of the Ordered Square Let $[0,1]^2$ be the ordered square; i.e. it has the order topology given by the dictionary order. This is a first countable compact space. Let $\Delta=\{(x,x)\mid x\in [0,1]\}$. Then is the quotient space $[0,1]^2/\Delta$ first countable? It isn't hard to see that there is a countable basis at every point of $[0,1]^2/\Delta$ that isn't the equivalence class of the diagonal $\Delta$, but I can't seem to figure out whether the equivalence class $[\Delta]$ has a countable basis or not. I feel inclined to say that it does not, as by the fact that $[0,1]^2/\Delta$ is a quotient space, if $q:[0,1]^2\rightarrow [0,1]^2/\Delta$ is the quotient map that sends each element to its equivalence class, $U\subset [0,1]^2/\Delta$ is open iff $p^{-1}(U)$ is open. In particular, $U$ is a neighborhood of $[\Delta]$ in $[0,1]^2/\Delta$ iff $p^{-1}(U)$ is an open set containing $\Delta$. EDIT: $\Delta$ is not closed. If this quotient space is first-countable, do there exist quotient spaces of $[0,1]^2$ that are not first-countable? AI: Let $q:[0,1]^2\to[0,1]^2/\Delta$ be the quotient map, and let $p$ be the point in the quotient that corresponds to $\Delta$. Suppose that $U\subseteq[0,1]^2/\Delta$ is an open nbhd of $p$; then $q^{-1}[U]$ must be an open nbhd of $\Delta$ in the lexicographically ordered square. This means that there must be functions $f_U:(0,1)\to(0,1)$ and $g_U:(0,1)\to(0,1)$ such that: $f_U(x)<x<g_U(x)$ for each $x\in(0,1)$, and $\{x\}\times(f_U(x),g_U(x))\subseteq q^{-1}[U]$ for each $x\in(0,1)$. Use these functions $g_U$ and $f_U$ to show that if $\{U_n:n\in\Bbb N\}$ is any countable family of open nbhds of $p$, there is an open nbhd $V$ of $p$ such that $U_n\setminus V\ne\varnothing$ for each $n\in\Bbb N$; this shows that no countable family of open nbhds of $p$ can be a base at $p$.
H: Is $\mathbb{Z}_p$ a Finite Field? Denote the integers modulo $p$, $\mathbb{Z}$ mod $p$, as $\mathbb{Z}_p$. Denote the set of integers equivalent to $n$ mod $p$ (the equivalence class of $n$) as $\overline{n}$. We know that for any prime $p$, $\mathbb{Z}_p$ is a field. A finite field is one that contains a finite number of elements, and $\mathbb{Z}_p$ has elements $\overline{0}, \overline{1}, \ldots, \overline{p-1}$, which is obviously a finite set. So is $\mathbb{Z}_p$ a finite field? (Could it be the case that $\mathbb{Z}_p$ is not a finite field, because though $\mathbb{Z}_p$ contains only $p$ elements, every one of these elements is an infinite set?) AI: Yes, it is a finite field. It doesn't matter that the elements of that field are themselves (infinite) sets: the field has exactly $p$ elements, and the "element of" relation is not transitive [in general, there are sets with the property that $x\in y \in T \Rightarrow x\in T$, but this is not one of them].
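A brute-force illustration (not a proof): every nonzero class in $\mathbb{Z}_p$ has a multiplicative inverse when $p$ is prime, and this fails for composite moduli.

```python
# Z_n is a field iff every nonzero residue has a multiplicative
# inverse mod n; try all candidates by exhaustive search.
def has_all_inverses(n):
    return all(any(a * b % n == 1 for b in range(1, n))
               for a in range(1, n))

print(has_all_inverses(7))   # True: Z_7 is a (finite) field
print(has_all_inverses(6))   # False: 2 has no inverse mod 6
```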
H: Prove there's another subsequential limit $a_n$ is a sequence such that: $\forall n \in \mathbb{N} :|{a_{n + 1}} - {a_n}| < 1$ $\{ 0,2\} \subseteq P({a_n}) $ (the set of subsequential limits). Prove there's another subsequential limit, $L \ne 0,2$. I'll be glad for help here. AI: Let $B=(-\infty, \frac{1}{2})$, $M=[\frac{1}{2}, \frac{3}{2}]$ and $T=(\frac{3}{2}, \infty)$. Let $I_B = \{ n \mid a_n \in B\}$, and similarly for $I_M,I_T$. Since $0$ and $2$ are subsequential limits, both $I_B$ and $I_T$ are infinite. I claim that $I_M$ is infinite as well. Take some $N$. Then there are $n_1,n_2 \ge N$ such that $a_{n_1} \in B$ and $a_{n_2} \in T$. Let $k$ be the first index greater than $n_1$ such that $a_k \in T$. Since $a_k > \frac{3}{2}$ and $|a_{k-1}-a_k| < 1$, we have $a_{k-1} > \frac{1}{2}$, so $a_{k-1} \notin B$; and $a_{k-1} \notin T$ by the minimality of $k$ (note $k-1 > n_1$, since $a_{n_1} \in B$ and $a_k \in T$ differ by more than $1$). Hence $a_{k-1} \in M$, with $k-1 \ge N$. Since $N$ was arbitrary, $I_M$ is infinite. Since $M$ is compact, the subsequence indexed by $I_M$ has a convergent sub-subsequence whose limit lies in $M$, and that limit is neither $0$ nor $2$.
H: Prove that the set of all points on a sphere is uncountable Let $S=\{(x,y,z): x^2+y^2+z^2=4\}$ be the set of points on a sphere. Prove $S$ is uncountable. Attempt: Each coordinate is between $-2$ and $2$, i.e. $-2\le x \le 2$, $-2\le y \le 2$, $-2\le z \le 2$. So if I prove that the set $A=\{a \in \mathbb{R}: -2 \le a \le 2\}$ is uncountable, then $A\times A\times A$ is also uncountable. Then, I would try to find a one-to-one function between $A\times A\times A$ and $S$, i.e. $f: A\times A\times A \rightarrow S$, which will prove that $S$ is uncountable. The difficulties I have are: how do I prove that $A$ is uncountable? Would the function $f(a,a,a)=(x,y,z)$ work? Is it the right approach? Thank you. AI: Note that the circle $$C = \{ (x,y,0) : x^2 + y^2 = 4 \}$$ is a subset of $S$ (set $z=0$). The map $\phi \mapsto (2\cos \phi, 2\sin \phi, 0)$ is a bijection from $[0, 2\pi)$ onto $C$, and $[0,2\pi)$ is uncountable (e.g. by Cantor's diagonal argument applied to any nondegenerate real interval). A set with an uncountable subset is itself uncountable, so $S$ is uncountable.
H: Hankel transformation of acoustic wave equation We consider a simplified version of the acoustic wave equation \begin{equation} \frac{\partial^2 p}{\partial r^2}+\frac{1}{r}\frac{\partial p}{\partial r}+\frac{\partial^2 p}{\partial z^2}+k^2 p=\frac{1}{r} \delta(r) \delta(z-z_0) \end{equation} where $p$ is a complex quantity, defined by $P=p \exp(-i\omega t)$, $\omega$ is the circular frequency, $k=\omega/c$ is the wavenumber, $c$ is the sound speed, $z$ is the depth below the ocean surface, and $r$ is the radius. The formulation is in cylindrical coordinates because the ocean may be treated as a cylindrical waveguide over relatively large distances. The article (which I am studying) says that by applying the Hankel transformation we obtain \begin{equation} \frac{\partial^2 q}{\partial z^2}+(k^2-h^2) q=\delta(z-z_0) \end{equation} I do not know the Hankel transformation. Any suggestions for understanding the transition between the two formulas are welcome. Thank you very much. AI: The Hankel transform takes the form $$\hat{f}(\rho) = \int_0^{\infty} dr \, r J_0(\rho r) \, f(r)$$ One may derive this from the two-dimensional Fourier transform, assuming that the functions are radially symmetric (i.e., $f(x,y)=f(r)$), as follows: $$\begin{align}\hat{f}(u,v) &= \int_{-\infty}^{\infty} dx \, \int_{-\infty}^{\infty} dy \, e^{i (u x+v y)} f(x,y)\\ &= \int_0^{\infty} dr \, r f(r) \int_0^{2 \pi} d\theta \, e^{i \rho r \cos{(\theta-\phi)}} \end{align}$$ where $u=\rho \cos{\phi}$, etc. The Bessel function comes from that last inner integral, which equals $2\pi J_0(\rho r)$; conventions differ by factors of $2\pi$, and in the symmetric normalization used here that factor is dropped from the transform pair, which is what reproduces the article's result.
The inverse is $$f(r) = \int_0^{\infty} d\rho \, \rho J_0(\rho r) \, \hat{f}(\rho)$$ Now, the Bessel function obeys the following relation: $$\left (\frac{d^2}{dr^2} + \frac{1}{r} \frac{d}{dr} \right ) J_0(\rho r) = -\rho^2 J_0(\rho r)$$ and we also have the closure relation $$\frac{\delta(r)}{r} = \int_0^{\infty} d\rho \, \rho J_0(\rho r)$$ So, writing $p$ as the inverse Hankel transform of $q$ and substituting into the differential equation, we find that $$\int_0^{\infty} d\rho \, \rho J_0(\rho r) \, \left [\frac{d^2 q}{dz^2} + (k^2-\rho^2) q - \delta(z-z_0) \right ] = 0$$ Note that the $h$ in your equation is the radial frequency variable $\rho$ in what I have derived. As the integral vanishes identically in $r$, the quantity inside the brackets must be zero, which is precisely the second equation.
H: Number of equivalence relations with a fixed size How can I find the number of equivalence relations R on a set of size 7 such that |R|=29? Any advice would be greatly appreciated! :D AI: HINT: Let $A$ be the set of size $7$. Think about the partition corresponding to the equivalence relation $R$. Suppose that it has parts $P_1,\ldots,P_n$ of sizes $m_1,\ldots,m_n$, respectively. For each $k=1,\ldots,n$, the part $P_k$ contributes $m_k^2$ ordered pairs to $R$. (Why?) Thus, we want $$29=|R|=\sum_{k=1}^nm_k^2\;,$$ where we know that $$\sum_{k=1}^nm_k=7\;.$$ In other words, we need to find sets of integers summing to $7$ whose squares sum to $29$. One obvious set is $\{2,5\}$: $2+5=7$, and $2^2+5^2=29$. I leave you to check whether there are any others. Now consider the partitions that have two classes, one of size $2$ and the other of size $5$. How many of them are there? Note that once you decide which $2$ elements of $A$ are in one class, you know exactly which $5$ are in the other class.
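A brute-force check of the hint (not part of the original answer): enumerate all set partitions of a 7-element set and count those whose block sizes $m_k$ satisfy $\sum_k m_k^2 = 29$.

```python
# Enumerate set partitions recursively: the first element either
# starts its own block or joins a block of a partition of the rest.
def set_partitions(elems):
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for p in set_partitions(rest):
        yield [[first]] + p
        for i in range(len(p)):
            yield p[:i] + [[first] + p[i]] + p[i + 1:]

count = sum(1 for p in set_partitions(list(range(7)))
            if sum(len(block) ** 2 for block in p) == 29)
print(count)  # 21, i.e. C(7,2): choose the 2 elements of the small class
```

There are only 877 set partitions of a 7-element set (the Bell number $B_7$), so this is instant.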
H: What are the differences between alignment and collinear? I'm reading a chapter on the orthogonal complement of the dual space in optimization by vector space methods. The author introduced a definition of alignment as follows: where $X^*$ means the dual of $X$ and $\langle x , x^* \rangle$ denotes a functional as follows: What are the differences between alignment and collinear? Can you give me more examples? Thank you^_^ P. S. Riesz-Frechet: Riesz Representation Theorem: AI: The dual space is not an element of the underlying space, so, in general, it doesn't make sense to talk about collinearity between a vector and a functional. Informally, however, alignment is basically the same idea, taking the different spaces into account. For Hilbert spaces, the Riesz representation theorem shows that the space and its dual are isomorphic (antilinearly, depending on your taste), so we can identify the dual with the underlying space, as in the case of $\mathbb{C}^n$, for example. For an example, take $X= l_1$; then $X^* = l_\infty$ (or, at least, can be identified with it). Define $s \in X^*$ by $s(x) = \sum_n x_n$. Then $s$ is aligned with $x$ iff $s(x) = \|s\| \|x\|$. Since $\|s\| = 1$, $s$ and $x$ are aligned iff $s(x) = \|x\|$, which is easily seen to be equivalent to $x_n = |x_n|$ for all $n$. Note that (as an element of $l_\infty$) we have $s=(1,1,...)$, so we cannot identify $s$ with any element of $X$. Luenberger's book has an example involving the dual of $C[0,1]$ which illustrates the point a little more. As an analog of the above, consider the continuous linear functional $f$ on $C[0,1]$ given by $f(x) = \int_0^1 x(t)dt$ (that is, the average value of $x$). It is easy to see that $\|f\| = 1$, so to look for aligned points, we solve $f(x) = \|x\|$. Since $x$ is continuous, we see that $\int_0^1 x(t) dt = \max_t |x(t)|$ iff $x(t) = \max_t |x(t)|$ for all $t$; the aligned elements are the constant functions whose value is non-negative.
Another example (well, not exactly) in $C[0,1]$ is $f(x) = \int_0^\frac{1}{2} x(t) dt- \int_\frac{1}{2}^1 x(t) dt$. A little work shows that $\|f\| = 1$ and $|f(x)| < \|x\|$ for all $x \neq 0$, and so there are no non-zero vectors that are aligned with $f$.
H: Probability of a event in time I consistently see the probability P(random variable at time t) = F(random variable before t-1) - F(random variable before t) or some form of this. How do I verify this? (Sorry if the notation is goofy) Edit: F is the cdf AI: I assume here that $F$ is meant to denote some sort of cumulative distribution function for some random variable; it's hard to say, though, because you don't actually have a random variable. If I was going to more formally write down what I think you are trying to say, I'd say your setup is something like this: Let $A_1,A_2,\ldots$ be events in some probability space $(\Omega,\mathcal{F},P)$. Let $T$ be the minimum number $t$ such that event $A_t$ occurs. More formally: for $\omega\in\Omega$, define $$ T(\omega)=\begin{cases}\infty & \text{if }\omega\notin\bigcup_{t=1}^{\infty}A_t\\\min\{t:\ \omega\in A_t\} & \text{else}\end{cases}. $$ $T$ is now a well-defined random variable. Let $F$ be its cumulative distribution function: $$ F(t):=P(T\leq t),\qquad t\in\mathbb{N}. $$ Notice that because only natural-numbered $T$ are allowed, $$ P(T\leq t)=\sum_{j=1}^{t}P(T=j)=\sum_{j=1}^{t-1}P(T=j)+P(T=t)=P(T\leq t-1)+P(T=t); $$ but $P(T\leq t)=F(t)$ and $P(T\leq t-1)=F(t-1)$. So, this says $$ F(t)=F(t-1)+P(T=t), $$ or $$ P(T=t)=F(t)-F(t-1). $$ (This result is, of course, completely false if your random variable is allowed to take arbitrary real values instead of discrete integer values.)
H: how do I compute the eigenvectors for spectral clustering from a singular value decomposition? I am implementing spectral clustering following A tutorial on spectral clustering. After preparing the Laplacian matrix $L^{n \times n}$, I compute the Singular Value Decomposition $U \Sigma V^{*}$. From $\Sigma$ I extract the eigenvalues, and I choose $k$ (which according to the paper are the $k$ smallest ones) with the eigengap heuristic. Now, the algorithm (page 7) requires to compute the first $k$ eigenvectors $u_{1}, ..., u_{k}$ of the generalized eigenproblem $L u = \lambda D u$, which basically means the first eigenvectors of $L$. From this, I need to build the matrix $N \in R^{n \times k}$ containing the vectors $u_{1},...,u_{k}$ as columns. My question is, given the computed SVD, how do I build $N$, hence the first k eigenvectors of $L$? AI: For a real symmetric positive semidefinite matrix, such as the Laplacian of a graph, the singular value decomposition coincides with the eigendecomposition: $$L = P D P^t$$ where $D$ is a diagonal matrix with the eigenvalues of $L$ along the diagonal and $P$ is the matrix whose columns are the corresponding eigenvectors of $L$. One caveat: the SVD conventionally lists the singular values in decreasing order, so the eigenvectors belonging to the $k$ smallest eigenvalues are the last $k$ columns of $U$ (up to the usual sign ambiguity of eigenvectors); those columns form your matrix $N$.
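In practice it is simpler to skip the SVD and ask for the eigendecomposition directly; NumPy's `eigh` (assumed available here) returns the eigenvalues of a symmetric matrix in ascending order, so $N$ is just a column slice. A sketch on a toy graph:

```python
import numpy as np

# Toy graph: two triangles joined by a single bridge edge.
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
n = 6
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # unnormalized graph Laplacian

# eigh: ascending eigenvalues, orthonormal eigenvectors as columns.
k = 2
vals, vecs = np.linalg.eigh(L)
N = vecs[:, :k]                          # n x k matrix with columns u_1..u_k
print(np.round(vals[:k], 3))             # smallest eigenvalue is 0
```

If you do start from an SVD of the (positive semidefinite) Laplacian, remember its singular values come out in decreasing order, so the $k$ smallest eigenpairs correspond to the last $k$ columns of $U$.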
H: Finding positive integer solutions to $3^x + 55=y^2$ I think the number of solutions must be finite, and $y$ is always even, but I don't know how to continue. edit: with $x,y\in\mathbb Z$ AI: Hint: Work modulo $4$: since $y^2 \equiv 0$ or $1 \pmod 4$ and $55 \equiv 3 \pmod 4$, the odd number $3^x = y^2 - 55$ must satisfy $$3^x \equiv (-1)^x \equiv 1 \pmod{4} \Rightarrow x =2k$$ Then $$(y-3^k)(y+3^k) =55$$ Now all you have to do is check all possible factorizations of $55$: $1 \times 55$ or $5 \times 11$.
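A quick brute-force confirmation over a finite range (the factorization argument above pins down all positive solutions):

```python
from math import isqrt

# Search 3^x + 55 = y^2 for small x; isqrt gives the integer square root.
solutions = [(x, isqrt(3 ** x + 55))
             for x in range(1, 50)
             if isqrt(3 ** x + 55) ** 2 == 3 ** x + 55]
print(solutions)  # [(2, 8), (6, 28)]
```

These match the two factorizations: $5 \times 11$ gives $y=8$, $3^k=3$, i.e. $(x,y)=(2,8)$, and $1 \times 55$ gives $y=28$, $3^k=27$, i.e. $(x,y)=(6,28)$.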
H: The Magic Hat Puzzle ...Vinnie's number is always one above or below Tommy's... Imagine there is a hat sitting on the table. And there are two contestants. You, Tommy, will be one of the contestants, and we'll call the other one Vinnie. You reach into the hat, and pull out a number. Then, Vinnie does the same. Now, the reason this hat is magical is that, no matter what number you pull out, Vinnie will always pull out a number that is either one above or one below your number. For example, if you pull out a two, you know Vinnie has pulled out either a one or a three. To make it simple, we'll limit the numbers to between one... and infinity. So, each of you pulls out a number. Let's say you pick three and Vinnie picks two. I'm the moderator, and I ask Tommy, "Do you know what number Vinnie has?" Tommy looks at his number, which is 3, and says, "No, I don't." I then ask Vinnie, "Do you know what number Tommy has?" He looks at his number two and says, "Yes." He knows Tommy has to have a 3. Regardless of the numbers that are picked, and assuming that both contestants answer truthfully, if I keep asking the question of both of them, eventually one contestant will know what number the other contestant has. In other words, if I ask Tommy, then ask Vinnie, then Tommy again... then Vinnie again... eventually one of them will know the other's number. The question is this: How can this be proven in a mathematically rigorous fashion? This question was taken from the Car Talk Puzzler Archive and is located at http://www.cartalk.com/content/magic-hat-0?question AI: This is known as Conway's Paradox, and you can find a mathematically rigorous solution here.
H: Price-Demand, Marginal-price and other financial jargon So my book likes to assume that I already have a business degree while I'm learning calculus, so I need your help to clarify my book's question. It asks: Price-demand equation. The marginal price for a weekly demand of x bottles of shampoo in a drugstore is given by: $p'(x)=\frac{-6000}{(3x+50)^2}$ Find the price-demand equation if the weekly demand is 150 when the price of a bottle of shampoo is 8. What is the weekly demand when the price is 6.50? What exactly is being asked? I integrated the marginal price function to get a price function (I suppose), but now what? My assumed price function: $p(x)=\frac{2000}{(3x+50)}$ AI: The question in your textbook indeed doesn't make sense in economic terms, but it sort of makes sense in the context of it being a calculus textbook. In economics and finance, "marginal" refers to the derivative. For example, the marginal cost function is defined as the amount by which the total cost changes when you produce an additional unit of the good. The cost function $C(Q)$ gives the cost of producing $Q$ units. In the context of calculus, provided that the cost function is differentiable, the marginal cost function is the derivative of the cost function $C(Q)$ with respect to $Q$, $\frac{dC}{dQ}$. The reason I brought up the cost function is that it is sometimes called the price function. However, while your calculation of the anti-derivative is correct, usually the total cost function is an increasing function of the number of units produced. So, I think that your textbook is mis-using the term cost function. The price-demand equation maps the price per unit of the good to the number of consumers willing to buy the good at that price (we assume that each consumer buys a single unit of the good). This is also known as the demand curve. Now, I will guess what your calculus book question means (and this is only a guess).
I think what your book means by a "price function" is a "demand function", which maps the number of consumers willing to buy the good (in your case shampoo) to the price of the good, with demand curves for most goods decreasing (I am very confident that in the great majority of markets shampoo has a decreasing demand curve). I guess (and that's just a guess) that the book's price function, however, maps the number of units of shampoo sold to the "producer's cost" of the shampoo. So, plugging in 150, we get the "price" of $\frac{2000}{3\times150+50}=4$. The price of the shampoo as sold to the store might be, as you had guessed in the comment, $P(x)=\frac{2000}{3x+50}+C$, which makes sense in the context of this being a calculus textbook whose goal it is to teach you the concept of the integration constant and not necessarily the laws of supply and demand. But really, had we not known that this is a calculus textbook, any function $f$ satisfying $f(4)=8$ would be a valid map from the "producer's cost" to the "price in the store"... I hope that I didn't confuse you further, and you seem to have figured this out on your own. I think your teacher/professor will accept your answer in the context of the calculus class.
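For what it's worth, here is the standard calculus-book reading worked out numerically. This assumes (as guessed above) that the intended answer is $p(x)=\frac{2000}{3x+50}+C$, with $C$ fixed by the condition $p(150)=8$:

```python
# Pin down the integration constant from p(150) = 8, then invert
# the price-demand equation to find the demand at price 6.50.
def p(x, C):
    return 2000 / (3 * x + 50) + C

C = 8 - 2000 / (3 * 150 + 50)    # 8 - 4 = 4
x = (2000 / (6.5 - C) - 50) / 3  # solve p(x, C) = 6.5 for x
print(C)         # 4.0
print(x)         # 250.0 bottles per week
print(p(x, C))   # 6.5
```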
H: Chern classes are not numbers, are they? Let $X$ be a smooth projective algebraic variety, say over $\mathbb C$. Let $E$ be a rank $r$ vector bundle on $X$. We can associate with $E$ its Chern classes $c_i(E)$. When I read "$c_i(E)$", the first thing I (automatically) do is to think where it lives. And it lives in the codimension $i$ part of the Chow ring of $X$, namely $$ c_i(E)\in A^i(X). $$ If $r>n=\dim X$, I see we can identify $c_n(E)\in A^n(X)$ with an integer. But reading some papers, I got the impression that one can do the same for the other $c_i$'s as well. So my question is just about terminology: what does it mean, for a vector bundle $E$, to have even or odd $c_i$, if Chern classes live in $A^\ast(X)$? AI: The only reasonable interpretation I can think of for "$c_i(E)$ is even" is that there is some class $a\in A^i(X)$ such that $c_i(E)=2a$. However I have no interpretation for "$c_i(E)$ is odd", except the rather ridiculous one that it means "$c_i(E)$ is not even" ...
H: Probability of forming a 3-senator committee If the Senate has 47 Republicans and 53 Democrats, in how many ways can you form a 3-senator committee in which neither party holds all 3 seats? The solution says that: You can choose one Democrat, one Republican, and one more senator from either party. We can make these choices, in that order, in $53\cdot 47\cdot 98$ ways. But then we've counted each possible committee twice, since any given committee can be arranged in the order Democrat-Republican-Third Person in two different ways (depending on which member of the majority party on the committee is chosen as the Third Person). How are there two different ways to arrange the Democrat-Republican-Third committee based on the third person chosen? I only see one possible way. AI: Say that the first two are $D$ and $R$, and suppose that the third person is a Democrat, say $D\,'$; you could have chosen the same committee by picking $D\,'$ and $R$ as your Democrat and Republican and then picking $D$ as your third person. A similar argument works when the third person is a Republican.
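A direct enumeration check of the count (not part of the original answer):

```python
from itertools import combinations

# 47 Republicans and 53 Democrats; count 3-person committees that
# contain at least one member of each party.
senators = ['R'] * 47 + ['D'] * 53
mixed = sum(1 for c in combinations(range(100), 3)
            if len({senators[i] for i in c}) == 2)
print(mixed)               # 122059
print(53 * 47 * 98 // 2)   # 122059, matching the double-counting argument
```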
H: Why is $S = \{(x,y) \in \mathbb{R}^2 \mid \text{$x$, $y$ even integers}\}$ not a subspace of $\mathbb{R}^2$? Let $S = \{(x,y) \in \mathbb{R}^2 \mid \text{$x$, $y$ even integers}\} \subset \mathbb{R}^2$. Why is this not a subspace of $\mathbb{R}^2$? $0$ seems to be in it ($2(0) = 0$), $x+y$ seems to be in it, and $kx$ where $k$ is an integer seems to be in it as well. Thanks! AI: Subspaces have to be closed under more general linear combinations than just $x + y$. That is, you have to have $$ c_1 x + c_2 y \in S $$ whenever $x,y \in S$, for any $c_1,c_2 \in \mathbb R$ (since the original vector space is a real vector space, the scalars for linear combinations are, in general, real numbers). Your set is closed under integer scalars but not real ones: for example, $(2,0)\in S$, but $\frac12(2,0)=(1,0)\notin S$.
H: Define a relation $\sim$ on $\mathbb{N}$ by $a\sim b$ if and only if $ab$ is a square (a) Show that $\sim$ is an equivalence relation on $\mathbb{N}$. (b) Describe the equivalence classes [3], [9], and [99]. (c) If $a\sim b$, which attributes of $a \text{ and } b$ are equal? For (a) I have to show that $\sim$ is reflexive, transitive, and symmetric in order for it to be an equivalence relation. So if $ab$ is square, $a = b$ so the relation is reflexive. Next, take $a\sim b$ and $b \sim c$, because $ab$ is square and $bc$ is square, $a = b = c$, so $a = c$. Thus $\sim$ is transitive. For symmetric, I'm not sure. But before I continue, can I even say that if $ab$ is square, $a = b$. I was thinking of $16 = 2 \cdot 8$ which seems like $a$ and $b$ are not equal but is that just a square number or a square? Is there a difference? If I can't do that, how am I supposed to go about it? AI: I’ll get you started. To show that $\sim$ is reflexive, you must show that if $n\in\Bbb N$, then $n\sim n$. Check the definition of $\sim$: this means that $n\cdot n$ is a square. Of course $n\cdot n=n^2$ is a square, so $n\sim n$, and $\sim$ is reflexive. You should have no trouble showing that $\sim$ is symmetric. For transitivity, suppose that $k\sim m$ and $m\sim n$. Then $km$ and $mn$ are squares, say $km=a^2$ and $mn=b^2$; you must show that $k\sim n$, i.e., that $kn$ is a square. Try to write $kn$ in terms of the pieces that you already have, doing it in a way that demonstrates that $kn$ is a square. $[3]$ is by definition the set of all $n\in\Bbb N$ such that $3\sim n$, i.e., such that $3n$ is a square. What does this tell you about $n$? HINT: What can you say about the number of factors of $3$ in the prime factorization of $n$? Thinking in similar terms will get you through the rest of (b) as well. For (c) you should be thinking about the prime factorizations of $a$ and $b$.
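A quick enumeration (not in the original hint) that illustrates part (b): the class $[a]$ consists of those $n$ for which $an$ is a perfect square.

```python
from math import isqrt

# List the members of [a] up to a limit by testing whether a*n is a square.
def eq_class(a, limit=200):
    return [n for n in range(1, limit + 1)
            if isqrt(a * n) ** 2 == a * n]

print(eq_class(3))    # [3, 12, 27, 48, 75, 108, 147, 192], i.e. 3*m^2
print(eq_class(9))    # the perfect squares up to 200
print(eq_class(99))   # [11, 44, 99, 176], i.e. 11*m^2
```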
H: The $\sigma$-algebra of a class. We've got the following definition Let $\mathcal C$ be a class of subsets of $\Omega$. We say that $\sigma(\mathcal C)$ is the $\sigma$-algebra generated by $\mathcal C$ if satisfies that: 1. $\mathcal C\subseteq \sigma(\mathcal C)$. 2. If $\mathcal C\subseteq \mathcal A$, with $\mathcal A$ another $\sigma$-algebra, then $\sigma(\mathcal C)\subseteq \mathcal A$. I was checking some old problems in my probability notes, and to be honest I really don't understand this, like why do we need this? when do we use this? I can see some things derived from the definition, like $\sigma(\mathcal C)$ is the smallest $\sigma$-algebra that contains $\mathcal C$ or the trivial fact that this is really a $\sigma$-algebra. Also, supposedly, the $\sigma(\mathcal C)$ is the intersection of all the $\sigma$-algebras that contain $\mathcal C$, I don't get how this is something easy to see. (I was hesitant to add the [measure-theory] tag, since I haven't study that yet) AI: It's hard to directly construct the smallest $\sigma$-algebra containing $\mathcal C$. Instead, you define it as a sort of "infimum" over all $\sigma$-algebras that contain $\mathcal C$. For the question about intersections, clearly the intersection of all $\sigma$-algebras containing $\mathcal C$ must be contained in $\sigma(\mathcal C)$, since this is one of the elements you are intersecting. Now recall that the intersection of $\sigma$-algebras is again a $\sigma$-algebra. So if the intersection was strictly contained in $\sigma(\mathcal C)$, this would contradict condition $2$ of your definition.
H: Prove that relation $R$ on a set of functions is an equivalence relation Let set $S$ be the set of all functions $f:\mathbb{Z_+} \rightarrow \mathbb{Z_+}$. Define a relation $R$ on $S$ by $(f,g)\in R$ iff there is a constant $M$ such that $\forall n (\frac{1}{M} < \frac{f(n)}{g(n)}<M). $ Prove that $R$ is an equivalence relation and that there are infinitely many equivalence classes. Attempt: Would it work if I define $f$ as $f_k(n)=kn$ and $g$ as $g_k(n)=(M-k)n$ where $M>k$. Then, $$\forall n ((\frac{1}{M} < \frac{f(n)}{g(n)}<M)=(\frac{1}{M} < \frac{k}{M-k} <M))$$ is true as long as $M>k$. So, $R$ is reflexive: $(f,f) \in R$ $$\forall n ((\frac{1}{M} < \frac{f(n)}{f(n)}<M)=(\frac{1}{M} < 1 <M)), \space M>1$$ $R$ is symmetric: $(f,g)\in R \Rightarrow (g,f)\in R$ $$\forall n ((\frac{1}{M} < \frac{f(n)}{g(n)}<M)=(\frac{1}{M} < \frac{k}{M-k} <M))$$ $$\forall n ((\frac{1}{M} < \frac{g(n)}{f(n)}<M)=(\frac{1}{M} < \frac{M-k}{k} <M))$$ for $M>k$. $R$ is transitive: $(f,g)\in R \wedge (g,h) \in R \Rightarrow (f,h)\in R$ $$\frac{f(n)}{g(n)} \in R, \space \frac{g(n)}{h(n)} \in R \Rightarrow \frac{f(n)g(n)}{h(n)g(n)}=\frac{f(g)}{h(n)}\Rightarrow \forall n ((\frac{1}{M} < \frac{f(n)}{h(n)}<M) $$ Then, $R$ is an equivalence relation. AI: Your proof of reflexivity would be right if you expressed it a little more clearly. Let $f\in S$ be arbitrary. Then for all $n\in\Bbb Z_+$ we have $$\frac12<\frac{f(n)}{f(n)}<2\;,$$ so $\langle f,f\rangle\in R$, and $R$ is reflexive. (Of course I could have used any real number greater than $1$ for my $M$, but it's best to be explicit.) For symmetry you want to assume that $\langle f,g\rangle\in R$ and show that $\langle g,f\rangle$ is therefore necessarily in $R$ as well. Since $\langle f,g\rangle\in R$, you know that there is a constant $M$ such that $$\frac1M<\frac{f(n)}{g(n)}<M\tag{1}$$ for each $n\in\Bbb Z_+$. Taking reciprocals in $(1)$, we see that $$M>\frac{g(n)}{f(n)}>\frac1M$$ and hence that $\langle g,f\rangle\in R$, as desired.
For transitivity you must assume that $f,g,h\in S$ are such that $\langle f,g\rangle\in R$ and $\langle g,h\rangle\in R$ and somehow prove that $\langle f,h\rangle\in R$. You know that there are constants $M$ and $N$ such that $$\frac1M<\frac{f(n)}{g(n)}<M\qquad\text{and}\qquad\frac1N<\frac{g(n)}{h(n)}<N\tag{2}$$ for all $n\in\Bbb Z_+$, and you want to use $M$ and $N$ somehow to find a constant $K$ such that $$\frac1K<\frac{f(n)}{h(n)}<K$$ for all $n\in\Bbb Z_+$. Your idea of looking at the product $\dfrac{f(n)}{g(n)}\cdot\dfrac{g(n)}{h(n)}$ is a good one, but you have to carry it out correctly; make use of the inequalities in $(2)$. For the last part of the question, consider the functions $f_k:\Bbb Z_+\to\Bbb Z_+:n\mapsto n^k$.
H: Prove that $|x(a)| + \max \{|x'(t)|: t \in [a,b]\}$ is a norm for a complete space Prove that the space $C^1([a,b])$, consisting of continuously differentiable functions on $[a, b]$, with the norm $|x(a)| + \max \{|x'(t)|: t \in [a,b]\}$ is a Banach space. I can't prove the completeness of this space. Hope someone can help me. Thanks. I really appreciate this. AI: That's not even a norm. Take $a = 0$, $b=1$, $x(t) = -2 + 2t$. Did you mean "$|x(a)|$"? By the way, "\max" gives you $\max$ and is easier on the eyes. EDIT: after I wrote the above, the question was fixed and "$x(a)$" was replaced by "$|x(a)|$", so the given norm (I'll call it "$\| \cdot \|_l$") actually is a norm. Taking $a=0$ and $b=1$ for concreteness: the norm $\| \cdot \|_l$ is equivalent to the standard norm on $C^1([0,1])$, which is $\|f\| = \max_{[0,1]} |f| + \max_{[0,1]} |f'|$; to verify this you must verify two inequalities, one of which is trivial, and the other of which takes a little work. Then a sequence of functions $(f_n) \subset C^1([0,1])$ is Cauchy in one of the norms if and only if it is Cauchy in the other norm. If you are allowed to assume that $C^1([0,1])$ is complete in the standard norm, it should be easy to prove that $C^1([0,1])$ is complete in $\| \cdot \|_l$.
H: Evaluate $\int_{\partial \mathbf{D}} f(z) dz$ for some meromorphic $f$. This is for homework, so just hints please! The question asks If $f$ is a meromorphic function in $\mathbb{C}$ that satisfies $|f(z) z^2| \leq 1$ for $|z| \geq 1$, then evaluate $\int_{\partial \mathbf{D}} f(z) dz$ (where $\mathbf{D}$ represents the unit disk). Since $f$ is meromorphic on $\mathbb{C}$, it is certainly meromorphic on $\mathbf{D}$. This means that there are at most finitely many poles $\{ z_0, \dotsc, z_n \}$ of $f$ in $\mathbf{D}$. By the residue theorem, then, $$ \int_{\partial \mathbf{D}} f(z) dz = 2 \pi i \sum_{i=0}^n \text{Res}_{z = z_i} \left[ f(z) \right]. $$ However, by the condition $|f(z) z^2| \leq 1$ for $|z| \geq 1$, we see that the Laurent series of $f$ about each $z_i$ contains no powers greater than $-2$. For example, $f(z) = \frac{1}{z} + \frac{1}{z^3}$ does not fit our criterion. The residue of $f$ at a point is defined to be the coefficient of $\frac{1}{z}$ in the Laurent expansion of $f$. Thus, $\text{Res}_{z = z_i} \left[ f(z) \right] = 0$ for every $i$, and hence $$ \int_{\partial \mathbf{D}} f(z) dz = 0. $$ Does this look OK? I have a feeling that there is a mistake somewhere but am not sure where. AI: The end result is correct. However, the way to it isn't. Thus, $\text{Res}_{z = z_i} \left[ f(z) \right] = 0$ for every $i$ is wrong. Consider for example $$f(z) = \frac12\left(\frac{1}{z-\frac12} - \frac{1}{z+\frac12}\right) = \frac{1}{2(z^2-\frac14)}.$$ That satisfies the criterion $\lvert f(z)z^2\rvert \leqslant 1$ for $\lvert z\rvert \geqslant 1$, but it has two poles with nonzero residue. The condition $\lvert f(z)z^2 \rvert \leqslant 1$ for $\lvert z\rvert \geqslant 1$ ensures that $f$ has a zero of order (at least) $2$ at $\infty$. In particular, $f$ is bounded on $\lvert z\rvert \geqslant 1$, so all of its poles lie in $\mathbb{D}$, and there are only finitely many of them; together with the behaviour at $\infty$, this makes $f$ a rational function, and the sum of all residues of a rational function is $0$.
The fact that $f$ has a zero of order at least $2$ at $\infty$ means that the residue at $\infty$ is $0$, hence the sum of all residues in $\mathbb{C}$ (which, since all poles of $f$ lie in $\mathbb{D}$, is the sum of the residues in $\mathbb{D}$) is $0$.
H: Why can't polynomials have negative exponents or division by a variable Why can't: $$2x^{-3} - 3x$$ or $$\frac{1}{2x}$$ be polynomials too? Why have a definition that excludes these algebraic forms? AI: Polynomials are defined as they are for a few distinct reasons: (1) because polynomials as functions have certain properties that your 'polynomials with division' don't have, and (2) because there are other terms for more generalized algebraic forms. First, the properties of polynomials: unlike e.g. $2x^{-3}+3x$, polynomials have no poles; there's no place where a polynomial 'blows up', where it goes to infinity. In particular, this means that the domain of a polynomial as a function is all of $\mathbb{R}$ (or, alternately, all of $\mathbb{C}$). In fact, polynomials in the complex plane have even nicer properties: they're analytic functions, which means that not only are they well-defined everywhere, but all of their derivatives are well-defined everywhere. This means that polynomials have a lot of structural properties that make them 'nice' objects of study in ways that your expressions aren't. Secondly, there's also a notion that (roughly) corresponds to your 'extended' version of polynomials, with its own set of nice properties: the ring of rational functions, which includes not just items like $x^2+\frac1x+5$ but also terms like $\dfrac{x^3+2}{x+5}$ that your formulation doesn't at least at first glance suggest. These represent a useful extension of polynomials for study because they're closed under the operation of division; for any two rational functions $f(x)$ and $g(x)$ (with $g$ not identically zero), the function $h(x) = \frac{f(x)}{g(x)}$ is also a rational function. This property doesn't hold for your 'Lambert polynomials', because there's no finite expression in positive and/or negative powers of $x$ that corresponds to the function $\frac1{x+1}$.
What's more, there's also an object occasionally studied that more directly corresponds to your notion: the notion of Laurent Polynomial. As that article suggests, they're of particular importance and interest for their connections with the field of Hopf Algebras (and by extension, quantum groups). I would opine that the reason that they're not the primary object of study is because they're not the 'simplest' structure of interest among any of their peers, and fundamentally the most important structures in mathematics tend to be the simplest structures exhibiting some given property.
H: Finding polynomial function with given zeros and one zero is a square root I've been having trouble with this problem: Find a polynomial function of minimum degree with $-1$ and $1-\sqrt{3}$ as zeros. Function must have integer coefficients. When I tried it, I got this: \begin{align} (x+1)(x-(1-\sqrt{3}))=& x^2 - x(1-\sqrt{3}) + x - 1 + \sqrt{3}\\ =& x^2 - x + x(\sqrt{3}) + x - 1 + \sqrt{3}\\ =& x^2 + x(\sqrt{3}) - 1 + \sqrt{3} \end{align} This looks completely wrong to me, and it probably is, as my teacher said the answer should not be a quadratic. I'm not looking for the answer to be given to me; but if I could get some guidance, it would be greatly appreciated. AI: Your polynomial does not work because it does not have integer coefficients. So no quadratic will work, because the two roots are forced to be $-1$ and $1-\sqrt 3$, and you just showed this doesn't work. However, a cubic will work. Try the one with $-1$, $1-\sqrt 3$, and $1+\sqrt 3$ as roots.
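A numerical sanity check of the hint (the expanded cubic below is my own computation, not part of the original answer): with roots $-1$ and $1\pm\sqrt{3}$, expanding $(x+1)(x^2-2x-2)$ gives $x^3-x^2-4x-2$, which has integer coefficients.

```python
from math import sqrt

# p is the expanded cubic; all three intended roots should vanish.
def p(x):
    return x ** 3 - x ** 2 - 4 * x - 2

for r in (-1, 1 - sqrt(3), 1 + sqrt(3)):
    print(abs(p(r)) < 1e-9)  # True for each root
```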
H: Changing one coefficient in a set of linear equations A set of linear equations $A\vec{X}=\vec{B}$ is given, where $A$ is an $n\times n$ matrix and $\vec{X}$ and $\vec{B}$ are $n$-component vectors. Also suppose that this system of equations has a unique solution and this solution is given. Imagine a new set of linear equations $A'\vec{X}=\vec{B}$, where all elements of $A'$ are equal to those of $A$ except one element $A_{ij}$, which is increased by $k$. I am interested to know if I could somehow relate the solution of the first problem to the solution of the second problem by knowing the value of $k$ and $A$. In other words, I would like to derive the new solution without resolving the set of linear equations. AI: Let's look at the $3\times 3$ case when $a_{21}$ is changed by $+k$. The determinant $\det(A)$ appears in every entry of $A^{-1}$ (via $A^{-1}=\operatorname{adj}(A)/\det(A)$), so let's look at that first. Expand along the second row to see that $\det(A + k e_{21}) = -(a_{21} + k)|C_{21}| + a_{22} |C_{22}| - a_{23}|C_{23}|$, where $|C_{ik}|$ is the determinant of the matrix $A$ with row $i$ and column $k$ deleted, a $2\times 2$ matrix. This equals $\det(A) + (-1)^{i+j}k|C_{ij}|$ with $i=2$, $j=1$, and the same computation works for a general entry $a_{ij}$ of an $n\times n$ matrix.
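A quick numerical check of this cofactor formula with NumPy (an illustration, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
i, j, k = 1, 0, 2.5            # bump entry A[i, j] by k (0-indexed)

Ak = A.copy()
Ak[i, j] += k

# Minor |C_ij|: determinant of A with row i and column j deleted.
C = np.delete(np.delete(A, i, axis=0), j, axis=1)
lhs = np.linalg.det(Ak)
rhs = np.linalg.det(A) + (-1) ** (i + j) * k * np.linalg.det(C)
print(np.isclose(lhs, rhs))    # True
```

(With 0-indexing the sign $(-1)^{i+j}$ has the same parity as with 1-indexing, since both indices shift by one.)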
H: Minimizing the sum of a product I am having a hard time coming up with a function to represent a word problem. The product of three values equals 192. One of the values is twice another. What is the minimum value of their sum? Given all three values are greater than 0. So far I have come up with: $ABC=192$ $A=2B$ $2BBC=192$ $2B^2C=192$ However after this point I am stuck as to what I need to express to get a representation of a function. I assume the function needs to be broken down into a coefficient, but I am not certain. AI: I prefer to use $x,y$, and $z$ for the three quantities. We have $xyz=192$, and $y=2x$, so $2x^2z=192$, or $x^2z=96$. We want to minimize $x+y+z$, given that $x,y,z>0$. Now $$x+y+z=3x+z=3x+\frac{96}{x^2}\;,$$ so we have enough information to express the sum in terms of $x$ alone. It’s now a straightforward calculus problem.
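Finishing the calculus: $\frac{d}{dx}\left(3x+\frac{96}{x^2}\right)=3-\frac{192}{x^3}=0$ gives $x^3=64$, so $x=4$, $y=8$, $z=6$ and a minimum sum of $18$. A quick numeric sanity check:

```python
def s(x):
    # the sum x + y + z written in terms of x alone
    return 3 * x + 96 / x**2

# s'(x) = 3 - 192/x**3 vanishes at x**3 = 64, i.e. x = 4
print(s(4))   # 18.0, with y = 2*4 = 8 and z = 96/4**2 = 6 (product 192)

# nearby grid points are no better, consistent with a minimum at x = 4
print(min(s(4 + k / 100) for k in range(-100, 101)))   # 18.0
```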
H: How to calculate $E[X^2Y^5]$ given density functions for $x$ and $y$ Let $X$ and $Y$ be independent random variables within the limits $[0, 1]$ with the following density functions: $f_X(x) = 0.16x + 0.92$ such that $x$ is within the parameters $[0, 1]$ and $f_Y(y) = 1.41y^2 + 0.53$ such that $y$ is within the parameters $[0, 1]$. How do I calculate $E[X^2Y^5]$? I used the formula $E[X] = \frac{1}{λ}$, solved for $λ$, then substituted it into the equation $E[X^n] = \frac{n!}{\lambda^n}$. I calculated this for $E[X^2]$ and $E[Y^5]$ separately, then multiplied them. I got the wrong answer but I don't know how else to solve it. AI: Just to further clarify Potato's answer, as far as the exponents go: $E(X^2Y^5)=E(X^2)E(Y^5)$ $E(X^2) = E(g(X))$ $E(Y^5) = E(h(Y))$ And from there we can move on to integrating. You should read about the law of the unconscious statistician. Just as an example, integrating for $E(X^2)$ would look like: $\int g(x) f(x) dx,$ Then substituting in $g(x) = x^2$ and the density function, and adding the limits, we get: $\int_0^1 x^2 (0.16x+0.92) dx$ Do the same for $E(Y^5)$, and you should be good to go.
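Carrying the two integrals out with exact fractions (the same law-of-the-unconscious-statistician recipe applied to each factor):

```python
from fractions import Fraction as F

a, b = F(16, 100), F(92, 100)      # f_X(x) = a*x + b
c, d = F(141, 100), F(53, 100)     # f_Y(y) = c*y**2 + d

ex2 = a / 4 + b / 3                # ∫_0^1 x^2 (a x + b) dx
ey5 = c / 8 + d / 6                # ∫_0^1 y^5 (c y^2 + d) dy
print(ex2, ey5, ex2 * ey5)         # 26/75 127/480 1651/18000 ≈ 0.0917
```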
H: Negative curvature compact manifolds I know there is a theorem about the existence of metrics with constant negative curvature on compact orientable surfaces with genus greater than 1. My intuition of the meaning of genus makes me think that surfaces with genus greater than 1 cannot be simply-connected, but as my knowledge about algebraic topology is zero, I might be wrong. My question is: are there two and three dimensional orientable compact manifolds with constant negative curvature that are simply-connected? If yes, what is an example of one? If not, what is the reason? Thanks in advance! AI: No, none. You are describing space forms, in this case the hyperbolic plane and 3-space. You might want to look at Cheeger and Ebin, Comparison Theorems in Riemannian Geometry. Right, Theorem 1.37 on page 41, simply connected manifolds of the same dimension and constant (sectional) curvature $K$ are isometric. Corollary of Cartan-Ambrose-Hicks. Meanwhile, you get the compact surfaces precisely because there are Fuchsian groups, acting on $\mathbb H^2,$ and your surfaces are quotient spaces. It turns out that Fuchsian groups, and the flower and color Fuchsia, are named after different people named Fuchs. Go figure.
H: How to calculate the mod value of a rational/irrational value? We have a course in network security this semester and we are being taught the RSA algorithm. I came across a typical math problem that I was unable to solve here. $$D*E \equiv 1 \mod{\phi(n)}$$ This became $$D \equiv E^{-1} \mod{\phi(n)}$$ How do you solve this? My lecturer gave this specific example $$D \equiv 7^{-1} \mod{160}$$ and the solution for $D$ was $23$! How did she arrive at this? AI: Hint: $7D \equiv 1 \pmod{160}$. Sol'n: $D = 23$ works, since $23\cdot 7 = 161 \equiv 1 \pmod{160}$.
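She most likely ran the extended Euclidean algorithm; a sketch (Python 3.8+ can also do `pow(7, -1, 160)` directly):

```python
def modinv(a, m):
    """Return x with a*x % m == 1, via the extended Euclidean algorithm."""
    r0, r1, s0, s1 = a, m, 1, 0
    while r1:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        s0, s1 = s1, s0 - q * s1
    assert r0 == 1, "inverse exists only when gcd(a, m) == 1"
    return s0 % m

print(modinv(7, 160))  # 23, since 7*23 = 161 = 160 + 1
```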
H: What's the symbolic definition of the maximum value of a domain? Let's say we have a domain S Maximum value of domain S = {S | ? ? ? ? ? ? } How could one define the possible maximum value of a set of values, symbolically? AI: If $S$ is an (ordered) set, we write the maximum value of $S$ as $$\max S.$$ If $S = \{s_1, s_2, s_3, \dots, s_n\}$, we could also write $\max S$ as $$\max(s_1, s_2, s_3, \dots, s_n)$$ or as $\max_i s_i$.
H: How to study the convergence of this series? Is this series convergent? $$\sum_{n=1}^∞{{1}\over{n^2+2\sqrt{n}-21}}.$$ AI: HINT: How does $\dfrac1{n^2+2\sqrt{n}-21}$ compare with $\dfrac1{n^2}$ when $n\ge 121$, say? The key idea here is that the $n^2$ term is the dominant term in the denominator, so for large $n$ the denominator ought to behave much like $n^2$.
H: If a function is positive on a set of measure greater than zero, is the Lebesgue integral of that function greater than zero? Suppose we have a set $A \subset \mathbb{R}^n$ such that $f(x) > 0$ for $x \in A$ and $m(A) > 0$. Does it follow that $\int_A f > 0$? Obviously if there is some kind of lower bound on $f(x)$ on some non-trivial subset of $A$, then we're done, but is it possible for there not to be any such lower bound? I'm thinking like $f(a)$ is some number and then the value of $f$ decreases very rapidly everywhere on $A$. AI: Let $A_n = A \cap \{ f > \frac{1}{n} \}$. Then $A = \cup A_n$. If each $A_n$ had measure $0$, then $A$ itself would have measure $0$. It follows that $m(A_n) > 0$ for some $n$. So $\int_A f \geq \int_{A_n} f \geq \frac{1}{n} m(A_n) > 0$.
H: To show this $R$ is a PID I am studying for a qual, and I can't quite figure this one out. Any hints or suggestions for which theorems to use? It might be a very simple problem that I'm just not seeing the solution: Suppose that $R$ is a Noetherian integral domain and every finitely generated torsion-free $R$-module is free. Show that $R$ is a PID. I started by taking an ideal in $R$, which must be finitely generated as $R$ is Noetherian. We could quotient out its torsion elements to get a torsion-free module, but I don't see what to do next. AI: Your ideal of $R$ is already torsion-free as an $R$-module. (Torsion at Wikipedia.) So by assumption it is free. Now show that an ideal can never be a free $R$-module unless it is generated by a single element. (Hint: give a nontrivial relation over $R$ between any two nonzero elements of the ideal.)
H: When do two functions differ by a constant throughout an interval (Fundamental Theorem of Calculus) I'm reading the proof of the Fundamental Theorem of Calculus here and I don't understand the following parts (at the bottom of page 2): I don't know how to conclude that $G(x)-F(x)=C$ for all $x \in [a,b]$. How do I prove the above statement and does it rely on another theorem not mentioned in this proof? I tried to figure this out by looking at the definitions of $G(x)$ and $F(x)$, but only the definition of $G(x)$ is provided. AI: You should be able to reach this conclusion by the following: Prove: If $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and $f'(x)=0$ for all $x\in (a,b)$, then $f$ is constant. And if this is a typical sort of analysis book, then you should already have learned the following: Differentiation distributes across addition: if $f$ and $g$ are differentiable on $(a,b)$, then $f+g$ is differentiable on $(a,b)$, and $(f+g)' = f'+g'$. With this, the conclusion in the proof is simple: $G'(x)=F'(x)$ on $(a,b)$ implies that $(G-F)'(x) = G'(x)-F'(x)=0$ for all $x\in (a,b)$. Thus $G(x)-F(x)$ must be constant on $[a,b]$. If we call this constant $C$, then $G(x)-F(x)=C$, i.e. $G(x)=F(x)+C$ on $[a,b]$.
H: Cauchy Goursat Theorem If $C$ is the positively oriented unit circle |$z$| = 1, then is it true that $\int_C\!Log(z+3)\, \mathrm{d}z = 0$? Why or why not? Is it true because the integrand is analytic? AI: Yes: $\operatorname{Log}(z+3)$ fails to be analytic only on the branch cut $S:=\{z : \operatorname{Re}(z)\leq -3,\ \operatorname{Im}(z)=0\}$, which lies outside the unit circle, so the Cauchy-Goursat theorem applies. Here you are using the principal branch of $\operatorname{Log}$.
H: A noetherian ring $R$ which is commutative integral domain but not a PID? I am looking for an example of a ring $R$ which is a commutative and Noetherian integral domain but not a PID. Thanks. AI: In the polynomial ring $\mathbb{Z}[x]$, the ideal $$I = \langle 2, x\rangle$$ is not principal.
H: Calculation of $\int_{0}^{\pi}\frac{1}{(5+4\cos x)^2}dx$ Calculation of $\displaystyle \int_{0}^{\pi}\frac{1}{(5+4\cos x)^2}dx$ $\bf{My\; Try}::$ Using $\displaystyle \cos x = \frac{1-\tan^2 \frac{x}{2}}{1+\tan^2 \frac{x}{2}}$ Let $\displaystyle I = \int_{0}^{\pi}\frac{1}{\left(5+\frac{4-4\tan^2 \frac{x}{2}}{1+\tan^2 \frac{x}{2}}\right)^2}dx = \int_{0}^{\pi}\frac{\left(1+\tan^2 \frac{x}{2}\right)^2}{\left(9+\tan^2 \frac{x}{2}\right)^2}dx$ $\displaystyle I = \int_{0}^{\pi}\frac{\sec^2 \frac{x}{2}\left(1+\tan^2 \frac{x}{2}\right)}{\left(9+\tan^2 \frac{x}{2}\right)^2}dx$ Now Let $\tan \frac{x}{2} = t$ and $\sec^2 \frac{x}{2}dx = 2dt$ $\displaystyle I = 2\int_{0}^{\infty}\frac{1+t^2}{(9+t^2)^2}dt$ Now I do not understand how to proceed from here. Help required. Thanks AI: Using Weierstrass substitution, we get $$\int_0^\pi \frac{dx}{(5+4\cos x)^2}=\int_0^\infty\frac{1}{(5+4\frac{1-t^2}{1+t^2})^2}\frac{2dt}{1+t^2}=2\int_0^\infty \frac{t^2+1}{(t^2+9)^2}dt=\int_{-\infty}^\infty \frac{t^2+1}{(t^2+9)^2}dt$$ Now let $$f(z)=\frac{z^2+1}{(z^2+9)^2}$$ Then, $f$ has a poles of order 2 at $\pm 3i$. We get $$\mathrm{res}_{z=3i}f=\lim_{z\to 3i}\frac{d}{dz}\frac{z^2+1}{(z+3i)^2}=\lim_{z\to 3i}\left(\frac{-2(1+z^2)}{(z+3i)^3}+\frac{2z}{(z+3i)^2}\right)=-\frac{5i}{54}$$ Hence, letting $\gamma$ be the semi circle of radius $R>3$ centered at the origin and in the upper half-plane, we get that $\gamma$ encloses the pole $z=3i$, so $$\int_\gamma f(z)dz=2\pi i\ \mathrm{res}_{z=3i}f=\frac{5\pi}{27}$$ Letting $R\to\infty$, we see that the integral on the arc-section vanish so that we are left with $$\int_0^\pi \frac{dx}{(5+4\cos x)^2}=\frac{5\pi}{27}$$
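A numeric sanity check of the final value $5\pi/27\approx0.581776$ (composite Simpson's rule, plain Python):

```python
import math

f = lambda x: 1.0 / (5 + 4 * math.cos(x))**2

n = 2000                        # even number of subintervals
h = math.pi / n
s = f(0) + f(math.pi)
for i in range(1, n):
    s += f(i * h) * (4 if i % 2 else 2)
approx = s * h / 3
print(approx, 5 * math.pi / 27)  # both ≈ 0.581776
```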
H: Completeness of a normed vector space This is captured from a chapter talking about completeness of metric space in Real Analysis, Carothers, 1ed. I have been confused by two questions: What does absolutely summable mean in metric space? Does it mean the norm of xi(i=1,2,3,...) that belongs to norm vector space X is summable? 2nd part of the proof should try to show that if ∑xn(n from 1 to infinity) converges in X whenever ||xn||(n from 1 to infinity) is summable, then X is complete. However, why does the author prove the subsequence of {xn} converges? Thanks^_^ AI: There is a fundamental distinction between the two series. Let $(x_i)$ be a sequence in $X$. Then $\displaystyle \sum_{i=0}^{\infty}{\|x_i\|}$ is a series in $\mathbb{R}$. If $\displaystyle \sum_{i=0}^{\infty}{\|x_i\|}$ is convergent, then the series $\displaystyle \sum_{i=0}^{\infty}{x_i}$ is said to be absolutely summable. On the other hand, $\displaystyle \sum_{i=0}^{\infty}{x_i}$ is said to be summable if it is convergent in the normed vector space. (i.e. $\displaystyle \sum_{i=0}^{\infty}{x_i}=x\in X$). Now we address your second question. A standard theorem is that if a Cauchy sequence has a convergent subsequence then it is itself convergent and converges to the same limit as the subsequence. Proof: Take $(x_i)$ a Cauchy sequence. Let ($x_{i_k}$) be a convergent subsequence with limit $x$. Fix $\epsilon > 0$. For $N$ large enough, if $i_k,i>N$ $$\|x_i-x_{i_k}\|<\frac{\epsilon}{2} \textbf{ and } ||x_{i_k}-x||<\frac{\epsilon}{2}$$ The first inequality is due to the Cauchyness of the sequence. The second by the convergence of the subsequence. By the triangle inequality, $\|x_i-x\|<\epsilon$ for $i>N$. Hence $(x_i)$ converges to $x$. Comment on book proof: As a consequence of this theorem, the author just needs to show that any Cauchy sequence in $X$ has a convergent subsequence in order to prove that it is convergent hence showing that $X$ is complete.
H: Contour integrals Evaluate $\int_C\dfrac{\mathrm{d}z}{z^2-1}$ where a) $C$ is the clockwise oriented circle $\left|z \right| = 2$; b) $C$ is the anti-clockwise oriented square with sides on $x= \pm2$ and $y= \pm2$; c) $C$ is the clockwise oriented circle $\left|z-1 \right|= 1$. So for this I would set $z = x+iy$ and separate the function into a real and imaginary part to solve the integral, right? And how do I know what bounds to put on the integral? AI: Note that $$\frac{1}{z^2-1}=\frac{1}{(z-1)(z+1)}$$ Hence, the function $f(z)=\frac{1}{z^2-1}$ has simple poles at $z=\pm 1$. We get $$\mathrm{res}_{z=1}f=\frac{1}{2},\quad\mathrm{res}_{z=-1}f=-\frac{1}{2}$$ For (a) the given contour encloses both poles so you have $$\int_{\gamma_a} \frac{1}{z^2-1}dz = -2\pi i\left(\frac{1}{2}+\frac{-1}{2}\right)=0$$ For (b) the contour is reversed, but also encloses both poles, so $$\int_{\gamma_b} \frac{1}{z^2-1}dz = 2\pi i\left(\frac{1}{2}+\frac{-1}{2}\right)=0$$ For (c) the contour encloses only the pole $z=1$, so $$\int_{\gamma_c} \frac{1}{z^2-1}dz = -2\pi i\cdot\frac{1}{2}=-\pi i$$
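These values can be cross-checked numerically by parameterizing each contour; here (a) and (c), using the fact that the trapezoid/midpoint rule converges extremely fast on periodic integrands:

```python
import cmath, math

def contour_integral(f, z, dz, n=4000):
    # midpoint rule for ∫_0^{2π} f(z(θ)) z'(θ) dθ
    h = 2 * math.pi / n
    return sum(f(z((k + 0.5) * h)) * dz((k + 0.5) * h) for k in range(n)) * h

f = lambda z: 1 / (z * z - 1)

# (a) clockwise circle |z| = 2:  z(θ) = 2 e^{-iθ}
I_a = contour_integral(f, lambda t: 2 * cmath.exp(-1j * t),
                          lambda t: -2j * cmath.exp(-1j * t))
# (c) clockwise circle |z - 1| = 1:  z(θ) = 1 + e^{-iθ}
I_c = contour_integral(f, lambda t: 1 + cmath.exp(-1j * t),
                          lambda t: -1j * cmath.exp(-1j * t))
print(I_a)  # ≈ 0
print(I_c)  # ≈ -πi ≈ -3.14159j
```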
H: Open subsets of the closure I want to prove that every open subset of a topological subspace is an open subset of its closure. Let $Y$ be a topological space and $X$ a subspace of $Y$. If $U$ is an open subset of $X$, we have $U=U\cap \overline X$, thus U is an open subset of $\overline X$ also. Am I right? Thanks in advance AI: Yes, this is correct. More generally, if $U$ is an open subset of $X$, and $A$ is any subset of $X$ containing $U$, then $U\cap A=U$ is an open subset of $A$. Added: By ‘this’ I meant the assertion that an open set is open in its closure; I was called away and didn’t look closely enough at the argument, which seems to go with the different (and false) proposition that if $U$ is an open subset of $X$, where $X\subseteq Y$, then $U$ is open in $\operatorname{cl}_YX$. A counterexample to this is to take $Y=\Bbb R$, $X=\Bbb Q$, and $U=\Bbb Q\cap(0,1)$. Then $U$ is relatively open in $\Bbb Q$, but $U$ is not an open set in $\operatorname{cl}_{\Bbb R}\Bbb Q=\Bbb R$.
H: Prove that $\int_0^1 f(x^2)dx\geqslant f\left(\frac{1}{3}\right)$ Let $f:[0,1]\to\mathbb{R}$ be twice differentiable. Suppose $f''(x)\geqslant 0$ for all $x\in[0,1]$. Prove that $$\int_0^1f(x^2)dx\geqslant f\left(\frac{1}{3}\right).$$ I am thinking of using Taylor's Theorem to expand $f(x^2)$ at $\frac{1}{\sqrt{3}}$. But it seems that this makes things more complicated. AI: A useful way to restate your convexity hypothesis as an inequality is $f(x) \geq f'(t) (x-t) +f(t)$, which holds for all $t,x \in [0,1]$ (the graph of a convex function lies above each of its tangent lines). To bring in $f(x^2)$ we substitute $x^2$ for $x$ and get $f(x^2) \geq f'(t)(x^2-t) +f(t)$. To get closer to what we are trying to prove, we integrate this over $[0,1]$ and get $$ \int _{0}^{1 }f(x^2) \mathrm{d}x \geq f'(t)(\frac{1}{3}-t) + f(t)$$ That is, $\int _{0}^{1 }f(x^2)\mathrm{d}x \geq \max _{t\in [0,1]} \left ( f'(t)(\frac{1}{3}-t) + f(t)\right )$. Taking $t=\frac{1}{3}$ makes the first term vanish and gives exactly the inequality at stake.
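A quick numeric illustration with the convex function $f(x)=e^x$ (so $f''\ge0$): the left side $\int_0^1 e^{x^2}dx\approx1.4627$ indeed dominates $f(1/3)=e^{1/3}\approx1.3956$.

```python
import math

n = 100000
h = 1.0 / n
lhs = sum(math.exp(((k + 0.5) * h)**2) for k in range(n)) * h  # midpoint rule
rhs = math.exp(1.0 / 3.0)                                      # f(1/3)
print(lhs, rhs, lhs >= rhs)   # ≈ 1.46265  ≈ 1.39561  True
```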
H: Counting - puzzle question Suppose that you have infinitely many one dollar bills (numbered 1, 3, 5, . . . ) and you come upon the Devil, who is willing to pay two dollars for each of your one-dollar bills. The Devil is very particular, however, about the order in which the bills are exchanged. The contract stipulates that in each sub-transaction he buys from you your lowest-numbered bill and pays you with higher-numbered bills. First sub-transaction takes 1/2 hour, then 1/4 hour, 1/8, and so on, so that after one hour the entire exchange will be complete. How can this deal be harmful? AI: At the end of the hour you have none of the numbered bills. Let $n_k$ be the lowest-numbered bill in your possession just before the $(k+1)$-st sub-transaction. Thus, $n_0=1$. The rules imply that $n_k<n_{k+1}$ for each $k\in\Bbb N$. Thus, for any $m\in\Bbb N$ there is a $k\in\Bbb N$ such that $m<n_k$, which means that after sub-transaction $k$ you definitely do not have bill number $m$ in your possession. Since $m$ was arbitrary, you have none of the numbered bills at the end of the hour.
H: If $\lim \int f_n dx$ exists and is $<\infty$, can we switch limit and integral? A common example showing that we cannot switch limit and integral is $$f_n=1_{[n,\infty]}$$ $\lim\int f_n\,dx=\infty$, while $\int \lim f_n\,dx=0$. Thus we have the Dominated Convergence Theorem. In my knowledge, such examples all deal with some integral which equals infinity. Here is my question: if $\lim \int f_n\,dx$ exists and is $<\infty$, can we switch the limit and integral? AI: Consider $f_n=1_{[n,n+1]}$. Then $\int f_n\,dx=1$ for every $n$, so $\lim_{n\rightarrow\infty}\int f_n\,dx=1<\infty$, while $\lim_{n\rightarrow\infty}f_n=0$ pointwise, thus $\int \lim_{n\rightarrow\infty}f_n\,dx=0$
H: Is the converse of Lagrange's Theorem true for the permutation group $S_5$? Is the converse of Lagrange's Theorem true for the permutation group $S_5$? That is, if $n\mid |S_5|$, then is there a subgroup of $S_5$ with order $n$. Since $|S_5|$ = 5! = 120, then any subgroup must have order equal to some divisor of 120. I'm not sure how to proceed with this. AI: Ans is no. For example there does not exist a subgroup of index $3$. Assume that there exists a subgroup $H$ of index $3$. Now $G=Sym(5)$ acts on the set $G/^rH$ of all right cosets of $H$ in $G$. Let $\rho: G \rightarrow Sym(G/^rH)$ be the action. Note that the kernel of this action is trivial: it is a normal subgroup of $G$ contained in $H$, and the only normal subgroups of $Sym(5)$ are $1$, $Alt(5)$ and $Sym(5)$, of which only $1$ is small enough to lie inside $H$. This means that $G=Sym(5)$ is isomorphic to a subgroup of $Sym(G/^rH) \cong Sym(3)$. This is a contradiction.
H: How to find the inverse laplace transform of [F(s)/s]^n Let $F(s)=\mathcal{L}\{f(t)\}$, we have $\frac{F(s)}{s}=\mathcal{L}\{\int_o^tf(x)dx\}$. How to find $\mathcal{L}^{-1}\left\{\left(\frac{F(s)}{s}\right)^n\right\},\text{ for}~ n\in \mathbb{N} $ AI: A related problem. Here is a start for the case $n=2$, Let $$ g(t)=\int_{0}^{t} f(x)dx .$$ Now, recalling the fact $$ \mathcal{L}(g*h) = \mathcal{L}(g) \mathcal{L}(h), $$ we have $$ \mathcal{L}^{-1}\left\{\left(\frac{F(s)}{s}\right)^2\right\}=(g*g)(t)=\int_{0}^{t}g(\tau)g(t-\tau)d\tau .$$ You can simplify the above integral by interchanging the order of integration and see what you get. Try to generalize this answer.
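For a concrete instance of this convolution identity, take $f(t)=1$, so $F(s)=1/s$, $g(t)=t$, and $(F(s)/s)^2=1/s^4$ should invert to $t^3/6$; a numeric check of $(g*g)(t)=\int_0^t \tau(t-\tau)\,d\tau$:

```python
def g(u):
    # g(t) = ∫_0^t f(x) dx with f ≡ 1, so g(t) = t and F(s) = 1/s
    return u

def conv(t, n=100000):
    # (g * g)(t) = ∫_0^t g(τ) g(t - τ) dτ, midpoint rule
    h = t / n
    return sum(g((k + 0.5) * h) * g(t - (k + 0.5) * h) for k in range(n)) * h

for t in (0.5, 1.0, 2.0):
    print(conv(t), t**3 / 6)   # agree: the known inverse transform of 1/s^4
```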
H: What is the explicit formula for the sequence representing the number of any triangles in a triangular grid? What is the explicit formula for the sequence representing the number of any triangles in a triangular grid, like those below? Context Counting only up-facing triangles of size one, we get triangular numbers. Counting both up-facing and down-facing triangles of size one, we get square numbers (that is, perfect squares) But what is the total number of all triangles of any size? Here are some values for small $n$. AI: The values fit $a_n=2n^2-2n+1$ To find that, two levels of differences gives a constant $4$, so the leading term is $(4/2!)n^2$. Subtract that off and you can find the rest.
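The finite-difference fit can be checked mechanically (this only verifies the algebra of $a_n=2n^2-2n+1$, not the original triangle count, whose small values were given in the question):

```python
a = [2 * n * n - 2 * n + 1 for n in range(1, 8)]   # 1, 5, 13, 25, 41, 61, 85
d1 = [q - p for p, q in zip(a, a[1:])]             # first differences: 4, 8, 12, ...
d2 = [q - p for p, q in zip(d1, d1[1:])]           # second differences: constant 4
print(a, d1, d2)
```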
H: Prove that if $2^{4\times5^k}=x\times5^{k+3}+a$, $0<a<5^{k+3}$, then $5\mid x$ Let $$2^{4\times5^k}\equiv a \pmod {5^{k+3}},\\2^{4\times5^k}\equiv b \pmod {5^{k+4}},$$ and $0<a<5^{k+3},0<b<5^{k+4},$ prove that $a=b$ $(k>1)$. This is equivalent to this: if $2^{4\times5^k}=x\times5^{k+3}+a,0<a<5^{k+3},$ then $5\mid x.$ ADD: A similar problem: Prove that if $2^{2\times5^k}=x\times5^{k+4}+a,0<a<5^{k+4},$ then $5\mid x.(k>2)$ AI: Using this, if ord$\displaystyle _{(p^k)}a = d$ where $k$ is a natural number and $p$ an odd prime, we can show that ord$_{(p^{k+1})}a = d$ or $pd$. Now, $\displaystyle 2^2\equiv-1\pmod5\implies 2^4\equiv1\pmod5\implies$ord$_52=4$ $\displaystyle\implies$ord$_{(5^2)}2=4$ or $4\cdot5=20$ which $=\phi(25)$. Now, $\displaystyle2^4=16\not\equiv1\pmod{25}\implies$ ord$_{(5^2)}2=20$. So, $2$ is a primitive root of $25$; using this, $2$ is a primitive root of $5^k$ for every integer $k\ge1$ $\displaystyle\implies2^{4\cdot5^k}\equiv1\pmod{5^{k+1}}$, i.e. $2^{4\cdot5^k}=1+c\cdot5^{k+1}$ for some integer $c$ not divisible by $5$, as $\displaystyle2^{4\cdot5^k}\not\equiv1\pmod{5^r}$ for $r\ge k+2$
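The primitive-root facts used here are easy to confirm computationally for small $k$ (brute-force multiplicative order):

```python
from math import gcd

def mult_order(a, m):
    """Smallest d >= 1 with a**d ≡ 1 (mod m); assumes gcd(a, m) == 1."""
    assert gcd(a, m) == 1
    d, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        d += 1
    return d

# order of 2 mod 5^k equals phi(5^k) = 4 * 5^(k-1), i.e. 2 is a primitive root
for k in range(1, 5):
    print(k, mult_order(2, 5**k), 4 * 5**(k - 1))
```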
H: How to find general solution of an L-R-C series circuit with inductance L = 1/5 henry, resistance R=4 ohms, and capacitance c=1/520 farad? Consider an L-R-C series circuit with inductance L = 1/5 henry, resistance R=4 ohms, and capacitance c=1/520 farad. Find the general form of the charge on the capacitor if this is a free series circuit. I got q(t) = e^(-10t)(c1(cos(50t) + c2(sin(50t)), but the answer that my friend's teacher provided is q(t)=e^(-3t)(c1(cos(50t)+c2(sin(50t)). Who is correct? AI: For the R-L-C circuit, using KVL, the voltage across the capacitor is given by: $$LC \dfrac{d^2V_c}{dt^2} + RC \dfrac{dV_c}{dt} + V_c(t) = 0$$ This gives us: $$\dfrac{1}{2600} \dfrac{d^2V_c}{dt^2} + \dfrac{4}{520} \dfrac{dV_c}{dt} + V_c(t) = 0$$ Solving this yields: $$V_c(t) = e^{-10t} \left(c_1 \cos(50 t) + c_2 \sin(50 t) \right)$$
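The characteristic roots settle the disagreement (for the free circuit $Lq''+Rq'+q/C=0$, equivalent to the voltage equation above): they come out to $-10\pm50i$, so the damping factor is $e^{-10t}$ and the first answer is the correct one.

```python
import cmath

L, R, C = 1/5, 4, 1/520
# characteristic equation: L s^2 + R s + 1/C = 0, i.e. s^2 + 20 s + 2600 = 0
disc = cmath.sqrt(R * R - 4 * L / C)
r1 = (-R + disc) / (2 * L)
r2 = (-R - disc) / (2 * L)
print(r1, r2)   # ≈ (-10+50j) (-10-50j)
```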
H: Is the homomorphic image of a PID a PID? $R$ is a ring which is a PID [i.e., $R$ is an integral domain in which every ideal is generated by a single element] and we are given a map $f:R\to S$ which is a homomorphism, i.e. $f(a + b) = f(a) + f(b)$ for all $a$ and $b$ in $R$, $f(ab) = f(a) f(b)$ for all $a$ and $b$ in $R$, $f(1_R) = 1_S. $ Is it necessarily true that the image of the ring $R$, i.e. $f(R)$, is also a PID? AI: This is not the case in general. Note for example that $\Bbb Z$ is a PID. Consider quotient rings of $\Bbb Z$ to find homomorphic images of $\Bbb Z$ that are not PIDs (in fact, not even integral domains). However, if we know that a ring homomorphism $f:R\to S$ is one-to-one, then of course it is true, since $R$ is then isomorphic to $f(R)$.
H: Confusion regarding boundedness & Equicontinuity It is given that $ |f_{n} ' (x) | \le \frac {1}{x^{\frac {1}{3}}} \forall 0 \lt x \le 1$ , where {$f_{n}$} is a sequence of real valued $C^{1}$ functions on $[0,1]$ and each {$f_{n}$} has a zero in $[0,1]$ . Now to prove that the sequence has a uniformly convergent subsequence; somehow I have to try with the Ascoli-Arzela Theorem. What I have done: .... To go with Ascoli-Arzela, I have to show pointwise boundedness & equicontinuity... Now, by the condition that: each {$f_{n}$} has a zero in $[0,1]$ , pointwise boundedness is shown. But, 1) I could not show Equicontinuity, 2) & one doubt is: since {$f_{n}$} is a sequence of real valued $C^{1}$ functions on $[0,1]$ the derivative should be bounded.. isn't it?? AI: Hint: I guess you are using the following argument to show that the $f_n$ are uniformly bounded: Let $x_n\in [0,1]$ such that $f_n(x_n) = 0$, then for all $x\in [0,1]$, $$ |f_n(x)| = |f_n(x) - f_n(x_n)| = \bigg|\int^x_{x_n} f_n'(t) dt \bigg| \leq \bigg|\int^x_{x_n} t^{-1/3} dt\bigg| = \frac{3}{2} \big| x^{2/3} - x_n^{2/3}\big| \leq \frac{3}{2}.$$ What happens if you use a general $y$ instead of $x_n$?
H: Extreme value theorem - condition on continuity for boundedness According to my math professor, the extreme value theorem is stated as: If $ f: [a,b] \to \mathbb{R} $ is continuous then $f$ is bounded, and the maxima and minima are obtained for some $x$ belonging to the domain. My intuition tells me that the condition on continuity is redundant, and even a function which is not continuous over $[a,b]$ is bounded. Is it not true that any function which is defined for all values in a given closed interval should be bounded? I am unable to think of an example of a function which is defined on a closed interval, but not bounded. AI: For example, let $f:[0,1] \to \mathbb R$, $f(x) = 1/x $ if $x\neq 0$ and $f(0) = 0$. This is unbounded, and is not continuous at $0$.
H: Finding a recursive definition and computing $B(10)$ For $n \geq 1$, let $B(n)$ be the number of ways to express $n$ as the sum of $1$s and $2$s, taking order into account. Thus $B(4) = 5$ because $4 = 1 + 1 + 1 + 1 = 1 + 1 + 2 = 1 + 2 + 1 = 2 + 1 + 1 = 2 + 2$. (a) Compute $B(i)$ for $1 \leq i \leq 5$ by showing all the different ways to write these numbers as above. (b) Find a recursive definition for $B(n)$ and identify this sequence. (c) Compute $B(10)$. I've done part (a) $B(1) = 1$ because $1 = (1)$ $B(2) = 2$ because $2 = (1+1) = (2)$ $B(3) = 3$ because $3 = (1+1+1) = (1+2) = (2+1)$ $B(4) = 5$ because $4 = (1 + 1 + 1 + 1) = (1 + 1 + 2) = (1 + 2 + 1) = (2 + 1 + 1) = (2 + 2)$ $B(5) = 8$ because $$5 = (1+1+1+1+1) = (1+1+1+2) = (1+1+2+1) = (1+2+1+1)\\ = (2+1+1+1) = (1+2+2) = (2+1+2) = (2+2+1)$$ *Note I used parenthesis for clarity How would I do part B? AI: Hint: There are two questions for you to answer. Suppose that $n\ge 3.$ How many ways are there to obtain $n$ as such a sum where the first number is $1$? What if the first number is $2$? What can you then conclude?
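(Spoiler for parts (b) and (c).) The hint leads to $B(n)=B(n-1)+B(n-2)$, the Fibonacci pattern; a brute-force count over the number of 2s confirms it and gives $B(10)=89$:

```python
from math import comb

def brute_count(n):
    # k twos and n - 2k ones form a sequence of n - k terms: choose the k slots
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

B = [0, 1, 2]                    # B[1] = 1, B[2] = 2
for n in range(3, 11):
    B.append(B[-1] + B[-2])      # last summand is either a 1 or a 2

print([B[n] for n in range(1, 6)])    # [1, 2, 3, 5, 8], matching part (a)
print(B[10], brute_count(10))         # 89 89
```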
H: Show that $\sum_{i=1}^{n}x_if'(x_i)=f'(\xi)$ Let $f:(a,b)\to\mathbb{R}$ be differentiable and let $x_1,\dots,x_n\in (a,b)$. Suppose $x_i>0$ for all $i$ and $\sum_{i=1}^{n}x_i=1$. Show that there is $\xi\in(a,b)$ s.t. $$\sum_{i=1}^{n}x_if'(x_i)=f'(\xi).$$ Clearly by mean value theorem there is $\xi\in(a,b)$ s.t. $$f'(\xi)=\frac{f(b)-f(a)}{b-a}$$. Moreover since $\sum_{i=1}^{n}x_i=1$, it follows that $$\sum_{i=1}^{n}x_if'(\xi)=f'(\xi)$$ so it suffices to show that $$\sum_{i=1}^{n}x_i(f'(x_i)-f'(\xi))=0$$ Thus I am trying to construct a function which can give me the desired form. I have some trouble here. AI: Here is one way to prove this, which relies on the following slightly nontrivial fact: Fact (Darboux's Theorem): If a function is differentiable on an interval $(a,b)$, then the image of the derivative is an interval; i.e. the Intermediate Value Theorem holds on any closed subinterval. This fact can be proved by noting that the slopes of chords of a graph of a continuous function form an interval, and then applying the Mean Value Theorem. Assuming this, your work can be finished as follows. Set $$g(x)=\sum_{i=1}^{n}x_i(f'(x_i)-f'(x))=(\sum_{i=1}^{n}x_if'(x_i))-f'(x).$$ Then the image of $g(x)$ is an interval by the fact above. Let $x_m$ be one of the points $x_i$ at which $f'(x_i)$ is smallest and $x_M$ one at which $f'(x_i)$ is largest. If $f'(x_m)=f'(x_M)$, then all the $f'(x_i)$ are equal and we may take $\xi=x_1$. Otherwise, $$g(x_m)=\sum_{i=1}^{n}x_i(f'(x_i)-f'(x_m))>0, \qquad g(x_M)=\sum_{i=1}^{n}x_i(f'(x_i)-f'(x_M))<0.$$ Applying the fact to the closed interval with endpoints $x_m$ and $x_M$ (a subinterval of $(a,b)$), we see that there must be a $\xi$ between $x_m$ and $x_M$ such that $g(\xi)=0$; i.e., $$\sum_{i=1}^{n}x_if'(x_i)=f'(\xi)$$ as required.
H: Prove by minimum counterexample that $2^n>10n$ for $n>5$ Prove by minimum counterexample that for all integers $n>5$ the statement $2^n>10n$ is true. Attempt: Let $S$ be a set of counterexamples, $S=\{n \in \mathbb{Z_+}: 2^n \le 10n, \space n>5 \}$. Let $m \in S$ be the smallest element of $S$, so $m>5$. Then, $m-1 \notin S$. So, we have $$2^{m-1}>10(m-1) \\ 2^m>20(m-1)$$ I am stuck here. I don't know how to get rid of that $-1$. Help appreciated. Thank you. AI: Your attempt has some flaws. The method should show that there is no minimal counterexample by using the fact that every nonempty subset of $\mathbb Z_+$ has a minimal element. But with $S=\{n\in\mathbb Z_+: 2^n\le 10n\}$ (your set without the $n>5$ clause) you cannot show that $S$ is empty, simply because $S$ isn't empty (in fact $S=\{1,2,3,4,5\}$). So by actually taking the minimal element of this $S$, you take $m=1$. Then indeed $m-1=0\notin S$, but so what? Adjust the definition of $S$ so that its emptiness is precisely the statement that there are no counterexamples to "$2^n>10n$ and $n>5$".
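The answer's claim that $\{n\in\mathbb Z_+ : 2^n\le 10n\}=\{1,2,3,4,5\}$ is a one-liner to check:

```python
# scan for counterexamples to 2**n > 10*n among small positive integers
S = [n for n in range(1, 100) if 2**n <= 10 * n]
print(S)   # [1, 2, 3, 4, 5]
```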
H: Using Schröder-Bernstein theorem to show same cardinality Use the Schröder-Bernstein theorem to show that $(0,1)\subseteq \Bbb R$ and $[0,1]\subseteq\Bbb R$ have the same cardinality. Firstly I'm not even entirely sure about what the syntax even means. The elements of subset (0,1) and [0,1] are also included in the set of all real numbers? And [0,1] doesn't include 0 and 1 but (0,1) does? Even when I understand the syntax I probably won't know how to solve it AI: $(0,1) \subset [0,1]$ so $|(0,1)| \le |[0,1]|$. Using the bijection $\phi(x) = \frac{1}{4}+\frac{1}{2} x$, we see that $|[0,1]| = |[\frac{1}{4},\frac{3}{4}]|$, and since $[\frac{1}{4},\frac{3}{4}] \subset (0,1)$ we have $|[0,1]| = |[\frac{1}{4},\frac{3}{4}]| \le |(0,1)|$. Hence, using the Bieber-Cantor–Bernstein–Schröder-Heimlich-Presley theorem we have $|(0,1)| =|[0,1]|$.
H: contour integrals complex analysis 2 Evaluate $\int_C\!\frac{2z-1}{z^4-2z^2+1}dz$ where $C$ is the circle |$z$|=$10$ oriented clockwise. I have an exam tomorrow and need to understand this, can someone please help. AI: Use the residue theorem. Look for the poles (in this case zeros of the denominator) inside $|z|=10$. Hint: the denominator factors into $(z^2 - 1)^2$, so the poles are $\pm 1$, each with multiplicity 2.
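Carrying the hint out: the residues at $z=1$ and $z=-1$ work out to $1/4$ and $-1/4$, so the integral vanishes; a numeric check over the parameterized clockwise circle agrees:

```python
import cmath, math

f = lambda z: (2 * z - 1) / (z * z - 1)**2

# clockwise |z| = 10:  z(θ) = 10 e^{-iθ},  z'(θ) = -10i e^{-iθ}
n = 4000
h = 2 * math.pi / n
I = sum(f(10 * cmath.exp(-1j * (k + 0.5) * h))
        * (-10j) * cmath.exp(-1j * (k + 0.5) * h)
        for k in range(n)) * h
print(abs(I))   # ≈ 0: the residues 1/4 and -1/4 cancel
```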
H: Triple integral using spherical coordinates The following integral is given: $$\iiint_{x^2+y^2+z^2\leq z} \sqrt{x^2+y^2+z^2}dx\,dy\,dz$$ And I have to calculate this integral using spherical coordinates. The substitutions are standard, I think, but I am having a problem with the limits. $$0\leq\phi\leq\pi$$$$0\leq\theta\leq2\pi$$ are the limits for the angles. I am not able to determine the limits for $\rho$ defined as $$\rho=\sqrt{x^2+y^2+z^2}$$ I tried it with $\rho\cos(\phi)$ as the upper limit but it didn't work. AI: You're given the region $x^2+y^2+z^2 \leq z$, which (since $z=\rho\cos\phi$) is equivalent to $\rho^2 \leq \rho\cos\phi$. You can assume $\rho \geq 0$, so this simplifies to $0 \leq \rho \leq \cos\phi$; note that this forces $\cos\phi\geq 0$, i.e. $0\leq\phi\leq\pi/2$. Is that enough for you to figure out the problem?
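With the convention $z=\rho\cos\phi$ (so the region is $0\le\rho\le\cos\phi$, $0\le\phi\le\pi/2$), the integral reduces to $2\pi\int_0^{\pi/2}\frac{\cos^4\phi}{4}\sin\phi\,d\phi=\pi/10$; a midpoint-rule check:

```python
import math

n = 2000
h = (math.pi / 2) / n
total = 0.0
for k in range(n):
    phi = (k + 0.5) * h
    # inner ∫_0^{cos φ} ρ · ρ² dρ = cos⁴φ / 4, times the Jacobian factor sin φ
    total += (math.cos(phi)**4 / 4) * math.sin(phi) * h
result = 2 * math.pi * total
print(result, math.pi / 10)   # both ≈ 0.3141593
```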
H: Show that $y^2-3xy+2x^2=1$ is a solution of the differential equation $4x-3y+y'(2y-3x)=0$ I want to show that the given equation (1.) is a solution of the differential equation (2.) $y^2-3xy+2x^2=1$ $4x-3y+y'(2y-3x)=0$ Do I need to put the derivative of 1. into 2.? Thanks. AI: Hint: The first equation actually gives a curve $F(x,y)=y^2-3xy+2x^2-1=0$. And the second gives the equation containing the derivative of $y$ w.r.t. $x$. So you could try the implicit function theorem to find $y'=-\frac{\partial F/\partial x}{\partial F/\partial y}$ from (1). You will see that this is equivalent to (2).
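The implicit-differentiation route can be sanity-checked numerically: solving the quadratic in $y$ gives the branch $y=\frac{3x+\sqrt{x^2+4}}{2}$ of the curve, and the ODE residual vanishes at sample points (the finite-difference $y'$ is an approximation, hence the tolerance):

```python
import math

def y_on_curve(x):
    # upper branch of y^2 - 3xy + 2x^2 = 1, solved as a quadratic in y
    return (3 * x + math.sqrt(x * x + 4)) / 2

h = 1e-6
for x in (-1.0, 0.0, 0.5, 2.0):
    y = y_on_curve(x)
    yp = (y_on_curve(x + h) - y_on_curve(x - h)) / (2 * h)   # numeric y'
    residual = 4 * x - 3 * y + yp * (2 * y - 3 * x)
    print(x, residual)   # ≈ 0 at every sample point
```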
H: Calculus and line integrals What's the difference between $\int_C f\,ds$ and $\int_C F \cdot dr$? And is $\int_C P \, dx+\int_C Q\,dy$ just notation for $\int_C f\,ds$? I am referring here to Section 13.2 from James Stewart's Essential Calculus and trying to understand this very basic thing. It seems like there is this notion of dot product when thinking about integrating along a path in a vector field, and then this notion of integrating with respect to arclength. Is that correct? AI: In the first integral you integrate a real valued function; in the second you integrate a vector valued function. I do not consider complex integration for simplicity. Let $C:[a,b]\rightarrow\mathbb R^2, s\mapsto C(s):=(C_1(s),C_2(s))$ be a smooth curve in $\mathbb R^2$, and $f:\mathbb R^2\rightarrow\mathbb R$ a real valued function. Then the integral of $f$ along $C$ is denoted by $\int_C f$ or $\int_C f ds$ (this second notation is a bit spurious) and it is equal to $$\int_C f:=\int_a^b f(C(s))\|\frac{dC}{ds}\|ds, $$ where $$\|\frac{dC}{ds}\|^2:=\left(\frac{dC_1}{ds}\right)^2+\left(\frac{dC_2}{ds}\right)^2.$$ If $F:\mathbb R^2\rightarrow \mathbb R^2$ is a vector valued function, then the integral denoted by $\int_C F\cdot dC$ is equal to $$\int_C F\cdot dC:=\int_a^b F(C(s))\cdot\frac{dC}{ds}ds, $$ where $\cdot$ is the scalar product of vectors in $\mathbb R^2$, and it is often called the work of $F$ along $C$. You can generalize the work integral to $\mathbb R^3$, of course. Heuristically, you are considering the equality $$dC"="\frac{dC}{ds}ds$$ in the definition of $\int_C F\cdot dC$, where $dC$ denotes an infinitesimal displacement along $C$.
H: Orthogonality of Haar wavelet functions I'm reading about wavelets and I bumped into the following: $\text{Haar wavelet is a step function}\; \psi(x), \text{which takes values 1 and -1, when}\; x \;\text{is in the ranges}\; [0, \frac{1}{2}) \;\text{and}\; [\frac{1}{2}, 1).$ $\text{Dilations and translations of the Haar wavelet are defined as:}$ $$\psi_{jk}(x) = \text{const} \cdot \psi(2^jx-k) $$ Now here is what I need help with: $\text{It is apparent that:}$ $$\int\psi_{jk}(x)\cdot\psi_{j'k'}(x)\,dx = 0$$ $\text{whenever}\; j=j'\; \text{and}\; k=k'\; \text{is not satisfied simultaneously}$. My question is: Why is the integral above true? P.S. here is my reference: http://gtwavelet.bme.gatech.edu/wp/kidsA.pdf (page 4) AI: The wavelets are similar to clipped versions of the Rademacher functions, and 'orthogonality' follows in a similar manner. Note that $\psi^{-1} \{0\}^c = [0,1)$, and hence $\psi_{jk}^{-1} \{0\}^c = S_{jk} = \frac{1}{2^j}[k,k+1)$. The keys to the proof are: (i) Any two sets $S_{jk}, S_{j'k'}$ satisfy one of $S_{jk} \subset S_{j'k'}$, the other way around, or $S_{jk} \cap S_{j'k'} = \emptyset$. (ii) If $S_{jk}, S_{j'k'}$ overlap, then either $S_{jk} = S_{j'k'}$, or one is contained in 'half the length' of the other (to be made precise below). We have $\int \psi = \int_{S_{00}} \psi = 0$, and hence $\int \psi_{jk} = \int_{S_{jk}} \psi_{jk} = 0$. Now suppose $j \le j'$. Then either $S_{jk} \supset S_{j'k'}$ or $S_{jk} \cap S_{j'k'} = \emptyset$. Suppose $j=j'$ and $k \neq k'$. Then we have $S_{jk} \cap S_{jk'} = \emptyset$, and it follows that $\int \psi_{jk} \psi_{jk'} = 0$. Now suppose $j < j'$. If $S_{jk} \cap S_{j'k'} = \emptyset$, we have $\int \psi_{jk} \psi_{j'k'} = 0$. 
If $S_{jk} \supset S_{j'k'}$, then we must have (since $\frac{1}{2^j} \ge \frac{1}{2} \frac{1}{2^{j'}} $) either $\frac{1}{2^{j'}}[k',k'+1) \subset \frac{1}{2^j}[k,k+\frac{1}{2})$ or $\frac{1}{2^{j'}}[k',k'+1) \subset \frac{1}{2^j}[k+\frac{1}{2},k+1)$, and hence $\psi_{jk}$ is constant on $ S_{j'k'}$. It follows that $\int \psi_{jk} \psi_{jk'} = 0$.
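The case analysis above can be checked in exact arithmetic for small $j,k$ (a throwaway sketch, taking the constant in $\psi_{jk}$ to be $1$; the normalization only rescales and does not affect orthogonality):

```python
from fractions import Fraction

def psi(x):
    # mother Haar wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere
    if 0 <= x < Fraction(1, 2): return 1
    if Fraction(1, 2) <= x < 1: return -1
    return 0

def psi_jk(j, k, x):
    return psi(2**j * x - k)

def inner(j, k, jp, kp, M=64):
    # exact integral over [0,1]: the integrand is a step function that is
    # constant on each dyadic cell [i/M, (i+1)/M) for M a fine power of two
    total = Fraction(0)
    for i in range(M):
        x = Fraction(i, M)
        total += Fraction(psi_jk(j, k, x) * psi_jk(jp, kp, x), M)
    return total
```

For example, `inner(1, 0, 2, 1)` is $0$ (nested supports, constant sign on the inner support), while `inner(1, 0, 1, 0)` is $\tfrac12$, the squared norm at scale $j=1$.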
H: Line integrals of given curves This question has an integral $$\int(x^4+4xy^3)dx+(6x^2y^2-5y^4)dy$$to be evaluated on the parametric curve $$C:(-(t+2)\cos(\pi t^2), t-1)$$I took the partial derivatives of the terms in the bracket and subtracted them to get $0$. However, this is not the right answer. I don't know any other method to solve such integrals. AI: Direct approach Let us write the integral as $$\int_C F\cdot dC:=\int_0^1 F(C(t))\cdot\frac{dC}{dt}dt, $$ with $F(x,y):=(F_1(x,y),F_2(x,y))=(x^4+4xy^3,6x^2y^2-5y^4)$ and $C:[0,1]\rightarrow \mathbb R^2$, with $C(t):=(-(t+2)\cos(\pi t^2), t-1)$. The integral is really complicated and we do not want to perform all computations. Searching for a potential $\varphi(x,y)$ Let us try another way, i.e. let us have a deeper look at the original formulation of our integral: $$\int_C F\cdot dC:=\int F_1dx+F_2dy$$ If we could find a $C^1$ function $\varphi(x,y)$ s.t. $$F_1:=\frac{\partial \varphi}{\partial x}, $$ $$F_2:=\frac{\partial \varphi}{\partial y}, $$ then our integral would be equal to $$\int_C F\cdot dC:=\int_C \frac{\partial \varphi}{\partial x}dx+\frac{\partial \varphi}{\partial y}dy=(\text{using the definition of the integral along a curve})= \int_0^1\frac{d\varphi(C_1(t),C_2(t))}{dt}dt=\varphi(C_1(1),C_2(1))-\varphi(C_1(0),C_2(0)).$$ This proof, if it is not clear, can be found in any textbook on analysis. To find such a $\varphi$, if it exists, we must solve the equations $$x^4+4xy^3=\frac{\partial \varphi}{\partial x}, $$ $$6x^2y^2-5y^4=\frac{\partial \varphi}{\partial y}.$$ Let us solve the first equation; we arrive at $$x^4+4xy^3=\frac{\partial \varphi}{\partial x}\Rightarrow \varphi(x,y)=\frac{x^5}{5}+2x^2y^3+\rho(y),$$ for some function $\rho=\rho(y)$. Plugging the above $\varphi(x,y)$ in the second equation we arrive at $$\frac{\partial }{\partial y}\left(\frac{x^5}{5}+2x^2y^3+\rho(y)\right)\stackrel{!}{=} 6x^2y^2-5y^4,$$ i.e.
$$6x^2y^2+\frac{d\rho }{d y}\stackrel{!}{=} 6x^2y^2-5y^4,$$ which implies $\frac{d\rho }{d y}=-5y^4$, or $\rho(y)=-y^5$ (up to an additive constant, which does not affect the result). In summary, the potential function is given by $$\varphi(x,y)=\frac{x^5}{5}+2x^2y^3-y^5, $$ and the original integral is $$\int_C F\cdot dC=\varphi(3,0)-\varphi(-2,-1)=\frac{243}{5}-\left(-\frac{67}{5}\right)=62, $$ as $C(1)=(3,0)$ and $C(0)=(-2,-1)$.
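As a numerical cross-check (not the recommended method, just a sketch confirming the potential-function shortcut), one can brute-force the work integral along $C$ and compare it with $\varphi(C(1))-\varphi(C(0))$:

```python
import math

def F(x, y):
    return (x**4 + 4 * x * y**3, 6 * x**2 * y**2 - 5 * y**4)

def C(t):
    return (-(t + 2) * math.cos(math.pi * t**2), t - 1)

def dC(t, h=1e-6):
    # central-difference derivative of the curve (numerical, for checking only)
    (x1, y1), (x0, y0) = C(t + h), C(t - h)
    return ((x1 - x0) / (2 * h), (y1 - y0) / (2 * h))

def phi(x, y):
    return x**5 / 5 + 2 * x**2 * y**3 - y**5

# midpoint-rule approximation of the work integral over t in [0, 1]
N = 200_000
dt = 1.0 / N
work = 0.0
for i in range(N):
    t = (i + 0.5) * dt
    x, y = C(t)
    fx, fy = F(x, y)
    dx, dy = dC(t)
    work += (fx * dx + fy * dy) * dt

expected = phi(*C(1.0)) - phi(*C(0.0))   # phi(3, 0) - phi(-2, -1) = 62
```

The brute-force sum agrees with the potential-difference value to several digits.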
H: Infinite dimensional Euclidean space with the product topology metrizable? Let $\mathbb{R}^{\omega}$ be the space of real sequences with the product topology. Is $\mathbb{R}^{\omega}$ metrizable? AI: Hint: As $\mathbb{R}$ is homeomorphic to $(0 , 2^{-n} )$ for all $n \geq 1$, it follows that $\mathbb{R}^\omega$ (with the product topology) is homeomorphic to $Y =\prod_{n=1}^\infty ( 0 , 2^{-n} )$. Can you think of a natural candidate for a metric on $Y$?
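For reference, one standard metric inducing the product topology directly on $\mathbb{R}^\omega$ is $d(x,y)=\sum_n 2^{-n}\min(|x_n-y_n|,1)$. A finite-truncation sketch (an illustrative approximation, since a computer cannot hold a full sequence):

```python
# d(x, y) = sum_n 2^{-(n+1)} * min(|x_n - y_n|, 1), truncated to `terms` coordinates.
def d(x, y, terms=50):
    return sum(min(abs(a - b), 1.0) / 2**(n + 1)
               for n, (a, b) in enumerate(zip(x[:terms], y[:terms])))

x = [1.0] * 60
y = [0.0] * 60
# here every coordinate differs by 1, so the truncated sum is 1 - 2**-50
```

Clamping each coordinate's contribution to $1$ and the $2^{-n}$ weights are what make far-out coordinates negligible, mirroring the fact that basic open sets in the product topology restrict only finitely many coordinates.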
H: Prove that $\sqrt[3]{p}$, $\sqrt[3]{q}$ and $\sqrt[3]{r}$ cannot be in the same arithmetic progression My cousin (he doesn't speak English well so I am writing on his behalf) is trying to do the following problem: Let $p$,$q$, $r$ be different primes (let's assume $p<q<r$). Show that $\sqrt[3]{p}$, $\sqrt[3]{q}$ and $\sqrt[3]{r}$ cannot be in the same arithmetic progression. I honestly don't know how to do it without going into field extensions and that kind of stuff, but my cousin is at high school level (he's preparing for IMO or something similar). So here's what we have done so far: Suppose there exists such a progression: $a[n] = nd + a[0]$, where $a[n_0] =\sqrt[3]{q}$ and $ a[n_1] =\sqrt[3]{r}$ WLOG assume that $\sqrt[3]{p} = a[0]$ Then it must be $$\frac{\sqrt[3]{q}-\sqrt[3]{p}}{n_0} = \frac{\sqrt[3]{r}-\sqrt[3]{p}}{n_1} $$ Therefore $\frac{\sqrt[3]{r}-\sqrt[3]{p}}{\sqrt[3]{q}-\sqrt[3]{p}}\in \mathbb{Q}$. We want to show that this can't be, but alas we haven't succeeded. We have tried to exponentiate, take the roots out of the denominator, etc. with no success. Any help will be most appreciated. Thanks! Note: We have to prove that they are not in the same arithmetic progression, consecutive or not (arbitrary positions) AI: If $a(\sqrt[3]r - \sqrt[3]p)=b(\sqrt[3]q - \sqrt[3]p)$ with $a,b \in \Bbb Z$ (with $a,b$ nonzero and distinct), then $(b-a)\sqrt[3]p = b\sqrt[3]q - a\sqrt[3]r$, and $(b-a)^3p = b^3q - 3ab\sqrt[3]{qr}(b\sqrt[3]q-a\sqrt[3]r)-a^3r = b^3q-3ab(b-a)\sqrt[3]{pqr} - a^3r$. Then, $\sqrt[3]{pqr} = (a^3r-b^3q+(b-a)^3p)/3ab(a-b)$ is rational. You can quickly deduce a contradiction from there.
H: Are all metric translations isometries Let $(M, d)$ be a metric space. I define a translation of $M$ to be a function $f$ from $M$ to $M$ such that $d(x, f(x)) = d(y, f(y))$ for all $x$ and $y$ in $M$. My conjecture is that every translation on $M$ is an isometry under the same metric. Can anyone prove this, or give me a counterexample? AI: If we don't require continuity of $f$, we have a counterexample $f\colon \mathbb{R}\to\mathbb{R}$ with $$f(x) = \begin{cases}x + 1 &, x \in \mathbb{Q}\\ x-1 &, x \notin\mathbb{Q}.\end{cases}$$ For continuous $f$, we get a counterexample by considering a discrete metric space $(M,d)$, with $d(x,y) = 1$ for $x\neq y$, that contains at least three points. Let $x_0,x_1$ be two distinct points, and $$f(x) = \begin{cases}x_0 &, x \neq x_0\\ x_1 &, x = x_0. \end{cases}$$ Then $d(x,f(x)) = 1$ for all $x$, but $d(f(x),f(y)) = 0$ for all $x,y \in M\setminus \{x_0\}$.
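The second (discrete-metric) counterexample above is finite, so it can be verified exhaustively; here is a throwaway check with $M=\{0,1,2\}$ standing in for the three-point space and $x_0=0$, $x_1=1$ as in the answer:

```python
M = [0, 1, 2]

def d(x, y):
    # discrete metric
    return 0 if x == y else 1

def f(x):
    # f sends x0 = 0 to x1 = 1 and everything else to x0
    return 1 if x == 0 else 0

# every point is moved by distance exactly 1 ...
displacements = {d(x, f(x)) for x in M}
# ... yet f is not an isometry: the points 1 and 2 collapse together
collapsed = d(f(1), f(2))
```

So `displacements` is `{1}` (the "translation" condition holds) while `collapsed` is `0` even though `d(1, 2)` is `1`.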
H: Show that $(y-2x+3)^3=(y-x+1)^2$ is a solution of the differential equation $(2x-4y)dx+(x+y-3)dy=0$ I want to show that $$(y-2x+3)^3=(y-x+1)^2$$ is a solution for: $$(2x-4y)dx+(x+y-3)dy=0$$ what I did so far is: $$\frac{dy}{dx}=\frac{4y-2x}{x+y-3}$$ any suggestions? AI: $(y-2x+3)^3=(y-x+1)^2$ differentiating this we have $3(y-2x+3)^2(dy-2dx)=2(y-x+1)(dy-dx)$ using the original equation and substituting we have $3(y-x+1)(dy-2dx)=2(y-2x+3)(dy-dx)$ $(y+x-3)dy+(-4y+2x)dx=0$ $(2x-4y)dx+(y+x-3)dy=0$ $\fbox{}$
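A numerical spot-check of the verification (not part of the proof): find a point on the curve $g(x,y)=(y-2x+3)^3-(y-x+1)^2=0$ and compare the implicit slope $-g_x/g_y$ with the slope $(4y-2x)/(x+y-3)$ demanded by the ODE.

```python
def g(x, y):
    return (y - 2 * x + 3) ** 3 - (y - x + 1) ** 2

# bisection in y at x = 0: g(0, -3) = -4 < 0 < 8 = g(0, -1), and g(0, .) is
# increasing on [-3, -1], so there is a unique root (it is y = -2)
x = 0.0
lo, hi = -3.0, -1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(x, mid) < 0:
        lo = mid
    else:
        hi = mid
y = (lo + hi) / 2

gx = -6 * (y - 2 * x + 3) ** 2 + 2 * (y - x + 1)   # dg/dx
gy = 3 * (y - 2 * x + 3) ** 2 - 2 * (y - x + 1)    # dg/dy
implicit_slope = -gx / gy                          # implicit differentiation
ode_slope = (4 * y - 2 * x) / (x + y - 3)          # from the ODE
```

At the point $(0,-2)$ both slopes come out equal (to $8/5$), as the algebra above predicts.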
H: Show that $7$ is irreducible in $\Bbb Z[i]$ I have to show that $7$ is irreducible in $\Bbb Z[i]$. To show irreducibility I have to show that it's not a unit. This is simple to just show exhaustively. I'm having trouble with the second part which is to show that if it factors into $a.b$ that either $a$ or $b$ is a unit. What I have so far is that $7 = (a + bi)(c + di)$ $7 = (ac - bd) + (ad + cb)i$ Which gives two linear equations $ad+cb=0$ $ac-bd=7$ How do I get from that to a complete proof? Or have I gone down the wrong path. AI: Hint: First, show that $a+bi$ is a unit if and only if $|a+bi|^2=1$ (that is, if and only if $a^2+b^2=1$). Next, show that $a+bi\mapsto|a+bi|^2$ is a multiplicative function, meaning that $|(a+bi)(c+di)|^2=|a+bi|^2|c+di|^2.$ When $7=(a+bi)(c+di),$ what can we conclude about the possible values of $|a+bi|^2$ and $|c+di|^2$? What does that tell us about $7$?
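A brute-force companion to the hint: by multiplicativity of the norm $N(a+bi)=a^2+b^2$, a nontrivial factor of $7$ would need norm $7$, and no Gaussian integer has norm $7$ (searching $|a|,|b|\le 3$ suffices since $|a|\le\sqrt 7<3$):

```python
# Gaussian integers of norm 7 (would give a nontrivial factor of 7) and of
# norm 1 (the units of Z[i]).
norm7 = [(a, b) for a in range(-3, 4) for b in range(-3, 4) if a * a + b * b == 7]
units = [(a, b) for a in range(-3, 4) for b in range(-3, 4) if a * a + b * b == 1]
```

`norm7` comes out empty, so in any factorization $7=(a+bi)(c+di)$ one factor must have norm $1$, i.e. be one of the four units $\pm 1, \pm i$.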
H: Uniform continuous distributions - question with a square RV Question: Jack wants to build a wooden cylinder. He decided to choose its radius $Y$ randomly s.t. $Y\sim U[0,1]$. a. What is the probability that the radius is in a closed interval $[\alpha,\beta]$? What is $Y$'s density function? b. What is the probability that the volume of the cylinder (with a known height $h$), $X=\pi Y^2 h$, is in a closed interval $[\alpha,\beta]$? What is $X$'s density function? c. What is $E[X]$? d. What is $Var[X]$? Request: I'd like to verify this solution, especially b. (I'm not sure that I'm allowed to do what I did there, and that I can assume $X$ is also uniform.) Thanks. My answers: 5.a. The probability of the RV being in an interval equals the length of the interval, therefore $P(Y\in [\alpha,\beta])=\beta-\alpha$. The density function is $\begin{cases} 1 &y\in [0,1]\\ 0 & \text{elsewhere} \end{cases}$ 5.b. $X=\pi h Y^2$ so $P(a \le X \le b)= P(a \le \pi h Y^2 \le b)= P(\sqrt \frac {a}{\pi h} \le Y \le \sqrt \frac{b}{\pi h})=$ $\sqrt \frac{b}{\pi h}-\sqrt \frac {a}{\pi h}$. The passage with the square root is possible due to the fact that all values are positive only. The values of $X$ lie in the interval $[0,\pi h]$, therefore the density function of $X$ is $\begin{cases} \frac 1{\pi h} & x\in[0,\pi h] \\ 0 & \text{elsewhere} \end{cases}$ 5.c. Lemma (for every uniform RV on $[0,1]$: $E[Y^n]=\frac 1{n+1}$); this is proved easily because this is actually $\int _0^1 y^n\,dy$. We'll be using this now: $E[X]=E[\pi h Y^2]=\pi h E[Y^2]=\frac {\pi h}3$. 5.d. $V(X)=V(\pi h Y^2)=(\pi h)^2V(Y^2)=(\pi h)^2[E[Y^4]-E^2[Y^2]]= (\pi h)^2[\frac 15-\frac19]=\frac{4(\pi h)^2}{45}$. AI: Hint (for part b): your computation of $P(a\le X\le b)$ already gives you the cumulative for $X$, namely $F(u)=P(X\le u)=\sqrt{u/(\pi h)}$ for $0\le u\le \pi h$ (take $a=0$, $b=u$). The density for $X$ is the derivative of this cumulative, which is not a constant, so $X$ is not uniform and the constant density $\frac 1{\pi h}$ you wrote down is incorrect. Parts c and d are fine.
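A quick Monte Carlo sketch supports the moment calculations in parts c and d (the height $h=2$ is an arbitrary assumption for the check, not from the problem):

```python
import math, random

random.seed(0)
h = 2.0                      # arbitrary fixed height, assumed for this check
N = 200_000

# sample X = pi * h * Y^2 with Y uniform on [0, 1]
xs = [math.pi * h * random.random() ** 2 for _ in range(N)]

mean = sum(xs) / N
var = sum((v - mean) ** 2 for v in xs) / N
# theory: E[X] = pi*h/3, Var[X] = 4*(pi*h)^2/45
```

A histogram of `xs` would also show the density piling up near $0$, visual confirmation that $X$ is far from uniform on $[0,\pi h]$.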
H: A question regarding lines between points. On pg.13 of Lang's "Second Course in Calculus", the following is asserted: Let $P=(2,1)$ and $A=(-1,5)$. Then the parametric equation of the line through $P$ and in the direction of $A$ gives us $x=2-t, y=1+5t$. Shouldn't the equations be $x=2+3t, =1-4t$? Thanks in advance! AI: Well, no. What they mean, here, is that the line passes through the point $P$, and is in the direction of the vector from $(0,0)$ to $A$. That was very poorly phrased, though, and they are equivocating on how to treat an ordered pair.
H: $\mathbb{F}_{p}A$-module Yesterday I was introduced to the definition of a module in a course: "Homological Algebra". I'm doing a project involving $p-$groups and in the text I got from my supervisor they use the word: $\mathbb{F}_{p}A$-module. What does $\mathbb{F}_{p}A$-module mean? The problem is that the text does not define it and it is difficult to search for an answer using only the notation. Note: In this case $A$ is a $p$-group. AI: I don't know what $A$ is, but hopefully it is a group. If that's not true, it's safe to ignore everything I write below! Even though $A$ is probably abelian, I'll write it multiplicatively to make the below easier. This isn't a rigorous definition but is hopefully enough to let you search. The ring $\mathbb{F}_pA$, sometimes written $\mathbb{F}_p[A]$, is the ring of finite sums $$ \sum_i \lambda_i a_i $$ where $\lambda_i$ belongs to the finite field $\mathbb{F}_p$ and $a_i$ belongs to $A$. To add and multiply these, just "treat them like polynomials," remembering that we're writing $A$ multiplicatively. In other words, if $a_i \neq a_j$ then $a_i + a_j$ doesn't simplify, but $a_ia_j = a_k$ using the group law on $A$. (If this informal explanation doesn't make sense, the phrase to google is "group ring" or "group algebra.") Since this is a ring, it makes sense to talk about a module over it. (The phrase to google is "module" or "R-module.")
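The "treat them like polynomials" rule can be made concrete with a tiny sketch of $\mathbb{F}_p[A]$ for a cyclic group $A=\mathbb{Z}/n\mathbb{Z}$ (the representation, names, and the choice $p=3$, $n=4$ are all illustrative, not from the text):

```python
# Elements of F_p[Z/nZ] stored as dicts {group_element: coefficient mod p}.
p, n = 3, 4   # the group algebra F_3[Z/4Z]

def add(u, v):
    w = dict(u)
    for g, c in v.items():
        w[g] = (w.get(g, 0) + c) % p
    return {g: c for g, c in w.items() if c}   # drop zero coefficients

def mul(u, v):
    w = {}
    for g, c in u.items():
        for h, d in v.items():
            k = (g + h) % n                    # the group law of Z/nZ
            w[k] = (w.get(k, 0) + c * d) % p
    return {g: c for g, c in w.items() if c}

x = {1: 1}    # the group element "1" viewed as a ring element
# x has order 4, so (x - 1)(x^3 + x^2 + x + 1) = x^4 - 1 = 0 in the group algebra:
zero = mul(add({1: 1}, {0: p - 1}), {3: 1, 2: 1, 1: 1, 0: 1})
```

Note `zero` really is the zero element: the group algebra has zero divisors whenever $A$ is nontrivial, which is one reason these rings are interesting module-theoretically.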
H: For $j \in \{0,...,n-1\}$ is $(n-j)!(j+1)! \leq n!$ true? For $j \in \{0,...,n-1\}$ is $(n-j)!(j+1)! \leq n!$ true? I mean $\dfrac{n!}{(n-j)!(j+1)!}$ doesn't have to be an integer. I need this inequality in another exercise, so Is it provable? AI: Hint: Compare $$ (n-j)!(j+1)!=1\cdot \ldots\cdot (n-j)\cdot 2 \cdot \ldots \cdot (j+1) $$ with $$ n! = 1\cdot\ldots\cdot (n-j)\cdot (n-j+1)\cdot\ldots n. $$
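The hint's factor-by-factor comparison can be confirmed exhaustively for small $n$ (a throwaway sanity check, not a proof):

```python
from math import factorial

# check (n - j)! (j + 1)! <= n! for all j in {0, ..., n-1} and n up to 39
ok = all(
    factorial(n - j) * factorial(j + 1) <= factorial(n)
    for n in range(1, 40)
    for j in range(n)
)
```

Here `ok` is `True`; the point of the hint is that after cancelling the common prefix $1\cdots(n-j)$, each remaining factor $2,\ldots,(j+1)$ on the left is at most the corresponding factor $(n-j+1),\ldots,n$ on the right.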
H: Field extension $F\subseteq L_1$ and $F\subseteq L_2$ and $[L_1L_2:F]<[L_1:F][L_2:F]$. I'm searching for an example of field extensions $L1$, $L2$ of $F$ for which $[L_1L_2:F]<[L_1:F][L_2:F]$. Infact I'm trying prove the problem below. So any hint can be helpful. Let $K$ be a finite extension of $F$. If $L_1$ and $L_2$ are subfields of $K$ containing $F$, show that $[L_1L_2:F]\leq [L_1:F][L_2:F]$. If $\gcd([L_1:F],[L_2:F])=1$, prove that $[L_1 L_2:F]=[L_1:F][L_2:F].$ AI: The easiest example is when $L_1$ is contained in $L_2$ and both are non-trivial extensions of $F$. To prove the first part of the exercise take a basis $a_1,\ldots,a_n$ of $L_1$ and a basis $b_1,\ldots, b_m$ of $L_2$ and prove that $a_ib_j$ generates $L_1L_2$ (everything here as $F$ vector spaces). For the second part just note that both $[L_1:F]$ and $[L_2:F]$ must divide $[L_1L_2:F]$, so the least common multiple of those divides $[L_1L_2:F]$.
H: Geometric Solution for Equation with Complex Numbers Given we have two complex numbers $z_1$ and $z_2$ with $|z_1| = |z_2|$. How can it be shown geometrically, that $\frac{z_1+z_2}{z_1-z_2}$ is purely imaginery? AI: Consider the quadrilateral with vertices $0, z_1, z_1+z_2, z_2$. The condition $\lvert z_1\rvert = \lvert z_2 \rvert$ makes it a rhombus. Thus the two diagonals $z_1+z_2$ and $z_1 - z_2$ are orthogonal.
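The rhombus picture can be sanity-checked numerically by sampling random pairs with equal modulus (an illustrative sketch; the near-degenerate case $z_1\approx z_2$, where the quotient blows up, is skipped):

```python
import cmath, math, random

random.seed(1)
worst = 0.0
for _ in range(100):
    r = random.uniform(0.5, 5.0)
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    z1, z2 = r * cmath.exp(1j * a), r * cmath.exp(1j * b)   # |z1| = |z2| = r
    if abs(z1 - z2) < 0.1:
        continue   # skip near-degenerate pairs (z1 = z2 makes the quotient undefined)
    w = (z1 + z2) / (z1 - z2)
    worst = max(worst, abs(w.real))
```

`worst` stays at roundoff level, i.e. the real part of $(z_1+z_2)/(z_1-z_2)$ vanishes whenever the moduli agree, exactly as the orthogonal diagonals of the rhombus predict.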
H: Infinite series $\sum _{n=2}^{\infty } \frac{1}{n \log (n)}$ Recently, I encountered a problem about infinite series. So my question is how to know whether the infinite series $\sum _{n=2}^{\infty } \frac{1}{n \log (n)}$ is convergent? AI: To see whether $\sum_2^\infty 1/(n \log n)$ converges, we can use the integral test. This series converges if and only if this integral does: $$ \int_2^\infty \frac{1}{x \log x} dx = \left[\log(\log x)\right]_2^\infty $$ and in fact the integral diverges. This is part of a family of examples worth remembering. Note that $$ \frac{d}{dx} \log(\log(\log x)) = \frac{d}{dx} \log(\log x) \cdot \frac{1}{\log (\log x)} = \frac{1}{x \log x \log(\log x)} $$ and $\log (\log (\log x)) \to \infty$ as $x \to \infty$, hence $\sum \frac{1}{n \log n \log (\log n)}$ diverges as well. Similarly, by induction we can put as many iterated $\log$s in the denominator as we want (i.e. $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ where the $i$th log is iterated $i$ times), and it will still diverge. However, as you should check, $\sum \frac{1}{n \log^2 n}$ converges, and in fact (again by induction) if you square any of the iterated logs in $\sum \frac{1}{n \log n \log(\log n) \ldots \log (\ldots (\log n) \ldots )}$ the sum will converge.
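The divergence is painfully slow, which a short computation makes vivid (an illustrative check of the integral-test comparison, not a proof):

```python
import math

def partial_sum(N):
    # partial sum of 1/(n log n) from n = 2 to N
    return sum(1.0 / (n * math.log(n)) for n in range(2, N + 1))

s4, s6 = partial_sum(10**4), partial_sum(10**6)

# the growth between N = 10^4 and N = 10^6 should track the integral
# log(log(10^6)) - log(log(10^4)) ~ 0.4
growth = s6 - s4
reference = math.log(math.log(10**6)) - math.log(math.log(10**4))
```

Adding a hundred times as many terms buys only about $0.4$ of extra sum, yet `growth` matches the $\log\log$ reference closely: the series crawls to infinity exactly at the rate the integral test predicts.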
H: solving two simple line integrals First one is : $$\int_\gamma e^zdz,\quad \gamma(t)=\pi ti,\quad t\in[-1,1]$$ my attempt: $z=\gamma(t)=\pi ti \quad dz=\pi i\,dt \quad -1\le t\ \le1, $ then $$\int_\gamma e^zdz=\int_{-1}^1e^{\pi ti}\pi i\,dt=\pi i\int_{-1}^1e^{\pi ti}dt=\pi i\left.\frac{e^{\pi ti}}{\pi i}\right|_{-1}^1=e^{\pi i}-e^{-\pi i}.$$ Is this correct? Second one is: $$\int_\gamma z^2dz\quad \gamma(t)=t+it\quad t\in [0,1] $$ I tried to solve it in the same way as the first, but couldn't. AI: Your first computation is correct; note that it simplifies further, since $e^{\pi i}-e^{-\pi i}=(-1)-(-1)=0$. The second integral is equal to $$I_2=\int_0^1 (t+it)^2 (1+i)dt=\int_0^1 t^2(1+i)^3dt=(1+i)^3\cdot\frac{1}{3}=\frac{-2+2i}{3}, $$ as $\frac{d\gamma}{dt}=1+i$ and $(1+i)^3=-2+2i$.
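Both integrals can be cross-checked numerically with the same parametrization recipe (a throwaway sketch, not part of the answer):

```python
import cmath, math

def contour_integral(f, gamma, dgamma, a, b, N=20_000):
    # midpoint rule for the integral of f(gamma(t)) * gamma'(t) over [a, b]
    h = (b - a) / N
    total = 0j
    for i in range(N):
        t = a + (i + 0.5) * h
        total += f(gamma(t)) * dgamma(t) * h
    return total

# first integral: gamma(t) = pi*t*i on [-1, 1]
I1 = contour_integral(cmath.exp,
                      lambda t: math.pi * t * 1j,
                      lambda t: math.pi * 1j, -1, 1)

# second integral: gamma(t) = t + i*t on [0, 1]
I2 = contour_integral(lambda z: z * z,
                      lambda t: t + 1j * t,
                      lambda t: 1 + 1j, 0, 1)
```

`I1` comes out numerically zero, matching $e^{\pi i}-e^{-\pi i}=0$, and `I2` matches $(1+i)^3/3=(-2+2i)/3$.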
H: Vector Space Dimension Let $A,B$ be $n\times m$, $s\times m$ matrices respectively, and let $$V=\{X\in \mathbb{F}^{m\times n};\ B X A=0\}.$$ Suppose that $$rank(A)=r,\ rank(B)=m.$$ Show that $dim V=m(n-r)$. I have no idea. AI: Note that from rank-nullity, we have $$m = \mathrm{rank}(B) + \mathrm{nullity}(B) = m + \mathrm{nullity}(B)$$ Therefore $B$ has a trivial nullspace. It follows that if $BXA = 0$ then we have $XA = 0$. This happens if and only if $\mathrm{Im}(A) \subseteq \ker(X)$. Fix a basis of $\mathrm{Im}(A)$ and extend this to a basis of $\mathbb{F}^n$. With respect to this basis, $X$ satisfies the required condition if and only if the columns corresponding to the vectors in $\mathrm{Im}(A)$ are zero. There are $r$ such columns of $m$ entries each. It follows you are free to vary the remaining $mn - mr = m(n-r)$ entries. Therefore the dimension of the space is $m(n-r)$.
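The dimension count can be confirmed on a concrete instance using the standard vec identity $\operatorname{vec}(BXA)=(A^T\otimes B)\operatorname{vec}(X)$, so $\dim V$ is the nullity of $A^T\otimes B$ (the sizes $n=4$, $m=s=3$, $r=2$ and the matrices below are illustrative, not from the problem):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def kron(P, Q):
    # Kronecker product of P (p x q) and Q (r x s)
    return [[P[i][j] * Q[k][l] for j in range(len(P[0])) for l in range(len(Q[0]))]
            for i in range(len(P)) for k in range(len(Q))]

A = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [2, 3, 0]]   # n x m = 4 x 3, rank r = 2
B = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]              # s x m = 3 x 3, rank m = 3
At = [list(row) for row in zip(*A)]                # A transpose
K = kron(At, B)                                    # vec(BXA) = (A^T kron B) vec(X)
dim_V = 4 * 3 - rank(K)                            # nullity of A^T kron B
```

Here `dim_V` is $6 = m(n-r) = 3\cdot(4-2)$, agreeing with the basis-extension argument (rank of a Kronecker product is the product of ranks, $rm$, so the nullity is $mn-rm$).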
H: Proof of sequence convergence Prove that the following sequence is convergent and calculate its limit: $x_1=2$ and $x_{n+1} = \sqrt{3 + \frac{x_n^2}{2}}$. I get a limit of 2.4, but by calculating several sequence terms, I can see that it converges to 2. AI: The sequence $\{x_n\}$ is bounded between $2$ and $\sqrt{6}$ because the bounds are valid for $x_1$ and $$x_{n+1}=\sqrt{3 + \frac{x_n^2}{2}} \ge \sqrt{3 + \frac{2^2}{2}} = \sqrt{3 + 2} \ge \sqrt{2 + 2} = 2 $$ $$x_{n+1}=\sqrt{3 + \frac{x_n^2}{2}} \le \sqrt{\frac{6+(\sqrt{6})^2}{2}} = \sqrt{\frac{6+6}{2}} = \sqrt{6} $$ Then it is monotone increasing: $$x_{n+1}^2 = 3 + \frac{x_n^2}{2} = \frac{6+x_n^2}{2} \ge \frac{x_n^2+x_n^2}{2} = x_n^2$$ Therefore it is convergent and the limit $x$ is given by the equation $$ x=\sqrt{3 + \frac{x^2}{2}} \Rightarrow x^2=3 + \frac{x^2}{2} \Rightarrow \frac{x^2}{2} = 3 \Rightarrow x^2 = 6 \Rightarrow x = \sqrt{6}$$
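Iterating the recursion a few dozen times illustrates all three facts at once: monotone, bounded by $\sqrt 6$, and converging to $\sqrt 6 \approx 2.449$ (which is presumably the "2.4" the asker computed):

```python
import math

x = 2.0
trace = [x]
for _ in range(60):
    x = math.sqrt(3 + x * x / 2)
    trace.append(x)

# monotone increasing and trapped in [2, sqrt(6)] (tiny tolerances for rounding)
increasing = all(b >= a - 1e-12 for a, b in zip(trace, trace[1:]))
bounded = all(2 - 1e-12 <= v <= math.sqrt(6) + 1e-12 for v in trace)
gap = abs(trace[-1] - math.sqrt(6))
```

The convergence is fast: near the fixed point the error roughly halves each step (the derivative of $x\mapsto\sqrt{3+x^2/2}$ at $\sqrt 6$ is $\tfrac12$), so after 60 iterations `gap` is at machine-precision level.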
H: Solution of an equation in a certain field Let $F$ be a certain field. Prove or disprove the following statements: The equation $X^3=0_F$ has only one solution. The equation $X^3=1_F$ has only one solution. Suppose $F$ is finite; then the equation $X^3=1_F$ has only one solution. I'm pretty sure all of them are true but I'm not sure how to write the proof so any help would be appreciated. Note: I'm only in my first few weeks of linear algebra so I don't think any advanced solutions would help. AI: $X^3 = 0$ has only one solution. This is because fields do not have zero divisors, which follows from requiring every nonzero element to be invertible. $X^3 = 1$ might have more solutions. In $\mathbb{C}$, for example, there are three cube roots of unity. For finite fields we can, again, have more than one solution. Consider $\mathbb{F}_7$: here $1$, $2$ and $4$ are all solutions, because $2^3 = 8 \equiv 1 \pmod 7$ and $4^3 = 64 \equiv 1 \pmod 7$.
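A two-line search makes the finite-field behaviour concrete. Since the multiplicative group of $\mathbb{F}_p$ is cyclic of order $p-1$, the number of solutions of $X^3=1_F$ is $\gcd(3,p-1)$: one solution for most small primes, three whenever $3 \mid p-1$.

```python
# count the solutions of X^3 = 1 in F_p for a few primes
def cube_roots_of_unity(p):
    return [x for x in range(p) if pow(x, 3, p) == 1]

counts = {p: len(cube_roots_of_unity(p)) for p in (2, 3, 5, 7, 13)}
```

For example, $\mathbb{F}_7$ has the three cube roots $1, 2, 4$, while $\mathbb{F}_5$ has only $1$ (since $\gcd(3,4)=1$).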