H: Sufficient condition for a ring to be a product of two rings In his algebraic geometry notes, Vakil suggests the exercise (remark 3.6.3) of showing that a ring $A$ is a product $A = A_1 \times A_2$ iff $\operatorname{Spec} A$ is disconnected. His hint is to show that both conditions are equivalent to the existence of two elements $a_1, a_2 \in A$ s.t. $a_1^2 = a_1$, $a_2^2 = a_2$, $(a_1 + a_2) = 1$, and hence $a_1a_2 = 0$. Two questions: 1) Does it really follow that $a_1a_2 = 0$ from the other assumptions on $a_1$ and $a_2$? I tried deducing it but the most that I succeeded in showing was: $$1=(a_1+a_2)^2 = a_1^2 + a_2^2 + 2a_1a_2 = a_1 + a_2 +2a_1a_2$$ $$0=2(a_1a_2)$$ However, I fail to see why this implies that $a_1a_2 = 0$, since he didn't mention any assumption of $2$ being regular. 2) How does one get that $\operatorname{Spec} A$ disconnected implies the existence of such $a_1,a_2$? I reasoned the following: taking $U = V(a_1), W = V(a_2)$ to be clopen sets such that $U\cap W = \varnothing$ and $U\cup W = \operatorname{Spec} A$ we get: $$\cup_{i\in I} D(f_i) = V(a_2)$$ $$\cup_{j\in J} D(f_j) = V(a_1)$$ where the left hand side is the expression of an open set as a union of basis elements and the right hand side is the definition of a closed set. From the fact that $V(a_1) \cup V(a_2) = V(a_1a_2) = \operatorname{Spec} A$ we get that $a_1a_2$ is nilpotent, but I can't seem to get anything else.
AI: $$a_1 a_2 \;=\; a_1 a_2 + a_1 - a_1 \;=\; a_1^2 + a_1 a_2 - a_1 \;=\; a_1\left(a_1 + a_2\right) - a_1 \;=\; a_1 \cdot 1 - a_1 \;=\; 0$$
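A quick sanity check of the identity in a concrete ring with zero divisors, $\mathbb{Z}/6\mathbb{Z}$, where $3$ and $4$ are complementary idempotents; the loop at the end is just an extra illustration, not part of the proof:

```python
# complementary idempotents in Z/6Z: 3^2 = 9 = 3, 4^2 = 16 = 4, 3 + 4 = 1 (mod 6)
a1, a2, n = 3, 4, 6
assert (a1 * a1 - a1) % n == 0 and (a2 * a2 - a2) % n == 0   # idempotent
assert (a1 + a2) % n == 1                                    # sum to 1
assert (a1 * a2) % n == 0                                    # product really is 0

# the identity a * (1 - a) = a - a^2 = 0 holds for every idempotent in every Z/m
for m in range(2, 200):
    for a in range(m):
        if (a * a - a) % m == 0:
            b = (1 - a) % m
            assert (b * b - b) % m == 0 and (a * b) % m == 0
```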
H: Expanding partial derivatives Let $$\Delta=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$$ Consider the polar coordinates with $x=r\cos\theta$ and $y=r\sin\theta$. I want to show that $$\Delta=\frac{\partial^2}{\partial r^2}+\frac1r\frac\partial{\partial r}+\frac1{r^2}\frac{\partial^2}{\partial \theta^2}.$$ I use the chain rule to compute $$\frac{\partial}{\partial x} = \frac{x}{r} \frac{\partial}{\partial r} -\frac{y}{r^2} \frac{\partial}{\partial \theta}$$ and $$\frac{\partial}{\partial y} = \frac{y}{r} \frac{\partial}{\partial r} +\frac{x}{r^2} \frac{\partial}{\partial \theta}$$ So $$\Delta=\left (\frac{x}{r} \frac{\partial}{\partial r} -\frac{y}{r^2} \frac{\partial}{\partial \theta} \right )^2 + \left (\frac{y}{r} \frac{\partial}{\partial r} +\frac{x}{r^2} \frac{\partial}{\partial \theta} \right )^2$$ Expanding, I get $$\Delta=\frac{\partial^2}{\partial r^2}+\frac1{r^2}\frac{\partial^2}{\partial\theta^2}$$ Where does the term $\frac1r\frac\partial{\partial r}$ disappear? AI: You have to rewrite everything in terms of $r$ and $\theta$, so where you are leaving $x$ and $y$ behind you have to clear up. Also, these are operators, so you have to have them act on a test function $f$ -- this also makes you be a bit more careful. What you'll find is that, since $\Delta = \nabla \cdot \nabla$, you need to compute the Jacobian for the transformation, that is \begin{equation} \frac{\partial f}{\partial x} = \frac{\partial f}{\partial \theta} \frac{\partial \theta}{\partial x} + \frac{\partial f}{\partial r} \frac{\partial r}{\partial x} \end{equation} etc. This gets you how to rewrite the $\nabla$ operator in polar coordinates. Then you need to figure out what $\nabla \cdot (\nabla f)$ is to figure out what $\Delta f$ is. There is also a metric tensor way to rewrite the Laplacian, see under http://mathworld.wolfram.com/Laplacian.html for example.
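The polar formula can also be checked numerically with central finite differences, comparing both expressions of the Laplacian on a smooth test function (the test function, sample point, and step size below are arbitrary choices of mine):

```python
import math

def f(x, y):
    return x**3 * y + math.sin(x) * y**2      # arbitrary smooth test function

h = 1e-4                                       # finite-difference step

def lap_cartesian(x, y):
    # f_xx + f_yy via central second differences
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    return fxx + fyy

def g(r, t):                                   # f expressed in polar coordinates
    return f(r * math.cos(t), r * math.sin(t))

def lap_polar(r, t):
    # g_rr + (1/r) g_r + (1/r^2) g_tt
    grr = (g(r + h, t) - 2 * g(r, t) + g(r - h, t)) / h**2
    gr = (g(r + h, t) - g(r - h, t)) / (2 * h)
    gtt = (g(r, t + h) - 2 * g(r, t) + g(r, t - h)) / h**2
    return grr + gr / r + gtt / r**2

r0, t0 = 1.3, 0.7
x0, y0 = r0 * math.cos(t0), r0 * math.sin(t0)
assert abs(lap_cartesian(x0, y0) - lap_polar(r0, t0)) < 1e-3
```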
H: How to get $\sum_{k=1}^{n-1}n\binom{n-1}{k-1}y^k \bigg(\frac{1-z^k}{k}- \frac{1-z^n}{n}\bigg)$? I asked a question here and got an answer containing: $$(1+yz^n)(1+y)^{n-1} - (1+yz)^n=\sum_{k=1}^{n-1}n\binom{n-1}{k-1}y^k \bigg(\frac{1-z^k}{k}- \frac{1-z^n}{n}\bigg) \tag{1}$$and$$\frac{1-z^{n}}{n}-\frac{1-z^{n+1}}{n+1}= \frac{(z-1)^2}{n(n+1)}\bigg( \sum_{k=0}^{n-1} (k+1)z^k \bigg) \tag{2}$$ My try: Using the binomial theorem, $$(1+yz^n)(1+y)^{n-1} - (1+yz)^n=(1+yz^n)\sum_{k=0}^{n-1}\binom{n-1}{k}y^{n-1-k}-\sum_{k=0}^n\binom{n}{k}(yz)^{n-k}$$ Using this,$$\frac{1-z^{n}}{n}-\frac{1-z^{n+1}}{n+1}=(1-z)\cdot\bigg(\frac{1+z+\dots+z^{n-1}}{n}-\frac{1+z+\dots+z^{n-1}+z^n}{n+1}\bigg)$$ I can't go ahead in either case. Please help me. AI: $1.$ \begin{align*} (1+yz^n)(1+y)^{n-1} - (1+yz)^n &= (1 + yz^n) \sum_{k=0}^{n-1} \binom{n-1}{k}y^k - \sum_{k=0}^{n} \binom{n}{k}(zy)^k \\ &= \sum_{k=1}^{n-1} y^k \left(\binom{n-1}{k} + z^n \binom{n-1}{k-1} - \binom{n}{k} z^k \right )\\ &= \sum_{k=1}^{n-1} \binom{n-1}{k-1} y^k \left( \frac{n-k}{k } + z^n - \frac{n}{k} z^k \right ) \\ &= \sum_{k=1}^{n-1} \binom{n-1}{k-1} y^k \left( \frac{n}{k}(1-z^k) - (1-z^n) \right )\\ \end{align*} $2.$ $\displaystyle \sum_{k=0}^{n-1} (k+1)z^k$ is an arithmetico-geometric series and can be evaluated as $\displaystyle \frac{n z^{n+1}-n z^n-z^n+1}{(z-1)^2}$ just by differentiating both sides of $\displaystyle \sum z^k = \frac{1-z^{n+1}}{1-z}$ Also, to proceed from your step, \begin{align*} \frac{1-z^{n}}{n}-\frac{1-z^{n+1}}{n+1} &= (1-z)\cdot\left( \frac{1+z+\dots+z^{n-1}}{n}-\frac{1+z+\dots+z^{n-1}+z^n}{n+1}\right ) \\ &= \frac{(1-z)}{n(n+1)} \cdot \left( 1 + z + \dots + z^{n-1} - n z^n \right )\\ &= \frac{(1-z)}{n(n+1)} \cdot \left( \sum_{k=0}^{n-1} (k+1)z^k - \sum_{k=0}^{n-1}k z^k - \left( \sum_{k=0}^{n-1} (k+1)z^{k+1} - \sum_{k=0}^{n-1} k z^{k} \right ) \right )\\ \end{align*}
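Both identities can be verified exactly in rational arithmetic for sample values of $n$, $y$, $z$ (a quick sketch; the sample values are arbitrary):

```python
from fractions import Fraction as F
from math import comb

n, y, z = 6, F(2, 3), F(5, 7)

# identity (1)
lhs1 = (1 + y * z**n) * (1 + y)**(n - 1) - (1 + y * z)**n
rhs1 = sum(n * comb(n - 1, k - 1) * y**k * ((1 - z**k) / F(k) - (1 - z**n) / F(n))
           for k in range(1, n))
assert lhs1 == rhs1

# identity (2)
lhs2 = (1 - z**n) / F(n) - (1 - z**(n + 1)) / F(n + 1)
rhs2 = (z - 1)**2 / F(n * (n + 1)) * sum((k + 1) * z**k for k in range(n))
assert lhs2 == rhs2
```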
H: Simply connected does not imply contractible. Is there a nice counterexample in $R^2$? The standard counterexample to the claim that a simply connected space must be contractible is a sphere $S^n$, with $n > 1$, which is simply connected but not contractible. Suppose that I were interested in a counterexample in the plane - does anyone know of a subset of $R^2$ which is simply connected but not contractible? AI: Consider the topologist's sine curve $$y = \sin \bigg(\frac{1}{x}\bigg),\ 1\geq x>0$$ together with the interval $\{(0, t): |t|\leq 1\}$ and a curve joining this interval with the graph. This is simply connected but is not contractible. You may find the proof of noncontractibility in http://math.ucr.edu/~res/math205B-2012/polishcircle.pdf
H: Combinatorics error correcting code (56) * (36)^4 * (-55) + (35)(67)(-14)^2 mod 17. Find the least non-negative residue of the expression modulo the given $n$. First I just want to make sure I understand what the question wants; to do that, I give a simple case: $15 \bmod 7$. $1$ is what we are looking for, right? Then, back to the question: what is a method for efficiently finding the answer, other than using a calculator to multiply everything out and then subtracting the appropriate multiple of the modulus? By the way, no calculators allowed. AI: The calculation can be done without a calculator. Since $56=3\cdot 17+5$, we have $56\equiv 5\pmod{17}$. We have $36\equiv 2\pmod{17}$, so $36^4\equiv 2^4\equiv -1\pmod{17}$. But $(-1)(-55)=55\equiv 4\pmod{17}$. Thus the first part of our expression is $\equiv 20\equiv 3\pmod{17}$. A similar calculation shows that the second part is $\equiv 8\pmod{17}$. For $35\equiv 1\pmod{17}$, and $67\equiv 16\pmod{17}$. We have $-14\equiv 3\pmod{17}$, so $(-14)^2\equiv 9\pmod{17}$. It remains to calculate $(16)(9)$ modulo $17$. One can work directly, multiplying to get $144$, and finding the remainder. But here is a useful trick: $16\equiv -1\pmod{17}$, so $(16)(9)\equiv (-1)(9)=-9\equiv 8\pmod{17}$. Now add. The sum is $\equiv 11\pmod{17}$.
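The hand computation above can be confirmed in a couple of lines (Python's `%` already returns the least non-negative residue):

```python
# whole expression and the two parts computed in the answer above
assert (56 * 36**4 * (-55)) % 17 == 3       # first part
assert (35 * 67 * (-14)**2) % 17 == 8       # second part
assert (56 * 36**4 * (-55) + 35 * 67 * (-14)**2) % 17 == 11  # total: 3 + 8 = 11
```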
H: probability problem (3 urns) There are 3 urns. Urn 1 has 2 black and 3 white balls. Urn 2 has 1 black and 2 white balls. Urn 3 has 2 black and 1 white ball. A person who is blindfolded picks a ball from urn 1 and puts it into urn 2, picks a ball from urn 2 and puts it into urn 3, and finally picks a ball from urn 3 and puts it into urn 1. What is the probability that at the end of round 1 all the urns have exactly the same composition as they originally started off with? The answer is 3/8. Please tell me how to reach the answer. AI: What are the possible ways that the urns could end up in the same configuration? Say he chooses a white ball from urn 1 (this happens with probability $\frac{3}{5}$). He then places it into urn 2 and then removes a random ball from urn 2. In order for the configuration to stay the same, the ball he removes must be white. This happens with probability $\frac{3}{4}$ because there are now 3 white balls and one black ball in urn 2. Lastly, he places the white ball in urn three, which then has two of each color ball. The probability that he then removes a white ball is $\frac{1}{2}$. Since these events are all independent, the probability that they all happen in succession is $\frac{3}{5}\cdot\frac{3}{4}\cdot\frac{1}{2}=\frac{9}{40}$. To finish the problem, do the same analysis supposing he removes a black ball from urn 1 and combine the answers.
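The full analysis (both the all-white and all-black paths, plus verification that no mixed path restores the configuration) can be done by exhaustive enumeration with exact fractions; a sketch:

```python
from fractions import Fraction as F
from itertools import product

START = ((2, 3), (1, 2), (2, 1))  # (black, white) in urns 1, 2, 3

prob_same = F(0)
# choose a color (0 = black, 1 = white) for each of the three transfers
for colors in product((0, 1), repeat=3):
    urns = [list(u) for u in START]
    p = F(1)
    # transfers: urn 1 -> 2, urn 2 -> 3, urn 3 -> 1
    for (src, dst), c in zip(((0, 1), (1, 2), (2, 0)), colors):
        count, total = urns[src][c], sum(urns[src])
        if count == 0:            # impossible draw
            p = F(0)
            break
        p *= F(count, total)
        urns[src][c] -= 1
        urns[dst][c] += 1
    if p and [tuple(u) for u in urns] == list(START):
        prob_same += p

assert prob_same == F(3, 8)       # 9/40 (all white) + 6/40 (all black)
```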
H: When written in decimal notation, every square number has at most 1000 digits that are not 0 or 1. True or false? When written in decimal notation, every square number has at most $1000$ digits that are not $0$ or $1$. True or false? This question is from an admissions quiz, so no calculators should be used. AI: This is how I did it, but I'm sure there are better ways. Let $x = 10^n - 1$ with $n\in\mathbb{Z}$, then $x^2 = 10^{2n} - 2\times10^n + 1 = 999\cdots9998000\cdots001 $ Let $n>1001$ so we have more than $1000$ nines, then we've found a square number with more than $1000$ digits that are not $0$ or $1$, so it's false.
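The construction is easy to check directly for, say, $n = 1002$: the square $(10^n - 1)^2$ has $n-1$ nines and one eight, so $n$ digits outside $\{0,1\}$.

```python
n = 1002
s = str((10**n - 1) ** 2)              # 999...98000...01
bad = sum(d not in "01" for d in s)    # digits that are not 0 or 1
assert s.count("9") == n - 1 and s.count("8") == 1
assert bad == n and bad > 1000         # so the claim is false
```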
H: Mutual tangent lines Find all points where the curves $f(x) = x^3-3x+4$ and $g(x) = 3x^2-3x$ share the same tangent line. Graphing them I see that they look like they share a tangent line at $x=2$. I got the derivatives of both and set them equal to each other and got $x=0$ and $x=2$. After plugging $2$ back in I got $6$. So the point is $(2,6)$. Is that correct? AI: $$f' = g ' \iff 3x^2 - 3 = 6x - 3 \iff x^2 -2x = 0 \iff x(x-2) = 0 \iff x=0,2 $$ Hence, they share the same tangent line at $$ (2, 6 )$$ as you said. Notice that at $0$ the functions are not equal, so they cannot share a tangent line at $0$.
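A quick numeric confirmation: at $x=2$ the curves share both the point and the slope, while at $x=0$ only the slopes agree.

```python
f = lambda x: x**3 - 3*x + 4
g = lambda x: 3*x**2 - 3*x
fp = lambda x: 3*x**2 - 3            # f'
gp = lambda x: 6*x - 3               # g'

assert f(2) == g(2) == 6 and fp(2) == gp(2) == 9    # common point, common slope
assert fp(0) == gp(0) == -3 and f(0) != g(0)        # slopes match at 0, values do not
```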
H: Affine sets and affine hull Mathematically an affine hull can be expressed as $ Aff[C] = \{\theta_1x_1 + \theta_2x_2 .... \theta_nx_n| x_i \in C \ \ \sum_{i=1}^{n}\theta_i = 1 \}$ Intuitively can anyone explain what this means? Also, what is a 'hull'? AI: A little work shows that if $a,b \in \operatorname{aff} C$, then for all $\lambda \in \mathbb{R}$, we have $\lambda a + (1-\lambda) b \in \operatorname{aff} C$. In fact, this is one definition of an affine set. You can think of it as relaxation of convexity (for a convex set, $\lambda \in [0,1]$). Suppose $c \in C$. Let $L = \{ a -c | a \in \operatorname{aff} C \}$. Suppose $x \in L$, then $x+c \in \operatorname{aff} C$. Then $\lambda (x+c) + (1-\lambda) c = \lambda x +c \in \operatorname{aff} C$ for all $\lambda$, and so $\lambda x \in L$ for all $\lambda$. Suppose $x,y \in L$, then $x+c, y+c \in \operatorname{aff} C$, and so $\frac{1}{2}((x+c)+(y+c)) = \frac{1}{2}(x+y) +c \in \operatorname{aff} C$. Hence $\frac{1}{2}(x+y) \in L$, and the previous result shows that $2 \frac{1}{2}(x+y) = x+y \in L$. This shows that $L$ is a subspace. Hence we can write $\operatorname{aff} C = L + \{c\}$, so $\operatorname{aff} C$ is basically a translate of a linear subspace. The set $\operatorname{aff} C$ is the smallest affine set containing $C$, so it is the smallest affine 'container' of $C$. I presume this is the origin of the term 'hull', much as the hull of a ship contains the 'stuff' inside the ship. The term linear hull is used for smallest subspace containing a set, and the term convex hull is used for smallest convex set containing a set.
H: Proving A and Not A are Dependent Events How do I prove that an event and its complement are dependent on each other? Clearly both outcomes cannot happen, but I don't know how to formally prove it. AI: Not quite always. Two events $A$ and $B$ are independent if and only if $\Pr(A\cap B)=\Pr(A)\Pr(B)$. This automatically holds if $\Pr(A)=0$ or $\Pr(B)=0$. Suppose now that $B$ is the complement of $A$. Let $p=\Pr(A)$. Then if $p=0$, or $p=1$, then $A$ and $B$ are independent. We show that this is the only situation in which $A$ and its complement $B$ can be independent. For if $B$ is the complement of $A$, then $\Pr(A\cap B)=0$. Thus if $A$ and $B$ are independent, then $p(1-p)=0$. This forces $p=0$ or $p=1$.
H: Let (X,d) be a metric space. (X,d) is discrete iff X∩X′=∅ Let $(X,d)$ be a metric space. Prove that $(X,d)$ is discrete if and only if $X\cap X'=\varnothing$, where $X'$ is the set of all limit points of $X$. AI: HINT: $X\cap A=A$ for every subset $A$ of $X$, so what is $X'$?
H: Question from "An introduction to measure theory" by Terence Tao If $(x_α)_{α \in A}$ is a collection of numbers $x_α ∈ [0, +\infty]$ such that $\sum_{α∈A}{x_α} < \infty$, show that $x_α = 0$ for all but at most countably many $α \in A$, even if $A$ itself is uncountable. AI: For each natural $n$, consider the set $A_n = \{\alpha \in A : x_\alpha > 1/n\}$. Each $A_n$ must be finite: if it were infinite, a sum over any $k$ of its indices would exceed $k/n$, contradicting $\sum_{\alpha} x_\alpha < \infty$. Now $\{\alpha : x_\alpha > 0\} = \bigcup_{n} A_n$ is a countable union of finite sets, hence at most countable, and every other $x_\alpha$ is zero.
H: Finding the singular value(s) of a given matrix without SVD. I am struggling on a problem that asks to find the singular value(s) that are unequal to 0 in the following matrix: $M = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \end{bmatrix}$ I am not supposed to do a singular value decomposition on the matrix. The given answer shows how you should rewrite the matrix as $ \sqrt{20} \begin{bmatrix} \frac{1}{2}\\ \frac{1}{2}\\ \frac{1}{2}\\ \frac{1}{2} \end{bmatrix} \begin{bmatrix} \frac{1}{\sqrt{5}} & \frac{1}{\sqrt{5}} & \frac{1}{\sqrt{5}} & \frac{1}{\sqrt{5}} & \frac{1}{\sqrt{5}} \end{bmatrix} $, and that therefore the singular value must be $\sqrt{20}$. I see how the two vectors are orthonormal but other than that I don't really see why $\sqrt{20}$ must be a singular value here. It is also unclear to me why this is the only singular value unequal to zero. Any help would be much appreciated. AI: For any $n$, let $e=(1,...,1)^T$, and let $x_2,...,x_n$ be an orthonormal basis of $\operatorname{sp}\{e\}^\bot$. Let $x_1 = \frac{1}{\|e\|} e$. Then $x_1,...,x_n$ is an orthonormal basis for $\mathbb{R}^n$. Let $U_n = \begin{bmatrix} x_1 & \cdots & x_n \end{bmatrix}$. Note that $U_4^* M U_5 = \begin{bmatrix} \begin{matrix} \sqrt{4} & \cdots & \sqrt{4} \end{matrix} \\ 0 \end{bmatrix} U_5 = \Sigma = \begin{bmatrix} \sqrt{20} & 0 & \cdots & 0 \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots\\ 0 & 0 & \cdots & 0 \end{bmatrix} $ and so $M = U_4 \Sigma U_5^*$. $U_4,U_5$ are unitary, $\Sigma$ is diagonal with non-negative entries, hence this is a singular value decomposition.
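Another way to see it without computing an SVD: the nonzero singular values of $M$ are the square roots of the nonzero eigenvalues of $MM^*$, and here $MM^*$ is the $4\times 4$ all-fives matrix, a rank-one PSD matrix with trace $20$. Its only nonzero eigenvalue is $20$, so $\sqrt{20}$ is the only nonzero singular value. A small pure-Python sketch:

```python
M = [[1] * 5 for _ in range(4)]
# Gram matrix G = M M^T: every entry is 5
G = [[sum(M[i][k] * M[j][k] for k in range(5)) for j in range(4)] for i in range(4)]
assert all(G[i][j] == 5 for i in range(4) for j in range(4))

ones = [1, 1, 1, 1]
Gv = [sum(G[i][j] * ones[j] for j in range(4)) for i in range(4)]
assert Gv == [20, 20, 20, 20]                 # eigenvector with eigenvalue 20 = (sqrt 20)^2
assert sum(G[i][i] for i in range(4)) == 20   # trace 20, so the remaining eigenvalues are 0
```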
H: Quotient of monotone functions is monotone? Suppose $f,g$ are monotone( say increasing) and differentiable and nonnegative. Both go from $\mathbb{R}^{\geq 1} \to \mathbb{R} $ Is $\frac{f}{g}$ also monotone ? AI: Counterexample: Let $f(x) = e^x+5, g(x) = x$. $$\frac{f}{g}(x) = \frac{e^x+5}{x}$$ Can you tell me why $\displaystyle{\frac{f}{g}}$ is not necessarily monotone?
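Numerically, $h = f/g$ with this choice first decreases and then increases on $[1,\infty)$, so it is not monotone:

```python
import math

h = lambda x: (math.exp(x) + 5) / x   # the counterexample quotient f/g
assert h(2) < h(1)                    # decreasing near 1 ...
assert h(5) > h(2)                    # ... then increasing: not monotone
```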
H: Integral coordinates Proof Nine distinct points with all coordinates integral are selected in the space. Prove that the line segment with ends at certain two of these points contains in its interior a point with all coordinates integral. AI: HINT: Look at the coordinates modulo $2$ and apply the pigeonhole principle. The point in the interior of the segment will actually be its midpoint.
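The hint in action on nine sample lattice points (the points are an arbitrary illustration of mine): there are only $2^3 = 8$ parity classes, so two of the nine points must collide, and their midpoint is then integral.

```python
pts = [(0, 0, 0), (1, 1, 1), (2, 0, 1), (0, 2, 2), (1, 0, 3),
       (3, 3, 0), (2, 2, 2), (1, 2, 0), (3, 1, 2)]   # 9 distinct lattice points

classes = {}
pair = None
for p in pts:
    key = tuple(c % 2 for c in p)        # coordinates modulo 2: 8 possible classes
    if key in classes:
        pair = (classes[key], p)         # pigeonhole: a collision must occur
        break
    classes[key] = p

assert pair is not None
a, b = pair
mid = tuple((u + v) / 2 for u, v in zip(a, b))
assert all(m == int(m) for m in mid)     # the midpoint is a lattice point
```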
H: Pigeonhole Principle Proof 2004 flies are inside a cube of side 1. Show that some 3 of them are within a sphere of radius 1/11. I am not sure how to begin the proof especially since we are asked to work on a sphere rather than the given cube. AI: I would try and see if I can cover the cube with $1001$ balls of radius $1/11$. If I could do that, then if more than $2002$ flies were inside the cube, each fly would be in one of those little balls, and $3$ of them would have to be in the same ball. Hmm. I can divide the cube into $1000$ little cubes of side $1/10$. I wonder what radius of sphere it takes to contain one of those little cubes? The distance between opposite corners of the unit cube is $\sqrt3$. For the little cubes, the corner-to-corner distance is $\sqrt3/10$, the center-to-corner distance is $\sqrt3/20$, so each little cube is contained in a sphere of radius $\sqrt3/20$. I wonder how that compares with $1/11$?
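The two numeric facts the argument rests on check out:

```python
import math

# a cube of side 1/10 fits inside a ball of radius sqrt(3)/20, and sqrt(3)/20 < 1/11
assert math.sqrt(3) / 20 < 1 / 11
# 2004 flies in 1000 small cubes: some cube holds at least ceil(2004/1000) = 3 flies
assert -(-2004 // 1000) == 3
```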
H: What will be the closed formula for the following recursive function? What will be the closed formula for the following recursive function? F(n) = F(n/2) + 1 if n is even F(n) = F(n-1) + 1 if n is odd F(1) = 0 How do we generate a closed formula for such recursive functions? Thanks. AI: $\newcommand{\+}{^{\dagger}}% \newcommand{\angles}[1]{\left\langle #1 \right\rangle}% \newcommand{\braces}[1]{\left\lbrace #1 \right\rbrace}% \newcommand{\bracks}[1]{\left\lbrack #1 \right\rbrack}% \newcommand{\dd}{{\rm d}}% \newcommand{\isdiv}{\,\left.\right\vert\,}% \newcommand{\ds}[1]{\displaystyle{#1}}% \newcommand{\equalby}[1]{{#1 \atop {= \atop \vphantom{\huge A}}}}% \newcommand{\expo}[1]{\,{\rm e}^{#1}\,}% \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,}% \newcommand{\ic}{{\rm i}}% \newcommand{\imp}{\Longrightarrow}% \newcommand{\ket}[1]{\left\vert #1\right\rangle}% \newcommand{\pars}[1]{\left( #1 \right)}% \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}}% \newcommand{\root}[2][]{\,\sqrt[#1]{\,#2\,}\,}% \newcommand{\sech}{\,{\rm sech}}% \newcommand{\sgn}{\,{\rm sgn}}% \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}}% \newcommand{\verts}[1]{\left\vert #1 \right\vert}% \newcommand{\yy}{\Longleftrightarrow}$ $\large\tt Hint:$ $\ds{{\cal F}\pars{z} \equiv \sum_{n = 1}^{\infty}{\rm F}\pars{n}\,z^{n}\,,\qquad z \in {\mathbb C}\,,\quad\verts{z} < 1}$. 
\begin{align} {\cal F}\pars{z} &= \sum_{n = 0}^{\infty}{\rm F}\pars{2n + 1}\,z^{2n + 1} + \sum_{n = 0}^{\infty}{\rm F}\pars{2n + 2}\,z^{2n + 2} \\[3mm]&= {\rm F}\pars{1}z + \sum_{n = 1}^{\infty}\bracks{{\rm F}\pars{2n} + 1}\,z^{2n + 1} + \sum_{n = 0}^{\infty}{\rm F}\pars{2n + 2}\,z^{2n + 2} \\[3mm]&= \sum_{n = 0}^{\infty}\bracks{{\rm F}\pars{2n + 2} + 1}\,z^{2n + 3} + \sum_{n = 0}^{\infty}{\rm F}\pars{2n + 2}\,z^{2n + 2} \\[3mm]&= \pars{z + 1}\sum_{n = 0}^{\infty}{\rm F}\pars{2n + 2}\,z^{2n + 2} + {z^{3} \over 1 - z^{2}} = \pars{z + 1}\sum_{n = 0}^{\infty}\bracks{{\rm F}\pars{n + 1} + 1}\,z^{2n + 2} + {z^{3} \over 1 - z^{2}} \\[3mm]&= \pars{z + 1}\sum_{n = 1}^{\infty}{\rm F}\pars{n}\,z^{2n} + \pars{z + 1}\,{z^{2} \over 1 - z^{2}} + {z^{3} \over 1 - z^{2}} \end{align} $$ {\cal F}\pars{z} = \pars{z + 1}{\cal F}\pars{z^{2}} + {z^{2} \over 1 - z} + {z^{3} \over 1 - z^{2}} $$
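Independently of the generating-function route, one can guess a closed form directly: each even step strips one binary digit and each odd step clears one trailing 1-bit, which suggests $F(n) = \lfloor\log_2 n\rfloor + s_2(n) - 1$, with $s_2(n)$ the number of 1-bits of $n$. This is my conjecture, checked against the recursion below, not something stated in the answer above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def F(n):
    if n == 1:
        return 0
    return F(n // 2) + 1 if n % 2 == 0 else F(n - 1) + 1

def closed(n):
    # floor(log2 n) halving steps + (number of 1-bits - 1) decrement steps
    return (n.bit_length() - 1) + bin(n).count("1") - 1

assert all(F(n) == closed(n) for n in range(1, 10000))
```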
H: Algebraic Basis vs Hilbert basis I am confused between algebraic basis and Hilbert basis. How do they differ exactly? Can you give me examples (possibly in infinite dimensions) of when they are the same and when they are not? Thanks in advance AI: A basis $B$ of a vector space $V$ allows you to express every vector as a finite sum of vectors of that basis. That is: $V=\mathrm{span}(B)$. In the case of a Hilbert basis, every vector is expressed as a (possibly) infinite sum of vectors of the Hilbert basis, and that is: $V=\overline{\mathrm{span}(B)}$. So in that sense a Hilbert basis is not a basis. For example, the standard unit vectors $e_n$ form a Hilbert basis of $\ell^2$ but not an algebraic (Hamel) basis: the element $(1,\frac12,\frac13,\dots)\in\ell^2$ is the convergent infinite sum $\sum_n \frac1n e_n$ but not a finite linear combination of the $e_n$. In finite dimensions the two notions coincide, since every subspace is closed.
H: A finite set always has a maximum and a minimum. I am pretty confident that this statement is true. However, I am not sure how to prove it. Any hints/ideas/answers would be appreciated. AI: Let $S = \{s_1, \ldots,s_n\}$ be a nonempty finite set of size $n > 0$. We will show by induction on $n \in \mathbb N$ that there exist some $m,M \in S$ such that for all $s \in S$, we have that $m \leq s \leq M$. Base Case: For $n=1$, we have $S = \{s_1\}$, so taking $m = s_1$ and $M=s_1$ trivially satisfies the required condition. Induction Hypothesis: Assume that the claim holds for $n=k$, where $k \geq 1$. It remains to prove that the claim holds true for $n = k+1$. To this end, choose any set $S$ with $k+1$ elements, say $S = \{s_1 ,\ldots,s_k,s_{k+1}\}$. Now by the induction hypothesis, the subset: $$ S' = S \setminus \{s_{k+1}\} = \{s_1 ,\ldots,s_k\} $$ has a minimum element and a maximum element. That is, we know that there exists some $m',M' \in S'$ such that for all $s' \in S'$, we have that $m' \leq s' \leq M'$. Now observe that $s_{k+1}$ must fall under $1$ of $3$ cases: Case 1: Suppose that $s_{k+1} < m'$. Then take $m = s_{k+1}$ and $M=M'$. To see why this works, observe that any element in $S$ is either $s_{k+1}$ or some $s' \in S'$, and: $$ m = s_{k+1} < m' \leq s' \leq M' =M $$ Case 2: Suppose that $m' \leq s_{k+1} \leq M'$. Then take $m = m'$ and $M=M'$. To see why this works, observe that any element in $S$ is either $s_{k+1}$ or some $s' \in S'$, and: $$ m = m' \leq s_{k+1} \leq M' = M $$ $$ m = m' \leq s' \leq M' = M $$ Case 3: Suppose that $s_{k+1} > M'$. Then take $m =m'$ and $M=s_{k+1}$. To see why this works, observe that any element in $S$ is either $s_{k+1}$ or some $s' \in S'$, and: $$ m = m' \leq s' \leq M' < s_{k+1} = M $$ Hence, we have shown that $S$ has a minimum and maximum element, as desired.
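The induction above is exactly the standard one-pass fold used in code; a small sketch mirroring the three cases:

```python
def minmax(seq):
    it = iter(seq)
    m = M = next(it)          # base case: a singleton has min = max
    for x in it:              # inductive step: the three cases of the proof
        if x < m:
            m = x             # case 1: new element below the old minimum
        elif x > M:
            M = x             # case 3: new element above the old maximum
        # case 2: m <= x <= M needs no update
    return m, M

assert minmax([3, 1, 4, 1, 5, 9, 2, 6]) == (1, 9)
```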
H: Good Pairs in Algebraic Topology Hatcher’s book says that $ \left( \mathbb{D}^{n},\mathbb{S}^{n - 1} \right) $ is a good pair; that is, there exists an open neighborhood $ V $ of $ \mathbb{S}^{n - 1} $ in $ \mathbb{D}^{n} $ that deformation retracts onto $ \mathbb{S}^{n - 1} $. What is the open neighborhood $ V $ in this case? Is it an annulus (without its boundary, of course)? Thanks! AI: Take $V=\{x\in D^n: \|x\|\not= 0\}$. Then $V$ is an open neighborhood of $S^{n-1}$ in $D^n$ that deformation retracts onto $S^{n-1}$ (can you see why?).
H: Does every tree have an upper portion that is a non-empty chain? Given a poset $P$, call $A \subseteq P$ an upper portion iff $A^c < A$, by which I just mean that for all $a \in A^c$ and all $b \in A$ we have that $a < b$. Then every upper portion is upward closed. Proof. Suppose for a contradiction that $a \in A$ and $a \leq b$, but not $b \in A$. Then $b \in A^c$, so $b < a$, a contradiction. Now by tree (Warning! Non-standard definition) let us mean: a directed poset with the property that the upward closure of any singleton is a chain. By the upward closure of $\{x\}$ I just mean $\{y \mid y≥x\}$. Question. Does every tree have an upper portion that is a non-empty chain? AI: No. Take an infinite ascending chain $c_0 < c_1 < c_2 < \dots$ together with extra elements $a_0, a_1, a_2, \dots$, where the only order relations involving $a_i$ are $a_i < c_j$ for $j > i$:

    ⋮
    x
    |\
    x x
    |\
    x x
    |\
    x x

(The left column is the chain, read upward; each right-hand x is one of the $a_i$.) Upward closures of singletons are chains here, and the poset is directed, so this is a tree in the above sense. But no non-empty upper portion $A$ is a chain: since $A$ is non-empty and upward closed, it contains some $c_j$; then $a_j \in A$ as well, for otherwise $a_j \in A^c$ would force $a_j < c_j$, which is false; and $a_j$, $c_j$ are incomparable.
H: Let $E$ be a Banach space, prove that the sum of two closed subspaces is closed if one is finite dimensional Let $E$ be a Banach space and let $S$ and $T$ be closed subspaces, with dim$\space T<\infty$. Prove that $S+T$ is closed. To prove that $S+T$ is closed I have to show that for any limit point $x$ of $S+T$, $x \in S+T$. So let $x$ be a limit point that subspace. Then, there is a sequence $\{x_n\}_{n \in \mathbb N} \subset S+T$ : $x_n \rightarrow x$. $x_n \in S+T \space \forall \space n \in \mathbb N$. This means that $x_n=s_n+t_n$ with $s_n \in S$ and $t_n \in T$ $\forall \space n \in \mathbb N$. Well, I couldn't go further than this. $\{s_n\}_{n \in \mathbb N}$ and $\{t_n\}_{n \in \mathbb N}$ are sequences in $S$ and $T$ respectively. I would like to prove that both of them converge to some points $s$ and $t$, for if this is the case, the hypothesis would assure $s \in S$ and $t \in T$. Then, I would have to prove that $x=lim x_n=lim s_n+t_n=lim s_n+limt_n=s+t$ and from all the previous steps I would conclude $x \in S+T$. $$$$Maybe I could start by proving that $\{t_n\}_{n \in \mathbb N}$ is convergent. But I don't see why this is true, I've tried to prove that $\{t_n\}_{n \in \mathbb N}$ is a Cauchy sequence (in a Banach space this would imply that the sequence converges) but I couldn't. AI: One way to do this is as follows : $E/S$ is a Banach space, and the image of $T$, $\pi(T)$, in the quotient is a finite dimensional subspace of $E/S$. Hence, $\pi(T)$ is closed, and so $$ S+T = \pi^{-1}(\pi(T)) $$ is closed in $E$ Added : Here is another approach - by induction on $dim(T)$, we can assume without loss of generality that $dim(T) = 1$. 
So let $z \in T$ be non-zero, so we are now looking at $$ S+T = \{s + \alpha z : s \in S, \alpha \in \mathbb{K}\} $$ By the Hahn-Banach theorem, there is a continuous linear functional on $E$ such that $$ f(s) = 0 \quad\forall s\in S, \text{ and } f(z) =1 $$ So if $x_n = s_n + \alpha_nz \to x$, then $$ \alpha_n = f(x_n) \to f(x) $$ Hence, $$ s_n = x_n - \alpha_nz\to x-f(x)z $$ Since $S$ is closed, $s:= x-f(x)z\in S$, and hence $$ x = s+f(x)z \in S+T $$
H: Generator of a subgroup of a cyclic group Let $G$ be a cyclic group, and let $x \in G$ be its generator such that $|x| > 1$. Suppose $H$ is a nontrivial subgroup of $G$. Prove that if $m$ is the minimum positive integer such that $x^m \in H$, then $x^m$ generates $H$. My try: Suppose $|x| = n > 1 $. We want to show that $|x^m| = |H|$. Suppose to the contrary that $|x^m| < |H|$. Since $x^n = e \implies x^{mn} = e \implies (x^m)^n = e $. So $n $ divides $m$. By division algorithm, we can take $q,r$ such that $|H| = mq + r $ where $0 \leq r < m$. Therefore $$ x^r = x^{|H| -mq}= x^{|H|}\implies r = |H|$$ But, can we conclude then that $x^m$ is not in $H$ so that we get a contradiction? AI: Because $x$ generates $G$, all elements of $H$ are powers of $x$. Let $h\in H$. Then $h=x^i$ for some $i$. Writing $i=mq+r$ with $0\le r < m$ we get $r=0$ by minimality of $m$. Hence all elements of $H$ are powers of $x^m$ and so $x^m$ generates $H$.
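A concrete instance in the cyclic group $\mathbb{Z}_{12}$, written additively so that $x^m$ becomes $m\cdot x$ with generator $x=1$ (the subgroup chosen below is just an example):

```python
n = 12
H = {0, 4, 8}                                  # a subgroup of Z_12
m = min(h for h in H if h > 0)                 # least positive m with m*1 in H
assert {(m * k) % n for k in range(n)} == H    # m generates all of H
```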
H: Does there exist a function that has the following o-notation properties? Let $p>0$ be any positive real. Does there exist a function $f(x)$ which is $o(|x|^p)$ at $x=0$ yet not $O(|x|^{p+\varepsilon})$ for any $\varepsilon>0$? AI: Take $f(0) = 0$ and $f(x)=x^{p+1/\sqrt{n}}$ on $[1/2^n, 1/2^{n-1})$. The idea is to have something like $f(x) = x^{p+\alpha(x)}$ with $\alpha(x) \to 0^+$ as $x \to 0$, so that $\alpha(x)$ eventually drops below any fixed $\varepsilon$. A single closed formula in the same spirit is $f(x) = x^{p+1/\sqrt{-\log x}}$.
H: find the limit of a sequence I need to find the limit: $\mathop {\lim }\limits_{n \to \infty } {1 \over n}\left[ {{{(a + {1 \over n})}^2} + {{(a + {2 \over n})}^2} + ... + {{(a + {{n - 1} \over n})}^2}} \right]$ Any ideas here? I've tried to use the squeeze theorem, but with no luck. AI: Let $u_n={1 \over n}\left[ {{{(a + {1 \over n})}^2} + {{(a + {2 \over n})}^2} + ... + {{(a + {{n - 1} \over n})}^2}} \right]$. To elaborate on Daniel Fischer's answer: the Riemann sum approach yields $u_n=\sum_{k=1}^{n-1} \frac{1}{n}f(\frac{k}{n})$ where $f(x)=(a+x)^2$, so $(u_n)$ converges to $\int_0^1 f(x)dx=\frac{(a+1)^3-a^3}{3}=\frac{3a^2+3a+1}{3}$. Also, the expansion approach yields $$ \begin{array}{lcl} u_n &=& \frac{1}{n}. \sum_{k=1}^{n-1} (a+\frac{k}{n})^2 \\ &=& \frac{1}{n}. \sum_{k=1}^{n-1} \big(a^2+2\frac{ak}{n}+\frac{k^2}{n^2}\big) \\ &=& (\frac{n-1}{n})a^2+\frac{2a}{n^2}\big(\sum_{k=1}^{n-1}k\big) +\frac{1}{n^3}\big(\sum_{k=1}^{n-1}k^2\big) \\ &=& (\frac{n-1}{n})a^2+\frac{a(n-1)}{n} +\frac{1}{n^3}\big(\frac{n(n-1)(2n-1)}{6}\big) \end{array} $$ And we see again that $(u_n)$ converges to $\frac{3a^2+3a+1}{3}$.
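A quick numeric check of the limit (the Riemann-sum error is $O(1/n)$, so a large $n$ gets close):

```python
def u(n, a):
    # the partial sum u_n from the question
    return sum((a + k / n) ** 2 for k in range(1, n)) / n

a = 2.0
limit = (3 * a * a + 3 * a + 1) / 3
assert abs(u(10**5, a) - limit) < 1e-3
```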
H: Prove that the function is differentiable at (0,0) The function is shown below. It's not a very complicated function. $$ f(x,y)=\sqrt{9-x^2-y^2}$$ I was wondering: is it sufficient to say that since $f_x(0,0)$ and $f_y(0,0)$ are both zero and since $f_x$ and $f_y$ are continuous at $(0,0)$ (by finding the limit as it approaches $(0,0)$ for both... is there a faster way?), therefore $f(x,y)$ is differentiable at $(0,0)$? Are there any other ways? While browsing around other questions, this limit came up twice: $\lim_{(x,y)\to (0,0)}\frac{f(x,y)}{\sqrt{x^2+y^2}}=0$. Is it applicable to all functions? I doubt it because the denominator is specific to the question. I would appreciate alternative methods and any help. AI: Maybe you're not familiar with the definition of differentiability in higher dimensions: see Wikipedia, for example. The idea is this: in one-variable calculus, the derivative of a function at a point gives the tangent line to the graph of the function at that point. With more variables, the derivative (the linear function) gives the tangent line/plane/space to the graph of the function at the point. Now, to the problem. You have to show that the differential of $f$ at $(0,0)$ is the zero linear function, that is, $$\lim_{(x,y)\rightarrow (0,0)}\dfrac{f((0,0)+(x,y))-f(0,0)-\mathbf{0}(x,y)}{\Vert (x,y)\Vert}=\lim_{(x,y)\rightarrow(0,0)}\dfrac{\sqrt{9-x^2-y^2}-3}{\Vert (x,y)\Vert}=0$$ You can calculate this limit directly, and attain the result you want (you can use the fact that $\lim_{h\rightarrow 0}\dfrac{\sqrt{9-h^2}-3}{h}=0$, as in one-variable calculus!).
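Since $f$ depends only on $r = \Vert(x,y)\Vert$, the quotient in the limit depends only on $r$ and behaves like $-r/6$, as a numeric probe confirms:

```python
import math

for r in (1e-2, 1e-4, 1e-6):
    q = (math.sqrt(9 - r * r) - 3) / r   # (f(x,y) - f(0,0)) / ||(x,y)||
    assert abs(q) < r                    # roughly r/6, so the limit is 0
```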
H: A Milnor Differential Topology Exercise If $m<p$, show that every map $f:M^m\longrightarrow\ S^p$ is homotopic to a constant, where $M^m$ is a smooth manifold of dimension $m$. I tried to show that $M^m$ is contractible or convex, but I couldn't. Any idea would be helpful. AI: Expanding on John's hint: we may assume $f$ is not onto (first homotope $f$ to a smooth map; by Sard's theorem a smooth map cannot be onto when $m<p$, since its image has measure zero in $S^p$). So there is an $x_0\in S^p\setminus f(M)$, and $f$ is actually a map from $M$ into $S^p\setminus\{x_0\}$. Using that $S^p\setminus\{x_0\}\cong \mathbb R^p$ (here we are using stereographic projection), $f$ is essentially a map from $M$ into $\mathbb R^p$; since $\mathbb R^p$ is contractible, $f$ is homotopic to a constant.
H: The group of rigid motions of an icosahedron. Prove that group of rigid motions of icosahedron is isomorphic to $A_{5}$. Can you help me to prove this? What I have done is shown that the order of the group of rigid motions of icosahedron is 60, which is same as $A_5$. AI: You can do this by considering the dodecahedron, which is dual to the icosahedron and hence has the same group of symmetries. There are five cubes that fit inside a dodecahedron in such a way that the rotational symmetries permute these cubes with $3$-cycles. But $3$-cycles in $\mathcal{S}_{5}$ generate $\mathcal{A}_{5}$. I hope that gives you some strong clues :)
H: How to determine the side lengths of an irregular 11-sided polygon, given that 10 sides are equal and the polygon must be inscribed in a circle I am stuck on a problem. I have to draw an 11-sided polygon. 10 sides of the polygon must be equal, while one side must be longer (at least twice as long as any other side). To make things more complicated, it must be inscribed in a circle. Is there any way to determine at least a close approximation of the side lengths? AI: A hint: Inscribe a long side to your liking, compute the corresponding central angle $ \alpha$, and divide $360^\circ-\alpha$ into $10$ equal pieces.
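Following the hint numerically: if the long side subtends central angle $\alpha$, each of the ten equal sides subtends $(2\pi-\alpha)/10$, and a chord subtending angle $\theta$ on a circle of radius $R$ has length $2R\sin(\theta/2)$. A bisection sketch that makes the long side exactly twice the short one (the function and variable names are my own; side lengths scale linearly with $R$):

```python
import math

R = 1.0  # circle radius

def side_lengths(alpha):
    beta = (2 * math.pi - alpha) / 10           # central angle of each short side
    return 2 * R * math.sin(alpha / 2), 2 * R * math.sin(beta / 2)

# find alpha with long = 2 * short; long - 2*short increases with alpha
lo, hi = 1e-6, math.pi
for _ in range(200):
    mid = (lo + hi) / 2
    long_s, short_s = side_lengths(mid)
    if long_s < 2 * short_s:
        lo = mid
    else:
        hi = mid

alpha = (lo + hi) / 2
long_s, short_s = side_lengths(alpha)
assert abs(long_s - 2 * short_s) < 1e-9         # long side is twice each short side
```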
H: $m \in \{2,6,42,1806,...\} $ - a problem of sum-of-$m$'th powers modulo $m$ (continuing the work for an answer for a question here in MSE and also in MO) I'm (re-)viewing the function $$ f(m) = \sum_{k=0}^{m-1} k^m $$ considering its residue modulo $m$: $$ r(m) \equiv f(m) \pmod m $$ It is easy to see why, for odd $m$, $$ r(m) = 0 \qquad \text{ for odd } m$$ It is not so easy for even $m$. I tried to determine for which $m$ we get $$ r(m) = 1 $$ It seems highly nontrivial; after some brute force it seems this is very rarely the case, seemingly for $m_k$ where $m \in \{2,6,42,1806,?? \} $ but interestingly not for the next $m=3263442 $ when we follow that pattern. The recursive pattern says $$ \begin{array} {} m_0 &= 2 & \to & r(m_0)=1 & 2 \in \mathbb P\\ m_1=m_0 \cdot (m_0+1) &= 2 \cdot 3 & \to & r(m_1)=1 & 3 \in \mathbb P\\ m_2=m_1 \cdot (m_1+1) &= 2\cdot 3 \cdot 7 & \to & r(m_2)=1 & 7 \in \mathbb P\\ m_3=m_2 \cdot (m_2+1) &= 2 \cdot 3 \cdot 7 \cdot 43 & \to & r(m_3)=1 & 43 \in \mathbb P\\ m_4=m_3 \cdot (m_3+1) &=2 \cdot 3 \cdot 7 \cdot 43 \cdot 1807 & \to & r(m_4)=1807 & 1807 \notin \mathbb P\\ m_5=m_4 \cdot (m_4+1) &=m_4 \cdot 3263443 & \to & r(m_5)=?? & 3263443 \in \mathbb P\\ \vdots \end{array} \\ $$ However, I could not compute the last entry $r(m_5)$ because the sum expression for $f(m_5)$ is too huge. Also it seems to be an interesting question to answer this analytically. Q1: is $r(m_5) = 1$ ? Q2: does the pattern continue, in the sense that if the cofactor is/is not prime, the residue is/is not 1? Q3: are there other numbers $w$ outside of this pattern for which $r(w)=1$ ? The sequence $2,3,7,43,1807,... $ is in the OEIS in different variants. The sequence $2,6,42,1806,...$ is also in the OEIS in different variants. [update] Ah, I see now that in a comment at OEIS-sequence A014117 Max Alekseyev states (Aug 2013) that this sequence is even finite - however it is not yet clear to me whether my problem-definition and the OEIS definition match. 
So this problem has possibly been solved... AI: Quick answer: No for all three questions. Explanation: Let us suppose $r(m)=1$. Let $p$ be a prime factor of $m$, then we must have $$1 \equiv f(m)=\sum_{k=0}^{m-1}{k^m} \equiv \sum_{j=0}^{\frac{m}{p}-1}{\sum_{k=0}^{p-1}{(k+jp)^m}} \equiv \frac{m}{p}\sum_{k=0}^{p-1}{k^m} \pmod{p}$$ Consider a primitive root $g \pmod{p}$, then $$\sum_{k=0}^{p-1}{k^m}=\sum_{k=1}^{p-1}{k^m}\equiv \sum_{i=0}^{p-2}{(g^i)^m} \equiv \begin{cases} \frac{1-(g^m)^{p-1}}{1-g^m} \equiv 0 \pmod{p}& p-1 \nmid m \\ \sum_{y=1}^{p-1}{1} \equiv -1 \pmod{p} & p-1 \mid m \end{cases}$$ Thus if either $p-1 \nmid m$ or $p^2 \mid m$, then we have $1 \equiv \frac{m}{p}\sum_{k=0}^{p-1}{k^m} \equiv 0 \pmod{p}$, a contradiction. Therefore $p-1 \mid m$ and $p^2 \nmid m$. This implies that $m$ must necessarily be squarefree, and $p \mid m \Rightarrow p-1 \mid m$. We immediately see that $m$ must be even, so write $m=2p_1p_2 \ldots p_k$, where $p_1<p_2< \ldots <p_k$, and $k \geq 0$. Note that for $i \leq k$, we have $p_i \mid m \Rightarrow p_i-1 \mid m=2p_1p_2 \ldots p_k$. Since $0<p_i-1<p_i< \ldots<p_k$, we have $(p_i-1, p_ip_{i+1} \ldots p_k)=1$ so $p_i-1 \mid 2p_1p_2 \ldots p_{i-1}$. If $k=0$, then $m=2$ works, since indeed $r(2)=1$. If $k \geq 1$, then $p_1-1 \mid 2$ and $p_1>2$, so $p_1=3$. If $k=1$ this gives $m=6$, which works, since $r(m)=1$. If $k \geq 2$, then $p_2-1 \mid 2p_1=6$ and $p_2>p_1=3$, so $p_2=7$. If $k=2$ this gives $m=42$, which works, since $r(42)=1$. If $k \geq 3$, then $p_3-1 \mid 2p_1p_2=42$ and $p_3>p_2=7$, so $p_3=43$. If $k=3$ this gives $m=1806$, which works, since $r(1806)=1$. If $k \geq 4$, then $p_4-1 \mid 2p_1p_2p_3=1806$ which has no solution for $p_4>p_3=43$. Thus we get no solution for $m$ when $k \geq 4$. In conclusion, the only $m$ for which $r(m)=1$ are $2, 6, 42, 1806$.
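The conclusion above is easy to corroborate numerically with a small brute-force search, using Python's three-argument `pow` for modular exponentiation. (The next candidate $m_5$ from the question's recursion is skipped here as too large for a quick check, but the theorem above already rules it out.)

```python
def r(m):
    # residue of sum_{k=0}^{m-1} k^m modulo m
    return sum(pow(k, m, m) for k in range(m)) % m

hits = [m for m in range(2, 101) if r(m) == 1]
print(hits)          # [2, 6, 42]
print(r(1806))       # 1
print(r(9), r(15))   # 0 0  -- odd m always gives 0, as the question notes
```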
H: Linear PDE of degree 2: general form and an example As the general form of a linear PDE of degree 2 we wrote $$ (Lu)(x):=\sum_{i,j=1}^{n}a_{ij}(x)\frac{\partial^2 u}{\partial x_i\partial x_j}+\sum_{i=1}^{n}b_i(x)\frac{\partial u}{\partial x_i}+c(x)u=f(x) $$ Now I have the PDE $$ (1+x^2)\frac{\partial^2 u}{\partial x^2}-2x\frac{\partial^2 u}{\partial y^2}-(1+u^2)\frac{\partial u}{\partial x}+(1+\frac{\partial u}{\partial x})\frac{\partial u}{\partial y}-u=1 $$ I am trying to match the general form to this example. What I see is: $$ a_{11}(x)=1+x^2,~~a_{12}(x)=-2x,~~c(x)=-1,~~f(x)=1 $$ But what are the other coefficients? AI: There are no such coefficients: the equation is not linear. The term $(1+\frac{\partial u}{\partial x})\frac{\partial u}{\partial y}$ contains $\frac{\partial u}{\partial x}$ multiplied by $\frac{\partial u}{\partial y}$, and the coefficient $1+u^2$ of $\frac{\partial u}{\partial x}$ depends on $u$ itself, so the PDE cannot be written in the general linear form at all.
H: Why is this inequality true in this proof? I've been studying Spivak's Calculus on Manifolds and there's one proof he gives that made me confused. It's probably a very basic fact; however, I'm not grasping why this should be true. The Lemma being proved is: "Let $A\subset\Bbb R^n$ be a closed rectangle and $f: A\to \Bbb R$ bounded. If $o(f,x)<\varepsilon$ for every $x\in A$, then there's a partition $P$ of $A$ such that $U(f,P)-L(f,P)<\varepsilon \operatorname{vol}(A)$". Here, if we denote by $M_S(f)$ and $m_S(f)$ the supremum and infimum of $f$ respectively on the set $S$, and if we denote by $B(a;r)$ the open ball centered at $a$ with radius $r$, then $$o(f,x)=\lim_{r\to 0}[M_{B(x;r)}(f)-m_{B(x;r)}(f)].$$ Spivak starts the proof with: "For each $x\in A$ there's a closed rectangle $U_x$, containing $x$ in its interior, such that $M_{U_x}(f)-m_{U_x}(f)<\varepsilon$." Now, the existence of these rectangles seems to be a very basic fact; however, I'm failing to see the reason for that. Why is this true? Thanks very much in advance! AI: The assumption $o(f,x) < \varepsilon$ says that for all sufficiently small $r > 0$, say $r \leqslant r_0$, you have $$M_{B(x;r)}(f) - m_{B(x;r)}(f) < \varepsilon.$$ To see that, let $L = o(f,x)$, and consider $\eta = \dfrac{\varepsilon - L}{2}$. By the definition of the limit, there is a $\delta > 0$ such that $\lvert M_{B(x;r)}(f) - m_{B(x;r)}(f) - L\rvert < \eta$ for all $r < \delta$. But then we have $\lvert M_{B(x;r)}(f) - m_{B(x;r)}(f)\rvert \leqslant L + \lvert M_{B(x;r)}(f) - m_{B(x;r)}(f) - L\rvert < L+\eta = \dfrac{L+\varepsilon}{2} < \varepsilon$ for $r < \delta$. Then you have $M_S(f) - m_S(f) < \varepsilon$ for every set $S \subset B(x;r_0)$. Any ball around $x$ contains a closed rectangle with centre $x$; one only needs to choose the side length(s) small enough.
H: Elementary set questions problem In an exam there are 150 students. 40 passed in papers A & B, 40 passed in papers B & C, 30 passed in papers A & C, and 10 passed in all three. How many students passed in paper B only? Also, if no student failed, find the number of students who passed in exactly one paper. Can we determine the answers when the individual values are not given? AI: Approach: Draw a Venn Diagram depicting the overlaps and relationships $|A\cap B| = 40,\;$ $\;|B \cap C |= 40,\;$ $\;|A\cap C| = 30,\;$ and the intersection $\;|A\cap B \cap C| = 10$. Fill in the known number of students in each region. You will not be able to determine, from the given information, the precise number who passed in only $B$. But you will be able to determine the number of students, out of $150$, who must have passed in exactly one paper, provided no student failed in every paper.
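The second count can be carried out with inclusion–exclusion. A small sketch, assuming (as the answer notes) that every student passed at least one paper:

```python
total = 150
ab, bc, ac, abc = 40, 40, 30, 10

# Students who passed at least two papers. The triple overlap is counted
# three times in ab + bc + ac, so subtract it twice.
at_least_two = ab + bc + ac - 2 * abc

exactly_one = total - at_least_two   # valid only if nobody failed every paper
print(exactly_one)   # 60
```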
H: Solving $\frac{dx}{dz}-\frac{2x}{z}=1$ Please can someone solve this? $$\frac{dx}{dz}-\frac{2x}{z}=1$$ This is only part of my homework question, and I am stuck here. Please teach me the solution, thank you :) AI: To elaborate on Amzoti's suggestion: this is a first-order linear equation, so an integrating factor does the job. Do take the time to study (refresh your memory on) integrating factors and how to use them for problems of this type, so you can apply this technique to a wider range of such problems.
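For reference, carrying the integrating-factor method through by hand gives $\mu(z)=z^{-2}$, so $(xz^{-2})' = z^{-2}$ and hence $x(z) = -z + Cz^2$. A quick numerical sanity check of that claimed solution:

```python
def x(z, C):
    # candidate general solution x(z) = -z + C z^2
    return -z + C * z * z

def dx_dz(z, C):
    # its exact derivative, computed by hand
    return -1 + 2 * C * z

# the residual dx/dz - 2x/z - 1 should vanish identically
for C in (0.0, 1.0, -3.5):
    for z in (0.5, 1.0, 2.0, 7.0):
        residual = dx_dz(z, C) - 2 * x(z, C) / z - 1
        assert abs(residual) < 1e-12
print("x(z) = -z + C z^2 satisfies the ODE")
```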
H: A problem on second order differentiation If $y=\sin x$, then find the value of $$\frac{d^2(\cos^7 x)}{dy^2}$$ I have no idea on how to proceed in this problem. Please help. AI: Hint: notice that $$ \cos x = \sqrt{1-y^2}$$
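Carrying the hint through (a sketch): with $y=\sin x$ we have $\cos^7 x=(1-y^2)^{7/2}$, and two differentiations in $y$ give

```latex
\frac{d}{dy}(1-y^2)^{7/2} = -7y(1-y^2)^{5/2},\qquad
\frac{d^2}{dy^2}(1-y^2)^{7/2} = -7(1-y^2)^{5/2} + 35y^2(1-y^2)^{3/2},
```

which, back in terms of $x$, is $35\sin^2 x\cos^3 x - 7\cos^5 x = 35\cos^3 x-42\cos^5 x$.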
H: Discrete valuations of a functional field have discrete valuation rings. Theorem: If $\nu:F\to\mathbb R\cup\{\infty\}$ is a valuation of a functional field, then the set $$\mathfrak O_{\nu}=\{x\in F: \nu(x)\geq 0\}$$ is a local ring with maximal ideal $$\mathfrak M_{\nu}=\{x\in F: \nu(x) > 0\}$$ and a quotient field $F$. If the valuation is discrete then $\mathfrak O_{\nu}$ is a discrete valuation ring and an arbitrary parameter $t$ of $\mathfrak O_{\nu}$ generates the maximal ideal $\mathfrak M_{\nu}$. Proof: Assume that $\mathfrak O_{\nu}$ is a local ring with maximal ideal $\mathfrak M_{\nu}$ and a quotient field $F$. I have no doubts. "...If $\nu:F\to\mathbb R\cup\{\infty\}$ is a discrete valuation then $\nu[F^*]=(d\mathbb Z,+)$ for some $d\in\mathbb R^+$. Assume $t\in F^*$ is such that $\nu(t)=d$. We will show that $t$ is a local parameter of $\mathfrak O_{\nu}$..." Then the proof goes on to show the representation $z=ut^m$ for any nonzero $z\in\mathfrak O_{\nu}$, and some $u\in\mathfrak O_{\nu}^*$. Now "...If $xy\in\langle t\rangle$ for some $x=ut^m$, $y=vt^n\in\mathfrak O_{\nu}$ and $u,v\in\mathfrak O_{\nu}^*$ ... then $x\in\langle t\rangle$ or $y\in\langle t\rangle$. This shows that $\mathfrak O_{\nu}$ is a discrete valuation ring with local parameter $t$. Definition: The commutative domain $R$ with a multiplicative identity is a discrete valuation ring if an indecomposable element $t\in R\setminus R^*$ exists such for any $z\in R\setminus \{0\}$, $z=ut^m$ for some $u\in R^*$ and $m\in \mathbb N$. The element $t$ we call a local parameter. Problem: I do not understand how from $0\not=xy\in\langle t\rangle$ and $xy\not\in\mathfrak O_{\nu}^*\Rightarrow x\in\langle t\rangle$ or $y\in\langle t\rangle$ follows that $t$ is indecomposable. In more detail if $t=t_1t_2$ then $t_1\in\langle t\rangle$ or $t_2\in\langle t\rangle$, but how come then $t_1\in \mathfrak O_{\nu}^*$ or $t_2\in \mathfrak O_{\nu}^*$. Any help would be appreciated. 
AI: Your result comes from the following: Exercise: In a domain $R$, if a nonzero principal ideal $(a)$ is prime and $a$ is not a unit, then $a$ is irreducible. Proof: Suppose $(a)$ is prime, and suppose there are $b,c$ so that $a = bc$. Then $bc \in (a)$, so without loss of generality $b \in (a)$, say $b = aa'$ for some $a' \in R$. Hence $a = aa'c$, and so $a'c = 1$ since $R$ is a domain. In other words, $c$ is a unit (with inverse $a'$), and so $a$ is irreducible. In your situation $(t)$ is prime by the stated property, and $t$ is not a unit since $\nu(t)=d>0$, so $t$ is indecomposable.
H: Show that set is null set Let $\mu$ be a measure on $(X,\mathcal A)$. Let $(A_k)_{k\in\mathbb N}$ be a sequence of sets in $\mathcal A$ such that $\sum_{k\in\mathbb N}\mu(A_k)<\infty$. Let $A:=\{x\in X:x\in A_k $for infinitely many $A_k\}$. Show that $\mu(A)=0$ and that this does not hold if we don't ask for $\sum_{k\in\mathbb N}\mu(A_k)<\infty$. My ideas: Since $A\subseteq \cup_{k\in\mathbb N} A_k$ it follows that $\mu(A)\leq\mu(\cup_{k\in\mathbb N} A_k)\leq \sum_{k\in\mathbb N}\mu(A_k)<\infty$ so $\mu(A)$ is finite. My idea is to construct an infinite series of subsets of A with constant measure leading to a contradiction, but I'm not sure if this is the right way to go. I don't want a complete solution, just hints to guide me. AI: Hint: Note, that for every $k \in \mathbb N$ we have $$ A \subseteq \bigcup_{n \ge k} A_n $$ and hence $\mu(A) \le \sum_{n \ge k}\mu(A_n)$.
H: Finding the roots of 4096x^3-10496x^2+152576x - 961=0 (1 root and 2 complex)? I don't know how to find the roots of 4096x^3-10496x^2+152576x - 961=0 I tried using Wolfram and http://en.wikipedia.org/wiki/Cubic_function. I don't really understand it; can someone please explain how it is done? AI: A cubic equation in the form $y^3+py^2+qy+r=0$ may be reduced to the form $$ x^3+ax+b=0$$ by substituting for $y$ the value $x-\frac{p}{3}$. Here $$ a = \frac{1}{3}(3q-p^2) \quad \text{and} \quad b = \frac{1}{27}(2p^3-9pq+27r)$$ For the solution let $$A = \sqrt[3]{-\frac{b}{2}+\sqrt{\frac{b^2}{4}+\frac{a^3}{27}}}$$ and $$ B = \sqrt[3]{-\frac{b}{2}-\sqrt{\frac{b^2}{4}+\frac{a^3}{27}}}$$ Then the values of $x$ will be given by $$ x_1 = A+B \\ x_2 = -\frac{A+B}{2}+\frac{A-B}{2} \sqrt{-3} \\ x_3 = - \frac{A+B}{2} - \frac{A-B}{2}\sqrt{-3}$$ Recall that $y=x-\frac{p}{3}$, so you solve for $y_1,y_2$, and $y_3$ by using the corresponding $x$ values. If $p,q,r$ are real, then let $ d = \dfrac{b^2}{4} + \dfrac{a^3}{27}$. (i) If $d>0$, then there will be one real root and two conjugate complex roots. (ii) If $d=0$, then there will be three real roots of which at least two are equal. (iii) If $d<0$, then there will be three real and unequal roots.
For your problem, we can say your equation is $$4096y^3-10496y^2+152576y - 961=0 \\ y^3 - \frac{10496}{4096}y^2+\frac{152576}{4096}y - \frac{961}{4096} = 0$$ Therefore, your values of $p,q$, and $r$ are $$p = -\frac{10496}{4096} = -\frac{41}{16} \\ q = \frac{152576}{4096} = \frac{149}{4} \\ r = - \frac{961}{4096}$$ So $$ a = \frac{1}{3}(3q-p^2) = \frac{26927}{768}$$ and $$ b = \frac{1}{27}(2p^3-9pq+27r) \approx 30.33668801 $$ We then define $$A = \sqrt[3]{-\frac{b}{2}+\sqrt{\frac{b^2}{4}+\frac{a^3}{27}}} \approx 3.020887293 $$ and $$B = \sqrt[3]{-\frac{b}{2}-\sqrt{\frac{b^2}{4}+\frac{a^3}{27}}} \approx -3.868752734 $$ Solving for $x_1, x_2,$ and $x_3$: $$ x_1 = A+B = -0.847865441\\ x_2 = -\frac{A+B}{2}+\frac{A-B}{2} \sqrt{-3} = 0.4239327205+5.966603286i \\ x_3 = - \frac{A+B}{2} - \frac{A-B}{2}\sqrt{-3} = 0.4239327205-5.966603286i$$ Therefore, the roots of your equation are $$y_1 = x_1-\frac{p}{3} = 0.0063012257 = \mathbf{0.0063012} \\ y_2 = x_2-\frac{p}{3} = 1.278099387+5.966603286i = \mathbf{1.2781+5.9666i} \\ y_3 = x_3-\frac{p}{3} = 1.278099387-5.966603286i = \mathbf{1.2781-5.9666i}$$ Which fits under case (i) with $d>0$. WolframAlpha's answers to your original expression are listed in bold above, which are indeed equal to the ones we have derived.
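Those numbers are straightforward to double-check in code. A sketch of the same computation in floating point (real cube roots taken explicitly, since $d>0$ here and Python's `**` would return a complex value for a negative base):

```python
# normalized coefficients of 4096x^3 - 10496x^2 + 152576x - 961 = 0
p = -10496 / 4096
q = 152576 / 4096
r = -961 / 4096

a = (3 * q - p * p) / 3
b = (2 * p**3 - 9 * p * q + 27 * r) / 27
d = b * b / 4 + a**3 / 27          # d > 0: one real, two complex roots

A = (-b / 2 + d**0.5) ** (1 / 3)   # both radicands are positive here
B = -((b / 2 + d**0.5) ** (1 / 3))

y1 = (A + B) - p / 3               # the real root of the original cubic
residual = 4096 * y1**3 - 10496 * y1**2 + 152576 * y1 - 961
print(y1, residual)                # y1 ~ 0.0063012, residual ~ 0
```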
H: Average value of recurrent function. Given a function f(x) = a*f(x-1) where a is a number between 0 and 1, what is the average value of f(x) for x >= 0? Clarification: f(1) is a constant, say 1; f is only defined for integer inputs. Disclaimer: I'm still in high school, so it's possible that I'm asking/saying something very stupid. AI: Ok, so let $f(1)=k$. Then $f(m)=ka^{m-1}$, so for any $n$ the sum $f(1)+f(2)+\cdots+f(n)= k\left(\frac{1-a^{n}}{1-a}\right)$ (for $a\neq 1$). Then the average value of $f(m)$ for $1\leq m \leq n$ is $\frac{k}{n}\left(\frac{1-a^{n}}{1-a}\right)$. So it makes sense to say that the average value is $$\lim_{n \to \infty} \frac{k}{n}\left(\frac{1-a^{n}}{1-a}\right)=0 \quad\text{for } 0\le a<1,$$ and $k$ for $a=1$.
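A quick numerical illustration of that limit, taking $k=1$, $a=1/2$ as an example:

```python
k, a = 1.0, 0.5

def average(n):
    # average of f(1), ..., f(n) with f(m) = k * a**(m-1)
    total = sum(k * a ** (m - 1) for m in range(1, n + 1))
    return total / n

for n in (10, 100, 1000):
    print(n, average(n))   # tends to 0 as n grows
```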
H: A problem on finding dy/dx If $a+b+c=0$ and $$y=\frac{1}{x^b+x^{-c}+1}+\frac{1}{x^c+x^{-a}+1}+\frac{1}{x^a+x^{-b}+1}$$then $\frac{dy}{dx}$=? The only way which I can think of solving this is by differentiating each term. However, is there a simpler way? AI: $$y=\frac{1}{x^b+x^{a+b}+1}+\frac{1}{x^{-a-b}+x^{-a}+1}+\frac{1}{x^a+x^{-b}+1}$$ $$y=\frac{1}{x^b+x^{a+b}+1}+\frac{x^{a+b}}{1+x^{b}+x^{a+b}}+\frac{x^b}{x^{a+b}+1+x^b}$$ The three fractions now share the denominator $1+x^b+x^{a+b}$, and their numerators add up to exactly that denominator, so $y=1$ identically and hence $\frac{dy}{dx}=0$.
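A numeric spot-check of the simplification, with arbitrary exponents satisfying $a+b+c=0$ and $x>0$: the three fractions always sum to $1$, so $\frac{dy}{dx}=0$.

```python
def y(x, a, b):
    c = -a - b   # enforce a + b + c = 0
    return (1 / (x**b + x**-c + 1)
            + 1 / (x**c + x**-a + 1)
            + 1 / (x**a + x**-b + 1))

for x in (0.3, 1.0, 2.5, 10.0):
    assert abs(y(x, 1.7, -0.4) - 1) < 1e-12
print("y is identically 1, so dy/dx = 0")
```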
H: Roots to an equation using analysis Suppose that $a_{1}<a_{2}<…<a_{n}$. Prove that the equation $ \frac{1}{x-a_{1}}+\frac{1}{x-a_{2}} +…+\frac{1}{x-a_{n}}=c$ has exactly $n-1$ roots if $c=0$ and $n$ roots if $c\neq 0$. I am absolutely not sure but I think I need to use Rolle's theorem on this. Multiply both sides by $x-a_{1}$ so we get: $1+\frac{x-a_{1}}{x-a_{2}}+…+\frac{x-a_{1}}{x-a_{n}}-c(x-a_{1})=0$. Now it is defined for $a_{1}$ but not for $a_{2},…,a_{n}$. If we consider the interval $(a_{1}-\varepsilon,a_{n}+\varepsilon), \varepsilon>0$ we get nowhere. I want to create an interval with endpoints that are solutions to the equation. Yet I don't know how to prove that there are exactly $n$ roots (the Fundamental Theorem of Algebra can't be used). AI: Let $f(x)=\sum_{k=1}^n \frac{1}{x-a_k}-c$. Then $f'(x)=-\sum_{k=1}^n \frac{1}{(x-a_k)^2}$, so $f$ is decreasing on each interval $I_0=(-\infty,a_1),I_1=(a_1,a_2),I_2=(a_2,a_3), \ldots, I_{n-1}=(a_{n-1},a_n), I_n=(a_n,+\infty)$. Also, note that ${\lim}_{x \to a_k,x < a_k}f(x)=-\infty$ and ${\lim}_{x \to a_k,x > a_k}f(x)=+\infty$ and ${\lim}_{x \to -\infty}f(x)={\lim}_{x \to +\infty}f(x)=-c$. This shows that $f(x)=0$ has exactly one root in each $I_k$ for $1\leq k <n$. If $c>0$, then $f$ has no root in $I_0$ and exactly one root in $I_n$. If $c<0$, then $f$ has no root in $I_n$ and exactly one root in $I_0$. If $c=0$, $f$ has no root in $I_0$ or $I_n$.
H: Is the derivative in $C(\overline{\Omega})$? Let $\Omega\subset\mathbb{R}^2$ be a bounded domain and consider $u\in C^2(\Omega)\cap C(\overline{\Omega})$. I would like to know if then $$ 1+\frac{\partial u}{\partial x}\in C(\overline{\Omega})? $$ My answer is: YES, because: 1) $f(x):=1$ is of course uniformly continuous on $\mathbb{R}^2$, because it is Lipschitz-continuous on the whole of $\mathbb{R}^2$, hence on $\overline{\Omega}$, too, i.e. $f\in C(\overline{\Omega})$ 2) Because of $u\in C^2(\Omega)$, the partial derivative $u_x$ is uniformly continuous on $\Omega$. And a uniformly continuous function on a bounded domain can be extended to a continuous function $g$ on $\overline{\Omega}$. So $u_x\in C(\overline{\Omega})$. And the sum of two functions in $C(\overline{\Omega})$ is again in $C(\overline{\Omega})$ This is my argument. Is it okay? AI: The answer is NO. By definition, $u\in C^2(\Omega)\Rightarrow \frac{\partial u}{\partial x}\in C(\Omega)$, but $\frac{\partial u}{\partial x}$ may not be uniformly continuous on $\Omega$. Counter-example: $\Omega=(0,1)\times (0,1)$, $u(x,y)=\sqrt{x}$.
H: Proof Concerning Sum of Binomial Coefficients Could anybody provide a proof of the following identity identity: $$ \sum_{n=0}^{N-1}\binom{N-1+n}{n}=\binom{2N-1}{N}$$ possibly using Symmetry property and Pascal's rule (or another easier way): $$\binom{a}{b}=\binom{a-1}{b-1}+\binom{a-1}{b}$$ AI: $$\begin{eqnarray} \sum_{k=0}^N {N+k \choose N} &=& \sum_{k=0}^N {N-1+k \choose N-1} + \sum_{k=-1}^{N-1} {N+k \choose N} \\ &=& \sum_{k=0}^{N-1} {N-1+k \choose N-1} + \sum_{k=0}^N {N+k \choose N} + {2N-1 \choose N-1} - {2N \choose N} \end{eqnarray}$$ So $$\begin{eqnarray} 0 &=& \sum_{k=0}^{N-1} {N-1+k \choose N-1} + {2N-1 \choose N-1} - {2N \choose N} \\ &=& \sum_{k=0}^{N-1} {N-1+k \choose N-1} - {2N - 1 \choose N} \end{eqnarray}$$
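The identity is easy to sanity-check numerically with `math.comb`:

```python
from math import comb

for N in range(1, 25):
    lhs = sum(comb(N - 1 + n, n) for n in range(N))
    rhs = comb(2 * N - 1, N)
    assert lhs == rhs
print("identity verified for N = 1..24")
```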
H: A problem on limit involving various functions Find the value of $$\lim \limits_{x\to 0} \frac{\tan\sqrt[3]x\ln(1+3x)}{(\tan^{-1}\sqrt{x})^2(e^{5\large \sqrt[3]x}-1)}$$ Applying L'Hospital's rule does not seem to simplify the expression. AI: Note that $$\lim \limits_{y\to 0}\frac{\tan y}{y}=\lim_{y\to 0}\frac{\tan^{-1} y}{y}=\lim_{y\to 0}\frac{\ln (1+y)}{y}=\lim_{y\to 0}\frac{e^y-1}{y}=1.$$ Therefore, $$\lim \limits_{x\to 0^+} \frac{\tan\sqrt[3]x\cdot \ln(1+3x)}{\left(\tan^{-1}\sqrt{x}\right)^2\cdot(e^{5 \sqrt[3]x}-1)}=\lim_{x\to 0^+}\frac{\sqrt[3]x\cdot 3x}{(\sqrt{x})^2\cdot 5 \sqrt[3]{x}}=\frac{3}{5}.$$ Edit: Some comments below suggest that the answer above may not be detailed enough. Let me explain a little more about the gap between the two lines of equations above. The limit we are concerned about can be written as $$\lim \limits_{x\to 0^+}\left(\frac{\tan\sqrt[3]x}{\sqrt[3]x}\cdot \frac{\ln(1+3x)}{3x}\cdot \left(\frac{\sqrt{x}}{\tan^{-1}\sqrt{x}}\right)^2\cdot\frac{5 \sqrt[3]x}{e^{5 \sqrt[3]x}-1}\cdot \frac{\sqrt[3]x\cdot 3x}{(\sqrt{x})^2\cdot 5 \sqrt[3]{x}}\right). $$ On the one hand, as $x\to 0^+$, $\sqrt[3]x$, $3x$ $\sqrt x$ and $5\sqrt[3]x$ all approach to $0$, so from the first displayed line we know that each limit of first four terms in the product above exists and equals $1$. On the other hand, the limit of the last term exists and equals $\dfrac{3}{5}$. As a result, the original limit exists and equals $\dfrac{3}{5}$.
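A numerical check of the limit, using `log1p`/`expm1` to keep precision for tiny $x$:

```python
from math import tan, atan, log1p, expm1

def g(x):
    cbrt = x ** (1 / 3)
    return (tan(cbrt) * log1p(3 * x)) / (atan(x ** 0.5) ** 2 * expm1(5 * cbrt))

for x in (1e-3, 1e-6, 1e-9):
    print(x, g(x))   # approaches 3/5 = 0.6 as x -> 0+
```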
H: Regular expression for language Let's have a language $L=\{\omega\in\{a,b,c\}^* | \omega $ contains $ab$ and does not contain $ba\}$ make a regular expression for this language. I've ended up with this one $$(a^*(b^*+(cc^{*}a^*))^*)^*ab(b^*+(cc^{*}a^*))^*$$ Is this correct? Does it generate all words containing $ab$ and not containing $ba$ ? AI: This is incorrect. It accepts $bab=(a^0(b^1)^1)^1ab$. In the first part of the expression, you want to tie $b$ up to $c$, not $a$ to $c$.
H: Remainder modulo 8 A number is given: $1234513151313653211415515253$ Is there any way to find out the remainder when it is divided by 8? What happens if I use MOD rules here? AI: When you divide by $8$, you only need to worry about the last three digits, because $1000$ is divisible by $8$, so all the higher digits contribute multiples of $8$. So $1234513151313653211415515253 \bmod 8$ becomes $253 \bmod 8$. Now it's a cake walk!
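In code, using Python's arbitrary-precision integers:

```python
n = 1234513151313653211415515253

# 1000 = 8 * 125, so only the last three digits matter mod 8
assert n % 8 == int(str(n)[-3:]) % 8
print(n % 8)   # 5, since 253 = 31*8 + 5
```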
H: In statistics, what is the meaning of $Z_{0.3}$ What is the meaning of $Z_{0.3}$ and how do I calculate it? I know it was calculated this way: $$Z_{0.3} = -Z_{0.7} = -0.52$$ I tried to follow the standard normal distribution table but I can't seem to find the way to get this. It's quite hard to look for information about a term when you have no clue what it is called. AI: This is the quantile of the standard normal distribution. In other words, if $\Phi(x)$ is the cdf of a normal distribution with mean 0 and variance 1, then $Z_{0.3}$ simply means the value of $x$ for which $\Phi(x)=0.3$. Essentially, it's the inverse of the cdf. The equation given follows from the symmetry of the standard normal curve about the mean 0. If you have access to a table of values of the cdf $\Phi$ then you can look for the value closest to 0.7 in the table and see which $x$ gives that $Z$ value. Hope this helps.
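Since Python 3.8 the standard library can compute this quantile directly, which avoids table lookup entirely:

```python
from statistics import NormalDist

z30 = NormalDist().inv_cdf(0.3)   # quantile of the standard normal at 0.3
z70 = NormalDist().inv_cdf(0.7)
print(round(z30, 2), round(z70, 2))   # -0.52 0.52
```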
H: About using functor to reduce problem into the category Sets In the answer to this question, the author said Now as for the proof of the Lemma, just use the Yoneda Lemma to reduce it to the case of the category of sets, where you can really see this equation immediately. Is it suggesting that to prove any proposition in a category $\mathcal{C}$, we can always assume $\mathcal{C}$ to be the category of sets, and just show in this case the proposition is true? AI: The statement in you question about assuming $\mathcal{C}$ is the category of sets is not precise, so I don't think one can say whether it's true or false. But, with respect to the situation in the question you linked to, it can be made precise. Say you have a commutative square of morphisms in $\mathcal{C}$ $\begin{matrix}X\rightarrow&X_1\\\\\downarrow&\downarrow\\\\X_2\rightarrow&Y\end{matrix}$ and you want to prove that this diagram is cartesian, meaning that the morphisms $p:X\rightarrow X_1,q:X\rightarrow X_2$ realize $X$ as $X_1\times_YX_2$. By Yoneda, this is equivalent to the statement that the diagram of contravariant $\mathrm{Hom}$ functors $\begin{matrix}h_X\rightarrow&h_{X_1}\\\\\downarrow&\downarrow\\\\h_{X_2}\rightarrow&h_Y\end{matrix}$ arising from the first diagram is cartesian, which in turn means that for each object $S$ of $\mathcal{C}$, the diagram of sets $\begin{matrix}X(S)\rightarrow&X_1(S)\\\\\downarrow&\downarrow\\\\X_2(S)\rightarrow&Y(S)\end{matrix}$ obtained by applying $\mathrm{Hom}_{\mathcal{C}}(S,-)$ to the original diagram, is cartesian. Now, in specific instances, such as the one in the question you reference, the morphisms in the original square are purely categorical in their definition (i.e. 
they make sense in any category for which the relevant constructions, like products, can be made, an example being the diagonal morphism), and if the Yoneda functor $X\rightsquigarrow h_X$ gives you the corresponding categorical morphisms in the category of contravariant functors from $\mathcal{C}$ to sets, then, if you've proved that such a diagram is cartesian in sets, you get cartesian-ness of the third diagram above for each object $S$, and therefore for the original diagram (in $\mathcal{C}$). EDIT: As Martin Brandenburg indicates in his comment, it's not necessary to know about Yoneda's lemma to understand this. If one just thinks of the universal property of the fiber product (as a triple consisting of an object and two arrows), then one finds that the top diagram is cartesian if and only if the bottom diagram of sets is cartesian for each $S\in\mathcal{C}$.
H: Is it possible to use limit to find collision between two subjects Let's say that in a game I have a subject $A$ placed at $(0, 5)$ and a subject $B$ placed at $(5, 0)$. Accordingly, their distance is $5\sqrt{2}$. The subject $A$ will walk that distance at a constant velocity in an infinite loop. The subject $B$ will walk from $(5, 0)$ to $(0, 0)$ at the same velocity, also in an infinite loop. Now, I don't know if it's true, but I believe that since they walk different distances ($5 \neq 5\sqrt{2}$) at the same velocity, at some point they'll meet at $(5, 0)$. Can I prove that they'll meet? Is it possible to use a limit to determine when they will meet? Thanks. AI: $A$ will be at the point $(5,0)$ at times $t_A=(2n+1) 5 \sqrt{2}/v$ , $B$ at $t_B= m \, 5/v$, so for $t_A=t_B$ we must have integers $n,m$ such that $ (2 n+1) \sqrt{2} = m$. This cannot happen, because $\sqrt{2}$ is irrational. Now, if you are willing to allow some "tolerance" (they "almost" meet), it's true that as time increases you'd get better and better approximations (this amounts to getting rational approximations to the irrational number $\sqrt 2$).
H: Quantitative comparison Any tips for this question: which quantity will be greater? AI: Quantity A: There are $r \times s \times t$ possible (distinct) combinations of one entree, one side, and one dessert. Compare that to quantity $B$.
H: Prove that $3n +5m = 12$ for any two natural numbers. Prove that $\exists n,m \in \mathbb{N}$ such that $$3n+5m=12 $$ This is clearly false, but I am not sure how to conduct a proof stating it is false. Should I just give examples with $n = 1,2,3$ and then pick $m$s afterwards? AI: Since it asks for solutions in $\mathbb{N}$, you could cycle through all the (finitely) many possible solutions and show none satisfies the equation - i.e. you exhaustively show all possible solutions fail. However, another simple argument is to suppose that some $m$, $n$ exist in $\mathbb{N}$ such that $3n + 5m = 12$. In this case, $n = \frac{12-5m}{3}$, by rearrangement. Thus, $n =4- \frac{5m}{3}$. But, $\frac{5m}{3}\not\in \mathbb{N}$ for $m<3$. So, $m\geq3$, but in this case we have $5m\geq15$ and hence $n$ must be negative, i.e. $n\not\in \mathbb{N}$. This is a simple proof by contradiction.
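Taking $\mathbb N=\{1,2,3,\dots\}$ as the poster does, an exhaustive check confirms there is no solution:

```python
# n <= 4 and m <= 2 already exhaust the candidates, since 3n <= 12 and 5m <= 12
solutions = [(n, m) for n in range(1, 5) for m in range(1, 3)
             if 3 * n + 5 * m == 12]
print(solutions)   # []
```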
H: Catching the right bus If buses go every $1$ minute, every $2$ minutes, every $5$ minutes and every $15$ minutes and you turn up at the bus stop at a random point, what is the probability that the next bus to arrive is one of the every $15$ minutes ones? We can assume that the timetable is set to make this as unlikely as possible. AI: The probability can be made arbitrarily close to $0$. Since the timetable is set to make this as unlikely as possible, consider having the 1-minute bus arrive every minute on the minute, i.e. 12:00, 12:01, etc., and the 15-minute bus arrive at 12:00 + $\epsilon$, 12:15 + $\epsilon$, etc. Then the 15-minute bus is the next to arrive only if you turn up during one of the length-$\epsilon$ windows just after a quarter-hour mark, which happens with probability $\epsilon/15$ (measuring $\epsilon$ in minutes). Letting $\epsilon \to 0$, the infimum over timetables is $0$.
H: A problem on distribution functions I have a quick question here. From the definition of a distribution function (DF), $\text{A real-valued, nondecreasing, right continuous function} \; F \; \text{defined on} \;\left(-\infty,\infty\right)\;\text{satisfying}$$$ F(-\infty)=0 \; \text{and} \; F(\infty)=1 $$ $\text{is called a DF}.$ I encountered this problem. $\text{Consider the function}$ $$F(x)=\frac{1}{\pi}\arctan{x} \;,\;-\infty<x<\infty$$ $\text{Is this a DF?}$ Now, from what I understood, although it satisfies the conditions of (1) being nondecreasing, (2) right continuous, and (3) real-valued, I found that $$F(-\infty)=-\frac{1}{2}$$ and $$F(\infty)=\frac{1}{2}.$$ Clearly, from my solution, it must not be a DF. However, the answer from another reference which I studied said it is a DF, with a PDF of $$f(x)=\frac{1}{\pi}\frac{1}{1+x^2}I^{(x)}_{(-\infty,\infty)}$$ Now, I know that $f(x)$ can be verified to be a PDF since $f(x)\geq0$ for all $x$ and its integral over $\mathbb{R}$ is $1$. So now, I am in the dilemma of what should my answer to the problem is, i.e. is $F$ a DF or not? Please help. In case it is, please explain where was I wrong in the $F(-\infty) \; \text{and} \; F(\infty)$ conditions. Thanks. AI: Your $F$ will be a distribution function after adding the adequate constant, namely, $\frac 12$. So $F$ as written in the OP is not a DF, but $F+1/2$ is a DF.
H: Padé approximants and convergence acceleration Why do Padé approximants accelerate the convergence of series? Generally speaking, is there an advantage, in the sense of convergence acceleration, in using rational interpolation? Thanks much in advance!!! AI: To expand on vadim123's answer. With truncations of a series (the trivial way to obtain approximations to its sum) you are using a number $n$ of terms, and the remainder simply discards everything after those $n$ terms. With Padé approximants, since a rational function has more parameters, you can match more terms of the series (this is exactly how Padé approximants are defined) and in that way, hopefully, get closer to the sum of the series (or, what is the same, get a smaller remainder). Acceleration also depends on how you compare. A truncation of a series up to degree $n$ is a polynomial of degree $n$. Padé approximants are quotients of two polynomials. Should we compare this truncation of degree $n$ to the Padé approximant that uses a quotient of two polynomials of degree $n$, which captures the information of $2n$ terms of the series and is likely to give a better approximation? Or should we compare to the Padé approximant that is a quotient of two polynomials of degree $n/2$, which captures the information of only $n$ terms of the series, the same as the truncation of degree $n$?
H: How to simplify $(5-\sqrt{3}) \cdot \sqrt{\left(7+\frac{5\sqrt{3}}{2}\right)}$ How to simplify this: $$(5-\sqrt{3}) \sqrt{7+\frac{5\sqrt{3}}{2}}$$ Dont know how to minimize to 11. Thanks in advance! AI: Hint: $7+\dfrac{5\sqrt{3}}{2} = \dfrac{1}{2^2}\left(5^2+\left(\sqrt{3}\right)^2+2\cdot5\cdot\sqrt{3}\right)$.
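Carrying the hint through: $7+\frac{5\sqrt3}{2}=\left(\frac{5+\sqrt3}{2}\right)^2$, so the product collapses to $\frac{(5-\sqrt3)(5+\sqrt3)}{2}=\frac{25-3}{2}=11$. A one-line numerical confirmation:

```python
from math import sqrt

value = (5 - sqrt(3)) * sqrt(7 + 5 * sqrt(3) / 2)
print(value)   # 11, up to floating-point error
```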
H: Show that any $n+1$ vectors in a $n$ dimensional vector space forms a linearly dependent set $V$ is an $n$-dimensional vector space. Show that $n + 1$ vectors in $V$ form a linearly dependent set. Here is how I am approaching it: Let $\dim V = n$, which implies that there is a linearly independent set of vectors $S = \{v_1,v_2,\ldots,v_n\}$ that is a basis of $V$. Let $W = \{w_1,\ldots,w_r\}$ be a set of linearly independent vectors in $V$ I don't know where to go from here. AI: The statement $\dim V = n$ means there exists a basis $\{v_1,\ldots,v_n\}$ for $V$. Now take any $n+1$ vectors $w_1,\ldots,w_{n+1} \in V$. By the definition of a basis, each $w_j$ can be written as $$ w_j = \sum_{i=1}^n a_{ij} v_i $$ with $a_{ij} \in F$ ($F$ being the field over which $V$ is defined). A linear combination $\sum_{j=1}^{n+1} c_j w_j$ vanishes precisely when $\sum_{j=1}^{n+1} a_{ij} c_j = 0$ for each $i = 1,\ldots,n$, by the linear independence of the $v_i$. This is a homogeneous system of $n$ linear equations in the $n+1$ unknowns $c_1,\ldots,c_{n+1}$, so it has a nontrivial solution. Hence $\{w_1,\ldots,w_{n+1}\}$ is linearly dependent. (Note that it is not enough to treat the special case where the $n+1$ vectors consist of the basis plus one extra vector; the claim is about arbitrary $n+1$ vectors.)
H: Solve a differential equation using the power series method Problem By assuming a power series solution of the form $$y(x) = \sum_{m=0}^{\infty} c_mx^m , \quad c_0 \not =0 $$ Show that the equation $ 2y'+xy=x $ has general solution $y(x)=1+Ae^{-x^2/4}$ where A is a constant. [Hint: you may use without proof the fact that $$ e^{ax^2} = \sum_{m=0}^{\infty} \frac{(ax^2)^n}{n!} $$ Progress I'm genuinely stuck, I've worked out $y'$ and substituted it into the equation $2y′+xy=x$ however not to sure where to go from there? AI: $$2\sum_{m=1}^{\infty}{mc_m x^{m-1}}+\sum_{m=0}^{\infty}{c_m x^{m+1}}=x$$ $$2c_1+2\sum_{m=2}^{\infty}{mc_m x^{m-1}}+\sum_{m=0}^{\infty}{c_m x^{m+1}}=x$$ $$2c_1+2\sum_{m=0}^{\infty}{(m+2)c_{m+2} x^{m+1}}+\sum_{m=0}^{\infty}{c_m x^{m+1}}=x$$ $$2c_1+\sum_{m=0}^{\infty}{(2(m+2)c_{m+2}+{c_m}) x^{m+1}}=x$$, so we have $2c_1 = 0$, $2(2c_2)+c_0 = 1$, $2((m+2)c_{m+2})+c_m = 0$ for $m>0$, so $c_k=0$ for odd $k$, $c_0\in\mathbb{R}\backslash\{0\}$ $c_2=\frac{1-c_0}{4}=\frac{1-c_0}{4*1!}$, $c_4=\frac{-c_2}{8}=\frac{-(1-c_0)}{32}=\frac{-1+c_0}{32}=\frac{-1+c_0}{4^2*2!}$, $c_6=\frac{-c_4}{12}=\frac{1-c_0}{32*12}=\frac{1-c_0}{4^3*3!}$, and so on... I'll leave it to you to get an explicit formula for $c_k$ for even $k$ now write $c_0 = 1+(-1+c_0)$, With all these, try to express put the explicit representation for $y$. You will also need to substitute $a=\frac{1}{4}$ into your hint to get your desired solution
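One can verify the stated general solution directly; the derivative below is computed by hand, so this is just a residual check:

```python
from math import exp

def residual(A, x):
    y = 1 + A * exp(-x * x / 4)
    dy = A * exp(-x * x / 4) * (-x / 2)   # derivative of y
    return 2 * dy + x * y - x             # should be 0 if y solves 2y' + xy = x

for A in (-2.0, 0.0, 3.5):
    for x in (-1.0, 0.0, 0.7, 4.0):
        assert abs(residual(A, x)) < 1e-12
print("y = 1 + A exp(-x^2/4) solves 2y' + x y = x")
```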
H: Prove that $x$ and $x+1$ are coprime numbers Given $\{x \mid x > 1\}$, how do I prove that any given $x$ and $x+1$ are coprime? AI: If $y$ divides $x$ and $x+1$ then it divides $(x+1)-x=1$. Conclude.
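The one-line argument checks out computationally:

```python
from math import gcd

# any common divisor of x and x+1 divides (x+1) - x = 1
assert all(gcd(x, x + 1) == 1 for x in range(2, 10_000))
print("consecutive integers are coprime")
```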
H: prove that the operator is compact. Let $H$ be a Hilbert space over $\mathbb C$, and $\{f_j\}$ an orthonormal set in $H$. Let $t_j\in \mathbb C$ be such that $\displaystyle \lim_{j\to \infty} t_j =0$, i.e. $(t_j)_{j\in \mathbb N}\in c_0$. Show that the operator $T:H\to H$ defined by: $Tx=\sum t_j (f_j \cdot x)f_j$ is compact. It's easy to prove that if $X$ is a reflexive space, then an operator is compact if and only if it carries weakly convergent sequences into norm convergent sequences (and it's enough to treat sequences weakly convergent to zero). If $H$ is a Hilbert space, then it's reflexive. Let $x_j \to 0$ weakly. I don't know how to do this problem, because I can only bound $||Tx||\le ||x|| \sum |t_j|$, and the right side could be infinite. AI: For each $n\in \mathbb{N}$, define $$ T_n(x) = \sum_{j=1}^n t_j (f_j\cdot x)f_j $$ Then, $T_n$ is finite rank, and hence compact. Now, by orthonormality and Bessel's inequality, $$ \|T(x)-T_n(x)\|^2 = \sum_{j=n+1}^{\infty} |t_j|^2|(f_j\cdot x)|^2 \leq u_n^2\|x\|^2 $$ where $$ u_n = \sup_{j\geq n} |t_j| $$ Now, $$ \lim u_n = \limsup |t_n| = 0 $$ and hence for $\epsilon > 0$, there is $N_0 \in \mathbb{N}$ such that $u_n < \epsilon$ for all $n\geq N_0$, in which case $$ \|T-T_n\| < \epsilon \quad\forall n\geq N_0 $$ Hence, $T$ is the limit of finite rank operators, and is hence compact.
H: What is a good book to study classical projective geometry for the reader familiar with algebraic geometry? The more I study algebraic geometry, the more I realize how I should have studied projective geometry in depth before. Not that I don't understand projective space (on the contrary, I am well versed in several different constructions of it), but I lack the familiarity with basic results as cross-ratios, how projective linear transformations act on projective space (as in how many points determine one transformation), Desargues' theorem, etc. I also sometimes feel that it wouldn't hurt to get more practice with hard (as in Olympiad-style) classical geometry problems that may or may not use some facts of projective geometry. To summarize, I am looking for a reference that covers classical results of projective geometry, and yet assumes the maturity of a reader who has already started studying algebraic geometry. It would be only better if such a book could help me understand where those amazing solutions to Olympiad problems come from. Does anyone have a suggestion? AI: Here are two references which seem to answer your request: I) Lectures on Curves, Surfaces and Projective Varieties by Beltrametti, Carletti, Gallarati, Bragadin. This is a fat textbook written by four Italian geometers in a very classical style and concentrating on classical projective geometry: schemes, cohomology or functors are never even alluded to! II) Geometria proiettiva, Problemi risolti e richiami di teoria by Fortuna, Frigerio, Pardini. This is a book consisting of solved exercises preceded by a reminder of the theory. Although the book is recent the content is very classical and elementary: cross-ratios, quadrics, pencils of conics, inflection points,linear systems,... 
From a review: “This book is the result of the experience acquired by the authors while lecturing Projective Geometry to students from a three year course leading to a degree in Mathematics in the University of Pisa (Italy). … ” (Ana Pereira do Vale, Zentralblatt MATH, Vol. 1227, 2012) (The book is in Italian, but judging from your first name this might not be a big deal :-) Anyway mathematical Italian is very close to mathematical English) I think there is some poetic justice in the fact that all seven authors of the two books are Italian: the justly vaunted Italian algebraic geometry seems to be alive and well in its native country! Edit Richter-Gebert has recently written an encyclopaedic book containing an amazing wealth of material on projective geometry, starting with nine (!) proofs of Pappos's theorem. The book examines some very unexpected topics like the use of tensor calculus in projective geometry, building on research by computer scientist Jim Blinn. It would be difficult to read that book from cover to cover but the book is fascinating and has splendid illustrations in color.
H: Mathematical model for magic square Having spent some time on magic squares, it seems like magic squares can be formed only with an odd number of rows/columns. Is that so? If so, why? Is there a mathematical model that explains magic squares? AI: It's not true that magic squares can only be formed with an odd number of rows and columns. For instance, consider $$\begin{matrix} 7 & 12 & 1 & 14 \\ 2 & 13 & 8 & 11 \\ 16 & 3 & 10 & 5 \\ 9 & 6 & 15 & 4 \end{matrix}$$ There are more examples, together with some explanations, on the Wikipedia page.
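The $4\times4$ square in the answer can be checked mechanically. Here is a quick sketch (my addition, not part of the original answer) verifying that every row, every column, and both diagonals sum to the magic constant $34$, and that the entries are exactly $1,\dots,16$:

```python
# Verify the 4x4 magic square from the answer: all rows, columns,
# and both diagonals must share one magic constant.
square = [
    [7, 12, 1, 14],
    [2, 13, 8, 11],
    [16, 3, 10, 5],
    [9, 6, 15, 4],
]

n = len(square)
magic = sum(square[0])  # constant implied by the first row

rows = [sum(row) for row in square]
cols = [sum(square[i][j] for i in range(n)) for j in range(n)]
diags = [sum(square[i][i] for i in range(n)),
         sum(square[i][n - 1 - i] for i in range(n))]

assert all(s == magic for s in rows + cols + diags)
# It is a "normal" magic square: entries are 1..16, each used once.
assert sorted(x for row in square for x in row) == list(range(1, 17))
print(magic)  # → 34
```

So even-order magic squares exist; in fact $4\times4$ is the smallest order after $3\times3$ (there is no $2\times2$ magic square with distinct entries).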
H: is $p \land (p \lor q)$ a tautology? I would just like to know whether my work is correct before I continue on with the rest of the questions. $$p \land (p \lor q)$$ $$p \land (\lnot p \rightarrow q)$$ $$(p \land \lnot p) \rightarrow q$$ $$F \rightarrow q$$ with this, I'm going to say it is not a tautology. AI: No, it is not a tautology. $p \land (p\lor q)$ is not true when both $p$ and $q$ are false. A tautology is a proposition that is always true, no matter what the truth-values of its variables. Note that the following equivalence does not hold: $$p\land(\lnot p \lor q) \equiv (p \land \lnot p) \lor q\tag{wrong}$$ Associativity only works when the connectives are both $\land,$ or both $\lor$. What we do have, by distributivity, is $$p\land(\lnot p \lor q) \equiv (p\land \lnot p) \lor (p \land q)\equiv F \lor (p \land q)\equiv p \land q$$
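A brute-force truth table (my sketch, not part of the answer) confirms the conclusion: $p\land(p\lor q)$ is not a tautology, it is equivalent to $p$ alone (absorption), and $p\land(\lnot p\lor q)$ simplifies to $p\land q$:

```python
from itertools import product

def p_and_p_or_q(p, q):
    return p and (p or q)

def p_and_notp_or_q(p, q):
    return p and ((not p) or q)

rows = list(product([False, True], repeat=2))  # all four truth assignments

# Not a tautology: the formula is false whenever p is false.
assert not all(p_and_p_or_q(p, q) for p, q in rows)
# Absorption law: p ∧ (p ∨ q) ≡ p.
assert all(p_and_p_or_q(p, q) == p for p, q in rows)
# Distributivity: p ∧ (¬p ∨ q) ≡ p ∧ q.
assert all(p_and_notp_or_q(p, q) == (p and q) for p, q in rows)
print("all checks pass")
```

Four rows suffice because a propositional formula in two variables is a tautology exactly when it is true on every assignment.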
H: Marginal distribution of two jointly distributed Random Variables that are dependent Find the marginal distribution of $X$ for the joint density $f(x,y) = c(y^2-x^2)e^{-y}$ on $y \ge |x|$ (note: $c=1/8$). Clearly, $X$ is dependent on $Y$, since the value $Y$ takes restricts the domain of $X$. Would this affect how the marginal distribution is computed? Or would you just do the usual? What happens to the domain of $X$? Is it simply $- \infty < x < \infty $ ? I feel like I am missing something conceptually here. Any insights would be appreciated! AI: Note that the joint density function is $0$ unless $y\ge |x|$. So for $f_X(x)$ we integrate $c(y^2-x^2)e^{-y}$ from $y=|x|$ to $\infty$. The domain of $f_X(x)$ will indeed be all of $\mathbb{R}$. Remark: The function obtained in the OP is not a density function over any interval. For unless it is restricted to a subset of $[-\sqrt{2},\sqrt{2}]$, it is sometimes negative. And if restricted, it integrates to less than $1$.
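As a numerical sanity check (my addition, not part of the exchange): with $c=1/8$, the density $c(y^2-x^2)e^{-y}$ on $y\ge|x|$ used in the answer really does integrate to $1$. The inner $x$-integral is analytic, $\int_{-y}^{y}(y^2-x^2)\,dx = 4y^3/3$, leaving a one-dimensional integral that a crude midpoint Riemann sum handles:

```python
import math

c = 1.0 / 8.0

# Total mass = ∫₀^∞ c · (4y³/3) · e^{−y} dy, after integrating out x analytically.
def integrand(y):
    return c * (4.0 * y**3 / 3.0) * math.exp(-y)

# Midpoint rule on [0, 60]; the tail beyond 60 is astronomically small.
N, Y = 100_000, 60.0
h = Y / N
total = sum(integrand((k + 0.5) * h) for k in range(N)) * h
assert abs(total - 1.0) < 1e-5
print(total)
```

In closed form the same computation is $\tfrac16\int_0^\infty y^3 e^{-y}\,dy = \tfrac16\,\Gamma(4) = 1$, which is exactly why $c=1/8$ is the right normalizing constant.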
H: Find the basis of a polynomial vector space where the derivative at $\pi$ is $0$ I am working in the $P_3$ space. Let $W = \{p(t) \in P_3 : p'(\pi) = 0\}$ and I need to find a basis for $W$. What I've done so far: $$p(t) = at^3 + bt^2 + ct + d$$ $$p'(t) = 3at^2 + 2bt + c$$ $$p'(\pi) = 3a\pi^2 + 2b\pi + c = 0, \text{ so } c = -3a\pi^2 - 2b\pi$$ then $$p(t) = at^3 + bt^2 - 3a\pi^2 t - 2b\pi t + d = at^3 - 3a\pi^2 t + bt^2 - 2b\pi t + d$$ AI: A polynomial $p(t)$ satisfies $p(\pi) = 0$ iff $(t-\pi) \mid p(t)$. In other words, $p \in W$ iff $$ p'(t) = (t-\pi)(at +b) $$ So try to find $p_1$ and $p_2$ such that $$ p_1'(t) = t(t-\pi), \text{ and } p_2'(t) = (t-\pi) $$ then check that they form a basis for $W$ together with $p_3 = 1$
H: Hartshorne, exercise II.2.18: a ring morphism is surjective if it induces a homeomorphism into a closed subset, and the sheaf map is surjective Let $\phi:A\to B$ be a ring morphism, and let $f:X=Spec(B)\to Y=Spec(A)$ be the induced map of affine schemes. I'm trying to show that if $f$ is a homeomorphism onto a closed subset of $Y$ and $f^\#:\mathcal{O}_Y\to f_*(\mathcal{O}_X)$ is surjective, then $\phi$ is surjective. Hartshorne suggests to factor the map through the quotient. So let $\pi:A\to A/ker \phi$ and $\tilde \phi:A/ker \phi\to B$ be the canonical maps. $\tilde \phi$ induces a map $\tilde f:X\to Y'=Spec (A/ker\phi)$, and $\pi$ induces a map $g:Y'\to Y$. My idea is: prove that $g_*\tilde f^\#$ is an isomorphism. Then, if we apply the global sections functor to it, we get $\tilde \phi$ which is then also an isomorphism. First, $g_*\tilde f^\#$ is surjective, since $f^\#=g_*\tilde f^\# \circ g^\#$, and $f^\#$ is surjective by hypothesis. Now, since $\tilde \phi$ is injective, we get that $\tilde f^\#$ is injective (previous part of the exercise). The direct image functor $g_*$ is left exact (since it is the right adjoint to the inverse image functor $g^{-1}$), so $g_*\tilde f^\#$ is also injective. Thus $g_*\tilde f^\#$ is an isomorphism. Then $(g_*\tilde f^\#)_Y=\tilde f^\#_{Y'}=\tilde \phi$ is an isomorphism. QED Something must be wrong with my argument since I didn't use the hypothesis that $f$ is a homeomorphism onto a closed subset of $Y$. I've seen solutions online that do the following: prove that $\tilde f$ is a homeomorphism, then somehow (not made explicit) deduce surjectivity of $\tilde f^\#$ from surjectivity of $g_*\tilde f^\#$. Then $\tilde f$ is an isomorphism of affine schemes, thus $\tilde \phi$ is an isomorphism of rings. How do they deduce surjectivity of $\tilde f^\#$ from surjectivity of $g_*\tilde f^\#$? AI: Your solution is correct.
The hypothesis on the underlying map is superfluous, and actually follows from the surjectivity of the sheaf map. I remember being thrown off by this question for the same reason, last year. There are a few of those in Hartshorne; stay vigilant! Here's another way to see this, if you want to familiarize yourself with this line of thought. Let $\mathcal I$ be the kernel of $f^\sharp$. Then we have the sequence of quasi-coherent sheaves on $Y$: $$0 \to \mathcal I \to \mathcal O_Y \to f_* \mathcal O_X \to 0.$$ Since an affine scheme has no cohomology, we get an exact sequence $$0 \to I \to A \to B \to 0,$$ so the map $A \to B$ is surjective.
H: Counting primes by counting numbers of the form $6k \pm 1$ which are not prime Again, pondering on twin primes, I came upon the following result. It baffles me a bit, so could someone give more intuitive reasoning why it works. First, define a function $P_6$ as $$P_6(n)=\begin{cases} 0, \ \ 6n-1 \not\in \mathbb P \wedge 6n+1 \not\in \mathbb P \\ 1, \ \ (6n-1 \not\in \mathbb P \wedge 6n+1 \in \mathbb P) \vee (6n-1 \in \mathbb P \wedge 6n+1 \not\in \mathbb P)\\ 2, \ \ 6n-1 \in \mathbb P \wedge 6n+1 \in \mathbb P \end{cases}$$ So $P_6(n)$ has value $0$ if neither of the numbers around $6n$ is a prime, $1$ if either but not both are primes, and $2$ if both are. Let sets $P_6^0,P_6^1,P_6^2$ be the corresponding sets of indexes where $P_6 = 0$, $1$, or $2$, so, for example, $\forall n\in P_6^1, P_6(n) = 1$ Define three new functions using the indicator functions of above sets: \begin{cases} \pi_{6\bullet}^0 (n) = \sum_{i=1}^n 1_{P_6^0}(i) \\ \pi_{6\bullet}^1 (n) = \sum_{i=1}^n 1_{P_6^1}(i) \\ \pi_{6\bullet}^2 (n) = \sum_{i=1}^n 1_{P_6^2}(i) \end{cases} So these functions tell how many such indexes $1 \leq s \leq n$ there are for which the number of primes surrounding $6s$ is $i$, $i \in \{0,1,2\}$. These functions have the following relations: \begin{equation} \pi_{6\bullet}^0 (n)+\pi_{6\bullet}^1 (n)+\pi_{6\bullet}^2 (n) = n \ \ \ \ \ (1) \end{equation} and \begin{equation} \pi(6n+1)-2 = \pi_{6\bullet}^1 (n)+2 \pi_{6\bullet}^2 (n) \ \ \ \ \ (2). \end{equation} Here $\pi(n)$ is the prime counting function and it has argument $6n+1$ because the biggest number we test is indeed $6n+1$, and we have to remove $2$ because the first two primes are not reachable via the number six. Now, from (1) we get \begin{equation} \pi_{6\bullet}^2 (n) = n-\pi_{6\bullet}^0 (n)-\pi_{6\bullet}^1 (n) \ \ \ \ \ (3). \end{equation} Substituting into (2) we get $$\pi(6n+1) = 2n-2\pi_{6\bullet}^0 (n)-\pi_{6\bullet}^1 (n)+2.$$ This works.
For example $\pi_{6\bullet}^0 (5000) = 2223$ and $\pi_{6\bullet}^1 (5000) = 2311$, and $10000-2\cdot 2223-2311+2 = 3245 = \pi(30001).$ Can someone offer a bit more intuition on how this works? The quantity $2\pi_{6\bullet}^0 (n)+\pi_{6\bullet}^1 (n)$ is the number of numbers of the form $6k-1$ or $6k+1$ between 5 and $6n+1$ which are not prime, and when this is subtracted from $2n$ we get the number of primes between 5 and $6n+1$. How? AI: An integer has a chance of being prime if it is not divisible by 2 or 3. Call such a number a potential prime. The number of potential primes between 5 and $6n+1$ (again, that means integers not divisible by 2 or 3) is $2n$, because numbers not divisible by 2 or 3 occur in pairs $(6k-1, 6k+1)$ around the multiples of 6. To get the actual number of primes, we have to subtract from $2n$ the number of these potential primes that are not prime. For each $k \le n$, see if the two neighbors of $6k$ are primes, and if not, subtract from $2n$ accordingly. If neither $6k-1$ nor $6k+1$ is prime, subtract 2 to account for the two non-primes. If exactly one neighbor of $6k$ is prime, we subtract 1. We can subtract all the 2's at once by subtracting $2\pi_{6\bullet}^0 (n)$, and we can subtract all the 1's at once by subtracting $\pi_{6\bullet}^1 (n)$. This gives the number of primes between 5 and $6n+1$. Taking into account the primes 2 and 3, the result is the formula you wanted an explanation for, $$\pi(6n+1) = 2n-2\pi_{6\bullet}^0 (n)-\pi_{6\bullet}^1 (n)+2\textrm{.}$$
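The quoted figures and the identity itself can be reproduced with a short sieve (my sketch, not part of the thread):

```python
def prime_sieve(limit):
    # Sieve of Eratosthenes; is_p[k] == 1 iff k is prime.
    is_p = bytearray([1]) * (limit + 1)
    is_p[0:2] = b"\x00\x00"
    for k in range(2, int(limit ** 0.5) + 1):
        if is_p[k]:
            is_p[k * k :: k] = bytearray(len(is_p[k * k :: k]))
    return is_p

n = 5000
is_p = prime_sieve(6 * n + 1)

# Classify each k = 1..n by how many of 6k−1, 6k+1 are prime.
pi0 = pi1 = pi2 = 0
for k in range(1, n + 1):
    hits = is_p[6 * k - 1] + is_p[6 * k + 1]
    if hits == 0:
        pi0 += 1
    elif hits == 1:
        pi1 += 1
    else:
        pi2 += 1

pi = sum(is_p)                          # π(6n+1)
assert (pi0, pi1) == (2223, 2311)       # the figures quoted in the question
assert pi == 2 * n - 2 * pi0 - pi1 + 2  # the identity under discussion
print(pi)  # → 3245
```

Note that the identity holds for every $n$, not just $n=5000$: it is just the counting argument from the answer rewritten with $\pi^2_{6\bullet}$ eliminated via relation (1).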
H: How to tell $\overline {(a,b)}=[a,b]$, $\overline{\{\frac{1}{n}:n=1,2,3,\ldots\}}=\{\frac{1}{n}:n=1,2,3,\ldots\}\cup \{0\}$ This morning, reading a book that deals with metric spaces, I noticed this fact: show that $$\overline {(a,b)}=[a,b],$$ $$\overline{\left\{\frac{1}{n}\right\}}=\left\{\frac{1}{n}\right\}\cup \{0\}.$$ I do not know much about metric spaces, so I started to read about them, because the discipline looks interesting. Please help me solve this. Thanks in advance. AI: It is clear that $(a,b) \subset \overline{(a,b)}$, so we only need to show that $a,b \in \overline{(a,b)}$. For every sufficiently small $\varepsilon>0$ (say $\varepsilon<b-a$) we have $$ (a-\varepsilon,a+\varepsilon)\cap(a,b)=(a, a+\varepsilon)\ne \varnothing, $$ i.e. $a \in \overline{(a,b)}$. Similarly we have $$ (b-\varepsilon,b+\varepsilon)\cap(a,b)=(b-\varepsilon,b)\ne \varnothing, $$ i.e. $b \in \overline{(a,b)}$. In the same manner we have $$ A:=\left\{\frac{1}{n}:\ n \in \mathbb{N}\right\} \subset \overline{A}=\overline{\left\{\frac{1}{n}:\ n \in \mathbb{N}\right\}}, $$ and only need to show that $0 \in \overline{A}$. Since $\lim_{n\to \infty}\frac{1}{n}=0$, for every $\varepsilon>0$ there exists some integer $N=N(\varepsilon) \ge 1$ such that $$ \left|\frac{1}{n}\right| < \varepsilon \quad \forall n \ge N, $$ i.e. $$ (-\varepsilon,\varepsilon)\cap A =\left\{\frac{1}{n}:\ n \ge N\right\} \ne \varnothing, $$ and we conclude that $0 \in \overline{A}$.
H: (New - Updated 3/1/13) Is this argument correct in showing that this field extension does not have the stated degree? I have been asked to prove that $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}]=4$. I have some problems believing this to be true and have the following argument that assumes it to be true and seemingly derives a contradiction. If someone could verify whether in the following argument I have made any incorrect assertions or otherwise, I would be grateful. By the Tower Law for field extensions, we have: $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}]= [\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}(\sqrt2)]\cdot[\mathbb{Q}(\sqrt2):\mathbb{Q}]$. We have that $[\mathbb{Q}(\sqrt2):\mathbb{Q}]=2$. I have previously shown this result explicitly and have no problem working with this easy vector space over $\mathbb{Q}$. So, if the question is correct, then $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}(\sqrt2)]=2$. For a simple field extension $K(\alpha)$ over $K$, with $\alpha$ algebraic over $K$, we have $[K(\alpha):K]=$ deg(mipo$_K(\alpha))$. We need to show $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}(\sqrt2)]=[\mathbb{Q}(\sqrt2)(\sqrt{1+i}):\mathbb{Q}(\sqrt2)]=2$. Therefore, by the result just stated, with $\mathbb{Q}(\sqrt2)$ as our base field and $\sqrt{1+i}$ as the algebraic element we adjoin, we have that deg(mipo$_{\mathbb{Q}\sqrt2}(\sqrt{1+i}))=2$. If this is the case, then we have that for some $p,q\in \mathbb{Q}(\sqrt2)$ that $p+q\sqrt{1+i}+(1+i)=0$. But there are no non-trivial solutions to this equation, since writing $\sqrt{1+i}$ explicitly, i.e. $\sqrt{1+i}=\frac{\sqrt{1+\sqrt2}}{\sqrt2}+i\frac{\sqrt{-1+\sqrt2}}{\sqrt2}$ (I derived this form in radicals and it can be verified on Wolfram), we have that this equation is equivalent to $(a+b\sqrt2)+(c+d\sqrt2)(\frac{\sqrt{1+\sqrt2}}{\sqrt2}+i\frac{\sqrt{-1+\sqrt2}}{\sqrt2})+(1+i)=0$, where $a,b,c,d,e,f\in\mathbb{Q}$. Since the equation equals $0$, in particular the imaginary component vanishes, i.e. 
$(c+d\sqrt2)(\frac{\sqrt{-1+\sqrt2}}{\sqrt2})+1=0$. Ultimately, no consistent solution exists to this equation. Rearranging we obtain, $\sqrt{-1+\sqrt2}=\frac{-\sqrt2}{c+d\sqrt2}$, and after some rationalising it follows that $\sqrt{-1+\sqrt2}=\frac{2d}{c^{2}-2d^{2}}-\frac{c}{c^{2}-2d^{2}}\sqrt2$. Hence, $\sqrt{-1+\sqrt2}=p+q\sqrt2$, for $p,q \in \mathbb{Q}$ as in the previous step. But, by first squaring both sides, a contradiction is derived. After squaring, we have the quadratic equation: $-1+\sqrt2=p^{2}+2q^{2}+2pq\sqrt2$ From this, we obtain two equations, which are: $p^{2}+2q^{2}=-1$ and $2pq=1$. Solving does not lead to any solutions in $\mathbb{Q}$ as, for instance $q^2=\frac{-4\pm\sqrt{-16}}{16}$, contradicting $p,q\in\mathbb{Q}$. Indeed, $p^{2}+2q^{2}\neq-1$ in $\mathbb{Q}$. Therefore, by our chain of reasoning, deg(mipo$_{\mathbb{Q}\sqrt2}(\sqrt{1+i}))\neq2$. Hence by the earlier result, $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}(\sqrt2)]\neq2$ and thus by the first point (using the Tower Law), $[\mathbb{Q}(\sqrt2,\sqrt{1+i}):\mathbb{Q}]\neq4$. Please note that I have updated this solution after the feedback. Thank you all for your initial feedback (particularly T. Bongers). However, after writing the minimal polynomial in degree 2 (initially I had written it in degree 1), I believe that a contradiction also arises. I would appreciate any feedback as to whether the updated reasoning here is sound. AI: Your step three isn't correct. It should read There exist $p, q \in \mathbb{Q}(\sqrt{2})$ with $$\sqrt{1 + i}^2 + p \sqrt{1 + i} + q = 0$$ Your step uses a minimum polynomial of degree less than $2$.
H: French metro metric: difficulty to prove that $d(x, y) = 0\iff x = y$. I think that it is related to the special definition of the metric in my book: $$d(x, y) = \begin{cases}||x - y||,\mbox{ if }\exists \alpha\in\mathbb{R}: \alpha x + (1-\alpha) y = 0;\\ ||x|| + ||y||, \mbox{ otherwise.}\end{cases}$$ This way, for $x = y$, we have $\alpha x + (1 - \alpha) x = 0$, which is true only if $x = 0$, so we fall into the second case: $d(x, x) = ||x|| + ||x|| = 2||x|| \neq 0$ if $x\neq 0$. Seems like it doesn't satisfy the axioms of a metric. The case described in Wolfram MathWorld is simpler, because the condition for the first case is: $x = \alpha y$. This way, for $d(x, x)$, we have $\alpha = 1$, and everything works fine! Am I missing something or is there an error in the problem statement? Thanks in advance! AI: I agree. In some sense the actual condition they wanted is "$x,y$ lie on a common line through the origin", but the given condition breaks down as the points approach each other. If you divide the condition equation by $\alpha$ and then take $\alpha\to\infty$ it's clear that this would hold. Hence you should use Wolfram's definition which is almost the same.
H: Prove that any odd number can be expressed as $4n+1$ or $4n+3$ Prove that any odd number can be expressed as $$4n+1$$ or $$4n+3$$ I can see that this is true, but I am not certain on how to make a formal proof. AI: Hint: Every odd number can be written as $2k+1$. Now $k$ can be even ($k=2n$) or...
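The case split in the hint ($k$ even, $k=2n$, versus $k$ odd, $k=2n+1$) can be checked exhaustively for small odd numbers; a throwaway sketch, not part of the original hint:

```python
# Every odd m is 2k+1; writing k = 2n or k = 2n+1 gives m = 4n+1 or 4n+3.
for m in range(1, 1001, 2):       # odd numbers only
    k = (m - 1) // 2
    n, r = divmod(k, 2)
    if r == 0:
        assert m == 4 * n + 1     # k even  ⇒  m = 2(2n)+1   = 4n+1
    else:
        assert m == 4 * n + 3     # k odd   ⇒  m = 2(2n+1)+1 = 4n+3
    assert m % 4 in (1, 3)        # equivalently: odd ⇒ m ≡ 1 or 3 (mod 4)
print("verified for odd m < 1001")
```

The `m % 4` line is the same statement in modular-arithmetic form: an odd number cannot be $\equiv 0$ or $2 \pmod 4$, so the only residues left are $1$ and $3$.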
H: How to show that a function is in $O(h^m)$ I try to find the biggest $m$ such that $$f(h) = \frac{e^h-e^{-h}}{2h}-1$$ is $\in O(h^m)$ ($h \to 0, h > 0$). I thought I have to use the definition, so I wrote this: $$ \limsup_{h \to 0} \left | \frac{\frac{e^h-e^{-h}}{2h}-1}{h^m} \right | = \limsup_{h \to 0} \left | \frac{e^h-e^{-h}-2h}{h^m \cdot 2h} \right |$$ but now I don't know how to estimate further. This term should be $< \infty$. Can you help me? Thanks! AI: \begin{eqnarray} f(h):&=&\frac{e^h-e^{-h}}{2h}-1\\ &=&\frac{1}{2h}\left[\left(1+h+\frac{h^2}{2}+\frac{h^3}{3!}+\frac{h^4}{4!}+\ldots\right)-\left(1-h+\frac{h^2}{2}-\frac{h^3}{3!}+\frac{h^4}{4!}+\ldots\right)\right]-1\\ &=&\frac{1}{2h}\left(2h+\frac{2h^3}{3!}+\frac{2h^5}{5!}+\ldots\right)-1\\ &=&\frac{h^2}{3!}+\frac{h^4}{5!}+\ldots=h^2\left(\frac{1}{3!}+\frac{h^2}{5!}+\ldots\right)=:h^2g(h). \end{eqnarray} Since $$ \lim_{h\to 0}g(h)=\frac16, $$ we have $f(h)=O(h^2)$ as $h \to 0$.
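Numerically (a sketch I added, not part of the answer), the ratio $f(h)/h^2$ settles near $1/6$ as $h\to0^+$, matching the series computation; since the limit is a nonzero constant, $m=2$ is also the largest exponent that works:

```python
import math

def f(h):
    return (math.exp(h) - math.exp(-h)) / (2.0 * h) - 1.0

# The series gives f(h)/h² = 1/6 + h²/120 + ..., so the ratio should
# approach 1/6 ≈ 0.1667 from above as h shrinks.
for h in [1e-1, 1e-2, 1e-3]:
    ratio = f(h) / h**2
    assert abs(ratio - 1.0 / 6.0) < 1e-2
print(f(1e-2) / 1e-4)
```

For much smaller $h$ the subtraction of nearly equal floating-point quantities starts to dominate, so $h\approx10^{-3}$ is about as far as a naive double-precision check should be pushed.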
H: Is there an example of fields $F \subseteq K \subseteq L$ where $L/K$ and $K/F$ are normal but $L/F$ is not normal? I'm searching (I searched this site first) for an example of fields $F \subseteq K \subseteq L$ where $L/K$ and $K/F$ are normal but $L/F$ is not normal. Presenting some fields just for $F$ or $L$, instead of all three fields, will help me too. Thanks for your attention. AI: Hint: Consider $$\Bbb{Q} \subseteq \cdots \subseteq \Bbb{Q}[\sqrt[4]{2}]$$
H: Mapping on Cauchy Sequences How can I show that $f(x)=x^2$ maps Cauchy sequences to Cauchy sequences? That is the function preserves the Cauchy property. AI: It is a fact that a sequence in $\mathbb R$ is Cauchy iff it is convergent. Now if $(x_n)_{n\in\mathbb N}$ is a Cauchy sequence then it is a convergent sequence. Therefore $(f(x_n))_{n\in\mathbb N}=(x_n^2)_{n\in\mathbb N}$ is a convergent and so a Cauchy sequence. Note that if this is a homework maybe this is not the correct way to prove it. It depents on what you have been taught.
H: Setting up double integrals in polar coordinates I am currently studying double integrals in polar coordinates and I'd like clarification on some issues I'm having. Suppose I have a disk centered at $(1,0)$ with radius 1. How do I set up the integrals by (a) slicing up the disk where $\theta$ is constant (b) slicing up the disk where $r$ is constant. Thanks. AI: Here's a big hint: Show that the circle is given in polar coordinates by the equation $r=2\cos\theta$. What is the interval on which $\theta$ lives? A picture will help. EDIT: Now, as with cartesian coordinates, for any $-\pi/2\le \theta\le\pi/2$, $r$ goes from what to what? And, since you insist, for any $0\le r\le 2$, then $\theta$ goes from what to what? In both cases, the equation I gave you in the first paragraph is relevant.
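Following the hint, slicing with $\theta$ constant gives, for each $\theta\in[-\pi/2,\pi/2]$, the range $0\le r\le 2\cos\theta$, so the area is $\int_{-\pi/2}^{\pi/2}\int_0^{2\cos\theta} r\,dr\,d\theta$. A crude midpoint-rule check (my sketch, not from the answer) that this setup recovers the disk's area $\pi$:

```python
import math

# Inner r-integral is analytic: ∫₀^{2cosθ} r dr = 2cos²θ.
# The remaining θ-integral over [-π/2, π/2] should equal π, the area
# of a unit disk.
N = 100_000
a, b = -math.pi / 2, math.pi / 2
h = (b - a) / N
area = sum(2.0 * math.cos(a + (k + 0.5) * h) ** 2 for k in range(N)) * h
assert abs(area - math.pi) < 1e-6
print(area)
```

The exact evaluation is $\int_{-\pi/2}^{\pi/2} 2\cos^2\theta\,d\theta = \int_{-\pi/2}^{\pi/2}(1+\cos 2\theta)\,d\theta = \pi$, as the numeric check suggests.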
H: Show that $f=g$, if $f(z)=g(z)$ for $z\in \partial A$ with $A$ a bounded region Let $A$ be a bounded region, and $f$, $g$ continuous functions from $\bar{A}$ to $\mathbb{C}$. Suppose that these functions are holomorphic in the region and agree on the boundary. Prove they are the same. I think this problem is solved by the maximum modulus principle, where we know that: $$M=\sup_{z\in \partial A}|f(z)|=\sup_{z\in \partial A}|g(z)|$$ consequently: $$|f(z)|\le M$$ and $$|g(z)|\le M$$ However, I do not know how to use these observations; I am out of ideas. How can I use that the region is bounded? AI: Hint: The function $h(z) := f(z) - g(z)$ is holomorphic in $A$ and continuous on $\overline{A}$. What is its value on the boundary, and how does the maximum modulus principle give the result now?
H: Ideal of a ring of matrices Let $F$ be a (commutative) field. $R$, the matrices of the form $ \left( \begin{array}{ccc} a & b \\ 0 & c \\ \end{array} \right)$ with $a,b,c \in F$, is a subring of the ring $M_2(F)$ of 2 x 2 matrices with entries in $F$. Give a non-trivial two-sided ideal $I$ of $R$. I read a lot on ideals (course notes and online), but I have a hard time understanding the concept of ideals... Especially in this example with matrices. How would one define the ideal of $R$ in this context? AI: One easy way to construct ideals is to look at the singly generated ideals. These are of the form $RxR$ for some fixed element $x$. Of course you run the risk that $RxR=R$ and then the ideal is trivial. But if we try a couple examples we can make them work: First, try $x=\begin{bmatrix}0&1\\0&0\end{bmatrix}$. We have $$ \begin{bmatrix}a&b\\0&c\end{bmatrix}\,\begin{bmatrix}0&1\\0&0\end{bmatrix}\,\begin{bmatrix}d&e\\0&f\end{bmatrix}=\begin{bmatrix}0&ae\\0&0\end{bmatrix}. $$ So the set $$ I=\left\{\begin{bmatrix}0&a\\0&0\end{bmatrix}:\ a\in F\right\} $$ is a proper two-sided ideal. Or if you try $y=\begin{bmatrix}1&0\\0&0\end{bmatrix}$, $$ \begin{bmatrix}a&b\\0&c\end{bmatrix}\,\begin{bmatrix}1&0\\0&0\end{bmatrix}\,\begin{bmatrix}d&e\\0&f\end{bmatrix}=\begin{bmatrix}ad&ae\\0&0\end{bmatrix}. $$ Thus $$ J=\left\{\begin{bmatrix}a&b\\0&0\end{bmatrix}:\ a,b\in F\right\} $$ is another proper two-sided ideal.
H: Probability task (Find probability that the chosen ball is white.) I have this task in my book: First box contains $10$ balls, from which $8$ are white. Second box contains $20$ from which $4$ are white. From each box one ball is chosen. Then from previously chosen two balls, one is chosen. Find probability that the chosen ball is white. The answer is $0.5$. Again I get the different answer: There are four possible outcomes when two balls are chosen: $w$ -white, $a$ - for another color $(a,a),(w,a),(a,w),(w,w)$. Each outcome has probability: $\frac{2}{10} \cdot \frac{16}{20}; \frac{8}{10} \cdot \frac{16}{20}; \frac{2}{10} \cdot \frac{4}{20}; \frac{8}{10} \cdot \frac{4}{20};$ In my opinion the probability that the one ball chosen at the end is white is equal to the sum of last three probabilities $\frac{8}{10} \cdot \frac{16}{20} + \frac{2}{10} \cdot \frac{4}{20} + \frac{8}{10} \cdot \frac{4}{20}=\frac{21}{25}$. Am I wrong or there is a mistake in the answer in the book? AI: In case of $(w,a)$ or $(a,w)$ you need to consider that one of these two balls is chosen (randomly as by coin tossing, we should assume). Therefore these cases have to be weighted by a factor of $\frac 12$.
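The weighting in the answer can be made explicit with exact rational arithmetic (a sketch I added, not part of the thread); it also reproduces the asker's $21/25$, which counts a different event:

```python
from fractions import Fraction as F

p1 = F(8, 10)   # P(white from the first box)
p2 = F(4, 20)   # P(white from the second box)

# The final ball is one of the two drawn, chosen uniformly, so each
# mixed outcome (w,a) or (a,w) contributes with weight 1/2.
p_white = (p1 * p2                                       # (w,w): surely white
           + (p1 * (1 - p2) + (1 - p1) * p2) * F(1, 2))  # mixed outcomes
assert p_white == F(1, 2)

# The unweighted sum from the question is P(at least one white among the
# two drawn) — a different question with answer 21/25.
p_at_least_one = p1 * p2 + p1 * (1 - p2) + (1 - p1) * p2
assert p_at_least_one == F(21, 25)
print(p_white)  # → 1/2
```

So the book's $0.5$ is correct once the final uniform choice between the two balls is included; dropping that factor of $\tfrac12$ is exactly what produces $21/25$.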
H: If $G, H, K$ are divisible abelian groups and $G \oplus H \cong G \oplus K$ then $H \cong K$ This is an exercise in Hungerford. But can somebody explain why is the following not a counter-example? Let $G$ be the direct sum of $|\mathbb{R}|$ copies of $\mathbb{Q}$. Let $K$ be the direct sum of $|\mathbb{N}|$ copies of $\mathbb{Q}$. Then $G \oplus \mathbb{Q} \cong G \oplus K$ but $\mathbb{Q}$ is not isomorphic to $K$. Indeed, suppose $f : \mathbb{Q} \rightarrow K$ is a $\mathbb{Z}$-module isomorphism. We may show that $f$ is a $\mathbb{Q}$-module isomorphism obtaining thus a contradiction. For any $v \in Q$ non-zero and a non-zero natural $b$ there is a unique $w$ such that $bw = v$, that is $w = (1/b)v$. We have $b[(1/b)v] = v$ so $b f((1/b) v) = f(v)$. The same uniqueness argument applies in $K$ since it is torsion-free, so $f((1/b) v) = (1/b) f(v)$. Hence $f$ is a $\mathbb{Q}$-module isomorphism. AI: Yes. A more simple counterexample is $\mathbb{Q}^{\oplus \mathbb{N}} \oplus \mathbb{Q} \cong \mathbb{Q}^{\oplus \mathbb{N}} \oplus 0$. Are you sure that the exercise in Hungerford is exactly as stated?
H: Is the following true: $\forall x\in\mathbb R: \exists y\in\mathbb R: x^2+y^2=-1$ How would I solve the following question. And determine if its true or false. 1.$\forall x \in R , \exists y\in R, x^2+y^2=-1$ 2: $\exists x\in R,\forall y \in R, x^2+y^2=-1$ For the first one I think I can justify it is false. As for any arbitrary x must y must be $y=\sqrt{-x^2-1}$ which would not be real number. The second one I can say that two numbers squared cannot be a negative.So it would be false? AI: Yes, both statements are false because the sum of two squared real numbers, whatever those numbers are, will never be negative.
H: Make the vector $[1,1]$ turn by an angle $-\pi/4$, with complex numbers We have $[1,1]$ and $\theta = -\pi/4$. Here is my attempt: $(\cos(-\pi/4) + i \sin(-\pi/4)) \cdot (x+iy)$ = $(\sqrt{2}/2 - i \sqrt{2}/2) (1+i)$ = $\sqrt{2}/2 - i^2\sqrt{2}/2 $ = $[\sqrt{2}/2 + \sqrt{2}/2]$ I'm not sure if I'm adding up the parts correctly... For some reason I end up with only one part in my final vector. AI: Hint: $$\begin{pmatrix}\cos\frac\pi4&-\sin\frac\pi4\\\sin\frac\pi4&\cos\frac\pi4\end{pmatrix}\binom 11=\binom{\cos\frac\pi4-\sin\frac\pi4}{\sin\frac\pi4+\cos\frac\pi4}=\binom0{\sqrt2}$$
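Both computations can be checked with Python's built-in complex numbers (a sketch I added, not part of the exchange). Rotating $1+i$ by $\theta=-\pi/4$, as in the question, lands on the positive real axis at $\sqrt2$, i.e. the vector $(\sqrt2,0)$ — which is why only one nonzero component survives — while the matrix in the hint rotates by $+\pi/4$ and lands at $(0,\sqrt2)$:

```python
import cmath
import math

z = 1 + 1j                                    # the vector [1, 1]

# Rotation by θ is multiplication by e^{iθ}.
w_minus = cmath.exp(-1j * math.pi / 4) * z    # θ = -π/4, as in the question
w_plus = cmath.exp(1j * math.pi / 4) * z      # θ = +π/4, as in the hint's matrix

assert abs(w_minus - math.sqrt(2)) < 1e-12    # (√2, 0): on the real axis
assert abs(w_plus - 1j * math.sqrt(2)) < 1e-12  # (0, √2): on the imaginary axis
print(w_minus, w_plus)
```

Geometrically this is no surprise: $1+i$ has modulus $\sqrt2$ and argument $\pi/4$, so rotating by $\mp\pi/4$ places it exactly on one of the coordinate axes, and one component of the resulting vector must vanish.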
H: Advantages to continuity at a point A scalar field $f : \mathbb{R}^n \to \mathbb{R}$ is said to be continuous at a point $\boldsymbol{a}$ if $$ \lim_{\boldsymbol{x} \to \boldsymbol{a}} f(\boldsymbol{x}) = f(\boldsymbol{a}) $$ So in other words, $f$ has to be defined at $\boldsymbol{a}$ and also has to have a limit at $\boldsymbol{a}$. But isolated points are also defined to be continuous. It seems to me like, given this definition, there isn't really an advantage to having a function continuous at a point, and continuity is only useful on an set or interval, because of the following: $f$ can be continuous at a point but not differentiable (e.g. $f(\boldsymbol{x}) = \|\boldsymbol{x}\|$ at $\boldsymbol{0}$) $f$ can be continuous at a point but the limit doesn't exist (e.g. $f(\boldsymbol{x}) = \sqrt{-\|\boldsymbol{x}\|}$ at $\boldsymbol{0}$ if $f(\boldsymbol{x}) \subset \mathbb{R}$) $f$ can be continuous at a point but the first-order partials don't exist So if $f$ is continuous at a point $\boldsymbol{a}$, is there anything we can say about $f$ at $\boldsymbol{a}$? Or is it just a nice-to-have? AI: Continuity is a very important property to have, in particular when studying topology. For analysis you usually want something more, for example you'd like the function to be $C^1$ or $C^\infty$ in order to differentiate. Also, it is nice to have a definition of continuity at a point because it shows that continuity is a local property.
H: Proving orientability of manifold I don't know how to prove the following: $RP^n$ is an orientable manifold if $n$ is odd. Any help is welcome. AI: Hint: Decide when the antipodal map on $S^n$ is orientation-preserving.
H: Proving a few things about $ L^{p} $-spaces I am new to $ L^{p} $-spaces and am trying to prove a few things about them. Therefore, I would like to ask you whether I have gotten the following right. Prove that $ {L^{\infty}}(I) \subseteq {L^{p}}(I) $ for all $ p \in (0,\infty) $, where $ I = [a,b] $ is a closed bounded interval, by showing that $$ \text{$ \forall $ measurable functions $ f $ defined on $ [a,b] $}: \quad \| f \|_{L^{p}} \le (b - a)^{\frac{1}{p}} \| f \|_{L^{\infty}}. $$ My idea was to invoke the fact that $$ \| f \|_{L^{p}} \stackrel{\text{def}}{=} \left( \int_{I} |f|^{p} \, d{\mu} \right)^{\frac{1}{p}} \le \| f \|_{\infty} \left( \int_{I} 1 \, d{\mu} \right)^{\frac{1}{p}} $$ for all measurable functions $ f $ defined on $ [a,b] $. Is this correct? Prove that $ \displaystyle {L^{\infty}}(I) \subsetneq \bigcap_{p \in (0,\infty)} {L^{p}}(I) $. Inclusion is clear from (1), and a function that illustrates why the inclusion is strict is $ f(x) \stackrel{\text{def}}{=} \dfrac{1}{x^{x}} $ on the interval $ [0,1] $ (its $ L^{p} $-norm exists for each $ p \le \infty $, but it is not bounded). Prove that $ {L^{p}}(\mathbb{R}) $ is not a subset of $ {L^{\infty}}(\mathbb{R}) $ and vice-versa. Take the function $ f(x) \stackrel{\text{def}}{\equiv} 3 $. It is not integrable over $ \mathbb{R} $, but it is bounded. For the reverse implication, take the function \begin{equation} f(x) \stackrel{\text{def}}{=} \left\{ \begin{array}{ll} \frac{1}{\sqrt{x}} & \text{if $ 0 \le x \le 1 $}; \\ 0 & \text{if $ x \in (- \infty,0) \cup (1,\infty) $}. \end{array} \right. \end{equation} It is integrable over $ \mathbb{R} $, but it is not bounded. Maybe I have made some mistakes here or maybe I am missing something, so I appreciate any kind of help!!! P.S.: I have just noticed that my example $ \dfrac{1}{x^{x}} $ does not work, since this function is actually bounded on $ [0,1] $. AI: Number 1 is correct, and so is 3. 
Can you justify, with a calculation, why the function you wrote in #2 works?
H: Why are we Multiplying here instead of Adding? Three small towns designated by $A$, $B$ and $C$ are interconnected by a system of two-way roads as described below [hopefully the following answer is enough info without picture] part a) In how many ways can Linda travel from town $A$ to town $C$? I calculated $14$ and the answer key agrees with me. part b) How many different round trips can Linda travel from town $A$ to town $C$ and back to town $A$? I said $14+14$ because you have $14$ ways to go there, and from there you have $14$ ways to go back. The answer key has $14\times14$, why? I've been given the following tips to help determine if adding or multiplying is necessary. Does anyone have any tips to add or disagree with any of the ones on the list? unions, the word "either" and the word "or" suggest to use the sum rule intersections, the word "and" or if steps are implied suggest to use the product rule AI: There are $14$ ways to go there, and for each of those $14$ ways there are $14$ more ways to go back. So, if you took route $1$ there, the number of routes back is $14$. If you took route $2$ there, the number of routes back is $14$ ($28$ total). Do this $14$ times because there are $14$ routes there, and the result is $14 \times 14$.
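The product rule can be seen by brute enumeration. Since the picture of the road system isn't reproduced here, the sketch below (my addition) just labels the $14$ routes abstractly, using the count from part (a):

```python
from itertools import product

# Hypothetical labels; only the count 14 (from part a) matters here.
routes_there = [f"there_{i}" for i in range(1, 15)]
routes_back = [f"back_{i}" for i in range(1, 15)]

# Product rule: a round trip IS an (out, back) pair — one choice from
# each set, made in sequence ("steps are implied").
round_trips = list(product(routes_there, routes_back))
assert len(round_trips) == 14 * 14 == 196

# The sum rule would count picking ONE route that is either an out-route
# or a back-route — a different question, and not a round trip at all.
assert len(routes_there) + len(routes_back) == 28
print(len(round_trips))  # → 196
```

Listing the pairs makes the tip concrete: "there AND back" is a sequence of steps (multiply), while "either…or" would describe a single choice from a union (add).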
H: Is this a set in New Foundations theory of sets: A={x∈X: x∉f(x)} when X=universal set and f(x)=x? Does the formula A={x∈X: x∉f(x)} define a set in New Foundations theory of sets (NF) when X=universal set and f(x)=x? If not: WHY? It can be seen that many elements of universal set (X) satisfy the condition of the formula: If x=∅(it is possible since ∅ is an element of universal set(∅∈X)): Then: f(∅)=∅ and ∅∉∅. So, in this case x∈X and x∉f(x), thus ∅∈A. We already have one element of A! If x≠∅: This case is also possible, since there is an infinite number of elements of X, for which f(x)=x, but x∉f(x), for example f(x)=x={1,2,3}. But {1,2,3}∉{1,2,3}. Thus infinite number of elements of A exits, that satisfy condition: x∈X: x∉f(x). Nonetheless I have to believe, the formula A={x∈X: x∉f(x)} does not define a set in NF, because if it did, the Cantor’s theorem would be valid in NF. But Cantor's theorem is not valid in NF. Does this set violate some of the axioms of NF? AI: This set cannot be defined because the formula $x \notin x$ appearing in the putative set definition $\{x \in X : x \notin x\}$ cannot be stratified. The restriction of the separation axiom schema to stratified formulas is how New Foundations manages to avoid Russell's paradox even though it allows a universal set. To be more precise, I should say that the notation $\{x \in X : x \notin x\}$ cannot be said to define a set solely on the basis of the separation axiom schema of NF. For certain $X$ the notation $\{x \in X : x \notin x\}$ may happen to define a set; for example if $X$ is the empty set it is safe to say that the notation $\{x \in X : x \notin x\}$ defines a set (also the empty set.) But if $X$ is the universal set then the notation $\{x \in X : x \notin x\}$ cannot define a set, or the argument of Russell's paradox would lead to a contradiction.
H: About minimal prime ideals and varieties Let $W$ be a variety, and $I=\mathbb{I}(W)$, then we have $$ I=\operatorname{rad}(I)=P_1\cap\cdots\cap P_n $$ where the $P_i$'s are minimal prime ideals containing $I$. Thus we have $$ W=\mathbb{V}(I)=\mathbb{V}(P_1\cap\cdots\cap P_n)\supset\mathbb{V}(P_1)\cup\cdots\cup\mathbb{V}(P_n) $$ I don't think we can deduce $W=\mathbb{V}(P_1)\cup\cdots\cup\mathbb{V}(P_n)$, but I failed to make a counterexample. Could anyone make an example for me? Or prove the equality? AI: $V(I \cap J) = V(I \cdot J) = V(I) \cup V(J)$ is a general fact, and easy to prove.
H: True or false: $\forall x \in \Bbb R,\exists y\in \Bbb R,y+x=x+y$ How would one tell whether these are true or false? 1: $\forall x \in \Bbb R,\exists y\in \Bbb R,y+x=x+y$ 2: $\exists x \in \Bbb R,\forall y \in \Bbb R,y+x=x+y$ For the first, I think it would be true because if $x=8$ then $y+8=8+y$ holds for, say, $y=2$. So to justify it, would I say it is true because of the commutative property? AI: Both statements are true, regardless of our choices for $x, y$. $$y + x = x+y$$ is true for any arbitrary pair of real numbers $(x, y)$, and as you note, this is due to the commutativity of addition over the real numbers. Note: In your first case, if $x = 8$, the statement holds regardless of the value of $y\in \mathbb R$, and not just when $y = 2$.
H: How do I take the limit by invoking L'Hospital's rule? $\lim_{x \to 1}\left(\frac{x}{x-1}-\frac{1}{\ln(x)}\right)$ Need to take the limit: $$\lim_{x \to 1}\left(\frac{x}{x-1}-\frac{1}{\ln(x)}\right) = \lim_{x \to 1}\left(\frac{x\cdot \ln(x)-x+1}{(x-1)\cdot \ln(x)}\right)=(0/0)$$ Now I can use L'Hospital's rule: $$\lim_{x \to 1}\left(\frac{1\cdot \ln(x)+x\cdot \frac{1}{x}-1}{1\cdot \ln(x)+(x-1)\cdot\frac{1}{x}}\right)= \lim_{x \to 1}\left(\frac{\ln(x)+1-1}{\ln(x)+\frac{(x-1)}{x}}\right)=\lim_{x \to 1}\left(\frac{\ln(x)}{\frac{(x-1)+x \cdot \ln(x)}{x}}\right)=\lim_{x \to 1}\frac{x\cdot \ln(x)}{x-1+x\cdot \ln(x)}=\frac{1\cdot 0}{1-1+0}=(0/0)$$ As you can see I came to $(0/0)$ again. So what do I have to do to solve this problem? AI: You can apply L'Hospital's rule again on the new limit: $$ \lim_{x \to 1}\frac{x\cdot \ln(x)}{x-1+x\cdot \ln(x)}=\lim_{x\to 1}\frac{\ln(x)+1}{1+\ln(x)+1}=\frac{0+1}{0+1+1}=\frac{1}{2} $$ Alternatively, note that $$ \lim_{x \to 1}\frac{x\cdot \ln(x)}{x-1+x\cdot \ln(x)}=\lim_{x\to 1}\displaystyle\frac{1}{1+\frac{x-1}{x\cdot \ln(x)}} $$ So it suffices to compute $$ \lim_{x\to 1}\frac{x-1}{x\cdot\ln(x)}=\lim_{x\to 1} \frac{1}{1+\ln(x)}=1 $$ by L'Hospital's rule, and the original limit is $\frac{1}{1+1}=\frac{1}{2}$.
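A quick numerical sanity check supports the value $\frac{1}{2}$ (a sketch in Python; the helper name `f` is just for illustration):

```python
import math

def f(x):
    # f(x) = x/(x - 1) - 1/ln(x), the original expression (x > 0, x != 1)
    return x / (x - 1) - 1 / math.log(x)

# Approaching x = 1 from either side, the values settle near 0.5.
for x in [1.1, 1.01, 1.001, 0.999, 0.99]:
    print(x, f(x))
```

Note that evaluating much closer to $1$ invites floating-point cancellation, so moderate step sizes are the honest test here.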
H: Inclusion $[0,1]\rightarrow\mathbb{C}$ generates $C^{1}[0,1]$ as a Banach algebra I am trying to show that the inclusion map $x:[0,1]\rightarrow\mathbb{C}$ generates $A=C^{1}[0,1]$ as a Banach algebra. The first thing that occurred to me was to try using Stone-Weierstrass but somehow I'm not getting it. Also, I want to show that the map $[0,1]\rightarrow\Omega(A),t\mapsto\tau_{t}$ is onto, where $\Omega(A)$ is the space of characters on $A$ and $\tau_{t}(f)=f(t)$ for $f\in A$. If that is going to be true, then I must have $\phi(x)\in[0,1]$ for all $\phi\in\Omega(A)$ where $x$ is the inclusion map. But I can't see why this is so. AI: For the first it is exactly as you say: the algebra generated by $x$ is the algebra of polynomials on $[0,1]$, and this satisfies the conditions for Stone-Weierstrass. Regarding your second question, note that $\phi$ is necessarily positive. As the order you will consider is the pointwise one, you have $0\leq x\leq1$, and so $0=\phi(0)\leq\phi(x)\leq\phi(1)=1$.
H: Right invertible element My question is as follows: $a,b \in L$ where $L$ is a ring. We are given that $ab=1$. I need to prove or disprove that $a$ is invertible (meaning, there is an element $x \in L : ax=xa=1$). But how can we say that for sure? Just because $ab=1$ doesn't mean that $ba=1$; $ba$ could be something else altogether, since we weren't given that this is a commutative ring. Please give advice on how to solve this, thank you. Word-by-word translation of the question to make it clearer: Prove or disprove that if $a$ and $b$ are elements of $L$ and $ab=1$, then $a$ is invertible. AI: Suppose $L$ is the ring of $\mathbb R$-linear maps $l:\mathbb R[X]\to \mathbb R[X]$ (with multiplication being composition). Consider $D,I\in L$ defined by $D(P(X))=P'(X)$ and $I(P(X))=\int _0^XP(t)dt$. Then $D\circ I=Id$, but nevertheless $D$ is not invertible because it is not injective: it annihilates every constant polynomial.
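The hint's conclusion can be checked concretely (a sketch; polynomials are encoded as coefficient lists $[c_0, c_1, \ldots]$, a convention chosen here purely for illustration):

```python
from fractions import Fraction

def D(p):
    # formal derivative: [c0, c1, c2, ...] -> [c1, 2*c2, 3*c3, ...]
    return [Fraction(k) * c for k, c in enumerate(p)][1:]

def I(p):
    # integral from 0 to X: [c0, c1, ...] -> [0, c0, c1/2, ...]
    return [Fraction(0)] + [c / Fraction(k + 1) for k, c in enumerate(p)]

p = [Fraction(3), Fraction(0), Fraction(5)]  # the polynomial 3 + 5*X^2
assert D(I(p)) == p            # D composed with I is the identity: ab = 1
assert D([Fraction(7)]) == []  # yet D kills constants, so D is not injective
                               # and hence has no two-sided inverse
```

The second assertion is the heart of the counterexample: a map with a nontrivial kernel cannot be invertible, even though it has a right inverse.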
H: Prove or disprove: "For every $x\in G$ there exists some $y\in G$ such that $x=y^2$", where $G$ is a group. I am working on a question in the book A Book of Abstract Algebra by Pinter. The question asks to prove or disprove the following statement: For every $x\in G$ there exists some $y\in G$ such that $x=y^2$, where $G$ is a group. Now I am quite stumped by this. I tried the following: $$x=yy$$ thus $$x^{-1} = y^{-1}y^{-1}$$ Now since $y \in G$ we need to find an element $z=y^{-1}$ such that $x z^2=e$. I am not sure if this is the way to go to show the final result. If someone could help me out along the way that would be greatly appreciated. Thanks in advance! P.S. Another approach might be $$y^{-1}x = xy^{-1} =y$$ Now because a group is not always commutative, this could imply the statement is untrue... I am not sure. Thanks! AI: If you consider the group $\mathbb{Z}/2\mathbb{Z}$, i.e. the group of residues $\pmod{2}$ under addition, in which every element "squared" (added to itself) gives the identity, there can be no element whose "square" is equal to the non-identity element $1$. So the statement is false.
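The failure is not limited to this tiny abelian example: in $S_3$ the transpositions also have no square root. A brute-force check (a sketch, with permutations encoded as tuples):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

S3 = list(permutations(range(3)))
squares = {compose(y, y) for y in S3}

# Only the identity and the two 3-cycles arise as squares; the three
# transpositions, e.g. (1, 0, 2), have no square root in S3.
print(sorted(squares))
```

Since transpositions square to the identity and 3-cycles square to 3-cycles, only three of the six elements of $S_3$ are squares.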
H: Quick logic question about $P\leftrightarrow Q$, terminology I know that if we have $P\rightarrow Q$, $P$ can be called the antecedent and $Q$ the consequent or conclusion. If we have $P \leftrightarrow Q$, are there names for what we would call $P$ and $Q$ here? I am writing a proof where I am proving $P \leftrightarrow Q$ and I already showed $P\rightarrow Q$ and I am trying to show $Q\rightarrow P$ but the statement of $Q$ is long and I was wondering if there is a word for $Q$ that I could use. AI: If you have displayed the claim $$ P \iff Q$$ prominently, then you can subsequently call $P$ the "left hand side" and $Q$ the "right hand side".
H: Must the intersection of a certain decreasing sequence of open sets be non-empty I am trying to show the open mapping theorem. As I was trying to prove it, I made the following conjecture: Let $X$ be a complete metric space. Let $V_1,V_2,...$ be a sequence of open sets in $X$ such that: $$V_1\supseteq \overline{V_2}\supseteq V_2\supseteq \overline{V_3}\supseteq V_3\supseteq \overline{V_4}\supseteq V_4\supseteq \cdots $$ Then $\cap_{n=1}^{\infty}V_n\not=\emptyset$. Question 1: Is the above conjecture true? I would be able to prove the open mapping theorem if the conjecture is true, but I am curious about this question as well: Question 2: Does the conjecture hold even if we don't require $X$ to be complete? Thank you. Edit: Neal answered both questions. There is still one more question that I formulated after seeing the counterexamples of Neal. Question 3: Does the conjecture hold if $X$ is a complete bounded metric space? AI: No, the conjecture need not be true. Consider $V_i = (i,\infty)\subset\mathbb{R}$. Then: $$(1,\infty)\supseteq [2,\infty)\supseteq (3,\infty)\supseteq \cdots$$ but (by the Archimedean property) their intersection is empty. If $X$ is bounded but not complete, the conjecture still does not hold: consider the punctured interval $X = [-1,0)\cup(0,1]$ and $$V_i = \bigg(-\frac{1}{i},\frac{1}{i}\bigg)\cap X.$$ Even if $X$ is both bounded and complete, the conjecture can still fail. We'll take a disjoint union of countably many little balls and then let $V_i$ be the union of all but finitely many of them. Consider a separable infinite-dimensional Hilbert space with orthonormal basis $e_i$. Define auxiliary sets $$E_{i,\epsilon} = B(e_i,\epsilon).$$ Choose $\epsilon$ small enough so that the $E_{i,\epsilon}$ are pairwise disjoint and put $$V_i = \bigcup_{j=i}^\infty E_{j,\epsilon/i}$$ Now for each $i$ we have $\overline{V_i}\subseteq V_{i-1}$, and yet the intersection of all $V_i$ is empty.
Since $V_1$ is contained in the ball of radius $1 + \epsilon$, it is a subset of the bounded, closed metric space $\overline{B(0,1+\epsilon)}$. This example may be easier to visualize if you consider the simpler space defined as a countably infinite collection of intervals $[0,1]$ wedged at $0$, i.e., $$\bigsqcup_{i\in\mathbb{N}} [0,1]_i/(\forall i,j\ \ 0_i\sim 0_j).$$ Let $E_{j,\epsilon} = (1-\epsilon, 1]_j$, the open $\epsilon$-neighborhood of the endpoint of the $j^{th}$ interval, and let $V_i$ be the union of all but the first $i$ of the $E_{j,\epsilon/i}$s. As $i$ increases, the size of each $E_j$ shrinks, so $\overline{V_{i+1}}\subseteq V_i$ for all $i$. But any $x\in V_1$ is only contained in finitely many $\overline{V_i}$, so the intersection is empty. However, if $X$ is compact, you can get your result by arguing that $$V_1\cap \overline{V_2}\cap V_3\cap\cdots = \overline{V_1}\cap\overline{V_2}\cap\overline{V_3}\cap\cdots,$$ and a decreasing sequence of non-empty closed subsets of a compact space has non-empty intersection.
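The first counterexample $V_i = (i,\infty)$ is easy to probe mechanically: the nesting holds, yet every candidate point eventually drops out of some $V_i$ (a sketch):

```python
import math

def in_V(i, x):
    # V_i = (i, infinity)
    return x > i

x = 1000.5
i = math.ceil(x)  # by the Archimedean property such an i exists for any real x

# x lies in every earlier set but fails to lie in V_i, so x is not in the
# intersection of all V_i; since x was arbitrary, the intersection is empty.
assert all(in_V(j, x) for j in range(1, i))
assert not in_V(i, x)
```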
H: Suppose $\{f_k\}$ is a sequence of $M$-measurable functions on $X$. Let $p_1$ and $p_2\in [1,\infty)$, and suppose $f_k\in L^{p_1}\cap L^{p_2}$. Suppose $\{f_k\}$ is a sequence of $M$-measurable functions on $X$. Let $p_1$ and $p_2\in [1,\infty)$, and suppose $f_k\in L^{p_1}\cap L^{p_2}$. Also suppose there exist $g\in L^{p_1}$ and $h\in L^{p_2}$ such that $f_k\to g$ in $L^{p_1}$, $f_k\to h$ in $L^{p_2}$. Prove that $g=h$ ($\mu$ almost everywhere). So we know $||f_k-g||_{p_1}\to 0$ and $||f_k-h||_{p_2}\to 0$ as $k\to\infty$. That means $\lim_{k\to\infty}\int |f_k-g|^{p_1}d\mu= 0$ and $\lim_{k\to\infty}\int |f_k-h|^{p_2}d\mu= 0$. Suppose it's valid to bring the limit inside the integral, and $\lim f_k$ exists (call it $f$). Then $\int|f-g|^{p_1}d\mu=\int|f-h|^{p_2}d\mu=0$. That means $f=g=h$ almost everywhere. But how do I justify bringing the limit inside, and why does $\lim f_k$ exist? Can I apply the dominated convergence theorem? Is $|f_k-g|$ bounded? Thanks. AI: Convergence in $L^p$ spaces implies convergence in measure. Then $$f_k\to g\quad\text{and}\quad f_k\to h$$ in measure. By the uniqueness of limits of sequences of functions converging in measure, we can conclude $g = h$ almost everywhere. Note: Uniqueness of limits in measure is a consequence of the Hausdorffness of the topology of "convergence in measure". In general, if a space is Hausdorff, then limits of sequences are unique. Alternatively: Since $f_k\to g$ in $L^{p_1}$, there exists a subsequence $(f_{k_j})_j$ such that $f_{k_j}\to g$ pointwise a.e. Since $f_k\to h$ in $L^{p_2}$, its subsequence $(f_{k_j})_j$ must converge to $h$ as well in $L^{p_2}$. Since $f_{k_j}\to h$ in $L^{p_2}$, there exists a subsequence $(f_{k_{j_i}})_i$ which converges to $h$ pointwise a.e. Since $(f_{k_{j_i}})_i$ is a subsequence of $(f_{k_j})_j$ and $f_{k_j}\to g$ pointwise a.e., it follows that $f_{k_{j_i}}\to g$ pointwise a.e.
So on the set where $(f_{k_{j_i}})_i$ converges pointwise we have $g = h$. But this set is the whole space minus a set of measure zero, i.e. $g=h$ a.e.
H: Showing almost sure divergence Doing some exercises as preparation for an upcoming exam, but I'm sort of stuck at this exercise: Assume that $X_1,X_2,...$ is an i.i.d. sequence such that $X_1 \sim \mathcal{N} (\xi , \sigma^2)$, with $\xi>0$. Define $$ S_n=\sum_{k=1}^n \frac{X_k}{k} $$ Show that $S_n \to \infty $ almost surely as $n$ tends to infinity. I'm not aware of any theorem that can ease my way of showing this quickly, but I'm trying something in this direction: $$ S_n \stackrel{a.s.}{\to} \infty \iff \forall \varepsilon>0 : P(S_n>\varepsilon \text{ eventually})=1 \iff \forall \varepsilon>0: P( S_n \leq \varepsilon \text{ i.o.})=0 $$ $$ \Leftarrow \forall \varepsilon>0 : \sum_{n=1}^\infty P(S_n \leq \varepsilon) <\infty $$ And from here I don't know where to go, because I have no closed-form expression for the above probabilities, or even any idea if the condition holds. Any tips/tricks/solutions are welcome. AI: Let's begin with a result that has nothing to do with probability. If $(x_k)$ is a sequence of numbers where $\sum_{k=1}^n {x_k\over k}$ converges to a finite limit $\alpha$, then ${1\over n}\sum_{k=1}^n x_k\to 0$. Proof: Note that $\sum_{k=1}^n {x_k\over k}\to\alpha$ implies that the Cesàro averages also converge to $\alpha$: $${1\over n}\sum_{j=2}^n \left( \sum_{k=1}^{j-1}{x_k\over k}\right)\to\alpha. $$ But since $$\begin{eqnarray*} {1\over n}\sum_{j=2}^n \left( \sum_{k=1}^{j-1}{x_k\over k}\right) &=&{1\over n}\sum_{k=1}^n \left( \sum_{j=k+1}^n {x_k\over k}\right)\\[5pt] &=&\sum_{k=1}^n{x_k\over k}-{1\over n}\sum_{k=1}^n x_k. \end{eqnarray*}$$ we deduce that ${1\over n}\sum_{k=1}^n x_k\to 0$. Now, let's put probability back into the picture. The law of large numbers gives ${1\over n}\sum_{k=1}^n X_k\to\xi>0$ and therefore $\sum_{k=1}^n {X_k\over k}$ cannot converge to a finite value. To upgrade this non-convergence to $S_n\to\infty$, note that the centered series $\sum_k (X_k-\xi)/k$ has independent mean-zero terms with summable variances $\sigma^2/k^2$, so it converges almost surely by Kolmogorov's convergence theorem; hence $S_n = \sum_{k=1}^n (X_k-\xi)/k + \xi\sum_{k=1}^n 1/k \to \infty$ almost surely.
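The deterministic lemma at the start of the answer can be illustrated numerically with $x_k = (-1)^{k+1}$, for which $\sum_k x_k/k$ converges (to $\ln 2$) while the plain averages tend to $0$ (a sketch):

```python
import math

n = 100_000
xs = [(-1) ** (k + 1) for k in range(1, n + 1)]  # x_k = (-1)^{k+1}

series = sum(xk / k for k, xk in enumerate(xs, start=1))
average = sum(xs) / n

# The partial sums of x_k/k approach ln 2, while (1/n) * sum x_k approaches 0,
# exactly as the lemma predicts.
print(series, average)
```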
H: Finding solutions for $x^3\equiv 1 \bmod n$ How can I find all the numbers mod n such that $x^3\equiv 1 \bmod n$? Does it help if n is prime? AI: If $n$ is a power of a prime, $n=p^r$, $r\gt1$, then, if you can solve $x^3\equiv1\pmod p$, you can lift $x$ to a solution modulo $n$ by Hensel's Lemma, q.v. If you can factor $n$ as a product of powers of distinct primes, then you can solve the problem modulo each of these prime powers, and sew them together using the Chinese Remainder Theorem, q.v. If $n$ is not prime, and you can't factor it, life is more difficult.
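For modest $n$ the solutions can simply be enumerated; for large $n$ one would factor $n$, solve modulo each prime power (lifting via Hensel's Lemma), and recombine with the Chinese Remainder Theorem, as the answer outlines. A brute-force sketch:

```python
def cube_roots_of_unity(n):
    # all x in Z/nZ with x^3 congruent to 1 (mod n), by exhaustive search
    return [x for x in range(n) if pow(x, 3, n) == 1]

# mod 7 there are three roots, since 3 divides 7 - 1 = 6:
print(cube_roots_of_unity(7))   # [1, 2, 4]
# mod 5 only the trivial root, since gcd(3, 5 - 1) = 1 makes cubing a bijection:
print(cube_roots_of_unity(5))   # [1]
```

For prime $p$, the number of solutions is $\gcd(3, p-1)$, which is why the two primes above behave differently.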
H: Can any subset of $x$ be moved out of $x$? Let $x$ be a set and let $y\subset x$. Does there exist a set $z$ such that: (1) $z\cap x=\emptyset$ and (2) there exists a bijection $y \to z$? It is quite intuitive that the answer should be yes. My first attempt was to take a set $x'\notin x$ and to consider $z=y\times \{ x'\}$. But I am unable to show $z\cap x=\emptyset$. I could imagine a proof with the axiom of choice, but I'd prefer to avoid it if possible. AI: Choosing $z = y\times \{x\} = \bigl\{ \{\{a\},\{a,x\}\} : a \in y\bigr\}$ works, since the axiom of foundation then guarantees that $z \cap x = \varnothing$: otherwise there would be an $a\in y$ with $$x \in \{a,x\} \in \{\{a\},\{a,x\}\} \in x,$$ a membership cycle, which foundation forbids.
H: Set of all functions with a Lipschitz Condition I could not find a way to start, let alone a solution. Any help would be greatly appreciated. Let $M_K$ be the set of all functions $f$ in $C_{[a,b]}$ satisfying a Lipschitz condition, i.e., $|f(t_1)-f(t_2)|\leq K|t_1-t_2|$ for all $t_1,t_2\in[a,b]$, where $K$ is a fixed positive number. Then $M=\cup_K M_K$ is not closed. AI: If $a = b$, then every $f\in C([a,b])$ is Lipschitz continuous (with Lipschitz constant $0$), so in that degenerate case, $M$ actually is closed (trivially). Therefore let's assume $a < b$. $M$ is the set of functions satisfying some Lipschitz condition on $[a,b]$. In particular, $M$ contains all polynomials, since these are continuously differentiable on all of $\mathbb{R}$, and hence their derivatives are bounded on $[a,b]$. There is a theorem of Weierstraß that every continuous function on a compact interval can be uniformly approximated by polynomials, hence $M$ is dense in $(C([a,b]),\lVert\,\cdot\,\rVert_\infty)$. Since there are continuous functions that are not Lipschitz continuous, e.g. $x\mapsto \sqrt{\lvert x-c\rvert}$ for $c\in (a,b)$, $M$ is not closed (because the only closed and dense subset is the entire space). In fact, since $M_K$ is closed and has empty interior for every $K$, and $M = \bigcup\limits_{n=1}^\infty M_n$ is a countable union of such sets, $M$ is meagre (of the first category) in $C([a,b])$, and by Baire's theorem $C([a,b])\setminus M$ is non-meagre (of the second category) in $C([a,b])$. So topologically, the set of Lipschitz continuous functions on $[a,b]$ is a small subset of the set of all continuous functions.
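That a function like $t \mapsto \sqrt{\lvert t - c\rvert}$ admits no Lipschitz constant can be seen directly from its difference quotients, which blow up near $c$ (a sketch, taking $c = 0$ on $[-1,1]$ for simplicity):

```python
import math

def f(t):
    return math.sqrt(abs(t))

# |f(h) - f(0)| / |h - 0| = 1 / sqrt(h) for h > 0, which exceeds any fixed K
# once h is small enough, so f is continuous but lies in no M_K.
for h in [1e-2, 1e-4, 1e-6]:
    print(h, abs(f(h) - f(0)) / h)
```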
H: Possible class equations for a group Determine the possible class equations for a group of order 21. Until now I have found the following: $1+3+3+7+7$ $1+1+1+3+3+3+9$ $1+1+1+1+1+1+1+7+7$ $1+1+1+1+1+1+1+1+1+3+3+3+3$ $1+1+1+\cdots +1 \ (21 \ \text{times})$ Is there any way to eliminate choices from this list? More importantly, how would we know that this is a complete list? (Until now my attempt has just been guess and check after I found possible occurrences of 1's.) Is there any easier way to determine the class equation? AI: Hint: the number of 1's is the order of $Z(G)$, the center of $G$, and of course $|Z(G)|$ divides the order of $G$. Also, $|G/Z(G)|$ cannot be a prime number. Moreover, every conjugacy class size divides $|G| = 21$ by the orbit-stabilizer theorem, which already rules out the term $9$ in your second equation. This leaves you with the first and last possibilities. In the last case the group is abelian.
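The surviving non-abelian possibility is actually realized: the group $\mathbb{Z}_7 \rtimes \mathbb{Z}_3$ of order 21, with the generator of $\mathbb{Z}_3$ acting on $\mathbb{Z}_7$ as multiplication by $2$ (valid since $2^3 \equiv 1 \pmod 7$), has class equation $1+3+3+7+7$. A brute-force verification (sketch):

```python
# Elements are pairs (i, j) with i mod 7, j mod 3, multiplied by
# (i1, j1)(i2, j2) = (i1 + 2^j1 * i2 mod 7, j1 + j2 mod 3).
def mul(g, h):
    return ((g[0] + pow(2, g[1], 7) * h[0]) % 7, (g[1] + h[1]) % 3)

G = [(i, j) for i in range(7) for j in range(3)]

def inv(g):
    # a linear search for the inverse is fine for 21 elements
    return next(h for h in G if mul(g, h) == (0, 0))

# collect conjugacy classes and their sizes
seen, sizes = set(), []
for g in G:
    if g not in seen:
        cls = {mul(mul(h, g), inv(h)) for h in G}
        seen |= cls
        sizes.append(len(cls))

print(sorted(sizes))  # the class equation 1 + 3 + 3 + 7 + 7
```

The two classes of size 3 are the orbits $\{1,2,4\}$ and $\{3,5,6\}$ of the multiplication-by-2 action on $\mathbb{Z}_7\setminus\{0\}$.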
H: Is the condition "sample paths are continuous" an appropriate part of the "characterization" of the Wiener process? Wikipedia has separate articles on "Brownian motion" and "Wiener process" (http://en.wikipedia.org/wiki/Brownian_motion and http://en.wikipedia.org/wiki/Wiener_process ). I am not an expert, but that strikes me as dubious. That is not the subject of my question, however, and that may be a better issue to discuss at Wikipedia than in MSE. Both articles say that the Wiener process is "characterized by" (they do not use the words "defined" or "definition") four facts. One of them is essentially the fact that sample paths are almost surely continuous. In a monograph I will cite below, the continuity of sample paths was not part of the definition. I think it would be better if they gave a mathematician's definition that did not include continuity of sample paths, and then cited the continuity of sample paths as a property that could be proven. Even though people other than mathematicians may be interested in the Wiener process, it is a mathematical idea (according to Wikipedia), and I think it should be defined in mathematical terms. A good mathematical definition does not include redundant information. For example, when one defines a group, one never states that there is a unique identity element, because the uniqueness can be proven. I don't know if Brown or Wiener used continuity of sample paths as part of their definition, but if they did, I think a modern, leaner definition that omitted unnecessary hypotheses would be better. I would like to complain about this on the articles' talk pages, but I am not confident enough to be sure this is a valid complaint. Can anyone back me up? Or can anyone justify Wikipedia's "characterization"? EDIT: This is the definition of Brownian motion from "An Introduction to Stochastic Differential Equations" by L. C. Evans: (i) $W(0) = 0$ almost surely.
(ii) $W(t)-W(s) \sim N(0,t-s)$ for all $0 \leq s \leq t$ (iii) For all $0<t_1<t_2<\cdots<t_n$, the random variables $W(t_1),W(t_2)-W(t_1),\ldots,W(t_n)-W(t_{n-1})$ are independent. He then proceeds to state a "Theorem" asserting that "for a.e. $\omega$, the sample path $t \mapsto W(t,\omega)$ is continuous". CORRECTION: He actually proves that the Lévy construction of Brownian motion has continuous sample paths. He does not prove that continuity follows from (i)-(iii). Evans is a very good mathematician and a good writer. If a continuity assumption is necessary, then someone has probably already proven that there exists a stochastic process satisfying (i)-(iii) above that is not the same as Brownian motion (as given by any of several constructions, such as Lévy's). Has anyone done this? EDIT: Someone has in fact done this. See the MathOverflow link given in the comment immediately below this answer. AI: There are a number of ways to define Brownian Motion. As you say, many common approaches first construct a suitable collection of processes (often on a product space), then show that in each equivalence class there is a continuous version of the process (generally using Kolmogorov's continuity theorem), and then declare that we are only interested in that element of each equivalence class. That is very different from saying that any process satisfying the other assumptions is necessarily a continuous process. Somewhat paradoxically, in the usual construction on the product space, the event that the process is continuous is not even measurable. In that entirely natural setting, it doesn't even make sense to ask whether or not paths are continuous without doing some kind of restriction. There is a discussion of this construction and this issue in Durrett, if you are interested.
Since physically (and mathematically) we want to be able to specify that Brownian Motion is continuous pathwise, we have to build the restriction to the continuous representative of each equivalence class into the definition in order to use the construction on a product space and obtain the process we want. I think it may be worth looking at the other common, relatively elementary construction as well. Another approach is to construct Brownian Motion directly on the space of continuous functions in the usual topology. But notice that in this approach, which I would argue is nicer than the first, we implicitly assume that Brownian Motion must be continuous. As a comment, the previous link contains what I generally see referred to as the standard modern mathematical reference on Brownian Motion, and the authors do have continuity in the definition. For what it's worth, I just took a quick look at Wiener's original paper, and at a glance it appears he proves continuity as a theorem.