H: Stuck on a dot product proof I've been stuck on this for hours and would really appreciate some help! Question: Suppose $\phi:\mathbb{R}^n\rightarrow\mathbb{R}^n$ is a function that preserves dot products. In other words, for all $u,v \in \mathbb{R}^n$, we have $(u*v)=(\phi(u)*\phi(v))$. Using only basic properties of dot product (such as the distributive law) and the definition of length: Prove that for all $u,v \in \mathbb{R}^n$, we have $|u+v|=|\phi(u)+\phi(v)|$. Prove that for all $u,v,w \in \mathbb{R}^n$, we have $|u+v-w|=|\phi(u)+\phi(v)-\phi(w)|$. I can do these proofs in more complicated ways, but not with such simple assumptions! I'm not allowed to use the fact that $\phi$ is a linear operator, for example. AI: All you need to do is to apply the definition of length, use linearity of dot product, and apply $\phi$ preserving dot product: $|u + v|^2 = (u + v) \cdot (u + v) = u \cdot u + 2u \cdot v + v \cdot v = \phi(u) \cdot \phi(u) + 2\phi(u) \cdot \phi(v) + \phi(v) \cdot \phi(v) = (\phi(u) + \phi(v)) \cdot (\phi(u) + \phi(v)) = |\phi(u) + \phi(v)|^2$. $|u + v - w|^2 = (u + v - w) \cdot (u + v - w) = u \cdot u + v \cdot v + w \cdot w + 2u \cdot v - 2u \cdot w - 2v \cdot w = \phi(u) \cdot \phi(u) + \phi(v) \cdot \phi(v) + \phi(w) \cdot \phi(w) + 2\phi(u) \cdot \phi(v) - 2\phi(u) \cdot \phi(w) - 2\phi(v) \cdot \phi(w) = (\phi(u) + \phi(v) - \phi(w)) \cdot (\phi(u) + \phi(v) - \phi(w)) = |\phi(u) + \phi(v) - \phi(w)|^2$. When you see a question like this, think about how to make use of all the conditions it gives you; often a careful chain of basic derivations is all it takes to solve it.
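As a quick numeric sanity check of the identities above (added here, not part of the original thread), take a plane rotation as a sample dot-product-preserving $\phi$ and verify both claims:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

def phi(u, theta=0.7):
    # Sample dot-product-preserving map: a plane rotation (the proof does not
    # assume this; it is just one concrete phi to test with).
    c, s = math.cos(theta), math.sin(theta)
    return (c * u[0] - s * u[1], s * u[0] + c * u[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)

# phi preserves dot products...
assert abs(dot(u, v) - dot(phi(u), phi(v))) < 1e-12

# ...hence, as the answer derives, it preserves |u+v| and |u+v-w|.
uv = tuple(a + b for a, b in zip(u, v))
puv = tuple(a + b for a, b in zip(phi(u), phi(v)))
assert abs(norm(uv) - norm(puv)) < 1e-12

uvw = tuple(a + b - c for a, b, c in zip(u, v, w))
puvw = tuple(a + b - c for a, b, c in zip(phi(u), phi(v), phi(w)))
assert abs(norm(uvw) - norm(puvw)) < 1e-12
```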
H: Probability distribution binomial An assembly system is composed of n independent and identical parts. During any given “run” of the system, all parts have a probability of p of working. Suppose the random variable Y represents the number of working parts in any given “run”. You are told that P(Y=3) = 2P(Y=2) and V[Y] = p. What is the probability distribution of Y? I know this is a binomial, wherein the mean is np and the variance is np(1-p); how would you solve this? AI: There is the trivial solution $p=0$, in which case $n$ cannot be determined. Assume now that $p\ne 0$. We have $\Pr(Y=3)=\binom{n}{3}p^3(1-p)^{n-3}$ and $\Pr(Y=2)=\binom{n}{2}p^2(1-p)^{n-2}$. We were told that $$\binom{n}{3}p^3(1-p)^{n-3}=2\binom{n}{2}p^2(1-p)^{n-2}.$$ Thus $$\frac{n(n-1)(n-2)}{6}p^3(1-p)^{n-3}=2\frac{n(n-1)}{2}p^2(1-p)^{n-2}.$$ A bit of algebra simplifies this to $$(n-2)p=6(1-p).\tag{1}$$ The information about the variance tells us that $n(1-p)=1$. Solve for $p$, by replacing $n$ in (1) by $\frac{1}{1-p}$. We get $p=\frac{3}{4}$ and then $n=4$.
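A short check (added, not from the thread) that the derived values $n=4$, $p=3/4$ really satisfy both given conditions:

```python
from math import comb

n, p = 4, 3 / 4  # the values derived in the answer

def pmf(k):
    # Binomial probability P(Y = k) = C(n,k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p) ** (n - k)

assert abs(pmf(3) - 2 * pmf(2)) < 1e-12   # P(Y=3) = 2 P(Y=2)
assert abs(n * p * (1 - p) - p) < 1e-12   # V[Y] = np(1-p) = p
```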
H: Extend a homeomorphism of an arc of a circle to the entire circle. Let $X$ and $Y$ be homeomorphic to a circle. Let $C_X$ and $C_Y$ be closed arcs of $X$ and $Y$, respectively, and there is a homeomorphism $\phi:C_X\to C_Y$. Is it always possible to extend $\phi$ to $\phi':X\to Y$, such that $\phi'$ is a homeomorphism and $\phi'(x)=\phi(x)$ for $x\in C_X$? AI: Yes. Wlog. $X=Y=S^1$ (via given homeomorphisms), $C_X=[0,\alpha]$, $C_Y=[0,\beta]$ (per rotation and/or reflection) and the extension $[\alpha,2\pi]\to[\beta,2\pi]$ is straightforward. Alright, the only trap in this is that we need to show that $\phi$ takes endpoints to endpoints. But that is clear as endpoints can be characterized by the fact that removing them does not make the arc disconnected.
H: Is $\mathbb R$ a normal topological space? As in the title, in euclidean space is it always possible two find for two disjoint closed sets $A,B$ two open sets $U,V$ disjoint such that $A \subseteq U$ and $B \subseteq V$ (T4-property, normal)? AI: Every metric space is normal, in particular $\mathbb R^n$. The proof goes as follows: For each $a\in A$, let $r_a=\frac{1}{3}d(a,B)$, and for each $b\in B$, let $s_b=\frac{1}{3}d(b,A)$. Now define $U=\bigcup_{a\in A}B(a,r_a)$ and $V=\bigcup_{b\in B}B(b,s_b)$. It is not hard to show that $A\subseteq U,$ $B\subseteq V,$ and $U\cap V=\varnothing$.
H: Evaluating $\int_{-2}^{2} 4-x^2 dx$ with a Riemann sum I'm having problems with a Riemann sum ... I need to find the integral:$$\int_{-2}^2 (4-x^2)\;dx$$Clearly we have $$\int_{-2}^{2}(4-x^2)\;dx=4x-\frac{x^3}{3}\mid_{-2}^{2}=(4\cdot2-\frac{2^3}{3})-(4\cdot(-2)-\frac{(-2)^3}{3})=\frac{32}{3}$$OK. On the other hand, we have $$\Delta x=\frac{b-a}{n}=\frac{4}{n}$$ and $$\xi_1=-2+\frac{4}{n};\;\;\xi_2=-2+2\frac{4}{n};\;\;\ldots\;\;;\xi_n=-2+n\frac{4}{n}$$ then $$\xi_i=-2+\frac{4i}{n}=\frac{4i-2n}{n},$$ so that $$ \begin{align} \int_{-2}^2 {4-x^2}\;dx=&\lim_{n\to+\infty} \sum_{i=1}^n\left(4-\left( \frac{4i-2n}{n}\right)^2\right)\frac{4}{n}\\ =&\lim_{n\to+\infty} \sum_{i=1}^n\left(4-\left( \frac{4i-2n}{n}\right)^2\right)\cdot\frac{4}{n}\\ =&\lim_{n\to+\infty} \sum_{i=1}^n\left(4-\left( \frac{16i^2-16ni+4n^2}{n^2}\right)\right)\cdot\frac{4}{n}\\ =&\lim_{n\to+\infty} \sum_{i=1}^n \left(\frac{4n^2-16i^2+16ni-4n^2}{n^2} \right)\cdot\frac{4}{n}\\ =&\lim_{n\to+\infty} \sum_{i=1}^n \frac{-16i^2+16ni}{n^2}\cdot\frac{4}{n} \\ =&\lim_{n\to+\infty}\sum_{i=1}^{n} \frac{-64i^2+64ni}{n^3}\\ =&\lim_{n\to+\infty} \frac{64}{n^3}\left(-\sum_{i=1}^{n} i^2+n\sum_{i=1}^{n} i\right) \\ =&\lim_{n\to+\infty} \frac{64}{n^3} \left(-\frac{n(n+1)(2n+1)}{6}+n\cdot \frac{n(n+1)}{2} \right)\\ =&\lim_{n\to+\infty} \frac{64}{n^3} \left(-\frac{\color{#ff0000}{2}n^3+3n^2+n}{6}+\frac{n^3+n^2}{2} \right) \\ =&\lim_{n\to+\infty} \frac{64}{n^3} \left(\frac{-\color{#ff0000}{2}n^3-3n^2-n+3n^3+3n^2}{6}\right)\\ =&\lim_{n\to+\infty}\frac{32}{3} \left(\frac{\color{#ff0000}{1\times}n^3-n}{n^3}\right) \\ =&\lim_{n\to+\infty}\frac{64}{\color{#ff0000}{2\times}3}-\frac{32}{3n^2}=\frac{64}{\color{#ff0000}{2\times}3} \color{#ff0000}{=}\frac{32}{3}=\int_{-2}^{2} {4-x^2}\;dx \end{align}$$ Where is the mistake? AI: The problem is an accidental algebra mistake, as I pointed out in comments: $n(n+1)(2n+1)=2n^3+3n^2+n\neq n^3+3n^2+n$, and this should do it.
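As a numeric companion to the algebra above (an added sketch, not part of the original answer), the right-endpoint Riemann sum can be evaluated directly and compared with $32/3$:

```python
def right_riemann_sum(n):
    # Right-endpoint sum for f(x) = 4 - x^2 on [-2, 2], matching xi_i = -2 + 4i/n.
    dx = 4 / n
    return sum((4 - (-2 + i * dx) ** 2) * dx for i in range(1, n + 1))

# The (corrected) derivation shows the sum equals (32/3)(1 - 1/n^2),
# so it approaches 32/3 as n grows.
assert abs(right_riemann_sum(1000) - 32 / 3) < 1e-4
```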
H: Defining a recursive function $f$ on $\{a, b\}$* I would need some help on how I can define a recursive function $f$ on $\{a, b\}$* Define a recursive function $f$ on $\{a, b\}$* which replaces any $a$ with $b$ and vice versa, for example, $f(aba) = bab$ and $f(aaabbb) = bbbaaa$ I would appreciate hints and/or examples or at least how I need to think to solve a task like this one. Thank YOU. AI: HINT: It’s actually more similar than you might think to this question. If $w\in\{a,b\}^*$, and you know what $f(w)$ is, what are $f(wa)$ and $f(wb)$ in terms of $f(w)$?
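One concrete way to carry out the hint (an added sketch using Python strings for elements of $\{a,b\}^*$; it does spoil the exercise):

```python
def f(w):
    # Recursive definition on {a,b}*: f(empty) = empty,
    # f(wa) = f(w) + "b" and f(wb) = f(w) + "a".
    if w == "":
        return ""
    return f(w[:-1]) + ("b" if w[-1] == "a" else "a")

assert f("aba") == "bab"
assert f("aaabbb") == "bbbaaa"
```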
H: How do I show that $f(x) = x^3+ax^2+c$ has exactly one negative root? How do I show that $f(x) = x^3+ax^2+c$ has exactly one negative root if $a < 0$ and $ c > 0$? I can use the bisection technique and choose any numbers for a and c. But I was wondering if there are any other solutions for this. AI: $f'(x)=3x^2+2ax$, which is positive if $x\leq 0$, so $f$ is increasing in $(-\infty,0)$. Also, $f(0)=c>0$, and $\displaystyle\lim_{x\to-\infty}f(x)=-\infty$, so there exists only one negative root.
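The monotonicity argument is exactly what makes the bisection mentioned in the question work; a small sketch (with illustrative coefficients $a=-2$, $c=3$, chosen so the negative root happens to be $-1$):

```python
def f(x, a=-2.0, c=3.0):
    # Sample coefficients with a < 0, c > 0; here f(x) = (x+1)(x^2 - 3x + 3),
    # so the unique negative root is x = -1.
    return x**3 + a * x**2 + c

# f is increasing on (-inf, 0], f(0) = c > 0 and f(-10) < 0,
# so bisection on [-10, 0] converges to the single negative root.
lo, hi = -10.0, 0.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

assert abs(lo + 1.0) < 1e-9
```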
H: Why add one to the number of observations when calculating percentiles? The CFA Quantitative Methods book uses the following formula for finding the observation in a sorted list that corresponds to a given percentile $y$ in a set of observations of size $n$: $(n + 1)\frac{y}{100}$ It defines percentile as follows: "Given a set of observations, the yth percentile is the value at or below which y percent of observations lie." My question is, where does the $+ 1$ come from? I can see that if you wanted to ensure that all values are below a given percentile, it is useful. It also ensures the correct value for the median. But given the definition of percentile above, I would think it should be possible to have a hundredth percentile, which would be equal to the largest value. Is the "at or below" in conflict with the $+ 1$? AI: You answered your own question: "I can see that if I use the first formula to calculate the 50th percentile, the +1 ensures I get the same answer as when I calculate the median." That's a really important property for percentiles! One you should want. Also, if you are fitting empirical data to some parametric curve, adding +1 allows for a "tail". Many curves you would fit to have infinite support, so if you did not add +1, you would be saying your last data point is at the 100%-tile, which is usually a bad assumption.
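To make the $(n+1)\frac{y}{100}$ formula concrete (an added sketch using one common convention, linear interpolation at fractional positions; details vary between textbooks):

```python
import statistics

def percentile(data, y):
    # Position (n + 1) * y / 100 in the sorted data, 1-based;
    # interpolate linearly when the position is fractional.
    s = sorted(data)
    pos = (len(s) + 1) * y / 100
    i = int(pos)
    if i < 1:
        return s[0]
    if i >= len(s):
        return s[-1]
    return s[i - 1] + (pos - i) * (s[i] - s[i - 1])

data = [12, 5, 7, 9, 21]
# With n = 5, the 50th percentile position is (5 + 1)/2 = 3,
# which lands exactly on the middle value: the median.
assert percentile(data, 50) == statistics.median(data)
```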
H: Generators of Groups I need to show the following: Show that $\mathbb{Z}$ is generated by $5$ and $7$. I think that the solution has to do with relative prime numbers but I don't know where to start. AI: What you're missing is a quite useful theorem called Bézout's identity.
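Bézout's identity can be made concrete with the extended Euclidean algorithm (an added sketch, not from the thread): since $\gcd(5,7)=1$, there are integers $s,t$ with $5s+7t=1$, and scaling by $n$ expresses any integer in terms of $5$ and $7$.

```python
def bezout(a, b):
    # Extended Euclidean algorithm: returns (g, s, t) with s*a + t*b == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, s, t = bezout(b, a % b)
    return g, t, s - (a // b) * t

g, s, t = bezout(5, 7)
assert g == 1 and s * 5 + t * 7 == 1
# Hence any integer n equals (n*s)*5 + (n*t)*7, so 5 and 7 generate Z.
n = 42
assert (n * s) * 5 + (n * t) * 7 == n
```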
H: Normal approximation to the log-normal distribution Intuitively, it seems that a lognormal distribution with a tiny $\sigma/\mu$ ratio might look quite a bit like a normal distribution. Can this be formalized in any way (e.g., by stating upper bounds on the size of the error margin in the cdf from using this approximation, etc.)? Also, any intuition behind why this approximation works well would be appreciated. For some reason, I could not find any references on the web, apart from this (unsupported) rule of thumb in a random risk software manual. AI: The lognormal random variable with parameters $\mu, \sigma$ is $e^{\mu + \sigma Z}$, where $Z$ is standard normal, and so its CDF is $F(x) = \Phi\left(\frac{\ln(x) - \mu}{\sigma}\right)$ for $x > 0$, where $\Phi$ is the standard normal CDF. When $x = e^{\mu} (1+ t)$ is near $e^{\mu}$, $$\ln(x) = \mu + \ln(1 + t) = \mu + t + O(t^2)$$ so that $$F(x) \approx \Phi\left(\frac{t }{\sigma}\right)$$ In fact $$F(x) - \Phi\left(\frac{t }{\sigma}\right) = \Phi\left(\frac{\ln(1+t )}{\sigma}\right) - \Phi\left(\frac{t}{\sigma}\right) $$ This will be a very good approximation for $t$ near $0$. For fixed $t$, it goes to $0$ as $\sigma \to 0$ because both $\Phi$'s go to $0$ if $t < 0$ and both $\Phi$'s go to $1$ if $t > 0$. When $\sigma$ is small there appear to be two local minima of $F(x) - \Phi\left({t }/{\sigma}\right) $, at approximately $t = \pm \sqrt{2} \sigma$, where $F(x) - \Phi\left(t/\sigma\right) \approx -\sigma/(\sqrt{2\pi} e) \pm \sigma^2/(6 \sqrt{\pi} e)$. Thus the maximum error in $F$ is approximately $$ \frac{\sigma}{\sqrt{2\pi} e} + \frac{\sigma^2}{6 \sqrt{\pi} e}$$ For example, for $\sigma = 1/2$, the actual maximum error is approximately $0.0817231495769266$, and the above approximation to the maximum error is $0.08202940444$. As $\sigma$ gets smaller, the approximation should get better.
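The quoted error figures can be checked numerically (an added sketch, using only the standard library; the grid over $t$ and its bounds are arbitrary choices):

```python
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

sigma = 0.5
# Error of the normal approximation at x = exp(mu) * (1 + t), scanned over t.
max_err = max(
    abs(Phi(math.log(1 + t) / sigma) - Phi(t / sigma))
    for t in (i / 1000 for i in range(-990, 3001))
)
approx = sigma / (math.sqrt(2 * math.pi) * math.e) + sigma**2 / (6 * math.sqrt(math.pi) * math.e)

assert abs(max_err - 0.0817231495769266) < 3e-3   # the stated actual maximum error
assert abs(max_err - approx) < 3e-3               # the stated approximation to it
```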
H: f(n) from sequence? I'm more than a little rusty on my algebra and am realizing it could serve a great purpose here. I am currently trying to reverse engineer a relatively simple sequence of numbers but am having a hard time with it. This is the sequence: 500 490 481 472 464 456 448 441 434 427 420 414 408 402 396 390 385 380 375 370 365 360 356 352 348 344 340 336 332 329 326 323 320 317 314 311 308 306 304 302 300 298 296 294 292 290 289 288 287 286 285 284 283 282 281 I've determined that the difference decreases by 1 on a decreasing frequency. That frequency is: -10 -9 -9 -8 -8 -8 -7 -7 -7 -7 -6 -6 -6 -6 -6 -5 -5 -5 -5 -5 -5 -4 -4 -4 -4 -4 -4 -4 -3 -3 -3 -3 -3 -3 -3 -3 -2 -2 -2 -2 -2 -2 -2 -2 -2 -1 -1 -1 -1 -1 -1 -1 -1 -1 That's where I'm getting stuck -- figuring out how to turn this into a function... I've been led to polynomials. Am I on the right track? AI: If your initial term corresponds to $n=0$ then you might find that $$500 - 11n + n\left[\sqrt{2n}\right] + \frac{\left[\sqrt{2n}\right]-\left[\sqrt{2n}\right]^3}{6}$$ where $[x]$ is the nearest integer to $x$, fits your sequence rather more neatly than a $54$-degree polynomial. This is related to OEIS A060432
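The proposed closed form can be verified against the full posted sequence (an added check, not from the thread):

```python
import math

seq = [500, 490, 481, 472, 464, 456, 448, 441, 434, 427, 420, 414, 408, 402,
       396, 390, 385, 380, 375, 370, 365, 360, 356, 352, 348, 344, 340, 336,
       332, 329, 326, 323, 320, 317, 314, 311, 308, 306, 304, 302, 300, 298,
       296, 294, 292, 290, 289, 288, 287, 286, 285, 284, 283, 282, 281]

def f(n):
    m = math.floor(math.sqrt(2 * n) + 0.5)   # nearest integer to sqrt(2n)
    # (m - m^3) is always divisible by 6, since m^3 - m = (m-1) m (m+1).
    return 500 - 11 * n + n * m + (m - m**3) // 6

assert [f(n) for n in range(len(seq))] == seq
```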
H: Understanding the quotient of infinite groups $\mathbb{R}^2/H$ where $H = \{(a, 0): a\in \mathbb{R}\}$ Define $H = \{(a, 0): a\in \mathbb{R}\}$. Without using the fundamental homomorphism theorem, how would we know what $\mathbb{R}^2/H$ is? The quotient group is $\{H, (x, y) + H, (x_2, y_2) + H, \dots \}$. Intuitively, each coset in the quotient group is a horizontal line crossing through the point $(x_i + a, y_i)$ or just $(0, y_i)$. And based on other material I've covered, I intuitively think $\mathbb{R}^2/H \cong \mathbb{R}$, but I wouldn't know how to show it. AI: We'll do this two ways: in words, and then with an isomorphism. Let's show that every element in the quotient group can be written like $H + (0,y)$. So start with some element $(x,y)+H$ in the quotient group. What this means is that for any element $h \in H$, our element $(x,y)+H$ is the same element as $ (x,y) + h + H$. But $(-x,0)$ is in $H$. And so our element $(x,y) + H = (0,y) + H$. In this way, it's easy to see that for any $y \in \mathbb{R}$, there is an element in $\mathbb{R}^2/H$ that looks like $(0,y) + H$. Further, no two elements $(0,a) + H$ and $(0,b) + H$ are the same when $a \neq b$, as $H$ does not affect the second coordinate. The typical group operations of $\mathbb{R}$ still work on the second coordinate as well, for the same reason that $H$ does not affect the second coordinate. Thus $\mathbb{R}^2 / H \cong \mathbb{R}$. Alternately, what we have considered is this map: $$\begin{align}\mathbb{R}^2 / H &\to \mathbb{R}\\ (x,y) + H &\mapsto y\end{align} $$ The first notable paragraph above shows it's surjective. The second shows that it's well-defined and injective. I sort of wave my hands at showing it's a homomorphism, but it is (and you can check it!). Thus we have explicitly given an isomorphism.
H: $F : \mathbb{Z} \to \mathbb{Z}$, $F(n) = 2 -3n$. Is $F$ one-to-one? Onto? Define $F : \mathbb{Z} \to \mathbb{Z}$ by the rule $F(n) = 2 -3n$, for all $n \in \mathbb{Z}$. Is $F$ one-to-one? Onto? Now, I understand that one-to-one means that nothing in the co-domain is being pointed to twice. I also understand onto; which means that every point in the codomain is being pointed to by a point in the domain. Beyond that, I am unsure of where to start here, in terms of proving or disproving them. AI: Hints: Suppose $F(m) = F(n)$, that is $2-3m = 2-3n$; what can you say about the relationship between $m$ and $n$? Let $k \in \mathbb{Z}$. Can you solve $f(n) = k$? First solve the equation $2 - 3n = k$ for $n$, then check to see whether or not $n \in \mathbb{Z}$.
H: Prove $\sum\binom{n}{k}2^k = 3^n$ using the binomial theorem I'm studying for a midterm and need some help with proving summation $$\sum\limits_{k=0}^n\binom{n}{k}2^k = 3^n$$ using the binomial theorem. This is what I've been thinking so far: In the binomial theorem, we set $x = 0$ and $y = 2$, so: $3^n = (x+y)^n = \sum\limits_{k=0}^n\binom{n}{k}y^k$ $ = \sum\limits_{k=0}^n\binom{n}{k}2^k$ Am I getting this correct so far or completely wrong? Any help would be appreciated. Thanks. AI: Not quite: if $x=0$ and $y=3$, then you would have $3^n=y^n$, which is not what you want, and if you set $x=0$ and $y=2$ you would have $2^n = y^n$. Compare this with the general binomial theorem, $$(x+y)^n = \sum_{k=0}^{n} {n \choose k} x^{n-k}y^k $$ Notice that if $x=1$ and $y=1$ we have, $$ 2^n = \sum_{k=0}^{n} {n \choose k} $$ Notice that if $x=6$ and $y=-4$ we have, $$(2)^n = \sum_{k=0}^{n} {n \choose k} 6^{n-k}(-4)^k $$ What values could $x$ and $y$ have in order to get a $3^n$ on the left? What values could $x$ and $y$ have in order to get a $2^k$ in the summation? If you can find values for $x$ and $y$ which satisfy both of those questions then you have solved the problem. Further Motivation: You correctly guessed $x=1$ and $y=2$. As far as I know there is no formal algorithmic approach that will solve this type of problem. This is in fact a great tool for teaching problem-solving skills because it involves thinking about a problem from different angles without an initially clear path to the solution. When looking at a problem you should first write it down and examine every piece. We had $$ 3^n = \sum_{k=0}^n { n \choose k } y^k $$ The first thing which strikes me when looking at this is that we have binomial coefficients. This suggests that we may find greater insight by looking at the binomial theorem. 
$$ (x+y)^n = \sum_{k=0}^n { n \choose k } x^{n-k} y^k $$ Comparing the statement of the binomial theorem to our problem we notice that both have a summation with binomial coefficients equal to the power of a number. First let us compare the results of the respective summations. $$ 3^n \qquad (x+y)^n $$ We see that in one case we have the number $3$ raised to the $n$'th power and in the other case we have the number $x+y$ raised to the $n$'th power. We suspect that these numbers will have to be the same for the binomial theorem to be useful in solving the problem. However there are two degrees of freedom in the second number in the form of the variables $x$ and $y$ this means there is not a unique $(x,y)$ pair which will produce a $3$ when added together. Note (-1,4),(0,3),(1,2),(103,-100), etc. all add to three. This means that we have only partially determined the solution by comparing the results of the summations. To narrow it down to a solution we compare the summands. $$ {n \choose k } 2^k \qquad { n \choose k } x^{n-k}y^k $$ Both terms have identical binomial coefficients which means that we can ignore them. One has powers of a single number, $2$, the other has powers of our variables $x$ and $y$. We notice that $2$ and $y$ are both raised to the $k$ whereas $x$ is raised to the $n-k$. This suggests to us that it may be useful to associate $y$ with $2$. If we did this we could say that we believe $y=2$ but are not yet confident of the value $x$ should have. Thinking back we remember that one of the pairs of possible $x$ and $y$ values was $(1,2)$ this is hopeful since it identifies $y=2$ as we desire. If $x=1$ doesn't cause any problems we will have solved the problem. Examining the binomial theorem with $(1,2)$ we see that, $$ (x+y)^n = \sum_{k=0}^n { n \choose k } x^{n-k} y^k $$ $$ (1+2)^n = \sum_{k=0}^n { n \choose k } (1)^{n-k} (2)^k $$ $$ 3^n = \sum_{k=0}^n { n \choose k } (2)^k $$ Which is the identity we wanted to establish. 
At this point we know the path through the woods. The solution is: Evaluate the binomial theorem for $x=1$ and $y=2$ and the result is the desired identity. This is logically impeccable but contains none of the thought that was necessary to produce it. One advantage of this is that if we are lucky enough to guess the right $(x,y)$ we can solve the problem even if we didn't have a good reason for coming up with the ordered pair. The disadvantage is that we don't learn anything about solving other problems when we see just the bare solution. You learn to think this way by solving a lot of problems without clearly marked paths to the solution. If you are interested in learning more about how to think in this way you should read "How to Solve It" by George Polya.
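The identity itself is easy to check mechanically (an added snippet, separate from the thread):

```python
from math import comb

# Binomial theorem with x = 1, y = 2: (1 + 2)^n = sum_k C(n,k) 1^(n-k) 2^k.
for n in range(20):
    assert sum(comb(n, k) * 2**k for k in range(n + 1)) == 3**n
```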
H: infinite summation formula help How do I find the following sum? $$\sum_{x=3}^\infty 1.536x\left(\frac{5}{8}\right)^x$$ It wouldn't be geometric because of the $x$ in front, right? AI: Hint: You are correct that it's not a geometric series. What we can do is note that $$\sum\limits_{n = 0}^{\infty} r^n = \frac{1}{1 - r}$$ Now differentiating termwise, we see that $$\sum\limits_{n = 1}^{\infty} nr^{n - 1} = \frac{1}{(1 - r)^2}$$ This can be adapted for your series: \begin{align} \sum\limits_{x = 3}^{\infty} x \left(\frac 5 8 \right)^x &= \frac 5 8 \sum\limits_{x = 3}^{\infty} x \left(\frac 5 8\right)^{x - 1} \\ &= \frac 5 8 \left(\sum\limits_{x = 1}^{\infty} x \left(\frac 5 8 \right)^{x - 1} - 1 - 2 \cdot \frac 5 8\right) \\ &= \frac 5 8 \left(\frac{1}{(1 - (5/8))^2}-1 - \frac {10} 8\right) \end{align}
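A numeric check of the closed form in the last display (an added sketch; the $14/3$ value for the full series with its $1.536$ factor is a computed consequence, not stated in the thread):

```python
r = 5 / 8
# Closed form from the answer: (5/8) * (1/(1-r)^2 - 1 - 2r) equals sum_{x>=3} x r^x.
closed = r * (1 / (1 - r) ** 2 - 1 - 2 * r)
numeric = sum(x * r**x for x in range(3, 300))   # the tail beyond x=300 is negligible
assert abs(closed - numeric) < 1e-10

# The original series carries a factor 1.536; the total works out to 1.536 * 875/288 = 14/3.
assert abs(1.536 * closed - 14 / 3) < 1e-9
```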
H: Prove that $\inf(A\cap B)\geq \max [\inf(A),\inf(B)]$ Question: Let A and B be subsets of real numbers. Prove that $\inf(A\cap B)\geq \max [\inf(A),\inf(B)]$. Attempt at proof: Let $x\in A\cap B.$ Then, $\forall (x\in A\cap B):[\inf(A) \leq x]$ and $\forall (x\in A\cap B):[\inf(B) \leq x]$. Since any value of $x$ is at least larger than both $\inf(A)$ and $\inf(B)$, $x\geq \max [\inf(A),\inf(B)]$ $\therefore \inf(A\cap B)\geq \max [\inf(A),\inf(B)]$ I wonder if this is starting to look like a proof for the question? AI: The proof as written is incorrect. You have quantified $x$ in the first line: $x$ is an arbitrary element of $A\cap B$. Then, in the second line, you have a new variable that you are quantifying, that is also called $x$. Here is a simpler proof, that does not need to appeal to elements. $A\cap B\subseteq A$; hence $\inf(A\cap B)\ge \inf(A)$. Similarly, $A\cap B\subseteq B$; hence $\inf(A\cap B)\ge \inf(B)$. Since $\inf(A\cap B)$ is $\ge$ both $\inf(A)$ and $\inf(B)$, we must have $\inf(A\cap B)\ge \max[\inf(A),\inf(B)]$.
H: Lambda Calculus: Reducing to Normal Form I'm having trouble understanding how to reduce lambda terms to normal form. We just got assigned a sheet with a few problems to reduce, and the only helpful thing I've found is the following example in the book: (λf.λx.f(fx))(λy.y+1)2 ->(λx.(λy.y+1)((λy.y+1)x))2 //How'd it get to this?? ->(λx.(λy.y+1)(x+1))2 ->(λx.(x+1+1))2 ->(2+1+1) I'm pretty sure I understand most of it... except for their first step (everything else is pretty much substitution as if it was: f(x) = x + 3, x = y, therefore y+3) Can someone please explain this to me? I pretty much have no experience with lambda calculus. Thanks, Sean AI: Basically, the function $(\lambda f.\lambda x.f(fx))$ is applied to the argument $\lambda y.y+1$: every occurrence of $f$ in the body $\lambda x.f(fx)$ is replaced by $\lambda y.y+1$, which yields $\lambda x.(\lambda y.y+1)((\lambda y.y+1)x)$. This substitution step is called beta reduction.
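Python lambdas make the same reduction tangible (an added illustration; Python evaluates eagerly rather than by rewriting, but the end result matches):

```python
# The book's term, transcribed with Python lambdas:
# (λf.λx.f(f x)) applied to (λy.y+1), then to 2.
term = (lambda f: lambda x: f(f(x)))(lambda y: y + 1)

# The first step substitutes f := (λy.y+1) into λx.f(f x), giving
# λx.(λy.y+1)((λy.y+1) x); the remaining steps reduce that to 2+1+1 = 4.
assert term(2) == 4
```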
H: Contour integration with 2 simple poles on contour Ok for this one I would appreciate if someone could give me a conceptual answer first. I am supposed to integrate $\int_{-\infty}^{\infty} \frac{e^{-i q t}}{p^2 - q^2} dq$ along a half circle C (whose radius goes to infinity), which comprises a horizontal path along the real line circumventing the two real poles -p & +p with a semicircle in the upper half plane, just as in part (a) I am to prove that the result is 0 if t<0 and $2 \pi i (-\frac{i}{p} ) \sin{pt}$ if t>0. Now I don't quite understand why the sign of t would change anything. Can someone enlighten me? AI: The sign of $t$ determines which contour you use. The idea is that the integral over the large circular arc shall tend to $0$ when the radius tends to $\infty$, and for that, the integrand must become small, in particular the exponential factor $e^{-iqt}$. We have $\lvert e^z\rvert = e^{\operatorname{Re} z}$, and $\operatorname{Re} (-iqt) = t\cdot \operatorname{Im} q$. For $t > 0$, that becomes negative, and hence the integrand small if $q$ is in the lower half plane, for $t < 0$, the integrand becomes small when $q$ is in the upper half plane. The contour in the upper half plane does not enclose a pole, hence by the Cauchy integral theorem, the integral is $0$. The contour with the large semicircle in the lower half plane encloses the two poles, and by the residue theorem, it is then $-2\pi i$ times the sum of the residues in the poles ($-2\pi i$ because the contour is negatively oriented).
H: Can one assume that $x$ and $y$ are at a fixed position? GRE I'm confused because I thought that one cannot assume anything is in a fixed position while on a number line. It says the answer is C but I don't understand how they can deduce that from the given information. Twice as far from $x$ as from $y$? There are no markers, so how can one know if it's not drawn to scale? AI: It doesn't matter that the scale isn't specified; all you need to know is that there is some distance between $x$ and $y$ and there are other points on the line with various distances to $x$ and $y$. Just because you can't assume you know the distance between them does not mean that you need to throw out all your intuition. I recommend this. Place your finger on the far left of the number line. Are points in that region twice as far from $x$ as they are from $y$? To my eyes it looks like they are closer to $x$ than $y$ and therefore do not count. Move your finger to the right till the points start looking like they are closer to $y$ than $x$. This should be between $x$ and $y$ a bit closer to $y$ than $x$. Can you see that even though you aren't sure exactly where it is there has to be a point in that middle region which is twice as far from $x$ as it is from $y$? In other words that there is a point whose distance to $y$ is half its distance to $x$? I think it should be clear there is one point in the middle that does this. Now keep moving your finger to the right passing through $y$. Notice that while going from the middle region to the right hand region your distance to $y$ goes to zero and then starts getting bigger again. Can you see that we are about to hit another point where $x$ is twice as far from your finger as $y$ is and that there is only one point on the right which does this? This point is the one that is as far from $y$ as $y$ is from $x$. Hope that helps.
H: Sequence of simple functions nonnegative that converge to measurable function $f$ Suppose $f\geq 0$ is measurable. We want to find a sequence of $s_n$ of nonnegative simple functions such that $s_n \to_{pointwise} f$. My book says that we should consider the sequence: $$ s_n = \sum_{k=0}^{2^{2n}} \frac{k}{2^n} \chi_{f^{-1}( [ \frac{k}{2^n}, \frac{k +1}{2^n} ] ) }$$ But I don't know how to show this sequence converges pointwise to $f$. I mean it is obviously non-negative. Do I have to show that $s_n$ is monotone and bounded? And hence convergent to $f$? Maybe the notation of the sequence is confusing. Any help would be greatly appreciated. AI: The sequence as given, $$s_n = \sum_{k=0}^{2^{2n}} \frac{k}{2^n} \chi_{f^{-1}( [ \frac{k}{2^n}, \frac{k +1}{2^n} ] ) }$$ contains an insidious error. It will not converge pointwise to $f$ if $f$ takes on a value of the form $\frac{k}{2^n}$ at all, and it will not even converge to $f$ almost everywhere if $f$ attains values of that form on a set of positive measure. One must choose half-open intervals. Let $$E_{n,k} = f^{-1}\left(\left[\frac{k}{2^n}, \frac{k+1}{2^n}\right) \right),$$ and $$s_n = \sum_{k=0}^{2^{2n}-1}\frac{k}{2^n}\chi_{E_{n,k}}.$$ A small change, but now it works ;) For every $n \in \mathbb{N}$, the family $\mathscr{E}_n = \{ E_{n,k} : k \in \mathbb{N}\}$ is a partition of the space, since they are preimages of disjoint sets, hence disjoint, and every point lies in some $E_{n,k}$, since the intervals $\left[\frac{k}{2^n},\frac{k+1}{2^n}\right)$ cover the entire range $[0,\infty)$ of $f$. Let us see that $s_n$ converges monotonically pointwise to $f$. For that, fix an arbitrary $x$ and check how $s_n(x)$ behaves. 
First, while $2^n \leqslant f(x)$, we have $s_n(x) = 0$: Since $f(x) \geqslant 2^n$, we have $k_{x,n} = \lfloor 2^n\cdot f(x)\rfloor \geqslant 2^{2n}$, and $x \in E_{n,k_{x,n}}$, but since $k_{x,n} \geqslant 2^{2n}$, the characteristic function of $E_{n,k_{x,n}}$ does not yet appear in $s_n$, so $s_n(x) = 0$. Once $n$ is so large that $2^n > f(x)$, we have $k_{x,n} = \lfloor 2^n\cdot f(x)\rfloor < 2^{2n}$, and since $x \in E_{n,k_{x,n}}$, we have $$s_n(x) = \sum_{k=0}^{2^{2n}-1}\frac{k}{2^n}\chi_{E_{n,k}}(x) = \frac{k_{x,n}}{2^n}\chi_{E_{n,k_{x,n}}}(x) = \frac{k_{x,n}}{2^n}$$ since the $E_{n,k}$ are disjoint. Since $k_{x,n} \leqslant 2^n\cdot f(x) < k_{x,n}+1$, we have $s_n(x) = \dfrac{k_{x,n}}{2^n} \leqslant f(x) < \dfrac{k_{x,n}+1}{2^n}$, and hence $$0 \leqslant f(x) - s_n(x) < \frac{1}{2^n},$$ which shows that $s_n(x) \to f(x)$. To see that $s_{n+1}(x) \geqslant s_n(x)$, note that $k_{x,n} \leqslant 2^n\cdot f(x) < k_{x,n}+1$ implies $2k_{x,n} \leqslant 2^{n+1}\cdot f(x) < 2k_{x,n}+2$, and so we have either $k_{x,n+1} = \lfloor 2^{n+1}\cdot f(x)\rfloor = 2k_{x,n}$ - if $2^{n+1}\cdot f(x) < 2 k_{x,n}+1$ - or $k_{x,n+1} = \lfloor 2^{n+1}\cdot f(x)\rfloor = 2k_{x,n}+1$. In the first case, we have $$s_{n+1}(x) = \frac{k_{x,n+1}}{2^{n+1}} = \frac{2k_{x,n}}{2^{n+1}} = \frac{k_{x,n}}{2^n} = s_n(x),$$ and in the second, we have $$s_{n+1}(x) = \frac{k_{x,n+1}}{2^{n+1}} = \frac{2k_{x,n}+1}{2^{n+1}} = \frac{k_{x,n}}{2^n} + \frac{1}{2^{n+1}} = s_n(x) + \frac{1}{2^{n+1}}.$$ In either case, $s_{n+1}(x) \geqslant s_n(x)$, so the sequence $s_n(x)$ is nondecreasing. Since $x$ was arbitrary, the entire sequence of simple functions is nondecreasing, and converges pointwise to $f$ (assuming all values of $f$ are real; if $f$ takes $+\infty$ as a value, we must modify $s_n$ if we want pointwise convergence everywhere, for example by adding $2^n\cdot \chi_{F_n}$ to $s_n$, where $F_n = \{x : f(x) \geqslant 2^n\}$).
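The key quantitative facts in the argument, $s_{n+1}(x) \geqslant s_n(x)$ and $0 \leqslant f(x) - s_n(x) < 2^{-n}$ once $2^n > f(x)$, can be checked numerically at a sample point (an added sketch evaluating $s_n$ through the value $y = f(x)$):

```python
import math

def s(n, y):
    # s_n evaluated at a point where f takes the value y >= 0, with the corrected
    # half-open intervals: s_n = k / 2^n for k = floor(2^n y), provided k < 2^(2n).
    k = math.floor(2**n * y)
    return k / 2**n if k < 2 ** (2 * n) else 0.0

y = 2.718  # a sample value f(x)
prev = 0.0
for n in range(1, 20):
    cur = s(n, y)
    assert cur >= prev                          # nondecreasing in n
    assert cur == 0.0 or 0 <= y - cur < 2**-n   # within 1/2^n once 2^n > y
    prev = cur
```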
H: Expression for hyperbola on complex plane The hyperbola $$x^2 - y^2 = 1$$ has a simple expression in the complex plane as $\{z^2 + \bar{z}^2 = 2\}$. Is there a similarly simple expression for a hyperbola $\frac{x^2}{a^2}-\frac{y^2}{b^2}=1$? Or an ellipse? I know we can express hyperbolae and ellipses as images of vertical and horizontal lines under the sine function. AI: Well, there's one way to find out. Solving the system $$z = x + iy$$ $$\overline{z} = x - i y$$ for $x$ and $y$, we get $$x = \frac{z+\overline{z}}{2}$$ and $$y = \frac{z-\overline{z}}{2i}.$$ Plugging these into the equation $$\frac{x^2}{a^2} - \frac{y^2}{b^2} = 1,$$ we get $$\frac{(z+\overline{z})^2}{4a^2} + \frac{(z-\overline{z})^2}{4b^2}=1.$$ If there's no relation between $a$ and $b$ then the $z \overline{z}$ term won't cancel like it did in the $a = b$ case you gave above. However, if instead of starting with an equation like $x^2/a^2 - y^2/b^2 = 1$ you think more geometrically there's a better way of doing things. If we let $p$ and $q$ be the foci of a hyperbola, we can write an equation like $$||z - p| - |z - q|| = 2a$$ which simply encodes the geometric definition of the hyperbola.
H: Find a linear transformation Find a linear transformation $T$ on $\mathbb R^2$ and a closed subset $C$ of $\mathbb R^2$ such that $T(C)$ is not closed. Please.... Thanks. AI: Consider $T:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ given by $T(x,y) = (x,0)$. It maps the closed set $\{(x,1/x)\ : x \neq 0\}$ (a hyperbola) to $\{(x,0) : x \neq 0\}$, which isn't closed, since it is missing the limit point $(0,0)$.
H: How do I derive the formula for $\sum\limits_{x = 3}^{\infty} 1.536 (x^2) \left(\frac 5 8\right)^x $? How would I find the summation of $$\sum\limits_{x = 3}^{\infty} 1.536 (x^2) \left(\frac 5 8\right)^x $$ Would I have to take the 2nd derivative of $(1/1-x)$? AI: Yes, but it’s helpful to insert an intermediate step. Starting with $$\frac1{1-x}=f(x)=\sum_{n\ge 0}x^n\;,$$ you get $$\frac1{(1-x)^2}=f\,'(x)=\sum_{n\ge 0}nx^{n-1}\;,$$ and therefore $$\frac{x}{(1-x)^2}=xf\,'(x)=x\sum_{n\ge 0}nx^{n-1}=\sum_{n\ge 0}nx^n\;.$$ Now what happens if you differentiate a second time and then multiply by $x$ again?
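Carrying the hint to its conclusion (second differentiation, then multiplying by $x$) yields the standard closed form $\sum_{x\ge 1} x^2 r^x = \frac{r(1+r)}{(1-r)^3}$; an added numeric check for $r = 5/8$:

```python
r = 5 / 8
# Differentiating the geometric series twice (multiplying by x after each step)
# gives sum_{x>=1} x^2 r^x = r(1+r)/(1-r)^3.
full = r * (1 + r) / (1 - r) ** 3
assert abs(full - sum(x**2 * r**x for x in range(1, 400))) < 1e-10

# Drop the x = 1 and x = 2 terms to start at x = 3, then apply the 1.536 factor.
tail = full - r - 4 * r**2
assert abs(1.536 * tail - 1.536 * sum(x**2 * r**x for x in range(3, 400))) < 1e-10
```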
H: projection onto vector spaces How do you project a vector onto the euclidean ball? For example, if there is a vector $x ∈ R^n$ how does one project this onto the euclidean ball? What are the steps for projecting a vector onto a subspace? Is there a formula? AI: If $\|x\| \le 1$, then $x$ already lies in the closed unit ball, so it is its own projection. Otherwise, the usual thing is to divide $x$ by its magnitude (assuming $x \ne 0$): $$\frac{x}{\|x\|}$$ is a vector whose magnitude is exactly $1$, i.e. the point on the surface of the ball closest to $x$.
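A small sketch of this projection (added; it assumes the target is the closed unit ball, so points already inside are left unchanged):

```python
import math

def project_to_unit_ball(x):
    # Euclidean projection onto the closed unit ball: interior points are fixed;
    # outside points map to the nearest surface point, x / ||x||.
    nrm = math.sqrt(sum(c * c for c in x))
    return list(x) if nrm <= 1 else [c / nrm for c in x]

assert project_to_unit_ball([0.3, 0.4]) == [0.3, 0.4]   # already inside: unchanged
p = project_to_unit_ball([3.0, 4.0])
assert p == [0.6, 0.8]                                  # scaled onto the sphere
assert abs(math.hypot(*p) - 1) < 1e-12
```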
H: Cauchy sequences and condition $X_{n+1} – X_n\to 0$ Let $\{X_n\}$ be a sequence and suppose that the sequence $\{X_{n+1} – X_n\}$ converges to $0$. Give an example to show that the sequence $\{X_n\}$ may not converge. Hence, the condition that $|X_n-X_m| < \epsilon$ for all $m,n \ge N$ is crucial in the definition of a Cauchy sequence. AI: Hint: What can you say about the partial sums of, say, the harmonic series? Or a sequence involving $\ln{n}$? More generally, think of your favorite function $f$ with $\lim_{n \to \infty} f(n) = \infty$, but that approaches $\infty$ "slowly."
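A concrete instance of the hint, $X_n = \sqrt{n}$ (an added check): consecutive differences shrink to $0$, yet the sequence diverges, so the condition $X_{n+1} - X_n \to 0$ alone does not make a sequence Cauchy.

```python
import math

# X_n = sqrt(n): X_{n+1} - X_n = 1/(sqrt(n+1)+sqrt(n)) -> 0, but X_n -> infinity.
assert math.sqrt(10**6 + 1) - math.sqrt(10**6) < 1e-3   # adjacent terms nearly equal
assert math.sqrt(10**12) > 10**5                        # yet the sequence is unbounded
```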
H: Noncommutativity of tensor algebra My question is simple. Let $M$ be an $A$-module and let $T(M)$ be its tensor algebra. I saw that it is noncommutative in general... but I can't understand this fact... I think that by commutativity of tensor product it is commutative... Help me... AI: Let me change notation: $R$ rather than $A$ will denote the (commutative) base ring, and I will abbreviate the tensor product $\otimes_R$ on the category of left $R$-modules to $\otimes$. Keep in mind that commutativity of multiplication $m$ on an $R$-algebra $A$ is expressed by a commutative triangle which says that the composite $$A \otimes A \stackrel{\sigma}{\to} A \otimes A \stackrel{m}{\to} A$$ equals $m$, where $\sigma$ is the natural symmetry isomorphism on the tensor product. However, if you look at how the multiplication on the tensor algebra $A = \sum_{n \geq 0} M^{\otimes n}$ is defined: $$A \otimes A \cong (\sum_i M^{\otimes i}) \otimes (\sum_j M^{\otimes j}) \cong \sum_n \sum_{i + j = n} M^{\otimes (i+j)} \stackrel{\sum_n \nabla}{\to} \sum_n M^{\otimes n} = A,$$ then you see that nowhere is the symmetry isomorphism used. In other words, the only thing used in this construction is the distributivity of $\otimes$ over coproducts, and it would make perfect sense even in contexts where $\otimes$ is not symmetric monoidal (e.g., the tensor product of bimodules). It might help to run through this in the case where $R = k$, a field, and where $M = V$ is a $d$-dimensional vector space over $k$ with basis elements $e_1, \ldots, e_d$. In this case the construction $T(V) = \sum_n V^{\otimes n}$ may be identified with the noncommutative polynomial algebra generated by $d$ indeterminates $e_1, \ldots, e_d$, with $V^{\otimes n}$ the homogeneous component of degree $n$ monomials in $d$ variables; this has dimension $d^n$ as expected.
H: a question about continuously differentiable functions Given that $f$ is a continuously differentiable function, what is a sufficient condition for $$\lim_{n\rightarrow \infty } \int_{-n}^{x}f'(s)\,ds = f(x) ?$$ The hint from the instructor is to think about the behavior of $f$ and $f'$ as $x$ approaches negative infinity. Any idea how to get started? I've been stuck for so long... got no clue... Thank you AI: Hint: What will guarantee that $f(x)-f(-n)\to f(x)$ as $n\to\infty$?
H: Advanced urn problem Imagine there are two urns — urn A and urn B. Urn A contains 3 blue balls and 7 red balls. Urn B contains 7 blue balls and 3 red balls. Balls are now randomly drawn from one of these urns where the drawn ball is always placed back into the same urn. Twelve such random draws yielded 8 red balls and 4 blue balls. What do you estimate the probability that the balls were drawn from urn A? AI: Hint: Can you calculate the probability that twelve draws from $A$ gave that result? Similarly for draws from $B$? Then the probability you want is the first divided by the sum.
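Assuming equal prior odds for the two urns (the question does not state a prior), the answer's recipe is Bayes' rule; the binomial coefficient $\binom{12}{8}$ appears in both likelihoods and cancels. A quick sketch:

```python
# Likelihoods of 8 red, 4 blue in 12 draws (binomial coefficient cancels)
like_A = 0.7**8 * 0.3**4   # urn A: P(red) = 7/10
like_B = 0.3**8 * 0.7**4   # urn B: P(red) = 3/10
posterior_A = like_A / (like_A + like_B)   # ≈ 0.967
```

Dividing numerator and denominator by $0.7^4\,0.3^4$ shows the posterior simplifies to $0.7^4/(0.7^4+0.3^4)$.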
H: a question about a ring (unit, multiplicative inverse) $S$ is the power set of the integers $\Bbb Z$. Define two binary operations: $\oplus$, where $A\oplus B=(A\cup B)-(A\cap B)$ is the symmetric difference, and $\ast$, where $A \ast B = A \cap B$; these make $S$ a ring. Q: Does $\ast$ have a unit? I'm wondering whether the multiplicative identity is $\Bbb Z$... it feels so weird.... And what would the inverse of an element of $S$ look like? Thanks for any help. AI: HINT: What are $\varnothing\oplus A$ and $\Bbb Z\cap A$? And what is $A\oplus A$?
H: differential equation of motion - how to deal with squared differentials This is probably something pretty elementary. But... The equation given is $\frac{1}{2}m \dot x^2 - gmx = E$ and the assignment is to solve it to show you get the same thing as Newton's equation of motion. I just want to know if I am approaching solving this correctly. When I move the variables around to solve the differential equation, I end up with $$\dot x^2 = \frac{2(E+gmx)}{m} \Rightarrow \dot x = \sqrt{\frac{2(E+gmx)}{m}}$$ and then I am left with what seems like a pretty ugly integral. Maybe that's just the way it is. But when I try to integrate this I end up with $$\frac{2}{3}\left(\frac{2(E+gmx)}{m}\right)^{3/2}t$$ or something like it, and I know that is wrong; basically I am at a loss for how to approach something like this. Maybe it's just integration by parts or some such? Should I be changing the $E$ to something in terms of $x$ and $t$? Or is there a simpler approach to this kind of differential equation generally? Thanks in advance. AI: Hint: Try separating the variables and writing the integration as: $$\displaystyle \int \dfrac{\sqrt{m}}{\sqrt{E + g m x}} dx = \int \sqrt{2} dt$$ Let $u = E + g m x \rightarrow du = g m \, dx$ Can you finish it? Spoiler $x(t) = \dfrac{gt^2}{2} - \dfrac{E}{gm} \pm gc t +\dfrac{g c^2}{2}$
H: Fourier sine series for function F(t) = t for 0 How to get to the part circled with red? I tried to compute it on Wolfram alpha.. (http://www.wolframalpha.com/input/?i=2%2FL+*Integrate+x+sin%28%28n+pi+x%29%2FL%29+dx+from+0+to+L) Still confused. Need help. AI: It's all right, the only problem is that Wolfram Alpha didn't know that $n$ is an integer. For a non-integral $n$, you can't in general simplify Wolfram Alpha's result. For $n$ an integer, you can simplify $$\sin (n\pi) = 0$$ and $$\cos (n\pi) = (-1)^n$$ to get the simpler result.
H: Definition verification from two different books? In Kaplansky's Set Theory And Metric Spaces, he mentions a useful example of a neighborhood of $x$ is a closed ball with center $x$. However, one of the theorems in baby Rudin is "Every neighborhood is an open set". I'm confused? AI: You’re seeing two different definitions of neighborhood. Kaplansky is using the more inclusive definition: $N$ is a nbhd of $x$ if $x$ is in the interior of $N$, or, equivalently, if there is an open set $U$ such that $x\in U\subseteq N$. Rudin is using the narrower definition: $N$ is a nbhd of $x$ if $N$ is an open set containing $x$. Kaplansky would call Rudin’s nbhds open neighborhoods.
H: Coordinate vectors for different size matrices Hi, I am having trouble with this question: Let $A = \left[\begin{array}{cccc}-1&3&-4&-3\\2&-1&0&-3\end{array}\right]$ Find the coordinate vector for the matrix $A \cdot A^{T}$ with respect to the standard basis for $R^{2 \times 2}$. I figured that $A \cdot A^{T} = \left[\begin{array}{cccc}5&-5&-4&-3\\-5&10&12&-6\\-4&12&16&-12\\-3&-6&12&18\end{array}\right]$ and the standard basis for $R^{2 \times 2}$ is $\left[\begin{array}{cc}1&0\\0&0\end{array}\right], \left[\begin{array}{cc}0&1\\0&0\end{array}\right], \left[\begin{array}{cc}0&0\\1&0\end{array}\right], \left[\begin{array}{cc}0&0\\0&1\end{array}\right]$, but that is as far as I get: I can't figure out how $2\times2$ matrices can result in a $4 \times 4$ matrix. AI: What you computed is $A^tA$, which is $4\times 4$; the product in the question, $AA^t$, is $2\times 2$: $$AA^t=\begin{pmatrix}-1&\;\;3&-4&-3\\\;\;2&-1&\;\;0&-3\end{pmatrix}\begin{pmatrix}-1&\;\;2\\\;\;3&-1\\-4&\;\;0\\-3&-3\end{pmatrix}=\begin{pmatrix}35&4\\4&14\end{pmatrix}$$ Reading this off against the standard basis gives the coordinate vector $(35,4,4,14)^T$.
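A quick sanity check of the $2\times 2$ product in plain Python (a sketch; rows of $A$ dotted with rows of $A$ give the entries of $AA^t$):

```python
A = [[-1, 3, -4, -3],
     [2, -1, 0, -3]]

# A * A^T: entry (i, j) is the dot product of rows i and j of A
AAt = [[sum(A[i][k] * A[j][k] for k in range(4)) for j in range(2)]
       for i in range(2)]

# Reading the 2x2 result row by row against the standard basis
coord = [AAt[0][0], AAt[0][1], AAt[1][0], AAt[1][1]]
```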
H: Contraction and Fixed Point How do I show that for $T: X \rightarrow X$ where X is complete and $T^m$ is a contraction that T has a unique fixed point $x_0 \in X$. I know there exists $\lambda_1 \in (0,1)$ for $x, y \in X$ such that $d(T^mx, T^my) \leq \lambda_1 d(x, y)$ and I need to show that T is a contraction and then apply the fixed point theorem but how do I do that? AI: You don't need to show that $T$ is a contraction. That might be false. (E.g., $X=\mathbb R^2$, $T(x,y)=(0,2x)$, $m=2$.) then apply the fixed point theorem So you know a fixed point theorem that would apply if $T$ were a contraction. That means that you know a fixed point theorem that does apply to $T^m$. Hence, you know that $T^m$ has a unique fixed point $x_0\in X$. $T$ cannot have any other fixed points, because every fixed point of $T$ is a fixed point for all powers of $T$. Thus the remaining work is to show that $x_0$ is in fact also a fixed point for $T$. Note that $x_0=T^m(x_0)$ and $T(x_0)=T(T^m(x_0))=T^m(T(x_0))$, so $d(T(x_0),x_0)=d(T^m(T(x_0)),T^m(x_0))\leq \lambda_1 d(T(x_0),x_0)\implies d(T(x_0),x_0)=0.$ Alternatively, as noted, $T^m(T(x_0))=T(x_0)$, which shows that $T(x_0)$ is a fixed point for $T^m$, hence $T(x_0)=x_0$ by uniqueness.
H: how to calculate the integral $\lim_{h \to -\frac{1}{2}} \int_0^{1+2h}\frac{\sin(x^2)}{(1+2h)^3}dx$ How to calculate the integral $\displaystyle\lim_{h \to -\frac{1}{2}} \int_0^{1+2h}\frac{\sin(x^2)}{(1+2h)^3}dx$? Not sure if the limit exists or not. AI: Substituting $\varepsilon = 1+2h$, we may rewrite your limit as $$\displaystyle\lim_{\varepsilon \to 0}\frac{1}{{\varepsilon^3}} \int_0^{\varepsilon}{\sin(x^2)}dx$$ At this point, you may use L'Hopital's Rule.
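A numerical check of the rewritten limit (a pure-Python midpoint-rule sketch; the value $1/3$ is what L'Hopital gives, since $\sin(x^2)\approx x^2$ near $0$):

```python
import math

def midpoint_integral(f, a, b, n=1000):
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

eps = 1e-2
ratio = midpoint_integral(lambda x: math.sin(x * x), 0.0, eps) / eps**3
# ratio ≈ 1/3; the deviation is of order eps**4
```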
H: Can every topological space be considered as a subspace of a separable space? We know that subset of a separable space may not be separable. Now is it true that any topological space can be considered as a subspace of a separable space? Please give a hint. AI: Yes. Let $\langle X,\tau\rangle$ be a topological space, and let $p$ be any point not in $X$. Let $$\tau'=\{U\cup\{p\}:U\in\tau\}\cup\{\varnothing\}\;;$$ then $\tau'$ is a topology on $X\cup\{p\}$ such that the relative topology on $X$ is $\tau$, and $\{p\}$ is a countable dense subset of $X\cup\{p\}$. Of course $\langle X\cup\{p\},\tau'\rangle$ is never $T_1$, though it is $T_0$ if $\langle X,\tau\rangle$ is.
H: For symmetric positive definite $A,\ B$ does $\sqrt{AB}=\sqrt{A}\sqrt{B}$? Let $A, B \in F^{n\times n}$ be Hermitian and positive definite and assume that $S = AB$ is also positive definite. Show that for the unique positive definite square roots of $A$, $B$, $S$, we have $\sqrt{S} =\sqrt{AB} =\sqrt{A}\sqrt{B}$. This result is required to prove the next part. However, I feel that the above is not correct. I tried checking it with random SPD matrices in Matlab. My analysis shows that it is true only when $\sqrt{A},\ \sqrt{B}$ commute. I wonder, am I missing anything here? AI: In order for $AB$ to be positive definite, you need $AB = (AB)^*=B^*A^*=BA$, so $A$ and $B$ commute; hence $\sqrt A$ and $\sqrt B$ also commute, and yes, it is true.
H: good textbook to self-learn systems of ODEs I've taken regular Ordinary Differential Equations. Right now I'm taking Systems of ODEs and the textbook is less than stellar. I was wondering if anyone could point me to a decent self-study book for the subject. Systems of ODEs: matrices composed of regular ODEs Example: $\frac{d}{dt}\bigl(\begin{smallmatrix} x\\ y \end{smallmatrix} \bigr)=\bigl(\begin{smallmatrix} a&b\\ c&d \end{smallmatrix} \bigr)\bigl(\begin{smallmatrix} x\\ y \end{smallmatrix} \bigr)$ Thanks! ^_^ AI: You might want to peruse these online and see if they satisfy your needs and tastes: Differential Equations and Their Applications, M. Braun Differential Equations: A Dynamical Systems Approach (series) by J Hubbard and B West There are also many excellent books of Nonlinear Equations and Chaos that include systems. Certainly there are excellent notes and examples you can find online and I would imagine Opencourseware (like MIT). Lastly, if you have a college library, you might want to peruse it and see if there are books that suit your needs.
H: Prove the least upper bound property using $\mathbb{Q}$-Cauchy sequences. Hi everyone I'd like to know if the next proof is correct. I'd appreciate any suggestion mainly in the points marks with (1) and (2). Theorem: Let $E$ be a nonempty subset of real numbers which has an upper bound. Then it must have exactly one least upper bound. Proof: We have to show that if $E$ has at most one lub (least upper bound). Suppose $M$ and $M'$ are lub's for E. Then, $M \le M'$ because $M$ is a lub and $M'$ is an upper bound. Similarly, if we interchanged the roles of $M$ and $M'$, i.e., $M$ is an upper bound and $M'$ is a lub then $M' \le M$. Hence, $M= M'$. Now we have to show that there exists at least one lub. Let $n$ be a positive natural number, let $M$ be an upper bound of $E$ and let $x_0$ be an element of $E$ (this is possible because $E$ is a nonempty set by hypothesis). Then $x_0 \le M$, thus $M-x_0$ is a positive real number. Furthermore, by the Archimedean property there is a natural number $K$ such that $K/n>M-x_0$, i.e., $x_0+K/n >M$ which means that $x_0+K/n$ is an upper bound of E. Moreover $x_0-1/n$ is not an upper bound for $E$. We claim that there exists a unique natural number $0\le i \le K$ such that $x_0+i/n$ is an upper bound, but $x_0+(i-1)/n$ is not. We argue this by contradiction. Suppose there not exists such a $i$. Then, this means that whenever $x_0+(i-1)/n$ is not an upper bound then $x_0+i/n$ it must not be an upper bound. Using induction it is not difficult to see that this would imply $x_0+m/n$ is not an upper bound for every natural number $m$. But $K$ by construction is a natural number, a contradiction. Thus, there exists a $i$ such that $x_0+i/n$ is an upper bound and $x_0+(i-1)/n$ is not an upper bound. To show that this $i$ is unique we argue again by contradiction. Suppose there exists a $i'$ with the desired property and $i' \not= i$. Without loss of generality we may assume that $i' < i$. 
So, we have $\,i' \le i-1$ and thus $\,x_0+i'/n \le x_0+(i-1)/n$, which implies that $x_0+i'/n$ is not an upper bound, a contradiction. By the last claim we already know that there is a unique integer such that $x_0+i/n$ is an upper bound for $E$, but $x_0+(i-1)/n$ is not. We now claim that $x_0+(i-1)/n <x_0+i/n$. If were not $\,x_0+(i-1)/n \ge x_0+i/n$ we get a contradiction. By the denseness of $\mathbb{Q}$ in $\mathbb{R}$ there must be a $q_n\in \mathbb{Q}$ such that $x_0+(i-1)/n < q_n <x_0+i/n$. Thus $q_n-1/n$ is not an upper bound because is less than $x_0+i/n$ which is not an upper bound and $q_n+1/n$ is an upper bound because is bigger than the upper bound $x_0+i/n$. Let $n,m\in \mathbb{N}- \{0\}$. Since $q_n+1/n$ is an upper bound and $q_m-1/m$ is not an upper bound. Then $q_m-1/m<q_n+1/n$ otherwise we have a contradiction. A similar argument shows that $q_n-1/n<q_m+1/m$. Thus, $-(1/n+1/m)<q_n-q_m<1/n+1/m$, in other words we have $|q_n-q_m|<1/n+1/m$. Let $M$ be a positive integer and $n,m\ge M$ then $|q_n-q_m|<2/M$. For what we have said above if we define the $\mathbb{Q}$-sequence $(q_n)_{n=1}^{\infty}$ this is $2/M$-steady for every $M$. We claim that $(q_n)$ is $\mathbb{Q}$- Cauchy sequence. So let $\epsilon>0$ be arbitrary it will sufficient to have $2/M< \epsilon$ but this means $M > 2/ \epsilon$ which is possible by the Archimedean principle. Now since $(q_n)$ is a $\mathbb{Q}$- Cauchy sequence it must be the formal limit of a real number. Let $S$ be such a real number. To conclude the proof our task is to show that $S$ is the least upper bound of $E$. (1) We shall show that $S$ is an upper bound. Let $x\in E$ be arbitrary. Since $q_n+1/n$ is an upper bound for every n. So, we have $x \le q_n+1/n$ for every $n$. Thus, $x \le q_n$ and hence $x \le S$ which prove that $S$ is an upper bound (2) Now we will show that $S$ is the lub. Let $T$ be an upper bound of $E$. Since $q_n-1/n$ is not an upper bound for every n, thus $q_n-1/n \le T$. 
Then $q_n\le T$ and then $S\le T$ which prove that $S$ is the least upper bound and conclude the proof. Thanks. :) AI: There’s an oversight in your argument for the existence of a least upper bound for $E$: from $x_0\le M$ you can only conclude that $M-x_0$ is non-negative, not that it’s positive. However, if it’s $0$, then clearly $M$ itself is a least upper bound for $E$, so we can focus on the case in which $M-x_0>0$. Instead of using induction to prove that $i$ exists, you could simply let $$B=\left\{i\in\Bbb N:x_0+\frac{i}n\text{ is an upper bound for }E\right\}$$ and set $i=\min B$: $K\in B$, so $B\ne\varnothing$, and the well-ordering principle applies. You’ve a typo in the paragraph in which you choose $q_n$: you meant to say that $q_n-\frac1n$ is not an upper bound for $E$ because it’s smaller than $x_0+\frac{i-1}n$, not because it’s smaller than $x_0+\frac{i}n$. I’ve never seen the term steady in this context, but its meaning is clear from the previous paragraph. At the end in (1) you cannot go directly from $x\le q_n+\frac1n$ for each $n\in\Bbb Z^+$ to $x\le q_n$ for each $n\in\Bbb Z^+$, because the $q_n$’s need not all be the same. What if, for instance, $q_n=-\frac1{n+1}$, and $x=0$? You can, however, argue as follows. If $S<x$, choose $n\in\Bbb Z^+$ such that $\frac3n<x-S$. Then $S\ge q_n-\frac2n$ (why?), and $x\le q_n+\frac1n$, so $x-S<\frac3n$, which is a contradiction. You have the same kind of error in (2): the fact that $q_n-\frac1n\le T$ for all $n$ does not imply that $q_n\le T$ for all $n$. For now I’ll leave it to you to see if you can repair this last part.
H: simple probability question, two aces drawn There is a pile of 6 cards that contains two aces. The cards are then sorted out into two separate piles. What is the chance that there is an ace in each pile? My approach is that there are ${3 \choose 1} {3 \choose 1}$ or $9$ ways such that the aces are in different piles. The total number of ways to arrange $6$ cards is $6! = 720$ ways. But since it doesn't matter which ace or card is in the given pile, we can treat them as indistinguishable, so there are $6!/(2!\,4!) = 15$ distinct possible ways. I am not sure if this approach makes sense, as the probability of $9/15$ seems quite high. Any help/feedback appreciated. AI: Your answer is absolutely correct. There are many ways to figure this. Suppose you and your friend Harry (the two aces) are in a group of six people which is going to be divided randomly into two teams of three; what are the chances that you and Harry end up on different teams? Well, there are five other people besides you; two of them are going to be on your team, three of them are going to be on the other team. So Harry (or anyone else you care to name) has three chances out of five of being on the other team. $3/5=9/15$.
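The answer can also be brute-forced by enumerating all $\binom{6}{3}=20$ ways to form the first pile (assuming piles of three, as in the answer's teams analogy; cards $0$ and $1$ play the aces):

```python
from itertools import combinations
from fractions import Fraction

cards = range(6)
aces = {0, 1}
splits = list(combinations(cards, 3))  # first pile; the rest form the second pile
# an ace in each pile means exactly one ace lands in the first pile
good = [s for s in splits if len(aces & set(s)) == 1]
prob = Fraction(len(good), len(splits))  # 12/20 = 3/5
```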
H: Show that every polynomial is the sum of an even and odd function. I have been given an optional challenge problem at the end of one of my earlier tutorials that I am unsure how to solve. It is a question with three parts, but I would like to tackle them separately with a little help on part a. a function $f$ is even if $f(-x) = f(x)$ for all $x$, and odd if $f(-x) = -f(x)$ for all $x$. Now I am told to show that every polynomial is the sum of both an even and an odd function, but have no idea how to go about this. Is the key to this question that $f(-x) = x^2$ will always $= f(x)$ $\therefore$ $x^2$ is even? I understand that, and if a secondary function was $g(-x) = x$, where x is only positive, then it would make sense that this is a odd function, but it is for all x, and therefore a negative x value in g(-x) would become positive and it wouldn't be an odd or an even function. AI: Hint: $x$, $x^3$, $x^5$, ... are all odd, while $1$, $x^2$, $x^4$, ... are all even (prove this!). So try decomposing a polynomial into a sum of terms only involving even powers of $x$, and terms only involving odd powers of $x$.
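The hint can be checked mechanically: split the coefficient list into even-degree and odd-degree parts. (A sketch; note the more general trick $f(x)=\frac{f(x)+f(-x)}{2}+\frac{f(x)-f(-x)}{2}$ works for any function, not just polynomials.)

```python
def split_even_odd(coeffs):
    """coeffs[i] is the coefficient of x**i; return (even, odd) coefficient lists."""
    even = [c if i % 2 == 0 else 0 for i, c in enumerate(coeffs)]
    odd = [c if i % 2 == 1 else 0 for i, c in enumerate(coeffs)]
    return even, odd

def evaluate(coeffs, x):
    return sum(c * x**i for i, c in enumerate(coeffs))

p = [1, -2, 0, 5, 3]          # 1 - 2x + 5x^3 + 3x^4
e, o = split_even_odd(p)      # e is an even function, o is odd, and e + o = p
```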
H: Linearly independent columns of a matrix product Given $\mathbf{A} = \mathbf{B}\mathbf{C}$, with $\mathbf{B} \in \mathbb{R}^{m\times n}$ and $\mathbf{C} \in \mathbb{R}^{n\times p}$. Say we know that the columns of $\mathbf{A}$ are linearly independent. Does this also imply that the columns of $\mathbf{B}$ and $\mathbf{C}$ are linearly independent? How can I go about proving this? We know $\text{Ker}(\mathbf{BC}) = \left\{\mathbf{0}\right\}$, so $\mathbf{x} = \mathbf{0}$ is the only vector satisfying $$ \mathbf{BCx} = \mathbf{0} $$ From this: $$ \mathbf{B}(\mathbf{C}\mathbf{0}) = \mathbf{B}\mathbf{0} = \mathbf{0} $$ but this only shows that $\mathbf{0}$ is in the null space of $\mathbf{B}$, not that it's the only member of the nullspace. AI: It is not necessarily true that the columns of $B$ are linearly independent. For example, $$ \pmatrix{1 & 0\cr 0 & 1\cr} = \pmatrix{1 & 0 & 0\cr 0 & 1 & 0\cr} \pmatrix{1 & 0\cr 0 & 1\cr 0 & 0\cr}$$ On the other hand, it is true that the columns of $C$ are linearly independent, because ${\rm Ker}(C) \subseteq {\rm Ker}(BC)$.
H: Prove a $T_0$ topological group is $T_1$ How to prove that a $T_0$ topological group is $T_1$. I am a beginner in topological group. Also I want some good reference. AI: In a topological group, the group operations are continuous. So if you have two points $x \neq y$, and a neighbourhood $U$ of $x$ that does not contain $y$, then $yU^{-1}x$ is a neighbourhood of $y$ that does not contain $x$, where $U^{-1} = \{u^{-1} : u \in U\}$. We can see that as follows: $$x \in yV^{-1}x \iff e \in yV^{-1} \iff y \in V$$ for any set $V \subset G$. Since translations are homeomorphisms, $x^{-1}U$ is a neighbourhood of $e$, hence $(x^{-1}U)^{-1} = U^{-1}x$ is also a neighborhood of $e$, and $yU^{-1}x$ is a neighbourhood of $y$.
H: Are functions of this sort bijections from a subset of the reals to the reals? I'm teaching about infinite cardinalities tomorrow and will be showing that $\tan x$ is a bijection from $(-\pi/2, \pi/2)$ to $\mathbb{R}$. As I was putting the slides together, it occurred to me that there are probably lots of bijections from subsets of reals to reals, and started thinking about why $\tan x$ worked. My conjecture is that any function $f : (a, b) \rightarrow \mathbb{R}$ with the following properties is a bijection: $f$ is continuous on $(a, b)$. $f$ is monotonically increasing on $(a, b)$. $\lim_{x \rightarrow a+} f(x) = -\infty$. $\lim_{x \rightarrow b-} f(x) = +\infty$. I have a background in discrete math rather than continuous mathematics, so I'm not exactly sure how I would prove this. I can prove that the function is injective because it's monotonically increasing, but I'm not sure how to prove surjectivity from the other claims. My questions are as follows: Am I right? That is, is my guess correct? If I'm right, how would I prove surjectivity from these points? Thanks! AI: Yes, a function with those properties will be a bijection. It's $1-1$ if we assume that it is strictly monotone, and the intermediate value theorem shows that it's onto as follows: Choose an $\alpha \in \mathbb{R}$. Since $\lim_{x \to b^-} f(x) = \infty$, there exists an $\epsilon > 0$ such that $$b - \epsilon < y < b \implies f(y) > \alpha$$ Likewise, there exists an $\epsilon'$ for the left endpoint $a$. Choosing any $y$ and $z$ in the two respective intervals, we see that some $c$ between $z$ and $y$ must satisfy $f(c) = \alpha$.
H: counting on $4$ pairs of gloves There are $4$ different pairs of gloves. $4$ right handed gloves are given randomly to four persons then $4$ left handed gloves are given.how many ways are possible such that nobody gets the right (correct)pair of gloves. $\underline{\bf{My\;\;Try}}::$ Let the $4$ pairs of gloves as $(a,A)\;\;,(b,B)\;\;,(c,C)\;\;,(d,D)$. Then R.H.S gloves $(A,B,C,D)$ can be distributed as $ = 4!$ ways. Now I Did not understand How can I distributed L.H.S gloves so that nobody get correct pair of gloves Help Required. Thanks AI: You can (and should) look up Derangements. But the numbers here are so small that we can list and count. Let the right gloves be called $A$, $B$, $C$, $D$, and the matching left gloves be $a,b,c,d$. The right gloves can be distributed between the $4$ people in $4!$ ways. Once you have done this, line up the right glove holders in the order $A,B,C,D$ and distribute the left gloves at random. We count the ways in which we can distribute the left gloves so there is no matching pair. If there is no matching pair, left glove $a$ must go to $B$, $C$, or $D$. We count the number of ways there is no match and $a$ goes to $B$, and multiply the result by $3$. So $a$ goes to $B$. Maybe $b$ goes to $A$. Then $c$ must go to $D$, and $d$ to $C$. That gives $1$ way. Maybe $b$ goes to one of $C$ or $D$. We will count the ways in which $b$ goes to $C$, and multiply by $2$. If $a$ goes to $B$, and $b$ to $C$, then $d$ must go to $A$, and $c$ to $D$. That gives $1$ way. Multiply by $2$, since $b$ could have gone to $D$. We conclude that there is a total of $3$ ways in which $a$ can go to $B$. Multiply by $2$. We get $9$. Thus for every one of the $4!$ ways to distribute the right gloves among the $4$ people, there are $9$ ways to distribute the left gloves so that no one is happy, for a total of $4!\cdot 9$.
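The count of $9$ (the number of derangements of $4$ objects) can be confirmed by brute force:

```python
from itertools import permutations

# person i holds right glove i; count left-glove assignments with no match
derangements = [p for p in permutations(range(4))
                if all(p[i] != i for i in range(4))]
total = 24 * len(derangements)  # 4! right-glove distributions times 9
```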
H: Why does $\sum_{j,k\geq 0}\frac{(j+k)a^{j+k}}{j!k!}=\sum_{l=0}^\infty \frac{l(2a)^l}{l!}$? I notice that $$\sum_{j,k\geq 0}\frac{(j+k)a^{j+k}}{j!k!}=2ae^{2a} = \sum_{l=0}^\infty \frac{l(2a)^l}{l!}.$$ Is there a simple intuitive explanation why these two should have the same sum, or is it more or less a coincidence? To be clear, I'm not wondering how to prove they are the same, but wondering if it is part of a general pattern of replacing sums of $j,k$ by single-index sums and getting the same result. AI: It's an instance of a general pattern. It doesn't always work out as nicely as it does here, but the principle is: Cauchy-product. You rearrange the sum so that you sum the terms with $j+k = m$ in a group: $$\begin{align} \sum_{j,k \geqslant 0} \frac{(j+k)a^{j+k}}{j!k!} &= \sum_{m=0}^\infty \left(\sum_{k=0}^m \frac{1}{k!(m-k)!}\right)ma^m\\ &=\sum_{m=0}^\infty \left(\sum_{k=0}^m \binom{m}{k}\right)\frac{ma^m}{m!}\\ &= \sum_{m=0}^\infty \frac{m2^ma^m}{m!} \end{align}$$
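The Cauchy-product rearrangement can be illustrated numerically at a sample value, say $a = 0.1$ (a sketch; both truncated sums converge to $2ae^{2a}$, and the terms decay factorially, so a modest truncation order suffices):

```python
import math

a = 0.1
N = 30  # truncation order

double_sum = sum((j + k) * a**(j + k) / (math.factorial(j) * math.factorial(k))
                 for j in range(N) for k in range(N))
single_sum = sum(l * (2 * a)**l / math.factorial(l) for l in range(N))
closed_form = 2 * a * math.exp(2 * a)
```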
H: Find by integrating the area of the triangle with vertices $(5,1), (1,3)\;\text{and}\;(-1,-2)$ Find by integrating the area of the triangle with vertices $$(5,1), (1,3)\;\text{and}\;(-1,-2)$$ I tried to find the lines through the vertices and integrate, but it is very complicated; is there some better way? AI: It is just tedious. Let $f_1(x) = \frac{5x+1}{2}$, $f_2(x) = \frac{7-x}{2}$, $f_3(x)= \frac{x-3}{2}$. $(-1,f_1(-1)) = (-1,f_3(-1)) = (-1,-2)$, $(1,f_1(1)) = (1,f_2(1)) = (1,3)$, $(5,f_2(5)) = (5, f_3(5)) = (5, 1)$. $A = \int_{-1}^1 (f_1(x)-f_3(x) ) dx + \int_{1}^5 (f_2(x)-f_3(x) ) dx = 4 +8 = 12$.
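The two integrals can be checked numerically; the trapezoid rule is exact for linear integrands, and the shoelace formula gives an independent check (a sketch):

```python
def f1(x): return (5 * x + 1) / 2
def f2(x): return (7 - x) / 2
def f3(x): return (x - 3) / 2

def trapezoid(g, a, b):
    # exact when g is linear
    return (g(a) + g(b)) / 2 * (b - a)

area = trapezoid(lambda x: f1(x) - f3(x), -1, 1) + \
       trapezoid(lambda x: f2(x) - f3(x), 1, 5)

# independent check: shoelace formula on the three vertices
(x1, y1), (x2, y2), (x3, y3) = (5, 1), (1, 3), (-1, -2)
shoelace = abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2
```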
H: Connectedness of Disjoint Union of Connected Sets The definition of connected sets is: A topological space $X$ is connected iff there do not exist sets $U, V \subset X$ such that: $U, V \neq \varnothing$, $U \cap V = \varnothing$ and $U \cup V = X$, with both $U$ and $V$ both open and closed. I am having trouble applying this definition to certain cases--for example, the union of two intervals in the real number line with the usual topology. Intuitively, $C=(0,1) \cup (2,3)$ should be disconnected (and I found a special definition of connectedness for open sets that allows me to prove that), but I don't see how to apply the actual definition of connectedness to prove that (or to prove, for example, the same problem with closed sets). $C$ being disconnected should imply the existence of $U$ and $V$ satisfying the above property, but I can't find any. AI: Take $U=(0,1)$ and $V=(2,3)$: these sets are both open and closed in the space $C$. $(0,1)$ and $(2,3)$ are open in $C$ because each is the intersection with $C$ of a set open in $\Bbb R$, and each is closed in $C$ because it’s the complement in $C$ of an open subset of $C$. The fact that neither is closed in $\Bbb R$ is irrelevant.
H: Asymptotics of a real sequence Let $(a_n)_{n\in\mathbb{N}}$ be a real sequence with $a_n\in O(n^d)$ $(d\in (-1,0))$. Now we consider the expression $$ b_n:=(1-\sqrt{1-a_n}).$$ Is $b_n\in O(\sqrt{n^d})$? Thanks! AI: Yes. Even better, it is in $O(n^d)$. We have $$\sqrt{1-x} = 1 - \frac{x}{2} - \frac{x^2}{8} + O(x^3)$$ for $\lvert x\rvert < 1$ by the Taylor expansion, so $$1 - \sqrt{1-a_n} = 1 - (1 - \frac{a_n}{2} + O(a_n^2)) = \frac{a_n}{2} + O(a_n^2).$$ Since $d < 0$, $n^d$ is dominated by $\sqrt{n^d}$.
H: $\int_{\frac{1}{3}\pi}^{\frac{2}{3}\pi} {\sin(x)\;dx}$ using Riemann sums? How to find the integral $$\int_{\frac{1}{3}\pi}^{\frac{2}{3}\pi} {\sin(x)\;dx}=1$$ using Riemann sums? AI: We will use the Lagrange trigonometric identities: $$ \sum_{k=1}^n\cos(kx)=\frac12\left(\frac{\sin\left(\frac{2n+1}{2}x\right)}{\sin\left(\frac12x\right)}-1\right) $$ and $$ \sum_{k=1}^n\sin(kx)=\frac{\sin\left(\frac{n+1}{2}x\right)\sin\left(\frac n2x\right)}{\sin\left(\frac12x\right)} $$ The Riemann Sum is $$ \begin{align} &\int_{\pi/3}^{2\pi/3}\sin(x)\,\mathrm{d}x\\ &=\lim_{n\to\infty}\sum_{k=1}^n\sin\left(\frac\pi3+\frac\pi3\frac kn\right)\frac\pi{3n}\\ &=\lim_{n\to\infty}\sum_{k=1}^n\left(\sin\left(\frac\pi3\right)\cos\left(\frac\pi3\frac kn\right)+\cos\left(\frac\pi3\right)\sin\left(\frac\pi3\frac kn\right)\right)\frac\pi{3n}\\ &=\lim_{n\to\infty}\sin\left(\frac\pi3\right)\frac\pi{6n}\left(\frac{\sin\left(\frac{2n+1}{2}\frac\pi{3n}\right)}{\sin\left(\frac12\frac\pi{3n}\right)}-1\right)+\cos\left(\frac\pi3\right)\frac\pi{3n}\frac{\sin\left(\frac{n+1}{2}\frac\pi{3n}\right)\sin\left(\frac n2\frac\pi{3n}\right)}{\sin\left(\frac12\frac\pi{3n}\right)}\\ &=\sin^2\left(\frac\pi3\right)+2\cos\left(\frac\pi3\right)\sin^2\left(\frac\pi6\right)\\[9pt] &=\frac34+2\cdot\frac12\cdot\frac14\\[15pt] &=1 \end{align} $$
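The closed-form limit of the Riemann sums can be sanity-checked with a plain midpoint sum (no trigonometric identities needed numerically):

```python
import math

a, b, n = math.pi / 3, 2 * math.pi / 3, 100_000
h = (b - a) / n
riemann = h * sum(math.sin(a + (i + 0.5) * h) for i in range(n))
# riemann ≈ 1, matching cos(pi/3) - cos(2*pi/3) = 1/2 + 1/2
```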
H: An element of order $n$ generates a normal subgroup of $D_n$ Let $a$ be an element of order $n$ of $D_n$. Show that $\langle a\rangle \lhd D_n$ and $D_n/\langle a\rangle \cong \mathbb Z_2$. Proof: Let $K = \langle a\rangle$ for some $a \in G$. Let $H \le K$ be an arbitrary subgroup. Since $H \le K = \langle a\rangle$ it follows that $H = \langle a^d\rangle$ for some integer $d$. If $|a| = 1$ then $a = 1$, $H = K = \{1\}$ and, obviously, $H \lhd G$. Is what I have right? I think to prove the second part I use Lagrange but I'm not sure how. AI: If $D_n$ is the dihedral group of order $2n$, then the subgroup $\langle a\rangle$ has $n$ elements, so by Lagrange it has index $2$. It is a general fact that every subgroup of index $2$ is normal, and the quotient, being a group of order $2$, must be isomorphic to $\mathbb Z_2$.
H: Distributing distinct apples among 5 people How many ways are there to distribute 6 distinct apples among 5 people? How would I do this? I know for identical apples it would be $C(6 + 5 - 1, 6)$. AI: Each apple has $5$ possibilities for the person it goes to. There are $6$ apples. So you might think of $$5 \times 5 \times 5 \times 5 \times 5 \times 5 = 5^6 = 15625.$$
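The count $5^6$ can be confirmed by enumerating every assignment of a recipient to each apple:

```python
from itertools import product

# each of the 6 distinct apples independently goes to one of 5 people
assignments = list(product(range(5), repeat=6))
count = len(assignments)
```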
H: Changing index of summation. $\sum_{y=1}^\infty (1-\theta)^{y-1}\theta$ I am always confused about how to change the index of summation. $$\sum_{y=1}^\infty (1-\theta)^{y-1}\theta$$ The above is supposed to be a geometric sum and sum up to $1-(1-\theta)^x$? But how? AI: Is your upper bound of summation $x$? If yes, set $n=y-1$. $y$ takes the values $1,2,3,\dots x$, so $n$ will take the values $0,1,2,\dots x-1$. So, $$\sum_{y=1}^x(1-\theta)^{y-1}\theta=\sum_{n=0}^{x-1}(1-\theta)^n\theta=\theta\sum_{n=0}^{x-1}(1-\theta)^n=\theta\frac{(1-\theta)^x-1}{(1-\theta)-1}=1-(1-\theta)^x.$$
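The identity can be verified exactly with Python's `fractions` module for a sample $\theta$ (a sketch; $\theta = 1/3$ is an arbitrary choice):

```python
from fractions import Fraction

theta = Fraction(1, 3)

def geometric_sum(theta, x):
    # sum_{y=1}^{x} (1-theta)^(y-1) * theta, computed term by term
    return sum((1 - theta)**(y - 1) * theta for y in range(1, x + 1))
```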
H: if $T$ is normal, then $\sigma(T)=\sigma_{ap}(T)$ I want to show that if the operator $T$ is normal, then $\sigma(T)=\sigma_{ap}(T)$. One inclusion is obvious, but I can't prove that $\sigma(T)\subseteq\sigma_{ap}(T)$. Recall that $\sigma_{ap}(T)=\{\lambda \in\mathbb{C} \mid T-\lambda \text{ is not bounded below}\}$. AI: We show that $\sigma_{ap}(T)^c \subset \sigma(T)^c$: Suppose $\lambda \notin \sigma_{ap}(T)$; then we want to show that $\lambda \notin \sigma(T)$. Note that $\exists c > 0$ such that $$ \|(T-\lambda I)x\| \geq c\|x\| \quad \forall x\in H $$ and hence $(T-\lambda I)$ is injective, so it suffices to prove that $R(T-\lambda I) = H$. Also, $R(T-\lambda I)$ is complete (why?), and so it is closed in $H$. It suffices to show that $R(T-\lambda I)^{\perp} = \{0\}$. So choose $x \in R(T-\lambda I)^{\perp}$; then $$ 0 = \langle x,(T-\lambda I)(T^{\ast} - \overline{\lambda} I)x \rangle $$ $$ = \langle x, (T^{\ast} - \overline{\lambda}I)(T-\lambda I)x\rangle \quad\text{ (since $T$ is normal)} $$ $$ = \langle (T-\lambda I)x, (T-\lambda I)x \rangle \geq c^2\|x\|^2 $$ and hence $x=0$. Thus $R(T-\lambda I) = H$, and so $T-\lambda I$ is invertible.
H: lower sum and upper sum of x^2 ... I hope you're able to understand what I'm writing now: I have to calculate the lim of lower sum and upper sum for the Integral $\int_0^1 x^2 dx, $ by decomposition the interval into n pieces of the same length. So, I know the following things: 1.) the width of $ x_{k+1} - x_k = 1/n$ since $(1-0)/n = 1/n. $ This means, all "stripes" have the same width. 2.) the "height" for $ x_0 = f(0) = 0$, $x_{0+k} = f(0+k) = k^2 $ ... so: $ f(x_k) = (k/n)^2 $ ? 3.) the lower sum is: $ \sum_{k=0}^{n-1} (k/n)^2 * 1/n$ - correct? 4.) the upper sum is: $ \sum_{k=1}^{n} (k/n)^2 * 1/n$ - correct? Well, I will go on when I know if this is correct or not :) EDIT: Hello, thank you all very much (I love this website :-) is it also correct, that the upper sum is from "k=1 to n" while the lower sum is from "k=0 to n-1" ? I wrote this simply by taking a look at a specific graph.. Okay, now I have to find the limit: Now for lower sum.. : 5.) $ lim_{n \to \infty}\ \sum_{k=0}^{n-1} (k/n)^2 * 1/n$ this means: $ lim_{n \to \infty} (\sum_{k=0}^{n-1} (k^2)*(1/n^2) * (1/n)) $ $=$ $lim_{n \to \infty} ((1/n^3)\sum_{k=0}^{n-1} (k^2) =$ $ lim_{n \to \infty} ((1/n^3) \frac{n*(n+1)*(2n+1)}{6}) $ $ lim \frac{2n^2 +3n +1}{6n^2} = 1/3 + 0 + 0 + 0 = 1/3 $ ?? I'm sorry if it's not written in the mathematical-correct way! AI: Yes, everything you've written is correct. (except for that one weird chain of equations $x_{0+k} = f(0+k) = k^2$ which doesn't really make sense) Although this problem is simple enough that what I suggest below may seems silly, sometimes doing such things help in complex problems. It may be useful to define variables that more closely relate to the actual objects of the problem. Specifically, rather than picking out the $n+1$ points $x_k = k/n$, you might instead pick out the $n$ intervals $$ I_k = \left[ \frac{k}{n}, \frac{k+1}{n} \right] $$ for $k = 0, \ldots, n-1$. Having a name for the intervals makes it simpler to reason about them. 
Re: your edit: you're mostly okay. You've made one main error: you used the formula for $\sum_{k=0}^n k^2$, but the upper limit on the sum is only $n-1$. It turns out, however, that error doesn't affect the end result. The number of steps you need to write between $$ \lim_{n \to \infty} ((1/n^3) \frac{n*(n+1)*(2n+1)}{6}) $$ (after correcting for your error) and $$ 1/3 $$ depends on how much you want to show you know the steps between those two. In particular, if you are in a setting where those are "obvious", you can just skip all of the way. But if you are in a class where your professor is assessing your knowledge of that stuff and it can't just be assumed you know, then you need to show more steps! Of course, if you are unsure of your steps, it's best to write them out. Not only does seeing your own work make it easier to spot errors or help to be confident it's correct, it would let your professor and others see how you do the work and suggest corrections and improvements. (P.S. use \lim instead of lim when you write the math, and you'll get the better typesetting you see in my post)
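The lower and upper sums can be computed directly and compared with the closed forms and the limit $1/3$ (a sketch; note the lower sum runs over $k = 0, \ldots, n-1$, matching the corrected upper limit in the formula):

```python
def lower_sum(n):
    return sum((k / n)**2 for k in range(0, n)) / n      # k = 0, ..., n-1

def upper_sum(n):
    return sum((k / n)**2 for k in range(1, n + 1)) / n  # k = 1, ..., n

n = 10_000
L, U = lower_sum(n), upper_sum(n)
# closed forms: L = (n-1)n(2n-1)/(6n^3), and U - L = 1/n
```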
H: Two basic questions about uniformizers in algebraic curves I'm recently trying to study basics about algebraic curves. However, apparently I'm still quite unfamiliar with the subject as there have occurred 2 questions that seem quite basic to me, yet I don't directly see a way to make them clear to myself: 1.) For an (edit: smooth) algebraic curve $C/K$, why does $K(C)$ contain uniformizers for any point $P$ of $C$? My - so long - first and only idea was to take a minimum degree set of generators of the maximal ideal $M_P$ at $P$ and try to find a contradiction if all of them were also contained in $M^2_P$. However, notation of degree in this situation is not entirely clear to me and apparently there are lots of things that can go wrong, so I have quite the feeling this is the wrong path. 2.) Why is the ramification index $e_\Phi(P) := ord_P(\Phi^{\ast} t_{\Phi}(P))$ for a uniformizer $ t_{\Phi}(P)$ at $P$ well defined and doesn't depend on the choice of the uniformizer? Thanks a lot in advance. AI: You have to assume that $C$ is smooth. Then every stalk $\mathcal{O}_{C,P}$ is regular, noetherian and of dimension $1$, hence a discrete valuation ring. You should say what $\Phi$ is. I assume that it is a finite morphism of curves. Then you can check that $e_{\Phi}(P)$ is the order of $\Phi^*(\pi_P)$ for every uniformizer $\pi_P$ at $P$. The reason is simply that uniformizers differ only by units, and $\Phi^*$ (as a ring homomorphism) preserves this property.
H: Chance of getting six in three dice I am having a hard time wrapping my head around this and am sure that my answers are wrong. There are three dice. A. Chance of getting exactly one six on the three dice. $$(1/6) * 3 = 1/3$$ B. Chance of getting exactly two sixes. $$(1/6 * 1/6) * 1.5 = 1/24$$ C. Chance of getting exactly $~3~$ sixes. $$1/6 * 1/6 * 1/6 = 1/216$$ D. Chance of any combination of A, B and C $$1/3 + 1/24 + 1/216= 72/216 + 9/216 + 1/216 = 82/216$$ AI: A. There is a total of $6^3 = 216$ combinations if you roll 3 dice. There are $5^2 \times 3 = 75$ combinations in which you get exactly one 6. Thus there is a $75/216 = 25/72$ chance of getting exactly one 6 when rolling 3 dice. B. There are $5 \times 3 = 15$ combinations in which you get exactly two 6s. Thus there is a $15/216 = 5/72$ chance of getting exactly two 6s when rolling 3 dice. C. There is 1 combination where you will get 3 6s. Thus there is a $1/216$ chance you will get 3 6s when rolling 3 dice. (Good job, you got this correct.) D. There is a $(75+15+1)/216 = 91/216$ chance of any of them happening.
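These counts can be confirmed by brute force (my own check, not part of the answer): enumerate all $6^3$ ordered rolls and tally how many sixes each contains.

```python
from itertools import product

# Exhaustive check of the answer's counts: enumerate all 6^3 = 216 rolls
# and tally how many contain exactly 0, 1, 2, 3 sixes.
counts = [0, 0, 0, 0]
for roll in product(range(1, 7), repeat=3):
    counts[roll.count(6)] += 1

print(counts)            # [125, 75, 15, 1]
print(sum(counts[1:]))   # 91 rolls with at least one six
```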
H: The pumping theorem and regular language I have this problem: $L_0 = L(a^*bba^*)$, the language of the regular expression $a^*bba^*$ $L_1 = \{uu \mid u \in L_0 \}$ Is $L_1$ a regular language? I know that I should use the pumping theorem for this, but I don't know how to use it. I assume that $L_1$ is the same as $a^*bba^*a^*bba^*$? Or am I wrong here too? So how do I know if this is a regular language? AI: If we had $L_1 = L(a^*bba^*a^*bba^*)$, $L_1$ would (obviously) be regular, as generated by a regular expression. But note the difference: \begin{align*} L(a^*bba^*a^*bba^*) &= L_0 \circ L_0 = \{uv \mid u,v\in L_0\} \\ L_1 &= \{uu\mid u \in L_0\} \end{align*} The point in the definition of $L_1$ is that words of $L_1$ consist of two copies of the same word of $L_0$. Such languages are seldom regular, as a finite automaton cannot remember a whole word from $L_0$ to repeat it. This is only a handwaving argument; to prove that $L_1$ is not regular, you should, as you wrote, use the pumping lemma. So, suppose $L_1$ were regular and $p$ its pumping length. Consider $w = a^pbba^pa^pbba^p \in L_1$. By the pumping lemma there would be a decomposition $w = xyz$ with $|xy| \le p$, $|y| \ge 1$. Then $y$ consists only of $a$s and $xy^iz = a^{p+(i-1)|y|}bba^pa^pbba^p \not\in L_1$ for $i \neq 1$. So $L_1$ isn't regular.
H: find the equation of the circle passing through the extremities of the diameter of the circle find the equation of the circle passing through the extremities of the diameter of the circle $x^2 +y^2 +2x-4y-2=0$ $x^2 +y^2 =0$ $x^2 +y^2 -6x-8y-2=0$ I can't understand what the question asks us to do. AI: I am not sure what is asked exactly in the question, but if my guess is true it asks for the equation of the black circle Here $B$ denotes the center of the blue circle, $A$ denotes the center of the red circle and $H$ the center of the black circle. We know the coordinates of $B$ and $A$, hence we can find the equation of the green line. After that, if we intersect this line with the red and the blue circles we can find the coordinates of $E$ and $G$. We know the diameter of the big circle is the distance between $E$ and $G$ and the center of the big circle is the midpoint of $EG$. Therefore we can find the equation of the big circle.
H: Probability of rolling 6's on 3 dice - adjusted gambling odds? I'm struggling to work out odds on a game that were working on. It's probably best if I write an example as I'm really not a mathematician! I'm working on a dice game where the player bets 1 coin and rolls 3 dice. If any of the dice are a 6 we payout based on the following. If 1 dice is a six we pay x If 2 are a six we pay y and if all of them are a 6 we pay z This seemed simple at first as the odds of rolling 1 six are 1/2, 2 sixes would be 1/14 and 3 sixes would be 1/216 so they would be our odds. The issue is that in theory would have to roll the dice 216 times to get the jackpot and receive the £216 payout (at £1 per bet). However, if I did do that I would also win 108 times at 2/1, 15 times at 14.4/1 and once at 216/1 (ignoring the luck factor etc). Therefore for my £216 stake I would win a total of £648. So what should my payouts be for 1x6, 2x6, and 3x6 (assuming 0% house edge and all that) to be fair to the player? Thanks for your time. Mark AI: The probabilities you have are wrong. Denote by $X$ the number of sixes rolled. Then the correct probabilities are: $P(X = 0) = \left(\frac{5}{6}\right)^3 = \frac{125}{216}$ $P(X = 1) = P(\text{only the first die shows a six}) + P(\text{only the second die shows a six}) + P(\text{only the third die shows a six}) = \frac{1}{6} \frac{5}{6} \frac{5}{6} + \frac{5}{6} \frac{1}{6} \frac{5}{6} + \frac{5}{6} \frac{5}{6} \frac{1}{6} = 3 \cdot \frac{25}{216} = \frac{75}{216} = \frac{25}{72}$. $P(X = 2) = P(\text{only the first die is not a six}) + P(\text{only the second die is not a six}) + P(\text{only the third die is not a six}) = \frac{5}{6} \frac{1}{6} \frac{1}{6} + \frac{1}{6} \frac{5}{6} \frac{1}{6} + \frac{1}{6} \frac{1}{6} \frac{5}{6} = 3 \cdot \frac{5}{216} = \frac{15}{216} = \frac{5}{72}$. $P(X = 3) = \left(\frac{1}{6}\right)^3 = \frac{1}{216}$ To answer your question, if you want a fair game (expectation zero profit/loss) you can pick $x,y,z$ however you want, as long as they satisfy $75x + 15y + z = 216$. 
Namely, on average in 216 rolls, you will get one six $75$ times, two sixes $15$ times and three sixes once. Reasonable values could be $x = 2, y = 3.4, z = 15$.
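A one-line check (mine, not from the answer) that the suggested payouts satisfy the fairness equation $75x + 15y + z = 216$:

```python
# Check that the suggested payouts x = 2, y = 3.4, z = 15 make the game fair:
# over the 216 equally likely rolls the total payout equals the total stake
# of 216 coins, so the expected profit per 1-coin bet is 0.
x, y, z = 2, 3.4, 15
total_payout = 75 * x + 15 * y + 1 * z
print(total_payout)  # = 216 (up to floating-point rounding)
```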
H: On nilpotent factor group Let $G$ be a finite group and let $N$ be a normal subgroup of $G$ with the property that $G/N$ is nilpotent. Prove that there exists a nilpotent subgroup $H$ of $G$ satisfying $G = HN$. This is problem 223 on page 24 of http://www.math.kent.edu/~white/qual/list/all.pdf. I think that Frattini's argument may be useful. AI: Use induction on $|G|$. If all Sylow $p$-subgroups of $G$ are normal, then $G$ is nilpotent. Otherwise, there exists a non-normal Sylow $p$-subgroup $P$. Apply the Frattini argument to $PN \unlhd G$ to get $G=NN_G(P)$ and then apply induction to $N_G(P)$.
H: Find the probability of $a>b+c$, where $a$, $b$, $c$ are $U(0,1)$ What is the probability that $a > b + c$? $a, b, c$ are picked independently and uniformly at random from the bounded interval $[0,1]$ of $\mathbb{R}$. AI: The probability is the volume of the region $\{(a,b,c) \in [0,1]^3 : a > b + c\}$. This region is a pyramid with apex at the origin, base the triangle $\{a = 1,\; b + c < 1,\; b, c \ge 0\}$ of area $S = \frac12$, and height $h = 1$, so $$V = \frac{1}{3}Sh = \frac{1}{3}\cdot\frac{1}{2}\cdot1 = \frac{1}{6}.$$
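A Monte Carlo simulation (my own addition) agrees with the geometric answer of $1/6 \approx 0.1667$:

```python
import random

# Monte Carlo check: the region a > b + c inside the unit cube is a
# tetrahedron of volume 1/6, so about one sixth of random triples qualify.
random.seed(0)  # fixed seed so the run is reproducible
trials = 200_000
hits = sum(1 for _ in range(trials)
           if random.random() > random.random() + random.random())
print(hits / trials)  # close to 1/6
```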
H: Characteristic polynomial of a matrix is monic? Given a $n \times n$ matrix A, I need to show that its characteristic polynomial, defined as $P_A (x) = det (xI-A)$ is monic. I am trying induction. But no clue after induction hypothesis. AI: $$A=\begin{pmatrix}a_{11}&a_{12}&\ldots&a_{1n}\\a_{21}&a_{22}&\ldots&a_{2n}\\\ldots&\ldots&\ldots&\ldots\\a_{n1}&a_{n2}&\ldots&a_{nn}\end{pmatrix}\implies $$ $$|xI-A|=\begin{vmatrix}x-a_{11}&-a_{12}&\ldots&-a_{1n}\\-a_{21}&x-a_{22}&\ldots&-a_{2n}\\\ldots&\ldots&\ldots&\ldots\\-a_{n1}&-a_{n2}&\ldots&x-a_{nn}\end{vmatrix}=\prod_{i=1}^n(x-a_{ii})+\ldots=x^n+\ldots$$ and this is enough since we know $\;\deg p_A(x)=n\;$ ...
H: Nilpotent elements in power series over a Noetherian ring Suppose $A$ is a Noetherian ring. Let $$f(x)=\sum_{i=0}^{\infty} a_i x^i\in A[[x]].$$ If $a_i$ are all nilpotent, then $f$ is nilpotent. How to prove it? AI: Hint: the sequence of ideals $(a_0), (a_0) + (a_1), (a_0) + (a_1) + (a_2), \ldots$ stabilizes after finitely many steps. Edit (years later): It was asked in a comment how this implies that $f$ is nilpotent. Let $I = (a_0) + \ldots + (a_j)$ be the ideal to which the sequence stabilizes. Thus each $a_n$ can be written in the form $\sum_{i=0}^j a_i r_{i, n}$, so that $$f(x) = \sum_{n \geq 0} a_n x^n = \sum_{n \geq 0} \left(\sum_{i = 0}^j a_i r_{i, n}\right) x^n = \sum_{i=0}^j a_i f_i(x)$$ where $f_i(x) = \sum_{n \geq 0} r_{i, n} x^n$. Each $a_i f_i(x)$ is nilpotent since $a_i$ is, and a sum of finitely many nilpotent elements is nilpotent by a simple inductive argument based on the binomial expansion $(a + b)^n = \sum_{j=0}^n \binom{n}{j} a^j b^{n-j}$. This completes the proof.
H: Open subgroup of $SO(3)$ Does $SO(3)$ have an open nontrivial subgroup?(Group $SO(3)$ with usual matrices product, is all $3\times 3$ matrices whose determinant is 1 and for every element $A\in SO(3)$ we have $A^tA=AA^t=I_3$ and also let it's norm be a operator norm, that's, norm of linear mapping $A:R^3\rightarrow R^3$ which induced the topology on $SO(3)$.) AI: Note that open subgroups are also closed (since all cosets are open, any coset, being the complement of the union of the other cosets, is also closed). So we'd be talking about a subgroup that is open and closed. But $SO(3)$ is connected; in fact it is a quotient space $S^3/\{1, -1\}$ of the unit quaternions, which is itself connected. Thus there are no nontrivial open subgroups.
H: Showing that Bezier curve length is less than its control polygon This is a homework and pardon me for the huge gap of my Mathematics knowledge. After thinking and referencing for a few days I came up with something like following, appreciate help to comment whether this is already correct: By observing the following, it shows that the length of a Bezier curve converges to 1/n of its polygon's length AI: You almost got it. But you made two mistakes: (1) In the second line, the control points of the derivative curve are $n\|\Delta C_i\|$, not $\|\Delta C_i\|$. (2) In going from the second to the third line, you assumed that the norm of a sum of vectors is equal to the sum of their norms. This isn't true, in general. However the norm of the sum is less than or equal to the sum of the norms. With those two changes, you have what you want. Your conclusion that the arclength is $\tfrac1n$ times the polygon length is obviously false. Think about what this means geometrically, for $n=2$ or $n=3$.
H: On equalizers in Top Wikipedia says "The equalizer of a pair of morphisms is given by placing the subspace topology on the set-theoretic equalizer." for the category $\mathbf{Top}$. What is the simplest way to prove this? It seems to be an instance of a more general (not only about $\mathbf{Top}$) theorem. Isn't it? AI: Since the underlying-set functor $\hom(1, -): \mathbf{Top} \to \mathbf{Set}$ preserves all limits (being a representable functor), it preserves equalizers in particular. So we know that the underlying set of the equalizer must be the equalizer as computed in $\mathbf{Set}$. The only question then is what is the correct topology on the equalizer $i: E \to A$ (of a pair of arrows $f, g$ from $A$ to $B$ say). We know that $i$ must be continuous, and this means that $i^{-1}(U)$ must be open for every open $U \subseteq A$; that is, thinking of $i$ as an inclusion, we must have $U \cap E$ open in $E$. So at least the correct topology must contain the subspace topology. On the other hand, if we consider the inclusion map $j: E_{sub} \to A$ where $E_{sub}$ is the underlying set equipped with the subspace topology, then surely $f j = g j$, so this would have to factor through the correct topology, meaning the correct topology must be contained in the subspace topology. So it must be the subspace topology.
H: Meaning of symbol $L^1(\mathbb{P})$ In Furstenberg-Kesten theorem, a theory relating to products of random matrices, one of the assumptions is that: $$\log^{+}||A||\in L^1(\mathbb{P}),$$ where $A$ (a random matrix) is the generator of the cocycle. My question is, can anyone explain what is meant by this assumption? In particular, what is meant (or what might be meant) by $L^1(\mathbb{P}).$ AI: Given a probability space $(\Omega, \mathcal A, \mathbb P)$, by $L^1(\mathbb P)$ one denotes the vector space of (depending on context, real or complex) random variables $X \colon \Omega \to \mathbb R$ (or $\mathbb C$) such that $$ E_{\mathbb P}\bigl(\left|X\right|\bigr) = \int_\Omega \left|X\right| \, d\mathbb P $$ is finite. As $\log^+ \|A\|$ is non-negative, in your case the assumption just says that the expected value $E(\log^+ \|A\|)$ is finite. See also this wiki article on $L^p$-spaces.
H: Simplifying Multiple Summations for worst case analysis I'm figuring out a worst case analysis on a function. After converting it to a set of summations, and changing the sigma notations into summation formuale I ended up with: N(N+1)(2N+1) / 6 + N - N(N+1) / 2 Using LCD i was able to combine the first two components as: N(N+1)(2N+1) + 6N / 6 Leaving me with: N(N+1)(2N+1) + 6N / 6 - N(N+1) / 2 Using LCD again, I'm guessing I would use an LCD of 6 then combine the two fractions as i did the first. But i am having trouble converting the top line of the second fraction. Do I multiply the whole expression by 3, or just numerics? Many Thanks AI: $\dfrac{N(N+1)(2N+1)}{6} + N - \dfrac{ N(N+1)}{ 2}$ $=\dfrac{N(N+1)(2N+1)}{6} + \dfrac{6 N}{6} - \dfrac{ 3N(N+1)}{ 6}$ $=\dfrac{N(N+1)(2N+1) + 6 N - 3N(N+1)}{ 6}$ and you can simplify the numerator. Incidentally this is $\displaystyle\sum_{i=1}^N i^2+1-i$ so it is quite easy to check your result for various small values of $N$.
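The simplification can be verified numerically, as the answer suggests (this snippet is my own addition): the combined fraction should agree with the direct sum $\sum_{i=1}^N (i^2 + 1 - i)$ for small $N$.

```python
from fractions import Fraction

# Check that the combined closed form agrees with the direct sum
# sum_{i=1}^N (i^2 + 1 - i) for small N; Fraction keeps the arithmetic exact.
def closed_form(N):
    N = Fraction(N)
    return (N * (N + 1) * (2 * N + 1) + 6 * N - 3 * N * (N + 1)) / 6

def direct_sum(N):
    return sum(i * i + 1 - i for i in range(1, N + 1))

for N in range(1, 10):
    assert closed_form(N) == direct_sum(N)
print(closed_form(5))  # prints 45
```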
H: Is there a difference between connected subset and connected subspace? Let $(X,\tau)$ be a topological space. Let $(Y,\tau_Y)$ be a subspace of $X$. Let's say "$Y$ is a connected subset of $X$ iff 'there do not exist nonempty subsets $A,B$ of $X$ such that $\overline{A} \cap B = \emptyset, \overline{B} \cap A= \emptyset, A\cup B= Y$'". (Closures are taken with respect to $\tau$) Let's say $Y$ is a connected subspace of $X$ iff 'there do not exist nonempty subsets $A,B$ of $Y$ such that $\overline{A} \cap B =\emptyset, \overline{B}\cap A=\emptyset, A\cup B =Y$'". (Closures are taken with respect to $\tau_Y$) It's trivial to see that every connected subspace is a connected subset. But how do I prove the converse? Or is it false? AI: If $A,B\subseteq X$ where $X$ is a topological space then the pair $\left\{ A,B\right\} $ is a separation if both sets are nonempty and $\bar{A}\cap B=\emptyset=A\cap\bar{B}$. If $\left\{ A,B\right\} $ is a separation, and $Y=A\cup B$ then $\left\{ A,B\right\} $ is a separation of $Y$ in $X$. Then we have: $\left\{ A,B\right\} $ is a separation of $Y$ in $X$ if and only if $\left\{ A,B\right\} $ is a separation of $Y$ in $Y$ . This affirms that the definitions are equivalent. In the following proof $\bar{A}^{Y}$ denotes the closure of $A$ as a subset of $Y$ and we have $\bar{A}^{Y}=Y\cap\bar{A}$ . Proof: Let $\left\{ A,B\right\} $ be a separation of $Y$ in $X$. Then $\bar{A}^{Y}\cap B=Y\cap\bar{A}\cap B\subseteq\bar{A}\cap B=\emptyset$ and likewise $A\cap\bar{B}^{Y}=\emptyset$ showing that $\left\{ A,B\right\} $ is a separation of $Y$ in $Y$. Conversely, let $\left\{ A,B\right\} $ be a separation of $Y$ in $Y$. Denoting the complement of $Y$ in $X$ by $Y^{c}$ we find: $\bar{A}\cap B=\left(Y\cap\bar{A}\cap B\right)\cup\left(Y^{c}\cap\bar{A}\cap B\right)=\left(\bar{A}^{Y}\cap B\right)\cup\emptyset=\emptyset$ and likewise $A\cap\bar{B}=\emptyset$. 
Note: if $Y\subset Z\subset X$ then the subspace topology on $Y$ inherited from $X$ agrees with the subspace topology on $Y$ inherited from $Z$. So if $\left\{ A,B\right\} $ is a separation of $Y$ in $Z$ then above it has been shown that it is a separation of $Y$ in $X$ as well, and vice versa. So terms like 'in $X$', 'in $Z$' or 'in $Y$' become redundant here and can be omitted.
H: Problem about arithmetic and general harmonic progressions An arithmetic progression and a general harmonic progression have the same first term, the same last term and the same number of terms. Prove that the product of the $r$th term from the beginning in one series and the $r$th term from the end in the other is independent of $r$. Please help me prove this. AI: The $k$th term of an AP is $a+k d_1$, while the $k$th term of an HP is $a/(1+k d_2)$. Under the constraints of having the first and last terms agree, and the progressions having the same number of terms (say, $n$), we have $$a+n d_1 = \frac{a}{1+n d_2} $$ This gives us a relation between $d_1$ and $d_2$: $$d_1 = -\frac{a d_2}{1+n d_2}$$ Now, the problem asserts that the product of the $r$th term from the beginning of one sequence (say, the AP) and the $r$th term from the end of the other sequence (the HP) is independent of $r$. Well, let's write the $r$th term of the AP: $$a+r d_1 = a - \frac{r a d_2}{1+n d_2} = \frac{a+(n-r) a d_2}{1+n d_2}$$ Now, the $r$th term from the end of the HP is $a/(1+(n-r) d_2)$. We then have the product being $$(a+r d_1) \frac{a}{1+(n-r) d_2} = \frac{a+(n-r) a d_2}{1+n d_2} \frac{a}{1+(n-r) d_2} = \frac{a^2}{1+n d_2}$$ which is independent of $r$.
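A numerical spot check (my own, with arbitrary sample values for $a$, $n$, $d_2$): the product of the $r$th AP term and the $r$th-from-the-end HP term is the same for every $r$.

```python
from fractions import Fraction

# Sample values (arbitrary); d1 is forced by requiring equal last terms.
a, n, d2 = Fraction(2), 7, Fraction(1, 3)
d1 = -a * d2 / (1 + n * d2)

products = []
for r in range(n + 1):
    ap_r = a + r * d1                      # r-th term of the AP
    hp_from_end = a / (1 + (n - r) * d2)   # r-th term from the end of the HP
    products.append(ap_r * hp_from_end)

assert len(set(products)) == 1             # independent of r
print(products[0], a**2 / (1 + n * d2))    # both equal a^2/(1 + n*d2) = 6/5
```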
H: Proximal Operator of a Quadratic Function What is a proximal operator and how would one derive it in general for a function? In particular, if I had a function: $ f(x) = x^TQx + b^Tx + c $ How would I get the proximal operator for this if $Q$ were an $m$-dimensional square symmetric positive semidefinite matrix? AI: The proximal operator is the map $\def\prox{\mathop{\rm prox}\nolimits}\prox_f \colon \mathbb R^m \to \mathbb R^m$ given by $$ \prox_f(x) = {\rm argmin}_{y \in \mathbb R^m} f(y) + \frac 12\|x-y\|^2 $$ For the given $f$ and $x,y \in \mathbb R^m$, we have \begin{align*} g_x(y) &:= f(y) + \frac 12\|x-y\|^2\\ &= y^tQy + b^ty + c + \frac 12x^tx - x^ty + \frac 12y^ty\\ &= y^t\left(Q + \tfrac 12 I\right)y + (b-x)^ty + c + \frac 12x^tx \end{align*} Now $Q + \frac 12 I$ is symmetric and positive definite, hence this has a unique minimum. To find it, we compute $g_x$'s critical point; we have for $h \in \mathbb R^m$: \begin{align*} g_x'(y)h &= 2y^t\left(Q + \tfrac 12 I\right)h + (b-x)^t h \end{align*} so $g_x'(y) = 0$ iff $$ y = \left(2Q + I\right)^{-1}(x-b) $$ That is $$ \prox_f(x) = \left(2Q + I\right)^{-1}(x-b) $$
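As a sanity check (my own illustration, for the one-dimensional case $m=1$, where the closed form reads $\operatorname{prox}_f(x) = (x-b)/(2q+1)$), a dense grid search for the minimizer of $g_x$ should land on the same point:

```python
# 1-D sanity check: for f(y) = q*y^2 + b*y + c, the minimizer of
# f(y) + (1/2)(x - y)^2 should be y* = (x - b)/(2q + 1).
q, b, c, x = 1.5, 0.7, 2.0, 3.0   # arbitrary sample values, q > 0

def g(y):
    return q * y * y + b * y + c + 0.5 * (x - y) ** 2

y_star = (x - b) / (2 * q + 1)                    # closed form: 2.3/4 = 0.575
grid = [i / 10000 for i in range(-20000, 20001)]  # y in [-2, 2], step 1e-4
grid_argmin = min(grid, key=g)                    # brute-force minimizer
print(y_star, grid_argmin)  # both 0.575
```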
H: Can a continuous function have a non-continuous derivative? If $f(x)$ is continuous throughout its domain, can its derivative, $g(x)$, be non-continuous at any point? If so, what effect does a non-continuous derivative have on the function? AI: Yes, consider the function $$f(x) = \begin{cases} 0 \qquad x\leq0 \\ x \qquad x>0\end{cases}.$$ Its derivative is defined everywhere except $x=0$: $$f^\prime(x) = \begin{cases}0 \qquad x<0\\1\qquad x>0\end{cases}.$$ If you want a function which is continuous but nowhere differentiable, Brownian motion (the Wiener process) satisfies this.
H: Diophantine Equation: solving $x^2-y^2=45$ in integers How should I solve $x^2-y^2=45$ in integers? I know $$(x+y)(x-y)=3^2\cdot 5,$$ which means $3\mid (x+y)$ or $3\mid (x-y)$, and analogously for $5$. AI: $$ (x-y)(x+y) = 45 \Rightarrow x + y = \frac{45}{x-y} $$ but since $$ x +y \in \Bbb Z \Rightarrow x - y \mid 45 \Rightarrow (x - y , x + y) \in \{ (45,1) , (1,45) , (-45,-1) , (-1 ,-45) , (3,15) , (5,9) , (-3,-15) , (-5,-9) , (9,5) , (-9, -5) , (15,3) , (-15, - 3) \} $$ Can you finish from here? $$ (x,y) \in \,? $$
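A brute-force search (my own check) confirms the twelve solutions obtained by solving $x-y = d$, $x+y = 45/d$ for each divisor pair:

```python
# Brute-force all integer solutions of x^2 - y^2 = 45 in a safe range
# (|x| <= 23 already covers the divisor pair (1, 45), so 50 is generous).
solutions = sorted((x, y) for x in range(-50, 51) for y in range(-50, 51)
                   if x * x - y * y == 45)
print(solutions)
# 12 solutions: (±7, ±2), (±9, ±6), (±23, ±22)
```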
H: How do I show that $u_x$, $u_y$, $u_{xx}$, $u_{yy}$ and $u_{xy}$ are also solution for the pde $u_{xx}+u_{yy}=0$? Title says it all. Let $u$ satisfy the patial differntial equation $u_{xx}+u_{yy}=0$(elliptic in linear 2nd order pde). How do I show that $u_x$, $u_y$, $u_{xx}$, $u_{yy}$ and $u_{xy}$ are also solution? I don't know where to start. AI: Hint: By Schwarz, for example $$ (u_x)_{yy} = u_{xyy} = u_{yyx} = (u_{yy})_x $$
H: Help with Inequality involving absolute values of trig I am trying to wrap my ahead around the following problem: Prove that for all $x,y$ in $\Bbb R$ $ |\sin(x) - \sin(y)| \leq |x-y|$ And prove that for $x,y$ in $R$ $|\cos(x) - \cos(y)| \leq |x - y|$ My first idea was to use partial derivatives to find relative optima and show that the growth rate of the two sides is different. But that seems like overkill and I don't know how to formally show it. If I can prove the inequalities for just $-2\pi \leq x ,y\leq 2\pi$ then proving for all of $\Bbb R$ shouldn't be hard. AI: Hint: have a look at the unit circle and look at $x$ and $y$ as lengths of arcs.
H: Geometrical meaning - Derivatives The area of a trapezoid with basis $a$ and $b$, and height $h$ equals to $S = \frac{1}{2} (a+b) h$. Find $ \frac{\partial S}{\partial a}, \frac{\partial S}{\partial b}, \frac{\partial S}{\partial h} $ and, using a drawing , determine their geometrial meaning. I have no idea of the geometrical meaning of the partial derivatives, in this case. Can you give me a help with it? AI: $\displaystyle\frac{\partial S}{\partial a}$, for example, is the rate of change of the area when $a$ is changing, when the other side, $b$, and height, $h$, are kept constant.
H: If $f(3x-1)=9x^2+6x-7$, determine $f(x)$ if $f(3x-1)=9x^2+6x-7$ determine all the $f(x)$ functions. I tried in this way : $t=3x-1 \Rightarrow x=(t+1)/3$ $f(t)=9(t+1)^2/9-6((t+1)/3)-7((t+1)/3)\ldots$ but unfortunately I get the original function. Thanks in advance. AI: Using your method, substituting $t = 3x - 1 \iff x = \dfrac {t+1}{3} $, we have $$\begin{align} f(t) & = \dfrac{9(t+1)^2}{9} + \dfrac{6(t+1)}{3} - 7 \\ \\ & = (t+1)^2 +2(t+1) - 7 \\ \\ &= t^2 + 4t - 4 \end{align}$$ Hence, $$f(x) = x^2 + 4x - 4$$
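A quick check (my own addition) that the composition really reproduces the given polynomial:

```python
# With f(x) = x^2 + 4x - 4, the composition f(3x - 1) should reproduce
# 9x^2 + 6x - 7 at every sample point (two cubics agreeing on >3 points
# would already force equality; here both sides are quadratics).
f = lambda x: x * x + 4 * x - 4
for x in range(-5, 6):
    assert f(3 * x - 1) == 9 * x * x + 6 * x - 7
print("ok")  # prints ok
```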
H: How to calculate $\sum_{i = 0}^{k-1}\left(\frac{5}{6}\right)^i$ I need to calculate the complexity of an algorithm. I have come across this summation that I can't evaluate, I am stuck. I haven't seen one of those in years, therefore I am rusty and really struggling: $$\sum_{i = 0}^{k-1}\left(\frac{5}{6}\right)^i$$ where $k = \log_3(2n)$. I have found several formulas, I am not sure which one I should apply: $$\sum_{n=0}^{N}x^n=\frac{x^{N+1}-1}{x-1}$$ I don't think this is it, that would give me a negative value and since I am trying to calculate complexity, that can't be negative. Perhaps I could use the formula for decreasing geometric series, so, I would have $$\frac{1\cdot\left(1-\left(\frac{5}{6}\right)^k\right)}{1-\frac{5}{6}}$$ but then $$\left(1-\left(\frac{5}{6}\right)^{\log_3{2n}}\right)\cdot 6$$ and I can't figure out how to get rid of that nasty exponent. Can please anyone put me on the right track? AI: $a^{\log_b c}=\exp\left(\frac{\log a\log c}{\log b}\right) = c^{\log_b a}$ You have trouble with $(5/6)^{\log_3(2n)}$. The formula above shows how to turn it into a power of $n$: $$\left(\frac{5}{6}\right)^{\log_3(2n)} = (2n)^{\log_3(5/6)}.$$ Since $\log_3(5/6) < 0$, this factor tends to $0$ as $n \to \infty$, so your expression $6\left(1-(2n)^{\log_3(5/6)}\right)$ tends to $6$; in particular the sum is bounded by the constant $6$, i.e. it is $O(1)$.
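The identity and the resulting bound can be checked numerically (my own addition):

```python
import math

# Check the identity a^{log_b c} = c^{log_b a} with a = 5/6, b = 3, c = 2n,
# and that the geometric sum 6*(1 - (5/6)^{log_3(2n)}) stays below 6.
for n in (10, 1000, 10**6):
    k = math.log(2 * n, 3)
    lhs = (5 / 6) ** k
    rhs = (2 * n) ** (math.log(5 / 6, 3))
    assert math.isclose(lhs, rhs)
    print(n, 6 * (1 - lhs))  # increasing toward 6, never exceeding it
```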
H: Is such a function of bounded variation? Let $f:[a,b] \rightarrow \mathbb R$ and let $(x_n) $ be a given sequence of points such that: $$ a<x_{n+1} <x_n<b \textrm{ for } n\in \mathbb N, \atop x_n \rightarrow a. $$ Let's assume that $$ \sup_{n\in \mathbb N} var(f, [x_n,b])<\infty. $$ Is $f$ then of bounded variation on $[a,b]$? AI: Hint: If $f$ is bounded on $[a,x_n]$ for some $n$, then your condition implies that $f$ is of bounded variation on $[a,b]$. Otherwise, you can construct a sequence $y_n \searrow a$ with $f(y_{n+1}) - f(y_n) > 1$ which contradicts your condition.
H: Which PDE does v fulfill? Let $u$ be a solution of the PDE $$ u_{xy}+au_x+b u_y+cu=0,~~~~~~~~~~~~~~a,b,c=const.~~~~~(*) $$ Consider $$ v(x,y):=u(x,y)\exp(bx+ay). $$ Find the PDE which $v$ fulfills. Could you please give me a hint how to find that PDE? (*) is a linear second-order PDE, right? Do I have to find the normal form? As you see: I do not really know how to solve this question. AI: $u(x,y)=v(x,y)\exp(-bx-ay)$. Now differentiate that, and combine the results of $u_{xy}$ and so on, to add up to 0 on the right-hand side.
H: Is a smooth function characterized up to translation by its discrete pieces? Let $f, g : \mathbb{R} \to \mathbb{R}$ be smooth, bounded functions, and suppose that for any finite set $X \subset \mathbb{R}$, there is some $t \in \mathbb{R}$ such that for each $x \in X$, $f(x) = g(x + t)$. (I.e., $f$ and $g$ have the same finite restrictions up to translation.) Does it follow that $f$ and $g$ are identical up to some translation? (For my application I really want the answer to be "yes", so if there are other nice conditions I can impose on $f$ and $g$ to get that answer, that would also be great to know.) AI: The answer is no. Let $g(x)=\begin{cases}0&x\le 0\\ e^{-1/x^2} & x>0\end{cases}$, and let $f(x)=0$. Any $X$ you like for $f$, has a translate into the negative reals where $f,g$ agree. $g(x)$ is constructed to be $C^\infty$ everywhere; it is also bounded above by $1$ (and below by $0$). A condition you need to impose is that $f,g$ are equal to their Taylor series everywhere. I'm not sure if that's enough, but it's a start.
H: Duals of Linear Programs We are trying to find the dual of the following linear program. $$ \max_x \ ax_1 \ + x_2 $$ such that: $$ v_1x_1 - v_2x_2 \geq b_1 \\ v_1x_1 - v_2x_2 \geq b_2 \\ x_1 \geq 0 \\ x_2 \geq 0$$ How would one do this? I am not sure but I get the following as the dual : $$ \min_\gamma \ b_1\gamma_1 + b_2\gamma_2 $$ such that: $$ v_1\gamma_1 - v_2\gamma_2 \geq a $$ $$ v_1\gamma_1 - v_2\gamma_2 \geq 1 \\ \gamma_1 \geq 0 \\ \gamma_2 \geq 0$$ Is this correct? What can one say about the primal and dual feasibility and the optimal values of this problem? AI: I employ the method described by Sebastien Lahaie at http://www.cs.columbia.edu/coms6998-3/lpprimer.pdf Question: find the dual of the following linear program: $$ \max_x \ ax_1 \ + x_2 $$ such that: $$ v_1x_1 - v_2x_2 \geq b_1 \\ v_1x_1 - v_2x_2 \geq b_2 \\ x_1 \geq 0 \\ x_2 \geq 0$$ Solution: Steps 1 and 2. $$ \min_x -ax_1 - x_2$$ such that $$ v_2x_2 - v_1x_1 +b_1 \le 0 \\ v_2x_2 - v_1x_1 + b_2 \le 0 \\ x_1 \geq 0 \\ x_2 \geq 0$$ Step 3 $$ \max_ \gamma \min_x -ax_1 - x_2$$ $$+ \gamma_1(v_2x_2 - v_1x_1 +b_1) \\ + \gamma_2(v_2x_2 - v_1x_1 + b_2)$$ $$x_1 \geq 0 \\ x_2 \geq 0 \\ \gamma_1 \geq 0 \\ \gamma_2 \geq 0$$ Step 4 $$ \max_ \gamma \min_x \gamma_1 b_1 + \gamma_2 b_2$$ $$ + x_1(-v_1 \gamma_1 -v_1 \gamma_2 -a)$$ $$ + x_2(v_2 \gamma_1 +v_2 \gamma_2 -1)$$ $$x_1 \geq 0 \\ x_2 \geq 0 \\ \gamma_1 \geq 0 \\ \gamma_2 \geq 0$$ Steps 5 and 6 $$ \max_ \gamma \gamma_1 b_1 + \gamma_2 b_2$$ $$-v_1 \gamma_1 -v_1 \gamma_2 -a \ge 0$$ $$v_2 \gamma_1 +v_2 \gamma_2 -1 \ge 0$$ $$ \gamma_1 \geq 0 \\ \gamma_2 \geq 0$$ Step 7 $$ \min_ \gamma - \gamma_1 b_1 - \gamma_2 b_2$$ $$-v_1 \gamma_1 -v_1 \gamma_2 \ge a$$ $$v_2 \gamma_1 +v_2 \gamma_2 \ge 1$$ $$ \gamma_1 \geq 0 \\ \gamma_2 \geq 0$$
H: Showing that complement of $\mathbb{Q}^2$ in $\mathbb{R}^2$ is connected The complement of $\mathbb{Q}^2$ in $\mathbb{R}^2$ is connected. I have to solve this problem. At first, I tried to solve it as follows: For any 2 disjoint neighborhoods of $\mathbb{R}^2$ \ $\mathbb{Q}^2$, whose union is $\mathbb{R}^2$ \ $\mathbb{Q}^2$, one of them must be empty. But I couldn't deduce the last goal. How can this problem be solved? AI: Take any two points $A,B$ from $\mathbb{R}^2\setminus\mathbb{Q}^2$, join them by a straight line, and draw the perpendicular bisector of $AB$. Now start choosing points $P$ on the perpendicular bisector and join each to $A$ and $B$ by straight line segments. How many such points $P$ can you choose so that both segments avoid $\mathbb{Q}^2$? (There are uncountably many candidates on the bisector, and only countably many of them can give a segment through a rational point.) Do you see that your set is path connected?
H: How to generate a basis for a linear mapping in a complex field such that's it's matrix is Upper Triangular? Let $T:V \to V$ be linear. $V$ is a complex vector space of dimension $k$. Then there exists a basis so that the matrix generated by $T$ under that basis is upper triangular. The proof is by induction on $k$. But how to generate such a basis. The first step is that because $v$ is a complex vector space, so there exists a non-zero vector $v$ such that $Tv=\lambda v$ for some $\lambda \in \Bbb F$. So, take $e_1=v$ because then $T(e_1)=\lambda e_1$. So, in the first column of the matrix the rows 2 to n will be zero. Now if I extend $e_1$ to any basis of $v$ that basis may not be a basis for which the matrix is upper triangular. Then how to construct it ? AI: A hint: The transpose $T^*:\ V^*\to V^*$ has an eigenvector $e^*\in V^*$. The kernel $K$ of $e^*$ is an invariant subspace of dimension $n-1$ of $T$.
H: Discrete Markov Transition Matrix Well, I've been reading over the internet but I've been unable to find a straight answer. I've got a transition matrix for a Markov Discrete Chain. I've made the graph and according to my knowledge, this graph should be periodic. If the graph is periodic, then I shouldn't be able to find a stationary distribution by elevating the matrix times infinity right? Well, I added the matrix to MatLab, elevated it time 100, 1000,10000 and everytime I'm getting the same answer, even though I should be getting different ones. Can someone help me? \begin{bmatrix} 0&0.4 &0 &0 &0 &0.6 \\ 0.3&0 &0.7 &0 &0 &0 \\ 0&0.5 &0 &0 &0.5 &0 \\ 0& 0 & 0.1 & 0 &0.9 & 0\\ 0.6&0.2 &0 &0 &0 &0.2 \\ 0&0 &0 &0.2 &0.8 &0 \end{bmatrix} AI: You can return to state $1$ in two steps ($1\to2\to1$) or in three steps ($1\to6\to5\to1$) with positive probability. The period of the chain divides both 2 and 3, hence the period is 1. That is, the chain is aperiodic.
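The convergence the OP observed in MATLAB can be reproduced in plain Python (my own addition): repeated squaring gives $P^{2^{20}}$, and all rows agree, i.e. the chain forgets its starting state, exactly what aperiodicity plus irreducibility predict.

```python
# Transition matrix from the question.
P = [[0, 0.4, 0, 0, 0, 0.6],
     [0.3, 0, 0.7, 0, 0, 0],
     [0, 0.5, 0, 0, 0.5, 0],
     [0, 0, 0.1, 0, 0.9, 0],
     [0.6, 0.2, 0, 0, 0, 0.2],
     [0, 0, 0, 0.2, 0.8, 0]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M = P
for _ in range(20):          # M becomes P^(2^20) by repeated squaring
    M = matmul(M, M)

stationary = M[0]
# every row has converged to the same stationary distribution
assert all(abs(a - b) < 1e-9 for row in M for a, b in zip(row, stationary))
print([round(p, 4) for p in stationary])
```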
H: For what values of $k$ this matrix is invertible? Let $D_{k}=\left[\begin{array}{ccc}k&2&k&3k\\1&0&1&3\\4&0&k&12\\2&k&2&k\end{array}\right]$. For what values of $k$ this matrix is invertible? There is a property that says if the rank of a matrix is equal to the number of its lines/columns (since it's a square matrix), then the matrix is invertible. So one needs to find the echelon form of the matrix. In this case: $$\left[\begin{array}{ccc}k&2&k&3k\\1&0&1&3\\4&0&k&12\\2&k&2&k\end{array}\right]\xrightarrow{L_{1}\leftrightarrow L_{2}}\left[\begin{array}{ccc}1&0&1&3\\k&2&k&3k\\4&0&k&12\\2&k&2&k\end{array}\right]\xrightarrow{L_{3}=L_{3}-4L_{1}}\left[\begin{array}{ccc}1&0&1&3\\k&2&k&3k\\0&0&k-4&0\\2&k&2&k\end{array}\right]\xrightarrow{L_{4}=L_{4}-2L_{1}}$$ $$\left[\begin{array}{ccc}1&0&1&3\\k&2&k&3k\\0&0&k-4&0\\0&k&0&k-6\end{array}\right]$$ Now, in order to get the echelon form, $k=0$ and $k\neq4$ and $k \neq6$. The conclusion is that the matrix $D_{k}$ is invertible for $k=0$. But I've checked with $k=5$ and the matrix $D_{5}$ is also invertible. What is wrong in my conclusion? Thanks. AI: If $k \neq4 $ and $k \neq 6$, you can still get row-echelon form even if $k \neq 0$.. In your last matrix, subtract $k$ times the first row from the second row. Then subtract $k/2$ times the second row from the fourth row. This will give you row-echelon form and you can infer when the matrix is invertible accordingly. You should get a matrix that just has $1,2,k-4,k-6$ on the diagonal and zeros everywhere else below the diagonal.
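An exact-arithmetic check (my own addition): the elimination in the answer yields a triangular matrix with diagonal $1, 2, k-4, k-6$ after one row swap, so $\det D_k = -2(k-4)(k-6)$ and $D_k$ is invertible exactly when $k \neq 4$ and $k \neq 6$.

```python
from fractions import Fraction

def det4(M):
    # Gaussian elimination with exact fractions; tracks row-swap signs.
    M = [row[:] for row in M]
    sign = Fraction(1)
    for col in range(4):
        pivot = next((r for r in range(col, 4) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)        # zero column => singular
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        for r in range(col + 1, 4):
            f = M[r][col] / M[col][col]
            M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    d = sign
    for i in range(4):
        d *= M[i][i]
    return d

def D(k):
    k = Fraction(k)
    rows = [[k, 2, k, 3 * k], [1, 0, 1, 3], [4, 0, k, 12], [2, k, 2, k]]
    return [[Fraction(v) for v in row] for row in rows]

for k in range(-3, 10):
    assert det4(D(k)) == -2 * (k - 4) * (k - 6)
print(det4(D(0)), det4(D(4)), det4(D(5)))  # -48 0 2
```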
H: Does $\sum_{i=1}^{\infty} \frac{3-\cos{i}}{i^{2/3}-2}$ converge? Need help determining whether the following infinite series converges or diverges, please state what convergence/divergence test was used. $$\sum_{i=1}^{\infty} \frac{3-\cos{i}}{i^{2/3}-2}$$ Side Note: How can I make the equation bigger, it's kind of small with using only $$. AI: Hint $$\;\frac{3-\cos i}{i^{2/3}-2}\ge\frac2{i^{2/3}}\quad\text{for } i\ge 3,$$ since $3-\cos i\ge 2$ and $0<i^{2/3}-2<i^{2/3}$ there. Observe the series is positive for all but a finite number of summands, so comparison with the divergent $p$-series $\sum i^{-2/3}$ (here $p=\frac23\le 1$) shows the series diverges.
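A numerical illustration (my own addition): the partial sums grow without bound, roughly like $n^{1/3}$, as the comparison with $\sum 2/i^{2/3}$ predicts.

```python
import math

# Partial sums of the series; the first two terms are negative (the
# denominator i^(2/3) - 2 is negative for i = 1, 2) but that does not
# affect divergence.
def partial(n):
    return sum((3 - math.cos(i)) / (i ** (2 / 3) - 2) for i in range(1, n + 1))

for n in (10**3, 10**4, 10**5):
    print(n, partial(n))
# the partial sums keep increasing -> the series diverges
```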
H: Interpreting the set $IJ = \{\sum_i x_iy_i \mid x_i \in I, y_i \in J\}$ where $I$ and $J$ are ideals Let $I$ and $J$ be ideals in a ring $R$. Show that $IJ = \left\{\sum_i x_iy_i \mid x_i \in I, y_i \in J\right\}$ is an ideal. Question I am not sure how to interpret this question because of the index $i$. Using the same index $i$ for both $x$ and $y$ seems to suggest that there will always be the same number of elements in $I$ and $J$ and that they get multiplied together through some arbitrary ordering, i.e. $x_1y_1 + x_2y_2 + \cdots$ Can anyone clear up what is meant by the set $IJ = \left\{\sum_i x_iy_i \mid x_i \in I, y_i \in J\right\}$? AI: You choose arbitrary $k$ and then choose arbitrary $x_1,\ldots,x_k \in I$ and arbitrary $y_1,\ldots,y_k \in J$ and then form the sum $\sum_{i=1}^k x_iy_i$. The particular selection of the $x_i$ and $y_i$ is not fixed, so you can take any choices for them when you define a sum. The ideal $IJ$ is the set of all such sums, for every choice of $k$ and every choice of the particular $x_i,y_i$ for the sum. It is a good exercise to check this $IJ$ is an ideal.
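As a concrete sanity check (our own illustration, with $R=\mathbb{Z}$, $I=(4)$, $J=(6)$, so that $IJ=(24)$), every finite sum $\sum_i x_i y_i$ with $x_i\in I$, $y_i\in J$ lands in $(24)$:

```python
import random

random.seed(0)

# I = 4Z and J = 6Z inside R = Z; the product ideal IJ should be (24) = 24Z.
for _ in range(500):
    k = random.randint(1, 5)                     # arbitrary number of summands
    xs = [4 * random.randint(-20, 20) for _ in range(k)]
    ys = [6 * random.randint(-20, 20) for _ in range(k)]
    assert sum(x * y for x, y in zip(xs, ys)) % 24 == 0

# Conversely 24 = 4*6 is itself such a sum, so IJ is exactly 24Z here.
assert 4 * 6 == 24
```

In $\mathbb{Z}$ single products already generate everything, but in general rings the sums are genuinely needed for closure under addition.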
H: Solving a trigonometric system of equations I am a math tutor at the local university and encountered a trigonometric problem which stumped me. (Of course, I'm certain that this material was covered in my student's class, but they still didn't know how to solve it, either.) Basically, the problem boils down to solving the following system of equations for $x$ and $y$ with $-\pi \leq x, y \leq \pi$: $c_1\cos x + c_2 \cos y = 0$ $c_1\sin x + c_2 \sin y = 0$ How to solve a system of trigonometric equations is a similar question, but I am having difficulty following the answer and applying it to this particular problem. Any tips would be appreciated. AI: You could just multiply the first equation by $\sin{x}$ and the second by $\cos{x}$ and subtract to get $$c_2 \sin{x} \cos{y} - c_2 \cos{x} \sin{y} = c_2 \sin{(x-y)} = 0$$ Assuming $c_2 \neq 0$, this gives $\sin(x-y)=0$, i.e. $x-y = k \pi$ for some $k \in \mathbb{Z}$ (the case $k=0$ being $x=y$). Plug this back into either equation to determine $x$ or $y$.
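A quick numerical check of ours (with illustrative values $c_1=c_2$ and an arbitrary $x$) confirms that pairs with $x-y=\pi$ solve the system:

```python
import math

c1 = c2 = 2.5          # illustrative coefficients with c1 = c2
x = 0.7                # arbitrary angle
y = x - math.pi        # x - y = pi, an odd multiple of pi

eq1 = c1 * math.cos(x) + c2 * math.cos(y)
eq2 = c1 * math.sin(x) + c2 * math.sin(y)
assert abs(eq1) < 1e-12 and abs(eq2) < 1e-12

# With x = y instead, the system forces (c1 + c2) cos x = (c1 + c2) sin x = 0,
# which is impossible unless c1 + c2 = 0.
```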
H: Hypergeometric Distribution I hate to be that guy who comes here and asks one question, but I've been stuck on this problem for quite some time, and I'm going to assume that I shouldn't even be using the binomial theorem at this point. I was given this question: A teacher was asked by her principal to select 7 students at random from her class to take a standardized math test, which will be used to determine how well students at that school are doing with respect to math (and correspondingly, how well the teacher is doing at teaching math). The teacher previously had rank ordered her students on the basis of their performance in her class on math tests, and divided the class into quartiles, such that there were 5 students in the upper quartile and 15 students in the lower three quartiles. When the teacher handed the principal the names of the students she had randomly selected from the class, the number of students from the upper quartile was 5, and the number of students from the lower three quartiles was 2. Is there any statistical evidence that the teacher is biased toward selecting students from the upper quartile? The first question given was: Compute the probability that, of the students selected, 0 are upper quartile students. Round off to 4 decimal places. I was able to answer that question, getting 0.0830 by computing (15/20)(14/19)(13/18)(12/17)(11/16)(10/15)(9/14). However, when it asks something like computing the probability that, of the students selected, 2 are upper quartile students, I get stuck. I tried using the binomial theorem, but to no avail; perhaps I'm doing something wrong. I can't find a fast enough way to determine all of the sequences that will produce two upper quartile students in the 7 draws. AI: The hypergeometric distribution gives the probability of having $k$ successes in $n$ draws without replacement when the total population size is $N$ and there are $K$ successes. For our problem, we'll call a draw a success if we choose an upper quartile student. That means $N = 20$, $K = 5$, $n = 7$, and $k = 2$, since we have $20$ total students, $5$ are in the upper quartile, we are choosing a group of $7$ students, and we want exactly $2$ to be upper quartile. For the hypergeometric distribution, we have $P(X=k)= \frac{{K \choose k}{N-K \choose n-k}}{N \choose n}$. Now, $P(X=2)= \frac{{5 \choose 2}{15 \choose 5}}{20 \choose 7}$ for our given values, which is just the number of ways to choose the group so that it has $2$ of the $5$ upper quartile students and $5$ of the $15$ lower quartile students (which adds up to the $7$ total in the selected group) divided by the total number of possible groups. Can you see why?
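The formula is easy to check numerically; here is a sketch of ours using Python's `math.comb` (3.8+), with the function name `hypergeom` our own:

```python
from math import comb

N, K, n = 20, 5, 7        # class size, upper-quartile students, sample size

def hypergeom(k):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

assert round(hypergeom(0), 4) == 0.083           # matches the 0.0830 above
assert abs(hypergeom(2) - 0.3874) < 5e-5         # P(X = 2)
assert abs(sum(hypergeom(k) for k in range(K + 1)) - 1) < 1e-12

# Relevant to the bias question: getting all 5 upper-quartile students
# by pure chance is very unlikely.
assert hypergeom(5) < 0.002
```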
H: Reading on Mathematical Logic I am looking for books to read, so as to dive into mathematical logic and related disciplines like set theory, model theory, and topos theory. I have a decent background in category theory and algebra, analysis, topology, etc., but little in explicit logic or set theory aside from the first chapter of Munkres. Any suggestions? I am interested in topics such as paraconsistency, computability, and ill-founded logics. AI: The place to start, perhaps, is here: http://www.logicmatters.net/resources/pdfs/TeachYourselfLogic9-2.pdf This is a detailed annotated guide to a wide range of logic literature, at different levels of sophistication. You will be able to choose entry points to suit your background. On paraconsistent logic, see in particular http://plato.stanford.edu/entries/logic-paraconsistent/ which gives pointers to the literature. On computability, there is much to be said for Boolos, Burgess and Jeffrey, http://www.amazon.co.uk/Computability-Logic-George-S-Boolos/dp/0521701465 I'm not sure what is meant by "ill-founded logics". For non-wellfounded set theory, see http://plato.stanford.edu/entries/nonwellfounded-set-theory/ which again gives more pointers. On topos theory, I still think it is worth starting with Robert Goldblatt's book: http://www.amazon.co.uk/Topoi-Categorial-Analysis-Logic-Mathematics/dp/0486450260
H: How to solve $\lim _{ x\to 0 } \frac { { \sin }^{ -1 }x }{ x }$? Evaluate the following limit: \begin{eqnarray} \lim _{ x\to 0 } f(x)= \lim _{ x\to 0 } \frac { { \sin }^{ -1 }x }{ x }\\ \end{eqnarray} The answer in my book is 1. Can I do this? \begin{eqnarray} \lim _{ x\to 0 } \frac { { \sin }^{ -1 }x }{ x }=\lim_{x\to 0} \frac{1}{x\sin x} \end{eqnarray} I think $\sin^{-1}x$ is an inverse function, not equal to $(\sin x)^{-1}$, so I can't do this. Right? Are there any easy ways to deal with $\lim_{x\to a}$ involving $\sin^{-1}x$, $\cos^{-1}x$ and $\tan^{-1}x$? I have no idea how to deal with this question. Thank you for your attention. AI: HINT: Put $\sin^{-1}x=h\implies x=\sin h$, and as $x\to0$, $h=\sin^{-1}x\to0$
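A quick numerical sanity check of the limit (ours, using `math.asin`; the substitution in the hint turns the ratio into $h/\sin h\to 1$):

```python
import math

# arcsin(x)/x approaches 1 as x -> 0; equivalently h/sin(h) -> 1 with h = arcsin(x).
for x in (0.1, 0.01, 0.001):
    ratio = math.asin(x) / x
    assert abs(ratio - 1) < x * x          # the error shrinks like x^2 / 6

assert abs(math.asin(1e-6) / 1e-6 - 1) < 1e-9
```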
H: Solve $r=(p-1)+pr_1+p^2r_2$ for $r_1$ and $r_2$ when $r(p-1) \equiv 1$ (mod $p^3$) Let $p$ be an odd prime. $\mathbb Z_{p^3}=\left\{0,1,...,p^3-1\right\}$ 1) Let $r$ be an element of $\mathbb Z_{p^3}$. Then, we can write $r$ as follows: $r=(p-1)+pr_1+p^2r_2$ for some $0\leq r_1\leq p-1$ and $0\leq r_2 \leq p-1$. Assume we have $r(p-1) \equiv 1$ (mod $p^3$). Then, what are $r_1$ and $r_2$ in terms of $p$? 2) Can we find a general form for $r \in \mathbb Z_{p^k}$ when $r(p-1)\equiv 1$ (mod $p^{k}$)? That is, can we find $r_1$, $r_2$, ..., $r_{k-1}$ for $r=(p-1)+pr_1+p^2r_2+\cdots+p^{k-1}r_{k-1}$ where $0 \leq r_i\leq p-1$ and $r(p-1)\equiv 1$ (mod $p^{k}$)? AI: Hint: It looks as if we are trying to find the inverse of $p-1$ modulo $p^3$. Maybe the inverse of $1-p$ would be more familiar as $1+p+p^2+\cdots$. So changing sign and cutting off at $p^2$, we get $-1-p-p^2$. If we want a $p-1$ in front, change to $(p-1)-2p-p^2$.
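The hint can be checked directly (a sketch of ours; `pow(a, -1, m)` needs Python 3.8+). Working out the base-$p$ digits of $-(1+p+p^2) \bmod p^3$ gives $r_1 = r_2 = p-2$:

```python
# Check for a few odd primes: r = -(1 + p + p^2) mod p^3 inverts p - 1,
# and its base-p digits match r = (p-1) + p*r1 + p^2*r2 with r1 = r2 = p - 2.
for p in (3, 5, 7, 11, 13):
    r = pow(p - 1, -1, p ** 3)            # modular inverse of p - 1 mod p^3
    assert r == (-1 - p - p ** 2) % p ** 3
    digits = [r % p, (r // p) % p, r // p ** 2]
    assert digits == [p - 1, p - 2, p - 2]
```

For $p=5$, for instance, this gives $r = 94 = 4 + 5\cdot 3 + 25\cdot 3$, and indeed $94 \cdot 4 = 376 \equiv 1 \pmod{125}$.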
H: Radian, an arbitrary unit too? Why is the radian defined as the angle subtended at the centre of a circle when the arc length equals the radius? Why not the angle subtended when the arc length is twice as long as the radius, or the radius is twice as long as the arc length? Isn't putting $2\pi$ radians in a revolution similar to arbitrarily putting $360$ degrees in a revolution? AI: You are right when you say that the angle can be described in the ways you specified as well. The radian is arbitrary in that sense. The relationship between subtended arc length and angle is not arbitrary. Radians are a convenient choice, as you can see if you relate arc length to radius. $$ s = r \cdot k \theta $$ If you want the $k$ to be a $1$, then you want to work in radians. Physicists run into a similar issue when choosing units. If the speed of light is $c$, the distance travelled by a light ray is $$ d = c t$$ The numerical value of $c$ depends on the units you are working in. If you choose to work in meters and seconds, then $c=3.0\times 10^8 \frac{m}{s}$. If you choose to work in light-years and years, then $c=1.0 \frac{\text{light-years}}{\text{year}}$. Update: There is a comment requesting clarification as to what I mean by "The relationship between subtended arc length and angle is not arbitrary". There are many ways you can assign numerical values to different angle measures; however, that is still what you might call a "human convention". The way in which we agree to assign numbers to angles doesn't change what an angle is. No matter how you do it, there will still be a circular arc subtended by the angle, which is unique to that angle. Therefore, if you give me a scheme for assigning numbers to angles, then it will always be possible for me to take a given angle measure and tell you the arc length for that angle. An example of this is in the way you proposed to define angles. We could say that the value of an angle is twice the arc length subtended by the angle.
If we did this, there would still be a 1-1 correspondence between any angle measurement and the arc length. In our case, if you told me you had an angle measured to be 2 meters in this convention, I could tell you the subtended arc length is 1 meter. Update about a year later Please don't consider the following as a formal proof of anything. There are some loose ends involving infinite limits that I didn't completely tidy up. Nevertheless, this is a sketch of how one might get at the infinite series for sine using the idea of radian measure for angles, and it hopefully explains why this series specifically requires radians. You asked in the comment below why the Taylor series for $\sin$ is only valid for angles measured in radians. To understand why, we need to have some idea where such a series would come from in the context of geometry. Remember we define the measure of an angle in radians in terms of the arc length subtended along a circle. Measuring the length of an arc is difficult to do geometrically. The standard approach involves approximating the circular arc with an arbitrarily large number of line segments which become indefinitely small. From this approach to measuring arc length we can conclude that if an angle is measured in radians, then the following will be approximately true for very small angles, $$ \sin( \delta \theta ) \approx \delta \theta \qquad \cos(\delta\theta) \approx 1 $$ The above "small angle" formulas are the critical point in this derivation where we use the notion of radian measure in particular. If we measured angles in terms of degrees, we would have some conversion factors showing up in these formulas. Now suppose we want to know the sine of some large angle $\theta$.
For a large enough integer $N$ we can write $\theta$ as a multiple of some very small angle $\delta \theta$, $$ N \delta \theta = \theta .$$ Using your favorite multiple angle identity, it is possible to express $\sin(N \delta \theta)$ in terms of $\sin(\delta \theta)$ and $\cos(\delta \theta)$. In particular, I'm going to use de Moivre's formula, $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = (\cos(\delta\theta) + i \sin(\delta\theta))^N$$ Applying the binomial theorem on the right allows us to express this as, $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = \sum_{k=0}^N i^k \sin^k(\delta\theta) \cos^{N-k}(\delta \theta) \frac{N!}{k!(N-k)!}$$ Now if $\delta \theta$ is small enough, we can use the small angle identities for sine and cosine, $$ \cos(N\delta\theta) + i \sin( N \delta \theta) = \sum_{k=0}^N i^k (\delta\theta)^k \frac{N!}{k!(N-k)!}$$ Removing all explicit reference to $\delta \theta$, we get $$ \cos(\theta) + i \sin(\theta) = \sum_{k=0}^N i^k (\theta)^k \frac{ N!}{k! N^k (N-k)!}$$ Now if we take the limit as $N$ becomes infinite, the term $\frac{ N!}{k! N^k (N-k)!}$ will go to $\frac{1}{k!}$. Since all of the approximations we made become exact for infinite $N$, we take the limit and get $$ \cos(\theta) + i \sin(\theta) = \sum_{k=0}^\infty i^k \theta^k \frac{1}{k!}$$ If we want $\sin(\theta)$, then we need to take the imaginary part of this series, which will give us only the odd terms in the series. We note that $Im(i) = 1$, $Im(i^3)=-1$, $Im(i^5)=1$, and so on. $$\sin(\theta) = \theta - \theta^3 / 3! + \theta^5 / 5! -\theta^7/7!+\cdots$$ So you can see that the form of this series is all based on a hypothesis about how to measure angles in terms of arc length and the multiple angle formula for $\sin$.
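The punchline is easy to see numerically (our own sketch; `sin_series` is just the partial sum of the series above). Feeding the series an angle in radians reproduces the sine, while feeding it the degree measure of the same angle does not:

```python
import math

def sin_series(theta, terms=10):
    # Partial sum of theta - theta^3/3! + theta^5/5! - ...
    return sum((-1) ** k * theta ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

angle = math.pi / 6                      # 30 degrees, expressed in radians
assert abs(sin_series(angle) - 0.5) < 1e-12

# Plugging in the number 30 instead computes sin(30 radians), not sin(30 deg):
assert abs(sin_series(30.0, terms=60) - math.sin(30.0)) < 0.05
assert abs(sin_series(30.0, terms=60) - 0.5) > 0.5
```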
H: Simple PDE question I was trying to solve problem 2.5.15 (Evans) from these notes: http://www.math.umn.edu/~robt/docs/evans_solutions.pdf But on page 12, line 11, I really don't know how to prove that \begin{align*} & \lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\delta}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}g(s)ds\\ &\quad +\lim_{x \rightarrow 0^{+}}\frac{x}{\sqrt{4\pi}}\int_{0}^{t-\delta}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}g(s)ds\\ &=g(t)\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\delta}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}ds\\ \end{align*} where $g(0) = 0$ and $\delta >0$ is fixed. Please, I need help. Thanks in advance AI: Note that the second term goes to zero as $x\to 0$, because the limits of integration are bounded away from $t$, so the integral is always finite. As for the first term: \begin{align*} &\left|\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\delta}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}[g(s)-g(t)]\,ds\right|\\ &\quad=\left|\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\delta}^{t-\epsilon}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}[g(s)-g(t)]\,ds+\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\epsilon}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}[g(s)-g(t)]\,ds\right|\\ &\quad=\left|\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\epsilon}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}[g(s)-g(t)]\,ds\right|\\ &\quad\le\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\epsilon}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}\,ds\cdot\sup_{s\in[t-\epsilon,t]}|g(s)-g(t)|. \end{align*} So as long as $\lim_{x \rightarrow 0^{+}} \frac{x}{\sqrt{4\pi}}\int_{t-\epsilon}^{t}\frac{1}{(t-s)^{3/2}}e^{\frac{-x^2}{4(t-s)}}ds$ is finite for all $\epsilon>0$, we can take $\epsilon\to 0$, and since $g$ is continuous, $\sup\limits_{s\in[t-\epsilon,t]}|g(s)-g(t)|\to 0$ as $\epsilon\to 0$, giving the result. And they prove that the integral is finite in the next step.
H: Prove lower bound $\sum\limits_{k=1}^{n}\frac{1}{\sqrt{n^2+k^2}}\ge\left(1-\frac{1}{n}\right)\ln{(1+\sqrt{2})}+\frac{\sqrt{2}}{2n}$ Consider $$S_n=\sum_{k=1}^{n}\frac{1}{\sqrt{n^2+k^2}}$$ and show that, for every positive integer $n$, $$S_n\ge\left(1-\frac{1}{n}\right)\ln{(1+\sqrt{2})}+\dfrac{\sqrt{2}}{2n}$$ I can prove a related upper bound: $$S_n=\sum_{k=1}^{n}\dfrac{1}{n\sqrt{1+\left(\dfrac{k}{n}\right)^2}}\le\int_{0}^{1}\dfrac{1}{\sqrt{1+x^2}}dx=\ln{(1+\sqrt{2})}$$ but I can't prove the lower bound. AI: In a previous version of this answer, we first noted that the argument to get an upper bound of $$S_n=\frac1n\sum_{k=1}^nf\left(\frac{k}n\right)\qquad f(x)=\frac1{\sqrt{1+x^2}}$$ is simply that the function $f$ is decreasing on $x>0$ hence, for every $k$, $$\frac1nf\left(\frac{k}n\right)\leqslant\int_{(k-1)/n}^{k/n}f(x)dx$$ and, summing over $1\leqslant k\leqslant n$, one gets readily $$S_n\leqslant\sum_{k=1}^n\int_{(k-1)/n}^{k/n}f(x)dx=\int_0^1f(x)dx=\left.\arg\!\sinh x\,\right|_0^1=\arg\!\sinh1=\ln(1+\sqrt2) $$ Likewise, to get a lower bound, the most natural idea (in any case this is the idea we followed in the previous version of this answer...) might be to use once again the fact that the function $f$ is decreasing but this time, to the effect that, for every $k$, $$\frac1nf\left(\frac{k}n\right)\geqslant\int^{(k+1)/n}_{k/n}f(x)dx$$ hence, summing over $1\leqslant k\leqslant n-1$, one gets $$S_n-\frac1{\sqrt{2n^2}}\geqslant\sum_{k=1}^{n-1}\int^{(k+1)/n}_{k/n}f(x)dx=\int_{1/n}^1f(x)dx=\left.\arg\!\sinh x\,\right|_{1/n}^1=\arg\!\sinh1-\arg\!\sinh\left(\frac1n\right) $$ Unfortunately, $$\arg\!\sinh\left(\frac1n\right)>\frac1n\arg\!\sinh1$$ hence, to get the desired lower bound, this approach is doomed and another argument is required. (The trapezoidal rule might be all that is needed here, but I did not check.) Nota: Thanks to user @Hans for having spotted this mistake (nearly 4 years later...).
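Both bounds are easy to confirm numerically (our own check; `ln_silver` is $\ln(1+\sqrt2)=\operatorname{arg\,sinh} 1$, and the small tolerance absorbs floating-point rounding, since the lower bound holds with equality at $n=1$):

```python
import math

ln_silver = math.log(1 + math.sqrt(2))        # arcsinh(1)

def S(n):
    return sum(1 / math.sqrt(n * n + k * k) for k in range(1, n + 1))

for n in range(1, 201):
    lower = (1 - 1 / n) * ln_silver + math.sqrt(2) / (2 * n)
    assert lower - 1e-12 <= S(n) <= ln_silver
```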
H: Find the general solution of $u_{xy}+2u_x+u_y+2u=0$. As the title says, the task is to find the general solution of the PDE $$ u_{xy}+2u_x+u_y+2u=0.~~~~(*) $$ I should mention that this is the second part of the task I asked about here: Which PDE does v fulfill?. Maybe it is needed here, so I mention it. My idea is the following: according to the link, $u$ is a solution of (*) if and only if $v(x,y):=u(x,y)\exp(x+2y)$ is a solution of the PDE $$ v_{xy}=0. $$ Now the question is which possibilities there are for $v$ so that $v_{xy}=0$. Is it that $v_{xy}=0$ if and only if $v_y$ is constant or only depends on $y$ or is a combination of that, i.e. $v=\mathrm{const}+g(y)$ for any $g\in C^2(\mathbb{R})$? Then, to my mind, $$ u(x,y)=v(x,y)\exp(-bx-ay), \quad v(x,y):=\mathrm{const}+g(y), \quad g\in C^2(\mathbb{R}) $$ is the general solution of (*). What do you think about that? AI: Integrating $v_{xy} = 0$ with respect to $x$ should give you $v_y = a(y)$ for an arbitrary function $a$. Now integrate with respect to $y$ and you should get $v = A(y) + B(x)$ where $A$ and $B$ are arbitrary functions. Undoing the substitution, the general solution is $u(x,y)=(A(y)+B(x))\exp(-x-2y)$.
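To double-check the answer's result numerically (our own sketch with illustrative choices $A(y)=\sin y$, $B(x)=x^2$), the candidate solution $u=(A(y)+B(x))e^{-x-2y}$ can be plugged into the PDE via central finite differences:

```python
import math

def u(x, y):
    # u = (A(y) + B(x)) * exp(-x - 2y) with A(y) = sin(y), B(x) = x^2.
    return (math.sin(y) + x * x) * math.exp(-x - 2 * y)

h = 1e-4
x0, y0 = 0.3, -0.2
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
u_y = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
u_xy = (u(x0 + h, y0 + h) - u(x0 + h, y0 - h)
        - u(x0 - h, y0 + h) + u(x0 - h, y0 - h)) / (4 * h * h)

residual = u_xy + 2 * u_x + u_y + 2 * u(x0, y0)
assert abs(residual) < 1e-6    # the PDE holds up to discretization error
```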
H: Is there a strategy to switching $\infty$ limits to $-\infty$ limits? I asked recently in this question how to use the definition of $$e:= \lim_{x\to\infty}\left( 1+\frac1x \right)^x$$ to show that $$\lim_{x\to -\infty}\left( 1+\frac1x \right)^x = e.$$ A helpful answer said that $$\left(1+\frac1{\eta}\right)^\eta = \left(\frac1{\left(1+\frac1{-(\eta+1)}\right)^{-(\eta+1)}}\right)^{-\eta/(\eta+1)}$$ and $-(\eta + 1) \xrightarrow[]{\eta \to -\infty} \infty$. It struck me as a brilliant way to solve the problem, and I wondered how one might come up with it. Is the rearrangement done in this answer an instance of a more general strategy, or is it more or less just fiddling until one gets the right form? AI: In general, if $h(x)=1+O(x^{-2})$ then $\lim_{x\to\infty} h(x)^x = 1$. Now, let $h(x)=\left(1-\frac{1}{x^2}\right)=\left(1-\frac{1}{x}\right)\left(1+\frac 1x\right)$. Since $h(x)^x\to 1$ as $x\to\infty$ and $(1+1/x)^x\to e$ as $x\to\infty$ we have that $\left(1-\frac 1x\right)^x\to \frac{1}{e}$ as $x\to\infty$. But then $\left(1-\frac1x\right)^{-x}\to e$ as $x\to\infty$. Replacing $x$ with $-x$, we see that: $$\lim_{x\to -\infty} \left(1+\frac{1}{x}\right)^x=e$$ So all you have to prove is the first claim.
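The claim itself is easy to probe numerically (our own quick check; the error bound in the comment follows from the expansion $x\ln(1+1/x)=1-\frac{1}{2x}+O(x^{-2})$):

```python
import math

# (1 + 1/x)^x approaches e as x -> -infinity, just as it does for x -> +infinity.
for x in (-1e2, -1e4, -1e6):
    value = (1 + 1 / x) ** x
    assert abs(value - math.e) < 3 * math.e / abs(x)   # error is ~ e/(2|x|)
```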
H: On the definition of product of groups What I'm asking comes from Bosch, Algebra, the first chapter on elementary group theory. 1) Let $X$ be a set and $G$ a group. Then the set $G^X$ of maps from $X$ to $G$ is a group in a natural way, that is for every $f,g\in G^X$, set $(f\cdot g)(x):=f(x)\cdot g(x)$. 2) Let $X$ be a set of indexes and $\left(G_x\right)_{x\in X}$ a collection of groups. Then the product set $\prod_{x\in X}G_x$ becomes a group if we define the product component-wise, that is $(g_x)_{x\in X}\cdot (h_x)_{x\in X}:=(g_x\cdot h_x)_{x\in X}$. Now suppose that the groups $G_x$ are copies of the same group $G$. Then, using the notation in 1), we have $$\displaystyle\prod_{x\in X}G_x=G^X$$ Well, I can't understand this equation. I mean, an element of the left-hand side is a collection $(g_x)_{x\in X}$ of elements of $G$, whereas on the right-hand side we have functions taking values in $G$. How can the two objects be the same? AI: Strictly speaking, what we have is only an isomorphism of groups, given by $(g_x)_{x\in X}\mapsto (x\mapsto g_x)$ with inverse map $f\mapsto(f(x))_{x\in X}$. But this isomorphism is so canonical, the difference so shallow that actually calling it equality seems to be one of the lesser sins. For example, would you really object to the equation $(A\times B)\times C=A\times (B\times C)$ for cartesian products of sets?
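A minimal Python sketch of ours makes the canonical isomorphism tangible for $X=\{0,1,2\}$ and $G=\mathbb{Z}/4$ under addition (tuples play the product, dicts play the functions; all names are illustrative):

```python
X = [0, 1, 2]

def tuple_to_fn(t):
    return dict(zip(X, t))            # a "function" X -> G, stored as a dict

def fn_to_tuple(f):
    return tuple(f[x] for x in X)

a, b = (1, 3, 2), (2, 2, 3)
prod_in_product = tuple((a[x] + b[x]) % 4 for x in X)       # componentwise
f, g = tuple_to_fn(a), tuple_to_fn(b)
prod_in_functions = {x: (f[x] + g[x]) % 4 for x in X}       # pointwise

# The bijection respects the group operations and the maps are mutual inverses.
assert fn_to_tuple(prod_in_functions) == prod_in_product
assert tuple_to_fn(prod_in_product) == prod_in_functions
assert fn_to_tuple(tuple_to_fn(a)) == a
```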