H: What Topology does a Straight Line in the Plane Inherit as a Subspace of $\mathbb{R_l} \times \mathbb{R}$ and of $\mathbb{R_l} \times \mathbb{R_l}$ Given a straight line in the plane, what topology does this straight line inherit as a subspace of $\mathbb{R_l} \times \mathbb{R}$ and as a subspace of $\mathbb{R_l} \times \mathbb{R_l}$, where $\mathbb{R_l}$ is the lower limit topology? So trying to figure this out definitely made my brain hurt. I believe that as a subspace of $\mathbb{R_l} \times \mathbb{R}$, all non-vertical straight lines just inherit the lower limit topology $\mathbb{R_l}$, while vertical lines inherit the standard topology on $\mathbb{R}$. As for $\mathbb{R_l} \times \mathbb{R_l}$, as far as I can tell, the only difference is that now all straight lines, including vertical ones, inherit the lower limit topology. My reasoning was basically that for $\mathbb{R_l} \times \mathbb{R}$, open sets on the line all have an initial left end point, since the x-coordinate of this left end point is always captured by the open sets [a,b) in $\mathbb{R_l}$, and this in turn drags the y-coordinate of the initial left end point along for the ride (so to speak). This is true in all but the vertical line case where there is no shift in the horizontal direction and thus the topology is inherited strictly from $\mathbb{R}$. As for $\mathbb{R_l} \times \mathbb{R_l}$ it's basically the same argument except now the vertical lines inherit from $\mathbb{R_l}$ as well. Can someone let me know whether I've reasoned correctly? Thanks. AI: You’re right about $\Bbb R_l\times\Bbb R$, but you can make a better (or at least clearer) argument. Basic open sets in $\Bbb R_l\times\Bbb R$ are of the form $[a,b)\times(c,d)$, so they’re boxes open on all sides except the left. Let $L$ be a non-vertical line, and let $\langle x,y\rangle\in L$. Then the intersection with $L$ of any basic open set $[x,b)\times(c,d)$, where $y\in(c,d)$, is a left-closed, right-open interval with $x$ as left endpoint. It’s not hard to check that these intersections are a base for the subspace topology on $L$, which is therefore the Sorgenfrey (lower-limit) topology. And as you say, the vertical lines are homeomorphic to the second factor, $\Bbb R$. $\Bbb R_l^2$ is another story, though. Now the topology has a base of boxes of the form $[a,b)\times[c,d)$, open only on the top and righthand edges. If the line $L$ is vertical or horizontal, of course, it’s homeomorphic to one of the factor spaces, i.e., to $\Bbb R_l$. If it has positive slope, you can argue much as in the case of $\Bbb R_l\times\Bbb R$ to conclude that it’s homeomorphic to $\Bbb R_l$. If it has negative slope, however, its topology is discrete: for any $\langle x,y\rangle\in L$, $$L\cap\Big([x,x+1)\times[y,y+1)\Big)=\big\{\langle x,y\rangle\big\}\;.$$
H: Convergent operatorial series An exercise I was doing asks (among other things) for the values of $z\in\mathbb{C}$ for which the following (operatorial) series converges absolutely: $$\sum_{n=0}^{\infty}z^nA^n$$ where $A$ is an operator in the Hilbert space $L^2(0,2\pi)$ such that $$(Af)(x)=\frac{1}{\pi}\int_0^{2\pi}[\cos(x)\cos(y)+\sin(x)\sin(y)]f(y)dy$$ I understand that $A$ is basically a projection operator in the form $Af = c\langle c,f\rangle+s\langle s,f\rangle$, where $s=\frac{1}{\sqrt{\pi}}\sin (x)$ and $c=\frac{1}{\sqrt{\pi}}\cos (x)$, so $||A||=1$ and $A^n = A$. I also understand that, if I interpreted well, you should apply the Cauchy-Hadamard theorem to $\sum_{n=0}^{\infty}z^nA^n$ and search if it converges absolutely in the Banach space of all bounded operators between $L^2(0,2\pi)$ and itself. But with the Cauchy-Hadamard theorem you can conclude that the radius of convergence is $(\limsup||A^n||^{1/n})=1$. The answer to the exercise, however, is different and more cryptic: "The series converges in norm when $||zA||\le 1$ ($|z|\le1$)" How could you include the case $|z|=1$? AI: Your solution is correct and the solution in your book is wrong, because of the simple case $z=1$. For this case we have an infinite sum of identical non-zero operators; more precisely, we have $$ \sum\limits_{n=1}^\infty 1^nA^n=\sum\limits_{n=1}^\infty A= ??? $$ Since $A^n=A$ for $n\ge 1$, our series is (apart from the $n=0$ term, which equals $I$) of the form $$ \sum\limits_{n=1}^\infty z^n A $$ which is absolutely convergent exactly for $|z|<1$, because convergence depends only on the scalar series $$ \sum\limits_{n=0}^\infty z^n $$
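A finite-dimensional sanity check of this answer (a sketch only: the true $A$ acts on $L^2(0,2\pi)$; here it is modeled by an arbitrary rank-2 orthogonal projection $P$ on $\mathbb{R}^5$, my choice of dimension). Idempotency gives $P^n=P$, the operator norm is $1$, and the partial sums of $\sum_n z^nP^n$ stay bounded for $|z|<1$ but grow without bound at $z=1$:

```python
import numpy as np

# Stand-in for A: orthogonal projection onto the span of two orthonormal
# vectors u, v in R^5 (hypothetical finite-dimensional model).
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
u, v = Q[:, 0], Q[:, 1]
P = np.outer(u, u) + np.outer(v, v)

print(np.allclose(P @ P, P))        # True: P is idempotent, so P^n = P for n >= 1
print(np.linalg.norm(P, 2))         # 1.0: operator (spectral) norm of a projection

# Partial sums of sum_{n>=0} z^n P^n = I + (sum_{n>=1} z^n) P
for z in [0.9, 1.0]:
    S = np.eye(5) + sum(z**n for n in range(1, 200)) * P
    print(z, np.linalg.norm(S, 2))  # ~10 for z=0.9; ~200 and growing for z=1
```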
H: Two sums for $\pi$ I was looking for a simple way to evaluate the integral $\int_0^\infty \frac{\sin x}{x}dx$ ( a belated look at this question). There are symmetries to be exploited, for one thing. So I had an idea, but the idea depends, optimistically, on two expressions for $\pi$ that I cannot prove, and my question is whether anyone sees reasonable proofs of either expression (1, 2 below). Put $$\int_0^\infty \frac{\sin x}{x}dx = \int_0^\pi \sum_{k=0}^{k=\infty} \left\{\frac{\sin x}{x + 2 k \pi }-\frac{\sin x }{x+2k\pi+\pi}\right\} \, dx$$ The intuition behind this step is that as we make our way around the unit circle from 0 to $\pi$, for each angle $\alpha$ we have an angle $\alpha + \pi$ for which the value of $\frac{\sin x} {x}$will be smaller and negative. No need to integrate beyond $\pi$, and for angles greater than $2\pi$ we are simply dividing up the original integral's range by adding multiples of $\pi$ to the value of $x$. WLOG we can form Riemann sums by dividing the range of integration into an even number of equi-angle subranges $dx_n$. For example, if $n = 6$, we have $$\int_0^\pi \sum_{k=0}^{k=\infty}\sin x \left\{\frac{1}{x + 2 k \pi }-\frac{1 }{x+2k\pi+\pi}\right\} \, dx$$ $$\approx \left\{ \sum_{k=0}^\infty \sum_{n=1}^6 \sin \left( \frac{n\pi}{6}\right)\left(\frac{1}{\pi}\right) \left(\frac{1}{\frac{n}{6} + 2k}-\frac{1}{\frac{n}{6}+2k+1}\right) \right\} \left\{ \frac{\pi}{6} \right\}.$$ Taking this as true, we note that pairs of summands of the Riemann sums symmetric about $\frac{\pi}{2}$ sum to $1$, and the value of the summand at $\frac{\pi}{2}$ is $1/2$. Again, taking this as true, we have that $$\sum f(x_n) = \frac{1}{2} + 1\cdot \frac{n-2}{2} + \sin\pi\cdot f(\pi) = \frac{n-1}{2},$$ and $$\sum f(x_n) \, dx_n = \left(\frac{n-1}{2}\right) \cdot \frac{\pi}{n}.$$ So for the approximation above for $n = 6$, we expect a value of about $\frac{5 \pi}{12}$ for high $k$. In words, we are only counting n - 1 summands, since the last summand at $\pi$ is $0$. Since we divided the interval into $n$ subintervals, we are stuck with the sum above and the conditional limit (conditioned at least on the truth of the two limits below): $$\int_0^\infty \frac{\sin x}{x} \, dx = \lim_{n = 1}^\infty \frac{n-1}{2}\cdot \frac{\pi}{n} = \frac{\pi}{2}.$$ The problem is this: we need to prove, to begin with, that $$\sum_{k=0}^\infty \left(\frac{ 1}{\pi/2 + 2\pi k} - \frac{1}{\pi/2 + 2k\pi + \pi}\right) = \frac{1}{2}$$ or $$ \frac{1}{\pi}\sum \frac{1}{1/2 + 2k}- \frac{1}{1/2 + 2k\pi +1}= \frac{1}{2},$$ that is, (1) $\sum_{k=0}^\infty \frac{4}{3+16k+16k^2}= \frac{\pi}{2}$. We must also prove for $0< j < n$ that (2) $\sin (\frac{j\pi}{n})( \sum_{k=0}^{\infty} (\frac{ 1}{\frac{j\pi}{n} + 2\pi k} - \frac{1}{\frac{j\pi}{n} + 2k\pi + \pi}) + \sum_{k=0}^{\infty} (\frac{ 1}{\frac{(n-j)\pi}{n} + 2\pi k} - \frac{1}{\frac{(n-j)\pi}{n} + 2k\pi + \pi}))= 1$. or , factoring $\frac{1}{ \pi}$ out of the sum and mutiplying: $$\sin \left( \frac{j\pi}{n}\right)\left( \sum_{k=0}^\infty \left(\frac{1}{\frac{j}{n} + 2 k} - \frac{1}{\frac{j}{n} + 2k + 1}\right) + \sum_{k=0}^\infty \left( \frac{ 1}{\frac{(n-j)}{n} + 2 k} - \frac{1}{\frac{(n-j)}{n} + 2k + 1}\right)\right)= \pi.$$ (1) might be a special case of something at Wolfram's site, but I didn't spot it. (2) looks messy, but maybe induction? If this is otherwise correct, I think one could work back from the relations for $\pi$ and establish the value of the integral. It is important not to forget that k starts at 0. 
Edit: the following is equivalent to (2) and maybe easier to scan. For $0<r<1$, $$(2) \sum_{k=0}^{\infty}\frac {2(\sin \pi r)(1+4k+4k^2-r+r^2) }{(1+2k+r)(1+2k-r)(2+2k-r)(2k+r)} = \pi$$ This was derived assuming r rational, but I don't think it matters. AI: $$ \sum_{k=0}^N \frac{1}{\alpha + 2 k} = \frac{\Psi(N+1+\alpha/2) - \Psi(\alpha/2)}{2} $$ where $\Psi(t) = \ln(t) + O(1/t)$ as $t \to \infty$, so $$\sum_{k=0}^\infty \left(\frac{1}{1/2 + 2 k} - \frac{1}{3/2 + 2 k} \right) = \frac{\Psi(3/4) - \Psi(1/4)}{2} = \frac{\pi}{2}$$ However, I think the evaluation of $\Psi(3/4) - \Psi(1/4)$ may be just as hard as your integral. EDIT: It turns out that $\Psi$ has the integral form $$ \Psi(t) = \int_0^\infty \left( e^{-s} - \frac{1}{(s+1)^t} \right) \frac{ds}{s}$$ so that $$ \Psi(3/4) - \Psi(1/4) = \int_0^\infty \left((s+1)^{-1/4} - (s+1)^{-3/4}\right) \frac{ds}{s} = \left. 4 \arctan((s+1)^{1/4}) \right|_0^\infty = \pi $$
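The two identities are easy to test numerically; a small sketch using SciPy's digamma function (the partial sum converges slowly, so a large cutoff is used):

```python
import numpy as np
from scipy.special import digamma

# Check (1): sum 4/(3 + 16k + 16k^2) should equal pi/2.
k = np.arange(2_000_000)
print(np.sum(4.0 / (3 + 16*k + 16*k**2)))   # -> 1.570796... ~ pi/2

# And the closed form via the digamma function Psi:
print((digamma(0.75) - digamma(0.25)) / 2)  # -> pi/2
print(np.pi / 2)
```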
H: Simplifying a fraction I'm in doubt on how to simplify $ \left (-\dfrac{1}{243} \right )^{-\frac{2}{3}}$. I've started with $\left (-\dfrac{1}{9\sqrt{3}} \right )^{-\frac{2}{3}}$ but now I'm stuck because of this minus sign in the main fraction? Thanks in advance. AI: Take it one step at a time, first getting rid of the negative exponent by taking reciprocals: $$\begin{align*} \left(-\frac1{243}\right)^{-\frac23}&=\left(-\frac1{243}\right)^{(-1)\cdot\frac23}\\ &=\left(\left(-\frac1{243}\right)^{-1}\right)^{\frac23}\\ &=(-243)^{2/3}\\ &=\big(-(3^5)\big)^{2/3}\\ &=\big(-(3^5)\big)^{2\cdot\frac13}\\ &=\left(\big(-(3^5)\big)^2\right)^{1/3}\\ &=\left(3^{10}\right)^{1/3}\\ &=3^{10/3}\\ &=3^{3+\frac13}\\ &=3^3\cdot3^{1/3}\\ &=27\sqrt[3]3\;. \end{align*}$$
H: Linear algebra: projection Consider $P_2$ together with the inner product $(p(x), q(x)) = p(0)q(0) + p(1)q(1) + p(2)q(2)$. Find the projection of $p(x)=x$ onto the subspace $W=\operatorname{span}\{-x+1,x^{2}+2\}$. How do you solve this question? I don't get what it means to find the projection of $p(x)=x$ onto $W$. I'm only used to finding the projection of a vector onto $W$. I did, however, find the orthogonal basis for the subspace being $\{-x+1, x^{2}-2x+4\}$. But I don't know how to apply this to the projection equation. Can someone please help me? AI: Suppose $\mathbf{V}$ is an inner product vector space, and $\mathbf{W}$ is a subspace. If $\beta=\{\mathbf{w}_1,\ldots,\mathbf{w}_k\}$ is an orthonormal basis for $\mathbf{W}$, then the orthogonal projection onto $\mathbf{W}$ can be computed using $\beta$: given a vector $\mathbf{v}$, the orthogonal projection onto $\mathbf{W}$ is $$\pi_{\mathbf{W}}(\mathbf{v}) = \langle \mathbf{v},\mathbf{w}_1\rangle \mathbf{w}_1+\cdots + \langle \mathbf{v},\mathbf{w}_k\rangle \mathbf{w}_k.$$ If you only have an orthogonal basis, then you need to divide each factor by the square of the norm of the basis vectors. That is, if you have an orthogonal basis $\gamma = \{\mathbf{z}_1,\ldots,\mathbf{z}_k\}$, then the projection is given by: $$\pi_{\mathbf{W}}(\mathbf{v}) = \frac{\langle\mathbf{v},\mathbf{z}_1\rangle}{\langle \mathbf{z}_1,\mathbf{z}_1\rangle}\mathbf{z}_1 + \cdots + \frac{\langle\mathbf{v},\mathbf{z}_k\rangle}{\langle\mathbf{z}_k,\mathbf{z}_k\rangle}\mathbf{z}_k.$$ Here, you have a subspace for which you say you already have an orthogonal basis. And you have your vector: $\mathbf{v} = x$. So all you have to do is use the usual formula with these vectors and this inner product. For example, with $\mathbf{v}=x$ and $\mathbf{z}_1 = -x + 1$, we have: $$\langle x,-x+1\rangle = (0)(-0+1) + (1)(-1+1) + (2)(-2+1) = 0+0-2 = -2.$$ Etc.
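Since this inner product only sees the values of polynomials at the nodes $0,1,2$, the whole computation reduces to dot products of value vectors. A small NumPy sketch (coefficient arrays are my encoding, highest degree first) that finishes the computation; it prints $[11, 19, 3]$, i.e. the projection is $\frac{1}{41}(11x^2+19x+3)$, and confirms the basis is orthogonal:

```python
import numpy as np

pts = np.array([0, 1, 2])                       # nodes of the inner product

def ip(p, q):
    """(p, q) = p(0)q(0) + p(1)q(1) + p(2)q(2), via values at the nodes."""
    return np.dot(np.polyval(p, pts), np.polyval(q, pts))

v  = np.array([0, 1, 0])     # p(x) = x          (coeffs, highest degree first)
z1 = np.array([0, -1, 1])    # -x + 1
z2 = np.array([1, -2, 4])    # x^2 - 2x + 4      (the OP's orthogonal basis)

print(ip(z1, z2))            # 0: the basis is indeed orthogonal
proj = ip(v, z1)/ip(z1, z1) * z1 + ip(v, z2)/ip(z2, z2) * z2
print(proj * 41)             # [11. 19.  3.]: projection = (11x^2 + 19x + 3)/41
```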
H: Proof of relative compactness Is the set of functions $f$ in $C^1([0,1])$ equipped with $\lVert f \lVert = \lVert f\lVert_\infty+\lVert f'\lVert_\infty$ (Case 1) or $\lVert f\lVert=\lVert f \lVert_\infty$ (Case 2) satisfying $\lvert f(0.5) \lvert \leq 1$ and $\int_0^1 \lvert f'(x)\lvert^2 dx \leq1$ relatively compact in $C^1([0,1])$? I think this might be an application of the Arzela-Ascoli theorem. So I'd like to show that the set of functions is uniformly bounded and equicontinuous. Boundedness is easy since \begin{align}\lvert f(x)\lvert \leq 1+\int_{0.5}^x \lvert f'(t)\lvert dt\leq1+0.5+\int_0^1\lvert f'(t)\lvert^2 dt \quad \forall x. \end{align} But equicontinuity seems harder since $\int_0^1 \lvert f'(x)\lvert^2 dx \leq1$ doesn't imply anything about the boundedness of $f'(x)$. AI: Let $$K:=\{f\in C^1[0,1]\mid |f(0.5)|\leq 1\mbox{ and }\int_0^1|f'(t)|^2\leq 1\}.$$ This set has compact closure for the sup norm (Case 2), since for $f\in K$ and $x,y\in [0,1]$: $$|f(x)-f(y)|\leq \left|\int_x^y 1\cdot |f'(t)|\,dt\right|\leq \sqrt{|x-y|},$$ by the Cauchy–Schwarz inequality. Uniform boundedness has already been shown by the OP, but it can also be deduced from equicontinuity and boundedness at one point. But $K$ does not have compact closure for the norm $\lVert f\rVert:=\sup_{0\leq x\leq 1}|f(x)|+\sup_{0\leq x\leq 1}|f'(x)|$ (Case 1). Indeed, consider $f_n(x):=\frac{\sin(nx)}n$. Then $f_n\in K$ for all $n$; assume that $\{f_{n_k}\}$ is a subsequence converging for $\lVert\cdot\rVert$, to $f$ say. In particular, $f_{n_k}'$ converges uniformly to $f'$. But $f'_{n_k}(x)=\cos(n_kx)$ and we should have, by uniform convergence, $$f'(0)=\lim_{k\to +\infty}f'_{n_k}\left(\frac{2\pi}{n_k}\right)=\lim_{k\to +\infty}f'_{n_k}\left(\frac{\pi}{2n_k}\right).$$ But the first sequence is always $1$ whereas the other one is always $0$, a contradiction.
H: Precompactness of a set vs. compactness of some operator I found a remark in Adams' book Sobolev spaces (Acad. Press) which I cannot understand completely. It is on page 34 and says: We remark that Theorem 2.22 is just a setting, suitable for our purposes, of a well-known theorem stating that the operator-norm limit of a sequence of compact operators is compact. Theorem 2.22 is stated as follows: Theorem Let $1\le p <+\infty$ and let $K\subset L^p(\Omega)$. Suppose there exists a sequence $(\Omega_j)$ of subdomains of $\Omega$ having the following properties: a. For each $j$, $\Omega_j \subset \Omega_{j+1}$; b. For each $j$ the set of restrictions to $\Omega_j$ of the functions in $K$ is precompact in $L^p(\Omega_j)$; c. For every $\varepsilon > 0$ there exists $j$ such that $$\int_{\Omega \setminus \Omega_j} \lvert u(x)\rvert^p\, dx <\varepsilon \qquad \text{for every}\ u\in K.$$ Then $K$ is precompact in $L^p(\Omega)$. What "sequence of operators" is he talking about? Thank you. AI: The sequence of restriction operators to $\Omega_j$. Restriction is a linear operator, right?
H: Cartesian equation of a parametric curve I am trying to find the Cartesian equation of the parametric curve $$x = 1-t^2, \qquad y = t-2, \qquad -2 \leq t \leq 4$$ I am not sure how to proceed but I think a good direction would be to get everything in terms of $x$, eliminating $t$. I get $t = \sqrt{x-1}$ so I put that in and I get a wrong answer. What is wrong with my solution? AI: If $x=1-t^2$ and $y=t-2$ then $$y+2=t$$ $$(y+2)^2=t^2$$ We replace this in the other equation $$x=1-t^2$$ $$x=1-(y+2)^2$$ $$x-1=-(y+2)^2$$ $$1-x=(y+2)^2$$ NOTE: Although one might solve for $y$, it is not necessary to do so, since the expression $\sqrt{1-x}$ will have to be taken as $\pm \sqrt{1-x}$. (This is also what went wrong in your attempt: from $x=1-t^2$ you get $t=\pm\sqrt{1-x}$, not $\sqrt{x-1}$.) Thus, it is better to stick to the above parabola in "$(y,x)$" rather than to a square-root function in "$(x,y)$". Now you need to find the ranges of $x$ and $y$ for the respective values of $t$. Note you'll have a curve which will not be a function (it will be a horizontally opening parabola, cropped to the given parameter range)
H: Inconjugate maximal subgroups of a soluble group I am looking for a proof of the following proposition: If $M_1,M_2$ are inconjugate maximal subgroups of the finite and soluble group $G$, then $M_1\cap M_2$ is maximal in at least one of $M_1$ or $M_2$. I know there is a proof of this fact in Doerk and Hawkes' "Finite Soluble Groups", but I can't access it, at least for the time being. I was wondering if anyone knows of some other source where a proof is provided. AI: I couldn't find another reference, but here are Theorem 16.6 and its proof from Doerk's book: Theorem: Let $L$ and $M$ be inconjugate maximal subgroups of a finite soluble group $G$. If $M^G\not\leq L^G$, then $L\cap M$ is a maximal subgroup of $L$. Proof: Let $R=$ Core$_G(M)$ and $S/R=$ Soc$(G/R)$. The hypothesis implies that $R\not\leq$ Core$_G(L)$ and therefore that $LR=G$ $(*)$. Since $G/R$ is primitive, $S/R$ is a chief factor of $G$, and since $R$ centralizes $S/R$, it follows from $(*)$ that $S/R$ is $L$-irreducible. From $(*)$ we also have that $S=S\cap LR=(S\cap L)R$, whence $S/R=(S\cap L)R/R\ \ \stackrel{\cong}{\tiny{L}}\ \ (S\cap L)/(R\cap L)$, and therefore $(S\cap L)/(R\cap L)$ is a chief factor of $L$. Now $M$ complements $S/R$ in $G$ and $LM=G$. Hence, $|L:L\cap M|\geq |(S\cap L)(L\cap M):L\cap M|=|S\cap L :R\cap L|=|S:R|=|G:M|=|LM:M|=|L:L\cap M|$. Consequently $(S\cap L)(L\cap M)=L$, and $L\cap M$ complements the chief factor $(S\cap L)/(R\cap L)$ in $L$. Therefore, $L\cap M$ is a max. subgroup of $L$.$\ \ \ $ QED Since $\leq$ is a partial order, your mentioned theorem follows.
H: Unique decimal representation using only 0s and 1s (base 10) Say $x \in [0, 1)$ and $\displaystyle x = \sum_{i = 1}^\infty \frac{a_i}{10^i}$ for $a_i \in \{0, 1\}$. Is it true if $\displaystyle x = \sum_{i = 1}^\infty \frac{b_i}{10^i}$ for $b_i \in \{0, 1\}$ that $a_i = b_i$ for all $i$? It seems to me that yes this is true, since the only times I have seen uniqueness of decimal representation break down is when we introduce repeating 9s. Is there a simple proof of my statement above? EDIT: I added base 10 to the subject to clarify I don't want to consider $\displaystyle \sum_{i = 1}^\infty \frac{a_i}{2^i}$. I am not sure if that is the correct terminology. I want to establish $f : \{0, 1\}^\infty \to \mathbb{R}$ by $(a_1, a_2, a_3, \cdots) \mapsto 0.a_1a_2a_3\cdots$ is an injection. AI: Note that we have $$ 0 = x -x = \sum_{i=1}^{\infty} \dfrac{a_i}{10^i} - \sum_{i=1}^{\infty} \dfrac{b_i}{10^i} = \sum_{i=1}^{\infty} \dfrac{a_i - b_i}{10^i}$$ Note that you are allowed to rearrange terms since $\lvert a_i \rvert + \lvert b_i \rvert \leq 2$ and the sequence $\displaystyle \sum_{i} \dfrac{\lvert a_i \rvert + \lvert b_i \rvert}{10^i}$ is absolutely convergent bounded above by $2x$. Hence, we have $$0 = \sum_{i=1}^{\infty} \dfrac{c_i}{10^i}$$ where $c_i \in \{-1,0,1\}$. Our goal now is to show that all the $c_i$'s are $0$. If not, look at the first non-zero $c_k$. Note that $$\sum_{i=k+1}^{\infty} \dfrac{|c_i|}{10^i} \leq \sum_{i=k+1}^{\infty} \dfrac1{10^i} = \dfrac1{10^{k+1}} \left( \dfrac1{1-1/10}\right) = \dfrac1{9 \cdot 10^{k}} < \dfrac1{10^{k}}.$$ Hence, $$\left \lvert \dfrac{c_k}{10^{k}} \right \rvert = \dfrac1{10^{k}} > \sum_{i=k+1}^{\infty} \dfrac{|c_i|}{10^i}.$$ Hence, $$\left \lvert \sum_{i=1}^{\infty} \dfrac{c_i}{10^i} \right \rvert = \left \lvert \dfrac{c_k}{10^k} - \left(-\sum_{i=k+1}^{\infty} \dfrac{|c_i|}{10^i} \right) \right \vert \geq \left \lvert \dfrac{c_k}{10^k} \right \rvert - \left \lvert \left(\sum_{i=k+1}^{\infty} \dfrac{|c_i|}{10^i} \right) \right \vert > 0$$ Hence, if $0 = \displaystyle \sum_{i=1}^{\infty} \dfrac{c_i}{10^i} \implies c_i = 0$, which in-turn gives us that $a_i = b_i$.
H: On compact space with order topology We are familiar with the fact that for the first uncountable cardinal $\omega_1$, the topological space $[0,\omega_1]$ is compact. I find that the proof for $\omega_1$ also works for every regular cardinal. So is there a result for every regular cardinal, i.e., for every regular cardinal $\kappa$ with the order topology, is $\kappa\cup \{\kappa\}$ compact? Am I right? Thanks for any help:) AI: For a limit ordinal $\alpha$, the space $\alpha$ with the order topology is not compact, since it is the strictly increasing union of the ordinals $\beta<\alpha$, all of which are open (as subsets of $\alpha$). On the other hand, any $\alpha+1$ is compact when equipped with the order topology: if we choose any family of open sets covering $\alpha+1$, we can find a subcover whose size is the length of a descending sequence of ordinals (the elements of the sequence are the maximal elements not covered by the intervals chosen so far; this can be done because any noninitial open interval has a maximal preceding element), and because any such descending sequence is finite, we have compactness. Whether the ordinal in question is a limit, is regular, or is a cardinal does not matter: any successor ordinal has this property.
H: Describing multivariable functions So I am presented with the following question: Describe and sketch the largest region in the $xy$-plane that corresponds to the domain of the function: $$g(x,y) = \sqrt{4 - x^2 - y^2} \ln(x-y).$$ Now to be I can find different restrictions like $4 - x^2 - y^2 \geq 0$... but I'm honestly not even sure where to begin this question! Any help? AI: So, you need two things: $$ 4 - x^2 - y^2 \geq 0 $$ to make the square root work, and also $$ x-y > 0 $$ to make the logarithm work. You will be graphing two regions in the $xy$-plane, and your answer will be the area which is in both regions. A good technique for graphing a region given by an inequality is to first replace the inequality by an equality. For the first region this means $$ 4 - x^2 - y^2 = 0$$ $$ 4 = x^2 + y^2 $$ Therefore, we're talking about the circle of radius two centered at the origin. The next question to answer: do we want the inside or outside of that circle? To determine that, we use a test point: pick any point not on the circle and plug it into the inequality. I'll choose $(x,y) = (5,0)$. Note that this point is on the outside of the circle. $$ 4 - x^2 - y^2 \geq 0 $$ $$ 4 - 5^2 - 0^2 \geq 0 $$ That's clearly false, so we do not want the outside of the circle. Our region is the inside of the circle. Shade that lightly on your drawing. Now, do the line by the same algorithm.
H: Parametric equations and line segments I am not sure how to do this one at all, I can't even start it. I am supposed to show that $$x = x_1+(x_2-x_1)t \\ y= y_1+(y_2-y_1)t$$ where $t$ is between zero and one describes the line segment that joins $P_1(x_1, y_1)$ and $P_2(x_2,y_2)$ I really have no clue what to do and how to work with this many different variables. AI: There are three things to prove here: the parametrization begins at $(x_1,y_1)$, the parametrization ends at $(x_2,y_2)$, the parametrization is a line segment. The first two are easy to prove: we have $0 \leq t \leq 1$, so plug $t=0$ into the equations for $x$ and $y$. You should obtain $x = x_1$ and $y = y_1$, which gives us item #1 on our todo list. Do the same for $t=1$, and you'll get item #2. The third requires a bit of algebra, and it must be kept in mind that $x_1,x_2,y_1,y_2$ are all constants, while $x,y$ are variables. Let us solve the $x$ equation for $t$: $$ x = x_1 + (x_2 - x_1)t $$ $$ \frac{x - x_1}{x_2-x_1} = t $$ (assuming $x_1\neq x_2$; if $x_1=x_2$, then $x$ is constant and the equations directly trace the vertical segment from $(x_1,y_1)$ to $(x_1,y_2)$). Now, plug that into the $y$ equation: $$ y = y_1 + (y_2-y_1)t $$ $$ y = y_1 + (y_2-y_1)\frac{x - x_1}{x_2-x_1} $$ The goal here is to get it into the form $y = mx+b$, so the algebra will be focused on figuring out what's multiplying $x$ and what isn't. We'll start by breaking the fraction into two: $$ y = y_1 + (y_2-y_1)\frac{x}{x_2-x_1} + (y_2-y_1)\frac{- x_1}{x_2-x_1} $$ Now, move the $y_1$ at the beginning to join it with the other stuff that does not depend on $x$: $$ y = (y_2-y_1)\frac{x}{x_2-x_1} + y_1 - (y_2-y_1)\frac{x_1}{x_2-x_1} $$ From here, we can see that we do have a line if we set $$ m =\frac{y_2-y_1}{x_2-x_1} $$ $$ b = y_1 - (y_2-y_1)\frac{x_1}{x_2-x_1} $$
H: When is tangent line horizontal? When is the tangent line of the following function horizontal? $$y =\frac{\sin x}{e^x}$$ What steps or how can I draw a graph to figure this out or prove this? AI: Apply the quotient rule to find $$ y' = \frac{e^x \cos x - e^x \sin x}{e^{2x}} $$ We want $y'=0$, which means we want the numerator to equal zero: $$ 0 = e^x \cos x - e^x \sin x $$ $$ 0 = \cos x - \sin x $$ $$ \sin x = \cos x $$ Divide by $\cos x$: $$ \tan x = 1 $$ This happens at $x = \frac{\pi}{4}$, and since tangent has period $\pi$, it will happen also when we add an integer multiple of $\pi$. Thus, we have a horizontal tangent line at $$ x = \frac{\pi}{4} + n\pi, n \in \mathbb{Z} $$
H: Computing $\lim_{h\rightarrow 0}\frac{\tan(-\frac{\pi}{4})+1}{h}$ How do I compute a tan limit with a fraction? $$\lim_{h\rightarrow 0}\frac{\tan(-\frac{\pi}{4})+1}{h}$$ AI: Since $ \tan(- \pi /4 ) + 1 = 0$, we have $$ \lim_{h \rightarrow 0} \frac{\tan \left ( \frac{- \pi}{4} \right ) + 1}{h} = \lim_{h \rightarrow 0} \frac{0}{h} = \lim_{h \rightarrow 0} 0 = 0 $$
H: Piece-Wise Discontinuity & Continuity When is the following function continuous? How would I go about listing the removable discontinuities and then redefining the function so that it is continuous in those places? $$f(x)= \begin {cases} \sqrt{2x}+7&\text{if }x <2\\ 0\ &\text{if }x = 2\\ 3^x&\text{if }x > 2 \end {cases}$$ AI: It is discontinuous at $x = 2$. Approaching from the left, the limit is $\sqrt{(2)(2)} + 7 = 9$. Approaching from the right, you have $3^2 = 9$. However at $x = 2$, the function is defined to be $0 \neq 9$. Hence it is discontinuous at $x = 2$, but can be made continuous by defining $f(2) = 9$.
H: Topology - The arbitrary union axiom So, the common answer to why we need the concept of topology is that we need it to talk about things like limits of infinite sequences and continuity. But, when we define the axioms of topology, we have an axiom which says that an arbitrary (countable or uncountable) union of open sets exists (or is this not true?) and is open. But doesn't the notion of arbitrary union in itself hide an infinite sequence, namely the sequence of partial unions of the open sets? Isn't the union of infinitely many open sets equal to the "limit" of the partial unions? Is there an alternate interpretation of arbitrary unions which doesn't require us to define a topology on the power set of the set of interest first? AI: No, no sequence or limit process is involved. This is a purely set-theoretic concept. If you have a collection of sets $\mathcal{F}$ in a universe of discourse $\Omega$, you define $$\bigcup \mathcal{F} = \{x\in\Omega| \exists F\in \mathcal{F}\; {\rm with}\; x\in F\}.$$ It's just an existential quantifier. No order or structure is involved.
H: Lebesgue integral (existence and finiteness) of $\sin(1/x^2)$ I think I am getting a little better at these MCT, DCT-type exercises. The issue is to show/prove the existence and finiteness (if they apply) to the following function: $$f(x)=\sin \left(\dfrac1{x^2} \right)$$ Where applicable I want to show as rigorously as possible the justification for existence, etc. For example, it is not enough to say simply, "this exists by MCT"; rather, I need to show that the conditions of measurability, monotone-increasing, etc. are met. Work/attempt at a solution: First, although I have a limited experience with this type of integral, I believe the function can be re-written as follows: $$f(x)=\sum_{n=0}^{\infty}\frac{(-1)^n}{x^{2(2n+1)}(2n+1)!}$$ If I have this correctly (I attempted to extrapolate from an integral table I found regarding $\sin\left(\dfrac1x \right)$, but please correct me if I am wrong), I can utilize Dominated Convergence to show the integral exists. That is, If $f_1,f_2,...$ are measurable functions and $|f_n|\le g$ for $\mu$-integrable function, $g$, and if $f_n\to f$, $\mu$-a.e, then $\lim_{n\to\infty}\int f_n d\mu\to\int fd\mu$. First, I need to show measurability. I believe that continuity on the interval in question, $(0,\infty)$, is sufficient to show measurability, correct? Next, define $$S_n(x):=\sum_{n=0}^{t}\frac{(-1)^n}{x^{2(2n+1)}(2n+1)!}\to f(x)=\sum_{n=0}^{\infty}\frac{(-1)^n}{x^{2(2n+1)}(2n+1)!}$$ Then, define $g(x)=\dfrac1x\ge |f_n|,\forall x>0$ and therefore conclude by Dominated Convergence Theorem, that $\displaystyle \int_{(0,\infty)}\sin \left(\dfrac1{x^2} \right)d\mu$ exists. Please comment, add answers, critique, tell me where I made mistakes, etc. I am learning this on my own so any and all improvements are welcome. From here I need to show whether this integral is finite or infinite. Is there a way to show this without an explicit calculation of the integral? AI: $\sin(1/x^2)$ is continuous on $(0,\infty)$ and hence measurable. Split your domain $(0,\infty)$ into $(0,1] \cup (1,\infty)$. On $(0,1]$, bound $\lvert\sin(1/x^2)\rvert$ by $1$, and on $(1,\infty)$, bound $\lvert\sin(1/x^2)\rvert$ by $1/x^2$ (using $\lvert\sin u\rvert\le u$ for $u\ge 0$). Both bounds have finite integral ($1$ on a set of measure $1$, and $\int_1^\infty x^{-2}\,dx=1$), so the integral exists and is finite.
H: Derivative of matrix exponential away from the identity Suppose $X$ and $Y$ are $n\times n$ matrices (assume they are symmetric if you want), is there a reasonable formula for $$ Z=\frac{d}{dt}|_{t=0} e^{X+tY}? $$ Obviously if we assume that $[X,Y]=0$ then there is a simple answer. I assume if there is a reasonable closed form then it should follow from the Baker-Campbell-Hausdorff identity but I am having a bit of a hard time at it. AI: For each $k\in\mathbb{N}$, let $X_k$ be the set of all non-commutative monomials of degree $k$ on two variables. Write $p_{0,k}$ for the monomial $p_{0,k}(x,y)=x^k$. $$ \frac{e^{X+tY}-e^X}t=\sum_{k=0}^\infty\frac{(X+tY)^k-X^k}{tk!}=\sum_{k=1}^\infty\frac{\sum_{p\in X_k}p(X,tY)-X^k}{tk!} =\sum_{k=1}^\infty\sum_{p\in X_k\setminus\{p_{0,k}\}}\frac{p(X,tY)}{tk!} $$ The only monomials in $X_k$ that will survive after $t\to0$ are those where there is only one $tY$, i.e. $p_{1,k}(x,y)=yx^{k-1}$, $p_{2,k}(x,y)=xyx^{k-2}$, etc. So $$ \lim_{t\to0}\frac{e^{X+tY}-e^X}t=\lim_{t\to0}\sum_{k=1}^\infty\sum_{p\in X_k\setminus\{p_{0,k}\}}\frac{p(X,tY)}{tk!}=\lim_{t\to0}\sum_{k=1}^\infty\sum_{p\in \{p_{1,k},\ldots,p_{k,k}\}}\frac{p(X,tY)}{tk!}\\ =\sum_{k=1}^\infty\frac{YX^{k-1}+XYX^{k-2}+X^2YX^{k-3}+\cdots+X^{k-1}Y}{k!} =\sum_{m=0}^\infty\sum_{n=0}^\infty\frac{X^mYX^n}{(m+n+1)!} $$ (Edit: thanks to Robert for the suggestion for a nicer expression)
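The double-series formula is easy to check numerically against a central finite difference of `scipy.linalg.expm`; the matrices, truncation order, and step size below are arbitrary choices for the sketch:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((2, 4, 4))

# Central finite difference of t -> e^{X+tY} at t = 0
h = 1e-6
fd = (expm(X + h*Y) - expm(X - h*Y)) / (2*h)

# Truncation of the double series sum_{m,n>=0} X^m Y X^n / (m+n+1)!
Xp = [np.linalg.matrix_power(X, j) for j in range(30)]
Z = np.zeros_like(X)
for m in range(30):
    for n in range(30 - m):
        Z += Xp[m] @ Y @ Xp[n] / factorial(m + n + 1)

print(np.max(np.abs(fd - Z)))   # small (~1e-8): limited by the difference step
```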
H: Rate and Distance Question Calc How do I find the rate at which the distance from the plane to the station is increasing when it is 4 mi away from the station. A plane flying horizontally at an altitude of 3 mi and a speed of 460 mi/h passes directly over a radar station AI: This is a related rates question. The general strategy: Draw a picture. Invariably useful with these sorts of problems. In the picture, label every quantity that is fixed, and every quantity that is changing with a name. Write down the information you are given. Identify the rates you are told something about, and the rate you want to know something about. Find an equation that relates the quantities whose rates you are being asked about. A relation among the quantities, not the rates. Differentiate the equation you found in the previous step; this will give you an equation that relates the quantities and their rates of change. Plug in all the information you have. Solve for the information you want to know. Here, after you draw the picture, you'll see that it makes sense to think of the radar station as the origin, and the plane as flying on the line $y=3$. Let $p(t)$ be the horizontal position of the plane at time $t$ (so that the plane will be at the point $(p(t),3)$). Let $D(t)$ be the distance from the plane to the radar station. You are told how $p(t)$ is changing: that is, you are given information about $\frac{dp}{dt}$. You are being asked about how the distance is changing; that is, you are being asked to find $$\frac{dD}{dt}\Bigl|_{D=4}$$ (Actually, it's unclear if you want the derivative when $D$, the straight line from the plane to the radar, is $4$, or if you want it when $p(t)$ is $4$; I think it is the former, though). So you want to find some equation that relates $p$ and $D$. After you do that, taking the derivative of the equation with respect to $t$ will give you an equation that relates $p$, $D$, $\frac{dp}{dt}$, and $\frac{dD}{dt}$. You know the value of $\frac{dp}{dt}$, and you know the value of $D$ you want. So you should figure out what $p$ is for that $D$ (if necessary), plug everything in, and solve for $\frac{dD}{dt}$.
H: Solving Linear Systems with Singular Matrices Good morning! For (say, homogenous) linear systems of the form $$x_{n+1} = A x_n,$$ where $A$ is a nonsingular matrix, each initial value problem can be solved by the method of finding a general solution by means of eigenvalues of $A$. However, for singular matrices, this method need not to be successful for all initial value problems (because of zero eigenvalues) and I was unable to find references for such case. So my question is, is there any general method of solving such systems for singular matrices? Thank you in advance. AI: The trouble with your method is not when $A$ is singular, it's when $A$ is not diagonalizable. The solution of the initial value problem $x_{n+1} = A x_n$, $x_0$ given, is $x_n = A^n x_0$. Now we can write $A = S^{-1} J S$ where $S$ is invertible and $J$ is in Jordan canonical form, and so $x_n = S^{-1} J^n S x_0$. For a $d \times d$ Jordan block $$ J = \pmatrix{\lambda & 1 & 0 & \ldots & 0 & 0\cr 0 & \lambda & 1 & \ldots & 0 & 0\cr \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \cr 0 & 0 & 0 & \ldots & \lambda & 1\cr 0 & 0 & 0 & \ldots & 0 & \lambda\cr}$$ $$ J^n = \pmatrix{ \lambda^n & {n \choose 1} \lambda^{n-1} & {n \choose 2} \lambda^{n-2} & \ldots & {n \choose {d-2}} \lambda^{n-d+2} & {n \choose {d-1}} \lambda^{n-d+1}\cr 0 & \lambda^n & {n \choose 1} \lambda^{n-1} & \ldots & {n \choose {d-3}} \lambda^{n-d+3} & {n \choose {d-2}} \lambda^{n-d+2}\cr \ldots & \ldots & \ldots & \ldots & \ldots & \ldots \cr 0 & 0 & 0 & \ldots & \lambda^n & {n \choose 1} \lambda^{n-1}\cr 0 & 0 & 0 & \ldots & 0 & \lambda^n\cr}$$ where ${n \choose j} \lambda^{n-j} = 0$ when $n < j$.
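A small NumPy illustration with a defective (and singular) matrix: $x_n = A^nx_0$ is computed directly, and the output shows the nilpotent Jordan block dying out after $n\ge 2$, exactly as the binomial formula for $J^n$ predicts:

```python
import numpy as np

# A defective (non-diagonalizable) singular matrix: a 2x2 Jordan block
# with eigenvalue 0, plus an eigenvalue 2. Pure eigenvector methods fail here.
A = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 2.]])
x0 = np.array([1., 1., 1.])

# x_n = A^n x0 works regardless of singularity or diagonalizability.
for n in range(4):
    print(n, np.linalg.matrix_power(A, n) @ x0)
# Output: [1 1 1], [1 0 2], [0 0 4], [0 0 8] -- the nilpotent block vanishes
# for n >= 2 (here {n choose 1} 0^{n-1} = 0), only the eigenvalue-2 mode survives.
```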
H: How to determine if 2 points are on opposite sides of a line How can I determine whether the 2 points $(a_x, a_y)$ and $(b_x, b_y)$ are on opposite sides of the line $(x_1,y_1)\to(x_2,y_2)$? AI: Explicitly, they are on opposite sides iff $$((y_1-y_2)(a_x-x_1)+(x_2-x_1)(a_y-y_1))((y_1-y_2)(b_x-x_1)+(x_2-x_1)(b_y-y_1)) < 0.$$
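As a direct translation of the sign test into code (a sketch; the function and variable names are mine):

```python
def opposite_sides(ax, ay, bx, by, x1, y1, x2, y2):
    """True if (ax, ay) and (bx, by) lie strictly on opposite sides
    of the line through (x1, y1) and (x2, y2)."""
    sa = (y1 - y2) * (ax - x1) + (x2 - x1) * (ay - y1)
    sb = (y1 - y2) * (bx - x1) + (x2 - x1) * (by - y1)
    return sa * sb < 0

# Line y = x through (0,0) and (2,2): (0,1) lies above it, (1,0) below.
print(opposite_sides(0, 1, 1, 0, 0, 0, 2, 2))  # True
print(opposite_sides(0, 1, 2, 3, 0, 0, 2, 2))  # False (same side)
```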
H: A fifth degree polynomial $P(x)$ with integral coefficients. A fifth degree polynomial $P(x)$ with integral coefficients takes on values $0,1,2,3,4$ at $x=0,1,2,3,4$, respectively. Which of the following is a possible value for $P(5)$? A) $5$ B) $24$ C) $125$ D)None of the above AI: Any polynomial of degree at most five, fulfilling the given interpolation conditions, has the form \[ P(x) = x + a\prod_{i=0}^4 (x-i) \] for some $a \in \mathbb Z$ (we can see this if we write $P$ in the Newton basis, for example). So $P(5) = 5 + 5!a = 5 + 120a$. Now A) corresponds to $a = 0$, B) to $a = \frac{19}{120}$ and C) to $a=1$. As we want $P$ to be of fifth degree, we need $a \ne 0$, and as $\frac{19}{120} \not\in\mathbb Z$, the correct answer is C), giving $P(x) = x + \prod_{i=0}^4(x-i)$.
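A quick SymPy check of the accepted choice $a=1$ (answer C):

```python
from sympy import symbols, expand

x = symbols('x')
a = 1                                    # the only integer value giving degree 5
prod5 = (x - 0)*(x - 1)*(x - 2)*(x - 3)*(x - 4)
P = expand(x + a * prod5)

print([P.subs(x, i) for i in range(5)])  # [0, 1, 2, 3, 4]: interpolation holds
print(P.subs(x, 5))                      # 125 = 5 + 120*a with a = 1
```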
H: If $f \circ g$ is invertible, is $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$? If $f \circ g$ is invertible, is $(f \circ g)^{-1} = g^{-1} \circ f^{-1}$? If not can someone give me a counterexample? AI: It is true if $f$ and $g$ are invertible. You can check this directly by using the definition of inverse function. But neither $f$ nor $g$ need to be invertible for $f\circ g$ to be invertible. To find a counterexample, I suggest looking for a noninjective $f$ and a nonsurjective $g$.
H: Solve functional equation $f(x_1 x_2) = g_1(x_1) g_2(x_2)$ Let $x_1$ and $x_2$ be real positive numbers. The problem is to find all possible triples of $f$,$g_1$,$g_2$ such that $f(x_1 x_2) = g_1(x_1) g_2(x_2)$. I suspect that the only one solution is $f(x_1 x_2) = (x_1 x_2)^n$, $g_1(x_1) = x_1^n$, $g_2(x_2) = x_2^n$ where $n$ is some power. But I can't provide the proof that there are no other solutions. AI: First reduce to a single unknown function: setting $x_2=1$ gives $f(a)=g_1(a)g_2(1)$, and setting $x_1=1$ gives $f(a)=g_1(1)g_2(a)$, so (apart from the degenerate case in which $f\equiv 0$ and one of $g_1,g_2$ is identically zero while the other is arbitrary) $g_1$ and $g_2$ are nonzero constant multiples of the same function; absorbing the constants, we may take $g_1=g_2=g$. Note that $g(a)g(b) = f(ab) = g(1) g(ab)$ for every $a,b$. We could have $g(1) = 0$, in which case we have the trivial solution $f(x)=g(x) = 0$ for all $x$. Otherwise, $h(x) = g(x)/g(1)$ satisfies $h(a) h(b) = h(ab)$ and $h(1) = 1$. Thus $h$ is a homomorphism of the multiplicative group of positive reals into the nonzero complex numbers. Besides the solutions $h(x) = x^p$ (for arbitrary complex $p$), corresponding to $g(x) = C x^p$ and $f(x) = C^2 x^p$, there are exotic non-measurable solutions.
H: Entropy expression optimization with Langrange multipliers I have recently encountered variants of the following expression: \begin{equation} S = H(a,b,c,d)-H(a+b,c+d) \end{equation} where $H$ is the Shannon entropy function, that is $H(X)=\sum_{x\in X}-x\log x$. And restrictions: \begin{eqnarray} a + b &=& t\\ a + c &=& t\\ a + b +c + d &=& 1 \end{eqnarray} for some given $t$. It is supposed to be easy to maximize this expression with Lagrange multipliers but I am unfamiliar with them so any hint in how $S$ can be optimized would be welcome. AI: Presumably $a,b,c,d > 0$ are required. In order for this to be possible, $0 < t < 1$. You should actually check the limits as one or more of $a,b,c,d$ go to $0$, but I will not do so. Putting in a Lagrange multiplier $\lambda_i$ for each constraint, the Lagrangian is $$L = -a\ln \left( a \right) -b\ln \left( b \right) -c\ln \left( c \right) -d\ln \left( d \right) + \left( a+b \right) \ln \left( a+b \right) + \left( c+d \right) \ln \left( c+d \right) +\lambda_{{1}} \left( a+b-t \right) +\lambda_{{2}} \left( a+c-t \right) +\lambda_{{3 }} \left( a+b+c+d-1 \right)$$ Now solve the system of equations obtained by setting to $0$ the derivative of $L$ with respect to each of the variables $a,b,c,d,\lambda_1, \lambda_2,\lambda_3$: $$ \eqalign{ a+b-t &= 0\cr a+c-t &= 0\cr a+b+c+d-1 &= 0\cr -\ln(d)+\ln(c+d)+\lambda_3 &=0\cr -\ln(b)+\ln(a+b)+\lambda_1+\lambda_3 &= 0\cr -\ln(c)+\ln(c+d)+\lambda_2+\lambda_3 &=0\cr -\ln(a)+\ln(a+b)+\lambda_1+\lambda_2+\lambda_3 &= 0\cr}$$ obtaining $$ a={t}^{2},b=c=t-{t}^{2}, d=(t-1)^2,\lambda_1 = 0,\lambda_{{2 }}=\ln(t/(1-t)) , \lambda_{{3}}=\ln (1-t) $$
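A numeric cross-check, not a substitute for the Lagrange computation: the three constraints leave a single free parameter $a\in(\max(0,2t-1),t)$, so one can scan $S$ along it and confirm the maximizer matches $a=t^2$ (the value $t=0.3$ below is an arbitrary choice):

```python
import numpy as np

t = 0.3

def S(a):
    """S = H(a,b,c,d) - H(a+b, c+d) on the one-parameter feasible family."""
    b = c = t - a
    d = 1 - 2*t + a
    p4 = np.array([a, b, c, d])
    p2 = np.array([a + b, c + d])
    return -(p4 * np.log(p4)).sum() + (p2 * np.log(p2)).sum()

grid = np.linspace(1e-4, t - 1e-4, 10001)
best = grid[np.argmax([S(a) for a in grid])]
print(best, t**2)   # both ~0.09: the maximizer is a = t^2, as solved above
```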
H: Cardinality of Mappings on a 2-Sphere I am wondering about the number of mappings from a point on a sphere to a neighboring point, and a not so neighboring point. If I take a 2-sphere, and place it on some $x,y,z$-axis and fix those so that the center is at the origin, then I draw a point on the surface close, but some finite distance, from one of the axes, say the $x$, are there distinct cardinalities associated with the sets of change of basis mappings from that point to the point on the sphere on the $x$-axis than there are mappings to, say, the point on the sphere on the $y$-axis? Another way, are there more rotations associated to either set of rotations from that point to putting it on the $x$-axis than on the $y$ just because of its relative location? Thanks in advance, AI: If I understand correctly there is no "change of basis" occurring here at all, but you are asking about rotations moving some given point $P\in S^2$ to some other given point $Q\in S^2$, where $Q$ happens to lie on one of the coordinate axes. Now in any case there is an infinity of such rotations: If $P\ne Q$ then any plane $\pi$ through $P$ and $Q$ intersects $S^2$ in a circle. Let $M_\pi$ be the center of this circle. There is a rotation with axis $a:=O\vee M_\pi$ mapping $P$ onto $Q$.
H: Existence of $n^{th}$ root Disclaimer : This is homework (Tagged) I am trying to prove that Real numbers have nth roots. Most references do the following : Let $a \in \mathbb{R}, n \in \mathbb{N}, n > 0$ Then we consider a set X = {$t | t^n< a$}. Then we prove that X is nonempty (since it contains 0) and that it is bounded above ($a+1$ is an upper bound). Then through the property of Real numbers, this set will have a supremum. Let that supremum be f. Most books now deal with 2 cases : $f^n<a , f^n>a$ and show a contradiction thus proving $f^n = a$ and I am comfortable with this way of doing it. I have finished my homework. Books closed. EOF. However, is there any way to prove this without the use of contradiction? AI: The idea: Show the Newton-Raphson iteration scheme to find the positive root of $f(x)= x^n-a $ converges, where we fix $n\geq 2, a>0.$ This is sufficient to prove existence, and uniqueness follows from $0<x<y \implies 0< x^n < y^n.$ Newton-Raphson: $$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = \frac{1}{n} \left( (n-1)x_k+ \frac{a}{x_k^{n-1}} \right) .$$ Pick $x_0=a.$ If $x_k\to x$ then taking limits of the recurrence and rearranging gives $x^n=a$ as we hope. We show $(x_k)$ is positive and monotonically decreasing for $k\geq 1$, and thus convergent: By the Arithmetic-Geometric mean inequality, $$ x^n_{k+1} = \frac{1}{n^n} \left( (n-1)x_k+ \frac{a}{x_k^{n-1}} \right)^n = \left( \frac{ x_k + x_k + \cdots x_k + \frac{a}{x_k^{n-1}} }{n} \right)^n \geq x_k^{n-1} \frac{a}{x_k^{n-1}} = a$$ for all $k\geq 0.$ Hence $a/x_k^n\leq 1$ for all $k\geq 1$ so using the recurrence gives: $$nx_{k+1}=(n-1)x_k + \frac{a}{x_k^{n-1}} = x_k\left( n-1 + \frac{a}{x_k^n} \right) \leq x_k(n-1+1) = nx_k$$ so the sequence is decreasing as required. Simple introduction to Newton-Raphson: The aim is to approximate the root of a function. To do so, take a decently close estimate of the root $x_0.$ Draw the graph of $y=f(x)$ with some root, and mark the point $(x_0, f(x_0))$ which is somewhat nearby. Draw the tangent to the curve at that point, notice that the $x$-intercept of this tangent is a better approximation to the root than $x_0$ was, call it $x_1.$ Repeat this, and in the limit (in "ideal" situations) $x_n$ tends to the root. Simple analytic geometry yields $x_{k+1}$ in terms of $x_k, f(x_k), f'(x_k)$ and we get the Newton-Raphson recurrence. This is often a very quickly converging method to approximate roots of functions (each iteration doubles the number of accurate decimal places), but it can fail if your initial approximation is too far away, or too close to another root, or if $f'(x_k)=0$ (so that the tangent is horizontal and never intercepts the $x$-axis). For this problem I showed that by choosing $x_0=a$ this method ends up converging (which is something you should strongly suspect to happen by sketching the graph and a few of the tangents), to something that must be a root of $x^n-a.$
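A direct implementation of the iteration from the proof (a sketch: the scheme and the starting point $x_0=a$ are as above; the stopping rule is my choice):

```python
def nth_root(a, n, tol=1e-15):
    """Newton's method for the positive n-th root of a > 0, as in the proof:
    x_{k+1} = ((n-1)*x_k + a/x_k**(n-1)) / n, starting from x_0 = a."""
    x = a
    while True:
        x_next = ((n - 1) * x + a / x**(n - 1)) / n
        if abs(x_next - x) < tol * x:   # relative convergence test
            return x_next
        x = x_next

print(nth_root(2.0, 2))   # 1.4142135623730951 ~ sqrt(2)
print(nth_root(10.0, 5))  # 1.5848931924611136 ~ 10**(1/5)
```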
H: Sum of Absolute Values I was going over a question and I wanted your opinion(s) on it: The product of two numbers is 6 and one of the numbers is 5 less than the other. What is the absolute value of the sum of the two numbers? The numbers I got are 1 and -6 because $b(b+5)=6$. Now the sum of absolute values would be $|1-6|$, right, so 5? According to the book the answer is 7. Apparently the book expects me to do $|1| + |-6| = 7$. Am I right and the answer in the book is wrong? Edit: It seems my numbers are wrong, here is how I got them. Please correct me if I am wrong. $$(b+5)(b) = 6$$ so I get $$b^2 + 5b -6 = 0 $$ $b= 1$ and $b=-6$ by solving the quadratic equation. AI: Let $x$ be the smaller of the numbers, so the other is $x+5$. We have the equation $$ x(x+5)=6. $$ Solve for $x$ and check that $|x+(x+5)|=7$ no matter which solution of the quadratic you use as $x$.
H: $\frac{(a^2+b^2)}{(1+ab)}$ must be a perfect square if it is an integer Possible Duplicate: Alternative proof that $(a^2+b^2)/(ab+1)$ is a square when it's an integer I came across this problem, but couldn't solve it. Let $a,b>0$ be two integers such that $(1+ab)\mid (a^2+b^2)$. Show that the integer $\frac{(a^2+b^2)}{(1+ab)}$ must be a perfect square. It's a double star problem in Number theory (by Niven). Thanks in advance. AI: This was Problem 6 at the 1988 IMO (International Mathematical Olympiad), famous for its difficulty; only a few contestants solved it in full. The technique that handles this and similar problems is now known as Vieta jumping; here is a link: http://www.georgmohr.dk/tr/tr09taltvieta.pdf
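A brute-force check of the statement over a small range (the bound 400 is an arbitrary choice); it also shows how sparse the integer quotients are, e.g. $(2,8)$ gives $68/17=4=2^2$:

```python
from math import isqrt

# Whenever (a^2 + b^2)/(1 + ab) is an integer, verify it is a perfect square.
for a in range(1, 400):
    for b in range(a, 400):
        q, r = divmod(a*a + b*b, 1 + a*b)
        if r == 0:
            assert isqrt(q)**2 == q, (a, b, q)
print("all integer quotients in range are perfect squares")
```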
H: What's the sum of the lengths of all the sides of a triangle? You are given triangles with integer sides and one angle fixed at 120 degrees. If the length of the longest side is 28 and the product of the remaining two sides is 240, what is the sum of all sides of the triangle? I have tried to solve it using the formula given in the following link about integer triangles with a 120° angle ("Integer triangles with a 120° angle can be generated by …") AI: Since the longest side must be opposite the largest angle in the triangle, the side which is 28 units is opposite the angle of degree measure of 120. Let us name the remaining sides $a$ and $b$. Using the cosine rule we get: $$28^2=a^2+b^2-2ab\cos(120)$$ $$784=a^2+b^2-2ab\left(\frac{-1}{2}\right)$$ $$784=a^2+b^2+ab$$ Since $ab=240$ $$a^2+b^2=544$$ We thus have the following system of equations to solve: $$\begin{align*}a^2+b^2&=544 \\ ab&=240\end{align*}$$ Solving for $b$ in the latter equation $$b=\frac{240}{a}$$ Plugging into the first one gives: $$a^2+\left(\frac{240}{a}\right)^2=544$$ Multiplying both sides by $a^2$ and letting $u=a^2$ $$u^2-544u+240^2=0$$ Using the quadratic formula $$\begin{align*}u&=\frac{544\pm\sqrt{(-544)^2-4\cdot240^2}}{2} \\ u&=\frac{544\pm256}{2} \\ u&=144 \text{ or }400\end{align*}$$ But since $u=a^2$, $$a^2=144 \text{ or } a^2=400$$ $$a=12 \text{ or } a=20$$ Using $b=\dfrac{240}{a}$ we thus get $a=12$ and $b=20$ or $a=20$ and $b=12$. In either case the sum of the sides is 60. Edit: lhf pointed out a much quicker alternative: When we came to the following stage $$784=a^2+b^2+ab$$ Rather than solve for $a$ and $b$ etc. simply add $ab$ to both sides to get $$784+ab=a^2+b^2+2ab$$ Since $a^2+b^2+2ab$ is a perfect square and $ab=240$ $$1024=(a+b)^2$$ $$a+b=32$$ Therefore the sum of all the sides is $a+b+28=32+28=60$
H: Integration Example How can I find the integral in this example? $$\int \frac{\sin x}{\sin x - \cos x } dx$$ I tried first adding cos and then subtracting cos, but then what about $$\int \frac{\cos x}{\sin x - \cos x } dx\ ?$$ AI: Multiply and divide the numerator by $2$ to get $2\sin(x)$, and write it as $(\sin(x)+\cos(x)) + (\sin(x)-\cos(x))$; then divide each term by the denominator. The second term becomes $1$, which integrates to $x$. For the first term, substitute $z=\sin(x)-\cos(x)$; then $(\sin(x)+\cos(x))\,dx$ becomes $dz$, and the first integral looks like $\int \frac{dz}{z}$, which is $\ln|z|+C$. Altogether (remembering the factor $\frac12$ from the first step), $$\int \frac{\sin x}{\sin x - \cos x}\,dx = \frac{1}{2}\Big(x + \ln\lvert\sin x - \cos x\rvert\Big) + C.$$
H: Example of limit How can I find the limit in this example? $$ \lim_{x\to 0}\frac{\sin^{-1}x-2x}{\sin^{-1}x+2\sin(\frac{1}{2}\sin^{-1}x)[3-4\sin^2(\frac{1}{2}\sin^{-1}x)] } $$ I tried using L'Hospital but it was very long. Is there any easy trick? Please give me some hint. AI: What I usually do in such cases: since $\lim\limits_{x \to 0} \frac{\sin^{-1}x}{x}=1$, for $x\to0$ we have $\sin^{-1}x \approx x$; hence substitute $x$ for $\sin^{-1}x$, and similarly for the other trigonometric functions' approximations. Your problem becomes $\lim\limits_{x\to0} \frac{-x}{x+x(3-x^2)}$ (here I have also substituted $\sin u\approx u$, which is valid as $u \to 0$); now simply cancel the factor $x$ (or apply L'Hôpital's rule), which evaluates the limit to $-1/4$.
H: Must a function that 'preserves r.e.-ness' be computable itself? Does there exist a non-recursive function (say, from naturals to naturals) such that the inverse of every r.e. set is r.e.? If yes, how to construct one? If no, how to prove that? Any References? AI: For any sequence $\mathcal{B} = \langle B_i : i \in \mathbb{N}\rangle$ of subsets of $\mathbb{N}$, there is a $\mathcal{B}$-cohesive set $A$: an infinite set such that, for any $B_i \in \mathcal{B}$, either there are only finitely many elements of $A$ in $B_i$, or all but finitely many elements of $A$ are in $B_i$. The set $A$ is called "cohesive" because, although it is infinite, none of the sets $B_i$ is able to split it into two infinite pieces. Apply that fact with $\mathcal{B}$ containing all the r.e. sets to obtain a cohesive set $A$. Let $f$ be the function that enumerates $A$ in increasing order. First, $A$ is not computable, or even r.e., because if $A$ was r.e. then we could use that enumeration to effectively enumerate "every other element" of $A$, which would give us an r.e. set that splits $A$ into two infinite pieces, which is impossible. Moreover, because $A$ is not computable, $f$ is not computable - if we could compute $f$ then we could enumerate $A$ in increasing order, which would imply $A$ is computable. Now let $B$ be any r.e. set. Then, because $A$ is cohesive for r.e. sets, one of two things must happen: All but finitely many elements of $A$ are in $B$. In this case $f^{-1}(B)$ is cofinite, and hence is computable. Only finitely many elements of $A$ are in $B$. In this case, $f^{-1}(B)$ is a finite set, and is thus computable. Either way $f^{-1}(B)$ is not only r.e., it is computable.
H: the meaning of "finite" in the finite covering theorem A textbook I am using to learn analysis states (in reference to just the real line): Every system of open intervals covering a closed interval contains a finite subsystem that covers the closed interval. (the textbook is "Mathematical Analysis I" by V.A. Zorich) Let's say $S$ = {$U_n$} is the system of open intervals $U_n$ in question, which covers the closed interval $I$. Now, if we take all $U_n$ to be infinitely small in length, but take $S$ to be infinite in cardinality such that all $U_n$ still cover $I$ (does this even make sense?), wouldn't it be impossible to select a finite subsystem of $S$ that covers $I$? I feel that maybe I am missing an elementary but important distinction in my idea of "infinite" AI: Your mistake lies in the assumption that you can choose "$U_n$ infinitely small in length". For each $n$ separately the set $U_n$ is an open set. An open set contains an open interval and an open interval has a positive measure (the notion of "length" doesn't really make sense for an arbitrary open set, but that is not the problem here). Obviously the open sets may become "smaller" (again, you have to be careful how you measure the size of an open set) if $n$ varies, but that's a different story. Btw: The property you quote is the definition of compactness. For subsets of the real numbers we have: bounded and closed if and only if compact. Note that the textbook presumably assumes that the interval is bounded.
H: Calc Rate of Change Question I think I have an interesting calc question here but I'm not sure how to solve it. Can someone perhaps give me a helping hand or guide me through the steps? A balloon that takes images of the earth is shot up in the sky with rockets from 0 ft off the ground; its height is given by the function $s(t)= -18t^2+120t$. a) Find the velocity after 2 and 4 seconds as it approaches space. b) When does the balloon reach full altitude? c) When does it touch back down to earth? AI: Well, the velocity of an object is the derivative of its position. So, find the derivative of your initial function. $$\ s(t) = -18t^2 + 120t $$ With a simple application of the power rule, we arrive at $$\ s'(t) = -36t + 120 $$ $$or~ v(t) = -36t+120$$ Now, you have a function that will give you the balloon's velocity at any time $t$, since the derivative is the instantaneous rate of change (a.k.a. velocity).
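A SymPy sketch completing parts (a)–(c) with the function above (units as given in the problem, feet and seconds):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
s = -18*t**2 + 120*t
v = sp.diff(s, t)                 # v(t) = s'(t) = -36t + 120

print(v.subs(t, 2), v.subs(t, 4)) # (a) 48 and -24 ft/s (already descending at t=4)
print(sp.solve(sp.Eq(v, 0), t))   # (b) peak altitude when v = 0: t = 10/3 s
print(sp.solve(sp.Eq(s, 0), t))   # (c) back on the ground: t = 0 and t = 20/3 s
```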
H: A number has 101 composite factors. A number has 101 composite factors. What is the maximum number of prime factors such a number could have? AI: Suppose $m = p_1^{a_1} ... p_n^{a_n}$ has exactly $101$ composite factors. Then $101 + (1+n) = (a_1 + 1)(a_2+1) ... (a_n+1)$. But the RHS is at least $2^n$ and it is easily checked that the inequality: $102 + n \geq 2^n$ fails for $n \geq 7$. So there can be at most $6$ primes in the factorisation of $m$. We now try to decompose the numbers $101 + (1+n)$ into a product of exactly $n$ integers for $n=1,2,3,4,5,6$, in order to see whether the $a_i$ can actually exist in each case. We see that: $108 = 2^2 \times 3^3$ $107$ is prime $106 = 2\times 53$ meaning that the cases for $n=4,5,6$ cannot work. However the number $105 = 3\times 5\times 7$ does have such a representation as a product of three numbers. Hence the biggest number of primes you may have in $m$ is $3$ in order to have exactly 101 composite factors. Such a number is given by $m = p_1^2 p_2^4 p_3^6$ for any three different primes you wish. As an aside, all such numbers $m$ must be of one of the following forms: $p_1^2 p_2^4 p_3^6$ $p_1^7 p_2^{12}$ $p_1^3 p_2^{25}$ $p_1 p_2^{51}$ $p_1^{102}$ Where $p_1,p_2,p_3$ are distinct primes.
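The counting argument is easy to verify with SymPy's `factorint`: for each claimed form, the number of composite divisors (all divisors minus the $n$ primes minus the divisor $1$) comes out to exactly $101$:

```python
from sympy import factorint

def composite_divisor_count(m):
    """Number of divisors of m that are composite (not 1, not prime)."""
    exps = factorint(m).values()
    total = 1
    for a in exps:
        total *= a + 1                 # total number of divisors
    return total - len(exps) - 1       # drop the prime divisors and 1

# One witness of each of three claimed forms:
print(composite_divisor_count(2**2 * 3**4 * 5**6))   # 101 (p1^2 p2^4 p3^6)
print(composite_divisor_count(2**7 * 3**12))         # 101 (p1^7 p2^12)
print(composite_divisor_count(2**102))               # 101 (p1^102)
```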
H: What is the shortest sequence that contains every permutation of $1..n$? Possible Duplicate: What is the shortest string that contains all permutations of an alphabet? How can one create a list of numbers so that by taking $n$ consecutive elements from that list, it is possible to get every permutation of numbers from 1 to $n$? I'll explain myself: The shortest list that contains every permutation of the numbers from 1 to 2 is: $$1, 2, 1$$ It contains (1, 2) and (2, 1). With numbers from 1 to 3, it would look like something like this: $$1, 2, 3, 1, 2, 1, 3, 2, 1$$ It contains (1, 2, 3), (1, 3, 2), (2, 1, 3), … Note: I'm not sure that this is the shortest list possible. Is there any way to find the smallest list for numbers from 1 to $n$? AI: Sequences of this kind are usually called superpermutations: strings over $\{1,\dots,n\}$ containing every permutation as a block of $n$ consecutive entries. (They are the permutation analogue of de Bruijn sequences, which instead contain every word of a given length over an alphabet.) Those search terms should find you what you want.
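A small checker (the helper function is mine) confirming that the two example lists in the question really contain all permutations as windows of $n$ consecutive elements:

```python
from itertools import permutations

def covers_all_perms(seq, n):
    """True if every permutation of 1..n appears among the windows of n
    consecutive elements of seq."""
    windows = {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
    return all(p in windows for p in permutations(range(1, n + 1)))

print(covers_all_perms([1, 2, 1], 2))                    # True
print(covers_all_perms([1, 2, 3, 1, 2, 1, 3, 2, 1], 3))  # True
```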
H: Fourier transformation of a test function Let $f \in C_c^\infty(\mathbb{R}^n)$. Then, $$\hat{f}(\xi)= (2\pi)^{\frac{-n}{2}}\int_{\mathbb{R^n}}\exp(-ix\xi)f(x){d}x$$ can be (i) analytically continued to an entire function and (ii) for $r>0$ there holds: $\hat{f}(\cdot + ib)$ is uniformly rapidly decreasing $\forall\; b\in B(0,r)$. Can somebody help me to see (i) and (ii)? Thanks. AI: (i) The integral makes sense for every $\xi\in\mathbb C$, since you are integrating a continuous function over a compact set. Since $\xi\mapsto \exp(-ix\xi)$ is holomorphic in $\mathbb C$, so is the integral. (Holomorphicity is preserved under all kinds of limits, including integration, which is also a kind of a limit.) (ii) If you integrate by parts, throwing derivative onto $f$, you get $\xi$ in denominator. You can do this as many times as you want.
H: $|G|=2^{k}m$ has a normal subgroup of index $2^k$ I am being involved with the following problem: A group $G$ of order $2^{k}m$ wherein $m$ is odd has a cyclic subgroup of order $2^k$. Prove that $G$ has a normal subgroup of index $2^k$. Honestly, I have one solution of this problem which is based on induction on $k$ and permutation groups. I am just asking if someone knows any other approaches. Thanks AI: Consider the regular representation $\rho_{G}$ of $G$. Let $S$ be a Sylow $2$-subgroup of $G,$ and suppose that $S = \langle s \rangle$ is cyclic. Then $\rho_{G}$ restricts to $S$ as a direct sum of $m$ copies of $\rho_{S}.$ Hence the eigenvalues of $\rho_{G}(s)$ all occur with multiplicity $m,$ and each $2^{k}$-th root of unity occurs as such an eigenvalue. Now the product of all complex $2^{k}$th roots of unity is $-1$, because all such roots except $1$ and $-1$ occur in complex conjugate pairs. Hence $\rho_{G}(s)$ has determinant $(-1)^{m} = -1$ as $m$ is odd. Now $g \to {\rm det}(\rho_{G}(g))$ is a $1$-dimensional representation of $G.$ If $g$ has odd order, then ${\rm det}(\rho_{G}(g)) =1,$ because all $|g|$-th roots of unity occur as eigenvalues with equal multiplicity, and all except 1 occur in complex conjugate pairs. Hence the image of this $1$-dimensional representation is $\{1,-1\}$ (I have skipped a few details here), and its kernel has index $2$. By induction, the kernel has a normal subgroup of index $2^{k-1}$ which is characteristic, so a normal subgroup of index $2^{k}$ of $G.$
H: Stumped: Trapezoid problem with law of sin/cos ... This one stumped me. Thought you might enjoy.. What are the lengths of the diagonals? It should be related to law of sine/cosine. (The answers are in fraction) AI: Hint: If you cut off a parallelogram from the left, you get a triangle with a base of 3 and a top angle of (what?). The law of sines then gives you the other two sides. Then restore the parallelogram, draw the diagonals, and use the law of cosines.
H: Proof that $\mathbb N $ is finite Obviously this is a false proof. It relies on Berry's paradox. Assume that $\mathbb{N}$ is infinite. Since there are only finitely many words in the English language, there are only finitely many numbers which can be described unambiguously in less than 15 words. Let $n$ be the smallest number which can't. Then $n$ can be described as "the smallest number which can be described unambiguously in less than 15 words". Contradiction. I know nothing of mathematical logic, but looking in a few books has told me that the problem here lies in the definition $n$ := "smallest number which can't be described unambiguously in less than 15 words". If this isn't a valid definition, then what exactly is a valid definition? AI: The problem is that you are using the concept of describability in your descriptions, and hence the definition is self-referential and paradoxical. It's much like if I tried to define: $$f(n) = \begin{cases}1 &\text{if }f(n)\text{ is even} \\ 2 &\text{if }f(n)\text{ is odd} \end{cases}$$ This is manifestly a nonsense definition, and the number which you propose to define is nonsense in much the same way: you define the interpretation of a string implicitly in terms of the interpretations of strings, and in such a way that you contradict your own definition. It is, of course, possible to use self-referential definitions in mathematics, but you need to do some extra work to ensure your definition is both complete and non-contradictory. For a more complete explanation of recursion, and when it does or does not work, you may be interested in this answer.
H: Reproducing Kernel of subspace of $L^2(0,1)$ Definition of the problem Let $\mathcal{H}$ be a Hilbert space which consists of functions defined on a set $S$. Let $k:S\times S \rightarrow \mathbb{K}$ be the reproducing kernel for $\mathcal{H}$. Now, let $\mathcal{H}$ be the two-dimensional subspace of $L^2(0,1)$, consisting of the functions $f(t)=a+bt,\ a,b\in \mathbb{K}$. I am asked to find the reproducing kernel for $\mathcal{H}$. My efforts I have to show that $f(t)=\left\langle f,k_{t}\right\rangle $. Given that $f(t)=a+bt$, we have to find $k_t$ such that $a+bt= \left\langle a+bt,k_{t}\right\rangle$. And in $L^2(0,1)$, we know that $\left\langle f,k_{t}\right\rangle =\int f\left(t\right)\overline{k_{t}}dt=f\left(t\right)$. My question How could I find from there the reproducing kernel of $\mathcal H$? Should I integrate over $dt$, and between $(0,1)$? Thank you, Franck! AI: First, the formula $a+bt= \left\langle a+bt,k_{t}\right\rangle$ is a mess: two different variables are denoted by the same letter $t$. It should be $a+bt= \int_0^1 (a+bs)\overline{k_{t}(s)}\,ds$. Now if this is to hold for all $a$ and for all $b$, then the coefficients of $a$ and $b$ must be the same on both sides: $1=\int_0^1 \overline{k_{t}(s)}\,ds$ and $t=\int_0^1 s\overline{k_{t}(s)}\,ds$. This gives you two equations for two unknowns in $k_t(s)=\alpha_t+\beta_t s$.
H: Induction for sum of Poisson distributed random variables Given the identically distributed and independent random variables $X_1,X_2,\ldots\sim\operatorname{Po}(\lambda)$ and $S_n=X_1+\ldots+X_n$ show with induction that $$\Pr[S_n=k]=\frac{(n\lambda)^k}{k!}e^{-n\lambda}.$$ So far for $n=1$ via the definition of the density function for Poisson:$$\Pr[S_1=k]=\Pr[X_1=k]=\frac{e^{-\lambda}\lambda^k}{k!}$$ With $n=2$ and independence: $$\Pr[X_1+X_2=k]=\sum\limits_{n=0}^k\Pr[X_1=n]\Pr[X_2=k-n]=\sum\limits_{n=0}^k\frac{\lambda^n}{n!}e^{-\lambda}\frac{\lambda^{k-n}}{(k-n)!}e^{-\lambda}$$$$=\frac{1}{k!}e^{-2\lambda}\sum\limits_{n=0}^k\binom{k}{n}\lambda^n\lambda^{k-n}=\frac{(2\lambda)^k}{k!}e^{-2\lambda}$$ I assume now that with $S_n\sim\operatorname{Po}(\lambda)$ every $X_i\sim\operatorname{Po}(\lambda/n)$ and every Poisson variable can obviously be split up, but how can I prove this? AI: Assume that the sum of $m$ independent Poisson with parameter $\lambda$ is Poisson with parameter $m\lambda$. We show that the sum of $m+1$ independent Poisson with parameter $\lambda$ is Poisson with parameter $(m+1)\lambda$. Your post basically does it: exactly the same reasoning that got you from $1$ to $2$ gets you from $m$ to $m+1$. Using almost exactly your notation, we find that $$\Pr(S_{m+1}=k)=\sum\limits_{n=0}^k\Pr(S_m=n)\Pr(X_{m+1}=k-n)=\sum\limits_{n=0}^k\frac{(m\lambda)^n}{n!}e^{-m\lambda}\frac{\lambda^{k-n}}{(k-n)!}e^{-\lambda}.$$ It follows that $$\Pr(S_{m+1}=k)=\frac{1}{k!}e^{-(m+1)\lambda}\sum\limits_{n=0}^k\binom{k}{n}(m\lambda)^n\lambda^{k-n}=\frac{((m+1)\lambda)^k}{k!}e^{-(m+1)\lambda}$$ The last equation is by the Binomial Theorem. For $$\sum_{n=0}^k \binom{k}{n}x^ny^{k-n}=(x+y)^k.$$ We are putting $x=m\lambda$ and $y=\lambda$, so $x+y=(m+1)\lambda$.
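As a quick numerical cross-check (a small Python sketch using SciPy; the truncation at $k$ is exact here because the Poisson pmf is supported on the nonnegative integers):

    from scipy.stats import poisson

    lam, n, k = 1.3, 4, 6
    pmf = [poisson.pmf(j, lam) for j in range(k + 1)]
    s = pmf[:]
    for _ in range(n - 1):                     # convolve n pmfs step by step
        s = [sum(s[m] * pmf[j - m] for m in range(j + 1)) for j in range(k + 1)]
    print(s[k])                                # P(S_4 = 6) by convolution
    print(poisson.pmf(k, n * lam))             # Poisson(n*lambda) pmf: same value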
H: Biased alternating random walk on a lattice in 1D Let's consider a random walk on a fixed lattice with step size 1 in 1 dimension. As a variation on the broadly discussed basic case, with probability $p$ the next step will be in the opposite direction of the previous step. The direction of the first step is also chosen randomly with the same probability $p$. In my opinion the result should be identical to an unbiased random walk regardless of $p$; however, experiments in Excel seem to suggest that there is a difference (the path looks more jagged, but this could be an artifact of the PRNG used). If there really is a difference I would like to know how to calculate the expected distance from the origin after $n$ steps and passing times. Edit: I had a bug in my formula; now $p$ changes the look of the graph. With $p = 0.5$ you have an unbiased random walk, with $p = 1$ it is completely predictable. Thus my question about traveled distance and passing times / passing probabilities still stands. AI: The paper Un principe d'invariance pour une classe de marches $p$-corrélées sur $\mathbb Z^d$ (in French) by Alexis Bienvenüe studies a $d$-dimensional model which contains yours (called $1$-dimensional Gillis' random walk) as a special case. (Beware that the integer $p$ in the title of the paper is not your parameter $p$.) Thus, one knows that the random walk is recurrent for every $p\ne0$ (your $p$) and that the position $X_n$ at time $n$ is such that $\frac{X_n}{\sqrt{n\sigma^2}}$ converges in distribution to a standard normal random variable, with $\sigma^2=\frac{1-p}p$.
H: Is there a difference between allowing only countable unions/intersections, and allowing arbitrary (possibly uncountable) unions/intersections? As in the title, I am asking if there is a difference between allowing set-theoretic operations over arbitrarily many sets, and restricting to only countably many sets. For example, the standard definition of a topology on a set $X$ requires that arbitrary unions of open sets are open. Do I lose anything significant if I restrict this to just unions of countably many (open) sets? I cannot come up with an example where it makes a difference. AI: Let $X$ be an uncountable set. Let \[ \tau = \{ O \subseteq X \mid O = X \text{ or } O \text{ is at most countable}\} \] Then $\tau$ contains $\emptyset$ and $X$, is closed under finite intersections and under countable unions. But it isn't a topology on $X$ as it isn't closed under arbitrary unions. So it makes a difference.
H: Direct limit of localizations of a ring at elements not in a prime ideal For a prime ideal $P$ of a commutative ring $A$, consider the direct limit of the family of localizations $A_f$ indexed by the set $A \setminus P$ with partial order $\le$ such that $f \le g$ iff $V(f) \subseteq V(g)$. (We have for such $f \le g$ a natural homomorphism $A_f \to A_g$.) I want to show that this direct limit, $\varinjlim_{f \not\in P} A_f$, is isomorphic to the localization $A_P$ of $A$ at $P$. For this I consider the homomorphism $\phi$ that maps an equivalence class $[(f, a/f^n)] \mapsto a/f^n$. (I denote elements of the disjoint union $\sqcup_{f \not\in P} A_f$ by tuples $(f, a/f^n)$.) Surjectivity is clear, because for any $a/s \in A_P$ with $s \not\in P$, we have the class $[(s, a/s)] \in \varinjlim_{f \not\in P} A_f$ whose image is $a/s$. For injectivity, suppose we have a class $[(f, a/f^n)]$ whose image $a/f^n = 0/1 \in A_P$. Then there exists $t \notin P$ such that $ta = 0$. We want to show that $[(f, a/f^n)] = [(f, 0/1)]$, which I believe is equivalent to finding a $g \notin P$ such that $V(f) \subseteq V(g)$ and $g^ka = 0$ for some $k \in \mathbb{N}$. Well, $t$ seems to almost work, but I couldn't prove that $V(f) \subseteq V(t)$, so maybe we need a different $g$? Or am I using the wrong map entirely? AI: If $a/f^n \in A_f$ is mapped to $0$ in $A_P,$ then there is a $g \not\in P$ such that $ga=0$; therefore $a/f^n=0 \in A_{gf}.$ Hence the injectivity.
H: Harmonic function with bounded preimage I recently saw a question here about bounded/unbounded preimages of a set under a harmonic function. The question asked did not seem to make sense as it was talking about harmonic functions on $\mathbb{R}$, but got me thinking about a corresponding problem for a harmonic function on $\mathbb{R}^n$, and I would appreciate it if anyone could tell me if my argument below is correct, or if there is a neater way to do it. Suppose that $u:\mathbb{R}^n\to\mathbb{R}$ is harmonic and suppose that the set $C = u^{-1}(\{c\})$ is bounded. Since adding a constant to $u$ leaves it harmonic, we may assume that $c=0$. Take any point $x\in\mathbb{R}^n$ and consider a ball $B(x,r)$ with $r$ sufficiently large that all points of $C$ are in the interior of $B$. Now either $u>0$ or $u<0$ on $\partial B$ since $u$ is continuous and non-zero on $\partial B$. Take the case $u>0$ on $\partial B$ (the other case is similar), and recall that $u(x)$ is the average of the values of $u$ on the surface $\partial B$, so that $u(x)>0$. Since $x$ is arbitrary, it follows that $u$ is bounded below on $\mathbb{R}^n$, and harmonic, and hence constant. Rephrasing: "If $u$ is a non-constant harmonic function on $\mathbb{R}^n$ with $n\ge 2$, then $u^{−1}(c)$ is unbounded." Any comments or corrections would be gratefully received. AI: Your reasoning is perfectly valid. A slight rephrasing: Suppose there exists $R$ such that $u$ does not vanish in $\mathbb R^n\setminus B(0,R)$. Since the set $\mathbb R^n\setminus B(0,R)$ is connected (this is where we need $n\ge 2$), it follows that either $u>0$ or $u<0$ on this set. Also, $u$ is bounded on $B(0,R)$. We conclude that $u$ is either bounded from above or bounded from below on $\mathbb R^n$, hence constant.
H: How to get a part of a quaternion? e.g. get half of the rotation of a quaternion? If I have a quaternion which describes an arbitrary rotation, how can I get, for example, only half of the rotation, or something like 30% of it? Thanks in advance! AI: I believe what you're looking for are exponent and logarithm formulas for quaternions, which can be found on the Wikipedia page on quaternions. The Wikipedia page even gives a formula for raising a quaternion to an arbitrary power, which is exactly what you want. If your original rotation is given by $q$, and you want to take 30% of this rotation, you simply take $q^{0.3}$.
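Here is a small self-contained sketch in plain Python of what $q^t$ looks like for a unit quaternion stored as $(w,x,y,z)$ (my own illustration: it uses the identity $q^t=\cos(t\theta)+\mathbf u\sin(t\theta)$ for $q=\cos\theta+\mathbf u\sin\theta$, and treats a near-identity rotation as a degenerate case):

    import math

    def quat_pow(q, t):
        # fractional power of a unit quaternion q = (w, x, y, z)
        w, x, y, z = q
        theta = math.acos(max(-1.0, min(1.0, w)))   # half-angle of the rotation
        s = math.sin(theta)
        if s < 1e-12:                               # (near-)identity: axis undefined
            return (1.0, 0.0, 0.0, 0.0)
        ax, ay, az = x / s, y / s, z / s            # unit rotation axis
        st = math.sin(t * theta)
        return (math.cos(t * theta), ax * st, ay * st, az * st)

    # 90-degree rotation about the z-axis; 30% of it is a 27-degree rotation
    q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
    print(quat_pow(q, 0.3))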
H: Positivity of a functional. While going through the Riesz Representation theorem I am stuck on the use of positivity of a linear functional. My question is: if $\tau$ is a positive linear functional from $C(X)\to \mathbb C$ and $f\in C(X)$, I didn't understand how positivity is used to write the following: $\tau(|f|^2) \le \tau (\|f\|^2\mathbf 1) =\|f\|^2\tau(\mathbf 1)$. I want to know where exactly the positivity is used and how. Thank you for your kind help. AI: It is used in the inequality, because we have for all $x\in X$, $|f(x)|^2\leq \lVert f\rVert_{\infty}^2\mathbf 1(x)$, where $\mathbf 1$ is the constant function equal to $1$. Then we apply $\tau$ to $\lVert f\rVert_{\infty}^2\mathbf 1-|f|^2\geq 0$, and we conclude by linearity.
H: Number of times two rescaled, 'fully' monotonic functions can cross Consider two functions $f: [0,1) \rightarrow \mathbb{R}$ and $g: [0,1) \rightarrow \mathbb{R}$. Suppose $f(x) > g(x)$ for all $x \in [0,1)$. Suppose further that $f$ and $g$ are infinitely differentiable and all derivatives of both $f$ and $g$ are strictly positive on the interior $(0,1)$ of the domain and non-negative at $x=0$: $f^{(n)}(x) > 0$ and $g^{(n)}(x) > 0$ for all $x \in (0,1)$, $f^{(n)}(0) \geq 0$, $g^{(n)}(0) \geq 0$, for all $n$. (For lack of a better term, I call $f$ and $g$ 'fully monotonic', since 'totally' or 'completely' monotonic requires the signs to alternate. See here.) Let $x_0 \in (0,1)$ be a fixed interior point. Is it true that $$ \frac{f(x)}{f(x_0)} = \frac{g(x)}{g(x_0)} $$ only holds at $x = x_0$? That is, do the functions $f(x)/f(x_0)$ and $g(x)/g(x_0)$ cross only once? A standard sufficient condition for uniqueness is $f' > g'$, or the slightly weaker $f'(x)/f(x_0) > g'(x)/g(x_0)$, but this is not assumed here. Intuitively it seems that full monotonicity and $f > g$ imply any rescaled functions $f(x)/C_1$ and $g(x)/C_2$ where $C_1 > 1$ and $0 < C_2 < 1$ can cross at most once, but I have not been able to prove this, nor can I construct a counterexample. Also, in case these added properties matter: I know that $f(0) = 1$, $g(0) = 0$ and both $f(x)$ and $g(x)$ diverge to infinity as $x \rightarrow 1$. $f$ and $g$ are both power series. AI: Let $g\colon[0,1)\to\mathbb{R}$ be a $C^\infty$ function such that $g(0)=0$, $g^{(n)}(x)>0$ for all $x\in(0,1)$ and all $n\ge0$, $g^{(n)}(0)\ge0$ for all $n\ge0$ and $\lim_{x\to1^-}g(x)=\infty$. One example is $g(x)=\tan(\pi\, x/2)$. Let $f(x)=1+(g(x))^2$. Then $f(x)>g(x)$ for all $x\in(0,1)$, $f$ satisfies the required conditions about the positivity of its derivatives and $\lim_{x\to1^-}f(x)/g(x)=\infty$. Take $x_0\in(0,1)$ such that $g(x_0)<1$ (this is possible since $g(0)=0$). Then the equation $$ \frac{f(x)}{f(x_0)}=\frac{g(x)}{g(x_0)} $$ has at least two solutions. One is $x=x_0$. A simple calculation shows that $$ \Bigl(\frac{f(x)}{f(x_0)}\Bigr)'\Bigr|_{x=x_0}<\Bigl(\frac{g(x)}{g(x_0)}\Bigr)'\Bigr|_{x=x_0}. $$ Then $g(x)/g(x_0)>f(x)/f(x_0)$ on $(x_0,x_0+\epsilon)$ for some $\epsilon>0$. But $\lim_{x\to1^-}f(x)/g(x)=\infty$, so that the graph of $f(x)/f(x_0)$ must cut the graph of $g(x)/g(x_0)$ at some point $x>x_0$.
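If it helps to see the counterexample concretely, here is a short numerical check in Python (my own illustration of the construction above, with $x_0=0.3$ so that $g(x_0)<1$):

    import math

    g = lambda x: math.tan(math.pi * x / 2)
    f = lambda x: 1 + g(x) ** 2
    x0 = 0.3                                        # g(x0) < 1, as required
    h = lambda x: f(x) / f(x0) - g(x) / g(x0)
    for x in (0.31, 0.5, 0.9):
        print(x, h(x))   # negative just past x0, positive near 1: a second crossing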
H: Finding a polynomial for this trigonometric equality For each natural number $n$, find a polynomial $p_n$ such that $$\cos(n\theta)=p_n(\tan(\theta))\cos^n(\theta)$$ It was suggested to take $p_n(x)=\frac{1}{2}\{(1+ix)^n+(1-ix)^n\}$, but I don't see why this works. AI: Let $q_n(x) = (1+ix)^n$. Then $$q_n(\tan(\theta))\cos^n(\theta) = (1+i\frac{\sin\theta}{\cos \theta})^n\cos^n\theta = (\cos \theta + i\sin\theta)^n = \cos n\theta + i\sin n\theta$$ So $p_n(x) = \frac{1}{2}\left(q_n(x)+q_n(-x)\right)$. Use that $\cos(-x)=\cos x$ to show that $$p_n(\tan\theta)\cos^n\theta = \cos n\theta $$
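A quick numerical check of the identity (plain Python; the function name is mine):

    import math

    def p(n, x):
        # p_n(x) = ((1 + ix)^n + (1 - ix)^n) / 2, which is real
        return (((1 + 1j * x) ** n + (1 - 1j * x) ** n) / 2).real

    n, theta = 7, 0.4
    print(p(n, math.tan(theta)) * math.cos(theta) ** n)   # these two numbers
    print(math.cos(n * theta))                            # agree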
H: Exposition on hyperelliptic curves Is there any literature that introduces hyperelliptic curves without the view towards cryptography? Even better is if there are any books that talk about them with (about) the same amount of detail as Silverman does for elliptic curves. AI: For Genus $2$ there is a very nice book by J.W.S. Cassels and E.V. Flynn: Prolegomena to a Middlebrow Arithmetic of Curves of Genus 2 For higher genus I have only seen short sections in books on Riemann Surfaces and Abelian Varieties, mostly in the form of a more elaborate example for the general theory.
H: A system of equations with 5 variables: $a+b+c+d+e=0$, $a^3+b^3+c^3+d^3+e^3=0$, $a^5+b^5+c^5+d^5+e^5=10$ Find the real numbers $a, b, c, d, e$ in $[-2, 2]$ that simultaneously satisfy the following relations: $$a+b+c+d+e=0$$ $$a^3+b^3+c^3+d^3+e^3=0$$ $$a^5+b^5+c^5+d^5+e^5=10$$ I suspect that the key is a trigonometric substitution, but I am not sure what kind, or whether a different idea is needed. AI: The unknowns $a,b,c,d,e$ are to be real and in the interval $[-2,2]$. This screams for the substitution $a=2\cos\phi_1$, $b=2\cos\phi_2$, $\ldots, e=2\cos\phi_5$ with some unknown angles $\phi_j,j=1,2,3,4,5$ to be made. Let's use the equations $2\cos\phi_j=e^{i\phi_j}+e^{-i\phi_j}$, $j=1,2,3,4,5$. Now $$ 0=a+b+c+d+e=\sum_{j=1}^5(e^{i\phi_j}+e^{-i\phi_j}), $$ Using this in the second equation gives $$ 0=a^3+b^3+c^3+d^3+e^3=\sum_{j=1}^5(e^{3i\phi_j}+3e^{i\phi_j}+3e^{-i\phi_j}+e^{-3i\phi_j}) =\sum_{j=1}^5(e^{3i\phi_j}+e^{-3i\phi_j}). $$ Using both of these in the last equation gives $$ \begin{align} 10=a^5+b^5+c^5+d^5+e^5&=\sum_{j=1}^5(e^{5i\phi_j}+5e^{3i\phi_j}+10e^{i\phi_j}+10e^{-i\phi_j}+5e^{-3i\phi_j}+e^{-5i\phi_j})\\ &=\sum_{j=1}^5(e^{5i\phi_j}+e^{-5i\phi_j})=\sum_{j=1}^5(2\cos5\phi_j). \end{align} $$ This is equivalent to $$ \sum_{j=1}^5\cos5\phi_j=5. $$ When we know that the sum of five cosines is equal to five, certain deductions can be made :-) This shows that there are 5 possible values for all the five unknowns, namely $2\cos(2k\pi/5)$ with $k=0,1,2,3,4$ (well, cosine is an even function, so there are only three!). We get a solution by using each value of $k$ exactly once, because then the first two equations are satisfied (use familiar identities involving roots of unity). There may be others, but having reduced the problem to a finite search, I will exit back left.
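For a quick numerical confirmation that the values $2\cos(2k\pi/5)$, $k=0,1,2,3,4$, really satisfy all three equations (a small Python sketch):

    import math

    a = [2 * math.cos(2 * math.pi * k / 5) for k in range(5)]
    print([round(x, 4) for x in a])                   # all five lie in [-2, 2]
    for p in (1, 3, 5):
        print(p, round(sum(x ** p for x in a), 8))    # prints 0, 0, 10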
H: Differentiating Under the Integral Proof There are many variations of "differentiating under the integral sign" theorem; here is one: If $U$ is an open subset of $\mathbb{R}^n$ and $f:U \times [a,b] \rightarrow \mathbb{R}$ is continuous with continuous partial derivatives $\partial_1 f, \dots \partial_n f$ then the function $$ \phi(x) = \int^b_a f(x,t)dt $$ is continuously differentiable and $$ \partial_i \phi (x) = \int^b_a \partial_i f(x,t)dt $$ Can anyone suggest a textbook that provides a proof of this version of the theorem? AI: Isn't the proof sort of "follow your nose"? Let $\Delta x$ be nonzero, consider $$ \phi(x+\Delta x)-\phi(x) = \int^{b}_{a}f(x+\Delta x,t)-f(x,t)\,\mathrm{d}t$$ Then construct the quotient $$ \frac{\phi(x+\Delta x)-\phi(x)}{\Delta x} = \frac{\int^{b}_{a}f(x+\Delta x,t)-f(x,t)\,\mathrm{d}t}{\Delta x} $$ But because we do not integrate over $x$, we treat $x$ like a constant. So we can rewrite the integral as $$ \frac{\phi(x+\Delta x)-\phi(x)}{\Delta x} = \int^{b}_{a}\frac{f(x+\Delta x,t)-f(x,t)}{\Delta x}\,\mathrm{d}t $$ Taking the limit as $\Delta x\to0$ gives us $$ \frac{\mathrm{d}\phi(x)}{\mathrm{d} x} = \int^{b}_{a}\frac{\partial f(x,t)}{\partial x}\,\mathrm{d}t $$ precisely as desired? [Edit: We can take the limit under the integral sign, as Giuseppe Negro points out, if the function $f(x,t)$ is continuously differentiable in $x$.] Addendum: Why, oh why, do we need $f(x,t)$ to be continuously differentiable in $x$? Why can we take this limit? Well, there's a number of different arguments. One is the Dominated convergence theorem, which states if we have a sequence of functions $f_{n}(t)\to F(t)$ which is "dominated" by some function $g(t)$, meaning $$ |f_{n}(t)|\leq g(t)\quad\mbox{for any }t $$ then we have $$ \lim_{n\to\infty}\int|f_{n}(t)-F(t)|\,\mathrm{d}t=0 $$ which implies $$ \lim_{n\to\infty}\int f_{n}(t)\,\mathrm{d}t=\int F(t)\,\mathrm{d}t. $$ Take $F(t)=\partial f(x,t)/\partial x$ and $f_{n}(t)$ to be $$ f_{n}(t) = \frac{f(x + \varepsilon_{n},t)-f(x,t)}{\varepsilon_{n}} $$ using any sequence $\varepsilon_{n}\to 0$. Addendum 2: A second different way begins with the observation $$ \int^{b}_{a}\int^{x}_{0}\frac{\partial f(y,t)}{\partial y}\,\mathrm{d}y\,\mathrm{d}t = \phi(x)-\phi(0)$$ by the fundamental theorem of calculus. Fubini's theorem lets us switch the order of integration $$ \int^{x}_{0}\int^{b}_{a}\frac{\partial f(y,t)}{\partial y}\,\mathrm{d}t\,\mathrm{d}y = \phi(x)-\phi(0)$$ Then we can use Leibniz's rule differentiating both sides with respect to $x$. This gives us the desired result $$ \int^{b}_{a}\frac{\partial f(x,t)}{\partial x}\,\mathrm{d}t = \phi'(x).$$ Recall Leibniz's rule states if $G(x) = \int^{x}_{0}g(y)\,\mathrm{d}y$ then $$ G'(x) = g(x). $$ We can prove this quickly by $$ \frac{G(x+\Delta x)-G(x)}{\Delta x} = \frac{1}{\Delta x}\int^{x+\Delta x}_{x} g(y)\,\mathrm{d}y$$ and taking $\Delta x$ to be "sufficiently small", we can approximate the Riemann sum as $$ \int^{x+\Delta x}_{x} g(y)\,\mathrm{d}y\approx g(c)\Delta x$$ where $x\leq c\leq x+\Delta x$. Plugging this back in gives us $$ \frac{G(x+\Delta x)-G(x)}{\Delta x} = \frac{1}{\Delta x}\left(g(c)\Delta x\right) = g(c)$$ Taking $\Delta x\to 0$ gives us $c\to x$, and $$ G'(x) = g(x)$$ as desired.
H: Confusion regarding ML estimate I was going through this article and they have this log-likelihood given by $$ LL = \sum_{i=1}^n A_i\log p_i + \sum_{i=1}^n A'_i\log(1-p_i).$$ Basically this is the log-likelihood of a logistic regression, where $p_i$ is the output from the sigmoid function, $A_i$ is the number of entries at $i$ having $y$-value $1$, and $A'_i$ is the number of entries at $i$ having $y$-value $0$. Now the closed-form solution is given by $$p_i = \frac{A_i}{A_i+A'_i}$$ I don't get this. Where does the above solution come from? AI: The derivative of the log-likelihood with respect to $p_i$ is given by $$ \frac{A_i}{p_i}-\frac{A_i'}{1-p_i}. $$ Putting this equal to zero and solving for $p_i$ yields $$ p_i=\frac{A_i}{A_i+A_i'}. $$ Of course you should show that this in fact is a maximum and not a minimum, and this is easily done by looking at the second derivative with respect to $p_i$.
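If you want to see the maximization done symbolically, here is a one-variable SymPy sketch (the symbol names are mine; it treats a single $p_i$ with its counts $A_i$, $A'_i$):

    import sympy as sp

    A, Ap, p = sp.symbols('A_i Aprime_i p_i', positive=True)
    LL = A * sp.log(p) + Ap * sp.log(1 - p)
    print(sp.solve(sp.diff(LL, p), p))   # [A_i/(A_i + Aprime_i)]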
H: Finding solutions to $(4x^2+1)(4y^2+1) = (4z^2+1)$ Consider the following equation with integral, nonzero $x,y,z$ $$(4x^2+1)(4y^2+1) = (4z^2+1)$$ What are some general strategies to find solutions to this Diophantine equation? If it helps, this can also be rewritten as $z^2 = x^2(4y^2+1) + y^2$ I've already looked at On the equation $(a^2+1)(b^2+1)=c^2+1$ AI: Let $a$ be a positive integer. Then \begin{align} (4a^2+1)(4((2a)^2)^2+1) &= 256a^6 + 64a^4 + 4a^2 + 1 \\ & = 4(64a^6 + 16a^4 + a^2) + 1 \\ &= 4(a^2(8a^2+1)^2)+1 \\ &= 4((8a^2+1)a)^2+1 \end{align} so $(a, (2a)^2, (8a^2+1)a)$ is always a solution. There are others as well.
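A quick check of this family in Python:

    for a in range(1, 6):
        x, y, z = a, (2 * a) ** 2, a * (8 * a ** 2 + 1)
        assert (4 * x ** 2 + 1) * (4 * y ** 2 + 1) == 4 * z ** 2 + 1
        print(x, y, z)   # e.g. a = 1 gives (1, 4, 9): 5 * 65 = 325 = 4 * 81 + 1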
H: Finding $\frac{d^2 y}{dx^2}$ I am not sure how to do this, but I need to find $\frac{dy}{dx}$ and $\frac{d^2 y}{dx^2}$ for $x = t^2 + 1, y= t^2+t$, and then show for what $t$ values the curve is concave upward. I know the simple formula to find $\frac{dy}{dx}$: I get $$\frac{dy}{dx} = \frac{y'}{x'}$$ $$\frac{dy}{dx} = \frac{2t+1}{2t}$$ $$\frac{d^2 y}{dx^2} = \frac{\frac{dy}{dx}}{dx}$$ $$\frac{\frac{2t+1}{2t}}{2t}$$ This is wrong and I am not sure why; the book's answer is negative, which makes no sense to me. AI: You have $$\frac{dy}{dx}=\frac{2t+1}{2t}=1+\frac1{2t}\;.$$ To differentiate this again with respect to $x$, you must repeat what you did to get this: calculate $$\frac{\frac{d}{dt}\left(\frac{dy}{dx}\right)}{dx/dt}\;.$$ You forgot to do the differentiation in the numerator. When you do it, you get $$\frac{\frac{d}{dt}\left(1+\frac1{2t}\right)}{2t}=\frac{-\frac1{2t^2}}{2t}=-\frac1{4t^3}\;.$$
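You can confirm the computation symbolically with SymPy (a minimal sketch):

    import sympy as sp

    t = sp.Symbol('t')
    x, y = t ** 2 + 1, t ** 2 + t
    dydx = sp.diff(y, t) / sp.diff(x, t)
    d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)
    print(sp.simplify(d2ydx2))   # -1/(4*t**3): positive, hence concave up, iff t < 0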
H: Submodule of free module over a p.i.d. is free even when the module is not finitely generated? I have heard that any submodule of a free module over a p.i.d. is free. I can prove this for finitely generated modules over a p.i.d. But the proof involves induction on the number of generators, so it does not apply to modules that are not finitely generated. Does the result still hold? What's the argument? AI: Let $F$ be a free $R$-module, where $R$ is a PID, and $U$ be a submodule. Then $U$ is also free (and the rank is at most the rank of $F$). Here is a hint for the proof. Let $\{e_i\}_{i \in I}$ be a basis of $F$. Choose a well-ordering $\leq$ on $I$ (this requires the Axiom of Choice). Let $p_i : F \to R$ be the projection on the $i$th coordinate. Let $F_i$ be the submodule of $F$ generated by the $e_j$ with $j \leq i$. Let $U_i = U \cap F_i$. Then $p_i(U_i)$ is a submodule of $R$, i.e. has the form $R a_i$. Choose some $u_i \in U_i$ with $p_i(u_i)=a_i$. If $a_i=0$, we may also choose $u_i=0$. Now show that the $u_i \neq 0$ constitute a basis of $U$. Hint: Transfinite induction. The same proof shows the more general result: If $R$ is a hereditary ring (every ideal of $R$ is projective over $R$), then any submodule of a free $R$-module is a direct sum of ideals of $R$.
H: Convergence of $E(X_n)$ where $X_n=\frac nY 1_{\{Y>n\}}$ for any non-negative rv $Y$ Here is another self-study exercise that I am struggling mightily with: $X_n=\frac nY 1_{\{Y>n\}}$ for any $Y$ such that $P(0\le Y<\infty)=1$. I am told that $X_n\to X$ a.s. for some $X$, and am to show whether $E(X_n)\to E(X)$ as $n\to\infty$. I do not need to explicitly calculate the expectation, but just show its convergence, if applicable. As I get more and more familiar with dominated convergence, monotone convergence, Fatou, etc. I may not need as much explicit help, but in this exercise if you could help me identify which of the convergence theorems is necessary (and hint at the justification), it would be of great help. I'm having trouble visualizing the series in this form. AI: Using $nI_{\{Y > n\}} \leq Y$, conclude that $|X_{n}| \leq 1$. Hence, you can take $Z = 1$ as the integrable function which dominates $|X_{n}|$ and conclude from the Dominated Convergence Theorem that, since $X_{n}$ converges a.s. to $X$, $E[X_{n}]$ converges to $E[X]$. You can actually find out what $X$ is. Observe that $\{\omega: X_{n}(\omega) \neq 0\} = \{\omega: Y(\omega) > n\}$. Using this, you can work out that $X = 0$.
H: Ring automorphisms of $\mathbb{Z}$ quotients and their products How many ring automorphisms are there of $\mathbb{Z}/n\mathbb{Z}$? Calling such rings $A_n$, how many ring automorphisms are there of $\prod_{j=1}^n A_{n_j}$? AI: For $\mathbb{Z}/n\mathbb{Z}$, the only ring automorphism is the identity. For finite products, the only nontrivial automorphisms come from exchanging prime-power order direct factors. A group homomorphism of $\mathbb{Z}/n\mathbb{Z}$ to itself is completely determined by the image of $1+n\mathbb{Z}$. In order to be a group automorphism, you must map it to $a+n\mathbb{Z}$ with $\gcd(a,n)=1$. If $1\mapsto a$, then it is easy to verify that the homomorphism is just $r+n\mathbb{Z}\longmapsto ar+n\mathbb{Z}$. But in order to be a ring isomorphism, we also require that it respects products. Therefore, the image of $bc$ must be the product of the images of $b$ and $c$; since $1^2=1$, we must have that $a^2\equiv a\pmod{n}$. And since $\gcd(a,n)=1$, this means that $a\equiv 1\pmod{n}$, so the map is the identity. For the second problem, I'm assuming a finite number of direct factors. The Chinese Remainder Theorem establishes a ring isomorphism between $\mathbb{Z}/n\mathbb{Z}$ and the direct product of the rings $\mathbb{Z}/p_i^{a_i}\mathbb{Z}$, where $n=p_1^{a_1}\cdots p_r^{a_r}$ is a factorization of $n$ into prime powers of distinct primes, so we lose no generality by assuming that each $n_j$ is a prime power. Moreover, we may assume all $n_j$ are powers of the same prime $p$, since maps between direct factors corresponding to distinct primes must be the zero map. Now, the only idempotents in $\mathbb{Z}/p^a\mathbb{Z}$ are $0$ and $1$; this can be established easily using congruences; alternatively, note that a nontrivial idempotent $e$ in a commutative ring $R$ with unity will yield a direct product decomposition $Re \times R(1-e)$, and since $\mathbb{Z}/p^a\mathbb{Z}$ is indecomposable, it follows that if $e$ is idempotent then $e=1$ or $e=0$. Let $R = \mathbb{Z}/p^{a_1}\mathbb{Z}\times\cdots\times\mathbb{Z}/p^{a_r}\mathbb{Z}$, with $1\leq a_1\leq\cdots\leq a_r$, and consider the automorphisms of $R$. Now, as a group, the product is generated by the idempotents $\mathbf{e}_i$, where $\mathbf{e}_i$ has a $1$ in the $i$th coordinate and $0$s elsewhere. Their images must be idempotents, so they must be of the form $(b_1,\ldots,b_r)$ with $b_i$ idempotent, hence $b_i\in \{0,1\}$. In particular, if $a_i\lt a_j$, then the $j$th coordinate of the image of $\mathbf{e}_i$ must be $0$, by considering the additive order of $\mathbf{e}_i$. Now, since $\mathbf{e}_i\mathbf{e}_j = (0,0,\ldots,0)$ if $i\neq j$ (the idempotents are "mutually orthogonal"), it follows that the support of the image of $\mathbf{e}_i$ (the coordinates with nonzero component) must be disjoint from the support of the image of $\mathbf{e}_j$ if $i\neq j$. Let $t$ be the largest index such that $a_t=a_1$. Then since $\mathbf{e}_i$, $1\leq i\leq t$ must be mapped to some idempotent in $\mathbb{Z}/p^{a_1}\mathbb{Z}\times\cdots \times\mathbb{Z}/p^{a_t}\mathbb{Z}$, and different $\mathbf{e}_i$ must map to mutually orthogonal idempotents, it follows that any automorphism of $R$ must simply permute the $\mathbf{e}_i$, $1\leq i\leq t$. Looking at the next largest power of $p$ that occurs, we see that the same thing must occur there, and so on.
Thus, if the ring is of the form $$\left(\frac{\mathbb{Z}}{p^{a_1}\mathbb{Z}}\right)^{n_1}\times\cdots\times \left(\frac{\mathbb{Z}}{p^{a_s}\mathbb{Z}}\right)^{n_s}$$ with $1\leq a_1\lt \cdots \lt a_s$, $n_i\geq 1$, then the only automorphisms correspond to permutations of the factors of $\mathbb{Z}/p^{a_i}\mathbb{Z}$ for each $i$. Hence there are $(n_1!)\cdots (n_s!)$ such automorphisms. In the case of a single factor, we get $s=1$ and $n_1=1$, so we get the previous result as a special case.
H: The adjugate of the adjugate For any $n > 2$ and any $(n \times n)$-matrix $A$ over an arbitrary field, the adjugate of the adjugate of $A$ equals $\det(A)^{n - 2} A$. Is there a unified way, without dividing into two cases ($A$ invertible and $A$ non-invertible), to prove this result? AI: Exactly the same universal approach in the related problem below works on your similar problem. Hint: Denote the adjugate of $A$ by $A^*$. Then $$A A^* = |A|\, I_n \;\Rightarrow\; |A|\,|A^*| = |A|^n \;\Rightarrow\; |A^*| = |A|^{n-1}$$ where the cancellation of $|A|$ is done universally, i.e. consider the matrix entries as indeterminates, so that the determinant is a nonzero polynomial in the domain $\mathbb{Z}[a_{ij}]$, and hence is cancellable. For further discussion of such universal cancellation of "apparent singularities" see here and here and here.
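For a concrete sanity check of the identity itself (a SymPy sketch; the matrices are arbitrary choices of mine), note that it holds with no case split, for an invertible and a singular matrix alike:

    import sympy as sp

    A = sp.Matrix([[1, 2, 3], [0, 4, 5], [1, 0, 6]])          # invertible (det = 22)
    n = A.shape[0]
    print(A.adjugate().adjugate() == A.det() ** (n - 2) * A)  # True

    B = sp.Matrix([[1, 2, 3], [2, 4, 6], [0, 1, 1]])          # singular (det = 0)
    print(B.adjugate().adjugate() == sp.zeros(3, 3))          # True: formula gives 0*B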
H: Question about infinite intersection Is $\bigcap\limits_{n=1}^\infty (0, 1 + \frac{1}{n})$ equal to $(0, 1)$ or $(0,1]$? Help is appreciated. AI: For all $n$, $1\in\left(0,1+\frac{1}{n}\right)$ and $\displaystyle\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)=\left\{x : \forall n,\; x\in\left(0,1+\frac{1}{n}\right)\right\}$. So $1\in\displaystyle\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)$. More generally: The same way, we can show that $\displaystyle(0,1]\subset\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)$. If $x>1$, there exists $n$ such that $x\notin\left(0,1+\frac{1}{n}\right)$, so $\displaystyle x\notin\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)$. If $x\le 0$, $x\notin\left(0,1+\frac{1}{1}\right)$, so $\displaystyle x\notin\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)$. Conclusion: $\displaystyle\bigcap_{n=1}^{\infty}\left(0,1+\frac{1}{n}\right)=(0,1]$.
H: Show these simple inequalities: $(\log(1+x))^2\le x$ and $(\log(1+x))^2\le x^2$ Show that $(\log(1+x))^2\le x$ and that $(\log(1+x))^2\le x^2$ for all $x\ge0$. Both of these inequalities seem to be true, judging from plotting these functions with a grapher. Can you help me get started? Can this be done by other means than with some painful Taylor series approaches? Update. The latter inequality seems trivial, given $\log(1+x)\le x$, since both sides are nonnegative and we can just square. So it seems like the first inequality is the non-trivial one. AI: For the first inequality, let $f(x)=\sqrt{x}-\ln(1+x)$. Then $f(0)=0$, and $$f'(x)=\frac{1}{2\sqrt{x}}-\frac{1}{1+x}=\frac{(1-\sqrt{x})^2}{2\sqrt{x}(1+x)}\ge 0.$$ Thus $f(x)$ is an increasing function on $[0,\infty)$ (it hesitates a bit at $x=1$), so $\sqrt{x}\ge\ln(1+x)\ge 0$ for $x\ge 0$; squaring gives the first inequality.
H: If $a^m=b^m$ and $a^n=b^n$ for $(m,n)=1$, does $a=b$? Possible Duplicate: Prove that $a=b$, where $a$ and $b$ are elements of the integral domain $D$ Something I'm curious about, suppose $a,b$ are elements of an integral domain, such that $a^m=b^m$ and $a^n=b^n$ for $m$ and $n$ coprime positive integers. Does this imply $a=b$? Since $m,n$ are coprime, I know there exist integers $r$ and $s$ such that $rm+sn=1$. Then $$ a=a^{rm+sn}=a^{rm}a^{sn}=b^{rm}b^{sn}=b^{rm+sn}=b. $$ However, I'm worried that if $r$ or $s$ happen to be negative then $a^{rm}, a^{sn}$, etc may not make sense, and moreover, I don't see where the fact that I'm working in a domain comes into play. How can this be remedied? AI: That works as long as you pass to the fraction field. But using fractions, the proof is much simpler: excluding the trivial case $\rm\,b=0,\,$ we have $\rm\:(a/b)^m = 1 = (a/b)^n\:$ hence the order of $\rm\,a/b\,$ divides the coprime integers $\rm\,m,n,\,$ thus the order must be $1.\,$ Therefore $\rm\,a/b = 1,\,$ so $\rm\,a = b.\,$ For a proof avoiding fraction fields see this proof that I taught to a student. Conceptually, both proofs exploit the innate structure of an order ideal. Often hidden in many proofs in elementary number theory are various ideal structures, e.g. denominator/conductor ideals in irrationality proofs. Pedagogically, it is essential to bring this structure to the fore.
H: Convergence in the absence of Dominated Convergence Theorem, and uniform integrability This question is extended from Resnick's exercise 5.13 in his book A Probability Path. Let the probability space be the Lebesgue interval, that is, $(\Omega=[0,1],\mathcal{B}([0,1]),\lambda)$ and define $X_n:=\frac{n}{\log n}1_{(0,\frac 1n)}$. Show $X_n\to 0, E(X_n)\to 0$ even though DCT fails. And secondly, show $\lim_{M\to\infty} \sup_{n\ge 2} E(X_n 1_{X_n>M})=0$ (uniform integrability) AI: I) $X_{n} \rightarrow 0$. You should use that $X_{n}(w) \neq 0$ iff $w \in (0,1/n)$. The proof follows almost from the definition. For any $w \in (0,1)$, there exists $n^{*}$ such that $\frac{1}{n^{*}} < w$. Hence, for any $n > n^{*}$, $X_{n}(w) = 0$ and $\lim X_{n}(w) = 0$. II) $E[X_{n}] \rightarrow 0$. Observe that $X_{n}$ is constant on its support, so it is easy to compute $E[X_{n}]$: $E[X_{n}] = \frac{n}{\log(n)}\lambda\left((0,1/n)\right) = \frac{1}{\log(n)}$. III) Uniform integrability. Observe that $X_{n} \leq n$ for $n \geq 3$ (since then $\log(n) \geq 1$), so for $M \geq 3$ the event $\{X_{n} > M\}$ is empty unless $n > M$. Hence, $$\sup_{n \geq 2} E[X_{n}I(X_{n} > M)] = \sup_{n > M} E[X_{n}I(X_{n} > M)] \leq \sup_{n > M} E[X_{n}] = \sup_{n > M}\frac{1}{\log(n)} \leq \frac{1}{\log(M)}$$ The result follows taking the limit in $M$.
H: Proving an implication by proving its dual My textbook "Discrete and Combinatorial Mathematics, an Applied Introduction" by Ralph P. Grimaldi contains the following definition: Let $s$ be a statement. If $s$ contains no logical connectives other than $\wedge$ and $\vee$, then the dual of $s$, denoted $s^d$, is the statement obtained from $s$ by replacing each occurrence of $\wedge$ and $\vee$ by $\vee$ and $\wedge$, respectively, and each occurrence of $T_0$ and $F_0$ by $F_0$ and $T_0$, respectively. (I apologize for not knowing the formatting conventions here) and the following theorem: The Principle of Duality. Let $s$ and $t$ be statements that contain no logical connectives other than $\wedge$ and $\vee$. If $s \leftrightarrow t$, then $s^d \leftrightarrow t^d$. Is anyone familiar with these? Can I use them to reason as follows: start with a premise, take its dual, manipulate the dual, arrive at the dual of our conclusion, take its dual? This would allow me to do something like: to prove that $\forall x\ P(x)\vee \forall x\ Q(x) \rightarrow \forall x\ [P(x)\vee Q(x)]$
    1. Ax,P(x) V Ax,Q(x)     premise
    2. Ax,P(x) ^ Ax,Q(x)     the dual of (1)
    3. Ax,P(x)               conjunctive simplification
    4. P(c)                  universal specification
    5. Ax,Q(x)               conjunctive simplification from (2)
    6. Q(c)                  universal specification
    7. P(c) ^ Q(c)           rule of conjunction from (4) and (6)
    8. Ax[P(x) ^ Q(x)]       universal generalization
    9. Ax[P(x) V Q(x)]       the dual of (8)
When I wrote this proof I didn't know that you could use universal specification to go from (1) to the dual of (7), so I needed a way to separate (1) into two parts. Taking the dual seemed to be a thing I can do. I worry though, because it seems like I could prove the converse this way, and the converse is false. So what's wrong? Why can't I use the concept of the dual of a statement in this way? Thanks everyone! PS: oh, and I solved the question in another way, so this is just my curiosity. PPS: thank you for being patient with my ugly post, I'll try to learn your markup. AI: Your steps (1)-(2) and (8)-(9) don't work. The dual is defined in your quote only under the condition If $s$ contains no logical connectives other than $\land$ and $\lor$, but here you also have quantifiers appearing in addition to $\land$ and $\lor$, and then the definition does not apply as written. The principle behind the dual is that instead of the formula $\phi(A,B,C,\ldots)$ where $A$, $B$, $C$ are propositional variables, you look at $\neg\phi(\neg A,\neg B,\neg C,\ldots)$ and then use De Morgan's laws to push the negations down through the formula until they eliminate each other. On the way they change every $\land$ to $\lor$ and vice versa. You can then do some transformations and dualize again, giving you something equivalent to $\neg \neg \phi(\neg \neg A,\neg \neg B, \neg\neg C,\ldots)$, which of course is just $\phi(A,B,C,\ldots)$. When there are quantifiers involved, you need to handle those using their appropriate De Morgan rules: $$\neg(\forall x\,\phi) \leftrightarrow \exists x\,(\neg \phi) \qquad \qquad \neg(\exists x\,\psi) \leftrightarrow \forall x\,(\neg \psi)$$ so when you dualize $(\forall x\,P(X)) \lor (\forall x\,Q(X))$ you should get $(\exists x\,\bar P(x)) \land (\exists x\,\bar Q(x))$, where $\bar P$ and $\bar Q$ are the duals of $P$ and $Q$.
H: Laplace transform I want to find the Laplace transform of the following signal but I don't know what to do with the absolute value. $$x(t)=e^{-|t|}\; u(t+1)$$ The first thing that came to my mind is to split into negative and positive sides and then find each one and add them. The problem is that I checked the solutions and it is not the same. Any ideas? AI: Since you're considering the two sided Laplace Transform you need to evaluate $$\mathcal L(s)=\int_{-\infty}^\infty e^{-|t|} \theta(t+1) e^{- st}dt$$ Since $\theta(t+1)=0$ for $t<-1$ your integral becomes $$\mathcal L(s)=\int_{-1}^\infty e^{-|t|} e^{- st}dt$$ Now we consider that for $(-1,0)$, $-|t|=t$, and for $(0,\infty)$, $-|t|=-t$, so that $$\mathcal L(s)=\int_{-1}^0 e^{t} e^{- st}dt+\int_{0}^\infty e^{-t} e^{- st}dt$$ Can you take it from there?
H: Error Term in Passing from Summation to Integral I encountered the following in a paper and do not understand how the error term is being bounded. In what follows, $n$ and $k$ are large integer constants. $$ \sum_{i=0}^{n-1} \ln\left(1 - \frac{i}{k}\right) = \int_0^n \ln\left(1 - \frac{x}{k}\right) dx \pm e(n,k) $$ where the error term $e(n,k)$ can be bounded by $$ e(n,k) \leq \int_0^n \ln\left(1 - \frac{x}{k}\right) - \ln\left(1 - \frac{x+1}{k}\right) dx $$ AI: $$0 \le e(n,k) = \int_0^n \left(\ln \left(1 - \dfrac{\lfloor x \rfloor}{k} \right) - \ln \left(1 - \dfrac{x}{k}\right)\right) dx \le \int_0^n \left(\ln \left(1 - \dfrac{ x-1}{k} \right) - \ln \left(1 - \dfrac{x}{k}\right)\right) dx $$ So the inequality is true if $$\ln \left(1 - \dfrac{ x-1}{k} \right) - \ln \left(1 - \dfrac{x}{k}\right) \le \ln \left(1 - \dfrac{x}{k}\right) - \ln \left(1 - \dfrac{x+1}{k} \right)$$ but rearranging and taking exponential of both sides, this becomes $$\left(1 - \dfrac{ x-1}{k}\right)\left(1 - \dfrac{x+1}{k}\right) \le \left( 1 - \dfrac{x}{k}\right)^2$$ and the left side is $\left(1 - \dfrac{x}{k}\right)^2 - \left(\dfrac{1}{k}\right)^2$.
H: Prove the following $\tan(nA)$ expansion I've figured out the approach: writing the expansion of $(1 + x)^n$, then replacing $x$ with $i \tan (A)$, then separating real and imaginary parts; $\tan(nA)$ will be equal to Im/Re. But, after reaching $(1 + i \tan(A))^n$, I'm unable to convert it into de Moivre form, from which I could proceed further. Prove that $$\tan(nA) = \dfrac{\dbinom{n}1t - \dbinom{n}3 t^3 + \dbinom{n}5 t^5 \pm \cdots}{1 - \dbinom{n}2 t^2 + \dbinom{n}4 t^4 \pm \cdots}$$ where $t= \tan(A)$. AI: \begin{align} (1 + i \tan(A))^n & = \left( 1 + i \dfrac{\sin(A)}{\cos(A)}\right)^n\\ & = \left(\dfrac{\cos(A) + i \sin(A)}{\cos(A)} \right)^n\\ & = \dfrac{e^{inA}}{\cos^n(A)}\\ & = \dfrac{\cos(nA) + i \sin(nA)}{\cos^n(A)} \end{align} Hence, the real part is $\dfrac{\cos(nA)}{\cos^n(A)}$ and the imaginary part is $\dfrac{\sin(nA)}{\cos^n(A)}$. Now you should be able to finish it off by dividing the imaginary part by the real part.
H: length of paths between two nodes in a directed acyclic graph What might be a good way to calculate the lengths of all paths between two nodes in a directed acyclic graph? I don't need the actual paths, just the lengths. Is there a combinatorial formula for that? AI: Let $A$ be the adjacency matrix for your graph - so $A_{ij}=1$ if $(i,j)$ is an edge of your graph and $A_{ij}=0$ otherwise. Then $(A^k)_{ij}$ is the number of paths from $i$ to $j$ of length $k$. One nice property is that, since the graph is acyclic, there can be no path of length $\geq n$, where $n$ is the number of nodes in your graph, since such a path would have to contain a cycle. This means that $A^n=0$. You can easily get the number of paths from $i$ to $j$ by taking the $ij$th entry of $I+A+A^2+...+A^{n-1}=(I-A)^{-1}$, so you need only invert $I-A$ to get all the counts. I'm not seeing how to get the counts of the lengths, or even the list of lengths, other than explicitly computing $A^k$ for each $k$.
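Here is a small NumPy sketch of both computations on a toy DAG (the graph is an arbitrary example of mine):

    import numpy as np

    # DAG with edges 0->1, 0->2, 1->2, 1->3, 2->3
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 1],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
    n = A.shape[0]

    Ak = np.eye(n, dtype=int)
    for k in range(1, n):
        Ak = Ak @ A
        print(k, Ak[0, 3])           # paths of length k from 0 to 3: 0, 2, 1

    total = np.linalg.inv(np.eye(n) - A)   # equals I + A + ... + A^{n-1} since A^n = 0
    print(int(round(total[0, 3])))         # 3 paths in total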
H: An example of a $2\times2$ matrix $A$ without zero entries and with eigenvalues $\lambda_{1}=3,\lambda_{2}=-4$ I am trying to do an exercise that asks: Find an example of a $2\times2$ matrix $A$ without zero entries and with eigenvalues $\lambda_{1}=3,\lambda_{2}=-4$ I am having trouble thinking of a way to find such a matrix; of course $\mathrm{diag}(3,-4)$ has $\lambda_{1}=3,\lambda_{2}=-4$ as eigenvalues, but I can't figure out whether I can use it to get a matrix without zero entries... Help is appreciated! AI: Hint. Start with the diagonal matrix and think of it as corresponding to the basis $\{(1,0),(0,1)\}$. Then conjugate by an invertible matrix (and think of it as a change of basis) in a way that gives a result with no zero components. For instance, what happens if you change to the basis $\{(1,1), (1,-1)\}$? Will the image of either basis vector have a $0$ coordinate? Alternate method. Find $a$, $b$, and $c$, nonzero, such that $ac-b^2 = \lambda_1\lambda_2$ and $a+c = \lambda_1+\lambda_2$. Then $$\left(\begin{array}{cc}a&b\\b&c\end{array}\right)$$ will do. (Why?)
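The conjugation recipe, carried out numerically with the suggested basis (a NumPy sketch):

    import numpy as np

    P = np.array([[1.0, 1.0],
                  [1.0, -1.0]])          # columns are the new basis vectors
    A = P @ np.diag([3.0, -4.0]) @ np.linalg.inv(P)
    print(A)                             # [[-0.5  3.5], [ 3.5 -0.5]]: no zero entries
    print(np.linalg.eigvals(A))          # [ 3. -4.]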
H: How to average ellipses? I would like to find the average among a number of ellipses. The ellipses all have the same center, and the same major axis length, but they have different eccentricities and different orientations, which is what makes the problem challenging. Averaging two ellipses with the same eccentricity, but with an angle of $\frac{\pi}{2}$ between the major axes should result in a circle, as should the average between three same ellipses oriented at $\frac{\pi}{3}$ from one another. Averaging two ellipses with the same eccentricity, but with an angle of $\frac{\pi}{4}$ between the major axes should result in an ellipse with a smaller major axis right in between the two axes. Initially, it seemed that drawing a new ellipse through the intersection points between two ellipses would work, but that is not easily adapted to more than two ellipses, and it doesn't appear to work well if the eccentricity is different between ellipses. AI: One approach to "averaging" would be to write down the equations for each ellipse in the form $p_i(x,y)=0$ where $p_i$ is a second-degree polynomial normalized such that $p_i(0,0)=1$. Then you can take the average of the various polynomials (algebraically, that is just the same as averaging the coefficients position for position); the result will be a polynomial whose zero set is another ellipse or circle: $$ \frac 1n\Bigl(p_1(x,y)+\cdots+p_n(x,y)\Bigr) = 0 $$
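Here is a minimal NumPy sketch of this averaging for origin-centered ellipses (the helper conic and its normalization are my own illustration): each ellipse is encoded by the matrix $Q$ of its quadratic form, and averaging the polynomials amounts to averaging the matrices.

    import numpy as np

    def conic(a, b, phi):
        # quadratic-form matrix of the origin-centered ellipse with semi-axes a, b
        # and major axis rotated by phi; the conic is {v : v^T Q v = 1}
        c, s = np.cos(phi), np.sin(phi)
        R = np.array([[c, -s], [s, c]])
        return R @ np.diag([1 / a ** 2, 1 / b ** 2]) @ R.T

    # two congruent ellipses at right angles average to a circle
    Q = (conic(2, 1, 0) + conic(2, 1, np.pi / 2)) / 2
    print(Q)   # 0.625 times the identity, i.e. a circle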
H: A non-arithmetical set? A set is called arithmetical if it can be defined by a first-order formula in Peano arithmetic. I first encountered these sets when exploring the arithmetical hierarchy in the context of computability theory. However, I have not encountered any examples of sets that are not arithmetical. Is there a canonical example of a non-arithmetical set? Thanks! AI: There are countably many first order formulas defining arithmetical sets. Let $\varphi_n$, $n\in\mathbb N$, be a list of those. Consider the set that contains a natural number $n$ iff $n$ is not contained in the set defined by the $n$-th formula.
H: Finding the smallest possible angles in a triangle I am having difficulty solving this problem: In the given figure $(x+y)$ is an integer greater than 110. What is the smallest possible value of $(w+z)$? (The answer is 111.) Any suggestions on how 111 is the answer? AI: If you draw a line across the big triangle like the one you drew, but parallel to the base, then we will have $y=w$, so $x+y=x+w$. Since $x+y$ is an integer greater than $110$, the smallest possible value of $x+y$, and hence of $x+w$, is $111$. If the line meets the base somewhere to the right, that is, has negative slope if we think of the base as horizontal, then $w>y$, so $w+x$ will be greater than $111$. If the line across the triangle has positive slope, then $w$ can be substantially less than $y$, so $w+x$ can be substantially less than $x+y$. Remark: If we change the problem to asking about $w+z$, then indeed the smallest possible value of $w+z$ is $111$. For the angles complementary to $x$ and $y$ are $180^\circ -x$ and $180^\circ -y$. Thus the sum of the angles of the quadrilateral "below" our line is $(180^\circ-x)+(180^\circ -y)+w+z.$ But the sum of the angles of a convex quadrilateral is $360^\circ$. It follows that $$(180^\circ-x)+(180^\circ -y)+w+z=360^\circ,$$ from which we see that $w+z=x+y$. Since the smallest allowed $x+y$ is $111^\circ$, this is also the smallest allowed value of $w+z$.
H: How much Set Theory before Topology? I was reading Baby Rudin for Real Analysis and wanted to explore Topology a little deeper. I bought George Simmons' Introduction to Topology and Modern Analysis and found myself liking it. I am having some problems every once in a while with prerequisites. How much set theory do I need to learn before diving into the aforementioned book? Also, the book is divided into 3 parts: Topology, Operators and Algebras of Operators. How far can I get with a good understanding of single-variable calculus? AI: Simmons' book has a brief introduction to set theory at the beginning. If you can get through it without any trouble, you probably have a strong enough background in set theory. In the worst case, there may be a couple of things that you will get stuck at. If that happens, you can go try to figure it out on your own, and if that fails go back to a reference for set theory. This should be pretty infrequent. The best thing to do is to try to read the book (and understand every line of every proof, not just skimming over it). If you get stuck, find a way to get past it. If you find yourself frequently getting stuck, figure out what it is you're weak on, and go study that before you go back to the book.
H: Prove that $ \sqrt{\sum_{j=1}^{n}a_j^2} \le \sum_{j=1}^{n}|a_j|$, for all $a_i \in \mathbb{R}, i=\{1,2,...,n\}$ As the title says, prove that $ \sqrt{\sum_{j=1}^{n}a_j^2} \le \sum_{j=1}^{n}|a_j|$, for all $a_i \in \mathbb{R}, i=\{1,2,...,n\}$. I can solve for the case of n=2, but kind of stuck while proving the n-terms version. The attempt is as follow: $\left(\sqrt{\sum_{j=1}^{n}a_j^2}\right)^2 = \sum_{j=1}^n {a_j}^2 \le |\sum_{j=1}^n {a_j}^2| \le \sum_{j=1}^n |a_j|^2 \mbox{by triangle inequality.}$ Alright, I am stuck in the last inequality. Since I already square at the beginning, I would like to obtain something of the form squared too for the last inequality so that I can take the square root. When I solve the case for n=2, I can easily add terms to turn it into a square, but for this I am stuck. Help is pretty much appreciated. AI: Equivalently, you want to show that $\left(\sum|a_j|\right)^2 \ge \sum a_j^2$. If you expand $\left(\sum|a_j|\right)^2$, you will get $\sum a_j^2$ plus "cross" terms which are non-negative. More precisely, $$\left(\sum|a_j|\right)^2=\sum a_j^2 +2\sum_{i\lt j}|a_i||a_j|.$$
H: An explicit example of an invariant halfspace of the unilateral shift? In a recent talk, A. Popov stated the following fact The unilateral shift on $\ell^2$ has invariant halfspaces. Halfspaces are closed subspaces whose dimension and codimension are both infinite. He did not prove it. I know that the unilateral shift has many invariant subspaces, but all the examples I know have finite codimension. Thus I wonder whether somebody can give an explicit invariant halfspace of the unilateral shift. Just to be precise, I am asking about the forward shift, that is, $Se_n=e_{n+1}$. Thanks! AI: Naturally, we should use the Beurling theorem on invariant subspaces. Let $\theta$ be an infinite Blaschke product. The space of functions of the form $\theta f$, $f\in H^2(\mathbb D)$, is infinite dimensional and also has infinite codimension. The former is obvious and the latter is because you can knock out the zeros of $\theta$ one by one. I'm not sure if this qualifies as an explicit example. The criteria that make an example explicit were not made explicit.
H: On a two dimensional grid is there a formula I can use to spiral coordinates in an outward pattern? I don't know exactly how to describe it, but in a programmatic way I want to spiral outward from coordinates 0,0 to infinity (for practical purposes, though really I only need to go to about +/-100,000:+/-100,000) So if this is my grid: [-1, 1][ 0, 1][ 1, 1][ 2, 1] [-1, 0][ 0, 0][ 1, 0][ 2, 0] [-1,-1][ 0,-1][ 1,-1][ 2,-1] I want to spiral in an order sort of like: [7][8][9][10] [6][1][2][11] [5][4][3][12] etc... Is there a formula or method of doing this? AI: Here’s a recipe for finding the coordinates of your position after $n$ steps along the spiral. It’s simpler to number the positions on the spiral starting at $0$: position $0$ is $\langle 0,0\rangle$, the origin, position $1$ is $\langle 1,0\rangle$, position $2$ is $\langle 1,-1\rangle$, and so on. Using $R,D,L$, and $U$ to indicate steps Right, Down, Left, and Up, respectively, we see the following pattern: $$RD\,|LLUU\,\|\,RRRDDD\,|LLLLUUUU\,\|\,RRRRRDDDDD\,|LLLLLLUUUUUU\;\|\dots\;,$$ or with exponents to denote repetition, $R^1D^1|L^2U^2\|R^3D^3|L^4U^4\|R^5D^5|L^6U^6\|\dots\;$. I’ll call each $RDLU$ group a block; the first block is the initial $RDLLUU$, and I’ve displayed the first three full blocks above. Clearly the first $m$ blocks comprise a total of $2\sum_{k=1}^{2m}k=2m(2m+1)$ steps. It’s also not hard to see that the $k$-th block is $R^{2k+1}D^{2k+1}L^{2k+2}U^{2k+2}$, so that the net effect of the block is to move you one step up and to the left. Since the starting position after $0$ blocks is $\langle 0,0\rangle$, the starting position after $k$ full blocks is $\langle -k,k\rangle$. Suppose that you’ve taken $n$ steps. There is a unique even integer $2k$ such that $$2k(2k+1)<n\le(2k+2)(2k+3)\;;$$ at this point you’ve gone through $k$ blocks plus an additional $n-2k(2k+1)$ steps. After some straightforward but slightly tedious algebra we find that you’re at $$\begin{cases} \langle n-4k^2-3k,k\rangle,&\text{if }2k(2k+1)<n\le(2k+1)^2\\ \langle k+1,4k^2+5k+1-n\rangle,&\text{if }(2k+1)^2<n\le 2(k+1)(2k+1)\\ \langle 4k^2+7k+3-n,-k-1\rangle,&\text{if }2(k+1)(2k+1)<n\le4(k+1)^2\\ \langle -k-1,n-4k^2-9k-5\rangle,&\text{if }4(k+1)^2<n\le2(k+1)(2k+3)\;. \end{cases}$$ To find $k$ easily, let $m=\lfloor\sqrt n\rfloor$. If $m$ is odd, $k=\frac12(m-1)$. If $m$ is even, and $n\ge m(m+1)$, then $k=\frac{m}2$; otherwise, $k=\frac{m}2-1$.
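If you just need the coordinates programmatically, the run-length pattern $R^1D^1L^2U^2R^3D^3\ldots$ translates directly into a short generator (a Python sketch, mirroring the block structure above rather than the closed-form cases):

    from itertools import islice

    def spiral():
        # yields (0,0), (1,0), (1,-1), (0,-1), (-1,-1), (-1,0), (-1,1), (0,1), ...
        x, y = 0, 0
        yield x, y
        moves = [(1, 0), (0, -1), (-1, 0), (0, 1)]   # R, D, L, U
        run, d = 1, 0
        while True:
            for _ in range(2):                       # each run length is used twice
                dx, dy = moves[d]
                for _ in range(run):
                    x, y = x + dx, y + dy
                    yield x, y
                d = (d + 1) % 4
            run += 1

    print(list(islice(spiral(), 12)))   # matches the 1..12 labels in the question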
H: Understanding Riemann sums I am trying to understand Riemann sums. As far as I can understand, we have $\Delta x$ which is $\frac{b-a}{n}$, and $n$ is the number of subintervals into which I want to divide $[a,b]$. But I still don't understand how to calculate $C_i$. For example I have a function in my book which is: $$f(x) = \sqrt{x}$$ And it says that the right point of the rectangle is: $$c_i(right) = \frac{i^2}{n^2}$$ And the left point is: $$c_i(left) = \frac{(i-1)^2}{n^2}$$ So $c_i(right) - c_i(left)$: $$\Delta x = \frac{2i-1}{n^2}$$ I don't understand why. I have another example where $\Delta x = \frac{2}{5}$ and $c_i = \frac{2}{5}i$. I understand that, but I don't see how it works here. AI: We want to approximate the area under the graph with easy-to-calculate areas. The way we do this is by constructing rectangles that approximate the area. To construct such a rectangle, we "sample" the value of the function on the interval in question and use that value to approximate the value of the function on the entire interval. If $f(x)$ is continuous and the interval is small, then the values of $f(x)$ will not change much over the interval, so this will be a good approximation. There are many ways of doing the "sampling". The two most common are by doing a "left sample" (always take the leftmost point of the interval as your sampling point), and doing a "right sample" (always take the rightmost point of the interval as your sampling point). These are often called the Left Riemann Sum and the Right Riemann Sum, respectively. Our approximation will then be a rectangle of height $f(x^*)$ where $x^*$ is the sampling point, and of base the length of the interval. Their product is the area of our approximating rectangle. So what we do is we divide the interval $[a,b]$ into a bunch of smaller intervals, and approximate the function using these "rectangles" in each of the smaller intervals. As we make the intervals smaller and smaller and smaller, the approximation will get better and better and better. It is a theorem that if the function is continuous, then it doesn't matter how we do the sampling or how we divide the big interval into smaller ones, as long as the "mesh size" (the length of the largest subinterval we select) gets smaller and smaller and smaller and goes to zero, the limit will always be the same. So you are looking at the special construction in which we take $[a,b]$ and we divide it into $n$ equal subintervals, each of which will have length $\frac{b-a}{n}$. Call this $\Delta x$, because if we write the points we are selecting to break up the interval as $$a = x_0 \lt x_1 \lt x_2 \lt \cdots \lt x_n = b$$ then the increment to get from $x_i$ to $x_{i+1}$ is precisely $\Delta x = \frac{b-a}{n}$ ($\Delta x$ is pronounced "the increase in $x$" or "the change in $x$"). If we use the left sampling method, we obtain what is called the "Left Riemann Sum with $n$ equal subintervals". Let's call it $L(n)$ ($f$ and $[a,b]$ are understood from context). How much is each rectangle? The first rectangle has height $f(x_0)$ and base $\Delta x$; the second has height $f(x_1)$ and base $\Delta x$; and so on, until the $n$th rectangle that has height $f(x_{n-1})$ and base $\Delta x$. So $$\begin{align*} L(n) &= f(x_0)\Delta x + f(x_1)\Delta x + \cdots + f(x_{n-1})\Delta x\\ &= \sum_{i=0}^{n-1}f(x_i)\Delta x. \end{align*}$$ If we use the right sampling method we get the "Right Riemann Sum with $n$ equal subintervals", let's call it $R(n)$. 
we have: $$\begin{align*} R(n) &= f(x_1)\Delta x + f(x_2)\Delta x + \cdots + f(x_n)\Delta x\\ &= \sum_{i=1}^n f(x_i)\Delta x.\end{align*}$$ For example, for $f(x) = \sqrt{x}$ and $[a,b] = [0,1]$, dividing the interval into $n$ equal subintervals means marking off the points $0=\frac{0}{n}$, $\frac{1}{n}$, $\frac{2}{n},\ldots,\frac{n-1}{n}$, and $\frac{n}{n} = 1$. Then $\Delta x = \frac{1-0}{n} = \frac{1}{n}$. The left hand Riemann sum would then be $$\begin{align*} L(n) &= \sum_{i=0}^{n-1}f\left(\frac{i}{n}\right)\Delta x\\ &= \sum_{i=0}^{n-1}\sqrt{\frac{i}{n}} \frac{1}{n}\\ &= \frac{1}{n}\sum_{i=0}^{n-1}\sqrt{\frac{i}{n}}\\ &= \frac{1}{n\sqrt{n}}\sum_{i=0}^{n-1}\sqrt{i}\\ &= \frac{1}{n^{3/2}}\left( \sqrt{0} + \sqrt{1} + \sqrt{2}+\cdots + \sqrt{n-1}\right). \end{align*}$$ If you look at the graph of $y=\sqrt{x}$, you will see that this is an underestimate of the area under the graph, because all of these rectangles lie completely under the graph. The right hand Riemann sum will be $$R(n) = \frac{1}{n^{3/2}}\left(\sqrt{1}+\sqrt{2}+\cdots + \sqrt{n-1}+\sqrt{n}\right);$$ if you look at the graph of $y=\sqrt{x}$, you will see that this is an overestimate of the area under the graph. So we know that the true value of the area, $\int_0^1\sqrt{x}\,dx$ lies somewhere between $L(n)$ and $R(n)$.
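To see the two sums squeeze the true area numerically, here is a short Python sketch for $f(x)=\sqrt x$ on $[0,1]$:

    import math

    def left_riemann(f, a, b, n):
        dx = (b - a) / n
        return dx * sum(f(a + i * dx) for i in range(n))

    def right_riemann(f, a, b, n):
        dx = (b - a) / n
        return dx * sum(f(a + i * dx) for i in range(1, n + 1))

    for n in (10, 100, 1000):
        print(n, left_riemann(math.sqrt, 0, 1, n), right_riemann(math.sqrt, 0, 1, n))
    # both columns approach 2/3, the exact value of the integral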
H: Equalizing percentages I am having difficulty with the following problem: Mark was formerly paid a salary of $400 per week and 8% commission on his total sales. Later Mark was given a 20% salary cut and his commission rate was increased to 10%. What is the smallest amount of sales Mark has to make to earn as much under the new plan as he did with the old? (Ans = 4000) Here is how I am attempting it. Mark's new salary is $320 and his commission rate is 10%. So he needs 80 dollars more to be back to his original salary, so he needs a sale of $800. Now, beyond that, the previous commission income must be matched against the new commission income, and I don't know how to do that. I guess that's why I am not getting the right answer. Any suggestions that could help me out. AI: Your statement that He needs 80 dollars more to be back to his original salary is incorrect; recall that his original salary wasn't just \$400, but also included a commission of 8%. If $d$ is the number of dollars in sales he needs to make so that his total salary would be the same under both his new plan and his old plan, instead of solving $$320+(0.1\times d)=400$$ (which is what you proposed), we need to solve $$320+(0.1\times d)=400+(0.08\times d)$$ This equation becomes $$(0.1\times d)-(0.08\times d)= 400-320$$ $$0.02\times d = 80$$ $$d=80\times\frac{1}{0.02}=80\times 50=4000,$$ which is the correct answer.
H: Restricting Line Bundles to Hypersurfaces? The Adjunction Formula is given to be $K_V = (K_X \otimes [V])_V$ where $K_V$ is the canonical class of $V\subset X$ and $K_X$ that of $X$, and $[V]$ denotes the line bundle associated to $V$. Now say that, instead of the canonical bundle on $V$, I'm interested in some general line bundle $[D]$. Is there some equivalent formula like $[D]_V = ([D]_X \otimes [V])_V$? Or how do you actually restrict a line bundle on $X$ to $V$? How do their Chern classes relate? AI: What kind of a formula do you want? The formula you posit for $[D]$ doesn't make sense as written: if we restrict $[D]$ to $V$, we get $[D]_V$; that's a tautology. The reason that the adjunction formula has a more complicated shape is precisely because the restriction of the canonical bundle of $X$ to $V$ is not the canonical bundle of $V$. The Chern class of $[D]_V$ is precisely the intersection of (a generic representative of the linear equivalence class of) the Chern class of $[D]$ with $V$. (If $s$ is a generic section of $D$, assuming it admits one, then the Chern class of $[D]$ is precisely the zero locus of $s$. If we restrict $[D]$ to $V$, then $s_{|V}$ will be a section of $[D]_V$, and its zero locus will be the intersection of the zero locus of $s$ with $V$.) Based on your question, it seems that you are confused about something at a more basic level. Perhaps asking a more specific question would help.
H: Find the modular inverse of $19\pmod{141}$ I'm doing a question that states to find the inverse of $19 \pmod {141}$. So far this is what I have: Since $\gcd(19,141) = 1$, an inverse exists, so we can use the Euclidean algorithm to solve for it. $$ 141 = 19\cdot 7 + 8 $$ $$ 19 = 8\cdot 2 + 3 $$ $$ 8 = 3\cdot 2 + 2 $$ $$ 3 = 2\cdot 1 + 1 $$ $$ 2 = 2\cdot 1 $$ The textbook says that the answer is 52 but I have no idea how they got the answer and am not sure if I'm on the right track. An explanation would be appreciated! Thanks! AI: You're on the right track; as mixedmath says, you now have to "backtrack" your steps like this: $$\begin{eqnarray}1\;\;\;\;&&=3-2\cdot 1\\ 1=&3-(8-3\cdot 2)\cdot 1&=3\cdot 3-8\\ 1=&(19-8\cdot 2)\cdot 3-8&=19\cdot 3-8\cdot 7\\ 1=&19\cdot 3-(141-19\cdot 7)\cdot 7&=19\cdot52-141\cdot 7\end{eqnarray}$$ This last equation, when taken modulo 141, gives $1\equiv 19\cdot 52\bmod 141$, demonstrating that 52 is the inverse of 19 modulo 141.
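For readers who want to automate the back-substitution, here is a minimal sketch of the extended Euclidean algorithm (Python 3.8+ also exposes this directly as `pow(19, -1, 141)`):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(19, 141)
print(g, x % 141)   # 1 52 -- indeed 19 * 52 = 988 = 7 * 141 + 1
```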
H: First order ordinary differential equations involving powers of the slope Are there any general approaches to differential equations like $$x-x\ y(x)+y'(x)\ (y'(x)+x\ y(x))=0,$$ or that equation specifically? The problem seems to be the term $y'(x)^2$. Solving the equation for $y'(x)$ like a quadratic equation gives some expression $y'(x)=F(y(x),x)$, where $F$ is "not too bad" as it involves small polynomials in $x$ and $y$ and roots of such objects. That might be a starting point for a numerical approach, but I'm actually more interested in theory now. $y(x)=1$ is a stationary solution. Plugging in $y(x)\equiv 1+z(x)$ and taking a look at the new equation makes me think functions of the form $\exp{(a\ x^n)}$ might be involved, but that's only speculation. I see no symmetry whatsoever and dimensional analysis fails. AI: Hint: $x-xy+y'(y'+xy)=0$ $(y')^2+xyy'+x-xy=0$ $(y')^2=x(y-1-yy')$ $x=\dfrac{(y')^2}{y-1-yy'}$ Follow the method in http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=234: Let $t=y'$. Then $\left(1+\dfrac{t^3(t-1)}{y^2(t-1)+1}\right)\dfrac{dy}{dt}=\dfrac{2t^2}{y(1-t)-1}+\dfrac{yt^3}{y(1-t^2)-1}$
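Since the question mentions the numerical route as a possible starting point, here is a rough sketch of it: solve the equation as a quadratic in $y'$ and integrate one branch. The choice of the "+" branch and the initial data below are arbitrary assumptions, purely for exploration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# (y')^2 + x*y*y' + x - x*y = 0, solved as a quadratic in y'; take the "+" root
def rhs(x, y):
    disc = (x * y) ** 2 - 4 * x * (1 - y)   # discriminant of the quadratic in y'
    return (-x * y + np.sqrt(np.maximum(disc, 0.0))) / 2

sol = solve_ivp(rhs, (1.0, 3.0), [1.2], rtol=1e-8)
print(sol.y[0, -1])   # value of this branch's solution at x = 3
```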
H: Is it true that $\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}$? I came across an equality, which states that if $D\subset\mathbb{R}^n$, $n\geq 2$, is compact, then for each $f\in C(D)$ we have the following equality $$\max\limits_D |f(x)|=\max\limits\{|\max\limits_D f(x)|, |\min\limits_D f(x)|\}.$$ Actually I cannot judge whether it is right. Can anyone tell me if it is right, and if so, how to prove it? Thanks a lot. AI: First, note that the $\max$ and $\min$ here exist: $f$ is continuous and $D$ is compact, which guarantees both. Now observe that if we have $a \leq y \leq b$, then $\vert y \vert \leq \max\{\vert a \vert, \vert b \vert\}$, where $a,b,y \in \mathbb{R}$. Hence, $$\min_D f(x) \leq f(x) \leq \max_D f(x) \implies \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}, \, \forall x$$ Hence, we have that $$\max_D \vert f(x) \vert \leq \max\{\vert \max_D f(x) \vert, \vert \min_D f(x) \vert\}$$ Now we need to prove the inequality the other way around. Note that we have $$\vert f(x) \vert \leq \max_D \vert f(x) \vert$$ This implies $$-\max_D \vert f(x) \vert \leq f(x) \leq \max_D\vert f(x) \vert$$ This implies $$-\max_D \vert f(x) \vert \leq \max_D f(x) \leq \max_D\vert f(x) \vert$$ $$-\max_D \vert f(x) \vert \leq \min_D f(x) \leq \max_D\vert f(x) \vert$$ Hence, we have that $$\vert \max_D f(x) \rvert \leq \max_D\vert f(x) \vert$$ $$\vert \min_D f(x) \rvert \leq \max_D\vert f(x) \vert$$ The above two can be put together as $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} \leq \max_D\vert f(x) \vert$$ Hence, we can now conclude that $$\max \{\vert \max_D f(x) \rvert, \vert \min_D f(x) \rvert \} = \max_D\vert f(x) \vert$$
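A quick numerical illustration on a compact grid; the test function and grid here are arbitrary choices:

```python
import numpy as np

# sample a continuous function on a grid over the compact square [0, 1]^2
x = np.linspace(0, 1, 201)
X, Y = np.meshgrid(x, x)
F = np.sin(3 * X) * np.cos(5 * Y) - 0.3

lhs = np.max(np.abs(F))
rhs = max(abs(F.max()), abs(F.min()))
print(lhs, rhs, np.isclose(lhs, rhs))   # the two sides agree
```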
H: Flows of left-invariant vector fields on a Lie Group Let $G$ be a connected Lie group with identity $e$ and let $g\in G$. Must there exist a left-invariant vector field $X_g = dL_g(v)$ (for some $v \in \mathfrak g$) such that the flow $\phi_t(e)$ of $X$ through $e$ passes through $g$? I believe this is equivalent to the question of whether the exponential map from $\mathfrak g$ to $G$ is surjective. AI: No. The equivalence you describe holds, and it's well-known that the exponential map fails to be surjective for $\text{SL}_2(\mathbb{R})$. Any $g$ with this property necessarily has a square root, which $\left[ \begin{array}{cc} -1 & 1 \\\ 0 & -1 \end{array} \right]$ doesn't. (Since any square root in $\text{SL}_2(\mathbb{R})$ has determinant $1$, it must have eigenvalues $\pm i$, so in particular it is diagonalizable.) The exponential map is surjective if $G$ is either compact or nilpotent.
H: Find ring morphism from ring R to ring S I am trying to find a method to find ring morphisms between two rings. I know the definition and all, but I am unsure how to go about finding the ring morphisms between $\mathbb{Z}$ and $\mathbb{Z}_6$. If anyone could help me figure that out, it would be great. Thanks in advance AI: The general problem is too abstract to deal with. The specific one, homomorphisms between $\mathbb{Z}$ and $\mathbb{Z}_6$, is much simpler: Because the additive group $\mathbb{Z}$ is generated by $1$, any ring homomorphism with domain $\mathbb{Z}$ is completely determined by the value of $f(1)$. So we just need to find out the possible values of $f(1)$ that will give a ring homomorphism $f\colon\mathbb{Z}\to\mathbb{Z}_6$. From the additive group point of view, any value for $f(1)$ will give you an additive homomorphism (because $\mathbb{Z}$ is the "free group" on one generator). Now, if $f(1) = a\in\mathbb{Z}_6$, then you also need $a(rs) = f(rs) = f(r)f(s) = (ar)(as) = a^2rs$. So that means that for every $r,s\in\mathbb{Z}$, we need $a^2rs \equiv ars\pmod{6}$. Taking $r=s=1$, we conclude that we need $a^2\equiv a\pmod{6}$. How many elements of $\mathbb{Z}_6$ have the property that $a^2=a$? (These are called idempotents). Only those can be the value of $f(1)$. And if $a^2\equiv a\pmod{6}$, then $f(1)=a$ will give you a homomorphism (verify that it is multiplicative). So the problem of determining all ring homomorphisms $\mathbb{Z}\to\mathbb{Z}_6$ (that do not need to send $1$ to $1$) is equivalent to the problem of determining all idempotents of $\mathbb{Z}_6$. Can you do the latter?
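A brute-force enumeration settles the last question:

```python
n = 6
idempotents = [a for a in range(n) if (a * a) % n == a]
print(idempotents)   # [0, 1, 3, 4] -- so there are exactly four ring homomorphisms Z -> Z_6
```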
H: Counting the ordered pairs $(A, B)$, where $A$ and $B$ are subsets of $S$ and $A$ is a proper subset of $B$: Let $S$ be a set of $n$ consecutive natural numbers. How to find the number of ordered pairs $(A, B)$, where $A$ and $B$ are subsets of $S$ and $A$ is a proper subset of $B$. The answer in my book is $3^n - 2^n$; why is this true? AI: Suppose the number of proper subsets of a set with $k$ elements is $PS(k)$. Then, once you fix a subset $B$ of $S$, the number of ordered pairs that have $B$ in the second coordinate will be $PS(|B|)$. So you just need to: Find what $PS(k)$ is in terms of $k$; and Find out how many subsets of size $k$ there are in a set of size $n$. Neither of these seems too hard... Can you figure them out?
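A brute-force check of the book's formula for small $n$:

```python
from itertools import combinations

def count_pairs(n):
    S = range(n)
    subsets = [frozenset(c) for k in range(n + 1) for c in combinations(S, k)]
    # "<" on frozensets is the proper-subset test
    return sum(1 for A in subsets for B in subsets if A < B)

for n in range(1, 6):
    print(n, count_pairs(n), 3 ** n - 2 ** n)   # the last two columns agree
```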
H: How do I factor trinomials of the format $ax^2 + bxy + cy^2$? Take, for example, the polynomial $15x^2 + 5xy - 12y^2$. AI: What you have is a homogeneous polynomial of degree $2$ in $x$ and $y$ (homogeneous means that the total degree is the same in each monomial). Divide through by $y^2$, and set $Z=\frac{x}{y}$. (This is called "de-homogeneization"). This will give you the one-variable degree $2$ polynomial $$aZ^2 + bZ + c.$$ If you know how to factor these (which is not hard), then after factoring $$aZ^2 + bZ + c = (rZ+s)(tZ+u)$$ you can multiply through by $y^2$, distributing one factor of $y$ into each factor on the right-hand side, to get back to the homogeneous form ("homogeneization"): $$ax^2 + bxy + cy^2 = (rx+sy)(tx+uy).$$ So it all comes down to factoring degree $2$ polynomials.
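One caveat worth flagging: the de-homogenization of the question's example, $15Z^2+5Z-12$, has discriminant $5^2+4\cdot 15\cdot 12=745$, which is not a perfect square, so that particular polynomial does not factor over the rationals. A short sympy sketch of the method; the factorable polynomial below is a made-up substitute:

```python
from sympy import symbols, factor

x, y = symbols('x y')

# a hypothetical example that does factor over the integers
print(factor(6 * x**2 + 7 * x * y + 2 * y**2))   # (2*x + y)*(3*x + 2*y), up to ordering

# the question's polynomial is irreducible over the rationals, so it comes back unchanged
print(factor(15 * x**2 + 5 * x * y - 12 * y**2))
```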
H: The most efficient way to tile a rectangle with squares? I was writing up a description of the Euclidean algorithm for a set of course notes and was playing around with one geometric intuition for the algorithm that involves tiling an $m \times n$ rectangle with progressively smaller squares, as seen in an animation linked from the Wikipedia article. I was looking over this animation and was curious whether or not the squares that you would place down in the course of this algorithm necessarily gave the minimum number of squares necessary to cover the entire rectangle. More formally: suppose that you are given an $m \times n$ rectangle, where $m, n \in \mathbb{N}$ and $m, n > 0$. Your goal is to tile this rectangle with a set of squares such that no two squares overlap. Given $m$ and $n$, what is the most efficient way to place these tiles? Is the tiling suggested by the Euclidean algorithm (that is, in an $m \times n$ rectangle, with $m \ge n$, always place an $n \times n$ square, then recurse on the remaining rectangle) always optimal? If not, is there a more efficient algorithm for this problem? I am reasonably sure that the Euclidean approach is correct, but I was having a lot of trouble formalizing the intuition with a proof due to the fact that there are a lot of different crazy ways you can try to place the squares. For example, I'm not sure how to have the proof handle the possibility that squares could be at angles, or could have side lengths in $\mathbb{R} - \mathbb{Q}$. Thanks! AI: Euclid doesn't always minimize the number of squares. E.g., with an $8\times9$ rectangle, Euclid says use an 8-square and 8 1-squares, 9 squares in all. But you can do it with a 5, two 4s, a 3, a 2, and two 1s, making 7 squares total. You put the 5 in a corner, then put the 4s in the corners that have room for them; that leaves a $3\times5$ rectangle to fill, which you can do by Euclid.
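For experimentation, a minimal sketch of the square count produced by the Euclidean tiling, which reproduces the $8\times9$ count of $9$ from the answer:

```python
def euclid_square_count(m, n):
    """Number of squares the greedy/Euclidean tiling uses on an m x n rectangle."""
    count = 0
    while m > 0 and n > 0:
        if m < n:
            m, n = n, m
        count += m // n      # lay down m // n squares of side n
        m %= n
    return count

print(euclid_square_count(8, 9))   # 9, versus the 7-square tiling described above
```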
H: Round a value in a set of numbers Given an ordered set of numbers, $S=\{1.3, 1.7, 1.9, 2.8\}$, I would like to know how I can mathematically define a function that rounds a value to the nearest number in the set $S$. For example, if I give the value 1.72, I'll receive back the number 1.7. AI: One possible candidate is $$f(x) = \min\Bigl(\operatorname*{arg\,min}_{y \in S} \vert x-y\vert\Bigr),$$ where the outer $\min$ breaks a tie by picking the smaller of two equally near elements.
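In code the same function is a one-liner; sorting first makes ties resolve to the smaller element, matching the outer $\min$:

```python
def round_to_set(x, S):
    # min over sorted(S) returns the first (smallest) of equally near elements
    return min(sorted(S), key=lambda y: abs(x - y))

print(round_to_set(1.72, {1.3, 1.7, 1.9, 2.8}))   # 1.7
```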
H: How to convert a series to an equation? Possible Duplicate: Value of $\sum\limits_n x^n$ I don't know the technical language for what I'm asking, so the title might be a little misleading, but hopefully I can convey my purpose to you just as well without. Essentially I'm thinking of this: the series $4^n + 4^{n-1} + \cdots + 4^{n-n}$. I suppose this is the summation of $4^k$ as $k$ runs from $n$ down to $0$. But is there any way to express this as a pure equation, not as a summation of a series? If so, how do you figure out how to convert it? AI: In general, for $x\neq 1$ it is true that $$\sum_{k=0}^nx^k=1+x+\cdots+x^n=\frac{x^{n+1}-1}{x-1}.$$ So, in your case in particular, we have that $$\sum_{k=0}^n4^{n-k}=4^n+\cdots+4+1=1+4+\cdots+4^n=\sum_{k=0}^n4^k=\frac{4^{n+1}-1}{3}.$$ Alternatively, one could pull out a factor of $4^n$ from all terms, and compute $$\sum_{k=0}^n4^{n-k}=4^n\sum_{k=0}^n(\tfrac{1}{4})^k=4^n\cdot\frac{(\frac{1}{4})^{n+1}-1}{(\frac{1}{4})-1}=4^n\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{\frac{3}{4}}=4^{n+1}\cdot\frac{\frac{4^{n+1}-1}{4^{n+1}}}{3}=\frac{4^{n+1}-1}{3}.$$
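A one-line check of the closed form against the direct sum:

```python
# 4**(n+1) - 1 is always divisible by 3, so integer division is exact here
for n in range(8):
    assert sum(4 ** k for k in range(n + 1)) == (4 ** (n + 1) - 1) // 3
print("closed form verified for n = 0..7")
```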
H: Find Min and Abs Max with $\ln x$ I know how to do a quadratic version, but how do I find the absolute minimum and absolute maximum values of $f$ on the given interval? $f(x) = x - \ln 2x$, $[\frac{1}{2}, 2]$. I keep getting the wrong answers. AI: The derivative is $1-\frac{2}{2x}=1-\frac{1}{x}$. It is hard to go wrong after that. The contenders are $x=1/2$, $x=1$, and $x=2$.
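For completeness, since the answer leaves the evaluation to the reader: the derivative vanishes at $x=1$, and $$f\left(\tfrac12\right)=\tfrac12,\qquad f(1)=1-\ln 2\approx 0.3069,\qquad f(2)=2-\ln 4\approx 0.6137,$$ so the absolute minimum is $1-\ln 2$ at $x=1$ and the absolute maximum is $2-\ln 4$ at $x=2$.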
H: How to find the sum of this infinite series. How to find the sum of the following series? Kindly guide me about the general term; then I can take a shot at summing it up. $$1 - \frac{1}{4} + \frac{1}{6} -\frac{1}{9} +\frac{1}{11} - \frac{1}{14} + \cdots$$ AI: Your sum is $$S = \dfrac11 - \dfrac14 + \dfrac16 - \dfrac19 + \dfrac1{11} - \dfrac1{14} \pm \cdots = \sum_{k=0}^{\infty} \left(\dfrac1{5k+1} - \dfrac1{5k+4}\right)$$ Consider $f(t) = 1 - t^3 + t^5 - t^8 \pm \cdots $ for $\vert t \vert < 1$. Grouping the terms in pairs gives two geometric series, which can be summed for $\vert t \vert < 1$. Summing it up we get that $f(t) = \dfrac{1-t^3}{1-t^5}$ for $\vert t \vert <1$. Now integrate $f(t)$ term by term from $0$ to $1$ and integrate $\dfrac{1-t^3}{1-t^5}$ from $0$ to $1$ to get the answer, i.e. we have $$\sum_{k=0}^{\infty} \left(\dfrac1{5k+1} - \dfrac1{5k+4}\right) = \int_0^1 \dfrac{1-t^3}{1-t^5} dt$$ A similar sum using the same idea is worked out here.
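A numerical cross-check of the two sides:

```python
from scipy.integrate import quad

# partial sum of the series vs. the integral representation from the answer
partial = sum(1 / (5 * k + 1) - 1 / (5 * k + 4) for k in range(100000))
integral, _ = quad(lambda t: (1 - t**3) / (1 - t**5), 0, 1)
print(partial, integral)   # both are approximately 0.8648
```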
H: How do you do linearization Questions? This is new to me; I'm trying to work on a linearization problem. I found an online problem: find the linearization $L(x)$ of the function at $a$, where $f(x) = x^4 + 2x^2$ and $a = -1$. What steps do I take to approach this solution? AI: The linearization is $px+q$, where $y=px+q$ is the equation of the tangent line to $y=f(x)$ at $x=a$. Now I expect you can do the calculation. It is not absolutely clear in what form you are expected to give the answer. Perhaps it is $px+q$, as above. Or perhaps it is $c(x-a)+d$.
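Carrying out the calculation for this example: $f(-1)=1+2=3$ and $f'(x)=4x^3+4x$, so $f'(-1)=-8$, giving $$L(x)=f(a)+f'(a)(x-a)=3-8(x+1)=-8x-5.$$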
H: Simplest method to find $5^{20}$ modulo $61$ What is the simplest method to go about finding the remainder of $5^{20}$ divided by $61$? AI: $5^3 = 125 \equiv 3 \mod 61$, so $5^5 \equiv 25 \times 3 = 75 \equiv 14 \mod 61$, $5^{10} \equiv 14^2 = 196 \equiv 13 \mod 61$, $5^{20} \equiv 13^2 = 169 \equiv 47 \mod 61$.
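The repeated-squaring chain above is exactly what Python's built-in three-argument `pow` computes, which gives a one-line check:

```python
print(pow(5, 20, 61))   # 47, by fast modular exponentiation
```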
H: Counting descending sequences of positive integers The complete question I would like to answer is: Given positive integers $k,n$, how many descending lists of non-negative integers $(x_1, x_2, \ldots, x_k)$ are there such that $\sum_{i=1}^k x_i = n$? As a quick first insight, it's clear that the general solution will follow if we can find the solution for $k=n$. It would take a long time to explain the buildup, but I have reduced the problem to calculating the value $f(n,n)$ of a function recursively defined by: $$ f(n,m) = f(n,m-1) + f(n-m,\min(m,n-m))~, $$ $$ f(n,2) = 1+\left\lfloor\frac{n}{2}\right\rfloor~~;~f(n,1) = f(n,0) = 1~. $$ For a less clunky reformulation, posit that $f(a,b) = f(a,\min(a,b))$ for each pair $a,b$, and we have $f(n,m) = f(n,m-1) + f(n-m,m)$. Now my attempt so far is repeatedly applying the recursive rule to the first term to yield $$ f(n,m) = 1 + \left\lfloor\frac{n}{2}\right\rfloor + \sum_{i=3}^m f(n-i,i) ~, $$ which may itself be repeated to form a summation over no more than $\left\lfloor\dfrac{n-2}{3}\right\rfloor$ variables, but the computations involved (summations over products of floor functions) become extremely tedious. Does anyone have suggestions as to how to proceed down this particular attempt at solution, a better method of solution, or a different angle on the problem entirely? For the curious, this number of descending lists is part of a coefficient in the expansion of the expression $D^n[f^k]$, the $n^{\text{th}}$ derivative of the $k^{\text{th}}$ power of some function. Attempting this solely out of curiosity. Thanks ahead of time for reading this mess! AI: Assuming that you want your lists to be strictly decreasing, this is the number of partitions of $n$ into $k$ distinct parts; otherwise it's the number of partitions of $n$ into $k$ parts. The former is given by the triangle A060016 at OEIS, the second by the triangle A008284. At those links you'll find some references, and the latter also has some relevant generating functions.
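The question's recursion is, in fact, the standard recurrence for $p(n,m)$, the number of partitions of $n$ into parts of size at most $m$, so $f(n,n)$ is the total partition count $p(n)$. A memoized sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n, m):
    """Partitions of n into parts of size at most m (the question's recursion)."""
    m = min(m, n)              # parts cannot exceed n
    if m <= 1:
        return 1
    if m == 2:
        return 1 + n // 2
    return f(n, m - 1) + f(n - m, m)

print([f(n, n) for n in range(1, 11)])   # [1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```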
H: To find the square root of a polynomial My question is: Find the value of $k$ such that $$4x^6 - 24x^5 + 20x^4 + 68x^3 -44x^2 - 40x + k$$ is a perfect square. I have made an edit; sorry for the inconvenience. Any help solving this question would be greatly appreciated. AI: If the goal is to find $k$ so that the polynomial is a perfect square, start by noting that it must be the square of a cubic polynomial: $$4x^6 - 24x^5 + 20x^4 + 68x^3 -44x^2 - 40x + k=(ax^3+bx^2+cx+d)^2\;.$$ Clearly this immediately requires that $a=2$ (up to an overall sign, which does not change the square). Now $(2x^3+bx^2+cx+d)^2$ is $$4x^6+4bx^5+(4c+b^2)x^4+(4d+2bc)x^3+(2bd+c^2)x^2+2cdx+d^2\;,$$ so you have the following system of equations: $$\left\{\begin{align*} &4b=-24\\ &4c+b^2=20\\ &4d+2bc=68\\ &2bd+c^2=-44\\ &2cd=-40\\ &d^2=k \end{align*}\right.$$ Clearly $b=-6$; the second equation then allows you to find $c$, and you can then use the third, fourth, or fifth to find $d$ and then $k$. (To play safe, you should verify that the third, fourth, and fifth equations all yield the same value of $d$.)
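Finishing the computation: the second equation gives $4c+36=20$, so $c=-4$; the third gives $4d+48=68$, so $d=5$; the fourth and fifth check out ($2bd+c^2=-60+16=-44$ and $2cd=-40$), and therefore $$k=d^2=25,\qquad 4x^6-24x^5+20x^4+68x^3-44x^2-40x+25=\left(2x^3-6x^2-4x+5\right)^2.$$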
H: How do I solve an overdetermined linear system of partial differential equations? I have two partial differential equations that I want to solve (for $\sigma$) by finite differences: $$-\frac{\partial \sigma}{\partial x}(x,y,t) -p(x,y,t)\frac{\partial \sigma}{\partial t}(x,y,t) = p(x,y,t)$$ $$-\frac{\partial \sigma}{\partial y}(x,y,t) -q(x,y,t)\frac{\partial \sigma}{\partial t}(x,y,t) = q(x,y,t)$$ where $p$ and $q$ are known. This is an overdetermined system (one unknown, two equations), so I am seeking a least-squares-error (LSE) solution. I know that for the inversion problem $d = \mathbf{F}m$, where $\mathbf{F}$ is a forward operator, the LSE solution satisfies $\mathbf{F}^T\mathbf{F}m = \mathbf{F}^Td$. In my case $\mathbf{d}=[p(x,y,t),q(x,y,t)]^T$, $\mathbf{m} = [\sigma]$ and $\mathbf{F} =[-\frac{\partial}{\partial x}-p\frac{\partial}{\partial t}, -\frac{\partial}{\partial y}-q\frac{\partial}{\partial t}]^T$? How do I solve this by finite differences? Thanks in advance for any answers! EDIT: Thanks to Andrew, we can instead solve the easier equation $qs_x=ps_y$ for each discrete $t$ value. I have $m$ discrete $x$ values, $x_i$, $1\le i\le m$, and $n$ discrete $y$ values, $y_j$, $1\le j\le n$. My boundary condition is $s(m/2,n/2)=t$. I could of course just integrate outwards from the center point, but this would yield a different result depending on the path taken. Therefore I am looking for an LSE solution to this equation. How can I do this? AI: The problem can be simplified a bit. Putting $s(x,y,t)=\sigma(x,y,t)+t$ turns the system into a homogeneous one: $$ -s_x-ps_t=0, $$ $$ -s_y-qs_t=0. $$ Excluding $s_t$ we have $qs_x=ps_y\;$.
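One way to realize the LSE solution numerically is to discretize $q\,\partial_x - p\,\partial_y$ as a matrix, append a row pinning the center value, and hand the stacked system to a least-squares solver. A rough sketch, assuming a uniform grid, first-order forward differences, and placeholder constant data for $p$ and $q$:

```python
import numpy as np

m, n, h = 20, 20, 1.0
p = np.ones((m, n))            # hypothetical data; replace with the real p(x, y)
q = np.ones((m, n))            # hypothetical data; replace with the real q(x, y)

def forward_diff(k, h):
    D = (np.eye(k, k, 1) - np.eye(k)) / h   # one-sided forward difference
    D[-1, :] = 0.0                          # no forward neighbor on the last line
    return D

# s is flattened row-major, with x as the row index and y as the column index
Dx = np.kron(forward_diff(m, h), np.eye(n))
Dy = np.kron(np.eye(m), forward_diff(n, h))

A = np.diag(q.ravel()) @ Dx - np.diag(p.ravel()) @ Dy   # rows enforce q*s_x - p*s_y = 0
b = np.zeros(m * n)

t0 = 1.0                                    # the pinned value s(m/2, n/2) = t
pin = np.zeros(m * n)
pin[(m // 2) * n + n // 2] = 1.0
A = np.vstack([A, pin])
b = np.append(b, t0)

s, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm least-squares solution
s = s.reshape(m, n)
print(s[m // 2, n // 2])                    # ~ t0
```

Note that `np.linalg.lstsq` minimizes $\|\mathbf{F}m-d\|_2$ directly (via an SVD-based solver), so the normal equations $\mathbf{F}^T\mathbf{F}m=\mathbf{F}^Td$ never have to be formed explicitly.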
H: weak convergence in $L^p$ plus convergence of norm implies strong convergence Having trouble with this problem. Any ideas? Let $\Omega$ be a measure space. Let $f_n$ be a sequence in $L^p(\Omega)$ with $1<p<\infty$ and let $f \in L^p(\Omega)$. Suppose that $$f_n \rightharpoonup f \text{ weakly in } \sigma(L^p,L^{p'})$$ and $$\|f_n\|_p \to \|f\|_p.$$ Prove that $\|f_n-f\|_p \to 0$. Also, can you come up with a counter-example for the $L^1$ case? AI: Since $1<p<\infty$, the space $L^p(\Omega)$ is uniformly convex. This follows from Clarkson's inequalities. Now we use the following theorem, which can be studied in Brezis' book on functional analysis (chapter III). Theorem. Let $E$ be a uniformly convex Banach space, and let $\{x_n\}$ be a weakly convergent sequence in $E$, i.e. $x_n \rightharpoonup x$ for some $x \in E$. If $$\limsup_{n \to +\infty} \|x_n\| \leq \|x\|,$$ then $x_n \to x$ strongly in $E$. Try to construct a counter-example in the $\sigma(L^1,L^\infty)$ topology.
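As for the requested $L^1$ counter-example, one standard choice (worth checking against the hypotheses of the theorem above) is $\Omega=(0,2\pi)$ with $f_n(x)=1+\sin(nx)$ and $f\equiv 1$. By the Riemann–Lebesgue lemma, $f_n \rightharpoonup f$ in $\sigma(L^1,L^\infty)$; since $f_n\geq 0$, $$\|f_n\|_1=\int_0^{2\pi}\bigl(1+\sin nx\bigr)\,dx=2\pi=\|f\|_1,$$ yet $$\|f_n-f\|_1=\int_0^{2\pi}\lvert\sin nx\rvert\,dx=4\not\to 0.$$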