Prove that $ a^4b^4+ a^4c^4+b^4c^4\le3$ Let $a,b,c>0$ and $a^3+b^3+c^3=3$. Prove that $$ a^4b^4+ a^4c^4+b^4c^4\le3$$ My attempts: 1) $$ a^4b^4+ a^4c^4+b^4c^4\le \frac{(a^4+b^4)^2}{4}+\frac{(c^4+b^4)^2}{4}+\frac{(a^4+c^4)^2}{4}$$ 2) $$a^4b^4+ a^4c^4+b^4c^4\le3=a^3+b^3+c^3$$
By AM-GM and Schur we obtain $$\sum_{cyc}a^4b^4=\frac{1}{3}\sum_{cyc}(3ab)a^3b^3\leq\frac{1}{3}\sum_{cyc}(1+a^3+b^3)a^3b^3=$$ $$=\frac{1}{3}\sum_{cyc}(a^3b^3+a^6b^3+a^6c^3)=\frac{1}{9}\sum_{cyc}a^3\sum_{cyc}a^3b^3+\frac{1}{3}\sum_{cyc}(a^6b^3+a^6c^3)=$$ $$=\frac{1}{9}\sum_{cyc}(4a^6b^3+4a^6c^3+a^3b^3c^3)=$$ $$=\frac{1}{9}\left(\sum_{cyc}(a^9+3a^6b^3+3a^6c^3+2a^3b^3c^3)-\sum_{cyc}(a^9-a^6b^3-a^6c^3+a^3b^3c^3)\right)\leq$$ $$\leq\frac{1}{9}\sum_{cyc}(a^9+3a^6b^3+3a^6c^3+2a^3b^3c^3)=\frac{1}{9}(a^3+b^3+c^3)^3=3.$$ Here the first inequality is AM-GM in the form $3ab\leq1+a^3+b^3$, and the sum dropped in the last inequality is nonnegative by Schur's inequality applied to $a^3,b^3,c^3$.
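A quick numerical sanity check of the statement itself (of course not part of the proof): sample points on the constraint surface $a^3+b^3+c^3=3$ and verify the inequality.

```python
import random

# Sample (a, b, c) with a^3 + b^3 + c^3 = 3 and check
# a^4 b^4 + a^4 c^4 + b^4 c^4 <= 3 at each sample.
random.seed(0)
for _ in range(1000):
    a = random.uniform(0.01, 1.4)                          # keeps a^3 < 3
    b = random.uniform(0.01, 0.99 * (3 - a**3) ** (1 / 3))
    c = (3 - a**3 - b**3) ** (1 / 3)                       # forces the constraint
    lhs = a**4 * b**4 + a**4 * c**4 + b**4 * c**4
    assert lhs <= 3 + 1e-12
```

Equality holds at $a=b=c=1$, where both sides are $3$.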
{ "language": "en", "url": "https://math.stackexchange.com/questions/2589946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Australian Math Competition Geometry Problem $PQRS$ is a rectangle with centre $C$. $PQ$ has length $4$ and $PS$ has length $12$. Two circles, each of radius $1$, meet $PS$ at $U$ and $V$; $PU$ has length $1$ and $PV$ has length $4$. What is $PW$? I've tried this problem for days and tried to find answers elsewhere, but I can't do it. I tried getting areas of triangles to find altitudes and worked with trapeziums; I made lots of approaches but I always get stuck.
Drop a perpendicular from $C$ straight down to $PS$ (call the foot $A$), and draw a segment from $C$ to the left until it reaches $PQ$ (call its foot $B$). These form a rectangle with side lengths 2 and 6, because $C$ is the centre of rectangle $PQRS$ and so divides the rectangle's sides into halves. Also, notice that $BC$ is tangent to both circles, because they have the same height as this new rectangle. From there, we can calculate that the distance from the rightmost point of the right circle to line $CA$ is $6-(4+1)=1$: that point lies $PV+1=5$ from $P$ (the $1$ being the radius of the circle), while $A$ lies $6$ from $P$. Extend lines $AP$ and $BC$ one unit to the left, and create a larger rectangle. Notice now that if you extend line $CW$ one unit to the left as well, it intersects perfectly with the bottom-left corner of this rectangle. Call this point $W'$. As per the definition of a diagonal, this newly extended line $CW'$ divides the new rectangle exactly in half. And given that the circles are positioned so that they are horizontally symmetrical, it is clear why the original line $CW$ divided the areas into halves. Now for the final calculation, we take the dimensions of this new larger rectangle. Its height is 2 and its width is 7. Therefore, the slope of the diagonal $CW'$ is $2/7$. Taking point $W'$ as the origin of a coordinate system, the coordinates of point $W$ are $(1, 2/7)$ and the coordinates of point $P$ are $(1, 0)$. It should then be clear that $PW = 2/7$! Sorry if the formatting or logic seems a bit janky; I'm a high schooler in South Korea and this is new for me :/
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Prove that $\sum_{n=1}^\infty f_n$ converges uniformly on $\mathbb{R}$ where $f_n$ is defined piecewise. Prove that the series of functions $\sum_{n=1}^\infty f_n$ converges uniformly on $\mathbb{R}$, where $$f_n: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases}0 \quad x \neq n \\\frac{1}{x} \quad x =n\end{cases}$$ My attempt: Let $x \in \mathbb{R}$ be fixed. Let $n \in \mathbb{N}$ with $n > x$. Then, it is clear that: $$\sum_{k=1}^n f_k(x) = \begin{cases}0 \quad x \notin \mathbb{N_0}\\\frac{1}{x} \quad x \in \mathbb{N_0}\end{cases}$$ Hence, letting $n \to \infty$, we have that the given series converges pointwise to the function $$f: \mathbb{R} \to \mathbb{R}: x \mapsto \begin{cases}0 \quad x \notin \mathbb{N_0}\\\frac{1}{x} \quad x \in \mathbb{N_0}\end{cases}$$ We now show uniform convergence. Let $x \in \mathbb{R}$ and $n \in \mathbb{N}$. If $x \in \mathbb{N_0}$, we consider two cases: (i) $n\geq x$: then $\left|\sum_{k=1}^nf_k(x) - f(x)\right| = 0$ (ii) $n < x$: then $\left|\sum_{k=1}^nf_k(x) - f(x)\right| = |f(x)| = \frac{1}{x} < \frac{1}{n}$ If $x \notin \mathbb{N_0}$, then $\left|\sum_{k=1}^nf_k(x) - f(x)\right| = 0$ So, we have proven that $\forall n \in \mathbb{N}, \forall x \in \mathbb{R}: \left|\sum_{k=1}^nf_k(x) - f(x)\right| < \frac{1}{n} \to 0$ Let then $\epsilon > 0$. Choose $n_0: \forall n \geq n_0: \frac{1}{n} < \epsilon$. Then, for $n \geq n_0: \left|\sum_{k=1}^nf_k(x) - f(x)\right| < \frac{1}{n} < \epsilon$ So, the convergence is uniform. Is this correct?
You are mostly there. With the $f$ you have found as the limiting function, for any $x\in \mathbb{R}$, $$ \left|\sum_{k=1}^nf_k(x)-f(x)\right|\leq \frac{1}{n+1} $$ since if $x\not\in \mathbb{N}$ or if $x\in \{ 1,\dots,n\}$ the difference is zero. Otherwise, the worst this difference could be is the difference at $x=n+1$. This gives you $$ \sup_{x\in \mathbb{R}}\left|\sum_{k=1}^nf_k(x)-f(x)\right|\leq \frac{1}{n+1}\to 0 $$ as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate the sum of a series We know that $\sum_{k=1}^\infty \frac{1}{k^2+1} = \frac{1}{2}(\pi \coth(\pi) - 1)$. Now, how do we calculate the series $\sum_{k=1}^\infty \frac{1}{(k + x)^2+1}$ for $x\geq 0$?
One may write $$ \begin{align} \frac{2i}{(k + x)^2+1}&= \frac{1}{k+x-i}-\frac{1}{k+x+i} =\!\left(\frac{1}{k}-\frac{1}{k+x+i}\right)-\left(\frac{1}{k}-\frac{1}{k+x-i}\right) \end{align} $$ yielding $$ \sum_{k=1}^\infty \frac{1}{(k + x)^2+1}=\frac1{2i}\psi(x+1+i)-\frac1{2i}\psi(x+1-i),\qquad \text{Re}(x+1)>-1,\tag 1 $$ where we have used the digamma function, which satisfies $$ \psi(z+1)=-\gamma+\sum_{k=1}^\infty\left(\frac{1}{k}-\frac{1}{k+z} \right),\quad \text{Re}\,z>-1.\tag2 $$ For example, by putting $x=\frac12$ in $(1)$, using special values of $\psi$, one gets $$ \sum_{k=1}^\infty \frac{1}{\left(k + \frac12\right)^2+1}=\color{blue}{-\frac45+\frac \pi2 \tanh (\pi)}.\tag3 $$
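As a numeric cross-check of $(3)$, without any special functions (partial sums only; the truncated tail is of size about $1/K$):

```python
import math

# Partial sum of sum_{k>=1} 1/((k + 1/2)^2 + 1) versus -4/5 + (pi/2) tanh(pi).
target = -0.8 + (math.pi / 2) * math.tanh(math.pi)
K = 1_000_000
s = sum(1.0 / ((k + 0.5) ** 2 + 1.0) for k in range(1, K + 1))
# the neglected tail is roughly 1/K = 1e-6, well inside the tolerance
assert abs(s - target) < 1e-5
```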
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Non uniformly integrable sequence I am looking for a sequence of random variables $X_1, X_2, \cdots$ on $([0,1], \mathbb B, \lambda)$ such that $$X_n\to 0 \quad \text{a.s.}$$ $$EX_n\to 0$$ but such that $(X_n)_n$ is not uniformly integrable. I already showed that the sequence can't have $X_n\geq 0$ for all $n$. Can we maybe achieve it with linear combinations of indicator functions $\mathbb 1_{[\frac{i-1}{n}, \frac{i}{n}]}$ ?
Consider $$X_n := n 1_{(0,1/n)}-n 1_{(1/n,2/n)}$$ Then $X_n\to 0$ a.s. (for each $x\in(0,1]$ we have $X_n(x)=0$ as soon as $2/n<x$) and $EX_n = 1-1 = 0$ for every $n$. However, $E|X_n| = 2$ for every $n$, so for every $K$ and all $n>K$ one gets $E\big[|X_n|\,\mathbb 1_{\{|X_n|>K\}}\big] = 2$; hence $(X_n)_n$ is not uniformly integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Curl of a cross product with constant vector If $\mathbf a$ is a constant vector in the 3-dimensional space and $\mathbf s=x\mathbf e_x+y\mathbf e_y +z\mathbf e_z$, I want to show that $$\nabla \land \left(\mathbf a \land \mathbf s\right) = 2\mathbf a. $$ I have done as follows: $$\nabla \land \left(\mathbf a \land \mathbf s\right)=(\nabla \cdot \mathbf s)\mathbf a\ -\ (\nabla \cdot \mathbf a)\mathbf s=3\mathbf a\ -\ (\nabla \cdot \mathbf a)\mathbf s $$ But I am confused as to how the last part is computed. Could you explicitly show how $(\nabla \cdot \mathbf a)\mathbf s$ equals $\mathbf a$ or point out any other mistake?
We have the general formula $$\nabla \wedge ({\bf a}\wedge {\bf s}) = {\bf a}(\nabla \cdot {\bf s}) - {\bf s}(\nabla \cdot {\bf a}) + ({\bf s}\cdot \nabla){\bf a} - ({\bf a}\cdot \nabla){\bf s}.$$Now let's see each piece. * *Since ${\bf s} = (x,y,z)$, clearly $\nabla \cdot {\bf s} = 3$. *Since ${\bf a}$ is constant, we have $\nabla \cdot {\bf a} = 0$. *Since ${\bf a}$ is constant and ${\bf s}\cdot \nabla$ is a differential operator, $({\bf s}\cdot \nabla){\bf a} = {\bf 0}$. *This last part we compute directly, calling ${\bf a} = (a_1,a_2,a_3)$, acting componentwise as follows: $$({\bf a}\cdot \nabla){\bf s} = \left(a_1\partial_x+a_2\partial_y+a_3\partial_z\right)(x,y,z) = (a_1 \cdot 1, a_2 \cdot 1, a_3 \cdot 1) = {\bf a}.$$ So everything boils down to $$\nabla \wedge ({\bf a}\wedge {\bf s}) =3{\bf a} - {\bf 0} + {\bf 0} - {\bf a} = 2{\bf a},$$as wanted.
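A finite-difference spot-check of the final identity $\nabla\wedge(\mathbf a\wedge\mathbf s)=2\mathbf a$, with an arbitrary sample choice $\mathbf a=(1,2,3)$ (plain Python, no libraries):

```python
# Check curl(a x s) = 2a numerically at a sample point, a = (1, 2, 3).
def cross(p, q):
    return [p[1] * q[2] - p[2] * q[1],
            p[2] * q[0] - p[0] * q[2],
            p[0] * q[1] - p[1] * q[0]]

a = [1.0, 2.0, 3.0]
F = lambda r: cross(a, r)            # the field a x s evaluated at the point r

def curl(F, r, h=1e-6):
    def d(i, j):                     # dF_i / dx_j by central difference
        rp, rm = list(r), list(r)
        rp[j] += h
        rm[j] -= h
        return (F(rp)[i] - F(rm)[i]) / (2 * h)
    return [d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)]

c = curl(F, [0.3, -1.2, 2.5])
assert all(abs(ci - 2 * ai) < 1e-8 for ci, ai in zip(c, a))
```

Since the field is linear in $\mathbf s$, the central differences are exact up to rounding.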
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$i^*(\omega)$ vanishes on S $\Leftrightarrow$ A is tangent to $S$ If $\omega\in \Omega^2(\mathbb{R}^3)$ is a 2-form: $\omega=f\;dx\wedge dy+g\;dx\wedge dz+h\;dy \wedge dz$, and $S$ is a surface in $\mathbb{R}^3$, prove that $i^*(\omega)$ vanishes on $S$ $\iff$ $A=h\frac{\partial}{\partial x}-g\frac{\partial}{\partial y}+f\frac{\partial}{\partial z}$ is tangent to $S$. The only clue I have been able to find is that there exists this isomorphism: $$ \phi:\mathfrak{X}(\mathbb{R}^3)\longrightarrow \Omega^2(\mathbb{R}^3)$$ $$ A \longmapsto \phi(A),$$ where $\phi(A)(B,C)=(B\times C)\cdot A$. In this way $\phi(A)=h\;dx\wedge dy-g\;dx\wedge dz+f\;dy \wedge dz$, which is similar to what we are working with. Any idea?
By duality, $\omega$ corresponds to the vector field $(h,-g,f)$. Then $\iota^\ast(\omega)$ measures exactly the normal component of $(h,-g,f)$ with respect to $S$. With this in mind, there's nothing else to do. Explicitly, we identify $$F_1\partial_x + F_2\partial_y +F_3 \partial_z \leftrightarrow F_1 \,{\rm d}y\wedge {\rm d}z +F_2 \,{\rm d}z\wedge {\rm d}x + F_3\, {\rm d}x \wedge {\rm d}y.$$You can check that this is an isomorphism between $\mathfrak{X}(\Bbb R^3)$ and $\Omega^2(\Bbb R^3)$. This works for the following reason: if $\beta_2$ denotes that isomorphism, and $$\begin{align} {\rm id}\colon & \quad C^\infty(\Bbb R^3) \ni f \mapsto f \in \Omega^0(\Bbb R^3)\\ \beta_1:& \quad \mathfrak{X}(\Bbb R^3) \ni F_1\partial_x+F_2\partial_y+F_3\partial_z \mapsto F_1\,{\rm d}x+F_2\,{\rm d}y+F_3\,{\rm d}z \in \Omega^1(\Bbb R^3) \\ \beta_3:&\quad C^\infty(\Bbb R^3) \ni f \mapsto f\,{\rm d}x\wedge {\rm d}y\wedge {\rm d}z \in \Omega^3(\Bbb R^3),\end{align}$$then $\beta_1 \circ {\rm grad} = {\rm d} \circ {\rm id}$, $\beta_3\circ {\rm div} = {\rm d} \circ \beta_2$ and $\beta_2 \circ {\rm curl} = {\rm d} \circ \beta_1$. Also, if $F,G \in \mathfrak{X}(\Bbb R^3)$, we have $$\beta_1(F)\wedge \beta_1(G) = \beta_2(F \times G).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A stronger form of Bernoulli's inequality Let $x\geq 0$ be fixed. By Bernoulli's inequality we know that for all $n \in \mathbb{N}_0$, $$ (1+x)^n \geq 1+nx $$ Now, let $\alpha \geq 0$ be fixed as well. By a limit argument, we know that $$ (1+x)^n \geq 1+\alpha nx $$ for large $n$. But how large does $n$ have to be? More specifically, can we find some expression for $N \in \mathbb{N}$ depending on $x$ and $\alpha$ such that the inequality holds for all $n \geq N$? If it helps, $N$ does not necessarily have to be optimal. I am specifically interested in the case where $\alpha \in \mathbb{N}$, and even more specifically in the case $\alpha=2$.
If $n\ge 2$ then $$(1+x)^n\ge 1+nx+\frac{n(n-1)}{2}x^2.$$So you just need $\frac{(n-1)}{2}x>\alpha-1.$
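Concretely, the bound gives the explicit threshold $n>\frac{2(\alpha-1)}{x}+1$. A small sketch checking it (the helper name `threshold` is my own, not from the answer):

```python
import math

# From (1+x)^n >= 1 + n x + n(n-1)/2 x^2 (x >= 0, n >= 2), it suffices
# that (n-1) x / 2 > alpha - 1, i.e. n > 2 (alpha - 1) / x + 1.
def threshold(x, alpha):
    # smallest integer n >= 2 with (n - 1) * x / 2 > alpha - 1
    return max(2, math.floor(2 * (alpha - 1) / x) + 2)

x, alpha = 0.1, 2.0
N = threshold(x, alpha)
for n in range(N, N + 50):
    assert (1 + x) ** n >= 1 + alpha * n * x
```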
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
When does $\arcsin^2x+\arccos^2x=1$ for $x$ real or complex? Let $\arcsin$ be the compositional inverse of $\sin$ and $\arccos$ the compositional inverse of $\cos$. My question: is it possible to find $x$ for which $\arcsin^2x+ \arccos^2x=1$? Note: $x$ is a real or complex number.
You want $u^2+v^2=1$ and $\sin u=\cos v$. This means: Letting $z=e^{iv}$ you want $z+\frac{1}{z}=2\sin u$ or $z=\frac{2\sin u\pm \sqrt{4\sin^2 u-4}}{2}=\sin u \pm i\cos u$. If $u=\frac{\pi}{2}-w$ then this is $z=e^{\pm iw}$ or $v=2\pi k \pm\left( \frac{\pi}{2}-u\right)$. Case 1: If $u+v=2\pi k +\frac{\pi}{2}=S_+$ then $$uv = \frac{(u+v)^2-(u^2+v^2)}{2}=\frac{S_+^2-1}{2}$$ Giving that $u,v$ are roots of $x^2-S_+x+\frac{S_+^2-1}{2}=0$, or: $$u,v=\frac{S_+\pm \sqrt{2-S_+^2}}{2}$$ Since $S_+$ is real with absolute value greater than $\sqrt{2}$, the square root $\sqrt{2-S_+^2}$ is purely imaginary, so the real parts of $u,v$ must be $\frac{S_+}{2}.$ To get $u,v$ in the range of $\arcsin,\arccos$ you need the real part of $u$ to be in $\left[-\frac{\pi}{2},\frac{\pi}{2}\right]$ and you need the real part of $v$ to be in $[0,\pi].$ This means $S_+\in[0,\pi]$, i.e. $k=0$. So we get two solutions: $$x=\cos v=\cos\left(\frac{\pi \pm i\sqrt{\pi^2-8}}{4}\right)$$ Case 2: $v-u = 2\pi k - \frac{\pi}{2}=S_-$. $$uv = \frac{u^2+v^2-(v-u)^2}{2}=\frac{1-S_-^2}{2}$$ So $v,-u$ are roots of $x^2-S_-x-\frac{1-S_-^2}{2}=0.$ And thus $$v,-u=\frac{S_-\pm \sqrt{2-S_-^2}}{2}$$ Now the real part of $u$ is $\frac{-S_-}{2}$ and the real part of $v$ is $\frac{S_-}{2}$, so you need $S_-\in[-\pi,\pi]$ and $S_-\in[0,2\pi]$, i.e. $S_-\in[0,\pi]$. This is not possible with integer $k$. If you take $\arccos$ and $\arcsin$ as multivalued on $\mathbb C$ then you get a more complicated result - basically, any $k$ gives two pairs $u,v$ for each case (four pairs total), and then you get $x=\sin u=\cos v$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Are fundamental representations of Lie algebras faithful? Let $L$ be a semisimple Lie algebra, $\alpha_1, \cdots, \alpha_l$ a base of the root system $\Phi$, and $\omega_1, \cdots, \omega_l$ the dual basis relative to the inner product (such that $ \langle \omega_i, \alpha_j \rangle = \delta_{ij}$). Is the irreducible representation $L(\omega_k)$ of highest weight $\omega_k$ faithful?
To summarize the discussion in the comments: this is true iff $L$ is simple. Writing $L$ as the direct sum of its simple factors $L_i$, the simple roots of $L$ are the disjoint union of the simple roots of each $L_i$, and similarly for the fundamental weights. So the fundamental representations of $L$ are the fundamental representations of each $L_i$, and in particular they are faithful representations of exactly one of the simple factors of $L$ at a time.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2590902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Using the weak law of large numbers to find the limit of $\sum\limits_{r = an}^{bn} {n \choose r } p^r (1-p)^{n-r}$ I need to find the limit of $\sum\limits_{r}^{} {n \choose r } p^r (1-p)^{n-r}$ such that $ an < r < bn $ in the cases $p < a$, $ a < p < b$, and $b < p$. I know I need to consider the sum of $n$ identically distributed independent Bernoulli random variables, but I'm not sure how to apply the weak law to show what is required. Any help you could offer would be very much appreciated.
Let $X_n$ be binomial with parameters $p$ and $n$. The desired sum is $P(a<X_n/n<b)$. But by the WLLN, $X_n/n\xrightarrow{p} p$, so ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to prove $\cos(n!) \neq 1$ without using $\pi$ is irrational Prove $[\forall n \in \mathbb{N}, \cos(n!) \neq 1]$, without using $\pi$ is irrational. Using $\pi \in (\mathbb R-\mathbb Q)$, I can prove it... Thanks everyone!
If $\cos(n!) = 1$ for some $n\in\mathbb N,$ then $n! = 2\pi m$ for some $m\in\mathbb N,$ so $\pi = m/(n!)$ and thus $\pi$ is rational. Conversely if $\pi$ is rational then $\pi = m/\ell$ for some $m,\ell\in\mathbb N,$ and $\ell$ is a divisor of $n!$ for some $n\in\mathbb N,$ so then $\cos(n!)=1.$ Therefore the only way to prove $\cos(n!)\ne1$ for every $n\in\mathbb N$ is by proving that $\pi$ is irrational. Various ways of doing that exist: https://en.wikipedia.org/wiki/Proof_that_%CF%80_is_irrational
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Computing the sum $\sum\limits_{n=2}^\infty \left(\frac{1}{(n-1)!}-\frac{1}{n!}\right)\frac{1}{n+1}$ I have come across an infinite series, but I have no clue on how to compute its sum. $$\sum_{n=2}^\infty \left(\frac{1}{(n-1)!}-\frac{1}{n!}\right)\frac{1}{n+1}$$ It should have something to do with the Taylor expansion of $e^x$, but I could not figure out how to do this.
\begin{align*} \sum_{n=2}^\infty \left (\frac{1}{(n-1)!}-\frac{1}{n!}\right)\frac{1}{(n+1)} &=\sum_{n=2}^\infty \left (\frac{n}{(n+1)!}-\frac{1}{(n+1)!}\right)\\ &=\sum_{n=2}^\infty \left (\frac{n+1-1}{(n+1)!}-\frac{1}{(n+1)!}\right)\\ &=\sum_{n=2}^\infty \frac{1}{n!}-2\sum_{n=2}^\infty \frac{1}{(n+1)!}\\ &=-2+\sum_{n=0}^\infty \frac{1}{n!}-2\sum_{n=3}^\infty \frac{1}{n!}\\ &=-2+e-2(e- \frac 5 2 )=-e+3 \end{align*}
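The closed form $3-e\approx0.2817$ agrees with the partial sums:

```python
import math

# Partial sums of the series converge (very fast, thanks to the factorials)
# to the closed form 3 - e.
total = 0.0
for n in range(2, 40):
    total += (1 / math.factorial(n - 1) - 1 / math.factorial(n)) / (n + 1)
assert abs(total - (3 - math.e)) < 1e-12
```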
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
if $f(x,y) = \int_{x}^{y}p(p-1)(p-2)dp$ then calculate $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$ How do I take the derivative of this function: $f(x,y) = \int_{x}^{y}p(p-1)(p-2)dp$? For a single variable I can evaluate it, but this involves two variables. Any hint please, I am stuck here. I want to calculate the stationary points, so I would like to evaluate $\frac{\partial f}{\partial x}$ and $\frac{\partial f}{\partial y}$.
$$\int_x^yp(p-1)(p-2)\,dp=\int_x^y (p^3-3p^2+2p)\,dp=-\frac{x^4}{4}+x^3-x^2+\frac{y^4}{4}-y^3+y^2$$ Now differentiate with respect to whatever variable you want. In general, if we are given a continuous function $f(p)$ and we want to differentiate the integral $$G(x,y)=\int_x^yf(p)\,dp$$ as a function of $y$, say, then we can use the fundamental theorem of calculus which allows us to deduce that $$\frac{\partial G}{\partial y}(x,y)=f(y)$$ similarly, $$\frac{\partial G}{\partial x}(x,y)=-f(x)$$ the negative sign is because the $x$ appears as a lower limit.
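A quick finite-difference check of these two formulas, using the explicit antiderivative computed above (a sketch with arbitrary sample points):

```python
# Check dG/dy = f(y) and dG/dx = -f(x) by central differences,
# with F(p) = p^4/4 - p^3 + p^2 the antiderivative of f(p) = p(p-1)(p-2).
def f(p):
    return p * (p - 1) * (p - 2)

def G(x, y):
    F = lambda p: p**4 / 4 - p**3 + p**2
    return F(y) - F(x)

x, y, h = 0.7, 2.3, 1e-6
dG_dy = (G(x, y + h) - G(x, y - h)) / (2 * h)
dG_dx = (G(x + h, y) - G(x - h, y)) / (2 * h)
assert abs(dG_dy - f(y)) < 1e-6
assert abs(dG_dx + f(x)) < 1e-6      # note the minus sign on the lower limit
```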
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving the inequality ${x^2+2x+2^{|a|}\over x^2-a^2} > 0$ How do I solve the inequality ${x^2+2x+2^{|a|}\over x^2-a^2} > 0$? My idea is that for $a\ne 0$, the numerator will always be positive. So the inequality reduces to $ {1 \over x^2-a^2 } > 0 $. My doubt is in this part. If we factorise the denominator, we get $ {1 \over (x+a)(x-a)} $ which, by using the wavy curve method, gives me $ x \in (-\infty,-a) \ \ \ \cup \ \ \ (a,\infty). $ But according to my textbook, the answer with $a\ne 0$ is $ x \in (-\infty,-|a|) \ \ \ \cup \ \ \ (|a|,\infty). $ Why?
The problem is that in your solution the $2$ intervals can overlap, i.e. $-a > a$, which holds for all $a<0$. Hence, in order for the intervals not to overlap, you have to use the modulus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Higher-order derivative of $1/(1+e^x)$ Let $$ f(x) = \frac{1}{1+e^x} $$ 1) Is there a closed formula for the $k$-derivative $f^{(k)}(x)$? 2) Is there a simple argument which shows for $k \geq 1$ that $$ \lim_{x \to \pm \infty} f^{(k)}(x) = 0 ? $$ Thanks!
Write $e^x=:u$ for short. Then there is a sequence $(p_n)_{n\geq0}$ of polynomials such that $$f^{(n)}(x)={p_n(u)\over(1+u)^{n+1}}\qquad(n\geq0)\ .$$ It is easy to show that the $p_n$ satisfy the recursion $$p_0=1,\qquad p_{n+1}=(u+u^2)\, p_n'-(n+1) u\, p_n\qquad(n\geq0)\ .$$ E.g., one obtains $$p_6(u)=u (-1+57 u-302 u^2+302 u^3-57 u^4+u^5)\ .$$ This also settles question 2): for $n\geq1$ one has $p_n(0)=0$ and $\deg p_n=n<n+1$, so $f^{(n)}(x)\to0$ both as $x\to-\infty$ (where $u\to0$) and as $x\to+\infty$ (where $u\to\infty$).
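The recursion is easy to run mechanically; representing $p_n$ by its coefficient list in $u$, six steps reproduce the stated $p_6$ (a sketch, with helper names of my own choosing):

```python
# Verify p_{n+1} = (u + u^2) p_n' - (n+1) u p_n reproduces the stated p_6,
# representing a polynomial in u as its list of coefficients.
def derive(p):                       # formal derivative
    return [k * c for k, c in enumerate(p)][1:]

def mul_u(p, shift=1):               # multiply by u^shift
    return [0] * shift + p

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * x for x in p]

p = [1]                              # p_0 = 1
for n in range(6):
    dp = derive(p)
    p = add(add(mul_u(dp, 1), mul_u(dp, 2)), scale(-(n + 1), mul_u(p, 1)))

# p_6(u) = u(-1 + 57u - 302u^2 + 302u^3 - 57u^4 + u^5)
assert p == [0, -1, 57, -302, 302, -57, 1]
```

Up to signs, the coefficients are the Eulerian numbers; their absolute values sum to $6!=720$.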
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Trigonometric limit mistake Question: $$\lim_{x\to 0}\frac{\tan x-\sin x}{x^3}$$ The answer, by L'Hopital's rule as well as by Wolfram and Desmos, is $\frac{1}{2}$. Here's what I did: $$\lim_{x\to0}({\tan x \over x}\times{1\over x^2}-{\sin x \over x}\times{1\over x^2})$$ $$\lim_{x\to0}({1 \over x^2}-{1 \over x^2})=0$$ I'm not sure where the mistake is.
Note that there is no split of the limit like $$\lim f(x) - \lim g(x) $$ here. Rather, the expression under the limit has been split as a difference based on the laws of algebra, and this is perfectly fine. The mistake happens in the next step, and it is a very common one: replacing the expressions $(\sin x) /x$ and $(\tan x) /x$ by $1$. That's just plain wrong. These expressions are never equal to $1$, and thus they can't be replaced by $1$. I really find it surprising that the mistake is so common in spite of the very obvious mathematical fact that one can't replace $A$ by $B$ unless $A=B$. What you can always do is replace the expression $\lim_{x\to 0}\dfrac{\sin x} {x}$ with $1$, without any restrictions, simply because they are equal. This emphasizes the fact that the expression $\dfrac{\sin x} {x} $ is different from the expression $\lim_{x\to 0}\dfrac{\sin x} {x} $. Unless this simple fact is taken into consideration one can get into trouble. I have described this problem in detail in this answer, which also describes when such replacements are valid.
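A numeric illustration that the limit is $\frac12$ rather than $0$ (so the replacement really does change the answer):

```python
import math

# (tan x - sin x)/x^3 approaches 1/2 as x -> 0, not 0.
for x in (0.1, 0.05, 0.01):
    val = (math.tan(x) - math.sin(x)) / x**3
    assert abs(val - 0.5) < 0.01
```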
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Embedding a Riemann surface which is diffeomorphic to a punctured disc in $\mathbb{C}$ How do we prove that any Riemann surface which is diffeomorphic to a punctured disc can be holomorphically embedded in $\mathbb{C}$? The reason I am thinking about this is because I was trying to classify complex structures on a punctured disc. Any hints will be appreciated. Thanks.
Such a Riemann surface $X$ has fundamental group $\Bbb Z$. By the uniformisation theorem, its universal cover $U$ is either $\Bbb C$ or the upper half plane, and $X$ is the quotient of $U$ by an automorphism $\gamma$ of infinite order. Up to inner automorphisms, if $U=\Bbb C$ then $\gamma$ is conjugate to $z\mapsto z+1$, and then $X$ is conformally equivalent to $\Bbb C^*$. If $U$ is the upper half plane, then we can take $\gamma:z\mapsto z+1$ or $\gamma: z\mapsto \lambda z$ ($\lambda>1$). In the former case we get a punctured disc. In the latter case we get an annulus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I solve this rather simple looking integral equation? I was working on a physics problem and I have reduced it down to a simple integral equation with two boundary conditions: $$\int_0^{l-t}y(x, t) dx = lh$$ With the conditions: $$y(0, t) = y(l-t, t) = h$$ I am looking for $y(x, t)$. $l$, $h$ and $t$ are positive real numbers. Unfortunately, I don't know how to solve it. So, I need a little help there. Also, I am curious if this same problem could be converted into a differential equation.
Has this something to do with FEM? I would do: at $x=0$, $$y(x, t)=h$$ at $x=\frac{1}{2} \cdot(l-t)$, $$y(x, t)=h+\frac{2ht}{l-t}$$ at $x=(l-t)$, $$y(x, t)=h$$ Is this what you are looking for?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2591899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Eigenvector and eigenvalue of $\mathbf A+\lambda\mathbf I$ where $\mathbf A=\mathbf{vv}^\top$ Given $\mathbf v=\begin{bmatrix}v_1\\v_2\\\vdots\\v_n\end{bmatrix}$ and $\mathbf A=\mathbf{vv}^\top$, find the eigenvectors and eigenvalues of $\mathbf A+\lambda\mathbf I$. My current work progress is: Since $(\mathbf A+\lambda\mathbf I)\mathbf v =\mathbf{Av}+\lambda\mathbf v=\mathbf v(\mathbf v^\top\mathbf v) + \lambda\mathbf v=\|\mathbf v\|^2\mathbf v+ \lambda\mathbf v=(\|\mathbf v\|^2+\lambda)\mathbf v$ , so one of the eigenvalues is $\lambda_1=\|\mathbf v\|^2+\lambda$ with corresponding eigenvector $\mathbf v_1=\mathbf v$. I'd like to find out the other pairs of eigenvalues and eigenvectors. I first calculate the trace. $$\mathrm{Tr}\,(\mathbf A+\lambda\mathbf I)=n\lambda+\|\mathbf v\|^2=\lambda_1+(n-1)\lambda$$ Hence, the remaining $n-1$ eigenvalues have sum $(n-1)\lambda$. From here, I do not know how to proceed. * *Can I assume the rest of the eigenvalues are $\lambda$ with multiplicity $n-1$? Why? *How to find the corresponding eigenvectors? [Note1] This is like a follow up question regarding this one. [Note2] Found later on there is a question but people focus on linking it to Note1.
Hint: Consider a vector orthogonal to $v$.
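A concrete illustration of the hint in $n=3$, with sample values $\mathbf v=(1,2,2)$ (so $\|\mathbf v\|^2=9$) and $\lambda=\tfrac12$ of my own choosing:

```python
# For w orthogonal to v, (v v^T + lam I) w = lam w, so lam is an eigenvalue
# with multiplicity n - 1; v itself has eigenvalue ||v||^2 + lam.
v = [1.0, 2.0, 2.0]                  # ||v||^2 = 9
lam = 0.5

def apply(w):                        # compute (v v^T + lam I) w
    vw = sum(vi * wi for vi, wi in zip(v, w))
    return [vi * vw + lam * wi for vi, wi in zip(v, w)]

# eigenpair (||v||^2 + lam, v)
out_v = apply(v)
assert all(abs(o - (9 + lam) * vi) < 1e-12 for o, vi in zip(out_v, v))

# any w with v . w = 0 is an eigenvector for lam
w = [2.0, -1.0, 0.0]
out_w = apply(w)
assert all(abs(o - lam * wi) < 1e-12 for o, wi in zip(out_w, w))
```

Since the orthogonal complement of $\mathbf v$ has dimension $n-1$, this accounts for all remaining eigenvalues.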
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Lemma 2.8 - Elements of integration by Bartle I'm self studying Measure Theory by Bartle's book and I have a doubt in the proof of a lemma. Before I say what the lemma is and what my doubt is, I would like to mention that the book is available here if anyone wants to see it, and I would like to introduce some notation from Bartle's book that I believe is necessary to understand the proof of the lemma. $\textbf{Notation:}$ (i) $\overline{\mathbb{R}} := \mathbb{R} \cup \{ -\infty, +\infty \} $ is the extended real number system, and functions with codomain $\overline{\mathbb{R}}$ are said to be extended real valued functions. (ii) $\textbf{X}$ in bold is the $\sigma-$algebra of a set $X$. (iii) The collection of all extended real valued $\textbf{X}-$measurable functions on $X$ is denoted by $M(X, \textbf{X})$. $\textbf{Lemma 2.8}$ An extended real valued function $f$ is measurable if and only if the sets $$A := \{ x \in X \ ; \ f(x) = +\infty \} \hspace{0.5cm} \text{and} \hspace{0.5cm} B := \{ x \in X \ ; \ f(x) = - \infty \}$$ belong to $\textbf{X}$ and the real-valued function $f_1$ defined by $$f_1(x) := f(x), x \notin A \cup B$$ $$f_1(x) := 0, x \in A \cup B$$ is measurable. $\textbf{Proof:}$ If $f \in M(X, \textbf{X})$, it has already been noted that $A$ and $B$ belong to $\textbf{X}$. Let $\alpha \in \mathbb{R}$ and $\alpha \geq 0$, then $$\{ x \in X \ ; \ f_1(x) > \alpha \} = \{ x \in X \ ; \ f(x) > \alpha \} \ \backslash \ A$$ If $\alpha < 0$, then $$\{ x \in X \ ; \ f_1(x) > \alpha \} = \{ x \in X \ ; \ f(x) > \alpha \} \cup B$$ The proof continues, but my doubt emerges here. I would like to know why $\{ x \in X \ ; \ f_1(x) > \alpha \} = \{ x \in X \ ; \ f(x) > \alpha \} \cup B$? I don't know why $\{ x \in X \ ; \ f_1(x) > \alpha \}$ contains points of $B$ since $\alpha$ is fixed. Thanks in advance!
This equality is valid for $\alpha <0$, so \begin{align}\{ x \in X \ ; \ f_1(x) > \alpha \}&=\{ x \in X \ ; \ f(x) > \alpha \}\cup\{ x \in X \ ; \ f_1(x) =0 \}\\[1ex] &=\{ x \in X \ ; \ f(x) > \alpha \}\cup (A\cup B)\\[1ex] &=\{ x \in X \ ; \ f(x) > \alpha \}\cup B \end{align} since $\;A\subset \{ x \in X \ ; \ f(x) > \alpha \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The set of all matrices which represent orthogonal projections in $M_n(\mathbb C)$ is closed The set of all matrices which represent orthogonal projections in $M_n(\mathbb C)$ is closed. I cannot identify the set. Can you please help me to do so?
The set you are looking for is the following: $$\mathrm{Gr}_n(\mathbb{C}):=\{A\in\mathcal{M}_n(\mathbb{C})\textrm{ s.t. } A^2=A,{}^\intercal\overline{A}=A\}.$$ It is closed, since it is given as an intersection of closed sets. Indeed, the two following maps $A\mapsto A^2-A$ and $A\mapsto{}^\intercal{\overline{A}}-A$ are continuous (why?). Remark. Being a projector is exactly asking $A^2=A$, and orthogonality of the projection is exactly the self-adjointness condition ${}^\intercal\overline{A}=A$. If you are wondering why I called this set $\textrm{Gr}_n(\mathbb{C})$ it is because one has: $$\textrm{Gr}_n(\mathbb{C})=\coprod_{k=0}^n\textrm{Gr}_{n,k}(\mathbb{C}),$$ where $\textrm{Gr}_{n,k}(\mathbb{C})$ is the collection of $k$-planes in $\mathbb{C}^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex analysis on real integral Use complex analysis to compute the real integral $$\int_{-\infty}^\infty \frac{dx}{(1+x^2)^3}$$ Attempt I think I want to consider this as the complex integral $$\int_{-\infty}^\infty \frac{dz}{(1+z^2)^3}$$ and then apply the residue theorem. However, I am not sure how to justify this step or how to apply the theorem.
It is much faster to apply Feynman's trick. For any $a>0$ we clearly have $$ \int_{-\infty}^{+\infty}\frac{dx}{a+x^2} = \frac{\pi}{\sqrt{a}} \tag{1} $$ and by applying $\frac{d^2}{da^2}$ to both sides: $$ \int_{-\infty}^{+\infty}\frac{2\,dx}{(a+x^2)^3} = \frac{3\pi}{4a^2\sqrt{a}} \tag{2} $$ hence by evaluating $(2)$ at $a=1$ we instantly get $$ \int_{-\infty}^{+\infty}\frac{dx}{(1+x^2)^3} = \frac{3\pi}{8}\tag{3}$$ and in general $$ \forall m\geq 1,\qquad \int_{-\infty}^{+\infty}\frac{dx}{(1+x^2)^{m}} = \frac{\pi}{4^{m-1}}\binom{2m-2}{m-1}.\tag{4}$$
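One can sanity-check $(3)$ numerically: under $x=\tan t$ the integrand becomes $\cos^4 t$ on $(-\pi/2,\pi/2)$, which Simpson's rule integrates to $3\pi/8$ almost exactly (a sketch):

```python
import math

# Simpson's rule on the transformed integral of cos^4 t over (-pi/2, pi/2).
N = 10_000                                   # even number of subintervals
a, b = -math.pi / 2, math.pi / 2
h = (b - a) / N
g = lambda t: math.cos(t) ** 4
s = g(a) + g(b)
for i in range(1, N):
    s += (4 if i % 2 else 2) * g(a + i * h)
approx = s * h / 3
assert abs(approx - 3 * math.pi / 8) < 1e-10
```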
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
The solutions of $4 x^3 = y^2 +27$ Is it true that if $$4 x^3 = y^2 +27$$ then $$y=3k$$ for some integer $k$? I tried Fermat's theorem for $x^3$, but it seems I am missing something. EDITED: $x$ and $y$ are integers.
You are correct that you could use Fermat's theorem for $x^3+y^3=z^3$. Starting with your elliptic curve $$4x^3=y^2+27$$ we borrow Yong Hao Ng's answer by multiplying it by $16$ to get $$(4x)^3=(4y)^2+432$$ $$X^3-432=Y^2\tag1$$ This is a special case of a well-known family. The problem of finding two rational cubes equal to $N$ $$p^3+q^3 = N$$ or, $$\left(\frac{36N+v}{6u}\right)^3+\left(\frac{36N-v}{6u}\right)^3 =N$$ simplifies to $$u^3-432N^2 =v^2\tag2$$ Thus, your curve is just the case $N=1$. Of course, by Fermat's Last Theorem, there are no non-trivial rational solutions to, $$p^3 + q^3 =1$$ so the only rational point on $(1)$ is its torsion point $X=12$, and no other. In the original variables this gives $(x,y)=(3,\pm9)$, so for the integer solutions $y$ is indeed divisible by $3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the coefficient of $ a^8b^4c^9d^9$ in $(abc+abd+acd+bcd)^{10}$? I am new to the Binomial Theorem and I want to find the coefficient of $ a^8b^4c^9d^9$ in $$(abc+abd+acd+bcd)^{10}$$ How can I find it?
Let us set $abc=D, abd=C, acd=B, bcd=A$. Then $a^8 b^4 c^9 d^9 =A^2 B^6 C D$ and the problem boils down to finding the coefficient of $A^2 B^6 C D$ in $(A+B+C+D)^{10}$, i.e. to counting the anagrams of the word $AABBBBBBCD$. They are $$ \frac{10!}{2!6!}=\color{red}{2520}.$$
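This count is small enough to verify by brute force, picking one of the four terms in each of the $10$ factors ($4^{10}\approx10^6$ cases):

```python
from itertools import product
from math import factorial

# choice[f] = index of the variable omitted by the term chosen in factor f
# (0 = a omitted by bcd, 1 = b by acd, 2 = c by abd, 3 = d by abc).
# The exponent of variable j in the product is 10 - choice.count(j), so
# a^8 b^4 c^9 d^9 needs omission counts (2, 6, 1, 1).
count = 0
for choice in product(range(4), repeat=10):
    if choice.count(0) == 2 and choice.count(1) == 6 and choice.count(2) == 1:
        count += 1
assert count == factorial(10) // (factorial(2) * factorial(6)) == 2520
```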
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Connection compatible with a volume form Let $M$ be a smooth, orientable $n$-manifold and $\eta$ a volume form on $M$. Does there exist a connection $A$ on $TM$ such that $$\tag{$*$}D\eta=0,$$ where $D$ is the appropriate covariant derivative associated to $A$ on $\Omega^n(M)$? Can one assume $A$ to be symmetric? I have a feeling that if one writes $(*)$ plus the torsion equation and then applies the Frobenius theorem, some local result can be obtained. But that doesn't give global conditions (on $M$ nor $\eta$). A reasonable assumption would be $M$ parallelizable.
If $\eta$ vanishes somewhere, then the only way it can be parallel (for any connection at all) is if it vanishes identically. Thus I will assume $\eta$ is non-vanishing. Let $g$ be a Riemannian metric on $M$ and $\omega$ the volume form of $g.$ Since $\omega,\eta$ are smooth non-vanishing sections of a line bundle, there is some positive function $f \in C^\infty(M)$ such that $\eta = f \omega.$ (I'm assuming here that we take the orientation of $\omega$ to match that of $\eta.$) The conformally related metric $\tilde g = f^{2/n}g$ then has volume form $\tilde \omega = (f^{2/n})^{n/2} \omega = \eta,$ which is made parallel by the Levi-Civita connection of $\tilde g.$ Thus there are no conditions necessary - there is always a symmetric connection that does the job.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Non real roots and Gauss Lucas I am stuck on this problem: a) Let $a$ be a complex number, and $M$ and $M'$ the images of $a$ and $\bar a$. Study the sign of Im$\left(\frac{1}{z-a}+\frac{1}{z-\bar a}\right)$ according to the position of the point $P$, the image of $z$, relative to the real axis and to the circle of diameter $MM'$. b) Let $A$ be a polynomial in $\mathbb R[X]$, and let $\rho_1$, $\bar \rho_1$, $\rho_2$, $\bar \rho_2$, ..., $\rho_p$, $\bar \rho_p$ be the complex roots of $A$. Let $M_i$, $M'_i$ be the images of $\rho_i$ and $\bar \rho_i$, and $\Gamma_i$ the circle of diameter $M_iM'_i$. Show that every non-real root of the derived polynomial $A'$ has its image either within one of the $\Gamma_i$ or on one of the $\Gamma_i$. For part a), I tried to express the imaginary part of $\frac{1}{z-a}+\frac{1}{z-\bar a}$ by decomposing into the real and imaginary parts of $a$ and $z$ and using the fact that $\frac{1}{z}= \frac{\bar z}{|z|^2}$, but nothing convincing came up. For part b) I think this has to do with the Gauss-Lucas theorem. Thanks a lot for your support
This is Jensen's theorem. Let $p\in{}\mathbb{R}[x]$. It is known that if $a$ is a complex root of $p$, then $\bar a$ is also a root. In this case, we will say that the disk of diameter $a\bar a$ (i.e. its diameter is the line segment between $a$ and $\bar a$) is a Jensen Disk of $p$ (example: http://mathworld.wolfram.com/JensenDisk.html). Theorem: all nonreal roots of $p'$ lie in or on the boundary of one of the Jensen Disks of $p$. To prove this, let $z_1,...,z_n$ be the roots of $p$. Assume that $z$ is a nonreal root of $p'$ that is not in or on the boundary of a Jensen Disk of $p$. Note that $z$ is not a root of $p$, and thus $0=\frac{p'(z)}{p(z)}=\sum_{i=1}^n{\frac{1}{z-z_i}}$. We shall reach a contradiction by showing that the imaginary part of $\sum_{i=1}^n{\frac{1}{z-z_i}}$ is not zero. We will do it by showing that: (i) $sgn(\Im(\frac{1}{z-a}+\frac{1}{z-\bar a}))=-sgn(\Im(z))$ for each complex nonreal root $a$ of $p$ (which is your question (a)), and that (ii) $sgn(\Im(\frac{1}{z-r}))=-sgn(\Im(z))$ for each real root $r$ of $p$. Indeed, this would imply that $sgn(\Im(\sum_{i=1}^n{\frac{1}{z-z_i}}))=-sgn(\Im(z))\ne{}0$. i) Let $a=s+ti$ be a nonreal root of $p$. Then $$\frac{1}{z-a}+\frac{1}{z-\bar a}=\frac{1}{z-s-ti}+\frac{1}{z-s+ti}=\frac{2z-2s}{(z-s)^2+t^2}=\frac{2(z-s)((\bar z-s)^2+t^2)}{|(z-s)^2+t^2|^2}$$ Define $x\sim{}y$ if $sgn(\Im(x))=sgn(\Im(y))$. So $$ \frac{1}{z-a}+\frac{1}{z-\bar a}= \frac{2(z-s)((\bar z-s)^2+t^2)}{|(z-s)^2+t^2|^2}\sim{} (z-s)((\bar z-s)^2+t^2)=$$ $$=|z-s|^2(\bar z-s)+(z-s)t^2\sim{} |z-s|^2\cdot(-z)+z\cdot{}t^2= (t^2-|z-s|^2)z$$ But the expression $t^2-|z-s|^2$ is negative since $z$ is not in or on the boundary of the Jensen disk of diameter $a \bar a$. Thus $sgn(\Im(\frac{1}{z-a}+\frac{1}{z-\bar a}))=-sgn(\Im(z))$, as required. ii) Let $r$ be a real root of $p$. 
Then $$ \frac{1}{z-r}-\overline{\frac{1}{z - r}}= \frac{1}{z-r}-\frac{1}{\bar z - r}= \frac{\bar z - z}{|z-r|^2}\sim{}\bar z - z\sim{} -z$$ And it follows that $sgn(\Im(\frac{1}{z-r}))=-sgn(\Im(z))$, as required. Thus, we have reached the desired contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\sqrt{I}=R \iff I=R$ Let $R$ be a commutative ring with identity and $I$ an ideal. I have a proof that $\sqrt{I}=R \iff I=R$ and I am wondering if it's correct: "$\Leftarrow$":Obvious, since $I\subset \sqrt{I}$ "$\Rightarrow$":We know that $\sqrt{I}=\bigcap{p_i}$ where $p_i$ is prime and $p_i\supset I$ for all $i$. Because $\bigcap{p_i}=R$ we get that there isn't any proper prime ideal containing $I$ and because each proper ideal is contained in a (proper) maximal ideal we get that $I$ is maximal. Let $x \in R\setminus I$. Then by maximality of $I$ we have $xI=R$. But this means that $x$ is invertible and therefore also $x^n$. Since $x \in \sqrt{I}\Rightarrow x^n \in I$ we have that $I=R$ since it contains an invertible element. Thanks in advance !
It's much simpler to use the definition: if $\sqrt{I}=R$, then $1\in\sqrt{I}$, so $1^n\in I$, for some $n$. Your proof is incorrect. From $\bigcap_i p_i=R$ you argue that $I$ is maximal, which is wrong. You should instead argue that there is no prime ideal containing $I$. Therefore $I$ is not a proper ideal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2592888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how to solve the logarithmic equation which has both n and logn How to solve this logarithmic equation? $8n^2 = 64n\log n$, ($\log n$ here is base 2) I have tried to convert it to $n-8\log n = 0$, but how to solve the latter?
This doesn't have any solutions using elementary functions. But, using the Lambert W function, we get: $$n = -\frac {8}{\ln 2} \operatorname{W} \left (-\frac {\ln 2}{8} \right)$$ and $$n = -\frac {8}{\ln 2} \operatorname{W}_{-1} \left (-\frac {\ln 2}{8} \right)$$
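For a numerical cross-check (a sketch, not part of the answer): dividing the original equation by $8n$ reduces it to $n = 8\log_2 n$, whose two roots can be bracketed and bisected with the standard library; `scipy.special.lambertw` with branches $k=0$ and $k=-1$ gives the same two values. The brackets $(1,2)$ and $(32,64)$ below are choices made after checking that $f$ changes sign there:

```python
import math

def f(n):
    # n - 8*log2(n); the roots of f are the solutions of 8n^2 = 64 n log2(n)
    return n - 8 * math.log2(n)

def bisect(f, lo, hi, tol=1e-12):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

small = bisect(f, 1.0, 2.0)    # corresponds to the principal branch W
large = bisect(f, 32.0, 64.0)  # corresponds to the branch W_{-1}
assert abs(f(small)) < 1e-9 and abs(f(large)) < 1e-9
assert 1.0 < small < 1.2 and 43 < large < 44
```

The small root is about $1.1$ and the large one about $43.6$.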
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A few general questions on the Penrose transform Let us consider the Bateman or Whittaker's pioneering examples of a Penrose transform. Starting from a holomorphic function on an open subset of twistor space, they constructed a solution to the Laplace equation (in one case in dimension $4$ and in the other case in dimension $3$). Fine, except it is not clear what their holomorphic function represents. Does it represent an element of $H^0(Z,O(k))$ for some $k$, or for instance $H^1(Z, O(k))$ for some $k$? Of course by now, these issues have been sorted out. But to be honest, I don't understand them a 100%. So my question is this: if I were say presented with such an example of a Penrose transform, how do I figure out: 1) the degree of the sheaf cohomology group ($H^0$, $H^1$,...) 2) the degree $k$ of the line bundle 3) what it corresponds to on the "spacetime" side? Can you perhaps give me the general idea please?
I found the explanation in Huggett and Tod's book, "Introduction to Twistor theory", to be very clear and down to earth (it was recommended to me by D. Calderbank, and I thank him for this reference). Using homogeneous coordinates on (projective) twistor space, a la Penrose, and using the integral formula, one can then answer all my questions. For instance, the degree of the bundle is essentially the degree of homogeneity one needs for the expression to be well defined on the twistor lines, and so on. I highly recommend this book as an introduction to the Penrose transform. Somehow, when I read first about the Penrose transform, it was in the very well written paper by Eastwood, Penrose and Wells. However, it was a little high tech for me at the time, and it hid somehow how someone may have "discovered" the Penrose transform, so to speak, similar to the pioneering work of Bateman and Whittaker, and later on, independently, by Penrose (the integral formula).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compute $\lim_{x \to \infty} \frac{\log(x)}{x^a}$ How can I compute $\lim_{x \to \infty} \frac{\log(x)}{x^a}$ for some $a \in \mathbb R$ with $x^a := e^{a \log(x)}$? Can you give me a hint? I want to use only the basic properties of limits, like the linearity, multiplicativity, monotonicity, the Sandwich property and continuity (no L'Hospital, derivatives, integrals).
Set $y:= a \cdot \log(x)$, then it follows for $a>0$ that $$\lim_{x \rightarrow \infty} \frac{\log(x)}{x^{a}}= \lim_{x \rightarrow \infty} \frac{\log(x)}{e^{a \log(x)}} = \frac{1}{a} \lim_{y \rightarrow \infty} \frac{y}{e^{y}}=0$$ (For $a\le 0$ the quotient tends to $+\infty$, since the numerator does while the denominator stays bounded.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $\sum_{i=1}^\infty i^2\mathbb{P}(i\leq X_n<i+1)\leq C<\infty$ for all $n$, prove $\mathbb{P}(X_n \geq n\ i.o.) = 0$ I am studying probability theory myself, so I have been asking questions a lot recently. Please help. This question comes from Rosenthal's 3.6.13 Let $X_1, X_2,\dots$ be defined jointly on some probability space $(\Omega, \mathcal{F}, \mathbb{P})$, with $\sum_{i=1}^\infty i^2\mathbb{P}(i\leq X_n<i+1)\leq C< \infty$ for all $n$. Prove that $\mathbb{P}(X_n \geq n\ i.o.) = 0$ I am thinking if I can prove $\sum_{n=1}^\infty \mathbb{P}(X_n \geq n) < \infty$, then Borel-Cantelli Lemma can apply, but have had no luck. What I got are: \begin{align} \sum_{i=1}^\infty i^2\mathbb{P}(i\leq X_n<i+1) &= \mathbb{P}(X_n \geq 1) - \mathbb{P}(X_n\geq 2) + 2^2 \mathbb{P}(X_n \geq 2) - \mathbb{P}(X_n\geq 3) + \dots \\ &= \sum_{i=1}^\infty (2i-1)\mathbb{P}(X_n \geq i) \\ \mathbb{P}(X_n \geq n) & = \sum_{i=n}^\infty \mathbb{P}(i\leq X_n<i+1) \end{align} However, none of these lead me to an answer. If it requires the Kolmogorov Zero-One Law, please explain a little. I am confused about the definition of "tail field".
The event $X_n \ge n$ i.o. is indeed a tail event. You can tell because if you change any finite number of the $X_n,$ it won't change the truth value of $X_n \ge n$ i.o. By the Kolmogorov 0-1 law, $\mathbb{P}(X_n \ge n \text{ i.o})$ is either 0 or 1. In fact the 0-1 law is not even needed here, because a Chebyshev-type bound makes your Borel-Cantelli idea work directly: for each $n \ge 1,$ $$\mathbb{P}(X_n \ge n) = \sum_{k=n}^\infty \mathbb{P}(k \le X_n < k+1) \le \frac{1}{n^2}\sum_{k=n}^\infty k^2\,\mathbb{P}(k \le X_n < k+1) \le \frac{C}{n^2}.$$ Hence $\sum_{n=1}^\infty \mathbb{P}(X_n \ge n) \le C\sum_{n=1}^\infty \frac{1}{n^2} < \infty,$ and the Borel-Cantelli lemma gives $\mathbb{P}(X_n \ge n \text{ i.o.}) = 0.$ $\square$ For more information on tail events and tail fields, see this answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that equality is correct Prove that $$\int_{0}^{1} \frac{\ln \left ( x^2+x+1 \right )}{x}\mathrm dx=\frac{\pi^2}{9}.$$ As I understand, I can do this: $$\large 1- x^3 = (1-x)(1+x+x^2) \Rightarrow x^2 +x+1 = \frac{1-x^3}{1-x}, $$ This gives $$f(x)= \frac{1}{x} \ln \left(\frac {1-x^3}{1-x}\right) =\frac{1}{x} (\ln(1-x^3)-\ln(1-x)).$$ But what shall I do next? Thank you very much for your help!
You recognized a crucial fact, i.e. that $x^2+x+1$ is a cyclotomic polynomial. For any $m\geq 1$ we have $$ \int_{0}^{1}\frac{-\log(1-x^m)}{x}\,dx = \sum_{n\geq 1}\int_{0}^{1}\frac{x^{mn-1}}{n}\,dx = \frac{1}{m}\sum_{n\geq 1}\frac{1}{n^2} = \frac{\zeta(2)}{m}\tag{1}$$ hence $$ \int_{0}^{1}\frac{\log\Phi_3(x)}{x}\,dx =\frac{2}{3}\zeta(2)=\color{red}{\frac{\pi^2}{9}}.\tag{2}$$ In general, given $$ \Phi_n(x) = \prod_{d\mid n}(1-x^d)^{\,\mu\left(\frac{n}{d}\right)} \tag{3}$$ we have $$\begin{eqnarray*} \int_{0}^{1}\frac{\log\Phi_n(x)}{x}\,dx&=&-\zeta(2)\sum_{d\mid n}\frac{1}{d}\cdot\mu\left(\frac{n}{d}\right)\\&=&-\frac{\zeta(2)}{n}\sum_{d\mid n}d\cdot\mu(d)\\&=&-\frac{\zeta(2)}{n}\prod_{p\mid n}(1-p)\\&=&\frac{\zeta(2)(-1)^{\omega(n)+1}\varphi(n)}{n^2}\prod_{p\mid n}p\\&=&\color{red}{\frac{\zeta(2)(-1)^{\omega(n)+1}\varphi(n)\,\text{rad}(n)}{n^2}}. \tag{4}\end{eqnarray*}$$
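As a quick numerical sanity check of $(2)$ (not part of the original derivation; the integrand is extended by its limit $1$ at $x=0$, and the Simpson-rule helper is mine):

```python
import math

def g(x):
    # integrand log(x^2+x+1)/x, extended by its limit 1 at x = 0
    return 1.0 if x == 0 else math.log(x * x + x + 1) / x

def simpson(f, a, b, n=20000):
    # composite Simpson rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# compare against the closed form (2/3) * zeta(2) = pi^2 / 9
assert abs(simpson(g, 0.0, 1.0) - math.pi ** 2 / 9) < 1e-9
```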
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrable function has bounded variation Let $f:[0,1] \to \mathbb{R}$ be Lebesgue integrable. Define $F(x) = \int_0^x f$. Prove $F$ has bounded variation on $[0,1]$. What does this imply for differentiability of $F$ and why? Attempt: $F$ is monotone (increasing) very obviously. So if $F$ is of bounded variation and is monotone it is differentiable a.e. (I think bounded variation also implies bounded on a compact interval?) INCORRECT EDIT Since $F$ is of bounded variation it can be written as the difference of two increasing (monotone) functions on $[0,1]$. Since monotone functions are differentiable a.e. and the derivative behaves linearly we have that $F$ is differentiable a.e. Notice $F(b)-F(a) = \int_a^b f$. Further, $|F(b)-F(a)| \leq \int_a^b |f|$. Let $P$ be any partition of $[0,1]$ with $0=x_0 < x_1 < \dots < x_k = 1$. We have $$\sum |F(x_i)-F(x_{i-1})| \leq \int_0^1 |f| < \infty$$ since $f$ is Lebesgue integrable. Thus, $F$ is of bounded variation.
Hint: If $0\le a < b\le 1,$ then $|F(b)-F(a)| \le \int_a^b|f|.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove: For $\triangle ABC$, if $\sin^2A + \sin^2B = 5\sin^2C$, then $\sin C \leq \frac{3}{5}$. We have a triangle $ABC$. It is given that $\sin^2A + \sin^2B = 5\sin^2C$. Prove that $\sin C \leq \frac{3}{5}$. Let's say that $BC = a$, $AC=b$, $AB=c$. According to the sine law, $$\frac{a}{\sin A} = \frac{b}{\sin B} = \frac{c}{\sin C} = 2R,$$ then $\sin A = \frac{a}{2R}\implies \sin^2A = \frac{a^2}{4R^2}$ $\sin B = \frac{b}{2R}\implies \sin^2B = \frac{b^2}{4R^2}$ $\sin C = \frac{c}{2R}\implies \sin^2C = \frac{c^2}{4R^2}$ Then we get: $$\frac{a^2}{4R^2} + \frac{b^2}{4R^2} = 5\frac{c^2}{4R^2}.$$ Since $4R^2 > 0$, we get that $a^2 + b^2 = 5c^2$ Guys, is that correct? Even if it is, do you have any ideas what shall I do next?
Since $A + B + C = \pi$, we have $\sin (A + B) = \sin C$. Also notice that $$\cos^2 A + \cos^2 B = 1 - \sin^2A + 1 - \sin^2B = 2 - 5\sin^2(A + B)$$ Proceeding by CSB, we get: \begin{align} \sin(A + B) &= \sin A \cos B + \sin B \cos A \\ &\le \sqrt{\sin^2 A + \sin ^2 B} \sqrt{\cos^2A + \cos^2B} \\ &= \sqrt{5} \sin(A + B) \sqrt{2 - 5\sin^2(A + B)} \end{align} Therefore $$1 \le 5(2 - 5\sin^2(A + B)) = 10 - 25\sin^2(A + B)$$ which yields $$\sin C = \sin(A + B) \le \frac35$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove that $(-1)^n\sum_{k=0}^{n}{{{n+k}\choose{n}}2^k}+1=2^{n+1}\sum_{k=0}^{n}{{{n+k}\choose{n}}(-1)^k}$ Define: $$A_n:=\sum_{k=0}^{n}{{n+k}\choose{n}} 2^k,\quad{B_n}:=\sum_{k=0}^{n}{{n+k}\choose{n}}(-1)^k$$ I've found that (based on values for small $n$) this identity seems to be true: $${\left(-1\right)}^nA_n+1=2^{n+1}B_n$$ However, I'm stuck on trying to find a proof. Any ideas? Thanks!
Using formal power series we have the Iverson bracket $$[[0\le k\le n]] = [z^n] z^k \frac{1}{1-z}.$$ We then get for $A_n$ $$\sum_{k\ge 0} [z^n] z^k \frac{1}{1-z} {n+k\choose k} 2^k = [z^n] \frac{1}{1-z} \sum_{k\ge 0} z^k {n+k\choose n} 2^k \\ = [z^n] \frac{1}{1-z} \frac{1}{(1-2z)^{n+1}}.$$ This yields for $1+(-1)^n A_n$ $$1 + (-1)^n [z^n] \frac{1}{1-z} \frac{1}{(1-2z)^{n+1}} = 1 + [z^n] \frac{1}{1+z} \frac{1}{(1+2z)^{n+1}}$$ This is $$1 + \mathrm{Res}_{z=0} \frac{1}{z^{n+1}} \frac{1}{1+z} \frac{1}{(1+2z)^{n+1}}.$$ Now the residue at infinity is zero by inspection and we get the closed form (residues sum to zero) $$1 - \mathrm{Res}_{z=-1} \frac{1}{z^{n+1}} \frac{1}{1+z} \frac{1}{(1+2z)^{n+1}} - \mathrm{Res}_{z=-1/2} \frac{1}{z^{n+1}} \frac{1}{1+z} \frac{1}{(1+2z)^{n+1}} \\ = - \mathrm{Res}_{z=-1/2} \frac{1}{z^{n+1}} \frac{1}{1+z} \frac{1}{(1+2z)^{n+1}} \\ = - \frac{1}{2^{n+1}}\mathrm{Res}_{z=-1/2} \frac{1}{z^{n+1}} \frac{1}{1+z} \frac{1}{(z+1/2)^{n+1}} .$$ We evidently require (Leibniz rule) $$\frac{1}{n!} \left(\frac{1}{z^{n+1}} \frac{1}{1+z} \right)^{(n)} \\ = \frac{1}{n!} \sum_{q=0}^n {n\choose q} \frac{(-1)^q (n+q)!}{n!} \frac{1}{z^{n+1+q}} \frac{(n-q)! (-1)^{n-q}}{(1+z)^{n-q+1}} \\ = (-1)^n \sum_{q=0}^n {n+q\choose q} \frac{1}{z^{n+1+q}} \frac{1}{(1+z)^{n-q+1}}.$$ Evaluate at $z=-1/2$ to get $$(-1)^n \sum_{q=0}^n {n+q\choose q} (-2)^{n+1+q} 2^{n-q+1} = 2^{2n+2} \sum_{q=0}^n {n+q\choose q} (-1)^{q+1}.$$ Restoring the multiplier in front now yields $$- \frac{1}{2^{n+1}} 2^{2n+2} \sum_{q=0}^n {n+q\choose q} (-1)^{q+1} = 2^{n+1} \sum_{q=0}^n {n+q\choose q} (-1)^{q}.$$ This is $2^{n+1} B_n$ as claimed.
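The identity is also easy to confirm with exact integer arithmetic for small $n$ (a verification sketch, not part of the proof):

```python
from math import comb

def A(n):
    # A_n = sum_{k=0}^n C(n+k, n) 2^k
    return sum(comb(n + k, n) * 2 ** k for k in range(n + 1))

def B(n):
    # B_n = sum_{k=0}^n C(n+k, n) (-1)^k
    return sum(comb(n + k, n) * (-1) ** k for k in range(n + 1))

for n in range(0, 41):
    assert (-1) ** n * A(n) + 1 == 2 ** (n + 1) * B(n), n
```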
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Differentiate a Variable Limit Ito Integral Consider a variable limit integral $I(t)=\int\limits_0^{\phi(t)}M(s)dW(s)$, where $\phi(t)$ is an increasing deterministic function with $\phi(0)=0$, the integrand $M(t)$ is stochastic, and $W(t)$ is a standard Brownian motion. Assume that $M(t)$ and $W(t)$ are adapted to the filtration $\mathscr{F}_t$. I am unsure whether I can differentiate this integral as usual, i.e. $dI(t)=M(\phi(t))\phi^{'}(t)dW(t)$. If not, what should I do to remove the function $\phi$ from the limit of the integral?
I think you cannot write it in this form, but using random time change, you can write it in another form as follows. By Theorem 8.5.7 of Øksendal's book, you can write $$I_t = \int_0^t M_{\phi(s)}\sqrt{\phi'(s)}d\tilde B_s,$$ where $\tilde B_t = \int_0^{\phi(t)} \sqrt{(\phi^{-1})'(s)}dW_s$ is another Brownian motion (let $\alpha_t:=\phi(t)$ in the theorem). In this form, you can write $$dI_t = M_{\phi(t)}\sqrt{\phi'(t)}d\tilde B_t.$$ You don't even need that $I_t$ is adapted to the filtration of $W_t$ (which holds if and only if $\phi(t)\leq t$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2593960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find the limit:$\lim_{n\to \infty}\left(\sum_{k=n+1}^{2n}\left(2(2k)^{\frac{1}{2k}}-k^{\frac{1}{k}}\right)-n\right)$ How to find the limit:$$\lim_{n\to \infty}\left(\sum_{k=n+1}^{2n}\left(2(2k)^{\frac{1}{2k}}-k^{\frac{1}{k}}\right)-n\right)$$ I can't think of any way to approach this problem. Can someone evaluate this? Thank you.
For small $x>0$, we have $1+x\leq\exp(x)\leq 1+x+x^{2}$, then for large $k$, \begin{align*} 1+\dfrac{\log 2}{k}-\left(\dfrac{\log k}{k}\right)^{2}\leq 2(2k)^{1/2k}-k^{1/k}\leq 1+\dfrac{\log 2}{k}+\dfrac{(\log 2k)^{2}}{2k^{2}}, \end{align*} and for large $n$, \begin{align*} \left(\sum_{k=n+1}^{2n}2(2k)^{1/2k}-k^{1/k}\right)-n\leq\log 2\sum_{k=n+1}^{2n}\dfrac{1}{k}+\sum_{k=n+1}^{2n}\dfrac{(\log 2k)^{2}}{2k^{2}}, \end{align*} and \begin{align*} \left(\sum_{k=n+1}^{2n}2(2k)^{1/2k}-k^{1/k}\right)-n\geq\log 2\sum_{k=n+1}^{2n}\dfrac{1}{k}-\sum_{k=n+1}^{2n}\left(\dfrac{\log k}{k}\right)^{2}, \end{align*} and note that \begin{align*} \sum_{k=1}^{\infty}\dfrac{(\log 2k)^{2}}{2k^{2}}&<\infty,\\ \sum_{k=1}^{\infty}\left(\dfrac{\log k}{k}\right)^{2}&<\infty, \end{align*} and we treat \begin{align*} \sum_{k=n+1}^{2n}\dfrac{1}{k} \end{align*} as \begin{align*} \int_{n+1}^{2n}\dfrac{1}{t}dt=\log 2n-\log(n+1)=\log\left(\dfrac{2n}{n+1}\right) \end{align*} when $n\rightarrow\infty$. So the limit is $(\log 2)^{2}$.
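A numerical check of the value $(\log 2)^2\approx 0.4805$ (not part of the proof; to avoid floating-point cancellation the $-n$ is folded into the loop by subtracting $1$ from each of the $n$ terms):

```python
import math

def s(n):
    # sum_{k=n+1}^{2n} (2(2k)^{1/(2k)} - k^{1/k}) - n, with the "-n"
    # spread as "-1" over the n terms of the sum
    total = 0.0
    for k in range(n + 1, 2 * n + 1):
        total += 2 * (2 * k) ** (1 / (2 * k)) - k ** (1 / k) - 1.0
    return total

limit = math.log(2) ** 2
# convergence is slow (error on the order of (log n)^2 / n)
assert abs(s(100000) - limit) < 1e-3
```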
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Sobolev functions vanish in a ball Assume $u\in H^1(\mathbb R^N)$ and $u=0$ a.e. in $B_1$. Does it hold that $u\in H_0^1(\mathbb R^N\setminus \overline B_1)$? I tried to find smooth functions with compact support to approximate $u$. For example $\rho_n*u$, with $\rho_n$ being mollifiers. However the support of these smooth functions is located in a neighborhood of $\mathbb R^n\setminus B_1$. When I tried to find an approximation of $u$ in $H^1_0(\mathbb R^N\setminus \overline B_1)$ by some truncated functions, i.e. $\xi_n u\in H^1_0(\mathbb R^N\setminus\overline B_1)$, with $$\xi_n=\begin{cases}1, &x\in \mathbb R^N\setminus B_{1+\frac1n},\\ 0, &x\in B_1, \end{cases} $$ I found that the gradient of $\xi_n$ is not bounded so that I can not get $\xi_n u\to u$. So I wonder whether it is true.
I take it that $H^1_0(\Omega)$ is the closure of $C^\infty_c(\Omega)$? Hint: For $r>0$ define $$f_r(x)=f(rx).$$Since $C_c(\mathbb R^d)$ is dense in $L^2$ it follows that if $f\in L^2(\mathbb R^d)$ then $$\lim_{r\to1}||f-f_r||_{L^2}=0.$$Applying this to $f$ and to $f'$ shows that the same is true with $H^1$ in place of $L^2$. So you can begin by approximating your $u$ by a function that vanishes in $B(0,t)$ for some $t>1$...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
To solve $ \frac{1}{a}+\frac{1}{a+b}+\frac{1}{a+b+c}=1$ in natural numbers. I'm really stuck on the following 7th grade problem (shame on me). The problem asks to solve an equation in natural numbers (that is, to find all natural $a,b,c$ such that) $$ \frac{1}{a}+\frac{1}{a+b}+\frac{1}{a+b+c}=1$$ I have found solutions: $$a=3, ~b=0, ~c=0;$$ $$a=2, ~b=1, ~c=3$$ and $$a=2, ~b=2, ~c=0$$ but there must be a general trick to find all possible solutions that I have missed. Can anyone suggest some way to solve the equation?
The equation mentioned above, $\frac{1}{a}+\frac{1}{a+b}+\frac{1}{a+b+c}=1$, has the parametric solution $(a,b,c) =[w(-1-k),w(1+2k),w(k)]$, where $w=(k+3)/[2k(k+1)]$.
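The claimed parametrization can be checked as an exact rational identity (a sketch, not part of the answer; it certifies only the algebraic identity, so which values of $k$ actually yield natural numbers $a,b,c$ still has to be examined separately, and $k\in\{0,-1,-3\}$ must be avoided):

```python
from fractions import Fraction

def triple(k):
    # (a, b, c) = (w(-1-k), w(1+2k), w k) with w = (k+3) / (2k(k+1))
    k = Fraction(k)
    w = (k + 3) / (2 * k * (k + 1))
    return w * (-1 - k), w * (1 + 2 * k), w * k

for k in [1, 2, 5, -2, Fraction(1, 2), Fraction(-5, 2)]:
    a, b, c = triple(k)
    # the identity holds exactly in rational arithmetic
    assert 1 / a + 1 / (a + b) + 1 / (a + b + c) == 1
```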
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
single valued function and multi valued function The square root function is said to be a multi-valued function. For example, 16 has two square roots, 4 and $-4$. But this violates the definition of a function, that each element of the domain has one and only one image. Then how is it a function? Please clear my doubt.
$f:[0,\infty)\to[0,\infty)$ defined by $f(x)=\sqrt{x}$ assigns to each $x$ a single value, the positive square root. This is a function. Similarly, $-f$ is a function. Functions cannot output more than one value for a given input.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
$K\subseteq X$ compact, C closed, can I say something about $K \cap C$? If C is closed then C is compact as well, but the intersection of two compact sets is not compact in general. It would be nice to have this intersection be compact though, any idea if this is true?
$K \cap C$ is closed in $K$, by the definition of subspace topology, so compact. A closed subset of a compact space is always compact, no extra assumptions needed. $K \cap C$ need not be closed if $K$ is not closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to color a map, so that one color covers the maximum area. Given a graph $G$ where every node contains a number. How would one color this graph, so that no two connected nodes have the same color and one color covers the maximum area? Area is defined by the number inside the node. For this example problem I found a solution of $22$. This is achieved by coloring the number 8, the lower 5, the highest 3, the upper right 4 and the remaining 2. I'd like to know if there is a good method for finding this number without using trial and error. EDIT: You can use as many colors as you like, so the four color theorem does not apply here.
I believe I worked out a solution to the example problem and a decent method to get there. Copy of the problem: First I decided that it wasn't needed to label all of them with the same color. There is only one color we're actually interested in. So we can label every node with R (Red) or O (Other). For the 8 we have two options: R or O. If we color it in with O it must mean that we would have to color the 4, the 2 or the 3 with R. Because if we were to cover them all with O, we would just lose the 8 points for no reason. So what is the maximum amount of points we can get by coloring the 4, 2 and 3? Seven of course (color the 4 and the 3). But 7 is less than 8 so you'd have to color 8. There are basically two options: 8: R, 4: O, 2: O, 3: O --> this results in a score of 8. 8: O, 4: R, 2: O, 3: R --> results in a score of 7 and you also make 3 other nodes (the 5, 2 and 3) O, so they can't contribute any points. So I color the 8 Red. The 4, 2 and 3 must be any Other color. Now I can lift the 8, 4, 2 and 3 out of the graph. They have been colored in so they don't matter anymore. Now you could simplify the problem to this: simplified problem You could go through the problem one more time but instead of looking at the 8, you can look at the lower 5. Again there are two options: 5: R, 2: O, 3: O, 2: O --> results in a score of 5. 5: O, 2: R, 3: O, 2: R --> results in a score of 4 and covers up 4 other nodes with O. So I color the 5 red and the 2, 3 and 2 must be any Other color. Rinse and repeat: More Simplified Graph Take a look at the twos on the left. Two options: Lowest 2: R, higher 2: O --> results in a score of 2. Lowest 2: O, higher 2: R --> results in a score of 2 and you cover up the 4. Take the Lowest 2. Simplify. Last Simplified Version From here I can see that, to take the maximum amount, I now need to take the 3 and the 4 to get the maximum in this graph. And at this point we're done. We have a total of: 8 + 5 + 2 + 4 + 3 = 22. 
NOTE: This sudoku-ing of the graph doesn't always work (just like how not all beginning positions of a sudoku are solvable). I tried it for example with the map of the United States: I got stuck. The 'algorithm' I used is still extremely cumbersome and takes a lot of time.
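In graph terms the puzzle is a maximum-weight independent set problem: pick a set of pairwise non-adjacent nodes of maximum total weight, give that set the special color, and color everything else arbitrarily. For small maps an exhaustive search works; here is a sketch. The weighted 5-cycle in it is a made-up example, not the graph from the question's picture, and the general problem is NP-hard, which is part of why the greedy simplification above can get stuck:

```python
from itertools import combinations

def max_weight_independent_set(weights, edges):
    """Brute-force maximum-weight independent set.

    weights: dict node -> weight; edges: iterable of node pairs.
    Fine for small maps; the general problem is NP-hard.
    """
    edge_set = {frozenset(e) for e in edges}
    nodes = list(weights)
    best, best_set = 0, set()
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue  # two chosen nodes share a border
            w = sum(weights[v] for v in subset)
            if w > best:
                best, best_set = w, set(subset)
    return best, best_set

# a small made-up map (NOT the graph from the question): a weighted 5-cycle
weights = {0: 8, 1: 4, 2: 2, 3: 3, 4: 5}
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
best, chosen = max_weight_independent_set(weights, edges)
assert best == 11 and chosen == {0, 3}  # 8 + 3
```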
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence of Series $\sum_{n=1}^{\infty} \left[ \frac{\sin \left( \frac{n^2+1}{n}x\right)}{\sqrt{n}}\left( 1+\frac{1}{n}\right)^n\right]$ Consider the series $\displaystyle{ \sum_{n=1}^{\infty} \left[ \frac{\sin \left( \frac{n^2+1}{n}x\right)}{\sqrt{n}}\left( 1+\frac{1}{n}\right)^n\right]}$ . Find all points at which the series is convergent. Find all intervals that the series is uniformly convergent. I know that I need to use Dirichlet and/or Abel Criterion to show this. My first attempt was to consider the argument of the sum as product of three sequences of functions, $f_n,g_n,h_n$ and perform Dirichlet/Abel twice. Towards that end I tried to break up $ \sin \left( \frac{n^2+1}{n}x\right)$ using $\sin(\alpha +\beta)=\sin \alpha \cos \beta \,+\, \sin \beta \cos \alpha $ to produce $\sin \left(nx +\frac{x}{n} \right)=\sin(nx)\cos\left(\frac{x}{n}\right)+\sin\left(\frac{x}{n}\right)\cos (nx)$ and use the fact that $\sin(nx)\leq \frac{1}{\big|\sin\left(\frac{x}{2}\right)\big|}$, but then I don't know what to do with the $\sin\left(\frac{x}{n}\right)$, $\cos\left(\frac{x}{n}\right)$, and $\cos(nx)$. Previously, when using this method we showed uniform convergence on compact intervals like $[2k\pi-\varepsilon,2(k+1)\pi-\varepsilon]$. Can I just say $ \frac{\sin \left( \frac{n^2+1}{n}x\right)}{\sqrt{n}}\leq \frac{1}{\sqrt{n}}\to 0$ as $n\to \infty$ ? Also, I know that $\lim_{n\to \infty}\left( 1+\frac{1}{n}\right)^n=e$. i.e. Help :) Thank you in advance
Pointwise convergence: Fix $x.$ Our series equals $$\tag 1 \sum_{n=1}^{\infty} \frac{\sin(nx+x/n)-\sin (nx)}{\sqrt n}(1+1/n)^n + \sum_{n=1}^{\infty} \frac{\sin (nx)}{\sqrt n}(1+1/n)^n.$$ Now by the MVT, $$|\sin(nx+x/n)-\sin (nx)| = |(\cos c)(x/n)| \le |x/n|.$$ So in the first series in $(1),$ the sum of the absolute values is bounded above by $$\sum_{n=1}^{\infty} \frac{|x/n|}{\sqrt n}(1+1/n)^n.$$ Because $(1+1/n)^n\le e,$ this series converges absolutely, hence converges. In the second series in $(1),$ let's use $(1+1/n)^{n} = e+ O(1/n).$ (I'll leave this as an exercise.) Then this series equals $$\sum_{n=1}^{\infty} \frac{\sin (nx)}{\sqrt n}e + \sum_{n=1}^{\infty}\sin (nx)\cdot O\left (\frac{1}{n^{3/2}}\right).$$ Here the first series converges by Dirichlet's test, and the second series converges absolutely. Thus $(1)$ converges pointwise everywhere. Uniform convergence: From the above, after dropping the constant $e,$ our series can be written $$\tag 2 \sum_{n=1}^{\infty} \frac{\sin (nx)}{\sqrt n} + \sum f_n(x),$$ where $\sum f_n(x)$ converges uniformly on $[-r,r]$ for every $r>0.$ That's not true of the first series. This series is $2\pi$-periodic, so consider what happens on $[0,2\pi].$ Summing by parts (a la Dirichlet) shows that the series converges uniformly on $[a,2\pi-a]$ for all small $a>0.$ Suppose the convergence were uniform to some $f$ on $[0,2\pi].$ Then $f$ would be continuous and we would have $$\int_0^{2\pi}|f|^2 = \lim_{N\to \infty} \int_0^{2\pi}\left |\sum_{n=1}^{N} \frac{\sin (nx)}{\sqrt n}\right |^2\, dx = \lim_{N\to \infty}\sum_{n=1}^{N} \frac{\pi}{(\sqrt n)^2} = \infty.$$ In the second $=$ we have used the orthogonality of the functions $\sin (nx).$ That's a contradiction, hence uniform convergence on $[0,2\pi]$ fails. Putting this all together shows the intervals of convergence for $(2)$ are $[2m\pi + a,2\pi(m+1)-a],m\in \mathbb Z,$ for any small $a>0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Evaluating $\lim_{a,b\to + \infty} \iint_{[0,a]\times[0,b]}e^{-xy} \sin x \,dx\,dy$ I'm trying to calculate the following: $$\lim_{a,b\to + \infty} \iint_{[0,a]\times[0,b]}e^{-xy} \sin x \,dx\,dy$$ Trying to calculate by definition didn't get me far. Any ideas how to attack this problem?
Let $I=\int_0^b\sin xe^{-xy}dx$. Let $u=\sin x,\,dv=e^{-xy}dx$. Then $$I=\int_0^b\sin xe^{-xy}dx=\frac{\sin xe^{-xy}}{-y}\bigg|_0^b+\frac{1}{y}\int_0^b\cos xe^{-xy}dx.$$ Now applying integration by parts once again with $u=\cos x,\, dv=e^{-xy}dx$ we get $$I=\int_0^b\sin xe^{-xy}dx=\frac{\sin xe^{-xy}}{-y}\bigg|_0^b-\frac{\cos xe^{-xy}}{y^2}\bigg|_0^b-\frac{1}{y^2}\int_0^b\sin xe^{-xy}dx.$$ Thus we find $$(1+\frac{1}{y^2})I=\frac{\sin xe^{-xy}}{-y}\bigg|_0^b-\frac{\cos xe^{-xy}}{y^2}\bigg|_0^b\Longrightarrow I=\frac{-y\sin xe^{-xy}}{y^2+1}\bigg|_0^b-\frac{\cos xe^{-xy}}{y^2+1}\bigg|_0^b.$$ The values at $b$ will go to zero as $b\to \infty$, so $I=\frac{1}{y^2+1}$. Putting this in the original integral and integrating with respect to $y$ you get $\arctan y$. I think you can proceed from here. Your result will be $\pi/2$. P.S. Actually @MichaelHardy's comments complete the theoretical part of the solution as we require while interchanging limits and integrals.
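A quick numerical cross-check of both steps (not from the original answer; the truncation points $80/y$ and $100$ and the Simpson-rule helper are my choices, with the neglected tails being exponentially or arctan-small):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson rule on [a, b] with an even number n of subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# inner integral: int_0^infinity sin(x) e^{-xy} dx = 1/(1+y^2)
for y in (0.5, 1.0, 2.0):
    inner = simpson(lambda x: math.sin(x) * math.exp(-x * y), 0.0, 80.0 / y)
    assert abs(inner - 1.0 / (1.0 + y * y)) < 1e-6, y

# outer integral: int_0^T dy/(1+y^2) = arctan(T), which tends to pi/2
outer = simpson(lambda y: 1.0 / (1.0 + y * y), 0.0, 100.0)
assert abs(outer - math.atan(100.0)) < 1e-8
```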
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $e^{1-n} \leq \frac {n!}{n^n}$ How can I show that for a $n \in \mathbb N$ $$e^{1-n} \leq \frac {n!}{n^n}$$ I tried using the binomial theorem like this $$n^n \le (1+n)^n = \sum_{k=0}^n \binom nk n^k \le \sum_{k=0}^\infty \binom nk n^k = \sum_{k=0}^\infty \frac{n!}{k!(n-k)!} n^k \le \sum_{k=0}^\infty \frac{n!}{k!} n^k = n! \sum_{k=0}^\infty \frac{n^k}{k!} = n! \cdot e^n$$ which would give me $$\frac{1}{e^n} \le \frac{n!}{n^n}$$ But I'm missing the factor of $e$ on the left side. Can you give me a hint?
Using induction we see that for $n=1$, the inequality holds. Assume that it holds for some number $k$. Then, using $\left(1+\frac1k\right)^k<e$, we find that $$\begin{align} \frac{(k+1)!}{(k+1)^{k+1}}&=\frac{k!}{k^k\left(1+\frac1k\right)^k}\\\\ &\ge \frac{e^{1-k}}{e}\\\\ &=e^{1-(k+1)} \end{align}$$ And we are done!
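For what it's worth, the inequality itself is easy to check numerically in log form ($1-n \le \log n! - n\log n$); a small Python sketch:

```python
import math

def holds(n):
    # log form of e^(1-n) <= n!/n^n, i.e. 1 - n <= log(n!) - n*log(n);
    # math.lgamma(n+1) computes log(n!) in floating point
    return 1 - n <= math.lgamma(n + 1) - n * math.log(n) + 1e-9

assert all(holds(n) for n in range(1, 1001))
```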
{ "language": "en", "url": "https://math.stackexchange.com/questions/2594928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Solving $\frac{1}{|x+1|} < \frac{1}{2x}$ Solving $\frac{1}{|x+1|} < \frac{1}{2x}$ I'm having trouble with this inequality. If it was $\frac{1}{|x+1|} < \frac{1}{2}$, then: If $x+1>0, x\neq0$, then $\frac{1}{(x+1)} < \frac{1}{2} \Rightarrow x+1 > 2 \Rightarrow x>1$ If $x+1<0$, then $\frac{1}{-(x+1)} < \frac{1}{2} \Rightarrow -(x+1) > 2 \Rightarrow x+1<-2 \Rightarrow x<-3$ So the solution is $ x \in (-\infty,-3) \cup (1,\infty)$ But when solving $\frac{1}{|x+1|} < \frac{1}{2x}$, If $x+1>0, x\neq0$, then $\frac{1}{x+1} < \frac{1}{2x} \Rightarrow x+1 > 2x \Rightarrow x<1 $ If $x+1<0$, then $\frac{1}{-(x+1)} < \frac{1}{2x} \Rightarrow -(x+1) > 2x \Rightarrow x+1 < -2x \Rightarrow x<-\frac{1}{3}$ But the solution should be $x \in (0,1)$. I can see that there can't be negative values of $x$ in the inequality, because the left side would be positive and it can't be less than a negative number. But shouldn't this appear on my calculations?
So from the first case, where $x\in(-1,\infty)$, you get the permissible values of $x$ to be $$(-1,\infty)\cap(-\infty,1)=(-1,1)$$ In the second case, where $x\in(-\infty,-1)$, you get the permissible values of $x$ to be $$(-\infty,-1)\cap\left(-\infty,-\frac13\right)=(-\infty,-1)$$ Note, however, that cross-multiplying by $2x$ preserves the inequality only when $x>0$: for $x<0$ the right-hand side $\frac{1}{2x}$ is negative while the left-hand side is positive, so no negative $x$ can satisfy the inequality. Keeping track of this sign condition removes the spurious negative interval and leaves the solution set $(0,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Show that $2x^5+3x^4+2x+16$ has exactly one real root It's clear that this function has a zero in the interval $[-2,-1]$ by the Intermediate Value Theorem. I have graphed this function, and it's easy to see that it only has one real root. But, this function is not injective and I'm having a very hard time proving that it has exactly one real zero. I can't calculate the other 4 complex roots, and my algebra is relatively weak. I have also looked at similar questions, where the solutions use Rolle's Theorem, but I can't seem to apply it to this problem.
Exploiting Descartes' rule of signs is the way to go; still, there is a (longer) alternative, which consists in studying the variations of $f$, starting from its second derivative, which is easily factorisable. $f(x)=2x^5+3x^4+2x+16 $ $f'(x)=10x^4+12x^3+2=2(x+1)(5x^3+x^2-x+1)$ $f''(x)=40x^3+36x^2=4x^2(10x+9)$ So we can start drawing a variation array $\begin{array}{|c|ccccc|}\hline x & -\infty && -\frac 9{10} && 0 && +\infty\\\hline f'' & -\infty & \nearrow & 0 & \searrow & 0 & \nearrow &+\infty\\ && -&&+&&+\\\hline\end{array}$ $\begin{array}{|c|ccccc|}\hline x & -\infty && -1 && -\frac 9{10} && \alpha && +\infty\\\hline f' &+\infty &\searrow& 0 &\searrow & -0.187 & \nearrow & 0 &\nearrow& +\infty\\ &&+&&-&&-&&+&\\\hline\end{array}$ Since $f'(-\frac 9{10})<0$ and $\lim\limits_{x\to+\infty} f'(x)=+\infty$, by the intermediate value theorem there is a root $f'(\alpha)=0$ in the interval $[-\frac 9{10},+\infty[$. We don't need to calculate it, we just need to know that it annihilates $g(x)=5x^3+x^2-x+1$. $\begin{array}{|c|ccccc|}\hline x & -\infty && \beta && -1 && \alpha && +\infty\\\hline f &-\infty &\nearrow &0 &\nearrow & 15 &\searrow & f(\alpha) &\nearrow& +\infty\\\hline\end{array}$ Since $\lim\limits_{x\to-\infty}f(x)=-\infty$ and $f(-1)>0$, by the intermediate value theorem there is a root $f(\beta)=0$ in the interval $]-\infty,-1]$. To show it is the only one we have to prove that $f(\alpha)>0$. The polynomial division of $f$ by $g$ gives $f(x)=\dfrac{50x^2+65x-3}{125}g(x)+\dfrac{2003+182x+18x^2}{125}$ Since $g(\alpha)=0$, $f(\alpha)$ has the same sign as $2003+182\alpha+18\alpha^2$, but this quadratic has no real root, so it is always positive and $f(\alpha)>0$. You can further refine the interval for $\beta$ by noticing $f(-2)=-4<0$, so $\beta\in]-2,-1[$.
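If you want to double-check the conclusion numerically (a Python sketch, independent of the calculus argument above): all real roots lie within the Cauchy bound $1+\frac{16}{2}=9$, a fine grid shows exactly one sign change there, and bisection locates the root near $-1.93$ in $]-2,-1[$:

```python
def f(x):
    return 2*x**5 + 3*x**4 + 2*x + 16

# All real roots lie in [-B, B] with the Cauchy bound B = 1 + 16/2 = 9.
B, N = 9.0, 200000
xs = [-B + 2 * B * i / N for i in range(N + 1)]
signs = [f(x) > 0 for x in xs]
changes = sum(signs[i] != signs[i + 1] for i in range(N))
assert changes == 1   # exactly one sign change on a fine grid

# Bisection in [-2, -1], where f(-2) = -4 < 0 < 15 = f(-1).
lo, hi = -2.0, -1.0
for _ in range(80):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
assert abs(f(root)) < 1e-9
```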
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Find $f(2^{2017})$ The function $f(x)$ has only positive $f(x)$. It is known that $f(1)+f(2)=10$, and $f(a+b)=f(a)+f(b) + 2\sqrt{f(a)\cdot f(b)}$. How can I find $f(2^{2017})$? The second part of the equality resembles $(\sqrt{f(a)}+\sqrt{f(b)})^2$, but I still have no idea what to do with $2^{2017}$.
$f(2a) = f(a+a) = f(a) + f(a) + 2\sqrt{f(a)f(a)} = 4f(a)$. By induction $f(2^k)=4f(2^{k-1}) = 4^2f(2^{k-2}) = \cdots = 4^kf(1)$. $f(1) + f(2) = f(1) + 4f(1) = 5f(1) = 10$. So $f(1) = 2$, and hence $f(2^k) = 4^kf(1) = 2\cdot 4^k$, giving $f(2^{2017}) = 2\cdot 4^{2017}=2^{4035}$. ==== What would be interesting is what the other values are. $f(3) = f(1) + f(2) + 2\sqrt{f(1)f(2)} = 2 + 8 + 2\sqrt {16} = 18 = 2\cdot 3^2$. Does $f(k) = 2k^2$ in general? $f(1) = 2\cdot 1^2$ and $f(2) = 8 = 2\cdot 2^2$, and if $f(k) = 2k^2$ then $f(1+k) = f(1) + f(k) + 2\sqrt{f(1)f(k)} = 2 + 2k^2 + 2\sqrt{2\cdot 2k^2} = 2+2k^2 + 4k = 2(k^2 + 2k + 1) = 2(k+1)^2$. So that holds for all positive integers.
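A quick exact check of the pattern $f(k)=2k^2$ with integer arithmetic (a Python sketch; this only verifies the algebra above on integers, it is not a proof for all reals):

```python
def f(k):
    return 2 * k * k

# f(a)*f(b) = 4 a^2 b^2, so sqrt(f(a)*f(b)) is exactly 2ab and
# everything stays in exact integer arithmetic
for a in range(1, 50):
    for b in range(1, 50):
        assert f(a + b) == f(a) + f(b) + 2 * (2 * a * b)

assert f(1) + f(2) == 10
assert f(2**2017) == 2**4035   # 2 * 4^2017 = 2^4035
```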
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is this a combinatorial identity: $ \sum_{k=1}^{n+1}\binom{n+1}{k} \sum_{i=0}^{k-1}\binom{n}{i} = 2^{2n} $? $$ \sum_{k=1}^{n+1}\left(\binom{n+1}{k} \sum_{i=0}^{k-1}\binom{n}{i}\right) = 2^{2n} $$ This is my first question, please feel free to correct/guide me. While solving a probability problem from a text book l reduced the problem to the above LHS. I couldn't reduce it any further. I tried a few values of n and it holds. I gave a half hearted attempt at induction before I gave up. Does this hold? Is there a combinatorial proof to it(assuming it holds) i.e count something one way and count the same thing other way and then equate them. Is there a name to it? Most importantly how to Google such questions? To provide further context, the problem is as follows:- Alice and Bob have a total of $2n+1$ fair coins. Bob tosses $n+1$ coins while Alice tosses $n$ coins. Tosses are independent. What is the probability that Bob tossed more heads than Alice? It is from a standard textbook "Introduction to Probability" by Dmitri and N John.
The combinatorial interpretation provided by Lord Shark is really nice and elementary, but there also is a brute-force way to proving such identity. $$ \sum_{k=1}^{n+1}\left[\binom{n+1}{k}\sum_{i=0}^{k-1}\binom{n}{i}\right]=\sum_{a=0}^{n}\left[\binom{n+1}{a+1}\sum_{b=0}^{a}\binom{n}{b}\right]=\sum_{0\leq b\leq a\leq n}\binom{n+1}{a+1}\binom{n}{b}. $$ Let $T_a=\sum_{b=0}^{a}\binom{n}{b}$. From $(1+x)^n = \sum_{b\geq 0}\binom{n}{b}x^b$ we get $$ \frac{(1+x)^n}{1-x} = T_0+T_1 x+ \ldots + T_{n-1} x^{n-1} + 2^n x^n + 2^n x^{n+1} + 2^{n} x^{n+2}+\ldots $$ hence $$ \frac{(1+x)^n-2^n x^{n+1}}{1-x} = \sum_{a=0}^{n} T_a x^a $$ $$ \frac{\left(1+\frac{1}{x}\right)^n-\frac{2^n}{ x^{n+1}}}{x-1} = \sum_{a=0}^{n} T_a x^{-(a+1)} $$ $$ (1+x)^{n+1}\frac{\left(1+\frac{1}{x}\right)^n-\frac{2^n}{ x^{n+1}}}{x-1} = \sum_{a=0}^{n} T_a x^{-(a+1)}(1+x)^{n+1} $$ and $$ \sum_{a=0}^{n}\binom{n+1}{a+1}T_a =\operatorname*{Res}_{x=0}\frac{(x+1)^{n+1}}{x}\cdot\frac{x(x+1)^n-2^n}{x^{n+1}(x-1)}.$$ On the other hand, $$ \operatorname*{Res}_{x=0}\frac{2^n(x+1)^{n+1}}{x^{n+2}(x-1)}=-\operatorname*{Res}_{x=1}\frac{2^n(x+1)^{n+1}}{x^{n+2}(x-1)}=-2^{2n+1}$$ and $$\begin{eqnarray*} \operatorname*{Res}_{x=0}\frac{(x+1)^{2n+1}}{x^{n+1}(x-1)}&=&-\operatorname*{Res}_{x=0}\frac{(x+1)^{2n+1}}{x^{n+1}}-\operatorname*{Res}_{x=0}\frac{(x+1)^{2n+1}}{x^{n}}-\ldots-\operatorname*{Res}_{x=0}\frac{(x+1)^{2n+1}}{x}\\&=&-\binom{2n+1}{n}-\binom{2n+1}{n-1}-\ldots-\binom{2n+1}{0}=-2^{2n}\end{eqnarray*}$$ so $$\sum_{a=0}^{n}\binom{n+1}{a+1}T_a = 2^{2n+1}-2^{2n} = \color{red}{2^{2n}}$$ as wanted.
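A brute-force check of the identity (a Python sketch using `math.comb`) for the first few $n$:

```python
from math import comb

def lhs(n):
    # sum over k of C(n+1, k) * (C(n,0) + ... + C(n, k-1))
    return sum(comb(n + 1, k) * sum(comb(n, i) for i in range(k))
               for k in range(1, n + 2))

for n in range(0, 40):
    assert lhs(n) == 4**n   # 2^(2n)
```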
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What does $2^π$ : the multiplication of $2 \pi$ many times? This is rather an intuitive question, in the sense that indeed a real number raised to the power of an irrational doesn't make sense but I wanted to know what does it mean intuitively. If $2^7=2*2*2*2*2*2*2, 2^{\frac{1}{2}}=\frac{1}{\sqrt{2}}$ and $a^b=a*a*a*a*a*...*a\, \, b$ many times, I'm curious as to what does $2^π$ mean. Obviously it's equal to $8.8249778271...$ but what's the meaning in the aforementioned scenario?
Start with $2^3=8$. Then think about what $2^{3.1}$ is: $$2^{3.1} = 2^{31/10} = \sqrt[10]{2^{31}} \approx 8.574187700.$$ Then consider $$2^{3.14} = 2^{314/100} = \sqrt[100]{2^{314}} \approx 8.815240927.$$ Since the sequence $3, 3.1, 3.14, 3.141, \ldots$ approaches $\pi$, the sequence $$2^3, 2^{3.1}, 2^{3.14}, 2^{3.141}, \ldots$$ should approach $2^\pi.$ So it's probably best to think of $2^\pi$ as a limit.
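This limiting process is easy to watch numerically; the Python sketch below evaluates $2^r$ at the truncations $r = 3, 3.1, 3.14,\ldots$ (each a rational power, as above) and checks that they close in on $2^\pi$:

```python
import math
from fractions import Fraction

# truncations of pi: 3, 3.1, 3.14, ...; each 2^r is an ordinary rational
# power (a q-th root of an integer power of 2), here evaluated in floats
digits = "31415926535"
prev_gap = float("inf")
for d in range(1, len(digits) + 1):
    r = Fraction(int(digits[:d]), 10 ** (d - 1))   # e.g. 314/100
    gap = abs(2.0 ** float(r) - 2.0 ** math.pi)
    assert gap <= prev_gap + 1e-12   # the approximations improve
    prev_gap = gap
assert prev_gap < 1e-6
```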
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
Factoring the polynomial $3(x^2 - 1)^3 + 7(x^2 - 1)^2 +4x^2 - 4$ I'm trying to factor the following polynomial: $$3(x^2 - 1)^3 + 7(x^2 - 1)^2 +4x^2 - 4$$ What I've done: $$3(x^2 -1)^3 + 7(x^2-1)^2 + 4(x^2 -1)$$ Then I set $p=x^2 -1$ so the polynomial is: $$3p^3 + 7p^2 + 4p$$ Therefore: $$p(3p^2 + 7p + 4)$$ I apply Cross Multiplication Method: $$p(p+3)(p+4)$$ I substitute $p$ with $x^2-1$: $$(x^2-1)(x^2-1+3)(x^2-1+4)$$ $$(x-1)(x+1)(x^2-2)(x^2-3)$$ I don't know if I've done something wrong or if I have to proceed further and how. The result has to be: $x^2(3x^2+1)(x+1)(x-1)$. Can you help me? Thanks.
Taking $(x^2 - 1)$ as a common factor, you get: $$ (x^2 - 1) (3(x^2 - 1)^2 + 7(x^2 - 1) + 4) $$ $$ = (x+1)(x-1)( 3 (x^2-1)^2 + 3(x^2 - 1) + 4(x^2 - 1) + 4 )$$ $$ = (x+1)(x-1)( 3(x^2 - 1) + 4 ) ( x^2 - 1 + 1 ) $$ $$ = (x+1)(x-1)( 3x^2 - 3 + 4)(x^2) $$ $$ = x^2(x+1)(x-1)(3x^2 + 1) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Derivative of $e^{2x^4-x^2-1}$ with limit definition of derivative Let $f:\mathbb{R}\to\mathbb{R}$ be defined as $f(x) = e^{2x^4-x^2-1}$. I have to find the derivative using the defintion: $$f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h}$$ My approach: $$ \begin{align} &\lim_{h \to 0} \frac{\exp({2(x+h)^4-(x+h)^2-1})-\exp({2x^4-x^2-1})}{h}\\ &=\lim_{h \to 0} \frac{\exp(2 h^4 + 8 h^3 x + h^2 (12 x^2 - 1) + h (8 x^3 - 2 x) + 2 x^4 - x^2 - 1) - e^{2 x^4 - x^2 - 1}}{h} \end{align}$$ How can I remove $h $from denominator?
Hint Factor the term $e^{2 x^4 - x^2 - 1}$ in each of the two terms of the difference. Then you get the product $$\frac{f(x+h)-f(x)}{h}=e^{2 x^4 - x^2 - 1} \times \frac{e^{hA(x,h)}-1}{h}$$ where $A(x,h)=(8x^3 -2x) +hB(x,h)$ with $A,B$ polynomials. Then as $e^{y} -1 \approx y$ around $0$, you get $$e^{2 x^4 - x^2 - 1} \times \frac{e^{hA(x,h)}-1}{h} \approx e^{2 x^4 - x^2 - 1} \left((8x^3-2x) +hB(x,h)\right)$$ which converges to $(8x^3-2x)e^{2 x^4 - x^2 - 1}$ as $h \to 0$. Proving that $f^\prime(x)=(8x^3-2x)e^{2 x^4 - x^2 - 1}$.
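Note that the derivative of the exponent $2x^4-x^2-1$ is $8x^3-2x$, so the chain rule predicts $f'(x)=(8x^3-2x)e^{2x^4-x^2-1}$; a central-difference check in Python (a sketch, not part of the hint) agrees:

```python
import math

def f(x):
    return math.exp(2 * x**4 - x**2 - 1)

def fprime(x):
    # chain rule: the exponent 2x^4 - x^2 - 1 has derivative 8x^3 - 2x
    return (8 * x**3 - 2 * x) * f(x)

h = 1e-6
for x in (-1.2, -0.3, 0.0, 0.7, 1.5):
    quotient = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(quotient - fprime(x)) < 1e-4 * (1 + abs(fprime(x)))
```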
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Subgroups and ideals of integer numbers. Let $(\mathbb{Z},+)$ be the additive group of integers, and $(\mathbb{Z}, +, \cdot)$ the ring of integers. By definition, every ideal of $(\mathbb{Z}, +, \cdot)$ is a subgroup of $(\mathbb{Z},+)$. Is the opposite true? Is every subgroup of $(\mathbb{Z},+)$ also an ideal of $(\mathbb{Z}, +, \cdot)$? If yes, how can it be proved? If not, is there any counterexample?
The equivalence follows from the following two (easy to prove) results: * *The ideals of a commutative ring $R$ are precisely the $R$-submodules of $R$ as a module over itself. *The Abelian groups are essentially $\mathbb Z$ modules, in the sense that the notions of group homomorphisms, subgroups, quotients, map to the appropriate notions of $\mathbb Z$-modules (module homomorphism, submodule, quotient). I am sure someone with more knowledge of category theory would be able to formulate the second statement more precisely. What I want to conclude is: $$I\subseteq \mathbb Z\;\text{is an ideal of}\;\mathbb Z\Leftrightarrow I\;\text{is a}\;\mathbb Z\text{-submodule of}\;\mathbb Z\Leftrightarrow I\;\text{is a subgroup of}\;\mathbb Z$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2595987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Proving Identity for Derivative of Determinant For a square matrix $A$ and identity matrix $I$, how does one prove that $$\frac{d}{dt}\det(tI-A)=\sum_{i=1}^n\det(tI-A_i)$$ Where $A_i$ is the matrix $A$ with the $i^{th}$ row and $i^{th}$ column vectors removed?
Here is one way to see this: Note that the map $\phi(t_1,...,t_n) = \det ( \sum_k t_k e_k e_k^T -A)$ is smooth, and if $\tau(t) = (t,....,t)$ then $f(t)=\det (tI-A) = \phi(\tau(t))$. In particular, $f'(t) = \sum_k {\partial \phi(t,....,t) \over \partial t_k}$. If we adopt the notation $\det B = d(b_1,...,b_n)$, where $b_k$ is the $k$th column of $B$, we have \begin{eqnarray} \phi(t,...,t+\delta,...t) &=& d(te_1-a_1,..., \delta e_k + te_k -a_k,...,te_n -a_n) \\ &=& \phi(t,...,t) + \delta d(te_1-a_1,..., e_k,...,te_n -a_n) \\ &=& \phi(t,...,t) + \delta \det (tI-A_k) \end{eqnarray} and so ${\partial \phi(t,....,t) \over \partial t_k} = \det (tI-A_k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Convex envelop of Tr(XY)? How would one go about calculating the convex envelop of $f(X,Y) = Tr(XY)$, where both $X \in R^{ n \times n}$ and $Y \in R^{ n \times n}$ define the domain of $f$ and are both symmetric PSD? I am trying to calculate a global under-estimator of $f$.
It's well known that $\mbox{tr}(XY) \geq 0$ for all PSD pairs $(X,Y)$. Unfortunately, you can't do any better than that in finding the convex envelope. Let $g(X,Y)$ be the convex envelope of $f(X,Y)$. I claim that $g(X,Y)=0$. Take any pair $(X,Y)$ in the domain of $f$. The pairs $(2X,0)$ and $(0,2Y)$ are also in the domain of $f$. $f(2X,0)=0$ $f(0,2Y)=0$ $f(X,Y)=f((2X,0)/2+(0,2Y)/2)$ Thus $g(X,Y) \leq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$Re(f)=Re(g)$ implies $f(z)=g(z)+ic$ Let $f$ and $g$ be analystic in a region $G$. If $\Re(f)=\Re(g)$ in $G$, then prove $$f(z)=g(z)+ic \tag{for all $z \in G$}$$ where $c$ is a real constant. By cauchy riemann we have $$\int \frac{\partial}{\partial y}Re(g)=-\int\frac{\partial}{\partial x}Re(g)$$ Hence, $Im(f)$ must be some constant?
Notice that $f-g$ has values in $i\mathbb{R}$ and if $f-g$ is non-constant, then its image must be open in $\mathbb{C}$, which is a contradiction. Whence the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Complex exponential squared does not equal sinusoid squared? I just noticed this today and I'm a bit confused by it. If we represent cos(x) as the real part of exp(ix), then I always thought that we could then say that cos(x)^2 is equal to the real part of exp(ix)exp(ix)=exp(2*i*x). However, this clearly cannot be correct, as the real part of exp(2*i*x) is actually cos(2*x), which of course is not equal to cos(x)^2. So what rule am I violating here when I try to represent cos(x)^2 as exp(2*ix)? Why can't I just say cos(x)^2 is equal to the real part of exp(ix)exp(ix)=exp(2*i*x)? I know I'm going wrong somewhere but can't see where. Thanks!
You are not violating a rule, rather, you are making up a rule which does not exist. You are assuming that the real part of $z^2$ is (the real part of $z$), squared. But in fact if we write $z=a+ib$ then $$z^2=a^2-b^2+2iab\ ,\quad \Re(z^2)=a^2-b^2$$ while $$\Re z=a\ ,\quad (\Re z)^2=a^2\ ,$$ and these are not equal (except in the special case $b=0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Explicit formula for a reversible function $f: [0,1] \rightarrow \mathbb{R}$ There is a bijection between $[0,1]$ and $\mathbb{R}$ (because they have a same cardinality). Can we write an explicit formula for such a function? (or at least a reversible function $f$ whose domain is $[0,1]$ and its range cover all real numbers?)
or at least a reversible function $f$ whose domain is $[0,1]$ and its range cover all real numbers? I'm not sure how that is different... Anyway I assume that by "explicit" you mean "there is an algorithm that for a given $x$ can calculate $f(x)$ using a given set of elementary operations (whatever that means)". Or in other words that $f$ can be encoded by a finite number of words from a set of elementary functions and operations. Not formally clear and obvious but I guess this may satisfy your needs: take a bijection $$g:(0,1)\to\mathbb{R}$$ $$g(x)=\tan\bigg(\pi x-\frac{\pi}{2}\bigg)$$ Now define the sequence $a_0=0$, $a_1=1$, $a_n=\frac{1}{n}$ for $n\ge 2$, and define $$f:[0,1]\to\mathbb{R}$$ $$f(x)=\begin{cases} g(a_{n+2}) &\text{when }x=a_n \\ g(x) &\text{otherwise} \end{cases}$$ You can easily check that $f$ is a bijection. Note that since $[0,1]$ is compact, there is no continuous bijection $[0,1]\to\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How do I calculate the average expected value of the dice bowl game? I am trying to better understand the "dice bowl game" from Goldratt's The Goal. In the book, five kids (named Andy, Ben, Chuck, Dave and Evan) each start with an empty bowl, and a 6-sided die. There is a box of matches next to Andy. For 10 rounds, each kid rolls the die, and tries to take that number of matches from the previous bowl (or matchbox), but they cannot take more matches than the bowl has. Andy will always take the number he rolls since the matchbox is always full. (There is no choice involved - the kids take the number they roll, limited by the number of matches in the bowl they take from.) An example round would look like this: A B C D E 6 (Andy rolls 6, takes from matchbox) 2 4 (Ben rolls 4, takes 4 from Andy) 2 0 4 (Chuck rolls 5, takes 4 from Ben) 2 0 1 3 (Dave rolls 3, takes 3 from Chuck) 2 0 1 2 1 (Evan rolls 1, takes 1 from Dave) The second round would look something like this: A B C D E 2 0 1 2 1 (Starting position from the previous round) 4 0 1 2 1 (Andy rolls 2, takes 2 from matchbox) 1 3 1 2 1 (Ben rolls 3, takes 3 from Andy) 1 0 4 2 1 (Chuck rolls 5, takes 3 from Ben) 1 0 1 5 1 (Dave rolls 3, takes 3 from Chuck) 1 0 1 1 5 (Evan rolls 4, takes 4 from Dave) Note that Chuck rolled a 5 but could only take 4 from Ben, since Ben's bowl only contained 4. I'd like to understand how I can calculate the expected value of Evan's bowl after 10 rounds. The idea here is to gain a deeper understanding of variability on the throughput of a factory process. I've simulated the problem, but I'd like to understand how to solve it without simulation. I have high school math, and have dabbled in discrete math and probability, but my understanding is fairly elementary. UPDATE: So this was not clear from the initial statement of the question, but the bowls don't empty between rounds. Players will also try to take the number they roll, limited only by the matches available.
Let $R_i, i = 1, 2, \ldots n$ be the die value of the $i$-th kid, where $n$ is the total number of kids participating in the game. Assuming the die are fair, we have $R_i$ are i.i.d. discrete uniform random variables on $\{1, 2, 3, 4, 5, 6\}$. Let $T_i$ be the number of matches taken from the previous bowl (or matchbox) by the $i$-th kid. Then by the rules of the game, $$T_1=R_1, T_i=\min\{T_{i−1}, R_i\},i = 2, 3, \ldots n $$ The number of matches remain in the $i$-th bowl after being taken by the next kid is $$T_i − T_{i+1},i = 1, 2, \ldots n−1$$ and since there is no kid taking from the last bowl, all $T_n$ matches remain in the $n$-th bowl. Now you are interested in $E[T_n]$, the expected number of matches that the last kid have (in $1$ round). If you play $m$ rounds, you can just multiply by $m$ as the expectation is linear. If we further investigate into those $T_i$, it is not hard to check that $$ T_2 = \min\{T_1, R_2\} = \min\{R_1, R_2\}$$ $$ T_3 = \min\{T_2, R_3\} = \min\{\min\{R_1, R_2\}, R_3\} = \min\{R_1, R_2, R_3\}$$ and so on. So inductively we have $$ T_i = \min\{R_1, R_2, \ldots, R_i\}, i = 1, 2, \ldots n $$ i.e. the running minimum of the previous rolls. You can verify this with the example you made, where the roll result $R_i$ are $(6, 4, 5, 3, 1)$ and the number of matches taken $T_i$ are $(6, 4, 4, 3, 1)$. 
With all the above assumptions, we can compute the survival function of $T_n$: $$ \begin{align} \Pr\{T_n > t\} &= \Pr\{\min\{R_1, R_2, \ldots R_n\} > t\} \\ &= \Pr\left\{\bigcap_{i=1}^n R_i > t\right\} \\ &= \prod_{i=1}^n \Pr\{R_i > t\} \\ &= \begin{cases} 1 & \text{when} & t < 1 \\ \displaystyle \left(1 - \frac {\lfloor t \rfloor} {6}\right)^n & \text{when} & 1 \leq t < 6 \\ 0 & \text{when} & t \geq 6 \end{cases} \end{align}$$ Since $T_n$ is a positive, discrete random variable, we can make use of the survival function to compute the expectation: $$ E[T_n] = \sum_{t=0}^5 \Pr\{T_n > t\} = \sum_{t=0}^5\left(1 - \frac {t} {6}\right)^n = \frac {1^n + 2^n + 3^n + 4^n + 5^n + 6^n} {6^n} $$ and multiply by the number of rounds $m$. From the nature of the minimum, as the number of players increases, the expected number of matches decreases, approaching $1$.
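The enumeration below (a Python sketch with exact rationals) confirms the formula $E[T_n]=\frac{1^n+\cdots+6^n}{6^n}$ against a brute-force computation of $E[\min]$ over all $6^n$ roll outcomes for small $n$:

```python
from fractions import Fraction
from itertools import product

def expected_min(n):
    # E[min of n fair d6 rolls] by exhaustive enumeration of all 6^n outcomes
    total = sum(min(rolls) for rolls in product(range(1, 7), repeat=n))
    return Fraction(total, 6**n)

def formula(n):
    return Fraction(sum(k**n for k in range(1, 7)), 6**n)

for n in range(1, 7):
    assert expected_min(n) == formula(n)

# with n = 5 rollers, the last bowl gains about 1.57 matches per round,
# well below the 3.5 a lone roller would average
assert abs(float(formula(5)) - 1.5691) < 1e-3
```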
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $b_n$ is convergent, then $a_n$ is also convergent Let $(a_n)_{n\geq 1}$ and $(b_n)_{n \geq 1}$ be two sequences of real numbers such that $$b_n=a_{n+2}-5a_{n+1}+6a_{n}, \: \forall n \geq 1$$ Prove that if $(b_n)$ is convergent, then $(a_n)$ is also convergent. I defined $c_n=a_{n+1}-2a_n$ and the relation became $b_n=c_{n+1}-3c_n.$ Then I tried to prove that $c_n$ is convergent by expressing $c_n$ only in terms of $b_n, b_{n-1}, \dots b_1$ and $c_1$, but the convergence doesn't follow from here and I got stuck. EDIT: As proven below, this statement is false !
Setting $b_n = 1$ (which is clearly convergent) and $a_1 = a_2 = 1$, we get $$ a_n = \frac16(3\cdot 2^n - 3^n + 3) $$ (WolframAlpha calculation), which doesn't converge.
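The counterexample is easy to verify with exact arithmetic (a Python sketch):

```python
from fractions import Fraction

def a(n):
    # closed form from the answer: a_n = (3*2^n - 3^n + 3) / 6
    return Fraction(3 * 2**n - 3**n + 3, 6)

assert a(1) == 1 and a(2) == 1
for n in range(1, 50):
    # b_n = a_{n+2} - 5 a_{n+1} + 6 a_n is the constant sequence 1 ...
    assert a(n + 2) - 5 * a(n + 1) + 6 * a(n) == 1
# ... yet a_n diverges to -infinity, so (a_n) does not converge
assert a(100) < -10**40
```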
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
What does the vector space R^[0,1] mean? While reading a book Linear Algebra Done Right, I came to knew that a vector space $\mathbf{R}^n$ represents a space with dimensions as $(x_1, x_2, ...,x_n)$, but there were other vector spaces that I could not understand. There was a statement as Ref: 1.35 The set of continuous real-valued functions on the interval $[0,1]$ is a subspace of $\mathbf{R}^{[0,1]}$ What kind of space does $\mathbf{R}^{[0,1]}$ represent? Is this a space that can continuously be from $0$ dimension to $1$ dimension? Another statement, Ref: 1.35 The set of differentiable real-valued functions on $\mathbf{R}$ is a subspace of $\mathbf{R}^\mathbf{R}$ What kind of space is $\mathbf{R}^\mathbf{R}$? Similarly, there were other subspaces as, $\mathbf{R}^{(0,\ 3)}$ and $\mathbf{R}^{(-4,\ 4)}$ Explain me how can I visualize such spaces. If you can explain with the proof too, that will be great.
The set $\mathbb{R}^{[0,1]}$ is the set of all functions from $[0,1]$ into $\mathbb R$ and the set $\mathbb{R}^\mathbb{R}$ is the set of all functions from $\mathbb R$ into itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How can we visualize that $2^n$ gives the number of ways binary digits of length n? Like if we have to find the number of ways can be represented in bits up to 4 places. We use $2^4$, but why do we use this method?
Because for each bit you have $2$ different symbols. Then if $n$ denotes the number of bits, this means that we have to make a choice between $1$ and $0$ a total of $n$ times, which gives: $$\underbrace{2\cdot2\cdot2\cdot\dots\cdot2}_{n\text{-times}}=2^n$$ This happens because every choice is independent from the others. If you want you can prove it easily by induction.
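Direct enumeration (a Python sketch) confirms the count:

```python
from itertools import product

# enumerate every distinct bit string of length n and count them
for n in range(0, 12):
    strings = set(product("01", repeat=n))
    assert len(strings) == 2**n
```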
{ "language": "en", "url": "https://math.stackexchange.com/questions/2596918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Is my reasoning correct (regarding minimal distance on graph) Question: Given two point $A(0,10)$ and $B(30,20)$, find the point $P$ on x axis for which sum of distances from given points to the required point is minimum Now i could form a lengthy equation and use differentiation but instead I decided to do something else and would like to verify if what i did is correct Consider a triangle formed by above given points and $(x,0)$. Let the distance between $A$ and $B$ be $c$, distance between $P$ and $A$ be $a$ and distance between $P$ and $B$ be $b$. By the cosine law $c^2=a^2+b^2-2ab\cos(\angle BPA)$. Now since c is constant, $a^2+b^2$ will be minimum when $2ab\cos(\angle BPA)=0$ and thus when $\angle BPA=\pi/2$ Now, since $PA$ and $PB$ intersect at $90\deg$, the product of thier slope = $-1$ $\implies {-10 \over x}{20 \over 30-x}=-1$ $\implies x=20$ or $10$ Is my reasoning correct? And why does it give two different answers? (Minima is at $10$) According to my reasoning both answers should be minimal values EDIT: I got why 20 doesnt give right answer. Its because I'm Minimizing $a^2+b^2$ while i need to minimize a + b. Does this invalidate the other part to? that is will it work on all triangles or not?
Consider the reflection of $B$ across the $x$-axis: $B'=(30,-20)$. For any point $P$ on the $x$-axis we have $PB=PB'$, so $PA+PB=PA+PB'\ge AB'$ by the triangle inequality, with equality exactly when $P$ lies on the segment $AB'$. Hence the optimal $P$ is the intersection of the $x$-axis with the line joining $A$ and $B'$. This line is $y=-x+10$, so $P=(10,0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $i$ is a root, then $-i$ is also a root? I have the following question: Prove that i is a roof of the equation $g(z) = 0$, where $g(z) = z^3 - 3z^2 + z - 3$. Find the other roots of this equation. The answer says without any working out: Since i is a root, then -i is a root. (The answers are $[z - i], [z + i]$, and $[z-3]$) Why is this? I know that I can factor out [z-3] and then continue from there, however the answer doesn't do that. I also plotted a graph: However, that doesn't show the roots at i and -i. Basically, can I assume that for any polynomial with i as a root, -i is also a root? And why? Thank you.
Any polynomial with real coefficients with root $r = a+ib$ also has root $\bar{r} = a-ib$. This is known as the conjugate root theorem.
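A quick check in Python (a sketch, not needed for the proof) that $\pm i$ and $3$ are the roots of $g(z)=z^3-3z^2+z-3$ and that the factorization reproduces $g$:

```python
import random

def g(z):
    return z**3 - 3*z**2 + z - 3

# i and -i are both roots (real coefficients => conjugate pairs); 3 is the third
for root in (1j, -1j, 3):
    assert abs(g(root)) < 1e-12

# (z - i)(z + i)(z - 3) = (z^2 + 1)(z - 3) agrees with g at random points
random.seed(0)
for _ in range(100):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs((z - 1j) * (z + 1j) * (z - 3) - g(z)) < 1e-9
```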
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Function $u$ solves the heat equation $\Longrightarrow$ $\langle x, \nabla u \rangle + 2t u_t$ solves the heat equation I am baffling with this homework problem: Assume that a smooth function u solves the heat equation $u_t − \Delta u = 0.$ Show that also the function $v(x, t) = \langle x, ∇u(x, t) \rangle + 2tu_t (x, t)$ solves the heat equation. Here $\langle \cdot, \cdot \rangle$ denotes the innerproduct $ f,g \mapsto \int_{\mathbb{R}^N} f(x) \cdot g(x) dx$ . I was considering to use integration by parts but, to my understanding, nothing guarantees that the solution $u$ even vanishes on the boundaries. Some clues are welcome.
For $\langle \cdot, \cdot \rangle $ denoting the Euclidean scalar product you get $$v_t = \nabla u \cdot \boldsymbol{x}_t + \boldsymbol{x} \cdot \partial_t \nabla u + 2 (t u_{tt} + u_t) = \boldsymbol{x} \cdot \partial_t \nabla u + 2 (tu_{tt} + u_t) \tag{1}$$ (the first term vanishes because $\boldsymbol{x}_t=0$). Use Green's Vector identity for the $\Delta \langle \boldsymbol{x}, \nabla u \rangle$ and the fact that $\nabla \times \nabla v \equiv 0 \: \forall \: v \in \mathbb{R}^n$ \begin{align}\Delta v &= \nabla u \cdot \Delta \boldsymbol{x} - \boldsymbol{x} \cdot \Delta \nabla u + 2 \nabla \cdot \Big( (\boldsymbol{x} \cdot \nabla) \nabla u + \boldsymbol{x} \times (\nabla \times \nabla u) \Big) + 2 t \Delta u_t \\ &= - \boldsymbol{x} \cdot \Delta \nabla u + 2 \nabla \cdot \Big( (\boldsymbol{x} \cdot \nabla) \nabla u \Big) + 2 t \Delta u_t \tag{2} \end{align} The terms including $t$ cancel each other for a smooth $u$ (can interchange differentiations arbitrarily). Formally, you can define the operator $2 t \partial_t$ which is then applied to $u_t - \Delta u$. To simplify the remaining terms, index notation is employed. \begin{align} - \boldsymbol{x} \cdot \Delta \nabla u + 2 \nabla \cdot \Big( (\boldsymbol{x} \cdot \nabla) \nabla u \Big) =& -x_i \partial_i \partial_j\partial_j u + 2 \partial_i (x_j \partial_j\partial_i u ) \\ &= -x_i \partial_i \partial_j\partial_j u + 2 \big( \delta_{ij} \partial_j \partial_i u + x_i \partial_i\partial_j\partial_j u \big) \\ & = 2 \partial_i \partial_i u + x_i \partial_i\partial_j\partial_j u \\ &= 2 \Delta u + \boldsymbol{x} \cdot \nabla \Delta u \end{align} Clearly, $2u_t - 2 \Delta u = 0$. By interchanging the temporal and spatial derivatives and applying the operator $\boldsymbol{x} \cdot \nabla$ to the heat equation, the last remaining terms cancel as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the sum of exponents of a symbol in a word of a free group have a specific name in the literature? Let $A$ be a non-empty set and let $F(A)$ be the free group it generates. An element of $F(A)$ is of the form $$w = a_1^{\varepsilon_1}a_2^{\varepsilon_2}\cdots a_{n-1}^{\varepsilon_{n-1}}a_n^{\varepsilon_n}$$ where $a_i\in A$ and $\varepsilon_i = \pm 1$ for all $i=1,\ldots,n$. Let us fix a specific element, $b\in A$. We can define the group morphism $w_b:F(A)\to \mathbb{Z}$ where $$w_b(a_i) = \begin{cases}1 & \text{if }a_i = b\\ 0 & \text{otherwise.} \end{cases}$$ With this definition, $$w_b(w) = \sum_{\substack{1\leq i\leq n \\ a_i = b}} \varepsilon_i.$$ For example, $w_b(a^2b^{-1}cb^3a^{-1}) = 2$. It seems like such a morphism would have a name in the literature. If so, what is it?
If $B$ is a subset of $A$, I would call projection from $F(A)$ onto $F(B)$ the morphism $\pi_B: F(A) \to F(B)$ defined, for each $a \in A$, by $$ \pi_B(a) = \begin{cases} a &\text{if $a \in B$}\\ 1 &\text{otherwise} \end{cases} $$ In particular, your morphism would be the projection from $F(A)$ onto $F(b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How would I solve the following congruence? $5x \equiv 118 (\mod 127)$ This is what I have done so far: $127 = 25*5+2$ $2 = 127-25*5$ $25 = 12*2+1$ $1 = 25-12*2$ $1 = 25-12(127-25*5)$ I am a little stuck on how to continue. I know I am supposed to write it in a form such that $5v+127w = 1$ but I am not exactly sure how I would do that. Any help?
To solve it naively, start from your second line of solution: $$127-25 \times5=2 \tag{1}$$ Now $$118/2=59.$$ Multiply (1) by 59: $$59\times 127-5\times 1475=118$$ In particular $5\cdot(-1475)\equiv 118 \pmod{127}$, so $x \equiv -1475 \equiv 49 \pmod{127}$ (check: $5\times 49=245=127+118$). All the integer solutions of $127x+5y=118$ are of the form $$x=59+5k\tag{2.}$$ and $$y=-1475-127k \tag{3.}$$ Or you can adopt the extended Euclidean algorithm to solve, as shown in other answers.
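As a cross-check (my own addition, not part of the original answer), the extended Euclidean algorithm can be run in a few lines of Python; the function names here are my own:

```python
def ext_gcd(a, b):
    # extended Euclidean algorithm: returns (g, u, v) with u*a + v*b == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

def solve_congruence(a, c, m):
    # solve a*x ≡ c (mod m); assumes gcd(a, m) == 1 so that a is invertible mod m
    g, u, _ = ext_gcd(a, m)
    assert g == 1
    return (u * c) % m

x = solve_congruence(5, 118, 127)  # gives 49, since 5*49 = 245 = 127 + 118
```

Here `solve_congruence(5, 118, 127)` returns $49$, agreeing with the hand computation above.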
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 4 }
Fundamental theorem of algebra applied to $y=x^2$ The fundamental theorem of algebra seems to indicate that there should be two roots for $y=x^2;$ however, applying the quadratic formula leads to just $0.$ Considering that $+0$ and $-0$ make no sense, what is the reasoning behind this superficial contradiction?
I don't know what exact statement of the Fundamental Theorem of Algebra you're using. The Fundamental Theorem of Algebra guarantees at least one root. Sometimes you'll see it stated that for a polynomial degree $n$, there are $n$ roots (some with multiplicity), but multiplicity is defined precisely by how many times the factor $(x-\lambda)$ is repeated in factorization of the polynomial. In this case, $0$ is a root of multiplicity $2$ because $x^2 = (x-0)(x-0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$\frac{1}{1+x^2}$'s Taylor expansion about a point $a\in\mathbb{R}$, given by $f(x) = \sum_{n = 0}^\infty a_n (x -a)^n$. Radius of convergence? Let $f(x) = \frac{1}{1+x^2}$. Consider its Taylor expansion about a point $a \in \mathbb{R}$ given by $$f(x)=\sum_{n = 0}^\infty a_n(x-a)^n.$$ What is the radius of convergence? My try: $\tan^{-1}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \cdots$ Differentiating both sides I get $\frac{1}{1+x^2} = 1 - x^2 +x^4 - \cdots$ How can I proceed from here? Can anyone please help me. Answer is $\sqrt{a^2+1}$.
I do not know if there is a simple proof by real analytic methods but it is elementary when you consider the complex function $\frac 1 {1+z^{2}}$. The largest disc around a on which the function is analytic has radius $(1+a^{2})^{1/2}$, the distance from a to the points $i,-i$.
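This complex-analytic prediction can be illustrated numerically (a sketch of my own, not from the answer). Writing $\frac{1}{1+x^2}=\frac{1}{2i}\left(\frac{1}{x-i}-\frac{1}{x+i}\right)$ and expanding each term as a geometric series around $a$ gives the Taylor coefficients in closed form, and the root test then recovers $\sqrt{a^2+1}$; the function names are my own:

```python
import math

def taylor_coeff(a, n):
    # n-th Taylor coefficient of 1/(1+x^2) about x = a, from the geometric
    # expansions 1/(x-i) = sum_n (-1)^n (x-a)^n / (a-i)^(n+1) and its conjugate
    c = ((-1) ** n / 2j) * ((a - 1j) ** -(n + 1) - (a + 1j) ** -(n + 1))
    return c.real  # the coefficients are real

def radius_estimate(a, lo=100, hi=130):
    # root test: R = 1 / limsup |a_n|^(1/n); taking the max over a window of n
    # sidesteps the coefficients that happen to vanish exactly
    return 1 / max(abs(taylor_coeff(a, n)) ** (1.0 / n) for n in range(lo, hi))
```

For $a=1$ the estimate comes out close to $\sqrt{2}\approx 1.414$, the distance from $1$ to $\pm i$.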
{ "language": "en", "url": "https://math.stackexchange.com/questions/2597799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniform convergence of $\int_0^{+\infty}\frac{1-\cos{\alpha x}}{x}e^{-\lambda x}dx$ Please help me to solve the following problem: Let $\alpha, \lambda>0$. Prove uniform convergence of the improper integral with respect to $\alpha$: $$\int_0^{+\infty}\frac{1-\cos{\alpha x}}{x}e^{-\lambda x}dx$$ My attempt: I think I should use Dirichlet's test. Is it correct? My notes contain the following variant: Let $f(x,\alpha), g(x, \alpha)$ be defined on $[0,+\infty)\times(0, +\infty)$. Then the improper integral $\int_0^{+\infty}f(x,\alpha) g(x, \alpha)\ dx$ converges uniformly on $(0,+\infty)$ if the following conditions hold: * *$f(x,\alpha), g(x, \alpha)$ are continuous on $[0,+\infty)\times(0, +\infty)$. *$|\int_0^{B}f(x,\alpha)\ dx|< C$ for some $C>0$ and for all $B>0$ and $\alpha \in (0,+\infty)$. *$g(x, \alpha)$ is monotone with respect to $x$ for any fixed $\alpha$ and $g(x,\alpha)\to0$ uniformly when $x \to +\infty$ and $\alpha \in (0,+\infty)$. Ok, the next thing is to get $f$ and $g$. I think we should take $f(x,\alpha)=\frac{1-\cos{\alpha x}}{x}$ and $g(x,\alpha)=e^{-\lambda x}$. Is it correct? Ok, now I can claim that condition $3$ is satisfied since $g(x, \alpha)$ is monotone with respect to $x$ and does not depend on $\alpha$ and $g(x, \alpha) \to 0$ as $x \to +\infty$. I also claim that condition $2$ holds as well. Is it correct? I hope the things I stated above are correct; if I made mistakes please let me know. Now let us move to the harder part. * *$g(x, \alpha)$ is clearly continuous. But what about $f(x, \alpha)$? We have a problem at $x=0$. But $f(x, \alpha)=\frac{2\sin^2(\frac{\alpha x}{2})}{x},$ so $f(x, \alpha)$ has the nice limit $0$ as $x \to 0$. Taking this into account, can we still use Dirichlet's test and claim that condition 1 is satisfied? How can we justify that? Do not spend your time if the proof is long; just please provide a reference. *What about condition $2$? I did not make any progress with it.
If my approach with Dirichlet's test is wrong, please let me know and suggest a working approach. Thanks a lot for your help!
Condition 2 is not fulfilled because, loosely speaking, on average $\sin ^2(\frac{\alpha x}2)$ equals $\frac 12$ and the integral of $\frac 1{2x}$ diverges for $x\rightarrow\infty$. It might be easier to fall back on the definition of uniform convergence and find an upper bound for $\int_B^\infty \frac{1-\cos \alpha x}xe^{-\lambda x}dx$ that does not depend on $\alpha$ and converges to $0$ for $B\rightarrow\infty$. There is no need to start at $B=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Product of 2-digit numbers 31 people are dancing in a circle formation. Each person’s age is a 2-digit number, and for each person we know that the units’ digit is equal to the tens’ digit of the person who is in the clockwise position, while the tens’ digit is equal to the units’ digit of the person who is in the counter-clockwise position. Any 2 neighboring digits are different. Check if the product of all 31 ages can be a perfect square. I don’t know how to start :( All I can tell is that the digits appear in pairs: If we have, for example 31 digits, $d_1, d_2, \ldots, d_{31}$, then the ages are: $$ 10 \cdot d_1+d_2, 10 \cdot d_2+d_3, \ldots, 10 \cdot d_{31}+d_1 $$ and obviously the product is $(10 \cdot d_1+d_2) \cdot (10 \cdot d_2+d_3) \cdots (10 \cdot d_{31}+d_1)$. But I don’t know how to continue.
It's possible. To restate the goal, we want a cycle of digits $d_1 \to d_2 \to d_3 \to \ldots \to d_{31} \to d_1$, such that $$ (d_1 d_2) (d_2d_3) (d_3d_4) \ldots (d_{31} d_1) $$ is a perfect square. We can build this up by combining small cycles. First, $$2 \to 7 \to 2$$ is a nice move, as it gives $27 \cdot 72$ -- dividing out squares, we have $3 \cdot 2 = 6$. We can also do the move $$ 2 \to 5 \to 2 $$ which is $25 \cdot 52$, or just $13$ remaining. And we also have $$ 2 \to 6 \to 9 \to 2 $$ which is $26 \cdot 69 \cdot 92$, or $2 \cdot 13 \cdot 3 \cdot 23 \cdot 23 \cdot 4$, or just $6 \cdot 13$. So all of these together give a perfect square cycle of total length $2 + 2 + 3 = 7$: $$ \underbrace{2 \to 7 \to 2 \to 5 \to 2 \to 6 \to 9 \to 2}_{\text{perfect square cycle}}. \tag{a} $$ But that's not good enough -- we need another way of getting a perfect square, because cycles of length $7$ can't add up to get total length $31$. So here's another cycle: $$ \underbrace{2 \to 4 \to 2 \to 1 \to 2}_{\text{perfect square cycle}}. \tag{b} $$ Just to verify this, we have $24 \cdot 42 \cdot 21 \cdot 12$, dividing out squares and factoring $6 \cdot (6 \cdot 7) \cdot (3 \cdot 7) \cdot 3$, so everything ends up squared. This time, the cycle has length $4$. (This was a bit more complicated than necessary. We could just have done $2 \to 5 \to 2 \to 5 \to 2$, or $2 \to d \to 2 \to d \to 2$ for any $d$, for that matter.) Then we are done: by starting from $2$ and traversing cycle (a) once and cycle (b) six times, the total length is $$ 7 + 4 + 4 + 4 + 4 + 4 + 4 = 31. $$ Some intuition (if you know graph theory): the whole length-$31$ cycle can be thought of as a cyclic path in the complete graph with $9$ vertices, where each vertex is a different digit. The path of length $31$ then decomposes into a bunch of cycles of small length ($\le 9$).
The procedure to decompose into small cycles is as follows: for any cycle of length greater than $9$, it must repeat some vertex twice; therefore, it splits into two cycles from the repeated vertex. So the goal is basically to find a bunch of small cycles whose total length is $31$, and which multiply to a perfect square. Hence, my approach was to start looking for small cycles and what they multiply to, and then try to get them to cancel out.
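The construction above can be checked mechanically; here is a small Python sketch (my own addition, not part of the original argument) that builds the digit cycle (cycle (a) once, followed by six copies of cycle (b)) and verifies the claims:

```python
from math import isqrt

# cycle (a) once, then six copies of cycle (b), read cyclically
digits = [2, 7, 2, 5, 2, 6, 9] + [2, 4, 2, 1] * 6

# the age of person i: tens digit d_i, units digit d_{i+1} (indices mod 31)
ages = [10 * digits[i] + digits[(i + 1) % 31] for i in range(31)]

product = 1
for age in ages:
    product *= age

perfect_square = isqrt(product) ** 2 == product
neighbors_differ = all(digits[i] != digits[(i + 1) % 31] for i in range(31))
```

Both `perfect_square` and `neighbors_differ` come out `True`, and all 31 ages are genuine 2-digit numbers.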
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
If $2^p -1$ is a prime then $p$ is a prime I have been trying to solve the following problem: If $2^p-1$ is a prime then prove that $p$ is a prime, where $p \geq 2$. Which way should I go to prove this? Using Fermat's or Bézout's theorem? Or is it something else? This is what I managed to come up with; I am confused about it and not sure if it is correct: If $p$ is a prime and $2 \nmid p \rightarrow$ gcd$(2,p)=1$ so, $2^{p-1} \equiv 1(p) $ $2^p\equiv2(p)$ so, $2^p\not\equiv 1(p) $ and since $2\not\equiv 1 (p)$ conc: $p\nmid2^p -1$ Any hints are appreciated.
If $p=mn$, where $m>1$ and $n>1$, then $2^{mn}-1$ is divisible by $2^m-1$ and by $2^{n}-1$.
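Concretely, $2^{mn}-1=(2^m-1)\bigl(2^{m(n-1)}+2^{m(n-2)}+\cdots+1\bigr)$, so both factors exceed $1$ when $m,n>1$. A quick numerical sanity check (my own illustration, not part of the answer):

```python
def mersenne(k):
    return 2 ** k - 1

# if p = m*n with m, n > 1, then 2^m - 1 and 2^n - 1 are proper divisors
# of 2^p - 1, so 2^p - 1 cannot be prime
for m in range(2, 8):
    for n in range(2, 8):
        assert mersenne(m * n) % mersenne(m) == 0
        assert mersenne(m * n) % mersenne(n) == 0
```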
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Path in a directed graph Let $G$ be a directed graph with $2^k$ vertices where there is exactly one edge between each two vertices. Prove that that regardless of the directions (orientations) of the edges there exist a path in $G$ which goes through $k+1$ unique vertices. I know that there are $\binom{2^k}2$ edges, but that's about it. Could someone give a hint as to how I should proceed?
Hint: Try splitting the graph in half, and consider induction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $XY$ crosses the midpoints $\triangle ABC$ has altitudes $AD$, $BE$, $CF$. The reflections of $E$, $F$ in $H$ are $E'$, $F'$. The circle $DE'F'$ intersects $BE$, $CF$ at $X$, $Y$. Prove that $XY$ goes through the midpoints of $AB$, $AC$. I can show that $XY$ is parallel to $BC$ by simply angle-chasing. $EYFX$ is cyclic as well as $APFY$. I also tried showing that $AP$=$PH+HD$
To illustrate the idea of the solution, consider the following picture first, where the mid points $B_1$, $C_1$ of $CA$, $AB$, and the reflection $D'$ of $D$ in $H$ are also present. We want to show that the points $X,H,D,B_1$ are on a cycle. This would be enough to conclude, since from here we are allowed to write the first equality in the following chain: $$ \widehat{XB_1D} = 180^\circ - \widehat{XHD} = \widehat{XHA} = \widehat{EHA} = \widehat{EFA} = \hat C= \widehat{B_1DC} \ . $$ This gives $XB_1\|DC=BC$. Similarly $YC_1\|BC$. And now combine this with $XY\|BC$ obtained by the pink angle chasing from the picture, to obtain the collinearity of $B_1,C_1,X,Y$. So let us show that $(XHDB_1)$ is cyclic by considering the two marked angles in $X$ and $B_1$ against the (a posteriori insured) arc $\overset\frown{HD}$. The angle in $X$ is complicated, but we will move it to a simple place via: $$ \hat X = \widehat{E'XD} = \widehat{E'F'D} = \widehat{EFD'} \ . $$ So we need to clear $$\widehat{EFD'} \overset ?= \widehat{HB_1D}\ . $$ After adding the same angle, $\widehat{EFD} = 2\widehat{EFH}=2(90^\circ-\hat C)=\widehat{DB_1C}$, on both sides, we have to clear equivalently: $$ \widehat{D'FD} \overset ?= \widehat{HB_1C}\ . $$ And indeed, let us show the similarity of triangles $\Delta D'FD\sim\Delta HB_1C$. First of all, we clear an angle, $$ \widehat{FDD'} = \widehat{FDA} = \widehat{FCA} = \widehat{HCB_1}\ . $$ It remains to get a proportion, and this is: $$ \frac{D'D}{HC} = 2\cdot \frac{HD}{HC} = 2\cos B =2\cdot\frac{BD}{BA} =2\cdot\frac{DF}{AC} % \text{ from }\Delta BDF\sim\Delta BAC = \frac{DF}{AC/2} = \frac{DF}{CB_1}\ . $$ $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
If $5|P(2),2|P(5)$ then which of the followings divides $P(7)$? Suppose that $P(x)$ is a polynomial with integer coefficients.If $5|P(2),2|P(5)$ then which of the followings divides $P(7)$? a)10 b)7 c)3 d)4 e)8 $5|P(2),2|P(5)\Rightarrow5-2|p(5)-p(2)$ but this doesn't help.I don't know any property of polynomials useful here.
Meta-solution There is no restriction on the degree of your polynomial. If you take the constant polynomial $P(x)=10$, the required conditions are satisfied. So the only coherent answer is $a)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Continuous surjective map Does there exist a continuous surjective map from $\{(x, y)\in \mathbb{R^2} | x^2-y^2=1\}$ to $\mathbb{R}$? Any help would be greatly appreciated. I was trying to work with taking one branch of the hyperbola and restricting the function to it. But I cannot understand how to proceed much further.
Yes! Send the points of the hyperbola with $x>0$ to the positive numbers and the points with $x<0$ to the negative ones, mapping $(1,0)$ and $(-1,0)$ to $0$: for instance, take $f(x,y)=x-1$ on the branch with $x>0$ and $f(x,y)=x+1$ on the branch with $x<0$. Since the two branches are the connected components, $f$ is continuous, and it is surjective onto $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Bridgeland-Stability Condition: Why is the Harder-Narasimhan filtration unique? I'm trying to understand Bridgeland's notion of stability condition on a triangulated category as defined in Definition 1.1 of the paper Stability conditions on triangulated categories. See also Definition 3.3 in the same paper. The decompositions of a non-zero object as in Definition 3.3(c) are the Harder-Narasimhan filtrations. Question: Why is the Harder-Narasimhan filtration of any non-zero object unique (up to isomorphism of the semistable factors)? Thank you for your help!
The short answer is that the filtrations are unique because they arise as factorization of an orthogonal, multiple factorization system: see here §4.1. The even shorter answer is that you can prove it for triangulated categories, but it's unsatisfying.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show $f$ is not differentiable at $x = 0$. Define $f(x) = \begin{cases} x & x\in \mathbb{Q} \\ 0 & x \in \mathbb{R}\setminus\mathbb{Q} \end{cases}$ Is $f$ differentiable at $0$? Is $g(x) = xf(x)$ differentiable at $0$? Is my reasoning correct? Is there a simpler way to prove $f'(0)$ does not exist? I was typing this up because I was stuck on it but I ended up figuring it out so I thought why not answer my own question.
For $h \neq 0$ you have $$\frac{f(h)-f(0)}{h-0}=\begin{cases} 1 & \text{for } h \in \mathbb Q\\ 0 & \text{for } h \in \mathbb{R}\setminus\mathbb{Q} \end {cases}$$ So $\lim\limits_{h \to 0} \frac{f(h)-f(0)}{h-0}$ can’t exist as a limit is unique. Hence $f$ is not differentiable at $0$. $g$ is differentiable at $0$ and $g^\prime(0)=0$ as for all $x \neq 0$ $$\left\vert \frac{g(x)}{x} \right\vert \le \vert x \vert.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Probability and Combinatorial Group Theory. If this is too broad or is otherwise a poor question, I apologise. I learnt recently that the probability that two integers generate the additive group of integers is $\frac{6}{\pi^2}$. What other results are there like this? I'm looking for any results of probability applied to group theory, preferably combinatorial group theory, in manner such as the one above.
For a finite group you can compute the probability that any two random elements will commute by looking at certain conjugacy classes.
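To make this concrete (my own illustration, not part of the answer): the probability that two uniformly random elements commute equals $k(G)/|G|$, where $k(G)$ is the number of conjugacy classes, and one can brute-force small groups. For $S_3$, with $3$ conjugacy classes and $6$ elements, the probability is $1/2$:

```python
from itertools import permutations

def compose(p, q):
    # composition of permutations stored as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

G = list(permutations(range(3)))  # the symmetric group S_3, |G| = 6
commuting = sum(1 for a in G for b in G if compose(a, b) == compose(b, a))
prob = commuting / len(G) ** 2    # 18/36 = 1/2 = k(S_3)/|S_3| = 3/6
```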
{ "language": "en", "url": "https://math.stackexchange.com/questions/2598998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
How to prove a problem is #P-complete? What are the major steps in proving a problem is #P-complete? For example, I know that showing a problem is NP-complete requires (i) showing the problem is in NP by giving a polytime verification algorithm, (ii) showing an existing NP-hard problem is polytime reducible to the given problem. What would be the corresponding steps for a #P-completeness proof? Any resources would be helpful. From Wikipedia I have: "By definition, a problem is #P-complete if and only if it is in #P, and every problem in #P can be reduced to it by a polynomial-time counting reduction..." What makes a problem be in #P? Would reducing an exiting #P-complete problem to a given problem be enough?
A good treatment of the complexity of counting problems may be found in Moore and Mertens and Arora and Barak. In summary, the common approach to proving #P-completeness is through reduction. Given a known #P-complete problem $P_1$, if we can find an appropriate reduction of $P_1$ to $P_2$, then $P_2$ is also #P-complete. First, appropriate means that the reduction allows us to efficiently retrieve the number of solutions to $P_1$ from the number of solutions to $P_2$. We call such a reduction counting and, specifically, parsimonious if it leaves the number of solutions unchanged. Second, as in proving NP-completeness by reduction, the reduction should make solving $P_1$ "easy" if we could "easily" solve $P_2$. Hence the reduction should take polynomial time. A final simple observation that makes the reduction approach work for #P-completeness more or less as it does for NP-completeness is that composing two counting reductions yields a counting reduction. Given a forest of known #P-complete problems, we can attach a new branch wherever we find it most convenient.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
To use trigonometric identities to find the value of $\tan\alpha$ I'm stuck on this problem: If $\alpha+\beta=\frac{\pi}{2}$ and $\beta+\gamma=\alpha $ then find the value of $\tan\alpha$ I have tried isolating $\alpha$ but ended up getting $\tan\alpha = \frac{1}{\tan\beta}$ and $\tan\beta=\frac{1 - \tan\frac{\gamma}{2}}{1 + \tan\frac{\gamma}{2}}$ Could someone please tell me how to properly approach this question?
$$\alpha+\beta=\dfrac\pi2$$ $$\alpha-\beta=\gamma$$ Add these to find $\alpha$ in terms of $\gamma$. Then apply the $$\tan(x+y)$$ formula.
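Carrying the hint through (my own completion, consistent with the partial results already in the question): adding the two relations gives $$2\alpha=\frac{\pi}{2}+\gamma \quad\Longrightarrow\quad \alpha=\frac{\pi}{4}+\frac{\gamma}{2},$$ and then $$\tan\alpha=\tan\left(\frac{\pi}{4}+\frac{\gamma}{2}\right)=\frac{1+\tan\frac{\gamma}{2}}{1-\tan\frac{\gamma}{2}}.$$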
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Understanding cutting plane proofs I am trying to understand the following definition: Def: A cutting plane proof from the system $Ax\leq b$ for an inequality $c^Tx\leq d$ is a sequence of inequalities $c_i^T\leq d_i$, $(i = 1,\ldots, k)$ with the following properties i) every $c_i$ is integral, ii) $c_k = c$ and $d_k = d$, iii) for every $i$ there is a number $d_i'$ satisfying $\lfloor d_i'\rfloor \leq d_i$, such that $c_i^T x\leq d_i'$ is a nonnegative combination of the inequalities $Ax\leq b$ and $c_1^Tx\leq d_1,\ldots, c_{i-1}^Tx\leq d_{i-1}$. Question: Could someone explain to me how the number $k$ is specified, and give an example of a cutting plane proof? I have tried to find examples that match this definition, but I haven't found any. Thanks in advance!
Suppose $Ax\le b$ is the following system: $$(1): -x_1-3x_2\le -3\\ (2): 8x_1+3x_2\le 24\\ (3): -2x_1+x_2\le 1 $$ The solution set is represented by the following graph: Suppose we want to derive $c^Tx\le b$ where $c^T=(0,1),b=3$, i.e., $x_2\le 3$ from $Ax\le b$. From the picture, the problematic point is the intersection of (2) and (3). So we proceed as follows. First we obtain $c_1^Tx\le d_1$ by performing: $$\frac{1}{2}\cdot((2)+(3)): 3x_1+2x_2\le 12.5.$$ Note that this is a nonnegative combination of the inequalities in $Ax\le b$. Now we set $d_1'=12$ which is the floor of $d_1=12.5$. This is $c_1^Tx\le d_1'$. We name it (4). Next we obtain $c_2^Tx\le d_2$ by performing: $$\frac{1}{7}(3\cdot (3)+2\cdot(4)): x_2\le \frac{27}{7}.$$ Note that this is again a nonnegative combination of the inequalities in $Ax\le b$ and $c_1^Tx\le d_1'$. Making $d_2'=\lfloor d_2\rfloor=3$ gives the desired result. So $k=2$ and $c_2^Tx\le d_2'$ is exactly $c^Tx\le d$. P.S. I think there are some typos in your definition.
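The arithmetic of the two combination steps is easy to verify mechanically. The following sketch is my own (the helper name and row encoding are mine): each inequality $c_1x_1+c_2x_2\le d$ is stored as a coefficient row, and exact rationals keep the weights $\frac12$, $\frac37$, $\frac27$ free of rounding error.

```python
from fractions import Fraction as F

def combine(rows, weights):
    # nonnegative combination of inequality rows (c1, c2, d) meaning c1*x1 + c2*x2 <= d
    assert all(w >= 0 for w in weights)
    return tuple(sum(w * r[k] for w, r in zip(weights, rows)) for k in range(3))

i2 = (F(8), F(3), F(24))    # 8x1 + 3x2 <= 24
i3 = (F(-2), F(1), F(1))    # -2x1 + x2 <= 1

i4_prime = combine([i2, i3], [F(1, 2), F(1, 2)])  # 3x1 + 2x2 <= 25/2
i4 = (F(3), F(2), F(12))                          # round the rhs down

i5 = combine([i3, i4], [F(3, 7), F(2, 7)])        # x2 <= 27/7
final = (F(0), F(1), F(3))                        # rounding down gives x2 <= 3
```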
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On weak star convergent sequences of functions that are not strongly convergent anywhere Can we find example of a sequence of functions $g_n:(0,1)\rightarrow\mathbf{R}$ such that $g_n$ converges weak-star in ${\rm L}^{\infty}(0,1)$ as $n\rightarrow+\infty$, but such that for every measurable set $A\subseteq (0,1)$ such that $\lambda(A)>0$ the sequence $g_n\vert_{A}$ has no strongly convergent subsequence in ${\rm L}^{\infty}(A)$? In fact, I am dealing with a sequence of $\theta$-Holder continuous functions, for some fixed $0<\theta<1$ which is independent on $n$. Any chance of finding an example which satisfies all assumptions? Or maybe there is no such example? Thanks in advance.
Remark on previous comment by pozz: I point out that constant functions actually have the bounded oscillation property. In this previous comment we need to take into account that ${\rm sup}_{J^n_i}g_n\geq b$ is never satisfied, provided $g_n=c$ and provided we choose $b>c$. So there exists no sub-interval at all which satisfies both the inf and sup estimates, as required in the definition of the "bounded oscillation" property. Therefore, $g_n:=c$ is both strongly convergent and satisfies the bounded oscillation property. The same argument applies to any sequence of functions which take values within some fixed discrete set (which is independent of $n$). Notice, however, that such sequences of "simple" functions are no longer continuous. Sorry for not answering the question, but maybe this observation helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The Cartesian product of two regular graphs is regular Consider two regular graphs $G_1$ and $G_2$ of degrees $d_1$ and $d_2$, respectively. I would like to prove that the Cartesian product $G_1 \square G_2$ is regular of degree $d_1 + d_2$. Currently I have a proof that revolves around the following facts: * *The adjacency matrix of $G_1 \square G_2$ is $A_{G_1 \square G_2} = A_{G_1}\otimes I_{n_2} + I_{n_1} \otimes A_{G_2}$ where $A_{G_i}$ is the adjacency matrix of $G_i$ and $n_i$ is the number of vertices of $G_i$. *The eigenvalues of the Kronecker product of two matrices $B$ and $C$ are all the possible products of eigenvalues of one eigenvalue of $B$ with one eigenvalue of $C$. *A graph is $d$-regular if and only if $\mathbf{1} = (1,\dotsc,1)^T$ is an eigenvector of its adjacency matrix, in which case the corresponding eigenvalue is $d$. *A graph is $d$-regular if and only if the maximal eigenvalue of its adjacency matrix is $d$. On the other hand, I would like to know if there is a more elementary (but not longer) proof.
There's a far more elementary proof, using just the definition of the Cartesian product. Let $G_1 = (V_1, E_1)$ be $d_1$-regular and let $G_2 = (V_2, E_2)$ be $d_2$-regular. Recall that the Cartesian product is defined as $ G_1 \square G_2 = (V_1 \times V_2, E) $ where $\times$ is the ordinary Cartesian product (on sets) and $$E = \big\{ (v_1, v_2) (u_1, u_2) \,\big|\, (v_1 = u_1 \text{ and } v_2 u_2 \in E_2) \text{ or } (v_2 = u_2 \text{ and } v_1 u_1 \in E_1 )\big\} .$$ For a fixed $(v_1, v_2) \in V(G_1 \square G_2)$, there are $d_2$ neighbors $(u_1, u_2)$ such that $v_1 = u_1$ and $v_2 u_2 \in E_2$ and there are $d_1$ neighbors $(u_1, u_2)$ such that $v_2 = u_2$ and $v_1 u_1 \in E_1$. Assuming the graphs are simple (no loops, etc.), we've counted each neighbor of $(v_1, v_2)$ exactly once and gotten $d_1 + d_2$ of them, no matter what $(v_1, v_2)$ was. Thus $G_1 \square G_2$ is $(d_1 + d_2)$-regular. (The detail here might be a bit overbearing, and could definitely be pared down.)
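The count is easy to confirm on a concrete pair of graphs (my own illustration; the helper functions and the choice of $C_4$ and $K_4$ are mine, not from the answer):

```python
from itertools import combinations

def cycle_graph(n):
    # the n-cycle C_n: 2-regular
    return {frozenset({i, (i + 1) % n}) for i in range(n)}, set(range(n))

def complete_graph(n):
    # the complete graph K_n: (n-1)-regular
    return {frozenset(e) for e in combinations(range(n), 2)}, set(range(n))

def cartesian_product(E1, V1, E2, V2):
    # edge rule of G1 □ G2, exactly as in the definition above
    V = {(v1, v2) for v1 in V1 for v2 in V2}
    E = {frozenset({(v1, v2), (u1, u2)})
         for (v1, v2) in V for (u1, u2) in V
         if (v1 == u1 and frozenset({v2, u2}) in E2)
         or (v2 == u2 and frozenset({v1, u1}) in E1)}
    return E, V

E1, V1 = cycle_graph(4)      # d1 = 2
E2, V2 = complete_graph(4)   # d2 = 3
E, V = cartesian_product(E1, V1, E2, V2)
degrees = {v: sum(1 for e in E if v in e) for v in V}  # all degrees equal 2 + 3 = 5
```

Every one of the $16$ vertices of $C_4 \square K_4$ ends up with degree $2+3=5$.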
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intersection of two cylinders What is the easiest way to find the area of the surface created when the cylinder $$x^2+z^2=1\text { intersects the cylinder }y^2+z^2=1.$$ I have used double integrals and also line integrals to find the area of the surface created when the cylinder $x^2+z^2=1$ intersects $ y^2+z^2=1.$ The line integral is much easier than the double integral. Is there a way to find this surface area without using integrals?
\begin{align} R &= \{ 0<x<1,-x<y<x,z=\sqrt{1-x^2} \} \\ S &= 8\iint_R dA \\ z &= \sqrt{1-x^2} \\ z_x &= -\frac{x}{\sqrt{1-x^2}} \\ z_y &= 0 \\ dA &= \sqrt{1+z_x^2+z_y^2} \, dx \, dy \\ &= \frac{1}{\sqrt{1-x^2}} \, dx \, dy \\ S &= 8\int_{0}^{1} \int_{-x}^{x} \frac{1}{\sqrt{1-x^2}} \, dy \, dx \\ S &= 8\int_{0}^{1} \frac{2x}{\sqrt{1-x^2}} \, dx \\ &= 16 \end{align} See also ancient Chinese work, Nine Chapters《九章算術》commented by Liu Hui(劉徽) here. Addendum In the figure above, the intersections between two cylinders (yellow and cyan) are two ellipses, namely $$x^2=y^2=1-z^2$$ Consider the purple region $T=\{ 0<y<x<1,x=\sqrt{1-z^2} \}$ which is bounded by a semi-circle in $xz$-plane and a semi-ellipse namely $$(x,y,z)=(\sin \theta,\sin \theta,\cos \theta)$$ Now unwrapping the cylinder, $$s=\theta \implies y=\sin s$$ therefore we get a sine curve (green). The area of purple region $T$ is $$\int_0^\pi \sin \theta \, d\theta=2$$ which is $\dfrac{1}{8}$ of the total surface area.
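As a numerical cross-check (my own addition): after the substitution $x=\sin t$, the integral $8\int_0^1 \frac{2x}{\sqrt{1-x^2}}\,dx$ becomes $8\int_0^{\pi/2} 2\sin t\,dt$, which is free of the endpoint singularity and easy to approximate:

```python
import math

def midpoint_rule(f, a, b, n):
    # composite midpoint rule for a smooth integrand on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# 8 * integral_0^{pi/2} 2 sin(t) dt = 8 * 2 = 16
area = 8 * midpoint_rule(lambda t: 2 * math.sin(t), 0.0, math.pi / 2, 10000)
```

The midpoint sum lands on $16$ to high accuracy, matching the exact evaluation above.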
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving a system of third order homogeneous ODEs I am trying to solve this third order system of homogeneous ODEs. * *$x'''=2x+y$ *$y'''=x+2y$ Initial conditions are given as well. Higher order systems weren't covered in the lectures, hence I am a bit lost. Nor can I find any similar questions asked on this site. I have tried to solve the equations in two ways so far. Firstly, express $y$ in terms of $x'''$ and $x$ from the first equation, then differentiate the equation three times, giving: $x^{(6)}-2x'''=y'''$ and plugging this into the second equation finally obtaining: $x^{(6)}-4x'''+3x=0$ A higher order homogeneous ODE with constant coefficients that I can solve with the substitution $x = e^{\lambda t}$. The solution can then be differentiated three times and plugged into $x'''=2x+y$ to obtain $y$. Using the given initial conditions we can then find the specific solution. However, this proves very tedious, especially solving the $6\times6$ system of equations. Is this approach correct and applicable to other similar systems? Or is there a completely different approach to this type of problem I am not aware of? Could you try to solve the system $\vec{u}''' = A \vec{u}$ by diagonalizing $A$ and look for a solution that way? I've tried this as well but I cannot seem to find a solution. Any help is much appreciated.
Because of the symmetric cyclic/circulant nature of the system matrix, you can decouple the system by Fourier transform. Set $u=x+y$, $v=x-y$ to get the system $$ u'''=3u\\ v'''=v $$ which can be both solved with standard methods for linear ODE with constant coefficients.
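To sketch how the decoupled equations are finished off (my own completion, using the standard characteristic-equation method): $u'''=3u$ has characteristic equation $\lambda^3=3$, with one real root $r=3^{1/3}$ and two complex roots $r e^{\pm 2\pi i/3}$, so $$u(t)=C_1e^{rt}+e^{-rt/2}\left(C_2\cos\frac{\sqrt{3}\,rt}{2}+C_3\sin\frac{\sqrt{3}\,rt}{2}\right),$$ and $v'''=v$ has the same form with $r=1$. Then $x=\frac{u+v}{2}$, $y=\frac{u-v}{2}$, and the six constants are determined by the initial conditions.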
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
find the limits $\lim_{x\to \infty} \lfloor f(x) \rfloor=? $ Let $f(x)=\dfrac{4x\sin^2x-\sin 2x}{x^2-x\sin 2x}$, then find the limit: $$\lim_{x\to \infty} \lfloor f(x) \rfloor=? $$ $$f(x)=\dfrac{4x\sin^2x-\sin 2x}{x^2-x\sin 2x}=\dfrac{2\sin x(2x\sin x-\cos x)}{x(x-\sin2x)}$$ What do I do?
A more heuristic argument is as follows: The key observation is that $\sin u$ is always between $-1$ and $1$, no matter what $u$ is. Consider the impact of this on the numerator. We have \begin{align} \text{numerator} & = 4x \times \text{(something between $-1$ and $1$)} - \text{(something between $-1$ and $1$)} \\ & = \text{(something between $-4|x|$ and $4|x|$)} - \text{(something between $-1$ and $1$)} \\ & = \text{(something between $-4|x|-1$ and $4|x|+1$)} \end{align} Now, consider the impact of that observation on the denominator. We have \begin{align} \text{denominator} & = x^2 - x \times \text{(something between $-1$ and $1$)} \\ & = x^2 - \text{(something between $-|x|$ and $|x|$)} \\ & = \text{(something between $x^2-|x|$ and $x^2+|x|$)} \end{align} So our fraction consists of something which is linear in $x$, divided by something that is quadratic in $x$. Therefore, the limit must be $0$, as $x$ increases without bound (in either direction).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2599997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equivalent of a remainder I want to find an equivalent of the remainder of the convergent series for $n \in \mathbb{N}^{*}$ $$ R_n=\sum_{k=n+1}^{+\infty}\frac{1}{k^2\ln\left(k\right)} $$ I used that $\displaystyle x \mapsto \frac{1}{x^2\ln\left(x\right)}$ is positive, decreasing and tending to $0$ for $x \in \left[2,+\infty\right[$. Hence $$ \int_{k}^{k+1}\frac{\text{d}x}{x^2\ln\left(x\right)} \leq \frac{1}{k^2\ln\left(k\right)} \leq \int_{k-1}^{k}\frac{\text{d}x}{x^2\ln\left(x\right)} $$ By the Chasles relation, summing for $n+1 \leq k \leq N$, $$ \int_{n+1}^{N+1}\frac{\text{d}x}{x^2\ln\left(x\right)} \leq \sum_{k=n+1}^{N}\frac{1}{k^2\ln\left(k\right)} \leq \int_{n}^{N}\frac{\text{d}x}{x^2\ln\left(x\right)} $$ But I'm idiotically stuck here, how can I compute this easily? Do I need to find an equivalent of $\displaystyle x \mapsto \frac{1}{x^2\ln\left(x\right)}$ at $+\infty$ and integrate the comparison? I think that those integrals are the so-called Bertrand integrals.
Since $\log n$ is approximately constant on short intervals and $\sum_{k>n}\frac{1}{k^2}$ behaves like $\frac{C}{n}$, it is reasonable to expect that $R_n \sim \frac{C}{n\log n}$. And by the Stolz-Cesàro theorem we have $$ \lim_{n\to +\infty}\frac{R_n}{\frac{1}{n\log n}}=\lim_{n\to +\infty}\frac{R_n-R_{n+1}}{\frac{1}{n\log n}-\frac{1}{(n+1)\log(n+1)}}=\lim_{n\to +\infty}\frac{\frac{1}{(n+1)^2\log(n+1)}}{\frac{1}{n\log n}-\frac{1}{(n+1)\log(n+1)}}=\color{red}{1} $$ proving our conjecture with $C=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Resolvent estimate self-adjoint operator Let $A:D(A)\longrightarrow H$ be an unbounded self-adjoint (or normal) operator on a Hilbert space $H$. Then we know that $\sigma(A) \neq \emptyset$ and $$\|(\lambda-A)^{-1}\|=\frac{1}{d(\lambda,\sigma(A))}, \quad \forall \lambda \in \rho(A),$$ where $d(\lambda,\sigma(A))=\min_{\mu \in \sigma(A)} |\lambda-\mu|>0$. Do we have a similar formula for $$\|A(\lambda-A)^{-1}\|= ?$$ I point out that $A(\lambda-A)^{-1}$ is a bounded operator since $A(\lambda-A)^{-1}x=-x+\lambda(\lambda-A)^{-1}x$ for any $x \in H$. I have the basic estimate $$\|A(\lambda-A)^{-1}\| \leq 1+\frac{|\lambda|}{d(\lambda,\sigma(A))}.$$ Is it sharp ?
The following is exact: $$ \|A(\lambda I-A)^{-1}\|=\sup_{\mu\in\sigma(A)}\left|\frac{\mu}{\lambda-\mu}\right| = \sup_{\mu\in\sigma(A)}\left|-1+\frac{\lambda}{\lambda-\mu}\right|. $$ If $\sigma(A)=\mathbb{R}$ and $\lambda=i$, then the above gives $$ \|A(\lambda I-A)^{-1}\| = \sup_{\mu\in\mathbb{R}}\frac{|\mu|}{\sqrt{\mu^2+1}} =1, $$ while your expression gives $$ 1+\frac{1}{1}=2. $$
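For a multiplication (diagonal) operator the operator norm equals the supremum over the spectrum, so the exact formula can be compared with the basic estimate on a finite sample of the spectrum. The grid below is an arbitrary illustration of the $\sigma(A)=\mathbb{R}$, $\lambda=i$ example:

```python
spectrum = [k / 10 for k in range(-50, 51)]   # finite sample of sigma(A)
lam = 1j                                      # the lambda = i example above

exact = max(abs(mu / (lam - mu)) for mu in spectrum)   # sup-formula value
dist = min(abs(lam - mu) for mu in spectrum)           # d(lambda, sigma(A))
naive = 1 + abs(lam) / dist                            # basic estimate

print(exact, naive)   # exact stays below 1; the naive bound equals 2
```

On this sample the exact norm is below $1$ while the basic estimate gives $2$, so the estimate is not sharp.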
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Writing $\operatorname{Eq}\bigl(\prod_{i\in I}X_i\rightrightarrows\prod_{j\in J}Y_j\bigr)$ as a limit of a single diagram Let $D\colon \mathcal{I}\longrightarrow\mathcal{C}$ be a small diagram in the category $\mathcal{C}$. It is rather well known that a limit of $D$ can be computed as the equalizer of the two maps $$\prod_{i\in\mathcal{I}}D(i)\rightrightarrows\prod_{(f\colon i\to j)\in\mathcal{I}}D(j)$$ mapping a family $\{x_i\}$ to $\{x_j\}_{f\colon i\to j}$ resp. $\{D(f)(x_i)\}_{f\colon i\to j}$. My question is whether conversely it is possible to write the equalizer of two maps of products $\operatorname{Eq}\bigl(\prod_{i\in I}X_i\rightrightarrows\prod_{j\in J}Y_j\bigr)$ as a limit of a diagram having the objects $X_i$ and $Y_j$ as its vertices.
A more elementary example. Take the category of vector spaces over your favourite field. Writing $V^*$ for the dual of $V$, $$\text{lim}_i(V_i^*)\cong(\text{colim}_iV_i)^*,$$ so every limit $\text{lim}_iV_i$ of finite dimensional vector spaces is a dual, $(\text{colim}_iV_i^*)^*$, and so can’t have countably infinite dimension. But a countable dimensional vector space is an equalizer of maps between products of finite dimensional vector spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to calculate $\lim_{n\to \infty } \frac{n^n}{n!^2}$? How to calculate $\lim_{n\to \infty } \frac{n^n}{n!^2}$? I tried to solve this by using Stirling's formula but I got stuck at $\lim\limits_{n\to \infty } \frac{e^{2n}}{n^{n+1}}$. Any help?
Most of the time you do not need the exact Stirling approximation; just use the simple bounds $\left(\frac n2\right)^n > n! > \left(\frac n3\right)^n$ (the left inequality holds for $n\geq 6$). It follows that $\dfrac{(n!)^2}{n^n}>\left(\dfrac n9\right)^n\to\infty$, so the reciprocal goes to $0$. Anyway, regarding the point where you are stuck: $\dfrac{n^{n+1}}{e^{2n}}=\underbrace{n}_{\to\infty}\bigg(\ \underbrace{\dfrac n{e^2}}_{\to\infty}\ \bigg)^{\underbrace{n}_{\to\infty}}\to \infty$, so the reciprocal goes to $0$.
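A quick numerical check in log space (using $\log n! = \log\Gamma(n+1)$ to avoid overflow) illustrates how fast the ratio collapses:

```python
import math

def ratio(n):
    """n^n / (n!)^2, computed in log space via lgamma to avoid overflow."""
    return math.exp(n * math.log(n) - 2 * math.lgamma(n + 1))

for n in (10, 100, 1000):
    print(n, ratio(n))   # heads to 0 extremely fast
```

Already at $n=100$ the ratio is far below $10^{-100}$, matching the $(9/n)^n$ upper bound above.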
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Integral of $\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}dudt$ How to calculate the following integral: $$\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}dudt\ \ ?$$ I don't understand how to calculate. Please help. Thank you.
With the substitution $u\to\frac{t}2x$, the inner integral becomes $$\int_0^{\frac{t}{2}}e^{u^2}\,du=\frac{t}2\int^1_0e^{\frac{t^2}{4}x^2}\,dx,$$ implying $$\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}\,du\,dt=\frac12\int_{-\infty}^\infty\int^1_0 e^{-\frac{t^2}{4}(1-x^2)}\,dx\,dt=\int_{-\infty}^\infty\int^1_0 e^{-s^2(1-x^2)}\,dx\,ds,$$ where the last equality is the substitution $t=2s$. Since the integrand is positive, Mr. Fubini encourages us to change the order of integration, and $$\int_{-\infty}^\infty e^{-s^2(1-x^2)}\,ds=\frac{\sqrt{\pi}}{\sqrt{1-x^2}}.$$ Now clearly $$\int^1_0\frac{dx}{\sqrt{1-x^2}}=\frac{\pi}2,$$ so we arrive at $$\int_{-\infty}^\infty\frac{1}{t}e^{-\frac{t^2}{4}}\int_0^{\frac{t}{2}}e^{u^2}\,du\,dt=\frac{\pi^{3/2}}2.$$
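The closed form can be sanity-checked numerically. The sketch below is ad hoc: it uses a cumulative trapezoid rule for the inner integral, truncates the outer integral at $T=50$, and adds a $2/T$ tail correction coming from $f(t)\approx 1/t^2$ for large $t$ (since $\int_0^a e^{u^2}\,du \sim e^{a^2}/(2a)$):

```python
import math

h = 0.001            # grid step in u; the outer t-grid uses t = 2*u
N = 25000            # u runs up to 25, so t runs up to T = 50
T = 2 * N * h

# cumulative trapezoid rule for G(a) = integral of e^{u^2} over [0, a]
G = [0.0] * (N + 1)
prev = 1.0
for k in range(1, N + 1):
    cur = math.exp((k * h) ** 2)
    G[k] = G[k - 1] + 0.5 * h * (prev + cur)
    prev = cur

def f(j):
    """Integrand (1/t) e^{-t^2/4} * G(t/2) at t = 2*j*h."""
    if j == 0:
        return 0.5                        # limit of the integrand as t -> 0
    t = 2 * j * h
    return math.exp(-t * t / 4) * G[j] / t

# the integrand is even, so the full integral is 2 * integral over [0, T],
# plus a tail correction from f(t) ~ 1/t^2 beyond the cutoff
half = sum(0.5 * (2 * h) * (f(j - 1) + f(j)) for j in range(1, N + 1))
approx = 2 * half + 2.0 / T

print(approx, math.pi ** 1.5 / 2)         # both close to 2.784
```

The computed value lands close to $\pi^{3/2}/2 \approx 2.784$, supporting the derivation.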
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Definition of canonical isomorphism between two vector spaces I'm reading "Linear Algebra via Exterior Products" by Winitzki. The part leading up to the canonical isomorphism definition firstly proves the result that an $n$-dimensional vector space $V$ over $\mathbb{R}$ is isomorphic to $\mathbb{R}^n$. Suppose the isomorphism is defined as $\hat{M}v = (x_1, \ldots, x_n)$, where the coordinates $x_i$ are given in the standard basis $\{e_i\}$. Then it says: Note that the isomorphism $\hat{M}$ will depend on the choice of the basis: a different basis $\{e′_i\}$ yields a different map $\hat{M_1}$. For this reason, the isomorphism $\hat{M}$ is not canonical. ... A linear map between two vector spaces $V$ and $W$ is canonically defined or canonical if it is defined independently of a choice of bases in $V$ and $W$. I can't understand why changing the basis of $\mathbb{R}^n$ would yield a different linear map. Shouldn't the map remain the same? Only the representation of the output vectors would change because of the basis change. And by that logic, for any linear map, changing the basis would change the coordinates, and hence the representation, of the output vectors. So what exactly is meant by a linear map "defined independently of the choice of bases"?
Consider the vector space of linear and constant polynomials with real coefficients: $$V=\{a+bx: a,b \in \mathbb R\}.$$ Let's also consider the bases $B_1=\{1,x\}$ and $B_2=\{x,x+1\}$. If you express in each basis the polynomial $f(x)=2-3x$ you get $$f(x)=2\cdot 1+(-3)\cdot x$$ and $$f(x)=(-5)\cdot x+2\cdot (x+1).$$ Then, if we consider the first basis, the isomorphism $\hat M$ is such that $$\hat M(2-3x)=(2,-3)$$ but if we take the second basis we get $$\hat M(2-3x)=(-5,2).$$ So when you say "let $B$ be a basis...", the choice of that basis determines which isomorphism you're defining.
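The two coordinate computations above amount to solving a $2\times 2$ linear system, which is easy to check mechanically (a small sketch; polynomials $a+bx$ are stored as coefficient pairs, an encoding chosen here for illustration):

```python
from fractions import Fraction

# the polynomial f(x) = 2 - 3x, stored as (constant, linear) coefficients
f = (Fraction(2), Fraction(-3))

def coords(v, b1, b2):
    """Coordinates of v in the basis {b1, b2} of V, via Cramer's rule."""
    det = b1[0] * b2[1] - b1[1] * b2[0]
    alpha = (v[0] * b2[1] - v[1] * b2[0]) / det
    beta = (b1[0] * v[1] - b1[1] * v[0]) / det
    return alpha, beta

print(coords(f, (1, 0), (0, 1)))   # basis {1, x}     ->  (2, -3)
print(coords(f, (0, 1), (1, 1)))   # basis {x, x + 1} ->  (-5, 2)
```

The same polynomial gets two different coordinate vectors, which is exactly why the map to $\mathbb{R}^2$ is not canonical.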
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Product of cdf and pdf of normal distribution. A continuous random variable $X$ has the density $$ f(x) = 2\phi(x)\Phi(x), ~x\in\mathbb{R} $$ then (A) $E(X) > 0$ (B) $E(X) < 0$ (C) $P(X\leq 0) > 0.5$ (D) $P(X\ge0) < 0.25$ \begin{eqnarray} \Phi(x) &=& \text{Cumulative distribution function of } N(0,1)\\ \phi(x) &=& \text{Density function of } N(0, 1) \end{eqnarray} I don't have the slightest clue where to start. Can someone give me a little push? I saw some answers to questions like this but I didn't understand how I should integrate when calculating the expectation.
This question requires no calculations. You should not integrate anything to answer it. The Key If $\phi(x)$ is the density function of a distribution $D$, and $\Phi(x)$ is the cumulative distribution function of $D$, then the density $f(x) = 2\phi(x)\Phi(x)$ corresponds to the distribution of a random variable $X$ defined as the greater of two random variables (say $D_1,D_2$) picked according to $D$. Why you would see this To see this from the formula for $f$, see that $\phi(x)$ corresponds to the chance that $D_1$ has a certain value, and $\Phi(x)$ is the chance that another sample $D_2$ has a lesser value. Finally, the factor of 2 is for the alternative case that actually $D_2$ had this value and was greater, so together, $f(x)$ is the chance that $x$ is the greater of two randomly picked values. Once you see this, answering the question is easy and requires no calculations.
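A small Monte Carlo check of this interpretation (sample size and seed are arbitrary choices): for the maximum of two independent standard normals, $P(X\le 0)=P(\text{both}\le 0)=\frac14$ and $E(X)=\frac{1}{\sqrt\pi}>0$, consistent with option (A):

```python
import random

random.seed(0)
n = 200_000
samples = [max(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

frac_nonpos = sum(s <= 0 for s in samples) / n   # should be close to 1/4
mean = sum(samples) / n                          # E[X] = 1/sqrt(pi) ~ 0.564

print(frac_nonpos, mean)
```

Both empirical values match the max-of-two-samples reading of the density, with no integration required.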
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
Showing measurability of composite function Let $p:\mathbb{R}\rightarrow\mathbb{R}$ be a continuous map and $p':\mathbb{R}\rightarrow\mathbb{R}$ with $p'(x) = \left\{ \begin{array}{lr} a & : x=0\\ p(x) & : x\neq 0. \end{array} \right.$ I want to show that $p'$ is Lebesgue measurable, i.e. for all $L\in\mathcal{L}$, with $\mathcal{L}$ the Lebesgue sigma-algebra, we have $p'^{-1}(L)\in\mathcal{L}$. If $a\notin L$, then $p'^{-1}(L)\in\mathcal{L}$ since $p$ is measurable. But what if $a\in L$?
You have, when $a \in L$: $$p'^{-1}(L) = p'^{-1}(\{a\}) \cup p'^{-1}(L \setminus \{a\}) = \bigl(\{0\}\cup p^{-1}(\{a\})\bigr) \cup \bigl(p^{-1}(L \setminus \{a\})\setminus\{0\}\bigr),$$ which enables one to conclude, as $\{b\}$ is closed (the complement of the open set $(-\infty,b) \cup (b, \infty)$) for any $b \in \mathbb R$, so $p^{-1}(\{a\})$ is measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2600897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $\bar f(x)$ irreducible in $\mathbb{Z}_2[x] \Rightarrow f(x)$ irreducible in $\mathbb{Q}[x]$? The example I'm working with is $\bar f(x) = x^4+x^3+\bar 1 \in \mathbb{Z}_2[x]$, which I know is irreducible in $\mathbb{Z}_2[x]$. The text that I'm reading from seems to imply that my "if, then" statement in my title above is true for this particular example. So I wondered if it is true more generally for any polynomial in $\mathbb{Z}_p[x]$ for a prime $p$. Thanks in advance.
Not quite. For example, $2x^2 + x$ is irreducible in $\mathbb{Z}_2[x]$, but not in $\mathbb{Z}[x]$. Here is a correct statement: if $f\in \mathbb{Z}[x]$, and the leading coefficient of $f$ is not divisible by $2$, then $\overline{f}$ irreducible implies ${f}$ irreducible over $\mathbb{Q}$. To see this, suppose that $f=gh$ is a non-trivial factorization, i.e. $g$ and $h$ have degree at least $1$. Since their leading coefficients are not divisible by $2$, $\overline{g}$ and $\overline{h}$ also have degree at least $1$, so $\overline{f} = \overline{g}\overline{h}$ is a nontrivial factorization.
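The irreducibility of $x^4+x^3+\bar 1$ over $\mathbb{Z}_2$ can be brute-forced: it has no root in $\mathbb{Z}_2$ (ruling out linear, and hence cubic, factors) and is not a product of two quadratics. Below, polynomials over $\mathbb{F}_2$ are encoded as bitmasks (an encoding chosen for convenience):

```python
def clmul(a, b):
    """Multiply two polynomials over F_2, encoded as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

f = 0b11001   # x^4 + x^3 + 1

# no roots in F_2: f(0) = constant term, f(1) = number of terms mod 2
no_roots = (f & 1) == 1 and bin(f).count("1") % 2 == 1

# not a product of two quadratics (bitmasks 0b100 .. 0b111)
quadratics = range(0b100, 0b1000)
no_quad_split = all(clmul(p, q) != f for p in quadratics for q in quadratics)

print(no_roots and no_quad_split)   # True: f is irreducible over F_2
```

Since a quartic factors nontrivially only as $1\times 3$, $2\times 2$, or $1\times 1\times\cdots$, these two checks exhaust all cases.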
{ "language": "en", "url": "https://math.stackexchange.com/questions/2601024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Irreducible polynomials over an integrally closed domain Let $A$ be an integrally closed domain, with quotient field $K_A$. My main question is the following: Question. Does any non constant irreducible polynomial of $A[X]$ stay irreducible in $K_A[X]$ ? Of course, this is true for monic non constant polynomials, so my first idea was to reduce to this case. This leads to the following subquestion: Subquestion 1. Let $A$ be a domain, and let $P\in A[X]$ be an irreducible polynomial of degree $n\geq 1$, with leading coefficient $a_n$. Is $a_n^{n-1}P(X/a_n)\in A[X]$ irreducible ? If not, is this the case if $A$ is an integrally closed domain ? For the moment, I've tried to prove it without any success, but I came across another subquestion, which would imply a positive answer to the previous one, and thus a positive one to the main question (if I am not mistaken...). Subquestion 2. Let $A$ be an integral domain, let $P\in A[X]$ be a non constant irreducible polynomial, and let $a\in A\setminus\{0\}$. Is it true that the divisors of $aP$ have the form $b$ or $cP$, with $b,c\in A\setminus\{0\}$ ? If not, is it true if $A$ is an integrally closed domain ? Greg
Ok, finally the answer to the main question (and subquestion $2$) is no. $A=\mathbb{Z}[i\sqrt{13}]$ is integrally closed, and $P=2X^2+2X+7$ is irreducible if I am not mistaken. However $2P=(2X+1+i\sqrt{13})(2X+1-i\sqrt{13})$, so in particular $P$ is not irreducible in $K_A[X]$.
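The claimed factorization of $2P$ is easy to verify by expanding with $c=i\sqrt{13}$ (a numerical sketch; coefficients are lists in increasing degree, so small floating-point noise is expected):

```python
c = complex(0, 13 ** 0.5)   # c = i*sqrt(13), so c**2 is (numerically) -13

def mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

lhs = mul([1 + c, 2], [1 - c, 2])   # (2X + 1 + c)(2X + 1 - c)
rhs = [14, 4, 4]                    # 2P = 4X^2 + 4X + 14

print(lhs)                          # [14, 4, 4] up to rounding noise
```

Indeed $(2X+1)^2 - c^2 = 4X^2+4X+1+13 = 2(2X^2+2X+7)$, confirming the identity.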
{ "language": "en", "url": "https://math.stackexchange.com/questions/2601124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Triangular Numbers and Perfect Squares Prove that $n$ is a triangular number if and only if $8n+1$ is a perfect square. I proved the easier part first (I think), that is, if $n$ is a triangular number then $8n+1$ is a perfect square. I don't know where to start from for the other part, please help. By the way, this was taken from David Burton's book, Elementary Number Theory and sadly it doesn't have all the solutions...
* *If $n$ is a triangular number, then $n = \frac{k(k+1)}{2}$ and $8n+1 = 4k(k+1)+1 = (2k+1)^2$. *If $8n+1$ is a perfect square, then $8n+1 = (2m+1)^2 \implies n = \frac{4m^2+4m+1-1}{8} = \frac{m(m+1)}{2}$ (because $8n+1$ is odd so it must be square of an odd number).
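Both directions of the equivalence can be checked exhaustively for small $n$ (a brute-force sketch; it counts $0$ as a triangular number, which is a convention choice):

```python
from math import isqrt

def is_triangular(n):
    """True iff 8n + 1 is a perfect square (the criterion being proved)."""
    s = 8 * n + 1
    r = isqrt(s)
    return r * r == s

triangular = {k * (k + 1) // 2 for k in range(200)}   # all t_k up to 19900
assert all(is_triangular(n) == (n in triangular) for n in range(10_000))
print("equivalence holds for n < 10000")
```

This matches the algebra above: $8\cdot\frac{k(k+1)}{2}+1=(2k+1)^2$ in one direction, and the odd square root forces $n=\frac{m(m+1)}{2}$ in the other.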
{ "language": "en", "url": "https://math.stackexchange.com/questions/2601270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the greatest possible value for the difference in ratings. Seven experts evaluate the picture. Each of them makes an assessment - an integer number of points from 0 to 10 inclusive. It is known that all experts have put different points. According to the old rating system, the rating of the picture is the arithmetic mean of all expert points. According to the new rating system, the rating of the picture is estimated as follows: the smallest and largest points are discarded and the arithmetic mean of the remaining points is calculated. Find the greatest possible value for the difference in ratings calculated from the old and new assessment systems. I found the seven points $0, 1, 2, 3, 4, 5, 10$, for which this difference is $\frac{4}{7}$, but I can't prove that it is the maximum value.
Let the seven ratings be $a,b,c,d,e,f,g$ where, WLOG, $a<b<c<d<e<f<g$. The old system would give $\frac{a+b+c+d+e+f+g}{7}$, whereas the new one would give $\frac{b+c+d+e+f}{5}$. We would like to maximize the absolute difference between the two, which is given by \begin{align} & \left|\frac{a+b+c+d+e+f+g}{7}-\frac{b+c+d+e+f}{5}\right| \\ = & \left|\frac{a+g}{7}-\frac{2(b+c+d+e+f)}{35}\right| \end{align} WLOG, let's say we make $\frac{a+g}{7}$ as large as possible, and $\frac{2(b+c+d+e+f)}{35}$ as small as possible. Then we would want $g=10$. And whatever value $a$ takes, the smallest possible value for $\frac{2(b+c+d+e+f)}{35}$ would occur when $b=a+1, c=a+2, d=a+3, e=a+4, f=a+5$. And now it is just a matter of directly computing the cases for $a=0,1,2,3,4$. It turns out that when $a=0$ the difference is $\frac47$; when $a=1$ it is $\frac37$; when $a=2$ it is $\frac27$; when $a=3$ it is $\frac17$; and when $a=4$ it is $0$. So the answer is indeed $\frac47$. Of course, a more elegant way of doing this would be to say that $$\frac{a+(a+1)+(a+2)+(a+3)+(a+4)+(a+5)+10}{7}=\frac{6a+25}{7},$$ and that $$\frac{(a+1)+(a+2)+(a+3)+(a+4)+(a+5)}{5}=\frac{5a+15}{5},$$ where $\frac{6a+25}{7} - \frac{5a+15}{5} = \frac{4-a}{7}$, so that the smallest value of $a$, namely $a=0$, gives the largest difference.
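The case analysis can be confirmed by brute force over all $\binom{11}{7}=330$ possible score sets, using exact rational arithmetic:

```python
from fractions import Fraction
from itertools import combinations

def diff(scores):
    """|old rating - new rating| for a sorted tuple of 7 distinct scores."""
    old = Fraction(sum(scores), 7)
    new = Fraction(sum(scores[1:-1]), 5)   # drop smallest and largest
    return abs(old - new)

all_sets = list(combinations(range(11), 7))   # 330 possible score sets
best = max(diff(s) for s in all_sets)
witnesses = [s for s in all_sets if diff(s) == best]

print(best, witnesses)   # 4/7, including (0, 1, 2, 3, 4, 5, 10)
```

By the symmetry $s\mapsto 10-s$, the mirrored set $(0,5,6,7,8,9,10)$ attains the same absolute difference, with the new rating exceeding the old.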
{ "language": "en", "url": "https://math.stackexchange.com/questions/2601359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }