Example of a $\mathbb{Z}$-module with exactly three proper submodules? What is an example of a $\mathbb{Z}$-module which has exactly three proper submodules?
A $\mathbf Z$-module is nothing but an abelian group. For any prime $p$, $\mathbf Z/p^3\mathbf Z$ is an example of an abelian group with exactly three proper subgroups, which are linearly ordered. These subgroups are $$0\subsetneq p^2\mathbf Z/p^3\mathbf Z\simeq \mathbf Z/p\mathbf Z\subsetneq p\mathbf Z/p^3\mathbf Z\simeq\mathbf Z/p^2\mathbf Z\subsetneq \mathbf Z/p^3\mathbf Z.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
find the maximum possible area of $\triangle{ABC}$ Let $ABC$ be a triangle with $\angle BAC = 60^\circ$. Let $P$ be a point in its interior so that $PA=1, PB=2$ and $PC=3$. Find the maximum area of triangle $ABC$. I took the reflection of point $P$ about the three sides of the triangle and joined the reflections to the vertices of the triangle. Thus I got a hexagon with area double that of the triangle, having one angle of $120^\circ$ and sides $1,1,2,2,3,3$. We have to maximize the area of this hexagon. For that, I used some trigonometry, but it got very complicated and I couldn't get the solution.
Let $\theta=\measuredangle PAB$ in the triangle you specify. Then $\measuredangle PAC=60°-\theta$. By the law of cosines, $$2^2=1^2+c^2-2\cdot 1\cdot c\cdot \cos\theta$$ $$3^2=1^2+b^2-2\cdot 1\cdot b\cdot \cos(60°-\theta)$$ Solving those equations for $b$ and $c$, $$c=\cos\theta+\sqrt{\cos^2\theta+3}$$ $$b=\cos(60°-\theta)+\sqrt{\cos^2(60°-\theta)+8}$$ Since the triangle's area is $\frac 12bc\sin A$ and $A=60°$, putting them all together we get $$Area=\frac{\sqrt 3}{4}\left(\cos(60°-\theta)+\sqrt{\cos^2(60°-\theta)+8}\right)\left(\cos\theta+\sqrt{\cos^2\theta+3}\right)$$ where $0°<\theta<60°$. This looks like a bear to maximize analytically, so I did it numerically with a graph and got $$Area_{max}\approx 4.66441363567 \quad\text{at}\quad \theta\approx 23.413°$$ I could get only five significant digits for $\theta$, but the maximum area should be accurate to all shown digits. I checked this with a construction in Geogebra and it checks out. WolframAlpha timed out while trying to find an exact maximum value of the triangle's area from that formula.
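For readers who want to reproduce the numerical maximization, here is a minimal sketch (assuming NumPy and SciPy are available; the `area` function just transcribes the formula above):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def area(theta):
    # b and c as solved from the two law-of-cosines equations above
    b = np.cos(np.pi/3 - theta) + np.sqrt(np.cos(np.pi/3 - theta)**2 + 8)
    c = np.cos(theta) + np.sqrt(np.cos(theta)**2 + 3)
    return np.sqrt(3)/4 * b * c

# maximize over 0 < theta < 60 degrees by minimizing the negative
res = minimize_scalar(lambda t: -area(t), bounds=(1e-9, np.pi/3 - 1e-9),
                      method='bounded')
print(np.degrees(res.x), -res.fun)  # roughly 23.413 and 4.66441
```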
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Simplify $2(\cos^2 x - \sin^2 x)^2 \tan 2x$ Simplify: $$2(\cos^2 x - \sin^2 x)^2 \tan 2x$$ After some sketching, I arrive at: $$2 \cos 2x \sin2x$$ Now according to the answer sheet, I should simplify this further, to arrive at $\sin 4x$. But how do I derive the latter from the former? Where do I start? How do I use my double-angle formulas to arrive there?
Use $$\cos^2x-\sin^2x=\cos2x$$ and $$\tan A=\dfrac{\sin A}{\cos A}$$ to reach $2\cos2x\sin2x$. Then the double-angle formula $\sin2A=2\sin A\cos A$ with $A=2x$ gives $2\cos2x\sin2x=\sin4x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Creating a set of vectors Suppose we are given a number $k$, and we have to construct $2^k$ vectors in $2^k$-dimensional space using only the coordinates $1$ and $-1$ so that all the vectors are orthogonal to each other. How can such a construction be made? For example, for $k=2$, a valid construction might be 1,1,-1,-1 1,1,1,1 1,-1,1,-1 1,-1,-1,1
A matrix $A \in M_n(\mathbb{R})$ is called a Hadamard matrix of order $n$ if the entries of $A$ are $\pm 1$ and the columns of $A$ are mutually orthogonal. Using the notion above, you ask how can one construct a Hadamard matrix of order $2^k$. The basic observation is that if $H$ is a Hadamard matrix of order $n$ then the block matrix $$ \hat{H} := \begin{pmatrix} H & H \\ H & -H \end{pmatrix} $$ is a Hadamard matrix of order $2n$. Starting with the $1 \times 1$ Hadamard matrix $(1)$, one can repeatedly apply this observation to create Hadamard matrices of order $2^k$ for all $k \in \mathbb{N}$.
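Here is a minimal sketch of this doubling construction (assuming NumPy); starting from the $1\times1$ matrix $(1)$, each pass doubles the order:

```python
import numpy as np

def hadamard(k):
    # Sylvester construction: H -> [[H, H], [H, -H]], applied k times
    H = np.array([[1]])
    for _ in range(k):
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(2)    # a 4 x 4 Hadamard matrix, as in the question
print(H)
print(H @ H.T)     # 4 * identity, so the rows are mutually orthogonal
```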
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$(x^2+1)(y^2+1)(z^2+1) + 8 \geq 2(x+1)(y+1)(z+1)$ The other day I came across this problem: Let $x$, $y$, $z$ be real numbers. Prove that $$(x^2+1)(y^2+1)(z^2+1) + 8 \geq 2(x+1)(y+1)(z+1)$$ My first thought was the power mean inequality, more exactly $AM \leq SM$ (where $AM$ and $SM$ denote the arithmetic and square means), but I haven't found anything helpful. (To be more specific, my attempts looked like this: $\frac{x+1}{2} \leq \sqrt{\frac{x^2+1}{2}}$) I also took into consideration the Cauchy–Bunyakovsky–Schwarz and Bergström inequalities, but none seems to help. Some hints would be appreciated. Thanks!
For real $x$ we have (or use the Cauchy-Schwarz inequality) $$x^2+1\ge\dfrac{1}{2}(x+1)^2$$ and in the same way $$y^2+1\ge\dfrac{1}{2}(y+1)^2$$ $$z^2+1\ge\dfrac{1}{2}(z+1)^2$$ so $$(x^2+1)(y^2+1)(z^2+1)\ge\dfrac{1}{8}[(x+1)(y+1)(z+1)]^2$$ Now use the AM-GM inequality: $$\dfrac{1}{8}[(x+1)(y+1)(z+1)]^2+8\ge 2\sqrt{\dfrac{1}{8}[(x+1)(y+1)(z+1)]^2\cdot 8}=2\,\lvert(x+1)(y+1)(z+1)\rvert\ge 2(x+1)(y+1)(z+1)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Adjoint action on Lie algebra su(2) ($A \in SU(2), X \in \mathfrak{su(2)} \Rightarrow AXA^{-1}\in \mathfrak{su(2)}$) I am trying to understand how $SU(2)/\{\pm I\} \cong SO(3)$ (see: how to show $SU(2)/\mathbb{Z}_2\cong SO(3)$), but I am not sure about the adjoint action. In particular, as I understand it, the adjoint action operates on $\mathfrak{su(2)}$, therefore the following statement should hold: $A \in SU(2), X \in \mathfrak{su(2)} \Rightarrow AXA^{-1}\in \mathfrak{su(2)}$ We know that $SU(2) = \{ \left( \begin{matrix}x & y\\ -\bar y & \bar x \end{matrix}\right): |x|^2+|y|^2 = 1\} $ and furthermore $\mathfrak{su(2)} = span\left( \left( \begin{matrix}i & 0\\ 0 & -i \end{matrix} \right), \left( \begin{matrix}0 & -1\\ 1 & 0 \end{matrix} \right), \left( \begin{matrix}0 & i\\ i & 0 \end{matrix} \right) \right)$. So I wanted to prove the statement on the basis of $\mathfrak{su(2)}$ (using that $A\in SU(2) \Rightarrow A^{-1} = \bar{A}^t$): $AXA^{-1} = \left( \begin{matrix}a+ib & c+id\\ -c+id & a-ib \end{matrix} \right) \left( \begin{matrix}i & 0\\ 0 & -i \end{matrix} \right) \left( \begin{matrix}a-ib & -c-id\\ c-id & a+ib \end{matrix} \right) = $ $\left( \begin{matrix}((-a-i b) (c-i d)+(a-i b) (c+i d) & (-a-i b) (a+i b)+(-c-i d) (c+i d)\\ (a-i b)^2+(c-i d)^2 & (a-i b) (-c-i d)+(a+i b) (c-i d))\\ \end{matrix} \right)$ but I can't find a way to write this in the basis of $\mathfrak{su(2)}$. Am I doing something completely wrong, or was my conclusion about the adjoint action wrong?
An element $X$ of $\mathfrak{su}(2)$ must satisfy $X=-X^*$ and $tr(X)=0$, because, respectively, of the conditions $AA^*=I$ and $\det(A)=1$ defining $SU(2)$ (see Lee's book, Proposition 5.38, p. 117, for how to calculate the tangent space; hint: use $\Phi:SL(2,\mathbb{C})\to SL(2,\mathbb{C})$, given by $\Phi(X)=XX^*$, so that $SU(2)=\Phi^{-1}(I)$). Now, if $A\in SU(2)$ and $X\in\mathfrak{su}(2)$ you have $(AXA^{-1})^*=(A^{-1})^*X^*A^*=-AXA^{-1}$, because $AA^*=I$ and $X=-X^*$. That is, $AXA^{-1}\in\mathfrak{su}(2)$. With your calculations (you made a mistake in the product $AXA^{-1}$; once you correct it) you'll get $ \begin{aligned} AXA^{-1} &= \left( \begin{matrix}a+ib & c+id\\ -c+id & a-ib \end{matrix} \right) \left( \begin{matrix}i & 0\\ 0 & -i \end{matrix} \right) \left( \begin{matrix}a-ib & -c-id\\ c-id & a+ib \end{matrix} \right)\\ &=(a^2+b^2-c^2-d^2)\left( \begin{matrix}i & 0\\ 0 & -i \end{matrix} \right)+(-2ad-2bc)\left( \begin{matrix}0 & -1\\ 1 & 0 \end{matrix} \right)+(-2ac+2bd)\left( \begin{matrix}0 & i\\ i & 0 \end{matrix} \right) \end{aligned} $
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_0^{+\infty}(\frac{\arctan t}{t})^2dt$ Is it possible to calculate $$\int_0^{+\infty}\Big(\frac{\arctan t}{t}\Big)^2 dt$$ without using complex analysis? I found this in a calculus I book and I don't know how to solve it. I tried to set $t = \tan u$ but it didn't help.
The answer can be derived from a tailor-made version of Parseval's theorem, as already suggested by Ron Gordon. The Fourier sine series of $x$ over $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ is given by: $$ x = \sum_{k\geq 1}\frac{(-1)^{k+1}}{k}\,\sin(2k x) \tag{1} $$ and if $k,j\in\mathbb{N}^+$ we have: $$ \int_{0}^{\pi/2}\frac{\sin(2kt)}{\sin(t)}\cdot\frac{\sin(2jt)}{\sin(t)}\,dt = \pi\cdot\min(j,k).\tag{2} $$ That gives: $$ \int_{0}^{\pi/2}\frac{x^2}{\sin^2 x}\,dx = \pi \sum_{j,k\geq 1}\frac{(-1)^{j+k}\min(j,k)}{jk}\tag{3} $$ and by reindexing over $j+k$, then using partial fraction decomposition, it is not difficult to check that the RHS of $(3)$ equals $\color{red}{\pi\log 2}$. On the other hand, the LHS of $(3)$ equals $\int_{0}^{+\infty}\frac{\arctan^2 t}{t^2}\,dt$ through the obvious substitution.
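A quick numerical sanity check of the closed form (assuming SciPy is available): the integral should agree with $\pi\log 2\approx 2.17759$.

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda t: (np.arctan(t)/t)**2, 0, np.inf)
print(val, np.pi*np.log(2))   # the two numbers should match closely
```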
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Relation between hypergeometric functions? Are there any relations between the following hypergeometric functions? $$\ _2F_1(1,-a,1-a,\frac{1}{1-z})$$ $$\ _2F_1(1,-a,1-a,{1-z})$$ $$\ _2F_1(1,a,1+a,\frac{1}{1-z})$$ $$\ _2F_1(1,a,1+a,{1-z})$$
First, you need to use Barnes' integral representation $${}_2F_1(a,b;c;z) = \frac{\Gamma(c)}{\Gamma(a)\Gamma(b)}\frac{1}{2\pi i} \int_{-i\infty}^{+i\infty}\frac{\Gamma(a+s)\Gamma(b+s)\Gamma(-s)}{\Gamma(c+s)}(-z)^sds. $$ The Gauss hypergeometric function ${}_2F_1(a,b;c;z)$ is usually defined by a power series that converges only for $|z|<1$, but you have to extend the definition to the whole complex plane so that the function can be evaluated at both $1-z$ and $1/(1-z)$. Using the integral representation, \begin{align*} \ _2F_1(1,-a;1-a;{1-z}) &= \frac{\Gamma(1-a)}{\Gamma(1)\Gamma(-a)}\frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \frac{\Gamma(1+s)\Gamma(-a+s)\Gamma(-s)}{\Gamma(1-a+s)}(z-1)^sds \cr &=\frac{(-a)}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1+s)\Gamma(-s)}{s-a}(z-1)^sds, \end{align*} and \begin{align*} \ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right) &= \frac{\Gamma(1+a)}{\Gamma(1)\Gamma(a)}\frac{1}{2\pi i} \int_{-i\infty}^{i\infty} \frac{\Gamma(1+s)\Gamma(a+s)\Gamma(-s)}{\Gamma(1+a+s)}(z-1)^{-s}ds \cr &=\frac{a}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1+s)\Gamma(-s)}{s+a}(z-1)^{-s}ds. \end{align*} In the second relation, changing $s$ to $-s$, we get (keeping track of all the minus signs) $$ \ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right) = \frac{a}{2\pi i}\int_{-i\infty}^{+i\infty}\frac{\Gamma(1-s)\Gamma(s)}{a-s}(z-1)^{s}ds. $$ Finally, using the relations $$\Gamma(s)\Gamma(1-s)=\frac{\pi}{\sin \pi s} \quad \mbox{and}\quad \Gamma(-s)\Gamma(1+s)=\frac{\pi}{\sin \pi(-s)} = -\frac{\pi}{\sin \pi s},$$ we get $$\ _2F_1(1,-a;1-a;{1-z}) = \ _2F_1\left(1,a;1+a;\frac{1}{1-z}\right)$$ and similarly the equivalence for the other two (or just change $a$ to $-a$).
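The relation can be probed numerically (assuming mpmath is available); both sides are printed rather than asserted equal, since for arguments outside the unit disc the value depends on which analytic continuation/branch the software picks:

```python
from mpmath import mp, hyp2f1

mp.dps = 25
a = mp.mpf('0.3')
z = mp.mpc('0.4', '0.2')   # sample complex point, chosen off the branch cut
lhs = hyp2f1(1, -a, 1 - a, 1 - z)
rhs = hyp2f1(1, a, 1 + a, 1/(1 - z))
print(lhs)
print(rhs)
```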
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about creating a volume form for $SL(2,\mathbb{R})$ This problem comes out of R.W.R. Darling (Differential Forms and Connections) ch.8. In the chapter he shows that if $M$ is an $n$-dimensional differential manifold immersed in $\mathbb{R}^{n+k}$, and $\Psi$ is an immersion from $\mathbb{R}^n \rightarrow \mathbb{R}^{n+k}$ that parametrizes the manifold, and $f$ is a submersion from $\mathbb{R}^{n+k} \rightarrow \mathbb{R}^k$ such that $f^{-1}(0) = M$, then we can construct a volume form $(\star df)$ on $M$ using the Hodge star, and that $(\star df)\Lambda^n \Psi_{*}$, which parametrizes the volume form, is given for $k=1$, and $\Psi(0) = r$, by $$ \begin{vmatrix} D_1 f(r) & D_1 \Psi_1(0) & \cdots & D_n \Psi_1(0) \\ \vdots & \vdots & \ddots & \vdots \\ D_n f(r) & D_1 \Psi_n(0) & \cdots & D_n \Psi_n(0) \\ \end{vmatrix}. $$ The exercise is to do this with the submanifold $SL(2,\mathbb{R}) \subset GL(2,\mathbb{R})$, regarded as equivalent to $\mathbb{R}^{n\times n}$, with $\Psi$ parametrizing $\begin{pmatrix} x & y \\ z & w \\ \end{pmatrix}$ as the image of $(x,y,z)$ and $f(x,y,z,w) = xw - yz - 1.$ I calculated this, and got: $$ \begin{vmatrix} w & 1 & 0 & 0 \\ -z & 0 & 1 & 0 \\ -y & 0 & 0 & 1 \\ x & \frac{-w}{x} & \frac{z}{x} & \frac{y}{x} \\ \end{vmatrix}, $$ where $w = \frac{1+yz}{x}$, which correctly evaluates at $I$ to give $-2dx\land dy\land dz$ as the parametrized volume operator. The second part is where I have a problem: it says to extend this volume form in a left-invariant manner to $SL(2,\mathbb{R})$ by calculating $(L_A^{*}(\star df))(A^{-1})$, where $L_A$ is the left shift operator on $GL(2,\mathbb{R})$, with $L_A \begin{pmatrix} s & t \\ u & v \\ \end{pmatrix} = \begin{pmatrix} x & y\\ z & w\\ \end{pmatrix} \begin{pmatrix} s & t \\ u & v \\ \end{pmatrix}$ when $A = \begin{pmatrix} x & y\\ z & w\\ \end{pmatrix}.$ I sense that I should take $(L_A^{*}(\star df)) = ((\star df)L_{A{*}})$ to start the process, but am confused as to how the push-forward fits into the calculation with respect to the parametrization $\Psi$. Could someone help me with how that works? And do I calculate $L_{A{*}}$ as an element of $\mathbb{R}^{n\times n}$, which gives a differential as a $4\times 4$ matrix? And if so, does it pre-multiply or post-multiply the various pieces of the matrix determinant needed to form the volume form? (The problem is 8.4.5 in Darling, p.173, and this is a self-study question.)
First, I would suggest that you practice computing some pullbacks in a more basic setting. For example, if $f(u,v)=(u+v^2,uv,u^3+v^3)=(x,y,z)$ what is $f^*(x\,dy\wedge dz + z\,dx\wedge dy)$? You should learn how to do this without ever writing down a push-forward. Second, you want to compute $L_{A^{-1}}^*(dx\wedge dy\wedge dz)(I)$. Since $A^{-1}=\begin{bmatrix}w&-y\\-z&x\end{bmatrix}$, we have $$\begin{bmatrix} w&-y\\-z&x\end{bmatrix}\begin{bmatrix} dx &dy\\ dz &dw \end{bmatrix}=\begin{bmatrix} wdx-ydz&wdy-ydw\\-zdx+xdz&\dots\end{bmatrix},$$ so $dx\wedge dy\wedge dz$ pulls back to \begin{align*} (wdx-ydz)\wedge (wdy-ydw)\wedge (-zdx+xdz)&=-dx\wedge dz\wedge (w dy-ydw)\\ &=-dx\wedge dz\wedge (w-yz/x)dy \\&=\frac 1x dx\wedge dy\wedge dz.\end{align*}
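As a sanity check on the first part of the computation, the $4\times 4$ determinant from the question (with $w=(1+yz)/x$) can be evaluated at the identity with SymPy; it should give $-2$, matching the coefficient of $dx\wedge dy\wedge dz$:

```python
from sympy import symbols, Matrix, simplify

x, y, z = symbols('x y z')
w = (1 + y*z)/x
M = Matrix([[ w,    1,    0,   0],
            [-z,    0,    1,   0],
            [-y,    0,    0,   1],
            [ x, -w/x,  z/x, y/x]])
print(simplify(M.det().subs({x: 1, y: 0, z: 0})))   # -2
```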
{ "language": "en", "url": "https://math.stackexchange.com/questions/1590959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
When does equality hold for $|x+y| \leq |x|+|y| ?$ This is problem 1-2 in Calculus on Manifolds, Spivak. He gives two hints: (1) examine the proof, and (2) the answer is not "when $x$ and $y$ are linearly dependent." This is the proof we are told to examine: $|x+y|^2 = \sum^n_{i=1}(x^i+y^i)^2=\sum^n_{i=1}{x^i}^2+\sum^n_{i=1}{y^i}^2+2\sum^n_{i=1}{x^i}{y^i} \leq |x|^2+|y|^2+2|x||y|=(|x|+|y|)^2$ Then take the square root of both sides. Based on this proof, I believe the answer is $|x+y| = |x|+|y|\;\longleftrightarrow\;\sum^n_{i=1}{x^i}{y^i}=\langle x,y\rangle=|x||y|.$ The answer seems obvious, but I'm not sure, because I came across this set of solutions (http://jianfeishen.weebly.com/uploads/4/7/2/6/4726705/calculus_on_manifolds.pdf) that has a much more complicated answer. My proof seems a bit too easy. I might be overlooking something.
Your argument is good, but you can go further. Suppose $\|x+y\|=\|x\|+\|y\|$ (sorry, but I can't use notation with single bars and upper indices) with $x\ne0$; this is equivalent to $$ \langle x+y,x+y\rangle=\langle x,x\rangle+2\|x\|\,\|y\|+\langle y,y\rangle $$ that is, expanding the left-hand side and simplifying, $$ \langle x,y\rangle=\|x\|\,\|y\|\tag{*} $$ This is clearly connected to the Cauchy-Schwarz inequality, so let's examine its proof. Consider $\langle tx+y,tx+y\rangle\ge0$, for every scalar $t$. Then $$ t^2\langle x,x\rangle+2t\langle x,y\rangle+\langle y,y\rangle\ge0 $$ for all $t$; therefore the (reduced) discriminant $$ \langle x,y\rangle^2-\langle x,x\rangle\langle y,y\rangle\le0 $$ In case the equality (*) holds, the discriminant is zero, so, for $t=-\frac{\langle x,y\rangle}{\langle x,x\rangle}$, we have that $$ \langle tx+y,tx+y\rangle=0 $$ that is, $tx+y=0$, so $y\in\operatorname{Span}\{x\}$. Now, if $y=rx$, for some scalar $r$, $$ \|x+y\|=\|(1+r)x\|=|1+r|\,\|x\| $$ whereas $\|x\|+\|rx\|=(1+|r|)\|x\|$. So we need $$ |1+r|=1+|r| $$ Squaring we get $1+2r+r^2=1+2|r|+r^2$, so the condition is $r\ge0$. Thus the solution is either $y=rx$, for some $r\ge0$, or $x=ry$, for some $r\ge0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Baby Rudin exercise 1.6: Is this the proof Rudin expects? $\bf Exercise\, 1.6$ Fix $b>1.$ Prove that if $m,n,p,q$ are integers, $n>0,q>0$ and $r=m/n=p/q$, then $$ (b^m)^{1/n}=(b^p)^{1/q}. $$ I'm not really sure what I can assume and what I can't assume; I think that all I need is $(x^y)^z=x^{yz}$ for integers $y,z$, but I'm not sure how to prove this (I don't even know what the expected definition of exponentiation is!). Attempt $$ \begin{align} \left((b^m)^{1/n}\right)^n&=b^m\quad \text{By Theorem 1.21 (I think).}\\ \left((b^m)^{1/n}\right)^{nq}&=b^{mq}\quad \text{Here I use $(x^y)^z=x^{yz}$.}\\ \left((b^m)^{1/n}\right)^{nq}&=b^{np}\quad \text{As $mq=np$.}\\ \left((b^m)^{1/n}\right)^{qn}&=b^{pn}\quad \end{align} $$ Then, taking $n$-th and then $q$-th roots, we get our desired result (I think this is possible, again, by Theorem $1.21$, but I'm not sure). Could someone check my proof and tell me which facts about exponentiation we are allowed to assume and use for these kinds of proofs?
Rudin defined $b^n$ for $n \in \mathbb Z$ as notation to mean $b^n = b\cdot b\cdots b$. Simple grouping allows you to assume $b^nb^m = b \cdot b\cdots b \cdot b\cdot b\cdots b = b^{n+m}$ and $(b^n)^m = b^{nm}$. By Theorem 1.21 you know that for $b^m$ there exists a unique $d := (b^m)^{1/n}$ such that $d^n = b^m$. The exercise is to show that if $m/n = p/q$ then $(b^p)^{1/q} = (b^m)^{1/n}$. Your proof is mostly good. $((b^p)^{1/q})^{mq} = ([(b^p)^{1/q}]^q)^m = (b^p)^m = b^{pm}$ So $(b^p)^{1/q} = (b^{pm})^{1/mq}$, which is a uniquely defined number by Theorem 1.21. $((b^m)^{1/n})^{np} = ([(b^m)^{1/n}]^n)^p = (b^m)^p = b^{pm}$ So $(b^m)^{1/n} = (b^{pm})^{1/np}$, which is a uniquely defined number by Theorem 1.21. But $1/(np) = 1/(mq)$, so $(b^{pm})^{1/np} =(b^{pm})^{1/mq}$ are both the same unique number. So $(b^p)^{1/q} = (b^{pm})^{1/mq} = (b^{pm})^{1/np}= (b^m)^{1/n}$ are all different ways of expressing the same unique number. Thus defining $b^r$ as $(b^m)^{1/n}$ when $r = m/n$ is consistent, not ambiguous, and always existent. Thus it is "well-defined".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
$3^{n+1}$ divides $2^{3^n}+1$ Describe all positive integers $n$ such that $3^{n+1}$ divides $2^{3^n}+1$. I am a little confused about what the question asks: does it ask me to find all such positive integers, or does it ask me to prove that for every positive integer $n$, $3^{n+1}$ divides $2^{3^n}+1$? Kindly clarify this doubt, and if it's the former, please verify my solution: $n=1$.
By induction: case $n=0,1$ is obvious, assume the claim for $n \in \mathbb{N}$. Then, $$2^{3^{n+1}} +1 = ((2^{3^{n}}+1)-1)^3 +1 = (2^{3^{n}}+1)^3 -3(2^{3^{n}}+1)^2 +3 (2^{3^{n}}+1)$$ and by the induction hypothesis $ 3^{n+2}$ divides the last two terms. For the first term (call it $a$), induction again gives $3^{n+1}$ divides $a^{1/3}$. Then, $3^{n+2}$ divides $3^{3n+3}$ which divides $a$ so by transitivity we're done.
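An empirical check of the statement for small $n$ (plain Python, using modular exponentiation to keep the numbers small):

```python
for n in range(8):
    m = 3**(n + 1)
    # 3^(n+1) divides 2^(3^n) + 1  iff  2^(3^n) = -1 (mod 3^(n+1))
    print(n, (pow(2, 3**n, m) + 1) % m == 0)   # True for every n
```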
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 0 }
Geometric Inequality $(a+b)(b+c)(c+a)(s-a)(s-b)(s-c)\leq (abc)^{2}$ Hi everyone. Let $a$, $b$, $c$ be the three sides of a triangle. Prove or disprove the following. $(a+b)(b+c)(c+a)(a+b-c)(b+c-a)(c+a-b)\leq 8(abc)^{2}$ I know two inequalities: $8(s-a)(s-b)(s-c)\leq abc~$ and $~(a+b)(b+c)(c+a)\geq 8abc$ But for the above combination of them, I have no idea. Thanks in advance.
Use Heron's formula: $$(a+b-c)(b+c-a)(c+a-b)(a+b+c)=\dfrac{a^2b^2c^2}{R^2}$$ where $R$ is the circumradius of $\Delta ABC$. Your inequality can then be written as $$8R^2\ge\dfrac{(a+b)(b+c)(a+c)}{a+b+c}$$ Since $$9R^2\ge a^2+b^2+c^2$$ it suffices to prove $$8(a+b+c)(a^2+b^2+c^2)\ge 9(a+b)(b+c)(a+c)\tag{1}$$ By the AM-GM inequality $$27(a+b)(b+c)(a+c)\le 8(a+b+c)^3$$ so (1) follows from $(a+b+c)^2\le 3(a^2+b^2+c^2)$, which is Cauchy-Schwarz.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $z+\frac1z$ is real, then either $|z|=1$ or $z$ is real. Prove that if $z+\frac1z$ is real, then either $|z|=1$ or $z$ is real. I am not sure whether my proof is sufficient. So far, I have shown that $$z+\frac1z = \frac{z^2+1}{z}=\frac{|z|+1}{z}$$ However, I don't think the proof is enough, and also I seem to have proven that they both have to be real, not either... Please advise. Note: I realised that my proof is wrong, as it was kindly pointed out that $z^2$ doesn't equal $|z|$. I remembered wrongly; it should be $zz^{*}=|z|^2$, with $z^{*}$ being the conjugate of $z$.
Another way: \begin{align} z+\frac{1}{z}&=\rho(\cos\theta+i\sin\theta)+\frac{1}{\rho}(\cos\theta-i\sin\theta)\\ &=\left(\rho+\frac{1}{\rho}\right)\cos\theta+i\left(\rho-\frac{1}{\rho}\right)\sin\theta \end{align} And $$ \left(\rho-\frac{1}{\rho}\right)\sin\theta=0\implies \rho=1 \text{ or } \theta=0 \text{ or } \theta=\pi $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Denominator in rational gcd of integer polynomials A recent question tells us that even if two polynomials $f,g\in \mathbb Z[X]$ have no common factor as polynomials, their values at integer points may have common factors. That question gives this example: $$ f=x^3-x^2+3x-1, \qquad g=x^3+2, \qquad \gcd(f(27),g(27))=31 $$ The explanation I've given for this example is that even though $\gcd(f,g)=1$ in $\mathbb{Z}[x]$, we cannot always write $1=uf+vg$ with $u,v \in \mathbb{Z}[x]$ (because $\mathbb{Z}[x]$ is not a PID). But we can write $1=uf+vg$, if we allow $u,v \in \mathbb{Q}[x]$. In the example above, we get $$ 1 = \dfrac1{31} (-6 x^2-7 x-3)f(x)+\dfrac1{31}(6 x^2+x+14)g(x) $$ Now, clearing denominators, we get $d = uf+vg$ with $u,v \in \mathbb{Z}[x]$ and $d \in \mathbb{Z}$. * *Is there a name for $d$ in terms of $f$ and $g$? *Can we compute $d$ without performing the entire extended Euclidean algorithm in $\mathbb{Q}[x]$? *When $d>1$, is it always true that some values of $f$ and $g$ (at the same point) are not coprime? It seemed that $d$ is the resultant of $f$ and $g$, but perhaps not.
Regarding Question $1$: I think that $d$ in your post is, in the general case, some divisor of the resultant. However, any divisor $r>1$ of the resultant would do, in the sense that it may be the $\gcd\big(f(n), g(n)\big)$ for some $n$: for any divisor $r$ of the resultant $Res(f,g)=d$, we can always find some integer $n$ such that $\gcd\big(f(n),g(n)\big)\geq r>1$, provided that the following system of simultaneous equations $$ \begin{array}{c} f(x)\equiv 0\mod r \\ % \\ g(x)\equiv 0\mod r \end{array} $$ has a solution $x\equiv n\mod r$. Then, the sequence $$ m_{n,r}(i)=n+ri $$ for all $i\geq0$, has the property that: $$ \gcd\Big(f\big(m_{n,r}(i)\big),g\big(m_{n,r}(i)\big)\Big)\geq r>1 $$ Regarding Question $2$: The resultant $Res(f,g)$ of two polynomials over a field is, by definition, the determinant of the Sylvester matrix of the polynomials, and it can be shown to be equal to the product of the differences of their roots (some of which may lie in a field extension) times suitable powers of the leading coefficients of $f,g$. Regarding Question $3$: The answer is negative in general. For example, consider the polynomials $f(x)=5x+1$, $g(x)=5x+6$. Then $Res(f,g)=25$ but the simultaneous equations $$ \begin{array}{c} 5x+1\equiv 0\mod 5 \\ % \\ 5x+6\equiv 0\mod 5 \end{array} $$ have no common solutions: $f-g=5$, thus any common divisor of $f(n), g(n)$ must divide $5$. However, neither of $f(n), g(n)$ is divisible by $5$. Thus, $\gcd(5n+1,5n+6)=1$ for any positive integer $n$. Notice also that $$ u(x)f(x)+w(x)g(x)=5 $$ for $u(x)=-1$ and $w(x)=1$.
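The example from the original question can be worked out with SymPy: the resultant of $f$ and $g$ is divisible by $31$, the prime that shows up as $\gcd(f(27),g(27))$:

```python
from math import gcd
from sympy import symbols, resultant

x = symbols('x')
f = x**3 - x**2 + 3*x - 1
g = x**3 + 2
R = resultant(f, g, x)
print(R, R % 31)                                     # remainder 0
print(gcd(int(f.subs(x, 27)), int(g.subs(x, 27))))   # 31
```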
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show for $x,y \in \mathbb{X} : |d_A (x) - d_A (y)| \leq d(x,y)$ For $A \subset \mathbb{X}$ non empty and $x \in \mathbb{X}$ define the distance of $x$ to $A$ by $$d_A (x) = \inf \limits _{a \in A} d(x,a)$$ I am trying to show for $$x,y \in \mathbb{X} : |d_A (x) - d_A (y)| \leq d(x,y)$$ This is the proof I have. I start with the triangle inequality: $$d(x,a) \leq d(x,y) + d(y,a)$$ I note $$d_A (x) = \inf \limits _{a \in A} d(x,a)$$ and $$d_A (y) = \inf \limits _{a \in A} d(y,a)$$ So the triangle inequality becomes $$d_A (x) \leq d(x,y) + d_A (y)$$ At this point it is obvious you rearrange to get the desired solution but that was marked as wrong and instead you change the role of $x$ and $y$. Why is rearranging wrong? And why do you have to change the roles of $x$ and $y$?
In order to prove that $$\vert d_A (x) - d_A (y) \vert \leq d(x,y),$$ you have to prove the following two inequalities: $$\begin{cases} -d(x,y) \le d_A (x) - d_A (y) \\ d_A (x) - d_A (y) \le d(x,y) \end{cases}.$$ In the question of your post, you proved the second one. Rearranging only won't be enough to obtain the first one. However, as $x,y$ play symmetric roles, you can permute them to get the first inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to show that the Banach space $\left(C[a,b],\lVert.\rVert_{\scriptsize C[a,b]}\right)$ is not a Hilbert space? I want to show that the Banach space $\left(C[a,b],\lVert.\rVert_{\scriptsize C[a,b]}\right)$ is not a Hilbert space. So I should show that it is not an inner product space. Most likely, the parallelogram equality fails for some two elements of this space, but I could not find these two elements. Thanks for your help.
Take $$\begin{cases} f_1(x)=-1+2\frac{x-a}{b-a}\\ f_2(x)=\left\vert \frac{2x - (a+b)}{b-a} \right\vert \end{cases}$$ You have $\Vert f_1 \Vert = \Vert f_2 \Vert=1$ and $\Vert f_1+f_2 \Vert = \Vert f_1-f_2 \Vert = 2$. Hence the parallelogram law is not satisfied.
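A numerical illustration of the failure (assuming NumPy), on $[a,b]=[0,1]$: the computed sup norms give $2(\Vert f_1\Vert^2+\Vert f_2\Vert^2)=4$ but $\Vert f_1+f_2\Vert^2+\Vert f_1-f_2\Vert^2=8$:

```python
import numpy as np

a, b = 0.0, 1.0
x = np.linspace(a, b, 10001)
f1 = -1 + 2*(x - a)/(b - a)
f2 = np.abs((2*x - (a + b))/(b - a))
sup = lambda f: np.max(np.abs(f))     # sup norm on the sample grid
print(sup(f1), sup(f2), sup(f1 + f2), sup(f1 - f2))   # 1.0 1.0 2.0 2.0
```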
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why does $\left\| {\left| A \right|} \right\| \le \left\| {\left| I \right|} \right\|$, for every doubly stochastic matrix $A \in M_n$? Let $\left\| {\left| . \right|} \right\|$ be a unitarily invariant matrix norm on $M_n$. Why does $\left\| {\left| A \right|} \right\| \le \left\| {\left| I \right|} \right\|$, for every doubly stochastic matrix $A \in M_n$ ?
In the spirit of Omnomnomnom's answer, here is an alternative approach that doesn't require Birkhoff theorem. It is well-known that when $A$ is doubly stochastic, its operator norm is equal to $1$ (because $1\le\rho(A)^2 \le \|A\|_2^2 = \rho(A^TA) \le \|A^TA\|_1 = 1$). So, by unitary invariance of the norm $|||\cdot|||$ in question and by singular value decomposition, you may assume that $A$ is a nonnegative diagonal matrix whose largest diagonal entry is $1$. It is easy to show that such a matrix is a convex combination of at most $2n$ diagonal real orthogonal matrices.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What will be the remainder when $2^{31}$ is divided by $5$? The question is given in the title: Find the remainder when $2^{31}$ is divided by $5$. My friend explained it to me this way: $2^2$ gives remainder $-1$. So, any power of $2^2$ gives a remainder that is the corresponding power of $-1$; in particular $2^{30}=(2^2)^{15}$ gives remainder $(-1)^{15}=-1$. So, $2^{30}\times 2$ or $2^{31}$ gives remainder $3$. Now, I cannot understand how he got the last line. So, please explain this line. Also, how can I do this using modular congruency?
The salient feature here is that when taking the remainder of a product modulo some integer, it doesn't matter if you first take the remainders or first compute the product. In other words: $$ab \equiv (a \bmod n)(b \bmod n) \pmod n$$ Thus $2^{31} = 2^{30}\times 2 \equiv (2^{30} \bmod 5)(2\bmod 5) \equiv (-1)^{15}\cdot 2 \equiv -2 \equiv 3 \pmod 5$
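The same computation in Python, where `pow` with a third argument reduces modulo $n$ after every multiplication, exactly as in the product rule above:

```python
print(pow(2, 31, 5))           # 3
print(pow(2, 30, 5) * 2 % 5)   # also 3, via the product rule
```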
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Solving $\lim_{x\to+\infty}(\sin\sqrt{x+1}-\sin\sqrt{x})$ Do you have any tips on how to solve the limit in the title? Whatever I think of doesn't lead to the solution. I tried using: $\sin{x}-\sin{y}=2\cos{\frac{x+y}{2}}\sin{\frac{x-y}{2}}$ and I got: $$\lim_{x\to+\infty}\bigg(2\cos{\frac{\sqrt{x+1}+\sqrt{x}}{2}}\sin{\frac{\sqrt{x+1}-\sqrt{x}}{2}}\bigg)=$$ $$\lim_{x\to+\infty}\bigg(2\cos{\frac{\sqrt{x+1}+\sqrt{x}}{2}\frac{\sqrt{x+1}-\sqrt{x}}{\sqrt{x+1}-\sqrt{x}}}\sin{\frac{\sqrt{x+1}-\sqrt{x}}{2}}\bigg)=$$ $$\lim_{x\to+\infty}\bigg(2\cos{\frac{1}{2(\sqrt{x+1}-\sqrt{x})}}\sin{\frac{\sqrt{x+1}-\sqrt{x}}{2}}\bigg)$$ but, as you can see, the argument of the $\cos$ term blows up to $\infty$. How can I deal with that?
Notice, $$\lim_{x\to +\infty}\left(2\cos\left(\frac{1}{2(\sqrt{x+1}-\sqrt x)}\right)\sin\left(\frac{ \sqrt{x+1}-\sqrt x}{2}\right) \right)$$ $$=2\lim_{x\to +\infty}\cos\left(\frac{1}{2(\sqrt{x+1}-\sqrt x)}\right)\sin\left(\frac{1}{2(\sqrt{x+1}+\sqrt x)}\right)$$ Now the cosine factor is bounded, since $-1\le \cos y\le 1\ \ \ \forall \ \ \ y\in \mathbb R$, while the argument of the sine tends to $0$, so the sine factor tends to $\sin(0)=0$. A bounded factor times a factor tending to $0$ tends to $0$, hence the limit is $\color{red}{0}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Inverse Laplace transform of $\tan^{−1}\left(\frac{1}{s}\right)$ I'm studying Laplace transformations, but I don't understand where $-\frac{1}{t}$ comes from. And what is the relationship between the corollary and the example?
Think of $\tan^{-1} \left( \frac 1 s\right)$ as the antiderivative of $\frac{-1}{s^2+1}$. Then in your example, $n=1$. This introduces a factor of $1/t$ on the left hand side, and a negative sign on the right hand side. Concretely, using $\mathcal L\{t\,f(t)\}=-F'(s)$ with $F(s)=\tan^{-1}(1/s)$: since $F'(s)=-\frac{1}{s^2+1}=-\mathcal L\{\sin t\}$, we get $t\,f(t)=\sin t$, hence $f(t)=\frac{\sin t}{t}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determine the set of solutions of $ |x| <2x+3 $ Determine the set of solutions of $ |x| <2x+3 $, where $|x| $ denotes the absolute value of $x$. Forgive the banality of the question, but when I try to solve this problem with the standard techniques I just can't get an answer: Assuming $x >0$ we have that $x<2x+3$, which gives $x>-3$. Then, assuming $x<0$, I have $-x<2x+3$, which gives $x>-1$. How do I extrapolate the solution from that? Now, I know that the solution is $x>-1$ just by using the fact that $y=2x+3$ must intersect $y=|x|$ in the second quadrant at $x=-1$, and for every $x>-1$ the inequality is satisfied, but how do I get this solution just using the definition of absolute value?
When we assume $x > 0$ we get $x < 2x + 3$, which yields $x > -3$. But we assumed $x > 0$. So far our set of solutions is $x > 0$, since it satisfies both $x > 0$ and $x > -3$ (both our conditions). In terms of set theory, one condition is satisfied when $x \in (0,\infty)$ and the other one when $x\in (-3,\infty)$, so both are satisfied when $x\in (0,\infty)\cap (-3,\infty) = (0,\infty)$. When we assume $x \leq 0$ we get $x > -1$. To satisfy both conditions we now need $-1 < x \leq 0$. In terms of set theory, one condition is satisfied when $x \in (-\infty,0]$ and the other one when $x\in (-1,\infty)$, so both are satisfied when $x\in (-\infty,0]\cap (-1,\infty) = (-1,0]$. We now combine both solutions to get $x > -1$. In terms of set theory, we will have a solution if $x\in (-1,0]\cup (0,\infty) = (-1,\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1591984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Fourier transform of simple function I'm stuck on this problem: Calculate the Fourier transform of $$ f(t) = \sin t \, , \, |t|<\pi \\ f(t) = 0 \, , \, |t|\ge \pi $$ So the start is really simple $$ \hat{f}(\omega) = \int_{-\pi}^{\pi}\sin(t)e^{-i \omega t}dt = ... = \frac{1}{2i}\left[\frac{e^{-(\omega-1)it}}{-i(\omega -1)} \right]_{-\pi}^{\pi}-\frac{1}{2i}\left[\frac{e^{-(\omega+1)it}}{-i(\omega +1)} \right]_{-\pi}^{\pi} $$ and now I am asked to simplify this so I get $\hat{f}(\omega)= 2i\frac{\sin \pi \omega}{\omega^2-1}$. I have been trying for hours and I simply can't manage it! Is there anyone here that knows the trick? Maybe it has been answered before somewhere?
You should use this identity after you plug your bounds in the antiderivative: $$\sin \theta = \frac{e^{i\theta}-e^{-i\theta}}{2i}$$ With this, you get $$\frac{\sin(\pi(w-1))}{i(w-1)} -\frac{\sin(\pi(w+1))}{i(w+1)}$$ Observe $$\frac{\sin(\pi(w-1))}{i(w-1)} -\frac{\sin(\pi(w+1))}{i(w+1)} = \frac{-\sin(\pi w)}{i(w-1)} -\frac{-\sin(\pi w)}{i(w+1)} = \frac{-\sin(\pi w)}{i(w-1)} +\frac{\sin(\pi w)}{i(w+1)}.$$ Combine denominators and you'll get exactly what you want: $$\frac{-(w+1)\sin(\pi w) +(w-1)\sin(\pi w)}{i(w+1)(w-1)} = \frac{-2\sin(\pi w)}{i(w^2-1)} = \frac{2i\sin(\pi w)}{(w^2-1)}$$
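For real $w$ the transform is purely imaginary, so the formula can be checked numerically (assuming SciPy) by comparing $\int_{-\pi}^{\pi}\sin t\,\sin(wt)\,dt$ with $-2\sin(\pi w)/(w^2-1)$:

```python
import numpy as np
from scipy.integrate import quad

w = 0.7   # any real value away from w = +/-1
val, _ = quad(lambda t: np.sin(t)*np.sin(w*t), -np.pi, np.pi)
print(val, -2*np.sin(np.pi*w)/(w**2 - 1))   # the two should agree
```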
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The real numbers are a field extension of the rationals? In preparing for an upcoming course in field theory I am reading a Wikipedia article on field extensions. It states that the complex numbers are a field extension of the reals. I understand this since $\mathbb R(i) = \{ a + bi : a,b \in \mathbb R\}$. Then the article states that the reals are a field extension of the rationals. I do not understand how this could be. What would you adjoin to $\mathbb Q$ to get all the reals? The article doesn't seem to say anything more about this. Is there a way to explain this to someone who has yet to take a course in field theory?
For $\mathbb{R}$ to be field extension of $\mathbb{Q}$, all we need is that $\mathbb{R}$ is a field containing $\mathbb{Q}$ as a subfield. That's definitely true. The construction is a bit more delicate and analytic in nature: $\mathbb{R}$ is the completion of $\mathbb{Q}$ and is substantially larger. Since $\mathbb{R}$ is uncountable, it's not an extension of finite degree, meaning that you cannot write $\mathbb{R} = \mathbb{Q}(a_1, a_2, ..., a_n)$ for some finite sequence of symbols $a_i$ (nor even a countable sequence). You have to adjoin uncountably many symbols. If you're interested in a way to construct reals from rationals, take a look at Dedekind cuts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 4 }
Beginning to Work on the Runge-Kutta Method I'm starting to play around with techniques in the introductory numerical methods literature, and I am starting to use the Runge-Kutta method for approximating solutions of systems of first-order diff eq.'s, and I'm getting a little tripped up on the method. Take the system $$u_1' = -4u_1 - 2u_2 + \cos(t) + 4\sin(t), u_1(0) = 0$$ $$u_2' = 3u_1 + u_2 - 3\sin(t), u_2(0) = -1.$$ I can't quite figure out how to develop the endpoints for this algorithm; could someone help me with figuring out how to set this up for algorithmic processing?
We are given the system: $$u_1' = -4u_1 - 2u_2 + \cos t + 4\sin t, u_1(0) = 0 \\ u_2' = 3u_1 + u_2 - 3\sin t, u_2(0) = -1.$$ Here is the setup for the system of differential equations using a $4^{th}$-order Runge-Kutta method:

f(t, u_1, u_2) = -4 u_1 - 2 u_2 + cos t + 4 sin t
g(t, u_1, u_2) = 3 u_1 + u_2 - 3 sin t

t_0 = 0
u_1(0) = 0
u_2(0) = -1
h = 0.1

k_1 = h f(t(n), u_1(n), u_2(n))
l_1 = h g(t(n), u_1(n), u_2(n))
k_2 = h f(t(n) + h/2, u_1(n) + 1/2 k_1, u_2(n) + 1/2 l_1)
l_2 = h g(t(n) + h/2, u_1(n) + 1/2 k_1, u_2(n) + 1/2 l_1)
k_3 = h f(t(n) + h/2, u_1(n) + 1/2 k_2, u_2(n) + 1/2 l_2)
l_3 = h g(t(n) + h/2, u_1(n) + 1/2 k_2, u_2(n) + 1/2 l_2)
k_4 = h f(t(n) + h, u_1(n) + k_3, u_2(n) + l_3)
l_4 = h g(t(n) + h, u_1(n) + k_3, u_2(n) + l_3)

u_1(n + 1) = u_1(n) + 1/6 (k_1 + 2 k_2 + 2 k_3 + k_4)
u_2(n + 1) = u_2(n) + 1/6 (l_1 + 2 l_2 + 2 l_3 + l_4)

Here are the first few calculations: $$(t, u_1(t), u_2(t)) = (0., 0, -1), (0.1, 0.272041, -1.07705), (0.2, 0.495482, -1.11554)$$ Continuing the iteration lets us plot both solution curves. For this system, we can also calculate the exact result: $$u_1(t) = e^{-2 t} \left(2 e^t+e^{2 t} \sin t-2\right) \\ u_2(t) = -e^{-2 t} \left(3 e^t-2\right)$$ A plot of this exact solution agrees visually with the numerical one. Of course you can do an error analysis, but the results look pretty good visually.
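Here is a runnable sketch of the same scheme (assuming NumPy); `rk4_step` applies one classical Runge-Kutta step to the vector $u=(u_1,u_2)$:

```python
import numpy as np

def F(t, u):
    # right-hand side of the system
    u1, u2 = u
    return np.array([-4*u1 - 2*u2 + np.cos(t) + 4*np.sin(t),
                     3*u1 + u2 - 3*np.sin(t)])

def rk4_step(t, u, h):
    k1 = h * F(t, u)
    k2 = h * F(t + h/2, u + k1/2)
    k3 = h * F(t + h/2, u + k2/2)
    k4 = h * F(t + h, u + k3)
    return u + (k1 + 2*k2 + 2*k3 + k4) / 6

t, u, h = 0.0, np.array([0.0, -1.0]), 0.1
for _ in range(2):
    u = rk4_step(t, u, h)
    t += h
print(t, u)   # roughly (0.2, [0.495482, -1.11554]), as in the table above
```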
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interpreting tuples as functions I have been mulling over this for a while now. I am told $\mathbb R^n$ can be interpreted as a set of functions. Take $\mathbb R^2$, for example: I can see how we might interpret it as the set containing all ordered pairs $\langle x_1,x_2\rangle$. However I do not understand the notation: $ \{ f:\{1,2\} \longrightarrow \mathbb R \} = \mathbb R^2$ This would mean we have a domain with two elements and a codomain with $| \mathbb R|$ elements (which doesn't make sense to me). What would the values of $f(1)$ and $f(2)$ be by this definition? Basically I'm asking what justification there is for the following: $$ \{ f:\{1,2\} \longrightarrow \mathbb R \} = \{\langle x_1,x_2\rangle:x_1,x_2 \in \mathbb R\}$$ For instance it's easy to see why the R.H.S. can be interpreted as the Cartesian plane, but how does the two-dimensional plane relate to the L.H.S. in the above?
The left hand side is the set of all functions from $\{1,2\}$ to $\mathbb{R}$. For instance one such function is given by $1\to \pi$, $2\to -7$. This function corresponds to the point $(\pi, -7)$ in $\mathbb{R}^2$. More generally, the function that sends $1$ to $a$ and $2$ to $b$ (with $a,b\in\mathbb{R}$) corresponds to the point $(a,b)\in \mathbb{R}^2$.
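In programming terms, the identification is between a lookup on the index set $\{1,2\}$ and an ordered pair (an informal Python illustration):

```python
f = {1: 3.14159, 2: -7.0}    # a "function" from {1, 2} to R, as a dict
point = (f[1], f[2])         # the corresponding pair (f(1), f(2))
print(point)                 # (3.14159, -7.0)
```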
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Gluing together functions on a closed subvariety I'm trying to get an intuition for what sheafification does. I came across a passage from Perrin's algebraic geometry book about closed subvarieties. It says that if $X$ is an algebraic variety and $Y$ is a closed subvariety, we can inherit a sheaf on $Y$ from $X$. It suggests the natural thing to do would be to define: $O'(V) := \{ f : V \rightarrow K | \text{there is an open } U \subseteq X \text{ such that } U \cap Y = V \text{ and } g|_V = f \text{ for some } g \in O_U \}$ And then it goes on to claim that this is typically not a sheaf, but merely a presheaf, and that the correct thing to do is to sheafify it. I was trying to justify this last line by finding a counterexample to the gluing axiom. This is what I came up with: Let $X = \mathbb{A}^2$, let $Y = \mathbb{V}(xy)$. Then let $U_1 = D(x)$ and $U_2 = D(y)$, which form a cover of the open subset $Y - \{(0,0)\}$ of $Y$. Define $f_1 = 0$ and $f_2 = 1$, which are elements of $O'(U_1)$ and $O'(U_2)$ respectively. They have no overlap, since their would-be intersection at the origin has been left out. But when you glue them together, you seem to run into trouble near the origin. (Informally, the polynomial's value seems to approach both $0$ and $1$ as you approach the origin. Less informally, the density of this open set in $Y$ ought to allow you to extend the polynomial to the origin in two distinct ways.) My question is simply, is my analysis above valid? I feel like I may have overlooked some assumption somewhere. If it is valid, then what function do you get when you glue together these two functions? If I made a mistake somewhere, could I get some guidance towards a true counterexample?
EDIT: Incorrect. Your analysis is correct. Letting $U = U_1 \cup U_2$, we do indeed have a regular function on $U$ that is $0$ on $U_1$ and $1$ on $U_2$ (it is locally rational on $U$). There is clearly no regular function on $\mathbb{A}^2 \supseteq Y$ that restricts to our 'weird function'. However, if we sheafify the naive presheaf then such a section is allowed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Alternative way to show that a simple group of order $60$ can not have a cyclic subgroup of order $6$ Suppose $G$ is a simple group of order $60$; show that $G$ can not have a subgroup isomorphic to $ \frac {\bf Z}{6 \bf Z}$. Of course, one way to do this is to note that the only simple group of order $60$ is $A_5$. So if $G$ has a cyclic subgroup of order $6$ then it must have an element $\sigma$ of order $6$, i.e. the (disjoint) cycle decomposition of $\sigma$ must consist of a $3$-cycle and a transposition, which is an odd permutation and hence impossible in $A_5$. Hence we are done. I'm interested in solving this question without using the fact that $ G \cong A_5$. Here is what I tried: Suppose $G$ has a subgroup, say $H$, isomorphic to $ \frac {\bf Z}{6 \bf Z}$, then consider the natural transitive action $G \times \frac {G}{H} \to \frac {G}{H}$, which gives a homomorphism $\phi \colon G \to S_{10}$. Can someone help me to prove that $\ker \phi$ is nontrivial? Is there any other way to solve this question? Any hints/ideas?
(1) Suppose $H=C_6$ is a cyclic subgroup of order $6$ in $G$ (simple group of order $60$). Let $z$ be the element of order $2$ in $H$, so that $\langle z\rangle$ is subgroup of order $2$ in $H$. (2) $C_G(z)=$centralizer of $z$ in $G$; it clearly contains $H$. Also, $\langle z\rangle$ is contained in Sylow-$2$ subgroup of $G$, say $P_2$, which has order $4$, hence it is abelian, and therefore, centralizer of $z$ contains this Sylow-$2$ subgroup. (3) Thus, $C_G(z)$ contains a subgroup of order $3$ (of $H$) as well as Sylow-$2$ subgroup of order $4$; hence $|C_G(z)|\geq 12$. (4) Then $[G\colon C_G(z)]\leq 5$, and since $G$ is simple group of order $60$, $G$ can not have a subgroup of index $<5$ [prove it]. Hence $|C_G(z)|$ must be $12$. (5) Thus, $H\subseteq C_G(z)$ and $|C_G(z)|=12$. Then $H$ has index $2$ in $C_G(z)$, so $H\trianglelefteq C_G(z)$. If $P_3$ denotes the (unique, characteristic) subgroup of $H$ of order $3$ then it follows that $P_3$ is normal in $C_G(z)$. [Characteristic subgroup of a normal subgroup is normal]. (6) Thus, $P_3$ is a Sylow-$3$ subgroup of $G$, which is normal in a group of order $12$ namely $C_G(z)$, i.e. index of normalizer of a Sylow-$3$ subgroup is at most $5$; it must by $5$ (by similar reason as in (4)). (7) So $P_3$ is a Sylow-$3$ subgroup of $G$ with index of its normalizer equal to $5$; this means the number of Sylow-$3$ subgroups must be $5$; this contradicts Sylow's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Test convergence, find $\alpha$ which makes integral converge I'm testing the convergence of this improper integral $$\int_2^{\infty} x(\ln x)^{\alpha} dx$$ I used the limit comparison test with $\frac{1}{x}$, which is divergent, and I found that this integral diverges for all values of $\alpha$. Am I correct?
You are correct. Observe that the integrand is positive and we have, for all real values of $\alpha$, $$ \lim_{x \to +\infty}\left(x(\ln x)^{\alpha}\right)=+\infty $$ thus the initial integral is divergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the difference between an impulse response and a transfer function? An impulse response is the output you get when you apply an impulse, like a Dirac delta function, to your system (only for LTI?). By knowing the impulse response you know the system. The transfer function relates the input to the output, i.e. this is a representation of the system. So aren't both the same? Or did I misunderstand something?
For any piecewise continuous function $f : \mathbb{R} \to \mathbb{R}$, the Dirac delta function has the nice property $$ f(t) = \int_{-\infty}^\infty f(s) \delta(t - s) ds $$ So, any piecewise continuous function can be written as a sum of Dirac delta functions. This is particularly useful for LTI systems. If we know the impulse response of a LTI system, we can calculate its output for a specific input function using the above property. In fact, it is called the "convolution integral". The Laplace transform of the impulse response is called the transfer function. It is also useful since $\mathcal{L}\{\delta(t)\} = 1$ and $\mathcal{L}\{f * g\} = \mathcal{L}\{f\} \mathcal{L}\{g\}$. Because of this property, it gives a nice rational polynomial representation of input/output behavior of LTI systems. You may also calculate the impulse response of a time-varying and/or non-linear system, but it isn't useful, since you can't use it to calculate the output for an arbitrary input function. The system must be LTI to do so.
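In discrete time, the convolution picture is easy to demonstrate (assuming NumPy); the impulse response below is a made-up decaying exponential, chosen only for illustration:

```python
import numpy as np

h = 0.5 ** np.arange(10)       # impulse response of some stable LTI system
x = np.zeros(10); x[0] = 1.0   # a discrete impulse
print(np.convolve(x, h)[:10])  # output to an impulse: h itself

u = np.ones(10)                # a step input
print(np.convolve(u, h)[:10])  # step response, again via convolution
```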
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How to compute $\lim\limits_{x \to +\infty} \left(\frac{\left((x-1)^2-x\ln(x)\right)(x)!}{(x+2)!+7^x}\right)$? I have a problem with this limit; I don't know what method to use. I have no idea how to compute it. Can you explain the method and the steps used? $$\lim\limits_{x \to +\infty} \left(\frac{\left((x-1)^2-x\ln(x)\right)(x)!}{(x+2)!+7^x}\right)$$
I would proceed as follows. Dividing numerator and denominator by $x!$, the limit equals $$\lim_{x \to \infty} \frac {(x - 1)^2 - x \log x} {(x+1)(x+2) + 7^x/x!}.$$ Note that $(x - 1)^2 - x \log x \sim x^2$, $(x+1)(x+2) \sim x^2$ and $7^x/x! = o (1)$. Using these, we have $$\lim_{x \to \infty} \frac {(x - 1)^2 - x \log x} {(x+1)(x+2) + 7^x/x!} = \lim_{x \to \infty} \frac {x^2} {x^2} = 1.$$ I know it's not rigorous, but it's the way I process limits in my head when I glance at ones that look like a complete mess, like this one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Find $\operatorname{lcm}(2n-1,2n+1)$ I'm trying to find the formula for $\operatorname{lcm}(2n-1,2n+1)$ with $n \in \mathbb{Z}$. Here is my solution but I'm not sure about it. We know that $$\operatorname{lcm}(a,b)=\frac{\lvert ab \rvert}{\gcd(a,b)}$$ Now if we substitute we have: $$\operatorname{lcm}(2n-1,2n+1)=\frac{\lvert (2n-1)(2n+1) \rvert}{\gcd(2n-1,2n+1)}=\frac{\lvert 4n^2-1 \rvert}{\gcd(2n-1,2n+1)}$$ Now we compute $\gcd(2n-1,2n+1)$. We can notice that $$2n+1=(2n-1)+2$$ Thus the two arguments do not divide one another (unless $n=0$), hence $\gcd(2n-1,2n+1)=1$ We can conclude that $$\operatorname{lcm}(2n-1,2n+1)=\lvert 4n^2-1 \rvert$$
Yes, your work is fine. A slightly different formulation of your study of the greatest common divisor would be to notice that the gcd of two numbers divides their difference, which is $2$. Hence, the gcd is either $1$ or $2$, and the fact that they are both odd implies that it's $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
How many acute triangles can be formed by 100 points in a plane? Given 100 points in the plane, no three of which are on the same line, consider all triangles that have all their vertices chosen from the 100 given points. Prove that at most 70% of those triangles are acute-angled.
This is from the 1970 International Mathematical Olympiad. You can find the questions here, and the (rather easy) solution to this question here. It was one of the homework questions for aspirants to the British team for IMO 1978 in Bucharest. I managed to prove an explicit upper bound on the maximum possible proportion of triangles in $n$ points that tends to $2/3$ as $n$ tends to $\infty$. I can't remember what it was, but it is not too hard to work out, using the process described in the following paragraphs. The proof of the simpler question goes like this: first prove geometrically that every $4$-set (i.e. set of $4$ points) must contain a non-acute triangle, so that at most $75$% of triangles in a $4$-set can be acute. Then argue combinatorially that if you have shown that at most a proportion $p$ of triangles in an $n$-set can be acute, it follows that at most a proportion $p$ of the triangles in an $(n+1)$-set can be acute. So we immediately get that at most $7\frac12$ of the $10$ triangles in a $5$-set can be acute; and because the number of acute triangles must be an integer, we can bring this down to $7$ out of $10$, i.e. $70$%. But there is no reason to stop there. The same process gains nothing when passing to $6$-sets, because $70$% of $20$ is an integer; but passing to $7$-sets gives us $70$% of $35$, which is $24\frac12$, which we can bring down to $24$. So we can define an integer sequence $(s_i)$ by $s_4 = 3$, and $s_{i+1} = \left\lfloor \dfrac{(i+1)s_i}{i-2}\right\rfloor$. This gives us an upper bound on the maximum possible number of acute triangles in an $n$-set. I seem to remember that it is a cubic in $n$, whose exact form depends on $n$ mod $3$. And the proportion $s_n/\binom{n}{3}$ tends to $2/3$. Note that $s_n$ is not necessarily the maximum possible number of acute triangles in an $n$-set. It just gives us an upper bound. In fact I think that this upper bound can't be attained for large enough $n$ ($n > 6$ perhaps?). I played around with this a bit, but after all it was $37$ years ago, so the details are blurred.
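The recurrence is easy to tabulate (plain Python); it reproduces the $70\%$ bound at $n=5$ and shows the proportion drifting down toward $2/3$:

```python
from math import comb

s = {4: 3}                      # at most 3 of the 4 triangles are acute
for i in range(4, 100):
    s[i + 1] = (i + 1) * s[i] // (i - 2)

for n in (5, 7, 10, 100):
    print(n, s[n], s[n] / comb(n, 3))   # 0.7 at n=5, tending toward 2/3
```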
{ "language": "en", "url": "https://math.stackexchange.com/questions/1592964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Prove that $\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n} \geq (n-1) \left (\frac{1}{\sqrt{x_1}}+\frac{1}{\sqrt{x_2}}+\cdots+\frac{1}{\sqrt{x_n}} \right )$ Let $x_1,x_2,\ldots,x_n > 0$ such that $\dfrac{1}{1+x_1}+\cdots+\dfrac{1}{1+x_n}=1$. Prove the following inequality. $$\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n} \geq (n-1) \left (\dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_2}}+\cdots+\dfrac{1}{\sqrt{x_n}} \right ).$$ This is Exercice 1.48 in the book "Inequalities-A mathematical Olympiad approach". Attempt I tried using HM-GM and I got $\left ( \dfrac{1}{x_1x_2\cdots x_n}\right)^{\frac{1}{2n}} \geq \dfrac{n}{\dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_2}}+\cdots+\dfrac{1}{\sqrt{x_n}}} \implies \dfrac{1}{\sqrt{x_1}}+\dfrac{1}{\sqrt{x_1}}+\cdots+\dfrac{1}{\sqrt{x_n}} \geq n(x_1x_2 \cdots x_n)^{\frac{1}{2n}}$. But I get stuck here and don't know if this even helps.
Rewrite our inequality in the following form: $\sum\limits_{i=1}^n\frac{x_i+1}{\sqrt{x_i}}\sum\limits_{i=1}^n\frac{1}{x_i+1}\geq n\sum\limits_{i=1}^n\frac{1}{\sqrt{x_i}}$, which is Chebyshev's sum inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 1 }
On the join of simplicial sets as a dependent product Prop. 3.5 of Joyal's notes on quasicategories gives a description of $X\star Y$ as $i_*(X,Y)$, where $i^*\dashv i_*$ is the adjunction $$ i^*\colon \mathbf{sSet}/\Delta[1] \leftrightarrows \mathbf{sSet}/\partial\Delta[1] $$ where $i^*$ is "pullback along $\partial\Delta[1]\to \Delta[1]$" and $i_*$ is the associated dependent product, which, since $\partial\Delta[1]$ is discrete, should morally be a mere product; this baffles me a lot, since $X\star Y$ seems much more complicated than what I get if I blind myself and try to write down what $i_*(A,B)$ is for $A,B\in \bf sSet$ (confusing ${\bf sSet}/\Delta[1]$ with ${\bf sSet}^{\Delta[1]}$, it chooses $A\times B\to B$). Here are the words of Joyal: any clue? I have some observations to make: somewhere there must be a typo, since either Joyal's symbol for adjointness is reversed, or he uses the left adjoint of pullback, $i_!$, which simply composes $A\to \partial\Delta[1]$ with $i$ (but this is pretty strange too, since then I am dealing with the mere coproduct map $X\sqcup Y\to \partial\Delta[1]\to\Delta[1]$). I have absolutely no clue about what he does in the proof, so I tried something different: my alternative proof strategy was to simply prove that the unit $X\star Y\to i_*(X,Y)$ is monic (only in this component); it's always a split epi since $i^*$ is a full functor. And, alas, I'm not able to prove this either!
By the Yoneda lemma for the category $\mathbf{\Delta}/[1]$, using the canonical equivalence $(\mathbf{\Delta}/[1])^\wedge = \mathbf{\Delta}^\wedge/\Delta^1$, it suffices to show that the morphism in question induces a bijection $$ Hom((\Delta^n,f), X \star Y) \to Hom((\Delta^n,f), i_*(X \sqcup Y)) $$ for each $n$, and each $f : \Delta^n \to \Delta^1$. By adjunction, it suffices to compute $$ Hom(i^*(\Delta^n,f), X \sqcup Y) $$ and compare the result with the formula in 3.2. For each morphism $f$ (there are $n+2$ choices), it is easy to compute $i^*(\Delta^n,f)$ by hand; the possible values are listed in the formula Joyal writes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Example of forward difference approaching derivative only up to $O(h)$ According to the article on Wikipedia about finite differences, the forward and backward differences by $h$ of a function $f(x)$, divided by $h$, approach the derivative to order $O(h)$, i.e. $$\frac {\Delta_h f(x)}{h} -f'(x)= \frac{f(x+h)-f(x)}{h} -f'(x) \in O(h)$$ $$\frac {\nabla_h f(x)}{h} -f'(x)= \frac{f(x)-f(x-h)}{h} -f'(x) \in O(h)$$ However, from Taylor's theorem, I'd expect this to be $O(h^2)$, which, following the article, is only the case for the central difference. $$\frac {\delta_h f(x)}{h} -f'(x)= \frac{f(x+h/2)-f(x-h/2)}{h} -f'(x) \in O(h^2)$$ Can anybody explain why? Or give an example of when the forward (/backward) difference only approaches to order $O(h)$?
Taylor's theorem gives $$ f(x+h) = f(x) + h f'(x) + O(h^2) $$ (for $h \to 0$) and therefore "only" $$ \frac{f(x+h)-f(x)}{h} -f'(x) = O(h) $$ A simple example is $f(x) = x^2$, where $$ \frac{f(x+h)-f(x)}{h} -f'(x) = \frac{(x+h)^2-x^2}{h} - 2x = h $$ For central differences, the function $g(h) = f(x+h) - f(x-h)$ has the property that not only $g(0) = 0$ but also $g''(0) = 0$, therefore $$ f(x+ \frac h2) - f(x - \frac h2) = g(\frac h2) = \frac h2 g'(0) + O(h^3) = h f'(x) + O(h^3) \, . $$
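The orders can also be measured directly (assuming NumPy), here for $f=\exp$ at $x=1$: halving $h$ roughly halves the forward-difference error but quarters the central-difference error:

```python
import numpy as np

f, df, x = np.exp, np.exp, 1.0   # f and its exact derivative
for h in (0.1, 0.05, 0.025):
    fwd = (f(x + h) - f(x)) / h - df(x)            # O(h) error
    cen = (f(x + h/2) - f(x - h/2)) / h - df(x)    # O(h^2) error
    print(h, fwd, cen)
```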
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove for every odd integer $a$ that $(a^2 + 3)(a^2 + 7) = 32b$ for some integer $b$. I've gotten this far: $a$ is odd, so $a = 2k + 1$ for some integer $k$. Then $(a^2 + 3)(a^2 + 7) = [(2k + 1)^2 + 3] [(2k + 1)^2 + 7]$ $= (4k^2 + 4k + 4) (4k^2 + 4k + 8) $ $=16k^4 + 16k^3 + 32k^2 + 16k^3 + 16k^2 + 32k + 16k^2 + 16k + 32$ $=16k^4 + 32k^3 + 64k^2 + 48k + 32$ But this isn't visibly a multiple of $32$; at most I can say $(a^2 + 3)(a^2 + 7) = 16b$ for some integer $b$.
Notice that $32k^3 + 64k^2 + 32 = 32(k^3 + 2k^2 + 1)$ is already a multiple of $32$, so it remains to handle $$16k^4 +48k = 16k(k^3+3).$$ If $k$ is even, we are done. If $k$ is odd, then $k^3 +3$ is even, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Need help with $\int_0^\pi\arctan^2\left(\frac{\sin x}{2+\cos x}\right)dx$ Please help me to evaluate this integral: $$\int_0^\pi\arctan^2\left(\frac{\sin x}{2+\cos x}\right)dx$$ Using substitution $x=2\arctan t$ it can be transformed to: $$\int_0^\infty\frac{2}{1+t^2}\arctan^2\left(\frac{2t}{3+t^2}\right)dt$$ Then I tried integration by parts, but without any success...
A Fourier analytic approach. If $x\in(0,\pi)$, $$\begin{eqnarray*}\arctan\left(\frac{\sin x}{2+\cos x}\right) &=& \text{Im}\log(2+e^{ix})\\&=&\text{Im}\sum_{n\geq 1}\frac{(-1)^{n+1}}{n 2^n}\,e^{inx}\\&=&\sum_{n\geq 1}\frac{(-1)^{n+1}}{n 2^n}\,\sin(nx),\end{eqnarray*}$$ hence by Parseval's theorem: $$ \int_{0}^{\pi}\arctan^2\left(\frac{\sin x}{2+\cos x}\right)\,dx=\frac{\pi}{2}\sum_{n\geq 1}\frac{1}{n^2 4^n}=\color{red}{\frac{\pi}{2}\cdot\text{Li}_2\left(\frac{1}{4}\right)}.$$ As a side note, we may notice that $\text{Li}_2\left(\frac{1}{4}\right)$ is quite close to $\frac{1}{4}$. By applying summation by parts twice we get: $$ \sum_{n\geq 1}\frac{1}{n^2 4^n} = \color{red}{\frac{1}{3}-\frac{1}{12}}+\sum_{n\geq 1}\frac{1}{9\cdot 4^n}\left(\frac{1}{n^2}-\frac{2}{(n+1)^2}+\frac{1}{(n+2)^2}\right)$$ and the last sum is positive but less than $\frac{11}{486}$, since $f:n\mapsto \frac{1}{n^2}-\frac{2}{(n+1)^2}+\frac{1}{(n+2)^2}$ is a positive decreasing function on $\mathbb{Z}^+$.
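If you want to check this numerically, here is a quick sketch (my addition; it assumes SciPy is available):

```python
import math
from scipy.integrate import quad

integral, _ = quad(lambda x: math.atan(math.sin(x) / (2 + math.cos(x)))**2, 0, math.pi)
li2 = sum(1 / (n**2 * 4**n) for n in range(1, 60))  # Li_2(1/4) from its defining series
print(integral, math.pi / 2 * li2)  # both ≈ 0.4204
```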
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 2, "answer_id": 0 }
What is the difference between an indexed family and a sequence? For indexed family wikipedia states: Formally, an indexed family is the same thing as a mathematical function; a function with domain J and codomain X is equivalent to a family of elements of X indexed by elements of J For sequence wikipedia states: Formally, a sequence can be defined as a function whose domain is a countable totally ordered set, such as the natural numbers. Both can have repeated elements and their order matters. Is there any difference between this two?
You can index something by the real numbers, for example. So for every real number $\alpha$ you might have a value $x_{\alpha}$. But since you cannot enumerate the real numbers, you cannot represent that as a sequence. So it's just an indexed family, something more general than a sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
show that the function is in $L^r$ Let $f$ be a measurable function and $1 \le p < r < q < \infty$. If there is a constant $C$ such that $$\mu ( \{ x : |f(x)| > \lambda \} ) \le \frac{C}{\lambda^p + \lambda^q} $$ for every $ \lambda > 0$, show that $f \in L^r $. I know that if both $f \in L^p $ and $f \in L^q $, then $f \in L^r$. But the condition provided does not seem to suffice for that approach (I can only show that $f$ is in weak $L^p$ and weak $L^q$). Any help is appreciated.
Hint: $$\int_X |f(x)|^r\,d\mu(x) = \int_0^\infty r\lambda ^{r-1} \mu(\{x\in X:|f(x)|> \lambda\})\,d\lambda.$$
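For completeness, a sketch of how the hint finishes (my addition, not part of the original hint): bounding $\lambda^p+\lambda^q$ from below by $\lambda^p$ on $(0,1]$ and by $\lambda^q$ on $[1,\infty)$ gives $$\int_0^\infty r\lambda ^{r-1}\,\frac{C}{\lambda^p+\lambda^q}\,d\lambda \le Cr\int_0^1 \lambda^{r-1-p}\,d\lambda + Cr\int_1^\infty \lambda^{r-1-q}\,d\lambda = \frac{Cr}{r-p}+\frac{Cr}{q-r} < \infty,$$ where both integrals converge precisely because $p<r<q$.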
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
number of ways of choosing $r$ points from $n$ points arranged in a circle such that no two consecutive points are taken. (I have seen some questions on SE related to this, but I am trying to solve it using my own method.) Let the points be taken in a straight line $\{A_1,A_2,...,A_n\}$. I have to choose $r$ points such that no two points are consecutive. Let the chosen points be represented by $\{x_1,x_2,...,x_r\}$. The number of ways of choosing such points is the same as the number of ways of placing these $r$ points among $n-r$ objects such that none of the points $\{x_1,x_2,...,x_r\}$ come together. Since I am only talking about "choosing" points, I will assume all the points to be identical. This can be calculated by assuming gaps between the $n-r$ initial objects. There will be $n-r$ gaps, since the leftmost gap and the rightmost gap are the same for a circular arrangement. First I will place $x_1$. The number of ways of placing the remaining $r-1$ points in the remaining $n-r-1$ gaps is $$^{n-r-1}C_{r-1}$$ (the points are assumed to be identical). Similarly, if I start with $x_2$, the number of ways will be $$^{n-r-1}C_{r-1}$$ So the total will be $$n\cdot{}^{n-r-1}C_{r-1}$$ But the answer given is $$\frac{n\cdot{}^{n-r-1}C_{r-1}}{r}$$
Consider $n$ things in a circle, namely $P_1,P_2 \dots P_n$. Without loss of generality let us assume that $P_1,P_2 \dots P_n$ are arranged in a circle in a clockwise manner. Now let us start by counting the number of favourable cases that always involve $P_1$. Clearly $P_2$ and $P_n$ cannot be selected now, as they are adjacent to $P_1$. Therefore we can open the circle at $P_2$ and $P_n$, thus forming a linear arrangement with its ends at $P_3$ and $P_{n-1}$. Now we can simply choose $r-1$ things out of these $n-3$ things so that no two are adjacent. By using the formula we get $C^{n-r-1}_{r-1}$. Now this was the case in which $P_1$ was definitely included. Similarly there will be cases where $P_2$ is definitely included, and so on, up to the case where $P_n$ is definitely included. Thus there are $n$ such cases, so we multiply the answer we got by $n$, giving $n \cdot C^{n-r-1}_{r-1}$. But careful!!! We have counted in excess. I will tell you how. Let's consider a special case where $n=9$ and $r=3$. Suppose I choose $P_1,P_3,P_5$; this appears among the cases where $P_1$ is definitely included. But it will also appear in the case where $P_3$ is definitely included, and again in the case where $P_5$ is definitely included. So, as the pattern shows, each such selection is counted $3$ times. In general, a selection of $r$ things is counted $r$ times, so to get the correct answer we divide by $r$. Hence our final answer will be $\frac{n}{r} C^{n-r-1}_{r-1}$. Also please note that the formula that I have used for the linear part in between has already been proved by you in the question itself: just plug in $n-3$ for $n$ and $r-1$ for $r$ and you have your answer. Good luck. Best wishes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Primitive of $\sin(4x)$ If $\sin(4x)=2\sin(2x)\cos(2x)$ then: $$\int \sin(4x)=\int 2\sin(2x)\cos(2x)$$ $$\frac{-\cos(4x)}{4}=\frac{-\cos^2(2x)}{2}$$ But considering now $x=2$: $$\frac{-\cos(4\cdot 2)}{4}\neq\frac{-\cos^2(2\cdot 2)}{2}$$ $$0.036\neq -0.214$$ What's the error that is behind this process ? Thanks in advance.
$$\frac { -\cos (4x) }{ 4 } +C_{ 1 }=\frac { -\cos ^{ 2 } (2x) }{ 2 } +C_{ 2 }$$ The two antiderivatives differ only by a constant: since $\cos(4x)=2\cos^2(2x)-1$, we have $\frac{-\cos(4x)}{4}=\frac{-\cos^2(2x)}{2}+\frac14$, so $C_2-C_1=\frac14$. An indefinite integral is only determined up to an additive constant, which is the error behind the process.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Projective space and its basis I am trying to solve an exercise from the book "Permutation Groups" by J. Dixon and B. Mortimer. Earlier, I asked a similar question about bases of affine geometry, "Affine geometry and its basis". Similar to the answer of that question, I think that in this case all of the bases of $PG_{d-1}(F)$ are of the form $\{[v],[v+v_1],[v+v_2],\ldots,[v+v_d]\}$, but I cannot prove why the other forms cannot be bases. Also, I have a problem with showing that $PGL_d(F)$ is transitive on the bases of the above form. Another candidate form for all such bases is $\{[v_1+v_2+\cdots+v_d],[v_1],[v_2],\ldots,[v_d]\}$, but I don't know whether it is true or not. I appreciate your help.
I don't think either of the two forms of a projective basis that you suggest is quite right. By definition, such a basis has the form $\{[v_1],\ldots,[v_{d+1}] \}$, subject to the condition that any $d$-element subset of $\{v_1,\ldots,v_{d+1} \}$ is linearly independent (and hence is a basis of $F^d$). This condition is equivalent to $\{v_1,\ldots,v_d \}$ being linearly independent, and $v_{d+1} = \lambda_1v_1 + \cdots + \lambda_d v_d$ for some $0 \ne \lambda_i \in F$. Given two such projective bases $\{[v_1],\ldots,[v_{d+1}] \}$ and $\{[v'_1],\ldots,[v'_{d+1}] \}$, with $v_{d+1} = \lambda_1v_1 + \cdots + \lambda_d v_d$ and $v'_{d+1} = \lambda'_1v'_1 + \cdots + \lambda'_d v'_d$, you can check that the elements of ${\rm GL}_d(F)$ that map the first to the second are of the form $v_i \mapsto \alpha \lambda'_iv'_i/\lambda_i$ for some fixed nonzero element $\alpha$ of $F$. So there is a unique element of ${\rm PGL}_d(F)$ which does this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving this limit: $\lim\limits_{x\to\infty}\frac{x^{x+1/x}}{(x+1/x)^x}$ $$\lim\limits_{x\to\infty}\frac{x^{x+1/x}}{(x+1/x)^x}$$ I have tried a lot of things, like: * *transforming those terms to: $$\frac{e^{(x+1/x)\ln(x)}}{e^{x\ln(x+1/x)}}$$ *then I tried L'Hôpital's rule but it was just getting more complex *I also made them one, like: $$e^{(x+1/x)\ln(x)-x\ln(x+1/x)}$$ *At last, I tried to "squeeze" them but I couldn't find the perfect function for that. I hope that this is not a duplicate because I searched but I couldn't find a similar post.
Rewrite the limit as $$L = \lim_{x \to +\infty} \frac{x^{\frac{x^2 + 1}{x}}}{\left(\frac{x^2 + 1}{x}\right)^x} = \lim_{x \to +\infty} \frac{x^{\frac{2x^2 + 1}{x}}}{(x^2 + 1)^x} = \lim_{x \to +\infty} e^\ell,$$ where $$\require{cancel}\ell = \frac{2x^2 + 1}x\ln x - x\ln(x^2 + 1) = \cancel{2x\ln x} + \frac1x\ln x - \cancel{2x\ln x} - x\ln\left(1 + \frac1{x^2}\right).$$ Since $\ell \to 0$ and $\exp(x)$ is continuous, we conclude that the limit is $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1593972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
For nonzeros $A,B,C\in M_n(\mathbb{R})$, $ABC=0$. Show $\operatorname{rank}(A)+\operatorname{rank}(B)+\operatorname{rank}(C)\le 2n$ Let $A,B,C\in M_n(\mathbb{R})$ be nonzero matrices such that $ABC=0$. How can we prove that $\operatorname{rank}(A)+\operatorname{rank}(B)+\operatorname{rank}(C)\le 2 n$? I can prove this for two matrices, but in this case I can't!
$\newcommand{\rk}{\operatorname{rk}}$ $\newcommand{\im}{\operatorname{im}}$You know that $\im(BC)\subseteq \ker A$. Hence, $n= \rk A+\dim\ker A\ge \rk A+\rk(BC)$. Now, if you consider the restriction of multiplication by $B$ to the subspace $\im C$ and use rank-nullity theorem, you get $\rk(BC)=\rk C-\dim\ker(B|_{\im C})$. Hence \begin{align} n\ge \rk A+\rk(BC)&=\rk A+\rk C-\dim(\ker B\cap \im C)\ge \rk A+\rk C-\dim\ker B=\\&=\rk A+\rk C+\rk B-n \end{align} Whence the thesis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Two step random experiment: Density of combined uniform and normal distribution Imagine a random experiment where first some number $u$ is drawn uniformly on $[c-\varepsilon,c+\varepsilon]$ for $c>0$ and $0<\varepsilon<c$. Next, a $N(u,\sigma^2)$-distributed random variable $Y$ is generated (meaning that we use the $u$ of the first step as the mean and assume some known variance $\sigma^2>0$ for the normal distribution). Now I am interested in the density of $Y$. How can one derive this density? Should I compute the joint density of $u$ and $Y$? Thanks a lot!
Density of $Y$: \begin{align} f_Y(y) &= \int_{u=c-\epsilon}^{c+\epsilon} f_{Y|U}(y\mid u)f_U(u)\;du \\ &= \int_{u=c-\epsilon}^{c+\epsilon} \dfrac{1}{2\epsilon}\dfrac{1}{\sqrt{2\pi\sigma^2}}\;e^{-\dfrac{1}{2}\left(\dfrac{y-u}{\sigma}\right)^2}\;du. \\ \end{align} Let $z=(u-y)/\sigma\;$ so that $dz=du/\sigma$. Then, \begin{align} f_Y(y) &= \dfrac{1}{2\epsilon}\int_{z=(c-\epsilon-y)/\sigma}^{(c+\epsilon-y)/\sigma} \dfrac{1}{\sqrt{2\pi}}\;e^{-\frac{1}{2}z^2}\;dz \\ & \\ &= \dfrac{1}{2\epsilon}\left[ \Phi\left(\dfrac{c+\epsilon-y}{\sigma}\right) - \Phi\left(\dfrac{c-\epsilon-y}{\sigma}\right)\right]. \end{align}
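A quick numerical check of the closed form against direct integration over $u$ (my addition; the parameter values are arbitrary choices and SciPy is assumed to be available):

```python
from scipy.stats import norm
from scipy.integrate import quad

c, eps, sigma = 1.0, 0.5, 0.3

def f_closed(y):  # the closed form derived above
    return (norm.cdf((c + eps - y) / sigma) - norm.cdf((c - eps - y) / sigma)) / (2 * eps)

def f_direct(y):  # integrate the conditional normal density against the uniform density
    return quad(lambda u: norm.pdf(y, loc=u, scale=sigma) / (2 * eps), c - eps, c + eps)[0]

for y in [0.2, 0.8, 1.0, 1.5]:
    print(y, f_closed(y), f_direct(y))  # the two columns agree
```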
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
There is a prime between $n$ and $n^2$, without Bertrand Consider the following statement: For any integer $n>1$ there is a prime number strictly between $n$ and $n^2$. This problem was given as an (extra) qualification problem for certain workshops (which I unfortunately couldn't attend). There was a requirement to not use Bertrand's postulate (with which the problem is nearly trivial), and I was told that there does exist a moderately short proof of this statement not using Bertrand. This is my question: How can one prove the above statement without Bertrand postulate or any strong theorems? Although I can only accept one answer, I would love to see any argument you can come up with. I would also want to exclude arguments using a proof of Bertrand's postulate, unless it can be significantly simplified to prove weaker statement. Thank you in advance.
I've come up with a simple proof based on the prime-counting function $\pi(x)$, which I'm pretty sure doesn't depend on Bertrand's Postulate. First, I will prove a lemma that for every prime $n$, there is another prime $p$ with $n < p < n^2$. I will use this result later to show the general result (i.e. for composite $n$ as well). Based on some inequalities of $\pi(x)$ and the value of a related constant we have $$\frac{n}{\log n} < \pi(n) < C\frac{n}{\log n}\text{, where }C=\frac{30\log 113}{113} \approx 1.255$$ for all $n \geq 17$. We want to prove that for all prime $n$ we have $$\pi(n^2) > \pi(n)$$ or $$\pi(n^2) - \pi(n) > 0,$$ so we look at the upper bound of $\pi(n)$ and the lower bound of $\pi(n^2)$ to find a minimum difference. We have $$\begin{align} \pi(n^2) - \pi(n) &> \frac{n^2}{\log n^2} - C\frac{n}{\log n} \\ &= \left(\frac{n}{2} - C\right)\frac{n}{\log n} \end{align} $$ so the conclusion is satisfied whenever $$\left(\frac{n}{2} - C\right)\frac{n}{\log n} > 0.$$ Since for all prime $n$ we have $$\frac{n}{\log n} > 0,$$ we find that $\pi(n^2) - \pi(n) > 0$ whenever $\frac{n}{2} - C > 0$, which gives us $$n > 2C \approx 2.51,$$ i.e. $n \geq 3$, which certainly holds for every prime $n \geq 17$ (the range where the bounds on $\pi$ apply). For lesser primes we can prove things manually: $$2 < 3 < 2^2$$ $$3 < 5 < 3^2$$ $$5 < 7 < 5^2$$ $$7 < 11 < 7^2$$ $$11 < 13 < 11^2$$ $$13 < 17 < 13^2$$ This concludes the lemma. Then, we must prove that for every composite $n$ there is a prime $p$ with $n < p < n^2$. Take the largest prime less than $n$ and call that $p_m$. Then by applying the lemma we find that there is some prime $p$ such that $$p_m < p < p_m^2.$$ Because $p > p_m$, $p_m$ is the largest prime less than $n$, and $n$ is composite (so there is no prime in the interval $(p_m, n]$), we find that $$p > n.$$ Also, we have $$p < p_m^2 < n^2,$$ so we reach the second conclusion: if $n$ is composite then there's a prime $p$ such that $$n < p < n^2.$$ Now we've proved the original statement for both prime and composite $n$, and the proof is complete.
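An empirical spot-check of the statement for small $n$ (my addition; it assumes SymPy is installed):

```python
from sympy import primepi

for n in range(2, 200):
    # n^2 is composite for n >= 2, so primepi(n**2) > primepi(n) means
    # there is a prime strictly between n and n^2.
    assert primepi(n**2) > primepi(n)
print("verified for 2 <= n < 200")
```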
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 2 }
Prove that if $a,b,$ and $c$ are positive real numbers, then $\frac{a^3}{b}+\frac{b^3}{c}+\frac{c^3}{a} \geq ab + bc + ca$. Prove that if $a,b,$ and $c$ are positive real numbers, then $\dfrac{a^3}{b}+\dfrac{b^3}{c}+\dfrac{c^3}{a} \geq ab + bc + ca$. I tried AM-GM and it doesn't look like AM-GM or Cauchy-Schwarz work here. The $ab+bc+ca$ reminds of a cyclic expression, so that may help by factoring the inequality and getting a true statement.
It is actually very simple. Use nothing but AM-GM. $$\frac{a^3}{b} + ab \geq 2a^2$$ $$\frac{b^3}{c} + bc \geq 2b^2$$ $$\frac{c^3}{a} + ac \geq 2c^2$$ $$LHS + (ab+bc+ac) \geq 2(a^2+b^2+c^2) \geq 2(ab + bc +ac)$$ We are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 1 }
Solve integral $\int{\frac{x^2 + 4}{x^2 + 6x +10}dx}$ Please help me with this integral: $$\int{\frac{x^2 + 4}{x^2 + 6x +10}}\,dx .$$ I know I must solve it by substitution, but I don't know how exactly.
Using long division, $$ \frac{x^2+4}{x^2+6x+10} = 1 -6\,\frac{x+1}{x^2+6x+10}= 1 -3\,\frac{2x+6}{x^2+6x+10}+\frac{12}{x^2+6x+10}.$$ Integrating $$1 -3\,\frac{2x+6}{x^2+6x+10}+\frac{12}{x^2+6x+10}$$ term by term is straightforward: the middle term has the form $-3\,u'/u$ with $u=x^2+6x+10$, and for the last term note that $x^2+6x+10= (x+3)^2 +1$.
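For completeness (my addition), carrying out the integration: $$\int\frac{x^2+4}{x^2+6x+10}\,dx = x - 3\ln\left(x^2+6x+10\right) + 12\arctan(x+3) + C,$$ since $-3\int\frac{2x+6}{x^2+6x+10}\,dx=-3\ln(x^2+6x+10)$ and $\int\frac{12\,dx}{(x+3)^2+1}=12\arctan(x+3)$.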
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
Infinite primes proof There is a proof of the infinitude of primes that I don't understand. Right in the middle of the proof: "since every such $m$ can be written in a unique way as a product of the form $\prod_{p\leqslant x}p^{k_p}$, we see that the last sum is equal to $$\prod_{\substack{p\leqslant x\\ p\in \mathbb{P}}}\left(\sum_{k\geqslant 0}\frac{1}{p^k}\right).$$" I don't see that. Can anyone explain this step to me?
It's not that every such $m$ can be written as $\prod_{\substack{p\in\mathbb{P},\\ p\leq x}}\sum_{k\geq0}\frac{1}{p^k}$; it's that the sum of $\frac1m$ over all such $m$ is the same as the product of the geometric series. What your proof is saying is $$\prod_{\substack{p\in\mathbb{P},\\p\leq x}}\sum_{k\geq0}\frac{1}{p^k}=\sum_{\substack{m \text{ with}\\ \text{prime}\\ \text{factors}\\\leq x}}\frac{1}{m}$$ (Expanding the product distributes one term $\frac1{p^{k_p}}$ from each factor, and by unique factorization each $m$ with all prime factors $\leq x$ arises exactly once.) For example's sake, say $x=6$. Then we're summing over $m$ with prime factors among $2$, $3$, $5$. Listing these out, we have something like $$\prod_{p\in\{2,3,5\}}\sum_{k=0}^\infty \frac{1}{p^k}=1+\frac{1}{2}+\frac{1}{3}+\frac{1}{2^2}+\frac{1}{5}+\frac{1}{2\cdot 3}+\frac{1}{2^3}+\frac{1}{3^2}+\frac{1}{2\cdot 5}+\frac{1}{2^2\cdot3}+\dots\text{ }.$$
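A numerical illustration for $x=6$ (my addition, plain Python): the truncated Euler product over $\{2,3,5\}$ matches the sum of $1/m$ over $5$-smooth $m$.

```python
primes = [2, 3, 5]

def smooth(m):  # True iff all prime factors of m lie in {2, 3, 5}
    for p in primes:
        while m % p == 0:
            m //= p
    return m == 1

product = 1.0
for p in primes:
    product *= sum(p**-k for k in range(100))  # geometric series, essentially 1/(1 - 1/p)

total = sum(1.0 / m for m in range(1, 10**6) if smooth(m))
print(product, total)  # product = 3.75; the sum approaches it as the cutoff grows
```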
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Sum of partial derivatives Suppose that $$\mu_i(x)=x_i \int_0^1 t^{n-1} \rho(tx) dt$$ where $\rho$ is a function on $\mathbb R^n$ and $tx=(tx_1,\dots,tx_n)\in \mathbb R^n$. Show that $$\sum_{i=1}^n \frac{\partial\mu_i}{\partial x_i}=\rho .$$ This problem looks simple, but I am having difficulty in showing the result. I guess that the first step is to find $\frac{\partial\mu_i}{\partial x_i}$ using the product rule.
Writing $\partial_i\rho$ for $\frac{\partial\rho}{\partial x_i}$, and noting that $\frac{\partial}{\partial x_i}\,\rho(tx) = t\,(\partial_i\rho)(tx)$ while $\frac{d}{dt}\,\rho(tx)=\sum_{i=1}^n x_i\,(\partial_i\rho)(tx)$, we have $$\dfrac{\partial\mu_i}{\partial x_i}=\int_0^1 t^{n-1} \rho(tx)\, dt + x_i\int_0^1 t^{n} (\partial_i\rho)(tx)\, dt,$$ so $$\sum_{i=1}^n\dfrac{\partial\mu_i}{\partial x_i} = n\int_0^1 t^{n-1} \rho(tx)\, dt +\int_0^1 t^{n} \dfrac{d}{dt}\rho(tx)\, dt = \int_0^1 \rho(tx)\, d(t^n) + \int_0^1 t^{n}\, d\rho(tx) = \Big[t^{n} \rho(tx)\Big]_0^1 = \rho(x),$$ where the last equality is integration by parts.
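A symbolic spot-check for $n=2$ with a sample density (my addition; the choice $\rho=x^2+y$ is arbitrary and SymPy is assumed to be available):

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
rho = x**2 + y
scaled = rho.subs({x: t*x, y: t*y}, simultaneous=True)  # rho(tx)
mu_x = x * sp.integrate(t * scaled, (t, 0, 1))  # t^(n-1) = t for n = 2
mu_y = y * sp.integrate(t * scaled, (t, 0, 1))
print(sp.simplify(sp.diff(mu_x, x) + sp.diff(mu_y, y) - rho))  # prints 0
```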
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
SPNE of a finitely repeated game So I have this little game between two players, played $T$ times without any discounting factor, with the following payoff table: now if $T$ is 2, is there an SPNE where (B,L) is played in the first round? I'm thinking no, since (T,R) is the unique NE and no credible threat can be made when there are only 2 rounds. Is this reasoning sound? What happens if $2<T<\infty?$
Because the stage game has a unique NE, any finitely repeated version of the stage game has a unique SPNE. In this SPNE, the stage NE is played after every history. This can be shown easily by backward induction, as follows. Suppose the horizon is $T$, and use $t=0,1,\cdots, T-1$ to denote the periods. In period $T-1$, given any history $h^{T-1}$, SPNE requires the stage NE. In period $T-2$, because the same NE is played after every continuation history $h^{T-1}$, there are no intertemporal incentives; hence after any history $h^{T-2}$ the unique stage NE must be played. The same logic propagates backwards. This works because the stage game has a unique NE. If the stage game had multiple NE, then nontrivial intertemporal incentives could be provided.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing $\mathbb{Z}\oplus\mathbb{Z}\oplus\mathbb{Z}/(1,-1,0)\mathbb{Z}+(0,1,1)\mathbb{Z}+(1,0,-1)\mathbb{Z}\cong \mathbb{Z}$ I have to prove that if $V_K = \{v_0, v_1, v_2\}$ and $K = \{\{v_0\}, \{v_1\}, \{v_2\}, \{v_0, v_1\}, \{v_0, v_2\}, \{v_1, v_2\}\}$ then $H_q(K, \mathbb{Z})\cong \mathbb{Z}$ for $q = 0, 1$. I have already proved it for $q=1$. For $H_0(K, \mathbb{Z})=\ker\delta_0/\operatorname{Im}\delta_1$, $\delta_0 : C_0(K)\to 0$, so it is obvious that $\ker\delta_0 = C_0(K)$. Also, $$\operatorname{Im}\delta_1 = \{(a + c)v_0 + (-a + b)v_1 + (b - c)v_2 \mid a, b, c \in \mathbb{Z}\}$$ according to the definition of $\delta_1$. Finally, this set can be written as follows: $$\operatorname{Im}\delta_1 = (1, -1, 0)\mathbb{Z} + (0, 1, 1)\mathbb{Z} + (1, 0, -1)\mathbb{Z}$$ so the only thing I have to prove is that $$\mathbb{Z}\oplus\mathbb{Z}\oplus\mathbb{Z}/(1,-1,0)\mathbb{Z}+(0,1,1)\mathbb{Z}+(1,0,-1)\mathbb{Z}\cong \mathbb{Z}.$$ All I can say is that the vectors $e_i$ don't belong to this subgroup.
The quotient is not equal to $\Bbb Z$: it is equal to $\Bbb Z/2$. In particular, to correct the comment by p Groups: if $f$ is a homomorphism out of $\Bbb Z^3$ killing the three generators, with $a=f(e_1)$, $b=f(e_2)$, $c=f(e_3)$, then $f(1, -1, 0)=0$ implies $a-b=0$, so $a=b$. Similarly, $f(1,0,-1)=0$ gives $a-c=0$, so $a=c$. Finally, $f(0,1,1)=0$ gives $b+c=0$, so $2a=0$. So the quotient is generated by the class of $(1,0,0)$, which has order $2$; it is $\Bbb Z/2$, for example with representatives $\{(0, 0, 0), (1, 0, 0)\}$. In particular, $(0, 0, 1) = (1, 0, 0) - (1, 0, -1)$, and $(0, 1, 0) = (1, 0, 0) - (1, -1, 0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
To find volume of solid of revolution Find the volume of the solid generated by revolving about the horizontal line $y=2$ the region bounded by $y^{2}\leq2x$, $x\leq 8$ and $y\geq 2$. I have figured out the region to be revolved, but I do not know how to apply the disk method or washer method here. Thank you so much.
The distance from the axis $y=2$ to the curve $y=\sqrt{2x}$ is $f(x) = \sqrt{2x} - 2$. The region starts where $\sqrt{2x} = 2 \Leftrightarrow x = 2$ and ends at $x = 8$, so by the disk method the volume is $$V = \pi\int_{2}^{8} \left(\sqrt{2x} - 2\right)^2\,dx.$$
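Evaluating, for completeness (my addition): $$V = \pi\int_2^8\left(2x-4\sqrt{2x}+4\right)dx = \pi\left[x^2-\frac{8\sqrt2}{3}\,x^{3/2}+4x\right]_2^8 = \frac{28\pi}{3}.$$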
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
find minimal polynomial of matrix product $AB$ where $AB=BA$ Let $x^2-1$ and $x^2+1$ be minimal polynomials of $A,B\in M_n(\mathbb{R})$, respectively. If $AB=BA$, find the minimal polynomial of $AB$.
Notice that $A^2 = I$ and $B^2 = -I$. Since $AB=BA$, we get $(AB)^2 = A^2 B^2 = -I$, so the minimal polynomial of $AB$ divides $x^2+1$. Because $x^2+1$ is monic and irreducible over $\mathbb{R}$, this already is the minimal polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How much group theory is required before undertaking an introductory course on Galois Theory? How much knowledge of group theory is needed in order to begin Galois Theory? Which topics are most relevant?
Most Galois theory books are self-contained, but you need to familiarize yourself with concepts such as solvable groups (these relate to equations being solvable by radicals) and simple groups. Also, Sylow theory helps a lot. In addition, knowledge of rings and fields is necessary. By the way, Ian Stewart's book Galois Theory makes a very nice read.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1594968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $2$ is subtracted from each root, the results are reciprocals of the original roots. Find the value of $b^2+c^2+bc.$ The equation $x^2+bx+c=0$ has distinct roots. If $2$ is subtracted from each root, the results are reciprocals of the original roots. Find the value of $b^2+c^2+bc.$ Let $\alpha$ and $\beta$ be the roots of the equation $x^2+bx+c=0$. According to the question, $\alpha-2=\frac{1}{\alpha}$ and $\beta-2=\frac{1}{\beta}$ $\alpha^2-2\alpha-1=0$ and $\beta^2-2\beta-1=0$ Adding the two equations, $\alpha^2+\beta^2-2\alpha-2\beta-2=0$ $(\alpha+\beta)^2-2\alpha\beta-2\alpha-2\beta-2=0$ $(-b)^2-2c-2(-b)-2=0$ $b^2+2b-2c-2=0$ But I am not able to find $b^2+c^2+bc.$ What should I do now? I am stuck here.
Both roots satisfy the same quadratic $x^2-2x-1=0$, whose roots are $1+\sqrt{2}$ and $1-\sqrt{2}$. Since the roots of $x^2+bx+c$ are distinct and coincide with these, we must have $x^2+bx+c=x^2-2x-1$. So the sum of the roots is $-b=2$, giving $b=-2$, and the product is $c=-1$. Hence $b^2+c^2+bc=4+1+2=7$. Hope it's clear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I draw the skewed normal distribution curve If it's of real importance, I am trying to plot the data with gnuplot. I have the following statistics for some experimental data, obtained with Octave: * *mean: $\overline{\mu} = 0.6058$ *median: $\tilde{\nu} = 0.6364$ *std: $\sigma_x = 0.1674$ *variance: $\sigma^2 = 0.028$ *skewness $= 0.3131$ I am trying to plot them, but AFAIK the normal distribution does not account for skewness (left or right). I am using the typical Gaussian curve function: $f(x) = \frac{1}{\sigma_x \cdot \sqrt{2 \pi}} e^{- \frac{(x-\mu)^2}{2\sigma^2}}$ How can I account for the change in the curve shape, given my skewness? For this data set there is an obvious difference between the median and the mean, and I would like the plot to reflect that.
You can use the skew normal distribution with parameters $(ξ,ω,α)$, which can be estimated from the given data. If we set $δ=\dfrac{α}{\sqrt{1+α^2}}$, then the mean, variance and skewness of the skew normal distribution are given by (see the link) * *mean: $ξ+ωδ\sqrt{\dfrac2π}$ *variance: $ω^2\left(1-\dfrac{2δ^2}{π}\right)$ *skewness: $\dfrac{4-\pi}{2} \dfrac{\left(\delta\sqrt{\dfrac2\pi}\right)^3}{ \left(1-\dfrac{2\delta^2}{\pi}\right)^{3/2}}$ Substitute your known values for the mean, variance and skewness to find proper values for the parameters $(ξ,ω,α)$ of the distribution. Approximate values will do (you do not need to solve exactly), because you are working from a sample, and the fitted distribution does not need to match the sample exactly. So trial and error (with a computer) may help, since this is not an easy system to solve. Start from the formula for the skewness, which depends only on $δ$: solve $$0.3131=\dfrac{4-\pi}{2} \dfrac{\left(\delta\sqrt{\dfrac2\pi}\right)^3}{ \left(1-\dfrac{2\delta^2}{\pi}\right)^{3/2}}$$ to find $δ$ (approximately). From $δ$ you can find $α$ directly. Next, go to the variance and solve $$0.028=ω^2\left(1-\dfrac{2δ^2}{π}\right)$$ to find $ω$, using the value of $δ$ that you already have. Finally, use the formula for the mean (with $ω, δ$ known) to find $ξ$ in a similar way.
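A minimal numerical sketch of this fitting procedure (my addition; the variable names follow the notation above, and SciPy is assumed to be available):

```python
import numpy as np
from scipy.optimize import brentq

mean, var, skew = 0.6058, 0.028, 0.3131

def skew_of_delta(d):  # skewness of the skew normal as a function of delta
    t = d * np.sqrt(2 / np.pi)
    return (4 - np.pi) / 2 * t**3 / (1 - 2 * d**2 / np.pi)**1.5

delta = brentq(lambda d: skew_of_delta(d) - skew, 1e-6, 0.999)  # monotone on (0, 1)
alpha = delta / np.sqrt(1 - delta**2)
omega = np.sqrt(var / (1 - 2 * delta**2 / np.pi))
xi = mean - omega * delta * np.sqrt(2 / np.pi)
print(xi, omega, alpha)  # usable as scipy.stats.skewnorm(alpha, loc=xi, scale=omega)
```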
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluate the definite integral $\frac{105}{19}\int^{\pi/2}_0 \frac{\sin 8x}{\sin x}dx$ Problem : Determine the value of $$\frac{105}{19}\int^{\pi/2}_0 \frac{\sin 8x}{\sin x}\ \text dx$$ My approach: using $\int^a_0f(x)\ \text dx = \int^a_0 f(a-x)\ \text dx$, $$ \begin{align} \frac{105}{19}\int^{\pi/2}_0 \frac{\sin 8x}{\sin x}\ \text dx &= \frac{105}{19}\int^{\pi/2}_0 \frac{\sin (4\pi -8x)}{\cos x}\ \text dx\\ &= \frac{105}{19}\int^{\pi/2}_0 -\frac{\sin 8x}{\cos x}\ \text dx \end{align} $$ But it seems it won't work please help thanks
Method $1$ $1.$ Use the identity $$2\sin x \sum_{k=1}^{n}\cos(2k-1)x = \sin2nx$$ which can easily be verified by using $$2\sin x \cos(2k-1)x = \sin 2k x - \sin 2(k-1) x$$ and the telescoping property of the sums. $2.$ Use the above formula to get $$\frac{\sin 2nx}{\sin x} = 2 \sum_{k=1}^{n}\cos(2k-1)x$$ $3.$ Integrate to obtain $$\begin{align} F(x) &=\int \frac{\sin 2nx}{\sin x} \, dx \\ &= 2 \sum_{k=1}^{n} \int \cos(2k-1)x \, dx \\ &= 2 \sum_{k=1}^{n} \frac{1}{2k-1} \sin(2k-1)x + C \end{align}$$ $4.$ Evaluating the definite integral results in harmonic partial sums $$\begin{align} I &= \int_{0}^{\frac{\pi}{2}} \frac{\sin 2nx}{\sin x} \\ &= F(\frac{\pi}{2})-F(0)\\ &= 2 \sum_{k=1}^{n} \frac{\sin (2k-1) \frac{\pi}{2}}{2k-1} - 0\\ &= \boxed{2 \sum_{k=1}^{n} \frac{(-1)^{k+1}}{2k-1}} \end{align}$$ In your case, you may choose $n=4$.
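For the original problem (my addition): taking $n=4$, $$I=2\left(1-\frac13+\frac15-\frac17\right)=\frac{152}{105},\qquad\text{so}\qquad \frac{105}{19}\,I=\frac{152}{19}=8.$$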
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
If $f(x)$ is a continuous function such that $f(x) +f(\frac{1}{2}+x) =1; \forall x\in [0, \frac{1}{2}]$ then $4\int^1_0 f(x) dx =$ . Problem : If $f(x)$ is a continuous function such that $f(x) +f(\frac{1}{2}+x) =1; \forall x\in [0, \frac{1}{2}]$ then $4\int^1_0 f(x) dx = ?$ My approach : $$f(x) +f(\frac{1}{2}+x) =1 \tag{1}\label{1}$$ Let us put $x = \frac{1}{2}+x$ we get $$f(\frac{1}{2}+x) +f(1+x) =1 \tag{2}\label{2}$$ $\eqref{1}-\eqref{2}\Rightarrow f(x) -f(1+x) = 0$ $\Rightarrow f(x) = f(1+x)$ Is it the correct method to find $f(x)$? Please guide, thanks.
\begin{align*} &4\int_0^1 f \\ =& 4\int_{0}^{0.5} f(t) dt + 4 \int_{0.5}^1 f(t) dt\\ =& 4\int_{0}^{0.5} f(t) dt + 4 \int_0^{0.5} f(x+\frac12) dx \quad\text{($x = t - \frac12$)}\\ =& 4\int_{0}^{0.5} (f(t) + f(t+\frac12)) dt\\ =& 4\int_{0}^{0.5} 1 dt\\ =& 4\cdot 0.5\\ =& 2 \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Maximizing the sum of the products of endpoints of edges in a graph Let $G$ be a graph with vertex set $V=\{v_1,v_2\dots v_n\}$ and edge set $E$. Let $f:V\rightarrow [0,\infty)$ be a real-valued function such that $\sum\limits_{i=1}^n f(v_i)=A$. What is the maximum possible value for $\sum\limits_{uv\in E}f(u)f(v)$? I remember seeing a sloppy proof a couple years ago that it was $\frac{A^2(k-1)}{2k}$ where $k$ is the clique number of $G$. It is straightforward that this is a lower bound for the maximum, however I am having trouble proving it is the actual maximum. We can see it is a lower bound by taking a maximal clique and letting $f(v)=\frac{1}{k}$ if the vertex is in the clique and $0$ elsewhere.
I managed to solve it with some help from fractal in AOPS. For each function $f:V\rightarrow [0,\infty)$ with the desired properties, let $pos(f)$ be $\{v\in V\mid f(v)>0\}$. Now consider the set of all such functions $f$ for which $\sum\limits_{uv\in E}f(u)f(v)$ attains the maximum, and take $f$ in this set so that $|pos(f)|$ is minimal. Suppose $pos(f)$ does not induce a clique; then there are vertices $a,b\in pos(f)$ which are not connected by an edge. We can now write $\sum\limits_{uv\in E}f(u)f(v)=c_1+c_2f(a)+c_3f(b)$, where $c_1$ is the sum of the products of the endpoints of all the edges that don't include $a$ or $b$, $c_2$ is the sum of $f(x)$ over all neighbours $x$ of $a$, and $c_3$ is the sum of $f(x)$ over all neighbours $x$ of $b$. So basically: the edges not containing $a$ or $b$, plus the edges containing $a$, plus the edges containing $b$. If we suppose (without loss of generality) $c_2\geq c_3$, then the function $f'$ defined by $f'(x)=f(x)$ if $x\neq a,b$, $f'(a)=f(a)+f(b)$ and $f'(b)=0$ satisfies the following three conditions: $\sum\limits_{u\in V}f'(u) =A$; $\sum\limits_{uv\in E}f(u)f(v)\leq \sum\limits_{uv\in E}f'(u)f'(v)$; $|pos(f')|<|pos(f)|$. This contradicts the minimality of $|pos(f)|$. So we can find a function that attains the maximum and such that $pos(f)$ induces a clique. So now let $f$ be a function such that $pos(f)$ is a clique with vertex set $\{w_1,w_2\dots w_k\}$, and write $w_i$ also for the value $f(w_i)$. Then we want to maximize $$\sum\limits_{1\leq i<j\leq k}w_i w_j=\frac{(w_1+w_2+\dots +w_k)^2-(w_1^2+w_2^2 + \dots + w_k^2)}{2}=\frac{A^2-(w_1^2+w_2^2 + \dots + w_k^2)}{2}.$$ So we want to minimize the sum of squares; by Jensen's inequality, or alternatively by AM-QM, this occurs when $w_i=\frac{A}{k}$ for $1\leq i\leq k$. In this case the desired sum becomes $\frac{A^2-k(A/k)^2}{2}=\frac{A^2(k-1)}{2k}$. Clearly this becomes larger as $k$ becomes larger, so the maximum is reached when $k$ is the clique number, as desired.
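A quick random-search sanity check on the $4$-cycle (my addition, plain Python): here the clique number is $k=2$ and $A=1$, so the claimed maximum is $A^2(k-1)/(2k)=1/4$.

```python
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
best = 0.0
for _ in range(200_000):
    w = [random.random() for _ in range(4)]
    s = sum(w)
    w = [v / s for v in w]  # normalize so the weights sum to A = 1
    best = max(best, sum(w[u] * w[v] for u, v in edges))
print(best)  # approaches 0.25 from below
```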
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
In general, when does it hold that $f(\sup(X)) = \sup f(X)$? Let $f: [-\infty, \infty] \to [-\infty, \infty]$. What conditions should we impose on $f$ so that the following statement becomes true? $$\forall \ X \subset [-\infty, \infty], \sup f(X) = f(\sup X)$$ If that doesn't make much sense, then for some function with certain conditions, what kind of sets $X$ satisfy $f(\sup X) = \sup f(X)$? Some background to the question: While doing a certain proof, I was about to swap $\sqrt{\cdot}$ and $\sup$, but I soon realized that such a step probably needs some scrutiny. I still do not know whether such a step is valid, and I would like to know what sort of functions satisfy the requirement. I supposed that $f$ is an extended real valued function for the possibility of $\sup = \infty$.
I believe the sufficient and necessary condition on $f$ is that it is nondecreasing, left-continuous (i.e. for all $x_0$, $\lim\limits_{x\rightarrow x_0^-}f(x)=f(x_0)$) and $f(-\infty)=-\infty$. The last condition is necessary for the case $X=\varnothing$. The first condition is necessary: for $X=\{a,b\},a\leq b$ we need $\max\{f(a),f(b)\}=\sup f(X)=f(\sup X)=f(b)$, i.e. $f(a)\leq f(b)$. The second condition is necessary when we take $X=(-\infty,x_0)$: since we know $f$ is nondecreasing, $\sup f(X)=\lim\limits_{x\rightarrow x_0^-}f(x)$, and it must equal $f(\sup X)=f(x_0)$. As for sufficiency, assume the above three conditions. For $X=\varnothing$ or $X=\{-\infty\}$ this is clear, so assume $\sup X:=x_0>-\infty$. It's easy to see that left-continuity implies $\sup f((-\infty,x_0))=f(x_0)$ (thanks to monotonicity of the function), so we only need to prove $\sup f((-\infty,x_0))=\sup f(X)$. If $x_0\in X$, this is obvious from monotonicity, so assume $x_0\notin X$, whence $X\subseteq(-\infty,x_0)$. Then clearly $\sup f((-\infty,x_0))\geq\sup f(X)$, since the former $\sup$ is taken over a (possibly) larger set. Now take any $a\in (-\infty,x_0)$. There exists a $b\in X$ such that $a<b<x_0$ (since $a$ is smaller than $\sup X$). Now $f(b)\geq f(a)$. It follows that every element of $f((-\infty,x_0))$ is at most some element of $f(X)$. Hence $\sup f((-\infty,x_0))\leq\sup f(X)$, so $\sup f((-\infty,x_0))=\sup f(X)$, as we wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
How to solve the following quadratic word problem? The total cost of carpeting a rectangular room is given by the expression $$6x^2 + 18x$$ This is a multiple-choice question, and the given options were set up like this: The length of the room is ______ feet, its width is ____ feet, and the cost of carpeting is _____ per square foot. (I'm purposely leaving them blank because I want to know how to solve it.) My question is: how would I find the blank parts? If I factor the expression, I get $$ 6x(x + 3)$$ but this alone doesn't tell me which factor corresponds to which blank. What should I do?
Steve X is correct that there are infinitely many solutions. But let's consider what the "best" solution is. It is perfectly natural in a word problem that the unknown $x$ refers to one of the properties of the physical situation (i.e., length, width, or price per square foot). Once we factor the polynomial (but see below): $6 x (x+3)$ we can assume that each of the three factors refers to one of the properties of the physical situation. If $x$ referred to the cost per square foot, it would make no physical sense that the width or length would be $x + 3$. That is, if $x$ is in dollars per square foot, then $x + 3$ would not have the units of a length. Both length and width have the same units (e.g., feet or meters), so it is most natural that $x$ refers to width and thus $x + 3$ refers to length (which is typically longer than width). So I think the best solution is: * *Width of room = $x$ *Length of room = $x+3$ *Cost per unit area = 6 Note that there are an infinite number of ways to factor the given expression into three factors, even if we assume that $x$ is a "natural" factor: $x (x+3) 6$ $x (2 x + 6) 3$ $x (a x + 3 a) (6/a)$ for any real positive $a$. There are limits on $a$: for example, it would make no sense for any quantity in this problem to have a negative value, so $a > 0$; likewise, it makes no sense for the area of the rug to be extraordinarily large (e.g., larger than the Atlantic Ocean). Regardless, as Steve X points out, there are an infinite number of solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is $\tan^{-1}(\infty)$ here equal to $\dfrac{\pi}{2}$? In the question below in the last step of the solution, they seem to be claiming that $\displaystyle \lim_{n \to \infty+}\tan^{-1} \left (\dfrac{n+1}{d} \right) = \dfrac{\pi}{2}$, which I agree with but what about $\dfrac{5\pi}{2}$ etc.? How can they say it must be $\dfrac{\pi}{2}$? Also if they aren't claiming what I said above it seems that they are claiming that $\displaystyle \lim_{n \to \infty+}\left[ \tan^{-1} \left (\dfrac{n+1}{d} \right)+\tan^{-1} \dfrac{n}{d}\right] = \pi $, which isn't necessarily true. Problem and solution
It is small because the individual angles considered are arctangents of lines with positive slopes: the angle between the positive $x$ axis and a line of positive slope $m$ is $\arctan m$, which is between $0$ and $\pi / 2.$ You should have emphasized an earlier line, $$ \arctan \left( \frac{i+1}{d} \right) - \arctan \left( \frac{i-1}{d} \right) = \arctan \left( \frac{2d}{i^2 + d^2 - 1} \right) \approx \frac{2d}{i^2 + d^2 - 1}, $$ which becomes very small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Is the complex square root of $z^2 = \pm z$? Is $\sqrt{z^2} = \pm z$, for $z$ complex? I think it is, since either $-z$ or $+z$ satisfies the definition $\sqrt{z^2}= e^{\large \frac{1}{2}\log(z^2)}$, but I just wanted to make sure. It's a bit tricky going from the positive square root to the complex square root. Thanks,
The notation $\sqrt x$ is usually avoided except when referring to the non-negative square root of a non-negative real number, because otherwise it's ambiguous. In $\mathbb{C}$ it is better to refer to "a square root", not "the square root". You will also come across the phrase "$x$ is an $n$th root of $1$" (when it is understood that $n$ is a positive integer), which means that $x^n=1$ and does not specify or restrict which of the $n$ possible values $x$ might be.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
question about orthogonal vector Let's say I have 2 vectors $(1, 0, 0)$ and $(0, 2, 0)$, and I want to find a third vector that is orthogonal to both of them. I can take the cross product and get $(0, 0, 2)$. However, I know there are infinitely many vectors of the form $(0, 0, x)$, where $x \in \mathbb{R}$, that are orthogonal to the other two. My question is: what is the difference between the orthogonal vector that results from the cross product and any other orthogonal vector? Why does the cross product give this one specific orthogonal vector? What is the significance of this vector? Thank you!
The vector you get by performing the cross product is the unique vector orthogonal to both of your original vectors that * *has a length equal to the magnitude of the area of the parallelogram (actually rectangle in this case) with sides $(1,0,0)$ and $(0,2,0)$ and *forms a right-handed set with $(1,0,0)$ and $(0,2,0)$ If you don't care about either of those two properties, then you could just choose any vector of the form $(0,0,c)$. But sometimes those properties are useful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Bounded Holomorphic function on Right half plane. Does there exist a bounded holomorphic function defined on the right half plane which has $\sqrt{n}$ as a root for every natural number $n$? I guess it is just the $0$ function, but how could I approach this one? (I've been trying to use Blaschke products.) Thanks!
If $f$ is a bounded non-constant holomorphic function on the disc, then its zero set $\{z_n\}$ satisfies the Blaschke condition $\sum_n (1-|z_n|)<\infty$ (see here). The function $$z\mapsto\frac{1+z}{1-z}$$ maps the unit disc onto the right half plane. The pre-image of $\sqrt{n}$ is $$\frac{\sqrt{n}-1}{\sqrt{n}+1} = 1 -\frac2{\sqrt{n}+1},$$ and $\sum_n \frac{2}{\sqrt{n}+1}$ diverges, so a bounded function on the unit disc with those roots must be identically zero. Therefore a bounded function on the right half plane with roots $\sqrt{n}$ must also be identically zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Using polar coordinates to find the area of an ellipse Considering an ellipse with the $x$ radius equal to $a$ and the $y$ radius equal to b$:$ I figured that some kind of parameterization might be: $x=a\cos\theta$ $y=b\sin\theta$ and then polar $r^2$ is just $x^2 + y^2$ But then I tried to come up with some unit of infinitesimal area using triangles $\left(\dfrac{d\theta r^2}{2}\right)$ which does not give the correct answer. I read somewhere that my polar coordinates are wrong and that they are actually $x=ar\cos\theta$ $y=br\sin\theta$ But this does not make sense to me as an engineer because that seems like it would have the dimension of area equal to the dimension of a distance. The integral also takes $r$ from $0$ to $1$ which I thought was eliminated because the equation for $r$ should be in terms of $\theta$ and the constants $a$ and $b.$ I would like some explanation of what I am doing wrong that would make some "physical" sense (or why physical intuition might fail for this problem)
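Here is a sketch of how the two pictures reconcile (and why no dimensions go wrong). In genuine polar coordinates the ellipse $x^2/a^2+y^2/b^2=1$ has $$r(\theta)^2=\frac{a^2b^2}{b^2\cos^2\theta+a^2\sin^2\theta},\qquad \text{Area}=\frac12\int_0^{2\pi}r(\theta)^2\,d\theta=\frac{a^2b^2}{2}\cdot\frac{2\pi}{ab}=\pi ab,$$ using the standard integral $\int_0^{2\pi}\frac{d\theta}{b^2\cos^2\theta+a^2\sin^2\theta}=\frac{2\pi}{ab}$. Your triangle element $\frac12 r^2\,d\theta$ is fine here, because this $r$ really is the distance to the origin. In the substitution $x=ar\cos\theta$, $y=br\sin\theta$, by contrast, the new $r$ is not a distance: it is a dimensionless scaling parameter running from $0$ to $1$, and the lengths live entirely in $a$ and $b$. The area element then comes from the Jacobian, $dA=ab\,r\,dr\,d\theta$, so $$\text{Area}=\int_0^{2\pi}\!\!\int_0^1 ab\,r\,dr\,d\theta=\pi ab,$$ which has the dimensions of an area because $ab$ does.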
{ "language": "en", "url": "https://math.stackexchange.com/questions/1595894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 4 }
Is every "weakly square" matrix either a $0$ matrix, or a square matrix? Call a matrix $A$ weakly square iff $\mathrm{det}(A^\top A) = \mathrm{det}(A A^\top)$. Then clearly, * *every square matrix is weakly square, and *every zero matrix is weakly square. Question. Are these the only examples of weakly-square matrices? Remark. I got the idea from Donald Reynolds answer here.
A non-square matrix $A$ is weakly square if and only if neither $A$ nor $A^T$ has full rank, which is to say iff $\operatorname{rank}(A)<\min\{m,n\}$. The key to this observation is to note that $$ \operatorname{rank}(A^TA)= \operatorname{rank}(A)= \operatorname{rank}(A^T)= \operatorname{rank}(AA^T) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finite element method books I know this question has been asked before; I just want to enquire if anybody has any suggestions to learn how to compute finite element problems, including plenty of examples. The topics I would like to focus in are as follows: Introduction to finite elements for 1D and 2D problems covering: * *weak formulation *Galerkin approximation *Shape functions *Isoparametric elements Key examples with walkthrough of common problems such as: * *Applying to heat equation *Applying to beam equation *Eigenvalue Problems *Nonlinear Problems Any recommendations would be sincerely appreciated (and happy new year!)
I would suggest Larson–Bengzon - The Finite Element Method: Theory, Implementation and Applications. It contains everything you requested for. They use Matlab as a programming environment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Identifying a sequence as subset of subspace If I have some sequence $\mathcal A = (a_i)$ of objects $a_i$ (maybe finite, maybe countably infinite) how can I say that those objects all exist in some subspace $S$? Is it correct to say $\mathcal A \subseteq S$? I'm not sure because $\mathcal A$ isn't a set it's a sequence.
You are correct in your assertion that $(a_i)\subseteq S$ lacks some formality, but I think it would be understood as intended if you wrote it that way. Here are some other options: * *Use $(a_i) \in S^{\mathbb{N}}$ if you have a sequence in the usual sense. *Use $(a_i) \in S^N$ if you have a sequence of finite length. *Introduce $\{a_i\}\subseteq S$ and use the object $(a_i)$ when needed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$ Prove that $$\sum_{n=1}^{\infty}\left(\frac{1}{(8n-7)^3}-\frac{1}{(8n-1)^3}\right)=\left(\frac{1}{64}+\frac{3}{128\sqrt{2}}\right)\pi^3$$ I don't have an idea about how to start.
What we have is $$\frac1{1^3} - \frac1{7^3} + \frac1{9^3} - \frac1{15^3} + \cdots $$ Imagine if we used negative summation indices (e.g., $n=0, -1, -2, \cdots$). Then we would have $$\frac1{(-7)^3} - \frac1{(-1)^3} + \frac1{(-15)^3} - \frac1{(-9)^3} + \cdots$$ You should see then that the sum is $$\frac12 \sum_{n=-\infty}^{\infty} \left [\frac1{(8 n-7)^3} - \frac1{(8 n-1)^3} \right ] $$ The significance of this is that we may use a very simple result from residue theory to evaluate the sum: $$\sum_{n=-\infty}^{\infty} f(n) = -\pi \sum_k \operatorname*{Res}_{z=z_k} [f(z) \cot{\pi z} ]$$ where the $z_k$ are the non-integer poles of $f$. Here $$f(z) = \frac12 \left [\frac1{(8 z-7)^3} - \frac1{(8 z-1)^3} \right ] = \frac1{2 \cdot 8^3} \left [\frac1{(z-7/8)^3} - \frac1{(z-1/8)^3} \right ]$$ $f$ has poles at $z_1=1/8$ and $z_2=7/8$. At $z_1$, the residue of $f(z)\cot \pi z$ is (note the minus sign attached to the $(z-1/8)^{-3}$ term) $$-\frac1{2 \cdot 8^3} \frac1{2!} \left [ \frac{d^2}{dz^2} \cot{\pi z} \right ]_{z=1/8} = -\frac1{2 \cdot 8^3} \frac1{2!} (2 \pi ^2) \cot \left(\frac{\pi }{8}\right) \csc ^2\left(\frac{\pi }{8}\right)$$ The calculation for the other pole is similar. Then our sum is $$\frac1{2 \cdot 8^3} \frac1{2!} \pi (2 \pi ^2) \left [\cot \left(\frac{\pi }{8}\right) \csc ^2\left(\frac{\pi }{8}\right)-\cot \left(\frac{7 \pi }{8}\right) \csc ^2\left(\frac{7 \pi }{8}\right) \right ] = \frac{\pi^3}{8^3} \cot \left(\frac{\pi }{8}\right) \csc ^2\left(\frac{\pi }{8} \right )$$ Simplifying... $$\cot \left(\frac{\pi }{8}\right) = 1+\sqrt{2}$$ $$\csc ^2\left(\frac{\pi }{8} \right ) = \frac{2}{1-\frac{\sqrt{2}}{2}} = \frac{4}{2-\sqrt{2}} = 2 \left (2+\sqrt{2} \right)$$ Thus: $$\sum_{n=1}^{\infty} \left [\frac1{(8 n-7)^3} - \frac1{(8 n-1)^3} \right ] = \frac{\pi^3}{256} \left (4+3 \sqrt{2} \right )$$
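A quick numerical check of the closed form (my addition; partial sums converge fast enough):

```python
import math

s = sum(1 / (8*n - 7)**3 - 1 / (8*n - 1)**3 for n in range(1, 10**5))
print(s, math.pi**3 * (4 + 3 * math.sqrt(2)) / 256)  # both ≈ 0.998342
```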
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Solve the integral $\frac 1 {\sqrt {2 \pi t}}\int_{-\infty}^{\infty} x^2 e^{-\frac {x^2} {2t}}dx$ To find the variance of a Wiener process, $Var[W(t)]$, I have to compute the integral $$ Var[W(t)]=\dots=\frac 1 {\sqrt {2 \pi t}}\int_{-\infty}^{\infty} x^2 e^{-\frac {x^2} {2t}}dx=\dots=t. $$ I've tried integration by parts to solve the integral but end up with $$ \dots=\frac 1 {\sqrt {2 \pi t}} \left(0 - \int_{-\infty}^{\infty} -\frac {x} {t} \cdot e^{-\frac {x^2} {2t}}\cdot \frac {x^3} 3 dx\right), $$ which is even worse and probably wrong. Can anyone please help me compute the first integral and show how it becomes equal to $t$? (I know that the variance of a Wiener process (standard Brownian motion) is defined as $t$, but I want to prove it with the integral above.)
Hint: Let $a=\dfrac1{2t}.~$ Then we are left with evaluating $\displaystyle\int_{-\infty}^\infty x^2e^{-ax^2}~dx.~$ But the latter can be written as $-\dfrac d{da}\displaystyle\int_{-\infty}^\infty e^{-ax^2}~dx.~$ Can you take it from here ? ;-$)$
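For completeness (my addition): $\int_{-\infty}^\infty e^{-ax^2}\,dx=\sqrt{\pi/a}$, so $$\int_{-\infty}^{\infty}x^2 e^{-ax^2}\,dx=-\frac{d}{da}\sqrt{\frac{\pi}{a}}=\frac{\sqrt{\pi}}{2}\,a^{-3/2},$$ and with $a=\frac1{2t}$, $$Var[W(t)]=\frac{1}{\sqrt{2\pi t}}\cdot\frac{\sqrt{\pi}}{2}\,(2t)^{3/2}=t.$$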
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
is the vector space of n-forms of an n-manifold equal to the vector space of compactly supported n-forms? Let $\Omega^{n}(M)$ be the real vector space of smooth $n$-forms of an $n$-manifold $M$. It is a real vector space of dimension 1. $\Omega^{n}_c(M)$ is the real vector space of compactly supported smooth $n$-forms on $M$. Since it is a subspace of $\Omega^{n}(M)$, which has dimension 1, shouldn't it be equal to the whole of $\Omega^{n}(M)$ or to $\{0\}$? But there are $n$-forms that are not compactly supported.
I think that, for the purposes of your confusion, we can first change "$n$-form" to "vector field" in order to gain some concreteness; the confusion is the same. Consider the set of vector fields on a manifold $M$, let's call it $V(M)$. It is the set of smooth sections of the tangent bundle $TM$: we assign to each point $x$ of $M$ a point of $T_xM$ in a "smooth" way. This is a real vector space, since we can add two vector fields: we do it pointwise, in each $T_xM$. The dimension of each $T_xM$ is $n$, but that is not the dimension of $V(M)$; it is a different object. Analogously, in your case, a smooth $n$-form is a section of the $n$-th exterior power of the cotangent bundle. At each point you have a vector space of dimension $1$. But $\Omega^n(M)$ is not that vector space; it is a different object.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For all $\omega \neq 0$, $\rho(L_{\omega}) \ge |1 - \omega|$, where $L_{\omega}$ is the SOR matrix Let $A = (a_{ij}) \in M_n(\Bbb C)$ be invertible, such that $a_{ii} \neq 0$ for all $i$. Split $A$ into $D - E - F$, where $D$ is the diagonal of $A$, $E$ is the strict lower triangular part of $-A$ and $F$ is the strict upper triangular part of $-A$. Let $\omega \in \Bbb C \setminus \{0\}$, and define: $$L_{\omega} = \left(\frac1{\omega} D - E\right)^{-1} \left(\frac{1-\omega}{\omega} D + F\right)$$ It is required to prove that $\rho(L_{\omega}) \ge |1-\omega|$ Where $\rho(M)$ denotes the spectral radius of a matrix $M$. Until now, I have no good ideas. I tried to prove the constraint on each eigenvalue by considering the characteristic polynomial, $$P_{L_{\omega}} (x) = \det\left( xI - \left(\frac1{\omega} D - E\right)^{-1} \left(\frac{1-\omega}{\omega} D + F\right) \right) \\ = \ldots \\ = \frac{\omega^n}{\prod a_{ii}} \det \left(\frac{x - 1 + \omega}{\omega} D - Ex - F\right) $$ This seems to lead nowhere. I tried to prove it for the case $n=2$ to have a better understanding of the situation, however I couldn't. Just in case it helps, I obtained the following as the characteristic polynomial for the case $n=2$. $$x^2 - \frac{\omega^2}{a_{11}a_{22}}a_{12}a_{21}x + (1-\omega)^2$$ Finally, I have no relevant theorems in mind; all the theorems regarding the spectral radius, which are known to me, are just possible upper bounds. Source of the claim (page $27$).
For $\omega \in \Bbb C \setminus \{0\}$ you have: \begin{align*}\det\left(L_{\omega}\right ) &= \det\left(\left(\frac1{\omega} D - E\right)^{-1} \left(\frac{1-\omega}{\omega} D + F\right) \right) \\ &= \frac{\det\left(\frac{1-\omega}{\omega} D + F\right)}{\det \left(\frac1{\omega} D - E\right) } \end{align*} But your decomposition is such that $E$ and $F$ are strictly triangular, so we end up with: \begin{align*} \det \left(\frac1{\omega} D - E\right) &= \frac{1}{\omega^{n}}\prod a_{ii}\\ \det \left(\frac{1-\omega}{\omega} D + F\right) &= \frac{\left(1 - \omega \right )^{n}}{\omega^{n}}\prod a_{ii} \end{align*} That gives us $\det\left(L_{\omega}\right ) = \left(1 - \omega \right )^{n}$. But you also know, by definition of the spectral radius, that $\rho(L_{\omega}) = \displaystyle \max_{i} |\lambda_i|$. Raising to the power $n$ gives: \begin{align*} \rho(L_{\omega})^{n} &\ge \prod_{i} |\lambda_i|\\ &= |\det\left(L_{\omega}\right )|\\ &= |1 - \omega|^{n} \end{align*} And from that follows $\rho(L_{\omega}) \ge |1 - \omega|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate the integral $\int x^{\frac{-4}{3}}(-x^{\frac{2}{3}}+1)^{\frac{1}{2}}\mathrm dx$ $$x^{\frac{-4}{3}}(-x^{\frac{2}{3}}+1)^{\frac{1}{2}}=\frac{\sqrt{(-\sqrt[3]{x^2}+1)}}{\sqrt[3]{x^4}}$$ Is it necessary to simplify the function further? What substitution is useful? $u=\sqrt[n]{\frac{ax+b}{cx+d}}$ doesn't work.
Following RecklessReckoner's comment: We have $\int x^{-4/3}(1-x^{2/3})^{1/2}\,\mathrm dx$. Now, let $u=x^{1/3}$, so that $x=u^3$ and $\mathrm dx=3u^2\,\mathrm du$. This gives $$\int u^{-4}(1-u^{2})^{1/2}\cdot 3u^2\,\mathrm du=3\int u^{-2}(1-u^{2})^{1/2}\,\mathrm du.$$ Next, we perform another substitution: let $u=\sin t$ and $\mathrm du=\cos t\,\mathrm dt$: $$3\int \frac{(1-\sin^2 t)^{1/2}}{\sin^2 t}\cos t\,\mathrm dt=3\int \frac{\cos^2 t}{\sin^2 t}\,\mathrm dt=3\int \cot^2 t\,\mathrm dt.$$ We can use the identity $\csc^2 t-1= \cot^2 t$: $$3\int \cot^2 t\,\mathrm dt=3\int (\csc^2 t-1)\,\mathrm dt=-3\cot t-3t+C.$$ Then substitute back: $t=\arcsin u$ and $\cot t=\frac{\sqrt{1-u^2}}{u}$, with $u=x^{1/3}$, giving $$\int x^{-4/3}(1-x^{2/3})^{1/2}\,\mathrm dx=-\frac{3\sqrt{1-x^{2/3}}}{x^{1/3}}-3\arcsin\left(x^{1/3}\right)+C.$$ Can you take it from here?
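A numeric spot-check of this antiderivative at a few points of $(0,1)$ (my addition; SymPy assumed available):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = -3 * sp.sqrt(1 - sp.cbrt(x)**2) / sp.cbrt(x) - 3 * sp.asin(sp.cbrt(x))
integrand = sp.sqrt(1 - sp.cbrt(x)**2) / x**sp.Rational(4, 3)
err = sp.diff(F, x) - integrand
for v in (sp.Rational(1, 5), sp.Rational(1, 2), sp.Rational(9, 10)):
    print(sp.N(err.subs(x, v)))  # each value ≈ 0
```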
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Prove that $\sum_{i = 1}^n \frac{x_i}{i^2} \geq \sum_{i = 1}^n \frac{1}{i}.$ Let $x_1,x_2,\ldots,x_n$ be distinct positive integers. Prove that $$\displaystyle \sum_{i = 1}^n \dfrac{x_i}{i^2} \geq \sum_{i = 1}^n \dfrac{1}{i}.$$ Attempt I tried using Cauchy-Schwarz and I got that $$(x_1^2+x_2^2+\cdots+x_n^2) \left (\dfrac{1}{1^2}+\dfrac{1}{2^2}+\cdots+ \dfrac{1}{n^2} \right ) \geq \left ( \dfrac{x_1}{1}+\dfrac{x_2}{2}+\cdots+\dfrac{x_n}{n} \right)^2 = \displaystyle \left (\sum_{i = 1}^n \dfrac{x_i}{i} \right)^2,$$ but this doesn't seem to help.
A proof sketch that doesn't use the rearrangement inequality: Since the integers are distinct, it's enough to prove the claim in the case where $\{x_1,\dots,x_n\}$ is a permutation of $\{1,\dots,n\}$: if the $x_i$ are sorted increasingly, the $i$-th smallest is at least $i$, so replacing the values by $1,\dots,n$ in the same positions can only decrease the left-hand side. For positive integers $\alpha<\beta$, $\gamma < \delta$ you can show $$\alpha/\gamma + \beta/\delta < \beta/\gamma + \alpha/\delta.$$ From here you can actually choose the permutation $\{x_1,\dots,x_n\}$ at which the lower bound of $\sum_{i=1}^n \frac{x_i}{i^2}$ is attained (namely $x_i = i$), and from this you'll get the inequality.
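A brute-force check of the permutation case for a small $n$, in exact rational arithmetic (the test size is arbitrary):

```python
from itertools import permutations
from fractions import Fraction

n = 6
harmonic = sum(Fraction(1, i) for i in range(1, n + 1))
print(all(sum(Fraction(x, i * i) for i, x in enumerate(p, start=1)) >= harmonic
          for p in permutations(range(1, n + 1))))  # True
```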
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why does the monotonicity imply $2^u < 3^v$ if and only if $3^u < 6^v$? In the question and solution below, I am wondering why step #$7$ says "the monotonicity of $f$ implies that $2^u < 3^v$ if and only if $3^u < 6^v$, $u,v$ being positive integers." How does this even depend on the definition of $f$? And if it does, how is it true? [The problem and its solution were attached as images.]
Because $f(mn) = f(m)f(n)$, we have: $$ f(m^k) = f(\underbrace{m m \dots m}_{\text{$k$ times}}) = \underbrace{f(m)f(m)\dots f(m)}_{\text{$k$ times}} = f(m)^k $$ Also, because $f$ is strictly increasing, we have: $$ a > b \implies f(a) > f(b) $$ (indeed that's the definition of strictly increasing). With these two together, the argument goes like this -- suppose we had $2^u < 3^v$. Then, $$ \begin{aligned} 2^u &< 3^v\\ f(2^u) &< f(3^v)\\ f(2)^u &< f(3)^v\\ 3^u &< 6^v\\ \end{aligned} $$ (What a bizarre argument, may I say. Couldn't they have just calculated $\log_2(3)$ and $\log_3(6)$ once they got there, found that they were not equal, and called contradiction before doing any of the golden ratio business?)
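To illustrate the closing parenthetical: the thresholds $\log_2 3$ and $\log_3 6$ differ, and any ratio $u/v$ strictly between them gives $3^v < 2^u$ together with $3^u < 6^v$, contradicting the claimed equivalence. A quick check:

```python
import math

print(math.log(3, 2), math.log(6, 3))    # ~1.585 vs ~1.631, not equal

u, v = 8, 5                              # u/v = 1.6 lies strictly between the two
print(2 ** u > 3 ** v, 3 ** u < 6 ** v)  # True True
```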
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Zeroth homotopy group: what exactly is it? What are the elements in the zeroth homotopy group? Also, why does $\pi_0(X)=0$ imply that the space is path-connected? Thanks for the help. I find that zeroth homotopy groups are rarely discussed in literature, hence having some trouble understanding it. I do understand that the elements in $\pi_1(X)$ are loops (homotopy classes of loops), trying to see the relation to $\pi_0$.
Just a slight rephrase: you can consider $\pi_0(X)$ as the quotient set of the set of all points in $X$ where you mod out by the equivalence relation that identifies two points if there is a path between them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 1 }
A function satisfies the identity $f(x) + 2f\left(\frac1x\right) = 2x+1$ ... find another identity that $f(x)$ satisfies. A function satisfies the identity $f(x) + 2f\left( \frac{1}{x} \right) = 2x+1$. By replacing all instances of $x$ with $\frac{1}{x}$, find another identity that $f(x)$ satisfies. I have absolutely no idea what this question is asking, and how to go about it. I would really appreciate some help; thanks in advance!
You have your original identity and new one obtained by substituting $\frac1x$ instead of $x$: \begin{align} f(x)+2f\left(\frac1x\right) &= 2x+1 \tag{1}\\ f\left(\frac1x\right)+2f(x) &= \frac2x+1 \tag{2} \end{align} Now you want to combine the two equalities to get $f(x)$. If you add to equation $(1)$ the equation $(2)$ multiplied by $-2$, then the occurrences of $f(1/x)$ cancel each other out: \begin{align*} -3f(x) &= 2x-\frac4x-1\\ f(x) &= -\frac23x +\frac4{3x}+\frac13 \end{align*}
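A quick sanity check that this $f$ satisfies the original identity (sampled at a few arbitrary rational points, in exact arithmetic):

```python
from fractions import Fraction

def f(x):
    return Fraction(-2, 3) * x + Fraction(4, 3) / x + Fraction(1, 3)

for x in (Fraction(1), Fraction(2), Fraction(-3), Fraction(5, 7)):
    assert f(x) + 2 * f(1 / x) == 2 * x + 1
print("identity verified at the sample points")
```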
{ "language": "en", "url": "https://math.stackexchange.com/questions/1596943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does a connected countable metric space exist? I'm wondering if a connected countable metric space exists. My intuition is telling me no. For a space to be connected it must not be the union of 2 or more open disjoint sets. For a set $M$ to be countable there must exist an injective function from $\mathbb{N} \rightarrow M$. I know the Integers and Rationals clearly are not connected. Consider the set $\mathbb{R}$, if we eliminated a single irrational point then that would disconnect the set. A similar problem arises if we consider $\mathbb{Q}^2$ In any dimension it seems by eliminating all the irrational numbers the set will become disconnected. And since $\mathbb{R}$ is uncountable there cannot exist a connected space that is countable. My problem is formally proving this. Though a single Yes/No answer will suffice, I would like to know both the intuition and the proof behind this. Thanks for any help. I haven't looked at cofinite topologies (which I happened to see online). I also don't see where the Metric might affect the countability of a space, if we are primarily concerned with an injective function into the set alone.
Fix $x_0 \in X $. Then, the continuous(!) map $$ \Phi: X \to \Bbb {R}, x \mapsto d (x,x_0) $$ has an (at most) countable, connected image. Thus, the image is a whole (nontrivial, if $X$ has more than one point) interval, in contradiction to being countable. EDIT: On a related note, this even shows that every connected metric space with more than one point has at least the cardinality of the continuum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Existence of maximum and minimum Let $f:\mathbb{R}_+\rightarrow \mathbb{R}$ be continuous and such that $f(0)=1$ and $\lim_{x\to+\infty}f(x) = 0$. Prove that $f$ must have a maximum in $\mathbb{R}_+$. What about the minimum? I started working on this by trying to apply the Weierstrass theorem on a smaller interval of $\mathbb{R}_+$. The fact is that I am not sure how to use the data given in the text. Moreover, I assumed that once the Weierstrass theorem holds, both maximum and minimum exist, since both argmax and argmin are non-empty compact sets; but looking at the solutions this is not the case, since the minimum exists only in certain cases. Can you give me any hint/starting point to solve this problem?
Suppose $f$ doesn't have a maximum. Pick any non-zero $x_{1} \in \Bbb R_{+}$. Since $[0, x_{1}]$ is compact, and $f$ is continuous, we know $f$ has a maximum $M > 0$ on this interval. But since $f$ doesn't have a maximum on $\Bbb R_{+}$, we also know there is some $x_{2} \not \in [0,x_{1}]$ with $f(x_{2}) > M$. But $[0,x_{2}]$ is compact, so $f$ attains a larger maximum on this interval than $M$ (since $f$ is continuous). In particular, we can find $z \in [0, x_{2}]$ so that $f(z) > M$. Since $f$ doesn't attain a maximum on $\Bbb R_{+}$, proceeding as above we can find $\{x_{i}\}_{i = 1}^{\infty}$ with $x_{1} < x_{2} < x_{3} < x_{4} < \dots$ and $f(x_{i}) > M$ for each $i$. What does this tell you about the limit of $f(x)$ as $x \to \infty$? Can it be $0$ as we assumed?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is every converging sequence the sum of a constant sequence and a null sequence? Let $a_n$ be any sequence converging to $a$ when $n \to \infty$. Can you rewrite $a_n$ so that it is the sum of two other sequences? $$a_n=b_n + c_n,$$ with $b_n=b$ for every $n \in \mathbb{N}$ and $c_n\to 0$ as $n\to \infty$. In other words: Is a converging sequence ($a_n$) actually a null sequence ($c_n$) "shifted" by a constant ($b$)? Or is there any counterexample where one is not allowed to do so?
Yes, you can do that. Simply take $b_n=a,c_n=a_n-a$. By basic properties of limits $$\lim\limits_{n\rightarrow\infty}c_n=\lim\limits_{n\rightarrow\infty}(a_n-a)=(\lim\limits_{n\rightarrow\infty}a_n)-a=a-a=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Find the summation $\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$ What is the value of the following sum? $$\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \cdots$$ The possible answers are: A. $e$ B. $\frac{e}{2}$ C. $\frac{3e}{2}$ D. $1 + \frac{e}{2}$ I tried to expand the options using the series representation of $e$ and putting in $x=1$, but I couldn't get back the original series. Any ideas?
Clearly the $r^{th}$ numerator is $1+2+3+...+r= \frac{r(r+1)}{2}$, and the $r^{th}$ denominator is $r!$. Thus $$\displaystyle U_r=\frac{\frac{r(r+1)}{2}}{r!}=\frac{r(r+1)}{2r!}.$$ Since the degree of the numerator is $2$, use partial fractions to find $A,B,C$ such that $$\displaystyle U_r=\frac{r(r+1)}{2r!}=\frac{A}{(r-2)!}+\frac{B}{(r-1)!}+\frac{C}{r!}$$ (if you carry the expansion up to $(r-3)!$, its coefficient will come out zero when comparing coefficients). Multiplying both sides by $2r!$, $$\displaystyle r(r+1)=r!\times \frac{2A}{(r-2)!}+r!\times \frac{2B}{(r-1)!}+r!\times \frac{2C}{r!}.$$ Now observe that $r!=1\times 2\times 3\times .... \times (r-2)\times(r-1)\times r$, so $r!=(r-2)!\times(r-1)r$ and $r!=(r-1)!\times r$. Hence $$\displaystyle r^2+r = 2A(r-1)r+2Br+ 2C.$$ Comparing coefficients, $C=0$, $B=1$ and $A=\frac{1}{2}$, so $$\displaystyle U_r=\frac{r(r+1)}{2r!}=\frac{1}{2(r-2)!}+\frac{1}{(r-1)!} \qquad (r\ge 2).$$ Therefore $\displaystyle \sum_{r=2}^{\infty}U_r= \frac{1}{2} \left( \frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+.....\right)+\left( \frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+.....\right) = \frac{1}{2} \left( e\right)+\left( e-1\right)$, and finally $\displaystyle \sum_{r=1}^{\infty}U_r= U_1+\frac{1}{2} \left( e\right)+\left( e-1\right)=1+\frac{e}{2}+e-1 =\frac{3e}{2}$.
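A quick numerical check of the value $\frac{3e}{2}$ against a partial sum (30 terms is plenty, since the factorials grow fast):

```python
from math import e, factorial

partial = sum(r * (r + 1) / 2 / factorial(r) for r in range(1, 30))
print(partial, 3 * e / 2)  # both ~4.07742274...
```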
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 1 }
If $n$ and $m$ are odd integers, show that $ \frac{(nm)^2 -1}8$ is an integer I am trying to solve: If $n$ and $m$ are odd integers, show that $ \frac{(nm)^2 -1}8$ is an integer. If I write $n=2k+1$ and $m=2l+1$ I get stuck at $$\frac{1}{8}(16k^2 l^2 +4(k+l)^2 +8kl(k+l)+4kl+2(k+l))$$
$$((2k+1)(2l+1))^2-1=16k^2l^2+4k^2+16kl^2+16kl+4k+4l^2+16lk^2+4l.$$ Dropping all the terms with coefficient $16$, $$4(k^2+k+l^2+l)=4(k(k+1)+l(l+1))$$ must be a multiple of $8$, since $k(k+1)$ and $l(l+1)$ are products of consecutive integers and hence even. With a slightly simpler evaluation: $$((2k+1)(2l+1))^2-1=(4kl+2k+2l)(4kl+2k+2l+2)=4(2kl+k+l)(2kl+k+l+1).$$ This is $4$ times a product of two consecutive integers, one of which is even, hence a multiple of $8$.
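A brute-force sanity check over a range of odd integers (the range is arbitrary):

```python
print(all(((n * m) ** 2 - 1) % 8 == 0
          for n in range(-99, 100, 2)
          for m in range(-99, 100, 2)))  # True
```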
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 2 }
How do we find the minimum distance of a narrow sense BCH code? I know the designed distance, $d$, is a lower bound for the minimum distance, $d(C)$. Usually, in the examples I've seen, what we do is find the generator polynomial $g(x)$ of the code, then from $d \le d(C) \le w(g)$, where $w(g)$ is the weight of the code word $g(x)$, it always so happens that $w(g)=d$. This clearly doesn't always work. What if the last equality doesn't hold? Is there a way to find the minimum distance without finding the generator polynomial first? For example, in our text, for the narrow sense BCH code of length 23 with $d=5$, the minimum distance is said to be $7$. How are they getting this?
This is a very special code $C$ known as the binary Golay code. To get $d_{min}=7$ you can do the following. * *The smallest extension field containing a 23rd root of unity is $GF(2^{11})$. We see this for example by repeatedly applying the Frobenius (i.e. squaring). If $\alpha$ is a 23rd root of unity, its conjugates are $$\alpha,\alpha^2,\alpha^4,\alpha^8,\alpha^{16},\alpha^9,\alpha^{18},\alpha^{13},\alpha^3,\alpha^6,\alpha^{12}.$$ *So you see that the minimal polynomial of $\alpha$ has as its zeros also $\alpha^2$, $\alpha^3$ and $\alpha^4$. Therefore the BCH-bound says that $d_{min}\ge5$. *For the rest to work, you need to produce a generator matrix for $C$. If you know a generator polynomial $g(x)$, it is easy. Another way is to calculate the idempotent of $C$. By the same calculation we see that idempotents in the ring $R=GF(2)[x]/\langle x^{23}+1\rangle$ are $$e(x)=x+x^2+x^3+x^4+x^6+x^8+x^9+x^{12}+x^{13}+x^{16}+x^{18}$$ its reciprocal $\tilde{e}(x)=x^{23}e(\dfrac1x)$, and whatever you get by adding $1$ to one of them (the sum $e+\tilde{e}$ is also an idempotent, but it generates the repetition code, so is not interesting). *IIRC $e(x)$ generates a code we want. The code is 12-dimensional, and you get a basis for $C$ by writing out the words $x^ie(x), 0\le i\le 11$. *Extend $C$ to a code $C^+$ of length $24$ by adding an overall parity check bit. *Then follows the key step: Check that the basis vectors generating $C^+$ all have weights divisible by four, and that they are orthogonal to each other. Prove (by induction on the number of generators appearing in the sum) that all the words of $C^+$ have weights divisible by four. *Conclude that $d_{min}(C^+)\ge8$, and thus $d_{min}(C)\ge7$.
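A small check of the conjugate computation in the first two bullets, listing the cyclotomic coset of $1$ mod $23$ under doubling:

```python
# Exponents of the conjugates of alpha: repeatedly double modulo 23.
# The size of the coset is the degree of the required extension field.
coset, k = [], 1
while k not in coset:
    coset.append(k)
    k = 2 * k % 23
print(coset)       # [1, 2, 4, 8, 16, 9, 18, 13, 3, 6, 12]
print(len(coset))  # 11, so the splitting field is GF(2^11)
```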
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of -(-v)=v in a vector space The exercise is: Prove that $-(-v)=v$ for every $v \in V$. Proof: Suppose $v \in V$ and $V$ is a vector space. Then $-(-v) \in V$ as a result of closure under scalar multiplication, and $-(-v)=-(-1\cdot v)=-1 \cdot(-1\cdot v) =(-1 \cdot-1)\cdot v = 1 \cdot v = v.$ The desired result $-(-v)=v$ holds. [The solution manual's proof was attached as an image.] I just wanted to make sure that the way I did the proof isn't missing anything? Thanks
Your solution seems good, acknowledging the fact that V is a vector space, and therefore satisfies the axioms pertaining to the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding a function $f$ with the minimal $\|f'\|_1$ I was wondering about the following question, which I am sure the answer is known. I couldn't quite find it and I would appreciate if someone could tell me. Suppose I have a function $f : \mathbb{R} \rightarrow \mathbb{R}$ such that $f(-1)=0$, $f(0)=1$, $f(1)=0$, and $f(x) = 0$ if $|x|>1$. I was wondering what is the minimum possible value of $\int_{\mathbb{R}} |f'(x)| dx$ and what is $f$ that achieves it? Thank you very much!
Sketch: Assuming $f\in C^1,$ we have $\int_{-1}^0f' = f(0)-f(-1) = 1.$ Hence $\int_{-1}^0|f'| \ge 1.$ The same applies on $[0,1].$ Thus $$\int_{\mathbb R}|f'| = \int_{-1}^1|f'| \ge 2.$$ Towards finding the minimum value: let $f$ be any $C^1$ function satisfying the hypotheses that is, in addition, increasing on $[-1,0]$ and decreasing on $[0,1].$ For such an $f$ both estimates above become equalities, so the minimum value $\int_{\mathbb R}|f'| = 2$ is attained.
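A numerical illustration that the bound is attained: take, say, $f(x)=\cos^2(\pi x/2)$ on $[-1,1]$ (and $0$ outside), which satisfies all the hypotheses and is monotone on each half:

```python
import numpy as np

x = np.linspace(-1, 1, 200001)
f = np.cos(np.pi * x / 2) ** 2           # C^1, f(-1)=0, f(0)=1, f(1)=0
fp = -np.pi / 2 * np.sin(np.pi * x)      # its derivative
print(np.abs(fp).sum() * (x[1] - x[0]))  # ~2.0, matching the lower bound
```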
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof of a determinant expansion This is equivalent to a result in Prasolov's book on linear algebra whose proof is not clear to me. I need help in understanding why the result is true. Let $x_1,x_2,\dots,x_n$ be row vectors in $R^n$. Let $e_1,\dots,e_n$ denote the canonical basis row vectors in $R^n.$ Let $M(x_1,\dots,x_n)$ denote the determinant of a matrix with rows $x_1,\dots,x_n$, Choose an integer $1 \leq k < n$. Given A, a subset of $\{1,\dots,n\}$ of n - k elements, say A = $\{i_1,\dots,i_{n-k}\}$ where $i_1 < i_2 < \dots < i_{n-k}$, let $M(x_1,\dots,x_k,A)$ denote the determinant of a matrix with rows $x_1,\dots,x_k,e_{i_1},\dots,e_{i_{n-k}}$. Define similarly $M(A,x_{k+1},\dots,x_{n})$ for each subset A of size k from $\{1,\dots,n\}$. Then we have, for some suitable choice of signs, $$ M(x_1,\dots,x_n) = \sum_{A : |A| = n -k} \pm M(x_1,\dots,x_k,A) M(A^{c},x_{k+1},\dots,x_n).$$
This follows from the generalized Laplace expansion of the determinant along the first $k$ rows. We have $$ M(x_1,\dots,x_n) = \sum_{A : |A| = k}\pm S(1,\dots,k;A) S(k+1,\dots,n;A^c) $$ where for $A \subset \{1,\dots, n\}$ with $|A| = k$, $S(1,\dots,k;A)$ denotes the determinant of the submatrix of the matrix with rows $x_1,\dots,x_n$ with row indices $1,\dots,k$ and column indices $j_1 < j_2 < \dots < j_k$ where $A = \{j_1,\dots,j_k\}$. $S(k+1,\dots,n;A)$ is defined similarly. The sign corresponding to $A = \{j_1,\dots,j_k\}$ above is $(-1)^{(1 + \dots + k) + (j_1 + \dots + j_k)}$. $S(1,\dots,k;A)$ is identical to the determinant of the following matrix : $ \begin{pmatrix} x_{1,j_1} & x_{1,j_2} & \dots & x_{1,j_k} & x_{1,l_1} & x_{1,l_2} & \dots & x_{1,l_{n-k}} \\ x_{2,j_1} & x_{2,j_2} & \dots & x_{2,j_k} & x_{2,l_1} & x_{2,l_2} & \dots & x_{2,l_{n-k}} \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ x_{k,j_1} & x_{k,j_2} & \dots & x_{k,j_k} & x_{k,l_1} & x_{k,l_2} & \dots & x_{k,l_{n-k}} \\ 0 & 0 & \dots & 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & \dots & 0 & 0 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ 0 & 0 & \dots & 0 & 0 & 0 & \dots & 1 \\ \end{pmatrix} $ where $x_{i,j}$ denotes the $(i,j)^{\text{th}}$ element of the matrix with rows $x_i$ and $A = \{j_1,\dots,j_k\}$ and $A^c = \{l_1,\dots,l_{n-k}\}.$ Rearranging the columns we get $$ S(1,\dots,k;A) = \pm M(x_1,\dots,x_k,A^c). $$ A similar statement, $S(k+1,\dots,n;A^c) = \pm M(A,x_{k+1},\dots,x_n)$, can be made, and the result follows.
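A numerical sanity check of the generalized Laplace expansion with the sign convention above, for a small random matrix (a sketch with arbitrary test parameters):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, k = 4, 2
M = rng.integers(-3, 4, size=(n, n)).astype(float)

top, bottom = list(range(k)), list(range(k, n))
total = 0.0
for A in combinations(range(n), k):        # column index sets of size k
    Ac = [j for j in range(n) if j not in A]
    sign = (-1) ** (sum(range(1, k + 1)) + sum(j + 1 for j in A))  # 1-based
    total += sign * np.linalg.det(M[np.ix_(top, list(A))]) \
                  * np.linalg.det(M[np.ix_(bottom, Ac)])

print(np.isclose(total, np.linalg.det(M)))  # True
```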
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a set is a basis, having already given a basis Question: If {$u,v$} is a basis for the subspace U, show that {$u+2v,-3v$} is also a basis for U My attempt: We must prove that {$u+2v,-3v$} spans U and is linearly independent We know that given any $w \in U$ there exists $w = a_1(u) + a_2(v)$ where $a_1,a_2 \in \Bbb F$ and we also know that if $a_1(u) + a_2(v) = 0$ then $a_1=a_2=0$ Therefore: given any $w \in U$ there exists $w = a_1(u+2v) + a_2(-3v)$, which can be written as $w = (a_1)u + (2a_1-3a_2)(v)$. $(2a_1-3a_2) \in \Bbb F$ therefore {$u+2v,-3v$} spans U By the same principle, $0 = (a_1)u + (2a_1-3a_2)(v)$. $(2a_1-3a_2) \in \Bbb F$, let $a_3= 2a_1-3a_2 $, we can rewrite as $0 = (a_1)u + a_3(v)$ Therefore {$u+2v,-3v$} is also linearly independent, therefore a basis for U. Is this correct? Thank you very much!
First, note that $\dim U=2$ so it suffices to show that $\{u+2\,v,-3\,v\}$ is linearly independent. To do so, note that $$ \lambda_1(u+2\,v)+\lambda_2(-3\,v)=0 $$ if and only if $$ \lambda_1u+(2\,\lambda_1-3\,\lambda_2)v=0\tag{1} $$ But $\{u,v\}$ is linearly independent so (1) holds if and only if \begin{array}{rcrcrcrc} 0 &=&\lambda_1\\ 0&=&2\,\lambda_1&-&3\,\lambda_2 \end{array} which holds if and only if $\lambda_1=\lambda_2=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Showing that $\lim_{{x\to 0}}(1+\sin{x})^{\frac{1}{x}} = e$ How do you calculate $\displaystyle \lim_{{x\to 0}}(1+\sin{x})^{\frac{1}{x}}$? I got it from here. It says L'Hopital, but I can't figure out how to apply it as I don't have a denominator. I also tried to rewrite the limit using trig identities: $\displaystyle \lim_{{x\to 0}}(1+\sin{x})^{\frac{1}{x}} = \lim_{x \to 0} 2^{\frac{1}{x}}\sin^{\frac{2}{x}}\left(\frac{\pi}{4}+\frac{x}{2}\right) = ?$
My answer: $$\lim_{x\to 0}\left(1+\sin x\right)^{1/x}=\lim_{x\to 0}\left(\left(1+\sin x\right)^{\frac{1}{\sin x}}\right)^{\frac{\sin x}{x}}.$$ Note that $\lim_{x\to 0}\frac{\sin x}{x}=1$, and since $\sin x\to 0$ we have $\left(1+\sin x\right)^{\frac{1}{\sin x}}\to e$, so I get $e^1=e$.
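A quick numerical check (the sampled points are arbitrary):

```python
import math

for x in (0.1, 0.01, 0.001, 1e-6):
    print(x, (1 + math.sin(x)) ** (1 / x))  # approaches e = 2.71828...
```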
{ "language": "en", "url": "https://math.stackexchange.com/questions/1597940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to get the complex number out of the polar form How does one get the complex number out of this equation? $$\Large{c = M e^{j \phi}}$$ I would like to write a function for this in C but I don't see how I can get the real and imaginary parts out of this equation to store it in a C structure.
$$\mathcal{R}(c)=M\cdot\cos{\phi}$$ $$\mathcal{I}(c)=M\cdot\sin{\phi}$$ This assumes $j^2=-1$ (the usual engineering convention, where $j$ is the imaginary unit). If instead one followed a convention where $i^2=-1$ and $j=e^{{2i\pi\over 3}}$ is a complex cube root of unity, then $j=-\frac12+\frac{\sqrt3}{2}i$, so $e^{j\phi}=e^{-\phi/2}\left(\cos\frac{\sqrt3\,\phi}{2}+i\sin\frac{\sqrt3\,\phi}{2}\right)$ and the formulas would change accordingly; in this question, though, $j$ clearly denotes the imaginary unit.
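A minimal sketch of the conversion (shown in Python; the same two formulas translate directly to C using `cos` and `sin` from `math.h`):

```python
import cmath, math

def from_polar(M, phi):
    """Real and imaginary parts of c = M * exp(j*phi), with j**2 = -1."""
    return M * math.cos(phi), M * math.sin(phi)

M, phi = 2.0, math.pi / 3
print(from_polar(M, phi))
print(M * cmath.exp(1j * phi))  # the same number, via the complex exponential
```

In C, the function body is the same two expressions, `M * cos(phi)` and `M * sin(phi)`, assigned to the real and imaginary fields of your structure.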
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Boolean Algebra Product of Sums I have a question to solve the following expression and get it in terms of product of sums (AB' + A'B)C And I tried taking the compliment of this [(AB' + A'B)C]' [(AB' + A'B)' + C'] [(AB')'.(A'B)' + C'] [(A' + B).(A + B') + C'] [(A' + B + C').(A + B' + C')] Is this the correct method and is this the answer? I took the compliment because of the fact that Sum of products are 1's and Products of sums are 0's
As @coffeemath noted, your answer is actually the complement of the original expression. So this method is not correct. I think the easiest way is to use the identities $(x + y)' = x'y'$ and $(xy)' = x' + y'$ (known as De Morgan's laws) and $x'' = x$. We have $$AB' + A'B = ((AB')'(A'B)')' = ((A'+B)(A+B'))'=$$ $$=(A'A + A'B' + AB + BB')' = (AB + A'B')' = (AB)'(A'B')' = (A'+B')(A+B).$$ Hence the original expression equals $$ (AB'+A'B)C = (A'+B')(A+B)C.$$ Alternatively, if you're familiar with how to get a SOP or a POS from the truth table, then you can just construct the truth table for the function $f(A, B)$ realized by the expression $AB' + A'B$ and construct a POS directly from it. In our case the truth table looks like this: $$\begin{array}{|c|c|c|} \hline A & B & f(A, B) \\ \hline 0 & 0 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ \hline \end{array}$$ To construct the POS representation of $f(A, B)$ you should look at pairs $(a, b)$ with $f(a, b) = 0$. With every such pair you associate a sum $A^\sigma + B^\tau$, where $A^\sigma = A$ if $a = 0$ and $A^\sigma = A'$ if $a = 1$, and same for $B^\tau$. After that you just construct the product of all these sums. In our case we have two pairs $(0, 0)$ and $(1, 1)$. Corresponding sums are $A + B$ and $A' + B'$. Finally, we have the POS $(A'+B')(A+B)$.
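A quick truth-table check that the POS form agrees with the original expression over all eight assignments (a small sketch; lowercase names mirror $A,B,C$):

```python
from itertools import product

lhs = lambda a, b, c: ((a and not b) or (not a and b)) and c  # (AB' + A'B)C
rhs = lambda a, b, c: (not a or not b) and (a or b) and c     # (A'+B')(A+B)C
print(all(lhs(a, b, c) == rhs(a, b, c)
          for a, b, c in product((False, True), repeat=3)))   # True
```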
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can we find $x_{1}, x_{2}, ..., x_{n}$? Consider this. $$x_{1}+x_{2}+x_{3}+....+x_{n}=a_{1}$$ $$x_{1}^2+x_{2}^2+x_{3}^2+....+x_{n}^2=a_{2}$$ $$x_{1}^4+x_{2}^4+x_{3}^4+....+x_{n}^4=a_{3}$$ $$x_{1}^8+x_{2}^8+x_{3}^8+....+x_{n}^8=a_{4}$$ $$.............................$$ $$x_{1}^{2^{n-1}}+x_{2}^{2^{n-1}}+x_{3}^{2^{n-1}}+....+x_{n}^{2^{n-1}}=a_{n}$$ Can we find $x_{1}, x_{2}, ..., x_{n}$ by knowing $a_{1}, a_{2}, ..., a_{n}$?
If you mean the diophantine-equation tag and the solutions are supposed to be integers, searching will be easy because high powers are spaced far apart. Because of the symmetry you can insist that $x_1 \ge x_2 \ge \dots \ge x_n$. You can focus just on the last equation $x_{1}^{2^{n-1}}+x_{2}^{2^{n-1}}+x_{3}^{2^{n-1}}+....+x_{n}^{2^{n-1}}=a_{n}$. We have $x_{1}^{2^{n-1}} \le a_n \le nx_{1}^{2^{n-1}}$, or $$\left(\frac{a_n}{n}\right)^{2^{1-n}} \le x_1 \le a_n^{2^{1-n}}.$$ Since $n^{2^{1-n}}$ is just a bit greater than $1$, this is a tight bound. For example, let $n=5$ and the final equation be $x_{1}^{16}+x_{2}^{16}+x_{3}^{16}+x_{4}^{16}+x_{5}^{16}=46512832447930819.$ We get $9.955 \le x_1 \lt 11.008$, so $x_1$ must be $10$ or $11$. If we take $x_1=10$, then $9.94 \lt x_2$, so $x_2=10$ as well. It turns out we get driven to $10,10,10,10,10$, which is too large, or $10,10,10,10,9$, which is too small. If we take $x_1=11$ things work better (because I made them). We get $7.66 \lt x_2 \lt 8.35$, so $x_2=8$, then $7.47 \lt x_3 \lt 8.002,$ so $x_3=8$ and we find $x_4=5, x_5=3$ works.
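A quick check of the worked example and of the initial bound on $x_1$ (the tuple below is the one the search arrives at):

```python
a5 = 46512832447930819
x = (11, 8, 8, 5, 3)                  # the tuple the search arrives at
print(sum(v ** 16 for v in x) == a5)  # True

n = 5
print((a5 / n) ** (1 / 16), a5 ** (1 / 16))  # ~9.955 .. ~11.008: x1 is 10 or 11
```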
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many combinations for a 5 digit code using 3 numbers. Can anyone please help here? I have inherited a strange looking safe with only numbers 1 2 and 3. The code to open it is 5 digits and the code uses all three numbers at least once. Is there some formula I can apply to list all the combinations? Thanks
Much simpler is to just [choose the numbers] $\times$ [permute them]. 3-1-1 of a kind, e.g. 22231: $\binom{3}{1,2}\times\frac{5!}{3!} = 60$. 2-2-1 of a kind, e.g. 22113: $\binom{3}{2,1}\times\frac{5!}{2!2!} = 90$. Yielding the answer of $60+90=150$.
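A brute-force count as a sanity check:

```python
from itertools import product

codes = [w for w in product("123", repeat=5) if set(w) == {"1", "2", "3"}]
print(len(codes))  # 150, matching 60 + 90
```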
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Example of using Delta Method Let $\hat p$ be the proportion of successes in $n$ independent Bernoulli trials each having probability $p$ of success. (a) Compute the expectation of $\hat p (1-\hat p)$. (b) Compute the approximate mean and variance of $\hat p (1-\hat p)$ using the Delta Method. For part (a), I can calculate the expectation of $\hat p$ but got stuck on the expectation of $\hat p^2$, though I doubt this is a right track. For part (b), I'm new to Delta method and as for the approximate variance I know I need to calculate the derivative of the function $\hat p (1-\hat p)$, but not quite sure. I appreciate if anyone can provide some answers or some similar examples online.
Let $k$ denote the number of successes in the $n$ independent Bernoulli trials; then $k \sim B(n,p)$, and $$ E(\hat{p})=E\left(\frac{k}{n}\right)=\frac{E(k)}{n}=\frac{np}{n}=p, $$ $$\operatorname{Var}(\hat{p})=\operatorname{Var}\left(\frac{k}{n}\right)=\frac{\operatorname{Var}(k)}{n^{2}}=\frac{np(1-p)}{n^{2}}=\frac{p(1-p)}{n}.$$ Thus $$E(\hat{p}^{2})=\operatorname{Var}(\hat{p})+ E(\hat{p})^{2}=\frac{p-(1-n)p^{2}}{n},$$ and therefore $$E\big(\hat p(1-\hat p)\big)=E(\hat p)-E(\hat p^{2})=p-\frac{p+(n-1)p^{2}}{n}=\frac{(n-1)\,p(1-p)}{n}.$$ This answers your first question. For the second part, write $g(t)=t(1-t)$, so $g'(t)=1-2t$. To first order the Delta Method approximates the mean by $E\big(\hat p(1-\hat p)\big)\approx g(p)=p(1-p)$, and the variance by $$ \operatorname{Var}\big(\hat{p}(1-\hat{p})\big)\approx g'(p)^{2}\operatorname{Var}(\hat{p})=(1-2p)^{2}\,\frac{p(1-p)}{n};$$ plug in the estimate $\hat{p}$ for $p$ to evaluate it.
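A Monte Carlo sanity check of the variance approximation (arbitrary $n$, $p$ and replication count; the derivative is evaluated at the true $p$ here):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, reps = 200, 0.3, 200_000
phat = rng.binomial(n, p, size=reps) / n
g = phat * (1 - phat)

print(g.var())                             # simulated variance
print((1 - 2 * p) ** 2 * p * (1 - p) / n)  # Delta-Method approximation
```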
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Valid method to obtain a basis of a topological subspace? Let $(X,\tau)$ be a topological space and $Y \subset X.$ We know that if $\mathcal{B}$ is a basis for $\tau$ and $\tau_{\small{Y}}$ is the subspace topology on $Y$, then we can obtain a basis for $\tau_{\small{Y}}$ by taking the collection $\mathcal{B}_Y$ of intersections $Y \bigcap B$ as $B$ ranges over all the sets in $\mathcal{B}$. I was wondering if as another genereal method to obtain a basis for $\tau_{\small{Y}}$ by taking the collection $\mathcal{S}_Y$ of all the sets $B$ in $\mathcal{B}$ such that $B \subset Y$? If yes, will we always have that $\mathcal{B}_Y=\mathcal{S}_Y$? As an example I took the interval $[1,2]$ with the induced Euclidean topology and it seems to work.
No. Take a line in $\mathbb{R}^2$: no basic open set of $\mathbb{R}^2$ is contained in the line, so $\mathcal{S}_Y$ would be empty. Also, your case $[1,2]$ doesn't seem to work for me: no open interval containing $1$ (or $2$) is a subset of $[1,2]$, so neither $1$ nor $2$ would have a basic neighbourhood in $\mathcal{S}_Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Real analysis supremum proof Let $A$ be a non-empty bounded sub-set of $\mathbb{R}$. Let $B\subset\mathbb{R}$, given by $$B=\left\{\frac{a_1+2a_2}{2} \,\Bigg|\,a_1,a_2\in A\right\}$$ Express $\sup B$ in terms of $\sup A$. My attempt: Suppose $a_1,a_2\in A$ and $b\in B$. Then $a_1 \leq \sup (A)$ and $a_2\leq \sup (A)$ So $a_1 + 2a_2\leq 3sup (A)$. This gives $\frac{a_1 + 2a_2}{2}\leq \frac{3\sup (A)}{2}$. This means that $\frac{3\sup (A)}{2}$ is an upperbound and $\sup(B)\leq\frac{3\sup (A)}{2}$. Now let $\epsilon>0$. $a_1>\sup(A) - \frac{2\epsilon}{3}$ $a_2>\sup(A) - \frac{\epsilon}{3}$ This gives $\frac{a_1 + 2a_2}{2}> \frac{3\sup (A)}{2}-\epsilon$. So this means that $\frac{3\sup(A)}{2}\leq\sup(B)$ So $\sup(B)=\frac{3\sup(A)}{2}$. Is this correct? And how can I improve my proofs?
There are lemmas that would be helpful: $$\sup(A+B) = \sup(A) + \sup(B)$$ $$\sup(cA) = c\sup(A), \quad c\geq 0$$ where $A, B$ are non-empty bounded subsets of $\Bbb R$, $A + B = \{a + b \mid a \in A,\ b \in B\}$, $cA = \{ca \mid a \in A\}$ and $c$ is a non-negative real number. Then your question can be answered: since $B = \frac12(A + 2A)$ as a set sum, $$\sup(B) = \tfrac12\big(\sup(A) + 2\sup(A)\big) = \tfrac{3\sup(A)}{2}.$$
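A quick numerical illustration with a finite set, where sup is just max (sample values are arbitrary):

```python
import random

random.seed(0)
A = [random.uniform(-5, 5) for _ in range(50)]
B = [(a1 + 2 * a2) / 2 for a1 in A for a2 in A]
print(max(B), 1.5 * max(A))  # equal: for finite sets, sup is just max
```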
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
solving $\int \cos^2x \sin2x dx$ $$\int \cos^2x \sin2x dx$$ $$\int \cos^2x \sin2x \, dx=\int \left(\frac{1}{2} +\frac{\cos2x}{2} \right) \sin2x \, dx$$ $u=\sin2x$ $du=2\cos2x\,dx$ $$\int \left(\frac{1}{2} +\frac{du}{4}\right)u \, du$$ Is the last step is ok?
Using the substitution you proposed $u\leadsto\sin2x$ one would get $$\begin{align} \int\cos^2x\sin2x\,\mathrm dx&=\int\left(\dfrac12+\dfrac{\cos 2x}2\right)\sin 2x\,\mathrm dx\\ &=\int\dfrac{\sin 2x}2\,\mathrm dx+\int\dfrac{\sin 2x}{4}\underbrace{{2\cos 2x}\,\mathrm dx}_{\displaystyle\mathrm du}\\ &=\int\dfrac{\sin 2x}{2}\,\mathrm dx+\int\dfrac{u}{4}\,\mathrm du, \end{align}$$ which is the correct form.
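Finishing the two pieces gives $-\frac{\cos 2x}{4}+\frac{\sin^{2}2x}{8}+C$ (equivalent, up to a constant, to $-\frac{\cos^{4}x}{2}+C$). A quick numerical check of this antiderivative:

```python
import numpy as np

def F(x):  # result of finishing the two integrals above
    return -np.cos(2 * x) / 4 + np.sin(2 * x) ** 2 / 8

x = np.linspace(0.0, 3.0, 7)
h = 1e-6
deriv = (F(x + h) - F(x - h)) / (2 * h)
print(np.allclose(deriv, np.cos(x) ** 2 * np.sin(2 * x)))  # True
```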
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 2 }
Determining linear independence in $\mathbb{R}^3$ Let $\{\xi_k\}_{k=1}^4$ be a set of vectors in $\mathbb{R}^3$. If $\{\xi_1, \xi_2\}$ and $\{\xi_2, \xi_3, \xi_4\}$ are independent sets, and $\xi_1$ belongs to the span of $\{\xi_2, \xi_4\}$. Show that $\{\xi_k\}_{k=1}^3$ is linearly independent. Clearly, $\xi_1 = a_1\xi_2 + a_2\xi_4$ where $a_1\neq0$ due to the fact that $\xi_1$ and $\xi_2$ are linearly independent. Hence, $\xi_2 = a_1^{-1}\xi_1-\frac{a_2}{a_1}\xi_4$... How to proceed with the formalization? Because, it is quite clear that $\xi_1$ is linearly independent of $\xi_3$ and we know that $\{\xi_1, \xi_2\}$ is independent as well, so it follows that $\{\xi_k\}_{k=1}^3$ is linearly independent set. Is it rigorous enough? Thank you!
Hint: Start with a linear combination of vectors $1,2,3$ and set it equal to the zero vector. Now write vector $1$ as a linear combination of vectors $2$ and $4$. This gives you a linear combination of vectors $2,3$ and $4$ equal to the zero vector. Now use the independence of $2,3,4$ to find the coefficients.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1598947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }