Doubt in proof of Bertrand's Postulate I studied the proof of Bertrand's Postulate from M. Ram Murty's Problems in Analytic Number Theory and completely understood it. In that book, the statement of Bertrand's Postulate is (1) - for $n$ sufficiently large, there exists a prime between $n$ and $2n$. But while I was looking at the book An Introduction to Sieve Methods and Their Applications, also by Ram Murty, the statement of Bertrand's Postulate is (2) - for every $n \geq 1$, there always exists a prime between $n$ and $2n$. Can someone please tell me how to deduce the second statement from the first, i.e. how to prove that for each $n \geq 1$ there exists a prime between $n$ and $2n$?
Statement 1 probably tells you something like "if $n \geq 750$, then there is a prime between $n$ and $2n$". That's the bound that I was taught, where my proof ultimately hit the inequality $\frac{n \log 4}{3} < (2 + \sqrt{2n}) \log(2n)$ that was required to fail. Now you can obtain statement 2 by checking that there is a prime between $2$ and $4$, between $3$ and $6$, …, between $749$ and $1498$.
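The finite verification suggested above is easy to automate. A small brute-force sketch (function names are my own; "between $n$ and $2n$" is read as $n < p \le 2n$, since for $n=1$ no integer lies strictly between $1$ and $2$ and the statement is saved by $p=2$):

```python
def is_prime(k):
    """Trial-division primality test; fine for the small range we need."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def has_prime_between(n):
    """Is there a prime p with n < p <= 2n?"""
    return any(is_prime(p) for p in range(n + 1, 2 * n + 1))

# Check every n below the 'sufficiently large' threshold of 750.
assert all(has_prime_between(n) for n in range(1, 750))
```

Together with statement (1) for $n \ge 750$, this check covers every $n \ge 1$.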
{ "language": "en", "url": "https://math.stackexchange.com/questions/3509687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is an isolated point in $\mathbb{R}^d$ a limit point? I just read the definition of limit point and isolated point in the book Real Analysis: Measure Theory, Integration, and Hilbert Spaces by E. M. Stein and R. Shakarchi. A point $x\in\mathbb{R}^d$ is a limit point of $E$ if for every $r>0$, the ball $B_{r}(x)$ contains points of $E$. (Does it mean $B_{r}(x)\cap E \neq \emptyset$?) An isolated point of $E$ is a point $x\in E$ s.t. there exists an $r>0$ where $B_{r}(x)\cap E$ is equal to $\{x\}$. Since the definition of limit point here does not require that $B_{r}(x)$ contain points of $E\backslash\{x\}$, an isolated point of $E$ is a limit point of $E$, since for every $r>0$ the ball $B_{r}(x)$ contains the point $x\in E$. I was confused about that. (Or do the "points" in the definition of limit point mean at least two different points?)
The way these definitions are written here, yes, it looks like isolated points become limit points. This is, however, not the conventional definition of limit points. We usually require that $B_r(x)$ contains points of $E$ in addition to $x$ itself. With the conventional definition, isolated points are not limit points. In fact, the isolated points of $E$ are exactly the points of $E$ that are not limit points (note that also points outside $E$ can be limit points of $E$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3509776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
About the order of a p-group I'm trying to show that if $G$ is a group, then $|G| = p^2 \Rightarrow G$ is abelian. The path I'm taking relies on supposing that $|Z(G)| = p$ and forming the quotient group $\overline{G} = G/Z(G)$. Then $\overline{G}$ is a cyclic group, because it has order $p$: $\overline{G}=\langle a\,Z(G)\rangle, \ a \in G$. From there, the texts I saw say that I can assume every element of $G$ is of the form $(a^n)b$, with $a \in G$ and $b \in Z(G)$. Why can I assume this? Thank you
It's well known that the center of a $p$-group is nontrivial. This can be seen by looking at the class equation. The rest goes through without a hitch.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3509896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Show that $(1+\frac 1n)^{n^2} \mathrm e^{-n}$ is not a zero sequence, without logarithms How can I show that $z_n = \left(1+\dfrac 1n \right)^{n^2} \mathrm e^{-n}$ is not a zero sequence? WolframAlpha says it converges to $\frac{1}{\sqrt{e}}$, but how can I prove that? I'd appreciate a solution without logarithms.
We will first argue that $z_n \ge e^{-1/2}$, by showing that for $x \in [0,1], e^{x - x^2/2} \le 1 + x$. To do this, let $$ f(x) := e^{x - x^2/2} - 1 - x.$$ Note that $$ f'(x) = (1-x) e^{x -x^2/2} - 1, \\ f''(x) = ((1-x)^2 -1)e^{x- x^2/2}.$$ The second derivative is non-positive for $x \in [0,1]$, so the derivative is nonincreasing on this domain. Since $f'(0) = 0,$ the derivative is non-positive on $[0,1]$ - i.e., $f$ is nonincreasing on $[0,1]$. Finally, since $f(0) = 0,$ this tells us that for $x \in [0,1], f(x) \le 0 \iff e^{x- x^2/2} \le 1 + x$. Using this for $x = 1/n,$ we find that $ e^{1/n - 1/(2n^2)} \le 1 + 1/n$ for $n \ge 1,$ and thus $$(1 + 1/n)^{n^2} e^{-n} \ge (e^{1/n - 1/(2n^2)})^{n^2} e^{-n} = e^{-1/2}.$$ We will now get an upper bound on $z_n$. By truncating the series, we get that $e^{u} \ge 1 + u + u^2/2$ for $u \ge 0$. Now, let $x_n = \sqrt{1+2/n} - 1$. Note that $x_n + x_n^2/2 = 1/n$. So we find that $(1+1/n) \le e^{x_n},$ and thus $$ (1 + 1/n)^{n^2} e^{-n} \le e^{n^2 x_n - n}. $$ Since $\exp(\cdot)$ is continuous, if we can argue that $ n^2 x_n - n \to -1/2,$ then we are done by the sandwich theorem. We can show this via a Taylor expansion: $\sqrt{1 + u} = 1 + u/2 - u^2/8 + O(u^3).$ Thus, $$ n^2 x_n - n = n^2 \left( 1 + \frac{(2/n)}{2} - \frac{(4/n^2)}{8} + O(n^{-3}) -1\right) - n = - \frac{1}{2} + O(n^{-1}),$$ and we're done.
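As a numerical sanity check (not part of the proof), the sequence respects the lower bound $e^{-1/2}$ from the first half of the argument and settles at $e^{-1/2}$:

```python
import math

def z(n):
    """z_n = (1 + 1/n)^(n^2) * e^(-n), computed via logs to avoid overflow."""
    return math.exp(n * n * math.log1p(1.0 / n) - n)

# The proved lower bound holds for every n, and z_n approaches e^{-1/2}.
assert all(z(n) >= math.exp(-0.5) - 1e-12 for n in range(1, 200))
assert abs(z(10**6) - math.exp(-0.5)) < 1e-5
```

Using `math.log1p` keeps the computation accurate for large $n$, where $(1+1/n)^{n^2}$ itself would overflow a float.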
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Limit with definite integral Given $$f(x)=\int _0^x \dfrac{\sin t}{t} dt$$ calculate $$\lim _{x \rightarrow 0} \dfrac{2f(x)-f(2x)}{x-f(x)}.$$ I applied L'Hospital's rule so now I have: $$\lim _{x \rightarrow 0} 2\dfrac{\frac{\sin x}{x}-\frac{\sin 2x}{2x}}{1-\frac{\sin x}{x}}$$ Now I don't know how to proceed.
Use Taylor's expansion at order $3$ after L'Hospital: $$ \frac{2\sin x-\sin 2x}{x-\sin x}=\frac{2x-\dfrac{x^3}3-2x+\cfrac{4x^3}3+o(x^3)}{x-x+\cfrac{x^3}6+o(x^3)}=\frac{x^3+o(x^3)}{\cfrac{x^3}6+o(x^3)}=\frac{1+o(1)}{\frac 16+o(1)}\to 6.$$
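So the limit is $6$. Numerically, the expression after L'Hospital indeed approaches $6$ for small $x$ (a sanity check, not a proof):

```python
import math

def ratio(x):
    """(2 sin x - sin 2x) / (x - sin x): the expression after L'Hospital."""
    return (2 * math.sin(x) - math.sin(2 * x)) / (x - math.sin(x))

assert abs(ratio(1e-2) - 6) < 1e-2
assert abs(ratio(1e-3) - 6) < 1e-4
```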
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding a linear recurrence for a sequence which depends on another recurrence I stumbled upon a sequence of numbers $b_n$ such that for all $n\in \mathbb{N}$, $$b_n=\sum_{j=0}^M a_{n,j}$$ where $a_{n,j}$ are given by the recurrence, $$a_{n,j}=\begin{cases} 1 & n=0 &\land &j=0\\ 0 & n=0 &\oplus &j=0\\ \sum_{i=0}^M \lambda_{i,j}\cdot a_{n-1,i} & 0<n\in\mathbb{N} & \land & 0<j\leq M \end{cases}$$ Is it possible to find a linear recurrence for $b_n$ based on that? Hopefully it can be done explicitly.
I don't see any immediate way for this to be rewritten into a linear recurrence. However, we can get a closed form. Let us rewrite $$a_n=\begin{bmatrix}a_{n,0}\\a_{n,1}\\\vdots\\a_{n,M}\end{bmatrix}$$ $$\lambda=\begin{bmatrix}\lambda_{0,0}&\lambda_{1,0}&\cdots&\lambda_{M,0}\\\lambda_{0,1}&\lambda_{1,1}&\cdots&\lambda_{M,1}\\\vdots&\vdots&\ddots&\vdots\\\lambda_{0,M}&\lambda_{1,M}&\cdots&\lambda_{M,M}\end{bmatrix}$$ with $\lambda_{i,0}=0$. We then have $$a_n=\lambda^n\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}$$ and finally, \begin{align}b_n&=\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}^\intercal a_n\\&=\begin{bmatrix}1\\1\\\vdots\\1\end{bmatrix}^\intercal\lambda^n\begin{bmatrix}1\\0\\\vdots\\0\end{bmatrix}\end{align} which can be used to quickly compute $b_n$ for large $n$, among other things.
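A pure-Python sketch of this closed form (the $3\times 3$ example matrix and $M=2$ are made up for illustration; its first row is zero, matching $\lambda_{i,0}=0$). For genuinely large $n$ one would use repeated squaring or diagonalization instead of $n$ matrix-vector products:

```python
def mat_vec(mat, vec):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[i] * vec[i] for i in range(len(vec))) for row in mat]

def b(n, lam, M):
    """b_n = (1,...,1)^T lam^n e_0, computed via n matrix-vector products."""
    a = [1] + [0] * M              # a_0 = (1, 0, ..., 0)
    for _ in range(n):
        a = mat_vec(lam, a)        # a_k = lam a_{k-1}
    return sum(a)

# Toy example with M = 2.
lam = [[0, 0, 0],
       [1, 1, 1],
       [2, 0, 1]]
print([b(n, lam, 2) for n in range(5)])   # -> [1, 3, 5, 7, 9]
```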
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Length of cable parabola with height $\propto$ (horizontal distance from midpoint)² Question: The cable to a certain suspension bridge has the shape of a parabola. The height at a certain point is proportional to the square of the horizontal distance from the midpoint. The dimensions are given in the image. Determine the length of the cable. Attempted solution: I add a few things to it. The height at a certain point is something that I interpret as the $y$-value of a currently unknown function. Once we have this function, we will use the arc length integration to get the length of the cable, either by integrating from one end to the other, or integrate from the middle to the end and double it: $$L = 2 \cdot \int_0^{25} \sqrt{1+ f'(x)^2}$$ The height is proportional to the square of the horizontal distance from the midpoint: $h(x) = kx^2$ How do we determine the $k$ value? Well, there is an initial condition in the image. When the distance is 25 meters from center, the height is $10 + h_0$. Putting this in: $10 + h_0 = 25^2 k \Rightarrow 625k - 10 = h_0$ But this leaves me with one equation and two unknowns and cannot really progress. I think I have not understood the basic setup for the question. What are some productive ways to proceed and finish this question off? The expected answer is: $$5\sqrt{41} + \frac{125}{4}\ln \frac{4 + \sqrt{41}}{5} m$$
Your formula, since $h(0)=h_0$, is $h(x)=kx^2+h_0$. And the $h_0$ plays no role in the length formula, because the derivative will kill it. Then your integral gives the value you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenvectors and Eigenvalues of Shift Matrix $$S:\mathbb{C}^n\rightarrow\mathbb{C}^n, $$ $$S(x_1,x_2,...,x_n)^T = (x_n,x_1,...,x_{n-1})^T.$$ How can the eigenvalues and eigenvectors of S be calculated? I already have the standard matrix of S which is: \begin{bmatrix} 0 & 0 & 0 & \dots & 0 & 1\\ 1 & 0 & 0 & \dots & 0 & 0\\ 0 & 1 & 0 & \dots & 0 & 0\\ 0 & 0 & 1 & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \end{bmatrix}
To find its eigenvalues, note that $\lambda I - S$ is given by the following matrix: \begin{eqnarray} \begin{pmatrix} \lambda & 0 & 0 & \cdots & 0 & -1 \\ -1 & \lambda & 0 &\cdots & 0 & 0 \\ 0 & -1 & \lambda & \cdots & 0 & 0 \\ 0 & 0 & -1 &\cdots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots& \vdots & \vdots \\ 0 & 0 & 0 &\cdots & -1 & \lambda \end{pmatrix} \tag{1}\label{1} \end{eqnarray} To find the determinant of (\ref{1}), you can use the cofactor procedure as follows. You eliminate the first row and the first column of this matrix and evaluate the determinant of the remaining matrix. Then, you keep the first row eliminated and proceed by eliminating the following columns. This will lead you to: \begin{eqnarray} \det(\lambda I - S)= \lambda \det\begin{pmatrix} \lambda & 0 & \cdots & 0 & 0\\ -1 & \lambda & \cdots & 0 & 0 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & -1 & \lambda \end{pmatrix} +(-1)(-1)^{1+n}\det\begin{pmatrix} -1 & \lambda & 0 & \cdots & 0 \\ 0 & -1 & \lambda & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & -1 \end{pmatrix} \end{eqnarray} But these determinants are easy to calculate because both matrices are triangular, so you only have to multiply the main diagonal elements of each matrix. Thus: $$\det(\lambda I - S) = \lambda^{n}+(-1)^{n}(-1)^{n-1} = \lambda^{n}-1$$ In other words, the eigenvalues of $S$ are the $n$ (complex) roots of $1$. Once you have the eigenvalues, the eigenvectors follow from direct calculations. Remark: The $n$ factors are due to the fact that I'm assuming $S$ is $n\times n$.
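Not in the original answer, but the conclusion is easy to verify directly: for each $n$-th root of unity $\lambda$, the vector with entries $v_j = \lambda^{-(j-1)}$ satisfies $Sv = \lambda v$. A small numerical check for $n = 5$:

```python
import cmath

def shift(v):
    """S(x_1, ..., x_n) = (x_n, x_1, ..., x_{n-1})."""
    return [v[-1]] + v[:-1]

n = 5
for k in range(n):
    lam = cmath.exp(2j * cmath.pi * k / n)   # an n-th root of unity
    v = [lam ** (-j) for j in range(n)]      # candidate eigenvector
    Sv = shift(v)
    assert all(abs(Sv[j] - lam * v[j]) < 1e-12 for j in range(n))
```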
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Homotopy of functions of $S^1$ and $\mathbb{R}P^2$ So I was doing an exercise asking whether, given two functions $g \colon S^1 \rightarrow \mathbb{R}P^2$ and $f\colon\mathbb{R}P^2 \rightarrow S^1$, the composite $f \circ g$ is homotopic to the identity on $S^1$. I think this is false, and my argument is as follows. Let's suppose it is true. We know that the homomorphism of fundamental groups induced by $f\circ g$ is the trivial one, since it factors through a homomorphism from $\mathbb{Z}_2$ to $\mathbb{Z}$. So if we have a loop $\gamma$ in the fundamental group of $S^1$, then $f\circ g\circ \gamma \sim e$, where $e$ is the identity. Since we are assuming that $f\circ g$ is homotopic to the identity function, we can use this homotopy to show that $\gamma \sim f\circ g \circ \gamma$, but by what we showed before this would imply that $\gamma \sim e$ for all elements of the fundamental group of the circle, which is a contradiction. So my question is: is this argument correct?
I think your argument is correct, but it can be expressed more succinctly using the functoriality of $\pi_1$. You can show that any continuous composition $S^1 \stackrel{g}{\to} \mathbb{R}P^2 \stackrel{f}{\to} S^1$ is null-homotopic by considering the composition $\pi_1(S^1) \stackrel{\pi_1(g)}{\to} \pi_1(\mathbb{R}P^2) \stackrel{\pi_1(f)}{\to} \pi_1(S^1)$, which must be $0$ since there are no non-zero homomorphisms $\mathbb{Z}/2 \to \mathbb{Z}$. Your arguments about homotopies are already encapsulated in the definition and basic properties of $\pi_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Determinant of a particular type of matrix I was doing a problem and found that if I could get the determinant of this matrix, it would make the solution easier. Eventually, I gave up and solved it another way. I am still curious as to how I would go about calculating the determinant of this $n \times n$ matrix: $$A=\begin{bmatrix} a & 0 & \ldots & 0 & -a\\ 0 & a & \ldots & 0 & -a\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 &0 & \ldots & a & -a \\-a & -a & \ldots & -a & b\end{bmatrix}$$
\begin{eqnarray*} A=\begin{bmatrix} a & 0 & \ldots & 0 & -a\\ 0 & a & \ldots & 0 & -a\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 &0 & \ldots & a & -a \\ -a & -a & \ldots & -a & b \end{bmatrix} \end{eqnarray*} Adding each of the first $n-1$ columns to the last column does not change the determinant and gives \begin{eqnarray*} \begin{bmatrix} a & 0 & \ldots & 0 & 0\\ 0 & a & \ldots & 0 & 0\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 &0 & \ldots & a & 0 \\ -a & -a & \ldots & -a & b-(n-1)a \end{bmatrix} \end{eqnarray*} This matrix is lower triangular, so the determinant is $a^{n-1}(b-(n-1)a)$.
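For what it's worth, a brute-force cofactor expansion (function names are my own) confirms the formula for small $n$:

```python
def det(m):
    """Determinant by Laplace expansion along the first row (fine for small n)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def build(n, a, b):
    """The n x n matrix A from the question."""
    m = [[a if i == j else 0 for j in range(n - 1)] + [-a] for i in range(n - 1)]
    m.append([-a] * (n - 1) + [b])
    return m

a, b = 3, 7
for n in range(2, 6):
    assert det(build(n, a, b)) == a ** (n - 1) * (b - (n - 1) * a)
```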
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What assumption on $x$ is needed for $x x^\top$ to be positive definite? Consider $M = x x^\top$, $x \in \mathbb{R}^n$ For $n = 1$: $M = x^2$, then $M$ is PD if $x \neq 0$. For $n = 2$, $M = \begin{bmatrix} x_1^2 & x_1x_2 \\ x_2x_1 & x_2^2 \end{bmatrix}$, which is PD when $x_1 \neq x_2$ and $x_1 \neq 0 \wedge x_2 \neq 0$ Is there a pattern to the positive definiteness or lack thereof of this matrix?
Observe that for any vector $v\in\mathbb R^n$ we have that $$ v^TMv=(v^Tx)(x^Tv)=\langle v,x\rangle^2. $$ For $M$ to be positive definite, we need this to be strictly greater than zero for all non-zero $v$. But this is impossible if $n> 1$, since the orthogonal complement of the subspace spanned by $x$ is then non-trivial (it contains non-zero vectors). In particular, your $n=2$ example is never positive definite, since $$ M\begin{pmatrix}x_2\\-x_1\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix}. $$ Another explanation: The matrix $xx^T$ has rank at most $1$ (by the rank inequality mentioned here), which implies that it has determinant zero when $n>1$ (since non-zero determinant implies that the rank equals $n$). And a positive definite matrix cannot have determinant zero. A third explanation: By applying a linear transformation to $x$ such that it becomes a standard basis vector, you obtain a similarity transformation of $xx^T$ into a matrix that has at most one non-zero entry.
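A quick numerical illustration of the first explanation (names are my own): a vector orthogonal to $x$ is annihilated by $M = xx^\top$, while $v^\top M v = \langle v, x\rangle^2 \ge 0$ for every $v$, so $M$ is positive semidefinite but never positive definite for $n > 1$:

```python
def outer(x):
    """M = x x^T as a list of rows."""
    return [[xi * xj for xj in x] for xi in x]

def mat_vec(m, v):
    return [sum(r[i] * v[i] for i in range(len(v))) for r in m]

x = [2.0, -1.0, 3.0]
M = outer(x)
v = [1.0, 2.0, 0.0]                      # <v, x> = 2 - 2 + 0 = 0
assert mat_vec(M, v) == [0.0, 0.0, 0.0]  # v is in the kernel of M

w = [1.0, 1.0, 1.0]                      # a generic vector
wMw = sum(w[i] * mat_vec(M, w)[i] for i in range(3))
assert abs(wMw - sum(w[i] * x[i] for i in range(3)) ** 2) < 1e-12
```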
{ "language": "en", "url": "https://math.stackexchange.com/questions/3510966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do arbitrary union and arbitrary intersection operations commute? Let $X$ be a set, and $\mathcal{S}$ a set of subsets of $X$. Let $\sigma(\mathcal{S})$ be the set of all arbitrary unions of the elements of $\mathcal{S}$, and $\tau(\sigma(\mathcal{S}))$ the set of all arbitrary intersections (empty intersection being $X$) of the elements of $\sigma(\mathcal{S})$. Similarly, let $\tau(\mathcal{S})$ be the set of all arbitrary intersections of the elements of $\mathcal{S}$, and $\sigma(\tau(\mathcal{S}))$ the set of all arbitrary unions of the elements of $\tau(\mathcal{S})$. Then do we have $$\tau(\sigma(\mathcal{S}))=\sigma(\tau(\mathcal{S})) \ ?$$ A couple of observations: * *If $\sigma$ and $\tau$ are finite union and finite intersection operations, respectively, then the equality seems true. *If only $\tau$ is finite intersection operation, then the equality fails.
Consider that for subsets $A_{i,j} \subseteq X$ and index set $J$ (and for each $j \in J$ an index set $I_j$), we can rewrite a member of $\tau(\sigma(\mathcal{S}))$ as: $$\bigcap_{j \in J} \bigcup_{i \in I_j} A_{i,j} = \bigcup_{f \in F} \bigcap_{j \in J} A_{f(j),\,j}$$ where $F$ is the set of all functions $f$ from $J \to \bigcup_{j} I_j$ such that $f(j) \in I_j$ for all $j$ (choice functions); this equality requires AC. This shows that $\tau(\sigma(\mathcal{S})) \subseteq \sigma(\tau(\mathcal{S}))$.
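For a finite family the identity can be checked mechanically (and no choice is needed); a small sketch with `itertools.product` enumerating the choice functions:

```python
from itertools import product

# A[j][i] plays the role of A_{i,j}: for each j in J, a family indexed by I_j.
A = {
    0: [{1, 2}, {3, 4}],          # I_0 = {0, 1}
    1: [{2, 3}, {1, 4}, {5}],     # I_1 = {0, 1, 2}
}
J = sorted(A)

# Left side: intersection over j of (union over i of A_{i,j}).
lhs = set.intersection(*(set.union(*A[j]) for j in J))

# Right side: union over choice functions f of intersection over j of A_{f(j),j}.
rhs = set()
for f in product(*(range(len(A[j])) for j in J)):
    rhs |= set.intersection(*(A[j][f[j]] for j in J))

assert lhs == rhs
```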
{ "language": "en", "url": "https://math.stackexchange.com/questions/3511075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do I rotate a square around a circle? I have * *a circle of radius r *a square of length l The centre of the square is currently rotating around the circle in a path described by a circle of radius $(r + \frac{l}{2})$ However, the square overlaps the circle at e.g. 45°. I do not want the square to overlap with the circle at all, and still smoothly move around the circle. The square must not rotate, and should remain in contact with the circle i.e. the distance of the square from the circle should fluctuate as the square moves around the circle. Is there a formula or algorithm for the path of the square? [edit] Thanks for all your help! I've implemented the movement based on Yves solution with help from Izaak's source code: I drew a diagram to help me visualise the movement as well (the square moves along the red track):
Consider what happens when the square is directly to the right of the circle, with the circle tangent to the midpoint of a side of the square. In order for the square to move up while remaining in contact without crossing into the circle, it has to go straight up. This continues until the lower left vertex is on the circle, after which that vertex moves along the circle from the rightmost point to the topmost point. Then the square has to move straight left until the lower right vertex is on the circle, and then that vertex moves along the circle. Each time a vertex moves along the circle, the center of the square moves along a circular arc whose center is displaced from the center of the displayed circle. As a result, the center of the square moves straight up, then through a quarter circle arc, then straight left for the width of the square, then through another quarter circle, then down for the height of the square, etc. The total length of the path of the square's center until it returns to the start is $2\pi r + 4s,$ where $r$ is the radius of the circle and $s$ is the side of the square. If you can use the distance along the path, not the direction between the centers, as your parameter for the motion of the square, it should make this easier to draw.
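Following this description, the center's path is four straight segments of length $s$ alternating with four quarter arcs of radius $r$ centered at the corner offsets $(\pm s/2, \pm s/2)$. Here is one possible arc-length parametrization of that path (my own sketch of the answer's construction; the starting point, on the right side moving up, is a choice):

```python
import math

def center_path(t, r, s):
    """Center of the (non-rotating) square after arc length t along the path:
    4 straight segments of length s alternating with 4 quarter arcs of radius r
    whose centers sit at the corner offsets (+-s/2, +-s/2)."""
    total = 4 * s + 2 * math.pi * r
    t %= total
    quarter = math.pi * r / 2
    pieces = [
        (s,       'seg', (r + s/2, -s/2),  (0.0, 1.0)),   # up the right side
        (quarter, 'arc', (s/2, s/2),       0.0),          # corner arc, 0..90 deg
        (s,       'seg', (s/2, r + s/2),   (-1.0, 0.0)),  # left along the top
        (quarter, 'arc', (-s/2, s/2),      math.pi / 2),
        (s,       'seg', (-r - s/2, s/2),  (0.0, -1.0)),  # down the left side
        (quarter, 'arc', (-s/2, -s/2),     math.pi),
        (s,       'seg', (-s/2, -r - s/2), (1.0, 0.0)),   # right along the bottom
        (quarter, 'arc', (s/2, -s/2),      3 * math.pi / 2),
    ]
    for length, kind, start, extra in pieces:
        if t <= length:
            if kind == 'seg':
                return (start[0] + extra[0] * t, start[1] + extra[1] * t)
            a = extra + t / r               # extra is the arc's starting angle
            return (start[0] + r * math.cos(a), start[1] + r * math.sin(a))
        t -= length
    return (r + s/2, -s/2)                  # guard against float round-off
```

Each piece ends exactly where the next begins, and the total length is $4s + 2\pi r$ as in the answer.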
{ "language": "en", "url": "https://math.stackexchange.com/questions/3511223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 4, "answer_id": 1 }
Use the dominated convergence theorem to show: $\lim_{\ n \to \infty} \| f \ 1_{[n,n+1]} \|_1 = 0$ Consider the Lebesgue measure $\lambda$ on $\mathbb{R}$. Given that $f: \mathbb{R} \to \mathbb{R}$ is integrable, I would like to show the following using the dominated convergence theorem: $$\lim_{\ n \to \infty} \| f \ 1_{[n,n+1]} \|_1 = 0$$ I started as follows: $$\| f \ 1_{[n,n+1]} \|_1 = \int_\mathbb{R} |f \ 1_{[n,n+1]}| \, d\lambda$$ The sequence $f_n = |f \ 1_{[n,n+1]}|$ is dominated by $|f|$, because $|f \ 1_{[n,n+1]}| \leq |f|$ for all $x \in \mathbb{R}$, and $|f|$ is integrable because $f$ is. We can apply dominated convergence: $$\begin{align} \lim_{\ n \to \infty} \| f \ 1_{[n,n+1]} \|_1 &=\lim_{\ n \to \infty}\int_\mathbb{R} |f \ 1_{[n,n+1]}| \, d\lambda \\ &= \int_\mathbb{R} \lim_{\ n \to \infty}|f \ 1_{[n,n+1]}| \, d\lambda \\ &= \int_\mathbb{R} 0 \, d\lambda \\ &= 0 \end{align}$$ But I feel like I am doing something wrong.
$$ \int\limits_{\mathbb R} f\,d\lambda = \sum_{n\,\in\,\mathbb Z} \,\,\, \int\limits_{\mathbb R} f\mathbf 1_{[n,n+1]} \, d\lambda $$ The series converges absolutely because $\|f\|_1<\infty.$ The terms of a convergent series approach $0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3511365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Maximum number of students such that they all have at least 3 out of 6 different answers. A teacher made a test for his mathematics class with 6 true or false questions. When he received the tests, he noticed that any pair of students had at least three different answers. Since all the students answered every question, what's the maximum number of students in the class? Here is my take. $2^6=64$ different tests. For one of those tests, there are $^6C_3+^6C_4+^6C_5+^6C_6=42$ tests which have 3 or more different answers from it. $42/63 = 2/3$, and if the ratio stands for successive pairs of tests, $(3/2)^{10}<63<(3/2)^{11}$, then 10 students. This argument is clearly fallacious and unfounded, but I found no better.
To see that $9$ is impossible: Note that if you only had $4$ questions you could have no more than $2$ students whose answers pairwise differed in at least three questions. Indeed, pick one of the students. Relabeling if necessary, we can assume that this student chose $TTTT$. Any other student would need to have chosen at least three $F$'s, but then any third student, who would also have chosen at least three $F$'s, would have to match the second in at least two places. Now, take your proposed group of $9$ students. Since there are exactly $4$ ways to respond to the first two questions, there must be at least $3$ students who match on the first two. By the preceding comment, however, this is not possible.
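The key lemma here (with $4$ questions, no $3$ answer sheets can be pairwise at Hamming distance $\ge 3$) is small enough to verify by exhaustive search:

```python
from itertools import combinations, product

def dist(u, v):
    """Number of questions on which two answer sheets differ (Hamming distance)."""
    return sum(a != b for a, b in zip(u, v))

sheets = list(product([True, False], repeat=4))   # all 16 possible answer sheets

# No triple of sheets is pairwise at distance >= 3 ...
assert not any(all(dist(u, v) >= 3 for u, v in combinations(trio, 2))
               for trio in combinations(sheets, 3))
# ... but pairs at distance >= 3 do exist, so the lemma's bound of 2 is attained.
assert any(dist(u, v) >= 3 for u, v in combinations(sheets, 2))
```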
{ "language": "en", "url": "https://math.stackexchange.com/questions/3511773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show: For every $\alpha \in \mathbb{Q}$, the polynomial $X^2+\alpha$ has infinitely many different roots in $\mathbb{Q}^{2\times2}$. Show that for every $\alpha \in \mathbb{Q}$, the polynomial $X^2+\alpha$ has infinitely many different roots in $\mathbb{Q}^{2\times2}$. What I did: Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathbb{Q}^{2\times2}, f=X^2+\alpha \in \mathbb{Q}[X].$ Consider $$f(A) = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \cdot \begin{pmatrix} a & b \\ c & d \end{pmatrix}+\begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix} = \\ \begin{pmatrix} a^2+bc & ab+bd \\ ac+cd & bc+d^2 \end{pmatrix} + \begin{pmatrix} \alpha & 0 \\ 0 & \alpha \end{pmatrix} = \\\begin{pmatrix} a^2+bc + \alpha & ab+bd \\ ac+cd & bc+d^2 + \alpha \end{pmatrix}.$$ How do I get to the claim from here on? Thanks in advance!
HINT: What happens if $a=-d$ and $bc=-a^2-\alpha$?
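To see where the hint leads (a sketch of the resulting family of roots, not part of the hint itself): with $d = -a$ and $bc = -a^2 - \alpha$, one checks $A^2 = -\alpha I$, giving one root for every choice of $a \in \mathbb{Q}$. Exact rational arithmetic makes the verification clean:

```python
from fractions import Fraction as F

def check_root(a, b, alpha):
    """With d = -a and c chosen so that bc = -a^2 - alpha, verify A^2 = -alpha*I."""
    c = (-a * a - alpha) / b
    A = [[a, b], [c, -a]]
    A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    return A2 == [[-alpha, 0], [0, -alpha]]

alpha = F(5)
# One root of X^2 + alpha for every rational a: infinitely many roots.
assert all(check_root(F(a), F(1), alpha) for a in range(-10, 11))
```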
{ "language": "en", "url": "https://math.stackexchange.com/questions/3511852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Calculating Volume of the Submerged Portion of a Rotated Rectangular Prism I'm trying to simulate buoyancy forces on rectangular prism of arbitrary orientation. Suppose that the prism has size $L\times W\times H$ and an Euler angle of $(\phi, \theta, \psi)$. Furthermore, as a rudimentary approximation, the water can be represented as a waveless horizontal plane that slices the prism at some height $h$. A rough diagram is shown below, with the volume of interest highlighted in yellow: Is there a way to calculate this volume in general?
Answer: $V = LW(d_y + H/2)$, where $d_y$ is the $y$-coordinate at which the surface of the water hits the $y$-axis. If $\theta$ is the angle that the $y$-axis of the prism makes with the vector pointing straight up, then $d_y = h/\cos \theta$. Note: This formula fails when the prism lies such that the $y$-axis is parallel to the surface of the water, in which case the $y$-axis intersection is non-unique. In this exceptional case, you could use one of the other axes to get the formulas $$ V = (d_z + L/2)WH, \qquad V = L(d_x + W/2)H $$ This formula also assumes that the entire "bottom" of the prism (relative to your chosen axis) is submerged. Derivation: I assume that your origin is taken to be in the center of the prism. Here's an idea. Let $\mathbf n = (n_1,n_2,n_3)$ denote the normal vector to the surface of the water (relative to your intrinsic coordinate system of the prism). We ultimately won't need it for the final answer, but if you're interested, this $\mathbf n$ can be calculated from your Euler angles as the second column of the rotation matrix corresponding to your Euler angles. Let $d$ denote the $y$-coordinate at which the surface of the water hits the $y$-axis (the $d_y$ from above). The equation of the surface of the water has the form $$ n_1 x + n_2 (y - d) + n_3 z = 0 \implies \\ n_1 x + n_2 y + n_3 z = n_2 d \implies\\ y = d - \frac{n_1}{n_2}x - \frac {n_3}{n_2}z \implies\\ y = d - ax - bz $$ where $a = n_1/n_2$ and $b = n_3/n_2$. With that, your volume can be expressed as the triple integral $$ V = \int_{-W/2}^{W/2} \int_{-L/2}^{L/2} [(d - ax - bz) + H/2]\,dz\,dx\\ = \int_{-W/2}^{W/2} \int_{-L/2}^{L/2} (d+ H/2)\,dz\,dx = LW(d + H/2), $$ where the $ax$ and $bz$ terms drop out because they are odd functions integrated over symmetric limits. I suspect that this calculation can be avoided by some symmetry argument. In any case, the volume in question will be $LW(d + H/2)$, so all you need to do is find that $y$-coordinate $d$.
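Since the integrand is linear in $x$ and $z$, a midpoint Riemann sum reproduces the closed form up to floating-point error, which makes the formula easy to sanity-check (the tilt coefficients $a$, $b$ below are arbitrary made-up values):

```python
def volume_numeric(L, W, H, d, a, b, N=64):
    """Midpoint Riemann sum of (d - a*x - b*z + H/2) over [-W/2,W/2] x [-L/2,L/2]."""
    dx, dz = W / N, L / N
    total = 0.0
    for i in range(N):
        x = -W / 2 + (i + 0.5) * dx
        for j in range(N):
            z = -L / 2 + (j + 0.5) * dz
            total += (d - a * x - b * z + H / 2) * dx * dz
    return total

L, W, H, d = 4.0, 3.0, 2.0, 0.25
assert abs(volume_numeric(L, W, H, d, a=0.3, b=-0.7) - L * W * (d + H / 2)) < 1e-9
```

Note the result is independent of $a$ and $b$, exactly as the symmetry cancellation in the derivation predicts.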
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that: $\lim_{n\rightarrow\infty}e^{\frac{\log x}{\log\log xn-\log\log n}-\log\left(n\right)}=\sqrt{x}$. I want to show that: $$\lim_{n\rightarrow\infty}e^{\frac{\log x}{\log\log xn-\log\log n}-\log\left(n\right)}=\sqrt{x}$$ I looked it up on Wolfram Alpha, and it says: $$\lim_{n\rightarrow\infty}e^{\frac{\log x}{\log\log xn-\log\log n}-\log\left(n\right)}=1$$ I got confused because it didn't match my computation results, suggesting that: $$\lim_{n\rightarrow\infty}e^{\frac{\log x}{\log\log xn-\log\log n}-\log\left(n\right)}=\sqrt{x}$$ However, I did try WA for some values of $x$, and it gave the right value: $$\lim_{n\rightarrow\infty}e^{\frac{\log2}{\log\log2n-\log\log n}-\log\left(n\right)}=\sqrt{2}$$ $$\lim_{n\rightarrow\infty}e^{\frac{\log7}{\log\log7n-\log\log n}-\log\left(n\right)}=\sqrt{7}$$ $$\lim_{n\rightarrow\infty}e^{\frac{\log31}{\log\log31n-\log\log n}-\log\left(n\right)}=\sqrt{31}$$ What is going on here? And how can I show the limit is $\sqrt{x}$?
Yes, you are correct. If $x>0$ then the limit is $\sqrt{x}$. Note that as $n\to +\infty$, \begin{align}\log(\log(nx))&=\log\left(\log(n)\left(1+\frac{\log(x)}{\log(n)}\right)\right)\\ &=\log(\log(n))+\frac{\log(x)}{\log(n)}-\frac{1}{2}\frac{\log^2(x)}{\log^2(n)}+o(1/\log^2(n)). \end{align} Hence \begin{align}\frac{\log(x)}{\log(\log(nx))-\log(\log(n))}-\log\left(n\right) &= \frac{\log(n)}{1-\frac{1}{2}\frac{\log(x)}{\log(n)}+o(1/\log(n))}-\log\left(n\right)\\&=\log(n)\left(1+\frac{1}{2}\frac{\log(x)}{\log(n)}+o(1/\log(n))\right)-\log\left(n\right)\\ &=\log(\sqrt{x})+o(1) \end{align} and the result follows.
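A numerical check of this conclusion (convergence is slow, of order $1/\log n$, so $n$ must be huge and the tolerance generous):

```python
import math

def g(x, n):
    """exp(log x / (log log(xn) - log log n) - log n)."""
    return math.exp(math.log(x) / (math.log(math.log(x * n)) - math.log(math.log(n)))
                    - math.log(n))

for x in (2.0, 7.0, 31.0):
    assert abs(g(x, 1e12) / math.sqrt(x) - 1) < 0.05
```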
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is it that given $E=\{ 1, \{2,3\}, \{2,4\} \}$ we have $1\in E$ but $2\not\in E$? I am reading "Book of Proof" by Richard Hammack, and it says that given the set $E=\{ 1, \{2,3\}, \{2,4\} \}$, we have that $2\not\in E$. I do not understand why this is so. I understand that the set $E$ is a collection of the elements "$1$", "$\{2,3\}$" and "$\{2,4\}$". But I do not understand why I could not go further and deduce that since $2\in\{2,3\}$ and $\{2,3\}$ is an element of $E$, then $2$ must be an element of $E$. Sorry if this is a very basic question, but I do not even seem to know how to ask for this clarification.
I think this is a deliberately provocative example, chosen by the author to illustrate the thought that the relation $\in$ is 'blind' to whatever structure the elements may have. The elements of a set are just anonymous 'lumps' - labels, if you will - and we don't care about what these labels mean, if indeed they mean anything at all. Surprisingly, this is, in my experience, often one of the more difficult aspects of mathematics: you sometimes have to deliberately ignore any further knowledge and limit yourself to just the particular aspect that you are studying. Set theory is perhaps 'worse' than other areas of maths - a general set has no structure; the only trait that all sets have is cardinality: loosely speaking, how many elements the set contains. From that point of view, the set $\{ apple, pear, orange \}$ is exactly the same as $\{1, \{2,3 \}, \{2,4\} \}$; they are isomorphic, to use the modern (category-theoretical) term, which simply means they are indistinguishable given the tools of, here, set theory.
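Python's sets behave exactly the same way, which makes a nice demonstration (frozensets stand in for the inner sets, since set elements must be hashable):

```python
E = {1, frozenset({2, 3}), frozenset({2, 4})}

assert 1 in E
assert 2 not in E                  # 2 is an element of an element, not of E
assert frozenset({2, 3}) in E      # the *set* {2,3} is an element of E
# Membership does not 'look inside' the elements; you must do that explicitly:
assert any(2 in e for e in E if isinstance(e, frozenset))
```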
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Adjoint(s) to the forgetful functor $U:A/\mathbf{C}\to \mathbf{C}$. I am preparing for my exam in Category Theory, and came across the following exercise in an old exam. Let $\mathbf{C}$ be a category with finite coproducts. For a fixed object $A$, consider the coslice category consisting of objects $f:A\to C$. Morphisms are $\alpha:C\to D$ making the triangle commute. We have to determine whether the forgetful functor $U$ has a left and/or a right adjoint. A (rather unfounded) approach I had in mind for the right adjoint was the functor $F$ which maps an object $C$ to $i_A:A\to A\sqcup C$, where $i_A$ denotes the inclusion map. A morphism $\alpha:C\to D$ is then mapped to the unique $u:A\sqcup C\to A \sqcup D$ which arises, by the universal property of the coproduct, from the maps $i_A:A\to A\sqcup D$ and $i_D\circ \alpha:C\to A \sqcup D$. Since this functor does not preserve the terminal object it can't be the left adjoint. To show it is indeed a right adjoint we need to show the following isomorphism of Hom sets: $$ \hom_{\mathbf{C}}(D,U(f:A\to C))\cong \hom_{A/\mathbf{C}}(i_A:A\to A\sqcup D,f:A\to C) $$ However, I failed to show this and do not have an alternative idea so far. Neither do I have an idea for a possible left adjoint, if it exists. Any kind of help is welcome!
For the fact that it admits a left adjoint your (not unfounded at all) discussion gives you the answer (the only problem is you were trying the wrong side): $$ \text{Hom}_{\mathbf{C}}(D, U(f:A \to C)) \cong \text{Hom}_{A/\mathbf{C}}(i_A : A \to A \sqcup D, f:A\to C). $$ Indeed having a map $g : D \to C$ will give, by the universal property of $A\sqcup D$ and the given data $f:A \to C$, a map $\overline g : A\sqcup D \to C$ that satisfies $\overline g \circ i_A = f$, i.e. $\overline g$ is a morphism in $A/\mathbf C$ from $i_A : A \to A\sqcup D$ to $f:A\to C$. For the right adjoint part, if $U$ admitted a right adjoint, this would mean that $U$ is a left adjoint, and so it should at least preserve the initial object; but the initial object of $A/\mathbf{C}$ is $id_A : A \to A$, and $U(id_A)= A$, which is not a priori the initial object of $\mathbf{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that function is continuous and differentiable at $x=0$ I have the following function: $$ f(x)=\begin{cases} |x|^{m}\sin\left(\frac{1}{|x|^{n}}\right) & x\neq0 \\ 0 & x=0 \end{cases} $$ I'm trying to find for which $n,m$: * *The function $f$ is continuous at $0$. *The function $f$ is differentiable at $0$. *The derivative of $f$ is continuous at $0$. I know that for every $x$ we get $-1\leq sin\leq 1$, but how does it help us? Would be glad to see some guidelines on how to solve it.
* *As you said, $ -1\le\sin(x)\le 1$, hence you can write: $$\Big||x|^m \sin\Big(\frac{1}{|x|^n}\Big)\Big|\le |x|^m $$ So for each $m>0$ the function is continuous. For $m=0$ the sine has no limit, except in the trivial case $n=m=0$, for which our function is just the constant $\sin(1)$ everywhere except at $x=0$. Note that even in this case the function is not continuous. Now, the case $m<0$: if $n>0$, the function oscillates with unbounded amplitude, so the limit does not exist. But if $n<0$, then: $$f(x)=\frac{\sin\Big(|x|^{-n}\Big)}{|x|^{-m}} $$ From the behaviour of $\sin(x)$ near $0$, you know that if $-n>-m$ the sine is "stronger" than the power of $x$, and hence if $m>n$ the limit exists and it's $0$. Finally, for the special case $m=n$ the limit would be $1$, but the function turns out to be discontinuous (since $f(0)=0$). *We use the definition of derivative: $$ \lim_{x\to 0} \frac{f(x)-f(0)}{x-0} = \lim_{x\to 0} \frac{f(x)}{x} = \lim_{x\to 0} |x|^{m-1} \sin\Big(\frac{1}{|x|^n}\Big) $$ Similarly as before, we can evaluate the limit if $m>1$. The value of the derivative at $0$ is hence $0$. *Let's calculate the derivative as usual. For simplicity consider $x> 0$; the case of negative $x$ is the same. It turns out to be: $$ f'(x)= m|x|^{m-1} \sin\Big(\frac{1}{|x|^n}\Big) + |x|^{m} \cos\Big(\frac{1}{|x|^n}\Big)\frac{-n}{|x|^{n+1}} $$ and calculate the limit for $x\to0$. The first piece is defined again if $m>1$ and, in this case, gives us $0$. The second addend is more interesting: it converges to $0$ only if $x$ has a strictly positive exponent, i.e. $ m-n-1>0$, that is $m>n+1$. In this case, the limit of the derivative of $f$ is $0$ and hence $f'$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate $\int_{0}^{\pi/2} (\sin(2x))^5 dx$ I want to calculate the following integral: $$\int_{0}^{\pi/2} (\sin(2x))^5 dx$$I tried integration by parts but it didn't work for me. Suggestions?
$$\dfrac{d(\sin^nax\cos ax)}{dx}=an\sin^{n-1}ax(1-\sin^2ax)-a\sin^{n+1}ax$$ Integrate both sides with respect to $x$: $$\sin^nax\cos ax=anI_{n-1}-a(n+1)I_{n+1}$$ where $$I_m=\int\sin^max\ dx$$ Set $n+1=5,3,1$ one by one
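For readers who want to sanity-check the reduction numerically: carrying the recurrence down gives $\int_0^{\pi/2}\sin^5(2x)\,dx=\tfrac{8}{15}$, which a quick stdlib-only Python sketch (Simpson's rule) confirms:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(lambda x: math.sin(2 * x) ** 5, 0.0, math.pi / 2)
print(val)  # numerically close to 8/15 ≈ 0.5333
```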
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Metric on the embedded hyperboloid in five-dimensional Minkowski space I have an explicit parametrisation of a one-sheeted hyperboloid in the five-dimensional Minkowski space, namely: $$Z_{0}=-l \cdot \cot(\tau)$$ $$Z_{a}=\frac{l}{\sin(\tau)}\omega_{a}, \quad a=1,\ldots,4$$ Where the $\omega_{a}$ represent the coordinates that embed a unit three sphere into $\Bbb R^4$ and satisfy $\omega_{a} \omega_{a}=1$. In the ambient space we have the metric: $$ds^2=-dZ_{0}^2+dZ_{1}^2+dZ_{2}^2+dZ_{3}^2+dZ_{4}^2.$$ Now I am trying to verify that for the coordinates above the metric takes the form: $$ds^2=\frac{l^2}{\sin^2(\tau)}(-d \tau^2 +d\Omega_{3}^2)$$ Where $d\Omega_{3}^2$ denotes the metric of the unit three-sphere. I am extremly thankful for every answer, comment or idea! Thank you all in advance!
Let us set $l=1$. One can reinsert the right power of $l$ at the end, as it is nothing but a scaling constant. Let me first correct that the problem actually concerns the hyperboloid with one sheet $M$ of all unit spacelike vectors in Minkowski $5$-dimensional spacetime, namely the vectors $Z^a$ satisfying $Z^a Z_a = 1$, where I am implicitly using Einstein's summation convention over indices $a$ going from $0$ to $4$, and I have used the Minkowski metric to lower the index of the second $Z$. The problem is essentially a change of coordinate problem. Let us delve into the details. We have $$dZ_0 = - \csc^2(\tau)d\tau,$$ which gives $$dZ_0^2 = \csc^4(\tau)d\tau^2.$$ Similarly, $$dZ_i = -\csc(\tau)\cot(\tau) \omega_i d\tau + \csc(\tau) d\omega_i,$$ which, upon squaring, gives $$dZ_i^2 = \csc^2(\tau)\cot^2(\tau)\omega_i^2d\tau^2 + \csc^2(\tau)d\omega_i^2 - 2 \csc^2(\tau)\cot(\tau) \omega_i d\omega_i d\tau.$$ Note that $\sum_{i=1}^4 \omega_i^2 = 1$, which implies, upon differentiating, that $\sum_{i=1}^4 \omega_i d\omega_i = 0$. This implies that after taking a summation for $i$ going from $1$ to $4$, the last term in the formula for $dZ_i^2$ above vanishes. Hence the metric on the $4$-dimensional hyperboloid with one sheet $M$ becomes in the new coordinates: $$ ds^2 = -(\csc^4(\tau) - \csc^2(\tau)\cot^2(\tau))d\tau^2 + \csc^2(\tau) \sum_{i=1}^4 d\omega_i^2.$$ After factoring out $\csc^2(\tau)$ and using the trigonometric identity $\csc^2(\tau)-\cot^2(\tau) = 1$, one gets the sought-after expression, albeit with $l$ set to $1$ (see my remark at the beginning of this post).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the error calculating the sum of this series? I need to determine if $$\sum_{n=1}^\infty \frac{2^n-1}{4^n}$$ converges, in which case I must also find its sum, or diverges. This is my approach: $i$) $$\frac{2^n-1}{4^n}=\frac{2^n}{4^n}-\frac{1}{4^n}=\frac{2^n}{2^{2n}}-\frac{1}{4^n}=\frac{1}{2^n}-\frac{1}{4^n}$$ Then $$\sum_{n=1}^\infty \frac{2^n-1}{4^n}=\sum_{n=1}^\infty \left(\frac{1}{2^n}-\frac{1}{4^n}\right)=\sum_{n=1}^\infty \frac{1}{2^n}-\sum_{n=1}^\infty \frac{1}{4^n}$$ $$=\sum_{n=1}^\infty \frac{1}{4} \left(\frac{1}{2}\right)^{n-1}-\sum_{n=1}^\infty \frac{1}{16}\left(\frac{1}{4}\right)^{n-1}$$ $ii$) We found our series can be written as the difference of two geometric series with $|r|<1$, and the solution is $$=\sum_{n=1}^\infty \frac{1}{4} \left(\frac{1}{2}\right)^{n-1}-\sum_{n=1}^\infty \frac{1}{16}\left(\frac{1}{4}\right)^{n-1}=\frac{\frac{1}{4}}{1-\frac{1}{2}}-\frac{\frac{1}{16}}{1-\frac{1}{4}}=\frac{5}{12}$$ I have given this series to an online series calculator, and it claims the result is not $\frac{5}{12}$ but $\frac{2}{3}$, and yet I can not find the mistake in my procedure. Can anyone point it out to me? EDIT: Please remember, I'm asking more than "what is the right way to solve this problem"; I'm asking precisely what was the mistake I made. Correlated, but not the same.
$\sum_\limits{n=1}^{\infty} k^n = \frac {k}{1-k}\\ \sum_\limits{n=1}^{\infty} (\frac {1}{2})^n = \frac {\frac 12}{1-\frac 12} = 1\\ \sum_\limits{n=1}^{\infty} (\frac {1}{4})^n = \frac {\frac 14}{1-\frac 14} = \frac {1}{3}\\ \sum_\limits{n=1}^{\infty} (\frac {1}{2})^n - \sum_\limits{n=1}^{\infty} (\frac {1}{4})^n = \frac 23$
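A quick numerical check of these values (partial sums of the two geometric series; the tails are geometric, so a few dozen terms suffice):

```python
# partial sum of sum_{n>=1} (1/2)^n - (1/4)^n
s = sum((1 / 2) ** n - (1 / 4) ** n for n in range(1, 61))
print(s)  # ≈ 0.6666..., i.e. 2/3, not 5/12
```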
{ "language": "en", "url": "https://math.stackexchange.com/questions/3512982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Unexpected appearances of $\pi^2 /~6$. "The number $\frac 16 \pi^2$ turns up surprisingly often and frequently in unexpected places." - Julian Havil, Gamma: Exploring Euler's Constant. It is well-known, especially in 'pop math,' that $$\zeta(2)=\frac1{1^2}+\frac1{2^2}+\frac1{3^2}+\cdots = \frac{\pi^2}{6}.$$ Euler's proof of which is nice. I would like to know where else this constant appears non-trivially. This is a bit broad, so here are the specifics of my question: * *We can fiddle with the zeta function at arbitrary even integer values to eek out a $\zeta(2)$. I would consider these 'appearances' of $\frac 16 \pi^2$ to be redundant and ask that they not be mentioned unless you have some wickedly compelling reason to include it. *By 'non-trivially,' I mean that I do not want converging series, integrals, etc. where it is obvious that $c\pi$ or $c\pi^2$ with $c \in \mathbb{Q}$ can simply be 'factored out' in some way such that it looks like $c\pi^2$ was included after-the-fact so that said series, integral, etc. would equal $\frac 16 \pi^2$. For instance, $\sum \frac{\pi^2}{6\cdot2^n} = \frac 16 \pi^2$, but clearly the appearance of $\frac 16\pi^2$ here is contrived. (But, if you have an answer that seems very interesting but you're unsure if it fits the 'non-trivial' bill, keep in mind that nobody will actually stop you from posting it.) I hope this is specific enough. This was my attempt at formally saying 'I want to see all the interesting ways we can make $\frac 16 \pi^2$.' With all that being said, I will give my favorite example as an answer below! :$)$ There used to be a chunk of text explaining why this question should be reopened here. It was reopened, so I removed it.
Consider the following picture (shown in the original post), centered at the origin of $\mathbf{R}^{2}$: it is a concentric arrangement of discs; each disc has radius $1/n.$ We can think of it as an infinite bulls-eye. The sum of the areas shaded in red is equal to $\frac{1}{2}\pi\zeta(2).$ In particular $$\sum_{k=1}^\infty \int_0^1 \frac{\sin (\pi (2k - 1)/ r)}{2k - 1} r \, dr = \frac{\pi}{8}\left(1-\zeta(2)\right). $$ Surprisingly if you take this arrangement and rotate it about the $x-$axis then you have a similar arrangement with discs replaced by $3-$balls, each with radius $1/n.$ In this case the sum of volumes shaded in red is equal to $\pi\zeta(3).$ Update: It dawned on me that I can in fact extend the notion "volume shaded in red" to higher dimensions. Let $K_{i}$ be the $n-$ ball at the center of the origin of Euclidean $n-$space, $\mathbf{E}^{n},$ with radius $\frac{1}{i}$ and whose volume I denote by $\mu\left(K_{i}\right).$ Consider $$ \sum\limits_{i=1}^{\infty}(-1)^{i+1}\mu\left(K_{i}\right). $$ A closed form for this quantity is known whenever $n$ is an even number: $$ (-1)^{1+\frac{n}{2}}{\left(2^{n-1}-1 \right)B_{n}\above 1.5pt \Gamma(1+\frac{n}{2})\Gamma(1+n)}\pi^{\frac{3}{2}n}. $$ Inspection shows the numerator of the rational part is the sequence A036280(n/2). You can check in the case that $n=2$ the quantity computes to $\frac{1}{2}\pi\zeta(2).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 28, "answer_id": 2 }
Using the complex definition of $\sin$ to solve an integral Is it possible to use the complex definition of trigonometric identities to simplify integrals? For instance, the integral: $$\int e^{-at}\cos(bt) \, dt$$ Would it be appropriate to use the definition of complex sine and then convert it back after the integration, versus using integration by parts?
This does not add anything to @Michael Hardy's answer but shows the ooold way I learnt! Consider $$I= \int e^{-at} \cos(bt) \,dt \qquad \text{and} \qquad J=\int e^{-at} \sin(bt) \,dt$$ So $$I+iJ=\int e^{-(a-ib)t} \,dt\qquad \text{and} \qquad I-iJ=\int e^{-(a+ib)t} \,dt$$ that is to say $$I+iJ=-\frac {e^{-(a-ib)t}}{a-ib}\qquad \text{and} \qquad I-iJ=-\frac {e^{-(a+ib)t} }{a+ib}$$ Adding $$2I=-\frac {e^{-(a-ib)t}}{a-ib}-\frac {e^{-(a+ib)t} }{a+ib}$$ Expanding the complex numbers and using again Euler's identity, then (up to constants of integration) $$I=\frac{e^{-at} ( b \sin (b t)- a \cos (b t))}{a^2+b^2}\qquad \text{and} \qquad J=-\frac{e^{-at} (a \sin (b t)+b \cos (b t))}{a^2+b^2}$$
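As a sanity check, one can differentiate the result numerically: the antiderivative of $e^{-at}\cos(bt)$ carries the factor $e^{-at}$, and its derivative should return the integrand. The sample values of $a,b$ below are arbitrary:

```python
import math

a, b = 0.7, 1.3   # arbitrary sample parameters

def I(t):
    # claimed antiderivative of e^{-at} cos(bt)
    return math.exp(-a * t) * (b * math.sin(b * t) - a * math.cos(b * t)) / (a * a + b * b)

def integrand(t):
    return math.exp(-a * t) * math.cos(b * t)

# central-difference derivative of I at a few points vs. the integrand
h = 1e-6
max_err = max(abs((I(t + h) - I(t - h)) / (2 * h) - integrand(t))
              for t in (0.0, 0.5, 1.0, 2.0, 3.5))
print(max_err)   # tiny: I'(t) matches e^{-at} cos(bt)
```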
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Radius of convergence of a complex series I need to find the radius of convergence of the following series $$\displaystyle \sum_{n=0}^{\infty} \Big(\frac{z}{1+z}\Big)^n.$$ Here is my solution: I know that $\displaystyle \sum_{n=0}^{\infty} w^n$ converges absolutely for $|w| < 1$, converges uniformly for $|w| \leq \delta < 1$ and diverges elsewhere. Applying this to the above problem, the given series converges absolutely iff $$\Bigg| \frac{z}{1+z} \Bigg| < 1 \iff \text{Re}(z) > \frac{-1}{2}$$ Hence, $\displaystyle \sum_{n=0}^{\infty} \Big(\frac{z}{1+z}\Big)^n$ converges absolutely for $|z|< \dfrac{1}{2}$ and uniformly on $\Big(|z| < \dfrac{1}{2}\Big)\setminus \Big\{\dfrac{-1}{2}\Big\}$
Your main error is assuming that the area of convergence is a disk (with the border being 'unknown'), by insisting on a convergence radius! That's only true for a power series, which has the form $\sum_{n=0}^{\infty}a_nz^n$, which your series is not. Other kinds of series have other shapes for their area of (absolute) convergence. As you correctly found out, your series has a half plane as its area of absolute convergence: $$\Re(z) > -\frac12$$ You then tried to interpret that result, saw the value $\frac12$ and tried to cram it into your (wrong) notion that a convergence radius must exist. But that's not true: the area of absolute convergence is a half plane, not some disk! Uniform convergence is another matter. Aside from solving $$\left\vert\frac{z}{1+z}\right\vert \le \delta$$ for a given $0 < \delta < 1$, you need to find out if the map $$z \to \frac{z}{1+z}$$ is uniformly continuous in that area as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Infinite product with zero value For an infinite product $\prod a_k$ to converge we need * *at most finitely many zero factor, let be $m$ the maximum index of them *$c=\lim_{n\to \infty}\prod_{k=m+1}^n a_k$ must exists, and *$c\ne 0$. My question is "Why the additional condition 3?" Consider $$\tag{1} \prod_{k=1}^\infty \frac{n}{n+1}=\frac{1}{2}\frac{2}{3}\frac{3}{4}\cdots $$ The $n$th partial product would be $1/n$, thus the limit is zero. The definition above excludes (1) from the converging infinite products, but I do not understand what is bad about (1) converging to zero. There must be some consideration behind it. EDIT My question could be read as follows: What are the advantages of this definition? Is there a better (easier to develop by excluding the zero) definition ? Why is the zero limit excluded (even in case there are no zero factors)?
Yes, this is the definition in all books covering infinite products. We say $\prod \frac{n}{n+1}$ "diverges to $0$", and it is not included when we say an infinite product "converges". The reason for this definition is that it is useful, for example in complex analysis. One example (there are many others): $$ \sin z= z \prod_{n=1}^\infty \left(1-{\frac {z^{2} } {\pi^2n^{2} } } \right) $$ where (for all complex $z$) it is a convergent infinite product. Therefore, we may read the zeros of $\sin$ from it directly. With infinite products that possibly diverge to $0$, we cannot do that. As all mathematics students know, the principle $ab = 0 \Longrightarrow (a=0\text{ or }b=0)$ is very useful. We want to keep this useful fact for infinite products also! This is the first (and simplest, of many) reasons that convergence of infinite products is defined this way.
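The product converges quite slowly, but one can see it numerically; here is a rough stdlib-only Python illustration of the truncated product at $z=1$:

```python
import math

def sin_product(z, terms=100000):
    # truncated product z * prod_{n=1}^{N} (1 - z^2 / (pi^2 n^2))
    p = z
    for n in range(1, terms + 1):
        p *= 1.0 - z * z / (math.pi * math.pi * n * n)
    return p

approx = sin_product(1.0)
print(approx, math.sin(1.0))  # agree to roughly 5 decimal places
```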
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Calculate a limit with an integral within proving that it's possible to use the L'Hôpital rule Let $f: [1, +\infty) \rightarrow R\;$ be a continuous function, bounded, and such that $f(x) \ge1 \;\;\;\forall\;x\ge1$. Calculate reasonably the following limit, proving that it is possible to use L'Hôpital Rule: $$\lim_{x\to +\infty} \frac{1}{x} \int_{1}^{x^2} \frac{f(t)}{t}dt$$ I have been trying to prove we can use L'Hôpiatl rule by giving examples of functions which meet those conditions, such as the aditive polynomical, irrational (where the degree of the numerator is higher than the one of the denominator) and exponential functions, but then I'm stuck and I don't know how to continue. Thank you!
Note that the denominator $x$ here tends to $\infty $ and thus L'Hospital's Rule can be applied. One should remember L'Hospital's Rule can be applied on two forms: "$0/0$" and "$\text{anything} /(\pm\infty) $". Applying the rule here we see that limit in question is equal to the limit of $$\frac{f(x^2)}{x^2}\cdot 2x=2\cdot\frac{f(x^2)}{x}$$ provided the limit of above expression exists. Since $f$ is bounded the desired limit is $0$. It is a common misconception that L'Hospital's Rule requires the "$\infty/\infty $" form, i.e. that the numerator must also tend to $\infty$. One can see the emphasis on proving the limiting behavior of numerator in various answers here. This is entirely unnecessary. If the denominator tends to $\infty$ or $-\infty $ then we can apply L'Hospital's Rule without worrying about limiting behavior of numerator. The rule will work if the expression obtained after differentiation of numerator and denominator tends to a limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How to determine all $2 \times 2$ normal matrices? Determine all $2 \times 2$ normal matrices. In particular, how would I show that there are normal matrices which are neither unitary, Hermitian, skew-Hermitian, symmetric, nor skew-symmetric. The only thing I do know is $AA^* = A^*A$, but I'm not sure how to proceed. I tried writing in $a,b,c,d$ as entries of $2 \times 2$ matrix, but it seemed to lead nowhere.
Note that $A$ is normal if and only if $AA^* - A^*A = 0$. If we take $$ A = \pmatrix{a&b\\c&d}, $$ then we find $$ AA^* - A^*A = \pmatrix{|b|^2 - |c|^2 & -b \bar a + a \bar c + b \bar d - d \bar c\\ \overline{-b \bar a + a \bar c + b \bar d - d \bar c} & |c|^2 - |b|^2}. $$ $A$ will be normal if and only if all the above entries are zero. In particular, we see that this means that $A$ is normal if and only if $$ |c| = |b|, \quad -b \bar a + a \bar c + b \bar d - d \bar c = 0. $$ In order to take advantage of the first equation, write $b,c$ in polar form. That is, let's say $$ b = r_1e^{i\theta}, \quad c = r_1e^{i \phi} $$ where $r_1 \geq 0$ and $\theta,\phi \in \Bbb R$. Substituting these into the second equation yields $$ - r_1e^{i \theta}\bar a + a r_1e^{-i\phi} + r_1e^{i\theta} \bar d - d r_1e^{-i\phi} = 0 \implies\\ e^{i \theta}[\bar d-\bar a] + e^{-i \phi}[a-d] = 0 \implies\\ e^{i \theta}[\overline{d-a}] - e^{-i \phi}[d-a] = 0 \implies\\ e^{i (\theta + \phi)}[\overline{d-a}] - [d-a] = 0. $$ Write $d-a$ in polar form. That is, take $d-a = r_2 e^{i \psi}$. We can write the above as the equation $$ e^{i (\theta + \phi)}r_2 e^{-i\psi} - r_2e^{i\psi} = 0 \implies\\ e^{i(\theta + \phi - \psi)} = e^{i \psi} \implies\\ e^{i(\theta + \phi)} = [e^{i \psi}]^2 \implies\\ \pm \exp\left[i\frac{\theta + \phi}2\right] = e^{i \psi} = \frac{d-a}{r_2} \implies\\ d-a = \pm r_2 \exp\left[i\frac{\theta + \phi}2\right]. $$ The above analysis leads to the following parameterization of the normal matrices. Select any $a \in \Bbb C$, $r_1,r_2 \geq 0$ and angles $\theta,\phi$. The matrix with this value of $a$ and $$ b = r_1e^{i\theta}, \quad c = r_1e^{i \phi}, \quad d = a \pm r_2 \exp\left[i\frac{\theta + \phi}2\right] $$ is necessarily normal, and every normal matrix can be written in this way. We could also tweak some definitions to get the equivalent parameterization $$ a,b = [\text{arbitrary complex}], \quad c = \bar b e^{2i k_1}, \quad d = a + k_2 e^{i k_1} $$ where $k_1,k_2$ are real. 
This amounts to $$ A = aI + \pmatrix{0&b\\ \bar b e^{2i k_1} & k_2 e^{ik_1}}. $$ As far as the second part goes: it suffices to take a multiple of a unitary matrix. For instance, take the matrix of any rotation by an angle that is not a multiple of $\pi/2$, and multiply it by some $\alpha > 1$. Alternatively: for any normal $A$, there exists a $\gamma \in \Bbb C$ such that $A + \gamma I$ is neither unitary, Hermitian, nor skew-Hermitian. To see that this is the case, it suffices to look at the expressions for $M^*M$ and $M \pm M^*$, where $M = A + \gamma I$ (in terms of $A$ and $\gamma$, not in terms of the entries). Alternatively: note that every diagonal matrix is normal. However, not every diagonal matrix is Hermitian, skew-Hermitian, or unitary.
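A direct numerical check of the parameterization $b = r_1e^{i\theta}$, $c = r_1e^{i\phi}$, $d = a \pm r_2 e^{i(\theta+\phi)/2}$, with the $2\times 2$ complex arithmetic written out by hand so no external libraries are needed (the specific parameter values below are arbitrary):

```python
import cmath

def is_normal(A, tol=1e-12):
    # compare A A* and A* A entrywise for a 2x2 complex matrix
    (a, b), (c, d) = A
    AAs = [[a * a.conjugate() + b * b.conjugate(), a * c.conjugate() + b * d.conjugate()],
           [c * a.conjugate() + d * b.conjugate(), c * c.conjugate() + d * d.conjugate()]]
    AsA = [[a.conjugate() * a + c.conjugate() * c, a.conjugate() * b + c.conjugate() * d],
           [b.conjugate() * a + d.conjugate() * c, b.conjugate() * b + d.conjugate() * d]]
    return all(abs(AAs[i][j] - AsA[i][j]) < tol for i in range(2) for j in range(2))

# arbitrary sample parameters: a complex, r1, r2 >= 0, angles theta, phi, both signs
a, r1, r2, theta, phi = 1.5 - 0.25j, 2.0, 0.75, 0.3, 1.1
for sign in (1, -1):
    b = r1 * cmath.exp(1j * theta)
    c = r1 * cmath.exp(1j * phi)
    d = a + sign * r2 * cmath.exp(1j * (theta + phi) / 2)
    assert is_normal([[a, b], [c, d]])
ok = True
```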
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove the positive sequence $a_{n+1} = \sqrt{1+\frac{a^2_n}{4}} $ is strictly increasing for $0 \leq a_0<\frac{2}{\sqrt{3}}$ My attempt: $a_{n+1} - a_n= \sqrt{1+\frac{a^2_n}{4}} -a_n > \frac{a_n}{2}-a_n = -\frac{1}{2}a_n$, which doesn't tell me any thing. How do I prove that this sequence is strickly increasing? Thank you!
HINT: It remains to show that $$ \forall n\, a_n<\frac2{\sqrt{3}}. $$ For $n=1$ it is in comments.
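Numerically iterating the recursion illustrates both claims, strict monotonicity and the bound $2/\sqrt3$ (the start value $a_0=1$ below is an arbitrary choice in $[0, 2/\sqrt3)$):

```python
import math

bound = 2 / math.sqrt(3)          # ≈ 1.1547, the fixed point of the recursion
a = 1.0                           # any start with 0 <= a0 < 2/sqrt(3)
seq = [a]
for _ in range(15):
    a = math.sqrt(1 + a * a / 4)
    seq.append(a)

increasing = all(x < y for x, y in zip(seq, seq[1:]))
bounded = all(x < bound for x in seq)
print(increasing, bounded, seq[-1])   # last term sits just below 2/sqrt(3)
```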
{ "language": "en", "url": "https://math.stackexchange.com/questions/3513861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Unexpected use of polynomials in combinatorics Can someone please post some (relatively easy, say high school level) combinatorial problems which can be solved with polynomials but NOT generating functions. Related to this post in ME.SE.
Here is an example I was talking about: We have $2n$ different numbers $a_1,...a_n, b_1,...b_n$. A table $n\times n$ is divided on $n^2$ unit cells and in cell $(i,j)$ we write a number $a_i+b_j$. Suppose that all products of numbers written in cells in each column are the same. Prove that then all products of numbers written in cells in each row are the same. Idea for a solution: Observe a polynomial $$P(x) = (x+a_1)...(x+a_n)-(x-b_1)...(x-b_n)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 9, "answer_id": 1 }
Proving if the set is a group under the following operation So I’ve started learning about group theory this semester (actually just this week) and I’m a total newbie in the field. I’ve got the following set and I need to prove if it’s a group with respect to the operation stated below: $$ G := \mathbb R \setminus \{1\} $$ under the operation: $$a \circ b = a +b-ab$$ for $$ a,b \in G $$ So far I've proved that It's associative and the identity is the number $0$, but I can't manage to prove the closure property and also can't find the inverse. Any help in any form would be highly appreciated.
I think an easy way to prove the closure property is to see that $a\circ b \ne 1, \forall a, b \in \mathbb R \setminus \{1\}$. Using the identity $a+b-ab-1=(1-b)(a-1)$, we can see that $a+b-ab \ne 1 \Leftrightarrow (1-b)(a-1) \ne 0, \forall a, b \in \mathbb R \setminus \{1\}$ The inverse of $a$ comes from solving $a\circ a^{-1} = 0$, because $0$ is the identity element of the group $G$. So, $a\circ a^{-1} = 0 \Leftrightarrow a + a^{-1} - aa^{-1} = 0 \Leftrightarrow a + a^{-1} - aa^{-1} - 1 = -1 \Leftrightarrow (1-a^{-1})(a-1) = -1$. Since $a, a^{-1} \ne 1$: $(a^{-1}-1) = \frac{-1}{1-a} \Leftrightarrow a^{-1} = 1 + \frac{1}{a-1} \Leftrightarrow a^{-1} = \frac{a}{a-1}$
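These facts (closure, identity $0$, inverse $a/(a-1)$, plus associativity) can be spot-checked over a handful of sample elements of $G$; a small Python sketch:

```python
import itertools

def op(a, b):
    return a + b - a * b

def inv(a):
    return a / (a - 1)          # the inverse derived above (defined since a != 1)

elems = [-3, -1.5, -0.5, 0, 0.25, 0.5, 2, 3, 5]   # sample points of R \ {1}

for a, b in itertools.product(elems, repeat=2):
    assert abs(op(a, b) - 1) > 1e-9               # closure: result never equals 1
for a, b, c in itertools.product(elems, repeat=3):
    assert abs(op(op(a, b), c) - op(a, op(b, c))) < 1e-9   # associativity
for a in elems:
    assert op(a, 0) == a and op(0, a) == a        # 0 is the identity
    assert abs(op(a, inv(a))) < 1e-12             # a ∘ a^{-1} = 0
ok = True
```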
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Sum of combinations formula Is there an explicit formula for the sum $0\dbinom{n}{0}+1\dbinom{n}{1}+\dots+n\dbinom{n}{n} = \sum_{k=0}^nk\dbinom{n}{k}$?
Actually, I think I got it because I realized I forgot to show what work I have already done in the question but as I was trying to refine my work I realized what I need to do: $$0\dbinom{n}{0}+1\dbinom{n}{1}+\dots+n\dbinom{n}{n}=$$ $$1\dbinom{n}{1}+2\dbinom{n}{2}+\dots+n\dbinom{n}{n}=$$ $$1\cdot\dfrac{n!}{1!\cdot (n-1)!}+2\cdot\dfrac{n!}{2!\cdot (n-2)!}+\dots+n\cdot\dfrac{n!}{n!\cdot 0!}=$$ $$n\left(1\cdot\dfrac{(n-1)!}{1!\cdot (n-1)!}+2\cdot\dfrac{(n-1)!}{2!\cdot (n-2)!}+\dots+n\cdot\dfrac{(n-1)!}{n!\cdot 0!}\right)=$$ $$n\left(\dfrac{(n-1)!}{0!\cdot (n-1)!}+\dfrac{(n-1)!}{1!\cdot (n-2)!}+\dots+\dfrac{(n-1)!}{(n-1)!\cdot 0!}\right)=$$ $$n\left(\dbinom{n-1}{0}+\dbinom{n-1}{1}+\dots+\dbinom{n-1}{n-1}\right)=$$ $$n\left(2^{n-1}\right)=\boxed{n \cdot 2^{n-1}}$$ Based on Lucas Henrique and Lucas De Souza's answers, this is what I came up with: $$f(x)=(x+1)^n=\dbinom{n}{0}x^n+\dbinom{n}{1}x^{n-1}+\dots+\dbinom{n}{n}x^0$$ Taking the derivative of $f(x)$, we get $$f'(x)=n\cdot(x+1)^{n-1}=n\dbinom{n}{0}x^{n-1}+(n-1)\dbinom{n}{1}x^{n-2}+\dots+0\dbinom{n}{n}$$ This can also be written as $$f'(x)=n\dbinom{n}{n}x^{n-1}+(n-1)\dbinom{n}{n-1}x^{n-2}+\dots+0\dbinom{n}{0}$$ because $\binom{n}{k}=\binom{n}{n-k}$. Plugging in $1$ for $x$, we get $$f'(1)=n\dbinom{n}{n}+(n-1)\dbinom{n}{n-1}+\dots+0\dbinom{n}{0}$$ as desired. Thus, our sum can be represented by $f'(1)$, which can be calculated as $n\cdot(1+1)^{n-1}=\boxed{n\cdot2^{n-1}}$.
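Both derivations can be confirmed by brute force for small $n$, using exact integer arithmetic with Python's `math.comb`:

```python
from math import comb

for n in range(1, 16):
    assert sum(k * comb(n, k) for k in range(n + 1)) == n * 2 ** (n - 1)
print("identity holds for n = 1..15")
```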
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514166", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Is sum of two uniform random variables is uniformly distributed? For example, $X_1, X_2 \sim U[0,t]$. Does it imply that $2X_1+X_2 \sim U[0,3t]$?
If $X_2$ is always equal to $X_1,$ and $X_1\sim\operatorname{Uniform}[0,t],$ then $2X_1+X_2\sim\operatorname{Uniform}[0,3t].$ At the opposite extreme, if $X_1,X_2\sim\operatorname{Uniform}[0,t]$ and $X_1,X_2$ are independent, then $2X_1+X_2$ is not uniformly distributed. You haven't told use the joint distribution of $X_1,X_2,$ but only how each one separately is distributed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Using Beta Gamma function, show that :$\int_0^{\frac {\pi}{6}} \cos^2 (6\theta).\sin^4 (3\theta) d\theta$ Using Beta Gamma function, show that :$\int_0^{\frac {\pi}{6}} \cos^2 (6\theta)\cdot\sin^4 (3\theta) d\theta$ My Attempt $$\int_0^{\frac {\pi}{6}} \cos^2 (6\theta) \cdot \sin^4 (3\theta) d\theta$$ Put $3\theta=t$ $$3d\theta=dt$$ $$d\theta=\frac {dt}{3}$$ When $\theta=0$, $t=0$ When $\theta=\frac {\pi}{6}$, $t=\frac {\pi}{2}$ Now, $$=\int_0^{\frac {\pi}{2}} \cos^2 (2t)\cdot\sin^4 (t) \frac {dt}{3}$$
Hint: Use $\cos2t=1-2\sin^2t$ $$\cos^2(2t)=(1-2\sin^2t)^2=?$$ $$\beta\left(m,n\right)=2\int_0^{\pi/2}\sin^{2m-1}t\cos^{2n-1}t\ dt$$
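Carrying the hint through (this computation is mine, not part of the original answer): $\cos^2(2t)=1-4\sin^2 t+4\sin^4 t$, and the standard Wallis/Beta values $\int_0^{\pi/2}\sin^4 t\,dt=\tfrac{3\pi}{16}$, $\int_0^{\pi/2}\sin^6 t\,dt=\tfrac{5\pi}{32}$, $\int_0^{\pi/2}\sin^8 t\,dt=\tfrac{35\pi}{256}$ give $\tfrac13\cdot\tfrac{7\pi}{64}=\tfrac{7\pi}{192}$ for the original integral. A numerical sanity check:

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(lambda t: math.cos(6 * t) ** 2 * math.sin(3 * t) ** 4, 0.0, math.pi / 6)
print(val, 7 * math.pi / 192)  # the two values agree
```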
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$(A=A\cup B )\Leftrightarrow B\subset A$ - is it true? I'm trying to find out whether the following statement is true: $(A=A\cup B )\Leftrightarrow B\subset A$ In my opinion it is, because $A=A\cup B$ if B is either an empty set, or $A=B$ or $B \subset A$, all of which are are equivalent with $B \subset A$. Could you please verify my proof? Thanks
$$B\not\subseteq A\iff \exists x\in B\setminus A\iff x\in A\cup B, x\notin A \iff A\neq A\cup B$$
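The equivalence can also be checked exhaustively on a small universe, using Python's built-in set operations:

```python
from itertools import combinations

U = [0, 1, 2, 3]
subsets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

for A in subsets:
    for B in subsets:
        # A == A ∪ B exactly when B ⊆ A
        assert (A == A | B) == (B <= A)
ok = True
```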
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Intersection of union of measurable set Let $f_n$ be a sequence of measurable functions on $\mathbb R$. Show that the set $A:= \{x \in \mathbb R \mid f_n > 0\text{ for infinitely many } n \}$ is measurable. If I write the set $A$ like intersection of the union of a measurable set, then I am done. But I can not. Please help me.
Let $$E_n:= \{f_n > 0\}=\{x\in\mathbb R\mid f_n(x)>0\}$$ $$A = \bigcap_{n=1}^\infty \bigcup_{k=n}^\infty E_k$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3514917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Given the premises p→q and ¬p→¬q, prove that p is logically equivalent to q Given the premises p→q and ¬p→¬q, prove that p is logically equivalent to q. I understand why this works, but I do not know how to construct a complete formal proof. So far, I have this: premises: p→q ¬p→¬q (p→q) ∧ (¬p→¬q) ⇔ T apply contrapositive law (p→q) ∧ (q→p) ⇔ T ∴ (p↔q) ⇔ T At this point, how do I formally assert (using symbols and laws) that, because (p↔q) is a tautology, p and q must be logically equivalent to each other?
Hint: To prove this use natural deduction, in general we will use $\leftrightarrow\text{ Intro }$, so we want to first assume $Q$, see if this indeed implies $P$, another direction is trivial by $\to\text{ Elim }$ from $P\to Q$. Answer: $$\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}\fitch{1.~P\to Q\\2.~\neg P\to \neg Q}{\fitch{3.~Q}{\fitch{4.~\neg P}{5.~\neg Q\hspace{10ex}\to\text{ Elim }2,4\\6.~\bot\hspace{12.5ex}\bot\text{ Intro }3,5}\\7.~\neg\neg P\hspace{13.2ex}\neg\text{ Intro }4-6\\8.~P\hspace{16.2ex}\neg\neg\text{ Elim }7}\\\fitch{9.~P}{10.~Q\hspace{14.5ex}\to\text{ Elim }1,9}\\11.~P\leftrightarrow Q\hspace{12.5ex}\leftrightarrow\text{ Intro }3-8,9-10}$$ $\underline{\text{Definition}}$ Formulas $P$ and $Q$ are logically equivalent if and only if the statement of their material equivalence $(P\leftrightarrow Q)$ is a tautology. (There is a difference between being true and being a tautology) If we have premise says $P\to Q$ and $\neg P\to\neg Q$ are tautology this is reasonable: $$\begin{align} 1.~&P\to Q\equiv \top&&\text{Premise}\\ 2.~&\neg P\to \neg Q\equiv \top&&\text{Premise}\\ 3.~&Q\to P\equiv \top&&\text{Contrapositive 2}\\ 4.~&P\to Q\land Q\to P\equiv\top\land\top\equiv \top&&\text{Domination law 1,3}\\ 5.~&P\leftrightarrow Q\equiv\top&&\text{Biconditional equivalence 4}\\ 6.~&P\equiv Q&&\text{Definition 5} \end{align}$$ However if the premise only says $P\to Q$ and $\neg P\to\neg Q$ are true, we only know that $P\leftrightarrow Q$ is also true, which might not be a tautology.
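The semantic claim, that the conjunction of the two premises has the same truth table as $P\leftrightarrow Q$, is easy to machine-check; a small Python truth-table sketch:

```python
from itertools import product

def implies(x, y):
    return (not x) or y

for p, q in product([False, True], repeat=2):
    premises = implies(p, q) and implies(not p, not q)
    # the premises hold in a valuation exactly when p <-> q holds
    assert premises == (p == q)
checked = True
```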
{ "language": "en", "url": "https://math.stackexchange.com/questions/3515063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
if $w$ is a complex number, how to show that $w^\left(1/2\right)$ has 2 roots? except the case for $w=0$ I am looking for a convincing argument to show that if $w$ is a complex number, $w^{\frac 12}$ has 2 different roots.
Consider the polar representation $w=r e^{i(\phi+2k\pi)}$. Its square root yields 2 distinct roots that repeat ad infinitum. That is: $$w^{1/2}=\sqrt r e^{i(\phi/2+k\pi)}$$ If $r=0$ we have only 1 root, which is 0. Otherwise we get the 2 distinct roots $\sqrt r e^{i\phi/2}$ and $\sqrt r e^{i(\phi/2+\pi)}$. All other values of $k$ map to these 2 roots due to the $2\pi$ periodicity.
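A quick numerical illustration with Python's `cmath` (the sample $w$ below is arbitrary):

```python
import cmath

w = 3 - 4j
r, phi = abs(w), cmath.phase(w)
# the two roots sqrt(r) * e^{i(phi/2 + k*pi)} for k = 0, 1
roots = [cmath.sqrt(r) * cmath.exp(1j * (phi / 2 + k * cmath.pi)) for k in (0, 1)]
print(roots)                     # two distinct roots, here approximately 2-1j and -2+1j
print([z * z for z in roots])    # both square back to w
```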
{ "language": "en", "url": "https://math.stackexchange.com/questions/3515241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does $f(x) = x - \tanh(x)$ have an inverse function that can be expressed in terms of elementary functions? I find this question relevant in my current study of the tractrix, namely because this expression appears in one parameterization of the curve. I’ve noticed that the plot of the Cartesian equation of the tractrix appears to be the graph of a function. As you may have guessed, I became interested in discovering this function, and led myself to this question. If $x - \tanh(x)$ does indeed have an elementary inverse, I’m more interested in how to derive an expression for it than the expression itself. If it doesn’t, I’d like to know why.
Inverse functions are sometimes transcendental in nature and cannot be expressed in terms of elementary functions. A way to derive 3D coordinates of asymptotic lines on a pseudosphere for the tractrix, in polar/cylindrical coordinates $(r,\theta,z)$: $$ \sin \psi = \sin \phi = r/a = \frac{r\, d\theta}{ds}$$ $$ dr/ds= \sin \phi \cos \psi, \quad dz/ds=\cos \phi \cos \psi $$ Integrating with initial conditions $(z=0, r=a)$ you get: $$ z/a= \theta - \tanh \theta,\quad r/a= \operatorname{sech} \,\theta. $$ We have parametric $ r(\theta), z(\theta)$, so there would perhaps be no need to find an inverse function: we can find $z$ for any $\theta$. Explicitly solving for $\theta$ in terms of $z$ is not possible analytically, only numerically. Image from an earlier post here.
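Even though no elementary inverse exists, it is easy to evaluate numerically. Here is a sketch (function name mine) that inverts $f(x)=x-\tanh(x)$ on the branch $x\ge0$, where $f$ is strictly increasing, by bisection:

```python
import math

def inv_x_minus_tanh(y, tol=1e-12):
    """Solve x - tanh(x) = y for x >= 0 by bisection.
    f(x) = x - tanh(x) is 0 at x = 0 and strictly increasing for x > 0."""
    if y < 0:
        raise ValueError("this branch handles y >= 0 only")
    lo, hi = 0.0, 1.0
    while hi - math.tanh(hi) < y:   # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid - math.tanh(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = inv_x_minus_tanh(0.5)
print(x, x - math.tanh(x))  # the second value is ≈ 0.5
```

Newton's method would converge faster, but bisection never leaves the bracket and needs no derivative.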
{ "language": "en", "url": "https://math.stackexchange.com/questions/3515440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why graphs are so important The notion of graph seems to be of huge importance in Computer Science. A very rich graph theory has been developed, and theoretical problems are being solved all the time. What I don't understand is what makes graphs so important. Which intrinsic properties of graphs make them so useful, or more useful than other mathematical objects?
Graphs are a common method to visually illustrate relationships in the data. The purpose of a graph is to present data that are too numerous or complicated to be described adequately in the text and in less space. Wikipedia says, Graphs can be used to model many types of relations and processes in physical, biological, social and information systems. Many practical problems can be represented by graphs. Emphasizing their application to real-world systems, the term network is sometimes defined to mean a graph in which attributes (e.g. names) are associated with the vertices and edges. In computer science, graphs are used to represent networks of communication, data organization, computational devices, the flow of computation, etc. For instance, the link structure of a website can be represented by a directed graph, in which the vertices represent web pages and directed edges represent links from one page to another. A similar approach can be taken to problems in social media, travel, biology, computer chip design, mapping the progression of neuro-degenerative diseases, and many other fields. The development of algorithms to handle graphs is therefore of major interest in computer science. The transformation of graphs is often formalized and represented by graph rewrite systems. Complementary to graph transformation systems focusing on rule-based in-memory manipulation of graphs are graph databases geared towards transaction-safe, persistent storing and querying of graph-structured data.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3515632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Vector Space is a Free Module Is this argument correct? They assume $B$ doesn't span $V$. Then $B' = B \cup \{v\}$. And they say $\alpha v + \sum_{i=1}^r \alpha_ib_i = 0$ implies that $\alpha\neq0$, otherwise every scalar becomes zero. My question here is: there is a typo here, isn't there? $B'$ must be linearly dependent if $\alpha\neq0$, right? Another question is: what is the problem if all scalars are zero? That would show that $B'$ is linearly independent, so it would belong to $\mathscr F$, contradicting the maximality of $B$. Isn't this argument circular? Or am I missing something?
$v$ is by assumption (that $B$ is not a basis) not a linear combination of any finite subset of $B$. In particular, $v \notin \text{span}(B)$. So it is right that $B' = B \cup \{v\}$ is linearly independent (because if it were linearly dependent, then there would be an $(\alpha,\alpha_1,...,\alpha_r) \neq 0$ and a sum $\alpha v+\sum_{i=1}^r \alpha_i b_i =0$, which would imply $v \in \text{span}(B)$ (as in the text)). Thus $B' \in \mathscr{F}$, which is a contradiction to the maximality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3515995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Hall and Knight question If $n$ is any positive integer, show that the integral part of $(3+\sqrt7)^n$ is an odd number. I have no idea how to begin this problem, but it is given in the chapter on the binomial theorem, so I hope it can be solved using that.
Let $I$ denote the integral part and $f$ the fractional part of $(3+\sqrt7)^n$. Now $(3-\sqrt7)^n$ lies strictly between $0$ and $1$, so it is a proper fraction; denote it by $f'$. By the binomial theorem, $$(3+\sqrt7)^n=3^n+ \binom{n}{1}3^{n-1}\sqrt7+\cdots$$ $$(3-\sqrt7)^n=3^n-\binom{n}{1}3^{n-1}\sqrt7+\cdots$$ As you can see, when we add them the irrational terms cancel out: $$(3+\sqrt7)^n+(3-\sqrt7)^n=I+f+f'= \text{an even integer}.$$ But since $f$ and $f'$ are proper fractions whose sum is an integer, their sum must be $1$. Hence $I=\text{even}-1$, and we conclude that the integral part is odd.
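A quick computational check of this conjugate argument in exact integer arithmetic (helper function mine): write $(3+\sqrt7)^n=a+b\sqrt7$ with integers $a,b$; then the sum with the conjugate is $2a$, so the integral part is $2a-1$, which is odd by construction:

```python
import math

def integral_part(n):
    """Floor of (3 + sqrt(7))**n via the conjugate trick."""
    a, b = 1, 0                      # (3+sqrt7)^0 = 1 + 0*sqrt7
    for _ in range(n):
        a, b = 3 * a + 7 * b, a + 3 * b
    return 2 * a - 1                 # (3+sqrt7)^n + (3-sqrt7)^n = 2a

for n in range(1, 10):
    assert integral_part(n) % 2 == 1
    assert integral_part(n) == math.floor((3 + math.sqrt(7)) ** n)
print([integral_part(n) for n in range(1, 5)])  # [5, 31, 179, 1015]
```

The float comparison is only a cross-check for small $n$; the integer recurrence itself is exact for all $n$.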
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can the quadratic formula be explained intuitively? Most people know what the quadratic formula is so I won’t post it here (Besides, I don’t know how to properly post formulas in general). I was wondering if there is an intuitive explanation as to why the quadratic formula is structured the way it is.
The equation $$ax^2+bx+c=0$$ can be normalized to a simpler form by using a linear change of variable such as $$x=pt+q.$$ Plugging in the equation, we get $$ap^2t^2+(2apq+bp)t+aq^2+bq+c=0.$$ Now (WLOG $a>0$) we are free to set $$\begin{cases}ap^2=1,\\2apq+bp=0\end{cases}$$ and the equation simplifies to $$\color{green}{t^2-d=0}$$ for some constant $d$. Obviously the solutions are $$t=\pm\sqrt{d}$$ and are real for $d>0$. Hence the solutions in $x$ are a linear function of $\pm\sqrt d$. Solving for the parameters $p, q, d$, you obtain the classical formulas.
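Carrying out the last step explicitly (still assuming $a>0$) recovers the classical quadratic formula:

```latex
% Solving the two conditions for the parameters:
ap^2 = 1 \implies p = \tfrac{1}{\sqrt{a}}, \qquad
2apq + bp = 0 \implies q = -\tfrac{b}{2a},
\qquad
d = -(aq^2 + bq + c) = \tfrac{b^2 - 4ac}{4a}.
% Hence
x = pt + q
  = \pm\frac{1}{\sqrt{a}}\sqrt{\frac{b^2-4ac}{4a}} - \frac{b}{2a}
  = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}.
```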
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Is the following true: $[\nabla \times(\vec{A} \times \vec{B})] \cdot \vec{C}=(\vec{C} \times \vec{\nabla}) \cdot(\vec{A} \times \vec{B})$? Is the following equation correct? $$[\nabla \times(\vec{A} \times \vec{B})] \cdot \vec{C}=(\vec{C} \times \vec{\nabla}) \cdot(\vec{A} \times \vec{B})$$ If so, how can this be shown?
Having $\vec A\times \vec B$ just muddies the waters. Just try to see instead why $$(\nabla\times\vec A)\cdot\vec C = (\vec C\times\nabla)\cdot\vec A$$ for any vector fields $\vec A$ and $\vec C$. Let's write out the right-hand side in terms of components. Writing $\vec A = (A_1,A_2,A_3)$ and $\vec C = (C_1,C_2,C_3)$, this is \begin{align*} \big(C_2\frac{\partial}{\partial z}& - C_3\frac{\partial}{\partial y}\big)A_1 + \big(C_3\frac{\partial}{\partial x} - C_1\frac{\partial}{\partial z}\big)A_2 + \big(C_1\frac{\partial}{\partial y} - C_2\frac{\partial}{\partial x}\big)A_3 \\ &=C_1\big(\frac{\partial A_3}{\partial y} - \frac{\partial A_2}{\partial z}\big) + C_2\big(\frac{\partial A_1}{\partial z} - \frac{\partial A_3}{\partial x}\big) + C_3\big(\frac{\partial A_2}{\partial x}- \frac{\partial A_1}{\partial y}\big) \\ &= (\nabla\times\vec A)\cdot \vec C, \end{align*} as desired.
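This identity can also be sanity-checked numerically with central differences (the particular fields and the sample point below are arbitrary choices, not from the question):

```python
def partial(F, i, j, p, h=1e-6):
    """Central-difference estimate of dF_i/dx_j at point p."""
    q1, q2 = list(p), list(p)
    q1[j] += h
    q2[j] -= h
    return (F(q1)[i] - F(q2)[i]) / (2 * h)

def lhs(A, C, p):
    """(curl A) . C"""
    c = C(p)
    curl = (partial(A, 2, 1, p) - partial(A, 1, 2, p),
            partial(A, 0, 2, p) - partial(A, 2, 0, p),
            partial(A, 1, 0, p) - partial(A, 0, 1, p))
    return sum(ci * ki for ci, ki in zip(c, curl))

def rhs(A, C, p):
    """(C x nabla) . A, with the derivatives acting on A only."""
    c = C(p)
    return (c[1] * partial(A, 0, 2, p) - c[2] * partial(A, 0, 1, p)
            + c[2] * partial(A, 1, 0, p) - c[0] * partial(A, 1, 2, p)
            + c[0] * partial(A, 2, 1, p) - c[1] * partial(A, 2, 0, p))

A = lambda p: (p[0] * p[1], p[1] * p[2], p[2] * p[0])
C = lambda p: (p[0] + p[2], 2.0, p[1] ** 2)
pt = (0.3, -1.2, 0.7)
print(abs(lhs(A, C, pt) - rhs(A, C, pt)))  # ~0, within finite-difference error
```

The two sides agree pointwise even for a non-constant $\vec C$, since in $(\vec C\times\nabla)\cdot\vec A$ the derivatives act only on $\vec A$.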
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
High school math books recommendations I'm in my last year of middle school and want to learn high school maths in depth — I want to grasp what goes on behind the scenes and learn proofs. What are good books to self-study HS maths this way? In what order should I study them?
I add to Chris's answer a very nice book in the English language. I discovered this morning that some of its examples are taken from my Italian-language textbook. The book belongs to the series by JAMES STEWART. Here is a preview of the Algebra review: https://www.stewartcalculus.com/data/CALCULUS_8E_ET/upfiles/6et_reviewofalgebra.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to find the matrix A with the given linear transformation? I'm working on an assignment and I came across this problem and I am not really sure how to approach it. Any advice would be really helpful. Find an example of a linear transformation $T:\mathbb{R}^2\to\mathbb{R}^3$ given by $T(x)=Ax$ such that $T([1,1]) = \langle3,3,5\rangle$. Find the matrix $A$. I know I have to use the standard basis, but I can't figure the process to find the matrix.
Why not let $$A=\begin{bmatrix} 3 & 0 \\ 3 & 0 \\ 5 & 0 \end{bmatrix}$$ Then you'd have that $$T(x)=Ax=\begin{bmatrix} 3 & 0 \\ 3 & 0 \\ 5 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 3x_1 \\ 3x_1 \\ 5x_1 \end{bmatrix}.$$ And it would follow easily that $T\left(\begin{bmatrix}1\\1\end{bmatrix}\right)=\begin{bmatrix}3\\3\\5\end{bmatrix}$.
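A quick check in plain Python (the helper `matvec` is mine):

```python
A = [[3, 0],
     [3, 0],
     [5, 0]]

def matvec(M, v):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

print(matvec(A, [1, 1]))  # [3, 3, 5]
```

Any matrix whose columns sum to $(3,3,5)^T$ would work; putting everything in the first column is just the simplest choice.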
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I find $\lim_{x \to 8} \frac{(\sqrt[3]{x} -2)}{x-8}$ by using the conjugate rule? I need to find: $\lim_{x \to 8} \frac{(\sqrt[3]{x} -2)}{x-8}$ I cannot solve this by substitution because that would cause the denominator to equal 0. Normally, I would simply use the conjugate trick, however I am uncertain how I would rationalize the numerator. $$\frac{(\sqrt[3]{x} -2)}{x-8}\times\frac{\sqrt[3]{x}+2}{\sqrt[3]{x}+2}$$ However, clearly this won't help me with anything, as I won't be able to factor anything. $$\frac{(\sqrt[3]{x^2} -4)}{(x-8)(\sqrt[3]{x^2}+2)}$$ I am unsure about how to continue from here. Perhaps I am on the wrong track entirely. Any form of guidance would be welcome. Thank you.
Take the steps below$$\lim_{x \to 8} \frac{\sqrt[3]{x} -2}{x-8} =\lim_{x \to 8} \frac{(\sqrt[3]{x} -2)((\sqrt[3]{x})^2 +2\sqrt[3]{x} + 4)}{(x-8)((\sqrt[3]{x})^2 +2\sqrt[3]{x} + 4)}$$ $$=\lim_{x \to 8}\frac{x-8}{(x-8)((\sqrt[3]{x})^2 +2\sqrt[3]{x} + 4)} =\lim_{x \to 8}\frac{1}{(\sqrt[3]{x})^2 +2\sqrt[3]{x} + 4} =\frac1{4+4+4}=\frac1{12}$$
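A quick numerical corroboration of the value $\frac1{12}$ (a small sketch):

```python
def f(x):
    return (x ** (1 / 3) - 2) / (x - 8)

for h in (1e-1, 1e-3, 1e-5):
    print(f(8 + h), f(8 - h))
# both one-sided values approach 1/12 = 0.08333...
```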
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 2 }
Is there a general formula for harmonic number at present? New to Stack Exchange. The harmonic numbers are defined as $$ H_n= \sum _ {i = 1}^n\frac {1} {i}.$$ Does this sum have a closed-form formula, similar to the general term formula $ \sum _ {i = 1}^n i= n (n+1)/2$?
The method is the same as for $\sum_{j \le n} j^r$ but the result isn't a polynomial, we only have an asymptotic expansion. Let $$f(z) = \frac1z - \log(z+1)+\log(z)=\frac1z - \log(1+\frac1{z})=F(\frac1z)$$ $F(s)= s-\log(1+s)$ is analytic for $|s|<1$ and $F(0)=F'(0)=0$. Thus, by induction there are some coefficients $c_k$ such that for all $K$, the first $K+1$ derivatives of $$F_K(s) = \sum_{k=1}^K c_k s^k (1-(\frac1{s+1})^k)$$ agree with that of $F$. Whence $$\sum_{j> n} f(j)= \sum_{j> n} (F_K(1/j)+O(j^{-K-2}))= \sum_{k=1}^K c_k \sum_{j> n}(\frac1{j^k}-\frac1{(j+1)^k})+O(\sum_{j>n} j^{-K-2})$$ $$ = \sum_{k=1}^K \frac{c_k }{(n+1)^k} + O(n^{-K-1})$$ From which we get $$\sum_{j \le n} \frac1j = \sum_{j\le n} (\log(j+1)-\log(j)) + \sum_j f(j) - \sum_{j> n} f(j)$$ $$ = \log(n+1)+\gamma-\sum_{k=1}^K \frac{c_k}{(n+1)^k} + O(n^{-K-1})$$
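Numerically, a truncation of such an expansion is already very accurate. The sketch below uses the classical form of the expansion around $\log n$ (the coefficients $\frac1{2n}$ and $-\frac1{12n^2}$ are the standard ones, stated here without derivation; they correspond to the $c_k$ above after re-expanding $\log(n+1)$):

```python
import math

GAMMA = 0.5772156649015329   # Euler–Mascheroni constant

def H_direct(n):
    return sum(1.0 / j for j in range(1, n + 1))

def H_asymptotic(n):
    # H_n ≈ ln n + γ + 1/(2n) − 1/(12 n²); the error is O(n⁻⁴)
    return math.log(n) + GAMMA + 1 / (2 * n) - 1 / (12 * n ** 2)

n = 1000
print(H_direct(n) - H_asymptotic(n))  # on the order of 1e-12
```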
{ "language": "en", "url": "https://math.stackexchange.com/questions/3516956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Knowing that $\prod_{i = 1}^na_i = 1$, prove that $\prod_{i = 1}^n(a_i + 1)^{i + 1} > (n + 1)^{n + 1}$. Given natural $n$ $(n \ge 3)$ and positives $a_1, a_2, \cdots, a_{n - 1}, a_n$ such that $\displaystyle \prod_{i = 1}^na_i = 1$, prove that $$\large \prod_{i = 1}^n(a_i + 1)^{i + 1} > (n + 1)^{n + 1}$$ We have that $$\prod_{i = 1}^n(a_i + 1)^{i + 1} \ge \prod_{i = 1}^n(2\sqrt{a_i}) \cdot \left(\sqrt[m]{\prod_{i = 1}^na_i^i} + 1\right)^m$$ where $\displaystyle p = \sum_{i = 1}^ni = \dfrac{n(n + 1)}{2}$, then I don't know what to do next. I suspect that $\displaystyle \min\left(\prod_{i = 1}^n(a_i + 1)^{i + 1}\right) = 2^q$, occuring when $a_1 = a_2 = \cdots = a_{n - 1} = a_n = 1$, where $q = \dfrac{(n + 3)n}{2}$, although I'm not sure that $2^q > (n + 1)^{n + 1}, \forall n \in \mathbb Z^+, n \ge 2$. (I've just realised this is just a redraft of problem 2, IMO 2012.)
For the claim it suffices that $2^{\frac{n}{2}}$ is greater than $n+1$. For $n>5$ this can be proven by induction (hint: $x\sqrt{2} - x - 1>0$ for $x> \sqrt{2} + 1$), and for the other $n<6$ one checks the original inequality manually.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the limit $\lim_{x\to 0} x\left(\left[\frac{1}{x}\right] +\left[\frac{2}{x}\right] +\cdots \left[\frac{10}{x}\right] \right)$ Can someone help me finding the following limit $$ \lim_{x\to 0} x\left(\left\lfloor\frac{1}{x}\right\rfloor +\left\lfloor\frac{2}{x}\right\rfloor +\cdots \left\lfloor\frac{10}{x}\right\rfloor\right)$$ I can somehow guess the limit will be $55$, as $\lim_{x\to 0}x\left\lfloor\frac{1}{x}\right\rfloor=1$. But, I am not able to prove it. Note: $\left\lfloor x\right\rfloor$ denotes the greatest integer less than or equal to $x$.
$$1>\frac{i}{x}-\left[\frac{i}{x}\right]\ge0$$ For $x>0$, multiply by $x$: $$x>i-x\left[\frac{i}{x}\right]\ge0$$ Sum for $1\le i\le10$: $$10x>55-x\sum \left[\frac{i}{x}\right]\ge0$$ Let $x$ tend to $0^+$; then $x\sum \left[\frac{i}{x}\right]$ tends to $55$. For $x<0$ multiplying reverses the inequalities, giving $x<i-x\left[\frac{i}{x}\right]\le0$ and hence $10x<55-x\sum\left[\frac{i}{x}\right]\le0$, so the limit is $55$ from the left as well.
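A numerical check of this squeeze (a small sketch):

```python
import math

def S(x):
    """x * ( [1/x] + [2/x] + ... + [10/x] )"""
    return x * sum(math.floor(i / x) for i in range(1, 11))

for x in (0.07, 0.0037, -0.0037):
    print(x, S(x))  # approaches 55 from both sides
```

The squeeze bound $|S(x)-55|\le 10|x|$ is visible in the printed values.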
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
Inverse cdf of the $\chi$-squared distribution I need to evaluate the functional inverse of the $\text{cdf}$ of the $\chi$-squared distribution $$\text{cdf}_{\chi^2_\nu}(t)=\mathbb P(X_\nu\le t)=\frac1{2^{\nu/2}\Gamma(\frac\nu2)}\int_0^te^{-x/2}x^{\nu/2-1}dx.$$ The value of $t$ is fixed (say $0.9$), but the number of degrees of freedom $\nu$ is variable, say from $8$ to infinity. I am looking for a formula that is simple and fast to compute. I don't need much accuracy. Presumably, for a large number of DOF we should be close to a Normal law $\mathcal N(\nu,\sqrt{2\nu})$, so that an approximation would be $$\nu+z_t\sqrt{2\nu}$$ where $z_t$ is the position of a normal quantile. Can anyone confirm this and/or provide a better approximation?
I have found relevant information in this paper: "Exploring How to Simply Approximate the P-value of a Chi-Squared Statistic, Eric J. Beh, Austrian Journal of Statistics June 2018, Volume 47, 63-75."
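For what it's worth, here is a sketch comparing the normal approximation proposed in the question with the classical Wilson–Hilferty cube-root approximation (both are standard approximations, named explicitly here; `statistics.NormalDist` needs Python 3.8+):

```python
from statistics import NormalDist

def chi2_q_normal(p, nu):
    """Quantile from the crude approximation chi2_nu ~ N(nu, 2*nu)."""
    z = NormalDist().inv_cdf(p)
    return nu + z * (2 * nu) ** 0.5

def chi2_q_wilson_hilferty(p, nu):
    """Wilson–Hilferty: (X/nu)^(1/3) is approximately
    N(1 - 2/(9 nu), 2/(9 nu)); invert and cube."""
    z = NormalDist().inv_cdf(p)
    s = 2 / (9 * nu)
    return nu * (1 - s + z * s ** 0.5) ** 3

# the true chi-squared quantile at p = 0.9, nu = 8 is about 13.3616
print(chi2_q_normal(0.9, 8), chi2_q_wilson_hilferty(0.9, 8))
```

Already at $\nu=8$ the cube-root form is markedly more accurate than the plain normal approximation, and both improve as $\nu$ grows.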
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integration inequality from ISI Entrance Examination Let $f(x)$ be a continuous function, whose first and second derivatives are continuous on $[0,2\pi]$ and $f''(x)\ge 0$ for all $x\in [0,2\pi]$. Show that $$\int_0^{2\pi}f(x)\cos x dx\ge 0$$.
Let us complete your attempt. Two integrations by parts (the boundary terms with $\sin$ vanish) give \begin{equation*} I:=\int_0^{2\pi}f(x)\cos x\,\mathrm{d}x = f'(2\pi)-f'(0)-\int_{0}^{2\pi}f''(x)\cos x\, \mathrm{d}x. \end{equation*} According to the mean value theorem for definite integrals (applicable since $f''\ge 0$) there is a $c\in (0,2\pi)$ such that \begin{equation*} \int_{0}^{2\pi}f''(x)\cos x\, \mathrm{d}x = \cos(c)\cdot\int_{0}^{2\pi}f''(x)\, \mathrm{d}x = \cos(c)(f'(2\pi)-f'(0)). \end{equation*} Consequently \begin{equation*} I= (f'(2\pi)-f'(0))(1-\cos(c))\ge 0, \end{equation*} since $f''\ge0$ makes $f'$ nondecreasing, so $f'(2\pi)\ge f'(0)$, while $1-\cos(c)\ge0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continuous in subspace topology Suppose $X = \lbrace 1,2,3,4,5 \rbrace$ and $\tau = \lbrace X, \emptyset , \lbrace 1 \rbrace , \lbrace 1,2 \rbrace , \lbrace 1,3,4 \rbrace , \lbrace 1,2,3,4 \rbrace , \lbrace 1,2,5 \rbrace \rbrace$. For $\tau_{M}$ we take subspace topology on $M = \lbrace 1,3,5 \rbrace$. We consider the function $f: X \rightarrow \lbrace 0,1 \rbrace$, where $$f(n) = \left\{ \begin{array}{ll} 0 & \textrm{if $n \le 3$}\\ 1 & \textrm{others }\\ \end{array} \right. $$ Let $f \mid_{M} : M \rightarrow \lbrace 0,1 \rbrace$ be the restriction of $f$ to $(M, \tau_{M})$. Show that these function $f$ and $f \mid_{M}$ are continuous. If not, point the counterexample. What I did? I think, that $\tau_{M} = \lbrace \lbrace 1,3,5 \rbrace , \emptyset , \lbrace 1 \rbrace \rbrace$. But, have no idea, what can I do next. Please, help me :)
You have $\tau_{M} = \lbrace \lbrace 1,3,5 \rbrace , \emptyset , \lbrace 1 \rbrace \rbrace$. You missed $\{1,3\}$ and $\{1,5\}$ (coming from $\{1,3,4\}\cap M$ and $\{1,2,5\}\cap M$). With the discrete topology on $\{0,1\}$: $(f \mid_{M})^{-1}(0) = \{1,3\}$ is an open set, but $(f \mid_{M})^{-1}(1) = \{5\}$ is not an open set. So $f \mid_{M}$ is not continuous. (Similarly, $f^{-1}(0)=\{1,2,3\}\notin\tau$, so $f$ itself is not continuous either.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
The minimum number of variables to represent a 3D line in a unique way To my best understanding, the minimum number of variables to represent a line in 3D space is four. It means you need at least four values to identify a 3D line. For example from here $$a x + b y + c z = d \tag{1}$$ defines a line with four variables. However, this is not a unique representation of the line, and by scaling the equation you may still represent the same line: $$k a x + k b y + k c z = k d \, , \{k \neq 0\} \tag{2}$$ So my question is that what is the minimum number of variables to represent a line in a unique way? In other words, if $$l_1 \equiv L\{ a_1, a_2, \cdots , a_n \} \tag{3}$$ and $$l_2 \equiv L\{ b_1, b_2, \cdots , b_n \} \tag{4}$$ then $$l_1 \equiv l_2 \tag{5}$$ only and if only $$ \{a_i = b_i, \forall i \in 1,\cdots,n\} \tag{6}$$ Or, if I want to ask my question differently, what is the best way to mathematically represent a line in a 3D space, in a unique way, with the minimum number of degrees of freedom? P.S.1. I think I have my answer, and it is shamefully simple. Just divide the first equation by $d$. So it seems the minimum number of variables to represent a line (or DOF) in a 3D space in a unique way should be 3: $$a' x + b' y + c' z = 1 \tag{7}$$ where $\alpha' = \frac{\alpha}{d}$! P.S.2. The above method doesn't work if $d = 0$, so one needs at least a boolean variable $d' = 0 \, or \, 1$: $$a' x + b' y + c' z = d' \tag{8}$$ to represent all possible lines in a 3D space. P.S.3. I made a very silly mistake. Eq.1. represents a plane, not a line!
Maybe $5$? $x,y,z$ of the reference point and a unit vector, which needs $\theta$ and $\phi$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
mathematical induction natural number Tell me about this exercise; I tried to solve it but found it confusing. A bank gives \$20 and \$50 bills. I must use mathematical induction to show that the bank can create any amount of money greater than or equal to \$40 that is a multiple of 10. Prove that for every natural number $n\geq4$ there are $l,m$ natural, so that $10n=20l+50m$. How I tried to solve it: Base step: $n=1$. Induction hypothesis: $n=k$, so $10k=20l+50m$; I name this relation (1). Induction step: $n=k+1$, so $10(k+1)=20l+50m$, hence $k+1=2l+5m$ and $k=2l+5m-1$; I name this relation (2). In relation (1) I replace the $k$ from (2), so I have $10(2l+5m-1+1)=20l+50m$, i.e. $10(2l+5m)=20l+50m$.
As you observed, all we need to show is that for each natural number $k\geq4$, there exist nonnegative integers $m,n$ so that $k=2m+5n$. First, note that if $k=2m+5n$, then $k+2=2(m+1)+5n$; hence if $k$ can be written in the desired form, so can $k+2$. But obviously $k=4,5$ work. So by induction, all other integers work too. To be concrete, all the odd numbers $\geq5$ work since $$\begin{split}5&=2\times0+5\times1\\7&=2\times1+5\times1\\9&=2\times2+5\times1\\&\vdots\\5+2n&=2\times n+5\times1\end{split}$$ and all even numbers $\geq 4$ work since $$\begin{split}4&=2\times2+5\times0\\6&=2\times3+5\times0\\8&=2\times4+5\times0\\&\vdots\\2n&=2\times n+5\times0.\end{split}$$
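The parity argument above can be checked mechanically (a small sketch, function name mine):

```python
def decompose(k):
    """Return (m, n) with k = 2*m + 5*n, for k >= 4,
    following the parity split in the answer."""
    if k % 2 == 0:
        return k // 2, 0         # even: k = 2*(k/2)
    return (k - 5) // 2, 1       # odd k >= 5: k = 2*((k-5)/2) + 5

for k in range(4, 200):
    m, n = decompose(k)
    assert m >= 0 and n >= 0 and 2 * m + 5 * n == k
print(decompose(41))  # (18, 1)
```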
{ "language": "en", "url": "https://math.stackexchange.com/questions/3517903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find $g(z)$ and show that it is holomorphic Let $\gamma$ be the circle centered at the origin with radius $2$, and consider the function $g:\mathbb{C}\setminus\gamma \to\mathbb{C}$, $g(z)=\displaystyle\int_{\gamma}\frac{\cos(s)}{z-is}ds$. I have doubts about how to calculate $g(z)$ for $ \left |z\right|\ne2 $ and then show that it is holomorphic in $\mathbb{C}\setminus \{z:\left |z\right|=2\}$, and then calculate $g'(z)$. Thanks in advance :)
Define $h(z)=g(iz)$. Then $h$ is holomorphic with derivative $h'(z)=-1/(2\pi )\oint_{\gamma}\cos s/(z-s)^2\operatorname ds$, by Cauchy's differentiation formula. But then $g'(z)=-ih'(-iz)$. Meanwhile, for $|z|\gt2$, we have $g(z)=0$, by Cauchy's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3518044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Variance identity for i.i.d mean-zero random vector. Let $Z_1, \ldots, Z_k$ independent mean-zero random vectors in $\mathbb{R}^n$. Show that \begin{equation} \mathbb{E} \left\| \sum_{j=1}^k Z_j \right\|_2^2 = \sum_{j=1}^k \mathbb{E} \left\| Z_j \right\|_2^2 \end{equation} Answer in correct place. Thank you all for the confirmation.
As we know: \begin{equation} \left\| Z \right\|_2^2 = \langle Z, Z \rangle = Z^T Z = \sum_{i=1}^n z_i z_i \end{equation} with $z_i$ being the $i$-th component of the vector $Z$. Expanding the left-hand side, \begin{align} \mathbb{E} \left\| \sum_{j=1}^k Z_j \right\|_2^2 &= \mathbb{E} \left\langle \sum_{j=1}^k Z_j, \sum_{l=1}^k Z_l \right\rangle = \sum_{j=1}^k \sum_{l=1}^k \mathbb{E} \langle Z_j, Z_l \rangle. \end{align} For $j \neq l$, independence and the mean-zero assumption give \begin{equation} \mathbb{E} \langle Z_j, Z_l \rangle = \sum_{i=1}^n \mathbb{E}\big[{Z_j}_i \,{Z_l}_i\big] = \sum_{i=1}^n \mathbb{E}\big[{Z_j}_i\big]\,\mathbb{E}\big[{Z_l}_i\big] = 0, \end{equation} so only the diagonal terms survive: \begin{align} \mathbb{E} \left\| \sum_{j=1}^k Z_j \right\|_2^2 = \sum_{j=1}^k \mathbb{E} \langle Z_j, Z_j \rangle = \sum_{j=1}^k \mathbb{E} \left\| Z_j \right\|^2_2. \end{align}
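A quick Monte Carlo sanity check of the identity (the choice of $\pm1$ entries is an arbitrary mean-zero example, my own):

```python
import random

random.seed(0)
k, n, trials = 5, 3, 100_000

total = 0.0
for _ in range(trials):
    # k independent mean-zero vectors in R^n with Rademacher entries
    Z = [[random.choice((-1.0, 1.0)) for _ in range(n)] for _ in range(k)]
    s = [sum(col) for col in zip(*Z)]          # the vector sum of the Z_j
    total += sum(x * x for x in s)             # its squared norm

lhs = total / trials
rhs = k * n          # each E||Z_j||^2 = n for +/-1 entries
print(lhs, rhs)      # lhs ≈ 15
```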
{ "language": "en", "url": "https://math.stackexchange.com/questions/3518296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a special prime The prime number $$p=82\ 954\ 517$$ has the property that the numbers $$2!+p,3!+p,\cdots , 11!+p$$ are all prime, but $12!+p$ is composite. Upto $10^{10}$, the only other prime with this property is $105\ 204\ 557$ Does a prime $p$ exist such that $$2!+p,3!+p,\cdots , 12!+p$$ are all prime ? If yes, which is the smallest such prime ? Such a prime must exceed $10^{10}$ Update : The prime $$p=79\ 017\ 245\ 897$$ is even better than what I wanted. $j!+p$ is prime for $j=2,3,4,\cdots,13$. Now it remains to find the minimum primes for the limit $12$ and the limit $13$
Just a few restrictions: * *first leads to p is 5 mod 6 *second eliminates 29 mod 30 *third eliminates 11 mod 30 *fourth eliminates 6 mod 7 ( aka 167,83 mod 210) *fifth eliminates 1 mod 7 ( aka 197, 113 mod 210) *sixth eliminates 9 mod 11 *seventh eliminates 6 mod 11 *eighth eliminates 10 mod 11 *ninth eliminates 1 mod 11 *tenth eliminates 12 mod 13 *eleventh eliminates 1 mod 13 *twelvth eliminates 14 mod 17 *thirteenth eliminates 9 mod 17 *fourteenth eliminates 16 mod 17 *fifteenth eliminates 1 mod 17 *sixteenth eliminates 18 mod 19 *seventeenth eliminates 1 mod 19 *eighteenth eliminates 19 mod 23 *nineteenth eliminates 12 mod 23 *twentieth eliminates 22 mod 23 *twenty first eliminates 1 mod 23 *twenty second eliminates 7 mod 29 *twenty third eliminates 23 mod 29 *twenty fourth eliminates 24 mod 29 *twenty fifth eliminates 15 mod 29 *twenty sixth eliminates 28 mod 29 *twenty seventh eliminates 1 mod 29 *twenty eighth eliminates 30 mod 31 *twenty ninth eliminates 1 mod 31 *thirtieth eliminates 4 mod 37 okay I have messed up ( prior) we need -n! mod q# eliminated. table updated (fixed) and extended.
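For readers who want to experiment with these claims, here is a sketch of a brute-force checker (a standard deterministic Miller–Rabin primality test; the function names are mine). On the prime from the question it reproduces the stated run length of 11:

```python
from math import factorial

def is_prime(n):
    """Deterministic Miller–Rabin; these bases are valid far beyond
    the sizes involved here (n < ~3.3e24)."""
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for q in small:
        if n % q == 0:
            return n == q
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def run_length(p):
    """Largest j with 2!+p, 3!+p, ..., j!+p all prime."""
    j = 2
    while is_prime(factorial(j) + p):
        j += 1
    return j - 1

print(run_length(82954517))  # 11, as stated in the question
```

Searching for the minimum prime with run length 12 or 13 would combine this with the modular restrictions listed above to prune candidates.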
{ "language": "en", "url": "https://math.stackexchange.com/questions/3518415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluate $\lim_{n \to \infty} \sqrt[n^2]{2^n+4^{n^2}}$ Evaluate $$\lim_{n \to \infty} \sqrt[n^2]{2^n+4^{n^2}}$$ We know that as $n\to \infty$ we have $2^n\ll2^{2n^2}$, and therefore the limit is $4$. In a more formal way I started with: $$\log(L)=\lim_{n \to \infty} \log(2^n+4^{n^2})^{\frac{1}{n^2}}=\lim_{n \to \infty}\frac{1}{n^2}\log(2^n+2^{2n^2})$$ Continuing to $$\log(L)=\lim_{n \to \infty}\frac{1}{n^2}\log\left[2^n(1+2^{2n^2-n})\right]$$ did not help much, as I arrived at $$\log(L)=\lim_{n \to \infty}\frac{1}{n^2}\left[\log(2^n)+\log(1+2^{2n^2-n})\right]=\lim_{n \to \infty}\frac{1}{n^2}\log(2^n)+\lim_{n \to \infty}\frac{1}{n^2}\log(1+2^{2n^2-n})=0+\lim_{n \to \infty}\frac{1}{n^2}\log(1+2^{2n^2-n})$$
Hint: $n>1$; $f(n):=4(1+\dfrac{2^n}{2^{2n^2}})^{1/n^2}=$ $4(1+\dfrac{1}{2^{2n^2-n}})^{1/n^2}$; $4(1+0)^{1/n^2} \lt f(n) < 4(1+1)^{1/n^2}.$ Take the limit. Recall: For $a>1$, real; and $n >1$, integer: $1<a^{1/n^2} <a^{1/n}$, and $\lim_{n \rightarrow \infty} a^{1/n}=1.$
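A quick numerical illustration (taking the logarithm first, so the huge integer $4^{n^2}$ never has to be converted to a float; Python's `math.log` accepts arbitrarily large integers):

```python
import math

def f(n):
    return math.exp(math.log(2 ** n + 4 ** (n * n)) / n ** 2)

for n in (2, 5, 10, 40):
    print(n, f(n))  # tends to 4
```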
{ "language": "en", "url": "https://math.stackexchange.com/questions/3518719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the Fourier coefficients of $g$ Suppose $f:\mathbb{R}\rightarrow\mathbb{R}$ is periodic with period $2\pi$ so that $\hat{f}(n)=\frac{1}{1+n^{2}}$ for every $n\in \mathbb{N}$, and $g:\mathbb{R}\rightarrow\mathbb{R}$ is periodic with period $2\pi$ and defined by the formula $g(x)=\int_{0}^{x}f(t)dt $ for every $-\pi\lneq x\leq \pi$. How can I find $\hat{g}(n)$? I've tried to go by the definition of the Fourier coefficient but didn't see a way to solve this.
Since you consider $\hat g (n)$, I'll assume that it's $\mathbb Z$ instead of $\mathbb N$ and that you are using the exponential. If $f$ is good enough (continuous, for instance; we need this to be able to exchange the integrals), you can write, for $n\neq0$, \begin{align} \hat g(n)&=\frac1{2\pi}\,\int_0^{2\pi} g(x)\,e^{inx}\,dx =\frac1{2\pi}\,\int_0^{2\pi} \int_0^x f(t)\,dt\,\,e^{inx}\,dx\\ \ \\ &=\frac1{2\pi}\,\int_0^{2\pi} f(t)\int_t^{2\pi} \,e^{inx}\,dx\,dt\\ \ \\ &=\frac1{2\pi}\,\int_0^{2\pi} f(t)\,\frac{1-e^{int}}{in}\,dt\\ \ \\ &=\frac1{in}\left(\frac1{2\pi}\int_0^{2\pi} f(t)\, dt-\frac1{2\pi}\int_0^{2\pi} f(t)\,e^{int}\, dt\right)\\ \ \\ &=\frac1{in}\left(\hat f(0)-\hat f(-n)\right)=\frac1{in}\left(1-\frac1{1+n^2}\right)=-\,\frac{in}{1+n^2} \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/3518902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
compactness in $\ell^p$ space Choose $1 \leq p \leq \infty$, and let $D=\left\{x \in \ell^{p}:\|x\|_{p} \leq 1\right\}$ be a closed ball in $\ell^p$. Try to show that $D \text { is not a compact subset of } \ell^{p}$. So far I've proved that the sequence of standard basis vectors $\left\{\delta_{n}\right\}_{n \in \mathbb{N}}$ contains no convergent subsequences, will that directly imply $D$ is not compact? Any help is appreciated.
Firstly, using F. Riesz's lemma, you can prove that the closed unit ball of a normed space is compact if and only if the space is finite-dimensional, which $\ell^p$ is not. Secondly, in a metric space, compactness is equivalent to sequential compactness, so yes: since you found a sequence in $D$ with no convergent subsequence, $D$ is not compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3519083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a set of points of continuity has only irrational elements. Part a) Prove that for any function $f:\mathbb{R} \rightarrow \mathbb{R}$, the set $C_f$ of points of continuity of $f$ $$ C_f = \left\{ a \in \mathbb{R}: \forall \epsilon > 0, \exists \delta > 0 \forall x,y \left( |x-a| < \delta \text{ and } |y-a|< \delta \right) \Rightarrow |f(x) - f(y)| < \epsilon \right\} $$ is $G_\delta$ Part b) Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be defined as follows: $$ f(x) = \left\{ \begin{array} 00 & \text{if }x \in \mathbb{R}\setminus\mathbb{Q}\\ \frac{1}{q} & \text{if } $x=\frac{p}{q}, p \in \mathbb{Z}, q\in \mathbb{N} \text{ and } p,q\text{ are coprime}. \end{array} \right. $$ Show that $C_f = \mathbb{R}\setminus\mathbb{Q}$. Use part a) to show that it is impossible to define a function $g:\mathbb{R} \rightarrow \mathbb{R}$ with $C_g = \mathbb{Q}$ Attempt solution for a) Let $$ C_{f_1} = \left\{ a \in \mathbb{R}: \forall \epsilon > 0, \forall x,y \left( |x-a| < 1 \text{ and } |y-a|< 1 \right) \Rightarrow |f(x) - f(y)| < \epsilon \right\} $$ and $$ C_{f_k} = \left\{ a \in \mathbb{R}: \forall \epsilon > 0, \forall x,y \left( |x-a| < \frac{1}{k} \text{ and } |y-a|< \frac{1}{k} \right) \Rightarrow |f(x) - f(y)| < \epsilon \right\} $$ We have $C_{f_1} \supseteq C_{f_2} \supseteq \dots \supseteq C_{f_k} \supseteq C_{f_{k+1}} \supseteq \dots $ and $$ C_f = \bigcap_{k=1}^\infty C_{f_k} $$ so $C_f$ is $G_\delta$ Any help on questions a) or b) is appreciated! Thank you.
For a), take $\epsilon = \frac{1}{n}$ and define $C_f^n=\{ a \in \mathbb{R} \mid \exists \delta>0, a - \delta < x, y < a + \delta \implies |f(x)-f(y)|< \frac{1}{n}\}$. We have that $C_f^n$ is open because given $a \in C_f^n$ and $a - \delta < b < a + \delta$, we can take $\delta_1=\frac{1}{2}\min(a + \delta - b, b - a + \delta)$ and conclude that $b - \delta_1 < x, y < b + \delta_1 \implies |f(x)-f(y)|< \frac{1}{n}$, i.e., $b \in C_f^n$. Now note that $\cap_{n \ge 1} C_f^n = C_f$. This concludes a). The last part of b) follows from Baire's Category Theorem: if $\mathbb{Q}=\cap_n U_n$ with each $U_n$ open, then each $U_n$ is dense (it contains $\mathbb{Q}$), so each $\mathbb{R}\setminus U_n$ is closed and nowhere dense; together with the countably many singletons $\{q\}$, $q\in\mathbb{Q}$, these would cover $\mathbb{R}$ by countably many nowhere dense sets, contradicting Baire's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3519223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the set of values of x for which $\lvert \frac {x-1}{x+1} \rvert <2 $ How to solve this inequality question involving modulus? I can't get the same answer as the book [answer below]. I know the properties of absolute values, that is: if $\lvert x \rvert <k$, then $-k < x < k$. So for this question, this is my working: $-2<\lvert \frac {x-1}{x+1} \rvert <2 $. When $-2<\lvert \frac {x-1}{x+1} \rvert $: $-2x-2 < x-1$, $-1<3x$, $x>- \frac {1}{3}$. And when $\lvert \frac {x-1}{x+1} \rvert <2 $: $x-1<2x+2$, $x>-3$. So, I got $x> - \frac {1}{3}\:$ or $\:x>-3$. However, the answer given is $\{ x\mid x<-3\: \text{ or }\: x> -\frac {1}{3},\ x\in\mathbb{R} \}$. I don't think the answer from the book is wrong, since the following graph confirms that the book answer is correct. Please show me how to get the answer. Thank you
The problem with the analysis of the second part is that you have taken $|\frac{x-1}{x+1}| = \frac{x-1}{x+1}$ and done the analysis. This is false: what about those $x$ for which we have $|\frac{x-1}{x+1}| = \boxed{-\frac{x-1}{x+1}}$? You need to break your analysis according to where $|\frac{x-1}{x+1}| = \frac{x-1}{x+1}$ and where it is the negative of the expression. After this, you can work separately on each component. For example, when is $\frac{x-1}{x+1} > 0$? It happens if and only if both numerator and denominator are positive, or both are negative. One checks that this comes to $x<-1$ or $x>1$. In these intervals, one solves $-2<\frac{x-1}{x+1}<2$. (i.e. one solves this, then takes the intersection with $x<-1 \cup x>1$). On the interval $-1 < x \leq 1$, one solves $-2 < -\frac{x-1}{x+1}< 2$. Then we can put them together to finish. In short, your argument fails because you assumed that the value $\frac{x-1}{x+1}$ was positive all the time: instead you must break it into where it is negative, where it is positive and then work separately on both parts.
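A quick numerical sanity check of the book's solution set $\{x<-3\} \cup \{x>-\frac13\}$ (Python; the sampling range and the tolerance for skipping the pole are arbitrary choices):

```python
import random

def in_solution(x):                  # the book's answer: x < -3 or x > -1/3
    return x < -3 or x > -1/3

random.seed(0)
ok = True
for _ in range(10000):
    x = random.uniform(-10, 10)
    if abs(x + 1) < 1e-6:            # skip the pole at x = -1
        continue
    satisfied = abs((x - 1) / (x + 1)) < 2
    if satisfied != in_solution(x):
        ok = False
print(ok)  # True
```

Sample points such as $x=-2$ (where the ratio is $3$) fail the inequality, while $x=0$ (ratio $1$) passes, matching the set above.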
{ "language": "en", "url": "https://math.stackexchange.com/questions/3519520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the law of a random variable Let $X$ be a discrete random variable taking values in $\mathbb{N}^* = \left\{1, 2, 3, \ldots \right\}$, and such that $\exists p \in (0,1) \forall n \geq 1$: $$ \mathbb{P}[X=n] = p \mathbb{P}[X \geq n] $$ Find the law of $X$. I understand the definition of the law of a random variable, but I have trouble applying it in specific cases such as this one. Can someone help me?
You can build a simple recursion for $p_n := P(X=n)$ as follows: * *$p_1 = p\underbrace{\sum_{n\geq 1}p_n}_{=1} = p$ *$p_n - p_{n+1} = p\left(\sum_{k\geq n}p_k - \sum_{k\geq n+1}p_k\right) = pp_n\Leftrightarrow p_{n+1} =(1-p)p_n$ All together: $p_1 = p, p_{n+1} = (1-p)p_n \Rightarrow \boxed{p_n = p(1-p)^{n-1}}$
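The boxed law is the geometric distribution. A quick numeric check (Python; $p=0.3$ is an arbitrary choice) that it satisfies the defining property $\mathbb{P}[X=n] = p\,\mathbb{P}[X \geq n]$ and sums to $1$:

```python
p = 0.3                                      # arbitrary p in (0,1)
p_n = lambda n: p * (1 - p) ** (n - 1)       # candidate law

for n in range(1, 60):
    tail = sum(p_n(k) for k in range(n, 3000))   # numeric P[X >= n]
    assert abs(p_n(n) - p * tail) < 1e-12

print(round(sum(p_n(n) for n in range(1, 3000)), 10))  # 1.0
```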
{ "language": "en", "url": "https://math.stackexchange.com/questions/3519664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"If x - a is a factor of polynomial P(x), then a is a factor of the constant term of the polynomial." - Confused with proof I have recently started learning about polynomials. I've been able to grasp polynomial long division algorithm and the remainder and factor theorems and also a few other common-sense theorems about polynomials. There's just one property of polynomials I don't quite understand the proof of. The property: "If x - a is a factor of polynomial P(x), then a is a factor of the constant term of the polynomial." There are 2 proofs that I've seen so far that prove this theorem. The first proof I understand and makes complete sense to me. In my view, I think proof 1 is easier to understand. Proof 1: Proof 2: The second proof is the one I don't understand. More specifically, the part that I don't understand is how: Can someone please carefully explain how those two expressions are equal to each other? I just don't see how those expressions are equal. I can't find any common factors that have been taken out or what logic has been used to rewrite the expression in that way.
This does not answer your very question, but the theorem seems easy. If $(x-a)$ divides $P(x)$ then $P(a)=0$. But $P(a)=a(p_na^{n-1}+\cdots+a p_2+p_1)+p_0$ and $p_0=-a(\cdots)$.
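A small numeric illustration of the theorem for an integer-coefficient polynomial (Python; the cubic with roots $-1, 2, 3$ is an arbitrary example):

```python
# P(x) = (x + 1)(x - 2)(x - 3) = x^3 - 4x^2 + x + 6, constant term 6
P = lambda x: x**3 - 4*x**2 + x + 6

roots = [a for a in range(-6, 7) if a != 0 and P(a) == 0]
print(roots)                                 # [-1, 2, 3]
assert all(6 % abs(a) == 0 for a in roots)   # each integer root divides the constant term
```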
{ "language": "en", "url": "https://math.stackexchange.com/questions/3519794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Given a Möbius Transformation $w=f(z)$ find $f(-\bar{z})$ Let $w=f(z)$ be a Möbius Transformation, and let $\gamma,\Gamma\subset\mathbb{C}$ be the following curves: $$\gamma=\{z\in\mathbb{C}\mid \Re(z)=0\}\\ \Gamma=\{w\in\mathbb{C}\mid |w-w_0|=r\}$$ Where $w_0\in\mathbb{C}$ and $r\in\mathbb{R^+}$. Given that $f(\gamma)=\Gamma$, find the value of $h(z)=f(-\bar{z})$ using $w,w_0$ and $r$. I know that $f(z)$ is a Möbius Transformation, therefore: $$f(z)=\frac{az+b}{cz+d}$$ For some $a,b,c,d\in\mathbb{C}$. Now because $f(\gamma)=\Gamma$, I receive: $$f(0)=\frac bd\\f(\bar\infty)=\frac ac$$ Therefore $\frac ac, \frac bd\in\Gamma$. Choosing any other $z\in\gamma$ and trying to work out the equations led me to some ugly algebra which I couldn't manage to handle. I am sure there is an easier way to handle this but I don't know how. Thanks
$z$ and $-\bar z$ are symmetric about the line $\gamma$. Such points are conjugate. Mobius transformations happen to preserve conjugate points. Thus $h(z)$ is symmetric to $f(z)$ relative to the circle $\Gamma$. This means they are the images of each other under circle inversion. So we get $h(z)=w_0+\dfrac{r^2( w-w_0)}{|w-w_0|^2}$.
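A numeric check with a concrete choice (Python; it assumes the standard example $f(z)=\frac{z-1}{z+1}$, which sends the imaginary axis to the unit circle, so $w_0=0$ and $r=1$; the sampling box is arbitrary):

```python
import random

f = lambda z: (z - 1) / (z + 1)    # sends Re(z) = 0 to |w| = 1, so w0 = 0, r = 1
w0, r = 0j, 1.0

random.seed(2)
for _ in range(1000):
    z = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    w = f(z)
    h = f(-z.conjugate())                          # image of the mirror point
    inv = w0 + r**2 * (w - w0) / abs(w - w0)**2    # inversion of w in the circle
    assert abs(h - inv) < 1e-9 * (1 + abs(inv))
print("h(z) = f(-conj(z)) is the inversion of w = f(z) in the circle")
```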
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f:[-1,1]\rightarrow \Bbb R$ and $f(\sin(\frac{1}{n}))=\cos(\frac{1}{n})$ and $f'(0)$ exist. Prove that $f(0)=1$ Let $f:[-1,1]\rightarrow \mathbb{R}$ and $f(\sin(\frac{1}{n}))=\cos(\frac{1}{n})$ and $f'(0)$ exist. Prove that $f(0)=1$. What I did: because $f'(0)$ exists, $f$ is continuous at $0$, and $f(0)= \lim_{x\to 0} f(x) = \lim_{n\to \infty} f\left(\sin\left(\frac{1}{n}\right)\right) = \lim_{n\to \infty} \cos\left(\frac{1}{n}\right) = 1.$ Is this correct? If not, how to prove it? Thanks a lot
Perfect answer! Your arguments are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does it mean to prove if $p$ then $q$? Does that mean to prove $p\rightarrow q$ is a true statement? Then since when $p$ is false, $p\rightarrow q$ is vacuously true, do I only have to prove $q$ is true when $p$ is true?
That is the gist of it, yeah. There are, in practice, several ways to do this, and here is a short summary. A direct proof uses intermediate, already-known implications chained together like this. $$ p\to p_1\\ p_1\to p_2\\ \vdots\\ p_n\to q $$ A contrapositive proof is a direct proof of the statement $$ \text{not }q\to\text{not }p $$ and a proof by contradiction is a direct proof of the statement $$ (p\text{ and not }q)\to \text{contradiction / absurdity} $$
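The three proof targets listed above are logically equivalent, which is what licenses contrapositive proofs and proofs by contradiction; a truth-table check (Python):

```python
from itertools import product

for p, q in product([False, True], repeat=2):
    direct         = (not p) or q            # p -> q
    contrapositive = q or (not p)            # (not q) -> (not p)
    contradiction  = not (p and (not q))     # (p and not q) -> absurdity
    assert direct == contrapositive == contradiction
print("all three formulations agree on every truth assignment")
```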
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can any polynomials in the rational field be decomposed like this I' ve learned that the following examples can be used to decompose a factor in this way: x^5 - 5 x + 12 = (x - a) (a^4 - (5 a^3)/8 + (7 a^2)/8 + 1/8 (5 a^2 - 12 a) + 1/8 (5 a^3 - 12 a^2) + ((5 a^4)/16 - a^3/8 + (7 a^2)/16 - 3/16 (5 a^2 - 12 a) + 1/16 (12 a^3 - 5 a^4) + 1/8 (12 a^2 - 5 a^3) - (9 a)/4) x^2 + ((5 a^4)/16 + a^3/4 + ( 5 a^2)/16 + 1/16 (12 a - 5 a^2) + 1/16 (12 a^3 - 5 a^4) + a/2 + 1/4 (12 - 5 a) - 3) x + a x^3 + a/4 + 1/4 (5 a - 12) + x^4 - 2) Can any polynomials f(x) in the field of rational numbers be factorized into the form of (x - a) g (x, a)? Besides a, other coefficients of g (x, a) should also be in the rational number field. In the case of $x^5-5 x+12$, we can know the algebraic relations of his five roots (The letter a is a root of equation $x^5−5x+12$): $x^5-5 x+12=(x-a) \left(x-\frac{1}{8} \left(-a^4-a^3-a^2-2 \sqrt{2} \sqrt{3 a^3-2 a^2+a+4}-a+4\right)\right) \left(x-\frac{1}{8} \left(-a^4-a^3-a^2+2 \sqrt{2} \sqrt{3 a^3-2 a^2+a+4}-a+4\right)\right) \left(x-\frac{1}{8} \left(a^4+a^3+a^2-2 \sqrt{2} \sqrt{a^4+a^2+6 a-8}-3 a-4\right)\right) \left(x-\frac{1}{8} \left(a^4+a^3+a^2+2 \sqrt{2} \sqrt{a^4+a^2+6 a-8}-3 a-4\right)\right)$ I used the function in this link to find the relationship between a polynomial Galois group and a root set.
It seems to me that you are asking two separate questions. One is whether every $f$ can be factored as $(x-\alpha)g(x)$ where the coefficients of $g$ are simple expressions in (rationals and) $\alpha$. The other is whether every $f$ can be factored as $(x-\alpha)(x-p_1(\alpha))\cdots(x-p_{n-1}(\alpha))$ where each $p_i$ is a radical expression in $\alpha$. The answer to the first question is Yes. Let $K={\bf Q}(\alpha)$. Then $f$ has a zero in $K$, so it factors over $K$, and one factor is $x-\alpha$, and the other is a polynomial $g(x)$ with coefficients in $K$, so the coefficients of $g$ are polynomials in $\alpha$ (of degree less than the degree of $\alpha$). The answer to the second question is Yes if the degree of $f$ is five (or less), but in general it's No if the degree of $f$ is larger. If the degree of $f$ is five, then $f(x)=(x-\alpha)g(x)$, where $g$ is a polynomial of degree four, whose coefficients are polynomials of degree at most four in $\alpha$. Since $g$ has degree four, its roots can be expressed in radicals in its coefficients, which is to say, radicals of polynomials in $\alpha$. That's where stuff like $\sqrt{\alpha^4+\alpha^2+6\alpha-8}$ comes from in your example. But if the degree of $f$ exceeds five, then the degree of $g$ is at least five, and (in general) there will be no radical expression for the roots of $g$ in terms of radicals of polynomials in $\alpha$.
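For the question's example $x^5-5x+12$, the first claim can be made concrete: synthetic division by $(x-\alpha)$ gives the quotient $g(x)=x^4+\alpha x^3+\alpha^2x^2+\alpha^3x+(\alpha^4-5)$, the remainder $\alpha^5-5\alpha+12$ vanishing because $\alpha$ is a root. A numeric sanity check (Python; the bisection bracket and the sample points are ad-hoc choices):

```python
def f(x):
    return x**5 - 5*x + 12

# the unique real root of f, by bisection (f(-2) < 0 < f(-1))
lo, hi = -2.0, -1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2

def g(x):  # quotient from synthetic division by (x - a)
    return x**4 + a*x**3 + a**2*x**2 + a**3*x + (a**4 - 5)

# check f(x) == (x - a) * g(x) at arbitrary sample points
for x in [0.0, 1.5, -3.2, 7.0]:
    assert abs(f(x) - (x - a) * g(x)) < 1e-8
print(round(a, 3))  # -1.842
```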
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is $G= \mathbb{Z}$ a group with binary operation defined as $a \cdot b \equiv a - b$? Is $G= \mathbb{Z}$ a group with binary operation defined as $a \cdot b \equiv a - b$? I am pretty sure the answer is NO but I would like some verification on my reasoning below: Associativity fails as in: Let $a,b,c \in G$ then $a \cdot (b \cdot c) = a \cdot (b-c) = a - (b-c) = a - b + c$. But $(a \cdot b) \cdot c = (a-b) \cdot c = a - b - c \neq a \cdot (b \cdot c)$. Identity Let $a \in G$. Then $a \cdot 0 = a - 0 = a$, so $0$ is the additive identity of $a$, hence the identity element in $G$. But $0 \cdot a = 0 - a = -a \neq a \cdot 0$ Inverse Let $a \in G \backslash \{0\}$. Assume $G$ is a group, then $a$ has an inverse $a^{-1}$. So $a \cdot a^{-1} = a - a^{-1} = a - (-a) \neq 0$. I am confident about associativity, but I am not sure if my arguments about the inverse and identity are justified.
You don't need to disprove every property for $(\mathbb Z,-)$ not to be a group, it suffices to show that at least one property does not hold. Indeed, as you rightly pointed out, associativity does not work, so it is not a group. If I tell you that every cat has whiskers, a tail and four paws, it suffices for you to show me that a human doesn't have whiskers in order to deduce that a human is not a cat (you don't have to bother with tail and paws).
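The associativity failure can be seen on a single concrete triple (Python):

```python
op = lambda a, b: a - b
left  = op(op(1, 2), 3)    # (1 - 2) - 3
right = op(1, op(2, 3))    # 1 - (2 - 3)
print(left, right)         # -4 2, so subtraction is not associative
```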
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Field Extension of $\mathbb{R}\left(x+\frac{1}{x}\right)$ Can someone help me to prove $[\mathbb{R}(x):\mathbb{R}\left(x+\frac{1}{x}\right)]=2$? My intuition is that the basis is $\{x,x+\frac{1}{x}\}$, but I cannot prove it. Thank you
The minimal polynomial of $x$ is $t^2-(x+1/x)t+1$. To see that $x$ doesn’t lie in $\Bbb R(x+1/x)$, assume first that it does. $x =P(x+1/x)$ where $P$ is a polynomial. $x= x^{-n}+a_{-n+1}x^{-n+1}+...+a_{n-1}x^{n-1}+x^n$, so $x^{-n}+a_{-n+1}x^{-n+1}+...+(a_1 - 1)x+...+a_{n-1}x^{n-1}+x^n = 0$. Therefore $a_1=1$ and all other $a_i =0$. But we must have $a_1=a_{-1}$ because the polynomial is symmetric in $x$ and $1/x$, which gives a contradiction.
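A numeric check (Python; the sample values of $x$ are arbitrary) that $t=x$ and $t=1/x$ are the two roots of $t^2-(x+1/x)t+1$, the polynomial quoted above:

```python
import random

random.seed(1)
for _ in range(1000):
    x = random.uniform(0.1, 10)
    s = x + 1/x                      # the element of R(x + 1/x)
    # t = x is a root of t^2 - s t + 1
    assert abs(x**2 - s*x + 1) < 1e-9
    # the other root is 1/x
    assert abs((1/x)**2 - s*(1/x) + 1) < 1e-9
print("ok")
```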
{ "language": "en", "url": "https://math.stackexchange.com/questions/3520767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve homogeneous linear recurrence relations with constant coefficients? Consider a sequence $(a_n)_{n\in\mathbb N}$ defined by $k$ initial values $(a_1,\dots,a_k)$ and $$a_{n+k}=c_{k-1}a_{n+k-1}+\dots+c_0a_n$$ for all $n\in\mathbb N$. What are some ways to get closed forms for $a_n$? What are some ways of rewriting $a_n$ that allows it to be computed without going through all previous values? For example, we have Binet's formula: $$F_n=\frac{\phi^n-(-\phi)^{-n}}{\sqrt5}$$ Furthermore, what about simultaneously defined linear recurrences? For example: $$\begin{cases}a_{n+1}=2a_n+b_n\\b_{n+1}=a_n+2b_n\end{cases}$$ How can these be solved? See also: Wikipedia: Recurrence relations. This is being repurposed in an effort to cut down on duplicates; see here: * *Coping with abstract duplicate questions *List of abstract duplicates
Characteristic/Auxiliary Polynomials The basic solution * *Suppose that $\alpha$ is a root of the associated polynomial $$x^k=c_{k-1}x^{k-1}+c_{k-2}x^{k-2}+\dots+c_0\quad(1)$$ Then it is also true that $$\alpha^{n+k}=c_{k-1}\alpha^{n+k-1}+\dots+c_0\alpha^n$$ So $a_n=\alpha^n$ satisfies the recurrence relation (but probably not the initial conditions). *Since the relation is linear, if $\alpha_1,\alpha_2,\dots,\alpha_k$ are the roots of the associated polynomial, then $$a_n=A_1\alpha_1^n+\dots+A_k\alpha_k^n$$ also satisfies the recurrence relation. Provided all $k$ roots are distinct, we can then use the $k$ initial conditions to solve for $A_1,\dots,A_k$. *Suppose that $\alpha$ is a repeated root. Then $\alpha$ is also a root of the derivative and so we have $$kx^{k-1}=c_{k-1}(k-1)x^{k-2}+\dots+c_1\quad(2)$$ Taking $nx^n(1)+x^{n+1}(2)$ we get $$(n+k)x^{n+k}=c_{k-1}(n+k-1)x^{n+k-1}+\dots+c_0nx^n$$ and so $a_n=n\alpha^n$ satisfies the recurrence relation. *Similarly, we find that if $\alpha$ is a root of order $h$ (so that $(x-\alpha)^h$ divides the polynomial), then $a_n=\alpha^n,n\alpha^n,n^2\alpha^n,\dots,n^{h-1}\alpha^n$ all satisfy the recurrence relation. *So in all cases the associated polynomial gives us $k$ solutions to the recurrence relation. We then take a suitable linear combination of those solutions to satisfy the initial conditions. Additional points *It often happens that all but one of the roots $\alpha$ of the polynomial satisfy $|\alpha|<1$, which means that their contribution to $a_n$ is negligible except possibly for small $n$. Since the $a_n$ are usually integers, this means we can often express the solution as $\lfloor A\alpha^n\rfloor$ or $\lceil A\alpha^n\rceil$ (where $\alpha$ is the root with $|\alpha|>1$). *We sometimes get simultaneous linear recurrences like the two in the question $$a_{n+1}=2a_n+b_n,b_{n+1}=a_n+2b_n$$ In this case we can eliminate all but one of the sequences, in a similar way to solving ordinary simultaneous equations.
In this case we have $b_n=a_{n+1}-2a_n$. Substituting into the other relation, $a_{n+2}-2a_{n+1}=a_n+2a_{n+1}-4a_n$, i.e. $a_{n+2}=4a_{n+1}-3a_n$, which is a single recurrence in $a_n$ that can be solved as above; $b_n$ is then recovered from $b_n=a_{n+1}-2a_n$.
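Illustrating the elimination on the question's system (Python; the initial values $a_0=1$, $b_0=0$ are an arbitrary choice). Eliminating $b_n$ gives $a_{n+2}=4a_{n+1}-3a_n$ with characteristic roots $1$ and $3$, so for these initial values $a_n=\frac{1+3^n}{2}$ and $b_n=a_{n+1}-2a_n=\frac{3^n-1}{2}$:

```python
# simultaneous recurrence a_{n+1} = 2a_n + b_n, b_{n+1} = a_n + 2b_n
a, b = 1, 0                        # assumed initial values a_0 = 1, b_0 = 0
for n in range(20):
    closed_a = (1 + 3**n) // 2     # A*1^n + B*3^n with A = B = 1/2
    closed_b = (3**n - 1) // 2
    assert (a, b) == (closed_a, closed_b)
    a, b = 2*a + b, a + 2*b
print("closed forms verified")
```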
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Reduction from Graph Isomorphism to String Isomorphism I was studying the reduction from Graph Isomorphism (GI) to String Isomorphism (SI) showed in this bachelor's thesis in Chapter 2.2 and was understanding the procedures just fine until I got stuck in a proof. My problem is in the following Lemma: Lemma 2.9. If SI is $O(f(n))$, then GI is $O(f(n^2) + n^2)$. Given $X = (\Omega, E)$ and $Y = (\Omega, F)$, we create the indicator functions $\eta,\iota : \Omega^2 \rightarrow \{0,1\}$ that encodes which edge is in $E$ and $F$ or not by using some sort of (unordered?) binary string. Then it gets weird for me. In the natural action $S = im(Sym(\Omega) \rightarrow Sym(\Omega^2))$ of $Sym(\Omega)$ on $\Omega^2$ we get an action in the format $Sym(\Omega) \times \Omega^2 \rightarrow S$? How is a function of $f \in Sym(\Omega)$ applied in $\Omega^2$ and how is $S$ constructed? I understand that the image of $Sym(\Omega) \rightarrow Sym(\Omega^2)$ will have leftovers elements in the right side, but the elements are to be chosen arbitrarily? The result is a group of bijections $I = Iso_{S}(\eta,\iota)$ that maps pairs from $\eta$ to pairs of $\iota$, resulting in $I = Iso(X,Y)$, as these two functions are indicator functions for edges. But I also can't really see from where the resulting complexity comes from. Can someone help me?
In the string isomorphism problem, we are given two length-$N$ strings over the same alphabet and a group $G$ of allowable permutations: a subgroup of $S_N$. The problem is to find all the elements of $G$ which permute one string into the other (or, in the decision version of the problem, to determine if there is any such element). The rough idea of this lemma is this. We encode $n$-vertex graphs as $n \times n$ adjacency matrices, and think of these as length-$n^2$ strings over the alphabet $\{0,1\}$. To remember that these are adjacency matrices, the subgroup $G$ that we use is the subgroup of those permutations that come from permuting the rows of the adjacency matrix in some way, then permuting the columns in the exact same way. These correspond exactly to permuting the vertices of a graph. Let's take a simple example, with $n=3$. Take the graphs $X$ with edges $\{12, 13\}$ and $Y$ with edges $\{13, 23\}$. Then: * *Their adjacency matrices are: $$A_X = \begin{bmatrix}0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 0\end{bmatrix} \qquad \text{and} \qquad A_Y = \begin{bmatrix}0 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0\end{bmatrix}$$ *We can "flatten these out" to read them as strings $011100100$ and $001001110$. (I'm just concatenating all the rows here). *The subgroup $G$ can be generated by the following two permutations: $$g_1 = (1\;5)\;(2\;4)\;(3\;6)\;(7\;8) \qquad \text{and} \qquad g_2 = (2\;3)\;(4\;7)\;(5\;9)\;(6\;8).$$ These correspond to swapping rows/columns $1$ and $2$, and to swapping rows/columns $2$ and $3$, respectively. To check this, apply $g_1$ to the string $abcdefghi$ getting $edfbachgi$, which if we write it as a $3\times 3$ matrix again is the transformation $$\begin{bmatrix}a & b & c \\ d & e & f \\ g & h & i\end{bmatrix} \leadsto \begin{bmatrix}e & d & f \\ b & a & c \\ h & g & i\end{bmatrix}.$$ This is exactly what we get by swapping rows $1$ and $2$ then columns $1$ and $2$. Similar work explains $g_2$. 
*In fact, there is a permutation in $G$ that turns $011100100$ into $001001110$. This permutation is the permutation $$g_1g_2g_1 = (1\;9)\;(2\;8)\;(3\;7)\;(4\;6)$$ that corresponds to swapping rows/columns $1$ and $3$. So the two strings are isomorphic. So are the two graphs: the graph isomorphism swaps vertices $1$ and $3$. The reason for the complexity is that we've turned an $n$-vertex $\textsf{GI}$ problem into an $n^2$-length $\textsf{SI}$ problem, with $O(n^2)$ processing time for the transformation (because we need to build the adjacency matrix). If we can solve a length-$n$ instance of $\textsf{SI}$ in $O(f(n))$ time for all $n$, then we can solve the resulting length-$n^2$ instance of $\textsf{SI}$ in $O(f(n^2))$ time. This means that we can solve an $n$-vertex instance of $\textsf{GI}$ in $O(f(n^2)) + O(n^2)$ time: $O(n^2)$ for the transformation, and $O(f(n^2))$ for whatever our $\textsf{SI}$ algorithm is.
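The worked example above can be replayed in a few lines (Python; strings are built by flattening the adjacency matrices row by row, exactly as in the answer):

```python
# adjacency matrices of X (edges 12, 13) and Y (edges 13, 23)
AX = [[0,1,1],[1,0,0],[1,0,0]]
AY = [[0,0,1],[0,0,1],[1,1,0]]

flat = lambda M: "".join(str(e) for row in M for e in row)
sX, sY = flat(AX), flat(AY)        # "011100100", "001001110"

# vertex permutation swapping vertices 1 and 3 (0-indexed: 0 and 2)
pi = [2, 1, 0]
# the induced action on the length-9 string permutes position 3*i + j
permuted = [[AX[pi[i]][pi[j]] for j in range(3)] for i in range(3)]
print(flat(permuted) == sY, sX, sY)  # True 011100100 001001110
```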
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finite harmonic sum inequality I wish to prove some inequality involving a finite harmonic series: $$\sum_{k=n+1}^{n^2}\frac{1}{k}>\sum_{k=2}^{n}\frac{1}{k}$$ Certainly $\frac{1}{nk+q}≥\frac{1}{n(k+1)}$ for $q=1,2,3,....,n.$ So that $$\sum_{q=1}^n\frac{1}{nk+q}≥\frac{1}{k+1}$$ Adding the last inequality from $k=1$ to $n-1$ should yield the required inequality but I don't see how it does.
You want to prove that $$ H_{n^2}-H_{n} \geq H_n -1 $$ i.e. that $$ H_{n^2}-2H_n\geq -1. $$ On the interval $\left[\frac{1}{2},1\right]$ we have $\frac{x-\log(1+x)}{x^2}\in\left[0.3,0.4\right]$, so $$ \frac{3}{25}\leq\sum_{k=2}^{n}\frac{1}{k}-\sum_{k=2}^{n}\log\left(1+\frac{1}{k}\right)\leq\frac{2}{5}\sum_{k=2}^{n}\frac{1}{k^2}\leq \frac{2}{5}(\zeta(2)-1)\leq\frac{7}{25} $$ where $\sum_{k=2}^{n}\log\left(1+\frac{1}{k}\right) = \log(n+1)-\log(2).$ It follows that $$ H_{n^2}-2H_n \geq \log(n^2+1)+\frac{3}{25}-2\log(n+1)-\frac{14}{25}\geq\log\frac{n^2+1}{(n+1)^2}-\frac{12}{25}\geq -1 $$ for any $n\geq 3$, and the other cases can be easily checked by hand.
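A brute-force check of $H_{n^2}-2H_n > -1$, i.e. of the original inequality, for small $n$ (Python; the tested range is arbitrary):

```python
def H(m):                          # harmonic number H_m
    return sum(1.0/k for k in range(1, m + 1))

for n in range(2, 100):
    lhs = H(n*n) - H(n)            # sum_{k=n+1}^{n^2} 1/k
    rhs = H(n) - 1                 # sum_{k=2}^{n} 1/k
    assert lhs > rhs
print("inequality holds for n = 2..99")
```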
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Asymptotic expansions for $\int_x^\infty e^{-y^3} dy$ I would like to find asymptotic expansions of $F(x) = \int_x^\infty e^{-y^3} dy$ as (a) $x \to 0$ and as (b) $x \to \infty$. I solved (a) using the expansion for $e^{-y^3}$ around $0$: $$F(x) = \int_0^\infty e^{-y^3} dy - \int_0^x\sum_{j= 0}^\infty (-1)^j \frac{y^{3j}}{j!} dy = \int_0^\infty e^{-y^3} dy - \sum_{j= 0}^\infty (-1)^j \frac{x^{3j+1}}{(3j+1)j!} \\ = C - x + \frac{x^4}{4} - \frac{x^7}{14} + \ldots$$ I know that the series here is convergent for all $x$ but convergence is so slow for large $x$ that it does not provide a good asymptotic representation. How can I get a better asymptotic expansion as $x \to \infty$? I thought about changing variables $y \to 1/t$ to get: $$F(x) = \int_0^{1/x}\frac{e^{-t^{-3}}}{t^2}dt = \int_0^{1/x}\sum_{j=0}^\infty(-1)^j \frac{t^{-3j-2}}{j!} dt$$ But this cannot be integrated term by term.
The integral equals $\frac13 \Gamma(1/3,x^3)$, where $\Gamma(a,z)$ is the upper incomplete gamma function. You can look up that the asymptotics around $0$ are $\frac{1}{3}\Gamma \left(\frac{1}{3}\right)-x+\frac{x^4}{4}+O\left(x^5\right)$, and around $\infty$ are $e^{-x^3+O\left(\left(\frac{1}{x}\right)^6\right)} \left(\frac{1}{3 x^2}-\frac{2}{9 x^5}+O\left(\left(\frac{1}{x}\right)^6\right)\right)$.
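A numeric check of the leading term $F(x)\sim e^{-x^3}/(3x^2)$ (Python; plain composite Simpson quadrature on a truncated interval, which is safe because the integrand decays extremely fast; the truncation length and step count are ad-hoc choices):

```python
import math

def F(x, pad=3.0, steps=6000):     # numeric integral of e^{-y^3} over [x, x+pad]
    h = pad / steps
    total = 0.0
    for i in range(steps):         # Simpson's rule on each subinterval
        y0, y1 = x + i*h, x + (i+1)*h
        ym = (y0 + y1) / 2
        total += (h/6) * (math.exp(-y0**3) + 4*math.exp(-ym**3) + math.exp(-y1**3))
    return total

for x in [2.0, 3.0, 4.0]:
    ratio = F(x) * 3 * x*x * math.exp(x**3)
    # leading asymptotic F(x) ~ e^{-x^3}/(3x^2); the correction is O(1/x^3)
    assert abs(ratio - 1) < 1.0 / x**3
print("asymptotic ratio checks passed")
```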
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Calculus: Find the limit: $\lim_{h\rightarrow0}\frac{f(x+h)−f(x)}{h}$ given that $f(x)=\sin(2x)$ Find the limit: $$\lim_{h\rightarrow0}\frac{f(x+h)−f(x)}{h}$$ Given that $f(x)=\sin(2x)$. Tried many ways, but I kept on getting an indeterminate form. I can't find a way to cancel out terms on the numerator and denominator. Any help will be appreciated.
Hint $$\sin(2x+2h)=\sin 2x\cos 2h+\cos 2x\sin 2h$$ $$\lim_{u\to 0}{\sin u\over u}=1$$ $$\lim_{u\to 0}{\cos u-1\over u}=0$$
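Numerically, the difference quotient approaches $2\cos 2x$, the value the hint leads to (Python; $x=0.7$ is an arbitrary sample point):

```python
import math

def diff_quot(x, h):
    return (math.sin(2*(x + h)) - math.sin(2*x)) / h

x = 0.7
for h in [1e-2, 1e-4, 1e-6]:
    print(h, diff_quot(x, h))
print(2*math.cos(2*x))             # the limit: 2 cos 2x
```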
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\lim_{n\to \infty}\frac{1}{\sqrt[4]{{n^4}+n+2}}+\cdots+\frac{1}{\sqrt[4]{{n^4}+5n-1}}$ I am new to analysis and I have no clue how to solve this limit. This is an exam problem from my analysis 1 course, there are one or two similar ones on the exam. $$\lim_{n\to \infty}\frac{1}{\sqrt[4]{{n^4}+n+2}}+\cdots+\frac{1}{\sqrt[4]{{n^4}+5n-1}}$$ The only thing I tryed was this silly idea to rewrite it as one single fraction and apply Stolz-Cesaro theorem, but it got way too messy so I doubt that is the way. I can't find explanations generally on these limits of sequences of the type $\frac{1}{f(x_n)}+\cdots+\frac{1}{f(x_{n+k})}$ (I hope this is a good representation). Should series be involved in solving these kinds of limits ? EDIT: The limit is supposed to be solved only with the knowledge prior to derivatives and integrals. Thanks in advance
Also note that $\frac{1}{n+1}<\frac{1}{(n^4+n+k)^{1/4}} <\frac{1}{n}$ so $$ \sum_{k=1}^{4n-1} \frac{1}{n+1}<S_n=\sum_{k=1}^{4n-1} \frac{1}{(n^4+n+k)^{1/4}} <\sum_{k=1}^{4n-1} \frac{1}{n}.$$ So, we note that $$ \lim_{n \rightarrow \infty} \frac{4n-1}{n+1}=4= \lim_{n \rightarrow \infty} \frac{4n-1}{n}= \lim_{n \rightarrow \infty} S_n$$ As $n \rightarrow \infty$, the first term $1/(n^4+n+2)^{1/4} \rightarrow 0$, so it can safely be neglected. Note that $$\frac{1}{(n^4+n+k)^{1/4}} > \frac{1}{n+1}$$ can be checked to be true for all $1\le k\le 4n-1$
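A numeric check that the sum approaches $4$ (Python; the sampled values of $n$ are arbitrary):

```python
def S(n):
    # 1/(n^4+n+2)^{1/4} + ... + 1/(n^4+5n-1)^{1/4}
    return sum(1.0 / (n**4 + m) ** 0.25 for m in range(n + 2, 5 * n))

for n in [10, 100, 1000]:
    print(n, round(S(n), 4))
assert abs(S(1000) - 4) < 0.01
```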
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Comparing the Dual Banach Space with norm topology and Weak* topology If we consider the dual Banach space $V' :=\{f:V\rightarrow \mathbb{C}$ such that $f$ is linear and bounded$\}$. We know $V'$ also forms a Banach space in the norm topology where the norm is the general operator norm. But the closed unit ball is not compact in the norm topology ($V$ is not finite dimensional), while it is compact in the weak* topology by the Banach-Alaoglu Theorem. My question is: are these two topologies, i.e. the norm topology and the weak* topology, comparable, i.e. is one of them weaker than the other? The second question is whether $V'$ is still a Banach space with the weak* topology?
A subbasic open set of the weak$^\ast$ topology is of the form $O(v,\epsilon):=\{f \in V': |f(v)| < \epsilon\}$ (for some $v \in V, \epsilon>0$ and so for every $f \in O(v,\epsilon)$, we have $f \in B_d(f, \epsilon) \subseteq O(v, \epsilon)$, where $d$ is the sup-operator norm and so every weak$^\ast$ open set is operator-norm open. So exactly as the name suggests, the weak$^\ast$ topology is weaker than the norm topology on $V'$ and strictly weaker for infinite-dimensional $V$, by the Banach-Alaoglu theorem (and the fact that a Banach space is locally compact iff it is finite-dimensional). So $V'$ will almost never be a Banach space in the weak$^\ast$ topology (only if $V \simeq V' \simeq \Bbb R^n$ for some $n \in \Bbb N$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$O(h^3)$ in the second-order approximation for $f(\mathbf{x}^*)$ I am currently studying the textbook Algorithms for Optimization by Mikel J. Kochenderfer and Tim A. Wheeler. Chapter 1.6.2 Multivariate says the following: The following conditions are necessary for $\mathbf{x}$ to be at a local minimum of $f$: * *$\nabla f(\mathbf{x}) = 0$, the first-order necessary condition (FONC) *$\nabla^2 f(\mathbf{x})$ is positive semidefinite (for a review of this definition, see appendix C.6), the second-order necessary condition (SONC) The FONC and SONC are generalizations of the univariate case. The FONC tells us that the function is not changing at $\mathbf{x}$. Figure 1.8 shows examples of multivariate functions where the FONC is satisfied. The SONC tells us that $\mathbf{x}$ is in a bowl. The FONC and SONC can be obtained from a simple analysis. In order for $\mathbf{x}^*$ to be at a local minimum, it must be smaller than those values around it: $$f(\mathbf{x}^*) \le f(\mathbf{x} + h \mathbf{y}) \iff f(\mathbf{x} + h\mathbf{y}) - f(\mathbf{x}^*) \ge 0 \tag{1.14}$$ If we write the second-order approximation for $f(\mathbf{x}^*)$, we get: $$ f(\mathbf{x}^* + h \mathbf{y}) = f(\mathbf{x}^*) + h \nabla f(\mathbf{x}^*)^T \mathbf{y} + \dfrac{1}{2} h^2 \mathbf{y}^T \nabla^2 f(\mathbf{x}^*)\mathbf{y} + O(h^3) \tag{1.15}$$ I'm wondering where the $O(h^3)$ term came from in 1.15? I cannot see why it would algebraically be there? I would appreciate it if someone would please take the time to clarify this.
The term $O(h^3)$ means that the approximation error is locally bounded by a constant multiple of $|h|^3$. For example, the second order estimation of $f(x)=e^x$ at $x=2$ is $g(x) = e^2(0.5x^2 - x + 1)$, so $f(x) = g(x) + O((x-2)^3)$. The higher the power of the error term, the more rapidly it goes to $0$ as $x\to 2$. Note that this example shows that the error bound is only local; it does not hold on the entirety of $\mathbb{R}$.
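The claim can be seen numerically: for $f(x)=e^x$ about $x=2$, the second-order error divided by $h^3$ approaches the constant $f'''(2)/3! = e^2/6$ (Python; the step sizes are arbitrary):

```python
import math

f = math.exp
a = 2.0
def taylor2(h):                    # second-order Taylor polynomial of e^x about x = 2
    return math.exp(a) * (1 + h + h*h/2)

for h in [0.1, 0.05, 0.025]:
    err = f(a + h) - taylor2(h)
    print(h, err / h**3)           # tends to e^2/6 as h -> 0
print(math.exp(a) / 6)
```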
{ "language": "en", "url": "https://math.stackexchange.com/questions/3521915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Infinite series: defining the sum Consider the following sum described below: $$\sum_{i=0}^{x}\frac{1}{\sqrt{(3+i)(2k-4-i)}}$$ Where $0 \leq x \leq 2k-8$ and even and $k\geq 5$ is a constant integer. I need to find the closed form expression for this sum, however, after many attempts I couldn't. This would make it easier for me, because it makes up a function that I am proving to be increasing. Can you help me?
To show that it is increasing, consider $$a_i=\frac1 {\sqrt{(3+i)(2k-4-i)}}$$ and let $$b_i=\frac 1 {a_i^2}=(3+i)(2k-4-i)\implies b_{i+1}-b_i=2k-2i-8$$ So $(b_{i+1}-b_i)$ is decreasing with $i$ for a given $k$ and so $(a_{i+1}-a_i)$ is increasing. For a closed form, I am quite skeptical (even using special functions). But, this makes a nice looking function.
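A quick check of the difference identity $b_{i+1}-b_i = 2k-2i-8$ used above (Python; $k=9$ is an arbitrary admissible value):

```python
k = 9
def b(i):
    return (3 + i) * (2*k - 4 - i)

for i in range(0, 2*k - 8):
    assert b(i + 1) - b(i) == 2*k - 2*i - 8
print("difference identity verified for k =", k)
```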
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Get complex angle when computing quaternions (Satellite attitude) I am modelling a, so far uncontrolled, satellite in MATLAB, along with its attitude. I have been researching about this matter and found that quaternions are the way to go, since they don't have singularities. My goal was to get angular velocities in each axis (body frame) from the satellite motion equations: $$I \frac{\partial ^2\theta}{\partial t^2} + \frac{\partial \theta}{\partial t} \times I \frac{\partial \theta}{\partial t} = \sum \tau$$ From which I got the angular velocity in each axis for every step of my simulation. I then computed the attitude quaternions using a formula I saw here: $$ q_{new} = q_{old} + \frac{dt}{2} \cdot \omega \cdot q_{old},$$ in which $\omega$ is the angular velocity. It seemed to work well but when I computed the Euler angles, using this: \begin{align} \phi &= atan2 (2(q_0q_1 + q_2q_3), 1- 2(q_1^2+q_2^2)) \\ \theta &= asin (2(q_0q_2-q_3q_1)) \\ \psi &= atan2 (2(q_0q_3+q_1q_2), 1-2(q_2^2+q_3^2)) \end{align} I got complex results in the pitch axis (outside the domain of the $asin$ function), which means something is obviously wrong but I can't think of any solution besides trying randomly until something makes sense. I even tried to divide the angular velocity by its norm but it didn't help. Does anyone know which of my steps is wrong? I hope I was clear enough. Thanks in advance, Hugo
Here's a reference to how the 3D game engine people do rotations using quaternions: https://www.3dgep.com/understanding-quaternions/ In particular, for this problem, I suggest working through SLERPing. I suspect you might find there is a better modelling solution by staying within quaternion arithmetic. Enjoy! (It's not my stuff, I just found it useful.)
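Independent of that reference, two standard practical fixes for the symptoms in the question are: (a) renormalize the quaternion after every Euler step, since $q \mathrel{+}= \frac{dt}{2}\,\omega q$ drifts off the unit sphere and that drift is what pushes the asin argument outside $[-1,1]$; and (b) clamp the asin argument against residual roundoff. A minimal sketch (Python rather than MATLAB; scalar-first $(w,x,y,z)$ convention and the question's $\omega q$ ordering; constant body rates and all numbers are arbitrary assumptions):

```python
import math

def qmul(q, r):                    # Hamilton product, scalar-first (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def step(q, omega, dt):
    # q_new = q + dt/2 * omega_quat * q, then renormalize to unit length
    dq = qmul((0.0, *omega), q)
    q = tuple(a + 0.5*dt*b for a, b in zip(q, dq))
    n = math.sqrt(sum(c*c for c in q))
    return tuple(c/n for c in q)

q = (1.0, 0.0, 0.0, 0.0)
for _ in range(10000):
    q = step(q, (0.01, 0.02, -0.015), 0.001)

# pitch via a clamped asin, so roundoff can never leave the domain
w, x, y, z = q
pitch = math.asin(max(-1.0, min(1.0, 2*(w*y - z*x))))
print(abs(sum(c*c for c in q) - 1.0) < 1e-12, pitch)
```

Without the renormalization line, the norm of $q$ grows every step, which reproduces exactly the "complex angle" behaviour described.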
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why would this eventually simplify into the original circle equation? I was trying to solve this problem: The point $A$ has coordinates $(5, 16)$ and the point $B$ has coordinates $(-4,4)$. The variable point $P$ has coordinates $(x,y)$ and moves on a path such that $ AP = 2BP$. Show that the Cartesian equation of the path of $P$ is: $$(x+7)^2 +y^2 = 100\tag{*}$$ So, what I did was to find a point on the circular path and show that the relationship holds. However, the actual answer is to let: $$(x-5)^2 +(y-16)^2 = 4(x+4)^2 +4(y-4)^2$$ This also just means $AP=2BP$. However, it will simplify to the original circle equation $(*)$. I literally don't know why. Even if $AP=2BP$, why would this simplify to $(*)$? What's the mechanism behind this? Thank you very much for your replies.
The locus of the points such that the ratio of their distances to two given points is constant is a circle: $$\frac{\sqrt{(x-x_1)^2+(y-y_1)^2}}{\sqrt{(x-x_0)^2+(y-y_0)^2}}=\lambda.$$ This is because $$(x-x_1)^2+(y-y_1)^2-\lambda^2((x-x_0)^2+(y-y_0)^2)=0$$ is the equation of a conic, such that * *the coefficients of $x^2,y^2$ are equal, and *there is no cross term $xy$. These are the conditions to have a circle. When $\lambda=1$, the quadratic terms cancel each other, giving the equation of a line (the mediatrix, i.e. the perpendicular bisector of the segment joining the two points).
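As a quick numeric check (a Python sketch, not part of the original answer), one can verify that $AP^2-4\,BP^2$ is exactly $-3$ times the circle expression, so the equation $AP=2BP$ and the circle equation have the same solution set:

```python
import random

A, B = (5, 16), (-4, 4)

def ap2_minus_4bp2(x, y):
    # AP^2 - (2 BP)^2; zero exactly when AP = 2 BP (distances are >= 0)
    return (x - A[0])**2 + (y - A[1])**2 - 4*((x - B[0])**2 + (y - B[1])**2)

def circle_expr(x, y):
    # the target circle: (x+7)^2 + y^2 - 100
    return (x + 7)**2 + y**2 - 100

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    # the two expressions differ by the nonzero constant factor -3,
    # so they vanish on exactly the same set of points
    assert abs(ap2_minus_4bp2(x, y) - (-3) * circle_expr(x, y)) < 1e-6
print("AP = 2BP is equivalent to (x+7)^2 + y^2 = 100")
```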
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why the following description implies the limit is finite? Let $f:[0,T]\rightarrow \mathbb{R}$ $$V(f) = \lim_{\|\Pi\|\rightarrow 0}\sum_{j=0}^{n-1}|f(t_{j+1})-f(t_j)|$$ where $\Pi$ is a partition of $[0,T]$ and define $\|\Pi\|$ as $$\Pi=\{t_0,t_1,\cdots,t_n\}, \ \ 0=t_0<t_1<\ldots<t_n=T, \ \ \|\Pi\|=\max_i(t_{i+1}-t_i)$$ Now the solution manual says that Suppose $V(f)$ is finite. Then for any $\epsilon>0$, there exists an $N\geq 1$, $$\sum_{j=0}^{n-1}|f(t_{j+1})-f(t_j)|<V(f) + \epsilon$$ for all $n\geq N$. I am confused about the description of $+\epsilon$ and $\forall n \geq N$ parts. How do they and this description imply finite of $V(f)$? I believe this is a trick widely used in real analysis; however, cannot still understand this.
What you are asking for is the converse of what is stated in the manual. The manual says that if $V(f)<\infty$ then something happens. The stated property is immediate from the definition of limit. It is something like this: if $\lim a_n=l$ and $\epsilon >0$, then there exists $N$ such that $a_n <l+\epsilon$ for all $n \geq N$. Do you agree with this statement?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Removing a penny raises mean coin value from 17 to 18. How many nickels? So this is a question that my sister (grade 8) got wrong on a test. There is a collection of quarters, dimes, nickels, and pennies in a jar. The mean value of these coins is 17 cents, but when you remove a penny, the mean becomes 18 cents. How many nickels are in the jar? I tried to find the multiple of $17$ which, after subtracting $1$ and dividing by $16$, gives $18$. For that I got $17 \cdot 17 = 289$, then $289 - 1 = 288$, and $288 / 16 = 18$, which is correct. The only problem I'm encountering is which coins are in the collection and, of course, how many nickels there are.
You got most of the way there already. You know there are 16 coins that are worth a total of 288 cents. Three of these must be pennies (since 288 is 3 mod 5 and all non-pennies have value 0 mod 5), so that leaves 13 worth 285. This can be achieved with 11 quarters and 2 nickels. In a jar with $Q$ quarters and $13-Q$ other coins, for $Q\leq10$, the total value of the jar is at most $25Q+10(13-Q)=130+15Q\leq280<285$ cents, so you need at least 11 quarters. However, with 11 quarters, you have ten cents left and 2 coins, so they must both be nickels.
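The case analysis above can be double-checked with a short brute force (a Python sketch, not part of the original answer; it enumerates all jars of 16 coins worth 288 cents):

```python
from itertools import product

# exhaustive search: 16 coins (quarters, dimes, nickels, pennies)
# worth 288 cents in total
solutions = []
for q, d, n in product(range(17), repeat=3):
    p = 16 - q - d - n                  # pennies fill the remaining slots
    if p >= 0 and 25*q + 10*d + 5*n + p == 288:
        solutions.append((q, d, n, p))

print(solutions)    # -> [(11, 0, 2, 3)]: 11 quarters, 2 nickels, 3 pennies
```

The search confirms that the jar is forced: 11 quarters, no dimes, 2 nickels, and 3 pennies.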
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that the sequence $x_n=\sum\limits_{k=1}^n\frac1{\sqrt{k+1}+\sqrt{k}}$ is unbounded. Consider the sequence $\{x_n\}_{n\ge1}$ defined by $$x_n=\sum_{k=1}^n\frac{1}{\sqrt{k+1}+\sqrt{k}}, \forall n\in\mathbb{N}.$$ Is $\{x_n\}_{n\ge 1}$ bounded or unbounded? I solved the problem as stated in the answer I posted. Is it possible to solve the problem in a better and more rigorous manner?
$\sqrt {k+1}+\sqrt k \leq \sqrt {2k}+\sqrt k<3\sqrt k$ for $k\ge1$. Hence the given sum is at least $\sum\limits_{k=1}^{n} \frac 1 {3\sqrt k}$. Now use the fact that the series $\sum\limits_{k=1}^{\infty} \frac 1 {3\sqrt k}$ is divergent.
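A numeric sanity check (Python sketch, not part of the answer): the partial sums dominate the lower bound above, and in fact each term rationalizes to $\sqrt{k+1}-\sqrt k$, so the sum telescopes to $\sqrt{n+1}-1$, which is visibly unbounded:

```python
import math

def x_n(n):
    # partial sum of the series in the question
    return sum(1 / (math.sqrt(k + 1) + math.sqrt(k)) for k in range(1, n + 1))

for n in [10, 100, 10_000]:
    s = x_n(n)
    bound = sum(1 / (3 * math.sqrt(k)) for k in range(1, n + 1))
    assert s >= bound                    # the answer's lower bound
    # each term rationalizes to sqrt(k+1) - sqrt(k), so the sum
    # telescopes to sqrt(n+1) - 1
    assert abs(s - (math.sqrt(n + 1) - 1)) < 1e-9
    print(n, round(s, 4))
```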
{ "language": "en", "url": "https://math.stackexchange.com/questions/3522867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Without use of Darboux's theorem, prove that $f'$, where $f(x)=x^2\sin\left(\frac{1}{x}\right)$, enjoys the IVP Prove (without use of Darboux's theorem) that the derivative of the function $$f(x)=\left \{\begin {array}{ll} x^2\sin\left(\frac{1}{x}\right)&,~x\neq0\\ 0&,~x=0\\ \end{array} \right.,$$ that is, $$f'(x)=\left \{\begin {array}{ll} 2x\sin\left(\frac{1}{x}\right)-\cos\left(\frac{1}{x}\right) &\textrm{, $x \neq 0$}\\ 0 &\textrm{, $x =0$} \end{array} \right.$$ (see the pic), enjoys the Intermediate Value Property (IVP). This function ($f'$) is a classic counterexample showing that the IVP does not characterize continuity (it is not continuous at $0$). All the proofs I have seen establish the IVP using Darboux's theorem on derivatives. Is there a way to give a straightforward proof for this particular function $f'$? (One may of course restrict to intervals $I$ containing the discontinuity $0$, since $f'$ is continuous on every interval $I$ not containing $0$, so by the IVT for continuous functions we get the desired result there.) Thanks in advance.
One keeps all definitions as in the question. Lemma. Let $I$ be an interval either of the form $(\alpha ,0]$ or $[0,\beta)$. Let $J:=I\setminus \{0\}$. Then $f'(J)$ contains the interval $(-1,1)$. Proof. It is clear that $$\limsup_{x\rightarrow 0^+}f'(x)=1,\liminf_{x\rightarrow 0^+}f'(x)=-1,$$ and similarly $$\limsup_{x\rightarrow 0^-}f'(x)=1,\liminf_{x\rightarrow 0^-}f'(x)=-1.$$ One proves the case when $J=(0,\beta)$ (the other case being similar). If $y_0\in (-1,1)$, then $y_0\in (-1+\epsilon,1-\epsilon)$ for some positive $\epsilon$. From the above observations, there exists $x_2\in J$ such that $f'(x_2)>1-\epsilon$. And then there exists $0<x_1<x_2$ such that $f'(x_1)<-1+\epsilon$. Now since $(x_1,x_2)\subseteq J$ and $f'$ is continuous there, the IVT applied to $f'$ on $[x_1,x_2]$ yields some $x_0\in (x_1,x_2)$ with $f'(x_0)=y_0.$ This completes the proof. Proposition. $f'$ satisfies the IVP, namely for $I=[a,b]$, $f'$ assumes any value $y$ between $f'(a)$ and $f'(b)$. Proof. As the OP remarked, it suffices to prove the case when $0\in I$ (so $a\leq 0,b\geq 0$). Also, the case when $f'(a)=f'(b)$ being void, one assumes that $f'(a)\neq f'(b).$ Let $y$ be any value between $f'(a)$ and $f'(b)$. Now one may divide into two cases. Case 1. $|y|<1.$ This follows from the lemma above. Case 2. There are $4$ subcases (possibly overlapping) as follows (where one applies the lemma above to choose $x_0$): Subcase 1. $1\leq y<f'(b)$: Take $0<x_0<b$ such that $f'(x_0)<1$ and apply the IVT to $f'$ on $[x_0,b]$. Subcase 2. $1\leq y<f'(a)$: Take $a<x_0<0$ such that $f'(x_0)<1$ and apply the IVT to $f'$ on $[a,x_0]$. Subcase 3. $f'(a) < y\leq -1$: Take $a<x_0<0$ such that $f'(x_0)>-1$ and apply the IVT to $f'$ on $[a,x_0]$. Subcase 4. $f'(b)<y\leq -1$: Take $0<x_0<b$ such that $f'(x_0)>-1$ and apply the IVT to $f'$ on $[x_0,b]$. Combining all cases, the proposition is proven.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it good to mix up syntax with semantics in logic? I just want to know whether it is "good" to use syntax and semantics together in a formal proof in mathematics. By the completeness theorem, the syntax of first-order logic is equivalent to its semantics. However, in Zhongwang Lu's Mathematical Logic Towards Computer Science (the book is written in Chinese; the title is my own translation), Lu says that many people (writing in Chinese) mix up syntax with semantics, and that one of the purposes of his writing the book is to correct this trend. So is it good to mix syntax with semantics in mathematical logic?
In many low-level introductions to the foundations of mathematics there is a lack of precision when it comes to completeness and soundness. For example, there is a quite popular "mixing up syntax with semantics" confusion over the definition of completeness. Some authors use the notion of "completeness" for both: * *semantic completeness: for any tautology of the system, there is a formal proof: $(\models \phi )\Rightarrow( \vdash \phi)$. *syntactic completeness (or negation completeness): for any sentence $\phi$, either $\phi$ or $\lnot \phi$ is provable in the system: $(\not \vdash \lnot \phi) \Rightarrow (\vdash \phi)$. These notions are not equivalent, however. Negation completeness is stronger than semantic completeness. Also, at least in German, there is another very similar confusion, since some authors use "correctness" also for consistency, but of course these notions aren't equivalent either. Maybe this is what the Chinese author means when he says people mix up syntax with semantics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find all possible minimal edge covers of $K_6$? $K_6$ is the complete graph on $6$ vertices. Working it out, I found that there would be this many cases. Can somebody tell me if what I have done is right? Or are there any more cases?
OEIS A053530 provides an exponential generating function $$\exp(-x - x^2/2 + x\exp(x)),$$ and the count for $n=6$ is 171, which confirms that your 111 plus the missing 60 cover all cases.
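The count of 171 can also be confirmed by brute force over all $2^{15}$ edge subsets of $K_6$ (a Python sketch, not part of the original answer; `covers` and the minimality test are ad-hoc helpers):

```python
from itertools import combinations

# brute-force count of minimal edge covers of K_6: an edge set that
# touches all 6 vertices and such that removing any single edge
# leaves some vertex uncovered
vertices = range(6)
edges = list(combinations(vertices, 2))     # the 15 edges of K_6

def covers(edge_set):
    return len({v for e in edge_set for v in e}) == 6

count = 0
for r in range(1, len(edges) + 1):
    for cand in combinations(edges, r):
        if covers(cand) and all(
            not covers(cand[:i] + cand[i + 1:]) for i in range(r)
        ):
            count += 1
print(count)    # -> 171
```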
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that for any $n > 0$, if $a^n$ is even, then $a$ is even The proof at hand is: Prove that for any $n > 0$, if $a^n$ is even, then $a$ is even. Hint: Contradiction. So I know that to start the problem in contradiction format, we assume $a^n$ is even but $a$ is odd, so that $a = 2k+1$. Then plugging that into $a^n$, we get $(2k+1)^n$. This is where I become stuck. Thanks in advance!
The contrapositive is that $$a \text{ odd }\implies a^n \text { odd}$$ Use induction here: if $a^1=2k_1+1$, then $a^2=(2k_1+1)^2=2(2k_1^2+2k_1)+1=2k_2+1$ hence true for $n=2$ Then assume true for $n=m$, $(2k_1+1)^m=2k_m+1$ Then $$(2k_1+1)^{m+1}=(2k_m+1)(2k_1+1)=4k_mk_1+2k_m+2k_1+1=2(2k_mk_1+k_m+k_1)+1$$ Hence true with $k_{m+1}=2k_mk_1+k_m+k_1$
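A finite empirical check of both directions (a Python sketch over small ranges only; it illustrates, but does not replace, the induction):

```python
# the contrapositive: odd a gives odd a^n
for a in range(1, 50, 2):              # odd values of a
    for n in range(1, 8):
        assert (a**n) % 2 == 1         # odd^n stays odd
# the original statement: a^n even forces a even
for a in range(1, 50):
    for n in range(1, 8):
        if (a**n) % 2 == 0:
            assert a % 2 == 0
print("verified for a < 50, n < 8")
```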
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Density of the product of uniformly distributed random variables Let $X$ and $Y$ be independent with uniform distribution over $(0,a)$ and set $Z=X^2Y^2$. What is the density of $Z$? \begin{align} F_{Z}(t) &= \mathbb P(X^2Y^2<t) = \begin{cases} 0,& t<0\\ 1,& t>a^4 \end{cases} \end{align} Consider the case when $0<t<a^4$: $$F_{Z}(t) = \mathbb P(XY<\sqrt t)=\int\int_{xy<\sqrt t}f(x,y)dxdy=\int\int_{xy<\sqrt t}f_X(x)f_{Y}(y)dxdy =1/a^2 \int_{\sqrt t/a}^{a}dx\int_{\sqrt t/x}^{a}dy$$ Are the integration limits set correctly?
No. You should integrate below the hyperbola $yx=\sqrt{t}$, i.e. $y=\frac{\sqrt{t}}{x}$, as in the picture: So $$ F_{Z}(t) = \mathbb P(XY<\sqrt t)=1/a^2 \int_0^{\sqrt t/a}dx\int_0^a dy + 1/a^2 \int_{\sqrt t/a}^a dx\int_0^{\sqrt t/x} dy $$ Here is a slightly modified picture in which the integration bounds for $y$ are drawn at each $x$.
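Evaluating these corrected integrals gives the closed form $F_Z(t)=\frac{\sqrt t}{a^2}\bigl(1+\ln\frac{a^2}{\sqrt t}\bigr)$ for $0<t<a^4$. Here is a Monte Carlo sanity check of that formula (a Python sketch, not part of the original answer; the choice $a=2$ is arbitrary):

```python
import math
import random

def cdf_Z(t, a):
    # closed form from the corrected bounds, valid for 0 < t < a**4:
    # F_Z(t) = (sqrt(t)/a^2) * (1 + log(a^2/sqrt(t)))
    s = math.sqrt(t)
    return (s / a**2) * (1 + math.log(a**2 / s))

random.seed(42)
a = 2.0
trials = 200_000
for t in [0.5, 2.0, 8.0]:               # points inside (0, a^4) = (0, 16)
    hits = sum(
        (random.uniform(0, a) * random.uniform(0, a))**2 < t
        for _ in range(trials)
    )
    emp = hits / trials
    assert abs(emp - cdf_Z(t, a)) < 0.01
    print(t, round(emp, 4), round(cdf_Z(t, a), 4))
```

Note the consistency check at the endpoint: as $t\to a^4$, $\sqrt t\to a^2$ and the formula tends to $1$, as a CDF must.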
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find out the probability that the tallest person in a group of people is a man? Assume we have a population of $N$ men and women such that exactly $N/2$ people are men (set $M$) and $N/2$ people are women (set $W$). Assume that the standard deviation of height is the same $\sigma$ for both groups, but their averages differ, with $\mu_M > \mu_W$. We pick $n$ people at random from the population such that $n/2$ people are men and $n/2$ people are women. Knowing $n$, what's the probability that the tallest person in the selected group is a man? Note: this is not homework; I came up with this question on my own. Note 2: all distributions are normal distributions. EDIT: My current reasoning is that if $Z = M-W$: $E(Z) = E(M-W) = E(M) - E(W)$ and $V(Z) = V(M-W) = V(M) + V(W) = 2\sigma^2$ (so the standard deviation is $\sqrt2\,\sigma$). Thus the probability that one man is taller than one woman is $P(Z>0)$, and so the probability that all women are taller than all men in the sample is $(1 - P(Z>0))^{n/2}$. However, this result has led me to some very odd conclusions, so I suspect I am wrong.
Let \begin{align*} M_1, \cdots, M_{N/2} &\overset{\text{iid}}{\sim} N(\mu_M, \sigma^2) \\ W_1, \cdots, W_{N/2} &\overset{\text{iid}}{\sim} N(\mu_W, \sigma^2) \\ S_M, S_W &\overset{\text{iid}}{\sim} \text{SRSWOR}(\{1, \cdots, N/2\}) \\ \end{align*} where SRSWOR means simple random sampling without replacement. Define \begin{align*} \textbf{A}_{S} = \{A_i\}_{i \in S} \end{align*} So we are looking at computing \begin{align*} P(\max(\textbf{M}_{S_M}) > \max(\textbf{W}_{S_W})) \end{align*} This is equal to \begin{align*} \sum_{s_M, s_W \in \binom{\{1, \cdots, N/2\}}{n/2}}P(\max(\textbf{M}_{S_M}) > \max(\textbf{W}_{S_W})|S_M=s_M, S_W=s_W)P(S_M=s_M)P(S_W=s_W) \end{align*} But since \begin{align*} P(S_M=s_M) = P(S_W=s_W) = 1/\binom{N/2}{n/2} \end{align*} due to SRSWOR, and \begin{align*} P(\max(\textbf{M}_{S_M}) > \max(\textbf{W}_{S_W})|S_M=s_M, S_W=s_W) = P(\max(\textbf{M}_{S_M}) > \max(\textbf{W}_{S_W})) \end{align*} due to the i.i.d. assumption, we can just consider computing \begin{align*} P(\max(M_1, \cdots, M_{n/2}) > \max(W_1, \cdots, W_{n/2})) \end{align*} Working directly with the distribution of the maximum of normal random variables is straightforward numerically, although I don't think a closed form exists.
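To make the numerical route concrete, here is a sketch (Python; the height parameters 175/165 cm, $\sigma=7$, and $m=n/2=5$ are hypothetical, not from the question) that integrates the density of the men's maximum against the CDF of the women's maximum, and checks the result against simulation:

```python
import math
import random

def Phi(x, mu, s):
    # normal CDF via the error function
    return 0.5 * (1 + math.erf((x - mu) / (s * math.sqrt(2))))

def phi(x, mu, s):
    # normal density
    return math.exp(-((x - mu) / s) ** 2 / 2) / (s * math.sqrt(2 * math.pi))

def p_tallest_is_man(mu_m, mu_w, sigma, m, steps=4000):
    # P(max of m men > max of m women): the density of the men's maximum
    # is m * Phi_M^{m-1} * phi_M; multiply by Phi_W^m and integrate
    lo, hi = mu_w - 8 * sigma, mu_m + 8 * sigma
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0        # trapezoid weights
        total += (w * m * Phi(x, mu_m, sigma) ** (m - 1)
                  * phi(x, mu_m, sigma) * Phi(x, mu_w, sigma) ** m)
    return total * h

mu_m, mu_w, sigma, m = 175.0, 165.0, 7.0, 5    # hypothetical parameters
p_num = p_tallest_is_man(mu_m, mu_w, sigma, m)

random.seed(1)
trials = 100_000
hits = sum(
    max(random.gauss(mu_m, sigma) for _ in range(m))
    > max(random.gauss(mu_w, sigma) for _ in range(m))
    for _ in range(trials)
)
p_mc = hits / trials
assert abs(p_num - p_mc) < 0.01
print(round(p_num, 4), round(p_mc, 4))
```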
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Measurability of a stopped local martingale Let $X_t$ be a local martingale and let $\tau_n$ be a localizing sequence. We then know that $$E[X_{t\land \tau_n}|\mathcal F_s]=X_{s\land\tau_n}$$ $1)$ But $X_{s\land\tau_n}$ is measurable w.r.t. which sigma-algebra? Is it $\mathcal F_s$-measurable? $2)$ Moreover, if I have $E[e^{X_{t\land \tau_n} + t\land \tau_n}|\mathcal F_s]=1$, can I bring $t\land \tau_n$ to the other side, obtaining $E[e^{X_{t\land \tau_n}}|\mathcal F_s]=e^{- t\land \tau_n}$? I think that it is not possible because $\tau_n$ is not $\mathcal F_s$-measurable, but then w.r.t. what is it measurable?
Yes. It is measurable w.r.t. $\mathcal F_{s \wedge \tau_n}$, and $\mathcal F_{s \wedge \tau_n} \subset \mathcal F_s$. On the other hand, $e^{t\wedge \tau_n}$ is measurable w.r.t. $\mathcal F_t$, so it is legitimate to bring it to the RHS only when you know that $t \leq s$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $a\nmid b$, $ax^3+bx+(b+a)=0$ has no natural number solutions Let $a,b\in\mathbb Z$ with $a\neq0$. I need to prove that if $a\nmid b$, then the equation $ax^3+bx+(b+a)=0$ does not have a solution that is a natural number. I noticed that regardless of the values of $a$ and $b$, the equation will always have a root at $-1$ (i.e. $ax^3+bx+(b+a)=a(x^2-x+(1+\frac{b}{a}))(x+1)$). So now the problem reduces to proving that if $a\nmid b$, then $x^2-x+(1+\frac{b}{a})=0$ has no natural number solutions. I've used the quadratic formula and tried to analyse this a number of ways, but I'm quite stuck.
Your quadratic equation is, with the factor of $a$ included, $$ax^2 - ax + a + b = 0 \implies b = -a(x^2 - x + 1) \tag{1}\label{eq1A}$$ As Bill Dubuque's question comment states, for $x \in \mathbb{N}$, you have $a$ dividing the right hand side, so it must also divide the left hand side, i.e., $b$. Thus, you require $a \mid b$. However, since you're given $a \not\mid b$, this can't be true, meaning there is no natural number solution for $x$.
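An exhaustive check on a small grid is consistent with the proof (a Python sketch, not part of the original answer; the coefficient ranges are arbitrary, and for these ranges $|ax^3|$ dominates $|bx+b+a|$ once $x\ge7$, so testing $x$ up to 200 covers every possible root):

```python
# whenever a does not divide b, a*x^3 + b*x + (b + a) should have
# no natural-number root
for a in range(-10, 11):
    if a == 0:
        continue
    for b in range(-30, 31):
        if b % a == 0:
            continue               # keep only the case a does not divide b
        for x in range(1, 201):
            assert a*x**3 + b*x + (b + a) != 0
print("no natural root found whenever a does not divide b")
```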
{ "language": "en", "url": "https://math.stackexchange.com/questions/3523960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do we use the Intermediate Value Theorem to show that $2^x = \frac{10}{x}$ has a solution for $x>0$? The Intermediate Value Theorem states that if $f$ is continuous on a closed interval $[a,b]$ and $L$ is a value between $f(a)$ and $f(b)$, then there exists a value $c$ in that interval such that $f(c) = L$. We know both functions require $x>0$; however, this is not a closed interval. I went ahead on the problem anyway and decided to solve for $x$: $2^x = \frac{10}{x}$, $x2^x = 10$, $x^2\log2 = \log10$, $x^2 = \frac{1}{\log2}$, $x = \sqrt{\frac{1}{\log2}}$. However, when graphing these two functions I found that they meet at ~$2.236$, which is not equal to $x = \sqrt{\frac{1}{\log2}}$. I am very new to Calculus and the Intermediate Value Theorem and would appreciate a simple explanation. Thank you.
You have an error in one of the steps; it should be as follows: $$x2^x=10 \implies \ln(x2^x)=\ln 10 \implies \ln x+ x \ln 2=\ln 10.$$ To use the IVT, let $f(x)=x2^x-10$ and argue that it is a continuous function for $x>0$. After this, find two positive real numbers $a$ and $b$ such that $f(a)<0$ and $f(b)>0$. Then by the IVT you can claim that there must be some point $c$ in the interval $[a,b]$ where $f$ takes the value $0$. This would mean $c2^c=10$. If you try $a=1$, you get $f(1)=2-10=-8<0$; now try a value for $b$.
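The IVT argument can be mirrored numerically with bisection, repeatedly halving an interval on which $f$ changes sign (a Python sketch, using $a=1$ and $b=3$ as one possible choice):

```python
def f(x):
    # continuous for x > 0; a root of f solves 2**x == 10/x
    return x * 2**x - 10

# f(1) = -8 < 0 and f(3) = 14 > 0, so the IVT guarantees a root in (1, 3)
lo, hi = 1.0, 3.0
assert f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)                 # about 2.19
assert abs(f(root)) < 1e-9
```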
{ "language": "en", "url": "https://math.stackexchange.com/questions/3524118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $f(z) = \text{Im }z$ is not differentiable I want to prove that $f(z) = \text{Im }z$ is not differentiable anywhere. I know how to prove it easily with the Cauchy-Riemann equations; however, I'm also interested in proving it by just using the definition of differentiability. I know that the definition of differentiability for complex functions is almost the same as for real functions, but I'm still having trouble proving it since we're only considering the imaginary part. Thanks in advance
For any $z\in \mathbb{C}$ consider the limit of the difference quotient in two different directions, namely in the purely imaginary direction and also in the real direction. These limits will not agree, so $\text{Im }z$ is not complex differentiable.
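Numerically, the two directional difference quotients can be compared directly (a Python sketch with an arbitrary base point $z_0$; along the real axis the quotient is $0$, along the imaginary axis it is $-i$):

```python
# the difference quotient (f(z0 + h) - f(z0)) / h for f(z) = Im(z)
def im(z):
    return complex(z).imag

z0 = 1.5 + 0.7j
for h in [1e-2, 1e-4, 1e-6]:
    q_real = (im(z0 + h) - im(z0)) / h              # real direction
    q_imag = (im(z0 + 1j * h) - im(z0)) / (1j * h)  # imaginary direction
    assert abs(q_real) < 1e-8
    assert abs(q_imag - (-1j)) < 1e-8
print("directional limits disagree: 0 along R, -1j along iR")
```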
{ "language": "en", "url": "https://math.stackexchange.com/questions/3524362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How can we count the colorings of two cycles with one edge in common? There is a question that asks for the number of proper colorings of the graph formed by a $C_5$ and a $C_6$ sharing one edge: I have used the fundamental deletion-contraction reduction on cycles in order to get a recursive formula (I know there is a closed formula for that too), but when I try to use the same approach on this particular question, it becomes very complicated. How can I find a formula for this kind of graph? I'm looking for the chromatic polynomial.
Let's first find a formula for a "mouse graph" ($C_n$ with a path $P_m$ attached). This is easy with induction and the fundamental deletion-contraction. We know that for the $n$-cycle the chromatic polynomial is $C_n(x) = (x-1)^n+(-1)^n(x-1)$, and we get that for the mouse it is $$M_{n,m}(x) = (x-1)^m C_n(x)$$ Now for the graph in question. Let's denote the chromatic polynomial by $P_{n,m}$, where $n$ is the number of nodes in the first cycle and $m$ is the number of extra nodes in the second. So what we want is $P_{6, 3}$. When we use deletion-contraction on the first edge that comes out on the second cycle, we get the recursion (with base case $P_{n,0} (x) = C_n(x)$) $$P_{n,m}(x) = M_{n,m}(x) - P_{n,m-1}(x)\\ = M_{n,m}(x) - M_{n,m-1}(x) + P_{n,m-2}(x)\\ = \dots =\\ = \sum_{j=0}^m (-1)^j M_{n,m-j}(x) = C_{n}(x)\sum_{j=0}^m (-1)^j (x-1)^{m-j}\\ = C_{n}(x)(-1)^m\sum_{j=0}^m (1-x)^{j} \\ = C_n(x)(-1)^m\frac{1-(1-x)^{m+1}}{x}\\ = C_n(x)\frac{(x-1)^{m+1} + (-1)^m}{x}$$ You can check the answer with the following SAGE code:

```
R.<x> = QQ['x']

def c(n):
    return x if n == 1 else (x-1)**n + (-1)**n*(x-1)

def makeGraph(n, m):
    N = n + m
    g = Graph(N)
    for i in range(N):
        g.add_edge(i, (i+1) % N)
    if n > 1 and m > 1:
        g.add_edge(0, n-1)
    return g

def getFormula(n, m):
    # TODO: doesn't work for n == 1 or m == 1
    p = c(n)*(-1)**m*(1 - (1-x)**(m+1))/x
    return p

for n in range(1, 10):
    for m in range(1, 10):
        p = makeGraph(n, m).chromatic_polynomial()
        pFormula = getFormula(n, m)
        if p != pFormula:
            print("Wrong formula with %d, %d" % (n, m))
            print(p)
            print(pFormula)
```

EDIT: The formula doesn't work for the cases where $n=1$ or $m=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3524470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find all polynomials $P$ for which $(P(x)-x)\mid P^{(n)}(x)-x$ Here $n\gt1$ is a fixed natural number. Find all polynomials $P(x)$ with complex coefficients for which $(P(x)-x)\mid P^{(n)}(x)-x,$ where $P^{(n)}$ is the $n$th iterate: $P^{(1)}(x)=P(x)$ and $P^{(i+1)}(x) = P(P^{(i)}(x))$. What I have proved so far: $P(x)-x$ does not have any double roots, and the problem is equivalent to showing $P(x)-x\mid P'(x)^{n}-1$.
If $r$ is a root of $P(x)-x$ of order $m$, i.e. $P(x) = x + O((x-r)^m)$ as $x \to r$, then I claim $P^{(n)}(x) = x + O((x-r)^m)$ as well. This follows by induction on $n$: write $P^{(n+1)}(x)-x = \bigl(P(P^{(n)}(x))-P^{(n)}(x)\bigr) + \bigl(P^{(n)}(x)-x\bigr)$; the second term is $O((x-r)^m)$ by the induction hypothesis, and the first is $O((P^{(n)}(x)-r)^m) = O((x-r)^m)$ since $P^{(n)}(x)-r = (x-r) + O((x-r)^m)$. Therefore all roots of $P(x) - x$ are roots of $P^{(n)}(x) - x$ with the same or greater multiplicity. We conclude that $P(x) - x$ always divides $P^{(n)}(x) - x$.
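The divisibility claim can be tested symbolically with exact rational arithmetic (a self-contained Python sketch, not part of the original answer; the helper names `p_mul`, `p_mod`, `compose` are ad-hoc, and the three sample polynomials are arbitrary):

```python
from fractions import Fraction

# polynomials as coefficient lists, constant term first

def p_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += Fraction(a) * Fraction(b)
    return r

def p_sub(p, q):
    n = max(len(p), len(q))
    return [Fraction(p[i] if i < len(p) else 0)
            - Fraction(q[i] if i < len(q) else 0) for i in range(n)]

def p_mod(p, d):
    # remainder of p modulo d (leading coefficient of d nonzero);
    # each step cancels the leading term, so the degree strictly drops
    p = [Fraction(c) for c in p]
    while p and p[-1] == 0:
        p.pop()
    while len(p) >= len(d):
        factor = p[-1] / Fraction(d[-1])
        shift = len(p) - len(d)
        p = p_sub(p, [0] * shift + [factor * c for c in d])
        while p and p[-1] == 0:
            p.pop()
    return p

def compose(p, q):
    # p(q(x)) by Horner's scheme
    r = [Fraction(p[-1])]
    for c in reversed(p[:-1]):
        r = p_mul(r, q)
        r[0] += Fraction(c)
    return r

x = [0, 1]
for P in ([3, 1, 2], [0, -1, 0, 1], [5, 2]):   # 2x^2+x+3, x^3-x, 2x+5
    Pn = list(P)
    for n in range(2, 5):
        Pn = compose(P, Pn)                     # the n-th iterate
        rem = p_mod(p_sub(Pn, x), p_sub(P, x))
        assert rem == []                        # exact divisibility
print("P(x)-x divides P^(n)(x)-x for all sampled P and n = 2..4")
```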
{ "language": "en", "url": "https://math.stackexchange.com/questions/3524594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$e^{-xy} + e^{xy} = 2e^{-y}$ - where am I going wrong? I am trying to see if there is any $x$ (real or complex) for which this equation can be solved. $$e^{-xy} + e^{xy} = 2e^{-y}$$ Step 1. Multiplying both sides by $y$: $$ye^{-xy} + ye^{xy} = 2ye^{-y}$$ Step 2. Partially differentiating the original equation (i.e. $e^{-xy} + e^{xy} = 2e^{-y}$) w.r.t. $x$: $$-ye^{-xy} + ye^{xy} = 0$$ Step 3. Adding Steps 1 and 2: $$2ye^{xy} = 2ye^{-y}$$ This gives $x=-1$. Obviously this is not a solution. Where am I going wrong? Or is there a complex value of $x$ for which Step 3 holds?
The derivative of two different constant functions is zero. The two functions are not equal. All equality of derivatives tells you is that the two functions agree up to a constant offset (recall the "${}+C$" from integral calculus). The partial derivative with respect to $x$ of two different functions depending only on $y$ is zero. All equality of partial derivatives with respect to $x$ tells you is that the two functions agree up to an arbitrary function in $y$. Your right-hand side is not "$0$"; it's "$F(y)$", which isn't nearly as convenient for your solution method.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3524722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }