Seeking Name of Theory for multiple integral Consider two definite integrals:
\begin{equation}
I_1 = \int_{R_1} f(x) \:dx\qquad I_2 = \int_{R_2} g(y) \:dy
\end{equation}
Then,
\begin{equation}
I_1 \cdot I_2 = \left[ \int_{R_1} f(x) \:dx \right] \left[ \int_{R_2} g(y) \:dy\right]
\end{equation}
Which under certain conditions becomes:
\begin{equation}
I_1 I_2 = \int_{R_1} \int_{R_2} f(x) g(y) \:dx \:dy
\end{equation}
I'm unsure whether I need to impose the following, but here $f(x)$ and $g(y)$ are real-valued functions continuous on $R_1$ and $R_2$ respectively.
My question is: What theorem(s) is required in order for this property to hold? i.e. how do we know when we can combine and separate multiple integrals?
| This is just linearity of the integral. It's automatic as long as the product $I_1\cdot I_2$ itself is meaningful (i.e. we need $I_1, I_2, I_1I_2$ to exist in some sense, so we shouldn't have e.g. $I_1=0$, $I_2 = \infty$).
Edit: After thinking a little more, it occurs to me that we also need the "inner" integral in your iterated integral to be finite.
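One can sanity-check the product-to-iterated-integral identity numerically. Here is a small Python sketch (the helper names and the example functions $f(x)=x$ on $[0,1]$, $g(y)=y^2$ on $[0,2]$ are my own choices) comparing $I_1 I_2$ with the double integral of $f(x)g(y)$ via midpoint Riemann sums:

```python
# Midpoint Riemann sums: check that I1*I2 equals the double integral of f(x)g(y).
def riemann_1d(h, a, b, n=2000):
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

def riemann_2d(F, a, b, c, d, n=400):
    dx, dy = (b - a) / n, (d - c) / n
    return sum(F(a + (i + 0.5) * dx, c + (j + 0.5) * dy)
               for i in range(n) for j in range(n)) * dx * dy

f = lambda x: x        # I1 = integral of x over [0,1] = 1/2
g = lambda y: y * y    # I2 = integral of y^2 over [0,2] = 8/3
I1 = riemann_1d(f, 0.0, 1.0)
I2 = riemann_1d(g, 0.0, 2.0)
I12 = riemann_2d(lambda x, y: f(x) * g(y), 0.0, 1.0, 0.0, 2.0)
```

Both `I1 * I2` and `I12` come out equal to $4/3$ up to discretization error, as the identity predicts.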
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3320552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving $2\left(\sqrt{2s-16}-\sqrt{s}\right)-8=0$ I am trying to solve the equation
$$2\left(\sqrt{2s-16}-\sqrt{s}\right)-8=0$$
Using the usual method (isolating a radical and squaring) I found two candidate roots, namely $32(2+\sqrt{3})$ and $32(2-\sqrt{3})$. But when I tried to confirm them, only $32(2+\sqrt{3})$ worked as a root, whereas $32(2-\sqrt{3})$ gave me a negative value.
Can someone help with this?
| Whenever you square an equation you may introduce additional roots. For example $x=1$ has a unique solution but $x^{2}=1^{2}$ has two solutions, $x=1$ and $x=-1$. Your 'regular method' involves squaring, so you got an extra root. After getting the two values for $s$ you have to go back to the given equation and keep only the one that really satisfies it.
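To make the final check concrete, here is a quick numerical sketch (my own addition) substituting both candidate values back into the left-hand side:

```python
import math

def lhs(s):
    # left-hand side of 2(sqrt(2s-16) - sqrt(s)) - 8
    return 2 * (math.sqrt(2 * s - 16) - math.sqrt(s)) - 8

s_plus = 32 * (2 + math.sqrt(3))    # genuine root: lhs is 0
s_minus = 32 * (2 - math.sqrt(3))   # extraneous root: lhs is strictly negative
```

Evaluating shows `lhs(s_plus)` vanishes (to rounding), while `lhs(s_minus)` is about $-11.7$, confirming only the first value solves the original equation.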
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3320682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
The set of all conjugation classes of group $G$ form a partition of $G$ So the property I would like to prove is the one stated in the title:
Consider the group $(G,\cdot)$. The set of all conjugation classes of the group $G$ forms a partition of $G$.
So to prove this, one needs to show that the union of the conjugation classes of $G$ is equal to $G$, and that two different conjugation classes are disjoint. But I don't know how to prove either part.
| Hint:
You can prove this by showing that the relation $\sim$ on $G$ defined by:$$g\sim h\iff \exists x\in G\;[gx=xh]$$is an equivalence relation.
The equivalence classes (they form a partition of $G$) are exactly the conjugation classes.
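As a concrete illustration of the hint (my own sketch, with names I chose), the following Python snippet computes the conjugation classes of $S_3$ and checks that they cover the group and are pairwise disjoint:

```python
from itertools import permutations

# Elements of S3 as tuples p, where p[i] is the image of i.
G = list(permutations(range(3)))

def compose(p, q):            # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conj_class(g):            # { x g x^{-1} : x in G }
    return frozenset(compose(compose(x, g), inverse(x)) for x in G)

classes = {conj_class(g) for g in G}
```

For $S_3$ this produces three classes of sizes $1$, $2$, $3$ (identity, 3-cycles, transpositions), which indeed partition the six group elements.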
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3320789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Conjugate diameters of ellipse How to find the length of major and minor axis of ellipse given the length of two conjugate diameters and the angle between them?
I am aware of how to construct the ellipse using the above facts (not by Rytz's construction). I would like to know, independent of what method of construction one uses, how one can find the lengths of the axes.
| Here's a geometric construction: if $MN$ and $DE$ are conjugate diameters, draw line $QQ'$ through $N$ perpendicular to $DE$ (see diagram below). Points $Q$ and $Q'$ must be chosen such that $NQ=NQ'=OD$. Major axis $IR$ is the bisector of angle $\angle QOQ'$ and minor axis $TS$ is perpendicular to it.
Their lengths can be computed from:
$$
\tag{1}
IR=OQ'+OQ,\quad TS=OQ'-OQ.
$$
If $ON=a$, $OD=b$ and the angle between them is $\theta$, then from the cosine rule applied to triangles $ONQ$ and $ONQ'$ we get:
$$
OQ^2=a^2+b^2-2ab\sin\theta,\quad OQ'^2=a^2+b^2+2ab\sin\theta.
$$
Inserting these into $(1)$ we finally obtain:
$$
OR\cdot OS=ab\sin\theta,\quad OR^2+OS^2=a^2+b^2.
$$
These equalities could have been directly derived, as they are well known properties of conjugate diameters (see properties 1. and 2. listed here).
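These relations are easy to sanity-check numerically (a sketch of my own, not part of the construction): for the ellipse $x^2/A^2+y^2/B^2=1$, the points $(A\cos t, B\sin t)$ and $(-A\sin t, B\cos t)$ are endpoints of conjugate semi-diameters, and the recipe above should recover the semi-axes $A$ and $B$:

```python
import math

A, B = 5.0, 2.0                         # semi-axes of a test ellipse
t = 0.7                                 # arbitrary parameter value
P = (A * math.cos(t), B * math.sin(t))  # endpoint of first conjugate semi-diameter
D = (-A * math.sin(t), B * math.cos(t)) # endpoint of the conjugate one
a = math.hypot(*P)                      # ON
b = math.hypot(*D)                      # OD
cross = P[0] * D[1] - P[1] * D[0]       # = a*b*sin(theta)

OQ  = math.sqrt(a * a + b * b - 2 * abs(cross))
OQp = math.sqrt(a * a + b * b + 2 * abs(cross))
OR = (OQp + OQ) / 2                     # recovered semi-major axis
OS = (OQp - OQ) / 2                     # recovered semi-minor axis
```

Running this returns `OR = 5` and `OS = 2`, i.e. exactly $A$ and $B$, and one can also verify $OR\cdot OS = ab\sin\theta$ and $OR^2+OS^2=a^2+b^2$ directly.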
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3320901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What p.d.f. over angles is equivalent to a uniform distribution over a hypersphere? Eric Weisstein's Sphere Point Picking points out that sampling uniformly from each angle $\phi$ and $\theta$ in spherical coordinates does not sample from the uniform sphere because it clusters near the poles. I am interested in which distribution over the angles does sample uniformly over the area element.
For the spherical case, he notes that the random variables $\phi$ and $\theta$ that do correspond to sampling from the uniform sphere are:
$\theta = 2\pi u \\
\phi = \cos^{-1}(2v -1)$
where $u$ and $v$ are random variables uniformly distributed over [0, 1].
I would like to know how this extends to n-dimensional hyperspheres. Is there a similar expression for the distribution of the angles $\boldsymbol{\theta}$ when sampling from a uniform hypersphere?
Very grateful for any help!
(I'm aware that there are simpler ways to sample from the unit hypersphere such as this. I'm specifically interested in the probability density function of the angles.)
| If you're using hyperspherical coordinates, the area element of the $n$-sphere can be written $$ \sin^{n-1}(\phi_1)\sin^{n-2}(\phi_2)\ldots \sin(\phi_{n-1})d\phi_1\ldots d\phi_n$$ where $\phi_1,\ldots\phi_{n-1}$ range from $0$ to $\pi$ and $\phi_n$ ranges from $0$ to $2\pi.$
So this gives you the pdf's of the angles right there: $\phi_n$ is uniform, $\phi_{n-1}$ has a pdf proportional to $\sin(\phi_{n-1}),$ etc.
I don't see a particularly nice way to express the inverse CDF though, which is what you need to get the expression in terms of a uniform. For instance, the way it works for $\phi_{n-1}$ is that $$ F(\phi_{n-1}) = \frac{\int_0^{\phi_{n-1}}\sin(\phi_{n-1})d\phi_{n-1}}{\int_0^\pi \sin(\phi_{n-1})d\phi_{n-1}} = \frac{1}{2}(1-\cos(\phi_{n-1}))$$ which has inverse function $$F^{-1}(u) = \arccos(1-2u) $$ which agrees with what you were told for the polar angle in the three dimensional case.
But for $\phi_{n-2}$ we have $$ F(\phi_{n-2}) = \frac{\int_0^{\phi_{n-2}}\sin^2(\phi_{n-2})d\phi_{n-2}}{\int_0^\pi \sin(\phi_{n-2})d\phi_{n-2}} = \frac{1}{\pi}(\phi_{n-2}-\frac{1}{2}\sin(2\phi_{n-2}))$$ which doesn't have any nice inverse in closed form I'm aware of. That said, the CDFs all have closed forms and it is easy enough to invert them numerically.
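For the three-dimensional case one can check the claimed polar-angle law empirically (a sketch of my own): sample uniformly from $S^2$ by normalizing Gaussians, and compare the empirical CDF of $\phi=\arccos z$ with $F(\phi)=\tfrac12(1-\cos\phi)$, the CDF of the $\sin\phi$ density:

```python
import bisect
import math
import random

random.seed(0)
N = 100_000
phis = []
for _ in range(N):
    # Normalized 3D Gaussian => uniform point on the sphere.
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    phis.append(math.acos(max(-1.0, min(1.0, z / r))))  # clamp guards rounding
phis.sort()

def empirical_cdf(t):
    # fraction of sampled polar angles <= t
    return bisect.bisect_right(phis, t) / N
```

At any test point the empirical CDF agrees with $\tfrac12(1-\cos\phi)$ up to Monte Carlo noise, matching the claim that the polar angle has density proportional to $\sin\phi$.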
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there a matrix that can be used to find the transpose of a matrix? Let $A$ be a general $n\times n$ invertible matrix. Let $T^A$ be the "transposer" matrix, i.e. $T^A A = A'$ (so $T^A$ multiplied by $A$ equals the transpose of $A$). Does $T^A$ depend on the matrix $A$, or is $T^A = T^B$ for all invertible matrices? Prove your claim.
So can you take a matrix times the given matrix in order to find its transpose? If so, is that a general form that can be used for all invertible matrices?
I'm guessing it does not, but I am not totally sure or know how to go about proving that and I cannot find anything online about it.
| There is no general transposer matrix. To see this, note e.g. that the only transposer for the identity in any dimension is the identity itself, as the equation
$$T^II=I$$
needs to be fulfilled, but $T^II=T^I$.
However, for non-symmetric matrices $A$ the identity is definitely not the transposer, as $A^\top\neq A$ but $IA=A$.
Now, we may try to construct the transposer matrix by considering $X=(x_{ij})_{i,j\leq n}$ and $A=(a_{ij})_{i,j\leq n}$, and then solving the equation $XA=A^\top$ for $X$. I.e. by the laws of matrix multiplication, you have to have
$$a_{ij}=\sum_{k=1}^n a_{jk}x_{ik}$$
for all $i,j$. However, this system of linear equations is not always solvable (it can only fail when $A$ is not symmetric, since for symmetric $A$ we may take $X=I$). For this, consider the following example:
Look at $A=\begin{pmatrix}0 &1\\0 &0\end{pmatrix}$, then
$$\begin{pmatrix}x &y\\z &w\end{pmatrix}\begin{pmatrix}0 &1\\0 &0\end{pmatrix}=\begin{pmatrix}0 &x\\0 &z\end{pmatrix}\neq\begin{pmatrix}0 &0\\1 &0\end{pmatrix}$$
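The same obstruction is easy to see computationally (my own sketch): column $0$ of $XA$ is $X$ applied to the zero column of $A$, hence always zero, while column $0$ of $A^\top$ is $(0,1)^\top$ — so no $X$ can work, and a brute-force search confirms it:

```python
import random

def matmul(X, A):
    n = len(X)
    return [[sum(X[i][k] * A[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [0, 0]]
A_t = [[0, 0], [1, 0]]            # transpose of A

# Column 0 of X*A is X times the zero column of A, so entry (1,0) of X*A is
# always 0, while entry (1,0) of A^T is 1. Spot-check over random integer X:
random.seed(1)
found = any(
    matmul([[random.randint(-9, 9) for _ in range(2)] for _ in range(2)], A) == A_t
    for _ in range(10_000)
)
```

As expected, `found` is `False`: no candidate $X$ transposes this $A$.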
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Prove the zeros of a polynomial all lie in an annulus. I am working on a problem. It has two parts:
(a) Let $c_{0}>c_{1}>\cdots>c_{n}>0$. Show that the polynomial $P(z):=c_{0}+c_{1}z+\cdots+c_{n}z^{n}$ has no zeros inside the closed unit disc.
(b) Show that the zeros of polynomial $P_{n}(z):=1+\frac{z}{2}+\frac{z^{2}}{3}+\cdots+\frac{z^{n}}{n+1}$ all lie in an annulus $\{1<|z|<1+\delta_{n}\}$ where $\delta_{n}\rightarrow 0$ as $n\rightarrow\infty$.
I've proved part (a), but I am stuck in part (b). I think the proof of part (b) may be similar to part (a), so I state part (a) above and give my proof below, then I will give my attempt for part (b).
Part (a):
Suppose there exists $z_{0}\in\mathbb{C}$ such that $P(z_{0})=0$ and $|z_{0}|<1$.
Since $P(z_{0})=0$, we also have $$(1-z_{0})P(z_{0})=0, $$ where $LHS=c_{0}+(c_{1}-c_{0})z_{0}+(c_{2}-c_{1})z_{0}^{2}+\cdots+ (c_{n}-c_{n-1})z_{0}^{n}-c_{n}z_{0}^{n+1}.$
Thus, we have $$c_{0}=(c_{0}-c_{1})z_{0}+(c_{1}-c_{2})z_{0}^{2}+\cdots (c_{n-1}-c_{n})z_{0}^{n}+c_{n}z_{0}^{n+1}.$$
Now, taking norm to both side, and recalling that $c_{0}>c_{1}>\cdots>c_{n}>0$ and $|z_{0}|<1$, we have
\begin{align*}
c_{0}&<c_{0}-c_{1}+c_{1}-c_{2}+\cdots+c_{n-1}-c_{n}+c_{n}\\
&=c_{0}
\end{align*}
which is a contradiction.
Thus, there is no zero of $P(z)$ that is inside the closed unit disc.
Part (b):
For part (b), I mimic what I've done in part (a). Let $z\in\mathbb{C}$ be a zero of $P_{n}(z)$, then $P_{n}(z)=0$ implies that $$(1-z)P_{n}(z)=0.$$
Thus, we have $$\Big(1-\dfrac{z^{n+1}}{n+1}\Big)-\dfrac{z}{2}-\dfrac{z^{2}}{6}-\cdots-\dfrac{z^{n}}{n(n+1)}=0.$$
Set $Q_{n}(z):=-\dfrac{z}{2}-\dfrac{z^{2}}{6}-\cdots-\dfrac{z^{n}}{n(n+1)}.$
Then if $|z|<1$, we have
\begin{align*}
1&\leq|Q_{n}(z)|+\Big|\dfrac{z^{n+1}}{n+1}\Big|\\
&\leq\dfrac{|z|}{2}+\dfrac{|z|^{2}}{6}+\cdots+\dfrac{|z|^{n}}{n(n+1)}+\dfrac{|z|^{n+1}}{n+1}\\
&<\dfrac{1}{2}+\dfrac{1}{6}+\cdots+\dfrac{1}{n(n+1)}+\dfrac{1}{n+1}\\
&=1-\dfrac{1}{2}+\dfrac{1}{2}-\dfrac{1}{3}+\cdots+\dfrac{1}{n}-\dfrac{1}{n+1}+\dfrac{1}{n+1}\\
&=1,
\end{align*}
which is a contradiction.
Thus, zeros of $P_{n}(z)$ must lie in $|z|\geq 1$.
Then, I try to get rid of $|z|=1$ by using the same techniques, but I found something else interesting.
If $|z|=1$, then by definition
$$|Q_{n}(z)|\leq 1-\dfrac{1}{2}+\dfrac{1}{2}-\dfrac{1}{3}+\cdots+\dfrac{1}{n}-\dfrac{1}{n+1}=\dfrac{n}{n+1}.$$
On the other hand, since we assume $z$ is a zero, we have $$|Q_{n}(z)|=\Big|1-\dfrac{z^{n+1}}{n+1}\Big|\geq \Big|1-\dfrac{1}{n+1}\Big|=\dfrac{n}{n+1}.$$
Thus, if $|z|=1$, we have $$\dfrac{n}{n+1}\leq |Q_{n}(z)|\leq\dfrac{n}{n+1},$$ and thus $$|Q_{n}(z)|=\dfrac{n}{n+1}.$$
This did not give me any contradiction, but an idea of $\delta_{n}$. By the problem itself, we can see that if $\delta_{n}\rightarrow 0$, then the annulus will become to a unit circle $|z|=1$, which is exactly our case.
Also, by my argument above, I want my $\delta_{n}$ to be related to $|Q_{n}(z)|$, so I tried to set $$1+\delta_{n}=\dfrac{n}{n+1},$$ which gives us $$\delta_{n}=-\dfrac{1}{n+1},$$ which tends to $0$ as $n\rightarrow\infty$.
But then I don't know how to proceed, what should I do now?
Thank you!
| Consider the polynomial $R_n(z)=(z-1)P_{n}(z)=-1+\dfrac{z}{2}+\dfrac{z^{2}}{6}+\cdots+\dfrac{z^{n}}{n(n+1)}+\dfrac{z^{n+1}}{n+1}$.
We will show that the Cauchy bound $\rho_n=1+\gamma_n$ of $R_n$, satisfies $\gamma_n \to 0$ as $n \to \infty$ hence we are done since all the roots $w_{k,n}$ of $R_n$ (hence of $P_n$) satisfy $|w_{k,n}| \le \rho_n$
If not familiar with the notion, the Cauchy bound $\rho_P$ of a polynomial $P(z)=\sum_{0}^{n}{a_kz^k}, a_n \ne 0, n \ge 1$ is the unique positive root of the polynomial $|a_n|z^n-\sum_{0}^{n-1}|a_k|z^k$ - taken by convention as $0$ if the polynomial is just $a_nz^n$ -and by the triangle inequality the relation $|w_k| \le
\rho_P$ for all the roots of $P$ is clear.
Also it is clear that if for some $R >0, \sum_{k=0}^{n-1}{|a_k|R^k} \le |a_n|R^n$, then $\rho_P \le R$ by the unicity of the positive root.
We use the inequality: $\rho_P \le max_{k=0,..,n-1}|n\frac{a_k}{a_n}|^{\frac{1}{n-k}}$, which is fairly obvious since if we denote by $R$ the maximum on the RHS, $|a_k| \le \frac{1}{n}|a_n|R^{n-k}, k=0,...n-1$, so $\sum_{k=0}^{n-1}{|a_k|R^k} \le |a_n|R^n$, hence $R \ge \rho_P$ as above.
In our case the degree is $n+1$ and the coefficients are $|a_0|=1, a_k=\frac{1}{k(k+1)}, 1 \le k \le n, a_{n+1}=\frac{1}{n+1}$, so we need to prove the following two results:
1: $((n+1)^2)^{\frac{1}{n+1}}=1+b_n, b_n \to 0$
2: $max_{ 1 \le k \le n}(\frac{(n+1)^2}{k(k+1)})^{\frac{1}{n+1-k}}=1+c_{n}, c_{n} \to 0$
But using the inequality $\log(1+x) \ge \frac{x}{2}, 0 \le x \le 1$, we get $\log(1+b_n) = 2\frac{\log(n+1)}{n+1} \to 0$, hence $b_n \le 1$ eventually, hence $b_n \le 4\frac{\log(n+1)}{n+1} \to 0$, so 1: is done.
Similarly if $k \le \frac{2n}{3}$, $n+1-k \ge \frac{n}{3}$ and the same proof applies since then $\frac{\log(\frac{(n+1)^2}{k(k+1)})}{n+1-k} \le 6\frac{\log(n+1)}{n} \to 0$
If $k \ge \frac{2n}{3}, \frac{\log(\frac{n+1}{k})}{n+1-k} \le \frac{n+1-k}{k(n+1-k)} \le \frac{2}{n} \to 0$ and same for the other $\frac{\log(\frac{n+1}{k+1})}{n+1-k} \to 0$, so we are done too for case 2; and the problem is solved with $\delta_n$ the maximum of $b_n, c_n$ above
(we used $\log(\frac{n+1}{k})=\log(1+\frac{n+1-k}{k}) \le \frac{n+1-k}{k}$)
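Claims 1 and 2 above are easy to illustrate numerically. A Python sketch (the function names $b_n$, $c_n$ mirror the quantities in the answer; the implementation is mine):

```python
def bn(n):
    # ((n+1)^2)^(1/(n+1)) - 1, the quantity in claim 1
    return (n + 1) ** (2 / (n + 1)) - 1

def cn(n):
    # max over 1 <= k <= n of ((n+1)^2 / (k(k+1)))^(1/(n+1-k)) - 1, claim 2
    return max(((n + 1) ** 2 / (k * (k + 1))) ** (1 / (n + 1 - k))
               for k in range(1, n + 1)) - 1
```

Evaluating at $n=10,100,1000$ gives $b_n \approx 0.55, 0.096, 0.014$ and $c_n \approx 0.51, 0.089, 0.013$: both sequences visibly shrink toward $0$, as the proof asserts.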
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Show that if vectors $(\overline{v},\overline{w}) \in V$ are linearly independent then they are not parallel Problem
Show that if vectors $(\overline{v},\overline{w}) \in V$ are linearly independent and neither of them is the zero vector, then they are not parallel.
Attempt to solve
Vectors $\overline{v},\overline{w}$ are linearly independent if
$$ \forall(c_1,c_2)\in \mathbb{R}^2 : c_1\overline{v} + c_2\overline{w} = \overline{0} \implies c_1=0,c_2=0 $$
Now it follows from this that they are not parallel when this condition is satisfied.
However, I'm having trouble connecting the fact that these vectors cannot be parallel when they are linearly independent. This is intuitive to me at some level by the definition.
One way would be to find a connection with cross product and the fact that when
$$ \overline{v} \times \overline{w} = 0 \implies \text{ parallel} $$
then since I wanted to show that they are not parallel use negation
$$ \overline{v} \times \overline{w} \neq 0 \implies \text{ not parallel } $$
But it's problematic since it limits me to $\mathbb{R}^3$ vector space?
Better option is possibly to try to find
$$ \forall(a,b)\in \mathbb{R}^2\setminus\{(0,0)\} : a \overline{v} - b \overline{w} \neq \overline{0} $$
which implies they cannot be parallel since by scaling them with arbitrary $(a,b)$ they cannot be the same.
| By the contrapositive, if they are parallel, then there must exist a scalar $\alpha$ such that $\bar{v} - \alpha \bar{w} = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\lim_{k\to\infty}\sum_{n=1}^{\infty} \frac{\sin\left(\pi n/k\right)}{n}$ Recently, I was asked by a friend to compute the limit of the following series
$$\displaystyle{\lim_{k\to\infty}}\sum_{n=1}^{\infty} \frac{\sin\left(\frac{\pi n}{k}\right)}{n}$$
Having seen a similar problem to this before, Difficult infinite trigonometric series, I used the same complex argument approach as seen in that problem.
Ultimately, for this problem, I obtained $\frac{\pi}{2}$ as my answer. However, this limit can also be interpreted as a Riemann Sum, except the answer to the Riemann Sum differs from what I obtained as my answer, and according to Wolfram Alpha, the answer is expressed in terms of $Si$, where $Si$ is the sine integral.
I'm wondering: does the limit invalidate the complex-argument approach, or is there something else I'm missing? Because this limit, if I'm not mistaken, is a Riemann sum after all.
| The Riemann Sum would be
$$
\begin{align}
\lim_{k\to\infty}\sum_{n=1}^\infty\frac{\sin\left(\frac{\pi n}k\right)}{n}
&=\lim_{k\to\infty}\sum_{n=1}^\infty\frac{\sin\left(\frac{\pi n}k\right)}{n/k}\frac1k\\
&=\int_0^\infty\frac{\sin(\pi x)}x\,\mathrm{d}x\\
&=\int_0^\infty\frac{\sin(x)}x\,\mathrm{d}x\\[3pt]
&=\frac\pi2\tag1
\end{align}
$$
However, a cleaner way is to note that
$$
\begin{align}
\sum_{n=1}^\infty\frac{\sin\left(\frac{\pi n}k\right)}{n}
&=-\mathrm{Im}\!\left(\log\left(1-e^{i\pi/k}\right)\right)\\
&=\frac\pi2-\frac\pi{2k}\tag2
\end{align}
$$
and the limit is easy.
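Both $(1)$ and $(2)$ are easy to confirm numerically; a quick Python sketch (my own check) compares a long partial sum against the closed form $\frac\pi2-\frac\pi{2k}$:

```python
import math

def partial_sum(k, N=200_000):
    # partial sum of sin(pi*n/k)/n up to n = N
    return sum(math.sin(math.pi * n / k) / n for n in range(1, N + 1))

closed_form = lambda k: math.pi / 2 - math.pi / (2 * k)
```

For moderate $k$ the partial sum matches $\frac\pi2-\frac\pi{2k}$ to several decimal places (the tail is controlled exactly as in the estimates below).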
Take Care
One must be careful with the convergence of the Riemann Sum. Here is one method to control the remainders.
Because $|\sin(\pi x)|\le1$, we have
$$
\int_m^{m+1}\left|\frac{\sin(\pi x)}x\right|\,\mathrm{d}x
\le\frac1m\tag3
$$
Furthermore, $\int_m^{m+2}\sin(\pi x)\,\mathrm{d}x=0$, thus,
$$
\begin{align}
\left|\int_m^{m+2}\frac{\sin(\pi x)}x\,\mathrm{d}x\right|
&=\left|\int_m^{m+2}\sin(\pi x)\left(\frac1x-\frac1{m+1}\right)\mathrm{d}x\right|\\
&\le\frac1{m(m+1)}+\frac1{(m+1)(m+2)}\\[6pt]
&=\frac1m-\frac1{m+2}\tag4
\end{align}
$$
Therefore, for any $N\ge m$,
$$
\left|\int_m^N\frac{\sin(\pi x)}x\,\mathrm{d}x\right|
\le\frac1m\tag5
$$
Because $|\sin(\pi x)|\le1$, we have
$$
\sum_{n=mk}^{(m+1)k}\left|\frac{\sin\left(\frac{\pi n}k\right)}{n}\right|
\le\frac1m\tag6
$$
Furthermore, $\sum\limits_{n=mk}^{(m+2)k}\sin\left(\frac{\pi n}k\right)=0$, thus,
$$
\begin{align}
\left|\sum_{n=mk}^{(m+2)k}\frac{\sin\left(\frac{\pi n}k\right)}{n}\right|
&=\left|\sum_{n=mk}^{(m+2)k}\sin\left(\frac{\pi n}k\right)\left(\frac1n-\frac1{(m+1)k}\right)\right|\\
&\le\frac1{m(m+1)}+\frac1{(m+1)(m+2)}\\[6pt]
&=\frac1m-\frac1{m+2}\tag7
\end{align}
$$
Therefore, for any $M\ge mk$,
$$
\left|\sum_{n=mk}^M\frac{\sin\left(\frac{\pi n}k\right)}{n}\right|\le\frac1m\tag8
$$
For any $\epsilon\gt0$, let $m\ge\frac4\epsilon$. Then Riemann Sums allow us to choose a $k$ large enough so that
$$
\left|\int_0^m\frac{\sin(\pi x)}x\,\mathrm{d}x-\sum_{n=1}^{mk}\frac{\sin\left(\frac{\pi n}k\right)}{n/k}\frac1k\right|\le\frac\epsilon2\tag9
$$
Inequalities $(5)$ and $(8)$ show that for any $N\ge m$ and $M\ge mk$,
$$
\left|\int_m^N\frac{\sin(\pi x)}x\,\mathrm{d}x\right|\le\frac\epsilon4
\quad\text{and}\quad
\left|\sum_{n=mk}^M\frac{\sin\left(\frac{\pi n}k\right)}{n/k}\frac1k\right|\le\frac\epsilon4\tag{10}
$$
Inequalities $(9)$ and $(10)$ show that, for the $k$ chosen for $(9)$,
$$
\left|\sum_{n=1}^\infty\frac{\sin\left(\frac{\pi n}k\right)}{n}-\frac\pi2\right|\le\epsilon\tag{11}
$$
Since $\epsilon\gt0$ was arbitrary, $(11)$ says that
$$
\lim_{k\to\infty}\sum_{n=1}^\infty\frac{\sin\left(\frac{\pi n}k\right)}{n}=\frac\pi2\tag{12}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why $a_{-1}$ term of Laurent series may not be residue? Suppose $f(z)$ is analytic on $0<|z-z_0|<R$, and we find a Laurent series for $f(z)$ on an annulus $r<|z-z_0|<R$ where $r$ may not be $0$. Then it is said that the $a_{-1}$ of such a Laurent series may not be the residue unless $r=0$. (The residue is defined as $Res(f,z_0)=\frac{1}{2 \pi i}\int_\gamma f(z)dz$ for any closed curve $\gamma$ around $z_0$ in $0<|z-z_0|<R$.) I find this hard to understand.
In particular, fix a closed curve $\gamma'$ contained in $r<|z-z_0|<R$ where the Laurent series applies, and substitute the Laurent series for $f$ into $Res(f,z_0)=\frac{1}{2 \pi i}\int_{\gamma'} f(z)dz$. Then the integral of all except the $a_{-1}/(z-z_0)$ term should evaluate to $0$, as we have finished the substitution and are just evaluating the integral with the Cauchy formula (and thus are no longer concerned by where the Laurent series applies). In the end, since $Res(f,z_0)=\frac{1}{2 \pi i}\int_\gamma f(z)dz$ takes the same value for all such $\gamma$ in $0<|z-z_0|<R$, our result based on $\gamma'$ applies in general. Where is the mistake in this proof?
Note 1.
I have checked explicit expression for $a_{-1}$ in a Laurent expansion where $r>0$. I think $a_{-1}$ should be residue. The statement in Note 2 may be false. I hope the author or somebody with expertise in complex analysis could confirm.
Note 2.
The original statement I was referring to can be found in Simon's answer in this post: Calculate residue at essential singularity:
"In fact the residue of $f(z)$ at an isolated singularity $z_0$ of $f$ is defined as the coefficient of the $(z-z_0)^{-1}$ term in the Laurent Series expansion of $f(z)$ in an annulus of the form $0 < |z-z_0|<R$ for some $R > 0$ or $R = \infty$.
If you have another Laurent Series for $f(z)$ which is valid in an annulus $r < |z-z_0|< R$ where $r > 0$, then it might differ from the first Laurent Series, and in particular the coefficient of $(z-z_0)^{-1}$ might be different, and hence not equal to the residue of $f(z)$ at $z_0$."
| Don't confuse $$\int_{|z| = r+\epsilon} f(z)dz= 2i\pi a_{-1}, \qquad \int_{|z| = \epsilon} f(z)dz = 2i\pi b_{-1}$$
where $f$ is assumed to be analytic on $|z| \in (0,2\epsilon)$ and $|z| \in (r,R)$ and $a_n,b_n$ are the Laurent coefficients of the expansion on each annulus.
If $f$ is analytic on $|z|\in (0, R)$ then $a_n = b_n$.
If $f$ has a pole on $|z| \in (2\epsilon,r)$, then the two contour integrals need not give the same result.
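A concrete example (my own, checked numerically): $f(z)=\frac{1}{z(z-1)}$ has residue $-1$ at $z_0=0$, so its Laurent series on $0<|z|<1$ has $a_{-1}=-1$, while on the annulus $1<|z|$ the coefficient of $z^{-1}$ is $0$ — the larger contour also encloses the pole at $1$, whose residue $+1$ cancels. The contour integrals can be approximated by the trapezoid rule on circles:

```python
import cmath
import math

def coeff_minus1(f, r, N=4096):
    """(1/2*pi*i) * contour integral of f over |z| = r, trapezoid rule."""
    total = 0j
    for k in range(N):
        z = r * cmath.exp(2 * math.pi * 1j * k / N)
        total += f(z) * z          # dz = i*z dtheta cancels the 1/(2*pi*i)
    return total / N

f = lambda z: 1 / (z * (z - 1))    # simple poles at 0 (residue -1) and 1 (residue +1)

inner = coeff_minus1(f, 0.5)       # a_{-1} of the series on 0 < |z| < 1
outer = coeff_minus1(f, 2.0)       # coefficient of 1/z on the annulus 1 < |z|
```

Numerically `inner` is $-1$ and `outer` is $0$, so the $z^{-1}$ coefficient genuinely depends on which annulus the expansion lives in.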
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculate number of elements in a set How many natural numbers are there below 1000 that are multiples of 3 or that contain 3 in any digit of the number?
My effort: Here we need to calculate the union of two sets. The first set consists of the natural numbers below $1000$ that are multiples of $3$, so its cardinality is $\lfloor 999/3\rfloor = 333$. But I am confused about the second set.
Any help/hint in this regard would be highly appreciated. Thanks in advance!
| How many numbers have $3$ as a first digit? How many number that do not have $3$ as a first digit have $3$ as a second? How many that do not have $3$ in one of the first two positions have $3$ in the third? Add these together to answer your question. You can't just add this to $333$ because all the ones that are multiples of $3$ and have a $3$ in them have been counted twice.
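The range is small enough to verify the whole inclusion–exclusion by brute force; a Python sketch (my own check, not part of the hint):

```python
# Brute-force the two sets over 1..999 and combine by inclusion-exclusion.
multiples_of_3 = {n for n in range(1, 1000) if n % 3 == 0}
contains_3 = {n for n in range(1, 1000) if '3' in str(n)}
union = multiples_of_3 | contains_3   # the requested count is len(union)
```

Comparing `len(union)` against `len(multiples_of_3) + len(contains_3) - len(multiples_of_3 & contains_3)` confirms the double-counting correction the hint warns about.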
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Simplify $ \frac{ \sqrt[3]{16} - 1}{ \sqrt[3]{27} + \sqrt[3]{4} + \sqrt[3]{2}} $ Simplify
$$ \frac{ \sqrt[3]{16} - 1}{ \sqrt[3]{27} + \sqrt[3]{4} + \sqrt[3]{2}} $$
Attempt:
$$ \frac{ \sqrt[3]{16} - 1}{3 + \sqrt[3]{4} + \sqrt[3]{2}} = \frac{ \sqrt[3]{16} - 1}{ (3 + \sqrt[3]{4}) + \sqrt[3]{2}} \times \frac{ (3 + \sqrt[3]{4}) - \sqrt[3]{2}}{ (3 + \sqrt[3]{4}) - \sqrt[3]{2}} $$
$$ = \frac{ (\sqrt[3]{16} - 1) [(3 + \sqrt[3]{4}) - \sqrt[3]{2}]}{ (3 + \sqrt[3]{4})^{2} - 2^{2/3}} $$
$$ = \frac{ 3 \sqrt[3]{16} - 3\sqrt[3]{4} + \sqrt[3]{2} + 1}{ (9 + 5 \sqrt[3]{4} + \sqrt[3]{16}) } $$
From here on I don't know how to continue.
I can let $a = \sqrt[3]{2}$, but still cannot do anything.
| Let $x=\sqrt[3]2$ then we have $${x^4-1\over x^2+x+3}={x^6-x^2\over x(x^3+x^2+3x)}={4-x^2\over x(2+x^2+3x)}= {(2-x)(2+x)\over x(x+2)(x+1) }$$
$$ = {2-x\over x^2+x}= {(2-x)(x-1)\over x(x+1)(x-1)}= {(2-x)(x-1)\over x^3-x}$$
$$= {(2-x)(x-1)\over 2-x} = x-1$$
Edit: but the other solution is much nicer than this one.
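A quick floating-point check of the final simplification (my own addition):

```python
import math

cbrt2 = 2 ** (1 / 3)
value = (16 ** (1 / 3) - 1) / (27 ** (1 / 3) + 4 ** (1 / 3) + cbrt2)
# value should equal x - 1 = cbrt(2) - 1, about 0.2599...
```
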
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Find the kernel of a ring homomorphism $f(x)\to f(\sqrt 2)$ Let $$\phi:\mathbb Z[x]\longrightarrow \mathbb R$$, where $$\phi(f(x))=f(\sqrt 2)$$
find $$\operatorname{Ker}(\phi)=\{f(x)\in \mathbb Z[x]\mid f(\sqrt2)=0\}$$
1st) I wanted to use the isomorphism theorem: since $\mathbb R$ is a field, if we had $\mathbb Z[x]/\ker\phi\simeq \mathbb R$ then $\ker\phi$ would be a maximal ideal, hence generated by the irreducible minimal polynomial with root $\sqrt2$, that is $x^2-2$; but $\phi$ is not surjective.
2nd) Let $f=f_0+f_1x+f_2x^2+f_3x^3+f_4x^4+\cdots$
Since $f(\sqrt2)=0$ we have $$f_0+2f_2+4f_4+8f_6+\cdots=0,\qquad f_1+2f_3+4f_5+\cdots=0,$$ but how do I show $x^2-2\mid f(x)$?
| $\newcommand\Ker{\operatorname{Ker}}$First note that $x^2-2\in\Ker\varphi$.
Conversely, if $f\in\Ker\varphi$, write $f(x)=(x^2-2)q(x)+r(x)$ with $r(x)=0$ or $\deg r(x)<2$.
Since $f(\sqrt 2)=0$ you get $r(\sqrt 2)=0$, but since $\sqrt 2$ is irrational and $\deg r<2$ we get $r=0$.
This proves $(x^2-2)|f(x)$.
Consequently, $\Ker\varphi=(x^2-2)\Bbb Z[x]$.
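The division argument can be mirrored computationally (a sketch of my own; names are mine): reducing $f$ modulo $x^2-2$ amounts to repeatedly replacing $x^2$ by $2$, which leaves a remainder $r_0+r_1x$ with $f(\sqrt2)=r_0+r_1\sqrt2$ — and $r_0, r_1$ are exactly the two sums written in the question:

```python
import math

def reduce_mod_x2_minus_2(f):
    """Remainder r0 + r1*x of f (coefficient list, lowest degree first)
    modulo x^2 - 2, obtained by substituting x^(2m) -> 2^m."""
    r0 = sum(c * 2 ** (k // 2) for k, c in enumerate(f) if k % 2 == 0)
    r1 = sum(c * 2 ** (k // 2) for k, c in enumerate(f) if k % 2 == 1)
    return r0, r1

def polymul(f, g):
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

# Example: f = (x^2 - 2)(x^3 + 5x - 7) lies in the kernel, so its remainder is 0.
f = polymul([-2, 0, 1], [-7, 5, 0, 1])
val_at_sqrt2 = sum(c * math.sqrt(2) ** k for k, c in enumerate(f))
```

For this $f$ the remainder is $(0,0)$ and $f(\sqrt2)=0$, while e.g. $1+x$ leaves remainder $(1,1)$, matching $1+\sqrt2\neq0$.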
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Linear differential equation with driving In class we are solving the linear differential equation with driving given by
$$
\frac{dx}{dt} = -\gamma x + f(t)
$$
The professor first transformed to a new variable $y$,
$$
y(t) := x(t) e^{\gamma t}
$$
and then calculated the differential equation for $y$:
$$
\frac{dy}{dt} = \frac{\partial y}{\partial x}\frac{dx}{dt} + \frac{\partial y}{\partial t} = e^{\gamma t} f(t)
$$
Problem: I don't understand how he reached the last step, i.e., how it equals $e^{\gamma t} f(t)$.
Attempt: Here is what I get for the differentials:
$$
\frac{\partial y}{\partial x} = e^{\gamma t} \\
\frac{dx}{dt} = -\gamma x + f(t) \\
\frac{\partial y}{\partial t} = e^{\gamma t}\frac{dx}{dt} + x(t)\gamma e^{\gamma t}
$$
Given these, I get
$$
\frac{\partial y}{\partial x}\frac{dx}{dt} + \frac{\partial y}{\partial t} = -\gamma x e^{\gamma t} + 2fe^{\gamma t}
$$
which is incorrect. It's probably my differentials that are incorrect, but what am I doing wrong here?
| Differentiate $y(t) = x(t) e^{\gamma t}$ with the product rule:
$$y'(t)=x'(t)e^{\gamma t}+x(t) \gamma e^{\gamma t}.$$
Since $x'(t)=- \gamma x(t)+f(t)$, we get
$$y'(t)=(- \gamma x(t)+f(t))e^{\gamma t}+x(t)\gamma e^{\gamma t}=f(t)e^{\gamma t}.$$
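Since $y'=e^{\gamma t}f(t)$ integrates in closed form when $f=\sin$, the method can be cross-checked against a direct numerical integration of the original equation. A Python sketch (the parameter values and step count are my own choices):

```python
import math

gamma, x0, T = 0.8, 1.0, 5.0

def x_exact(t):
    # From y' = e^{gamma*t}*sin(t): the antiderivative of e^{gs}sin(s) is
    # e^{gs}(g*sin(s) - cos(s))/(g^2 + 1), giving x(t) explicitly.
    c = gamma * gamma + 1
    return math.exp(-gamma * t) * (x0 + 1 / c) + (gamma * math.sin(t) - math.cos(t)) / c

# Independent check: explicit Euler on the original x' = -gamma*x + sin(t).
N = 200_000
dt = T / N
x = x0
for i in range(N):
    x += dt * (-gamma * x + math.sin(i * dt))
```

The Euler result agrees with the integrating-factor formula to within the expected $O(\Delta t)$ error, confirming the computation in the answer.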
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3321929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How are the components of an alternating link diagram $D$ boundaries of the regions of one color after performing positive smoothing at all crossings? I am reading the proof of Proposition 5.3 of the chapter "The Jones Polynomial of an Alternating Link" from the book "Introduction to Knot Theory" by Lickorish, and I have a problem understanding it. Before coming to the problem I will mention some background.
Suppose we are given a link diagram $D$, and at each crossing we perform the following kind of smoothing; the result is denoted $s_{+}D$:
Now suppose we have an alternating link diagram with chessboard coloring.
Following line is written in the proof which I don't understand:
"The alternating condition implies that the components of $s_+ D$ are the boundaries of the regions of one of the colors (the black ones, say) with corners rounded off."
Following are the diagrams I have drawn after performing $s_+ D$ smoothing on trefoil and its mirror image. In both diagrams, each circle is the boundary of each color.
So what does the author mean by "$\cdots$ boundaries of the regions of one of the colors"?
Can someone explain it to me, please?
| Here's what this is meant to mean:
Take a look at a region of the alternating knot diagram, which is a disk. Due to it being an alternating diagram, regions come in only two types, depending on what the incident crossings look like:
I am calling them type R and type L. Notice that around a crossing, type R and type L regions alternate:
Thus, if we color all the type R regions black and the type L regions white, we have a checkerboard coloring. Now, if we do the $s_+$ smoothing, the black regions "have their corners smoothed" like so:
If we make sure the "outer" region is type L (there is always a diagram where this is the case --- it's simply by rotating the diagram on $S^2$, a.k.a. isotoping strands "through infinity"), then the upshot is the circles of $s_+D$ are not nested. Colored, it will be some number of black disks on a white plane.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve $M - EV - (EV)^T = 0$ for $E$? I have a matrix equation:
$$M - EV - (EV)^T = 0$$
which I want to solve for $E$. Is this possible? How can I do that?
Remarks: matrices are square, $E$, $M$ are symmetric, $V$ is invertibile.
Regards,
Marek
| We can write this equivalently as
$$
EV + (EV)^T = M.
$$
Because the (linear) operator $E \mapsto EV + (EV)^T$ is not invertible, this equation will have infinitely many solutions. In particular, we can always take $E = \frac 12 MV^{-1}$. Indeed, plugging this $E$ in yields
$$
EV + (EV)^T = \frac 12 MV^{-1}V + (\frac 12 MV^{-1}V)^T = \frac 12 M + \frac 12 M^T = M
$$
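This particular solution is easy to verify numerically. A small $2\times2$ Python sketch (helper names and the example matrices are mine):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

M = [[3.0, 1.0], [1.0, 2.0]]    # symmetric
V = [[1.0, 2.0], [0.0, 1.0]]    # invertible
E = [[0.5 * x for x in row] for row in matmul(M, inv2(V))]   # E = (1/2) M V^{-1}

EV = matmul(E, V)
EVt = transpose(EV)
residual = max(abs(EV[i][j] + EVt[i][j] - M[i][j])
               for i in range(2) for j in range(2))
```

Here `EV` comes out as $\tfrac12 M$, so `EV + EVt` reproduces $M$ exactly, as the algebra predicts.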
As it turns out, every solution to this equation can be written in the form
$$
E = \frac 12 (M + S)V^{-1}
$$
where $S$ is skew symmetric, which is to say that $S$ satisfies $S^T = -S$.
Now, requiring that $E$ is symmetric amounts to requiring that
$$
[(M + S)V^{-1}]^T = (M + S)V^{-1} \implies\\
V^{-T}(M - S) = (M + S)V^{-1} \implies\\
V^{-T}S + SV^{-1} = V^{-T}M - MV^{-1}.
$$
In other words: there exists a symmetric solution $E$ if and only if the Sylvester equation (more specifically Lyapunov equation) $V^{-T}S + SV^{-1} = V^{-T}M - MV^{-1}$ has a solution $S$ in which $S$ is also skew-symmetric. I don't see a way of simplifying this condition.
It is notable that if the eigenvalues of $V$ have all positive real part or negative real part, then there exists a unique solution $S$. If the eigenvalues of $V$ have only negative real part, then this unique $S$ can be written explicitly as the integral
$$
S = \int_0^\infty e^{V^{-T}\tau}[V^{-T}M - MV^{-1}]e^{V^{-1}\tau} d\tau.
$$
This $S$ is necessarily skew-symmetric, which is to say that our original equation has a (unique) symmetric solution $E$ in this case.
Another approach: if we're looking specifically for a symmetric solution $E$, then we can rewrite the original equation as
$$
EV + V^TE^T = M \implies V^TE + EV = M.
$$
Now, if the eigenvalues of $V$ all have negative real parts, then this equation has a unique solution that can be written as
$$
E = \int_0^\infty e^{V^{T}\tau}\,M\,e^{V\tau} d\tau,
$$
which is necessarily a symmetric matrix.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving that the equivalence class generates the group (mod 125). The Question:
Let n be a positive integer and let $G_n = \left\{[a] ∈ \mathbb{Z}_n ; \text{gcd}(a,n) = 1\right\}$ be the group of invertible elements in (Zn,·), where ”·” represents the product (mod n).
Prove that $(G_{125},·)$ is a group with 100 elements. Use Lagrange’s theorem to find all possible sizes of subgroups of $G_{125}$. Hence prove that [2] is a generator for $(G_{125}, ·)$. (You may use without checking the following identities (mod 125): $2^{10} ≡24,2^{20} ≡76,2^{25} ≡57$)
My Attempt: We can see that $\left|G_{125}\right| = 100$ by using the fact that only multiples of 5 in $\mathbb{Z}_{125}$ are not in $G_{125}$, of which there are 25.
By Lagrange, the order of a subgroup $<d>$ of $G$ divides the order of $G$. So the set of all possible sizes of subgroups in $G_{125}$ $:= \left\{a ; \text{gcd}\left(a,100\right) = a, a\in\mathbb{Z}\right\} = \left\{1,2,4,5,10,20,25,50,100\right\}$
At this point I'm stuck. The logic I tried using is that if [2] (mod 125) is a generator of $G_{125}$ then the cyclic subgroup $<[2]>$ should have the same order as $G_{125}$, that is, $2^{\text{ord}\left(G_{125}\right)}≡1$ (mod 125).
It's clear from above that the order of this subgroup can only be one of the numbers from the set of possible sizes of subgroups. It's not going to be any of the first 7 elements (simple calculations and the hint given in the question show this).
So I'm left with 50 and 100. How do I show that the order of $<[2]>$ is 100 and not 50? ($2^{50}$ and $2^{100}$ are huge unusable numbers.)
Or am I using the wrong method of proving that [2] generates $G_{125}$?
| Hint:
$2^{50} \equiv 1 \implies 2^{25}\equiv \pm 1 $
In general if $x^2\equiv 1\pmod{p^e}$, where $p$ is an odd prime,
then $x\equiv \pm 1 \pmod{p^e}$
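A quick computational check of the supplied identities and of the conclusion (Python's three-argument `pow` does modular exponentiation):

```python
# Verify the identities given in the question and that [2] has order 100 mod 125.
assert pow(2, 10, 125) == 24
assert pow(2, 20, 125) == 76
assert pow(2, 25, 125) == 57

# 2^50 = (2^25)^2 ≡ 57^2 ≡ -1 (mod 125), which rules out order 50,
# so the order of [2] must be 100.
print(pow(2, 50, 125), pow(2, 100, 125))  # 124 1
```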
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Computing $\int\frac {du}{\sqrt{u^2 + s^2}} =\log \lvert (u + \sqrt{u^2 + s^2}) \rvert$ with a substitution Can someone please show me where I am going wrong? It seems there is a contradiction in the formula for the the following integral.
$$\int\frac {du}{\sqrt{u^2 + s^2}} = \log \bigl\lvert u + \sqrt{u^2 + s^2}\bigr\rvert$$
Now, can't we rewrite the integral as follows using the substitution z = -u?
\begin{align}
\int\frac {du}{\sqrt{u^2 + s^2}} &= \int\frac {du}{\sqrt{(-u)^2 + s^2}} = \int\frac {-dz}{\sqrt{z^2 + s^2}}\\
& = -\log \bigl\lvert z + \sqrt{z^2 + s^2}\bigr\rvert = -\log \bigl\lvert (-u + \sqrt{u^2 + s^2})\bigr\rvert
\end{align}
But, clearly $\;\log \bigl\lvert u + \sqrt{u^2 + s^2}\bigr\rvert \ne -\log \bigl\lvert -u + \sqrt{u^2 + s^2} \bigr\rvert$
Also, the quantity inside $\log \bigl\lvert u + \sqrt{u^2 + s^2})\bigr\rvert$ is clearly non-negative for any choice of $u$ and $s$, so why do we need the absolute value of the quantity in our integral?
| You have a faster way, using some hyperbolic trigonometry:
Set $t=\dfrac us$, so that $\mathrm du=s\,\mathrm dt$. The integral becomes, supposing $s>0\,$:
$$\int\frac {\mathrm du}{\sqrt{u^2 + s^2}}=\int\frac {\not\! s\,\mathrm dt}{\not \!s\sqrt{ t^2 + 1}}=\int\frac {\mathrm dt}{\sqrt{ t^2 + 1}}=\operatorname{argsinh}t$$
and it is known that
$$\operatorname{argsinh}t =\ln\bigl(t+\sqrt{t^2+1}\bigr). $$
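A quick numerical check of the claimed antiderivative; it also confirms the observation in the question that the argument of the logarithm is always positive, so the absolute value is redundant here:

```python
import math

# Numerically check that d/du ln(u + sqrt(u^2 + s^2)) = 1 / sqrt(u^2 + s^2)
# at a few arbitrary sample points (s > 0), via a central difference.
def F(u, s):
    return math.log(u + math.sqrt(u * u + s * s))  # argument is always positive

h = 1e-6
for u in (-2.0, -0.5, 0.0, 1.0, 3.0):
    for s in (0.5, 1.0, 2.0):
        numeric = (F(u + h, s) - F(u - h, s)) / (2 * h)
        exact = 1.0 / math.sqrt(u * u + s * s)
        assert abs(numeric - exact) < 1e-6
print("ok")
```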
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Time series prediction for three constrained variables (x+y+z=1) How can I PREDICT time-series for three non-negative variables that sum to 1? Say, x+y+z=1. I have historical data for x ,y, z , t. Based on historical data, I can create an ARIMA model for each variable individually, and make predictions for the future. How do I add the constraint ?
If this were only one variable, applying an ARIMA is simple.
For the single variable x(t), I can get a fit ARIMA_x(p, d, q) and those three numbers parametrize the model.
Here, I could get three sets of fits independently. But that is not proper.
With three variables that always sum to 1, how do I get three sets of constrained fit parameters?
https://en.wikipedia.org/wiki/Autoregressive_integrated_moving_average
https://www.statsmodels.org/stable/generated/statsmodels.tsa.arima_model.ARIMA.html
| tl;dr: Reduction of variables is a generic method to apply constraints. We show this below and it is easy because ARIMA is indifferent to affine linear transformations. However, at the end, we discover that we have just run ARIMA on two of the original variables and reconstructed the predictions of the third using the constraint.
Switch to a two variable system. (Three variables minus one linear constraint leaves two independent degrees of freedom.)
A smart way to do this is to pick the point $(1/3,1/3,1/3)$ as the origin and then pick two perpendicular unit vectors in that plane to be your new basis.
But I'm lazy and your ARIMA is indifferent to an affine linear transform, so the laziest thing to do is notice that your constraint set is the plane through $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. So put the origin at $(1,0,0)$ and use $(-1,1,0)$ and $(-1,0,1)$ (the displacements from the first axis intercept to the second and third intercepts). Let our new parameters be $u$ and $v$, so we have to solve
\begin{align*}
\begin{pmatrix}x\\y\\z\end{pmatrix} - \begin{pmatrix}1\\0\\0\end{pmatrix} &=
\begin{pmatrix}x-1\\y\\1-x-y\end{pmatrix} \\
&= u \begin{pmatrix}-1\\1\\0\end{pmatrix} + v \begin{pmatrix}-1\\0\\1\end{pmatrix}
\end{align*}
giving the system \begin{align*}
1 - x &= u + v \\
y &= u \\
1 - x - y &= v \text{,}
\end{align*}
so we can read off $u$ and $v$ in terms of $x$ and $y$ in the second and third equations. (And going the other way from the first two: $x = 1 - u - v$ and $y = u$.)
Now run your ARIMA on $u$ and $v$. Then you can convert back to $x$ and $y$ and deduce $z = 1 - x - y$ to re-express results in the original variables.
But is all this necessary? No. Notice that we have $u = y$ and $v = z$. That is, we can ignore one variable and run the ARIMA on the other two, then reconstruct the third variable using the constraint.
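A minimal sketch of that bookkeeping in Python (no actual ARIMA fitting here; a naive last-value forecast stands in for a fitted model, purely for illustration):

```python
# Forecast only y and z, then reconstruct x from the constraint x + y + z = 1.
# "Forecast" below is a naive last-value stand-in for a fitted ARIMA model.
history = [
    (0.5, 0.3, 0.2),
    (0.45, 0.35, 0.2),
    (0.4, 0.35, 0.25),
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in history)

y_hat = history[-1][1]          # forecast of y (stand-in model)
z_hat = history[-1][2]          # forecast of z (stand-in model)
x_hat = 1.0 - y_hat - z_hat     # reconstructed from the constraint

print(x_hat + y_hat + z_hat)    # ≈ 1.0 (up to rounding)
```

Note that the reconstruction enforces $x+y+z=1$ exactly, but it does not by itself guarantee that the reconstructed $x$ stays nonnegative.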
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that for $n\in\mathbb{N}$, $\sum_{k=1}^{\infty}\left[\frac{n}{5^{k}}\right]=15\iff\left[\frac{n}{5}\right]=13.$ How to show that the following relation? : for $n\in\mathbb{N}$, $$\sum_{k=1}^{\infty}\left[\frac{n}{5^{k}}\right]=15\iff\left[\frac{n}{5}\right]=13.$$ It's not obvious to me. Can anyone help me? Thank you!
| It's simple.
Let us assume $\left[\frac{n}{5}\right]=13$ to be true.
Since $\left[\frac{n}{5}\right]=13$,
$\Rightarrow\frac{n}{5}\in[13,14)$
$\Rightarrow n\in[13*5,14*5)$
$\Rightarrow n\in[65,70)$
Let us consider,
$$\sum_{k=1}^{\infty}\left[\frac{n}{5^{k}}\right]=\left[\frac{n}{5}\right]+\left[\frac{n}{5^2}\right]+\left[\frac{n}{5^3}\right]+\left[\frac{n}{5^4}\right]+ …...$$
We know that $\left[\frac{n}{5}\right]=13$.
Previously we had found $n\in[65,70)$, so $\frac{n}{5^2}\in\left[\frac{65}{25},\frac{70}{25}\right)$, or $\frac{n}{5^2}\in\left[2.6,2.8\right)$
So, $\left[\frac{n}{5^2}\right]=2$
For all the remaining terms, the fraction inside the floor function is below one, so each floor equals zero. So we finally get the following result,
$$\sum_{k=1}^{\infty}\left[\frac{n}{5^{k}}\right]=13+2=15$$
For the converse, note that the sum is nondecreasing in $n$: it equals $12+2=14$ when $\left[\frac{n}{5}\right]=12$ (i.e. $n\in[60,65)$) and $14+2=16$ when $\left[\frac{n}{5}\right]=14$ (i.e. $n\in[70,75)$), so the sum can equal $15$ only when $\left[\frac{n}{5}\right]=13$.
Hence proved!
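A brute-force check of the stated equivalence for all $n$ up to $1000$:

```python
# S(n) = sum over k >= 1 of floor(n / 5^k); terms vanish once 5^k > n.
def S(n):
    total, p = 0, 5
    while p <= n:
        total += n // p
        p *= 5
    return total

for n in range(1, 1001):
    assert (S(n) == 15) == (n // 5 == 13)
print(sorted(n for n in range(1, 1001) if S(n) == 15))  # [65, 66, 67, 68, 69]
```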
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Fourier sine expansion for $x(\pi - x)$ I'm trying to find the Fourier sine expansion for the function $f(x)$ = $x(\pi - x)$ for the interval $0 \leq x \leq \pi$.
I think I am supposed to find $\sum_{k=1}^{n} b_k \sin{(kx)}$ where $b_k = \frac{2}{\pi}$ $\int_{0}^{\pi} x(\pi - x) \sin{(kx)} dx$. However this is turning out really complex and nothing like the answer in my textbook, which is $f(x) = \frac{8}{\pi} \sum_{k=0}^{\infty} \frac{\sin{(2k + 1)x}}{(2k + 1)^3}$.
Any help is appreciated.
| Your formula for $b_k$ is correct in this example. Compute the integral correctly, and you will see that you obtain $b_k=0$ when $k$ is even, while $$b_{2j+1}={8\over\pi(2j+1)^3}\qquad(j\geq 0)\ .$$
This leads to
$$f(x)=\sum_{j=0}^\infty b_{2j+1}\sin\bigl((2j+1)x\bigr)={8\over\pi}\sum_{j=0}^\infty{\sin\bigl((2j+1)x\bigr)\over(2j+1)^3} \qquad(0\leq x\leq\pi)\ .$$
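The coefficients can also be checked numerically with Simpson's rule (the quadrature parameters are arbitrary choices):

```python
import math

# Numerically evaluate b_k = (2/pi) * ∫_0^pi x(pi - x) sin(kx) dx and compare
# with b_k = 8/(pi k^3) for odd k and b_k = 0 for even k.
def simpson(f, a, b, n=2000):        # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

for k in range(1, 6):
    bk = (2 / math.pi) * simpson(lambda x: x * (math.pi - x) * math.sin(k * x),
                                 0, math.pi)
    expected = 8 / (math.pi * k ** 3) if k % 2 == 1 else 0.0
    assert abs(bk - expected) < 1e-8
print("ok")
```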
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3322879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can $e^x$ be expressed as a linear combination of $(1 + \frac x n)^n$? Can $e^x$ be expressed as a linear combination of $(1 + \frac x n)^n$? In other words, does there exist an infinite sequence $(a_k)_{k \in \mathbb N_0}$ such that $$e^x = a_0 + \sum_{1 \leq k < \infty} a_k \left(1 + \frac x k\right)^k$$
for all $x \in \mathbb R$? Call the series on the right $s(x)$.
I can answer the question in the negative when the series is absolutely convergent. In the conditionally convergent case, I'm not so sure. My thoughts were to use the fact that:
$$e^{x - \frac{x^2}{2k}} \leq (1+ \frac x k)^k \leq e^{x}$$
and use the lower bound when $a_k$ is negative, and the upper bound when $a_k$ is positive. This gets stuck because it's not always the case that if some $b_k$ is a decaying sequence then $\sum_{k} \frac{b_k}{k}$ is convergent.
The strengthened inequality $$e^{x - \frac{x^2}{2k}} \leq (1+ \frac x k)^k \leq e^{x - \frac{x^2}{2k} + \frac{x^3}{3k^2}}$$ looks like it might make more progress...
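A numerical spot-check of both sandwich inequalities for $x \ge 0$ (for negative $x$ the lower bound flips, so the check is restricted to nonnegative $x$):

```python
import math

# Spot-check, for x >= 0:
#   exp(x - x^2/(2k)) <= (1 + x/k)^k <= exp(x - x^2/(2k) + x^3/(3k^2)).
for k in range(1, 20):
    for i in range(0, 50):
        x = 0.1 * i
        mid = (1 + x / k) ** k
        lo = math.exp(x - x * x / (2 * k))
        hi = math.exp(x - x * x / (2 * k) + x ** 3 / (3 * k * k))
        # small multiplicative slack guards against float rounding at x = 0
        assert lo <= mid * (1 + 1e-12) and mid <= hi * (1 + 1e-12)
print("ok")
```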
[EDIT 2019/08/14 14:00 GMT]
This is the solution in the absolutely convergent case, given by lemmas 1 and 2:
Definition: Let $s(x) = a_0 + \sum_{1 \leq k < \infty} a_k \left(1 + \frac x k\right)^k$.
Lemmas and proofs follow:
Lemma 1: If $s(x)$ converges absolutely for some $x\geq 0$, then $s(x)$ converges absolutely for all $x \geq 0$.
Proof
Pick an $x_0 \geq 0$ for which $s(x_0)$ converges absolutely.
By the condition stated in the lemma, the series $\sum_{1 \leq k < \infty} |a_k| \left|1 + \frac {x_0} k\right|^k$ must converge. We also observe that $|a_k| \leq |a_k| \left|1 + \frac {x_0} k\right|^k$ is true for all $k$. So by the Direct Comparison Test, the series $\sum_{0 \leq k < \infty} |a_k|$ must also converge. In other words, $s(0)$ is absolutely convergent.
Consider now any $x \geq 0$. The series $\sum_{0 \leq k < \infty} |a_k| e^{x}$ converges because it is equal to $e^{x} \sum_{0 \leq k < \infty} |a_k|$, which we proved to be convergent in the previous paragraph. We observe that $|a_k| \left|1 + \frac {x} k\right|^k \leq |a_k| e^{x}$ is true for all $k$. So by the Direct Comparison Test, the series $|a_0| + \sum_{1 \leq k < \infty} |a_k| \left|1 + \frac {x} k\right|^k$ must also converge. So by the definition of absolute convergence, we have that $a_0 + \sum_{1 \leq k < \infty} a_k \left(1 + \frac x k\right)^k=s(x)$ converges absolutely, where $x \geq 0$ was arbitrary.
$\blacksquare$
Lemma 2: If $s(x)$ converges absolutely when $x \geq 0$, then for large enough $x$ we have that $e^x > s(x)$.
Proof
Let $z_n(x) = |a_0| + \sum_{1 \leq k < n} |a_k| \left(1 + \frac x k\right)^k$.
Pick some $\epsilon < \frac 1 2$.
Observe that there must be a large enough $n$ such that $z_\infty(0) - z_n(0) \leq \epsilon$.
Using the triangle inequality, we have that:
$$\begin{aligned}
|s(x)| &\leq z_\infty(x)\\
&\leq z_n(x) + (z_\infty(x) - z_n(x))\\
\end{aligned}$$
Since $z_n(x)$ is a polynomial, there is a large enough $X$ such that for all $x \geq X$ we have $z_n(x) < \epsilon \cdot e^x$. So we have that
$$\begin{aligned}
|s(x)| &<\epsilon\cdot e^x + (z_\infty(x) - z_n(x))\\
&\leq \epsilon\cdot e^x + (z_\infty(0) - z_n(0)) e^x\\
&\leq \epsilon\cdot e^x + \epsilon\cdot e^x\\
& = 2\epsilon \cdot e^x\\
&< e^x.
\end{aligned}$$
The claim above that $z_\infty(x) - z_n(x) \leq (z_\infty(0) - z_n(0)) e^x$ follows from
$$\begin{aligned}
&|a_k| \left(1 + \frac x k\right)^k \leq |a_k| e^x\\
\implies &\sum_{k \geq n}|a_k|\left(1 + \frac x k\right)^k \leq \sum_{k \geq n}|a_k| e^x\\
\implies & z_\infty(x) - z_n(x) \leq (z_\infty(0) - z_n(0)) e^x
\end{aligned}$$
We are done.
$\blacksquare$
| Let $R_n(z)=\sum_{1 \leq k \le n} a_k \left(1 + \frac z k\right)^k$. Assuming the hypothesis we will show:
1: $R_n(z) \to e^z-a_0$ uniformly in the disc $|z| \le \frac{1}{2}$
2: $\sum_{1}^{\infty}{\frac{a_k}{k^q}}=0, q \ge 1$ arbitrary integer
3: $a_k=0, k \ge 1$
We use that if $|z| \le \frac{1}{2}, k \ge 1$, $|(1+\frac{z}{k})^k-(1+\frac{z}{k+1})^{k+1}| \le B|\frac{1}{k^2}|, B>0$ constant, which follows from $k\log(1+\frac{z}{k})=z-\frac{z^2}{2k}+O(\frac{z^3}{k^2}), |z| \le \frac{1}{2}, k \ge 1$, so $(1+\frac{z}{k})^k=e^{z-\frac{z^2}{2k}+O(\frac{z^3}{k^2})}$ and then subtracting the relations for $k, k+1$ since the $O$ terms are at most $\frac{1}{8k^2}$ in absolute value and $|\frac{z^2}{2k}-\frac{z^2}{2k+2}| \le \frac{1}{8k^2}, |z| \le \frac{1}{2}$
We also note that $|(1+\frac{z}{k})^k| \le (1+\frac{|z|}{k})^k \le e^{|z|} \le e^{\frac{1}{2}} <e$ since the binomial coefficients are positive and the triangle inequality works
For simplicity let $b_k(z)=(1+ \frac z k)^k$, so if $|z| \le \frac{1}{2}$
$|b_k(z)-b_{k+1}(z)| \le B|\frac{1}{k^2}|$
$|b_k(z)| \le e$
Then since $\Sigma{a_k} \to 1$ by hypothesis for $x=0$, it follows that $|\sum_{N}^{M}a_k| \le A$ for all $N \le M$ and some constant $A>0$, while $|\sum_{N}^{M}a_k| \to 0, N,M \to \infty$ so if we pick arbitrary $\epsilon >0, |\sum_{N}^{M}a_k| \le \epsilon, M>N >N(\epsilon)$ and then we sum by parts:
$|R_M(z)-R_N(z)|=|\sum_{N+1}^{M}a_kb_k(z)|=|(A_{N+1}(b_{N+1}-b_{N+2})(z))+(A_{N+2}(b_{N+2}-b_{N+3})(z))+....(A_{M-1}(b_{M-1}-b_{M})(z))+(A_{M}(b_{M}(z))|$,
where $A_p=\sum_{N+1}^{p}a_k, p \ge N+1$
So $|R_M(z)-R_N(z)| \le A\sum_{N+1}^{M-1}{B|\frac{1}{k^2}}|+e|A_M| \le AB\frac{1}{N}+e\epsilon, M>N > N(\epsilon)$ which shows that $R_N(z)$ is uniformly Cauchy in $|z| \le \frac{1}{2}$. But this means $R_n(z)$ converges uniformly to an analytic function $f(z)$ on the disc of radius $r=\frac{1}{2}$ and since we know by hypothesis that $f(x)=e^x-a_0$ on the $[-\frac{1}{2},\frac{1}{2}]$ segment it follows by the identity principle that $f(z)=e^z-a_0$ and this is 1 above
Now, we can differentiate term by term and get $R_n(z)' \to e^z$ uniformly on the disc of radius $\frac{1}{2}$ and then plugging $z=0$ we get $\sum_{k \ge 1}a_k=1$ hence $a_0=0$, hence $R_n(z) \to e^z$ uniformly on the above disc.
Subtracting we get $\sum_{k = 1}^n\frac{a_kz}{k}(1+\frac{z}{k})^{k-1} \to 0$ uniformly which clearly implies $\sum_{k = 1}^n\frac{a_k}{k}(1+\frac{z}{k})^{k-1} \to 0$, hence $\sum_{k \ge 1}\frac{a_k}{k}=0$
($zf_n(z) \to 0$ uniformly on the disc of radius $r$, means that for any $\epsilon >0$ there is $N(\epsilon), |zf_n(z)| \le \epsilon, |z| \le r, n \ge N(\epsilon)$
Schwarz lemma implies $|zf_n(z)| \le \frac{|z|}{r}\epsilon, |z| \le r, n \ge N(\epsilon)$ or $|f_n(z)| \le \frac{1}{r}\epsilon, |z| \le r, n \ge N(\epsilon)$)
But now (with all convergences below being uniform) we can integrate on the straight line from $0$ to $z$, $\sum_{k = 1}^{n}\frac{a_k}{k}(1+\frac{w}{k})^{k-1} \to 0$ and get $\sum_{k = 1}^{n}\frac{a_k}{k}(1+\frac{z}{k})^{k} \to 0$.
Subtracting gives $\sum_{k = 1}^n\frac{a_kz}{k^2}(1+\frac{z}{k})^{k-1} \to 0$, hence $\sum_{k = 1}^n\frac{a_k}{k^2}(1+\frac{z}{k})^{k-1} \to 0$, hence $\sum_{k \ge 1}\frac{a_k}{k^2}=0$. A clear induction (integrate, subtract, divide the $z$) gives 2 above
3 is a trivial consequence of 2 since wlog we can assume $\sum|a_k| < \infty$ in 2 by going to $b_k=\frac{a_k}{k^2}$ which is absolutely convergent since $a_k$ is bounded and which clearly satisfies 2; then if $p \ge 1$ is the first index for which $a_p \ne 0, |a_p|=a>0$ and with $A=\max|a_k| \ge a>0$ we easily find a large $q$ s.t. $A\frac{p^q}{(p+m)^q} \le .0001\frac{a}{(p+m)^2}, m>1$ as $(1+\frac{m}{p})^{q-2} \ge (1+\frac{1}{p})^{q-2} \to \infty$ with $q$ for fixed $p$, leading to a term with absolute value $a$ plus sum that is at most $.0001a\frac{\pi^2}{6}$ in absolute value being zero and that is impossible.
So all $a_k$ must be zero if 2 is satisfied and we are finally done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 1,
"answer_id": 0
} |
How to find the shaded region
Find the area of the blue shaded region of the square in the following image:
[Added by Jack:]
The area of the triangle in the middle of the square is given by
$$
4.8\times 6=28.8\ (cm^2)
$$
Other than this, it seems difficult to go further with the given information. It seems that one has yet to use the assumption of "square".
How can one solve this problem?
| Let us start with a square $ABCD$ and construct on the sides $AB$, $BC$, $CD$, $DA$ points $E,F,G,H$ so that $AE=BF=CG=DH$:
Let $X$ be the intersection $X=AG\cap BH$. Similar points $Y,Z,W$ obtained by rotation around the center of the square were also drawn. This realizes the situation from the given problem. We want to become independent of the given values for $XG$ and $XB$, and show in general:
The blue area together is equal to the area of triangle $\Delta BXG$.
(To have some symmetry, and some common part with $\Delta BXG$, some green triangles have also been drawn.)
(Note that because of $DH=CG$ the area of the blue triangle $\Delta DHB$ is equal to the area of the blue triangle in the OP, $\Delta BGC$.)
Proof:
$$
\begin{aligned}
2\operatorname{Area}(\Delta BXG)
&=
2\operatorname{Area}(\Delta YXW)
+
2\operatorname{Area}(\Delta YWB)
+
2\operatorname{Area}(\Delta GWB)
\\
&=
\operatorname{Area}(\square XYZW)
+
YB\cdot XW
+
WG\cdot XB
\\
&=
\operatorname{Area}(\square XYZW)
+
AX\cdot XY
+
YE\cdot XB
\\
&=
\operatorname{Area}(\square XYZW)
+
2\operatorname{Area}(\Delta AXE)
+
2\operatorname{Area}(\Delta XEB)
\\
&=\text{"Blue area plus green area"}
\\
&=
2\operatorname{Area}(AXH) +
2\operatorname{Area}(DHB)
\ ,
\end{aligned}
$$
and the result follows.
$\square$
I tried to show "the square in the middle", and use in the proof the full symmetry.
Later edit:
Here is a further picture that illustrates the way to split the triangle $\Delta BXG$ in three triangles, that can be pasted elsewhere with equivalent area:
[Figure: $\Delta BXG$ split into three parts; one part is half of a square, the two other parts add up to a side triangle.]
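The claim is easy to confirm numerically with coordinates: place the unit square at $A=(0,0)$, $B=(1,0)$, $C=(1,1)$, $D=(0,1)$ and vary $t = AE = BF = CG = DH$ (a sketch using the shoelace formula and a standard line-intersection routine; only $G$ and $H$ are needed to locate $X$):

```python
# Coordinate check of Area(BXG) = Area(AXH) + Area(DHB) for several t.
def area(p, q, r):
    # shoelace formula for a triangle
    return abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1])) / 2

def intersect(p1, p2, q1, q2):
    # intersection of lines p1p2 and q1q2
    d1 = (p2[0] - p1[0], p2[1] - p1[1])
    d2 = (q2[0] - q1[0], q2[1] - q1[1])
    den = d1[0] * d2[1] - d1[1] * d2[0]
    s = ((q1[0] - p1[0]) * d2[1] - (q1[1] - p1[1]) * d2[0]) / den
    return (p1[0] + s * d1[0], p1[1] + s * d1[1])

A, B, C, D = (0, 0), (1, 0), (1, 1), (0, 1)
for t in (0.1, 0.25, 0.5, 0.75):
    G = (1 - t, 1)        # CG = t, measured from C towards D
    H = (0, 1 - t)        # DH = t, measured from D towards A
    X = intersect(A, G, B, H)
    assert abs(area(B, X, G) - (area(A, X, H) + area(D, H, B))) < 1e-12
print("ok")
```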
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Evaluating the integral: $ I = \int e^{\frac xa} \sin x \, \mathrm dx$
Evaluating the integral:
$$ I = \int e^{\frac xa} \sin x \, \mathrm dx \tag {1}$$
This question was asked in CBSE Board 12th Grade (India). So, here was the approach I made.
Proposition 1: for $y = u(x)$, $\forall \, x \in \mathbb{R}$,
$$ \int e^{\frac xa} u(x) \, \mathrm dx = a e^{\frac xa} \left ( au(x) - a^2\dfrac{\mathrm du(x)}{\mathrm dx} + a^3\dfrac{\mathrm d^2u(x)}{\mathrm dx^2} - \dots \right ) \quad \dots\tag {*} $$
Proof: This can easily be proved by applying by parts in LHS and subtracting it with RHS to a quantity which can be made small than any other assignable quantity as required.
So, using the same to evaluate the integral $(1)$, we get:
$$I = ae^{\frac xa} \left ( (\sin x) - (\cos x) + (-\sin x) - (-\cos x) + (\sin x) - (\cos x) + (-\sin x) - (-\cos x) + (\dots) \right) $$
Clearly, the repetitions of sine and cosine functions inside the brackets in RHS are cancelling each other, so irrespective of the value of $x$, the series should converge to '0'.
$$\therefore I = 0$$
But, wait, the integrand is continuous, strictly increasing on some intervals of $x$ and strictly decreasing on others. This is enough to show that my answer is wrong, but what did I miss?
Edit: This question is more like why my approach failed then What is the correct way to find the solution of the question
Edit 2: Thanks to @J.G for pointing out that my proposition had issues. I've fixed that part now :)
| There are several issues here.
*
*Your $(\ast)$ should read $\int e^{x/a}u(x)dx=e^{x/a}(au-a^2u^\prime+a^3u^{\prime\prime}-\cdots)+C$.
*We have $\int e^{x/a}\sin xdx=e^{x/a}(a\sin x-a^2\cos x-a^3\sin x+\cdots)+C$. Thanks to the powers of $a$, you can use a geometric series, $\frac{a}{1+a^2}e^{x/a}(\sin x-a\cos x)+C$. You can verify by differentiation this is correct.
*There are certain convergence issues we have to either address or gloss over to use $(\ast)$, or the geometric series above. (You can understand the $a\to1^-$ limit with a careful understanding of this.) A safer approach is @user1337's or, if you're happy with complex methods, $$\int e^{x/a}\sin xdx=\Im\int e^{(1/a+i)x}dx=\Im\frac{1}{1/a+i}e^{(1/a+i)x}+C,$$which gets you to the above result fairly quickly. (For complex $a$, write the integrand as $\frac{e^{(1/a+i)x}-e^{(1/a-i)x}}{2i}$ instead.)
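The closed form in point 2 can be verified numerically by differentiating it with a central difference (the sample values of $a$ and $x$ are arbitrary):

```python
import math

# Check that d/dx [ a/(1+a^2) * e^{x/a} (sin x - a cos x) ] = e^{x/a} sin x.
def F(x, a):
    return a / (1 + a * a) * math.exp(x / a) * (math.sin(x) - a * math.cos(x))

h = 1e-6
for a in (0.5, 1.0, 2.0):
    for x in (-1.0, 0.0, 0.7, 2.0):
        numeric = (F(x + h, a) - F(x - h, a)) / (2 * h)
        exact = math.exp(x / a) * math.sin(x)
        assert abs(numeric - exact) < 1e-6
print("ok")
```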
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
For any positive integer $a$, prove that there exists infinitely many composite $n$ such that $a^{n-1}\equiv 1\mod n$. Studying for an upcoming comprehensive exam we stumbled upon the following problem:
Prove that for every positive integer $a$, there exists infinitely many composite integers $n$ such that $a^{n-1}\equiv 1 \mod n$. (Hint: Choose $n=\frac{a^{2p}-1}{a^2-1}$ for a suitable prime $p$.)
We have tried many approaches to no avail. The most promising approach involved choosing $p$ such that $a$ is a quadratic residue modulo $p$. Then $n-1=\frac{a^2(a^{2(p-1)}-1)}{a^2-1}$ has a factor of $p$ (the second factor in the numerator is a difference of squares which factors to another difference of squares and finally an $a^{(p-1)/2}-1$ pops out which by our choice of $p$ must be divisible by $p$. We were unable to conclude from here. Any help would be appreciated.
| First: If $p$ is a prime and
$$n=\frac{a^{2p}-1}{a^2-1} =1+a^2+a^4+...+a^{2p-2}$$
Then
$$ 1+a^2+a^4+...+a^{2p-2}=0 \pmod{n} \\
a^2( 1+a^2+a^4+...+a^{2p-2})=0 \pmod{n} \\
a^2+a^4+...+a^{2p-2}+a^{2p}=0 \pmod{n} \\
$$
Subtracting the first and last relation you get
$$1=a^{2p} \pmod{n}$$
Now, if you now chose $p,n$ so that $2p |n-1$ you get
$$a^{n-1} \equiv 1 \pmod{n}$$
Note that $2p |n-1$ means
$$2p|a^2(1+a^2+...+a^{2p-4})$$
It is easy to make the RHS even and if you want
$$p|1+a^2+...+a^{2p-4}$$
the easiest way is to pick $p >a^2-1$ (to make sure that $a^2-1$ cannot be divisible by $p$) and use
$$a^{2p-2} \equiv (a^{p-1})^2\equiv 1 \pmod{p},$$
which gives $p \mid a^{2p-2}-1 = (a^2-1)(1+a^2+\dots+a^{2p-4})$ and hence $p\mid 1+a^2+\dots+a^{2p-4}$, since $p\nmid a^2-1$.
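The construction can be checked directly for $a=2$ (the smallest admissible prime $p=5$ yields $n = 341 = 11\cdot 31$, the classical base-$2$ Fermat pseudoprime):

```python
# For a = 2, p = 5 (a prime with p > a^2 - 1):
a, p = 2, 5
n = (a ** (2 * p) - 1) // (a ** 2 - 1)
print(n)                                   # 341
assert n == 341 and n % 11 == 0            # composite: 341 = 11 * 31
assert pow(a, n - 1, n) == 1               # a^(n-1) ≡ 1 (mod n)

# Larger primes p give further composite examples:
for p in (7, 11, 13):
    n = (a ** (2 * p) - 1) // (a ** 2 - 1)
    assert pow(a, n - 1, n) == 1
print("ok")
```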
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How does one derive the following formula of integration? $$\int_0^\infty\frac{\exp{\left(-\frac {y^2}{4w}-t^2w\right)}}{\sqrt {\pi w}}dw=\frac{\exp(-ty)}t$$ for $t$ and $y$ positive. This integral is useful in the following context: suppose we are given $$\int_0^\infty tf(t){\exp{\left(-t^2w\right)}}dt$$ (a function of $w$) and we want to convert it to the Laplace transform of $f(t)$. Multiplying by $$\frac{\exp{\left(-\frac {y^2}{4w}\right)}}{\sqrt {\pi w}}$$ and integrating over $w$ from $0$ to $\infty$ will clearly do the trick (resulting in a function of $y$).
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
With $\ds{w \equiv {y \over 2t}\,\expo{2\theta}}$:
\begin{align}
&\bbox[15px,#ffd]{\int_{0}^{\infty}
\exp\pars{-{y^{2} \over 4w} -t^{2}w}{\dd w \over \root{\pi w}}}
\\[5mm] = &\
{1 \over \root{\pi}}\int_{-\infty}^{\infty}
\exp\pars{-{y^{2} \over 4\bracks{y\expo{2\theta}/2t}} -
t^{2}\bracks{{y \over 2t}\,\expo{2\theta}}}
{y \over 2t}\,{2\expo{2\theta}\,\dd\theta \over \root{y\expo{2\theta}/\pars{2t}}}
\\[5mm] = &\
{1 \over \root{\pi}}\root{2y \over t}\int_{-\infty}^{\infty}
\expo{-ty\cosh\pars{2\theta}}\expo{\theta}\dd\theta
\\[5mm] = &\
{2 \over \root{\pi}}\root{2y \over t}\int_{0}^{\infty}
\expo{-ty\cosh\pars{2\theta}}\cosh\pars{\theta}\dd\theta
\\[5mm] = &\
{2 \over \root{\pi}}\root{2y \over t}\int_{0}^{\infty}
\exp\pars{-ty\bracks{2\sinh^{2}\pars{\theta} + 1}}
\cosh\pars{\theta}\dd\theta
\\[5mm] \stackrel{\sinh\pars{\theta}\ =\ x}{=}\,\,\,&
{2 \over \root{\pi}}\root{2y \over t}\expo{-ty}\int_{0}^{\infty}
\exp\pars{-2tyx^{2}}\,\dd x
\\[5mm] = &\
{2 \over \root{\pi}}\root{2y \over t}\expo{-ty}\pars{\root{\pi/2} \over 2\root{ty}} = \bbx{\expo{-ty} \over t}
\end{align}
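The identity can also be sanity-checked by direct numerical quadrature, here for $t = y = 1$ (the trapezoid rule and cutoffs below are arbitrary choices; the integrand vanishes rapidly at both ends):

```python
import math

# Check ∫_0^∞ exp(-y^2/(4w) - t^2 w) / sqrt(pi w) dw = exp(-t y)/t for t = y = 1.
def integrand(w, t=1.0, y=1.0):
    return math.exp(-y * y / (4 * w) - t * t * w) / math.sqrt(math.pi * w)

# trapezoid rule on [eps, W]; the integrand underflows to 0 near w = 0
eps, W, n = 1e-6, 40.0, 200000
h = (W - eps) / n
total = 0.5 * (integrand(eps) + integrand(W))
total += sum(integrand(eps + i * h) for i in range(1, n))
total *= h
print(total, math.exp(-1))   # both ≈ 0.367879
assert abs(total - math.exp(-1)) < 1e-4
```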
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Taylor series of $\ln(1+x+x^2+...+x^{10})$ I have to find Taylor series of $\ln(1+x+x^2+...+x^{10})$.
I have a hint that I should write it as a difference of two logarithms, but I do not know how.
Any help?
| Recall the formula for a finite geometric series:
$$1+x+x^2 + \cdots + x^n = \frac{1-x^{n+1}}{1-x}$$
Take $n=10$ and you can apply this to your problem fairly readily.
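Putting the hint to work: $\ln(1+x+\cdots+x^{10}) = \ln(1-x^{11}) - \ln(1-x) = \sum_{k\ge1} \frac{x^k}{k} - \sum_{k\ge1} \frac{x^{11k}}{k}$ for $|x|<1$, which can be checked numerically:

```python
import math

# Compare ln(1 + x + ... + x^10) with the series
# sum_{k>=1} x^k / k - sum_{k>=1} x^{11k} / k, truncated where terms vanish.
x = 0.1
direct = math.log(sum(x ** j for j in range(11)))
series = (sum(x ** k / k for k in range(1, 60))
          - sum(x ** (11 * k) / k for k in range(1, 6)))
print(direct, series)
assert abs(direct - series) < 1e-12
```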
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
equation simplification. $(5y-1)/3 + 4 =(-8y+4)/6$ Simplification of this equation gives two answers when approched by two different methods.
Method 1 Using L.C.M( least common multiple)
$(5y-1)/3 + 4 =(-8y+4)/6$
$(5y-1+12)/3 = (-8y+4)/6$
$5y-11 = (-8y+4)/2$
$(5y-11)2= (-8y+4)$
$10y-22 = -8y+4$
$18y=26$
$y = 26/18=13/9$
Method 2 multiplying every term by 3
$3(5y-1)/3 + 4*3 = 3(-8y+4)/6$
$5y-1 + 12 = (-8y+4)/2$
$2(5y-1 + 12) = -8y+4$
$10y-2+24 = -8y+4$
$18y + 22 = 4$
$18y = -18$
$y = -1$
The correct method is method 2 and the correct answer is y = -1
Why is method 1 is incorrect? Could anyone explain why the answer is wrong when using the L.C.M( method 1)?
| From the second step transitioning to the third step of your work, you incorrectly computed $5y-1+12$ to be $5y-11$ when it should be equal to $5y+11$. An obvious careless sign error.
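One can confirm both claims with exact rational arithmetic:

```python
from fractions import Fraction

# Verify that y = -1 satisfies (5y - 1)/3 + 4 = (-8y + 4)/6 exactly,
# and that the value 13/9 from the flawed Method 1 does not.
def lhs(y): return (5 * y - 1) / 3 + 4
def rhs(y): return (-8 * y + 4) / 6

assert lhs(Fraction(-1)) == rhs(Fraction(-1))
assert lhs(Fraction(13, 9)) != rhs(Fraction(13, 9))
print(lhs(Fraction(-1)), rhs(Fraction(-1)))   # 2 2
```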
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
Intersection of ideals $(x,y),(y,z)$ and $(x,z)$ in $K[x,y,z]$ I have to prove that the intersection of ideals $(x,y),(y,z)$ and $(x,z)$ is equal to the ideal generated by $xy ,yz$ and $xz$. I am unable to prove it using the definitions only.
| Since $xy, yz, xz \in (x,y)\cap (y, z) \cap (x, z)$, clearly $(xy, yz, xz) \subset (x,y)\cap (y, z) \cap (x, z)$. Suppose now that $p(x,y,z)\in (x,y)\cap (y, z) \cap (x, z)$. Fix a monomial term, $m(x,y,z)$ of $p(x,y,z)$. Since $p(x,y,z)\in (x,y)$ we have $p(x,y,z)=xp_1(x,y,z)+yp_2(x,y,z)$ for some polynomials $p_1$ and $p_2$ which means that $m$ must either contain $x$ or $y$ as a factor. Suppose without loss of generality that it contains $x$ as a factor. Then we just observe that since $p(x,y,z)\in (y, z)$ also, a symmetric argument to the one above shows that $m$ either contains $y$ or $z$ as a factor. If the former is true, $m$ then contains $xy$ as a factor, if the latter is true it contains $xz$ as a factor. Since $m$ was an arbitrary term in $p$ this shows that each term in $p$ contains $xy$, $yz$ or $xz$ as a factor, i.e. $p(x,y,z)\in (xy, yz, xz)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3323928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why does $x_0 + \sin(x_0) \approx \pi$, when computing it multiple times? Why does $x_0 + \sin(x_0) \approx \pi$ when computing this multiple times on the calculator?
So for any starting value up to $\approx 6.25$, doing the following operation ($x_0 + \sin(x_0)$, then the newly obtained value (let's call it $x_1$) is replaced instead of $x_0$: $x_1 + \sin(x_1)$) many times leads to a value which is very close to $\pi$.
However, when going above that value, this will result in roughly $3\pi$, $5\pi$, etc.
Could anyone please try to explain why the simple first calculation works?
EDIT: Thank you so much for your answers guys - through a combination of the many, I believe to now understand why!
| You start with $x_0 \in \mathbb R$ and define recursively for $n \ge 0$
$$x_{n+1} = x_n + \sin x_n .$$
What can we say about the convergence of the sequence $(x_n)$?
If it converges to some $x \in \mathbb R$, then necessarily
$$x = \lim x_{n+1} = \lim (x_n + \sin x_n) = \lim x_n + \lim \sin x_n = \lim x_n + \sin (\lim x_n) = x + \sin x .$$
This means $\sin x = 0$, i.e. $x = k\pi$ for some $k \in \mathbb Z$.
Case 1. $x_0 = k\pi$.
Then all $x_n = x_0$. Hence $(x_n)$ is a constant sequence which trivially converges to $k\pi$.
Case 2. $0 < x_0 < \pi$.
We claim that $(x_n)$ is strictly monotonically increasing such that all $x_n < \pi$. Hence it converges to some $x \in [x_0,\pi]$ and our above limit consideration shows $x = \pi$. The claim is easily proved by induction using the following facts:
*
*For $0 < \xi < \pi$ we have $\sin \xi > 0$ which implies $\xi < \xi + \sin \xi$.
*For $0 < \eta$ we have $\sin \eta < \eta$. For $\xi < \pi$ we therefore get $\xi + \sin \xi = \xi + \sin (\pi -\xi) < \xi + \pi - \xi = \pi$.
Case 3. $2r\pi < x_0 < (2r+1)\pi$.
Then $(x_n)$ is strictly monotonically increasing such that all $x_n < (2r+1)\pi$ and $\lim x_n = (2r+1)\pi$. This follows from Case 2 by considering the sequence $x'_n = x_n - 2r\pi$. Note that
$$x'_{n+1} = x_{n+1} - 2r\pi = x_n + \sin x_n - 2r\pi = x_n - 2r\pi + \sin (x_n - 2r\pi) = x'_n + \sin x'_n .$$
Case 4. $(2r-1)\pi < x_0 < 2r\pi$.
Then $(x_n)$ is strictly monotonically decreasing such that all $x_n > (2r-1)\pi$ and $\lim x_n = (2r-1)\pi$. This follows from Case 3 by considering the sequence $x'_n = -x_n$. Note that $2(-r)\pi = -2r\pi < x'_0 = -x_0 < -(2r-1)\pi = (2(-r)+1)\pi$ and
$$x'_{n+1} = -x_{n+1} = -x_n - \sin x_n = -x_n +\sin(-x_n) = x'_n +\sin x'_n . $$
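The iteration is easy to reproduce (a quick sketch; convergence near each odd multiple of $\pi$ is very fast):

```python
import math

# Run x_{n+1} = x_n + sin(x_n) from two starting points.
def iterate(x, steps=25):
    for _ in range(steps):
        x = x + math.sin(x)
    return x

print(iterate(1.0))     # ≈ pi
print(iterate(7.0))     # ≈ 3*pi  (7 lies between 2*pi and 3*pi)
assert abs(iterate(1.0) - math.pi) < 1e-12
assert abs(iterate(7.0) - 3 * math.pi) < 1e-12
```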
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Help to find complex Fourier series coefficient of this periodic function I'm having big trouble finding the complex Fourier series coefficient of the following periodic function
$$\frac{a-b\cos\varphi}{\sqrt{a^2+b^2-2ab\cos\varphi}}$$
Mathematica is unable to compute it!!
| You might combine the Legendre expansion (wlog $a>b$)
$$
\frac1{\sqrt{a^2+b^2-2ab\cos\varphi}}=\frac1a\sum_{n=0}^\infty\left(\frac ba\right)^nP_n(\cos\varphi)
$$
with the Fourier expansion (Gradshteyn-Ryzhik 8.826)
$$
P_n(\cos\varphi)=\\\frac{2^{n+2}n!}{\pi(2n+1)!!}\left(\sin(n+1)\varphi+\frac11\frac{n+1}{2n+3}\sin(n+3)\varphi+\frac11\frac32\frac{n+1}{2n+3}\frac{n+2}{2n+5}\sin(n+5)\varphi+\dots\right)
$$
and product-to-sum formulas when multiplying by $a-b\cos\varphi$.
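The Legendre expansion is straightforward to sanity-check numerically (a sketch; the parameter values $a=2$, $b=1$, $\varphi=0.7$ and the truncation order are my choices), building $P_n$ from Bonnet's recursion $(n+1)P_{n+1}(x)=(2n+1)x\,P_n(x)-n\,P_{n-1}(x)$:

```python
import math

def legendre_values(x, nmax):
    """P_0(x), ..., P_nmax(x) via Bonnet's recursion."""
    vals = [1.0, x]
    for n in range(1, nmax):
        vals.append(((2 * n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[: nmax + 1]

a, b, phi = 2.0, 1.0, 0.7            # a > b is required for convergence
x = math.cos(phi)
closed = 1.0 / math.sqrt(a * a + b * b - 2 * a * b * x)
partial = sum((b / a) ** n * P for n, P in enumerate(legendre_values(x, 60))) / a
assert abs(partial - closed) < 1e-12
```

Since $|P_n|\le 1$ on $[-1,1]$, the tail is bounded by a geometric series in $b/a$, which is why sixty terms already agree to machine precision here.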
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
solve sum of square roots using a single function using Newton's Method I have to estimate $n= \sqrt{3} + \sqrt{7}$ using Newton's Method of approximation, but I have to determine a single function which can be used to estimate $n$.
P.S: The function should not involve radical expressions (nth root of constants or variables)
So, I just need to know what the single function is. I can solve the rest myself
Thanks!
| Considering that radical roots appear in conjugate pairs, you can construct a polynomial that has the root $n=\sqrt{7}+\sqrt{3}$ as follows,
$$[x-(\sqrt{7}+\sqrt{3})][x-(\sqrt{7}-\sqrt{3})] = x^2 +4 -2\sqrt{7}x$$
$$ (x^2 +4 -2\sqrt{7}x)(x^2 +4 + 2\sqrt{7}x) = (x^2+4)^2-28x^2 $$
Thus, the function
$$f(x)=(x^2+4)^2-28x^2$$ has the roots $n=\sqrt{7}\pm \sqrt{3}$, as well as the other pair $-\sqrt{7}\pm \sqrt{3}$.
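A short sketch of the resulting radical-free Newton iteration (the starting point $x_0=5$ is my choice; any start beyond the largest critical point $\sqrt{10}$ of $f$ works, since $f$ is increasing and convex there):

```python
import math

def f(x):
    return (x * x + 4) ** 2 - 28 * x * x

def fprime(x):
    return 4 * x * (x * x - 10)    # derivative of (x^2+4)^2 - 28x^2

x = 5.0                            # start to the right of the largest root
for _ in range(50):
    x = x - f(x) / fprime(x)

assert abs(x - (math.sqrt(7) + math.sqrt(3))) < 1e-12
```

The iteration itself uses only rational operations on $f$ and $f'$; the radicals appear only in the final accuracy check.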
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How does the product rule work for a double derivative, $\frac{\partial^2 }{\partial x^2}$? I have this function $$A_x=A_0\frac{1}{1+e^{-az}}e^{ik(z-ct)}$$ where $A_0$ is a constant and $a$ and $k$ are constants with dimensions of inverse length and $z \in \mathbb{R}$
I wish to compute $$\frac{\partial^2 A_x}{\partial z^2}$$
So my attempt is
$$\frac{\partial^2 A_x}{\partial z^2}=A_0\left[\frac{\partial^2}{\partial z^2} \left(\frac{1}{1+e^{-az}}\right)-\frac{k^2}{1+e^{-az}}\right]e^{ik\left(z-ct \right)}$$
where I have simply applied the product rule for differentiation making the assumption that it works for double derivatives.
The reason why I have done it this way is to simplify the calculation further by using the following result:
$$\frac{\partial^2}{\partial z^2}\left(\frac{1}{1+e^{-az}}\right)=a^2\frac{e^{-az}\left(e^{-az}-1\right)}{\left(1+e^{-az}\right)^3}$$
The problem, however, is that the correct answer is
$$\frac{\partial^2 A_x}{\partial z^2}=A_0\left[\frac{\partial^2}{\partial z^2} \left(\frac{1}{1+e^{-az}}\right)+\color{blue}{2ik\frac{\partial}{\partial z}\left(\frac{1}{1+e^{-az}}\right)}-\frac{k^2}{1+e^{-az}}\right]e^{ik\left(z-ct \right)}$$
My answer is identical with the exception of the blue term, I don't understand where this term comes from and I know that my answer is wrong, but I would like to know where this term comes from and hence why the product rule does not apply to double derivatives?
| The product rule is slightly different for double derivatives. For simplicity, let $f$ and $g$ be twice continuously differentiable functions of $x$. Then,
$$\frac{d^2}{dx^2}fg = \frac{d}{dx}\left(\frac{d}{dx}fg\right) = \frac{d}{dx}\left(f'g + fg'\right) = \frac{d}{dx}f'g + \frac{d}{dx}fg' = f''g + f'g' + f'g' + fg'' = f''g + \mathbf{2f'g'} + fg''.$$
The bold term corresponds to the blue term missing from your answer.
By the way, you can extend this to higher order derivatives in the same way. You end up with,
$$\frac{d^n}{dx^n}fg = \sum_{k=0}^n \binom{n}{k}f^{(n-k)}g^{(k)}.$$
edit: fixed typo
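Returning to the question's function, a quick finite-difference sketch (the parameter values $a=1$, $k=2$ and evaluation at $t=0$ are my choices) confirms that the cross term $2ik\,\partial_z(\cdot)$ is exactly what the plain second derivative produces, and that dropping it gives the wrong answer:

```python
import cmath
import math

a, k = 1.0, 2.0

def S(z):
    """The sigmoid factor 1/(1 + e^(-a z))."""
    return 1.0 / (1.0 + math.exp(-a * z))

def S1(z):  # first derivative of S
    e = math.exp(-a * z)
    return a * e / (1.0 + e) ** 2

def S2(z):  # second derivative of S, as stated in the question
    e = math.exp(-a * z)
    return a * a * e * (e - 1.0) / (1.0 + e) ** 3

def A(z):   # amplitude at t = 0; the e^{-ikct} factor is constant in z
    return S(z) * cmath.exp(1j * k * z)

z, h = 0.3, 1e-4
numeric = (A(z + h) - 2 * A(z) + A(z - h)) / h ** 2          # central difference
with_cross = (S2(z) + 2j * k * S1(z) - k * k * S(z)) * cmath.exp(1j * k * z)
without_cross = (S2(z) - k * k * S(z)) * cmath.exp(1j * k * z)

assert abs(numeric - with_cross) < 1e-5    # formula with the 2ik term matches
assert abs(numeric - without_cross) > 1e-2 # omitting it does not
```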
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Determining the truth of $\forall \; n \in Z, \exists \; a,b \in Z$ : $n=4a+5b$ $\implies$ $n^2 = 5a+4b-1$ The question asking to prove $\forall n \in Z$: ($\exists a,b \in Z$ : $n=4a+5b$) $\implies$ ($\exists a,b \in Z$ : $n^2 = 5a+4b-1$) originally asked one thing but was then corrected to ask something quite different. I had already started answering the original question, and finished answering it, but then deleted the answer. As I believe the original question is quite interesting, I'm asking it here, along with providing my answer.
The original question was asking to prove
$\forall \; n \in Z, \exists \; a,b \in Z$ : $n=4a+5b$ $\implies$ $n^2 = 5a+4b-1$
Although as Matthew Daly's comments indicate
Given $n$, if you choose $a$ and $b$ such that the first equation is not true, then the implication is true because a false premise implies any consequent.
I, instead, considered it as saying that for each integer $n$, there always exist integers $a,b$ satisfying both of the following $2$ equations simultaneously.
$$n = 4a + 5b \tag{1}$$
$$n^2 = 5a + 4b - 1 \tag{2}$$
The OP originally suggested
I tried assuming that the LHS is true and squaring $4a+5b$ and tried to make it equal to the RHS but I am not sure if that's the right way.
This is one way to try, but I don't see any way to finish the solution. This is because you get $n^2 = 16a^2 + 40ab + 25b^2 = 5a + 4b - 1$, giving a mixture of terms with $a$, $b$, $ab$, $a^2$ and $b^2$. The OP ended with
Which proof technique would work the best?
I believe an approach to check is to do some sort of manipulation of the equations, and then check one or a few relatively small moduli to determine how to limit the potential solutions, or even possibly show there are no solutions.
| If $n=4a+5b$ and $n^2+1=5a+4b$, then $n^2+n+1=9(a+b)$, which implies $9$ divides $n^2+n+1$. This in turn implies $9$ divides $4n^2+4n+4=(2n+1)^2+3$. But this is impossible, since if $9$ divided $(2n+1)^2+3$, then $3$ would have to divide $2n+1$, in which case $(2n+1)^2+3$ leaves remainder $3$ when divided by $9$.
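A brute-force check of the key step (the search range is arbitrary): mod $9$, the quantity $n^2+n+1$ only takes the residues $1, 3, 4, 7$, never $0$.

```python
# n^2 + n + 1 mod 9 depends only on n mod 9, so checking 9 residues suffices;
# the longer range below is just an extra sanity check.
residues = {(n * n + n + 1) % 9 for n in range(9)}
assert residues == {1, 3, 4, 7}
assert all((n * n + n + 1) % 9 != 0 for n in range(10000))
```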
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
eigenvector transformation Say you have the following matrix A in $R^2 \rightarrow R^2$:
$
\begin{bmatrix}
7 & -10 \\
5 & -8
\end{bmatrix}
$
Thus the eigenvalues/eigenvectors are: 2 $\begin{bmatrix} 2 \\ 1 \end{bmatrix}$ and -3 $\begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
Thus the eigenspace matrix is $\begin{bmatrix} 2 & 1 \\ 1 & 1 \end{bmatrix}$.
Say you have the vector $v = (x, y) = (2,3)$; then $Av = (-16, -14)$.
I'm confused as to how does the eigenspace and eigenvalues allow me to easily see what A is doing to the vector (2,3)?
How do I apply the eigenvalues/eigenspace on vector v(2,3) to see what A is doing to it?
| You need to write $(2,3)$ as a linear combination of the eigenvectors.
In this case, $(2,3) = -(2,1) + 4 (1,1)$, so
$$A \begin{bmatrix}2 \\ 3 \end{bmatrix} = - A \begin{bmatrix}2 \\ 1 \end{bmatrix} + 4 A \begin{bmatrix}1\\ 1 \end{bmatrix}$$
and then you can use the fact that these are eigenvectors to easily complete the computation and understand how the "stretching" along the two "eigen-directions" describes the action of $A$ on $(2,3)$.
More generally, you should read about diagonalizing $A$.
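In concrete numbers, the eigen-decomposition of the action on $(2,3)$ looks like this (a plain-Python sketch):

```python
A = [[7, -10], [5, -8]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

v = [2, 3]
direct = matvec(A, v)                        # A applied directly

# v = -1*(2,1) + 4*(1,1), and A stretches the eigenvectors by 2 and -3:
c1, c2 = -1, 4
via_eigen = [c1 * 2 * 2 + c2 * (-3) * 1,     # first components
             c1 * 2 * 1 + c2 * (-3) * 1]     # second components

assert direct == [-16, -14]
assert via_eigen == direct
```

So $A$ sends $(2,3)$ to $(-16,-14)$ by scaling the $(2,1)$-component by $2$ and the $(1,1)$-component by $-3$.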
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
probability picking parts no replacement In a bin containing 30 parts, 27 parts are good and 3 parts are defective.
a) What is the probability that if you select 3 parts randomly, without replacing the parts in the bin, from the bin that you will have 1 defective part?
I thought of $\dfrac{\dbinom{27}{1}\dbinom{26}{1}\dbinom{3}{1}}{\dbinom{30}{1}\dbinom{29}{1}\dbinom{28}{1}}$
| Close, but as you seek the probability for obtaining $2$ from the $27$ good parts and $1$ from the $3$ defective parts, when selecting any $3$ from all $30$ parts (without replacement or bias), it is: $$\dfrac{\dbinom{27}{2}\dbinom{3}{1}}{\dbinom{30}{3}}$$.
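One can confirm the count both ways, unordered binomials and ordered sequential draws (a sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

# Unordered: choose 2 of the 27 good parts and 1 of the 3 defective,
# out of C(30,3) equally likely samples.
p_unordered = Fraction(comb(27, 2) * comb(3, 1), comb(30, 3))

# Ordered: 3 choices of position for the defective part, then sequential draws.
p_ordered = Fraction(3 * 3 * 27 * 26, 30 * 29 * 28)

assert p_unordered == p_ordered == Fraction(1053, 4060)
```

Both conventions give the same probability, $1053/4060 \approx 0.259$.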
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324731",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Help computing $\lim_{n\to \infty}\int_{0}^{1}f_{n}(x)\,dx$ We have the function $f(x)=2x(1-x)$ and we define $f^{\circ 2}=f\circ f$, and $f^{\circ n}=f\circ f^{\circ(n-1)}$ for $n>2$. We need to compute $$\lim_{n\to\infty}\int_{0}^{1}f_{n}(x)dx$$.
To do that, I found that $f^{\circ n}(x)\leq \frac{1}{2}$ for all $x\in[0,1]$ and for all $n$. Furthermore, I found that $$f(x)\leq f^{\circ 2}(x)\leq f^{\circ 3}(x)\leq \cdots \leq f^{\circ n}(x)\leq \frac{1}{2}.$$
With this in mind I obtained that $$\lim_{n\to \infty }\int_{0}^{1}f^{\circ n}(x)\,dx\leq \frac{1}{2}$$. In addition, I guess that the sequence of functions $\{f^{\circ n}(x)\}_{n\in \mathbf{N}}$ converges pointwise to the function $g(x)=1/2$ for $x\neq 0,1$ and $0$ for $x=0,1,$ but I couldn't prove it.
Please any suggestion for this problem would be really appreciated.
| You can verify that for all $x$,
$$f(x)-f\left(\frac 1 2\right) = f(x)-\frac 1 2 = -2\left(x-\frac 12\right)^2, \qquad \text{so} \qquad \left|f(x)-\frac 1 2\right| = 2\left|x-\frac 12\right|^2.$$
Iterating this identity gives $2\left|f_n(x)-\frac 12\right| = \left(2\left|x-\frac 12\right|\right)^{2^n}$, and since $2^n \geq n$ and $2\left|x-\frac 12\right| < 1$ for $x \in (0,1)$,
$$\left|f_n(x)-\frac 1 2\right|\leq 2^n\left|x-\frac 12\right|^n$$
So you can now see that for all $x\in (0,1)$, you have pointwise convergence to $\frac 12$.
The integral converges to $\int_0^1\frac 12dx=\frac 12$ by the dominated convergence theorem.
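A quick numerical sketch (the iteration depth and grid size are my choices) showing both the pointwise limit and the integral:

```python
def f(x):
    return 2 * x * (1 - x)

def f_iter(x, n):
    for _ in range(n):
        x = f(x)
    return x

# Pointwise: convergence to 1/2 on (0, 1) is doubly exponential.
assert abs(f_iter(0.3, 20) - 0.5) < 1e-12
assert abs(f_iter(0.0005, 20) - 0.5) < 1e-12   # even very close to the endpoints

# Midpoint Riemann sum of f_20 over [0, 1] is already essentially 1/2.
N = 1000
integral = sum(f_iter((i + 0.5) / N, 20) for i in range(N)) / N
assert abs(integral - 0.5) < 1e-9
```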
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Four touching circles and one common tangent
Given four touching circles and one common tangent, show that $$\angle BAD = \dfrac{1}{2}(\angle DO_1A + \angle AO_2B)$$
It is done by looking at triangles $\triangle ADO_1$ and $\triangle ABO_2$.
Now I am trying to prove that a circle can be circumscribed around $ABCD$.
I tried to show $\angle ADC + \angle DCB = 180 ^\circ$, thus $AD || CB$ but I messed up (my idea was to prove that $ABCD$ is an isosceles trapezium).
My second thought was to show $\angle DCB + \angle DAB = 180 ^\circ$, but I don't see how this can be done.
Would appreciate help of any kind.
| Hint: As you proved $$<BAD = {1\over 2}(<AO_2B+ <AO_1D)$$
similarly we have also: $$<BCD = {1\over 2}(<CO_3B+ <CO_4D)$$
so $$<BAD +<BCD = {1\over 2}(<AO_2B+ <AO_1D)+{1\over 2}(<CO_3B+ <CO_4D)$$ $$={1\over 2}\cdot 360^\circ = 180^\circ$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3324958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
10 + 67 × 2= ?, This question is causing a ruckus among my friends. Help!
How would you solve 10 + 67 × 2?
I solved by putting a bracket between 10 and 67. Then multiplying by 2
Like this (10 + 67)2 = 154
However, others insist it's 144 and I'm wrong.
| If they meant $(10 + 67) \times 2$, they should have written that. Otherwise one should assume the standard order of operations is meant. Go to Wolfram Alpha and put in 10 + 67 * 2. It will answer 144 because there is nothing to indicate the addition should be done first.
Another possibility is that your friend is trying to create the next viral math formula. I suggest your friend should go back to the drawing board.
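For what it's worth, programming languages implement the same precedence convention:

```python
# Multiplication binds tighter than addition unless parentheses say otherwise.
assert 10 + 67 * 2 == 144
assert (10 + 67) * 2 == 154
```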
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Combination patterns - how to calculate? I have a bag of 9 different colored balls, and I want to calculate the probabilities of all the patterns there can be. If I picked 4 balls (with replacement), I could get the following patterns:
* all 4 are the same color (eg RRRR)
* 3 are the same color, and 1 is different (RRRY)
* 2 are the same color, and the other two are the same color (RRYY)
* 2 are the same color, and the other two are different (RRYB)
* all 4 are different colors (RYBG)
I've actually produced and sorted all the possible combinations, so I know what my end numbers should be. I did that so I could try to understand how to get to the correct number, but I'm not having much luck, and that won't work once I start getting to larger groups (pulling 5, 6, 7, 8, or 9).
Calculating cat 1 is easy - it's always 9. There are 9 options for the first ball, and then 1 option (the matching color) for all of the rest.
Calculating the last category is also easy: 9*8*7*6*....
It's the intermediates that are causing me issues. I'm not understanding how to get there. For example, my count showed 216 possibilities where 3 balls were the same color and the other was a different color. To me, this seems like it would be 9*1*1*8, but that's definitely NOT 216.
How do I calculate these? Especially for larger groups, it will be vital that I understand this - I can't generate all 0.53/5/43/387 billion possible combinations.
| In the event that order matters:
* All balls same color: $9$
* Three balls same color, one ball different: $\binom{4}{3}\cdot 9\cdot \binom{1}{1}\cdot 8 = 4\cdot 9\cdot 8 = 288$
* Two balls same color, remaining two balls same color: $\binom{9}{2}\binom{4}{2}= 216$
* Two balls same color, remaining two balls different colors: $\binom{4}{2}\cdot 9\cdot 8\cdot 7 = 3024$
* All balls different colors: $9\cdot 8\cdot 7\cdot 6 = 3024$
Adding these together gives: $9+288+216+3024+3024 = 6561$ which happens to equal $9^4$, the total number of arrangements of four balls where order matters, just as we should have expected.
More details for the third case for instance where two balls are of the same color and remaining two balls are of a same different color.
We first pick which two colors appear simultaneously. This can be done in $\binom{9}{2}$ ways. Now that we see what colors were selected, there is an unambiguous color whose name comes first alphabetically. We pick which two spaces are occupied by that color ball. This can be done in $\binom{4}{2}$ ways. The remaining spaces are then occupied by balls of the other selected color. We get then a count of $\binom{9}{2}\binom{4}{2}=216$ ways.
Alternatively, we could pick the color of the first ball. We then pick which position among the remaining three matches in color. We then pick the remaining color for the remaining positions. This can be done in $9\cdot \binom{3}{2}\cdot 8=216$ ways which we see is equal to what we got before.
In response to comment.
9 colors 5 at a time:
* All balls same color: $9$
* Four balls same color, one ball different: $\binom{5}{4}\cdot 9\cdot 8 = 360$
* Three balls same color, remaining two balls same: $\binom{5}{3}\cdot 9\cdot 8 = 720$
* Three balls same color, remaining two balls different: $\binom{5}{3}\cdot 9\cdot 8\cdot 7 = 5040$
* Two balls same, two other balls same, one ball different: $\binom{9}{2}\binom{5}{2}\binom{3}{2}\cdot 7 = 7560$
* Two balls same, remaining all different: $9\cdot \binom{5}{2}\cdot 8\cdot 7\cdot 6 = 30240$
* All balls different: $9\cdot 8\cdot 7\cdot 6\cdot 5 = 15120$
Checking, we get $9+360+720+5040+7560+30240+15120=59049=9^5$, just as we expected.
Your error in the 3/2 case was something totally out of left field; I can't explain your thought process... adding? $\binom{3}{2}$? Where did those come from?
The other mistake was much more common. In the 2/2/1 case, you did "Pick two spaces for the first color" then "pick what the first color actually is" followed by "pick two spaces for the second color" followed by "pick what the second color actually is" finally followed by "pick the color for the last space."
This is incorrect and overcounts because you can use different sequences of answers to get the same result. We can't tell which was the "first" color and which was the "second" color after they have already been distributed and your counting makes the difference between "first" and "second" color somehow significant when it shouldn't have been.
For example:
"First two spaces": $\underline{\star}~\underline{\star}~\underline{~}~\underline{~}~\underline{~} \rightarrow$ "Red": $\underline{R}~\underline{R}~\underline{~}~\underline{~}~\underline{~}\rightarrow$ "Third and fourth spaces": $\underline{R}~\underline{R}~\underline{\star}~\underline{\star}~\underline{~}\rightarrow$ "Yellow": $\underline{R}~\underline{R}~\underline{Y}~\underline{Y}~\underline{~}\rightarrow$ "Blue": $\underline{R}~\underline{R}~\underline{Y}~\underline{Y}~\underline{B}$
gave the same result as:
"Third and fourth spaces": $\underline{~}~\underline{~}~\underline{\star}~\underline{\star}~\underline{~} \rightarrow$ "Yellow": $\underline{~}~\underline{~}~\underline{Y}~\underline{Y}~\underline{~}\rightarrow$ "First two spaces": $\underline{\star}~\underline{\star}~\underline{Y}~\underline{Y}~\underline{~}\rightarrow$ "Red": $\underline{R}~\underline{R}~\underline{Y}~\underline{Y}~\underline{~}\rightarrow$ "Blue": $\underline{R}~\underline{R}~\underline{Y}~\underline{Y}~\underline{B}$
Instead, select both colors used for two balls simultaneously, then for the earlier appearing alphabetically pick which spaces it uses.
$\binom{9}{2}$ to choose the colors, $\binom{5}{2}$ to pick the spaces for the alphabetically first selected color, then $\binom{3}{2}$ to pick the spaces for the remaining selected color, finally $7$ choices for the final color, giving $\binom{9}{2}\binom{5}{2}\binom{3}{2}\cdot 7 = 7560$ arrangements.
Alternatively, pick the location of the singleton color and what color it is. Then in the furthest left remaining available position pick a color for it. Pick one of the remaining positions to match the color. Finally pick a color for the remaining positions, giving a total of $5\cdot 9\cdot 8\cdot 3\cdot 7=7560$, same as before.
In both of these correct ways of counting, we make sure that there is an unambiguous single way of arriving at each outcome where we can't rearrange our answers in a way to give the same result like we could for yours. We accomplished that by noting that there is an unambiguous "first" color when ordering them alphabetically in the first method. We accomplished that by noting there is an unambiguous "furthest left" remaining available space in the second method.
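Since the bag is small, the full enumeration the asker mentioned is actually feasible for four draws, and it confirms every count above (a sketch):

```python
from collections import Counter
from itertools import product

# Classify every ordered draw of 4 balls from 9 colors by its
# color-multiplicity pattern, e.g. RRYB -> (2, 1, 1).
patterns = Counter()
for draw in product(range(9), repeat=4):
    signature = tuple(sorted(Counter(draw).values(), reverse=True))
    patterns[signature] += 1

assert patterns[(4,)] == 9
assert patterns[(3, 1)] == 288
assert patterns[(2, 2)] == 216
assert patterns[(2, 1, 1)] == 3024
assert patterns[(1, 1, 1, 1)] == 3024
assert sum(patterns.values()) == 9 ** 4
```

The same loop with `repeat=5` reproduces the seven counts for five draws; beyond that, the formulas are the only practical route, which is the point of the answer.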
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Continuous Bijection of Top Spaces It is well known that a continuous bijection of compact Hausdorff topological spaces is a homeomorphism. I am wondering, is it true that a continuous bijection of compactly generated spaces is a homeomorphism?
I am also interested in other generalizations of this theorem.
Thanks so much!
| Here is a negative result that precludes many generalizations you might consider (though it doesn't quite address your case where you require both the domain and the codomain to be compactly generated).
Theorem: Let $X$ be a regular Hausdorff space such that every continuous bijection $X\to Y$ to a Hausdorff space $Y$ is a homeomorphism. Then $X$ is compact.
Proof: Suppose $X$ is not compact; let $\mathcal{A}$ be a collection of nonempty closed subsets of $X$ which is closed under finite intersections but such that $\bigcap \mathcal{A}=\emptyset$. Fix a point $x\in X$ and let $Y$ be $X$ with the topology such that a set $U$ is open iff it is open in $X$ and, if $x\in U$, then $U$ contains an element of $\mathcal{A}$. Clearly the identity map $X\to Y$ is a continuous bijection. Since $x\not\in\bigcap \mathcal{A}$, there is some $A\in\mathcal{A}$ such that $x\not\in A$, and then $X\setminus A$ is open in $X$ but not in $Y$ (no $A'\in\mathcal{A}$ is contained in $X\setminus A$, since $A\cap A'\in\mathcal{A}$ is nonempty), so the identity map is not a homeomorphism.
It remains to be checked that $Y$ is Hausdorff. For points in $Y$ besides $x$ this is trivial, so we must just show that if $y\in Y$ is distinct from $x$ then $x$ and $y$ are separated by open sets. Since $y\not\in\bigcap \mathcal{A}$, there is some $A\in\mathcal{A}$ such that $y\not\in A$. By regularity, we can then separate $y$ and $A\cup\{x\}$ by open sets of $X$, and these will still be open in $Y$ since the one that contains $x$ also contains $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
solve the Lagrange multiplier equations the equations :
$$\left\{\begin{array}{l}{x+6 y+4 \lambda x=0} \\ {6 x+2 y+\lambda y=0} \\ {4 x^{2}+y^{2}-25=0}\end{array}\right.$$
I've done many transformations, but I still can't get the answer.
the last I did this:
$$\lambda=\left(6 \frac{x}{y}+2\right)=\left(\frac{1}{4}+\frac{3}{2} \frac{y}{x}\right),y=tx \\=\left(6 \cdot \frac{1}{t}+2\right)=\left(\frac{1}{4}+\frac{3}{2} t\right)\\t^{2}-\frac{7}{6} t-4=0$$
the quadratic formula's roots involve a square root, but the answer doesn't, so I think I might be wrong.
I don't know how to solve it
| Consider the first two equations, under this slightly different form:$$\left\{\begin{array}{l}(1+4\lambda)x+6y=0\\6x+(2+\lambda)y=0.\end{array}\right.$$Suppose that this homogeneous system has exactly one solution; then this solution is $(x,y)=(0,0)$, which is not a solution of the third one.
But\begin{align}\text{The system has more than one solution}&\iff\det\begin{bmatrix}1+4\lambda&6\\6&2+\lambda\end{bmatrix}=0\\&\iff\lambda=-\frac{17}4\vee\lambda=2.\end{align}So, deal only with the cases $\lambda=-\frac{17}4$ and $\lambda=2$. Can you do that?
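A sketch verifying the two values of $\lambda$ and, for $\lambda=2$, a full solution of the system (exact arithmetic; the expansion $\det = 4\lambda^2+9\lambda-34$ is my own simplification):

```python
from fractions import Fraction

def det(lam):
    """det [[1+4*lam, 6], [6, 2+lam]], which expands to 4*lam^2 + 9*lam - 34."""
    return (1 + 4 * lam) * (2 + lam) - 36

assert det(Fraction(2)) == 0 and det(Fraction(-17, 4)) == 0

# For lam = 2 the first equation becomes 9x + 6y = 0, so y = -3x/2;
# the constraint 4x^2 + y^2 = 25 then forces x = +-2.
x, lam = Fraction(2), Fraction(2)
y = -3 * x / 2
assert x + 6 * y + 4 * lam * x == 0
assert 6 * x + 2 * y + lam * y == 0
assert 4 * x ** 2 + y ** 2 - 25 == 0
```

The same substitution with $\lambda=-\tfrac{17}4$ gives $y=\tfrac{8x}3$ and $x=\pm\tfrac32$.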
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
solve the functional equation $f(x+t)-f(x-t)=4xt$ I think this question might be related to arbitrary functions, but I’m not sure. I also tried to set $t$ to different values but couldn’t get it to work.
I tried to set $t=x$ and end up with $f(2x)=f(0)+4x^2$, $f(x)=f(0)+x^2$.
| Set $x=t$ and we get $$f(2x)-f(0)= 4x^2$$ so $f(x) =x^2+a$ where $a= f(0)$.
Check: If we now put this in to starting equation we get: $$ (x+t)^2+a-(x-t)^2-a = 4xt$$ which is always true. So $\boxed{f(x) =x^2+a}$ for all real $a$.
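A quick randomized check of the boxed solution (the constant $a=3.5$ and the sample ranges are arbitrary choices):

```python
import random

def f(x, a=3.5):
    return x * x + a

random.seed(0)
for _ in range(100):
    x, t = random.uniform(-10, 10), random.uniform(-10, 10)
    # f(x+t) - f(x-t) = (x+t)^2 - (x-t)^2 = 4xt, independent of a
    assert abs(f(x + t) - f(x - t) - 4 * x * t) < 1e-8
```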
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Explicit description of quotient ring of $\mathbb{Z}[x]$ I am studying for a prelim and I stumbled on this problem:
Describe explicitly the elements in the quotient ring $\dfrac{\mathbb{Z}[x]}{(3,x^3-x+1)}$. First of all I don't see why the ideal $(3,x^3-x+1)$ is a maximal ideal in $\mathbb{Z}[x]$. If there is anyone who can help me with this will be greatly appreciated.
| You can solve this problem by a two step process. First, let $J = (3, x^3-x+1)$, and let $I = (3)$. These are ideals of $\mathbb Z[x]$ with $I \subset J$.
The third isomorphism theorem says that
$$\mathbb Z[x]/(3,x^3-x+1) = \mathbb Z[x]/J \cong \frac{\mathbb Z[x]/I}{J/I}$$
In other words, the ring you are looking for can be found by taking the ring $\mathbb Z[x]/I$ and modding out by an ideal therein.
Note that $\mathbb Z[x]/I \cong \mathbb F_3[x]$, where $\mathbb F_3$ is the field with three elements. Inside this ring, $J/I$ is just the ideal in $\mathbb F_3[x]$ generated by $x^3-x+1$.
The problem becomes to describe the elements of the quotient ring $\mathbb F_3[x]/(x^3-x+1)$. To do this, you should first determine whether or not $x^3-x+1$ is irreducible in $\mathbb F_3[x]$.
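A brute-force sketch (the coefficient-triple representation is my own) settles that last step: $x^3-x+1$ has no root mod $3$, hence is irreducible of degree $3$, so the quotient is a field whose $27$ elements are the cosets $a+bx+cx^2$ with $a,b,c\in\mathbb F_3$.

```python
from itertools import product

def polymul(p, q):
    """Multiply coefficient triples in F_3[x], then reduce via x^3 = x - 1."""
    full = [0] * 5
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            full[i + j] = (full[i + j] + a * b) % 3
    # reduce degrees 4 and 3 using x^3 = x - 1 (since x^3 - x + 1 = 0)
    for d in (4, 3):
        c, full[d] = full[d], 0
        full[d - 2] = (full[d - 2] + c) % 3   # + c * x^(d-2)
        full[d - 3] = (full[d - 3] - c) % 3   # - c * x^(d-3)
    return tuple(full[:3])

# No root in F_3, so the cubic is irreducible there.
assert all((t ** 3 - t + 1) % 3 != 0 for t in range(3))

elements = list(product(range(3), repeat=3))   # 27 cosets (c0, c1, c2)
one = (1, 0, 0)
nonzero = [e for e in elements if e != (0, 0, 0)]
# Every nonzero coset has a multiplicative inverse: the quotient is a field.
assert all(any(polymul(e, g) == one for g in nonzero) for e in nonzero)
```

So the quotient is the field with $27$ elements, $\mathbb F_{27}$.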
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Volume of a frustum given the bottom radius and the top cone height.
A cone with base radius 12 cm is sliced parallel to its base, as shown,
to remove a smaller cone of height 15 cm. If the height of the smaller
cone is three-fourths that of the original cone, what is the volume of the remaining frustum?
I set the frustum's top radius as x and its height as y. Using the cone volume formula, I have $$\frac{144\pi\cdot(y+15)}{4}=\frac{15x^2\pi}{3}$$$$\implies 36 y+540=5x^2$$
I am stuck here. How should I continue?
| Hint:
The smaller cone is the image of the larger cone under a homothety centred at the common vertex with ratio $3/4$. So the height $h$ of the smaller cone is $h=\frac34 H$ where $H$ is the height of the larger cone, its base area is $b=\frac{9}{16}B$ and its volume is $v=\frac{27}{64}V$. So the remaining frustum has volume
$$\mathcal V=V-v =\frac{37}{64}V.$$
Can you calculate the volume of the given cone?
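Putting in the numbers from the problem ($15=\tfrac34 H$ gives $H=20$), a quick sketch:

```python
import math

H, R = 20.0, 12.0                    # full cone: height 20, base radius 12
V = math.pi * R ** 2 * H / 3         # 960*pi
r, h = 0.75 * R, 15.0                # smaller cone: radius 9, height 15
v = math.pi * r ** 2 * h / 3         # 405*pi

assert abs(V - v - (37 / 64) * V) < 1e-9   # matches the homothety argument
assert abs(V - v - 555 * math.pi) < 1e-9   # frustum volume: 555*pi cm^3
```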
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does de morgans law apply to literals within brackets? This is the example I have, however, in the second line de morgans law does not turn the literals x,z, and y into not x,z,y. Why is this? Does deMorgans law not apply to those literals for some special reason?
≡ ∃x, y, ¬(¬(x < y) ∨ ∃z (x < z ∧ z < y)) (by p ⇒ q ≡ ¬p ∨ q)
≡ ∃x, y, ((x < y) ∧ ∀z (x ≥ z ∨ z ≥ y)) (by De Morgan’s law)
| In the language of my first order logic text, $x,y,z$ are individual variables, not propositional variables. They don't represent truth values, they are merely arguments for the predicate variables that do return truth values. Perhaps it's clearer to write the sentence without the infix inequality symbols:
$$\exists x,y,\neg(\neg Lxy\vee\exists z(Lxz\wedge Lzy))$$
From here, it's much clearer to see what De Morgan's laws should and shouldn't do.
$$\exists x,y,Lxy\wedge\neg\exists z(Lxz\wedge Lzy)$$
$$\exists x,y,Lxy\wedge\forall z\neg(Lxz\wedge Lzy)$$
$$\exists x,y,Lxy\wedge\forall z(\neg Lxz\vee\neg Lzy)$$
$$\exists x,y,Lxy\wedge\forall z(GExz\vee GEzy)$$
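One can check the whole chain of equivalences semantically on a small finite domain (the domain is my choice; the transformations are exact, so any domain works):

```python
from itertools import product

D = range(-3, 4)   # finite domain for x, y and the quantifier over z

def original(x, y):
    # not( not(x < y) or exists z (x < z and z < y) )
    return not ((not (x < y)) or any(x < z and z < y for z in D))

def transformed(x, y):
    # (x < y) and forall z (x >= z or z >= y)
    return (x < y) and all(x >= z or z >= y for z in D)

assert all(original(x, y) == transformed(x, y) for x, y in product(D, D))
```

Note that only the propositional connectives and quantifiers get negated; the individual variables $x,y,z$ are arguments to the predicates and are untouched, exactly as the answer explains.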
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove there exists $2011$ consecutive amazing integers Recently, I have found this problem:
We call a positive integer $n$ amazing if there exists positive integers $a, b, c$
such that the equality $$n = (b, c)(a, bc) + (c, a)(b, ca) + (a, b)(c, ab)$$
holds. Prove that there exists $2011$ consecutive positive integers which are amazing.
Here are some amazing numbers:
In the picture from left to right you find the numbers: $n$, $a$, $b$ and $c$.
I have tried to solve this problem in a lot of different ways, for example using the definition of $GCD$, or divisibility but I can't go on. Any idea?
Note:by $(m, n)$ we denote the greatest common divisor of positive integers $m$
and $n$.
| Note that if $n=d^2k$, with $d+2|k$, then with $c=d$, $b=\frac{dk}{d+2}$, then $a=bc$, $(a,b)(c,ab)=bc=d^2\frac{k}{d+2}$, $(b,c)(a,bc)=d^3\frac{k}{d+2}$, and $(c,a)(b,ac)=bc=d^2\frac{k}{d+2}$, so the sum is $n$, which is thus amazing.
So consider a sequence $\delta_1 \geq 6$ and $\delta_{i+1}=\prod_{k=1}^i{(\delta_k^2-1)}$, then, for all $1 \leq i < j$, $(\delta_i-1)^2(\delta_i+1)$ and $(\delta_j-1)^2(\delta_j+1)$ are coprime.
Define $d_i=\delta_i-1$, $P_i=d_i^2(d_i+2)$, where the $P_i$ are pairwise coprime.
By CRT there is some $n+1$ such that for all $1 \leq i \leq 2011$, $n+i$ is divisible by $P_i$, so is amazing.
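A sketch checking the opening construction ($n=d^2k$ with $d+2\mid k$ is amazing via $c=d$, $b=\frac{dk}{d+2}$, $a=bc$) for small parameters:

```python
from math import gcd

def amazing_sum(a, b, c):
    return (gcd(b, c) * gcd(a, b * c)
            + gcd(c, a) * gcd(b, c * a)
            + gcd(a, b) * gcd(c, a * b))

# For n = d^2 * k with (d+2) | k, take c = d, b = d*k/(d+2), a = b*c.
for d in range(1, 8):
    for m in range(1, 6):
        k = m * (d + 2)
        n = d * d * k
        c = d
        b = d * k // (d + 2)
        a = b * c
        assert amazing_sum(a, b, c) == n
```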
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Doubt in Hoffman and Kunze Section 5.2 (existence of determinant) I am trying to read Hoffman Kunze's book on linear algebra and I have a doubt in a particular result, (Theorem 1) of Section 5.2. Specifically, the theorem states:
Let $n > 1$ and let $D$ be an alternating $(n - 1)$-linear function on
$(n - 1)\times (n - 1)$ matrices over $K$. For each $j$, $1 \le j \le n$, the function $E_j$ defined by $$E_j(A) = \sum_{i=1}^n(-1)^{i+j}A_{ij}D_{ij}$$ is an alternating $n$-linear
function on $n \times n$ matrices $A$. If $D$ is a determinant function,
so is each $E_j$.
Here $D_{ij}=D[A(i|j)]$ where $A(i|j)$ denotes the matrix obtained by deleting the $i$th row and the $j$th column of $A$.
Now my question concerns the $n$-linear part. I understand why $D_{ij}$ is linear in every row except the $i$th row and that $D_{ij}$ is independent of the $i$th row. What I do not understand is why $D_{ij}$ is linear in the $i$th row.
For example, if $n=2$ and $D([a])=a$ then $$D_{11}\begin{pmatrix} a+a'& b+b'\\c & d\end{pmatrix}=d$$ while $$D_{11}\begin{pmatrix} a& b\\c & d\end{pmatrix}+D_{11}\begin{pmatrix} a'& b'\\c & d\end{pmatrix}=d+d=2d.$$
Yet the authors state $A_{ij}D_{ij}$ is $n$-linear.
| As you have observed, $D_{ij}(A)$ is linear in every row except the $i$th row, and $D_{ij}(A)$ is independent of row $i$. On the other hand, $A_{ij}$ is independent in every row except the $i$th row, and $A_{ij}$ is linear in row $i$. Thus, $A_{ij}D_{ij}(A)$ is linear in every row!
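For $n=3$ one can verify both claims of the theorem directly: each $E_j$ reproduces the determinant and is linear in every row (a sketch with 0-based indices, where $(-1)^{i+j}$ has the same parity as in the 1-based statement):

```python
def D(M):  # 2x2 determinant, playing the role of the alternating (n-1)-linear D
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def minor(A, i, j):
    """A(i|j): delete row i and column j of the 3x3 matrix A."""
    return [[A[r][c] for c in range(3) if c != j] for r in range(3) if r != i]

def E(A, j):
    return sum((-1) ** (i + j) * A[i][j] * D(minor(A, i, j)) for i in range(3))

def det3(A):
    return (A[0][0] * (A[1][1] * A[2][2] - A[1][2] * A[2][1])
            - A[0][1] * (A[1][0] * A[2][2] - A[1][2] * A[2][0])
            + A[0][2] * (A[1][0] * A[2][1] - A[1][1] * A[2][0]))

A = [[2, 5, 1], [7, 3, 4], [6, 8, 9]]
B = [[1, 1, 2], [7, 3, 4], [6, 8, 9]]   # A with a different first row
S = [[3, 6, 3], [7, 3, 4], [6, 8, 9]]   # first rows of A and B added

for j in range(3):
    assert E(A, j) == det3(A)            # every column j gives the determinant
    assert E(S, j) == E(A, j) + E(B, j)  # linearity in the first row
```

The second assertion works for any row: for the $i$th row, $A_{ij}$ supplies the linearity and $D_{ij}$ the independence, which is exactly the point of the answer.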
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3325965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does any manifold admit a complete distance metric?
Let $M$ be a smooth manifold. Is there a metric $d$ on $M$ compatible with the original topology such that $(M,d)$ is a complete metric space? Note that in this question I always mean distance metric, not a Riemannian metric.
My initial idea was to start with a cover of $M$ by charts $U_i$ diffeomorphic to $\mathbb{R}^n$ and then pull back the Euclidean distance. Denote it by $d_i$. Glue everything with a partition of unity $(a_i)$ subordinate to the cover $U_i$, in the sense of considering $d=\Sigma a_id_i $. But I don't know if it is a correct approach. I also don't know if the answer to my question is affirmative.
| The Hopf-Rinow Theorem states that a (connected) Riemannian manifold $(M, g)$ is complete if and only if the metric $d$ on $M$ induced by $g$ is complete. (Here, $d$ is the Riemannian distance determined by $g$: $$d(x, y) := \inf_\gamma \int_\gamma ds = \inf_\gamma \int_{t_0}^{t_1} \sqrt{g(\gamma'(t), \gamma'(t))} \,dt ,$$ where $\gamma$ varies over the paths from $x$ to $y$.) So, it suffices to show any manifold admits a complete Riemannian metric $g$.
In fact, something much stronger is true: For any Riemannian manifold $(M, g)$ there is some metric conformal to $g$---that is, a metric of the form $\lambda g$ where $\lambda$ is a smooth, positive function---that is geodesically complete (see this paper of Nomizu and Ozeki [pdf warning]). Thus, it suffices to show that any smooth manifold admits at least one Riemannian metric, but this is a standard exercise using partitions of unity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Inductively simplify specific Vandermonde determinant From Serge Lang's Linear Algebra:
Let $x_1$, $x_2$, $x_3$ be numbers. Show that:
$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2)$$
The matrix presented above seems to be the specific case of Vandermonde determinant:
$$
\begin{vmatrix}
1 & x_1 & \dots & x_1^{n-1}\\
1 & x_2 & \dots & x_2^{n-1}\\
\vdots & \vdots & & \vdots\\
1 & x_n & \dots & x_n^{n-1}
\end{vmatrix}=\prod_{1 \leq i < j \leq n}(x_j - x_i)
$$
I'm trying to prove the specific case to then generalize it for arbitrary Vandermonde matrices.
My incomplete "proof"
Since determinant is a multilinear alternating function, it can be seen that adding a scalar multiple of one column (resp. row) to other column (resp. row) does not change the value (I omitted the proof to avoid too much text).
Thus considering that $x_1$ is a scalar, we can multiply each column but the last one of our specific Vandermonde matrix by $x_1$ and then starting from right to left subtract $n-1$th column from $n$:
$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=\begin{vmatrix}
x_1 & 0 & 0 \\
x_1 & x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_1 & x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}$$
Then using the expansion rule along the first row (since all the elements in it but $x_1$ are zero):
$$... =x_1\begin{vmatrix}
x_2 - x_1 & x^{2}_2 - x^{2}_1\\
x_3 - x_1 & x^{2}_3 - x^{2}_1
\end{vmatrix}=(x_1x_2-x^2_1)(x^2_{3}-x^2_1)-(x^{2}_2x_1 - x^{3}_1)(x_3x_1 - x^2_1)$$
The first expansion seems interesting because it contains $x_2 - x_1$ and $x_3 - x_1$ (which are first two factors of specific Vandermonde matrix), but further expansion does not give satisfying results.
Question:
Is this a good, simple start for inductively proving the relation between the Vandermonde matrix and its factors? If so, what does it lack to reach the complete result? Did I make a mistake during the evaluation?
Thank you!
| The general proof is not difficult.
From the definition of a determinant (sum of products), the expansion must be a polynomial in $x_1,x_2,\cdots, x_n$ of degree $0+1+2+\cdots+(n-1)=\dfrac{(n-1)n}2$, and the coefficient of every term is $\pm1$.
On another hand, the determinant cancels whenever $x_j=x_k$, so that the polynomial must be a multiple of
$$(x_1-x_2)(x_1-x_3)(x_1-x_4)\cdots(x_1-x_n)\\
(x_2-x_3)(x_2-x_4)\cdots(x_2-x_n)\\
(x_3-x_4)\cdots(x_3-x_n)\\
\cdots\\
(x_{n-1}-x_n)$$ ($\dfrac{(n-1)n}2$ factors).
Hence the determinant has no other choice than being $\pm$ this product.
For the $3\times3$ case,
$$\begin{vmatrix} 1 & x_1 & x_1^2\\ 1 &x_2 & x_2^2\\ 1 & x_3 &
x_3^2 \end{vmatrix}=
\begin{vmatrix} 1 & x_1 & x_1^2\\ 0 &x_2-x_1 & x_2^2-x_1^2\\ 0 & x_3-x_1 &
x_3^2-x_1^2 \end{vmatrix}=\begin{vmatrix} x_2-x_1 & x_2^2-x_1^2\\ x_3-x_1 &
x_3^2-x_1^2 \end{vmatrix}=(x_2-x_1)(x_3-x_1)\begin{vmatrix} 1&x_2+x_1 \\1& x_3+x_1 \end{vmatrix}=(x_2-x_1)(x_3-x_1)(x_3-x_2).$$
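A quick numerical sanity check of the $3\times3$ identity (added for illustration, not part of the original answer):

```python
import random

def vandermonde_det(x1, x2, x3):
    # Cofactor expansion of the 3x3 determinant |1 x_i x_i^2| along the first column.
    return ((x2 * x3**2 - x3 * x2**2)
            - (x1 * x3**2 - x3 * x1**2)
            + (x1 * x2**2 - x2 * x1**2))

def product_formula(x1, x2, x3):
    # The factorisation derived in the answer.
    return (x2 - x1) * (x3 - x1) * (x3 - x2)

random.seed(0)
samples = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(100)]
max_gap = max(abs(vandermonde_det(*s) - product_formula(*s)) for s in samples)
```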
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Find value of $(\cos\frac{2\pi}{7})^ {\frac{1}{3}} + (\cos\frac{4\pi}{7})^ {\frac{1}{3}} + (\cos\frac{8\pi}{7})^ {\frac{1}{3}} $ This question was on my list. I was trying to apply the $n$-th roots of unity, but other ideas are welcome. I also tried Newton's sums, but it's not working.
I searched around here and I didn't find a similar one, but if they do, just say that I delete the topic.
| This cubic-root sum, which has the inscrutable value $\sqrt[3]{(5-3\sqrt[3]{7})/2}$, was discovered over a hundred years ago. Outlined below is an elementary evaluation of the sum.
Note that $\cos\frac{2\pi}7$, $\cos\frac{4\pi}7 $ and $\cos\frac{8\pi}7 $ are the roots of
$$x^3+\frac{1}{2}x^2-\frac{1}{2}x-\frac{1}{8}=0$$
With the short-hands,
$$c_1=\left(\cos\frac{2\pi}{7}\right)^{\frac{1}{3}},\space \space \space c_2=\left(\cos\frac{4\pi}{7}\right)^{\frac{1}{3}},\space \space \space c_3=\left(\cos\frac{8\pi}{7}\right)^{\frac{1}{3}}
$$
we have
$$c_1^3+c_2^3+c_3^3 = -\frac{1}{2},\>\>\>
c_1^3c_2^3+c_2^3c_3^3+c_3^3c_1^3 = -\frac{1}{2},\>\>\>
c_1^3c_2^3c_3^3 = \frac{1}{8}\tag{1}$$
Let
$$A=c_1+c_2+c_3, \space\space\space B=c_1c_2+c_2c_3+c_3c_1$$
and evaluate
$$A^3 = c_1^3+c_2^3+c_3^3 +3AB-3c_1c_2c_3,$$
$$B^3= c_1^3c_2^3+c_2^3c_3^3+c_3^3c_1^3+3c_1c_2c_3 AB-3(c_1c_2c_3)^2
$$
Plugging (1) into above expressions to get
$$A^3=-2+3AB,\space\space\space B^3=-\frac{5}{4}+\frac{3}{2}AB\tag{2}$$
Next, evaluate
$$(2AB-3)^3=8A^3B^3-36A^2B^2+54AB-27$$
and use the results (2) to get the equation satisfied by $A$,
$$\left( \frac{2A^3-5}{3}\right)^3 =-7\tag{3}$$
Finally, solve (3) and we obtain the sum,
$$A = \left(\cos\frac{2\pi}{7}\right)^{\frac{1}{3}} +\left(\cos\frac{4\pi}{7}\right)^{\frac{1}{3}}+\left(\cos\frac{8\pi}{7}\right)^{\frac{1}{3}} =
\left[ \frac{1}{2}\left(5-3\cdot 7^{\frac{1}{3}}\right) \right]^{\frac{1}{3}}$$
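As an added numerical check (taking real cube roots, since two of the cosines are negative):

```python
import math

def real_cbrt(x):
    # Real cube root, valid for negative arguments as well.
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

A = sum(real_cbrt(math.cos(k * math.pi / 7)) for k in (2, 4, 8))
closed_form = real_cbrt((5 - 3 * 7 ** (1.0 / 3.0)) / 2)
```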
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculate how many ways you can give $7$ children $7$ identical candies My try:
Calculate how many ways you can give $7$ children $7$ identical candies if each child gets at most $2$ candies.
$$x_1+x_2+x_3+x_4+x_5+x_6+x_7=7 \text{ for } x_i \in \left\{ 0,1,2\right\}$$
$$[t^7](1+t+t^2)^7=[t^7]\left(\frac{1-t^3}{1-t}\right)^7=[t^7](1-t^3)^7 \sum_{n\geq 0} {n+6 \choose 6}t^n$$
$$\begin{array}{|c|c|c|}
\hline
\text{first parenthesis} & \text{ways in the first} & \text{ways in the second }\\ \hline
1 & 1 & { 13 \choose 6} \\ \hline
t^3 & { 7 \choose 1} & { 10 \choose 6} \\ \hline
t^6 & { 7 \choose 2} & { 7 \choose 6}\\ \hline
\end{array}$$
Sollution:$${ 7 \choose 2}{ 7 \choose 6}+{ 7 \choose 1}{ 10 \choose 6}+{ 13 \choose 6}=3333$$
But I checked it in Mathematica and I get $393$. So can you check where the error is?
| As was commented, the only issue is the sign of the middle term must be negative. To see it, express the first binomial as a sum too:
$$[t^7](1-t^3)^7 \sum {n+6 \choose 6}t^n=[t^7]\sum_{k=0}^7{7\choose k}(-t^3)^k \sum_{n=0}^{\infty} {n+6 \choose 6}t^n=\\
\underbrace{{7\choose 0}{13\choose 6}}_{k=0,n=7}-\underbrace{{7\choose 1}{10\choose 6}}_{k=1,n=4}+\underbrace{{7\choose 2}{7\choose 6}}_{k=2,n=1}=393.$$
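The corrected value $393$ is easy to confirm by brute force (an added check, not part of the original answer):

```python
from itertools import product

# Brute-force count of solutions of x1 + ... + x7 = 7 with each xi in {0, 1, 2}.
count = sum(1 for xs in product(range(3), repeat=7) if sum(xs) == 7)
```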
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Investigating Continuity of Dirichlet and related functions: An $\epsilon-\delta$ approach I have trouble proving discontinuity of the Dirichlet function, using the $\epsilon-\delta$ approach.
The function is defined as follows:
$$ f(x) = \left\{\begin{array}{l l} 1 &\text{if }x \in \mathbb{Q} \\ 0 & \text{if } x \notin \mathbb{Q}
\end{array} \right. $$
Would it do any good showing the discontinuity at some $x_0$, by bifurcating the problem into two cases, one where $x_0$ is rational and one where it isn't?
The above function isn't continuous anywhere, but let's look at one that is continuous at only one point in its entire domain:
$$ f(x) = \left\{\begin{array}{l l} 0 &\text{if }x \in \mathbb{Q} \\ x & \text{if } x \notin \mathbb{Q}
\end{array} \right. $$
I can see that this function is continuous at 0 alone, but once again, not able to show it rigorously by picking an appropriate $\epsilon$.
How should I go about investigating the continuity of such "weird" functions, using $\epsilon-\delta$ arguments? I'm sure there are many more to add to my troubles, such as Thomae's function, for instance. I'm really more concerned with the approach than with the solution, though it'd be great if someone could help me figure out proper proofs for the above functions, so I can at least get started from where I ended up getting stuck (all the functions look pretty similar in that sense, and knowing how to work with $\epsilon-\delta$ with even one should help me figure out the rest)
Please help me out, I'm pretty new to real analysis! Thanks a lot in advance!
| For the Dirichlet function, do you understand that both $\mathbb{Q}$ and $\mathbb{R}\setminus\mathbb{Q}$ are dense in $\mathbb{R}$? If so, you will find that the function is discontinuous at every point by the $\epsilon-\delta$ approach. For the second function, pick any $\epsilon>0$ and take $\delta=\epsilon$; then for $x\in(-\delta,\delta)$, if $x$ is rational, $|f(x)-0|=0<\epsilon$, and if $x$ is irrational, $|f(x)-0| = |x|<\epsilon$. Hence, by this approach, the function is continuous at $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
10-digit numbers with constraints
How many 10-digit numbers can be made by using the digits {5,6,7} (all of them) and with the additional constraints that no two consecutive digits must be the same and also that the first and last digits of the number must be the same?
I am trying to find a solution by using combinatorics. I start from the 1st leftmost digit, which can have any value from {5,6,7} (3 possibilities).
Then we move to the 2nd digit, which can have 2 values (since it can't be the same with the 1st) and so on, and for the last digit we only have 1 option. But this is not correct, because for the 9th digit we have the restriction that it must be different from the 8th and also different from the 10th, which, in turn, is equal to the 1st.
I don't know how to express this.
I therefore tried to find a recursive relation.
I found that the general relation is $a(n) = 2\,a(n-1)$ if $n$ is odd and $a(n) = 2\,a(n-1) + 6$ if $n$ is even.
For $n=4$, we have $6$ such numbers ($4$-digit numbers, but with the given restrictions). Then if we add one more digit to the right, we remove the rightmost ($4$th) digit, which had to be the same as the first, and now for the $3$rd digit we have $2$ options instead of $1$ (we can also add the options that were rejected because they neighbored the $4$th digit). So in total we now have $2 \times 6 = 12$ options.
Therefore, $a(4)=6$ and $a(5)=12$.
I don't understand, however, where this $+6$ (in the recursive relation) comes from!
By the way, the correct answer is 510.
Many thanks in anticipation.
| I have another solution. We call a number satisfying the condition of the problem "accepted". Let $a(n)$ denote the number of accepted numbers with $n$ digits. Imagine an accepted number with $n-2$ digits and put two empty places at the left side of the number with $n-2$ digits. You can easily make two different accepted numbers with $n$ digits. To make this claim clear, assume $1,3,...,1$ is an accepted number with $n-2$ digits, we can build two $n$-digit numbers as below: $$1,2,1,3,...,1$$ $$1,3,1,3,...,1$$
Now, assume we have an accepted number with $n-1$ digits. We can make an accepted number with $n$ digits from this number. For example, assume $1,3,...,1$ is such number. we can write:$$1,2,3,...,1$$
Just notice that we added $1$ to the left side and the first $1$ in$1,3,...,1$ changed into $2$.
Now, assume $n$ is even. An accepted number can be made from $1,2,1,2,1,2,...,1,2$ with $n-2$ digits, as below:$$2,3,1,2,...,1,2$$
It is easy to understand that $1,2,1,2,1,2,...,1,2$ doesn't belong to the set of accepted numbers with $n-2$ digits.
If $n$ is odd, the same goes for it, consider $1,2,1,2,...,1,2$ with $n-1$ digits. It's possible to make an accepted number with $n$ digits from this as below:$$2,3,2, 1,2,1,2,...,1,2$$
In each situation there are 6 accepted numbers (why?) with $n$ digits which can't be made from accepted numbers with $n-1$ or $n-2$ digits. So, we will get $a(n)=a(n-1)+2a(n-2)+6$.
To be done, one should check two statements below:
*
*A $n$-digit accepted number made from $n-2$-digit accepted numbers doesn't coincide with any $n$-digit accepted numbers made from $n-1$-digit accepted numbers and vice versa.
*Each $n$-digit accepted number can be obtained through either $n-2$-digit accepted numbers, $n-1$-digit accepted numbers or those six numbers.
Note: What I wrote implies the formula you mentioned.
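Both the recursion and the final answer $510$ can be confirmed by brute force over all $3^{10}$ strings (an added check, not part of the original answer):

```python
from itertools import product

def accepted(s):
    # All three digits used, no equal neighbours, first digit equals last digit.
    return (set(s) == {5, 6, 7}
            and all(a != b for a, b in zip(s, s[1:]))
            and s[0] == s[-1])

count = sum(1 for s in product((5, 6, 7), repeat=10) if accepted(s))
```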
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Understand the rank of $ \begin{bmatrix} A&b\\ b^{*}&0 \end{bmatrix}$
Let $A \in M_n(\mathbb{C})$ and let $b$ be a column vector of $n$ complex components. Denote $\widetilde A = \begin{bmatrix}
A&b\\
b^{*}&0
\end{bmatrix} $ If $rank(\widetilde A)=rank(A)$, which of the following is true?
(a) $Ax = b$ has infinitely many solutions.
(b) $Ax = b$ has a unique solution.
(c) $\widetilde Ax = 0$ has only solution x = 0.
(d) $\widetilde Ax = 0$ has nonzero solutions.
Zhang, Fuzhen. Linear Algebra
I am assuming that $b^*$ is the conjugate transpose. Could you help me construct an example of $A$ and $\widetilde A$ such that $rank(\widetilde A)=rank(A)$? I have a hard time doing that without assuming that $b=0$.
| Hint: If both matrices have the same rank (column rank = row rank), then $b$ lies in the column space of $A$ and so there is a linear combination of the columns of $A$ which gives $b$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3326868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to choose k in Poisson distribution?
Problem: In a factory, the probability of a screw being defective is $p = 0.015$. What is the probability that a box of 100 screws does not contain a defective one?
One way to answer this problem is to use the Poisson formula the following way:
a) $P (\textrm{There are no defective screws in a box of 100}) = e^{-\lambda} \cdot \frac{\lambda^k}{k!}$,
where $\lambda = n\cdot p = 100 \cdot 0.015 = 1.5$,
k = 0.
Hence, $e^{-1.5} \cdot \frac{1.5^0}{0!} = e^{-1.5} \cdot (1 / 1) = 0.2231$, which is the correct answer.
I am confused, however, why this problem cannot be calculated in the reverse way, by focusing on the probability that all 100 screws are good (which should be the same that none of them are defective). This is what I mean:
b) $P (\textrm{All screws are good in a box of 100}) =e^{-\lambda} \cdot \frac{\lambda^{k}}{k!}$,
where $\lambda = n\cdot p = 100 \cdot (1-0.015) = 100 \cdot 0.985 = 98.5,
k = 100.$
Hence, $e^{-98.5} \cdot \frac{98.5^{100}}{ 100!} = \frac{3.678\cdot 10^{156}}{100!} = 0.039$, which is clearly not the same as a).
Is there any stipulation in the Poisson formula whether $k$ stands for success or failure? If yes, then how does one know?
(The exercise is from Feller: An Introduction to probability theory and its applications, Vol.1, p- 155/d.)
|
"The Poisson distribution can be applied to systems with a large
number of possible events, each of which is rare"
wiki, Poisson distribution
This is not the case if $p=0.985$. Basically, the random variable is binomially distributed. This variable can be approximated by the Poisson distribution if $n$ is large and $p$ is small.
If we use the binomial distribution we obtain $P(X=100)=0.985^{100}\approx 22\%$
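Numerically, the exact binomial answer and the Poisson approximation are indeed close (an added check, not part of the original answer):

```python
import math

p_exact = 0.985 ** 100      # exact binomial probability of zero defectives
p_poisson = math.exp(-1.5)  # Poisson approximation with lambda = 100 * 0.015
gap = abs(p_exact - p_poisson)
```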
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Angles between vectors of center of two incircles I have two incircles between a rectangle and two quadrilaterals, as in the figure. Is it possible to determine the exact value of $\phi$, the angle between the vectors to the centers of the two circles?
| There are constraints that $A$ and $B$ must fulfill, namely the following system of equations coming from the Pythagorean theorem applied to certain right triangles:
$$\begin{cases}(B-2)^2+2^2=(A/2)^2\\(B-1)^2+(A/2)^2=(B+1)^2\end{cases}$$
giving $A=8 \sqrt{2}$ and $B=8$.
If now we take equations as in the partial solution you gave (a good idea) :
$$\tan(\theta) = \dfrac{2}{A/2} = \dfrac{\sqrt{2}}{4}\tag{1}$$
and
$$\tan(\phi + \theta) = \dfrac{B-1}{A/2} = \dfrac{7}{4 \sqrt{2}}\tag{2}$$
Equation (2) can also be written :
$$\dfrac{\tan(\phi) + \tan(\theta)}{1-\tan(\phi)\tan(\theta)} = \dfrac{7\sqrt{2}}{8}\tag{3}$$
Let $T:=\tan(\phi)$. (3) is equivalent to :
$$\dfrac{T + \sqrt{2}/4}{1-T\sqrt{2}/4} = \dfrac{7\sqrt{2}}{8}\tag{4}$$
giving $T=\dfrac{10 \sqrt{2}}{23}$. Therefore
$\phi=\arctan(\dfrac{10 \sqrt{2}}{23})\approx 31.5863 \ \text{degrees}.$
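With $A=8\sqrt2$ and $B=8$, these relations can be verified numerically (an added check, not part of the original answer):

```python
import math

A, B = 8 * math.sqrt(2), 8.0
theta = math.atan(2 / (A / 2))             # tan(theta) = 2/(A/2) = sqrt(2)/4
phi = math.atan(10 * math.sqrt(2) / 23)    # the value derived above
lhs = math.tan(phi + theta)                # should equal (B-1)/(A/2) = 7/(4*sqrt(2))
phi_deg = math.degrees(phi)
```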
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
An entire function satisfies $f(az+b)=f(z)$ Here is a problem that I got stuck on while preparing for an upcoming exam:
If $a,b\in \mathbb{C}$ and $f:\mathbb{C}\to\mathbb{C}$ is non-constant and entire with $f(az+b)=f(z)$ for all $z\in \mathbb{C}$, prove that there exists a positive integer $n$ such that $a^n=1$.
I proved the first part of the problem which is the same thing but with $b=0$. I proved this by breaking into the three cases of $|a|<1$, $|a|=1$ and $|a|>1$. The first and last case, I got a contradiction that $f$ is constant (by analytic continuation and Liouville theorem respectively). I am not sure however, how to do it with $b\neq 0$. I would really appreciate a hint.
| Hint: if $a\neq 1$ then $g(z):=f\big(z+b/(1-a)\big)$ satisfies $g(az)=g(z)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Problem with $x^{6} - 2 = 0$ compute roots in $\mathbb{C}$
I have a problem with the simple equation $x^{6} - 2 = 0$: compute the roots in $\mathbb{C}$.
I tried to compute the roots via $x^{6} - 2 = (x^{3}-\sqrt{2})(x^{3}+\sqrt{2})=(x-2^{1/6})(x^{2}+2^{1/6}x+2^{1/3})(x^{3}+\sqrt{2})$, but this does not look promising.
Maybe there is a better solution.
Any suggestions?
| As commented above, you used $a^3-b^3=(a-b)(a^2+ab+b^2)$. You can also use $a^3+b^3=(a+b)(a^2-ab+b^2)$ and easily solve the quadratic equations:
$$x^{6} - 2 = (x^{3}-\sqrt{2})(x^{3}+\sqrt{2})=\color{red}{(x-2^{1/6})}\color{green}{(x^{2}+2^{1/6}x+2^{1/3})}\color{blue}{(x+2^{1/6})}\color{purple}{(x^{2}-2^{1/6}x+2^{1/3})} \Rightarrow \\
\color{red}{x_1=2^{1/6}},\color{blue}{x_2=-2^{1/6}},\\
\color{green}{x^2+2^{1/6}x+2^{1/3}}=0 \Rightarrow x_{3,4}=\frac{-2^{1/6}\pm \sqrt{-3\cdot 2^{1/3}}}{2}=\frac{-2^{1/6}\pm 2^{1/6}\sqrt{3}i}{2},\\
\color{purple}{x^2-2^{1/6}x+2^{1/3}}=0 \Rightarrow x_{5,6}=\frac{2^{1/6}\pm \sqrt{-3\cdot 2^{1/3}}}{2}=\frac{2^{1/6}\pm 2^{1/6}\sqrt{3}i}{2}.
$$
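All six roots can be verified directly (an added check, not part of the original answer):

```python
import math

r = 2 ** (1 / 6)
s = r * math.sqrt(3) / 2
# The six roots of x^6 = 2 listed in the answer.
roots = [r, -r,
         complex(-r / 2, s), complex(-r / 2, -s),
         complex(r / 2, s), complex(r / 2, -s)]
residuals = [abs(x ** 6 - 2) for x in roots]
```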
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 5
} |
Stabiliser of a Subset of Center I have no clue for the following problem:
Let $G$ be a finite group, $p$ a prime number, $S$ a Sylow $p$ subgroup of $G$. Let $N$ be the normalizer of $S$ inside $G$. Let $X, Y$ two subsets of $Z(S)$ (center of $S$) such that $\exists g \in G, gXg^{-1}= Y$. Then we need to show that $\exists n \in N$ such that $gxg^{-1} = nxn^{-1}, \forall x \in X$.
So I guess first I can assume $X, Y$ to be subgroups by taking the smallest subgroup containing them. Then I have no clue.
| Because $X$ is contained in $Z(S)$, it follows that $N_G(X)$ contains $S$. That means that $N_G(Y) = N_G({}^gX)$ must contain ${}^gS$. But it also contains $S$, since $Y$ is central in $S$.
Now, notice that both $S$ and ${}^gS$ are Sylow $p$-subgroups of $N_G(Y)$. Can you take it from there?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Line Graph Doubt The line graph $L(G)$ of a simple graph $G$ is defined as follows:
There is exactly one vertex $v(e)$ in $L(G)$ for each edge $e$ in $G$.
For any two edges $e$ and $e'$ in $G$, $L(G)$ has an edge between $v(e)$ and $v(e')$, if and only if $e$ and $e'$ are incident with the same vertex in $G$.
Which of the following statements is/are TRUE?
*
*The line graph of a cycle is a cycle.
*The line graph of a clique is a clique.
*The line graph of a planar graph is planar.
*The line graph of a tree is a tree.
I have already done the following:
*
*The line graph of a cycle is a cycle.
*[See below.]
*The line graph of a planar graph is planar. Disproved by counter-example: let $G$ be a maximal planar graph with $5$ vertices and $9$ edges; its degree sequence must be $4,4,4,3,3$, so $L(G)$ has $9$ vertices and $\sum_v \binom{d(v)}{2} = 24$ edges. Therefore $|E|\leq 3\cdot|V|-6 = 21$ is violated and $L(G)$ is not planar.
*The line graph of a tree is a tree. Disproved by counter-example: take a tree whose root has one child $A$, where $A$ has two children $B$ and $C$. Drawing its line graph according to the rules given in the question yields a cycle graph on $3$ vertices.
My doubt is that I can't figure out 2. The line graph of a clique is a clique. Please help me out here.
| HINT: Let $G$ be a clique on the $n$ vertices $\{x_1,x_2,\ldots, x_n\}$ for $n \ge 4$. Then edges $e_1 = x_1x_2$ and $e_2=x_3x_4$ do not share a vertex. So are $v(e_1)$ and $v(e_2)$ adjacent to each other in $L(G)$?
If you want to look at this another way, $L(K_n)$ has $\frac{n(n-1)}{2}$ vertices. But for each edge $e \in K_n$, the vertex $v(e)$ has degree only $2(n-2)$ [make sure you see why]. Note that $\frac{n(n-1)}{2} - 1 \gg 2(n-2)$ for $n > 6$.
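The degree argument can be checked concretely for $K_5$, building the line graph from scratch (an added sketch, not part of the original answer):

```python
from itertools import combinations

n = 5
edges = list(combinations(range(n), 2))   # the 10 edges of K_5
# v(e) and v(f) are adjacent in L(K_5) iff the edges e and f share an endpoint.
degrees = {e: sum(1 for f in edges if f != e and set(e) & set(f)) for e in edges}
is_clique = all(d == len(edges) - 1 for d in degrees.values())
```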
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are (∀x∈A)(∃y∈B)(x≤y) and (∃y∈B)(∀x∈A)(x≤y) the same? Do the statements $$(∀x∈A)(∃y∈B)(x≤y)$$ and $$(∃y∈B)(∀x∈A)(x≤y)$$ mean the same, even though the first two brackets are reversed?
P.S.: Say I have the sentence: "There is no number in $A$ that is bigger than all numbers in $B$."
| As Scientifica mentions in their answer, translating the logical statement to a sentence helps.
If you can simplify the sentence towards more natural language, do so.
Moreover, it is often instructive to consider a special case.
And even better, combine the two and write sentences about a special case.
If $A=B=\mathbb N$, then the statements are:
*
*$(∀x∈A)(∃y∈B)(x≤y)$ — "for any natural number, there is a number that is at least as big"
*$(∃y∈B)(∀x∈A)(x≤y)$ — "there is a biggest natural number"
Written this way, the problem is pretty easy.
And that is often the hard part in math: rewriting a problem so that it becomes easy.
This should also help you get a flavor of what might happen with more general sets $A$ and $B$.
Once you have a feel for the phenomenon from an example, more general cases are easier to make sense of.
If you have trouble figuring out why these are correct translations, let me know and I can give details.
You gave this sentence: "There is no number from A, so it would be bigger than all numbers from B."
This is literally $\neg(∃x∈A)(∀y∈B)(x>y)$.
Using the basic rules of negations and quantifiers, this can be seen to be equivalent with $(∀x∈A)(∃y∈B)(x≤y)$.
Indeed, only the first symbolic logical statement expresses the idea you have written in words.
If you end up with two candidates and are unsure like in this question, spelling them both out in natural language helps.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Find the difference of the shaded areas from two overlapping squares I have a question from a school problem-solving homework sheet.
Here is the diagram of the problem:
Question: Two squares, A and B of the side lengths 6 cm and 5 cm respectively, overlap each other partially. Find the difference of the two shaded areas.
| The difference of the two shaded areas is just the area difference of the two squares, i.e. $36 - 25 = 11\space \text{cm}^2$. The overlapping area simply cancels out in the difference.
An obvious case is where the smaller square is right inside the larger one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Name of a property in Graph Theory A multigraph is a graph which allows for more than one edge between a pair of nodes in a graph. What would be the name of a graph which allows for more than one type of node? For example, buyers and sellers. I've heard multimodal, but I don't know if that is correct.
| I believe what you are describing is referred to in network science as a heterogeneous graph. A heterogeneous graph is a graph $G = (V, E)$ such that the vertex set $V$ is a disjoint union $V = V_1 \cup V_2 \cup \dots \cup V_n$ where each $V_i$ denotes some set of vertices that share a common label. If $G$ is a directed graph then $E$ is the disjoint union of $n^2$ edge sets, where each edge set is denoted by $E_{(i, j)}$ and consists of edges from vertices in $V_i$ to vertices in $V_j$. When $G$ is undirected we have $E_{(i, j)} = E_{(j, i)}$ and there are only $\binom{n}{2} + n$ edge sets.
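A minimal data-structure sketch of this definition (the labels `buyer` and `seller` and all names here are illustrative assumptions, not from the original answer):

```python
from itertools import product

# Labelled vertex parts: the vertex set is a disjoint union of these.
V = {"buyer": {"b1", "b2"}, "seller": {"s1"}}

# One edge set per ordered pair of labels (directed case): n^2 sets for n labels.
E = {pair: set() for pair in product(V, repeat=2)}
E[("buyer", "seller")].update({("b1", "s1"), ("b2", "s1")})
```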
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3327971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability of knowing the source of a sound after hearing it I was given this problem by my professor and I can't figure out a reasonable answer, though I know it should be easy.
Suppose you hear a sound on your roof at midnight. It could have been a thief or an animal, respectively with probability
$$ P(S \mid X=\text{thief})= 0.8, \quad P(S \mid X=\text{animal})=0.3 $$
We know that $P(X=\text{thief})=0.001$.
We want to find the probability of the sound to have been caused by a thief after hearing that sound, so it should be $P(X=\text{thief}\mid S)$ (I think).
No other clue is given, I have tried to solve it via Bayes' Theorem, putting
$$P(X=\text{thief}\mid S)= \frac{P(S\mid X=\text{thief})\cdot P(X=\text{thief})}{P(S)}$$
but this is assuming $P(S)=1$, because you did hear a sound. Moreover, this would mean that the result will be $0.8\cdot 0.001= 0.008$, which seems counter-intuitive. The text of the problem is unclear, I know, but it's all I've got.
| You are correct up to guessing $P(S)=1$. The probability of hearing a sound is given by the probability of hearing a sound and $X$ being a thief added to the probability of hearing a sound and $X$ being an animal. Hence we have
$$P(S)=0.001\times0.8+(1-0.001)\times0.3=0.3005$$
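Completing the computation with Bayes' theorem numerically (the posterior value is added here for illustration):

```python
p_thief = 0.001
p_s_given_thief = 0.8
p_s_given_animal = 0.3

# Law of total probability, then Bayes' theorem.
p_s = p_thief * p_s_given_thief + (1 - p_thief) * p_s_given_animal
posterior = p_thief * p_s_given_thief / p_s
```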
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$2\pi$-periodic $L^2$ functions on $R^1$ approximated by its Fourier series I'm reading section 4.26 in Big Rudin, but I have two questions.
Suppose $f$ is in $L^1(T)$. Here $L^p(T)$ denotes the class of all complex, $2\pi$-periodic, Lebesgue measurable functions on $R^1$ for which the norm $$||f||_p=\left\{\frac{1}{2\pi}\int_{-\pi}^{\pi} |f(t)|^p \,dt\right\}^{1/p}$$ is finite.
For any $f\in L^1(T)$, the Fourier coefficients of $f$ are defined by the formula $$\hat{f}(n)=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)e^{-int} \,dt$$ where $n$ is an integer.
The Fourier series of $f$ is $$\sum_{-\infty}^{\infty} \hat{f}(n)e^{int},$$
and its partial sums are $$s_N=\sum_{n=-N}^{N} \hat{f}(n)e^{int}$$ where $N$ is a natural number (including 0).
It is a fact (the Parseval theorem) that for any $f, g \in L^2(T)$, $$ \sum_{n=-\infty}^{\infty} \hat{f}(n)\overline{\hat{g}(n)}=\frac{1}{2\pi}\int_{-\pi}^{\pi} f(t)\overline{g(t)}\,dt$$ holds.
Next, Rudin says $$\lim_{N \to \infty}||f-s_N||_2=0$$
since a special case of the Parseval theorem yields $${||f-s_N||_2}^2=\sum_{|n|>N}|\hat{f}(n)|^2.$$
I have questions:
1) how did Rudin derive the last identity from the Parseval theorem? Is this equation obtained by putting $f(t)=g(t)=|f-s_N|$? (but I can't reach the answer)
2) Why does the last equation give the limit? I see that $||f-s_N||_2$ decreases as $N$ grows, but I don't see why it converges to zero.
| Note $$\hat{s}_N(m) = \sum_{\lvert n\rvert \le N} \hat{f}(n)\cdot\frac{1}{2\pi}\int_{-\pi}^\pi e^{i(n-m)t}\, dt = \sum_{\lvert n\rvert \le N} \hat{f}(n)\delta_{nm}$$ where $\delta_{nm}$ equals $1$ when $n = m$ and equals $0$ otherwise. If $\lvert m\rvert > N$, then $\delta_{nm} = 0$ for every $n$ with $\lvert n \vert \le N$. Therefore, $\hat{s}_N(m) = 0$ whenever $\lvert m\rvert > N$. On the other hand, if $\lvert m\rvert \le N$, then $\sum_{\lvert n \rvert \le N} \hat{f}(n)\delta_{nm}$ reduces to $\hat{f}(m)$. It follows that $\hat{f}(m) - \hat{s}_N(m)$ is $\hat{f}(m)$ for $\lvert m\rvert > N$ and $0$ for $\lvert m \rvert \le N$. By Parseval's theorem, $$\|f - s_N\|_2^2 = \sum_{n\, =\, -\infty}^\infty \lvert\hat{f}(n) - \hat{s}_N(n)\rvert^2 = \sum_{\lvert n\rvert > N} \lvert \hat{f}(n)\rvert^2$$ Since the sequence $(\hat{f}(n))_{n = 1}^\infty$ is square summable, given a positive number $\varepsilon$, there corresponds a $k$ such that for all $N > k$, $\sum_{\lvert n \rvert > N} \lvert \hat{f}(n)\rvert^2 < \varepsilon$. Thus $$\lim_{N\to \infty} \sum_{\lvert n \rvert > N} \lvert \hat{f}(n)\rvert^2 = 0$$ implying the $\lim_{N\to \infty}\|f - s_N\|_2 = 0$.
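For a concrete added illustration, take $f(t)=t$ on $[-\pi,\pi)$, whose coefficients are $\hat f(n)=i(-1)^n/n$ for $n\neq0$ and $\hat f(0)=0$; then $\|f-s_N\|_2^2$ is exactly the tail $2\sum_{n>N}1/n^2$, which visibly shrinks to $0$:

```python
import math

def tail(N, cutoff=200000):
    # ||f - s_N||_2^2 = sum over |n| > N of |fhat(n)|^2 = 2 * sum_{n > N} 1/n^2,
    # truncated at `cutoff` (truncation error is about 2/cutoff).
    return 2.0 * sum(1.0 / k**2 for k in range(N + 1, cutoff))

norm_sq = math.pi ** 2 / 3        # ||f||_2^2, again by Parseval
errs = [tail(N) for N in (0, 1, 10, 100)]
```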
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does $(f_n)=(n\sin(\frac{x}{n})-x)$ converge uniformly on $[-a,a]$ for $a\geq0$? I'm trying to solve the following problem: Let $\left(f_{n}\right)_{n\in\mathbb{N}}$
be a sequence of functions such that $f_{n}\colon\mathbb{R}\to\mathbb{R}$
is given by $f_{n}\left(x\right)=n\sin\left(\frac{x}{n}\right)-x$,
for all $n\in\mathbb{N}$.
$1.$ Prove that $\left(f_{n}\right)_{n\in\mathbb{N}}$ is pointwise
convergent.
$2.$ Prove that $\left(f_{n}\right)_{n\in\mathbb{N}}$ doesn't converge
uniformly.
$3.$ Does $\left(f_{n}\right)_{n\in\mathbb{N}}$ converge uniformly
on $\left[-a,a\right]$ for $a\geq0$?
For $1.$ I proved that $\left(f_{n}\right)_{n\in\mathbb{N}}$ converges
pointwise to $0$. Given $x\in\mathbb{R}$, using L'Hôpital's rule we
have that:
$$ \lim_{n\to\infty}\frac{\sin(\frac{x}{n})}{\frac{1}{n}}=\lim_{n\to\infty}\frac{\cos(\frac{x}{n})\cdot (-\frac{x}{n^{2}})}{-\frac{1}{n^{2}}}=x\cdot\lim_{n\to\infty}\cos(\frac{x}{n})=x\cdot\cos(0)=x.
$$
And therefore $\lim_{n\to\infty}f_{n}\left(x\right)=\lim_{n\to\infty}n\sin\left(\frac{x}{n}\right)-\lim_{n\to\infty}x=x-x=0.$
Hence $\left(f_{n}\right)_{n\in\mathbb{N}}$ converges pointwise to
0.
In $2.$ I took $\varepsilon=1$. Then given $n\in\mathbb{N}$, we
can take $x=n\pi\in\mathbb{R}$ and we have that
$$
\mid f_{n}\left(x\right)-0\mid=\mid n\sin\left(\frac{n\pi}{n}\right)-n\pi\mid=n\pi>1=\varepsilon.
$$
And with this we can conclude that $\left(f_{n}\right)_{n\in\mathbb{N}}$
doesn't converge uniformly in $\mathbb{R}$.
For $3.$ I don't know if the answer is yes or no. I'm thinking that
the answer could be yes, since we are taking a compact set
and $f_{n}$ is continuous for all $n\in\mathbb{N}$, but I couldn't
prove this. Could you help me or give me some idea for this problem?
Thanks.
| It is known that $\lim\limits_{y \to 0}\frac{\sin y}{y} =1$.
Take $\epsilon >0$. You can pick $\delta >0$ such that $\left\vert \frac{\sin y}{y} -1 \right\vert \le \epsilon$ for $0 < \vert y \vert \le \delta$. For $n >a/\delta$ and $\vert x \vert \le a$, you have $\left\vert \frac{x}{n} \right\vert \le \delta$ and therefore
$$\left\vert x \frac{\sin(\frac{x}{n})}{\frac{x}{n}}-x\right\vert =\vert f_n(x)\vert \le a \epsilon,$$
proving that the sequence converges uniformly on each compact interval $[-a,a]$.
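Numerically one can watch the sup norms shrink (an added check, not part of the original answer; it uses the elementary bound $|n\sin(x/n)-x|\le |x|^3/(6n^2)$):

```python
import math

def sup_norm(n, a=2.0, samples=2001):
    # Grid approximation of sup over [-a, a] of |f_n(x)| = |n*sin(x/n) - x|.
    xs = (-a + 2 * a * k / (samples - 1) for k in range(samples))
    return max(abs(n * math.sin(x / n) - x) for x in xs)

def bound(n, a=2.0):
    # From |sin y - y| <= |y|^3 / 6 with y = x/n.
    return a ** 3 / (6 * n ** 2)

sups = [sup_norm(n) for n in (1, 10, 100)]
```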
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Limit of a function in which square roots are involved $$\lim_{x \to 0}\frac{x+2-\sqrt{2x+4}}{3x-1+\sqrt{x+1}}$$
Could someone please help me solve this problem?
I tried multiplying by a unity factor but I ended up stuck.
| You may use $$\sqrt{1+x}=1+\frac x2+O(x^2)$$
First,
$$\sqrt{2x+4}=2\sqrt{1+\frac x2}=2(1+\frac x4)+O(x^2)=2+\frac x2+O(x^2)$$
And
$$\frac{x+2-\sqrt{2x+4}}{3x-1+\sqrt{x+1}}
=\frac{x+2-2-\frac x2+O(x^2)}{3x-1+1+\frac x2+O(x^2)}
\\=\frac{\frac x2+O(x^2)}{\frac72x+O(x^2)}=\frac17+O(x)\underset{x\to0}\longrightarrow\frac17
$$
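A quick numerical sanity check of the value $\frac17$, approaching $0$ from both sides (the step sizes are arbitrary):

```python
import math

def f(x):
    return (x + 2 - math.sqrt(2 * x + 4)) / (3 * x - 1 + math.sqrt(x + 1))

# evaluate near 0 from both sides
samples = [f(h) for h in (1e-3, -1e-3, 1e-5, -1e-5)]
print(samples, 1 / 7)
```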
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Are there any infinite dimensional division algebras? Apart from the finite dimensional division algebras like $\mathbb{R, C, H, O}$
Are there any infinite dimensional division algebras? (Especially any "exceptional" ones?)
I was thinking maybe the ring over polynomials might be a division algebra if you include negative exponents and allow infinite series. But I'm not sure if every series gives a unique member of the algebra. You might have $(1+x)^{-1} = 1-x+x^2-...$
Well I guess the space of functions is a division algebra since you can add and divide them $f(x)g(x)$ and $f(x)/g(x)$ and has an identity element $1$.
What about the ring over polynomials with rational or irrational exponents?
Or ones based on lattices?
| The rational functions (in one variable) provide an example of such a ring ... after we mod out by the appropriate equivalence relation, namely "equality off a finite set" (so that e.g. "$x+3$" and "${(x-2)(x+3)\over x-2}$" are the same thing). And the rational functions are only the tip of a much larger iceberg of "well-behaved" functions, especially when we work over $\mathbb{C}$ instead of $\mathbb{R}$ (see the notion of analytic function).
However, this idea isn't so straightforward for arbitrary collections of functions. Rational functions are extremely well behaved, and when we lose that good behavior things get quite difficult. For example, let $f(x)=1$ if $x>0$ and $0$ if $x\le 0$, and let $g(x)=1-f(x)$. Then the product of $f$ and $g$ is the always-zero function, but both $f$ and $g$ are nonzero a lot of the time so there's no clear picture of which one "ought to be" the zero element of our algebra. In general, finding a natural equivalence relation making a division algebra out of a given class of functions can be quite hard.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
} |
Universal property of product topology, unique up to homeomorphism
Let $(X_j, \tau_j)_{j\in J}$ be a family of topological spaces. Let $X=\prod_{j\in J} X_j$ provided with the product topology and let $pr_k: X\to X_k$ be the projection on the $k$-th coordinate, then has $(X,\tau, (pr_j)_{j\in J})$ the following universal property:
If $Y$ is a topological space and $(f_j: Y\to X_j)_{j\in J}$ is a family of continuous functions, then there is exactly one continuous function $f:Y\to X$ such that $f_k=pr_k\circ f$ holds for every $k$
This universal property of $X$ and the function $pr_k$ characterizes $(X,\tau, (pr_j)_{j\in J})$ unique up to homeomorphism.
I have a question why this characterizes up to homeomorphism.
The proof goes as follows:
Suppose $P$ is a topological space with functions $\beta_j: P\to X_j$ which has the same universal property.
With that we get:
Now observe $\alpha\circ \beta: P\to P$ (Note that the $X$ in the following picture is supposed to be $P$)
Then $\alpha\circ\beta =\operatorname{id}_P$, because $\alpha$ and $\beta$ are unique.
Similarly $\beta\circ\alpha = \operatorname{id}_X$, so $\alpha$ and $\beta$ are homeomorphisms.
My question:
Why is $\alpha\circ\beta=\operatorname{id}_P$? And what does the uniqueness have to do with it?
From the diagram in the last picture we get that
$\beta_j\circ(\alpha\circ\beta)=\beta_j$, which implies $\alpha\circ\beta=\operatorname{id}_P$ immediately.
Since $\beta_j\circ (\alpha\circ\beta(p))=\beta_j(p)\Leftrightarrow \alpha\circ\beta(p)=p$ for every $p\in P\Leftrightarrow \alpha\circ\beta=\operatorname{id}_P$.
What does the uniqueness have to do with it?
Thanks in advance, and excuse me for these awful images, but creating such diagrams on this website is always an odyssey of its own...
| You know that $\alpha: X \to P$ satisfies $$\forall j: \beta_j \circ \alpha = \text{pr}_j\tag{1}$$
and $\beta: P \to X$ satisfies $$\forall j : \text{pr}_j \circ \beta= \beta_j \tag{2}$$
now using $(1)$ and $(2)$ we get that for any $j$:
$$\text{pr}_j \circ (\beta \circ \alpha) = (\text{pr}_j \circ \beta) \circ \alpha = \beta_j \circ \alpha = \text{pr}_j\tag{3}$$
And in the diagram we get from applying the universal property for $X$ to the test space $X$ itself, we are promised a unique $\gamma: X \to X$ such that
$$\forall j: \text{pr}_j \circ \gamma = \text{pr}_j\tag{4}$$
Now, $(3)$ tells us that $\gamma = \beta \circ \alpha$ also obeys $(4)$ and standard common sense (or the axioms of a category, in a more abstract setting) tells us that $\gamma=\text{id}_X$ also satisfies $(4)$.
So the unicity of $\gamma$ tells us $\beta \circ \alpha = \gamma = \text{id}_X$ and in particular, $$\beta \circ \alpha = \text{id}_X$$
The $\alpha \circ \beta = \text{id}_P$ follows similarly from applying the universal property of $P$ to $P$ and a similar small computation to $(3)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove 1111......11 (91 digits) is a prime or composite number?
How to prove $1111......11$ ($91$ digits) is a prime or composite number?
My Approach:
$1111......11$ can be expressed as $10^{0}+10^{1}+10^{2}+\cdots+10^{90}$
Using summation of a geometric progression formula,
$$10^{0}+10^{1}+10^{2}+\cdots+10^{90} = \frac{10^{91}-1}{10-1}=\frac{10^{91}-1}{9}$$
I do not know how to proceed after this step. Kindly guide me how to solve this problem.
| Note that $$91=13(7),$$ so you may factor out the $7$-digit repunit $$1111111,$$ and your number is $$1111111\left(10^{12\times 7} + 10^{11\times 7}+10^{10\times 7}+\cdots+10^{7}+1\right)$$
So it is composite.
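The factorization is easy to confirm directly, since Python has arbitrary-precision integers:

```python
# the 91-digit repunit and the 7-digit repunit
R = int("1" * 91)
r7 = int("1" * 7)   # 1111111

print(R % r7)        # 0, so r7 divides R and R is composite
cofactor = R // r7   # the second (large) factor from the answer
print(cofactor > 1)  # True: a proper factor
```

Since $13$ also divides $91$, the $13$-digit repunit gives another proper factor in the same way.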
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Why does the general solution of $y'=y$ not covering $y=0?$ It is regarding the ODE $y'=y.$
Usually we try to find its general solution using separation of variables as
$\frac{dy}{y}=dx\implies\log y=x+c\implies y=e^{x+c}$ ($c$ being an arbitrary constant).
Please tell me why the general solution does not cover the case $y=0.$
| Generally, when solving ODEs, you are looking for a non-trivial solution; i.e. $y$ that is not identically zero. Note that the trivial solution ($ y = 0$) is quite often a valid solution, but it just isn't interesting from either a mathematical or a practical point of view. Moreover, as others have mentioned, the method breaks down if we do not assume that $y$ is not zero.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
Integral $\int^{\infty}_0 \exp\left[-\left(4x+\frac{9}{x}\right)\right] \sqrt{x}\,dx$
How do I evaluate $$\displaystyle\int^{\infty}_0 \exp\left[-\left(4x+\dfrac{9}{x}\right)\right] \sqrt{x}\;dx?$$
To my knowledge the following integral should be related to the Gamma function.
I have tried using the substitution $t^2 = x$, and I got
$$
2e^{12}\displaystyle \int^{\infty}_0 \exp\left[-\left(2t + \dfrac{3}{t}\right)^2\right] t^2 \; dt
$$
after substitution. But it seems like I can do nothing about this integral anymore. Can anyone kindly give me a hint, or guide me to the answer?
| It looks like a tricky integral, however Feynman's trick deals with it nicely.
$$I=\int^{\infty}_0 \exp\left(-\left(4x+\dfrac{9}{x}\right)\right) \sqrt{x}dx\overset{\sqrt x\to x}=2\int_0^\infty \exp\left(-\left(4x^2+\frac{9}{x^2}\right)\right)x^2 dx$$
Now consider the following integral:
$$I(t)=2\int_0^\infty \exp\left(-\left(4x^2+\frac{t}{x^2}\right)\right)x^2 dx$$
The reason why I'm putting the parameter in that place is because if $x^2$ is simplified then the integral becomes much easier. So let's take a derivative with respect to $t$ in order to get:
$$ I'(t)=-2\int_0^\infty \exp\left(-\left(4x^2+\frac{t}{x^2}\right)\right) dx=-\frac{\sqrt \pi}{2}e^{-4\sqrt t}$$
The above result follows using the Cauchy-Schlomilch transformation (see $3.3$).
I think that you are on the right track now and basically the future the steps would be to see that:
$$I(0)=\frac{\sqrt \pi}{16}\Rightarrow I=I(9)-I(0)+\frac{\sqrt\pi}{16}=-\frac{\sqrt \pi}2 \int_0^9e^{-4 \sqrt t}dt+\frac{\sqrt{\pi}}{16}=\boxed{\frac{13\sqrt \pi}{16e^{12}}}$$
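An exponent slip is easy to make here, so a crude numerical check helps. This composite-Simpson integration uses only the standard library; the limits $[10^{-3}, 50]$ are chosen so the truncated tails are negligible, and the check confirms that the exponential factor in the closed form is $e^{-12}$, matching the $e^{12}$ that appears after the substitution in the question:

```python
import math

def integrand(x):
    return math.exp(-(4 * x + 9 / x)) * math.sqrt(x)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(integrand, 1e-3, 50.0, 200_000)
closed_form = 13 * math.sqrt(math.pi) / (16 * math.e ** 12)
print(numeric, closed_form)  # both about 8.85e-6
```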
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
For non-negative $a$ and $b$ with $a+b \leq c$ for a small constant $c$, what is the minimum of $\cos a + \cos b$?
Let $a,b \geq 0$ with $a+b \leq c$ for a small constant $c$ between $0$ and $1$.
What is the minimum of $\cos(a) + \cos(b)$?
I conjecture it is $\cos(0)+\cos(c) = 1 + \cos(c)$ but I have no proof for this.
| This is not an answer, but it strengthens your guess. Suppose $a+b=2k\leq c$ and let $a=k-\varepsilon$ and $b=k+\varepsilon$; then:
$$f(a,b) :=\cos(a)+\cos(b)=2\cos(k)\cos(\varepsilon).$$
Now one can see that as $\varepsilon\to 0$, $f(a,b)$ increases, and as $\varepsilon\to k$, $f(a,b)$ decreases. So probably the minimum occurs when $\varepsilon=k$ and $k\to c/2$; i.e. $a=0$ and $b=c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3328969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Residue field at the generic point is equal to the fraction field of the section I want to show that:
If $X$ is a non-empty integral scheme and $U \subset X$ is an affine, non-empty, open subset of $X$ then
$$ K_{X, \eta} = \text{Frac}(\mathcal{O}_X(U)).$$
Here $K_{X, \eta}$ is the residue field at the generic point $\eta$.
So the proof starts like this:
We define a morphism
$$ \mathcal{O}_X(U) \rightarrow K_{X, \eta} = O_{X, \eta}, \ s \mapsto [(U,s)].$$
This map is injective because if $[(U,s)]=0$ then $s(\eta)=0$ in $K_{X, \eta}$. The set $V=\{x \in U \ | \ s(x) = 0 \in K_{X,x} \}$ is closed in $U$ and contains $\eta$. Hence the set $V$ must be all of $U$. Therefore $s=0$.
My explanation why $V$ is closed: $U$ is affine so we may assume that $U = \text{Spec}(A)$ for some ring $A$. Then $\mathcal{O}_X(U) = {\mathcal{O}_X}|_U (U) = A$. So the element $s$ corresponds to some $a \in A$. In $V$ we have: $\mathfrak{p} \in V$ then $s(\mathfrak{p}) = \frac{s}{1} \equiv 0 \in A_{\mathfrak{p}} / {\mathfrak{p} A_{\mathfrak{p}}}$. Thus $s \in \mathfrak{p}$. So the set $V$ is just the set of primes that contain $s$, which is closed.
If my explanation is correct, how does it then follow that $s=0$? Is it because if $V=U$ then all primes $\mathfrak{p}$ contain $s$, so $s \in rad(A) = \{0\}$ because $A$ is an integral domain?
| Your reason for $s=0$ is correct. It is interesting that the map $\mathcal{O}_X(U)\to K_{X,\eta}$ is injective even when $U$ is not affine. What you have done for the affine case is generalized into two facts:
*
*Let $X$ be a scheme (in fact this is true for a locally ringed space). Let $f\in \mathcal{O}_X(U)$. The set of point $x\in U$ such that $f$ vanishes at $x$ is closed in $U$.
*Let $X$ be a reduced scheme (every stalk is a reduced ring, i.e. ring with no nilpotent). Then a section $f$ over $U$ vanishes at every point of $U$ if and only if $f=0$.
Since $X$ is an integral scheme, it is reduced and irreducible with generic point $\eta$. If $[(U,s)]=0$ then $s(\eta)=0$, so $s$ vanishes on $\overline{\{\eta\}}\cap U=U$. But $X$ is reduced so $s=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Visual Intuition for the Sum of a FINITE Geometric Series I'm interested in intuitive visual explanations for the sum of a finite geometric series.
I know there are some pretty "intuitive" explanations out there (including some on this site), but I haven't seen any that provide a visual intuition.
If anyone here knows of any and would share them, I'd greatly appreciate it!
Thanks!
Finite geometric series.
All the answers thus far have been for the infinite case.
Thanks!
| I think it is Matteo's method in disguise, since you want it for a finite sum. (Very sorry for the bad drawing.)
Here $k$ is the common ratio ($r$).
Assume you want to calculate $$S=2+2^2+2^3+2^4.$$
Now imagine a rectangle of sides $1$ and $2$. Take two such rectangles and put them in adjacent positions: they form a big rectangle of sides $2$ and $2$. Again take two of the big rectangles and repeat the process, up to $4$ times (see the diagram). Now all you need to do is find the total area of these rectangles. Here any two consecutive rectangles have one side in common, and the bigger has twice the area of the smaller.
Now let's go one step further in the series and draw the next rectangle, which will have twice the area of the last one. We want to check whether this new area is more than or less than $S$.
Now we try to fit our area $S$ into this bigger rectangle.
Divide it into two parts: one part of $S$ fits into it, and we are left with the other part. Again divide the smaller half into two parts: one part fits, and we are left with a smaller one. Repeating this process, the whole of $S$ fits in, and we are eventually left with an area as big as the smallest rectangle.
So $$\color{red}{S=2^5-2=32-2=30}$$
When you think about generalizing this method, we need $\color{red}{k}$ more previous rectangles at each step. That is, we have to divide the one-step-further area into $k$ parts; we use $1$ part to cover up again, so we get $k-1$ parts to divide, and we are eventually left with the smallest rectangle. So the area is $k-1$ times:
$$ S(k-1)=A_{n+1}-A_1$$
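The identity at the end can be checked directly; here `geom_sum` just adds the rectangle areas term by term (the sample parameters are arbitrary):

```python
def geom_sum(a, k, n):
    """a + a*k + a*k**2 + ... + a*k**(n-1), added term by term."""
    return sum(a * k ** i for i in range(n))

S = geom_sum(2, 2, 4)   # the 2 + 4 + 8 + 16 example above
print(S)                # 30
print(2 ** 5 - 2)       # A_{n+1} - A_1, also 30
```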
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 4
} |
For how many pairs of positive integers n and m is the statement $mn - 8m + 6n =0$ true?
Each interior angle of a regular polygon with $n$ sides is $\frac{3}{4}$
of each interior angle of a second regular polygon with $m$ sides.
How many pairs of positive integers $n$ and $m$ are there for which this
statement is true?
$\frac{(n-2)*180}{n}$ is the value of one interior angle for a polygon with $n$ sides.
Therefore $\frac{(n-2)*180}{n} =\frac{3(m-2)*180}{4m}$ and $mn - 8m + 6n =0$
For how many pairs of positive integers $n$ and $m$ is the statement $mn - 8m + 6n =0$ true?
| $$mn-8m+6n=0$$
$$m(n-8)=-6n$$
$$m=\frac {6n}{8-n}$$
The positive integral solutions are $$(m,n)=(2,2),(6,4),(18,6),(42,7),(10,5) $$
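A brute-force check that these five pairs are the only ones. For $n\ge 8$ the left side equals $m(n-8)+6n>0$, so the formula only needs to be tested for $n=1,\dots,7$; the second search range below is just an independent cross-check:

```python
# all positive-integer solutions of m*n - 8*m + 6*n == 0
from_formula = [(6 * n // (8 - n), n) for n in range(1, 8) if (6 * n) % (8 - n) == 0]
brute = [(m, n) for m in range(1, 200) for n in range(1, 200)
         if m * n - 8 * m + 6 * n == 0]
print(from_formula)
```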
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Define $f : L^2 \rightarrow \mathbb{R}$ by $f(x) = \sum_{n=1}^{\infty} \frac{x_n}{n}$. Is $f$ continuous? Problem I came across studying for qualifying exams. I think I have it: if $x^k$ is a sequence of sequences in $L^2$ and it converges to $x$ in $L^2$, then $d(x^k_n, x_n) \rightarrow 0$ as $k \rightarrow \infty$ for all $n$. Then consider $d(f(x^k), f(x)) = \sum_{n=1}^{\infty} \frac{x^k_n - x_n}{n}$. But since $d(x^k_n, x_n) \rightarrow 0$ as $k \rightarrow \infty$, $d(f(x^k), f(x)) \rightarrow 0$ as $k \rightarrow \infty$. Is this right? Also, I was curious if it's even true that $f(x)$ is finite for all $x \in L^2$. I'm suspicious that it's not necessarily true but I couldn't figure out a proof or find a counterexample.
| Hint:
$$|f(x) - f(y)| \le \frac{\pi}{\sqrt 6} \left( \sum_{k=1}^\infty (x_k - y_k)^2 \right)^{1/2}.$$
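The constant $\frac{\pi}{\sqrt 6}$ comes from Cauchy–Schwarz together with $\sum_{k\ge 1} 1/k^2 = \pi^2/6$; a quick numeric check of that constant (the truncation point is arbitrary):

```python
import math

# partial Basel sum: sum of 1/k^2 for k < 10**6
partial = sum(1.0 / k ** 2 for k in range(1, 10 ** 6))
C = math.sqrt(partial)
print(C, math.pi / math.sqrt(6))  # both about 1.2825
```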
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How can I solve prove that $8(1-a)(1-b)(1-c)\le abc$ with the conditions below? There was a homework about inequalities (that why I ask a bunch of inequality problems). But I couldn't solve the following:
If $0<a,b,c<1$ and $a+b+c=2$, prove that $8(1-a)(1-b)(1-c)\le abc$
I tried many times, and finally I used Muirhead, but it failed!
$\begin{split}L.H.S.-R.H.S.&=8(1-a)(1-b)(1-c)-abc\\&=8-8(a+b+c) +8(ab+bc+ca)-9abc\\&=-(a^3+b^3+c^3)+(a^2b+b^2c+c^2a+ab^2+bc^2+ca^2)-3abc\\&=\frac{1}{2}\Bigg(\sum_{sym}a^2b-\sum_{sym}a^3\Bigg)+\frac{1}{2}\Bigg(\sum_{sym}a^2b-\sum_{sym}abc\Bigg)\end{split}$
But as $(3,0,0)$ majorizes $(2,1,0)$ but $(2,1,0)$ majorizes $(1,1,1)$, so it fails.
Could someone help? Any help is appreciated!
| Also, we can use Muirhead here.
Let $a+b-c=z$, $a+c-b=y$ and $b+c-a=x$.
Thus, $$x=a+b+c-2a=2(1-a)>0.$$
Similarly, $y>0$ and $z>0$ and we need to prove that
$$8xyz\leq(x+y)(x+z)(y+z)$$ or
$$\sum_{cyc}(x^2y+x^2z-2xyz)\geq0,$$ which is true by Muirhead.
Also, we can use AM-GM:
$$\prod_{cyc}(x+y)\geq\prod_{cyc}2\sqrt{xy}=8xyz.$$
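A randomized spot-check of the original inequality (fixed seed for reproducibility). Equality holds at $a=b=c=\frac23$, so the minimum of the gap $abc-8(1-a)(1-b)(1-c)$ should stay just above $0$:

```python
import random

random.seed(0)
gap = float("inf")
for _ in range(10_000):
    a, b = random.uniform(0, 1), random.uniform(0, 1)
    c = 2 - a - b
    if not 0 < c < 1:          # keep only admissible triples with a+b+c = 2
        continue
    gap = min(gap, a * b * c - 8 * (1 - a) * (1 - b) * (1 - c))
print(gap)  # small but nonnegative (up to rounding)
```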
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
What does this particular graph notation mean? I am trying to understand this paper about video summarization using a graph algorithm.
Section 4.1 of the linked paper, describes two graphs, with the same node set, while the edges differ, as they have different defining equations for edge weight. And the paper later combines the two graphs using something like a dot product.
$G^\theta_{tc} = G.G^\theta$
What is this notation for graph description?
| Unfortunately, their notation is not clear at all.
They say,
We negate values to transfer the difference into similarity and normalize matrix G
So it would seem like G is a matrix. Then they define,
We construct a graph G(V,W)
Which is already a bit strange because G(V,W) can denote a graph, and G some matrix. They are overloading G.
Then they drop the (V,W) with the G, and perform,
$ G^\theta_{tc} = G.G^\theta$
So $G$ is some matrix, and $G^\theta$ is some graph. I'm not even sure if the multiplication between a graph and matrix is well defined, unless they are multiplying some adjacency matrices together.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Range of $f(x,y)=\frac{4x^2+(y+2)^2}{x^2+y^2+1}$ I am trying to find the range of this function:
$$f(x,y)=\frac{4x^2+(y+2)^2}{x^2+y^2+1}$$
So I think that means I have to find minima and maxima. Using partial derivatives gets messy, so I was wondering if I could do some change of variables to make it easier computationally. But no change of coordinates that I can think of have really simplified it much. If I set $2w=y+2$, then I get a problem below. Am I thinking of the right strategy, or is there something better I could do?
| Idea: $$f(x,y) =\frac{4x^2+4y^2+4+(y+2)^2-4y^2-4}{x^2+y^2+1}$$
$$=4+\frac{-3y^2+4y}{x^2+y^2+1}$$
$$\leq 4+\frac{-3y^2+4y}{y^2+1}$$ if $-3y^2+4y\geq 0$ (else it is reversed),
$$= 1+\frac{4y+3}{y^2+1} =:g(y)$$
So you have to find a maximum value of $g$ on $[0,{4\over 3}]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3329898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
prove the absolute value of the integral from $a$ to $b$ of $f$ is less or equal than integral from $a$ to $b$ of the absolute value of $f$ if $f$ is integrable on $[a,b]$ , then
$$\bigg\lvert\,\int_a^b{f(x) dx}\,\bigg\rvert \leq \int_a^b{\big\lvert\,f(x)\,\big\rvert\,dx}$$
I know how this works because of the area of the integrals, but I can't express it as a proof.
There's a hint: $-\lvert\,f(x)\,\rvert\leq\,f(x)\leq\lvert\,f(x)\,\rvert$
| Hint: if $\forall x \in [a,b], f(x) \leq g(x)$, then $\int_a^b f(x)dx \leq \int_a^b g(x)dx$
How to actually do it:
We apply that statement in the following two cases
*
*$f$ and $\lvert\,f\,\rvert$: So $\int_a^b f(x)dx \leq \int_a^b \lvert\,f(x)\,\rvert\,dx$
*$-f$ and $\lvert\,f\,\rvert$ So $\int_a^b -f(x)dx = -\int_a^b f(x)dx \leq \int_a^b \lvert\,f(x)\,\rvert\,dx$
Now we apply the fact that $u\le v$ and $-u\le v$ together imply $\lvert\,u\,\rvert\le v$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find the minimum $n$ such that $x^2+7=\sum_{k=1}^n f_k(x)^2$ where $f_k(x)\in \mathbb{Q}[x]$ Recently, I have found this problem:
Find the minimum $n \in \mathbb{N}$ such that $x^2+7=f_1(x)^2+f_2(x)^2+\cdots+f_n(x)^2$, where $f_1(x), f_2(x), \ldots, f_n(x)$ are polynomials with rational coefficients.
I have tried to solve this problem for $n=2$ using $f_1(x)=a_1x+b_1$ and $f_2(x)=a_2x+b_2$, but I can't go on. Any idea?
| Let's assume you can do with only two polynomials. Then we have:
$$
p = \sum_{n=0}^{k_1}{p_nx^n}
\\
q = \sum_{n=0}^{k_2}{q_nx^n}
$$
with
$$
p^2+q^2 =x^2 +7
$$
Since the leading coefficients appear squared, they cannot cancel in $p^2+q^2$, so both polynomials must have degree at most $1$. Therefore, if we can do it with only two polynomials, then there's a solution for
$$x^2+7 = (ax+b)^2 + (cx+d)^2
\\
x^2+7 = (a^2+c^2)x^2+(2ab+2cd)x + (b^2+d^2)
$$
By equating coefficients and solving the equation system for $a,b,c$, we have
$$a =\pm \frac d{\sqrt7}$$
And therefore $a$ and $d$ cannot both be rational (unless $a=d=0$, in which case $b^2+d^2=7$ forces $b$ to be irrational), so two polynomials do not suffice.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
confusion about negative trigonometric identities Given that $\cos A=1/2$ and $\cos A$ and $\sin A$ have the same sign, find the value of $\sin(-A)$. If the question is referring to the first quadrant, where all trigonometric ratios are positive, why is the value of $\sin (-A)$: $-\sqrt{3}/2$? Is it because this rule only applies to positive angles? Also, is the rule $\sin(-A)= -\sin A$ applicable in every quadrant?
| Remember the negative angle identities:
$$\sin(-A)=-\sin(A)$$
$$\cos(-A)=\cos(A)$$
To find the value of $A$, we have to solve the equation $\cos(A)=\frac{1}{2}$ (remember your 30-60-90 triangle).
Assuming that this is over the interval $[0^{\circ}, 360^{\circ}]$, you can get either $60^{\circ}$ or $300^{\circ}$ (remember your reference angle formulas).
Because the question stated that $\cos(A)$ and $\sin(A)$ have the same sign, and $\cos(A)=\frac12>0$, we need $\sin(A)>0$ as well. Only the first quadrant makes both positive, so $A=60^{\circ}$, and therefore $\sin(-A)=\sin(-60^{\circ})=-\sin(60^{\circ})=-\frac{\sqrt{3}}{2}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Software for 3D graphing complex functions? (I am new to this forum.)
I am interested in seeing what some equations look like when they are plotted 3-dimensionally, with one axis real numbers, the second axis imaginary numbers (thus the complex plane), and the third axis real numbers. Is such software available either online or free-downloadable?
Thank you.
| Mathematica wouldn't be your worst option, but it will cost you a fair bit unless you're a student. You won't find anything as fully featured as Mathematica for free, but there are free alternatives that will at least be able to do what you need them to.
I would google free Mathematica alternatives and pick one that sounds like it will work for you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Finding all real solutions of $x-8\sqrt{x}+7=0$ Finding all real solutions of $x-8\sqrt{x}+7=0$.
Man, I tried substituting $x=y^2$ but IDK, things got complicated. What is the best way to figure this out? Thanks!
| Let $y=\sqrt{x}$, therefore $y^2=x$
$y^2-8y+7=0$ therefore $(y-7)(y-1)=0$ hence $y=1,7$.
Therefore $x=1,49$. These are indeed the only solutions.
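Plugging both values back into the original equation confirms that neither root is extraneous:

```python
import math

def g(x):
    return x - 8 * math.sqrt(x) + 7

residuals = [g(1), g(49)]
print(residuals)  # [0.0, 0.0]
```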
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 8,
"answer_id": 0
} |
How much different between $D_{KL}[P(X)||Q(X)]$ and $D_{KL}[Q(X)||P(X)]$? While Kullback Leibler divergence is not symmetric: $D_{KL}[P(X)||Q(X)] \neq D_{KL}[Q(X)||P(X)]$, how much different $D_{KL}[P(X)||Q(X)]$ from $D_{KL}[Q(X)||P(X)]$?
Does the quantity $D_{KL}[P(X)||Q(X)] - D_{KL}[Q(X)||P(X)]$ have some interesting interpretation?
| Let $\epsilon \in (0,1)$ and suppose that $P(X)=(1-\epsilon)\delta_{0}+\epsilon\delta_{1}$ and $Q(X)=\frac 12\delta_{0}+\frac 12\delta_{1}$.
Then $D_{KL}[P(X)||Q(X)]=(1-\epsilon)\ln(2(1-\epsilon))+\epsilon \ln(2\epsilon)$ and $$D_{KL}[Q(X)||P(X)] = \frac 12 \ln(\frac 1{2(1-\epsilon)})+\frac 12\ln(\frac{1}{2\epsilon})$$
As $\epsilon \to 0$, $D_{KL}[P(X)||Q(X)]\to \ln 2$ and $D_{KL}[Q(X)||P(X)] \to \infty$, thus $$D_{KL}[P(X)||Q(X)] - D_{KL}[Q(X)||P(X)] \xrightarrow[\epsilon \to 0]{}-\infty$$
The rationale is that the KL divergence $D_{KL}[\mu||\nu]$ blows up when $\mu$ is not absolutely continuous w.r.t $\nu$ (e.g. when the support of $\mu$ is not a subset of the support of $\nu$). This is a common pitfall for $f$-divergences.
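For the two-point example above the divergences are elementary to compute, which makes the asymmetry and the blow-up easy to see numerically:

```python
import math

def kl_bernoulli(p, q):
    """KL divergence D[Bernoulli(p) || Bernoulli(q)] in nats."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

eps = 1e-6
d_pq = kl_bernoulli(eps, 0.5)  # P = (1-eps)*d_0 + eps*d_1, Q uniform
d_qp = kl_bernoulli(0.5, eps)
print(d_pq, d_qp)  # d_pq stays near ln 2, d_qp blows up
```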
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question about affine transformation and one-one function. An Affine transformation is a function $f:\mathbb{R}^2 \rightarrow \mathbb{R}^2$ such that $f(v) = Av+b $, where $\det A \neq 0 $ and $b \in \mathbb{R}^2.$
My professor give me a task, I need to prove that a function $f$ is Affine transformation iff $f$ is one-one.
It's a little confusing. I have proved that if $f$ is affine then $f$ is one-one, but for the other direction I don't know how to proceed, and I suspect it's not true. Of course, if I take $f: \mathbb{R}_{+} \rightarrow \mathbb{R},$ where $f(x) = x^2,$ then this function is one-one and not affine, but I need some example in $\mathbb{R}^2$ with the form $f(v)=Av + b.$
| I think you have mis-interpreted the question. What is true is if $f(v)=Av+b$ for some square matrix $A$ and some vector $b$ then $f$ is one-to-one iff $\det(A) \neq 0$.
Proof: If $\det(A) \neq 0$ and $Av+b=Aw+b$ then $A(v-w)=0$ and this implies $v=w$ because $A$ is non-singular (and its kernel is $\{0\}$).
If $\det(A)=0$ then there exists a nonzero vector $v$ such that $Av=0$. This gives $f(v)=f(2v)$, so $f$ is not one-to-one.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Help calculate this limit, about double factorials. $$
\lim_ {n\to\infty} \dfrac {\left [\left (2n-1\right)!! \right] ^ {1/ {2n}}} {\left [\displaystyle\prod_ {k=1} ^ {n} (2k-1)!! \right] ^ {1/ {n^2}}}$$
| The numerator is $\int_0^1 x\log(x)dx=-1/4$
so $\lim_{n\to\infty}\frac{1}{n}(\sum_{k=1}^n \frac{2k-1}{2n-1}\log(\frac{2k-1}{2n-1}))=-\frac14$
so $\lim_{n\to\infty}(\prod_{k=1}^n(\frac{2k-1}{2n-1})^{2k-1})^{\frac{1}{n(2n-1)}}=e^{-\frac14}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to show that $\left \lceil x\right \rceil-\left \lceil y \right \rceil=\left \lceil x-y \right \rceil-1$ with $x,y \in R^+$ Recently, I have found this inequality: $\left \lceil x \right \rceil + \left \lceil y \right \rceil\geq \left \lceil x+y \right \rceil$. I'm trying to understand also if this similar equality holds: $\left \lceil x\right \rceil-\left \lceil y \right \rceil=\left \lceil x-y \right \rceil-1$ where $x,y \in R^+$. I've already tried substituting some values for $x$ and $y$ and it seems correct, but I don't know how to prove it. Any idea?
| Note that $\lceil p\rceil-\lceil q\rceil-(\lceil p-q\rceil-1)=\lceil x\rceil-\lceil y\rceil-(\lceil x-y\rceil-1)$ when $p=x-\lceil x\rceil$ and $q=y-\lceil y\rceil$.
Thus, we just need to consider when $-1<x,y\le 0$.
Since $\lceil x\rceil,\lceil y\rceil=0$ here, we need to show $0\ge\lceil x-y\rceil-1$ and this is trivial since $x-y<1$.
(I think you probably asked $\lceil x\rceil-\lceil y\rceil\ge\lceil x-y\rceil-1$, not $\lceil x\rceil-\lceil y\rceil=\lceil x-y\rceil-1$, as that's what you wrote first; but for the current question we have the simple counterexample $-1<x<y<0$, which can be motivated by the above proof)
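A quick check with `math.ceil` on random samples: the inequality $\lceil x\rceil-\lceil y\rceil\ge\lceil x-y\rceil-1$ holds throughout, while the equality version already fails at simple points such as $x=y=1$ (the seed and sample range are arbitrary):

```python
import math
import random

def lhs(x, y):
    return math.ceil(x) - math.ceil(y)

def rhs(x, y):
    return math.ceil(x - y) - 1

print(lhs(1.0, 1.0), rhs(1.0, 1.0))  # 0 and -1: equality fails

random.seed(1)
pairs = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(10_000)]
holds = all(lhs(x, y) >= rhs(x, y) for x, y in pairs)
print(holds)
```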
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3330969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Handle a function of itself I have a tricky question. I want to handle a function y(x) defined in this way:
$y(x)=f(y(x))g(x)$
Here, $f(y(x))$ and $g(x)$ are smooth functions: is there any way/method to express $y(x)$ as a (complicated) function of $x$ solely?
Maybe it is impossible if $f$ and $g$ are generic.
Edit:
If I have the numerical functions which describe $f(y)$ and $g(x)$, is there any approximated way to express $y(x)$? For example with recursive formulae?
My point is not to guess what form $y(x)$ should have, but deriving it or, at least, inferring it approximately.
| You can find many functions to satify your conditions.
For example,on $x\in[0,\infty)$ define $$y(x)=x^2,f(x)=\sqrt x, g(x)=x$$
We get $$f(y(x))g(x) =x\sqrt {y(x)} = x^2= y(x)$$
You may design other examples with little effort.
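Checking the example numerically on a few points of $[0,\infty)$:

```python
import math

y = lambda x: x ** 2
f = lambda t: math.sqrt(t)
g = lambda x: x

# residuals of f(y(x)) * g(x) - y(x) at sample points
residuals = [abs(f(y(x)) * g(x) - y(x)) for x in (0.0, 0.5, 1.0, 2.0, 10.0)]
print(residuals)  # all zero: f(y(x)) * g(x) == y(x)
```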
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How many groups of order at most $25$ are "pleasant" (abelian, with every non-identity element having prime order)?
A group $G$ is called pleasant if it is abelian and every non-identity element $g$ in $G$ has prime order. Up to isomorphism, how many pleasant groups are there of order at most $25?$
Options: $0, 9, 16, 25, 31$, or infinitely many.
I got $15.$ Any help would be greatly appreciated!
Thanks,
I got:
*
*the trivial one,
*cyclic groups of prime order up to $25$ (there are $9$) and also
*$C_2 \times C_2$,
*$C_2 \times C_2 \times C_2$,
*$C_2 \times C_2 \times C_2 \times C_2$,
*$C_3 \times C_3$ and
*$C_5 \times C_5$.
Thank you again!
| Notice that by the fundamental theorem of finite abelian groups, if $\Gamma$ is abelian and finite then $$\Gamma \cong \bigoplus_{i=1}^n \mathbb{Z}_{p_i^{e_i}}^{k_i} $$ where each of the $p_i$ are primes. Now, if any of the $e_i$ were bigger than one, so that $\mathbb{Z}_{p^\alpha}$ is a subgroup of $\Gamma$ with $\alpha$ greater than $1$, the generator of that cyclic group would have order $p^\alpha$ and therefore not have prime order. It follows that for any pleasant group $\Gamma$ we have $$\Gamma \cong \bigoplus_{i=1}^n \mathbb{Z}_{p_i}^{k_i}$$
Now, suppose that this decomposition involves at least two different primes $p_1 \neq p_2$. Then, by properties of the direct sum of groups, $\mathbb{Z}_{p_1} \oplus \mathbb{Z}_{p_2}$ is a subgroup of $\Gamma$, and the product of the generators of these two groups $\mathbb{Z}_{p_1}$ and $\mathbb{Z}_{p_2}$ would have order $p_1p_2$, which is not prime. Thus for a pleasant group $\Gamma$ we have $$\Gamma \cong \mathbb{Z}_{p}^k $$ for $p$ prime and $k$ an integer. Furthermore, it can easily be seen that each such group is abelian and that the order of each non-identity element in such a group is $p$, so every such group is pleasant.
Thus the orders of pleasant groups are exactly $1$ and the prime powers, with one pleasant group per order. Therefore, the number of pleasant groups of order at most $25$ is equal to the number of integers at most $25$ that are $1$ or a prime power. There are exactly $15$ such numbers, $$ 1, 2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25,$$ which correspond to precisely the groups you have found. And thus it seems that there are $15$ pleasant groups of order at most $25$. I could be mistaken somewhere in this reasoning, but it seems to me that the answers given do not contain the correct choice of $15$.
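As a sanity check on the final count, one can list the admissible orders ($1$ for the trivial group, plus the prime powers up to $25$) programmatically; the primality test below is a naive trial-division sketch:

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_prime_power(n):
    """True when n == p**k for some prime p and k >= 1."""
    for p in range(2, n + 1):
        if is_prime(p):
            m = p
            while m < n:
                m *= p
            if m == n:
                return True
    return False

orders = [n for n in range(1, 26) if n == 1 or is_prime_power(n)]
print(len(orders), orders)
```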
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Complete metric Suppose $M_1=(\mathbb{R}^2,g_s)$ is the plane with the standard flat metric; it is a complete manifold. Now if I delete the origin, $M_2=(\mathbb{R}^2\setminus\{0\},g_s)$ is obviously not complete. However, when the punctured plane is given a different metric, it can become complete, for example
$$g=\frac{1}{|x|^2}g_s$$
I think it is geodesically complete, since the metric blows up as points approach the origin. However, I don't know how to prove this rigorously.
| Write $g_s$ in polar coordinates: $g_s=dr^2 + r^2\,d\theta^2$. Then, with the change of variables $s=\ln r$, it holds that $g=ds^2 + d\theta^2$. This is the standard metric on the cylinder $\mathbb{R}\times S^1$.
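To spell out the computation (my own elaboration of the answer above): in polar coordinates,

```latex
g = \frac{1}{r^2}\bigl(dr^2 + r^2\,d\theta^2\bigr)
  = \frac{dr^2}{r^2} + d\theta^2
  = ds^2 + d\theta^2,
\qquad s = \ln r,\quad ds = \frac{dr}{r}.
```

So $(r,\theta)\mapsto(\ln r,\theta)$ is an isometry from $(\mathbb{R}^2\setminus\{0\},g)$ onto the flat cylinder $\mathbb{R}\times S^1$, which is geodesically complete (its geodesics are images of straight lines under the quotient $\mathbb{R}^2\to\mathbb{R}\times S^1$ and are defined for all time), hence the punctured plane with the metric $g$ is complete as well.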
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331665",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Why do we need the covariant derivative along a curve - why are linear connections not sufficient? I can't figure out why we need the definition of a 'covariant derivative along a curve', i.e. I can't see why we can't use a 'linear connection' even when the vector fields are not extendible.
I'm reading Lee's book on Riemannian manifolds. After he has shown that $\nabla$ depends on X and Y only around an open set, he defines the Christoffel symbols through the expression $\nabla_{E^j}E^i$, where $E^j,E^i$ are elements of a local frame, i.e. vector fields defined only locally on an open set (and thus not necessarily extendible). Likewise, it is shown that $(\nabla_{X}Y)_p$ in fact only depends on $X$ through its value at p and on Y through its values on a curve through p whose tangent at p is $X_p$. Therefore, if $\gamma$ is a smooth curve, $(\nabla_{\dot{\gamma}}Y)_p$ should be well-defined, even if Y is only defined along $\gamma$ and isn't extendible.
Where am I wrong? Thanks a lot.
| A quick answer to the Title.
One of the most important and powerful tools in studying differential geometry and Riemannian geometry is understanding the behavior of geodesics. And what is a geodesic?
There are two key properties
satisfied by straight lines in $\Bbb R^n$, either of which serves to characterize them
uniquely: first, every segment of a straight line is the unique shortest path between its
endpoints; and second, straight lines are the only curves that have parametrizations
with zero acceleration. (John m. Lee, Riemannian manifolds)
So we need the notion of covariant derivative along a curve to measure the acceleration of a curve and then define the geodesics and then discovering topological properties and then ...
Added: Note that the covariant derivative along a curve is not a new definition in Lee's book; it is just the restriction of the covariant derivative to a curve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Pairs of integer pairs with same lcm, gcd and mean The problem is to find all pairs of two distinct pairs (up to permutation) of integer(!) numbers $(a, b)$ and $(c, d)$ s.t. $$\operatorname{lcm}(a, b) = \operatorname{lcm}(c, d)$$
$$\gcd(a, b) = \gcd(c, d)$$ and
$$\frac{a + b}{2} = \frac{c + d}{2}$$
It is easy to show that if both the LCMs and the GCDs are equal, then the two pairs have the same product in absolute value; so the three conditions amount to the same product in absolute value, the same sum, and the same GCD.
There was a similar question on MathSE about natural numbers, the answer to which is that such distinct pairs don't exist: Prove two distinct pairs of natural numbers with these properties do not exist
One such pair is $((-6, 35), (14, 15))$: their sums, LCMs and GCDs are all equal pairwise. How can one find all such pairs?
| Given $\operatorname{lcm}(x,y)\cdot\operatorname{gcd}(x,y)=|xy|$, we have $\operatorname{lcm}(a,b)\cdot\operatorname{gcd}(a,b)=\operatorname{lcm}(c,d)\cdot\operatorname{gcd}(c,d)$, so $|ab|=|cd|$. Given $\frac{a+b}{2}=\frac{c+d}{2}$, we have $c=(a+b)-d$. When substituted into $|ab|=|cd|$, this gives the two quadratics $d^2-(a+b)d\pm ab=0$. These have the two pairs of solutions $d=a,b$ (when $c,d$ are trivially a permutation of $a,b$) and $d=\frac{(a+b)\pm\sqrt{\Delta}}{2}$, where $\Delta$ is the discriminant $a^2+6ab+b^2$.
In the nontrivial case, $d$ is integral if $(a+b)\pm\sqrt{\Delta}$ is integral (and even), therefore $\Delta$ is a square. Hence, for some integer $k$, the triple $(a,b,k)$ is a solution of the 3-variable, 2nd-degree Diophantine equation $a^2+6ab+b^2=k^2$. This can be solved with a method analogous to finding Pythagorean triples: consider the intersections of the hyperbola $a^2+6ab+b^2=1$ with the line $a=m(b-1)$. The first intersection is $(1,0)$ and the second intersection is guaranteed to be rational when $m$ is integral, which paves the way for integer solutions when clearing denominators. The second solution is $b=\frac{(m+1)(m-1)}{m^2+6m+1},a=\frac{-2m(3m+1)}{m^2+6m+1}$, which we can substitute into $a^2+6ab+b^2=1$ and multiply through by $(m^2+6m+1)^2$, to give solutions for $(a,b,k)$, and by extension $(a,b,c,d)$ when substituted into the earlier equations. So the set of nontrivial solutions up to multiples and permutations is, for $m\in\mathbb{Z}$
$a=-2m(3m+1)\\
b=(m+1)(m-1)\\
c=-(3m+1)(m+1)\\
d=2m(1-m)$
For example, $m=-5$ gives the solution $(-140,24,-56,-60)$, which is a multiple and permutation of the solution $(-6,35,14,15)$ mentioned by the asker. This encompasses all solutions.
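The parametrized family can be sanity-checked numerically. A quick Python sketch (my own, not part of the original answer) verifying the gcd, lcm and sum conditions for a few values of $m$, avoiding the degenerate $m \in \{-1, 0, 1\}$ where one of the entries vanishes:

```python
from math import gcd, lcm  # both return nonnegative values for negative inputs; lcm needs Python 3.9+

def quadruple(m):
    # the family of solutions from the answer above
    a = -2 * m * (3 * m + 1)
    b = (m + 1) * (m - 1)
    c = -(3 * m + 1) * (m + 1)
    d = 2 * m * (1 - m)
    return a, b, c, d

for m in (2, 3, -2, -5, 7):
    a, b, c, d = quadruple(m)
    assert a + b == c + d              # same mean
    assert gcd(a, b) == gcd(c, d)      # same gcd
    assert lcm(a, b) == lcm(c, d)      # same lcm
    assert {a, b} != {c, d}            # genuinely distinct pairs
print("all checks passed")
```

For instance `quadruple(-5)` returns `(-140, 24, -56, -60)`, which is the multiple of the asker's $(-6, 35, 14, 15)$ mentioned above.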
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Subgroup of $S_n$ generated by $(1,2,\cdots,n)$ and $(1,2,\cdots,m)$. I'm working on the following problem:
Let $G$ be the subgroup of $S_n$ generated by $(1,2,\cdots,n)$ and
$(1,2,\cdots,m)$ where $1<m<n$. Show $G$ is $S_n$ if either $m$ or $n$
is even, and otherwise, $G$ is $A_n$.
I know that $G$ is primitive and there's a related theorem:
Let $G$ be a primitive subgroup of $S_n$, if $G\neq S_n,A_n$, then
$|S_n:G|\geq[(n+1)/2]!$.
| I'll be using right actions (so for $x,y\in S_n$, $xy$ means apply $x$ then $y$) and the notation $x^y=y^{-1}xy$.
I will assume the following (which can be proved by induction):
$A_n=\langle(1,2,3),(2,3,4),\ldots,(n-2,n-1,n)\rangle$
Let $\sigma=(1,\ldots,n)$, $\tau=(1,\ldots,m)$, so
$$\tau^\sigma=(2,3,\ldots,m+1)$$
$$\tau(\tau^\sigma)^{-1}=(1,m+1,m)$$
$$\left(\tau(\tau^\sigma)^{-1}\right)^{(\tau^\sigma)^2}=(1,3,2)$$
So $(1,3,2)\in G$ and therefore $(1,2,3)\in G$.
For $i=1,\ldots,n-3$ we have $(1,2,3)^{\sigma^i}=(i+1,i+2,i+3)$ giving $A_n\le G$.
Clearly $G=A_n$ if and only if $\sigma,\tau\in A_n$ if and only if $n$ and $m$ are odd.
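The cycle computations above are easy to mis-track by hand, so here is a small pure-Python check (my own sketch) using the same right-action convention: $xy$ means apply $x$ then $y$, and $x^y = y^{-1}xy$.

```python
def from_cycle(cycle, n):
    """Permutation of {1..n} as a dict, from a single cycle."""
    p = {i: i for i in range(1, n + 1)}
    for i, x in enumerate(cycle):
        p[x] = cycle[(i + 1) % len(cycle)]
    return p

def compose(x, y):
    # right action: (xy)(i) = y(x(i)), i.e. apply x first, then y
    return {i: y[x[i]] for i in x}

def inverse(x):
    return {v: k for k, v in x.items()}

def conj(x, y):
    # x^y = y^{-1} x y
    return compose(compose(inverse(y), x), y)

n, m = 9, 5
sigma = from_cycle(tuple(range(1, n + 1)), n)   # (1,...,n)
tau = from_cycle(tuple(range(1, m + 1)), n)     # (1,...,m)

rho = conj(tau, sigma)
assert rho == from_cycle(tuple(range(2, m + 2)), n)    # tau^sigma = (2,...,m+1)

pi = compose(tau, inverse(rho))
assert pi == from_cycle((1, m + 1, m), n)              # tau (tau^sigma)^{-1} = (1, m+1, m)

rho2 = compose(rho, rho)
assert conj(pi, rho2) == from_cycle((1, 3, 2), n)      # conjugating by (tau^sigma)^2 gives (1,3,2)
print("identities verified for n=9, m=5")
```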
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3331933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Need for basis for a topology Why did we need to define a basis for a topology?
I read that it is difficult to specify a topology on a bigger set, so we define the topology via a smaller collection.
What were the requirements to accomplish this plan?
When this question was first posed, what exactly did we want to find?
| The biggest advantage to talking about bases is that they control the entirety of a space's topological information while being easy to work with. The prototypical example would be something like open balls or open rectangles in $\mathbb{R}^n$. Open subsets of $\mathbb{R}^n$ can be very strange, but open balls are simple to imagine and easy to work with. If I want to, say, prove that some function $f: X \rightarrow \mathbb{R}^n$ is continuous I would, in principle, need to prove that $f^{-1}(U)$ is open for every single open subset of $\mathbb{R}^n$. However, in actuality, what I will typically prove is that $f^{-1}(B_r(x))$ is open for all $x$ and for all sufficiently small $r$. This is sufficient though, because preimages are well behaved with unions, and because I can represent any arbitrary open subset of $\mathbb{R}^n$ as a union of open balls.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3332043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tangent to a 2-Dimensional curve, in 3 Dimensions For a line to be tangent to a given curve, it should pass through a common point with the curve and have the same slope there. But how do we compare slopes in 3D?
E.g.: For a circle in 2 dimensions, we know which lines will be its tangents. But what if I ask which lines will be tangent to it in 3 dimensions? Will the answer be the same in both cases, or will many other lines, lying in other planes, also be included?
All I could think of is that we need to find tangents plane by plane, and in every plane the curve will be a set of points, except the one in which the curve is a circle. And because the slope at an isolated point is not defined, there will be no new tangents added when we go from 2D to 3D.
| You can define slope at a point of a curve just as you did in calculus 1 - take the limit of approximating secant lines. The only difference this time is that the limiting object will be a line pointing in, well, any direction. So, at this point, it is best to use vectors.
Two curves in space will be tangent to each other at a point if they both pass through this point and if they both have the same tangent vectors at this point.
Now, with a little bit of imagination, you can see why you will have new curves which are tangent to the circle beyond the curves which are coplanar with the circle.
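To make the "limit of secant lines" concrete: for the unit circle $\gamma(t)=(\cos t,\sin t,0)$ in $\mathbb{R}^3$, the normalized secant directions through $\gamma(0)$ converge to the tangent vector $\gamma'(0)=(0,1,0)$. A small Python sketch (my own illustration):

```python
from math import cos, sin, sqrt

def gamma(t):
    # unit circle embedded in the z = 0 plane of R^3
    return (cos(t), sin(t), 0.0)

def secant_direction(h):
    # unit vector along the secant from gamma(0) to gamma(h)
    p, q = gamma(0.0), gamma(h)
    v = tuple(b - a for a, b in zip(p, q))
    norm = sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)

for h in (0.5, 0.1, 0.01, 0.001):
    print(h, secant_direction(h))
# the directions approach the tangent vector (0, 1, 0) as h -> 0
```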
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3332150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Almost uniform convergence implies convergence in measure Let $(A,\mathcal{F},\mu)$ be a finite measure space and $\{f_n\}$ a sequence of finite real measurable functions such that $f_n\rightarrow f$ a.e. We say $f_n\rightarrow f$ almost uniformly if for every $\epsilon>0$ there is $E\subseteq A$ such that $f_n \rightarrow f$ uniformly on $E^c$ and $\mu(E)<\epsilon$.
I want to show that $f_n\rightarrow f$ almost uniformly implies convergence in $\mu$. For this, suppose not. Then
$$\exists \eta,\epsilon>0:\forall N\in \mathbb{N}:\exists n>N:\mu(\mid f_n-f\mid\geq\epsilon)\geq \eta, $$
i.e., for infinitely many points $n\in \mathbb{N}$. From the definition of almost uniform convergence, $\exists E:\mu(E)<\eta$ and $f_n\rightarrow f$ uniformly on $E^c$. Contradiction.
Question
It seems intuitive to me. But how to deduce this contradiction precisely?
I know that if $x\in E$, then it must satisfy the negation of uniform convergence, which is
$$\exists \epsilon>0:\forall N\in \mathbb{N}:\exists n>N:\mid f_n-f\mid\geq\epsilon.$$
Now, $x$ may not be in $\{f_n \text{ does not converge in measure to } f \}$ if $\mu(\mid f_n(x)-f(x)\mid\geq\epsilon)<\eta$. So I conclude that $$\{f_n \text{ does not converge in measure to } f \}\subseteq E$$ implying that $\eta>\mu(E)\geq \eta$; a contradiction.
My argument seems right but also very inefficient. How could you express this idea as clean as possible?
Thanks!
| This is proved more succinctly by a direct proof.
Given $\varepsilon > 0,$ let $E$ be as in the definition of almost-uniform convergence. Then there is some $N$ such that $n \geq N$ implies
$$\mu(E^c \cap \{|f_n-f|\geq\varepsilon\}) = 0$$
What does this imply for $\mu(|f_n-f|\geq \varepsilon)?$
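Spelling out the implication (my own elaboration, not part of the original hint): writing $\delta$ for the bound on $\mu(E)$ from the definition of almost-uniform convergence (the hint lets a single $\varepsilon$ play both roles), since $\{|f_n-f|\geq\varepsilon\} \subseteq E \cup \bigl(E^c \cap \{|f_n-f|\geq\varepsilon\}\bigr)$, for $n \geq N$ we get

```latex
\mu(|f_n - f| \geq \varepsilon)
  \;\leq\; \mu(E) + \mu\bigl(E^c \cap \{|f_n - f| \geq \varepsilon\}\bigr)
  \;<\; \delta + 0 \;=\; \delta.
```

Since $\delta > 0$ was arbitrary, $\mu(|f_n-f|\geq\varepsilon) \to 0$, which is exactly convergence in measure.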
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3332265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Counterexamples concerning the central limit theorem The central limit theorem states that, if $X$ is a random variable with finite variance $\sigma^2$ and expected value $\mu$, and if $(X_n)$ is a sequence of independent random variables identically distributed like $ X $, then
\begin{equation}
Z_n = {\frac{{\overline{X}}_n-\mu}{\sqrt{\sigma^2/n}}}\ \rightsquigarrow N(0,1),
\end{equation}
where ${\overline{X}}_{n} = \frac{1}{n} \sum_{i=1}^{n} X_i$ and $\rightsquigarrow$ means convergence in distribution, which in this case is equivalent to the pointwise convergence of the cdf of $Z_n$ to the cdf of a $N(0,1)$.
Suppose moreover that $X_n$ is absolutely continuous for all $n$‘s. Then $Z_n$ is absolutely continuous for all $n$‘s. Are there examples of such sequences $(X_n)$, such that the pdf of $Z_n$ does not converge pointwise to the pdf of a $N(0,1)$, not even almost everywhere (w.r.t. the Lebesgue measure on $\mathbb R$)?
| This counterexample is from the book Limit Distributions for Sums of Independent Random Variables by Gnedenko and Kolmogorov.
Let $X$ have density $\begin{cases}
0 &\text{if } |x|\geq \frac 1e \\
\frac{1}{2|x|\log^2(|x|)} &\text{if } |x|< \frac 1e
\end{cases}$
The authors argue that $f_n$, the density of $\sum_{i=1}^n X_i$, verifies $\displaystyle f_n(x) > \frac{c_n}{|x \log^{n+1}(|x|)|}$ for some positive constant $c_n$ in a neighborhood of $0$. So the density of $Z_n$ (which is a normalized version of $f_n$) is infinite at $0$.
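As a sanity check that this is a genuine probability density: by symmetry and the substitution $u = \log x$, $$\int_{-1/e}^{1/e} \frac{dx}{2|x|\log^2(|x|)} = \int_0^{1/e}\frac{dx}{x\log^2 x} = \int_{-\infty}^{-1}\frac{du}{u^2} = 1.$$ A pure-Python midpoint-rule sketch of the last integral (my own check, truncating the infinite tail):

```python
def density_mass(N=10**6, cutoff=-10**4):
    # after u = log x, the total mass is the integral of 1/u^2 over (-inf, -1];
    # truncate at `cutoff` (the discarded tail contributes |1/cutoff|)
    a, b = cutoff, -1.0
    h = (b - a) / N
    total = 0.0
    for k in range(N):
        u = a + (k + 0.5) * h  # midpoint of the k-th subinterval
        total += h / (u * u)
    return total

print(density_mass())  # ~ 0.9999 (the truncated tail accounts for about 1e-4)
```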
They prove the following theorem:
Theorem: Suppose $X$ has density $f$. If
*
*for some $m\geq 1$, $f_m$ (the density of $\sum_{i=1}^m X_i$) is in $L^r(\mathbb R)$ for some $r\in (1,2]$,
*$\int x^2 f(x) dx <\infty$ (i.e. $X$ has a second moment)
Then $\displaystyle \sup_{x\in \mathbb R} \left|\sigma \sqrt n f_n(\sigma \sqrt n x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$
In Petrov's Sums of Independent Random Variables, the following theorem is stated:
Theorem: Let $(X_n)$ be a sequence of i.i.d r.v with mean zero and variance $\sigma^2$ and let $f_n$ denote the density of $Z_n$ (if it exists).
Then $\displaystyle \sup_{x\in \mathbb R} \left| f_n(x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$ if and only if $f_n$ is bounded for some $n$.
In Shiryaev's Probability 2, the following Local Central Limit Theorem is stated:
Theorem: Let $(X_n)$ be a sequence of i.i.d r.v with mean zero and variance $\sigma^2$. If for some $r\geq 1$, $\int |\phi_{X_1}(t)|^r dt <\infty$, then $Z_n$ has a density $f_n$ such that
$\displaystyle \sup_{x\in \mathbb R} \left| f_n(x) - \frac{1}{\sqrt{2\pi}} e^{-x^2/2} \right| \xrightarrow[n\to \infty]{}0$
Regarding almost sure convergence, you should have a look at Rao's A Limit Theorem for Densities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3332370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 0
} |