| Q | A | meta |
|---|---|---|
Proof verification (sets & logic) Question:
Let $f: \mathbb{N} \rightarrow \mathbb{Z}$ be a function that is eventually zero, i.e., there exists some $N \in \mathbb{N}$ such that $f(n)=0$ for all $n \geq N$. Prove that the set of such functions is countable.
Proof:
Define $g: \mathbb{N} \rightarrow \mathbb{Z}$ by $$
g(i) = \left\{
\begin{array}{ll}
f(i) & \quad i \in \{0,1,\dots,N-1\} \\
0 & \quad \text{otherwise}
\end{array}
\right.
$$
Let $B_N=\{h \mid h: \{0,1,\dots,N-1\}\rightarrow \mathbb{Z}\}$. Clearly, $B_N$ is a product of $N$ copies of $\mathbb{Z}$. Hence, it's countable.
Let $G=\{f \mid f: \mathbb{N}\rightarrow \mathbb{Z} \text{ is eventually zero}\}$.
So $G = \bigcup_{N=1}^{\infty} B_N$. A countable union of countable sets is countable. So is $G$.
Is my approach correct?
| The basic idea is good, but you rather want to take the union of $G_N$'s where
$$G_N:=\{f:\Bbb N\to\Bbb Z:f(n)=0\text{ if } n>N\}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Is there a formal way of solving these series with $|r| < 1$? $$\sum_{n=1}^{\infty}\left(\frac{5}{6}\right)^{2n-2}=\sum_{n=1}^{\infty}\frac{\left(\frac{5}{6}\right)^{2n}}{\left(\frac{5}{6}\right)^2}=\frac{1}{\left(\frac{5}{6}\right)^2}\sum_{n=1}^{\infty}\left(\frac{5}{6}\right)^{2n}=\frac{1}{\left(\frac{5}{6}\right)^2}\cdot \frac{\left(\frac{5}{6}\right)^2-\left(\frac{5}{6}\right)^{2n+1}}{1-\left(\frac{5}{6}\right)^2}=\frac{1}{\left(\frac{5}{6}\right)^2}\cdot \frac{\left(\frac{5}{6}\right)^2}{1-\left(\frac{5}{6}\right)^2}=\frac{1}{1-\left(\frac{5}{6}\right)^2}$$
I'm applying directly the formula:
$$\frac{a-a^{n+1}}{1-a}$$ As I have $2n$ as the exponent, I just write:
$$\frac{a^2-a^{2n+1}}{1-a^2}$$
Is there any other formal way to do this? I know that
$$\sum _{i=0}^{\infty }\:x^{2i}\:=\:\frac{1}{1-x^2}$$
But I don't understand what happened with the $-2$ part of the exponent $2n-2$, or even how to generalize for $$\sum _{i=1,2,3...}^{\infty }$$
I found an idea here https://en.wikipedia.org/wiki/Geometric_progression and I guess that I could do:
$$\sum_{n=m}^{\infty}ar^{cn-b}=\frac{ar^{m}}{(1-r^c)r^b} \quad\text{or}\quad \sum_{n=m}^{\infty}ar^{cn+b}=\frac{ar^{m+b}}{1-r^c}$$
Is there any other more simple way to calculate this?
Thanks.
| The formal way of solving this problem is to map it to a geometric progression:
$$ \sum_{n=m}^{\infty} ar^{cn-b}=\sum_{n-m=0}^{\infty} ar^{-b} r^{cm}r^{c(n-m)} = ar^{-b} r^{cm} \sum_{i=0}^{\infty}\left( r^c\right)^{i} = \frac{ar^{cm-b}}{1-r^c} .$$
Here, we first use a change of variable $(n,m)\to(i=n-m,m)$. Then, we recognize that $\sum_{i=0}^{\infty}\left( r^c\right)^{i}$ is a geometric progression with the common ratio as $r^c$.
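Not part of the original answer, but the closed form is easy to sanity-check with exact rational arithmetic; the parameters below are those of the series in the question ($a=1$, $r=5/6$, $c=2$, $b=2$, $m=1$):

```python
from fractions import Fraction

# Parameters of the series in the question: sum_{n=1}^inf (5/6)^(2n-2)
a, r, c, b, m = 1, Fraction(5, 6), 2, 2, 1

# Closed form from the answer: a * r^(cm-b) / (1 - r^c)
closed_form = a * r**(c*m - b) / (1 - r**c)

# Exact partial sum of the first 200 terms
partial = sum(a * r**(c*n - b) for n in range(m, m + 200))

print(closed_form)                           # 36/11
print(float(closed_form - partial) < 1e-30)  # True: the tail is negligible
```

Here $36/11 = 1/(1-(5/6)^2)$, agreeing with the value obtained in the question.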
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$L \vert K$ normal extension with $K \subset M \subset L$. Show that $L \vert M$ is normal Let $L \vert K$ be a field extension and let $K \subset M \subset L$ be a subextension. Show that if $L \vert K$ is normal, so is $L \vert M$.
So at this point I'm asking myself if $L \vert K$ must be a finite extension.
If not, I would argue using the fact that in this case $L$ is a splitting field of a polynomial of degree $n$ with roots $\alpha_1, \dots, \alpha_n \in L$ which implies we can write $M = K(\alpha_1, \dots, \alpha_i)$ with $i \in \{1, \dots, n\}$.
Now, let $f(X) \in M[X]$ be defined as $f(X):=\prod_{j=i}^n (X- \alpha_j)$
It follows that $L$ is a splitting field of $f(X) \in M[X]$ and therefore $L \vert M$ is a normal extension, which was to be shown.
My questions: Does $L \vert K$ have to be finite? If yes, what would the proof be in this case? In the first case, is my proof correct?
| Let $p$ be an irreducible polynomial in $M[X]$ and suppose that $p$ has a root $u\in L$. Since $L|K$ is algebraic, $K(u)|K$ is finite; denote by $f$ the minimal polynomial of $u$ over $K$. Since $L|K$ is normal, all the roots of $f$ are in $L$. Remark that $p$ divides $f$ in $M[X]$, since $p$ is irreducible with root $u$ (so $p$ is, up to a unit, the minimal polynomial of $u$ over $M$) and $f(u)=0$. We deduce that all the roots of $p$ are in $L$, hence $p$ splits in $L$ and $L|M$ is normal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Sequence of measurable $\&$ continuous functions defined on $[0,1]$ Let $\{f_n\}$ be a sequence of measurable $\&$ continuous functions from $[0,1]$ to $[0,1]$. Assume $f_n \rightarrow f$ pointwise. Is it true/false that,
*$f$ is Riemann integrable $\& \int _{[0,1]}f_n \rightarrow \int_{[0,1]}f$?
*$f$ is Lebesgue integrable $\& \int _{[0,1]}f_n \rightarrow \int_{[0,1]}f$?
My work:
For $(1),$ I came up with a counter-example $f_n(x)=nx(1-x^2)^n$ on $[0,1]$ as $f_n \rightarrow 0$ but $\int _{[0,1]}f_n=\frac12$ (but is $f_n(x) \in [0,1]$ for all $x$??)
For $(2),$ I think this holds since:
1. Continuous functions on a closed bounded interval are Riemann integrable and hence Lebesgue integrable.
2. The measure space is finite.
3. $\{f_n\}$ is uniformly bounded and it converges to $f$ pointwise.
So, by DCT, this holds true.
Am I correct? Also what about my choice of function for case (1)?
| $$
\text{Let } f_n(x) = \begin{cases} 2n\left(1-|2nx-1| \right) & \text{if } 0\le x\le1/n, \\[6pt] {} 0 & \text{otherwise.} \end{cases}
$$
Then
$$
\lim_{n\to\infty} \int\limits_{[0,1]} f_n(x)\,dx = 1 \ne 0 = \int\limits_{[0,1]} \lim_{n\to\infty} f_n(x)\,dx.
$$
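A numerical check of this counterexample (my addition; note the peaks have height $2n$, which is relevant to the asker's aside about whether such functions stay in $[0,1]$): a simple midpoint rule shows each integral staying at $1$ while $f_n(x) \to 0$ pointwise for every fixed $x > 0$.

```python
def f(n, x):
    # Triangle of height 2n supported on [0, 1/n], zero elsewhere
    return 2*n*(1 - abs(2*n*x - 1)) if 0 <= x <= 1/n else 0.0

def integral(n, steps=200_000):
    # Midpoint rule on [0, 1]
    h = 1.0 / steps
    return sum(f(n, (k + 0.5) * h) for k in range(steps)) * h

print(round(integral(50), 3))   # 1.0 for every n
print(f(50, 0.3), f(200, 0.3))  # 0.0 0.0  (the pointwise limit is 0)
```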
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is $x^2 \geq \alpha(\alpha-1)$?
If $\alpha$ is a nonnegative real and $x$ is a real satisfying $(x+1)^2\geq \alpha(\alpha+1),$ is $x^2 \geq \alpha(\alpha-1)$?
The answer is yes. Consider two cases: $1) \, x < -1$ and $2)\, x\geq -1.$
In case $1,$ taking the square root of both sides of the inequality gives $-(x+1) \geq \sqrt{\alpha(\alpha+1)}\Rightarrow x \leq -\sqrt{\alpha(\alpha+1)}-1.$ Hence $x^2\geq \alpha(\alpha+1)+2\sqrt{\alpha(\alpha+1)}+1.$ Since $2\sqrt{\alpha(\alpha+1)}+1\geq 2\alpha+1 > -2\alpha, x^2>\alpha(\alpha+1)-2\alpha=\alpha(\alpha-1).$
Now in case $2,$ if $\alpha = 0, x^2 \geq 0,$ so we are done. Suppose $\alpha > 0$. Taking the square root of both sides gives $x+1 \geq \sqrt{\alpha(\alpha+1)}\Rightarrow x\geq \sqrt{\alpha(\alpha+1)}-1\Rightarrow x^2\geq \alpha(\alpha+1)-2\sqrt{\alpha(\alpha+1)}+1.$ $2\alpha-2\sqrt{\alpha(\alpha+1)}+1 =2\alpha\left(1-\sqrt{1+\frac{1}{\alpha}}\right)+1 =\dfrac{\sqrt{1+\frac{1}{\alpha}}-1}{1+\sqrt{1+\frac{1}{\alpha}}}>0.$ Hence $-2\sqrt{\alpha(\alpha+1)}+1>-2\alpha\Rightarrow x^2 > \alpha(\alpha+1)-2\alpha = \alpha(\alpha-1)$. I think this argument works, but I was wondering if there was a faster method?
| Here is one way by contraposition that has some advantage in not requiring cases. We can assume $\alpha > 1$ otherwise $\alpha(\alpha-1) \leq 0 \leq x^2$. (This is so we can take the square root.)
Suppose $x^2 < \alpha(\alpha-1)$, so
$$ (x + 1)^2 = x^2 + 2x + 1 < \alpha(\alpha - 1) + 2\sqrt{\alpha(\alpha-1)} + 1. $$
It suffices to show this is less than $\alpha(\alpha + 1)$. Expanding, this amounts to showing
$$ -\alpha + 2\sqrt{\alpha(\alpha-1)} + 1 < \alpha. $$
This is easy, because you can rearrange to get $2\sqrt{\alpha(\alpha-1)} < 2\alpha - 1$ and then square (as both LHS and RHS are positive) to get $4\alpha(\alpha-1) < (2\alpha-1)^2 = 4\alpha(\alpha - 1) + 1$. That is, $x^2 < \alpha(\alpha-1)$ implies $(x + 1)^2 < \alpha(\alpha + 1)$.
EDIT: I forgot to add that when we write $2x < 2\sqrt{\alpha(\alpha-1)}$ in the first step, this is because $2x \leq 2|x| < 2\sqrt{\alpha(\alpha-1)}$, using $x^2 < \alpha(\alpha-1)$; in effect we may as well assume $x$ is nonnegative.
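For what it's worth, a brute-force random search (my addition, not part of the proof) fails to find any counterexample to the implication:

```python
import random

random.seed(0)
counterexamples = 0
for _ in range(100_000):
    alpha = random.uniform(0, 10)       # nonnegative real
    x = random.uniform(-12, 12)         # real
    # hypothesis holds but conclusion fails (with a tolerance for float noise)?
    if (x + 1)**2 >= alpha * (alpha + 1) and x**2 < alpha * (alpha - 1) - 1e-9:
        counterexamples += 1
print(counterexamples)   # 0
```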
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Estimation of the $L^2$ norm of a function given the function $$g(x,t)=\frac{e^{\frac{Y}{2}}}{2 \epsilon [1+e^{\frac{Y}{2}}]^3}$$ with $Y=\frac{x-\frac{t}{2}}{\epsilon}$ where $x\in \mathbb{R}$ and $t>0$.
I'm looking for $C_{\epsilon}>0$ where $$||g(.,t)||_2\leq C_{\epsilon}$$
where $C_{\epsilon} \to 0 $ as $\epsilon \to 0^+.$
My idea: I used the change of variables $u=x/\epsilon$ and got:
$$||g(.,t)||_2^2=\frac{e^{\frac{-t}{2 \epsilon}}}{4 \epsilon} \int_{\mathbb{R}}\frac{ e^u}{[1+e^{\frac{-t}{4 \epsilon}} e^{\frac{u}{2}}]^6}du$$
Now I only have to prove that the integral is bounded by a constant that doesn't depend on $\epsilon$.
So I used the following variable change: $z=e^{-\frac{t}{4 \epsilon}}e^{\frac{u}{2}}$ but I only got:
$$||g(.,t)||_2^2=\frac{1}{2 \epsilon}\int_0^{+\infty}\frac{z}{(1+z)^6}dz$$
| Integration-by-parts results in
$$\int\frac{z}{(1+z)^6}\text{d}z = - \frac{z}{5(1+z)^5} + \frac{1}{5}\int\frac{1}{(1+z)^5}\text{d}z,$$
where the second integral is straightforward,
$$\int\frac{1}{(1+z)^5}\text{d}z = -\frac{1}{4(1+z)^4}.$$
Taking limits, the improper integral is
$$\int_{0}^{+\infty}\frac{z}{(1+z)^6}\text{d}z = \frac{1}{20}.$$
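To double-check this numerically (my addition), `F` below is the antiderivative produced by the integration by parts; its central-difference derivative recovers the integrand, and its values give $1/20$:

```python
# Antiderivative from the integration by parts above:
# F(z) = -z/(5(1+z)^5) - 1/(20(1+z)^4)
def F(z):
    return -z / (5 * (1 + z)**5) - 1 / (20 * (1 + z)**4)

# F'(z) should recover the integrand z/(1+z)^6
z, h = 1.0, 1e-6
numeric_derivative = (F(z + h) - F(z - h)) / (2 * h)
print(abs(numeric_derivative - z / (1 + z)**6) < 1e-9)   # True

# F(+inf) - F(0) = 0 - (-1/20) = 1/20
print(F(1e9) - F(0))   # 0.05
```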
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3494886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Election between $2$ candidates ends in a tie: probability one candidate leads until the penultimate vote
Assume there are two candidates $C_1$ and $C_2$. At the end of the election both candidates have received the same number of votes. What is the probability $P$ that candidate $C_1$ leads during the whole election process until the penultimate vote? (The last vote must then be in favor of candidate $C_2$.)
This question was presented in our lecture in the context of the ballot-theorem. So one should think of paths which start at $(0, 0)$ along the $x$-axis and end at some point $(n,s)$, where $n,s \in \mathbb{Z}$.
My approach:
My sample space $\Omega$ includes all possible paths along the $x$-axis. If the path is above the $x$-axis then candidate $C_1$ has more votes and if the path is below then $C_2$ has more votes . If the paths touches the $x$-axis then both candidates have the same amount of votes. Hence, $|\Omega|={2p \choose p}$, where $p \in \mathbb{N}$ is the number of votes of each candidate.
Firstly, I count all paths which start at $(1,1)$ and end at $(2p,0)$. These are ${2p-1 \choose p-1}$ many. Now I subtract all paths that touch the $x$-axis, these are ${2p-2 \choose p-2}$ many. So in total I count ${2p-1 \choose p-1}-{2p-2 \choose p-2}$ paths which do not touch the $x$-axis. One can interpret all these paths as desired outcomes, i.e. where candidate $C_1$ leads until the penultimate vote. As all paths are equally probable I get the solution just by dividing by $|\Omega|={2p \choose p}$. Hence, $P = \frac{{2p-1 \choose p-1}-{2p-2 \choose p-2}}{{2p \choose p}}$.
I am not sure if this is correct. May be someone can check it or comment on it.
| Let's shift gears and use the ballot theorem instead of reinventing it for the case of ties. Other than the one appeal to the ballot theorem, this will be a conditional probability problem. ^_^
Our sample space will be all cases where each candidate received $p$ votes. Let $A$ be the event that $C_1$ led all the way until the moment before the final vote was read, and let $B$ be the event that $C_2$ received the final vote.
We know that
*$P(B)=P(\overline B)=\frac12$
*$P(A\mid B)=\frac{p-(p-1)}{p+(p-1)}=\frac1{2p-1}\quad$ This is where we are using the ballot theorem.
*$P(A\mid \overline B)=0\quad$ Obviously, $C_1$ could not have led throughout the counting and also received the final vote, because the count ended in a tie.
Using all this and the law of total probability,
$$P(A)=P(A\mid B)\cdot P(B)+P(A\mid \overline B)\cdot P(\overline B)\\=\frac{1}{2p-1}\cdot\frac12+0\cdot\frac12=\frac1{4p-2}$$
Note that this formula agrees with almagest's calculation that the probability was $\frac1{10}$ when $p=3$.
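Brute-force enumeration over all tied vote orders (my addition) confirms $\frac{1}{4p-2}$ for small $p$:

```python
from itertools import permutations
from fractions import Fraction

def prob_c1_leads(p):
    # All distinct orders of p votes for C1 (+1) and p votes for C2 (-1)
    orders = set(permutations([1] * p + [-1] * p))
    good = 0
    for order in orders:
        partial, leads = 0, True
        for v in order[:-1]:            # up to the penultimate vote
            partial += v
            if partial <= 0:            # C1 must be strictly ahead throughout
                leads = False
                break
        good += leads
    return Fraction(good, len(orders))

print([str(prob_c1_leads(p)) for p in (2, 3, 4)])   # ['1/6', '1/10', '1/14']
```

These are exactly $1/(4p-2)$, including the $p=3$ value $\frac1{10}$ mentioned above.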
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
How to calculate the following limits: $\lim_{x\to0}\frac{\ln\left(\cosh\left(x\right)\right)}{\ln\left(\cos\left(x\right)\right)}$
How to calculate the following limits:
*$\lim\limits_{x\to0}\frac{\ln\left(\cosh\left(x\right)\right)}{\ln\left(\cos\left(x\right)\right)}$
*$\lim\limits_{n\to\infty}\sin\left(\pi\sqrt{n^{2}+1}\right)$
$1.$
$$\frac{\ln\left(\cosh\left(x\right)\right)}{\ln\left(\cos\left(x\right)\right)}=\log_{\cos(x)}\left(\cosh(x)\right)=\frac{\log_{\cos(x)}\left(\cosh(x)\right)}{\cosh(x)-1}\cdot\left(\cosh(x)-1\right)$$
Also setting: $\cosh\left(x\right)-1=t$
we have:
$$\lim_{t\to0}\frac{\log_{\cos(x)}\left(t+1\right)}{t}\cdot t=\log_{\cos(x)}\left(e\right)\cdot0=0$$
But this is not the answer, so where is my error?
$2.$
Based on my knowledge of the properties of limits, since the sine function is continuous, the given limit can be rewritten as:
$$\sin\left(\pi\lim_{n\to\infty}\sqrt{n^{2}+1}\right)$$
which does not exist; but that is not the answer, so why am I wrong?
Also, if we consider the given function as a real-valued function, e.g. $$\lim_{x\to\infty}\sin\left(\pi\sqrt{x^{2}+1}\right)$$ then the limit does not exist, so what is the reason behind this fact?
Why does the limit exist as a sequence but not as a real-valued function?
any elementary hint for determining the first limit is appreciated.
| Do you like an insane overkill? By the Weierstrass product for the cosine function
$$ \cos(x)=\prod_{n\geq 0}\left(1-\frac{4x^2}{\pi^2(2n+1)^2}\right) $$
we have
$$ \cosh(x)=\prod_{n\geq 0}\left(1+\frac{4x^2}{\pi^2(2n+1)^2}\right) $$
hence
$$ \frac{\log\cosh(x)}{\log\cos(x)}=\frac{\sum_{n\geq 0}\log\left(1+\frac{4x^2}{\pi^2(2n+1)^2}\right)}{\sum_{n\geq 0}\log\left(1-\frac{4x^2}{\pi^2(2n+1)^2}\right)}=\frac{-\sum_{n\geq 0}\sum_{m\geq 1}\frac{(-1)^m 4^m x^{2m}}{m\pi^{2m}(2n+1)^{2m}}}{-\sum_{n\geq 0}\sum_{m\geq 1}\frac{4^m x^{2m}}{m\pi^{2m}(2n+1)^{2m}}}$$
and by switching the series on $n$ and $m$
$$ \lim_{x\to 0}\frac{\log\cosh(x)}{\log\cos(x)}=\frac{\sum_{n\geq 0}\frac{1}{(2n+1)^2}}{\sum_{n\geq 0}\frac{(-1)}{(2n+1)^2}}=\color{red}{-1}.$$
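A quick numerical sanity check of the limit (my addition):

```python
import math

for x in (1e-1, 1e-2, 1e-3):
    ratio = math.log(math.cosh(x)) / math.log(math.cos(x))
    print(round(ratio, 4))
# The printed values approach -1 as x -> 0
```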
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
} |
Equation for $x^2+y^2=1$ in exponential space? Consider the unit circle $x^2+y^2=1$ in exponential space. Is there an equation for the set of points under the mapping?
Here's what I tried:
I took each coordinate pair on the unit circle and exponentiated it.
For example, $(x,y)\mapsto(e^x,e^y).$
I also tried to come up with an equation for this set of points but have not been able to so far.
| The equation for the set of points under the prescribed mapping is $\ln(x)^2+\ln(y)^2=1.$
Start with $(x,y) \mapsto (e^x,e^y).$ Define $u=\ln(x)$ and $v=\ln(y).$ In the pre-image space, we simply have $x^2+y^2=1.$ In the image space (after the prescribed mapping) we get $u^2+v^2=1.$ Substituting back to get an explicit equation in terms of $x$ and $y$ we get the above equation $\ln(x)^2+\ln(y)^2=1.$
Notice that in a $u-v$ coordinate system, we actually do have the equation for a circle, but using an $x-y$ coordinate system for the image space, we get a nonlinear equation in the coordinates $x-y.$ So if you ever have to deal with $\ln^2(x)+\ln^2(y)=1,$ just remember that you can always simplify it down to the equation of a circle using a change of coordinates which acts to linearize the equation from the image space to the pre-image space.
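A quick numerical check (my addition): push points of the unit circle through $(x,y)\mapsto(e^x,e^y)$ and verify the image satisfies $\ln(x)^2+\ln(y)^2=1$.

```python
import math

for theta in (0.3, 1.2, 2.5, 4.0):
    x, y = math.cos(theta), math.sin(theta)   # a point on x^2 + y^2 = 1
    u, v = math.exp(x), math.exp(y)           # its image under the map
    print(round(math.log(u)**2 + math.log(v)**2, 9))   # 1.0 every time
```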
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the group of group-like elements of a quantum group? A quantum group is not a group.
For example, the Drinfeld-Jimbo "quantum doubles" are Hopf algebras obtained by deforming the universal enveloping algebras of Lie algebras.
But in every Hopf algebra, there's a subset of group-like elements that satisfy
$$ \Delta(g) = g \otimes g $$
which form a group.
So there is a group hiding somewhere in the quantum group, just like there is a group hiding in the universal enveloping algebra (the latter is given by exponentiating the Lie algebra).
What is this group? Is it equal to the original Lie group, or is it deformed? Afaik Lie groups are rigid objects, so I don't see how the latter can be possible. But I'd like to confirm nevertheless.
| I do not have a complete answer to the question. But the group of group-like elements of a quantum group might be deformed somehow.
I'm referring to Lemma 6.4.1 of V. Chari, A. Pressley, A guide to quantum groups.
In the case of $\mathfrak{sl_2}$, if you take the h-adic version of quantum $\mathfrak{sl_2}$, $U_h(\mathfrak{sl_2})$, it is a $\mathbb{C}[[h]]$ Hopf algebra.
Any element $e^{h\lambda H}$, where $\lambda \in \mathbb{C}[[h]]$, is group-like. (Example: $K$ in the standard quantum version $U_q( \mathfrak{sl_2})$.)
Hence, we have some group-like elements coming from the deformation of the universal enveloping algebra.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
'imperfect' numbers A perfect number (integer) is equal to the sum of its divisors, including 1, and excluding itself. This has been around since Euclid.
Recently, I noticed that at least for the initial integers, it is more common for that sum of divisors to be smaller than the number in question.
However, for example, using the same rules the sum of the divisors of 12 is actually sixteen: The sum is here greater than the whole.
Clearly, not perfect. Therefore, imperfect. But, has anyone studied these numbers?
| The sum of all proper divisors of a number is less than, equal to, or greater than the number,
according as the number is deficient, perfect, or abundant.
The first 28 abundant numbers are $12, 18, 20, 24, 30, 36, 40, 42, 48, 54, 56, 60,$
$66, 70, 72, 78, 80, 84, 88, 90, 96, 100, 102, 104,$ $ 108, 112, 114, $ and $120.$
They are sequence A005101 in the On-Line Encyclopedia of Integer Sequences.
The smallest odd abundant number is $945$.
Every multiple (beyond $1$) of a perfect number is abundant.
You can read more about these numbers on Wikipedia.
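These facts are easy to verify computationally; here is a short sketch (my addition):

```python
def aliquot(n):
    # Sum of the proper divisors of n
    return sum(d for d in range(1, n) if n % d == 0)

abundant = [n for n in range(1, 121) if aliquot(n) > n]
print(len(abundant))    # 28
print(abundant[:6])     # [12, 18, 20, 24, 30, 36]

smallest_odd = next(n for n in range(1, 2000, 2) if aliquot(n) > n)
print(smallest_odd)     # 945
```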
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Linear transformation and dependence of vectors I couldn't find an example or explanation of why the following statement is correct:
If a Transformation is linear, and vectors $u_1$,$u_2$,$u_3$ are dependent then
$T(u_1)$,$T(u_2)$,$T(u_3)$
must also be dependent
but
If a Transformation is linear, and $T(u_1)$,$T(u_2)$,$T(u_3)$ are dependent , that doesn't mean vectors $u_1$,$u_2$,$u_3$ are dependent
Couldn't think of an example that would justify the 2nd phrase
| Let $T$ be the null function and take any $3$ linearly independent vectors $u_1$, $u_2$, and $u_3$. Then $\bigl\{T(u_1),T(u_2),T(u_3)\bigr\}$ is linearly dependent (since it is equal to $\{0\}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$1+\frac{1}{2}+...+\frac{1}{x_n}\geq n$. Determine $\lim\limits_{n\to\infty} \frac{x_{n+1}}{x_n}$ For every $n \in \mathbb{N}$ let $x_n$ be the smallest natural number such that:
$$1+\frac{1}{2}+...+\frac{1}{x_n}\geq n$$
Determine $\lim\limits_{n\to\infty} \frac{x_{n+1}}{x_n}$
It's the harmonic series, but I can't figure out when the harmonic sum first exceeds a given number. I've written a script that finds the first 5 numbers of the sequence, but I can't find a formula relating them.
$$x_1=1\\x_2=4\\x_3=11\\x_4=31\\x_5=83$$
I believe it has something to do with an exponential (judging from its growth), but I am not sure.
| Here's a slightly different approach to the first answer that provides a more precise asymptotic:
I'll write $H_n = 1 + 1/2+ \ldots + 1/n$. It is known that
$$H_n = \log n + \gamma + o(1)$$
where $\gamma$ is the Euler-Mascheroni constant. There therefore exist two sequences $\delta_n, \epsilon_n =o(1)$ such that
$$\log n + \gamma + \delta_n \le H_n \le \log n + \gamma + \epsilon_n.$$
Since $H_{x_n} \ge n$,
$$\log x_n + \gamma + \epsilon_{x_n} \ge n$$
which rearranges to $x_n \ge e^{-\gamma} e^{n +o(1)}$. Further, by the minimality of $x_n$ we necessarily have that
$$\log x_n + \gamma + \delta_{x_n} < n + 1/x_n,$$
since, if this does not hold, $H_{x_n} \ge n+1/x_n$ and $H_{x_n -1} \ge n + 1/x_n - 1/x_n =n$ contradicting the definition of $x_n$ (since $x_n -1$ would be smaller than $x_n$ and still satisfy your inequality). Hence, $x_n < e^{-\gamma} e^{n+1/x_n +o(1)}$.
Combining our inequalities gives that
$$ \frac{x_n}{e^n} \sim e^{-\gamma}.$$
Your question follows easily from this asymptotic expression.
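As a supplementary check (my addition): computing $x_n$ with exact rational arithmetic (floating point can misjudge the threshold) reproduces the sequence and illustrates both $x_n \sim e^{-\gamma}e^{n}$ and $x_{n+1}/x_n \to e$:

```python
from fractions import Fraction
import math

def x(n):
    # Smallest k with H_k >= n, using exact rationals to avoid float drift
    H, k = Fraction(0), 0
    while H < n:
        k += 1
        H += Fraction(1, k)
    return k

xs = [x(n) for n in range(1, 6)]
print(xs)                         # [1, 4, 11, 31, 83]

gamma = 0.5772156649015329        # Euler-Mascheroni constant
print(x(5) / math.exp(5 - gamma)) # close to 1, consistent with x_n ~ e^(n - gamma)
print(x(5) / x(4))                # ~2.68, approaching e = 2.718...
```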
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3495970",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the integral by first principle In my math course homework, I encountered this problem:
Find $$\frac{d}{dx}\int^{x^2}_0\frac{dt}{1+e^{t^2}}$$ by first principle
$$\begin{split}\frac{d}{dx}\int^{x^2}_0\frac{dt}{1+e^{t^2}}&=\lim_{h\rightarrow0}\frac{\int^{(x+h)^2}_0\frac{dt}{1+e^{t^2}}-\int^{x^2}_0\frac{dt}{1+e^{t^2}}}{h}\\&=\lim_{h\rightarrow0}\frac{\int^{(x+h)^2}_{x^2}\frac{dt}{1+e^{t^2}}}{h}\end{split}$$
Then I don't know how to proceed. Please help me!
| It might be easier to look a little more abstractly and then apply to your specific case.
Suppose $f$ is continuous and $g$ is differentiable.
Let $\phi(x) = \int_0^{g(x)} f(t) dt$. The fundamental theorem/Leibniz rule tells us that
$\phi'(x) = f(g(x)) g'(x)$.
So we want to show that $\lim_{h \to 0} |{ \phi(x+h)-\phi(x) \over h} -f(g(x))g'(x)| = 0$.
Pick $\epsilon>0$ and $\delta_1 >0$ such that if $|t-g(x)| < \delta_1$ then $|f(t)-f(g(x))| < \epsilon$.
Also, choose $\delta_2 \le \delta_1$ such that if $|h|< \delta_2$ then
$|{g(x+h)-g(x) \over h} - g'(x)| < \epsilon$.
\begin{eqnarray}
|{\phi(x+h)-\phi(x) \over h } - f(g(x))g'(x) | &=& |{1 \over h}\int_{g(x)}^{g(x+h)} f(t) dt - f(g(x))g'(x)| \\&=&
|{1 \over h}\int_{g(x)}^{g(x+h)} \big(f(g(x))+f(t)-f(g(x))\big) dt - f(g(x))g'(x)| \\
&\le& |{1 \over h}\int_{g(x)}^{g(x+h)} f(g(x)) dt - f(g(x))g'(x)| +|{1 \over h}\int_{g(x)}^{g(x+h)} \epsilon dt|\\
&=& |f(g(x))| | {(g(x+h)-g(x)) \over h} - g'(x) | + \epsilon |{g(x+h)-g(x) \over h} |\\
&\le& \epsilon |f(g(x))|+ \epsilon\left(|g'(x)|+\epsilon\right)
\end{eqnarray}
It follows that the limit is zero.
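Applying the abstract formula $\phi'(x)=f(g(x))g'(x)$ with $f(t)=\frac{1}{1+e^{t^2}}$ and $g(x)=x^2$ gives $\frac{2x}{1+e^{x^4}}$; a numerical cross-check (my addition, with a midpoint-rule integral and a difference quotient) agrees:

```python
import math

def inner(x, steps=20_000):
    # Midpoint-rule value of  ∫_0^{x^2} dt / (1 + e^{t^2})
    upper = x * x
    h = upper / steps
    return sum(h / (1 + math.exp(((k + 0.5) * h)**2)) for k in range(steps))

x, h = 0.8, 1e-4
difference_quotient = (inner(x + h) - inner(x - h)) / (2 * h)
leibniz = 2 * x / (1 + math.exp(x**4))          # 2x / (1 + e^{x^4})
print(abs(difference_quotient - leibniz) < 1e-6)   # True
```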
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Degree $1$ birational map of $\mathbb{P}_k^n$ I wonder why the degree $1$ birational maps of $\mathbb{P}_k^n$ are the automorphisms of $\mathbb{P}_k^n$? In particular, why are they defined everywhere on $\mathbb{P}_k^n$?
I know each degree $1$ birational map of $\mathbb{P}_k^n$ is of the form $\phi:=[f_0:...:f_n]$, where each $f_i$ is a homogeneous linear polynomial. I know $\phi$ is defined everywhere iff the associated matrix (of coefficients of the $f_i$) corresponds to a matrix in $\text{PGL}_{n+1}(k)$, i.e. $\text{Aut}(\mathbb{P}_k^n)=\text{PGL}_{n+1}(k)$. But how to see that the set of degree $1$ birational maps of $\mathbb{P}_k^n$ is $\text{Aut}(\mathbb{P}_k^n)$?
Also why does an automorphism of $\mathbb{P}^n$ have to be of degree $1$?
| The key to this is the matrix of linear forms. If the matrix is not full rank, then the image of the map associated to this matrix of linear forms is supported inside some proper closed linear subvariety, which implies it is not birational. So birationality is equivalent to the matrix being full rank, which is equivalent to the map being defined everywhere: the points where the map associated to the matrix isn't defined are exactly the projectivization of the kernel of this matrix.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving a system of equations with proof: I can't figure out what exactly the question is asking me.
Question
There is a system such as follows:
$\dot{x}=-2x$, $\dot{y}=y$, given that $0<\eta<1$ and $(x_{\eta}(t),y_{\eta}(t))^T$ is the solution of the system with initial conditions $x(0)=1$ and $y(0)=\eta$.
Prove that there exists a $\tau=\tau(\eta)$ such that $0<x_{\eta}\leq 1$ and $y_{\eta}(\tau)=1$. Find $\tau(\eta)$ and show that $x(\tau(\eta))={\eta}^2$.
Attempt at solving:
I have found the general solution to be $x= c_1 (1,0)e^{-2t}+c_2(0,1)e^{t}$ by solving the system in matrix form and finding the eigenvalues and eigenvectors. I also tried solving it by turning it into a separable equation through $\frac{\dot{y}}{\dot{x}}=\frac{dy}{dx}=-\frac{y}{2x}$ and finding the solution $y=\frac{C}{\sqrt{x}}$.
However I can't connect any of these to what the question is asking and I don't know what else to try. This is a sample exam question and I can't find another example of it. Any help or tips would be much appreciated.
| Yes, the general solution is $x(t)=c_1e^{-2t},y(t)=c_2e^t$ where $c_1,c_2$ are constants.
You now apply the initial conditions to get the particular solution $x_0(t),y_0(t)$.
So we have $x_0(0)=1$ and hence $c_1=1$. Similarly, $y_0(0)=\eta$, so $c_2=\eta$.
We want to find $\tau$ so that $y_0(\tau)=1$. That implies that $\eta e^\tau=1$. So $e^\tau=1/\eta$ and hence $\tau=\ln(1/\eta)$. Note that $0<x_0(t)\le1$ for all $t>0$, so it is certainly true for $t=\tau$.
Finally, we want $x_0(\tau)$. It is $e^{-2\tau}=1/(e^\tau)^2=\eta^2$.
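A quick numerical check of this solution (my addition; any $\eta \in (0,1)$ works):

```python
import math

eta = 0.37                         # any value with 0 < eta < 1
x = lambda t: math.exp(-2 * t)     # solution with x(0) = 1
y = lambda t: eta * math.exp(t)    # solution with y(0) = eta

tau = math.log(1 / eta)
print(abs(y(tau) - 1.0) < 1e-12)       # True: y hits 1 at time tau
print(abs(x(tau) - eta**2) < 1e-12)    # True: x(tau) = eta^2
print(0 < x(tau) <= 1)                 # True
```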
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the supremum of $A=\{\cos{n},n\in\Bbb{N}\}$ I would like to find the supremum of the following set:
$$A=\{\cos{n},n\in\Bbb{N}\}$$
Is it $1$ or something smaller?
One way to show that the supremum is $1$ (if it were) is to prove that for any $\epsilon>0$ there are positive integers $p,q$ such that $\vert2\pi q-p\vert<\epsilon$, but I have not managed to do that...
Any suggestions-solutions, to this problem? Thanks in advance.
| Well the answer would be $1$ if it were true that for any $\epsilon > 0$ we can find $n_\epsilon$ so that $|\cos n_\epsilon -1| < \epsilon$.
Now each natural $n$ is congruent $\pmod {2\pi}$ to some value $n': -\pi \le n' < \pi$, or in other words $n = n' + 2k\pi$ for some integer $k$. And we want to show that $n'$ can, for suitable values of $n$, be arbitrarily close to $0$. ($n'$ can't actually ever equal $0$, because that would mean $n = 2k\pi$, and $\pi$ is irrational, so that can't happen [unless $n$ and $k$ both equal $0$].)
Going the other way: for any small $d > 0$, can we choose $d$ small enough that $\cos (\pm d) =\cos d > 1- \epsilon$? The answer to that has to be yes. $\cos$ is a continuous function, so for any $\epsilon > 0$ there is a $\delta$ so that if $|x-0| <\delta$ (or in other words $-\delta < x < \delta$) then $|\cos x - \cos 0|=|\cos x - 1| < \epsilon$ (or in other words $1-\epsilon < \cos x$).
So let $\delta$ be such that $-\delta < x < \delta \implies 1-\epsilon < \cos x$.
Now we need to find an $n$ so that $|2k\pi - n| < \delta$ for some integer $k$.
Well, for that to happen we need $n-\delta < 2k\pi < n+ \delta$, or $\frac n{2k}-\frac {\delta}{2k} < \pi < \frac n{2k}+\frac {\delta}{2k}$, which is clearly possible by the Archimedean principle.
So, yep. Supremum is $1$.
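A quick numeric scan (my addition) illustrates the conclusion: values of $\cos n$ get extremely close to $1$ (e.g. $710 \approx 226\pi$ already does very well):

```python
import math

best_n, best = None, -1.0
for n in range(1, 1_000_000):
    c = math.cos(n)
    if c > best:
        best_n, best = n, c

print(best_n, best)       # the record-setter is within 1e-8 of 1
print(best > 1 - 1e-8)    # True
```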
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional probability: The second ball is black. Find the probability that the first ball was white. Two black balls and three white balls are put in a bag. First, we pull one ball, then the second. The second ball is black. Find the probability that the first ball was white.
I think it could be $\frac{3\cdot2}{3\cdot2+2\cdot1} = 0.75,$
as we have $m=3\cdot2$ (WB, what we need) and $n=3\cdot2+2\cdot1$ (WB + BB, all possible options).
| Your answer is correct
Another way of getting the same answer is to say that, given the second draw without replacement is black, there are one black ball and three white balls left for the non-second positions, and these are equally likely to be in any position, making the conditional probability that the first draw is white $\frac34$.
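Brute-force enumeration of the $5 \cdot 4 = 20$ ordered draws (my addition) confirms the answer:

```python
from fractions import Fraction

balls = ["B", "B", "W", "W", "W"]
# All ordered draws of two distinct balls (position i first, then j)
pairs = [(balls[i], balls[j]) for i in range(5) for j in range(5) if i != j]

second_black = [p for p in pairs if p[1] == "B"]
first_white = [p for p in second_black if p[0] == "W"]
print(Fraction(len(first_white), len(second_black)))   # 3/4
```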
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluating $\int_{-e}^e(x+x^3+x^5+\cdots+x^{999}) dx$
Integrate
$$\int_{-e}^e(x+x^3+x^5+\cdots+x^{999}) dx$$
I converted the integrand into a geometric sum, but I cannot proceed from there.
The geometric sum is
$$\frac{x(x^{1000}-1)}{x^2-1}$$
| Your original function is an "odd function" (https://en.wikipedia.org/wiki/Even_and_odd_functions#Odd_functions), meaning $f(-x) = -f(x)$. When odd functions are integrated over intervals symmetric about the origin, like $[-1, 1]$ or in your case $[-e,e]$, the two halves cancel and the integral is zero.
Simple example: $\int_{-a}^{a} x \ dx = 0$, and you can generalize it to $\int_{-a}^{a} x^{odd} \ dx = 0$. Summing them of course still leaves you with zero.
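Since $e$ is irrational we can't evaluate the endpoint exactly in code, but here is a sketch (my addition) verifying the symmetry argument at a rational stand-in endpoint $a=3$, using exact arithmetic and the antiderivative $x^{k+1}/(k+1)$:

```python
from fractions import Fraction

def integrate_powers(powers, a):
    # Exact value of ∫_{-a}^{a} sum_k x^k dx via the antiderivative x^(k+1)/(k+1)
    F = lambda x: sum(Fraction(x)**(k + 1) / (k + 1) for k in powers)
    return F(a) - F(-a)

odd_powers = range(1, 1000, 2)            # x + x^3 + ... + x^999
print(integrate_powers(odd_powers, 3))    # 0

print(integrate_powers([2], 3))           # 18: even powers do NOT cancel
```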
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality about the sum of square root of 1 over x This question originates from an adversarial bandit problem. In the proof of self-bounding property of regret, there is a step showing
$$C\sum_{t=1}^T\sum_{i\neq i^*}\sqrt{\frac{x_{i,t}}{t}}\leq 2C\sqrt{KT}$$
where $\sum_{i\neq i^*}$ means summing over all $K$ arms except the optimal arm $i^*$, and $x_{i,t}$ is a probability distribution over $K$ arms at time $t$ with $\sum_{i=1}^K x_{i,t}=1,\forall t$.
Now the hint in the notes says
this follows trivially by Jensen on the $i$ sum
I don't really get how to use Jensen's inequality to prove it, but what I got so far is to use Cauchy–Schwarz inequality to prove
$$\sum_{i\neq i^*}\sqrt{x_{i,t}}\leq \sqrt{K},\forall t$$
which leaves me with
$$\sum_{t=1}^T \sqrt{\frac{1}{t}}\leq 2\sqrt{T}$$
to prove. The way I got around it is to let $f(T) = 2\sqrt{T}-\sum_{t=1}^T \sqrt{\frac{1}{t}}$, first show that $f(1)=1\geq0$, and then show that $f'(T)\geq 0$ for $T>0$ (with help from Wolfram Alpha), which verifies that $f(T)$ is monotonically increasing for positive $T$. However, this approach involves dealing with the Hurwitz zeta function, which is a bit complicated.
I wonder if there is some handy tricks I can use to prove the inequality about $f(T)$, or if there is some more direct way to use Jensen's inequality to prove the overall inequality, as the hint suggests.
| The best way I now know to see the inequality in $t$ is to bound the sum from above by an integral:
$$\sum_{t=1}^T\sqrt{\frac{1}{t}}\leq \int_0^T\sqrt{\frac{1}{t}} dt= 2\sqrt{T}$$
The upper bound holds because $\sqrt{\frac{1}{t}}$ is monotonically decreasing, so $\sqrt{1/t}\leq\int_{t-1}^{t}\sqrt{1/s}\,ds$ for each $t$.
A lesson I took from this: just as we can approximate an integral by a (Riemann) sum, we can also approximate or bound a sum by an integral. This trick can be quite useful given conditions like monotonicity.
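As a quick numerical sanity check of this bound (a sketch only, not part of the proof; the choice of $T$ values is arbitrary), one can compare partial sums against $2\sqrt{T}$:

```python
import math

# Check sum_{t=1}^{T} 1/sqrt(t) <= 2*sqrt(T) for several values of T.
def partial_sum(T):
    return sum(1.0 / math.sqrt(t) for t in range(1, T + 1))

for T in (1, 10, 100, 10000):
    s, bound = partial_sum(T), 2.0 * math.sqrt(T)
    assert s <= bound
    print(T, round(s, 4), round(bound, 4))
```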
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3496893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Determine if the graph is Hamiltonian graph I am supposed to determine if the following graph is a Hamiltonian graph:
$G=(V,E)$ where $V=\{1,2,...,13\}$ and the set of edges is defined as follow: $E=\{(i,j)\in V\times V|11\leq i + j \leq 15\}$.
My Attempt:
I drew it and found out that it does not contain a bridge or an articulation point. We learnt the Dirac and Ore theorems, but neither is applicable. So I have to find a Hamiltonian cycle, but I do not know how, because I think this graph is too complicated for that.
Can anyone help me?
| The graph is Hamiltonian, here are two Hamilton cycles:
$(7,8,5,10,3,12,1,13,2,11,4,9,6,7)$
$(7,8,5,10,3,12,2,13,1,11,4,9,6,7)$
I constructed these cycles starting from the vertex $13$, whose only neighbors are the vertices $1$ and $2$.
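If you want to double-check, a small script (illustrative only) can verify both cycles against the edge rule $11\leq i+j\leq 15$:

```python
def is_hamilton_cycle(cycle, n=13):
    # cycle lists n+1 vertices with cycle[0] == cycle[-1]
    if cycle[0] != cycle[-1] or sorted(cycle[:-1]) != list(range(1, n + 1)):
        return False
    # edge (i, j) exists iff 11 <= i + j <= 15
    return all(11 <= a + b <= 15 for a, b in zip(cycle, cycle[1:]))

c1 = [7, 8, 5, 10, 3, 12, 1, 13, 2, 11, 4, 9, 6, 7]
c2 = [7, 8, 5, 10, 3, 12, 2, 13, 1, 11, 4, 9, 6, 7]
assert is_hamilton_cycle(c1) and is_hamilton_cycle(c2)
```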
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that for all acute triangles $\triangle ABC$, $r_a + r_b + r_c \ge m_a + m_b + m_c$.
Let $r_b$ and $m_b$ respectively be the exradius of the excircle opposite $B$ and the length of the median drawn from $B$ to the midpoint of side $CA$ of an acute triangle $\triangle ABC$. Prove that $$\large r_a + r_b + r_c \ge m_a + m_b + m_c$$
We have that $$[ABC] = \sqrt{\frac{r_a + r_b + r_c}{2} \cdot \prod_{cyc}\frac{r_a - r_b + r_c}{2}} = \frac{4}{3}\sqrt{\frac{m_a + m_b + m_c}{2} \cdot \prod_{cyc}\frac{m_a - m_b + m_c}{2}}$$
Let $r_a - r_b + r_c = r_b'$, $m_a - m_b + m_c = m_b'$ and so on, we have that $$\sum_{cyc}r_b' \cdot \prod_{cyc}r_b' \ge \frac{16}{9} \cdot \sum_{cyc}m_b' \cdot \prod_{cyc}m_b'$$
In order to prove that $r_a + r_b + r_c \ge m_a + m_b + m_c$, which could be rewritten as $$r_a' + r_b' + r_c' \ge m_a' + m_b' + m_c'$$, we need to prove that $r_a' \cdot r_b' \cdot r_c' \le \dfrac{16}{9} \cdot m_a' \cdot m_b' \cdot m_c'$.
You could do that preferentially... or let $p - a = a'$, $p - b = b'$, $p - c = c'$, we need to prove that $$\sqrt{(a' + b' + c') \cdot (a'b'c')} \cdot \left(\frac{1}{a'} + \frac{1}{b'} + \frac{1}{c'}\right)\ge \sum_{cyc}\sqrt{b'(a' + b' + c') + \frac{(c' - a')^2}{4}}$$
$\left(p = \dfrac{a + b + c}{2}\right)$ in which I don't know what to do next.
| Your inequality is true for any triangle!
Let $a=y+z$, $b=x+z$ and $c=x+y$.
Thus, $x$, $y$ and $z$ are positives and in the standard notation we need to prove that:
$$\sum_{cyc}\frac{2S}{b+c-a}\geq\frac{1}{2}\sum_{cyc}\sqrt{2b^2+2c^2-a^2}$$ or
$$\sum_{cyc}\frac{2\sqrt{xyz(x+y+z)}}{2x}\geq\frac{1}{2}\sum_{cyc}\sqrt{4x(x+y+z)+(y-z)^2}$$ or
$$\frac{2(xy+xz+yz)\sqrt{x+y+z}}{\sqrt{xyz}}\geq\sum_{cyc}\sqrt{4x(x+y+z)+(y-z)^2}$$ or
$$\frac{4(xy+xz+yz)^2(x+y+z)}{xyz}\geq\sum_{cyc}(4x(x+y+z)+(y-z)^2)+$$
$$+2\sum_{cyc}\sqrt{(4x(x+y+z)+(y-z)^2)(4y(x+y+z)+(x-z)^2)}$$ or
$$\frac{4(xy+xz+yz)^2(x+y+z)}{xyz}-3\sum_{cyc}(4x(x+y+z)+(y-z)^2)+$$
$$+\sum_{cyc}\left(\sqrt{4x(x+y+z)+(y-z)^2}-\sqrt{4y(x+y+z)+(x-z)^2}\right)^2\geq0$$ or
$$\frac{4(xy+xz+yz)^2(x+y+z)}{xyz}-18\sum_{cyc}(x^2+xy)+$$
$$+\sum_{cyc}\frac{(x-y)^2(4(x+y+z)-(x+y-2z))^2}{\left(\sqrt{4x(x+y+z)+(y-z)^2}+\sqrt{4y(x+y+z)+(x-z)^2}\right)^2}\geq0,$$ for which it's enough to prove that:
$$\frac{4(xy+xz+yz)^2(x+y+z)}{xyz}-18\sum_{cyc}(x^2+xy)+$$
$$+\sum_{cyc}\frac{9(x-y)^2(x+y+2z)^2}{\frac{1}{2}\left(4x(x+y+z)+(y-z)^2+4y(x+y+z)+(x-z)^2\right)}\geq0$$ or
$$\frac{2(xy+xz+yz)^2(x+y+z)}{xyz}-9\sum_{cyc}(x^2+xy)+$$
$$+\sum_{cyc}\frac{9(x-y)^2(x+y+2z)^2}{4(x+y)(x+y+z)+(y-z)^2+(x-z)^2}\geq0$$ or
$$\sum_{cyc}(2x^3y^2+2x^3z^2-5x^3yz+x^2y^2z)+\sum_{cyc}\tfrac{9xyz(x-y)^2(x+y+2z)^2}{4(x+y)(x+y+z)+(y-z)^2+(x-z)^2}\geq0$$ or
$$\sum_{cyc}(2x^3y^2+2x^3z^2-4x^3yz-x^3yz+x^2y^2z)+\sum_{cyc}\tfrac{9xyz(x-y)^2(x+y+2z)^2}{4(x+y)(x+y+z)+(y-z)^2+(x-z)^2}\geq0$$ or
$$\sum_{cyc}(x-y)^2\left(2z^3-\frac{1}{2}xyz\right)+\sum_{cyc}\tfrac{9xyz(x-y)^2(x+y+2z)^2}{4(x+y)(x+y+z)+(y-z)^2+(x-z)^2}\geq0$$ or
$$\sum_{cyc}(x-y)^2\left(4z^3-xyz+\tfrac{18xyz(x+y+2z)^2}{4(x+y)(x+y+z)+(y-z)^2+(x-z)^2}\right)\geq0,$$
for which it's enough to prove that:
$$18(x+y+2z)^2\geq4(x+y)(x+y+z)+(y-z)^2+(x-z)^2$$ or
$$13x^2+28xy+13y^2+70(x+y+z)z\geq0$$ and we are done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Example of weak convergence I'd like to understand better the concept of weak convergence.
I know that a sequence of probability measures $\mu_n$ converges weakly to $\mu$ if $\int{f d\mu_n}$ converges to $\int{f d\mu}$ for each $f$ which is continuous and bounded.
Could you please give me an example of a sequence of probability measures $\mu_n$ that converges weakly to $\mu$ and find a function $f$ such that $\int{f d\mu_n}$ does not converge to $\int{f d\mu}$?
| Let $P(X_n=1/n)=1$, let $P(X=0)=1$. The probability distribution of $X_n$ is $\mu_n=\delta_{1/n}$, the point-mass measure concentrated at $1/n$, that of $X$ is the point mass at $0$, namely $\mu=\delta_0$.
We can check that $\mu_n$ converges to $\mu$ weakly: since $\int f d\mu_n = f(1/n)$ and $\int f d\mu=f(0)$, for continuous and bounded $f$ the desired limit holds: $\lim_{n\to\infty} f(1/n)=f(0)$.
But for $f=\chi_{\{0\}}$, say, the indicator function of the singleton set $\{0\}$, we have $\int f d\mu_n = 0$ and $\int f d\mu = 1$, so the desired limit does not hold. This $f$ is bounded but not continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Determine Sum of $\sum_{k=3}^{25}{k^2-5k+9}$ Given $\sum_{k=3}^{28}(k-3)^2 = 14,910$ and $\sum_{k=0}^{25}k = 325$ I am working through some questions from a textbook which states that I should determine the sum without expanding or calculating any sums. I have been given the following information.
$\sum_{k=3}^{28}(k-3)^2 = 14,910$ and $\sum_{k=0}^{25}k = 325$
And must calculate the sum for $\sum_{k=3}^{25}{k^2-5k+9}$
So far I've noticed the following,
${\sum}_{k=3}^{28}\left(k-3\right)^{2}={\sum}_{k=0}^{25}k^{2}$
${\sum}_{k=0}^{25}k={\sum}_{k=3}^{28}\left(k-3\right)$
I've also noticed that $\left(k-3\right)^{2}=k^{2}-6k+9$ is very close to the sum I need to find, but I am not sure how this helps.
I calculated the sum as follows, but I think the textbook would not have wanted me to calculate it in this way,
$\sum_{k=3}^{25}{k^2-5k+9}={\sum}_{k=3}^{25}k^{2}-5{\sum}_{k=3}^{25}k+{\sum}_{k=3}^{25}9=14,910-5\times\left(325-3\right)+22\times9=13,493$
I believe my answer is wrong however I'm not sure how to proceed. I would appreciate it if someone could provide some sort of hint to guide me in the right direction.
| We can manually subtract terms from each of the two sums to get the desired range:
$$\sum_{k=3}^{25}(k-3)^2=14910-25^2-24^2-23^2=13180$$
$$\sum_{k=3}^{25}k=325-0-1-2=322$$
Since $k^2-5k+9=(k-3)^2+k$, the final answer is $13180+322=13502$.
In fact, the first given value is itself wrong (the true value is $\sum_{k=3}^{28}(k-3)^2 = 5525$, not $14{,}910$); the actual correct answer is $4117$.
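A brute-force computation (against the spirit of the exercise, but handy for checking) confirms both the identity $k^2-5k+9=(k-3)^2+k$ behind this method and the value $4117$:

```python
# Brute-force check of the sums involved.
s = sum(k * k - 5 * k + 9 for k in range(3, 26))
assert s == 4117

# The identity behind the method in this answer:
assert all(k * k - 5 * k + 9 == (k - 3) ** 2 + k for k in range(3, 26))

# The first "given" value is inconsistent with its own definition:
assert sum((k - 3) ** 2 for k in range(3, 29)) == 5525   # not 14,910
assert sum(range(26)) == 325                             # this one is fine
print(s)
```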
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Union of intersections and intersection of unions Would someone be able to help me understand the (concept of) "union of intersections" and the "intersection of unions".
More specifically, how to approach (prove) the following equality of two sets $X$ and $Y$;
$$
X=\bigcup_{n=1}^{\infty}\bigcap^\infty_{j=n} A_{j} \text{ and } Y=\bigcap^\infty_{n=1}\bigcup_{j=n}^{\infty} A_{j}
$$
Your feedback is much appreciated as always.
| As pointed out in the comments, the sets are not necessarily equal. Consider the following counterexample :
For even $n$, define $A_n = \{0\}$ and for odd $n$, define $A_n = \{1\}$.
That is, $A_1 = A_3 = A_5 = \cdots = \{1\}$ and $A_2 = A_4 = A_6 = \cdots = \{0\}$.
Let us look at the set $X$ first.
Note that for any given $n \in \mathbb{N}$, the set $\displaystyle\bigcap_{j=n}^\infty A_j$ is empty. This can be easily observed if you notice that $A_n \cap A_{n+1} = \varnothing$. (Why?)
Thus, $X = \displaystyle\bigcup_{n=1}^\infty\varnothing = \varnothing$.
Now, let us look at $Y$.
Note that for any given $n \in \mathbb{N}$, the set $\displaystyle\bigcup_{j=n}^\infty A_j$ equals $\{0, 1\}$.
This is also easy to prove. Note that $A_j \subset \{0, 1\}$ for each $j \ge n$ and hence, $\displaystyle\bigcup_{j=n}^\infty A_j \subset \{0, 1\}$.
To see that the equality follows, note that $A_n \cup A_{n+1} = \{0, 1\}$. (Why?)
Thus, we get that $X \neq Y$.
The next natural question to ask would be whether $X \subset Y$ is true in general?
To see this, one could first try to observe what sort of elements are there in $X$.
Observe that there is a big union outside. Thus, if $x \in X$, then $x \in \displaystyle\bigcap_{j=n_0}^\infty A_j$ for some $n_0 \in \mathbb{N}$.
This then gives us that $x \in A_j$ for every $j \ge n_0$.
Thus, in a more intuitive phrasing: $x$ must belong to every set after a point.
Let us now see if this is enough for $x$ to belong to $Y$ as well.
Note that there is a big intersection on the outside in the definition of $Y$. Thus, for $x$ to belong to $Y$, $x$ must belong to $\displaystyle\bigcup_{j=n}^\infty A_j$ for every $n \in \mathbb{N}$. Does our given $x$ have this property?
Well, yes, it does! Given any $n \in \mathbb{N}$, we know that $x$ belongs to $A_{\max\{n, n_0\}}$. (How?)
Thus, $x$ will belong to $\displaystyle\bigcup_{j=n}^\infty A_j$, no matter the $n$. (How?)
In turn, this gives us that $x \in \displaystyle\bigcap_{n=1}^\infty\bigcup_{j=n}^\infty A_j = Y.$
So, what we have shown is that if $x \in X$, then $x \in Y$. This is precisely what it means for $X \subset Y$ and thus, we are done.
Some more intuition: If you try to characterise the elements that belong to $Y$, you'll observe that $y \in Y$ iff $y$ belongs to infinitely many $A_i$s.
However, as we saw, the condition that an element belongs to $X$ was stricter, it required that $x$ must belong to all $A_i$s after a point. That is, $x$ must belong to all but finitely many $A_i$s.
This goes well with the sort of counterexample that we saw, to begin with.
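If it helps to see the counterexample concretely, here is a small Python sketch. Since the family is $2$-periodic, every infinite tail intersection (or union) equals the one taken over a single full period, so finitely many values of $n$ suffice:

```python
# A_n = {0} for even n, {1} for odd n (n >= 1).
def A(n):
    return {0} if n % 2 == 0 else {1}

# Because the family is 2-periodic, each infinite tail reduces to one period:
#   ∩_{j>=n} A_j = A(n) ∩ A(n+1),   ∪_{j>=n} A_j = A(n) ∪ A(n+1).
X = set().union(*[A(n) & A(n + 1) for n in range(1, 11)])
Y = set.intersection(*[A(n) | A(n + 1) for n in range(1, 11)])

assert X == set()      # union of empty intersections: X is empty
assert Y == {0, 1}     # intersection of full unions: Y is everything
assert X <= Y          # the general inclusion X ⊆ Y
```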
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Convergence of $\int_{0}^{\pi^{2}}\frac{1}{1-\cos(\sqrt{x})}dx$ I need to compare this integral $\int_{0}^{\pi^{2}}\frac{1}{1-\cos(\sqrt{x})}dx$ to another one in order to ensure its convergence, but I can't find the proper one, because $\frac{1}{1-\sqrt{x}}$ is not continuous.
| @SL_MathGuy already showed that the definite integral diverges.
Using the tangent half-angle substitution and one integration by parts, the antiderivative is given by
$$J(x)=\int\frac{dx}{1-\cos \left(\sqrt{x}\right)}=4 \log \left(\sin \left(\frac{\sqrt{x}}{2}\right)\right)-2 \sqrt{x} \cot \left(\frac{\sqrt{x}}{2}\right)$$ and, since $J(\pi^2)=0$
$$\int_\epsilon^{\pi ^2}\frac{dx}{1-\cos \left(\sqrt{x}\right)}=2 \sqrt{\epsilon } \cot \left(\frac{\sqrt{\epsilon }}{2}\right)-4 \log \left(\sin
\left(\frac{\sqrt{\epsilon }}{2}\right)\right)$$ which, expanded a a series gives
$$-2 \log (\epsilon )+4(1+ \log (2))-\frac{\epsilon }{6}-\frac{\epsilon
^2}{240}+O\left(\epsilon ^{3}\right)$$
Computed for $\epsilon=\frac 1 {100}$, the exact result would be
$$\frac{1}{5} \cot \left(\frac{1}{20}\right)-4 \log \left(\sin
\left(\frac{1}{20}\right)\right)\approx 15.98126201077$$ while the above truncated expansion would give
$$\frac{9595999}{2400000}+\log (160000)\approx 15.98126201088$$
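These numerical claims are easy to confirm in double precision (a sanity check of the antiderivative and of the truncated expansion at $\epsilon = 1/100$):

```python
import math

eps = 0.01
# J(pi^2) - J(eps), i.e. the antiderivative evaluated between eps and pi^2:
exact = 2 * math.sqrt(eps) / math.tan(math.sqrt(eps) / 2) \
        - 4 * math.log(math.sin(math.sqrt(eps) / 2))
# The truncated series expansion in eps:
series = -2 * math.log(eps) + 4 * (1 + math.log(2)) - eps / 6 - eps ** 2 / 240

assert abs(exact - 15.98126201077) < 1e-8
assert abs(series - exact) < 1e-8
print(exact, series)
```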
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
How to solve $|1+1/x| > 2$ I am having a problem with this question:
Find all real $x$ that satisfies $|1 + 1/x| > 2$.
This is clearly not defined in $x = 0$.
By my logic, it should be solved with:
$1+1/x > 2$ or $1+1/x < -2$
But the result I am getting from this is wrong. ($x<1$)
Correct result is
$-1/3 < x < 1$
How to solve a problem like this? Why is this logic not working here:
$|x| > 2$
$x > 2$ or $x < -2$
| Alternatively, $x\ne 0$ and:
$$|x+1|>2|x| \iff (x+1)^2>(2x)^2 \iff (x-1)(3x+1)<0 \iff -1/3<x<1$$
Hence:
$$x\in (-1/3,0)\cup (0,1)$$
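A brute-force numerical check over sample points (illustrative only; the step size and range are arbitrary) agrees with this solution set:

```python
# Compare |1 + 1/x| > 2 against membership in (-1/3, 0) ∪ (0, 1).
def satisfies(x):
    return abs(1 + 1 / x) > 2

def in_solution_set(x):
    return -1 / 3 < x < 0 or 0 < x < 1

xs = [k / 1000 for k in range(-3000, 3001) if k != 0]
assert all(satisfies(x) == in_solution_set(x) for x in xs)
```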
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3497935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 9,
"answer_id": 7
} |
How to prove that if $G$ is a group with $|G|= p^k*q$ with $p,q$ distinct primes, that $G$ has a subgroup of order $p^k$ I have to prove by induction on $k$ that if $G$ is a group with $|G|= p^k q$, where $p,q$ are distinct primes, then $G$ has a subgroup of order $p^k$. I already know that if $p$ is not a divisor of the order of the center $Z(G)$, then there is a subgroup of order $p^k$. I also know that if $G$ is abelian, then $G$ has an element of order $p$ and one of order $q$. I haven't got any further than this.
| For the induction step: suppose $G$ has order $p^k q$. If $p$ does not divide the order of the center of $G$, then you're done. Else, by your second lemma, $Z(G)$ has an element of order $p$. Call the cyclic subgroup generated by this element $P$. Then $P$ is a normal (in fact, central) subgroup of $G$. The quotient $G/P$ is a group of order $p^{k-1} q$, so by induction it contains a subgroup of order $p^{k-1}$. Apply the correspondence theorem and you're done.
Make sure you understand that you can take $k=1$ in the above argument, so the base step can be chosen to be the trivial $k=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Pointwise convergence of probability densities imply weak convergence of probability measures Assume $P_n, n\in\mathbb{N}$ and $P$ are absolutely continuous probability measures with respect to a sigma finite measure $\mu$ on $(\mathbb{R},\mathcal{B})$. Let $f_n, n\in \mathbb{N}$ and $f$ be the densities of above measures respectively.
I want to prove that if $f_n$ converges pointwise to $f$, then $P_n$ converges weakly to $P$. Here is what I have tried so far. I've read that even strong (pointwise) convergence follows, so I tried to prove that and the weak convergence follows:
Let $A \in \mathcal{B}$; since $f_n$ is the density of $P_n$, the following equalities hold
$$P_n(A) = \int\limits_{A} f_n d\mu = \int 1_{A} f_n d\mu.$$
Now since $f_n$ converges pointwise to $f$, also $1_{A} f_n$ converges pointwise to $1_{A} f$. Here is where I'm stuck, because I want to use the dominated convergence theorem. Hence, I have to find an integrable dominating function for $1_{A} f_n$. If the dominated convergence theorem applies, I get
$$\lim\limits_{n\rightarrow\infty}P_n(A)=\int \lim\limits_{n\rightarrow\infty} 1_{A} f_n d\mu = \int 1_{A} f d\mu=\int\limits_{A} f d\mu = P(A)$$
which completes the proof.
Can someone help me to find a bound for $f_n$? Is this approach correct or am I missing something?
| You can apply the lemma of Scheffé.
Observe that: $$\int\left(f-f_{n}\right)d\mu=\int fd\mu-\int f_{n}d\mu=1-1=0$$
implying that: $$\int\left(f-f_{n}\right)^{+}d\mu=\int\left(f-f_{n}\right)^{-}d\mu$$
and consequently: $$\int\left|f-f_{n}\right|d\mu=2\int\left(f-f_{n}\right)^{+}d\mu$$
Then for every measurable set $A$ we have:$$\left|P\left(A\right)-P_n\left(A\right)\right|=\left|\int_{A}\left(f-f_{n}\right)d\mu\right|\leq\int\left|f-f_{n}\right|d\mu=2\int\left(f-f_{n}\right)^{+}d\mu$$
Applying the dominated convergence theorem (justified since $0\leq\left(f-f_{n}\right)^{+}\leq f$ and $f$ is integrable) we find: $$\lim_{n\to\infty}\int\left(f-f_{n}\right)^{+}d\mu=0$$
and conclude that: $$\lim_{n\to\infty}P_n\left(A\right)=P\left(A\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$ABCD$ is a trapezium, $MN=\dfrac{AB-CD}{2}$ where $M,N$ are the midpoints of $AB,CD$, respectively; show $\angle BAD+\angle ABC=90^\circ$
$ABCD$ is a trapezium in which $AB$ is parallel to $CD$. Let $M$ be the midpoint of $AB$ and $N$ the midpoint of $CD$. If $MN=\dfrac{AB-CD}{2}$, I should show that $\angle BAD+\angle ABC=90^\circ$.
Let $KP$ be the midsegment of $ABCD: K\in AD$ and $P\in BC$. We know $KP=\dfrac{AB+CD}{2}$. If $MN$ intersects $KP$ at $X$, $X$ is the midpoint of $MN$ and $KP$. How can I continue? Does this help?
Edit: I have another idea. Let $NN_2||AD$ and $NN_1||BC$. Can we show $\angle N_1NN_2=90^\circ$?
| Extend $AD$ and $BC$ to meet at $P$. Write $x=AM=\tfrac{AB}{2}$, $y=DN=\tfrac{CD}{2}$, $n=PN$ and $m=MN$ (these labels refer to a figure omitted here). We are given that $m=x-y$. Since $CD\parallel AB$, $\triangle PDN\sim\triangle PAM$ and thus
$$\frac yn=\frac x{n+m}=\frac x{n+x-y}\\nx=y(n+x-y)=yn+xy-y^2\\nx-xy=yn-y^2\\x(n-y)=y(n-y)\\x=y\quad\text{or}\quad n-y=0$$
Since generally $x\neq y$, we conclude that $n-y=0$. Thus $y=n$ and $m+n=(x-y)+y=x$.
Therefore, $2PM=AB$, so $M$ is the circumcenter of $\triangle PAB$. This is only possible in a right triangle, so $\angle P=90^\circ$ and thus $\angle A+\angle B=90^\circ$ as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Map explicitly a simply connected region with at least two boundary points to unit disk In an old Qualifying exam there is an exercise that says:
"Let $G$ be a simply connected region of $\mathbb{C}$ such that the boundary $\partial G$ has at least two points $a\neq b$. Construct explicitly, in function of $a$ and $b$, a conformal map defined on $G$ such that $f(G)$ is open and bounded in $\mathbb{C}$".
I have problems with the generality of the region $G$. I mean if $G$ is the interior of the intersection of two circles I can manage to do it (Presumably, $f(G)$ will be the open unit disk), but in a general case like this I don't know how to proceed.
Is it possible to do this in the general case? Thanks in advance.
| By the assumption that $G$ is a simply connected region of $\mathbb{C}$ whose boundary $\partial G$ has at least two points $a\ne b$, we may take a path $\Gamma$: $z=\gamma(t)$ $(0\le t<1)$ lying in the complement of $G$, with $\gamma(0)=a\, (\text{or }b)$, $\gamma(1/2)=b\, (\text{or }a)$, and $\lim_{t\to 1} \gamma(t)=\infty.$
Let $H=\mathbb{C}\setminus \Gamma$, then $H$ is simply connected and $G\subset H.$
We assume that $\gamma(0)=a,\,\gamma(1/2)=b$.
If we could construct explicitly, in function of $a$ and $b$, a conformal map $f$ defined on $H$
such that $f(H)$ is open and bounded in $\mathbb{C}$, $f$ is just desired one (since $G\subset H$, $f(G)$ should be bounded).
We consider a function $\varphi (z)=\sqrt{e^{-i\theta _0}(z-a)}$ on $H$, where $\theta_0=\arg (b-a),\, 0\le \theta _0<2\pi.$
The branch of $\varphi(z)$ is determined so that $\varphi (b)$ is real and positive. Under $\varphi$ a small circle $C_\varepsilon : z=a+\varepsilon e^{i\theta }\,( 0\le \theta <2\pi)$ is mapped to a half circle, so we may take a point $c$ such that $c\not \in \varphi(H)$.
Then $\phi(z)=\frac{1}{z-c}$ maps $\varphi (H)$ to a bounded region. Explicitly we may take $c=-\varphi (a+\varepsilon e^{i\pi})$. Therefore $f(z)=\phi(\varphi (z))$, explicit as a function of $a$ and $b$, maps $H$ to a bounded region, and hence $G$ too.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Local coordinate frame there is this problem that keeps bothering me.
Consider the following local frame of the tangent space of $\mathbb{R}^2\setminus \{0\}$:
$$ E_1 = \frac{x}{\sqrt{x^2+y^2}} \partial_x + \frac{y}{\sqrt{x^2+y^2}}\partial_y, E_2 = \frac{-y}{\sqrt{x^2+y^2}}\partial_x + \frac{x}{\sqrt{x^2+y^2}}\partial_y.$$
Geometrically, $E_1$ is the (outward-pointing) unit normal to the concentric circles and $E_2$ is the unit tangent vector along them.
One can show that these vector fields do not commute. Hence, in theory, we cannot find a smooth chart $(U,\phi=(x_1,...,x_n))$ such that locally $E_j = \partial_{x_j}$.
We can find a neighbourhood $U$ such that the following ARE smooth and invertible: $f_1(x,y) = \frac{-1}{2}\log(x^2+y^2)$ and $f_2(x,y) = -\tan^{-1}(\frac{x}{y})$. Thus, they would give us diffeomorphisms, and according to my calculations, they DO induce these vector fields, meaning that they are a chart bringing it to a local coordinate frame...
Where did I go wrong in this reasoning/calculation?
Any advice is greatly appreciated.
| I'm not quite sure why you have the negative signs, but in both cases, you're off by a factor of $r$. $-df_1 = \dfrac{dr}r$ and $-df_2=d\theta$, so the dual vector fields are $r\dfrac{\partial}{\partial r}$ and $\dfrac{\partial}{\partial\theta}$, but $E_1=\dfrac{\partial}{\partial r}$ and $E_2=\dfrac1r\dfrac{\partial}{\partial \theta}$.
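To see the factor of $r$ concretely, one can evaluate the pairings $df_i(E_j)$ exactly at a sample point: for a coordinate frame of the chart $(f_1,f_2)$ these pairings would have to be $\pm\delta_{ij}$, but here the diagonal entries come out with size $1/r$. (The gradients below follow from the formulas in the question, valid where $y\neq 0$; the sample point is an arbitrary choice.)

```python
import math

# Sample point with r = 2 (any point with y != 0 and r != 1 shows the effect).
x, y = 1.2, 1.6
r2 = x * x + y * y
r = math.sqrt(r2)

E1 = (x / r, y / r)               # radial unit vector
E2 = (-y / r, x / r)              # angular unit vector
df1 = (-x / r2, -y / r2)          # gradient of f1 = -(1/2) log(x^2 + y^2)
df2 = (-y / r2, x / r2)           # gradient of f2 = -arctan(x/y), y != 0

def pair(df, E):
    return df[0] * E[0] + df[1] * E[1]

# Off-diagonal pairings vanish, but the diagonal ones have size 1/r, not 1:
assert abs(pair(df1, E2)) < 1e-12 and abs(pair(df2, E1)) < 1e-12
assert abs(abs(pair(df1, E1)) - 1 / r) < 1e-12
assert abs(abs(pair(df2, E2)) - 1 / r) < 1e-12
assert abs(r - 2.0) < 1e-12       # so 1/r = 1/2, visibly not 1
```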
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$X^p + Y^p \equiv Z^p \pmod q$ Is it possible that, although Fermat's Last Theorem implies $x^p + y^p = z^p$ has no solution in positive integers $x,y,z$ for $p\geq3$, for some prime $q>p$ we have $x^p + y^p \equiv z^p \pmod q$? Like maybe $x^p \equiv 2 \pmod q$, $y^p \equiv 1 \pmod q$ and $z^p \equiv 3 \pmod q$?
| A. E. Pellet, Mémoire sur la théorie algébrique des équations, Bull. Soc. Math. France 15 (1887) 61-102, showed that for every prime $p$ there is a number $q_0(p)$ such that if $q$ is prime and $q\ge q_0(p)$ then $x^p+y^p+z^p\equiv0\bmod q$ has nontrivial solutions. (Later authors found explicit values for $q_0(p)$, and slicker proofs than that of Pellet, but it does appear that Pellet was the first one to get a result).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Negligible function I'm studying complexity and have reached the notion of negligible functions. Can anyone tell me: if $f(x)$ is a negligible function and $p(x)$ is a real polynomial, is $p(x) \cdot f(x)$ a negligible function?
| Let $f'(x) = f(x) \cdot p(x)$
*
*f(x) is negligible means that it is smaller than the inverse of any polynomial, for all sufficiently large $x$. In your case, given any polynomial $q(x)$, it is smaller than $1/(p(x)q(x))$
*By using the limit definition of negligibility.
If $f(x)$ is negligible, then for every polynomial $q(x)$ we have
$$\lim_{x \rightarrow \infty} q(x) f(x) =0$$
We need to show that for every $q(x)$, $f'(x)$ is negligible:
$$\lim_{x \rightarrow \infty} q(x) f'(x) = \lim_{x \rightarrow \infty} q(x) [p(x) f(x)] =0$$
This is true, since $f(x)$ being negligible implies that for any polynomial,
$$\lim_{x \rightarrow \infty} [q(x) p(x)] f(x) =0,$$ where $q(x) p(x)$ is again a polynomial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3498905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
When does multiplication by an orthogonal matrix preserves the eigenvalues? Let $A$ be a real $n \times n$ matrix, with rank $\ge n-1$.
Suppose that the eigenvalues (counted with multiplicities) of $A$ are the same as the eigenvalues of $QA$ for some orthogonal matrix $Q$. Must $Q$ be diagonal?
The condition $\text{rank}(A)\ge n-1$ is necessary: If we allow $\text{rank}(A)< n-1$, then one can take $A$ to be block diagonal with the $2 \times 2$ zero matrix as its first block. Then the entire $\text{O}(2) \times \text{Id}_{n-2}$ preserves the eigenvalues.
| No. Consider
$$
R=\pmatrix{0&1\\ 1&0},\ Q=\pmatrix{R\\ &R},\ A=\pmatrix{R\\ &I},\ QA=\pmatrix{I\\ &R}.
$$
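The counterexample can be verified mechanically with plain-Python matrix arithmetic (no libraries; this is just a sanity check of the block matrices above):

```python
# 2x2 building blocks and 4x4 block-diagonal assembly.
R = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

def block_diag(B1, B2):
    Z = [[0, 0], [0, 0]]
    return [B1[0] + Z[0], B1[1] + Z[1], Z[0] + B2[0], Z[1] + B2[1]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Q = block_diag(R, R)
A = block_diag(R, I)
QA = matmul(Q, A)

# Q is orthogonal ...
assert matmul(Q, [list(row) for row in zip(*Q)]) == block_diag(I, I)
# ... not diagonal ...
assert any(Q[i][j] != 0 for i in range(4) for j in range(4) if i != j)
# ... and QA = diag(I, R) is just A with its blocks swapped (a permutation
# similarity), so it has the same eigenvalues {1, 1, 1, -1} as A.
assert QA == block_diag(I, R)
```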
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\int_{0}^{1}4x^3\left(\frac{d^2}{dx^2}(1-x^2)x^5\right)dx$ $$\int_{0}^{1}4x^3\left(\dfrac{d^2}{dx^2}(1-x^2)x^5\right)dx$$
$$\int_{0}^{1}4x^3\left(\dfrac{d}{dx}(5x^4-7x^6)\right)dx$$
$$\int_{0}^{1}4x^3\left(20x^3-42x^5\right)dx$$
$$8\int_{0}^{1}10x^6-21x^8 dx$$
$$8\left(\dfrac{10x^7}{7}-\dfrac{21x^9}{9}\right)_{x=1}$$
$$8\left(\dfrac{10}{7}-\dfrac{21}{9}\right)=8\cdot\dfrac{90-147}{63}=-\dfrac{152}{21}$$
But the actual answer given is $2$. What am I missing here?
| Although you've gotten to the point of knowing that the "actual answer" you've been given is wrong, you might want to consider a different approach to the problem -- integration by parts. You have an integral of the form
$$
\int_0^1 f(x) g'(x) ~ dx
$$
(where $g(x) = ((1-x^2)x^5)'$), so you can convert it, with "parts", as follows:
\begin{align}
I &= \int_{0}^{1}4x^3\left(\dfrac{d^2}{dx^2}(1-x^2)x^5\right)dx\\
&= \int_{0}^{1}4x^3\left(\dfrac{d}{dx}\left(\frac{d}{dx}[(1-x^2)x^5]\right)\right)dx \\
&= \left. 4x^3 \left(\frac{d}{dx}[(1-x^2)x^5]\right)\right|_0^1 - \int_{0}^{1}12x^2\left(\frac{d}{dx}[(1-x^2)x^5]\right)dx \\
\end{align}
In the left hand term, when $x = 0$ the $4x^3$ factor is zero; when $x = 1$, the "derivative" factor turns out to be $5 - 7 = -2$, so we have
\begin{align}
I
&= 4 \cdot (-2) - \int_{0}^{1}12x^2\left(\frac{d}{dx}[(1-x^2)x^5]\right)dx \\
&= -8 - \int_{0}^{1}12x^2\left(\frac{d}{dx}[(1-x^2)x^5]\right)dx \\
\end{align}
and we can repeat the integration by parts, to get
\begin{align}
I &= -8 - \left.\left(12x^2 \cdot (1-x^2)x^5\right) \right|_0^1 + \int_{0}^{1}24x\left((1-x^2)x^5\right)dx \\
\end{align}
In this case, the middle term is zero at both $x = 0$ and $x = 1$, so we have
\begin{align}
I
&= -8 + \int_{0}^{1}24x\left((1-x^2)x^5\right)dx \\
&= -8 + \int_{0}^{1}24x(x^5-x^7)dx \\
&= -8 + 24\int_{0}^{1}(x^6-x^8)dx \\
&= -8 + 24 \left(\left. \frac{x^7}{7}-\frac{x^9}{9}\right|_{0}^{1}\right)\\
&= -8 + 24 \left( \frac{1}{7}-\frac{1}{9}\right)\\
&= -8 + 24 \left( \frac{2}{63}\right)\\
&= -8 + \frac{16}{21}\\
\end{align}
Is this really the best way to go? Probably not -- I had to multiply stuff out in the middle to do the evaluations from $0$ to $1$, etc. -- but the general notion that whenever you see $fg'$ inside an integral, it's not a bad idea to try integration by parts, is still worth knowing.
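Both routes to the answer can be checked in exact arithmetic with the standard library (polynomials as coefficient lists; a sketch, not part of the solution):

```python
from fractions import Fraction

# Polynomials as coefficient lists: p[k] is the coefficient of x^k.
def deriv(p):
    return [Fraction(k) * p[k] for k in range(1, len(p))]

def integral_0_1(p):
    return sum(Fraction(c) / (k + 1) for k, c in enumerate(p))

g = [0, 0, 0, 0, 0, 1, 0, -1]        # (1 - x^2) x^5 = x^5 - x^7
g2 = deriv(deriv(g))
assert g2 == [0, 0, 0, 20, 0, -42]   # second derivative: 20x^3 - 42x^5

integrand = [0, 0, 0] + [4 * c for c in g2]   # multiply by 4x^3
assert integral_0_1(integrand) == Fraction(-152, 21)
assert Fraction(-152, 21) == -8 + Fraction(16, 21)
```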
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Application WLLN in a convergence related problem So, I have stumbled upon a question like this:
Let $\{X_n\}_{n\geq1}$ be a sequence of iid random variables having common pdf $f_X(x)=xe^{-x}I_{x>0}$. Define $\overline{X_n}=\frac{1}{n}\sum_{i=1}^{n}{X_i},\ n=1,2,\dots$ Then $\lim_{n\to \infty}P(\overline{X_n}=2)=$
*
*$0$
*$1$
*$1/2$
*$1/4$
Now, as $X_i$ is a continuous random variable, $\overline{X_n}$ is also continuous. So $P(\overline{X_n}=2)=0$ for any $n$. But the Weak Law of Large Numbers says that $\overline{X_n}$ converges in probability to $2$ (as $E(\overline{X_n})=2$), which means(?) $\overline{X_n}=2$ with probability $1$. So I must be missing something here. Which option is correct?
Any help would be very much appreciated!
| We say that a sequence $(X_n)_n$ of random variables converges in probability to the random variable $X$ (notation: $X_n \stackrel{\mathbb{P}}{\to} X)$ if
$$\forall \epsilon > 0: \lim_n\mathbb{P}(|X_n-X| \geq \epsilon)=0$$
If $\overline{X_n} = 2$ with probability $1$ then for all $\epsilon > 0$ we have $\mathbb{P}(|\overline{X_n}-2| \geq \epsilon) = 0$ and thus we have $\overline{X}_n \stackrel{\mathbb{P}}{\to} 2$ trivially.
There is no contradiction. Of course, $\overline{X}_n = 2$ with probability $1$ is strictly stronger than convergence in probability, and the implication only goes in that direction: convergence in probability to $2$ does not imply $\overline{X}_n = 2$ with positive probability, so $P(\overline{X_n}=2)=0$ for every $n$ and the limit is $0$.
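For intuition, a quick simulation (the seed, sample size and tolerance are arbitrary illustrative choices) shows the sample mean clustering near $2$ without ever equalling it exactly:

```python
import math
import random

random.seed(0)

# Draw from the density x e^{-x} (Gamma(2,1)): a sum of two Exp(1) variables.
def gamma2_sample():
    u1, u2 = 1.0 - random.random(), 1.0 - random.random()
    return -math.log(u1) - math.log(u2)

n = 20000
xbar = sum(gamma2_sample() for _ in range(n)) / n

assert abs(xbar - 2.0) < 0.1   # sample mean is near E[X] = 2 ...
assert xbar != 2.0             # ... but it never equals 2 exactly
```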
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $a^{\cos(b)^2}-b^{\cos(a)^2}\leq \frac{4}{\pi}(1-\frac{\pi}{2})b+\frac{\pi}{2}-1$ and the reverse inequality. It's a problem of my own:
Let $a\geq b>0$ such that $a+b=\frac{\pi}{2}$ then we have :
$$a^{\cos(b)^2}-b^{\cos(a)^2}\leq \frac{4}{\pi}(1-\frac{\pi}{2})b+\frac{\pi}{2}-1$$
Let $b\geq a>0$ such that $a+b=\frac{\pi}{2}$ then we have :
$$a^{\cos(b)^2}-b^{\cos(a)^2}\geq \frac{4}{\pi}(1-\frac{\pi}{2})b+\frac{\pi}{2}-1$$
For the first the equality case is when $b=0$ or $b=\frac{\pi}{4}$ for the second when $b=\frac{\pi}{4}$ or $b=\frac{\pi}{2}$
The main remark is: the line $f(x)=\frac{4}{\pi}(1-\frac{\pi}{2})x+\frac{\pi}{2}-1$ is a chord of the curve defined by the function $g(x)=\Big(\frac{\pi}{2}-x\Big)^{\cos^2(x)}-\Big(x\Big)^{\sin^2(x)}$
So my idea was to differentiate the function $g(x)$; we obtain:
$$g'(x)= (\frac{\pi}{2} - x)^{\cos^2(x)} \Big(-\frac{\cos^2(x)}{(\frac{\pi}{2} - x)} - 2 \log\Big(\frac{\pi}{2} - x\Big) \sin(x) \cos(x)\Big) - x^{\sin^2(x)} \Big(\frac{\sin^2(x)}{x} + 2 \log(x) \sin(x) \cos(x)\Big)$$
One can check that $g'(x)<0$ on the interval $[0,\frac{\pi}{2}]$ and try to apply the mean value theorem, but I'm stuck after this.
If you have a nice idea it would be great.
PS: Is there a symmetry with respect to the line $f(x)$? If yes, we can cut the problem in two.
Thanks for sharing your time and knowledge.
| put $x=\dfrac{\pi}{4}+a$
$h(a)=(\frac{\pi}{4}-a)^{\frac{1}{4}(\cos{a}-\sin{a})}+(\frac{\pi}{4}+a)^{\frac{1}{4}(\sin{a}+\cos{a})}$
$=i(a)+i(-a)$
$i''(x)$ is increasing on the interval $(-\frac{\pi}{4},0)$ and $i'(0)\geq-1$; so $h(x)$ has the shape of a sum of two parabola-like curves.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mathematical equivalent of a one-dimensional array as seen in programming Suppose some kind of structure to hold a (finite) number of elements is needed. The order of the elements matters and duplicate elements are allowed. When programming one would use a one-dimensional array for this.
What mathematical structure comes closest to such a one-dimensional array as seen in programming? I've seen both row and column vectors being used for this purpose, but it seems that a sequence would do as well. Is there any reason to choose one over the other or to choose an entirely other structure?
| A vector is the wrong object to consider. While vectors (in $\mathbb{R}^n$) can be represented as ordered lists, calling something a vector implies that it comes from a space with a great deal more structure. Specifically, vectors can be added together, and can be scaled by field elements (in the case of vectors in $\mathbb{R}^n$, you can multiply a vector by a real number).
A closer analogy would be a sequence (if your array has infinite length, or if you are willing to pad it by zero) or an $n$-tuple (if your arrays are all of the same length $n$). Either of these objects may be thought of as a function from (a subset of) the natural numbers $\mathbb{N}$ to the appropriate set (e.g. the set of floating point numbers, the set of integers, the set of real numbers, etc).
For example, the array
float array[5] = {3.14, 0.60309, 2.7183, 2, 1.618};
could be written as a $5$-tuple
$$ (3.14, 0.60309, 2.7183, 2.0, 1.618), $$
which can be thought of as a function $ a : \{0,1,2,3,4\} \to (\text{Set of Floats})$ defined by
$$
\begin{cases}
3.14 & \text{if $x=0$,} \\
0.60309 & \text{if $x=1$,} \\
2.7183 & \text{if $x=2$,} \\
2.0 & \text{if $x=3$, and} \\
1.618 & \text{if $x=4$.}
\end{cases}$$
That is (for example) $a(2) = 2.7183$.
As rschwieb noted in a comment below, this last way of looking at a tuple or array is likely the most relevant for software engineers: one passes an index to the array, and gets back the value of the array element stored at that index. This is equivalent to evaluating the function at that index.
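To make the function view concrete, here is a small Python sketch (the variable names are my own): the tuple is stored as a map from the index set $\{0,1,2,3,4\}$ to floats, and indexing is exactly function evaluation.

```python
# the 5-tuple viewed as a function a : {0, 1, 2, 3, 4} -> floats;
# its graph is the set of (index, value) pairs
a = {0: 3.14, 1: 0.60309, 2: 2.7183, 3: 2.0, 4: 1.618}

print(a[2])       # 2.7183 -- evaluating the function at index 2
print(sorted(a))  # [0, 1, 2, 3, 4] -- the domain
```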
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sampling from a distribution with given pdf Given a continuous multivariate pdf in analytical form (i.e. in function form), how can one sample from the corresponding distribution? In other words, what are the ways of coming up with random (or psuedo-random) realizations from the distribution with probability matching the one corresponding to the given pdf? What is the general idea/principle behind the sampling procedure?
Referring to related references would be helpful too.
| This is a broad topic. Here are a few examples to initiate some thinking on your part.
(1) To generate a random sample $x$ from a univariate distribution with CDF $F$, first generate a uniformly distributed random number $u \sim U(0,1)$ and take $x = F^{-1}(u)$. Note that $x$ has the desired distribution since
$$\mathbb{P}(x \leqslant a) = \mathbb{P}(F^{-1}(u) \leqslant a) = \mathbb{P}(u \leqslant F(a)) = F(a)$$
(2) To generate a random vector with a multivariate normal distribution, i.e., $\mathbf{x} \sim N(\mathbf{\mu}, \Sigma)$, first generate a vector $\mathbf{z}$ with components that are independent and distributed $z_j \sim N(0,1)$. Find the Cholesky decomposition of the covariance matrix $\Sigma = LL^T$ and take $\mathbf{x} = \mu + L\mathbf{z}$.
This imposes the desired covariance structure since
$$\mathbb{E}((\mathbf{x} - \mathbf{\mu})(\mathbf{x} - \mathbf{\mu})^T) = \mathbb{E}(L\mathbf{z}(L\mathbf{z})^T)= \mathbb{E}(L\mathbf{z}\mathbf{z}^T L^T) = L \mathbb{E}(\mathbf{z}\mathbf{z}^T)L^T = LL^T = \Sigma$$
(3) More generally, consider an approach like Gibbs sampling.
Also you could explore the notion of a copula if the marginal distributions are accessible.
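Here is a rough Python sketch of ideas (1) and (2), using only the standard library; the target distributions (Exp(1) for the inverse-CDF method, and the covariance matrix [[2, 1], [1, 2]] for the Cholesky method) are my own illustrative choices, not from the question.

```python
import math
import random

rng = random.Random(0)
n = 100_000

# (1) inverse-CDF sampling: Exp(1) has CDF F(x) = 1 - e^(-x), so F^{-1}(u) = -ln(1 - u)
exp_samples = [-math.log(1 - rng.random()) for _ in range(n)]
exp_mean = sum(exp_samples) / n          # should be close to 1, the Exp(1) mean

# (2) bivariate normal via Cholesky: Sigma = [[2, 1], [1, 2]] = L L^T,
# with L = [[sqrt(2), 0], [1/sqrt(2), sqrt(3/2)]]
l11, l21, l22 = math.sqrt(2), 1 / math.sqrt(2), math.sqrt(1.5)
pairs = []
for _ in range(n):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    pairs.append((l11 * z1, l21 * z1 + l22 * z2))
cov_xy = sum(x * y for x, y in pairs) / n  # should be close to Sigma[0][1] = 1

print(round(exp_mean, 3), round(cov_xy, 3))
```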
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3499892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
a question about the surface area of two Octagonal pyramids A friend of mine came to me with a problem earlier today and I'm unsure if the answer I arrived at was correct. I simply found a formula on wolfram and used the provided values to work backwards to a solution. I know there has to be an easier way to solve this but I'm stumped on it and any help would be appreciated. The person I was helping has already turned in the assignment so it won't be aiding someone surreptitiously if that is a concern and I'll change the specific values given in the original problem. So anyway the problem goes:
You are given two octagonal pyramids with height 8. The distance from the base point of pyramid A to the midpoint of one of its sides is 6, and the distance between the base point of pyramid B and the midpoint of one of its sides is 15. The surface area of pyramid A is 306.4, what is the surface area of pyramid B?
Edit:
sorry I didn't include my attempt at this initially.
The method I used started with a formula I found on wolfram alpha for the surface area of an octagonal pyramid. That formula is
$2s\left(\sqrt{4h^2 + s^2\cot^2(\pi/8)} + s\cot(\pi/8)\right)$
Where h is the height and s is the side length of one of the sides of the base octagon. I plugged in the height, and equated the whole formula to the surface area of pyramid A to get the side length s. I then used the fact that the resulting right triangles made from the base octagons of either pyramid should be similar hence we can find the side length of the base octagon of pyramid B by using:
$b/15 = a / 6$
where a is $1/2$ the side length of the base octagon of pyramid A and b is $1/2$ the side length of the base octagon of pyramid B.
I used this second formula to find the value of b:
since
$a = \dfrac{383/2}{\sqrt{5\left(320 + 383\sqrt{3 + 2\sqrt{2}}\right)}} \approx 2.428$
then
$ b = 15*a/6 $
so plugging $2 * b$ in for s in our original formula gives the surface area of pyramid B as
SA = 1521.78
So by my logic that should be correct but I can't imagine that's how the problem is meant to be solved, nor do I think my answer is correct, though that's simply because I threw together formulas until I got an answer and that leaves me no way to verify my answer. Basically my hope it for some answer that bypasses the use of these formulas and is more straightforward. Any help would be greatly appreciated. Thank you for your time!
| I think there is a problem with this question, as the given area for pyramid A is incorrect (assuming a right pyramid, i.e. the apex above the centre of a regular octagon). Nevertheless, I'll solve this problem using that incorrect figure using as few calculations as possible.
In pyramid A, the octagonal base can be split into 8 isosceles triangles, with height 6. Their base is an edge of the pyramid, and it has length L, which I will not need to calculate. The other faces of the pyramid are also eight triangles with base L, but with height $\sqrt{6^2+8^2}=10$.
Therefore the area of the pyramid is split between the base and the other eight faces proportional to the height of the triangles - the area of the octagonal base of A is $\frac{6}{10+6}*306.4$ and the total area of the other eight sides is $\frac{10}{10+6}*306.4$.
In pyramid B, the octagonal base is scaled relative to A by a factor of $\frac{15}{6}$, so the area of the base is scaled by that squared, and must have area $\frac{15^2}{6^2}\frac{6}{10+6}*306.4 = 718.125$. The other eight faces have a base length that is scaled by that factor $\frac{15}{6}$ compared to pyramid A, but their height is now $\sqrt{15^2+8^2}=17$, which is $\frac{17}{10}$ times its previous value. The area of these eight faces is therefore $\frac{15}{6}\frac{17}{10}\frac{10}{10+6}*306.4 = 813.875$. This gives a total area for pyramid B of $1532.0$.
With regards to the incorrect given area:
The value of L is $2\cdot 6\tan(22.5^\circ) \approx 4.97$. The area of pyramid A should therefore be $8\cdot\frac{6L}{2}+8\cdot\frac{10L}{2} = 64L \approx 318.116$, and not $306.4$. You can adjust the values for B's area accordingly.
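The arithmetic above can be double-checked with a few lines of Python (this just reproduces the numbers in the answer):

```python
import math

L = 2 * 6 * math.tan(math.pi / 8)            # base edge length of pyramid A
area_A = 8 * (6 * L / 2) + 8 * (10 * L / 2)  # = 64 L: base triangles + slant faces
base_B = (15**2 / 6**2) * (6 / 16) * 306.4   # base of B, scaled by (15/6)^2
sides_B = (15 / 6) * (17 / 10) * (10 / 16) * 306.4
print(round(L, 3), round(area_A, 3), round(base_B, 3), round(sides_B, 3),
      round(base_B + sides_B, 1))            # 4.971 318.116 718.125 813.875 1532.0
```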
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove by limit definition $\lim _{x\to \infty }\left(\frac{-7x^2+9x}{4x^2+8}\right)=\frac{-7}{4}$ Prove by limit definition
$$\lim _{x\to \infty }\left(\frac{-7x^2+9x}{4x^2+8}\right)=\frac{-7}{4}$$
Let $\epsilon > 0$; we need to find $M$ such that for every $x>M$, $|f(x) - L|<\epsilon$.
$\left|\frac{-7x^2+9x}{4x^2+8}+\frac{7}{4}\right| = \frac{9x+14}{4\left(x^2+2\right)} \le \frac{23x}{4x^2}=\frac{23}{4x} < \epsilon $
so I choose $M=\frac{23}{4\epsilon}$
My question: I assumed that $x>0$. Do I need to check the case $x\le0$, or is this enough since $x \to \infty$? And does this prove the limit?
thanks
| Look for example at pp. 105, Calculus (Third Edition) from Spivak. The definition of $\lim_{x\to\infty} f(x)=L$ is that for every $\varepsilon>0$ there is a number $N$ such that, for all $x$, if $x>N$, then $|f(x)-L|<\varepsilon$. According to this, what you have done proves the limit, but I think that in your attempt to solve the problem it was necessary to assume $x>1$. Nevertheless this is not a problem because you can take $M=\max\{1, \frac{23}{4\varepsilon}\}$ to ensure that the bounds you took are ok.
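A quick numeric sanity check of the choice $M=\max\{1,\frac{23}{4\varepsilon}\}$, for one arbitrary $\varepsilon$:

```python
def f(x):
    return (-7 * x**2 + 9 * x) / (4 * x**2 + 8)

eps = 1e-3
M = max(1.0, 23 / (4 * eps))
checks = [abs(f(x) + 7 / 4) < eps for x in (M + 1, 2 * M, 10 * M)]
print(M, checks)  # 5750.0 [True, True, True]
```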
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving the existence of a line that passes only through two points So I came across this question in a Math Olympiad Book:
Consider a finite set $S$ of points in a plane which are not all collinear. Show that there is a line in the plane which passes only through two points in $S$.
$$S = \{A_1, A_2,...,A_n\}$$
Since this was in the combinatorics section of the book, my first instinct was to construct a set (say $A$) which consists of increasing ordered triplets of $S$. Basically,
$$A=\{(A_1,A_2,A_3),(A_1,A_2,A_4),...\}$$
Furthermore, let us assume that every element $x$ of $A$ represents collinearity between those three points within $x$.
Now let us assume that there does exist a set $S$ such that no line passing through only two points exists. In that case $A$ would contain every single increasing ordered triplet using elements of $S$.
However we know that there is only one possible line passing between two points. Thus it means that if two triplets $(A_1,A_2,A_i)$ and $(A_1,A_2,A_j)$ are elements of $A$ then the line passing through $A_1$ and $A_2$ also passes through $A_i$ and $A_j$, which would imply that all four points are collinear. Similarly we know that $A$ contains all possible triplets of the form $(A_1,A_2,A_i)$ thus we can say that all the points are collinear.
This is a contradiction. Thus no such set can exist. $QED$.
However, I showed this proof to a friend and he told me that I was using cyclic reasoning and was incorrect. According to him I should prove that such a line exists rather than proving that no such set can exist. Is he right?
| There is a simple proof without going into any deep knowledge of Mathematics. Just a little common sense and you are done.
If $S$ is such a finite set, then the region occupied by the points is bounded, so we can construct an imaginary boundary around them. Outside this boundary we draw a straight line, and surely none of the points of our set lie on this line. Since the region occupied by the points is bounded, we can move the line inward and bring exactly two points from our parent set onto the straight line. And then we have a set of points where there is a line passing through exactly $2$ points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find the value of $E(\frac{1}{2018})+E(\frac{2}{2018})+\dots+E(\frac{2017}{2018})$ Let $E(n) = \dfrac{4^n}{4^n+2}$.
If the value of $E(\frac{1}{2018})+E(\frac{2}{2018})+\dots+E(\frac{2017}{2018})=\frac{a}{b}$ (in lowest terms), find $b$.
I tried solving this question using telescopic sum but was unable to solve
| Let $\displaystyle f(x)=\frac{4^x}{4^x+2},$ Then $\displaystyle f(1-x)=\frac{4^{1-x}}{4^{1-x}+2}=\frac{2}{4^x+2}$
So $$f(x)+f(1-x)=\frac{4^x+2}{4^x+2}=1.$$
So Put $\displaystyle x=\frac{1}{2018},\frac{2}{2018},\frac{3}{2018},\cdots \cdots ,\frac{1008}{2018}$
So $$f\bigg(\frac{1}{2018}\bigg)+f\bigg(\frac{2}{2018}\bigg)+f\bigg(\frac{3}{2018}\bigg)+\cdots +\cdots +f\bigg(\frac{1009}{2018}\bigg)+f\bigg(\frac{1010}{2018}\bigg)\cdots +f\bigg(\frac{2017}{2018}\bigg)=1008+\frac{1}{2}$$
For calculation of $\displaystyle f\bigg(\frac{1009}{2018}\bigg),$
Put $\displaystyle x=\frac{1009}{2018}$ in $f(x)+f(1-x)=1,$
Getting $$\displaystyle f\bigg(\frac{1009}{2018}\bigg)=\frac{1}{2}.$$
Hence the total sum is $1008+\frac{1}{2}=\frac{2017}{2}$, which is already in lowest terms, so $b=2$.
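A quick numerical check of the pairing argument (floating point, so only approximate, but it pins down the exact value $\frac{2017}{2}$):

```python
def E(x):
    return 4**x / (4**x + 2)

s = sum(E(k / 2018) for k in range(1, 2018))
print(round(s, 6))  # 1008.5, i.e. a/b = 2017/2, so b = 2
```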
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Probability of Independent Normal Random Variables There are two random variables $X$ and $Y$. Both of them are normally distributed with mean $0$ and variances $a$ and $b$ respectively. $X$ and $Y$ are independent of each other.
What is the probability that $X+Y>0$ and $Y>0$?
What should be the easiest way to compute this?
| The joint probability density of $X$ and $Y$, with variances $a$ and $b$, is $\frac{1}{2\pi \sqrt{ab}}e^{-\frac{1}{2} \left(\frac{x^2}{a} + \frac{y^2}{b} \right)}$.
Hence, we are looking for the value of the following integral.
\begin{align*}
\frac{1}{2 \pi \sqrt{ab}} \iint_{x+y>0,\; y>0}e^{-\frac{1}{2} \left(\frac{x^2}{a} + \frac{y^2}{b} \right)} dxdy
\end{align*}
We make the substitution $s = \frac{x}{\sqrt{a}}$ and $t = \frac{y}{\sqrt{b}}$, so that $dx\,dy = \sqrt{ab}\,ds\,dt$ and the normalizing constant cancels, leaving
\begin{align*}
\frac{1}{2 \pi} \iint_{\sqrt{a}\,s+\sqrt{b}\,t>0,\; t>0}e^{-\frac{1}{2} \left(s^2 + t^2 \right)} dsdt
\end{align*}
Now, we want to switch to polar coordinates, but we have to be careful about the domain over which we integrate. You might want to draw a picture to find that it is the sector $0 < \theta < \pi + \arctan\left( -\sqrt{a/b} \right)$. This gives the following expression.
\begin{align*}
\frac{1}{2 \pi} \int_{0}^{\pi + \arctan\left( -\sqrt{a/b} \right)}\int_0^{\infty} e^{\frac{-r^2}{2}} r\,dr\,d\theta =& \frac{\pi + \arctan \left( -\sqrt{a/b}\right)}{2\pi} \int_0^{\infty}e^{\frac{-r^2}{2}}d\left( \frac{r^2}{2} \right) \\
=& \frac{\pi + \arctan \left( -\sqrt{a/b}\right)}{2\pi}
\end{align*}
which is your solution. As a sanity check, for $a=b$ this gives $\frac{\pi-\pi/4}{2\pi}=\frac{3}{8}$, as the symmetry of the region requires.
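As a numerical sanity check, here is a hypothetical Monte Carlo sketch (Python standard library) estimating $P(X+Y>0,\ Y>0)$ for independent normals with variances $a$ and $b$, compared against the sector formula $\frac{\pi+\arctan(-\sqrt{a/b})}{2\pi}$; for $a=b$ both give $\frac{3}{8}$.

```python
import math
import random

def sector_formula(a, b):
    # P(X+Y > 0, Y > 0) for independent X ~ N(0, a), Y ~ N(0, b); a, b are variances
    return (math.pi + math.atan(-math.sqrt(a / b))) / (2 * math.pi)

def monte_carlo(a, b, n=200_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.gauss(0, math.sqrt(a))   # gauss takes the standard deviation
        y = rng.gauss(0, math.sqrt(b))
        if x + y > 0 and y > 0:
            hits += 1
    return hits / n

for a, b in [(1, 1), (4, 1), (1, 9)]:
    print(a, b, round(sector_formula(a, b), 4), monte_carlo(a, b))
```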
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find all squarefree integers whose divisors $d_1 < d_2 < · · · < d_k$ satisfy $d_i − d_{i−1}|n$ for all $2 \leq i \leq k$. Disclaimer: this problem came from USAMTS: https://www.usamts.org/Tests/Problems_31_3.pdf
The contest has ended., in case there is any doubt.
Problem: Find all squarefree integers whose divisors $d_1 < d_2 < · · · < d_k$ satisfy $d_i − d_{i−1}|n$ for all $2 \leq i \leq k$.
My thinking is if $d_i − d_{i−1}|n$ then $d_i − d_{i−1}$ is in $d_1, d_2,... d_k$. Then $d_2-d_1 < d_2$ is a divisor so $d_2-d_1 = d_1, d_2 = 2d_1$. Obviously $d_1 = 1$ and $d_2 = 2$ since $1 | n$.
Then $d_3 - d_2 = d_3 - 2$. If $d_3 - 2 = d_2$ then $d_3 = 4$. If $d_3 - 2 = d_1$ then $d_3 = 3$.
Powers of $2$ seem to work, but I am not sure how to prove that the other cases don't work.
It seems we can make some decent progress but I am stuck here..
| We must have $d_1=1.$ So $d_2-1$ must be divisor less than $d_2$ so $d_2-1=1,$ or $d_2=2.$
In general, if $p$ is a prime divisor of $n$ then let $d$ be the previous divisor. Then $p\not\mid d$ and $p-d$ must be a divisor of $n$ and likewise $\frac{n}{d}-\frac{n}{p}=\frac{(p-d)n}{pd}=(p-d)\frac{n}{pd}$ must be a divisor of $n.$
But we can show that $\gcd(p-d,pd)=1.$ So if $d\neq p-1$ then $(p-d)\frac{n}{pd}$ is not square free, so it cannot be a divisor of $n.$
So the only way for $p$ to be added is if $p-1$ is also a divisor.
In particular, we must start $1,2,3,\dots,$ and the next cannot be $5$ since $5\neq 3+1.$ So the next must be six, and the next value must be a prime, and the only prime can be $6+1=7.$
So we start $1,2,3,6,7,\dots,14,\dots, 21,\dots,42,\dots$ If there were any primes added to this in the $\dots,$ then the smallest such $p$ must have $p-1$ in $7,14,21,42.$ But the only option is $p=43,$ because $7+1,14+1,21+1$ are all non-prime.
So the sequence must start:
$$1,2,3,6,7,14, 21,42,\dots$$
We can stop at $n=42.$ Or we can continue with $p=43.$ But then there is no prime $p$ such that $p-1=43d_1$ where $d_1\mid 42.$ So there can be no larger $n.$
So the only values are $n=1,2,6=2(2+1),42=6(6+1),1806=42(42+1).$
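A brute-force search (the bound $2000$ is an arbitrary choice, large enough to catch $1806$) confirms this list:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def squarefree(n):
    return all(n % (p * p) for p in range(2, int(n**0.5) + 1))

def good(n):
    # squarefree, and every gap between consecutive divisors divides n
    d = divisors(n)
    return squarefree(n) and all(n % (d[i] - d[i - 1]) == 0 for i in range(1, len(d)))

found = [n for n in range(1, 2001) if good(n)]
print(found)  # [1, 2, 6, 42, 1806]
```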
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Understanding the least squares regression formula?
I've seen the following tutorial on it, but the formula itself had not been explained (https://www.youtube.com/watch?v=Qa2APhWjQPc).
I understand the intuition behind finding a line that "best fits" the data set, where the error is minimised (image below).
However, I don't see how the formula relates to the intuition. Could anyone explain the formula? I can't visualise what it's trying to achieve. A simple gradient is $dy/dx$; wouldn't we just compute $\sum(Y - y) \div \sum (X - x)$, where $Y$ and $X$ are the centroid (average) values? By my logic, that would be how you calculate the average gradient. Could someone explain this to me?
| very concisely:
* if the points were all on a straight line, then you would like that line to be the regression line, wouldn't you?
* if you now translate the linear cloud rigidly (no rotation), you would like the regression line to translate in the same way;
* the regression line will contain all the cloud points, including the centroid $(\bar x, \bar y)$;
* passing to a general cloud of points, translate the reference system to have the origin at the centroid and see what happens to the parameters $m', c'$ computed in the new reference.
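A minimal numeric sketch (made-up data) of the standard least-squares formulas: the slope is $m=\frac{\sum(x-\bar x)(y-\bar y)}{\sum(x-\bar x)^2}$ (covariance over variance, not the simple ratio proposed in the question), and the fitted line always passes through the centroid:

```python
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]

xbar = sum(xs) / len(xs)
ybar = sum(ys) / len(ys)
m = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
c = ybar - m * xbar                       # forces the line through (xbar, ybar)

print(round(m, 3), round(c, 3))           # 1.96 0.14
print(abs((m * xbar + c) - ybar) < 1e-9)  # True: line passes through the centroid
```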
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3500898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Use Fubini's theorem to reverse the order of the double integral
Use Fubini's theorem to reverse the order of the double integral (the reversed integral may split into a sum of multiple pieces):
a) $\int_0 ^4 \int_0 ^{\sqrt{x}} f(x,y) dy dx$
b) $\int_0 ^2 \int_x ^{3} f(x,y) dy dx$
c) $\int_{-1} ^2 \int_0 ^{1-y^2} f(x,y) dx dy$
In part $c)$ be careful about signs!
My attempt
a)
Limits for $x$: $0\leq x\leq 4$.
Limits for $y$ (as a function of $x$): $0\leq y\leq \sqrt{x}.$
Now consider,
limits for $y$: $0\leq y\leq 2$,
limits for $x$ (as a function of $y$): $y^2\leq x\leq 4.$
So by the Fubini's
$\int_0 ^4 \int_0 ^{\sqrt{x}} f(x,y) dy dx=\int_0 ^2 \int_{y^2}^4 f(x,y) dx dy.$
b)
Limits for $x$: $0\leq x\leq 2$,
limits for $y$ (as a function of $x$): $x\leq y\leq 3.$
Now, consider
limits for $y$: $0\leq y\leq 3$,
limits for $x$ (as a function of $y$): $0\leq x\leq y.$
So, by the Fubini's theorem:
$\int_0 ^2 \int_x ^{3} f(x,y) dy dx=\int_0 ^3 \int_0 ^{y} f(x,y) dx dy.$
c) I couldn't draw the graph; could you draw it and add it here?
Could you check $a)$ and $b)$, and help with $c)$?
| The graph and working for (a) are correct. The graph for (b) is wrong and should look like the region $\{(x,y)\colon 0\le x\le 2,\ x\le y\le 3\}$.
The reversed integral is $\int_0^3\int_0^{\min(2,y)}f(x,y)\,dx\,dy$.
For (c), the region is bounded by $x=0$ and $x=1-y^2$ for $-1\le y\le 2$; for $1<y\le 2$ the upper limit $1-y^2$ is negative, so that part of the region carries a negative sign, indicating a reversed orientation.
$$\int_0^1\int_{-\sqrt{1-x}}^{\sqrt{1-x}}f(x,y)\,dy\,dx-\int_{-3}^0\int_{\sqrt{1-x}}^2f(x,y)\,dy\,dx$$
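These order reversals can be spot-checked numerically with a midpoint rule and a test integrand, here the arbitrary choice $f(x,y)=xy$ (a rough sketch; the exact values are $\frac{32}{3}$ for (a) and $7$ for (b)):

```python
def midpoints(a, b, n):
    h = (b - a) / n
    return [a + (i + 0.5) * h for i in range(n)], h

def double_integral(outer_lim, inner_lim, f, n=400):
    # integrates f(outer, inner); the inner limits depend on the outer variable
    outer, h_out = midpoints(*outer_lim, n)
    total = 0.0
    for u in outer:
        inner, h_in = midpoints(*inner_lim(u), n)
        total += sum(f(u, v) for v in inner) * h_out * h_in
    return total

f = lambda x, y: x * y
orig_a = double_integral((0, 4), lambda x: (0, x**0.5), f)                    # dy dx
rev_a = double_integral((0, 2), lambda y: (y * y, 4), lambda y, x: f(x, y))   # dx dy
orig_b = double_integral((0, 2), lambda x: (x, 3), f)
rev_b = double_integral((0, 3), lambda y: (0, min(2, y)), lambda y, x: f(x, y))
print(round(orig_a, 2), round(rev_a, 2), round(orig_b, 2), round(rev_b, 2))
# 10.67 10.67 7.0 7.0
```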
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3501001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is the Kronecker symbol $(n/2)$ conventionally defined as it is? Admittedly this may be an extremely naive question, but I am simply puzzled about the motivation behind choosing this function to be one of the Dirichlet characters modulo $8$, rather than the Dirichlet character modulo $2$ (or, indeed, any other character modulo $8$). There must be a good reason for this, right? What was Kronecker's motivation?
For $m\in\mathbb{N}$ and $n\notin 2\mathbb{N}$, the Kronecker symbol $(m/n)$ is the completely multiplicative extension to the odd numbers of the Legendre symbol $(m/p)$, where $p$ is an odd prime. That is,
$$\left(\frac{m}{n}\right)=\prod_{p|n}\left(\frac{m}{p}\right)^{\alpha_p(n)},$$
where $\alpha_p(n)$ is the exponent of $p$ in the factorisation of $n$ and $\left(\frac{m}{p}\right)$ is $0$ if $p|m$ and $\{-1,1\}$ depending on whether $m$ is a quadratic non-reside or residue (mod $p$), respectively.
Very well. When $n\in 2\mathbb{N}$, however, Kronecker's symbol is defined in the same way but with $(m/2)$ defined to be $0$ if $m$ is even, $1$ if $m\equiv \pm 1 \pmod 8$ and $-1$ if $m\equiv \pm 3 \pmod 8$. In other words, Kronecker's symbol is the Dirichlet character modulo $8$ that takes the values $1,0,-1,0,-1,0,1,0,\ldots$.
Of course, there are four characters (mod $8$) which are all real, completely multiplicative functions of $m$. For example, the simplest of these is the Dirichlet character (mod $2$), which takes the values $1,0,...$. In particular, had we taken Kronecker's symbol to be this character, then it would still be completely multiplicative as a function of $n$ and when $n=2$ we would have a function of $m$ that carries the trivial structure of residuosity (mod $2$), too. So why do we take it to be the particular character we do, instead?
| The reason is quadratic residues and Legendre's symbol $\big(\frac{a}{p}\big)$, in particular the Legendre symbol $\big(\frac2{p}\big)$. It can be proved that $2$ is a quadratic residue of primes of the form $8n+1$ or $8n+7$, and a quadratic nonresidue of primes of the form $8n+3$ or $8n+5$. This can be expressed by the formula $\big(\frac2{p}\big) = (-1)^{(p^2-1)/8}$, which is contained in the Wikipedia article on the Legendre symbol.
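This is easy to verify computationally via Euler's criterion ($2$ is a residue mod $p$ iff $2^{(p-1)/2}\equiv 1 \pmod p$); a small sketch:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in range(3, 200, 2):
    if is_prime(p):
        euler = 1 if pow(2, (p - 1) // 2, p) == 1 else -1  # Euler's criterion for (2/p)
        assert euler == (-1) ** ((p * p - 1) // 8)
        assert (euler == 1) == (p % 8 in (1, 7))
print("ok")
```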
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3501439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Verify $\cos(x)=\frac{1-t^2}{1+t^2}$ with $t=\tan(\frac{x}{2})$ I was requested to verify, with $t=\tan(\frac{x}{2})$, the following identity:
$$\cos(x)=\frac{1-t^2}{1+t^2}$$
I'm quite rusty on my trigonometry, and haven't been able to find the proof of this. I'm sure there may be some trigonometric property I should know to simplify the work. Could someone hint me, or altogether tell me how to solve this problem? I tried to simplify the RHS, looking to get $\cos(x)$ out of it, but failed.
| $$\cos(x)=\cos^2(x/2)-\sin^2(x/2)=(1-t^2)/(\sec^2(x/2))=(1-t^2)/(1+t^2)$$
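The identity is easy to spot-check numerically (avoiding odd multiples of $\pi$, where $t=\tan(x/2)$ blows up):

```python
import math

checks = []
for x in (0.3, 1.0, 2.5, -1.7):
    t = math.tan(x / 2)
    checks.append(abs(math.cos(x) - (1 - t * t) / (1 + t * t)) < 1e-12)
print(checks)  # [True, True, True, True]
```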
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3501607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Inequality Solution Correctness Let $a,b,c \in \mathbb{R}$ with $0 < a < 1$, $0 < b < 1$, $0 < c < 1$, and $\sum_{cyc} a = 2$.
Prove that
$$ \prod_{cyc}\frac{a}{1-a} \ge 8$$
My solution
$$ \prod_{cyc}\frac{a}{1-a} \ge 8$$
or
$$ \prod_{cyc}a \ge 8\prod_{cyc}(1-a)$$
From $ \sum_{cyc} a = 2$ we can conclude $\prod_{cyc}a \le \frac{8}{27} $ , thus getting a maximum bound on $\prod_{cyc}a$ .
Going back to
$$ \prod_{cyc}a \ge 8\prod_{cyc}(1-a)$$
We can say $$\prod_{cyc}(1-a) \le \left(\frac{\sum_{cyc}(1-a)}{3}\right )^3=\left(\frac{1}{3}\right )^3=\frac{1}{27}.$$
which is true by the AM-GM inequality.
Is this solution correct?
| As shown by Michael Rozenberg, the proof given is not correct. Here is another AGM approach, using
$$
\sum_{k=1}^n\frac1{p_k}=1\implies\sum_{k=1}^nx_k\ge\prod_{k=1}^n(p_kx_k)^{1/p_k}\quad\text{where}\quad x_k,p_k\gt0\tag1
$$
Suppose $x+y+z=1$, then $(1)$ says that
$$
\begin{align}
&(1-x)(1-y)(1-z)\\
&=1-(x+y+z)+(xy+yz+zx)-xyz\\
&=(x+y+z)(xy+yz+zx)-xyz\\
&=x^2y+xy^2+2xyz+y^2z+yz^2+zx^2+z^2x\\
&\ge\left(8x^2y\right)^{1/8}\left(8xy^2\right)^{1/8}\left(8xyz\right)^{1/4}\left(8y^2z\right)^{1/8}\left(8yz^2\right)^{1/8}\left(8zx^2\right)^{1/8}\left(8z^2x\right)^{1/8}\\
&=8xyz\tag2
\end{align}
$$
Let $a=1-x$, $b=1-y$, $c=1-z$, then we get that if $a+b+c=2$, then $(2)$ is equivalent to
$$
abc\ge8(1-a)(1-b)(1-c)\tag3
$$
Thus, if $a+b+c=2$, then
$$
\frac{a}{1-a}\frac{b}{1-b}\frac{c}{1-c}\ge8\tag4
$$
and $a=b=c=\frac23$ shows that $(4)$ is sharp.
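A quick randomized check of $(4)$ under the constraint $a+b+c=2$, $a,b,c\in(0,1)$ (rejection sampling; sample size arbitrary):

```python
import random

rng = random.Random(1)
worst = float("inf")
samples = 0
while samples < 10_000:
    a, b = rng.uniform(0, 1), rng.uniform(0, 1)
    c = 2 - a - b
    if 0 < c < 1:                        # keep only points with a + b + c = 2, c in (0, 1)
        prod = (a / (1 - a)) * (b / (1 - b)) * (c / (1 - c))
        worst = min(worst, prod)
        samples += 1
print(worst >= 8)  # True; the minimum 8 is approached at a = b = c = 2/3
```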
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3501708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Using the equation $\mathbf{A} \cdot \mathbf{B} = AB\cos(\theta)$ to show that the dot product is distributive when the three vectors are coplanar. I am asked to use the equation $\mathbf{A} \cdot \mathbf{B} = AB\cos(\theta)$ to show that the dot product is distributive when the three vectors are coplanar. It seems that the typical way in which this is proven is with reference to a diagram, as was done in this answer; however, I want to try and prove this without relying on a diagram.
This equation is still valid for more than 2 vectors, since we know that the sum of two vectors (say, $\mathbf{B}$ and $\mathbf{C}$) is itself a vector:
$$\mathbf{A} \cdot (\mathbf{B} + \mathbf{C}) = A(B + C)\cos(\theta)$$
So let $\mathbf{A}, \mathbf{B}, \mathbf{C}$ be coplanar vectors. Since we have that 2 vectors span a plane, this implies that one of the vectors must be a linear combination of the other two. Let that vector be $\mathbf{C} = \alpha \mathbf{A} + \beta \mathbf{B} = \alpha(a_1, a_2, a_3) + \beta(b_1, b_2, b_3)$, where $\alpha, \beta \in \mathbb{R}$.
So we have that
$$\begin{align} \mathbf{A} \cdot ( \mathbf{B} + \mathbf{C}) &= A(B + C) \cos(\theta) \\ &= \sqrt{{a_1}^2 + {a_2}^2 + {a_3}^2}\sqrt{(b_1 + c_1)^2 + (b_2 + c_2)^2 + (b_3 + c_3)^2} \cos(\theta) \end{align}$$
We can continue by factoring out the squares under the second square root, but I do not see how this would progress the proof.
What am I missing? Did I make an error, or is my reasoning so far correct? I would greatly appreciate it if people would please take the time to clarify this.
| Normally we'd prove it by showing $A\cdot B=\sum_iA_iB_i$. You could try an alternative using $(B+C)^2=B^2+C^2-2BC\cos\phi$ with $\phi$ the angle between $B,\,C$, but I doubt that'll help much. Since $AB\cos\theta$ and $\sum_iA_iB_i=(A^TB)_{11}$ are both invariant under rotations of the plane (viz. $A\mapsto RA,\,B\mapsto RB$ with $R^TR=I$ so $(RA)^T(RB)=A^TB$), they're equal iff they're equal when $B$ is along the positive $x$-axis, and in that case$$A_1=A\cos\theta,\,A_2=A\sin\theta,\,\sum_iA_iB_i=AB\cos\theta.$$In particular,$$\begin{align}\left.\sum_iA_iB_i\right|_\text{arbitrary axes}&\stackrel{\ast}{=}\left.\sum_iA_iB_i\right|_\text{above axes}\\&=\left.AB\cos\theta\right|_\text{above axes}\\&\stackrel{\ast}{=}\left.|A||B|\cos\theta\right|_\text{arbitrary axes},\end{align}$$where each $\stackrel{\ast}{=}$ uses rotational invariance (the first uses matrices, the second the invariance of $|A|,\,|B|,\,\theta$).
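A numerical check in the plane: compute each dot product purely as $|u||v|\cos\theta$, with the angle $\theta$ read off from polar angles (example vectors arbitrary), and verify distributivity:

```python
import math

def polar_dot(u, v):
    # |u||v|cos(theta), with theta the difference of the vectors' polar angles
    theta = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
    return math.hypot(*u) * math.hypot(*v) * math.cos(theta)

A, B, C = (2.0, 1.0), (-1.0, 3.0), (0.5, -2.0)
S = (B[0] + C[0], B[1] + C[1])          # B + C
lhs = polar_dot(A, S)
rhs = polar_dot(A, B) + polar_dot(A, C)
print(abs(lhs - rhs) < 1e-9)  # True
```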
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3501832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove a finite element space is not conforming Let $\tau=[-1,1]^2$, and consider the finite element $\left(\tau, Q_{1}, \Sigma\right)$, where $Q_1=\operatorname{span}\{1,x,y,x^2-y^2\}$ and $\Sigma=\{w(-1,0),w(1,0),w(0,-1),w(0,1)\}$.
Show that the unisolvent element leads to a finite element space,
which is not $H^1-$conforming.
I have proved that this is a unisolvent finite element. What do I need to do to prove that it leads to a finite element space which is not $H^1$-conforming?
I know that $H^1$-conforming means something like $V_h\subset H^1$.
| Denote $V_h$ the finite element space. Recall that if $V_h\subset H^1(\Omega)$ and $Q$ consists of continuous functions, then $V_h\subset C(\Omega)$.
Thus, now we only need to show that $V_h$ is not contained in $C(\Omega)$. It's not difficult to find a function in $V_h$ but not continuous.
For example, let us consider $\Omega=\Omega_1\cup \Omega_2$ where $\Omega_1=[-1,0]\times [-1,1]$ and $\Omega_2=[0,1]\times [-1,1]$. Let $v_h\in V_h$ be defined as $v_h|_{\Omega_1}=x$ and $v_h|_{\Omega_2}=x^2-y^2$. Then $v_h|_{\Omega_1}(0,0)=0=v_h|_{\Omega_2}(0,0)$, which implies that $v_h\in V_h$. Note that $v_h|_{\Omega_1}(0,-1)=0\neq -1=v_h|_{\Omega_2}(0,-1)$. Thus $v_h\notin C(\Omega)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that a group of order $p^nq$ for primes $p$ and $q$ is not simple.
Prove that a group of order $p^nq$ for primes $p$ and $q$ is not simple.
I've been able to prove the theorem holds for $p=q$ and $p>q$. If $p<q$ the best I've been able to do is use Sylow to show: $$p^n+p^{n-1}-1\leq q$$
Yet I seem to be stuck. I would appreciate any help.
| A consequence of one of Sylow's theorems is that if there is exactly one $p$-Sylow subgroup $H$ of $G$, then is it normal. Can you do the rest?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Prove for every positive integer $n$, one of $n, n+1, n+2,...,2n$ is the square of an integer Prove for every positive integer $n$, one of $n, n+1, n+2,...,2n$ is the square of an integer.
This seems like a proof by induction, but I'm more used to proofs by induction about a single equation, where I prove the base case, then assume the statement is true for $n$ and prove it for $n+1$; I don't think that applies here.
| Assume $x^2<n$ but $(x+1)^2\ge n$.
Then, if $x\ge3, 2x+1<x^2,$ so $(x+1)^2=x^2+2x+1<2x^2<2n.$
The cases where $x\lt3$ can be easily eliminated.
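The claim is also easy to verify by computer for small $n$: take the largest square not exceeding $2n$ and check that it is at least $n$:

```python
import math

def has_square(n):
    r = math.isqrt(2 * n)     # largest r with r*r <= 2n
    return r * r >= n         # then r*r lies in [n, 2n]

print(all(has_square(n) for n in range(1, 100_001)))  # True
```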
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
You started with one chip. You need to get 4 chips to win. What is the probability that you will win? This is very similar to the question I've just asked, except now the requirement is to gain $4$ chips to win (instead of $3$)
The game is:
You start with one chip. You flip a fair coin. If it throws heads, you gain
one chip. If it throws tails, you lose one chip. If you have zero
chips, you lose the game. If you have four chips, you win. What is the
probability that you will win this game?
I've tried to use the identical reasoning used to solve the problem with three chips, but seems like in this case, it doesn't work.
So the attempt is:
We will denote $H$ as heads and $T$ as tails (i.e $HHH$ means three heads in a row, $HT$ means heads and tails etc)
Let $p$ be the probability that you win the game. If you throw $HHH$ ($\frac{1}{8}$ probability), then you win. If you throw $HT$ ($\frac{1}{4}$ probability), then your probability of winning is $p$ at this stage. If you throw $HHT$ ($\frac{1}{8}$ probability), then your probability of winning is $\frac{1}{2}p$.
Hence the recursion formula is
$$\begin{align}p & = \frac{1}{8} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\
&= \frac{1}{8} + \frac{1}{4}p +\frac{1}{16}p \\
&= \frac{1}{8} + \frac{5}{16}p
\end{align}$$
Solving for $p$ gives
$$\frac{11}{16}p = \frac{1}{8} \implies p = \frac{16}{88}$$
Now, to verify the accuracy of the solution above, I've tried to calculate the probability of losing using the same logic, namely:
Let $p$ denote the probability of losing. If you throw $T$ ($\frac{1}{2}$ probability), you lose. If you throw $H$ ($\frac{1}{2}$ probability), the probability of losing at this stage is $\frac{1}{2}p$. If you throw $HH$ ($\frac{1}{4}$ probability), the probability of losing is $\frac{1}{4}p$. Setting up the recursion gives
$$\begin{align}p & = \frac{1}{2} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\
&= \frac{1}{2} + \frac{1}{4}p +\frac{1}{16}p \\
&= \frac{1}{2} + \frac{5}{16}p
\end{align}$$
Which implies that
$$\frac{11}{16}p = \frac{1}{2} \implies p = \frac{16}{22} = \frac{64}{88}$$
Which means that probabilities of winning and losing the game do not add up to $1$.
So the main question is: Where is the mistake? How to solve it using recursion? (Note that for now, I'm mainly interested in the recursive solution)
And the bonus question: Is there a possibility to generalize? I.e to find the formula that will give us the probability of winning the game, given that we need to gain $n$ chips to win?
| This answer only addresses what's wrong with your recursion, since the other answers (both in this question and your earlier question) already gave many different ways to set up the right recursions (or use other methods).
The key mistake is what you highlighted. When you throw $HHT$, you now have $2$ chips. For the special case of this problem, $2$ chips is right in the middle between $0$ and $4$ chips, so the winning prob is obviously $\color{red}{\frac12}$ by symmetry. But you had it as $\color{red}{\frac12 p}$ which is wrong. Thus the correct equation is:
$$p = P(HHH) + P(HT) p + P(HHT) \color{red}{\frac12}= \frac18 + \frac14 p + \frac18 \color{red}{\frac12}$$
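One quick way to sanity-check the corrected equation is to solve it exactly and compare with classical gambler's-ruin values. The sketch below assumes, as the symmetry argument above indicates, that the game is equivalent to a fair walk on chip counts absorbed at $0$ (loss) and $4$ (win), starting from $1$ chip; the state values $w_i = i/4$ are the standard gambler's-ruin probabilities.

```python
from fractions import Fraction as F

# Solve p = 1/8 + (1/4) p + (1/8)(1/2) for p exactly
p = (F(1, 8) + F(1, 8) * F(1, 2)) / (1 - F(1, 4))
print(p)  # 1/4

# Cross-check against gambler's ruin: for a fair walk absorbed at 0 and 4,
# the win probability from i chips is w[i] = i/4, which satisfies the
# averaging relation w[i] = (w[i-1] + w[i+1]) / 2 at interior states.
w = [F(i, 4) for i in range(5)]
assert all(w[i] == (w[i - 1] + w[i + 1]) / 2 for i in range(1, 4))
assert p == w[1]
```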
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Why must absolute value be used in this equation? I have a simple equation:
$(x-2)^2 < 3$
My first solution was:
$x-2 < \sqrt{3}$
But this gives me only:
$x < \sqrt{3} + 2$
Which is only one solution, so it's not enough. I've figured it out that I must use:
$|x-2| < \sqrt{3}$
Then, solutions are:
$-\sqrt{3} + 2 < x < \sqrt{3} + 2$
My question is how can I know that here:
$(x-2)^2 < 3$
I must use absolute value after square rooting both sides of inequality? Like: $|x-2| < \sqrt{3}$
| The quantity $(x-2)^2$ is derived by the squaring of the quantity $(x-2)$. But, you do not know whether this expression is positive or not. Thus, by square-rooting (as you did), you must take into consideration both cases, which is only yielded by using the absolute value.
Note that the absolute value represents the essence of distance in the set of reals $\mathbb R$. The expression $|x-2| < \sqrt{3}$ means that the distance of $x$ from $2$ is less than $\sqrt{3}$. But $x-2$ can be either positive or negative, exactly as we wanted.
Thus, a rigorous solution in such case when you're trying to solve for every $x \in \mathbb R$, shall be as:
$$(x-2)^2 < 3 \Rightarrow |x-2| < \sqrt{3} \Leftrightarrow -\sqrt{3} + 2 < x < \sqrt{3} + 2$$
A simple example to understand the importance of the absolute value: Take into account the equation $x^2 = 4$. If you do not use the absolute value and thus not taking into consideration the negative value case, you will only get $x=2$. But, isn't also $(-2)^2 = 4$ ? Thus, the right way to go is $x^2 = 4 \Rightarrow |x| = 2$.
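To see that the two descriptions really agree, here is a small numerical check (the sampling grid is my own arbitrary choice) that $(x-2)^2 < 3$ and $-\sqrt{3}+2 < x < \sqrt{3}+2$ hold for exactly the same $x$:

```python
import math

lo, hi = 2 - math.sqrt(3), 2 + math.sqrt(3)
for i in range(-2000, 6000):
    x = i / 1000.0  # sample x from -2.0 to 6.0 in steps of 0.001
    assert ((x - 2) ** 2 < 3) == (lo < x < hi)
print("the two conditions agree at all sampled points")
```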
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Show that $\lim_{x \to a} kx^2 + mx + n = ka^2+ ma +n$ Show that $$\lim_{x \to a} kx^2 + mx + n = ka^2+ ma +n$$ (Assume the domain is $\mathbb{R}$)
Let $\epsilon > 0$ be given. Suppose
$$\delta = min\{ \frac{1}{|kx + ka +m|}, 1\}$$
Then we have if $$ |x- a| < \delta$$
$$|(kx^2 + mx + n) - (ka^2 + ma + n)| = |k(x-a)(x+a)+m(x-a)| = |(x-a)(k(x+a)+m)|=|(x-a)|\cdot|(k(x+a)+m)| < |k(x+a)+m|\cdot\delta = \frac{\epsilon}{|kx + ka + n|}\cdot|kx + ka + n| = \epsilon $$
I am having some doubts about this. I chose $\delta$ to be what it is because of how |f(x) - L| factors and because of how the largest a fraction can be is 1. It seems as though I may need to say more, in particular about whether delta can be chosen the way I have chosen it. I think I am missing something.
| By the triangle inequality we have
$$|k(x^2-a^2)+m(x-a)|\leq|k(x^2-a^2)|+|m(x-a)|=|k|\cdot|x^2-a^2|+|m|\cdot|x-a|$$
This equals the following, before we estimate with $\delta$
$$|k|\cdot|x-a|\cdot|x+a|+|m|\cdot|x-a|\lt |k|\delta \cdot|x+a|+|m|\delta$$
Now we use a sneaky trick to be able to estimate with the triangle inequality again
$$|k|\delta \cdot|x+a|+|m|\delta=|k|\delta \cdot|x-a+(a+a)|+|m|\delta\leq|k|\delta \cdot(|x-a|+|a+a|)+|m|\delta$$
Now we can remove all $x$ from the inequality and estimate with $\delta$
$$|k|\delta \cdot(|x-a|+|a+a|)+|m|\delta\lt |k|\delta \cdot(\delta+2|a|)+|m|\delta=|k|\delta^2+(2|ka|+|m|)\delta $$
Can you solve for $|k|\delta^2+(2|ka|+|m|)\delta-\epsilon=0$? This will give you 2 options of $\delta$, one of which will be positive. That will be your $\delta$ independent of $x$.
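As a numerical illustration of this last step (with sample values of $k$, $m$, $n$, $a$, $\epsilon$ of my own choosing), the positive root of $|k|\delta^2+(2|ka|+|m|)\delta-\epsilon=0$ does give a $\delta$ independent of $x$ that works:

```python
import math

def delta_for(k, m, a, eps):
    # positive root of |k| d^2 + (2|k a| + |m|) d - eps = 0
    A, B = abs(k), 2 * abs(k * a) + abs(m)
    if A == 0:
        return eps / B  # degenerate linear case (k = 0, m != 0)
    return (-B + math.sqrt(B * B + 4 * A * eps)) / (2 * A)

k, m, n, a, eps = 3.0, -2.0, 5.0, 1.5, 1e-3
d = delta_for(k, m, a, eps)
f = lambda x: k * x * x + m * x + n

# every x with |x - a| < delta satisfies |f(x) - f(a)| < eps
for i in range(-100, 101):
    x = a + d * i / 101.0
    assert abs(f(x) - f(a)) < eps
print(d > 0)  # True
```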
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
One one and continuous function from $\Bbb R$ to $\Bbb R$ Let $f :\mathbb R \to \mathbb R$ be a continuous and one one function. Then which of the following is true?
*
*f is onto.
*f is either strictly increasing or strictly decreasing.
*There exist $x \in \Bbb R$ such that $f (x) = 1$
*f is unbounded
| 1, 3 and 4 are not true.
The function $f(x) = \tan^{-1}x + 10$ is a counterexample for all of them.
2 is true.
To show this, assume the contrary and derive a contradiction using intermediate value theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Width and thickness of the Samsung Odyssey G9 monitor The width and thickness of the upcoming Samsung monitor have not been released yet.
However, we know it's a part of a circle of 1m radius and we know the length of that part of the circle. I'm guessing we should be able to get the width and thickness of the monitor right?
Here's where I'm at:
Thanks for helping me to see if it will fit my desk :)
| First, we need to convert $47.17$ inches into metric units. This is $119.8$ centimetres.
The angle subtended by the monitor is
$$360^\circ×\frac{119.8}{2\pi×100}=68.64^\circ$$
The width of the monitor forms the third side of a triangle with the other two sides $1$ metre and included angle $68.64^\circ$. Thus it may be derived by the law of cosines:
$$\sqrt{100^2+100^2-2(100)(100)\cos 68.64^\circ}=112.77\text{ cm}$$
The distance from the centre of the circle to the monitor is then an application of the Pythagorean theorem. The thickness of the monitor is its complement in $1$ metre:
$$100-\sqrt{100^2-(112.76/2)^2}=17.41\text{ cm}$$
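The same computation can be sketched in code using the chord ($2R\sin(\theta/2)$) and sagitta ($R(1-\cos(\theta/2))$) formulas, which are equivalent to the law-of-cosines and Pythagorean steps above:

```python
import math

R = 100.0               # radius of curvature in cm
arc = 47.17 * 2.54      # curved screen length: 47.17 inches converted to cm

theta = arc / R                          # subtended angle in radians (about 68.6 degrees)
width = 2 * R * math.sin(theta / 2)      # chord = monitor width
depth = R * (1 - math.cos(theta / 2))    # sagitta = how far the screen bows ("thickness")
print(round(width, 2), round(depth, 2))  # 112.77 17.41
```

The small difference from the hand computation above comes from rounding the arc length to $119.8$ cm there.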
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3502980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Solving $m\frac{\mathrm d ^2x}{\mathrm d t^2}+b\frac{\mathrm d x}{\mathrm d t}+kx=F_0\cos(\omega_dt)$ for $x(t)$ I am studying damped oscillations and forced oscillations in physics which are represented by the following differential equations in order.
$$\begin{align}m\dfrac{\mathrm d^2x}{\mathrm dt^2}+b\dfrac{\mathrm dx}{\mathrm dt}+kx&=0 \tag1\\m \dfrac{\mathrm d^2x}{\mathrm dt^2}+b\dfrac{\mathrm dx}{\mathrm dt}+kx&=F_0\cos(\omega_dt)\tag2\end{align}$$
I am not very much familiar with solving $2$nd order differential equations. I am able to solve equation $(1)$ by intuition. But solving equation $(2)$ seems complicated. It would be helpful if I could get a demonstration of the procedure to solving this type of ODE. Thanks
Edit $1$
I looked up the method of using Laplace transform to solving differential equations. I tried doing the first one using that.
Attempt
$$\begin{aligned}mx''(t)+bx'(t)+kx&=0\\ m\mathcal{L}\{x''\}+b\mathcal{L}\{x'\}+k\mathcal{L}\{x\}&=0\\ \mathcal{L}\{x\}&=A_0\frac{(ms+b)\sin\delta+m}{ms^2+bs-k}\end{aligned}$$
Is that right? Would now taking the inverse Laplace transform give me $x(t)=A_0e^{-bt/2m}\sin(\omega t+\delta)$ where $\omega^2=k/m-b^2/4m^2$? Thanks
| You can use Laplace transform if you can. It is the easiest way of solving linear differential equations, I think.
Taking laplace transform of both sides, by noting that $X(s)$ is the Laplace transform of $x(t)$ and Laplace transform of the derivative of $x(t)$ is multiplication by $s$ in Laplace domain (if initial conditions are zero):
\begin{equation*}
ms^2X(s)+bsX(s)+kX(s)=F_0 L(\cos(\omega_d t))=\frac{1}{2}F_0 L(e^{j \omega_d t} + e^{-j \omega_d t}) = \frac{1}{2}F_0(\frac{1}{s-j\omega_d}+\frac{1}{s+j\omega_d})
\end{equation*}
\begin{equation*}
X(s) = \frac{1}{2}F_0 (\frac{1}{s-j\omega_d}+\frac{1}{s+j\omega_d}) \frac{1}{ms^2+bs+k}
\end{equation*}
(For finding the laplace of $e^{j\omega_d t}$, please see the first equation below).
From this point on, you know the Laplace transform $X(s)$. For finding the inverse Laplace transform, we decompose $X(s)$ into partial fractions, and then use Laplace table for primitive functions to find the inverse Laplace by inspection. Just look at the inverse Laplace of terms like $\frac{1}{s-a}$ or $\frac{1}{(s-a)^n}$ in the table, and note that Laplace transform is a linear transformation (that is you can do superposition). Here are the useful terms from the table:
\begin{equation*}
L^{-1}(\frac{c}{s+p})=ce^{-pt}
\end{equation*}
\begin{equation*}
L^{-1}(\frac{c}{(s+p)^n})=c\frac{t^{n-1}}{(n-1)!}e^{-pt}
\end{equation*}
This way you find the $x(t)$. Here note that you can get complex numbers, as $p$ can be complex. However, as the ODE has real coefficients, they come in conjugate pairs and add up to a real function.
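As a sanity check on all of this (not part of the Laplace method itself), one can integrate equation $(2)$ numerically and compare the long-run amplitude with the textbook steady-state formula $F_0/\sqrt{(k-m\omega_d^2)^2+(b\omega_d)^2}$. The parameter values below are arbitrary sample choices:

```python
import math

m, b, k = 1.0, 0.4, 4.0   # arbitrary sample parameters
F0, wd = 1.0, 1.2

def accel(t, x, v):
    return (F0 * math.cos(wd * t) - b * v - k * x) / m

# RK4 integration of m x'' + b x' + k x = F0 cos(wd t), starting from rest
x, v, t, dt = 0.0, 0.0, 0.0, 0.002
samples = []
while t < 150.0:
    k1x, k1v = v, accel(t, x, v)
    k2x, k2v = v + 0.5*dt*k1v, accel(t + 0.5*dt, x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v, accel(t + 0.5*dt, x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v, accel(t + dt, x + dt*k3x, v + dt*k3v)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6
    t += dt
    if t > 100.0:                 # transient has fully decayed by then
        samples.append(x)

amp_numeric = (max(samples) - min(samples)) / 2
amp_theory = F0 / math.sqrt((k - m * wd**2) ** 2 + (b * wd) ** 2)
print(round(amp_numeric, 4), round(amp_theory, 4))  # both approximately 0.3839
```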
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3503059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
proof $ \lvert\int_a^bf(t)dt\rvert\le\frac{(b-a)^2M}{4} $ Let $ f \in C^1([a,b],\Bbb R)$ such that $f(a)=f(b)=0$ with $a \lt b $.
If $M=\sup\limits_{x\in[a,b]}\lvert f'(x)\rvert $, show that:
$$ \left|\int_a^bf(t)dt\right|\le\frac{(b-a)^2M}{4} $$
I tried this
for the MVT: $f'(t)=\frac{f(t)-f(a)}{t-a} $
then $f'(t)(t-a)=f(t)$
then $$ \left|\int_a^bf(t)dt\right| =\left|\int_a^bf'(x)(t-a)dt\right|\le \left|\int_a^bM(t-a)dt\right|=\left|M\int_a^b(t-a)dt\right|=\left|\frac{(b-a)^2M}{2}\right|$$
now
$$\left|\frac{(b-a)^2M}{2}\right|=\frac{(b-a)^2M}{2} $$
Maybe I made some mistake in the development. I don't understand why I ended up with $2$ in the denominator instead of $4$.
| There's nothing wrong with what you've done. $\frac{(b - a)^2M}2$ is the best you can do if $|f'(x)| \le M$ and $f(a) = 0$. Because the MVT says that $|f(x)| \le M(x - a)$ and the integral of that will give you $M(b - a)^2/2$.
To get $\frac14$ you need to use the fact that $f(b) = 0$ as well. Then MVT will give you
$$ |f(x)| \le \min\{M(x - a), M(b - x)\} = \begin{cases} M(x - a) & a \le x \le \frac{a + b}2 \\
M(b - x) & \frac{a + b}2 \le x \le b \end{cases}, $$
which has a sort of triangle shape.
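Integrating this triangular bound does produce the $\frac14$: the region under $\min\{M(x-a), M(b-x)\}$ is a triangle with base $b-a$ and height $M(b-a)/2$, hence area $M(b-a)^2/4$. A quick midpoint-rule check with sample values of my own choosing:

```python
import math

M, a, b = 2.0, 1.0, 4.0   # sample values
n = 10_000
h = (b - a) / n

def bound(x):
    # the triangle-shaped bound on |f(x)| from the answer
    return min(M * (x - a), M * (b - x))

area = math.fsum(bound(a + (i + 0.5) * h) * h for i in range(n))
print(area, M * (b - a) ** 2 / 4)  # both approximately 4.5
```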
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3503551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Is every power series infinitely differentiable everywhere? I have found various sources on the internet that say that power series are infinitely differentiable on their interval of convergence:
Wikipedia:
Once a function $f(x)$ is given as a power series as above, it is differentiable on the interior of the domain of convergence.
Northwestern University:
[...] power series are (infinitely) differentiable on their intervals of convergence [...]
But isn't every power series (infinitely) differentiable everywhere?
After all, a power series is just an infinite polynomial and a polynomial of degree $n$ is differentiable $n+1$ times. Source
Doesn't this imply, that a polynomial of "degree $\infty$" is differentiable $\infty$ times?
| There is lot of difference between polynomials and power series.
You cannot define $\sum z^{n}$ for $|z| >1$. So there is no question of differentiability of this on $\{z: |z| >1\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3503861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Moderate complexity between polynomial and exponential There are ,,plenty'' of functions growing faster then any polynomial and at the same time growing slower than any exponential function (with base $>1$) e.g. $f(x)=e^{g(x)}$ where $g(x)=\log^{c} x$ where $c>1$ or $g(x)=x^c$ where $c \in (0,1)$. I would like to know some examples of problems for which there are (most efficient) algorithms which run in the time $f(n)$ for any such $f$. This question is motivated by the visible dichotomy between polynomial time algoritms and exponential time algoritms which one encounters in almost all classical problems.
| The Graph isomorphism problem is an example of a problem unknown to be in $\text{P}$, that was conjectured to not be $\text{NP-Hard}$, and relatively recently (2015) Laszlo Babai published a paper proving it can be done in $\exp(\log(n)^{O(1)})$ which is quasi-polynomial time.
Remark: Someone found a mistake in the paper, that was later fixed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing a subgroup of $\mathbb{Z}\times\mathbb{Z}$ is cyclic.
In the group $G = \mathbb{Z}\times\mathbb{Z}$, consider the subgroup $H$ generated by $(-5,1)$ and $(1,-5)$. I want to show that $G/H$ is cyclic and find the standard cyclic group it is isomorphic to.
I haven't much group theory experience, but understand that $G$ is a group. Firstly what is meant by $H$ being generated by the mentioned elements of $G$? I know it's the intersection of all subgroups that contain those two particular elements, but can it be thought of as all the multiples and linear combinations of the two?
And I am also confused about the rest of the question.
Edit: I think confusion lies with the definition of 'generated by'. I understand its the intersection of all these subgroups that contain the set of elements (or generators) but is there a more useful equivalent definition.
| If $H$ is generated by $h_1$ and $h_2$ then $H=\{ah_1+bh_2\}$ where $a$ and $b$ are integers. If you think of $G$ as the set of points in the plane with integer co-ordinates then $H$ is the lattice of points with co-ordinates $(-5a+b, a-5b)$ where $a$ and $b$ are integers.
The elements of $G/H$ correspond to the co-sets of $H$ within $G$. Since the determinant of
$\begin{pmatrix} -5 & 1 \\ 1 & -5 \end{pmatrix}$
is $24$, the area of the parallelogram bounded by $(0,0)$, $(-5,1)$ and $(1,-5)$ is $24$ so there are $24$ such co-sets.
Since $G$ is abelian, $G/H$ must also be abelian, so $G/H$ is an abelian group of order $24$. To show that $G/H$ is isomorphic to $C_{24}$ and not to some other abelian group with order $24$ (such as $C_{12} \times C_2$) we must find an element of $G/H$ that has order $24$. The co-set that contains the point $(0,1)$ is a candidate for this, since
$-5a+b=0 \Rightarrow b=5a \Rightarrow a-5b = -24a$
so if $k(0,1) \in H$ then $k$ must be a multiple of $24$, so the order of the $(0,1)$ co-set within $G/H$ is $24$.
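These facts can be brute-force checked. Solving $-5a+b=x$, $a-5b=y$ gives $a=-(5x+y)/24$ and $b=-(x+5y)/24$ (a short derivation of mine, not in the answer above), so $(x,y)\in H$ exactly when $24$ divides both $5x+y$ and $x+5y$:

```python
def in_H(x, y):
    # (x, y) = a(-5, 1) + b(1, -5) for some integers a, b
    return (5 * x + y) % 24 == 0 and (x + 5 * y) % 24 == 0

assert in_H(-5, 1) and in_H(1, -5) and in_H(0, 0)

# order of the coset of (0, 1) in G/H: smallest k > 0 with (0, k) in H
order = next(k for k in range(1, 1000) if in_H(0, k))
print(order)  # 24
```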
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Expectation under conditional distribution I am trying to understand the following equivalence:
\begin{array}{c}{\mathbb{E}_{p(x, y)}\left[\log \frac{q(x | y)}{p(x)} \frac{p(x | y)}{q(x | y)}\right] =\mathbb{E}_{p(x, y)}\left[\log \frac{q(x | y)}{p(x)}\right]+\mathbb{E}_{p(y)}\left[D_{K L}(p(x | y) \| q(x | y))\right]}\end{array}
which is used in [1] at eq. 2.
The first term is clear to me. However I don't understand completely how the second term is calculated.
As far as I understand it is based on the following:
$
\mathbb{E}_{p(x,y)}\left[\log \frac{p(x | y)}{q(x | y)}\right] = \mathbb{E}_{p(y)}\left[D_{K L}(p(x | y) \| q(x | y))\right]
$
Apparently we have;
$D_{K L}(p(x | y) \| q(x | y)) = \mathbb{E}_{p(x | y)}\left[\log \frac{p(x | y)}{q(x | y)}\right]$
So to my understanding this implies the following:
$
\mathbb{E}_{p(x,y)}\left[\log \frac{p(x | y)}{q(x | y)}\right] =
\mathbb{E}_{p(y)}\left[ \mathbb{E}_{p(x | y)}\left[\log \frac{p(x | y)}{q(x | y)}\right]\right]
$
Unfortunately, it is not clear to me how, or why, $\mathbb{E}_{p(x,y)}$ can be decomposed into these nested expectations under the marginal $p(y)$ and the conditional $p(x|y)$.
I think I am clearly missing a very simple point here.
[1] https://arxiv.org/pdf/1905.06922.pdf
| This is just the law of total expectation: Quite generally,
$$
\mathbb E_{p(x,y)}[Z]=\mathbb E_{p(y)}\left[\mathbb E_{p(x\mid y)}[Z]\right]\;.
$$
That is, you can first form the expectation as if you knew $Y$, and then form the expectation of the result using the marginal distribution of $Y$. In the discrete case, this is just
$$
\sum_{x,y}p(x,y)Z(x,y)=\sum_y\left(\sum_\xi p(\xi,y)\right)\sum_x\frac{p(x,y)}{\sum_\xi p(\xi,y)}Z(x,y)\;.
$$
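The identity can be checked exactly on a tiny discrete joint distribution (all values below are arbitrary choices of mine), with the inner expectation taken under $p(x\mid y)$ and the outer one under the marginal $p(y)$:

```python
from fractions import Fraction as F

# a joint distribution p(x, y) over x in {0, 1, 2} and y in {0, 1}
p = {(0, 0): F(1, 12), (1, 0): F(2, 12), (2, 0): F(3, 12),
     (0, 1): F(1, 12), (1, 1): F(4, 12), (2, 1): F(1, 12)}

def Z(x, y):          # any function of (x, y)
    return x * x + 3 * y

lhs = sum(p[x, y] * Z(x, y) for (x, y) in p)

p_y = {y: sum(q for (x, yy), q in p.items() if yy == y) for y in (0, 1)}
inner = {y: sum(q / p_y[y] * Z(x, yy) for (x, yy), q in p.items() if yy == y)
         for y in (0, 1)}
rhs = sum(p_y[y] * inner[y] for y in (0, 1))

print(lhs == rhs)  # True
```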
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that two block matrices over $\mathbb{F}$ are similar
Let $\mathbb{F}$ be a field, $n\in\mathbb{N}_{\geq 1}$ and $A\in M_{2n}(\mathbb{F})$, such that $$A=\begin{pmatrix} 0_n & 0_n \\ B & 0_n \end{pmatrix}$$ with $B\in GL_n(\mathbb{F})$. Show that A is similar to the matrix $$\begin{pmatrix} C & 0_2 & \ldots & 0_2 \\ 0_2 & C & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0_2 \\ 0_2 & \ldots & 0_2 & C \end{pmatrix}$$ where $C=\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\in M_2(\mathbb{F})$.
I searched the Internet well enough and found nothing similar.
Thanks in advance!
| Denote by $\Gamma$ the matrix $$\begin{pmatrix} C&0_2&0_2&\dots&0_2\\0_2&C&0_2&\dots&0_2\\0_2&0_2&C&\dots&0_2\\\vdots&\vdots&\vdots&\ddots&\vdots\\0_2&0_2&0_2&\dots&C\end{pmatrix}.$$
Let $P$ denote the permutation matrix associated to the permutation $\sigma\in \mathcal{S}_{2n}$ such that
$$\sigma(k)=\left\{\begin{array}{ll}
n+\frac{k}{2}&\text{if}\ k\ \text{is even},\\
\frac{k+1}{2}&\text{if}\ k\ \text{is odd}.
\end{array}\right.$$
i.e., $P=[p_{i,j}]$, where
$$p_{i,j}=\delta_{\sigma(i),j}.$$
Observe that $P^{-1}\Gamma P$ is given by
$$J=\begin{pmatrix}0_n &0_n\\ I_n&0_n\end{pmatrix}.$$
Next, we know that
$$\begin{pmatrix}I_n&0_n\\0_n&B\end{pmatrix}\begin{pmatrix}0_n&0_n\\I_n&0_n\end{pmatrix}\begin{pmatrix}I_n&0_n\\0_n&B^{-1}\end{pmatrix}=\begin{pmatrix}0_n&0_n\\B&0_n\end{pmatrix}=A.$$
Thus, if $M$ denotes $\begin{pmatrix}I_n&0_n\\0_n&B\end{pmatrix}$, then $$A=MJM^{-1}=M(P^{-1}\Gamma P)M^{-1}=(MP^{-1})\Gamma(MP^{-1})^{-1}.$$
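Since $A=X\Gamma X^{-1}$ with $X=MP^{-1}$ is equivalent to $AX=X\Gamma$, the construction can be verified without inverting anything. This is a small check of mine for $n=2$ with an arbitrary invertible $B$, using that $P^{-1}=P^{T}$ for a permutation matrix:

```python
n = 2
B = [[1, 2], [3, 4]]   # arbitrary invertible integer matrix (det = -2)
N = 2 * n

Z = [[0] * n for _ in range(n)]
I = [[int(i == j) for j in range(n)] for i in range(n)]

def block(tl, tr, bl, br):
    # assemble a 2x2 block matrix from four n x n blocks
    return [r1 + r2 for r1, r2 in zip(tl, tr)] + [r1 + r2 for r1, r2 in zip(bl, br)]

A = block(Z, Z, B, Z)
M = block(I, Z, Z, B)

# Gamma: n copies of C = [[0, 0], [1, 0]] on the diagonal
G = [[0] * N for _ in range(N)]
for t in range(n):
    G[2 * t + 1][2 * t] = 1

sigma = lambda k: n + k // 2 if k % 2 == 0 else (k + 1) // 2   # 1-indexed
P = [[int(sigma(i + 1) == j + 1) for j in range(N)] for i in range(N)]
Pinv = [list(r) for r in zip(*P)]   # inverse of a permutation matrix is its transpose

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y))) for j in range(len(Y[0]))]
            for i in range(len(X))]

X = mul(M, Pinv)
print(mul(A, X) == mul(X, G))  # True: A (M P^{-1}) = (M P^{-1}) Gamma
```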
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
tail bound of the euclidean norm of a multivariate normally distributed random variable Let $x$ have a multivariate normal distribution, i.e., $x\sim \mathcal{N}(\mu,\Sigma)$. What is an upper bound $\Pr(||x||_2^2\geq M)\leq ?$, where $M$ is a constant.
Thanks a lot!
| $P(\|x\|_2^{2} \geq M) \leq \frac 1 M E\|x\|^{2}=\frac 1 M \sum\limits_{i=1}^{n} Ex_i^{2}=\frac 1 M \sum\limits_{i=1}^{n} (\sigma_i^{2}+\mu_i^{2})$.
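A Monte Carlo illustration of this Markov bound (an illustration, not a proof), with a diagonal $\Sigma$ so sampling is easy; all numbers below are arbitrary sample choices:

```python
import random

random.seed(1)
mu = [1.0, -1.0]
sigma = [1.0, 2.0]     # diagonal Sigma: Var(x_i) = sigma[i]^2

E_norm2 = sum(s * s + m * m for s, m in zip(sigma, mu))  # sum of sigma_i^2 + mu_i^2 = 7
M = 30.0
bound = E_norm2 / M    # Markov bound, about 0.233

trials = 20_000
hits = sum(1 for _ in range(trials)
           if sum(random.gauss(m, s) ** 2 for m, s in zip(mu, sigma)) >= M)
print(hits / trials, "<=", bound)
```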
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to apply squeeze theorem to this limit. I'm trying to solve $$\int_0^∞ e^{-x} \cos(x)\,dx$$
It is not hard to find that $$\int e^{-x} \cos(x)=\frac{1}{2}(e^{-x} \sin(x)-e^{-x} \cos(x))+C$$
From all this follows that
$$\lim_{t\to\infty}\int_0^te^{-x} \cos(x) \, dx = \frac{1}{2}\lim_{t\to\infty}(e^{-t} \sin(t) - e^{-t} \cos(t))+\frac{1}{2}$$
Notice that I have simplified already a lot the expression we are taking the limit of.
I have not been able to find this limit; a fellow student told me that I had to use the squeeze theorem, but I cannot see how or where. Any guidance on how the theorem can help with this limit?
| $$\because\sin x-\cos x=\sqrt 2(\cos\frac{\pi}{4}\sin x-\sin\frac{\pi}{4}\cos x)=\sqrt 2\sin (x-\frac{\pi}{4})$$
$$\therefore\lim_{t\rightarrow +\infty}|e^{-t}\sin t-e^{-t}\cos t|= \lim_{t\rightarrow +\infty}|\sqrt 2e^{-t}\sin (t-\frac{\pi}{4})|\leqslant \lim_{t\rightarrow +\infty}|\sqrt 2e^{-t}|=0$$
$$\therefore\lim_{t\rightarrow +\infty}(e^{-t}\sin t-e^{-t}\cos t)=0$$
$$\therefore\frac{1}{2}\lim_{t\rightarrow +\infty}(e^{-t}\sin t-e^{-t}\cos t)+\frac{1}{2}=\frac{1}{2}$$
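The value $\frac{1}{2}$ can also be confirmed numerically. Truncating the integral at $T=40$ is harmless, since the tail is bounded by $\int_{40}^{\infty} e^{-x}\,dx = e^{-40}$:

```python
import math

def simpson(f, a, b, n):   # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

val = simpson(lambda x: math.exp(-x) * math.cos(x), 0.0, 40.0, 100_000)
print(round(val, 10))  # 0.5
```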
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
} |
False Σ1-sound theories I was wondering how Σ1-sound theories in the language of first-order arithmetic can go wrong. As far as I can tell, they cannot prove false claims about consistency, since such claims (e.g., that there is a proof of "0=1" from the axioms) are equivalent (in weak theories of arithmetic) to Σ1 sentences. Can they prove that a sequence is finite when it is really infinite? It would be helpful to hear about some concrete Σ1-sound theories that prove blatantly false things. Thanks!
| Basically, soundness at one level of the arithmetical hierarchy doesn't prevent errors higher up.
For example, let $PA_n$ be PA + all true $\Pi_n$ sentences (incidentally, note that every true $\Sigma_{n+1}$ sentence is a theorem of $PA_n$). Then we can form the Godel-Rosser sentence for $PA_n$:
$(*)_n$: "For every proof of me from $PA_n$, there is a shorter disproof of me from $PA_n$."
The usual arguments show that $(*)_n$ is independent of $PA_n$ and true (in particular, $PA_n$ is fully sound and $PA_n$ can decide whether something is a valid $PA_n$ proof); so $PA_n+(*)_n$ is $\Pi_n$-complete and hence $\Sigma_n$-sound, but not fully sound.
As a cautionary tale, note that Godel's second incompleteness theorem breaks down when we try to push it beyond computably axiomatizable theories: there is an arithmetically definable theory which proves its own consistency, in fact it's just an unusual arithmetic description of PA itself! I believe this is folklore, but see e.g. here.
For a more non-constructive but simpler argument, note that $Thm(PA_n)$ has Turing degree ${\bf 0^{(n+1)}}$ but $Th(\mathbb{N})$ is not arithmetic, so we must have $Thm(PA_n)\subsetneq Th(\mathbb{N})$ for all $n$. Pick $\sigma\in Th(\mathbb{N})\setminus Thm(PA_n)$, and consider $PA_n+\neg\sigma$.
(I'm writing "$Thm(S)$" for the deductive closure of $S$ here.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3504874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Matrix with even integer entries doesn't have an odd eigenvalue Let $A \in M_n(\mathbb{Z})$ with even entries. Prove that $A$ doesn't have an odd eigenvalue.
| Suppose $A$ has an odd eigenvalue $k$. Then the characteristic polynomial $\det(A-\lambda I)$ can be written as:
$$\lambda^n+a_{n-1}\lambda^{n-1}+\ldots+a_0=(\lambda-k)(b_{n-1}\lambda^{n-1}+b_{n-2}\lambda^{n-2}+\ldots+b_1\lambda+b_0)$$
where each $a_i$ is even $(i=0, \ldots, n-1)$.
The right hand side evaluates to:
$$b_{n-1}\lambda^{n}+(b_{n-2}-kb_{n-1})\lambda^{n-1}+(b_{n-3}-kb_{n-2})\lambda^{n-2}+\ldots+(b_0-kb_1)\lambda-kb_0$$
Each of the coefficients must match the even $a_i$, so:
$$b_{n-1}=1\text{ which is odd},\\
a_{n-1}=b_{n-2}-kb_{n-1}\text{ even} \implies b_{n-2}\text{ odd},\\
a_{n-2}=b_{n-3}-kb_{n-2}\text{ even} \implies b_{n-3}\text{ odd},\\
\cdots \\
a_{1}=b_0-kb_1\text{ even} \implies b_0 \text{ odd}, \\
a_{0}=-kb_0\text{ even} \implies k\text{ even}.
$$
This is a contradiction.
Qed.
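An equivalent way to see the claim (my own restatement, not part of the proof above): for odd $k$, $A-kI\equiv I \pmod 2$, so $\det(A-kI)$ is odd and in particular nonzero, meaning $k$ is not an eigenvalue. This can be checked directly on a sample matrix of my own choosing:

```python
from itertools import permutations

def det(M):
    # Leibniz formula; fine for small matrices
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        prod = 1
        for i in range(n):
            prod *= M[i][perm[i]]
        total += sign * prod
    return total

A = [[2, -4, 6], [0, 8, 2], [10, 2, -2]]   # arbitrary matrix with even entries
for k in (-3, -1, 1, 3, 5, 7):
    d = det([[A[i][j] - k * (i == j) for j in range(3)] for i in range(3)])
    assert d % 2 == 1   # det(A - kI) is odd, hence nonzero: k is not an eigenvalue
print("no odd k in the sample is an eigenvalue")
```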
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Find the values for which $A^2 = I_2$, A is a matrix, with $A \neq I_2$ and $A \neq -I_2$ First I tried to find $A^2$ with
$$
A=\begin{bmatrix}
\alpha & \beta\\
\delta & \gamma\\
\end{bmatrix}
$$
I multiplied this by itself and got:
$$
\begin{bmatrix}
\alpha^2+\beta\delta& \beta(\alpha + \gamma)\\
\delta (\alpha + \gamma) & \delta\beta+\gamma^2\\
\end{bmatrix}
$$
I put this in a system:
$$
\left\{
\begin{array}{c}
\alpha^2+\beta\delta = 1 \\
\beta(\alpha + \gamma) = 0 \\
\delta (\alpha + \gamma) = 0 \\
\delta\beta+\gamma^2 = 1 \\
\end{array}
\right.
$$
I tried to solve for $\beta$ first and right away got an issue:
$$\beta = \frac{1-\alpha^2}{\delta}$$
One solution given by my book is:
$$
\begin{bmatrix}
1& 0\\
0 & -1\\
\end{bmatrix}
$$
So $\delta$ can be zero but according to my system it can't. How is this possible?
| You assumed $\delta \neq 0$ when you divided by it (when solving for $\beta$). The first system remains valid whether $\delta = 0$ or not.
In fact, when $\delta = 0, |\alpha| = |\gamma| = 1$. If $\gamma = -\alpha, \beta$ can be anything.
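A quick check of the $\delta = 0$ family, using the entries of $A^2$ exactly as computed in the question: with $\delta = 0$, $\gamma = -\alpha$ and $|\alpha| = 1$, the entry $\beta$ is completely free.

```python
def square(a, b, c, d):
    # entries of A^2 for A = [[a, b], [c, d]], matching the question's computation
    return (a * a + b * c, b * (a + d), c * (a + d), c * b + d * d)

assert square(1, 0, 0, -1) == (1, 0, 0, 1)   # the book's solution
for beta in (-3, 0.5, 7):                    # beta is arbitrary when delta = 0
    assert square(1, beta, 0, -1) == (1, 0, 0, 1)
print("all checked")
```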
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Getting probability greater than 1? Consider a random walk starting at a positive integer $k$. Now, trying to calculate the probability of returning to zero (from $k$), I did the following:
\begin{align}
P&(\text{returning to zero}) \\
&=P(\text{returning to zero in } k\text{ steps}) \\
&\quad+ P(\text{returning to zero in } k+2\text{ steps} ) \\
&\quad+ P(\text{returning to zero in } k+4\text{ steps} ) \\
&\quad+\cdots \\
&=\Bigl({1\over 2}\Bigr)^k \\
&\quad+\Bigl({1\over 2}\Bigr)^{k+2}{k+2\choose 2} \\
&\quad+\Bigl({1\over 2}\Bigr)^{k+4}{k+4\choose 4} \\
&\quad+\cdots \\
&=\Bigl({1\over 2}\Bigr)^k\sum_{j=0}^\infty\Bigl({1\over 2}\Bigr)^{2j}{k+2j\choose 2j}
\end{align}
Now as I put some numbers in Desmos, it is very clear that this evaluates to greater than $1$. Clearly, I’ve made some mistake.
| Clue:
$P$(Returning to zero in $k+2$ steps) should be $(\frac12)^{k+2}(k+1)$
Returning to zero in $k$ steps can only be done by taking the path $$k, k-1, \dots, 0$$
Returning to zero in $k+2$ steps can only be done by deviating exactly once from the '$k$ steps path'. So it's something like $$k, k-1, \dots, n, n+1, n, \dots, 0$$
There are $k+1$ integers between $0$ and $k$, leading to the result stated on top.
Now, if the walk is also allowed on the negatives, then there are $k+2$ possible deviations, because you can also go : $$k, \dots, 0, -1,0$$
But it's never ${k+2 \choose 2}$
I hope this was helpful.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why writing equations in two column explanation is bad math? A question arised while I was reading "A Guide to Write Mathematics" (by Dr. Kevin P. Lee, p. 5) about writing explanation of steps, in equations calculations or listing some equations, without using two columns like this:
I don't think it is bad or hard to read; actually, I believe it is easier to follow the calculations. Do you have any arguments for not doing it?
I attach some examples of how I apply it.
I'd appreciate your comments.
| The author of the guide is trying to teach some elements of compositional style, emphasizing the idea that mathematics should be presented in a way that is readable as sensible English sentences. From this point of view, the two-column display reads as an alternation of declaratives and imperatives (i.e., statements and commands); substituting "expression" and "unknown" for the precise formulas and writing out the equal sign as the verbal "is equal to," we have the following piece of prose:
A complicated expression is equal to negative one. Solve this
equation. The complicated expression plus one is equal to zero.
Collect the terms on one side. The square of a simpler expression is
equal to zero. Factor. A very simple exponential is equal to one. Use
the Zero Factor Property. The unknown is equal to one. Solve for the
unknown.
Compare this with the guide's recommended presentation excerpted in Matthew Daly's answer. Among the defects of the two-column presentation, most of the commands (Collect! Factor! Use! Solve!) come after the command has been carried out; only "Solve this equation" precedes the steps that do what that sentence commands. One can, of course, observe that that's not how the imperatives are meant to be understood, but that's sort of the author's point: good, readable, mathematical writing does not contravene the rules of good English (or whatever language is being written in).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Relationship between groups and spaces From the definition on Wikipedia: In mathematics, a space is a set (sometimes called a universe) with some added structure.
A group seems to satisfy this definition. For example, we know that Lie groups are differentiable manifolds.
So my question is: is it proper to say that a group is also a kind of spaces, just like vector spaces and topological spaces, but with different or simpler structure?
| Sure, it can be useful to attempt to think about mathematical concepts in a way that goes against the grain a little bit. Here are a few ways of thinking of groups as spaces. I suppose it should go without saying that these are very useful ways of thinking about groups.
There are topological groups, which you could define as topological spaces that admit the structure of a group, or as groups where the defining features (multiplication, inversion) are continuous maps. As you mention, Lie groups are a good example of topological groups. Of course, any group may be given the discrete topology, but even this is relevant, for instance, in some definitions of a properly discontinuous group action.
Another topology that any group can be given is the profinite topology, where a neighborhood basis of the identity in $G$ is the set of finite index subgroups of $G$. Useful properties of infinite groups, like residual finiteness, are topological statements about the profinite topology.
If $S$ is a generating set for a group $G$, $G$ can be given the word metric, $d_S$, where $d_S(g,h)$ is the minimum number of elements of $S$ (and their inverses) required to write $gh^{-1}$. Although this gives every group the discrete topology, when $S$ is a finite set, one can learn quite a lot about $G$ from $d_S$, at least as distances go to infinity! (This is a starting point for geometric group theory.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to show the additive property of a sum of a series So I have been working on this proof for a bit now. I found a way to prove it directly, but am struggling to find a way to prove it with limits.
Question:
Let $\sum a_n = A$ and $\sum b_n = B$, where $a_n, b_n, A, B$ are real numbers.
Show that $\sum (a_n+b_n) = A+B$.
| If you have two convergent sequences, the limit of the sum is the sum of the limits. So you have
\begin{align}
\sum_{n=1}^\infty(a_n+b_n)&=\lim_{m\to\infty}\sum_{n=1}^m(a_n+b_n)=\lim_{m\to\infty}\left(\sum_{n=1}^ma_n+\sum_{n=1}^mb_n\right)\\ \ \\
&=\lim_{m\to\infty}\sum_{n=1}^ma_n+\lim_{m\to\infty}\sum_{n=1}^mb_n
=\sum_{n=1}^\infty a_n+\sum_{n=1}^\infty b_n
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\det(AB - BA) = \frac{1}{3}\left(\mathrm{Trace}(AB - BA)^3\right)$ I want a correct and feasible answer to this question.
So does anyone have any creative ideas to prove this equation?
$A$ and $B$ are $3\times3$ matrices.
$\det(AB - BA) = \dfrac{1}{3}\operatorname{Trace}\left((AB - BA)^3\right)$
Answer:
We can write and compute both sides to prove it but this is not a good solution!
| This follows easily from Cayley-Hamilton theorem. Since $M=AB-BA$ has zero trace, by Cayley-Hamilton theorem, $M^3=cM+dI_3$ where $c$ is some scalar and $d=\det(M)$. Therefore $\operatorname{tr}(M^3)=3d$ and the result follows.
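Here is a quick numeric verification of the identity on random $3\times3$ matrices (a spot check, not a proof):

```python
# Check det(AB - BA) = tr((AB - BA)^3) / 3 on random 3x3 matrices,
# using plain-Python helpers to stay dependency-free.
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def trace(M):
    return M[0][0] + M[1][1] + M[2][2]

random.seed(0)
max_err = 0.0
for _ in range(100):
    A = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    B = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(3)]
    AB, BA = matmul(A, B), matmul(B, A)
    M = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]
    M3 = matmul(matmul(M, M), M)
    max_err = max(max_err, abs(det3(M) - trace(M3) / 3))
```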
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3505997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
} |
A quote by Jacobi I remember reading that Jacobi once said:
If Cauchy says he proved something, you can be 50% sure that he actually did. If Gauss says he proved something, you can be mostly sure that he actually did. But if (insert name) says he proved something, then you can be 100% sure that he actually did.
Does someone remember the exact names and the exact quote? Thanks!
| The original is in a letter from Jacobi to Alexander von Humboldt dated December 21, 1846:
Dirichlet allein, nicht ich, nicht Cauchy, nicht Gauß, weiß, was ein
vollkommen strenger Beweis ist, sondern wir lernen es erst von ihm.
Wenn Gauß sagt, er habe etwas bewiesen, so ist es mir sehr
wahrscheinlich, wenn Cauchy es sagt, ist ebensoviel pro als contra zu
wetten, wenn Dirichlet es sagt, ist es gewiß; ich lasse mich auf
diese Delikatessen lieber gar nicht ein.
In English, roughly: "Dirichlet alone, not I, not Cauchy, not Gauss, knows what a completely rigorous proof is; rather, we first learn it from him. When Gauss says he has proved something, it seems very likely to me; when Cauchy says it, it is an even bet pro and contra; when Dirichlet says it, it is certain. I prefer not to get involved in these delicacies at all."
(Source: Laugwitz's book about Riemann, p. 63.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
Bounding Bernoulli trials by the standard Bernoulli process Suppose we have a Bernoulli-like process $P$. At each step a coin is tossed and the outcome ("success", "failure") is recorded. What differentiates $P$ from the standard Bernoulli process is that we pick a probability of "success" uniformly at random in the range $(1/2, 1)$ at each step before we toss the coin.
I'm interested in finding an upper bound on the expected number of trials
until the first "success" is tossed.
My thought: if the probability of "success" is at least $1/2$, then at each step $P$ is more likely to stop than a standard Bernoulli process, so the expectation of a standard geometrically distributed variable bounds from above the expected number of steps until the first "success".
How can I make this claim formal?
| While kimchi's answer is an answer to the problem as I stated it in the first place ...
I want to share an approach, that tackles directly the expectation bound.
Suppose we have a series of independent Bernoulli trials $X_i$, each with a probability of success $p_i \geq \frac{1}{2}$, and a series of standard i.i.d. Bernoulli trials $Y_i$ with a probability of success $p = \frac{1}{2}$.
Define by $X$ - the index of the first success in the series $X_i$, and by $Y$ - an index of the first success in the series $Y_i$.
We have that $Y \sim Geom(\frac{1}{2})$, and $\mathbb{E}(Y) = 2$.
We ask, what is $\mathbb{P}(X > k)$ ?
In other words, what is the probability that the first success in the series $X_i$ happens after the $k^{th}$ trial? The answer can be calculated in a straightforward way:
$$\mathbb{P}(X > k) = \prod\limits_{i=1}^k(1-p_i),$$ as all trials before and including $k^{th}$ should fail.
We further use the fact that $p_i \geq p$ to show that
$$\mathbb{P}(X > k) = \prod\limits_{i=1}^k(1-p_i) \leq \prod\limits_{i=1}^k(1-p) = \mathbb{P}(Y > k)$$
Recall that for a discrete variable $Z$ (such as $X$ and $Y$) taking values in $\{1, 2, \ldots \} \cup \{ +\infty\}$
$$\mathbb{E}(Z) = \sum\limits_{k = 0}^{\infty}\mathbb{P}(Z > k)$$
Now sum the probabilities to get the expectations (the $k=0$ terms agree, since $\mathbb{P}(X > 0) = \mathbb{P}(Y > 0) = 1$):
$$\mathbb{E}(X) = \sum\limits_{k = 0}^{\infty}\mathbb{P}(X > k) \leq \sum\limits_{k = 0}^{\infty}\mathbb{P}(Y > k) = \mathbb{E}(Y)$$
Thus a direct upper bound is shown.
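For what it's worth, the bound is also easy to check by simulation. Since the $p_i$ are drawn i.i.d. uniformly on $(1/2,1)$, the marginal success probability at every step is $\mathbb E(p_i)=3/4$, so $X$ is actually Geometric$(3/4)$ with mean $4/3 \le 2$:

```python
# Simulation sketch of the process: at each step draw p uniformly from (1/2, 1),
# then toss; count the number of steps to the first success.
import random

random.seed(1)

def steps_until_success():
    steps = 0
    while True:
        steps += 1
        p = random.uniform(0.5, 1.0)
        if random.random() < p:
            return steps

n_trials = 100_000
mean_steps = sum(steps_until_success() for _ in range(n_trials)) / n_trials
```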
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does the !! mean in trigonometric identity? What does the $!!$ mean in:
$$
\int_0^x \sin^n(t) \mathrm dt = \begin{cases}
\frac{(n-1)\color{red}{!!}}{n\color{red}{!!}}\Big[1-\cos(x)\sum_{j=0}^{(n-1)/2}\frac{(2j-1)\color{red}{!!}}{(2j)\color{red}{!!}}\sin^{2j}(x)\Big]&\text{for $n$ odd}\\
\frac{(n-1)\color{red}{!!}}{n\color{red}{!!}}\Big[x-\cos(x)\sum_{j=0}^{(n-2)/2}\frac{(2j)\color{red}{!!}}{(2j+1)\color{red}{!!}}\sin^{2j+1}(x)\Big]&\text{for $n$ even}\\
\end{cases}.
$$
Is it factorial applied twice?
This is from page 317 of An Atlas of Functions, Second edition: with Equator, the Atlas Function Calculator by Keith B. Oldham, Jan Myland, Jerome Spanier
| In mathematics, the double factorial or semifactorial of a number $n$ (denoted by $n!!$) is the product of all the integers from $1$ up to $n$ that have the same parity (odd or even) as $n$.
Example: $9!! = 9 \cdot 7 \cdot 5 \cdot 3 \cdot 1$
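A direct implementation, in case it helps (the values $0!! = (-1)!! = 1$ are the standard conventions):

```python
def double_factorial(n):
    """Product of the integers from n down to 1 (or 2) with the same parity
    as n; the loop returns 1 for n <= 0, matching the conventions
    0!! = (-1)!! = 1."""
    result = 1
    while n > 1:
        result *= n
        n -= 2
    return result
```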
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Infinite complex nested radical and its complex conjugate. Today I want to play with $i$, the imaginary unit. I have this:
$$\overline{\sqrt{1+i\sqrt{1+i^2\sqrt{1+i^3\sqrt{1+i^4\sqrt{\cdots}}}}}}=\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}\sqrt{1+\frac{1}{i^3}\sqrt{1+\frac{1}{i^4}\sqrt{\cdots}}}}}$$
As main remarks, we have:
$$\sqrt{1+i\sqrt{1+i^2}}=\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}}}=1$$
$$\overline{\sqrt{1+i\sqrt{1+i^2\sqrt{1+i^3}}}}=\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}\sqrt{1+\frac{1}{i^3}}}}$$
$$\overline{\sqrt{1+i\sqrt{1+i^2\sqrt{1+i^3\sqrt{1+i^4}}}}}=\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}\sqrt{1+\frac{1}{i^3}\sqrt{1+\frac{1}{i^4}}}}}$$
$$\overline{\sqrt{1+i\sqrt{1+i^2\sqrt{1+i^3\sqrt{1+i^4\sqrt{1+i^5}}}}}}=\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}\sqrt{1+\frac{1}{i^3}\sqrt{1+\frac{1}{i^4}\sqrt{1+\frac{1}{i^5}}}}}}$$
And so on...
My question :
What happens in the infinite case?
Do we have a closed form for this special nested radical?
Edit : It was in fact the complex conjugate.
Thanks a lot for sharing your time and knowledge .
| Just a comment on this. First and foremost, I see that you're doing a lot of radicals of this form, and rather trivially we can get:
$$\sqrt{1+\frac{1}{x}\sqrt{1+\frac{1}{x^2}\sqrt{1+\frac{1}{x^3}\sqrt{1+\frac{1}{x^4}\sqrt{\cdots}}}}}=\frac{1}{x^{2}}\sqrt{x^{4}+\sqrt{x^{6}+\sqrt{x^{8}+\sqrt{x^{10}+\sqrt{x^{12}+...}}}}}$$
So in a way:
$$\sqrt{1+\frac{1}{i}\sqrt{1+\frac{1}{i^2}\sqrt{1+\frac{1}{i^3}\sqrt{1+\frac{1}{i^4}\sqrt{\cdots}}}}}=\frac{1}{i^{2}}\sqrt{i^{4}+\sqrt{i^{6}+\sqrt{i^{8}+\sqrt{i^{10}+\sqrt{i^{12}+...}}}}}$$
$$=-1$$
One can make sense out of this, and I still don't personally like nested imaginary units myself. But the solution above is most likely wrong, because the identity claimed at first is, by my observation, only applicable to positive real quantities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the difference between logical statements and statement variables? For the following two questions:
Let P, Q, and R be logical statements. Use a truth table to prove that_______.
Let P, Q, and R be statement variables, and suppose that the logical expression_______ is false.
The blanks are two expressions whose symbols I don't know how to type; I hope it doesn't matter.
| A logical statement like $P$ is meant to be a specific statement. For example, $P$ could mean 'It is raining'.
A statement variable is something we use to indicate that we are dealing with some statement ... but we don't know what it is.
It is like the difference between $2$ and $x$ when doing algebra. The $2$ is a specific number, but the $x$ is some unknown number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find $\lim\limits_{n \to \infty} \int\limits_0^n \frac1{1 + n^2 \cos^2 x} dx$. I have to find the limit:
$$\lim\limits_{n \to \infty} \displaystyle\int_0^n \dfrac{1}{1 + n^2 \cos^2 x} dx$$
How should I approach this?
I kept looking for some appropriate bounds (for the Squeeze Theorem) that I could use to determine the limit, but I didn't come up with anything useful.
| Here is a direct approach. Let $f(x)$ be the integrand, and note that $f(x)$ has a period $\pi$. Let $k$ be the largest positive integer such that $(2k+1)\frac{\pi}{2}<n$. Then:
$$\begin{align}
\int_0^nf(x)\,dx&=\int_0^{\pi/2}f(x)\,dx+\int_{\pi/2}^{3\pi/2}f(x)\,dx+\dots+\int_{(2k+1)\pi/2}^nf(x)\,dx \\
&=\int_0^{\pi/2}f(x)\,dx+k\int_{\pi/2}^{3\pi/2}f(x)\,dx+\int_{(2k+1)\pi/2}^nf(x)\,dx
\end{align} $$
Each of those can be evaluated with the substitution $t=\tan(x) \Rightarrow \cos^2(x)=\frac{1}{1+\tan^2(x)}=\frac{1}{1+t^2}$ and $dx=\frac{dt}{1+t^2}$
$$\begin{align}
\int\frac{1}{1+n^2\cos^2(x)}\,dx&=\int\frac{1}{1+n^2\frac{1}{1+t^2}}\frac{1}{1+t^2}\,dt\\
&=\int\frac{1}{t^2+n^2+1}\,dt\\
&=\frac{1}{\sqrt{n^2+1}}\tan^{-1}\left(\frac{t}{\sqrt{n^2+1}}\right)+C
\end{align} $$
Then:
$$\begin{align}
\int_0^{\pi/2}f(x)\,dx&=\frac{1}{\sqrt{n^2+1}}\tan^{-1}\left(\frac{t}{\sqrt{n^2+1}}\right)\Big|_0^\infty \\
&=\frac{\pi}{2\sqrt{n^2+1}}\overset{n\to\infty}{\to} 0
\end{align}$$
and
$$\begin{align}
\int_{(2k+1)\pi/2}^nf(x)\,dx &= \frac{1}{\sqrt{n^2+1}}\tan^{-1}\left(\frac{t}{\sqrt{n^2+1}}\right)\Big|_{-\infty}^{\tan(n)} \\
&=\frac{1}{\sqrt{n^2+1}}\left(\tan^{-1}\left(\frac{\tan(n)}{\sqrt{n^2+1}}\right)+\frac{\pi}{2}\right)\\
&\overset{n\to\infty}{\to} 0
\end{align}$$
because $1/\sqrt{n^2+1}\to 0$ and the expression in the parentheses is bounded. Finally,
$$\begin{align}
k\int_{\pi/2}^{3\pi/2}f(x)\,dx &= k\frac{1}{\sqrt{n^2+1}}\tan^{-1}\left(\frac{t}{\sqrt{n^2+1}}\right)\Big|_{-\infty}^{\infty} \\
&= \frac{\pi k}{\sqrt{n^2+1}}
\end{align}$$
So you just have to find
$$\lim_{n\to\infty}\frac{\pi k}{\sqrt{n^2+1}}=\lim_{n\to\infty}\frac{\pi k}{n} $$
Note that the choice of $k$ implies $(2k+1)\frac{\pi}{2}<n<(2k+3)\frac{\pi}{2}$.
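A numeric cross-check of the pieces above (the values $n=10$ and $n=200$ are arbitrary test choices): the integral over one full period should equal $\pi/\sqrt{n^2+1}$, and numerically the whole integral over $[0,n]$ settles near $1$ for large $n$, consistent with $\pi k/n \to 1$.

```python
# Midpoint-rule checks of the per-period value and of the full integral.
import math

def f(x, n):
    return 1.0 / (1.0 + n * n * math.cos(x) ** 2)

def midpoint(g, a, b, steps=200_000):
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))

n = 10
period_integral = midpoint(lambda x: f(x, n), math.pi / 2, 3 * math.pi / 2)
exact_period = math.pi / math.sqrt(n * n + 1)   # closed form derived above

big_n = 200
full_integral = midpoint(lambda x: f(x, big_n), 0.0, float(big_n))
```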
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3506964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What's wrong with my solution to this probability problem Let $S$ be the set of all integers between −10 and 10, inclusive: $\{−10, −9, −8, . . . , 8, 9, 10\}$. If two integers are chosen out of the set at random, then the probability that the product of the two integers is positive is a/b, where a and b share no common factors. Find a + b.
I found that the probability that the product is positive is $200/441$, and the probability that the product is negative is $200/441$. However, I found that the probability the product is zero as $1/21$, since only 0 has to be the first factor. This leaves us with $20/441$ extra, meaning that something is wrong. However, I don't know what's wrong! Can someone help?
| The product is $0$ if either sample is $0$. $\frac{1}{21}$ is the chance that the first sample is zero.
Assuming the samples are uniform and independent, the probability of a 0 product is
$$Pr(x_0 = 0 {\rm \ or \ } x_1=0) \\
= Pr(x_0 = 0) + Pr(x_1=0) - Pr(x_0 = 0 {\rm \ and \ } x_1 = 0) \\
= \frac{1}{21} + \frac{1}{21} - \frac{1}{21^2} \\
= \frac{41}{441}$$
The positive is $\frac{200}{441}$ and negative is $\frac{200}{441}$
so the sum of probabilities of positive negative and $0$ products are
$$ \frac{200}{441} + \frac{200}{441} + \frac{41}{441} = 1$$
They are mutually exclusive and cover all outcomes.
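The three probabilities can also be confirmed by brute force, enumerating all $21\times 21$ ordered pairs (independent draws with replacement):

```python
# Exhaustive count over all ordered pairs from S = {-10, ..., 10}:
# 21 * 21 = 441 equally likely outcomes.
from fractions import Fraction

S = range(-10, 11)
pos = sum(1 for x in S for y in S if x * y > 0)
neg = sum(1 for x in S for y in S if x * y < 0)
zero = sum(1 for x in S for y in S if x * y == 0)

p_pos = Fraction(pos, 441)
p_neg = Fraction(neg, 441)
p_zero = Fraction(zero, 441)
```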
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $g(y) = \int_0^1 f(x,y) dx$ is not continuous at $y =0$.
Exercise 7.5.17 (Introduction to Real Analysis by Jiri Lebl): Define
$$f(x,y) = \left\{
\begin{array}{lr}
\frac{2xy}{x^4+y^2} & \text{if} & (x,y) \not= (0,0)\\
0 & \text{if} & (x,y) =(0,0)
\end{array}
\right.
$$
Show that $g(y) = \int_0^1 f(x,y) dx$ is not continuous at $y =0$. Note: Feel free to use what you know about $\arctan$ from calculus, in particular that $\frac{d}{ds} [\arctan(s)] = \frac1{1+s^2}$.
From the previous exercises, I know that $f$ is continuous in each variable separately (fixing the other variable). In addition, I also know that $f$ is not continuous at $(0,0)$. Since the question gives us the derivative of $\arctan$ as a hint, I guess that I need to use trigonometric substitution to compute the integral (?), but I am not really sure. I would appreciate it if you could give some help.
| HINT
Let $x^2 = t$
$$\int_0^1\frac{2xy}{x^4 + y^2}dx = \int_0^1\frac{y}{(t^2 + y^2)}dt = \arctan(\frac{1}{y})$$
Can you complete the rest?
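For what it's worth, a numeric check of the resulting formula $g(y)=\arctan(1/y)$ for a few $y>0$ (midpoint-rule quadrature; note $\arctan(1/y)\to\pi/2$ as $y\to0^+$ while $g(0)=0$, which is the discontinuity to be shown):

```python
# Midpoint-rule evaluation of g(y) = integral_0^1 2xy / (x^4 + y^2) dx.
import math

def g(y, steps=100_000):
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += 2 * x * y / (x ** 4 + y ** 2)
    return total * h

vals = {y: g(y) for y in (1.0, 0.5, 0.1)}   # compare with arctan(1/y)
```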
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A man is on block 2, find expected number of steps for him to reach block 1.
There are infinitely many blocks placed along the positive axis at integer points. A man is at position $2$. He moves back one step with probability $p$ and forward with probability $1-p$, where $p > 0.5$.
Find expected number of steps for him to reach first block.
I denoted the expected number of steps to reach block $1$ when on the $i$-th block by $E_i$.
Then $E_1 = 0$ and we need to find $E_2$. Further there is a relation:
$$E_k = p(E_{k-1}+1)+(1-p)(E_{k+1}+1)$$
Now I don't have any ideas.
| If you take a step forward, notice that the situation is similar if you now ask for the expected number of steps to return to block $2$. So it will also take $E_2$ steps on average to get back to block $2$ when you're on block $3$, and then $E_2$ again to go from $2$ to $1$. So we can write
$$E_2 = p\times 1 + (1-p)\times(1+2E_2)$$
and solve
$$E_2 = \frac{1}{2p-1}$$
We need to know that the expectations are finite (which is the case when $p>0.5$) for this to make sense. For more information see here.
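A short simulation agrees with the formula (here $p=0.75$ is an arbitrary test value, for which $1/(2p-1)=2$):

```python
# Simulate: start at block 2, step back with probability p, forward with 1 - p,
# and count the steps until block 1 is reached.
import random

random.seed(2)

def steps_to_block_one(p):
    pos, steps = 2, 0
    while pos > 1:
        steps += 1
        pos += -1 if random.random() < p else 1
    return steps

p = 0.75
n_trials = 100_000
mean_steps = sum(steps_to_block_one(p) for _ in range(n_trials)) / n_trials
```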
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Brownian Motion $E[B_t \cdot B_s \cdot B_v]$ with $0 < t < s < v$ Can someone derive analytically the following Brownian Motion expectation? $E[B_t \cdot B_s \cdot B_v]$ with $0 < t < s < v$.
| We can also use the martingale property of the BM. Let us denote $\mathcal{F}_t$ the natural filtration of the Brownian motion $B_t$.
We know that $\{B_t\}_{t\geq0}$ and $\{B_t^2-t\}_{t\geq0}$ are martingales. I recall that $B_t \sim \mathcal{N}(0,t)$.
Since $t<s<v$, the product $B_tB_s$ is $\mathcal{F}_s$-measurable, and we have
\begin{align*}
E\left[B_tB_sB_v\right] &= E\left[B_tB_s\,E[B_v|\mathcal{F}_s]\right] = E\left[B_tB_s^2\right]\\
&= E\left[B_t(B_s^2-s)+ sB_t\right] \\
&= E\left[B_t\,E[(B_s^2-s)|\mathcal{F}_t]\right] + s\,E\left[B_t\right] = E\left[B_t(B_t^2-t)\right] \\
&= E\left[B_t^3-tB_t\right] = 0
\end{align*}
The last equality comes from the fact that the odd moments of a centered Gaussian random variable are $0$.
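As a sanity check, the expectation can be estimated by Monte Carlo using independent Gaussian increments (the times below are arbitrary choices with $t<s<v$):

```python
# Monte Carlo estimate of E[B_t * B_s * B_v] for (t, s, v) = (0.5, 1.0, 1.5);
# the result should be near 0.
import math
import random

random.seed(3)
t, s, v = 0.5, 1.0, 1.5
n_paths = 200_000
total = 0.0
for _ in range(n_paths):
    b_t = random.gauss(0.0, math.sqrt(t))
    b_s = b_t + random.gauss(0.0, math.sqrt(s - t))   # independent increment
    b_v = b_s + random.gauss(0.0, math.sqrt(v - s))   # independent increment
    total += b_t * b_s * b_v
estimate = total / n_paths
```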
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to compute the limit below by L'Hopital's rule?
Let $\left(X_j\right)_{j\geq1}$ be i.i.d. with $E\{X_j\}=\mu$ and $\text{Var}\{X_j\}=\sigma^2$ (all $j$) with $0<\sigma^2<\infty$. Let $S_n=\sum\limits_{j=1}^{n}X_j$ and $Y_n=\dfrac{S_n-n\mu}{\sigma\sqrt{n}}$. Let $\varphi_j$ be the characteristic function of $X_j-\mu$. Since the $\left(X_j\right)_{j\geq1}$ are i.i.d., $\varphi_j$ does not depend on $j$ and we write $\varphi$.
One can show that $\varphi_{Y_n}(u)=\left(\varphi\left(\dfrac{u}{\sigma\sqrt{n}}\right)\right)^n$. Then, one can expand $\varphi$ in a Taylor expansion about $u=0$ to get
$$\varphi(u)=1+0-\dfrac{\sigma^2u^2}{2}+u^2h(u)$$ with $h$ denoting the Peano remainder and $h(u)\rightarrow0$ as $u\rightarrow0$.
One can also show that
$$\varphi_{Y_n}(u)=e^{n\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}$$
Taking limits as $n\rightarrow\infty$ and using for example L'Hopital rule, one gets $$\lim\limits_{n\to\infty}\varphi_{Y_n}(u)=e^{-\frac{u^2}{2}}$$
MY QUESTION: how could the above limit be solved by using L'Hopital's rule?
This is an indeterminate form $\infty\times0$. Thanks to the advice of user Mark Viola I get to the fact that limit can be rephrased as follows
$$\lim\limits_{n\to\infty}\varphi_{Y_n}(u)=\lim\limits_{n\to\infty}e^{\frac{\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{\frac{1}{n}}}$$
At this point, we get an indeterminate form $\left[\dfrac{0}{0}\right]$. By continuity of exponential function $e$, we have
$$\lim\limits_{n\to\infty}e^{\frac{\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{\frac{1}{n}}}=e^{\lim\limits_{n\to\infty}\frac{\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{\frac{1}{n}}}$$
At this point, focusing on the limit
$$\lim\limits_{n\to\infty}\frac{\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{\frac{1}{n}}$$
I have tried to solve it by means of L'Hopital rule.
$$\lim\limits_{n\to\infty}\frac{\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{\frac{1}{n}}\stackrel{H}=\lim\limits_{n\to\infty}\frac{\frac{d\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{dn}}{\frac{d\frac{1}{n}}{dn}}$$
I have some problem in computing $$\frac{d\log\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)}{dn}$$
Let $f(n)=\left(1-\frac{u^2}{2n}+\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)\right)$. I know that $\dfrac{d\log\left(f(n)\right)}{dn}=\dfrac{f^{'}(n)}{f(n)}$ and my problems are related to the computation of $f^{'}(n)$ since I have no clue on how to compute the derivative of $\frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)$ with respect to $n$, that is
$$\dfrac{d \frac{u^2}{n\sigma^2}h\left(\frac{u}{\sigma\sqrt{n}}\right)}{dn} \tag{1}$$
Could you please explain to me in detail how to solve derivative $(1)$ so as to get to the final result $\lim\limits_{n\to\infty}\varphi_{Y_n}(u)=e^{-\frac{u^2}{2}}$?
| HINT:
Let $x=1/n$. Then examine the limit
$$\lim_{x\to0^+}\,\frac{\log\left(1-\frac{u^2x}{2}+\frac{u^2x}{\sigma^2}\,h\!\left(\frac{u\sqrt{x}}{\sigma}\right)\right)}{x}$$
Can you proceed now?
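A small numeric illustration of the hint (with the simplifying assumption $h\equiv 0$, since the actual remainder depends on the distribution): the ratio should approach $-u^2/2$ as $x\to0^+$.

```python
# With h dropped, log(1 - u^2 x / 2) / x tends to -u^2 / 2 as x -> 0+.
import math

u = 1.3                      # arbitrary test value
target = -u * u / 2
values = [math.log(1 - u * u * x / 2) / x for x in (1e-3, 1e-5, 1e-7)]
```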
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Building a Subspace Complementary to Two Given Subspaces Lee's Introduction to Smooth Manifolds claims that given an n-dimensional vector space $V$ and two $k$-dimensional subspaces $P,P'$, we can find a single $n-k$-dimensional subspace $Q$ complementary to both $P$ and $P'$. How do we show this?
| If $P=P'$ then (since $k<n$) there exists a $0 \neq v \in V \setminus P$, and span$\{v\}$ is a subspace meeting $P$ and $P'$ only in $0$. If they are not equal, there exist $v_1 \in P \setminus P'$ and $v_2 \in P' \setminus P$. Then any linear combination with both coefficients nonzero will suffice: span$\{v=av_1+bv_2\}$ for $a,b \neq 0$ is a subspace (of dimension $1$) meeting $P$ and $P'$ only in $0$ (if $av_1+bv_2$ lay in $P$, then so would $bv_2$, hence $v_2$, a contradiction). Now repeat this process with $P+v\Bbb R$ and $P' + v\Bbb R$ to get a new subspace of dimension $2$, and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\int _0^1\:\frac{2-x^2}{(1+x)\sqrt{1-x^2}} \, dx$
Evaluate $$\int _0^1\:\frac{2-x^2}{(1+x)\sqrt{1-x^2}}dx$$
$$
\int _0^1\:\frac{2-x^2}{(1+x)\sqrt{1-x^2}} \, dx = \int_0^1 \bigg[\sqrt{\frac{1-x}{1+x}}+\frac{1}{(1+x)^{3/2}\sqrt{1-x}}\bigg] \, dx=\int_0^1[f(x)-f'(x)]\,dx
$$
Does putting it into the above final form somehow help solve the given definite integral?
Note: The solution given is $\pi/2$
| Let $x=\sin t$. Then,
$$\int _0^1 \frac{2-x^2}{\left(1+x\right)\sqrt{1-x^2}}dx
=\int _0^1\left(\frac{1}{\left(1+x\right)\sqrt{1-x^2}}
+\frac{\sqrt{1-x^2}}{1+x}\right)dx$$
$$=\int_0^{\pi/2} \left( \frac{1}{1+\sin t}+1-\sin t\right)dt$$
$$=\left(-\frac{\cos t}{1+\sin t} + t + \cos t\right)_0^{\pi/2}=\frac\pi2$$
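Numerically this checks out; integrating the transformed (smooth) integrand avoids the $1/\sqrt{1-x}$ endpoint singularity of the original one:

```python
# Midpoint-rule quadrature of 1/(1 + sin t) + 1 - sin t over [0, pi/2],
# which should give pi/2.
import math

steps = 100_000
a, b = 0.0, math.pi / 2
h = (b - a) / steps
total = 0.0
for i in range(steps):
    t = a + (i + 0.5) * h
    total += 1.0 / (1.0 + math.sin(t)) + 1.0 - math.sin(t)
integral = total * h
```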
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3507993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Linear transformation with respect to basis problem Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ such that $T(\begin{bmatrix} 3 \\ 1 \end{bmatrix}) = \begin{bmatrix} 1 \\2 \end{bmatrix}$ and $T(\begin{bmatrix} -1 \\ 0 \end{bmatrix}) = \begin{bmatrix} 1 \\1 \end{bmatrix}$. Find the matrix $A$ representing $T$.
I understand that to approach this problem, I have to view $\begin{bmatrix} 3 \\1 \end{bmatrix}$ and $\begin{bmatrix} -1 \\ 0 \end{bmatrix}$ as a basis $B = \{ v_1, v_2 \}$, where $[v_1\ v_2]$ is the transition matrix from $[x]_B$ to $x$. How do I use $\begin{bmatrix} 1\\2 \end{bmatrix}$ and $\begin{bmatrix} 1 \\1 \end{bmatrix}$? I'm unclear on their connection to the basis vectors $v_1$ and $v_2$.
| Relative to the bases $B$ and the standard basis, the matrix is $\begin{pmatrix}1&1\\2&1\end{pmatrix}$.
The change of basis matrix is: $\begin{pmatrix}3&-1\\1&0\end{pmatrix}$.
The latter changes basis from $B$ to the standard basis.
Thus you want: $\begin{pmatrix}1&1\\2&1\end{pmatrix}\begin{pmatrix}3&-1\\1&0\end{pmatrix}^{-1}$.
I will leave this calculation for you.
$\begin{pmatrix}1&1\\2&1\end{pmatrix}\begin{pmatrix}0&1\\-1&3\end{pmatrix}=\begin{pmatrix}-1&4\\-1&5\end{pmatrix}$
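You can verify the result directly: $A$ must send the two given vectors to the two given images.

```python
# Check that A = [[-1, 4], [-1, 5]] maps the given vectors correctly.
A = [[-1, 4], [-1, 5]]

def apply(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

image_v1 = apply(A, [3, 1])    # should be [1, 2]
image_v2 = apply(A, [-1, 0])   # should be [1, 1]
```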
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Waiting time for a pattern in coin tossing Let $\{X_t\}$ be an iid sequence of fair coin tosses and $\tau_{HTH} = \inf\{t\geq 3: X_{t-2}X_{t-1}X_t=HTH\}$. I want to determine $E\tau_{HTH}$.
I don't understand a part of the explanation which is:
Gamblers place bets on each individual toss. On each bet, gambler pays an entrance fee of $k$ and is paid $2k$ in return if the outcome is $H$ or $0$ if the outcome is $T$. $k$ can be negative which corresponds to a bet on $T$.
Suppose that at each unit of time until $HTH$ first appears, a new gambler enters and employs the following strategy: on his first bet, he wagers $1$ on $H$. If he loses, he stops. If he wins and $HTH$ has not yet appeared, he wagers his payoff of $2$ on $T$. If he loses, he stops. If he wins and $HTH$ has not yet appeared, he wagers his payoff of $4$ on $H$. This is the last bet placed by this particular gambler.
The gambler who started at $\tau_{HTH}$ is paid $2$ and the gambler who started at $\tau_{HTH}-2$ is paid $8$ and every gambler has paid an initial entry fee of $1$. At time $\tau_{HTH}$, the net profit to all gamblers $=10-\tau_{HTH}$ and since the game is fair $E\tau_{HTH}=10$
Q1) I can see that the net profit for all gamblers is $10-\tau_{HTH}$ if the outcome string is say $TTTHTH$ and this wouldn't be the net profit if at least one $H$ appears before $HTH$, am I correct? (say the outcome string is $HHHHTH$).
Q2) The explanation also says $\tau_{HTH}/3$ is bounded by a Geometric(1/8) random variable. How can I see that?
| We can handle the case of a possibly unfair coin using generating functions.
A pattern for all sequences ending in $HTH$ is
$$
T^\ast\left(H^\ast(HTT)T^\ast\right)^\ast H^\ast HTH\tag1
$$
where $(\dots)^\ast$ represents $0$ or more sequences matching $(\dots)$.
The generating function for $(1)$ is
$$
\overbrace{\ \frac1{1-qx}\ }^{T^\ast}\overbrace{\frac1{1-\underbrace{\frac1{1-px}pq^2x^3\frac1{1-qx}}_{H^\ast(HTT)T^\ast}}}^{\left(H^\ast(HTT)T^\ast\right)^\ast}\overbrace{\ \frac1{1-px}\ }^{H^\ast}\overbrace{\ \ p^2qx^3\ \ \vphantom{\frac11}}^{HTH}\tag2
$$
where $p$ is the probability of an $H$ and $q=1-p$ is the probability of a $T$.
That is,
$$
\begin{align}
g(x)
&=\frac{p^2(1-p)x^3}{1-x+p(1-p)x^2-p(1-p)^2x^3}\tag3\\[9pt]
g(1)
&=1\tag4\\[9pt]
%g'(x)
%&=\frac{p^2(1-p)x^2\left(3-2x+p(1-p)x^2\right)}{\left(1-x+p(1-p)x^2-p(1-p)^2x^3\right)^2}\\
g'(1)
&=\frac{1+p(1-p)}{p^2(1-p)}\tag5\\[3pt]
%g''(x)
%&=\frac{2p^2(1-p)x\left(3-3x+\left(1-p+p^2\right)x^2+6p(1-p)^2x^3-3p(1-p)^2x^4+p^2(1-p)^3x^5\right)}{\left(1-x+p(1-p)x^2-p(1-p)^2x^3\right)^3}\\
g''(1)
&=\frac{2+4p-8p^2+6p^4-2p^5}{p^4(1-p)^2}\tag6
\end{align}
$$
The mean duration is
$$
g'(1)=\frac{1+p(1-p)}{p^2(1-p)}\,\overset{p\to\frac12}\to\,10\tag7
$$
The duration variance is
$$
g''(1)+g'(1)-g'(1)^2=\frac{1+p(1-p)\left(2-4p-2p^2+p^3\right)}{p^4(1-p)^2}\,\overset{p\to\frac12}\to\,58\tag8
$$
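A simulation at $p=1/2$ agrees with $(7)$ and $(8)$ (mean $10$, variance $58$):

```python
# Simulate the waiting time for the pattern HTH with a fair coin.
import random

random.seed(4)

def wait_for_hth(p=0.5):
    history, tosses = "", 0
    while not history.endswith("HTH"):
        tosses += 1
        history += "H" if random.random() < p else "T"
    return tosses

n_trials = 200_000
samples = [wait_for_hth() for _ in range(n_trials)]
mean = sum(samples) / n_trials
var = sum((x - mean) ** 2 for x in samples) / n_trials
```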
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding eigenspaces of a matrix with parameter I have the following matrix:
$$
\begin{pmatrix}
3&0&0\\
k+2&1&0\\
5&k-1&1
\end{pmatrix}
$$
The exercise asks to find the eigenvalues of the matrix and, for all $k\in\mathbb{R}$, determine a basis of each eigenspace of the matrix.
Since this is a lower triangular matrix, the eigenvalues are the entries on the diagonal: $\lambda=3$ and $\lambda=1$.
Now, for $\lambda=3$ I need to find the null space of this matrix:
$$
\begin{pmatrix}
0&0&0\\
k+2&-2&0\\
5&k-1&-2
\end{pmatrix}
\rightarrow
\begin{pmatrix}
5&k-1&-2\\
k+2&-2&0\\
0&0&0
\end{pmatrix}
$$
By reducing in row echelon form ($R_2\leftarrow5R_2-kR_1$ and then $R_2\leftarrow R_2-2R_1$):
$$
\begin{pmatrix}
5&k-1&-2\\
0&-k^2-k-8&2k+4\\
0&0&0
\end{pmatrix}
$$
I'm stuck here because $-k^2-k-8$ doesn't have real roots. Any hints?
| Since $-k^2-k-8$ can’t be zero, you can safely divide by it and continue merrily on your way with the row-reduction. However, because you’re working in $\mathbb R^3$ there’s a much simpler way to find a basis for this null space. The row space of $A-3I$ is clearly spanned by its two nonzero rows, and the null space of a matrix is the orthogonal complement of its row space, so the null space of $A-3I$ is spanned by $$(k+2,-2,0)\times(5,k-1,-2) = (4,2k+4,k^2+k+8).$$
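A quick check that $(4,2k+4,k^2+k+8)$ is indeed annihilated by $A-3I$, for a few sample values of $k$:

```python
# Verify (A - 3I) v = 0 for v = (4, 2k + 4, k^2 + k + 8).
def a_minus_3i(k):
    return [[0, 0, 0],
            [k + 2, -2, 0],
            [5, k - 1, -2]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

residuals = []
for k in (-3.0, -1.0, 0.0, 2.5, 10.0):   # arbitrary sample values of k
    v = [4.0, 2 * k + 4, k * k + k + 8]
    residuals.append(max(abs(c) for c in apply(a_minus_3i(k), v)))
```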
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probabilistic inequality for an antisymmetric function Let $X$ be a random variable with $\mathbb P(-1 \leq X \leq 1) = 1$. Does $\mathbb E(X) \geq 0$ imply
$$\mathbb E[X(1-|X|)] \geq 0?$$
The function is antisymmetric around $0$ and there is more probability mass on the positive side. Intuitively, it should be correct.
If not, is it true if I assume $\mathbb E(X) > 0$?
| The answer is no.
The constant random variable $X\equiv 1$ already shows the inequality cannot be strict: it has $\mathbb E(X)=1>0$, yet
$$\mathbb E[X(1-\vert X\vert)] = \mathbb E[X(1-\vert 1\vert)] = 0$$
In fact, even the non-strict inequality fails. Let $X = 1$ with probability $\frac12$ and $X = -\frac12$ with probability $\frac12$. Then $\mathbb E(X) = \frac14 > 0$, but
$$\mathbb E[X(1-\vert X\vert)] = \frac12\cdot1\cdot(1-1) + \frac12\cdot\left(-\frac12\right)\left(1-\frac12\right) = -\frac18 < 0$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Given positives $a, b, c$, prove that $\frac{a}{(b + c)^2} + \frac{b}{(c + a)^2} + \frac{c}{(a + b)^2} \ge \frac{9}{4(a + b + c)}$.
Given positives $a, b, c$, prove that $$\large \frac{a}{(b + c)^2} + \frac{b}{(c + a)^2} + \frac{c}{(a + b)^2} \ge \frac{9}{4(a + b + c)}$$
Let $x = \dfrac{b + c}{2}, y = \dfrac{c + a}{2}, z = \dfrac{a + b}{2}$
It suffices to prove that $$\sum_{cyc}\frac{\dfrac{a + b}{2} + \dfrac{b + c}{2} - \dfrac{c + a}{2}}{\left(2 \cdot \dfrac{c + a}{2}\right)^2} \ge \frac{9}{\displaystyle 4 \cdot \sum_{cyc}\dfrac{c + a}{2}} \implies \sum_{cyc}\frac{z + x - y}{y^2} \ge \frac{9}{y + z + x}$$
According to the Cauchy-Schwarz inequality, we have that
$$(y + z + x) \cdot \sum_{cyc}\frac{z + x - y}{y^2} \ge \left(\sum_{cyc}\sqrt{\frac{z + x - y}{y}}\right)^2$$
We need to prove that $$\sum_{cyc}\sqrt{\frac{z + x - y}{y}} \ge 3$$
but I don't know how to.
Thanks to Isaac YIU Math Studio, we additionally have that $$(y + z + x) \cdot \sum_{cyc}\frac{z + x - y}{y^2} = \sum_{cyc}(z + x - y) \cdot \sum_{cyc}\frac{z + x - y}{y^2} \ge \left(\sum_{cyc}\frac{z + x - y}{y}\right)^2$$
We now need to prove that $$\sum_{cyc}\frac{z + x - y}{y} \ge 3$$, which could be followed from Nesbitt's inequality.
I would greatly appreciate any solutions other than this one.
| By Cauchy-Schwarz,
$$\left[\dfrac{a}{(b + c)^2} + \dfrac{b}{(c + a)^2} + \dfrac{c}{(a + b)^2}\right](a+b+c) \ge \left(\dfrac{a}{b+c}+\dfrac{b}{c+a}+\dfrac{c}{a+b}\right)^2$$
Then by rearrangement inequality,
$$\dfrac{a}{b+c}+\dfrac{b}{c+a}+\dfrac{c}{a+b}\ge\dfrac{b}{b+c}+\dfrac{c}{c+a}+\dfrac{a}{a+b} \\ \dfrac{a}{b+c}+\dfrac{b}{c+a}+\dfrac{c}{a+b}\ge\dfrac{c}{b+c}+\dfrac{a}{c+a}+\dfrac{b}{a+b}$$ Sum up and get $$\dfrac{a}{b+c}+\dfrac{b}{c+a}+\dfrac{c}{a+b}\ge\dfrac{3}{2}$$
So from the first inequality, we get:
$$\left[\dfrac{a}{(b + c)^2} + \dfrac{b}{(c + a)^2} + \dfrac{c}{(a + b)^2}\right](a+b+c) \ge \dfrac{9}{4} \\ \dfrac{a}{(b + c)^2} + \dfrac{b}{(c + a)^2} + \dfrac{c}{(a + b)^2} \ge \dfrac{9}{4(a+b+c)}$$
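A random spot check (of course not a proof) confirms the inequality, with equality approached only near $a=b=c$:

```python
# Evaluate both sides of the inequality on random positive triples.
import random

random.seed(5)

def lhs(a, b, c):
    return a / (b + c) ** 2 + b / (c + a) ** 2 + c / (a + b) ** 2

def rhs(a, b, c):
    return 9 / (4 * (a + b + c))

min_gap = float("inf")
for _ in range(10_000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    min_gap = min(min_gap, lhs(a, b, c) - rhs(a, b, c))
```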
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
} |
Inequality $\binom{n+m}{k}+\binom{n-m}{k}\ge 2\binom{n}{k}$ Is it true that $\binom{n+m}{k}+\binom{n-m}{k}\ge 2\binom{n}{k}$?
I've been checking this for many cases in Pascal's triangle ($n,m,k\in\mathbb{N}^*$ such that $n-m\ge k$) but cannot prove it formally.
| Claim. The inequality
$$\binom{n+m}k+\binom{n-m}k \ge 2 \binom nk$$
holds for any integers such that $0\le m,k \le n$.
Proof. It is clear that this is true for $k=0$. So we will assume from now on that $k\ge1$.
Let us denote $$a_j=\binom{n+j}k+\binom{n-j}k$$ for $j=0,1,\dots,n$. We have $a_0=2\binom nk$. It suffices to show that the sequence $a_j$ is non-decreasing.
For this, we just compute
\begin{align*}
a_{j+1}-a_j&=\binom{n+j+1}k-\binom{n+j}k-\binom{n-j}k+\binom{n-j-1}k\\
&=\binom{n+j}{k-1}-\binom{n-j-1}{k-1} \ge 0.
\end{align*}
So we get $a_{j+1}-a_j\ge0$ and thus $a_{j+1}\ge a_j$ whenever $j \le n-1$ (and $k-1 \ge 0$).
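The claim is also easy to confirm exhaustively for small parameters (using the convention $\binom{a}{b}=0$ for $b>a$, which `math.comb` follows):

```python
# Exhaustive verification of binom(n+m, k) + binom(n-m, k) >= 2 binom(n, k)
# for all 0 <= m, k <= n with n up to 25.
from math import comb

ok = all(
    comb(n + m, k) + comb(n - m, k) >= 2 * comb(n, k)
    for n in range(26)
    for m in range(n + 1)
    for k in range(n + 1)
)
```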
You can find other approaches to this problem (or generalizations) here:
*
*How can we show binomial function is convex without calculus?
*Convexity of Binomial Term
*Prove that $\binom{a_1}{2} + \binom{a_2}{2} + \cdots + \binom{a_n}{2} \ge r\binom{k+1}{2} + \left(n-r\right)\binom{k}{2}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3508893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Surjection of map to n-Sphere Consider the map $f\colon S^n \times S^m \times [0,1] \rightarrow S^{n+m+1}$ defined by $f(p,q,t) = p \cos(\frac{\pi}{2}t) +q \sin(\frac{\pi}{2}t)$ where $p \in S^n$, $q \in S^m$ and $t \in[0,1]$. Show this map is surjective, where $n,m \in \mathbb{N}$.
I have been able to show it when $n=m=0$ and for $n=0, m=1$, but I am having problems to generalize it.
Edit: Following Ted's suggestion, I think about $p,q$ as $p\in S^n\subset \Bbb R^{n+1}\times \{0\}\subset\Bbb R^{n+1}\times\Bbb R^{m+1} = \Bbb R^{n+m+2}$. So basically the map is similar to a linear combination of points
| Consider a point $(x,y) \in S^{n+m+1}$, where $x \in \mathbb R^{n+1}, y \in \mathbb R^{m+1}$. We are looking for a solution $(p,q,t) \in S^n \times S^m \times [0,1]$ of the two equations
$$\cos(\frac{\pi}{2} t) p = x, \sin (\frac{\pi}{2} t) q = y .$$
Note that $\cos(\frac{\pi}{2} t), \sin(\frac{\pi}{2} t) \in [0,1]$ for $t \in[0,1]$. Taking the norm, we see that a necessary condition for $t$ is
$$(*) \quad \cos(\frac{\pi}{2} t) = \cos(\frac{\pi}{2} t) \lVert p \rVert = \lVert \cos(\frac{\pi}{2} t) p \rVert = \lVert x \rVert , \sin(\frac{\pi}{2} t) = \sin(\frac{\pi}{2} t) \lVert q \rVert = \lVert \sin(\frac{\pi}{2} t) q \rVert = \lVert y \rVert .$$
But $\lVert x \rVert^2 + \lVert y \rVert^2 = \lVert (x,y) \rVert^2 = 1$. Thus $(\lVert x \rVert,\lVert y \rVert) \in S^1$. Hence there is a unique $\tau \in [0,2\pi)$ such that $(\lVert x \rVert,\lVert y \rVert) = (\cos \tau,\sin \tau)$. Since $\lVert x \rVert,\lVert y \rVert \ge 0$, we see that $\tau \in [0,\pi/2]$. Hence $t = \frac{2\tau}{\pi} \in [0,1]$ is the unique solution of $(*)$.
Moreover, if $x \ne 0$, we necessarily have $p = \frac{x}{\lVert x \rVert}$, and if $y \ne 0$, we necessarily have $q = \frac{y}{\lVert y \rVert}$.
By inserting we easily verify
*
*If $x,y \ne 0$, then $f(\frac{x}{\lVert x \rVert},\frac{y}{\lVert y \rVert},\frac{2\tau}{\pi}) = (x,y)$.
*If $x = 0$, then $\lVert y \rVert = 1$ and $\tau = \frac{\pi}{2}$. Let $p \in S^{n}$ be arbitrary. Then $f(p,y,1) = (0,y) = (x,y)$.
*If $y = 0$, then $\lVert x \rVert = 1$ and $\tau = 0$. Let $q \in S^{m}$ be arbitrary. Then $f(x,q,0) = (x,0) = (x,y)$.
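The case analysis above can be turned into an explicit preimage computation; here is a numerical sketch (the dimensions $n=2$, $m=3$ and the helper names are illustrative choices, not from the source):

```python
import numpy as np

def f(p, q, t):
    """f(p, q, t) = (cos(pi t / 2) p, sin(pi t / 2) q) in R^{n+m+2}."""
    return np.concatenate([np.cos(np.pi * t / 2) * p,
                           np.sin(np.pi * t / 2) * q])

def preimage(x, y):
    """Recover (p, q, t) with f(p, q, t) = (x, y), following the case analysis."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    tau = np.arctan2(ny, nx)               # unique tau in [0, pi/2]
    t = 2 * tau / np.pi
    # if x = 0 (resp. y = 0), p (resp. q) may be chosen arbitrarily on the sphere
    p = x / nx if nx > 0 else np.eye(len(x))[0]
    q = y / ny if ny > 0 else np.eye(len(y))[0]
    return p, q, t

# random point of S^{n+m+1} with n = 2, m = 3 (so S^2 x S^3 x [0,1] -> S^6)
n, m = 2, 3
rng = np.random.default_rng(0)
z = rng.normal(size=n + m + 2)
z /= np.linalg.norm(z)
x, y = z[:n + 1], z[n + 1:]
p, q, t = preimage(x, y)
assert np.allclose(f(p, q, t), z)          # f hits this point
```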
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3509012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Olympiad problem :Integer roots of $P(P(x))$ in function of the roots of $P(x)$ Here is the original problem :
A polynomial $P(x)$ of degree $n \geq 5$ with integer coefficients and
$n$ distinct integer roots is given. Find all integer roots of $P(P(x))$
given that $0$ is a root of $P(x)$.
Here is my solution; however, I'm not sure of it. Could you please check it?
Solution :
It's easy to see that $P(P(x))=0$ for all $x=x_1,..,x_n$. The only other roots that $P(P(x))$ may have are possible values $X_j$ for which $P(X_j)=x_j, j\neq 1$ (and those $X_j$ must be integers). We have $P(0)=0$, so obviously $P(x)=a_nx^n+..+a_1x$.
For all integers $a,b$ :
$a-b\mid P(a)-P(b)$, since $P\in\mathbb Z[x]$.
Taking $a=X_j$ and $b=x_j$ (for $j=2,..,n$), this is just :
$X_j-x_j\mid x_j$.
This means there exists some integer $k>1$ such that $X_j=kx_j$, for all $2\le j\le n$.
We have : $P(x_j)=a_nx_j^n+..+a_1x_j$.
Thus : $P(X_j)=P(kx_j)=a_nk^nx_j^n+..+a_1kx_j=x_j$$\Longleftrightarrow$
$a_nk^nx_j^{n-1}+..+a_1k=1$.
Thus :
$a_nk^{n-1}x_j^{n-1}+..+a_1=\frac{1}{k}$. But $P$ has integer coefficients and $k,x_j$ are integers, so $\frac{1}{k}$ must be an integer, so $k=1\Longrightarrow X_j=x_j$, so the integer roots of $P(P(x))$ are the same as those of $P(x)$.
Thanks a lot !
So $P(x)=x\prod_{i=2}^n (x-x_i)$ where $x_2,\ldots, x_n$ are the nonzero integral roots of $P$. Now suppose there is a nonzero root $y$ of $P(P(x))$ that is distinct from $0,x_2,\ldots, x_n$. Then $y\prod_{i=2}^n(y-x_i)$ must be in $\{x_2,x_3,\ldots, x_n\}$. We show that this is impossible for integral $y \not = 0,x_2,\ldots, x_n$ via Claim 1 below:
Claim 1: Let us use the notation as above, and let us write $x_n$ be the root of $P$ with the largest modulus i.e., $|x_n| \ge |x_i|$ for each $i=2,3,\ldots, n$. Let $y$ be a nonzero integer. Then $|P(y)| > |x_n|$.
Case 1: $0< |y| \le \frac{|x_n|}{2}$. Then $|y\prod_{i=2}^n (y-x_i)| > |x_n|$. Indeed, that $y$ is distinct from $0,x_2,\ldots, x_n$ implies that $|y|$ and each $|y-x_i|$, $i=2, \ldots, n-1$, are at least 1, and that at least 2 of $\{|y|, |y-x_i|; i=2,\ldots, n-1\}$ are at least 2, since the degree $n$ of $P$ is at least 5. So $|y\prod_{i=2}^{n-1}(y-x_i)|$ is at least 4. But then indeed as $|y| \le \frac{|x_n|}{2}$ it follows that $|y-x_n|$ is at least $\frac{|x_n|}{2}$, this implies that
$$|y\prod_{i=2}^{n} (y-x_i)| \ge \frac{|x_n|}{2} \times |y\prod_{i=2}^{n-1}(y-x_i)|$$
$$\ge \frac{|x_n|}{2} \times 4 > |x_n|.$$
So Claim 1 follows for Case 1.
Case 2: $|y| \ge \frac{|x_n|}{2}$. Then $|y\prod_{i=2}^n (y-x_i)| > |x_n|$. Indeed, that $y$ is distinct from $0,x_2,\ldots, x_n$ implies $|y-x_i|$ for each $i=2, \ldots, n$ is at least 1, and that $|y-x_i| \ge 2$ for at least 2 of the other $i$s in $2,\ldots, n$, since the degree $n$ of $P$ is at least 5. So $|\prod_{i=2}^{n}(y-x_i)|$ is at least 4. But then indeed as $|y|$ is at least $|x_n/2|$, this implies that
$$|y\prod_{i=2}^{n} (y-x_i)| \ge \frac{|x_n|}{2} \prod_{i=2}^{n}|(y-x_i)|$$
$$\ge \frac{|x_n|}{2} \times 4 > |x_n|.$$
So Claim 1 follows for the remaining Case 2 as well, thus Claim 1 follows.
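Claim 1 can also be sanity-checked numerically on sample polynomials (the root sets below are illustrative choices, not from the problem):

```python
from math import prod

def check_claim(roots, ys=range(-50, 51)):
    """roots: the nonzero distinct integer roots x_2..x_n of P(x) = x * prod(x - x_i).
    Verify |P(y)| > max|x_i| for every integer y in ys outside {0} union roots."""
    xmax = max(abs(x) for x in roots)
    for y in ys:
        if y == 0 or y in roots:
            continue
        P = y * prod(y - x for x in roots)
        assert abs(P) > xmax, (y, P)

# degree-5 example: P(x) = x(x-1)(x+2)(x-3)(x+7)
check_claim([1, -2, 3, -7])
# degree-6 example: P(x) = x(x-1)(x+1)(x-2)(x+3)(x-10)
check_claim([1, -1, 2, -3, 10])
```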
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3509219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $|f(z)| \leq |z|$ on annulus Let $D = \{ z \in \mathbb{C} : 2 < |z| < 3 \}$. Suppose that $f$ is holomorphic on $D$ and continuous on $\overline{D}$. Suppose that $\max \{ |f(z)| : |z| = 2\} \leq 2$ and $\max \{ |f(z)| : |z| = 3 \} \leq 3$. Show that $|f(z)| \leq |z|$ for all $z \in D$.
Here is the solution I have in mind, though I suspect it may be wrong. Assume $f$ is non-constant. By the maximum modulus principle, $|f|$ assumes its maximum on $\partial D$; hence $|f(z)| \leq 3$ for all $z \in D$. Define $g(z) = \frac{f(z)}{z}$, which is analytic on $D$. Consider $r \in (2,3)$. For $|z| = r$ we have $|g(z)| \leq \frac{3}{r}$, and since $|g(z)| \leq 1$ for $|z| = 2$, the maximum modulus principle gives $|g(z)| \leq \frac{3}{r}$ for all $|z| \in (2,r)$. Letting $r \rightarrow 3$, we get $|g(z)| \leq 1$ for all $|z| \in (2,3)$, i.e. $|f(z)| \leq |z|$. I essentially followed the same technique as in the proof of Schwarz's lemma, but I am not sure if this is right.
| Your proof is correct, but you can simplify it. The function $\lvert g(z) \rvert$ assumes its maximum on $\partial D$. For $\lvert z \rvert = 2$ we have $\lvert f(z) \rvert \le 2 = \lvert z \rvert$, thus $\lvert g(z) \rvert \le 1$. Similarly $\lvert g(z) \rvert \le 1$ for $\lvert z \rvert = 3$. Thus necessarily $\lvert g(z) \rvert \le 1$ for all $z \in D$.
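For a concrete illustration, one can test the conclusion numerically on a sample function satisfying the boundary bounds. Here $f(z)=z^2/3$ is a choice made for this sketch, not from the question:

```python
import numpy as np

def f(z):
    # sample holomorphic function on the annulus with the required bounds:
    # |f| = 4/3 <= 2 on |z| = 2 and |f| = 3 <= 3 on |z| = 3
    return z**2 / 3

# check the boundary hypotheses
theta = np.linspace(0, 2 * np.pi, 400)
assert np.all(np.abs(f(2 * np.exp(1j * theta))) <= 2 + 1e-12)
assert np.all(np.abs(f(3 * np.exp(1j * theta))) <= 3 + 1e-12)

# check the conclusion |f(z)| <= |z| at random points of the annulus
rng = np.random.default_rng(1)
r = rng.uniform(2, 3, 1000)
z = r * np.exp(1j * rng.uniform(0, 2 * np.pi, 1000))
assert np.all(np.abs(f(z)) <= np.abs(z) + 1e-12)
```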
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3509408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
circle envelope tangent in another circle As the picture shows,
One big circle, center $(0,0)$, radius $R$; inside it there is a small circle, center $(m,0)$, radius $r$.
$G$ is on the big circle. From $G$ we can draw the two tangent lines to the small circle; let $E$ and $F$ be their second intersections with the big circle. As $G$ varies, line $EF$ has an envelope, which appears to be a circle.
How can this be proved? Computing it directly requires much effort.
Some additional information:
By picking special points, I get that
the radius of the envelope circle is
$$\frac{R \left(m^4-2 m^2 \left(r^2+R^2\right)-2 r^2 R^2+R^4\right)}{\left(m^2-R^2\right)^2}$$
and the circle center
$$\left(\frac{1}{2} \left(\frac{R \left(-m^2+2 m R+2 r^2-R^2\right)}{(R-m)^2}-\frac{R \left(-m^2-2 m R+2 r^2-R^2\right)}{(m+R)^2}\right),0\right)$$
| Here is an analytic geometry solution :
We can assume without loss of generality that $R=1$, i.e., we work inside the unit circle. Therefore, we have the following condition : $0 < m <1$.
Let us introduce some notations.
Let $B$ be the second intersection of line $GC$ with the unit circle.
Let $a$ be the polar angle of point $G$ (i.e., oriented angle between positive $x$-axis and $AG$).
Let $b$ be the polar angle of $B$ (i.e., oriented angle between positive $x$-axis and $AB$).
Let $\theta$ be the angle of line $GE$ with line $GB$ (equal to the angle between $GB$ and $GF$).
We will use a certain number of trigonometric formulas. An extensive and well structured list of those can be found here.
Fig. 1: $G(\cos a,\sin a)$, $B(\cos b,\sin b)$, and $C(m,0)$, center of the small circle with radius $r$. The circle to which lines $EF$ are all assumed to be tangent is in red.
Let us look for relationships between angles $a,b$ and $\theta$ and lengths $m$ and $r$.
*
*a) The fact that $GE$ and $GF$ are tangent to the small circle is expressed by the following relationship :
$$\sin \theta = \dfrac{r}{GC}$$
which is equivalent to :
$$(\sin \theta)^2 = \dfrac{r^2}{GC^2}= \dfrac{r^2}{(\cos a - m)^2+(\sin a - 0)^2}=\dfrac{r^2}{1 - 2m \cos a +m^2}\tag{1}$$
*
*b) The angle between $AE$ and $AB$ is $2 \theta$ by central angle theorem. The same for the angle between $AB$ and $AF$. Let $I$ be the intersection point of line $AB$ and line $EF$ ; triangle $EAF$ being isosceles, line segment $AI$ is orthogonal to $EF$, implying that its algebraic measure ("signed distance") is $AI=\cos(2 \theta)$ (possibly negative).
Therefore, straight line $EF$ whose normal vector is $\vec{AB} = \binom{\cos b}{\sin b}$ has equation :
$$x \cos b + y \sin b = \cos 2 \theta \tag{2}$$
Using relationship $\cos 2 \theta=1-2 \sin^2 \theta$ with formula (1) :
$$x \cos b + y \sin b = 1-\dfrac{2 r^2}{1 - 2m \cos a +m^2}\tag{3}$$
*
*c) Let us, finally, express that $G, C$ and $B$ are aligned. This relationship, written under the form (see http://mathworld.wolfram.com/Collinear.html) :
$$\begin{vmatrix}m &\cos a &\cos b\\0 & \sin a&\sin b\\1&1&1\end{vmatrix}=0\tag{4}$$
i.e.,
$$m=\dfrac{\sin(a-b)}{\sin a - \sin b}$$
$$\iff \ \ m=\dfrac{\color{red}{2 \sin(\tfrac12(b-a))}\cos(\tfrac12(b-a))}{\color{red}{2 \sin(\tfrac12(b-a))}\cos(\tfrac12(b+a))}=\dfrac{1+\tan(a/2) \ \tan(b/2)}{1-\tan(a/2) \ \tan(b/2)}$$
yielding the rather unexpected following condition :
$$\tan(a/2) \ \tan(b/2)=k \ \ \text{where} \ \ k:=\dfrac{m-1}{m+1}\tag{5}$$
from which we deduce (using a "tangent half-angle formula" and setting $t:=\tan(b/2)$) that :
$$\cos a=\dfrac{1-\tan(a/2)^2}{1+\tan(a/2)^2}=\dfrac{1-\left(\tfrac{k}{t}\right)^2}{1+\left(\tfrac{k}{t}\right)^2}=\dfrac{t^2-k^2}{t^2+k^2}\tag{6}$$
Plugging (6) into formula (3), we get for the RHS of (3) after some algebraic transformations :
$$\cos 2 \theta=1-\dfrac{2r^2(t^2(m+1)^2+(m-1)^2)}{(m^2-1)^2(1+t^2)}$$
Using once again tangent half-angle formulas, we can now express the equation of straight line $EF$ under the following parametric form in variable $t$ :
$$x \dfrac{1-t^2}{1+t^2} + y \dfrac{2t}{1+t^2} - 1+\dfrac{2r^2(t^2(m+1)^2+(m-1)^2)}{(m^2-1)^2(1+t^2)}=0.\tag{7}$$
It remains to establish that the distance $d$ of the point with coordinates
$$C'(x_0,y_0)=\left(4m\dfrac{r^2}{(1-m^2)^2},0\right)\tag{8}$$
(would-be center of envelope circle) to straight line $EF$ is constant (i.e., is independent from $t$).
This distance, obtained (see http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html) by replacing $(x,y)$ in the Left Hand Side of equation (7) by the coordinates $(x_0,y_0)$ of $C'$ given by (8)
$$d=(2r^2(1+m^2) - (m^2 - 1)^2)/(m^2 - 1)^2\tag{9}$$
is indeed independent from parameter $t$ ; moreover it is (up to its sign) the awaited expression (I have used a Computer Algebra System to obtain this expression of $d$).
Remarks :
1) It is possible to characterize the tangent point of $EF$ on the (red) envelope circle, following the method outlined by @Jan-Magnus Økland in comments following this answer.
2) About a possible more (synthetic) geometry solution. I just discovered in this paper dealing with "exponential pencils of conics" that there is so-called "conjugate conic" of the first circle vs. the second circle (see Theorem 2.5 page 4). But I am not sure it will permit a new solution.
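The final claim — that the signed distance from $C'$ to line $EF$ does not depend on $t$ — is easy to confirm numerically from equations (7), (8) and (9); the values $m=0.4$, $r=0.3$ below are arbitrary test parameters (with $R=1$):

```python
import numpy as np

def line_coeffs(t, m, r):
    """Coefficients (A, B, C) of line EF from equation (7): A x + B y + C = 0.
    Note A^2 + B^2 = 1, so the signed point-line distance is A x0 + B y0 + C."""
    A = (1 - t**2) / (1 + t**2)
    B = 2 * t / (1 + t**2)
    C = -1 + 2 * r**2 * (t**2 * (m + 1)**2 + (m - 1)**2) \
            / ((m**2 - 1)**2 * (1 + t**2))
    return A, B, C

m, r = 0.4, 0.3                       # small circle inside the unit circle
x0 = 4 * m * r**2 / (1 - m**2)**2     # candidate center C' from (8), y0 = 0
d_formula = (2 * r**2 * (1 + m**2) - (m**2 - 1)**2) / (m**2 - 1)**2   # (9)

for t in np.linspace(-5, 5, 101):
    A, B, C = line_coeffs(t, m, r)
    assert abs(A**2 + B**2 - 1) < 1e-12
    dist = A * x0 + B * 0 + C         # signed distance of C' to line EF
    assert abs(dist - d_formula) < 1e-10   # constant, independent of t
```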
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3509582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |