| Q | A | meta |
|---|---|---|
Is my (first) proof by induction correct? I am self-studying and would very much appreciate some feedback on my first ever induction proof.
Let $y_1 = 6$, and for each $n\in \mathbb{N}$ define $y_{n+1} = (2y_n - 6)/3$. Use induction to prove the sequence satisfies $y_n > -6$.
First, we'll prove the base case. Because we've been given $y_1 = 6 > -6$, our base case will be $n=2$:
$y_2 = (2y_1-6)/3 = (2(6)-6)/3 = 2 > -6.$
Next, the induction step. Assume $y_{n+1} > -6$; we then prove $y_{n+2} > -6$. We'll do this by contradiction, assuming $y_{n+2} = (2y_{n+1}-6)/3 < -6$. Observe:
$$(2y_{n+1}-6)/3 < -6,$$
$$2y_{n+1}-6 < -18,$$
$$2y_{n+1} < -12,$$
$$y_{n+1} < -6,$$
which contradicts our initial assumption that $y_{n+1} > -6$. Thus, $y_{n+2} > -6$.
| You can still use the base case $n=1$ despite it holding automatically. In general for induction proofs it is more natural to write 'assume for $n$, then true for $n+1$', although it is essentially the same as what you have done. Having said that, for the proof by contradiction to work you need to start by assuming the precise opposite of what you are trying to prove, so assume $y_{n+2}\leq -6$ instead and you're good to go.
A good way to think of induction proofs is like a 'domino effect'. Check that it works for the first domino ($n=1$), and that if it works for $n$ then it must work for $n+1$. It then must work for all $n\geq 1$ by induction.
A cleaner proof: Clearly $y_1>-6$. If $y_n>-6$, then $y_{n+1}=\frac{1}{3}(2y_n-6)>-6$.
$y_1>-6$ and ($y_n>-6\implies y_{n+1}>-6$) gives that $y_n>-6$ by induction for all $n\geq 1$.
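As a numerical sanity check (not a proof), the recurrence can be iterated with exact rational arithmetic; the comment notes the closed form $y_n + 6 = 12\,(2/3)^{n-1}$, which follows directly from $y_{n+1}+6 = \tfrac{2}{3}(y_n+6)$:

```python
from fractions import Fraction

y = Fraction(6)
vals = [y]
for _ in range(200):
    y = (2 * y - 6) / 3        # y_{n+1} = (2*y_n - 6)/3, computed exactly
    vals.append(y)

# in fact y_n + 6 = 12*(2/3)^(n-1) > 0, so every term is strictly above -6
print(all(v > -6 for v in vals))  # True
```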
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4360266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the Kernel and Image of $\mathbb Z \to \mathbb Z[i]/(1+3i)$, $x\mapsto x+(1+3i)$ I'm trying to apply the homomorphism theorem to the following function:
$$h:\mathbb Z \to \mathbb Z[i]/(1+3i)$$
$$x\mapsto x+(1+3i)$$
Where $(1+3i)$ is the ideal generated by $1+3i$.
I know that because $\ker(h)$ is an ideal in $\mathbb Z$, which is a Principal Ideal Domain, we have $\ker(h)=m\mathbb Z$ for some $m \in \mathbb Z$, but I'm having some trouble finding that $m$.
And I have no clue how I can find the elements of $\text{im}(h)$.
How can I do this?
| Note that $\ker{h}=\{n \in \mathbb{Z} : n \in \langle 1+3i \rangle\}=\langle 1+3i \rangle \cap \mathbb{Z}$. $n \in \ker{h} \implies n=(1+3i)(a+bi)=a+bi+3ai-3b=(a-3b)+i(b+3a)$. Since $n \in \mathbb{Z}$, $b=-3a$. Thus, every $n \in \ker{h}$ is of the form $a(1+3i)(1-3i)=10a$ for some $a \in \mathbb{Z}$. Clearly, the least such positive element is $10$ because $|10a|=|10||a|=10|a|\geq 10$ for $a\neq 0$, and for $a=1$ equality is attained. So, $\ker{h}=\langle 10 \rangle$.
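A small brute-force check of this computation (the helper `in_ideal` is a hypothetical name; it just searches for a Gaussian integer $a+bi$ with $(1+3i)(a+bi)=n$):

```python
def in_ideal(n, bound=100):
    # (1+3i)(a+bi) = (a-3b) + (3a+b)i, so the integer n lies in the ideal
    # exactly when a - 3b = n and 3a + b = 0 for some integers a, b
    return any(a - 3 * b == n and 3 * a + b == 0
               for a in range(-bound, bound + 1)
               for b in range(-bound, bound + 1))

kernel = [n for n in range(-60, 61) if in_ideal(n)]
print(kernel)  # exactly the multiples of 10 in that range
```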
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4360394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does expected convergence in total variation distance imply weak convergence? From the definition of total variation distance, we know that convergence in total variation implies weak convergence. However, suppose we have the following,
$$
d_{TV}(X_n, X) = Y_n,
$$
and $\mathbb{E}[Y_n] \rightarrow 0$, and hence, $\mathbb{E}[d_{TV}(X_n, X)] \rightarrow 0$.
If the expectation of the total variation distance converges to $0$, can we still somehow conclude that $X_n$ converges weakly to $X$?
$d_{TV}$ refers to the total variation metric. The total variation distance between two probability measures $P$ and $Q$ on a common probability space
$(\Omega, \mathcal{F})$ is given by,
$$
d_{TV}(P, Q) = \sup_{A \in \mathcal{F}} |P(A) - Q(A)|.
$$
| First notice that $Y_n$ is a non-negative number, so $\operatorname{E}[Y_n]=Y_n$, and thus $Y_n\to 0$. Now let $P_n$ be the probability measure induced by $X_n$, let $P_X$ be the probability measure induced by $X$, and set $Q:=\frac1{2}P_X+\sum_{n\geqslant 1}\frac1{2^{n+1}}P_n$; then it's easy to check that $Q$ is a probability measure and that $P_n\ll Q$ and $P_X\ll Q$, therefore there are Radon–Nikodym derivatives $f_n$ and $f_X$ such that $P_n=f_n\cdot Q$ and $P_X=f_X\cdot Q$.
Let $\mathcal{B}(\mathbb{R})$ be the Borel $\sigma $-algebra of $\mathbb{R}$ and note that
$$
d_{TV}(X_n,X)=\sup_{A\in \mathcal{B}(\mathbb{R})}\left|\int_{A}(f_n-f_X)dQ\right|=\int_{\{f_n-f_X\geqslant 0\}}(f_n-f_X)dQ=\frac1{2}\int_{\mathbb{R}}|f_n-f_X|dQ
$$
where the last equality follows from the fact that
$$
\int_{\{f_n-f_X\geqslant 0\}}(f_n-f_X)dQ=\int_{\{f_n-f_X< 0\}}(f_X-f_n)dQ
$$
as $\int_{\mathbb{R}}(f_n-f_X)dQ=0$. Therefore $f_n \xrightarrow{L_1}f_X$, which implies that $X_n\xrightarrow{\text{dist.}}X$ (as $L_1$ convergence implies weak convergence of measures).∎
An easier way to state the same is that for all $c\in \mathbb{R}$
$$
|P_n((-\infty ,c])-P_X((-\infty ,c])|\leqslant d_{TV}(X_n,X)\to 0
$$
so $X_n\xrightarrow{\text{dist.}}X$.
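The CDF bound in the last display can be illustrated on a toy discrete example (an illustration only — `P` and `Q` are made-up distributions); the script also checks the standard identity $d_{TV}(P,Q)=\frac12\sum_i|P_i-Q_i|$ for discrete measures:

```python
from itertools import chain, combinations

P = [0.1, 0.4, 0.3, 0.2]   # made-up distributions on {0, 1, 2, 3}
Q = [0.2, 0.3, 0.3, 0.2]

# sup over all events A of |P(A) - Q(A)|
subsets = list(chain.from_iterable(combinations(range(4), r) for r in range(5)))
d_tv = max(abs(sum(P[i] for i in A) - sum(Q[i] for i in A)) for A in subsets)
half_l1 = 0.5 * sum(abs(p - q) for p, q in zip(P, Q))              # equals d_tv
cdf_gap = max(abs(sum(P[:c + 1]) - sum(Q[:c + 1])) for c in range(4))

print(d_tv, half_l1, cdf_gap)  # the CDF gap never exceeds d_tv
```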
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4360561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$a_{n+1} = a_n/2 + 1/a_n$ is Cauchy but has no limit in $\mathbb{Q}$ I want to show that the sequence recursively defined by
$a_{n+1} = \frac{a_n}{2} + \frac{1}{a_n}, \:\: a_1=1$
is a Cauchy sequence that does not converge in $\mathbb{Q}$.
My idea was to show that this is a bounded sequence with bounds $1\leq a_n \leq 2$ and with a monotone tail, so that the monotonicity principle implies convergences in $\mathbb{R}$. But then how do I show that the limit is not in $\mathbb{Q}$?
Is there a way to use the definition of a Cauchy sequence directly?
Any help would be much appreciated!
| Maybe this is overkill, but one approach would be to use the Banach Fixed Point Theorem. On closed intervals, it simplifies to:
Let $f:[a,b]\to[a,b]$ be such that there exists $0\leq K<1$ with $|f(x)-f(y)|\leq K|x-y|$ for all $x,y$. This is called a contraction on $[a,b]$. Then $f$ has a unique fixed point in $[a,b]$ i.e. there is a unique $x^*\in[a,b]$ with $f(x^*)=x^*$. Furthermore, if we let $x_1\in X$ and define $x_{n+1}:=f(x_n)$ for all $n\geq 1$, then $x_n\to x^*$.
Applying this to the interval $[1,2]$, we define $f:[1,2]\to[1,2]$ by $f(x):=\frac{x}{2}+\frac{1}{x}$. Then $|f(x)-f(y)|\leq\frac{1}{2}|x-y|$ gives that $f$ is a contraction. Since $a_{n+1}=f(a_n)$ for all $n$, we have that $a_n$ converges to the unique fixed point of $f$ by BFPT, which one can easily check is $\sqrt{2}$. Since $a_n$ converges, it must be Cauchy and, of course, it converges outside of $\mathbb{Q}$.
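The contraction can be watched in action numerically (a quick sketch; the inline assertion just confirms the iterates stay in $[1,2]$, as the theorem requires):

```python
a = 1.0
for _ in range(30):
    assert 1 <= a <= 2         # f maps [1, 2] into itself
    a = a / 2 + 1 / a          # a_{n+1} = f(a_n)
print(a)  # ≈ 1.41421356..., the fixed point sqrt(2)
```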
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4360700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Puzzle about sharing of information Let there be 16 people in a room, each with a distinct piece of information. Whenever two persons interact, they share all the information that they have gained till that time. For instance, if person $A$ interacts with person $B$, then both of them will share the information that they have, so both will have information, say, $a+b$. Now, if $A$ interacts with person $C$, then $A$ and $C$ both will have information $a+b+c$.
The question is: what is the minimum number of interactions needed so that everyone has all pieces of information?
If we denote the minimum interaction needed for $n$ people by $p(n)$, then it is easily shown that $$ p(n)+1 \leq p(n+1) \leq p(n)+2 $$
By computation, I know $p(4)=4$, and this gives the bound $16 \leq p(16) \leq 28$, which is not very fruitful. Any help is appreciated.
| The minimum number of phone calls needed is $p(n)=2n-4$, for all $n\ge 4$. The fact that $p(n)\le 2n-4$ for $n\ge 4$ is implied by $p(4)\le 4$ and your bound $p(n+1)\le p(n)+2$. Proving the matching lower bound requires careful analysis. The first published solution I could find was [Tijdeman (1971)]. There is also a solution in [Baker and Shostak (1972)] which I find to be easier to understand.
Brenda Baker and Robert Shostak, "Gossips and telephones," Discrete Mathematics, Volume 2, Issue 3, 1972, pages 191–193, ISSN 0012-365X, https://doi.org/10.1016/0012-365X(72)90001-5.
R. Tijdeman, "On a telephone problem," Nieuw Archief voor Wiskunde (3), XIX, 188–192 (1971).
The solution from [Tijdeman (1971)] is available online at Torsten Sillke's homepage (direct link to file), along with a modern typesetting of the [Baker and Shostak (1972)] proof.
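For small $n$ the value of $p(n)$ can be confirmed by brute force (a hypothetical breadth-first search over "who knows what" states; it is only feasible for small $n$, but it reproduces $p(4)=4$):

```python
from itertools import combinations

def min_calls(n):
    # BFS over knowledge states: state[i] = frozenset of secrets person i knows
    start = tuple(frozenset([i]) for i in range(n))
    full = frozenset(range(n))
    frontier, seen, calls = {start}, {start}, 0
    while frontier:
        if any(all(k == full for k in state) for state in frontier):
            return calls
        nxt = set()
        for state in frontier:
            for i, j in combinations(range(n), 2):
                shared = state[i] | state[j]   # a call merges both knowledge sets
                new = list(state)
                new[i] = new[j] = shared
                new = tuple(new)
                if new not in seen:
                    seen.add(new)
                    nxt.add(new)
        frontier = nxt
        calls += 1

print([min_calls(k) for k in (2, 3, 4)])  # [1, 3, 4]
```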
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4360878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What are the dimensions/units in the kernel density estimate? I've been reading about the kernel density estimate and the dimensions/units don't make sense to me.
Let $x$ have the dimensions of distance/length (L), i.e.
$$[x] = L,$$
where I am using square brackets to denote the dimensions of the symbol.
According to Wikipedia, if $(x_1, x_2, \dots, x_n)$ are independent and identically distributed samples drawn from some univariate distribution with an unknown density $f$ at any given point $x$, then the kernel density estimator is
$$\hat{f}_h(x)=\frac{1}{nh}\sum_{i=1}^n K\left(\frac{x-x_i}{h}\right),$$
where $K$ is a kernel and $h$ is the bandwidth.
I believe the dimensions of $\hat{f}_h(x)$ are one over length, $K(x)$ has no dimensions and the dimension of $h$ are length, i.e.
$$[\hat{f}_h(x)] = L^{-1},$$
$$[K(x)] = 1,$$
$$[h]=L.$$
However, further down on the Wikipedia page, it says
$$\mathrm{AMISE}(h) = \frac{R(K)}{n h} + \mathrm{other\ terms},$$
where
$$R(K)=\int_{-\infty}^\infty K(x)^2 dx.$$
Since
$$[\mathrm{AMISE}(h)] = [\mathrm{MISE}(h)]$$
we know that
$$[\mathrm{AMISE}(h)] = L^{-1}.$$
However,
$$[R(K)]=L,$$
therefore,
$$\left[\frac{R(K)}{nh}\right]=1.$$
But this can't be right as
$$[\mathrm{AMISE}(h)] = \left[\frac{R(K)}{nh}\right].$$
Do you know where I have gone wrong? I must have got the dimensions of one of the symbols wrong?
| I think it could be explained if
$$R(K) = \int_{-\infty}^\infty K(\mu)^2d\mu,$$
where $\mu$ is dimensionless, i.e.,
$$[\mu]=1.$$
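The dimensional claim $[\hat f_h]=L^{-1}$ can also be checked numerically: rescaling the data, the evaluation point, and the bandwidth by a common factor $c$ (a change of length unit) rescales the density estimate by $1/c$. A sketch with a Gaussian kernel and made-up data:

```python
import math
import random

def kde(xs, h, x):
    # Gaussian-kernel density estimate; K is dimensionless and the
    # 1/(n*h) prefactor carries the 1/L units
    K = lambda u: math.exp(-u * u / 2) / math.sqrt(2 * math.pi)
    return sum(K((x - xi) / h) for xi in xs) / (len(xs) * h)

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(500)]
c = 100.0                                 # e.g. switching metres to centimetres
f1 = kde(xs, 0.5, 0.3)
f2 = kde([c * xi for xi in xs], c * 0.5, c * 0.3)
print(f1, c * f2)                         # the two printed numbers agree
```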
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exterior Derivatives and Wedge Product I'm having trouble grasping the use of the wedge product in (what I think is) an exterior derivative. I found the following equation on the Wikipedia page (https://en.wikipedia.org/wiki/Exterior_derivative), but am not sure if I applied it correctly:
Is this the proper way to solve an exterior derivatives problem, like the first example below? And how would you solve the second part?
Example--
Consider the following differential form fields on $ℝ^3$:
$\alpha = x^3\,dx - 4x^2y\,dz$ and $\beta = z^3\,dx\wedge dy + \sin(z)\,dx\wedge dz$
*
*Find the derivative $d\beta$.
I attempted the following, but it doesn't make much sense to me:
$d\beta = \frac{\partial}{\partial x}(z^3)\, dx\wedge dx\wedge dy + \frac{\partial}{\partial y}(z^3)\, dy\wedge dx\wedge dy + \frac{\partial}{\partial x}(\sin z)\, dx\wedge dx\wedge dz + \frac{\partial}{\partial z}(\sin z)\, dz\wedge dx\wedge dz$
so $d\beta = 0$..?
$d\beta = 0 + 0 + 0 + (\cos z)\, dz\wedge dx\wedge dz$
$= (\cos z)\, dz\wedge dx\wedge dz$
Is that correct? If so what does that result mean?
*Find $\alpha\wedge\beta$.
I'm not sure how to go about solving this part.
| $
\def\red#1{\color{red}{#1}}
\def\green#1{\color{limegreen}{#1}}
\def\blue#1{\color{blue}{#1}}
\def\orange#1{\color{orange}{#1}}
$
Let $\alpha = x^3dx − 4x^2ydz$ and $\beta= z^3 dx\wedge dy +\sin (z) dx\wedge dz$ as in your question.
First to compute $d\beta$:
$$
\begin{align}
d\beta &= d(z^3dx\wedge dy) + d(\sin(z)dx\wedge dz)\\
&=\frac{\partial z^3}{\partial x}dx\wedge dx\wedge dy + \frac{\partial z^3}{\partial y}dy\wedge dx\wedge dy+\frac{\partial z^3}{\partial z}dz\wedge dx\wedge dy\\
&\hspace{.5cm}+\frac{\partial \sin(z)}{\partial x}dx\wedge dx\wedge dz+\frac{\partial \sin(z)}{\partial y}dy\wedge dx\wedge dz+\frac{\partial \sin(z)}{\partial z}dz\wedge dx\wedge dz\\
&=3z^2dz\wedge dx\wedge dy\,,\\
d\beta&= 3z^2 dx\wedge dy\wedge dz\,.
\end{align}$$
Now let's look at $\alpha\wedge \beta$:
$$\begin{align}
\alpha\wedge\beta&=(\red{x^3dx}-\green{4x^2ydz})\wedge(\blue{z^3dx\wedge dy} + \orange{\sin(z)dx\wedge dz})\\
&=\red{x^3dx}\wedge(\blue{z^3dx\wedge dy}+\orange{\sin(z) dx\wedge dz})\\
&\hspace{.5cm}-\green{4x^2ydz}\wedge(\blue{z^3dx\wedge dy}+\orange{\sin(z) dx\wedge dz})\\
&=\red{x^3}\blue{z^3}\red{dx}\wedge \blue{dx\wedge dy} +\red{x^3}\orange{\sin(z)}\red{dx}\wedge\orange{ dx\wedge dz}\\
&\hspace{.5cm}-\green{4x^2y}\blue{z^3}\green{dz}\wedge \blue{dx\wedge dy}-\green{4x^2y}\orange{\sin(z)}\green{dz}\wedge \orange{dx\wedge dz}\\
&=-\green{4x^2y}\blue{z^3dx\wedge dy}\wedge \green{dz}=-4x^2yz^3\,dx\wedge dy\wedge dz
\end{align}$$
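Both computations can be double-checked mechanically. The sketch below (assuming `sympy` is available) represents a $k$-form as a dict mapping sorted index tuples — $0,1,2$ standing for $dx,dy,dz$ — to coefficients, with the sign tracked by the sorting permutation:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
VARS = (x, y, z)

def sort_sign(idx):
    # bubble-sort the wedge indices, tracking the sign of the permutation
    idx, sign = list(idx), 1
    for i in range(len(idx)):
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

def add_term(form, idx, coef):
    if len(set(idx)) < len(idx) or coef == 0:   # repeated factor: dx^dx = 0
        return
    key, sign = sort_sign(idx)
    form[key] = sp.expand(form.get(key, 0) + sign * coef)

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            add_term(out, ia + ib, ca * cb)
    return {k: v for k, v in out.items() if v != 0}

def d(a):
    # exterior derivative: differentiate each coefficient, prepend the index
    out = {}
    for idx, coef in a.items():
        for i, v in enumerate(VARS):
            add_term(out, (i,) + idx, sp.diff(coef, v))
    return {k: v for k, v in out.items() if v != 0}

alpha = {(0,): x**3, (2,): -4 * x**2 * y}    # x^3 dx - 4x^2 y dz
beta = {(0, 1): z**3, (0, 2): sp.sin(z)}     # z^3 dx^dy + sin(z) dx^dz

print(d(beta))             # {(0, 1, 2): 3*z**2}
print(wedge(alpha, beta))  # {(0, 1, 2): -4*x**2*y*z**3}
```

Running it reproduces $d\beta = 3z^2\,dx\wedge dy\wedge dz$ and $\alpha\wedge\beta = -4x^2yz^3\,dx\wedge dy\wedge dz$.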
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two functions agreed to differ at points on a full set While studying concepts of measurable functions, there was a theorem suggesting that if any measurable function is altered on a null set, its measurability still remains.
If $f:E\to\mathbb R$ is measurable, $E\in\mathcal M, g:E\to\mathbb R$ is such that the set $\left\{x:f(x)\ne g(x)\right\}$ is null, then $g$ is measurable.
They showed the following proof.
Consider the difference $d(x)=g(x)-f(x)$. It is zero except on a null set, so
$$\left\{x: d(x)\gt a\right\}=
\begin{cases}
\text{a null set} & a\ge0 \\
\text{a full set} & a\lt 0 \\
\end{cases}$$
Here, a full set is the complement of a null set. Since both null and full sets are measurable, $d$ is a measurable function. $g=f+d$ is thus measurable.
Now, I was curious whether the statement remains true if I replace the set $\left\{x: f(x)\ne g(x)\right\}$ with $\left\{x: f(x)=g(x)\right\}$,
i.e., allow $f$ and $g$ to differ at points of a full set, since the proof seems to go through if I alter it as follows.
The difference $d(x)$ is still measurable since
$$\left\{x: d(x)\gt a\right\}=
\begin{cases}
\text{a full set} & a\ge0 \\
\text{a full set} & a\lt 0 \\
\end{cases}$$
However, the latter statement is actually not true at all. Where does the logic go wrong here?
I appreciate your help, as always.
| Let $f=0, A\subseteq E$ be a non-measurable subset and $N$ is a null set with $N\cap A=\varnothing.$
Then we let $g(x)=
\begin{cases}
1,&x\in A\\
0,&x\in N\\
-1,&x\in E\setminus(A\cup N)\\
\end{cases}$.
Thus $\{x:d(x)>0\}=A$, which is non-measurable. So here $d(x)$ is not a measurable function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Verify that $(\neg P\land N)\lor(\neg D \land P)\rightarrow \neg D\lor N$ The answer seems to be $\neg D \lor N$, but below is what I got; no idea where it goes wrong.
$$\begin{aligned} (\neg P\land N)\lor(\neg D \land P) &\equiv \neg(\neg N\lor P)\lor \neg(\neg P\lor D)\\ &\equiv\neg[(N\implies P)\land(P\implies D)]\\&\rightarrow \neg(N\implies D)\\ &\equiv\neg(\neg N\lor D) \\ &\equiv\neg D\land N \end{aligned}
$$
I'd appreciate any help.
|
\begin{aligned} (\neg P\land N)\lor(\neg D \land P) &\equiv \neg(\neg N\lor P)\lor \neg(\neg P\lor D)\\ &\equiv\neg((N\implies P)\land(P\implies D))\\&\rightarrow \neg(N\implies D)\\ &\equiv\neg(\neg N\lor D) \\ &\equiv\neg D\land N \end{aligned}
That logical entailment in Line 3 is incorrect, as evidenced by the assignment $(P,N,D)=(0,1,1).$
Here's a correct attempt, if you don't mind applying the distributive laws: \begin{aligned} (\neg P\land N)\lor(\neg D \land P) &\equiv (¬P∨¬D)∧(¬P∨P)∧(N∨¬D)∧(N∨P)\\
&\models (N∨¬D)\\
&\equiv \neg D ∨ N. \end{aligned} Thus, $$(\neg P\land N)\lor(\neg D \land P) \to \neg D ∨ N$$ is a validity, as required.
P.S. I use $≡$ and $⊨$ to mean logically equivalent and logically implies, respectively (i.e., as metalogical symbols), while I use $\to$ merely as the material conditional (i.e., as a logical operator). As for $\implies,$ I use it just to mean implies (e.g., $x=2\implies x^2=4$) rather than as the material conditional.
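A brute-force truth-table check confirms both points (a quick script enumerating all eight assignments):

```python
from itertools import product

def implies(a, b):             # material conditional
    return (not a) or b

# the whole implication is a validity: true under all eight assignments
tautology = all(
    implies((not P and N) or (not D and P), (not D) or N)
    for P, N, D in product([False, True], repeat=3)
)
print(tautology)  # True

# but the entailment used in Line 3 fails at (P, N, D) = (0, 1, 1)
P, N, D = False, True, True
premise = not (implies(N, P) and implies(P, D))
conclusion = not implies(N, D)
print(premise, conclusion)  # True False
```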
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Let $G$ be a group of order $7105$, show that the order of the center of $G$ is divisible by $35$. Let $G$ be a group of order $5\cdot 7^2 \cdot 29$, show that the order of the center $Z(G)$ of $G$ is divisible by $35$.
Now, $G$ contains only one Sylow $5$-subgroup, $P_5$, and since the conjugation action of $G/P_5$ on $P_5$ is the trivial homomorphism, $P_5 \subseteq Z(G)$, so $5$ divides the order of the center.
The number of Sylow $7$-subgroups could be either $1$ or $29$; in the first case we can conclude as above. What about the second case?
| This question is listed as unanswered. So I decided to post an answer. This $G$ has unique Sylow $29$-subgroup $S$ by the Sylow theorem. So $S$ is normal, $G/S$ has order $5\cdot 49=245$. $G/S$ has unique Sylow $5$-subgroup $T/S$ of order $5$ and unique $7$-subgroup $R/S$ of order $49$. Both $T/S$ and $R/S$ are cyclic, and normal.
Then $T$ has order $5\times 29$, so it has unique normal, hence characteristic, subgroup $T_1$ of order $5$. Since $T_1$ is characteristic in $T$ which is normal in $G$, $T_1$ is normal in $G$. $G$ acts on $T_1$ by conjugation, but the automorphism group of $T_1$ has order $4$ and $|G|$ is odd, so the action is trivial, and $T_1$ is central in $G$.
Let $R/S$ be generated by a coset $aS$ of order $49$. Then $a$ acts on $S$ by conjugation. The order of the automorphism group of $S$ is $28=4\cdot 7$, so $a^7S$ is an element of order $7$ which must act trivially on $S$. Therefore $b=a^{7\cdot 29}$ must have order $7$ in $G$. Also it centralizes $S$. Let $B=\langle b\rangle$. Then the subgroups $T_1, B, S$ form a direct product in $G$ (they pairwise commute elementwise, and the intersection of each of them with the product of the other two is trivial by the Lagrange theorem). Therefore $T_1B$ is a central subgroup of $G$ of order $35$ (here we use the fact that $T_1$ is central and that $b$ commutes with $a$ and with all elements of $S$, hence is central in $G$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to show that for $C, d>0$, the integral $ \int_1^{\infty} \exp(-\frac{x^d}{2C})dx<\infty $? How to show that for $C, d>0$, the integral $$
\int_1^{\infty} \exp(-\frac{x^d}{2C})dx<\infty
$$
As $d=1$,
$$
\int_1^{\infty} \exp(-\frac{x}{2C})dx=2C\int_1^{\infty} \exp(-\frac{x}{2C})d(\frac{x}{2C})=2Ce^{-1}<\infty
$$
But how to show this integral is finite for $d>1$?
| Note that by Taylor expansion of $\exp$ function, for any $k \ge 1$ and $x \ge 0$ we have $\exp(x) \ge \frac{x^k}{k!}$.
Hence for any $d > 0$, $C>0$, $k \ge 1$ and $x > 0$ we get $$ \exp(\frac{x^d}{2C}) \ge \frac{(x^d)^k}{(2C)^kk!} $$
Choose $k$ such that $dk \ge 2$. Let $M = \frac{1}{(2C)^k k!}$ (it's independent of $x$ !). Then for $x \ge 1$ we get $\exp(\frac{x^d}{2C}) \ge Mx^2$. Hence $$ \int_1^{\infty}\exp(-\frac{x^d}{2C})dx \le \int_{1}^\infty\frac{1}{Mx^2}dx < \infty$$
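The choice of $k$ and the resulting bound can be sanity-checked numerically (an illustration only, with arbitrarily chosen constants):

```python
import math

C, d = 0.7, 0.3                      # arbitrary positive constants (d < 1 on purpose)
k = max(1, math.ceil(2 / d))         # pick k with d*k >= 2
M = 1 / ((2 * C) ** k * math.factorial(k))

# exp(t) >= t^k/k! for t >= 0, so exp(x^d/(2C)) >= M*x^(d*k) >= M*x^2 when x >= 1
xs = [1 + 0.05 * i for i in range(2000)]
print(all(math.exp(x ** d / (2 * C)) >= M * x ** 2 for x in xs))  # True
```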
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4361897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Simplicial polyhedral cones I would appreciate a reply to my question: let's consider the Euclidean $d$-dimensional real vector space. A polyhedral cone $K$ is said to be simplicial if $K$ is generated by linearly independent vectors (and non-simplicial otherwise).
If $d=2$, if I'm not wrong, every cone is simplicial. This implies that, if $d=3$, for an arbitrary cone $K$, its faces are always simplicial cones.
Question: In dimension $d\geq 4$, is the last statement always true? That is, do there exist non-simplicial cones with faces that are non-simplicial?
Thank you very much for your reply.
| A polyhedral cone is the non-negative hull of a given set of points; e.g. for two given points $x$ and $y$ the non-negative hull is $\{ \lambda x+\mu y : \lambda, \mu\ge0\}$. You might well visualize this set by means of the convex hull of those two points, $\{\lambda x + (1-\lambda) y : 0\le\lambda\le 1\}$: any ray from the (also given) origin that intersects that convex hull is contained within the cone. So, in other words, that convex hull is a cross-section of the cone, and conversely the cone is like an infinite pyramid over that hull.
Thence the statement that faces are always simplicial fails in higher dimensions: it suffices to provide a $(d-1)$-dimensional polytope which itself is not simplicial and whose facets are not simplicial either. Here you might think of a 3-dimensional polyhedron — a cube, say — serving as that convex hull, placed within an affine hyperplane of a 4-dimensional space: each of the six facet cones (of that cube-supported cone) will be supported by a square. None of these is simplicial, for sure.
--- rk
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4362253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does $F:P\otimes−:Mod_R\to Mod_R$ define an equivalence of categories? Let $P$ be a finitely generated projective R-module of rank 1 which is not free.
Let $F:P\otimes_R−:Mod_R\to Mod_R$ be the functor.
We have that $R$ is free and $F(R)=P\otimes_R R\simeq P$ is not free.
If $F$ did define an equivalence of categories, we would have proved that "free" is not a categorical property.
But why does $F$ define an equivalence of categories? (I suppose that the fact that $P$ is projective plays an important role.)
Let $p$ be the generator of $P$.
The natural transformation $\alpha_M:M\to M\otimes_R P$ defined as $\alpha_M(m)= m\otimes p$ may not define a natural isomorphism: $\alpha_M$ may not be an isomorphism for every $M$.
| Thanks to the comment by rschwieb, it is easy to see that 'free' is not a categorical property.
In relation to my specific question about the functor $F$, I have just found a theorem by Morita that explains why $F$ defines an equivalence of categories. I found it in Rotman's book "An Introduction to Homological Algebra", page 269. All these ideas are developed in Morita Theory.
The general idea would be the following:
We need to define another functor $G = Hom_R(P,-)$. Since $F$ and $G$ are adjoint, they define two natural transformations $FG\to 1_R$ and $1_R\to GF$.
Since $P$ is a small projective generator of $Mod_R$, these natural transformations are also isomorphisms.
All the details can be found in the proof:
Theorem 5.55. Let $R$ be a ring and let $P$ be a small projective generator of $Mod_R$. If $S = End_R(P)$, then there is an isomorphism $F : Mod_S \to Mod_R$ given by $M \to M \otimes_S P$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4362443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to make Q well-ordered? How can we define a well-ordering relation on the set $\mathbb{Q}$ (the set of all fractions)?
[We're looking for a total order on $\mathbb{Q}$ such that every non-empty subset of $\mathbb{Q}$ has a least element under that order. Clearly the conventional $<$ relation we're familiar with doesn't do the job.]
| You just need to find a way to run through them systematically. Let's ignore the negatives for a moment. List all of the integers, then those of the form $\frac{n}{2}$ (except those that you have already done), now those of the form $\frac{n}{3}$, etc.
Whoops, that doesn't work, since the first set does not end and you never start the second. However, we can fix the procedure.
List all of the form $\frac{m}{n}$ where $m + n = 1$; that's just $0$. Now those of the form $m + n = 2$; that's just $1$. Now $m + n = 3$; that's $2$ and $\frac{1}{2}$. Etc. This time, each set is finite and hence it will end and you can go on to the next. Again, skip ones which have already appeared. Every positive rational will eventually show up.
What about the negatives? Just pop them in after the corresponding positive.
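The procedure above can be turned into a generator directly (a sketch; `rationals` is a hypothetical name):

```python
from fractions import Fraction
from itertools import islice

def rationals():
    # enumerate Q: group positive rationals m/n by m+n, skip repeats,
    # and pop each negative in right after its positive twin
    seen = set()
    yield Fraction(0)
    s = 2
    while True:
        for m in range(1, s):
            q = Fraction(m, s - m)
            if q not in seen:
                seen.add(q)
                yield q
                yield -q
        s += 1

first = list(islice(rationals(), 11))
print(first)  # 0, 1, -1, 1/2, -1/2, 2, -2, 1/3, -1/3, 3, -3
```

Declaring $q$ less than $q'$ whenever $q$ appears earlier in this enumeration gives a well-order: any non-empty subset has a least element, namely the member that shows up first.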
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4362629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An exercise for positive maps on operator systems Definition. Let $A$ be a unital C*-algebra. If $A$ has a unit $1$ and $S$ is a self-adjoint ($S^*=S$) subspace of $A$ containing $1$, then we call $S$ an *operator system*.
Question: Let $S$ be an operator system, $B$ be a C*-algebra, and $\phi: S\rightarrow B$ be a positive map. How to prove that $\phi$ extends to a positive map on the norm closure of $S$?
This is an exercise from Page 21 of the book Completely Bounded Maps and Operator Theory by V. Paulsen.
| The question is basically resolved in the comments; bounded maps extend in a norm-preserving way to the closure, and to see positivity of the extension, take a positive element of the closure, find a sequence of positive elements from the space that converges to it, apply the extension, and then use the fact that the positive elements in the target $C^*$-algebra form a closed set. The question becomes how to find a sequence of positive elements from $S$ that converges to $x$.
If $S$ was a $*$-subalgebra, then this would be easy: since $x\ge0\in\overline{S}$ we also have $x^{1/2}\in \overline{S}$ (since in this case $\overline{S}$ is a $C^*$-algebra) so we find a sequence $(x_n)\subset S$ with $x_n\to x^{1/2}$ and then $S\ni x_n^*x_n\to (x^{1/2})^*(x^{1/2})=x$ are a sequence of such positive elements.
This technique fails when we move to operator systems. But, the standard technique (which you will see used in Paulsen's book in many "flavors") is this: let $S$ be an operator system and $x\ge0$ an element of $\overline{S}$. Find a sequence $(x_n)\subset S$ with $x_n\to x$. Then $x_n^*\to x$, so $\frac{x_n+x_n^*}{2}\to x$, and the elements $\frac{x_n+x_n^*}{2}$ are self-adjoint elements of $S$. So we can assume without loss of generality that $(x_n)$ are self-adjoint. Set $\varepsilon_n:=\|x_n-x\|$. Then since for any self-adjoint we have $-\|h\|1\le h\le\|h\|1$, we have
$$-\varepsilon_n1\le x_n-x\le \varepsilon_n1$$
and thus $0\le x_n+\varepsilon_n1-x$. But $x\ge0$, so $(x_n+\varepsilon_n1-x)+x\ge0$, i.e. $x_n+\varepsilon_n1\ge0$. But $x_n+\varepsilon_n1$ are elements of $S$ and since $x_n\to x$ and $\varepsilon_n\to0$, we have $x_n+\varepsilon_n1\to x$.
Another way that this $+\varepsilon1$ technique is used (in Paulsen's book) is the following: assume you have a sequence of positive elements $(x_n)$ that converge to some $x$ but you need invertible elements; just take $(\varepsilon_n)\subset(0,1)$ with $\varepsilon_n\to0$ and take $x_n'=x_n+\varepsilon_n1$. Then $x_n'\to x$ and $x_n'\ge\varepsilon_n1$, giving you invertibility.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4362780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$T:\Bbb Z_5^n \to\Bbb Z_5^3$, $n>1$. Prove that if $T$ is onto and $\ker T$ contains 5 vectors then $n=4$
$T:\Bbb Z_5^n \to \Bbb Z_5^3$, $n>1$. Prove that if $T$ is onto and $\ker T$ contains 5 vectors then $n=4$
This is supposed to be simple and the solution in the textbook is just a line but I couldn't understand a certain part.
Since it is onto we know that $\dim\operatorname{Im}T=3$, and according to the dimension theorem $\dim\ker T=n-\dim\operatorname{Im}T$, so $\dim\ker T=n-3$. The following part is what I did not understand; I will quote the textbook: "on the other hand $\dim\ker T=1$ because it has 5 vectors, therefore $n=4$". I cannot understand how they got to $\dim\ker T=1$ from the fact that it contains 5 vectors. I know this is a simple question and there isn't much to show of what I tried, but I do not seem to understand it.
Thank you!
| Since $T$ is onto, $n-\dim\ker T=3$, that is, $\dim\ker T=n-3$. A subspace of $\Bbb Z_5^n$ of dimension $k$ contains exactly $5^k$ vectors, so a kernel with only $5=5^1$ vectors must have $\dim\ker T=1$. Hence $n-3=1$, so $n=4$.
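The counting can be illustrated with an explicit example (a hypothetical onto map $T:\Bbb Z_5^4\to\Bbb Z_5^3$ given by a rank-$3$ matrix):

```python
from itertools import product

p = 5
# a hypothetical onto map T: Z_5^4 -> Z_5^3 given by a rank-3 matrix
A = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3]]

def T(v):
    return tuple(sum(a * b for a, b in zip(row, v)) % p for row in A)

kernel = [v for v in product(range(p), repeat=4) if T(v) == (0, 0, 0)]
image = {T(v) for v in product(range(p), repeat=4)}
print(len(kernel), len(image))  # 5 125
```

The kernel has $5=5^1$ elements (dimension $1$) while the image is all of $\Bbb Z_5^3$, with $5^3=125$ elements.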
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4362908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I prove that there are no retractions from $S^1\times \bar{B^2}$ to $A$? I want to show that there is no retraction from $S^1\times \bar{B^2}$ to $A$, where $A$ is the following:
Let us assume that $A$ is a deformation retract of $X$. Then by definition there exists a retraction $$r:X\rightarrow A;\,\,\,\,r(a)=a\,\,\,\forall a\in A$$ In addition $r\circ i=id_A$ where $i:A\rightarrow X$ is the inclusion. Thus we can consider the morphisms $$r_*:\Pi_1(X)\rightarrow \Pi_1(A)$$ $$i_*:\Pi_1(A)\rightarrow \Pi_1(X)$$ where $r_*$ is surjective and $i_*$ is injective. Now consider also $$id_{\Pi_1(A)}=(r\circ i)_*:\Pi_1(A)\rightarrow \Pi_1(A)$$ and by a corollary we know that $r_*\circ i_*=id_{\Pi_1(A)}$
Now I only need to find a contradiction.
Therefore I wanted to ask if someone could help me?
Thanks for your help.
| Let me fix some notation. We write $X=\Bbb S^1 \times \Bbb D^2$, where $\Bbb D^2$ denotes the closed unit disk. We have embeddings $j:\Bbb S^1 \cong \Bbb S^1\times\{0\}\subseteq X$ and $i:\Bbb S^1\cong A\subseteq X$. The first of both has a retraction $q:X\rightarrow \Bbb S^1, (x,t)\mapsto (x,0)$ which turns $(j,q)$ into a deformation retract, hence homotopy equivalence.
Now assume that $i$ admits a retraction $p$. As you noted this induces a surjection
$\pi_1(X) \rightarrow \pi_1(A)$ on all homotopy groups. Precomposing with the homotopy equivalence $j$ we thus have a surjection $\pi_1(\Bbb S^1) \rightarrow \pi_1(\Bbb S^1)$. This is a surjective group homomorphism $\Bbb Z \rightarrow \Bbb Z$, which necessarily is an isomorphism.
Note that the composite $qi:\Bbb S^1 \rightarrow \Bbb S^1$ defines a nullhomotopic map, hence the zero map on homotopy groups. But at the same time it is a composite of two isomorphisms of homotopy groups, hence itself an isomorphism of homotopy groups. A contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Where is the "covariance" in a covariant derivative I have the following definition of a covariant derivative. Consider a general fibre bundle $E \rightarrow M$ with a connection given by a parallel transport, i.e. along a path $\gamma$ in $M$ we have a transport $\Gamma(\gamma)^t_s : E_{\gamma(s)} \rightarrow E_{\gamma(t)}$ with a covariant derivative $\nabla_{\dot{\gamma}(0)} \sigma(x) := \frac{d}{d t}\mid_{t=0}(\Gamma(\gamma)^t_0)^{-1} \circ \sigma \circ \gamma(t)$.
My question is, what does "covariant" refer to in the name covariant derivatrive? I have two main guesses:
*
*It reflects the fact that local forms of the covariant derivative "commute" with the transition maps of the bundle - but this is pretty obvious, as the derivative is defined globally and its local expressions are defined so that it makes sense.
*It is purely historical and stems from the fact that the above covariant derivative is a generalisation of the covariant derivative of a metric connection which "vector field component" changes covariantly.
| The covariant derivative extends the notion of the directional derivative from multivariate calculus.
*
*$\nabla_v f(x) = f'_v(x) = D_v f(x) = Df(x)(v) = \partial_v f(x) = v \cdot \nabla f(x) = v \cdot \frac{\partial f(x)}{\partial x} $
The type of definition of derivative depends on the problem you are trying to solve and the geometric object you were looking at.
*
*$\displaystyle (\nabla_v f(x))_p = (f \circ \phi)'(0) = \lim_{t \to 0} \frac{f(\phi(t)) - f(p)}{t} $
where $\phi(t): [-1,1]\to M$ is a "path" of some kind. This seems to be called the Lie derivative since we're differentiating a scalar function along a vector field.
These definitions had to be "covariant" with respect to change of coordinates. The physics didn't change just because you used a different ruler or camera.
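As a concrete sanity check of the identities in the first bullet (not part of the original answer; the function $f$, the point $p$, and the direction $v$ below are arbitrary choices), one can verify numerically that the limit definition of the directional derivative agrees with $v\cdot\nabla f$:

```python
# f(x, y) = x^2 + 3xy, an arbitrary smooth scalar function on R^2
def f(x, y):
    return x * x + 3 * x * y

# Analytic gradient: (2x + 3y, 3x)
def grad_f(x, y):
    return (2 * x + 3 * y, 3 * x)

p = (1.0, 2.0)   # base point
v = (0.5, -1.0)  # direction

# Directional derivative via the limit definition, with a small t
t = 1e-6
limit_def = (f(p[0] + t * v[0], p[1] + t * v[1]) - f(*p)) / t

# Directional derivative via v . grad f
g = grad_f(*p)
dot_def = v[0] * g[0] + v[1] * g[1]

print(limit_def, dot_def)  # both ~ 1.0
```

Either way of computing it gives the same number, which is the "physics didn't change" point: the value is independent of how the derivative is packaged.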
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Proof that limit points are unique I am having problems proving that the limit points of a sequence are unique. For example, given the following sequence
\begin{equation}
x_n=(-1)^n+\frac{1}{n}
\end{equation}
To find the limit points, I establish these subsequences
\begin{equation}
y_n=x_{2n}=(-1)^{2n}+\frac{1}{2n}=1+\frac{1}{2n}
\end{equation}
\begin{equation}
z_n=x_{2n+1}=(-1)^{2n+1}+\frac{1}{2n+1}=-1+\frac{1}{2n+1}
\end{equation}
Where it can be seen that
\begin{equation}
y_n\rightarrow 1
\end{equation}
\begin{equation}
z_n\rightarrow -1
\end{equation}
My question is how can I prove that these are the only two limit points of the sequence.
I suppose that it is related to the fact that
\begin{equation}
\operatorname{Image}(f)\cup \operatorname{Image}(g)=\mathbb{N}
\end{equation}
where
\begin{equation}
f(n)=2n\; \; \; \;
g(n)=2n+1
\end{equation}
| Suppose that $L$ is another number, not equal to $1$ or to $-1$. Then we can find neighborhoods of $1$, $-1$, and $L$, all mutually disjoint. Since the sequence is eventually inside the union of the neighborhoods of $1$ and $-1$, then it's eventually outside the neighborhood of $L$, so $L$ is not a limit point.
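A quick numerical illustration of this argument (a sketch, not part of the proof; the choices $\varepsilon=0.1$, $N=20$, $L=0.5$ are arbitrary): past some index, every term of $x_n=(-1)^n+1/n$ lies within $\varepsilon$ of either $1$ or $-1$, so a neighborhood of any third candidate $L$ is eventually avoided:

```python
def x(n):
    return (-1) ** n + 1 / n

eps = 0.1
N = 20  # for n >= N we have 1/n <= 0.05 < eps
tail_ok = all(abs(x(n) - 1) < eps or abs(x(n) + 1) < eps
              for n in range(N, 2000))
print(tail_ok)  # True: the tail stays near {1, -1}

# A candidate like L = 0.5 has a neighborhood of radius eps
# that contains no tail terms at all
near_L = [n for n in range(N, 2000) if abs(x(n) - 0.5) < eps]
print(near_L)  # []
```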
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
} |
Determine the greatest of the numbers $\sqrt2,\sqrt[3]3,\sqrt[4]4,\sqrt[5]5,\sqrt[6]6$ Determine the greatest of the numbers $$\sqrt2,\sqrt[3]3,\sqrt[4]4,\sqrt[5]5,\sqrt[6]6$$ The least common multiple of $2,3,4,5$ and $6$ is $LCM(2,3,4,5,6)=60$, so $$\sqrt2=\sqrt[60]{2^{30}}\\\sqrt[3]3=\sqrt[60]{3^{20}}\\\sqrt[4]4=\sqrt[60]{4^{15}}=\sqrt[60]{2^{30}}\\\sqrt[5]{5}=\sqrt[60]{5^{12}}\\\sqrt[6]{6}=\sqrt[60]{6^{10}}=\sqrt[60]{2^{10}\cdot3^{10}}$$ Now how do we compare $2^{30},3^{20},4^{15},5^{12}$ and $6^{10}$? I can't come up with another approach.
| Let$$f(x)=x^{1/x}=e^{\log(x)/x}$$and note that $f(n)=\sqrt[n]n$, for each $n\in\Bbb N$. You have$$f'(x)=\frac{1-\log(x)}{x^2}e^{\log(x)/x},$$which is greater than $0$ on $[1,e)$ and smaller than $0$ on $(e,\infty]$. Therefore $f$ is strictly increasing on $[1,e]$ and strictly decreasing on $[e,\infty)$. So, since $e<3$ and since $3<4<5<6$,$$\sqrt[3]3>\sqrt[4]4>\sqrt[5]5>\sqrt[6]6.$$Besides, $\sqrt2=\sqrt[4]4$. And it is easy to compare $\sqrt2$ with $\sqrt[3]3$; just use the fact that $\sqrt2^6=8$ and that $\sqrt3^6=9$.
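A quick numerical check of the resulting ordering (an illustration only, not part of the proof):

```python
# n^(1/n) for the five candidates
vals = {n: n ** (1 / n) for n in (2, 3, 4, 5, 6)}
print(vals)
# 3^(1/3) is the largest; 2^(1/2) coincides with 4^(1/4)
```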
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Using De Moivre's theorem to solve $(z−3+2i)^4 = z^4$ What are all the solutions to: $(z−3+2i)^4 = z^4$?
I know I have to use De Moivre's theorem which states:
$$(\cos\theta + i\sin\theta)^n=\cos (n\theta) + i\sin (n\theta)$$
| To solve
$$
\left(\frac{z-3+2i}{z}\right)^4=1\tag1
$$
Using De Moivre, we want to find $\theta$ so that
$$
\begin{align}
1
&=(\cos(\theta)+i\sin(\theta))^4\tag{2a}\\
&=\cos(4\theta)+i\sin(4\theta)\tag{2b}
\end{align}
$$
which holds for $\theta\in\left\{0,\frac\pi2,\pi,\frac{3\pi}2\right\}$. Thus
$$
\frac{z-3+2i}z=\cos\left(\tfrac{k\pi}2\right)+i\sin\left(\tfrac{k\pi}2\right)\tag3
$$
for $k\in\{1,2,3\}$ (we won't get a solution for $k=0$). That is,
$$
z=\frac{3-2i}{1-\cos\left(\frac{k\pi}2\right)-i\sin\left(\frac{k\pi}2\right)}\tag4
$$
We get the three solutions
$$
\left\{\frac{5+i}2,\frac{3-2i}2,\frac{1-5i}2\right\}\tag5
$$
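The three roots can be checked directly (a verification sketch, not part of the original answer):

```python
# Verify that each claimed root satisfies (z - 3 + 2i)^4 = z^4
roots = [(5 + 1j) / 2, (3 - 2j) / 2, (1 - 5j) / 2]
for z in roots:
    lhs = (z - 3 + 2j) ** 4
    rhs = z ** 4
    print(z, abs(lhs - rhs))  # differences are ~0 up to rounding
```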
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Is an order necessary in a finite or infinite cartesian product? The elements of $\displaystyle \prod_{i\in I}X_i$ are usually represented by tuples. For this cartesian product to be defined, is it necessary for $I$ to be an ordered set?
| I guess the answer is no because by definition, the product is just the set of all functions $f: I \to \bigcup_{i \in I} X_i$ such that $f(i)\in X_i$ for each $i\in I.$
Remark: how do we know that this product is not empty? If $X$ is a set, then the Axiom of Choice says that for each $x\in X$, there is a function $f$ that maps $x$ into $\bigcup X$, such that $f(x)\in x.$ Now, if $X=\{X_i\}_{i\in I}$ this means that for each $X_i\in X,$ there is a function $f$ that sends $X_i$ to an element of $X_i$, and therefore, the composition $f\circ g:I\to \bigcup_{i \in I} X_i$, satisfies the definition of the product, where $g(i)=X_i.$ That is, the product is not empty.
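The "product as a set of choice functions" view is easy to make concrete (a small illustrative sketch; the index labels and sets below are invented): an element of $\prod_{i\in I}X_i$ is just a function on the unordered index set, e.g. a dictionary in Python, and no ordering of $I$ is ever used:

```python
from itertools import product

# An unordered index set I = {"red", "blue"} and a family of sets
X = {"red": {0, 1}, "blue": {"a", "b", "c"}}

# Elements of the product = functions i -> X_i with f(i) in X_i,
# represented as dictionaries
keys = list(X)
elements = [dict(zip(keys, choice))
            for choice in product(*(X[k] for k in keys))]

print(len(elements))  # 2 * 3 = 6 choice functions
print(all(f[i] in X[i] for f in elements for i in X))  # True
```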
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Problem of finding values of $ a $ for which two matrices are similar Problem: Let $A=\left(\begin{array}{ccc}1 & 2 & 3 \\ 1 & 2 & 7-a^{2} \\ 2 & 2+a & 6\end{array}\right) $ , $ B=\left(\begin{array}{lll}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 9\end{array}\right)$ where $ a \in \mathbb{R} $. Find all the values of $ a $ for which $ A $ is similar to $ B $
Attempt:
We know the necessary conditions for two matrices to be similar are:
*
*Their traces are equal
*They have the same determinant
*They have the same characteristic polynomial
*They have the same eigenvalues
*Their ranks are equal
Thus, $ |A| = [ 12 - (7-a^2)(2+a) ] - 2[ 6-2(7-a^2)] + 3[ 2+a - 4 ] = (2-a)^2\cdot (2+a) $,
$ |B| = 0 $, hence $ |A| = 0 $ and therefore either $ a= 2 $ or $ a = -2 $
The characteristic polynomial of $ B $ is $ \Delta_B(x) = x^2(9-x) $
How do I continue from here? I'm stuck. (I thought to jordanize $A$ and $B$, but that seems to really complicate things [jordanizing $A$ seems really hard here]; I also thought I'd find the eigenspaces of the eigenvalues of $B$, but I still wasn't sure how I'd continue from there.)
Thanks in advance for help!
| Hint: $A$ is similar to $B$.
$B$ is a diagonal matrix.
Hence, $A$ is diagonalizable.
And the eigenvalues of $A$ are $0$ and $9$, with the same multiplicities as the eigenvalues $0$ and $9$ of $B$.
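One can follow the hint numerically (a sketch; it only tests trace and rank): $B=\operatorname{diag}(0,0,9)$ has trace $9$ and rank $1$, so $A$ must too. Among the two candidates from the determinant condition, $a=2$ gives rank $1$ while $a=-2$ gives rank $2$:

```python
def matrix(a):
    return [[1, 2, 3],
            [1, 2, 7 - a * a],
            [2, 2 + a, 6]]

def rank(M, tol=1e-9):
    # Gaussian elimination on a copy of the 3x3 matrix
    M = [row[:] for row in M]
    r = 0
    for c in range(3):
        pivot = next((i for i in range(r, 3) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(3):
            if i != r and abs(M[i][c]) > tol:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

trace = lambda M: M[0][0] + M[1][1] + M[2][2]

ranks = {a: rank(matrix(a)) for a in (2, -2)}
print(ranks, trace(matrix(2)))  # {2: 1, -2: 2} and trace 9
```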
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4363955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Finding Person 1's values whilst only knowing Person 7's using iteration The general formula is $a_{n+1}=a_n-\sqrt{3}b_n$ and $b_{n+1}=\sqrt{3}a_n+b_n$.
Person 7's numbers are $a_7=8$ and $b_7=32$.
Firstly, I tried to solve it without iteration and just kept changing my values until I reached Person 1's numbers, but then I realised that I had probably got it wrong, and it was taking way too long. So afterwards, I attempted to create a general equation that had all values for $a$ and $b$ from 1 to 7. I'm not sure if that was a useless step, as I am unable to go anywhere with this information.
$$32-8\sqrt{3}=\sqrt{3}(a_1+a_2+a_3+a_4+a_5+a_6)+3(b_2+b_3+b_4+ b_5+ b_6)+4b_1$$
The above was my final step before I was at a loss and decided to come here for help.
| How about solving for $a_1$ and $b_1$ from the following equation system: \begin{equation} \begin{bmatrix} a_{n+1} \\ b_{n+1} \end{bmatrix} = \begin{bmatrix} 1 & -\sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix}\begin{bmatrix} a_{n} \\ b_{n} \end{bmatrix} \text{ for } n = 1, 2, 3, \cdots? \end{equation}
Then you can find out $\begin{bmatrix} 1 & -\sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix}^{-6}$ and plug $a_{6+1=7} = 8, b_{6+1=7} = 32$ to find \begin{equation} \begin{bmatrix} a_{1} \\ b_{1} \end{bmatrix} = \begin{bmatrix} 1 & -\sqrt{3} \\ \sqrt{3} & 1 \end{bmatrix}^{-6} \begin{bmatrix} a_{7} \\ b_{7} \end{bmatrix} = \begin{bmatrix} 1/64 & 0 \\ 0 & 1/64 \end{bmatrix} \begin{bmatrix} 8 \\ 32 \end{bmatrix} = \cdots \end{equation}
Of course you need to know how to find the (true) inverse of a square matrix to understand how the $1/64$ appears.
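The $1/64$ can also be checked by brute force (a sketch, assuming nothing beyond the recurrence itself): $M=\begin{bmatrix}1&-\sqrt3\\\sqrt3&1\end{bmatrix}$ is $2$ times a rotation by $60°$, so $M^6=2^6 I=64I$, and running the recurrence forward from $(a_1,b_1)=(1/8,1/2)$ recovers Person 7's numbers:

```python
import math

s3 = math.sqrt(3)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = [[1, -s3], [s3, 1]]

# M^6 should be 64 times the identity
P = [[1, 0], [0, 1]]
for _ in range(6):
    P = matmul(P, M)
print(P)  # approximately [[64, 0], [0, 64]]

# Hence (a1, b1) = M^-6 (a7, b7) = (8/64, 32/64) = (1/8, 1/2)
a, b = 8 / 64, 32 / 64
for _ in range(6):
    a, b = a - s3 * b, s3 * a + b
print(a, b)  # approximately (8, 32), Person 7's numbers
```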
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof of energy inequality in PDE. We have: $$y_t+y_{xx} + \cos y =0 , \quad y=y(x,t),\quad (x,t)\in (0,l)\times(0,\infty) \qquad (1)$$
$$y(0,t)=y(l,t)=0, \quad t\in [0,+\infty) \qquad (2)$$
$$y(x,0)=f(x) , \quad x\in [0,l] \qquad (3)$$
Let $y_1$ , $y_2$ solutions of (1),(2),(3) and $b=y_1-y_2$ $$A(t)=\int^{l}_{0}b^2 dx \qquad (4)$$
prove that $$A'(t)\leq 2 \int^{l}_{0}|b|^2 dx $$
My work so far is: subtracting equation (1) for the solutions $y_1$, $y_2$ we get:
$$y_{1t}-y_{2t}-y_{1xx}+y_{2xx}+\cos y_1 - \cos y_2 =0$$
$$\Rightarrow \cos y_1-\cos y_2=b_{xx}-b_{t}$$
$$(4)\Rightarrow A'(t)=\int^{l}_{0}2 b\cdot b_t dx=\int^{l}_{0}2b(b_{xx}-\cos y_1 +\cos y_2)dx \leq 2\int^{l}_{0}|b|(|b_{xx}|+2)dx$$
But I dont know how to proceed. Any ideas ?
(I know how to solve the PDE: $$y_t + y_{xx}=0$$)
| As you have calculated: $$A'(t) = 2\int_0^\ell b b_{xx} \, dx - 2\int_0^\ell b(\cos y_1 - \cos y_2) \, dx .$$ For the first integral, $b(0)=b(\ell)=0$, so integrating by parts gives $$2\int_0^\ell b b_{xx} \, dx=-2\int_0^\ell b_x^2 \, dx \leqslant 0.$$ For the second integral, we use that $\vert \cos y_1-\cos y_2\vert \leqslant \vert y_1-y_2\vert = \vert b\vert$ to obtain $$- 2\int_0^\ell b(\cos y_1 - \cos y_2) \, dx \leqslant 2\int_0^\ell \vert b \vert \vert \cos y_1 - \cos y_2\vert \, dx \leqslant2\int_0^\ell \vert b \vert^2 \, dx. $$ Combining these estimates gives $$A'(t) \leqslant 2\int_0^\ell \vert b \vert^2 \, dx$$ as required.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$-\log\Phi(x) \sim \frac{x^2}{2}$ as $x \to -\infty$ for $\Phi(x) = (2\pi)^{-\frac 12}\int_{-\infty}^x \mathrm{e}^{-t^2/2}\,\mathrm{d}t$ For $$\Phi(x) = (2\pi)^{-\frac 12}\int_{-\infty}^x \mathrm{e}^{-t^2/2}\,\mathrm{d}t,$$ it is claimed in the proof of Lemma 8.12 of this book that we have the asymptotic relation $-\log\Phi(x) \sim \frac{x^2}{2}$ as $x \to -\infty$. However, I do not see any way of deriving it from the so-called Gordon's inequality, which says that $$1 - \Phi(t) \sim \frac{1}{t\sqrt{2\pi}}\mathrm{e}^{-t^2/2}$$ for $t \gg 1$ being sufficiently large. May I know how can we obtain the advertised asymptotic relation as $x \to -\infty$?
| Let us turn things around: for $x\to +\infty$, show $\log ( (2\pi)^{-\frac 12}\int_{x}^\infty \mathrm{e}^{-t^2/2}\,\mathrm{d}t )\sim -x^2/2$, which is the original claim after replacing $x$ by $-x$. Since the relation is asymptotic, we omit the factor $(2\pi)^{-\frac 12}$ and focus only on the integral.
Notice that
$$\int_{x}^\infty \mathrm{e}^{-t^2/2}\,\mathrm{d}t\ge \int_{x}^{x+1/x} \mathrm{e}^{-t^2/2}\,\mathrm{d}t\ge\int_{x}^{x+1/x} \mathrm{e}^{-(x+1/x)^2/2}\,\mathrm{d}t=\frac 1x\mathrm{e}^{-\frac{x^2+2+1/x^2}{2}}.$$
So when taking logarithms, we have $\log(\int_{x}^\infty \mathrm{e}^{-t^2/2}\,\mathrm{d}t)\ge-\frac{x^2}{2}+O(\log x)$.
Furthermore, notice that for $t\ge x$, we have $t^2\ge x^2+2x(t-x)$. So we have, for $x>0$,
$$\int_{x}^\infty \mathrm{e}^{-t^2/2}\,\mathrm{d}t\le \int_{x}^{\infty} \mathrm{e}^{\frac{-x^2-2x(t-x)}{2}}\,\mathrm{d}t=\mathrm{e}^{\frac{-x^2}{2}}\int_{x}^{\infty} \mathrm{e}^{-x(t-x)}\,\mathrm{d}t=\mathrm{e}^{\frac{-x^2}{2}}\int_{0}^{\infty} \mathrm{e}^{-xt}\,\mathrm{d}t=\mathrm{e}^{\frac{-x^2}{2}}\frac{1}{x}.$$
So when taking logarithms, we also have $\log(\int_{x}^\infty \mathrm{e}^{-t^2/2}\,\mathrm{d}t)\le-\frac{x^2}{2}+O(\log x)$.
Therefore $\log \Phi(x)\sim -\frac{x^2}{2}$. What's more, we can prove that $\log \Phi(x)= -\frac{x^2}{2}-\log x+O(1)$.
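A numerical illustration of both statements (a hypothetical check using the standard identity $\Phi(x)=\frac12\operatorname{erfc}(-x/\sqrt2)$; the sample points are arbitrary): the ratio $-\log\Phi(x)/(x^2/2)$ tends to $1$ as $x\to-\infty$, while $\log\Phi(x)+x^2/2+\log|x|$ stays bounded:

```python
import math

def log_phi(x):
    # Phi(x) = 0.5 * erfc(-x / sqrt(2)); fine for moderate |x|
    return math.log(0.5 * math.erfc(-x / math.sqrt(2)))

xs = (-5.0, -10.0, -20.0)
ratios = [-log_phi(x) / (x * x / 2) for x in xs]
remainders = [log_phi(x) + x * x / 2 + math.log(-x) for x in xs]
print(ratios)      # decreasing toward 1
print(remainders)  # bounded, near -log(2*pi)/2
```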
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Exercise 10(a), Section 19 of Munkres’ Topology
Let $A$ be a set; let $\{X_\alpha \}_{\alpha \in J}$ be an indexed family of spaces; and let $\{ f_\alpha \}_{\alpha \in J}$ be an indexed family of functions $f_\alpha \colon A \to X_\alpha$.
(a) Show there is a unique coarsest topology $\mathscr{T}$ on $A$ relative to which each of the functions $f_\alpha$ is continuous.
I can’t solve this problem. Maybe I don’t understand “exactly” what to prove. For instance, closure $\overline{A}$ is the smallest closed subset of $X$ containing $A$ means (1) $\overline{A}$ is closed in $X$ and it contains $A$, and (2) if $A\subseteq V$ where $V$ is closed in $X$ then $\overline{A} \subseteq V$. You can also give hint(s) to solve this problem.
Edit: The following definition I sto... ahem, took from Henno Brandsma's answer:
Let $(X,\mathcal{T})$ be a topological space.
Let $I$ be an index set, and let $Y_i (i \in I)$ be topological spaces
and let $f_i: X \rightarrow Y_i$ be a family of functions.
Then $\mathcal{T}$ is called the initial topology with respect to the maps $f_i$
iff
*
*$\mathcal{T}$ makes all $f_i$ continuous.
*If $\mathcal{T}'$ is any other topology on $X$ that makes all $f_i$ continuous, then $\mathcal{T} \subseteq \mathcal{T}'$.
| You need to show that there is a unique topology $\tau$ on $A$ such that every map $f_\alpha: A\rightarrow X_\alpha$ is continuous and $\tau$ is contained in every other topology making the maps $f_\alpha$ continuous (all at once).
You can construct $\tau$ by noticing that a topology makes each $f_\alpha$ continuous if and only if it contains a certain family $S$ of subsets of $A$, can you see what family?
Once you've proved the previous result, you're also done: This family $S$ will not be a topology in general, so you take the topology $\tau$ generated by $S$. This $\tau$ contains $S$, so it makes every $f_\alpha$ continuous and, by construction, is contained in every other topology making $f_\alpha$ continuous.
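To make the construction concrete, here is a small computational sketch (the set $A$, the target spaces, and the maps are all invented for illustration): the family $S=\{f_\alpha^{-1}(U): U \text{ open in } X_\alpha\}$ is the subbasis, and the topology it generates, finite intersections followed by arbitrary unions, makes every $f_\alpha$ continuous by construction:

```python
from itertools import combinations, chain

A = frozenset({0, 1, 2})

# Two tiny target spaces, each given by its list of open sets
Y1_open = [frozenset(), frozenset({"p"}), frozenset({"p", "q"})]
Y2_open = [frozenset(), frozenset({"u"}), frozenset({"u", "v"})]

f1 = {0: "p", 1: "q", 2: "q"}  # f1 : A -> Y1
f2 = {0: "u", 1: "u", 2: "v"}  # f2 : A -> Y2

def preimage(f, U):
    return frozenset(a for a in A if f[a] in U)

# Subbasis: all preimages of open sets
S = {preimage(f1, U) for U in Y1_open} | {preimage(f2, U) for U in Y2_open}

# Basis: finite intersections of subbasis elements (plus A itself)
basis = {A}
for r in range(1, len(S) + 1):
    for combo in combinations(S, r):
        inter = A
        for s in combo:
            inter = inter & s
        basis.add(inter)

# Topology: arbitrary unions of basis elements
topology = set()
bl = list(basis)
for r in range(len(bl) + 1):
    for combo in combinations(bl, r):
        topology.add(frozenset(chain.from_iterable(combo)))

print(sorted(sorted(t) for t in topology))
```

For this toy family the result is the four open sets $\emptyset,\{0\},\{0,1\},A$, and every preimage of an open set lands in it.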
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
determining the limit of an integral with a parameter For any $\alpha \in \mathbb{R}$ determine the limit $$\lim_{\substack{> \\ \delta \rightarrow 0}}\int^2_0\frac{1-x}{\delta+x^{\alpha}}dx \qquad \text{in } [-\infty, +\infty].$$
I want to use some kind of convergence theorem, but I'm stuck because $\alpha$ can be any element in $\mathbb{R}$.
| Denote for $\alpha \in \mathbb R$ and $\delta \gt 0$
$$f_{\alpha, \delta}(x) = \frac{1-x}{\delta+x^{\alpha}} \text{ and } F_{\alpha}(x) = \frac{1-x}{x^{\alpha}}.$$
For $x \gt 0$, you have
$$\vert f_{\alpha, \delta}(x) \vert \le \vert F_{\alpha}(x) \vert$$ and
$$\lim_{\substack{> \\ \delta \rightarrow 0}} f_{\alpha, \delta}(x) = F_{\alpha}(x).$$
As $\int_0^2 \vert F_{\alpha}(x) \vert \ dx$ is convergent for $\alpha \lt 1$, you can apply Dominated Convergent Theorem - DCT in that case and get
$$\lim_{\substack{> \\ \delta \rightarrow 0}}\int^2_0\frac{1-x}{\delta+x^{\alpha}}dx = \int_0^2 F_{\alpha}(x) \ dx.$$
So let's now suppose that $\alpha \ge 1$.
You'll easily prove that
$$\int_1^2 f_{\alpha, \delta}(x) \ dx$$ is bounded by $3$ for $(\alpha, \delta) \in [1, \infty) \times (0, \infty)$. This yields the implication
$$\lim_{\substack{> \\ \delta \rightarrow 0}}\int_0^1 f_{\alpha, \delta}(x) \ dx = \infty \implies \lim_{\substack{> \\ \delta \rightarrow 0}}\int_0^2 f_{\alpha, \delta}(x) \ dx = \infty.$$
Now for $0 \lt x \le 1$ (and $\alpha \ge 1$ as assumed), we have
$$0 \le \frac{1-x}{\delta + x} \le f_{\alpha, \delta}(x) .$$
We get the desired result as
$$\int_0^1 \frac{1-x}{\delta + x} \ dx = -1+(1+\delta)\ln \left(1 + \frac{1}{\delta}\right)$$ and
$$\lim_{\substack{> \\ \delta \rightarrow 0}}(1+\delta)\ln \left(1 + \frac{1}{\delta}\right) = \infty.$$
Conclusion
$$
\lim_{\substack{> \\ \delta \rightarrow 0}}\int^2_0\frac{1-x}{\delta+x^{\alpha}} \ dx =
\begin{cases}
\infty &\text{for } \alpha \ge 1\\
\int_0^2 \frac{1-x}{x^{\alpha}} \ dx &\text{for } \alpha \lt 1
\end{cases}$$
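The closed form used in the $\alpha\ge1$ case is easy to verify numerically (a quick sketch; the quadrature grid and sample values of $\delta$ are arbitrary):

```python
import math

def lhs(delta, n=200000):
    # Trapezoidal approximation of int_0^1 (1-x)/(delta+x) dx
    h = 1.0 / n
    total = 0.5 * (1 / delta + 0.0)  # endpoint values at x=0 and x=1
    for k in range(1, n):
        x = k * h
        total += (1 - x) / (delta + x)
    return total * h

def rhs(delta):
    return -1 + (1 + delta) * math.log(1 + 1 / delta)

for delta in (0.5, 0.1, 0.01):
    print(delta, lhs(delta), rhs(delta))
# rhs(delta) grows without bound as delta -> 0+, driving the alpha >= 1 case
```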
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equation of cardioid Can someone please explain how
$$c = \frac{1}{2} e^{i \theta}- \frac{1}{4} e^{2 i \theta}, \quad 0 \leq \theta \leq 2 \pi$$
represents the cardioid $r = \frac{1}{2} - \frac{1}{4} \cos(\theta)$, $0\leq \theta \leq 2 \pi$?
I tried to write $\cos(\theta) = \frac{e^{i \theta} + e^{-i \theta}}{2}$, but couldn't prove it is equal to $\frac{1}{2} e^{i \theta}- \frac{1}{4} e^{2 i \theta}$. Thanks in advance.
| Plot the two functions. You will then note that:
*
*The formula for the radius is wrong. In a cardioid the radius is zero at $\theta=0$. Try $$r=\frac12(1-\cos\theta)$$
*The complex representation is a cardioid shifted along the real axis by $1/4$
$$x_c=\frac12\cos\theta-\frac14\cos2\theta=\frac12\cos\theta-\frac12\cos^2\theta+\frac14=\frac12(1-\cos\theta)\cos\theta+\frac14\\y_c=\frac12\sin\theta-\frac14\sin2\theta=\frac12\sin\theta-\frac12\sin\theta\cos\theta=\frac12(1-\cos\theta)\sin\theta
$$
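A quick numerical check of the second point (an illustrative sketch): subtracting $1/4$ from the real part of $c(\theta)$ lands exactly on the polar cardioid $r(\theta)=\frac12(1-\cos\theta)$:

```python
import math

def c(theta):
    # c = (1/2) e^{i theta} - (1/4) e^{2 i theta}
    return 0.5 * complex(math.cos(theta), math.sin(theta)) \
         - 0.25 * complex(math.cos(2 * theta), math.sin(2 * theta))

def polar_point(theta):
    r = 0.5 * (1 - math.cos(theta))
    return complex(r * math.cos(theta), r * math.sin(theta))

max_err = max(abs(c(t) - 0.25 - polar_point(t))
              for t in [k * 2 * math.pi / 100 for k in range(100)])
print(max_err)  # ~0 up to rounding
```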
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4364991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is there an equivalence between $\big\lceil \frac{a}{M} \big\rceil$ and $floor$ that works regardless of whether $a$ and/or $M$ are integers or not? Is there a way to rewrite a $ceiling$ as a $floor$?
I want to write $\big\lceil \frac{a}{M} \big\rceil$ in terms of $floor$ and possibly $\bmod M$ if necessary.
Given $q := \big\lfloor \frac{a}{M} \big\rfloor$ and $r = a \bmod M$, I thought the following were equivalent
$$
\begin{aligned}
\Big\lceil \frac{a}{M} \Big\rceil &=
\begin{cases}
q &\;\text{if }\ r = 0\\
q+1 &\;\text{if }\ r \neq 0
\end{cases} &&(1)\\
\\
&= \Big\lfloor \frac{a-1+M}{M} \Big\rfloor &&(2)\\
\\
&= \Big\lfloor \frac{a-1}{M} \Big\rfloor + 1 &&(3)
\end{aligned}
$$
But after graphing them on Desmos, I've come to find out the latter two, $(2)$ and $(3)$, seem to be equivalent to $\big\lceil \frac{a}{M} \big\rceil$ only if $a$ and $M$ are integers, or something like that. Maybe $a$ and $M$ have to be positive as well for $(2)$ and $(3)$ to be equivalent to $(1)$.
Is there an equivalence between $\big\lceil \frac{a}{M} \big\rceil$ and $floor$ (other than the equivalence shown in $(1)$) that works regardless of whether $a$ and/or $M$ are integers or not?
By the way, using $Iverson\ bracket\ notation$, $(1)$ can be written as
$$
\begin{aligned}
\Big\lceil \frac{a}{M} \Big\rceil &= \Big\lfloor \frac{a}{M} \Big\rfloor + [[a \bmod M \neq 0]]
\end{aligned}
$$
where $[[condition]]$ denotes $Iverson\ bracket\ notation$ defined as
$$
[[condition]]=
\begin{cases}
1 &\text{if } condition\\
0 &\text{otherwise}
\end{cases}\\
$$
| As stated in the Relations among the functions section of Wikipedia's article, for any $x \in \mathbb{R}$, we have $-\lceil x \rceil = \lfloor -x \rfloor$, which means
$$\lceil x \rceil = -\lfloor -x \rfloor \tag{1}\label{eq1A}$$
To prove this, let $x = n + r$, where $n \in \mathbb{Z}$, $r \in \mathbb{R}$ and $0 \le r \lt 1$. If $r = 0$, then $\lceil x \rceil = n$ and $-\lfloor -x \rfloor = -(-n) = n$. If $r \gt 0$, then $\lceil x \rceil = n + 1$ while $-\lfloor -x \rfloor = -\lfloor -n - r \rfloor = -(-n - 1) = n + 1$.
Thus, \eqref{eq1A} holds for both cases. For your particular expression, using $x = \frac{a}{M}$ in \eqref{eq1A} gives
$$\left\lceil \frac{a}{M} \right\rceil = -\left\lfloor -\frac{a}{M} \right\rfloor \tag{2}\label{eq2A}$$
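The identity is easy to spot-check over both integer and non-integer arguments of either sign (a small sketch; the sample values are arbitrary):

```python
import math

def ceil_via_floor(x):
    # ceil(x) = -floor(-x) for every real x
    return -math.floor(-x)

samples = [3.0, 2.5, -2.5, -7, 0.0, 1e-9, -1e-9, 10 / 3, -10 / 3]
ok = all(math.ceil(x) == ceil_via_floor(x) for x in samples)
print(ok)  # True regardless of sign or integrality
```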
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4365173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can two distinct elementary functions be equal over an interval of nonzero width? The wikipedia entry on elementary functions describes them as "of a single variable (typically real or complex) that [are] defined as taking sums, products, and compositions of finitely many polynomial, rational, trigonometric, hyperbolic, and exponential functions, including possibly their inverse functions". Piecewise functions do not fit this description, and I believe these functions are continuous on the regions where they are defined, as are their derivatives on the regions where those derivatives are defined.
Taking two functions which are composed of elementary functions and are unequal for the majority of the interval along which they can be defined for the independent variable, can these functions be equal (meaning they contain all the same points) over an interval with nonzero width, such as being defined and equal on $(2,3)$ or $(0,\infty)$? If this is impossible, why is it impossible?
Thanks!
| Extending the ideas already present in the comments, pick any $a<b$.
Let $f(x)=b-a$ and $g(x)=\sqrt{(x-a)^2}+\sqrt{(x-b)^2}$. Both $f(x)$ and $g(x)$ are elementary functions. $g(x)$ is just $|x-a|+|x-b|$ in disguise so that it would be clear that it is an elementary function according to all definitions.
$f(x)$ is identically equal to $g(x)$ for any $x\in[a,b]$.
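A numerical illustration of the construction (a sketch; the endpoints $a,b$ below are arbitrary): on $[a,b]$ the elementary expression $\sqrt{(x-a)^2}+\sqrt{(x-b)^2}$ agrees with the constant $b-a$, while outside the interval the two functions differ:

```python
import math

a, b = 2.0, 3.0

def g(x):
    return math.sqrt((x - a) ** 2) + math.sqrt((x - b) ** 2)

inside = [a + k * (b - a) / 50 for k in range(51)]
agree = all(abs(g(x) - (b - a)) < 1e-12 for x in inside)
print(agree)          # True on [a, b]
print(g(5.0), b - a)  # 5.0 vs 1.0: they differ outside the interval
```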
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4365346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to find the formula for the integral $\int_{0}^{\infty} \frac{d x}{\left(x^{2}+1\right)^{n}}$, where $n\in N$? By the generalization in my post, we are going to evaluate the integral $$\int_{0}^{\infty} \frac{d x}{\left(x^{2}+1\right)^{n}},$$
where $n\in N.$
First of all, let us define the integral $$I_n(a)=\int_{0}^{\infty} \frac{d x}{\left(x^{2}+a\right)^{n}} \textrm{ for any positive real number }a.$$
Again, we start with $$I_1(a)=\int_{0}^{\infty} \frac{d x}{x^{2}+a}= \left[\frac{1}{\sqrt{a}} \tan ^{-1}\left(\frac{x}{\sqrt{a}}\right)\right]_{0}^{\infty} = \frac{\pi}{2 }a^{-\frac{1}{2} } $$
Then differentiating $I_1(a)$ with respect to $a$ a total of $n-1$ times yields
$$
\int_{0}^{\infty} \frac{(-1)^{n-1}(n-1) !}{\left(x^{2}+a\right)^{n}} d x=\frac{\pi}{2} \left(-\frac{1}{2}\right)\left(-\frac{3}{2}\right) \cdots\left(-\frac{2 n-3}{2}\right) a^{-\frac{2 n-1}{2}}
$$
Rearranging and simplifying gives $$
\boxed{\int_{0}^{\infty} \frac{d x}{\left(x^{2}+a\right)^{n}} =\frac{\pi a^{-\frac{2 n-1}{2}}}{2^{n}(n-1) !} \prod_{k=1}^{n-1}(2 k-1)}
$$
Putting $a=1$ gives the formula of our integral
$$
\boxed{\int_{0}^{\infty} \frac{d x}{\left(x^{2}+1\right)^{n}} =\frac{\pi}{2^{n}(n-1) !} \prod_{k=1}^{n-1}(2 k-1)= \frac{\pi}{2^{2 n-1}} \left(\begin{array}{c}
2 n-2 \\
n-1
\end{array}\right)}$$
For verification, let’s try $$
\begin{aligned}
\int_{0}^{\infty} \frac{d x}{\left(x^{2}+1\right)^{10}} &= \frac{\pi}{2^{19}}\left(\begin{array}{c}
18 \\
9
\end{array}\right) =\frac{12155 \pi}{131072} ,
\end{aligned}
$$
which is confirmed by WolframAlpha.
Are there any other methods to find the formula? Alternate methods are warmly welcome.
Join me if you are interested in creating more formulas for integrals of the form $$
\int_{c}^{d} \frac{f(x)}{\left(x^{m}+1\right)^{n}} d x.
$$
where $m$ and $n$ are natural numbers.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[5px,#ffd]{\left.\int_{0}^{\infty}{\dd x \over \pars{x^{2} + 1}^{n}}
\right\vert_{\Re(n)\ >\ 1/2}}
\,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\,
{1 \over 2}\int_{0}^{\infty}{x^{\color{#f00}{1/2} - 1} \over \pars{1 + x}^{n}}\,\dd x
\end{align}
Since $\ds{\pars{1 + x}^{-n} =
\sum_{k = 0}^{\infty}{-n \choose k}x^{k} =
\sum_{k = 0}^{\infty}\bracks{{n + k - 1\choose k}\pars{-1}^{k}}x^{k} =
\sum_{k = 0}^{\infty}\color{#f00}{\Gamma\pars{n + k} \over \Gamma\pars{n}}{\pars{-x}^{k} \over k!};}$
\begin{align}
& \overbrace{\bbox[5px,#ffd]{\left.\int_{0}^{\infty}{\dd x \over \pars{x^{2} + 1}^{n}}
\right\vert_{\Re(n)\ >\ 1/2}} =
{1 \over 2}\,\Gamma\pars{1 \over 2}{\Gamma\pars{n - 1/2}\over \Gamma\pars{n}}}
^{\ds{Ramanujan's\ Master\ Theorem}}
\\[5mm] = & {\pi \over 2}{\pars{n - 3/2}! \over \pars{n - 1}!\pars{-1/2}!} =
\bbox[5px,#ffd]{{\pi \over 2}{n - 3/2 \choose n - 1}}
\end{align}
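The boxed closed form can be cross-checked numerically (a sketch; the substitution $x=\tan\theta$ turns the integral into $\int_0^{\pi/2}\cos^{2n-2}\theta\,d\theta$, which is then approximated by Simpson's rule):

```python
import math

def integral(n, steps=2000):
    # Simpson's rule for int_0^{pi/2} cos(theta)^(2n-2) d(theta)
    a, b = 0.0, math.pi / 2
    h = (b - a) / steps
    f = lambda t: math.cos(t) ** (2 * n - 2)
    s = f(a) + f(b)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def closed_form(n):
    return math.pi / 2 ** (2 * n - 1) * math.comb(2 * n - 2, n - 1)

for n in (1, 2, 5, 10):
    print(n, integral(n), closed_form(n))
# n = 10 reproduces 12155*pi/131072 from the question
```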
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4365867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 5
} |
How are the Weierstrass transform and analytic functions related? The Wikipedia entry on the Weierstrass transform says
"The generalized Weierstrass transform provides a means to approximate a given integrable function f arbitrarily well with analytic functions."
But it doesn't cite any references nor go into depth on the matter. How are the Weierstrass transform and analytic functions related? How can I use it to approximate functions? Is there a multidimensional version of the Weierstrass transform? References would be valuable.
| If $f$ is $L^1(\Bbb{R}^n)$ then $f_k = f \ast k^n e^{-\pi k^2 |x|^2}$ is entire and it converges to $f$ in $L^1(\Bbb{R}^n)$.
It follows from the density of $C^0_c$ in $L^1$ which itself follows from the definition of the Lebesgue measurable sets, as if $f\in C^0_c$ then the proof is easy.
It stays true when replacing the convergence in $L^1$ by $L^p,p <\infty$, $C^k \cap L^\infty$, the Sobolev norms, the Schwartz topology, the tempered distributions topology...
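A small numerical illustration of the approximation claim (everything here, the target $f$, the grid, the values of $k$, is invented for the sketch): the kernel $k\,e^{-\pi k^2 x^2}$ has unit mass, and convolving the step function $f=\mathbf 1_{[0,1]}$ with it gives the analytic functions $f_k(x)=\frac12[\operatorname{erf}(\sqrt\pi\, k x)-\operatorname{erf}(\sqrt\pi\, k(x-1))]$, whose $L^1$ distance to $f$ shrinks as $k$ grows:

```python
import math

def f(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def f_k(x, k):
    # Closed form of the convolution of f with k*exp(-pi k^2 x^2)
    s = math.sqrt(math.pi) * k
    return 0.5 * (math.erf(s * x) - math.erf(s * (x - 1)))

def l1_distance(k, lo=-3.0, hi=4.0, n=20000):
    # Midpoint-rule approximation of the L^1 norm of f_k - f
    h = (hi - lo) / n
    return sum(abs(f_k(lo + (i + 0.5) * h, k) - f(lo + (i + 0.5) * h))
               for i in range(n)) * h

errs = {k: l1_distance(k) for k in (1, 4, 16)}
print(errs)  # the L^1 error decreases roughly like 1/k
```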
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integral of $2\int_0^{\infty} e^{-2y}e^{-y}{{(y)^x}\over{x!}} dy$ Is it possible to take this integral?
$2\int_0^{\infty} e^{-2y}e^{-y}{{(y)^x}\over{x!}} dy$
I know I can use the fact that: $\int_0^{\infty} y^k e^{−y} dy = k!$
But I'm basically stuck on how to do this by parts.
I'm ending up with (by parts):
$p_1(x) = 2(x! {{e^{-2y}\over{x!}}} - \int ({{e^{-2y}\over{x!}}})' x!dx)$
I'm not sure how to handle that last integral. It looks like it needs to be broken down by parts as well?
This isn't quite right:
$p_1(x) = 4(e^{-2y} - \int e^{-2y} )$
I should end up with:
${1\over3}({2\over3})^x$
Thanks..
| We have
\begin{align*}
2\int_0^\infty {e^{ - 2y} e^{ - y} \frac{{y^x }}{{x!}}dy} & = \frac{2}{{x!}}\int_0^\infty {e^{ - 3y} y^x dy} = \frac{2}{{3^{x + 1} x!}}\int_0^\infty {e^{ - 3y} (3y)^x d(3y)} \\ & = \frac{2}{{3^{x + 1} x!}}\int_0^\infty {e^{ - t} t^x dt} = \frac{2}{{3^{x + 1} }}
\end{align*}
provided $\Re x>-1$.
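A quick numerical confirmation (an illustrative sketch; the choice $x=3$ and the quadrature grid are arbitrary):

```python
import math

def integrand(y, x):
    return 2 * math.exp(-2 * y) * math.exp(-y) * y ** x / math.factorial(x)

def numeric(x, hi=40.0, n=200000):
    # Midpoint rule; the integrand is negligible beyond y = 40
    h = hi / n
    return sum(integrand((i + 0.5) * h, x) for i in range(n)) * h

x = 3
print(numeric(x), 2 / 3 ** (x + 1))  # both ~ 2/81 = 0.0246...
```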
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving the system $\tan x + \tan y = 1$ and $\cos x \cdot \sin y = \frac{\sqrt{2}}{2}$ How can I solve this system of trigonometric equations:
$$\tan x + \tan y = 1$$
$$\cos x \cdot \sin y = \frac{\sqrt{2}}{2}$$
I tried to write tangent as $\sin/\cos$ and then multiply the first equation by the second one, but that did not lead me anywhere.
Do you have any idea how to solve this one?
| Let $ t = \tan \left(y\right)$ and $ u = \tan \left(x\right)$. One has $\sin \left(y\right) = \pm \frac{t}{\sqrt{1+{t}^{2}}}$ and $\cos \left(x\right) = \pm \frac{1}{\sqrt{1+{u}^{2}}}$. The second equation then implies
\begin{equation}2 {t}^{2} = \left(1+{u}^{2}\right) \left(1+{t}^{2}\right)\end{equation}
but according to the first equation, we have $ u = 1-t$, hence the quadric equation
\begin{equation}\renewcommand{\arraystretch}{1.5} \begin{array}{cc}&2 {t}^{2} = \left(1+{\left(1-t\right)}^{2}\right) \left(1+{t}^{2}\right)\\
\Longleftrightarrow &\left(t-1\right) \left({t}^{3}-{t}^{2}-2\right) = 0
\end{array}\end{equation}
This equation has two real solutions $ {t}_{0} = 1$ and $ {t}_{1} = \frac{1}{3} \left(d+1+\frac{1}{d}\right)$ with $ d = \sqrt[3]{3 \sqrt{87}+28}$.
The first solution gives
\begin{equation}x = k {\pi} + 2 n \pi, \quad y = \frac{{\pi}}{4}+ k {\pi}\end{equation}
The second solution gives
\begin{equation}y = \arctan \left({t}_{1}\right)+k {\pi} , \quad x = \arctan \left(1-{t}_{1}\right)+k {\pi}+2 n {\pi}\end{equation}
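Both families can be spot-checked numerically (a sketch; it verifies the Cardano formula for $t_1$ and plugs a representative solution from each family back into the original system):

```python
import math

# Cardano root of t^3 - t^2 - 2 = 0
d = (3 * math.sqrt(87) + 28) ** (1 / 3)
t1 = (d + 1 + 1 / d) / 3
print(t1, t1 ** 3 - t1 ** 2 - 2)  # residual ~ 0

def check(x, y):
    return (math.tan(x) + math.tan(y), math.cos(x) * math.sin(y))

# First family (t = 1): x = 0, y = pi/4
s1 = check(0.0, math.pi / 4)

# Second family: y = arctan(t1), x = arctan(1 - t1)
s2 = check(math.atan(1 - t1), math.atan(t1))
print(s1, s2)  # both ~ (1, sqrt(2)/2)
```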
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
How to compare asymptotic efficiency of MOM estimator with MLE? Compare the asymptotic efficiency of the MOM estimator of the parameter $\alpha$ of the Pareto distribution with the MLE (assume $X_m$ known)
$$f(x;\alpha;X_m) = \alpha X_m^{\alpha} x^{-(\alpha + 1)}.$$
I computed the MLE of the Pareto distribution by setting the first derivative of the log-likelihood to $0$, getting $$\frac{n}{\sum_{i=1}^n \log(X_i)-n\log(X_m)}.$$
And for MOM I get $$\frac{\bar X}{\bar X- X_m}.$$
My question is: in order to compare the asymptotic efficiency, should I compute the variance of the MOM estimator and the variance of the MLE?
If so, can someone point me in the right direction?
| Some simplification is possible. Note that if $X_m$ is known, then consider the transformation $Y = X/X_m$, hence $Y$ is Pareto with shape $\alpha$ and location $1$ with density $f_Y(y) = \alpha y^{-(\alpha+1)} \mathbb 1 (y \ge 1)$. Then the transformed sample is simply $$\boldsymbol Y = (y_1, y_2, \ldots, y_n) = \boldsymbol X/X_m = (x_1/X_m, x_2/X_m, \ldots, x_n/X_m).$$ So for the purposes of estimation and efficiency, we can work with $\boldsymbol Y$, or equivalently, assume $X_m = 1$, because this transformation is one-to-one.
That said, the efficiency of an estimator $w(\theta)$ of $\theta$ is defined as $$\mathcal E(w(\theta)) = \frac{1/\mathcal I(\theta)}{\operatorname{Var}[w(\theta)]},$$ where $\mathcal I(\theta)$ is the Fisher information. So you need to compute the variance of each estimator.
I leave it as an exercise to show that for your distribution, the Fisher information is:
$$\mathcal I(\alpha) = \frac{n}{\alpha^2}. \tag{1}$$
The exact variance of the MLE is:
$$\operatorname{Var}[\hat \alpha] = \frac{(n \alpha)^2}{(n-1)^2 (n-2)}, \quad n > 2, \alpha > 1. \tag{2}$$
The asymptotic variance of the method of moments estimator via the CLT and the delta method is:
$$\operatorname{Var}[\tilde \alpha] = \frac{\alpha (\alpha-1)^2 ((n-1)\alpha - 2n)}{n^2 (\alpha-2)^2}. \tag{3}$$
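A simulation sketch supporting formula $(2)$ (the sample size, shape parameter, seed, and tolerance are all arbitrary choices): with $X_m=1$, the MLE is $\hat\alpha = n/\sum_i \log x_i$, and its empirical variance over many replications should sit near $(n\alpha)^2/((n-1)^2(n-2))$:

```python
import math
import random

random.seed(42)

alpha, n, reps = 3.0, 50, 10000

def mle_once():
    # random.paretovariate(alpha) samples Pareto with x_m = 1
    s = sum(math.log(random.paretovariate(alpha)) for _ in range(n))
    return n / s

estimates = [mle_once() for _ in range(reps)]
mean = sum(estimates) / reps
emp_var = sum((e - mean) ** 2 for e in estimates) / (reps - 1)

theory = (n * alpha) ** 2 / ((n - 1) ** 2 * (n - 2))
print(emp_var, theory)  # should agree to within a few percent
```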
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why are fully faithful functors conservative? (Or, why are isomorphisms reflected?) Say $F$ is a functor from category $C$ to $D$. By "fully faithful", I mean $f \mapsto Ff$ is injective ("faithful") and surjective ("full") in $C(X, Y) \to D(FX, FY)$.
My question is: when $F$ is fully faithful, why does $FX \cong FY$ imply that $X \cong Y$ for objects $X, Y$ in $C$?
The best I am able to come up with is:
Since $FX \cong FY$, there exists an isomorphism $h \in D(FX, FY)$. And, because $F$ is full, there exists a morphism $f \in C(X, Y)$ such that $F f = h$, and also a morphism $g \in C(Y, X)$ such that $F g$ is the inverse of $h$.
but... I don't know how to prove that $f \in C(X, Y)$ is an isomorphism. I think I need to show that $f g = \text{id}_Y$ and $g f = \text{id}_X$, but I don't know how.
Notation background: I'm using the text
Bradley, T. D., Bryson, T., & Terilla, J. (2020). Topology: A Categorical Approach. MIT Press.
| You're nearly there! You've used the fact that $F$ is full, now you need to use the fact that it's faithful...
Suppose $f\in C(X,Y)$ and $g\in C(Y,X)$, such that $Ff\in D(FX,FY)$ and $Fg\in D(FY,FX)$ are inverses. Then $F(f\circ g) = Ff\circ Fg = \text{id}_{FY} = F(\text{id}_Y)$, and since $F$ is faithful, $f\circ g = \text{id}_Y$. The same argument shows $g\circ f = \text{id}_X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Find missing value - geometry We are given square $ABCD$ and equilateral triangle $CEF$.
We are looking for angle $a$.
I have tried everything but no clue.
All I managed to find is that $EC=EA$ because $AC$ and $DB$ are the square's diagonals, which are perpendicular and bisect each other, so $EB$ is perpendicular bisector of $AC$, so all points on it are equidistant from $A$ and $C$. So triangle $EAC$ is isosceles.
Geogebra says that the angle in question is constant and $a = 15^\circ$.
So we need to prove that angle $FAC = 30^\circ$.
Any ideas?
Thank you very much.
| Hint: Rotate both the square and the triangle clockwise by 60° around $C$. What is the image of $E$? What line does it belong to?
Edit. I am going to provide the image. It should be clear what to do now.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4366953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Rank Nullity "Converse"? The rank nullity theorem requires a linear map $T:V \longrightarrow W$ between a finite dimensional domain VS and some VS W.
The conclusion of the theorem is that $\text{Dim}(V) = \text{Dim}(\text{Ker}(T))+\text{Dim}(\text{Ran}(T))$.
My question is, if I have a linear map $T: V \longrightarrow W$ between any two vector spaces and $\text{Dim}(\text{Ker}(T))+\text{Dim}(\text{Ran}(T)) < \infty$, then is $\text{Dim}(V) < \infty$ with $\text{Dim}(V) = \text{Dim}(\text{Ker}(T))+\text{Dim}(\text{Ran}(T))$?
Apologies if this is blatantly true or false.
Thank you.
| Suppose $\dim{V} = \infty$. We can write $V = \ker{T} \oplus U$ for some subspace $U$; since $\dim \ker T < \infty$ by hypothesis, $U$ must be infinite dimensional. Let $\{u_i\}_{i \in I}$ be a Hamel basis for $U$. Since $U \cap \ker T = \{0\}$, the family $\{T(u_i)\}$ is linearly independent, and it spans $T(U) = \operatorname{Ran}{T}$, so it is a Hamel basis for $\operatorname{Ran}{T}$. This means $\operatorname{Ran}{T}$ would need to be infinite dimensional, which it isn't. Hence, we have a contradiction, and $\dim V < \infty$; the usual rank–nullity theorem then gives $\dim V = \dim \ker T + \dim \operatorname{Ran} T$.
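In the finite-dimensional case the identity itself is easy to check numerically; the matrix below is an arbitrary example (kernel dimension is computed independently from an explicit SVD-derived kernel basis, not just as $n-\operatorname{rank}$):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])                 # T: R^3 -> R^2, a rank-1 map
U, s, Vt = np.linalg.svd(A)
rank = int((s > 1e-10).sum())                # dim Ran(T)
ker_basis = Vt[rank:]                        # rows of Vt spanning Ker(T)
assert np.allclose(A @ ker_basis.T, 0.0)     # they really lie in the kernel
nullity = ker_basis.shape[0]                 # dim Ker(T)
print(rank, nullity, rank + nullity)         # rank-nullity: sum equals dim V = 3
```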
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4367098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is this secant substitution allowed? On Paul's Math Notes covering Trig Substitutions for Integrals we start with an integral:
$$\int{{\frac{{\sqrt {25{x^2} - 4} }}{x}\,dx}}$$
Right away he says to substitute $x=\frac{2}{5}\sec(θ)$. Why is that allowed?
Looking further down onto how he approaches the problem, it seems like it's allowed because it's compensated for with a dx:
$$dx = \frac{2}{5}\sec \theta \tan \theta \,d\theta$$
Is that what's going on here? Is it fair to say you can substitute $x$ with whatever you want so long as you update $dx$? It seems like it wouldn't work for constant functions of $x$, like $x = 5$, since that would give $dx=0$ and clearly be wrong. So what rules are in play here for substitution?
| You need to remember the Pythagorean trigonometric identity that says $$ \sec^2 \theta-1 = \tan^2\theta. $$
Where you see $\Big( \big(\text{variable}\big)^2 - \text{positive constant} \Big),$ you can often use this substitution in just the way in which it is used here.
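A quick numerical check of where the substitution leads (the interval $[1,2]$ is my own choice, on the branch $x>\tfrac25$): carrying out $x=\tfrac25\sec\theta$ yields the antiderivative $\sqrt{25x^2-4}-2\operatorname{arcsec}\!\big(\tfrac{5x}{2}\big)$, which can be compared against direct numerical integration.

```python
import numpy as np

a, b = 1.0, 2.0                     # any interval with x > 2/5
x = np.linspace(a, b, 200001)
y = np.sqrt(25 * x**2 - 4) / x
numeric = float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))  # trapezoid rule

def F(t):
    # antiderivative from x = (2/5) sec(theta); arcsec(u) = arccos(1/u)
    return np.sqrt(25 * t**2 - 4) - 2 * np.arccos(2 / (5 * t))

print(abs(numeric - (F(b) - F(a))))  # ~ 0
```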
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4367233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
The weak topology is compatible with the vector space structure I'm trying to prove this result
Let $E$ be a topological vector space and $E^\star$ its topological dual. Let $\sigma(E, E^\star)$ be the weak topology on $E$. We denote by $E_w$ the vector space $E$ together with $\sigma(E, E^\star)$. Then $E_w$ is a topological vector space.
I posted my proof as below answer. Could you have a check on my attempt?
| First, we consider the linear map $T: E_w \times E_w \to E_w, (x, y) \mapsto x+y$. We want to prove that $T$ is continuous. It suffices to show that $$f \circ T: E_w \times E_w \to \mathbb R, (x, y) \mapsto f(x)+f(y)$$ is continuous for all $f \in E^\star$, since the weak topology is the initial topology induced by $E^\star$. The claim then follows from the diagram below
$$
\substack{E_w \times E_w \\ (x,y)} \, \substack{ \longrightarrow \\ \longmapsto } \, \substack{\mathbb R \times \mathbb R \\ (f(x),f(y))} \, \substack{ \longrightarrow \\ \longmapsto } \, \substack{ \mathbb R \\ f(x)+f(y)}
$$
and the fact that $f$ is also continuous in the weak topology. Second, we consider the linear map $L: \mathbb R \times E_w \to E_w, (t, x) \mapsto tx$. We want to prove that $L$ is continuous. It suffices to show that $$f \circ L: \mathbb R \times E_w \to \mathbb R, (t, x) \mapsto tf(x)$$ is continuous for all $f \in E^\star$. The claim then follows from the diagram below
$$
\substack{\mathbb R \times E_w \\ (t, x)} \, \substack{ \longrightarrow \\ \longmapsto } \, \substack{\mathbb R \times \mathbb R \\ (t,f(x))} \, \substack{ \longrightarrow \\ \longmapsto } \, \substack{ \mathbb R \\ tf(x)}
$$
and the fact that $f$ is also continuous in the weak topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4367441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Equivalent definition of Markov property - question about conditional expectation Let $(\Omega, \mathscr{F}, \mathscr{F}_k, P)$ be a filtered probability space and $(X_k, \mathscr{F}_k)$ be a Markov chain. I would like to show that the Markov property $$E[f(X_{k+1})|\mathscr{F}_k] = E[f(X_{k+1})|X_k]$$ for all bounded measurable $f$ is equivalent to the following : for every $k$, bounded $\sigma(X_j, j\ge k)$-measurable random variable $Y$ and bounded $\mathscr{F}_k$ -measurable random variable $Z$,
$$E[YZ|X_k] = E[Y|X_k]E[Z|X_k].$$
The proof showing that the latter implies the former goes as follows: If $Z$ is bounded and $\mathscr{F}_k$-measurable we obtain
$E[f(X_{k+1})Z]=E[E[f(X_{k+1})Z|X_k]]=E[E[f(X_{k+1})|X_k]E[Z|X_k]]=E[E[f(X_{k+1})|X_k]Z]$.
I don't know how we get the last equality here. What property of conditional expectation gives this identity?
| We show that if $W$ is $X_k$-measurable, then for every $Z$:
$E[WZ]=E[WE[Z|X_k]]$ [1]
Indeed this is an application of the tower property, together with pulling the $X_k$-measurable factor $W$ out of the inner conditional expectation:
$E[WZ]=E[E[WZ\mid X_k]]=E[WE[Z|X_k]]$
Now choose in [1] $W=E[f(X_{k+1})|X_k]$ and you should have your last equality.
Admittedly, the amount of E[...] in these expressions does not really help to follow the algebra...
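The identity [1] can also be seen concretely in a discrete toy example (my own construction, with $X$ standing in for $X_k$): when $E[Z\mid X]$ is realized as the within-group sample mean, $E[WZ]=E[W\,E[Z\mid X]]$ holds exactly, up to floating-point rounding.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200000
X = rng.integers(0, 3, N)              # plays the role of X_k
Z = X + rng.normal(0.0, 1.0, N)
W = X**2                               # any X-measurable random variable

# empirical version of E[Z | X]: the mean of Z over each level of X
condZ = np.array([Z[X == k].mean() for k in range(3)])[X]

lhs = np.mean(W * Z)                   # E[W Z]
rhs = np.mean(W * condZ)               # E[W E[Z | X]]
print(abs(lhs - rhs))                  # agrees up to rounding
```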
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4367815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the solution to $a_n =a_{n-1}^2+a_{n-2}^2, a_0, a_1>1 $? I saw this on Quora and have made very little progress. What is the solution (exact or asymptotic) to $$a_n =a_{n-1}^2+a_{n-2}^2, \qquad a_0, a_1>1?$$ I would be satisfied with a good asymptotic analysis. Heck, I would be happy with an asymptotic form of the log of the solution.
A lower bound is easy. Assuming $a_1 > a_0 > 1$, $$a_n>a_{n-1}^2 \gt a_{n-2}^4 \gt \cdots \gt a_{n-k}^{2^k} \gt \cdots \gt a_1^{2^{n-1}} \gt a_0^{2^{n-1}}.$$
A reasonable upper bound seems harder. Since $a_{n-1} > a_{n-2}^2$, we get $$a_n =a_{n-1}^2+a_{n-2}^2 \lt a_{n-1}^2+a_{n-1},$$ so $$a_n+\frac14 \lt a_{n-1}^2+a_{n-1}+\frac14 =\left(a_{n-1}+\frac12\right)^2,$$ or $$a_n+\frac12 \lt \left(a_{n-1}+\frac12\right)^2+\frac14.$$
If $b_n = a_n+\frac12$, then
$\begin{array}\\
b_n
&\lt b_{n-1}^2+\frac14\\
&\lt (b_{n-2}^2+\frac14)^2+\frac14\\
&= b_{n-2}^4+\frac12 b_{n-2}^2+\frac1{16}+\frac14\\
\end{array}
$
Not sure where to go from this. Another possibility is to notice that, once $a_n$ gets large, for any $c > 0$ there is an $n(c)$ such that for $n \ge n(c)$, $a_n > 1/c$, so
$\begin{array}\\
a_n
&=a_{n-1}^2+a_{n-2}^2\\
&\lt a_{n-1}^2+a_{n-1}\\
&= a_{n-1}^2(1+1/a_{n-1})\\
&\lt (1+c)a_{n-1}^2\\
&\lt (1+c)((1+c)a_{n-2}^2)^2\\
&=(1+c)^3a_{n-2}^4\\
&<(1+c)^3((1+c)a_{n-3}^2)^4\\
&=(1+c)^7a_{n-3}^8\\
& ...\\
&<(1+c)^{2^k-1}a_{n-k}^{2^k}\\
& ... \text{up to } n-k = n(c), k=n-n(c)\\
&<(1+c)^{2^{n-n(c)}-1}a_{n(c)}^{2^{n-n(c)}}\\
\end{array}
$
Don't see how to do better.
| For any $n \ge 1$, we have:
\begin{align*}
a_{n+1} &= a_n^2+a_{n-1}^2
\\
a_{n+1} &= a_n^2\left(1+\dfrac{a_{n-1}^2}{a_n^2}\right)
\\
\log a_{n+1} &= 2\log a_n + \log\left(1+\dfrac{a_{n-1}^2}{a_n^2}\right)
\\
\dfrac{1}{2^{n+1}}\log a_{n+1} &= \dfrac{1}{2^n}\log a_n + \dfrac{1}{2^n}\log\left(1+\dfrac{a_{n-1}^2}{a_n^2}\right)
\end{align*}
This is enough to show that $\dfrac{1}{2^n}\log a_n$ is non-decreasing, and thus, either converges to some limit or is unbounded.
Summing the previous equation from $n = 2$ to $n = N-1$ yields, $$\dfrac{1}{2^N}\log a_N = \dfrac{1}{4}\log a_2 + \sum_{n = 2}^{N-1}\dfrac{1}{2^n}\log\left(1+\dfrac{a_{n-1}^2}{a_n^2}\right).$$
It is easy to check that for all $n \ge 2$, we have $a_{n-1} > 1$, and thus, $a_n = a_{n-1}^2+a_{n-2}^2 > a_{n-1}^2 > a_{n-1}$. Hence, we can bound $$0 \le \sum_{n = 2}^{N-1}\dfrac{1}{2^n}\log\left(1+\dfrac{a_{n-1}^2}{a_n^2}\right) \le \sum_{n = 2}^{N-1}\dfrac{1}{2^n}\log 2 = \dfrac{1}{2}\log 2,$$ and thus, $$\dfrac{1}{4}\log a_2 \le \dfrac{1}{2^N}\log a_N \le \dfrac{1}{4}\log a_2 + \dfrac{1}{2}\log 2.$$
Since $\dfrac{1}{2^n}\log a_n$ is bounded (and previously shown to be non-decreasing), $\dfrac{1}{2^n}\log a_n$ converges to some number between $\dfrac{1}{4}\log a_2$ and $\dfrac{1}{4}\log a_2 + \dfrac{1}{2}\log 2$.
We can probably get tighter bounds if we are more careful about bounding the sum.
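One can watch this happen numerically (the initial values $a_0=1.5$, $a_1=2$ are my own arbitrary choice) by tracking $\log a_n$ directly via $\log a_n = 2\log a_{n-1}+\log\!\big(1+(a_{n-2}/a_{n-1})^2\big)$, which avoids overflow:

```python
import math

la_prev, la = math.log(1.5), math.log(2.0)   # a_0 = 1.5, a_1 = 2, both > 1
scaled = []
for n in range(2, 60):
    # log a_n = 2 log a_{n-1} + log(1 + (a_{n-2}/a_{n-1})^2)
    la_prev, la = la, 2 * la + math.log1p(math.exp(2 * (la_prev - la)))
    scaled.append(la / 2**n)
    if n == 2:
        log_a2 = la                          # log a_2, for the bounds above

monotone = all(b >= a for a, b in zip(scaled, scaled[1:]))
lo, hi = 0.25 * log_a2, 0.25 * log_a2 + 0.5 * math.log(2)
print(monotone, lo <= scaled[-1] <= hi, scaled[-1])
```

The scaled sequence $\frac{1}{2^n}\log a_n$ increases and settles between $\frac14\log a_2$ and $\frac14\log a_2+\frac12\log 2$, as proved.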
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4367967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Necessary and sufficient condition for diagonalizability of a rank $1$ $n\times n$ square matrix. Let $A$ be an $n\times n$ square matrix of rank $1$. I want to find a necessary and sufficient condition for the diagonalizability of this particular form of $A$.
I know equivalent conditions for diagonalizability of $n\times n$ matrix $A$.
*
*Its minimal polynomial has no repeated roots.
*It has $n$ linearly independent eigenvectors.
I found interesting posts about the full rank of $A$ in Diagonalizable vs full rank vs nonsingular (square matrix) ....,
| First, we have the following lemma
Lemma: $A$ has rank $1$ iff there exist non-zero column vectors $v, w\in \mathbb R^n$ such that $A = vw^T$ (see e.g. MSE).
Theorem: For non-zero vectors $v, w$ in $\mathbb R^n$, $A=vw^T$ is diagonalizable if and only if $v$ and $w$ are non-orthogonal.
Note. $w^Tv\neq 0$ is equivalent to the trace of $A$ being non-zero.
Proof of theorem:
Part 1 ($\Longleftarrow$).
Assume that $w^Tv\neq 0$.
We have that $\lambda=w^Tv\neq 0$ and $v_0=v\neq 0$ form an eigenvalue–eigenvector pair, since $Av_0 = vw^Tv= \lambda v$. Let $v_1,...,v_{n-1}$ be a basis of $w^{\bot}=\{ u: w^Tu=0\}$; here $\dim w^{\bot}=n-1$ since $w\neq 0$. The set $\{v_0, v_1,..., v_{n-1}\}$ is a basis since it is linearly independent ($v_0=v\notin w^{\bot}$ while $v_1,...,v_{n-1}\in w^{\bot}$). Then $V= [v_0, v_1,...,v_{n-1}]$ is invertible. Let $D=\operatorname{diag}(\lambda, 0,....,0)$ be the $n\times n$ diagonal matrix. We have $A=VDV^{-1}$. Hence $A$ is diagonalizable.
Part 2 ($\Longrightarrow$). We now assume that $w^Tv=0$. We aim to prove that $A$ is not diagonalizable by showing that every eigenvector of $A$ belongs to the $(n-1)$-dimensional subspace $w^{\bot}$, so no basis of eigenvectors can exist. Assume that $u\neq 0$ is an eigenvector not in $w^{\bot}$, i.e. $w^Tu\neq 0$. Then $Au=(w^Tu) v = \lambda u$, where $\lambda$ is the corresponding eigenvalue of $u$. Since $w^Tu\neq 0$, $v\neq 0$ and $u\neq 0$, we have $\lambda\neq 0$. Thus, $u = k v$ where $k=w^T u/\lambda$.
Then $w^Tu=k(w^Tv)=0$, which is a contradiction.
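Both parts can be checked numerically; the vectors below are arbitrary examples of my own (the matrix $V$ is built exactly as in Part 1, and for the orthogonal case we use that a rank-1 matrix $B$ with $\operatorname{tr}B=0$ satisfies $B^2=0$, so a diagonalizable such $B$ would have to be $0$):

```python
import numpy as np

v = np.array([1.0, 2.0, 0.0])
w = np.array([3.0, 1.0, 1.0])               # w.T v = 5 != 0: diagonalizable
A = np.outer(v, w)
lam = w @ v                                  # the unique nonzero eigenvalue

# V = [v | orthonormal basis of w-perp], exactly as in Part 1
perp = np.linalg.svd(w.reshape(1, -1))[2][1:].T
V = np.column_stack([v, perp])
D = np.diag([lam, 0.0, 0.0])
err = np.abs(V @ D @ np.linalg.inv(V) - A).max()   # ~ 0: A = V D V^{-1}

w0 = np.array([2.0, -1.0, 4.0])              # w0.T v = 0: not diagonalizable
B = np.outer(v, w0)
print(err, np.abs(B @ B).max(), np.abs(B).max())   # B^2 = 0 but B != 0
```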
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4368111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How do I evaluate $\frac{1}{2 \pi i} \oint_{C} \frac{z^{2} d z}{z^{2}+4}$ How do I evaluate the following integral when where $C$ is the square with vertices at $\pm2, \pm2+4i$,
$$\frac{1}{2 \pi i} \oint_{C} \frac{z^{2} d z}{z^{2}+4}$$
Using Cauchy integral:
$\frac{z^{2}}{z^{2}+4}=\frac{z^2}{(z+2i)(z-2i)}=\frac12\frac{z^2}{z+2i}+\frac12\frac{z^2}{z-2i}$ then $\frac{1}{2 \pi i} \oint_{C} \frac{z^{2} d z}{z^{2}+4} \implies \frac{1}{2 \pi i} \oint_{C} \frac12\frac{z^2 d z}{z+2i}+\frac{1}{2 \pi i} \oint_{C} \frac12\frac{z^2 d z}{z-2i}$
$\frac{z^2}{z+2i}$ is analytic on and inside $C$, hence we can apply Cauchy theorem and for the second term we use Cauchy integral formula,
$$\frac{1}{2 \pi i} \oint_{C} \frac12\frac{z^2 d z}{z+2i}+\frac{1}{2 \pi i} \oint_{C} \frac12\frac{z^2 d z}{z-2i}=0+\frac{1}{4\pi i}2\pi i \times f(2i)=-2$$
Using Residue Theorem:
$2i$ is the only isolated singularity in $C$.
$$\frac{1}{2 \pi i} \oint_{C} \frac{z^{2} d z}{z^{2}+4}=\frac{1}{2 \pi i} 2\pi i \times \text{Res}(f,2i)=i$$
I get correct answer for Residue Theorem but couldn't understand where I do wrong when using Cauchy integral.
It will be great help if someone clear me when to use which method to find the integral.
| Your partial fraction decomposition is not valid: since the numerator's degree is not smaller than the denominator's, you must divide first. Indeed, $\frac12\frac{z^2}{z+2i}+\frac12\frac{z^2}{z-2i}=\frac{z^3}{z^2+4}\ne\frac{z^2}{z^2+4}$. I will perform the first computation in a slightly different manner:
$$
\begin{aligned}
\frac{1}{2 \pi i} \oint_{C} \frac{z^2 }{z^2+4}\; dz
&=
\frac{1}{2 \pi i} \oint_{C} \frac{(z^2 + 4) - 4}{z^2+4}\; dz
\\
&=
\frac{1}{2 \pi i} \oint_{C} dz
+
\frac{1}{2 \pi i} (-4)\oint_{C} \frac1{z^2+4}\; dz
\\
&=
0
+
\frac{1}{2 \pi i} (-4)\oint_{C} \frac 1{4i}\left(\frac1{z-2i} - \frac1{z+2i}\right)\; dz
\\
&=
\frac{1}{2 \pi i} \cdot i\oint_{C} \frac{dz}{z-2i}
-
\frac{1}{2 \pi i} \cdot i\oint_{C} \frac{dz}{z+2i}
\\
&= i-0\ .
\end{aligned}
$$
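A direct numerical check of this value (the discretization is my own; the square path is traversed counterclockwise, enclosing the pole at $2i$):

```python
import numpy as np

def segment_integral(f, a, b, n=20000):
    t = np.linspace(0.0, 1.0, n)
    z = a + t * (b - a)
    fz = f(z)
    return np.sum((fz[1:] + fz[:-1]) / 2 * np.diff(z))   # trapezoid rule

f = lambda z: z**2 / (z**2 + 4)
square = [2, 2 + 4j, -2 + 4j, -2]                        # counterclockwise
total = sum(segment_integral(f, a, b)
            for a, b in zip(square, square[1:] + square[:1]))
val = total / (2j * np.pi)
print(val)   # close to 1j, matching the residue computation
```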
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4368289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finding bounds for a function I would like to show that
$$\frac{1}{\pi^2}<\int_{\pi/2}^\pi\frac{\sin x}{x^3}<\frac{3}{2\pi^2}.$$
We know that $\frac{\sin x}{x^3}\leq\frac{1}{x^3}$ and integrating gives $\int_{\pi/2}^\pi\sin x/x^3\leq3/(2\pi^2)$. I don't know how to make $\leq$ into $<$. On the other hand, $\frac{1}{\pi^3}\leq\frac{\sin x}{x^3}$ and integrating gives $1/\pi^3\leq\int_{\pi/2}^\pi\sin x/x^3$. But $1/\pi^3<1/\pi^2$ so this clearly is not what I want.
Any suggestions in how to fix these issues?
|
I don't know how to make $≤$ into $<$
Notice that $\frac{\sin(x)}{x^3}$ and $\frac{1}{x^3}$ are only equal at $\frac{\pi}{2}$. So if we split $\left[\frac{\pi}{2} , \pi \right]$ as $\left[\frac{\pi}{2} , \frac{3\pi}{4} \right] \cup \left[\frac{3\pi}{4} , \pi \right]$ for example, we can thus assert that
\begin{align}
\frac{\sin(x)}{x^3}\le \frac{1}{x^3} \text{ on }\left[\frac{\pi}{2} , \frac{3\pi}{4} \right]\\
\frac{\sin(x)}{x^3}\mathbin{\color{red}{<}} \frac{1}{x^3} \text{ on }\left[\frac{3\pi}{4}, \pi \right]
\end{align}
And hence
\begin{align*}
\int_{\frac{\pi}{2}}^\pi\frac{\sin x}{x^3} \, \mathrm{d}x &= \int_{\frac{\pi}{2}}^{\frac{3\pi}{4}}\frac{\sin x}{x^3} \, \mathrm{d}x + \int_{\frac{3\pi}{4}}^{\pi}\frac{\sin x}{x^3} \, \mathrm{d}x \\
& \le \int_{\frac{\pi}{2}}^{\frac{3\pi}{4}}\frac{1}{x^3} \, \mathrm{d}x + \int_{\frac{3\pi}{4}}^{\pi}\frac{\sin x}{x^3} \, \mathrm{d}x \\
& =\frac{10}{9 \pi^2} + \int_{\frac{3\pi}{4}}^{\pi}\frac{\sin x}{x^3} \, \mathrm{d}x\\
&\mathbin{\color{red}{<}}\frac{10}{9 \pi^2} + \int_{\frac{3\pi}{4}}^{\pi}\frac{1}{x^3} \, \mathrm{d}x\\
& = \frac{3}{2\pi^2}
\end{align*}
as desired.
On the other hand $\frac{1}{\pi^3}\leq\frac{\sin x}{x^3}$.
This is not the inequality you want. Notice that on $\left[\frac{\pi}{2}, \pi \right]$ we know $\sin(x)$ is concave, which means it can be bounded from below by the line passing through the endpoints. This line turns out to be
$$
y\ =\ -\frac{2}{\pi}\left(x-\pi\right)
$$
So we can thus say that
$$
\frac{-\frac{2}{\pi}\left(x-\pi\right)}{x^{3}} \le \frac{\sin(x)}{x^3} \text{ on }\left[\frac{\pi}{2} , \pi \right]
$$
And since $ \int_{\frac{\pi}{2}}^\pi\frac{-\frac{2}{\pi}\left(x-\pi\right)}{x^{3}} \, \mathrm{d}x =\frac{1}{\pi^2} $ this gives the desired result.
Lastly, if you're again worried about the $\le$ instead of $<$, then you can use the same trick of splitting the interval and then evaluating the integrals separately.
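As a numerical confirmation of the two strict bounds (composite Simpson's rule, with a discretization of my own choosing):

```python
import math

def simpson(f, a, b, n=2000):                 # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

I = simpson(lambda x: math.sin(x) / x**3, math.pi / 2, math.pi)
print(1 / math.pi**2 < I < 3 / (2 * math.pi**2), I)
```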
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4368515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Exercise 2.10.8 of Classical and Modern Numerical Analysis by Azmy S. Ackleh et al. First, thank you for reading my post!
I have been trying to solve the following problem (image attached below) and I have zero idea of how to approach it. I am reading the book on my own and was able to figure out the first 7 problems of this problem set. Can someone please give me an idea of how to start this problem?
My approach:
We would want $|x_n-\alpha|<\epsilon,$ for arbitrary $\epsilon>0$ and large enough $n.$ This is equivalent to making $|g(x_n)-cf(x_n)-\alpha|<\epsilon.$ From here, I am clueless as to how to proceed or how to use the given data.
Thank you in advance!
| Root is $\alpha$. $x_n$ near $\alpha$, so $0<|x_n-\alpha|<\delta$
$\frac{|x_n+cf(x_n)-\alpha|}{|x_n-\alpha|}<1$. Condition for convergence.
$\frac{|x_n-\alpha+cf(x_n)-cf(\alpha)|}{|x_n-\alpha|}<1$ Adding zero to numerator.
$|x_n-\alpha+cf(x_n)-cf(\alpha)|<|x_n-\alpha|$
$|x_n-\alpha||1+c\frac{f(x_n)-f(\alpha)}{x_n-\alpha}|<|x_n-\alpha|$. Factor out $(x_n-\alpha)$
$|1+c \frac{f(x_n)-f(\alpha)}{x_n-\alpha}|<1$. Divided.
$-1<1+c\frac{f(x_n)-f(\alpha)}{x_n-\alpha}<1$. Re-expressed as a compound inequality.
$-2<c\frac{f(x_n)-f(\alpha)}{x_n-\alpha}<0$
$-2< cf'(\alpha)<0$ Condition on $c$ with derivative.
We don't know if $f'(\alpha)$ is positive or negative so:
$|c|<\frac{2}{|f'(\alpha)|}$.
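A quick illustration of the condition, with a function and constants of my own choosing: take $f(x)=x^2-2$, so $\alpha=\sqrt2$ and $f'(\alpha)=2\sqrt2\approx 2.83$. Choosing $c$ with $cf'(\alpha)\in(-2,0)$ makes the iteration $x_{n+1}=x_n+cf(x_n)$ converge to the root:

```python
import math

f = lambda x: x * x - 2.0          # root alpha = sqrt(2)
alpha = math.sqrt(2.0)

def iterate(c, x=1.5, steps=200):
    for _ in range(steps):
        x = x + c * f(x)
    return x

x_good = iterate(-0.3)             # c * f'(alpha) ~ -0.85, inside (-2, 0)
print(abs(x_good - alpha))         # converges to the root
```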
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4368679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving the system $2x^2 = \frac{y}{z}+\frac{z}{y}$, $2y^2 = \frac{z}{x}+\frac{x}{z}$, $2z^2 = \frac{x}{y}+\frac{y}{x}$
Find all triplets $\{ x, y, z\}$, such that all three of them are real and nonzero, and satisfies:
$$2x^2 = \frac{y}{z}+\frac{z}{y}$$
$$2y^2 = \frac{z}{x}+\frac{x}{z}$$
$$2z^2 = \frac{x}{y}+\frac{y}{x}$$
I'm stuck in this problem. My first thought was that the only solutions that are possible are $\{ 1, 1, 1 \}$ and $\{ -1, -1, -1\}$, but I do not know how to prove that these are the only solutions. I know for a fact that there seems something "fishy" about the fractions, and I am thinking of changing those into:
$$2x^2yz=y^2+z^2$$
$$2y^2xz=x^2+z^2$$
$$2z^2xy=y^2+x^2$$
but I do not know how to proceed from here. Can anybody give me a hint on how to proceed?
| All of $x,y,z$ have the same sign: if $y$ and $z$ had opposite signs, then $\frac{y}{z}+\frac{z}{y}\le -2<0\le 2x^2$, which is impossible; similarly for the other pairs.
Assume $x,y,z>0$.
Assume $x\geq y\geq z$.
$$\frac{y}{z}+\frac{z}{y}=2x^2\geq 2y^2=\frac{z}{x}+\frac{x}{z}$$
$$\frac{y}{z}\geq \frac{x}{z}\qquad\text{(since $t\mapsto t+\tfrac1t$ is increasing for $t\ge 1$, and $\tfrac yz,\tfrac xz\ge 1$)}$$
$$y\geq x$$
$$x=y$$
$$2z^2=\frac{x}{y}+\frac{y}{x}=2$$
$$z=1$$
$$2x^2=x+1/x$$
$$2x^3=x^2+1$$
$$2x^3-x^2-1=0$$
$$(x-1)(2x^2+x+1)=0$$
$$x=1$$
$$x=y=z=1$$
dropping the assumption that all are positive, (x,y,z)=(1,1,1) or (-1,-1,-1).
there might be a more elegant solution.
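A quick check that both triples found actually satisfy the system:

```python
def residual(x, y, z):
    # the three equations, each rearranged to "left side minus right side"
    return (2*x*x - y/z - z/y,
            2*y*y - z/x - x/z,
            2*z*z - x/y - y/x)

print(residual(1, 1, 1), residual(-1, -1, -1))   # both are (0, 0, 0)
```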
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4368783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Question about the proof of Krull-Akizuki theorem
(Krull-Akizuki) If $R$ is a one-dimensional
Noetherian domain with quotient field $K$, and $L$ is a finite
extension field of $K$, then any subring $S$ of $L$ that contains $R$
is Noetherian and of dimension $\leq 1$ , and has only finitely many
ideals containing a given nonzero ideal of R. In particular, the
integral closure of $R$ in $L$ is Noetherian.
The main part of the proof consists of showing that if $0\neq J\subset S$ is an ideal, then $J/aS$, for some $0\neq a\in J\cap R$, is an $R$-module of finite length; it then concludes that the remaining assertions follow.
I am not quite sure about the dimension $\leq 1$ part of the theorem. I think it might be related to the incomparability of distinct prime ideals, but that property holds when $R\subset S$ is an integral extension, whereas in the situation above $S$ is only known to be an algebraic extension of $R$. Might the claim still be true somehow, or is there another point that I missed?
| It is also proved that $\ell_R(S/aS)<\infty$, where $a\in R$, $a\ne0$. If $P$ is a non-zero prime ideal of $S$, then $P\cap R\ne(0)$ and $S/P$ is a quotient of $S/aS$ for some $a\in P\cap R$, $a\ne 0$. But $S/aS$ is an artinian $R$-module, hence an artinian ring. It follows that $S/P$ is an artinian domain, hence a field. We showed that $P$ is maximal, and thus $\dim S\le1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4369458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the set $A = \left\{ x \in \mathbb{R}\, |\, x^2 \text{ is rational}\right\}$ countable? Is the set $A = \left\{ x \in \mathbb{R} \,|\, x^2 \text{ is rational}\right\}$ countable?
I know that the set of all rational numbers is countable. But for some irrational numbers, $x^2$ is rational.
Example: $(\sqrt2)^2$ is rational.
So that the set $A$ includes all rational numbers and some irrational numbers.
Then how can we prove that the set $A$ is countable?
| The range $R$ of$$\begin{array}{ccc}\{x\in\Bbb Q\mid x\geqslant0\}&\longrightarrow&\Bbb R\\x&\mapsto&\sqrt x\end{array}$$is a countable set (since its domain is countable) and $A=R\cup(-R)$. Therefore, $A$ is countable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4369607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
What are exactly the information given by the CDF of a random variable? Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and $X : (\Omega, \mathcal{F}) \rightarrow (\mathbb{R}, \mathcal{F}_E)$ a random variable that admits a density function $f_X : \mathbb{R} \rightarrow \mathbb{R}$.
I have some questions about the information given by the cumulative distribution function of $X$ ($F_X : \mathbb{R} \rightarrow [0,1]$).
i) Does the CDF determine $f_X$ uniquely ?
ii) Does the CDF determine the law of X uniquely ?
iii) Can we find a subset of events that uniquely determine the CDF ?
For i), I would say that as $F_X(t) = \int_{- \infty}^{t}f_X(x)dx$, the density function is the derivative of the CDF and therefore the CDF can only define one density function. But I feel like maybe we could find a counterexample by considering Lebesgue measure and taking sets of the form $(a,b)$ and $[a,b]$.
For ii), as $\forall x \in \mathbb{R}$, $\mathbb{P}(X = x) = F_X(x) - F_X(x-)$ and $\mathbb{P}(X \leq x) = F_X(x)$, I am tempted to say that the law of $X$ is indeed by definition determined uniquely by the CDF.
For iii), maybe we could take all events of the form $\{X \in (a,b): a, b \in \mathbb{R}\}$ but I'm not quite sure about that, and particularly regarding the uniqueness.
Thanks in advance for your help.
| By subtraction, the CDF $F(x) := P(X \leq x)$ determines the probabilities $P(X \in [a, b])$ for every $a, b \in \mathbb{R}$. By an approximation theorem, such as Caratheodory's theorem, this information determines $P(X \in E)$ for every $E \in B(\mathbb{R})$. So (ii) is true.
(i) follows (though, as always, the density is unique only up to almost everywhere equality).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4369736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Series including binomial coefficients I am trying to evaluate the following sum:
$\Sigma_{n=0}^N\binom{N}{n} e^{cn( x + y)}$
I know that you can write $\Sigma_{n=0}^N\binom{N}{n}=2^N$, but I cannot do this right now because I have the exponential factor.
Then I thought of expressing $ e^{cn( x + y)}= \sum_{k=0}^{\infty} \frac{[cn( x + y)]^k}{k!} $
But this only complicates things further.
Any tips on how I should proceed ?
Edit: c,y,x are constants
| Substitute $z=e^{c(x+y)}$
$\sum_{k=0}^{N} \binom{N}{k}e^{ck(x+y)}=\sum_{k=0}^{N} \binom{N}{k}z^k={(1+z)}^N={(1+e^{c(x+y)})}^N$
$\sum_{k=0}^{N} \binom{N}{k}z^k={(1+z)}^N$
The above statement is just from Newton's binomial theorem if we plug $x=1$ and $y=z$:
https://en.wikipedia.org/wiki/Binomial_theorem
I would say that this is a pretty nice form
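Numerically checking the substitution with arbitrary constants of my own choosing:

```python
import math

N, c, x, y = 8, 0.3, 1.2, -0.5
z = math.exp(c * (x + y))
# sum_k C(N,k) e^{ck(x+y)} = sum_k C(N,k) z^k = (1+z)^N
lhs = sum(math.comb(N, k) * math.exp(c * k * (x + y)) for k in range(N + 1))
rhs = (1 + z) ** N
print(lhs, rhs)    # agree
```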
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4369913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that a function $f:[0,\infty)\to[0,\infty)$ satisfying these conditions is necessarily non-decreasing I have a function $f: [0, \infty) \to [0, \infty)$ which is smooth. I also have that
*
*$f(0) = 0$
*$f'(0) > 0$
*$f''(x) \leq 0$, for all $x \in [0, \infty)$
It intuitively makes sense then that $f$ on $[0, \infty)$ must be strictly non-decreasing (since if it was ever decreasing, we would have to eventually "pull up" and contradict clause 3). I want to prove this, and I was able to derive contradictions if at some point the slope was negative. If at point $c$ in $(0, infty)$, $f'(c) < 0$, take some point $c + h$ further on in $[0, \infty)$, $h > 0$, and three cases occur:
a. $f(c + h) = f(c)$ -> got a contradiction
b. $f(c + h) > f(c)$ -> got a contradiction
But for $f(c + h) < f(c)$, this leads nowhere since it can still technically happen. I am at a complete loss on how to go forward. I was able to show that $f$ can never intersect $0$.
Edit: My main goal is to claim that $f(a) \leq f(b)$ for any $a < b$ on $[0, \infty)$
| Take the function defined at $ [0,+\infty)$ by
$$(\forall x\ge 0)\;\; f(x)=x(1-x)$$
we have
$$f(0)=0$$
$$f'(0)=1>0$$
and
$$(\forall x\ge 0)\;\; f''(x)=-2\le 0$$
(Caveat: this $f$ takes negative values for $x>1$, so it does not satisfy the codomain condition $f:[0,\infty)\to[0,\infty)$. If that condition is enforced, the claim is true: were $f'(c)<0$ for some $c$, concavity would give $f(x)\le f(c)+f'(c)(x-c)\to-\infty$ as $x\to\infty$, contradicting $f\ge 0$.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
To show that a complex-valued function is injective Given a complex-valued function $w=f(z) = - \dfrac{1}{2} \left( z + \dfrac{1}{z} \right)$ from $\{ z= x+iy : |z| < 1 \}$ to $\{ w : \text{Im}(w) >0 \}$, show that $f$ is injective.
My Approach : Let $f(z_1) = f(z_2)$ which yields
$$(z_1-z_2) + \left( \frac{\overline{z_1}}{|z_1|^2} - \frac{\overline{z_2}}{|z_2|^2} \right) = 0 .$$
From here how can I conclude that $z_1 = z_2$ ? Any help is much appreciated.
| Alt. hint: $\;f(z)=w \iff z^2 + 2w z+ 1 = 0\,$. The quadratic has two roots whose product is $1$, so only one of them can be inside the unit circle $|z| \lt 1$.
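The hint can be checked numerically (the random samples are my own choice): for random $w$, the two roots of $z^2+2wz+1$ multiply to $1$ by Vieta, so they can never both lie strictly inside the unit disk.

```python
import numpy as np

rng = np.random.default_rng(2)
ok = True
for _ in range(200):
    w = complex(rng.normal(), rng.normal())
    r = np.roots([1.0, 2 * w, 1.0])          # roots of z^2 + 2wz + 1
    ok &= abs(r[0] * r[1] - 1) < 1e-8        # Vieta: the product of roots is 1
    # hence not both roots can lie strictly inside the unit circle
    ok &= not (abs(r[0]) < 1 - 1e-8 and abs(r[1]) < 1 - 1e-8)
print(ok)
```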
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Character group of non-split torus in $GL_2$ Let $E=\mathbb Q(\sqrt{-d})$ be an imaginary quadratic field and let $R_{E/\mathbb Q}(\mathbb G_m)$ be the restriction of scalars of the multiplicative group, i.e. $R_{E/\mathbb Q}(\mathbb G_m)(X) = \mathbb G_m(X \times_{\mathbb Q} E)$ for each $\mathbb Q$-scheme $X$.
Picking a basis $\langle 1, -\sqrt{-d}\rangle$ for $E$ and letting $E$ act on itself, we can embed $E$ into $M_2(\mathbb Q)$ by $(\alpha, \beta) \mapsto \begin{bmatrix}\alpha&-d\beta\\\beta&\alpha\end{bmatrix}$.
Question 1: This embedding should give rise to an embedding of algebraic groups over $\mathbb Q$, $R_{E/\mathbb Q}(\mathbb G_m) \hookrightarrow GL_2$. Is $R_{E/\mathbb Q}(\mathbb G_m)$ a maximal (non-split) torus in $GL_2$? I seem to remember that the elements of maximal non-split tori in $GL_n$ all satisfy $\det = 1$, which is not the case here. What went wrong?
Question 2: Whatever the correct definition of the maximal torus in $GL_2$ obtained from $E$ is, how can we describe its character group explicitly?
| Question 1: Yes, this is correct. In fact, in general, all maximal tori of $\mathrm{GL}_{n,F}$ are of the form $\mathrm{Res}_{E/F}\mathbb{G}_{m,E}$ where $E$ is an etale algebra over $F$ of degree $n$ (i.e. $E=L_1\times\cdots\times L_n$ with each $L_i/F$ a finite separable extension, and $\sum_i [L_i:F]=n$) (although you need to choose an embedding of $E$ into $\mathrm{Mat}_{n,F}$ to get your actual torus).
Question 2: In general if $E=L_1\times\cdots\times L_n$ then
$$X^\ast(\mathrm{Res}_{E/F}\mathbb{G}_{m,E})=\prod_i X^\ast(\mathrm{Res}_{L_i/F}\mathbb{G}_{m,L_i})$$
and $X^\ast(\mathrm{Res}_{L_i/F}\mathbb{G}_{m,L_i})$ is the permutation module for $\mathrm{Gal}(\overline{F}/F)$ associated to the $\mathrm{Gal}(\overline{F}/F)$-set $\mathrm{Hom}_F(L_i,\overline{F})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Question on the integrability of the product of functions If $f \in L^2([0,1]$ and $g$ continuous on [0,1] , then why the product $fg$ is in $L^2([0,1])$?
By definition, we have $\left(\int_{[0,1]} |f|^2 d\mu \right)^{1/2} < + \infty$, why do we have $\left( \int_{[0,1]} |fg|^2 d\mu \right)^{1/2} < + \infty$?
| [0,1] is compact set in $R^1$, continuous function on compact set is bound that
for some M, $|g| < M$,
$$
\left( \int_{[0,1]} |fg|^2 d\mu \right)^{1/2} < \left( M^2\int_{[0,1]} |f|^2 d\mu \right)^{1/2} < \infty
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Identifying whether two statements are same. The two statements are:
$\forall a\in A, \exists b\in B, M(a)\implies N(b)$
$(\forall a\in A, M(a))\implies (\exists b\in B, N(b))$
I am not so good in logics. I feel the second statement implies first one. The existence of $b$ is not depending on $a$. So, I think two statements are same.
Am I correct? Can anyone give some counter for sets and predicates if No. Thank you.
| They are not the same. Consider the following statements:
*
*$M(a)$: Person $a$ has a dog
*$N(b)$: Person $b$ has a cat
Then the first statement says that for any person, if that person has a cat, then there is a person with a dog. In other words, if there is a cat, then there is a dog.
The second sentence says that if every person has a cat, then someone has a dog.
They are seen to be inequivalent if some but not all people have cats, and no one has a dog.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Every translation of a topological group is a homeomorphism Definition. A topological group is a group endowed with a topology such that
(1) the multiplication map $m: G\times G\rightarrow G$, $(g,h)\mapsto gh$ is continuous, and
(2) the inversion map $G\rightarrow G$, $g\mapsto g^{-1}$ is continuous.
What I want to do is to prove the following easy fact:
Fact. For any $g\in G$ the left translation $L_g:G\rightarrow G$, $x\mapsto gx$ is continuous.
To prove this, we first take any open subset $U\subseteq G$, and our task is to show that $L_g^{-1}(U)$ is an open subset of $G$. My idea is to use the following relation
$L_g^{-1}(U)=m^{-1}(U)\cap (\{ g \}\times G)$.
By condition (1) of the definition, $m^{-1}(U)$ is open in $G\times G$. If one could show that $\{ g \}\times G$ is open in $G\times G$, then $L_g^{-1}(U)$ would be open because it is the intersection of two open subsets. However, $G$ is generally not discrete, so $\{ g \}\times G$ may not be open in $G\times G$.
It seems that this Fact is easy to prove, but I don't know how to do it.
Any hint or help will be appreciated.
| As $m \colon G \times G \to G$, $(g, h) \mapsto g h$ is continuous, so is its restriction $$m|_{\{ g_0 \} \times G} \colon \{ g_0 \} \times G \to g_0 G = \{ g_0 h : h \in G \}, \qquad (g_0, h) \mapsto g_0 h$$
for any $g_0 \in G$, which can clearly be identified with $L_{g_0}$, since $\{ g_0 \} \times G$ is homeomorphic to $G$ via $h \mapsto (g_0, h)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4370939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relating the $F$-rational points of a torus to the character group If $T$ is an algebraic torus over a field $F$, then I keep reading that $T(F) \cong X_*(T) \otimes F^\times$, where $X_*(T)$ is the cocharacter lattice. What is the isomorphism between them?
| This is not true unless $T$ is split. If $T$ is split the isomorphism is
$$X_\ast(T)\otimes F^\times\to T(F), \qquad \alpha\otimes c\mapsto \alpha(c).$$
So, for instance, if $T=\mathbb{G}_{m,F}^n$ then
$$X_\ast(T)=\left\{\sum_i a_i e_i: a_i\in\mathbb{Z}\right\}$$
where $$e_i:\mathbb{G}_{m,F}\to \mathbb{G}_{m,F}^n,\qquad a\mapsto (1,\ldots,a,\ldots,1).$$
Our map then takes
$$\sum_i e_i \otimes c_i\mapsto (c_1,\ldots,c_n)$$
where now here each $c_i$ is allowed to be in $F^\times$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4371096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Area and Integral of step functions in Tom Apostol's Calculus vol. $1$ At the beginning of the section $1.18$ of Tom Apostol's Calculus vol $1.$ (second edition), it is written that from the area properties introduced in section $1.6$, he proved that the area of the ordinate set of a nonnegative step function is equal to the integral of the function.
However, in the section $1.12$, where he is defining the integral for step functions, he just says that the definition is made such that the integral is equal to the area of the function's ordinate set, but I do not see any proof for that later.
Is there a proof maybe in an earlier version of the book, and it was removed from the second edition (which is the one I have)? In particular, I would be interested to see the proof that the step function integral satisfies the exhaustion property (i.e. Axiom $6$ of an area function).
Thank you!
| I still didn't find the proof in the book, but it seems it could be derived from the Additive and the Choice of scale properties, which are the axioms of an area function. Namely:
*
*Any rectangle is measurable and has area $a(R) = hk$ (by Property 5).
*A set of rectangles is also measurable and has the area which is the sum of the areas of the individual rectangles (by Property 2).
The definition of the integral of a non-negative step function is the sum of the areas of individual rectangles, which fits Property 2. Therefore, by being part of the assumed area function, the integral of non-negative step functions also satisfies the other properties, including the exhaustion property.
Please let me know if I went wrong somewhere.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4371227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
showing removable singularity at origin. Let $f$ be holomorphic in the punctured disk $\{z:0< \vert z \vert<2\}$ such that
$$\vert f(z) \vert \leq \bigg(\log \frac{1}{\vert z \vert}\bigg)^{100}, \space \text{in $\vert z \vert \leq \frac{1}{2}$}$$
and
$$\vert f(z) \vert = 1, \space \text{on $\vert z \vert = 1$}$$
I need to show $f$ has a removable singularity at the origin. Does this mean I need to find an analytic function $g$ defined on an $\epsilon$-ball about the origin that agrees with $f$ for $0< \vert z \vert < \epsilon$? So is this by construction? Am I constructing such an analytic $g$?
| Note that$$\lim_{z\to0}\bigl|zf(z)\bigr|\le\lim_{z\to0}|z|\left(\log\frac{1}{|z|}\right)^{100}=0.$$Therefore, if $g(z)=zf(z)$, then $g$ has a removable singularity at $0$ (by Riemann's theorem) and, if you extend it to $0$, defining $g(0)=0$, $g$ is analytic (by the same theorem). Since $g(0)=0$, you can write $g(z)$ near $0$ as $a_1z+a_2z^2+\cdots$, and therefore near $0$ you have $f(z)=a_1+a_2z+\cdots$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4371552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
If $\varphi: G \to H$ is an isomorphism and $N \trianglelefteq G$, then, $\varphi(N) \subseteq K$, where $K$ is some normal subgroup in $H$? If $\varphi: G \to H$ is an isomorphism and $N \trianglelefteq G$, then, is it true that $\varphi(N) \subseteq K$, where $K$ is some normal subgroup in $H$? Specifically, if $N \cong K$, is $\varphi(N) = K$?
Motivation:
Suppose that $\pi: G \to H/K$ such that $\pi (g) = \varphi (g) K$, where $\varphi$ is some isomorphism from $G$ to $H$. Also, that $\text{ker}(\pi) \subseteq N$. If $\alpha: G/N \to H/K$ such that $\alpha (gN) = \pi (g)$. My question: Is $\alpha$ well-defined? For more details click on this post.
My attempt:
If $a, b \in G$ and $aN = bN \Rightarrow (a * b^{-1})N = N \Rightarrow a * b^{-1} \in N \Rightarrow \varphi (a * b^{-1}) \in K$ (here's when I get stuck and I all I wanted to prove was that $\varphi (a * b^{-1}) \in K$).
Hope my question is clear. Thanks.
| In order to get $\varphi(N)$ normal in $H$ (so that you can take $\varphi(N)$ as the sought $K$), your $\varphi$ is even overdetermined. It suffices to assume $\varphi$ a surjective homomorphism. In fact in this case, $\forall h\in H$, $\exists g\in G$ such that:
\begin{alignat}{2}
h\varphi(N)h^{-1} &= &&\{h\varphi(n)h^{-1}, n\in N\} \\
&= &&\{\varphi(g)\varphi(n)\varphi(g)^{-1}, n\in N\} \\
&= &&\{\varphi(g)\varphi(n)\varphi(g^{-1}), n\in N\} \\
&= &&\{\varphi(gng^{-1}), n\in N\} \\
&\stackrel{(N\unlhd G)}{=} &&\{\varphi(n'), n'\in N\} \\
&= &&\varphi(N)
\end{alignat}
whence $\varphi(N)\unlhd H$. Of course this holds, a fortiori, if $\varphi$ is an isomorphism (just replace "$\exists$" with the stronger "$\exists !$").
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4371711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Prove $\|\cdot\|_1$ and $\|\cdot\|_2$ are equivalent iff there are positive scalars $m$ and $M$ s.t. every $v$, $m\|v\|_1 \leq \|v\|_2 \leq M\|v\|_1$ Exercise
We say two norms are equivalent iff their induced metrics are equivalent as well.
Prove the following statements are equivalent:
(a) $\|\cdot\|_{1}$ and $\|\cdot\|_{2}$ are equivalent.
(b) There are $m\in\mathbb{R}_{>0}$ and $M\in\mathbb{R}_{>0}$ such that, for every $v$ it holds that
\begin{align*}
m\|v\|_{1} \leq \|v\|_{2} \leq M\|v\|_{1}.
\end{align*}
My attempt
Let us prove the implication $(b)\Rightarrow(a)$ first. According to the hypothesis, for every $x\in X$ and every $r > 0$, there exists $s = r/M > 0$ such that for every $y\in X$
\begin{align*}
\|x - y\|_{1} < s \Rightarrow \|x - y\|_{2} < r.
\end{align*}
Similarly, for every $x\in X$ and every $r > 0$, there exists $t = mr > 0$ such that for every $y\in X$
\begin{align*}
\|x - y\|_{2} < t \Rightarrow \|x - y\|_{1} < r.
\end{align*}
In this way, we have just proven that $\|\cdot\|_{1}\sim\|\cdot\|_{2}$.
However I am not able to prove the implication $(a)\Rightarrow(b)$.
Can somebody help me with this?
EDIT
We say two metrics $d$ and $d'$ defined over a non-empty set $X$ are equivalent iff for every $x\in X$ and every $r > 0$, there are $s > 0$ and $t > 0$ such that
\begin{align*}
\begin{cases}
\{y\in X : d'(x,y) < s\} \subset \{y\in Y : d(x,y) < r\},\\\\
\{y\in X : d(x,y) < t\} \subset \{y\in Y : d'(x,y) < r\}.
\end{cases}
\end{align*}
| Suppose that (a) holds. By your definition of equivalence, with $r=1$ there exists $s>0$, $t>0$ such that 1. if $\| x\|_2 <s$ then $\|x\|_1<1$ and 2. if $\| x\|_1 <t$ then $\|x\|_2<1$.
Let $x\in X$ be arbitrary and nonzero (for $x=0$ the inequalities are trivial), and let $\omega = sx/2\|x\|_2$. Then $$\|\omega \|_2=\frac{s}{2} <s$$ so $\| \omega\|_1<1$. This implies that $s \frac{\|x\|_1}{2\|x\|_2}<1 $ so $$\| x\|_1 \leqslant \frac 2 s \| x\|_2. $$ Similarly by considering $\tilde \omega = tx/2\|x\|_1$ you obtain the other inequality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4371859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Totally imaginary number fields of degree $4$ What are some examples of totally imaginary number fields of degree $4$ that are not a compositum of two imaginary quadratic fields?
How to find such examples (other than by trial and error, considering integer polynomials of degree $4$)?
Is there any useful classification/characterisation of such algebraic number fields? (e.g. in the same way we can say that any totally imaginary number field of degree $2$ is of the form $\mathbb{Q}[\sqrt{d}]$ for some square-free negative integer $d$)?
Thank you.
| There are many examples in the number field database https://www.lmfdb.org/NumberField/ . Just type in [0,2] for the signature.
If you want to construct some, take a polynomial of the form
$f(x) = (x^2+10)(x^2+11) + a$ for small values of $a$; its roots will be close to $\pm\sqrt{-10}$ and $\pm\sqrt{-11}$, hence the field generated by a root (if $f$ is irreducible) will be totally complex.
There are classifications of cyclic quartic fields (Kronecker-Weber), and of course you can distinguish the possible Galois groups in the remaining nonabelian cases (dihedral, quaternion, $A_4$, $S_4$ and the Frobenius group), but the Galois group does not fix the form of a generator.
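As a quick sanity check of this construction (my own sketch, not from the answer): for small $a$ the four roots of $f(x) = (x^2+10)(x^2+11)+a$ can be computed via the substitution $t = x^2$, where $t$ solves $t^2 + 21t + (110+a) = 0$, and they all turn out to be non-real.

```python
import cmath

def quartic_roots(a):
    # Roots of f(x) = (x^2 + 10)(x^2 + 11) + a, found via t = x^2,
    # where t solves t^2 + 21t + (110 + a) = 0.
    disc = cmath.sqrt(21**2 - 4 * (110 + a))
    roots = []
    for t in ((-21 + disc) / 2, (-21 - disc) / 2):
        s = cmath.sqrt(t)
        roots.extend([s, -s])
    return roots

# For small a the values of t are either complex or negative reals,
# so every root of f has nonzero imaginary part: a root generates a
# totally imaginary quartic field (when f is irreducible).
all_nonreal = all(
    abs(r.imag) > 1e-9
    for a in range(-5, 6)
    for r in quartic_roots(a)
)
```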
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4372232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$\varepsilon-\delta$ argument to show that uniform continuity of a real-valued function $f$ on some interval $A$ implies that $f$ is continuous on $A$ Here is an $\varepsilon-\delta$ argument to show that uniform continuity of a real-valued function $f$ on some interval $A$ implies that $f$ is continuous on $A$.
Suppose $f$ is a real-valued function that is uniformly continuous on the interval $A$. This means that for any $\varepsilon$, there is a corresponding $\delta_{\varepsilon}$ such that for any $x,y \in A$ we have that if $|x-y|\lt \delta_\varepsilon$, then $|f(x)-f(y)|\lt \varepsilon \quad (\dagger_1)$.
We will now show that at any arbitrary $a \in A$, $f$ is continuous.
Consider some arbitrary $a$ and some arbitrary $\varepsilon$. Consider the $\delta_{\varepsilon}$ that $(\dagger_1)$ supplies us with. Next, consider the open interval $\left(a-\frac{\delta_{\varepsilon}}{2},a+\frac{\delta_{\varepsilon}}{2}\right)$. Choose any $x,y$ in this interval: then we have that $a-\frac{\delta_{\varepsilon}}{2} \lt x \lt a+\frac{\delta_{\varepsilon}}{2}$ and $a-\frac{\delta_{\varepsilon}}{2} \lt y \lt a+\frac{\delta_{\varepsilon}}{2}$. Some manipulation of the inequalities will show that $x-y \lt \delta_{\varepsilon}$ and $-\delta_{\varepsilon} \lt x-y$, which implies that $|x-y| \lt \delta_{\varepsilon}$.
Then an application of $(\dagger_1)$ tells us that for any $x,y \in \left(a-\frac{\delta_{\varepsilon}}{2},a+\frac{\delta_{\varepsilon}}{2}\right)$, we must have $|f(x)-f(y)|\lt \varepsilon$. Consider the particular instance of $y:=a$. Then we have that for any $x \in \left(a-\frac{\delta_{\varepsilon}}{2},a+\frac{\delta_{\varepsilon}}{2}\right): |f(x)-f(a)| \lt \varepsilon$.
But this is the definition of continuity...in particular, the desired $\delta$ for this $\varepsilon$ is $\frac{\delta_{\varepsilon}}{2}$.
Edit: I think I need to be more careful with the interval $\left(a-\frac{\delta_{\varepsilon}}{2},a+\frac{\delta_{\varepsilon}}{2}\right)$ to ensure that this sits inside $A$.
| Although I see no errors, it is clear to me that it's much simpler than you think it is. Given $\varepsilon>0$, if $|x-a|<\delta_\varepsilon$, and if $x\in A$, then $\bigl|f(x)-f(a)\bigr|<\varepsilon$, and therefore $f$ is continuous at $a$. That's all.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4372423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int_0^\infty \frac{x}{(e^{2\pi x}-1)(x^2+1)^2}dx$? How to calculate the integral $\int_0^\infty \frac{x}{(e^{2\pi x}-1)(x^2+1)^2}dx$? I got this integral by using the Abel-Plana formula on the series $\sum_{n=0}^\infty \frac{1}{(n+1)^2}$. This integral can be split into two integrals with bounds from 0 to 1 and from 1 to infinity, and both integrals converge, so the sum does too. I checked with WolframAlpha and the value of the integral is $\frac{-9 + \pi^2}{24}$, but I don't know how to compute it. Also, I tried to write $\frac{2xdx}{(1+x^2)^2}=-d\frac{1}{x^2+1}$ and then tried to use integration by parts, but didn't succeed.
Any help is welcome. Thanks in advance.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[5px,#ffd]{\int_{0}^{\infty}{x \over \pars{\expo{2\pi x} - 1}\pars{x^{2} + 1}^{2}}
\dd x}
\\[5mm] = &\
{1 \over 4}\bracks{-2\int_{0}^{\infty}{\Im\pars{\bracks{1 + \ic x}^{-2}} \over \expo{2\pi x} - 1}\dd x}
\end{align}
The brackets-$\ds{\bracks{}}$ enclosed expression can be evaluated with the Abel-Plana Formula. Namely,
\begin{align}
&\bbox[5px,#ffd]{\int_{0}^{\infty}{x \over \pars{\expo{2\pi x} - 1}\pars{x^{2} + 1}^{2}}
\dd x}
\\[5mm] = &\
{1 \over 4}\bracks{\sum_{n = 0}^{\infty}{1 \over \pars{1 + n}^{2}} -
\int_{0}^{\infty}{\dd n \over \pars{1 + n}^{2}} -
\left.{1 \over 2}{1 \over \pars{1 + n}^{2}}\right\vert_{n\ =\ 0}}
\\[5mm] = &
{1 \over 4}\pars{{\pi^{2} \over 6} - 1 - {1 \over 2}} = \bbox[5px,#ffd]{{\pi^{2} \over 24} - {3 \over 8}} \approx 0.0362
\end{align}
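A numerical cross-check of the final value (my own addition; plain Simpson's rule in Python, with the removable singularity at $x=0$ filled in by the limit $1/(2\pi)$ and the tail beyond $x=8$ dropped, since the integrand decays like $x e^{-2\pi x}$):

```python
from math import pi, expm1

def integrand(x):
    # x / ((e^{2 pi x} - 1) (x^2 + 1)^2); at x = 0 the limit is 1/(2 pi)
    if x == 0.0:
        return 1.0 / (2.0 * pi)
    return x / (expm1(2.0 * pi * x) * (x * x + 1.0) ** 2)

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2.0 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3.0

approx = simpson(integrand, 0.0, 8.0, 4000)
exact = pi**2 / 24 - 3.0 / 8.0   # the closed form derived above
```

Both come out near $0.0362$, matching the question's value $\frac{\pi^2-9}{24}$.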
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4372571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 3
} |
Let $a,b,c,d,e$ be five numbers satisfying the following conditions...
Let $a,b,c,d,e$ be five numbers satisfying the following conditions: $$a+b+c+d+e =0$$ and $$abc+abd+abe+acd+ace+ade+bcd+bce+bde+cde=33$$ Find the value of $$\frac{a^3+b^3+c^3+d^3+e^3}{502}$$
My Approach:
$$(a+b+c+d+e)^3 = \sum_{a,b,c,d,e}{a^3} + 3\sum_{a,b,c,d,e}{a^2b} + 6\sum_{a,b,c,d,e}{abc} $$
Taking $\mod (a+b+c+d+e)$, $$(a+b) ≡ -(c+d+e)$$ $$ab(a+b) ≡ -ab(c+d+e)$$ $$\sum{a^2b} ≡ -ab(c+d+e) -bc(a+d+e) -cd(a+b+e)-... = -\sum_{a,b,c,d,e}{ab(c+d+e)} = -3\sum_{a,b,c,d,e}{abc}$$ Therefore, $\sum{a^2b} = p(a,b,c,d,e) . (a+b+c+d+e) - 3\sum_{a,b,c,d,e}{abc}$
Since,
$(a+b+c+d+e) = 0$ $$\sum{a^3} = (3×3 -6)\sum{abc} = 3×33 = \color{red}{99}$$
But the answer key shows: $$\frac{\sum{a^3}}{\color{blue}{502}} = 99$$
Where is my mistake?
| OP's result is correct. For verification, Newton's identity $\,p_3=e_1^3-3e_1e_2+3e_3\,$ for the sum of cubes gives the same result directly (where the sums are the symmetric sums over $a,b,c,d,e$):
$$
\sum a^3 = \left(\sum a\right)^3 - 3\,\left(\sum a\right)\left(\sum ab\right) + 3 \left(\sum abc\right) = 0 - 3 \cdot 0 \cdot \left(\sum ab\right) + 3 \cdot 33 = 99
$$
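This identity is easy to spot-check numerically (my own sketch): force $e_1 = 0$ on random integer tuples and compare the sum of cubes with $3e_3$.

```python
from itertools import combinations
from random import randint, seed

seed(1)
for _ in range(200):
    vals = [randint(-50, 50) for _ in range(4)]
    vals.append(-sum(vals))          # force e1 = a+b+c+d+e = 0
    e3 = sum(x * y * z for x, y, z in combinations(vals, 3))
    p3 = sum(x**3 for x in vals)
    assert p3 == 3 * e3              # p3 = e1^3 - 3 e1 e2 + 3 e3, e1 = 0

answer = 3 * 33                      # with e3 = 33 as given
```

With $e_3 = 33$ this gives $\sum a^3 = 99$, confirming the OP's value.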
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4372807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
divisibility problem and gcd Let $a$ and $b$ be positive integers. Prove that if the number $100ab -1$ divides the number $(100a^2 -1)^2$, then it also divides the number $(100b^2 -1)^2$.
My attempt:
Let's notice that \begin{split} b^2(100a^2-1)^2 -a^2(100b^2-1)^2 & =(100a^2b-b)^2-(100ab^2-a)^2\\
& =(100a^2b-b-100ab^2+a)(100a^2b-b+100ab^2-a)\\
& =(100ab(a-b)+(a-b))(100ab(a+b)-(a+b))\\
& =(a-b)(100ab+1)(a+b)(100ab-1).\end{split}
This means that $100ab-1 \mid a^2(100b^2-1)^2$, so if we prove that $\gcd(100ab-1,a^2)=1$ the proof is completed. Now I know that it should be trivial to show that these numbers are relatively prime but somehow I have no idea how to do it.
Also, I am interested in whether there is a way to solve this problem by using modular arithmetic.
| We show how modular arithmetic allows us to view it as a special case of polynomial reversal, i.e. that $f(a)=0\Rightarrow \tilde f(a^{-1})=0,\,$ where $\tilde f$ denotes the reverse (reciprocal) polynomial.
Here mod arithmetic works nicely: $\bmod 100ab-1\,$ we have $\,100ab\equiv 1\,$ so $\,\color{#c00}{a \equiv 1/(100b)}.\,$ We can substitute this into any polynomial equation $\,f(\color{#c00}a)\equiv 0\,$ then clear denom's to get an equation $\,g(b)\equiv 0\,$ for $\,b.\,$ Here $f(a)\equiv (100\color{#c00}a^2-1)^2\equiv0\,$ so making said $\rm\color{#c00}{substitution}$ for $\,\color{#c00}a\,$ yields
$$0\equiv f(a) \equiv \left[\dfrac{100}{(\color{#c00}{100b})^2}-1\right]^2 \equiv \left[\dfrac{1-100b^2}{\color{#0a0}{100b^2}}\right]^2\!\Rightarrow (1-100b^2)^2\equiv 0\!\!$$
Your proof, viewed modularly, essentially squares the following equation (compare here)
$$\begin{align}
b(\color{#c00}{100a}a-1) &\,\equiv\, a(1-\color{#0a0}{100b}b)\\
\iff\ \ b\ (\color{#c00}{b^{-1}}\ a-1) &\,\equiv\, a(1-\ \color{#0a0}{a^{-1}}\ b),\,\ \text{is true by both} \equiv a-b
\end{align}\qquad$$
So squaring we get $\,(100aa-1)^2\equiv 0\Rightarrow \color{#0a0}{a^2}(1-100bb)^2\equiv 0\Rightarrow (1-100bb)^2\equiv 0\,$ by twice cancelling the unit (invertible) $\,\color{#0a0}a\,$, i.e. by scaling by $\,\color{#0a0}{a^{-1}\, (\equiv 100b)}$.
Remark $ $ We can do all arithmetic fraction-free by scaling $\,f(a)\,$ by $\,(\color{#0a0}{100b^2})^2$ (this is essentially what is done in S. Dolan's answer, but there the key idea $\rm\color{#c00}{(elimination)}$ is not brought to the fore).
Above is a slight variation of the following well known result: $ $ if $\,a\,$ is a root of a polynomial $\,f(x)\,$ then $\,a^{-1}\,$ is a root of the reciprocal (reverse) polynomial $\,x^{d}f(1/x),\,$ $\, d := \deg f,\,$ as above.
Thus by using modular arithmetic we can express the problem using equations (congruences) and this allows us to use well-known facts on the relationship between an equation for $\,a\,$ and one for its inverse $\,a^{-1}.\,$ This relationship would be obfuscated if we used only divisibility language (vs. congruence equations).
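The factorization that drives both arguments can be spot-checked numerically (my own sketch), along with the statement itself on all small pairs where the hypothesis holds (for instance, every pair with $a=b$):

```python
from random import randint, seed

# Check b^2(100a^2-1)^2 - a^2(100b^2-1)^2 = (a-b)(a+b)(100ab+1)(100ab-1)
seed(0)
for _ in range(200):
    a, b = randint(1, 50), randint(1, 50)
    lhs = b**2 * (100*a**2 - 1)**2 - a**2 * (100*b**2 - 1)**2
    rhs = (a - b) * (a + b) * (100*a*b + 1) * (100*a*b - 1)
    assert lhs == rhs

# Check the statement on all pairs (a, b) <= 30 where the hypothesis holds.
ok, found = True, 0
for a in range(1, 31):
    for b in range(1, 31):
        m = 100*a*b - 1
        if (100*a**2 - 1)**2 % m == 0:
            found += 1
            ok = ok and (100*b**2 - 1)**2 % m == 0
```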
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4373464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Pushforward of some line bundles along blow-up Let $X$ be a smooth projective variety and $b:Y\to X$ be the blow up of $X$ along a smooth subvariety $Z\subset X$. I'm trying to compute the complex $Rb_*\mathcal{O}_Y(K_Y+E)$, where $E$ is the exceptional divisor and $K_Y$ is the canonical divisor of $Y$.
I know from Kollár's theorem that $R^ib_*\mathcal{O}_Y(K_Y)=0$ for every $i>0$, and $R^0b_*\mathcal{O}_Y(K_Y)$ is torsion free. Does any similar result hold for $Rb_*\mathcal{O}_Y(K_Y+E)$?
| By Grothendieck duality
$$
Rb_*\mathcal{O}_Y(K_Y + E) \cong
Rb_*R\mathcal{H}\mathit{om}(\mathcal{O}_Y(-E),\omega_Y) \cong
Rb_*R\mathcal{H}\mathit{om}(\mathcal{O}_Y(-E),b^!\omega_X) \cong
R\mathcal{H}\mathit{om}(Rb_*(\mathcal{O}_Y(-E)),\omega_X) \cong
I_Z^\vee \otimes \omega_X,
$$
where $(-)^\vee$ stands for the derived dual.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4373620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show for all $a,b,c\in\mathbb{N}$, $\gcd(a,b)\cdot \gcd(b,c) \mid b\cdot \gcd(a,c)$ Really struggling with this one. I've tried using Bezout's Identity to solve this, but algebraic manipulation with the definition of divisibility gets me nowhere. I also considered that $\gcd(a,b)\mid b$ and $\gcd(b,c)\mid b$ implies that $\gcd(a,b)\gcd(b,c)\mid b^2$, but that didn't seem to help either since there's no good way to get rid of the square without compromising the proof. Any advice/hints for this one?
| Proof $1$: $\,\ \begin{align}&\color{#c00}{(a,b)\mid a,b}\\ &\color{#0a0}{(b,c)\:\!\mid b,c}\end{align}\:\!\Rightarrow\: \overbrace{\color{#c00}{(a,b)}\color{#0a0}{(b,c)}}^{\textstyle d}\mid \color{#c00}a\color{#0a0}b,\color{#c00}b\color{#0a0}c\,\Rightarrow\, d\mid(ab,bc)=b(a,c)\ \ $
$\!\!\begin{align} {\rm Proof}\ 2\!:\ (a,b)(b,c) &=(\color{#c00}{ab},\color{#0a0}{bc},\ bb,ac) \mid \color{#c00}{ab},\color{#0a0}{bc}\, \ldots \ \text{ as above.}\\
{\bf or}\ &= (b(a,c),bb,ac)\mid b(a,c)
\end{align}$
Proof $3\!:\ (a,b)(b,c) = (\color{#c00}{ab},bb,ac,\color{#0a0}{bc}) \!\supseteq\! (\color{#c00}{ab},\color{#0a0}{bc}) = b(a,c)\ \ $ [prior using principal ideals]
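A quick empirical check of the statement (my own addition, using Python's `math.gcd`):

```python
from math import gcd
from random import randint, seed

seed(0)
for _ in range(1000):
    a, b, c = randint(1, 10**6), randint(1, 10**6), randint(1, 10**6)
    # gcd(a,b) * gcd(b,c) divides b * gcd(a,c)
    assert (b * gcd(a, c)) % (gcd(a, b) * gcd(b, c)) == 0
checked = True
```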
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4373964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to find the value of this infinite sum rigorously? the problem is about this sum:
$$\sum_{n=0}^{\infty}(-1)^n(\sqrt{n+1}-\sqrt{n})
$$
This is a convergent series, converging to a value around 0.76. My approach is quite simple:
$$\sum_{n=0}^{\infty}(-1)^n(\sqrt{n+1}-\sqrt{n})=2\sum_{k=1}^{\infty}(-1)^{k+1}\sqrt{k}
=2\left(\sum_{k=1}^{\infty}\sqrt{k}-2\sum_{k=1}^{\infty}\sqrt{2k}\right)\\=2(1-2\sqrt{2})\cdot\zeta(-1/2)
$$
The actual value of the answer seems right when checked with WolframAlpha, but a series that does not converge appears in my approach. Is there a more rigorous way to get the answer?
| Compute the partial sums
$$S_p=\sum_{n=0}^{p}(-1)^n \sqrt{n+1}-\sum_{n=0}^{p}(-1)^n\sqrt{n} $$ in terms of the generalized Riemann zeta function to obtain
$$S_p=2 \left(1-2 \sqrt{2}\right) \zeta \left(-\frac{1}{2}\right)+$$ $$(-1)^{p+1} \sqrt{2}\Bigg[\zeta \left(-\frac{1}{2},\frac{p+1}{2}\right)-2 \zeta
\left(-\frac{1}{2},\frac{p+2}{2}\right)+\zeta
\left(-\frac{1}{2},\frac{p+3}{2}\right)\Bigg]$$
The term in brackets is quite small : $\sim -0.169557$ for $p=0$, $\sim -0.017590$ for $p=100$
You could be interested in the third part of this paper.
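For a purely numerical confirmation (my own sketch, with $\zeta(-1/2)\approx -0.2078862249773545$ hard-coded, since it is not available in the Python standard library): averaging two consecutive partial sums of the alternating series converges quickly to the claimed value $2(1-2\sqrt{2})\,\zeta(-1/2)\approx 0.7602$.

```python
from math import sqrt

N = 200_000
s, sign, last_term = 0.0, 1.0, 0.0
for n in range(N):
    last_term = sign * (sqrt(n + 1) - sqrt(n))
    s += last_term
    sign = -sign

# For an alternating series with decreasing terms, the mean of two
# consecutive partial sums is a much better estimate of the limit.
estimate = s - last_term / 2.0

zeta_half = -0.2078862249773545      # tabulated value of zeta(-1/2)
closed_form = 2 * (1 - 2 * sqrt(2)) * zeta_half
```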
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4374079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
minimizing this expression: $\frac{(x+y)(c+d)-(z+w)(a+b)}{(a+b)(c+d)}$ Jody and Shelli each receive a box of buttons on Saturday and on Sunday. Each button is either red or white, and the number of buttons in each of the four boxes is from $1$ to $100$, inclusive.
On both days, the percentage of red buttons in Jody's box is greater than the percentage of red buttons in Shelli's box. If $J \%$ of Jody's total number of buttons are red and $S \%$ of Shelli's total number of buttons are red, what is the least possible value of $J-S$? Express your answer to the nearest integer.
Ans. -96 (Source: 2021 MathCounts Target Round, calculators allowed)
I had Jody receiving on Saturday and Sunday respectively $a$ and $b$ buttons of which $x$ and $y$ are red. Shelli on Saturday and Sunday receive respectively $c$ and $d$ buttons, of which $z$ and $w$ are red.
We have $\frac{x}{a} > \frac{z}{c}$ and $\frac{y}{b} > \frac{w}{d}$ and
$J-S=100 \cdot (\frac{x+y}{a+b} - \frac{z+w}{c+d})$. So,
$J-S=100 \cdot \frac{(x+y)(c+d)-(z+w)(a+b)}{(a+b)(c+d)}$
I'm not sure how to minimize this expression.
| (A "guessing from extreme scenarios" approach for competitions that only need the final answer. It's most likely correct, but isn't proven.)
For the first day, Jody has 1 red button (100%), Shelli has 99 red buttons and 1 white button (99%).
For the second day, Jody has 1 red button and 99 white buttons (1%), Shelli has 1 white button (0%).
Then, $ J = \frac{ 1 + 1 } { 1 + 100 } = 2/101$ and $ S = \frac{99+0 } { 100 + 1 } = 99 / 101.$
This gives us $2 / 101 - 99/101 \approx -96\%$.
Now, prove it.
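The extreme configuration can be checked exactly with rational arithmetic (my own sketch using Python's `fractions`):

```python
from fractions import Fraction

# Jody:   day 1: 1 red of 1 button;     day 2: 1 red of 100 buttons.
# Shelli: day 1: 99 red of 100 buttons; day 2: 0 red of 1 button.
assert Fraction(1, 1) > Fraction(99, 100)   # day-1 hypothesis holds
assert Fraction(1, 100) > Fraction(0, 1)    # day-2 hypothesis holds

J = Fraction(1 + 1, 1 + 100)     # overall red fraction for Jody:   2/101
S = Fraction(99 + 0, 100 + 1)    # overall red fraction for Shelli: 99/101

diff = 100 * (J - S)             # J - S in percentage points: -9700/101
nearest = round(diff)
```

The exact difference is $-9700/101 \approx -96.04$, which rounds to $-96$.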
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4374201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Resnick -Probability Path - Exercise 7.40 In this problem, Resnick states that if $\{X_n,n \geq 1\}$ is an independent sequence of random variables and $S_n := \sum_{i=1}^{n} X_n$, then $\frac{S_n}{n} \overset{a.s}{\to} 0$ if and only if $\frac{S_n}{n} \overset{P}{\to} 0$ and $\frac{S_{2^n}}{2^n} \overset{a.s}{\to} 0$
My question is the following: Why did Resnick include the assumption of independence?
| I do not believe this is a necessary assumption. The non-immediate direction is to show that $\frac{S_n}{n} \overset{P}{\to} 0$ and $\frac{S_{2^n}}{2^n} \overset{a.s}{\to} 0$ imply $\frac{S_n}{n} \overset{a.s}{\to} 0$. But to do this, we do not need independence. Since $\frac{S_n}{n}$ is Cauchy i.p. -- by virtue of $\frac{S_n}{n}$ converging in probability -- it is also Cauchy a.s. (since $\psi_N = \sup_{m,n \geq N} |\frac{S_n}{n} - \frac{S_m}{m}|$ is a monotonically decreasing sequence). And $\frac{S_n}{n}$ being Cauchy a.s. + $\frac{S_{2^n}}{2^n} \overset{a.s}{\to} 0$ is all we need to conduct a straightforward argument to conclude $\frac{S_n}{n} \overset{a.s}{\to} 0$.
The other direction is an immediate consequence of the definition of almost sure convergence (and does not rely on the independence assumption)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4374329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $\lim_{(x,y)\to(0,0)}\dfrac{x^2+y^2}{x^4+y^4}$
Evaluate the limit: $\displaystyle\lim_{(x,y)\to(0,0)}\dfrac{x^2+y^2}{x^4+y^4}$
To solve this, I converted it to polar coordinates and got: $\displaystyle\lim _{r\to0}\left(\frac{1}{r^2(\sin^4\theta+\cos^4\theta)}\right)=\infty$
But after putting this on WolframAlpha, it tells me that this limit does not exist.
Who is wrong here?
| We can also see that :
$$\dfrac{x^2 + y^2}{x^4 + y^4} \geq \dfrac{x^2 + y^2}{(x^2 + y^2)^2} = \dfrac{1}{x^2 + y^2} \underset{(x, y) \to (0, 0)}{\to} +\infty$$
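A quick numerical illustration of this lower bound (my own addition): along any direction through the origin the values dominate $1/(x^2+y^2) = 1/r^2$ and blow up as $r \to 0$.

```python
from math import cos, sin, pi

def f(x, y):
    return (x*x + y*y) / (x**4 + y**4)

grows = True
for t in (0.0, pi/7, pi/4, 1.0, 2.5):           # a few directions
    radii = (1e-1, 1e-2, 1e-3)
    vals = [f(r * cos(t), r * sin(t)) for r in radii]
    grows = grows and vals[0] < vals[1] < vals[2]          # blows up
    # dominates the bound 1/r^2 (0.999 factor absorbs rounding)
    grows = grows and all(v > 0.999 / (r * r) for v, r in zip(vals, radii))
```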
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4374717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Injectivity of Fourier series in a horizontal strip I'm trying to prove (or give a counter-example, which I couldn't find) the following statement:
Let $g : \mathbb{R} \to \mathbb{C}$ be such that $g(x)=g(x+2 \pi)$ and $g(x)=\sum_{n} c_{n}e^{inx}$, then for points $x, y$ in the horizontal strip $|x-y|<2 \pi$, $g$ is injective.
My reasoning went like this:
Let $g(x)=g(y)$, for $x$, $y$ such that $|x-y|<2 \pi$.
Since $g$ can be writen in complex Fourier series, by Cantor's theorem, it follows that $c_ne^{inx}=c_ne^{iny}$, for at least one $n$. But this implies, since $|x-y|<2 \pi$, that $x=y$. Proving the proposition.
That was what I thought, if someone can help with a counter-example or some comments on the proof, I thank you a lot.
Edited: I'll try to be clearer.
I'm trying to prove (or give a counter-example) to the following statement:
Let $g : \mathbb{R} \to \mathbb{C}$ be such that $g(x)=g(x+2 \pi)$ and $g(x)=\sum_{n} c_{n}e^{inx}$, then there exist points $x, y$ with $|x-y|<2 \pi$ in such a way that $g(x)=g(y)$ implies $x=y$, so for the set of these points $g$ is injective.
What I thought was to let $g(x)=g(y)$, for $x$, $y$ such that $|x-y|<2 \pi$.
Then, by the uniqueness of Fourier series, it follows that $c_ne^{inx}=c_ne^{iny}$, for at least one $n$, (and here is my doubt, because I don't know what is meant by a Fourier series to be the same, I just imagine that if it has the same Fourier coefficients, then at least one term in the expansion must be different for different points, since is the same function.). But this implies, since $|x-y|<2 \frac{\pi}{|n|}<2 \pi$, that $x=y$. Proving the proposition.
My main doubt is this: if a function has a Fourier expansion, is it true that for two different points $x, y$ in its domain, at least one term in the expansion of $x$ will differ from that of $y$? (Excluding the constant function).
Sorry if the first question was confusing. Thank you a lot!
| Take a Weierstrass function $f(x)=\sum a^n\cos(b^n\pi x), 0<a<1, b=2k+1, ab>1+3\pi/2$
Then $f$ (given as a Fourier series) is continuous and nowhere differentiable, which means there is no interval $(c,d)$ where it is injective, as it would be monotonic there (by continuity) and hence differentiable a.e. there by Lebesgue's theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4374867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to find the point of the line of intersection of two planes I was able to find the direction vector of the line intersecting both planes, but I'm confused on how the textbook obtained the point $(1, 1, -1)$ as the point on the line. The planes are:
$3x + y - z = 3$ and $x - 2y + 4z = -5$
Doing the cross product of the normal vectors made me get: $⟨2, -13, -7⟩$ as the direction vector. The textbook answer for the whole equation is:
$$r = ⟨1, 1, -1⟩ + t⟨2, -13, -7⟩$$
Please let me know how exactly I can get point $(1, 1, -1)$ as the point on the line.
| Notice that every point lying on the line of intersection of two planes must always lie on both the intersecting planes i.e. $3x+y-z=3$ & $x-2y+4z=-5$. We can rewrite them as follows
$$z=3x+y-3\tag 1$$
$$z=\frac{-5-x+2y}{4}\tag2$$
Solving (1) & (2) by equating, one should get
$$13x+2y=7$$
$$y=\frac{7-13x}{2}\tag3$$
substituting value of $y$ from (3) into (1),
$$z=\frac{1-7x}{2}\tag 4$$
Thus we can take any arbitrary value to $x$ and find the corresponding values of $y$ & $z$ from (3) & (4) respectively.
Therefore there are infinitely many points lying on the line of intersection.
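This can be verified exactly with rational arithmetic (my own sketch): the direction is the cross product of the normals, and every point produced by (3) and (4) satisfies both plane equations.

```python
from fractions import Fraction as F

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

n1, n2 = (3, 1, -1), (1, -2, 4)          # plane normals
assert cross(n1, n2) == (2, -13, -7)     # direction of the line

# Sample points from (3) and (4): y = (7 - 13x)/2, z = (1 - 7x)/2.
for x in (F(0), F(1), F(-3), F(1, 2)):
    y = (7 - 13 * x) / 2
    z = (1 - 7 * x) / 2
    assert 3*x + y - z == 3              # on the first plane
    assert x - 2*y + 4*z == -5           # on the second plane
ok = True
```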
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Prove that the integral $\int_1^\infty f(x)\, dx$ and series $\sum f(n)$ both converge or both diverge? If f is monotonic decreasing for all $x \ge 1$ and if $\lim \limits_{x \to +\infty}f(x) =0 $,
Prove that the integral $\int_1^\infty f(x)\,dx$ and series $\sum f(n)$ both converge or both diverge?
I tried this exercise (10.24 of Tom Apostol's Calculus) like this:
Let $S_n=\sum f(n)$ and $T_n=\int_1^nf(x)\,dx$, since f is monotonic decreasing we can get
$$
S_n-S_1\le T_n \le \int_1^a f(t)\,dt \le T_{n+1} \le S_n, \quad \text{where } a \in [n,n+1].
$$
As $a \to +\infty$, $n \to +\infty$, so $\int_1^{\infty} f(x)\,dx$ has the same convergence or divergence as $T_n$, which in turn has the same as $S_n$. So we have proved it.
I am not sure whether what I have tried is correct. Any help will be appreciated.
| Let $$F(x)=\int_{1}^{x}f(t)dt,\forall\ x\geq\ 1.$$When $n\leq x\leq n+1$ we have $$a_{n+1}=f(n+1)\leq f(x)\leq f(n)=a_{n}$$ since $f$ is monotonic decreasing.
Thus we have $$a_{n+1}\leq\int_{n}^{n+1}f(t)dt\leq a_{n}$$which means$$S_{n}\leq a_{1}+F(n),F(n)\leq S_{n-1},$$where $S_{n}=\sum\limits_{k=1}^{n}a_{k}.$
So we can conclude that the integral $\int_{1}^{\infty}f(x)dx$ and series $\sum\limits_{n=1}^{\infty}a_{n}$ both converge or both diverge since $S_{n}$ and $F(n)$ are both monotonic increasing.
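A numerical illustration of the bracketing $a_{n+1}\le\int_n^{n+1}f\le a_n$ (my own example, using the decreasing function $f(x)=1/x^2$, for which $F(x)=\int_1^x f(t)\,dt = 1-1/x$):

```python
# Check a_{n+1} <= ∫_n^{n+1} f <= a_n, hence S_n <= a_1 + F(n) and F(n) <= S_{n-1},
# for the sample decreasing function f(x) = 1/x^2.
f = lambda x: 1 / x**2
F = lambda x: 1 - 1 / x          # F(x) = ∫_1^x f(t) dt

N = 50
S = [0.0]
for k in range(1, N + 1):
    S.append(S[-1] + f(k))       # partial sums S_n = a_1 + ... + a_n

for n in range(1, N):
    integral = F(n + 1) - F(n)   # ∫_n^{n+1} f(t) dt
    assert f(n + 1) <= integral <= f(n)

for n in range(2, N + 1):
    assert S[n] <= f(1) + F(n) and F(n) <= S[n - 1]
print("bounds hold for f(x) = 1/x^2 up to n =", N)
```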
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is the difference between $\forall$ and $\implies$ Hellow, what is the difference? between $$\forall{a}\in{A}:\ldots$$ and $${a}\in{A}\Rightarrow{}\ldots$$
| We can write:$$\forall a\in A [P(a)]$$This in order to state that every element of set $A$ has property $P$.
Another way of expressing this is:$$\forall a[a\in A\implies P(a)]$$
In words: for every $a$ it is true that it has property $P$ if $a$ happens to be an element $A$.
Sometimes we just leave out the quantifier and abbreviate this as:$$a\in A\implies P(a)$$This with, in the back of our mind, the knowledge that we think of all $a$.
I hope this makes things more clear for you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Do test functions have smooth extensions? Let $\Omega\subset\mathbb R^d$ be open. A smooth function $\phi\colon \Omega\to\mathbb R$ is called a test function on $\Omega$ if its support is compact and inside $\Omega$. I expect that the function
\begin{align}
\Phi\colon\mathbb R^d&\to\mathbb R\\
x&\mapsto\begin{cases}\phi(x)&\text{if }x\in\Omega\\
0&\text{else}\end{cases}
\end{align}
is a test function on $\mathbb R^d$, but there is one issue. We need to show that $$U:=\{x\in\mathbb R^d:\Phi\text{ is smooth in }x\}=\mathbb R^d$$ but it is not clear to me why $\partial\Omega\subset U$.
My attempt
My guess is that each $x\in\partial\Omega$ has a neighbourhood where $\Phi$ is zero, because if this is not the case for some $x\in\partial\Omega$, then this $x$ is inside the support of $\Phi$, but by the above definition the support of $\phi$ - which equals the support of $\Phi$ - is inside $\Omega$. However, I don't know how to make this rigorous, since I have not attended a topology lecture. Also, my guess might be wrong. So your help would be appreciated.
| Thanks to the hint given in the comments, I was able to complete the proof. We can show that for each $x\in\partial\Omega$ there is an $r>0$ s.t. $\Phi= 0$ on the open ball $B(x,r)=\{y\in\mathbb R^d:|x-y|<r\}$. In my question I suggested proving the weak existence of such an $r$, i.e. $\lnot\forall_{r>0}\lnot\forall_{y\in B(x,r)}\Phi(y)=0$, but we can actually give a constructive proof.
Proof:
Let $K$ be the support of $\phi$ and consider some $x\in\partial\Omega$. Since the function $K\ni y\mapsto|x−y|$ is continuous and defined on a compact set, there is some $a\in K$ s.t. $|x−a|≤|x−y|$ for all $y\in K$. Of course, $0<|x−a|=:r$ since $\partial\Omega$ and $\Omega$ are disjoint. Thus, $K\cap B(x,r)=\emptyset$ and $\Phi=0$ on $B(x,r)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does a Bratelli diagram have to label the dimension of its nodes? I'm reading about Bratelli diagrams, and am seeing what look to be two different definitions. When converting a direct sequence of finite-dimensional $C^*$-algebras $A_n$ into a Bratelli diagram, some authors will label each vertex with the "dimension" $d$ of the factor $M_d(\mathbb{C}) \subseteq A_n$ it corresponds to, while others won't. So, for example, if I had the sequences $M_1(\mathbb{C}) \hookrightarrow M_2(\mathbb{C}) \hookrightarrow M_3(\mathbb{C}) \hookrightarrow \cdots$ and $M_2(\mathbb{C}) \hookrightarrow M_4(\mathbb{C}) \hookrightarrow M_6(\mathbb{C}) \hookrightarrow \cdots$, and I wanted to make their Bratelli diagrams, then for the first convention, I would have the diagrams
$\require{AMScd}$
\begin{CD}
\stackrel{1}{\cdot} @>>> \stackrel{2}{\cdot} @>>> \stackrel{3}{\cdot} @>>> \cdots ,
\end{CD}
\begin{CD}
\stackrel{2}{\cdot} @>>> \stackrel{4}{\cdot} @>>> \stackrel{6}{\cdot} @>>> \cdots ,
\end{CD}
respectively, but for the second convention, I'd just have the diagram
\begin{CD}
\cdot @>>> \cdot @>>> \cdot @>>> \cdots
\end{CD}
for both. But it's not clear to me how I could look at the latter and recover the direct sequence, because it just tells me how many factors I have at each step and how many copies of them I can fit into the factors of the next step. Is the idea that given this "dimensionless" Bratteli diagram, I could just choose the sizes of the factors of $A_n$ to be whatever I wanted as long as they were large enough to fit however many copies of the previous step's factors, and I would still get the same limit $C^*$-algebra? If so, is there any intuition for why we have this degree of freedom?
Thanks for your help!
| I cannot comment much because I have not seen what you mention. In general, you need to write the sizes. Otherwise
$\require{AMScd}$
\begin{CD}
\cdot @>>> \cdot @>>> \cdot @>>> \cdots
\end{CD}
could mean
\begin{CD}
\cdot @>>> \stackrel1\cdot @>>> \stackrel1\cdot @>>> \stackrel1\cdot @>>>\cdots
\end{CD}
so isomorphic to $\mathbb C$, while
\begin{CD}
\cdot @>>> \stackrel1\cdot @>>> \stackrel2\cdot @>>> \stackrel3\cdot @>>>\cdots
\end{CD}
would give you $K(H)$. I suppose it could be that in some cases the context makes the sizes obvious?
There is one case where the sizes are deduced, which is the case where the embeddings are prescribed to be unital and the initial algebra is $\mathbb C$. In such a case the dimension is precisely the sum of the origins of the arrows. For instance, such a graph
would automatically represent this:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Counterexample separation theorem I am trying to understand, why the Separation Theorem from Hahn-Banach needs one of the sets to be open.
The theorem states:
$$\text{Re}\langle x',x_1 \rangle < \alpha \leq\text{Re}\langle x', x_{2} \rangle$$
for $x_1 \in C_1, x_2 \in C_2$ and $C_1,C_2$ convex, disjoint sets with $C_1$ open.
Now I am looking at the example in $c_{00}$ referenced here:
https://mathoverflow.net/questions/37551/a-counter-example-to-hahn-banach-separation-theorem-of-convex-sets
My trouble lies in understanding why $\ell(x) = 0$ follows from $\pm \ell(x) + \delta \ell(y) \geq 0$.
| Since the inequality
$$
\pm l(x)\leq \delta\,l(y)
$$
holds for all $\delta>0$, you get that $\pm l(x)\leq0$. This forces $l(x)=0$.
The above is argued in the real case. In the complex case you would get $\operatorname{Re} l(x)=0$. As this can be done for the element $ix$, we also get $$0=\operatorname{Re}l(ix)=-\operatorname{Im} l(x).$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $\pi =3\arccos(\frac{5}{\sqrt{28}}) + 3\arctan(\frac{\sqrt{3}}{2})$
Question:
Show that, $$\pi =3\arccos(\frac{5}{\sqrt{28}}) +
3\arctan(\frac{\sqrt{3}}{2}) ~~~~~~ (*)$$
My proof method for this question has received mixed responses. Some people say it's fine, others say that it is a verification, instead of a proof.
Proof: $$\pi =3\arccos(\frac{5}{\sqrt{28}}) + 3\arctan(\frac{\sqrt{3}}{2})\iff \frac{\pi}{3} = \arccos(\frac{5}{\sqrt{28}}) + \arctan(\frac{\sqrt{3}}{2})
$$$$\iff \frac{\pi}{3} = \arctan(\frac{\sqrt{3}}{5})+\arctan(\frac{\sqrt{3}}{2})$$
As $\arccos(\frac{5}{\sqrt{28}})=\arctan(\frac{\sqrt{3}}{5})$
The plan now is to apply the tangent function to both sides, and show that LHS=RHS using the tangent addition formula to expand it out.
I.e. $$\tan\Big(\frac{\pi}{3}\Big) = \tan\bigg(\arctan\Big(\frac{\sqrt{3}}{5}\Big)+\arctan\Big(\frac{\sqrt{3}}{2}\Big)\bigg)$$
$$\iff \sqrt{3} = \frac{\frac{\sqrt{3}}{5}+\frac{\sqrt{3}}{2}}{1-\frac{\sqrt{3}}{5} \frac{\sqrt{3}}{2}}$$
and the RHS will reduce down to $\sqrt{3}$. Hence LHS=RHS.
Some things that I've noticed about this method of proof:
*
*It could be used to (incorrectly) prove that $$\frac{\pi}{3}+\pi = \arccos(\frac{5}{\sqrt{28}}) + \arctan(\frac{\sqrt{3}}{2})$$
So because this method of proof can be used to prove things true, that are obviously false, that means it can't be used?
*
*Instead of proving (*), wouldn't this method of proof actually prove that? $$\arccos(\frac{5}{\sqrt{28}})+\arctan(\frac{\sqrt{3}}{2})=\frac{\pi}{3} + \pi k$$
for some $k\in \mathbb{Z}$ which we must find. In this case being when $k=0$.
| You have$$3\arccos\left(\frac5{\sqrt{28}}\right)\in[0,3],$$since $\frac{\sqrt3}2<\frac5{\sqrt{28}}<1$, and therefore$$3>3\cdot\frac\pi6>3\arccos\left(\frac5{\sqrt{28}}\right)>0.$$You also have$$0\leqslant3\arctan\left(\frac{\sqrt3}2\right)<3\arctan\left(\sqrt3\right)=\pi,$$and therefore$$\arccos\left(\frac5{\sqrt{28}}\right)+\arctan\left(\frac{\sqrt3}2\right)\in\left[0,\frac\pi3+1\right].$$What you did shows that$$\tan\left(\arccos\left(\frac5{\sqrt{28}}\right)+\arctan\left(\frac{\sqrt3}2\right)\right)=\tan\left(\frac\pi3\right).$$But the only number in $\left[0,\frac\pi3+1\right]$ whose tangent is $\tan\left(\frac\pi3\right)$ is $\frac\pi3$. So, your proof is indeed correct.
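For what it's worth, the identity in the question also checks out numerically (a small sanity check of my own):

```python
import math

# pi = 3*arccos(5/sqrt(28)) + 3*arctan(sqrt(3)/2), checked in floating point
lhs = 3 * math.acos(5 / math.sqrt(28)) + 3 * math.atan(math.sqrt(3) / 2)
print(lhs, math.pi)   # both ~3.141592653589793
assert abs(lhs - math.pi) < 1e-12

# The auxiliary rewriting arccos(5/sqrt(28)) = arctan(sqrt(3)/5) also holds:
assert abs(math.acos(5 / math.sqrt(28)) - math.atan(math.sqrt(3) / 5)) < 1e-12
```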
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4375994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
A definite integral in the Bessel function $J_{-1/2}(s)$ from Stein-Shakarchi I'm working through the asymptotics section of Stein-Shakarchi's Complex Analysis, where the Bessel function is defined, for $\nu > - \frac{1}{2}$ as
$$ J_{\nu}(s) = \frac{(s/2)^\nu}{\Gamma(\nu + 1/2)\Gamma(1/2)}\int_{-1}^1e^{isx}(1-x^2)^{\nu-1/2}\text{d}x $$
They then state that if $J_{-1/2}(s)$ is defined as $\lim_{\nu\to -1/2}J_\nu(s)$ then it equals $\sqrt{\frac{2}{\pi s}}\cos(s).$ I can see where the prefactor comes from but I don't understand how one gets $\cos(s)$ from this limit. Any help would be appreciated.
| Consider the integral $\varphi_p(t):=\int^1_{-1}(1-x^2)^pe^{-ixt}\,dx$. Expanding the exponential in $\varphi_p$ as a power series yields
\begin{align*}
\varphi_p(t)&=\int^1_{-1}(1-x^2)^p\sum_{n\geq0}\frac{(-ixt)^n}{n!}\,dx =\sum_{n\geq0}\frac{(-it)^n}{n!}\int^1_{-1}(1-x^2)^px^n\,dx\\
&=2\sum_{k\geq0}\frac{(-it)^{2k}}{(2k)!}\int^1_0(1-x^2)^px^{2k}\,dx\\
&=\sum_{k\geq0}\frac{(-it)^{2k}}{(2k)!}\int^1_0(1-u)^pu^ku^{-1/2}\,du \\
&=\sum_{k\geq0}\frac{(-1)^kt^{2k}}{(2k)!}B(p+1,k+\tfrac12)
\end{align*}
where $B$ is the beta function. Using the identities
\begin{align*}
B(p+1,k+\tfrac12) &= \frac{\Gamma(p+1)\Gamma(k+\tfrac12)}{\Gamma(p+k+\tfrac32)} = \frac{\Gamma(p+1)}{\Gamma(p+k+\tfrac32)}\frac{(2k)!\sqrt{\pi}}{2^{2k}\,k!}
\end{align*}
we obtain
\begin{align*}
\varphi_p(t)&= \int^1_{-1}(1-x^2)^pe^{-ixt}\,dx=\sum_{k\geq0}\frac{(-1)^k\Gamma(p+1)\sqrt{\pi}}{\Gamma(k+ p+ \tfrac32)k!}\Big(\frac{t}{2}\Big)^{2k}
\end{align*}
Then
\begin{align}
J_{p+\tfrac12}(t) = \frac{(t/2)^{p+\tfrac12}}{\Gamma(p+1)\sqrt{\pi}} \varphi_p(t)= \frac{(t/2)^{p+\tfrac12}}{\Gamma(p+1)\sqrt{\pi}} \sum_{k\geq0}\frac{(-1)^k\Gamma(p+1)\sqrt{\pi}}{\Gamma(k+ p+ \tfrac32)k!}\Big(\frac{t}{2}\Big)^{2k}
\end{align}
On the other hand, the trigonometric substitution $x=\sin\theta$ in the integral defining $\varphi_p$ gives
\begin{align*}
\varphi_p(t)=\int^{\pi/2}_{-\pi/2}e^{-it\sin\theta}\cos^{2p+1}\theta\,d\theta=
\int^{\pi/2}_{-\pi/2}e^{it\sin\theta}\cos^{2p+1}\theta\,d\theta=
\int^\pi_0 e^{-it\cos\theta}\sin^{2p+1}\theta\,d\theta
\end{align*}
Putting things together, we have that
Lemma: For $p>-1$
\begin{align}
\int^\pi_0e^{-it\cos\theta}\sin^{2p+1}\theta\,d\theta = \frac{\Gamma(p+1)\sqrt{\pi}}{(t/2)^{p+\tfrac12}}J_{p+\tfrac12}(t)=\int^1_{-1}(1-x^2)^p e^{-xti}\,dx\tag{0}\label{zero}
\end{align}
where $J_m$ is the Bessel function of order $m$.
A simple computation shows that for all $m>-1$,
\begin{align}
\frac{d}{dz}\Big(z^{-m}J_m(z)\Big)&=-z^{-m}J_{m+1}(z)\tag{1}\label{one}\\
\frac{d}{dz}\Big(z^mJ_m(z)\Big) &= z^mJ_{m-1}(z)\tag{2}\label{two}
\end{align}
For $p=0$ in identity \eqref{zero} we obtain
$$ J_{1/2}(t)=\frac{\sqrt{t}}{\sqrt{2\pi}}
\int^1_{-1}e^{-itx}\,dx=\sqrt{\frac{2}{\pi t}}\sin t$$
Using \eqref{two} or ($p=-1$ in the power series above) yields
$$ J_{-1/2}(t)=\sqrt{\frac{2}{\pi t}}\cos t$$
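As a numerical cross-check (my own, for $p=0$ and $t=2$): the two integral forms in identity (0) agree, and both equal $\Gamma(1)\sqrt\pi/(t/2)^{1/2}\,J_{1/2}(t)$ with $J_{1/2}(t)=\sqrt{2/(\pi t)}\sin t$. Everything below uses only the standard library and a plain trapezoid rule.

```python
import cmath, math

def trap(f, a, b, n):
    # composite trapezoid rule for a (complex-valued) integrand
    h = (b - a) / n
    return (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))) * h

t, n = 2.0, 20_000
lhs = trap(lambda th: cmath.exp(-1j * t * math.cos(th)) * math.sin(th), 0.0, math.pi, n)
rhs = trap(lambda x: cmath.exp(-1j * x * t), -1.0, 1.0, n)
assert abs(lhs - rhs) < 1e-6

# both sides equal sqrt(pi)/sqrt(t/2) * J_{1/2}(t) with J_{1/2}(t) = sqrt(2/(pi t)) sin t
j_half = math.sqrt(2 / (math.pi * t)) * math.sin(t)
assert abs(lhs - math.sqrt(math.pi) / math.sqrt(t / 2) * j_half) < 1e-6
print(lhs.real)   # ~0.9093 = 2 sin(2)/2
```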
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4376199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How can I use the master theorem to get an upper bound on this recurrence? I Wrote an algorithm whose running time is described by the recurrence
$$T(n) = 2T(n/2) + \Theta(n\lg n),$$
and I want to determine whether $T(n) = o(n^2)$.
Now by the third case of the "Master Theorem" a recurrence of the form
$$T(n) = 2T(n/2) + f(n)$$
has solution $T(n) = \Theta(f(n))$, if $f(n) = \Omega(n^{1+\varepsilon})$ for some $\varepsilon > 0$.
I can't apply this directly to my recurrence since $n\lg n \neq \Omega(n^{1+\varepsilon})$, but if I can find a function $f(n)$ such that
$$n\lg n = \Omega(f(n))\qquad \text{ and } \qquad f(n) = \Omega(n^{1+\varepsilon}) \qquad\text{ and } \qquad f(n) = o(n^2),$$
then I can just use the master theorem on
$$T(n) = 2T(n/2) + f(n)$$
to get the bound. So is there such an $f(n)$?
| None of the Master Theorem's cases apply to your problem. In order to find the asymptotic running time of your recurrence, you need an extended version described in exercise 4.6-2 of CLRS book (extension of Case 2). Besides the Wikipedia to which Ian already referred, see also here or here for a proof of this case extension.
So you have $a=b=2, k=1$, and you get $T(n)=\Theta(n\lg^2n)=o(n^2)$
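An empirical check (not a proof, my own illustration): memoizing the recurrence $T(n)=2T(n/2)+n\lg n$ and dividing by $n\lg^2 n$ along powers of $2$, the ratio stabilizes, consistent with $T(n)=\Theta(n\lg^2 n)$ and hence $o(n^2)$.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 2 T(n/2) + n lg n, with an arbitrary constant base case
    if n <= 1:
        return 1.0
    return 2 * T(n // 2) + n * math.log2(n)

ratios = [T(2 ** k) / (2 ** k * k ** 2) for k in (10, 15, 20)]
print(ratios)   # roughly [0.56, 0.538, 0.5275], drifting toward 1/2
```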
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4376581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Extreme point of $g(x_1,x_2,x_3)=x_1^2+x_2^2+x_3^2-10x_1x_2x_3$. Let $g: \mathbb R^3 \rightarrow \mathbb R$ such that $g(x_1,x_2,x_3)=x_1^2+x_2^2+x_3^2-10x_1x_2x_3$. I know $(0,0,0)$ is a critical point and want to check whether $(0,0,0)$ is a local minimum or a local maximum.
My attempt:
I tried from A.M., G.M. inequality but nothing conclusive from it as $\frac{x_1^2+x_2^2+x_3^2}{3}\ge(x_1^2x_2^2x_3^2)^\frac{1}{3}$
I also thought $x_1^2+x_2^2+x_3^2-10x_1x_2x_3 \le ||x||^2-10||x||^3$. I don't know how to proceed further
| The 3 variable form of the Hessian can be used to check the nature of a critical point
$$H_{xyz} = \begin{bmatrix}\frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x\partial y} & \frac{\partial^2 f}{\partial x\partial z} \\ \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} & \frac{\partial^2 f}{\partial y \partial z} \\ \frac{\partial^2 f}{\partial x \partial z} & \frac{\partial^2 f}{\partial y \partial z} & \frac{\partial^2 f}{\partial z^2}\end{bmatrix}$$
Check the determinant value at $(0,0,0)$
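Carrying this out (my own sketch): the second partials are $g_{ii}=2$ and $g_{ij}=-10x_k$ for $\{i,j,k\}=\{1,2,3\}$, so the Hessian at $(0,0,0)$ is $2I$, and Sylvester's criterion (all leading principal minors positive) gives a local minimum.

```python
# Hessian of g(x1,x2,x3) = x1^2 + x2^2 + x3^2 - 10 x1 x2 x3
def hessian(x1, x2, x3):
    return [[2, -10 * x3, -10 * x2],
            [-10 * x3, 2, -10 * x1],
            [-10 * x2, -10 * x1, 2]]

def det(m):
    # determinant by cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

H = hessian(0, 0, 0)
minors = [det([row[:k] for row in H[:k]]) for k in (1, 2, 3)]
print(minors)   # [2, 4, 8] -> all positive, so (0,0,0) is a local minimum
```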
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4376754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Assume $X_i$ are i.i.d. from a Poisson$(\lambda)$ distribution for some $\lambda >0$. Show $\sqrt{n}(1/\bar{X}-1/\lambda)\to_D N(0,\sigma_{\lambda}^2)$
Assume $X_1,...,X_n$ are i.i.d. from a Poisson$(\lambda)$ distribution for some $\lambda >0$. Show that $\sqrt{n}(1/\bar{X}-1/\lambda)\to_D N(0,\sigma_{\lambda}^2)$ for some $\sigma_{\lambda}^2$, then compute $\sigma_{\lambda}^2$.
I see this already looks similar to the statement about Central limit theorem, I thought I would solve by letting $Y_i=\frac{1}{X_i}$, however I would only know that $E(Y)\geq \frac{1}{\lambda}$
| Note the CLT tells you
$$\sqrt n (\bar X-\lambda)\rightarrow_d N(0,\lambda).$$
Now use the delta method to find the asymptotic distribution of
$$\sqrt n (g(\bar X)-g(\lambda))$$
taking $g:t\rightarrow 1/t$.
Update: Alternatively, write
$$\sqrt n \left(\frac{1}{\bar X}-\frac{1}{\lambda}\right)=\left(-\frac{1}{\lambda \bar X}\right)\sqrt n (\bar X-\lambda),$$
and the weak law of large numbers tells us $\bar X\rightarrow_p \lambda.$ Now use Slutsky's theorem.
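Carrying out the delta method with $g(t)=1/t$, $g'(\lambda)=-1/\lambda^2$ gives $\sigma_\lambda^2=\lambda\,g'(\lambda)^2=1/\lambda^3$. A small Monte Carlo sketch of my own (with $\lambda=2$, so the prediction is $1/8$; Poisson draws use Knuth's algorithm since the stdlib `random` module has no Poisson sampler):

```python
import random, math

rng = random.Random(0)
lam, n, reps = 2.0, 400, 2000

def poisson(lam):
    # Knuth's multiplicative algorithm; fine for small lambda
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

vals = []
for _ in range(reps):
    xbar = sum(poisson(lam) for _ in range(n)) / n
    vals.append(math.sqrt(n) * (1 / xbar - 1 / lam))

mean = sum(vals) / reps
var = sum((v - mean) ** 2 for v in vals) / reps
print(var)   # should be close to 1/lam^3 = 0.125
```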
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4376959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Asymptotics of Laplace transform for small parameter The context of the question is that I was trying to derive the asymptotics of the modified Bessel function of the second kind:
$$ K_\alpha(z) = \int_0^\infty e^{-z\cosh(t)}\cosh(\alpha t)dt $$
for $z$ small, which Wikipedia gives as
$$ K_\alpha(z) \sim \begin{cases} -\log(z/2) - \gamma, &\alpha = 0, \\
\frac{\Gamma(\alpha)}{2}(2/z)^\alpha, &\alpha > 0. \end{cases} $$
My attempt is to make a change of variable $x = \cosh(t)$ so that $t = \text{arccosh}(x)$ and $dx = \sinh(t)dt$ so that
$$ K_\alpha(z) = \int_1^\infty e^{-zx}\cosh(\alpha\,\text{arccosh}(x))\frac{dx}{\sinh(\text{arccosh}(x))}. $$
If $\alpha = 0$ for simplicity say then $\sinh(\text{arccosh}(x)) = \sqrt{x^2-1}$ so making another substitution $x = y + 1$ we obtain
$$ K_0(z) = e^{-z} \int_0^\infty e^{-zy} \frac{dy}{\sqrt{y(y+2)}} $$
which I recognise as a Laplace transform. How does one proceed from here to obtain (either the $\alpha = 0$ case or the general case) asymptotics for $K_0(z)$ as $z \rightarrow 0^+$? (Is this even the right approach?)
In general is there a way to study the asymptotics of the Laplace transform $\int_0^\infty e^{-zt}f(t)dt$ for $z \rightarrow 0^+$? I'm asking this question since most answers I've found seem to relate to asymptotics as $z \rightarrow \infty$ and I'm also interested in a more general discussion of techniques to study asymptotics of integrals of this form.
| Let's consider the case $\alpha=0$ first.
$$K_0(z) = \int_0^\infty e^{-z\cosh t}dt$$
Making the substitution $x=\cosh t$
$$K_0=\int_1^\infty e^{-zx}\frac{dx}{\sqrt{x^2-1}}=e^{-z}\int_0^\infty e^{-zt}\frac{dt}{\sqrt{t+2}\sqrt t}=2e^{-z}\int_0^\infty e^{-2zx^2}\frac{dx}{\sqrt{x^2+1}}$$
Using $d\big(\ln(t+\sqrt{t^2+1})\big)=\frac{dt}{\sqrt{1+t^2}}$ and integrating by part (also making several substitutions)
$$K_0=2e^{-z}e^{-2zt^2}\ln(t+\sqrt{t^2+1})\Big|_{t=0}^\infty+8ze^{-z}\int_0^\infty e^{-2zt^2}\ln(t+\sqrt{t^2+1})tdt$$
$$=2e^{-z}\int_0^\infty e^{-x}\ln\frac{\sqrt x+\sqrt{x+2z}}{\sqrt{2z}}dx=-e^{-z}\ln(2z)+2e^{-z}\int_0^\infty e^{-x}\ln(\sqrt x+\sqrt{x+2z})dx$$
In the second integral the main contribution comes from the region $x>z$.
Indeed,
$$|\int_0^{2z} e^{-x}\ln(\sqrt x+\sqrt{x+2z})dx|<|\int_0^{2z}\ln(\sqrt {2z})dx|<|\int_0^{2z}\ln(\sqrt {2x})dx|=O(z\ln z)\ll1$$
Therefore, to find the main asymptotics term, we can drop $2z$ in the second integral and (with the required accuracy) integrate from zero to $\infty$. Also, $e^{-z}=1+O(z)\sim 1$. Taking all together
$$K_0\sim-\ln(2z)+2\int_0^\infty e^{-x}\ln(2\sqrt x)dx=-\ln(2z)+2\ln2+\int_0^\infty e^{-x}\ln xdx$$
$$K_0\sim \ln\frac{2}{z}-\gamma$$
For $\alpha> 0$ we are acting in the same way.
$$K_\alpha(z) = \int_0^\infty e^{-z\cosh(t)}\cosh(\alpha t)dt=\int_0^\infty\frac{e^{-z-x}\cosh\Big(\alpha\cosh^{-1}(1+\frac{x}{z})\Big)}{\sqrt x\sqrt{x+2z}}dx$$
Using $\cosh^{-1}(a)=\ln(a+\sqrt{a^2-1})$ (we consider $x>0$), after easy transformations we get
$$K_\alpha(z)=\frac{e^{-z}}{2}\int_0^\infty\frac{dx\,e^{-x}}{\sqrt x\sqrt{x+2z}}\bigg(\Big(1+\frac{x}{z}+\sqrt{\frac{x^2}{z^2}+\frac{2x}{z}}\,\,\Big)^\alpha+\Big(1+\frac{x}{z}+\sqrt{\frac{x^2}{z^2}+\frac{2x}{z}}\,\,\Big)^{-\alpha}\bigg)$$
Again, the main contribution to the integral comes from the region $x>z$
$$\int_0^{2z}\frac{dx\,e^{-x}}{\sqrt x\sqrt{x+2z}}\bigg(\Big(1+\frac{x}{z}+\sqrt{\frac{x^2}{z^2}+\frac{2x}{z}}\,\,\Big)^\alpha+\Big(1+\frac{x}{z}+\sqrt{\frac{x^2}{z^2}+\frac{2x}{z}}\,\,\Big)^{-\alpha}\bigg)$$
$$<\frac{1}{2}\int_0^{2z}\frac{dx}{\sqrt x\sqrt{x+2z}}\Big((3+2\sqrt2)^\alpha+1\Big)<\frac{(3+2\sqrt2)^\alpha+1}{2\sqrt{2z}}\int_0^{2z}\frac{dx}{\sqrt x}=O(1)$$
Therefore, keeping the main terms ($\frac{x}{z}>1$) in the integrand, we get the main asymptotics term:
$$K_\alpha(z)\sim\frac{1}{2}\int_0^\infty\frac{e^{-x}\big(2\frac{x}{z}\big)^\alpha}{x}dx=\frac{1}{2}\Big(\frac{2}{z}\Big)^\alpha\int_0^\infty e^{-x}x^{\alpha-1}dx=\frac{\Gamma(\alpha)}{2}\Big(\frac{2}{z}\Big)^\alpha\gg1$$
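The small-$z$ asymptotics can also be checked numerically (my own illustration, $\alpha=0$): integrating the defining formula $K_0(z)=\int_0^\infty e^{-z\cosh t}\,dt$ by the trapezoid rule and comparing against $\ln(2/z)-\gamma$.

```python
import math

gamma = 0.5772156649015329   # Euler-Mascheroni constant

def K0(z, tmax=40.0, n=200_000):
    # trapezoid rule for ∫_0^tmax exp(-z cosh t) dt; the tail beyond tmax underflows to 0
    h = tmax / n
    s = 0.5 * (math.exp(-z) + math.exp(-z * math.cosh(tmax)))
    for k in range(1, n):
        s += math.exp(-z * math.cosh(k * h))
    return s * h

approx = {z: (K0(z), math.log(2 / z) - gamma) for z in (1e-2, 1e-3)}
print(approx)   # the two numbers in each pair agree better and better as z shrinks
```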
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4377144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Use the isomorphism theorem to determine the group $GL_2(\mathbb{R})/SL_2(\mathbb{R})$. Use the isomorphism theorem to determine the group $GL_2(\mathbb{R})/SL_2(\mathbb{R})$. Here $GL_2(\mathbb{R})$ is the group of $2\times 2$ matrices with determinant not equal to $0$, and $SL_2(\mathbb{R})$ is the group of $2\times 2$ matrices with determinant $1$. In the first part of the problem, I proved that $SL_2(\mathbb{R})$ is a normal subgroup of $GL_2(\mathbb{R})$. Now it wants me to use the isomorphism theorem. I tried using
$$|GL_2(\mathbb{R})/SL_2(\mathbb{R})|=|GL_2(\mathbb{R})|/|SL_2(\mathbb{R})|,$$
but since both groups have infinite order, I don't think I can use this here.
| Define $f:GL_2(\mathbb{R})\to \mathbb{R}^*$ such that
$$\underbrace{f(M)=\det(M)}_{\text{Onto}}$$
Let $M_1, M_2 \in GL_2(\mathbb{R})$; then
$f(M_1 M_2)=\det(M_1 M_2)=\det(M_1)\det(M_2)=f(M_1)f(M_2).$
Hence $f$ is a homomorphism, with $\ker(f)=\{M\in GL_2(\mathbb{R}) : \det(M)=1\}=SL_2(\mathbb{R})$.
By the First Isomorphism Theorem,
$GL_2(\mathbb{R})/SL_2(\mathbb{R})\cong \mathbb{R}^*$
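A tiny numerical illustration (mine) of the fact driving the argument, $\det(AB)=\det(A)\det(B)$ for $2\times2$ matrices:

```python
import random

rng = random.Random(1)
det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mul(a, b):
    # 2x2 matrix product
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

for _ in range(100):
    A = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(2)]
    B = [[rng.uniform(-3, 3) for _ in range(2)] for _ in range(2)]
    assert abs(det(mul(A, B)) - det(A) * det(B)) < 1e-9
print("det(AB) = det(A)det(B) verified on 100 random pairs")
```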
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4377294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Maximum modulus of $f(z)=z^2$ on the disc $D=\{z \in \mathbb C~:~|z-(1+i)|=1\}$ How to find the maximum modulus of $f(z)=z^2$ on the disc $D=\{z \in \mathbb C~:~|z-(1+i)|=1\}$.
Clearly, by the maximum modulus principle, $\max_{z \in D}|f|$ is attained at points of the form $z=(1+i)+e^{i \theta},~\theta \in \mathbb R$. Equivalently, how to find the maximum of $g(x,y)=(x^2-y^2)^2+4x^2y^2$ subject to $(x-1)^2+(y-1)^2=1$?
| Geometrically, what is the point on $D$ that is the furthest from the origin?
Analytically, why do you want $x$ and $y$?
Maximize, as you suggested, $g(\theta)=\vert 1+i+e^{i\theta }\vert^2=(\cos \theta+1)^2+(\sin \theta+1)^2=3+2\cos \theta+2 \sin \theta$, for $\theta \in [0,2\pi]$.
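Numerically (my own check): $3+2\cos\theta+2\sin\theta$ is maximized at $\theta=\pi/4$, giving $\max|f| = 3+2\sqrt2$, the squared distance from the origin to the farthest point of the circle.

```python
import math, cmath

# brute-force maximum of |z^2| over the circle |z - (1+i)| = 1
best = max(abs((1 + 1j + cmath.exp(1j * t)) ** 2)
           for t in (k * 2 * math.pi / 100_000 for k in range(100_000)))
print(best)   # ~5.8284271 = 3 + 2*sqrt(2)
assert abs(best - (3 + 2 * math.sqrt(2))) < 1e-6
```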
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4377639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all pairs (p,q) of real numbers such that whenever $\alpha$ is a root of $x^2 + px+q=0$, $\alpha^2-2$ is also a root of the equation.
Find all pairs (p,q) of real numbers such that whenever $\alpha$ is a root of $x^2 + px+q=0$, $\alpha^2-2$ is also a root of the equation.
My Approach:
I could not find any elegant method that is why I tried applying quadratic formula: the roots will be $\frac{-p±\sqrt{p^2-4q}}{2}$
Now we have 2 cases:
*
*$$\left(\frac{-p+\sqrt{p^2-4q}}{2}\right) =\left (\frac{-p-\sqrt{p^2-4q}}{2}\right)^2 -2$$
*$$\left(\frac{-p-\sqrt{p^2-4q}}{2}\right)=\left(\frac{-p+\sqrt{p^2-4q}}{2}\right)^2-2$$
It's very tedious to solve these. I searched on Wolfram alpha and got these I II
Is there an elegant way to approach this problem?
| For each $\alpha \in \mathbb{R}$ we can define a quadratic with zeros $x_1 = \alpha $ and $x_2 = \alpha^2 - 2$ by using Vieta's Theorem, which states $x^2+px+q$ with $-p = x_1 + x_2$ and $q = x_1 x_2$ has the desired zeros $x_1,x_2$. Hence for a given $\alpha$ the quadratic you are looking for is $x^2 - (\alpha^2 + \alpha - 2)x + (\alpha^3 - 2\alpha)$.
Edit: In order to find solutions s.t. it works independently of the choice of the root, which we call $\alpha$, both roots must satisfy $x_1^2 -2 = x_2$ and $x_2^2-2 = x_1$. How to go on from there is already shown in the answer by markvs.
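A quick check (my own) of the Vieta construction: for any sample $\alpha$, the quadratic $x^2-(\alpha^2+\alpha-2)x+(\alpha^3-2\alpha)$ vanishes at both $x=\alpha$ and $x=\alpha^2-2$.

```python
# Verify that alpha and alpha^2 - 2 are roots of the constructed quadratic
for alpha in (-1.5, 0.0, 0.7, 2.0, 3.25):
    p = -(alpha**2 + alpha - 2)
    q = alpha**3 - 2 * alpha
    f = lambda x: x**2 + p * x + q
    assert abs(f(alpha)) < 1e-9 and abs(f(alpha**2 - 2)) < 1e-9
print("both alpha and alpha^2 - 2 are roots for each sample alpha")
```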
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4377802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why do all eighteen of these rotations have the same angle of rotation? In 3 dimensions, a rotation can be characterized by a roll (an $\alpha$ degree rotation around the $x$ axis), pitch (a $\beta$ degree rotation around the $y$ axis), and a yaw (a $\gamma$ degree rotation around the $z$ axis), say in that order. (Let's call roll, pitch, and yaw the different 'modes' of rotation.) Therefore we can express the total rotation $R = R_\alpha * P_\beta * Y_\gamma$ where each is a rotation matrix that I don't want to write out here. We can bash out what this matrix product evaluates to (it's not that difficult), and we find that its trace is $tr(R) = \cos{\alpha} \cos{\beta} + \cos{\beta} \cos{\gamma} + \cos{\gamma}\cos{\alpha} + \sin{\alpha} \sin{\beta} \sin{\gamma}$.
Alternatively, we can characterize $R$ by an axis-of-rotation which it leaves fixed and the angle that everything else gets rotated by (around the axis-of-rotation). This is Euler's Rotation Theorem. Therefore, if we let $T$ be any rotation (there are many) that brings the axis-of-rotation to the $z$ axis, and then $S_\theta$ a rotation around the $z$ axis by the angle $\theta$ (ie. the matrix $\begin{pmatrix}\cos{\theta} & -\sin{\theta} & 0 \\ \sin{\theta} & \cos{\theta} & 0 \\ 0 & 0 & 1\end{pmatrix}$), we can write $R = T S_\theta T^{-1}$. Taking the trace, we get $tr(R) = tr(T S_\theta T^{-1}) = tr(T^{-1} T S_\theta) = tr(S_\theta) = 1 + 2\cos{\theta}$ where we use the following trace property:
Cyclic Permutation Property of Trace (or just the "Trace Property"). For three square matrices $A,B,C$, we have $tr(ABC) = tr(CAB) = tr(BCA) := x$. (Thus we also have $tr(BAC) = tr(ACB) = tr(CBA) :=y$; but $x \ne y$ in general.) Note that $CAB, BCA$ are cyclic permutations of $ABC$, hence the name of the property.
Therefore, we have $1 + 2\cos\theta = \cos{\alpha} \cos{\beta} + \cos{\beta} \cos{\gamma} + \cos{\gamma}\cos{\alpha} + \sin{\alpha} \sin{\beta} \sin{\gamma}$. It is interesting that this expression is symmetric in the variables $\alpha, \beta, \gamma$.
This means that if we exchange the angles of in any of the 3!=6 ways (while keeping the modes in the same order), we obtain rotations that still have the same angle of rotation (but possibly different axes). Call this the "Angle Permutation Property". The Trace Property above also implies the same with the 3 cyclic permutations of the modes of rotation - call that the "Mode Cyclic Permutation Property". Combining the 6 permutations of the angles ($\alpha, \beta, \gamma$) with the 3 cyclic permutations of the modes, we have found a set of 18 related rotations all of which have the same angle of rotation (but possibly different axes).
Is there a geometric argument for the truth of the Angle Permutation Property? of the Mode Cyclic Permutation Property (for which a geometric argument for the Trace Property suffices)? Are their axes related in a similar way as well? What is the geometric significance of this equivalence of these 18 rotations?
By the way, assuming $\alpha, \beta, \gamma$ are all small, we obtain $\theta^2 \approx \alpha^2 + \beta^2 + \gamma^2 (+\alpha\beta\gamma)$, which perhaps makes sense because the three rotations are in some vague sense "orthogonal" (which sense?) but is perhaps surprising given that the surface of the sphere has only two dimensions, not three!
| If $A$ is a rotation by angle $θ$ about $\vec v$, and $B$ is a rotation, then the conjugate $BAB^{-1}$ is a rotation by the same angle $θ$ about $B\vec v$. This one fact lets us establish both properties without any trigonometry.
Mode Cyclic Permutation Property
Let $C = \left[\begin{smallmatrix}0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0\end{smallmatrix}\right]$ be the rotation by $\frac{2π}{3}$ about $(1, 1, 1)$. Then $R_α P_β Y_γ$ has the same angle as
\begin{gather*}
C(R_αP_βY_γ)C^{-1} = (CR_αC^{-1})(CP_βC^{-1})(CY_γC^{-1}) = P_αY_βR_γ, \\
C^{-1}(R_αP_βY_γ)C = (C^{-1}R_αC)(C^{-1}P_βC)(C^{-1}Y_γC) = Y_αR_βP_γ.
\end{gather*}
Angle Permutation Property
$R_αP_βY_γ$ has the same angle as $R_{-α}(R_αP_βY_γ)R_α = P_βY_γR_α$, which has the same angle as $R_βP_γY_α$ by MCPP.
Also, $R_αP_βY_γ$ has the same angle as
\begin{gather*}
P_π(R_αP_βY_γ)P_π = (P_πR_αP_π)(P_πP_βP_π)(P_πY_γP_π) = R_{-α}P_βY_{-γ}, \\
Y_{π/2}(R_{-α}P_βY_{-γ})Y_{-π/2} = (Y_{π/2}R_{-α}Y_{-π/2})(Y_{π/2}P_βY_{-π/2})(Y_{π/2}Y_{-γ}Y_{-π/2}) = P_{-α}R_{-β}Y_{-γ},
\end{gather*}
which is the inverse of (thus has the same angle as) $Y_γR_βP_α$, which has the same angle as $R_γP_βY_α$ by MCPP.
Compositions of these two angle permutations $(αβγ)$ and $(αγ)$ generate all six.
Rotation axes
We can extract the corresponding rotation axes from these proofs. If $R_αP_βY_γ$ has axis $\vec v$, then
*
*$P_αY_βR_γ$ has axis $C\vec v$,
*$R_βP_γY_α$ has axis $C^{-1}R_{-α}\vec v$,
*$R_γP_βY_α$ has axis $-CY_{π/2}P_π\vec v$.
The axes for all eighteen compositions generated by these will be the six coordinate permutations of $\vec v = (a, b, c)$, the six coordinate permutations of $R_{-α}\vec v = (a, e, f)$, and the six coordinate permutations of $Y_γ\vec v = (d, e, c)$:
*
*$R_αP_βY_γ$ has axis $(a, b, c)$,
$P_αY_βR_γ$ has axis $(c, a, b)$,
$Y_αR_βP_γ$ has axis $(b, c, a)$,
*$R_βP_αY_γ$ has axis $(e, d, c)$,
$P_βY_αR_γ$ has axis $(c, e, d)$,
$Y_βR_αP_γ$ has axis $(d, c, e)$,
*$R_βP_γY_α$ has axis $(e, f, a)$,
$P_βY_γR_α$ has axis $(a, e, f)$,
$Y_βR_γP_α$ has axis $(f, a, e)$,
*$R_γP_βY_α$ has axis $(c, b, a)$,
$P_γY_βR_α$ has axis $(a, c, b)$,
$Y_γR_βP_α$ has axis $(b, a, c)$,
*$R_γP_αY_β$ has axis $(c, d, e)$,
$P_γY_αR_β$ has axis $(e, c, d)$,
$Y_γR_αP_β$ has axis $(d, e, c)$,
*$R_αP_γY_β$ has axis $(a, f, e)$,
$P_αY_γR_β$ has axis $(e, a, f)$,
$Y_αR_γP_β$ has axis $(f, e, a)$.
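A numeric confirmation (my own sketch) that all eighteen compositions share one rotation angle: since the angle is determined by the trace via $tr = 1+2\cos\theta$, it suffices to see that the trace is the same across all angle orders and cyclic mode shifts.

```python
import math
from itertools import permutations

def mat(axis, a):
    # rotation matrices about the x, y, z axes
    c, s = math.cos(a), math.sin(a)
    return {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
            "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
            "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

al, be, ga = 0.3, 1.1, -0.7
traces = []
for modes in (("x", "y", "z"), ("y", "z", "x"), ("z", "x", "y")):  # cyclic mode shifts
    for angs in permutations((al, be, ga)):                        # all angle orders
        R = mul(mul(mat(modes[0], angs[0]), mat(modes[1], angs[1])), mat(modes[2], angs[2]))
        traces.append(R[0][0] + R[1][1] + R[2][2])
print(max(traces) - min(traces))   # ~0 (float roundoff): one common rotation angle
```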
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4377963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Confusion regarding infinite series involving complex number I've been asked to find $\Omega$, such that :
$$\Omega=\frac{\Omega_1}{\Omega_2}=\frac{1+e^{\frac{2i\pi}{3}}+e^{\frac{4i\pi}{3}}+e^{\frac{6i\pi}{3}}+e^{\frac{8i\pi}{3}}+.....}{i+\frac{i^2}{2}+\frac{i^3}{4}+\frac{i^4}{8}+.......}$$
I noticed that in the numerator, $e^{\frac{2i\pi}{3}}=(e^{2\pi i})^{1/3}=1^{1/3}=\omega$ which is the cube root of unity.
Hence, $e^{\frac{4i\pi}{3}}=(e^{2\pi i})^{2/3}=1^{2/3}=\omega^2$.
Similarly, $e^{\frac{6i\pi}{3}}=(e^{2\pi i})^{3/3}=1^{3/3}=\omega^3=1$, and the pattern repeats.
Since, $1+\omega+\omega^2=0$, The entire series $\Omega_1$ must also be equal to $0$, and hence $\Omega=\frac{\Omega_1}{\Omega_2}=0$
However, this is what one of my teachers did in a note :
I don't see why this second method is correct and my method is wrong. I don't think that in the second method we can sum the successive terms of the GP series like that.
Any help would be highly appreciated.
| Consider $z=-1 + 1 + -1 +1 + -1 +1+...$. Well, it alternates, so it doesn't converge. It's also ambiguous.
Does $z=(-1+1)+(-1+1)+...=0+0+0=0$
How about $z=-1+(1+-1)+(1+-1)+...=-1+0+0+0=-1$? Swap adjacent terms pair-wise and the same argument says $z=1$.
Yet another argument says $\frac{1}{1-z}=1+z+z^2+...$ so $\frac{1}{1-(-1)}=\frac{1}{2}=1+-1+1+-1+...$
$\Omega_1=\frac{1}{1-e^{\frac{2\pi i}{3}}}$
$\Omega_2=\frac{i}{1-i/2}$
So $\frac{\Omega_1}{\Omega_2}=\frac{1-i/2}{i(1-e^{2\pi i/3})}=\frac{(1-i/2)(-i)(1-e^{-2\pi i/3})}{(1-e^{2\pi i/3})(1-e^{-2\pi i/3})}$ , rationalizing the denominator.
$e^{2\pi i/3}=-1/2+i(\sqrt{3}/2)$
$1-e^{2\pi i /3}=3/2-i(\sqrt{3}/2)$
$(1-e^{2\pi i /3})(1-e^{-2\pi i /3})=3$
$-(1/2+i)(3/2+i\sqrt{3}/2)=-(3-2\sqrt{3})/4-i(6+\sqrt{3})/4$
So $\frac{\Omega_1}{\Omega_2}=\frac{(2\sqrt{3}-3)-i(6+\sqrt{3})}{12}$ if you use this series approach.
Take that with a grain of salt, though. There are doubtless other ways to associate a value with it. Analytic continuation should probably be mentioned, but the details are vague.
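As a numerical sanity check on the algebra (not on the regularization itself, which is the contentious part), one can compare the geometric-series values against the closed form derived above; a small Python sketch:

```python
import cmath
import math

# Values assigned by the geometric-series formula 1/(1-r); for Omega_1 this is
# a regularization of a divergent series, as discussed above.
omega1 = 1 / (1 - cmath.exp(2j * math.pi / 3))
omega2 = 1j / (1 - 1j / 2)

ratio = omega1 / omega2

# Closed form derived above: ((2*sqrt(3) - 3) - i*(6 + sqrt(3))) / 12
closed_form = complex(2 * math.sqrt(3) - 3, -(6 + math.sqrt(3))) / 12

print(ratio)
print(closed_form)
```

The two printed values agree to machine precision, so at least the complex arithmetic after the regularization step is consistent.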
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4378103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Given real numbers $a,b,c >0$ and $a+b+c=3$. Prove that $\frac{a^2}{b}+\frac{b^2}{c} +\frac{c^2}{a} \ge3 + \frac{4}{3}\max\{(a-b)^2;(b-c)^2;(c-a)^2\}$ My first idea was to put the inequality under the same degree, so I multiplied both sides by $(a+b+c)$; however, it led to an expression that was hard to work with. Can anyone help me with this problem?
| For the current inequality of $ \sum \frac{a^2}{b} \geq 3 + \frac{4}{3} \max ( a-b)^2 $:
*
*Replace 3 with $ a+b+c$. The reason for this choice is the "well-known" inequality: $ \sum \frac{ a^2}{b} \geq \sum a $. We're then asking how much more leeway there is in the difference of these terms.
*We can write the difference as a sum of squares (figure out how to do this before looking at the hint). Thus, we want to show that
$$ \sum \frac{ (a-b)^2 } { b} \geq \frac{4}{3} \max ( a-b)^2.$$
This is just Cauchy Schwarz:
*
*Numerator: $[ \sum | a - b | ] ^2= [2 ( \max (a,b,c) - \min (a, b, c) )]^2 = 4 \max((a-b)^2)$.
*Denominator: $a + b + c = 3$.
Equality holds in the Cauchy Schwarz iff:
*
*For $ a \geq b \geq c$: $\frac{ a-b}{b} = \frac{b-c}{c} = \frac{a-c}{a} , a+b+c = 3$ $\Rightarrow (a, b, c) = (1, 1, 1) , (\frac{3}{2} , \frac{ 3 \sqrt{5} - 3 } { 4} , \frac{ 9 - 3 \sqrt{5}}{ 4} ). $
*For $ a \geq c \geq b$: $\frac{ a-b}{b} = \frac{c-b}{c} = \frac{a-c}{a} , a+b+c =3 $, which leads to the same cases.
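Both the inequality and the nontrivial equality case are easy to sanity-check numerically; a quick Python sketch (the random sampling is only a spot check, not a proof):

```python
import math
import random

def lhs(a, b, c):
    return a * a / b + b * b / c + c * c / a

def rhs(a, b, c):
    return 3 + (4 / 3) * max((a - b) ** 2, (b - c) ** 2, (c - a) ** 2)

# Spot-check the inequality on random positive triples normalized to a+b+c = 3.
random.seed(0)
for _ in range(10_000):
    x, y, z = (random.uniform(0.01, 1.0) for _ in range(3))
    s = 3 / (x + y + z)
    a, b, c = x * s, y * s, z * s
    assert lhs(a, b, c) >= rhs(a, b, c) - 1e-9

# The nontrivial equality case found above.
a = 3 / 2
b = (3 * math.sqrt(5) - 3) / 4
c = (9 - 3 * math.sqrt(5)) / 4
gap = lhs(a, b, c) - rhs(a, b, c)
print(gap)
```

The printed gap vanishes to floating-point precision, confirming that the second equality triple really does attain equality.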
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4378392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Factorizing a quartic expression to show that it is a perfect square.
Show that $\frac{a^4+b^4+(a+b)^4}{2}$ is a perfect square.
I tried this,
$$\frac{a^4+b^4+(a+b)^4}{2}$$
$$\frac{a^4+b^4+(a^2+b^2+2ab)^2}{2}$$
$$\frac{2a^4+2b^4+4a^2b^2+2(a^2b^2+2a^3b+2ab^3)}{2}$$
$$a^4+b^4+2a^2b^2+ab(ab+2a^2+2b^2)$$
$$(a^2+b^2)^2+ab(ab+2a^2+2b^2)$$
What can I do next?
| If $a^4+2a^3b+3a^2b^2+2ab^3+b^4$ is a square, it has to be of the form $(xa^2+yab+zb^2)^2$ for some coefficients $x,y,z$.
Then $a^4+2a^3b+3a^2b^2+2ab^3+b^4=x^2a^4+2xya^3b+(y^2+2xz)a^2b^2+2yzab^3+z^2b^4$.
Comparing coefficients we get $x^2=1, 2xy=2, y^2+2xz=3,2yz=2,z^2=1$.
Solving this system we get $x=y=z=1$ and $x=y=z=-1$, so $a^4+2a^3b+3a^2b^2+2ab^3+b^4=(a^2+ab+b^2)^2$
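The identity can be double-checked by brute force over exact integer arithmetic; a small Python sketch:

```python
# Check (a^4 + b^4 + (a+b)^4) / 2 == (a^2 + a*b + b^2)^2 with exact integers.
for a in range(-30, 31):
    for b in range(-30, 31):
        s = a**4 + b**4 + (a + b) ** 4
        assert s % 2 == 0                              # the half is an integer
        assert s // 2 == (a * a + a * b + b * b) ** 2  # and a perfect square
print("identity verified on the grid")
```

In fact, since both sides are polynomials of degree at most $4$ in each of $a$ and $b$, agreement on a $5\times 5$ grid of points already forces the two polynomials to be identical.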
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4378599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the coefficient of the power $x^{n-2}$ in a Chebyshev polynomial? Let us consider the second-kind Chebyshev polynomials, defined by the recurrence $U_{n+1}(x) = 2xU_n(x) - U_{n-1}(x)$, where $n>1$ is a positive integer.
I know that the leading term of $U_n(x)$ is $2^{n}x^{n}$, i.e. the leading coefficient $2^{n}$ is associated with the power $x^{n}$.
My question is:
What is the coefficient of the power $x^{n-2}$?
I find the formula (17) in this link: https://mathworld.wolfram.com/ChebyshevPolynomialoftheSecondKind.html
| You can get the result directly from the recurrence relation. Let $$U_n(x)=2^nx^n -c_nx^{n-2}+\ldots $$ As $U_0=1$ and $U_1=2x,$ we get $c_0=c_1=0.$ Next making use of the recurrence relation gives
$$2^{n+1}x^{n+1}-c_{n+1}x^{n-1}+\ldots = (2^{n+1}x^{n+1}-2c_nx^{n-1}+\ldots )-(2^{n-1}x^{n-1}-c_{n-1}x^{n-3}+\ldots ) $$
We equate the coefficients at $x^{n-1}$ and divide by $2^{n+1}$ to get
$$2^{-n-1}c_{n+1}=2^{-n}c_n+{1\over 4}, \qquad n\ge 1.$$
Therefore $$2^{-n}c_n={n-1\over 4}+c_1={n-1\over 4}.$$
Hence $c_n=(n-1)2^{n-2}$ for $n\ge 1.$
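The closed form $c_n=(n-1)2^{n-2}$ is easy to confirm against the recurrence directly; a small Python sketch that builds the coefficient lists with exact integers:

```python
# Build U_n coefficient lists (index = power of x) from the recurrence
# U_{n+1}(x) = 2x*U_n(x) - U_{n-1}(x), with U_0 = 1 and U_1 = 2x.
def chebyshev_u(n_max):
    polys = [[1], [0, 2]]
    for _ in range(2, n_max + 1):
        prev, cur = polys[-2], polys[-1]
        nxt = [0] + [2 * c for c in cur]   # 2x * U_n
        for i, c in enumerate(prev):       # minus U_{n-1}
            nxt[i] -= c
        polys.append(nxt)
    return polys

polys = chebyshev_u(12)
for n in range(2, 13):
    assert polys[n][n] == 2 ** n                     # leading coefficient
    assert polys[n][n - 2] == -(n - 1) * 2 ** (n - 2)  # c_n = (n-1)2^{n-2}
print("coefficient of x^(n-2) in U_n is -(n-1)*2^(n-2) for n = 2..12")
```

For example the code reproduces $U_4 = 16x^4-12x^2+1$ and $U_5 = 32x^5-32x^3+6x$, matching $c_4 = 12$ and $c_5 = 32$.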
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4378830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that the segment $BE$ bisects $AC$ Points $A,B,C,D,E$ lie on a circle $ω$ and point $P$ lies outside the circle. The given points are such that (i) lines $(P B)$ and $(P D)$ are tangent to $ω$, (ii) $P, A, C$ are collinear, and (iii) $DE ∥ AC$.
Prove that $[BE]$ bisects $[AC]$.
The issue that I have with this problem is that I don't know what I should prove in order to get that $BE$ bisects $AC$. Maybe proving that the intersection between $(BE)$ and $(AC)$ (let's call it $I$) is inside $\omega$, which is equivalent to proving $$OI<OA \quad \text{where O is the center of } \omega.$$
Would that work?
| Since $PB$ and $PD$ are tangents to the circle $\omega$ at the points $B$ and $D$ respectively, the two triangles $BOP$ and $DOP$ are congruent right-angled triangles, where $$\angle \, OBP = \angle\, ODP = 90^\circ$$
Moreover, by congruence, $$\angle \, BOP = \angle\, DOP = \alpha$$
which means that $$\angle \, BOD = 2\, \alpha$$
But then
$$\angle\, BED = \frac{1}{2} \, \angle \, BOD = \alpha$$
because $\angle \, BOD$ is a central angle and $\angle \, BED$ is inscribed. However, $AC \, || \, DE$ which by the collinearity of $A, C, P$ means that
$$PC \, || \, DE$$ and therefore
$$\angle \, BIP = \angle \, BED = \alpha$$
Thus, we conclude that $$\angle \, BIP = \alpha = \angle \, BOP$$ which means that the quadrilateral $BOIP$ is cyclic (in fact the points $B, I, O, D, P$ lie on the same circle.) Consequently,
$$\angle \, OIP = \angle\, OBP = 90^\circ$$ which means that $$OI \perp AP$$ However, $C$ lies on $AP$, so $$OI \perp AC$$ and since $AC$ is a chord of the circle $\omega$ with $O$ its center, the point $I$ must be the midpoint of the segment $AC$, i.e. $BE$ bisects $AC$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4378974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Almost sure convergence and the strong law of large numbers Let $\{B_n\}$ and $\{X_n\}$ be sequences of i.i.d. random variables on $(\Omega, \mathcal{F}, P)$.
(a) Suppose that $P(B_1 = 1) = p = 1 - P(B_1 = 0).$ Define
$$\hat{p}_n(\omega):=\frac{card\{j | 1 \leq j \leq n , \ B_j(\omega)=1 \}}{n}, \ \ \ \ \omega \in \Omega, \ n \in \mathbb{N}.$$
where card stands for cardinality. Show that $\hat{p}_n \to p$ almost surely.
(b) Suppose $F$ is the common distribution of $X_n$, $n \in \mathbb{N}$. Let
$$A_x^{(j)}:=\{\omega \in \Omega | X_j(\omega) \leq x\}, \ \ \ x \in \mathbb{R}, \ j \in \mathbb{N},$$
and
$$F_x^{(n)}(\omega):=\frac{1}{n}\sum_{j=1}^{n}1_{A_x^{(j)}}(\omega), \ \ \ \omega \in \Omega, \ n \in \mathbb{N}.$$
Show that, for every $x \in \mathbb{R}$, $F_x^{(n)} \to F(x)$ almost surely.
For (b) we will use the strong law of large numbers. For this it is sufficient to show that $E[F_x^{(n)}]=F(x)$ (this value is finite, as the function is bounded between $0$ and $1$).
Then,
\begin{align*}
E[F_x^{(n)}] &= E[\frac{1}{n}\sum_{j=1}^{n}1_{A_x^{(j)}}]\\
&= \frac{1}{n} \sum_{j=1}^{n} E[1_{A_x^{(j)}}]\\
&=\frac{1}{n} \sum_{j=1}^{n} P[A_x^{(j)}]\\
&=\frac{1}{n} \sum_{j=1}^{n} P[ X_j \leq x]\\
&=\frac{1}{n} \sum_{j=1}^{n} P[ X_1 \leq x] \tag{i.d.}\\
&=P[ X_1 \leq x]=F(x).
\end{align*}
By the strong law of large numbers, it follows that $F_x^{(n)} \to F(x)$ almost surely.
I have problems with the first part; I think I did the second part correctly. I suspect the first part is similar to the second part, but maybe I did not understand very well what $\hat{p}_n(\omega)$ is. Any idea?
| The notation is the only thing obfuscating these from being handled directly by the law of large numbers.
For the first part, write $\hat{p_n} = \frac{1}{n}\sum_{j = 1}^{n}B_j$. Since $B_j$ are i.i.d., by law of large numbers, $\hat{p_n} \to E(B_1)$ a.s. as $n \to \infty$.
For the second part, write $A_{x}^{(j)} = \{X_j \leq x\}$. So $F_{x}^{(n)} = \frac{1}{n}\sum_{j = 1}^{n}1_{\{X_j \leq x\}}$. Again since $1_{\{X_j \leq x\}}$ are i.i.d., by law of large numbers, $F_{x}^{(n)} \to E(1_{\{X_1 \leq x\}}) = P(X_1 \leq x)$ a.s. as $n \to \infty$.
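Both parts are easy to illustrate by simulation; a small Python sketch (the choices $p=0.3$, a Uniform$(0,1)$ distribution, and the point $x=0.7$ are mine, purely for illustration):

```python
import random

random.seed(12345)
n = 200_000

# Part (a): Bernoulli(p) trials; \hat{p}_n is just the sample mean of the B_j.
p = 0.3
p_hat = sum(1 if random.random() < p else 0 for _ in range(n)) / n
print(p_hat)

# Part (b): empirical CDF of Uniform(0,1) samples at x = 0.7, where F(x) = x.
x = 0.7
f_hat = sum(1 for _ in range(n) if random.random() <= x) / n
print(f_hat)
```

With $n = 200{,}000$ samples, both empirical averages land well within a standard deviation or two of their limits $p$ and $F(x)$, as the law of large numbers predicts.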
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4379177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding an exponential equation solution (general trinomial problem) I have an equation of the form $$ax^{1-p}+bx^{-p}=c$$
where $x,a,b,c,p\in \mathbb{R}_+$ and $p\ge 1$. $a,b,c,p$ are given constants. I'm looking for clues for solving it. Does it even permit any closed form solution?
I was able to deduce the following. We introduce $f(x) = ax^{1-p}+bx^{-p}-c$
$$f(x) = \frac{ax+b-cx^p}{x^p}\rightarrow+\infty \quad \text{as } x\to 0^+,$$
and $\lim_{x\to +\infty}f(x)=-c< 0$. Furthermore, $f'(x) = a(1-p)x^{-p}-bpx^{-p-1} = x^{-p-1}[a(1-p)x-bp]< 0$, so $f$ is strictly decreasing and continuous on $(0,\infty)$; therefore, by the intermediate value theorem, there is a unique root in $\mathbb{R}_+$. BTW, I tried it with Wolfram Alpha but it was unable to understand the input (which indicates that maybe I don't know how to interact with it)! How should we proceed?
Thanks in advance
===========================Edit===========================
As suggested by Gary, it is possible to get other forms of this equation by changing variables. For example:
*
*We introduce $z=\frac{1}{x}$, we get $az^{p-1}+bz^p=c$.
*We can write $x^p = e^{p\ln x}$ and define $t=\ln x$ and get $ae^{(p-1)t}+be^{pt}=c$
*Also multiplying both sides with $x^p$ gives $ax+b-cx^p=0$
| Multiplying by $x^p$, you are looking for the zero('s ?) of the function
$$f(x)=c x^p-a x-b$$ which, for the most general case, will not show explicit solutions. So, either numerical methods or more or less accurate approximations
The first derivatives are
$$f'(x)=c p x^{p-1}-a \qquad \text{and} \qquad f''(x)=c (p-1) p x^{p-2}$$ The first derivative vanishes at
$$x_*=\left(\frac{a}{c p}\right)^{\frac{1}{p-1}}\implies f''(x_*)=c (p-1) p \left(\frac{a}{c p}\right)^{\frac{p-2}{p-1}}$$ So, for a first approximation, assuming that $f(x_*)<0$, expand $f(x)$ around $x_*$ to $O\left((x-x_*)^3\right)$ and solve the quadratic to have
$$x_0=x_*+\sqrt{-2\frac{f(x_*)}{f''(x_*)}}$$
Trying for $a=3$, $b=7$, $c=11$, $p=\pi$, this would give $x_0=1.20223$ while the solution, given using Newton's method, is $x=1.03249$.
Now, repeat Newton iterations according to
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$
$$\left(
\begin{array}{cc}
n & x_n \\
0 & 1.20223 \\
1 & 1.05941 \\
2 & 1.03333 \\
3 & 1.03249
\end{array}
\right)$$ which is quite fast because we have easily obtained a reasonable estimate to start with.
You could also improve the guess by expanding $f(x)$ around $x_0$ and using series reversion to have
$$x_1=x_0-\frac{f(x_0)}{f'(x_0)}-\frac{f(x_0)^2 f''(x_0)}{2 f'(x_0)^3}+\frac{f(x_0)^3 \left(f^{(3)}(x_0) f'(x_0)-3 f''(x_0)^2\right)}{6 f'(x_0)^5}+\cdots$$ which, for the worked example, would give $x_1=1.03451$.
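A minimal Python sketch of the scheme described above (starting guess from the quadratic expansion at $x_*$, then Newton iterations). Note that it solves $f(x)=cx^p-ax-b=0$ directly; since several equivalent forms of the equation appear in the question's edit (in $x$ and in $z=1/x$), the numbers it produces should be compared against whichever form one actually wants:

```python
import math

def solve_trinomial(a, b, c, p, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = c*x**p - a*x - b on x > 0 (assumes p > 1)."""
    f = lambda x: c * x**p - a * x - b
    fp = lambda x: c * p * x ** (p - 1) - a
    fpp = lambda x: c * p * (p - 1) * x ** (p - 2)

    # Starting guess: location x_* of the minimum of f, plus the step obtained
    # by solving the quadratic Taylor expansion there (assumes f(x_*) < 0).
    x_star = (a / (c * p)) ** (1 / (p - 1))
    x = x_star + math.sqrt(-2 * f(x_star) / fpp(x_star))

    for _ in range(max_iter):
        step = f(x) / fp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = solve_trinomial(3, 7, 11, math.pi)
print(root, abs(11 * root**math.pi - 3 * root - 7))
```

Because $f$ is convex for $x>0$ and the starting guess lies to the right of the root, the Newton iterates decrease monotonically to the root, and the residual printed at the end vanishes to machine precision.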
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4379365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int_{0}^{x^3+1}\frac{f'(t)}{1+f(t)}dt=\ln(x)$ Solve for $f(x)$.
$$\int_{0}^{x^3+1}\frac{f'(t)}{1+f(t)}dt=\ln(x)$$
I attempted a simpler version without the $x^3+1$ bound to just get a sense for the problem.
$$\int_0^x \frac{f'(t)}{1+f(t)} dt = \ln(x)$$
Using the Fundamental Theorem of Calculus Part I, we can take the derivative on both sides of the equation to clean up the integral symbols and eliminate the $t$ variables.
$$\Rightarrow\frac{d}{dx}\left[\int_0^x \frac{f'(t)}{1+f(t)} dt = \ln(x)\right]\\\Rightarrow \frac{f'(x)}{1+f(x)}=\frac{1}{x}$$
Assume $y = f(x)$.$$\frac{dy}{dx}=\frac{1+y}{x}$$
The following differential equation can be solved by separation of variables and integrating both sides of the equation, which leads us to $y=f(x)=x-1$ $$\Rightarrow\int\left[\frac{dy}{y+1}=\frac{dx}{x}\right]\\\Rightarrow\ln|y+1|=\ln|x|\\\Rightarrow y=f(x)=x-1$$
Now going back to our original problem, we again take the derivative on both sides. This time we have to implement the chain rule for FTC Part I to work and then solve our differential equation (which proved to be difficult for me).
$$\Rightarrow\frac{d}{dx}\left[\int_{0}^{x^3+1}\frac{f'(t)}{1+f(t)}dt=\ln(x)\right]\\\Rightarrow\frac{3x^2f'(x^3+1)}{1+f(x^3+1)}=\frac{1}{x}\\\Rightarrow 3x^3f'(x^3+1)=1+f(x^3+1)$$
Now I wasn't really sure what to do from this point onwards. I could just treat $x^3+1$ as an independent variable and then solve the differential equation, but I'm not sure if that's the correct approach. I would appreciate it if you did not give me the entire answer but rather a prompt in the right direction. Thanks in advance (:
| Put $s=f(t)$ in the integral. You get $\int_{f(0)} ^{f(x^{3}+1)} \frac 1 {1+s} ds=\ln x$. So $\ln [1+f(x^{3}+1)]-\ln [1+f(0)]=\ln x$. Can you continue from here?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4379518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Limit of $\frac{\frac{(\sqrt{n}+n-1)!}{(\sqrt{n})!(n-1)!}}{2^{\sqrt{n}}}$ as $n \rightarrow \infty$. I'm trying to show $$\lim_{n \rightarrow \infty} \frac{\frac{(\sqrt{n}+n-1)!}{(\sqrt{n})!(n-1)!}}{2^{\sqrt{n}}} = \infty$$ as wolfram says it should. But can't seem to get it. This is what I've tried so far:
$$\begin{align}\frac{1}{2^{\sqrt{n}}}\frac{(\sqrt{n}+n-1)!}{(\sqrt{n})!(n-1)!} & = \frac{1}{2^{\sqrt{n}}}\frac{[\prod_{i=1}^{n-1} (\sqrt{n}+i)](\sqrt{n})!}{(\sqrt{n})!(n-1)!} \\ &= \frac{1}{2^{\sqrt{n}}} \frac{\prod_{i=1}^{n-1} (\sqrt{n}+i)}{(n-1)!}\end{align}.$$
But I'm not sure where to go from here.
| $$a_n=\frac{1}{2^{\sqrt{n}}}\frac{(\sqrt{n}+n-1)!}{(\sqrt{n})!(n-1)!}$$ Take logarithms
$$\log(a_n)=-\sqrt n \log(2)+\log((\sqrt{n}+n-1)!)-\log((\sqrt{n})!)-\log((n-1)!)$$ Use Stirling's approximation three times and continue with Taylor series
$$\log(a_n)=\sqrt{n} \left(\log \left(\frac{\sqrt{n}}{2}\right)+1\right)+\frac{1}{4}
\left(2-\log \left(4 \pi ^2 n\right)\right)+O\left(\frac{1}{\sqrt n}\right)$$
So, $ \log(a_n)\to \infty$
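Numerically the divergence is easy to see; a quick Python sketch, interpreting the factorials via the Gamma function (an assumption needed to make sense of $(\sqrt n)!$ for non-integer $\sqrt n$):

```python
import math

def log_a(n):
    # log a_n = lgamma(sqrt(n)+n) - lgamma(sqrt(n)+1) - lgamma(n) - sqrt(n)*log 2,
    # since (sqrt(n)+n-1)! = Gamma(sqrt(n)+n), (sqrt(n))! = Gamma(sqrt(n)+1),
    # and (n-1)! = Gamma(n).
    r = math.sqrt(n)
    return math.lgamma(r + n) - math.lgamma(r + 1) - math.lgamma(n) - r * math.log(2)

values = [log_a(10**k) for k in range(2, 6)]
print(values)
```

The printed values grow rapidly with $n$, consistent with the leading asymptotic term $\sqrt{n}\left(\log(\sqrt{n}/2)+1\right)$ obtained above.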
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4379681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |