Is refinement of a composition series a composition series? I was given it as an exercise to check whether every refinement of a composition series is a composition series.
I think it follows from the definition itself that every refinement of a composition series is a composition series.
Please correct me if I am wrong.
| If you follow wikipedia's definition:
"A subnormal series is a composition series if and only if it is of maximal length. That is, there are no additional subgroups which can be "inserted" into a composition series."
So if you have a composition series you cannot have a strict refinement of it, i.e. every refinement is the trivial refinement, so we end with the same composition series with which we started.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4081402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculus: Why do we ignore dx? I'm new to calculus and I find it really confusing why we just ignore the dx at the end.
For example, when working out the derivative of $x^2$,
at the last step, we're left with
$f'(x)= 2x + dx$
But I've heard people in videos say: "Since $dx$ is super super super small, we can safely ignore it." But just ignoring it bugs me. If we choose to ignore it, should it not actually look like this:
$f'(x) \approx 2x $
Thanks for your time. Cheers :)
| Warning: I'm about to use the terms "secant" and "tangent" with geometric meanings I'll link to, and not their meanings in trigonometry. I didn't choose the terminology.
For $y=x^2$ the secant from $(x,\,y)$ to a point $(x+\Delta x,\,y+\Delta y)$ with $\Delta x\ne0$ has gradient $\frac{\Delta y}{\Delta x}=2x+\Delta x$. The tangent at $(x,\,y)$ is the line through it whose gradient is the $\Delta x\to0$ limit $2x$ of the secant's gradient $2x+\Delta x$. We are not ignoring the $\Delta x$ term in the secant's gradient; we are interested in the tangent's gradient, which is denoted $\frac{dy}{dx}$. This is not literally a ratio of two quantities $dy,\,dx$, but has this notation because it is a limit of a ratio.
As has been discussed, the fact that $\lim_{\Delta x\to0}\frac{\Delta y}{\Delta x}=\frac{dy}{dx}$ is finite implies the small-$\Delta x$ little-$o$ result $\Delta y\in\frac{dy}{dx}\Delta x+o(\Delta x)$.
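A quick numerical illustration of this limit may help; the sample point $x=1.5$ and the step sizes below are arbitrary choices, not part of the argument:

```python
# For y = x², the secant gradient ((x+Δx)² - x²)/Δx equals 2x + Δx exactly,
# and tends to the tangent gradient 2x as Δx → 0.
x = 1.5
secants = []
for dx in [0.1, 0.01, 0.001]:
    secant = ((x + dx) ** 2 - x ** 2) / dx
    secants.append(secant)
    assert abs(secant - (2 * x + dx)) < 1e-9  # the algebraic identity 2x + Δx
tangent_gradient = 2 * x  # the Δx → 0 limit
```

Nothing here is "ignored": each secant gradient is exactly $2x+\Delta x$, and the tangent gradient is the limit of those values.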
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4081517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 9,
"answer_id": 6
} |
Induction with divisibility QUESTION: Prove that $16 \mid 19^{4n+1}+17^{3n+1}-4$ for all $n \in \mathbb{N}$
This is what I have so far but I'm not sure where to go from here.
PROOF:
Let $16 \mid 19^{4n+1}+17^{3n+1}-4$ equal $S(n)$.
Base case, let $n=0$.
\begin{align*}
16 &\mid 19^{1}+17^{1}-4\\
16 &\mid 32
\end{align*}
since $32 = 16 \cdot 2$.
Therefore, $S(0)$ is true.
Using the induction hypothesis, suppose $19^{4k+1}+17^{3k+1}-4$ is divisible by $16$ for some $k \in \mathbb{N}$.
Claim,
$16 \mid 19^{4k+1}+17^{3k+1}-4$,
that is $19^{4k+1}+17^{3k+1}-4=16m$, where $m$ is an integer.
We must then show,
\begin{align*}
16 & \mid 19^{4k+5}+17^{3k+4}-4\\
16 & \mid 19^4 \cdot 19^{4k+1} + 17^3 \cdot 17^{3k+1}-4
\end{align*}
photo of my working out
| Q. Show that $16$ $|$ $19^{4n+1} +17^{3n+1} -4$ $\forall$ $ n \in \mathbb N$.
First of all, simplify the expression. Write $19=16+3$ and $17=16+1$, then use binomial theorem to expand it. That way you'll get a simpler expression to use induction.
$16$ $|$ $19^{4n+1} +17^{3n+1} -4 \implies$ $16$ $|$ $3^{4n+1}+1-4 \implies 16$ $|$ $3(3^{4n}-1)$
$\implies 16$ $|$ $3^{4n}-1$.
Method 1 to proceed:
Now here it's easier to just use the fact that $x-y$ $|$ $x^n-y^n$, because it will prove that $ 16$ $|$ $3^{4n}-1$ directly, without induction. ( As $16$ $|$ $3^4 -1$ $|$ $3^{4n}-1$. )
But if you wish to use induction only, then here it is:
Method 2 to proceed:
Let $P(n):16$ $|$ $3^{4n}-1$.
$P(1), P(2)$ are true.
Let $P(i)$ be true for $i=1,2,\cdots,k$
$16$ $|$ $3^{4n}-1 \implies 16$ $|$ $(3^{4n}-1)(3^4+1) \implies 16$ $|$ $3^{4n+4}-1 +3^{4}(3^{4n-4}-1)$
And we know $16 $ $|$ $(3^{4n-4}-1)$ (As $P(n-1)$ is true).
$\implies 16$ $|$ $3^{4n+4}-1 \implies P(n+1)$ is true, and hence $P(n)$ is true for all natural numbers $n$.
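As a sanity check (assuming nothing beyond the statements themselves), both the original divisibility and the reduced one can be verified for small $n$:

```python
# Verify 16 | 19^(4n+1) + 17^(3n+1) - 4 and the reduced claim 16 | 3^(4n) - 1.
checked = 0
for n in range(10):
    assert (19 ** (4 * n + 1) + 17 ** (3 * n + 1) - 4) % 16 == 0
    assert (3 ** (4 * n) - 1) % 16 == 0
    checked += 1
```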
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4081698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Find the set, where the series converges: $\sum_{n=1}^\infty\frac{(-1)^nn}{n^3+1}(x-3)^n$ I have to find the set, where the series converges: $$\sum_{n=1}^\infty\frac{(-1)^n n}{n^3+1}(x-3)^n.$$
I have already found $r=1$ and the center is $3$. So the set is $(2,4)$. Now I have to check whether the series converges for $x=2$ and $x=4$.
And I don't know how to check it for $x=2$.
I don't know how to check whether this series is convergent: $$\sum_{n=1}^\infty\frac{(-1)^n n}{n^3+1}(-1)^n.$$
Please help me. I have been stuck here for 2 hours.
| For both $x=2,4$, we have that
$$|(2-3)^n|=|(-1)^n|=1\text{ and }|(4-3)^n|=|1^n|=1$$
Then
$$\sum_{n=1}^\infty \left|\frac{(-1)^nn}{n^3+1}(x-3)^n\right|=\sum_{n=1}^\infty \frac{n}{n^3+1}\leq\sum_{n=1}^\infty \frac{n}{n^3}=\sum_{n=1}^\infty \frac{1}{n^2}<\infty$$
(the last sum converges by the $p$-test). Thus, the original sum converges absolutely at $x=2,4$.
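The comparison can be illustrated numerically; the cutoff of $10^5$ terms below is an arbitrary choice:

```python
import math

# Partial sums of Σ n/(n³+1) stay below Σ 1/n² = π²/6, matching the comparison above.
partial = 0.0
for n in range(1, 100001):
    partial += n / (n ** 3 + 1)
assert partial < math.pi ** 2 / 6
```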
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4081785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many transitive relations on a set of 3 elements $A=\{a,b,c\}$? How many transitive relations on a set of 3 elements $A=\{a,b,c\}$? I know that the number of binary relations on an n -element set is $2^{n^2}$. In our case, that’s 512. I know that the number of transitive relations should be 171, but how to calculate and prove this? If I check each of the 512 binary relations for transitivity, then it will take months, there should be another way to calculate this, but which one?
| I don't think this can be done without a certain amount of drudgery, but you won't have to generate every binary relation. I'll use the letters $x,y,z$ to represent the elements $a,b,c$, with the understanding that no two of $x,y,z$ are equal.
I would approach this by considering the number of pairs in each relation. There is $1$ transitive relation with $0$ pairs. There are $9$ transitive relations with $1$ pair. The first interesting case is $2$ pairs. There are $36$ such relations. If one of the pairs is of the form $(x,x)$ then the relation is transitive, so suppose we have a pair $(x,y)$. The relation will be intransitive if and only if the other pair is one of $(y,x),(y,z),\text{ or }(z,x)$. There are $6$ choices for $(x,y)$ so we have $\frac{6\cdot3}2=9$ intransitive $2$-pair relations, and $27$ transitive ones.
Try to continue in this manner. I think that after some point it will get easy, because any transitive relation with enough pairs must comprise all $9$ pairs, so you can stop early. I doubt that threshold is very close to $9$. Take care to avoid double counting.
EDIT
An alternative is to try to count the number of intransitive relations directly, and subtract from $512$. For example, a relation that contains $(x,y)$ and $(y,z)$ but not $(x,z)$ is intransitive. That is, there are $2^6$ intransitive relations that contain $(x,y)$ and $(y,z)$, because the other pairs can be any of the $6$ elements that aren't $(x,z)$. You'd have to use inclusion-exclusion to avoid double-counting, and my feeling is that it would be quite error-prone. I think the first approach is probably easier to carry out, though the final solution might be longer.
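Since there are only $2^9 = 512$ binary relations on a $3$-element set, the count can in fact be brute-forced in well under a second, confirming both the total of $171$ and the $27$ transitive $2$-pair relations found above:

```python
from itertools import product

# Enumerate all 512 relations on {0, 1, 2} and test transitivity directly.
pairs = [(i, j) for i in range(3) for j in range(3)]

def is_transitive(R):
    # R transitive iff (a,b) ∈ R and (b,d) ∈ R imply (a,d) ∈ R
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

total = 0
by_size = {}  # number of transitive relations, keyed by number of pairs
for bits in product((0, 1), repeat=9):
    R = {p for p, bit in zip(pairs, bits) if bit}
    if is_transitive(R):
        total += 1
        by_size[len(R)] = by_size.get(len(R), 0) + 1
```

This matches the hand counts in the answer: $1$ transitive relation with $0$ pairs, $9$ with $1$ pair, and $27$ with $2$ pairs.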
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4081930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that if $\int_0^1 f(yx) dx = g(y)$, then $f(x)=g(x)+xg'(x)$ for any $x$. I'm trying to solve this question:
$f: \mathbb{R} \to \mathbb{R}$ is a continuous function and $g: \mathbb{R} \to \mathbb{R}$ is of $C^1$ class, such that $g(y) = \int_0^1 f(yx) dx$. Prove that $f(x)=g(x)+xg'(x)$ for all $x \in \mathbb{R}$.
But my progress until now was really slow. I've tried to use the Leibniz integral rule and the first mean value theorem for definite integrals, but I had no success. Can somebody help me?
| Welcome to MSE. A hint can be $$f(x)=g(x)+xg'(x)\\
f(x)=x'g(x)+xg'(x)\\f(x)=(xg(x))'\\\to \\\int f(x)\,dx=xg(x)+C$$
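The hint can be sanity-checked with a concrete (hypothetical) choice of $f$, here $f(x)=x^2$, for which $g(y)=\int_0^1 (yx)^2\,dx = y^2/3$ and $g(y)+yg'(y)=y^2/3+2y^2/3=y^2=f(y)$; the grid size and finite-difference step below are ad hoc:

```python
# Numerical check of f(y) = g(y) + y g'(y) for the sample choice f(x) = x².
def f(x):
    return x * x

def g(y, N=20000):
    # midpoint rule for ∫₀¹ f(yx) dx
    return sum(f(y * (k + 0.5) / N) for k in range(N)) / N

y0 = 1.7
h = 1e-4
g_prime = (g(y0 + h) - g(y0 - h)) / (2 * h)  # central difference for g'(y0)
residual = abs(g(y0) + y0 * g_prime - f(y0))
```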
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$\|x\| \leq \|x(h)\|$ for all $x\in A$ implies $\dim A < \infty$. Let $A\subseteq B(H)$ be a unital inclusion of $C^*$-algebras. Suppose that there is a vector $h \in H$ such that $\|x \| \leq \|x(h)\|$ for all $x \in A$. Does it follow that $A$ is finite-dimensional?
Attempt/observations:
Not really sure how to start. Somehow we want to show that a set of linearly independent vectors in $A$ must be finite.
*
*Taking $x=1$, we see that $\|h\| \geq 1$.
*Maybe we can show that if $A$ has infinite dimension, then there is $x \in A$ with $\|x\|> \|x(h)\|$, or equivalently $\|x^*x\| > \langle x^*xh,h\rangle.$
| The condition forces $A$ to be finite-dimensional.
This is because the inequality will also hold for the double commutant $A''$. Indeed, $A''$ is the sot-closure of $A$. If $x=\lim_{sot}x_j$ and $\|x_j\|\leq\|x_jh\|$, then
$$
\|x\|\leq\limsup_j\|x_j\|\leq\limsup_j\|x_jh\|=\|xh\|.
$$
Being a von Neumann algebra, $A''$ is the norm closure of the span of its projections. Let $p_1,\ldots,p_m\in A''$ be pairwise orthogonal projections. Then
$$
\|h\|^2\geq\Big\|\Big(\sum_{k=1}^m p_k\Big)h\Big\|^2=\sum_{k=1}^m\|p_kh\|^2\geq m,\qquad\text{since each }\|p_kh\|\geq\|p_k\|=1.
$$
So $A''$ admits only finitely many pairwise orthogonal projections. This forces $A''$ (and, a fortiori, $A$, since they are now equal) to be finite-dimensional.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simple proofs of $H^1(X, \mathcal{O}^*)=0$ when $X$ is an open Riemann Surface I am trying to get proof that $H^1(X, \mathcal{O}^*)=0$ when $X$ is an Open Riemann surface.
Looking at some books I have seen the following two approaches:
*
*Use Mittag-Leffer distributions, Runge theorem and some functional analysis to prove the Mittag-Leffer and Weierstrass theorems (this is, that every divisor is the divisor of some meromorphic function).
*Use Runge theorem to prove that $H^1(X, \mathcal{O})=0$, and assume that $H^2(X, \mathbb{Z})=0$ and then the long cohomology sequence associated to the exponential exact sequence proves the claim.
I was wondering if there is a proof that is more or less self contained, where Runge theorem on open Riemann Surfaces and the fact that $H^1(X, \mathcal{O}^*)=0$ iff every line bundle is trivial are assumed.
For example, I don't find the second one satisfactory because $H^2(X, \mathbb{Z})=0$ depends on somewhat hard results: That sheaf cohomology agrees with singular cohomology and Poincaré duality with coefficients in $\mathbb{Z}$, whereas the first one depends too heavily on finding weak solutions to PDEs.
| I found a proof only involving the existence of a good cover.
Let $L \to X$ be a holomorphic line bundle and $\mathcal{U}$ an open cover of $X$ such that:
*
*On each $U_i$ there is a nowhere-vanishing holomorphic section $s_i$ of $L$
*Each $U_i$ is contained in a holomorphic chart
*Each $U_i$ and each $U_i \cap U_j$ is simply connected
Let $h_{ij} = s_i/s_j$. Because $h_{ij}$ is nowhere 0 on a simply connected domain, $h_{ij} = e^{g_{ij}}$.
Let $\{\xi_j\}$ be a partition of unity subordinate to the cover $\{U_j\}$, and define $g_i = \sum_j \xi_j g_{ij}$ on $U_i$ (choosing the logarithms so that the cocycle relation $g_{ij}+g_{jk}=g_{ik}$ holds). Then $g_i - g_j = \sum_k \xi_k(g_{ik}-g_{jk}) = g_{ij}$, so $\bar{\partial}g_i = \bar{\partial} g_j$ on overlaps because each $g_{ij}$ is holomorphic. Therefore there is a well-defined $(0,1)$-form $\alpha$ on $X$ such that $\alpha_{\mid U_i} = \bar{\partial} g_i$.
Let $f$ be a solution to $\bar{\partial} f = \alpha$ (which exists by a direct application of Runge theorem) and let $h_i = g_i - f$. Then $h_i$ is holomorphic and $h_i - h_j = g_{ij}$. If $e_i = e^{-h_i}s_i$ then $$\frac{e_i}{e_j} = e^{-g_{ij}}\frac{s_i}{s_j} = h_{ij}^{-1} \frac{s_i}{s_j} =1,$$ so the $e_i$ can be glued to obtain a global nowhere-vanishing holomorphic section, and $L$ is trivial.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximal torus in algebraic groups Suppose that $G$ is a linear algebraic group.
We say that $T$ is a torus if $T\approx \mathbb G_m^n$.
What are the conditions on $G$ for the existence of a torus $T$ in $G$?
Thanks for any reference for this.
| Every semisimple element of $G$ lies in a maximal torus of $G^{\circ}$, the connected component of $G$. This follows from Theorem 6.4.5 (ii), Linear Algebraic Groups (2nd edition) by T.A. Springer.
In particular, if $G^{\circ}$ has a nontrivial semisimple element, then $G$ has a nontrivial torus. On the other hand, if $G^{\circ}$ has a nontrivial torus, then it has a nontrivial semisimple element (as all the elements of a torus are semisimple).
So that's your answer: $G$ has a nontrivial torus if and only if $G^{\circ}$ has a nontrivial semisimple element.
Which connected linear algebraic groups do not have any nontrivial semisimple elements? Those which consist entirely of unipotent elements. Up to isomorphism, these are exactly the closed subgroups of $\operatorname{GL}_n$ whose elements are upper triangular matrices with $1$s on the diagonal (Proposition 2.4.12, Springer).
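The unipotent case can be illustrated concretely; the sample matrix below is an ad hoc example:

```python
# A 3×3 upper-triangular matrix N with 1s on the diagonal is unipotent:
# (N - I)³ = 0, so 1 is its only eigenvalue and it is semisimple only if N = I.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N = [[1, 2, 5],
     [0, 1, 3],
     [0, 0, 1]]
M = [[N[i][j] - (1 if i == j else 0) for j in range(3)] for i in range(3)]  # N - I
M3 = matmul(matmul(M, M), M)
is_nilpotent = all(M3[i][j] == 0 for i in range(3) for j in range(3))
```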
Now, what about linear algebraic groups, not necessarily connected, whose connected components consist of unipotent elements? These aren't easy to classify as far as I know. For example, there are groups of the form $G = G^{\circ} \times H$, where $G^{\circ}$ is connected unipotent and $H$ is finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does it make sense for "If $p$ and not p, therefore $q$" to be a valid argument? In propositional logic, an argument is invalid iff there is any instance where all the premises are true and the conclusion is false, if the set of premises is $$\left\{\:p,\:\neg p\:\right\}$$
then the argument is valid for any conclusion $q$. This doesn't make sense in the real world (at least compared to other valid arguments), so is there anything that sets this argument apart from other valid ones (that make sense to be valid)?
| Assume otherwise. Then $\lnot((p\land \lnot p)\to q)$ holds, which is only true when $p\land \lnot p$ is true while $q$ is false. But $p\land \lnot p$ is true exactly when both $p$ and $\lnot p$ are true, a contradiction. Thus $(p\land \lnot p)\to q$.
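The vacuous validity can also be seen by exhausting the truth table:

```python
# (p ∧ ¬p) → q is true in every row, because the premise p ∧ ¬p is never true,
# so the argument {p, ¬p} ⊢ q is (vacuously) valid.
rows = []
for p in (False, True):
    for q in (False, True):
        premise = p and (not p)
        rows.append((not premise) or q)  # material implication premise → q
```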
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Outer measure of intervals: $|(a,b)|=|[a,b)|=|(a,b]|=b-a$ I have proved the following statement and I would like to know if I have made any mistake, thanks.
"Prove that if $a,b\in\mathbb{R}, a<b$, then $|(a,b)|=|[a,b)|=|(a,b]|=b-a$
NOTE: $|\cdot|$ refers to outer measure, i.e. for $A\subset\mathbb{R},\ |A|:=\inf\{\sum_{k=1}^{\infty}l(I_k): I_1,I_2,\dots\text{ are open intervals such that }A\subset\bigcup_{k=1}^{\infty}I_k\}$; the length of an open interval $I\subset\mathbb{R}$ is defined as
$\ell(I):=\begin{cases}
b-a & \text{if }I=(a,b),\ a,b\in\mathbb{R}, a<b; \\
0 & \text{if }I=\emptyset; \\
\infty & \text{if } I=(-\infty, a)\text{ or } I=(a,\infty);\\
\infty & \text{ if }I=(-\infty,\infty)
\end{cases} $
I already know: $(1)$ countable subadditivity of outer measure, $(2)\ $countable sets have measure $0$, $(3)$ outer measure preserves order, $(4)\ |[a,b]|=b-a$ outer measure of a closed interval
(I) $(a,b)\subset [a,b]\overset{(3)}{\Rightarrow} |(a,b)|\leq |[a,b]|$
(II) $|[a,b]|=|\{a\}\cup (a,b)\cup \{b\}|\overset{(1)}{\leq} |\{a\}|+ |(a,b)|+ |\{b\}|\overset{(2)}{=} |(a,b)|$
(I), (II) $\Rightarrow$ (III) $\fbox{$|(a,b)|=|[a,b]|\overset{(4)}{=}b-a$}$
$(a,b)\subset [a,b)\subset [a,b]\overset{(3)}{\Rightarrow}|(a,b)|\leq |[a,b)|\leq |[a,b]|\overset{(III)}{\Rightarrow}\fbox{$|[a,b)|=b-a$}$
$(a,b)\subset (a,b]\subset [a,b]\overset{(3)}{\Rightarrow}|(a,b)|\leq |(a,b]|\leq |[a,b]|\overset{(III)}{\Rightarrow}\fbox{$|(a,b]|=b-a$}$
Putting all together we have $\fbox{$|[a,b)|=|(a,b]|=|(a,b)|=|[a,b]|=b-a$}$, as desired.
| Your proof is correct. It can be simplified a bit. If you prove
$$
|(a,b)|=|[a,b]|= b-a
$$
first then the remaining equalities follow from
$$
(a, b) \subset (a, b] \subset [a, b]
$$
and
$$
(a, b) \subset [a, b) \subset [a, b]
$$
because of the order preserving property.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4082809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$p = x^2 + y^2$ with $p$ prime and $y$ even, show that $x$ and $y/2$ are quadratic residues mod $p$
Let $p$ be a prime which can be written as $p = x^2 + y^2$ for positive integers $x$ and $y$. Also assume that $y$ is an even integer.
Show that $x$ and $y/2$ are quadratic residues mod $p$.
My attempt:
The case $p = 2$ is trivial.
For $p > 2$ we know that $p \equiv 1$ mod $4$. The integer $y$ is even. Since $p = x^2+y^2$ and $y^2 \equiv 0 $ mod $4$, we have that $x^2 \equiv 1$ mod $4$.
From this point I would like to use the legendre symbol $\left(\frac{x}{p}\right)$ to get further, but I don't get anywhere. Any tips?
| Use properties of the Jacobi symbol.
Firstly, note that if $p>2$, then $p\equiv 1\pmod 4$. Since $x$ and $p$ are odd we have
$$
\left(\frac{x}{p}\right)=(-1)^{(x-1)(p-1)/4}\left(\frac{p}{x}\right)=\left(\frac{p}{x}\right)=\left(\frac{x^2+y^2}{x}\right)=\left(\frac{y^2}{x}\right)=1,
$$
so $x$ is a quadratic residue.
For $y$ we consider two cases: $y\equiv 2\pmod 4$ (meaning that $p\equiv 5\pmod 8$) or $y\equiv 0\pmod 4$ (meaning that $p\equiv 1\pmod 8$).
In the first case we can just do the same
$$
\left(\frac{y/2}{p}\right)=(-1)^{(y/2-1)(p-1)/4}\left(\frac{p}{y/2}\right)=\left(\frac{p}{y/2}\right)=\left(\frac{x^2+y^2}{y/2}\right)=\left(\frac{x^2}{y/2}\right)=1.
$$
For the second case let $y=2^kz$ where $z$ is odd and $k\ge 2$. Since $p\equiv 1\pmod 8$ we have $\left(\frac{2}{p}\right)=1$, so the previous argument can be modified in the following way:
$$
\left(\frac{y/2}{p}\right)=\left(\frac{2^{k-1}z}{p}\right)=\left(\frac{z}{p}\right)=(-1)^{(z-1)(p-1)/4}\left(\frac{p}{z}\right)=\left(\frac{p}{z}\right)=\left(\frac{x^2+y^2}{z}\right)=\left(\frac{x^2}{z}\right)=1.
$$
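The conclusion can be spot-checked for a few small primes $p = x^2 + y^2$ with $y$ even, using Euler's criterion ($a$ is a quadratic residue mod an odd prime $p$ iff $a^{(p-1)/2} \equiv 1 \pmod p$):

```python
# Spot check: for each listed prime p = x² + y² with y even,
# both x and y/2 are quadratic residues mod p.
def is_qr(a, p):
    return pow(a % p, (p - 1) // 2, p) == 1  # Euler's criterion

cases = [(5, 1, 2), (13, 3, 2), (17, 1, 4), (29, 5, 2), (37, 1, 6), (41, 5, 4)]
ok = all(p == x * x + y * y and is_qr(x, p) and is_qr(y // 2, p)
         for p, x, y in cases)
```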
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Taylor series about 1 I was trying to find the Taylor series about $1$ for $\dfrac{x^2}{2 - x}$ but my answer seems to be wrong. I got $T(x) = \frac{1}{n!}(x-1)^n$.
| We are looking for a representation
\begin{align*}
\sum_{n=0}^\infty a_n(x-1)^n
\end{align*}
We obtain
\begin{align*}
\color{blue}{\frac{x^2}{2-x}}&=\frac{(x-1+1)^2}{1-(x-1)}\\
&=\left((x-1)^2+2(x-1)+1\right)\sum_{n=0}^\infty (x-1)^n\\
&=\sum_{n=0}^\infty(x-1)^{n+2}+2\sum_{n=0}^\infty(x-1)^{n+1}+\sum_{n=0}^\infty(x-1)^n\\
&=\sum_{n=2}^\infty(x-1)^{n}+2\sum_{n=1}^\infty(x-1)^{n}+\sum_{n=0}^\infty(x-1)^n\\
&\,\,\color{blue}{=1+3(x-1)+4\sum_{n=2}^\infty(x-1)^n}
\end{align*}
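The closed form can be compared numerically with $x^2/(2-x)$ near the center; the sample points and the truncation at $60$ terms are arbitrary:

```python
# Compare the truncated series 1 + 3(x-1) + 4·Σ_{n=2..59}(x-1)^n with x²/(2-x).
max_err = 0.0
for x in [0.7, 1.0, 1.3]:
    u = x - 1
    s = 1 + 3 * u + 4 * sum(u ** n for n in range(2, 60))
    max_err = max(max_err, abs(s - x * x / (2 - x)))
```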
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Strange case of uniform convergence of series. Let's consider the following series:
$$\sum_{n = 1}^{\infty}{\left(1 - \cos{\sqrt[3]\frac{x}{n^2}}\right)}$$
for $x$ in the intervals $\delta_1=(0,1)$ and $\delta_2=(1,+\infty)$. It converges uniformly on $\delta_1$ and non-uniformly on $\delta_2$ (this is the answer). I assumed that our series converges and got the same answer.
*
*Found derivative and zeros of derivative
*Found supremum of our function $\phi_n(x)$ and said that our series uniformly converges by comparison test (supremum is $1 - \cos{\sqrt[3]\frac{1}{n^2}}$ and if we consider our series converging, then this converges as well)
*Proved that our series does not uniformly converge on $\delta_2$ via negation of Cauchy Criterion of Uniform Convergence
So the only single step left is to prove that given sequence converges. I failed several times.
What I have tried:
*
*Ratio test - always gives 1
*Comparison test - I am not sure I know how to find a function to compare with in this case. Maybe there is something about equivalent functions that I miss
*I tried to use the Maclaurin series, since the cosine argument goes to zero; however, keeping more than one term still gives $1$ in the ratio test, and keeping only the constant term $1$ seems strange. I am also not sure it is suitable for this kind of task at all
I did not try the integral test, since it seems really difficult to integrate this kind of function over $dn$, and I did not try the root test, since it seems useless in our case
| Use the identity $1-\cos\theta=2\sin^2(\theta/2)$ to rewrite the series as
$\sum_{n=1}^{\infty}2\sin^2\left(\frac{1}{2}\sqrt[3]\frac{x}{n^2}\right)$.
Now you can apply the comparison test, using the inequality $\sin^2\theta\leq\theta^2$.
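The resulting bound $1-\cos\sqrt[3]{x/n^2}=2\sin^2\big(\tfrac12\sqrt[3]{x/n^2}\big) \le \tfrac12 (x/n^2)^{2/3}$ can be spot-checked numerically; the sample points are arbitrary:

```python
import math

# Check term_n(x) = 1 - cos((x/n²)^(1/3)) ≤ (1/2)(x/n²)^(2/3) = x^(2/3)/(2 n^(4/3)),
# a convergent majorant independent of x on δ₁ = (0, 1).
bound_holds = True
for x in (0.1, 0.5, 0.99):
    for n in range(1, 50):
        theta = (x / n ** 2) ** (1.0 / 3.0)
        term = 1 - math.cos(theta)
        bound_holds = bound_holds and term <= theta ** 2 / 2 + 1e-12
```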
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083419",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to compute the integral $\displaystyle\int_0^\infty\frac{1}{\sqrt{2\pi t}}\cdot\exp\bigg({\frac{-a}{2t}-bt}\bigg)dt$ How to get the following for $a,b>0$? $$\int_0^\infty\frac{1}{\sqrt{2\pi t}}\cdot\exp\bigg({\frac{-a}{2t}-bt}\bigg)dt=\frac{1}{\sqrt{2b}}\cdot \exp\big(-\sqrt{2ab}\big) $$
The context for this identity is its use in computing the resolvent of Brownian motion in the stochastic calculus book by Le Gall (screenshot below). I figured there had to be an elementary way to compute it, i.e. without using the fact Le Gall alludes to about the Laplace transform of the hitting time of Brownian motion. As is indicated in the comments, this is clearly a duplicate so I wouldn't object to its being closed for that reason. Although maybe those who posted the excellent answers below should've been given the chance to migrate their answers to the post of which this is a duplicate, if they wish.
| The following change of variables
$$u=\sqrt{t},\qquad t=u^2,\qquad dt =2udu$$
gives
\begin{align*}
I:&= \int_0^\infty\frac{1}{\sqrt{2\pi t}}\cdot\exp\bigg({\frac{-a}{2t}-bt}\bigg)dt\\
&=\frac{1}{\sqrt{2\pi}} \int^\infty_0\frac{1}{u} e^{-\frac{a}{2u^2}-b u^2}\,2u\,du=\sqrt{\frac{2}{\pi}}\int^\infty_0 e^{-\frac{a}{2u^2}-b u^2}\,du=F(a)
\end{align*}
Differentiating under the integral sign with respect to $a$ gives
$$F'(a)=-\sqrt{\frac{2}{\pi}}\int^\infty_0 e^{-\big(\frac{a}{2u^2}+ bu^2\big)}\frac{1}{2u^2}\,du
$$
The change of variables
$$v=\frac{\sqrt{a}}{\sqrt{2b}\,u},\qquad u=\frac{\sqrt{a}}{\sqrt{2b}\,v},\qquad du=-\frac{\sqrt{a}}{\sqrt{2b}\,v^2}\,dv$$
swaps the two terms in the exponent, since $\frac{a}{2u^2}=bv^2$ and $bu^2=\frac{a}{2v^2}$, and leads to
$$F'(a)=-\sqrt{\frac{2}{\pi}}\int^\infty_0 e^{-\big(bv^2 +\frac{a}{2v^2}\big)}\frac{bv^2}{a}\cdot\frac{\sqrt{a}}{\sqrt{2b}\,v^2}\,dv=-\sqrt{\frac{b}{2a}}\,F(a)$$
The first-order ODE we get satisfies the following initial condition:
$$F(0)=\sqrt{\frac{2}{\pi}}\int^\infty_0e^{-bu^2}\,du=\sqrt{\frac{1}{2b}}$$
Hence
$$
F'(a)+\sqrt{\frac{b}{2a}}\,F(a)=0
$$
which is equivalent to
$$
e^{\sqrt{2ba}}\Big(F'(a)+\sqrt{\frac{b}{2a}}\,F(a)\Big)=\frac{d}{da}\Big(e^{\sqrt{2ba}}F(a)\Big)=0
$$
and so
$$
e^{\sqrt{2ab}}F(a)-F(0)=e^{\sqrt{2ab}}F(a)-\sqrt{\frac{1}{2b}}=0,
$$
that is, $I=F(a)=\dfrac{1}{\sqrt{2b}}\,e^{-\sqrt{2ab}}$.
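The identity can be verified numerically at a sample point; the truncation $T$ and grid size $N$ below are ad hoc choices:

```python
import math

# Midpoint-rule check of ∫₀^∞ exp(-a/(2t) - bt)/√(2πt) dt = exp(-√(2ab))/√(2b).
def lhs(a, b, N=200000, T=50.0):
    h = T / N
    total = 0.0
    for k in range(N):
        t = (k + 0.5) * h
        total += math.exp(-a / (2 * t) - b * t) / math.sqrt(2 * math.pi * t)
    return total * h  # tail beyond T is of order e^{-bT}, negligible here

a, b = 1.3, 0.7
rhs = math.exp(-math.sqrt(2 * a * b)) / math.sqrt(2 * b)
err = abs(lhs(a, b) - rhs)
```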
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Image of an accumulation point under a continuous function Let $X$ be a compact Hausdorff topological space and let $f:X \to F$ ($F$ can be assumed metric space) be a continuous map. Let $\left(x_n\right)$ be a sequence in $X$. Now, there are two possibilities:
*
*$\left(x_n\right)$ is eventually constant, i.e., $x_n=x_0$ for all but finitely many $n$.
*We can think of $\left(x_n\right)$ as an infinite set in $X$, which has an accumulation point in $X$, say $x_0$.
Now, what can we say about $f(x_0)$?
Can we say that the sequence $(f(x_n))$ converges to $f(x_0)$?
A detailed answer will be of great help. Thanks in advance.
| A bijective continuous map from a compact space to a Hausdorff space is a homeomorphism.
Thus for every open set $U\subset X$, $f(U)$ is open in $F$. So, if every open set containing $x_0$ intersects $X$ (limit point), the image of those open sets (containing $f(x_0)$) will also intersect $f(X)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Does ZFC prove every extensional well-founded out-tree translates into a set? Let $T$ be an extensional well-founded out-tree, where well-founded refers to the absence of infinite branches, and extensional refers to the absence of two isomorphic full subtrees of $T$ whose root nodes are connected to a common node in $T$. By a full subtree of $T$ is meant a subtree of $T$ that has every branch of $T$ stemming from its root node being a branch of it. For any node $n$ in $T$ we call the full subtree of $T$ stemming from it $Tree^T(n)$.
We define a translation function $f$ from nodes of $T$ to a set $x$ as:
$f(root( T)) = x \\ f(n)=f(m) \iff Tree^T(n) \approx Tree^T(m)$
Where $\approx$ stands for isomorphism between trees.
Now we define a graph $G(f)$ on $range(f)$ that has a directed edge between any elements of $range(f)$ if and only if an element of the pre-image (under $f$) of one of them is connected by an edge to an element of the pre-image (under $f$) of the other node, and the direction of that edge in $G(f)$ is the same as that between those two connected nodes in $T$.
Now $f$ would be called a translation from tree $T$ to set $x$ if and only if $G(f)$ is the membership relation on the transitive closure set of $x$.
So we'd say: a tree $T$ translates into a set $x$ if and only if a translation $f$ exists between them.
Does $\sf ZFC$ prove that "every extensional well-founded out-tree translates into a set"?
If we add the above as an axiom to $\sf ZF$, would that suffice to interpret $\sf AC$?
| Every extensional and well-founded relation on a set is isomorphic to a unique transitive set. This is the set-version of Mostowski's collapse lemma (the general version on classes requires that the relation is also set-like: every point has only a set that is "in relation with it", e.g. the class of ordinals and the transitive set is now a transitive class).
So $\sf ZF$ proves that every extensional and well-founded tree is isomorphic to a set. Therefore your questions are answered yes and no respectively.
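For a finite tree, the collapse can even be computed explicitly; the tree and node names below are invented for illustration:

```python
# Mostowski-style collapse of a finite extensional well-founded tree:
# each node maps to the set of collapses of its children (leaves map to ∅).
def collapse(children, node):
    return frozenset(collapse(children, c) for c in children[node])

# root has children a, b; a has one leaf child c; b is a leaf.
# The subtrees at a and b are non-isomorphic, so the tree is extensional.
children = {"root": ["a", "b"], "a": ["c"], "b": [], "c": []}
x = collapse(children, "root")
EMPTY = frozenset()
# the root collapses to {∅, {∅}}, i.e. the von Neumann ordinal 2
```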
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4083921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Area of $(x-3)^2+(y+2)^2<25: (x,y) \in L_1 \cap L_2$ Two lines $(L_1,L_2)$ intersects the circle $(x-3)^2+(y+2)^2=25$ at the points $(P,Q)$ and $(R,S)$ respectively. The midpoint of the line segment $PQ$ has $x$-coordinate $-\dfrac{3}{5}$, and the midpoint of the line segment $RS$ has $y$-coordinate $-\dfrac{3}{5}$.
If $A$ is the locus of the intersections of the line segments $PQ$ and $RS$, then the area of the region $A$ is:
What I've done:
Consider $L_1: y= ax+b$. The midpoint of the chord $PQ$ is $(-\dfrac{3}{5}, -\dfrac{3a}{5}+b)$. Now, using the property that the segment joining the midpoint of a chord to the center of the circle $(3,-2)$ is perpendicular to the chord, we have: $\dfrac{-\dfrac{3a}{5}+b-(-2)}{-\dfrac{3}{5}-(3)} *a = -1$
$$\implies b= \dfrac{3a^2-10a+18}{5a}$$ This means we can eliminate one variable and write the equation of $L_1: y= ax+ \dfrac{3a^2-10a+18}{5a}$
From this form of $L_1$ we can get the value of the minimum value of the $y$-intercept by differentiating $b$. Let $b= f(a) = \dfrac{3a^2-10a+18}{5a}, f'(a) = \dfrac{3a^2-18}{5a^2}, f'(a)=0 \implies a = \pm \sqrt{6}$. I just found this hoping we will get bounds on the y-intercept of $L_1$.
Now lets do the same process for $L_2$.
Consider $L_2: y= cx+d$. The midpoint of the chord $RS$ is $(\dfrac{-\dfrac{3}{5} -d}{c},-\dfrac{3}{5})$. Now, using the property that the segment joining the midpoint of a chord to the center of the circle $(3,-2)$ is perpendicular to the chord, we have: $\dfrac{-\dfrac{3}{5}+2}{\dfrac{-\dfrac{3}{5} -d}{c}-3} *c = -1$
$$\implies d= \dfrac{7c^2-15c-3}{5}$$ This means we can eliminate one variable and write the equation of $L_2: y= cx+ \dfrac{7c^2-15c-3}{5}$
From this form of $L_2$ we can get the value of the minimum value of the $y$-intercept by differentiating $d$. Let $d= f(c) = \dfrac{7c^2-15c-3}{5}, f'(c) = \dfrac{14c-15}{5}, f'(c)=0 \implies c = \dfrac{15}{14}$. I just found this hoping we will get bounds on the y-intercept of $L_2$.
Along with all this, we can set bounds when the line segment is just about to leave the circle (tangent to the circle).
What I can visualize:
Let $X$ = union of all line segments $PQ$.
Let $Y$ = union of all line segments $RS$.
Every point in the intersection of $X$ and $Y$ is a candidate intersection point of the lines $L_1$ and $L_2$. So A = $X \cap Y$
Edit 1: I saw the equation of $L_1$ varying as $a$ varies on DESMOS and think the boundary of the union of all line segments PQ might be an outer circle.
| EDIT (Original answer at the end).
I want to show how the envelope of chords $RS$ (or $PQ$) can be obtained without calculus and without coordinates (see figure below).
Let's start with a chord $AB$ of a circle of centre $O$. For any point $M$ on that chord, we can construct a line $RS$ passing through $M$ and perpendicular to $OM$. We want to find the envelope of all those lines, i.e. the curve which is tangent to all the lines $RS$ as $M$ varies on $AB$.
Consider then another point $M'$ on $AB$ and its associated line $R'S'$. Let $P$ be the intersection of $RS$ and $R'S'$, $T$ the tangency point of $RS$ with the envelope, $T'$ the tangency point of $R'S'$ with the envelope. As $M'$ approaches $M$, both $T'$ and $P$ approach $T$.
But the circle through $OMM'$ also passes through $P$ (because $\angle PMO=\angle PM'O=90°$) and this circle, as $M'\to M$, tends to the circle through $O$ tangent to $AB$ at $M$. Hence $T$, which is the limiting position of $P$, is the intersection of that circle with line $RS$. Moreover, $OT$ is a diameter of that circle.
Now that we know how to construct point $T$ on $RS$,
let's also construct line $HK$, parallel to $AB$ at a distance from it equal to the distance of $O$ from $AB$. If $J$ is the projection of $T$ on it, then $TJ=TO$, because line $CH$ joining the midpoints of the legs of a trapezoid is the arithmetic mean of bases $OK$ and $TJ$.
It follows that point $T$ has the same distance from $O$ and from line $HK$. Hence its locus (which is the envelope) is a parabola, having $O$ as focus and $HK$ as directrix.
ORIGINAL ANSWER.
What you need is the envelope $\gamma_1$ formed by lines $L_1$ and the envelope $\gamma_2$ formed by lines $L_2$.
As the equation of $L_1$ is $y=\left(x+{3\over5}\right)a-2+{18\over5a}$, differentiating w.r.t. $a$ we get: $x+{3\over5}-{18\over5a^2}=0$, which can be solved for $a$:
$$
a^2={18\over 5x+3}.
$$
Inserting this into the equation of $L_1$ we get (after some algebra):
$$
25(2+y)^2={72}(5x+3),
$$
which is the desired envelope $\gamma_1$ (a parabola).
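The elimination of $a$ can be double-checked numerically (a sketch I added, not part of the original answer): for sample values of $x$, recover $a$ from the stationarity condition $a^2=18/(5x+3)$, evaluate the line $L_1$ at that $a$, and verify the resulting point satisfies the parabola's equation.

```python
import math

def envelope_check(x):
    # recover a from the stationarity condition a^2 = 18/(5x + 3) ...
    a = math.sqrt(18 / (5 * x + 3))
    # ... evaluate the line L1:  y = (x + 3/5) a - 2 + 18/(5a) ...
    y = (x + 3 / 5) * a - 2 + 18 / (5 * a)
    # ... and return the residual of the envelope equation 25(2 + y)^2 = 72(5x + 3)
    return 25 * (2 + y) ** 2 - 72 * (5 * x + 3)

for x in (0.0, 1.0, 2.5, 10.0):
    assert abs(envelope_check(x)) < 1e-9
```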
Repeating the same process for $L_2$ we can find the equation of $\gamma_2$ (another parabola):
$$
y=-{5\over28}(3-x)^2-{3\over5}.
$$
Lines $L_1$ and $L_2$ are tangent to their envelope, hence
the area you want to compute is that external to both parabolas but inside the circle.
To compute the area, one has to find the coordinates of the upper intersection $A$ between $\gamma_1$ and the circle,
of the left intersection $C$ between $\gamma_2$ and the circle
and of the intersection $B$ between $\gamma_1$ and $\gamma_2$
lying inside the circle:
$$
A=\left(\frac{4}{5},\frac{6}{5} \sqrt{14}-2\right),\quad
C=\left(3-\frac{6}{5}\sqrt{14},-\frac{21}{5}\right),\quad
B=\left(\frac{1}{5} \left(29-12 \sqrt{7}\right),\frac{2}{5} \left(6
\sqrt{7}-23\right)\right).
$$
Integrating along $y$ the area can then be computed as:
$$
\begin{align}
area&=\int_{y_B}^{y_A}
\left[\left(\frac{5}{72}(y+2)^2-\frac{3}{5}\right)-\left(3-\sqrt{25-(y+2)^2}\right)\right] \, dy \\
&+\int_{y_C}^{y_B}
\left[\left(-\frac{2}{5} \sqrt{-35 y-21}+3\right)-\left(3-\sqrt{25-(y+2)^2}\right)\right] \, dy \\
&=\frac{25 \pi }{4}+4 \sqrt{14 \left(9-4 \sqrt{2}\right)}-\frac{3004}{75}.
\end{align}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Prove the following set is uncountable I need some help with part (b) of the question. Would appreciate feedback on whether my solution to part (a) is correct too.
For each $x \in \mathbb{R}$, define $A_x = \{x + n : n \in \mathbb{Z}\}$. Let $\mathscr{C} = \{A_x : x \in \mathbb{R}\}$.
(a) Prove that $A_x$ is countable for every $x \in \mathbb{R}$.
(b) Prove that $\mathscr{C}$ is uncountable.
You may use without proof the fact that a set $A$ is countable if and only if there is a sequence
$a_0, a_1, a_2, \ldots \in A$ in which every element of $A$ appears.
For part (a), I proved that for all $x \in \mathbb{R}$, $A_x$ can be written as a sequence defined as below:
$c_{2i} = x - i$
$c_{2i+1} = x + i + 1$
i.e. $A_x = \{x, x + 1, x - 1, x + 2, x - 2, x + 3, x - 3, \ldots\}$
For part (b), however, I am stuck on proving that $\mathscr{C}$ is uncountable.
I believe it is probably something to do with cardinality of unions? Since $\mathscr{C}$ is just built from the sets $A_1, A_2, A_3, \ldots$
However, in my current syllabus, one theorem I am taught is that:
"Let A,B be countable infinite sets. Then A U B is countable."
Thank you for taking the time to read this and I appreciate all feedback! Thank you
| For the solution to part (a), the idea is correct, but there is a typo in $c_{2i}=x-i$.
For part (b): $[0,1)$ can be embedded in $\mathscr{C}$ via the map $x\mapsto A_x$.
Let $x,y\in[0,1)$ with $A_x=A_y$; then there must be some $k\in\mathbb{Z}$ with $x=y+k$. As $|x-y|<1$, thus $k=0$ and $x=y$.
This proves the injectivity.
$[0,1)$ is uncountable and so is $\mathscr{C}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Show that Y is a Gaussian process I have a Moving Average of order $q$. That is to say $MA(q)$.
| Possibly the fastest way is to use the direct definition.
Consider the set $$\{Y_{t_1},...,Y_{t_n}\}$$
We need to show that every linear combination of this set is a Gaussian random variable. Take now such a linear combination and denote the set of chosen indices by $A$:
$$\sum_{k\in A} \lambda_k Y_{t_k}$$
But what do we have here? A sum of sums of Gaussian variables times constants, and we know that such a sum is Gaussian by a basic theorem. Hence we are done.
EDIT: To use the theorem that sums of Gaussians are Gaussian, we really need joint Gaussianity of the summands (the $Y$'s themselves are not independent); this holds because everything is a linear form in the independent $Z_t$'s. Let's show why that is. Consider $Y_{t_i},Y_{t_j}$ and show that the sum is indeed Gaussian using the moment generating function. Say that $t_i\leq t_j=t_i+n$
$$E(e^{s(Y_{t_i}+Y_{t_i+n})})=E(e^{s(\sum_{k=0}^q\theta_kZ_{t_i-k}+\sum_{k=0}^q\theta_kZ_{t_i+n-k})})$$
$$=E(e^{s(\sum_{k=0}^{q-n-1}Z_{t_i+n+k}(\theta_k+\theta_{k+n})+\sum_{k=0}^{\min(n-1,q)}\theta_{k} Z_{t_i+k}+\theta_{q-n+1+k} Z_{t_i+q+1+k})})$$
$$=\prod_{k=0}^{q-n-1} E(e^{s(Z_{t_i+n+k}(\theta_k+\theta_{k+n}))})\prod_{k=0}^{\min(n-1,q)}E(e^{s(\theta_{k} Z_{t_i+k})})
\prod_{k=0}^{\min(n-1,q)}E(e^{s(\theta_{q-n+1+k} Z_{t_i+q+1+k})})$$
And this is a product of MGFs of Gaussian variables, hence the sum is Gaussian. It remains to note that sums of $Y$ are always in this form.
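To see concretely why any such linear combination collapses to a single linear form in the underlying i.i.d. Gaussians $Z_s$ (which is what makes it Gaussian), here is a small numerical sketch with illustrative values of $\theta_k$, times $t_k$ and weights $\lambda_k$ (all made up for the example, with $\sigma^2=1$): the variance computed from the collected $Z$-coefficients matches the one computed from the MA($q$) autocovariance $\gamma(h)=\sum_k\theta_k\theta_{k+|h|}$.

```python
from math import isclose

theta = [1.0, 0.6, -0.3, 0.2]   # theta_0..theta_q for q = 3 (illustrative values)
q = len(theta) - 1
times = [5, 6, 9]               # the chosen time indices t_k
lam = [0.7, -1.1, 0.4]          # the weights lambda_k

# Collect, for each underlying Z_s, its total coefficient in sum_k lam_k Y_{t_k}
coef = {}
for l, t in zip(lam, times):
    for k, th in enumerate(theta):
        coef[t - k] = coef.get(t - k, 0.0) + l * th

# Variance of a single linear form in i.i.d. N(0,1) variables
var_direct = sum(c * c for c in coef.values())

# Variance via the MA(q) autocovariance gamma(h) = sum_k theta_k theta_{k+|h|}
def gamma(h):
    h = abs(h)
    return sum(theta[k] * theta[k + h] for k in range(q - h + 1)) if h <= q else 0.0

var_acf = sum(li * lj * gamma(ti - tj)
              for li, ti in zip(lam, times)
              for lj, tj in zip(lam, times))

assert isclose(var_direct, var_acf, rel_tol=1e-12)
```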
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Double integral in polar coordination between two circles Use polar coordinates in $\Bbb{R^2}$ to evaluate
$$\iint_{R} \frac{x^2}{x^2 +y^2} \,dx\,dy$$ where R is the region between the concentric circles of equations $x^2 +y^2=a$ and $x^2 +y^2=b$ with $a<b$ and $(x,y)$ are Cartesian coordinates in $\Bbb{R^2}$
So I know the region is the area between
the smaller circle $$x^2 +y^2=a$$ and the bigger circle
$$x^2 +y^2=b$$ but i'm not sure how to get limits and be able to evaluate it when I don't have any values for a and b which is throwing me off.
| In polar coordinates, the integration region is parametrized as
$$
\left\{x = r\cos\left(\theta\right),\ y = r\sin\left(\theta\right)\right\}, \quad 0\leq\theta\leq 2\pi,\ \sqrt{a}\leq r \leq \sqrt{b},
$$
since the circles $x^2+y^2=a$ and $x^2+y^2=b$ have radii $\sqrt{a}$ and $\sqrt{b}$.
Changing the variables gives
$$
dx\,dy = r\,dr\,d\theta.
$$
$$
\iint_R\frac{x^2}{x^2+y^2}\,dx\,dy = \int_0^{2\pi}\int_{\sqrt{a}}^{\sqrt{b}}\frac{r^2\cos^2\left(\theta\right)}{r^2\cos^2\left(\theta\right) + r^2\sin^2\left(\theta\right)}\,r\,dr\,d\theta = \int_{\sqrt{a}}^{\sqrt{b}}r\,dr\int_0^{2\pi}\cos^2\left(\theta\right)d\theta
$$
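Two sanity checks I'll add here (not part of the original answer): the circles $x^2+y^2=a$ and $x^2+y^2=b$ have radii $\sqrt a$ and $\sqrt b$, so the radial limits are $\sqrt a\le r\le\sqrt b$; and by the symmetry $x\leftrightarrow y$ the integrand and its swap sum to $1$, so the exact value must be half the annulus area, $\frac{\pi(b-a)}{2}$. A quick midpoint-rule check with illustrative $a,b$:

```python
import math

a, b = 1.0, 4.0   # illustrative; circles of radii sqrt(a) = 1 and sqrt(b) = 2

def midpoint(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return h * sum(f(lo + (i + 0.5) * h) for i in range(n))

angular = midpoint(lambda t: math.cos(t) ** 2, 0.0, 2 * math.pi)  # = pi
radial = midpoint(lambda r: r, math.sqrt(a), math.sqrt(b))        # = (b - a)/2
value = angular * radial

assert abs(angular - math.pi) < 1e-6
assert abs(value - math.pi * (b - a) / 2) < 1e-6
```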
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does $\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{2^n}=16$? Here's my question, it is rather straightforward: does $\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{2^n}=16$? First I tested if $\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{2^n}$ diverges to make sure it doesn't add to $\infty$. I found that taking the limit of the terms as $n$ approaches $\infty$ gives no information (the terms tend to $0$, so the divergence test is inconclusive).
Then I expanded the series to get $\frac{2}{1}+\frac{6}{2}+\frac{12}{4}+....$ and found no pattern. My goal is to evaluate the series and find out if it's equal to 16 or not. How should I do that? or is there another approach to the question?
| $$S:=\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{2^n}$$
$$\frac S2:=\sum_{n=0}^{\infty} \frac{(n+1)(n+2)}{2^{n+1}}=\sum_{n=0}^{\infty} \frac{n(n+1)}{2^n}$$
so that
$$S-\frac S2=\sum_{n=0}^{\infty} \frac{2(n+1)}{2^n}.$$
Then
$$T:=\sum_{n=0}^{\infty} \frac{n+1}{2^n}$$
and
$$T-\frac T2=\sum_{n=0}^{\infty} \frac1{2^n}.$$
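Completing the telescoping: $T-\frac T2=\sum 2^{-n}=2$ gives $T=4$, so $\frac S2=2T=8$ and $S=16$. The partial sums confirm this numerically (a quick check I added):

```python
def partial_sum(N):
    return sum((n + 1) * (n + 2) / 2 ** n for n in range(N + 1))

for N in (10, 20, 40):
    print(N, partial_sum(N))          # partial sums approach 16 quickly

assert abs(partial_sum(60) - 16.0) < 1e-9
```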
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Uniqueness of the weak derivative with the fundamental lemma of the Calculus of Variations I have a homework question where I need to prove why a specific integrable (but not continuous) function $u$ on the interval $[-1,1]$ has one and only one weak derivative. This raised the question of how to prove that if $f$ and $g$ are weak derivatives of an arbitrary integrable $u: [a,b] \to \mathbb{R}$, then $f = g$.
Given any $h \in C^1([a,b])$ that is zero at the endpoints, we obtain $\int_a^b [f(x)-g(x)]h(x) \, dx = 0$ from the definition of weak derivative. If $\varphi := f-g$ was continuous, then obviously we could just apply the fundamental lemma and we'd get $f=g$, but what if the finite or countable set $X \subset [a,b]$ of $\varphi$'s discontinuities is non-empty?
I've only looked at the finite case $X:=\{x_1,...,x_{n-1}\}, \, n \geq 2$, and partitioned the interval into the set of open intervals $I_{k} := (x_{k-1}, x_k), \, k=1,...,n$ (where $x_0=a$ and $x_n=b$). $\varphi h$ is continuous on all of these sub-intervals, and I'm trying to use a proof by strong induction: given $1 \leq p \leq n$, assume that $\varphi \equiv 0$ on $n-s$ of these intervals $I_k$ (for all $0 \leq s < p$) implies $\varphi \equiv 0$ on $[a,b]\setminus X$; now prove that if $\varphi \equiv 0$ on $n-p$ of the intervals, then $\varphi \equiv 0$ on $[a,b] \setminus X$. Since $X$ has measure zero, we show that $f=g$ up to a null set.
I think I can prove the case where $X$ is countable after I show it holds for the finite one.
| You'd want to use the fact that $\{h \in C^1([a, b]) | h(a) = h(b) = 0\}$ is a dense subset of $L^1([a, b])$.
Consider the measurable set $S_n = \{x \in [a, b] | f(x) - g(x) > \frac{1}{n}\}$. Then approximate the characteristic function $\chi_{S_n}$ within $\epsilon$ by a function $k \in C^1([a, b])$ such that $k(a) = k(b) = 0$. Then we have $\int (f - g) k\; dx = 0$, so $\int (f - g) \chi_{S_n} \; dx < \epsilon$ for all positive $\epsilon$. And we also have $\int (f - g) \chi_{S_n} \; dx \geq \frac{1}{n} \mu(S_n)$, where $\mu$ is the measure. Thus, we must have $\mu(S_n) = 0$.
Therefore, we have $\mu(\{x \in [a, b] | f(x) - g(x) > 0\}) = \mu(\bigcup\limits_{n = 0}^\infty S_n) \leq \sum\limits_{n = 0}^\infty \mu(S_n) = 0$.
Similarly, we have $\mu(\{x \in [a, b] | f(x) - g(x) < 0\}) = 0$.
Thus, we see $f = g$ almost everywhere.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sufficient criteria to know if a complex measure is the zero measure I'm trying to see if the following condition is enough to determine if a complex measure in $\mathbb{R}^n$ is the zero measure, however I dont see a clear way to handle the question. Suppose that $\int p\mathop{}\!d \mu=0$ for any complex polynomial $p$, can we say that $\mu$ is the zero measure? Some help will be appreciated.
An idea could be to see if the integral of any simple function is zero knowing that the integral of any polynomial is zero, however it doesn't seem true that a simple function is the limit of a sequence of polynomials.
| If the measure $\mu$ has compact support (or rather its variation $|\mu|$ has compact support), then the answer is in the affirmative, as the Stone–Weierstrass theorem shows.
The result in general is not true. This is related to the moment problem in probability (the Stieltjes moment problem on $(0,\infty)$, the Hamburger moment problem on $\mathbb{R}$).
Here is a counterexample by Heyde:
Consider the lognormal distribution whose density is
$$f_0(x)=\frac{1}{\sqrt{2\pi}}\frac{1}{x}\exp(-\frac12 \log^2x)\mathbb{1}_{(0,\infty)}(x)$$
for $|a|<1$ let
$$f_a(x)=f_0(x)\big(1+a\sin(2\pi\log x)\big)$$
It is not difficult to check that
$$\int^\infty_0 f_0(x)x^n \sin(2\pi\log x)\,dx=0,\qquad n=0,1,\ldots$$
So, the measure $\mu(dx)=(f_0(x)-f_a(x))\,dx$ is not $0$ and $\mu(p)=0$ for any polynomial.
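Heyde's orthogonality relation can be checked numerically (a sketch I added): substituting $x=e^t$ turns the integral into $\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{nt-t^2/2}\sin(2\pi t)\,dt$, which composite Simpson integration confirms vanishes for small $n$.

```python
import math

def heyde_integrand(t, n):
    # substituting x = e^t in  f0(x) x^n sin(2*pi*log x) dx  gives this integrand
    return math.exp(n * t - t * t / 2) * math.sin(2 * math.pi * t) / math.sqrt(2 * math.pi)

def simpson(f, lo, hi, n=100_000):
    # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

for n in range(4):
    assert abs(simpson(lambda t: heyde_integrand(t, n), -12.0, 14.0)) < 1e-8
```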
There are other nice counterexamples. The book Counterexamples in Probability by Stoyanov is a good place to see more of them.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4084901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show $f$ is discontinuous at every $x_{0}$ in $\mathbb{R}$. Let $f(x)=1$ for rational numbers x, and $f(x)=0$ for irrational numbers. Show $f$ is discontinuous at every $x_{0}$ in $\mathbb{R}$.
I started with the negation of $f$'s discontinuity first.
Proof: Suppose $f$ is continuous at some $x_{0}$ in $\mathbb{R}$.
Then, for all $\varepsilon > 0 $, there exists $\delta > 0$, s.t for all $x$ in $\mathbb{R}$ that satisfies $|x-x_{0}| < \delta$, we have $|f(x)-f(x_{0})| < \varepsilon$.
Let $\varepsilon=1/2$.
We know that any ball $B_{\delta}(x_{0})$, for any $x_{0}$ in $\mathbb{R}$, always contains both a rational number and an irrational number.
Then, for any $x_{0}$, we can pick $x \in B_{\delta}(x_{0})$ of the opposite type (irrational if $x_{0}$ is rational, rational if $x_{0}$ is irrational), so that $|f(x)-f(x_{0})|=|1-0|=1$.
However, $1>1/2$ is a contradiction.
Hence, $f$ is discontinuous at every $x_{0}$ in $\mathbb{R}$.
$$\blacksquare$$
Well, it's a very well-known problem, but I did not want to use sequence $(x_{n})$ that converges to $x_{0}$ because not using it seems a little bit easier for me to understand.
Does my proof make sense? Is there a simpler, or shorter approach for this kind of problem?
| Here it is another way to approach it for the sake of curiosity.
Suppose that $f$ is continuous at $q\in\mathbb{Q}$.
Then for every sequence $q_{n}$ which converges to $q$, the sequence $f(q_{n})$ must converge to $f(q) = 1$.
Based on such assumption, we shall consider the following sequences:
\begin{align*}
\begin{cases}
x_{n} = q + \dfrac{1}{n}\\\\
y_{n} = q + \dfrac{\sqrt{2}}{n}
\end{cases}
\end{align*}
Notice that both $x_{n}$ and $y_{n}$ converge to $q$, but each $x_{n}$ is rational and each $y_{n}$ is irrational.
Consequently, we have that $f(x_{n}) = 1$ and $f(y_{n}) = 0$, which contradicts our primary assumption.
Similar reasoning applies to the irrational values. Thus $f$ is everywhere discontinuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Symmetry of a line about the Origin I was reading about derivatives of even and odd functions. I realized that if $f(x)$ is an odd function then $f'(x)$ is an even function. I'm trying to understand this intuitively. So I want to know why the slope of the tangent line to the function at a specific point $x_0>0$ is the same as the slope of the tangent line at $-x_0$. In fact, I don't completely understand why the reflection of a line through the origin has the same slope as the original line.
|
So I want to know that why the slope of the tangent line to function for the specific point $x_0>0$ is the same as slope of tangent line at $-x_0$ to the function.
The reason for this lies in the fact that even functions are symmetric with respect to the $y$-axis; that's why $f'(x)=f'(-x)$: the slope of the tangent is the same on both sides.
For better understanding assume that $f(x)=\sin x$ therefore $f'(x)=\cos x$
Now lets draw tangent to two points on $\sin x$ :- $(\pi/4,1/\sqrt2)$, $(-\pi/4,-1/\sqrt2)$
Now the slope of the tangent will be the same for both points precisely because the graph of $\cos x$ is symmetric with respect to the $y$-axis, so the value of the slope, i.e. the value of $\cos x$, is the same for both $\pi/4$ and $-\pi/4$. Here is a graph for you to understand better.
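For a calculus-style reason: differentiating the identity $f(-x)=-f(x)$ for odd $f$ gives $-f'(-x)=-f'(x)$, i.e. $f'(-x)=f'(x)$. A quick finite-difference check on an odd function (my example, not from the answer):

```python
import math

def f(x):                      # an odd function: f(-x) = -f(x)
    return x ** 3 + math.sin(x)

def deriv(g, x, h=1e-6):       # central-difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x0 in (0.3, 0.75, 1.2, 2.0):
    # the slope at -x0 matches the slope at x0
    assert abs(deriv(f, x0) - deriv(f, -x0)) < 1e-6
```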
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Conditional constraint activated by binary variable For each time step $t$, $T_1(t),...,T_n(t)$ are continuous variables, $z(t)$ are binary variables. $T_c(t)$ is known. I am trying to express the following constraint in a Linear Programm.
$$
T_i(t+1) = \left\{\begin{array}{ll}T_c(t) & \text{if } z_t = 1\\T_i(t) & \text{if } z_t = 0\end{array}\right.
$$
Any hints?
Many Thanks
| You have the bilinear equality constraint $T_i(t+1) = z_t T_c(t) +(1-z_t)T_i(t)$. In this, you can linearize the binary times continuous expression using a standard big-M model.
https://or.stackexchange.com/questions/39/how-to-linearize-the-product-of-a-binary-and-a-non-negative-continuous-variable
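Spelling out the standard linearization (a sketch; $M$ is any valid a priori bound with $|T_i(t)|\le M$): only the product $z_t T_i(t)$ needs an auxiliary continuous variable $w_t$, since $z_t T_c(t)$ is a binary times a known constant and is already linear.

```latex
\begin{aligned}
& T_i(t+1) = z_t\,T_c(t) + T_i(t) - w_t,\\
& -M z_t \;\le\; w_t \;\le\; M z_t,\\
& T_i(t) - M(1-z_t) \;\le\; w_t \;\le\; T_i(t) + M(1-z_t).
\end{aligned}
```

When $z_t=1$ the last pair forces $w_t=T_i(t)$, giving $T_i(t+1)=T_c(t)$; when $z_t=0$ the second pair forces $w_t=0$, giving $T_i(t+1)=T_i(t)$.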
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Guidance for a complex number proof I am given $\left(x+iy\right)^{\frac{1}{3}}=a+ib$, and I need to prove $4\left(a^{2\ }-b^{2}\right)=\frac{x}{a}+\frac{y}{b}$.
The first "key" thing (I hope it's actually useful!) I notice is that we are only considering the real parts (I'm pretty sure, anyway).
I tried "cheating" by extending the LHS and RHS to $4\left(a-b\right)\left(a+b\right)=\frac{xb+ya}{ab}$, but I don't think that's particularly helpful for me because I don't immediately know what to do.
I also tried rewriting the given pieces of information: $\left(a+ib\right)^{3}$ to find
$x=a^{3}-3ab^{2}$
$y=3a^{2}b-b^{3}$
Right now my plan in to get everything on LHS then RHS in terms of $x$ and $y$ to show they are equal, but I am having trouble with that. If you have a better method as well, feel free to comment.
| You already (correctly) figured out that $x+iy = (a+ib)^3$ implies
$$
x = a(a^2-3b^2) \, ,\\
y = b(3a^2 -b^2) \, .
$$
It follows that
$$
\frac x a + \frac y b = (a^2-3b^2)+ (3a^2 -b^2) = 4(a^2 -b^2)
$$
Another way is to compute
$$
(x+iy)(a+ib) = (a+ib)^4 = (a^4-6a^2b^2+b^4) + i(4a^3 b-4ab^3)
$$
and compare the imaginary parts:
$$
xb + ya = 4a^3 b-4ab^3 = 4ab (a^2-b^2) \, .
$$
If $ab \ne 0$ then the desired formula follows.
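Both routes can be confirmed numerically (an illustrative sketch, not part of the original solution): pick $a,b$ with $ab\neq0$, set $x+iy=(a+ib)^3$, and check the identity.

```python
def check(a, b):
    z = complex(a, b) ** 3          # x + iy = (a + ib)^3
    x, y = z.real, z.imag
    # residual of 4(a^2 - b^2) = x/a + y/b
    return 4 * (a * a - b * b) - (x / a + y / b)

for a, b in ((1.0, 2.0), (0.5, -1.5), (3.0, 0.25)):
    assert abs(check(a, b)) < 1e-9
```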
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Simple Compactness and Continuity Proof Verification I'm given the following:
Let $X$ be a compact metric space, $g:X\rightarrow\mathbb{R}$, $g$ continuous and $g(x)\ne0$ $\forall x\in X$.
And I need to prove that there exists $\delta>0$ such that $|g(x)|\ge\delta$ $\forall x\in X$.
I've gone about it using a few results.
Proof: Consider $|g|:X\rightarrow\mathbb{R}$. Since $g$ is continuous, $|g|$ is also continuous. Furthermore, since the image of a continuous function with a compact domain is itself compact, $|g|$ maps $X$ to a compact subset of $\mathbb{R}$. By Heine-Borel, compactness in $\mathbb{R}$ is equivalent to closed and boundedness. Then, since $|g(x)|$ is bounded from below, we can pick $\delta>0$ such that $\delta=\inf_{x\in X}|g(x)|$, so we get $|g(x)|\ge\delta$ for all $x\in X$.
Does this proof suffice?
| Edit:
A real-valued continuous function on a compact space attains its infimum (because $|g|(X)$ is compact in $\mathbb{R}$, hence closed and bounded), so there exists $p \in X$ such that $|g(p)|=\inf_{x \in X}|g(x)|$. Then $|g(p)|>0$ by your assumption, so any $\delta$ with $0<\delta\le|g(p)|$ satisfies $|g(x)|\geq |g(p)|\ge\delta$ for all $x \in X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limit of $\underset{n\to \infty }{\text{lim}}\frac{\ln (n+1)}{\ln (n)}$ without L'Hôpital I intuitively understand that the limit goes to 1 and I can solve with L'Hôpital but I can't without it.
I tried setting it equal to $y$ and raising $e$ to both sides, but that doesn't seem to work.
$\underset{n\to \infty }{\text{lim}}\frac{\ln (n+1)}{\ln (n)}$
| Hint: $\displaystyle\log(n+1)=\log\left(n\left(1+\frac1n\right)\right)=\log(n)+\log\left(1+\frac1n\right)$
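The hint gives $\frac{\log(n+1)}{\log(n)} = 1 + \frac{\log\left(1+\frac1n\right)}{\log(n)}$, and since $\log(1+\frac1n)<\frac1n$ the correction term vanishes like $\frac{1}{n\log n}$. A quick numerical check (my addition):

```python
import math

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    ratio = math.log(n + 1) / math.log(n)
    # ratio - 1 = log(1 + 1/n)/log(n) < 1/(n log n)
    assert 0 < ratio - 1 < 2 / (n * math.log(n))
```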
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4085891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Prove that $ \intop_{\gamma}fdz=0 $ for a complex function Let $ f: \mathbb{C} \to \mathbb{C} $ be holomorphic and $ C_{\mathbb{C}}^{1} $ function. Let $ \gamma $ be a paramaterization of rectangle boundary (with clockwise direction). Prove that $$ \intop_{\gamma}f\left(z\right)dz=0 $$
Using Green's theorem.
I know that I should write $ f\left(x,y\right)=u\left(x,y\right)+iv\left(x,y\right) $ and then probably after Using Green's theorem, Cauchy Riemmman equation would give me that the integral vanish, but Im not sure how to get there. if $\gamma $ is the rectangle boundary, and say $ z(t), a\leq t\leq b $ is the paramaterization, then by definition $$ \intop_{\gamma}f\left(z\right)dz=\intop_{a}^{b}f\left(z\left(t\right)\right)z'\left(t\right)dt $$
Now we can write $ z\left(t\right)=x\left(t\right)+iy\left(t\right) $ and $ f\left(x,y\right)=u\left(x,y\right)+iv\left(x,y\right) $ which will lead us to $$ \intop_{a}^{b}f\left(z\left(t\right)\right)z'\left(t\right)dt=\intop_{a}^{b}[u\left(x\left(t\right),y\left(t\right)\right)+iv\left(x\left(t\right),y\left(t\right)\right)][x'\left(t\right)+iy'\left(t\right)]dt $$
$$ =\intop_{a}^{b}u\left(x\left(t\right),y\left(t\right)\right)x'\left(t\right)-v\left(x\left(t\right),y\left(t\right)\right)y'\left(t\right)dt+i\intop_{a}^{b}u\left(x\left(t\right),y\left(t\right)\right)y'\left(t\right)+v\left(x\left(t\right),y\left(t\right)\right)x'\left(t\right)dt $$
$$ =\intop_{\gamma}u\left(x,y\right)dx-v\left(x,y\right)dy+i\intop_{\gamma}u\left(x,y\right)dy+v\left(x,y\right)dx $$
And using Green's theorem:
$$ =\intop_{R}\left(\frac{d}{dx}u\left(x,y\right)+\frac{d}{dy}v\left(x,y\right)\right)dxdy+i\intop_{R}\left(\frac{d}{dx}v\left(x,y\right)-\frac{d}{dy}u\left(x,y\right)\right)dxdy $$
By Cauchy-Riemman we have
$$ \frac{d}{dx}u\left(x,y\right)=\frac{d}{dy}v\left(x,y\right),\thinspace\thinspace\thinspace\thinspace\frac{d}{dy}u\left(x,y\right)=-\frac{d}{dx}v\left(x,y\right) $$
Which would be exactly what I need if it were multiplied by $-1$. So where is my mistake?
If you have simpler way, I'd love to see it.
Note that this is just the beggining of the complex analysis course that Im taking, so we are not allowed to use the fact that holomorphic function is analytic.
| The error comes from using Green's theorem incorrectly. It should be
$$
\int_\gamma Adx + Bdy = \int_R \left(\frac{\partial}{\partial x}B - \frac{\partial}{\partial y}A\right)dxdy.
$$
Applying this to your formula, you should obtain
\begin{align}
\intop_{\gamma}u\left(x,y\right)dx-v\left(x,y\right)dy =\intop_{R}\left(-\frac{d}{dx}v\left(x,y\right)-\frac{d}{dy}u\left(x,y\right)\right)dxdy\\
i\intop_{\gamma}u\left(x,y\right)dy+v\left(x,y\right)dx = i\intop_{R}\left(\frac{d}{dx}u\left(x,y\right)-\frac{d}{dy}v\left(x,y\right)\right)dxdy.
\end{align}
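The corrected computation is consistent with a direct numerical experiment (a sketch I added): integrating a few entire functions around a rectangle with Simpson's rule gives $0$ to machine precision, as Cauchy's theorem predicts.

```python
import cmath

def edge_integral(f, z0, z1, n=2000):
    # composite Simpson's rule (n even) for the integral of f(z) dz along z0 -> z1
    g = lambda t: f(z0 + t * (z1 - z0)) * (z1 - z0)
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    for i in range(1, n):
        s += g(i * h) * (4 if i % 2 else 2)
    return s * h / 3

def rectangle_integral(f, corners):
    # sum the four edges, traversed in order
    return sum(edge_integral(f, corners[i], corners[(i + 1) % 4]) for i in range(4))

corners = [0 + 0j, 2 + 0j, 2 + 1j, 0 + 1j]   # a sample rectangle
for f in (lambda z: z ** 2, cmath.exp, cmath.cos):
    assert abs(rectangle_integral(f, corners)) < 1e-9
```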
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the Lebesgue measure of a open interval intersected with a generalized Cantor set with positive Lebesgue measure? In Folland's book Real Analysis: Modern Techniques and Their Applications, p. 39 has an explanation of how to construct a generalized Cantor set with positive measure.
For reference, the construction of the generalized Cantor set involves starting with $K_0 = [0,1]$ removing the open interval of length $\alpha_1$ (for $\alpha_1 \in (0,1)$) centered at the midpoint, and at each step $j$, creating $K_j$ by removing the open middle $\alpha_j^{th}$ from each interval in $K_{j-1}$.
After we construct a generalized Cantor set $K$ with positive measure $\beta \in (0,1)$, I want to know how to calculate the measure of any open interval, call it $V$ intersected with $K$. I am guessing that it is just the length of the $V$ inside $[0,1]$ times $\beta$, and I would like to know how to rigorously show this, assuming that my guess is correct. Thanks.
| The guess is not correct. Choose an odd $n=2m+1\in\Bbb Z^+$ large enough so that $\frac1n<\alpha_1$. Then
$$\beta=\sum_{k=0}^{n-1}m\left(K\cap\left(\frac{k}n,\frac{k+1}n\right)\right)\,,$$
so
$$m\left(K\cap\left(\frac{k}n,\frac{k+1}n\right)\right)>0$$
for some $k\in\{0,\ldots,n-1\}$, but
$$m\left(K\cap\left(\frac{m}n,\frac{m+1}n\right)\right)=m(\varnothing)=0\,,$$
since $\left(\frac{m}n,\frac{m+1}n\right)$ is centered at $\frac12$ with length $\frac1n<\alpha_1$ and so lies inside the interval removed at the first step. As both intervals have the same length $\frac1n$, the measure of $K\cap V$ cannot be proportional to the length of $V$.
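The phenomenon is easy to see on a concrete fat Cantor set. Below is a sketch (my construction choice, not from the answer) using the Smith–Volterra–Cantor set, where step $j$ removes middle intervals of length $4^{-j}$, so $m(K)=\frac12$: already the finite approximation $K_8\supseteq K$ shows that the middle interval $(\frac25,\frac35)$ meets $K$ in measure zero, while the same-length interval $(0,\frac15)$ meets it in positive measure.

```python
from fractions import Fraction as F

def svc_intervals(steps):
    # Smith–Volterra–Cantor construction: at step j remove the middle open
    # interval of length 4**(-j) from each remaining closed interval.
    intervals = [(F(0), F(1))]
    for j in range(1, steps + 1):
        gap = F(1, 4 ** j)
        nxt = []
        for lo, hi in intervals:
            mid = (lo + hi) / 2
            nxt.append((lo, mid - gap / 2))
            nxt.append((mid + gap / 2, hi))
        intervals = nxt
    return intervals

def meas_in(intervals, lo, hi):
    # Lebesgue measure of (union of the intervals) intersected with (lo, hi)
    return sum(max(F(0), min(hi, b) - max(lo, a)) for a, b in intervals)

K8 = svc_intervals(8)                    # K_8 contains K, with m(K_8) = 1/2 + 2^(-9)
beta_upper = meas_in(K8, F(0), F(1))
assert beta_upper == F(1, 2) + F(1, 512)

# With n = 5, the middle interval (2/5, 3/5) sits inside the first removed
# gap (3/8, 5/8), so K (a subset of K_8) misses it entirely:
assert meas_in(K8, F(2, 5), F(3, 5)) == 0

# while the same-length interval (0, 1/5) meets even K itself in positive
# measure, since m(K ∩ (0,1/5)) >= left - m(K_8 \ K):
left = meas_in(K8, F(0), F(1, 5))
assert left > beta_upper - F(1, 2)
```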
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Series $A= \sum_{n=1}^{\infty}\left(n^\frac{1}{n^2+1}-1\right)$ Consider convergence of series:
$A=\displaystyle \sum_{n=1}^{\infty}\left(n^\frac{1}{n^2+1}-1\right)$
I have a idea
\begin{align*}
a_n&=n^\frac{1}{n^2+1}-1
\\&=e^{\frac{\ln n}{n^2+1}}-1 \sim b_n= \dfrac{\ln\,n}{n^2+1} \text{ when } n \to \infty
\end{align*}
I wanna consider convergence of series $B=\displaystyle \sum_{n=1}^{\infty}b_n$. I have trouble here.
| To help with the convergence of $$B=\sum_{n=1}^{\infty}\left(\frac{\ln(n)}{n^2+1}\right)$$ note that $$\sum_{n=1}^{\infty}\left(\frac{\ln(n)}{n^2+1}\right)\leq\sum_{n=1}^{\infty}\left(\frac{\ln(n)}{n^2}\right)$$.
For the series on the right, we can use the integral test, considering: $$\int_{1}^{\infty}\frac{\ln(x)}{x^2}\;\mathrm{d}x$$
$$=\int_{1}^{\infty}\frac{-\ln\left(\frac{1}{x}\right)}{x^2}\;\mathrm{d}x$$.
Using the substitution $u=\frac{1}{x},\;\frac{\mathrm{d}u}{\mathrm{d}x}=-\frac{1}{x^2}$, we get
$$\int_{1}^{0}\ln(u)\;\mathrm{d}u$$
$$=-\int_{0}^{1}\ln(u)\;\mathrm{d}u$$
which through integration by parts obtains
$$\left[u(1-\ln(u))\right]_0^{1}$$
$$=1-\lim_{u\to 0}\left(u(1-\ln(u))\right)$$
$$=1-\lim_{u\to 0}\left(\frac{1-\ln(u)}{u^{-1}}\right)$$
$$=1-\lim_{u\to 0}\left(\frac{-u^{-1}}{-u^{-2}}\right)\;\;\;\text{(L'Hôpital's Rule)}$$
$$=1-\lim_{u\to 0}\left(u\right)$$
$$=1$$
which is finite. Hence, by the integral test, $$\sum_{n=1}^{\infty}\left(\frac{\ln(n)}{n^2}\right)$$ converges and therefore by the comparison test, your series $$B=\sum_{n=1}^{\infty}\left(\frac{\ln(n)}{n^2+1}\right)$$ also converges. Hope this helps as a hint.
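Numerically the original series also behaves as the comparison predicts (a sketch I added): partial sums of $a_n=n^{1/(n^2+1)}-1$ are increasing and change by less than $10^{-3}$ between $N=20000$ and $N=40000$, consistent with the tail bound $\sum_{n>N}\frac{\ln n}{n^2}\approx\frac{\ln N}{N}$.

```python
def a(n):
    # a_n = n^(1/(n^2+1)) - 1, which is ~ ln(n)/n^2 for large n
    return n ** (1 / (n * n + 1)) - 1

s1 = sum(a(n) for n in range(1, 20001))
s2 = sum(a(n) for n in range(1, 40001))

# terms are positive (zero at n = 1), so partial sums increase,
# and the tail beyond N = 20000 is below 1e-3
assert 0 < s1 < s2 < s1 + 1e-3
```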
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to prove that $\sqrt{2-\sqrt{2}} \in \mathbb{Q}(\sqrt{2+\sqrt{2}})$ I am trying to prove a statement about the decomposition field of a polynomial that has both $\sqrt{2-\sqrt{2}}$ and $\sqrt{2+\sqrt{2}}$ as roots. I cannot find a way to prove that $\sqrt{2-\sqrt{2}} \in \mathbb{Q}(\sqrt{2+\sqrt{2}})$. I have tried writing it in the basis $1,\sqrt{2+\sqrt{2}},(\sqrt{2+\sqrt{2}})^2,(\sqrt{2+\sqrt{2}})^3$ but nothing works and without this I cannot prove that the decomposition field of $t^4-4t^2+2$ is $\mathbb{Q}(\sqrt{2+\sqrt{2}})$ over $\mathbb{Q}$
| Using the hint given by Bart Michels, we have$$
\sqrt{2-\sqrt{2}}\sqrt{2+\sqrt{2}}=\sqrt{2}\\
\sqrt{2+\sqrt{2}}\left(\sqrt{2-\sqrt{2}}\right)-\sqrt{2}=0
$$
Since $\sqrt{2}=\left(\sqrt{2+\sqrt{2}}\right)^2-2\in\mathbb{Q}(\sqrt{2+\sqrt{2}})$, this is a linear equation for $\sqrt{2-\sqrt{2}}$ with coefficients in $\mathbb{Q}(\sqrt{2+\sqrt{2}})$; its minimal polynomial over that field is linear, and so it must be an element of the field.
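In fact one can write the element explicitly in the basis $\{1,\alpha,\alpha^2,\alpha^3\}$ with $\alpha=\sqrt{2+\sqrt2}$: using $\alpha^4=4\alpha^2-2$ one checks that $\sqrt{2-\sqrt2}=\alpha^3-3\alpha$ (my computation, worth double-checking). A quick numerical confirmation:

```python
import math

alpha = math.sqrt(2 + math.sqrt(2))
target = math.sqrt(2 - math.sqrt(2))

# alpha is a root of t^4 - 4t^2 + 2 ...
assert abs(alpha ** 4 - 4 * alpha ** 2 + 2) < 1e-12
# ... and sqrt(2 - sqrt 2) = alpha^3 - 3*alpha, an element of Q(alpha)
assert abs(alpha ** 3 - 3 * alpha - target) < 1e-12
```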
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Cancellation property for direct products I must be misunderstanding something very elementary, because every proof I see of this uses advanced methods (including the one in my course notes). Suppose $G, H, K$ are groups such that $G \times H \cong G \times K$. We have to prove that $K \cong H$. Now we know $G \times \{ 1 \} $ is a normal subgroup of the direct product, so we can cancel it out. By the second isomorphism theorem we obtain:
$(G \times H)/G \times \{ 1 \} \cong (G \times \{ 1 \})(\{ 1 \} \times H)/G \times \{ 1 \} \cong \{ 1 \} \times H$
and likewise for $K$. Consequently, $\{ 1 \} \times K \cong \{ 1 \} \times H$ and $K \cong H$.
What is wrong with this proof?
| While it is true that $G\times H/H\cong G\times K/K$, this in no way implies that $H\cong K$. In fact, this cancellation property you are speaking of is simply not true. Take $G=\prod\limits_{i=1}^\infty \mathbb{Z},\ H=\mathbb{Z},\ K=\mathbb{Z\times Z.}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
How to prove an identity involving a finite sum of binomial coefficients I’m struggling to prove this identity $\displaystyle\sum_{m=1}^{n}{\left(\binom nm\frac{{{\left( -1 \right)}^{m-1}}n!}{m} \right)}=\sum_{m=0}^{n-1}{\frac{n!}{n-m}}$. I do understand that it equals $\begin{bmatrix} n+1 \\ 2 \end{bmatrix}$ (an unsigned Stirling number of the first kind), but if possible, I would like to find a proof without explicitly using Stirling numbers. Any help would be appreciated.
| We can divide both sides of the identity by $n!$; we then want to show
\begin{align*}
\sum_{m=1}^n\binom{n}{m}\frac{(-1)^{m-1}}{m}=\sum_{m=0}^{n-1}\frac{1}{n-m}\tag{1}
\end{align*}
We start with the left-hand side of (1) and obtain
\begin{align*}
a_n&=\color{blue}{\sum_{m=1}^n\binom{n}{m}\frac{(-1)^{m-1}}{m}}\\
&=\sum_{m=1}^n\left(\binom{n-1}{m}+\binom{n-1}{m-1}\right)\frac{(-1)^{m-1}}{m}\\
&=a_{n-1}+\sum_{m=1}^{n}\binom{n-1}{m-1}\frac{(-1)^{m-1}}{m}\\
&=a_{n-1}-\frac{1}{n}\sum_{m=1}^n\binom{n}{m}(-1)^{m}\tag{2}\\
&=a_{n-1}-\frac{1}{n}\left((1-1)^n-1\right)\\
&=a_{n-1}+\frac{1}{n}\\
&\,\,\color{blue}{=H_n}
\end{align*}
with $H_n=1+\frac{1}{2}+\cdots+\frac{1}{n}$ the $n$-th Harmonic number.
The right-hand side gives
\begin{align*}
\color{blue}{\sum_{m=0}^{n-1}\frac{1}{n-m}}&=\sum_{m=0}^{n-1}\frac{1}{m+1}\tag{3}\\
&=\sum_{m=1}^{n}\frac{1}{m}\tag{4}\\
&\,\,\color{blue}{=H_n}
\end{align*}
and the claim (1) follows.
Comment:
*
*In (2) we use the binomial identity $\binom{p}{q}=\frac{p}{q}\binom{p-1}{q-1}$.
*In (3) we switch the order of summation $m\to n-1-m$.
*In (4) we shift the index to start with $m=1$.
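The identity (and the common value $n!\,H_n$) is easy to confirm exactly for small $n$ with rational arithmetic (a sketch I added):

```python
from fractions import Fraction as F
from math import comb, factorial

def lhs(n):
    return sum(F(comb(n, m)) * F((-1) ** (m - 1) * factorial(n), m)
               for m in range(1, n + 1))

def rhs(n):
    return sum(F(factorial(n), n - m) for m in range(n))

for n in range(1, 12):
    harmonic = sum(F(1, k) for k in range(1, n + 1))   # H_n
    # both sides agree and equal n! * H_n
    assert lhs(n) == rhs(n) == factorial(n) * harmonic
```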
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4086905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Why is $I$ the only idempotent matrix with nonzero determinant? While reading about the attributes of the identity matrix, it's mentioned that $I$ is not only idempotent but that it is also the only such matrix that does not have a determinant of zero. While $I$ being idempotent is simple to understand, how does one prove that every other matrix with nonzero determinant fails to be idempotent?
| Idempotent means
$$M^2 = M$$ which means $$(M-I) M = 0.$$ So, if the determinant of $M$ is not $0$, then $M$ is invertible, and multiplying on the right by $M^{-1}$ gives $M - I = 0$, that is, $M = I$.
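A tiny numerical illustration (examples of my choosing): the $2\times2$ idempotents below other than $I$ all have determinant $0$.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

I = [[1, 0], [0, 1]]
P = [[1, 0], [0, 0]]   # projection onto the x-axis
Q = [[1, 1], [0, 0]]   # a non-symmetric idempotent

for M in (I, P, Q):
    assert matmul(M, M) == M     # M is idempotent
    if M == I:
        assert det2(M) != 0      # det I = 1
    else:
        assert det2(M) == 0      # every other idempotent here is singular
```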
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087136",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Is the period of an irreducible Markov Chain the same for all states? I'm a bit confused about this theorem about irreducible Markov Chains:
If a Markov chain is irreducible, then all its states have the
same period $d(i) := g.c.d.\{n > 0|P^n(i, i) > 0\}$.[source]
Let's suppose we have the following matrix which represents an irreducible Markov Chain:
\begin{equation}
P = \begin{bmatrix}
0 & 0 & 0.8 & 0.2\\
0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0
\end{bmatrix}
\end{equation}
if we start from state 1, we can reach it again after two transitions:
\begin{equation}
1\rightarrow 3 \rightarrow 1
\end{equation}
but if we start from state 2, we can only reach it again after 4 transitions:
\begin{equation}
2\rightarrow 3 \rightarrow 1 \rightarrow 4 \rightarrow 2
\end{equation}
So what is the period of the system? Is it 2 or 4? We can argue that it is 4 because 4 is a multiple of two and it is acceptable for both cases. But using the prementioned theorem $d(1) = 2$ and all other states should follow. What is it that I am getting wrong here?
| The period is $2$, as given by the gcd formula. Note that the period is the gcd of all return times, not the minimum return time: from state $2$ you can return in $4$ steps ($2\rightarrow 3\rightarrow 1\rightarrow 4\rightarrow 2$), but also in $6$ steps ($2\rightarrow 3\rightarrow 1\rightarrow 3\rightarrow 1\rightarrow 4\rightarrow 2$), and $\gcd(4,6)=2$. A Markov chain is periodic if the chain can return to the same state only at multiples of some integer $>1$. See https://www.randomservices.org/random/markov/Periodicity.html
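One can also compute the periods directly from powers of $P$, taking the gcd of all $n\le 30$ with $P^n(i,i)>0$ (a small sketch I added; pure Python, no libraries):

```python
from math import gcd

P = [[0, 0, 0.8, 0.2],
     [0, 0, 1.0, 0.0],
     [1.0, 0, 0, 0],
     [0, 1.0, 0, 0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def period(i, max_n=30):
    # gcd of all n <= max_n with P^n(i, i) > 0
    d, Pn = 0, P
    for n in range(1, max_n + 1):
        if Pn[i][i] > 0:
            d = gcd(d, n)
        Pn = matmul(Pn, P)
    return d

# every state has period 2, even though state 2's shortest return takes 4 steps
assert [period(i) for i in range(4)] == [2, 2, 2, 2]
```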
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$X \times \{y\}$ is homeomorphic to $X$ Can I get a proof verification? The only thing I am unsure about is the proof of continuity of $f,f^{-1}$. I know the proof of bijectiveness is trivial.
Prove:$X \times \{y\}$ is homeomorphic to $X$.
Attempt: Take the fact that we can prove continuity by showing that the inverse images of basis elements are open. Define the map $f:X \times \{y\} \rightarrow X$ by $f(x,y)=x$. Then $f$ is one to one since $f(x,y)=f(w,y)\implies x=w$ and so $(x,y)=(w,y)$. $f$ is surjective, since for any $x \in X, f(x,y)=x$. So $f$ is bijective. Let $U$ be an open set in $X$. Then $f^{-1}(U)=U \times \{y\}=(X \times \{y\}) \cap (U \times Y)$, which is open in the subspace topology, so $f$ is continuous. Let $W$ be any open set in $X \times \{y\}$. Then $W=(U \times V) \cap (X \times \{y\})$ where $U \times V$ is open in the product topology. Then $f(W)=U$ is open in $X$ and so $f^{-1}$ is continuous.
| Most of it is fine, but the proof that $f$ is an open map needs a bit more work. The problem is that it isn’t immediately obvious that an open set $W$ in $X\times\{y\}$ is of the form $(U\times V)\cap(X\times\{y\})$ for some open $U\subseteq X$ and $V\subseteq Y$: all you really know is that $W=U\cap(X\times\{y\})$ for some open subset $U$ of $X\times Y$, and that $U$ need not be ‘rectangular’.
Here’s one way to get around the problem. Let $U$ be such a set. For each $x\in f[W]$ there are open $G_x\subseteq X$ and $V_x\subseteq Y$ such that
$$\langle x,y\rangle\in G_x\times V_x\subseteq U\,.$$
Let $G=\bigcup_{x\in f[W]}G_x$; $G$ is open in $X$. Let $z\in G$. Then $z\in G_x$ for some $x\in f[W]$, so $\langle z,y\rangle\in G_x\times V_x\subseteq U$. Clearly $\langle z,y\rangle\in X\times\{y\}$, so $\langle z,y\rangle\in W$. I’ll leave it to you to verify that if $z\in X\setminus G$, then $\langle z,y\rangle\notin W$ and conclude that $f[W]=G$ and hence that $f$ is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Let $B\subset A = \{1,2,3,...,99,100\}$ and $|B|= 48$. Prove that there exist $x,y\in B$, $x\ne y$, such that $11\mid x+y$.
Let $B\subset A = \{1,2,3,...,99,100\}$ and $|B|= 48$.
Prove that there exist $x,y\in B$, $x\ne y$, such that $11\mid x+y$.
Proof: Let $P_0:= \{11,22,\dots,99\}$ and, for each $i=1,2,\dots,49$ with $11\nmid i$, form the pair $P_i:= \{i,99-i\}$. This gives $46$ subsets, and in each subset the sum of any two distinct elements is divisible by $11$. So if $B$ contained at most $1$ element from each set $P_i$, then $B$ would have at most $47$ elements (counting $100$, which lies in none of the $P_i$, in case $100\in B$). A contradiction.
This bound is sharp. Take $$B:=\{i\in A; i\equiv x\pmod{11},\,x\in \{1,2,3,4,5\}\} \cup \{11\}$$ Then $|B| =47$ and there is no $x,y\in B$ such that $11\mid x+y$.
My question here is:
Is this problem doable by the polynomial method? Namely, consider in $\mathbb{Z}_{11}[x,y]$ the polynomial $$p(x,y) := \prod_{i=1}^{10}(x+y-i)$$ If the statement did not hold, then $p(x,y)=0$ for all $(x,y)\in \{(a,b)\in B' \times B'; \;a\ne b\}$ where $B' = B\pmod {11}$. Clearly $|B'|\geq 6$. Is this a good idea?
| I'm not sure if I understand your question correctly since it seems to me that you almost got the proof by the combinatorial nullstellensatz (?). Anyway, I will write down my proof here:
First, if there are at least two multiples of $11$ in $B$ then we are done. Now suppose that there is at most one multiple of $11$ in $B$, and let $C$ be the subset of $B$ obtained by removing the multiple of $11$ (if there is one), so $|C| \ge 47$. Reducing modulo $11$, let $C' = C \pmod {11}$; then $|C'| \ge 6$, because any $5$ nonzero residue classes mod $11$ contain at most $10+9+9+9+9=46$ elements of $[1..100]$, fewer than $|C|$.
Now consider the polynomial $$p(x,y) := \prod_{i=1}^{10}(x+y-i)$$ for any $x,y \in C'$.
This polynomial has degree $10$ and the coefficient of $x^5y^5$ is ${10 \choose 5} \not\equiv 0 \pmod {11}$; also $|C'| \ge 6 > 5$, so by the combinatorial nullstellensatz there are $a,b \in C'$ such that $p(a,b) \not\equiv 0 \pmod{11}$, i.e., $a+b \equiv 0 \pmod{11}$. Note that neither $a$ nor $b$ is $0 \pmod{11}$, so $a \not\equiv b \pmod{11}$ (if $a \equiv b$ then $2a \equiv 0$, forcing $a \equiv 0$). Hence, the claim follows.
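Both ingredients of the argument are easy to check by machine (an illustrative Python sketch, not part of the original answer): the central binomial coefficient is nonzero mod $11$, and the size-$47$ set from the question really contains no good pair:

```python
from math import comb

# coefficient of x^5 y^5 in prod_{i=1}^{10} (x + y - i) is C(10,5)
assert comb(10, 5) % 11 == 10  # nonzero mod 11

# the sharp example: 47 elements of [1..100] with no two distinct
# elements summing to a multiple of 11
B = [i for i in range(1, 101) if i % 11 in {1, 2, 3, 4, 5}] + [11]
assert len(B) == 47
assert all((x + y) % 11 != 0 for x in B for y in B if x != y)
print("checks passed")
```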
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Find all odd functions of the form $f(x) = \frac{ax + b}{x + c}$ I am working through a pure maths book and am stuck on odd and even functions.
Let $f(x) = \frac{ax + b}{x + c}$ where x, a, b, c are real and $x \ne \pm c$. Show that if $f$ is an even function then $ac = b$. Deduce that if $f$ is an even function then $f(x)$ must reduce to the form $f(x) = k$, where k is constant. Find all odd functions of the form $\frac{ax + b}{x + c}$
I have solved the first part. I am fairly sure of the second part. But I cannot solve the third part.
My calculations are as follows:
If $f(x)$ is even $f(x) = f(-x)$
$\implies \frac{ax + b}{x + c} = \frac{-ax + b}{-x + c}$
$\implies (ax + b)(c-x)= (x+c)(b-ax)$
$ac = b$
$\implies \frac{ax + b}{x + c} = \frac{ax + ac}{x + c} = \frac{a(x + c)}{x + c} = a$
So I am assuming that the a is the k to which the question refers.
Now, if $\frac{ax + b}{x + c}$ is odd, $f(-x) =-f(x)$. So
$\frac{-ax + b}{-x + c} = \frac{-(ax + b)}{x + c}$
$\implies (b-ax)(c+x)= (c-x)(-ax-b)$
$\implies 2bc=2ax^2 \implies bc=ax^2 \implies x = \sqrt\frac{bc}{a}$
but this does not lead to the answer in the book, which is $f(x)=\frac{k}{x}$
| You need $$ax^2 -bc =0 $$ for all $x$. That is, $ax^2-bc$ should be the zero polynomial, i.e. $a=0$ and $bc=0$. If $b=0$, then $$f(x) =0$$ If $c=0$, then $$f(x)=\frac bx $$ for any $b\in \mathbb R$. But setting $b=0$ gives the first case, so a general solution is $$f(x)=\frac kx$$
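One can confirm this symbolically (a sympy sketch, not part of the original answer): for $f$ to be odd, the numerator of $f(x)+f(-x)$ over the common denominator $(x+c)(c-x)$ must vanish identically, which forces $a=0$ and $bc=0$:

```python
from sympy import symbols, expand

x, a, b, c = symbols('x a b c')
# numerator of f(x) + f(-x) over the common denominator (x + c)(c - x)
num = expand((a*x + b)*(c - x) + (-a*x + b)*(c + x))
# it collapses to -2*a*x**2 + 2*b*c, so oddness forces a = 0 and b*c = 0
assert expand(num + 2*a*x**2 - 2*b*c) == 0
```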
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Why are the polygons formed by many extended straight lines in a plane convex? I was looking over a chapter in my textbook on linear programming, when I thought of something interesting, but probably unrelated.
On a plane, many straight lines - each one extended to infinite length in both directions - are drawn. Let $x$ be a point (the red dot in the diagram below) in the plane so that it is inside (or on the boundary of) a polygon formed by parts of the straight lines. Then the smallest polygon - in terms of area - that contains $x$ (i.e. the green polygon) is convex!
Why is this?
I don't think we have a convex hull to work with in an attempted proof.
Hmmm... if you join up all the intersection points of the lines then you just get lots of triangles. It’s not obvious how this helps though.
$$$$
Note that this doesn't work if the dividing curves themselves are strictly convex (or concave) instead of straight lines, e.g.:
The green shape is not convex because the straight line joining the red dots does not lie inside the green shape (this is the definition of convex).
| Each of your lines splits the plane into two half-planes; for every line, keep the half-plane that contains the point and discard the other. The smallest polygon containing the point is exactly the intersection of these half-planes.
Because half-planes are convex and the intersection of finitely many convex sets is convex, the polygon around the dot is convex.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4087991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics of Drawing Suits of card from a Stacked Deck? So my question has two parts and I have managed to confused myself into lack of comprehension.
Suppose you are handed a shuffled, stacked deck of playing cards i.e. instead of 52 cards there are 104 as the I included an extra copy of each card for the heart suit (two 2 of hearts, two 3 of hearts, etc) and four copies of each spades (four 2 of spades, four 3 of spades, etc).
(P(hearts) = 1/4, P(spades)=1/2, P(clubs)=P(diamond)=1/8)
Pick a number, n. What is the likelihood that in the top n cards there are h hearts, s spades, and c clubs? where h + s + c + d = n (where d is the number of diamonds)
A bit harder:
Now pick two numbers m and n. First remove the first m cards from the randomized deck. Now what is the likelihood that in the top n cards there are h hearts, s spades, and c clubs, h + s + c + d = n (where d is the number of diamonds).
So for the first part (without removing m cards at random) it is clear to me that the number of ways to pick the top n cards of the stacked deck is 104Cn, and the number of ways to order the specific combination of (h, s, c, d) is the multinomial coefficient (n!/(h!*s!*c!*d!)). Where I am confusing myself is: is this just (n C h,s,c,d / 104Cn * 100), as 104Cn inherently handles the uneven distributions in the suits of cards? Or do I need to weight the multinomial coefficient somehow?
| For the first part, let the probability equal $p_1$. Then,
$$p_1 = \dfrac{\dfrac{n!}{h!\ s!\ c!\ d!} \cdot \dfrac{(104-n)!}{(26-h)!\ (52-s)!\ (13-c)!\ (13-d)!}}{\dfrac{104!}{26!\ 52!\ 13!\ 13!}} = \dfrac{{26 \choose h}\cdot {52 \choose s}\cdot {13 \choose c}\cdot {13 \choose d}}{{104 \choose n}}$$
The numerator of the RHS in the above expression is more indicative of the choices favorable to the problem. The denominator is the total number of ways of choosing n cards from 104.
For the second part, let m be restricted to the case where $m \leq 104-n$ (as otherwise the probability is 0).
We are dividing the pile into 3 parts of size $m,n$ and $104-m-n$. The total number of ways of doing this are $\frac{104!}{m!\cdot n!\cdot (104-m-n)!} = \binom{104}{m,n,104-m-n}$
For the pile of size n, we have the probability of favorable cases as ${26 \choose h}\cdot {52 \choose s}\cdot {13 \choose c}\cdot {13 \choose d}$. The remaining $104-n$ need to be split into $m$ and $(104-m-n)$, which can be done in ${104-n \choose m}$ ways
Let $p_2$ be the required probability for the second part. Then:
$$p_2 = \dfrac{{26 \choose h}\cdot {52 \choose s}\cdot {13 \choose c}\cdot {13 \choose d}\cdot {104-n\choose m}}{\frac{104!}{m!\cdot n!\cdot (104-m-n)!}} = \dfrac{{26 \choose h}\cdot {52 \choose s}\cdot {13 \choose c}\cdot {13 \choose d}}{\binom{104}{n}}$$
which is same as the first answer. Many thanks to @angryavian for the correction in second part.
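As a consistency check (an illustrative Python sketch, not part of the original answer), the probabilities $p_1$ summed over all compositions $h+s+c+d=n$ should equal exactly $1$, by the Vandermonde identity:

```python
from fractions import Fraction
from math import comb

def p1(h, s, c, d):
    """Multivariate hypergeometric probability for the stacked 104-card deck."""
    n = h + s + c + d
    return Fraction(comb(26, h) * comb(52, s) * comb(13, c) * comb(13, d),
                    comb(104, n))

n = 5
total = sum(p1(h, s, c, n - h - s - c)
            for h in range(n + 1)
            for s in range(n + 1 - h)
            for c in range(n + 1 - h - s))
assert total == 1
print(total)  # 1
```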
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
College Algebra: Find real-valued closed formulas for the trajectory $x(t+1)=Ax(t)$ ?? Hey so I have this problem on my webwork that I do not understand:
The problem says to find real-valued closed formulas for the trajectory:
$x(t+1)=Ax(t)$ where
$A=\begin{bmatrix} -0.8 & 0.6 \\ -0.6 & -0.8 \end{bmatrix}$ and $\overrightarrow{x}(0) = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$
My thinking is that I have to make the dynamical system $x_{k+1} = Ax_k$
I solved for the eigenvalues $\lambda_{1} = \frac{-4}{5} + \frac35i$ with its respective eigenvector $\begin{bmatrix} -i \\ 1 \end{bmatrix}$
and $\lambda_{2} = \frac{-4}{5} - \frac35i$ with its respective eigenvector $\begin{bmatrix} i \\ 1 \end{bmatrix}$
I made a formula $x_k = 1(\frac{-4}{5} + \frac35i)^k\begin{bmatrix} -i \\ 1 \end{bmatrix} + 0(\frac{-4}{5} - \frac35i)^k\begin{bmatrix} i \\ 1 \end{bmatrix}$ which evaluates to
$x_k = 1(\frac{-4}{5} + \frac35i)^k\begin{bmatrix} -i \\ 1 \end{bmatrix}$
but the answer is looking for a vector with 2 rows and 1 column.. what am I doing wrong??
| Hint: $A$ is a rotation matrix for the angle $\theta = \pi + \arccos 0.8 \approx 3.785$ radians, which is approximately $216.9^{\circ}$. So $x(1)$ is $x(0)$ rotated by this angle, $x(2)$ is $x(0)$ rotated by $2\theta$, and so on. You can write it as
\begin{align}
x(k) = A^k x(0) &= \begin{pmatrix}
\cos{k(\pi + \arccos 0.8)} & -\sin{k(\pi + \arccos 0.8)}\\
\sin{k(\pi + \arccos 0.8)} & \cos{k(\pi + \arccos 0.8)}
\end{pmatrix} x(0) \\&=
\begin{pmatrix}
(-1)^k \cos{(k\arccos 0.8)} & - (-1)^k\sin{(k\arccos 0.8)} \\
(-1)^k\sin{(k\arccos 0.8)} & (-1)^k \cos{(k\arccos 0.8)}
\end{pmatrix} x(0) \\&=
(-1)^{k} \begin{pmatrix}
\cos{(k\arccos 0.8)} & -\sin{(k\arccos 0.8)} \\
\sin{(k\arccos 0.8)} & \cos{(k\arccos 0.8)}
\end{pmatrix} x(0)
\end{align}
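A quick numerical check of this closed form against direct iteration (a numpy sketch, not part of the original answer):

```python
import numpy as np

A = np.array([[-0.8, 0.6],
              [-0.6, -0.8]])
x0 = np.array([1.0, 0.0])
theta = np.arccos(0.8)

def closed_form(k):
    # (-1)^k times rotation by k*arccos(0.8), applied to x(0)
    c, s = np.cos(k * theta), np.sin(k * theta)
    return (-1) ** k * np.array([[c, -s], [s, c]]) @ x0

xk = x0.copy()
for k in range(12):
    assert np.allclose(xk, closed_form(k))
    xk = A @ xk
print("closed form matches A^k x(0)")
```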
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the orbit and stabilizer of each element of a set of subgroups Let $S_3$ be the symmetric group of all permutations on the set $\{1, 2, 3\}$. Then, let $S$ be the set of all subgroups of $S_3$. Consider $S$ as an $S_3$-set with respect to conjugation and for each element of $S$ find its orbit and its stabilizer.
This is a problem I am working on from an exercise in my notes for algebra so I have the answer to this, however I have been unable to arrive at the correct answer and can't understand where my understanding is falling short.
What I have so far:
We know that $S_3$ is the set $\{e, (1 2), (1 3), (2 3), (1 2 3), (1 3 2)\}$ and so I was fairly easily able to deduce that the set of all subgroups $S = \{\{e\}, S_3, \{e, (1 2)\}, \{e, (1 3)\}, \{e, (2 3)\}, \{e, (1 2 3), (1 3 2)\}\}$.
I then named each element of $S$ as $H_1:H_6$, that is $H_1 = \{e\}$, $H_2 = S_3$, $H_3 = \{e, (1 2)\}$, ...
I then used the definition of orbit to write $S_3H_1 = \{\alpha H_1 : \alpha \in S_3\}$, which is to be repeated for all of the other elements of $S$.
And I used the definition of the stabilizer to write $S_{3_{H_1}} = \{\alpha \in S_3 : \alpha H_1 = H_1\}$, which is also to be repeated for all of the other elements of $S$.
Each time I have tried to work these out, however, I have not achieved the correct answer. I wonder whether the reason why is in the part of the question that tells us to "consider $S$ as an $S_3$-set with respect to conjugation".
Thanks for any help!
| Maybe the problem is how you have defined the stabilizer and the orbit. If you have a group $G$ and $a,b\in G$, a conjugation is something of the form $a^{-1}ba$. When you write the stabilizer you write $\alpha H_1= H_1$, but since the action is by conjugation it should be $\alpha^{-1} H_1 \alpha= H_1$; likewise the orbit of $H_1$ is $\{\alpha^{-1} H_1 \alpha : \alpha \in S_3\}$.
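Everything here can be enumerated by brute force (an illustrative Python sketch, not part of the original answer), acting on subgroups by $H \mapsto \alpha^{-1}H\alpha$; the result is consistent with the orbit–stabilizer theorem, $|\text{orbit}|\cdot|\text{stabilizer}| = |S_3| = 6$:

```python
from itertools import permutations, combinations

def compose(p, q):  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

S3 = list(permutations(range(3)))
e = (0, 1, 2)

# brute-force all subgroups: subsets containing e, closed under a * b^-1
subgroups = [frozenset(s) for r in range(1, 7) for s in combinations(S3, r)
             if e in s and all(compose(a, inverse(b)) in s for a in s for b in s)]
assert len(subgroups) == 6  # {e}, three of order 2, A3, S3

def conj(g, H):  # the action H -> g^-1 H g
    return frozenset(compose(inverse(g), compose(h, g)) for h in H)

for H in subgroups:
    orbit = {conj(g, H) for g in S3}
    stab = [g for g in S3 if conj(g, H) == H]
    assert len(orbit) * len(stab) == 6
    print(len(H), len(orbit), len(stab))
```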
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\{0,1\}^{\omega}$ is not compact $\{0,1\}^{\omega}$ is not compact. I proved a more difficult result that $[0,1]^{\omega}$ is not compact in the box topology and was told my proof was similar to this. However, I think it would be good practice to try this result as well. $\{0,1\}$ is in the discrete topology. Let $C$ be the collection of all points in $\{0,1\}^{\omega}$. Consider the open cover of $\{0,1\}^{\omega}$ given by $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$, where for each $x=(x_n)_{n \in \mathbb{N}}$ in $C$, a corresponding open set $\prod_{n \in \mathbb{N}}U_{nx}$ containing it is defined by
$$U_{nx}=\begin{cases} \{1\} &\text{ if } x_n=1 \\
\{0\} & \ \text{if} \ x_n=0\end{cases}$$
Then $x \in \prod_{n \in \mathbb{N}}U_{nx}$ and if $y \in C-\{x\}$ there is an index $k$ such that $x_k \notin U_{ky}$ and $x \notin \prod_{n \in \mathbb{N}}U_{ny}$. So if $\prod_{n \in \mathbb{N}}U_{nx}$ is taken out of the cover, the element $x \in \{0,1\}^{\omega}$ is not covered by $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}-\{\prod_{n \in \mathbb{N}}U_{nx}\}$. Since $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$ is an infinite cover, and taking one element of this collection out results in a noncover, $\{0,1\}^{\omega}$ is not compact.
One thing I had difficulty showing is that the collection $\{\prod_{n \in \mathbb{N}}U_{nx}\}_{x \in C}$ actually covers $\{0,1\}^{\omega}$. How could I show that the union of elements in the cover is equal to $\{0,1\}^{\omega}$ explicitly? Also, would this be the correct way to do this proof? I modeled the solution after this solutions, in which I was provided help. Proving $[0,1]^{\omega}$ in box topology is not compact
| That it covers is trivial: $x \in \prod_n U_{nx}=: U_x$ for each $x \in C$.
In fact $U_x=\{x\}$ is open in the box topology, showing that all points of $\{0,1\}^\omega$ are isolated. $C$ is infinite (uncountable) and discrete.
The cover $\{U_x\mid x\in C\}$ is irreducible: we cannot omit any $U_y$ because then $y$ will not be covered anymore. So the cover has no proper subcovers at all.
Nitpick: you can just say $U_{nx}=\{x_n\}$ and be done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Flajolet and Sedgewick generating function for Hertzsprung Problem The Hertzsprung Problem goes as follows: In how many ways can we place exactly $n$ non-attacking kings on an $n \times n$ chessboard such that there is exactly $1$ king in each row and column, where $n \in \mathbb{N}$.
My main question is: How did Flajolet and Sedgewick get the below generating function for the Hertzsprung problem?
$$\sum_{n=0}^\infty n! x^n \frac{(1-x)^n}{(1+x)^n}$$
Flajolet and Sedgewick discuss this generating function briefly and somewhat vaguely on page $373$ of Analytic Combinatorics. They give a sketch of a sketch; however, I'm not sure how they got the generating function. I have tried for a long time to extract the same generating function using restricted permutations: the Hertzsprung problem reduces to finding the number of permutations of $[n]$ such that no two adjacent entries in the permutation are consecutive integers.
Moreover using the restricted permutations argument we get a nice closed form as follows
$$n!+\sum_{k=1}^n {(-1)^k}(n-k)!\sum_{i=1}^k 2^{i} \binom{k-1}{i-1}\binom{n-k+1}{i}$$
by simply using Principle of inclusion-exclusion and stars and bars argument.
However I don't know how Flajolet and Sedgewick got the generating function
$$\sum_{n=0}^\infty n! x^n \frac{(1-x)^n}{(1+x)^n}$$
Any idea how to approach the problem for finding the generating function? I have already searched all the references in (oeis.org/A002464) but no reference gives a proof on how they got the generating function. All the references just show or give an approach on how to get the closed form of the double sum which is very easy to get.
Your help would be highly appreciated.
Thanks.
| Here is my guess at what Flajolet and Sedgewick had in mind.
We want to find a generating function for the number of permutations $\sigma$ of $[1..n]$ such that $|\sigma_{i+1} - \sigma_i| \ne 1$. We start by solving an associated problem: in preparation to apply inclusion / exclusion, we say that a permutation $\sigma$ of $[1..n]$ has "property $i$" if $|\sigma_{i+1} - \sigma_i| = 1$ for $1 \le i < n$, and we define $s_{j,n}$ to be the number of permutations with $j$ of the properties. We define the bivariate generating function of $s_{j,n}$ by
$$f(u,z) = \sum_{n \ge 0} \sum_{j \ge 0} s_{j,n} u^j z^n$$
Once we have $f(u,z)$, it's easy to find the generating function for the number of permutations which have none of the properties. By inclusion / exclusion, the GF is simply $f(-1,z)$.
Consider an arbitrary permutation of $[1..n]$. We can break the permutation into a sequence of chunks where each chunk is either a single integer or consists of two or more consecutive integers in ascending or descending order. For example, here is a permutation of $[1..12]$ with seven chunks: four single integers, one chunk of length $2$ and two chunks of length $3$.
$$9\; 2\; \overleftarrow{\boxed{12\; 11\; 10}}\; 8\; \overrightarrow{\boxed{5\; 6\; 7}} \; 1\; \overrightarrow{\boxed{3\; 4}}$$
How many permutations consist of $m$ chunks? If the lengths of the chunks are $t_1, t_2, t_3, \dots, t_m$, then we must have
$$t_1+t_2+t_3+\dots+t_m=n$$
with $t_i \ge 1$ for all $i$. Taking into account that each chunk of length greater than $1$ may be either ascending or descending and the chunks can be ordered in $m!$ ways, a GF for the number of permutations of $[1..n]$ having $m$ chunks is
$$m!\; (z + 2z^2 + 2z^3 + 2z^4 + \dots)^m$$
Any permutation may be broken into $m$ chunks for exactly one value of $m$, so a GF for the number of permutations of $[1..n]$ is
$$\sum_{m \ge 0} m!\; (z + 2z^2 + 2z^3 + 2z^4 + \dots)^m$$
Observing that a chunk of length $j$ contributes $j-1$ to $s_{j,n}$, we can modify the previous GF to produce the bivariate GF
$$f(u,z) = \sum_{m \ge 0} m!\; (z + 2u z^2 + 2 u^2 z^3 + 2u^3 z^4 + \dots)^m$$
Applying the formula for the sum of an infinite geometric series and simplifying,
$$f(u,z) = \sum_{m \ge 0} m!\; \left( z + \frac{2 u z^2}{1 - uz} \right)^m$$
We are now in a position to apply inclusion / exclusion to find the GF for the number of permutations $\sigma$ of $[1..n]$ with no instances of $|\sigma_{i+1} - \sigma_i| = 1$:
$$f(-1,z) = \sum_{m \ge 0} m!\; \left( z\; \frac{1-z}{1+z} \right)^m $$
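The final generating function can be checked against a brute-force count (an illustrative Python sketch, not part of the original answer), expanding $\sum_m m!\,t^m$ with $t = z(1-z)/(1+z)$ as a truncated power series:

```python
from math import factorial
from itertools import permutations

N = 7  # compare coefficients of z^0 .. z^6

def mul(a, b):
    """Product of two power series truncated below degree N."""
    c = [0] * N
    for i, ai in enumerate(a):
        if not ai:
            continue
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# t = z(1-z)/(1+z) = (z - z^2) * (1 - z + z^2 - ...), truncated
t = mul([0, 1, -1] + [0] * (N - 3), [(-1) ** k for k in range(N)])

f = [0] * N
p = [1] + [0] * (N - 1)  # running power t^m
for m in range(N):
    f = [fi + factorial(m) * pi for fi, pi in zip(f, p)]
    p = mul(p, t)

def brute(n):
    """Permutations of [1..n] with no two adjacent entries consecutive."""
    return sum(1 for s in permutations(range(1, n + 1))
               if all(abs(s[i + 1] - s[i]) != 1 for i in range(n - 1)))

assert f == [brute(n) for n in range(N)]
print(f)  # [1, 1, 0, 0, 2, 14, 90], matching OEIS A002464
```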
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Why is the sum of all the elements in a subgroup generated by $10$ of $\Bbb Z_p$ is divisible by $p$ Consider the subgroup $\langle 10\rangle $ of $\mathbb{Z}_p$. Let the order of the subgroup $\langle 10\rangle $ be $d$.
So $$\langle 10\rangle= \left\{10, 10^2, 10^3, \dots, 10^{d-1}, 10^{d} = 1\right \}.$$
Now is it true that the sum of all the elements of $\langle 10\rangle$ is divisible by $p$? If so, how can this be proved?
I have verified the same for $p=13$ where order of $10$ modulo $13$ is $6$. We have $$10+10^2+10^3+10^4+10^5+1 = 111111$$ and $$13 \times 8547 = 111111.$$
I am not able to see how this can be proved. Kindly help me out.
Thanks in advance!
Update:
Let $\langle 10\rangle $ be a subgroup of $\Bbb Z_p$ with ${\rm ord}(\langle 10\rangle) = n$ Thus if $x_1, x_2, x_3, \dots x_n$ are the elements of the subgroup $\langle 10\rangle $ then $x_{i}^n \equiv 1 \pmod{p}$ where $1 \le i \le n$. So, $x_i$'s are the $n^{th}$ roots of unity modulo $p$.
So, $x^n-1 = (x-x_1)(x-x_2)(x-x_3)\dots(x-x_{n-1})(x-x_n)$.
Now the coefficient of $x^{n-1}$ in $x^n-1$ is $0$. So $x_1+x_2+x_3+\dots+x_{n-1}+x_n \equiv 0 \pmod{p}$ This is where I am confused. How can I write $x_1+x_2+x_3+\dots+x_{n-1}+x_n \equiv 0 \pmod{p}$ ? For this to be true I must have $x^n-1 \equiv 0 \pmod{p}$. Right?
So, the sum of $x_i$'s which are the elements of $\langle 10\rangle $ is congruent to $0$ modulo $p$.
| Your difficulty is that the problem contains a red herring: This is true for any subgroup of $(\Bbb{Z}/(p))^*$ other than the identity subgroup $<1>$. The easiest way to see this is to remember that any subgroup is cyclic and thus is generated by a root of unity. Thus, if $\xi_1,\dots,\xi_n$ ($n>1$) are the elements of the subgroup, we have $x^n-1=(x-\xi_1)\cdots (x-\xi_n)$. (We know this because each of the $\xi_i$ satisfies $\xi_i^n=1$, and as there are $n$ of them, they are precisely the $n$ roots of $x^n-1$.) But the coefficient of $x^{n-1}$ in this polynomial is both $0$ (since $n>1$) and also $-(\xi_1+\cdots +\xi_n)$. Thus, the sum of the $\xi_i$ is $0$ in $(\Bbb{Z}/(p))$, i.e., it is divisible by $p$.
Note that this uses the well-known fact that $(\Bbb{Z}/(p))$ is a field. Also, note that to solve your problem, you don't even need the fact that every multiplicative subgroup of a finite field is cyclic, since $<10>$ is obviously cyclic.
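A quick computational confirmation (a Python sketch, not part of the original answer) for several primes $p \nmid 10$ with $10 \not\equiv 1 \pmod p$, so that the subgroup is nontrivial:

```python
def subgroup_sum_mod(p):
    """Sum of the elements of <10> in (Z/p)^*, reduced mod p."""
    H, x = set(), 10 % p
    while x not in H:
        H.add(x)
        x = (x * 10) % p
    return sum(H) % p

for p in [7, 11, 13, 17, 19, 23, 29, 31, 37, 41]:
    assert subgroup_sum_mod(p) == 0
print("sum of <10> is divisible by p for all tested primes")
```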
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4088820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
What is the source of this derivative formula We have just proved the n-th derivative of $f(x)=e^{-x}p(x)$, where $p(x)$ is a polynomial.
We got:
$$f(x)^{(n)}= e^{-x}(-1)^n\sum_{k=0}^{n}\binom{n}{k}(-1)^kp^{(k)}$$
Even though I do not completely understand the proof, I wonder, if there is a more general formula for expressions similar to this or if it has a name, so that I can look it up.
| A generalization is
$$
\frac{d^n}{dx^n}\big(u(x) v(x)\big) = \sum_{k=0}^n \binom{n}{k} u^{(k)}(x) v^{(n-k)}(x)
$$
Proved by induction using the product rule.
This is sometimes called the "Leibniz Rule".
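The question's formula is exactly this rule applied with $u = e^{-x}$, since $(e^{-x})^{(n-k)} = (-1)^{n-k}e^{-x}$. A symbolic spot-check with sympy (the polynomial below is an arbitrary illustrative choice, not from the original answer):

```python
from sympy import symbols, exp, diff, binomial, simplify

x = symbols('x')
p = x**4 - 3*x**2 + 5*x - 7  # arbitrary sample polynomial
f = exp(-x) * p

# check f^(n) = e^{-x} (-1)^n sum_k C(n,k) (-1)^k p^(k) for small n
for n in range(6):
    rhs = exp(-x) * (-1)**n * sum(binomial(n, k) * (-1)**k * diff(p, x, k)
                                  for k in range(n + 1))
    assert simplify(diff(f, x, n) - rhs) == 0
print("formula verified for n = 0..5")
```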
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Equation of the curve tangent to the unit circle and every circle with center $(\sum_{k=1}^{n-1}\frac1k, 0)$ and radius $\frac1n$, for $n>1$
I saw this problem on twitter where we have to find equation of red curve.
$C_{1}\implies x^2+y^2=1$
$C_{2}\implies (x-1)^2+y^2=\frac14$
$C_{3}\implies \Big(x-\frac32 \Big)^2+y^2=\frac19$
$C_{4}\implies \Big(x-\frac{11}{6} \Big)^2+y^2=\frac{1}{16}$
$\vdots$
$$C_{n}\implies \Bigg(x-\sum_{k=1}^{n-1}\frac{1}{k}\Bigg)^2+y^2=\frac{1}{n^2}$$ $\mathbf{\forall n>1}$
I thought from the shape of the red curve that it should be an exponential function of the form $\mathbf {y=a^{x-b}}$, where $\mathbf{0<a<1}$.
But I don't know how to proceed further.
Thank you for your help!
| Not a full answer, but hopefully a helpful perspective:
We recall the surprisingly useful fact that the harmonic number $\sum_{k=1}^{n-1} \frac1k$ is equal to $p(n)$, where $p(z) = \frac{\Gamma'(z)}{\Gamma(z)} + \gamma$ is the logarithmic derivative of the Gamma function plus Euler's constant. This allows us to extend the discrete set of circles described on the OP to the continuous set of circles centered at $(p(z),0)$ with radius $\frac1z$ (for $z\ge1$). I'm guessing that the curve you seek is literally the upper envelope of this continuous family of curves.
The closest I could come to finding a formula for that upper envelope is to consider a fixed $y$-coordinate; the circle indexed by $z\ge1$ has equation $(x-p(z))^2+y^2=z^{-2}$, which means that the point furthest to the right at height $y$ is $\sqrt{z^{-2}-y^2}+p(z)$. Empirically, this function of $z$ (for fixed $y$) increases to a unique maximum and then decreases (to $p(1/y)$ itself at $z=1/y$, where it stops being defined). So "all" we have to do is find the value $z = z(y)$ that maximizes $\sqrt{z^{-2}-y^2}+p(z)$; then the upper envelope would have equation $x=\sqrt{z(y)^{-2}-y^2}+p(z(y))$.
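The key identity $\sum_{k=1}^{n-1}\frac1k = p(n)$ can be spot-checked numerically; the sketch below (not part of the original answer) approximates $p(z) = \frac{d}{dz}\log\Gamma(z) + \gamma$ with a central difference of the standard-library `math.lgamma`:

```python
from math import lgamma

EULER_GAMMA = 0.5772156649015329

def p(z, h=1e-5):
    # logarithmic derivative of Gamma plus Euler's constant, via central difference
    return (lgamma(z + h) - lgamma(z - h)) / (2 * h) + EULER_GAMMA

for n in range(2, 9):
    harmonic = sum(1.0 / k for k in range(1, n))
    assert abs(p(n) - harmonic) < 1e-8
print("p(n) matches the harmonic numbers H_{n-1}")
```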
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Confusion about coordinate ring of a fiber. I am reading an example given on page 82 in Eisenbud and Harris's The Geometry of Schemes, and I am confused about some terminology. Let $A:=\mathbb{Z}[\sqrt{3}]$, the ring of integers for $\mathbb{Q}(\sqrt{3})$. The natural inclusion $\mathbb{Z}\hookrightarrow A$ induces a map $\operatorname{Spec}A\rightarrow\operatorname{Spec}\mathbb{Z}$. Looking at the fiber of a point $[(p)]\in\operatorname{Spec}\mathbb{Z}$ is equivalent to examining the primes lying above $p$ in $A$. In the case that $p=2$ or $p=3$, we get $2A=(1+\sqrt{3})$ and $3A=(\sqrt{3})^2$, respectively, and the residue fields at the points $(1+\sqrt{3})$ and $(\sqrt{3})$ are $\mathbb{F}_2$ and $\mathbb{F}_3$, respectively.
Now, for these cases, the authors say that for $p = 2$ or $3$, the fiber over $(p)$ is a "single, nonreduced point, with coordinate ring isomorphic to $A/\mathfrak{p}^2$".
My question is, how is the "coordinate ring" here defined? I know when $A$ is a reduced, finitely generated $K$-algebra over a algebraically closed field $K$ that $A$ is itself the coordinate ring. But I'm confused as to what it means here precisely, and can't find any definition given in the book. Can someone explain to me what definition is being used?
| If $f:X\to Y$ is a morphism of schemes and $y\in Y$ is a point, the fiber over $y$ is $\operatorname{Spec} \kappa(y)\times_Y X$ (see here for a proof that the underlying topological space of this scheme is really the same as the set $f^{-1}(y)$ with the subspace topology). When $X=\operatorname{Spec} A$ and $Y=\operatorname{Spec} R$ are affine, like in your problem, $\operatorname{Spec} \kappa(y)\times_Y X$ is also affine: it's the spectrum of $\kappa(y)\otimes_R A$. The ring $\kappa(y)\otimes_R A$ is the coordinate ring of the fiber.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Satisfying Conditions for Inner Product in Space $R^2$
The solution states that this is an inner product iff $b=c$ and $d>b^2$. I am not sure how to solve/approach this problem. I know that there are 4 conditions for the inner product condition to satisfy. Can you explain how to use those four properties to find conditions for b,c, and d?
P.S. Properties of inner product
*
*$<x,y>=<y,x>$
*$<x+z,y>=<x,y>+<z,y>$
*$<cx,y>=c<x,y>=<x,cy>$
*$<x,x>\ >0$ if $x\neq0 $
| Eventually, you learn that positive definiteness will require that the determinant of the matrix of coefficients be positive. But you can see it directly by completing the square: With $x=(x_1,x_2)$,
$$\langle x,x \rangle = x_1^2 + 2bx_1x_2 + dx_2^2 = (x_1+bx_2)^2 + (d-b^2)x_2^2.$$
Can you finish now?
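The completed-square identity can be verified symbolically (a sympy sketch, not part of the original answer); it makes clear that $\langle x,x\rangle > 0$ for all $x\neq 0$ exactly when $d > b^2$:

```python
from sympy import symbols, expand

x1, x2, b, d = symbols('x1 x2 b d')
lhs = x1**2 + 2*b*x1*x2 + d*x2**2
rhs = (x1 + b*x2)**2 + (d - b**2)*x2**2
assert expand(lhs - rhs) == 0  # the identity holds for all x1, x2, b, d
print("identity verified")
```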
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the joint probability density function of transformations, the answer incorrect. What is my mistake? Given joint probability density function:
\begin{align}
f_{X,Y}(x,y)=
\begin{cases}
24x(1-y)&0<x<y<1\\
0&\text{otherwise}
\end{cases}.
\end{align}
Given transformation: $M=\dfrac{X+Y}{2}$ and $W=\dfrac{X}{2}$, find the joint p.d.f. of $M$ and $W$.
I try as follows.
Since $0<x<y<1$ The range of transformation is
$$
M=\dfrac{X+Y}{2}\geq \dfrac{X}{2}=W,
$$
so $0\leq M\leq 1$, $M\geq W$, and $0\leq W\leq 1$.
The invers of transformation is
$X=2W$ and $Y=2M-2W$.
The absolute value of Jacobian is
\begin{align}
|J|=
\begin{vmatrix}
\dfrac{dX}{dM}&\dfrac{dX}{dW}\\
\dfrac{dY}{dM}&\dfrac{dY}{dW}
\end{vmatrix}
=
\begin{vmatrix}
0&2\\
2&-2
\end{vmatrix}
=4.
\end{align}
The p.d.f. of transformation:
\begin{align}
g_{M,W}(m,w)&=f_{X,Y}(x,y)|J|\\
&=f_{X,Y}(2w,2m-2w)|J|\\
&=24(2w)(1-2m+2w)4\\
&=192w(1-2m+2w).
\end{align}
Now, we have
\begin{align}
g_{M,W}(m,w)=
\begin{cases}
192w(1-2m+2w)&0\leq M\leq 1, M\geq W,\text{ and }0\leq W\leq 1\\
0&\text{otherwise}
\end{cases}.
\end{align}
Now I want to check my answer with double integrating joint p.d.f.
\begin{align}
\int\limits_{0}^{1}\int\limits_{0}^{m} 192w(1-2m+2w) \,dw\,dm
\end{align}
and the result is $16$. (I used Maple.)
So we can conclude $g_{M,W}(m,w)$ is not a p.d.f.
Why this is happen? Am I have a mistake with my answer?
| The functional form of the transformed joint density is correct, but the support is incorrect. All you have to do to rectify this is to first note that the support for $(X,Y)$ is bounded by the lines $$X = 0, \quad X = Y, \quad Y = 1.$$ Because the transformation is linear and invertible, the resulting support in $(M, W)$ space is also bounded by three lines. We invert the transformation to get $$X = 2W, \quad Y = 2(M-W),$$ hence $$2W = 0, \quad 2W = 2(M-W), \quad 2(M-W) = 1,$$ or $$W = 0, \quad M = 2W, \quad M = W + 1/2.$$ This then becomes the support $$0 < 2W < M < W + 1/2.$$
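With this corrected support the density integrates to $1$, which can be confirmed with sympy (a sketch, not part of the original answer); note that $0 < 2w < m < w + \tfrac12$ forces $0 < w < \tfrac12$:

```python
from sympy import symbols, integrate, Rational

m, w = symbols('m w', positive=True)
g = 192 * w * (1 - 2*m + 2*w)
# integrate over 0 < 2w < m < w + 1/2
total = integrate(g, (m, 2*w, w + Rational(1, 2)), (w, 0, Rational(1, 2)))
assert total == 1
print(total)  # 1
```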
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Calculus - Sequence and Function - hard exercise EDIT: With your help I have succeed to prove it! Thanks! I also have another question related to this one - written down here.
Let $a,b \in \mathbb{R}$ such that $a<b$ and let $f:[a,b] \to [a,b]$ be a differentiable function, let $t \in [a,b]$, and consider the sequence $(x_n)_{n=1}^{\infty}$ defined by: $$\left\{\begin{matrix}
x_1=t & \\
x_{n+1}=f(x_n) & \forall n\geq 1
\end{matrix}\right.$$
Suppose that there exists a point $\alpha \in [a,b]$ such that $f(\alpha)=\alpha$.
Prove, while using MVT, that if there exists a $0 \le q <1$ such that $|f'(x)|\le q$ for every $x \in [a,b]$, then $\lim_{n \to \infty}x_n=\alpha$
-
I don't know how to use the MVT here: since I know nothing about $\alpha$ and $x_n$, I can't take the interval $[\alpha,x_n]$ or $[x_n,\alpha]$ ($\alpha$ and $x_n$ can be equal).
Any help will be awesome!
Thanks a lot!
EDIT:
Now I have a $y \in \mathbb{R}$, and the sequence $(x_n)_{n=0}^{\infty}$:
$$x_0=y ~ ~ , ~ ~ x_{n+1}=cos(x_n), \forall n \ge 0$$
Prove that the sequence converges to a limit $0<\alpha<1$.
Any tips?
Thanks again!
| By using the MVT you can show:
$$
|f(x) - f(y)| \le q\,|x-y| \quad \text{for all } x, y \in [a,b].
$$
Now, have a look at the Banach fixed point theorem for some inspiration.
https://en.wikipedia.org/wiki/Banach_fixed-point_theorem
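For the edited question, a quick numeric illustration (my own sketch): iterating $x_{n+1}=\cos(x_n)$ from any start converges to the unique fixed point $\alpha\approx 0.739$, consistent with the contraction argument, since after one step the iterates lie in $[-1,1]$, where $|\sin(x)|\le\sin(1)<1$.

```python
import math

# Iterate x_{n+1} = cos(x_n) from an arbitrary start; the contraction
# argument predicts convergence to the fixed point with cos(alpha) = alpha.
x = 2.0
for _ in range(200):
    x = math.cos(x)
print(round(x, 6))  # 0.739085
```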
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4089914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Prove that $f: \mathbb{R} \to \mathbb{R}, f(x) = x^\frac{1}{9}$ is not a differentiable function Prove that the following function is not a differentiable function: $$f: \mathbb{R} \to \mathbb{R}, f(x) = x^\frac{1}{9}$$
I believe all I have to show is one point in the domain where the function is not differentiable:
Hence I have the following proof:
$$ f'(x) =\frac{1}{9x^\frac{8}{9}} $$
At $ x = 0$, $f'(x)$ is undefined.
Hence, $f(x)$ is not a differentiable function.
Is this enough, or do I need to show that $\lim\limits_{h\to0} \frac{f(x_0 + h) - f(x_0)}{h}$ does not exist in some other way?
| Yes, you must show that the limit $\lim_{x\to 0}\frac{f(x)-f(0)}{x}$does not exist.
Note that $f'(x)$ exists for all $x\ne 0$; however, at $x=0$ differentiability is doubtful, that is to say, it is a priori not known whether $f'(0)$ exists or not, and if it does exist, what form it will take.
What you have shown above is that $f'(x) =\frac{1}{9x^\frac{8}{9}}$ does not exist at $x=0$, which has nothing to do with $f'(0)$, as we don't even know a priori about the existence of $f'(0)$ or the form $f'(0)$ will take if it exists.
Therefore, by definition of differentiability at $x=0$, $f'(0)$ exists if and only if $\lim_{x\to 0}\frac{f(x)-f(0)}{x}$ exists.
Clearly, $\frac{f(x)-f(0)}{x}=\frac{x^{1/9}-0}{x}=\frac 1{x^\frac 89}$ (Note that it is different from $\frac{1}{9x^\frac{8}{9}}$) and therefore the limit does not exist finitely, which is to say that $f'(0)$ is non-existent.
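A quick numeric illustration of this (my own addition): the difference quotient $\frac{1}{x^{8/9}}$ grows without bound as $x\to 0^+$, so no finite $f'(0)$ can exist.

```python
# Difference quotient (f(h) - f(0)) / h = h**(-8/9) for f(t) = t**(1/9), h > 0.
f = lambda t: t ** (1 / 9)
quotients = [(f(h) - f(0)) / h for h in (1e-2, 1e-4, 1e-6)]
print(quotients)  # grows without bound as h -> 0+
```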
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
What's the relationship between linear transformations and systems of equations? I began watching Gilbert Strang's lectures on Linear Algebra and soon realized that I lacked an intuitive understanding of matrices, especially as to why certain operations (e.g. matrix multiplication) are defined the way they are. Someone suggested to me 3Blue1Brown's video series (https://youtube.com/playlist?list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab) and it has helped immensely. However, it seems to me that they present matrices in completely different ways: 3Blue1Brown explains that they represent linear transformations, while Strang depicts matrices as systems of linear equations. What's the connection between these two different ideas?
Furthermore, I understand why operations on matrices are defined the way they are when we think of them as linear maps, but this intuition breaks when matrices are thought of in different ways. Since matrices are used to represent all sorts of things (linear transformations, systems of equations, data, etc.), how come operations that are seemingly defined for use with linear maps the same across all these different contexts?
| First of all, simply by using matrix multiplication, you can rewrite $$Ax$$
into some equations
$$a_{11}x_1+\cdots+a_{1n}x_n$$
$$\cdots$$
$$a_{m1}x_1+\cdots+a_{mn}x_n$$
This means, that a matrix does nothing more than being a representation of the coefficients of these linear equations, which is more convenient to write down.
If you think of it in a more abstract way, you could even just write down
$$a_{11},\ldots, a_{1n},a_{21},\ldots,a_{2n},\ldots,a_{m1},\ldots,a_{mn}$$
into one vector or text file, where $a_{ij}$ are just some variables for anything. The total amount of variables is $m\cdot n$.
Now, if you have given a matrix $A$ and you vary $x$ you can think of $A$ as a linear transformation of $x$. For example the matrix
$$\begin{pmatrix}
1 & 0 \\
0 & 0
\end{pmatrix}$$
projects the plane $\mathbb{R}^2$ onto the $x_1$-axis (a line, not a plane). So in this case it might be better to think of $A$ as a map, which transforms $x$ in a linear way.
However, if you want to think of systems of linear equation with a given $b$, you can rewrite $$Ax=b$$
into some equations
$$a_{11}x_1+\cdots+a_{1n}x_n=b_1$$
$$\cdots$$
$$a_{m1}x_1+\cdots+a_{mn}x_n=b_m$$
Now, you are more likely to have an interest in not transforming the $x$, but finding an $x$ that is a solution to the equations. In this case you are more likely to think of a system of equations, which needs a solution.
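A small sketch of the two readings of one coefficient array (my own $2\times 2$ example, plain Python):

```python
A = [[2.0, 1.0],
     [1.0, 3.0]]

def apply(A, x):
    """Transformation view: vary x and compute Ax."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

def solve(A, b):
    """Equation view: fix b and solve Ax = b (Cramer's rule, 2x2 only)."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

b = apply(A, [1.0, 2.0])   # transformation view: x = (1, 2) maps to b = (4, 7)
x = solve(A, b)            # equation view: solving Ax = (4, 7) recovers (1, 2)
print(b, x)
```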
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Probability of placing 8 rooks that cannot attack in a chessboard (error in textbook?) My solution would be $\frac{8!}{64 \choose 8}$, as there are 8 places to put the first rook on the first row, then 7 places left on the second row, etc.; then divide this by the total number of ways of choosing 8 of the 64 squares to occupy with rooks.
Looking elsewhere on this StackExchange, I believe this is correct (this post is slightly different as it's re. probabilities, not just combinations)?
However, the textbook has the following answer:
I am bamboozled as I believe the denominator is $64 \choose 7$ and the numerator seems to ignore the fact that it doesn't matter the order which the rooks are placed on each row.
| The textbook answer given has a small error - the denominator product should run to $57$. Otherwise they are the same number.
$$\frac{8!}{64 \choose 8} = \frac{8!}{\frac{64!}{56!\,8!}} = \frac{8!\,8!}{\frac{64!}{56!}} = \frac{\prod_1^8{i^2}}{64\cdot 63\cdot 62\cdots 57}$$
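Both closed forms can be checked directly (my own verification):

```python
from math import comb, factorial, prod

p_direct = factorial(8) / comb(64, 8)                            # 8! / C(64, 8)
p_product = prod(i * i for i in range(1, 9)) / prod(range(57, 65))  # corrected textbook form
print(p_direct)  # about 9.1e-06
```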
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Let X be uniform on $[0, 10]$. Let $Y$ be exponential with $E(Y ) = 5$. Find $P(X < Y )$ From the given information above. I'm able to derive that
$f(x) = \cfrac{1}{10}$ from $0\leq x \leq 10$
and
$f(y) = \cfrac{1}{5}e^{\cfrac{-1}{5}y}$ for $0 < y$
I think since they're asking for P(X < Y) it's safe to assume that X and Y are independent (is this a fair assumption?) so $f(x,y) = \cfrac{1}{50}e^{\cfrac{-1}{5}y}$ for $0 \leq x \leq 10$ and $0 < y$
so $P(X < Y) = \int_0^{\infty}\int_0^y\cfrac{1}{50}e^{\cfrac{-1}{5}y}dxdy = \cfrac{1}{2}$
Have I understood and done this problem correctly?
| $$P(X<Y)=\int_{[0,10]\times[0,\infty)}1_{x\le y}f_{X,Y}(x,y)d(x,y)$$
Now, since $X$ and $Y$ are independent, the joint density function is the product $f_{X,Y}(x,y)=f_X(x)f_Y(y)$. Using integration by parts, one finds
$$\begin{align*}
\int_{[0,10]\times[0,\infty)}1_{x\le y}f_{X,Y}(x,y)d(x,y) &=\int_0^\infty \int_0^{10} 1_{x\le y}\frac{1}{10}\frac{1}{5}e^{-\frac{1}{5}y}dxdy\\
&=\int_0^\infty\int_0^{min(10,y)} \frac{1}{10}\frac{1}{5}e^{-\frac{1}{5}y}dxdy\\
&=\int_0^{10}\frac{y}{10}\frac{1}{5}e^{-\frac{1}{5}y}dy+\int_{10}^\infty\frac{10}{10}\frac{1}{5}e^{-\frac{1}{5}y}dy\\
&=\frac{1}{50}\Big[-5ye^{-\frac{1}{5}y}\Big]_{y=0}^{y=10}-\frac{1}{50}\int_0^{10}-5e^{-\frac{1}{5}y}dy+e^{-2}\\
&=-e^{-2}-\frac{1}{2}(e^{-2}-1)+e^{-2}\\
&=\frac{1}{2}(1-e^{-2})
\end{align*}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Probability calculation for two independent uniform random variables I want to calculate the probability $\mathbb{P}\left( \max \{ U, \frac{1}{2} \} \leq X\right) $ with $U, X \sim $Unif$[0,1]$ and independent. I know that the result is $\frac{3}{8}$, but do not really know how to get there. I tried
\begin{align*}
\mathbb{P}\left( \max \{ U, \frac{1}{2} \} \leq X\right) = \mathbb{P}\left( U \leq X \text{ and } \frac{1}{2} \leq X\right) \overset{(*)}{=} \underbrace{\mathbb{P}\left( U \leq X \right)}_{= \frac{1}{2}} \cdot \underbrace{\mathbb{P}\left( \frac{1}{2} \leq X\right)}_{= \frac{1}{2}} = \frac{1}{4}.
\end{align*}
At (*) I used the Independence of $X$ and $U$. Obviously there must be a mistake at some point. Can anybody tell me how to get to $\frac{3}{8}$? It can't be that hard, but right now I do not know how to do it properly.
| Here is a sketch of the problem. I divided the $[0,1]\times[0,1]$ plane into 8 parts and shaded the areas where $\max(U,1/2)\leq X$. As you can see, it makes up $3/8$. (This is not a solution of course!)
Edit: corrected after feedback.
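The shaded area can also be computed by conditioning on $X$ (my own check): $P(\max(U,\tfrac12)\le X)=\int_{1/2}^{1}P(U\le x)\,dx=\int_{1/2}^{1}x\,dx=\tfrac38$.

```python
# Midpoint Riemann sum of P(U <= x) = x over x in [1/2, 1].
N = 100000
h = 0.5 / N
p = sum((0.5 + (i + 0.5) * h) * h for i in range(N))
print(round(p, 6))  # 0.375
```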
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Show that $P=\sqrt{a^2-2ab+b^2}+\left(\frac{a}{\sqrt{a}-\sqrt{b}}-\sqrt{a}\right):\left(\frac{b\sqrt{a}}{a-\sqrt{ab}}+\sqrt{b}\right)$ is rational Show that the number $$P=\sqrt{a^2-2ab+b^2}+\left(\dfrac{a}{\sqrt{a}-\sqrt{b}}-\sqrt{a}\right):\left(\dfrac{b\sqrt{a}}{a-\sqrt{ab}}+\sqrt{b}\right)$$ is a rational number ($P\in\mathbb{Q})$ if $a\in\mathbb{Q},b\in\mathbb{Q},a>0,b>0$ and $a\ne b$. Find the value of $P$ if $a=1.1$ and $b=1.22$.
My try: $$P=\left|a-b\right|+\dfrac{a-\sqrt{a}\left(\sqrt{a}-\sqrt{b}\right)}{\sqrt{a}-\sqrt{b}}:\dfrac{b\sqrt{a}+\sqrt{b}\left(a-\sqrt{ab}\right)}{a-\sqrt{ab}}=\\=|a-b|+\dfrac{\sqrt{ab}}{\sqrt{a}-\sqrt{b}}.\dfrac{a-\sqrt{ab}}{b\sqrt{a}+a\sqrt{b}-\sqrt{b^2a}}.$$
| \begin{align}
P&=\sqrt{a^2-2ab+b^2}+\left(\dfrac{a}{\sqrt{a}-\sqrt{b}}-\sqrt{a}\right):\left(\dfrac{b\sqrt{a}}{a-\sqrt{ab}}+\sqrt{b}\right)=\\
&=|a-b|+\dfrac{\sqrt{ab}}{\sqrt a-\sqrt b}:\dfrac{b\sqrt a+a\sqrt b-b\sqrt a}{a-\sqrt{ab}}=\\
&=|a-b|+\dfrac{\sqrt{ab}}{\sqrt a-\sqrt b}:\dfrac{a\sqrt b}{\sqrt a\left(\sqrt a-\sqrt b\right)}=\\
&=|a-b|+\dfrac{\sqrt{ab}}{\sqrt a-\sqrt b}\cdot\dfrac{\sqrt a\left(\sqrt a-\sqrt b\right)}{a\sqrt b}=\\
&=|a-b|+\dfrac{a\sqrt b\left(\sqrt a-\sqrt b\right)}{a\sqrt b\left(\sqrt a-\sqrt b\right)}=\\
&=|a-b|+1\;.
\end{align}
Since $\;a\;$ and $\;b\;$ are rational numbers, also $\;P\;$ is a rational number.
If $\;a=1.1\;$ and $\;b=1.22\;,\;$ the value of $\;P\;$ is
$\begin{align}
P&=|a-b|+1=|1.1-1.22|+1=|-0.12|+1=\\
&=0.12+1=1.12
\end{align}$
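A direct numeric evaluation of the original expression at $a=1.1$, $b=1.22$ (my own check) agrees:

```python
import math

a, b = 1.1, 1.22
P = (math.sqrt(a**2 - 2*a*b + b**2)
     + (a / (math.sqrt(a) - math.sqrt(b)) - math.sqrt(a))
     / (b * math.sqrt(a) / (a - math.sqrt(a * b)) + math.sqrt(b)))
print(round(P, 6))  # 1.12
```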
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Sequences and inequalities in $\mathbb{R}^n$ Let $x_k,y_k$ be sequences in $\mathbb{R}^n$ with $\lim x_k=a$, $\lim y_k=b$ and $$|x_k-b|<r<|y_k-a|$$ prove that $|a-b|=r.$
My attempt was:
We can write $|x_k-b+(a-a)|=|x_k-a+(a-b)|$ and $|y_k-a+(b-b)|=|y_k-b +(b-a)|$. My first question is: In $\mathbb{R}$ we can write $|u|-|v|\leq |u-v|$, can we do $$|x_k-a|-|b-a|\leq |x_k-a-(b-a)| ?$$
if we can, we have $$|x_k-a|-|b-a|<r<|y_k-b|+|b-a|$$
and $|x_k-a|<\epsilon,|y_k-b|<\epsilon.$ So we can write $$\epsilon-|b-a|<r<\epsilon +|b-a|$$ This was my last "step". I can't see how I can conclude that $|b-a|=r.$
| Your second-last and last inequalities are true but not useful. Instead you want to use the triangle inequality at the second-last inequality as follows:
For all sufficiently large $k$: $|b-a|-\epsilon < |b-a|-|x_k-a|<r < |y_k-b|+|b-a|< |b-a|+\epsilon\implies \big|r-|b-a|\big|<\epsilon\implies r = |b-a|$, since this holds for every $\epsilon > 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4090891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Definition of $Y^f$ in category theory: exponential pullback. Pullback exponential is defined as follows.
Let $f \colon A \to B$, $g \colon X \to Y$. Let $C$ be a category with bi-closed, symmetric monoidal structure.
Then the exponential pullback is defined as follows.
It's clear to me that I obtain $g^A$ by applying the right adjoint to $A \otimes -$ to $g$. But I'm having trouble understanding what $Y^f$ is. If this were in $\mathbf{Set}$, it would be clear that $Y^f$ is just pre-composition, but how should I understand $Y^f$ in a general category?
Thanks!
| $Y^f$ is precomposition, internalized via the tensor-hom adjunction.
By the adjunction, to give an arrow $Y^B\to Y^A$, it suffices to give an arrow $Y^B\otimes A\to Y$. We have an arrow $\mathrm{eval}\colon Y^B\otimes B\to Y$. So we can get the desired arrow as $\mathrm{eval}\circ (\mathrm{id}_{Y^B}\otimes f)$. Intuitively, this says "apply $f$ to the input in $A$ before evaluating the map in $Y^B$".
Similarly, $g^A$ can be viewed as internalized postcomposition. By the adjunction, to give an arrow $X^A\to Y^A$, it suffices to give an arrow $X^A\otimes A\to Y$. We have an arrow $\mathrm{eval}\colon X^A\otimes A\to X$. So we can get the desired arrow as $g\circ \mathrm{eval}$. Intuitively, this says "apply $g$ to the output in $X$ after evaluating the map in $X^A$".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a central limit theorem for $L^p$? Let $Y_i$ be a sequence of independent, identically distributed random variables with mean 0 and variance 1. If $X$ is independent and Gaussian with the same mean and variance, then does $\frac{1}{\sqrt{n}}(\sum_{i=1}^n Y_i)$ converge in $L^p$ to $X$? The central limit theorem usually gives convergence in distribution, but I would like to know if there is a stronger mode of convergence under which the sum converges to a Gaussian random variable.
| Here is a counterexample: assume each $Y_i$ has distribution $\mathcal N(0,1)$, so that $\frac 1{\sqrt n}\sum_{i=1}^nY_i \sim \mathcal N(0,1)$.
Suppose for the sake of contradiction that there is some $X\sim \mathcal N(0,1)$ independent of $(Y_1,\ldots)$ with $\frac 1{\sqrt n}\sum_{i=1}^nY_i \xrightarrow{L^p}X$.
Since $X$ is independent of $(Y_1,Y_2,\ldots)$ the random variable $\frac 1{\sqrt n}\sum_{i=1}^nY_i - X$ has distribution $\mathcal N(0,2)$ thus
$$E\left[\left|\frac 1{\sqrt n}\sum_{i=1}^nY_i - X\right|^p\right]$$
is a non-zero constant. It does not converge to $0$ as $n\to \infty$, a contradiction.
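A Monte Carlo illustration of the non-vanishing constant (my own sketch, for $p=1$ and standard normal $Y_i$): the mean gap $E\left|\frac{1}{\sqrt n}\sum Y_i - X\right|$ stays near $E|N(0,2)| = 2/\sqrt{\pi}\approx 1.128$ instead of shrinking.

```python
import math, random

random.seed(0)

def mean_abs_gap(n, trials=20000):
    """Estimate E| (Y_1 + ... + Y_n)/sqrt(n) - X | with X independent N(0,1)."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.gauss(0.0, 1.0) for _ in range(n)) / math.sqrt(n)
        x = random.gauss(0.0, 1.0)   # independent of the Y_i
        total += abs(s - x)
    return total / trials

gaps = [mean_abs_gap(n) for n in (1, 10, 50)]
print(gaps)  # each stays close to 2/sqrt(pi) ~ 1.128; none tends to 0
```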
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Finite generation of vertex groups of a cyclic splitting of a hyperbolic group and generalisations of Grushko Theorem Let $G$ be a finitely generated word hyperbolic group. Suppose $G$ acts non-trivially (without a global fixed point) on a tree without inversions and with cyclic edge stabilizers. Is it true that the vertex stabilizers are finitely generated? Can we say the same without hyperbolicity?
In the case of a general f.g. group acting on a tree with trivial edge stabilizers, Grushko's theorem tells us that the vertex groups must be finitely generated. Although I suspect there are other ways to approach my question (especially with the extra assumption of hyperbolicity), are there generalisations of Grushko's theorem to amalgamated free products?
| Theorem: If $G$ is a finitely generated group acting non-trivially and without inversions on a tree $X$, and if the edge stabilizers are finitely generated, then the vertex stabilizers are also finitely generated.
The theorem follows by combining three fairly elementary lemmas which I state below. The proof of these can be found in the following book:
D. E. Cohen, Combinatorial Group Theory: A Topological Approach. Cambridge etc.: Cambridge University Press (1989); ZBL0697.20001.
We say a graph of groups is finite if the underlying graph is a finite graph.
Lemma 1: If $G$ is a finitely generated group acting non-trivially and without inversions on a tree $X$, then $G$ is the fundamental group of a finite graph of groups $\mathcal{Y}$. If $G_v$ is a vertex stabilizer under the action of $G$ on $X$, then either it is isomorphic to an incident edge stabilizer or it is conjugate to a vertex group of $\mathcal{Y}$.
Lemma 2: Let $I$ be a set and $H$ an HNN extension of the form $$ H = \langle A, t_i \mid t_iB_it_i^{-1} = B_{-i} \, \forall i\in I\rangle. $$ If $H$ is finitely generated then I is finite and $A$ is finitely generated.
Lemma 3: Let $H = A \ast_C B$. If $H$ and $C$ are finitely generated then so are $A$ and $B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Under what conditions is it true that if $f(x) \sim g(x)$ as $x \rightarrow x_0$, then $f'(x) \sim g'(x)$ as $x \rightarrow x_0$? So we say that two functions $f(x)$ and $g(x)$ of a real variable are asymptotic to each other as $x \rightarrow x_0$ ($x_0$ need not be finite) if
$$ \lim_{x \rightarrow x_0} \frac{f(x)}{g(x)} =1,$$
and in that case we write $f(x) \sim g(x)$ as $x \rightarrow x_0$. It is also not generally true that if $f(x) \sim g(x)$, then $f'(x) \sim g'(x)$. To see this, consider the case $f(x) = x + \sin(x)$ and $g(x) = x$. In this case it is true that $x + \sin(x) \sim x$ as $x \rightarrow \infty$, yet it is not true that $1 + \cos(x) \sim 1$ as $x \rightarrow \infty$ (in fact the latter limit does not even exist).
My question is, under what conditions on $f(x)$ and $g(x)$ can we say that it is true that if $f(x) \sim g(x)$, then $f'(x) \sim g'(x)$ as $x \rightarrow x_0$.
| This is not a complete answer, but I fear the extra assumptions need to be too strong to be able to conclude something of this kind at this level of generality.
Indeed, if you know a bit of set theory, the asymptotic equivalence is an equivalence relation, and as such it may be seen as a particular "equality" between functions that collects them into specific classes: as if we took a great magnifying glass and focused our attention really close to a particular point. In a nutshell, I may have a monstrous function whose graph I am not able to study globally, so I try to use asymptotic equivalences to reduce the function, at least locally, to an easier one, in order to see its behaviour close to a point. For example, take $$f(x)=\sin(\log(1 + \sinh(e^{\arctan(x^3)}-1))) $$ which is asymptotically equivalent for $x \rightarrow 0$ to $$g(x)= x^3$$
But then, to be able to say their derivatives are also asymptotically equivalent for $x \rightarrow 0$, I think one must add a very restrictive assumption on the behavior of the functions in a neighborhood of zero, probably so restrictive that the functions end up being the same, at least in that neighborhood and at least for the first-order expansion around that specific point (but I do not have a proof of this, it is just my intuition).
Indeed, asymptotic calculus is not compatible with derivatives: your example is nice, as is the following: $$f(x)=1+2x\sim_01\sim_0g(x)=1+x,$$ but $$f'(x)=2\nsim_0g'(x)=1$$
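Numerically (my own illustration of this counterexample): the ratio $f/g$ tends to $1$ at $0$, while $f'/g'$ is identically $2$.

```python
f = lambda x: 1 + 2 * x   # f'(x) = 2
g = lambda x: 1 + x       # g'(x) = 1
ratios = [f(x) / g(x) for x in (1e-1, 1e-3, 1e-6)]
print(ratios)  # -> 1 as x -> 0, yet f'/g' = 2 everywhere
```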
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091428",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Show that $h$ is a constant. Suppose $h$ is a periodic function and has limit $l\in\mathbb{R}$ at $+\infty$. Show that $h$ is a constant.
Here is what I think.
Since $h(x)$ is a periodic function, there exists $T\in\mathbb{R}^*$ such that $h(x+T)=h(x)$ for all $x$.
And by the definition of limit $\lim_{x\to +\infty}h(x)=l$
$\forall \epsilon>0, \exists x_0, \forall x>x_0 $
$$|h(x)-l|<\epsilon$$
| Consider the interval $I=(x_0,x_0+T)$. Suppose we find $x_1,x_2 \in I$ such that $|h(x_1)-h(x_2)|=\epsilon_1>0$. Let's take $\epsilon=\frac{\epsilon_1}{2}$; then $|h(x_1)-l|<\epsilon$ and $|h(x_2)-l|<\epsilon$. On the other hand, $|h(x_1)-h(x_2)|=|h(x_1)-l-(h(x_2)-l)|\leq|h(x_1)-l|+|h(x_2)-l|<\epsilon_1$. A contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Concyclic tangent property of circles Suppose you have some $n$ points lying on a circle ($n$ can be as large or as small as you want), spaced however you want. Now, let's say we give each point a constant velocity tangent to the circle, with the direction given by $\vec{v}= \vec{\omega} \times \vec{r}$, where $\vec{\omega}$ is a vector perpendicular to the plane (you can fix its direction to be above or below, but keep this direction the same for all points) and $\vec{r}$ is the vector from the center of the circle to that point. Now, after some time $\Delta t$, prove that these points are still concyclic.
This question arose from Q31 on page 13 of Jaan Kalda's olympiad physics notes. As for my personal try: I checked the result by seeing if it was true for a shape inscribed in the circle, and it looks to me that the result is more or less correct:
The black is the initial rectangle configuration made by points on the circle, green is the rectangle figure formed by points after moving with tangent velocity with some time after that blue and then purple are the figures.
However, I have pretty much no idea after this on how to prove this. Any help will be appreciated.
| WLOG, let the circle be centered at origin with unit radius. Then:
Let
$$
\begin{align*}
\theta_n \in [0, 2 \pi) &\Rightarrow z_n = e^{i\theta_n} \\
&\Rightarrow z_n^t = z_n + (v\Delta t)e^{i(\frac{\pi}{2} + \theta_n)} = e^{i\theta_n} + (v\Delta t)e^{i(\frac{\pi}{2} + \theta_n)} \\
&\Rightarrow ||z_n^t||^2 = 1 + v^2 (\Delta t)^2
\end{align*}
$$
Qed. Also note that the new circle is still centred at the same point with ratio of radii as $\frac{1}{\sqrt{1 + v^2 (\Delta t)^2}}$.
Edit. After writing my overkill solution, I just realized that you can use the Pythagorean theorem to solve it immediately.
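The computation in complex form can be checked numerically (my own sketch): each $z=e^{i\theta}$ moves to $z + (v\Delta t)\,iz$, and $|z(1+iv\Delta t)|$ is the same for every $\theta$.

```python
import cmath, math

v_dt = 0.37                      # arbitrary value of v * Delta t
radii = []
for k in range(8):
    theta = 2 * math.pi * k / 8
    z = cmath.exp(1j * theta)    # point on the unit circle
    z_new = z + v_dt * 1j * z    # tangential displacement (v*dt) * i * z
    radii.append(abs(z_new))
print(radii)  # all equal sqrt(1 + v_dt**2) ~ 1.0663
```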
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Asymptotic Gamma function bound Consider integers $m$ and $n$, with $m < n$.
Consider a random variable $X$, distributed according to the $\text{Gamma}~(2^{n-m}, 2^{-n})$ distribution. Consider the random variable
$$\mathrm{Z} = \frac{2^{m}}{2}\left|X - \frac{1}{2^{m}}\right|.$$
I am trying to compute the expected value of $\mathrm{Z}$. It is given by the following integral.
$$
\mathbb{E}(\mathrm{Z}) = \frac{2^{m}}{2}\int_{0}^{\infty} \left| x - \frac{1}{2^{m}} \right| \frac{\left(2^{-n}\right)^{-2^{n-m}} e^{-2^n x} x^{2^{n-m}-1}}{\Gamma
\left(2^{n-m}\right)}dx.
$$
A calculation in Mathematica tells me that the value is
\begin{equation}
\frac{2^{2^{-m} \left(2^m-2^n\right) (m-n)} e^{-2^{n-m}}}{\Gamma \left(2^{n-m}\right)}.
\end{equation}
Now, consider the cases when $m \leq \log(n)$ and $m \leq n - \log(n)$.
I am trying to find an asymptotic upper bound on the expected value, for both cases. For example, is it upper bounded by
\begin{equation}
\frac{\text{poly}(n)}{2^{\text{poly}(n)}},
\end{equation}
for some polynomial in n (or maybe, some other inverse polynomial or inverse exponential upper bound)? I tried some numerical simulations with trivial examples (like $\frac{1}{2^{n/2}}, \frac{n}{2^{n/4}} \frac{1}{n} )$ etc. These do not work.
| It is known that
$$
\Gamma (x) \ge \sqrt {2\pi } x^{x - 1/2} e^{ - x}
$$
for all $x>0$ (cf. http://dlmf.nist.gov/5.6.E1). Because of Stirling's formula, this is a rather sharp lower bound. This gives the simple bound
$$
\frac{{2^{2^{ - m} (2^m - 2^n )(m - n)} e^{ - 2^{n - m} } }}{{\Gamma (2^{n - m} )}} \le \frac{1}{{\sqrt {2\pi } 2^{(n - m)/2} }}.
$$
When $m \le \log n$ or $m \le n - \log n$, we obtain the upper bounds
$$
\frac{1}{{\sqrt {2\pi } 2^{\frac{{n - \log n}}{2}} }},\quad \frac{1}{{\sqrt {2\pi } 2^{\frac{1}{2}\log n} }}
$$
respectively.
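Writing $k=2^{n-m}$, the exponent $2^{-m}\left(2^m-2^n\right)(m-n)$ equals $(k-1)(n-m)$, so the quantity being bounded is $k^{k-1}e^{-k}/\Gamma(k)$. A log-scale spot check of the bound (my own verification):

```python
import math

checks = []
for n, m in [(10, 2), (20, 8), (25, 11)]:
    k = 2 ** (n - m)
    log_ratio = (k - 1) * math.log(k) - k - math.lgamma(k)  # log of k^{k-1} e^{-k} / Gamma(k)
    log_bound = -0.5 * math.log(2 * math.pi * k)            # log of 1 / sqrt(2 pi k)
    checks.append(log_ratio <= log_bound)
print(checks)  # [True, True, True]
```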
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4091909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that a monotone and surjective function is continuous Let $I$ be an interval and $f:I \rightarrow \mathbb{R}$ be monotone and surjective; prove that $f$ is continuous.
I tried using the $\epsilon$-$\delta$ definition and supposing that $f$ is not continuous, but I don't see where to use that $f$ is surjective.
| Choose a point $a$ from $I$. WLOG, assume $a$ is an interior point; then there exists an open interval $(x_1, x_2)$ in $I$ containing $a$. Now, since $f$ is monotone (WLOG, assume increasing), $f(x_1)\leq f(a)\leq f(x_2)$. Let $$p= \min\{f(a)-f(x_1),\,f(x_2)-f(a)\}.$$ Now $f(a)-p$ and $f(a)+p$ will belong to $f(I)$, and since $f$ is surjective they have preimages, say $r$ and $s$ respectively. By the contrapositive of monotonicity ($f(x)< f(y) \implies x< y$), we get $r \leq a \leq s$. Now take
$$\delta_1 = \min\{a-r,\, s-a\},$$
so that $r \leq a-\delta_1 \leq a \leq a+\delta_1 \leq s$.
Now let $\epsilon$ be given. If $\epsilon > p$, then $\delta_1$ works in the $\epsilon$-$\delta$ definition of continuity.
Now, say $\epsilon$ is less than $p$; then $f(a)-\epsilon$ and $f(a)+\epsilon$ must have preimages (because $f$ is surjective), say $c$ and $d$ respectively. Now take $\delta=\min\{a-c,\, d-a\}$; this $\delta$ will work. Why? Draw pictures and check. This is an easy, nice proof; use the geometry and you'll never forget it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
How are these two approximation equations for the Euclidean distances obtained?
$$r\gg 0$$
$$r_{1}\approx r-\frac{d}{2}\cos(\theta)\tag{1}$$
$$r_{2}\approx r+\frac{d}{2}\cos(\theta)\tag{2}$$
How are the above approximation equations derived?
| Use law of cosines and $d\ll r$
$r_1^2=r^2+(d/2)^2-dr\cos(\theta)$
$r_2^2=r^2+(d/2)^2+dr\cos(\theta)$
Divide by $r^2$ and get:
$(r_1/r)^2=1+(d/2r)^2-(d/r)\cos(\theta)$
Use $d\ll r$ and expand square root to first order
to get $r_1/r\approx 1-(d/2r)\cos(\theta)$ or $r_1\approx r-(d/2)\cos(\theta)$
Similarly $r_2\approx r+(d/2)\cos(\theta)$
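A quick numeric comparison (my own check) of the exact law-of-cosines distance with the first-order approximation, for $d\ll r$:

```python
import math

r, d, theta = 100.0, 1.0, 0.7
r1_exact = math.sqrt(r**2 + (d / 2)**2 - d * r * math.cos(theta))
r1_approx = r - (d / 2) * math.cos(theta)
print(r1_exact, r1_approx)  # agree to ~5e-4, i.e. to order d^2 / r
```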
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Help with inverse proportions The value of $y$ varies inversely as $\sqrt x$ and when $x=24$, $y=15$.
What is $x$ when $y=3$?
I'm having trouble on this and I don't get why it's not $\frac{2\sqrt6\cdot15}{3}=10\sqrt6$?
Am I misinterpreting the problem? This is how I learned inverse proportion so I'm really unsure.
| Your mistake is that you got $\sqrt x=10\sqrt6$ and not $x=10\sqrt6$. You must square to get $x=600$.
For your note, here is an elaborative answer.
Two variable quantities $x$ and $y$ are said to be inversely proportional if and only if their product is a constant. Symbolically,
$$x\propto\frac1y\iff xy=k$$
for some constant $k$.
Now, for the given problem, we should have $y\sqrt x=k$. Now, putting the given values, $k=15\sqrt{24}$. Finally, for $y=3$,
$$3\sqrt x =15\sqrt{24}\implies\boxed{x=600}$$
Hope this helps. Ask anything if not clear :)
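The same steps in code (my own check): fix $k=y\sqrt x$ from the given pair, then solve for $x$ when $y=3$.

```python
import math

k = 15 * math.sqrt(24)   # constant of inverse proportionality, k = y * sqrt(x)
x = (k / 3) ** 2         # y = 3  =>  sqrt(x) = k / 3
print(round(x, 6))  # 600.0
```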
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
real valued random variable inclusion-exclusion I have the following question and I need help with it.
Let $X_1, X_2, X_3$ be real-valued random variables with densities, and for all $i,j \in \{1,2,3\}$ such that $i \neq j$ let $a_{ij} = P(X_i > X_j)$.
I must prove that $\min\{a_{12},a_{23},a_{31}\} \leq 2/3$.
There is a hint to use the inclusion-exclusion principle but I don't know how.
| Suppose that $\min\{a_{12},a_{23},a_{31}\}>2/3$. Then
\begin{align}
2<a_{12}+a_{23}+a_{31}&=\mathsf{P}(\{X_1>X_2\}\cup \{X_2>X_3\}\cup \{X_3>X_1\}) \\
&\quad+\mathsf{P}(X_1>X_2>X_3) \\
&\quad+\mathsf{P}(X_3>X_1>X_2) \\
&\quad+\mathsf{P}(X_2>X_3>X_1)\le 2.
\end{align}
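The inclusion-exclusion identity above is pointwise on indicators, so it can be checked event-by-event in a simulation (my own sketch, using independent uniforms, for which each $a_{ij}=1/2$):

```python
import random

random.seed(1)
T = 100000
lhs = rhs = 0
for _ in range(T):
    x1, x2, x3 = random.random(), random.random(), random.random()
    lhs += (x1 > x2) + (x2 > x3) + (x3 > x1)
    union = (x1 > x2) or (x2 > x3) or (x3 > x1)
    rhs += union + (x1 > x2 > x3) + (x3 > x1 > x2) + (x2 > x3 > x1)
print(lhs == rhs, lhs / T)  # identity holds exactly; average sum ~ 1.5 <= 2
```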
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Damped Forced Motion Where forcing happens at a time ahead of 0 So presume that I have the following DE which represents a vibrating spring with a mass attached to it
$$x''+6x'+10x=25\cos(4t), x(0) = \frac{1}{2},x'(0)=0$$
I am led to believe that the force is applied periodically starting at $t = 0$ seconds (that is to say, the forcing function is applied when the mass is released). What form would this DE have if the force were applied starting at some greater positive time, say $t = 2$ (that is to say, the forcing function is applied 2 seconds after the mass is released)? What form would it have if, instead of a periodic forcing function, the forcing function were constant, as for the DE:
$$x''+6x'+10x=5$$
| Case 1: $x''+6x'+10x=25\cos(4t), x(0) = \frac{1}{2},x'(0)=0$. The solution is
$$x(t) = \dfrac{1}{102} e^{-3 t} \left(-172 \sin (t)+100 e^{3 t} \sin (4 t)+76 \cos (t)-25 e^{3 t} \cos (4 t)\right)$$
Case 2: $x''+6x'+10x=25\cos(4t), x(2) = \frac{1}{2},x'(2)=0$. The solution is
$x(t) = \dfrac{1}{204}\left(e^{6-3 t} (-306 \sin (2-t)+225 \sin (10-t)-425 \sin (t+6)+102 \cos (2-t)-375 \cos (10-t)+425 \cos (t+6))-50 (\cos (4 t)-4 \sin (4 t)\right)$
Case 3: $x''+6x'+10x=5, x(0) = \frac{1}{2},x'(0)=0$. The solution is
$$x(t) = \dfrac{1}{2}$$
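As a sanity check of the Case 1 closed form (my own verification, via central finite differences with step $h$):

```python
import math

def x1(t):
    """Case 1 closed-form solution."""
    return (math.exp(-3 * t) * (-172 * math.sin(t)
            + 100 * math.exp(3 * t) * math.sin(4 * t)
            + 76 * math.cos(t)
            - 25 * math.exp(3 * t) * math.cos(4 * t))) / 102

h = 1e-4
residuals = []
for t in (0.3, 1.0, 2.5):
    d1 = (x1(t + h) - x1(t - h)) / (2 * h)               # ~ x'(t)
    d2 = (x1(t + h) - 2 * x1(t) + x1(t - h)) / h**2      # ~ x''(t)
    residuals.append(d2 + 6 * d1 + 10 * x1(t) - 25 * math.cos(4 * t))
print([round(r, 3) for r in residuals])  # all ~ 0
```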
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
To prove that a polynomial is divisible by another polynomial. Show that $x^{4p}+x^{4q+1}+x^{4r+2}+x^{4s+3}$ is divisible by $x^3+x^2+x+1$, where $p$, $q$, $r$, $s$ belong to the natural numbers.
So, what I did is this:
$x^3+x^2+x+1=(x^2+1)(x+1)$, so $x = +1$ and $-1$.
Then I put in $f(1)$ and $f(-1)$, but I am not able to solve it further after this step.
| Zeros of the polynomial $(x^2+1)(x+1)$ are $-1,\pm i$. (I hope you are acquainted with $i=\sqrt{-1}.$) Show that $f(x) = x^{4p} + x^{4q+1} + x^{4r+2} + x^{4s+3}$ satisfies $$f(-1)=f(i) = f(-i) = 0.$$ Hence conclude (e.g., by factor theorem) that the polynomial (not the number itself) $(x+1)(x^2+1)$ divides the polynomial $f(x).$
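A direct numeric check (my own addition) that $f$ vanishes at $-1,\,i,\,-i$ for sample exponents, confirming the factor-theorem argument:

```python
def f(z, p, q, r, s):
    return z**(4*p) + z**(4*q + 1) + z**(4*r + 2) + z**(4*s + 3)

# Evaluate |f| at the three roots of (x+1)(x^2+1) for one choice of p, q, r, s.
vals = [abs(f(z, 2, 3, 5, 7)) for z in (-1 + 0j, 1j, -1j)]
print(vals)  # all (numerically) zero
```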
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Weird "hidden answer" in $2\tan(2x)=3\text{cot}(x)$ The question is
Find the solutions to the equation $$2\tan(2x)=3\cot(x) , \space 0<x<180$$
I started by applying the tan double-angle formula and the reciprocal identity for cot:
$$2\cdot \frac{2\tan(x)}{1-\tan^2(x)}=\frac{3}{\tan(x)}$$
$$\implies 7\tan^2(x)=3 \therefore x=\tan^{-1}\left(\pm\sqrt{\frac{3}{7}} \right)$$
$$x=-33.2,33.2$$
Then by using the quadrants
I was led to the final solution $x=33.2,146.8$; however, the answer in the book has an additional solution of $x=90$. I understand the reasoning that $\tan(180)=0$ and $\cot(x)$ tends to zero as $x$ tends to $90$, but how was this solution found?
Is there a process for consistently finding these "hidden answers"?
| $$\frac{4\tan(x)}{1-\tan^2(x)} = \frac{3}{\tan(x)}$$
$$\frac{4\tan(x)}{1-\tan^2(x)} - \frac{3}{\tan(x)} = 0$$
$$\frac{4\tan^2(x)-3[1-\tan^2(x)]}{\tan(x)[1-\tan^2(x)]} = 0$$
$$\frac{7\tan^2(x)-3}{\tan(x)[1-\tan^2(x)]} = 0$$
You focused on the fact that the equation is satisfied when the numerator is zero, i.e., $7\tan^2(x)-3=0$, but the equation is also satisfied when $\tan(x)\to\infty$ (when the denominator itself tends to infinity).
$$\lim_{\tan(x)\to\infty} \frac{7\tan^2(x)-3}{\tan(x)[1-\tan^2(x)]} = \lim_{\tan(x)\to\infty} \frac{\tan^2(x)\left[7-\frac{3}{\tan^2(x)}\right]}{\tan^2(x)\left[\frac{1}{\tan(x)}-\tan(x)\right]} = \lim_{\tan(x)\to\infty} \frac{7-\frac{3}{\tan^2(x)}}{\frac{1}{\tan(x)}-\tan(x)} = 0$$
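A small numerical check (my own, with the angle in degrees) confirms that all three values, including the "hidden" $x=90$, are zeros of $g(x)=2\tan(2x)-3\cot(x)$:

```python
import math

def g(x_deg):
    # g(x) = 2*tan(2x) - 3*cot(x); its zeros in (0, 180) are the solutions
    x = math.radians(x_deg)
    return 2 * math.tan(2 * x) - 3 * math.cos(x) / math.sin(x)

x1 = math.degrees(math.atan(math.sqrt(3 / 7)))   # the ~33.2 degree root
solutions = [x1, 90.0, 180.0 - x1]
```

At $x=90$ both $\tan(2x)$ and $\cot(x)$ are (numerically) zero, so $g$ vanishes there too.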
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4092994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 0
} |
As a system of linear equations show lines of action of 3 co-planar forces in equilibrium meet at a point. I have multiple ways of showing that the lines of action of 3 co-planar forces in static equilibrium meet at a point. But they combine physical arguments and math. It seems like there should be some purely mathematical way of proving the claim, treating it as a system of equations having a unique solution.
Is there a way to express this as a uniquely determined system of linear equations?
Here are the equations of equilibrium corresponding to the illustration, where $\vec{r}_1=\mathfrak{p}_1-\mathscr{O},$ etc. The heavy gray line is a rigid rod in equilibrium under the applied forces.
$$\begin{aligned}
\vec{0}&=\vec{f}_{1}+\vec{f}_{2}+\vec{f}_{3}\\
\vec{0}&=\vec{r}_{1}\times\vec{f}_{1}+\vec{r}_{2}\times\vec{f}_{2}+\vec{r}_{3}\times\vec{f}_{3}\\
\vec{0}&=\left(\vec{r}_{1}-\vec{p}\right)\times\vec{f}_{1}+\left(\vec{r}_{2}-\vec{p}\right)\times\vec{f}_{2}+\left(\vec{r}_{3}-\vec{p}\right)\times\vec{f}_{3}
\end{aligned}$$
| The equation of a line through $r_i$ in the direction $f_i$ is $(r-r_i)\times f_i=0$ which we can rewrite as $r\times f_i=r_i\times f_i$. Now assuming $f_1$ and $f_2$ are not parallel, they intersect at some point $p$, so
$$p\times f_1=r_1\times f_1$$
and $$p\times f_2=r_2\times f_2$$
which gives that
$$p\times(f_1+f_2)=r_1\times f_1+r_2\times f_2.$$
Now add $p\times f_3$ to both sides, to give
$$0=r_1\times f_1+r_2\times f_2+p\times f_3$$
by the fact that the forces are in equilibrium (your first equation). But using the fact that the moments are in equilibrium (your second equation) gives that
$$r_1\times f_1+r_2\times f_2+r_3\times f_3=r_1\times f_1+r_2\times f_2+p\times f_3$$
so $$r_3\times f_3=p\times f_3$$
which exactly says that $p$ (the intersection of the first two lines) also lies on the third line, thus the lines all intersect at a common point.
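The argument can be checked with concrete made-up numbers: pick a common point $p$, three forces summing to zero, and application points on the lines through $p$. All vectors, points, and the parameter values below are my own illustrative choices:

```python
# numeric sketch: three coplanar forces in equilibrium, each applied at a
# point r_i on the line through a common point p with direction f_i
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(c, a):
    return tuple(c * x for x in a)

p = (1.0, 2.0, 0.0)                          # intended common point
f1, f2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
f3 = (-1.0, -1.0, 0.0)                       # f1 + f2 + f3 = 0
# application points on the lines through p
r1 = add(p, scale(2.0, f1))
r2 = add(p, scale(-1.5, f2))
r3 = add(p, scale(0.7, f3))

net_force = add(add(f1, f2), f3)
net_moment = add(add(cross(r1, f1), cross(r2, f2)), cross(r3, f3))
```

Both equilibrium equations hold, and $r_3\times f_3 = p\times f_3$, exactly the identity the proof derives.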
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional Variance formula $\mathbb{V}[\mu_t - \mu_{t|t-1} | F_{t-1}]$ I am working on linear state space models of the form
$$\mu_{t+1} = \mu_t + e_t, \ \ \ \ e_t \sim N(0, \sigma_e^2)$$
$$y_{t} = \mu_t + \eta_t, \ \ \ \ \eta_t \sim N(0, \sigma_{\eta}^2)$$
where $\mu_t$ is a pure random walk
In the formula in the title, restated below for clarity, the term $\mu_{t|t-1}$ is the conditional mean of some state $\mu$ at time $t$ conditioned on all observations up until time $t-1$. So it can be written $\mathbb{E}[\mu_t | F_{t - 1}]$ where $F_{t-1}$ is the set of observations $\{y_1, .., y_{t-1}\}$
I am trying to work out here, if the conditional mean, $\mu_{t|t-1}$, is in fact a constant?
$$\mathbb{V}[\mu_t - \mu_{t|t-1} | F_{t-1}]$$
The text I am reading, Financial Time Series Analysis by Tsay, states that the above formula for the variance is actually $$\mathbb{V}[\mu_t | F_{t-1}]$$
Could someone confirm for me please?
Btw this crops up on page 561 in Eq 2.11 of Financial Time Series Analysis by Tsay
| If you're working with a martingale, then the expectation of the next value in the sequence is equal to the present value.
Since the previous value is part of the filtration, if the sequence of RVs is a martingale, then $\mathbb{E} [X_{t+1}|X_t] = X_t$ and $\mathbb{E} [X_{t+1}|X_t,F] = x_t$, where $x_t$ is the value $X_t$ took.
We use the fact that
* $y_{t|t-1} = \mathbb{E}[y_t|F] = \mathbb{E}[\mu_t+\eta_t|F] = \mathbb{E}[\mu_t|F] = \mu_{t|t-1}$
* $\mu_{t|t-1}=y_{t|t-1}=\mathbb{E}[y_t|y_{t-1},F] = y_{t-1}$, which is constant given that it is part of the history $F$.
* $Var [\mu_t - \mu_{t|t-1} | F_{t-1}]=Var [\mu_t - y_{t-1} | F_{t-1}]=Var [\mu_t | F_{t-1}]$
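The last step uses only the fact that subtracting a constant does not change a variance. A tiny illustration with made-up numbers (the sample and the constant are arbitrary):

```python
def variance(xs):
    # population variance of a finite sample
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

sample = [1.2, -0.7, 3.4, 0.0, 2.1]   # made-up data
c = 5.0                               # plays the role of the constant y_{t-1}
```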
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\{(x,\sin(1/x)) : x∈(0,1] \} \cup \{(0,y) : y ∈ [-1,1] \}$ is closed in $\mathbb{R^2}$ using sequences Usual metric.
The abridged definitions we were just given are:
Convergent Sequence: A sequence $(x_n)$ in a metric space $X$ converges to $x_0$ if for every $\varepsilon \gt 0$ there is $n_0\in\mathbb{N}$ such that $d(x_n, x_0) \lt \varepsilon$ for each $n \geq n_0$.
Closed set: A set $F$ in a metric space $X$ is closed if and only if every convergent sequence in $F$ converges to a point in $F$.
This is the last question of a problem set to study for class. The definitions we are using are very standard for an Intro to Topology course, I just don't understand how to apply them properly yet. Would anyone be able to lend me a hand? Or show me how to start?
| Let $S$ denote the set in question and fix $z \in S$.
First suppose that $z = (x,\sin(\frac{1}{x}))$ for some $x \in (0,1]$. Pick your favourite sequence $(x_n)$ such that $x_n \to x$ and $x_n \in (0,1]$ (e.g. $x_n = x\left(1-\frac{1}{n+1}\right)$, which lies in $(0,1]$). Since $\sin(\frac{1}{x})$ is continuous on $(0,1]$ it follows that $\sin(\frac{1}{x_n}) \to \sin(\frac{1}{x})$. Hence, $(x_n,\sin(\frac{1}{x_n}))$ is a sequence in $S$ which converges to $z$.
Now suppose that $z = (0,y)$ for some $y \in [-1,1]$. Since the function $\sin$ is continuous, an Intermediate Value Theorem argument implies that for each $n \in \mathbb N$, there exists some $a_n \in [n,n+2\pi]$ such that $\sin(a_n) = y$. Let $x_n = \frac{1}{a_n}$ and observe that $x_n \in (0,1]$. Then $x_n \to 0$ and $\sin(\frac{1}{x_n}) \to y$. It follows that $(x_n,\sin(\frac{1}{x_n}))$ is a sequence in $S$ converging to $z$.
Since every point of $S$ is the limit point of some sequence in $S$, it follows that $S$ is closed.
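The Intermediate Value Theorem step can be made fully explicit for, say, $y=\tfrac12$: shift one solution of $\sin(a)=y$ by multiples of $2\pi$ into $[n, n+2\pi]$. The construction below is my own concrete version of that step:

```python
import math

def approach_sequence(y, N):
    # build x_n = 1/a_n with a_n in [n, n + 2*pi] and sin(a_n) = y,
    # mirroring the IVT step of the proof
    base = math.asin(y)                       # one solution of sin(a) = y
    xs = []
    for n in range(1, N + 1):
        k = math.ceil((n - base) / (2 * math.pi))
        a_n = base + 2 * math.pi * k          # shifted into [n, n + 2*pi]
        xs.append(1.0 / a_n)
    return xs

xs = approach_sequence(0.5, 200)
```

So $(x_n, \sin(1/x_n)) \to (0, y)$, as the proof requires.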
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\sum_{n=0}^{\infty} \frac{n}{n+1}x^{n+1}=\frac{x}{1-x} + \ln(1-x)$ Prove that $\sum_{n=0}^{\infty} \frac{n}{n+1}x^{n+1}=\frac{x}{1-x} + \ln(1-x)$
Right off the bat, I noticed that $\frac{x}{1-x}=x\frac{1}{1-x}= x \sum_{n=0}^{\infty}x^n$, and I know that $\ln(1+x)=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{x^n}{n}$. If somehow I can express $\ln(1-x)$ in terms of $\ln(1+x)$, then I'm set. Can I do something like $\ln(1-x)=\ln(1+(-x))=\sum_{n=1}^{\infty}(-1)^{n+1}\frac{(-x)^n}{n}$?
What's next? Should I add these 2 together? Can someone give me an exact answer? Also, the radius of convergence is supposed to be $(-1, 1)$ right?
| Know:
\begin{equation}
\frac{1}{1-x} = \sum_{n=0}^{\infty}x^{n} \implies \frac{1}{1-x} - 1 = \sum_{n=1}^{\infty}x^{n}, \label{first series}
\end{equation}
which holds for $|x| < 1$, and
\begin{equation}
\log(1+x) = -\sum_{n=1}^{\infty}\frac{1}{n}(-x)^{n} \implies \log(1-x) = -\sum_{n=1}^{\infty}\frac{1}{n}x^{n}, \label{second series}
\end{equation}
also valid for $|x| < 1$. If we now look at our series of interest,
\begin{equation}
f(x) = \sum_{n=0}^{\infty}\frac{n}{n+1}x^{n+1},
\end{equation}
we can use an index substitution $k = n+1 \iff n = k-1$ so that
\begin{align}
f(x) &= \sum_{k=1}^{\infty}\frac{k-1}{k}x^{k}\\
&= \sum_{k=1}^{\infty}\left(1-\frac{1}{k}\right)x^{k},\\
\end{align}
and under the condition that our series converges, we can use the linearity of the series operator to show
\begin{equation}
f(x) = \sum_{k=1}^{\infty}x^{k} + \left(-\sum_{k=1}^{\infty}\frac{1}{k}x^{k}\right),
\end{equation}
provided both the individual series converge (which they do). Notice that on the right-hand side of the above equation, the leftmost series is equal to $1/(1-x) - 1$ (the first series above), and the rightmost series is equal to $\log(1-x)$ (the second series above). Therefore
\begin{align}
f(x) &= \frac{1}{1-x} - 1 + \log(1-x)\\
&= \frac{x}{1-x} + \log(1-x).
\end{align}
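The identity is easy to sanity-check numerically with partial sums on $|x|<1$; the sample points and truncation length below are arbitrary:

```python
import math

def lhs(x, terms=300):
    # partial sum of sum_{n>=0} n/(n+1) * x^(n+1)
    return sum(n / (n + 1) * x ** (n + 1) for n in range(terms))

def rhs(x):
    return x / (1 - x) + math.log(1 - x)
```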
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Simplifying Boolean expression $B'D' + CD' + ABC'D$ I'm struggling with simplifying $B'D' + CD' + ABC'D$.
Isn't it that it's already simplified? I tried doing $(B' + C)(D'+D) + ABC'D$ to get $B'+C+ABC'D$, but I am getting different truth tables.
What should the steps be?
EDIT:
I now have the following steps:
So, basically, I have
$D'(B'+C)+ABC'D$ to
$D'(B+C)+D(ABC')$
$(D'+D)((B+C)+ABC')$
$B'+C+ABC'$
Can it be simplified any further?
| The original expression is as simplified as it can be.
In your EDIT, the step from:
$D'(B +C) + D(ABC')$
to:
$(D'+D)((B+C)+ABC')$
is incorrect. That is like saying that $ab+cd=(a+c)(b+d)$, which from basic algebra you should know is not correct
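A brute-force truth table over all 16 assignments confirms the point: the question's attempted "simplification" is not equivalent to the original expression. (The function names here are my own.)

```python
from itertools import product

def original(a, b, c, d):
    # B'D' + CD' + ABC'D
    return bool((not b and not d) or (c and not d) or (a and b and not c and d))

def attempted(a, b, c, d):
    # the question's (incorrect) "simplification" B' + C + ABC'
    return bool((not b) or c or (a and b and not c))

tables_match = all(original(*v) == attempted(*v) for v in product([0, 1], repeat=4))
```

For example $A=B=C=0$, $D=1$ already distinguishes the two expressions.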
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove Hilbert's 90 in characteristic $0$ using discriminant I 'm reading A Brief Guide to Algebraic Number Theory by Swinnerton-Dyer. He proves Hilbert's 90 at page 4~5 in lemma 4(assuming characteristic $0$):
lemma 4 Let $K/k$ be a Galois extension whose Galois group
$Gal(K/k)$
is cyclic with generator $\sigma$. If $\alpha$ in $K$ is such that
$\mathrm{norm}_{K/k} \alpha =1$ then $\alpha=\beta/\sigma \beta$ for
some $\beta$ in $K$; and we can take $\beta$ to be integral.
Proof Let $[K:k]=n$ and for any $\gamma$ in $K$ consider $$\beta=\gamma \cdot \alpha + \sigma \gamma \cdot \alpha \cdot \sigma
\alpha +\cdots + (\sigma^{n-1}\gamma)(\alpha \cdots
\sigma^{n-1}\alpha) $$ then $\alpha \cdot \sigma \beta =
\beta$. If $\beta=0$ for every $\gamma$ then
$\Delta^2_{K/k}(\gamma_1,...,\gamma_n)=0$ for any
$\gamma_1,...,\gamma_n$ in $K$, and this we know to be false.
I can't see why $\beta=0$ for every $\gamma$ implies the discriminant is zero; could you give me some help? Thanks in advance.
| Hint: There is a formula for the discriminant in terms of $\sigma^i(\gamma_j)$, namely
$$
\Delta(\gamma_1,\ldots ,\gamma_n)=\det ((\sigma^i(\gamma_j))_{i,j})^2.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4093974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
limit of two entirely different sequences Let be two sequences $a_n$ and $b_n$ with $a_0 $ and $b_0$ positive real numbers such that
$$a_{n+1}=3a_nb_n(a_n+b_n)$$
and
$$b_{n+1}=a_n^3+b_n^3$$
Find the limit of
$$\lim \frac{a_0^3+a_1^3+...+a_{n-1}^3}{a_n}$$
I obtained, by induction, that
$a_n+b_n=(a_0+b_0)^{3^n}$
in case it helps.
I do not know how to obtain the sum $a_0^3+a_1^3+...+a_{n-1}^3$; what should I do? Any idea is welcomed.
| The sum $a_n+b_n$ behaves nicely. We want to know which part of it is $a_n$. In other words, let $x={a\over a+b}$; how will that change with each iteration?
$$x_{n+1}={a_{n+1}\over a_{n+1}+b_{n+1}} = {3a_nb_n(a_n+b_n)\over(a_n+b_n)^3}=3x_n(1-x_n)$$
That's the logistic map with the greatest parameter value that still lets it converge to a single limit. In other words, $\lim\limits_{n\to\infty}x_n={2\over3}$.
With that in mind, the rest is simple:
* $a_0+b_0<1$: the numerator is bounded from below, the denominator tends to 0, so the answer is $\infty$.
* $a_0+b_0=1$: the numerator grows indefinitely, the denominator is bounded from above, so the answer is still $\infty$.
* $a_0+b_0>1$: the numerator is basically $a_{n-1}^3$ (the rest does not matter), and $a_{n-1}^3\approx\Big({2\over3}(a_{n-1}+b_{n-1})\Big)^3=\left({2\over3}\right)^3(a_n+b_n)$, while the denominator is $a_n\approx{2\over3}(a_n+b_n)$ , so the answer is $\left({2\over3}\right)^2=\mathbf{\color{red}{4\over9}}$.
So it goes.
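The two structural facts the answer relies on, $a_n+b_n=(a_0+b_0)^{3^n}$ and the logistic recursion for $x_n=a_n/(a_n+b_n)$, can be verified exactly with rational arithmetic; the starting values $a_0=3$, $b_0=2$ are arbitrary:

```python
from fractions import Fraction

def iterate(a0, b0, steps):
    # run the coupled recursion a' = 3ab(a+b), b' = a^3 + b^3 exactly
    a, b = Fraction(a0), Fraction(b0)
    pairs = [(a, b)]
    for _ in range(steps):
        a, b = 3 * a * b * (a + b), a ** 3 + b ** 3
        pairs.append((a, b))
    return pairs

pairs = iterate(3, 2, 5)                   # a_0 + b_0 = 5
ratios = [a / (a + b) for a, b in pairs]   # x_n = a_n / (a_n + b_n)
```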
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Localization of sheaves Let $(X,\mathscr{O}_X)$ a ringed space and $\mathscr{F}$ an $\mathscr{O}_X$-module. Let $U\subseteq X$ be an open subset, and suppose we have $\mathscr{F}|_U=\mathscr{O}_U$, then for $x\in U$, how does this imply $\mathscr{F}_x=\mathscr{O}_{X,x}$? The explanation I saw was "because localization preserves exactness", but I don't quite understand how does it work here. Can somebody explain?
| It depends on how much formality you have seen but for me the most satisfying proof is:
let $j: U \rightarrow X$ be the inclusion and $i: \{\star \} \rightarrow U$ of image $x$. We have pullback functors $$j^{-1}: Sh(X) \rightarrow Sh(U) \\ i^{-1}: Sh(U) \rightarrow Sh(\{\star\}) \\
(j \circ i)^{-1}: Sh(X) \rightarrow Sh(\{\star\}).$$
And they are compatible with composition: $i^{-1} \circ j^{-1} = (j \circ i)^{-1}$.
By definition $\mathcal{F} \vert_U = j^{-1} \mathcal{F}$ and $O_U = j^{-1}O_X$.
Hence $\mathcal{F}_x = (j \circ i)^{-1} \mathcal{F} = i^{-1} \circ j^{-1} \mathcal{F} = i^{-1} \circ j^{-1} O_X = (j \circ i)^{-1} O_X = O_{X,x}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
For any $n-1$ non-zero elements of $\mathbb Z/n\mathbb Z$, we can make zero using $+,-$ if and only if $n$ is prime Inspired by Just using $+$ ,$-$, $\times$ using any given n-1 integers, can we make a number divisible by n?, I wanted to first try to answer a simpler version of the problem, that considers only two operations instead of three.
Let $n>2$ and parentheses are not allowed. Then, there are equivalent ways to ask this:
* Given any set of $n-1$ non-multiples of $n$, can we make a multiple of $n$ using $+,-$?
* Given any $n-1$ non-zero elements of $\mathbb Z/n\mathbb Z$, can we make $0$ using $+,-$?
Alternatively, we can also be asking to partition a $n-1$ element set $S$ into two subsets $S_+$ and $S_-$, such that the difference between the sum of elements in $S_+$ and the sum of elements in $S_-$ is a multiple of $n$ (is equal to $0$ modulo $n$).
For example, if $n=3$ then there are only $3$ (multi)sets we need to consider:
$$
\begin{array}{}
1-1=0\\
1+2=0\\
2-2=0\\
\end{array}
$$
which are all solvable (we can make a $0$ in $\mathbb Z/n\mathbb Z$).
In general, there are $\binom{2n-3}{n-1}$ (multi)sets to consider for a given $n$.
My conjecture is that any such (multi)set is solvable if and only if $n$ is a prime number.
If $n$ is not prime, then it is not hard to see that this cannot be done for all (multi)sets. If $n$ is even, then take all $n-1$ elements equal to $1$ to build an unsolvable (multi)set. If $n$ is odd, then take $n-2$ elements equal to a prime factor of $n$ and the last element equal to $1$ to build an unsolvable (multi)set.
It remains to show that if $n$ is prime, then all such (multi)sets are solvable.
I have confirmed this for $n=3, 5, 7, 11, 13$ using a naive brute force search.
Can we prove this conjecture? Or, can we find a prime that does not work?
| Number the elements of the set $k_1, k_2, \dots, k_{p-1}$, where $p$ is an odd prime.
Another equivalent way to ask the question is to phrase it in terms of sum sets, where $A+B$ is defined to be $\{a + b : a \in A, b \in B\}$. The possible ways to specify the signs in $\pm k_1 \pm k_2 \pm \dots \pm k_{p-1}$ are the elements of the sum set
$$
\{k_1, -k_1\} + \{k_2, -k_2\} + \{k_3, -k_3\} + \dots + \{k_{p-1}, -k_{p-1}\}.
$$
Allowing $k_1$ to be negative is fine, since if we get $0$ with a negative $k_1$, we can flip every sign and get $0$ with a positive $k_1$.
By the Cauchy-Davenport theorem, when working in $\mathbb Z / p \mathbb Z$ for any prime $p$, we have $$|A+B| \ge \min\{|A|+|B|-1, p\}.$$ When $p$ is an odd prime and $x$ is not a multiple of $p$, $\{x, -x\}$ has size $2$, so by iterating the theorem with the sum set above, we have:
* $|\{k_1, -k_1\} + \{k_2, -k_2\}| \ge 2 + 2 - 1 = 3$.
* $|\{k_1, -k_1\} + \{k_2, -k_2\} + \{k_3, -k_3\}| \ge 3 + 2 - 1 = 4$.
* $|\{k_1, -k_1\} + \{k_2, -k_2\} + \{k_3, -k_3\} + \{k_4, -k_4\}| \ge 4 + 2 - 1 = 5$.
* ...
* $|\{k_1, -k_1\} + \{k_2, -k_2\} + \{k_3, -k_3\} + \dots + \{k_{p-1}, -k_{p-1}\}| \ge p$.
So all $p$ values are possible; in particular, $0$ is possible.
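For a small prime the conclusion can be verified exhaustively; below is a brute-force check for $p=5$ over all $4^4$ multisets, plus two spot checks (the composite case uses the question's all-ones construction):

```python
from itertools import product

def zero_reachable(ks, n):
    # can some assignment of signs make k_1 +/- k_2 +/- ... == 0 (mod n)?
    return any(sum(s * k for s, k in zip(signs, ks)) % n == 0
               for signs in product([1, -1], repeat=len(ks)))

# exhaustive check of the claim for the prime p = 5: every tuple of
# p - 1 = 4 nonzero residues admits a signed sum that is 0 mod 5
ok = all(zero_reachable(ks, 5) for ks in product(range(1, 5), repeat=4))
```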
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Bound in reflexive Banach space + monotonicity of sequence implies weak convergence of sequence? Let $V$ be a reflexive Banach space with a partial order relation $\leq.$ Furthermore, suppose it is a lattice. What further conditions does one need to have this property:
if $v_n$ is a bounded sequence in $V$ and $v_n \leq v_{n+1}$ for all $n$, then $v_n \rightharpoonup v$ in $V$.
Note that weak convergence is asked for the original sequence (not a subsequence). Does anyone have a reference?
| Let's ask for the following compatibility conditions:
* If a monotone net $x_\alpha$ has least upper bound $x$ (i.e. $x_\alpha$ converges in order to $x$), then $x_\alpha$ converges weakly to $x$.
* If a monotone net $x_\alpha$ converges weakly to $x$, then $x$ is the least upper bound of $x_\alpha$.
For example on $L^2(\Bbb R)$ the usual relation $f≤g$ satisfies these two conditions.
Then if you have a monotone bounded sequence $v_n$ you note that it has a limit point $v$ by Banach Alaoglu. By Eberlein Smulian you get a convergent sub-sequence $v_{n_k}$. By condition (2) the weak limit $v$ is a least upper bound of $v_{n_k}$. But the least upper bound of $v_{n_k}$ is a least upper bound of $v_n$ since $v_{n_k}$ is final in $v_n$. Hence $v$ is the least upper bound of the monotone sequence $v_n$ and by (1) $v_n$ converges to $v$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Convergence of series $\sum \arcsin\left(\frac{r_n \sin(\theta_n)}{\sqrt{1+2r_n\cos(\theta_n)+r_n^2}}\right)$ Let $r_n$ be a sequence of strictly positive numbers converging to $0^+$. Let $\theta_n$ be a sequence of values in $[0,\tfrac{\pi}{2}]$ which may or may not converge.
Consider the following sequence
$$
\beta_n = \arcsin\left(\frac{r_n \sin(\theta_n)}{\sqrt{1+2r_n\cos(\theta_n)+r_n^2}}\right)
$$
$\beta_n$ is a strictly positive sequence converging to zero. This is the imaginary part of $\log(1 + r_ne^{i\theta_n})$.
From $\theta\in[0,\tfrac{\pi}{2}]$ it follows that $0\leq \sin(\theta_n),\cos(\theta_n)\leq 1$. Let $\theta = \liminf_{n\to\infty}\sin(\theta_n)$. We then have
$$
0\leq \arcsin\left(\frac{r_n \theta}{1+r_n}\right) \leq \beta_n \leq \arcsin\left(\frac{r_n}{\sqrt{1+r_n^2}}\right).
$$
Since $\arcsin\left(\frac{x}{\sqrt{1+x^2}}\right)\sim x$, and $\arcsin\left(\frac{x\theta}{1+x}\right)\sim x\theta$ we have that
$$
0 \leq \theta \sum r_n \leq \sum \beta_n \leq \sum r_n,
$$
The inequalities hold "eventually" (for all sufficiently large $n$).
We only care about convergence, so we might as well multiply $\sum r_n$ with a larger/smaller constant.
My question is the following: I want to prove that $\sum \beta_n$ is convergent if and only if $\sum r_n$ is. Now, this is certainly not true when $\theta_n = 0$, since then $\beta_n \equiv 0$.
If $\liminf_{n\to\infty}\sin(\theta_n) \not=0$, it is true by the above proof. The only problem I have is when $\liminf_{n\to\infty}\sin(\theta_n) = 0$, but I can't find a counterexample, and I believe that the statement is true. We can also assume that all $\theta_n$ are non-zero.
If we allow $\theta_n$ to be both positive and negative around 0, then the statement is false.
| No; you need the liminf.
Let $$r_n=\begin{cases}
\frac{1}{n} & 2\mid n \\
\frac{1}{n^2} & 2\nmid n
\end{cases}$$ and $$\theta_n=\begin{cases}
0 & 2\mid n \\
\frac{1}{n} & 2\nmid n
\end{cases}$$ Then $$\beta_n=\begin{cases}
0 & 2\mid n \\
\sin^{-1}{\left(\frac{\sin{\frac{1}{n}}}{\sqrt{n^4+2n^2\cos{\frac{1}{n}}+1}}\right)} & 2\nmid n
\end{cases}$$ $\sum_n{r_n}$ does not converge, but since $\beta_n\sim\frac{1}{n^3}$ for large odd $n$, the sum $\sum_n{\beta_n}$ converges.
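A quick numerical check of the decay rate along odd $n$, computing $\beta_n$ directly from its definition with $r_n=1/n^2$ and $\theta_n=1/n$ (the sample values of $n$ are arbitrary):

```python
import math

def beta(r, theta):
    # imaginary part of log(1 + r*e^{i*theta}) as in the question
    return math.asin(r * math.sin(theta)
                     / math.sqrt(1 + 2 * r * math.cos(theta) + r ** 2))

# along odd n: r_n = 1/n^2, theta_n = 1/n; check that n^3 * beta_n -> 1
vals = [beta(1.0 / n ** 2, 1.0 / n) * n ** 3 for n in (101, 1001, 10001)]
```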
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to solve $x\cos(t)+y\sin(t)=1$ for $t$ When trying to find the points, $P_1$ and $P_2$, on a circle of radius $R$ such that the tangent line to those points passes through the point $P_0$, it was all simple geometry until I ran into $x\cos(t)+y\sin(t)=1$.
I found other questions solving the same circle problem, but they went about it in a different way so never ran into this expression.
Plugging into wolfram alpha gives $2\tan^{-1}(\frac{y+\sqrt{y^2+x^2-R^2}}{R+x})$ which works in practice here https://www.desmos.com/calculator/y8sz3kwjki, but I have no idea how it arrived at that solution. I assume there must be answers somewhere, but I couldn't find them for some reason.
| Hint
This is a well known problem. Divide everything by $\sqrt{x^2+y^2}$ and get
$$\frac{x}{\sqrt{x^2+y^2}}\cos(t)+\frac{y}{\sqrt{x^2+y^2}}\sin(t)=\frac{1}{\sqrt{x^2+y^2}}$$
Can you see the trigonometric meaning of $\frac{x}{\sqrt{x^2+y^2}}$ and $\frac{y}{\sqrt{x^2+y^2}}$?
For instance, set $\sin (a)=\frac{x}{\sqrt{x^2+y^2}}$; what happens now?
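One way to see where the Wolfram Alpha closed form comes from is the tangent half-angle substitution $u=\tan(t/2)$, which turns $x\cos t + y\sin t = R$ into a quadratic in $u$. This is my own sketch, assuming $x^2+y^2\ge R^2$ and $x\neq -R$:

```python
import math

def solve_t(x, y, r):
    # u = tan(t/2) turns x*cos(t) + y*sin(t) = r into
    # (r + x)*u^2 - 2*y*u + (r - x) = 0;
    # taking the "+" root reproduces the quoted formula
    u = (y + math.sqrt(x ** 2 + y ** 2 - r ** 2)) / (r + x)
    return 2 * math.atan(u)

t1 = solve_t(2.0, 1.0, 1.0)
t2 = solve_t(3.0, 4.0, 2.0)
```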
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4094934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving inequality (induction?) Let $a>0$, $n,p \in \mathbb{N}$, $p<n$
$\sqrt[n]{a^p} \leq 1+\frac{p}{n} (a-1)$
I tried to do it by induction, with the first step for $a=2, n=2, p=1$:
$\sqrt{2^1} \leq 1+ \frac{1}{2}(2-1)$. However, I do not know how to go on with the induction. I would appreciate help.
| Apply the AM-GM inequality:
$$
(x_1\cdot x_2\cdots x_n)^{\frac{1}{n}}\leq \frac{x_1+x_2+\ldots+x_n}{n}
$$
to $\Big(\underbrace{a\cdots a}_{p\text{ times}} \cdot \underbrace{1\cdots 1}_{(n-p)\text{ times}} \Big)$, which gives
$$
a^{\frac{p}{n}}\leq \frac{pa+n-p}{n}=\frac{p}{n}a+\frac{n-p}{n}=1+\frac{p}{n}(a-1).
$$
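A brute-force spot check of the inequality on a small grid of values (the grid is arbitrary; the small tolerance guards against floating-point rounding):

```python
# check a^(p/n) <= 1 + (p/n)*(a - 1) for a > 0 and 1 <= p < n
checks = [(a, n, p)
          for a in (0.1, 0.5, 2.0, 7.3)
          for n in (2, 3, 5, 10)
          for p in range(1, n)]
ok = all(a ** (p / n) <= 1 + (p / n) * (a - 1) + 1e-12 for a, n, p in checks)
```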
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\lim_{x \to 1} \frac{1}{x+2} = 1/3$ with $\varepsilon$-$\delta$ definition? I'm trying to use the $\varepsilon$-$\delta$ definition of a limit to prove that
$$
\lim_{x \to 1} \frac{1}{x+2} = 1/3.
$$
But I'm getting stuck on finding the correct $\delta$. Here is my try:
\begin{align*}
\lvert f(x) - L \rvert &< \varepsilon \\
\left\lvert \frac{1}{x+2} - \frac{1}{3} \right\rvert &< \varepsilon \\
\left\lvert \frac{x -1}{2 + x} \right\rvert &< 3\varepsilon.
\end{align*}
And then I'm not really sure what to do. How do you proceed from here?
| As the function is monotonic around $x=1$ (provided $\delta<3$), you can solve
$$\frac1{1+\delta+2}-\frac13=\pm\epsilon$$
which gives
$$\delta=\mp\frac{9\epsilon}{1\pm3\epsilon}.$$
Then take the smallest of the two absolute values,
$$\delta=\frac{9\epsilon}{1+3\epsilon},$$ which is the tightest possible.
For illustration (plot omitted): with $\epsilon=\dfrac1{10}$, this gives $\delta=\dfrac9{13}$.
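The formula $\delta=\dfrac{9\epsilon}{1+3\epsilon}$ can be verified by sampling points strictly inside $(1-\delta, 1+\delta)$ and checking $|f(x)-\tfrac13|<\epsilon$; the sampling density below is an arbitrary choice:

```python
def delta_for(eps):
    # the "tightest possible" delta from the answer
    return 9 * eps / (1 + 3 * eps)

def works(eps, samples=1000):
    d = delta_for(eps)
    f = lambda x: 1 / (x + 2)
    # sample points strictly inside (1 - delta, 1 + delta)
    pts = [1 - d + 2 * d * (k + 0.5) / samples for k in range(samples)]
    return all(abs(f(x) - 1 / 3) < eps for x in pts)
```

At the lower endpoint $x = 1-\delta$ the error equals $\epsilon$ exactly, confirming that this $\delta$ is tight.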
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Continuous on $[a,b]$ and $f'(x)>0$ on $(a,b)$ except for a $c$ in $(a,b)$ then $f$ is strictly increasing Prove that if $f(x)$ is continuous on $[a, b]$ and $f'(x)>0$ on $(a, b)$ except at a point
$c \in (a, b)$ where $f'(c)=0$, then $f$ is strictly increasing on $[a, b]$.
I can prove the case without the $c$ using MVT, but not sure how to approach this particular case.
| Let $a \leq x \lt c$. Then the Mean Value Theorem tells you $f(c)-f(x)=f'(t)(c-x)$ for some $t \in (x, c)$ and since $f'(t) \gt 0$ we therefore know that $f(c) - f(x) \gt 0 \Rightarrow f(c) \gt f(x)$. Similarly, for any $y \in (c, b], f(c) \lt f(y)$. You already know how to prove the function is strictly increasing in the intervals $(a, c)$ and $(c, b)$, so you're done.
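The canonical instance of this statement is $f(x)=x^3$ on $[-1,1]$: its derivative vanishes at $0$ yet $f$ is strictly increasing. A tiny numeric illustration on an arbitrary grid:

```python
# f(x) = x^3 has f'(0) = 0 but is strictly increasing
xs = [i / 100 for i in range(-100, 101)]      # grid on [-1, 1]
strictly_increasing = all(x ** 3 < y ** 3 for x, y in zip(xs, xs[1:]))
derivative_at_zero = 3 * 0.0 ** 2             # f'(x) = 3x^2 at x = 0
```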
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Divisibility of Mersenne numbers Is there a way to prove that $2$ is the only prime that never divides $2^n-1$ ? Obviously we can ignore all primes that are themselves of this form. Some other examples:
$$5\,|\,2^4-1 \qquad 9\,|\,2^6-1 \qquad 11\,|\,2^{10}-1 \qquad 13\,|\,2^{12}-1 \qquad 17\,|\,2^8-1$$
I checked for all $p<10^6$. Thanks in advance.
| You have noted that
$$
3\mid 2^2-1,\quad
5\mid 2^4-1,\quad
7\mid 2^6-1,\quad
11\mid 2^{10}-1,\quad
13\mid 2^{12}-1,\quad
17\mid 2^8-1
$$
and the last one seems an outlier, but since $2^{16}-1=(2^8-1)(2^8+1)$, you still have that $17\mid2^{16}-1$.
Now do you see something? The exponent is the prime number minus $1$.
OK, the conjecture might be that $p\mid2^{p-1}-1$ when $p$ is an odd prime. Does this ring a bell? What about any $a$ with $0<a<p$? Oh, yes, it holds that
$$
a^{p-1}-1\equiv 0\pmod{p}
$$
by Euler's theorem: if $\gcd(a,n)=1$, then $a^{\varphi(n)}\equiv1\pmod{n}$, where $\varphi$ is the totient function. For $p$ a prime, $\varphi(p)=p-1$.
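A small brute-force check of the pattern, using Python's three-argument `pow` for modular exponentiation (the ranges are arbitrary):

```python
def is_prime(n):
    # trial division; fine for this small range
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

odd_primes = [p for p in range(3, 2000) if is_prime(p)]
# the "prime minus 1" exponent pattern observed above
fermat_holds = all(pow(2, p - 1, p) == 1 for p in odd_primes)
```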
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Does a compact linear operator on an infinite dimensional Banach space have a bounded inverse? Suppose $X$ is an infinite dimensional Banach space, and $A$ is a compact linear operator from $X$ to $X$. If $A$ is invertible and $A^{-1}$ is bounded, then $A A^{-1}=I$ ($I$ is the identity operator on $X$) is compact, which is impossible according to Riesz's Lemma. In this way we prove that $A$ doesn't have a bounded inverse. Whereas on the other hand, if $A$ is invertible, then $A^{-1}$ is indeed bounded according to the Inverse Operator Theorem. What's wrong?
| A compact operator $A$ defined on an infinite-dimensional space does not have a bounded inverse: the image of the closed unit ball $B$ is contained in a compact set $C$, and if $A^{-1}$ were bounded (continuous), then $A^{-1}(C)$ would be compact; since $B\subset A^{-1}(C)$ and $B$ is closed, $B$ would be compact, a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Coefficient of $x^{12}$ in $(1+x^2+x^4+x^6)^n$ I need to find the coefficient of $x^{12}$ in the polynomial $(1+x^2+x^4+x^6)^n$.
I have reduced the polynomial to $\left(\frac{1-x^8}{1-x^2}\right)^n$ and tried binomial expansion and Taylor series, yet it seems too complicated to be worked out by hand.
What should I do?
| Your approach is also fine. In the following it is convenient to denote with $[x^k]$ the coefficient of $x^k$ in a series.
We obtain
\begin{align*}
\color{blue}{[x^{12}]}&\color{blue}{\left(\frac{1-x^8}{1-x^2}\right)^n}\\
&=[x^{12}](1-x^8)^n\sum_{j=0}^{\infty}\binom{-n}{j}\left(-x^2\right)^j\tag{1}\\
&=[x^{12}]\left(1-\binom{n}{1}x^8\right)\sum_{j=0}^{\infty}\binom{n+j-1}{j}x^{2j}\tag{2}\\
&=\left([x^{12}]-n[x^4]\right)\sum_{j=0}^{\infty}\binom{n+j-1}{j}x^{2j}\tag{3}\\
&\,\,\color{blue}{=\binom{n+5}{6}-n\binom{n+1}{2}}\tag{4}
\end{align*}
Comment:
* In (1) we use the binomial series expansion.
* In (2) we expand $(1-x^8)^n$ up to terms of $x^8$, since other terms do not contribute. We also use the binomial identity $\binom{-p}{q}=\binom{p+q-1}{q}(-1)^q$.
* In (3) we apply the rule $[x^p]x^qA(x)=[x^{p-q}]A(x)$.
* In (4) we select the coefficients of $x^k$ accordingly.
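The closed form can be cross-checked against a direct expansion of the polynomial by repeated convolution, truncated at degree 12 (the helper names are mine):

```python
from math import comb

def coeff_x12(n):
    # expand (1 + x^2 + x^4 + x^6)^n, keeping coefficients up to x^12
    base = [1, 0, 1, 0, 1, 0, 1]              # 1 + x^2 + x^4 + x^6
    poly = [1]
    for _ in range(n):
        new = [0] * min(len(poly) + 6, 13)
        for i, c in enumerate(poly):
            for j, b in enumerate(base):
                if b and i + j < len(new):
                    new[i + j] += c * b       # b is 0 or 1
        poly = new
    return poly[12] if len(poly) > 12 else 0

def formula(n):
    return comb(n + 5, 6) - n * comb(n + 1, 2)
```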
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
Generalization of homotopy lifting property To paraphrase the normal homotopy lifting property, it states that given a covering map $\pi:E\to X$, an interval $I=[0,1],$ a homotopy $f:Y\times I\to X,$ and a lift $\widetilde{f}_0:Y\to E$ of $f_0:Y\to X$ (where $f_0(y)=f(y,0)$) so that $\pi\circ \widetilde{f}_0=f_0,$ there exists a homotopy $\widetilde{f}:Y\times I \to E$ lifting $f$ so that $\pi\circ \widetilde{f}=f$.
I am wondering if there is a generalization of this property where the interval is replaced by a path connected space $T$ with a basepoint $t_0,$ so that given a "homotopy" $f:Y\times T\to X$ and a lift $\widetilde{f}_{t_0}:Y\to E$ of $f_{t_0}:Y\to X$, there exists a "homotopy" $\widetilde{f}:Y\times T \to E$ that lifts $f:Y\times T\to X$.
| Path-connected won't work : otherwise take $Y=*$, then you would be claiming that any continuous function $T\to X$ lifts to $E$, which is known to be wrong.
However, if you require $T$ to be simply-connected and locally path-connected, then the answer will be yes, at least under the usual hypotheses of covering space theory.
Namely, here's a nice generalization of what you're looking for :
Assume $X$ is a nice space. Let $f: Y\to X$ be a continuous map with $Y$ path-connected and locally path-connected, $f(y)=x$ and $x_0\in E$ a lift of $x$. Then there is a lift to $E$ sending $y$ to $x_0$ if and only if the image $f_*\pi_1(Y,y)$ is contained in the image $\pi_*\pi_1(E,x_0)$; and if it is, the lift is unique.
If $T$ is simply-connected, then the image of $\pi_1(Y\times T)$ will be the same as that of $Y$ and so your assumption implies that there is a unique lift on $Y\times T$ having the desired value on $(y,t_0)$. Note that composing with the inclusion at $t_0$, $Y\to Y\times T$ provides a lift of $f_{t_0}$, so it must be $\tilde f_{t_0}$ by uniqueness.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4095905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let $(X,S,\mu)$ is a measure space and $\mu(X)<\infty$. Define $d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ is a metric on the space of measurable functions Let $(X,S,\mu)$ is a measure space and $\mu(X)<\infty$. Define $d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ is a metric on the space of measurable functions.
My work-
symmetry,
let's show $d(f,g)=d(g,f)$
$d(f,g)=\int\frac{|f-g|}{1+|f-g|}d\mu$ and $d(g,f)=\int\frac{|g-f|}{1+|g-f|}d\mu$
It's obvious that, $d(f,g)=d(g,f)$
My concern is triangular inequality,
What I want to show is,
$d(f,h)\leq d(f,g)+d(g,h)$
so the right hand side becomes,
$\int\frac{|f-g|}{1+|f-g|}d\mu+\int\frac{|g-h|}{1+|g-h|}d\mu= \int\frac{|f-g|}{1+|f-g|}+\frac{|g-h|}{1+|g-h|}d\mu$
I don't think simplifying the right-hand side will give the result easily. Can someone give me a hint? Thank you in advance
| The only thing missing is to check that
$$
\rho(x,y)=\frac{|x-y|}{1+|x-y|}
$$
is a metric on $\mathbb{R}$. Here is a simple proof:
Consider the function
$$ f(t)=\frac{t}{1+t}, \qquad t\geq0$$
notice that $f(t)=0$ iff $t=0$, $f$ is monotone nondecreasing on $[0,\infty)$, and that $\rho(x,y)=f(|x-y|)$.
That $\rho$ satisfies the triangle inequality is a consequence of
$$ f(t+s)\leq f(t)+f(s),\qquad t,s\geq0$$
which follows from
\begin{align}
f(t+s)&=\frac{s+t}{1+s+t}=\frac{t}{1+t}\frac{1+t}{1+t+s} +\frac{s}{1+s}\frac{1+s}{1+t+s}\\
&\leq \frac{t}{1+t}+\frac{s}{1+s}
\end{align}
Though this is not needed for the conclusion of your problem, it is worthwhile noticing that on any metric space $(S,d)$, any continuous monotone nondecreasing function $\phi:[0,\infty)\rightarrow[0,\infty)$ such that
*
*$\phi(t)=0$ iff $t=0$,
*$\phi(t+s)\leq \phi(t)+\phi(s)$ (subadditive)
induces another metric on $S$
$$\rho_\phi(x,y)=\phi(d(x,y))$$
that is equivalent to $d$.
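As a numerical sanity check (a sketch of my own, separate from the proof), the subadditivity of $f(t)=t/(1+t)$ and the resulting triangle inequality for $\rho$ can be tested pointwise in plain Python:

```python
import itertools
import random

def rho(x, y):
    # rho(x, y) = |x - y| / (1 + |x - y|), the bounded metric on the reals
    return abs(x - y) / (1 + abs(x - y))

def f(t):
    # f(t) = t / (1 + t), monotone nondecreasing and subadditive on [0, inf)
    return t / (1 + t)

random.seed(0)

# subadditivity: f(s + t) <= f(s) + f(t)
for _ in range(1000):
    s, t = random.uniform(0, 100), random.uniform(0, 100)
    assert f(s + t) <= f(s) + f(t) + 1e-12

# triangle inequality: rho(x, z) <= rho(x, y) + rho(y, z)
pts = [random.uniform(-10, 10) for _ in range(10)]
for x, y, z in itertools.product(pts, repeat=3):
    assert rho(x, z) <= rho(x, y) + rho(y, z) + 1e-12
```

Since $d(f,g)=\int \rho(f,g)\,d\mu$ and integration preserves pointwise inequalities, the triangle inequality for $d$ follows from the pointwise one checked above.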
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What are the possible real values of $\frac{1}{x} + \frac{1}{y}$ given $x^3 +y^3 +3x^2y^2 = x^3y^3$? Let $x^3 +y^3 +3x^2y^2 = x^3y^3$ for $x$ and $y$ real numbers different from $0$.
Then determine all possible values of $\frac{1}{x} + \frac{1}{y}$
I tried to factor this polynomial, but there are no clear factors.
| A solution proceeds as follows: Denote $t:=(1/x)+(1/y)$. Observe that $$t^3=\left(\frac{1}{x}+\frac{1}{y}\right)^3=\frac{1}{x^3}+\frac{1}{y^3}+\frac{3}{xy}\left(\frac{1}{x}+\frac{1}{y}\right)=\frac{1}{x^3}+\frac{1}{y^3}+\frac{3}{xy}\cdot t.$$ For the given equation, you may divide $x^3y^3$ on both sides, which gives $$1=\frac{1}{x^3}+\frac{1}{y^3}+\frac{3}{xy}.$$ Subtracting these two formulas, we can obtain $$t^3-1=\frac{3(t-1)}{xy}.$$ If $t=1$, this formula holds definitely. If $t\neq 1$, then $$t^3-1=\frac{3(t-1)}{xy}\implies t^2+t+1=\frac{3}{xy}\implies\frac{1}{x}\cdot\frac{1}{y}=\frac{t^2+t+1}{3}.$$ If you regard $u:=1/x$ and $v:=1/y$, then $u$ and $v$ are solutions to the quadratic equation $$z^2-tz+\frac{t^2+t+1}{3}=0.$$ Since $x$ and $y$ are real numbers, both $u$ and $v$ are also real numbers. This implies that the discriminant of this quadratic equation is non-negative. Thus $$\Delta=t^2-\frac{4(t^2+t+1)}{3}=-\frac{(t+2)^2}{3}\geq 0, $$ hence $t=-2$ if $t\neq 1$.
In general, $t=-2$ or $t=1$.
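Both values are in fact attained, which can be checked numerically; here is a small Python sketch (the sample points are my own choices): $x=\tfrac12,\ y=-1$ realizes $t=1$, and $x=y=-1$ realizes $t=-2$.

```python
def satisfies(x, y, tol=1e-9):
    # check the constraint x^3 + y^3 + 3 x^2 y^2 = x^3 y^3
    return abs(x**3 + y**3 + 3 * x**2 * y**2 - x**3 * y**3) < tol

# t = 1: any pair with 1/x + 1/y = 1 works, e.g. x = 1/2, y = -1
x, y = 0.5, -1.0
assert satisfies(x, y)
assert abs(1 / x + 1 / y - 1.0) < 1e-9

# t = -2: the discriminant argument forces 1/x = 1/y = -1, i.e. x = y = -1
x, y = -1.0, -1.0
assert satisfies(x, y)
assert abs(1 / x + 1 / y + 2.0) < 1e-9
```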
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
If $A$ and $B$ both have defined asymptotic density, and $A\cap B=\emptyset$, does $A \cup B$? Let $A \subseteq \mathbb N$. Define the asymptotic density of $A$ as $$d(A) = \lim_{n\to \infty}\frac {|A \cap \{1,...,n\}|}{n}$$
If $A\cap B=\emptyset$, $d(A)$ and $d(B)$ are defined, is $d(A\cup B)$ defined too?
I know someone has asked a similar question here: If two sets have a natural (asymptotic) density, does their union?
But in that thread, $A$ and $B$ might not be disjoint.
| Note that
$$
\begin{eqnarray}d(A\cup B)&=&\lim_{n\rightarrow\infty}\frac{\left|(A\cup B)\cap [n]\right|}{n}\\&=&\lim_{n\rightarrow\infty}\frac{\left|A\cap[n]\right|+\left|B\cap[n]\right|-\left|(A\cap B)\cap[n]\right|}{n}\\
&=&\lim_{n\rightarrow\infty}\frac{\left|A\cap[n]\right|}{n}+\lim_{n\rightarrow\infty}\frac{\left|B\cap[n]\right|}{n}-\lim_{n\rightarrow\infty}\frac{\left|A\cap B \cap[n]\right|}{n}\\
&=&d(A)+d(B)-d(A\cap B)
\end{eqnarray}
$$
(where $[n]\equiv\{1,2,\ldots,n\}$), assuming that the limits on the right-hand side converge.
So if $A\cap B=\emptyset$ (or even under the weaker condition that $d(A\cap B)=0$), then $$d(A\cup B)=d(A)+d(B).$$
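A quick numerical illustration in Python (my own example, not from the answer): take $A$ the even numbers and $B$ the odd multiples of $3$, which are disjoint; the partial counts, and hence the partial densities, add exactly at every $n$.

```python
def partial_density(pred, n):
    # |A ∩ {1, ..., n}| / n for the set A described by the predicate `pred`
    return sum(1 for k in range(1, n + 1) if pred(k)) / n

even = lambda k: k % 2 == 0                       # d(A) = 1/2
odd_mult3 = lambda k: k % 3 == 0 and k % 2 == 1   # d(B) = 1/6, disjoint from A

n = 100_000
dA = partial_density(even, n)
dB = partial_density(odd_mult3, n)
dAB = partial_density(lambda k: even(k) or odd_mult3(k), n)

# disjointness makes the counts (not just the limiting densities) add at every n
assert abs(dAB - (dA + dB)) < 1e-12
assert abs(dA - 0.5) < 1e-3 and abs(dB - 1 / 6) < 1e-3
```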
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does $(A'MA)^{-1}A'M=(A'A)^{-1}A'$ for all nonsingular $M$ imply $A$ is nonsingular? Is the following statement true?
Suppose $A$ is a real matrix and $A'A$ is nonsingular. Suppose that for every nonsingular real matrix $M$, $(A'MA)^{-1}A'M=(A'A)^{-1}A'$, then $A$ is square and nonsingular.
| I eventually came up with an alternative proof to the ones provided. Let $A$ be $n\times m$. We know $A'A$ is a nonsingular $m\times m$ matrix, so (as pointed out in other answers) $m=rank(A'A)\leq rank(A)\leq n$. So if $m<n$ then $A'$ has a non-trivial null space. Let $v$ be some non-zero element of the null space, i.e. $A'v=0$; then $(A'A)^{-1}A'v=0$, and so for every nonsingular $M$ we have $(A'MA)^{-1}A'Mv=0$ and thus $A'Mv=0$. Thus if $G$ and $H$ are nonsingular $n\times n$ matrices, $(G+H)v$ is in the null space of $A'$. Now, since $v$ is non-zero, for any vector $l$ there is a matrix $K$ with $Kv=l$ (just set the $k^{th}$ row of $K$ equal to $l_k v'/(v'v)$, where $l_k$ is the $k^{th}$ entry of $l$). Any $n\times n$ matrix can be written as a sum of two nonsingular matrices, so it follows that $l$ is also in the null space. But then every vector is in the null space of $A'$, which means $A'$ is zero, and we have a contradiction. So $A$ is square and (as has already been pointed out) therefore nonsingular.
Edit: wrote $B$ instead of $A$.
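To see the null-space argument in action, here is a small NumPy sketch (my own illustration, with a hypothetical choice of $A$ and $M$): for a non-square $A$ with nonsingular $A'A$, one can pick a nonsingular $M$ that moves the null space of $A'$, and the identity $(A'MA)^{-1}A'M=(A'A)^{-1}A'$ fails.

```python
import numpy as np

# A is 2x1 with full column rank, so A'A = [[1]] is nonsingular, but A is not square
A = np.array([[1.0], [0.0]])

# baseline expression (A'A)^{-1} A'
base = np.linalg.solve(A.T @ A, A.T)         # [[1, 0]]

# a nonsingular M that maps v = (0, 1)', a null vector of A', off the null space
M = np.array([[1.0, 1.0], [0.0, 1.0]])
lhs = np.linalg.solve(A.T @ M @ A, A.T @ M)  # [[1, 1]]

# the claimed identity fails for this M, as the proof predicts
assert not np.allclose(lhs, base)
```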
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to solve this integral with sigma notation? Let $$f(x) = \sum_{n=1}^{N}a_{n}\sin(nx).$$
For a positive integer $m \le N$, show the following:
(a) $\frac{1}{\pi}\int^{\pi}_{-\pi} f(x)\sin(mx)\,dx = a_m$
(b) $\frac{1}{\pi}\int^{\pi}_{-\pi} f(x)^2\,dx = \sum_{n=1}^{N} a_n^2$
I've never even had a first step for this one, even after hours of trying to understand it. I asked my professor, and she told me it involves solving integrals. I tried again another day, but ended up with nothing. Any help would be deeply appreciated.
Edit:
I tried to solve it, but still couldn't figure out an answer for a and not sure about the answer of b
(a)
$$\frac{1}{\pi}\int^{\pi}_{-\pi}\sum_{n=1}^{N}a_{n}\sin(nx)\sin(mx)dx\\
\frac{1}{\pi}\sum_{n=1}^{N}a_{n}\int^{\pi}_{-\pi}\sin(nx)sin(mx)dx\\
\frac{1}{\pi}\sum_{n=1}^{N}a_{n}\int^{\pi}_{-\pi}\frac{1}{2} \cos(nx-mx)-\cos(nx+mx)dx\\
\frac{1}{2\pi}\sum_{n=1}^{N}a_{n}\int^{\pi}_{-\pi} \cos(nx-mx)-\cos(nx+mx)dx\\
\int^{\pi}_{-\pi} \cos(nx-mx)dx-\int^{\pi}_{-\pi}\cos(nx+mx)dx\\
\frac{1}{2\pi}\sum_{n=1}^{N}a_{n}\left[\frac{\sin((n-m)\pi)}{n-m}-\frac{\sin((n-m)(-\pi))}{n-m}-\frac{\sin((n+m)\pi)}{n+m}+\frac{\sin((n+m)(-\pi))}{n+m}+C\right]\\
\frac{1}{2\pi}\sum_{n=1}^{N}a_{n}[0-0-0+0+C]\\0$$
When I solve it like this, all of the cases result in 0, which does not match the equality stated in the problem.
(b)
$$\frac{1}{\pi}\int^{\pi}_{-\pi}\sum^{N}_{n=1}a^2_n\sin^2(nx)dx\\
\frac{1}{\pi}\sum^{N}_{n=1}a^2_n\int^{\pi}_{-\pi}\sin^2(nx)dx\\
\int^{\pi}_{-\pi}\sin^2(nx)dx\\
\frac{1}{2}\left(\pi-\frac{\sin(2n\pi)}{2n}\right)-\frac{1}{2}\left(-\pi-\frac{\sin(-2n\pi)}{2n}\right)\\
(\frac{\pi}{2}-0)-(\frac{-\pi}{2}-0)\\
\frac{1}{\pi}\sum^{N}_{n=1}a^2_n\pi\\
\sum^{N}_{n=1}a^2_n$$
| Hint
Regarding (a), as the integral is linear, it is sufficient to compute
$$\int_{-\pi}^\pi \sin nx \sin mx \ dx$$ for $m,n \in \mathbb Z$. Which can easily be done using the trigonometric formula
$$\sin a \sin b = \frac{1}{2}\left(\cos (a-b) - \cos (a+b)\right)$$
For part (b), use part (a) expanding $f^2(x) = f(x) \cdot f(x)$.
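Both parts can be sanity-checked numerically; a plain-Python sketch (the midpoint-rule helper and the sample coefficients are my own, not part of the answer):

```python
import math

def integral(g, a, b, n=20000):
    # midpoint-rule quadrature; very accurate for smooth periodic integrands
    h = (b - a) / n
    return h * sum(g(a + (k + 0.5) * h) for k in range(n))

# part (a): (1/pi) * ∫ sin(nx) sin(mx) dx over [-pi, pi] is 1 if n == m, else 0
for n in range(1, 4):
    for m in range(1, 4):
        val = integral(lambda x: math.sin(n * x) * math.sin(m * x),
                       -math.pi, math.pi) / math.pi
        assert abs(val - (1.0 if n == m else 0.0)) < 1e-6

# part (b): (1/pi) * ∫ f(x)^2 dx = sum of a_n^2 for f(x) = Σ a_n sin(nx)
a = [0.5, -1.0, 2.0]
f = lambda x: sum(an * math.sin((k + 1) * x) for k, an in enumerate(a))
val = integral(lambda x: f(x) ** 2, -math.pi, math.pi) / math.pi
assert abs(val - sum(an * an for an in a)) < 1e-6
```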
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$AB=BA$ from $e^{A+B} = e^A e^B$, given Hermitian matrices Let $A$ and $B$ be Hermitian matrices.
*
*If $AB=BA$, we know that $e^{A+B} = e^A e^B$.
*In this paper, the author showed that $\text{Tr } e^{A+B} = \text{Tr } e^A e^B$ iff $AB=BA$.
As such, $e^{A+B} = e^A e^B$ is equivalent to $\text{Tr } e^{A+B} = \text{Tr } e^A e^B$ in the context of Hermitian matrices.
My question is how we can derive the commutation relation between $A$ and $B$ directly from $e^{A+B}=e^A e^B$ without bringing in the Golden-Thompson inequality (as in the paper I linked). Since the condition $e^{A+B} = e^A e^B$ has a simpler form than that involving the trace, I think there should be some way.
Edit: rephrase the question
| The idea here is that $A+B$ is Hermitian and the exponential map preserves Hermiticity. Taking the conjugate transpose of each side, we have
$e^Ae^B = e^{A+B} = \big(e^{A+B}\big)^*=\big(e^B\big)^*\big(e^A\big)^*=e^Be^A$
so $e^A$ and $e^B$ commute.
Now call on a lemma twice:
for Hermitian $X,Y$
$e^XY= Ye^X$
iff $XY=YX$
proof sketch: the same unitary matrix $U$ that simultaneously diagonalizes $e^X$ and $Y$ must diagonalize $X$ as well, since all are Hermitian; and the same argument runs backwards. (Underlying idea: the exponential map is injective on the reals, and Hermitian matrices are diagonalizable with real spectrum. So $e^X \mathbf v = \sigma \cdot \mathbf v\implies X\mathbf v = \log(\sigma)\cdot \mathbf v$ and of course $X \mathbf v = \lambda \cdot \mathbf v\implies e^X\mathbf v = e^{\lambda}\cdot \mathbf v$)
after applying the lemma once, with $Y:=e^B$, $X=A$, we know $Ae^B = e^BA$
and a 2nd application of the lemma, with $Y:=A$ and $X:= B$, tells us $AB = BA$
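A NumPy illustration of both directions (a sketch of my own; `expm_h` is a hypothetical helper computing the Hermitian matrix exponential through the spectral decomposition, not a library routine):

```python
import numpy as np

def expm_h(H):
    # exponential of a Hermitian matrix via H = V diag(w) V*, so e^H = V diag(e^w) V*
    w, V = np.linalg.eigh(H)
    return (V * np.exp(w)) @ V.conj().T

A = np.array([[1.0, 2.0], [2.0, -1.0]])   # Hermitian (real symmetric)
B = 3.0 * A + 2.0 * np.eye(2)             # a polynomial in A, so AB = BA

assert np.allclose(A @ B, B @ A)
assert np.allclose(expm_h(A + B), expm_h(A) @ expm_h(B))

C = np.array([[0.0, 1.0], [1.0, 1.0]])    # Hermitian, but AC != CA
assert not np.allclose(A @ C, C @ A)
# e^A e^C is then not even Hermitian, while e^(A+C) is, so equality must fail
assert not np.allclose(expm_h(A + C), expm_h(A) @ expm_h(C))
```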
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4096769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |