Difference of two stopping times Let $(X_{n})_{n \geq 0}$ be a sequence of random variables and $\tau,t$ stopping times with respect to the sequence $(X_{n})_{n \geq 0}$. First, $\tau+t$ is a stopping time:
$$\begin{align*}\{\tau+t =n\} = \{\tau+t = n\} \cap \{t \leq n\} &= \bigcup_{k=0}^n \{\tau+t = n\} \cap \{t = k\} \\ &= \bigcup_{k=0}^n \{\tau=n-k\} \cap \{t=k\}. \end{align*}$$
As $\{\tau =n-k\} \in \mathcal{F}_{n-k} \subseteq \mathcal{F}_n$ and $\{t = k\} \in \mathcal{F}_k \subseteq \mathcal{F}_n$ for any $k \leq n$, this implies that $\{\tau+t=n\} \in \mathcal{F}_n$, and so $\tau+t$ is a stopping time.
Now, my question is the following. Assume I am considering $\tau-t$. I know that in general $\tau-t$ is not a stopping time. However, consider my birthday this year: it is a deterministic stopping time, so at any time I know exactly when it occurs. In particular, I also know the time two days before my birthday, i.e. $\tau-2$. What kind of argument will show that $\tau-2$ is indeed a stopping time in this setting?
| If $\tau$ is deterministic, $\tau-2$ is deterministic. As long as $\tau-2 \ge 0$, it is a stopping time because $\{\tau-2 = n\} = \emptyset$ or $\{\tau-2 = n\} = \Omega$ for all $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is the set of all underdetermined matrices A that solve Ax = b (for a given x and b) dense? Let's say I have two fixed vectors $x \in \mathbb{R}^{n}$ and $b \in \mathbb{R}^{m}$, where $m < n$. There are an infinite number of matrices $A \in \mathbb{R}^{m \times n}$ that can solve this underdetermined system. My question is: are these solutions dense over the set of all matrices in $\mathbb{R}^{m \times n}$?
Please be gentle, I am not very mathematical.
| The set of $m\times n$ matrices is a vector space, and the map $A\mapsto Ax$ is linear. Hence the set of solutions (in $A$) of $Ax=b$ is an affine subspace of the set of $m\times n$ matrices, and it is not dense, unless it is the whole space (which is the case iff $x=0$ and $b=0$).
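As a concrete numerical illustration of the affine-subspace structure (the particular solution $A_0 = bx^\top/(x^\top x)$ and the row-projection used to build a second solution are my own choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
x = rng.standard_normal(n)
b = rng.standard_normal(m)

# One particular solution: A0 x = b (x.x)/(x.x) = b.
A0 = np.outer(b, x) / (x @ x)

# Any two solutions differ by a matrix N with N x = 0, so the solution
# set is the affine subspace A0 + {N : N x = 0}.
M = rng.standard_normal((m, n))
N = M - np.outer(M @ x, x) / (x @ x)   # make each row orthogonal to x
A1 = A0 + N

# Affine combinations of solutions are again solutions.
A2 = 0.7 * A0 + 0.3 * A1
residuals = [np.linalg.norm(A @ x - b) for A in (A0, A1, A2)]

# ...but a generic nearby matrix is NOT a solution, reflecting that the
# solution set is a proper affine subspace, hence not dense.
A_pert = A0 + 0.1 * rng.standard_normal((m, n))
pert_residual = np.linalg.norm(A_pert @ x - b)
```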
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Proof for a nice conjecture related to circles I have made a conjecture but am unable to prove it.
If $P$ and $Q$ are any two points at an equal given distance from a given point $C$, subtending a fixed angle at another given point $O$ (where $OC<PC=QC$), then the maximum and minimum of the distance $PQ$ occur when $P$ and $Q$ lie on rays equally inclined to the line through $O$ perpendicular to $OC$.
Clearly, the minimum case would be in the smaller segment and maximum in the larger segment. Also note that if $P$ and $Q$ subtend an angle $2x$ at $O$, then each of the rays will be inclined at $90°-x$ to the line perpendicular to $OC$.
| Consider the circle $a$ through $OPQ$ (violet in figure below). If this circle is not tangent to circle $c$ of center $C$ passing through $O$ (dashed), then on $c$ there are points both inside $a$ (as $O'$ in the figure) and outside $a$ (as $O''$). But then
$$
\angle PO''Q<\angle POQ<\angle PO'Q
$$
and from that it follows that it's possible to construct an angle of vertex $O'$ equal to $\angle POQ$ but subtending an arc on $a$ smaller than $PQ$, and it's possible to construct an angle of vertex $O''$ equal to $\angle POQ$ but subtending an arc on $a$ larger than $PQ$.
But the same angles can be constructed from vertex $O$ (just rotate them about $C$), hence if $a$ is not tangent to $c$ then point $O$ cannot subtend a minimum or maximum arc.
It follows that point $O$ subtends a minimum or maximum arc only if circle $POQ$ is tangent to circle $c$, which is equivalent to the thesis to be proved.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4303894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Prove: $x$ is an isolated point of $F$ iff $F − \{x\}$ is still closed. I have a question about the following statement:
Let $F$ be closed and $x\in F$. Then $x $ is an isolated point of $F$
if and only if $F − \{x\}$ is still closed.
To show that it's closed, I've shown the missing inclusion, as the other inclusion is obvious. But for the converse, assuming the set is closed, I can't see how to show that $x$ is an isolated point.
| A set is closed iff it contains all of its limit points, i.e. $\bar{A}=A$.
Since $x$ is isolated, there exists a nbd $U$ such that $U\cap (F\setminus\{x\})=\emptyset$.
Hence $x$ is not a limit point of $F$.
So if $F$ is closed then $x\notin F'$ and $F'\subset F$. Hence $F'=F'\setminus\{x\} = (F\setminus\{x\})'\subset F\setminus\{x\}$, so $F\setminus\{x\}$ is closed. ($A'$ denotes the derived set, i.e. the set of limit points.)
Conversely, let $F\setminus\{x\}$ be closed, and suppose $x$ were a limit point of $F$. Then $x$ would also be a limit point of $F\setminus\{x\}$, since
for every open nbd $U$ of $x$, $U\cap (F\setminus\{x\})= U\cap ((F\setminus\{x\})\setminus\{x\})\neq \emptyset$. Hence $F\setminus\{x\}$ would be a closed set which does not contain its limit point $x$, a contradiction. Hence $x$ is not a limit point of $F$, and so $x$ is an isolated point of the set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4304076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Work done by $\vec F=\sin(x^2)\hat x+(3x-y)\hat y$ needs problematic integral? We attempt to find the work done by the force
$$\vec F=\sin(x^2)\hat x+(3x-y)\hat y$$
in moving a particle from $O\to A=(3,0)\to B=(0,4)$ in straight lines. This is rather problematic as far as I can tell. How can I evaluate the integral of $\sin(x^2)$? I assume I am not meant to obtain a number as this integral is a special function. For example I obtain the following for $O\to A$
$$\int_0^1\sin(3t^2)dt=\left[\sqrt{\frac{\pi}{6}}\operatorname{S}\left(\sqrt\frac{6}{\pi}x\right)\right]_0^1$$
Is this a legitimate answer to give? This is an exam style question so I am not sure.
| The force can be decomposed into two forces:
$$
F_1=(\sin(x^2),-y)\,\quad F_2=(0,3x)
$$
The work can thus be computed in two parts. The work done by $F_2$ is very easy.
To handle $F_1$, note that
$$
\textrm{curl } F_1=0\;
$$
This condition should remind you of Green's theorem. It follows that you can calculate the work done by $F_1$ using the straight-line path from $O$ to $B$ instead, where you don't need to worry about the $\sin(x^2)$ term anymore (on that segment $x=0$ and $dx=0$).
To summarize,
$$
\int_{O\to A\to B} F\cdot dr = \int_{O\to B}F_1\cdot dr+\int_{O\to A\to B} F_2\cdot dr.
$$
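To see the claim numerically, here is a hedged sketch (the parametrizations and use of `scipy.integrate.quad` are mine): since $\operatorname{curl}F_1=0$, the work of $F_1=(\sin(x^2),-y)$ along $O\to A\to B$ should equal its work along the straight segment $O\to B$, which is simply $\int_0^4(-y)\,dy=-8$.

```python
import numpy as np
from scipy.integrate import quad

# F1 = (sin(x^2), -y); O = (0,0), A = (3,0), B = (0,4)

# O -> A: (x, y) = (x, 0), x from 0 to 3
w_OA = quad(lambda x: np.sin(x**2), 0, 3)[0]

# A -> B: (x, y) = (3 - 3t, 4t), dx = -3 dt, dy = 4 dt, t in [0, 1]
w_AB = quad(lambda t: np.sin((3 - 3*t)**2) * (-3) + (-(4*t)) * 4, 0, 1)[0]

# O -> B: (x, y) = (0, 4t), dx = 0, dy = 4 dt
w_OB = quad(lambda t: -(4*t) * 4, 0, 1)[0]   # = -8 exactly

work_via_OAB = w_OA + w_AB   # should match w_OB since curl F1 = 0
```

The two $\sin$ integrals cancel under the substitution $u=3-3t$, which is the path-independence at work.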
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4304409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the definition of commutant for an abstract von Neumann algebra? Let's suppose I have an abstract (not concrete, i.e. not acting on a Hilbert space) von Neumann algebra $\mathcal{M}$. Is there a notion of commutant for it? Because to my knowledge (I'm reading "An Invitation to von Neumann Algebras" by Sunder) the commutant is defined for sets of operators acting on a Hilbert space.
Any comment will be helpful.
| There isn't, as far as I know; the commutant is strongly dependent on the representation. There is a canonical way of representing a von Neumann algebra (the standard form), but there are many others.
Here is an example of how extreme the situation can be. Let $R\subset B(H)$ be the hyperfinite II$_1$ factor, represented in its standard form. Then
* $R'$ is a II$_1$-factor.
Now let $R_1=R\otimes I\subset B(H\otimes H)$. Then $R_1$ is still the hyperfinite II$_1$-factor, but now
* $R_1'=R'\otimes B(H)$, a II$_\infty$-factor.
To make things more extreme, let $\pi:R\to B(K)$ be an irreducible representation. Such $\pi$ exists, for instance because we can take $\phi$ a pure state on $R$, and let $\pi$ be its GNS representation. Because $R$ is simple, $\pi$ is faithful. So $R_2=\pi(R)$ is the hyperfinite II$_1$-factor. And because $\pi$ is irreducible,
* $R_2'=\mathbb C$.
Note that $R_2\subset B(K)$ is not a von Neumann algebra. Also $K$ is known to be non-separable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4304598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Power series representation implies holomorphic I am not understanding a step in a proof from Rudin's Real and Complex Analysis. I am wondering why it is that $$\left[\frac{z^{n}-w^{n}}{z-w}-n w^{n-1}\right]=(z-w) \sum_{k=1}^{n-1} k w^{k-1} z^{n-k-1}$$
when $n\ge 2$. The details to obtain this equality are not apparent to me. Thank you!
10.6 Theorem If $f$ is representable by power series in $\Omega$, then $f \in H(\Omega)$ and $f^{\prime}$ is also representable by power series in $\Omega$. In fact, if
$$
f(z)=\sum_{n=0}^{\infty} c_{n}(z-a)^{n}
$$
for $z \in D(a ; r)$, then for these $z$ we also have
$$
f^{\prime}(z)=\sum_{n=1}^{\infty} n c_{n}(z-a)^{n-1}
$$
PROOF If the series $(1)$ converges in $D(a ; r)$, the root test shows that the series (2) also converges there. Take $a=0$, without loss of generality, denote the sum of the series $(2)$ by $g(z)$, fix $w \in D(a ; r)$, and choose $\rho$ so that $|w|<\rho<r .$ If $z \neq w$, we have
$$
\frac{f(z)-f(w)}{z-w}-g(w)=\sum_{n=1}^{\infty} c_{n}\left[\frac{z^{n}-w^{n}}{z-w}-n w^{n-1}\right]
$$
The expression in brackets is 0 if $n=1$, and is
$$
(z-w) \sum_{k=1}^{n-1} k w^{k-1} z^{n-k-1}
$$
if $n\ge 2$.
| $$\begin{align}(z-w)\sum_{k=1}^{n-1}kw^{k-1}z^{n-k-1}+nw^{n-1}&=\sum_{k=1}^{n-1}kw^{k-1}z^{n-k}-\sum_{k=1}^{n-1}kw^{k}z^{n-k-1}+nw^{n-1}\\
&=\sum_{k=1}^{n}kw^{k-1}z^{n-k}-\sum_{k=1}^{n-1}kw^{k}z^{n-k-1}\\
&=\sum_{k=1}^{n}kw^{k-1}z^{n-k}-\sum_{k=1}^{n}(k-1)w^{k-1}z^{n-k}\\
&=\sum_{k=1}^{n}w^{k-1}z^{n-k}\\
&=\frac{z^n-w^n}{z-w}\end{align}$$
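One can also let a CAS confirm the bracketed identity for the first several $n$ (a quick sanity check, not a proof):

```python
import sympy as sp

z, w = sp.symbols('z w')

# Difference between the two sides of the identity, for n = 2..8;
# sp.cancel clears the (z - w) denominator exactly.
residues = []
for n in range(2, 9):
    lhs = (z**n - w**n) / (z - w) - n * w**(n - 1)
    rhs = (z - w) * sum(k * w**(k - 1) * z**(n - k - 1) for k in range(1, n))
    residues.append(sp.cancel(lhs - rhs))
```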
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4304751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove a congruence for a prime number If $p$ is a prime number, then prove that $p!(1^{*}-2^{*}+3^{*}-\dots-(p-1)^{*}) \equiv 2-2^{p} \pmod{p^2}$, where $k^{*}$ denotes the inverse of $k$ modulo $p$, i.e. $k^{*}k\equiv 1 \pmod {p}$.
I tried to adapt the proof of Wolstenholme's theorem, which is roughly similar to this problem, but I couldn't prove this correctly.
| We can start with expanding the binomial,
$$2^p=(1+1)^p = \sum_{n=0}^p \binom{p}{n} \mod p^2$$
Then subtract the first and last terms,
$$2^p-2 = \sum_{n=1}^{p-1} \binom{p}{n} \mod p^2$$
We can use the simple identity $\binom{a}{b}=\frac{a}{b}\binom{a-1}{b-1}$ to get,
$$2^p-2 = p \sum_{n=1}^{p-1}\frac{1}{n}\binom{p-1}{n-1} \mod p^2$$
At this point we know the right hand side is divisible by $p$ at least once, so everything multiplying $p$ can be thought of as just being a congruence mod $p$, so we just need to focus on,
$$\sum_{n=1}^{p-1}\frac{1}{n}\binom{p-1}{n-1} \mod p$$
We can rewrite the binomial coefficients in terms of the product,
$$\binom{p-1}{n-1} = \prod_{k=0}^{n-2} \frac{p-1-k}{k+1} = \prod_{k=0}^{n-2} \frac{-1-k}{k+1} = (-1)^{n-1} \mod p$$
Plugging in gets us,
$$\sum_{n=1}^{p-1}\frac{(-1)^{n-1}}{n} \mod p$$
This means we have shown,
$$2^p-2 = p \sum_{n=1}^{p-1}\frac{(-1)^{n-1}}{n}\mod p^2$$
Now we just need to multiply both sides by $-1$; Wilson's theorem lets us write $-1 = (p-1)! \mod p$, which combines with $p$ to make $p!$.
$$2-2^p = p! \sum_{n=1}^{p-1}\frac{(-1)^{n-1}}{n}\mod p^2$$
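The congruence is easy to check by machine for small primes; here $k^*$ is computed as the modular inverse `pow(k, -1, p)` (Python 3.8+), and the helper name is mine:

```python
from math import factorial

def holds(p):
    # alternating sum 1* - 2* + 3* - ... - (p-1)*, with k* k = 1 (mod p)
    alt = sum((-1) ** (k - 1) * pow(k, -1, p) for k in range(1, p))
    # check p! * alt == 2 - 2^p  (mod p^2)
    return (factorial(p) * alt - (2 - 2 ** p)) % (p * p) == 0

results = {p: holds(p) for p in (3, 5, 7, 11, 13, 17, 19, 23)}
```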
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4305097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Do I have the right bounds and function for this integral? Find the integral of function $f(x,y,z)=(x^2+y^2+z^2)^{3/2}$ inside the sphere $(z-2)^2+x^2+y^2=4$
My approach: by changing to spherical coordinates, we have $0\le\rho\le2\cos\varphi$ $(0\le\theta\le2\pi,\ 0\le\varphi\le\frac\pi2)$, and the function becomes $f(\rho,\theta,\varphi)=\rho^3$, which, when multiplied by the Jacobian $\rho^2\sin\varphi$, becomes $\rho^5\sin\varphi$. In other words, we are integrating this function over the above $\rho,\theta,\varphi$ bounds. Is this correct? Why would Wolfram Alpha's calculator give a much larger value when integrating in Cartesian coordinates?
| No, the bounds are not correct. Note that\begin{align}x^2+y^2+(z-2)^2\leqslant4&\iff x^2+y^2+z^2\leqslant4z\\&\iff\rho^2\leqslant4\rho\cos\varphi\\&\iff\rho\leqslant4\cos\varphi.\end{align}So, it should be$$\int_0^{2\pi}\int_0^{\pi/2}\int_0^{4\cos\varphi}\rho^5\sin(\varphi)\,\mathrm d\rho\,\mathrm d\varphi\,\mathrm d\theta$$
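A numerical cross-check of these bounds; the closed form below is my own hand evaluation, $\int_0^{2\pi}\!\int_0^{\pi/2}\!\int_0^{4\cos\varphi}\rho^5\sin\varphi\,d\rho\,d\varphi\,d\theta = 2\pi\cdot\frac{4^6}{6}\cdot\frac17=\frac{4096\pi}{21}$:

```python
from math import pi, cos, sin
from scipy.integrate import tplquad

# tplquad integrates func(z, y, x) with x outermost; here
# x = theta in [0, 2pi], y = phi in [0, pi/2], z = rho in [0, 4 cos(phi)].
val, err = tplquad(
    lambda rho, phi, theta: rho**5 * sin(phi),
    0, 2 * pi,
    0, pi / 2,
    0, lambda theta, phi: 4 * cos(phi),
)

closed_form = 4096 * pi / 21
```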
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4305471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Hales-Jewett Number $HJ(2,r)$ I'm trying to find the exact value for the Hales-Jewett number $HJ(2,r)$, where $HJ(k,r)$ is defined as the smallest $n$ so that any coloring of the elements of $[k]^n$ by $r$ colors has a monochromatic combinatorial line.
It seems like a simple (maybe even trivial) problem, but I'm not sure how to proceed since I'm still trying to wrap my head around combinatorial lines.
| Consider the points in $[2]^r$ of the form:
$(1,1,1,\ldots, 1),$
$(2,1,1,\ldots, 1)$,
$(2,2,1,\ldots, 1),$
$\qquad\ddots$
$(2,2,2,\ldots, 2)$.
Any pair of these forms a combinatorial line. As there are $r+1$ of these points, in any $r$-colouring some pair must have the same colour.
Conversely, if $k<r$, then we may $r$-colour $[2]^k$ by number of co-ordinates equal to $2$. Then any monochromatic pair of points will not form a combinatorial line, as moving from one point to the other, there will exist a co-ordinate where a $1$ changes to a $2$, as well as a co-ordinate where a $2$ changes to a $1$.
Thus we have $HJ(2,r)=r$.
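For small cases the equality $HJ(2,r)=r$ can be brute-forced; the encoding of combinatorial lines as templates over $\{1,2,*\}$ is my own (standard) choice:

```python
from itertools import product

def lines(n):
    # A combinatorial line in [2]^n: fix some coordinates, let the rest
    # (at least one) move together; its two points set the wildcards
    # all to 1 or all to 2.
    out = []
    for tmpl in product((1, 2, '*'), repeat=n):
        if '*' in tmpl:
            p1 = tuple(1 if c == '*' else c for c in tmpl)
            p2 = tuple(2 if c == '*' else c for c in tmpl)
            out.append((p1, p2))
    return out

def forced(n, r):
    """True iff every r-colouring of [2]^n has a monochromatic line."""
    pts = list(product((1, 2), repeat=n))
    ls = lines(n)
    for colours in product(range(r), repeat=len(pts)):
        col = dict(zip(pts, colours))
        if not any(col[p] == col[q] for p, q in ls):
            return False
    return True

# HJ(2, r) = r for r = 2, 3: dimension r works, dimension r - 1 does not.
checks = [forced(2, 2), not forced(1, 2), forced(3, 3), not forced(2, 3)]
```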
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4305634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The harmonic series sequence Well! I was going through the harmonic series on Wolfram MathWorld and found that harmonic numbers are really tough to calculate. I was scribbling and wrote this:
$$\sum_{k = 1}^{x}H_k= \sum_{k =1}^{x}\left(\sum_{u = 1}^{k}\frac 1{u}\right)$$
How can I simplify these highly complicated things? How can we evaluate this sum?
At first, we can't find $H_k$ itself in closed form, and a series over a series makes it a little hard.
I wonder how we can simplify this?
It's just my curiosity....
| Well! I liked this curiosity.
I guess by solving you mean to express everything using a single harmonic number (say $H_x$) and so find the above $\sum\sum\frac 1{k}$.
This is possible, and I first found it by spotting a pattern: consecutive harmonic numbers differ only by the last term, $H_k-H_{k-1}=\frac 1{k}$.
But as a proof of my simplified result, here is an easy way:
$$\begin{align*}
S
&=\sum_{k =1}^{x}H_k\\
&=\sum_{k =1}^{x}\sum_{n =1}^{k}\frac 1{n}\\
&=\sum_{n =1}^{x}\frac 1{n}\sum_{k =n}^{x}1\\
& = \sum_{n =1}^{x}\frac 1{n}(x-n+1)\\
& =\sum_{n =1}^{x}\frac {(x+1)-n}{n}\\
& =(x+1)\sum_{n =1}^{x}\frac 1{n} - \sum_{n =1}^{x}1\\
& = (x+1)H_x - x
\end{align*}$$
A quick numerical check confirms that the direct double sum and the closed form $(x+1)H_x - x$ agree.
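As a sanity check, exact rational arithmetic confirms $\sum_{k=1}^x H_k=(x+1)H_x-x$ for small $x$ (helper names are mine):

```python
from fractions import Fraction

def H(m):
    # m-th harmonic number as an exact fraction
    return sum(Fraction(1, u) for u in range(1, m + 1))

ok = all(
    sum(H(k) for k in range(1, x + 1)) == (x + 1) * H(x) - x
    for x in range(1, 40)
)
```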
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4305791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Nonsingularity of $xy-z^2$ I am trying to show that $Z(xy-z^2)$ in $\mathbb{P}^2$ is nonsingular.
My idea is considering the first standard chart $\mathbb{A}^2\hookrightarrow\mathbb{P}^2$ given by $(x,y)\mapsto [x:y:1]$. Intersecting $Z(xy-z^2)$ with the chart gives $Z(xy-1)$. The complement of the first standard chart is a copy of $\mathbb{P}^1$, $[x:y:0]$. And $Z(xy-z^2)\cap\mathbb{P}^1_{x:y}=[1:0:0]\cup[0:1:0]$.
As I learned before, $Z(xy-z^2)$ is isomorphic to $\mathbb{P}^1$... The hint says do it for the first affine chart, then check the points not in the chart. I wonder how should I continue the proof.
| If you want to understand the tangent space at a given point, you probably want more than just that point; you want an open neighbourhood.
So instead of looking just at $[1:0:0]$ look at the open neighbourhood $\{x = 1\}$ which gives you the equation $y - z^2$.
The three standard open sets give you three equations: $y - z^2$, $x - z^2$, $xy - 1$.
If you take partial derivatives you get three Jacobians: $(1, -2z)$, $(1, -2z)$, $(y, x)$. Now check that none of them vanish at a point of the corresponding curve: $(1,-2z)$ is never $(0,0)$, and $(y,x)=(0,0)$ would force $xy-1=-1\neq 0$, so that point does not lie on $Z(xy-1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4305912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Reduce to first order and solve $yy'' = 3y'^2$ Reduce to first order and solve $yy'' = 3y'^2$
Dividing both sides by $y$ and re-arranging gives $y'' - 3y' = 0$. This is clearly a homogeneous equation; a solution might be $e^{3x}$. To test this:
$y' = 3e^{3x}, y'' = 9e^{3x} \implies 9e^{3x}-9e^{3x} = 0$
Substituting $y = ue^{3x}, y' = u'e^{3x}+3ue^{3x}, y'' = u''e^{3x}+3u'e^{3x}+3u'e^{3x}+9ue^{3x}$
Plugging these in $$(u''e^{3x}+3u'e^{3x}+3u'e^{3x}+9ue^{3x})-3\cdot (u'e^{3x}+3ue^{3x})=0$$
$$\implies u''e^{3x}+3u'e^{3x}=0$$
Substituting $v = u'$
$$\implies v'e^{3x}+3ve^{3x}=0$$
$$\implies \int\frac{dv}{v} = -3\int dx$$
However the solution to this exercise is $(c_1x+c_2)^{-\frac{1}{2}}$ , how did they get this?
| Hint...you can't cancel $y$ like you have. Instead put $y''=y'\frac{dy'}{dy}$ and the result will follow.
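Following the hint, $y''=y'\frac{dy'}{dy}$ turns the equation into $yv\frac{dv}{dy}=3v^2$ with $v=y'$, giving $v=Cy^3$ and finally $y=(c_1x+c_2)^{-1/2}$. A CAS check that this family really solves $yy''=3y'^2$:

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = (c1 * x + c2) ** sp.Rational(-1, 2)

# residual of y y'' - 3 (y')^2 should vanish identically
residual = sp.simplify(y * sp.diff(y, x, 2) - 3 * sp.diff(y, x) ** 2)
```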
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4306035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Prove that $(dx^i)_p\Bigg( \frac{\partial}{\partial x^j}|_p \Bigg )=\delta ^i_j$ In Tu's book, An Introduction to Manifolds (Second edition), at page 35 states that by definition:
$$(dx^i)_p\Bigg( \frac{\partial}{\partial x^j}|_p \Bigg )=\frac{\partial}{\partial x^j}|_p x^i$$
I don't understand how that holds by definition.
If I apply the definition, I get:
$$(dx^i)_p\Bigg( \frac{\partial}{\partial x^j}|_p \Bigg )=\sum_{j=1}^n \frac{\partial}{\partial x^j }|_p \frac{\partial x^i}{\partial x^j}|_p$$
and I don't follow why the following should then be true:
$$\sum_{j=1}^n \frac{\partial}{\partial x^j }|_p \frac{\partial x^i}{\partial x^j}|_p=\frac{\partial}{\partial x^j}|_p x^i$$
Can someone explain?
| The exterior derivative of a function is defined through the directional derivative:
$df(X)=X(f)$, where $X$ is an arbitrary tangent vector. We apply that formula on a coordinate function $f=x^i$ and along a tangent to the j-coordinate curve, $X=\frac{\partial}{\partial x^j}$, all happening at a point $p$.
In order to calculate the directional derivative $\frac{\partial}{\partial x^j}(x^i)$ on a manifold, we need a curve passing through the point $p$ at $t=t_0$ with tangent velocity $\frac{\partial}{\partial x^j}$; the aforementioned $j$-coordinate curve is precisely that at each point of our chart. Let me denote the $j$-coordinate curve by $x^{-j}\equiv({x^j})^{-1}$: it is inverse to the $j$-th coordinate function, $x^j\circ x^{-j}\equiv \mathrm{id}$, while all other coordinate functions are constant on this curve. So we have
$$\frac{\partial}{\partial x^j}(x^i)|_p=\lim_{t\to 0}\frac{[x^i\circ x^{-j}]_{t_0+t}-[x^i\circ x^{-j}]_{t_0}}{t}=\delta ^i_j$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4306238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lyapunov's Central Limit Theorem Let $\{X_k\}$ be independent, $S_n = \sum_{k=1}^n X_k$, $D_n^2 = \operatorname{Var}(S_n) < \infty$.
(a) Show that if there exists $q >2$ such that
$$ \lim_{n \rightarrow \infty} D_n^{-q} \sum_{k=1}^n E\{ |X_k - EX_k|^q \} = 0 $$
then $D_n^{-1}(S_n - ES_n) \xrightarrow[]{d} N(0,1)$.
(b) Show that part (a) applies in the case $D_n \rightarrow \infty$ and $E\{ |X_k - EX_k|^q \} \leq C \cdot \operatorname{Var}(X_k)$ for some $q >2$, $C > 0$, $k = 1,2,\ldots$
I approach part (a) by showing that the Lyapunov condition implies the Lindeberg condition, but I have very little clue about part (b).
Any help would be greatly appreciated. Thank you.
| In part (b), you have to check that $\lim_{n \rightarrow \infty} D_n^{-q} \sum_{k=1}^n E\{ |X_k - EX_k|^q \} = 0$. To do so, use the given bound to get
$$
D_n^{-q} \sum_{k=1}^n E\{ |X_k - EX_k|^q \}\leqslant CD_n^{-q}\sum_{k=1}^n\operatorname{Var}(X_k)=CD_n^{2-q}.
$$
Now you can conclude using the assumption on $D_n$ and $q$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4306425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Asymptotic convergence of ODE solutions to a unique function, regardless of initial conditions I have a non-linear differential equation of the form
$$
\frac{dy}{dx}=F(x) - G(y,x)
$$
where $G$ is of the form
$$
G(y,x) = y^3f_3(x)-y^2f_2(x)
$$
and $f_2,f_3>0$ for all $x$, and $F(x)>0$ for all $x$.
Below I attach results from numerical integration for different initial conditions (the integration is backwards from $x=3$ to $x=0$), indicated by the colored lines.
It seems that regardless of the initial condition, the solutions seem to be asymptotic to a unique function.
Is there a general statement I can use to prove this result? Namely, what conditions do $F,G$ need to satisfy in order for solutions to be asymptotic to a unique function, regardless of the initial conditions?
Furthermore, it seems that a good approximation for this asymptotic behavior is given by the solution to the equation (shown as the black dashed line)
$$
G(y,x)=F(x)
$$
which one gets if one sets $dy/dx=0$ in the above equation (which, of course, is not true). How can one explain this?
| If the rate of change of the $f_k$ coefficient functions is small enough relative to their values, or the solution stays close to the root of the polynomial in $y$, then you can treat $x$ as constant and apply the rules of an autonomous scalar dynamical system.
Close to the roots (as long as they are simple) the solution either moves to them or away from them exponentially. If far away from the roots, the solution changes fast in vertical direction towards one of the roots, as you can see in your initial segment from $3.0$ to $2.8$. As the coefficients change "adiabatically", the solution will follow that change, lagging in time. This is what you observe moving from $2.8$ to the left, with the solution slightly below the root curve.
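The mechanism is easy to reproduce with toy coefficient functions (these specific $F,f_2,f_3$ are illustrative assumptions, not the OP's): solutions from very different initial conditions collapse onto the slowly drifting root of $G(y,x)=F(x)$.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

# Illustrative positive coefficients (assumed, not the OP's):
F  = lambda x: 2 + np.sin(x)
f3 = lambda x: 1 + 0.1 * x
f2 = lambda x: 0.5

def rhs(x, y):
    return F(x) - (y**3 * f3(x) - y**2 * f2(x))

x_end = 10.0
finals = []
for y0 in (-1.0, 0.5, 3.0):
    sol = solve_ivp(rhs, (0, x_end), [y0], rtol=1e-9, atol=1e-11)
    finals.append(sol.y[0, -1])

# all trajectories end essentially at the same value...
spread = max(finals) - min(finals)

# ...which is close to the quasi-static root of G(y, x) = F(x) at x_end
y_qs = brentq(lambda y: y**3 * f3(x_end) - y**2 * f2(x_end) - F(x_end), 0, 10)
gap = abs(finals[0] - y_qs)
```

The small but nonzero `gap` is the lag the answer describes: the solution follows the drifting root with a delay of order one relaxation time.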
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4306654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Proof that $\operatorname{Span}(A \cup \{x_0\})$ for $x_0 \notin A$ uniquely determines $x = m + ax_0$? I have a difficult time understanding the claim $x = m + ax_0$ as stated in the text below (it regards the proof of the Hahn-Banach theorem). I would rather look at a proof, or the beginning of one; could anyone help me out a bit? It would be really appreciated.
| If $x\in \operatorname{Span}(M \cup \{x_0\})$, then $$x = \sum_{i = 1}^k \lambda_i m_i + \alpha x_0$$ for some scalars $\lambda_1,\ldots, \lambda_k, \alpha$ and vectors $m_1,\ldots, m_k\in M$. If $m = \sum \lambda_i m_i$, then $m\in M$ since $M$ is a subspace of $E$. Thus $x = m + \alpha x_0$ with $m\in M$.
Suppose $x = m + \alpha x_0 = m' + \alpha' x_0$ for some $m,m'\in M$ and scalars $\alpha, \alpha'$. Then $(\alpha - \alpha')x_0 = m' - m \in M$; the condition $x_0\in E\setminus M$ forces $\alpha = \alpha'$. Then $m = m'$. Hence, the representation in the proof is unique.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4306870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Functional analysis and Banach algebras, well-definedness I have a question regarding well-definedness.
Suppose $X$ is the Banach space $\ell^{1}(\mathbb{Z})$ with the norm $\|(x_{n})_{n}\|_{1} := \sum_{n \in \mathbb{Z}} |x_{n}|$.
If we define the product $xy$ as $(xy)_{n} = \sum_{m} x_{m}y_{n-m}$.
Then is it correctly understood that showing
$\sum_{n} |(xy)_{n}| \leq \|x\|\,\|y\|$ and that $X$ has a unit implies that
$(xy)_{n} = \sum_{m} x_{m}y_{n-m}$ is a WELL DEFINED PRODUCT which makes $X$ a Banach algebra?
| So first of all, I show that the product is indeed in $\ell^{1}(\mathbb{Z})$:
\begin{align*}
\sum_{n} |(xy)_{n}| &\leq \sum_{n} \sum_{m} |x_{m}| |y_{n-m}|\\
&= \sum_{m} |x_{m}| \sum_{n} |y_{n-m}|\\
&= \sum_{m} |x_{m}| ||y||\\
&= ||x|| \ ||y||
\end{align*}
which implies that $xy \in \ell^{1}(\mathbb{Z})$, i.e. that $\sum_{n} |(xy)_{n}| < \infty$.
Furthermore,
\begin{align*}
(x(y+z))_{n} &= \sum_{m} x_{m} (y+z)_{n-m}\\
&= \sum_{m} x_{m} (y_{n-m} + z_{n-m})\\
&= \sum_{m} x_{m} y_{n-m} + \sum_{m} x_{m} z_{n-m}\\
&= (xy)_{n} + (xz)_{n}
\end{align*}
and
\begin{align*}
((x \cdot y) \cdot z)_{n} &= \sum_{m} (x \cdot y)_{m}\, z_{n-m}\\
&= \sum_{m} \sum_{k} x_{k}\, y_{m-k}\, z_{n-m}\\
&= \sum_{k} x_{k} \sum_{j} y_{j}\, z_{(n-k)-j} \qquad (j = m-k)\\
&= \sum_{k} x_{k}\, (y \cdot z)_{n-k}\\
&= (x \cdot (y \cdot z))_{n}
\end{align*}
which shows that the product is well defined and that $X$ is a Banach algebra.
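For finitely supported sequences this convolution is exactly `np.convolve`, so the Banach-algebra inequalities can be sanity-checked numerically (a sketch with my own test data):

```python
import numpy as np

rng = np.random.default_rng(1)
# finitely supported elements of l^1(Z); shifting the support to start
# at index 0 is harmless for these identities
x = rng.standard_normal(7)
y = rng.standard_normal(5)
z = rng.standard_normal(4)

norm1 = lambda v: np.abs(v).sum()
conv = np.convolve               # (x*y)_n = sum_m x_m y_{n-m}

submultiplicative = norm1(conv(x, y)) <= norm1(x) * norm1(y) + 1e-12
associative = np.allclose(conv(conv(x, y), z), conv(x, conv(y, z)))

# the unit is the delta sequence e = (..., 0, 1, 0, ...)
e = np.array([1.0])
unital = np.allclose(conv(e, x), x)
```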
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4307019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing that $\sqrt{2+\sqrt3}=\frac{\sqrt2+\sqrt6}{2}$ So I was playing around on the calculator, and it turns out that $$\sqrt{2+\sqrt3}=\frac{\sqrt2+\sqrt6}{2}$$
Does anyone know a process to derive this?
| The simplest way is to square both sides and then compare them.
Formally, consider $\left(\frac{\sqrt{2}+\sqrt{6}}{2}\right)^{2}=\frac{8+2\sqrt{12}}{4}=2+\sqrt{3}$.
Taking square-root on both sides yields $\frac{\sqrt{2}+\sqrt{6}}{2}=\sqrt{2+\sqrt{3}}$.
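For what it's worth, a CAS reproduces the denesting; `sympy.sqrtdenest` implements exactly this kind of simplification:

```python
import sympy as sp

lhs = sp.sqrt(2 + sp.sqrt(3))
rhs = (sp.sqrt(2) + sp.sqrt(6)) / 2

# both sides are positive, so comparing squares suffices
squares_match = sp.expand(rhs**2) == 2 + sp.sqrt(3)

# sympy can also produce the denested form directly
denested = sp.sqrtdenest(lhs)
denest_matches = sp.expand(denested - rhs) == 0
```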
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4307201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Calculating a sum sigma, with minimum $\sum_{i=1}^{\min(n,k-1)}\frac{1}{(1-p)^i}$ How do I calculate this sum:
$$\sum_{i=1}^{\min(n,k-1)}\frac{1}{(1-p)^i}$$
It is like a geometric sum, but it has the minimum, which I do not know how to deal with.
I got this sum while computing the distribution of the sum $X+Y$ of two random variables, where $X\sim U(1,n)$ (discrete uniform), $Y\sim G(p)$ (geometric), $n \in \mathbb{N}$, $0<p<1$.
$$\begin{align}P(X+Y=k) &= \sum_{i=1}^{\min(n,k-1)} P(X=i,Y=k-i) \\[1ex] &=\sum_{i=1}^{\min(n,k-1)} P(X=i) \cdot P(Y=k-i) \\[1ex] &=\sum_{i=1}^{\min(n,k-1)} \frac{1}{n} \cdot (1-p)^{k-i-1}p\\[1ex]&=\frac{(1-p)^{k-1}p}{n} \cdot \sum_{i=1}^{\min(n,k-1)} \frac{1}{(1-p)^i} \\[1ex] &\end{align}$$
Thanks a lot!
| If you don't want the $\min(a,b)$ function in the answer, you can rewrite the sum using $\min(a,b)=\frac{a+b-|a-b|}{2}$. Since $0<p<1$, we have $\frac{1}{1-p}>1$, so every term is positive and the partial sums increase with the upper limit; hence $$\begin{align*}\sum_{i=1}^{\min(n,k-1)}\frac{1}{(1-p)^i}&=\min\bigg(\sum_{i=1}^{n}\frac{1}{(1-p)^i},\sum_{i=1}^{k-1}\frac{1}{(1-p)^i}\bigg)\\&=\frac{\sum_{i=1}^{n}\frac{1}{(1-p)^i}+\sum_{i=1}^{k-1}\frac{1}{(1-p)^i}-\bigg|\sum_{i=1}^{n}\frac{1}{(1-p)^i}-\sum_{i=1}^{k-1}\frac{1}{(1-p)^i}\bigg|}2\end{align*}$$
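A quick numeric check of the min-of-partial-sums step, with the sums taken from $i=1$ as in the question (helper name mine):

```python
def S(upper, p):
    # partial sum of 1/(1-p)^i, i = 1..upper
    return sum(1 / (1 - p) ** i for i in range(1, upper + 1))

# since 1/(1-p) > 1, S(upper, p) is increasing in `upper`, so the min
# can be pulled inside or outside the sum interchangeably
ok = all(
    abs(S(min(n, k - 1), p) - min(S(n, p), S(k - 1, p))) < 1e-12
    for p in (0.1, 0.5, 0.9)
    for n in range(1, 8)
    for k in range(2, 10)
)
```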
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4307507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Not defined in logarithmic differentiation The derivative of $(\ln x)^x$ by logarithmic differentiation is $(\ln x)^x\left[\ln(\ln(x))+\frac{1}{\ln x}\right]$, but what will the derivative be at $x=1$, or at small values like $0.00001$? There $\ln (x)$ is zero or negative, so $\ln (\ln (x))$ is not defined.
| When you take the logarithm of both sides, it is assumed that both sides are positive, because of the domain of $\ln()$.
Now find out where $(\ln x)^{x}$ is positive and $\ln(\ln x)$ is defined: both require $\ln x>0$, i.e. $x>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4307740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find area of triangle inscribed in a rectangle This is from an italian math competition.
In the rectangle below:
the areas of the yellow, green and red triangles are 27, 35 and 40 respectively. What is the area of the blue triangle?
My trial: I tried to define $a,b,c$ and $d$:
$a(c+d)=80 \tag{1}$
$bc=70 \tag{2}$
$d(a+b)=54 \tag{3}$
(to clarify, a is the length of the side of the red triangle and b of the green triangle which lie on the top of the rectangle. c and d are the lengths of the sides of the green and yellow triangle lying on the right side of the rectangle )
This is a system of 3 equations in 4 unknowns. We can observe that if $(a,b,c,d)$ is a solution, so is $(ak,bk,c/k,d/k)$. We can therefore fix the value of one variable, $a=1$, solve the system, and obtain $b=5/4, c=56, d=24$.
The area of the entire rectangle would then be $R=(a+b)(c+d)=180$, and the blue area $78$.
This solution works, but I think it requires too many calculations for the type of competition. Is there a smarter/quicker way ?
| It becomes simple if you see the trick. Let the height of the rectangle be $a$ and the width be $b$. Assign the shortest side of red the length $c$ and the shortest side of yellow the length $d$. You will have $d=\frac{54}{b}$ and $c=\frac{70}{a}$. Also you have $(b-c)(a-d)=80 \tag{1}$. Now, the area of the blue triangle is $A = ab - 102$.
If $x=ab$, substituting $c$ and $d$ in $(1)$ we get $(x-70)(x-54)=80x$. Solve the quadratic equation for $x$ and you will get $A$.
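For the record, the OP's $3\times3$ system can be dispatched by a CAS in a few lines, fixing $a=1$ via the scaling freedom exactly as in the question (variable names mine):

```python
import sympy as sp

b, c, d = sp.symbols('b c d')
# a(c+d)=80, bc=70, d(a+b)=54, with a fixed to 1
eqs = [(c + d) - 80, b * c - 70, d * (1 + b) - 54]

sols = sp.solve(eqs, [b, c, d], dict=True)
# keep only the geometrically meaningful (positive) branch
valid = [s for s in sols if all(v > 0 for v in s.values())]
s = valid[0]

rect = (1 + s[b]) * (s[c] + s[d])   # area of the rectangle
blue = rect - 27 - 35 - 40          # area of the blue triangle
```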
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4307913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Questions regarding concept of multiplicity I am trying to understand the concept of multiplicity, I'll copy the definitions from the book I am reading:
If $f$ is a curve and $l$ is a line with equation $Y=aX+b$, the points of $l\cap f$ can be obtained by eliminating $Y$ and solving the equation:
$$f_l(X):=f(X,aX+b)=0$$
We have three possibilities:
* $f_l(X)$ is zero, and $l$ is a component of $f$.
* $f_l(X)$ is a constant $\neq 0$, in which case $f\cap l = \emptyset$.
* $f_l(X)$ is a non-constant polynomial that can be written as $f_l(X)=c\prod_{i=1}^{r}(X-x_i)^{m_i}$, where $c$ is a constant and the $x_i$ are the pairwise distinct roots, giving the points of intersection.
Definition: The multiplicity or index of intersection of $l,f$ at the point $P$ is given by:
$$
\begin{equation*}
(l,f)_P= \left\{
\begin{array}{ll}
0 & \quad P\notin l\cap f \\
\infty & \quad P\in l \subset f\\
m_i &\quad P=(x_i,ax_i+b) \text{ as in case 3 above.}
\end{array}
\right.
\end{equation*}
$$
Here is my confusion: Suppose I want to compute the multiplicity of $f(X,Y)=2X^4-3X^2Y+Y^2-2Y^3+Y^4$ at $(0,0)$. I think I need to take a straight line passing through $(0,0)$, which is $Y=AX$. Suppose, for simplicity, $A=1$; then we can write
$$f(X,AX)=X^2 - 5 X^3 + 3 X^4=\frac{ X^2\left(6 X-\sqrt{13}-5\right) \left(6 X+\sqrt{13}-5\right)}{12} $$
Then we have a polynomial $f_l(X)=c\prod_{i=1}^{r}(X-x_i)^{m_i}$ just as is given in the description.
My confusion is the following: We have factors with $m_i=1$ and $m_i=2$. Which one is the correct multiplicity? I noticed that in the definition, the $x_i$ are pairwise distinct roots, so what do we do here? Because we have the factor $(X-0)^2$, which is a double root at $X=0$.
EDIT:
Definition: Let $f$ be a curve and $P$ a point in $f$. There exists an integer $m=m_P(f)\geq 1$ such that for any line $l$ passing through $P$, we have $(l,f)_P \geq m$.
| To resolve your first confusion: you've done everything correct so far, and the intersection multiplicity of your curve and line is two, because that's the exponent corresponding to the $X$ term in your single-variable polynomial. So this says that the multiplicity is at most two.
To prove that the multiplicity is exactly two using your definitions, we'd need to check all the lines through the origin. Writing $f(X,AX)=2X^4-3AX^3+A^2X^2-2A^3X^3+A^4X^4$, this simplifies to $X^2(A^2-(3A+2A^3)X+(2+A^4)X^2)$, and we see that the minimum over $A$ of the order to which $X$ divides this polynomial is 2 (attained for every $A\neq 0$; for $A=0$ the order is 4). We also need to check the line $X=0$, which gives $Y^2(1-2Y+Y^2)$, which is also divisible by $Y$ to order two, so the multiplicity is two.
The fact that this corresponded to the lowest-degree term, $Y^2$, in the expansion of your polynomial around $(0,0)$ is no accident: translating so the point where we're considering the intersection is the origin, one may note that the lowest-degree homogeneous part of the equation for your curve splits as a product of homogeneous linear factors, and as long as the line you're intersecting with isn't one of those factors, the intersection multiplicity you get from your calculation will be exactly the degree of the smallest nonzero homogeneous part. If the line you're intersecting with is one of those factors, it will be bigger, proving the claim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Maximum value of function $\xi \mapsto |\hat{f}(\xi)|^2$ (Fourier Transformation) Let $f \in \mathcal{S(\mathbb{R})}$ a positive function. Find the point $\xi$ where the function $\xi \mapsto |\hat{f}(\xi)|^2$ reaches its maximum value.
Solution:
Defining the function $\varphi(\xi) = |\hat{f}(\xi)|^2$, we obtain that $\varphi '(\xi) = 2|\hat{f}(\xi)|(\hat{f}'(\xi))$ and using that $\hat{f}'(\xi) = -i\widehat{xf}(\xi) $, we obtain that
$\varphi '(\xi) = -2i|\hat{f}(\xi)|\widehat{xf}(\xi)$. We conclude that $\varphi '(\xi) = 0 \Leftrightarrow -2i|\widehat{f}(\xi)|\widehat{xf}(\xi) = 0$. Help.
| I think this is simpler than it looks: Recall $\hat{f}(\xi)= \int_{\mathbb{R}} e^{-2\pi ix\xi}f(x)\, dx$, and since $f$ is positive we have
$$
|\hat{f}(\xi)|\leq \int_{\mathbb{R}} |e^{-2\pi ix\xi}|f(x)\, dx = \int_{\mathbb{R}} f(x)\, dx = \hat{f}(0).
$$
Put another way, since $f$ is positive, the complex exponential could be introducing cancellation, making things smaller. Therefore, we'll maximize the value of $\hat{f}$ when no such effects are in play (note though that I'm not claiming $\xi=0$ is the only such point; I'm not sure if that's true or not).
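A numerical illustration with a concrete positive Schwartz function (my own sketch, using $f(x)=e^{-x^2}$ and a plain Riemann sum in place of the exact transform):

```python
import cmath
import math

def f(x):
    return math.exp(-x * x)  # a positive Schwartz function

def fhat(xi, L=10.0, n=4000):
    # Midpoint Riemann-sum approximation of int e^{-2 pi i x xi} f(x) dx;
    # the tails beyond [-L, L] are negligible for this f.
    h = 2.0 * L / n
    return h * sum(
        f(-L + (k + 0.5) * h) * cmath.exp(-2j * math.pi * (-L + (k + 0.5) * h) * xi)
        for k in range(n)
    )
```

For this $f$, $|\hat{f}(\xi)|$ is indeed maximized at $\xi=0$, where it equals $\int f = \sqrt{\pi}$.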
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why is it important that the $p$-adic absolute value satisfy multiplicativity? I suppose my question is really "why do we require norms in general to satisfy multiplicativity?". I ask this because for the usual absolute value on $\mathbb R$, I never feel like multiplicativity plays any "key role"; compare this to for example the omnipresent triangle inequality for the usual absolute value on $\mathbb R$, and in general in any metric space.
Suppose we are learning about $p$-adic numbers for the first time, say via this nice presentation: https://math.uchicago.edu/~tghyde/Hyde%20--%20Introduction%20to%20p-adics.pdf. I get that we would like some sort of norm $|a|_n$ for $a\in \mathbb Z_n$ that is smaller for values of $a$ that have a larger number $N$ of rightmost zeroes (in base $n$), and I agree something like $c^{-N}$ is a natural choice (really given the limited menagerie of "standard elementary functions" that's pretty much our only choice, other than perhaps an inverse power function like $\frac 1N$ or $\frac 1{N^k}$), but why do we emphasize that such a norm MUST be multiplicative?
EDIT: basically, I’m looking for the easiest example of the $p$-adic norm being used in some application to prove something “interesting” (outside the abstract theory of absolute values), that requires multiplicativity; in particular this rules out Ostrowski’s theorem.
Sort-of related is Why does the p-adic norm use base p? since it is sort of related to this "motivating the $p$-adic norm" business I have going on here.
| The $p$-adic norm on $\mathbb{Q}$ is a standard example of an algebraic absolute value. In particular, we are viewing $\mathbb{Q}$ as a ring (more specifically an integral domain) here, and so the only notions of absolute values that are of interest are the ones that respect the ring operations. None of the other axioms refer to ring multiplication in any way, so the requirement that $\lvert \cdot \rvert$ be multiplicative can be thought of as the simplest way to make the absolute value respect the ring multiplication.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How to deal with this inequation with 3 absolute values?
Solve:$$|x-2|+|3x+2|-x-2|x+4|\le -x-4$$
I am having trouble even starting with this inequation. Do I find out first the zeroes of each absolute value and then I would get intervals for each absolute value?
| Rewrite your inequality as
$$|x-2| + |3x+2| + 4 \le 2|x+4|
\iff |-x+2| + |3x+2| + 4 \le |2x+8|$$
By the triangle inequality below, the left-hand side is always at least the right-hand side, so your inequality can only hold with equality.
Recall in the triangle inequality among $n$ real numbers $u_1,u_2,\ldots, u_n$,
$$|u_1| + \cdots + |u_n| \ge |u_1 + \cdots + u_n|,$$
the equality is achieved when and only when all non-zero $u_k$ have the same sign. Since
$$(-x + 2) + (3x +2) + 4 = 2x+8\quad\verb/and/\quad 4 > 0$$
the inequality you have is equivalent to
$$-x + 2 \ge 0\;\;\verb/and/\;\; 3x + 2 \ge 0 \quad\iff\quad x \in \left[ -\frac23, 2\right]$$
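A quick numerical spot check of the solution set (my own sketch): define the gap $|x-2|+|3x+2|+4-2|x+4|$, which is $\ge 0$ everywhere by the triangle inequality; the original inequality holds exactly where the gap vanishes.

```python
def gap(x):
    # |x-2| + |3x+2| + 4 - 2|x+4|, which is >= 0 by the triangle
    # inequality; the original inequality holds iff this gap is 0.
    return abs(x - 2) + abs(3 * x + 2) + 4 - 2 * abs(x + 4)

inside = [-0.6, -0.25, 0.0, 1.0, 1.9]    # sample points of [-2/3, 2]
outside = [-2.0, -0.7, 2.1, 3.0]         # sample points outside [-2/3, 2]
```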
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to compute the wave operator with respect to a metric? I am reading a paper where the author is working with the following metric
$$g = \frac{-dt^2 + dx^2}{t^2}$$
on the manifold $M=[0,\infty)\times \mathbb{R}^3.$ He then goes on to state that
$$\Box_g u = (t\partial_t)^2 u - 3 t\partial_t u -t^2 \Delta u.$$
I am not very well versed in geometry, so I am not sure how to derive the above expression. I know that the usual wave operator is $u_{tt}-\Delta u$ but I am guessing that because we are working with the metric $g$ we have some modifications. Any comments on how to derive the above expression will be much appreciated.
| Take a look at Definition of Tensor Laplacian; more specifically, the end of the answer where I write the coordinate formula for the Laplacian of a smooth function:
\begin{align}
\Delta_gu&=\frac{1}{\sqrt{|g|}}\frac{\partial}{\partial x^a}\left(\sqrt{|g|}g^{ab}\frac{\partial u}{\partial x^b}\right)\tag{$*$}
\end{align}
In that post, I merely stated that the above coordinate formula follows from the abstract definition $\Delta_gu= \text{div}(\text{grad}_gu)$. Recall that the gradient is the vector field associated to $du$ via the metric (below $g^{\sharp}:T^*M\to TM$ is the inverse musical isomorphism):
\begin{align}
\text{grad}_g(u):=g^{\sharp}(du)=g^{\sharp}\left(\frac{\partial u}{\partial x^a}\,dx^a\right)=\frac{\partial u}{\partial x^a}g^{\sharp}(dx^a)=
\frac{\partial u}{\partial x^a}g^{ab}\frac{\partial}{\partial x^b}
\end{align}
i.e the gradient has components $(\text{grad}_g(u))^b=g^{ab}\frac{\partial u}{\partial x^a}$. Once you have the formula for the components of the gradient, you need to calculate the divergence. For this, you need to know the Voss-Weyl formula, which I prove here. So, this establishes the formula $(*)$.
Now, for your specific case, I assume of course that you mean $dx^2+dy^2+dz^2$ in your definition of the metric. In this case, the components written out as a matrix are:
\begin{align}
[g_{ab}]=
\begin{pmatrix}
-\frac{1}{t^2}&0&0&0\\
0&\frac{1}{t^2}& 0&0\\
0&0&\frac{1}{t^2}&0\\
0&0&0&\frac{1}{t^2}
\end{pmatrix}\quad\text{and} \quad
[g^{ab}]=
\begin{pmatrix}
-t^2&0&0&0\\
0&t^2&0&0\\
0&0&t^2&0\\
0&0&0&t^2
\end{pmatrix}
\end{align}
and $\sqrt{|g|}:=\sqrt{|\det [g_{ab}]|}=\frac{1}{t^4}$ (the upstairs indices mean take the inverse matrix). So, carrying out the Einstein summation in $(*)$, we get
\begin{align}
\Delta_gu&=t^4\frac{\partial}{\partial t}\left(\frac{1}{t^4}\cdot (-t^2)\frac{\partial u}{\partial t}\right)+ \sum_{i=1}^3t^4\frac{\partial}{\partial x^i}\left(\frac{1}{t^4}\cdot t^2\frac{\partial u}{\partial x^i}\right)\\
&=-t^4\left[\frac{1}{t^2}\frac{\partial ^2u}{\partial t^2}+\left(-\frac{2}{t^3}\right)\frac{\partial u}{\partial t}\right]+t^2\sum_{i=1}^3\frac{\partial^2u}{(\partial x^i)^2}\\
&=-t^2\frac{\partial^2u}{\partial t^2}+2t\frac{\partial u}{\partial t}+t^2\sum_{i=1}^3\frac{\partial^2u}{(\partial x^i)^2}
\end{align}
So, I guess the $3\partial_tu$ in your post should actually be $2\partial_tu$.
What I found is the overall minus of what you wrote, so perhaps the author is defining $\Box_g$ as $-\Delta_g$ (which is weird, because I've always used the box operator to mean the Laplacian given by the metric), or perhaps the overall signature of the metric should have an extra minus sign. Anyway, I leave this minus sign issue to you to figure out based on the conventions.
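One can also check the coordinate formula $(*)$ numerically with nested central differences (my own sketch; I use a test function $u(t,x)=t^3+tx^2$ depending only on $t$ and $x$, so the $y,z$ terms drop out, and compare against the closed form $-t^2u_{tt}+2t\,u_t+t^2\Delta u$ derived above):

```python
def u(t, x):
    return t**3 + t * x * x  # test function of (t, x) only

def d(fn, i, p, h=1e-4):
    # Central difference of fn in coordinate i at the point p = (t, x).
    q1, q2 = list(p), list(p)
    q1[i] += h
    q2[i] -= h
    return (fn(*q1) - fn(*q2)) / (2 * h)

def laplacian_num(t, x):
    # (1/sqrt|g|) d_a ( sqrt|g| g^{ab} d_b u ), with sqrt|g| = 1/t^4,
    # g^{tt} = -t^2, g^{xx} = t^2 (y and z suppressed since u ignores them).
    flux_t = lambda t, x: (1.0 / t**4) * (-t**2) * d(u, 0, (t, x))
    flux_x = lambda t, x: (1.0 / t**4) * (t**2) * d(u, 1, (t, x))
    return t**4 * (d(flux_t, 0, (t, x)) + d(flux_x, 1, (t, x)))

def laplacian_closed(t, x):
    # -t^2 u_tt + 2 t u_t + t^2 u_xx, computed by hand for this u.
    return -t**2 * (6 * t) + 2 * t * (3 * t**2 + x * x) + t**2 * (2 * t)
```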
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Weakly null sequence in Schreier space Define the Schreier space $X:=\overline{c_{00}}^{\Vert\cdot\Vert}$ where $$\Vert x\Vert = \sup\bigg\{\sum_{i=1}^k|x_{n_i}|:k\le n_1<n_2\cdots <n_k\bigg\}.$$
Show that there exists a weakly null sequence $(x_n)\in X$ such that for any subsequence $(x_{n_m})$, the Cesaro sum $\frac{1}{m}\sum_{i=1}^m x_{n_i}\not\rightarrow 0$ as $m\rightarrow\infty$.
I know that $x_n\rightharpoonup 0$ iff $x_n^{(k)}\rightarrow 0$ as $n\rightarrow \infty $ since $c_{00}$ is dense in $X$. I've tried many different sequences with componentwise decay e.g. shifts of $(1,\frac{1}{2},\frac{1}{2},\frac{1}{2},\frac{1}{4},\frac{1}{4},\frac{1}{4},\frac{1}{4},\cdots)$ but none of them seem to work. Does anyone have an example?
| Let $(e_n)$ be the standard sequence of Schauder basis elements. I.e $e_n(j)=\delta_n^j$, with $\delta$ the Kronecker-Delta function. Clearly this is a weakly null sequence, as pointwise it tends to $0$. Choose any subsequence $(e_{n_k})$. If we let $\lfloor x\rfloor$ denote the floor of any real number, then for any natural $m>1$ we know that
$$\|\frac{1}{m}\sum_{k=1}^me_{n_k}\|\geq\frac{1}{m}\sum_{j=1}^{\lfloor m/2\rfloor}|e_{n_{j+\lfloor m/2\rfloor}}(n_{j+\lfloor m/2\rfloor})|\geq 1/3,$$
which means that the Cesàro sums of $e_{n_k}$ cannot converge to $0$.
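For small examples one can brute-force the Schreier norm and confirm the $1/3$ lower bound on the Cesàro means (my own sketch; it checks the simplest subsequence $n_k=k$, and is only feasible for short vectors):

```python
from fractions import Fraction
from itertools import combinations

def schreier_norm(x):
    # Brute-force sup over admissible sets {n_1 < ... < n_k} with k <= n_1
    # (indices 1-based), as in the definition of the norm.
    N = len(x)
    best = Fraction(0)
    for k in range(1, N + 1):
        for idx in combinations(range(k, N + 1), k):  # all indices >= k
            best = max(best, sum(abs(x[i - 1]) for i in idx))
    return best

m = 8
cesaro = [Fraction(1, m)] * m  # (1/m) * (e_1 + ... + e_m)
```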
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4308962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Plane versus 3D coordinates We know that the line $ax+by+c=0$ is one dimensional and the plane $ax+by+cz+d=0$ is two dimensional.
My question is: if a line is one dimensional, why are 2D points $(x, y)$ used for a line? And if a plane is two dimensional, why are 3D points $(x, y, z)$ used for a plane?
N. B. - I want to understand the intuition rather than a detailed proof.
| As noted in the comment of Bernard, we use two coordinates when we work on a plane (2D) and three coordinates when we work in space (3D). So the equation $ax+by+d=0$ represents a straight line if it is referred to points on a plane, but, if it is referred to points in a 3D space, it is a special case of $ax+by+cz=d$, with $c=0$, and it represents a plane parallel to the $z$ axis.
Also, in 3D the (two) equations
$$
\frac{x-x_0}{a}=\frac{y-y_0}{b}=\frac{z-z_0}{c}
$$
represent a straight line.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cardinality of hypersphere over $\mathbb{F}_p$ is always divisible by $p$? I recently read an interesting post here (the second answer to this) about proving quadratic reciprocity by considering the set
$$S_p^{n-1} = \{(x_1,\dots,x_n)\in\mathbb{F}_p^n : x_1^2+\dots +x_n^2 = 1\}$$
Which is just the equation for an $(n-1)-$sphere with entries from a finite field.
Then, consider
$$N_p(n):= |S_p^{n-1}|$$
which just gives how many possible solutions to $x_1^2+\dots +x_n^2 = 1$ there are in $\mathbb{F}_p$.
It turns out that
$$N_p(2) \equiv 1 \mod p \iff p \equiv 3 \mod 4$$
and
$$N_p(2) \equiv -1 \mod p \iff p \equiv 1 \mod 4$$
for all $p$, which is interesting.
After reading the proof in the link above, I see why this is the case.
What I do not understand is that for $n>2$, I wrote some code and it seems
$$N_p(n) \equiv 0 \mod p$$
for all $p$.
So my question is
Why is it the case that $p$ divides $N_p(n)$ for all $n>2$? What does having a certain amount of solutions on a hypersphere over $\mathbb{F}_p$ have to do with being divisible by $p$?
| I will illustrate this for $n = 3$ for simplicity, but the proof directly generalizes. Let's assume $p > 2$ as well (I didn't check $p = 2$)
First, note that number of solutions to $x^2 = a \mod p$, denoted $N(x^2=a)$, is $1 + \chi(a)$, where $\chi(a) = \left(\frac{a}{p}\right)$ is the Legendre symbol. This allows us to rewrite $N_p(3)$ as a character sum by
$$
\begin{align*}
N_p(3) &= \sum_{a + b+ c = 1} N(x^2 = a) N(x^2 = b) N(x^2 = c) \\
&= \sum_{a+b+c = 1} (1 + \chi(a)) (1 + \chi(b)) (1 + \chi(c))
\end{align*}
$$
Expanding the product, there are three types of terms
*"Main term": $\sum_{a+b+c = 1} 1 \equiv 0 \mod p$.
*This counts number of solutions to $a+b+c = 1 \mod p$. You can pick anything you like for $a,b$, which then fixes $c = 1 - a - b$, so there's $p^2$ solutions, which is $\equiv 0 \mod p$.
*"Off-diagonal" term: the ones not involving $\chi(a)\chi(b)\chi(c)$. This turns out to be 0. There are two types in this case:
*Sums of the form $\sum_{a+b+c = 1} \chi(a)$. Since $\sum_a \chi(a) = 0$, we can sum over $a$ first and get
$$\sum_{a+b+c = 1} \chi(a) = \sum_{a} \chi (a) \sum_{b+c = 1-a} 1 = p \sum_a \chi(a) = 0$$
*Sums of the form $\sum_{a+b+c = 1} \chi(a)\chi(b)$. Similarly,
$$\sum_{a+b+c = 1} \chi(a)\chi(b) = \sum_{a,b} \chi (a) \chi(b) \sum_{c = 1-a-b} 1 = \sum_{a, b} \chi(a)\chi(b) = \left(\sum_a \chi(a)\right)^2 = 0$$
*"Diagonal term": $D_3 = \sum_{a+b+c = 1} \chi(a)\chi(b)\chi(c)$.
*This is the most involved case. We will look at the sum
$$D_2 = \sum_{x+y = 1} \chi(x) \chi(y) = -\chi(-1)$$
and
$$E_2 = \sum_{x+y = 0} \chi(x)\chi(y) = (p-1)\chi(-1)$$
When handling $D_3$, sum over $a$ first, so that
$$D_3 = \sum_a \chi(a)\sum_{b+c = 1 - a} \chi(b) \chi(c)$$
*When $1 - a = 0$, inner sum is $E_2$.
*When $1 - a \neq 0$, we can write $b = b'(1-a), c = c'(1-a)$, then
$$\chi(b)\chi(c) = \chi(b')\chi(c') \chi(1-a)^2 = \chi(b')\chi(c')$$
since $\chi^2 = 1$ (Legendre symbol), so the inner sum is really $D_2$ in this case.
Hence,
$$D_3 = \chi(1) E_2 + \sum_{a \neq 1} \chi(a) D_2 = E_2 - D_2 = p\chi(-1) \equiv 0 \mod p$$
as well.
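The divisibility is easy to confirm by brute force for small primes (a quick sketch along the lines of the code the question mentions):

```python
from itertools import product

def N(p, n):
    # Count solutions of x_1^2 + ... + x_n^2 = 1 over F_p by enumeration.
    return sum(1 for v in product(range(p), repeat=n)
               if sum(x * x for x in v) % p == 1)
```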
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
Find the minimum value of the constant term.
Let $f(x)$ be a polynomial function with non-negative coefficients such that $f(1)=f'(1)=f''(1)=f'''(1)=1$. Find the minimum value of $f(0)$.
By Taylor’s formula, we can obtain
$$f(x)=1+(x-1)+\frac{1}{2!}(x-1)^2+\frac{1}{3!}(x-1)^3+\cdots+\frac{f^{(n)}(1)}{n!}(x-1)^n.$$
Hence $$f(0)=\frac{1}{2}-\frac{1}{6}+\sum_{k=4}^n\frac{f^{(k)}(1)}{k!}(-1)^k.$$
Will this help?
| It is perhaps simpler to use Taylor's formula with fixed degree three and a Lagrange remainder:
$$
f(x)=1+(x-1)+\frac{1}{2!}(x-1)^2+\frac{1}{3!}(x-1)^3+\frac{f^{(4)}(\xi)}{4!}(x-1)^4.
$$
where $\xi$ is between $1$ and $x$. For $x=0$ we have $\xi \ge 0$, and the last term is non-negative, since $f^{(4)}$ has non-negative coefficients as well. This gives
$$
f(0) \ge 1 - 1 + \frac 12 - \frac 16 = \frac 13 \, .
$$
The bound is sharp, equality holds for the function
$$
f(x)=1+(x-1)+\frac{1}{2!}(x-1)^2+\frac{1}{3!}(x-1)^3 = \frac 13 + \frac 12 x + \frac 16 x^3 \, .
$$
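One can verify the extremal polynomial with exact arithmetic (my own sketch):

```python
from fractions import Fraction

# f(x) = 1/3 + x/2 + x^3/6, stored as coefficients c_0, ..., c_3.
coeffs = [Fraction(1, 3), Fraction(1, 2), Fraction(0), Fraction(1, 6)]

def derivative(c):
    return [k * c[k] for k in range(1, len(c))]

def evaluate(c, x):
    return sum(ck * x**k for k, ck in enumerate(c))
```

All four values $f(1), f'(1), f''(1), f'''(1)$ equal $1$, the coefficients are non-negative, and $f(0)=1/3$.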
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A map between connected $n$-dimensional CW complexes is a homotopy equivalence
Show that a map between connected $n$-dimensional CW complexes is a homotopy equivalence if it induces an isomorphism on $\pi_i$ for $i\leq n$. [Pass to universal covers an use homology].
I'm in the middle of proving the above statement. As in hint, for the given map $f:X\to Y$ between connected $n$-dimensional CW complexes, let $\tilde{f}:\tilde{X}\to\tilde{Y}$ be a lift of $f$ where $\tilde{X}$ and $\tilde{Y}$ are universal covers of $X$ and $Y$. Using the fact that $f:\pi_i(X)\to\pi_i(Y)$ is an isomorphism for all $i\leq n$ and $\tilde{X}$ and $\tilde{Y}$ are simply connected, $\tilde{f}_*:\pi_i(\tilde{X})\to\pi_i(\tilde{Y})$ is an isomorphism for all $i\leq n$. Using the relative Hurewicz theorem, one can show $\tilde{f}_*:H_i(\tilde{X})\to H_i(\tilde{Y})$ is an isomorphism for $i\leq n$.
Now the problem is that I want to say $\tilde{X}$ and $\tilde{Y}$ have $n$-dimensional CW complex structures. But it seems to be a very nontrivial fact. Is there another way that avoids using this fact?
| It is a very nontrivial fact because it appeared in a paper once? I do not agree. This fact is not hard to prove, and can be assigned as an exercise in a first algebraic topology course. So I will point you the right directions and let you give a proof, which hopefully you will write up as an answer.
Hint. Use Hatcher Proposition A.2 to define the cell structure. Clearly you want cells upstairs to project to cells downstairs, so you need to understand something about lifting along covering maps: so see the section titled "Lifting properties" in Hatcher 1.3.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Shifted alternating harmonic series Suppose I have the following series
$$ \sum_{k=1}^{\infty}(-1)^{k+1}\frac{1}{k+ia}, \hspace{1cm}$$
with $i$ the imaginary unit and $a \in \mathbb{R}$. Does this shifted sum converge?
| Let $a_k=(-1)^{k+1}$ and $b_k = \displaystyle{\frac{1}{k+ia}}$. Let $A_n = \sum_{k=1}^n a_k$. We do a summation by parts :
\begin{align*}
\sum_{k=1}^n (-1)^{k+1}\frac{1}{k+ia}& = \sum_{k=1}^n a_kb_k \\
&= a_1b_1 + \sum_{k=2}^n (A_k-A_{k-1})b_k\\
&=a_1b_1+\sum_{k=2}^n A_kb_k - \sum_{k=1}^{n-1} A_kb_{k+1}\\
&=A_nb_n + \sum_{k=1}^{n-1} A_k(b_k-b_{k+1}) \qquad \text{(using } a_1b_1=A_1b_1 \text{ to start the sum at } k=1\text{)}
\end{align*}
Now let's notice that
$\bullet\ $ $(A_nb_n)_{n \geq 1}$ tends to $0$ because $(A_n)_{n \geq 1}$ is bounded and $(b_n)_{n \geq 1}$ tends to $0$.
$\bullet\ $ One has
$$ \left|A_k(b_k-b_{k+1})\right| \leq 2\left| \frac{1}{k+ia} - \frac{1}{k+1+ia}\right| \leq 2 \left| \frac{1}{(k+ia)(k+1+ia)}\right| = o \left( \frac{1}{k^{3/2}}\right)$$
Hence the series $\sum A_k(b_k-b_{k+1})$ is absolutely convergent, hence convergent, and you are done.
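A quick numerical check of convergence (my own sketch, with $a=1$): by the pairing of consecutive terms implicit in the summation by parts, the tail beyond $N$ is $O(1/N)$, so distant partial sums should be close.

```python
def partial_sum(N, a):
    # Partial sum of sum_{k>=1} (-1)^(k+1) / (k + i a).
    s = 0j
    for k in range(1, N + 1):
        s += (-1) ** (k + 1) / (k + 1j * a)
    return s

a = 1.0
S1 = partial_sum(10_000, a)
S2 = partial_sum(20_000, a)
```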
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Distinct integer solutions to linear equations involving irrational numbers Let $\alpha$ and $\beta$ be irrational numbers such that $\alpha/\beta$ is also irrational. Can we prove that there are no possible integer solutions for
$$
x+ \alpha y + \beta z = x' + \alpha y' + \beta z'
$$
with $x \neq x'$, $y \neq y'$, and $z \neq z'$?
It is obvious that no solutions exist with only one of the pairs "mismatched" (i.e., $x \neq x'$, $y = y'$, $z = z'$). And it is fairly trivial to prove that there exist no solutions with two "mismatched" pairs (for example, $x \neq x'$ and $y \neq y'$ but $z = z'$); the proof proceeds by contradiction. But I cannot see how to straightforwardly extend the proof to show that no solutions exist where all three values are distinct.
For context, this is related to a question over on Physics.SE concerning the degeneracies of the energy levels in a 3-D box. I realized in writing up my answer there that I didn't have a pithy proof for the statement I made about boxes whose side lengths are irrational multiples of each other, though I would be surprised if it turned out to be false.
| Note that your question is equivalent to asking whether there are integral solutions to
$$X+\alpha Y+\beta Z=0,$$
by taking $X=x-x'$, $Y=y-y'$ and $Z=z-z'$.
Simple counterexamples have been given in the comments, for example $\beta=\alpha+1$ for any irrational $\alpha$, with $(X,Y,Z)=(1,1,-1)$.
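Checking the counterexample numerically with $\alpha=\sqrt{2}$ (my own sketch; note $\alpha/\beta = \sqrt{2}/(\sqrt{2}+1)$ is irrational, as required):

```python
import math

alpha = math.sqrt(2.0)
beta = alpha + 1.0        # also irrational, with alpha/beta irrational
X, Y, Z = 1, 1, -1        # all nonzero, so x != x', y != y', z != z'
value = X + alpha * Y + beta * Z
```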
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4309923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Extracting coefficients I am stuck on the following problem:
Let
\begin{equation}
f(z) = \frac{1 - \sqrt{1 - 4z}}{2}.
\end{equation}
Find $[z^n]$ of $f(z)^r$ for $r \in \mathbb{N}$. Where $[z^n]f(z)$ is the $n$-th coefficient of the power series $f(z) = \sum_{n \geq 0}{a_n z^n}$ therefore $[z^n]f(z) = a_n$.
So far I got
\begin{equation}
\sqrt{1 - 4z} = \sum_{n \geq 0}\binom{1/2}{n}(-4 z)^n.
\end{equation}
and therefore
\begin{align*}
f(z) = \frac{1 - \sqrt{1 - 4z}}{2} &= \frac{-\sum_{n \geq 1}\binom{1/2}{n}(-4)^n z^n}{2} \\
&= \sum_{n \geq 1}(-1)^{n+1}\,2^{2n-1}\binom{1/2}{n} z^n \\
&= \sum_{n \geq 0}a_n z^n
\end{align*}
with coefficients $a_0 = 0$ and $a_n = (-1)^{n+1}2^{2n-1}\binom{1/2}{n}$ for $n \geq 1$.
I wanted to use cauchys integral formula for
$g(z) = f(z)^r$ to extract the $n$ coefficient
but then I get
\begin{align}
\left( \frac{1 - \sqrt{1 - 4z}}{2}\right)^r &= \frac{1}{2^r} \left(1 - \sqrt{ 1 - 4z} \right)^r \\
&= \frac{1}{2^r} \sum^{r}_{m=0}\binom{r}{m}(-1)^m \sqrt{1 - 4z}^m \\
&= \frac{1}{2^r} \sum^{r}_{m=0}\binom{r}{m}(-1)^m (1 - 4z)^{m/2} \\
&= \frac{1}{2^r} \sum^{r}_{m=0}\binom{r}{m}(-1)^m \sum_{k \geq 0}\binom{m/2}{k}(-1)^k 4^k z^k \\
&= \frac{1}{2^r}\sum^{r}_{m=0} \sum_{k \geq 0}\binom{r}{m} \binom{m/2}{k} (-1)^{m+k} 4^k z^k.
\end{align}
Therefore I should have
\begin{align*}
[z^n] (f(z))^r &= [z^n] \sum^{r}_{m=0} \sum_{k \geq 0}\frac{1}{2^r} \binom{r}{m} \binom{m/2}{k} (-1)^{m+k} 4^k z^k \\
&= \sum^{r}_{m=0}[z^n] \sum_{k \geq 0}{\frac{1}{2^r} \binom{r}{m} \binom{m/2}{k} (-1)^{m+k} 4^k z^k} \\
&= \sum^{r}_{m=0}\frac{1}{2^r}\binom{r}{m} \binom{m/2}{n} (-1)^{m + n} 4^n \\
&= \sum^{r}_{m=0}\frac{1}{2^r}\binom{r}{m} \binom{m/2}{n} (-1)^m (-4)^n \\
&= \frac{1}{2^r} \sum^{r}_{m=0} \binom{r}{m} \binom{m/2}{n} (-1)^m (-4)^n \\
&= \frac{1}{2^r} \sum^{r}_{m=2 n} \binom{r}{m} \binom{m/2}{n} (-1)^m (-4)^n
\end{align*}
For $r \geq 2n$ otherwise the coefficient vanishes.
I would be thankful if anyone can give me a hint.
| We seek
$$[z^k] \left(\frac{1-\sqrt{1-4z}}{2}\right)^r.$$
where $k\ge r$ and we get zero otherwise since $\frac{1-\sqrt{1-4z}}{2}
= z + \cdots.$
Using the residue operator this is
$$\; \underset{z}{\mathrm{res}} \;
\frac{1}{z^{k+1}} \left(\frac{1-\sqrt{1-4z}}{2}\right)^r.$$
Now put $\frac{1-\sqrt{1-4z}}{2} = w$ so that $z = w(1-w)$ and $dz\; =
(1-2w) \; dw$ to get
$$\; \underset{w}{\mathrm{res}} \;
\frac{1}{w^{k+1} (1-w)^{k+1}} w^r (1-2w)
= \; \underset{w}{\mathrm{res}} \;
\frac{1}{w^{k-r+1} (1-w)^{k+1}} (1-2w)
\\ = {2k-r\choose k} - 2 {2k-r-1\choose k}
= {2k-r\choose k} - 2 \frac{k-r}{2k-r} {2k-r\choose k}
\\ = \frac{r}{2k-r} {2k-r\choose k}.$$
Following the other post we may write
$$\sum_{k\ge r} \frac{r}{2k-r} {2k-r\choose k} z^k
= z^r \sum_{k\ge 0} \frac{r}{2k+r} {2k+r\choose k} z^k.$$
The residue operator returns zero when $k\lt r.$
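The closed form is easy to confirm against a direct series computation (my own sketch; it uses the standard fact that the coefficients of $f$ are the Catalan numbers, $a_n = C_{n-1}$ for $n\ge 1$):

```python
from fractions import Fraction
from math import comb

def f_coeffs(K):
    # Coefficients of f(z) = (1 - sqrt(1-4z))/2 up to z^K:
    # a_0 = 0 and a_n = Catalan(n-1) = C(2n-2, n-1)/n for n >= 1.
    return [0] + [comb(2 * n - 2, n - 1) // n for n in range(1, K + 1)]

def power_coeffs(a, r, K):
    # Coefficients of f(z)^r up to z^K via repeated truncated convolution.
    out = [1] + [0] * K
    for _ in range(r):
        out = [sum(out[i] * a[k - i] for i in range(k + 1))
               for k in range(K + 1)]
    return out

def formula(r, k):
    # Derived closed form: [z^k] f(z)^r = (r/(2k-r)) C(2k-r, k) for k >= r.
    return Fraction(r, 2 * k - r) * comb(2 * k - r, k) if k >= r else 0
```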
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How would I go about solving this 3 Chests problem?
I need to either solve or prove that the following riddle is unsolvable.
*Chest A contains gold if Chest B contains gold or if Chest C contains gold.
*Chest B contains gold if Chest C contains gold and Chest A contains silver.
*Chest C contains gold if Chest A contains silver.
I've created truth tables based on the 3 statements.
How do I go about either solving it or proving that it can't be solved?
| I'm assuming each chest contains either silver or gold.
If C contains gold, then A contains silver (Statement $3$), so B contains gold (Statement $2$). This contradicts Statement $1$.
If C does not contain gold, then A does not contain silver (Statement $3$), so B does not contain gold (Statement $2$). Since A contains either silver or gold, A must contain gold, which again contradicts Statement $1$.
The combination therefore has no solution.
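The case analysis can be confirmed by brute force over all $2^3$ assignments (my own sketch; like the deductions above, it reads each "X contains gold if Y" as a biconditional "exactly when"):

```python
from itertools import product

GOLD, SILVER = True, False

solutions = []
for A, B, C in product([GOLD, SILVER], repeat=3):
    s1 = A == (B or C)         # A gold exactly when B or C contains gold
    s2 = B == (C and not A)    # B gold exactly when C gold and A silver
    s3 = C == (not A)          # C gold exactly when A silver
    if s1 and s2 and s3:
        solutions.append((A, B, C))
```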
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How to convert this symmetric matrix to normal form? I have to find the signature of the symmetric matrix $\begin{pmatrix}1 & 0 & 0 &0\\0 &0 &1 &0\\0&1&0 &0\\0& 0 &0 & 1\end{pmatrix}$. Now, to find the signature, I want to reduce this matrix to normal form by congruence operations. But I am stuck: if we do $R_{23}$ then we have to do $C_{23}$, and then our purpose is not satisfied, as we get the same matrix back. We should be able to transform this matrix to $\begin{pmatrix} I_k & &\\ &-I_{r-k} &\\ & & O\end{pmatrix}$.
| The trick in situations like this is to first use off-diagonal non-zero entries to produce a non-zero on the diagonal. In your case, adding the third row to the second (and then likewise the third column to the second to preserve symmetry) gives
$$
\begin{pmatrix}
1&0&0&0 \\
0&2&1&0 \\
0&1&0&0 \\
0&0&0&1
\end{pmatrix}
$$
Now you can continue as usual, i.e. by subtracting half of the second row from the third (and likewise for columns):
$$
\begin{pmatrix}
1&0&0&0 \\
0&2&0&0 \\
0&0&-1/2&0 \\
0&0&0&1
\end{pmatrix}
$$
Normalizing (and reordering) the entries gives the signature: $(3,1)$.
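The two congruence steps can be carried out exactly (my own sketch, encoding each step as an elementary matrix $E$ and computing $EME^{T}$):

```python
from fractions import Fraction

def mat(rows):
    return [[Fraction(v) for v in row] for row in rows]

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = mat([[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]])

E1 = mat([[1, 0, 0, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 1]])  # row2 += row3
M1 = mul(mul(E1, A), transpose(E1))

E2 = mat([[1, 0, 0, 0], [0, 1, 0, 0],
          [0, Fraction(-1, 2), 1, 0], [0, 0, 0, 1]])                # row3 -= row2/2
M2 = mul(mul(E2, M1), transpose(E2))

diagonal = [M2[i][i] for i in range(4)]
```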
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Connecting homomorphism of long exact sequence sends fundamental class to fundamental class of a boundary component I have the following homework problem.
Let $f\colon (M,\partial M)\to (M',\partial M')$ be a map between two
compact connected oriented $n$-manifolds such that $f\big|\partial M\to \partial
M'$ is a homeomorphism. Then $\deg(f)=\pm 1$.
My idea is to use the long exact sequence of the pair $(M,\partial M)$ and $(M',\partial M')$. Since, $f\big|\partial M\to \partial M'$ is a homeomorphism $H_{n-1}(f)\colon H_{n-1}(\partial M)\to H_{n-1}(\partial M')$ is an isomorphism.
Now, if $\partial M$ is connected then $\partial M'$ is also connected and the connecting homomorphisms $\partial_n\colon H_n(M,\partial M)\to H_{n-1}(\partial M)$, $\partial_n'\colon H_n(M',\partial M')\to H_{n-1}(\partial M')$ send the fundamental classes $[M],[M']$ to the fundamental classes $[\partial M],[\partial M']$, respectively. So, $\partial_n, \partial_n'$ are isomorphisms. Finally, using naturality of connecting homomorphism we can say $H_n(f)\colon H_n(M,\partial M)\to H_n(M',\partial M')$ is an isomorphism. So, we are done in case $\partial M$ is connected.
But, I don't know how to deal with the case when $\partial M$ is not
connected.
| Perhaps the point is that if we decompose the boundary into its components
$$\partial M = B_1 \cup \cdots \cup B_K
$$
then the image of the homomorphism
$$\partial_n : H_n(M,\partial M) \to H_{n-1}(\partial M) \approx H_{n-1}(B_1) \oplus \cdots \oplus H_{n-1}(B_K)
$$
is an infinite cyclic group, generated by the sum over $k$ of the fundamental class of each component $B_k$ with respect to the induced boundary orientation on $B_k$.
So in the context of your problem, $\partial_n$ and $\partial'_n$ will still be isomorphisms onto their images.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Generating function on Lehman's Mathematics for Computer Science I am reading Lehman's Mathematics for Computer Science, chapter 16, Generating Functions. [The relevant excerpt was posted as an image; it contains the factorization discussed below.]
I couldn't see how $1-x-x^2 = (x-r_1)(x-r_2)$. Shouldn't it be $1-x-x^2 = -1(x-r_1)(x-r_2)$? Since $r_1 r_2 = r_1+r_2 = -1$, $1-x-x^2 = -1(x-r_1)(x-r_2) = (1-x/r_1)(1-x/r_2)$?
| Yes, you're right. This is a typo. It should be $-(x-r_1)(x-r_2)$.
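A quick numerical confirmation of the sign (my own sketch): with $r_1, r_2$ the roots of $x^2+x-1=0$, the product $-(x-r_1)(x-r_2)$ matches $1-x-x^2$, while $(x-r_1)(x-r_2)$ does not.

```python
import math

# Roots of 1 - x - x^2 = 0, i.e. of x^2 + x - 1 = 0.
r1 = (-1.0 + math.sqrt(5.0)) / 2.0
r2 = (-1.0 - math.sqrt(5.0)) / 2.0

def p(x):
    return -(x - r1) * (x - r2)   # the corrected factorization

def q(x):
    return 1.0 - x - x * x        # the original polynomial
```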
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a sequence $\{e_n\}$ of continuous functions such that $0 \leq e_n (x) \leq 1$ for all $x \in X$ and $\text {supp}\ (e_n) \subseteq K_{n+1}.$
Let $X$ be a locally compact, $\sigma$-compact Hausdorff space. Then does there exist a sequence $\{e_n\}_{n \geq 1}$ of continuous functions on $X$ such that the following conditions hold $:$
$(1)$ $X = \bigcup\limits_{n \geq 1} K_n$ where each $K_n$ is compact and $K_n \subseteq K_{n+1}$ for all $n \geq 1.$
$(2)$ For any compact set $K \subseteq X$ there exists $m \in \mathbb N$ such that $K \subseteq K_m$ and hence $K \subseteq K_n$ for all $n \geq m.$
$(3)$ $0 \leq e_n(x) \leq 1$ for all $x \in X.$
$(4)$ $e_n (x) = 1$ for all $x \in K_n.$
$(5)$ $\text {supp}\ (e_n) \subseteq K_{n+1}$ for all $n \geq 1.$
$(1)$ follows from $\sigma$-compactness of $X$ and from the fact that union of finitely many compact sets is again compact. But I have no idea for the rest three. Could anyone give me some suggestion in this regard?
Thanks for your time.
| All the background results needed are stated and proved in Section 4.5 of Folland (1999, pp. 131–136). The references below correspond to the enumeration in that book.
Proposition 4.39 There is a sequence $(U_n)_{n\in\mathbb N}$ of open sets such that
*$\overline{U_n}$ (the closure of $U_n$) is compact for each $n\in\mathbb N$;
*$\overline{U_n}\subseteq U_{n+1}$ for each $n\in\mathbb N$; and
*$X=\bigcup_{n\in\mathbb N} U_n$.
Define $K_n\equiv\overline{U_n}$ for each $n\in\mathbb N$. Clearly,
(1) $X=\bigcup_{n\in\mathbb N} K_n$ and $K_n\subseteq K_{n+1}$ for each $n\in\mathbb N$.
Let $K\subseteq X$ be a compact set. Then, $\{U_n\}_{n\in\mathbb N}$ is a nested open cover of $K$, so there exists some $n\in\mathbb N$ such that $K\subseteq U_n\subseteq \overline{U_n}=K_n$. This gives you (2).
Lemma 4.32 (Urysohn, locally compact version) If $K\subseteq U\subseteq X$, where $K$ is compact and $U$ is open, then there exists a continuous $e:X\to[0,1]$ such that
*$e(x)=1$ for each $x\in K$; and
*$e$ vanishes outside a compact subset of $U$.
For each $n\in\mathbb N$, $K_n\subseteq U_{n+1}\subseteq X$, where $K_n$ is compact and $U_{n+1}$ is open. You can find
(3) a continuous function $e_n:X\to[0,1]$
such that
(4) $e_n(x)=1$ for each $x\in K_n$,
and a compact set $C_n\subseteq U_{n+1}$ such that $e_n(x)=0$ whenever $x\in X\setminus C_n$. This means that
(5) $\operatorname{supp}e_n\subseteq C_n\subseteq U_{n+1}\subseteq\overline{U_{n+1}}=K_{n+1}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4310882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Concrete realization of quotient $C^*$-algebra Let $A$ be a $C^*$-subalgebra of the bounded operators $B(H)$ on some Hilbert space $H$. Let $J$ be a proper closed $2$-sided ideal of $A$. Then one knows that $A/J$ is another $C^*$-algebra.
By the Gelfand-Naimark theorem, one also knows that $A/J$ is $*$-isomorphic to a $C^*$-subalgebra of $B(H')$ for some Hilbert space $H'$.
Question: Is there a "natural" choice of $H'$?
This seems like a natural and basic question, but having looked around a bit, it seems that an answer may not be straightforward. In particular, in the case where $A=B(H)$ and $J=K(H)$ for $H$ separable, the Gelfand-Naimark-Segal construction gives a non-separable example of $H'$. But I would be interested if there are any references where this is discussed further.
| I guess it depends on what you consider "natural". If $A$ is separable, then you can take $H$ separable, which allows you to take $H'=H$; but I don't think you'll have any relation between those concrete realizations of $A$ and $A/J$. It is probably worth remarking that the concrete presentation of the Hilbert spaces $H$ and $H'$ can vary a lot, and while some representations are better than others, there is no obvious "best" one.
For a kind of generic example, let $A=C^*(\mathbb F_\infty)$, the universal C$^*$-algebra of the free group on infinitely many generators. This C$^*$-algebra is universal, in the sense that any separable C$^*$-algebra is a quotient of it. Being separable, $A$ has a faithful state, $\phi$, so we can do GNS and represent $A\subset B(L^2(A,\phi))$. Because $A$ has countably many generators with no relations, any separable C$^*$-algebra $A_0$ is a quotient of $A$. So you can choose any separable C$^*$-algebra $A_0$, represent it on a Hilbert space $H'$, in one of many many ways, and conclude that one cannot expect any natural relation between $H'$ and $H=L^2(A,\phi)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4311037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Find the maximum of $\sin(A)+\sin(B)+\sin(C)$ for $\triangle ABC$. (Without Jensen Inequality)
Proof of Jensen Inequality:
\begin{align}
&\text{let } f(x)=\sin x. \\
\ \\
\Rightarrow & f(A)+f(B)+f(C) \\
& =3 \bigg( \frac 1 3 f(A) + \frac 1 3 f(B)+ \frac 1 3 f(C) \bigg) \\
& \leq 3 \Bigg( f \bigg( \frac 1 3 A + \frac 1 3 B + \frac 1 3 C \bigg) \Bigg) \\
& = 3\Bigg(f\bigg( \frac {A+B+C} 3 \bigg) \Bigg) \\
& = 3\big(f(60)\big) & (\because A+B+C=180) \\
&=3\sin 60 = 3 \cdot \frac {\sqrt{3}} 2 = \frac {3\sqrt{3}}{2}.
\end{align}
I just wondered if there is another precalculus solution to this. Is there another solution to this?
| Your solution is good; if you want another solution, use $C=\pi-A-B$ and then you look for the maximum value of
$$F=\sin (A)+\sin (B)+\sin (A+B)$$ Compute the partial derivatives
$$\frac{\partial F}{\partial A}=\cos (A)+\cos (A+B)=0\qquad \qquad \frac{\partial F}{\partial B}=\cos (B)+\cos (A+B)=0$$ Then, subtracting, $\cos(A)=\cos(B)$ and $A=B$ and then
$$\frac{\partial F}{\partial A}=\cos (A)+\cos (2A)=0$$
Use the double angle formula $$2\cos^2 (A)+\cos (A)-1=0$$ Solve the quadratic in $\cos(A)$ which gives $\cos(A)=\frac 12$ and then $A=B=C=\frac \pi 3$
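As a numerical sanity check (not needed for either argument), a brute-force grid search over admissible pairs $(A,B)$ with $C=\pi-A-B$ locates the maximum at $A=B=C=\pi/3$ with value $3\sqrt3/2\approx 2.598$:

```python
import math

def f(A, B):
    """Objective sin(A) + sin(B) + sin(C) with C = pi - A - B."""
    return math.sin(A) + math.sin(B) + math.sin(math.pi - A - B)

# Grid search over the open region {A > 0, B > 0, A + B < pi}.
N = 600
best, best_AB = -1.0, None
for i in range(1, N):
    A = math.pi * i / N
    for j in range(1, N - i):
        B = math.pi * j / N
        v = f(A, B)
        if v > best:
            best, best_AB = v, (A, B)

print(best)      # close to 3*sqrt(3)/2 = 2.59807...
print(best_AB)   # close to (pi/3, pi/3)
```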
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4311183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why can't $|x-2|$ be less than zero? Just confused with the concepts in Absolute Value.
So we know that to solve absolute value equations such,
$$|x-2| = 5 \tag1$$
In this case we have, $x-2 = 5$ and $x-2 = -5$. then solve for $x$.
However,
$$|x-2| = -5 \tag2$$
Here is no solution.
Why can't the absolute value be less than zero?
Is it from a graph that we cannot get negative values from the $y$-axis? Are there any other explanations?
A proof is highly appreciated.
| What's called absolute value of a number is just the numbers's distance from zero. As such it's always non-negative. Furthermore $|a-b|$ is the distance from $a$ to $b$.
Your first equation, $|x-2|=5$ reads: "The distance from $x$ to $2$ equals $5$.", hence $x=-3$ or $x=7$. Now the second equation, $|x-2|=-5$, reads: "The distance from $x$ to $2$ equals $-5$.", which is impossible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4311377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
} |
How can I calculate the relative lengths of triangle sides if all angles are known? I've read about this on other occasions on here before but I think my problem isn't a duplicate.
I'm trying to find the lengths of sides of a triangle where I know all three angles.
Let's say $A = 60^\circ$, $B = 90^\circ$, $C = 30^\circ$ and $b = 1cm$.
How can I find the lengths of sides $a$ and $c$, without using the law of sines, for a triangle where $B = 90^\circ$ and $b = 1\,\text{cm}$?
| Consider using Bhāskara I's sine approximation formula:
$$
\sin x^\circ \approx \frac{4x(180-x)}{40500 - x(180-x)}
$$
where $x$ is measured in degrees. This gives a result that is accurate to within about $\pm 0.0015$ over the entire range of angles from $0$ to $180°$. The fractional error maxes out at a little less than 2% near $0°$ and $180°$.
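A quick script makes the accuracy claim concrete (the error figure in the comment is an empirical observation from this sweep):

```python
import math

def bhaskara_sin(x_deg):
    """Bhaskara I's rational approximation to sin(x degrees), for 0 <= x <= 180."""
    t = x_deg * (180 - x_deg)
    return 4 * t / (40500 - t)

# Compare against math.sin over the whole range, in steps of 0.1 degrees.
max_err = max(abs(bhaskara_sin(x / 10) - math.sin(math.radians(x / 10)))
              for x in range(0, 1801))
print(max_err)   # roughly 0.0016, consistent with the bound quoted above
```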
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4311514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Probability of finding a non-defective transistor when selecting 3 transistors out of 10 transistors . Question:
A box contains 10 transistors, of which 3 are defective. A transistor
is selected from the box and tested until a non-defective one is
chosen. If X is the number of transistors to be chosen. Find P(X = 3).
My attempt:
Since we keep testing until we find a non-defective one, then we need to select at least 1 non-defective transistor in our 3 tests.
then $P(X = 3) = $ $${7 \choose 1}{3 \choose 2} + {7 \choose 2}{3 \choose 1} \over {10 \choose 3}$$ = 7/10
However, final answer = 7/120, which contradicts my answer, and I'm quite confused.
Thanks in advance.
| You stop when you pick a non-defective transistor. So for $X = 3$, the first two must be defective ones.
So the probability is,
$P(X = 3) = \displaystyle \frac{3}{10} \cdot \frac{2}{9} \cdot \frac{7}{8} = \frac{7}{120}$
Or the way you calculated, it should be
$ \displaystyle \frac {2! \cdot {3 \choose 2}{7 \choose 1} }{ 3! \cdot {10 \choose 3}} = \frac{7}{120}$
You choose two defective ones that can be arranged in the first and second place. The third place is fixed for the non-defective transistor. The denominator is the number of arrangements of any three chosen transistors.
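Both computations can be confirmed by enumerating the $10\cdot 9\cdot 8 = 720$ equally likely ordered draws of the first three tests (a small exhaustive check, with the three defective transistors labeled $0,1,2$):

```python
from itertools import permutations

defective = {0, 1, 2}          # transistors 0-2 defective, 3-9 good

# All 10*9*8 = 720 equally likely ordered draws of the first three tests.
draws = list(permutations(range(10), 3))
favorable = [d for d in draws
             if d[0] in defective and d[1] in defective and d[2] not in defective]

print(len(favorable), "/", len(draws))   # 42 / 720, i.e. 7/120
```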
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4311691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question about mapping cones in homology theory Please see the image attached for the question
The "homology theory" in question simply refers to an abstract theory satisfying the Eilenberg-Steenrod axioms
| As you have constructed it, mapping cone $C_f$ fits into $X\longrightarrow Y \longrightarrow C_f \longrightarrow SX\longrightarrow SY$ where $S$ is the suspension and the map $C_f\to SX$ collapses the copy of $Y$ in $C_f$. What is important is the segment
$$Y\longrightarrow C_f \longrightarrow SX.$$
One of the axioms of a generalised homology theory is that the connecting morphism $H_n(SX) = H_{n-1}(X) \longrightarrow H_{n-1}(Y)$ identifies with the map induced by $f$ in the LES. This allows you to conclude. If you are working with usual singular homology, maybe it is something you want to prove.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4312038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to apply Chain Rule with differentials in Matrix Derivatives? @Steph had kindly answered my other question, but I can't work out the math.
He said that "The correct way to apply chain rule with matrices is to use differentials", and provided the answer to $\partial E \over \partial W_4$.
OK, let's suppose that $\partial E \over \partial A_5$ is known to be $(A_5-R)$, so the answer checked out, no problem.
Now if I want to use the same approach to calculate $\partial E \over \partial W_3$, it should be
$dE={\partial E \over \partial A_5}:dA_5$
$dE=W_4^T{\partial E \over \partial A_5}:dA_4$
$dE=A_3^TW_4^T{\partial E \over \partial A_5}:dW_3$
${\partial E \over \partial W_3}=A_3^TW_4^T(A_5-R)$
The "order" is wrong!
If I want to make it right, then the $A$ has to be in the very front, and the $W$s have to be inserted in the very end for each operation.
Why is that!?
Why does the same operation $(dA_5=dA_4W_4)$ produce answers in different positions?
The only "possible", if not "far-fetched", relationship I could find is: because $A_4$ is "in front", the answer $(A_4^T)$ will always be in the front, and because $W_4$ is "in the end", the answer $(W_4^T)$ will always be in the very end.
Is it the right reason, or I'm just thinking too much?
Thank you very much for your help!
| $
\def\SSS{\sum_{i=1}^m\sum_{j=1}^n\sum_{k=1}^p}
\def\A{A_{ij}}
\def\B{B_{ik}}
\def\BT{B_{ki}^T}
\def\C{C_{kj}}
\def\CT{C_{jk}^T}
\def\LR#1{\left(#1\right)}
\def\BR#1{\Big(#1\Big)}
$To extend my comment above, by expanding the various products
$$\eqalign{
A:\LR{BC} &= \SSS \A\BR{\B\C} \\
\LR{AC^T}:B &= \SSS \BR{\A\CT}\B \\
\LR{B^TA}:C &= \SSS \BR{\BT\A}\C \\
}$$
it is obvious that the sums on the RHS are all identical, therefore the Frobenius (aka double-dot) products appearing on the LHS are likewise identical.
This equivalence could also be arrived at by considering the properties of the trace function when its matrix argument is transposed and/or cyclically permuted.
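The identity is easy to confirm numerically on small fixed matrices (a pure-Python sketch; the particular shapes $A\in\mathbb R^{2\times3}$, $B\in\mathbb R^{2\times2}$, $C\in\mathbb R^{2\times3}$ are my choice, so that all three products are defined):

```python
# Fixed small integer matrices: A is 2x3, B is 2x2, C is 2x3,
# so A : (BC), (A C^T) : B and (B^T A) : C are all defined.
A = [[1, 2, 3], [4, 5, 6]]
B = [[7, 8], [9, 1]]
C = [[2, 0, 1], [3, 5, 4]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def frob(X, Y):
    """Frobenius (double-dot) product X : Y = sum_ij X_ij Y_ij."""
    return sum(X[i][j] * Y[i][j] for i in range(len(X)) for j in range(len(X[0])))

v1 = frob(A, matmul(B, C))             # A : (BC)
v2 = frob(matmul(A, transpose(C)), B)  # (A C^T) : B
v3 = frob(matmul(transpose(B), A), C)  # (B^T A) : C
print(v1, v2, v3)                      # all three agree exactly
```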
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4312262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\|T\| \leq\|T\|_{0}^{1 / 2}\left\|T^{*}\right\|_{0}^{1 / 2}$ for compact operators on Hilbert spaces.
Let $H$ be a Hilbert space over $\mathbb{K}$ and $T$ be a compact operator on $H$. Let $\|\cdot\|_{0}$ be a norm on $H$ and
$$
\|T\|_{0}=\sup \left\{\|T(x)\|_{0}: x \in H,\|x\|_{0} \leq 1\right\}
$$
Then $\|T\| \leq\|T\|_{0}^{1 / 2}\left\|T^{*}\right\|_{0}^{1 / 2}$. In particular, if $\|\cdot\|_{0}$ is a norm on $\mathbb{K}^{n}$ and $M$ is an $n \times n$ matrix, then $\|M\|_{2} \leq\|M\|_{0}^{1 / 2}\left\|\overline{M}^{t}\right\|_{0}^{1 / 2}$.
My attempt : Notice that $\|T\| = \sup \left\{\|T(x)\|: x \in H, \langle x,x\rangle \leq 1\right\}$, where the norm $\|\cdot \|$ is induced from the inner product. $\| \cdot\|_{0}$ may not be induced from an inner product. The second part of the problem, can be solved by simply substituting $T = M$. If we for a moment, assume that it is true for self-adjoint compact operators, i.e $\|T\| \leq\|T\|_{0}$ holds true for self-adjoint compact operators, then $\|T^* T\| = \| T\|^2 \leq\left\|T^{*}T\right\|_{0} \le \|T\|_{0}\left\|T^{*}\right\|_{0}$, from which we obtain the result. How do I show $\|T\| \leq\|T\|_{0}$ holds true for self-adjoint compact operators ?
| Hint: By definition of adjoint, we have $\|Tx\|^2=\langle Tx,Tx \rangle =\langle x,T^*Tx\rangle=|\langle x,T^*Tx\rangle|.$ Consider Cauchy-Schwarz inequality.
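To see the matrix corollary in action: taking $\|\cdot\|_0$ to be the operator norm induced by the $\ell^1$ vector norm, the stated bound becomes the classical inequality $\|M\|_2\le\|M\|_1^{1/2}\|M\|_\infty^{1/2}$ (for real $M$, $\|\overline M^t\|_1=\|M\|_\infty$). A sketch checking it on a fixed $2\times2$ matrix, computing the spectral norm from the eigenvalues of $M^tM$:

```python
import math

M = [[1.0, 2.0], [3.0, 4.0]]

# Induced 1-norm (max column sum) and infinity-norm (max row sum).
n1 = max(abs(M[0][j]) + abs(M[1][j]) for j in range(2))
ninf = max(abs(M[i][0]) + abs(M[i][1]) for i in range(2))

# Spectral norm: sqrt of the largest eigenvalue of the symmetric 2x2 matrix M^t M.
G = [[M[0][i] * M[0][j] + M[1][i] * M[1][j] for j in range(2)] for i in range(2)]
tr, det = G[0][0] + G[1][1], G[0][0] * G[1][1] - G[0][1] * G[1][0]
lam_max = (tr + math.sqrt(tr * tr - 4 * det)) / 2
n2 = math.sqrt(lam_max)

print(n2, math.sqrt(n1 * ninf))   # about 5.465 <= about 6.481
```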
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4312488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tables of zeros of $\zeta(s)$ Is there a table somewhere of the $n$th zero of $\zeta(s)$ for $n = 10^k$ for $k = 0,1,2,\ldots$? I need the values for $k$ up to as large as is known (e.g., $k = 22$). Same question for $n = 2^k$, or for other powers of a fixed integer. This is not found at http://www.dtc.umn.edu/~odlyzko/zeta_tables/, nor at https://www.lmfdb.org. Mathematica goes only to $10^8$. I need out to $10^{22}$.
EDIT: The context for this is that I'm doing some computations that require the zeros to some (not super-high) accuracy, and the values I'm asking for would be most useful. People have computed $\pi(n)$ for powers $n$ of a fixed integer $a>1$, so it shouldn't be too much to ask that we do the same for the $n$th zero of $\zeta(s)$.
| Existing calculations of the zeros of $\zeta(s)$ are summarized well on the Wikipedia page for the Riemann hypothesis (and citations to the literature are included). The first $1.2\times10^{13}$ or so zeros have been tabulated, and there have also been calculations of a relatively small number of zeros of very large height (around $10^{24}$, for instance, which is around the $8.3\times10^{24}$th zero). The Odlyzko–Schönhage algorithm is well suited for calculating specific zeros, so that might be where you can start to develop your own calculations if desired.
If you only want approximations to the zeros, the number of zeros of $\zeta(s)$ has an extremely good error term: for example, you could use this formula to find the $10^{22}$nd zero (which has size about $1.4\times10^{21}$) with an error of less than $10$; that's 20 significant figures, using an almost trivial calculation.
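To illustrate the last paragraph: keeping only the main term of the Riemann–von Mangoldt counting formula, $N(T)\approx \frac{T}{2\pi}\log\frac{T}{2\pi}-\frac{T}{2\pi}+\frac78$ (the $O(\log T)$ correction is dropped in this sketch), one can invert $N(T)=10^{22}$ by bisection to estimate the height of the $10^{22}$nd zero:

```python
import math

def N(T):
    """Main term of the Riemann-von Mangoldt zero-counting formula."""
    x = T / (2 * math.pi)
    return x * math.log(x) - x + 7.0 / 8.0

target = 1e22
lo, hi = 1e20, 1e23
for _ in range(200):           # bisect N(T) = target
    mid = (lo + hi) / 2
    if N(mid) < target:
        lo = mid
    else:
        hi = mid

print(lo)    # roughly 1.4e21: the 10^22nd zero has height about 1.4 * 10^21
```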
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4312618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Ellipse in the triangle Find the equation of an ellipse if its center is S(2,1) and the edges of a triangle PQR are tangent lines to this ellipse. P(0,0), Q(5,0), R(0,4).
My attempt: Let take a point on the line PQ. For example (m,0). Then we have an equation of a tangent line for this point: $(a_{11}m+a_1)x+(a_{12}m+a_2)y+(a_1m+a)=0$, where $a_{11}$ etc are coefficients of our ellipse: $a_{11}x^2+2a_{12}xy+a_{22}y^2+2a_1x+2a_2y+a=0$. Now if PQ: y=0, then $(a_{11}m+a_1)=0$, $a_{12}m+a_2=1$, $a_1m+a=0$.I've tried this method for other 2 lines PR and RQ and I got 11 equations (including equations of a center)! Is there a better solution to this problem?
Translate the center to the origin. The triangle's vertices translate to $(-2,-1)$, $(3,-1)$, $(-2,3)$.
I considered this on the projective plane, I don't know if that makes it any easier, but that is what I did... if you don't know projective geometry, just set $z = 1$ in everything that follows.
$\begin {bmatrix} x&y&z\end{bmatrix}
\begin {bmatrix} A&B&0\\B&C&0\\0&0&D\end{bmatrix}
\begin{bmatrix} x\\y\\z\end{bmatrix} = 0$
Describes a cone, which when it intersects the plane $z = 1$ forms the ellipse
i.e. $Ax^2 + 2Bxy + Cy^2 + D = 0$
The planes $x + 2z = 0, y+z = 0$ and $4x + 5y - 7z = 0$ are tangent to the cone.
Consider the triplet $(B,-A, 1)$
$\begin {bmatrix} A&B&0\\B&C&0\\0&0&D\end{bmatrix}
\begin{bmatrix} B\\-A\\1\end{bmatrix} = \begin{bmatrix} 0 \\ B^2 - AC \\ D \end{bmatrix}$
If $A = 1$ this point lies in the plane $y + z = 0.$
And if $D = B^2 - AC$
$\begin{bmatrix} B&-1&1\end{bmatrix}\begin{bmatrix} 0 \\ B^2 - AC \\ B^2 - AC \end{bmatrix}= 0$
The point is on our cone.
Similarly $(-\frac {C}{2},\frac {B}{2}, 1)$
$\begin {bmatrix} A&B&0\\B&C&0\\0&0&B^2 - AC\end{bmatrix}
\begin{bmatrix} -\frac {C}{2}\\ \frac {B}{2}\\1\end{bmatrix} =
\begin{bmatrix} \frac 12 (B^2 - AC) \\ 0\\ B^2 - AC\end{bmatrix}$
If $C = 4$ the point is on the plane $x + 2z = 0$ and on the cone. Our matrix thus far.
$\begin {bmatrix} 1&B&0\\B&4&0\\0&0&B^2-4\end{bmatrix}$
We just need an equation for the 3rd point of tangency.
$(\frac {4C - 5B}{7}, \frac {5A-4B}{7}, 1)$
If it is in the plane $4x + 5y - 7z = 0$
$4\frac {4C - 5B}{7} + 5\frac {5A - 4B}{7} - 7 = 0\\
4\frac {16 - 5B}{7} + 5\frac {5 - 4B}{7} - 7 = 0\\
64 - 40 B + 25 - 49 = 0\\
40-40B = 0\\
B = 1$
And finally we need to check to see if this point is on the cone.
$\begin {bmatrix} 1&1&0\\1&4&0\\0&0&-3\end{bmatrix}
\begin{bmatrix} \frac {11}{7}\\ \frac {1}{7}\\1\end{bmatrix} =
\begin{bmatrix} \frac {12}7 \\ \frac {15}{7} \\-3\end{bmatrix}$
$\begin{bmatrix} \frac {11}{7} & \frac {1}{7} & 1\end{bmatrix}
\begin{bmatrix} \frac {12}7 \\ \frac {15}{7} \\-3\end{bmatrix} = \frac {132 + 15}{49} - 3 = 0$
$x^2 + 2xy + 4y^2 - 3 =0$
And finally translate back to the original coordinates
$(x-2)^2 + 2(x-2)(y-1) + 4(y-1)^2 - 3 = 0$
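A numeric check, independent of the projective computation, that the final conic $Q(x,y)=(x-2)^2+2(x-2)(y-1)+4(y-1)^2-3$ is centered at $(2,1)$ and tangent to the three lines $y=0$, $x=0$ and $4x+5y=20$: restrict $Q$ to each line and verify that the resulting quadratic in the parameter has zero discriminant.

```python
def Q(x, y):
    return (x - 2) ** 2 + 2 * (x - 2) * (y - 1) + 4 * (y - 1) ** 2 - 3

def disc_along(line):
    """Discriminant of t -> Q(line(t)), recovered from three sample values."""
    q0, q1, q2 = Q(*line(0)), Q(*line(1)), Q(*line(2))
    a = (q2 - 2 * q1 + q0) / 2            # quadratic coefficient
    b = q1 - q0 - a                       # linear coefficient
    c = q0                                # constant coefficient
    return b * b - 4 * a * c

lines = [lambda t: (t, 0.0),               # PQ: y = 0
         lambda t: (0.0, t),               # PR: x = 0
         lambda t: (t, (20 - 4 * t) / 5)]  # QR: 4x + 5y = 20

print([disc_along(L) for L in lines])      # all (numerically) zero: tangency

# Center check: the gradient of Q vanishes at (2, 1).
eps = 1e-6
gx = (Q(2 + eps, 1) - Q(2 - eps, 1)) / (2 * eps)
gy = (Q(2, 1 + eps) - Q(2, 1 - eps)) / (2 * eps)
print(gx, gy)                              # both (numerically) zero
```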
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4312797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Normal subgroup: a problem in Verification of equivalence I am familiar with various definitions of normal subgroup. The question I am asking is very trivial one; about a way of proving equivalence.
For a subgroup $N$ of $G$, the following are equivalent: (1) $xN=Nx$ for all $x\in G$. (2) $xNx^{-1}=N$ for all $x\in G$.
My proof: $(1)\Rightarrow (2)$ right multiply by $x^{-1}$.
$(2)\Rightarrow (1)$: Right multiply by $x$.
But, the sources (books/notes) I saw verify the above equivalence by
*
*showing LHS is subset of RHS in each statement;
*proving LHS is in RHS by taking element of LHS and showing that it is in RHS.
But, I wonder, isn't it correct, to prove the equivalence in one or two lines (right multiply by $x$ or by $x^{-1}$ accordingly)?
(In short, can't we avoid "element-belonging" verification for proving equivalence?)
| TL;DR The coset notation is really good because it plays nicely with the group operation. However, you need to prove that it plays nicely.
The strings of letters $xN$, $Nx$ and $xNx^{-1}$ are just notation. They represent specific sets, but there is no reason to think that the group operation plays nicely with these sets. Your argument assumes that the operation works nicely - for example, that when we multiply every element of $xN$ on the right by $x^{-1}$ we get precisely the set $xNx^{-1}$.
Let's write $xN\cdot x^{-1}$ for the set $\{yx^{-1}\mid y\in xN\}$, so your argument is saying that $xN\cdot x^{-1}=xNx^{-1}$. I think it is pretty clear that $xN\cdot x^{-1}\subseteq xNx^{-1}$, but to prove the other inclusion we're going to have to think a bit. (Maybe not much, but a bit.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4313011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence of the function series $\sum \frac{n!}{(nx)^n}$ for $x<0$ We want to determine the $x<0$ in $\mathbb{R}$ such that
$$\sum _{n=1}^{\infty} \dfrac{n!}{(nx)^n}$$
converges. The textbook says that the solution is $x<-1/e$. What I thought is that, since $x<0$, we have that $x^n>0$ if $n$ is even and $x^n<0$ if $n$ is odd, so we can rewrite the series as
$$\sum _{n=1}^{\infty} (-1)^n \dfrac{n!}{(nx)^n} \qquad \qquad \text{for } x>0$$
And because of the Leibniz test, the series converges if the sequence $(nx)^{-n}n!$ is decreasing and infinitesimal, but we have that
$$\lim _{n \rightarrow +\infty} \dfrac{n!}{(nx)^n}=\lim _{n \rightarrow +\infty} \dfrac{n!}{n^n} \cdot \dfrac{1}{x^n}=0 \Longleftrightarrow \lim _{n\rightarrow +\infty} \dfrac{1}{x^n} < +\infty \Longrightarrow x \ge1$$
Because $n!/n^n \rightarrow 0$ for $n \rightarrow +\infty$, but this seems to go nowhere. What can I do?
| Because $x \in \mathbf{R}$, a good idea is to study the absolute convergence of the series. Let:
$$a_n=\left|\frac{n!}{(x\cdot n)^n}\right|$$
Then we can apply the root test. In particular, we have to determine the following limit:
$$\lim_{n\to +\infty}\sqrt[n]{a_n}=\lim_{n\to +\infty}\sqrt[n]{\left|\frac{n!}{(x\cdot n)^n}\right|}=\lim_{n\to +\infty}\frac{\sqrt[n]{\frac{n!}{n^n}}}{|x|}=(*)$$
Theorem: Let $\{b_n\}_{n\geq n_0}$ be a real sequence such that $b_n>0$ eventually. If $\lim_{n\to +\infty}\sqrt[n]{b_n}$ and $\lim_{n\to +\infty}\frac{b_{n+1}}{b_n}$ exist, then the limits are equal. Here is a post where I used this tool.
Let $b_n=\frac{n!}{n^n}$; we have:
$$\lim_{n\to +\infty}\frac{b_{n+1}}{b_n}=\lim_{n \to +\infty}\frac{(n+1)!}{(n+1)^{n+1}}\cdot\frac{n^n}{n!}=\lim_{n\to +\infty}\frac{(n+1)!}{(n+1)\cdot n!}\cdot\left(\frac{n}{n+1}\right)^n=\lim_{n\to +\infty}\left(1+\frac{1}{n}\right)^{-n}=e^{-1}=\frac{1}{e}$$
So the first limit becomes:
$$(*)=\lim_{n\to +\infty}\frac{\sqrt[n]{\frac{n!}{n^n}}}{|x|}=\frac{1}{e\cdot|x|}$$
By the root test, the series converges when this limit is less than $1$ and diverges when it is greater than $1$:
$$\frac{1}{e\cdot|x|}<1\implies \frac{1}{e}<|x|\implies x\in \left(-\infty, -\frac{1}{e}\right)$$
(At the boundary $|x|=1/e$ the root test is inconclusive, but Stirling's formula gives $\frac{n!}{n^n}\,e^{n}\sim\sqrt{2\pi n}\to\infty$, so the general term does not tend to $0$ and the series diverges there as well.)
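The key limit $\sqrt[n]{n!/n^n}\to 1/e$ is also easy to observe numerically, using `math.lgamma` for $\log n!$ to avoid overflow (a quick float sketch):

```python
import math

def nth_root_term(n):
    """Computes (n!/n^n)^(1/n) via logarithms: exp((log n! - n log n)/n)."""
    return math.exp((math.lgamma(n + 1) - n * math.log(n)) / n)

for n in (10, 100, 10_000, 1_000_000):
    print(n, nth_root_term(n))   # approaches 1/e = 0.3678794... from above
```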
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4313266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the number of different colorings of the faces of the cube with 2 white, 1 black, 3 red faces? I tried using Polya theorem same in this https://nosarthur.github.io/math%20and%20physics/2017/08/10/polya-enumeration.html guide, using S4 group for cube faces. But i Have a $\frac{1}{24}12w^2*b*r^3$.
Please help to me how count coloring faces of the cube and other platonic solids.
| First consider the three red faces. Either one of them is opposite another, forming three-quarters of a loop, or all three are adjacent to the same vertex. In the former case, there are three possibilities for the black face; it either "completes the loop," it's to the right of the loop's open face, or it's to the left of the loop's open face. The two remaining faces must be white.
Edited to add: However, as noted in the comments, the latter two possibilities are actually the same coloring, rotated a half turn around the axis through the center of the loop's open face. Therefore, there are only two distinct colorings in this family.
If all three red faces are adjacent to the same vertex, then the two white faces will complete a loop with two of the red faces. Use the existing coloring to choose an orientation. The black face can be on either side of the resulting loop, resulting in two possibilities. Edited to add: But these are the same. Assume the two white faces are "near" and "top" and the black face is "right." Rotate the die a quarter-rotation toward yourself around the left-right axis (so that the two white faces are now "bottom" and "near") and then flip the die a half-rotation around the near-far axis (bringing the "bottom" face to "top" and the "right" face to the "left").
There are, therefore, five (edit: three) different colorings.
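A brute-force orbit count confirms the total of $3$. (The face indexing $0,\dots,5$ as $\pm z,\pm x,\pm y$ and the two quarter-turn generators below are my own bookkeeping, not from the linked guide.)

```python
from itertools import permutations

# Faces: 0 = +z, 1 = -z, 2 = +x, 3 = -x, 4 = +y, 5 = -y.
# Quarter turn about z (+x -> +y -> -x -> -y) and about x (+y -> +z -> -y -> -z):
gz = (0, 1, 4, 5, 3, 2)
gx = (5, 4, 2, 3, 0, 1)

def compose(g, h):
    return tuple(g[h[i]] for i in range(6))

# Close {id, gz, gx} under composition to get all 24 cube rotations.
group = {tuple(range(6)), gz, gx}
while True:
    new = {compose(g, h) for g in group for h in group} - group
    if not new:
        break
    group |= new

colorings = set(permutations("WWBRRR"))     # 6!/(2! 1! 3!) = 60 colorings
orbits, seen = 0, set()
for c in colorings:
    if c not in seen:
        orbits += 1
        seen |= {tuple(c[g[i]] for i in range(6)) for g in group}

print(len(group), len(colorings), orbits)   # 24 60 3
```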
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4313417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Two Gaussian processes with same variances and means but different covariances. Let $(X_t)$ and $(Y_t)$ be two Gaussian stochastic processes. I am trying to find expressions for these processes such that $\mathrm E[X_t]=\mathrm E[Y_t]$, $\mathrm {Var}(X_t)= \mathrm {Var}(Y_t)$ for all $t\geq 0$ but $\mathrm {Cov}(X_s,X_t) \neq \mathrm {Cov} (Y_s,Y_t)$ for some $s,t \in \mathbb R^+$.
It is possible because mean and variance alone do not determine a Gaussian process in law, we need the covariance. That is the whole motivation behind trying to construct the above example.
In the discrete time case, this is simply saying that we want two different Gaussian couplings with the same marginals. I know how to do that.
For the continuous time case above, I can use the discrete time case for the finite dimensional law of $(X_t)$ and $(Y_t)$, which solves the question "in law". What I am struggling with is finding a actual expressions (omega by omega) $X_t=..., Y_t=...$, which would yield laws that satisfy the contraints in the question. Something built using Brownian motion for example ?
| Let $W_t$ be standard Wiener process. Define $X_t = W_t$ and
$$ Y_t = \begin{cases} 0 & \text{when} & t = 0 \\
\displaystyle \frac {W_{t^2}} {\sqrt{t}} & \text{when} & t > 0 \end{cases} $$
Then both $X_t$ and $Y_t$ are Gaussian process, with common mean
$$ E[X_t] = E[Y_t] = 0 $$
and common variance
$$ Var[X_t] = t = \frac {t^2} {(\sqrt{t})^2}
= Var\left[\frac {W_{t^2}} {\sqrt{t}} \right] = Var[Y_t] $$
However the autocovariance is different:
$$ Cov[X_s, X_t] = \min\{s, t\} $$
$$ Cov[Y_s, Y_t] = \frac {\min\{s^2, t^2\}} {\sqrt{st}}
= \begin{cases} \displaystyle \frac {s^2} {\sqrt{st}} = s \sqrt{\frac {s} {t}} \leq s & \text{when} & s \leq t \\
\displaystyle \frac {t^2} {\sqrt{st}} = t \sqrt{\frac {t} {s}} < t & \text{when} & s > t
\end{cases}$$
So we have $$ Cov[Y_s, Y_t] \leq Cov[X_s, X_t] $$ with equality holds only when $s = t$.
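A quick deterministic check of the two covariance kernels on a grid, using $\min\{s^2,t^2\}=\min\{s,t\}^2$ for $s,t>0$:

```python
import math

def cov_X(s, t):
    return min(s, t)

def cov_Y(s, t):
    return min(s, t) ** 2 / math.sqrt(s * t)   # = min(s^2, t^2)/sqrt(st) for s, t > 0

# Same variances on the diagonal, strictly smaller covariance off it.
grid = [0.25 * k for k in range(1, 21)]
for s in grid:
    for t in grid:
        assert abs(cov_X(s, s) - cov_Y(s, s)) < 1e-12   # equal variances
        assert cov_Y(s, t) <= cov_X(s, t) + 1e-12        # dominated covariance
        if s != t:
            assert cov_Y(s, t) < cov_X(s, t)             # strict off the diagonal
print("checks passed")
```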
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4313670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the primorial $23\#$ come up so often in long prime arithmetic progressions? This section of the Wikipedia article on the Green-Tao theorem gives examples of the longest known arithmetic progressions of prime numbers. For every known arithmetic progression of $24$ or more consecutive primes, the common difference is a multiple of the primorial $23\# = 223,092,870$ (the product of all primes up to $23$).
Is it a coincidence that $23$ is the largest prime less than the length of the long sequences? With enough computing power, would we eventually expect to find arithmetic sequences of $29$ or more primes whose common differences are multiples of $29\#$? If not, is there something special about the number $23\#$, or does it simply reflect the limits of our computing power?
| Suppose you want to construct an arithmetic sequence of six primes and the difference is not a multiple of $5$. You start with, let us say, $7$ and proceed with a difference of $12$ to generate $7, 19, 31, 43, 55, 67.$ Uh-oh, you got a multiple of $5$.
That's because when the difference between successive terms is not a multiple of $5$, the sequence modulo $5$ will inevitably cycle through all residues modulo $5$ and thus you inevitably hit zero modulo $5.$
You could avoid this by using $5$ itself as the multiple of $5$, but then $5$ has to begin the sequence and you find the sixth term is a multiple of $5$ again, this time composite. With a difference of $12$ like before, you get $5, 17, 29, 41, 53, 65.$ You started with $5$ and then made five increments with a common difference of $12$, so could not avoid another multiple of $5.$
You get similar problems with differences that are not multiples of $2$ or $3.$ To get a sequence of primes as long as six terms, the difference must be divisible by all of $2, 3, 5$ and thus divisible by $5$# $=30.$ The sequence $7, 37, 67, 97, 127, 157$ makes it with a difference of exactly $30.$
Now try this logic with a sequence that's $24$ primes long and infer that the difference between successive terms has to have all prime factors up to and including $23.$
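The same forcing is easy to verify by brute force for length $6$: every $6$-term arithmetic progression of primes found by an exhaustive search over small starts and steps has common difference divisible by $5\#=30$ (a small search sketch):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Brute-force all 6-term arithmetic progressions of primes with small start/step.
found = [(a, d)
         for a in range(2, 200)
         for d in range(1, 200)
         if all(is_prime(a + k * d) for k in range(6))]

print(found[:3])   # (7, 30) comes first; every step found is a multiple of 30
```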
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4313829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Sum of Dirichlet's Character Over Divisors of a Natural Number It is said in Explanation for a theorem pertaining on Dirichlet character sums that it is well known that $A\left(n\right)=\sum_{d\mid n}\chi\left(d\right)$ is non-negative when $\chi$ is a real character modulo $k$, and $\geq 1$ when $n$ is a perfect square. Where can I find the proof of this fact?
| The proof of this result is available in chapter 6 of Apostol's Introduction to Analytic Number Theory. In the book, the result is labeled as Theorem 6.19.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4314257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the metric $d(x,y)=\|x-y\|^p$ for $0<p<1$ induce the usual topology? Let $\| \cdot \|$ be the Euclidean norm on $\mathbb{R}^n$, let $0<p<1$, and let $d: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ be the function defined by
$$d(x,y)=||x-y||^p, \quad x,y \in \mathbb{R}^n. $$
I have demonstrated that $d$ is a metric on $\mathbb{R}^n$. Does $d(x,y)$ induce the usual topology on $\mathbb{R}^n$?
| Yes, it does generate the same topology. The reason? The balls are the same! If $B_1(x, r)$ is the open ball centred at $x$, radius $r$, in this metric space, while $B_2(x, r)$ is the corresponding open ball with respect to the usual Euclidean metric, then
$$B_2(x, r) = B_1(x, r^p),$$
as
$$\|x - y\| < r \iff \|x - y\|^p < r^p \iff d(x, y) < r^p.$$
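Regarding the metric claim in the question: the triangle inequality for $d$ reduces to the subadditivity $(a+b)^p \le a^p + b^p$ for $a,b\ge 0$ and $0<p<1$, which is easy to spot-check numerically:

```python
# Subadditivity (a + b)^p <= a^p + b^p for 0 < p < 1, checked on a grid.
for p in (0.1, 0.5, 0.9):
    for i in range(1, 50):
        for j in range(1, 50):
            a, b = i / 10, j / 10
            assert (a + b) ** p <= a ** p + b ** p + 1e-12
print("subadditivity holds on the grid")
```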
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4314397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\bar{\mathbb{F}}_p$ is not a finite degree extension of any proper subfield. Consider $\bar{\mathbb{F}}_p$, the algebraic closure of $\mathbb{F}_p$. I want to see that: for every proper subfield $K \leq \bar{\mathbb{F}_p}$, $\bar{\mathbb{F}}_p/K$ is not a finite extension.
It is known that, and can be somewhat easily shown that $\bar{\mathbb{F}}_p = \cup_{n \geq 1}\mathbb{F}_{p^n}$
Now, if any of the proper subfields have the form $\mathbb{F}_{p^n}$, it is easy enough to see that $\bar{\mathbb{F}}_p \neq \mathbb{F}_{p^n}(\alpha_1, \cdots, \alpha_m)$ for some $\alpha_i$, by going high up enough, i.e, to some big enough $m$ such that $\alpha_i \not \in \mathbb{F}_{p^m} \subseteq \bar{\mathbb{F}_p}$
The problem is characterizing the proper subfields. Is every subfield of $\bar{\mathbb{F}_p}$ going to have this form? Can we have an infinite intermediate subfield?
| As Slade explained (+1) this follows either from the known structure of the automorphism group of $\overline{\Bbb{F}_p}$ or from a theorem of Artin & Schreier stating that any finite extension $\overline{K}/K$, with the bigger field algebraically closed, is of the form $\overline{K}=K(\sqrt{-1})$.
An elementary argument can also be given. Assume that $\overline{\Bbb{F}_p}/K$ is a finite extension. Then that extension is Galois. It is obviously normal, and it is also separable because every element of $\overline{\Bbb{F}_p}$ belongs to a finite field, and hence is a zero of a separable polynomial over the prime field (and hence also over $K$). By basic Galois theory this implies that $\overline{\Bbb{F}_p}$ has an automorphism of a finite order. But I shamelessly link to an old elementary answer of mine explaining that there are no such automorphisms.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4314969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Fractional part of the product of a factorial tending to infinity and of an irrational number Let $\alpha\in\mathbb{R}\setminus\mathbb{Q}$ be an irrational number and take the limit
$$\lim_{n\to\infty}\cos^{2n}(n!2\pi \alpha).$$
Intuitively this must be zero since $n!\alpha$ will never be an integer, and therefore the value of $|\cos|$ will always be less than $1$; taking increasingly high powers of values strictly less than $1$ should tend to $0$. That's my intuition.
But how can I be sure that among the infinitely many values $n!\alpha$ there is no subsequence $(n_i!\alpha)_i$ whose fractional parts are so close to $0$ that the proximity of the cosine to $1$ compensates for the effect of the power, so that even the $2n$-th powers fail to tend to $0$?
Ideally I would like to find an upper bound $\eta$ such that $|\cos(n!2\pi \alpha)|<\eta<1$ for $n$ big enough, so that $\lim_{n\to\infty}\cos^{2n}(n!2\pi \alpha)\leq \lim_{n\to\infty}\eta^{2n}$ which is necessarily $0$. But does such a bound exist?
What do we know of the fractional part of $n!\alpha$ when $\alpha$ is an arbitrary irrational and $n\to\infty$?
| You can get a counterexample by taking $\alpha$ to be Euler's number $e$. We have
$$
n!2\pi e=n!2\pi\sum_{k=0}^\infty\frac{1}{k!} = 2\pi\sum_{k=0}^n \frac{n!}{k!} + 2\pi \sum_{k=n+1}^\infty \frac{n!}{k!},
$$
which differs from an integer multiple of $2\pi$ by
$$
2\pi \sum_{k=n+1}^\infty \frac{n!}{k!} = 2\pi\sum_{k\geq 1}\frac{1}{(n+1)\cdots (n+k)}< 2\pi\sum_{k\geq 1}\frac{1}{(n+1)^k}= \frac{2\pi}{n}.
$$
Since $\cos(x) \geq 1-x^2/2$, it follows that
$$
\cos(n!2\pi e)\geq 1-\frac{4\pi^2}{n^2},
$$
and therefore
$$
\begin{align*}
1\geq \cos^{2n}(n!2\pi e)&\geq \left(1-\frac{4\pi^2}{n^2}\right)^{2n}\\
&=\left(\left(1-\frac{4\pi^2}{n^2}\right)^{n^2}\right)^{2/n}.\\
\end{align*}
$$
Finally, taking the limit as $n\to\infty$, we have
$$
\lim_{n\to\infty} \left(1-\frac{4\pi^2}{n^2}\right)^{n^2} = e^{-4\pi^2},
$$
and so
$$
1 \geq \lim_{n\to\infty}\cos^{2n}(n!2\pi e)\geq \lim_{n\to\infty} \left(e^{-4\pi^2}\right)^{2/n} = 1.
$$
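As a numerical sanity check (not part of the proof): since $n!\,2\pi e$ differs from an integer multiple of $2\pi$ by exactly the tail sum computed above, we can evaluate $\cos^{2n}(n!\,2\pi e)$ in floating point without ever forming $n!$:

```python
import math

def cos_power(n, tol=1e-18):
    # r = sum_{k>=1} 1/((n+1)*(n+2)*...*(n+k)) is the exact amount (divided
    # by 2*pi) by which n! * 2*pi*e exceeds a multiple of 2*pi.
    r, prod, k = 0.0, 1.0, 1
    while True:
        prod /= n + k
        if prod < tol:
            break
        r += prod
        k += 1
    return math.cos(2 * math.pi * r) ** (2 * n)

for n in (100, 1_000, 10_000, 100_000):
    print(n, cos_power(n))  # the values climb toward 1, as the bound predicts
```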
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4315152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
On the definition of local compactness INTRODUCTION
Definition 1
A topological space $(S, {\cal{T}})$ is compact if any of its open covers contains a finite subcover.
Definition 2
In a topological space $(S, {\cal{T}})$, a set $ A\subset S$ is compact if every open cover of $ A$ contains a finite subcover.
Equivalently, $ A\subset S $ is compact if it is, with the subspace topology, a compact topological space.
Definition 3
A topological space $(S, {\cal{T}})$ is locally compact if every point of $S$ has a compact neighbourhood, i.e. if for every $x\in S$ there exist an open $U\in{\cal{T}}$ and a compact $A\subseteq S$ such that $x\in U\subseteq A$.
QUESTION 1
Consider an arbitrary open set $U$ from the topology $\cal T$.
Would it be right to say that it is part of any cover that covers it?
QUESTION 2
If the answer to Q 1 happens to be affirmative, then the said $U$ must be compact, because it serves as its own finite subcover.
Is this correct?
QUESTION 3
If the answer to Q 2 is affirmative also, then we are arriving at a paradoxical conclusion: since each point resides in some $U\in\cal T$, and since each such $U$ is compact, then every point has a compact neighbourhood --- and the topological space is always locally compact.
Where did I mess up? (In Q 1, I guess?)
| The answer to your question $1$ is no. Consider the set $U:=(0,1)\subset \mathbb R$ equipped with the usual topology. The collection of open sets $\{(1/n,1-1/n)\}_{n\in\mathbb N}$ covers $U$, but $U$ is not in the collection. $U$ is definitely not compact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4315346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Properties of Sigma sum I am trying to prove that
$$\sum_{k=0}^{l} \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l} $$
use $(1+x)^{n}(1+x)^{m}$.
The book suggests the following solution
$$\sum_{k=0}^{n} \binom{n}{k} x^{k} \sum_{j=0}^{m} \binom{m}{j} x^{j}
= \sum_{l=0}^{n+m} \binom{n+m}{l} x^{l}$$
where
$$x^{l} = \sum_{k=0}^{l} \binom{n}{k} \binom{m}{l-k}$$
I understand the idea of how the formula proves it, but I cannot quite wrap my head around why $x^{l}$ has that value. Then again, I understand its value and how it fits, but not how I could deduce such a value on my own in another exercise.
I am looking for help finishing my proof, but I am not looking for an intuitive way to understand it or an alternative answer to my attempt. I would appreciate very much if somebody would help me figure out if my idea is right, or if not, what I am missing. My attempt goes as follows:
$$(1+x)^{n}(1+x)^{m}= \sum_{r=0}^{n} \binom{n}{r} x^{r} \sum_{s=0}^{m} \binom{m}{s} x^{s}$$
Now, let's define $l$ by $l=r+s$, so that $s=l-r$; then we have
$$= \sum_{r=0}^{n} \binom{n}{r} x^{r} \sum_{l-r=0}^{m} \binom{m}{l-r} x^{l-r}$$
$$= \sum_{r=0}^{n}\sum_{l-r=0}^{m} \binom{m}{l-r} x^{l-r} \binom{n}{r} x^{r} $$
And this is where I am stuck. The only way I can turn this into the correct answer is by saying
$$= \sum_{r+l-r=0}^{n+m} \binom{n+m}{r-l-r} x^{r-l-r} =
\sum_{l=0}^{n+m} \binom{n+m}{l} x^{l}$$
by saying that the sum of the terms $x^{l}$ is equivalent to
$x_{r_{1}}(x_{s_{1}}+x_{s_{2}}+...) +
x_{r_{2}}(x_{s_{1}}+x_{s_{2}}+...) +
x_{r_{3}}(x_{s_{1}}+x_{s_{2}}+...) + ...$
Is that equivalence correct? If that is not right, how can I turn my attempt into the right answer? How can I know the value of $x^{l}$ just by looking at the solution from the book since that it seems to suggest it is obvious?
| In your derivation:
Now, let's define $l$ by $l=r+s$, so that $s=l-r$; then we have
\begin{align*}
\ldots = \sum_{r=0}^{n} \binom{n}{r} x^{r} \sum_{\color{blue}{l-r=0}}^{\color{blue}{m}} \binom{m}{l-r} x^{l-r}\tag{1}
\end{align*}
it is the index region in the inner sum (1) which needs to be revised. Note, from (1) we obtain
\begin{align*}
\sum_{r=0}^{n}& \binom{n}{r} x^{r} \sum_{\color{blue}{l=r}}^{\color{blue}{m}} \binom{m}{l-r} x^{l-r}\tag{1.1}\\
&= \sum_{r=0}^{n}\sum_{\color{blue}{l=r}}^{\color{blue}{m}} \binom{m}{l-r} x^{l-r} \binom{n}{r} x^{r}\tag{1.2}\\
&= \sum_{r=0}^{n}\sum_{\color{blue}{l=r}}^{\color{blue}{m}} \binom{m}{l-r} \binom{n}{r} x^{l}\tag{1.3}\\
\end{align*}
Comment:
*
*In (1.1) we use a more common notation to write the lower limit of the inner sum without changing anything else
*In (1.2) we follow your next step
*In (1.3) we collect the terms in $x$ and get $x^{l-r}x^r = x^{(l-r)+r}=x^l$.
We observe in (1.3) the expression is a polynomial in $x$ with degree $\leq m$ since $r\leq l \leq m$. But we know from
\begin{align*}
(1+x)^n(1+x)^m=(1+x)^{n+m}\tag{2}
\end{align*}
the degree of the polynomial in $x$ in (2) is $\color{blue}{n+m}$. So, something was wrong.
In fact when substituting $l=r+s$ we have to respect the valid index range of the newly introduced index variable $l$. We have
\begin{align*}
\color{blue}{\left.\begin{array}{l}
0\leq r \leq n\\
0\leq s\leq m
\end{array}\right\}
\qquad\to\qquad 0\leq r+s=l\leq n+m}\tag{3}
\end{align*}
We see the index range of $l$ is not $r\leq l\leq m$ as used in (1.1) to (1.3) but $0\leq l\leq n+m$ instead as stated in (3). With this correction we are on the right track again, since now we can write
\begin{align*}
(1+x)^n(1+x)^m&=\sum_{r=0}^{n} \binom{n}{r} x^{r} \sum_{s=0}^{m} \binom{m}{s} x^{s}\\
&=\sum_{r=0}^n\sum_{s=0}^m\binom{n}{r}\binom{m}{s}x^{r+s}\\
&=\sum_{l=0}^{m+n}\left(\sum_{{r+s=l}\atop{r\geq 0, s\geq 0}}\binom{n}{r}\binom{m}{s}\right)x^l\tag{3.1}\\
&=\sum_{l=0}^{m+n}\left(\sum_{r=0}^{l}\binom{n}{r}\binom{m}{l-r}\right)x^l\tag{3.2}\\
\end{align*}
Since we have $(1+x)^n(1+x)^m=(1+x)^{n+m}=\sum_{l=0}^{n+m}\binom{n+m}{l}x^l$ the claim follows by coefficient comparison with (3.2) of alike powers in $x$.
Comment:
*
*In (3.1) we introduce $l=r+s$ and rearrange the polynomial according to increasing powers of $x$.
*In (3.2) we eliminate $s$ using $s=l-r$.
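For what it's worth, the identity itself is easy to confirm by brute force for small parameters (a sanity check only, not a proof):

```python
from math import comb

# Check sum_{k=0}^{l} C(n,k) C(m,l-k) = C(n+m,l) for small n, m.
# Python's math.comb(n, k) returns 0 when k > n, matching the convention
# that out-of-range binomial coefficients vanish.
for n in range(8):
    for m in range(8):
        for l in range(n + m + 1):
            lhs = sum(comb(n, k) * comb(m, l - k) for k in range(l + 1))
            assert lhs == comb(n + m, l), (n, m, l)
print("Vandermonde identity verified for all n, m < 8")
```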
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4315571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is this limit proof right?
If $\displaylines{\lim_{n\rightarrow \infty}a_n=a}$ then $\displaylines{\lim_{n\rightarrow \infty}\alpha a_n=\alpha a}$.
Proof:
Let $\epsilon>0$ and $\alpha \neq 0$. Then for all $(\frac{\epsilon}{|\alpha|}>0)(\exists n(\epsilon)\in \mathbb{N})$ so that for all $n>n(\epsilon)$ we have $|a_n-a|<\frac{\epsilon}{|\alpha|}$.
Take $n(\epsilon)=\frac{\epsilon}{|\alpha|}$. Now,
$|\alpha a_n - \alpha a|=|\alpha||a_n-a|<|\alpha|\frac{\epsilon}{|\alpha|}=\epsilon$.
| You have the right idea. Why did you take $n(\epsilon)=\frac\epsilon{|\alpha|}$, and where was that used? It should be the same $n(\epsilon)$ that was used in the first limit. It should be more along the lines of:
For all $\epsilon>0$, there exists $n(\epsilon)>0$ such that if $n>n(\epsilon)$, then $|a_n-a|<\frac\epsilon{|\alpha|}$, and therefore,
$$|\alpha a_n-\alpha a|=|\alpha||a_n-a|<|\alpha|\frac\epsilon{|\alpha|}=\epsilon.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4315740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Estimate bias determination A statistical question for me went:
Suppose $X_1, X_2, ..., X_n$ are identical and independent variables and follow $uniform(0,\theta)$.
Show that $\frac{n+1}{n}X_{(n)}$ is an unbiased estimate of $\theta$.
The answer key does a step which I cannot understand:
$$f_{X_{(n)}}(x) = nf_X(x)[F_X(x)]^{n-1} \ =\frac{n}{\theta}[\frac{x}{\theta}]^{n-1}$$
Can anyone explain to me how did
$$nf_X(x)[F_X(x)]^{n-1}$$
come about? Why am I multiplying $n$ to a $PDF$ to a $CDF^{n-1}$?
Thank you in advance!
| *
*The probability density for a particular observation is $f_X(x)=\frac 1\theta$ when $0 \le x \le \theta$
*The probability a particular observation is $x$ or less is $F_X(x)=\frac x\theta$ when $0 \le x \le \theta$, the cumulative distribution function for a particular observation and the integral of the density
*The probability all observations are $x$ or less is $F_{X_{(n)}}(x)=\left(\frac x\theta\right)^n$ when $0 \le x \le \theta$, the cumulative distribution function for the maximum observation and $\left(F_{X}(x)\right)^{n}$
*The probability density function for the maximum observation is then the derivative
$$f_{X_{(n)}}(x) = F'_{X_{(n)}}(x)=\frac n\theta \left(\frac x\theta\right)^{n-1} = n f_{X}(x) \left(F_{X}(x)\right)^{n-1} $$ when $0 \le x \le \theta$
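A quick Monte Carlo check of the conclusion (only a sanity check; the estimator and the uniform model are exactly the ones from the question):

```python
import random

random.seed(0)
theta, n, trials = 5.0, 10, 200_000
total = 0.0
for _ in range(trials):
    x_max = max(random.uniform(0, theta) for _ in range(n))
    total += (n + 1) / n * x_max  # the estimator (n+1)/n * X_(n)
print(total / trials)  # close to theta = 5, consistent with unbiasedness
```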
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4315958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Homology of complement of an $m$-sphere in $\mathbb R^n$, with $m<n$ Trying to solve the problem in the title, I found this post where a particular case is described and the answer gives a generalization that I think could help me.
Here is my first doubt: How to see that $S^{n-1}\backslash(\{*\}\cup S^{k-1})$ is homotopy equivalent to $S^{n-k-1}\vee S^{n-2}$?
Assuming this result, I define the set $A$ as one of these spheres together with a small open neighborhood of the point shared with the other sphere. And $B$ the same, but with the other sphere and the open neighborhood corresponding to $A$.
This choice is intended to make $A$ and $B$ open sets that can be retracted to the spheres. The union is the space I want to solve ($X$) and the intersection is the gluing point.
This way, Mayer-Vietoris says that we have
$\ldots \rightarrow H_n(\{p\}) \overset{}{\rightarrow} H_n(S^{n-k-1})\oplus H_n(S^{n-2}) \overset{}{\rightarrow} H_n(X) \overset{}{\rightarrow} H_{n-1}(\{p\}) \overset{}{\rightarrow} \ldots$,
And since these components are known, I think I could be able to finish the problem.
Does it seem ok to you?
Do you have other suggestions/approaches to solve the original problem?
Thank you in advance.
| I haven't read the linked question, so I won't suggest alternative approaches, but:
*
*Yes, you are using Mayer-Vietoris correctly/usefully. Of course when $k$ is very small you might have to deal with some nontrivial maps, but such is life.
*I believe the homotopy equivalence intuitively works like this. For a visual take $n=4, k=2$. Note $S^{n-1}-\{*\}$ is $\Bbb{R}^{n-1}$. Thicken the $(k-1)$-sphere to a $k$-ball with the origin removed, cut that out of the space (so the origin remains). Homotope the whole thing down to the unit $(n-1)$-ball with a hole in a coordinate plane, the shape of a "$k$-dimensional diameter", except the origin. Finally, expand the hole in the $(n-1)-k$ orthogonal dimensions until it meets the boundary $B^{n-1}$. Doing all this, you are left with an $(n-2)$-sphere that has a linear subspace [segment] running through it, and this is homotopic to $S^{n-k-1}\vee S^{n-2}$ by pulling the subspace segment outside and then homotoping the attaching sphere to a point.
*(A special argument may be needed for $n=2$ to deal with non-connectedness; also $n=1$ might be false as stated)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4316277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Main theorem of obstruction theory, Davis and Kirk I'm reading Davis and Kirk, in it they claim:
theorem 7.1: let $(X,A)$ be a CW pair and $Y$ is a path connected $n$-simple space with $n\geq 1$. Let $g:X_n\rightarrow Y$ be a continuous map. Then:
*
*there is a cellular cocycle $\theta\in C^{n+1}(X,A;\pi_n Y)$ which vanishes iff $g$ extends to a map $X_{n+1}\rightarrow Y$;
*the cohomology class $[g]\in H^{n+1}(X,A;\pi_nY)$ vanishes iff the restriction $g|_{X_{n-1}}:X_{n-1}\rightarrow Y$ extends to a map $X_{n+1}\rightarrow Y$.
My question is: what role does $A$ play here? It seems that the theorem has nothing to do with $A$ here. I suspect there is a typo in the theorem. Any help will be appreciated!
$\textbf{Oh, I see; Davis and Kirk are using the convention}$
$ \textbf{that $X_n$ means the union of $A$ and the $n$-skeleton of $X$.}$
| The map is assumed to be already defined on $A$. They say a couple of lines earlier: "We wish to study the question of whether $g: A \to Y$ can be extended to map $X \to Y$". You are right that there is an inaccuracy in that the theorem itself does not mention that the function is defined on $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4316462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $(23, \alpha -10, \alpha -3) = \mathbb{Z}[\alpha]$ Let $\mathcal{O}=\mathbb{Z}[\alpha]$ be a number ring, where $\alpha \in \mathbb{C}, \alpha^3=\alpha+1$. I have proved that
$$23\mathcal{O}=(23, \alpha -10)^2(23, \alpha-3)$$
and that $[\mathbb{Q}(\alpha):\mathbb{Q}]=3$. Now I am asked to prove that $(23, \alpha -10, \alpha -3)= \mathcal{O}$. A basis for $\mathcal{O}$ would be $\{1, \alpha, \alpha^2\}$. It would be enough to show that $1\in (23, \alpha -10, \alpha -3)$. However I've made several attempts and I cannot produce $1$ from a combination of $(23, \alpha -10, \alpha -3)$. Can someone help me with this?
| Observe
that not only $23$ but also $7=(\alpha-3)-(\alpha-10)$ is an element of the ideal $(23, \alpha -10, \alpha -3)$. Note also that $23$ and $7$ are coprime.
Thus there are integers $u,v$ such that $1=23u+7v$, implying that $1\in (23, \alpha -10, \alpha -3)$ and hence $(23, \alpha -10, \alpha -3)=\mathcal{O}$.
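To make the last step concrete: the extended Euclidean algorithm produces explicit integers $u,v$ (here $u=-3$, $v=10$ work, since $23\cdot(-3)+7\cdot 10=1$). A minimal sketch:

```python
def ext_gcd(a, b):
    # Returns (g, x, y) with a*x + b*y = g = gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, u, v = ext_gcd(23, 7)
print(g, u, v)  # g = 1, and 23*u + 7*v = 1
```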
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4316636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Polynomial with no roots over the field $ \mathbb{F}_p $. How can I show that for any prime $p$ and any $d\ge 2$ there exists a polynomial of degree $d$ in $\mathbb{F}_p[X]$ with no roots? ($\mathbb{F}_p$ is the finite field with $p$ elements).
Thanks in advance for any idea.
| Even more than that, there exists an irreducible polynomial of degree $d$. The field $\mathbb{F}_{p^d}$ is a field extension of degree $d$ over $\mathbb{F_p}$. The multiplicative group of a finite field is cyclic, so let $\alpha$ be a generator of $\mathbb{F}_{p^d}^{\times}$, and then we have $\mathbb{F}_{p^d}=\mathbb{F}_p(\alpha)$. Since $[\mathbb{F}_p(\alpha):\mathbb{F}_p]=d$ it follows that the minimal polynomial of $\alpha$ is an irreducible polynomial in $\mathbb{F}_p[x]$ of degree $d$.
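The weaker statement asked for in the question (a degree-$d$ polynomial with no roots, not necessarily irreducible) can also be confirmed by brute force for small $p$ and $d$; the helper name below is just for illustration:

```python
from itertools import product

def has_rootless_poly(p, d):
    # Brute-force search over monic degree-d polynomials
    #   x^d + c_{d-1} x^{d-1} + ... + c_0   over F_p
    # for one with no roots in F_p.
    for coeffs in product(range(p), repeat=d):
        if all((pow(x, d, p) + sum(c * pow(x, i, p) for i, c in enumerate(coeffs))) % p != 0
               for x in range(p)):
            return True
    return False

for p in (2, 3, 5, 7):
    for d in (2, 3, 4):
        assert has_rootless_poly(p, d), (p, d)
print("rootless polynomials of degree d exist over F_p for all tested (p, d)")
```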
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4316827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Weakly convergent sequence whose square converges strongly Let $(f_n)_n \in L^2(\mathbb{R})$ such that $f_n$ converges weakly to $f$ in $L^2(\mathbb{R})$ and $f_n^2 \rightarrow g$ in $L^1(\mathbb{R})$. Prove that $f^2 \le g$ almost everywhere in $\mathbb{R}$.
In order to prove the thesis, I have to show that $\int_{\mathbb{R}} f^2 \psi dx \le \int_{\mathbb{R}} g \psi dx$, for a suitable $\psi$.
My first idea was to apply Mazur's lemma to $(f_n)_n$ in order to obtain a strongly convergent sequence in $L^2$. However, I cannot conclude the strong convergence of the aforementioned sequence squared to $g$ in $L^1$.
I think that I should use the fact that $L^2$ is a Hilbert space and thus is uniformly convex and reflexive. I know that there is a theorem which states that if I have a weakly convergent sequence in a Hilbert space and I manage to prove that it also converges in norm, then the sequence converges strongly.
However, I can't prove the convergence in norm (if the sequence converges in that sense) and hence I am not able to conclude.
Any hint would be greatly appreciated.
| Let $\phi\in C_c(\mathbb R)$ with $\phi\ge0$. The functional $f\mapsto \int_{\mathbb R} f^2 \phi\ dx $ is convex and continuous on $L^2$, hence weakly sequentially lower semi-continuous. This proves
$$
\int_{\mathbb R} f^2 \phi\ dx \le \liminf_{n\to\infty} \int_{\mathbb R} f_n^2\phi\ dx.
$$
Since $\liminf_{n\to\infty} \int_{\mathbb R} f_n^2\phi \ dx=\int_{\mathbb R} g\phi \ dx$, we have
$$
\int_{\mathbb R} (g-f^2) \phi \ dx\ge 0.
$$
Since $\phi\ge0$ was arbitrary, $g\ge f^2$ a.e. follows.
The proof also works if $f_n^2\rightharpoonup g$ in $L^1$ only.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4316985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Bianchi classification of solvable Lie groups and cocompact subgroups Recall that abelian is contained in nilpotent is contained in solvable.
There is a unique 3 dimensional connected simply connected abelian Lie group $ \mathbb{R}^3 $.
And there is a unique connected simply connected nonabelian nilpotent 3 dimensional Lie group, the Heisenberg group
$$
Nil=\{ \begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \end{pmatrix} : a,b,c \in \mathbb{R} \}
$$
However there are lots of (simply connected, connected, 3 dimensional) solvable non nilpotent groups.
The two solvable (non nilpotent) groups I can think of are, first,
$$
{SE}_2= \{
\begin{bmatrix}
a & b & x \\
-b & a & y \\
0 & 0 & 1
\end{bmatrix} : a^2+b^2=1 \}
$$
which is the connected component of the identity in the Euclidean group $ E_2= \mathbb{R}^2 \rtimes O_2 $. And, second, the isometry group of the Minkowski plane $ E_{1,1}= \mathbb{R}^2 \rtimes O_{1,1} $
$$
E_{1,1}= \{
\begin{bmatrix}
a & b & x \\
b & a & y \\
0 & 0 & 1
\end{bmatrix} : a^2-b^2=1 \}
$$
which is isomorphic to
$$
\{
\begin{bmatrix}
a & 0 & x \\
0 & b & y \\
0 & 0 & 1
\end{bmatrix} : ab=1 \}
$$
For solvable non nilpotent Lie groups the Bianchi classification https://en.wikipedia.org/wiki/Bianchi_classification lists 6 distinct types $ 3,4,5,6,6_0,7_0 $ as well as an infinite family of distinct groups, type $ 7 $.
Which of these groups $ 3,4,5,6,6_0,7_0,7 $ admit cocompact discrete subgroups? A group admitting a cocompact discrete subgroup must be unimodular (see for example https://arxiv.org/pdf/0903.2926.pdf) and of these solvable groups only $ 6_0 $ (corresponding to $ E_{1,1} $) and $ 7_0 $ (corresponding to $ E_2 $) are unimodular and thus possibly contain cocompact discrete subgroups.
For both unimodular groups $ E_2 $ and $ E_{1,1} $ do there exist cocompact discrete subgroups?
If not why not? If so what is an example of a cocompact discrete subgroup?
And which of these groups is the Sol geometry for 3 manifolds based on?
| Great answer from Lee Mosher (I already accepted it). Just wanted to provide some extra information with some examples from this source https://arxiv.org/abs/0903.2926 for the sake of completeness.
As Lee Mosher notes, type $ 7_0 $ is $ E_2 $ which is contained in $ E_3 $ so these 3d compact manifolds have Euclidean geometry, just like the 3 torus. However topologically they can be quite different from the 3 torus. For example take
$$
{SE}_2= \{
\begin{bmatrix}
a & b & x \\
-b & a & y \\
0 & 0 & 1
\end{bmatrix} : a^2+b^2=1 \}
$$
and mod out by
$$
\mathbb{Z}^2 \cong
\{
\begin{bmatrix}
1 & 0 & n \\
0 & 1 & m \\
0 & 0 & 1
\end{bmatrix} : n,m \in \mathbb{Z} \}
$$
the resulting compact homogeneous Euclidean 3 manifold has fundamental group $ \mathbb{Z}^2 \rtimes \mathbb{Z} $ where the semi direct product is with respect to
$$
n \mapsto \begin{bmatrix} -1 & 0 \\ 0 & -1 \end{bmatrix}^n
$$
and the abelianization is $ H_1 \cong \mathbb{Z} $ ( thus this manifold is certainly not the 3 torus).
Type $ 6_0 $, the group on which the Sol geometry is based, is the isometry group of the Minkowski plane $ E_{1,1}= \mathbb{R}^2 \rtimes O_{1,1} $ with unimodular subgroup
$$
SE_{1,1}= \{
\begin{bmatrix}
a & b & x \\
b & a & y \\
0 & 0 & 1
\end{bmatrix} : a^2-b^2=1 \}
$$
which is isomorphic to
$$
\{
\begin{bmatrix}
a & 0 & x \\
0 & b & y \\
0 & 0 & 1
\end{bmatrix} : ab=1
\}
$$
which contains the following (cocompact) lattice
$$
H=
\{
\begin{bmatrix}
\beta^k & 0 & n+m \beta \\
0 & \beta^{-k} & n+m \beta^{-1} \\
0 & 0 & 1
\end{bmatrix}
: k,n,m \in \mathbb{Z} \}
$$
where $ \beta $ is the root of $ x^2+3x+1 $ (or $ x^2+dx+1 $ for any integer $ d $ such that the roots of the polynomial are real and not integers ( $ |d| \geq 3 $ )). The fact that this forms a group follows from the fact that the unimodular companion matrix for the polynomial $ x^2+dx+1 $ satisfies the following matrix equation
$$
\begin{bmatrix} \beta & 0 \\ 0 & \beta^{-1} \end{bmatrix}
\begin{bmatrix} 1 & \beta \\ 1 & \beta^{-1} \end{bmatrix}
=\begin{bmatrix} 1 & \beta \\ 1 & \beta^{-1} \end{bmatrix}
\begin{bmatrix} 0 & -1 \\ 1 & -d \end{bmatrix}
$$
Note that in the reference they construct a lattice using $ d=-3 $ and $ \beta= \frac{3+\sqrt{5}}{2} $. The compact coset manifold $ SE_{1,1}/H $ has fundamental group $ \mathbb{Z}^2 \rtimes \mathbb{Z} $ but the semidirect product is with respect to the map
$$
n \mapsto \begin{bmatrix} 0 & -1 \\ 1 & -d \end{bmatrix}^n
$$
and the abelianization is $ H_1 \cong \mathbb{Z} $.
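A quick numerical check of the companion-matrix identity above, with $d=3$ and $\beta=(-3+\sqrt 5)/2$ a root of $x^2+3x+1$ (pure floating point, so only a sanity check; the helper `matmul` is just for illustration):

```python
import math

d = 3
beta = (-d + math.sqrt(d * d - 4)) / 2  # root of x^2 + d*x + 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = [[beta, 0.0], [0.0, 1.0 / beta]]            # diag(beta, beta^{-1})
P = [[1.0, beta], [1.0, 1.0 / beta]]
C = [[0.0, -1.0], [1.0, -float(d)]]             # companion matrix of x^2 + d*x + 1

L, R = matmul(D, P), matmul(P, C)
err = max(abs(L[i][j] - R[i][j]) for i in range(2) for j in range(2))
print(err)  # should be at machine-precision level
```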
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4317139",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Let the sequence $\{a_n\}$ be defined as $a_1=1$ and $a_{n+1} = \frac{6a_n+3}{a_n+4}$. Show that $a_n \lt 3$ and the sequence is increasing. Let the sequence $\{a_n\}$ be defined as $a_1=1$ and $$a_{n+1} = \frac{6a_n+3}{a_n+4}$$
Then I'm asked to show :
$1)$ $a_n \lt 3$.
$2)$ Assuming $a_n \lt 3$, show that the sequence is increasing.
For the first part I tried to use Induction. When I assumed $a_n \lt 3$ and went on to prove that this implies $a_{n+1} \lt 3$, I got stuck :
$$ a_{n+1} = \frac{6a_n+3}{a_n+4} \lt \frac{6\cdot 3+3}{a_n+4} \lt \frac{21}{a_n+4}.$$
I'm stuck here can anyone please help?
Edit : I can do the second part easily. And also I got an appropriate answer by now for the first part.
| If $a_n \lt 3$, then
$$a_{n+1}= \frac {6a_n+3}{a_n+4}=\frac{6a_n+24-21}{a_n+4}=6-\frac{21}{a_n+4} \lt 6-\frac{21}{7}=3.$$
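A few iterations also confirm the picture numerically (the proof, of course, is the induction above):

```python
a = 1.0  # a_1
for _ in range(20):
    nxt = (6 * a + 3) / (a + 4)
    assert a < nxt < 3  # increasing and bounded above by 3
    a = nxt
print(a)  # approaching the limit 3 from below
```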
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4317299",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Describing a $m$-dimensional plane of $\Bbb{R^n}$ with a $m$-dimensional subspace of $\Bbb{R^n}$ I am given a definition of a plane as per the following : A $m$-dimensional plane in $\Bbb{R^n}$ is the set $$T=x_0+E:=\{x_0+y\in\Bbb{R^n}:y\in E\}$$
where $E$ is an $m$-dimensional subspace of $\Bbb{R^n}$, and $x_0$ is a point of the plane. I don't see how this definition of a plane fits in with the standard $ax+by+cz = d$ equation of a plane. I'm trying to see how the plane $$x+y+z = 1$$
can be written in terms of the above definition. This equation describes a $2$-dimensional plane in $\Bbb{R^3}$, so $m=2$. Then the set $E$ must also be $2$-dimensional, hence it is also a plane. What exactly is this $E$, for the plane given by equation above?
| Consider the case where $E$ is a hyperplane, i.e., of codimension $1$. The hyperplane is defined by $$E=\{y\in\mathbb{R}^n,\,a_{1}y_{1}+\dots+a_{n}y_n=0\}=\{y\in\mathbb{R}^n,\,\langle y,a\rangle=0\}=a^{\perp}.$$
Now let $T=x_{0}+E$; then for $x\in T$, writing $x=x_{0}+y$, we have $$\langle x,a\rangle=\langle x_0,a\rangle+\langle y,a\rangle=\langle x_0,a\rangle+0=\langle x_0,a\rangle,$$
which is the other definition if you define $d:=\langle x_0,a\rangle$.
For codimension $k>1$, remark that you have $k$ equations, that is, the intersection of $k$ hyperplanes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4317444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Recurrence relations in the generating function of binary partitions Let $b(n)$ denote the number of binary partitions of $n$, that is, the number of partitions of $n$ as the sum of powers of $2$. Define
\begin{equation*}
F(x) = \sum_{n=0}^\infty b(n)x^n
= \prod_{n=0}^\infty \frac{1}{1-x^{2^{n}}}.
\end{equation*}
Clearly, $F(x)$ satisfies the functional equation
\begin{equation*}
(1-x)F(x) = F(x^2).
\end{equation*}
Then we have the recurrence relations
$b(2n+1)=b(2n)$ and $b(2n) = b(2n-2) + b(n)$ for $n\ge1$.
How can these recurrence relations be derived from the given information?
What I tried:
We have
\begin{equation*}
\sum_{n=0}^\infty b(2n)x^n
= \prod_{n=0}^\infty \frac{1}{1-x^{2^{2n}}}
\end{equation*}
and
\begin{equation*}
\sum_{n=0}^\infty b(2n+1)x^n
= \prod_{n=0}^\infty \frac{1}{1-x^{2^{2n+1}}}.
\end{equation*}
But, I didn't get those recurrence relation.
| The sequence $\,b(n)\,$ is OEIS sequence A018819 "Binary partition function: number of partitions of n into powers of 2.". The FORMULA section
states:
$a(2m+1) = a(2m), a(2m) = a(2m-1) + a(m)$. Proof: If $n$ is odd there is a part of size $1$; removing it gives a partition of $n - 1$. If $n$ is even either there is a part of size $1$, whose removal gives a partition of $n - 1$, or else all parts have even sizes and dividing each part by $2$ gives a partition of $n/2$.
You asked
Then we have the recurrence relations
$b(2n+1)=b(2n)$ and $b(2n) = b(2n-2) + b(n)$ for $n\ge1$.
How can these recurrence relations be derived from the given information?
but you did not explicitly state that you wanted to use the ordinary
generating function $\,F(x).\,$ The first recurrence relation for
$\,b(2n+1)\,$ is already given in the FORMULA entry. For $\,b(2n)\,$
we also get from the entry $\,b(2n) = b(2n-1)+b(n)\,$ but we already
know that if $\,n\ge 1\,$ then $\,b(2n-1)=b(2n-2)\,$ and thus
$\,b(2n)=b(2n-2)+b(n)\,$ as the second recurrence relation.
Your attempt using the g.f. $\,F(x)\,$ was flawed, but can be fixed up.
$$ F(x) = \frac1{1-x}\prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} =
\sum_{n=0}^\infty b(n)x^n. \tag{1}$$
Substitute $\,-x\,$ for $\,x\,$ to get
$$ F(-x) = \frac1{1+x}\prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} =
\sum_{n=0}^\infty b(n)(-x)^n. \tag{2}$$
Notice that
$$ \frac12\Big(\frac1{1-x} + \frac1{1+x}\Big) = \frac1{1-x^2}, \quad
\frac12\Big(\frac1{1-x} - \frac1{1+x}\Big) = \frac{x}{1-x^2}. \tag{3} $$
Add equation $(2)$ to $(1)$ and use equation $(3)$ to get the even part of $\,F(x)\,$
$$ \frac12(F(x)+F(-x)) =
\frac1{1-x^2}\prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} =
\sum_{n=0}^\infty \,b(2n)\,x^{2n}.\tag{4}$$
Subtract equation $(2)$ from $(1)$ and use equation $(3)$ to get the odd part
$$ \frac12(F(x)-F(-x)) =
\frac{x}{1-x^2}\prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} =
\sum_{n=0}^\infty \,b(2n+1)\,x^{2n+1}.\tag{5}$$
Divide both sides of equation $(5)$ by $\,x\,$ to get
$$ \frac1{1-x^2}\prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} =
\sum_{n=0}^\infty \,b(2n+1)\,x^{2n}.\tag{6}$$
Equate coefficients in equations $(4)$ and $(6)$ to get $\,b(2n+1)=b(2n).\,$
From equation $(1)$ we get
$$ (1-x)F(x) = \prod_{n=1}^\infty \frac{1}{1-x^{2^{n}}} = F(x^2) \tag{7}$$
and use equation $(1)$ again with $\,x\,$ replaced with $\,x^2\,$ to get
$$ F(x^2) = \sum_{n=0}^\infty\, b(n)\,x^{2n} \tag{8}$$
and also
$$ (1-x)F(x) = 1+\sum_{n=1}^\infty \,(b(n)-b(n-1))\, x^n. \tag{9} $$
Using equations $(7)$, $(8)$, and $(9)$ together, we get
$$ \sum_{k=0}^\infty\, b(k)\,x^{2k} =
1+\sum_{n=1}^\infty \,(b(n)-b(n-1))\, x^n. \tag{10} $$
Equating coefficients of $\,x^n\,$ when $\,n\,$ is
even and odd gives the recurrence relations in the OEIS entry.
By the way, in equation $(4)$ replace $\,x^2\,$ with $\,x\,$ to get
$$ \frac1{1-x}F(x) = \sum_{n=0}^\infty \,b(2n)\,x^n \tag{11}$$
and since $\,b(2n+1)=b(2n)\,$ the same left side is the generating
function of $\,b(2n+1).\,$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4317636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find kernel of induced map after tensoring Let $R=k[x,y]$ and $I=(x,y)$. Consider the map $$R^2 \xrightarrow{\phi:(f(x,y),g(x,y))\mapsto xf(x,y)+yg(x,y)}I$$
I am trying to show that after tensoring by $R/I$, the kernel of the induced map is isomorphic to $R/I$, which I am pretty sure is the case.
| Welcome to MSE!
This is actually false, and this problem is a great example of the power of abstract nonsense.
Start with the exact sequence
$$
0 \to R \xrightarrow{f \mapsto (yf, \ -xf)} R^2 \xrightarrow{(f,g) \mapsto xf + yg} I \to 0
$$
Now since tensoring is right exact, we get a new exact sequence
$$
R \otimes R/I \to R^2 \otimes R/I \to I \otimes R/I \to 0
$$
of course, we know how to compute tensor products with $R/I$, and we find
$$
R/I \to (R/I)^2 \to I \big / I^2 \to 0
$$
Lastly, we know $R = k[x,y]$ and $I = (x,y)$, so we can actually compute these quotients too.
$$
k \to k^2 \to (x,y) \big / (x,y)^2 \to 0
$$
Now, what are our maps?
Well we're viewing $k$ as $k[x,y] \big / (x,y)$. That is, the constant polynomials.
So our old map $f \mapsto (yf, -xf)$ always outputs a pair of polynomials with $0$ constant term. This becomes the $0$ map from $k$ to $k^2$.
At this point we can stop, because we see that $k$ is not the kernel of the resulting map
$k^2 \to (x,y) \big / (x,y)^2$. If we wanted to go further, though, we would see this map sends a pair of constant polynomials $(c_1, c_2) \mapsto c_1 x + c_2 y$. We have to quotient out any quadratic terms, but there aren't any! So we see this map is actually injective, and $k^2 \cong (x,y) \big / (x,y)^2$ as $R$-modules.
That is, unwinding all this, $R^2 \otimes R/I \cong I \otimes R/I$ as $R$-modules, and this isomorphism came from the induced map. So the kernel of the induced map is $0$ and not $R/I$.
There are faster ways to see this (using some algebraic geometry, for instance), but I think working things out like this is instructive. In general, you should reach for exact sequences, rather than the definition of tensor product, in basically every situation.
I hope this helps ^_^
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4317846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are rings of power series over a local field complete? Let $K$ be a finite extension of $\mathbb{Q}_p$, and let $D$ be some disk $D = \{ x\in \overline{K} \mid |x| < c < 1\}$. Is the set of power series in $K[[T]]$ which converge on $D$ and are bounded on $D$ complete with respect to the supremum norm $\|f\|_\infty=\sup_{x\in \overline{K}, |x|<c} |f(x)|$?
| Ok here is my shot at an answer, but I am not certain it is correct.
Suppose $|f_n(x) - f_m(x)| < \epsilon$ for all $|x|<c$. I show all terms of this difference must have absolute value less than $\epsilon$ implying an upper bound on the coefficients. Suppose $f_n(x)-f_m(x)$ has terms of absolute value greater than $\epsilon$ when evaluated at some $z$. Let $S = \{c_{i_1}z^{i_1},\ldots, c_{i_k}z^{i_k}\}$ be the list of all terms which have lowest valuation $u$, implying the valuation of $\sum_{j=1}^k c_{i_j}z^{i_j}$ is greater than $u$. There is some finite extension $K'$ of $K$ in which all of the elements of $S$ live. Now take $L$ to be some unramified extension of $K'$ such that the size of the residue field of $L$ is greater than $i_k$. Now suppose $\lambda$ is some uniformizer of $L$, so that there exists a power $\lambda^{e_1}$ such that the valuation of $\lambda^{e_1}$ is $u$. Then $c_{i_j}z^{i_j}\lambda^{-e_1}$ are all units, and the polynomial $g(T) = \sum_{j=1}^kc_{i_j}z^{i_j}\lambda^{-e_1}T^{i_j}$ must vanish for all $t$ in $L/\lambda$ else we can find $x = zt \in L$ with $|f_n(x)-f_m(x)| > \epsilon$ and $|x| < c$. However $g$ cannot possibly vanish on all of $L/\lambda$ because $|L/\lambda| > i_k$ and $\deg g = i_k$. Thus all terms of $f_n(z)-f_m(z)$ have absolute value at most $\epsilon$, and this proves the result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How to show a sequence of measure converges weakly? The doubt I have is that in all equivalent definitions of weak convergence of finite measures via the Portmanteau theorems, some knowledge of $\mu$ is required to check those conditions,
for example, one should know the continuity sets of $\mu$ in order to use the condition $\mu_n(A) \to \mu(A)$ for all $A$ such that $\mu(\partial A)=0$.
What are the methods to show weak convergence(or non convergence) of a sequence of finite measures, when you have no knowledge of what the limiting measure would be?
For example, how does one show that the uniform measure on $\{1,\cdots,n\}$ does not converge to any probability measure?
| One consequence of weak convergence is tightness: If $\mu_n$ converges weakly then , given $\epsilon >0$ there exists a compact set $K$ such that $\mu_n(K^{c}) <\epsilon$ for all $n$. So if $(\mu_n)$ is not tight then it is not convergent. This is often used to show that a given sequence does not converge.
This argument works in your example.
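To make this concrete for the example (a numerical sketch I am adding, not part of the argument above): for the uniform measure $\mu_n$ on $\{1,\dots,n\}$, any fixed compact set $K=[0,M]$ captures mass at most $M/n \to 0$, so no single compact set can keep $\mu_n(K^c)<\epsilon$ for all $n$ at once.

```python
# Non-tightness of mu_n = uniform measure on {1, ..., n}: a fixed
# compact set K = [0, M] captures mass min(M, n)/n -> 0 as n grows,
# so no single compact set keeps mu_n(K^c) < epsilon for all n.
def mass_in_compact(n, M):
    """Mass that the uniform measure on {1,...,n} assigns to [0, M]."""
    return min(M, n) / n

M = 100  # any fixed compact set [0, M]
print([mass_in_compact(n, M) for n in (10**2, 10**4, 10**6)])
# -> [1.0, 0.01, 0.0001]
```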
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A triangle has one vertex at a circle's center and two vertices on the circle. Can the three enclosed regions have rational areas?
Let $r=$ radius of circle, $\theta=$ angle at vertex of triangle at center of circle.
Assume the three regions have rational areas. The area of the circle is rational, so $r^2$ is a rational multiple of $1/\pi$. Then (since the area of the triangle is rational) $\sin\theta$ is a rational multiple of $\pi$, and (since the area of the segment is rational) $\theta$ is a rational multiple of $\pi$.
So I think the question is equivalent to:
Can $\theta$ and $\sin\theta$ both be rational multiples of $\pi$? ($0<\theta<\pi$)
I thought about Niven's Theorem, but it doesn't seem to help.
(I suspect the answer is no.)
| I think I can answer my own question. The sine of any rational multiple of $\pi$ is algebraic, as shown here, so it cannot be a rational multiple of $\pi$, so the answer to my question is no.
(I thought about this question on and off for a few days before posting it here. Then, for some reason, almost immediately after posting the question here, without receiving any replies, I found the answer.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that any group of order 15 is cyclic? I have multiple questions regarding: https://math.stackexchange.com/a/864985/997899
Q: Prove that any group of order 15 is cyclic.
| $HK$ is a group whose subgroups are both $H$ and $K$. Thus, its order is divisible by both $3$ and $5$, i.e. by $15$, which means it is $15$ at least! As it is contained in the group $G$ of order $15$, we must have $G=HK$.
How to then finish the proof? As groups $H$ and $K$ are of prime order, they are cyclic. So $H\cong C_3$ and $K\cong C_5$. Thus, $G=HK\cong H\times K\cong C_3\times C_5\cong C_{15}$.
The last isomorphism above (and generally, if $(m,n)=1$ it is known that $C_m\times C_n\cong C_{mn}$) is one of the equivalent formulations of Chinese Remainder Theorem.
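For what it's worth, the final isomorphism can be checked by brute force (a sketch of mine, not part of the proof): $C_3\times C_5$ contains an element of order $15$, namely $(1,1)$, so it is cyclic.

```python
# Brute-force check that Z/3 x Z/5 is cyclic: the element (1, 1) has order 15.
def order(a, b):
    """Order of the element (a, b) in Z/3 x Z/5."""
    x, y, k = a % 3, b % 5, 1
    while (x, y) != (0, 0):
        x, y, k = (x + a) % 3, (y + b) % 5, k + 1
    return k

print(order(1, 1))  # -> 15
print(max(order(a, b) for a in range(3) for b in range(5)))  # -> 15
```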
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What does "the iterates $T^{n}$ will $\epsilon$-approach the support of $\mu$" mean? Let $T:X \to X$ be homeomorphism on a compact manifold $X$. We say that $\mu$ is $T-$invariant measure if $\mu(T^{-1}(A))=\mu(A)$ for all Borel measurable set $A$.
I read the following sentence on some paper:
If $\mu$ is some $T$-invariant measure and $x\in A$ then, with a frequency that is asymptotically bounded away from zero, the iterates $T^{n}$ will $\epsilon$-approach the support of $\mu$ for any $\epsilon>0.$
Does the above sentence imply $A \cap \text{support}(\mu)\neq \emptyset?$
| It's not true in general. For instance, consider $X = \mathbb{T}^{2}$ the two-torus and let $T(x,y) = (x + \alpha, y)$ for some $\alpha \in \mathbb{R} \setminus \mathbb{Q}$. The probability measure $\mu$ obtained from integration on the line $[0,1] \times \{0\}$ is invariant for $T$, but the orbit $\{T^{n}(0,1/2)\}_{n \in \mathbb{N}}$ stays a distance $\frac{1}{2}$ away.
What would be true is ``$\mu$-a.e. $x \in X$ has the property that, with a frequency that is asymptotically bounded away from zero, the iterates $T^{n}$ will $\epsilon$-approach the support of $\mu$ for any $\epsilon > 0$." It follows from the ergodic theorem. (Also, by compactness of $X$, you can decompose $\mu$ into a mixture of ergodic, invariant probability measures --- this is convenient in the proof.)
If you knew $\mu$ was ergodic, then you could strengthen the above statement to ``$\mu$-a.e. $x \in X$ has the property that, for any $y$ in the support of $\mu$, with a frequency that is asymptotically bounded away from zero, the iterates $T^{n}(x)$ will $\epsilon$-approach $y$." The point is when things are ergodic, (most) orbits visit the entire support. (In the non-ergodic case, which parts of the support will be seen depends on the ergodic decomposition.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using the Dominated convergence theorem in a sequence of Indicator functions Let $Z_t\sim WN(0,\sigma^2)$ be a white noise. Consider a $\text{MA}(q)$ process:
\begin{equation}
X_t^q = \sum_{j=0}^{q} \theta_j Z_{t-j}, \quad X_t = \sum_{j=0}^{\infty} \theta_j Z_{t-j}
\end{equation}
where $\sum_{j=0}^{\infty} \theta_j^2 < \infty$. Fix any $t$ and any $x$, I want to show that:
$$\lim_{q \to \infty}P(X_t^q \leq x ) = P(X_t \leq x)$$
For this, I tried the Dominated convergence theorem: Define $f_q = I_{[\,X_q \, \leq \, x\,]}$ and $f = I_{[\,X \, \leq \, x\,]}$. It's easy to show that:
$$\int f_q\,dP = P(X_t^q \leq x ), \quad \int f\, dP = P(X_t \leq x ) $$
Also, it's easy to show that $|f_q| \leq 1$.
It only remains to show that the sequence $f_q$ converges pointwise to $f$ and I'm having a little trouble showing this. I think that the solution have to do with this two items questions:
*
*$X^q_t \to X_t$ pointwise? How I can show this?
*The first item implies that $f_q \to f$ pointwise?
Some help, pls!
| Fix a time point $t$.
*
*$\sum_{j=0}^{\infty}\theta_{j}^{2}<\infty$ implies $X_{j}^{q}$ and $X_{t}$ are both in $L^{2}$, also $\mathbb{E}[|X_{t}^{q}-X_{t}|^{2}]=\sigma^{2}\sum_{j=q+1}^{\infty}\theta_{j}^{2}\to0$ as $q\to\infty$, therefore $X_{t}^{q}\overset{L^{2}}{\to}X_{t}$, hence $X_{t}^{q}\overset{\mathbb{P}}{\to}X_{t}$.
*Convergence in probability implies convergence in distribution, the desired result holds for all $x$ such that $F(X_{t})$ is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4318969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What does it mean that "the action of $\mathbb{Z}$ on $M$ factors through the quotient $\mathbb{Z}/p^n \mathbb{Z}$"? Let $M$ be a $\mathbb{Z}$-module, i.e. $\mathbb{Z}$ acts on $M$.
What does it mean that "the action of $\mathbb{Z}$ on $M$ factors through the quotient $\mathbb{Z}/p^n \mathbb{Z}$" ?
I want to understand it.
Suppose $\rho: \mathbb{Z} \times M \to M$ is defined by $\rho_z(m)=z\cdot m$ for all $z \in \mathbb{Z}$ and $m \in M$. Then "the action of $\mathbb{Z}$ on $M$ factors through the quotient $\mathbb{Z}/p^n \mathbb{Z}$" implies a factorization $\rho = \rho_2 \circ \rho_1$,
where $\rho_1(z,m)=(z \bmod p^n,\, m)$ and $\rho_2(z \bmod p^n,\, m)=z\cdot m$.
But still I don't have the motivation and clear understanding of what does mean by "the action of $\mathbb{Z}$ on $M$ factors through the quotient $\mathbb{Z}/p^n \mathbb{Z}$" ?
Any discussion is appreciated.
| The group action can equivalently be described as a group homomorphism
$$\rho':\ \Bbb{Z}\ \longrightarrow\ \operatorname{End}(M).$$
That the action factors over $\Bbb{Z}/p^n\Bbb{Z}$ simply means that this homomorphism factors over $\Bbb{Z}/p^n\Bbb{Z}$.
That is to say $p^n\Bbb{Z}$ is contained in the kernel of $\rho'$, or equivalently $p^n\cdot m=0$ for all $m\in M$.
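A concrete toy case (my addition): take $M=\mathbb{Z}/8\mathbb{Z}$ as a $\mathbb{Z}$-module with $p^n=8$. The action $z\cdot m$ depends only on $z \bmod 8$, and $p^n\cdot m=0$ for every $m$, which is exactly the factoring condition.

```python
# The Z-action on M = Z/8Z factors through Z/8Z: z.m depends only on z mod 8.
def act(z, m):
    """Action of z in Z on m in M = Z/8Z."""
    return (z * m) % 8

for m in range(8):
    for z in range(-20, 20):
        assert act(z, m) == act(z % 8, m)  # the action only sees z mod 8
    assert act(8, m) == 0  # p^n * m = 0 for all m in M
print("the action factors through Z/8Z")
```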
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of $X<Y$ for independent geometric random variables. Given two independent, geometric random variables $X$ and $Y$ with the same success probability $p\in(0,1)$, what is $\mathbf P(X<Y)$?
The answer can be computed formally using total probability and one gets
$$\mathbf P(X<Y)=\frac{1-p}{2-p}$$
In the concrete example, where $X$ and $Y$ model six-sided fair dice and $p$ is the probability to throw a six with one die, the problem above is the probability that the first die shows a six for the first time before the second die does, when both are thrown simultaneously in rounds.
Curiously, the probability $\mathbf P(X<Y)=5/11$, which is exactly the number of outcomes in $\Omega=\{1,2,...,6\}^2$ in which the first entry is a six and the second is not, divided by the number of outcomes that contain at least one six.
This observation opens the question whether there is a heuristic proof of the above general formula?
EDIT: Let's reformulate this as a game between player A and player B. They both bet that some event occurs as a result of independent experiments for them before it occurs in the same type of experiments for the other player independently. There is a draw (remis) if the event happens for both at the same time.
Examples could be that both players throw a dice each and bet on who gets 6 first. Or they throw 2 coins each and bet on who gets two heads first.
The probability that the event happens is $p\in(0,1)$. Then, we are in the case above, where the number of trials until first success for each is geometric with success parameter $p$; let's say the number of trials for a is $X\sim Geo(p)$ and for player B $Y\sim Geo(p)$. Thus, the probability that player A wins is $\mathbf P(X<Y)$.
Now, in the two examples that I gave, the probability to win is just the number of events in which A wins, divided by the number of events in which either one wins or there is a draw. Is that a coincidence? It should not be, considering that $p$ is a probability in a Laplace experiment.
EDIT 2: Thanks to an answer below, we can make this observation precise. Let $A$ be the event that player A wins and $D_i$ be the event that the game ends in round i.
Then,
$$D_i=\{\min(X,Y)=i\}$$
Furthermore,
$$\mathbf P(X<Y)=\mathbf P(A) = \sum_{i=1}^\infty \mathbf P(A|D_i)\mathbf P(D_i)$$
Since the geometric distribution is made up of independent Bernoulli trials, we have $\mathbf P(A|D_i)=\mathbf P(A|D_1)$ and thus
$$\sum_{i=1}^\infty \mathbf P(A|D_i)\mathbf P(D_i)=\mathbf P(A|D_1)\sum_{i=1}^\infty \mathbf P(\min(X,Y)=i)=\mathbf P(A|D_1)$$
We now remember that $X=\inf\{n\geq 1: X_n=1\}$ where $(X_i)_{i\geq 1}$ is a sequence of independent Bernoulli random variables with success probability $p$ and the same for $Y=\inf\{n\geq 1: Y_n=1\}$ where $(Y_i)_{i\geq 1}$, $Y_i\sim Ber(p)$.
That means
$$\mathbf P(A|D_1)=\frac{\mathbf P(X_1=1,Y_1=0)}{1-\mathbf P(X_1=0,Y_1=0)}
=\frac{p(1-p)}{1-(1-p)^2}=\frac{1-p}{2-p}$$
| Let $q=\mathbb P(X<Y)$. We condition on the outcome of the first trial for players $A$ and $B$.
If player $A$ gets a success, then $X<Y$ iff $B$ fails. This event happens with probability $p(1-p)$.
If $A$ gets a fail, then we need $B$ to fail too. After this we are in the same state as initially. So in this case $X<Y$ with probability $(1-p)^2q$.
So $$q=p(1-p)+(1-p)^2q\implies q=\frac{1-p}{2-p}.$$
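The closed form can also be double-checked in exact rational arithmetic (a sketch of mine, using the identity $\mathbf P(X<Y)=\sum_i \mathbf P(X=i)\,\mathbf P(Y>i)$ for geometric variables on $\{1,2,\dots\}$):

```python
from fractions import Fraction

# Exact check of P(X < Y) = (1-p)/(2-p) for i.i.d. Geometric(p) on {1,2,...}:
# P(X < Y) = sum_i P(X = i) P(Y > i) = sum_i (1-p)^(i-1) p * (1-p)^i.
def p_x_less_y(p, terms=500):
    q = 1 - p
    return sum(q**(i - 1) * p * q**i for i in range(1, terms + 1))

p = Fraction(1, 6)          # the dice example from the question
approx = p_x_less_y(p)      # truncated series, error < (5/6)^1000
exact = (1 - p) / (2 - p)
print(exact)                # -> 5/11
print(abs(approx - exact) < Fraction(1, 10**50))  # -> True
```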
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show an AEC with intersections is appropriately named I've just started learning the basics about AECs from this set of lecture notes. I am asked to show that if $M$ has intersections then the arbitrary intersection $S = \cap{S_i}$ is a strong substructure of $M$ if each $S_i$ is. (Here $A$ is said to be a strong substructure of $M$ if $A \leq_K M$.)
I've tried looking at $Cl(\cap{S_i})$ and the only thing I can think of is to use coherence, but this doesn't quite get me there.
| Suppose $S = \bigcap_{i\in I} S_i$, where $S_i\leq_K N$ for all $i\in I$. Then $$S\subseteq \text{cl}^N(S) = \bigcap \{M\leq_K N\mid S\subseteq M\} \subseteq \bigcap_{i\in I} S_i = S.$$
Since $K$ has intersections, we have $S = \text{cl}^N(S) \leq_K N$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $H$ be a subgroup of $G$ and $x,y \in G$. Show that $x(Hy)=(xH)y.$
Let $H$ be a subgroup of $G$ and $x,y \in G$. Show that $x(Hy)=(xH)y.$
I have that $Hy=\{hy \mid h \in H\}$ so wouldn't $x(Hy)=\{x hy \mid h \in H\}$? If so there doesn't seem to be much to be shown since if this holds I suppose that $(xH)y=\{x h y \mid h \in H\}$ would also hold and these two are clearly the same sets? Am I misinterpreting the set $x(Hy)$? Should this be $\{xhy \mid h \in H, y \in G\}$ for fixed $y$?
| If $w\in x(Hy),$ then for some $v\in Hy,$ $w=xv.$ Since $v\in Hy,$ for some $h\in H,$ $v=hy.$ So $w = xhy.$ So for some $u\in xH,$ $w=uy.$ Thus $w\in (xH)y.$
Therefore $x(Hy)\subseteq (xH)y.$
The inclusion in the other direction can be shown in the same way.
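Since the identity ultimately rests on associativity, it can also be verified exhaustively in a small nonabelian group (a sketch of mine; the choice of $G=S_3$, the subgroup $H$, and the composition convention are my own):

```python
from itertools import permutations

# Check x(Hy) = (xH)y in G = S_3 for all x, y, with H = {id, (0 1)}.
def comp(p, q):
    """Composition of permutations as tuples: (p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

G = list(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}  # the identity and the transposition swapping 0, 1

for x in G:
    for y in G:
        left = {comp(x, comp(h, y)) for h in H}   # x(Hy)
        right = {comp(comp(x, h), y) for h in H}  # (xH)y
        assert left == right
print("x(Hy) = (xH)y for all x, y in S_3")
```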
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Orthogonality of characters for powers of a character Suppose $G$ is a finite cyclic group, and let $\chi: G \to \mathbb{C}^*$ be a character of order $n$. That is, $\chi^n$ is the identity homomorphism. I came across the following relation in a paper but I can't quite prove it for myself:
$$
\frac{1}{n}\sum_{i=1}^n\chi^i(a)\overline{\chi}^i(b) =
\begin{cases}
1, &\chi(a) = \chi(b)\\
0, &\chi(a) \neq \chi(b)
\end{cases}
$$
The first case is easy to see - if $\chi(a) = \chi(b)$ then every term in the sum is $1$. But I can't quite see the case for $\chi(a) \neq \chi(b)$. I assume this is very closely related to the typical orthogonality relations for characters. Any thoughts?
Edit: I thought of a solution different to the posted answer so I thought I would also share.
If $\chi(a) \neq \chi(b)$ then the sum is
$$\frac{1}{n}\sum_{i=1}^n z^i $$
where $z$ satisfies $z^n = 1$ but $z \neq 1$. Thus, $z$ is a primitive $k$-th root of unity for some $k\mid n$ with $k>1$. So we can rewrite the sum as
$$\frac{1}{n}\cdot\frac{n}{k}\sum_{i=1}^k z^i. $$
This type of sum over a $k$-th root of unity is $0$.
| Since $\chi$ has order $n$, it is trivial on the subgroup $G^n$ of $n$th powers in $G$. For the purposes of this problem we can replace $G$ by $G/G^n$, on which $\chi$ is still a character. Therefore we can assume all elements of $G$ have trivial $n$th power and thus their orders divide $n$ (in terms of the original group $G$, these orders are really in the quotient group $G/G^n$).
Ignoring the $1/n$ outside the sum, we are looking at a sum of a character $\chi$ over the subgroup of $G$ (really, subgroup of $G/G^n$) that is generated by $ab^{-1}$ repeated $m$ times, where $m$ is $n/{\rm order \, of \,} ab^{-1}$. That order divides $n$ because of the reduction step we made. If $\chi(a)$ and $\chi(b)$ are not equal then $\chi(ab^{-1}) \not= 1$, so we are summing a nontrivial character $\chi$ over a finite abelian group $m$ times. Thus the sum is $0$. It is not necessary to assume $G$ is cyclic.
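Numerically (my addition, not part of the answer), the fact being used is just that a nontrivial root of unity sums to $0$ over a full period:

```python
import cmath

# If z^n = 1 and z != 1, then sum_{i=1}^{n} z^i = 0 (geometric series:
# z(z^n - 1)/(z - 1) = 0). Check all nontrivial 12th roots of unity.
n = 12
for k in range(1, n):
    z = cmath.exp(2j * cmath.pi * k / n)  # z^n = 1, z != 1
    s = sum(z**i for i in range(1, n + 1))
    assert abs(s) < 1e-9
print("every nontrivial n-th root of unity sums to 0 over a full period")
```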
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What does a set of functions from $A$ to $B$ belong to? To understand the set-theoretic definition of functions, I tried to find a set that contains a set of all functions from $A$ to $B$ for (possibly empty) sets $A$ and $B$.
$
\newcommand{\eqv}{\Leftrightarrow}
\newcommand{\imply}{\Rightarrow}
\newcommand{\powset}{\mathcal{P}}
$
My approach (possibly informal):
Because a function is a subset of binary relation,
$$
f \in A \to B
\imply
f \subseteq A \times B
\imply
f \in \powset(A \times B)
$$
where $A \to B$ denotes the set of all functions from $A$ to $B$.
Therefore,
$$
\forall f: f \in A \to B \imply f \in \powset(A \times B)
$$
i.e., $A \to B \subseteq \powset(A\times B)$.
Finally, we have
$$
A \to B \in \powset(\powset(A\times B))
$$
Is this correct? What is the domain of discourse of $f$ here? (concerning $\forall f$)
| Yes, the set of maps from $A$ to $B$ is a subset of $\mathcal{P}(\mathcal{P}(A \times B))$. It is more often written $B^A$ than $A \to B$. In the statement
$(\forall f) (f \in B^A \Rightarrow f \in \mathcal{P}(A \times B))$, the domain of discourse is the not-a-set of all sets, assuming that you're working in ZFC.
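For small finite sets this containment can be enumerated directly (a sketch of mine, identifying each function with its graph):

```python
from itertools import product

# Every function f: A -> B, viewed as its graph (a set of pairs), is an
# element of P(A x B); hence B^A is a subset of P(A x B), i.e. an element
# of P(P(A x B)).
A, B = {0, 1}, {'a', 'b', 'c'}
AxB = set(product(A, B))

functions = [frozenset(zip(sorted(A), values))
             for values in product(sorted(B), repeat=len(A))]

assert len(functions) == len(B) ** len(A)   # |B^A| = 3^2 = 9
assert all(f <= AxB for f in functions)     # each graph lies inside A x B
print(len(functions))  # -> 9
```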
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4319939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Rolling a die - Conditional Probability A die is thrown repeatedly.
Let $X$ ~ First 5 is thrown and $Y$ ~ First 6 is thrown
Calculate $\mathbb{E}(X|Y=3)$
You may use the identity: $\sum_{n=k}^\infty nz^{n-k} = \frac{1}{(1-z)^2}+\frac{k-1}{1-z}$
I know from the definition of expectation, we have:
$\mathbb{E}(X|Y=3) = (1*\frac{1}{5})+(2*\frac{4}{5} * \frac{1}{5}) + (3* \frac{4}{5}* \frac{4}{5} * 0) + (5* \frac{4}{5} * \frac{4}{5} * 1 * \frac{5}{6} * \frac{1}{6}) + (6* \frac{4}{5} * \frac{4}{5} * 1 * \frac{5}{6} * \frac {5}{6} * \frac{1}{6}) + ...$, where every following term, has an extra '$*\frac{5}{6}$' term and constant increases by 1.
However I am unsure of how to apply this to the identity given to find the value of the infinite sum?
| $\mathbb{P}(X=1|Y=3)=\frac{1}{5},\mathbb{P}(X=2|Y=3)=\frac{4}{25},\mathbb{P}(X>3|Y=3)=1−(\frac{1}{5}+\frac{4}{25})=\frac{16}{25}.$
Then we multiply these by the expected results, i.e. $1,2$ and $9$, giving $\mathbb{E}(X|Y=3)=(1∗\frac{1}{5})+(2∗\frac{4}{25})+(9∗\frac{16}{25})=6.28$
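As a sanity check of the value $6.28$ (my own sketch, not part of the answer; it assumes, as in the question, that $X$ and $Y$ are read off the same repeatedly thrown die):

```python
import random

# Monte Carlo check of E[X | Y = 3] = 157/25 = 6.28, where X is the first
# throw showing a 5 and Y the first throw showing a 6 on one repeated die.
def sample_conditional(trials, seed=0):
    rng = random.Random(seed)
    total = count = 0
    while count < trials:
        x = y = None
        t = 0
        while x is None or y is None:
            t += 1
            r = rng.randint(1, 6)
            if r == 5 and x is None:
                x = t
            elif r == 6 and y is None:
                y = t
        if y == 3:          # keep only runs with the first 6 on throw 3
            total += x
            count += 1
    return total / count

print(sample_conditional(50_000))  # close to 6.28
```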
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4320165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Equality of submodules satisfying given conditions - Verification of solution Question:
Let $M$ be a left $R$-module($R$ is a unitary ring). let $M_1,M_2,N$ be $R-$submodules of $M$ such that $M_1\subseteq M_2$, $M_1+N=M_2+N$ and $M_1 \cap N=M_2 \cap N$. Are $M_1,M_2$ necessarily equal?
Attempt: I think I have a proof. I figured that $M_1=M_2$ if and only if $\frac{M_1}{M_1 \cap N}=\frac{M_{2}}{M_{2}\cap N}$. I also see that $\frac{M_1+N}{N}=\frac{M_2+N}{N}$ and hence the canonical isomorphisms $\natural_1:\frac{M_1+N}{N}\rightarrow \frac{M_1}{M_1\cap N}$ and $\natural_2:\frac{M_2+N}{N}\rightarrow \frac{M_2}{M_2\cap N}$ are identical. This proves the result.
It would be great if this can be verified. If it is correct, I don't have a good intuitive understanding as to why this holds. Hence, any interpretation/intuition or alternative simpler argument is highly appreciated. If wrong, please point out the error(s). Thank you.
| This is unfortunately not true even for vector spaces. For example, in $\mathbb{R}^2$ you can let $N=\mathrm{span}(e_1)$, $M_1=\mathrm{span}(e_2)$, and $M_2=\mathrm{span}\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. Thus $N$ is the $x$-axis, $M_1$ is the $y$-axis, and $M_2$ is the line $y=x$. Then $$M_1+N=\mathbb{R}^2=M_2+N$$ and $$M_1\cap N = \{0\} = M_2\cap N.$$ But $M_1\neq M_2$.
Edit: It IS true if you require $M_1\subseteq M_2$. The proof is purely set-theoretic, no homomorphisms required; it goes as follows. Let $m_2\in M_2$. Then $m_2$ is in $M_2+N$, so it must also be in $M_1+N$ by assumption, so $m_2=m_1+n$ for some $m_1\in M_1$ and $n\in N$. But then $m_1$ is also in $M_2$ by assumption, so $n=m_2-m_1$ belongs to $M_2$. This shows $n\in M_2\cap N$, so $n\in M_1\cap N$. Thus $n$ belongs to $m_1$, so $m_2=m_1+n$ also belongs to $M_1$.
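The counterexample can even be checked exhaustively if one replaces $\mathbb{R}$ by the two-element field, where the spans are finite sets (a sketch of mine; the swap to $\mathbb{F}_2$ is my own simplification):

```python
# F_2-analog of the counterexample: in V = F_2^2 take
# N = span(e1), M1 = span(e2), M2 = span((1,1)).
def span(v):
    return {(0, 0), v}

def add_sets(S, T):
    """Sumset S + T computed coordinatewise mod 2."""
    return {((s0 + t0) % 2, (s1 + t1) % 2) for (s0, s1) in S for (t0, t1) in T}

N, M1, M2 = span((1, 0)), span((0, 1)), span((1, 1))

assert add_sets(M1, N) == add_sets(M2, N)   # M1 + N = M2 + N = V
assert M1 & N == M2 & N == {(0, 0)}         # M1 ∩ N = M2 ∩ N = {0}
assert M1 != M2
print("sums and intersections agree, yet M1 != M2")
```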
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4320379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What Pi-System is used to define independence for random variables?
Independent random variables
The theory of $\pi$-system plays an important role in the probabilistic notion of independence. If $X$ and $Y$ are two random variables defined on the same probability space $(\Omega, \mathcal{F}, \mathrm{P})$ then the random variables are independent if and only if their $\pi$-systems $\mathcal{I}_{X}, \mathcal{I}_{Y}$ satisfy
$\mathrm{P}[A \cap B]=\mathrm{P}[A] \mathrm{P}[B] \quad$ for all $A \in \mathcal{I}_{X}$ and $B \in \mathcal{I}_{Y}$
which is to say that $\mathcal{I}_{X}, \mathcal{I}_{Y}$ are independent. This actually is a special case of the use of $\pi$-systems for determining the distribution of $(X, Y)$.
There are many Pi-Systems for a set $\Omega.$ Which one do we mean when we define independence of random variables?
| The quote appears to be from Wikipedia: https://en.wikipedia.org/wiki/Pi-system#Independent_random_variables
Looking in earlier sections, we see $\mathcal{I}_f$ is defined under Examples
For any measurable function $f: \Omega \to \mathbb{R}$, the set $\mathcal{I}_f = \{f^{-1}((-\infty,x]) : x \in \mathbb{R}\}$ defines a $\pi$-system, and is called the $\pi$-system generated by $f$.
So in words, given a real-valued random variable $X$ on probability space $(\Omega, \mathcal{F}, P)$, each subset of $\Omega$ which is in the $\pi$-system $\mathcal{I}_X$ is the set of all outcomes which give a value of $X$ not larger than a specific upper bound.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4320509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Method of Undetermined Coefficients using X's on left hand side I have a couple questions on this question.
The question is asking me to find the general solution to
$$x'' + 6x' + 9x = \cos(2t) + \sin(2t).$$
Solving for the general solution, I got $$Y_c = C_1e^{-3x}+ C_2xe^{-3x}.$$
I was wondering if the fact that the left hand side uses $x$'s instead of $y$'s matters? For almost every question, it uses $y$'s instead of $x$'s.
Also, for $Y_p$, I got $Y_p = A\cos(2t) + B\sin(2t)$.
I ultimately got the answer to be $$C_1e^{-3x}+ C_2xe^{-3x} - (7\cos(2t)/169) +(17\cos(2t)/169),$$ which I do not feel to be correct.
| The complementary solution is $x_c(t)=C_1e^{-3t}+C_2te^{-3t}$. Note: $x$ is the dependent variable and $t$ is the independent variable in this case.
Moreover, the complementary solution is the solution to the following second-order homogeneous differential equation: $$x_c''+6x_c'+9x_c=0.$$ Finally, in order to get the particular solution, one must guess it to be: $$x_p(t)=A\cos(2t)+B\sin(2t)$$ where $A$ and $B$ are to be found using the Method of Undetermined Coefficients.
After substitution and some algebraic manipulation, the equation becomes:
$$(5A+12B)\cos(2t)+(-12A+5B)\sin(2t)=\cos(2t)+\sin(2t)$$
This implies one gets the following system of two equations with two unknowns via Method of Undetermined Coefficients:
*
*$5A+12B=1$,
*$-12A+5B=1$.
Using Cramer’s rule to solve the system, the solution to the system is $A=-7/169$ and $B=17/169$.
This implies the particular solution is $$x_p(t)=-\frac7{169}\cos(2t)+\frac{17}{169}\sin(2t).$$
Hence, the general solution to the non-homogeneous second-order differential equation is:
$$X(t)=x_c(t)+x_p(t)=C_1e^{-3t}+C_2te^{-3t}-\frac7{169}\cos(2t)+\frac{17}{169}\sin(2t)$$ where $C_1$ and $C_2$ are constants.
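As a quick numerical spot-check (my addition), one can plug $x_p$ into the left-hand side using the closed-form derivatives $x_p'=-2A\sin(2t)+2B\cos(2t)$ and $x_p''=-4A\cos(2t)-4B\sin(2t)$:

```python
import math

# Check that x_p(t) = -7/169 cos(2t) + 17/169 sin(2t) satisfies
# x'' + 6x' + 9x = cos(2t) + sin(2t).
A, B = -7 / 169, 17 / 169

def lhs(t):
    x   =  A * math.cos(2 * t) + B * math.sin(2 * t)
    dx  = -2 * A * math.sin(2 * t) + 2 * B * math.cos(2 * t)
    ddx = -4 * A * math.cos(2 * t) - 4 * B * math.sin(2 * t)
    return ddx + 6 * dx + 9 * x

for t in (0.0, 0.5, 1.3, 2.7):
    assert abs(lhs(t) - (math.cos(2 * t) + math.sin(2 * t))) < 1e-12
print("x_p satisfies the ODE")
```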
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4320773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Solution Set Inequality Continuous Function, Probability Theory I am trying to understand the Tchebychev's inequality, which turns into understanding the cardinality of solutions sets.
Here is what i've got so far. Tchebychev's inequality states that for $c > 0$:
$$
\mathbb{P}(|X - E[X]| > c) \leq \frac{Var(X)}{c^2}
$$
This comes from application of Markov inequality on $(X - E[X])^2 > c^2$:
$$
\mathbb{P}((X - E[X])^2 > c^2) \leq \frac{Var(X)}{c^2}
$$
and taking the square root:
$$
\sqrt{(X - E[X])^2} > \sqrt{c^2} \Rightarrow |X - E[X]| > c
$$
because $\sqrt{x^2} = |x|$ and $a < b \Rightarrow \sqrt{a} < \sqrt{b}$ for $a,b \in \mathbb{R}_{\geq 0}$. Now my understanding is that we require:
$$
\{|X - E[X]| > c\} \subseteq \{(X - E[X])^2 > c^2\}
$$ so that
$$
\mathbb{P}(\{|X - E[X]| > c\}) \leq \mathbb{P}(\{(X - E[X])^2 > c^2\}) \leq \frac{Var(X)}{c^2}
$$
otherwise it could not be guaranteed that
$$
\mathbb{P}(\{|X - E[X]| > c\}) \leq \frac{Var(X)}{c^2}
$$
So my question really is why does:
$$
\{|X - E[X]| > c\} \subseteq \{(X - E[X])^2 > c^2\}
$$
hold?
| If $f:\mathcal{Y}\subseteq \mathbb{R}\to\mathbb{R}$ is a strictly increasing function,
$$
y>c \Leftrightarrow f(y)>f(c).
$$
Thus, for a random variable $Y$,
$$
\{\omega: Y(\omega)>c\} =\{\omega: f(Y(\omega))>f(c)\}.
$$
In your case, $Y=|X-\mathsf{E}X|$ and $f(y)=y^2$ which is strictly increasing on $\mathcal{Y}=\mathbb{R}_{>0}$.
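The point that $\{|Y|>c\}$ and $\{Y^2>c^2\}$ are literally the same set of outcomes can be illustrated on random samples (my own sketch, not part of the answer):

```python
import random

# For any outcome, |Y| > c holds iff Y^2 > c^2 (for c > 0), so the events
# {|X - EX| > c} and {(X - EX)^2 > c^2} are the same subset of Omega.
rng = random.Random(1)
c = 0.8
for _ in range(10_000):
    y = rng.gauss(0.0, 1.0)  # stands in for X - E[X]
    assert (abs(y) > c) == (y * y > c * c)
print("the two events coincide outcome by outcome")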
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4320978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Upper bound for the trace of the product of a matrix and a positive matrix Let $A \in M_N(\mathbb C)$ be positive semi-definite (i.e. $A = C^*C$ for some $C \in M_N(\mathbb C)$) and let $B \in M_N(\mathbb C)$ be arbitrary. Is it then true that
$$\tag{1}
\lvert Tr(AB) \rvert \leq \lVert B\rVert Tr(A)
$$
where $\lVert B \rVert$ denotes the operator norm of $B$?
I have shown the above inequality with the Frobenius norm instead of the operator norm and I have shown that
$$
\lvert Tr(AB) \rvert \leq \lvert Tr(B) \rvert \lVert A \rVert
$$
but I'm unable to prove of disprove (1).
| By von Neumann's trace inequality, $|\operatorname{tr}(AB)|\le\sum_i\sigma_i(A)\sigma_i(B)$. Since the singular values of $A$ are the eigenvalues of $A$ and $\sigma_i(B)\le\|B\|_2$ for each $i$, the result follows.
By the way, your claim that $|\operatorname{tr}(AB)|\leq|\operatorname{tr}(B)|\|A\|_2$ is false. Consider $A=\pmatrix{1&1\\ 1&1}$ and $B=\pmatrix{0&1\\ 0&0}$ for instance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4321133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Question about the limit $\displaystyle\lim_{n\rightarrow \infty}\sqrt[n]{3^n+ 2^n}$ I have one question about limits: it is required to find the limit $\displaystyle\lim_{n \rightarrow \infty}\sqrt[n]{3^n + 2^n}$.
I calculated it like this:
$$(3^n+2^n)^{1/n} = \left(3^n \cdot \frac{3^n+2^n}{3^n}\right)^{1/n} \rightarrow (3^n)^{1/n} = 3.$$
but what if I divide, say by $5^n$:
$$\left(5^n \cdot \frac{3^n+2^n}{5^n}\right)^{1/n} \rightarrow (5^n \cdot 0)^{1/n} = 0.$$
Why is the second solution wrong?
| Notice that $$(3^n+2^n)^{1/n}=\left(5^n\cdot\frac{3^n+2^n}{5^n}\right)^{1/n}=5\cdot\left[\left(\frac35\right)^n+\left(\frac25\right)^n\right]^{1/n}.$$ Thus, when you concluded that the second limit is $0,$ you virtually assumed that $$\lim_{n\to\infty}\left[\left(\frac35\right)^n+\left(\frac25\right)^n\right]^{1/n}=0,$$ which is false. Your assumption is that since $$\lim_{n\to\infty}\left(\frac35\right)^n+\left(\frac25\right)^n=0,$$ that the former limit must also be $0.$ However, this is not the case: in the cases where the exponent has limit $0,$ it is not sufficient for the base to have limit $0$ in order for the entire power to have limit $0.$ The exponent is $1/n,$ and notice that $$\lim_{n\to\infty}\frac1{n}=0.$$ As such, your second calculation is simply incorrect. In general, if $$\lim_{n\to\infty}a_n=0$$ and $$\lim_{n\to\infty}b_n=0,$$ then $$\lim_{n\to\infty}{b_n}^{a_n}$$ can be equal to any nonnegative real number, or it can even not exist. You cannot simply conclude it to be $0.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4321364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$\|S(t)\| \leq Me^{ct} \Rightarrow \|u\|_{C(0,T,H)} \leq \|u_0\|_H+\|f\|_{L^1(0,T,H)}$? Let $A$ be the infinitesimal generator of a $C_0$-semigroup of contractions $(S(t))$ in a Hilbert space $H$ and $f\in L^1(0,T;H)$. We know that the mild solution of the problem
\begin{equation}
\begin{cases}
u_t=Au+f,\\
u(0)=u_0 \in H,
\end{cases}
\end{equation}
is given by
$$u(t)=S(t)u_0+\int_{0}^{t}S(t-s)f(s)ds.\tag{*}$$
Then, since $\|S(t)\|_{\mathcal{L}(H)}\leq 1$ for all $t \geq 0$ it follows that
$$\|u\|_{C(0,T,H)} \leq \|u_0\|_H+\|f\|_{L^1(0,T,H)} \tag{**}.$$
My question: Does the inequality $(**)$ holds with the assumption $\|S(t)\| \leq Me^{ct}$ for all $t \geq 0$, with $M,c>0$?
From $(*)$ we have that:
$$\|u\|_{C(0,T,H)} \leq Me^{cT}(\|u_0\|_{H}+\|f\|_{L^1(0,T,H)}).$$
Is there any way to remove the term $Me^{cT}$ from the last inequality?
| To see that this bound is the best that you can get in the general setting, consider the case $H = \mathbb{R}$ and $S(t)x = e^t \cdot x$. Then $Ax = x$ and the equation in question is $$u_t - u = f.$$
To see an example where equality is achieved in your bound, consider $f = 0$ and $u_0 = 1$. Then one has $u(t) = e^{t}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4321529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Map between formal neighbourhoods Let $M$ be a moduli stack of elliptic curves. The $j$-invariant defines a map $M \to \mathbb{A}^1$. In the article by Fulton and Olsson they say that the map $\mathcal{O}_{\mathbb{A}^1, 1728} \to \mathcal{O}_{M, y^2=x^3-x}$ after completion looks like $k[[t]] \to k[[z]]$ where $t$ goes to $z^2$.
How can I show it? It seems not really intuitive because this point $1728$ is special, namely the group of automorphisms of $y^2=x^3-x$ is $\mathbb{Z}/6$. Any ideas are greatly appreciated!
| The elliptic curve with $j$-invariant $1728$ has automorphism group $\mathbb{Z}/4\mathbb{Z}$ away from characteristic $2$ and $3$. A generic elliptic curve has automorphism group $\mathbb{Z}/2\mathbb{Z}$, so the coarse moduli map $j : M \to \mathbb{A}^1$ is ramified to order $2$ over $j = 1728$. Since both source and target are smooth curves, we know that formally locally any map with ramification $2$ can be written as $t \mapsto z^2$ with an appropriate choice of coordinates $z$ and $t$.
Alternatively, you can take an explicit étale cover of $M$, for example the one given by the Legendre family $y^2 = x(x-1)(x- \lambda)$, and just compute the $j$ invariant. In this case we have
$$
j = 2^8\frac{(\lambda^2 - \lambda + 1)^3}{\lambda^2(\lambda -1)^2}
$$
and you can directly compute that $j = 1728$ is a critical value of $j(\lambda)$ with multiplicity $2$ so after formal completion and an appropriate choice of coordinates the map has to look like $t \mapsto z^2$.
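This can be checked numerically (my own sketch, plain Python, not part of the original answer): the fiber of $j$ over $1728$ on the $\lambda$-line is $\{-1, \tfrac12, 2\}$, and a finite-difference derivative of $j(\lambda)$ vanishes at each of these points, consistent with multiplicity $2$:

```python
# j-invariant of the Legendre family y^2 = x(x-1)(x - lambda)
def j(lam):
    return 256 * (lam**2 - lam + 1)**3 / (lam**2 * (lam - 1)**2)

# Each point of the fiber over 1728 is a critical point of j.
for lam0 in (-1.0, 0.5, 2.0):
    h = 1e-6
    deriv = (j(lam0 + h) - j(lam0 - h)) / (2 * h)
    print(lam0, j(lam0), deriv)   # j = 1728 and derivative ~ 0 at each point
```

Since $j - 1728$ vanishes to order exactly $2$ at each of these $\lambda$, the map is formally $t \mapsto z^2$ there.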
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4321710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Binomial distribution problem with inconsistent probability of occurrence
| 0.4 | 0.5 | 0.3 | 0.4 | 0.5 |
| 0.4 | 0.5 | 0.3 | 0.4 | 0.5 |
| 0.4 | 0.5 | 0.3 | 0.4 | 0.5 |
| 0.4 | 0.5 | 0.3 | 0.4 | 0.5 |
Each cell of the matrix contains that cell's winning probability.
What is the probability that more than 4 cells win if all cells are randomized at the same time?
Because each cell has a different probability, I can't solve this with a single binomial distribution, and I can't figure out what I should do.
| There are $4$, resp. $8$ and $8$ cells with probability $0.3$, resp. $0.4$ and $0.5$.
The number of successes is:
$$X=X_1+X_2+X_3 \tag{1}$$
where $X_1 \sim Bin(4,0.3), \ \ X_2 \sim Bin(8,0.4), \ \ X_3 \sim Bin(8,0.5), \ \ $
The PGF of a $Bin(n,p)$ distribution is $E(s^X)=(q+ps)^n$, where $q=1-p$.
Therefore, as a consequence of (1), due to the independence of the $X_i$, the PGF of $X$ is $E(s^X)=E(s^{X_1})E(s^{X_2})E(s^{X_3})$, i.e.,
$$(0.7+0.3s)^4(0.6+0.4s)^8(0.5+0.5s)^8=p_0+p_1s+p_2s^2+\cdots p_{20}s^{20}\tag{2}$$
where $p_k:=P(X=k)$.
These coefficients can be obtained using Wolfram Alpha,
giving the final answer:
$$P(X>4)=1-P(X \le 4)=1-\sum_{k=0}^4 p_k \approx 1-0.0332812=0.9667188$$
Remark: $E(X)=4 \times 0.3 + 8 \times 0.4 + 8 \times 0.5=8.4$, this result being concordant with the fact that the most important values are $p_8$ and $p_{9}$.
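The polynomial product in $(2)$ can also be carried out numerically, without a CAS, by convolving PMF coefficient arrays (my own illustration, assuming `numpy` is available; the helper name `binom_pgf_coeffs` is mine):

```python
import numpy as np

def binom_pgf_coeffs(n, p):
    """Coefficients of (q + p*s)^n; entry k is P(Bin(n, p) = k)."""
    coeffs = np.array([1.0])
    for _ in range(n):
        coeffs = np.convolve(coeffs, [1.0 - p, p])
    return coeffs

# Multiply the three PGFs from (2) by convolving their coefficient arrays.
pmf = np.convolve(np.convolve(binom_pgf_coeffs(4, 0.3),
                              binom_pgf_coeffs(8, 0.4)),
                  binom_pgf_coeffs(8, 0.5))
print(1.0 - pmf[:5].sum())   # P(X > 4)
```

The printed value should agree with the $\approx 0.9667$ obtained above, and `np.arange(21) @ pmf` recovers the mean $8.4$.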
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4321853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability with card deck flips Here is my problem : we flip cards from a 52-card standard well-shuffled deck until the first club appears. I am looking to calculate the probability that the next card at the $k+1$th flip is also a club given that the $k$th flip is a club. Let $T$ be the flip on which we encounter the first club. Thanks to this answer I get
$$\mathbb{E}[T]=\frac{53}{14} \approx 3.7857$$
Now let $Y_n=1$ if we flip a club on the $n$th flip and $Y_n=0$ if we flip another suit. The number of clubs flipped amongst the first $n$ flips would be
$$C_n=\sum_{k=1}^n Y_k$$
with $C_T=1$. After the $n$th flip, we have $\tilde{X}_n$ clubs remaining in the deck with proportion $X_n$:
$$X_n =\frac{\tilde{X}_n}{52 - n}, \ \tilde{X}_n = 13 - C_n$$
with $\tilde{X}_T=12$. So
$$X_T = \frac{13-C_T}{52 - T} = \frac{12}{52 - T}$$
We get
$$\mathbb{E}[X_T] = \frac{12}{52 - 3.7857} \approx 0.2489$$
the probability that the next card is a club. Can I use $\mathbb{E}[T]$ in the denominator like this? Thanks!
| Fix the number of clubs ($k$). Let $Q_n$ be the desired probability for a deck of $n \ge k$ cards with exactly $k$ clubs. Either the first card is a club (with probability $k/n$), in which case the second card is a club with probability $(k-1)/(n-1)$; or else the first card is not a club (with probability $1-k/n$), and we've reduced to the $n-1$ case. That is:
$$
Q_n=\frac{k(k-1)}{n(n-1)}+\frac{n-k}{n}Q_{n-1},
$$
with the boundary condition that $Q_k=1$.
Playing with this recursion soon leads to the conjecture that $Q_n=k/n$, which may be proved by induction:
$$
Q_{n-1}=k/(n-1)\implies \\ Q_n=\frac{k(k-1)}{n(n-1)}+\frac{n-k}{n}\cdot\frac{k}{n-1}=\frac{k(k-1)+(n-k)k}{n(n-1)}=\frac{-k+nk}{n(n-1)}=\frac{k}{n}.
$$
In other words, the card after the first club is exactly as likely to be a club as any other card in the deck.
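A quick Monte Carlo check of the $k/n = 13/52 = 0.25$ conclusion (my own sketch, not part of the original answer):

```python
import random

random.seed(0)

def next_after_first_club_is_club():
    deck = [1] * 13 + [0] * 39   # 1 = club, 0 = any other suit
    random.shuffle(deck)
    i = deck.index(1)            # the first club; i <= 39, so i + 1 is valid
    return deck[i + 1] == 1

trials = 200_000
estimate = sum(next_after_first_club_is_club() for _ in range(trials)) / trials
print(estimate)
```

The estimate lands near $0.25 = 13/52$, not near the $0.2489$ obtained by plugging $\mathbb{E}[T]$ into the denominator.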
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Shuffling $\{1,2,\dots,n\}$ into an index loop I am not sure if index loop is a proper name, but what I mean is that if you are at position $i$ of the array $A$ (1-based indexing), then your next position will be at $A[i]$, and your next-next position will be $A[A[i]]$, and so on. An index loop forms if you eventually visit all elements of the array and return to your starting point. WLOG, one always starts from index 1.
For example, let $A=\{1,2,3,4,5\}$. Then $A' = \{5,4,3,2,1\}$ won't work because one just jumps back and forth between the first and last elements. But $A'=\{5,4,1,3,2\}$ works.
So, what is the criterion of a successful shuffle? And how many ways of shuffling are there given an array of length $n$?
If this is a well-known and solved problem, kindly let me know.
| The index loop you are concerned with is called a 'cyclic permutation' (an $n$-cycle) of length $n$.
Any permutation of a sequence can be decomposed into a product of disjoint cycles.
You may find the wiki page useful: https://en.wikipedia.org/wiki/Cyclic_permutation.
The number of $n$-cycles is given by $(n-1)!$.
Or directly: consider what $1$ can map to, say $i$ (there are $n-1$ choices), then what $i$ can map to (now $n-2$ remaining choices), and so on, multiplying the numbers of choices together:
$$ \text{number of $n$-cycles} = (n-1)\times(n-2)\times\cdots\times 1 = (n-1)! $$
The following is an addendum, prompted by the comments, on the number of cycle permutations compared with the number of all permutations.
Obviously the number of all permutations is $N_p(n) = n!$.
The total number of cycle permutations, denoted as $N_c(n)$, can be formulated as:
\begin{equation}
N_c(n) = \sum_{k = 1}^{n}\binom{n}{k}(k - 1)! = \sum_{k = 1}^{n}\frac{n!}{k(n - k)!}
\end{equation}
Notice that from Chebyshev's sum inequality (the sequence $1/k$ is decreasing in $k$ while $1/(n-k)!$ is increasing, so the two are oppositely ordered) we have
\begin{equation}
\begin{aligned}
N_c(n) & = n!\sum_{k = 1}^{n}\frac{1}{k(n - k)!} = n!\sum_{k = 1}^{n}\frac{1}{k}\cdot\frac{1}{(n - k)!} \\
& \le n!\cdot\frac{1}{n}\left(\sum_{k = 1}^{n}{\frac{1}{k}}\right) \left(\sum_{k = 1}^{n}{\frac{1}{(n - k)!}}\right) \\
& = (n - 1)!\left(\sum_{k = 1}^{n}{\frac{1}{k}}\right)\sum_{k = 0}^{n - 1}{\frac{1}{k!}} \\
\end{aligned}
\end{equation}
Obviously the series $\sum_{k = 0}^{+\infty}{(k!)^{-1}}$ converges so $\sum_{k = 0}^{n}{(k!)^{-1}}$ is bounded above by a fixed number $M$.
Now we can get to the ratio of $N_c(n)$ in $N_p(n)$:
\begin{equation}
\begin{aligned}
\varepsilon_n & = \frac{N_c(n)}{N_p(n)} \\
& \le \frac{(n - 1)!\left(\sum_{k = 1}^{n}{\frac{1}{k}}\right)\sum_{k = 0}^{n - 1}{\frac{1}{k!}}}{n!} \\
& \le \frac{M}{n}(\ln{n} + 1) \\
\end{aligned}
\end{equation}
Clearly, $\varepsilon_n \to 0$ as $n \to \infty$.
Therefore the number of cycle permutations is far smaller than the number of all permutations, which matches our intuition.
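For small $n$ the $(n-1)!$ count of full-length cycles can be confirmed by brute force (my own sketch; `is_index_loop` follows the jumping rule from the question):

```python
from itertools import permutations
from math import factorial

def is_index_loop(a):
    """Start at position 1 and repeatedly jump to a[i]; do we visit all entries?"""
    seen = set()
    i = 1
    while i not in seen:
        seen.add(i)
        i = a[i - 1]        # 1-based indexing, as in the question
    return len(seen) == len(a)

for n in range(1, 8):
    count = sum(is_index_loop(p) for p in permutations(range(1, n + 1)))
    assert count == factorial(n - 1)
print("verified (n-1)! for n = 1..7")
```

The question's examples behave as expected: `(5, 4, 1, 3, 2)` is an index loop, while `(5, 4, 3, 2, 1)` is not.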
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find CDF of PDF I'm doing probability exercises (I'm a student) and there is something I don't understand: how do we find the bounds of integration for the CDF, given a PDF?
The PDF is $f(x,y) = k$, which is obviously uniform, with the constraint $0<x<y<1$.
well we know that $\int_{}^{}\int_{}^{}f(x,y)dydx = 1$
$\int_{0}^{1}\int_{x}^{1}kdydx = 1$
so $k/2 = 1$, hence $k = 2$; we can verify that $k = 2$ by integrating the function over its whole domain, which gives $1$.
When I want to find the CDF I do this :
$\int_{0}^{x}\int_{0}^{y} 2dvdu = \int_{0}^{x} 2y du = 2xy$
I know that I'm wrong because $x$ and $y$ are dependent on each other, and here I'm also counting the part where $x>y$, but I don't know how to bound the integrals to respect the dependence. I followed the formula for a CDF given a PDF: $F(x,y) = \int_{-\infty}^{x}\int_{-\infty}^{y} f(u,v)dvdu$
Thank you in advance.
| The joint cdf is given by
$P(X\leq u,Y\leq v)$. Since $f(x,y)>0$ only for $0<x<y<1$, we have to look at whether $u<v$ or $u\geq v$ and separate the cases.
If $u\geq v$, then
$P(X\leq u, Y\leq v) = P(X\leq v, Y\leq v) = \int_{0}^{v}\int_{0}^{y}2\,dx\,dy=v^{2}$.
If $u<v$, then $P(X\leq u,Y\leq v)=2(uv-\frac{1}{2}u^{2})$.
So $$F(x,y)= \begin{cases} y^{2}\cdot\mathbf{1}_{\{x\geq y,\,0\leq y\leq 1\}}\\2(xy-\frac{x^{2}}{2})\cdot\mathbf{1}_{\{x<y,\,0\leq y\leq1\}}\\2(x-\frac{x^{2}}{2})\cdot\mathbf{1}_{\{y>1\,,\,0\leq x\leq 1\}}\\1\cdot \mathbf{1}_{\{x>1,y>1\}} \\0,\,\text{elsewhere} \end{cases}$$
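A simulation can sanity-check one branch of this CDF (my own sketch, not part of the original answer; it samples the density $f(x,y)=2$ on the triangle $0<x<y<1$ by rejection):

```python
import random

random.seed(1)

def sample_xy():
    # Rejection sampling: uniform on the unit square, keep only x < y,
    # which yields the uniform density f(x, y) = 2 on the triangle.
    while True:
        x, y = random.random(), random.random()
        if x < y:
            return x, y

trials = 200_000
hits = 0
for _ in range(trials):
    x, y = sample_xy()
    if x <= 0.3 and y <= 0.6:
        hits += 1

estimate = hits / trials
exact = 2 * (0.3 * 0.6 - 0.3 ** 2 / 2)   # the x < y branch: F(0.3, 0.6) = 0.27
print(estimate, exact)
```

The empirical frequency matches the formula $2(xy - x^2/2)$ at $(0.3, 0.6)$.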
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Not sure how to finish this integral $\int_{-\infty}^\infty x^3 \delta(x^2-2)dx$ Dirac delta is a symmetric function defined as $$\int_{-\infty}^\infty f(t)\delta(t-A)dt = f(A)$$
Find the value of $$\int_{-\infty}^\infty x^3 \delta(x^2-2)dx$$
SOLUTION:
Let $t=x^2$ then $dt=2xdx \rightarrow dx = dt/(2x)$ and the integral is $$\int_{-\infty}^\infty tx \delta(t-2)dt/(2x) = \int_{-\infty}^\infty t/2 \delta(t-2)dt = f(A).$$ I am not quite sure what to do now in terms of finishing the problem...
| You can use the property
$$
\int_{-\infty}^{+\infty}f(x)\delta(g(x))~{\rm d}x = \sum_i \frac{f(x_i)}{|g'(x_i)|}
$$
where $x_i$ are the roots of $g$: $g(x_i) = 0$. In your case you have $g(x) = (x - 2^{1/2})(x + 2^{1/2})$, $x_1 = 2^{1/2}$ and $x_2 = -2^{1/2}$, $g'(x) = 2x$ and $f(x) = x^3$.
Putting everything together
\begin{eqnarray}
\int_{-\infty}^{+\infty}x^3\delta(x^2 - 2)~{\rm d}x = \frac{(2^{1/2})^{3}}{|2(2^{1/2})|} + \frac{(-2^{1/2})^{3}}{|2(-2^{1/2})|} = \cdots
\end{eqnarray}
can you take it from here?
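To see the root-sum formula in action numerically, one can replace $\delta$ by a narrow Gaussian nascent delta (my own sketch using `numpy`, not part of the original answer; `eps` controls the width and is an assumption):

```python
import numpy as np

eps = 1e-2                       # width of the nascent delta (small)
x = np.linspace(-5.0, 5.0, 400_001)
dx = x[1] - x[0]

# Replace delta(u) by a narrow Gaussian in the variable u = x^2 - 2.
delta = np.exp(-0.5 * ((x**2 - 2.0) / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

odd = np.sum(x**3 * delta) * dx    # the integral in question
even = np.sum(x**2 * delta) * dx   # compare: sum x_i^2 / |2 x_i| = sqrt(2)
print(odd, even)
```

`odd` comes out numerically $\approx 0$ (the contributions from $x=\pm\sqrt2$ cancel), while `even` is $\approx \sqrt2$, matching the formula applied to $f(x)=x^2$.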
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is $\inf\{d(x,F):x\in E\}$ equal to $\inf\{|x-y|:x\in E,y\in F\}$? Let $E,F$ be two non-empty disjoint sets in $\mathbb{R}^n$ and let $|\cdot|$ be the Euclidean norm. As introduced in many textbooks on real analysis, we can define the distance $d(E,F)$ between $E$ and $F$ by
$$d(E,F)=\inf\{|x-y|:x\in E,y\in F\}.$$
I'm wondering if we can find $d(E,F)$ by following a two-step procedure. Specifically, if we define the distance $d(x,F)$ from $x\in E$ to $F$ by
$$d(x,F)=\inf\{|x-y|:y\in F\},$$
then does
$$\inf\{d(x,F):x\in E\}$$
equal $d(E,F)$? I didn't come across a question regarding double infimums before, and this question right here is too abstract for me. Does anyone have an idea? Thank you.
//////////////////////////
Update: I got something!!! Let $\epsilon>0$ be given. If we could verify the following two assertions (1) and (2), then the two infimums in the title would agree!!!
(1) $\forall x\in E$, $d(E,F)-\epsilon<d(x,F)$.
(2) $\exists x\in E$ s.t. $d(E,F)+\epsilon>d(x,F)$.
Assertion (2) can be easily verified by taking $(x,y_0)\in E\times F$ with $d(E,F)+\epsilon>|x-y_0|$ and noting that
$$|x-y_0|\geq\inf\{|x-y|:y\in F\}=d(x,F).$$
Now it remains to show that assertion (1) is true.
| It's all the same and you don't have to go so much into the definition of distances to see that. You simply have to show that for a family $(A_i)_{i \in I}$ of subsets of $\mathbb R$ you have
$$\inf_{i \in I} (\inf(A_i)) = \inf \left( \bigcup_{i \in I} A_i \right).$$
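Spelling out the suggested exercise (my own addition, not part of the original answer):

```latex
\begin{aligned}
&(\le)\ \text{every } a \in \bigcup_i A_i \text{ lies in some } A_j,
  \text{ so } a \ge \inf(A_j) \ge \inf_i\bigl(\inf(A_i)\bigr);\\
&\quad\ \text{hence } \inf_i\bigl(\inf(A_i)\bigr) \text{ is a lower bound for the union, i.e. }
  \inf_i\bigl(\inf(A_i)\bigr) \le \inf\Bigl(\bigcup_i A_i\Bigr).\\
&(\ge)\ A_j \subseteq \bigcup_i A_i \text{ gives }
  \inf\Bigl(\bigcup_i A_i\Bigr) \le \inf(A_j) \text{ for every } j;\\
&\quad\ \text{taking the infimum over } j \text{ yields }
  \inf\Bigl(\bigcup_i A_i\Bigr) \le \inf_i\bigl(\inf(A_i)\bigr).
\end{aligned}
```

Applying this with the family $A_x=\{|x-y| : y\in F\}$ for $x\in E$, whose union is $\{|x-y| : x\in E,\ y\in F\}$, gives exactly $\inf_{x\in E} d(x,F)=d(E,F)$.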
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4322729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |