Q | A | meta |
|---|---|---|
Proof for a Statement on Series In my textbook, after this theorem comes another statement (I wasn't sure whether to call it a theorem). The proof of the theorem is given, but I don't know why the statement is true. Can you help me prove it?
Theorem If $\Sigma^{\infty}_{n=1}a_n$ converges, then $\lim_{n\to\infty}a_n = 0$. Therefore, if $\lim_{n\to\infty}a_n$ does not exist, or exists but is not zero, then the series $\Sigma^{\infty}_{n=1}a_n$ is divergent.
Statement If the sequence of terms $\{a_n\}$ tends to a non-zero limit $L$, then $\Sigma^{\infty}_{n = 1}a_n$ diverges to infinity if $L > 0$ and diverges to negative infinity if $L < 0$.
| Think about it this way: if your sequence $a_n$ converges to a positive number $L>0$, then the series, which is the sum of the infinite terms of the sequence, will come to a point where it will basically be summing $L$ an infinite number of times. This can only result in the series diverging to $+\infty$.
On the other hand, if $L<0$, you are adding a negative value an infinite number of times. This can only result in the series diverging to $-\infty$.
The only way that a series can converge is if it comes to a point where the infinite terms you are adding are basically equal to zero. This requires $L=0$.
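A quick numerical illustration of this (my own addition, not part of the original answer): if $a_n \to L > 0$, the partial sums behave like $nL$, so $S_n/n$ approaches $L$ while $S_n$ itself runs off to $+\infty$.

```python
# Demo: a_n -> L = 2, so partial sums S_n grow like 2n.
a = lambda n: 2.0 + 1.0 / n     # a_n -> L = 2 > 0

S = 0.0
ratios = []
for n in range(1, 10001):
    S += a(n)
    if n % 2000 == 0:
        ratios.append(S / n)    # S_n / n should approach L = 2
print(ratios)
```

The ratios settle near $2$, matching the heuristic that eventually one is "summing $L$ over and over".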
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Induced topology base Let $X$ be a topological space, $B$ an open base for $X$ and $Y \subseteq X$. Prove that $B_Y = \{U \cap Y: U \in B\}$ is an open base for $Y$ (considering, over $Y$, the subspace topology).
We must show that, given $A$ open in the topology of $Y$, there are elements of $B_Y$ whose union is $A$, i.e. $ A = \cup (U \cap Y) $. Since $ B $ is a base for $X$, we have that $Y$ can be written as a union of elements of $B$; that is, there is $W \subseteq B$ such that $ Y = \cup W $. Is this argument flawed?
| Use the fact that the subspace topology $\tau_Y$ is the coarsest one for which the inclusion $i:Y\to X$ is continuous.
If $A$ is open in $Y$ then there is a $V$ open in $X$ such that $i^{-1}(V)=A$. Since $V$ is open in $X$, for each point of $V$ there is a basis element $B\in \tau_X$ containing it with $B\subseteq V.$ Then, $i^{-1}(B)=B\cap Y\subseteq A$ and $B\cap Y$ is open (in $Y$) by definition of $\tau_Y.$ It follows that the collection $\{B\cap Y\}_{B\in \tau_X}$ is a base for $\tau_Y.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How many words are required for What3Words? This is just curiosity / a personal exercise.
What3Words allocates every 3m x 3m square on the Earth a unique set of 3 words. I tried to work out how many words are required, but got a bit stuck.
$$
\text{Area} = 510 \times 10^6\ \mathrm{km}^2 = 5.1 \times 10^{14}\ \mathrm{m}^2 \approx 5.4 \times 10^{14}\ \mathrm{m}^2
$$
(rounding up to make the next step easier!)
And so there are ~ $6\times10^{13}$ 3m x 3m squares.
I assumed I could use the equation to calculate number of combinations to find the number of words needed:
$$
_nC_r = \frac{n!}{r! (n - r)!}
$$
where $r$ is 3, and total number of combinations is the number of squares: $6\times10^{13}$
$$
6\times10^{13} = \frac{n!}{3! (n - 3)!}
$$
$$
6\times10^{13} = \frac{(n)(n-1)(n-2)(n-3)!}{3! (n - 3)!}
$$
$$
n^3 - 3n^2 + 2n - (36\times10^{13}) = 0
$$
... and then, I can't work out the first factor to use to solve the cubic equation, I'm not sure I've ever had to solve a cubic eqtn with a non-integer factor and none of the tutorials I've found have helped.
(And, my stats is also not good enough for me to be convinced this is the correct equation to use anyway!)
Any hints as to the next step would be appreciated.
| Almost there! Your equation is not quite correct, though, because the order of the words matters. So the right inequality is $$3!\cdot\binom{n}{3}\geq 6\cdot 10^{13},$$
which is $$n(n-1)(n-2)\geq 6\cdot 10^{13}.$$ Now, you can depress it (actual math term) meaning that you can make a change of variable to make your polynomial of the form $x^3+px+q$ by doing the change of variable $x+1=n$ getting
$$(x+1)\cdot x\cdot (x-1)=x^3-x\geq 6\cdot 10^{13},$$
now you can use Cardano's formula, getting
$$x=\sqrt[3]{3\cdot 10^{13}+\sqrt{9\cdot 10^{26}-\frac{1}{27}}}+\sqrt[3]{3\cdot 10^{13}-\sqrt{9\cdot 10^{26}-\frac{1}{27}}}\approx 39148.67,$$
so $n\approx 39149.67$; take $n=39150.$
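A brute-force sanity check of the Cardano estimate (my addition, not from the answer): search for the smallest $n$ with $n(n-1)(n-2)\geq 6\cdot 10^{13}$ directly.

```python
# Find the smallest n giving enough ordered 3-word addresses.
TARGET = 6 * 10**13

n = 39_000                      # start just below the Cardano estimate
while n * (n - 1) * (n - 2) < TARGET:
    n += 1
print(n)  # -> 39150, agreeing with the closed-form computation
```

Exact integer arithmetic confirms that $n = 39150$ words suffice while $39149$ do not.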
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Product of random numbers. Find the constant $p$ such that the product of any (positive) number $N_0$ multiplied by successive random numbers between $0$ and $p$ will, on average, neither diverge to infinity nor converge to zero.
(This is not for homework or anything. This is a problem I "invented" and solved, and now want to see how real mathematicians would think about this. Take your time.)
(Thanks md2perpe for fixing up my tag.)
(Thanks Forester for making the variables render nicely.)
| Thanks everyone. It turns out $p=e$ is exactly right, which all hinges on using the logarithm, and then transforming an infinite sum into an integral. My steps were pretty much the same:
Let $\tilde{N}$ denote the product after $j$ iterations. Then, we have $$\frac{\tilde{N}}{N_0} = \left(p \cdot r_1\right) \cdot \left(p \cdot r_2\right) \cdot \left(p \cdot r_3\right) \cdot \cdots \:,$$ where $p$ is constant, and each $r_j$ is a random number between $0$ and $1$. Next, take the natural log of both sides, turning the product on the right into sum:$$\ln\left(\frac{\tilde{N}}{N_0}\right) = \ln\left(p \cdot r_1\right) + \ln\left(p \cdot r_2\right) + \ln\left(p \cdot r_3\right) + \cdots$$
To satisfy the question as stated, we have the left side (on average) resolving to zero, with the number of terms on the right approaching infinity. This effectively makes $r$ a continuous variable: after many many iterations, we have sampled all of the real numbers between zero and one. Point is, we may replace the sum by an integral: $$0 \approx \ln\left(\frac{\tilde{N}}{N_0}\right) = \int_0^1 \left(\ln p + \ln r\right) dr = \ln p - 1\:,$$ since $\int_0^1 \ln r \, dr = -1$, and the answer falls right out. Only $p=e$ satisfies the above.
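A Monte Carlo check of this (my own addition, not part of the original post): with $p=e$, the average of $\ln(p\cdot r)$ over uniform $r\in(0,1)$ should vanish, so the running product drifts neither to $0$ nor to $\infty$.

```python
import math
import random

# Average of ln(e * r) for uniform r in (0, 1); expectation is 1 - 1 = 0.
random.seed(0)
N = 1_000_000
mean_log = sum(math.log(math.e * random.random()) for _ in range(N)) / N
print(mean_log)  # close to 0
```

Any $p\neq e$ shifts this mean to $\ln p - 1 \neq 0$, producing systematic drift.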
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
how to find the distribution density of the variable X + Y? $X$ and $Y$ are independent and have the same exponential distribution with density: $$\lambda e^{-\lambda x} 1_{(0,\infty)} (x) $$ The density of the $X + Y$ distribution can be calculated by convolution of the functions: $$f*f(t)=\int_{-\infty}^{\infty} f(x) f(t-x) \, dx $$ but I still have no idea how to solve it; during the lecture we only saw an example for a uniform distribution on $(0,1)$.
| Applying the convolution formula to your example you immediately get
$$f_Z(z)=\int_0^{z}\lambda e^{-\lambda x}\,\lambda e^{-\lambda (z-x)}\,dx=\lambda^2 e^{-\lambda z}\int_0^{z}dx=\lambda^2 z e^{-\lambda z},\qquad z\geq 0,$$
since the integrand is constant in $x$: $e^{-\lambda x}e^{-\lambda(z-x)}=e^{-\lambda z}$.
Proof:
By Definition you get
$$F_Z(z)=\int_{-\infty}^{\infty}f_X(x)\Bigg[\underbrace{\int_{-\infty}^{z-x}f_Y(y)dy}_{F_Y(z-x)}\Bigg]dx$$
Differentiating, you get the density
$$f_Z(z)=\frac{d}{dz}F_Z(z)=\int_{-\infty}^{\infty}f_X(x)f_Y(z-x)dx$$
To see that the integration bounds are $[0,z]$, note that the integrand $f_X(x)f_Y(z-x)$ vanishes unless $x>0$ and $z-x>0$; a sketch of that region helps.
Without any calculation, the density of $Z$ can also be found using the properties of the Gamma distribution: as $Exp(\lambda)=Gamma(1;\lambda)$, we have $Z=X+Y\sim Gamma(2;\lambda)$.
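A simulation check of the result (my addition, not from the answer): the empirical CDF of $X+Y$ should match the Gamma$(2,\lambda)$ CDF $F(z)=1-e^{-\lambda z}(1+\lambda z)$.

```python
import math
import random

# Simulate X + Y for independent Exp(lam) variables and compare with the
# Gamma(2, lam) CDF obtained by integrating lam^2 z e^{-lam z}.
random.seed(1)
lam = 2.0
N = 200_000
samples = [random.expovariate(lam) + random.expovariate(lam) for _ in range(N)]

errs = []
for z in (0.5, 1.0, 2.0):
    empirical = sum(s <= z for s in samples) / N
    exact = 1 - math.exp(-lam * z) * (1 + lam * z)
    errs.append(abs(empirical - exact))
print(errs)  # all differences should be tiny
```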
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Remainder of Polynomial Division of $(x^2 + x +1)^n$ by $x^2 - x +1$ I am trying to solve the following problem:
Given $n \in \mathbb{N}$, find the remainder upon division of $(x^2 + x +1)^n$ by $x^2 - x +1$
the given hint to the problem is:
"Compute $(x^2 + x +1)^n$ by writing $x^2 + x +1 = (x^2 - x +1) + 2x$. Then, use the uniqueness part of the division algorithm."
If I take $a = x^2 - x +1$ I have
$$(x^2 + x +1)^n = (a + 2x)^n = a^n + \binom{n}{1}a^{n-1} 2x+ \binom{n}{2}a^{n-2} (2x)^2 + \dots + (2x)^n$$
but how do I proceed further?
| You're trying to divide by $a$ with remainder. You almost have it, since in your expression for $(a+2x)^n$ all terms but the final $(2x)^n$ are multiples of $a$ so can be dropped in getting the remainder. You now can focus just on $(2x)^n.$
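A small computational check of this hint (my own sketch, not part of the answer), representing polynomials as coefficient lists with the constant term first and reducing modulo $x^2-x+1$ via the rewrite $x^2 \mapsto x-1$:

```python
def polymul(p, q):
    """Multiply coefficient lists (lowest degree first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def reduce_mod(p):
    """Remainder modulo x^2 - x + 1, using x^2 = x - 1 repeatedly."""
    p = p[:]
    while len(p) > 2:
        c = p.pop()        # leading term c*x^d
        d = len(p)
        p[d - 1] += c      # c*x^d = c*x^(d-2)*(x - 1)
        p[d - 2] -= c
    return p

def power(p, n):
    out = [1]
    for _ in range(n):
        out = polymul(out, p)
    return out

# (x^2 + x + 1)^n and (2x)^n leave the same remainder, as the answer claims.
for n in range(1, 8):
    assert reduce_mod(power([1, 1, 1], n)) == reduce_mod(power([0, 2], n))
print(reduce_mod(power([0, 2], 3)))  # -> [-8, 0], i.e. the remainder is -8 for n = 3
```

For instance, since $x^3 \equiv -1 \pmod{x^2-x+1}$, the case $n=3$ gives remainder $8x^3 \equiv -8$.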
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3962969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Convergence of $\int_1^2 \frac 1{\sqrt{x^4 - 1}}dx$ I want to show that $\int_1^2 \frac 1{\sqrt{x^4 - 1}}dx$ is convergent.
I tried to use the limit convergence test by comparing with $\frac{1}{x^2}$ but that doesn't work. Any hints on how to proceed with this?
| You can set $x^2=\sec t$ and perform this substitution. When you do that, you end up with an integral of the form $\int\frac{dt}{\sqrt{\cos t}}$. The key is in the upper and lower bounds: for both $x=1$ and $x=2$ you can check that on the corresponding $t$-interval the integrand is continuous. In other words, it is no longer an improper integral; the integral has a finite value (though we can't evaluate it by elementary techniques), so it is convergent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Fourier series at discontinuity: can the left and right limits be retrieved from the Fourier series? I understand that the Fourier Series for a discontinuous function, one with a jump discontinuity such as the sawtooth wave, converges to the average of the left and right limits at the discontinuity.
Is there some way by which the left and right limits themselves can be read out from the Fourier series representation?
That is, a way to write these quantities
$$\lim_{x\rightarrow x_0^+} \sum_{n=-\infty}^{\infty} c_n \exp(i n x)$$
$$\lim_{x\rightarrow x_0^-} \sum_{n=-\infty}^{\infty} c_n \exp(i n x)$$
directly as series over $c_n$, without the limits.
| When you have a single jump discontinuity then you can work numerically. Say, you have a periodic function $f$ which is smooth, apart from a jump at the points $2k\pi$. Let
$$J(t):={\pi-t\over2}\quad(0<t<2\pi),\qquad J(t+2\pi)\equiv J(t)\ ,$$
be your favorite jump function. Then
$$J(t)\rightsquigarrow\sum_{k=1}^\infty{\sin(kt)\over k}\ .$$
The auxiliary function
$$g(t):=f(t)-c J(t)$$
is then everywhere smooth for a suitable choice of $c$, hence has Fourier coefficients going to $0$ much more rapidly than ${1\over k}$. It follows that the Fourier sine coefficients of $f$ decay as $b_k\approx{c\over k}$ $(k\gg1)$. This allows one to obtain an estimate for $c$, perhaps after some minimization work involving many coefficients. The jump height is then $\pi c$.
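A numerical illustration of this recipe (my own, not from the answer), in the simplest setting where $f=J$ itself, so $b_k=1/k$ exactly and we should recover $c\approx 1$, hence jump $\pi c \approx \pi$:

```python
import math

# Midpoint rule on (0, 2*pi); midpoints avoid the jump at the endpoints.
N = 200_000
h = 2 * math.pi / N
ts = [(j + 0.5) * h for j in range(N)]
fs = [(math.pi - t) / 2 for t in ts]       # f = J, jump of height pi at 0

def sine_coeff(k):
    """b_k = (1/pi) * integral_0^{2pi} f(t) sin(kt) dt, computed numerically."""
    return h / math.pi * sum(f * math.sin(k * t) for f, t in zip(fs, ts))

for k in (10, 20, 40):
    print(k, k * sine_coeff(k))            # k * b_k should hover near c = 1
c = 40 * sine_coeff(40)
print("estimated jump:", math.pi * c)      # close to pi, the true jump of J
```

In practice one would fit $c$ from many large-$k$ coefficients of the measured signal, as the answer suggests.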
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Computing $ \int_{-\infty}^{\infty} e^{-ax^4+bx^2} dx$ I was wondering if it was possible to compute $ \int_{-\infty}^{\infty} e^{-ax^4+bx^2} dx$ explicitly.
Completing the square doesn't seem to work, since there is no factor of $x$ in order to use the Gaussian integral.
Any hint would be greatly appreciated.
Thanks!
| With the change of integration variable $x = (2a)^{ - 1/4} \sqrt t$ and the representation http://dlmf.nist.gov/12.5.E1, we find
\begin{align*}
\int_{ - \infty }^{ + \infty } {e^{ - ax^4 + bx^2 } dx} & = 2\int_0^{ + \infty } {e^{ - ax^4 + bx^2 } dx} = (2a)^{ - 1/4} \int_0^{ + \infty } {t^{ - 1/2} e^{ - \frac{1}{2}t^2 + \frac{b}{{\sqrt {2a} }}t} dt}
\\ & = (2a)^{ - 1/4} \sqrt \pi e^{\frac{{b^2 }}{{8a}}} U\!\left( {0, - \tfrac{b}{{\sqrt {2a} }}} \right).
\end{align*}
Here $U$ is the parabolic cylinder function. By analytic continuation, this formula is valid whenever $\Re a>0$ and $b$ is any complex number. By http://dlmf.nist.gov/12.7.E10, this is also equal to
$$
\frac{1}{2}\sqrt { - \tfrac{b}{a}} e^{\frac{{b^2 }}{{8a}}} K_{1/4}\! \left( {\tfrac{{(-b)^2 }}{{8a}}} \right),
$$
where $K_\nu$ is the modified Bessel function. One has to choose the phase of $-b$ consistently in this form to make the function single-valued (and real valued for real $b$).
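A numerical spot check of the Bessel form (my addition, not part of the answer), restricted to the branch where it is unambiguous: real $a>0$ and $b<0$, so that $\sqrt{-b/a}$ is real.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Check: integral of exp(-a x^4 + b x^2) over R vs. the K_{1/4} closed form,
# for a = 1, b = -2 (real branch, no phase ambiguity).
a, b = 1.0, -2.0
numeric, _ = quad(lambda x: np.exp(-a * x**4 + b * x**2), -np.inf, np.inf)
closed = 0.5 * np.sqrt(-b / a) * np.exp(b**2 / (8 * a)) * kv(0.25, b**2 / (8 * a))
print(numeric, closed)  # the two values should agree
```

For complex $b$ one would instead have to track the phase convention described above.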
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A function that is continuous at $c \in \mathbb Z$ and discontinuous at $c \in \mathbb R \setminus \mathbb Z$ I'm looking for a function that is continuous on $\mathbb Z$ and discontinuous on $\mathbb R \setminus \mathbb Z$.
I know the function floor of $x$, denoted $\lfloor x \rfloor$, that is continuous at all $c \in \mathbb R \setminus \mathbb Z$ and discontinuous at all $c \in \mathbb Z$.
But I'm looking for the opposite one.
Thank you.
| I think the function $f$ such that:
*if $x\in \mathbb{R}-\mathbb{Q}$, then $f(x) = 0$
*if $x\in \mathbb{Q}$, then $f(x) = \sin \pi x$
will work.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Hamiltonian diffeomorphisms generated by periodic Hamiltonian function Let $\phi_{t}$ be a family of Hamiltonian diffeomorphisms on a symplectic manifold $(M,\omega)$, generated by a family of Hamiltonians $H(x,t)$. I'm trying to show that the family of Hamiltonians is periodic with period $1$ with respect to $t$ (in other words $H(x,t) = H(x,t+1)$ for all $x$) if and only if the family of diffeomorphisms satisfies $\phi_{t+1} = \phi_{t} \circ \phi_{1}$.
I know from the definition that $\frac{d}{dt}\phi_{t}(x) = X_{H}(\phi_{t}(x))$, where $X_{H}$ is the Hamiltonian vector field and that $i_{X_{H}}\omega = dH$. The symplectic form is compatible with this vector field in the sense that $\omega(X_{H}(x), \cdot) = dH$.
I tried to differentiate the equation $\phi_{t+1}=\phi_{t} \circ \phi_{1}$ and I obtain the following:
$$X_{H}(\phi_{t+1}(x)) = X_{H}(\phi_{t}(\phi_{1}(x))) \cdot X_{H}(\phi_{1}(x))$$
But I don't see how to proceed or finish the proof for either of the implications. How can I prove this?
| For one implication, assume that $H(x,t)=H(x,t+1)$. To prove that $\phi_{t+1}=\phi_{t}\circ\phi_{1}$, you can show that both sides solve the initial value problem
$$
\begin{cases}
\frac{d}{dt}\psi_{t}=X_{H_t}\circ\psi_{t}\\
\psi_{0}=\phi_1
\end{cases}.
$$
Indeed, on one hand we have
$$
\frac{d}{dt}\phi_{t+1}=X_{H_{t+1}}\circ\phi_{t+1}=X_{H_t}\circ\phi_{t+1},
$$
using that $H_{t+1}=H_t$. On the other hand we have
$$
\frac{d}{dt}\phi_{t}\circ\phi_{1}=X_{H_t}\circ\phi_t\circ\phi_1.
$$
The other implication doesn't seem true to me. Information about the flow tells you something about the Hamiltonian vector field, which does not determine the Hamiltonian function uniquely. For instance, take $$(\mathbb{R}^{2},dx\wedge dy)\ \ \text{and}\ \ H(x,y,t)=y+t.$$ Then
$
X_{H}=\partial_{x}
$
is time-independent, so its flow satisfies $\phi_{t+1}=\phi_t\circ\phi_1$. But $H(x,y,t+1)\neq H(x,y,t)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the following gamma function integral correct? $$\int_{0}^{\infty}\frac{x^{v-1}}{e^{-\beta\mu}{e^x}-1}\,dx=\Gamma(v)\left(e^{\beta\mu}+\frac{e^{2\beta\mu}}{2^v}+\frac{e^{3\beta\mu}}{3^v}+\dotsb\right)$$
This integral is given in my statistical physics assignment and I feel that the $\mu$ in the integrand should have the opposite sign (as my results would make more sense) but I don't know how to prove this and I can't find the formula anywhere. I would love to find a source for this formula.
| On the RHS you have:
$$\Gamma(v)\sum_{k=1}^\infty\frac{(e^{-\beta\mu})^k}{k^v}\tag{1}$$
we know that:
$$\Gamma(v)=\int_0^\infty x^{v-1}e^{-x}dx$$
$$\sum_{k=1}^\infty\frac{(e^{-\beta\mu})^k}{k^v}=e^{-\beta\mu}\,\Phi(e^{-\beta\mu},v,1)$$
I have changed the sign in the exponent because it is the only way that this series will converge for $\beta,\mu>0$ (note also that the sum starts at $k=1$, since a $k=0$ term would be singular), and $\Phi$ represents the Lerch transcendent
. On the same page you will find the identity:
$$\Phi(z,s,a)=\frac{1}{\Gamma(s)}\int_0^\infty\frac{t^{s-1}e^{-at}}{1-ze^{-t}}dt$$
which we can rearrange to give us the general formula:
$$\int_0^\infty\frac{t^{s-1}e^{-at}}{1-ze^{-t}}dt=\Gamma(s)\Phi(z,s,a)$$
Try looking into these functions and seeing if there is any material on this derivation. Note: it is interesting that this summation (at $z=1$) becomes the Riemann zeta function $\sum_{k\geq1}k^{-v}=\zeta(v)$
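A numerical check of the rearranged identity (my own addition, not from the answer), with the convergent sign choice, i.e. $s=\beta\mu>0$ in the denominator $e^{s}e^{x}-1$:

```python
import math
from scipy.integrate import quad

# Verify: integral_0^inf x^(v-1)/(e^s e^x - 1) dx = Gamma(v) * sum_{k>=1} e^{-ks}/k^v.
v, s = 2.0, 0.5

# Rewrite the integrand as x^(v-1) e^{-x} / (e^s - e^{-x}) to avoid overflow
# of e^x at large x during the quadrature.
integrand = lambda x: x**(v - 1) * math.exp(-x) / (math.exp(s) - math.exp(-x))
integral, _ = quad(integrand, 0, math.inf)

series = math.gamma(v) * sum(math.exp(-k * s) / k**v for k in range(1, 200))
print(integral, series)  # should agree
```

This is the Bose–Einstein integral from statistical physics; convergence of the series is exactly why the sign of $\mu$ in the question looked suspicious.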
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compact 3-manifold with compressible boundary tori? There is much to be said about 3-manifolds with boundary consisting of a possibly empty collection of incompressible tori.
I don't seem to know where to look to find much about 3-manifolds with compressible torus boundary.
If $M^3$ is compact, with infinite $\pi_1$, and with non-empty boundary, consisting only of compressible tori, a solid torus is an option, but what else can happen?
| Assuming that your manifold is, say, compact, connected and orientable, whose boundary is a union of tori, then $M$ is homeomorphic to a connected sum of a manifold $M'$ whose boundary tori are all incompressible and some number of solid tori. This is a consequence of the Loop Theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3963951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof of relative complement identity I often see this identity for set theory,
$A-B=A\cap B^c$
It is easy to believe this identity by considering the venn diagram for a relative complement,
If we consider $B^c$ informally as the space not inside $B$, then the part of set $A$ not intersecting set $B$ would be included in $B^c$. So logically, $A\cap B^c$ would be the shaded area, which is the same space as $A-B$.
But how would one go about proving this identity purely from set theory and symbolic logic?
| Let $A = \{x\in U\mid p(x)\,\text{is true}\}$ and $B = \{x\in U\mid q(x)\,\text{is true}\}$, where $U$ is the universe of discourse.
Thus we have that
\begin{align*}
A - B & = \{x\in U\mid (x\in A)\wedge(x\not\in B)\}\\\\
& = \{x\in U \mid x\in A\}\cap\{x\in U \mid x\not\in B\} = A\cap B^{c}
\end{align*}
where we have used the definitions of difference, complement and intersection.
More precisely, given $X\subseteq U$ and $Y\subseteq U$, one has that
\begin{align*}
X^{c} & := \{x\in U \mid x\not\in X\}\\\\
X\cap Y & := \{x\in U \mid (x\in X)\wedge(x\in Y)\}
\end{align*}
Hopefully this helps!
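The identity can also be checked exhaustively over a small finite universe (a quick sketch of my own, not part of the answer):

```python
from itertools import combinations

# Verify A - B = A ∩ B^c for every pair of subsets of a small universe U.
U = set(range(6))
subsets = [set(c) for r in range(7) for c in combinations(sorted(U), r)]

for A in subsets:
    for B in subsets:
        assert A - B == A & (U - B)   # complement of B relative to U
print("identity verified for all", len(subsets) ** 2, "pairs of subsets")
```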
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964038",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Geometry In Complex Numbers The line $T$ is tangent to the circumcircle of acute triangle $ABC$ at $B.$ Let
$K$ be the projection of the orthocenter of triangle ABC onto line $T$.
Let $L$ be the midpoint of side $AC$. Show that the triangle $BKL$ is isosceles.
We are supposed to do this using complex numbers. We can easily compute $H, B, L$ But how can we compute $K$? I tried some approaches but it doesn't seem to work or I can't see how to go further. Any help will be appreciated.
| We may and do assume that the circumcircle $\odot (ABC)$ is the unit circle, centered at the origin $O$, and that $B$ has the affix $b=1\in \Bbb C$. Let $a,c\in\Bbb C$ be the affixes (complex representations) of $A,C$, so $|a|^2=a\bar a=1$ and $|c|^2=c\bar c=1$.
We use this convention further, lower case letters are affixes for the capitalized points. So $o$, the affix of $O$ is zero. The centroid $G$ has affix $g$ with
$3g=a+b+c$.
Vectorially, $\vec{GH}=-2\,\vec{GO}$, since $G$ divides the segment $HO$ in the same $2:1$ ratio in which it divides the median $BL$ (the Euler line); so $\vec{GH}+2\,\vec{GO}=0$, and passing to complex numbers
$$
(h-g)+2(o-g)=0\ ,\qquad\text{ i.e. }\qquad
h+2o=3g=a+b+c\ .
$$
In our case, $o=0$, $b=1$, so $h=a+1+c$. The line $T$ is the vertical line parametrized by $1+it$. The points $H$ and $K$ have the same projection on the imaginary axis, which is parallel to $T$, so we obtain $k$ by extracting the imaginary part of $h$. It is
$$
\begin{aligned}
k
&= 1+i\operatorname{Im} h \\
&= 1+i\operatorname{Im} (a+1+c)\\
&= 1+i\operatorname{Im} (a+c) \\
&= 1+2i\operatorname{Im} l \ .
\end{aligned}
$$
Projecting onto $T$, we see that $L$ projects to the midpoint of $BK$, so
$\Delta LBK$ is isosceles.
$\square$
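A numerical spot check of the computation above (my own addition, not part of the proof): place $b=1$, pick $a,c$ on the unit circle, form $h=a+1+c$, $k=1+i\,\operatorname{Im}h$, $l=(a+c)/2$, and compare $|LB|$ with $|LK|$.

```python
import cmath
import random

# Random points A, C on the unit circle; B fixed at 1.
random.seed(3)
for _ in range(100):
    a = cmath.exp(1j * random.uniform(0.1, 3.0))
    c = cmath.exp(1j * random.uniform(3.3, 6.0))
    b = 1.0
    h = a + b + c                # orthocenter (circumcenter at 0, b = 1)
    k = 1 + 1j * h.imag          # projection of H onto the tangent line x = 1
    l = (a + c) / 2              # midpoint of AC
    assert abs(abs(l - b) - abs(l - k)) < 1e-12
print("triangle BKL is isosceles in all samples")
```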
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving $\bigcap_n I_n \neq \emptyset$ for monotonically smaller closed intervals $(I_n)_n$ Theorem: Let $(I_n)_n$ be monotonically smaller closed intervals. i.e. $I_{n+1}\subseteq I_n$
Then $$\bigcap_n I_n \neq \emptyset$$
I easily grasped this geometrically, But I doubt of my proof. This is my proof:
Since $I_n$'s are real closed intervals, $I_n \neq \emptyset$. Then we can take $x \in I_n$. Since $I_n\subseteq I_{n-1}$, $x \in I_{n-1}$.
And generally from that,
$$x \in I_{n-2} \Rightarrow x \in I_{n-3} \Rightarrow x \in I_{n-4} \Rightarrow...\Rightarrow x \in I_2 \Rightarrow x \in I_1$$
Thus $x \in I_{n}$ is also in other closed intervals $I_i$s where $i < n-1$. Thus,
$$\bigcap_{i \leq n} I_i \neq \emptyset$$ Then, if we just take $\lim_{n\to \infty}$ equation will still hold and consequently,
$$\bigcap_{n} I_n \neq \emptyset$$
To me it looks right, but I couldn't be sure. I would appreciate a quick look. Thanks in advance.
| For the case $I_n=[0,1/n]$, the number $1/100$ belongs to $I_1,I_2,\ldots,I_{100}$ but not to $I_{101}$: the point $x$ you pick depends on $n$, so it need not belong to every interval of the sequence.
Hint: Use Bolzano-Weierstrass.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A form of the Cauchy functional equation
Find all the functions $f :\mathbb R \to \mathbb R$ that satisfy the
conditions:
*$$f(x+y)=f(x)+f(y), \enspace \forall x,y \in \mathbb R;$$
*$$\exists \lim_{x\to \infty}f(x).$$
This problem is important for the community because on the forum I only saw the case where $f$ is continuous, but not the case when we only know that $\displaystyle\exists \lim_{x\to \infty}f(x).$
I found a solution to this problem in a book of analysis I own, which hints the following:
It is easy to see that $f(q)=q\cdot f(1), \enspace \forall q\in \mathbb Q$. Let $f(1)=a$.
If $a>0$, then consider the sequence $a_n=n.$ It follows that $$\lim_{x\to \infty}f(x)=\lim_{n\to \infty}f(a_n)=\lim_{n\to \infty}an=\infty.$$
The next step in the book is that this implies $f$ is increasing. I couldn't understand this line properly. Why is $f$ increasing? Of course, it is easy to check that $f(q)\le f(r), \forall q<r \in \mathbb Q$, but why is this also true for reals?
Please help me understand this! Thank you so much!
It is also clear that if $f$ is increasing then $f(x)=ax$ for all $x\in \mathbb R$ (suppose there exists an $x\in \mathbb R\setminus \mathbb Q$ such that $f(x)<ax$. Then by density there is a $q\in \mathbb Q$ such that $f(x)<aq=f(q)<ax$; but then $q<x$ while $f(q)>f(x)$, contradicting that $f$ is increasing. The case $f(x)>ax$ is similar).
| So we have that $\lim_{x\to\infty} f(x) = +\infty $.
Let $x<y$ be real numbers. Then $f(y)-f(x) = f(y-x)$, so $n(f(y)-f(x)) = f(n(y-x))$.
Since $f(n(y-x)) \rightarrow +\infty$ when $n\rightarrow +\infty$, it is positive for $n$ big enough.
So $n(f(y)-f(x))$ is positive for $n$ big enough. But the sign of $n(f(y)-f(x))$ does not depend on $n$.
So $n(f(y)-f(x))$ is always positive, and especially $f(y) > f(x)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Subsequences of convergent series We know that all subsequences $x_{\phi(n)}$ of a convergent sequence $x_n$ have the same limit:
$$\lim_{n \to \infty} x_n = \lim_{n \to \infty} x_{\phi(n)}$$
Now we consider $x_n=\frac{n^3}{n!}$, we have that $S_n = \sum_{k=0}^n x_k = \sum_{k=0}^n\frac{k^3}{k!}$.
Suppose we already know that $\sum_{n=0}^\infty \frac{n^3}{n!}= \lim_{n \to \infty} S_n=5e$.
My question is : as the series converges (absolutely), i.e. $S_n$ has a limit when $n \to \infty$, why does the subsequence $S_{2n} = \sum_{k=0}^n\frac{(2k)^3}{(2k)!}$ seem to have a different limit that is not $5e$ ? I thought we should have
$\lim_{n \to \infty} S_n = \lim_{n \to \infty} S_{2n}$ but I also see that $S_{2n}$ is missing half of the terms from the original sum so its limit "must" be lower. So how does subsequences work with series ?
I am sure there is something very stupid that I don't get.
| You are wrong when you assert that$$S_{2n}=\sum_{k=0}^n\frac{(2k)^3}{(2k)!}.$$In fact,$$S_{2n}=\sum_{k=0}^{2n}\frac{k^3}{k!},$$which is a different thing. And there is no reason for you to assume that$$\lim_{n\to\infty}\sum_{k=0}^n\frac{(2k)^3}{(2k)!}=\lim_{n\to\infty}\sum_{k=0}^{2n}\frac{k^3}{k!}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
$\sum_{n=0}^\infty \frac{1}{(2n)!}=\cosh(1)$ Is there any way to show that $$\sum_{n=0}^\infty \frac{1}{(2n)!}=\cosh(1) = \frac{1+e^2}{2e}$$ knowing that $$\sum_{n=0}^\infty \frac{1}{n!}=e$$
I guess that we can also show that $\sum_{n=0}^\infty \frac{1}{(2n-1)!}=\sinh(1) $ but I have no clue of how to get rid of the "2" in the factorial...
| Write down the series expansions for $e^x$ and $e^{-x}$ around $0$. Add the two series and divide by $2$. Similarly, you can subtract them, to get $\sinh$.
$$e^x=\sum_{n=0}^\infty\frac{x^n}{n!}\\e^{-x}=\sum_{n=0}^\infty\frac{(-1)^nx^n}{n!}$$
When you add these two expressions, the odd terms will cancel. If you subtract them, the even terms will cancel out.
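A quick numerical confirmation (my addition, not from the answer) that the even-index series indeed sums to $\cosh(1)=\frac{1+e^2}{2e}$:

```python
import math

# Partial sum of 1/(2n)!; terms decay super-exponentially, so 20 terms suffice.
total = sum(1 / math.factorial(2 * n) for n in range(20))
print(total, math.cosh(1), (1 + math.e**2) / (2 * math.e))  # all three agree
```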
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Is this property true for all $p$ or for all primes? In my textbook the following property is stated:
Let $p$ be a prime number. For any $a_1,...,a_k\in \mathbb Z$ we have that:
$$p|a_1...a_k \implies \exists i\in\{1,...,k\}:p|a_i$$
In particular, for any $a,b \in \mathbb Z$:
*$p|ab \implies (p|a \vee p|b)$
*$p|a^k \implies p|a$
I was able to prove this but, in my proof, I never used the fact that $p$ is prime. Because of this, I don't know if my proof is valid, but this got me wondering: is this property valid for any $p$, or only if $p$ is prime?
Edit: I'll show the proof I made for this.
Let $p$ be a prime number and $a_1,...,a_k \in \mathbb Z$ such that:
$$p|\prod_{n=1}^k a_n$$
The proof is by contradiction. Let's assume that $\forall i \in \{1,...,k\}, p \not | a_i$
Now, let's use mathematical induction:
*$p \not | a_1$
*Now let's assume that $p \not | \prod_{n=1}^{k-1} a_n$:
We have that $p \not | a_k \iff \gcd(p,a_k) = 1$ and $p \not | \prod_{n=1}^{k-1} a_n \iff \gcd(p,\prod_{n=1}^{k-1} a_n)=1$.
Since $\gcd(p,\prod_{n=1}^{k-1} a_n)=1$ and $\gcd(p,a_k) = 1$ we conclude that: $\gcd(p,a_k \cdot \prod_{n=1}^{k-1} a_n)=1 \iff p \not | \prod_{n=1}^{k} a_n$
This is a contradiction, hence $\exists i \in \{1,...,k\}: p | a_i$
| Only if $p$ is prime.
Let's see counterexamples to both parts:
*$n \vert ab \Longrightarrow n \vert a \vee n \vert b$: take $n = ab$ with $a, b > 1$; then $n \vert ab$ but $n \nmid a$ and $n \nmid b.$ The property works for primes because a prime cannot be split into smaller nontrivial factors.
*$n \vert a^k \Longrightarrow n \vert a$: this is not true either; take $n = a^2$ with $a > 1$ and $k \geq 2$. For instance, $4 \mid 2^3$ but $4 \nmid 2.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3964804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Prove that $z_1z_2z_3 = 1$ for complex numbers $z_1$, $z_2$, $z_3$. Let $z_1$, $z_2$, $z_3$ be distinct complex numbers such that $|z_1| = |z_2| = |z_3| > 0$. If $z_1 + z_2z_3$, $z_2 + z_3z_1$, and $z_3 + z_1z_2$ are real numbers, prove that $z_1z_2z_3 = 1$.
I believe that this problem comes from 1979 Romanian Math Olympiad (State Competition, 10th grade).
I tried several approaches: The first was to substitute $a + ib$, $c + id$, and $f + ig$ for $z_1$, $z_2$, and $z_3$, respectively, and bash. I used $(1+z_1)(1+z_2)(1+z_3)$, which didn't really get me anywhere.
After a couple more attempts at substitutions and such, I hit a wall and didn't really know what to do.
| We have $\dfrac{z}{|z|}+\dfrac{|z|}{z} \in \mathbb{R}$ when $z \ne 0$
So we have $\dfrac{z_1}{|z_1|}+\dfrac{|z_1|}{z_1} \in \mathbb{R}$ and $z_1+z_2z_3 \in \mathbb{R} \Rightarrow \dfrac{z_1}{|z_1|}+\dfrac{z_2z_3}{|z_1|} \in \mathbb{R}$
Subtracting, we get $\dfrac{z_1z_2z_3-|z_1|^2}{z_1|z_1|} \in \mathbb{R}$, so $z_1z_2z_3-|z_1|^2=\lambda_1 z_1$ for some $\lambda_1\in\mathbb{R}$. The same argument gives $z_1z_2z_3-|z_i|^2=\lambda_i z_i$ with $\lambda_i\in\mathbb{R}$ for $i=2,3$, and the $|z_i|^2$ are all equal. If $z_1z_2z_3\neq|z_1|^2$, then $z_1,z_2,z_3$ would all be real multiples of the same nonzero number, i.e. lie on one line through the origin; but such a line meets the circle $|z|=|z_1|$ in at most two points, contradicting that the three points are distinct. Hence $z_1z_2z_3=|z_1|^2$.
Then $|z_1z_2z_3|=|z_1|^3=|z_1|^2 \Rightarrow |z_1|=1$, so $z_1z_2z_3=|z_1|^2=1$, as claimed.
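A concrete instance of the hypotheses (my own addition, not part of the answer): the three cube roots of unity are distinct, share modulus $1$, and make all three combinations real.

```python
import cmath

# z1, z2, z3 = cube roots of unity.
w = cmath.exp(2j * cmath.pi / 3)
z1, z2, z3 = 1, w, w**2

# Each of the three required combinations is real (1 + w^3 = 2, w + w^2 = -1).
for expr in (z1 + z2 * z3, z2 + z3 * z1, z3 + z1 * z2):
    assert abs(expr.imag) < 1e-12

print(z1 * z2 * z3)  # the product is 1, as the problem predicts
```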
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Is $PAPA^*P \leq AA^*$? I'm trying to figure out whether $PAPA^*P \leq AA^*$, where $A \in \mathfrak B (\mathcal H)$ and $P$ is some projection. It can be shown that $PAA^*P \nleq AA^*$ in the general case. $\langle PAPA^*P x, x\rangle$ seems to be a kind of natural restriction of the bilinear form $\langle AA^* x, x\rangle$ after projecting $\mathfrak B (\mathcal H)$ to $P\mathfrak B (\mathcal H)P$. That's my intuition, but I can't obtain anything more. I've tried to check explicitly whether $\langle x , (AA^* - PAPA^*P) x\rangle $ is positive, but got nothing useful.
| I think
$$
P=\pmatrix{1 &0 \cr 0 &0}
$$
and
$$A=\pmatrix{ 1 & 1\cr 1 & 1}
$$
is a counterexample.
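Verifying the counterexample numerically (my addition, not part of the answer): the difference $AA^* - PAPA^*P$ should fail to be positive semidefinite.

```python
import numpy as np

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])          # orthogonal projection onto the first coordinate
A = np.ones((2, 2))

diff = A @ A.conj().T - P @ A @ P @ A.conj().T @ P
eigs = np.linalg.eigvalsh(diff)     # diff is self-adjoint
print(eigs)                         # one eigenvalue is negative, so PAPA*P <= AA* fails
```

Here $AA^*=\begin{pmatrix}2&2\\2&2\end{pmatrix}$ while $PAPA^*P=\begin{pmatrix}1&0\\0&0\end{pmatrix}$, and the difference has determinant $-2<0$.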
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find an expression for $n$ in terms of $x$, $y$, and $z$. At a certain rate of compound interest, $100$ will increase to $200$ in $x$ years, $200$ will increase to $300$ in $y$ years, and $300$ will increase to $1,500$ in $z$ years. If $600$ will increase to $1,000$ in $n$ years, find an expression for $n$ in terms of $x$, $y$, and $z$.
I know:
$600(1+i)^n = 1000$
I wrote:
$200 = 100(1+i)^x$
$300 = 100(1+i)^{x+y}$
$1500 = 100(1+i)^{x+y+z}$
Also, I know that :
$600=100(1+i)^{2x+y}$
Hence:
$1000=100((1+i)^{x+y+z}-(1+i)^{x+z}-(1+i)^x)$
$1000=100(1+i)^{2x+y}((1+i)^{-x+z}-(1+i)^{-x}-(1+i)^{-x-y})$
$1000=600((1+i)^{-x+z}-(1+i)^{-x}-(1+i)^{-x-y})$
$(1+i)^n=(1+i)^{-x+z}-(1+i)^{-x}-(1+i)^{-x-y}$
I need to get $n$ as a function of $x$, $y$ and $z$, but I have a problem with my last line.
Does anyone know a faster way to get the solution?
| Let's make it clean and simple. Instead of $1+i$, I'll put $q$ as the rate. And I'll cancel out all the unnecessary terms. So we have
$$
2 = q^x \qquad \frac{3}{2} = q^y \qquad 5 = q^z
$$
Now we want to make the ratio $\frac{10}{6}$ with these. We can 'massage' the expression a bit:
$$
\frac{10}{6}
= \frac{2 \cdot 5}{ \left( \frac{3}{2} \right)\cdot 2 \cdot 2}
= \frac{5}{ \left( \frac{3}{2} \right)\cdot 2} = 5 \cdot \left( \frac{3}{2} \right)^{-1} \cdot (2)^{-1}
$$
Now you can just plug in the known values and you're done.
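Plugging in gives $q^n = 5\cdot(3/2)^{-1}\cdot 2^{-1}$, i.e. $n = z - y - x$. A short Python sanity check (the rate $i = 5\%$ below is an arbitrary assumption for testing, since the answer holds for any rate):

```python
import math

# q^x = 2, q^y = 3/2, q^z = 5, and q^n = 10/6 = 5 * (3/2)^(-1) * 2^(-1),
# so n = z - y - x.  The rate q = 1.05 is an arbitrary test choice.
q = 1.05
x = math.log(2) / math.log(q)      # 100 grows to 200 in x years
y = math.log(3 / 2) / math.log(q)  # 200 grows to 300 in y years
z = math.log(5) / math.log(q)      # 300 grows to 1500 in z years
n = z - y - x
print(600 * q ** n)  # ~1000.0
```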
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to sketch level curves of the form $f(x,y)= |x|+|y|$? I'm struggling to sketch the level curves of the equation $f(x,y)= |x|+|y|$
I know for finding the level curves you have to set $f(x,y) = C$ (with c a constant). But then I have the equation $|x|+|y|=C$.
So let's say $C=0$; then $|x|+|y|=0$, but how can I sketch this? Because both $|x|$ and $|y|$ are greater than or equal to zero and they have to add up to $0$, both $x$ and $y$ need to be zero, but then I can't draw anything.
So let's say $C=1$; then $|x|+|y|=1$, but how can I sketch this? Can I say that $x + y= \pm1$, and therefore $y=1-x$ or $y = 1+x$? (But $x$ can't be negative, so I don't know how to do this problem.)
If you know the answer, your help would be very much appreciated.
Thank you in advance.
| I just add a comment to the already proposed solutions.
Since $f(x,y) = f(-x, y) = f(x, -y) = f(-x, -y)$, the level sets of $f$ are symmetric w.r.t. the reflections about the axes $(x,y) \mapsto (-x,y)$ and $(x,y) \mapsto (x,-y)$,
and w.r.t. the symmetry about the origin $(x,y) \mapsto (-x, -y)$.
Given that, it is enough to draw the level set for example in the quadrant $x\geq 0$, $y\geq 0$, and then you get the whole picture using the above symmetries.
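To illustrate the symmetry argument, here is a small Python sketch that computes the level set $|x|+|y|=1$ only in the first quadrant and obtains the rest by the reflections above:

```python
# ASCII sketch of the level set |x| + |y| = 1 (a diamond): compute only
# the first-quadrant points x = i/n, y = 1 - x, then reflect about both axes.
n = 10
grid = [[' '] * (2 * n + 1) for _ in range(2 * n + 1)]
for i in range(n + 1):
    x, y = i, n - i          # scaled coordinates of a first-quadrant point
    for sx in (1, -1):
        for sy in (1, -1):   # the four symmetries (sx*x, sy*y)
            grid[n - sy * y][n + sx * x] = '*'
for row in grid:
    print(''.join(row))
```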
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many ways? - difficult combinatorics These are from an archival local math contest:
The city purchased train rails in two colors: gray and black. The only difference between them is the color. The project assumes that the rails will be installed in series of 10. We say that the series is elegant when no two black rails are directly adjacent to each other. How many different elegant series of 10 rails are possible?
I made some observations:
* In 3. I know the answer is supposed to be 144. If we have 6 black rails or more, we cannot make an elegant series. For 1 black rail we can make 10 different series (the order matters); for two it is 8 + 7 + ... + 1 = 36 (I placed the first rail and looked where I can place the second one). For 5 rails there are two elegant series. However, I cannot calculate the numbers for 3 or 4 black rails; is there any nice and clean way?
| I will answer the third one first.
For 1,2 and 5 black rails you have already calculated it so I would not bother calculating again.
We will use the gap method
Case 1: 3 black rails
First, you lay down 7 grey rails so there are 8 gaps
Out of those 8 gaps, you select 3 because there are 3 black rails.
So $\dbinom{8}{3} = 56$
Case 2: 4 black rails
You lay down 6 grey rails so 7 gaps are left and you have to select 4 for 4 black rails
.
So $\dbinom{7}{4} = 35$
You have made an error in the case where there are 5 black rails. The answer should be 6 for that case $\left(\dbinom{6}{1} = 6\right)$.
Total $= 10 + 36 + 56 + 35 + 6 + 1 = 144$ (the final $1$ counts the series with no black rails at all).
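The gap-method total can be cross-checked by brute force in Python (the count also fits the Fibonacci pattern: binary strings with no two adjacent 1s satisfy $c(n)=c(n-1)+c(n-2)$):

```python
from itertools import product

# Brute-force check: 1 = black rail, 0 = gray rail; an elegant series of
# 10 rails has no two adjacent 1s.
count = sum(
    1
    for rails in product([0, 1], repeat=10)
    if all(not (rails[i] and rails[i + 1]) for i in range(9))
)
print(count)  # 144
```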
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Count the numbers in which the product of the digits is 48 (1 not allowed)
Count the numbers in which the product of the digits is 48 (1 not allowed as digit).
Proposed solution: I addressed the problem by brute force, first noticing that
$$48=2^4\times 3.$$
Then I found the possible groups of digits to be permuted for each number: (1) 2,2,2,2,3; (2) 2,2,2,6; (3) 4,2,2,3; (4) 4,2,6; (5) 4,4,3; (6) 8,2,3; (7) 8,6.
By adding the permutations with repetition for each case, I've found 38 possible numbers. Which agrees with the answer provided in the source.
My problem: I found this solution is not elegant and I'm wondering if there is a better way to address problems like these. By brute force it is easy to undercount terms (as I did when first solving the problem). Any help?
| There isn't a much better way, no. There are better ways for calculating the number of ordered factorizations but those will include factorizations with factors that aren't single digits.
If $a_n = $ the number of ordered factorizations of $n$ then we have two identities:
*$a_n = \sum a_d$ over all factors $d$ of $n$ not including $d = n$.
*$a_{p^k} = 2^{k - 1}$ when $p$ is a prime.
See http://oeis.org/A074206 for references/additional formulae/etc.
So using this we have
\begin{align}
a_{48} &= a_1 + a_2 + a_3 + a_4 + a_6 + a_8 + a_{12} + a_{16} + a_{24} \\
&= 1 + 1 + 1 + 2 + a_6 + 4 + a_{12} + 8 + a_{24} \\
&= 17 + a_6 + a_{12} + a_{24} \\
&= 17 + (a_1 + a_2 + a_3) + ([a_1 + a_2 + a_3] + a_4 + a_6) \\
&\quad\qquad+ ([a_1 + a_2 + a_3 + a_4 + a_6] + a_8 + a_{12}) \\
&= 17 + 3 + (3 + 2 + 3) + (8 + 4 + 8) \\
&= 48
\end{align}
So there are 48 total ordered factorizations. But we know this is overcounting because it includes the single-factor factorization $48$ itself as well as the permutations of $2,24$ and $2,2,12$ and $4,12$ and $3,16$. So that's an additional $1 + 2 + 3 + 2 + 2 = 10$.
So there are $48 - 10 = 38$ ordered factorizations where no factor is larger than $9$.
Was this easier than calculating the number $38$ directly? Probably not. But hopefully it's still useful to someone.
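For what it's worth, the count of $38$ is easy to confirm by brute force in Python (since every digit is at least $2$ and $2^6 > 48$, no such number can have more than $5$ digits):

```python
from itertools import product
from math import prod

# Count digit strings over 2..9 whose digit product is 48.
total = sum(
    1
    for length in range(1, 6)
    for digits in product(range(2, 10), repeat=length)
    if prod(digits) == 48
)
print(total)  # 38
```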
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3965947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Find $\lim_{t \rightarrow 0} \int_{0}^{t} \frac{\sqrt{1+\sin(x^2)}}{\sin t} dx$ Find $$\lim_{t \rightarrow 0} \int_{0}^{t} \frac{\sqrt{1+\sin(x^2)}}{\sin t} dx$$
One easy step is replacing $\sin t$ by $t$, since $\lim_{t \rightarrow 0} \sin t / t = 1$. How do I continue with the rest?
| Fix $t\neq 0$. By the Mean Value Theorem for Integrals, there is some $c_t$ between $0$ and $t$ such that
$$
\int_0^t\sqrt{1+\sin\left(x^2\right)}dx=t\sqrt{1+\sin\left(c_t^2\right)}
$$
Therefore,
$$\int_0^t\frac{\sqrt{1+\sin\left(x^2\right)}}{\sin(t)}dx=\frac{t}{\sin(t)}\sqrt{1+\sin\left(c_t^2\right)}$$
If $t\to 0$, the quotient tends to $1$, while the root tends to $1$, so the limit is $1$.
For a (marginally different) approach, let $f(t):=\int_0^t\sqrt{1+\sin\left(x^2\right)}dx$ and apply L'Hospital's Rule and the Fundamental Theorem of Calculus to
$$
\lim_{t\to 0} \frac{f(t)}{\sin(t)}
$$
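As a quick numerical sanity check of the limit, here is a Python sketch (the midpoint rule and step count are arbitrary choices):

```python
import math

def quotient(t, steps=1000):
    # midpoint Riemann sum for the integral of sqrt(1 + sin(x^2)) over [0, t]
    h = t / steps
    integral = sum(math.sqrt(1 + math.sin(((k + 0.5) * h) ** 2))
                   for k in range(steps)) * h
    return integral / math.sin(t)

print(quotient(1e-3))  # ~1.0, as the Mean Value Theorem argument predicts
```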
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Help solving a recurrence relation I'm quite stuck on how to solve recurrence/difference equations.
For example I have the following linear inhomogeneous equation:
$a_n = 2a_{n-1} + 2^n n$, and $a_0 = 1/2$
I know that $2a_{n-1} = 1/2(2^n)$, and then we have $a_n = 1/2(2^n) + n(2^n)$. This is where I'm stuck though and the inhomogeneous part is throwing me off.
Am I in the right direction thinking that: $a_n = \frac12 \prod 2^n n$ ?
No answers please but some nudges in the right direction would be greatly appreciated. Thanks!
| Hint:
Set $b_n = \frac{a_n}{2^n}$. Then dividing your recurrence by $2^n$ gives
$$b_n = b_{n-1} + n.$$
Do you see how to find a closed form for $b_n$? Do you see how to turn this into a closed form for $a_n$?
As a general "life pro tip", when you see a recurrence with a geometric term (i.e. something that looks like $r^n$) that's otherwise linear, you should try dividing by it. Since $r^{n+1} = r \cdot r^n$, this procedure turns your geometric term into a constant one, which is easier to handle.
I hope this helps ^_^
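For anyone checking their work afterwards: carrying out the hint gives the closed form in `a_closed` below (this is my own derivation from the hint, not part of the original answer), and the loop verifies it against the recurrence with exact rational arithmetic:

```python
from fractions import Fraction

# Working out the hint: b_n = b_0 + sum_{i=1}^n i = 1/2 + n(n+1)/2, so
# a_n = 2^n * b_n = 2^(n-1) * (n^2 + n + 1).
def a_closed(n):
    return Fraction(2) ** (n - 1) * (n * n + n + 1)

a = Fraction(1, 2)
assert a_closed(0) == a
for n in range(1, 20):
    a = 2 * a + Fraction(2) ** n * n   # the recurrence a_n = 2 a_{n-1} + 2^n n
    assert a == a_closed(n)
print("closed form matches the recurrence")
```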
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Deriving exponential generating function for central trinomial coefficients A recent question (link) asked for a derivation of the (ordinary) generating function for the central trinomial coefficients $\{T_n\}$. But the OEIS page (A002426) also lists an exponential generating function
$$\sum_{n=0}^\infty T_n \frac{x^n}{n!}=e^x I_0(2x)$$ where $I_0(x)$ is the zeroth Bessel function. How is this derived? I'll take a stab myself at showing this using the tools of analytic combinatorics, but I wanted to open this up to more knowledgeable folks as well.
| $$\sum_{n=0}^\infty T_n\frac{x^n}{n!}=\frac{1}{2\pi i}\sum_{n=0}^\infty\frac{x^n}{n!}\oint\frac{(1+z+z^2)^n}{z^{n+1}}\,dz=\frac{e^x}{2\pi i}\oint e^{x(z+1/z)}\frac{dz}{z}=e^x I_0(2x),$$ using $T_n=[z^n](1+z+z^2)^n$, then the exponential series, then the contour integral representation of $I_0$ based on the generating function $e^{z(t+1/t)/2}=\sum_{n\in\mathbb{Z}}I_n(z)t^n$.
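One can confirm the identity coefficient-by-coefficient with a short Python sketch, comparing $T_n=[z^n](1+z+z^2)^n$ against $n!\,[x^n]\,e^x I_0(2x)$ computed from the series $I_0(2x)=\sum_k x^{2k}/(k!)^2$:

```python
from fractions import Fraction
from math import factorial

def T(n):
    # coefficients of (1+z+z^2)^n by repeated multiplication
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] += c
            new[i + 2] += c
        coeffs = new
    return coeffs[n]

def egf_coeff(n):
    # n! * [x^n] e^x I_0(2x) = n! * sum_k 1 / ((k!)^2 * (n - 2k)!)
    s = sum(Fraction(1, factorial(k) ** 2 * factorial(n - 2 * k))
            for k in range(n // 2 + 1))
    return factorial(n) * s

for n in range(10):
    assert T(n) == egf_coeff(n)
print([T(n) for n in range(7)])  # [1, 1, 3, 7, 19, 51, 141]
```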
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Solving $x + 3^x < 4$ analytically
I am solving the problems in Michael Spivak's Calculus book. In the Prologue chapter, there is the following problem:
$$x + 3^x < 4$$
I can solve this using graphical means, but not analytically. Is it possible to do the latter?
I can arrive at the solution as follows:
$$3^x < 4 - x$$
$$x < \log_3 (4-x)$$
I then draw the graphs of the functions $\log_3 (4-x)$ and $x$, and I can determine that $x$ needs to be smaller than 1 to be smaller than the $\log_3 (4-x)$.
How do I solve this purely analytically?
| $$x+3^x-4=0$$
1.) Solution as series
To solve the equation analytically as a series, you can use Lagrange inversion.
2.) No solution with elementary inverses
The equation is a zeroing equation of an elementary function.
Powers with irrational exponents are transcendental functions. Therefore the equation isn't an algebraic equation, it's a transcendental equation:
$$x+e^{\ln(3)x}-4=0$$
Because $x+e^{\ln(3)x}-4$ is a polynomial of two algebraically independent monomials, the function $\mathbb{R}\to\mathbb{R}, x\mapsto x+e^{\ln(3)x}-4$ seems to have no elementary inverse. Therefore we possibly cannot solve the equation by rearranging it only by applying elementary functions (elementary operations) which we can read from the equation.
3.) Solution in closed form with Lambert W
The equation is solvable by applying Lambert W. By some simple rearrangings, we can bring the equation into a form that is solvable by Lambert W.
The only real solution is:
$$x=\frac{4\ln(3)-W(81\ln(3))}{\ln(3)}.$$
That's equal to $1$, but I don't know the methods to show this.
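In fact $W(81\ln 3)=3\ln 3$, because $3\ln 3\cdot e^{3\ln 3}=3\ln 3\cdot 27=81\ln 3$, and then $x=(4\ln 3-3\ln 3)/\ln 3=1$. A numerical confirmation in Python (Newton's method on $we^w=a$; the starting guess is an assumption that lands on the principal branch):

```python
import math

# Solve w * e^w = 81 * ln(3) by Newton's method (starting guess w = 3).
a = 81 * math.log(3)
w = 3.0
for _ in range(60):
    ew = math.exp(w)
    w -= (w * ew - a) / ((w + 1) * ew)

x = (4 * math.log(3) - w) / math.log(3)
print(w, 3 * math.log(3), x)  # w = 3 ln 3 = 3.2958..., so x = 1.0
```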
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
$24ml+1=k^2$ has no solution for all $l=1 \dots m$ Investigating solutions of $$24ml+1=k^2$$ for $l=1\dots m$
The question is to find the $m$'s for which the above equation has no solution for any $l=1,\dots,m$.
The first few such $m$ are:
$$3, 9, 24, 27, 81, 192, 243, 432, 729$$
Actually I have found that $m$ should be of the form $2^a3^b$. I seem to be stuck on the simple task of finding a general formula for this.
One interesting addition: I just tested the cases where $$12ml+1=k^2$$ has no solution for all $$l=1..m.$$ It seems to have no solution iff $m=3^a$, but I have no proof yet.
| Here are a couple of results that can be proved:
There is no solution if $m=3^a, a>0.$
For $24ml=(k-1)(k+1)$ we must have $k=6x\pm 1$. Then$$ml=\frac {x(3x\pm 1)}{2}.$$
Now if $m=3^a, a>0$, then $m$ has to be a factor of $x$. But then $l\ge \frac {3x- 1}{2}>m,$ a contradiction.
There is a solution if $m$ is not divisible by $3$.
Let $2m=3l\pm 1$. Then $l\le m$ and
$$24lm+1=12l(3l\pm 1)+1=(6l\pm 1)^2.$$
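A brute-force Python check of both claims (no solution for the powers of $3$, a solution whenever $3\nmid m$):

```python
import math

# For each m, test whether 24*m*l + 1 is a perfect square for some l <= m.
def has_solution(m):
    return any(math.isqrt(24 * m * l + 1) ** 2 == 24 * m * l + 1
               for l in range(1, m + 1))

no_solution = [m for m in range(1, 101) if not has_solution(m)]
print(no_solution)  # the question lists 3, 9, 24, 27, 81 in this range
```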
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Differentiate an equation I'm learning about implicit differentiation and i'm stuck at understanding the concept of "differentiating an equation".
If I define $ f(x) = x^2$ and $g_1(y) = 9$ and $g_2(y) = 9 + y^2$
Then I define 2 equations:
$(1)$ $f(x) = g_1(y)$ (i.e. $x^2 = 9$)
Then I can't differentiate both sides of this equation with respect to $x$ (otherwise I will get $2x = 0$)
$(2)$ $f(x) = g_2(y)$ (i.e. $x^2 = 9 + y^2$)
Then I can differentiate both sides of this equation with respect to $x$ to get $2x = 2y\,\frac{dy}{dx}$
Could you please explain me the difference between the 2 cases ? And in general, in which condition could I differentiate both side of an equation concerning more than 1 variable ?
Thank you very much for your help!
| A few notes: you are taking derivatives. I think the more fundamental operator is the differential (which gets confusing because taking a derivative is referred to as "differentiating"). The differential is like a derivative, but not with respect to any variable. So, for instance the derivative of $x^2$ is $2x$ but the differential is $2x\,dx$. The derivative of $xy$ depends on which variable the derivative is being taken with respect to, but the differential is just $x\,dy + y\,dx$.
Derivatives are more likely to lead to contradictions. For instance, take the equation $x = 1$. The derivative of this with respect to $x$ is $1 = 0$. However, the differential is $dx = 0$, which is true: the value of $x$ is not changing. The reason for the contradiction of the derivative is that the derivative with respect to $x$ is just the differential divided by $dx$. Since $dx = 0$, dividing both sides by $0$ leads to an invalid $\frac{0}{0}$.
So, going back to your problem, when you had $x^2 = 9$, the differential gives you a better picture. $2x\,dx = 0$. Why? Because at $x^2 = 9$, $x$ is not moving, so the differential is zero. If you try to divide by $dx$, you will wind up with an invalid $\frac{0}{0}$.
So, in your second case, differentiating gives you: $2x\,dx = 2y\,dy$. You can then solve for the derivative by algebraic manipulations.
So, in both cases, finding the differential works. In the first case, finding the derivative doesn't work because you wind up with a $\frac{0}{0}$ when you try.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How many positive integral values of $n$, less than $100$, are there such that $n^3+72$ is completely divisible by $n+7$? How many positive integral values of $n$, less than $100$, are there such that $n^3+72$ is completely divisible by $n+7$ ?
MY WORK :-
Well, we know that $a+b$ divides $a^3+b^3$, thus $n+7$ divides $n^3+7^3$, i.e. it divides $n^3+343$. Now if $n+7$ divides $n^3+72$ then it divides their difference. Thus $n+7$ divides $343-72=271$. Now the factors of $271$ are $1$ and $271$ only. So if $n\lt100$ then there are no values possible; hence the answer should be $0$. However, the answer key says $1$. I want to know why it's $1$.
| Do the polynomial division with remainder:
$${n^3+72\over n+7}=n^2-7n+49-{271\over n+7}\ .$$
Now $n^2-7n+49$ is an integer for all $n$. Therefore it remains to check, for which $n\in[100]$ the number $271$ is divisible by $n+7$.
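A brute-force check (Python sketch) agrees: no positive $n<100$ works, consistent with $n+7$ having to divide the prime $271$, so the only positive solution is $n=264$:

```python
# Check divisibility of n^3 + 72 by n + 7 for all positive n < 100.
hits = [n for n in range(1, 100) if (n ** 3 + 72) % (n + 7) == 0]
print(hits)                          # []
print((264 ** 3 + 72) % (264 + 7))   # 0: n = 264 (n + 7 = 271) does work
```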
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3966917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Avoid Loss of significance error I want to compute $f(x)$to avoid loss of significance$$f(x)=\frac{1-\cos x}{x^2}$$
One way is writing Taylor series approximation for $\cos x$ about $x=0$. what about this approach:
$$f(x)=\frac{2\sin^2(\frac x2)}{x^2}$$
Is this ok ?
| Better than Taylor series would be the $[2n,2n]$ Padé approximants. For example, for $n=2$ you will have
$$f(x)=\frac{1-\cos (x)}{x^2}\sim\frac {65520-3780 x^2+59 x^4 } {131040+3360 x^2+34 x^4 } $$ Compared to a Taylor expansion, the difference is
$$\text{Padé}-\text{Taylor}=\frac{127 }{87178291200}x^{10}+O\left(x^{12}\right)$$
For example, for $x=10^{-6}$, the difference between the exact and approximated values is $1.46\times 10^{-69}$.
Using the next approximant $(n=3)$, it would be
$$f(x)=\frac{1-\cos (x)}{x^2}\sim\frac {5491886400-346666320 x^2+7038360 x^4-45469 x^6} {24(457657200+9249240 x^2+86030 x^4+389 x^6 )} $$ Compared to a Taylor expansion, the difference is
$$\text{Padé}-\text{Taylor}=\frac{11321 }{438437062103040000}x^{14}+O\left(x^{16}\right)$$
For example, for $x=10^{-6}$, the difference between the exact and approximated values is $2.58\times 10^{-98}$.
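A quick Python sketch of the first approximant's accuracy (evaluated at $x=0.1$, an arbitrary small test point where the direct formula is still numerically trustworthy):

```python
import math

x = 0.1
exact = (1 - math.cos(x)) / x ** 2
pade = (65520 - 3780 * x ** 2 + 59 * x ** 4) / (131040 + 3360 * x ** 2 + 34 * x ** 4)
print(exact, pade)  # agree to roughly machine precision
```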
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
How do we show $1-\cos x\ge\frac{x^2}3$ for $|x|\le1$? How do we show $1-\cos x\ge\frac{x^2}3$ for $|x|\le1$? My first idea was to write $$1-\cos x=\frac12\left|e^{{\rm i}x}-1\right|^2\tag1,$$ which is true for all $x\in\mathbb R$, but I don't have a suitable lower bound for the right-hand side at hand.
| The positive root of the equation $1-\cos x = \frac{x^2}{3}$ is approximately $2.16$
so the inequality is valid on a larger interval, say $[-2, 2]$. To show this consider
$$\frac{1-\cos x}{\frac{x^2}{3}} = \frac{2 \sin^2 \frac{x}{2}}{\frac{x^2}{3}}= \frac{3}{2} \left(\frac{\sin\frac{x}{2}}{\frac{x}{2}}\right)^2$$
Now, the function $t\mapsto \sin t$ is concave on $[0, \pi]$, so $\frac{\sin t}{t}$ is decreasing on $[0, \pi]$. Therefore, on the interval $[-2, 2]$ we have $\frac{\sin \frac{x}{2}}{\frac{x}{2}} \ge \sin 1$ and so
$$\frac{1-\cos x}{x^2/3} \ge \frac{3}{2} \sin^2 1= 1.06\ldots > 1$$
On the interval $[-1,1]$ the inequality is
$$1- \cos x \ge 2 \sin^2\frac{1}{2}\cdot x^2$$
where $2 \sin^2 \frac{1}{2} = 0.45969\ldots$
and this is the best estimate on $[-1,1]$.
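A numerical check of both statements (Python sketch over a grid; $x=0$ is excluded since both sides vanish there):

```python
import math

xs = [k / 1000 for k in range(-2000, 2001) if k != 0]
ratio = {x: (1 - math.cos(x)) / (x * x) for x in xs}

min_on_2 = min(ratio.values())                              # over [-2, 2]
min_on_1 = min(r for x, r in ratio.items() if abs(x) <= 1)  # over [-1, 1]
print(min_on_2 > 1 / 3)                  # True
print(min_on_1, 2 * math.sin(0.5) ** 2)  # equal: the bound is sharp at x = 1
```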
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 7,
"answer_id": 5
} |
Basic question about how to form left adjoint of forgetful functor $U: R\text-\mathrm{Mod}\text-S \to R\text-\mathrm{Mod}$ as tensor product with $S$? My algebra is a bit rusty, and I'm having some trouble forming a certain tensor product of modules over a non-commutative ring. Not sure if there is a typo, or if I'm forgetting something obvious. On page 87 of Mac Lane's Categories for the Working Mathematician (CWM) he offers a table of examples of left adjoints to forgetful functors. One row of the table offers this example:
$U: R\text-\mathrm{Mod}\text-S \rightarrow R\text-\mathrm{Mod}$
taking $(R,S)$-bimodules to left $R$-modules, with the left adjoint given by:
$F:A\rightarrow A\otimes S$
taking left $R$-modules to $(R,S)$-bimodules. Presumably $S$ is intended to be a ring containing $R$, so that $S$ can serve as an $(R,S)$-bimodule here. But wouldn't $A$ need to be a right $R$-module, so that we can even form the tensor product $A\otimes_R S$? Even if that were the case, wouldn't $A\otimes_R S$ then be merely a right $S$-module, when what is required is actually an $(R,S)$-bimodule? I'm apparently missing something, because I can't get the "types" correct.
| The tensor product should be over $\mathbb{Z}$. This doesn't require any relationship between $R$ and $S$ to be specified in advance.
If $R$ and $S$ are algebras over a common commutative ring $k$ and "$(R, S)$-bimodule" means "compatible $k$-module structures" then the tensor product should be over $k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Disagreement over the arc length of an ellipse I'm reading Elliptic Functions and Elliptic Integrals by Prasolov and Solovyev. On page $53$, it reads:
The ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ can be given parametrically by the formulas $x=a\cos \varphi$, $y=b\sin\varphi$. The differential $dl$ of the length of an arc on the ellipse is equal to $\sqrt{dx^2+dy^2}=d\varphi\sqrt{a^2\cos ^2\varphi +b^2\sin ^2\varphi}$. If $a=1$ and $b=\sqrt{1-k^2}$, then $dl=d\varphi \sqrt{1-k^2\sin ^2\varphi}$. In this case the length of the arc on the ellipse between the end point of the small half axis, $B$, and the point $M=(\cos\varphi ,b\sin\varphi )$ is equal to $E (\varphi )=\int_0^{\varphi}\sqrt{1-k^2\sin ^2\psi}\, d\psi .$
I think there is an error. My approach is the following:
The upper half of the ellipse (where the points $B$ and $M$ lie) can be represented by $y=b\sqrt{1-x^2}$. The arc length, measured from the point $B=(0,b)$ to an arbitrary point $M$ in the first quadrant, in terms of the horizontal component of $M$, is
$$s=\int_0^x \sqrt{\frac{1-k^2t^2}{1-t^2}}\, dt$$
where $b=\sqrt{1-k^2}$. The substitution $u=\arcsin t$ gives
$$s=\int_0^{\arcsin x}\sqrt{1-k^2\sin ^2 u}\, du.$$
The ellipse is parametrized by $x=\cos\varphi$, $y=b\sin\varphi$, therefore if $M=(\cos\varphi ,b\sin\varphi )$, then
$$\begin{align}s&=\int_0^{\arcsin \cos\varphi}\sqrt{1-k^2\sin ^2 u}\, du\\&=\int_0^{\frac{\pi}{2}-\varphi}\sqrt{1-k^2 \sin ^2 u}\, du.\end{align}$$
So the required arc length should be $E\left(\frac{\pi}{2}-\varphi\right)$, not $E(\varphi )$. A considerable amount of the following theorems in the book is "proved" assuming $E (\varphi )$ for the arc length, which seems a bit worrying. Maybe I'm missing something.
| If $x=a\cos \varphi$ and $y=b\sin\varphi$ then
$$
(dx)^2+(dy)^2 = a^2\sin^2\varphi\, (d\varphi)^2 +b^2\cos^2\varphi\, (d\varphi)^2,
$$
contrary to the book's claim that it would be
$a^2\cos^2\varphi\, (d\varphi)^2 +b^2\sin^2\varphi\, (d\varphi)^2.$
Yet $x=a\cos \varphi$ and $y=b\sin\varphi$ also implies that
$(x,y) = (a,0)$ when $\varphi = 0$ and $(x,y) = (0,b)$ when $\varphi = \frac\pi2,$ which implies that to integrate the curve from $(0,b)$ to an arbitrary point on the ellipse you should choose $\frac\pi2$ and not $0$ as the fixed end of your integral.
So altogether the book is not making sense.
But if we make just one change -- instead of $x=a\cos\varphi$ and $y=b\sin\varphi$,
let $x=a\sin\varphi$ and $y=b\cos\varphi$ --
then $(x,y) = (0,b)$ when $\varphi=0,$ it therefore makes sense to use $0$ as the fixed end of the integral when integrating the curve length from $(0,b)$ to an arbitrary point on the ellipse, and
$$
(dx)^2+(dy)^2 = a^2\cos^2\varphi\, (d\varphi)^2 +b^2\sin^2\varphi\, (d\varphi)^2
$$
as claimed in the book.
So I will guess that the original intention was to set
$x=a\sin\varphi$ and $y=b\cos\varphi,$ but sometime between the original conception
of the integration and the time when the book was typeset, someone mistakenly wrote
the more usual formulas $x=a\cos\varphi$ and $y=b\sin\varphi$ instead of the particular formulas that are correct for this particular problem.
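A numerical check in Python ($k=0.5$ and $\varphi=1.0$ are arbitrary test values): with the parametrization $x=\cos t$, $y=b\sin t$, the arc from $B$ (at $t=\pi/2$) to $M$ (at $t=\varphi$) has length $E(\pi/2-\varphi)$, not $E(\varphi)$:

```python
import math

k = 0.5
b = math.sqrt(1 - k * k)
phi = 1.0

def midpoint(f, lo, hi, n=20000):
    # midpoint Riemann sum for the integral of f over [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

speed = lambda t: math.sqrt(math.sin(t) ** 2 + b * b * math.cos(t) ** 2)
E = lambda s: midpoint(lambda u: math.sqrt(1 - k * k * math.sin(u) ** 2), 0.0, s)

arc = midpoint(speed, phi, math.pi / 2)   # arc length from M to B
print(arc - E(math.pi / 2 - phi))         # ~0
print(arc - E(phi))                       # clearly nonzero
```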
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the use of the quadratic formula valid? (Flammable Math's "How REAL Men Solve Equations" video) I was wondering whether the steps used in Flammable Math's YouTube video "How REAL Men Solve Equations" are valid or not. It's a joke video, but all of his work seems correct. Is this a valid use of the quadratic formula?
$$8x=1\Rightarrow 4x+4x-1=0$$
$$\Rightarrow (x)2^2+(2x)2-1=0$$
$$
\Rightarrow 2_{12} = \frac{-2x\pm \sqrt{4x^2+4x}}{2x}
$$
$$
\require{cancel} = \frac{-\cancel{2x}\pm\cancel{2x} \sqrt{1+\frac{1}{x}}}{\cancel{2x}}
$$
$$
\stackrel{2>0}{\Rightarrow} 2 = -1+\sqrt{1+\frac{1}{x}}\Rightarrow 3 = \sqrt{1+\frac{1}{x}}
$$
$$
\Rightarrow 9 = 1+\frac{1}{x} \Rightarrow \frac{1}{8}=x
$$
I believe it is because if you let $2 = y$, you get a quadratic equation in terms of $y$, then you can set up the quadratic formula to solve for $y$.
Like this $(x)(y)^2+(2x)y-1 = 0$. So, we get $ y = \frac{-2x\pm \sqrt{4x^2+4x}}{2x}$. Now that renaming made the manipulation make much more sense.
| A common error among mathematicians is thinking that particular letters or symbols carry any significance. The most common statement of the quadratic formula is that given real numbers $a,b,c$, the solutions of the polynomial equation
$$ax^2+bx+c=0$$
Are
$$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}$$
But one could just as easily say that given real numbers $\#,\%,\&$ the solutions of the polynomial equation
$$\#@^2+\%@+\&=\text{zero}$$
Are
$$@=\frac{\text{negative }\% ~\mp \square \text{root}\{{}^2\% ~\_~ \text{IV}\#\&\}}{2\#}$$
I'll leave it to you to work out what the odd symbols mean. The quadratic formula (and mathematics in general) is (are) a statement(s) about numbers (or other mathematical objects), not symbols.
To answer the comment
Let's suppose we have the equation $4x+3=11$. We can solve this using elementary methods to obtain $x=2$:
$$4x=11-3$$
$$x=\frac{11-3}{4}=8/4=2$$
Let's start again. Is it incorrect to now say $4\cdot 2+3=11$ ? No. Is it incorrect to say $4\cdot 2=11-3$? No. Is it incorrect to say $2=\frac{11-3}{4}$ ? Finally, is it incorrect to say $2=2$? Erm.. no. When we say "$x$ equals $2$ is a solution of the equation $4x+3=11$" precisely what we mean is that if we substitute the numeric value $2$ in place of $x$ in the equation, that the numeric value of both sides of the equation are the same (otherwise I suppose it wouldn't be an equation).
Another edit:
To reaffirm that the quadratic formula is a statement about numbers, and not symbols, I will present the quadratic formula without using symbols. Here goes:
Imagine any three numbers - call them the first, second and third. The only demand is that the first cannot be zero. If we can find a number, call it the 'square root' such that when we multiply it by itself we get the square of the second minus four times the product of the first and the third, then we can call half the 'square root' divided by the first, minus half the second divided by the first, the 'solution', and this 'solution' has the remarkable property that if we square it, multiply that result by the first, then add the product of the second and "solution" and then add the third, we will get zero, no matter what our choices for the first, second, and third are.
I think this makes it clear why we use symbols rather than words in mathematics!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
If $p$ is an odd prime then prove that there cannot exist a finite group $G$ such that ${\rm Aut}(G)\cong \mathbb{Z}_p$.
If $p$ is an odd prime then prove that there cannot exist a finite group $G$ such that ${\rm Aut}(G)\cong \mathbb{Z}_p$.
Can anyone tell me how to proceed in this question?
Here is my attempt:
If ${\rm Aut}(G)$ is cyclic then ${\rm Inn}(G)$ is also cyclic which implies $G/Z(G)$ is cyclic which implies $G$ is abelian.
If $G$ is abelian, then consider $\phi \in{\rm Aut}(G)$ defined by $\phi(g)=g^{-1}$. Then $\phi(\phi(g)) = \phi(g^{-1}) = g$. Hence $\phi$ is of order $2.$
Hence the order of ${\rm Aut}(G)$ must be a multiple of 2 and hence cannot have prime order.
Edit: I cannot seem to proceed when $\phi$ becomes just the identity mapping. In that case every element of the group has order $2$ (except for the identity element, of order $1$).
| Hint: yes $G$ is abelian and consider the map $\phi$: $G \rightarrow G$, defined by $\phi(g)=g^{-1}$. Prove that this is an isomorphism. What is the order of $\phi$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3967814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sketches of stable 6-pointed curves I was wondering if I could ask about stable $n$-pointed curves (essentially intersecting copies of $\mathbb{P}^1$ with at least $3$ special points on each copy - these may be either a point of intersection or a marked point).
I was wondering whether someone could perhaps help me by sketching out the possible cases for stable $6$-pointed curves? I have found all the cases possible for $5$ marked points (see sketch below), but I lose track in terms of the types of intersections that may occur for $3$ components and beyond in the case of $6$ marked points.
I have for $n=6$:
1 component: analogous to $n=5$
2 components: $2$ potential cases (distributing the marks by $2,4$ or $3,3$).
3 components: where I find difficulty.
[Sketch of the cases for $n=5$]
Would it be possible to construct the same sketch but for the case $n=6$?
Thank you very much in advance!
| I think I've found the sketches for stable $6$-pointed curves. I was getting confused as I was forgetting the no closed circuits condition.
Thanks to @Tabes Bridges for their help!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\sigma$-weak closure of $B(H) \otimes B(K)$. Let $H$ and $K$ be Hilbert spaces. I'm trying to prove that $B(H) \otimes B(K)$ (= minimal tensor product of $C^*$-algebras) is $\sigma$-weakly dense (= weak$^*$-dense) in $B(H \otimes K)$.
My progress so far:
I managed to show:
*$B(H) \otimes B(K)$ is strongly dense in $B(H \otimes K)$
*$B(H) \otimes B(K)$ contains the compact operators on $H \otimes K$, hence it suffices to show that the compact operators are $\sigma$-weakly dense in $B(H \otimes K)$.
Any ideas how I can proceed?
| Here is a more or less alternative argument.
By looking at what the rank-one operators look like, it is easy to see that $\mathcal K(H\otimes K)=\mathcal K(H)\otimes \mathcal K(K)$, this last product as C$^*$-algebras (no ambiguity due to nuclearity).
Then $\mathcal K(H)\otimes \mathcal K(K)$ is $\sigma$-weakly dense in $B(H\otimes K)$ because the locally compact space $\mathcal K(H\otimes K)$ is weak$^*$-dense in its double dual $B(H\otimes K)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Using the Dominated Convergence Theorem in the derivation of Leibniz's rule I really need help understanding how the Dominated Convergence Theorem applies to the following limit:
$$
\underset{\Delta t\rightarrow 0}{lim}\int_{a(t)}^{b(t)}\frac{f(x, t+\Delta t)-f(x,t)}{\Delta t}dx
$$
So that we can change the limit to the following integral:
$$
\int_{a(t)}^{b(t)}\underset{\Delta t\rightarrow 0}{lim}\left ( \frac{f(x, t+\Delta t)-f(x,t)}{\Delta t} \right )dx
$$
For clarification as to where I got this limit from, I was watching a video on the derivation of Leibniz's rule (link) and this was the only portion that I was unable to understand. The person doing the derivation just said we can switch the integral and limit, but I would like to understand why. Looking into it on my own, I understand that the Dominated Convergence Theorem is used to switch a limit and integral; I just don't understand why this theorem can be applied in this specific case.
| Fix $t$. Let $g_h(x) = \frac{f(x, t+h) - f(x,t)}{h}$. If there exists a function $g^*$ that
*
*is integrable on the interval $[a(t), b(t)]$, and
*is such that $|g_h(x)| \le g^*(x)$ for all $x \in [a(t), b(t)]$ and all sufficiently small $h \neq 0$,
Then $\lim_{h \to 0} \int_{a(t)}^{b(t)} g_h(x) \, dx = \int_{a(t)}^{b(t)} \lim_{h \to 0} g_h(x) \, dx$ (i.e. you can switch the order of limit and integration.)
I didn't watch the video, but I assume there are some assumptions on $f$ such that the above conditions hold.
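As a toy illustration of the two conditions (example mine, not from the video): take $f(x,t)=\sin(xt)$ with $a(t)=0$, $b(t)=1$. By the mean value theorem $|g_h(x)|\le |x|\le 1$, so the constant function $g^*\equiv 1$ dominates and the swap is justified:

```python
import math

# Toy example: f(x, t) = sin(x*t) on [0, 1]; |g_h(x)| <= |x| <= 1 = g*(x).
def quad(fn, a, b, N=20_000):  # simple midpoint rule
    h = (b - a) / N
    return sum(fn(a + (k + 0.5) * h) for k in range(N)) * h

t = 1.0
exact = quad(lambda x: x * math.cos(x * t), 0, 1)  # integral of the pointwise limit
for h in (1e-1, 1e-2, 1e-3):
    diffq = quad(lambda x: (math.sin(x * (t + h)) - math.sin(x * t)) / h, 0, 1)
    print(h, abs(diffq - exact))  # the gap shrinks as h -> 0
```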
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The relationship between covariance and inner product It looks like covariance has some relationship with the inner product. I have checked that covariance is bilinear and symmetric, but not positive definite (it is positive semi-definite). Although on the WIKI page there is some information about the relationship between covariance and inner product, I still cannot really get the point. I hope someone will give me a more detailed answer about the relationship.
| If $X$ and $Y$ are random variables with finite second moments on a probability space $(\Omega, \mathcal F ,P)$ and $L^{2}$ is the space of square integrable random variables on $(\Omega, \mathcal F ,P)$ with the inner product $ \langle X, Y \rangle=EXY$ then covariance of $X$ and $Y$ is the inner product of $X-EX$ and $Y-EY$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the vertex of $y = x^2 - 6x + 1$. My solution doesn't match the book. I'm studying how to find the vertex of a parabola and got stuck on this question.
Determine the vertex of $y = x^2 - 6x + 1$
I found the vertex is $V=(3,-8)$ (see work below), but my workbook showed it as $V=(3,10)$. Unfortunately, it does not have a section where it shows how it got to that answer so I'm doubting if my understanding is incorrect. If I use online algebra calculators, it matches with mine but I just want to make sure that I understand how signs and the formula work.
Formula I used to look for the Vertex:
$$V=\left(-\frac{b}{2a} , \frac{4ac-b^2}{4a}\right)$$
My solution:
$$\begin{align}
x &=-\frac{-6}{2(1)} = -\frac{-6}{2} = -\frac{-3}{1} = 3 \\[6pt]
y &=\frac{4(1)(1)-(-6)^2}{4(1)} = \frac{4-36}{4} = \frac{-32}{4} = \frac{-8}{1}=-8
\end{align}$$
$$V=(3,-8)$$
What I think was done on the book:
$$\begin{align}
x &=-\frac{-6}{2(1)} = -\frac{-6}{2} = -\frac{-3}{1} = 3 \\[6pt]
y &=\frac{4(1)(1)-(-6)^2}{4(1)} = \frac{4-36}{4} = \frac{40}{4} = \frac{10}{1}=10
\end{align}$$
$$V=(3,10)$$
Is it correct that I should've added $4-36$ since they are both positive numbers or subtract it?
Any explanation is appreciated. Thanks!
| An easier way to do this is to take the derivative and solve for the minimum; this gives you your $x$-value. Then substituting that value back into the original equation gives the $y$-value. For example, here you have
$$y = x^2 - 6x + 1$$
with derivative $y' = 2x - 6$.
Solving for the $x$-value (i.e. solving $y'=0$): $2x - 6 = 0 \implies x = 3$.
Now that we have the $x$-value we can use it in the original equation:
$$y = 3^2 - 6\cdot 3 + 1 = -8.$$
This gives us the coordinates of the vertex, $(3,-8)$. This is probably the fastest approach here, as taking the derivative of this function is pretty basic. Hopefully this helps.
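For readers who want to cross-check both methods, a small script (mine) evaluating the vertex formula from the question:

```python
# Check the vertex of y = x^2 - 6x + 1 with a = 1, b = -6, c = 1,
# using V = (-b/(2a), (4ac - b^2)/(4a)) from the question.
a, b, c = 1, -6, 1
vx = -b / (2 * a)
vy = (4 * a * c - b * b) / (4 * a)
print((vx, vy))  # -> (3.0, -8.0)

# The parabola opens upward, so nearby points must lie above the vertex.
f = lambda x: x * x - 6 * x + 1
assert f(vx) == vy
assert f(vx - 0.1) > vy and f(vx + 0.1) > vy
```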
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3968861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Alignment of multiple bodies on circular paths Below is a problem that I came across to which I don't know the answer. It's as follows:
$\\$
Problem:
Suppose there are $n-2$ planets currently at rest (stationary) - all positioned on the positive side of the x-axis (line) of the euclidean plane. They will soon orbit on a circular motion around the sun. Each planet $k$ will travel at the same speed and have an orbit of length $k$ where $k=3,4,...,n$ and $n>4$. All orbits will lie on the euclidean plane and will be centred at the origin (where the sun is positioned). Suppose now that all planets leave their initial position of rest at the same time. From now on, two or more planets are considered to be aligned if they lie on the positive side of the x-axis (line) at the same time in their respective orbits. Now consider all the planets that won't be aligned with planet $n$ during its first orbit. How many of them will be aligned with at least one planet that will be aligned with planet $n$ (during the first orbit of planet $n$)?
$\\$
Additional Information:
In the previous version of this problem, I realised I had made the mistake of letting planet $n$ orbit the sun more than once which made it quite obvious that every other planet would be aligned with planet $n$ after a certain number of orbits - hence you have the second comment below and the answer (from 1st January) that were driven by this mistake. So I've now corrected the mistake by letting planet $n$ orbit the sun once only - which was my original intent.
| As I understand the question, at the end of the single orbit for planet $n$ that we are considering, it will be in alignment with all planets $\{k_i\}$ where each $k_i\mid n$ - that is, all the factors of $n$ greater than $2$, since the orbital period is here directly proportional to the radius (all speeds are equal).
So these planets may also align with other planets not in this set, in earlier passes through the $x$-axis - for example, $n=28$ means that planet $n$ then aligns with planets $4$, $7$ and $14$, but on earlier orbits planet $4$ has also aligned with planets $8, 12, 16, 20$ and $24$ and planet $7$ with planet $21$.
For odd $n$ the missing planet $2$ will introduce no complications. Essentially there we can count all multiples of prime factors of $n$ and subtract off the actual factors of $n$.
For even $n$ we have to work around the missing planet $2$, but the concept is similar. Note, we can't treat $4$ as an "honorary prime" though since $n$ may be divisible by 8.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
CDF - Cumulative distribution function If the CDF is defined: $F_X\left(x\right)=\:1-e^{-λx}\::\:x\ge 0\:and\:0:\:x<0$
But when we substitute $x=0$ into the expression $1-e^{-λx}$ we get $0$.
So why we can't say that: $F_X\left(x\right)=\:1-e^{-λx}\::\:x>0\:and\:0:\:x\le 0$
| This is the CDF of an exponential distribution which is a continuous distribution. As such it’s CDF is expected to be continuous so the left- and righthand limits of $F_X$ at $0$ should coincide.
As a result both of your definitions are valid and equally correct.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
the second order continued fraction expansion of an irrational number and the distance to its closest integer Let $u$ be a posive irrational number and let
$$u=a_0+ \frac{1}{a_1+\frac{1}{a_2 \cdots} }$$
be its continued fraction expansion.
Consider the second-order finite continued fraction expansion of $u$:
$\frac{p_2}{q_2}:=a_0+ \frac{1}{a_1+\frac{1}{a_2}}$ with $p_2$ and $q_2$ being positive coprime integers.
I wonder if we have the following inequality and how to prove it:
$$|p_2-q_2 u|<\|u\|,$$
where $\|u\|$ denotes the distance from $u$ to its nearest integer.
Please also note that I am not asking how to prove $|\frac{p_2}{q_2}-u|<\|u\|$, which is simpler.
If the above is true (I have verified it with many examples), please feel free to use any very basic properties of continued fractions to prove it.
But please do not use any general fact about the optimality of the rational approximation with continued fractions to prove this because I actually want to use the above fact as one of the steps to prove optimality of the rational approximation with continued fractions.
| Hint:
The Continued Fraction expansion (and approximation) is tied to the Stern-Brocot tree.
In turn, the properties of the Stern-Brocot tree and related Farey sequence, ensure that a truncation of the CF, that is a truncated path on the tree, represents the best rational approximation (with the denominator being less than a fixed threshold).
The above is for the general case of taking $n$- terms of the CF.
A more simple reply to your specific case is that, being
$$
u = \left\lfloor u \right\rfloor + \left\{ u \right\}\quad \left| {\;0 \le \left\{ u \right\} < 1} \right.
$$
the splitting of $u$ into the integral and fractional part, then
$$
a_{\,0} = \left\lfloor u \right\rfloor
$$
and
$$
\eqalign{
& \left\{ u \right\} = {1 \over {{1 \over {\left\{ u \right\}}}}}
= {1 \over {\left\lfloor {{1 \over {\left\{ u \right\}}}} \right\rfloor
+ \left\{ {{1 \over {\left\{ u \right\}}}} \right\}}}
= {1 \over {a_{\,1} + {1 \over {{1 \over {\left\{ {1/\left\{ u \right\}} \right\}}}}}}} = \cr
& = {1 \over {a_{\,1} + {1 \over {\left\lfloor {{1 \over {\left\{ {1/\left\{ u \right\}} \right\}}}}
\right\rfloor + \left\{ {{1 \over {\left\{ {1/\left\{ u \right\}} \right\}}}} \right\}}}}}
= {1 \over {a_{\,1} + {1 \over {a_{\,2} + r_{\,2} }}}} \cr}
$$
So you can conclude by yourself.
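Not a proof, but the claimed inequality is easy to test empirically from the first three partial quotients (script mine; `check` and the sample irrationals are my choices):

```python
import math

# Empirical check (not a proof) of |p2 - q2*u| < ||u|| using the first
# three partial quotients a0, a1, a2 of u.
def check(u):
    a0 = math.floor(u); x = 1 / (u - a0)
    a1 = math.floor(x); x = 1 / (x - a1)
    a2 = math.floor(x)
    q2 = a1 * a2 + 1                  # a0 + 1/(a1 + 1/a2) = p2/q2, already
    p2 = a0 * q2 + a2                 # coprime since gcd(a2, a1*a2 + 1) = 1
    nearest = abs(u - round(u))       # ||u||, distance to the nearest integer
    return abs(p2 - q2 * u) < nearest

for u in (math.sqrt(2), math.pi, math.e, math.sqrt(5), 1 + math.sqrt(3)):
    assert check(u)
print("inequality holds for all samples")
```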
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $f$ is a contraction then it is continuous. The claim might seem intuitive if $f$ just has one or more jump points which have a distance $d$ from each other.
But I am struggling, with the following problem:
If f is not continuous, such that for all intervals: $$[a,b], \text{with } a < b < c < d: f \text{ is not continuous in } [b,c]$$
where $a,d$ are constant and $b,c$ freely selectable.
Can it then be a contraction?
Edit:
$$\exists a,d: \forall b,c: (a < b < c <d \implies f \text{ is not continuous in } [b,c])$$
I would prefer to use the definition of continuity using the epsilon-delta-criteria. However, I am fine with every (widely know) equivalent definition.
| Welcome to MSE!
Hint:
If $f$ is a contraction, then $d(fx,fy) < d(x,y)$ for every $x,y$.
To show $f$ is continuous, we want to show we can make $d(fx,fy) < \epsilon$ small by controlling $d(x,y)$... Of course, this is exactly the flavor of control that contractibility buys us.
Formally, say $\epsilon > 0$. We want to find a $\delta$ so that whenever $d(x,y) < \delta$, we're guaranteed $d(fx,fy) < \epsilon$...
By contractibility, we're guaranteed $d(fx,fy) < d(x,y) < \delta$.
Do you see where to go from here? What's a good choice of $\delta$?
I hope this helps ^_^
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Integral of functions of several variables The function is defined from $[-1,1]\times[-1,1]$ to $\Bbb R$, given by $f(x,y)=\frac{x^2-y^2}{(x^2+y^2)^2}$ when $(x,y)\neq(0,0)$ and $f(0,0)=0$.
I could find that the function is not continuous at the origin, so not differentiable there, and also the partial derivatives do not exist at the origin.
But how do we evaluate the integral of the function over the given domain. Being function of several variable how do we deal with the point of discontinuity? Can we convert it to polar coordinates and apply residue theorem? Do we have any other methods?
Any help would be appreciated. Thanks in advance.
| This is a (somewhat?) standard example of how rearranging an iterated integral makes it converge conditionally to either of two different numbers.
\begin{align}
& \int_0^1 \left( \int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \, dy \right) \, dx = +\frac \pi 4. \\[10pt]
& \int_0^1 \left( \int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \, dx \right) \, dy = -\frac \pi 4. \\[10pt]
& \iint\limits_{0\,<\,y\,<\,x\,<\,1} \frac{x^2-y^2}{(x^2+y^2)^2} \, d(x,y) = +\infty. \\[10pt]
& \iint\limits_{0\,<\,x\,<\,y\,<\,1} \frac{x^2-y^2}{(x^2+y^2)^2} \, d(x,y) = -\infty. \\[10pt]
& \iint\limits_{\varepsilon\,<\,x\,<\,1 \\ \varepsilon\,<\,y\,<\,1} \frac{x^2-y^2}{(x^2+y^2)^2} \, d(x,y) = 0.
\end{align}
Only when the integrals of the positive and negative parts are both infinite can the values of the two iterated integrals differ.
Let $y = x\tan \theta$, so that $dy = x\sec^2\theta\,d\theta$ and
$x^2 + y^2= x^2\sec^2\theta$, and as $y$ goes from $0$ to $1$
then $\theta$ goes from $0$ to $\arctan(1/x)$. Then
\begin{align*}
\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2} \, dy
= {} & \int_0^{\arctan(1/x)}
\frac{x^2 - x^2 \tan^2\theta}{(x^2 + x^2\tan^2\theta)^2} \big( x\sec^2\theta\,d\theta\big) \\[10pt]
= {} & \frac 1 x \int_0^{\arctan(1/x)} \frac{1-\tan^2 \theta}{\sec^2 \theta} \, d\theta \\[10pt]
= {} & \frac 1 x \int_0^{\arctan(1/x)} (\cos^2\theta-\sin^2\theta) \, d\theta \\[10pt]
= {} & \frac 1 x \int_0^{\arctan(1/x)} \cos(2\theta) \, d\theta \\[10pt]
= {} & \frac 1 {2x} \sin\left(2\arctan \frac 1 x\right) \\[10pt]
= {} & \frac 1 x \sin\left(\arctan \frac 1 x \right) \cos\left( \arctan \frac 1 x \right) \\[10pt]
= {} & \frac 1 x \cdot \frac 1 {\sqrt{1+x^2}} \cdot \frac x {\sqrt{1+x^2}} = \frac 1 {1+x^2}. \\[10pt]
\text{And then}
& \int_0^1 \frac{dx}{1+x^2} = \frac \pi 4.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Given $a > b > c > d > 0,$ and U $= \sqrt{ab} + \sqrt{cd}$ , V $= \sqrt{ac} + \sqrt{bd}$ , W $= \sqrt{ad} + \sqrt{bc}$.
Given $a > b > c > d > 0,$ and U $= \sqrt{ab} + \sqrt{cd}$, $V = \sqrt{ac} + \sqrt{bd}$, $ W = \sqrt{ad} + \sqrt{bc}$, arrange $U$,$V$,$W$ in ascending order.
What I Tried: I squared $U$,$V$,$W$ to get :-
$\rightarrow U^2 = ab + cd + 2\sqrt{abcd}$.
$\rightarrow V^2 = ac + bd + 2\sqrt{abcd}$.
$\rightarrow W^2 = ad + bc + 2\sqrt{abcd}$.
We can cancel out the $2\sqrt{abcd}$ from each and we are only left to compare $(ab + cd) , (ac + bd) , (ad + bc)$ in order to compare $U$$,V$,$W$.
This is where I get stuck. I could say that $ab > ac$ , but I couldn't necessarily show that $cd > bd$ , as $b > c$. That way does not work for me, and I am not finding any other way to compare these.
Can anyone help?
| Hint: With your second & third expressions you're comparing, we get
$$\begin{equation}\begin{aligned}
(ac + bd) - (ad + bc) & = ac - ad + bd - bc \\
& = a(c - d) + b(d - c) \\
& = a(c - d) - b(c - d) \\
& = (a - b)(c - d) \\
& \gt 0
\end{aligned}\end{equation}\tag{1}\label{eq1A}$$
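The same factorization applied to the first two expressions gives $(ab+cd)-(ac+bd)=(a-d)(b-c)>0$, so the ascending order is $W<V<U$. A quick numerical spot-check (sample values mine):

```python
import math

# Sample values with a > b > c > d > 0.
a, b, c, d = 7.0, 5.0, 3.0, 2.0
U = math.sqrt(a * b) + math.sqrt(c * d)
V = math.sqrt(a * c) + math.sqrt(b * d)
W = math.sqrt(a * d) + math.sqrt(b * c)
print(W, V, U)  # ascending: W < V < U
assert W < V < U
# The hint's factorization, and its analogue for the first two expressions:
assert (a * c + b * d) - (a * d + b * c) == (a - b) * (c - d) > 0
assert (a * b + c * d) - (a * c + b * d) == (a - d) * (b - c) > 0
```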
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
What's the most elementary way to solve this trigonometric problem?
A bead is threaded onto a light, inextensible string of length $4m$. One end of the string is fixed to a point, $A$, on a (vertical) wall. The other end of the string is attached to a point $B$ on the wall exactly $2m$ directly below $A.$ The bead is held in place so that it is at a distance of $1m$ from the wall, such that the string is taut and the plane the string is in is perpendicular to the plane the wall is in. Find the two possible vertical components of the displacement from $B$ to the bead.
This is a question I came up with, and I thought there should be some relatively simple trigonometric methods to get to the answer. I tried lots of stuff but nothing seemed to work.
Of course, we could find the equation of the ellipse that corresponds to the locus of points the bead could be when the string is taut, and then find the two values of $y$ when $x = 1$. But I'm looking for more elementary methods involving only trigonometry and Pythagoras. This is because I want the answer to be aimed at secondary school students who know elementary trigonometry only (Pythagoras, addition angle formulae, R addition formulae etc).
Thanks in advance.
| I will give a simple solution using the ellipse, which is rather easier.
Since the thread is tied to $A$ and $B$, we deduce that $A$ and $B$ are the foci of the ellipse. If we rotate the ellipse in the counterclockwise direction and make it horizontal, then we can use the equation for the standard ellipse, which is $$\dfrac{x^2}{a^2}+\dfrac{y^2}{b^2}=1$$ where $AD$ now becomes the $x$ axis, $DC$ the positive $y$ axis, with the origin being the midpoint of $AB$. So the distance between the foci is given to be $2ae=2$ and the length of the string $=2a=4$, which implies $a=2$ and $e=\dfrac{1}{2}$. So, $b^2=a^2(1-e^2)=4(1-0.25)=3$. So, the equation of the ellipse becomes $$\dfrac{x^2}{4}+\dfrac{y^2}{3}=1$$ Putting $y=1$, we get $x=\pm \sqrt{\dfrac{8}{3}}$. So, $BD=\sqrt{\dfrac{8}3}-1$.
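As a numerical sanity check (script mine): in the rotated frame the bead's candidate positions are $(\pm\sqrt{8/3}, 1)$, and the taut-string condition says their distances to the two foci $(\pm 1, 0)$ must sum to the string length $4$:

```python
import math

# The rotated-frame ellipse is x^2/4 + y^2/3 = 1 with foci (-1, 0) and (1, 0);
# the bead at distance 1 from the wall sits at y = 1.
x = math.sqrt(8 / 3)
for bead in [(x, 1.0), (-x, 1.0)]:
    total = math.dist(bead, (1.0, 0.0)) + math.dist(bead, (-1.0, 0.0))
    assert abs(total - 4.0) < 1e-12  # string of length 4 stays taut
print(x)  # about 1.63299
```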
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 3
} |
Prove $\sum\limits_{n=1}^{\infty} \frac{1}{\int_0^1\ln^n(x)dx} = \frac{1}{e}-1$ Ok, that's really not an easy question to convey in text. What I'm asking is this:
$$\sum\limits_{n=1}^{\infty} \frac{1}{\int_0^1\ln^n (x)dx} = \frac{1}{e}-1$$
Numerically, I'm pretty sure it does (it's about -0.63212). But, I have no idea how I'd go about proving it.
Any insights?
| Note
$$\int_{0}^{1}\ln^nx dx = \frac{d^n}{da^n}\left(\int_{0}^{1} x^a dx\right)\bigg|_{a=0}
= \frac{d^n}{da^n}\left(\frac1{a+1}\right) \bigg|_{a=0} =(-1)^{n}n!
$$
Thus
$$\sum\limits_{n=1}^{\infty} \frac{1}{\int\limits_0^1 \ln^nx dx} =
\sum\limits_{n=1}^{\infty} \frac{(-1)^n}{n!}
=\sum\limits_{n=0}^{\infty} \frac{(-1)^n}{n!}-1=
e^{-1}-1$$
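Both the closed form $\int_0^1 \ln^n x\,dx = (-1)^n n!$ and the resulting value of the series are easy to confirm numerically (script mine, using the substitution $x=e^{-t}$ to tame the singularity at $0$):

```python
import math

# integral_0^1 ln(x)^n dx = (-1)^n * integral_0^inf t^n e^{-t} dt  (x = e^{-t}),
# evaluated with a plain midpoint rule on a truncated range.
def ln_power_integral(n, T=60.0, N=100_000):
    h = T / N
    s = sum(((k + 0.5) * h) ** n * math.exp(-(k + 0.5) * h) for k in range(N))
    return (-1) ** n * s * h

for n in (1, 2, 3, 4):
    assert abs(ln_power_integral(n) - (-1) ** n * math.factorial(n)) < 1e-5

# With that closed form, the series is sum_{n>=1} (-1)^n / n! = e^{-1} - 1.
total = sum((-1) ** n / math.factorial(n) for n in range(1, 30))
print(total)  # about -0.63212, i.e. 1/e - 1
assert abs(total - (math.exp(-1) - 1)) < 1e-12
```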
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3969989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to prove there are $\frac{3^n-1}{2}$ couples $A,B \in \mathcal{P}([n])$ such that $A \cup B = [n]$, $A \neq B$. Consider the unordered couple of sets $A,B \in \mathcal{P}([n])$ such that $A \cup B = [n]$, $A \neq B$. I would like to prove that the number of such couples is:
$$\frac{3^n-1}{2}$$
I derived that expression by enumerating the solutions for the first values of $n$, and also considering the sum of the number of those couples for all different values of $n$ over $\mathcal{P}([m])$:
$${2^m \choose 2} = \sum_{n=0}^m {m \choose n}\frac{3^n-1}{2}$$
(this is easy to show using the binomial expansion).
I have checked OEIS A003462 where they count the number of couples $A,B \in \mathcal{P}([n])$ with $A \cap B = \emptyset$ and $A \neq \emptyset$ or $B \neq \emptyset$, with the same $\frac{3^n-1}{2}$ result, but it's not the same thing and I was not able to adapt that reasoning.
| Note that $A\cup B=(A\setminus B)\cup(A\cap B)\cup(B\setminus A)$ is a disjoint union. Assign to each unordered couple $\{A,B\}$ the $n$-length ternary string $(a_1,a_2,\ldots,a_n)$ defined by $$a_j=\begin{cases}0\quad\text{if $a_j\in A\setminus B$}\\1\quad\text{if $a_j\in A\cap B$}\\2\quad\text{if $a_j\in B\setminus A$}\end{cases}$$
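The bijection also makes the count easy to confirm by brute force for small $n$ (script mine):

```python
from itertools import combinations

# Subsets of [n] encoded as bitmasks; count unordered pairs {A, B}, A != B,
# with A ∪ B = [n], and compare with (3^n - 1) / 2.
def count_pairs(n):
    full = (1 << n) - 1
    return sum(1 for A, B in combinations(range(1 << n), 2) if A | B == full)

for n in range(1, 9):
    assert count_pairs(n) == (3 ** n - 1) // 2
print([count_pairs(n) for n in range(1, 6)])  # -> [1, 4, 13, 40, 121]
```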
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3970175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Is $\mathfrak{so}(2n)$ contained in $\mathfrak{sl}(2n)$? Let $\mathfrak{so}(2n) = D_n =\left\{ \begin{pmatrix} A&B \\ C&-A^T \end{pmatrix}: A,B,C\in M_n(K), B=-B^T, C=-C^T \right\}$ . Of course the matrices in $D_n$ have trace zero. This could lead me to conclude that $\mathfrak{so}(2n) \subseteq \mathfrak{sl}(2n)$, but this doesn’t seem right as for the inclusion of the respective Dynkin diagrams: in fact, it is not true that $D_n$ is contained in $A_n$. What am I missing here?
| There is nothing wrong in what you did. The fact that a simple Lie algebra $\mathfrak g_1$ is a subalgebra of another simple Lie algebra $\mathfrak g_2$ doesn't imply the existence of a connection between their Dynkin diagrams (as your own example shows).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3970343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Tuples of sets $A_1, ... , A_k \in \mathcal{P}([n])$ with $\bigcup_{i=1}^{k}{A_i}=[n]$ and $A_i \neq A_j$, $1 \le i \lt j \le k$ To generalize this question I would like to count how many tuples of sets $A_1, ... , A_k \in \mathcal{P}([n])$ with $\bigcup_{i=1}^{k}{A_i}=[n]$ and $A_i \neq A_j$, for any $i,j$ such that $1 \le i \lt j \le k$, are there.
The case $k=2$ has been solved in the linked question and the result is $\frac{3^n-1}{2!}$.
The case $k=3$ can be stated as $\frac{7^n-1}{3!}-\frac{3^n-1}{2!}$ and a computer test confirms the result.
For the case $k = 4$ I would have expected something like $\frac{15^n-1}{4!}-\frac{7^n-1}{3!}+\frac{3^n-1}{2!}$, but the first addend is not an integer. The values computed (hopefully correctly) for $n=1,2,3,4,5,6$ are $0,1,67,1546,27550,445531$.
To complete with computing data, the case $k = 5$ gives $0,0,56,4144,180096,6480656$, for $k = 6$ we have $0,0,28,7896,866432,69656776$, for $k = 7$: $0,0,8,11408,3308736,601192496$.
Any hint? If possible, for the general case, but especially for $k = 4$. Thank you.
| Another, indirect, way to answer the linked question is by using binomial inversion for the numerical part together with an inclusion-exclusion argument, so that if $$\binom{2^m}{2}=\sum_{n=0}^m {m \choose n}\frac{3^n-1}{2},$$ then $$\frac{3^m-1}{2}=(-1)^m\sum _{n=0}^m\binom{m}{n}(-1)^n\binom{2^n}{2}.$$ In this way, I think you are looking for
$$(-1)^m\sum _{n=0}^m\binom{m}{n}(-1)^n\binom{2^n}{k}.$$
which agrees with your computation. You can check this combinatorially using inclusion-exclusion by considering which element is missing in your decomposition.
Edit:
Consider then
$$(-1)^m\sum _{n=0}^m\binom{m}{n}(-1)^n\binom{2^n}{k}=\binom{2^{m}}{k}-\sum _{n=1}^{m}\binom{m}{n}(-1)^{n-1}\binom{2^{m-n}}{k},$$
this corresponds to doing $$\left |\mathcal{A}\setminus \bigcup _{\ell=1}^m\mathcal{A}_{\ell}\right |,$$ where $\mathcal{A}$ is the set of all possible ways to get $k$ different subsets $\{A_i\}$ of $[m]$ and $\mathcal{A}_{\ell}$ is the set of ways to get $k$ different subsets of $[m]$ such that $\ell$ is in none of them. Show that
$|\mathcal{A}|=\binom{2^m}{k},$ and that if $X\subseteq [m],$ then $$\left |\bigcap _{x\in X}\mathcal{A}_x\right |=\binom{2^{m-|X|}}{k}.$$
Notice that $$\mathcal{A}\setminus \bigcup _{\ell=1}^m\mathcal{A}_{\ell}$$ is exactly the set of objects you want. You eliminate every family whose union is not exactly $[m].$
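The inclusion-exclusion count is straightforward to validate mechanically against brute force over bitmask-encoded subsets; for $k=4$ it also reproduces the values listed in the question (script and function names mine):

```python
from itertools import combinations
from math import comb

# The inclusion-exclusion count of k-sets {A_1, ..., A_k} of distinct
# subsets of [m] whose union is [m].
def formula(m, k):
    return (-1) ** m * sum(comb(m, n) * (-1) ** n * comb(2 ** n, k)
                           for n in range(m + 1))

# Brute force over subsets encoded as bitmasks 0 .. 2^m - 1.
def brute(m, k):
    full = (1 << m) - 1
    count = 0
    for sets in combinations(range(1 << m), k):
        union = 0
        for s in sets:
            union |= s
        count += (union == full)
    return count

for m in range(1, 5):
    for k in range(2, 5):
        assert formula(m, k) == brute(m, k)
print([formula(m, 4) for m in range(1, 7)])  # -> [0, 1, 67, 1546, 27550, 445531]
```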
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3970442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Does homotopy equivalence to a subspace imply (weak) deformation retract? Here is a nearly identical question, but with a slight difference in interpretation. We use the definition of wikipedia, which is different from that of Hatcher or Munkres (where deformation retract refers to strong deformation retract on wikipedia).
My question is :
Suppose $X$ is a path-connected topological space, and $A$ is its subspace. If $X$ and $A$ are homotopy equivalent, is it true that $A$ is a (weak) deformation retract of $X$?
Both Hatcher and Munkres have provided counterexamples for the case of strong deformation retract, but not the one above. Please help.
| This community wiki solution is intended to clear the question from the unanswered queue.
Noel Lundström has sketched a counterexample in a comment. Let $X = \bigvee_{i=1}^\infty S^1_i$ be the countably infinite wedge of copies of the circle $S^1$. This is a $CW$-complex with one $0$-cell and countably infinitely many $1$-cells. The space $A = \bigvee_{i=2}^\infty S^1_i$ is a subcomplex of $X$ which is homeomorphic to $X$; it is also a retract of $X$. However, the inclusion $i : A \to X$ is not a homotopy equivalence, thus $A$ is not a (weak) deformation retract of $X$.
To prove this, assume $i$ has a homotopy inverse $f : X \to A$. Then $i \circ f \simeq id_X$. Let $i_1$ be the inclusion of $S^1_1$ into $X$ and $r_1 : X \to S^1_1$ be the retraction which maps $A$ to the wedge point. We get $r_1 \circ f \circ i_1 \simeq r_1 \circ id_X \circ i_1 = id$. But $r_1 \circ f \circ i_1$ is constant, thus the identity on $S^1_1$ must be inessential. This is a contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3970938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Let $x$ be a $p$-letter word, $f^{n}(x)=x \implies x$ must be a 'boring' word Let $D$ be the set of $p$-letter words, where $p$ is a prime number, and let $f$ be the function $f:D \rightarrow D$ that moves the last letter to the front. For example, if $p=7$,
$$ f(abcdefg) = gabcdef $$
We may call a word $x$ a 'boring' word if and only if $f(x) = x$. Notice that there exist only 26 boring words: all $a$s $\underbrace{aa...a}_{p}$, all $b$s, ..., and all $z$s $\underbrace{zz...z}_{p}$.
Prove that if $f^{n}(x) = x$, where $1 \le n \le p-1$, then $x$ is a 'boring' word.
My attempt: the case $n=1$ is clear. Now let $p>2$, $n=2$ and $f^{2}(x)=x$. Define $\alpha_{i}$ to be the letter at the $i$th position in $x$.
$$ x = \alpha_{1}, \alpha_{2}, ..., \alpha_{p} $$
$$ \implies f(x) = \alpha_{p}, \alpha_{1} , ..., \alpha_{p-1} $$
$$ \implies f^{2}(x) = \alpha_{p-1} , \alpha_{p}, ..., \alpha_{p-2} $$
This means that
$$ \alpha_{p} = \alpha_{p-2} = \alpha_{p-4} = ... = \alpha_{1} $$
but
$$ \alpha_{1} = \alpha_{p-1} = \alpha_{p-3} = ... = \alpha_{2} $$
so all the letters must be the same, which means a 'boring' word.
Now if $n > 2$ and $f^{n}(x)=x$. Notice that we always have $p = qn + r$, where $0<r<n$ is the remainder, then
$$ \alpha_{p} = \alpha_{p-n} = ... = \alpha_{(p -qn) = r} $$
how to continue..?
| Let $x=a_0\cdots a_{p-1}$ so $f^n(x)=a_{p-n}\cdots a_{p-1}a_0\cdots a_{p-n-1}$. By Lagrange we have $\langle p-n\rangle=\Bbb Z_p^+$ whenever $0<n<p$ and since $f^n(x)=x$ the result follows.
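A brute-force check for a small prime (script mine; a two-letter alphabet $\{0,1\}$ already suffices to exercise the argument):

```python
from itertools import product

# Check for p = 5: the only words fixed by some rotation f^n, 1 <= n <= p-1,
# are the constant ("boring") ones. f moves the last letter to the front,
# so f^n moves the last n letters to the front.
p = 5

def rotate(w, n):
    return w[-n:] + w[:-n]

for word in product((0, 1), repeat=p):
    if any(rotate(word, n) == word for n in range(1, p)):
        assert len(set(word)) == 1  # the word must be boring
print("checked all", 2 ** p, "words of length", p)
```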
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Are two graphs isomorphic? Are the two graphs isomorphic?
$$G_1=\begin{bmatrix}
a & b & c & d & e & f \\
b & a & a & c & d & a \\
c & c & b & e & f & d \\
f & & d & f & & e
\end{bmatrix}\
\quad\quad
G_2=\begin{bmatrix}
u & v & w & x & y & z \\
v & u & v & u & x & u \\
x & w & x & w & z & w \\
z & & z & y & & y
\end{bmatrix}$$
$a,b,c$ creates a triangle in $G_1$, but no triangle is created in $G_2$.
Is that enough to claim that they are not isomorphic?
| This is a good example of two graphs with the same degree sequence $(3,3,3,3,2,2)$ which are not isomorphic. As the question states, the existence of the two triangles in $G_1$ and their absence in $G_2$ is proof that the two graphs are not isomorphic.
Here they are in graphical form:
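The triangle argument can also be verified mechanically. A sketch (edge lists are as I read them off the two adjacency tables above):

```python
from itertools import combinations

# Edge lists read off the two adjacency tables in the question.
G1 = {frozenset(e) for e in ["ab", "ac", "af", "bc", "cd", "de", "df", "ef"]}
G2 = {frozenset(e) for e in ["uv", "ux", "uz", "vw", "wx", "wz", "xy", "yz"]}

def degree_sequence(E):
    deg = {}
    for e in E:
        for v in e:
            deg[v] = deg.get(v, 0) + 1
    return sorted(deg.values(), reverse=True)

def triangles(E):
    V = sorted({v for e in E for v in e})
    return sum(1 for t in combinations(V, 3)
               if all(frozenset(p) in E for p in combinations(t, 2)))

print(degree_sequence(G1), degree_sequence(G2))  # same: [3, 3, 3, 3, 2, 2]
print(triangles(G1), triangles(G2))              # 2 vs 0 -> not isomorphic
```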
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Counting the number of good functions Hi, happy new year everyone.
I have been stuck on the following problem for a while now, so I am posting it here in order to discuss it.
A function $g: [[1,n]] \to [[0,n]]$ is called good if :
$$\forall j \in[[1,n]] , \exists i~ \text{integer} \geq 0 , g^{i} (j)=0 $$
where $g^{i}=g\circ \dots \circ g ~~(i ~~\text{times})$
How many such good functions are there?
| First, $g(x)\ne x$ for all $x$; also $g^i(x)=0 \Rightarrow i\leq n$, and there must exist a number $x$ such that $g(x)=0$. Let $k_j$ denote the number of elements $x$ for which $j$ is the smallest exponent with $g^{j}(x)=0$; since every element must reach $0$ within $n$ steps, $k_1+k_2+\cdots+k_n=n$. So to construct such a function, we first select $k_1$ numbers $x$ and force $g(x)=0$, in $\binom{n}{k_1}$ ways. Then we choose $k_2$ numbers and force $g(x)\in g^{-1}(0)$, in $\binom{n-k_1}{k_2}\times k_1^{k_2}$ ways, and so on. Finally the number of functions is:
$$\sum_{\substack{k_1\geq 1,\; k_j\geq0,\; j=2,3,\cdots n\\ k_1+k_2+\cdots +k_n=n}}\binom{n}{k_1}\times \binom{n-k_1}{k_2}\times k_1^{k_2}\times \binom{n-k_1-k_2}{k_3}\times k_2^{k_3}\cdots \binom{n-k_1-k_2-\cdots -k_{n-1}}{k_n}\times k_{n-1}^{k_n}=\sum_{\substack{k_1\geq 1,\; k_j\geq0\\ k_1+k_2+\cdots +k_n=n}}\binom{n}{k_1,k_2,\cdots ,k_{n}}\times k_1^{k_2}\times k_2^{k_3}\cdots \times k_{n-1}^{k_n}$$
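For small $n$ this can be cross-checked by brute force (script mine; note the sum runs over tuples with $k_1+\cdots+k_n=n$, since every element lies in exactly one "level"):

```python
from itertools import product
from math import comb

# Brute force: count g : {1..n} -> {0..n} such that every point reaches 0.
def brute(n):
    cnt = 0
    for g in product(range(n + 1), repeat=n):   # g[j-1] = g(j)
        ok = True
        for j in range(1, n + 1):
            x, steps = j, 0
            while x != 0 and steps <= n:
                x, steps = g[x - 1], steps + 1
            if x != 0:
                ok = False
                break
        cnt += ok
    return cnt

# The level sum: k_j = #{x : j is the least exponent with g^j(x) = 0},
# restricted to k_1 + ... + k_n = n.
def level_sum(n):
    total = 0
    for ks in product(range(n + 1), repeat=n):
        if ks[0] < 1 or sum(ks) != n:
            continue
        term, remaining, prev = 1, n, None
        for k in ks:
            term *= comb(remaining, k)
            remaining -= k
            if prev is not None:
                term *= prev ** k      # 0 ** 0 == 1 handles empty levels
            prev = k
        total += term
    return total

for n in range(1, 6):
    assert brute(n) == level_sum(n)
print([brute(n) for n in range(1, 6)])  # -> [1, 3, 16, 125, 1296]
```

Incidentally, the counts $1,3,16,125,1296$ also equal $(n+1)^{n-1}$, which is consistent with identifying such a $g$ (via the edges $x \to g(x)$) with a labelled tree on $\{0,1,\dots,n\}$ rooted at $0$ and invoking Cayley's formula.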
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Analytic solution to $\alpha , \beta \in \mathbb{R}$ such that $\cos \alpha \cdot \sin \beta = \cos \bigl(\sin (\alpha \cdot \beta)\bigr)$? Are there any $\alpha , \beta \in \mathbb{R}$ such that $$\cos \alpha \cdot \sin \beta = \cos \bigl(\sin (\alpha \cdot \beta)\bigr)?$$
The trivial solutions are $(\alpha, \beta)=(0, \dfrac{\pi}{2})$. But are there more? I dont see an obvious way of tackling this problem but I keep coming back to it because it seems interesting to me (look at the graph of the solutions in desmos: https://www.desmos.com/calculator/ms8ad8cxqt).
I tried simplifying matters by trying out the case where $\alpha = \beta$ where we get that $$\cos \alpha \cdot \sin \alpha = \cos \bigl(\sin (\alpha ^{2})\bigr)$$ and using the fact that $2\sin \alpha \cos \alpha = \sin 2\alpha$ we really want to find $\alpha \in \mathbb{R}$ such that $$\dfrac{1}{2}\sin 2\alpha = \cos \bigl(\sin (\alpha ^{2}) \bigr),$$ though I have to admit that this does not make the problem easier (it seems to me at least).
Has anyone looked at this problem before and have a solution or maybe some hints or suggestions on how to go further in solving the problem?
| Here's one thing i did with this problem. Please let me know if it is legitimate or maybe trivial, hehe.
Suppose $\alpha , \beta \in \mathbb{R}$ where $\alpha \approx 0$ and $\beta \approx 0$, then it immediately follows that $\alpha \cdot \beta \approx 0$.
Now what I was thinking is that we can use the small angle approximations of $\cos \theta \approx 1-\dfrac{\theta ^{2}}{2}$ and $\sin \theta \approx \theta$ and get that $$\cos \alpha \cdot \sin \beta \approx \Bigl(1-\dfrac{\alpha ^{2}}{2} \Bigr)\cdot \beta = \beta - \dfrac{\alpha ^{2}\beta}{2}$$ and $$\cos \Bigl(\sin \alpha \cdot \beta \Bigr)\approx \cos \alpha \cdot \beta \approx 1-\dfrac{\alpha^{2}\beta^{2}}{2}.$$ So now we may ask: for which $\alpha , \beta \in \mathbb{R}$ does $$\beta - \dfrac{\alpha ^{2}\beta}{2}=1-\dfrac{\alpha^{2}\beta^{2}}{2}?$$
After some algebraic manipulations we get that $$\dfrac{(\beta -1)(\beta \cdot \alpha^{2}+2)}{\alpha^{2}}= 0 \Leftrightarrow \boxed{\beta = 1 \: \text{or} \: \beta = -\dfrac{2}{\alpha^{2}}}.$$ Since it is assumed that $\beta \approx 0$ I don't know if the solution $\beta = -\dfrac{2}{\alpha^{2}}$ is even legitimate since if $\alpha \approx 0$, then $\Bigl|-\dfrac{2}{\alpha^{2}}\Bigr|>>0.$ Anyhow, if we let $\beta = 1$ (can this be considered $\beta \approx 0$?) and let $\alpha$ be something $\approx 0$, say $\alpha = 0.001$, then we get $$\cos \alpha \cdot \sin \beta =\cos 0.001 \cdot \sin 1 \approx 0.\bar{9}\cdot 0.84147\approx 0.84147$$ and $$\cos \Bigl( \sin \alpha \cdot \beta \Bigr)\approx \cos \Bigl( \sin 0.001\cdot 1 \Bigr)\approx \cos 0.000\bar{9}\approx 0.\bar{9}.$$ Thus $$\cos \alpha \cdot \sin \beta \approx \cos \Bigl( \sin \alpha \cdot \beta \Bigr),$$ whenever $\beta =1$ and $\alpha \approx 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 2
} |
How to analyze the following complex number? $f(T) = \int_{0}^{1}e^{-iT\lambda(s)}ds$, where $\lambda(s)$ increases monotonically. I want to show that $|f(T)|$ has an upper bound that decreases with $T$ for $T \gg 1$.
For example, if $\lambda(s) = s$ then $|f(T)| \le 2/T$.
This question comes from numerical analysis in my research. It's the expression arising in the error analysis of adiabatic evolution. More specifically, the digital error has the expression $\sum_{k=1}^{L}e^{-iT\sum_{j<k}\lambda_{j}}/L$, where $\lambda_{j} = \lambda(j/L)$. From numerical results I am sure it has an upper bound which decays with $T$. However, I don't know how to prove that. I wonder if there are related formulas in complex analysis and Fourier series.
| Remark: This answer was for the first version of the question which stated that $|f(T)|^2$ was monotonic.
The result is not true. If $\lambda(s) = s$, then
\begin{equation}
f(T) = \frac{1}{i T}(1 - e^{-i T})
\end{equation}
hence $f(2k\pi) = 0$ for $k\in {\mathbb{N}^*}$ but $f$ is not identically $0$, hence $|f(T)|^2$ is not monotonic.
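A short numerical check of the closed form and of the zeros at $T = 2k\pi$ (the check itself is mine, not part of the original answer):

```python
import cmath, math

def f(T):
    # closed form for lambda(s) = s:  f(T) = (1 - e^{-iT}) / (iT)
    return (1 - cmath.exp(-1j * T)) / (1j * T)

print(abs(f(2 * math.pi)))   # ≈ 0: a zero of f
print(abs(f(3 * math.pi)))   # = 2/(3*pi) > 0, so |f|^2 is not monotonic

# cross-check the closed form against the defining integral at T = 5
T, N = 5.0, 100000
quad = sum(cmath.exp(-1j * T * (k + 0.5) / N) for k in range(N)) / N
assert abs(quad - f(T)) < 1e-6   # midpoint rule agrees with the formula
```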
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
eig function MATLAB on complex symmetric matrix Suppose I have a complex symmetric matrix D (not Hermitian) which after diagonalization becomes E. Let, D = F + iG and E = H + iJ. E, H and J are diagonal matrices. I am using 'eig' function in MATLAB to diagonalize D, i.e., [V,E] = eig(D). I always find that if I apply 'eig' on matrices F and G, I always get H and J respectively. It is easy to understand if the matrix formed by V is always real, i.e., $\text{V}^{-1}\text{DV}=\text{V}^{-1}\text{FV}+\text{i}\text{V}^{-1}\text{GV}=\text{H}+\text{iJ}$. Why does it happen? Is there a theorem behind it? Or is it just a shortcoming of the algorithm used in 'eig' in MATLAB?
| $\mathbb{C}$ is isomorphic to $\mathbb{R}^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is this proof of a homomorphism correct? Given the set of numbers $a+bi$, with $a,b \in \mathbb{Z}$ and $i^2=-1$, and a function $f$ from this set to itself defined by $f(a+bi)=a-bi$, prove whether $f$ is a homomorphism.
in some examples I had seen, it is only to write the functions like this: $(f\cdot g)(x)=f(x) \cdot g(x)$ but I am given a definition of $i^2 = -1$ and I don't know what to do with it
this is my solution, but I am not sure about it:
$f \rightarrow f \rightarrow \mathbb{Z} $
for $\forall c,d \in \mathbb{Z}$
a) sum
$(f+f)(a+bi)=(a-bi)+(c-di)$
$(f+f)(a+bi)=f(a-bi)+f(c-di)$
b) product
$(f \cdot f)(a+bi)=(a-bi) \cdot (c-di)$
$(f \cdot f)(a+bi)=f(a-bi) \cdot f(c-di)$
then $f$ is an homomorphism.
Can anyone with more knowledge of this topic tell me if this is correct or, if not, show how I can solve this problem?
| Let $X = \{a+bi : a, b \in \mathbb{Z}, i^2 =-1\}$ (although you can just take $\mathbb{C}$ as well with no problem). What you want to show is that $f : X \rightarrow X$ is a homomorphism on the addition and/or multiplication structure. This just means that $f$ preserves the addition and/or multiplication structure (respectively). In other words, given $z, w \in X$ (say $z = a+bi, w = c+di$) you want that
$f(z+w) = f(z) + f(w)$ (preserving the addition structure)
and/or
$f(z\cdot w) = f(z) \cdot f(w)$ (preserving the multiplication structure)
$f(z+w) = f(a+c + (b+d)i) = a+c - (b+d)i = a-bi + c-di = f(z) + f(w)$ so $f$ is an addition preserving homomorphism.
$f(z \cdot w) = f(ac-bd+(ad+bc)i) = ac - bd - (ad + bc)i = (a-bi)(c-di) = f(z) \cdot f(w)$ so $f$ preserves the multiplication structure as well. So $f$ is indeed a homomorphism (in this case a ring homomorphism since it preserves both addition and multiplication).
In general, given a group $(G, +)$, a group homomorphism only needs to preserve the addition structure. Given a ring $(R, + , \cdot)$ a ring homomorphism needs to preserve both the addition and multiplication structures, and so on.
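The two identities above can also be machine-checked on a grid of Gaussian integers using Python's built-in complex type (a sanity check, not a proof; the grid size is arbitrary):

```python
def f(z):
    return z.conjugate()       # f(a+bi) = a-bi

# all Gaussian integers with components in -3..3 (exact in floats)
gauss = [complex(a, b) for a in range(-3, 4) for b in range(-3, 4)]
assert all(f(z + w) == f(z) + f(w) for z in gauss for w in gauss)
assert all(f(z * w) == f(z) * f(w) for z in gauss for w in gauss)
print("conjugation preserves both + and * on the sampled grid")
```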
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3971928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving Coin-Weighing Problem (81 Coins, 1 Fake) Using Information Theory So I have a coin-weighing puzzle under these conditions:
*
*There are 80 real coins and 1 fake coin (total of 81 coins).
*The real coins are all the same weight, and the weight of the fake coin is different from the real coin.
*The real and fake coins cannot be distinguished, other than weight.
*A balance is used to identify the fake coin.
*When using the balance, the same number of coins is placed on either plate. The result is either "the left plate is heavier," "the right plate is heavier," or "the two plates are the same weight."
According to prior research, this problem cannot be solved by using the balance 4 times or less. I tried to show this by using information theory as follows, but I feel like I am missing something here.
Label the coins 1,...,81, and let $C$ be the number of the fake coin. Also, let $W$ be a random variable defined so that $W=1$ when the fake coin is heavier than the real coin, and $W=0$ when lighter. At the initial state, $C$ and $W$ can be regarded as independent random variables following a uniform distribution.
Then, define random variable $R_k$ so that $R_k = 0$, $R_k = 1$, $R_k = 2$, when, the result after using the balance for the $k$th time is, respectively, "the left plate is heavier," "the right plate is heavier," or "the two plates are the same weight."
Then, $R_k$ can be regarded as uniquely determined by $R_1, ..., R_k,$ and the real values of $C$ and $W$. (Do I have to add additional proof that this is true?)
Assume that the fake coin can be surely identified by four measurements using the balance, then this means that the value of $C$ is determined when $R_1,...,R_4$ have been determined. Thus, using the entropy function and its chain rule, \begin{eqnarray*}
H(C, W, R_1, ..., R_4) &=& H(C) + H(R_1 | C) + H(R_2 | R_1, C) + H(R_3 | R_2, R_1, C) + H(R_4|R_3, R_2, R_1, C) + H (W| R_4, R_3, R_2, R_1, C)
\end{eqnarray*}
And, we also have\begin{eqnarray*}
H(C, W, R_1, ..., R_4) &=& H(W) + H(C|W) + H(R_1 | C, W) + H(R_2 | R_1, C, W) + H(R_3 | R_2, R_1, C, W) + H(R_4|R_3, R_2, R_1, C, W)
\end{eqnarray*}
Then we notice that $H(C|W) = H(C)$, as $C$ and $W$ are independent variables. Also, $H(R_1 | W, C) < H(R_1 | C), H(R_2 | R_1, C, W) < H(R_2 | R_1, C) $, and so on. Thus, from the two equations, we get
$H(W) < H(W | R_1, ..., R_4, C)$
But we know that $H(W | R_1, ..., R_4, C) < H(W)$, so there is a contradiction.
OK. So I somehow arrived at a contradiction, showing that the problem cannot be solved by using the balance four times. But then I realized that the same logic would apply when using the balance any number of times, so apparently something is wrong...
What sort of logic am I missing? I would appreciate any help.
| The conclusion in this statement is wrong:
Then we notice that $H(C|W) = H(C)$, as $C$ and $W$ are independent variables. Also, $H(R_1 | W, C) < H(R_1 | C), H(R_2 | R_1, C, W) < H(R_2 | R_1, C) $, and so on. Thus, from the two equations, we get ${\color{red}{H(W) < H(W | R_1, ..., R_4, C)}}$
The conclusion would be $H(W) > H(W|R_1,\ldots, R_4,C)$.
Your logic would be like saying "If $a+b=c+d$ and $b < c$, then $a < d$" The inequality is backwards.
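A concrete instance of the backwards inequality in the analogy above (numbers chosen arbitrarily): with $a+b=c+d$ and $b<c$, one is forced to $a>d$, not $a<d$.

```python
# a + b = c + d together with b < c forces a > d (not a < d)
a, b, c, d = 5, 1, 2, 4
assert a + b == c + d
assert b < c
assert a > d
print(f"{a}+{b} = {c}+{d} and {b} < {c}, yet {a} > {d}")
```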
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3972102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
An exact and an approximate confidence interval for a Poisson distribution $X_{1}, ..., X_{10}$ ~ $Pois(\theta)$
Observations:
$x_{1} = x_{3} = x_{6} = x_{8} = x_{9} = 0$; $x_{2} = x_{5} = x_{10} = 1$; $x_{4} = 2$; $x_{7} = 3$.
I want to determine an exact (numerical) and an approximate (numerical) confidence interval for $\theta$ of confidence level 0.9:
*
*exact confidence interval: I don't know how to do this. On the internet I can only find things about an approximate confidence interval.
*approximate confidence interval:
$X = (X_{1}, ..., X_{10})$
MLE: $\hat{\theta} = \bar{X} = 0.8$
And: $Var(X) = \theta$. So an estimate for the variance is also $\bar{X} = 0.8$.
Then according to the answer: An approximate confidence interval is $\bar{X}$ $\pm$ $\sqrt{\bar{X}/n}$ $\xi_{1 - \alpha/2}$.
But this means that: T = $\sqrt{n}$ $\frac{\theta - E[X]}{\sqrt{Var(X)}}$ ~ N(0,1).
But this does not hold for X ~ Pois($\theta$), right?
Because a Poisson distribution does not have to be symmetric.
| Let's start with the approximate CI:
For $n$ large enough ($n=10$ is borderline, but actually enough to get your CI with a Gaussian distribution) you can apply the CLT in the following way
$$\frac{\overline{X}_{10}-\theta}{\sqrt{\theta}}\sqrt{10}\sim N(0,1)$$
In fact this is a pivotal quantity with Standard Gaussian distribution so you have two ways to calculate an appropriate approximate CI
*
*(suggested procedure) Estimate the standard deviation of $\overline{X}_{10}$ with $\sqrt{\frac{\overline{X}_{10}}{10}}$ finding the following CI
$$\Bigg(\overline{X}_{10}-1.64\sqrt{\frac{\overline{X}_{10}}{10}};\overline{X}_{10}+1.64\sqrt{\frac{\overline{X}_{10}}{10}}\Bigg)$$
That is
$$\Bigg(0.8-1.64\sqrt{0.8/10};0.8+1.64\sqrt{0.8/10}\Bigg)$$
$$\Bigg(0.3348;1.2652\Bigg)$$
*(less common procedure) Solving the following double inequality
$$-1.64<\frac{0.8-\theta}{\sqrt{\theta}}\sqrt{10}<1.64$$
Leading to the following CI
$$0.8+\frac{1.64^2}{20}\pm \sqrt{\Bigg(\frac{1.64^2}{20}\Bigg)^2+0.8\frac{1.64^2}{10}}$$
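The first (suggested) interval is easy to reproduce in a few lines; note that the quoted endpoints $(0.3348, 1.2652)$ actually correspond to $z_{0.95}\approx 1.645$, even though the display rounds it to $1.64$:

```python
import math

n, xbar = 10, 0.8
z = 1.6449                          # 95th percentile of N(0,1)
half = z * math.sqrt(xbar / n)
lo, hi = xbar - half, xbar + half
print(round(lo, 4), round(hi, 4))   # 0.3348 1.2652
```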
Exact Confidence interval
It is easy to use the following statistic (which, like the sample mean, is complete and sufficient)
$$T=\Sigma_i X_i\sim Po(10\theta)$$
now let's set the two probabilities
$$0.05=\sum_{t=0}^{8}\frac{e^{-10\theta}(10\theta)^t}{t!}$$
$$0.05=\sum_{t=8}^{\infty}\frac{e^{-10\theta}(10\theta)^t}{t!}$$
Solving these two equations for $\theta$ numerically (you can start from the approximate bounds found before), you will quite easily find that an exact CI for $\theta$ at 90% is the following
$$\Big(0.3980;1.4435\Big)$$
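The two defining equations can be solved numerically with a hand-rolled Poisson CDF and bisection (a sketch of my own; the bracketing intervals are chosen by eye from the approximate CI above):

```python
import math

def pois_cdf(k, mu):
    # P(T <= k) for T ~ Poisson(mu)
    return sum(math.exp(-mu) * mu ** t / math.factorial(t) for t in range(k + 1))

def bisect(g, lo, hi, tol=1e-10):
    # find a root of g on [lo, hi], assuming a sign change on the bracket
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (lo, mid) if g(lo) * g(mid) <= 0 else (mid, hi)
    return (lo + hi) / 2

t_obs, n = 8, 10                 # observed sum of the data and sample size
# lower bound: P(T >= 8) = 0.05, i.e. 1 - P(T <= 7) = 0.05
theta_lo = bisect(lambda th: 1 - pois_cdf(t_obs - 1, n * th) - 0.05, 0.01, 2.0)
# upper bound: P(T <= 8) = 0.05
theta_hi = bisect(lambda th: pois_cdf(t_obs, n * th) - 0.05, 0.01, 3.0)
print(round(theta_lo, 3), round(theta_hi, 4))   # ≈ 0.398 1.4435
```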
Further explanation for the example "in the lecture": finding a CI for a Bernoulli parameter with the statistical method. They suppose $n=20$ and $\Sigma_i X_i =4$
Graphically:
Left tail: $5.1\%$
Right tail: $1.6\%$
Confidence interval %: $100-1.6-5.1=93.3\%$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3972363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to prove that this graph is non-planar?
I'm having trouble proving that the graph shown above is non-planar. I think I have to look for a subdivision of $K_{3,3}$ and apply Kuratowski's theorem; can someone give me any tips for finding one?
| Consider the cycle $ADEHCF$. The edges between them form $K_{3,3}$ minus the edge $AH$; the path $ABGH$ links $A$ and $H$ without using vertices in the chosen cycle. This is a $K_{3,3}$ minor in the graph, so it is non-planar.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3972509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How can I simulate a Poisson process? How can I simulate a single realization of the Poisson counting process described above, where the discrete-amplitude random process X(t) counts the number of packets arriving in the time interval [0,t)? Take the simulation total time to be 10 hours and the time sampling step size to be 1 sec.
Assume that the average number of packets arriving per minute in a certain cell equal 10
I tried to simulate the Poisson process, but I can't understand how to do it, nor how to compute the interarrival times between two successive packets.
| The chance of an event in $1$ second is $\frac 1{6}$ because you have $10$ events per minute. Throw a random number with that probability for whether an event happens. You have an error because you can't have two events in one second, but the chance is rather small. I would have taken a shorter interval to reduce that, but $1$ second is specified in the problem.
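A minimal sketch of this Bernoulli-approximation scheme (the variable names and the random seed are my choices):

```python
import random

random.seed(1)
P = 1 / 6                        # P(arrival in a 1-second slot), from 10/min
T_TOTAL = 10 * 3600              # 10 hours at 1-second steps

slots = [random.random() < P for _ in range(T_TOTAL)]
X = [0]                          # X[t] = number of arrivals in [0, t)
for s in slots:
    X.append(X[-1] + s)
print(X[-1])                     # expected total ≈ P * T_TOTAL = 6000

# interarrival times = gaps between successive arrival slots
times = [t for t, s in enumerate(slots) if s]
gaps = [b - a for a, b in zip(times, times[1:])]
print(sum(gaps) / len(gaps))     # mean gap ≈ 1/P = 6 seconds
```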
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3972659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Homogeneous polynomials' roots I'm trying to understand the proof of this result
Let $f(x,y) \in K[x,y]$ be an homogeneous polynomial s.t. $deg(f) = d>0$ then $\exists$ at most d $(a,b) \in K^2$ non-trivial roots of $f$
Proof:
$f(x,y)=\sum_{i=0}^{d} c_ix^iy^{d-i}$
$deg(f)>0 \implies \exists c_i \neq 0 \:$. Since f is homogeneous, $f(a,b)=0 \implies f(ta,tb)=0 \ \forall t \in K$
Assume, for contradiction, that $(1,0)$ is not a root $\implies c_d \neq 0 \:$. Then $f(t,1)$ is a polynomial in one variable, with $deg=d$ $\implies$ it has at most $d$ roots
$f(t,1)=a_0\prod_{i=1}^{d}(t-a_i)$. Let's say $\ t = \frac{x}{y}$, then
$f(t,1)=f(\frac{x}{y},1)=a_0\prod_{i=1}^{d}(\frac{x-a_i}{y})$ but $f(\frac{x}{y},1)=y^df(x,y)$, which has a root in $(1,0)$ (contradiction)
So $(1,0)$ must be a root. Let $r<d$ be its multiplicity, then
$c_d = c_{d-1} = ... = c_{d-r+1}=0$
$f(x,y)=\sum_{i=0}^{d-r} c_ix^iy^{d-i}=y^{r}\sum_{i=0}^{d-r}c_ix^iy^{d-r-i}$
And that's how it ends.
I can't understand how this works:
1)I don't think it's necessary that $f(x,y)$ has a root in $(1,0)\ $ (for example $f(x,y)=x^d+y^d$ doesn't have it), so why he has followed this line? Also, he substitutes $t$ with $\frac{x}{y}$ but weren't we considering $f(t,0)$? Is this legit?
2)How does this proves the thesis?
Thank you for help
| Thanks to RandyMarsh, I think I can close the question now, if the following is correct. Here is the adjusted statement
Let $f(x,y)\in K[x,y]$ be an homogeneous polynomial s.t. $deg(f) = d>0$, then $\exists$ at most $d\ (a,b) \in K^2\ projective\ non-trivial \ roots\ [(a,b) \neq (0,0)]$ of $f$, counted with their multiplicity.
Proof:
$f(x,y)= \sum_{i=0}^{d} c_ix^iy^{d-i}\\deg(f)=d>0 \implies \exists c_i \neq 0\\ \text{Since } f \text{ is homogeneous, } f(a,b)=0 \implies f(ta,tb)=0\ \forall t \in K\\ \text{We have two cases} \\
\text{1) }(1,0)\text{ is not a root}\implies c_d \neq 0 \text{. Then } f(t,1) \text{ is a one-variable polynomial of } deg = d \implies \text{ it has at most } d \text{ roots}\\
2)(1,0) \text{ is a root. Let } r \leq d \text{ be its multiplicity, then } c_d = c_{d-1} = \dots = c_{d-r+1} = 0 \\
\implies f(x,y) = y^r \cdot g(x,y) \text{ where } g \text{ is homogeneous with } deg(g)=d-r \implies g(1,y) \text{ is a one-variable polynomial of } deg=d-r \text{, so it has at most } d-r \text{ roots.}$
I considered two distinct cases because I didn't see the point of assuming that $(1,0)$ must be a root.
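The adjusted statement can also be brute-force checked over a small finite field (the example polynomial and the choice of field are mine):

```python
# Count projective roots of a homogeneous polynomial over GF(p):
# the statement says there are at most d = deg(f) of them.
p = 11
f = lambda x, y: (x ** 2 - y ** 2) % p     # homogeneous of degree d = 2

# representatives of the projective line: (x, 1) for every x, plus (1, 0)
points = [(x, 1) for x in range(p)] + [(1, 0)]
roots = [pt for pt in points if f(*pt) == 0]
print(roots)                               # [(1, 1), (10, 1)]: 2 roots <= d
```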
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3972825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate derivative of reachability Gramian: is this correct? So I have the reachability Gramian matrix for a linear time-invariant system:
\begin{align}
W(t_{0},t) = \int_{t_{0}}^{t}e^{A(t-s)}BB^{\intercal}e^{A^{\intercal}(t-s)}\,ds.
\end{align}
In this case I have $t_{0}=0$. Let us differentiate this w.r.t. to $t$:
\begin{align}
\dot{W}(0,t)
=& \frac{d}{dt}\left[\int_{0}^{t}e^{A(t-s)}BB^{\intercal}e^{A^{\intercal}(t-s)}ds\right] \\
=& \frac{d}{dt}\left[e^{At}\int_{0}^{t}e^{-As}BB^{\intercal}e^{-A^{\intercal}s}ds\;e^{A^{\intercal}t}\right] \\
=& Ae^{At}\int_{0}^{t}e^{-As}BB^{\intercal}e^{-A^{\intercal}s}ds\;e^{A^{\intercal}t} \\
&+e^{At}\left[e^{-As}BB^{\intercal}e^{-A^{\intercal}s}\bigg\vert_{0}^{t}\right]e^{A^{\intercal}t} + e^{At}\int_{0}^{t}e^{-As}BB^{\intercal}e^{-A^{\intercal}s}ds\;e^{A^{\intercal}t}A^{\intercal} \\[3mm]
=& AW(0,t) + BB^{\intercal} - e^{At}BB^{\intercal}e^{A^{\intercal}t}+W(0,t)A^{\intercal}.
\end{align}
So this is the result I obtain. However, in the solution to this problem that I was trying to solve the term $- e^{At}BB^{\intercal}e^{A^{\intercal}t}$ did not appear, or was zero. So I am wondering if I made some error, or if there is some control theory result that make this term vanish?
| When you applied the product rule, you incorrectly computed the derivative of the integral. By the fundamental theorem of calculus, we have
$$
\frac d{dt} \int_{t_0}^t e^{-As}BB^\top e^{-A^\top s}\,ds = e^{-At}BB^\top e^{-A^\top t}.
$$
There is no extra term from the lower limit of integration here; that is where the spurious $-e^{At}BB^{\top}e^{A^{\top}t}$ term in your computation came from.
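Putting the corrected term back into the product rule gives $\dot W(0,t) = AW(0,t)+W(0,t)A^\top+BB^\top$. Here is a scalar ($1\times 1$) finite-difference sanity check (the numerical values are arbitrary):

```python
import math

a, b = -0.7, 1.3                 # scalar A and B

def W(t):
    # W(0,t) = integral_0^t e^{2a(t-s)} b^2 ds = b^2 (e^{2at} - 1) / (2a)
    return b * b * (math.exp(2 * a * t) - 1) / (2 * a)

t, h = 0.9, 1e-6
dW_numeric = (W(t + h) - W(t - h)) / (2 * h)   # central difference
dW_formula = a * W(t) + W(t) * a + b * b       # A W + W A^T + B B^T
print(abs(dW_numeric - dW_formula))            # ≈ 0
```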
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What do the local sections of a nilpotent ideal sheaf look like? Let $(X, O_X)$ be a scheme and $I$ a nilpotent ideal sheaf, i.e. $I^n=0$ for some $n$. Would this imply that each $I(U)$ is a nilpotent ideal of $O_X(U)$?
| I’m submitting this as an alternative to hm2020’s answer. It’s an elaboration of what I wrote in the comments.
Let $I\subseteq \mathcal{O}_X$ be an ideal sheaf, and let $\mathcal{F}$ be the presheaf assigning to each open subset $U$ of $X$ the ideal $I(U)^n\subseteq \mathcal{O}_X(U)$. You say that $I$ is nilpotent of degree $n$ if $\mathcal{F}^\#$ is zero, where $\#$ is used to denote sheafification. But, since $\mathcal{F}$ is a separated presheaf, being a subsheaf of the sheaf $\mathcal{O}_X$, one has that $\mathcal{F}=0$ if and only if $\mathcal{F}^\#=0$ (e.g. see [1, Tag00WB]). Thus, we deduce the following:
Fact: Let $X$ be a scheme and $I$ an ideal sheaf of $\mathcal{O}_X$. Then, the following are equivalent:
*
*For all open subsets $U$ the ideal $I(U)^n$ is zero.
*The sheafification of the presheaf $U\mapsto I(U)^n$ is zero.
[1] Various authors, 2020. Stacks project. https://stacks.math.columbia.edu/
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to show this particular $4 \times 4$ matrix is positive definite? I am preparing for an exam in Numerical Analysis, and I am solving some practice problems. The question is to show that the following matrix $A$ is positive definite. I would only have about 10 minutes to work on this problem, so I am trying to solve this as fast as possible.
We are also told that all the eigenvalues of $A$ are distinct (but this information may not be useful here; there is a part b to this question which might make use of this fact)
$$A = \begin{bmatrix}
1 & -1 & 2 & 0\\
-1 & 4 & -1 & 1\\
2 & -1 & 6 & -2\\
0 & 1 & -2 & 4
\end{bmatrix}$$
My Attempts: My first thought is to use Gershgorin's Circle Theorem to show that all the eigenvalues are positive. However, this does not work because the first Gershgorin disk contains negative reals.
My second thought is to use Sylvester's Criterion. This is perhaps doable in under 10 minutes, but it is prone to mistakes (especially when going fast). I am also not sure if Sylvester's Criterion was taught in the class that this problem comes from.
| HINT:
The rows from $2$ to $4$ are diagonally dominant. The first one is not. But we can make it so by multiplying on both sides by the matrix $\operatorname{diag}(t, 1, 1, 1)$, where $t$ is large.
$\bf{Added:}$ Looks plausible, but the other rows are affected. Indeed, we need $t> 3$, and then the second row is no longer dominant. So this approach does not work. In fact, one can check that there is no way to transform our matrix with a diagonal matrix to make it diagonally dominant.
Maybe just showing directly that the determinant is positive. Since the principal minor $(2,3,4)$ is positive definite, being dominant, this would be good enough.
Using WolframAlpha, I got the Cholesky decomposition of the matrix
$$\left[\begin{matrix} 1 & -1 & 2 & 0 \\ -1& 4 & -1& 1\\ 2& -1& 6&-2\\ 0&1 &-2&4\end{matrix} \right]=\\=\left[\begin{matrix} 1 & 0 & 0 & 0 \\ -1& 1 & 0& 0\\ 2& 1/3 & 1&0\\ 0&1/3 &-7/5&1\end{matrix} \right]\left[\begin{matrix} 1 & 0& 0 & 0 \\ 0& 3 & 0& 0\\ 0& 0& 5/3&0\\ 0&0 &0&2/5\end{matrix} \right]\left[\begin{matrix} 1& -1& 2& 0\\ 0& 1& 1/3& 1/3\\ 0& 0& 1& -7/5\\0& 0& 0& 1\end{matrix} \right]$$
The diagonal part has moderate eigenvalues. Now, $A$ has a small eigenvalue, $\approx 0.02$, and this is possible since the upper diagonal part has a small singular value.
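The decomposition above is exactly what Sylvester's criterion needs: positive definiteness is equivalent to all four pivots of the $LDL^\top$ factorization being positive. The pivots can be computed exactly (this code is my own, using rational arithmetic to avoid rounding):

```python
from fractions import Fraction as F

A = [[F(1), F(-1), F(2), F(0)],
     [F(-1), F(4), F(-1), F(1)],
     [F(2), F(-1), F(6), F(-2)],
     [F(0), F(1), F(-2), F(4)]]

# LDL^T by symmetric elimination: A is positive definite iff every
# pivot d[k] is positive (equivalent to Sylvester's criterion)
n, d = 4, []
L = [[F(int(i == j)) for j in range(4)] for i in range(4)]
for k in range(n):
    d.append(A[k][k] - sum(L[k][j] ** 2 * d[j] for j in range(k)))
    for i in range(k + 1, n):
        L[i][k] = (A[i][k] - sum(L[i][j] * L[k][j] * d[j] for j in range(k))) / d[k]

print(d)   # [1, 3, 5/3, 2/5] -- all positive, so A is positive definite
```

The pivots match the diagonal of the Cholesky-type decomposition quoted above.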
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Indefinite integral of $\int \frac 1 x \operatorname{arsech} \frac x a \, \mathrm d x$ Spiegel's "Mathematical Handbook of Formulas and Tables" (Schaum, 1968), item $14.668$ gives the indefinite integral of the area hyperbolic secant (that is, the "inverse" hyperbolic secant) as:
$$\int \frac 1 x \operatorname{arsech} \frac x a \, \mathrm d x = -\frac 1 2 \ln (a/x) \ln (4 a/x) - \frac {(x/a)^2} {2 \cdot 2 \cdot 2} - \frac {1 \cdot 3 (x/a)^4} {2 \cdot 4 \cdot 4 \cdot 4} - \cdots$$
This can be expressed as:
$$\int \frac 1 x \operatorname{arsech} \frac x a \, \mathrm d x = -\frac 1 2 \ln \left({\frac a x}\right) \ln \left({\frac {4 a} x}\right) - \sum_{n \mathop \ge 1} \frac {(2 n)!} {2^{2 n} (n!)^2 (2 n)^2} \left({\frac x a}\right)^{2 n} + C$$
Getting nearly there is easy enough.
I take as my starting point the power series expansion of the area hyperbolic cosine of $\dfrac x a$:
$$\operatorname{arcosh} \frac x a = \ln \frac {2 x} a - \left({\sum_{n \mathop = 1}^\infty \frac {(2 n)!} {2^{2 n} (n!)^2 (2 n)} \left({\frac a x}\right)^{2 n} }\right)$$
(This is derived from the result in Spiegel, item $20.40$.)
From it I get the same for the area hyperbolic secant $\operatorname{arsech}$ of $\dfrac x a$, as it's just the $\operatorname{arcosh}$ of $\dfrac a x$:
$$\operatorname{arsech} \frac x a = \ln \frac {2 a} x - \left({\sum_{n \mathop = 1}^\infty \frac {(2 n)!} {2^{2 n} (n!)^2 (2 n)} \left({\frac x a}\right)^{2 n} }\right)$$
and integrate term by term (justified by Fubini's theorem, I believe) to get me eventually to:
$$\int \frac 1 x \operatorname{arsech} \frac x a \, \mathrm d x = -\frac 1 2 \ln^2 \left({\dfrac x {2 a} }\right) - \sum_{n \mathop = 1}^\infty \frac {(2 n)!} {2^{2 n} (n!)^2 (2 n)^2} \left({\frac x a}\right)^{2 n} + C$$
During the course of the above I inverted the reciprocal in the logarithm of the integrand to get it into a standard form for integration $\displaystyle \int \dfrac {\ln (c x)} x \mathrm d x = \dfrac {\ln^2 {c x} } 2$ which I think I got right.
The above is consistent with the result quoted in Schaum for the indefinite integral for the area hyperbolic cosine, where the logarithm term was left as a square.
So there are $2$ questions outstanding:
*
*How do you actually get from $\ln^2 \left({\dfrac x {2 a} }\right)$ to $\ln \left({\dfrac a x}\right) \ln \left({\dfrac {4 a} x}\right)$? I get that they will differ by a constant which can be subsumed into a constant of integration, but manipulation of $\ln^2 \left({\dfrac x {2 a} }\right)$ gets me only as far as $\left({\ln \left({\dfrac a x}\right)}\right)^2 + 2 \ln 2 \ln \dfrac a x + \left({\ln 2}\right)^2$ and at this point I can't see how to proceed. I can't reduce the $2 \ln 2 \ln \dfrac a x$ and get it to go the way I want it to.
*How did Spiegel ever get to that $\ln \left({\dfrac a x}\right) \ln \left({\dfrac {4 a} x}\right)$ term in the first place? His quoted result for the indefinite integral for the area hyperbolic cosine has that term in the $\ln^2$ form, the same as what I got for the area hyperbolic secant. That is, what direction could he have taken with his integration so as to land upon a result in that form? I can't see why he would deliberately manipulate it into that form, as he is happy enough to leave the indefinite integral for the area hyperbolic cosine in the $\ln^2$ form.
| Since you are calculating an indefinite integral, there is no real inconsistency: the difference between the two results must be a constant!
*
*The difference:
$$\ln \left(\frac{a}{x}\right) \ln \left(\frac{4 a}{x}\right)-\ln ^2\left(\frac{x}{2 a}\right)=
\ln \left(\frac{a}{x}\right) \left[\ln \left(\frac{a}{x}\right)+2 \ln (2)\right]-\left[\ln \left(\frac{a}{x}\right)+\ln (2)\right]^2
=-\ln ^2(2)$$
*The reason: the real domain of ${\rm arcsech}(x)$ is $(0,1]$, so when you interchange the series expansion and the integration, you must be careful, i.e.
$$
\int\to\int_a^x
$$
In other words, you should have
$$
\int_a^x \frac{\ln (2 a)-\ln (z)}{z} \, dz=-\frac{1}{2} \ln \left(\frac{a}{x}\right) \ln \left(\frac{4 a}{x}\right)
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine the Automorphisms of the extension field $\mathbb{F}_{19683}$ of $\mathbb{F}_{27}$ I am trying to find $\text{Aut}_{\mathbb{F}_{27}}(\mathbb{F}_{19683})$. I have a theorem that looks promising but can not make it work.
For a field $\mathbb{K}$ with $\text{char}(\mathbb{K})=p$ prime we have the Frobenius morphism $\text{Fr}_p:\mathbb{K} \rightarrow \mathbb{K}, \ \alpha \mapsto \alpha^p$. Then $(\text{Fr}_p)^n$ is also a morphism. The theorem I am trying to use states that
$\text{Aut}_{\mathbb{F}_{p}}(\mathbb{F}_{p^n}) = \langle Fr_p \rangle$.
Obviously my problem is that 27 is not prime. Rewriting the problem $\text{Aut}_{\mathbb{F}_{3^3}}(\mathbb{F}_{3^{3^3}})$ and the fact that the theorem was mention just above this exercise make me think that one can make use of it to solve the problem.
Any help appreciated.
| Note that for general field extensions $E/F/K$ we have a restriction homomorphism $$\psi:\operatorname{Aut}_K(E)\to\operatorname{Aut}_K(F)$$
with kernel $\operatorname{Aut}_F(E)$. Applying this to the situation $E=\mathbb{F}_{19683},F=\mathbb{F}_{27},K=\mathbb{F}_{3}$ we see that the group $\operatorname{Aut}_F(E)$ is exactly that subgroup of $\langle Fr_3\rangle=\operatorname{Aut}_K(E)$ which fixes $\mathbb{F}_{27}$. Since $\langle Fr_3\rangle$ is cyclic it shouldn't be too hard to find a generator for that.
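Concretely: $\operatorname{Aut}_{\mathbb{F}_3}(\mathbb{F}_{3^9})=\langle Fr_3\rangle$ is cyclic of order $9$, and $Fr_3^k$ fixes $\mathbb{F}_{27}$ pointwise exactly when $x^{3^k}=x$ for all $x\in\mathbb{F}_{27}$, i.e. when $26 \mid 3^k-1$. A two-line check (this computation is mine, not part of the original answer) identifies that subgroup:

```python
# which powers Fr_3^k (k = 0..8) fix F_27 pointwise?
# Fr_3^k fixes F_27 iff 26 = |F_27 \ {0}| divides 3^k - 1
fixing = [k for k in range(9) if (3 ** k - 1) % 26 == 0]
print(fixing)   # [0, 3, 6]: the order-3 subgroup generated by Fr_3^3 = Fr_27
```

So $\operatorname{Aut}_{\mathbb{F}_{27}}(\mathbb{F}_{19683})$ is cyclic of order $3$, generated by $Fr_{27}:\alpha\mapsto\alpha^{27}$.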
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Existence of an injection between $\mathbb{Z}$ and $[0,1]$ Is the statement that there is an injection between $\mathbb{Z}$ and $[0,1]$ true or false?
$\mathbb{Z}\xrightarrow{(1-1)} [0,1]$
I've based this on the definition of cardinality: if $\lvert A \rvert \leq \lvert B \rvert$
then there exists an injection $f:A\longrightarrow B$ and one knows that
$\lvert \mathbb{Z} \rvert \leq \lvert [0,1]\rvert$
thus there exists an injection
| It is true, take for example a function $f:\mathbb{Z} \to [0,1]$ $$f(n)=\cases{{1\over 2|n|+1};\;\;\; n\leq0\\{1\over 2n};\;\;\; n>0}$$
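The injectivity of this map is easy to verify mechanically on a finite range (exact rational arithmetic avoids any floating-point collisions):

```python
from fractions import Fraction

def f(n):
    # the map from the answer: n <= 0 -> 1/(2|n|+1),  n > 0 -> 1/(2n)
    return Fraction(1, 2 * abs(n) + 1) if n <= 0 else Fraction(1, 2 * n)

vals = [f(n) for n in range(-100, 101)]
assert len(set(vals)) == len(vals)       # all distinct: injective here
assert all(0 < v <= 1 for v in vals)     # every value lies in [0, 1]
print("f is injective on -100..100 and maps into [0, 1]")
```

Non-positive inputs land on fractions with odd denominators and positive inputs on even ones, so the two branches can never collide.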
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Can every isomorphism between a vector space and its dual be written as a non-degenerate bilinear form? If I have understood correctly, non-degeneracy of a bilinear form:
$$\omega:V\times V\rightarrow \Bbb F \tag{1},$$
in which $\Bbb F$ is the underlying field of $V$, is a strong enough condition to conclude that $\omega$ provides an isomorphism between $V$ and $V^*$: non-degeneracy $\Rightarrow$ trivial kernel $\Rightarrow$ injectivity $\Rightarrow$ surjectivity if dimensions are the same.
My question is does the converse hold, can all isomorphisms between a vector space and its dual be induced by some non-degenerate bilinear form on $V$?
| If $f:V\to V^*$ is an isomorphism, and we define $\omega:V\times V\to \Bbb{F}$ as $\omega(v_1,v_2):= [f(v_1)][v_2]$, then $\omega$ is bilinear, and note that from the definition, we have that $\omega(v_1,\cdot)= f(v_1)$. In other words, the mapping $v_1\mapsto \omega(v_1,\cdot)$ is equal to $f$, which by assumption is an isomorphism of $V$ onto $V^*$.
If $V$ is infinite-dimensional, there are no isomorphisms $V\to V^*$, so the statement above becomes vacuous.
Edit:
The previous version of my answer made it seem like there was a distinction to be made (because it wasn't obvious to me in the case of non-symmetric/skew-symmetric bilinear forms). But, thanks to @Marc van Leeuven for pointing out that the distinction between left/right non-degeneracy is unnecessary.
For the sake of completeness, here's the proof of the equivalence. Let $\omega_l:V\to V^*$ be the left map, $x\mapsto \omega(x,\cdot)$, and let $\omega_r:V\to V^*$ be the right map $y\mapsto \omega(\cdot, y)$. Also, let $\iota:V\to V^{**}$ be the canonical map into the double dual. One can easily verify by unwinding definitions that
\begin{align}
\omega_r^*\circ \iota=\omega_l\quad\text{and} \quad\omega_l^*\circ\iota=\omega_r
\end{align}
where the stars on the maps indicate the dual/transposed mapping.
Now, make the assumption that $V$ is finite-dimensional. Then, $\iota$ is an isomorphism.
So, if $\omega_r$ is assumed an isomorphism, then so is $\omega_r^*$, and thus (by first equation) $\omega_l$ being the composition of two isomorphisms is as well. Similarly, $\omega_l$ being an isomorphism implies by the second equation that $\omega_r$ is.
One can also rephrase the argument using matrices since we're in finite dimensions, but I'd rather not :)
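For concreteness, here is the matrix rephrasing alluded to above: in finite dimension an isomorphism $f$ is an invertible matrix $M$, the associated form is $\omega(v,w)=v^\top M w$, and $\omega(e_i,\cdot)$ reads off the $i$-th row of $M$ (the example matrix below is arbitrary):

```python
M = [[1, 2],
     [0, 1]]       # invertible (det = 1), so the form is non-degenerate

def omega(v, w):
    # omega(v, w) = v^T M w
    return sum(v[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

e1, e2 = [1, 0], [0, 1]
print([omega(e1, e1), omega(e1, e2)])   # [1, 2]: first row of M = f(e1)
print([omega(e2, e1), omega(e2, e2)])   # [0, 1]: second row of M = f(e2)
```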
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3973995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dantzig's algorithm for longest path What I am looking for is an algorithm from Dantzig which should find a longest path in a digraph. A couple of academic sources mention this algorithm, so after a while I was able to track it down to this book:
https://books.google.cz/books?id=YfdgAQAAQBAJ&pg=PA72&lpg=PA72&dq=Gondran,+Michel+and+Minoux,+Michel.+Graphes+et+algorithmes.+dantzig&source=bl&ots=T4cjgcB38K&sig=ACfU3U0HSGy4bjfpXckDf9DjBIihagX02g&hl=cs&sa=X&ved=2ahUKEwiLus709fztAhXN8qQKHUceB4IQ6AEwEXoECBYQAg#v=onepage&q=du%20plus%20long&f=false
since I don't speak French, it is a matter of time until I translate all of that. I am unable to find any English-language resources. If you have knowledge of this algorithm, I would be much obliged if you pointed me to an alternative resource.
| Okay I think I found it. The simple trick is to:
*
*multiply all arcs by -1
*find the shortest path in the graph using Dantzig's algorithm (described in the book as Algorithm 10)
*multiply result by -1
Good thing is this algorithm should also work for multigraphs.
UPDATE: I ended up using Bellman-Ford algorithm with the modification described above. Worked for directed multigraph with negative edges.
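A sketch of the negate-and-run-Bellman-Ford trick (my own minimal implementation, not from the book; it assumes the original graph has no positive-weight cycles, so the negated graph has no negative cycles and Bellman-Ford terminates correctly):

```python
def longest_path(n, edges, src):
    # edges are (u, v, w) triples; negate the weights, run Bellman-Ford,
    # then negate the resulting distances back
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0
    for _ in range(n - 1):                    # standard relaxation rounds
        for u, v, w in edges:
            if dist[u] != INF and dist[u] - w < dist[v]:
                dist[v] = dist[u] - w
    return [-d if d != INF else None for d in dist]

# small DAG: 0->1 (2), 0->2 (5), 1->3 (4), 2->3 (1)
edges = [(0, 1, 2), (0, 2, 5), (1, 3, 4), (2, 3, 1)]
print(longest_path(4, edges, 0))   # [0, 2, 5, 6]
```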
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3974126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understanding the derivation of the PDF of a two-point mixed distribution from its CDF
A random variable $X$ has the cumulative distribution function
$$\begin{cases}
0 & x<1 \\
\dfrac{x^2-2x+2}{2} & 1 \le x<2 \\
1 & x\ge 2
\end{cases}$$
Calculate $E[X]$.
The answer uses the following pdf to get $E[X]$:
$$\begin{cases}
\dfrac{1}{2} & x=1 \\
x-1 & 1 < x<2 \\
0 & \text{otherwise}
\end{cases}$$
The book I am reading doesn't have much (any!) information on two-point mixed distributions and I want to make sure that I understand how this PDF was derived from the CDF. My understanding so far is that firstly, we need to figure out from the CDF that $X$ follows a mixed distribution. Note that $F(1) = 1/2 \ne 0$, which indicates that there is a jump in the CDF of $X$ at $x=1$. Since $X$ is continuous from $1<x<2$, $X$ must follow a two-point mixed distribution. Now, the magnitude of the jump in the graph is $\frac{1}{2}-0 = \frac{1}{2}$ which gives $f(x) = \frac{1}{2}$ if $x=1$. The rest of the pdf can be obtained using routine computations. Finally, we need to compute the probabilistic "weights" to get to the CDF. Clearly, the "weight" for the discrete part is $\frac{1}{2}$, so the "weight" for the continuous part must be $1-\frac{1}{2} = \frac{1}{2}$, and we have
$$E[X] = \int_1^2 (x-1) \cdot x dx + \text{Weight for the discrete part} \cdot 1 = \int_1^2 x(x-1) dx + P[X=1] \cdot 1$$
$$= \int_1^2 x(x-1) dx + \frac{1}{2}$$
which gives the correct answer. However, I am more interested in learning whether my process for obtaining the answer (which was based mostly on deduction and intuition) is correct. Can someone please critique my post? Thanks!
| What you wrote there isn't a PDF, since it doesn't have integral $1$. There isn't really a PDF in the usual sense at all, because there is a discrete part.
A PDF for this distribution is $f(x)=\frac{1}{2} \delta(x-1) + (x-1) 1_{(1,2)}(x)$. Here $\delta$ denotes the Dirac delta "function", which is not really a function in the same way you are used to. This funny $1$ notation is defined by
$$1_A(x)=\begin{cases} 1 & x \in A \\
0 & x \not \in A \end{cases}$$
One can evaluate $\int_{-\infty}^\infty x f(x) dx = \int_{-\infty}^\infty x \left ( \frac{1}{2} \delta(x-1) + (x-1) 1_{(1,2)}(x) \right ) dx$ to get the expectation. To do that, the delta function term winds up giving you $\left. \frac{1}{2} x \right |_{x=1}=\frac{1}{2}$, while the other term gives you $\int_1^2 x(x-1) dx$.
Note that the PDF on $(1,2)$ is in fact $x-1$: there is no need for an extra factor of $1/2$ because $\int_1^2 (x-1) dx$ is already $1/2$.
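As a quick sanity check (not part of the original solution), the expectation can be computed exactly with rational arithmetic: the point mass contributes $1\cdot\tfrac12$ and the continuous part contributes $\int_1^2 x(x-1)\,dx = \tfrac56$.

```python
from fractions import Fraction

# Continuous part: integral_1^2 (x^2 - x) dx = [x^3/3 - x^2/2]_1^2 = 5/6
cont = Fraction(2**3, 3) - Fraction(2**2, 2) - (Fraction(1, 3) - Fraction(1, 2))
# Point mass 1/2 at x = 1 contributes 1 * 1/2
ex = Fraction(1, 2) + cont
print(ex)  # 4/3
```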
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3974264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
conditioning of the monomial basis It is well-known that the Vandermonde matrix is ill-conditioned when all nodes, i.e., the points that generate it, are real. I have read the following on a paper:
The ill-conditioning of the Vandermonde basis has an elementary explanation in cases where the points $x_j$ are unequal in size. If $|x_j|$ varies with $j$, then the powers $|(x_j)^k|$ vary exponentially. This means that function information associated with smaller values of $x_j$ will only be resolvable through exponentially large expansion coefficients, and accuracy will quickly be lost in floating point arithmetic
I have not understood the explanation because the terms "function information" and "expansion coefficients" are not defined in this context in the paper. Could somebody provide an explanation of why the Vandermonde matrix is ill-conditioned?
| The Vandermonde matrices are not necessarily ill-conditioned when the nodes are real.
Example: In the case of $n=2$ nodes, the Vandermonde matrix is
$$A = \begin{bmatrix} 1 & x_0 \\ 1 & x_1 \end{bmatrix}.$$
If $x_0 = -1$ and $x_1 = 1$, then columns are orthogonal and the 2-norm condition number of $A$ is given by
$$\kappa_2(A) = \|A\|_2 \|A^{-1}\|_2 = 1.$$
This is the smallest possible value.
In order to understand exactly why the Vandermonde matrices can be arbitrarily ill-conditioned we investigate the singular values. Let $$\sigma_1 \ge \sigma_2 \dots \ge \sigma_n \ge 0$$
denote the singular values of $A$. Then $\{\sigma_i^2\}_{i=1}^n$ are the eigenvalues of $A^TA$ and $$\kappa_2(A) = \frac{\sigma_1}{\sigma_n}.$$ We can establish simple bounds on the singular values as follows. For a general symmetric matrix $B = [b_{ij}]$, the smallest eigenvalue satisfies
$$ \lambda_{\min}(B) = \min \{ x^T B x \: : \: \|x\|_2 = 1 \} \leq b_{ii} $$
for all $i$ and the largest eigenvalue satisfies
$$ \lambda_{\max}(B) = \max \{ x^T B x \: : \: \|x\|_2 = 1 \} \ge b_{ii} $$
for all $i$.
Example: In the case of $n=2$ nodes, we have
$$ A^T A = \begin{bmatrix} 2 & x_0+x_1 \\ x_0 + x_1 & x_0^2 + x_1^2 \end{bmatrix}.$$
It follows that $$ \sigma_2^2 \leq 2, \quad x_0^2 + x_1^2 \leq \sigma_1^2$$
We conclude that $$\kappa_2(A) \ge \sqrt{ \frac{x_0^2 + x_1^2}{2} }.$$
The case of $x_1 = -x_0 = -x$ is particular clear. In this case, the relevant linear system is
$$ \begin{bmatrix} 1 & x \\ 1 & -x \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}. $$
The solution is
$$ \begin{bmatrix} c_0 \\ c_1 \end{bmatrix} = \frac{1}{2x} \begin{bmatrix} x & x \\ 1 & -1\end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \end{bmatrix} = \begin{bmatrix} \frac{y_0+y_1}{2} \\ \frac{y_1-y_0}{2x} \end{bmatrix}.$$
Here we note that while $c_0 = \frac{y_0+y_1}{2}$ is insensitive to small relative changes in the right-hand side when $y_1$ and $y_2$ are nearly equal, the divided difference
$$c_1 = \frac{y_1 - y_0}{2x}$$ is extremely sensitive to small relative changes in the right-hand side when $y_1$ and $y_2$ are nearly equal. Specifically, a tiny relative change can flip the sign of $c_1$. The situation is reversed when $y_1 + y_2 \approx 0$. In both cases, we see that a small componentwise relative change in the right hand-side $y$, can cause a large componentwise relative change in the solution $c$.
The analysis presented here extends to the general case of $n>2$. Here we have $$\sigma_n^2 \leq n, \quad \sum_{j=0}^{n-1} (x_j^{n-1})^2 \leq \sigma_1^2$$
and
$$ \kappa_2(A) \ge \sqrt{\frac{\sum_{j=0}^{n-1} (x_j^{n-1})^2}{n }}.$$
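For the $2\times 2$ case the condition number and the lower bound above can be checked in a few lines of Python (a sketch using the closed-form eigenvalues of $A^TA$; the node values below are arbitrary examples):

```python
import math

def cond2_vandermonde(x0, x1):
    """2-norm condition number of A = [[1, x0], [1, x1]] via the
    closed-form eigenvalues of A^T A (valid for the 2x2 case)."""
    s, q = x0 + x1, x0 * x0 + x1 * x1
    tr, det = 2 + q, 2 * q - s * s          # trace and det of A^T A
    disc = math.sqrt(tr * tr - 4 * det)
    lam_max, lam_min = (tr + disc) / 2, (tr - disc) / 2
    return math.sqrt(lam_max / lam_min)     # sigma_1 / sigma_2

# orthogonal columns give the optimal conditioning kappa = 1
assert abs(cond2_vandermonde(-1.0, 1.0) - 1.0) < 1e-9
# nodes of very different size: kappa exceeds the bound sqrt((x0^2+x1^2)/2)
assert cond2_vandermonde(0.0, 100.0) >= math.sqrt((0.0 + 100.0**2) / 2)
```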
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3974431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to prove that a sum: $\sum_{n=1}^\infty \cos(n\sqrt 2)$ is uniformly bounded? How to prove that a sum:
$$\sum_{n=1}^\infty \cos(n\sqrt 2)$$
has uniformly bounded partial sums?
That problem is connected to an answer to other question that I have asked before:
Convergence of $\sum_{n=0}^{\infty}\frac{cos(n \sqrt{2})}{\sqrt{n}}$. Is my thinking correct?
| I'll provide two methods, one using complex numbers and one using sum-of-angle formulas. Let $b=\sqrt 2$.
Note that by Euler's formula:
\begin{align*}
\sum_{n=1}^N\cos\left(bn\right)&=\Re\sum_{n=1}^N\left(\cos\left(bn\right)+i\sin\left(bn\right)\right)\\
&=\Re\sum_{n=1}^Ne^{ibn}\\
&=\Re\frac{e^{ib(N+1)}-e^{ib}}{e^{ib}-1}
\end{align*}
Here $e^{ib}-1\ne 0$ and $e^{ib(N+1)}$ is uniformly bounded, so the partial sum is also uniformly bounded.
To use trigonometry, you want to recall formulas for $f\left((n\pm 1)b\right)$ where $f$ is $\sin$ or $\cos$. In this case $\sin$ works:
\begin{align*}
\Delta_n&=\sin\left((n+1)b\right)-\sin\left((n-1)b\right)\\
&=\sin(nb)\cos(b)+\cos(nb)\sin(b)-\left\{\sin(nb)\cos(b)-\cos(nb)\sin(b)\right\}\\
&=2\cos(nb)\sin(b)
\end{align*}
Therefore we have a telescoping sum:
\begin{align*}
\sum_{n=1}^N\cos\left(bn\right)&=\frac{1}{2\sin\left(b\right)}\sum_{n=1}^N\left(\sin\left((n+1)b\right)-\sin\left((n-1)\right)b\right)\\
&=\frac{\sin\left((N+1)b\right)+\sin\left(Nb\right)-\sin\left(b\right)-\sin\left(0\right)}{2\sin(b)}
\end{align*}
Now again $\sin(b)\ne 0$ and $\sin((N+1)b)+\sin(Nb)$ is uniformly bounded, so the partial sum is uniformly bounded.
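The telescoping closed form is easy to spot-check numerically (a quick sketch, not part of the proof):

```python
import math

b = math.sqrt(2)

def closed_form(N, b):
    # from the telescoping sum: (sin((N+1)b) + sin(Nb) - sin(b)) / (2 sin b)
    return (math.sin((N + 1) * b) + math.sin(N * b) - math.sin(b)) / (2 * math.sin(b))

s = 0.0
for N in range(1, 2001):
    s += math.cos(N * b)
    assert abs(s - closed_form(N, b)) < 1e-9    # matches the running partial sum

# uniform bound: |closed_form| <= 3 / (2 |sin b|) for every N
bound = 3 / (2 * abs(math.sin(b)))
assert all(abs(closed_form(N, b)) <= bound for N in range(1, 10**5))
```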
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3974578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that a set $F \subseteq \mathbf{R}$ is closed $\Longleftrightarrow$ every Cauchy sequence in $F$ has a limit in $F$ Exercise 3.2.5 in Stephen Abbott's Understanding Analysis asks to prove the below theorem. I would like to ask, if my proof is sound.
$\newcommand{\absval}[1]{\left\lvert #1 \right\rvert}$
Theorem. A set $F \subseteq \mathbf{R}$ is closed if and only if every Cauchy sequence contained in $F$ has a limit that is also an element of $F$.
My Attempt.
($\Longrightarrow$) Assume $F \subseteq \mathbf{R}$ is closed. By definition, a set is closed if and only if it contains all its limit points. Let $x \in F$ be an arbitrary limit point of $F$. Thus, $V_\epsilon(x)$ intersects $F$ in some point other than $x$. To produce a Cauchy sequence in $F$, we let $\epsilon = 1/n$. Then, there exists a point $x_n \in F$, where
\begin{align*}
x_n \in V_\epsilon(x) \cap F
\end{align*}
with the stipulation that $x_n \ne x$.
It is easy to see that $(x_n) \to x$. To see this, choose $N > 1/\epsilon$. Then, for all $n \ge N$, we have
\begin{align*}
\absval{x_n - x} < \epsilon
\end{align*}
Convergent sequences are Cauchy and Cauchy sequences are convergent. Convergent Sequence $\Longleftrightarrow$ Cauchy sequence.
Since, $F$ contains all its limit points, all Cauchy sequences in $F$ have their limiting value in $F$.
($\Longleftarrow$) Assume that every Cauchy sequence in $F$ has a limit that is also an element of $F$. Therefore, $\lim x_n = x$, $x_n \ne x$. By the definition of convergence, given any $\epsilon > 0$, there exists a term $x_N$ in the sequence satisfying $\absval{x_N - x} < \epsilon$. So, $V_\epsilon(x) \cap F$ contains elements other than $x$.
| Your proofs in both directions are wrong. For $\implies$ assume that $F$ is closed and start with any Cauchy sequence $(x_n)$ in $F$. [This is important]. Since any Cauchy sequence of real numbers converges we see that $x =\lim x_n$ exists as a real number. Since each $x_n \in F$ and $F$ is closed it follows that $x \in F$.
For the converse let $(x_n)$ be a sequence in $F$ converging to some $x$. We have to prove that $x \in F$. Now $(x_n)$ is a Cauchy sequence in $F$. By assumption the limit $x$ is in $F$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3974758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof of $BAC-CAB$ identity missing step I'm stuck on one step of the proof for the identity: $$ \vec{A}\times(\vec{B}\times\vec{C}) = \vec{B}(\vec{A}\cdot\vec{C}) - \vec{C}(\vec{A}\cdot\vec{B})$$
So far, the proof follows as:
We know that $\vec{B}\times\vec{C}$ gives a vector perpendicular to both $\vec{B}$ & $\vec{C}$, and that $ \vec{A}\times(\vec{B}\times\vec{C})$ gives a vector perpendicular to both $\vec{A}$ & $(\vec{B}\times\vec{C})$. Therefore, the vector $\vec{A}\times(\vec{B}\times\vec{C})$ must lie in the plane containing both $\vec{B}$ & $\vec{C}$.
Provided $\vec{B}$ & $\vec{C}$ are not parallel (if they were, $\vec{A}\times(\vec{B}\times\vec{C}) = 0$ regardless), vectors $\vec{B}$ & $\vec{C}$ span the 2D plane containing them both.
Therefore, we can express any vector in the plane as a linear combination of both $\vec{B}$ and $\vec{C}$ and so we can write: $$\vec{A}\times(\vec{B}\times\vec{C}) = \alpha\vec{B} + \beta\vec{C} \tag{1}$$
Taking the scalar product of both sides with $\vec{A}$: $$\vec{A} \cdot (\vec{A}\times(\vec{B}\times\vec{C})) = \vec{A} \cdot (\alpha\vec{B} + \beta\vec{C}) = 0$$ So, $$\alpha(\vec{A} \cdot \vec{B}) + \beta(\vec{A} \cdot\vec{C}) = 0$$
Now writing, $$\lambda = \frac{\alpha}{\vec{A} \cdot\vec{C}} = -\frac{\beta}{\vec{A} \cdot \vec{B}}$$
and substituting $\alpha$ and $\beta$ back into (1) we get:
$$ \vec{A}\times(\vec{B}\times\vec{C}) = \lambda(\vec{B}(\vec{A}\cdot\vec{C}) - \vec{C}(\vec{A}\cdot\vec{B}))$$
I am able to show $\lambda = 1$ with particular choices of unit vectors for $\vec{A}, \vec{B}, \vec{C}$ but I am unable to prove that $\lambda$ is independent of the magnitude of vectors (i.e. $\lambda = 1$ for all choices of $\vec{A}, \vec{B}, \vec{C}$). This is the step that I am struggling with. Any suggestions?
| Observe that both sides of the equation to be proved are linear in each of the three variables (when the other two are fixed).
Based on this, it's enough to prove the statement for all possible combinations of the standard basis, say.
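Since both sides are trilinear, one can also spot-check the identity numerically on random vectors (a quick sketch; it complements, rather than replaces, the basis argument):

```python
import random

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
for _ in range(100):
    A, B, C = ([random.uniform(-5, 5) for _ in range(3)] for _ in range(3))
    lhs = cross(A, cross(B, C))
    # BAC - CAB: B (A.C) - C (A.B), componentwise
    rhs = [dot(A, C) * bi - dot(A, B) * ci for bi, ci in zip(B, C)]
    assert all(abs(l - r) < 1e-9 for l, r in zip(lhs, rhs))
```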
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Approach ideas for the integral $\int\frac{dx}{(x^4-16)^2}$ Well, the title sums it up pretty well. I'm in search for some smart approach ideas for solving this indefinite integral:
$$\int\frac{dx}{(x^4-16)^2}$$
I know one that would work for sure, namely partial fraction decomposition, but it gets really heavy when regrouping the coefficients for the powers of $x$ and then solving an $8\times8$ system of linear equations. It will eventually work, but I suspect there is something more ingenious behind this problem.
I also tried all sorts of trigonometric substitutions and formulations, but that added square power really is a bummer to it all.
I'm generally open to any exchange on the topic and would be glad to hear some advice in such situations. Many thanks in advance!
| Just integrate by parts
\begin{align}
\int\frac{dx}{(x^4-16)^2}
& =\int \frac1{64x^3} d\left( \frac{-x^4}{x^4-16}\right)
=-\frac1{64} \frac x{x^4-16} -\frac3{64} \int \frac{dx}{x^4-16}\\
&= -\frac1{64} \frac x{x^4-16} -\frac3{64}\cdot\frac18\int \left( \frac1{x^2-4}-\frac1{x^2+4} \right)dx
\end{align}
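The integration-by-parts identity can be verified numerically on an interval away from the poles at $x=\pm 2$ (a sketch using Simpson's rule; the interval $[0,1]$ is an arbitrary choice):

```python
def simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

f = lambda x: 1.0 / (x**4 - 16)**2
g = lambda x: 1.0 / (x**4 - 16)           # the easier remaining integral
F = lambda x: -x / (64 * (x**4 - 16))     # boundary term from the parts step

a, b = 0.0, 1.0
lhs = simpson(f, a, b)
rhs = (F(b) - F(a)) - (3 / 64) * simpson(g, a, b)
assert abs(lhs - rhs) < 1e-10
```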
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Convergence of $\sum^\infty_{n=0} \frac{\cos(n + \frac{1}{n^2})}{n \cdot \ln(n^2 + 1)}$ Is my idea correct? Convergence:
$$\sum^\infty_{n=0}\frac{\cos \left(n + \frac{1}{n^2} \right)}{n \cdot \ln \left( n^2 + 1 \right)}$$
Edit (I tried to follow the idea written by @Daniel Fischer):
$$\sum_{n=0}^{\infty} \frac{\cos(n + \frac{1}{n^2})}{n\cdot \ln(n^2 + 1)} = \sum_{n=0}^{\infty} \frac{\cos(n)}{n \cdot \ln(n^2 + 1)} + \sum_{n=0}^{\infty} \frac{\cos(n + \frac{1}{n^2}) - \cos(n)}{n \cdot \ln(n^2 + 1)}$$
$\sum_{n=0}^{\infty} \frac{\cos(n)}{n \cdot \ln(n^2 + 1)}$ is convergent because of the Dirichlet test and How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression?
$\sum_{n=0}^{\infty} \frac{\cos(n + \frac{1}{n^2}) - \cos(n)}{n \cdot \ln(n^2 + 1)} = \sum_{n=0}^{\infty} \frac{-2 \left(\sin \left( \frac{2n+\frac{1}{n^2}}{2} \right) \cdot \sin \left( \frac{\frac{1}{n^2}}{2} \right) \right)}{n \cdot \ln \left(n^2 + 1 \right)} = \sum_{n=0}^{\infty} \frac{-2 \left(\sin \left( n+\frac{1}{2n^2} \right) \cdot \sin \left( \frac{1}{2n^2} \right) \right)}{n \cdot \ln \left(n^2 + 1 \right)}$
Then:
$$\sum_{n=0}^{\infty} \left| \frac{-2 \left(\sin \left( n+\frac{1}{2n^2} \right) \cdot \sin \left( \frac{1}{2n^2} \right) \right)}{n \cdot \ln \left(n^2 + 1 \right)} \right| \leq \sum_{n=0}^{\infty} \left| 2 \cdot \frac{ \frac{1}{2n^2} }{n \cdot \ln \left(n^2 + 1 \right)} \right| \leq \sum_{n=0}^{\infty} \left| \frac{ 1 }{n^3 \cdot \ln \left(n^2 + 1 \right)} \right| $$
For $n \geq 3$:
$$\sum_{n=0}^{\infty} \left| \frac{ 1 }{n^3 \cdot \ln \left(n^2 + 1 \right)} \right| \leq \sum_{n=0}^{\infty} \left| \frac{1}{n^2} \right|$$
Therefore we know that $\sum_{n=0}^{\infty} \frac{\cos(n + \frac{1}{n^2}) - \cos(n)}{n \cdot \ln(n^2 + 1)}$ is also convergent. Is that correct?
| Continuing from the remark made in the comments,
$$\cos\biggl(n + \frac{1}{n^2}\biggr) = \cos n + \biggl(\cos \biggl(n + \frac{1}{n^2}\biggr) - \cos n\biggr).$$
The second term on the right decays like $1/n^2$, since cosine is Lipschitz continuous (prove this using the cosine sum identity). The partial sums of the first term on the right are uniformly bounded; to see this, it is convenient to remember cosine is the real part of the complex exponential,
$$ \cos n = \operatorname{Re} e^{in}. $$
Thus the partial sum of cosine is just the real part of a nice geometric series,
$$ \sum_{n = 0}^m \cos n = \operatorname{Re} \sum_{n = 0}^m e^{in} = \operatorname{Re} \frac{1 - e^{i(m + 1)}}{1 - e^i}. $$
The right hand side has uniformly bounded modulus in $m$, so we are done.
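A quick numerical illustration of this uniform bound (a sketch; $2/|1-e^{i}|$ is the bound coming from $|1-e^{i(m+1)}|\le 2$):

```python
import cmath

z = cmath.exp(1j)               # e^{i}; cos n is the real part of z^n
bound = 2 / abs(1 - z)          # uniform bound, since |1 - e^{i(m+1)}| <= 2

s = 0j
for m in range(0, 10**4 + 1):
    s += z**m
    # geometric closed form (1 - z^{m+1}) / (1 - z)
    assert abs(s - (1 - z**(m + 1)) / (1 - z)) < 1e-7
    assert abs(s.real) <= bound  # partial sums of cos n stay bounded
```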
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Calculate the length of a polar curve
Calculate the length of the polar curve $$\theta (r)=\frac{1}{2}\left( r+\frac{1}{r}\right)$$ from r = 1 to r = 3.
I understand mostly how to get the length of a polar curve by:
$$\int_{a}^{b} \sqrt[]{(f(\theta ))^{2}+(f'(\theta ))^{2}} \ d\theta $$
But in this exercise I don't get how to do it. Maybe I need to write the function $\theta(r)$ in terms of $\theta$.
Any ideas or hints? Thanks
| Building on my comment, since $rd\theta/dr=\tfrac12(r-\tfrac1r)$, $\int ds=\int_1^3\tfrac12(r+\tfrac1r)dr=2+\tfrac12\ln3$.
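A numerical cross-check of this arc length (a sketch; with $r$ as parameter, $ds = \sqrt{1+(r\,d\theta/dr)^2}\,dr$, and the integrand collapses to $\tfrac12(r+\tfrac1r)$):

```python
import math

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# r dθ/dr = (r - 1/r)/2, so sqrt(1 + (r dθ/dr)^2) = (r + 1/r)/2
length = simpson(lambda r: math.sqrt(1 + ((r - 1/r) / 2) ** 2), 1.0, 3.0)
assert abs(length - (2 + 0.5 * math.log(3))) < 1e-9   # 2 + ln(3)/2 ≈ 2.5493
```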
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $\emptyset \subseteq P(A)$ or $\{\emptyset\}\subseteq P(A)$, where $A$ is a set and $P(A)$ is the power set of $A$ I think my professor made an error on his answer key and I'm trying to confirm it before I bring it to his attention. He asserts only 1.) is false. I believe both 1 and 4 are false. This class is only using naïve set theory.
$A = \{1,2,3,4\}$
Select the statement that is false
1.) $\{2,3\} \subseteq P(A)$
2.) $\{2,3\} \in P(A)$
3.) $ \emptyset \in P(A)$
4.) $\emptyset \subseteq P(A)$
*
*$P(A)$ is the set of all the subsets of $A$
*
*$P(A) = \{\emptyset,\{1\},\{2\},\{3\},\{4\},\{1,2\},\{1,3\},\{1,4\},\{2,3\},\{2,4\},\{3,4\},\{1,2,3\},\{1,2,4\},\{1,3,4\},\{2,3,4\},\{1,2,3,4\}\}$
*
*FALSE
*
*$2$ is not an element of $P(A)$
*$3$ is not an element of $P(A)$
*∴ $\{2,3\}$ cannot be a subset of $P(A)$
*$\{{2,3}\}$ would be a subset of $P(A)$
*TRUE
*
*The element $\{2,3\}$ can be found in the set $P(A)$
*TRUE
*
*The element $\emptyset$ can be found in the set $P(A)$
*FALSE
*
*Both operands of the subset operator require a set. $\emptyset$ is the empty set, whereas $\{\emptyset\}$ is a set whose only element is the empty set. Therefore $\emptyset$ is not a subset of $P(A)$.
| #4 is true. The empty set is a subset of every set.
It happens that the empty set is also an element of this set.
So both $\varnothing \subseteq \mathscr{P}(A)$ and $\varnothing \in \mathscr{P}(A)$ are true.
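The four statements are easy to check mechanically (a sketch using frozensets so that sets can contain sets):

```python
from itertools import combinations

A = frozenset({1, 2, 3, 4})
P = {frozenset(c) for r in range(len(A) + 1) for c in combinations(A, r)}

assert len(P) == 2 ** 4
assert not {2, 3} <= P            # 1) false: 2 and 3 are not elements of P(A)
assert frozenset({2, 3}) in P     # 2) true
assert frozenset() in P           # 3) true: the empty set is an element of P(A)
assert set() <= P                 # 4) true: the empty set is a subset of any set
```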
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
How is the depth of a parabolic mirror calculated How is the depth of a parabolic mirror calculated. I couldn't find a similar question/answer.
I have this specific problem to solve (not related to parabolic mirrors, but it is a starting point; I am not interested in a focal point):
*
*L=6"
*D=4.5"
*What is d?
Thanks for your help
| Starting from @Ishraaq Parvez's answer, as said, the equation
$$L=\frac{1}{2 a}\sinh ^{-1}\left(\frac{9 }{2}a\right)+\frac{9}{8} \sqrt{81 a^2+4}$$ requires some numerical method. However we can make nice approximations.
Let $x=\frac{9 }{2}a$ and $k=\frac{4 }{9}L$ to obtain
$$k=\sqrt{x^2+1}+\frac{\sinh ^{-1}(x)}{x}$$ and expand the rhs as a Taylor series
$$k=2+\frac{x^2}{3}-\frac{x^4}{20}+\frac{x^6}{56}-\frac{5 x^8}{576}+\frac{7 x^{10}}{1408}+O\left(x^{12}\right)$$ Now, let $y=x^2$ and use series reversion to obtain
$$y=t+\frac{3 t^2}{20}-\frac{3 t^3}{350}+\frac{23 t^4}{8400}-\frac{5889 t^5}{5390000}+O\left(t^6\right)$$ where $t=3(k-2)$.
Using $L=6$, $k=\frac{8}{3}$ then $t=2$, this would give
$$y\sim \frac{2567266}{1010625}\implies x\sim \frac 1 {175}\sqrt{\frac{2567266}{33}}\implies a\sim \frac 2 {1575}\sqrt{\frac{2567266}{33}}$$ Converted to decimals, this gives
$a=0.3542$ while the solution is $a=0.3553$.
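Instead of series reversion one can also solve the arc-length equation by bisection (a sketch; `w` is the half-width $9/4$, `arclen` is the formula above written as the arc length of $y=ax^2$ over $[-w,w]$, and the depth is $d=aw^2$, assuming as above that $L$ is the arc length and $D$ the diameter):

```python
import math

L, w = 6.0, 4.5 / 2                      # arc length and half-width

def arclen(a):
    # arc length of y = a x^2 over [-w, w]; equals the formula above with u = 9a/2
    u = 2 * a * w
    return w * math.sqrt(1 + u * u) + math.asinh(u) / (2 * a)

lo, hi = 1e-6, 10.0                      # arclen is increasing in a, so bisect
for _ in range(200):
    mid = (lo + hi) / 2
    if arclen(mid) < L:
        lo = mid
    else:
        hi = mid
a = (lo + hi) / 2

assert abs(arclen(a) - L) < 1e-9
assert abs(a - 0.3553) < 1e-3            # agrees with the exact solution quoted above
print(round(a, 4), round(a * w * w, 3))  # coefficient a and depth d = a w^2 ≈ 1.80
```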
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is this diagonal matrix not possible over the reals Let $\alpha = \begin{pmatrix}
7 &3 &-4 \\
-2&-1 &2 \\
6&2 &-3
\end{pmatrix}$ over the reals.
Show that there does not exist an invertible real matrix $\beta$ such that $\delta = \beta^{-1} \alpha \beta$ is a diagonal matrix
My "proof"
the characteristic polynomial is $-\lambda ^3 + 3\lambda ^2 -\lambda + 3=-(\lambda - 3)(\lambda ^2 +1)$. The solutions would then be $3,i,-i$. Now, we have that $\delta = \beta ^{-1} \alpha \beta$ where the diagonal values of $\delta$ would then be eigenvalues of $\alpha$. If both $\alpha , \beta$ are real $3 \times 3$-matrices then $\delta$ must also be a real $3 \times 3$-matrix. But that is not possible since the eigenvalues of $\alpha$ are $3,i,-i$ and thus $\beta$ must be complex.
Now I am stuck since I would think I would have to show that the diagonal of $\delta$ must be the eigenvalues of $\alpha$. Is this a general fact or should this be proven? If so - how?
| What you said is a general fact and your proof is correct.
If any matix $\alpha$ is similar to a diagonal matrix $\delta=\text{diag}[\delta_{11},\delta_{22},\ldots,\delta_{nn}]$ i.e. $\exists$ an invertible matrix $B$ such that $\delta=B^{-1}\alpha B$, the diagonal entries of $\delta$ must be the eigenvalues of $\alpha$. You can prove this fact easily.
We have $\alpha B=B\delta$. Let the columns of $B$ be $B_i$. Then$$\alpha B=\alpha[B_1~B_2~\ldots B_n]=[\alpha B_1~\alpha B_2\ldots\alpha B_n]$$and$$B\delta=[\delta_{11}B_1~\delta_{22}B_2\ldots\delta_{nn}B_n]$$Equating the columns we get $\alpha B_i=\delta_{ii}B_i$, i.e. $B_i$ are eigenvectors of $\alpha$ and $\delta_{ii}$ their corresponding eigenvalues.
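The complex eigenvalues can also be confirmed numerically without any library (a sketch: compare $\det(\alpha-\lambda I)$ against the claimed factorization at sample points):

```python
def det3(m):
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

A = [[7, 3, -4], [-2, -1, 2], [6, 2, -3]]

def char_poly(lam):
    """det(A - lam*I); accepts complex lam."""
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

# vanishes at the claimed eigenvalues 3, i, -i ...
for lam in (3, 1j, -1j):
    assert abs(char_poly(lam)) < 1e-9
# ... and matches -(lam - 3)(lam^2 + 1) at enough points to pin down the cubic
for lam in (0, 1, 2, -1):
    assert char_poly(lam) == -(lam - 3) * (lam**2 + 1)
```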
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3975982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Expected value of sum of two dependent Binomial variables Question: Assume we toss a coin in two trials. In the first trial we toss it n times. In the second trial we toss it as many as the number of tails observed in the first trial.
Calculate the expectation of the total number of tails in both trials:
Solution: This is what I have tried but can't find any closed form for the answer when n is not given. I am not sure if I have taken the right approach:
$T_1$ = Number of tails in the 1st trial
$T_2$ = Number of tails in the 2nd trial
$T_1 \sim \text{Binomial}(n,p)$
$T_2 \sim \text{Binomial}(T_1,p)$
$$E(T_1 + T_2) = E(T_1) + E(T_2) = n p+\sum_{i=0}^{n} E\left(T_{2} \mid T_{1}=i\right) P\left(T_{1}=i\right) = n p+\sum_{i=0}^{n} i p \binom{n}{i} p^{i}(1-p)^{n-i}$$
I couldn't find any closed form for the answer. Any advice on how to approach this?
| To calculate $E(T_2)$, consider using the law of total expectation.
$$E(X) = E\left[E(X|Y) \right]$$
Which in your case would give
\begin{align*}
E[T_2] &= E\left[E(T_2|T_1) \right] \\
&=E\left[p \cdot T_1 \right] = p E\left[T_1 \right] \\
&= np^2
\end{align*}
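The identity $E[T_1+T_2]=np+np^2=np(1+p)$ can be confirmed by summing over the binomial pmf directly (a quick sketch):

```python
from math import comb

def expected_total(n, p):
    """E[T1 + T2] summed directly over the pmf of T1, using
    E[T1 + T2 | T1 = i] = i + i*p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) * (i + i * p)
               for i in range(n + 1))

for n, p in [(10, 0.5), (7, 0.3), (20, 0.9)]:
    assert abs(expected_total(n, p) - n * p * (1 + p)) < 1e-9
```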
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Evaluate $\int^{2\pi}_{0}\frac{\mathrm{d}x}{(\cos x+\sin x+\sqrt{3})^2}=2\sqrt{3}\pi.$ The problem is to evaluate the following definite integral:
$$\int^{2\pi}_{0}\frac{\mathrm{d}x}{(\cos x+\sin x+\sqrt{3})^2}=2\sqrt{3}\pi.$$
I have failed in almost every way I tried, with the exception of substitution by $\tan(\frac{x}{2})$ which leads to the integration of a rational function with annoying coefficients.
Is there any "good" way to address the problem, given that the result doesn't seem that complicated? I've also tried this website, but the result it gave was far from satisfactory. Please help.
| Note
$$ I(a)=\int_0^\pi \frac{dt}{(\cos t+a)^2} =
-\frac d{da}\int_0^\pi \frac{dt}{\cos t+a}
=-\frac d{da} \frac\pi{\sqrt{a^2-1}}=\frac{\pi a}{(a^2-1)^{3/2}}
$$
and apply it to
\begin{align} \int^{2\pi}_{0}\frac{\mathrm{d}x}{(\cos x+\sin x+\sqrt{3})^2}
&= \int^{2\pi}_{0}\frac{\mathrm{d}x}{(\sqrt2\cos (x-\frac\pi4)+\sqrt{3})^2}\\
&= \int^{\pi}_{0}\frac{\mathrm{d}t}{(\cos t+\sqrt{\frac32})^2}
=I\left(\sqrt{\frac32}\right)=2\sqrt3\pi
\end{align}
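Both the final value and the intermediate formula $I(a)$ can be checked with simple numerical quadrature (a sketch using Simpson's rule):

```python
import math

def simpson(f, a, b, n=20000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

integrand = lambda x: 1.0 / (math.cos(x) + math.sin(x) + math.sqrt(3)) ** 2
val = simpson(integrand, 0.0, 2 * math.pi)
assert abs(val - 2 * math.sqrt(3) * math.pi) < 1e-8     # 2 sqrt(3) pi

# intermediate formula I(a) = pi a / (a^2 - 1)^(3/2) at a = sqrt(3/2)
a = math.sqrt(1.5)
Ia = simpson(lambda t: 1.0 / (math.cos(t) + a) ** 2, 0.0, math.pi)
assert abs(Ia - math.pi * a / (a * a - 1) ** 1.5) < 1e-8
```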
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Kummer extension correspondence without roots of unity (Serge Lang) I'm trying to solve the following problem.
Let $k$ be a field of characteristic $0$. Assume that for each finite extension $E$ of $k$, the index $(E^* : E^{*n})$ is finite for every positive integer n. Show that for each positive integer $n$, there exists only a finite number of abelian extensions of $k$ of degree $n$.
If $k$ contains a primitive n-th root of unity, one could use the one-to-one correspondence of abelian extension of $k$ of exponent n and subgroups of $k^*$ containing the n-th powers of the nonzero elements of $k$. For this case one of the ways to solve is as in the answer of this post: Find the bijection between Kummer's field and Galois subgroup.
But for $k$ not containing n-th roots of unity, do we have any kind of correspondence between, say, abelian extensions of $k$ of exponent m and abelian extensions of $k(\zeta)$ of exponent n, where $\zeta$ is a primitive n-th root of unity?
I observed that an abelian extension of $k$ of exponent n has extension degree no more than the extension degree over $k(\zeta)$ of the abelian extension of $k(\zeta)$ of exponent n generated by the same set, multiplied by $\varphi(n)$, where $\varphi(n)$ denotes the Euler function.
Another observation: Assume $k$ does not contain n-th roots of unity. Let $H$ be a subgroup of $k^*$ containing the n-th powers of the nonzero elements of $k$; then $H$ and $\zeta^j$ together generate a subgroup of $k(\zeta)^*$ containing the n-th powers of the nonzero elements of $k(\zeta)$.
| Let $L/k$ be the compositum of all the abelian extensions of degree at most $n$ over $k(\zeta_n)$. Since $k$ has characteristic zero, $L/k$ is separable. Then, since $k(\zeta_n)$ has all $n$-th roots of unity, you already know that $L/k$ is finite. If $E/k$ is an abelian extension of degree $\leq n$, then $E(\zeta_n)$ is an abelian extension of $k(\zeta_n)$ of degree $\leq n$, hence $E\subset E(\zeta_n) \subset L$. Since $L/k$ is separable, it contains at most finitely many subextensions. Hence the set of possible $E$ is finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the matrix representation $[T]_{\epsilon}^{\epsilon}$ with respect to $\epsilon$ I'm preparing for a preliminary graduate entrance exam, and i'm going over an old test. The question is worded exactly as follows, I am not leaving anything out. I am a bit confused as to what they are asking and (what is the matrix representation of $[T]_{\epsilon}^{\epsilon}$ with respect to $\epsilon$?), also, how to do it. Could somebody help me out? Thank you!
Consider the basis $\epsilon$ for $V=M_{2 \times 2}(\mathbb{R})$ with the following basis vectors:
$$ e_1 = \left( \begin{matrix} 1 & 0 \\ 0 & 0 \end{matrix} \right) $$
$$ e_2 = \left( \begin{matrix} 0 & 1 \\ 0 & 0 \end{matrix} \right) $$
$$ e_3 = \left( \begin{matrix} 0 & 0 \\ 1 & 0 \end{matrix} \right) $$
$$ e_4 = \left( \begin{matrix} 0 & 0 \\ 0 & 1 \end{matrix} \right) $$
And consider the linear transformation $T: V \rightarrow V$:
$T(\left( \begin{matrix} a & b \\ c & d \end{matrix} \right)) = \left( \begin{matrix} 2a-2b & -a+3b \\ 4c-2d & 3c-d \end{matrix} \right) $
Find the matrix representation $[T]_{\epsilon}^{\epsilon}$ with respect to $\epsilon$
| $[T]_\epsilon^\epsilon$ means that the input and output vectors are expressed in the $\epsilon$ basis (here the standard basis).
Represent the $2\times2$ real matrix $\begin{bmatrix}a&b\\c&d\end{bmatrix}$ by the $4\times1$ column vector $\begin{bmatrix}a\\b\\c\\d\end{bmatrix}$.
So we are given $T\begin{bmatrix}a\\b\\c\\d\end{bmatrix}=\begin{bmatrix}2a-2b\\-a+3b\\4c-2d\\3c-d\end{bmatrix}$.
Can you figure out the matrix of the linear transformation now?
$M_T=\begin{bmatrix}2&-2&0&0\\-1&3&0&0\\0&0&4&-2\\0&0&3&-1\end{bmatrix}$
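One can verify mechanically that $M_T$ reproduces $T$ on coordinate vectors (a quick sketch):

```python
def T(v):
    a, b, c, d = v            # v holds [a, b, c, d] for the matrix [[a, b], [c, d]]
    return [2*a - 2*b, -a + 3*b, 4*c - 2*d, 3*c - d]

M = [[ 2, -2, 0,  0],
     [-1,  3, 0,  0],
     [ 0,  0, 4, -2],
     [ 0,  0, 3, -1]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# the columns of M are the coordinate vectors of T(e1), ..., T(e4)
for v in ([1,0,0,0], [0,1,0,0], [0,0,1,0], [0,0,0,1], [5,-2,7,3]):
    assert matvec(M, v) == T(v)
```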
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Suggestion for mistake on computation of a pull-back $2$-form on the sphere Let $f : S^3 \to S^2$ be the function given by
$$
f(x, y, z, t) = (p, q, r) = \left(2 (x z + y t) ,\; 2 (- x t + y z) ,\; - x^2 - y^2 + z^2 + t^2\right).
$$
Let $\omega = (p \, dq \wedge dr + q \, dr \wedge dp + r \, dp \wedge dq)$ be a $2$-form in $S^2$. How can I show that $f^*(\omega) = 4(dx \wedge dy + dz \wedge dt) \in {\Omega}^2(S^3)$? My attempt is the next:
$$
f^*(\omega) = p f^*(dq) \wedge f^*(dr) + q f^*(dr) \wedge f^*(dp) + r f^*(dp) \wedge f^*(dq).
$$
For instance,
$$
f^*(dq) = \frac{\partial q}{\partial x} \, dx + \frac{\partial q}{\partial y} \, dy + \frac{\partial q}{\partial z} \, dz + \frac{\partial q}{\partial t} \, dt = 2 (- t \, dx + z \, dy + y \, dz - x \, dt).
$$
If we continue with this process, do we reach that $f^*(\omega) = 4(dx \wedge dy + dz \wedge dt)$? I guess no, because
$$
p f^*(dq) \wedge f^*(dr) = 2 (x z + y t) 2 (- t \, dx + z \, dy + y \, dz - x \, dt) \wedge 2 (- x \, dx - y \, dy + z \, dz + t \, dt) =
$$
$$
= \ldots = 8 (x z + y t) (t y + x z) \, dx \wedge dy + \ldots
$$
(the coefficient of $dx \wedge dy$ in the expression of $p f^*(dq) \wedge f^*(dr)$ is $8 (x z + y t) (t y + x z)$. If we take the coefficient of $dx \wedge dy$ from $q f^*(dr) \wedge f^*(dp)$ and $r f^*(dp) \wedge f^*(dq)$, we obtain
$$
8 (- x t + y z) (- x t + y z) \quad \text{and} \quad 4 (- x^2 - y^2 + z^2 + t^2) (z^2 + t^2).
$$
Then using that $x^2 + y^2 + z^2 + t^2 = 1$ in $S^3$, we have that
$$
f^*(\omega) = (8 (x z + y t) (t y + x z) + 8 (- x t + y z) (- x t + y z) + 4 (- x^2 - y^2 + z^2 + t^2) (z^2 + t^2)) \, dx \wedge dy + \ldots = \ldots = 4 (z^2 + t^2) \, dx \wedge dy + \ldots
$$
if I am not wrong, and it is not compatible with $f^*(\omega) = 4(dx \wedge dy + dz \wedge dt)$. Where am I getting mistake? Also I have to keep in mind that $x \, dx + y \, dy + z \, dz + t \, dt = 0$ but it is because $x^2 + y^2 + z^2 + t^2 = 1$.
| Thanks to Eliot Yu, I could build the answer for the question. The whole expression for $p f^*(dq) \wedge f^*(dr)$ is
$$
p f^*(dq) \wedge f^*(dr) = 2 (x z + y t) 2 (- t \, dx + z \, dy + y \, dz - x \, dt) \wedge 2 (- x \, dx - y \, dy + z \, dz + t \, dt) =
$$
$$
= 8 (x z + y t) ((t y + z x) \, dx \wedge dy + (- t z + x y) \, dx \wedge dz + (- t^2 - x^2) \, dx \wedge dt +
$$
$$
+ (z^2 + y^2) \, dy \wedge dz + (z t - x y) \, dy \wedge dt + (y t + x z) \, dz \wedge dt).
$$
We display $(- t z + x y) \, dx \wedge dz$ using $x \, dx + y \, dy + z \, dz + t \, dt = 0$ and the multilineality from $\wedge$ as he said to get:
$$
(- t z + x y) \, dx \wedge dz = - t \, dx \wedge (z dz) + y \, (x \, dx) \wedge dz =
$$
$$
= - t \, dx \wedge (- x \, dx - y \, dy - t \, dt) + y (- y \, dy - z \, dz - t \, dt) \wedge dz =
$$
$$
= y t \, dx \wedge dy + t^2 \, dx \wedge dt - y^2 \, dy \wedge dz + y t \, dz \wedge dt.
$$
Similarly,
$$
(z t - x y) \, dy \wedge dt = x z \, dx \wedge dy - z^2 \, dy \wedge dz + x^2 \, dx \wedge dt + x z \, dz \wedge dt.
$$
If we replace these two values in the expression obtained for $p f^*(dq) \wedge f^*(dr)$ and simplify, we get
\begin{equation}\label{1}\tag{1}
p f^*(dq) \wedge f^*(dr) = 16 {(x z + y t)}^2 \, (dx \wedge dy + dz \wedge dt).
\end{equation}
Similarly we can obtain easier expressions for $q f^*(dr) \wedge f^*(dp)$,
\begin{equation}\label{2}\tag{2}
q f^*(dr) \wedge f^*(dp) = 16 {(- x t + y z)}^2 \, (dx \wedge dy + dz \wedge dt)
\end{equation}
and for $r f^*(dp) \wedge f^*(dq)$,
\begin{equation}\label{3}\tag{3}
r f^*(dp) \wedge f^*(dq) = 4 {(- x^2 - y^2 + z^2 + t^2)}^2 \, (dx \wedge dy + dz \wedge dt).
\end{equation}
Then using \eqref{1}, \eqref{2} and \eqref{3},
$$
f^*(\omega) = 4 \left(4{(x z + y t)}^2 + 4{(- x t + y z)}^2 + {(- x^2 - y^2 + z^2 + t^2)}^2\right) \, (dx \wedge dy + dz \wedge dt) =
$$
$$
= 4 (p^2 + q^2 + r^2) \, (dx \wedge dy + dz \wedge dt) = 4 \, (dx \wedge dy + dz \wedge dt)
$$
because $p^2 + q^2 + r^2 = 1$, since $f(S^3) \subset S^2$ (i.e., $f$ is well defined).
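As a sanity check, the identity $f^*(\omega) = 4\,(dx \wedge dy + dz \wedge dt)$ can be tested numerically on tangent vectors of $S^3$, using $(f^*\omega)_a(u,v) = \omega_{f(a)}(df_a u, df_a v)$. The sketch below assumes $p = 2(xz+yt)$ and $q = 2(yz - xt)$ as in the question, and takes $r = z^2+t^2-x^2-y^2$, a sign convention consistent with \eqref{3}:

```python
import numpy as np

# Sketch of a numerical check of f*(omega) = 4(dx^dy + dz^dt) on S^3.
# p and q are read off from the question; the sign convention
# r = z^2 + t^2 - x^2 - y^2 is an assumption consistent with equation (3).
def f(a):
    x, y, z, t = a
    return np.array([2*(x*z + y*t), 2*(y*z - x*t), z*z + t*t - x*x - y*y])

def jacobian(a):
    # rows are the gradients of p, q, r, so J @ u = df_a(u)
    x, y, z, t = a
    return np.array([
        [ 2*z,  2*t, 2*x,  2*y],   # dp
        [-2*t,  2*z, 2*y, -2*x],   # dq
        [-2*x, -2*y, 2*z,  2*t],   # dr
    ])

def omega(P, A, B):
    # omega = p dq^dr + q dr^dp + r dp^dq evaluated on A, B in R^3
    p, q, r = P
    return (p*(A[1]*B[2] - A[2]*B[1])
            + q*(A[2]*B[0] - A[0]*B[2])
            + r*(A[0]*B[1] - A[1]*B[0]))

rng = np.random.default_rng(0)
a = rng.normal(size=4)
a /= np.linalg.norm(a)                      # random point on S^3
u, v = rng.normal(size=4), rng.normal(size=4)
u -= a*np.dot(a, u)                         # project onto the tangent space,
v -= a*np.dot(a, v)                         # where x dx + ... + t dt = 0 holds
J = jacobian(a)
lhs = omega(f(a), J @ u, J @ v)             # (f* omega)(u, v)
rhs = 4*((u[0]*v[1] - u[1]*v[0]) + (u[2]*v[3] - u[3]*v[2]))
print(abs(lhs - rhs))                       # ≈ 0 up to floating-point error
```

Running it at random points gives agreement to floating-point precision, which also supports the sign choice for $r$.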
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Complete Cayley table for a field
Let $M=\{0,1,a,b,c\}.$ Can I complete the Cayley tables so that $M$ is a field?
My thought was that $(M,+)$ must be an abelian group. This abelian group must be isomorphic to $(\mathbb Z_5,+).$ Correct? I managed to find that if $a=3, b=4$ and $c=2$ it works for addition.
But this does not work for the multiplication table. Does that mean that I can't complete the Cayley table so that $M$ is a field?
| Although you can use ad-hoc arguments (as the other answer did) for specific cases, the general way to deal with this kind of question is in fact to prove that any two finite fields of the same size are isomorphic! The easiest proof is as follows. Take any finite field $F$. Then $\text{char}(F) = p$ for some prime $p$. Thus $F$ has a subfield isomorphic to$\def\ff{\mathbb{F}}$ $\ff_p$ and hence $F$ is a vector space over $\ff_p$, which implies that $\#(F) = p^k$ for some $k∈ℕ$. Since $(F^*,·)$ is a group where $F^*$ is the set of nonzero elements of $F$, we have $x^{p^k-1} = 1_F$ for every $x∈F^*$ by Lagrange's theorem. Thus every element of $F$ is a root of the polynomial $f = (x↦x^{p^k}-x)$ over $\ff_p$, and hence $f$ splits completely over $F$ since $f$ has at most $p^k$ distinct roots over any field. Therefore $F$ (being nothing more than the roots of $f$ in $F$) is a splitting field for $f$ over $\ff_p$. But splitting fields of the same polynomial over the same base field are unique up to isomorphism. Thus we are done.
Just for example, in this question, the field has size $5$ so it must be isomorphic to $\ff_5$. The first table allows us to easily find that the elements corresponding to $0,1,2,3,4$ are $0,1,c,a,b$ respectively since $\{a,c\} = \{2,3\}$ and $a$ is followed by $b$. But then clearly the second table is wrong because $3·4≠1$ in $\ff_5$. Although it may seem like more trouble here, it is easier in general since you know exactly what the finite field must look like.
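The question's tables are not reproduced here, but the arithmetic facts the argument relies on are easy to check mechanically in $\mathbb{F}_5$, using the relabeling $a=3$, $b=4$, $c=2$ taken from the question:

```python
# Build the Cayley tables of F_5 = Z/5Z and check the relabeling argument:
# addition is consistent with a=3, b=4, c=2, but 3*4 = 2 != 1 in F_5.
p = 5
add = [[(i + j) % p for j in range(p)] for i in range(p)]
mul = [[(i * j) % p for j in range(p)] for i in range(p)]

assert add[3][4] == 2              # a + b = c under the relabeling
assert mul[3][4] == 2              # so a * b = c, and in particular 3*4 != 1
print(mul[3][4])  # 2
```

Since any field of order $5$ is isomorphic to $\mathbb{F}_5$, a multiplication table forcing $3 \cdot 4 = 1$ cannot be completed to a field.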
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Least residue power modulo May someone confirm that the least residue of $44^8$ modulo $7$ is $4$ please?
And that $2^2 \equiv 4\, (\!\!\!\mod 7)$?
Thanks in advance.
| Lemma needed : For $a,b,n,r \in \mathbb{Z^+},$
$$a \equiv b \pmod{n} \implies a^r \equiv b^r \pmod{n}.$$
Proof:
$\displaystyle a\equiv b \pmod{n} \implies n | (a - b) \implies$
$\displaystyle \exists ~k \in \mathbb{Z} ~\text{such that}~ a = (b + nk) \implies$
$\displaystyle (a^r - b^r) = (b + nk)^r - b^r$
[Using binomial expansion]
$\displaystyle =~ \sum_{i=0}^r \left[\binom{r}{i} b^i (nk)^{r - i}\right]
~-~ b^r$
$\displaystyle =~ \sum_{i=0}^{r-1} \left[\binom{r}{i} b^i (nk)^{r - i}\right].$
Since the above summation stops at $(r-1)$,
$n$ divides each term in the summation.
Therefore
$n | (a^r - b^r) \implies a^r \equiv b^r \pmod{n}.$
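Both the original claims and the lemma can be confirmed directly; Python's three-argument `pow` computes modular powers, so this is a one-liner per claim plus a small grid check of the lemma:

```python
# Direct check of the question's claims.
assert 44 % 7 == 2                 # 44 ≡ 2 (mod 7)
assert pow(2, 2, 7) == 4           # 2^2 ≡ 4 (mod 7)
assert pow(44, 8, 7) == 4          # least residue of 44^8 mod 7 is 4

# Spot-check of the lemma: a ≡ b (mod n) implies a^r ≡ b^r (mod n).
for n in range(2, 12):
    for b in range(n):
        for k in range(-3, 4):
            a = b + n * k          # every such a is ≡ b (mod n)
            for r in range(1, 6):
                assert pow(a, r, n) == pow(b, r, n)
print("ok")
```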
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Convergence in probability question clarification Suppose you have a sequence of independent random variables {$ X_i, i \geq 1$} where:
$$\Bbb P(X_i = 1 )=1-\frac{1}{i}, \Bbb P(X_i =i )=\frac{1}{i}.$$
Let $Y_n=\frac{1}{n}X_n.$
How would you show that $Y_n$ converges to $0$ in probability?
i.e $\lim_{n \to \infty}\Bbb P(|Y_n| \gt \epsilon)=0$ $ \forall \epsilon \gt0.$
For a similar question, I had the distribution for $X_i$ [$\Bbb P(X_i = \sqrt{i} )= \Bbb P(X_i = -\sqrt{i} )= \frac{1}{i+1}$ and $\Bbb P(X_i=0)=1-\frac{2}{i+1}$ ] and where $Y_n=\frac{1}{n}\sum_{i=1}^{n}X_i$. I calculated $E(Y_n)=0$ and $\operatorname{Var}(Y_n)=\frac{1}{n^2}\sum_{i=1}^{n}\frac{2i}{i+1}\leq\frac{2}{n}$. Therefore by Chebyshev's inequality, $\Bbb P(|Y_n|\gt\epsilon)\leq \operatorname{Var}(Y_n)/\epsilon^2 \leq \frac{2}{\epsilon^2 n}$. Since this bound converges to $0$, the probability converges to $0$ and we say that $Y_n \to 0$ in probability.
However, that approach became particularly messy with the first question above.
| Let $\epsilon > 0$ and $n>1+\lfloor\frac{1}{\epsilon}\rfloor$ (so that $n\epsilon >1$). Note that $\mathbb{P}(|Y_n|>\epsilon) = \mathbb{P}(X_n >n\epsilon) = \mathbb{P}(X_n = n) = \frac{1}{n}$ for all such $n$; taking $n\to\infty$, you are done.
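A quick Monte Carlo sketch illustrates the answer: once $n\epsilon > 1$, the only way $|Y_n| > \epsilon$ is the event $X_n = n$, which has probability $1/n$ (the sample size and seed below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
eps, N = 0.1, 200_000

for n in [50, 500, 5000]:
    # X_n = n with probability 1/n, else X_n = 1; Y_n = X_n / n
    X = np.where(rng.random(N) < 1.0 / n, n, 1)
    Y = X / n
    p_hat = np.mean(np.abs(Y) > eps)
    print(n, p_hat)      # p_hat should be close to 1/n
```

The estimated probabilities shrink like $1/n$, matching $\mathbb{P}(|Y_n|>\epsilon) = \frac{1}{n} \to 0$.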
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3976996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In a CCC with the initial object, can I have a morphism A -> 0? I'm in a cartesian-closed category with the initial object $0$. What are the consequences of having a morphism from some object $A$ to $0$? Does it mean that there is also a global element of $0$, that is $1\to 0$ ? Does it mean that we have a zero object, $1 \cong 0$ ? Does it mean that all objects are isomorphic?
Assume we don't have coproducts.
| Since the asker is not satisfied with Qiaochu's answer I'll try to expand on it and show more of the intermediate steps.
First we show that $X \times 0 \cong 0$, by showing that $X \times 0$ is an initial object.
$$Hom(X \times 0, A) \cong Hom(0, A^X) \implies |Hom(X \times 0, A)| = 1$$
This means that the projection $\pi_0: X \times 0 \to 0$ is an isomorphism. We use the $Hom(A, -)$ functor to get the isomorphism (functors send isomorphisms to isomorphisms).
$$Hom(A, \pi_0) : Hom(A, X \times 0) \to Hom(A, 0)$$
which is just post-composition by $\pi_0$.
The universal property of the product gives us another isomorphism, which we'll call $f$:
$$ f: Hom(A, X)\times Hom(A,0) \to Hom(A, X \times 0) $$
We can compose these two isomorphisms to get a third isomorphism:
$$ Hom(A, \pi_0) \circ f : Hom(A, X)\times Hom(A,0) \to Hom(A, 0)$$
It turns out that $ Hom(A, \pi_0) \circ f = \pi_{Hom(A,0)} $. This is because
$$ (Hom(A, \pi_0) \circ f)(g,h) = \pi_0 \circ f(g,h) = h $$
The last equality is due to the universal property of $X \times 0$. Hence the projection:
$$ \pi_{Hom(A,0)}: Hom(A, X)\times Hom(A,0) \to Hom(A, 0)$$
is an isomorphism. If $Hom(A, 0)$ is non-empty then $Hom(A, X)$ also has to be non-empty. If $Hom(A, X)$ had more than one element then $ \pi_{Hom(A,0)} $ would not be injective which means $ Hom(A, X)$ has exactly one element and $A$ is an initial object.
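In the cartesian-closed category of finite sets this collapse is easy to see concretely: the initial object is the empty set, and $Hom(A, 0)$ is non-empty only when $A$ itself is empty. A toy model, with hom-sets encoded as tuples of images (the encoding is just one convenient choice):

```python
from itertools import product

def homs(A, B):
    # all functions A -> B between finite sets, as tuples of images
    return list(product(B, repeat=len(A)))

initial, X = (), (0, 1)        # the empty set 0 and a two-element object

assert len(homs(initial, X)) == 1    # 0 is initial: exactly one map 0 -> X
assert len(homs(X, initial)) == 0    # no map A -> 0 for non-empty A

# Hom(0, A^X) is always a singleton, matching |Hom(X x 0, A)| = 1
A_to_the_X = tuple(homs(X, X))       # the exponential A^X, here 4 maps
assert len(homs(initial, A_to_the_X)) == 1
print("ok")
```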
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Equivalent condition of the following group presentation to give a group of order $mn$; the other side Let $G$ be a group (not necessarily abelian) generated by $a, b$ that has a group presentation: $$\langle a, b \mid a^m = 1, b^n = 1, ba = a^rb\rangle. $$In my question Equivalent condition of the following group presentation to give a group of order $mn$, I was able to understand that if $G$ is of order $mn$ then $r^n \equiv 1 \pmod{m}$. I thought that I had proven the opposite side (if $r^n \equiv 1 \pmod{m}$, then $G$ is of order $mn$), but my proof was erroneous. Especially I'm having difficulty proving that all $a^xb^y$s are distinct for distinct $(x,y)$s, so that $G$ will have $mn$ distinct elements (I know that all elements in $G$ can be written in the form of $a^xb^y$, because of $ba = a^rb$ and Prove that the following equality holds in a group). How can I prove this?
| $\renewcommand{\phi}{\varphi}$$\newcommand{\Span}[1]{\left\langle #1 \right\rangle}$You can do it by bounding the group from below.
Construct the semidirect product $S$ of a cyclic group $\Span{\alpha}$ of order $m$ by a group $\Span{\beta}$ of order $n$, with $\beta$ acting on $\Span{\alpha}$ by raising to the $r$-th power. This is a group of order $m n$.
Let $G$ be your group. Now von Dyck's theorem yields that there is a unique homomorphism
\begin{align}
\phi :\ &G \to S
\end{align}
such that $\phi(a) = \alpha$ and $\phi(b) = \beta$, and $\phi$ is clearly surjective. Since the $\alpha^{x} \beta^{y}$ are distinct, for $0 \le x < m$ and $0 \le y < n$, so are the corresponding $a^{x} b^{y}$.
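The semidirect product $S$ can be modeled concretely to see that it really has order $mn$ and satisfies the defining relations. Writing elements as pairs $(i,j) \leftrightarrow \alpha^i\beta^j$, the relation $\beta\alpha = \alpha^r\beta$ gives the multiplication rule below (the values $m=7$, $n=3$, $r=2$ are an arbitrary example with $r^n \equiv 1 \pmod m$):

```python
m, n, r = 7, 3, 2                  # example: 2^3 = 8 ≡ 1 (mod 7)
assert pow(r, n, m) == 1

def mul(g, h):
    # (a^i b^j)(a^k b^l) = a^{i + k r^j} b^{j+l}, from b a = a^r b
    (i, j), (k, l) = g, h
    return ((i + k * pow(r, j, m)) % m, (j + l) % n)

def apow(e):
    return (e % m, 0)              # the element a^e

G = {(i, j) for i in range(m) for j in range(n)}
a, b = (1, 0), (0, 1)
assert all(mul(g, h) in G for g in G for h in G)   # closed under the product
assert len(G) == m * n                             # exactly mn elements
assert mul(b, a) == mul(apow(r), b)                # b a = a^r b holds
print(len(G))  # 21
```

Von Dyck's theorem then maps $G$ onto this group of order $mn$, which is the lower bound used in the answer.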
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove certain set is convex I'm having trouble proving this statement:
Prove that the set
$$X=\{ (x,y)\in\mathbb{R}^2 : y\geq 0, x\leq b+y \}$$
given $b\in\mathbb{R}$ is convex.
The work I've done so far:
I first assumed we have two different points $(x_1,y_1),(x_2,y_2)\in X$. Then, the segment between them consists of the points
$$(\lambda x_1 + (1-\lambda)x_2, \lambda y_1 + (1-\lambda)y_2) \ , \ \lambda\in[0,1].$$
So if I prove that points are inside $X$ for any $\lambda$ I would have proven the convexity.
It's easy to see that
$$\lambda y_1 + (1-\lambda)y_2 \geq 0,$$
since it is a nonnegative number plus another nonnegative number. Now, I don't know how to prove that
$$\lambda x_1 + (1-\lambda)x_2 \leq b + \lambda y_1 + (1-\lambda)y_2$$
knowing obviously that $x_1 \leq b+y_1$ and $x_2\leq b+y_2$.
What can I do to prove this part? Any help or hint will be appreciated, thanks in advance.
| Just multiply $x_1 \leq b+y_1$ by $\lambda$, $x_2 \leq b+y_2$ by $(1-\lambda)$ and add the inequalities. Finally note that $\lambda b +(1-\lambda) b=b$.
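The hint can be checked numerically by sampling points of $X$ and verifying that convex combinations stay in $X$ (the value of $b$ and the sampling ranges are arbitrary choices):

```python
import random

random.seed(0)
b = 1.5

def in_X(x, y):
    # membership test for X = {(x, y) : y >= 0, x <= b + y}
    return y >= 0 and x <= b + y

for _ in range(1000):
    # sample two points of X and a random lambda in [0, 1]
    y1, y2 = random.uniform(0, 5), random.uniform(0, 5)
    x1, x2 = b + y1 - random.uniform(0, 5), b + y2 - random.uniform(0, 5)
    lam = random.random()
    assert in_X(lam*x1 + (1-lam)*x2, lam*y1 + (1-lam)*y2)
print("ok")
```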
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Characterizing open countable subsets of the Cantor set The Cantor-Bendixson theorem implies that any closed subset of the Cantor set $\mathcal{C}$ can be described as a disjoint union of a set $\mathcal{C}_c$ that is homeomorphic to the original Cantor set, and a countable open set $\mathcal{C}_o$.
The following answer, and the referenced work by Schoenberg & Grunhage therein, implies that all noncompact open subsets of the Cantor set are homeomorphic to the Cantor set minus a point—say, $\mathcal{C}\setminus \{ 0\}$. But this would mean that $\mathcal{C}_o$ is homeomorphic to $\mathcal{C}\setminus \{ 0\}$, which would imply the Cantor set minus a point is countable, which seems strange.
Is this true, or am I missing something?
| The Cantor–Bendixson theorem says that $C$ is a union of its scattered part (open) and its perfect part (closed). The scattered part here is empty ($C$ has no isolated points, so $C$ coincides with its own perfect kernel) and $C$ is already perfect. The theorem is valid but vacuous here.
If $O \subseteq C$ is an open non-compact subset, it is locally compact (being open in a compact space), so it has a one-point compactification $\alpha O = O \cup \{\infty\}$. This is compact, metrizable, totally disconnected, and has no isolated points, so it is homeomorphic to $C$ by Brouwer's theorem, and hence $O$ is homeomorphic to $C$ minus (any) point (by homogeneity).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Continuity, differentiability and double-differentiability of $\sum_{k=1}^{\infty} \frac{\sin(x/k)}{k}$ and $\sum_{k=1}^{\infty}\frac{\sin(kx)}{k^3}$ I have solved the following exercise except for the last point and I would like to know if I have made any mistakes and I would appreciate any hint about how to complete the last question in part (b) about $g$ being twice-differentiable; thanks.
Consider $f(x)=\sum_{k=1}^{\infty} \frac{\sin(x/k)}{k}$ and $g(x)=\sum_{k=1}^{\infty}\frac{\sin(kx)}{k^3}$.
(a) Where is $f$ defined? Continuous? Differentiable? Twice-differentiable?
(b) Show that $g(x)$ is differentiable and that $g'(x)$ is continuous
(c) Can we determine if $g(x)$ is twice-differentiable?
My solution:
(a) f defined Let $x\in\mathbb{R}$: then $|\frac{1}{k}\sin(\frac{x}{k})|\leq\frac{1}{k}|\sin(\frac{x}{k})|\leq\frac{1}{k}\frac{|x|}{k}=\frac{|x|}{k^2}$ (where we have used the fact that $|\sin(x)|\leq |x|$ for all $x\in\mathbb{R}$) and since $\sum_{k=1}^{\infty}\frac{|x|}{k^2}=|x|\sum_{k=1}^{\infty}\frac{1}{k^2}<\infty$ by Comparison Test it must be $\sum_{k=1}^{\infty}|\frac{1}{k}\sin(\frac{x}{k})|<\infty$ which implies $\sum_{k=1}^{\infty}\frac{1}{k}\sin(\frac{x}{k})<\infty$ by Absolute Convergence Test. $f$ is thus well-defined for all $x\in\mathbb{R}$.
f continuous and differentiable $|f'_k(x)|=|\frac{1}{k^2}\cos(\frac{x}{k})|\leq\frac{1}{k^2}$ and $\sum_{k=1}^{\infty}\frac{1}{k^2}<\infty$ so $\sum_{k=1}^{\infty} f'_k(x)$ converges uniformly on $\mathbb{R}$ by Weierstrass M-test and since $\sum_{n=1}^{\infty}f_n(0)=0$ by Term-by-Term Differentiability Theorem we have that $f$ si differentiable (and hence continuous) on $\mathbb{R}$ and $f'(x)=\sum_{k=1}^{\infty}f'_k(x)=\sum_{k=1}^{\infty}\frac{1}{k^2}\cos(\frac{x}{k})$.
f twice-differentiable $|f''_k(x)|=|-\sin(\frac{x}{k})\frac{1}{k^3}|\leq\frac{1}{k^3}$ and $\sum_{k=1}^{\infty}\frac{1}{k^3}<\infty$ so $\sum_{k=1}^{\infty}f''_k(x)$ converges uniformly on $\mathbb{R}$ by Weierstrass M-Test and since $\sum_{k=1}^{\infty} f'_k(0)=\sum_{k=1}^{\infty}\frac{1}{k^2}\cos(\frac{0}{k})=\sum_{k=1}^{\infty}\frac{1}{k^2}<\infty$ by Term-by-Term Differentiability Theorem we have that $f'(x)$ is differentiable on $\mathbb{R}$ and $f''(x)=\sum_{k=1}^{\infty}f''_k(x)=\sum_{k=1}^{\infty}-\frac{1}{k^3}\sin(\frac{x}{k})$.
(b) g differentiable with g' continuous $|g'_k(x)|=|\frac{\cos(kx)}{k^2}|\leq\frac{1}{k^2}$ and $\sum_{k=1}^{\infty}\frac{1}{k^2}<\infty$ so $\sum_{k=1}^{\infty}g'_k(x)$ converges uniformly on $\mathbb{R}$ by Weierstrass M-Test and since $\sum_{k=1}^{\infty}\frac{\sin(k\cdot 0)}{k^3}=0$ by Term-by-Term Differentiability Theorem $\sum_{k=1}^{\infty}g_k(x)$ converges uniformly to a differentiable function $g(x)=\sum_{k=1}^{\infty}g_k(x)=\sum_{k=1}^{\infty}\frac{\sin(kx)}{k^3}$ on $\mathbb{R}$ and $g'(x)=\sum_{k=1}^{\infty}g'_k(x)=\sum_{k=1}^{\infty}\frac{\cos(kx)}{k^2}$ and since each $g'_k$ is continuous by Term-by-Term Continuity Theorem $g'$ is continuous too.
g twice-differentiable? Here I have tried the estimates $|g''_k(x)|=|-\frac{\sin(kx)}{k}|\leq\frac{1}{k}$ which tells me nothing and neither does the estimate $|g''_k(x)|=|-\frac{\sin(kx)}{k}|\leq\frac{k|x|}{k}=|x|$ since in both of these cases I can't use Weierstrass M-Test so I don't see how I could prove that $g$ is twice differentiable.
| You need to use a partial summation/Dirichlet test to show that $h(x)=\sum_k \frac{\sin(kx)}{k}$ converges (locally uniformly, to a continuous function) away from $2\pi \Bbb{Z}$ (i.e. using that the partial sums $\sum_{k< K} \sin(kx)= \Im\left(\frac{e^{i Kx}-1}{e^{ix}-1}\right)$ are bounded for $x$ away from $2\pi \Bbb{Z}$)
Integrating $h$ twice (termwise, which is allowed since the convergence is locally uniform) gives that $g$ is $C^2$ on $\Bbb{R}\setminus 2\pi \Bbb{Z}$.
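One can also probe this numerically: termwise differentiation predicts $g''(x) = -\sum_k \frac{\sin(kx)}{k}$, which equals $-\frac{\pi - x}{2}$ on $(0, 2\pi)$ by the classical sawtooth Fourier series. The sketch below compares a centered second difference of a partial sum of $g$ against that value; the truncation level $K$ and step $d$ are ad-hoc choices:

```python
import numpy as np

K = 20_000
k = np.arange(1, K + 1)

def g(x):
    # partial sum of g(x) = sum sin(kx)/k^3; the tail is O(1/K^3)
    return np.sum(np.sin(k * x) / k**3)

x, d = 1.0, 1e-3                   # x = 1 lies away from 2*pi*Z
second_diff = (g(x + d) - 2*g(x) + g(x - d)) / d**2
predicted = -(np.pi - x) / 2       # -sum sin(kx)/k via the sawtooth series
print(second_diff, predicted)      # the two values should agree closely
```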
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3977716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |