Outer measure: if $A,B\subset\mathbb{R}$ and $|A|<\infty$, then $|B\setminus A|\geq|B|-|A|$. The outer measure is defined as
$$|A|=\inf\left\{\sum\limits_{i=1}^{\infty}l(I_i)\ \text{ with intervals } I_1, I_2, \ldots \text{ s.t. } A\subset\bigcup\limits_{i=1}^{\infty}I_i\right\}$$
where $l$ is the length of an interval in the intuitive sense.
So I have to show that
"$A,B\subset\mathbb{R}$ and $|A|<\infty$, then $|B\setminus A|\geq|B|-|A|$".
Where $|\cdot|$ is the outer measure of a set. I'm struggling to find a way to manipulate the infimum of the open cover of $|B\setminus A|$ in order to show the inequality.
| Let $(I_i)$ be a sequence of intervals covering $A$ and $(J_l)$ be a sequence of intervals covering $B \setminus A$. Then $(I_i)\cup (J_l)$ covers $B$. Hence $|B| \leq \sum_i l(I_i)+\sum_l l(J_l)$. Taking the infimum over all covers $(I_i)$ and $(J_l)$, we get $|B| \leq |A|+|B \setminus A|$. Since $|A|<\infty$ we can subtract $|A|$ from both sides.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3448757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Assume an encrypted message is sent through use of an exponential cipher. ...such that the modulus $p = 2741$ ($p$ is prime) and $e = 11$ ($e$ = exponent)
Message: $1315\quad 0611 \quad 0427 \quad 0091 \quad 0520 \quad 0733$
I am required to determine the decryption exponent and determine what it says.
This is my work so far in Mathematica. I believe my solution is wrong, as the exponent I arrived at is negative. Please advise. Thanks in advance.
'In' =input
'Out' =output
In $ \text{mssg} = { 1315, 0611, 0427, 0091, 0520, 0733}$
Out ${1315, 611, 427, 91, 520, 733}$
In $p = 2741$
In $e = 11$
In $\text{code[x_]} := \text{Mod}[x^{e}, p]$
In $\text { cipher = code[mssg] }$
Out $\, {2622, 2659, 1544, 951, 2718, 859}$
In $\text{ ExtendedGCD }[e, p - 1]$
Out $\,\{1, \{-249, 1\}\}$
Why does my exponent show up here as $-249$?
| Unfortunately, the Mathematica online help page for ExtendedGCD gives limited information about this:
\begin{align}
in[1]:&\; \{g, \{a, b\}\} = \operatorname{ExtendedGCD}[2, 3]\\
out[1]:&\; \{1,\{-1,1\}\} \quad \text{ //next, test the result}\\
in[2]: &\;2 a + 3 b == g\\
out[2]:&\; \texttt{True}\\
\end{align}
The extra output sub-list gives the coefficients of Bézout's identity:
Let $a$ and $b$ be integers with greatest common divisor $d$. Then, there exist integers $x$ and $y$ such that $ax + by = d$
For your problem;
\begin{align}
\gcd(e,p-1) &= u \cdot e + v \cdot (p-1), \text{ for some } u,v\in\mathbb{Z}\\
1 &= -249 \cdot 11 + 1 \cdot 2740\\
1 &= -2739 + 2740
\end{align}
Actually, you want the inverse of $e \bmod{p-1}$, and Bézout's identity is very helpful for finding inverses.
If you take both sides $\bmod 2740$ (this is one of the ways to calculate the inverse):
$$ 1 = -249 \cdot 11 + 1 \cdot 2740$$
$$ 1 \equiv -249 \cdot 11 \pmod{2740}$$
you will find the inverse of $11 \pmod{2740}$, namely $-249 \equiv 2491 \pmod{2740}$.
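As a quick sanity check of this arithmetic (a Python sketch of my own; Python's built-in `pow` computes modular inverses in version 3.8+):

```python
p, e = 2741, 11

# decryption exponent: the inverse of e modulo p - 1
d = pow(e, -1, p - 1)
assert d == 2491
assert (e * d) % (p - 1) == 1

# the -249 from ExtendedGCD is the same residue modulo 2740
assert -249 % (p - 1) == d

# round trip: (x^e)^d = x (mod p) for 0 < x < p, by Fermat's little theorem
for x in (1315, 611, 427, 91, 520, 733):
    assert pow(pow(x, e, p), d, p) == x
```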
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3448909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A Fourier analysis question related to the Fourier transform of $C_c(\mathbb{R}^n)$ functions. Let $f\in L^p(\mathbb{R}^n)$, $1\leq p\leq 2$, and $g\in C_c(\mathbb{R}^n)$ with $g$ not identically $0$, be such that $f*g\equiv 0.$ Prove that $f(x)=0$ for almost every $x\in\mathbb{R}^n$.
My strategy is to use that we have $\hat{f}\cdot \hat{g}\equiv 0.$ Now for $n=1$, I know that $\hat{g}$ is the restriction of an entire function, and hence if $g$ isn't identically $0$, then the zero set of $\hat{g}$ is discrete. [We simply define $\hat{g}(z)=\int_{\mathbb{R}}g(t)e^{-2\pi i tz} dt$.] Hence $\hat{f}$ is $0$ almost everywhere and so $f$ is $0$ almost everywhere. But for $n>1$ how do I prove the similar statement?
| The strategy would be the same, as $\widehat g$ would be holomorphic on $\mathbb{C}^n$ and its zero set would be of strictly lower dimension and hence of measure zero. Bear in mind that the awkward part here is talking about the Fourier transform of $f \in L^p(\mathbb{R}^n)$, as you would have to prove the result on a dense set or use distributions.
Alternatively you could try the following. If you replace $f$ with a Schwartz function using density (here I think that you need the hypothesis on $p$ to use Young's convolution inequality), you can study $\hat{f}\cdot\hat{g}$ by fixing variables and working with entire functions of one variable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Isomorphic quotient modules implies equal submodules? Let $R$ be a commutative ring with unity, $M$ an $R$-module, and $N,L$ submodules of $M$ with $N\subseteq L$.
$$M/N\cong M/L\implies N=L\ ?$$
| No, this is not true. Consider (the $\Bbb Z$-module) $M=\Bbb Z\times\Bbb Z\times \cdots$, and the two submodules $N=\langle(1,0,0,\ldots)\rangle$ and $L=\langle(1,0,0,\ldots), (0,1,0,0,\ldots)\rangle$. Then $M\cong M/N\cong M/L$, but $N$ and $L$ are not only unequal, they aren't even isomorphic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can I justify this without determining the determinant? I need to justify the following equation is true:
$$
\begin{vmatrix}
a_1+b_1x & a_1x+b_1 & c_1 \\
a_2+b_2x & a_2x+b_2 & c_2 \\
a_3+b_3x & a_3x+b_3 & c_3 \\
\end{vmatrix} = (1-x^2)\cdot\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3 \\
\end{vmatrix}
$$
I tried dividing the determinant of the first matrix in the sum of two, so the first would not have $b's$ and the second wouldn't have $a's$.
Then I'd multiply by $\frac 1x$ in the first column of the second matrix and the first column of the second, so I'd have $x^2$ times the sum of the determinants of the two matrices.
I could then subtract column 1 from column 2 in both matrices, and we'd have a column of zeros in both; hence the determinant is zero for both, and times $x^2$ it would still be zero, so I didn't prove anything. What did I do wrong?
| For another solution, note that
$$
\underbrace{\begin{bmatrix}
a_1+b_1x & a_1x+b_1 & c_1 \\
a_2+b_2x & a_2x+b_2 & c_2 \\
a_3+b_3x & a_3x+b_3 & c_3 \\
\end{bmatrix}}_{A}
=
\underbrace{\begin{bmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3 \\
\end{bmatrix}}_{B}
\underbrace{\begin{bmatrix}
1 & x & 0 \\
x & 1 & 0 \\
0 & 0 & 1 \\
\end{bmatrix}}_{C}
$$
and therefore $\det(A) = \det(BC) = \det(B)\det(C)$. From there, it's enough to check that
$$
\det(C) = \begin{vmatrix}
1 & x & 0 \\
x & 1 & 0 \\
0 & 0 & 1 \\
\end{vmatrix} = \begin{vmatrix}1 & x \\ x & 1\end{vmatrix} = 1 \cdot 1 - x \cdot x = 1-x^2.
$$
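As a numerical sanity check of the factorization $A = BC$ (a small script of my own, not part of the original answer), one can test the identity on random values:

```python
import random

def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e*i - f*h) - b * (d*i - f*g) + c * (d*h - e*g)

random.seed(1)
for _ in range(100):
    a1, a2, a3, b1, b2, b3, c1, c2, c3, x = (random.uniform(-5, 5) for _ in range(10))
    A = [[a1 + b1*x, a1*x + b1, c1],
         [a2 + b2*x, a2*x + b2, c2],
         [a3 + b3*x, a3*x + b3, c3]]
    B = [[a1, b1, c1], [a2, b2, c2], [a3, b3, c3]]
    # det(A) = (1 - x^2) det(B)
    assert abs(det3(A) - (1 - x*x) * det3(B)) < 1e-8
```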
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 7,
"answer_id": 0
} |
Subgroups of $\mathbb{Z}_2^n$ of order $2^{n-1}$
What are the subgroups of $\mathbb{Z}_2^n$ of size $2^{n-1}$?
I'm fairly convinced that it will be subgroups of the form $\mathbb{Z}_2 \times \dots \times \{0\} \times \dots \times \mathbb{Z}_2$, but I can't seem to know how to prove it.
Edit: my guess is wrong. Take $n=2$ and $\{(1,1),(0,0)\} \leq \mathbb{Z}_2^2$.
| $\;V:=\left(\Bbb Z_2\right)^n\;$ is an $\;n$-dimensional vector space over the field $\;\Bbb F_2\cong\Bbb Z_2\;$, and there is a $1$-$1$ correspondence between the subgroups of $\;V\;$ and the subspaces of $\;V\;$. A subgroup $\;H\le V\;$ has $\;2^{n-1}\;$ elements iff $\;\dim H=n-1$, i.e. iff $\;H\;$ is a hyperplane of $\;V\;$, that is, a maximal proper subspace of $\;V\;$.
We know that hyperplanes are in fact the kernel of non-zero linear functionals $\;\phi:V\to\Bbb F_2\;$ , so if you know how many of these are we're done...or you can also argue that the number of subspaces of dimension $\;n-1\;$ in $\;V\;$ is equal to the number of $\;1\,-$ dimensional subspaces (why?), and these are easier to calculate...
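For small $n$ these hyperplanes can be enumerated explicitly as kernels of the nonzero functionals; a brute-force sketch (my own illustration) confirms there are exactly $2^n-1$ of them, matching the count of $1$-dimensional subspaces:

```python
from itertools import product

def index_two_subgroups(n):
    # each nonzero functional phi: F_2^n -> F_2 determines the hyperplane ker(phi);
    # over F_2 distinct nonzero functionals give distinct kernels
    vectors = list(product((0, 1), repeat=n))
    kernels = set()
    for phi in vectors[1:]:  # skip the zero functional
        ker = frozenset(v for v in vectors
                        if sum(a * b for a, b in zip(phi, v)) % 2 == 0)
        kernels.add(ker)
    return kernels

for n in (2, 3, 4):
    subs = index_two_subgroups(n)
    assert len(subs) == 2**n - 1              # as many as 1-dimensional subspaces
    assert all(len(H) == 2**(n - 1) for H in subs)
```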
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
finding inverse of function in ordered pair notation $$f: \mathbb{R} \times \mathbb{R} \mapsto \mathbb{R} \times \mathbb{R} $$
where f is defined as $$f(x,y) =(\text{somethingforx},\text{somethingfory}) $$
I dont want to post the exact question because I would like to get it on my own, but I am having trouble finding the best way to find the inverse of a function when it is given in ordered pair notation such as here. Any methods on how to proceed would be very helpful.
Thanks
| The inverse relation of $f$ is
$$\{\,((\text{something for }x,\ \text{something for }y),\,(x,y))\ :\ ((x,y),\,(\text{something for }x,\ \text{something for }y))\in f\,\}.$$
In general the inverse of a relation $R$ is
$$\{\,(b,a) : (a,b)\in R\,\}.$$
Here $a = (x,y)$ and $b = (\text{something for }x,\ \text{something for }y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
supremum of brownian motion almost surely > 0 Is it true that
\begin{equation}
M_t = \sup_{s \leq t} B_s > 0 \ \ \text{a.s.}
\end{equation}
for all $t>0$? I remember reading this somewhere, but intuitively, can't the Brownian motion B stay below 0 for some time with probability $>0$?
| One way to argue is using Blumenthal's zero-one law. Define $A_n=\{B_{1/n}>\frac{1}{\sqrt{n}}\}$ and set $B=\{B_{1/n}>\frac{1}{\sqrt{n}} ~\text{i.o.}\}$.
Then,
\begin{align*}
\mathbb{P}(B) &=\mathbb{P}(\limsup_n A_n)\\
&\geq \limsup_n\mathbb{P}( A_n)\\
&= \limsup_n\mathbb{P}(B_{1/n}>\frac{1}{\sqrt{n}})\\
&=\limsup_n\mathbb{P}(N(0,1)>1)\\
&=\mathbb{P}(N(0,1)>1)>0
\end{align*}
By Blumenthal's zero-one law, $\mathbb{P}(B)$ is $1$ or $0$, so $\mathbb{P}(B)=1$. On the event $B$ there are arbitrarily small times $1/n$ with $B_{1/n}>0$, so $M_t>0$ for every $t>0$ almost surely.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3449837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Solve the complex equation $\left(\frac{8}{z^3}\right) - i = 0$ using $$(a^3 + b^3) = (a+b)(a^2-ab+b^2).$$
I have $${(2/z)}^3 + i^3 =0$$
I have $$\left(\frac{2}{z} + i\right)\left(\left(\frac{2}{z}\right)^2-\left(\frac{2}{z}\right)i-1\right) = 0$$
i.e.$ \left(\frac{2}{z}\right)+i = 0 $ or $ \left(\left(\frac{2}{z}\right)^2-(\frac{2}{z})(i)-1\right) = 0$
......
but I'm not sure that is the correct answer;
help me please.
Thank you.
| Since $i^{3} = -i$, we have
$$\left(\frac{2}{z}\right)^{3} + i^{3} = 0 \iff \left(\frac{2}{z}\right)^{3} - i = 0 \iff z^{3} = \frac{8}{i} = -8i.$$
Then the solutions are $z_{1} = 2i, z_{2} = 2e^{i\frac{2\pi }{3}}i, z_{3} = 2e^{i \frac{4\pi }{3}}i$
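A quick numeric check (my own addition) that these three values indeed solve $\frac{8}{z^3}-i=0$:

```python
import cmath

# z_k = 2 e^{2 pi i k / 3} i for k = 0, 1, 2
roots = [2j * cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
for z in roots:
    # each root satisfies 8/z^3 - i = 0 up to floating-point error
    assert abs(8 / z**3 - 1j) < 1e-12
```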
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Decrypt 01 09 00 12 12 09 24 10, knowing encryption was done with $c^5\pmod{29}$, where $c$ is the character number ($A=0, B=1, C=2$, etc.). The solution says that to recover the original character number you have to solve $5x\equiv1\pmod{28}$, so $x\equiv17\pmod{28}$, because $\phi(29)=28$. So with $d$ the encrypted value, the decryption is $c=d^{17}\pmod{29}$. How does computing $\phi$ give the decryption equation when $e=5$ and $p=29$ in the equation $c^5\pmod{29}$?
| I agree that $\phi(29)=28$, as $29$ is prime. The encryption exponent is $5$ and so the decryption exponent is its inverse modulo $28$, which indeed is $17$.
So the decryption function is $x \to x^{17} \pmod{29}$. The inverse of an exponential function is another exponential function, over such a finite ring. The inverse of $e$ modulo $\phi(n)$ is the required exponent $d$.
So $1,9,0,12,12,9,24,10$ decrypted becomes
$1,4,0,12,12,4,20,15$ which is "beammeup" so "beam me up" (with 29 characters a space would have been nice...)
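The whole decryption fits in a few lines of Python (a check of my own on the answer's arithmetic):

```python
cipher = [1, 9, 0, 12, 12, 9, 24, 10]

d = pow(5, -1, 28)  # inverse of the encryption exponent 5 modulo phi(29) = 28
assert d == 17

plain = [pow(c, d, 29) for c in cipher]
assert plain == [1, 4, 0, 12, 12, 4, 20, 15]

# A = 0, B = 1, ...
message = "".join(chr(ord("a") + x) for x in plain)
print(message)  # beammeup
```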
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Compute the boundary and interior of $\left\{ \left(t,\frac{1}{t}\right) : \frac{1}{4}<t<4\right\}$
Let $\mathbb{R}^2$ be given with $|\cdot |_\infty$. Compute the boundary and interior of $A=\left \{\left(t,\frac{1}{t}\right) : \frac{1}{4} <t <4\right \}$.
I suppose that the interior of $\left \{\left(t,\frac{1}{t}\right) : \frac{1}{4} <t <4\right \}$ is every point that lies on $f(t)=1/t$ with $1/4<t<4$.
I'm having trouble with the boundary - can anyone help?
| The interior is empty (every neighbourhood of a point on the curve contains points off the curve), and the boundary (which is also the closure) is $\{(t,\frac 1 t): \frac 1 4 \leq t \leq 4\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Notation for functional derivative of two variables I have the following functional
\begin{equation}
F_{\varepsilon}\left[\rho\right]\left(t\right):=\int_{0}^{1}\left[\frac{\varepsilon}{2}\left(\frac{d\rho}{dx}\right)^{2}+\frac{1}{4\varepsilon}\left(1-\rho^{2}\right)^{2}\right]dx.
\end{equation}
where $\rho=\rho(t,x)$. Calling $L\left(t,x\right):=\left[\frac{\varepsilon}{2}\left(\frac{d\rho}{dx}\left(t,x\right)\right)^{2}+\frac{1}{4\varepsilon}\left(1-\rho^{2}\left(t,x\right)\right)^{2}\right]$, the functional derivative I want is
\begin{equation}
\frac{\partial L}{\partial\rho}-\frac{d}{dx}\frac{\partial L}{\partial\rho'}
\end{equation}
where $\rho'=\frac{\partial\rho}{\partial x}$.
My question is: is there any standard notation to indicate this functional derivative (which uses $\rho$ as a function of $x$ only)? I was thinking about the following
\begin{equation}
\frac{\partial L}{\partial\rho}\left(t,x\right)-\frac{d}{dx}\frac{\partial L}{\partial\rho'}\left(t,x\right)=\frac{\delta F_{\varepsilon}\left[\rho\right]}{\delta_{x}\rho\left(t,x\right)}\left(t,x\right).
\end{equation}
| This is just $\frac{\delta F_\varepsilon}{\delta\rho}$ with action $F_\varepsilon=\int_0^1Ldx$. It's the usual functional derivative because $\frac{\partial L}{\partial\dot{\rho}}=0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving an interesting inequality with square roots Let $a,b,c>0$ be real numbers such that $c \geq a \geq b$ and $a^2 \geq bc$. Show that
$$\frac{\sqrt{a^2b+b^2c}}{a+c}+\frac{\sqrt{b^2c+c^2a}}{b+a}+\frac{\sqrt{c^2a+a^2b}}{c+b} \geq \frac{\sqrt a +\sqrt b +\sqrt c}{2}.$$
I tried to augment each term on the left using what we have in the hypothesis:
$$\frac{\sqrt{a^2b+b^2c}}{a+c} \geq \frac{\sqrt{a^2b+b^2c}}{2c} \geq \frac{\sqrt{a^2b+b^2a}}{2c}=\frac{\sqrt{ab(a+b)}}{2c}$$
but this kind of inequality doesn't get me anywhere. Could you give me a hint?
| Even the following inequality is true for any positives $a$, $b$ and $c$:
$$\sum_{cyc}\frac{\sqrt{a^2b+b^2c}}{a+c}\geq\frac{\sqrt{a}+\sqrt{b}+\sqrt{c}}{\sqrt2}.$$
Indeed, let $a=x^2$, $b=y^2$ and $c=z^2$, where $x$, $y$ and $z$ are positives.
Thus, by Holder
$$\sum_{cyc}\frac{\sqrt{a^2b+b^2c}}{a+c}=\sqrt{\frac{\left(\sum\limits_{cyc}\frac{\sqrt{a^2b+b^2c}}{a+c}\right)^2\sum\limits_{cyc}\frac{b^2(a+c)^2}{a^2+bc}}{\sum\limits_{cyc}\frac{b^2(a+c)^2}{a^2+bc}}}\geq$$
$$\geq \sqrt{\frac{(a+b+c)^3}{\sum\limits_{cyc}\frac{b^2(a+c)^2}{a^2+bc}}}=\sqrt{\frac{(x^2+y^2+z^2)^3}{\sum\limits_{cyc}\frac{y^4(x^2+z^2)^2}{x^4+y^2z^2}}}$$ and it's enough to prove that
$$2(x^2+y^2+z^2)^3\geq(x+y+z)^2\sum\limits_{cyc}\frac{y^4(x^2+z^2)^2}{x^4+y^2z^2},$$
which is true by BW (the "Buffalo Way", i.e. the substitution $x=a$, $y=a+u$, $z=a+v$):
https://www.wolframalpha.com/input/?i=2%28x%5E2%2By%5E2%2Bz%5E2%29%5E3-%28x%2By%2Bz%29%5E2%28y%5E4%28z%5E2%2Bx%5E2%29%5E2%2F%28x%5E4%2By%5E2z%5E2%29%2Bz%5E4%28x%5E2%2By%5E2%29%5E2%2F%28y%5E4%2Bx%5E2z%5E2%29%2Bx%5E4%28y%5E2%2Bz%5E2%29%5E2%2F%28z%5E4%2Bx%5E2y%5E2%29%29%2Cx%3Da%2Cy%3Da%2Bu%2Cz%3Da%2Bv
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a constant $c$ so that $\hat{f}(m)=0$. Let $c \in (0, 2 \pi)$
and
$$f_c(x):= \begin{cases} \frac{x}{c}, & 0 \leq x \leq c \\ \frac{2 \pi -x}{2 \pi -c}, & c < x \leq 2 \pi \end{cases}$$
I want to determine $c$ so that $\hat{f}(m)=0$ for $m \in 7 \mathbb{Z} \backslash \{ 0\}$.
We have
$$\hat{f}(m)= \frac{1}{2 \pi} \int_0^{2 \pi} f(x) e^{-imx}\, dx = \frac{1}{2 \pi} \left[\int_0^c \frac{x}{c} e^{-imx}\, dx + \int_c^{2 \pi} \frac{2 \pi -x}{2 \pi -c} e^{-imx}\, dx\right]$$
$$= \frac{1}{2 \pi}\left[ \frac{(imx+1) e^{-imx}}{cm^2} \right]_0^c + \frac{1}{2 \pi}\left[ \frac{(im(x-2\pi)+1)e^{-imx}}{(c-2 \pi )m^2}\right]_c^{2 \pi}$$
$$= \frac{1}{2 \pi} \left[ \frac{(cm-i)\sin(cm)+(icm+1)\cos(cm)-1}{cm^2}\right] - \frac{1}{2 \pi} \left[ \frac{((c- 2 \pi)m-i)\sin(cm)+((ic-2i \pi )m+1)\cos(cm)+i\sin(2 \pi m)-\cos(2 \pi m)}{(c-2\pi)m^2}\right].$$
I was trying to solve the equation $\hat{f}(m) \overset{!}{=} 0$,
but I got stuck at determining $c$.
Also, what does the condition $m \in 7 \mathbb{Z} \backslash \{0 \}$ mean here?
Do you see a mistake, or is there another approach?
I would be very thankful for any help!
| We have
\begin{align}
\hat{f}(m) &= \frac{1}{2\pi}\int_0^c \frac{x}{c}\mathrm{e}^{-\mathrm{i}mx} dx
+ \frac{1}{2\pi}\int_c^{2\pi} \frac{2\pi - x}{2\pi - c}\mathrm{e}^{-\mathrm{i}mx} dx\\
&= \frac{1}{2\pi (2\pi - c) m^2}\left(1 - \mathrm{e}^{-\mathrm{i}2m\pi}\right)
- \frac{1}{(2\pi - c) cm^2}\left(1 - \mathrm{e}^{-\mathrm{i}cm}\right).
\end{align}
Remark: I used Maple to simplify the expressions.
For $m \in 7 \mathbb{Z} \backslash \{ 0\}$, we have
$$\hat{f}(m) = -\frac{1}{(2\pi - c) cm^2}\left(1 - \mathrm{e}^{-\mathrm{i}cm}\right).$$
Since we want $\hat{f}(m) = 0$ for all $m \in 7 \mathbb{Z} \backslash \{ 0\}$, we need $\mathrm{e}^{-\mathrm{i}cm} = 1$ whenever $7 \mid m$, i.e. $7c \in 2\pi\mathbb{Z}$;
hence $c = \frac{2\pi}{7}, \frac{4\pi}{7}, \frac{6\pi}{7}, \frac{8\pi}{7}, \frac{10\pi}{7}, \frac{12\pi}{7}$.
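As a numeric cross-check (a script of my own, taking $c = 2\pi/7$; the closed form below is the answer's expression, which in fact holds for every nonzero integer $m$):

```python
import cmath
import math

TWO_PI = 2 * math.pi
c = TWO_PI / 7

def f(x):
    return x / c if x <= c else (TWO_PI - x) / (TWO_PI - c)

def fhat_numeric(m, n=20000):
    # trapezoidal rule for (1/2pi) * integral_0^{2pi} f(x) e^{-imx} dx
    h = TWO_PI / n
    total = 0.5 * (f(0.0) + f(TWO_PI) * cmath.exp(-1j * m * TWO_PI))
    for k in range(1, n):
        x = k * h
        total += f(x) * cmath.exp(-1j * m * x)
    return total * h / TWO_PI

def fhat_closed(m):
    # the answer's formula, for integer m != 0
    return -(1 - cmath.exp(-1j * c * m)) / ((TWO_PI - c) * c * m * m)

assert abs(fhat_closed(3) - fhat_numeric(3)) < 1e-4   # the formulas agree
for m in (7, 14, 21, -7):
    assert abs(fhat_numeric(m)) < 1e-4                # vanishes on 7Z \ {0}
assert abs(fhat_numeric(3)) > 1e-3                    # but not for generic m
```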
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A class of sequences is of bounded variation Let $\left\{ a_n \right\}$ be a null sequence s.t.
$$\sum_{n=1}^\infty \left( \frac{1}{n}\sum_{k=n}^\infty \vert\Delta a_k\vert^p\right)^\frac{1}{p}<\infty$$
for some $p>1$.
How to prove that $\left\{ a_n \right\}$ must be of bounded variation?
My attempt: It is enough to prove that
$$\vert \Delta a_n\vert \leq \left( \frac{1}{n}\sum_{k=n}^\infty \vert\Delta a_k\vert^p\right)^\frac{1}{p} \quad \forall n>n_0, \text{ for some } n_0 \in \mathbb{N}.$$
I've tried proving the last inequality using Jensen's inequality, but I can't get to the desired result.
Edit:
How to find a sequence $\left\{ a_n \right\}$ of bounded variation for which $\sum_{n=1}^\infty \left( \frac{1}{n}\sum_{k=n}^\infty \vert\Delta a_k\vert^p\right)^\frac{1}{p}<\infty$ doesn't hold for any $p>1$?
| The argument is:
We wish to prove $n|\Delta a_n|^p \leq \sum_{k=n}^{\infty} |\Delta a_k|^p$.
Now suppose that the right-hand side does not contain at least $n$ terms of size $|\Delta a_n|^p$. Then surely $|\Delta a_k|^p \leq |\Delta a_n|^p$ eventually (i.e., of bounded variation). If it does contain at least $n$ terms of this size, then the inequality holds, and again $\{a_n\}$ is of bounded variation as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3450978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A lower bound for $\sum\limits_\text{cyc} \frac{x}{\sqrt{x^2+y^2}}$ Let $x,y,z>0$. Then
$$\sum_\text{cyc} \frac{x}{\sqrt{x^2+y^2}}>1$$
I found a similar inequality in the other direction but I can't apply Cauchy-Schwarz here... All I see is by Cauchy-Schwarz,
$$\sum_\text{cyc} \frac{x}{\sqrt{x^2+y^2}}\geq \frac{\sum_\text{cyc}\sqrt x}{\sum_\text{cyc}\sqrt[4]{x^2+y^2}}$$ which is not helpful.
| Note that $$\sum_{\rm cyc} \frac x{\sqrt{x^2+y^2}}>\sum_{\rm cyc} \frac x{x+y}$$ $$>\sum_{\rm cyc}\frac x{x+y+z}=1.$$
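A quick randomized check of the strict lower bound (my own addition; the bound is sharp but never attained):

```python
import random

def cyclic_sum(x, y, z):
    # sum_cyc x / sqrt(x^2 + y^2)
    return (x / (x*x + y*y) ** 0.5
            + y / (y*y + z*z) ** 0.5
            + z / (z*z + x*x) ** 0.5)

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(1e-3, 100.0) for _ in range(3))
    assert cyclic_sum(x, y, z) > 1.0

# a strongly skewed triple gets close to, but stays above, 1
print(cyclic_sum(1e-6, 1e-3, 1.0))
```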
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does locally smooth imply globally smooth for a function on a manifold? Say we have a function $f$ from a differentiable manifold $M$ to $\mathbb{R}$ such that for all points $m \in M$ there exists a neighborhood $U \ni m$ such that $f | U$ is smooth. Can we conclude that $f$ is smooth?
I'm trying to figure this out given the definition of smoothness from Warner's book, namely: "Let $U \subset M$ be open. We say that $f : U \to \mathbb{R}$ is a $C^\infty$ function on U if $f \circ \phi^{-1}$ is $C^\infty$ for each coordinate map $\phi$ on $M$."
| Yes, like on $\mathbb R^n$, smoothness is a local property. A function is (globally) smooth iff its restriction to any open set is a smooth function.
To prove it strictly using your definition, you should let your domain $U$ be $M$, and let $\phi:V\to \tilde V$ be a coordinate map on $M$. Then play around with your local smoothness assumption to reduce it to "smooth iff locally smooth" on $\mathbb R^n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $2\sqrt{3}$ is greater than $\pi$ Without calculator, how to prove that $2 \sqrt{3} > \pi$?
The level is baccalauréat grade.
I confirm it's not a school exercise at all, as I left school like 35 years ago.
|
(Credit to David G. Stork for the image).
I am showing the area-based argument explicitly because it seems that the other answers rely on a perimeter-based argument (which I find unconvincing without a rigorous proof). In contrast, it is quite easy to conclude by simple inspection that the circumscribed hexagon has a larger area than the inscribed circle.
The area of the circle is clearly simply $\pi$.
The hexagon can be decomposed into six congruent equilateral triangles. The height of each is $1$. The base can be computed with trigonometry as $(2)\tan\frac{\pi}{6} = \frac 2{\sqrt 3}$. Hence the area of a single triangle is $\frac 12 (1)(\frac 2{\sqrt 3})= \frac 1{\sqrt 3}$. The area of the hexagon is therefore $\frac 6{\sqrt 3} = 2\sqrt 3$.
This allows us to immediately conclude $2\sqrt 3> \pi$ as required.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 0
} |
If $\lim_{x\rightarrow +\infty} \frac{f(x)}{x}=0,$ show there is $x_n\rightarrow\infty$ such that $\lim_{n\rightarrow\infty}f'(x_n)=0.$ I'm working on the problem:
Suppose $f(x)$ is differentiable on $(0,+\infty)$.
If $$\lim_{x\rightarrow +\infty} \frac{f(x)}{x}=0,$$ show there is
$x_n\rightarrow\infty$ such that $$\lim_{n\rightarrow\infty}f'(x_n)=0.$$
Here are some of my thoughts:
Let $n\geq 2$, then $\frac{f(n)-f(1)}{n-1}=f'(x_n)$ for some $x_n\in (1,n)$, so $f'(x_n)\rightarrow 0$. But I can't show $x_n\rightarrow +\infty.$
| If the conclusion is not true then there exist $\epsilon >0$ and $M$ such that $|f'(x)| >\epsilon$ for all $x \geq M$. Using the fact that derivatives have the intermediate value property (Darboux's theorem), we see that we can actually assume $f'(x) >\epsilon$ for all $x \geq M$ or $f'(x)<-\epsilon$ for all $x \geq M$. Consider the former case. Note that $f(n+1)-f(n) \geq \epsilon $ for all $n >M$ by the MVT. Now you can see easily that $\frac {f(n)} n $ does not tend to $0$.
[If $n_0 >M$ then $f(n) \geq (n-n_0)\epsilon+f(n_0)$ for all $n >n_0$ so $\frac {f(n)} n \geq (1-\frac {n_0} n)\epsilon+\frac 1 nf(n_0) \to \epsilon$ as $n \to \infty$].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451553",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Probability measure over a bijection. Let $P$ be a probability measure over $\mathbb{R}^n$. Let $f: A \rightarrow B$ be a bijection between two subsets $A, B$ of $\mathbb{R}^n$. Does the following equality always hold?
$$
P(f(A)) = P(A).
$$
Thanks,
S
| No. We have $f(A)=B$, and since $B$ can have a different measure than $A$, the equality does not always hold.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Uniform convergence. Given that $f$ is a differentiable function, define $$f_n(x)=n\left(f\left(x+\frac{1}{n}\right)-f(x)\right),$$ and prove that $f_n$ converges uniformly to $f'$.
I have tried writing
$n(f(x+\frac{1}{n})-f(x))=\frac{f(x+\frac{1}{n})-f(x)}{1/n}$ and taking limits as $n\to\infty$, but I got stuck proving that the convergence is uniform.
| This is true only when $f'$ is uniformly continuous. (Use the MVT.)
For,
By MVT, there exists $\varepsilon_n \in (x,x+1/n)$ such that $$\Big|\frac{f(x+1/n)-f(x)}{1/n}-f'(x)\Big|=|f'(\varepsilon_n)-f'(x)|$$
How do we make the RHS small, uniformly in $x$, in order to get uniform convergence?
Answer: using the uniform continuity of $f'$.
Otherwise, consider, for example, $f(x)=x^3$ on $\Bbb R$. Then $$\sup_x \Big|\frac{f(x+1/n)-f(x)}{1/n}-f'(x)\Big|=\sup_\color{red}x \Big|\frac{3x}{n}+\frac{1}{n^2}\Big|= \infty,$$concluding the convergence is not uniform!
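The failure of uniformity for $f(x)=x^3$ is easy to see numerically (a small illustration of my own): the pointwise error is exactly $|3x/n + 1/n^2|$, tiny near the origin but unbounded in $x$ for every fixed $n$:

```python
def f(x):
    return x ** 3

def fn(x, n):
    # the difference quotient n * (f(x + 1/n) - f(x))
    return n * (f(x + 1.0 / n) - f(x))

def err(x, n):
    # pointwise error against f'(x) = 3x^2; equals |3x/n + 1/n^2| exactly
    return abs(fn(x, n) - 3 * x * x)

n = 100
assert err(0.0, n) < 1e-3     # pointwise convergence at each fixed x
assert err(1e4, n) > 100.0    # but sup over x is infinite: no uniform convergence
```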
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
If the operator $T$ is defined by $Tf(x)=\int_0^xf(t)\,dt$, show that $Tf \in C[0,1]$
Consider the operator $T$ on $L^2[0,1]$ defined by $Tf(x)=\displaystyle \int_0^xf(t)\,dt.$ Show that $Tf \in C[0,1].$
I have one question before this:
What are the implications between $L^p$ spaces, i.e if $f \in L^p$ does this imply that $f \in L^{p+1}$?
My attempt:
Assume $Tf$ is not in $C[0,1]$; then there exist $x \in [0,1]$ and $\epsilon>0$ such that for all $\delta >0$ we can find $x_0$ with $|x-x_0|<\delta$ but
$$\left|\int_0^x f-\int_0^{x_0} f\,\right|>\epsilon.$$
WLOG assume $x>x_0$, so $$\left| \int_{x_0}^x f\,\right| \geq \epsilon.$$
So for $\delta_n=\frac{1}{n},$ we can find $x_n \in [0,1]$ such that $|x-x_n|<\frac{1}{n}$ and $$\left|\int_{x_n}^x f\,\right| \geq \epsilon.$$
I don't know if that will lead me to a contradiction.
I would appreciate any help or hints with that.
| Your first question is actually relevant here: the relevant inclusion is that a $L^p$ function on a finite measure space is in $L^r$ for all $r<p$. Thus a $L^2([0,1])$ function is also in $L^1([0,1])$. This is the main ingredient you need, along with the theorem that if $f \in L^1$ then for all $\varepsilon > 0$ there exists $\delta > 0$ such that if $\mu(A)<\delta$ then $\int_A |f| d \mu < \varepsilon$.
One way to prove this theorem more or less "by hand" is to use the bounded convergence theorem: given $g$ such that $\| f - g \|_{L^1} < \varepsilon/2$ and $|g| \leq M=M(\varepsilon)$, you have $\int_A |f| d \mu \leq \int_A |g| d \mu + \int_A |f-g| d \mu < M \mu(A) + \varepsilon/2$, so you can select $\delta=\varepsilon/(2M)$.
Another way to proceed is to use the dominated convergence theorem, by noting that $f(x) 1_{[0,x_n]}(x) \to f(x) 1_{[0,x_0]}(x)$ pointwise if $x_n \to x_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3451968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
For simple, connected graph $G$ with minimum degree $\geq k$, if $k\geq 3$, does $G$ always have a cycle of length exactly $k+1$?
Let $G$ be a simple, connected graph such that $\delta(G)\geq k$ (where $\delta(G)$ is the minimum degree). If $k$ is at least $3$, does $G$ always have a cycle of length exactly $k+1$?
P.S.: I feel this is somewhat an extension of the question below:
Let $G$ be a graph of minimum degree $k>1$. Show that $G$ has a cycle of length at least $k+1$
I can't construct a graph with minimum degree $3$ that has no cycle of length $4$. Thanks a lot if you can show one!
| The Petersen graph is another example. It is 3-regular and has no cycle of length less than 5.
An easy construction for even $k$ is $K_{k,k}$. Every vertex has degree $k$ but since the graph is bipartite, there are no odd cycles in the graph (i.e. no cycle with length $k+1$).
For odd $n$, any $2$-connected graph satisfies your constraint.
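The Petersen claim can be verified by brute force; in the sketch below (my own, using the standard outer-cycle/inner-pentagram labelling) a triangle or $4$-cycle would force some pair of vertices to have too many common neighbours:

```python
from itertools import combinations

# Petersen graph: outer 5-cycle 0..4, inner 5-cycle 5..9 joined with step 2, plus spokes
edges = set()
for i in range(5):
    edges.add(frozenset((i, (i + 1) % 5)))          # outer cycle
    edges.add(frozenset((5 + i, 5 + (i + 2) % 5)))  # inner pentagram
    edges.add(frozenset((i, i + 5)))                # spokes
adj = {v: {u for e in edges if v in e for u in e if u != v} for v in range(10)}

assert all(len(adj[v]) == 3 for v in range(10))  # 3-regular
for u, v in combinations(range(10), 2):
    common = len(adj[u] & adj[v])
    # adjacent pairs share no neighbour (no triangle);
    # non-adjacent pairs share exactly one (no 4-cycle)
    assert common == (0 if frozenset((u, v)) in edges else 1)
print("Petersen graph: 3-regular with no cycle of length 3 or 4")
```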
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Homogeneous principal G-set and its group of automorphisms Let $G$ be a group and consider the left operation of $G$ on itself by left translation; this action is simply transitive, i.e., $G$ operates freely and transitively on itself. Then $G$ together with this operation is a left homogeneous principal $G$-set; denote it by $G_{s}$.
Let $E$ be a homogeneous principal $G$-set and $a\in E$. The orbital mapping $\omega_{a}:x\mapsto x.a$ defined by $a$ is a $G$-isomorphism of the $G$-set $G_s$ onto $E$. Then, there exists an isomorphism
$$\psi_{a}:G^{op}\rightarrow \text{Aut}(E),x\mapsto\omega_{a}\circ\delta_{x}\circ\omega_{a}^{-1},$$
where $\delta_x$ is the right translation of $G$ defined by $x$.
Question: What is going on here? Specifically, why is this a bijection?
| Note that $ G^{op}\cong Aut(G_s) $ via $x\mapsto \delta_x$. Indeed, if $f: G_s\to G_s$ is an isomorphism, then for all $g$, $f(g) = f(g\cdot 1) = g\cdot f(1) = gf(1) = \delta_{f(1)}(g)$, so $f\mapsto f(1)$ is the inverse isomorphism.
Now this map $G^{op}\to Aut(E)$ is just the composition of $G^{op}\to Aut(G_s ) \to Aut(E)$ where the map $Aut(G_s)\to Aut(E)$ is the standard way to prove that if $X\cong Y, Aut(X)\cong Aut(Y)$ :
given an isomorphism $f:X\to Y$, any automorphism $h:X\to X$ fits in a commutative square with a unique $h'$ :
$\require{AMScd}\begin{CD} X@>h>> X \\
@VfVV @VfVV \\
Y @>h'>> Y\end{CD}$
and that $h'$ is precisely $f\circ h\circ f^{-1}$. It's then easy to check that $h\mapsto h' = f\circ h\circ f^{-1}$ is an isomorphism $Aut(X)\to Aut(Y)$, the converse being given by $k\mapsto k' = f^{-1}\circ k\circ f$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to solve a system of linear inequalities? I am working on the following exercise:
Find a solution to the following system or prove that none exists:
\begin{align}
x_1-x_2 &\le 4\\
x_1-x_5 &\le 2\\
x_2-x_4 &\le -6 \\
x_3-x_2 &\le 1 \\
x_4-x_1 &\le 3 \\
x_4-x_3 &\le 5\\
x_4-x_5 &\le 10 \\
x_4-x_3 &\le -4 \\
x_5-x_4 &\le -8
\end{align}
I do not know how to do that. Is there some algorithm? I can not find anything online. Could you help me?
| Add the second, fifth, and ninth inequalities:
$$(x_1-x_5)+(x_4-x_1)+(x_5-x_4) \le 2+3+(-8),$$
i.e. $0 \le -3$, which is false. So the system has no solution.
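On the asker's question of whether there is an algorithm: yes. Constraints of the form $x_u - x_v \le c$ are "difference constraints", and the system is feasible iff the graph with an edge $v \to u$ of weight $c$ for each constraint has no negative-weight cycle, which Bellman-Ford detects. A minimal sketch (the encoding below is my own):

```python
def feasible(constraints, n):
    """constraints: list of (u, v, c) meaning x_u - x_v <= c, variables 1..n."""
    # start from a virtual source connected to every vertex with weight 0,
    # i.e. all distances 0; then relax every edge n - 1 times
    dist = {v: 0 for v in range(1, n + 1)}
    for _ in range(n - 1):
        for u, v, c in constraints:
            if dist[v] + c < dist[u]:
                dist[u] = dist[v] + c
    # a further possible relaxation reveals a negative cycle: infeasible
    return all(dist[v] + c >= dist[u] for u, v, c in constraints)

system = [(1, 2, 4), (1, 5, 2), (2, 4, -6), (3, 2, 1), (4, 1, 3),
          (4, 3, 5), (4, 5, 10), (4, 3, -4), (5, 4, -8)]
print(feasible(system, 5))  # False: the cycle x1 -> x4 -> x5 -> x1 sums to -3
```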
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that $\lim_{n \to \infty} k^nn^p= 0$ where $|k| < 1$ and $p>0$ Hello I am working through some problems in a book and came across this question.
Prove that $\lim_{n \to \infty} k^nn^p= 0$ where $|k| < 1$ and $p>0$
I can see why it should be the case (since exponentials grow faster than polynomials) but I don't really know where to start to prove it rigorously. I have identified that it becomes a $"0 \times \infty"$ situation and have tried using L'Hôpital's rule on the expression $k^n/(1/n^p)$, which didn't seem to help. How should I go about tackling this problem?
| It suffices to show
\begin{align*}
(|k|^{1/p})^{n}n\rightarrow 0.
\end{align*}
Let $a=|k|^{1/p}<1$, we are to show that $a^{n}n\rightarrow 0$.
Let $a=1/(1+r)$ for $r>0$, then $a^{n}\leq\dfrac{1}{1+nr+n(n-1)r^{2}/2}$, it is now easy to show that
\begin{align*}
\dfrac{n}{1+nr+n(n-1)r^{2}/2}\rightarrow 0.
\end{align*}
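The squeeze above is easy to probe numerically; a small sketch (the particular $k$ and $p$ are arbitrary choices of mine):

```python
import math

k, p = -0.9, 2.5              # any |k| < 1 and p > 0
a = abs(k) ** (1.0 / p)       # then |k^n * n^p| = (a^n * n)^p with a < 1
r = 1.0 / a - 1.0             # write a = 1/(1+r) with r > 0

for n in [10, 100, 1000]:
    term = abs(k) ** n * n ** p
    # binomial bound: (1+r)^n >= 1 + n*r + n*(n-1)*r^2/2, so a^n * n is squeezed
    bound = (n / (1.0 + n * r + n * (n - 1) * r * r / 2.0)) ** p
    assert term <= bound
```

The bound tends to $0$ like $1/n^p$, which is why $k^nn^p\to 0$.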
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How many solutions to this system of linear equations in $\mathbb{Z}_N$? Given distinct non-negative integers $i$ and $j$, and given $a,b\in\mathbb{Z}_N$, is it true that there is at most one $(x,y)\in \mathbb{Z}_N^2$ so that
$$x+iy\equiv a \mod N,$$
$$x+jy\equiv b \mod N?$$
When it is in $\mathbb{Z}\subseteq\mathbb{Q}$, as the determinant of the coefficient matrix is $j-i\neq 0$, there is at most one solution. But what about $\mathbb{Z}_N$?
**Revision:** to avoid some trivial cases, we can assume $i,j$ are distinct integers between $1$ and $N$.
**Further revision:** if there is more than one solution, can you give a reasonable upper bound for the number of solutions?
| If $j \equiv i$ mod $N$ there may be zero solutions or $N$ of them (one for each choice of $y$), depending on $a,b.$ But if not, we can subtract the two equations and get $(j-i)y \equiv b-a$ mod $N.$ Then provided $j-i$ is invertible mod $N$ we get a unique solution.
Edit: When $j-i$ is invertible we can get $y=(j-i)^{-1}(b-a)$ uniquely. And with this $y$ value, either equation solved for $x$ gives the same value, so uniqueness works in this case.
User Connor asks what if $j-i$ not invertible; I don't know what happens in general in that case, it may be involved.
So I think the unique solution question is a bit involved in general.
Added: OP Connor has asked about the number of solutions. Let $d=\gcd(j-i,N).$ Then a necessary condition for solutions to exist is that $d$ divide $b-a.$ When it does there are exactly $d$ solutions mod $N$; get any one of the $y$ by using $(j-i)y \equiv b-a$ mod $N,$ then either equation determines $x$ mod $N$ (both give the same thing).
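The solution count claimed in the last paragraph (exactly $d=\gcd(j-i,N)$ solutions when $d\mid b-a$, and none otherwise) is easy to confirm by brute force; a quick sketch with arbitrary parameter choices:

```python
from math import gcd
from itertools import product

def count_solutions(i, j, a, b, N):
    """Count pairs (x, y) in Z_N^2 with x + i*y = a and x + j*y = b (mod N)."""
    return sum((x + i * y) % N == a % N and (x + j * y) % N == b % N
               for x, y in product(range(N), repeat=2))

N = 12
for i, j, a, b in [(1, 4, 5, 2), (2, 8, 3, 9), (3, 7, 0, 11)]:
    d = gcd(j - i, N)
    expected = d if (b - a) % d == 0 else 0
    assert count_solutions(i, j, a, b, N) == expected
```

The check works because once $y$ solves $(j-i)y\equiv b-a$, the first equation determines $x$ and the second is automatic.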
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
maximize product of two sines with given precision and for smallest time parameter
Consider the function $f(t)=\text{sin}(\omega_1 t)\text{sin}(\omega_2 t)$, where $\omega_1 \ \text{and}\ \omega_2 \in \mathbb{R} $.
Is there a numerical or analytical solution to the following optimization problem:
find minimal $t$ so that $f(t)\ge 1-p$, where $p\ll 1$ is the precision required.
Thanks in advance for any help!
Background: I am solving the spin dynamics of an electron, where my spin oscillates with two different frequencies, and I am wondering what is the smallest time I need to flip my spin with a given precision $p$.
| Although it is not a complete answer, my answer provides some partial solution to your problem (existence).
Depending on $w_1$ and $w_2$, the minimizer may not exist.
Let us first consider the maximum of $f(t) = \sin(w_1t)\sin(w_2t)$.
Since, at the maximum, $\sin(w_1t) = \sin(w_2t)$, without loss of generality suppose $\sin(w_1t) = \sin(w_2t) > 0$. This can only happen when $w_1t + w_2t = \pi$ mod $2\pi$.
Suppose $w_1+w_2 \ne 0$. (If it is not the case, $f(t) \le 0$).
Thus, at $\hat{t}_q = (\pi+2\pi q)/(w_1+w_2)$, we have
$$
f(\hat{t}_q) = \sin^2(w_1\hat{t}_q) = \sin^2\left(\alpha+2\alpha q\right),\qquad \alpha =\frac{w_1}{w_1+w_2}\pi.
$$
Let us consider the mapping $\phi_\alpha:x \mapsto x + 2\alpha$ mod $2\pi$.
Then consider
$$
\Omega = \{x_0=\alpha, x_1=\phi_\alpha(x_0),\cdots, x_k = \phi_\alpha(x_{k-1}), \cdots\}.
$$
By the ergodic theorem, if $\alpha/2\pi$ is a rational number, $\Omega$ is finite.
If $\alpha/2\pi$ is irrational, $\Omega$ is dense in $(0,2\pi)$.
Therefore, if $\alpha/2\pi$ is rational, and
$\max_{x \in \Omega} \sin^2x < 1- p$, the solution does not exist.
If $\alpha/2\pi$ is irrational,
there exists $x^* \in \Omega$ such that $|\sin^2 x^* -1| < p$.
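For concrete frequencies, the minimal $t$ (when it exists) can be found numerically by a grid search refined with bisection; a sketch of my own, not part of the answer above. For $w_1=w_2=1$ the first crossing of $1-p$ is at $\arcsin\sqrt{1-p}$, while for $w_1=1,w_2=3$ the product never exceeds $9/16$ (an easy calculus check), so the search correctly reports that no $t$ exists for small $p$:

```python
import math

def first_hit(w1, w2, p, t_max=50.0, dt=1e-3):
    """Smallest t (up to bisection accuracy) with sin(w1*t)*sin(w2*t) >= 1-p,
    or None if no such t <= t_max is found on the grid."""
    f = lambda t: math.sin(w1 * t) * math.sin(w2 * t)
    for k in range(1, int(t_max / dt)):
        if f(k * dt) >= 1 - p:
            lo, hi = (k - 1) * dt, k * dt     # f(lo) < 1-p <= f(hi)
            for _ in range(60):               # bisect to the first crossing
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if f(mid) >= 1 - p else (mid, hi)
            return hi
    return None

# w1 = w2 = 1: f(t) = sin^2 t first reaches 1-p at arcsin(sqrt(1-p))
assert abs(first_hit(1.0, 1.0, 0.01) - math.asin(math.sqrt(0.99))) < 1e-6
# w1 = 1, w2 = 3: the maximum of f is 9/16, so p = 0.01 is unreachable
assert first_hit(1.0, 3.0, 0.01) is None
```

The grid step `dt` must be small enough not to skip the narrow windows where both sines are near $1$.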
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tap filling Tank .. Time taken?
A tap can fill a tank in 16 hours, whereas another tap can empty the tank in 8 hours. If both taps are opened on a three-fourths-filled tank, how long will it take to empty the tank?
I know that time to fill+empty = (1/16)+(1/8)
How to incorporate three fourth filled tank?
| If the tank is full and you open both taps, then it will take $16$ hours to empty it: in the first $8$ hours the second tap drains one full tank, and in the next $8$ hours it drains the additional tankful that the first tap supplies over the whole $16$ hours. If the tank was $3/4$ full in the beginning, it takes $(3/4)\times16=12$ hours.
Or you can write the equations, if $V$ is the volume of the tank, the “speeds of emptying” of two taps are:
$$
u_1=-V/16,\qquad u_2=V/8
$$
To find the time, we should divide the volume of the water by the “speed of emptying”:
$$
t = \frac{(3/4)V}{u_1+u_2} = \frac34\frac{V}{\frac V8-\frac V{16}} = \frac34\frac{V}{\frac V{16}} = \frac34\times 16 = 12~\text{hours}
$$
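The same rate bookkeeping works for any fill/empty times and any initial fraction; a small sketch:

```python
from fractions import Fraction

def time_to_empty(fill_hours, empty_hours, fraction_full):
    """Hours to empty the tank with both taps open; None if it never empties."""
    net_out = Fraction(1, empty_hours) - Fraction(1, fill_hours)  # tanks/hour
    if net_out <= 0:
        return None          # the filling tap wins (or they balance exactly)
    return fraction_full / net_out

assert time_to_empty(16, 8, Fraction(3, 4)) == 12
```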
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3452968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition of $C^k$ boundaries I am reading the book "Partial Differential Equations" of Lawrance c. Evans by myself and started with Appendix part.
At the very beginning of Appendix C, there exists a definition
"We say $\partial U$ is $C^k$ if for each point $x^0\in\partial U$, there exists $r>0$ and a $C^k$ function $\gamma:\mathbb{R}^{n-1}\longrightarrow\mathbb{R}$ such that we have
$ U\cap B(x^0,r)=\{x\in B(x^0,r)\lvert x_n>\gamma(x_1,...,x_{n-1})\}$
I do not understand the intuition behind this definition. To my understanding it does not correspond to derivatives. Can anyone help me with that? Why do we call such boundary sets $C^k$?
| The condition $$x_n > \gamma(x_1, \ldots, x_{n-1})$$ says that, locally, the boundary can be written as the graph of a function -- it means that the boundary itself is locally the set where $$x_n = \gamma(x_1, \ldots, x_{n-1})$$
This is equivalent to saying it is an $n-1$ dimensional submanifold or a hypersurface -- maybe you have heard these words.
The smoothness assumption on $\gamma$ translates to a smoothness statement about this hypersurface. If you would, e.g., only require $\gamma$ to be Lipschitz, this would allow for corners in the boundary. Asking it to be of class $C^k$ with $k\ge 1$ makes it differentiable, and $k\ge 2$ already implies that it curves continuously.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Name of partial derivatives where the order of differentiation can be reversed. Is there a name given to partial differential equations of the form: $$\frac{\partial^2 F}{\partial x\,\partial y}=\frac{\partial^2 F}{\partial y\,\partial x}$$ Not asking for any kind of proof, just specifically wondering if there is a name given to PDE's that satisfy this condition.
| I suspect that the equality
$$\frac{\partial^2 F}{\partial x\,\partial y}=\frac{\partial^2 F}{\partial y\,\partial x},$$
interpreted as a pde, has no special name because it is not commonly interpreted as a pde.
On the other hand, understood as "a condition on" or "a property of" a certain function $F$, the said equality is usually referred to as
* Equality of mixed partial derivatives.
* Symmetry of second derivatives.
* Symmetry of the Hessian matrix.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Solve the equation exponential radical Solve the equation$$31+8\sqrt{15}=(4+\sqrt{15})^x$$for $x$.
I think you could set up a recursion from the coefficients of $(4+\sqrt{15})^{x}$ to the coefficients of $(4+\sqrt{15})^{x+1}$ then find the general formula using the characteristic equation?
It should look like $a_{n+1}=4a_{n}+15b_{n}$ and $b_{n+1}=4b_{n}+a_{n}$, cancel out one of variables, and then take the characteristic equation.
| If $x = \log_{4+\sqrt {15}} (31+8\sqrt {15})$ is not an acceptable solution (and they should specify that it is not; it satisfies all the requirements of a solution: it exists, it is a unique value, and it solves the equation), then I'm not really sure there is anything to do but guess.
We can note for any positive integer $k$ that $(4+\sqrt{15})^k = \sum_{j=0}^k \sqrt{15}^j*4^{k-j}C_{k,j}=$
$\sum_{j=0;j\text{ even}}^k 15^{\frac j2}*4^{k-j}C_{k,j} + \sqrt{15}\sum_{j=0;j\text{ odd}}^k 15^{\frac{j-1}2}4^{k-j}C_{k,j}$
So if we can get $31 = 4^{2m} + 15*4^{2m-2}C_{2m\{+1\},2} + .... + \{4;1\}\{2m;1\}15^m$.
And $8 = 4^{2m\pm 1} + 15*4^{2m-3;-1}C_{whatever} + .....$ we'd .... have something.
Now
$31 = 15 + 16 = (\sqrt {15})^2 + 4^2$ and $8\sqrt{15} = 2*4*\sqrt{15}$.
So $31+8\sqrt{15} = 4^2 + 2*4*\sqrt{15} + \sqrt{15}^2 = (4+\sqrt{15})^2$
So $x=\log_{4+\sqrt{15}} (4+\sqrt{15})^2 = 2$.
.....
I don't really like this because it is basically guessing.
But it does seem reasonable that if $x$ is not an integer then $(4+\sqrt {15})^x = \text{a mess}$. And if $x$ is an integer we must have $31 =$ some power of $4 + $ some power of $15$ + several combinations thereof. Well $8\sqrt{15} =\sqrt{15}($ combination of powers of $4$ and powers of $15$).
And given that $8\sqrt{15} = 2*4\sqrt{15}$ and $31 = 4^2 + 15$, $x=2$ is a good guess.
Now, confession.... did I see it right away? Not really. I first tried factoring and got $31+ 8\sqrt{15} = 8(4+\sqrt {15}) -1$ which was odd. Then I figured $31 + 8\sqrt{15} = 31 + 2*4\sqrt{15} = 16 + 2*4\sqrt{15} + 15 = 4^2 + 2*4\sqrt{15} + \sqrt{15}^2$.
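The guess is easy to verify exactly, since numbers of the form $a+b\sqrt{15}$ multiply as integer pairs; a sketch:

```python
def mul(p, q):
    """(a + b*sqrt(15)) * (c + d*sqrt(15)), with each number as a pair (a, b)."""
    a, b = p
    c, d = q
    return (a * c + 15 * b * d, a * d + b * c)

def power(p, k):
    acc = (1, 0)                 # the pair encoding 1
    for _ in range(k):
        acc = mul(acc, p)
    return acc

# (4 + sqrt(15))^2 = 31 + 8*sqrt(15), hence x = 2
assert power((4, 1), 2) == (31, 8)
```

The same exact arithmetic rules out nearby integer exponents, since $(4+\sqrt{15})^x$ is strictly increasing in $x$.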
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Is there any finite dimensional algebra that is not isomorphic to some algebra of matrices? Suppose that $\mathcal A$ is a finite dimensional algebra. Is it true that there always exist some isomorphism
\begin{equation}
\phi:\mathcal{A}\rightarrow C\subset M_{K\times K},
\end{equation}
where $C$ is some subalgebra of the algebra of $K\times K$ matrices? If not, what is the requirement for an algebra to be isomorphic to some algebra of matrices?
I study physics, and in general, physicists are usually interested in some algebra $\mathcal{A}$ of operators over some vector space $V$, and if $V$ is finite dimensional, the statement is obviously true. However, I do not know how to prove it for an arbitrary finite dimensional algebra, nor can I think of a counterexample.
| In general, the answer is no. The octonions are not isomorphic to any matrix algebra because they're not associative.
If the algebra is associative (and unital, which one can always arrange by adjoining a unit), then the answer is yes.
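For the associative unital case, the standard embedding is the left regular representation: an element acts on the algebra itself by left multiplication. A sketch for the $2$-dimensional real algebra $\mathbb{C}$ (so $K=2$), with $a+bi$ encoded as the pair $(a,b)$:

```python
def left_mult_matrix(z):
    """Matrix of w -> z*w on C = R^2 in the basis (1, i): a+bi -> [[a,-b],[b,a]]."""
    a, b = z
    return [[a, -b], [b, a]]

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def c_mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

z, w = (2, 3), (-1, 4)
# the representation is multiplicative: L(z*w) = L(z) L(w)
assert left_mult_matrix(c_mul(z, w)) == mat_mul(left_mult_matrix(z), left_mult_matrix(w))
```

Faithfulness comes from the unit: $L(z)$ applied to $1$ recovers $z$, so distinct elements get distinct matrices.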
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Hom on sequences of integers is determined by values on finite sequences I have found this problem intended for school students (math olympiad) and couldn't solve it myself.
Consider the short exact sequence of abelian groups
$0 \to\bigoplus \limits_{i \in \mathbb{N}} \mathbb{Z}_i \to \prod\limits_{k \in \mathbb{N}} \mathbb{Z}_k \to X \to 0$
The inclusion is obvious (finite sequences of integers to all sequences).
The problem is to show that $Hom(X, \mathbb{Z}) = 0$.
I thought it could be done by simple categorical considerations, but apparently it can't. We have
$0 \to Hom(X, \mathbb{Z}) \to Hom( \prod\limits_{k \in \mathbb{N}} \mathbb{Z}_k, \mathbb{Z}) \to Hom(\bigoplus \limits_{i \in \mathbb{N}} \mathbb{Z}_i, \mathbb{Z}) \cong \prod\limits_{i \in \mathbb{N}} \mathbb{Z}_i$
but the middle one doesn't give anything nice because $Hom(-,\mathbb{Z})$ doesn't turn products into coproducts.
So I am out of ideas.
| I know the proof now but I haven't solved it myself.
Note that for a prime $p$ a sequence of the form $(p^na_n)$ is divisible in $X$ by $p^k$ for all $k \in \mathbb{N}$ because all but finitely many of $p^na_n$ are. Therefore for any $f \in Hom(X, \mathbb{Z}), f(p^na_n) $ is divisible by $p^k$ for all $k$ so $f(p^na_n) = 0$.
Now take $x = (x_n) \in X$; it can be represented as $(2^na_n + 3^nb_n)$ because $\gcd(2^n, 3^n) = 1$. Hence $f(x) = f((2^na_n)) + f((3^nb_n)) = 0$ as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Translation of a union of sets equals the union of a translation of sets. I'm wondering whether or not this is true for any arbitrary set of real numbers. It seems pretty straightforward to me.
Sorry if this is overly pedantic, I've just been scarred by enough measure theory this semester that I've learned everything which seems obvious is true, except for the things which aren't; for those things, there is some pathological counter example that you forgot to consider.
If $\{S_{\alpha}\}_{\alpha \in I}$ is some arbitrary collection from the powerset of $\mathbb{R}$, then we should have, for any $x\in \mathbb{R},$ $$x+\bigcup_{\alpha \in I}S_{\alpha} = \bigcup_{\alpha \in I}x+S_{\alpha};$$
A typical element of the LHS is of the form $x+s$, where $s\in S_{\alpha}$ for some $\alpha\in I$. It then follows that $x+s \in x+S_{\alpha} \subseteq \bigcup_{\alpha \in I}x+S_{\alpha}$.
On the other hand, if $t\in\bigcup_{\alpha \in I}x+S_{\alpha}$, then $t\in x+S_{\alpha}$ for some $\alpha \in I$, and so $t=x+s$ (where $s\in S_{\alpha}$). Since $s\in S_{\alpha}$, $s\in \bigcup_{\alpha \in I}S_{\alpha}$, and so $t=x+s \in x+\bigcup_{\alpha \in I}S_{\alpha}.$
Is there something I'm missing here, or am I just being gun-shy at this point?
Thanks in advance.
| For any function $f: \mathbb R \to \mathbb R$ and any collection $(S_{\alpha})_{\alpha \in I}$ we have $f\left(\bigcup_{\alpha \in I} S_{\alpha}\right)=\bigcup_{\alpha \in I} f(S_{\alpha})$. In particular for the function $f(y)=x+y$.
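A tiny concrete instance of this identity (the sets and the shift are arbitrary choices of mine):

```python
sets = [{1, 2}, {2, 5}, {-3}]
x = 10
f = lambda y: x + y                            # translation by x

lhs = {f(s) for S in sets for s in S}          # union of the translated sets
rhs = {f(s) for s in set().union(*sets)}       # translate of the union
assert lhs == rhs == {11, 12, 15, 7}
```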
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sum of the series $\sum_{n=0}^{\infty} \lfloor n\sqrt{2} \rfloor x^n$? Can we find the sum of the series $\sum_{n=0}^{\infty} \lfloor n\sqrt{2} \rfloor x^n$ explicitly? The question is related to this one, where the sum is computed if $\sqrt{2}$ is replaced by some rational number $r$.
| This isn't a solution but it does yield a representation that has a nice approximation built into it. Note the series representation
$$\left\lfloor x \right\rfloor =x-\frac{1}{2}+\frac{1}{\pi }\sum\limits_{k=1}^{\infty }{\frac{\sin \left( 2\pi kx \right)}{k}}$$
And so
$${{B}_{1}}\left( \left\{ x \right\} \right)=\left\{ x \right\}-\frac{1}{2}=x-\left\lfloor x \right\rfloor -\frac{1}{2}=-\frac{1}{\pi }\sum\limits_{k=1}^{\infty }{\frac{\sin \left( 2\pi kx \right)}{k}}$$
Where ${{B}_{1}}$is a Bernoulli polynomial and $\left\{ x \right\}$ is the fractional part of $x$. Substituting and evaluating some series we have
$$\sum\limits_{n=1}^{\infty }{\left\lfloor an \right\rfloor {{x}^{n}}}=\frac{ax}{{{\left( x-1 \right)}^{2}}}+\frac{1}{2\left( x-1 \right)}-\sum\limits_{n=0}^{\infty }{{{B}_{1}}\left( \left\{ an \right\} \right){{x}^{n}}}$$
Taking say the first term we get a good approximation
$$\sum\limits_{n=1}^{\infty }{\left\lfloor an \right\rfloor {{x}^{n}}}\simeq \frac{ax}{{{\left( x-1 \right)}^{2}}}+\frac{1}{2\left( x-1 \right)}+\frac{1}{2}$$
For $a=\sqrt{2}$ quick graphs highlight how reasonable the approximations indeed are. The first graph below shows the numerical evaluation of the series (black dashed), against the Bernoulli polynomial approximation for $n=0,1$ and $n=0,1,2$ (blue and orange respectively), over half the domain so as to highlight the discrepancy. The second graph shows the $n=0$ approximation over a wider domain.
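Since the graphs are not reproduced here, the quality of the $n=0$ truncation can instead be probed numerically; a sketch for $a=\sqrt2$, where the error is provably bounded by $\sum_{n\ge1}x^n/2 = x/(2(1-x))$ because $|B_1(\{an\})|<\tfrac12$:

```python
import math

a = math.sqrt(2)

def series(x, terms=4000):
    # direct high-order partial sum of sum floor(a*n) x^n
    return sum(math.floor(a * n) * x ** n for n in range(1, terms))

def approx(x):
    # n = 0 Bernoulli term only: ax/(x-1)^2 + 1/(2(x-1)) + 1/2
    return a * x / (x - 1) ** 2 + 1 / (2 * (x - 1)) + 0.5

for x in [0.3, 0.5, 0.7]:
    assert abs(series(x) - approx(x)) < x / (2 * (1 - x)) + 1e-9
```

In practice the error is much smaller than this bound, because the $B_1(\{an\})$ values oscillate and largely cancel.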
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3453954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Probability distribution between two unit vectors using the taxicab metric for distance Suppose we have two positive, real unit vectors $X$ and $Y$ in $\mathbb{R}^n$.
EDIT: As was suggested in a comment, let me describe how $X$ and $Y$ are randomly generated. For both vectors, pick $n$ random values from the uniform distribution $[0,1]$. Then normalize the vector by the taxicab metric (i.e. all the entries sum to $1$).
By the taxicab metric, the distance between these vectors is
$$d(X,Y)=\sum_{i=1}^n |x_i-y_i|.$$
Now, if we perform this operation for thousands of random vectors, we get histograms that look like
(three histograms omitted)
which correspond to $n=2,3,4$. First, it is easy to prove that $d(X,Y)\leq 2$. Second, this idea can be extended to other $Lp$ metrics (such as the standard Euclidean metric) by defining
$$d(X,Y)=\sqrt[p]{\sum_{i=1}^n |x_i-y_i|^p}$$
which is interesting but not something I am currently investigating. I am trying to find an analytic form for the histograms above in terms of $n$. Obviously, this function will be $0$ at $0$ and $2$, and the maximum seems to be approaching $2/3$ as $n$ goes to infinity. Unfortunately, I have not had any luck with this problem as it seems to go through several nasty integrals involving absolute values. Any helps, hints, or general terms to look up would be appreciated.
| As $n \to \infty$, the histogram will become more and more concentrated at $\frac{2}{3}$. For example, $Pr(|d(X,Y)-\frac{2}{3}| > \epsilon)$ goes to $0$ as $n \to \infty$ for any fixed $\epsilon > 0$. By analyzing the proof below, you could say something stronger (e.g. that $Pr(|d(X,Y)-\frac{2}{3}| > \frac{1}{\log n})$ goes to $0$).
For ease of notation, have $x_1,\dots,x_n,y_1,\dots,y_n$ be the points uniformly chosen from $[0,1]$, and then $$d(X,Y) = \sum_{i=1}^n \left|\frac{x_i}{x_1+\dots+x_n}-\frac{y_i}{y_1+\dots+y_n}\right|.$$ The main observation is that $x_1+\dots+x_n$ and $y_1+\dots+y_n$ will be really close to $\frac{1}{2}n$. We have $$\sum_{i=1}^n \left|\frac{x_i}{x_1+\dots+x_n}-\frac{x_i}{\frac{1}{2}n}\right| \le 2\frac{|\frac{1}{2}n-(x_1+\dots+x_n)|}{x_1+\dots+x_n},$$ which is $o(1)$ with probability $1-o(1)$ (this comes from, e.g., $Pr\left(|x_1+\dots+x_n-\frac{1}{2}n| > \frac{n}{\log n}\right) = o(1)$). Similarly with the $y_i$'s. Therefore, $d(X,Y)$ is basically $\frac{2}{n}\sum_{i=1}^n \left|x_i-y_i\right|$. And a concentration inequality again says this is very close to $\frac{2}{3}$ with overwhelming probability, since the expected value of $|x_i-y_i|$ is $\int_0^1\int_0^1 |x-y|dxdy = \frac{1}{3}$.
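The concentration at $2/3$ is easy to see in simulation; a Monte Carlo sketch (the sample sizes are arbitrary choices of mine):

```python
import random

def sample_distance(n, rng):
    # two random positive vectors, each normalized to sum 1 (taxicab norm)
    x = [rng.random() for _ in range(n)]
    y = [rng.random() for _ in range(n)]
    sx, sy = sum(x), sum(y)
    return sum(abs(xi / sx - yi / sy) for xi, yi in zip(x, y))

rng = random.Random(0)
trials = [sample_distance(2000, rng) for _ in range(200)]
mean = sum(trials) / len(trials)
assert abs(mean - 2 / 3) < 0.05     # tightly concentrated around 2/3
```

With $n=2000$ the individual samples already deviate from $2/3$ by only about $0.01$.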
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454085",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Monotonicity of a partial binomial sum I'm wondering when does the following partial sum monotonically increase or decrease in $n$:
$$f_x(n)=\sum^{\lfloor\frac{n}{2}\rfloor}_{k=0}{n \choose k}x^k(1-x)^{n-k}.$$
In theory, if $x<\frac{1}{2}$, the probability converges to 1. On the other hand, if $x>\frac{1}{2}$, it converges to zero. And when $x=\frac{1}{2}$, it converges to $\frac{1}{2}$.
I initially tried to show that the convergence is monotone, meaning that $f_x(n)$ is monotone in $n$. However, I tried several values of $x$ and found that it is actually not.
For example, when $x=0.49$, we have $f_{0.49}(2)\simeq0.7599$, $f_{0.49}(20)\simeq0.6229$, $f_{0.49}(10000)\simeq0.9778$. A similar example can be found for $x>\frac{1}{2}$.
Is there any intuition behind this non-monotonicity? Or is there some increment in $n$ for which monotonicity is guaranteed?
| After I saw the answer from joriki, I only focused on odd $n$ cases and could show the monotonicity. I'll leave my reasoning here for future reference. Thanks for your comment, joriki.
Let $n=2m+1$, $m\geq 0$. I will show the sign of $f_p(2m+3)-f_p(2m+1)$ depends on whether $p>\frac{1}{2}$ or $p<\frac{1}{2}$.
Let $X_{n,p}\sim Binomial(n,p)$. The associated cdf is given by $$F_p(n,k)=P[X_{n,p}\leq k]=\sum_{i=0}^{k}{n\choose i}p^i(1-p)^{n-i}.$$
Before begin, note that $f_p(2m+1)=F_p(2m+1,m)$. From the cdf, we have \begin{align}F_p(2m+3,m+1)=&P[X_{2m+3,p}\leq m+1]\\=&P[X_{2m+3,p}\leq m+1|X_{2m+1,p}<m]P[X_{2m+1,p}<m]\\&+P[X_{2m+3,p}\leq m+1|X_{2m+1,p}=m]P[X_{2m+1,p}=m]\\&+P[X_{2m+3,p}\leq m+1|X_{2m+1,p}=m+1]P[X_{2m+1,p}=m+1]\\
=&1\cdot F_{p}(2m+1,m-1)+(2p(1-p)+(1-p)^2){2m+1\choose m}p^m(1-p)^{m+1}\\
&+(1-p)^2{2m+1\choose m+1}p^{m+1}(1-p)^m.
\end{align}
Here, note that $F_p(2m+1,m-1)=F_p(2m+1,m)-{2m+1\choose m}p^m(1-p)^{m+1}$. Plugging this in, the expression after the last equality is the same as
$$F_p(2m+1,m)+(1-2p){2m+1\choose m}p^{m+1}(1-p)^{m+1}.$$
Thus, we have $$F_p(2m+3,m+1)=F_p(2m+1,m)+(1-2p){2m+1\choose m}p^{m+1}(1-p)^{m+1}$$
in which the sign of the last term is determined by whether it is $p>\frac{1}{2}$ or $p<\frac{1}{2}.$
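The final identity can be sanity-checked in exact rational arithmetic; a sketch:

```python
from fractions import Fraction
from math import comb

def F(n, k, p):
    """Binomial(n, p) cdf at k, computed exactly for rational p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

for m in range(6):
    for p in [Fraction(1, 3), Fraction(49, 100), Fraction(2, 3)]:
        lhs = F(2 * m + 3, m + 1, p)
        rhs = (F(2 * m + 1, m, p)
               + (1 - 2 * p) * comb(2 * m + 1, m) * p ** (m + 1) * (1 - p) ** (m + 1))
        assert lhs == rhs
```

The sign of the correction term $(1-2p)\binom{2m+1}{m}p^{m+1}(1-p)^{m+1}$ is visibly positive for $p<\tfrac12$ and negative for $p>\tfrac12$, which is the claimed monotonicity along odd $n$.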
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that no 2 orthogonal matrices satisfy this equation $A^2-B^2=AB$ This question came up in my linear algebra finals, and I couldn't prove it. Could anyone help me?
Question: Prove that no $2$ orthogonal matrices satisfy this condition: $A^2-B^2=AB$
Attempt: Assume $A$ and $B$ are orthogonal. Multiplying on the left by $A^T$ and then on the right by $B^T$, we have:
$$
A-A^TB^2=B \\
AB^T-A^TB=I
$$
Taking transpose of both sides:
$$
(AB^T-A^TB)^T=I\\
BA^T-B^TA=I
$$
Again multiplying on the left by $B$ and then on the right by $A$:
$$
B^2A^T-A=B \\
B^2-A^2=BA
$$
So combining, I have $AB=-BA$, but after this I'm stuck. I thought of the fact that $2\times 2$ orthogonal matrices represent a rotation in $\mathbb{R}^2$, and so they commute, but in general rotations do not commute with reflections, and $n\times n$ orthogonal matrices do not commute.
Hints appreciated.
| Consider the traces of both sides of $AB^T-A^TB=I$.
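A quick numerical illustration of the hint, using $2\times2$ rotation matrices (which are orthogonal; the angles are arbitrary choices of mine):

```python
import math

def rotation(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

A, B = rotation(0.7), rotation(-1.9)
P = matmul(A, transpose(B))     # A B^T
Q = matmul(transpose(A), B)     # A^T B
trace = P[0][0] - Q[0][0] + P[1][1] - Q[1][1]
# tr(A B^T - A^T B) vanishes, while tr(I) = 2: the equation is impossible
assert abs(trace) < 1e-12
```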
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove the distributive law for maximum and minimum operation Given $A=\{1,2,3,4\}\subseteq \mathbb{N}$ and define the operation on $A$ as below
\begin{eqnarray}
a\oplus b&=& \max(a,b)\\
a\otimes b&=& \min(a,b)
\end{eqnarray}
for all $a,b\in A$.
Prove $(a\oplus b)\otimes c=(a\otimes b)\oplus(a\otimes c)$ for all $a,b,c\in A$.
I can only prove it with tables, but that takes a long time: using tables I must check $4^3=64$ possible cases.
Now I try to prove with definition of operations as below.
\begin{eqnarray}
(a\oplus b)\otimes c&=&\max(a,b)\otimes c\\
&=&\min(\max(a,b),c)
\end{eqnarray}
Now I get stuck here. I cannot make this form
\begin{eqnarray}
(a\otimes b)\oplus(a\otimes c) &=& \min(a,b)\oplus \min(a,c)\\
&=& \max(\min(a,b),\min(a,c)).
\end{eqnarray}
Can anyone give me a hint on how to prove the distributive law for these operations?
| You have stated the distributive law incorrectly. You want $$(a\oplus b)\otimes c=(a\otimes c)\oplus(b\otimes c)\quad(*)$$
The easiest approach is to look at 6 cases: $a>b>c,a>c>b,b>a>c,b>c>a,c>a>b,c>b>a$.
If you look at $a>b>c$, then $(a\oplus b)\otimes c=a\otimes c=c$. But $(a\otimes b)\oplus(a\otimes c)=b\oplus c=b$.
It is straightforward to verify $(*)$ for each of the six cases; the boundary cases with equalities are handled the same way.
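The $4^3=64$ checks the question dreads are instant by machine; a sketch verifying $(*)$ and exhibiting the failure of the misstated law:

```python
from itertools import product

A = range(1, 5)                      # the set {1, 2, 3, 4}
oplus, otimes = max, min

# the corrected law (*): (a (+) b) (x) c = (a (x) c) (+) (b (x) c)
assert all(otimes(oplus(a, b), c) == oplus(otimes(a, c), otimes(b, c))
           for a, b, c in product(A, repeat=3))

# the version stated in the question fails, e.g. for a > b > c
assert otimes(oplus(3, 2), 1) != oplus(otimes(3, 2), otimes(3, 1))
```

The first assertion is exactly the 64-row truth table collapsed into one line.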
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A computation of $Ext^1(M,N)$ (how to derive the commutative diagram for $Ext^1(M,N)$?). I am trying to understand the proof of Proposition 5.6 in the paper on page 17. How to derive the commutative diagram for $Ext^1(M,N)$? Using the projective resolution:
\begin{align}
\cdots \to \oplus_{v \in V} P_v \overset{D}{\to} \oplus_{u \in U} P_u \to L_I \to 0,
\end{align}
I obtained the part
\begin{align}
Hom(\oplus_{u \in U} P_u,L_J) \overset{D^*}{\to} Hom(\oplus_{v \in V} P_v, L_J).
\end{align}
How to derive the other parts of the commutative diagram? Thank you very much.
| Out of laziness I'll oversimplify the notation:
$$
M \xrightarrow{D} N \rightarrow L_I \rightarrow 0
$$
Now break this sequence in two pieces, noting that $\Omega L_I = imD$,
$$
M \xrightarrow{f} \Omega L_I \rightarrow 0, \\ 0 \rightarrow \Omega L_I \xrightarrow{g} N \rightarrow L_I \rightarrow 0
$$
We apply $Hom(-, L_J)$ to the first:
$$
0 \rightarrow Hom(\Omega L_I, L_J) \xrightarrow{f^\ast} Hom(M,L_J)
$$
This gives the vertical part of the diagram. Applying $Hom(-, L_J)$ to the second sequence and taking the long exact sequence of Ext modules leads us to
$$
0\to Hom(L_I, L_J) \to Hom(N, L_J) \xrightarrow{g^\ast} Hom(\Omega L_I, L_J) \rightarrow\\ \rightarrow Ext^1(L_I, L_J) \to Ext^1(N, L_J) = \{0\}
$$
which gives the horizontal part of the diagram. The last module is trivial since $N$ is projective.
Also note that $D=g\circ f$ hence $D^\ast = f^\ast \circ g^\ast$ which means that the triangle in the diagram commutes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Sequence: $u_n=\sum_{k=n}^{2n}\frac{k}{\sqrt{n^2+k^2}}$
Study the following sequence of numbers:
$$\forall n\in\mathbb{N}, u_n=\sum_{k=n}^{2n}\frac{k}{\sqrt{n^2+k^2}}$$
I tried to calculate $u_{n+1}-u_n$, but I couldn't simplify the expression.
Plotting the sequence suggests that it behaves like an arithmetic progression.
| Using the Euler-Maclaurin formula we can get more detailed asymptotics, e.g.:
$$ u_n = \left( \sqrt {5}-\sqrt {2} \right) n
+ \frac{\sqrt{5}}{5} + \frac{\sqrt{2}}{4}
+\left(\frac{\sqrt{5}}{300}- \frac{\sqrt{2}}{48} \right) n^{-1} +
\left(-\frac{\sqrt{5}}{10000} + \frac{\sqrt{2}}{1280} \right) n^{-3}
+O \left( {n}^{
-5} \right) $$
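The first few terms of the expansion are easy to check against a direct evaluation of $u_n$; a sketch:

```python
import math

def u(n):
    return sum(k / math.sqrt(n * n + k * k) for k in range(n, 2 * n + 1))

s2, s5 = math.sqrt(2), math.sqrt(5)
for n in [100, 1000]:
    # expansion through the 1/n term; the remainder is O(1/n^3)
    asym = (s5 - s2) * n + s5 / 5 + s2 / 4 + (s5 / 300 - s2 / 48) / n
    assert abs(u(n) - asym) < 10 / n ** 3
```

The constant term $\sqrt5/5+\sqrt2/4$ is the trapezoidal correction $(g(1)+g(2))/2$ for $g(t)=t/\sqrt{1+t^2}$, as Euler-Maclaurin predicts.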
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Relation between areas of two non-similar triangles with individual proportional sides Triangle $ABC$ has area $k$ and $D$ is the middle point of $BC$.
We have $AP = 2 \cdot AB$, $AQ = 3 \cdot AD$ and $AR = 4 \cdot AC$. What's the area of triangle $PQR$?
I know that the answer is $k$, but I don't know how to prove it.
Thank you all in advance!
|
Let $\angle PAQ = \alpha$ and $\angle RAQ = \beta$.
$$A_{PQR} = A_{APQ} + A_{AQR}- A_{APR}$$
$$=\frac12 AP\cdot AQ\sin\alpha + \frac12 AR\cdot AQ\sin\beta - \frac12 AP\cdot AR\sin(\alpha+\beta)$$
$$=\frac12 ( 2AB \cdot 3AD\sin\alpha )+ \frac12 ( 3AD \cdot 4AC\sin\beta)
- \frac12 ( 2AB \cdot4AC\sin(\alpha+\beta))$$
$$= 6 A_{ABD}+ 12 A_{ACD} - 8A_{ABC}
=6\cdot\frac k2+12\cdot\frac k2-8k=k$$
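The identity can be confirmed with coordinates via the shoelace formula; a sketch (the particular triangle is an arbitrary choice of mine):

```python
def area(P, Q, R):
    # shoelace / cross-product formula for the area of a triangle
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1])) / 2

A, B, C = (0.0, 0.0), (4.0, 1.0), (1.0, 3.0)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)            # midpoint of BC

scale = lambda X, t: (A[0] + t * (X[0] - A[0]), A[1] + t * (X[1] - A[1]))
P, Q, R = scale(B, 2), scale(D, 3), scale(C, 4)       # AP=2AB, AQ=3AD, AR=4AC

k = area(A, B, C)
assert abs(area(P, Q, R) - k) < 1e-9                  # area PQR equals k
```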
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3454932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What's wrong in my calculation of $\lim\limits_{n \to \infty} \sum\limits_{k=1}^n \arcsin \frac{k}{n^2}$ I have the following limit to find:
$$\lim\limits_{n \to \infty} \sum\limits_{k=1}^n \arcsin \dfrac{k}{n^2}$$
This is what I did:
$$\lim\limits_{n \to \infty} \sum\limits_{k=1}^n \arcsin \dfrac{k}{n^2} = \lim\limits_{n \to \infty} \bigg ( \arcsin \dfrac{1}{n^2} + \arcsin \dfrac{2}{n^2} + ... + \arcsin \dfrac{n}{n^2} \bigg )$$
$$ \hspace{.8cm} = \arcsin 0 + \arcsin 0 + ... + \arcsin 0 $$
$$= 0 + 0 + ... + 0 \hspace{2.9cm}$$
$$=0 \hspace{5.2cm}$$
However, my textbook claims that the actual answer is in fact $\dfrac{1}{2}$. I don't see how I could reach this answer.
| As noted by others, there are infinitely many summands; one cannot simply distribute the limit operator over them.
The following might be overkill, but I think it is somewhat interesting:
We know that
\begin{align*}
\lim_{x\rightarrow 0}\dfrac{\sin^{-1}x}{x}=1,
\end{align*}
given $\epsilon\in(0,1)$, there is an $N$ such that
\begin{align*}
1-\epsilon<\dfrac{\sin^{-1}x}{x}<1+\epsilon
\end{align*}
for all $n\geq N$ and $0<x<1/n$.
Note that
\begin{align*}
\sum_{k=1}^{n}\sin^{-1}\left(\dfrac{k}{n^{2}}\right)&=\sum_{k=1}^{n}\dfrac{\sin^{-1}\left(\dfrac{k}{n^{2}}\right)}{\dfrac{k}{n^{2}}}\cdot\dfrac{k}{n^{2}}\\
&=\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{\sin^{-1}\left(\dfrac{k}{n^{2}}\right)}{\dfrac{k}{n^{2}}}\cdot\dfrac{k}{n},
\end{align*}
plugging into the $\epsilon$-inequality for large $n$, we have
\begin{align*}
(1-\epsilon)\cdot\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{k}{n}<\sum_{k=1}^{n}\sin^{-1}\left(\dfrac{k}{n^{2}}\right)<(1+\epsilon)\cdot\dfrac{1}{n}\sum_{k=1}^{n}\dfrac{k}{n}.
\end{align*}
Taking $n\rightarrow\infty$, the sum $\dfrac{1}{n}\displaystyle\sum_{k=1}^{n}\dfrac{k}{n}$ is simply the Riemann sum of $\displaystyle\int_{0}^{1}xdx=\dfrac{1}{2}$.
The arbitrariness of $\epsilon\in(0,1)$ gives the limit as $\dfrac{1}{2}$.
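A quick numerical look at the limit:

```python
import math

def S(n):
    return sum(math.asin(k / n ** 2) for k in range(1, n + 1))

# S(n) is about 1/2 + 1/(2n), so it approaches 1/2 from above
assert abs(S(10) - 0.5) < 0.06
assert abs(S(2000) - 0.5) < 3e-4
```

This also shows why the naive termwise limit fails: each summand goes to $0$, yet the sum stays near $\tfrac12$.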
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Prove that $n^n>\left(\dfrac{n+1}{2}\right)^{n+1}$ for all positive integer $n>1$. Question: Prove that $n^n>\left(\dfrac{n+1}{2}\right)^{n+1}$ for all positive integer $n>1$.
I could not see what the initial approach should be.
| We want to show the $2$s at the bottom are enough to defeat the $+1$s at the top. The effect of the $2$s is easy. Just write
$$ \left(\dfrac{n+1}{2}\right)^{n+1} = \frac{1}{2^{n+1}}(n+1)^{n+1},$$
so the claim is equivalent to $(n+1)^{n+1} < 2^{n+1}n^n$, which after dividing by $(n+1)n^n$ becomes
$$\left(1+\frac1n\right)^{n} < \frac{2^{n+1}}{n+1}.$$
To bound the left side use the binomial expansion: since $\binom{n}{i}\big/n^i \le 1/i!$ and $1/i!\le 1/2^{i-1}$ for $i\ge 1$,
$$\left(1+\frac1n\right)^{n} = \sum_{i=0}^n \binom{n}{i}\frac{1}{n^i} \le \sum_{i=0}^n \frac{1}{i!} < 1+1+\frac12+\frac14+\cdots = 3.$$
For $n \ge 3$ the right side satisfies $\dfrac{2^{n+1}}{n+1}\ge 3$ (at $n=3$ it equals $4$, and the numerator doubles while the denominator only grows by $1$), so the inequality holds. The remaining case $n=2$ is checked directly: $2^2=4 > \left(\frac32\right)^3 = \frac{27}8$. This completes the proof.
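The inequality itself is cheap to verify over a range of $n$ in exact integer arithmetic (a sanity check, not a proof):

```python
def holds(n):
    # the claim n^n > ((n+1)/2)^(n+1), cleared of denominators
    return 2 ** (n + 1) * n ** n > (n + 1) ** (n + 1)

assert all(holds(n) for n in range(2, 300))
assert not holds(1)        # n = 1 gives equality, hence the hypothesis n > 1
```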
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455262",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the period of the $f(x)=\sin x +\sin3x$?
What is the period of the $f(x)=\sin x +\sin3x$?
$f(x)=\sin x+\sin 3x=2\sin\frac{3x+x}{2}\cos\frac{x-3x}{2}=2\sin2x\cos x=4\sin x\cos^2x\\f(x+T)=4\sin(x+T)\cos^2(x+T)=4\sin x\cos^2 x$
I have no idea how to deduce the period from this.
| In general, if $T$ is the period of a function $f(x)$ then the period of the function $f(ax)$ is $\frac{T}{a}$.
Suppose two periodic functions $f_1(x)$ and $f_2(x)$ have periods $T_1$ and $T_2$. Then the period of the function $g(x)=f_1(x)\pm f_2(x)$ is LCM (least common multiple) of $T_1$ and $T_2$ (although, this certainly isn't true for all periodic functions, as explained inside this answer.)
In your question the periods of $\sin x$ and $\sin 3x$ are calculated as $\frac{2\pi}{1}=2\pi$ and $\frac{2\pi}{3}$ respectively.
So the period of the function $f(x)=\sin x+\sin3x$ is the $\text{LCM}(2\pi,\frac{2\pi}{3})=2\pi$.
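A quick numeric check (my addition) that $2\pi$ is indeed a period while the smaller candidate $\frac{2\pi}{3}$ (the period of $\sin 3x$ alone) is not:

```python
import math

def f(x):
    return math.sin(x) + math.sin(3 * x)

xs = [i * 0.1 for i in range(100)]
# f repeats after 2*pi ...
err_full = max(abs(f(x + 2 * math.pi) - f(x)) for x in xs)
# ... but not after 2*pi/3, the period of sin(3x) alone.
err_small = max(abs(f(x + 2 * math.pi / 3) - f(x)) for x in xs)
```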
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455375",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Why is Euler's Formula for Planar Graph Not Working Here?
I have worked out $r(n) = 2^n$, $e(n) = 1 + 3 \times 2^n$, $v(n) = 2\times(2^n - 1) + 4$
The expressions of $r(n)$, $e(n)$, and $v(n)$ are correct and this can be verified with $n = 0, 1, 2, 3\ldots$
But when I calculate $v(n) - e(n) + r(n)$, it does not equal $2$. What's wrong?
Also, can we derive the relationship between $v(n)$ and $e(n)$ using the sum of the degrees of the vertices?
|
when I calculate $v(n)−e(n)+r(n)$, it does not equal $2$. What's wrong?
See Euler's formula for planar graphs :
if a finite, connected, planar graph is drawn in the plane without any edge intersections, and $v$ is the number of vertices, $e$ is the number of edges and $f$ is the number of faces (regions bounded by edges, including the outer, infinitely large region), then :
$v-e+f=2$.
In order to take into account the outer region, the formula for the number of regions $f(n)$ must be:
$f(n)=r(n)+1=2^n+1$,
where $r(n)$ is the number of rectangular regions.
For $n=0$ above, we have : $e(0)=v(0)=4,r(0)=1, f(0)=2$. Thus, it works.
We can check it reasoning by induction : at each subdivision of a region with a new line we add one region, two new vertices and three new edges.
Thus, assuming by the induction hypothesis that $v(n)-e(n)+f(n)=2$, we have :
$$v(n+1)-e(n+1)+f(n+1)=v(n)+2 - (e(n)+3) + f(n)+1 = v(n)- e(n) + f(n) + 2 - 3 + 1 = v(n)- e(n) + f(n) = 2.$$
In conclusion, if $f(n)=r(n)+1$, from Euler's formula we have :
$v(n)- e(n) + r(n) = v(n)- e(n) + f(n) - 1 = 2-1=1.$
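The closed forms from the question make this easy to verify mechanically; the following sketch just re-checks the bookkeeping above:

```python
def v(n): return 2 * (2**n - 1) + 4   # vertices
def e(n): return 1 + 3 * 2**n         # edges
def r(n): return 2**n                 # rectangular regions (outer face excluded)

# Without the outer face Euler's count comes out 1; with it, 2.
checks_without_outer = [v(n) - e(n) + r(n) for n in range(0, 20)]
checks_with_outer = [v(n) - e(n) + (r(n) + 1) for n in range(0, 20)]
```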
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What are the last two digits of 1^5 + 2^5 + 3^5 + ... +99^5? What are the last two digits of 1^5 + 2^5 + 3^5 + ... +99^5?
My work:
1^5 ends with 1.
2^5 ends with 2.
3^5 ends with 3.
And so on.
Do I simply add the ending digits to get my answer?
| $1^5+99^5=(1+99)(1-99+99^2-99^3+99^4)=100\cdot(\text{a positive integer})$. What can you see?
Also
$2^5+98^5=(100)(2^4-2^3(98)+2^2(98)^2-2(98)^3+98^4)$
My solution is a special case of lab's
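A direct computation confirming both the pairing trick and the final answer (the last two digits are $00$):

```python
# Each pair k^5 + (100-k)^5 is divisible by 100, and so is the middle term 50^5.
pair_ok = all((k**5 + (100 - k)**5) % 100 == 0 for k in range(1, 50))
middle_ok = 50**5 % 100 == 0

total = sum(k**5 for k in range(1, 100))
last_two = total % 100
```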
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Intersection of two primary ideals in $\mathbb{Z}[x]$.
Consider $\mathbb{Z}[x]$, and define $I = (x(x^{2}-2),(x^{2}-2)(x^{2}+2))$, $J = (x^{2}-2)\cap(x^{3},2)$. I want to show that $I = J$.
Notice that $I\subset J$ is clear since the generators of $I$ are clearly in $J$. The other direction is less clear to me. It comes down to showing that $J/I = 0$. So you want to show that for $f\in J$ we have $f + I = 0 + I$. We can write $f = \alpha (x^{2}-2)$ for some $\alpha\in\mathbb{Z}[x]$. Since we also want $f\in (x^{3},2)$ we see that $f = \beta x^{3} + 2\gamma$ for some $\beta,\gamma\in\mathbb{Z}[x]$. So we clearly see that $\alpha(0) = -\gamma(0)$. From here I am stuck.
| To show the inclusion $J \subset I$ , we proceed as follows. Let $f(x) \in J$ such that $f(x) = \alpha (x^2-2)$, and $f(x) = \beta x^3 + 2 \gamma$, for $\alpha, \beta, \gamma \in \mathbb{Z}[x]$. As you suggested, we want to show that $f(x) \equiv 0 + I$. Observe that $\beta x^3 +2 \gamma + I = 2x \beta + 2 \gamma + I = 2(x\beta + \gamma) + I$ because $x^{3} \equiv 2x \pmod{x(x^2-2)}$. But $f(x) + I = 2(x\beta + \gamma) + I = \alpha(x^2-2) + I$ implies that $2 \mid \alpha(x)$. So $f(x) + I = k(2(x^2-2)) + I$, for some $k \in \mathbb{Z}[x]$. But $2(x^2-2) \in I$ because $-x(x(x^2-2)) + (x^2 + 2)(x^2 - 2) = -x^4 + 2x^2 + x^4 - 4 = 2x^2 - 4 \in I$, which implies $f(x) + I = k(2(x^2-2)) + I = 0 + I$. Altogether,
$$
f(x) + I = \alpha (x^2 - 2) + I = \beta x^3 + 2\gamma + I
\implies
$$
$$
f(x)+I = \alpha (x^2-2) + I = 2(x\beta + \gamma) + I \implies
$$
$$
2 \mid \alpha(x), 2(x^2 - 2) \in I \implies
$$
$$
f(x) + I = k\cdot 2(x^2-2) + I = 0 + I \implies f(x) \in I
$$
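The two polynomial identities used above ($x^3 \equiv 2x \pmod{x(x^2-2)}$ and $-x\cdot x(x^2-2)+(x^2+2)(x^2-2)=2x^2-4$) can be spot-checked by evaluating both sides at integer points (enough points would in fact prove the identities for these degrees):

```python
# x^3 - 2x == x * (x^2 - 2), so x^3 is congruent to 2x mod x(x^2 - 2)
first_identity = all(x**3 - 2 * x == x * (x**2 - 2) for x in range(-5, 6))

# -x * (x*(x^2-2)) + (x^2+2)*(x^2-2) == 2x^2 - 4, so 2(x^2 - 2) lies in I
second_identity = all(
    -x * (x * (x**2 - 2)) + (x**2 + 2) * (x**2 - 2) == 2 * x**2 - 4
    for x in range(-5, 6)
)
```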
Please let me know if you have any questions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3455885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Given a space $X$, construct a CW complex $L(X)$ s.t. they have the same fundamental group This is exercise 1.2.15 in Hatcher's Algebraic topology
Given a space $X$ with basepoint $x_0∈X$, we may construct a CW complex $L(X)$ having a single $0$-cell, a $1$-cell $e^1_γ$ for each loop $γ$ in $X$ based at $x_0$, and a $2$-cell $e^2_τ$ for each map $τ$ of a standard triangle $PQR$ into $X$ taking the three vertices $P,Q$ and $R$ of the triangle to $x_0$.
The $2$-cell $e^2_τ$ is attached to the three $1$-cells that are the loops obtained by restricting $τ$ to the three oriented edges $PQ,PR$ and $QR$.
Show that the natural map $L(X)→X$ induces an isomorphism $π_1(L(X))\cong π_1(X,x_0)$.
I met some problem in constructing the CW complex $L(X)$, here are my thoughts:
$1$. $L(X)$ has a single $0$-cell, and for each loop $γ$ in $X$ based at $x_0$, it has a $1$-cell $e^1_γ$.
At this time, $L(X)$ is a wedge sum of circles, each circle $S^1$ represents a loop in $X$ based at $x_0$.
$2$. For map $\tau:\text{triangle } PQR\to X$, $\tau$ maps $P,Q,R$ to $x_0$ and maps $PQ$, $PR$, $QR$ to loops in $X$ based at $x_0$.
Let $\overrightarrow{PQ}$ correspond to loop $a$ in $X$ (also a loop in $L(X))$, $\overrightarrow{PR}$ correspond to loop $b$ , then $\overrightarrow{QR}$ correspond to $a^{-1}b$.
$3$. The $2$-cell $e^2_\tau$ is attached to the three $1$-cells $a,b,a^{-1}b$, so we obtain the relation $a\cdot a^{-1}b \cdot b^{-1}=1$, which is trivial.
There's something wrong here since I don't fully understand the construction of $L(X)$. So how does the 2-cells $e^2_\tau$ attached to $1$-skeleton of $L(X)$?
Update:
I now realized that triangles in $L(X)$ is just triangulation of $2$-cells in $X$,
and it doesn't change homotopy type, so $\pi_1(L(X))\cong \pi_1(X,x_0)$
| $L(X)$ is attempting to "model" $X$ as a CW complex with the same fundamental group. To do this, we will consider all possible loops in $X$, then specify which ones should be considered homotopic.
*
*As you note, we have a wedge sum of circles, one for each loop in $X$. You can think of each summand as "representing" its corresponding loop in $X$. At the moment, this is a poor model of $X$, as homotopic loops in $X$ are not homotopic in $L(X)$. We will fix this in part 2.
*Convince yourself that if I have a map $\tau : PQR \to X$ where the vertices are sent to $x_0$, each edge is a loop which is homotopic to the product of the other two (possibly with inverses, depending on orientation). Also convince yourself that any two homotopic loops in $X$ have a $\tau$ which shows this homotopy. We want loops in $L(X)$ to behave the same as in $X$, so we will attach a "triangle" (ie 2-cell) along the loops in $L(X)$ corresponding to the images of the edges of $PQR$. After attaching this triangle, homotopic loops in $X$ will correspond to homotopic loops in $L(X)$.
Hopefully, I've given some motivation for this construction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What is the Basis of the Kernel and the Image $R^4 \to R^3$ where $ f(x,y,z,w) = \left[\begin{array}{ccc}2x + z -w\\x +w\\x +z-2w\end{array}\right]$
I started with matrix $ A = $$\left[\begin{array}{ccc}2 & 0 & 1 & -1\\1 & 0 & 0 & 1\\1 & 0 & 1 &-2\end{array}\right]$
and solved $A\mathbf{x}=\mathbf{0}$; row reduction gives $\left[\begin{array}{ccc}1 & 0 & 0 & 1 & 0\\0 & 0 & 1 & -3 & 0\\0 & 0 & 0 &0 &0\end{array}\right]$
The basis of the kernel or $\ker(F)$ is $\Biggl\{\left(\begin{array}{c}0\\1\\0\\0\end{array}\right),\left(\begin{array}{c}-1\\0\\3\\1\end{array}\right)\Biggl\} $
With dim = 2
and the basis of the image is $\Biggl\{\left(\begin{array}{c}2\\1\\1\end{array}\right),\left(\begin{array}{c}1\\0\\1\end{array}\right)\Biggl\} $
With Dim = 2
Did I mess up somewhere?
| You are right; indeed we can check that
$$\left[\begin{array}{ccc}2 & 0 & 1 & -1\\1 & 0 & 0 & 1\\1 & 0 & 1 &-2\end{array}\right]\left[\begin{array}{c}0\\1\\0\\0\end{array}\right]=\vec 0$$
$$\left[\begin{array}{ccc}2 & 0 & 1 & -1\\1 & 0 & 0 & 1\\1 & 0 & 1 &-2\end{array}\right]\left[\begin{array}{c}-1\\0\\3\\1\end{array}\right]=\vec 0$$
and by the RREF, the first and third columns of the original matrix are linearly independent.
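A quick plain-Python check (no linear-algebra library) that both proposed kernel vectors are annihilated by $A$:

```python
A = [[2, 0, 1, -1],
     [1, 0, 0,  1],
     [1, 0, 1, -2]]

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

k1 = [0, 1, 0, 0]
k2 = [-1, 0, 3, 1]
out1 = matvec(A, k1)
out2 = matvec(A, k2)
```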
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finite abelian groups as direct products of proper characteristic subgroups Suppose that $A$ is a finite abelian group. To make things more interesting, assume further that $A$ is an abelian $p$-group for some prime $p$.
Is it ever the case that $A$ can be written as $A = C \times D$, where $C$, $D$ are proper characteristic subgroups of $A$? (I am inserting the word "proper" so as to avoid having the trivial $C=1$, $D=A$.)
| No. Both $C$ and $D$ are direct sums of cyclic $p$-groups,
$$
\begin{align*}
C &\cong C_{p^{a_1}}\oplus\cdots\oplus C_{p^{a_r}},\quad a_1\leq\cdots\leq a_r\\
D &\cong C_{p^{b_1}}\oplus\cdots \oplus C_{p^{b_t}},\quad b_1\leq\cdots\leq b_t.
\end{align*}
$$
If $a_i=b_j$ for any $i$ and $j$, then you have an automorphism of $G$ that swaps those two factors. This shows that neither $C$ nor $D$ are characteristic.
Otherwise, say $a_r\lt b_t$. Let the cyclic summands of $C$ be generated by $x_1,\ldots,x_r$, and the cyclic summands of $D$ be generated by $y_1,\ldots,y_t$. Define an automorphism of $G$ by fixing $x_1,\ldots,x_r$, $y_1,\ldots,y_{t-1}$, and mapping $y_t$ to $x_r+y_t$. The image of $D$ is not $D$, so $D$ is not characteristic.
To show $C$ is not characteristic either, fix $x_1,\ldots,x_{r-1}$, fix $y_1,\ldots,y_t$, and map $x_r$ to $x_r+p^{b_t-a_r}y_t$.
Of course, if you go to arbitrary finite abelian groups, you can decompose them into $p$-parts, each of which is characteristic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove by mathematical induction that $(3n+1)7^n -1$ is divisible by $9$ for integral $n>0$ $7^n(3n+1)-1=9m$
$S_k = 7^k(3k+1)-1=9P$
$\Rightarrow 7^k(3k+1) = 9P+1$
$S_{k+1} = 7\cdot7^k(3(k+1)+1)-1$
$= 7\cdot7^k(3k+1+3)-1$
$= 7\cdot7^k(3k+1) +21\cdot7^k -1$
$= 7(9P+1)+21\cdot7^k -1$
$= 63P+7+21\cdot7^k -1$
$= 63P+6+21\cdot7^k$
$= 9(7P +2/3+21\cdot7^k/9)$
therefore it is divisible by $9$
So I believe I have done this right, but I've ended up with non-integers in the answer, which I'm pretty sure isn't right.
Where have I gone wrong?
Thanks
| You're almost finished.
You assume that
$S_k = 7^k(3k+1)-1$ is divisible by 9, and therefore divisible by 3.
$S_k = 7^k(3k)+ 7^k-1$ is divisible by 3.
Therefore $7^k-1$ is divisible by 3.
let $3x = 7^k-1$.
$7^k=3x+1$
Then in your final steps:
$=63P+6+21\cdot7^k$
$=63P+6+21(3x+1)$
$=63P+63x+27$
$=9(7P+7x+3)$
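Both the main claim and the auxiliary fact $3 \mid 7^k-1$ used above are easy to spot-check:

```python
# (3n+1)*7^n - 1 is divisible by 9 for every positive n ...
main_claim = all(((3 * n + 1) * 7**n - 1) % 9 == 0 for n in range(1, 201))
# ... and 7^k - 1 is divisible by 3, the lemma used in the final step.
lemma = all((7**k - 1) % 3 == 0 for k in range(1, 201))
```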
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Suppose $ker f = ker g$, show that $\exists \alpha \in \mathbb{C}$ such that $f(p)=\alpha g(p)$, $\forall p \in \mathcal{P}_7 (\mathbb{C})$ Suppose $f$, $g$: $\mathcal{P}_7 (\mathbb{C})\to \mathbb{C}$ and ker $f$ = ker $g$, show that $\exists \alpha \in \mathbb{C}$ such that $f(p)=\alpha g(p)$, $\forall p \in \mathcal{P}_7 (\mathbb{C})$.
The hint is: first to show that, if $f(p) \ne 0$, then $\mathcal{P}_7 (\mathbb{C})$ = ker$f$ $\oplus$ span($p$).
I don't see how the hint fits in this problem.
| If $\mathcal{P}_7(\mathbb{C}) = \ker f \oplus \operatorname{span} \{p\}$ then every $q \in \mathcal{P}_7(\mathbb{C})$ can be uniquely written as $q = r + \lambda p$ where $r \in \ker f$ and $\lambda \in \mathbb{C}$.
Since $r \in \ker f = \ker g$ we have $f(r) = g(r) = 0$ so
$$f(q) = f(r+\lambda p) = f(r) + \lambda f(p) = \lambda f(p)$$
$$g(q) = g(r+\lambda p) = g(r) + \lambda g(p) = \lambda g(p)$$
Therefore since $f(p) \ne 0$ we have
$$g(q) = \frac{g(p)}{f(p)}f(q)$$
If we set $\alpha = \frac{g(p)}{f(p)} \in \mathbb{C}$ we get $g = \alpha f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A closed smooth manifold cannot admit free involutions if it does not bound. I have seen the following statement: If a closed smooth manifold does not bound, then it cannot admit fixed point free involutions. Here a manifold $M$ bound means there exists a compact manifold $W^{n+1}$ such that $\partial W=M$. I am considering unoriented cobordism.
By using Euler characteristic and fundamental groups I can determine when there is free involution for some surfaces but not getting the idea.
I would like to see proof of the above statement. I am unable to prove it.
Any help will be very helpful.
| Let $M$ be a closed $n$-dimensional manifold and $\tau: M\to M$ a fixed-point free involution. My argument works in any of the standard categories: topological, PL or smooth. Since you are asking about differentiable manifolds, I will work in the smooth category.
Define the quotient manifold $N=M/\tau$. Let $W\to N$ denote the interval bundle associated with the covering map $M\to N$. One way to define it is as the mapping cylinder of the projection $M\to N$. Alternatively, one can define it as follows: Consider the product manifold $E=M\times [-1,1]$. The group ${\mathbb Z}_2$ acts on $M\times [-1,1]$: The action on $M$ is generated by the involution $\tau$, the action on the interval $[-1,1]$ is generated by the involution $t\mapsto -t$. This action on $E$ is free and, hence, we get the (compact) quotient-manifold with boundary $W=E/{\mathbb Z}_2$. The manifold $M$ projects diffeomorphically to the boundary of $W$, $M\cong \partial W$ and, hence, $M$ bounds.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3456909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\frac f g$ is symmetric then what conclusion can we make about $f$ and $g$? Let $K$ be a field (or a commutative ring with identity). Consider $f,g \in K[X_1,X_2, \cdots , X_n]$ with $g \neq 0.$ Suppose that $\frac f g \in K \left (X_1,X_2, \cdots , X_n \right)$ is symmetric i.e. $\frac f g \in \text {Fix}_{S_n} K\left (X_1,X_2, \cdots, X_n \right )$
where $S_n$ denotes the symmetric group on $n$-symbols and $$\text {Fix}_{S_n} K\left (X_1,X_2, \cdots, X_n \right ) = \left \{h \in K \left (X_1,X_2, \cdots , X_n \right )\ \big |\ \sigma (h) = h\ \text {for all}\ \sigma \in S_n \right \}.$$
What conclusion can we make about $f$ and $g$ from here? Are they necessarily symmetric? If so why?
Any help regarding this will be highly appreciated. Thank you very much.
| I have found the answer. It's quite easy. Take for instance $f=X_1^2X_2 \in K[X_1,X_2]$ and $g=X_1 \in K[X_1,X_2].$ Neither $f$ nor $g$ is symmetric. But $\frac f g = X_1X_2 \in K [X_1,X_2] \subseteq K \left (X_1,X_2 \right )$ is indeed symmetric.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a good upper bound on the number of digits of a factorial $n!$ when I only know the number of digits of $n$? I was coding a function for calculating the factorial of a big number in C. Since I'm using a structure where I don't know directly the value of the number, I need to find the number of digits of the result (or a decent upper bound) knowing only the number of digits of my original number. Any help is appreciated, thanks.
| Given a natural number $n$, the number of decimal digits is equal to $d(n)=\lfloor \log_{10}(n)\rfloor +1$. If you only know the value of $d(n)$, then $n$ could be as large as the number consisting of $d(n)$ nines $999...9$, or $10^{d(n)}-1$. This means that the best upper bound we can get on the number of digits of $n!$ given only the number of digits of $n$ is equal to
$$d(n!)\le d\big((10^{d(n)}-1)!\big)$$
and this is a tight bound. Let’s see if we can simplify this a bit to get an upper bound that is easier to calculate but a little bit less tight. First of all, note that $d(n) < \log_{10}(n)+1$, so we have
$$d(n!)\le \log_{10}\big((10^{d(n)}-1)!\big)+1$$
Now, a companion to Stirling’s Formula tells us that $N!$ is less than or equal to $N^{N+1/2}e^{1-N}$ for all natural numbers $N$. Thus, we have from the above inequality that
$$d(n!)\le \frac{10^{d(n)}-1/2}{\ln(10)}\ln(10^{d(n)}-1)+\frac{2-10^{d(n)}}{\ln(10)}+1$$
This is decent, but it’s still a bit ugly. Let’s cut out some unnecessary terms (and add $1$, so that the inequality is preserved even for one-digit $n$) to get the following nicer (but less tight) upper bound:
$$d(n!)\le d(n)\cdot 10^{d(n)}-\frac{10^{d(n)}}{\ln(10)}+1$$
This formula holds for all $n\ge 1$; without the $+1$ it would fail at $n=9$, since $9!=362880$ has $6$ digits while $10-10/\ln(10)\approx 5.66$.
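Here is a numeric check (my addition) of the Stirling-based bound displayed above, comparing it against the exact digit count of $n!$:

```python
import math

def d(m):
    """Number of decimal digits of a positive integer m."""
    return len(str(m))

def stirling_bound(n):
    """The bound (10^d - 1/2) ln(10^d - 1)/ln 10 + (2 - 10^d)/ln 10 + 1,
    written with N = 10^d - 1, the largest integer with d = d(n) digits."""
    N = 10**d(n) - 1
    return (N + 0.5) * math.log(N) / math.log(10) + (1 - N) / math.log(10) + 1

ok = all(d(math.factorial(n)) <= stirling_bound(n) for n in range(1, 500))
```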
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Graph Theory Proof Problem (Bipartite graphs) I'm struggling to prove something the Professor gave us. I asked several friends and no one knows what to do. Basically it is this:
Prove that for every simple bipartite graph with $n \ge 1$ vertices: $$\delta(G)+\Delta(G)\le n.$$
Professor's Tip: Split you answer in two. First prove the follow:
If $G$ is simple and bipartite with at least $3$ vertices, $G$ has a vertex $v \in V(G)$ that satisfies at least one of these properties:
• $\Delta(G−v)=\Delta(G)$
• $\delta(G−v)= \delta(G)$
Now, prove by induction on $n$ using these properties.
Guys, I don't know where to start and how to use these information! I checked several books ([West], [Chartrand-Zhang] and [Diestel]) and none of them proved to be useful.
Sorry if my English was bad. Thanks!
| I don't know how to prove the tip, but the theorem is trivial. Suppose the parts are $V_1$ and $V_2$, that $|V_1|=n_1$ and $|V_2|=n_2$, where $n_1+n_2=n$. We may assume that $n_1\geq n_2$.
Every vertex in $V_1$ has degree at most $n_2$, and every vertex in $V_2$ has degree at most $n_1.$ Thus $\Delta(G)\leq n_1$, $\delta(G)\leq n_2,$ and the theorem follows.
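An exhaustive check of the theorem over every simple bipartite graph with parts of sizes $2$ and $3$ (all $2^6$ edge subsets of $K_{2,3}$):

```python
from itertools import combinations, chain

left = [0, 1]
right = [2, 3, 4]
all_edges = [(u, v) for u in left for v in right]
n = len(left) + len(right)

def degrees(edges):
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

# Every subset of edges of K_{2,3} is a simple bipartite graph on n = 5 vertices.
subsets = chain.from_iterable(
    combinations(all_edges, k) for k in range(len(all_edges) + 1)
)
all_ok = all(min(degrees(es)) + max(degrees(es)) <= n for es in subsets)
```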
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is an example of XOR? I have a doubt regarding the below lines in "deep learning" book.
I don't have a very good math background, I grasp most of the concepts with examples. First they describe this:
I understand it as, e.g., $x_1=$ house_size, $x_2=$ year_built, and $f(x,w)$ would be the house price.
But I can't think of an example for the below:
I see in wikipedia that XOR is exclusive 'or', so I understand that it means strictly either 'a' or 'b'. But what is an example of this?
| I don't like going to the movies alone. This week, there are 2 tickets for a movie I'd like to see, so I ask 2 friends A and B if they'd like to see it.
If neither A nor B wants to see it, then I won't go.
If A wants to go but B doesn't, or vice versa, then I will go with whoever wants to join me.
If both A and B want to go, I'll let them buy the tickets and stay home. I won't go.
Hence the "Do I go to the movies" function is A xor B.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
What is the minimum requirement on $f$ such that $\lim_{s\rightarrow 0}\int_0^{\infty}\exp(-st)f(t)dt = \int_0^{\infty}f(t)dt$? What is the minimum requirement on $f$ such that $\lim_{s\rightarrow 0}\int_0^{\infty}\exp(-st)f(t)dt = \int_0^{\infty}f(t)dt$?
I think if I take $f$ to be measurable and $f\in L^1(0, \infty)$, then by dominated convergence theorem we get that. Or is there any weaker condition? or is reasoning correct?
| If $$F(0)=\lim_{T \to \infty} \int_0^T f(t)dt$$ converges then $$ F(s) = \lim_{T \to \infty} \int_0^T f(t)e^{-st}dt=\int_0^\infty (\int_0^T f(t)dt) s e^{-sT}dT$$ converges and is analytic for $\Re(s) > 0$ and
$$\frac{F(s)}{s} = \int_0^\infty (F(0)+o(1)) e^{-sT}dT= \frac{F(0)}{s}+o(\frac{1}{\Re(s)})$$
Thus for $r\in [0,\pi/2)$
$$\lim_{s \to 0,\ \arg(s) \in (-r,r)} F(s) =F(0)$$
This is the Abelian theorem for Laplace transforms.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$2$-connected Eulerian graph that is not Hamiltonian An exercise in Chartand and Zhang asks to find a $2$-connected graph (that is, connected with order at least $3$ and no cut-vertices) that is Eulerian but not Hamiltonian (or prove none exists).
I was wondering whether the graph $K_{2, 4}$ works. I think it does. I have drawn a picture, and I think it is Eulerian because of the theorem that states a connected graph is Eulerian iff all vertices have even degree (here, all vertices have degree $2$ or $4$). But I don't know how to justify $K_{2,4}$ not having a Hamiltonian cycle.
Can someone please help me?
| Yes, that is a perfectly fine example. It is obvious that a cycle in a bipartite graph must have the vertices alternate between the two parts. Therefore, an unbalanced bipartite graph can never be Hamiltonian.
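Both properties of $K_{2,4}$ can be confirmed by brute force: every vertex has even degree, and no ordering of the six vertices yields a Hamiltonian cycle:

```python
from itertools import permutations

left = [0, 1]
right = [2, 3, 4, 5]

def is_edge(u, v):
    return (u in left) != (v in left)  # edges join the two parts

degrees = [sum(is_edge(u, v) for v in range(6) if v != u) for u in range(6)]

# Fix vertex 0 as the start; a Hamiltonian cycle visits all 6 vertices once.
hamiltonian = any(
    all(is_edge(cycle[i], cycle[(i + 1) % 6]) for i in range(6))
    for perm in permutations(range(1, 6))
    for cycle in [(0,) + perm]
)
```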
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3457960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Induced image of the fundamental group of a covering space I have been reading Hatcher´s Algebraic Topology, and he wants to prove that if we have a covering space $(E,p)$, with $p(e)=x_0$ then $p_*(\pi_1(E,e)$) consists of the homotopy classes of loops in $X$ starting at $x_0$ such that their lifts are loops in $E$ starting at $e$.To do this in the proof he says that a loop representing an element of the image $p_*$ is homotopic to a loop having such a lift , and intuitively it seems right but i cant seem to see why this is true theoretically , so any help is apreciated, Thanks.
| This is basically just by definition.
Let $\gamma$ be a loop on $x_0$ that represents $p_*([\vartheta])\,\in\pi_1(X,x_0) $ with $[\vartheta] \in\pi_1(E,e)$.
This means $[\gamma] =p_*([\vartheta]) =[p\circ\vartheta]$, that is, $\gamma$ is homotopic to $p\circ\vartheta$, which lifted to $e$ obviously gives $\vartheta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculate areas of $0 \leq x \leq \sqrt{y}, 0 \leq y \leq \sqrt{x}, x+y \leq 3 / 4$ I have to calculate the area of this set.
Could someone explain how I should interpret the following:
\begin{equation}
\left\{(x, y) \in \mathbb{R}^{2}: 0 \leq x \leq \sqrt{y}, 0 \leq y \leq \sqrt{x}, x+y \leq 3 / 4\right\}
\end{equation}
Thank you in advance.
| First I would calculate the intersection points of the curves
$$x=\sqrt{y},\qquad y=\sqrt{x},\qquad y=\frac{3}{4}-x.$$
The line meets $y=x$ at $x=\frac{3}{8}$ and meets $x=\sqrt{y}$ (i.e. $y=x^{2}$) at $x=\frac{1}{2}$. By symmetry of the region about $y=x$, I get
$$\frac{A}{2}=\int_{0}^{3/8}(x-x^{2})\,dx+\int_{3/8}^{1/2}\left(\frac{3}{4}-x-x^{2}\right)dx$$
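As a sanity check independent of how the integrals are set up (my addition), counting grid midpoints that satisfy the three inequalities estimates the area of the set directly; it should come out near $\frac{13}{96}\approx 0.1354$:

```python
# Estimate the area of {(x, y) : x^2 <= y, y^2 <= x, x + y <= 3/4}
# (note y <= sqrt(x) is equivalent to y^2 <= x for nonnegative y).
N = 800
count = 0
for i in range(N):
    x = (i + 0.5) / N
    for j in range(N):
        y = (j + 0.5) / N
        if x * x <= y and y * y <= x and x + y <= 0.75:
            count += 1
area_estimate = count / N**2
```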
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Show a line in the form $ax+by=c$? I'm having trouble with a question which states:
Point $M$ has coordinates $(3,5)$. Points $A$ and $B$ lie on the coordinate axes and have coordinates $(0,p)$ and $(q,0)$ so that $AMB$ is a right angle. Show that $5p+3q=34$.
Edit - solved:
I used the gradient of $AM$, $\frac{p-5}{-3}$, times the gradient of $BM$, $\frac{-5}{q-3}$, set equal to $-1$,
and multiplied the fractions together to give me $\frac{-5p+25}{-3q+9}$ = -1.
Then I cross-multiplied and got $-5p+25=3q-9$ and then rearranged the equation from there.
| We can proceed as follows
*
*the slope A-M is $$\frac{y_A-y_M}{x_A-x_M}=\frac{p-5}{-3}$$
*the slope B-M is $$\frac{y_B-y_M}{x_B-x_M}=\frac{-5}{q-3}$$
and we need
$$\frac{p-5}{-3}\cdot \frac{-5}{q-3}=-1$$
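A numeric check (my addition): for any $p$, taking $q=\frac{34-5p}{3}$ makes $\vec{MA}\cdot\vec{MB}=0$, i.e. the angle at $M$ is right:

```python
M = (3, 5)
checks = []
for p in [5.0, 2.0, 8.0, -1.0]:
    q = (34 - 5 * p) / 3       # q chosen so that 5p + 3q = 34
    A = (0, p)
    B = (q, 0)
    MA = (A[0] - M[0], A[1] - M[1])
    MB = (B[0] - M[0], B[1] - M[1])
    checks.append(MA[0] * MB[0] + MA[1] * MB[1])  # dot product
```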
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $W$ be a vector field of constant length and let $v$ be a vector. Show that the covariant derivative, $\nabla_vW$ and $W$ are orthogonal I know that the covariant derivative of $W$
with respect to $v$ is the tangent vector
$\nabla_vW=W(p+tv)′(0)$
at the point $p$
We want to show that the dot product of $W$ and
$\nabla_vW$ is zero which would imply orthogonality
So if $||W||=L$, $L$ is constant, would it be true that our vector $v=(a,b,c)$, is a vector of just coefficients without any variable $x,y,z$?
So any $v$ in the tangent space is constant?
and...
I want to write a clear proof of the above claim, but am not sure where to go from here
| You need Koszul's formula. It tells you that:
$\langle\nabla_XY,Z\rangle = \frac{1}{2}(X\langle Y,Z\rangle+Y\langle X,Z\rangle-Z\langle X,Y\rangle+\langle[X,Y],Z\rangle-\langle[X,Z],Y\rangle-\langle[Y,Z],X\rangle)$
Then we substitute $X=V$, $Y=Z=W$, and we use the fact that $ V\langle W,W\rangle=0$ since $W$ has constant norm, so the derivative of $\langle W,W\rangle$ along any vector field (in this case $V$) is zero. Explicitly, we have the following:
$\langle\nabla_V W,W\rangle = \frac{1}{2}(V\langle W,W\rangle+W\langle V,W\rangle-W\langle V,W\rangle+\langle[V,W],W\rangle-\langle[V,W],W\rangle-\langle[W,W],V\rangle)=0$
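As a concrete Euclidean illustration (my own example, not from the question): $W(x,y,z)=(\cos x,\sin x,0)$ has constant length $1$, and a centered finite difference of $t\mapsto W(p+tv)$ comes out numerically orthogonal to $W(p)$:

```python
import math

def W(p):
    x, _, _ = p
    return (math.cos(x), math.sin(x), 0.0)  # unit-length vector field

def cov_deriv(p, v, h=1e-6):
    """Centered-difference approximation of d/dt W(p + t v) at t = 0."""
    plus = W(tuple(pi + h * vi for pi, vi in zip(p, v)))
    minus = W(tuple(pi - h * vi for pi, vi in zip(p, v)))
    return tuple((a - b) / (2 * h) for a, b in zip(plus, minus))

p = (0.7, -1.2, 2.0)
v = (1.3, 0.4, -0.8)
dW = cov_deriv(p, v)
dot = sum(a * b for a, b in zip(dW, W(p)))  # should vanish
```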
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the equation of the line $ r $ Find the equation of the line $ r $ that goes through $ (1, -2,3) $, concurrent with the line
$$\begin{cases}
x=2+3t \\
y=1+2t \\
z= -1 \\
\end{cases}$$
and has direction vector orthogonal to $(1,-3,1)$.
Solution: line r is contained in the plane $(x-1)-3(y+2)+(z-3)=0$
$x-3y+z=10$
Next we find the intersection of the plane and the concurrent line
$2+3t-3(1+2t)-1=10$
$-3t-2=10$
$t=-4$, so the point is $(-10,-7,-1)$.
So line r goes through the points (1,-2,3) and (-10,-7,-1)
$\begin{cases}
x=1-11t \\
y=-2-5t \\
z=3-4t \\
\end{cases}$
How can this be solved without using plane concepts? (Any representation is fine: parametric, symmetric, vector, ....)
| Use the information that you’ve been given directly. The two lines are concurrent, so a direction vector of the line that you’re trying to find is $(2+3t,1+2t,-1)-(1,-2,3)$ for some value of $t$. This vector must be orthogonal to $(1,-3,1)$, so set their dot product equal to zero and solve for $t$.
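Carrying out this hint numerically: the dot-product equation gives $t=-4$, recovering the same point $(-10,-7,-1)$ that the plane method produces:

```python
# Direction candidate: (2+3t, 1+2t, -1) - (1, -2, 3) = (1+3t, 3+2t, -4).
# Dot with (1, -3, 1): (1+3t) - 3*(3+2t) - 4 = -3t - 12 = 0  =>  t = -4.
t = -4
direction = (1 + 3 * t, 3 + 2 * t, -4)
dot = direction[0] * 1 + direction[1] * (-3) + direction[2] * 1
point_on_line = (2 + 3 * t, 1 + 2 * t, -1)
```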
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
How to apply cauchy's residue theorem to rational function? I have the following function:
$$f(z)=\frac{(z-1)^2}{z(z+1)^3}$$
And I need to find its residue.
The formula that I know for this is the following:
The residue $a_{-1}$ at a pole $z_0$ of order $n$ is:
$$a_{-1} =\frac{1}{(n-1)!} \lim_{z \rightarrow z_0} \frac{d^{n-1}}{dz^{n-1}}\left[(z-z_0)^{n}f(z)\right]$$
Now, when I apply this to the function $f(z)$ above, I'm getting the following:
$$a_{-1} =\frac{1}{(3-1)!} \lim_{z \rightarrow -1} \frac{d^{2}}{dz^{2}}\left[(z+1)^{3}f(z)\right]$$
$$a_{-1} =\frac{1}{2} \lim_{z \rightarrow -1} 2z - 2 = -2$$
However, I'm told that the answer is $-4$. Where did I go wrong in my solution?
Thanks.
| I get $-1$ for the residue. $$(z+1)^3f(z)= \frac{(z-1)^2}{z}=z-2+\frac1z$$ The second derivative is $2z^{-3}$ and $\lim_{z\to-1}\frac12\cdot2z^{-3}=-1$.
Alternatively, we want the coefficient of the $\frac{1}{z+1}$ term in the Laurent series for $f(z)$. We have $$\frac{(z-1)^2}{z} = z-2+\frac1z = (z+1)-3-\frac{1}{1-(z+1)}$$ Dividing by $(z+1)^3$, we see that we want the coefficient of $(z+1)^2$ in the Taylor series for $-\frac{1}{1-(z+1)}$, which is $-1$.
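The value $-1$ can also be confirmed numerically (my addition) by approximating $\frac{1}{2\pi i}\oint f(z)\,dz$ on a small circle around $z=-1$ (radius $0.3$, small enough to exclude the other pole at $z=0$):

```python
import cmath
import math

def f(z):
    return (z - 1)**2 / (z * (z + 1)**3)

# Midpoint rule on the circle z = -1 + r*exp(i*theta).
N = 20000
r = 0.3
total = 0.0 + 0.0j
for k in range(N):
    theta = 2 * math.pi * (k + 0.5) / N
    w = cmath.exp(1j * theta)
    z = -1 + r * w
    dz = 1j * r * w * (2 * math.pi / N)
    total += f(z) * dz
residue = total / (2j * math.pi)
```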
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
An example of Sturm-Liouville eigenvalue problem I'm not sure if this has been asked before but I couldn't find it.
I want to solve the Sturm-Liouville eigenvalue problem
$$ u''+\lambda u=0,\ \ u'(0)=u(1)=0.$$
Let the SL differential operator be $Lu=u''.$ I want to find the eigenvalues/functions of $L$ by solving the boundary value ODE. (Hint: The solution of the ODE is
$$ u(x)=\lambda\int_0^1 g(x,s)u(s)ds$$
where $g(x,s)=
\begin{cases}
s(1-x),\ 0\leq s\leq x\leq 1\\
x(1-s),\ 0\leq x\leq s\leq 1
\end{cases}.$)
Here's my solution.
To solve the ODE, let $u''=\frac{\partial u' }{\partial x}$. Then
$$ \int \partial u' = -\lambda \int u(t) \partial t$$
$$ u'(x) = -\lambda \int_a^x u(t) \partial t + c_1$$
Now, let $u'(x) = \frac{\partial u(x) }{\partial x}$. Then,
$$\int \partial u(x) = \int^x_b \big( -\lambda \int^x_a u(t) \partial t + c_1\big)\ dx $$
$$ u(x) =-\lambda \int^x_b \big( \int^x_a u(t) \partial t + c_1\big)\ dx +c_2$$
Can someone help me with this?
| A simpler approach.
The general solution for the DE $u''+\lambda u = 0$ is
$$
u = c_1\sin(\sqrt \lambda t)+c_2\cos(\sqrt\lambda t)
$$
the boundary conditions are
$$
\cases{u'(0) = \sqrt\lambda c_1\cos(\sqrt\lambda 0)-\sqrt\lambda c_2\sin(\sqrt\lambda 0) = 0\\
u(1) = c_1\sin(\sqrt \lambda)+c_2\cos(\sqrt\lambda) = 0}
$$
or
$$
\left(\begin{array}{cc}1 & 0\\ \sin(\sqrt\lambda) & \cos(\sqrt\lambda)\end{array}\right)\left(\begin{array}{c}c_1\\ c_2\end{array}\right) = \left(\begin{array}{c}0\\ 0\end{array}\right)
$$
now to have non trivial solutions we need $\cos(\sqrt\lambda) = 0$ or
$$
\sqrt\lambda = (2k+1)\frac{\pi}{2}
$$
then the eigenvalues are
$$
\lambda_k = \left((2k+1)\frac{\pi}{2}\right)^2
$$
and the eigenfunctions are
$$
u_k = \cos\left((2k+1)\frac{\pi}{2}t\right)
$$
Those eigenfunctions can be used to solve non-homogeneous problems with the same boundary conditions.
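A finite-difference spot-check (my addition) that each $u_k$ above satisfies $u''+\lambda_k u=0$, $u'(0)=0$ and $u(1)=0$:

```python
import math

def check(k, h=1e-5):
    lam = ((2 * k + 1) * math.pi / 2)**2
    u = lambda t: math.cos((2 * k + 1) * math.pi / 2 * t)
    # Residual of the ODE at an interior point, via a centered 2nd difference.
    t = 0.37
    u2 = (u(t + h) - 2 * u(t) + u(t - h)) / h**2
    ode_res = abs(u2 + lam * u(t))
    bc1 = abs((u(h) - u(-h)) / (2 * h))  # numerical u'(0)
    bc2 = abs(u(1.0))                    # u(1)
    return ode_res, bc1, bc2

results = [check(k) for k in range(3)]
```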
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3458926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Uniform convergence of a Cauchy sequence in a compact implies continuous limit. I need to show that if I have a Cauchy sequence of functions $f_n$,
which I know converges uniformly to $f$ on a compact set $[-r,r]$, then the limit function $f$ is continuous.
The thing is, for my definition of Cauchy sequence, it follows that a sequence is Uniformly Convergent iff it is a Cauchy sequence.
So maybe the question would not change if I wanted to show every uniformly convergent sequence in a compact converges to a continuous function.
But anyway I don't see how to proceed because, for me, it would make a lot more sense if I knew the functions $f_n$ to be continuous. In that case I could argue that they are uniformly continuous and I'd know at least how to start.
| If $f_n$ are not continuous then it is not true.
Take $f_n=1_{[-\frac{1}{2},\frac{1}{2}]}+\frac{1}{n}$ on $[-1,1]$
where $1_A$ is the indicator function of the set $A$
You can see in many textbooks and notes that: if the functions $f_n$ are continuous on a compact interval, then their uniform limit is continuous on that interval.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Limit of $\dfrac{t}{\ln(1+t)}$ without L'Hospital I'm trying to prove: $\lim_{t\rightarrow 0} \dfrac{t}{\ln(1+t)}=1$ without use L'Hospital and derivate, because I still do not define derived from the logarithm, is there any way to prove it by the definition epsilon-delta? So far I have only defined $\log$ as the inverse function of $a^{x}$ and particular case $a = e$ although the latter is proposed very weak, it may be improved later using integrals, some help thanks in advance.
| $$\lim\limits_{t\to0}\frac{\ln(1+t)}t=\lim\limits_{t\to0}\frac{\ln(1+t)-\ln1}t$$
That's the definition of the derivative $\frac d{dx}\ln x$ evaluated at $x=1$, which is obviously $\frac11=1$. Therefore,
$$\lim\limits_{t\to0}\frac t{\ln(1+t)}=1$$
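A quick numerical check of this limit (my own addition; `log1p` avoids cancellation for tiny $t$):

```python
import math

# t / ln(1+t) should approach 1 as t -> 0, from either side
for t in [1e-2, -1e-2, 1e-4, -1e-4, 1e-8]:
    ratio = t / math.log1p(t)
    # near 0 the ratio behaves like 1 + t/2 + O(t^2)
    assert abs(ratio - 1) < abs(t)
```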
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
} |
$f_n$ uniformly convergent on open interval, convergent at endpoints Suppose $\{ f_n \}$ converges uniformly on $(-1,1)$. Suppose also that $f_n(-1)$ and $f_n(1)$ converge. Then $\{ f_n \}$ converges uniformly on $\lbrack -1, 1 \rbrack$.
Attempt:
Suppose $\{ f_n \}$ does not converge uniformly to $f(x)$ on the closed interval. Then there is a sequence $x_n$ in $\lbrack -1 , 1 \rbrack$ such that for some $\epsilon_0 > 0$,
$$
|f_n(x_n) - f(x_n)| \geq \epsilon_0.
$$
Note that $x_n \neq \pm 1$ for all $n$. If it did, then
$$
|f_n(\pm1) - f(\pm1)| \geq \epsilon_0,
$$
contradicting convergence. So $x_n \in (-1,1)$ which contradicts that $f_n$ converges uniformly on $(-1,1).$
| I think you're over complicating matters.
Let $\varepsilon>0$. By uniform convergence, there is an $N$ such that $|f_n(x)-f(x)|<\varepsilon$ for every $x\in (-1,1)$ and every $n\ge N$. Also, since $f_n(1)\to f(1)$, there is some $N'$ such that $|f_n(1)-f(1)|<\varepsilon$ for every $n\ge N'$. Similarly, there is some $N''$ such that $|f_n(-1)-f(-1)|<\varepsilon$ for every $n\ge N''$. Now, if $n\ge \max\{N,N',N''\}$, then $|f_n(x)-f(x)|<\varepsilon$ for every $x\in [-1,1]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that all the eigenvalues of $A$ are real.(Gershgorin 's Theorem) The question and its answer are given below:
And this is the Gershgorin theorem:
My questions are:
1- I do not understand why a consequence of Gershgorin's theorem is that if a circle is disjoint then it contains 1 eigenvalue. Could anyone explain this for me, please?
2- Why in a characteristic polynomial with real coefficients the complex roots occur in pairs?
3- Why in the last paragraph the $\lambda $ and $\bar{\lambda}$ inside the disk should be the same? and why these leads to all the eigenvalues are real? could anyone help me in answering those questions, please?
| That's a very 'legitimate question', but you could only post a question and work from that. I'll try.
$1$. That is indeed the theorem; the proof uses a continuity argument.
$2$. Suppose a polynomial $P(x)$ has real coefficients. If $P(z)=0$, then taking conjugates gives $\overline{P(z)}=P(\bar{z})=0$, so both $z$ and $\bar{z}$ are roots.
$3$. Imagine the disk $C$ with center $a$ on the real line: for any complex number $z\in C$ we also have $\bar{z}\in C$, since the disk is symmetric about the real axis. So $\lambda$ and $\bar{\lambda}$ lie in the same disk; if that disk contains exactly one eigenvalue, then $\lambda=\bar{\lambda}$, i.e. $\lambda$ is real.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Prove that if $f$ is entire and there exists a bounded sequence of distinct real numbers $\{a_n\}$ with the given property, then $f$ is constant.
The above problem excerpted from Complex variables with application by Silverman, section 8.2.
The first part of the problem can be shown relatively easily by considering $g(z)=\overline{f(\bar{z})}$.
But I couldn’t show that the second part. I guess it probably can be proved by identity theorem or by comparing the coefficients of $f$.
Any help or hint will be appreciated.
Thanks.
| Since $f(t) \in \mathbb R$ for all real $t$, there is, by Rolle, some $t_n \in (a_{2n+1}, a_{2n})$ such that $f'(t_n)=0.$
Since $t_n \to 0$, we see that $0$ is an accumulation point of the zeroes of $f'$. This gives that $f'(z)=0$ for all $z.$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding $\lim_{n \to \infty} \left( 1 + 2\int_0^1 \frac{x^n}{x+1} dx \right)^n$
I have to evaluate
$$\lim_{n \to \infty} \left( 1 + 2\int_0^1 \frac{x^n}{x+1} dx \right)^n. $$
My progress: Since $x \in (0, 1)$ we can use the series expansion of $\frac{1}{1+x} = 1-x+x^2-x^3+...$
Evaluating that integral in the parantheses (which I shall call $I_n$) gives
$$I_n = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{n+k} = (-1)^n(\log{2} - A_n) $$
where $A_n$ is the nth partial sum of the alternating harmonic series, $A_n = \sum_{k=1}^n \frac{(-1)^{k+1}}{k}.$
Since $\log{2} - A_n$ goes to 0, it's enough to compute
$$2 \lim_{n \to \infty} n(-1)^n(\log{2} - A_n) $$
This is where I got stuck. Any ideas?
| $$|B_n|=|\int_0^1\dfrac{x^n}{1+x} \text{d}x| \leq \int_0^1 x^n\text{d}x=\dfrac{1}{n+1}$$
$$E_n=(1+2B_n)^n=\exp(n\log(1+2B_n))=\exp\big(n\times(2B_n+O(B_n^2))\big)$$
so $$\lim E_n=\exp(\lim 2nB_n)$$
and $$nB_n=n\int_0^1 \dfrac{x^n}{1+x} \text{d}x=n\int_0^1\dfrac{u}{1+u^{1/n}}u^{(1-n)/n}\times \dfrac{1}{n}\text{d}u=\int_0^1\dfrac{u^{1/n}}{1+u^{1/n}}\text{d}u$$
so by dominated convergence theorem $\lim E_n=\exp(1)$
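As a numerical sanity check (my own addition — the midpoint rule and tolerances are ad hoc), $(1+2B_n)^n$ is indeed close to $e$ for moderately large $n$:

```python
import math

def B(n, steps=200_000):
    # midpoint-rule approximation of the integral of x^n/(1+x) over [0, 1]
    h = 1.0 / steps
    return h * sum(((i + 0.5) * h) ** n / (1 + (i + 0.5) * h) for i in range(steps))

assert abs(B(0) - math.log(2)) < 1e-9   # sanity: integral of 1/(1+x) is ln 2
E = (1 + 2 * B(500)) ** 500
assert abs(E - math.e) < 1e-2           # E_n -> e, at rate O(1/n)
```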
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
pole is well defined on the Riemann surface I want to show a pole is well defined on the Riemann surface.
Let $M$ be a Riemann surface, $f:M \rightarrow \hat{C}$ and $f(p)=\infty$. Suppose $(U,\varphi)$
and $(V, \psi)$ are two charts of $M$ with $p \in V \cap U$.
$f \circ \varphi^{-1}$ has a pole of order $t$ at $\varphi(p)$ and
$f \circ \psi^{-1}$ has a pole of order $l$ at $\psi(p)$.
How can we prove $t=l$?
| Both maps differ by composition with the holomorphic transition map $\varphi \circ \psi^{-1}$, and the order of a zero (or pole) does not change under a holomorphic coordinate change.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3459890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can an algebra be morita equivalent to its dg-extension? Say we have a DG algebra $A=\bigoplus_{n\geq 0}A_n$, let $B=A_0$, the 0th degree of $A$. Assume we have that the category of DG-modules over $A$ is equivalent to the category of module over $B$. Does this imply that $A_i=0$ for $i\geq 1$?
| Yes, but rather vacuously so: it implies that $A_i=0$ for all $i$, not just $i\geq 1$! Indeed, the category of modules over a ring has the following property: if $(M_i)$ is an infinite family of nonzero objects, the canonical map $\bigoplus M_i\to\prod M_i$ is not an isomorphism. On the other hand, if $A$ is a nonzero nonnegatively graded dg-algebra, then the category of dg-modules over $A$ does not have this property, since for instance you can take $M_i$ to be $A$ with its grading shifted up by $i$, and then the direct sum $\bigoplus M_i$ has only finitely many nonzero terms in each degree and so coincides with the product. So the category of dg-modules over a nonzero nonnegatively graded dg-algebra can never be equivalent to the category of modules over any ring.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3460035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $ -n \int _0 ^1 x^{n-1} \log(1-x)dx$ equals the $n$-th harmonic number? From (Almost) Impossible Integrals, Sums, and Series section 1.3:
$$H_n = -n\int _0 ^1 x^{n-1} \log(1-x)dx$$
The proof of which was appetizingly difficult. I was unable to answer the follow-up challenge question, and do not have access to the given solution. It asks:
Is it possible to [prove this equality] with high school knowledge only (supposing we know and use the notation of the harmonic numbers)?
I've been fiddling with the integral on the right for an hour, but that $\log$ really throws a wrench in my plans.
| If you call the RHS $I_n$, then
\begin{align}I_n-I_{n-1}&=\int_0^1((n-1)x^{n-2}-nx^{n-1})\log(1-x)\,dx\\
&=\left[(x^{n-1}-x^n)\log(1-x)\right]_{x=0}^1
+\int_0^1\frac{x^{n-1}-x^n}{1-x}\,dx
\end{align}
on integration by parts.
Then
$$\lim_{x\to1}(x^{n-1}-x^n)\log(1-x)=\lim_{x\to1}(1-x)\log(1-x)=
\lim_{y\to0}y\log y=-\lim_{t\to\infty}te^{-t}=0$$
and the integral reduces to $\int_0^1x^{n-1}\,dx=1/n$. Therefore
$I_n-I_{n-1}=1/n$. Similarly $I_1=1$, using integration by parts.
I'd count all of this as A-level maths.
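A numerical check of the identity (my own addition; the step count and tolerance are ad hoc — the logarithmic singularity at $x=1$ is integrable, so a plain midpoint rule suffices):

```python
import math

def H(n):
    # n-th harmonic number
    return sum(1 / k for k in range(1, n + 1))

def rhs(n, steps=200_000):
    # midpoint rule for -n * integral of x^(n-1) * log(1-x) over [0, 1]
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** (n - 1) * math.log(1 - x)
    return -n * h * total

for n in (1, 2, 5, 10):
    assert abs(rhs(n) - H(n)) < 1e-3
```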
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3460270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
Probability that girl who answers the door is the eldest girl? You know that a family has 3 children. You walk up to and knock on the front door of their house. A girl answers the door.
What is the probability that the she is the eldest girl among the children?
Assume that all 3 children are home and equally likely to answer the door.
(I appreciate that with questions like this, the wording can be so crucial. If the question is in any way ambiguous, I would be delighted to have an explanation of why that is the case!)
| Condition on whether the girl who opened the door is the eldest child overall, the middle child, or the youngest child. Each of these occurs with probability $\frac{1}{3}$: the problem statement tells us each child is equally likely to answer, and conditioning on the opener being a girl preserves this symmetry between the three positions.
The probability she is the eldest girl given that she is the eldest child is clearly $1$.
The probability that she is the eldest girl given that she is the middle child is the same as the probability the eldest child is a boy which is $\frac{1}{2}$.
The probability that she is the eldest girl given that she is the youngest child is the same as the probability that the two eldest children are both boys which is $\frac{1}{4}$.
This makes the overall probability:
$$\frac{1}{3}\left(1+\frac{1}{2}+\frac{1}{4}\right) = \frac{7}{12}$$
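The answer can be checked by simulation (my own addition; seed and trial count are arbitrary):

```python
import random

random.seed(0)
opened_by_girl = 0
eldest_girl = 0
for _ in range(200_000):
    kids = [random.random() < 0.5 for _ in range(3)]  # True = girl; index 0 = eldest
    opener = random.randrange(3)                      # each child equally likely
    if kids[opener]:                                  # condition on a girl answering
        opened_by_girl += 1
        if kids.index(True) == opener:                # she is the first (eldest) girl
            eldest_girl += 1
assert abs(eldest_girl / opened_by_girl - 7 / 12) < 0.01
```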
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3460488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Book suggestion for real analysis help. I am looking for a good book to supplement my class book on analysis.
I am struggling with concepts and I would like a book that can help me learn how to learn analysis.
| When I was learning introductory real analysis, the text that I found the most helpful was Stephen Abbott's Understanding Analysis. It's written both very cleanly and concisely, giving it the advantage of being extremely readable, all without missing the formalities of analysis that are the focus at this level. While it's not as thorough as Rudin's Principles of Analysis or Bartle's Elements of Real Analysis, it is a great text for a first or second pass at really understanding single, real variable analysis.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3460612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Complex number: cube root of i I understand that the way to calculate the cube root of $i$ is to use Euler's formula and divide $\frac{\pi}{2}$ by $3$ and find $\frac{\pi}{6}$ on the complex plane; however, my question is why the following solution doesn't work.
So $(-i)^3 = i$, but why can I not cube root both sides and get $-i=(i)^{\frac{1}{3}}$. Is there a rule where equality is not maintained when you root complex numbers or is there something else I am violating and not realizing?
| $-i$ is certainly one of the cube roots of $i$ but it is certainly not the only one.
Every complex number (except $0$) has three cube roots.
A quicker way to find these roots is to use the cube roots of unity, which can be written $1, \omega, \omega^2$ and multiply them successively by the root you've already got.
So in your case, the three roots are $-i, - \omega i =\frac{\sqrt3}{2} + \frac 12 i, - \omega^2 i = -\frac{\sqrt3}{2} + \frac 12 i$
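This is easy to verify numerically (my own addition):

```python
import cmath

# rotate the obvious root -i by the cube roots of unity
omega = cmath.exp(2j * cmath.pi / 3)
roots = [-1j, -1j * omega, -1j * omega**2]
for r in roots:
    assert abs(r**3 - 1j) < 1e-12          # each one really cubes to i

# the principal value of i^(1/3) is exp(i*pi/6), not -i
assert abs(1j ** (1 / 3) - cmath.exp(1j * cmath.pi / 6)) < 1e-12
```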
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3460843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is function $\sin$ without $x$? I'm solving a problem set in ODE:
This is the first time in my life that I see such functions $\sin,\cos,\tan$ without argument $x$.
I would like to ask if $\sin,\cos,\tan$ mean $\sin x,\cos x,\tan x$ or they mean something else.
Thank you so much!
| Usually the convention is that if $f:X\to Y$ is a function, then $f(x)\in Y$ is the value at the point $x\in X$. As you can do arithmetic with function values, you can do arithmetic with functions, applying the operation pointwise. Thus $h=f+g$ means $h(x)=f(x)+g(x)$ for all $x$.
This gives a little complication if you want to write equations like 1) without too much decorations like writing $\forall x\in X$. One possible interpretation would then be that $x$ has a double meaning, it is the variable name and, where appropriate, a function itself, the identity function. So that 1) would be in this convention $xy'+y=2x$, avoiding the function evaluation notation.
In 3-5, the pendulum swings to the opposite extreme from the function-evaluation notation: $\sin$ etc. are used as function names in the same way as $f$ above. This is a very unusual notation, but not formally wrong. It will, however, look very strange in larger terms: $\sin+\cos$ is tolerable, but how should one interpret $\sin\cos$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Apollonian network and 4 coloring Is there a connection between the Apollonian network and the four-color theorem? Are there any attempts to prove this theorem using the Apollonian network?
| As Wikipedia's article on Apollonian networks mentions, "Birkhoff (1930) is an early paper that uses a dual form of Apollonian networks, the planar maps formed by repeatedly placing new regions at the vertices of simpler maps, as a class of examples of planar maps with few colorings." This refers to Birkhoff, On the number of ways of colouring a map.
More recently, Fowler proved in his PhD thesis Unique Coloring of Planar Graphs (1998) that every uniquely 4-colorable graph is an Apollonian network, and the Four Color theorem follows from this. Curiously, he does not cite Birkhoff's 1930 paper and does not use the name "Apollonian network". See Brændeland's Color fixation, color identity
and the Four Color Theorem for a different take on Fowler's theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Derivative of $\tan^{-1}\dfrac{\sqrt{1+x^2}-1}{x}$ with respect to $\tan^{-1}x$
Derivative of $\tan^{-1}\dfrac{\sqrt{1+x^2}-1}{x}$ with respect to $\tan^{-1}x$
Method 1
$$
w=\tan^{-1}\dfrac{\sqrt{1+x^2}-1}{x}\quad\&\quad z=\tan^{-1}x
$$
Put $\theta=\tan^{-1}x\implies\tan\theta=x$
$$
w=\tan^{-1}\frac{|\sec\theta|-1}{x}=\tan^{-1}\frac{|\sec\theta|-1}{\tan\theta}=\tan^{-1}\frac{\dfrac{1}{|\cos\theta|}-1}{\dfrac{\sin\theta}{\cos\theta}}=\tan^{-1}\frac{1-|\cos\theta|}{|\cos\theta|}.\frac{\cos\theta}{\sin\theta}\\
=\tan^{-1}\frac{1\mp\cos\theta}{\pm\cos\theta}.\frac{\cos\theta}{\sin\theta}=\pm\tan^{-1}\frac{1\mp\cos\theta}{\sin\theta}\\
w=\begin{cases}\tan^{-1}\dfrac{1-\cos\theta}{\sin\theta}=\tan^{-1}\tan(\theta/2)\quad;\quad\theta\in(-\pi/2,\pi/2)\\
-\tan^{-1}\dfrac{1+\cos\theta}{\sin\theta}=-\cot^{-1}\cot(\theta/2)\quad\&\quad\text{elsewhere}
\end{cases}\\
=\begin{cases}n\pi+\dfrac{\tan^{-1}x}{2}\quad;\quad\theta\in(-\pi/2,\pi/2)\\
-n\pi-\dfrac{\tan^{-1}x}{2}\quad\&\quad\text{elsewhere}
\end{cases}\\
\frac{dw}{dx}=\dfrac{\pm1}{2(1+x^2)}\\
\frac{dz}{dx}=\frac{1}{1+x^2}\implies\boxed{\frac{dw}{dz}=\pm\frac{1}{2}}
$$
Method 2
$$
\frac{dw}{dx}=\frac{1}{1+\Big(\frac{\sqrt{1+x^2}-1}{x}\Big)^2}.\frac{x.\dfrac{x}{\sqrt{1+x^2}}-(\sqrt{1+x^2}-1)}{x^2}\\
=\frac{x^2}{2(x^2+1-\sqrt{1+x^2})}.\frac{\sqrt{1+x^2}-1}{\sqrt{1+x^2}}=\frac{1}{2(1+x^2)}\\
\frac{dz}{dx}=\frac{1}{1+x^2}\implies\boxed{\frac{dw}{dz}=\frac{1}{2}}
$$
Why do I seem to get slightly different solutions in methods 1 and 2 ?
| In the first method for $\theta\in(-\pi/2,\pi/2)$ we have $|\cos\theta|=\cos\theta$ therefore only plus sign holds.
Second method seems more effective and clear to me.
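A finite-difference check (my own addition; the sample points and tolerance are arbitrary) confirms that $\frac{dw}{dz}=\frac12$ for both positive and negative $x$, supporting the plus sign:

```python
import math

def w(x):
    return math.atan((math.sqrt(1 + x * x) - 1) / x)

def z(x):
    return math.atan(x)

h = 1e-5
for x in [-5.0, -0.7, 0.3, 2.0, 10.0]:
    dw = (w(x + h) - w(x - h)) / (2 * h)   # central differences
    dz = (z(x + h) - z(x - h)) / (2 * h)
    assert abs(dw / dz - 0.5) < 1e-3
```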
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Kernel of nonzero linear function in vector space Let $l$ be a nonzero linear function on a vector space $V$, i.e. $l:V\to \mathbb{k}$. Show that $\text{ker}(l)$ is a maximal nontrivial linear subspace in $V$. Also prove that $V/\text{ker}(l)$ has dimension 1.
Remark: It is not necessary that $V$ is finite-dimensional space.
This is quite a famous problem and I was able to show that $\text{ker}(l)$ is maximal linear subspace of $V$. In other words, if $\text{ker}(l)\subseteq W\subseteq V$ then $W=\text{ker}(l)$ or $W=V$.
Also, since $l$ is nonzero, $\exists v\in V$ such that $l(v)\neq 0$. Then one can show that $V/\ker (l)$ has a basis $v+\ker (l)$, which means that $V/ \ker(l)$ has dimension $1$.
But I have spent some time and was not able to show that $\ker(l)$ is a nontrivial subspace of $V$, i.e. that there is some element $x\in V$, $x\neq 0$, such that $l(x)=0$.
I think that this is wrong.
Anyway I would be very grateful if anyone can show how to prove or disprove it.
| In a vector space of dimension $1$, a maximal subspace is trivial, so the claim fails there. In dimension greater than one, $\ker(l)$ must be nontrivial: if $\ker(l)=\{0\}$, then any one-dimensional subspace $W$ would satisfy $\ker(l)\subseteq W\subseteq V$ with $W\neq\ker(l)$ and $W\neq V$, contradicting the maximality you already proved.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If a normal subgroup and its factor group are both abelian, then what can be said about the group? The textbook I am using to self-study Abstract Algebra has the following problem.
Prove or disprove: If $H$ is a normal subgroup of $G$ such that $H$ and $G/H$ are abelian, then $G$ is abelian.
My attempt:
Since $H\triangleleft G$ and $H,G/H$ are abelian, then $g_1g_2H=g_2g_1H\text{ , }\forall g_1,g_2\in G$
$\implies g_1g_2h_1=g_2g_1h_2 \text{ for some }h_1,h_2\in H$
$\implies g_1g_2h_1h_2^{-1}=g_2g_1$
$\implies g_1^{-1}g_2^{-1}g_1g_2h_1h_2^{-1}=e$
How do I conclude if $G$ is abelian or not?
| Hint. The symmetric group $S_3$ has a nontrivial normal subgroup.
Rethink your "proof".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Unique isomorphism between finitely generated $\tau$-structures Let $\mathcal{A}$, $\mathcal{B}$ be two $\tau$-structures, where $\tau$ is a first order language and let $\vec{a} = (a_1, \dots, a_n) \in A^n$ and $\vec{b} = (b_1, \dots, b_n) \in B^n$. We denote $\langle \vec{a}\rangle$ to be the smallest $\tau$-substructure of $\mathcal{A}$ containing $\{a_1, \dots, a_n\}$ (same definition for $\langle \vec{b} \rangle$). Suppose $\xi: \langle \vec{a} \rangle \to \langle \vec{b} \rangle$ is an isomorphism of $\tau$-structures such that $\xi(a_i) = b_i$ for $i=1, \dots, n$. Is $\xi$ unique in respect to these properties? If so, how can I show it? It seems to me to be unique, because if two isomorphisms agree on a set of generators they should (I don't know how to prove it) be equal.
| Suppose $\xi$ and $\xi'$ are two such isomorphisms. Let $S=\{x:\xi(x)=\xi'(x)\}$. Note that $S$ is a $\tau$-substructure of $\langle \vec{a}\rangle$ since $\xi$ and $\xi'$ are both homomorphisms, and $a_i\in S$ for each $i$ by hypothesis. So by definition of $\langle \vec{a}\rangle$, $S$ must be all of $\langle \vec{a}\rangle$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove or Disprove - simple inequality conjecture for ratios of real numbers Please exhibit a proof or counterexample for the following claim.
Let $x,y,a,b \in \mathbb{R}$ and $0 < c < 1$. Then $$|x/y - 1| < c \text{ and } |a/b - 1| < c \iff |ay/bx - 1| < c.$$
If the bi-conditional does not hold, does either direction hold?
What have I tried? Practically nothing. I am very sick yet still working and would like to verify this for a program I am writing. Sorry, no energy for fiddling with absolute value inequalities :/ You can browse my other questions and see that I usually give attempts and approaches, but right now I just need to move on with this.
| counterexample for $\implies$: If $c=1-10^{-10}$, $a/b = 10^{-1}$, and $x/y = 10^{-5}$ then $\frac{ay}{bx} = 10^4$ which is very far from $1$.
counterexample for $\impliedby$: If $a/b = \frac{1}{4}$ and $x/y = \frac{1}{2}$ then $\frac{ay}{bx} = \frac{1}{2}$ which is within $2/3$ of $1$, but $|a/b-1| > 2/3$.
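Both counterexamples check out numerically (my own addition; I write `ab` for $a/b$ and `xy` for $x/y$, so $ay/(bx)$ is `ab/xy`):

```python
# forward direction fails: both ratios within c of 1, yet ay/(bx) is huge
c = 1 - 1e-10
ab, xy = 1e-1, 1e-5
assert abs(ab - 1) < c and abs(xy - 1) < c
assert abs(ab / xy - 1) >= c              # ab/xy = 1e4, very far from 1

# reverse direction fails: ay/(bx) is within c of 1, but a/b is not
c = 2 / 3
ab, xy = 0.25, 0.5
assert abs(ab / xy - 1) < c               # |1/2 - 1| = 1/2 < 2/3
assert abs(ab - 1) >= c                   # |1/4 - 1| = 3/4 > 2/3
```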
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3461985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Help with proof involving Eisenstein's criterion Let $a\in\mathbb {Z}[X]$ and suppose that $2a\in\mathbb {Z}[X]$ is Eisenstein with respect to a prime $p\in\mathbb {Z}$.
How can I prove that $a$ is an Eisenstein polynomial with respect to $p$?
Any help would be greatly appreciated!
| Hint 1 Show that $p \neq 2$. To do this, use $p\nmid 2a_n$.
Hint 2: If $p|2a_k$ and $p \neq 2$ deduce that $p|a_k$.
Hint 3: If $p^2 \nmid 2a_0$ show that $p^2 \nmid a_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3462110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is $\frac{1}{\alpha} \in \mathbb{Q}[\alpha]$ for irrational $\alpha$? I have been trying to pick up abstract algebra and just attempted an exercise from Landin's An Introduction to Algebraic Structures which asks to prove whether $\frac{1}{\pi} \in \mathbb{Q}[\pi]$, and would like to ask a (slightly) more general question.
Let $\alpha$ be an irrational number. Is $\frac1\alpha \in \mathbb{Q}[\alpha]$?
$\mathbb{Q}[\alpha]$ being the ring of numbers of the form $a_0 +a_1\alpha+a_2\alpha^2\dots a_n\alpha^n$ with $a_i\in\mathbb{Q}$. I'm really not sure where to begin, and haven't found a similar question on MSE (perhaps because the solution is simpler than I realize.) I am stuck on what we can conclude about $\frac{1}{\alpha}$ other than that it is an irrational number greater than $1$. Clearly, $\alpha^k n \in \mathbb{Q}[\alpha]$, but I'm reluctant to make any claims about $\frac{1}{\alpha}$ and $\alpha^k n$, as I'm sure a solution would use other simpler methods.
*edited title and parts of the body because $\alpha$ need not be less than one.
| No, it's not true for all irrational numbers $0 \lt \alpha \lt 1$. To see this, assume it's true, so you get
$$\frac{1}{\alpha} = \sum_{i=0}^{n}a_i \alpha^i \tag{1}\label{eq1A}$$
for some set of $a_i \in \mathbb{Q}$. Multiply by $\alpha$ on both sides and then subtract $1$ from both sides to get
$$\sum_{i=0}^{n}a_i \alpha^{i+1} - 1 = 0 \tag{2}\label{eq2A}$$
This means $\alpha$ is a root of the polynomial
$$p(x) = \sum_{i=0}^{n}a_i x^{i+1} - 1 \tag{3}\label{eq3A}$$
However, since all of the coefficients of the terms in $p(x)$ are rational, this can only happen when $\alpha$ is an algebraic number, so the statement is not true for all irrationals, i.e., it fails whenever $\alpha$ is a transcendental number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3462245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
3D geometry-skew lines, distance of a point from a line A straight line L intersects perpendicularly both the line:
$$\frac{(x+2)}{2} = \frac{(y+6)}{3}=\frac{(z-34)}{-10} $$
and the line:
$$\frac{(x+6)}{4}=\frac{(y-7)}{-3}=\frac{(z-7)}{-2}$$
Then the square of the perpendicular distance of origin from L is
I could find the shortest distance between the skew lines but I can't find the equation of line and also the position of origin. How is the origin related to this system, I have no idea. Please help.
| $$L_1: \frac{x+2}{2}=\frac{y+6}{3}=\frac{z-34}{-10}=a$$
On $L_1$, a general point is $\vec p_1=(2a-2,3a-6,-10a+34)$
$$L_2: \frac{x+6}{4}=\frac{y-7}{-3}=\frac{z-7}{-2}=b$$
A general point on $L_2$ is $\vec p_2=(4b-6,-3b+7,-2b+7)$
Let the line $L_3$ through $P_1P_2$ intersect both $L_1$ and $L_2$ orthogonally;
then $$(\vec p_1- \vec p_2)\cdot\vec d_1 =0 = (\vec p_1- \vec p_2)\cdot \vec d_2$$ where $\vec d_1=(2,3,-10)$ and $\vec d_2=(4,-3,-2)$ are the direction vectors of $L_1$ and $L_2$.
We get $$113a-19b=301,~~ 19a-29b=-1 \implies a=3, b=2$$
So the points are $\vec P_1=(4,3,4),\ \vec P_2=(2,1,3)$
We get line $L_3$ which joins $P_1,P_2$ as
$$L_3: \frac{x-4}{2}=\frac{y-3}{2}=\frac{z-4}{1}=c$$
with a general point $\vec p=(2c+4,2c+3, c+4)$ on it.
Then $\vec {OP}$ is perpendicular to $L_3$, so
we get $2(2c+4)+2(2c+3)+1(c+4)=0 \implies c=-2$, giving us the foot of the perpendicular as $(0,-1,2)$; its distance from the origin is $\sqrt{5}$, so the required square of the distance is $5$.
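The computation can be double-checked with a few dot products (my own addition):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

P1, P2 = (4, 3, 4), (2, 1, 3)
d1, d2 = (2, 3, -10), (4, -3, -2)             # direction vectors of L1, L2
d3 = tuple(a - b for a, b in zip(P1, P2))     # direction of L3: (2, 2, 1)

assert dot(d3, d1) == 0 and dot(d3, d2) == 0  # L3 meets both lines at right angles

c = -2
foot = (2 * c + 4, 2 * c + 3, c + 4)          # (0, -1, 2)
assert dot(foot, d3) == 0                     # OP is perpendicular to L3
assert dot(foot, foot) == 5                   # squared distance from the origin
```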
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3462372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$(a+bi)$ of $\frac{3+4i}{5+6i}$ and $i^{2006}$ and $\frac{1}{\frac{1}{1+i}-1}$ How can one get the Cartesian coordinate form $(a+bi)$ of the following complex numbers?
$$\frac{3+4i}{5+6i}$$
$$i^{2006}$$
$$\frac{1}{\frac{1}{1+i}-1}$$
Regarding $\frac{3+4i}{5+6i}$ I tried expanding it with $\frac{5+6i}{5+6i}$ and got $\frac{-9+38i}{-9+60i}$, but that doesn't get me anywhere.
Regarding $i^{2006}$ I have $i^{2006} = -1$ (because $2006$ is an even number and $i^2 = -1$). The Cartesian coordinate form would then be $-1 + 0i$. So the real part is $-1$ and the imaginary is $0i$ is that correct?
Regarding $\frac{1}{\frac{1}{1+i}-1}$ I tried expanding the denominator with $\frac{1+i}{1+i}$ and got $\frac{1}{\frac{1}{1+i}-\frac{1+i}{1+i}} = \frac{\frac{1}{1}}{\frac{1+i}{i^2}} = \frac{-1}{1+i}$, but how do I continue from there?
| For $\frac{3+4i}{5+6i}$ multiply the numerator and denominator by the conjugate of the denominator, $5 - 6i$, what do you observe ?
For $i^{2006}$. If $i^2 = -1$, what can you say about $i^{2006} = (i^2)^{1003} = (-1)^{1003} $ ?
For $\frac{1}{\frac{1}{1+i}-1}$, multiply numerator and denominator by $1+i$, you get $\frac{1+i}{-i} = -\frac{1}{i} - 1 = -1 +i$
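All three can be confirmed with Python's built-in complex arithmetic (my own addition):

```python
z1 = (3 + 4j) / (5 + 6j)
assert abs(z1 - (39 / 61 + 2j / 61)) < 1e-12   # multiply through by (5 - 6j)/61

z2 = 1j ** 2006
assert abs(z2 - (-1)) < 1e-9                   # i^2006 = (i^2)^1003 = -1

z3 = 1 / (1 / (1 + 1j) - 1)
assert abs(z3 - (-1 + 1j)) < 1e-12
```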
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3462562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Can Abel's test and Dirichlet test be used interchangably? I have seen in most cases in series of constants and in series of functions that where Abel's test of convergence applies Dirichlet's test also applies and vice versa. This makes me to think whether the two tests are equivalent in the sense that one is derivable from other.Is it true? Or there are cases where we can apply one but cannot apply other. Also is it so that one is more general and other is a corollary out of it.
| The proofs have some commonality but one test is not a corollary of the other.
Suppose the sequence $(a_n)$ is not monotone and $\sum a_n$ converges. It follows that $\sum(1+1/n)^n a_n$ converges by Abel’s test. However, the conditions for Dirichlet’s test are not met since $(1+1/n)^n$ is not monotone decreasing — although it is bounded and monotone increasing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3462900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Finite simplicial complex can be viewed as a subcomplex of a simplex I am reading the proof of the Simplicial Approximation Theorem (2C.1) in Hatcher.
In the first paragraph of the proof, Hatcher says that:
Choose a metric on $K$ that restricts to the standard Euclidean metric on each simplex of $K$. For example, $K$ can be viewed as a subcomplex of a simplex $\Delta ^N$ whose vertices are all the vertices of $K$, and we can restrict a standard metric on $\Delta ^N$ to give a metric on $K$.
Here $K$ is just a finite simplicial complex.
I understand that the second sentence clearly implies the first, but I cannot understand why we can view $K$ as a subcomplex of a simplex $\Delta ^N$. Any help?
| The key idea is that in a simplicial complex (unlike in, say, a $\Delta$-complex), each simplex is uniquely determined by its vertices (this is part of the definition of a simplicial complex). So, since $K$ has finitely many vertices, say $N + 1$ of them, consider the simplex $\Delta^N$, and identify the $N + 1$ vertices of $\Delta^N$ with the $N + 1$ vertices of $K$.
Now let's think about how to identify the $k$-simplices of $K$ with $k$-simplices of $\Delta^N$. For any $k + 1$ vertices in $K$, there might or might not be a $k$-simplex in $K$ with those vertices, but there's at most one $k$-simplex in $K$ with those vertices (since $K$ is a simplicial complex). There is also exactly one $k$-simplex in $\Delta^N$ with those $k + 1$ vertices. So, if $K$ has a $k$-simplex with those vertices, include the $k$-simplex in $\Delta^N$ with these vertices; if it doesn't, don't. Repeating this process and taking all simplices in $\Delta^N$ that correspond to simplices in $K$, we obtain a subcomplex of $\Delta^N$ which is homeomorphic to our original complex $K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Convergence of $(1+f(n))^{g(n)}$ when $f(n) \to 0$ and $g(n) \to +\infty$ Let $f : \Bbb N \to [0, \infty[$ and $g : \Bbb N \to [0, \infty[$ be two functions, such that $f(n) \to 0$ and $g(n) \to +\infty$ when $n \to \infty$.
What are some (necessary/sufficient) conditions on $f$ and $g$ for the limit
$$a_{f,g} := \lim_{n \to \infty} (1+f(n))^{g(n)}$$
to exist and be finite?
For instance, if $f(n) = x/n, g(n)=n$, then the limit is $e^x$. On the other hand, if $f(n)=1/n, g(n)=n^2$, then the limit is $+\infty$.
If $f(n) = a^n, g(n)=n^k$ with $a \in [0,1[$ and $k \geq 1$, then the limit exists since $(1+a^n)^n$ is decreasing for $n$ large enough (take the log and then the derivative wrt $n$), and the sequence is bounded below by $1$. On the other hand, if $f(n) = a^n, g(n)=e^n$ (with $a \in [0,1[$) then the sequence $(1+f(n))^{g(n)}$ is unbounded.
Any help would be appreciated to understand better what conditions I should have on $f$ and $g$.
| We have that
$$(1+f(n))^{g(n)}=e^{g(n)\log(1+f(n))}$$
and since
$$g(n)\log(1+f(n))=g(n)\cdot f(n)\cdot \frac{\log(1+f(n))}{f(n)}$$
with
$$\frac{\log(1+f(n))}{f(n)}\to 1$$
everything boils down to the behaviour of $g(n)f(n)$: the limit $a_{f,g}$ exists and is finite if and only if $g(n)f(n)$ converges, in which case $a_{f,g} = e^{\lim_{n\to\infty} g(n)f(n)}$.
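A quick numerical sketch illustrates this (the particular $f$ and $g$ below are made-up examples with $f(n)g(n) \to 2$, so the limit should be $e^2$):

```python
import math

f = lambda n: 2.0 / n   # f(n) -> 0
g = lambda n: float(n)  # g(n) -> infinity, with f(n) * g(n) -> 2

# (1 + f(n))^g(n) should approach e^2 ~ 7.389
approx = (1 + f(10**6)) ** g(10**6)
assert abs(approx - math.exp(2)) < 1e-3
```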
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463226",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Gradient of $\mathcal{L}(W) = -\frac{n}{2}\left\{d\ln(2\pi)+\ln|C|+\mbox{Tr}(C^{-1}S)\right\}$ w.r.t. $W$ I have the following function $\mathcal{L}(W)$ and I want to find the gradient with respect to $W$, but I'm struggling with the matrix operations and derivations.
$$\mathcal{L}(W) := -\frac{n}{2}\left\{ d\ln(2\pi) + \ln|C| + \mbox{Tr}(C^{-1}S) \right\}$$
where $C := WW^T + \sigma^2I$. You can consider $S$, which is positive definite, as constant.
The gradient should be
$$\nabla_W \mathcal{L}(W) = n \left( C^{-1} S C^{-1} W - C^{-1} W \right)$$
but I cannot understand the steps that lead to it.
If someone is interested, this is the probabilistic PCA, and you can find more information here.
| Use a colon as a convenient product notation for the trace, i.e.
$\;A:B = {\rm Tr}(A^TB)$.
Define the variables
$$\eqalign{
C &= WW^T +\sigma^2I &\implies dC = W\,dW^T+dW\,W^T \\
I &= C^{-1}C &\implies dC^{-1} = -C^{-1}\,dC\,C^{-1} \\
{\cal J} &= -\frac{n}{2}\log(2\pi) &\implies d{\cal J} = 0 \\
}$$
Write the objective function in terms of these new variables.
Then calculate the differential and gradient.
$$\eqalign{
{\cal L}
&= {\cal J} - \frac{n}{2}\Big(S:C^{-1} + \log(\det(C))\Big) \\
d{\cal L}
&= 0-\frac{n}{2}\Big(S:dC^{-1} + C^{-1}:dC\Big) \\
&= \frac{n}{2}\Big(C^{-1}SC^{-1} - C^{-1}\Big):dC \\
&= \frac{n}{2}\Big(C^{-1}SC^{-1} - C^{-1}\Big):(W\,dW^T+dW\,W^T) \\
&= n\Big(C^{-1}SC^{-1} - C^{-1}\Big):dW\,W^T \\
&= n\Big(C^{-1}SC^{-1} - C^{-1}\Big)W:dW \\
\frac{\partial {\cal L}}{\partial W} &= n(C^{-1}SC^{-1} - C^{-1})W
\\
}$$
Terms in a colon product can be rearranged in accordance with the properties of the trace, e.g.
$${(A:BC) \;=\; (B^TA:C) \;=\; (AC^T:B)}$$
which was used to rearrange some of the lines in the above derivation.
NB: The well-known gradient of $\log(\det(X))$ was used without any derivation.
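As a sanity check on the final formula (not part of the derivation above), one can compare it against central finite differences in a tiny pure-Python example — here $d = 2$, $q = 1$, with made-up values for $S$, $\sigma^2$, and $W$:

```python
import math

# --- made-up example data (d = 2, q = 1); S is chosen positive definite ---
sigma2 = 0.5
S = [[2.0, 0.3], [0.3, 1.5]]
n = 1.0
w = [0.7, -0.4]          # the single column of W

def C_of(w):             # C = W W^T + sigma2 * I
    return [[w[0] * w[0] + sigma2, w[0] * w[1]],
            [w[0] * w[1], w[1] * w[1] + sigma2]]

def inv2(M):             # inverse and determinant of a 2x2 matrix
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    inv = [[M[1][1] / det, -M[0][1] / det],
           [-M[1][0] / det, M[0][0] / det]]
    return inv, det

def L(w):                # the objective L(W)
    Ci, det = inv2(C_of(w))
    tr = sum(Ci[i][k] * S[k][i] for i in range(2) for k in range(2))
    return -n / 2 * (2 * math.log(2 * math.pi) + math.log(det) + tr)

def grad(w):             # n (C^{-1} S C^{-1} - C^{-1}) W
    Ci, _ = inv2(C_of(w))
    CiS = [[sum(Ci[i][k] * S[k][j] for k in range(2)) for j in range(2)]
           for i in range(2)]
    CiSCi = [[sum(CiS[i][k] * Ci[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    return [sum(n * (CiSCi[i][k] - Ci[i][k]) * w[k] for k in range(2))
            for i in range(2)]

eps = 1e-6
g = grad(w)
max_err = 0.0
for i in range(2):
    wp, wm = list(w), list(w)
    wp[i] += eps
    wm[i] -= eps
    fd = (L(wp) - L(wm)) / (2 * eps)   # central finite difference
    max_err = max(max_err, abs(fd - g[i]))
assert max_err < 1e-5
```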
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How would one show that $\textbf{Z}$ is countable? I am trying to show that the set of all integers $\mathbb{Z}$ is indeed countable which would mean that $|\mathbb{N}|=|\mathbb{Z}|$.
This would further imply that I have to find a bijection between the two sets, but I do not really know how to do that.
I have tried with a function $f:\mathbb{N}\longrightarrow \mathbb{Z}$ such that all the odd numbers are mapped to the positive subset of $\mathbb{Z}$, and all even numbers are mapped to the negative numbers.
I think that this will produce a valid solution but I don't have the details yet, and I also do not really know how to account for zero in all of this.
Any help on the matter would be much appreciated.
| Another possible bijection (likely similar to one you were thinking of) would be:
$$f(n)=\begin{cases}
\frac n2 &\text{$n$ is even}\\
-\frac {n+1}{2}\quad &\text{$n$ is odd}
\end{cases}$$
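Taking $\mathbb{N}$ to start at $0$ (so that $f(0)=0$ accounts for zero), the map can be checked mechanically; a small Python sketch enumerates the first few values:

```python
def f(n):
    # even naturals -> nonnegative integers, odd naturals -> negatives
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

image = [f(n) for n in range(11)]   # 0, -1, 1, -2, 2, ..., -5, 5
assert sorted(image) == list(range(-5, 6))   # each integer hit exactly once
```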
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Find area bounded by $y=\frac 3x, y=\frac 5x, y=3x, y=6x$ Let S be the area of the region bounded by the curves
$$y=\frac 3x,\>\>\> y=\frac 5x,\>\>\> y=3x,\>\>\> y=6x$$
Need to find $S$.
I found the coordinates of the vertices of the resulting figure. The problem is the transition from the double integral to an iterated one.
There's another idea: make the substitution
\begin{cases}
\xi=xy \\ \eta=\frac{y}{x}
\end{cases}
But neither approach led me to the correct answer.
| In polar coordinates, the boundaries are
$$r^2=\frac3{\sin\theta\cos\theta},\>\>\>\>\>r^2=\frac5{\sin\theta\cos\theta},
\>\>\>\>\>\tan\theta = 3,\>\>\>\>\>\tan\theta = 6$$
Thus, the area integral is,
$$S=\int_{\theta_1}^{\theta_2}d\theta \int_{r_1}^{r_2}r\,dr
=\int_{\theta_1}^{\theta_2}\frac{r_2^2-r_1^2}{2}\,d\theta
=\int_{\theta_1}^{\theta_2}\frac{d\theta}{\sin\theta\cos\theta}=\ln(\tan\theta)\Big|_{\tan^{-1}3}^{\tan^{-1}6}=\ln 2$$
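As an independent check (a rough midpoint-rule estimate in Python; the bounding box comes from the corner points of the region, where $xy=5$ meets $y=3x$ and $y=6x$), the area comes out numerically close to $\ln 2 \approx 0.693$:

```python
import math

# Midpoint-rule estimate of the area of {3 <= x*y <= 5, 3x <= y <= 6x}.
# The region lies inside the box (0, 1.3] x (0, 5.5]:
#   x is largest where xy = 5 meets y = 3x  (x = sqrt(5/3) ~ 1.29),
#   y is largest where xy = 5 meets y = 6x  (y = sqrt(30)  ~ 5.48).
N = 1500
xmax, ymax = 1.3, 5.5
dx, dy = xmax / N, ymax / N
area = 0.0
for i in range(N):
    x = (i + 0.5) * dx
    for j in range(N):
        y = (j + 0.5) * dy
        if 3 <= x * y <= 5 and 3 * x <= y <= 6 * x:
            area += dx * dy
assert abs(area - math.log(2)) < 0.02   # ln 2 ~ 0.6931
```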
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How my professor derived this CDF? This was an example problem my professor went over in class.
Let $X \sim \text{uniform}(1,4)$ and let $Y=(X-2)^2$. Find the CDF.
He went on to derive:
$F_Y(y)=P(Y\leq y) = P((X-2)^2 \leq y) = P(-\sqrt{y}\leq X-2 \leq \sqrt{y})$
$= P(2 - \sqrt{y} \leq X \leq 2 + \sqrt{y})$
Then he said the CDF is:
$$F_X(x)= \begin{cases}
0 & x\leq 1 \\
\frac{x-1}{3} & 1\leq x\leq 4 \\
1 & x\geq 4
\end{cases}
$$
I am very confused as to how he derived this CDF. Could someone please explain?
Edit: Unless he is wrong?
| The CDF given was simply that of the variable $X$, it's not the CDF for $Y$. Note that since this is a uniform distribution on $(1,4)$ then the density is given by $p_X(x) = \frac{1}{3}$. Then:
$$
F_X(x) \;\; =\;\; P(X\leq x) \;\; =\;\; \int_1^x \frac{1}{3}d\alpha \;\; =\;\; \frac{x-1}{3}.
$$
The values on the other parts of the domain are because $P(X\leq 1) = 0$ and $P(X\geq 4) = 1$.
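For completeness, the derivation in the question then gives the CDF of $Y$ itself: $F_Y(y) = P(2-\sqrt{y} \le X \le 2+\sqrt{y})$, evaluated with $F_X$ above by clipping the interval to $[1,4]$. A quick Monte Carlo sketch (seed and sample size are arbitrary choices):

```python
import math
import random

random.seed(1)
samples = [(random.uniform(1, 4) - 2) ** 2 for _ in range(200_000)]

def F_Y(y):
    # P(2 - sqrt(y) <= X <= 2 + sqrt(y)) with X ~ Uniform(1, 4),
    # interval clipped to the support [1, 4]
    if y <= 0:
        return 0.0
    lo = max(1.0, 2 - math.sqrt(y))
    hi = min(4.0, 2 + math.sqrt(y))
    return max(0.0, hi - lo) / 3

for y in (0.25, 1.0, 2.0, 4.0):
    empirical = sum(v <= y for v in samples) / len(samples)
    assert abs(empirical - F_Y(y)) < 0.01
```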
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding sum of series $\sum^{\infty}_{k=0}\frac{(k+1)(k+2)}{2^k}$
Finding sum of series $\displaystyle \sum^{\infty}_{k=0}\frac{(k+1)(k+2)}{2^k}$
what i try
Let $\displaystyle x=\frac{1}{2},$ Then series sum is $\displaystyle \sum^{\infty}_{k=0}(k+1)(k+2)x^k$
$\displaystyle \sum^{\infty}_{k=0}x^k=\frac{1}{1-x}\Rightarrow \sum^{\infty}_{k=0}x^{k+1}=\frac{x}{1-x}=\frac{1-(1-x)}{1-x}$
differentiating both side
$\displaystyle \sum^{\infty}_{k=0}(k+1)x^{k}=\frac{1}{(1-x)^2}$
How do I solve it? Help me, please.
| You almost have it. Instead of multiplying the infinite sum by $x$, try multiplying by $x^2$, so you get
$$\sum^{\infty}_{k=0}x^{k+2} = \frac{x^2}{1-x} \tag{1}\label{eq1A}$$
After differentiating twice, you will then get your series sum on the left, i.e., $\sum^{\infty}_{k=0}(k + 1)(k + 2)x^{k}$. You can determine the right side at $x = \frac{1}{2}$. I'll leave the details to you to finish.
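For anyone who wants to verify the final value: differentiating $\frac{x^2}{1-x}$ twice gives $\frac{2}{(1-x)^3}$, which equals $16$ at $x = \frac12$. An exact partial-sum computation in Python agrees:

```python
from fractions import Fraction

# Partial sum of sum_{k>=0} (k+1)(k+2)/2^k, computed exactly with rationals;
# the tail beyond k = 60 is astronomically small.
partial = sum(Fraction((k + 1) * (k + 2), 2 ** k) for k in range(60))
assert abs(float(partial) - 16.0) < 1e-9
```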
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3463896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $X=Y=[0,1]$ and $g(x, y)=\max \{x(1-2 y), y(1-2 x)\}$. How to compute $\max_{x \in X} \min_{y \in Y} g(x,y)$? I'm trying to solve a question from last year final exam in game theory:
I've solved question (3a), but it takes me a lot of time to compute because of many possible cases to consider. I think I can not complete the exam in time with my approach.
Could you please inform me a shorter approach on this problem?
My attempt:
Because $x(1-2y)$ and $y(1-2x)$ are continuous, $g(x,y) =\max \{x(1-2y),y(1-2x)\}$ is continuous. Moreover, $X$ and $Y$ are compact. It follows that the extrema are attained, so $\alpha = \max_{x \in X} \min_{y \in Y} g(x,y)$ and $\beta = \min_{y \in Y} \max_{x \in X} g(x,y)$ are well defined.
We have $$\begin{aligned} \min_{y \in [0,x]} g(x,y) &= \min_{y \in [0,x]} x(1-2y) = x(1-2x) \\
\min_{y \in [x,1]} g(x,y) &= \min_{y \in [x,1]} y(1-2x) = \begin{cases}
1-2x & \text{if} \quad x \ge 1/2 \\
x(1-2x) & \text{if} \quad x < 1/2 \\
\end{cases} \end{aligned}$$
Consequently, $$\min_{y \in Y} g(x,y) = \begin{cases}
1-2x & \text{if} \quad x \ge 1/2 \\
x(1-2x) & \text{if} \quad x < 1/2 \\
\end{cases}$$
Hence $$\alpha = \max_{x \in X} \min_{y \in Y} g(x,y) = \max_{x \in [0,\frac{1}{2}]} x(1-2x) = \frac{1}{8}$$
Similarly, we have $$\begin{aligned} \max_{x \in [0,y]} g(x,y) &= \max_{x \in [0,y]} y(1-2x) = y \\
\max_{x \in [y,1]} g(x,y) &= \max_{x \in [y,1]} x(1-2y) = \begin{cases}
y(1-2y) & \text{if} \quad y \ge 1/2 \\
1-2y & \text{if} \quad y < 1/2 \\
\end{cases} \end{aligned}$$
Consequently, $$\max_{x \in X} g(x,y) = \begin{cases}
y & \text{if} \quad y \ge 1/3 \\
1-2y & \text{if} \quad y \le 1/3 \\
\end{cases}$$
Hence $$\beta = \min_{y \in Y} \max_{x \in X} g(x,y) = \frac{1}{3}$$
| Computing $\alpha$ involves first minimizing $g(x,y)$ with respect to $y$. This implies that we need to set $x(1-2y)=y(1-2x)$. To see this, note the following: If $x(1-2y)>y(1-2x)$, $g(x,y)=x(1-2y)$ and we can increase $y$ to decrease $g(x,y)$. If $y(1-2x)>x(1-2y)$, then $g(x,y)=y(1-2x)$ and we can decrease $y$ to decrease $g(x,y)$. Thus, if $x(1-2y)=y(1-2x)$ is false, we can increase or decrease $y$ to decrease $g(x,y)$. Hence, for fixed $x$, a minimum of $g(x,y)$ with respect to $y$ must be such that $x(1-2y)=y(1-2x)$. But $x(1-2y)=y(1-2x)$ implies $x=y$, so $\alpha=\max_x\min_yg(x,y)=\max_xg(x,x)=\max_xx(1-2x)=g(1/4,1/4)=1/8$.
Now, consider $\beta$. If $y>1-2y$, one would like to set $x=0$ for payoff $y$ when maximizing $g(x,y)$ with respect to $x$. Why? Well, to maximize $g(x,y)=\max(x(1-2y),y(1-2x))$ with respect to $x$, we should maximize $x(1-2y)$ or $y(1-2x)$. Thus, we should set $x=0$ or $x=1$. $x=0$ gives $y$, and $x=1$ gives $1-2y$. Since $y>1-2y$, we should set $x=0$. Similarly, if $y<1-2y$, one would like to set $x=1$ for payoff $1-2y$. Thus, if $y\neq 1-2y$, the other player would like to increase or decrease $y$ to decrease $\max_xg(x,y)$. Hence, the other player will set $y=1-2y$, i.e., $y=1/3$, so that $\beta=g(0,1/3)=g(1,1/3)=1/3$.
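Both values can be confirmed by brute force; a small grid search over $[0,1]^2$ (resolution chosen arbitrarily) reproduces $\alpha = 1/8$ and $\beta = 1/3$ to within grid error:

```python
# Brute-force grid check of alpha = max-min and beta = min-max on [0,1]^2.
N = 400
grid = [i / N for i in range(N + 1)]

def g(x, y):
    return max(x * (1 - 2 * y), y * (1 - 2 * x))

alpha = max(min(g(x, y) for y in grid) for x in grid)
beta = min(max(g(x, y) for x in grid) for y in grid)
assert abs(alpha - 1 / 8) < 0.01
assert abs(beta - 1 / 3) < 0.01
```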
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3464059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |