Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Prove that $(x^n - 1)$ divides $(x^{kn} - 1)$ without any remainder. I would like your help with proving that $(x^n - 1)$ divides $(x^{kn} - 1)$ without any remainder.
I understand that both of these polynomials can be factored into a similar form, such as $(x^2-1)=(x-1)(x+1)$.
But I'm not really sure how to do so.
Thank you.
| Notice that $x^{kn} - 1 = (x^n)^k - 1.$ Since $a^k - 1 = (a-1)(a^{k-1}+a^{k-2}+\cdots+1)$, substituting $a = x^n$ gives $x^{kn} - 1 = (x^n - 1)P(x)$ with $P(x) = x^{n(k-1)} + x^{n(k-2)} + \cdots + x^n + 1.$
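A quick sanity check of this factorization at integer points (a small sketch; `quotient_value` is a hypothetical helper name evaluating $P(x)$):

```python
# Check that x^n - 1 divides x^(k*n) - 1, using the explicit quotient
# P(x) = sum of x^(n*i) for i = 0..k-1, evaluated at integer points.
def quotient_value(x, n, k):
    # P(x) = x^(n(k-1)) + ... + x^n + 1
    return sum(x ** (n * i) for i in range(k))

checks = []
for x in range(2, 6):
    for n in range(1, 5):
        for k in range(1, 5):
            checks.append((x ** (k * n) - 1) == (x ** n - 1) * quotient_value(x, n, k))
print(all(checks))  # True
```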
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Solve the recurrence relation: $na_n = (n-4)a_{n-1} + 12n H_n$ I want to solve
$$ na_n = (n-4)a_{n-1} + 12n H_n,\quad n\geq 5,\quad a_0=a_1=a_2=a_3=a_4=0. $$
Does anyone have an idea what could be substituted for $a_n$ to get an expression which one could just sum up? We should use
$$ \sum_{k=0}^n \binom{k}{m} H_k = \binom{n+1}{m+1} H_{n+1} - \frac{1}{m+1} \binom{n+1}{m+1} $$
to simplify the result.
| Given by a CAS.
I suppose that the condition $a_3=0$ is missing.
Starting from $n=5$, the sequence
$$\left\{\frac{137}{25},\frac{1009}{150},\frac{17953}{2450},\frac{151717}{19600},\frac{170875}{21168},\frac{1474379}{176400},\frac{3751927}{435600},\frac{20228477}{2286900}\right\}$$ is not recognized by OEIS.
Being totally stuck, I used a CAS without conditions and got, after a lot of simplifications,
$$a_n=\frac{(-3 n^4+22 n^3-69 n^2+194 n+288+96 C) } {4 (n-3) (n-2) (n-1) n }+$$ $$\frac{12 (n-4) (n+1) \left(n^2-3 n+6\right) H_{n+1} } {4 (n-3) (n-2) (n-1) n }$$
For $n=0,1,2,3$, this leads to indeterminate forms. For $n=4$
$$a_4=C+\frac{25}{4}=0 \implies C=-\frac{25}{4}$$ leading to
$$a_n=(n-4)\frac{12 (n+1) \left(n^2-3 n+6\right) H_{n+1}-(n-3) \left(3 n^2-n+26\right) } {4 (n-3) (n-2) (n-1) n }$$ which is identical to what @Raffaele wrote in a comment.
Asymptotically,
$$a_n=3 \left(\gamma -\frac{1}{4}\right)+3 \log (n)+\frac{11}{2 n}-\frac{25}{4 n^2}+O\left(\frac{1}{n^3}\right)$$
I would really like to know how, starting from scratch, we could arrive at this result.
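As a sanity check of the closed form (a sketch assuming $H_n$ denotes the $n$-th harmonic number, as in the question), it reproduces the listed values exactly:

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number as an exact fraction
    return sum(Fraction(1, k) for k in range(1, n + 1))

def a(n):
    # closed form from the answer, valid for n >= 4
    num = 12 * (n + 1) * (n**2 - 3 * n + 6) * H(n + 1) - (n - 3) * (3 * n**2 - n + 26)
    return Fraction(n - 4) * num / (4 * (n - 3) * (n - 2) * (n - 1) * n)

print([a(n) for n in range(5, 9)])
# [Fraction(137, 25), Fraction(1009, 150), Fraction(17953, 2450), Fraction(151717, 19600)]
```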
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
How do we prove the continuity of the inner product? Let $\lim x_{k}=a$, $\lim y_{k}=b$ in $\mathbb{R^n}$ then it must be proved that $\lim \langle x_{k}, y_{k}\rangle = \langle a, b\rangle$, where $\langle a, b \rangle$ is the inner product in $\mathbb{R^n}$
The idea I have is that $ \langle x_{k}, y_{k} \rangle$ $=$ $\sum_{i=1}^{n} x_{i}^k y_{i}^k$ then $\lim \langle x_{k},y_{k}\rangle = \lim \sum_{i=1}^{n} x_{i}^k y_{i}^k$ $=$ $\lim x_{1}^ky_{1}^k+...+\lim x_{n}^ky_{n}^k$
Is that approach fine, and if so, how do I continue?
| Proposition
Given a sequence $x_{k} = (x_{1}^{k},x_{2}^{k},\ldots,x_{n}^{k})\in\mathbb{R}^{n}$, it converges to $a = (a_{1},a_{2},\ldots,a_{n})$ iff $x_{j}^{k}$ converges to $a_{j}$.
Proof
Let us prove the implication $(\Rightarrow)$ first.
Suppose that $x_{k}\in\mathbb{R}^{n}$ converges to $a$. Let $\varepsilon > 0$. Then there exists a natural number $N_{\varepsilon}\in\mathbb{N}$ s.t.
\begin{align*}
k\geq N_{\varepsilon} \Rightarrow \|x_{k} - a\| \leq\varepsilon
\end{align*}
But we do also know that $|x_{j}^{k} - a_{j}| \leq \|x_{k} - a\|$. Hence we conclude that $x_{j}^{k}\to a_{j}$, and we are done.
Let us now prove the converse implication $(\Leftarrow)$.
We shall assume that $x_{j}^{k}$ converges to $a_{j}$. Let $\varepsilon > 0$. Then there exists $N_{\varepsilon_{j}}\in\mathbb{N}$ s.t.
\begin{align*}
k\geq N_{\varepsilon_{j}} \Rightarrow |x_{j}^{k} - a_{j}| \leq \varepsilon/n
\end{align*}
Consequently, if we take $N_{\varepsilon} = \max\{N_{\varepsilon_{1}},N_{\varepsilon_{2}},\ldots N_{\varepsilon_{n}}\}$, we conclude that
\begin{align*}
k\geq N_{\varepsilon} \Rightarrow \|x_{k} - a\| \leq |x_{1}^{k} - a_{1}| + |x^{k}_{2} - a_{2}| + \ldots + |x^{k}_{n} - a_{n}| \leq n\times\frac{\varepsilon}{n} = \varepsilon
\end{align*}
and we are done.
Proposition
Let $a_{k}\in\mathbb{R}$ and $b_{k}\in\mathbb{R}$ be sequences s.t $a_{k}\to a$ and $b_{k}\to b$. Then $a_{k} + b_{k}$ converges to $a + b$.
Proof
Let $\varepsilon > 0$. Then there are natural numbers $N_{\varepsilon_{1}}$ and $N_{\varepsilon_{2}}$ such that
\begin{align*}
\begin{cases}
k\geq N_{\varepsilon_{1}} \Rightarrow |a_{k} - a| \leq \varepsilon/2\\\\
k\geq N_{\varepsilon_{2}} \Rightarrow |b_{k} - b| \leq \varepsilon/2
\end{cases}
\end{align*}
Thus if we take $N_{\varepsilon} = \max\{N_{\varepsilon_{1}},N_{\varepsilon_{2}}\}$, one has that
\begin{align*}
k\geq N_{\varepsilon} \Rightarrow |a_{k} + b_{k} - a - b| \leq |a_{k} - a| + |b_{k} - b| \leq \varepsilon/2 + \varepsilon/2 = \varepsilon
\end{align*}
whence we conclude that $a_{k} + b_{k} \to a + b$.
Proposition
Let $a_{k}\in\mathbb{R}$ and $b_{k}\in\mathbb{R}$ be sequences such that $a_{k}\to a$ and $b_{k}\to b$. Then $a_{k}b_{k}$ converges to $ab$.
Proof
Let us consider the expression:
\begin{align*}
|a_{k}b_{k} - ab| = |a_{k}b_{k} - ab_{k} + ab_{k} - ab| & \leq |b_{k}||a_{k} - a| + |a||b_{k} - b|\\\\
& \leq |B||a_{k} - a| + |a||b_{k} - b|
\end{align*}
where $|b_{k}|\leq B$ (since $b_{k}$ converges). Since $a_{k}\to a$ and $b_{k}\to b$, let $\varepsilon > 0$. Then there are natural numbers $N_{\varepsilon_{1}}$ and $N_{\varepsilon_{2}}$ such that
\begin{align*}
\begin{cases}
k\geq N_{\varepsilon_{1}} \Rightarrow |a_{k} - a| \leq \displaystyle\frac{\varepsilon}{2(|B| + 1)}\\\\
k\geq N_{\varepsilon_{2}} \Rightarrow |b_{k} - b| \leq \displaystyle\frac{\varepsilon}{2(|a| + 1)}
\end{cases}
\end{align*}
If we take $N_{\varepsilon} = \max\{N_{\varepsilon_{1}},N_{\varepsilon_{2}}\}$, one has that
\begin{align*}
k\geq N_{\varepsilon} \Rightarrow |a_{k}b_{k} - ab| \leq \left(\frac{|B|}{2(|B| + 1)} + \frac{|a|}{2(|a| + 1)}\right)\varepsilon \leq \varepsilon
\end{align*}
whence we conclude that $a_{k}b_{k}\to ab$.
Solution
Based on the above-mentioned results, if $x_{k}\to a$ and $y_{k}\to b$, then $\langle x_{k},y_{k}\rangle$ converges to $\langle a,b\rangle$.
Hopefully this helps.
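A small numerical illustration of the conclusion (a sketch with hypothetical sequences $x_k = a + \tfrac1k(1,1,1)$ and $y_k = b - \tfrac1k(1,1,1)$ in $\mathbb{R}^3$):

```python
# Check that <x_k, y_k> approaches <a, b> as k grows.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

a = (1.0, 2.0, 3.0)
b = (-1.0, 0.5, 2.0)

def x(k):
    return tuple(ai + 1.0 / k for ai in a)

def y(k):
    return tuple(bi - 1.0 / k for bi in b)

errors = [abs(dot(x(k), y(k)) - dot(a, b)) for k in (10, 100, 1000)]
print(errors)  # strictly decreasing toward 0
```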
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Linear Operator on $\mathcal{H} = l^2(\mathbb{N}\cup \{0\})$ This is taken from Conway's A Course in Functional Analysis, Section 1.3, Problem 3. It's a 3-part question; I wanted to see if what I had below was correct and to ask how I would approach part (c).
Let $\mathcal{H} = l^2(\mathbb{N}\cup \{0\})$.
(a) Show that if $\{\alpha_n\}\in \mathcal{H}$, then the power series $\sum_{n=0}^\infty\alpha_nz^n$ has radius of convergence $\geq 1$.
Since $\{\alpha_n\} \in \mathcal{H}$, we have $\sum_{n=0}^\infty |\alpha_n|^2 < \infty$. We hope to show that $\left\lvert\sum_{n=0}^\infty \alpha_nz^n\right\rvert< \infty$ for any $|z|<1$. For $z = 0$ the series is $0$, so let us show the series converges when $z \neq 0$. By the Cauchy-Schwarz inequality:
\begin{align}
\left\lvert\sum_{n=0}^\infty \alpha_nz^n\right\rvert^2 &\leq \left(\sum_{n=0}^\infty |\alpha_n|^2\right)\left(\sum_{n=0}^\infty |z|^{2n}\right)\\
&=\left(\sum_{n=0}^\infty |\alpha_n|^2\right)\left(\frac{1}{1 - |z|^2}\right)\\
&<\infty
\end{align}
where the last inequality comes from $\{\alpha_n\} \in \mathcal{H}$, and the result follows.
(b) If $|\lambda|<1$ and $L:\mathcal{H}\rightarrow \mathbb{F}$ is defined by $L(\{\alpha_n\}) = \sum_{n=0}^\infty \alpha_n\lambda^n$, find the vector $h_0$ in $\mathcal{H}$ such that $L(h) = \langle h, h_0\rangle$ for every $h$ in $\mathcal{H}$.
For this we can let $h_0(n) = \overline{\lambda^n}$.
(c) What is the norm of the linear functional $L$ defined in (b)?
We have $||L|| = \sup\{|L(h)|: ||h||= 1\}$, and here I'm thinking that I should use the Cauchy-Schwarz inequality from part (a), but it doesn't seem to suggest a good candidate for $||L||$.
| Parts (a) and (b) are good. For part (c), the norm of a linear functional $h \mapsto \langle h, h_0\rangle$ is precisely $\|h_0\|$. And yes, this is proven by Cauchy-Schwarz: $|\langle h, h_0\rangle| \le \|h_0\|\|h\|$, with equality occurring when taking $h = h_0$.
So, we just need to compute $\left\|\left(\overline{\lambda^n}\right)_n\right\|$, which is a straightforward geometric series:
$$\left\|\left(\overline{\lambda^n}\right)_n\right\|^2 = \sum_{n=0}^\infty|\lambda|^{2n} = \frac{1}{1 - |\lambda|^2} \implies\|L\| = \frac{1}{\sqrt{1 - |\lambda|^2}}.$$
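A quick numerical check of part (c) (a sketch with the hypothetical value $\lambda = 0.5$, real so the conjugate is trivial, and the sequence truncated at length 200):

```python
import math

lam = 0.5
N = 200  # truncation length; the tail is negligible for |lam| < 1

h0 = [lam**n for n in range(N)]                 # h0(n) = conj(lam^n); lam real here
L_h0 = sum(h0[n] * lam**n for n in range(N))    # L(h0) = <h0, h0>
norm_h0 = math.sqrt(sum(v * v for v in h0))

ratio = L_h0 / norm_h0                          # = ||h0||, attained at h = h0
print(ratio, 1 / math.sqrt(1 - lam**2))         # both ≈ 1.1547
```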
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding minima and maxima of the map $f(x) = \sum_{k \in \mathbb Z} e^{-c(x+k)^2}, x \in [0,1]$ Suppose that $c>0$ and define $f : [0,1] \to \mathbb R,$
$$
f(x) = \sum_{k \in \mathbb Z} e^{-c(x+k)^2}
$$
I want to show that $x=1/2$ is a minimizer of this map and the maxima are attained at $x=0$ and $x=1$. The above map seems to be a Jacobi theta function of the third kind. I'm not very familiar with special functions, so any help or reference would be appreciated!
| A partial answer: Let $\phi$ and its Fourier transform $\hat{\phi}$ on ${\Bbb R}$ be sufficiently smooth and decay sufficiently fast at infinity for the following operations to be legal (this is the case e.g. for the Gaussian function). Here is a fast track to obtain the Poisson summation formula:
Define
$f(x) = \sum_{k\in {\Bbb Z}} \phi(x+k)$.
Then $f$ is $1$-periodic and $f(x) = \sum_{m\in {\Bbb Z}} \hat{f}_m e^{2\pi i m x}$ with
$$ \hat{f}_m = \int_0^1 f(y) \exp(-2\pi i m y) dy = \sum_k \int_0^1 \phi(y+k) \exp(-2\pi i m y) dy, $$
and after a change of variable and using periodicity of the exponential:
$$ \hat{f}_m = \int_{\Bbb R} \phi(t) e^{-2\pi i m t} dt = \hat{\phi}(m) $$
with $\hat{\phi}$ being the Fourier transform on the whole real line and the normalization factor $2\pi$ in the exponential. We have thus deduced the Poisson summation formula:
$$ \sum_{k\in {\Bbb Z}} \phi(x+k) = \sum_{m\in {\Bbb Z}} \hat{\phi}(m) e^{2\pi i m x}$$
Suppose now that $\hat{\phi}(-m)=\hat{\phi}(m)>0$ for all $m\in {\Bbb Z}$. Then the RHS is real-valued and clearly maximal when $\exp(2\pi i m x)=+1$ for all $m$, which happens precisely when $x\in {\Bbb Z}$.
When $\phi(x)=\exp(-c x^2)$, $c>0$ you have $\hat{\phi}(m)= \sqrt{\pi/c} \exp(- \pi^2m^2/c)$ with the given normalization so the wanted result follows for the maximizer from the above. For the minimizer it is not so clear but plausible that it should happen when $x-\frac12 \in {\Bbb Z}$.
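A numeric sanity check of the claimed extrema (a sketch with the hypothetical choice $c = 1$ and the sum truncated to $|k|\le 20$, which is far more than enough for a Gaussian):

```python
import math

c = 1.0

def f(x, K=20):
    # truncated periodized Gaussian: sum over k of exp(-c (x+k)^2)
    return sum(math.exp(-c * (x + k) ** 2) for k in range(-K, K + 1))

values = {x: f(x) for x in (0.0, 0.25, 0.5, 0.75, 1.0)}
print(values)
# f is maximal at x = 0 and x = 1, minimal at x = 1/2
```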
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Sum with two indices, how to handle the condition $i\neq j$? I'm given
$$
\sum_{i,j=1, i \neq j}^{N} ij \tag 1
$$
I assume this is a short hand for
$$
\sum_{i=1}^{N}
\Bigg (
\sum_{j=1}^{N} ij
\Bigg )
\qquad? \tag 2
$$
But where does $i\neq j$ belong, to the inner or outer sum?
Example with $N=2$ and $i\neq j$ in the inner sum:
\begin{align}
\sum_{\substack{i,j=1 \\ i\neq j}}^{2} ij
&=
\sum_{i=1}^{2}
\Bigg (
\sum_{j=1, j \neq i}^{2} ij
\Bigg )
\tag 3
\\
&=\sum_{i=1}^{2}
\Bigg (
i\cdot 1+i\cdot 2
\Bigg )
\tag 4
\\
&=
(1\cdot 1+1\cdot 2 + 2\cdot 1 + 2\cdot 2)
\tag 5
\\
&\text{\{Now discard } 1\cdot 1 \text{ and } 2\cdot 2\text{\}}\\
&=4
\end{align}
I guess the end result is correct, but I don't think my approach is correct ("Now discard"). How should I handle the condition $i\neq j$?
| A simple way is to expand the square of $\sum_{i=1}^N{i}$ and to separate the two cases, $i=j$ and $i \ne j$:
$$\left(\sum_{i=1}^N {i}\right)^2 = \sum_{i,j=1}^N{ij} = \sum_{i,j=1 ; i \ne j}^N{ij} + \sum_{i,j=1; i=j}^N{ij}$$
$$ = \sum_{i,j=1; \,i \ne j}^N{ij} + \sum_{i=1}^N{i^2}$$
As
$$\sum_{i=1}^N {i} = \frac{N(N+1)}{2} $$
And
$$\sum_{i=1}^N {i^2} = \frac{N(N+1)(2N+1)}{6} $$
We get
$$ \sum_{i,j=1 ;\,i \ne j}^N{ij} = \frac{N^2(N+1)^2}{4} - \frac{N(N+1)(2N+1)}{6} = \frac{3N^4+2N^3-3N^2-2N}{12}$$
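A brute-force check of the closed form (a small sketch; the function names are arbitrary):

```python
# Brute-force check of the closed form for the sum over i != j of i*j.
def brute(N):
    return sum(i * j for i in range(1, N + 1) for j in range(1, N + 1) if i != j)

def closed(N):
    # the numerator factors as N(N+1)(N-1)(3N+2), always divisible by 12
    return (3 * N**4 + 2 * N**3 - 3 * N**2 - 2 * N) // 12

print(all(brute(N) == closed(N) for N in range(1, 20)))  # True
```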
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3889873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Sum over binomial coefficients I want to find out if the following sum has a closed form:
$$\sum_{j=0}^N{j+k-1\choose k-1}{N-j+k-1\choose k-1}$$
I tried to rewrite the sum such that I could use the following form of the Vandermonde identity:
$$\sum_{m=0}^n{m\choose j}{n-m\choose k-j}={n+1\choose k+1}$$
but I couldn't arrive at any result.
| Hint: You are doing well! Using what you put there, you will need $n-(j+k-1)=N-j+k-1$, which implies $n=N+2k-2$; you also have that $2k-2-(k-1)=k-1$. So take the $k$ in the second equation to be $2k-2.$
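Following the hint to its conclusion gives the closed form $\binom{N+2k-1}{2k-1}$ (this explicit form is an inference from the hint, not stated in the answer); a quick check:

```python
from math import comb

def lhs(N, k):
    # the sum from the question
    return sum(comb(j + k - 1, k - 1) * comb(N - j + k - 1, k - 1)
               for j in range(N + 1))

print(all(lhs(N, k) == comb(N + 2 * k - 1, 2 * k - 1)
          for N in range(0, 10) for k in range(1, 6)))  # True
```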
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3890071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\operatorname{Null}(A) = \operatorname{Null}(A^T)$ when $A = A^2$ So I first let $x$ belong to $\operatorname{Null}(A)$, i.e. $Ax = 0$. Now I tried using the fact that if the length of $A^Tx$ is $0$, then $A^Tx = 0$.
But I don't know how to prove that $x^TA^TAx = 0$.
Any help will be appreciated.
| Consider
$A=\pmatrix{1 & 1\\0 & 0}$ and $x=\pmatrix{1\\-1}$. Are you sure you have the question correctly posted? Am I not understanding something correctly?
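Checking the proposed counterexample directly (a sketch; note that $A^2 = A$ here, yet $x \in \operatorname{Null}(A)$ while $x \notin \operatorname{Null}(A^T)$):

```python
def matmul(A, B):
    # plain list-of-lists matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 1], [0, 0]]
AT = [[1, 0], [1, 0]]
x = [[1], [-1]]

print(matmul(A, A) == A)   # A is idempotent: True
print(matmul(A, x))        # [[0], [0]]  -> x in Null(A)
print(matmul(AT, x))       # [[1], [1]]  -> x not in Null(A^T)
```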
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3890220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding last digit of $\,625703^{43898^{614961^{448629}}}\!\!$ using Euler's Theorem I'm a little bit stuck with this problem I hope you can help. I want to find the last digit of a power tower using Euler's theorem:
\begin{align}
q &= 10, \\ \varphi(q) &= 4, \\ \varphi(\varphi(q)) &= 2, \\\varphi(\varphi(\varphi(q))) &= 1.
\end{align}
\begin{align}
625703 ^{\displaystyle 43898 ^{\displaystyle 614961 ^{\displaystyle 448629}}} &\equiv (625703 \bmod 10)^{\displaystyle (43898 \bmod \varphi(10))^{\displaystyle (614961 \bmod \varphi(\varphi(10)))^{\displaystyle (448629 \bmod \varphi(\varphi(\varphi(10))))}}} \mod 10 \\ &\equiv 3^{\displaystyle 2^{\displaystyle 1^{\displaystyle 0}}} \mod 10 \\ &\equiv 3^{\displaystyle 2^{\displaystyle 1}} \mod 10 \\ &\equiv 3^{\displaystyle 2} \mod 10 \\ &\equiv 9 \mod 10
\end{align}
According to this approach the last digit of the power tower must be 9. However, the right solution is 1 (see here) - what am I doing wrong?
This approach is based on the following two answers
computing ${{27^{27}}^{27}}^{27}\pmod {10}$
What's a general algorithm/technique to find the last digit of a nested exponential?
| \begin{align}
625703^{43898^{\scriptstyle614961^{\scriptstyle448629}}}\mkern-18mu\bmod 10&= (625703\bmod 10)^{43898^{\scriptstyle614961^{\scriptstyle448629}}\bmod\varphi(10)}\\
&= 3^{43898^{\scriptstyle614961^{\scriptstyle448629}}\mkern-12mu\bmod4}=3^{2^{\scriptstyle614961^{\scriptstyle448629}}\mkern-12mu\bmod4}=3^0=1.
\end{align}
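The whole reduction can be automated (a sketch; it relies on the standard fact that for $e \ge \log_2 m$ one may replace the exponent $e$ by $(e \bmod \varphi(m)) + \varphi(m)$, which holds even without coprimality, and every exponent in this tower is comfortably that large):

```python
def phi(m):
    # Euler's totient by trial division (fine for small m)
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(t, m):
    # t = [b0, b1, b2, ...] represents b0 ** (b1 ** (b2 ** ...)), all entries >= 2
    if m == 1:
        return 0
    if len(t) == 1:
        return t[0] % m
    ph = phi(m)
    e = tower_mod(t[1:], ph) + ph  # safe: the remaining tower is >= ph
    return pow(t[0], e, m)

print(tower_mod([7, 3, 2], 10), pow(7, 3 ** 2, 10))       # sanity check: both 7
print(tower_mod([625703, 43898, 614961, 448629], 10))     # 1
```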
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3890360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Backward Differential Equation with binomial tree I'm trying to understand/solve the following question but I honestly don't know what it's even asking about. I've included my attempt following the picture of the question.
I would approximate the first derivative of $F$ at $(x,t)$ by $$F_x(x,t) \approx \frac{F(x+dx, t+2dt) - F(x, t+2dt)}{dx} \equiv \Delta F(x, t)$$ giving rise to $$\frac{d^2}{dx^2}F(x,t) \approx \frac{\Delta F(x + dx, t) - \Delta F(x,t)}{dx} = \frac{F(x+2dx, t+2dt) - 2F(x+dx, t+2dt) + F(x,t+2dt)}{(dx)^2} = \frac{F(x+2dx, t+2dt) - 2F(x+dx, t+2dt) + F(x,t+2dt)}{\sigma^2 dt}$$
This is obviously different from what's given in the question statement, as some terms and the factor of 4 are missing. I'm completely lost. Any help would be massively appreciated!
| A classical outset for the presented finite difference to approximate the second order derivative is the following reasoning:
"Decompose" $\partial_{xx} F$ into $\partial_x \underbrace{\partial_x F}_{=:u}$ and use the central difference to approximate the "outer" derivative:
$$\partial_x u \approx \frac{u(x + d x ) - u(x - dx)}{2dx}$$
I believe this is what the first arrows in the image could describe.
Then, you use again central differences to be able to express $u(x \pm dx)$ in terms of $F$:
$$ u(x \pm dx) = \partial_x F \Big \vert_{x \pm dx} \approx \frac{F(x \pm d x + dx ) - F(x \pm dx - dx)}{2dx} $$
Thus,
\begin{align}
\partial_{xx} F \approx & \frac{\frac{F(x + 2 dx ) - F(x + dx - dx)}{2 dx} - \frac{F(x - dx + dx ) - F(x -dx -dx)}{2dx}}{2dx} \\
=& \frac{F(x + 2dx) - 2 F(x) + F(x - 2dx)}{4 dx^2} \\
\overset{dx = \sigma \sqrt{dt}}{=} &\frac{F(x + 2dx) - 2 F(x) + F(x - 2dx)}{4 \sigma^2 dt} \\
\end{align}
Note that these steps hold for any time $t$, so in particular also for $t + 2 dt$.
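A quick check that the wide central stencil $\frac{F(x+2dx)-2F(x)+F(x-2dx)}{4\,dx^2}$ approximates the second derivative (a sketch using $F(x)=\sin x$, whose second derivative is $-\sin x$):

```python
import math

def second_diff(F, x, dx):
    # wide central stencil with spacing 2*dx
    return (F(x + 2 * dx) - 2 * F(x) + F(x - 2 * dx)) / (4 * dx**2)

x0, dx = 1.0, 1e-3
approx = second_diff(math.sin, x0, dx)
print(approx, -math.sin(x0))  # both ≈ -0.84147
```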
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3890541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine $k$ in PDF.
Let $X$ and $Y$ random variables with PDF given by
\begin{eqnarray*}
f(x,y)=\left\{\begin{aligned}kxy, \quad x\geq 0, y\geq 0, x+y\leq 1\\ 0, \quad \text{elsewhere} \end{aligned} \right.
\end{eqnarray*}
$1.$ Calculate $k$.
$2.$ Calculate the marginals density of $X$ and $Y$.
$3.$ Are $X$ and $Y$ independent random variables?
$4.$ Calculate $$\mathbb{P}(X\geq Y), \quad \mathbb{P}(X\geq 1/2 | X+Y\leq 3/4), \quad \text{and} \quad \mathbb{P}(X^{2}+Y^{2}\leq 1)$$
$5.$ Calcule the joint density function of $U=X+Y$ and $V=X-Y$ with their respective marginals.
My approach:
*
*Since $$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)\,dy\,dx=1 \implies \int_{0}^{1}\int_{0}^{1-x}kxy\,dy\,dx=1 \implies k=24.$$
*By definition, we have that $$f_{X}(x)=\int_{0}^{1-x}24xy\,dy=12x(1-x)^{2}, \quad x\in [0,1]$$and $$f_{Y}(y)=\int_{0}^{1-y}24xy\,dx=12y(1-y)^{2}, \quad y \in [0,1]$$
*Since $$f(x,y)\not=f_{X}(x)f_{Y}(y),$$ $X$ and $Y$ are not independent.
*Now, here I have trouble with how to calculate $\mathbb{P}(X\geq 1/2| X+Y\leq 3/4)$ and $\mathbb{P}(X^{2}+Y^{2}\leq 1)$. Can you help me here? I could find that $\mathbb{P}(X\geq Y)=1/2$.
*Here, I don't know how to solve this part. I was trying to apply the Jacobian transformation, but I don't know how to use this method well, so I'm not reaching a good solution.
|
I have a question: can you explain a little more why $\mathbb{P}(X^{2}+Y^{2}\leq 1)=1$?
Considering that, by assumption of the exercise,
$$\mathbb{P}((X,Y) \in A)=1,$$
where $A$ is the triangle $\{x\geq 0,\ y\geq 0,\ x+y\leq 1\}$, I think the answer is self-evident from a drawing: the triangle $A$ lies entirely inside the unit disk $x^{2}+y^{2}\leq 1$.
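A midpoint Riemann-sum check of parts 1 and 4 (the grid size is an arbitrary choice; every point of the triangle satisfies $x^2+y^2 \le (x+y)^2 \le 1$, which is exactly what the drawing expresses):

```python
# Riemann-sum check: with k = 24, f integrates to 1 over the triangle,
# P(X >= Y) = 1/2, and P(X^2 + Y^2 <= 1) = 1 (the triangle lies in the disk).
n = 1000
h = 1.0 / n
total = p_x_ge_y = p_disk = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        if x + y <= 1.0:
            mass = 24.0 * x * y * h * h
            total += mass
            if x >= y:
                p_x_ge_y += mass
            if x * x + y * y <= 1.0:
                p_disk += mass

print(total, p_x_ge_y, p_disk)  # ≈ 1.0, 0.5, 1.0
```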
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3890863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Can information-theory quantify deviations from bijectivity? There are basic ways to qualitatively classify deviations from bijectivity of a function $f: x \to y$, e.g. non-injective, non-surjective, non-existence of an inverse: more generally non-monomorphic, non-epimorphic.
Are there two "natural" quantitative measures of deviation from injectivity and surjectivity?
Is there one natural quantitative measure of "total deviation" from bijectivity (combining both injective/surjectivity violation)? And from isomorphic?
Brain storm: Relative entropy of $\{f^{-1}(y) \}$, i.e. the preimages of $f$, seems one relevant to measuring relative injectivity? Relative measure or cardinality $|Im(f)/Cod(f)|$ seems one way to measure surjectivity? These both invoke additional concepts, e.g. probability or measure. I am sure mathematicians will have clearer and better ways that I can't think of. Any ideas or references will be most welcome.
The above mostly relates to functions between sets. How does one ask and answer the corresponding questions for structure-preserving functions, i.e. functorial functions, such as monotone, equivariant, homomorphic, continuous? (Apologies if this latter is asking too much in one question!).
Thanks!
| Not a metric, but the following simply allows you to partially order functions by injectivity.
There is a partial order on the inverse image equivalence relations induced by the set of all functions $\{f|f:A \to B\}$, i.e. where each class of the relation or quotient on domain $A$ induced by $f$ is defined by $ x\equiv_fy$ iff $f(x)=f(y)$, for $x,y \in A$. The partial order encodes how coarse or fine the equivalence relations are, relative to one another, which in turn encodes nothing but the relative injectivity. A limitation is that this is not a total order: two "equally injective" functions may be incomparable and you would never know.
In terms of a numeric measure, for functions between finite sets, I guess you can also just calculate the entropy on the family of cardinals $(|f^{-1}(y)|)_{y \in B}$, treating the latter as a probability distribution (normalized frequency histogram). Not sure currently how this generalizes.
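The entropy suggestion in the last paragraph can be sketched for finite sets (treating the normalized fiber sizes $|f^{-1}(y)|/|A|$ as a probability distribution is the convention assumed here; under it an injective map attains the maximal entropy $\log|A|$ and a constant map attains $0$):

```python
import math
from collections import Counter

def fiber_entropy(f, domain):
    # entropy of the distribution (|f^-1(y)| / |domain|) over y in the image
    sizes = Counter(f(x) for x in domain).values()
    n = len(domain)
    return -sum((s / n) * math.log(s / n) for s in sizes)

domain = range(4)
injective = lambda x: x           # entropy log 4 (maximal: "most injective")
constant = lambda x: 0            # entropy 0 (least injective)
partial_merge = lambda x: x // 2  # something in between (entropy log 2)

print(fiber_entropy(injective, domain), fiber_entropy(constant, domain),
      fiber_entropy(partial_merge, domain))
```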
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is the space of all sequences on $[0,1]$ sequentially compact? We are in the space of sequences on $[0,1]$ under the metric $d(x,y) = \sum^{\infty}_{k=1} 2^{-k} \mid x_k - y_k \mid $
The hint I've been given is to use a diagonal argument.
I'm thinking to take a sequence of sequences in the space, and by Bolzano-Weierstrass each of these sequences must have a convergent subsequence. If I take the first term of the first sequence, the second of the second etc. then that seems to be a 'diagonal' argument as told to use in the hint - but then I only have one sequence. Am I not supposed to find a sequence of sequences which converges?
| Let $(x_i^{n})$ be a sequence in this space. The diagonal argument gives you integers $n_1,n_2,..$ such that $\lim_{k \to \infty} x_i^{n_k}$ exists for each $i$. Call this limit $x_i$. We would like to show that $(x_i^{k})$ tends to $(x_i)$. Given $\epsilon >0$ choose $N$ such that $ \sum\limits_{k=N+1}^{\infty} \frac 1 {2^{k}} <\epsilon$. Split the sum in $d((x_i^{k}), (x_i))$ into the sum from $1$ to $N$ and the sum from $N+1$ to $\infty$. Can you finish?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891181",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Evaluating $\lim_\limits{x\to 0}(e^{5/x}-6x)^{x/2}$. Is my method correct? I have to solve the following limit which is in indeterminate form. \begin{equation}\lim_\limits{x\to 0}(e^{\frac{5}x}-6x)^{\frac{x}{2}} \end{equation}
So to solve this limit I attempted to do the following, I ended up making use of a substitution, however, I am not sure I did it right... Please let me know what you think (by the way my TI-nspire CX CAS already gave me the solution of $e^{\frac52}$.)
Work
\begin{align} L &=e^{\lim_\limits{x\to 0^+}(\frac{1}2 x\ln(e^{\frac{5}x}-6x))} \\ u&=\frac{1}x \\ L&=e^{\lim_\limits{u\to+\infty} \frac{1}{2}\frac{\ln(e^{5u}-6\frac{1}u)}{u}} \end{align}
From here I then did the following:
\begin{align} L&=e^{\lim_\limits{u\to+\infty} \frac12 \frac{5\frac{e^{5u}-\frac{6}{u^2}}{e^{5u}-\frac{6}{u}}}1}\end{align}
From Here I multiplied and divided by $\frac{\frac{1}{e^{5u}}}{\frac{1}{e^{5u}}}$
Leading me to this:
\begin{equation}L=e^{\frac{5}{2}\lim_\limits{u\to+\infty}\frac{1-\frac{6}{u^2e^{5u}}}{1-\frac{6}{ue^{5u}}}} \end{equation}
From here I deduced that the answer was $L=e^{\frac{5}2}$. Were my steps wrong? Is infinity times infinity indeterminate, requiring further calculation? Please let me know.
| Assuming $x \to 0^+$, your way is a little bit long and can be simplified, but it is fine; indeed, at the end we obtain
$$\frac{1-\frac{6}{u^2 \ e^{5u}}}{1-\frac{6}{u \ e^{5u}}} \to 1$$
since $\frac{6}{u^2 \ e^{5u}}\to 0$ and $\frac{6}{u \ e^{5u}} \to 0$.
As an alternative, more directly we have
$$(e^{\frac{5}x}-6x)^{\frac{x}{2}}=(e^{\frac{5}x})^{\frac{x}{2}}\left(1-\frac{6x}{e^{\frac{5}x}}\right)^{\frac{x}{2}} =e^\frac52\left(1-\frac{6x}{e^{\frac{5}x}}\right)^{\frac{x}{2}} \to e^\frac52 \cdot 1 = e^\frac52$$
where it is important to note that the following
$$\lim_\limits{x\to 0^+}\left(1-\frac{6x}{e^{\frac{5}x}}\right)^{\frac{x}{2}}=1^0=1$$
is not an indeterminate form.
Note that for $x\to 0^-$ by $y=-x \to 0^+$ we obtain
$$\lim_\limits{x\to 0^-}(e^{\frac{5}x}-6x)^{\frac{x}{2}}=\lim_\limits{y\to 0^+}\frac1{\left(6y+\frac1{e^{\frac{5}y}}\right)^{\frac{y}{2}}}=1$$
since
$$\left(6y+\frac1{e^{\frac{5}y}}\right)^{\frac{y}{2}}=\left(6y\right)^{\frac{y}{2}}\left(1+\frac1{6ye^{\frac{5}y}}\right)^{\frac{y}{2}} \to 1\cdot(1+0)^0=1$$
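A numeric check of both one-sided limits (a sketch; for $x>0$ the logarithm is rewritten as $\frac{x}{2}\ln(e^{5/x}-6x) = \frac{x}{2}\left(\frac{5}{x} + \ln(1 - 6x e^{-5/x})\right)$ to avoid overflow):

```python
import math

def log_value_right(x):
    # log of (e^(5/x) - 6x)^(x/2) for small x > 0, overflow-free
    return (x / 2) * (5 / x + math.log1p(-6 * x * math.exp(-5 / x)))

def value_left(x):
    # for x < 0, e^(5/x) is tiny and the base is -6x + e^(5/x) > 0
    return (math.exp(5 / x) - 6 * x) ** (x / 2)

print(math.exp(log_value_right(0.01)))  # ≈ e^(5/2) ≈ 12.1825
print(value_left(-0.001))               # ≈ 1
```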
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$x^2$ with removable discontinuities has bounded variation Define
$$f(x) = \begin{cases}0 & \text{ if } x=1/n \text{ for some } n\in\mathbb{N}
\\x^2 & \text{ else}\end{cases}$$
on $[0, 1].$ I want to show that this has bounded variation. It is straightforward to show that $x^2$ has bounded variation on $[0,1]$, but the discontinuities at $1/n$ cause problems. I was thinking of splitting up the variation over the interval $[0,1]$ into the variation over each subinterval $[1/n, 1/(n+1)]$ as follows:
$$V_{[0,1]}(f) = \sum_{n=1}^\infty V_{[1/n, 1/(n+1)]}(f) = 2\sum_{n=1}^\infty \frac{1}{n^2} <\infty.$$
However, I'm stuck in proving the first equality. In fact, I'm not even completely sure that it is true. Any help would be appreciated.
| According to wiki:
The total variation of a real-valued (or more generally complex-valued) function $f$, defined on an interval $[0, 1] \subset \mathbb {R}$ is the quantity
$$ V_{[0,1]} (f) = \sup_P \sum_{i=0}^{n_P - 1} |f(x_{i+1}) - f(x_i)|,$$
where the supremum runs over the set of all partitions $P$ of the given interval.
Every partition of $[0,1]$ is a subset of the union of some partitions of your intervals: namely, if $P = \{0 = x_0, x_1, \dots, x_n = 1\}$ then $x_1 \ge \tfrac{1}{m}$ for some $m$, and you can add the points $\tfrac{1}{m}, \tfrac{1}{m-1}, \dots, \tfrac{1}{2}$ to $P$ and sort each $x_i$ into the corresponding subinterval, showing that $$V(P) \le \sum_{n = 1}^{m_P} V_{[1/n,1/(n+1)]}(f).$$
Taking $\sup_P$ on the left corresponds to taking $\sup_{m_P}$ on the right, which is the same as changing the finite sum to the series because the total variation is nonnegative. Therefore, you get $$V_{[0,1]}(f) \le \sum_{n = 1}^\infty V_{[1/n,1/(n+1)]}(f).$$
The reversed inequality follows from considering a sequence of partitions $(P_n)$ with $m_P \to \infty$, one possible example being $$P_n = \left\{ 0, \tfrac{1}{n}, (\tfrac{1}{n}+\tfrac{1}{n-1})/2, \tfrac{1}{n-1}, (\tfrac{1}{n-1}+\tfrac{1}{n-2})/2, \dots, 1 \right\}.$$
It appears to me that a similar approach can be used to prove the general statement $$V_{[a,b)}(f) = \sum_{i = 1}^\infty V_{[a_i, b_i)}(f), \quad \bigsqcup_{i = 1}^\infty [a_i, b_i) = [a, b),$$ but I don't know whether this result has a name and whether or not it holds in other measure spaces.
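A rough numeric sanity check (a sketch; the partition below straddles each jump point $1/n$ with a midpoint, and the resulting sums stay bounded, consistent with $V_{[0,1]}(f) \le \sum_n 2/n^2 = \pi^2/3$):

```python
import math
from fractions import Fraction

def f(x):
    # 0 at reciprocals of positive integers, x^2 elsewhere (exact arithmetic)
    if x != 0 and x.numerator == 1:
        return Fraction(0)
    return x * x

N = 200
points = [Fraction(0)]
for n in range(N, 0, -1):
    lo, hi = Fraction(1, n + 1), Fraction(1, n)
    points += [(lo + hi) / 2, hi]   # midpoint, then the jump point 1/n

variation = sum(abs(f(points[i + 1]) - f(points[i])) for i in range(len(points) - 1))
print(float(variation), math.pi**2 / 3)  # the sum stays below pi^2 / 3
```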
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Proving all exponential properties from $b^{x+k}=b^x\cdot b^k$ Let's say that there is a function $f(x)$ which satisfies the following property:
$$f(x+k)=f(x)\space{}\cdot\space{}f(k)$$
In addition, $f(1)=b\gt1$. I am trying to prove the following property without explicitly relying on the fact that $f(x)=b^x$ (circular reasoning) or requiring that $n\in\mathbb{Z}^+$:
$$[f(x)]^n=f(nx)$$
The only way I've been able to prove it is thinking about $nx$ as being $x+x+x+x+\space{}...\space{}$ n times, and then using the assumed property of $f(x)$ to break this up as a product $n$ times which can then be written as an exponent by definition. However, this argument only really makes sense if $n$ is a natural number, but this property should hold for all $n\in\mathbb{R}$. Is there a way of extending or altering this argument so that it is still sensible for any real number value of $n$?
| *
*First, let us examine $f(0)$:
$f(0)=f(0+0)=f(0)f(0)\iff f(0)(f(0)-1)=0$ therefore $f(0)\in\{0,1\}$
If $f(0)=0$ then $f(x)=f(x+0)=f(x)f(0)=0$ and $f$ is the null function, which is not very interesting.
From now on let set $f(0)=1$ and $f(1)=b$.
*
*Let us examine $f(n)$:
$f(n+1)=f(n)f(1)=bf(n)=b^2f(n-1)=\cdots=b^{n+1}f(0)=b^{n+1}$
So by induction (base cases $f(0)$ and $f(1)$ verified), $f(n)=b^n,\ \forall n\in\mathbb N$.
*
*Let us examine $f(-x)$:
$1=f(0)=f(x-x)=f(x)f(-x)\implies f(-x)=\dfrac 1{f(x)}$
In particular $f(-n)=\dfrac 1{f(n)}=\dfrac 1{b^n}=b^{-n}$ and we have extended to all $\mathbb Z$.
*
*Let us examine $f(\frac pq)$:
By the same induction used for $f(n)$ we have $f(nx)=f(x)^n$ for $n$ natural, and use $f(-nx)=\frac 1{f(nx)}$ to extend to all integers.
In particular $b=f(1)=f(\frac qq)=f(\frac 1q)^q$ therefore $f(\frac 1q)=b^{\frac 1q}$
And $f(\frac pq)=f(\frac 1q)^p=b^{\frac pq}$.
Note in the same way we have $f(\frac pqx)=f(x)^{\frac pq}$
*
*Now we use continuity of $f$:
If we don't assume continuity we are stuck with the rationals. So since $\mathbb Q$ is dense in $\mathbb R$, we can extend $f$ to the reals and we have $f(x)=b^x$ and the formula $f(xy)=f(x)^y$ by continuity too from the last note.
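A numeric illustration of the chain of deductions (a sketch that simply takes $f(x) = b^x$ with a hypothetical $b = 3$ and confirms each derived identity in floating point):

```python
import math

b = 3.0
f = lambda x: b ** x

# f(x + k) = f(x) f(k), f(0) = 1, f(-x) = 1/f(x), f(p/q) = b^(p/q)
checks = [
    abs(f(1.2 + 0.7) - f(1.2) * f(0.7)) < 1e-9,
    f(0) == 1.0,
    abs(f(-2.5) - 1 / f(2.5)) < 1e-12,
    abs(f(3 / 7) ** 7 - b ** 3) < 1e-9,
]
print(all(checks))  # True
```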
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Parameters of Gompertz law I need to find the best parameters $B,c$ so that Gompertz law could be good aproximation of life tables.
By the Gompertz law, the force of mortality is given by $\mu(x)=Bc^x$.
We know that $\mu(x)=-\frac{l'_x}{l_x}=-\frac{d}{dx}\ln(l_x)$. So basically:
$$l_x=l_0\exp\left(-\int_0^x \mu(t)\,dt\right)=l_0\exp\left(-\frac{B(c^x-1)}{\ln c}\right).$$
What should I do next? I could try just picking some values from life tables and then hope that $B$ and $c$ will eventually come from the equations but I feel it won't lead me to anything. I don't know any tools to do such things and I'm clueless. Could someone give me some hints?
| Gompertz in its basic form is $y=k_\infty e^{-e^{-(x-\tau)/u}}$, where $\tau$ is the location of the inflection point of the curve and $k_\infty$ is the value of $y$ at infinite time. $u$ is the value of the function at the inflection point divided by the derivative of the function at the inflection point. It can be shown that the value of the function at the inflection point is $k_\infty/e=k_\tau$, and $u=k_\tau/k_\tau'$.
This implies $\frac{dy}{dt}=\frac{y}{u}e^{-(t-\tau)/u}=-\frac{y}{u}\ln{\frac{y}{k_\infty}}$
So $\frac{d (\ln y)}{dt}=\frac{-1}{u}\ln{y} +\frac{1}{u} \ln{k_\infty}$
So if you know the function value and increments of y at each x value, essentially the first derivative, then you can find two of the parameters by linear regression. Once you know $u$ and $k_\infty$, then you can also use linear regression to find $\tau$, though that is often pretty clear from the data, just where the first derivative data flatlines.
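A sketch of that regression recipe on synthetic data (the parameter values $k_\infty = 100$, $\tau = 5$, $u = 2$ and the time grid are arbitrary illustrative choices):

```python
import math

k_inf, tau, u = 100.0, 5.0, 2.0
h = 0.01
ts = [i * h for i in range(1, 1001)]          # t in (0, 10]
lny = [math.log(k_inf) - math.exp(-(t - tau) / u) for t in ts]

# central-difference slope of ln y, regressed against ln y:
# d(ln y)/dt = -(1/u) ln y + (1/u) ln k_inf
xs = lny[1:-1]
ys = [(lny[i + 1] - lny[i - 1]) / (2 * h) for i in range(1, len(lny) - 1)]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

u_est = -1.0 / slope
k_est = math.exp(intercept * u_est)
print(u_est, k_est)  # ≈ 2.0 and ≈ 100.0
```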
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3891878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find two partitions P,Q of $[a,b]$ such that $||Q||<||P||$ but $\underline{S}(f,P)=\underline{S}(f,Q)$ I'm trying to answer the following exercise:
Give an example of a bounded function $f:[a,b]\rightarrow\mathbb{R}$, with two partitions P,Q of $[a,b]$ such that $||Q||<||P||$ but $\underline{S}(f,P)=\underline{S}(f,Q)$
Where $||Q||$ is the norm of Q and $||P||$ is the norm of P; $\underline{S}(f,P)$ and $\underline{S}(f,Q)$ are the inferior sums.
I've tried to solve it with the following attempts, but with none of them I got that $\underline{S}(f,P)=\underline{S}(f,Q)$:
$f: [0,2]\rightarrow \mathbb{R}$ such that $f(x)=e^x \: \forall x\in [0,2] $; with the partitions $P=\{0, \frac{1}{2}, 1, \frac{3}{2}, 2\}$ and $Q=\{0, \frac{1}{4}, \frac{1}{2},\frac{3}{4}, 1, \frac{5}{4},\frac{3}{2}, \frac{7}{4}, 2\}$, where it's clear that $||Q||=\frac{1}{4}<\frac{1}{2}=||P||$, but $\underline{S}(f,P)<\underline{S}(f,Q)$
And I got to the same result with this attempt:
$f: [0,2]\rightarrow \mathbb{R}$ such that $f(x)=x^2 \: \forall x\in [0,2] $; with the partitions $P=\{0, \frac{1}{2}, 1, \frac{3}{2}, 2\}$ and $Q=\{0, \frac{1}{4}, \frac{1}{2},\frac{3}{4}, 1, \frac{5}{4},\frac{3}{2}, \frac{7}{4}, 2\}$, where it's clear that $||Q||=\frac{1}{4}<\frac{1}{2}=||P||$, but $\underline{S}(f,P)\neq\underline{S}(f,Q)$
I know these are really simple functions but I was trying to see if with those I could get to some function that worked (which it's clear it didn't), and I'm guessing that only certain functions are capable of doing so.
If you could help me with an example I'd appreciate it.
Thank you in advance.
| Generally $\underline{S}(f, P)\le \underline{S}(f,Q)$ if $Q$ is a refinement of $P$. But for some special partitions $P$ and $Q$ and some function $f$, $\underline{S}(f, P)$ and $\underline{S}(f,Q)$ may be the same. For example, let
$$ f(x)=\left\{\begin{array}{ll} \frac12 \text{ if }x\in[0,\frac12]\\
\frac32 \text{ if }x\in(\frac12,1]\\
1 \text{ if }x\in(1,\frac32]\\
2 \text{ if }x\in(\frac32,2]
\end{array}\right. $$
be a piecewise constant function. Then for the partitions $P=\{0, \frac{1}{2}, 1, \frac{3}{2}, 2\}$ and $Q=\{0, \frac{1}{4}, \frac{1}{2},\frac{3}{4}, 1, \frac{5}{4},\frac{3}{2}, \frac{7}{4}, 2\}$, we have
$$ \underline{S}(f,P)=\frac12\cdot\frac12+\frac32\cdot\frac12+1\cdot\frac12+2\cdot\frac12=\frac52 $$
and
$$ \underline{S}(f,Q)=\frac12\cdot\frac14+\frac12\cdot\frac14+\frac32\cdot\frac14+\frac32\cdot\frac14+1\cdot\frac14+1\cdot\frac14+2\cdot\frac14+2\cdot\frac14=\frac52. $$
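A quick numerical check of the two sums. Note that the computation above takes the infimum over the half-open pieces $(x_{i-1},x_i]$, on which this step function is constant (with the fourth piece read as $(\frac32,2]$), so evaluating at midpoints suffices:

```python
def f(x):
    # the piecewise constant function from the answer
    if x <= 0.5:
        return 0.5
    if x <= 1.0:
        return 1.5
    if x <= 1.5:
        return 1.0
    return 2.0

def lower_sum(pts):
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        # f is constant on each half-open piece (a, b], so the infimum
        # over (a, b] is just the value at the midpoint
        total += f((a + b) / 2) * (b - a)
    return total

P = [0, 0.5, 1, 1.5, 2]
Q = [0, 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2]
SP, SQ = lower_sum(P), lower_sum(Q)   # both equal 5/2
```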
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Deriving closed form of $\sum_{n=1}^\infty\frac{(-1)^{n+1}\ln(n+1)}{n} $ I wanted to derive closed form of $\sum_{n=1}^\infty\frac{(-1)^{n+1}\ln(n+1)}{n}$ which I converted into a integral,
$$\sum_{n=1}^\infty\frac{(-1)^{n+1}\ln(n+1)}{n}=\int_0^1\int_0^1\frac{u^v}{1+u^v}\text{d}u\text{d}v$$
after some more manipulations I got,
$$\int_0^1\int_0^1\frac{u^v}{1+u^v}\text{d}u\text{d}v=1-\int_0^1\frac{\psi^{(0)}\left(\frac{1}{v}\right)-\psi^{(0)}\left(\frac{1}{2v}\right)-\ln(2)}{v}\text{d}v$$
not sure how to proceed further...
| As you did
$$\int_0^1\frac{u^v}{u^v+1}\,du=\frac{H_{\frac{1}{2 v}}-H_{\frac{1}{2 v}-\frac12}}{2 v}$$
At this point, I am stuck with formal but we can notice that the rhs (our new integrand)
$$f(v)=\frac{H_{\frac{1}{2 v}}-H_{\frac{1}{2 v}-\frac12}}{2 v}\sim \frac{1}{2}- \left(\log (2)-\frac{1}{2}\right)v^{4/5}$$ This approximation matches the function value at the end points and corresponds to the minimum of the infinite norm.
This makes
$$\int_0^1f(v)\,dv \sim
\frac{1}{9} (7-5 \log (2))\approx 0.392696$$ while the exact value is $0.392259$.
For improvement, I think that we need an approximation at least able to reproduce the following values
$$\left(
\begin{array}{ccc}
v & f(v) & f'(v) \\
0 & \frac{1}{2} & -\frac{1}{4} \\
\frac{1}{2} & -1+2\log (2) & \frac{2}{3} \left(\pi ^2-6 (1+\log (2))\right) \\
1 & 1-\log (2) & -\frac{\pi ^2}{12}+\log (2)
\end{array}
\right)$$
Being lazy, for the time being, I used a quintic polynomial matching all the above conditions and obtained as an approximation
$$\frac{1}{720} \left(\pi ^2+588 \log (2)-135\right)\approx 0.392278$$ which is better but not enough.
What is amazing is
$$\frac{1}{10} \left(\psi \left(\frac{4}{9}\right)+\psi
\left(\frac{3}{5}\right)-\psi \left(\frac{1}{4}\right)-\psi
\left(\frac{3}{10}\right)\right)\approx 0.392259435$$ while the exact value is $0.392259418$
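For what it's worth, the original alternating series can be summed numerically to check the quoted value $0.392259\ldots$; averaging two consecutive partial sums accelerates the convergence considerably:

```python
import math

N = 200000
s, prev, sign = 0.0, 0.0, 1.0
for n in range(1, N + 1):
    s += sign * math.log(n + 1) / n
    sign = -sign
    if n == N - 1:
        prev = s   # keep the previous partial sum

val = (prev + s) / 2  # average of two consecutive partial sums
```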
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the condition $\|Z\| < 1$ equivalent to $I - ZZ^{\top} > 0$? As the title says, for a matrix $Z \in \mathbb{R}^{p \times q}$, the condition $\begin{Vmatrix}Z\end{Vmatrix} < 1$ equivalent to $I - ZZ^{\top} > 0$. How can I show the equivalence?
Attempt:
$\begin{Vmatrix}Z\end{Vmatrix} = \sup_{|x| = 1} \begin{Vmatrix}Zx\end{Vmatrix}$
$\implies$ $\begin{Vmatrix}Zx\end{Vmatrix} < 1$ $\implies$ $(Zx)^{\top}(Zx) < 1$ $\implies$ $x^{\top}Z^{\top}Zx < 1$.
Multiplying by the identity matrix on both sides,
$\implies$ $(x^{\top}Z^{\top}Zx)I < I \implies I - (x^{\top}Z^{\top}Zx)I > 0 $.
| You are almost there.
\begin{align}
I-ZZ^T\succ0
&\Leftrightarrow x^T(I-ZZ^T)x>0 \text{ for every unit vector } x\\
&\Leftrightarrow 1-\|Zx\|^2>0 \text{ for every unit vector } x\\
&\Leftrightarrow \|Zx\|<1 \text{ for every unit vector } x\tag{1}\\
&\Leftrightarrow \|Z\|<1\tag{2}.
\end{align}
In $(1)\Rightarrow(2)$, we have used the fact that the value of $f:x\mapsto\|Zx\|$ attains maximum on the unit sphere because the unit sphere is compact and $f$ is continuous.
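A small sanity check of the equivalence in the $2\times2$ case, using the closed-form eigenvalues of a symmetric $2\times2$ matrix (the two test matrices are arbitrary):

```python
import math

def sym_eigs(a, b, c):
    # eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
    m = (a + c) / 2
    r = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return m - r, m + r

def norm_lt_1_and_posdef(Z):
    (z11, z12), (z21, z22) = Z
    # spectral norm: ||Z|| = sqrt(largest eigenvalue of Z^T Z)
    g = (z11 ** 2 + z21 ** 2, z11 * z12 + z21 * z22, z12 ** 2 + z22 ** 2)
    norm = math.sqrt(sym_eigs(*g)[1])
    # M = I - Z Z^T, checked for positive definiteness
    m = (1 - z11 ** 2 - z12 ** 2, -(z11 * z21 + z12 * z22), 1 - z21 ** 2 - z22 ** 2)
    return norm < 1, sym_eigs(*m)[0] > 0

ok = norm_lt_1_and_posdef([[0.3, 0.4], [0.1, 0.2]])    # expect (True, True)
bad = norm_lt_1_and_posdef([[1.2, 0.0], [0.0, 0.5]])   # expect (False, False)
```

The two conditions agree in both cases, as the chain of equivalences predicts.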
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Example where an inverse function does not equal the elements I'm trying to find an example of a function $f: A \to B$ and $X \subset A$ so that $f^{-1}(f(X)) \ne X$, and similarly where $Y \subset B$ so that $f(f^{-1}(Y)) \ne Y$.
I thought to have $f = x^2$, which has no inverse, thus making it vacuously true that $f^{-1}(f(X)) \ne X$ and $f(f^{-1}(Y)) \ne Y$, but that seems like a copout. Any help is greatly appreciated.
| Consider $f : \mathbb{R} \rightarrow \mathbb{R}$ defined for all $x \in \mathbb{R}$ by $f(x)=0$, and $X = \lbrace 0 \rbrace$ and $Y = \mathbb{R}$.
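The same phenomenon shows up in a finite toy version of this constant map (the sets here are chosen just for illustration): $f^{-1}(f(X))$ picks up every point sharing an image with $X$, and $f(f^{-1}(Y))$ drops the points of $Y$ outside the image.

```python
def image(f, S):
    return {f(x) for x in S}

def preimage(f, domain, T):
    return {x for x in domain if f(x) in T}

A = {0, 1}            # domain
f = lambda x: 0       # constant map, as in the answer

X = {0}
back = preimage(f, A, image(f, X))      # {0, 1}, strictly bigger than X
Y = {0, 1}                              # subset of the codomain
forward = image(f, preimage(f, A, Y))   # {0}, strictly smaller than Y
```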
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
subtracting exponents with different bases and finding all the prime factors of it Here is the question:
Find all prime factors of $3^{18}-2^{18}$
No Calculators allowed, and it is a competition problem, fastest answers are the best.
How can I approach this problem? If it was just $3^{18}$, it would just be 3 and likewise for 2. What do I do?
| You are supposed to see it as a difference of squares and cubes to start. $3^{18}-2^{18}=(3^9-2^9)(3^9+2^9)$ and you can factor each of these as a sum or difference of cubes. That gets the numbers down to primes you may know, or you may need to do some trial division.
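The factorization sketched above can be confirmed by brute force; this little trial-division routine is only a check, not part of the intended no-calculator solution:

```python
def prime_factors(n):
    # plain trial division, fine for numbers of this size
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

n = 3 ** 18 - 2 ** 18
factors = sorted(prime_factors(n))
# consistent with (3^9 - 2^9)(3^9 + 2^9) = (19 * 1009)(5 * 7 * 577)
```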
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that three numbers form an arithmetic progression
The numbers $a,b$ and $c$ form an arithmetic progression. Show that the numbers $a^2+ab+b^2,a^2+ac+c^2,b^2+bc+c^2$ also form an arithmetic progression.
We have that $2b=a+c$ (we know that a sequence is an arithmetic progression iff $a_n=\dfrac{a_{n-1}+a_{n+1}}{2}\text{ } \forall \text{ }n\ge2$). I am stuck here and I would be very grateful if you could give me a hint.
| More hint:
\begin{align}&\quad(a^2+ab+b^2) + (b^2+bc + c^2) \\&= a^2+c^2 +2b^2 + b(a+c) \\&= a^2 + c^2 + 2b^2 + b(2b) \\&= a^2 + c^2 + 4b^2\\&=a^2+c^2+(a+c)^2\end{align}
In the worst case you can just substitute $b = \frac {a+c}2$.
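A quick brute-force confirmation of the identity $2(a^2+ac+c^2)=(a^2+ab+b^2)+(b^2+bc+c^2)$ over random arithmetic progressions (purely a sanity check; the ranges are arbitrary):

```python
import random

random.seed(1)
for _ in range(1000):
    a = random.randint(-50, 50)
    d = random.randint(-50, 50)
    b, c = a + d, a + 2 * d          # a, b, c in arithmetic progression
    p = a * a + a * b + b * b
    q = a * a + a * c + c * c
    r = b * b + b * c + c * c
    assert 2 * q == p + r            # middle term is the average
ok = True
```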
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3892856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
} |
What is the sequence represented by the exponential generating function $e^{x^4}$? Problem
For the exponential generating function $e^{x^{4}}$, give a formula in closed form for the sequence $\{a_n:n \geq 0\}$ it represents.
My Attempt
I have that
$$e^{x^{4}}=\sum_{n=0}^{\infty}\frac{(4n)!\,x^{4n}}{n!\,(4n)!},$$
which leads me to believe that $a_n=\frac{(4n)!}{n!}$. However, the series representation is not in the correct form, because it is not of the form $\sum_{n=0}^{\infty}a_n\frac{x^n}{n!}$. Is my answer still correct?
| Define:
$$a_n=\frac{n!}{(n/4)!},\ \ {\rm for} \ \ n\equiv0\pmod 4$$
and
$$a_n=0\ \ {\rm otherwise}$$
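Since the EGF represents the sequence via $e^{x^4}=\sum_{n\ge0} a_n\frac{x^n}{n!}$, the ordinary Taylor coefficient $[x^{4m}]\,e^{x^4}=\frac1{m!}$ must be multiplied by $n!$, giving the nonzero terms $a_{4m}=\frac{(4m)!}{m!}$. A check with exact rationals:

```python
from fractions import Fraction
from math import factorial

# ordinary Taylor coefficients of e^{x^4} up to x^12
N = 12
coeff = [Fraction(0)] * (N + 1)
for k in range(N // 4 + 1):
    coeff[4 * k] = Fraction(1, factorial(k))

# EGF sequence: a_n = n! * [x^n] e^{x^4}
a = [factorial(n) * coeff[n] for n in range(N + 1)]
# a[0] = 1, a[4] = 4!/1! = 24, a[8] = 8!/2! = 20160, all others 0
```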
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3893034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence of a series defined with factorials This is the series in question.
$$\sum_{n=1}^{\infty} a_n \quad \text{where} \quad a_n = \frac{(2n)!}{4^nn!(n+1)!}$$
Naturally, I tried the Ratio test, but it turned out $L = 1$ so the test was inconclusive. In such series, if the Ratio test doesn't work, what can I try?
| Hint:
By the duplication formula for Gamma function you have
$$
\left( {2\,n} \right)! = {{4^{\,n} } \over {\sqrt \pi }}\left( {n - 1/2} \right)!n!
$$
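As a numerical aside (using a fact beyond the convergence question): $\frac{(2n)!}{n!(n+1)!}$ is the Catalan number $C_n$, and $\sum_{n\ge0}C_n x^n=\frac{1-\sqrt{1-4x}}{2x}$ evaluated at $x=\frac14$ gives $2$, so the sum starting at $n=1$ should equal $1$. The term ratio $\frac{a_{n+1}}{a_n}=\frac{2n+1}{2(n+2)}$ lets us accumulate the series cheaply:

```python
N = 100000
s, a = 0.0, 0.25          # a_1 = 2!/(4 * 1! * 2!) = 1/4
for n in range(1, N + 1):
    s += a
    a *= (2 * n + 1) / (2 * (n + 2))   # ratio a_{n+1}/a_n
# the tail decays like n^(-3/2), so the partial sum is close to 1
```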
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3893247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
an inequality over integers Let $x$, $y$ and $z$ be integers different from 1.
Prove that if $x+y\leq xy$ and $y+z \leq yz$ then $x+z\leq xz$
My attempt:
$x+2y+z\leq (x+z)y$ $~~$so $2y \leq (x+z)(y-1)$
and I'm stuck here.
I need just some hint.
Thanks.
| We have
$$\left\{\begin{aligned} & x+y \leqslant xy \\& y+z \leqslant yz \end{aligned}\right. \Rightarrow \left\{\begin{aligned} & (x-1)(y-1) \geqslant 1\\& (y-1)(z-1) \geqslant 1\end{aligned}\right. \Rightarrow (x-1)(z-1)(y-1)^2 \geqslant 1 > 0.$$
Therefore
$$ (x-1)(z-1) > 0 $$
But $x,\,y,\,z$ are integers, so
$$(x-1)(z-1) \geqslant 1 \Rightarrow x+z \leqslant xz.$$
Done.
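The claim is also small enough to confirm exhaustively over a range of integers (a sanity check, not a proof):

```python
R = range(-8, 9)
checked = 0
for x in R:
    for y in R:
        for z in R:
            if 1 in (x, y, z):       # the hypothesis excludes the value 1
                continue
            if x + y <= x * y and y + z <= y * z:
                assert x + z <= x * z
                checked += 1
```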
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3893586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Metrics on $SO(n+1)/SO(n)$ I am having a question about finding the metric on $SO(n+1)/SO(n)$ such that
$\pi:SO(n+1)\rightarrow SO(n+1)/SO(n)$ is a Riemannian submersion, which $SO(n)$ are equipped with the standard bi-invariant metric. I try to use Schur's Lemma to show that it is isometric to the canonical metric on $S^n$ multiplied by a constant. I am confusing about how to find this constant directly.
| First, you are right that there is a unique (up to scaling) metric on $S^{n-1}$ that makes $\pi$ into a Riemannian submersion, as I explain in my answer here. Note that the isotropy action in this case is transitive on the unit sphere, so is definitely irreducible.
Now that we know there is a constant we can scale by, let's figure it out. I'm not exactly sure what you mean by the "standard bi-invariant metric" on $SO(n)$, but the bi-invariant metric I like to use is defined on $T_I SO(n)$ by $\langle X,Y\rangle =-Tr(XY)$.
The function $\pi:SO(n)\rightarrow S^{n-1}$ I'm going to use is $\pi(A) = A_n$ where $A_n$ denotes the last column of $A$. This means that the preimage of the point $p=(0,...,0,1)\in S^{n-1}$ corresponds to matrices of the block form $diag(B,1)$ with $B\in SO(n-1)$.
Consider the tangent vector $\alpha'(0)\in T_p S^{n-1}$ with $\alpha(t) = (0,....,\sin(t),\cos(t))$. Note that $\|\alpha'(0)\| = 1$ in the usual metric on $S^{n-1}$.
Now, the identity matrix $I\in SO(n)$ is an element of $\pi^{-1}(p)$, so let's find a tangent vector in $(\ker \pi_\ast)^\bot\subseteq T_I SO(n) = \mathfrak{so}(n)$ which projects to $\alpha'(0)$. (The notation $\pi_\ast$ refers to the differential $\pi_\ast: T_I SO(n)\rightarrow T_p S^{n-1}$.) Then we can compute the length of this tangent vector to find out the scaling we need to have a Riemannian submersion.
To that end, first note that because $\pi$ is constant on the orbit $I \,\cdot SO(n-1)$, it follows that $\ker \pi_\ast$ contains $\mathfrak{so}(n-1)$, embedded in $\mathfrak{so}(n)$ as matrices with the block form $diag(B,0)$ with $B\in \mathfrak{so}(n-1)$. Since $\pi$ is a submersion, the kernel of $\pi_\ast$ cannot be any larger, so $\ker \pi_\ast = \mathfrak{so}(n-1)$. A reasonably straightforward calculation now shows that $(\ker \pi_\ast)^\bot = \{M = (M)_{ij}\in \mathfrak{so}(n): M_{ij} = 0$ if both $i,j < n\}.$ In other words, $(\ker \pi_\ast)^\bot$ consists of matrices of the form $$M = \begin{bmatrix} 0 & \cdots & 0 & m_{1,n}\\ 0 & \cdots & 0 & m_{2,n}\\
\vdots & & \ddots & \vdots\\ -m_{1,n} & -m_{2,n} & \cdots & 0\end{bmatrix}.$$
Now, consider $\gamma:\mathbb{R}\rightarrow SO(n)$ with $\gamma(t) = diag\left(1,...,1, \begin{bmatrix} \cos t & \sin t\\ -\sin t & \cos t\end{bmatrix}\right)$. Then $\gamma(0) = I$ and $\gamma'(0)$ is a matrix whose only non-zero entries are $\gamma'(0)_{n-1,n} = -\gamma'(0)_{n,n-1} = 1$. It follows that $\gamma'(0)\in (\ker\pi_\ast)^\bot.$
Finally, note that $\pi \circ \gamma = \alpha$, so $\pi_\ast(\gamma'(0)) = \alpha'(0)$.
Now, an easy calculation shows that $\langle \gamma'(0),\gamma'(0)\rangle = 2$. Since $\langle \alpha'(0), \alpha'(0)\rangle = 1$, we see that the submersion metric on $S^{n-1}$ is the usual metric scaled by a factor of $2$.
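The final trace computation can be spelled out numerically for, say, $n=4$ (0-indexed entries): $\gamma'(0)$ has $\pm1$ in the bottom-right $2\times2$ block, and $\langle X,X\rangle=-\operatorname{Tr}(X^2)=2$.

```python
n = 4
# gamma'(0) for n = 4: +1 at (n-2, n-1), -1 at (n-1, n-2), zeros elsewhere
M = [[0.0] * n for _ in range(n)]
M[n - 2][n - 1] = 1.0
M[n - 1][n - 2] = -1.0

# bi-invariant inner product <X, Y> = -Tr(XY), applied with X = Y = gamma'(0)
MM = [[sum(M[i][k] * M[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
inner = -sum(MM[i][i] for i in range(n))   # equals 2
```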
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3893739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Is the Oval (based on the Ptolemy inequality) known? It has the property of enclosing quadrilaterals for which the ratio of the product of the diagonals to the sum of the products of the two pairs of opposite sides is constant $(e<1)$. The curve is from a family defined by the Ptolemy Inequality
To rope in the Ptolemy-inequality oval, I took three points on a generating circle of radius $a=1$ and the fourth one outside the circle
$$(-1,0),(0,-1),(1,0),(x,y)$$
as particular vertices of a non-cyclic quadrilateral. The ratio $e$ defines its equation.
$$ \dfrac{\sqrt 2 \sqrt{x^2+(1+y)^2}}{\sqrt{y^2+(x+1)^2} + \sqrt{y^2+(x-1)^2}} =e<1 \tag 1 $$
Special case $e=1$ is the circle enclosing cyclic quadrilaterals that have the property given by Ptolemy theorem. A set of non-cyclic quadrilaterals can be inscribed in this oval shape. In this drawing $ e=0.95; $
Some shapes for other $e$ values
Further simplification yields a fourth degree algebraic curve:
$$\left(-a^4-2 a^3 y+a^2 \left(2 \left(e^2-1\right) x^2-2 y^2\right)-2 a y \left(x^2+y^2\right)-\left(x^2+y^2\right)^2\right)+\frac{\left((a+y)^2+x^2\right)^2}{2 e^2}=0$$
| Just a comment because I want to insert a picture
You can find it here: https://mathcurve.com/courbes3d/crepe/crepe.shtml . Sorry, the link is in French.
Hope it helps you !
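One can also check numerically that the displayed quartic (with $a=1$) is consistent with the defining ratio $(1)$: pick a random point, compute $e$ from $(1)$, and plug both into the quartic (points with tiny $e$ are skipped to avoid dividing by almost zero):

```python
import math
import random

random.seed(0)
a, worst = 1.0, 0.0
for _ in range(200):
    x, y = random.uniform(-2, 2), random.uniform(-2, 2)
    # the ratio e from equation (1); the denominator is always >= 2
    num = math.sqrt(2) * math.sqrt(x * x + (1 + y) ** 2)
    den = math.sqrt(y * y + (x + 1) ** 2) + math.sqrt(y * y + (x - 1) ** 2)
    e = num / den
    if e < 1e-3:
        continue
    r2 = x * x + y * y
    quartic = (-a ** 4 - 2 * a ** 3 * y + a * a * (2 * (e * e - 1) * x * x - 2 * y * y)
               - 2 * a * y * r2 - r2 * r2) + ((a + y) ** 2 + x * x) ** 2 / (2 * e * e)
    worst = max(worst, abs(quartic))
# worst stays at floating-point roundoff level
```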
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3893934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proving $\sqrt[n]{n} < 2-\frac{1}{n}$ inductively
I have already done the first part of the problem. For the second part they give me the following hint: Use (1) with $a =\frac{n-1}{n}$, then take $n$th roots.
I'm actually quite confused. Rewriting the expression I have that $2-\frac{1}{n}=1+\frac{n-1}{n}=1+a$. Thus $\sqrt[n]{n} < 2-\frac{1}{n}$ is equivalent to $\sqrt[n]{n} < 1+a$, i.e. to $n<(1+a)^n$. From now on I don't know how to continue with the exercise. I have made several attempts starting from the last expression but it is not clear to me
| Using I) you get
$$\left(1+\frac{n-1}{n}\right)^n>1+n\frac{n-1}{n}\to \left(\frac{2n-1}{n}\right)^n>n$$
$$\frac{2n-1}{n}>n^{1/n}\to 2-\frac1n>n^{1/n}$$
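The inequality is easy to spot-check numerically for many $n$ (floating point, so just a sanity check; it holds for $n\ge2$):

```python
# n^(1/n) < 2 - 1/n for every n from 2 up to 1999
ok = all(n ** (1.0 / n) < 2 - 1.0 / n for n in range(2, 2000))
```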
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3894061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Is the "ancestor" relationship impossible to define in first-order logic? Definition: Given a binary relationship $R$, an ancestor relationship $R^*$ exists between $a$ and $b$ iff there is a chain of relationships $R$ connecting $a$ and $b$, for example, $Rax$, $Rxy$ and $Ryb$.
$R^*$ is trivial to define in logic programming (whose logical aspect is often said to be a subset of first-order logic). However, a paper I was reading claimed that $R^*$ cannot be defined in first-order logic. Is this true?
A Prolog definition of ancestor given parent:
$$
\begin{align}
ancestor(X, Y) & \leftarrow parent(X, Y)
\\
ancestor(X, Y) & \leftarrow parent(X, Z) \land ancestor(Z, Y)
\end{align}
$$
| The main reason is that First-Order Logic uses Tarskian semantics and logic programming uses Herbrand semantics. This has various consequences when it comes to expressiveness and other properties like compactness and completeness. For more details, see for example The Herbrand Manifesto or Herbrand Semantics, where your example (transitive closure) is discussed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3894192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Branch and Bound algorithm on an integer problem When performing the branch and bound algorithm on an integer problem, multiple integer feasible solutions may be found.
Generally, is the second best integer feasible solution found during the branch and bound process
necessarily the second best integer solution of the Integer Problem??
| No, a second-best solution might be pruned away and never encountered. A simple modification to find the best two solutions is to keep a pair of solutions (instead of just one) as the incumbent, updating the incumbent whenever a solution is found that is better than the worse one. This idea extends to the $k$ best solutions.
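A minimal sketch of that incumbent-pair idea (all names here are made up; a real branch-and-bound would call `offer` at every feasible solution and compare node relaxation bounds against `prune_bound`):

```python
import heapq

class KBestIncumbent:
    """Keep the k best (lowest-objective) feasible solutions seen so far."""

    def __init__(self, k):
        self.k = k
        self._heap = []  # max-heap over kept solutions, via negated objective

    def offer(self, obj, sol):
        if len(self._heap) < self.k:
            heapq.heappush(self._heap, (-obj, sol))
        elif obj < -self._heap[0][0]:
            heapq.heapreplace(self._heap, (-obj, sol))

    def prune_bound(self):
        # a node may be pruned only if its relaxation bound cannot
        # beat the WORST of the k kept solutions
        return -self._heap[0][0] if len(self._heap) == self.k else float("inf")

    def best(self):
        return sorted((-no, s) for no, s in self._heap)

inc = KBestIncumbent(2)
for obj, sol in [(5, "a"), (3, "b"), (7, "c"), (1, "d")]:
    inc.offer(obj, sol)
# inc.best() now holds the two best solutions, objectives 1 and 3
```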
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3894349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $I=\langle x+1, x^2+1\rangle$ is maximal in $\mathbb Z[x]$. I’m trying to show this by showing that $\mathbb Z[x]/I$ is a field. Since in this ring $x=-1$ and so $x^2=1$, but also $x^2=-1$ from which it follows that $1=-1$ and so it is probably isomorphic to $\mathbb F_2$? But I can’t find a homomorphism from $\mathbb Z[x]\ \longrightarrow\ \mathbb F_2$ with kernel $I$.
| Very useful exercise: If $R$ is a commutative ring and $r_1,\ldots,r_n\in R$, then
$$R/\langle r_1,\ldots,r_n\rangle\cong (R/\langle r_1\rangle)/\langle\bar{r_2},\ldots,\bar{r_n}\rangle.$$
Applying this here makes the exercise very easy; we have $R=\Bbb{Z}[x]$ and $r_1=x+1$ and $r_2=x^2+1$. Then
$$\Bbb{Z}[x]/\langle x+1,x^2+1\rangle\cong(\Bbb{Z}[x]/\langle x+1\rangle)/\langle\overline{x^2+1}\rangle.$$
Of course $\Bbb{Z}[x]/\langle x+1\rangle\cong\Bbb{Z}$ by mapping $x$ to $-1$. Then $x^2+1$ is mapped to $(-1)^2+1=2$ and so
$$(\Bbb{Z}[x]/\langle x+1\rangle)/\langle\overline{x^2+1}\rangle\cong\Bbb{Z}/\langle2\rangle=\Bbb{F}_2.$$
This is a field, and so this shows that the original ideal is maximal.
Alternatively, you mention that you already suspect that the quotient is isomorphic to $\Bbb{F}_2$, but cannot find a homomorphism $\Bbb{Z}[x]\ \longrightarrow\ \Bbb{F}_2$ with kernel $I$. Note that such a homomorphism is determined entirely by where $x$ is mapped. So $x$ must map to some element of $\Bbb{F}_2$ such that $x+1$ and $x^2+1$ are mapped to $0$. There aren't many candidates; you just have to check that this does indeed work.
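For that last paragraph: the candidate map is evaluation at $x=1$ composed with reduction mod $2$, which sends a polynomial to the sum of its coefficients mod $2$, and it does kill both generators:

```python
def ev(coeffs):
    """Image in F_2 of the integer polynomial with the given coefficients
    (constant term first) under x |-> 1 followed by reduction mod 2."""
    return sum(coeffs) % 2

gen1 = ev([1, 1])      # x + 1   |-> 1 + 1 = 0 in F_2
gen2 = ev([1, 0, 1])   # x^2 + 1 |-> 1 + 1 = 0 in F_2
unit = ev([1])         # 1 |-> 1, so the map is onto F_2
```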
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3894469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Flipping a Coin 10 Times and Getting a Sequence of Heads I toss a fair coin $10$ times resulting in a sequence of heads and tails. Let $X$ be the number of times that the sequence $HH$ appears. $HH$ appears thrice here.
Solution:
The example goes: say we flip a coin $10$ times and get all heads; then $HH$ appears $9$ times. (I would write it out, but my LaTeX skills are not up to par.)
And we see $P(HH) = \frac{1}{4}$ because we have $.5$ for heads and tails for the two coins.
$X = HH$ appears.
This is where I don't understand what happens: $\sum_{n=1}^{9}P(HH \text{ at position } n) = \frac{9}{4}$.
My thought was that you can factor out $P(HH)$ because it is always $\frac{1}{4}$, but I'm not sure that's right. Any help would be awesome!
| Let $X_j$ be the indicator random variable of HH appearing at position $j$ for $j=1,2,\dots, 9$. Notice that these aren't quite independent from each other so I don't know if the binomial solution is entirely accurate, although it gives the correct expected value. But we can anyway use linearity of the expectation. As $X=\sum_{j=1}^9 X_j$, in other words $X$, which is the total number of appearances of HH, is the sum of the ones that did occur, we get that
$$\mathbb{E}[X] = \mathbb{E} \left[\sum_{j=1}^9 X_j \right] =\sum_{j=1}^9 \mathbb{E}[X_j] = \sum_{j=1}^9 \frac{1}{4} = \frac{9}{4}.$$
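A quick Monte Carlo check of $\mathbb{E}[X]=\frac94$ (simulation parameters arbitrary; fixed seed for reproducibility):

```python
import random

random.seed(42)
trials = 200000
total = 0
for _ in range(trials):
    flips = [random.randint(0, 1) for _ in range(10)]   # 1 = heads
    # count occurrences of HH at the 9 adjacent positions
    total += sum(1 for j in range(9) if flips[j] == 1 and flips[j + 1] == 1)
est = total / trials   # should be close to 9/4 = 2.25
```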
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3894824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why is $e$ irrational? I'm not looking for a proof, but rather an explanation, because I know there's something wrong with my thinking.
So, I know that
$$ e = \lim_{n\to\infty} (1+\dfrac{1}{n})^n $$
And also
$$ e = \sum_{k=0}^{\infty} \dfrac{1}{k!} $$
And I'm confused as to why $e$ can be irrational, since both of those definitions are a rational number(?).
I know that the rationals are closed under addition, so I'm confused, as both of these can be rearranged into expressions built from rationals (e.g. a sum of rationals, or $(\dfrac{n+1}{n})^n$). So I guess my question is: why do these rationals converge to an irrational?
Thanks.
| Just because every partial (finite) sum is a rational number, that does not mean the limit is.
Take the well known constant as an example: $\pi = 3+0.1+0.04+...$
Every partial sum is rational, but the limit is a well known irrational number.
Similarly, for every finite natural number $n$, $1/n > 0$ but the limit $\lim_{n\to\infty} 1/n \not > 0$.
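The same point can be made concretely with $e=\sum 1/k!$ itself: every partial sum is an exact rational, yet the limit is irrational. With exact rational arithmetic:

```python
from fractions import Fraction
import math

s = Fraction(0)
fact = 1
for k in range(15):
    if k > 0:
        fact *= k
    s += Fraction(1, fact)   # every partial sum is an exact rational

err = abs(float(s) - math.e)  # ...yet they converge to the irrational e
```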
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Discrete Mathematics - Modular arithmethics I've been trying to solve a task for some while now but I'm currently very stuck.
For the integers $a$ and $b$ it applies that $b \equiv a \pmod{91}$ and $\gcd (a, 91) = 1$. Determine a positive number $k> 1$ such that $b^k \equiv a \pmod{91}.$
I know that $b^{\phi(91)} \equiv 1\pmod{91}$ and that $\phi(91) = 72$.
After this I'm pretty empty. Would anyone be so kind to guide me or help me in the right direction?
| Since we have $a\equiv b\mod 91$ and $\varphi(91)=72$, together with $(a,91)=(b,91)=1$ giving $b^{72}\equiv 1\mod 91$ by Euler's theorem, we can say $b\cdot b^{72}\equiv b\cdot 1\equiv a\equiv b^{73}\mod 91$.
Thus $b^k\equiv a\mod 91$ using $k=\varphi(91)+1$.
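Since $b\equiv a\pmod{91}$, we can verify $b^{73}\equiv a\pmod{91}$ for every residue coprime to $91$:

```python
from math import gcd

for a in range(1, 91):
    if gcd(a, 91) != 1:
        continue
    b = a  # any b with b ≡ a (mod 91) gives the same residue
    assert pow(b, 73, 91) == a % 91   # k = phi(91) + 1 = 73 works
verified = True
```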
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find the relationship between $x_0$ and $y_0$ that ensures $x(t) → 0$ as $t → ∞$ If the roots of the auxiliary equation of a second order linear homogeneous
ODE are $k_1 > 0$ and $−k_2 < 0$ then the solution is
$$x(t) = Ae^{k_1t}+ Be^{−k_2t}.$$
For most choices of initial conditions
$x(0) = x_0$, $\dot x(0) = y_0$
we will have that $x(t) → ±∞$ as $t → ∞$. However, there are some special
initial conditions for which $x(t) → 0$ as $t → ∞$. Find the relationship between
$x_0$ and $y_0$ that ensures this
My working so far is $x_0 = A + B$ and $y_0 = Ak_1 - Bk_2$ and I'm really unsure of how to tackle this question further
| If
$x(t) \to 0 \; \text{as} \; t \to \infty, \tag 1$
we have
$A = 0, \tag 2$
lest the term
$Ae^{k_1t} \to \infty \tag 3$
and dominate
$x(t) = Ae^{k_1t} + Be^{-k_2t} \tag 4$
as $t$ increases without bound. (We note here that
$Be^{-k_2t} \to 0 \; \text{as} \; t \to \infty.) \tag 5$
In light of (2),
$x(0) = x_0 = B \tag 6$
and
$\dot x(0) = y_0 = -Bk_2; \tag 7$
thus the relationship 'twixt $x_0$ and $y_0$ is
$x_0 = B = -\dfrac{y_0}{k_2}, \tag 8$
that is,
$y_0 = -k_2 x_0. \tag 9$
It is also easy to see that this equation forces $A = 0$, for
$x_0 = A + B \Longrightarrow k_1A - k_2B = y_0 = -k_2x_0 = -k_2A - k_2B \Longrightarrow k_1A = -k_2A$
$\Longrightarrow (k_1 + k_2)A = 0 \Longrightarrow A = 0, \tag{10}$
since
$k_1 + k_2 > 0; \tag{11}$
thus, (9) implies
$x(t) \to 0 \; \text{as} \; t \to \infty. \tag{12}$
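The relationship is easy to confirm by solving the $2\times2$ system for $A$ and $B$ directly: $A=\frac{y_0+k_2x_0}{k_1+k_2}$, which vanishes exactly when $y_0=-k_2x_0$ (the sample rates below are arbitrary):

```python
def coefficients(x0, y0, k1, k2):
    # solve A + B = x0 and k1*A - k2*B = y0 for A and B
    A = (y0 + k2 * x0) / (k1 + k2)
    return A, x0 - A

k1, k2 = 3.0, 2.0                              # sample rates
A, B = coefficients(1.5, -k2 * 1.5, k1, k2)    # special data: y0 = -k2*x0
A2, _ = coefficients(1.5, 0.0, k1, k2)         # generic data: growing mode survives
```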
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Joint distribution of order statistics, need some more details? I was reading about how to compute the joint distribution of an order statistics but I don't really follow the beginning. How do we arrive to the formula?
$f_{X_{(k)}}(x) = \frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k}f_X(x)$
(it is not very detailed and I am bad in probabilities).
| This is strictly meant to give you some intuition behind the following equation:
$f_{X_{(k)}}(x) = \frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k}f_X(x).$
Let's break it down piece-by-piece. $f_{X_{(k)}}(x) $ is the PDF of the $k$th order statistic, i.e., if you draw $n$ independent identically distributed random variables from some distribution, arrange them in ascending order by their size, we are looking at how the $k$th one is distributed ($k$th smallest). Informally, it is the "probability" that the $k$th order statistic is $x$. Let's come back to $\frac{n!}{(k-1)!(n-k)!}$ in a bit. Next we have $[F_X(x)]^{k-1}$, which is the CDF of the original distribution raised to the $(k-1)$th power---the idea here is that in order for the $k$th order statistic to be $x$, then we need $k-1$ of the random variables to be less than or equal to $x$. The way we measure this probability is with the CDF and because our random variables are independent, we can just multiply these probabilities. The next term, $[1-F_X(x)]^{n-k}$, is similar; it gives the probability that $n-k$ of the random variables are greater than $x$. At the end, we have $f_X(x)$ which is just the "probability" that one of the random variables is $x$. Returning to $\frac{n!}{(k-1)!(n-k)!}$, this is combinatorial bookkeeping. The $n!$ gives the number of ways that the $n$ random variables might be ordered, and the denominator says we aren't concerned about the actual order of the first $k-1$ or the last $n-k$ random variables.
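For a concrete check: with $n$ i.i.d. Uniform$(0,1)$ draws, the formula above reduces to a Beta$(k,\,n-k+1)$ density, whose mean is $\frac{k}{n+1}$. A short simulation (parameters arbitrary, fixed seed):

```python
import random

random.seed(7)
n, k, trials = 5, 2, 200000
s = 0.0
for _ in range(trials):
    draws = sorted(random.random() for _ in range(n))
    s += draws[k - 1]        # the k-th smallest of the n draws
mean = s / trials            # should be close to k/(n+1) = 1/3
```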
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unsure about double integral bounds of integration in polar coordinates I'm trying to convert the bounds of integration to polar coordinates but I'm stumped on one of the bounds.
$$\int_{x=0}^{6}\int_{y=\frac{1}{\sqrt{3}}x}^{\sqrt{8x-x^2}}\sqrt{x^2+y^2}\,dy\,dx$$
The only thing that left me stumped was converting $y=\frac{1}{\sqrt{3}}x$ to polar.
Right now I have $\int_{\theta=0}^{\frac{\pi}{6}}\int_{?}^{8\cos{\theta}}r^2\,dr\,d\theta$
Where do I go from here? It's nowhere in my notes and I'm having a tough time finding anything online about it. Thank you!
| For $r,$ your lower bound is simply zero. Please sketch and see.
You have a circle with radius $4$ at center $(4,0)$. You are integrating over the region above the line $y = \frac {x} {\sqrt3}$ and below the circle. In polar form the radius will go from zero to $8 \cos \theta$. The bound on $\theta$ ensures you are integrating over the circular segment.
EDIT: I just noticed bound of your $\theta$. That has to be from $\pi/6 \,$ to $\pi/2$.
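Both versions can be compared numerically with midpoint rules (grid sizes arbitrary); the polar form evaluates in closed form to $\int_{\pi/6}^{\pi/2}\frac{(8\cos\theta)^3}{3}\,d\theta=\frac{320}{9}\approx 35.556$:

```python
import math

def f(x, y):
    return math.sqrt(x * x + y * y)

# Cartesian: x in [0, 6], y from x/sqrt(3) up to sqrt(8x - x^2)
nx, ny = 400, 400
cart = 0.0
hx = 6.0 / nx
for i in range(nx):
    x = (i + 0.5) * hx
    ylo, yhi = x / math.sqrt(3), math.sqrt(max(8 * x - x * x, 0.0))
    hy = (yhi - ylo) / ny
    for j in range(ny):
        cart += f(x, ylo + (j + 0.5) * hy) * hy * hx

# Polar: theta in [pi/6, pi/2], r in [0, 8 cos(theta)], integrand r^2
nt, nr = 400, 400
pol = 0.0
ht = (math.pi / 2 - math.pi / 6) / nt
for i in range(nt):
    t = math.pi / 6 + (i + 0.5) * ht
    rmax = 8 * math.cos(t)
    hr = rmax / nr
    for j in range(nr):
        r = (j + 0.5) * hr
        pol += r * r * hr * ht
# both approximations agree with 320/9
```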
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Real projective space is Hausdorff Define the projective quotient space as follows:
$\mathbb{R}P^n$ is the quotient of $\mathbb{R}^{n+1}-\{0\}$ by the quotient equivalence relation $(x_0,\cdots,x_n)\sim (\alpha x_0,\cdots,\alpha x_n)$ for all $\alpha\neq 0$. Denote the quotient map as $r$.
Consider the set $V_i=\{x\in\mathbb{R}^{n+1}:x_i=0\}$. Denote $U_i$ as $r(\mathbb{R}^{n+1}-V_i)$.
Question: Prove $\mathbb{R}P^n$ is Hausdorff.
My approach:
I have proved that $U_i$ is open and Hausdorff for all $i$. Thus, for two distinct points $x$, $y$ in $\mathbb{R}P^n$, if they are in the same $U_i$, we can find two disjoint open sets to separate them. But there are also pairs of points that are not in the same $U_i$. How do we deal with this kind of points?
Update:
Consider the diagonal set $\Delta(\mathbb{R}P^n)=\{(x,x):x\in\mathbb{R}P^n\}=\cup_{i}\{(x,x):x\in U_i\}=\cup_{i}\Delta(U_i)$. (The second equality is because $\{U_i\}$ forms a cover for $\mathbb{R}P^n$.)
Because $U_i$ is Hausdorff, each $\Delta(U_i)$ is closed. This implies $\mathbb{R}P^n$ is Hausdorff.
Is my proof correct? Thank you!
| Hint: (I think the sets $U_i$ and $V_i$ are a bit of a distraction). Each point in $\Bbb{RP}^n$ is represented by a unique pair of points $\pm x \in \Bbb{R}^{n+1} \setminus \{0\}$ with $\|x\| = 1$. Given two distinct pairs $\pm x$ and $\pm y$ with $\|x\| = \|y\| = 1$ you can find disjoint open cones $C_x$ and $C_y$ such that $\pm x \in C_x$ and $\pm y \in C_y$. The images of $C_x$ and $C_y$ in $\Bbb{RP}^n$ are open sets that separate $r(x)$ and $r(y)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3895835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question about orientations and loops of the $n-1$ faces of an n-simplex $\Delta^n$ $S$ is the $\Delta$-complex formed by considering $\Delta^n$ along with its faces. Let the vertices be $v_0, v_1, ... v_n$.
Then $C_n(S) = \langle[v_0 v_1..v_n]\rangle$
and $C_{n-1}(S) = \langle\{[v_0....\hat{v_i} ..v_n] : i \leq n \} \rangle$.
Now obviously $\partial_{n-1}(\Sigma_{i\leq n} (-1)^i[v_0..\hat{v_i}..v_n]) = 0$
Suppose I have $X \subseteq \partial\Delta^n$ a $\Delta$ complex. I want to show that if $H_{n-1}(X) \not = 0$, then $X = \partial \Delta^n$, but this amounts to showing that there can't be a cycle consisting of $< n+1$ many faces of the generators of $C_{n-1}(S)$. That is, I have to show that if $\partial_{n-1}(\Sigma_{i\leq m}N_i\times[v_0..\hat{v_i}..v_n]) = 0$, where $N_{i}$ are non-zero integers,then $m = n$. Thus $X$ contains all $n+1$ faces and then $X = \partial\Delta^n$.
This is intuitively pretty obvious, but I'm not sure how to rigorously show this. Hopefully this isn't too dumb of a question.
Is it as obvious as saying that $\Sigma_{i\leq m}N_i\times[v_0..\hat{v_i}..v_n]$ would then be in $\partial\Delta^n$'s $ker(\partial_{n-1})$? But then that would be pushing back the question to showing that for $\partial\Delta^n$, $ker(\partial_{n-1}) = \langle\Sigma_{i\leq n}(-1)^i[v_0..\hat{v_i}..v_n]\rangle$, which, again, is very obvious, but I'm blanking on how to rigorously prove it.
Perhaps I'm missing something incredibly obvious however.
| The $(n-1)$-st homology of your set $X \subset \partial \Delta^n$ is the same as the $(n-1)$-st homology of the union of the (closed) $(n-1)$-dimensional faces of $X$. So we can assume that $X$ is just a union of $(n-1)$-dimensional faces. If $X \not= \partial \Delta^n$, then roughly $X \subset \partial \Delta^n$ looks like the complement of an $(n-1)$-ball in the $(n-1)$-sphere $S^{n-1}$, which is contractible.
Even if it's hard to see why $X$ looks like the complement of a ball, it's at least the complement of a finite set of open balls in $S^{n-1}$. And such a thing deformation retracts to an $(n-2)$-complex.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Counting probability of disease There are 2 types of people in a big factory: prone to COVID (30% of workers) and non-prone to COVID (70% of workers). For a COVID-prone person, p(COVID) = $0.4$. For a non-prone person, p(COVID) = $0.2$.
Compute:
a) the probability that a random factory worker will have COVID
b) given a person who has COVID, what is p(person is COVID-prone).
I'm not sure whether I should use binomial probability or ordinary probability, or how I should count in general. I suppose in (a) I need to calculate $0.3*0.4 + 0.7*0.2 = 0.26$
| If we write down everything we know, we get:
$P(Covid~\vert~prone~Covid) = 0.4 $
$P(prone~Covid) = 0.3$
$P(non-prone~Covid) = 0.7$
$P(Covid~\vert~non-prone~Covid) = 0.2$
We can deduce : $P(Covid) = P(Covid\bigcap prone~Covid) + P(Covid \bigcap non-prone~ Covid)$
This expression corresponds to the answer you found for a)
For b) : You have to calculate $P(prone~Covid \vert ~Covid)$ using what you found previously
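Both computations can be checked with a short script (a sketch using the law of total probability and Bayes' rule; the variable names are ours):

```python
# Law of total probability and Bayes' rule for the factory problem.
p_prone = 0.3               # P(prone to COVID)
p_nonprone = 0.7            # P(non-prone)
p_c_given_prone = 0.4       # P(COVID | prone)
p_c_given_nonprone = 0.2    # P(COVID | non-prone)

# a) P(COVID) = P(COVID | prone) P(prone) + P(COVID | non-prone) P(non-prone)
p_covid = p_c_given_prone * p_prone + p_c_given_nonprone * p_nonprone

# b) Bayes: P(prone | COVID) = P(COVID | prone) P(prone) / P(COVID)
p_prone_given_covid = p_c_given_prone * p_prone / p_covid

print(p_covid)              # ≈ 0.26
print(p_prone_given_covid)  # = 0.12 / 0.26 ≈ 0.4615
```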
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Minimum number of distances determined by $n$ points on a plane Set $S$ is the set of the distances between any two dots in a plane. Prove that given $202$ dots in a plane, the set will have at least $11$ members.
I tried using the pigeonhole principle. Every three dots can form an equilateral triangle, but adding a fourth dot will definitely produce another distance. But I don't know how to proceed.
| This concerns a famous result of P. Erdős, later generalized by others. In fact we have:
If $f(n)$ denotes the minimum number of distances determined by $n$ points on a plane, then $$
\sqrt{n - \frac 34} - \frac12 \leq f(n)
$$
Proof of the lower bound : The $n$ points can be enclosed in a convex shape $P$, called the convex hull of the points. This can be proved using linear algebra, for example, and can be shown in this case to be a convex polygon which has vertices made up from these points.
Let $P_1$ be any one of the points lying on $P$ as well. Consider the distances given by $P_1P_i, i=2,3,...,n$, and let $K = |\{|P_1P_i|, i=2,3,...,n\}|$.
If $N$ is the maximum number of times a distance occurs, we must have $KN \geq n-1$, obviously i.e. $f(n) \geq \frac{n-1}{N}$.
Furthermore, if a distance $r$ occurs $N$ times, then the $N$ points at distance $r$ from $P_1$ must lie in the same semicircle of radius $r$ around $P_1$, since these points lie on one side of $P_1$ because $P_1$ is on the convex hull.
It is easy to see that these points among themselves have at least $N-1$ different distances (draw a diagram and see this). Therefore $f(n) \geq N-1$.
Thus, for any $N$ we have $f(n) \geq \max\{N-1 , \frac{n-1}{N}\}$. It is easy to minimize this over all $n>N >0$ and obtain the expression given.
It is even easier to obtain a rough upper bound, but remarkably there are better lower bounds than the above. One of these is $Cn^{\frac 57}$ for a fixed constant $C>0$, which I believe is the best.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the points of discontinuities of the function $x \sin\left(\frac{\pi}{x^2+x}\right)$ Need to find the points of discontinuities of the function $f(x) = x \sin\left(\dfrac{\pi}{x^2+x}\right)$
Obviously at zero. I can’t take the one-sided limits ($\lim_{x \to 0^+}$ and $\lim_{x \to 0^-}$) in any way; sin is periodic, and near zero it has a million oscillations.
How do I prove what kind of discontinuity there is at zero, and how do I compute similar limits?
sorry for lang. Frrrrom Rrrussia)
| Given $f(0)=0$, $f$ is continuous at $x=0$: by the squeeze (sandwich) theorem, $$-|x|\le x\sin \frac{\pi}{x^2+x} \le |x| ~~~~(1)$$ forces both one-sided limits at $0$ to equal $0=f(0)$.
Near $x=-1$, however, the argument $\frac{\pi}{x^2+x}$ is unbounded, so $f$ takes values arbitrarily close to both $1$ and $-1$ in every neighbourhood of $-1$, and
the limit does not exist. So $f(x)$ is essentially discontinuous at $x=-1$.
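A numerical sanity check (a sketch; the sample points are arbitrary): the squeeze bound $|f(x)|\le|x|$ forces the limit $0$ at $x=0$, while near $x=-1$ the function keeps oscillating.

```python
import math

def f(x):
    return x * math.sin(math.pi / (x * x + x))

# Near 0: |f(x)| <= |x| since |sin| <= 1, so f(x) -> 0.
for x in (1e-2, -1e-2, 1e-4, -1e-4, 1e-6):
    assert abs(f(x)) <= abs(x) + 1e-15

# Near -1: the argument pi/(x^2 + x) blows up, so f oscillates with
# amplitude close to |x| ≈ 1 and the one-sided limits cannot exist.
samples = [f(-1 + 10.0 ** (-k)) for k in range(3, 10)]
print(samples)  # values scattered through [-1, 1], not settling down
```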
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Convergence of maximum points on compact sets Suppose $f$ and $f_n$ ($n=1,2,\dots$) are continuous functions $[0,1]\rightarrow\mathbb{R}$, achieving their maxima at unique points $x^*$ and $x^*_n$ respectively. If $f_n(x)\rightarrow f(x)$ for all $x\in [0,1]$, do we also have $x^*_n\rightarrow x^*$?
Here are a few thoughts on the problem:
*
*One can ask a similar question when $[0,1]$ is replaced by $[0,\infty).$ In that case the answer is negative, as the maximum may escape to infinity. This behaviour is prohibited on $[0,1]$ due to compactness, but of course something else might go wrong.
*If $f_n\rightarrow f$ uniformly, then the answer is positive; indeed, say (after selecting a subsequence) $x_n^*\rightarrow y \neq x^*$. Then $\vert f_n(x_n^*) - f(y) \vert \le \vert f_n(x_n^*)-f(x_n^*)\vert + \vert f(x_n^*) - f(y) \vert \le \Vert f - f_n \Vert_{L^\infty[0,1]} + \vert f(x_n^*) - f(y) \vert \rightarrow 0$ and thus taking limits in the inequality $f_n(x^*) \le f_n(x_n^*)$ yields $f(x^*) \le f(y)$. This is a contradiction, as $f$ is assumed to achieve its maximum at a unique point.
| Hint
What about $f(x) = x$ and $f_n = \sup(f, 2g_n)$ where
$$g_n(x)=
\begin{cases}
nx & 0 \le x \le 1/n\\
2-nx & 1/n \le x \le 2/n\\
0 & x \ge 2/n
\end{cases}$$
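One can check the hint numerically: $f_n=\sup(f,2g_n)$ converges pointwise to $f(x)=x$, yet its maximizer stays near $1/n$ while $f$ itself is maximized at $x^*=1$ (a grid-search sketch; the grid resolution is arbitrary):

```python
def g(n, x):
    # the tent function g_n from the hint
    if x <= 1.0 / n:
        return n * x
    if x <= 2.0 / n:
        return 2.0 - n * x
    return 0.0

def f_n(n, x):
    return max(x, 2.0 * g(n, x))  # sup(f, 2 g_n) with f(x) = x

# pointwise convergence: for fixed x > 0, f_n(x) = x once 2/n < x
assert f_n(1000, 0.5) == 0.5

grid = [i / 10000.0 for i in range(10001)]  # grid on [0, 1]
for n in (10, 100, 1000):
    x_star_n = max(grid, key=lambda x: f_n(n, x))
    assert abs(x_star_n - 1.0 / n) < 1e-3  # maximizer near 1/n, not near x* = 1
```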
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Homomorphism from $S_3$ to ($\mathbb{Q},+)$ I am solving exercises in abstract algebra and could not solve this one correctly.
Does there exists a homomorphism from $S_3$ to the additive group ($\mathbb{Q},+)$ of rational numbers?
I think it exists: map $A_3$ to $1$ and the remaining elements to $-1$. But the answer is no.
So, what mistake am I making? Please tell.
| Your map is a homomorphism to the multiplicative group of non-zero rational numbers.
You could map every element of $S_3$ to $0$ in $\mathbb Q$
to obtain a homomorphism to the additive group of rational numbers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is this approach to the limit solution kosher?$ \lim_{n\rightarrow\infty}\frac{(n!)^{\tfrac{1}{n}}}{n}=\frac{1}{e}$ Here is a non-rigourous approach to a limit:
$$ \lim_{n\rightarrow\infty}\frac{(n!)^{\tfrac{1}{n}}}{n}=\frac{1}{e}$$
I know the solution is correct. I'm suspicious of my methods.
*
*Rewrite the expression:
$$ \lim_{n\rightarrow\infty}\left(\frac{n!}{n^n}\right)^{\frac{1}{n}}$$
*I know the $\lim_{n\rightarrow\infty}\left(\frac{n!}{n^n}\right)=0$. So bear with me.
$$\frac{n!}{n^n}=\frac{n(n-1)(n-2)...[n-(n-2)][n-(n-1)]}{n^n}$$
*Written this way, the numerator is a polynomial of degree $n$. For large $n$, the leading term will prevail. I will write that polynomial as a sum of $n^n$ and a polynomial of degree $n-1$.
$$\frac{n!}{n^n}\Rightarrow\frac{n^n+[\text{polynomial}]^{n-1}}{n^n}\Rightarrow 1+\frac{1}{n}$$
*As noted in Step 2, this is not the expected result for this limit. But watch:
$$ \lim_{n\rightarrow\infty}\frac{(n!)^{\tfrac{1}{n}}}{n}=\lim_{n\rightarrow\infty}\left(1+\frac{1}{n}\right)^{\tfrac{1}{n}}=\lim_{n\rightarrow\infty}\frac{1}{\left(1+\frac{1}{n}\right)^n}=\frac{1}{e}$$
This is the correct solution. But Step 2 is sketchy. Did I make another error which undid this error? Can this derivation be salvaged, or is my proof "not even wrong"?
Please be gentle with me.
| This can be considered a heuristic way to proceed; to be rigorous we can use the ratio-root criterion as explained here
*
*Why does $\;\lim_{n\to \infty }\frac{n}{n!^{1/n}}=e$?
As noticed, there is a mistake in your heuristic derivation; we could try to proceed as follows
$$\left(\frac{n!}{n^n}\right)^n =1 \cdot \left(1-\frac1n\right)^n\cdot \ldots \cdot\left(1-\frac{n-1}n\right)^n \sim \frac1{e^0}\cdot\frac1{e^1}\cdot \ldots \cdot\frac1{e^{n-1}}=\frac1{e^{\frac{n(n-1)}2}}$$
but we would conclude that
$$\left(\frac{n!}{n^n}\right)^\frac1n \sim \frac1{e^{\frac{n(n-1)}{2n^2}}} \implies \lim_{n\rightarrow\infty}\frac{(n!)^{\tfrac{1}{n}}}{n}=\frac1{\sqrt e}$$
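A numeric check of the correct limit $1/e$, using `math.lgamma` to evaluate $(n!)^{1/n}/n$ without overflow (a sketch):

```python
import math

def a(n):
    # (n!)^(1/n) / n  =  exp(log(n!) / n) / n, with log(n!) = lgamma(n + 1)
    return math.exp(math.lgamma(n + 1) / n) / n

for n in (10, 1000, 100000):
    print(n, a(n))  # approaches 1/e ≈ 0.36788, not 1/sqrt(e) ≈ 0.60653

assert abs(a(100000) - 1 / math.e) < 1e-4
```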
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3896990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
} |
Prove that $\prod_{i=1}^n(1+x_i)\leq \sum_{i=0}^n\frac{S^i}{i!}$, where $x_i\in\mathbb{R^+}$.
Let $x_1$, $x_2$, $\ldots$, $x_n$ be positive real numbers, and let $$S=x_1+x_2+\cdots+x_n.$$ Prove that $$(1+x_1)(1+x_2)\cdots(1+x_n)\leq 1+S+\frac{S^2}{2!}+\frac{S^3}{3!}+\cdots+\frac{S^n}{n!}.$$
My first thought is about using induction. For $n=1$, $LHS=1+x_1\leq1+S=RHS$.
Now I suppose that this inequality holds true for $n=k$; that is, $$(1+x_1)(1+x_2)\cdots(1+x_k)\leq 1+S+\frac{S^2}{2!}+\frac{S^3}{3!}+\cdots +\frac{S^k}{k!}.$$
And here I am stuck. I can't really see how I can find the relation between when $n=k$ and $n=k+1$.
Perhaps induction may not be the way? Any hints or suggestions will be much appreciated. Thanks.
| I think you guys are trying to overkill these problems. Clearly
$${\rm LHS} \le \left(1 + \frac{S}{n}\right)^n = \sum_{k = 0}^n \frac{n!}{(n-k)!\cdot n^k} \frac{S^k}{k!} \le {\rm RHS}$$
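A quick numerical spot-check of both inequalities in this chain on random positive inputs (a sketch; the random data is arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(200):
    n = random.randint(1, 10)
    xs = [random.uniform(0.01, 2.0) for _ in range(n)]
    S = sum(xs)
    lhs = math.prod(1 + x for x in xs)
    middle = (1 + S / n) ** n  # AM-GM applied to the factors 1 + x_i
    rhs = sum(S ** k / math.factorial(k) for k in range(n + 1))
    assert lhs <= middle * (1 + 1e-12)
    assert middle <= rhs * (1 + 1e-12)
```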
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3897248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the GS of the System of DE's $\begin{cases} x' = x-3y\\ y'=3x+7y \end{cases}$ Find the GS of the following system of DE's where the independent variable is $t$ and $x$ and $y$ are the dependent variables
\begin{cases}
x' = x-3y\\
y'=3x+7y
\end{cases}
I know using eigenvalues and eigenvectors or operators is one way to do this. But I wish to double check my answer using a substitution method.
So my work:
The second DE $y'=3x+7y$ can be rewritten as $x = \cfrac{y'}{3}-\cfrac 73y$
then $x' = \cfrac{y''}{3}-\cfrac73y'$
When we plug these values of $x$ and $x'$ into the first DE ($x' = x -3y)$, we get with some rearranging
$\cfrac{y''}{3}-\cfrac83y'+\cfrac{16}{3}y = 0$
Which has a characteristic equation of
$\cfrac{r^2}{3}-\cfrac83r+\cfrac{16}{3} = 0$
with roots $r_1=4$ and $r_2 = 4$
Then the solution for $y$ is $y$ = $C_1e^{4t}+C_2te^{4t}$
Then we back sub to solve for $x$ using $x = \cfrac{y'}{3}-\cfrac73y$ with the solution of $y$ we just found.
We get $x =-C_1e^{4t}-C_2te^{4t} + \cfrac{C_2}{3}e^{4t} =-C_1e^{4t}-C_2te^{4t} + C_3e^{4t}$
so the GS to the homo system is
\begin{cases}
x = -C_1e^{4t}-C_2te^{4t}+C_3e^{4t}\\
y = C_1e^{4t}+C_2te^{4t}
\end{cases}
If this solution is right, then I'm confident that I understand how substitution method works for solving DE systems. (Also it would boost my confidence in using the operator method to solve this as I got the same answer as this using the operator method). I'm a little thrown off on the roots being the same but I still think my methodology is still sound. I would appreciate it if someone could tell me if I've got this right cause then I know I completely understand how to solve a system of DE's.
If more work is necessary to show please let me know.
| $$\begin{cases}
x = -C_1e^{4t}-C_2te^{4t}+C_3e^{4t}\\
y = C_1e^{4t}+C_2te^{4t}
\end{cases}$$
You should end up with only two constants, not three.
$$y=C_1e^{4t}+C_2te^{4t}$$
$$y'=4C_1e^{4t}+C_2e^{4t}+4tC_2e^{4t}$$
$$y'=4C_1e^{4t}+C_2e^{4t}(1+4t)$$
And since:
$$x = \cfrac{y'}{3}-\dfrac 73 y$$
You should only have the two constants $C_1$ and $C_2$: the coefficient $\frac{C_2}{3}$ of $e^{4t}$ is determined by $C_2$, so it cannot be renamed to an independent constant $C_3$. Apart from that, you applied the method of substitution correctly.
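The two-constant general solution can be verified directly against the system with a central-difference derivative (a stdlib-only sketch; the values $C_1=1$, $C_2=2$ are arbitrary):

```python
import math

C1, C2 = 1.0, 2.0

def x(t):
    # x = -C1 e^{4t} - C2 t e^{4t} + (C2/3) e^{4t}  (only two free constants)
    return (-C1 - C2 * t + C2 / 3) * math.exp(4 * t)

def y(t):
    return (C1 + C2 * t) * math.exp(4 * t)

def d(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)  # central difference

for t in (-0.5, 0.0, 0.3):
    assert abs(d(x, t) - (x(t) - 3 * y(t))) < 1e-4     # x' = x - 3y
    assert abs(d(y, t) - (3 * x(t) + 7 * y(t))) < 1e-4  # y' = 3x + 7y
```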
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3897361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
How to maximize $\sum x_i\times x_j$ as $1\leq i,j\leq n$ with $i\neq j$ subject to $\sum x_i=1$? I want to maximize $$\sum_{i,j \in [n], i \ne j} x_i \times x_j$$ where $\forall i ~~$ $0\le x_i \le 1$ and $x_1 + x_2 + x_3 + \ldots +x_n = 1$
I want to prove that the sum will be maximized when $\forall i ~~$ $x_i = \frac{1}{n}$. I don't know whether this statement is true or not.
Note:-
It would be more helpful if you could prove it by shifting weight between some $x_i$ and $x_j$. Even if the proof does not follow that method, it is fine.
| If you take the derivative wrt $x_i$ you get, for each $i$
$$ \sum_{j=1}^n x_j-x_i-\lambda=0\implies x_i=1-\lambda$$
where $\lambda$ is the Lagrange multiplier. This means that all the $x_i$ need to take the same value. Adding the $n$ FOC's results in
$$ 1=n(1-\lambda)\implies \lambda=\frac{n-1}{n},$$
and, therefore, $$ x_i=\frac{1}{n}$$
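An alternative route to the same conclusion uses the identity $\sum_{i\ne j}x_ix_j=\left(\sum_i x_i\right)^2-\sum_i x_i^2 = 1-\sum_i x_i^2$, so the objective is maximized exactly when $\sum_i x_i^2$ is minimized, i.e. at $x_i=1/n$. A random spot-check of this (a sketch; the random data is arbitrary):

```python
import random

def objective(xs):
    s = sum(xs)
    return s * s - sum(x * x for x in xs)  # equals sum over i != j of x_i x_j

random.seed(1)
n = 6
best = objective([1.0 / n] * n)  # = 1 - 1/n

for _ in range(1000):
    w = [random.random() + 1e-9 for _ in range(n)]
    total = sum(w)
    xs = [v / total for v in w]  # random point of the simplex
    assert objective(xs) <= best + 1e-12

print(best)  # 1 - 1/6 ≈ 0.8333
```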
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3897613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is $dW^a_t dW^b_t$ for two various Wiener processes? I need to simplify $dW^a_t dW^b_t$ where $W^a_t$ and $W^b_t$ are two different Wiener processes.
If I had only one process $W_t$, I could use the following properties:
$$ (dt)^2 = 0 \\ dt \text{ } dW_t = 0 \\ (dW_t)^2 = dt$$
Is it possible to do something similar with $dW^a_t dW^b_t$
| In general you cannot say anything without more information about how the two processes are related. If the correlation is given by $d\langle W^a, W^b\rangle_t = \rho\,dt$, then $dW^a_t\, dW^b_t = \rho\,dt$; in particular, if the two processes are independent, the product is necessarily zero.
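A simulation sketch of both rules for independent increments (fixed seed; the parameters are arbitrary): the cross product averages to $0$ while the square averages to $dt$.

```python
import random

random.seed(42)
dt = 0.01
n = 200_000
sd = dt ** 0.5  # increments of a Wiener process over dt are N(0, dt)

dWa = [random.gauss(0, sd) for _ in range(n)]
dWb = [random.gauss(0, sd) for _ in range(n)]

mean_ab = sum(a * b for a, b in zip(dWa, dWb)) / n  # E[dW^a dW^b] = 0 (independent)
mean_aa = sum(a * a for a in dWa) / n               # E[(dW^a)^2] = dt

assert abs(mean_ab) < 1e-3
assert abs(mean_aa - dt) < 1e-3
```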
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3897761",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the definition of rational numbers already assume an understanding of them? I mean, how do we know that the correct equivalence relation is the equality of the cross products? Does it already assume an intuitive informal understanding of rationals? If yes, what would that intuition be?
| Addendum to @fleablood 's correct answer.
Throughout mathematics, formal definitions always come after intuitive understanding. The intent of the formal definition is to capture that understanding so we can reason carefully even in places we don't quite yet understand.
You can see this as early as Greek geometry. The elementary theorems there were known long before Euclid's codification.
Newton and Leibniz and their followers developed calculus long before the $\epsilon - \delta$ definition of a limit provided rigor.
In your question about rational numbers you do have the right intuition based on years of elementary school arithmetic, so you can see that the definition captures that intuition.
Students often find new mathematics difficult because they see formal definitions before they have developed much intuition. It's hard to understand the definition of a group before you've actually looked at different kinds of groups in different contexts. I think courses in abstract algebra usually move to abstraction too soon - but if they didn't then they couldn't "cover as much". That tension is hard to resolve.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3897928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Characteristic of a field to make projective points the same In the resolution of a linear algebra exercise, they tell me that given a field $\Bbb K$ and the projective space $\Bbb P^n(\Bbb K)$, the projective points $(1:-1:2)$ and $(2:1:1)$ are the same if and only if $\text{char}(\Bbb K)=3$.
I understand the right to left implication, since if $\text{char}(\Bbb K)=3$ then $3=0$, hence $-1=-1+0=-1+3=2$ and $4=3+1=1$. Therefore, $(1:-1:2)=(1:2:2)=(2:4:4)=(2:1:1)$.
However, I have trouble understanding the left to right implication, I am not really sure of why any other value of the characteristic of the field would result in those two points being different.
Thanks in advance!
| For $P=(1:-1:2)$ to be the same as $Q=(2:1:1)$ we need the existence of some $\lambda\neq0$ such that $\lambda(1,-1,2)=(2,1,1)$. From the first component we get that $\lambda=2$, which means $P=Q$ if and only if $(2,-2,4)=(2,1,1)$ if and only if $-2=1\land 4=1$.
Both $-2=1$ and $4=1$ are true if and only if $3=0$ (add $2$ to the first equality and subtract $1$ from the second), so we can conclude $P=Q$ if and only if $\text{char}(\Bbb K)=3$.
Another way to see this is thinking that you need $(1,-1,2)$ and $(2,1,1)$ to lie in the same vector line, since then they're projected onto the same projective point. This is equivalent to $(1,-1,2)$ and $(2,1,1)$ being linearly dependent,
which can be checked by looking at the rank of
$\begin{pmatrix}
1 & 2 \\
\llap{\text{-}}1&1\\
2 & 1
\end{pmatrix}$
Clearly, this matrix will have rank greater than or equal to $1$, since it has entries equal to $1$ and the multiplicative identity of a field can't be zero (by convention). We need the rank to be $1$ in order to have the linear dependence, so we need to make sure the minors of order two are all $0$:
$\left|\begin{matrix}
\ 1 & 2 \\
\ \llap{\text{-}}1&1
\end{matrix}\right|=3,\quad\left|\begin{matrix}
1 & 2 \\
2 & 1
\end{matrix}\right|=-3,\quad\left|\begin{matrix}
\ \llap{\text{-}}1&1\\
\ 2 & 1
\end{matrix}\right|=-3$
and these are 0 if and only if $3=0$ if and only if $\text{char}(\Bbb K)=3$.
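The arithmetic can be double-checked mechanically: with $\lambda=2$ the coordinates match mod $3$, and all three $2\times 2$ minors are multiples of $3$ (a sketch):

```python
P = (1, -1, 2)
Q = (2, 1, 1)

# lambda * P coincides with Q exactly in characteristic 3
assert tuple((2 * c) % 3 for c in P) == tuple(c % 3 for c in Q)

# 2x2 minors of the 3x2 matrix whose columns are P and Q
def minor(i, j):
    return P[i] * Q[j] - P[j] * Q[i]

minors = [minor(0, 1), minor(0, 2), minor(1, 2)]
print(minors)  # [3, -3, -3] — all vanish exactly when char = 3
assert all(m % 3 == 0 for m in minors)
```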
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Asymptotic estimate as $N \rightarrow \infty$ of $\sum\limits_{n = 1}^{N} \left\{{\frac{\left({n \pm 1}\right)}{{n}^{2}} N}\right\}$ Looking for the exact if possible, otherwise the asymptotic expansion and best estimate of the error terms as $N \rightarrow \infty$ of the two fractional sums $$\sum\limits_{n = 1}^{N} \left\{{\frac{\left({n \pm 1}\right)}{{n}^{2}} N}\right\}$$ My literature search has not found any examples similar to this. I have some material for the divisors terms such as $\left\lfloor{N/a}\right\rfloor^k$ and $\left\{{N/a}\right\}^k$. This is part of the calculation of the summations over the floor function of this argument.
From Benoit Cloitre. On the circle and divisor problem. November 2012 we have
$$\sum_{n = 1}^{x} \left\lfloor{\frac{x}{{n}^{2}}}\right\rfloor =
\zeta \left({2}\right) x
+ \zeta \left({\frac{1}{2}}\right) \sqrt{x}
+ O \left({{x}^{\theta}}\right) \quad \text{as } x \rightarrow \infty$$
where $\theta = 1/4 + \epsilon$ is the conjectured best error exponent.
| Asymptotic is $(1 - \gamma) N$, where $\gamma$ is Euler–Mascheroni constant.
Proof
For any $x, y$:
$$
\begin{aligned}
\{x \pm y\} &= x \pm y - [x \pm y] \\
&= x \pm y - [[x] + \{x\} \pm [y] \pm \{y\}] \\
&= x - [x] \pm \{y\} - [\{x\} \pm \{y\}].
\end{aligned}
$$
Now set $x = \frac Nn, y = \frac N{n^2}$ to break the sum apart:
$$
\sum\limits_{n = 1}^{N} \left\{{\frac{\left({n \pm 1}\right)}{{n}^{2}} N}\right\}
= \underbrace{\sum\limits_{n = 1}^{N} \frac Nn}_{(1)}
- \underbrace{\sum\limits_{n = 1}^{N} \left[ \frac Nn \right]}_{(2)}
\pm \underbrace{\sum\limits_{n = 1}^{N} \left\{ \frac{N}{n^2} \right\} }_{(3)}
- \underbrace{\sum\limits_{n = 1}^{N} \left[ \left\{ \frac Nn \right\} \pm \left\{\frac{N}{n^2} \right\} \right]}_{(4)}.
$$
$(1)$ is Harmonic series, $(1) = N \ln N + \gamma N + \frac 12 + o(1)$.
$(2)$ is divisor summatory function, $(2) = N \ln N + N(2\gamma - 1) + O(\sqrt N)$.
$(3) = \underbrace{ \sum\limits_{n = 1}^{ \left[ \sqrt N \right]} \left\{ \frac{N}{n^2} \right\} }_{(3.1)}
+ \underbrace{ \sum\limits_{n = \left[ \sqrt N \right]+1}^{N} \left\{ \frac{N}{n^2} \right\} }_{(3.2)}.
$
$
(3.1) \leq \sum\limits_{n = 1}^{ \left[ \sqrt N \right]} 1 \leq \sqrt N.
$
$(3.2) = \sum\limits_{\left[ \sqrt N \right]+1}^{N} \frac{N}{n^2}
\leq N \cdot \sum\limits_{\left[ \sqrt N \right]+1}^{N} \frac{1}{n (n-1)}
= N \cdot \left( \sum\limits_{\left[ \sqrt N \right]+1}^{N} \frac{1}{n-1} - \frac{1}{n} \right)
= N \left( \frac{1}{\left[ \sqrt N \right]} - \frac{1}{N} \right)
\leq \frac {N}{\sqrt{N} - 1} - 1 = O(\sqrt N) \qquad \left(\text{since } \left[ \sqrt N \right] \geq \sqrt N - 1\right).
$
$(4) = O(\sqrt N)$. Proof is very technical and is written below.
Putting $(1)$, $(2)$, $(3)$, $(4)$ together, and leaving only leading asymptotic terms we have
$$
\sum\limits_{n = 1}^{N} \left\{{\frac{\left({n \pm 1}\right)}{{n}^{2}} N}\right\}
= (1 - \gamma) N + O(\sqrt N).
$$
Proving $(4) = O(\sqrt N)$
We want to show that $\sum\limits_{n = 1}^{N} \left[ \left\{ \frac Nn \right\} \pm \left\{\frac{N}{n^2} \right\} \right] = O(\sqrt N)$.
$$
\sum\limits_{1}^{N} [...] = \sum\limits_{1 \leq n \leq \frac{N}{\left[\sqrt N \right]} }[...] + \sum\limits_{ \frac{N}{\left[\sqrt N \right]} < n \leq N } [...], \\
$$
We split the sum in such a way, so that
*
*First part doesn't have too many summands.
*In the second sum we have $n > \frac{N}{\left[\sqrt N \right]} \geq \sqrt N$, which means we can "drop" braces: $\left\{ \frac{N}{n^2} \right\} = \frac{N}{n^2}$.
*It will be convenient to work with the second sum later.
First sum is $O(\sqrt N)$ because the $[...]$ part equals either $-1$, $0$ or $1$:
$$
\left| \sum\limits_{1 \leq n \leq \frac{N}{\left[\sqrt N \right]}} [...] \right| \leq \sum\limits_{1 \leq n \leq \frac{N}{\left[\sqrt N \right]}} 1 = O(\sqrt{N}) .
$$
We'll split second sum even further, so that we can also "drop" braces for $\left\{ \frac Nn \right\}$:
$$
\sum\limits_{ \frac{N}{\left[\sqrt N \right]} < n \leq N} [...]
= \sum\limits_{k=1}^{\left[ \sqrt N \right] - 1} \sum\limits_{\frac{N}{k + 1} < n \leq \frac Nk} [...].
$$
Note, that $\frac{N}{k + 1} < n \leq \frac Nk \implies k \leq \frac Nn < k + 1 \implies \left\{ \frac Nn \right\} = \frac Nn - k$.
$$
[...]
= \left[ \left\{ \frac Nn \right\} \pm \left\{\frac{N}{n^2} \right\} \right]
= \left[ \frac Nn - k \pm \frac {N}{n^2} \right]
= \left[ N \frac{n \pm 1}{n^2} \right] - k.
$$
When "$\pm$" is "$+$", the $[...]$ is either $0$ or $1$. We want to find for how many $n$ it is $1$.
$$
\left[ N \frac{n + 1}{n^2} \right] - k = 1 \iff N \frac{n + 1}{n^2} \geq k + 1 \iff \frac{k+1}{N}n^2 - n - 1 \leq 0, \\
\text{where} \; n \in \left( \frac{N}{k+1}; \frac Nk \right].
$$
Solving quadratic inequality gives $n \in \left( \frac{N}{k+1}; \frac{N}{k+1} \frac{1 + \sqrt{1 + 4 \frac{k+1}{N}}}{2} \right] $. Length of this semi-interval is
$$
\frac{N}{k+1} \frac{1 + \sqrt{1 + 4 \frac{k+1}{N}}}{2} - \frac{N}{k+1}
= \frac{N}{k+1} \frac{-1 + \sqrt{1 + 4 \frac{k+1}{N}}}{2}
= \frac{N}{k+1} \frac{-1 + 1 + 4 \frac{k+1}{N}}{2 \left(1 + \sqrt{1 + 4 \frac{k+1}{N}} \right) }
= \frac{2}{1 + \sqrt{1 + 4 \frac{k+1}{N}}} < 1.
$$
This means that at most $1$ integer $n$ can be inside that semi-interval.
When "$\pm$" is "$-$", the logic is similar, in that case there can be at most $2$ integer $n$ for which $[...] \neq 0$.
Finally, for the second sum we have $$
\left| \sum\limits_{ \frac{N}{\left[\sqrt N \right]} < n \leq N} [...] \right|
\leq \sum\limits_{k=1}^{\left[ \sqrt N \right] - 1} 2
= O(\sqrt N).
$$
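The asymptotic $(1-\gamma)N$ can be sanity-checked numerically, computing each fractional part with exact integer arithmetic (a sketch; the tolerance is a rough allowance for the $O(\sqrt N)$ term):

```python
def S(N, sign):
    # sum over n = 1..N of frac(N (n + sign) / n^2), sign in {+1, -1}
    total = 0.0
    for n in range(1, N + 1):
        total += (N * (n + sign) % (n * n)) / (n * n)
    return total

gamma = 0.5772156649015329  # Euler–Mascheroni constant
N = 100_000
for sign in (1, -1):
    ratio = S(N, sign) / N
    print(sign, ratio)  # both close to 1 - gamma ≈ 0.42278
    assert abs(ratio - (1 - gamma)) < 0.02
```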
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Using differentiation to find the power series for $x/(1+2x)^2$ So I am not sure how to go about this problem: finding the series representation of
$$f(x) = \frac{x}{(1+2x)^2}$$
I used $1/(1+2x)$ and created the power series
$$\sum_{n=0}^\infty (-2x)^n = \frac{1}{1+2x}$$
and differentiated both sides. My derivative was
$$\sum_{n=0}^\infty (n+1)(-1)^{n+1}(2x)^n = \frac{-2}{(1+2x)^2}$$
I found that I needed to multiply my power series by $-x/2$ in order for the power series to be equivalent to $x/(1+2x)^2$. What I don't understand is why this is wrong. Can someone help point me in the right direction?
| $\dfrac{1}{1+2x} = \sum\limits_{n=0}^{\infty} \left(-2x \right)^n$
$-\dfrac{2}{\left( 1+2x \right)^2} = \sum\limits_{n=1}^{\infty} n \left(-2\right)^n x^{n-1} = \sum\limits_{n=0}^{\infty} \left(n+1\right) \left(-2\right)^{n+1} x^{n}$
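Multiplying the last series by $-x/2$, as the question intended, gives $\frac{x}{(1+2x)^2}=\sum_{n\ge 0}(n+1)(-2)^n x^{n+1}$, valid for $|2x|<1$; a quick numeric check of the partial sums (a sketch):

```python
def partial(x, terms=80):
    # sum of (n + 1) (-2)^n x^(n+1) over n = 0..terms-1
    return sum((n + 1) * (-2) ** n * x ** (n + 1) for n in range(terms))

def f(x):
    return x / (1 + 2 * x) ** 2

for x in (0.1, -0.2, 0.3):  # need |2x| < 1 for convergence
    assert abs(partial(x) - f(x)) < 1e-9
```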
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How I can solve this limit only with algebra I tried to resolve this using properties of limits, properties of logarithms and some substitutions, but i can't figure whats is the right procedure for this.
$$\lim_{x\to 0} \frac{1}{x} \log{\sqrt\frac{1 + x}{1 - x}}$$
First of all, I used the logarithm properties and rewrote it like this:
$$\lim_{x\to 0} \log({\frac{1 + x}{1 - x}})^\frac{1}{2x}$$
and tried to make to this expression goes to $$\lim_{x\to 0} \log e^\frac{1}{2}$$
and then limit will be 1/2
I can't reach this because the limit goes to $0$ instead of $\infty$.
| $f(x) :=\log (\dfrac{1+x}{1-x})^{1/2}=$
$(1/2)\log (1+x)-(1/2)\log (1-x);$
$f'(0)=$
$\lim_{x \rightarrow 0}\dfrac{\log (\dfrac {1+x} {1-x})^{1/2}-\log 1}{x}=$
$(1/2)(1)-(1/2)(-1)=1.$
Note:
$f'(x)=$
$(1/2)\dfrac{1}{1+x}-(1/2)\dfrac{1}{1-x}(-1).$
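A numeric check that the limit is $1$ (and not the asker's expected $1/2$), sampling small $x$ (a sketch):

```python
import math

def q(x):
    # (1/x) log sqrt((1+x)/(1-x))
    return math.log(math.sqrt((1 + x) / (1 - x))) / x

for x in (1e-2, 1e-4, -1e-4, 1e-6):
    print(x, q(x))  # tends to 1, since log((1+x)/(1-x)) = 2x + O(x^3)

assert abs(q(1e-6) - 1.0) < 1e-6
```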
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
} |
In $\mathbb{R}$ find the solutions of the equation $x^4-x^3-18x^2+3x+9=0$ I couldn't solve this question and hence looked at the solution which goes as follows:
$0$ is no a root of the equation. Hence:
$$x^2-x-18+\frac{3}{x}+\frac{9}{x^2}=0. \text{ So, }
x^2+\frac{9}{x^2}-(x-\frac{3}{x})-18=0$$
I state that $y=x-\frac{3}{x}$, so we have that $y^2=x^2+\frac{9}{x^2}-6$, in other words $x^2+\frac{9}{x^2}=y^2+6$. Hence the equation is written as $y^2+6-y-18=0$ so $y^2-y-12=0$, hence $y=4$ or $y=-3$ so $x=2\pm\sqrt{7}$, or $x=\frac{-3\pm\sqrt{21}}{2}$
Could you please explain to me why the solution author of the solution thought of originally dividing by the equation by x and after that substituting $x-\frac{3}{x}$ with $y$? Also if you can think of a more intuitive approach could you please show it?
| For the factorization, using the long, long way ,consider
$$(x^2+ax+b)(x^2+cx+d)-(x^4-x^3-18x^2+3x+9)=0$$ Expand and group terms to get
$$(b d-9)+x (a d+b c-3)+x^2 (a c+b+d+18)+x^3 (a+c+1)=0$$ So, $c=-a-1$; replace
$$(b d-9)+x ((-a-1) b+a d-3)+x^2 ((-a-1) a+b+d+18))=0$$ So, $b=a^2+a-d-18$; replace
$$\left(d \left(a^2+a-d-18\right)-9\right)+x (-a (a (a+2)-2
d-17)+d+15)=0$$ So, $d=\frac{a^3+2 a^2-17 a-15}{2 a+1}$; replace
$$\frac {(a-3)(a+4)(a^4+2 a^3-23 a^2-24 a-3 )} {(1+2a)^2 }=0$$ The quartic does not show any real root : so $a=3\implies c=-4$ or $a=-4\implies c=3$. Choose the one you prefer and run back for $b$ and $d$.
Edit
I made a mistake when I wrote "the quartic does not show any real root". What I was supposed to write is that "the quartic does not show any rational root". This correction followed @Macavity's comment.
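For $a=3$ (hence $c=-4$) the back-substitution gives $d=\frac{27+18-51-15}{7}=-3$ and $b=9+3+3-18=-3$, i.e. the factorization $(x^2+3x-3)(x^2-4x-3)$. Both this and the roots quoted in the question can be checked mechanically (a sketch):

```python
import math

def p(x):
    return x ** 4 - x ** 3 - 18 * x ** 2 + 3 * x + 9

def factored(x):
    return (x ** 2 + 3 * x - 3) * (x ** 2 - 4 * x - 3)

# the factorization agrees with the quartic
for x in (-2.0, -0.5, 0.0, 1.7, 3.0):
    assert abs(p(x) - factored(x)) < 1e-9

# the four real roots, from the two quadratic factors
roots = [2 + math.sqrt(7), 2 - math.sqrt(7),
         (-3 + math.sqrt(21)) / 2, (-3 - math.sqrt(21)) / 2]
for r in roots:
    assert abs(p(r)) < 1e-9
```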
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Prove the inequality $x-4x \frac{x}{\sqrt{x}+x} +3x \left( \frac{x}{\sqrt{x}+x} \right)^2 -3 \left( \frac{x}{\sqrt{x}+x} \right)^2 <0$. I want to prove $$x-4x \frac{x}{\sqrt{x}+x} +3x \left( \frac{x}{\sqrt{x}+x} \right)^2 -3 \left( \frac{x}{\sqrt{x}+x} \right)^2 <0$$
for $x>0$ real number.
I tried supposing that $$x-4x \frac{x}{\sqrt{x}+x} +3x \left( \frac{x}{\sqrt{x}+x} \right)^2 -3 \left( \frac{x}{\sqrt{x}+x} \right)^2 \geq 0$$ holds to get a contradiction but I couldn't find the solution.
Can you help me proceed?
| Answer: write $t=\dfrac{x}{\sqrt{x}+x}$ and divide the expression by $x>0$, which preserves its sign:
$$1-4t+3t^2-\frac{3t^2}{x}=1-4\frac{x}{\sqrt{x} +x} +3\Big(\frac{x}{\sqrt{x} +x}\Big)^2 - 3\frac{x}{(\sqrt{x} +x)^2 }= \frac{(\sqrt{x} +x)^2 - 4x(\sqrt{x} +x) +3x^2 - 3x}{(\sqrt{x} +x)^2} =\frac{x+2x\sqrt{x} +x^2 - 4x\sqrt{x} - 4x^2 +3x^2 - 3x}{(\sqrt{x} +x)^2} =\frac{-2x-2x\sqrt{x}}{(\sqrt{x} +x)^2}=\frac{-2\sqrt{x}}{\sqrt{x} +x} <0.$$
Multiplying back by $x>0$:
$$x-4x\frac{x}{\sqrt{x} +x} +3x\Big(\frac{x}{\sqrt{x} +x}\Big)^2 - 3\Big(\frac{x}{\sqrt{x} +x}\Big)^2 <0.$$
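Since the derivation divides through by $x>0$ (which preserves the sign), the original expression equals $x$ times the closed form $-\frac{2\sqrt{x}}{\sqrt{x}+x}$; a numeric spot-check (a sketch; the sample points are arbitrary):

```python
import math

def lhs(x):
    t = x / (math.sqrt(x) + x)
    return x - 4 * x * t + 3 * x * t * t - 3 * t * t

def closed(x):
    return -2 * math.sqrt(x) / (math.sqrt(x) + x)

for x in (0.01, 0.5, 1.0, 7.3, 1000.0):
    assert abs(lhs(x) - x * closed(x)) < 1e-9
    assert lhs(x) < 0  # the claimed inequality
```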
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Working with functions using modulo operator I have a function defined in a computer program that I would like to study mathematically:
$y = \frac{x}{2} + (\frac{5}{2}x + 1)*(x\%2)$
This function is used on strictly positive integers and also returns integers only.
The main issue is that it's using the modulo operator ($\%$) as part of the function calculation, not as in a modular equation, and I have no clue how to translate into a formula that is suitable for mathematical analysis...
I have found various topic related to modulo/floor and how to translate them into mathematical form:
*
*Solving equations involving modulo operator
*Solving modular equations with a variable in the modulus divisor
*How to represent the floor function using mathematical notation?
But none of them seems to match this particular situation.
Also, to clarify what I mean by "studiy mathematically", it would be for example to determine the range of output values if input values are taken in a given range, or to determine limits/periodicity if this function was called recursively.
| x % 2 is equivalent to $\frac{1-(-1)^x}{2}$ for integer $x$. On positive integer $x$, x % 2${}\in \{0,1\}$. (Some programming languages do stupid things for negative $x$, but that does not matter here.) Consequently, you have
$$ y = \begin{cases}
x/2 &, \text{$x$ even} \\
3x + 1 &, \text{$x$ odd}
\end{cases} \text{.} $$
The Collatz conjecture covers the study of this function under recursion.
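A quick check that the program's expression really is the Collatz map (a sketch using exact rational arithmetic, since the intermediate $x/2$ is not an integer for odd $x$):

```python
from fractions import Fraction

def program_form(x):
    # y = x/2 + (5x/2 + 1) * (x % 2), evaluated exactly
    y = Fraction(x, 2) + (Fraction(5 * x, 2) + 1) * (x % 2)
    assert y.denominator == 1  # the halves always cancel to an integer
    return int(y)

def collatz(x):
    return x // 2 if x % 2 == 0 else 3 * x + 1

for x in range(1, 1000):
    assert program_form(x) == collatz(x)

# iterating: the classic trajectory of 27 reaches 1 after 111 steps
x, steps = 27, 0
while x != 1:
    x, steps = collatz(x), steps + 1
print(steps)  # 111
```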
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3898912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
little-oh notation Assume I have a function $ f:\mathbb{R}^{m}\to\mathbb{R}^n $, and let $ h\in \mathbb{R}^m $.
Does the notation $ f\left(x_0+h\right)=o\left(h\right) $ (for fixed $x_0 $)
mean that
$$ \lim_{h\to0}\frac{||f\left(x_{0}+h\right)||_{\mathbb{R}^{n}}}{||h||_{\mathbb{R}^{m}}}=0 ?$$
Because I've seen in a few places that lecturers wrote just $ \lim_{h\to0}\frac{f\left(x_{0}+h\right)}{||h||}=0 $, which doesn't make any sense to me. I'll be glad for a clarification. What's the acceptable definition worldwide? Thanks in advance
| I would never say $f(x_0 + h) = o(h)$ when $f$ has values in $\mathbb R^n$ rather than $\mathbb R$.
But if I were going to say it, it would mean one of the two limits you wrote. Those limits are equivalent: $\frac{f(x_0 + h)}{\|h\|} \to 0$ (that is, $0 \in \mathbb R^n$) exactly when $\frac{\|f(x_0 + h)\|}{\|h\|} \to 0$ (that is, $0 \in \mathbb R$).
That's because the definition of a limit in $\mathbb R^n$ means that the first of these limits holds if and only if $\left\|\frac{f(x_0 + h)}{\|h\|} - 0\right\| \to 0$, and this simplifies to the second limit.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3899276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Infinite sum of square root I tried to demonstrate that $\displaystyle \sum_{n=0}^\infty\sqrt{a_n^2+b_n^2}$ is convergent, where $a_n$ and $b_n$ are the Fourier coefficients of a continuous $f(x)$ with $f(x)\in L_2(-\pi, \pi)$ and $f(\pi)=f(-\pi)$. My attempt to show it is to prove that the sum $$\sum_{n=0}^\infty (a_n^2+b_n^2)$$ is convergent (using Bessel's inequality). But I'm not sure if $$\sum_{n=0}^\infty\sqrt{a_n^2+b_n^2}\leq \sum_{n=0}^\infty (a_n^2+b_n^2)$$
or is it not necessary?
| NOT TRUE.
The series
$$
f(x)=\sum_{n=1}^\infty \frac{\sin nx}{n}
$$
converges conditionally for every $x\in\mathbb R$ and defines a $2\pi-$periodic function $f$, which belongs to $L^2[0,2\pi]$, since
$$
\sum_{n=1}^\infty (a_n^2+b_n^2)=\sum_{n=1}^\infty \frac{1}{n^2}<\infty.
$$
But
$$
\sum_{n=1}^\infty \sqrt{a_n^2+b_n^2}=\sum_{n=1}^\infty \frac{1}{n}=\infty.
$$
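A numerical illustration of the contrast between the two series (partial sums only; this is of course not a proof):

```python
N = 100_000
sum_of_squares = sum(1 / n ** 2 for n in range(1, N + 1))  # increases to pi^2/6 < 2
harmonic = sum(1 / n for n in range(1, N + 1))             # grows like log N, unbounded
```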
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3899431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Is every Borel set a countable union of intervals? Since the Borel $\sigma$ algebra is generated by all open subsets of $\mathbb{R}$ and all open sets are the countable union of disjoint open intervals, I figured that any Borel set is the countable union of intervals. I also used the fact that if we only use '$\sigma $ algebra operations' on intervals then we get an interval or countable union of intervals.
But I ask if this is actually true?
| The Cantor set $C$ is an uncountable Borel set, and the only intervals it contains are trivial ones (singletons).
Hence $C$ cannot be written as a countable union of intervals.
However, its complement is indeed a countable union of disjoint open intervals!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3899517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Is $U = V$ in the SVD of symmetric matrices? Consider the SVD of matrix $A$:
$$A = U \Sigma V^\top$$
If $A$ is a symmetric, real matrix, is there a guarantee that $U = V$?
There is a similar question here that also posits $A$ is positive semi-definite. But I'm wondering whether $U$ would be equal to $V$ if $A$ is symmetric?
| If $A=USU^T$ is an SVD, $A$ has to be positive semidefinite, as shown in another answer here.
The converse is not true. If $A$ is positive semidefinite and $A=USV^T$ is an SVD, $U$ is not necessarily equal to $V$. E.g. $A=U0V^T$ is an SVD for any two unitary matrices $U$ and $V$.
However, if $A$ is positive definite, we must have $U=V$ in its SVD, as shown in the answers to Is $U=V$ in the SVD of a symmetric positive semidefinite matrix?
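A concrete instance of the symmetric-but-indefinite case (hand-built, not produced by any SVD routine): $A=\begin{pmatrix}0&1\\1&0\end{pmatrix}$ is symmetric and orthogonal, so $\Sigma=I$, $U=A$, $V=I$ gives a valid SVD with $U\neq V$.

```python
A = [[0, 1], [1, 0]]  # symmetric with eigenvalues +1 and -1, so not PSD
U = [[0, 1], [1, 0]]  # orthogonal
S = [[1, 0], [0, 1]]  # both singular values equal 1
V = [[1, 0], [0, 1]]  # orthogonal, but different from U

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

reconstructed = matmul(matmul(U, S), transpose(V))  # U Sigma V^T
```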
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3899662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How to represent number that is the smallest possible value larger than another number? How would I represent the number, i, that is one increment larger than $x$, where an increment is infinitely small.
For example, if $s \in (1,2)$, $i$ would be the smallest number $s$ can take on if $x=1$. I thought about making it $i= x+ (\lim_{n\to\infty} \frac{1}{n})$ but this evaluates to $x$. How would I represent this number $i$ for a given $x$?
I could represent it as $i=\text{min}(s)$ for $s \in (x,\infty)$ but this seems messy. I'm looking for a cleaner way to write this if possible
| Based on the reading from Player3236's comment, I understand that this is not possible and, more importantly, why it is not possible. (The comment links to examples that explain why, for any future readers.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3899844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
In $\Delta ABC$ where $\angle A = 60^\circ$, $BP$ and $BE$ trisect $\angle ABC$ and $CP$ and $CE$ trisect $\angle ACB$ . Find $\angle BPE$ .
In $\Delta ABC$ where $\angle A = 60^\circ$, $BP$ and $BE$ trisect $\angle ABC$ and $CP$ and $CE$ trisect $\angle ACB$ . Find $\angle BPE$.
What I Tried: Here is a picture :-
It is not hard to realise that angle-chasing is useful here, and that's exactly the same thing I did. I got the required angles in the picture as shown, but I can't seem to understand how to get $\angle BPE$ once I join the line $PE$ , if there is no suitable idea other than angle-chasing which I can use here, what should I do?
Can anyone help me? Thank You.
Edit: I become a bit stupid sometimes, hence I didn't realise $E$ is the incenter which makes $PE$ the angle bisector, implying $\angle BPE = 50^\circ$.
| As $E$ is the incenter of $\Delta PBC$, $PE$ bisects $\angle BPC$, which gives $\angle BPE = 50^\circ$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3900001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Relationship determinant symmetric matrix and skew-symmetric counterpart Say we have a square matrix A and we write it as the sum of its symmetric and skew-symmetric counterpart. Is there any formula which relates the determinants of A, its symmetric and skew-symmetric parts?
| Let $A=B+C$, where $B$ is symmetric and $C$ is (hollow and) skew-symmetric.
When $n\le2$, one may directly verify that $\det(A)=\det(B)+\det(C)$.
When $n\ge3$, $\det(A)$ is not a function of $\det(B)$ and $\det(C)$. It suffices to construct two symmetric matrices $B_0,B_1$ and two skew-symmetric matrices $C_0,C_1$ such that $\det(B_0)=\det(B_1)$ and $\det(C_0)=\det(C_1)$ but $\det(B_0+C_0)\ne\det(B_1+C_1)$.
Here are our constructions (that work over all fields, including those of characteristic $2$). When $n\ge3$ is odd, let
$$
B_k=\pmatrix{1&1&0\\ 1&1&0\\ 0&0&k}\oplus I_{n-3},
\quad C_0=C_1=\pmatrix{0&-1&0\\ 1&0&0\\ 0&0&0}\oplus0.
$$
Then $\det(B_0)=\det(B_1)=\det(C_0)=\det(C_1)=0$ but $\det(B_0+C_0)=0\ne1=\det(B_1+C_1)$.
When $n\ge4$ is even, let
$$
\begin{aligned}
B_0&=I_2\oplus\pmatrix{1&1\\ 1&1}\oplus I_{n-4},\\
B_1&=I_2\oplus\pmatrix{0&0\\ 0&0}\oplus I_{n-4},\\
C_0&=\pmatrix{0&-1\\ 1&0}\oplus\pmatrix{0&0\\ 0&0}\oplus0,\\
C_1&=\pmatrix{0&0\\ 0&0}\oplus\pmatrix{0&-1\\ 1&0}\oplus0.
\end{aligned}
$$
Again, $\det(B_0)=\det(B_1)=\det(C_0)=\det(C_1)=0$ but $\det(B_0+C_0)=0\ne1=\det(B_1+C_1)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3900156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Give the context-free grammar that generates the pushdown automaton language The pushdown automaton :
From Here i figured out that having an idea of the language is a good first step so i came to the conclusion that the language represented by this pushdown automaton is something that looks like this :
$a^n b(b^*aa^*bb^*)^*a^n : n \in\mathbb{N}$ (correct me if I am wrong)
But from there I am kinda lost on how to approach the problem to figure out the CFG; in class we're not really told a precise technique to do this, so I would like to know how someone would proceed to solve this problem.
Thanks a lot.
EDIT (my attempt): could someone validate if it's good, or tell me what I am missing? Thanks.
$S \Rightarrow aSa$
$S \Rightarrow B$
$S \Rightarrow bB$
$B \Rightarrow aC$
$C \Rightarrow \epsilon$
$C \Rightarrow bB$
$B \Rightarrow b$
| We have shown (link in comment) that the language of this PDA is L = { a^n b (a* b)* a^n , n ≥ 0 }; now let's build the grammar:
S --> aSa | bT
T --> AbT | ε
A --> aA | ε
The first rule generates a^n b T a^n (with S --> bT accounting for n = 0); T generates (a* b)*. Note how A generates a*, so Ab is the same as a* b, and adding T, AbT allows for repeating (you can form AbAbT, AbAbAbT and so on, or use T --> ε), which is analogous to the *
As for your grammar: comparing it to the language you provided (which is not the language of the PDA), it doesn't describe that language correctly, and it doesn't describe the correct language of the PDA either.
If we use the rules
S --> aSa , then S --> B , we arrive at aBa , now use B --> ε ,and you get the string aa , which doesn't belong to the language you provided or that of the PDA ( note how the languages require at least one b to be in any string )
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3900560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Self dual posets product
Let $P,Q$ be partially ordered sets such that $P\times Q$ is self dual.
Self dual means $(P\times Q)^*=P\times Q$ where $P^*$ is the dual of $P$.
Does that mean $P,Q$ are self dual?
My professor hinted that the answer is negative, that in particular there exist non self dual $P,Q: P\times Q$ is self dual. Why is that true? Is there a counterexample of two such sets?
| Consider the partial orders $\langle\Bbb N,\le\rangle$ and $\langle\Bbb N,\ge\rangle$. Neither is self-dual — each is the dual of the other, in fact — but their product is self-dual.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3900741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Denote a ``limiting path'' by $ p := \bigcap_{n \in \mathbb{N}} \overline{\bigcup_{\ell \geq n} p_\ell}.$ Show that $p \in P$. Fix $x \neq y \in \Bbb R^n$. Denote by the class of "paths" connecting $x$ and $y$ the collection of sets
$$ P\equiv P(x,y) := \{ p \subset \Bbb R^n: \text{$p$ connected and compact},\ x,y \in p\}.$$
Assume that $(p_n)_{n \in \mathbb{N}} \subset P(x,y)$ is "uniformly bounded" in the sense that there exist $R > 0$ such that $p_n \subset B(0,R)$ for all $n \in \mathbb{N}$.
Denote a "limiting path" by
$$ p := \bigcap_{n \in \mathbb{N}} \overline{\bigcup_{\ell \geq n} p_\ell}.$$
Show that $p \in P$.
My try:
Each $\overline{\bigcup_{\ell \geq n} p_\ell}$ is closed, bounded and connected. [As they are subsets of $\overline{B(0,R)}$ and the $p_\ell$ all have the two points $x,y$ in common.] Then
$$ p := \bigcap_{n \in \mathbb{N}} \overline{\bigcup_{\ell \geq n} p_\ell}.$$ is compact.
I am having trouble understanding why $p$ is connected. In general, for nested connected sets, the intersection need not be connected. So I am guessing that I have to use compactness somehow. Please help.
| HINT: If $p$ is not connected, we can write it as $H\cup K$, where $H$ and $K$ are disjoint, non-empty closed sets. There are disjoint open sets $U$ and $V$ such that $H\subseteq U$ and $K\subseteq V$. Let $n\in\Bbb N$; the connectedness of $p_n$ implies that $p_n\nsubseteq U\cup V$ (why?), so there is an $x_n\in p_n\setminus(U\cup V)$. The sequence $\langle x_n:n\in\Bbb N\rangle$ is bounded, so it has a convergent subsequence. Consider the limit of that subsequence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3901063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find a side of smaller triangle
$\triangle ABC$ has side $\overline{AB}=160 cm$, $\overline{BC}=190 cm$, and $\overline{CA}=190 cm$.
Point $D$ is along side $\overline{AB}$ and $\overline{AD} = 100 cm$. Point $E$ is along side $\overline{CA}$.
Determine the length of $\overline{AE}$ if the area of $\triangle ADE$ is three-fifths the area of $\triangle ABC$.
Is my solution correct? Thanks in advance!
| As indicated in the comment, you're assuming $\triangle AED$ is isosceles (i.e., $DE \parallel BC$), which need not be true.
If you know that the area of a $\triangle$ is also given as
$$ \dfrac{1}{2}AB\cdot AC\sin A$$
then using this one can directly solve
$$ \dfrac{1}{2}AD\cdot AE\sin A = \dfrac{3}{5} \cdot\dfrac{1}{2}AB\cdot AC\sin A$$
$$ \Rightarrow AE = \dfrac{3}{5}\dfrac{AB\cdot AC}{AD}$$
where the $\sin A$ factor cancels, so no trigonometric values are actually needed.
Otherwise,
$$ \dfrac{1}{2}AD\cdot h_2 = \dfrac{3}{5} \cdot\dfrac{1}{2}AB\cdot h_1$$
$$ \Rightarrow h_2 = \dfrac{24}{25}h_1$$
and since the two right triangles formed by $h_1$, $h_2$ and containing $\angle A$ are similar
$$ \dfrac{AE}{AC} = \dfrac{h_2}{h_1}$$
$$ \Rightarrow AE = \dfrac{24}{25}AC$$
Either should give $AE = 182.4 $
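Both routes can be spot-checked with the given lengths (plain arithmetic; the variable names are mine):

```python
AB, AC, AD = 160, 190, 100

# Route 1: the sin A factor cancels out of the area equation
AE_route1 = (3 / 5) * AB * AC / AD

# Route 2: h2/h1 = (3/5) * (AB/AD) = 24/25, and AE/AC = h2/h1 by similar right triangles
AE_route2 = (24 / 25) * AC
```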
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3901362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Joint measure and absolute continuity wrt product of marginals I am trying to build intuition over the following matter: let $X,Y$ be two random variables with corresponding probability measures $P_X,P_Y$. Assume also there exists a joint measure $P_{XY}$ such that for every measurable set $E$ $P_{XY}(\mathcal{X}\times E) = P_Y(E)$ and for every $F$ $P_{XY}(F\times \mathcal{Y}) = P_X(F)$.
My question is: is there a characterisation for when the joint is guaranteed to be absolutely continuous wrt to $P_XP_Y$?
All the counterexamples I could come up with are somehow dimension-related or "pathological", like: let $P_XP_Y$ be the Lebesgue measure over the unit square and $P_{XY}$ the joint induced by $X=Y$.
If we exclude this type of settings, is the constraint of $P_X,P_Y$ being the marginals enough to guarantee absolute continuity? Does anyone have intuition on this problem?
| One sufficient condition for this to hold (a coupling $P_{XY}$ is absolutely continuous w.r.t. the product measure of marginals $P_{X} \times P_{Y}$) is when the coupling $P_{XY}$ is already absolutely continuous with respect to some reference measure on $\mathcal{X} \times \mathcal{Y}$ (e.g., Lebesgue measure on $\mathbb{R}^2$ in the 2D case), see this example: Marginally continuous measures.
This example excludes your "pathological" counterexample, where $P_{XY}$ is supported on a Lebesgue measure zero subset but the product measure is "larger".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3901510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is there any way to do this with eigenvectors and an eigenbasis? So, say we have a square matrix $A$, and let's say that $A^2, A^3,\ldots$ and so on are defined. Now if I want to find a general term $A^n$, is there any way to do that with eigenvectors? Take for example $A = \begin{bmatrix} 2 & 0 \\ 1 & -1\end{bmatrix}$. I would like to know if it's possible to do it with eigenvectors instead of just trying to guess a pattern. Like, can we do something by converting the matrix into an eigenbasis?
| If $A$ is diagonalizable (as your example matrix is), we can write:
$$A = TDT^{-1},$$
where $D$ is a diagonal matrix such that its diagonal entries correspond to the eigenvalues of $A$, and the columns of $T$ are the eigenvectors of $A$.
Starting from this, we get:
$$A^n = \underbrace{TDT^{-1} \cdot TDT^{-1} \cdot \ldots \cdot TDT^{-1}}_{n~\text{times}}= \\
= T D (TT^{-1}) D (TT^{-1}) \ldots (TT^{-1}) D T^{-1}= \\
= T D I D I \ldots I D T^{-1} = \\
= T D^n T^{-1}.
$$
Notice that $D^n$ is a diagonal matrix such that its diagonal entries are the $n$-th power of the eigenvalues of $A$.
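For the example matrix $A=\begin{bmatrix}2&0\\1&-1\end{bmatrix}$, the eigenvalues are $2$ and $-1$ with eigenvectors $(3,1)$ and $(0,1)$, which yields the closed form $A^n=\begin{bmatrix}2^n&0\\ \tfrac{2^n-(-1)^n}{3}&(-1)^n\end{bmatrix}$. A small sanity check against repeated multiplication (a sketch with hand-computed $T$ and $T^{-1}$, kept in exact rational arithmetic):

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[F(2), F(0)], [F(1), F(-1)]]
T = [[F(3), F(0)], [F(1), F(1)]]             # columns: eigenvectors for 2 and -1
T_inv = [[F(1, 3), F(0)], [F(-1, 3), F(1)]]  # inverse of T

def power_eig(n):
    Dn = [[F(2) ** n, F(0)], [F(0), F(-1) ** n]]  # D^n: eigenvalues raised to n
    return matmul(matmul(T, Dn), T_inv)

def power_naive(n):
    M = [[F(1), F(0)], [F(0), F(1)]]
    for _ in range(n):
        M = matmul(M, A)
    return M
```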
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3901668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Terms of Stanley Sequences Stanley Sequences (named after R. P. Stanley) are integer sequences beginning with 0, k where k is a positive integer, and where every following member is chosen to be the smallest integer bigger than the previous term so that no three terms (not necessarily consecutive) form an arithmetic progression. Another way of saying this is that we want the lexicographically first sequence beginning with 0, k that is free of three-term arithmetic progressions. An even simpler way to put it is that we search for the smallest integer bigger than the former term so that no three terms x, y, and z satisfy the equation x + z = 2y.
Let's look at the sequence where k = 1.
The first few terms of this sequence are: 0, 1, 3, 4, 9, 10, 12, 13, 27, 28.
We notice an interesting thing if we write the indexes of the terms of S(1) (the sequence where k=1), where $a_0=0$, $a_1=1$, and so on, in binary.
We get 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001. Now let's write the values for these indexes in ternary => 0, 1, 10, 11, 100, 101, 110, 111, 1000, 1001. This goes on forever. Another way of putting it is that each term of S(1) is obtained by writing the index in binary and reading it in ternary. Another way to look at this is to collect all the numbers in ternary without a '2' in their representation.
What is an easy way to prove this? Also, can we suggest other numerical systems we can use that give us a similar relationship when k is different than 1?
What I've tried doing is proving that since the only way to obtain a '2' when we multiply by 2 in ternary is by starting with a '1' (without carrying over, I mean) there are only a few cases which we should aim to avoid and then form a case by case analysis to prove that these are in fact the smallest remaining options but that is both unclear and tedious. Do you have any suggestions?
Thank you in advance, and please excuse any language mistakes I may have made...
(If someone wonders I'm researching the topic for fun. It's not for school work or sth like this)
| The $k=1$ case is not hard.
a) The sequence of ternary-2-free numbers is free of solutions to $x+z=2y$ with $x\ne z$: Since $x,z$ are $2$-free, adding produces no carries, hence $x+z$ has at least one digit $1$, namely at a place where $x$ and $z$ differ. The addition in $y+y$ also produces no carries, hence $y+y$ consists only of digits $0$ and $2$. Therefore $x+z\ne 2y$
b) Any lexically smaller sequence has a solution to $x+z=2y$: Let $b_0, b_1, b_2,\ldots $ be lexically smaller than our ternary-2-free sequence $a_0,a_1,a_2,\ldots$. Let $n$ be minimal with $b_n\ne a_n$ (and hence with $b_n<a_n$). Since the ternary-2-free numbers $<a_n$ are precisely $a_0,\ldots, a_{n-1}$, we conclude that $b_n$ is not 2-free.
Let $x$ be the number that has ternary digit $0$ where $b_n$ has digit $0$ and has digit $1$ where $b_n$ has digit $1$ or $2$.
Let $y$ be the number that has ternary digit $1$ where $b_n$ has digit $1$ and has digit $0$ where $b_n$ has digit $0$ or $2$.
Then $x,y$ are $<b_n$ and 2-free, hence occur among $b_0,\ldots, b_{n-1}$.
Also, we verify that $b_n+y=2x$ because adding $y$ turns every $1$ of $b_n$ into a $2$.
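The correspondence in the question is also easy to check by machine: build the greedy 3-AP-free sequence starting $0,1$ and compare it with "write the index in binary, read it in ternary" (a sanity check, separate from the proof above):

```python
def stanley(count):
    # greedy: each new term is the least integer above the previous one
    # that creates no 3-term arithmetic progression
    seq = [0]
    while len(seq) < count:
        c = seq[-1] + 1
        # a new largest element c is bad iff c = 2y - x for some x < y already chosen
        while any(2 * y - x == c for x in seq for y in seq if y > x):
            c += 1
        seq.append(c)
    return seq

binary_read_as_ternary = [int(bin(i)[2:], 3) for i in range(20)]
```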
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3901878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Properties of compact operators I'm working on the following problem, regarding compact operators.
Let $T$ and $S$ be bounded linear operators on an infinite-dimensional Hilbert space. Prove or disprove the following statements:
*
*If $T^n=I$ for some $n$, then $T$ is not compact.
*If $T^2=0$ then $T$ is compact.
*If $TS$ is compact then either $T$ or $S$ is compact.
The definition of compactness that I'm familiar with is: a linear operator $T$ is compact if and only if whenever $(x_n)$ is a bounded sequence, $(Tx_n)$ contains a convergent subsequence.
What I've done: For (1), I have seen the proof that an idempotent operator is compact if and only if it has finite rank. For (2), I am not sure how to start. For (3), I have only been able to prove the converse (if either $T$ or $S$ is compact then $TS$ is compact).
Any help on this problem? Thanks!
| *
*If $T^n=I$ and $T$ is compact, then $T^n$ would be compact and so would be the identity, which is not possible in an infinite dimensional Hilbert space.
*Let $H$ be the space of square summable sequences and $T\colon (x_k)_{k\geq 1}\mapsto (\lambda_k x_{k+1})_{k\geqslant 1}$, where $\lambda_k=1$ if $k$ is even and $\lambda_k=0$ if $k$ is odd. Then $T^2=0$ since $\lambda_k\lambda_{k+1}=0$ for every $k$, but $T$ is not compact: it maps the bounded orthonormal sequence $(e_{2k+1})_{k\geq1}$ to $(e_{2k})_{k\geq1}$, which has no convergent subsequence.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
As rings, $\mathbb{Z}[x]$ is not isomorphic to $\mathbb{Q}[x]$
Theorem: $\mathbb{Z}[x]$ is not isomorphic to $\mathbb{Q}[x]$.
My attempt:
For a contradiction, suppose $\mathbb{Z}[x] \cong \mathbb{Q}[x]$. So there exists an isomorphism $f: \mathbb{Q}[x] \to \mathbb{Z}[x]$. Since $\mathbb{Q}$ and $\mathbb{Z}$ are integral domains, we can make the following assertions about the units of $\mathbb{Q}[x]$ and $\mathbb{Z}[x]$, denoted $\overline{\mathbb{Q}[x]}$ and $\overline{\mathbb{Z}[x]}$, respectively:
\begin{align*}
\overline{\mathbb{Q}[x]} = \overline{\mathbb{Q}} = \{q \in \mathbb{Q} \mid q \neq 0\} \\
\overline{\mathbb{Z}[x]} = \overline{\mathbb{Z}} = \{-1,1\}.
\end{align*}
Since $f$ is a homomorphism, it maps units to units, so $f(\overline{\mathbb{Q}[x]}) \subset \overline{\mathbb{Z}[x]}$. Take $q_1, q_2, q_3$ distinct in $\overline{\mathbb{Q}[x]}$. Then $f(q_1), f(q_2), f(q_3)$ are units in $\mathbb{Z}[x]$, and since $f$ is injective, they are pairwise distinct, meaning $\mathbb{Z}[x]$ has three distinct units, a contradiction.
How does this proof look?
| Your proof looks good. Isomorphic rings have isomorphic groups of units.
Alternatively, the equation $2t=1$ has a solution in $\mathbb{Q}[x]$ but not in $\mathbb{Z}[x]$ and so the two rings cannot be isomorphic.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to negate if and only if How can I get $(A ↔ ¬B)$ or $(¬A ↔ B)$ from $¬(A ↔ B)$ using propositional laws?
I tried expanding $¬(A ↔ B)$ and I got to $(A ∧ ¬B) ∨ (B ∧ ¬A)$
I also tried expanding $(A ↔ ¬B)$ and I got to $(¬A ∨ ¬B) ∧ (B ∨ A)$
I don't know where in my expansion I went off the wrong path
| Distribute:
$$(A\land\lnot B)\lor(B\land\lnot A)=(A\lor B)\land(\lnot B\lor B)\land(A\lor\lnot A)\land(\lnot B\lor\lnot A)$$
and then recognize that $(P\lor\lnot P)$ is always true, so the middle two pairs drop out and the right hand side reduces to
$$(A\lor B)\land(\lnot B\lor\lnot A)$$
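With only two variables, the whole chain of equivalences can be brute-force checked over the four truth assignments (a sanity check, not a derivation):

```python
from itertools import product

def all_equivalent():
    for A, B in product([False, True], repeat=2):
        negated_iff = not (A == B)            # ¬(A ↔ B)
        dnf = (A and not B) or (B and not A)  # (A ∧ ¬B) ∨ (B ∧ ¬A)
        cnf = (A or B) and (not B or not A)   # (A ∨ B) ∧ (¬B ∨ ¬A)
        target = A == (not B)                 # A ↔ ¬B
        if not (negated_iff == dnf == cnf == target):
            return False
    return True
```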
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Equation of three lines passing through the origin If $dx^3+7x^2y+bxy^2+y^3=0$ represents three straight lines, of which two make complementary angles with the x-axis, then the value of $|d^2-bd|$=_______
My approach is as follows
From the above equation, presume that line passes through origin
The equation of lines are $y=m_1x$, $y=m_2x$ & $ y=m_3x$
For complementary angle the angle between them is $180^o$
$(y-m_1x)(y-m_2x)(y-m_3x)=0$
$(y^2-(m_1+m_2)xy+m_1m_2x^2)(y-m_3x)=0$
$(y^3+(m_1m_2+m_2m_3+m_3m_1)x^2y-(m_1+m_2+m_3)xy^2+m_1m_2m_3x^3)=0$
How do I proceed from here. For supplementary angle $m_1m_2=-1$ then what is the condition for complementary angles?
| Since the two lines make complementary angles $\theta$ and $90^\circ-\theta$ with the $x$-axis, $m_1m_2=\tan\theta\cdot\tan(90^\circ-\theta)=\tan\theta\cot\theta=1$. Let $m_1 + m_2 = S$.
Comparing coefficients,
$$y^3 - xy^2\sum m_1 + x^2y \sum m_1m_2 \color{red}{-x^3}m_1m_2m_3 = 0 =$$$$ y^3 + bxy^2 + 7x^2y + dx^3$$
we get
$$ d=-m_3$$
$$-b = S+m_3$$
$$ 7 = 1 + m_3S$$
so that $$|d(d-b)| = |-m_3(S)| $$
$$ = 6 $$
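A numerical spot-check with a concrete choice of slopes (my own pick, satisfying the constraints $m_1m_2=1$ and $1+m_3S=7$):

```python
from fractions import Fraction as F

m1, m2 = F(2), F(1, 2)  # tangents of complementary angles: product is 1
S = m1 + m2              # 5/2
m3 = F(6) / S            # from 7 = 1 + m3*S, i.e. m3*S = 6
d = -m1 * m2 * m3        # coefficient of x^3
b = -(S + m3)            # from -b = S + m3
value = abs(d * d - b * d)
```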
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finiteness of a $C^{*}$-algebra A unital $C^{*}$-algebra is said to be finite if $x^*x = 1$ implies $xx^* = 1$.
A unital $C^{*}$-subalgebra $B$ of a finite $C^{*}$-algebra $A$ is finite if $1_{B} = 1_{A}$.
Let $\lbrace A_n \rbrace$ be a sequence of finite unital $C^{*}$-algebras. Then both $\prod_{n=1}^{\infty}A_n$ and $\prod_{n=1}^{\infty}A_n / \oplus_{n=1}^{\infty}A_n$ are finite.
Could anybody help me to prove these two theorems?
| There is no magic happening here:
The unit of $\prod_nA_n$ is simply $(1_n)_{n\geq1}$ where $1_n$ denotes the unit of $A_n$. Now suppose that $(1_n)_{n\geq1}=x^*x$ for some $x\in\prod_nA_n$, say $x=(x_n)_{n\geq1}$, where $x_n\in A_n$ and $\sup_{n}\|x_n\|=\|x\|<\infty$. Then for each $n\geq1$ we have that $1_n=x_n^*x_n$ since the operations in $\prod_nA_n$ are pointwise. Since each $A_n$ is finite, we have that $x_nx_n^*=1_n$ as well, for all $n\geq1$. But this means that $1_{\prod_nA_n}=xx^*$. This shows that $\prod_nA_n$ is finite.
For the quotient, let $x\in\prod_nA_n$, $x=(x_n)$ so that $1+\sum_nA_n=(x+\sum_nA_n)^*\cdot(x+\sum_nA_n)$, thus $1-x^*x\in\sum_nA_n$. Equivalently, for each $\varepsilon>0$ there exists $n_0\geq1$ so that for all $n\geq n_0$ we have that $\|1_n-x_n^*x_n\|<\varepsilon$.
Lemma: If $p,q$ are projections in a $C^*$-algebra and $\|p-q\|<1$, then they are homotopic in the set of projections.
But homotopic projections are also Murray-von Neumann equivalent (see Rordam's book "An introduction to K-theory of $C^*$-algebras" for a proof of this and the above lemma). Now let $\varepsilon>0$. We have that there exists $n_0\geq1$ so that $\|1_n-x_n^*x_n\|<\min\{\varepsilon,1\}$ for all $n\geq n_0$. Therefore (by the lemma and our observation) we can find $y_n\in A_n$ so that $1_n=y_n^*y_n$ and $x_n^*x_n=y_ny_n^*$, for all $n\geq n_0$. But since $A_n$ is finite we have that $y_ny_n^*=1_n$, so $x_n^*x_n=1_n$ and again, since $A_n$ is finite we have that $x_nx_n^*=1_n$. This shows that $0=\|1_n-x_nx_n^*\|<\varepsilon$ for all $n\geq n_0$. In particular, $1-xx^*\in\sum_nA_n$, so $1+\sum_nA_n=(x+\sum_nA_n)\cdot(x+\sum_nA_n)^*$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An Application of Inclusion Exclusion Principle. Let $A$ be a finite set and let $A_{1},A_{2},\ldots,A_{n}$ be subsets of $A$. And let $C$ be a subset of $\{1,2,\ldots,n\}$. Let $B$ be the set of all $x\in A$ such that $x\in A_{i}$ for all $i\in C$ and $x\not \in A_{i}$ if $i\in\{1,2,\ldots,n\}-C$. I have to prove that $$|B|=\sum_{C\subseteq D\subseteq \{1,2,\ldots,n\}}(-1)^{|D-C|}\cdot \Bigl|\bigcap_{d\in D}A_{d}\Bigr|$$ where $\bigcap_{d\in D}A_{d}$ is taken as $A$ if $D=\emptyset$. Definitely I know this is an application of Inclusion-Exclusion, but I am not sure how to go about applying the result.
| Hint: First notice that $x\in B$ iff $x\in A_i$ for $i\in C$ but $x\not \in A_j$ for $j \in [n]\setminus C$, so you want to take the intersection over $C$ and remove the union over the indices outside $C$. Show then that you want
$$B=\left (\bigcap _{i\in C}A_i\right )\setminus \left ( \bigcup _{j\not \in C}A_j\right)=\left (\bigcap _{i\in C}A_i\right )\setminus \left ( \bigcup _{j\not \in C}\left (A_j\cap \bigcap _{i\in C}A_i\right )\right),$$
the last equality is just by definition of difference.
Then take the size of the set and use normal inclusion-exclusion (notice that $C$ is always going to be in the intersection of the I-E and so you are taking sets that contain $C,$ be careful with the sign). Can you finish?
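The identity itself can be stress-tested by brute force on a small example before writing the proof (my own arbitrary choice of $A$, the $A_i$, and $C$; purely a sanity check):

```python
from itertools import chain, combinations

A = set(range(1, 9))
subsets = {1: {1, 2, 3, 5}, 2: {2, 3, 4, 8}, 3: {3, 5, 6}}  # A_1, A_2, A_3, chosen arbitrarily
n, C = 3, {1, 2}

# B: in every A_i for i in C, and in no A_j for j outside C
B = {x for x in A
     if all(x in subsets[i] for i in C)
     and all(x not in subsets[j] for j in set(range(1, n + 1)) - C)}

def inter(D):
    out = A  # convention: the intersection over the empty family is A
    for d in D:
        out = out & subsets[d]
    return out

rest = sorted(set(range(1, n + 1)) - C)
total = sum((-1) ** len(extra) * len(inter(C | set(extra)))
            for extra in chain.from_iterable(combinations(rest, r) for r in range(len(rest) + 1)))
```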
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3902965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Torsion in the first Homology group I was trying to solve an exercise which says the following:
Let $U\subset\mathbb{R}^3$ be an open subset. Then $H_1(U;\mathbb{Z})$ has no torsion.
I think that the universal coefficient theorem in homology has to be used, but I don't know how.
Moreover, this result seems very strange to me since we do not have any other hypothesis on the open subset $U$. For example in dimension $4$ we have the smooth embedding of $ \ \mathbb{RP}^2$ in $ \ \mathbb{R}^4$ and so it's enough to take a tubular neighborhood of $ \ \mathbb{RP}^2$ (regarded as a closed submanifold of $\mathbb{R}^4$) to obtain an open subset $U\subset\mathbb{R}^4$ which retracts by deformation onto $ \ \mathbb{RP}^2$, and so $H_1(U;\mathbb{Z})=\mathbb{Z}_2$.
If the proposition at the beginning is true then it is saying that no "strange" phenomena can happen in $\mathbb{R}^3$, as for example a smooth embedding of some projective space. It seems to be a non trivial result, but it was given to me as an exercise during my Algebraic Topology course.
| Hint: If $H_1(U)$ has torsion, find a nice compact subset $K\subseteq U$ such that $H_1(K)$ also has torsion, and then use Alexander duality.
More details are hidden below
Triangulate $U$ and consider a nonzero torsion class in $H_1(U)$. There is then a finite subcomplex $K$ of $U$ which contains a 1-cycle $\alpha$ representing that torsion class, and also a 2-cycle whose boundary is a multiple of $\alpha$ to witness that $[\alpha]$ is torsion. So, $[\alpha]$ is also a torsion class in $H_1(K)$. But by Alexander duality, $H_1(K)\cong H^1(S^3\setminus K)$, and $H^1$ can never have torsion (this follows from the universal coefficient theorem since $H_0$ is always free).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing that $f(x, y)=xy$ is continuous on $S^1$ This is question from studying general topology.
Let $f : S^1 \times S^1 \rightarrow S^1$ defined by $f(x, y)=xy$
How can I show that $f$ is continuous?
My attempt: Let $U = \{e^{i\theta} \mid \alpha < \theta < \beta \}$ be an open set in $S^1$.
Then $f^{-1}(U)=\{(e^{i \theta _1}, e^{i \theta _2}) \mid \alpha + 2n\pi < \theta_1 + \theta_2 < \beta + 2n\pi\}$, which is homeomorphic to $V = \{(x, y) \mid \alpha + 2n\pi < x+y < \beta + 2n\pi\}$.
| Let $\iota\colon S^1 \to \Bbb C$ be the inclusion. By definition of the subspace topology, $\iota$ is continuous. So $\iota\times \iota\colon S^1\times S^1 \to \Bbb C \times \Bbb C$ is also continuous. Define $\widetilde{f}\colon \Bbb C\times\Bbb C \to \Bbb C$ by $\widetilde{f}(x,y) = xy$. This $\widetilde{f}$ is obviously continuous, as its real and imaginary parts are polynomials in the real coordinates. Then $$f = \widetilde{f}\circ (\iota\times \iota)$$is the composition of continuous maps, hence continuous; since it takes values in $S^1$, it is continuous as a map into $S^1$ with the subspace topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Pigeon hole principle to prove $a-b=9$ in subset. I have a set of numbers
$$
[n] = \{1,2,...,n\}
$$
in my special case $n = 100$, and I have a subset of $[100]$ with the following specification
$$
A\subseteq[100]
$$
and
$$
|A| \geq 55
$$
now I should prove that this statement is true for some
$$
a,b\in A: a-b=9
$$
I thought about the problem and I realised that if I just take the numbers $1-55$, there are a lot of pairs $a,b$ that match the condition.
So I tried to build a set in which no pair matches the condition. Therefore I just used the even numbers from $2-100$, because even − even = even, so no two of them can differ by the odd number $9$. But there are only $50$ even numbers in $[100]$, so I have to add at least $5$ odd numbers. So as soon as I add one odd number my set matches the condition.
Using the pigeonhole principle:
$$
n,m \in \mathbb{N},\ f: [n] \to [m] \implies \exists j^{*} \in [m]:\ |f^{-1}(j^{*})|\geq\left\lceil\frac{n}{m}\right\rceil
$$
I get that there are at least
$$
\left\lceil\frac{100}{55}\right\rceil = 2
$$
elements in the same class.
But I think that I have to specify the function for the projection to prove the problem.
And I think that I can use the modulo operator to achieve my goal, but currently I am stuck.
Could someone please help me?
| First consider partitioning [$100$] in following manner :
$$ \{1,2,\ldots,18 \} \, \{19,20,\ldots,36 \} \cdots \{73,74,\ldots,90 \} \, \{91,92,\ldots,100 \} $$
Now in each of the first five sets we have $9$ pairs differing by $9$, namely $$ (1,10) , (2,11) , \ldots , (9,18), \quad \ldots, \quad (73,82), \ldots, (81,90) $$
and in the last set, one pair $(91,100)$. The remaining numbers are unpaired.
Can you complete?
We have $9\cdot5+1=46$ pairs, and the $8$ numbers $\{92,93,\ldots,99\}$ are unpaired.
To make a set $A$ not having the property, choose one number from each pair together with all the unpaired ones. But this gives only $46+8=54$ numbers. The $55^{th}$ number must complete one of the pairs above, so some $a,b \in A$ with $a-b=9$ must exist.
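For a concrete check of this counting argument (a sanity check only, not part of the proof), here is a short Python script. It builds the $46$ pairs and $8$ unpaired numbers, verifies that they partition $[100]$, and confirms that the resulting $54$-element set has no two elements differing by $9$, while adding any excluded number would complete a pair:

```python
# pairs (m, m+9) inside the five blocks of 18, plus (91, 100); 92..99 unpaired
pairs = [(18 * k + j, 18 * k + j + 9) for k in range(5) for j in range(1, 10)]
pairs.append((91, 100))
unpaired = list(range(92, 100))
assert len(pairs) == 46 and len(unpaired) == 8

# sanity: the pairs and singletons partition [100]
covered = sorted([x for p in pairs for x in p] + unpaired)
assert covered == list(range(1, 101))

# a largest candidate "bad" set: one element per pair, plus all unpaired numbers
S = {a for (a, b) in pairs} | set(unpaired)
assert len(S) == 54
assert all(abs(a - b) != 9 for a in S for b in S if a != b)

# any 55th element completes a pair: every excluded number has a partner in S
assert all((x - 9 in S) or (x + 9 in S) for x in range(1, 101) if x not in S)
```

This exhibits a $54$-element set avoiding the condition; the pigeonhole argument above shows that no $55$-element set can avoid it.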
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$ \cos x\geq 1-\frac{x^2}{2} $ Prove that for $x\in\mathbb{R}$
$$
\cos x\geq 1-\frac{x^2}{2}.
$$
My try:
Consider $g(x)=\cos(x)-1+\frac{x^2}{2}.$
If I differentiate $g(x)$ then we get $g'(0)>0$, so locally we get $g(x)>g(0)=0$, and then we can see that the function is increasing for any $x$, hence we have $g(x)\geq 0$ for any $x \geq 0$. But I am getting that if $x<0$ then $g(x) \leq 0$, so this inequality isn't true in general for all $x \in \Bbb R$.
But if we use Taylor's theorem with Lagrange's remainder, then I am also not sure what the point $\zeta\in [-x,0]$ will be where $\cos(x)=1-\frac{x^2}{2}+\frac{x^4}{4!}\cos(\zeta).$
| $$\cos x-1+\frac{x^2}2\ge0$$ and equality holds at $x=0$.
Then differentiating,
$$-\sin x+x\ge 0$$ and equality holds at $x=0$.
Finally,
$$-\cos x+1\ge 0.$$
So, for $x\ge 0$, $-\sin x+x$ grows from $0$ and is non-negative, and $\cos x-1+\dfrac{x^2}2$ grows from $0$ and is non-negative. Since $\cos x-1+\dfrac{x^2}2$ is an even function, the inequality for $x<0$ follows at once.
This technique works for Taylor expansions to arbitrary orders.
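As a quick numerical sanity check of the inequality (obviously not a proof), one can sample $g(x)=\cos x-1+\frac{x^2}{2}$ on a grid:

```python
import math

# g(x) = cos(x) - 1 + x^2/2 should be non-negative for every real x
def g(x):
    return math.cos(x) - 1 + x * x / 2

samples = [k / 100 for k in range(-1000, 1001)]  # grid on [-10, 10]
assert all(g(x) >= 0 for x in samples)
assert g(0) == 0  # equality exactly at x = 0
```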
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 0
} |
Given X+Y+Z = 0, find $\frac{(X^3+Y^3+Z^3)}{XYZ}$ The result can be found using the equation :
$(X^3+Y^3+Z^3) - 3XYZ = (X+Y+Z)(X^2 - XY +Y^2 - YZ +Z^2 - XZ)$
Since X+Y+Z = 0, the right side of the equation is equal to 0. Therefore $X^3+Y^3+Z^3 = 3XYZ$ and the answer to the problem is 3.
However, what if we calculate $X^3+Y^3+Z^3$ as $(X+Y+Z)^3 - 3X^2Y - 3Y^2X - 3X^2Z - 3Z^2X - 3Y^2Z - 3Z^2Y$? Since $(X+Y+Z)^3 = 0$, $X^3+Y^3+Z^3$ can be replaced with $- 3X^2Y - 3Y^2X - 3X^2Z - 3Z^2X - 3Y^2Z - 3Z^2Y$.
$\frac{- 3X^2Y - 3Y^2X - 3X^2Z - 3Z^2X - 3Y^2Z - 3Z^2Y}{XYZ}$ = $\frac{-3X - 3Y}{Z} + \frac{-3X-3Z}{Y} + \frac{-3Y-3Z}{X}$
If X+Y+Z = 0, consequently 3X+3Y+3Z = 0, so $-3X-3Y=3Z$ and similarly for the other numerators. Our expression can be rewritten as $\frac{3Z}{Z} + \frac{3Y}{Y} +\frac{3X}{X}$, so the answer is 9.
Could you please tell me which way of solving this problem is right and why
| The first attempt is correct, assuming that $X, Y, Z \neq 0$. You can verify this with a simple example, say $X = Y = 1, Z = -2$. The mistake in the second attempt is due to the fact that you have missed a term in your expression of $X^3 + Y^3 + Z^3$. Specifically,
$$(X + Y + Z)^3 = X^3 + Y^3 + Z^3 + 3(X^2 Y + Y^2 X + X^2 Z + Z^2 X + Y^2 Z + Z^2 Y) + \color{blue}{6XYZ}$$
To avoid errors like the one you made, always check that the number of terms in your expansion is correct! We expect $3^3 = 27$ terms in the expansion of $(X + Y + Z)^3$, and your expansion only gave $21$. You can then plug this into your second method and you will see that you get the correct answer of $3$.
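As a quick check of the corrected expansion (a sketch using exact rational arithmetic; the helper name is ad hoc):

```python
from itertools import product
from fractions import Fraction

# check the full cubic expansion at several rational points:
# (X+Y+Z)^3 = X^3+Y^3+Z^3 + 3(X^2 Y + Y^2 X + X^2 Z + Z^2 X + Y^2 Z + Z^2 Y) + 6XYZ
def expansion_gap(X, Y, Z):
    lhs = (X + Y + Z) ** 3
    rhs = (X**3 + Y**3 + Z**3
           + 3 * (X**2 * Y + Y**2 * X + X**2 * Z + Z**2 * X + Y**2 * Z + Z**2 * Y)
           + 6 * X * Y * Z)
    return lhs - rhs

vals = [Fraction(n) for n in (-3, -1, 2, 5)]
assert all(expansion_gap(X, Y, Z) == 0 for X, Y, Z in product(vals, repeat=3))

# the concrete instance X = Y = 1, Z = -2 satisfies X+Y+Z = 0
X, Y, Z = 1, 1, -2
assert X**3 + Y**3 + Z**3 == 3 * X * Y * Z  # both sides equal -6
ratio = Fraction(X**3 + Y**3 + Z**3, X * Y * Z)
assert ratio == 3
```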
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Axiom Q in Fischer's Intermediate Real Analysis In Intermediate Real Analysis by Emanuel Fischer page 6, the author states an axiom that says
(Axiom Q) If $x$ and $y$ are real numbers, where $z+y\neq z$ holds for some real $z$, then there exists a real number $q$ such that $yq=x$.
Is there a name for this axiom? I believe "$z+y\neq z$" is a typo; shouldn't it be $z+y\neq x$? Does anyone know if there is a list of typo for this book?
| He's postulating the axiom that quotients exist for real numbers, with the necessary restriction that the divisor must be nonzero, i.e. it cannot be the additive neutral element (usually denoted by $0$).
Unlike the most common axiomatization of a field, the author does not include axioms for the neutral elements $0$ and $1$ for addition and multiplication. Rather, he defines the additive and multiplicative groups using axioms that differences and quotients exist, i.e. $\,a + x = b\,$ is solvable for all $\,a,b\in \Bbb R,\,$ and $\, a\cdot x = b\,$ is solvable for all $\,a,b\,$ when $\,a\,$ is not additively neutral (i.e. when $\, x + a = x\,$ fails to be true for all $\,x\in \Bbb R,\,$ i.e. when $\,r+a\neq r\,$ for some $\,r\in \Bbb R).$
Later he proves that there exist unique additive and multiplicative neutral elements, denoted by the customary symbols $0$ and $1$.
This axiomatization of groups using division (or subtraction) is less common than that using inverses and neutral elements, but it does occur from time to time (and if memory serves correct it has been discussed here in the past on multiple occasions, e.g. here). In fact there are many known axiomatizations of groups, rings, fields. Typically we choose one that proves convenient for the context at hand.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $f∈C^{(n+1)}(a,b)$and suppose $x_o∈(a,b)$ and $f′(x_o) =···=f^{(n)}(x_0) = 0$ but $f^{(n+1)}(x_o) \neq 0$. Let $f∈C^{(n+1)}(a,b)$and suppose $x_o∈(a,b)$ and $f′(x_o) =···=f^{(n)}(x_0) = 0$ but $f^{(n+1)}(x_o) \neq 0$. Then in $x_o$ the function f has
(i) a strict local minimum, if $n$ is odd and $f^{(n+1)}(x_o)>0$,
(ii) a strict local maximum, if $n$ is odd and $f^{(n+1)}(x_o) < 0$
(iii) no extremum, if $n$ is even.
My work
Not a lot actually. I definitely do understand why the properties are as such. But where I am having trouble is explicitly writing it down in appropriate mathematical language. As such, what would be a "model answer" for questions like this that prove the statements conclusively?
| Here are some big hints.
Since $f^{(n+1)}(x_0)\neq 0$ and $f^{(n+1)}$ is continuous, there exists $\delta$ such that $f^{(n+1)}(x)$ has the same sign as $f^{(n+1)}(x_0)$ for all $x$ with $|x-x_0|<\delta$.
Then apply Taylor's theorem. For all $x$ with $|x-x_0|<\delta$, we may write
$$
f(x)=f(x_0)+f'(x_0)(x-x_0)+\ldots+\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n+\frac{f^{(n+1)}(c)}{(n+1)!}(x-x_0)^{n+1}
$$
for some $c$ between $x$ and $x_0$. It should be obvious how to break it up into cases and determine when $f(x)>f(x_0)$ and when $f(x)<f(x_0)$.
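The statement can be illustrated, though of course not proved, by the standard polynomial examples at $x_0=0$, checked numerically on a punctured neighbourhood:

```python
# illustration of the three cases at x0 = 0:
# f(x) = x^4:  f' = f'' = f''' = 0 at 0, f''''(0) = 24 > 0, n = 3 odd -> strict local minimum
# f(x) = -x^4: n = 3 odd, f''''(0) = -24 < 0                          -> strict local maximum
# f(x) = x^3:  f' = f'' = 0 at 0, f'''(0) = 6 != 0, n = 2 even        -> no extremum

xs = [k / 1000 for k in range(-100, 101) if k != 0]  # punctured neighbourhood of 0

assert all(x**4 > 0 for x in xs)        # strict local minimum at 0
assert all(-(x**4) < 0 for x in xs)     # strict local maximum at 0
assert any(x**3 > 0 for x in xs) and any(x**3 < 0 for x in xs)  # no extremum
```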
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3903945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Doubt in Sum no 29 of Rudin
Prove that every open open set in $R$ is the union of a countable number of disjoint intervals.
Although this question has been in stack exchange for too many times but here is my intuition to it. Can someone just suggest whether I am on the right track.
I am taking the example of the open set $(a,b)$.So I take the first segment as $(a, b-1)$ , the second segment as $[b-1,b-1/2)$ the third as $[b-1/2,b-1/3)$ and so on...My logic behind this is that $(0,1-1/n)$ covers $(0,1)$ and so $(a, b-1/n)$ covers $(a, b) $.The intervals are disjoint also and their union is also countable. Now, my question is the hint given in rudin is to use the fact that $R$ is separable , but where do I use it, Am I making a mistake somewhere?
|
I am taking the example of the open set $(a,b)$.So I take the first segment as $\color{red}{(a, b-1)}$ , the second segment as $[b-1,b-1/2)$ the third as $[b-1/2,b-1/3)$ and so on...My logic behind this is that $(0,1-1/n)$ covers $(0,1)$ and so $(a, b-1/n)$
$\color{red}{ covers}$ $(a, b) $.The intervals are disjoint also and their union is also countable. Now, my question is the hint given in rudin is to use the fact that $R$ is separable , but where do I use it, Am I making a mistake somewhere?
From what you have written, I assume you mean
$$(a,b)=(a,b-1) \cup (b-1,b-\frac{1}{2}) \cup (b-\frac{1}{2},b-\frac{1}{3}) \cup \dots $$
But there are many issues with this logic. The mistakes are
$1$-What if $a > b-1$?
$2$- Your open set need not be of the form $(a,b)$. It can be of the form $\bigcup_{i\in I}(a_i,b_i)$ where $I$ can be an uncountable set as well.
$3$- Your right-hand side misses the points $\Big\{b-1,b-\frac{1}{2},b-\frac{1}{3},\dots\Big\}$, so you cannot say that your set $(a,b)$ is covered.
Now the hint can give you an easier solution. You can think about that. The hint says that there exists a countable dense set. The set of rational numbers $\mathbb{Q}$ is countable and dense in $\mathbb{R}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3904072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it correct to write the whole number $2$ as a mixed fraction $1 \dfrac{1}{1}$ or $2 \dfrac{0}{1}$? So if we start from writing $2$ as an improper fraction, and use the same procedure as for any improper fraction to convert it to a mixed number, we'll get $2\dfrac{0}{1}$. Which makes sense, and it can be reverted back to $2$.
\begin{align}
2\rightarrow\dfrac{2}{1}\rightarrow 2\dfrac{0}{1}
\end{align}
But from playing around with this, I also found that there's another mixed number that can be evaluated to $2$. Starting from $1+\dfrac{1}{1}$
$$1+\dfrac{1}{1}\rightarrow1\dfrac{1}{1}\rightarrow\dfrac{2}{1}\rightarrow2$$
So, $2$ can be represented as either $2\dfrac01$ or $1\dfrac11$.
Are both these correct?
| I would just write $2$, and reserve the use of mixed fractions for non-integers. Both your options look strange and unusual, but I don't think either of them is wrong.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3904228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the Real Prime? There seems to be an importance to the ring of adeles for the rational numbers (discussed here), with valuations for every $\mathbb{Q}_p$, but also one "infinite" valuation "$\mathbb{Q}_∞$", seemingly equal to $\mathbb{R}$.
Why would something like $\mathbb{Q}_∞$ be used in the first place, and how is that equal to the reals? Is there something like a $∞$-adic metric that works like the usual one?
Moreover, it seems to suggest that $∞$ here is a sort of an infinite prime number, i.e. the real prime, having some occult-sounding books written about it. So, does it exist as some sort of a describable object here, or is it just notation?
| It is a formal notation. We treat $|\cdot|$ as an absolute
value $|\cdot|_{\infty}$ coming from an “infinite prime”, so that we obtain, among other things, a product formula
$$
\prod_{p\le \infty} |\alpha|_p=1
$$
for every $\alpha\in \Bbb Q^{\times}$. Of course, $p=\infty$ is not really a prime. So $\Bbb Q_{\infty}:=\Bbb R$ is just a notation. We can summarize all completions of $\Bbb Q$ by
$$
\Bbb Q_2,\Bbb Q_3,\Bbb Q_5,\cdots ,\Bbb Q_{\infty}
$$
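Here is a small Python sketch (helper names are my own) that checks the product formula for a sample rational, treating $|\cdot|_\infty$ as just one more absolute value alongside the $p$-adic ones:

```python
from fractions import Fraction

def padic_abs(alpha, p):
    """|alpha|_p = p^(-v_p(alpha)) for a nonzero rational alpha."""
    v = 0
    num, den = alpha.numerator, alpha.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return Fraction(p) ** (-v)

def primes_up_to(n):
    return [p for p in range(2, n + 1) if all(p % d for d in range(2, p))]

alpha = Fraction(-12, 5)
finite_part = Fraction(1)
for p in primes_up_to(100):  # all primes dividing 12 or 5 are <= 100
    finite_part *= padic_abs(alpha, p)
product = abs(alpha) * finite_part  # |alpha|_infty times the finite places
assert product == 1
```

Here $|-12/5|_2 = 1/4$, $|-12/5|_3 = 1/3$, $|-12/5|_5 = 5$ and $|-12/5|_\infty = 12/5$, whose product is exactly $1$.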
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3904392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Given a linear transformation $T$ find a basis $B$ such that $[T]_B$ is a diagonal matrix Given
$ A = \begin{bmatrix} 1 & 3 \\ 0 & 6 \end{bmatrix} $
and the linear transformation $T:M_2(\mathbb{R}) \rightarrow M_2(\mathbb{R})$ which is defined as
$T(v) = Av$
find a basis $B$ of $M_2(\mathbb{R})$ such that $[T]_B$ is a diagonal matrix.
I have no idea how to approach this problem, any help would be appreciated.
| A matrix $M = \begin{pmatrix} x & y \\ z & t \end{pmatrix}\neq 0$ is an eigenvector of $T$ for the eigenvalue $\lambda$ if and only if
\begin{align}
\begin{pmatrix} x + 3z & y + 3t \\ 6z & 6t \end{pmatrix} = \begin{pmatrix} \lambda x & \lambda y \\ \lambda z & \lambda t \end{pmatrix}
\end{align}
and then, if and only if
\begin{align}
(1-\lambda)x + 3z &= 0 \\
(1-\lambda )y + 3t &= 0 \\
(6-\lambda)z &=0\\
(6-\lambda)t&=0
\end{align}
So now, the problem reduces to determine for which value of $\lambda$ the above system have a non-zero solution. If you can determine $4$ independant matrices $M_1,M_2,M_3$ and $M_4$ that are solution (maybe they are eigenvectors with the same eigenvalue $\lambda$!), you can diagonalize $T$ in the basis $(M_1,M_2,M_3,M_4)$.
A trick is to notice that $\begin{pmatrix}1 \\ 0 \end{pmatrix}$ is an eigenvector of $A$ with eigenvalue $1$, so $M_1=\begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix}$ and $M_2 = \begin{pmatrix}0 & 1 \\ 0 & 0 \end{pmatrix}$ are eigenvectors of $T$ with eigenvalue $1$. You already have two independent eigenvectors for $T$, so just focus on finding two others.
In fact, if $V_1$ and $V_2$ are two linearly independent eigenvectors of $A$, then the $2\times 2$ matrices $(V_1,0)$, $(0,V_1)$, $(V_2,0)$ and $(0,V_2)$ will be independent eigenvectors of $T$. One can see this by looking at the above system: the $x$ and $z$ coordinates are coupled independently of the $y$ and $t$ coordinates (not entirely independently, though, since both pairs must correspond to the same eigenvalue $\lambda$). So $(x,z)$ and $(y,t)$ have to be eigenvectors of $A$ with the same eigenvalue $\lambda$.
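A quick computational check of this recipe, using the eigenvector $(3,5)^T$ of $A$ for the eigenvalue $6$ (obtained from $(A-6I)v=0$):

```python
# sanity check of the suggested basis for A = [[1, 3], [0, 6]]:
# (1,0) is an eigenvector of A for 1, and (3,5) is an eigenvector of A for 6,
# so placing these columns into 2x2 matrices gives four eigenvectors of T(M) = AM
A = [[1, 3], [0, 6]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def scale(c, M):
    return [[c * e for e in row] for row in M]

basis = [([[1, 0], [0, 0]], 1),
         ([[0, 1], [0, 0]], 1),
         ([[3, 0], [5, 0]], 6),
         ([[0, 3], [0, 5]], 6)]

for M, lam in basis:
    assert matmul(A, M) == scale(lam, M)
```

In this basis $[T]_B$ is the diagonal matrix $\operatorname{diag}(1,1,6,6)$.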
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3904576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bound on the shortest non-zero vector in any full rank n-dimensional lattice $\Lambda \subseteq \mathbb{R}^n$ with respect to the $1$-norm. How can i prove
$$\lambda_1 \; \leq \; (n! \; det(\Lambda))^{\frac{1}{n}} \approx \frac{n \; det{(\Lambda)}^\frac{1}{n}}{e}.$$
Here $\lambda_1$ is the length (in the $1$-norm) of a shortest non-zero lattice vector. My initial thought was using Minkowski's theorem (choosing $S =$ the $n$-ball of radius $\sqrt n \frac{\lambda_1}{n}$) and proof by contradiction (assuming $\lambda_1 \; > \; (n! \; \det(\Lambda))^{\frac{1}{n}}$ and contradicting the minimality of $\lambda_1$).
[Minkowski’s convex body theorem] : Let $\Lambda$ be a full dimensional lattice. If $S \subset \mathbb{R}^n$ is a symmetric convex body of volume $vol(S) > 2^n det(\Lambda)$ , then $S$ contains a non-zero lattice point.
$1$-norm of a vector $x$: $\sum_i |x_i|$.
| I would like to add my answer too, somebody may find it useful. We show that every (full rank, n-dimensional) lattice $\Lambda$ always contains a non-zero vector $x$ such that
$||x||_1 \leq \; (n! \; det(\Lambda))^{\frac{1}{n}}$.
Let
$\lambda_1 = \min\{||x||_1 : x \in \Lambda \setminus \{0\}\}$ and assume for contradiction that $\lambda_1 > (n! \; \det(\Lambda))^{\frac{1}{n}}$. As @reuns mentioned, take
$S = \{ x : ||x||_1 < \lambda_1 \}$. Notice that $S$ is convex, symmetric and has volume
$\operatorname{vol}(S) = 2^n \frac{\lambda_1^n}{n!} > 2^n \det(\Lambda)$. So, by Minkowski's theorem, $S$ contains a non-zero lattice vector $x$. By definition of $S$, we have $||x||_1 < \lambda_1$, a contradiction to the minimality of $\lambda_1$.
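Two quick numerical sanity checks in Python: a Monte Carlo estimate of the cross-polytope volume $2^n l^n/n!$ used above (for $n=3$, $l=1$), and the Stirling approximation $(n!)^{1/n}\approx n/e$ behind the "$\approx$" in the statement:

```python
import math
import random

# volume of the open cross-polytope {x : ||x||_1 < l} in R^n is 2^n l^n / n!
# Monte Carlo check for n = 3, l = 1 (exact volume 8/6 = 4/3)
random.seed(0)
N = 200_000
hits = sum(1 for _ in range(N)
           if sum(abs(random.uniform(-1, 1)) for _ in range(3)) < 1)
estimate = 8 * hits / N          # the sample box [-1,1]^3 has volume 8
assert abs(estimate - 4 / 3) < 0.05

# Stirling: (n!)^(1/n) ~ n/e, which gives the approximation in the bound
n = 50
lhs = math.factorial(n) ** (1 / n)
assert abs(lhs / (n / math.e) - 1) < 0.1
```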
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3904836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
If $f(nx)=f(x)^n$ then $f(x+y)=f(x)f(y)$ If $f:\mathbb{R}\to\mathbb{R}_{+}$ is injective monotone function and if, $f(nx)=f(x)^n$ for all $n\in\mathbb{Z}$ and for all $x\in\mathbb{R}$, then,
$$f(x+y)=f(x)f(y),\text{ for all }x,y\in\mathbb{R}$$
It is easy to check the equality if $y$ is a multiple of $x$.
I thought about proving the inequalities, $f(x+y)\le f(x)f(y)$ and $f(x+y)\ge f(x)f(y)$, but I couldn't get to them.
Assuming that $f$ is increasing and $x<y$, then $y=kx+r$, for some $k\in\mathbb{Z}$ and $0< r\le \lvert x \rvert$, so
$$f(x+y)=f(x+kx+r)=f(x(k+1)+r)\ge f(x(k+1))=f(x)^{k+1}=f(x)f(x)^k=f(x)f(kx)$$
That is, $f(x+y)\ge f(x)f(kx)$, but the problem is that $f(kx)<f(y)$ so I can't continue the demonstration.
| If $x,y \in \mathbb{Z}$ then
$$
f(x+y) = f((x+y)1) = f(1)^{x+y} = f(1)^x f(1)^y = f(x1) f(y1) = f(x)f(y).
$$
If $x,y \in \mathbb{Q}$ then there is $n\in\mathbb{N}$ such that $nx,ny\in\mathbb{Z}.$ Then
$$
f(x+y)^n = f(n(x+y)) = f(nx+ny) = f(nx) f(ny) = f(x)^n f(y)^n = (f(x)f(y))^n,
$$
so $f(x+y)=f(x)f(y).$
If $x,y\in\mathbb{R}$ then take sequences $x_k^+, x_k^-, y_k^+, y_k^- \in \mathbb{Q}$ such that $x_k^+\to x+,\ x_k^-\to x-,\ y_k^+\to y+,\ y_k^-\to y-$. The result then follows from density of $\mathbb{Q}$ in $\mathbb{R}$ and from monotonicity. Try to complete this step yourself.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3905020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
The relation $\sim$ in $\mathbb{R}$ is defined as: $x \sim y \iff x − y \in \mathbb{Z}$ $(a) \quad$ Show that $\sim$ is an equivalent relation.
$(b) \quad$ Give $2$ distinct equivalent classes (must show they are distinct).
$(c) \quad$ $[0, 1) = \{x \in \mathbb{R} \mid 0 \leq x < 1\}.$ Show that $^{\mathbb{R}}\big/_{\sim} \cong [0, 1).$
I was able to prove that the relation is reflexive, symmetric and transitive, so it is an equivalence relation.
For $(b)$ one class I could find was
$$\{y \in \mathbb{R} \mid x \sim y\} = \{y \in \mathbb{R} \mid \exists z \in \mathbb{Z}, x - y = z\} = \{y \in \mathbb{R} \mid \exists z \in \mathbb{Z}, x = z + y\}.$$
But I cannot find another one which is distinct!
And for $(c)$ I don't even know how to start this proof. Any ideas? Thanks
| (a) It's called an equivalence relation.
(b) The set you provided here is the equivalence class of a fixed element $x$. For example what is the equivalence class of $0$, or of $1/2$?
(c) Consider a specific function $f:\Bbb R\to [0,1)$ which satisfies $f(x)=f(y)\iff x\sim y$.
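For (c), a natural candidate is the fractional-part map $f(x)=x-\lfloor x\rfloor$; here is a small Python check (an illustration on sample points, not the proof) that it identifies exactly the $\sim$-classes:

```python
import math
from fractions import Fraction

# f(x) = x - floor(x) maps R onto [0, 1) and identifies exactly the classes of ~
def f(x):
    return x - math.floor(x)

xs = [Fraction(n, 4) for n in range(-12, 13)]  # sample points -3, -11/4, ..., 3
for x in xs:
    for y in xs:
        same_class = (x - y).denominator == 1    # x - y is an integer
        assert (f(x) == f(y)) == same_class

# two distinct classes: [0] contains the integers, [1/2] the half-integers
assert f(Fraction(0)) != f(Fraction(1, 2))
```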
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3905188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is my Calculus Proof Correct? Question:
A not uncommon calculus mistake is to believe that the product rule for derivatives says that $(fg)'=f'g'$. If $f(x)=e^{x^2}$, determine, with proof, whether there exists an open interval $(a,b)$ and a nonzero differentiable function $g$ defined on $(a,b)$ such that this wrong product rule is true for $x$ in $(a,b)$.
Is my solution correct? Is there anything I need to improve?
My solution:
We're trying to find a function $g$ such that $(fg)'=f'g'.$
Using the product rule, we get $$f'g + fg' = f'g'.$$
Plugging in both $f(x)=e^{x^2}$ and $f'(x)=2xe^{x^2},$ we get
$$2xe^{x^2}g + e^{x^2}g' = 2xe^{x^2}g'.$$
Simplifying and canceling $e^{x^2},$ we get $$\frac{g'}{g} = \frac{2x}{2x-1} = 1 + \frac{1}{2x-1}.$$
Taking the integral of both sides,
$$\int \frac{dg}g = \int 1+ \frac{1}{2x-1} \, dx.$$
$$\log|g| = \frac{\log|2x-1|}{2} + x + C.$$
$$|g| = e^{\frac{\log|2x-1|}{2}}e^xe^C.$$
Letting $e^C=K,$ we get
$$\boxed{g = Ke^x\sqrt{|2x-1|}},$$ where $K$ is a constant greater than $0$.
Therefore, on any interval $(a,b)$ that does not contain the value of $\frac{1}{2},$ there exists nonzero differentiable function $g$ defined on $(a,b)$ such that $(fg)'=f'g'$ is true for $x$ in $(a,b)$.
| I think you have gone far beyond the requirements of the question,
which asked merely to prove that there exists an open interval and
a non-zero differentiable function with the stated properties.
To prove that such an interval and function exist, you can simply
choose an open interval -- for example, $(a,b) = (1,2)$ --
and one function on that interval,
for example $g(x) = e^x \sqrt{2x - 1}.$
Then show that $f(x)$ and $g(x)$ (and therefore also $(fg)(x)$)
are defined on your chosen interval and calculate $f'$, $g'$, and $(fg)'$:
\begin{align}
f'(x) &= \frac{\mathrm d}{\mathrm dx} e^{x^2} = 2x e^{x^2}, \\
g'(x) &= \frac{\mathrm d}{\mathrm dx}\left(e^x \sqrt{2x - 1}\right)
= \frac{2x e^x}{\sqrt{2x - 1}}, \\
(fg)'(x) &= \frac{\mathrm d}{\mathrm dx} \left(e^{x^2}\cdot e^x \sqrt{2x - 1}\right) \\
&= \frac{4 x^2 e^{x^2 + x}}{\sqrt{2x - 1}} \\
&= 2x e^{x^2} \cdot \frac{2x e^x}{\sqrt{2x - 1}} \\
&= f'(x)\cdot g'(x).
\end{align}
Note that it was not necessary to take the absolute value of $2x-1$ in any of these formulas, since $2x - 1 > 0$ everywhere in the domain of $f$ and $g$ as defined in this proof.
And that is essentially the proof of existence of an interval and a function with the desired properties.
You could choose the interval $\left(\frac12,\infty\right)$ rather than $(1,2)$ for the existence proof (as I would) since it is one of the two maximal intervals on which such a function $g$ is defined,
and perhaps you might like to show off your clever method of identifying the entire family of functions that might be the desired function $g$ (as I likely would),
but all of that is bonus material beyond the plain meaning of the question.
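A numerical spot check of these computations (finite differences, not a substitute for the calculus above):

```python
import math

# f(x) = e^(x^2), g(x) = e^x * sqrt(2x - 1) on an interval where 2x - 1 > 0
f = lambda x: math.exp(x * x)
g = lambda x: math.exp(x) * math.sqrt(2 * x - 1)
fp = lambda x: 2 * x * math.exp(x * x)                      # f'
gp = lambda x: 2 * x * math.exp(x) / math.sqrt(2 * x - 1)   # g'

# central-difference approximation of (fg)' compared with f' * g'
h = 1e-6
for x in [0.8, 1.0, 1.5, 2.0]:
    fg_prime = (f(x + h) * g(x + h) - f(x - h) * g(x - h)) / (2 * h)
    assert abs(fg_prime - fp(x) * gp(x)) < 1e-3 * max(1.0, abs(fg_prime))
```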
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3905321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Path of particles moving from the corners of an equilateral triangle toward each other at a constant rate So I just was solving this standard problem when a question struck my mind. What is the nature of this path? I tried my best and all i was able to do was to prove that it is not a circle. Here is the pic:
Basically, three particles start moving with a constant speed from their respective corners of an equilateral triangle, such that the velocity of each particle is always directed toward the next particle.
Once again, I am trying to find the nature of the path followed by any particle.
| Let's choose a coordinate system with the origin $O$ at the centre of the triangle and with a particle initially at $(1,0)$.
At every moment the particles lie at the vertices of an equilateral triangle, their distances $r$ from $O$ being equal and their polar angles $\theta$ differing by $120°$.
Velocity vector $\vec v$ of the first particle is directed towards the second particle: its projection along the radial direction is $v_r=\dot r=-v\cos30°=-(\sqrt3/2)v$, while its projection along the azimuthal direction is $v_\theta=r\dot\theta=v\cos60°=(1/2)v$.
It follows that:
$$
{dr\over d\theta}={\dot r\over\dot\theta}=-\sqrt3r,
$$
which can be solved to yield
$$
r=e^{-\sqrt3\theta}.
$$
This is the polar equation of a logarithmic spiral.
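A rough numerical simulation (explicit Euler steps, nothing rigorous) confirms the logarithmic-spiral law $\log(r_0/r)=\sqrt3\,(\theta-\theta_0)$ along the path of one particle:

```python
import cmath
import math

# crude Euler simulation of three mutually pursuing particles at unit speed,
# started at the vertices of an equilateral triangle inscribed in the unit circle
dt, steps = 1e-4, 5000
p = [cmath.exp(2j * math.pi * k / 3) for k in range(3)]

r0, th0 = abs(p[0]), cmath.phase(p[0])
for _ in range(steps):
    vel = [(p[(k + 1) % 3] - p[k]) / abs(p[(k + 1) % 3] - p[k]) for k in range(3)]
    p = [p[k] + vel[k] * dt for k in range(3)]
r1, th1 = abs(p[0]), cmath.phase(p[0])

# logarithmic spiral: r = r0 * exp(-sqrt(3) * (theta - theta0))
assert r1 < r0
assert abs(math.log(r0 / r1) - math.sqrt(3) * (th1 - th0)) < 0.01
```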
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3905706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can we say anything about $\vert f'(x) \vert$ versus $\vert f''(x) \vert$ if $f$ is concave and goes through the origin? Suppose we have a function $f(x)$ that is concave, upward sloping, and goes through the origin.
Are we able to say anything about how $\vert f'(x)\vert$ compares to $\vert f''(x)\vert$, such as whether one is greater than/less than another?
| Alternative approach.
Depending on your definition of concave, "Concave, upward sloping" is contradictory. This answer assumes that you intend that $f(x)$ is convex everywhere.
Let $f(x) = ax^2 + bx \implies f$ passes through the origin.
$|f'(x)| = |2ax + b|~$ and $~|f''(x)| = |2a|.$
Convex merely implies 2nd derivative positive which implies that $a > 0$.
Consider the following three functions:
$f_1(x) = 1000x^2 + x~ \implies ~|f'_1(x)| = |2000x + 1|,~$
and $~|f''_1(x)| = 2000.$
For $~-(1/2) < x < (1/2), ~~|f'_1(x)| < |f''_1(x)|.$
For $~1 < x, ~~|f'_1(x)| > |f''_1(x)|.$
$f_2(x) = x^2 - x~ \implies ~|f'_2(x)| = |2x - 1|,~$
and $~|f''_2(x)| = 2.$
For $~0 < x < 1, ~~|f'_2(x)| < |f''_2(x)|.$
For $~3 < x, ~~|f'_2(x)| > |f''_2(x)|.$
$f_3(x) = x^2 + 1000x~ \implies ~|f'_3(x)| = |2x + 1000|,~$
and $~|f''_3(x)| = 2.$
For $~-(501) < x < -(500), ~~|f'_3(x)| < |f''_3(x)|.$
For $~0 < x, ~~|f'_3(x)| > |f''_3(x)|.$
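These interval claims can be spot-checked mechanically at sample points:

```python
# spot-check the claimed inequalities for the three quadratics
cases = [
    # (|f'|, |f''|, x where |f'| < |f''|, x where |f'| > |f''|)
    (lambda x: abs(2000 * x + 1), 2000, 0.25,   2.0),  # f1 = 1000 x^2 + x
    (lambda x: abs(2 * x - 1),    2,    0.5,    4.0),  # f2 = x^2 - x
    (lambda x: abs(2 * x + 1000), 2,    -500.5, 1.0),  # f3 = x^2 + 1000 x
]
for fprime, fsecond, x_small, x_big in cases:
    assert fprime(x_small) < fsecond
    assert fprime(x_big) > fsecond
```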
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3905875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
clarification of continuous functions and Cauchy sequences A continuous function need not preserve a Cauchy sequence. For instance,
$f : (0,1) \to (e,\infty)$ where $a_n = \frac{1}{n}$ and $f(x) = e^{1/x}$
If $f$ is uniformly continuous then it will preserve Cauchy sequences.
My question is:
If the domain is complete or the co-domain is complete does this have an effect on preserving a Cauchy sequence?
| Continuous functions preserve Cauchy sequences if their domain is complete. If that's the case and $x_n$ is a Cauchy sequence in the domain, then $x_n$ converges to some $x$ in the domain. But then due to continuity, $f(x_n)$ converges to $f(x)$ and is thus a Cauchy sequence.
The codomain has no influence on this. For a counterexample where the domain is incomplete and the codomain is complete, yet a continuous function doesn't preserve Cauchy sequences, take any arbitrary continuous function which doesn't preserve Cauchy sequences, and replace its codomain by its own closure. The function still doesn't preserve Cauchy sequences, even though the codomain is now complete.
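The counterexample from the question can be made numerically vivid:

```python
import math

# a_n = 1/n is Cauchy in (0,1), but f(a_n) = e^n is not Cauchy in (e, inf)
f = lambda x: math.exp(1 / x)
m, n = 40, 41
a_m, a_n = 1 / m, 1 / n
assert abs(a_m - a_n) < 1e-3            # consecutive terms are very close ...
assert abs(f(a_m) - f(a_n)) > 1e10      # ... but their images are far apart
```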
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3906195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
A question about Lebesgue measure and integration. Let $(X,A,\mu)$ be a measure space.
$f\colon X\to [0,\infty]$ measurable.
Denote $S=\lbrace x\in X : f(x)<1\rbrace$.
Prove
a. $\mu(S)=\lim_{n\to \infty} \int_S e^{-f^n}\,d\mu$
b. Assume $X=S$ , prove that:
$$\sum_{n=1}^{\infty} \int_X f^n\, d\mu= \int_{X} \frac{f}{1-f}\,d\mu$$
I have difficulty with the steps where I'm supposed to move between integrals.
We can write the measure in terms of an integral, so in (a) I know that this is the starting point, but it is not clear to me how to connect the set $S$ to the integral.
In b, this is what I did:
$\sum_{n=1}^{\infty} \int_X f^n \,d\mu = \int_X \Big(\sum_{n=1}^{\infty}f^n\Big)\, d\mu$ (the integral of a sum equals the sum of the integrals for a sequence of non-negative measurable functions, by the monotone convergence theorem)
$= \int_X \frac{f}{1-f}\, d\mu$ (sum of a geometric series, using $0\le f(x)<1$ on $X=S$).
| Hint:
For (a) -
We want to write $\mu(S)$ as the limit of some integrals... So it might be useful to write $\mu(S)$ as an integral. Of course, we know how to do this: $\mu(S) = \int_S 1 \mathrm{d}\mu$.
Notice this already looks kind of like the limit we're meant to compute. If we can show
$$\lim_{n \to \infty} \int_S e^{-f^n} \mathrm{d} \mu = \int_S 1 \mathrm{d} \mu$$
then we're done!
If you play around with this, you might notice $f^n \to 0$ pointwise on $S$ (since $0 \leq f(x) < 1$ for each $x \in S$). Then $e^{-f^n} \to e^{-0} = 1$.
So the stuff we're integrating limits to $1$, which is what we need it to be!
If only we knew how to relate the limit of a sequence of integrals to the integral of the limit of the integrands...
For (b) -
You're on the exact right track! Do you know how to justify the fact that $\sum \int \text{stuff} = \int \sum \text{stuff}$? Since infinite sums are simply limits of finite sums, you'll want to use the same technique of swapping limits and integrals from part (a).
I hope this helps ^_^
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3906356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Diagonal is closed in $(X, \tau_1) \times (X, \tau_2)$ (both are Hausdorff) From Willard's General Topology:
Let $\tau_1$ and $\tau_2$ be Hausdorff topologies on the same set $X$. Let $\tau_3 = \tau_1 \cap \tau_2$.
If $(X, \tau_3)$ is Hausdorff, then the diagonal is closed in $(X, \tau_1) \times (X, \tau_2)$.
I know that since $\tau_1, \tau_2,$ and $\tau_3$ are all Hausdorff, that the diagonal $\Delta = \{(x,x) : x \in X\}$ is closed in each of the following product topologies $(X, \tau_1) \times (X, \tau_1)$, $(X, \tau_2) \times (X, \tau_2)$, and $(X, \tau_3) \times (X, \tau_3)$.
Without thinking much about the question, it seems intuitive that yes, $\Delta$ will also be closed in $(X, \tau_1) \times (X, \tau_2)$. I've tried to proceed by contradiction below, but I can't figure out how to finish it off. I would appreciate any help finishing it off.
I tried to proceed by contradiction:
Supposing that $\Delta$ is not closed in $(X, \tau_1) \times (X, \tau_2)$, then $\exists y \in X, y \neq x$, such that $(x,y) \in \overline{\Delta}$ (the closure of $\Delta$ in $\tau_1 \times \tau_2$). Then $\forall U \times V$ basic nbhoods of $(x,y)$, we must have that $(U\times V) \cap \Delta \neq \emptyset$. So $\exists z \in X$, such that $(z,z) \in U \times V$.
Also, since $U \times V$ is a basic nbhood of $(x,y)$ in $(X, \tau_1) \times (X, \tau_2)$, then $U$ is a basic nbhood of $x$ in $(X, \tau_1)$ and $V$ is a basic nbhood of $y$ in $(X, \tau_2)$. But then, since $(z,z) \in U \times V$, we must have $z \in U$ and $z \in V$. So $U, V$ are not disjoint. But $U$ was an arbitrary basic nbhood of $x$ in $(X, \tau_1)$ and $V$ was an arbitrary basic nbhood of $y$ in $(X, \tau_2)$, so there cannot exist any open disjoint sets $A \in \tau_1$, $B \in \tau_2$ such that $x \in A$ and $y \in B$...
I'm guessing this contradicts the Hausdorff-ness of $(X, \tau_3)$, but I'm having trouble putting it together.
Thanks.
| You are making your life overly complicated.
Take a point $(x,y)$ off the diagonal. We know there is an open set $U$ of $(X,\tau_3)$ and an open set $V$ of $(X,\tau_3)$ such that $x\in U$, $y\in V$, and $U\times V$ does not intersect the diagonal; i.e., $U\cap V=\varnothing$. This is because $(X,\tau_3)$ is Hausdorff, so the diagonal in $X\times X$, with the $\tau_3\times\tau_3$ topology, is closed.
But $\tau_3=\tau_1\cap\tau_2$. Thus, $U\in \tau_1$ and $V\in \tau_2$. That means that $U\times V$ is open in $\tau_1\times\tau_2$, contains $(x,y)$, and does not intersect the diagonal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3906563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Open set in irrationals $A$ is the set of all irrationals in $\mathbb R$. Is there a non-empty open subset of $A$?
My hunch is that the irrationals can not contain an open set because rationals are dense in $\mathbb R$? But I am not able to reconcile to the fact that the rationals are countable and hence are contained in an open set of small radius. Please get me out of the muddled thinking.
| Suppose a non-empty subset $B$ of $A$ is open. Then every point of $B$ is an interior point, so for any point in $B$ there is a neighborhood of that point which is contained in $B$.
But that neighborhood contains an open interval, and by density every open interval must contain rational numbers. This is impossible, since $B$ contains only irrational numbers. So $B$ cannot be open.
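The density used here can even be made constructive; a small Python sketch (the helper name is ad hoc) that produces a rational strictly inside any given open interval:

```python
from fractions import Fraction
from math import floor

def rational_in(a, b):
    """Return a rational strictly between a and b (assumes a < b)."""
    n = floor(1 / (b - a)) + 1       # so that 1/n < b - a
    m = floor(a * n) + 1             # smallest integer with m/n > a
    return Fraction(m, n)

# every interval around an irrational contains a rational, so no neighbourhood
# of an irrational point can consist of irrationals only
centre, eps = 2 ** 0.5, 1e-6        # float approximation of sqrt(2)
q = rational_in(centre - eps, centre + eps)
assert centre - eps < q < centre + eps
```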
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3906742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Markov chains with continuous time - is $P(2)$ having a given form even possible?
Is it possible that
$$
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} = P(2)
$$
for any Markov semigroup $\{P(t), t \geq 0\}$?
Recalling what properties must hold for transition matrix $P$:
* for all $t>0$, every row of the matrix must sum to $1$ and all entries must be non-negative;
* $P(0) = I$ and $\lim_{t \to 0^+} P(t) = I$;
* $P(s+t) = P(s)P(t)$.
Any hints appreciated. I suppose we're looking for some kind of contradiction but I couldn't find any clues.
| Let us abbreviate
$$
A :=
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}.
$$
Claim: A Markov semigroup $(P(t))_{t \in [0,\infty)}$ with $P(2) = A$ does not exist.
Here are three different proofs, so you can choose your favourite one. :-)
Proof 1 ($A$ has no real square root).
There is no matrix $B \in \mathbb{R}^{2 \times 2}$ such that $B^2 = A$; indeed, the equality $B^2 = A$ would imply $-1 = \det(A) = (\det(B))^2 \ge 0$.
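For the concrete $2\times 2$ case, the determinant argument can be checked in a few lines of Python (an illustration only; the helper `det2` is ad hoc):

```python
def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[0, 1],
     [1, 0]]

# det(A) = -1, while det(B^2) = det(B)**2 >= 0 for every real matrix B,
# so no real matrix B can satisfy B^2 = A.
assert det2(A) == -1
```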
Proof 2 (Perron-Frobenius theory).
For a Markov semigroup $(P(t))_{t \in [0,\infty)}$ on $\mathbb{R}^n$ it follows from Perron-Frobenius theory that each matrix $P(t)$ has only one eigenvalue on the complex unit circle, namely the number $1$.
However, the matrix $A$ has the eigenvalues $1$ and $-1$.
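Numerically, the eigenvalue picture looks like this. The sketch below is illustrative only (the two-state rates $a,b$ and the helper `eig2` are my own choices, not part of the answer): it uses the closed-form transition matrix $P(t)=e^{tQ}$ of a two-state chain with generator $Q=\begin{pmatrix}-a & a\\ b & -b\end{pmatrix}$, whose eigenvalues are $1$ and $e^{-(a+b)t}$.

```python
import math

def eig2(m):
    """Eigenvalues of a real 2x2 matrix with real spectrum, via the
    characteristic polynomial lambda^2 - tr*lambda + det = 0."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = math.sqrt(tr * tr - 4 * det)
    return sorted([(tr - disc) / 2, (tr + disc) / 2])

A = [[0.0, 1.0],
     [1.0, 0.0]]
assert eig2(A) == [-1.0, 1.0]   # two eigenvalues on the unit circle

# Closed-form P(t) = exp(tQ) for a two-state generator with rates a, b:
a, b, t = 1.0, 2.0, 2.0
e = math.exp(-(a + b) * t)
P = [[(b + a * e) / (a + b), (a - a * e) / (a + b)],
     [(b - b * e) / (a + b), (a + b * e) / (a + b)]]

lo, hi = eig2(P)
# Only the eigenvalue 1 lies on the circle; the other equals exp(-(a+b)t) < 1.
assert abs(hi - 1.0) < 1e-12 and 0.0 < lo < 1.0
```

So no matrix of the form $P(t)$ can have $-1$ in its spectrum, as $A$ does.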
Proof 3 (The isometry group with respect to the infinity norm is discrete).
Endow $\mathbb{R}^n$ with the infinity norm and assume that $(P(t))_{t \in [0,\infty)}$ is a Markov semigroup on $\mathbb{R}^n$ such that $P(2)$ is a permutation matrix. We show that this implies $P(t) = I$ (:= identity matrix) for all $t \ge 0$:
Each $P(t)$ acts contractively with respect to the infinity norm on $\mathbb{R}^n$ (via $x \mapsto P(t)x$). But $P(2)$ is an isometry with respect to this norm, which readily implies that each $P(t)$ is an isometry. Hence, each $P(t)$ is a signed permutation matrix (see this MathOverflow post). Since every entry of each $P(t)$ is $\ge 0$, this implies that each $P(t)$ is a permutation matrix.
From $P(t) \to I$ as $t \downarrow 0$ we now conclude that $P(t) = I$ for all sufficiently small times $t$. The semigroup law then implies that $P(t) = I$ for all times $t$.
In particular, the permutation matrix $P(2)$ must be equal to $I$, too.
Remark. Proofs 2 and 3 are, admittedly, quite an overkill if you are merely interested in the concrete example from the question. But on the other hand, proofs 2 and 3 give us more structural insight and work in more general situations than proof 1.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3906914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |