Proof of Plancherel formula I was looking at this question posted here some time ago: How to Prove Plancherel's Formula? I follow it until the third line, where he effectively asserts that $\int _{- \infty}^{+\infty} e^{i(\omega - \omega')t} dt= 2 \pi \delta(\omega - \omega')$. I would understand if we were integrating over a period of length $2 \pi$, but here the integration is over $\mathbb{R}$. P.S. I would have asked this directly of the author of the post, but it's been over a year since he last logged in.
I think he used the fact that $\hat{\delta}(\omega) = 1$, so $$\int _{-\infty}^{+\infty} e^{i(\omega-\omega ')t} dt $$ is the inverse transform of $1$ evaluated at $\omega - \omega'$, which is $2\pi\,\delta(\omega-\omega')$; the factor $2\pi$ comes from the definition of the inverse transform.
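As a quick numerical illustration (a hedged sketch assuming NumPy; the window and cutoffs $T$ are arbitrary choices): the truncated integral equals $2\sin(\omega T)/\omega$, which concentrates at $\omega = 0$ while its total area stays close to $2\pi$.

```python
import numpy as np

# Hedged sketch: the truncated integral I_T(w) = \int_{-T}^{T} e^{iwt} dt
# equals 2*sin(w*T)/w.  As T grows it concentrates at w = 0 while its
# total area stays near 2*pi -- the distributional statement I_T -> 2*pi*delta.
w = np.linspace(-50, 50, 2_000_001)
for T in (10.0, 100.0, 1000.0):
    I = np.where(w == 0.0, 2 * T, 2 * np.sin(w * T) / np.where(w == 0.0, 1.0, w))
    area = np.sum(I) * (w[1] - w[0])
    print(T, area / (2 * np.pi))  # close to 1 each time
```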
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Is local completeness a topological property? I know that completeness itself is not a topological property, because a complete and an incomplete metric space can be homeomorphic, e.g. $\Bbb R$ and $(0,1)$. However, both $\Bbb R$ and $(0,1)$ are locally complete (each point has a neighborhood that is complete under the induced metric). As all examples I know of are of this form, the naturally occurring next question is Question: Is being locally complete a topological property? Or, the other way around: are there metric spaces which are homeomorphic, but one is locally complete and the other one is not?
Another way of proving that the irrationals can be made complete with respect to a metric $d$ which is equivalent to the usual one consists in exhibiting such a metric. This can be done as follows: let $(q_n)_{n\in\mathbb N}$ be an enumeration of the rationals. Then, for $x,y\in\mathbb{R}\setminus\mathbb Q$, define$$d(x,y)=|x-y|+\sum_{k=1}^\infty2^{-k}\min\left(1,\left|\max_{i\leqslant k}\frac1{|x-q_i|}-\max_{i\leqslant k}\frac1{|y-q_i|}\right|\right).$$
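As an illustrative sketch (hedged: `q_enum` below is a hypothetical, truncated enumeration of a few rationals, and the series is cut off at $K$ terms; a real proof of course needs all of $\mathbb Q$), truncations of this metric are easy to evaluate numerically:

```python
import math

# Hedged sketch: truncate the series defining d(x, y) on the irrationals.
def q_enum(k):  # 1-based: q_1, q_2, ... (illustrative enumeration only)
    seq = [0, 1, -1, 0.5, -0.5, 2, -2, 1/3, -1/3, 3, -3]
    return seq[k - 1]

def d(x, y, K=11):
    total = abs(x - y)
    for k in range(1, K + 1):
        mx = max(1 / abs(x - q_enum(i)) for i in range(1, k + 1))
        my = max(1 / abs(y - q_enum(i)) for i in range(1, k + 1))
        total += 2.0 ** (-k) * min(1.0, abs(mx - my))
    return total

print(d(math.sqrt(2), math.pi))  # finite, and >= |sqrt(2) - pi|
```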
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 0 }
Infinite Series for Signal Energy and Power A doubt came up to me on an Oppenheim's Signals and Systems 2ed exercise: 1.3) (f) Determine $P_{\infty}$ and $E_{\infty}$ for the following signal: $x\left[n\right]=\cos\left(\frac{\pi}{4}n\right)$ On theory, $P_{\infty}$ and $E_{\infty}$ are defined as: $$ \begin{align} E_{\infty}&\triangleq\lim\limits_{N\to \infty}{\sum_{n=-N}^{+N}{|x\left[n\right]|^{2}}}\\ P_{\infty}&\triangleq\lim\limits_{N\to \infty}{\frac{E_{\infty}}{2N+1}} \end{align} $$ Starting by $E_{\infty}$: $$ \begin{align} E_{\infty}&=\lim\limits_{N\to \infty}{\sum_{n=-N}^{+N}{\left|\cos\left(\frac{\pi}{4}n\right)\right|^{2}}}\\ &=\lim\limits_{N\to \infty}{\sum_{n=-N}^{+N}{\cos^{2}\left(\frac{\pi}{4}n\right)}}\\ &=\lim\limits_{N\to \infty}{\sum_{n=-N}^{+N}{\left(\frac{e^{j\frac{\pi}{4}n}+e^{-j\frac{\pi}{4}n}}{2}\right)^{2}}}\\ &=\lim\limits_{N\to \infty}{\frac{1}{4}\sum_{n=-N}^{+N}{\left(e^{j\frac{\pi}{2}n}+2+e^{-j\frac{\pi}{2}n}\right)}}\\ &=\lim\limits_{N\to \infty}{\frac{1}{4}\sum_{n=-N}^{+N}{\left(j^{n}+2+j^{-n}\right)}}\\ &=\lim\limits_{N\to \infty}{\frac{1}{4}\left[\sum_{n=-N}^{+N}{2}+\sum_{n=-N}^{+N}{j^{n}}+\sum_{n=-N}^{+N}{\left(-j\right)^{n}}\right]} \end{align} $$ So far, so good... Now here's what I imagined: $$ \begin{align} \sum_{n=-N}^{+N}{j^{n}}&=\color{red}{\sum_{n=-N}^{-1}{j^{n}}}+1+\sum_{n=1}^{+N}{j^{n}} \\ &=\color{red}{\sum_{n=1}^{+N}{\left(-j\right)^{n}}}+1+\sum_{n=1}^{+N}{j^{n}} \end{align} $$ and $$ \begin{align} \sum_{n=-N}^{+N}{\left(-j\right)^{n}}&=\color{red}{\sum_{n=-N}^{-1}{\left(-j\right)^{n}}}+1+\sum_{n=1}^{+N}{\left(-j\right)^{n}} \\ &=\color{red}{\sum_{n=1}^{+N}{j^{n}}}+1+\sum_{n=1}^{+N}{\left(-j\right)^{n}} \end{align} $$ Which gives us: $$\sum_{n=-N}^{+N}{j^{n}}+\sum_{n=-N}^{+N}{\left(-j\right)^{n}}=2+2\sum_{n=1}^{+N}{j^{n}}+2\sum_{n=1}^{+N}{\left(-j\right)^{n}}$$ Putting this all back into $E_{\infty}$: $$ \begin{align} E_{\infty}&=\lim\limits_{N\to \infty}{\frac{1}{4}\left[2+2\sum_{n=-N}^{+N}{1}+2\sum_{n=1}^{+N}{j^{n}}+2\sum_{n=1}^{+N}{\left(-j\right)^{n}}\right]}\\ &=\lim\limits_{N\to \infty}{\frac{1}{2}\left[\left(2N+2\right)+\sum_{n=1}^{+N}{j^{n}}+\sum_{n=1}^{+N}{\left(-j\right)^{n}}\right]} \end{align} $$ That's where I got stuck, because this should sum up to: $$E_{\infty}=\lim\limits_{N\to \infty}{\frac{1}{2}\left[2N+1\right]}=\infty$$ So that: $$P_{\infty}=\lim\limits_{N\to \infty}{\frac{E_{\infty}}{2N+1}}=\frac{1}{2}\left(\lim\limits_{N\to \infty}{\frac{2N+1}{2N+1}}\right)=\frac{1}{2}$$ Which are the books answers to this exercise... What am I doing wrong or missing here???
I got to a solution! All the previous manipulations of the sums were correct; the only subtlety is that the braced sum below is bounded rather than zero. Now: $$ \begin{align} E_{\infty}&=\lim\limits_{N\to \infty}{\frac{1}{4}\left[2+2\sum_{n=-N}^{+N}{1}+2\sum_{n=1}^{+N}{j^{n}}+2\sum_{n=1}^{+N}{\left(-j\right)^{n}}\right]}\\ &=\lim\limits_{N\to \infty}{\frac{1}{2}\left[\left(2N+2\right)+\underbrace{\sum_{n=1}^{+N}{j^{n}}+\sum_{n=1}^{+N}{\left(-j\right)^{n}}}_{\text{bounded}}\right]}\\ &=\lim\limits_{N\to \infty}{\frac{1}{2}\left(2N+2+O(1)\right)}= \infty \end{align} $$ (The braced term is $\sum_{n=1}^{N}\left(j^{n}+(-j)^{n}\right)$, which oscillates between $0$ and $-2$, so it is bounded and affects neither limit.) Then: $$ \begin{align} P_{\infty}&=\lim\limits_{N\to \infty}{\frac{E_{\infty}}{2N+1}}\\ &=\lim\limits_{N\to \infty}{\frac{N+1+O(1)}{2N+1}}\\ &=\frac{1}{2} \end{align} $$ Actually, I was doing nothing wrong; my mistake was trying to force my solution to match the answers I already had from the book.
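A quick numerical sanity check of the partial energies and the average power (a sketch assuming NumPy):

```python
import numpy as np

# Hedged check: partial energy of x[n] = cos(pi*n/4) over n = -N..N,
# and the average power E_N / (2N + 1), which should approach 1/2.
for N in (10, 100, 10_000):
    n = np.arange(-N, N + 1)
    E_N = np.sum(np.cos(np.pi * n / 4) ** 2)
    print(N, E_N, E_N / (2 * N + 1))
```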
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
distribution in an inner product (inner product spaces) Sorry for the dumb question. Suppose I'm in a general inner product space. How would I compute something like the following? $$\langle x - \alpha y, x - \alpha y\rangle$$ where $\alpha$ is a complex scalar. Is the following right? $$\begin{align} \langle x - \alpha y, x - \alpha y\rangle &= \langle x, x\rangle + \langle x, -\alpha y\rangle + \langle -\alpha y, x\rangle + \langle -\alpha y, -\alpha y\rangle \\ &= \|x\|^2 + \Re({-\alpha})\langle x, y\rangle + \Re{(-\overline{\alpha}})\langle x, y\rangle + |-\alpha|^2\|y\|^2 \\ &= \|x\|^2 + 2\Re{(-\alpha)}\langle x, y\rangle + |\alpha|^2\|y\|^2 \end{align}$$ It is mostly the negative sign that is throwing me off. I wasn't sure whether I should use minus instead of the plus between my terms.
The "real part" should appear after summing the two central terms because such a sum has the form $z+\overline{z}$ which equals to $2\Re(z)$ (in your case you have this for $z=\overline{-\alpha}\langle x,y\rangle$). Here is the step-by-step: $$\begin{align} \langle x - \alpha y, x - \alpha y\rangle &= \langle x, x\rangle + \langle x, -\alpha y\rangle + \langle -\alpha y, x\rangle + \langle -\alpha y, -\alpha y\rangle \\ &= \langle x, x\rangle + (\overline{-\alpha})\langle x, y\rangle + (-\alpha)\langle y, x\rangle + (-\alpha)(\overline{-\alpha})\langle y, y\rangle \\ &= \langle x, x\rangle + (\overline{-\alpha})\langle x, y\rangle + (-\alpha)\overline{\langle x, y}\rangle + (-\alpha)(\overline{-\alpha})\langle y, y\rangle \\ &= \|x\|^2 + (\overline{-\alpha})\langle x, y\rangle + \overline{(\overline{-\alpha})\langle x, y\rangle} + |-\alpha|^2\|y\|^2 \\ &= \|x\|^2 + 2\Re\left((-\alpha)\langle x, y\rangle\right) + |\alpha|^2\|y\|^2 \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Closure of closure of set equals closure of set Let $A\subseteq \mathbb{R}^n$. I want to prove that $cl(cl(A))\subseteq cl(A)$. Let $x\in cl(cl(A))$, then $x$ is adherent to $cl(A)$, which implies that $B(x;r)\cap cl(A)\neq\emptyset, \forall r>0$. How do I proceed from here? And do I really need to? I'm guessing that it might be sufficient to argue as follows: Let $x\in cl(cl(A))$, then $x$ is adherent to $cl(A)$, which implies that $x\in cl(A)$. But would this be rigorous enough?
The closure of a closed set is the set itself. Since $cl(A)$ is closed, $$cl(cl(A))=cl(A)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to calculate $\lim_{n \to \infty} \frac{(\ln(n))^n}{(\ln(n+1))^{n+1}}$ How would I solve the following limit $\lim_{n \to \infty} \frac{(\ln(n))^n}{(\ln(n+1))^{n+1}}$ Is there an intuitive way of solving it? I have no clue on how to start. Any help would be appreciated.
$$\lim_{n \to \infty} \frac{(\ln(n))^n}{(\ln(n+1))^{n+1}} = \lim_{n \to \infty} \left(\frac{\ln(n)}{\ln(n+1)}\right)^n\cdot\frac 1{\ln(n+1)}$$ The first factor satisfies $0\le\left(\frac{\ln(n)}{\ln(n+1)}\right)^n<1$ for all $n>1$, so it is bounded, while $$\lim_{n \to \infty}\frac 1{\ln(n+1)} = 0.$$ A bounded sequence times a null sequence tends to $0$, so the limit is $0$.
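A quick numeric sanity check (a sketch worked in log space to avoid overflow of $(\ln n)^n$):

```python
import math

# log of the expression = n*log(log n) - (n+1)*log(log(n+1)).
for n in (10, 1_000, 1_000_000):
    log_val = n * math.log(math.log(n)) - (n + 1) * math.log(math.log(n + 1))
    print(n, math.exp(log_val))  # tends to 0 (slowly)
```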
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
3d triangle in one equation without min or max How can I graph a triangle in 3d with a single equation that does not use min, max, floor, ceil, or absolute value? If you believe not possible, then the closest approximation (equation that roughly appears to be a 3d triangle) will do.
Consider the example of the triangle with vertices $(0,0,0)$, $(3,3,0)$ and $(-3,3,0)$: $$((x+y) (x-y)(y-3)(\sqrt{9-x^2- (y-3)^2}+1))^2+z^2=0$$ Over the reals, $a^2+b^2=0$ ensures that $a=0$ and $b=0$. Here $z=0$ is the equation of the plane of the triangle, and $x+y=0$, $x-y=0$ and $y=3$ are the 3 planes that contain the 3 sides of the triangle. The square root is included to exclude the points on those lines that are not part of the triangle: it is only defined inside the circumscribed disk $x^2+(y-3)^2\le 9$, and there the factor $\sqrt{9-x^2-(y-3)^2}+1\ge 1$ never vanishes.
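A quick spot check (a hedged sketch; returning `None` encodes "outside the domain of the real square root"):

```python
import math

def F(x, y, z):
    r = 9 - x**2 - (y - 3)**2
    if r < 0:
        return None  # outside the circumscribed disk: not in the zero set
    w = (x + y) * (x - y) * (y - 3) * (math.sqrt(r) + 1)
    return w**2 + z**2

print(F(0, 0, 0), F(3, 3, 0), F(-3, 3, 0))  # vertices: 0.0 each
print(F(1.5, 1.5, 0))                       # midpoint of a side: 0.0
print(F(4, 4, 0))                           # on the line x=y but outside: None
print(F(1, 2, 0) == 0)                      # interior point, not on an edge: False
```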
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that a metric space $V$ is sequentially compact iff it is limit point compact Given a metric space $(V,d)$, show that the following properties are equivalent: a) $V$ is sequentially compact (i.e., each sequence in $V$ has a convergent subsequence with a limit in $V$); b) for each subset $A \subset V$ with infinitely many elements, there exists a point $ x \in V$ such that for each $ \delta >0$, $B(x; \delta) \cap A$ contains infinitely many elements. My idea for $a \Rightarrow b$ was to pick a suitable sequence $a_n$ in $A$; we know $a_n$ has a convergent subsequence because $V$ is sequentially compact, and then to show that there exists a point $ x \in V$ such that for each $ \delta >0$, $B(x; \delta) \cap A$ contains infinitely many elements. For $b \Rightarrow a$ I wanted to take a sequence $a_n$ in $V$ and look at two cases: one where $\{a_n \mid n\in \mathbb N\}$ has finitely many elements, and one where it has infinitely many. And from there show that $V$ is sequentially compact. I don't really know if these ideas are right, and if so how to continue. I also have some difficulty with picking suitable sequences, like in $a \Rightarrow b$; tips about that are much appreciated as well!
Choose $a_n \in B(x;\frac{1}{n})\cap A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve the Cauchy problem: $u_y + u^2 u_x = 0$, $u(x,0) = 2+x$ Solve the Cauchy problem: For $y>0$, $$u_y + u^2 u_x = 0$$ $$u(x,0) = 2+x$$ So by the method of characteristics: $\frac{dx}{dt} = z^2$, $\frac{dy}{dt}=1$, $\frac{dz}{dt}=0$, parametrized by $\Gamma: (s,0,2+s)$. Then we have $z=2+s$, $x=t(2+s)^2 +s$ and $y=t$, which gives us $u=x-yu^2 + 2$. I think it's correct, but when I plugged it in to check, it only satisfied the initial condition. I would appreciate it if anyone could tell me where I went wrong.
$$u(x,y) \quad \begin{cases} u_y + u^2 u_x = 0 \\u(x,0) = 2+x \end{cases}$$ Your calculation is correct. The solution, expressed in the form of an implicit equation, is: $$u=x-yu^2 + 2$$ This agrees with the condition $u(x,0)=2+x$. Probably you made a mistake when checking agreement with the PDE. $$u_x=1-2yuu_x \quad\to\quad u_x=\frac{1}{1+2yu}$$ $$u_y=-u^2-2yuu_y \quad\to\quad u_y=\frac{-u^2}{1+2yu}$$ $$u_y+u^2u_x=\frac{-u^2}{1+2yu}+u^2\frac{1}{1+2yu}=0$$ Everything agrees.
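For what it's worth, the same check can be done symbolically (a sketch assuming SymPy, via implicit differentiation of $F(x,y,u)=u-x+yu^2-2$):

```python
import sympy as sp

x, y, u = sp.symbols('x y u')
F = u - x + y*u**2 - 2
u_x = -sp.diff(F, x) / sp.diff(F, u)   # = 1 / (1 + 2*y*u)
u_y = -sp.diff(F, y) / sp.diff(F, u)   # = -u**2 / (1 + 2*y*u)
print(sp.simplify(u_y + u**2 * u_x))   # 0: the PDE holds
print(F.subs(y, 0))                    # u - x - 2, i.e. u = x + 2 at y = 0
```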
{ "language": "en", "url": "https://math.stackexchange.com/questions/2303966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Automorphisms of $\mathbb{Z}[x]$ (verification) Consider the ring $\mathbb{Z}[x]$. If $f$ is an automorphism of this ring, then since $f(1)=1$, $f$ fixes $\mathbb{Z}$. To determine $f(x)$: since $0$ is a root of this polynomial with multiplicity $1$, we have $f(x)=ax$ for some $a\in\mathbb{Z}$. Applying the evaluation map, we get $a=1$. So the only automorphism of $\mathbb{Z}[x]$ is the identity. There may be other ways to prove that the only automorphism of the ring under consideration is the identity; but my question is whether the above proof is correct. Q. Is this a correct way to prove the assertion that $\mathbb{Z}[x]$ has only one ring automorphism? (My assertion may be wrong; I don't know in general what it should be.)
Hint: the map $$ a_0+a_1x+\dots+a_nx^n\mapsto a_0+a_1(x+1)+\dots+a_n(x+1)^n $$ is clearly an automorphism of $\mathbb{Z}[x]$
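A small illustration of the hint (a sketch assuming SymPy; the polynomial `p` is an arbitrary example): the substitution $x\mapsto x+1$ is a ring automorphism, with inverse $x\mapsto x-1$.

```python
import sympy as sp

x = sp.symbols('x')
p = 3*x**2 - 5*x + 7
q = sp.expand(p.subs(x, x + 1))          # image under the automorphism
print(q)                                 # 3*x**2 + x + 5
print(sp.expand(q.subs(x, x - 1)) == p)  # True: composition is the identity
```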
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How large a hypercap do you need to intersect every basis? Consider the unit sphere $\mathscr{S}^d$. We can define a hyperspherical cap by the angle $\theta$ that its associated hypercone subtends at the centre of the sphere. How large does $\theta$ have to be so that, for every set of orthogonal axes (the lines through the origin and a set of $d$ mutually orthogonal points on the sphere), at least one axis intersects the hyperspherical cap?
This is equivalent to being given a set of axes and asking for the largest spherical distance a point can be from all of them. Without loss of generality we can take the standard basis of $\mathbb{R}^d$. Symmetry considerations imply such a point will be the same distance from all of the axes. Restricting to the section of the sphere with positive coordinates, symmetry also forces it to be the point $d^{-1/2}(1,1,\dotsc,1)$: this is the only point on this part of the sphere equidistant from the closest points on the axes. One then finds that the distance $\theta$ satisfies $\cos{\theta} = d^{-1/2}$, from the dot product formula for the distance, and hence $\theta=\operatorname{arcsec}{\sqrt{d}}$. For example, in two dimensions $\arccos{(1/\sqrt{2})} = \pi/4$, and in three, $\arccos{(1/\sqrt{3})} \approx 0.955$.
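A quick numeric evaluation of $\theta=\arccos(1/\sqrt d)$ for small $d$ (a sketch):

```python
import math

for d in (2, 3, 4):
    print(d, math.acos(1 / math.sqrt(d)))
# d=2: 0.7853... = pi/4;  d=3: 0.9553...;  d=4: 1.0471... = pi/3
```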
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integrate $\int{ e^{{x}^{2}-x} \cdot x \cdot e^x} dx$ Integrate $\int{ e^{{x}^{2}-x} \cdot x \cdot e^x} dx$ I'd like to know how to do it because I need it for another task. Here is what I tried: $$\int{ e^{{x}^{2}-x} \cdot x \cdot e^x} dx = \int{e^{x^{2}-x+x} \cdot x} \text{ }dx = \int{e^{x^{2}} \cdot x} \text{ }dx$$ Now substitute (especially at this step I'm not sure). Let $s=x^2$, then: $$s'=2x \Leftrightarrow 2x = \frac{ds}{dx} \iff dx = \frac{1}{2x} ds$$ Insert these into $\int{e^{x^{2}} \cdot x} \text{ }dx$: $$\int{e^{s} \cdot x} \cdot \frac{1}{2x} \text{ }ds= \frac{1}{2}\int{e^{s} ds} = \frac{1}{2}e^{s}+c = \frac{1}{2}e^{x^{2}}+c$$ I hope everything is alright? If it's correct, is there a faster way to solve it? Because substitution is confusing for me :s
You can always differentiate your answer and see what you get. In this case $$ \frac{d}{dx}\left(\frac{1}{2}e^{x^{2}}+c\right)=\frac{1}{2}2x e^{x^{2}}=xe^{x^{2}} $$ by the chain rule. So you are correct.
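The same check can be automated (a sketch assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.exp(x**2 - x) * x * sp.exp(x)
print(sp.simplify(integrand - x*sp.exp(x**2)))  # 0: the exponents combine
print(sp.diff(sp.exp(x**2)/2, x))               # x*exp(x**2): answer checks out
```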
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Function from $[0,1]^2$ to $[0,1]$ Consider $(x,y)\in [0,1]^2$. Is it true, that there exists only one $t\in [0,1]$ such that $(x,y)$ belongs to the line passing through $(0,t)\in [0,1]^2$ and $(1,t^2)\in [0,1]^2$? Could you help me to prove it? Is there a way to write in an "explicit" way this function $f$ from $[0,1]^2$ to $[0,1]$? What about $f^{-1}$? It seems to me that given a $t\in [0,1]$ there is an infinite number of pairs $(x,y)\in [0,1]^2$ such that $(x,y)$ belongs to the line passing through $(0,t)\in [0,1]^2$ and $(1,t^2)\in [0,1]^2$. Is this correct?
The line between $(0, t)$ and $(1, t^2)$ has the equation $$y = t + (t^2-t)x$$ Given $(x,y)$ with $x>0$, we can solve this quadratic for $t$ and get $$t = \frac{x-1}{2x} \pm \sqrt{\left(\frac{x-1}{2x}\right)^2 + \frac{y}{x}} = \frac{(x-1) \pm \sqrt{(x-1)^2+4xy}}{2x}$$ Since $x,y \geq 0$ we have $(x-1)^2 + 4xy \geq (x-1)^2$, so taking the negative sign would make $t \leq 0$. Thus, given $(x,y)$, there's only one $t$ such that the line between $(0,t)$ and $(1, t^2)$ passes through $(x,y)$. (For $x=0$ the equation gives $t=y$ directly.)
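A quick round-trip check (a sketch; the sample values of $t$ and $x$ are arbitrary):

```python
import math

# Pick t, generate a point on the line through (0, t) and (1, t**2),
# then recover t with the quadratic formula above.
def recover_t(x, y):
    if x == 0:
        return y
    return ((x - 1) + math.sqrt((x - 1)**2 + 4*x*y)) / (2*x)

t = 0.6
for x in (0.0, 0.25, 0.5, 0.9):
    y = t + (t**2 - t) * x
    print(x, recover_t(x, y))  # 0.6 each time
```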
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Bounding the derivative of a function Let $f(x) = \frac{\exp(ax)}{1+\exp(ax)}$ for some $a > 0$ be a logistic function. I am looking for an upper bound on the $k$-th derivative of $f$, i.e. $|f^{(k)}(x)|$ on $\mathbb{R}$. Using Cauchy's integral formula I only obtain a bound of order $k! / (\pi-\epsilon)^{k+1}$ for $a=1$. Is it possible to get below $k!$ and obtain a growth rate that is just exponential? Thanks!
Given that $y=\dfrac{e^{ax}}{1+e^{ax}}$ we have \begin{eqnarray}y^\prime&=&\frac{ae^{ax}}{(1+e^{ax})^2}\\ &=&a(1-y)y \end{eqnarray} from which it follows that $$ y^{\prime\prime}=a^2(1-2y)(1-y)y $$ etc., for $0< y<1$. So for each $n$ there is a polynomial $P_n(y)$ of degree $n$ such that $$ y^{(n)}= a^nyP_n(y)\text{ for }y\in(0,1)$$ so the existence of an upper bound is assured. Addendum: Actually, $(1-y)$ will also be a factor of $y^{(n)}$ for $n>0$, so we can make the slightly stronger statement $$ y^{(n)}= a^ny(1-y)Q_n(y)\text{ for }y\in(0,1) $$ where $Q_n$ is a degree $n-1$ polynomial in $y$. We have $Q_1(y)=1,\,Q_2(y)=1-2y,\,Q_3(y)=1-6y+6y^2,\cdots$ The sequence of $Q_n(y)$ polynomials is computed recursively by $Q_1(y)=1$ and $Q_{n+1}(y)=\dfrac{d}{dy}y(1-y)Q_n(y)$. Second Addendum: \begin{align*} Q_1(y)&=1\\ Q_2(y)&=1-2y\\ Q_3(y)&=1-6y+6y^2\\ Q_4(y)&=1-14y+36y^2-24y^3\\ Q_5(y)&=1-30y+150y^2-240y^3+120y^4\\ Q_6(y)&=1-62y+540y^2-1560y^3+1800y^4-720y^5\\ Q_7(y)&=1-126y+1806y^2-8400y^3+16800y^4-15120y^5+5040y^6\\ Q_8(y)&=1-254y+5796y^2-40824y^3+126000y^4-191520y^5+141120y^6-40320y^7 \end{align*} In general \begin{equation} Q_n(y)=\sum_{m=0}^{n-1}a(n,m)y^m \end{equation} where $\vert a(n,m)\vert$ equals the number of surjections of an $n$-element set onto an $(m+1)$-element set (according to the On-Line Encyclopedia of Integer Sequences). Furthermore \begin{equation} a(n,m)=\sum_{k=1}^{m+1}(-1)^{k+1}\binom{m+1}{k}k^n \end{equation}
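The recursion is easy to reproduce symbolically (a sketch assuming SymPy):

```python
import sympy as sp

# Q_1 = 1,  Q_{n+1} = d/dy [ y*(1-y)*Q_n(y) ].
y = sp.symbols('y')
Q = sp.Integer(1)
for n in range(1, 6):
    print(n, sp.expand(Q))
    Q = sp.diff(y*(1 - y)*Q, y)
# Matches the table: Q_3 = 6*y**2 - 6*y + 1, Q_4 = -24*y**3 + ..., etc.
```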
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
On the arithmetic differential equation $n''=n'$ If $n'$ denotes the arithmetic derivative of non-negative integer $n$, and $n''=(n')'$, then solve the following equation $$n''=n'.$$ What I have found, you can read in one minute! I have tried to explain it very detailed so anyone, even with a little knowledge of elementary number theory (like me), can follow the steps. $n=0$ and $n=1$ are solutions. It is known that for a natural number $n>1$ with prime factorization of $\prod_{i=1}^{k} p_i^{a_i}$ arithmetic derivative is $$n'=n \sum_{i=1}^{k} \frac{a_i}{p_i}. \tag{1}$$ Let $m=n'$, then equation becomes $m'=m$. Let prime factorization of $m$ be $\prod_{j=1}^{l} q_j^{b_j}$. Then from equation $(1)$ we get $$\frac{b_1}{q_1}+ \frac{b_2}{q_2}+... + \frac{b_l}{q_l}=1. \tag{2}$$ This equation implies that $q_j \ge b_j$. Multiply both sides of the equation $(2)$ by $q_1 q_2 ... q_{l-1}$. It follows that $q_1 q_2 ... q_{l-1}\frac{b_l}{q_l}$ is an integer. Thus $q_l | b_l$. Hence $b_l \ge q_l$ and $b_l=q_l$. Subsequently $b_1=b_2=...=b_{l-1}=0$ and $m=q^q$ for some prime number $q$. Thus we have $n'=m=q^q$ and $n\sum_{i=1}^{k} \frac{a_i}{p_i}=q^q$ or $$\prod_{i=1}^{k} p_i^{a_i-1}\sum_{i=1}^{k} \left( p_1 p_2 ... p_k \frac{a_i}{p_i} \right)=q^q. \tag{3}$$ Notice that if $p_i \neq q$ is a prime divisor of $n$, then $a_i=1$. We claim that if $q$ is a prime divisor of $n$, then its the only one. If $q \mid n$ then $n$ is in the form $$n=p_1p_2...p_kq^a,$$ Where $\gcd(q, p_i)=1$. Now its easy to see from equation $(3)$ that $a \le q$ and dividing both sides of it by $q^{a-1}$ gives $$q\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)+p_1 p_2 ... p_k a=q^{q-a+1}.$$Therefore, $q|a$, which leads to $a \ge q$ and $q=a$. Thus $$\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)+p_1 p_2 ... p_k=1,$$Which is a contradiction and $n=q^q$. Thus $n=q^q$ is a solution to the original equation, where $q$ is a prime number. If $q \nmid n$, then equation $(3)$ gives $$\sum_{i=1}^{k} \left( \frac{p_1 p_2 ... p_k}{p_i} \right)=q^q,$$Where I am stuck with. Edit: According to @user49640 comment, there are some solutions of the form $n=2p$, where $p=q^q-2$ is a prime. For example for $q=7$ and $q=19$. See also @Thomas Andrews answer for an another solution not in the form $n=p^p$. Look at this solution I found: $$(2\times17431\times147288828839626635378984008187404125879)'=29^{29}$$
We have that $$(3\cdot 29\cdot 25733)'=3\cdot 29 + 3\cdot 25733 + 29\cdot 25733=7^7$$ So you are going to get non-trivial solutions. It's probably a difficult problem to come up with all solutions. I was looking for "3 prime" solutions. So suppose $n=abc$; then $n'=ab+ac+bc=(a+c)(b+c)-c^2$. Trying to solve with $q=5$ gives: $$(a+c)(b+c)=5^5+c^2$$ But $5^5\equiv 1\pmod{4}$ and thus $5^5+c^2$ cannot be divisible by $4$, so $a+c$ and $b+c$ cannot both be even, so one of $a,b,c$ must be $2$. We can assume $c=2$. Then we want $(a+2)(b+2)=3129=3\cdot 7\cdot 149$. There is no way to factor this as $mn$ with $m-2$ and $n-2$ prime. So there is no $3$-prime counterexample with $q=5$. So I tried $q=7$ and found the above solution. It helped that $7^7+9$ is divisible by $256$, which gave me a lot of possibilities for factorizations. There are two-prime solutions if $q^q-2$ is an odd prime.
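A quick verification (a sketch assuming SymPy; `aderiv` is a hypothetical helper implementing $n' = n\sum_i a_i/p_i$):

```python
import sympy

def aderiv(n):
    if n < 2:
        return 0
    return sum(n // p * a for p, a in sympy.factorint(n).items())

print(aderiv(3 * 29 * 25733) == 7**7)  # True: the 3-prime solution
p = 7**7 - 2
print(sympy.isprime(p))                # True, so 2p is a 2-prime solution
print(aderiv(2 * p) == 7**7)           # True: (2p)' = p + 2 = 7**7
```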
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 3, "answer_id": 0 }
Area of curve in parametric equation Given the curve defined by the parametric equations: $$ x=7\cos{3t}\\ y=7\sin{3t}\\ 0\le t\le2\pi $$ What is the area of the region bounded by this curve? Clearly, $$ x^2+y^2=(7\cos{3t})^2+(7\sin{3t})^2=7^2 $$ which is a circle centered at the origin with radius 7. Therefore, the area is $\pi\cdot7^2=49\pi$. However, it is also true that the area is $$ A=\int{ydx}=\int_{0}^{2\pi}{y(t)x'(t)dt}=\int_{0}^{2\pi}{7\sin{3t}(-7\sin3t)3dt}=-49\int_{0}^{2\pi}{\sin^2{3t}\;3dt}=-49\cdot(3\pi)=-3(49\pi) $$ which is the same area as before, but multiplied by $-3$. Which answer is correct? The $3t$ in the $\sin$ and $\cos$ terms means that as $t$ goes from $0$ to $2\pi$, the circle is traversed $3$ times, right? Which would explain why the area is multiplied by 3. But what about the minus sign?
In your second way of computing it, you have gone around the circle three times instead of once, which (erroneously) triples the answer. The minus sign is a matter of orientation: for a counterclockwise curve, Green's theorem gives $\oint y\,dx = -A$ (it is $\oint x\,dy$ that equals $+A$), and your parametrization runs counterclockwise. Together these give $-3\cdot 49\pi$.
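A quick numeric check of the line integral (a sketch assuming NumPy, using a midpoint sum):

```python
import numpy as np

t = np.linspace(0, 2*np.pi, 1_000_001)
x = 7*np.cos(3*t)
y = 7*np.sin(3*t)
dx = np.diff(x)
ymid = (y[:-1] + y[1:]) / 2
print(np.sum(ymid * dx), -3 * 49 * np.pi)  # both about -461.81
```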
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Need help in evaluating $\int_{-1}^1 (1-x^2)^k\,dx$, $k \in \mathbb{N}$ Can someone tell me how to evaluate this integral please? $$\int_{-1}^1 (1-x^2)^k\,dx, \quad k \in \mathbb{N}$$ I tried using the substitution $x = \sin(t)$, which would allow me to express this as: $$\int_{-\pi/2}^{\pi/2} \cos^{2k+1}(t)\, dt$$ but this doesn't really help. Any other tricks?
This is simply a Beta function. Using the substitution $t=x^2$ (together with the symmetry of the integrand about $0$), this reduces to $$2\int_0^1 \frac{1}{2\sqrt{t}} (1-t)^k\ dt$$ $$=\int_0^1 t^{-\frac{1}{2}} (1-t)^k\, dt$$ $$=B\left(\frac{1}{2},k+1\right)$$ which can be expressed in a lot of different ways (see the linked Wikipedia page).
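A quick numeric comparison (a sketch assuming SciPy):

```python
from scipy.integrate import quad
from scipy.special import beta

for k in (1, 2, 5):
    val, _ = quad(lambda x: (1 - x**2)**k, -1, 1)
    print(k, val, beta(0.5, k + 1))  # k=1 gives 4/3 for both, etc.
```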
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Least squares with variants Find $\mathbf X$ and $\mathbf Y$ such that the two equations hold: $$ (\mathbf X^t\mathbf X)^{-1}\mathbf X^t \mathbf Y=A $$ $$ (\mathbf X^t\mathbf X)^{-1}=B $$ Here $\mathbf{X}$ is an $n\times p$ matrix and $\mathbf{Y}$ is a vector of length $n$.
I assume the first equation is $(X^tX)^{-1}X^tY = A$; otherwise the matrix multiplication is not defined when $p < n$. I assume I have been given a $p \times p$ matrix $B$, a $p \times 1$ vector $A$, and an $n > p$, and I have to construct an $n \times p$ matrix $X$ and an $n \times 1$ vector $Y$ satisfying the two given equations. Clearly if $B$ is not positive definite the second equation does not have any solution. So assume $B$ is positive definite. Then $B^{-1}$ is also positive definite. Let $B^{-1/2}$ be the $p \times p$ symmetric square root of $B^{-1}$. Let $X = \begin{pmatrix} B^{-1/2} \\ \mathbf{0} \end{pmatrix}$ where $\mathbf{0}$ is an $(n-p) \times p$ matrix of all zeroes. By construction $X^tX=B^{-1}$ and $(X^tX)^{-1} = B$. So this $X$ satisfies the second equation. Having determined $X$, choose $Y=XA$; it is easy to see that for this choice of $Y$ we have $(X^tX)^{-1}X^tY=(X^tX)^{-1}X^tXA=A$.
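A quick numeric check of this construction (a sketch assuming NumPy/SciPy; the particular $B$ and $A$ below are random examples):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
p, n = 3, 6
M = rng.standard_normal((p, p))
B = M @ M.T + p * np.eye(p)          # some positive definite B
A = rng.standard_normal(p)

X = np.vstack([sqrtm(np.linalg.inv(B)).real, np.zeros((n - p, p))])
Y = X @ A

XtX_inv = np.linalg.inv(X.T @ X)
print(np.allclose(XtX_inv, B))            # True
print(np.allclose(XtX_inv @ X.T @ Y, A))  # True
```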
{ "language": "en", "url": "https://math.stackexchange.com/questions/2304900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discontinuity of the following function. I want to find out the discontinuities of the function $f$ defined as $$f(x)=\begin{cases} \dfrac{1}{1+e^{1/(x-2)}+e^{1/(x-3)^2}}, & x\neq2,x\neq3, \\[6pt] 1, & x=2, \\[6pt] \dfrac{1}{1+e}, &x=3. \end{cases}$$ My attempt: Here we only have to check the continuity of $f$ at $x=2$ and $x=3$. As $x\to2^+$, $\frac{1}{x-2}\to+\infty$, i.e., $e^{1/(x-2)}\to+\infty$, i.e., $f(x)\to0$. But $f(2)=1\neq0$. So $f$ is not continuous at $x=2$. A similar procedure gives discontinuity at $x=3$. Therefore the set of discontinuities of $f$ is $\lbrace2,3\rbrace$. Am I right in the above argument? Can someone give me an alternative solution?
Hint: $$\lim_{x\to 2^-}e^{\frac {1}{x-2}}=e^{-\infty}=0$$ while at $2^+$ the exponent gives $+\infty$. Also $$\lim_{x\to 3}e^{\frac {1}{(x-3)^2}}=+\infty.$$ From this, $$\lim_{x\to 2^+}f(x)=\lim_{x\to 3}f(x)=\frac {1}{+\infty}=0,$$ so $f$ is continuous at neither $2$ nor $3$ (recall $f(2)=1$ and $f(3)=\frac{1}{1+e}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Need help proving that $f(n) = 5n^2 - 2n + 16$ is not O(n) So I have tried time and time again but do not understand how to conclude this proof. The answer to the example states: Prove that $5n^2 - 2n + 16$ is not O(n). Assume $5n^2 - 2n + 16$ is O(n). Then there exist constants C and k so that $5n^2 - 2n + 16 \le Cn$ for all n > k. Dividing both sides by n (and assuming n > 0) we get $5n - 2 + 16/n \le C$, or $n \le C + 2 - 16/n \le C + 2$. This equality does not hold for $n > C + 2$ , contrary to our assumption that it held for all large values of n. Therefore $5n^2 - 2n + 16$ is not O(n). Considering my professor just keeps pointing at this answer and not actually listening to my questions I will try to get clarified on here. My first question about this proof is: How does $n > C + 2$ disprove big O? What is the condition testing against, is it infinite in some way? and where do we ever assume that C "held for all large values of n"? and what does that even mean? I understand this proof up until the point where we get $5n - 2 + 16/n <= C$, or $n <= C + 2 - 16/n <= C + 2$. in this snippet I don't understand where $5n$ disappears to and why we are left with just $n$ on the LHS? And after that, why are we left with only $C + 2$ on the RHS, where did $16/n$ go? I may be thinking of this problem too much like an equation and trying to balance each side but I don't know any other ways to approach it. I am not very well versed in big-oh proofs but I am really trying to understand how they work and how to solve them for my class. If anybody could lend insight, answers to my questions, a walk through, or even hints. I just really need help understanding this proof, thanks in advance!
"How does n>C+2 disprove big O? What is the condition testing against, is it infinite in some way? and where do we ever assume that C "held for all large values of n"? and what does that even mean?" You started by saying "$\underline{5n^2−2n+16≤Cn}$ for all $n > k$." So in particular there is some number $k$ so that the underlined inequality holds for all $n>k$. So as soon as $n$ is large enough, the underlined inequality holds. But you also showed that it doesn't hold when $n>C+2$. So in particular if $n$ is bigger than both $k$ and $C+2$, you get the contradiction that the underlined inequality both does and does not hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Convergence in probability implies convergence of $\lim_n P(X_n\in B)$ for all Borel $B$ The usual definition of convergence in probability is that $\lim_n P(|X_n-X|>\epsilon)\to 0$ for all $\epsilon>0$. Is an equivalent definition that $\lim_n P(X_n\in B) = P(X\in B)$ for all measurable $B$? Here's a sketch of why I think this may be true: For any closed $K$ let $K_\epsilon = \{ x:\inf_{y\in K} |x-y|\leq \epsilon \}$. Then we have $P(X_n\in K) \leq P(X\in K_\epsilon) + P(|X_n-X|>\epsilon)$ and $P(X\in K) \leq P(X_n\in K_\epsilon) + P(|X_n-X|>\epsilon)$. Letting $n$ tend to infinity, and $\epsilon\to 0$ and considering $K=\bigcap_\epsilon K_\epsilon$ and the continuity of measure, these inequalities imply $\lim_n P(X_n\in K) = P(X\in K)$. For arbitrary $B$, note that $B$ which satisfy $\lim_n P(X_n\in B) = P(X\in B)$ are a $\lambda$-system, and closed sets are a $\pi$-system, and the Borel sets are generated by the closed sets. Therefore Dynkin's $\pi-\lambda$ lemma lets us generalize the result. This result compares nicely with an analogous result for convergence in distribution: $X_n\to X$ in distribution iff $\lim_n P(X_n\in B) = P(X\in B)$ for all sets with $P(X\in \partial B)=0$. It shows directly how convergence in probability implies convergence in distribution, but not conversely. Anyway, is this right? It seems like this result would be a natural thing to put next to the usual definition, but I can't find it in Kallenberg or Durrett or Billingsley or anywhere else, which makes me wonder, am I missing something?
Let $X_{n}\stackrel{P}{\to}0$ and take $B=\mathbb{R}\setminus\left\{ 0\right\} $. It is quite possible that $P\left(X_{n}\in B\right)=1$ for each $n$, while at the same time $P\left(X\in B\right)=P\left(0\in B\right)=0$. So $\lim_{n\to\infty} P(X_n\in B)=P(X\in B)$ fails badly in that situation. To get a concrete example: if e.g. $U$ has uniform distribution on $[-1,1]$ then you can take $X_n=\frac1nU$. Side note (handy when studying convergence in probability): $$X_{n}\stackrel{P}{\to}X\iff X_{n}-X\stackrel{P}{\to}0$$ So to a large extent the subject can be studied by looking at the case $X_n\stackrel{P}{\to}0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I compute eigenvalues or characteristic polynomial of this matrix? Please help. \begin{pmatrix} 2na & -a & -a & -a & -a & -a& -a\\ -a& a+b & 0 & 0 & -b & 0 & 0\\ -a& 0 & a+b & 0 & 0 & -b &0 \\ -a& 0 & 0 & a+b & 0 & 0&-b \\ -a& -b & 0 & 0 & a+b & 0 & 0\\ -a& 0&-b & 0 & 0 & a+b &0 \\ -a& 0& 0&-b & 0 & 0 & a+b \end{pmatrix} Shown above is the matrix for $n=3$; it contains the block matrices $(a+b)I_3$ and $(-b)I_3$. I think the characteristic polynomial of this matrix can be calculated efficiently for any natural number $n$. The rank is $2n$, so the characteristic polynomial should have zero constant term. But I cannot calculate the determinant, and I don't get zero when I put $t=0$ into my computation of $\text{char}(M)(t)$ for any $n$. Please do me a favor.
Add to row 1 all other rows. Then subtract column 1 from every other column. Your matrix is then similar to $$ B=\left[\begin{array}{c|c}0&0\\ \hline\ast&C\end{array}\right] =\left[\begin{array}{c|cc} 0&0&0\\ \hline \ast&(a+b)I_n+aE&-bI_n+aE\\ \ast&-bI_n+aE&(a+b)I_n+aE \end{array}\right] $$ where $E$ denotes the $n\times n$ matrix of ones. Thus the spectrum of your matrix (or $B$) consists of zero and the $2n$ eigenvalues of $C$. Since all sub-blocks of $C$ commute, its eigenvalues are the roots of $\det\left([tI_n-(a+b)I_n-aE]^2 - (-bI+aE)^2\right)=0$, which are the two roots of $[t-(a+b)-na]^2 - (na-b)^2=0$ (each of multiplicity $1$) as well as the two roots of $[t-(a+b)]^2 - b^2 = 0$ (each of multiplicity $n-1$). So, the complete spectrum of the original matrix is $$ \begin{cases} 0 &\text{(of multiplicity $1$)},\\ (2n+1)a &\text{(of multiplicity $1$)},\\ a+2b &\text{(of multiplicity $n$)},\\ a &\text{(of multiplicity $n-1$)}. \end{cases} $$ Edit. Knowing the eigenvalues, it is not hard to find the eigenvectors of the original matrix by inspection: \begin{cases} \lambda=0: &v=e_1+\ldots+e_{2n+1},\\ \lambda=(2n+1)a: &v=-2ne_1+(e_2+\ldots+e_{2n+1}),\\ \lambda=a+2b: &v=e_{1+i}-e_{1+i+n}\quad (i=1,2,\ldots,n),\\ \lambda=a: &v=\sum_{i=1}^k (e_{1+i}+e_{1+i+n}) - k(e_{k+2}+e_{k+2+n})\quad (k=1,2,\ldots,n-1), \end{cases} where the eigenvectors for $\lambda=a$ are constructed in the spirit of "Jagy matrix". These eigenvectors are already mutually orthogonal. So, if you want an orthonormal eigenbasis, just normalise each of them to a unit vector.
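A quick numeric check of the claimed spectrum (a sketch assuming NumPy; $n$, $a$, $b$ below are arbitrary sample values):

```python
import numpy as np

n, a, b = 3, 2.0, 5.0
inner = np.block([[(a + b)*np.eye(n), -b*np.eye(n)],
                  [-b*np.eye(n), (a + b)*np.eye(n)]])
M = np.zeros((2*n + 1, 2*n + 1))
M[0, 0] = 2*n*a
M[0, 1:] = M[1:, 0] = -a
M[1:, 1:] = inner

eig = np.sort(np.linalg.eigvalsh(M))
predicted = np.sort([0, (2*n + 1)*a] + [a + 2*b]*n + [a]*(n - 1))
print(np.allclose(eig, predicted))  # True
```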
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Existence of square root of a matrix Testing a method that uses the Cayley–Hamilton theorem for finding square roots of real $2 \times 2$ matrices, I have noticed that some matrices apparently don't have square roots with real or complex entries. An example is the matrix $A= \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$. However, does that prove it is impossible to somehow extend the field of entries in order to satisfy the equation $B^2=A$? This is similar to the situation when, many years ago, the equation $a^2=-1$ seemed impossible to solve over the real numbers, so the imaginary number $i$ was introduced. Is it possible to devise such numbers (quaternions? octonions? others?) so that $B^2=A$ would nevertheless be satisfied? Additionally, in the general case of $n \times n$ matrices, can we be sure that a square root exists if we are free to vastly extend the field?
Two partial answers to your question make a full answer! Let $A=\begin{bmatrix}0&1\\0&0\end{bmatrix}$. Answer 1: The set of matrices forms a ring (rings are sets where you have the algebraic operations of addition and multiplication). In abstract algebra one learns about ring extensions; in other words, you can construct a bigger ring which contains the ring of matrices and in which your given matrix $A$ has a square root. This is easier to do in a commutative ring, but matrix multiplication is not commutative. In this case, let $M_{2,\mathbb{R}}$ be the set of $2\times 2$ matrices with coefficients in $\mathbb{R}$, and consider elements which are sums of matrices times $B$. In other words, you have sums whose terms look like $$ BM_1BM_2BM_3B\cdots M_kB $$ with or without the leading $B$'s. The one extra condition is that $B^2=A$. This gets complicated, but the extended ring does contain a square root of $A$, namely $B$. The problem is that $B$ is not a matrix; it's just an extra element in the ring that acts like a square root of $A$. Answer 2: Suppose that $B$ must be a matrix (and we're working over a field of characteristic $0$). Then we have the situation $$ \begin{bmatrix}0&1\\0&0\end{bmatrix}=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}a&b\\c&d\end{bmatrix}=\begin{bmatrix}a^2+bc&ab+bd\\ac+cd&bc+d^2\end{bmatrix}. $$ Let's start with the lower left corner. We have that $c(a+d)=0$, so either $c=0$ or $a=-d$. First the case $c=0$: the matrix on the RHS simplifies to $$ \begin{bmatrix}a^2&ab+bd\\0&d^2\end{bmatrix} $$ Since the upper left and lower right corners are also $0$, $a^2=0$ and $d^2=0$, so $a=0$ and $d=0$. But this makes the upper right corner $0$ as well, a contradiction. Suppose now that $a=-d$; but then the upper right corner is $b(a+d)=0$, which is also not possible. Therefore, if $xy=0$ implies that $x=0$ or $y=0$, then there is no way to write $A$ as the square of a matrix, no matter what field you work with (assuming $0\not=1$ in the field). Concluding remark: If you work with matrices over a commutative ring which has zero divisors (so not an integral domain), then it may be possible to find a matrix which is a square root of $A$.
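For Answer 2, a computer algebra system can also confirm that the entrywise system has no solution (a sketch assuming SymPy):

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
B = sp.Matrix([[a, b], [c, d]])
A = sp.Matrix([[0, 1], [0, 0]])
eqs = list(B*B - A)            # the four entrywise equations
print(sp.solve(eqs, [a, b, c, d]))  # []: no complex solution
```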
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Closed form for $\int_{0}^{2\pi} e^{\sin(x)+\cos(x)} dx$ I'm solving an integral in three coordinates. One of the coordinates is the integral: $\int_{0}^{2\pi} e^{\sin(x)+\cos(x)} dx$. Is it possible to get a closed form for that? My efforts: let $u=-x$. Then $\int_{0}^{2\pi} e^{\sin(x)+\cos(x)} dx = \int_{-2\pi}^{0} e^{\sqrt{2}\sin(\pi/4-u)} du$.
Let $$ I=\int_0^{2\pi}e^{\sin(x)+\cos(x)}\,dx. $$ From the trigonometric addition formula $\cos(\alpha-\beta) = \cos\alpha\cos\beta + \sin\alpha\sin\beta$, with $\alpha=\pi/4, \beta=x$, using the special value $\cos(\pi/4) = \sin(\pi/4) = 1/\sqrt2$, we have $$ \sin(x) + \cos(x) = \sqrt2\cos\left(\pi/4-x\right). $$ From here we have \begin{align} I &= \int_0^{2\pi}e^{\sqrt2\cos\left(\pi/4-x\right)}\,dx\\ &= \int_0^{2\pi}e^{\sqrt2\cos\left(x\right)}\,dx \tag{1}\\ &= \int_0^{\pi}e^{\sqrt2\cos\left(x\right)}\,dx + \int_\pi^{2\pi}e^{\sqrt2\cos\left(x\right)}\,dx \tag{2}\\ &= \int_0^{\pi}e^{\sqrt2\cos\left(x\right)}\,dx + \int_0^{\pi}e^{-\sqrt2\cos\left(x\right)}\,dx. \tag{3}\\ \end{align} In $(1)$ we used the substitution $x\mapsto\pi/4-x$ and the periodicity of the integrand. In $(2)$ we used the additivity of integration on intervals. In $(3)$ for the second integral we used the substitution $x \mapsto x-\pi$, and the symmetry of the cosine function, $\cos(x-\pi) = -\cos(x)$. Using the definition and the properties of the modified Bessel function of the first kind, we have $$ I_0(z) = \frac{1}{\pi}\int_0^\pi e^{z\cos(t)} \,dt = \frac{1}{\pi}\int_0^\pi e^{-z\cos(t)} \,dt. $$ Putting all this together, we have $$ I = 2\pi I_0\left(\sqrt2\right). $$
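A quick numeric confirmation (a sketch assuming SciPy, whose `iv` is the modified Bessel function of the first kind):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import iv

val, _ = quad(lambda x: np.exp(np.sin(x) + np.cos(x)), 0, 2*np.pi)
print(val, 2*np.pi*iv(0, np.sqrt(2)))  # both about 9.840
```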
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does the difference between a sequence of random variables and their conditional expectations converge in $L^1$? Let $(X_n)_n$ be a sequence of random variables on the probability space $(\Omega, \mathcal{F}, P)$, and let $(\mathcal{F}_n)_n$ be a filtration that increases to $\mathcal{F}$. We can assume $(X_n)_n$ is uniformly integrable, but I'm also interested in the general case if anyone wants to comment on that. Is it true that $\int |X_n - E(X_n \mid \mathcal{F}_n)|dP \to 0$ as $n \to \infty$? I haven't made any real progress on this and am just looking for some hints so I can try to prove or disprove it myself. I know that if $X_n$ is held fixed and $\mathcal{F}_n$ is allowed to increase, then the result holds. This is just a textbook martingale convergence result. But I don't know how to generalize this to a whole sequence of random variables and puttering around with Fatou's lemma and the like hasn't gotten me anywhere. Again, I'm just looking for some hints or suggestions so I can try to get it myself.
It's not true, and a counterexample would be one similar to my comment. Take $\{\xi_i\}$ i.i.d. with $P(\xi_i = 1) = P(\xi_i = - 1) = 1/2$. Set $X_k = \prod\limits_{i = 1}^k \xi_i$ and $\mathcal{F}_n = \sigma(\{\xi_i\}_{i = 1}^{n-1})$. Since $X_n = X_{n-1}\xi_n$ with $X_{n-1}$ being $\mathcal{F}_n$-measurable and $\xi_n$ independent of $\mathcal{F}_n$ with mean zero, we get $E(X_n \mid \mathcal{F}_n) = 0$, and then $$|X_n - E(X_n | \mathcal{F}_{n})| = |X_n - 0| = 1.$$ This means that $E|X_n - E(X_n | \mathcal{F}_n)| = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Characterizing subsequences of the Thue-Morse sequence Consider the Thue-Morse sequence on the alphabet $\{0,1\}$ given by $T_0 = 0$ and $T_{n+1} = T_n \bar{T_n}$ where $\bar{T_n}$ is the bitwise negation of $T_n$. Then the Thue-Morse sequence is defined as $$TM:=\lim\limits_{n\to\infty}T_n$$ (this is just one of many equivalent definitions). It is widely known that this sequence is strongly cube-free but is riddled with squares due to the production $T_{n+2} = T_n\bar{T_n}\bar{T_n}T_n$. My question is: How can we characterize the subsequences of the Thue-Morse sequence? We already know that they must be strongly cube-free. I recently gave a talk about the Thue-Morse sequence and its many fascinating properties, and one of the people present asked the question: "Do all strongly cube-free sequences appear as a subsequence of the Thue-Morse sequence?", and I could not answer him. I wrote a small C++ program that checks for reasonably small subsequences, and I noted that in the first $33554432$ ($=2^{25}$) iterations, the following subsequences were missing: 11011 100100 110110 I checked for sequences of length up to and including 5, and left out the subsequences that are not strongly cube-free. This by no means proves that these words will not show up at some later stage in TM, but I do not think they will. Is there any known complete characterization of the binary subsequences that will appear? Thanks in advance!
Every word of length at least 5 has a unique decomposition into 1-words, either beginning with the last letter of a 1-word or ending with the first letter of a 1-word. Moreover, if we put bars to indicate the separations between the 1-words occurring at positions $2k$ in the sequence, the sum of each 1-word is exactly 1. Examples: $11010$ occurs, since it can be decomposed into $1|10|10$, and the sum of each 1-word is 1: $1+0=1+0$. $01100$ also occurs, since it can be decomposed into $01|10|0$. $11011$ cannot occur, because neither $1|10|11$ nor $11|01|1$ satisfies the second condition. The other two examples follow similarly. Note that this condition is only necessary: further checks are needed to see whether a word satisfying it actually occurs.
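A brute-force confirmation that the three words from the question do not occur (a sketch; it checks a prefix of length $2^{22}$ only):

```python
# Generate a long Thue-Morse prefix by repeated appending of the complement,
# then test the three candidate factors.
t = "0"
for _ in range(22):  # prefix of length 2**22
    t += "".join("1" if ch == "0" else "0" for ch in t)

for w in ("11011", "100100", "110110"):
    print(w, w in t)  # False for all three
```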
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Improper integral $\int \limits_{2}^{4}\frac{\sqrt{(16 - x^2)^5}}{(x^2 - 9x + 20)^3}dx$ I can't figure out how to handle (i.e., say whether it converges or diverges) the following improper integral: $$ \int \limits_{2}^{4}\frac{\sqrt{(16 - x^2)^5}}{(x^2 - 9x + 20)^3}dx $$ I've tried to simplify this and got: $$ \int \limits_{2}^{4}\frac{(4 + x)^{5/2}}{(4 - x)^{1/2}(5 - x)^3}\,dx $$ But I don't know what to do next. Could you please give me some hint.
An alternate substitution is $x = 4 \, \sin(t)$, which leads, with some difficulty, to \begin{align} \int \frac{(16 - x^2)^{5/2}}{(x^2 - 9 x + 20)^3} \, dx = \frac{81}{2} \, \frac{x-6}{(x-5)^2} \, \sqrt{16 - x^2} - 63 \, \tan^{-1}\left(\frac{16 - 5x}{3 \, \sqrt{16 - x^2}}\right) - \sin^{-1}\left(\frac{x}{4}\right) + c_{0}. \end{align} Evaluating the antiderivative at the limits $(2,4)$ then provides the integral's value: \begin{align} \int_{2}^{4} \frac{(16 - x^2)^{5/2}}{(x^2 - 9 x + 20)^3} \, dx = 36 \, \sqrt{3} + \frac{125 \, \pi}{3}. \end{align}
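A quick numeric check of the closed form (a sketch assuming SciPy; the integrand has an integrable $1/\sqrt{\cdot}$ singularity at $x=4$, which `quad` copes with):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (16 - x**2)**2.5 / (x**2 - 9*x + 20)**3
val, err = quad(f, 2, 4)
print(val, 36*np.sqrt(3) + 125*np.pi/3)  # both about 193.25
```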
{ "language": "en", "url": "https://math.stackexchange.com/questions/2305911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A theory of numbers problem I could really use some help here: Prove that $17|2x+3y\iff 17|9x+5y$. I don't even know how to start. Just pointing me at a similar problem that has a full solution would be helpful enough, but of course I will be extremely thankful if someone could explain how to approach solving this! :)
Let's see the problem in terms of linear algebra. Let $u=(2,3)$, $v=(9,5)$, and $w=(x,y)$. These are vectors in $\mathbb F_{17}^2$. Then, $17\mid 2x+3y\iff 17\mid 9x+5y$ iff $\langle u,w \rangle = 0 \iff \langle v,w \rangle = 0 $, and this happens iff $u^{\perp} = v^{\perp}$, which happens iff $u$ and $v$ generate the same subspace, that is, are linearly dependent. Therefore, you want to prove that the vectors $(2,3)$ and $(9,5)$ are linearly dependent over $\mathbb F_{17}$. This is easy, because $$ \begin{vmatrix} 2 & 3 \\ 9 & 5 \end{vmatrix} = -17 \equiv 0 \bmod 17 $$ If you want do it explicitly, find $a$ such that $a(2,3)=(9,5)$ by solving $$ 2a \equiv 9 \bmod 17, \quad 3a \equiv 5 \bmod 17 $$ and hope to get the same solution. Indeed, the solution is $a=-4$.
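Since everything happens modulo $17$, a brute-force check over all residues also settles it (a sketch):

```python
ok = all(((2*x + 3*y) % 17 == 0) == ((9*x + 5*y) % 17 == 0)
         for x in range(17) for y in range(17))
print(ok)  # True
```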
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Convergence in compact open topology implies convergence pointwise I'm a beginner in topology, so pardon me if this problem is too easy. Let $X$ and $Y$ be topological spaces. Let $C(X,Y)$ be the set of all continuous functions from $X$ to $Y$. Show that if a net $(f_i)$ converges to $f$ in the compact open topology of $C(X,Y)$ then it converges to $f$ pointwise. The part I'm confused about is the compact open topology: I know it's the topology generated by $S(A,U)=\{f \mid f(A) \subset U\}$ where $A$ is compact and $U$ open, but I don't know how to work with convergence of $(f_i)$ to $f$ directly from the definition, because the open sets of this topology seem complicated. So can you solve this problem for me? Thanks for helping.
Suppose $(f_i) \to f$ in the compact-open topology. Then for any $x \in X$, take an open set $U \subseteq Y$ that contains $f(x)$. Then $f \in S(\{x\}, U)$, which is open in $C(X,Y)$ since finite sets are always compact. So the net is eventually in this set: there exists $i_0$ such that for all $i \ge i_0$ we have $f_i \in S(\{x\}, U)$, which means exactly that $f_i(x) \in U$ for $i \ge i_0$. So, as $U$ was an arbitrary open neighbourhood of $f(x)$: $f_i(x) \to f(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Existence of square root matrix $B \in \mathbb{C}^{2\times 2}$ for any $A \in \mathbb{C}^{2\times 2}$, where $A^2\neq 0$ I am trying to prove that for any $A \in \mathbb{C}^{2\times 2}$ with $A^2\neq 0$, there exists $B \in \mathbb{C}^{2\times 2}$ with $BB=A$. I have tried the approach of general matrices $A$ and $B$ with variable entries $$B = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$$ $$A = \begin{bmatrix} \alpha & \beta \\ \gamma & \delta \end{bmatrix}$$ and assuming $BB=A$, I get the equations $$a^2+bc=\alpha$$ $$b(a+d)=\beta$$ $$c(a+d)=\gamma$$ $$d^2+cb=\delta$$ However, here I am stuck, since I do not know whether any of those variables is $0$, so I cannot operate with these equations. I have seen a solution on Wikipedia; however, to me it seems to fall from the sky, especially the restrictions it makes. I have also found a thread which shows that without the restriction $A^2\neq0$ this statement is false; however, I fail to see how this is the critical restriction. Explanations, clarifications or hints on any of the things I mentioned are most welcome.
We have the equations \begin{eqnarray*} a^2+bc= A \\ b(a+d)=B \\ c(a+d)=C \\ bc+d^2=D \end{eqnarray*} Multiply the first equation by $(a+d)^2$ and use the second & third we have \begin{eqnarray*} a^2(a+d)^2 +BC=A(a+d)^2 \\ d= -a +\sqrt{\frac{BC}{A-a^2}}. \end{eqnarray*} Now subtract the first & the fourth \begin{eqnarray*} a^2-d^2=A-D \\ \end{eqnarray*} Substitute for $d$ and we have \begin{eqnarray*} a^2-A+D= \left( -a +\sqrt{\frac{BC}{A-a^2}} \right)^2 \\ D-A-\frac{BC}{A-a^2}=-2a \sqrt{\frac{BC}{A-a^2}} \end{eqnarray*} Square this & we have a quadratic in $a^2$ \begin{eqnarray*} a^4((D-A)^2+4BC)+a^2(-2A(D-A)^2-2BC(A-D)-4ABC)+A^2(D-A)^2-2ABC(D-A)+B^2C^2=0 \end{eqnarray*} Note that this has discriminant $ \Delta=4B^2C^2(AD-BC)$. This gives \begin{eqnarray*} a^2= \frac{A(A-D)^2+BC(3A-D) \mp 2BC \sqrt{AD-BC}}{(a-d)^2+4BC} \\ =A-\frac{BC}{A+D \pm 2 \sqrt{AD-BC}} \end{eqnarray*} Once the dust has settled ... \begin{eqnarray*} \sqrt{\left[ \begin{array}{cc} A & B \\ C & D\\ \end{array} \right]}=\left[ \begin{array}{cc} \sqrt{A-\frac{BC}{A+D \pm 2 \sqrt{AD-BC}} } & \frac{B}{\sqrt{A+D \pm 2 \sqrt{AD-BC}}} \\ \frac{C}{\sqrt{A+D \pm 2 \sqrt{AD-BC}}} & \sqrt{D-\frac{BC}{A+D \pm 2 \sqrt{AD-BC}} } \\ \end{array} \right] \end{eqnarray*}
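A quick numeric check (a sketch assuming NumPy) of the $+$ branch, written in the equivalent form $R=(M+sI)/t$ with $s=\sqrt{\det M}$ and $t=\sqrt{\operatorname{tr}M+2s}$, which appears to be algebraically the same as the boxed formula:

```python
import numpy as np

M = np.array([[3.0, 2.0], [1.0, 4.0]])
s = np.sqrt(np.linalg.det(M))
t = np.sqrt(np.trace(M) + 2*s)
R = (M + s*np.eye(2)) / t
print(np.allclose(R @ R, M))  # True: R is a square root of M
```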
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Formula using Fibonacci numbers Let $a_n$ be the $n^{th}$ term of the sequence defined recursively by $a_{n+1} = \frac {1}{1+a_n}$ and let $a_1 = 1.$ Find a formula for $a_n$ in terms of the Fibonacci numbers $F_n$. Prove that the formula you found is valid for all natural numbers $n.$ How can I solve this type of problem? And how do I prove it by induction? Do I solve for $a_n$, or what? I'm new to this chapter (sequences and series).
By induction $a_n=\frac{F_n}{F_{n+1}}$: indeed $a_1=\frac{F_1}{F_2}=1$, and if $a_n=\frac{F_n}{F_{n+1}}$ then $$a_{n+1}=\frac{1}{1+a_n}=\frac{1}{1+\frac{F_n}{F_{n+1}}}=\frac{F_{n+1}}{F_n+F_{n+1}}=\frac{F_{n+1}}{F_{n+2}},$$ using $F_{n+2}=F_n+F_{n+1}$. Here $\{F_n\}:1,1,2,3,5,...$.
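A quick sanity check of the formula for the first few terms (a sketch using exact rationals):

```python
from fractions import Fraction

a = Fraction(1)
F = [1, 1]  # F_1, F_2, ...
for n in range(1, 10):
    assert a == Fraction(F[n - 1], F[n])  # a_n == F_n / F_{n+1}
    a = 1 / (1 + a)
    F.append(F[-1] + F[-2])
print("checked n = 1..9")
```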
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing that a topological space is ${\rm T}_1$ Let $X$ be a topological space and let $\Delta = \{(x,x) : x\in X \}$ be the diagonal of $X\times X$ (with the product topology). I was asked to prove that $X$ is ${\rm T}_1$ if and only if $\Delta$ can be written as an intersection of open subsets of $X\times X$. I think it is better (maybe easier) to use the well-known result "$X$ is ${\rm T}_1$ if and only if $\{x\}$ is a closed set $\forall x\in X$". What I've done: Assuming that $X$ is ${\rm T}_1$, let $Y=(X\times X) \setminus\Delta$ and note that (trivially) $$Y = \bigcup_{y\in Y} \{y\}.$$ Then $$\Delta = (X\times X) \setminus Y = (X\times X) \setminus \left(\bigcup_{y\in Y} \{y\} \right) = \bigcap_{y\in Y} (X\times X)\setminus \{y\},$$ where $(X\times X)\setminus\{y\}$ is open since each $\{y\}$ is closed. The other direction seems to be a bit harder; I've tried unsuccessfully. Any ideas? Thanks in advance.
Just an idea. Suppose $\Delta=\bigcap O_\lambda$, where $(O_\lambda)_{\lambda\in\Lambda}$ is a family of open sets in $X\times X$. Let $(x,y)\in X\times X$ with $x\neq y$. Then $(x,y)\notin \Delta$ and therefore $(x,y)\notin\bigcap O_{\lambda}$, so choose $\mu\in\Lambda$ such that $(x,y)\notin O_\mu$. Since $(x,x)\in\Delta\subseteq O_\mu$ and $O_\mu$ is open in the product topology, there is a basic open set $U_\mu\times V_\mu\subseteq O_\mu$ with $(x,x)\in U_\mu\times V_\mu$, where $U_\mu,V_\mu$ are open in $X$; in particular $x\in U_\mu$ and $x\in V_\mu$. Because $(x,y)\notin O_\mu$, certainly $(x,y)\notin U_\mu\times V_\mu$, so $x\notin U_\mu$ or $y\notin V_\mu$. The first is impossible, hence $y\notin V_\mu$, and $V_\mu$ is an open set containing $x$ but not $y$. Exchanging the roles of $x$ and $y$ gives an open set containing $y$ but not $x$, so $X$ is ${\rm T}_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Bounded and Closed but not Compact I was asked to construct a non-compact set that is nevertheless bounded and closed. I can easily imagine one with a different metric, such as the discrete metric where $d(x,y) =0$ if $x=y$ and $d(x,y) =1$ if $x \neq y$: then an infinite $M$ itself is closed (being the whole space) and bounded, since it is contained in $D(x;2)$, but it is not compact, since the balls $D(x;1/2)$, $x\in M$, form an open cover of $M$ with no finite subcover. But how does one construct a bounded and closed but not compact set in a complete metric space?
A subset of a Banach space is compact if and only if it is closed and totally bounded. The closed unit ball of $\ell^1(\mathbb R)$ centered at $0$ is closed and bounded but not totally bounded: it is not a finite union of open balls of radius $\frac{1}{2}$, simply because each such ball can contain at most one of the standard basis sequences of the form $0,0,\dots, 0, 1, 0,\dots$, which lie at pairwise distance $2$ in $\ell^1$ (this is clear by the triangle inequality).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Probability that the withdrawn balls are the same color Question An urn contains $n$ white and $m$ black balls, where $n$ and $m$ are positive integers. 1. If two balls are randomly withdrawn, what is the probability that they are the same color? 2. If a ball is randomly withdrawn and then replaced before the second one is drawn, what is the probability that the withdrawn balls are the same color? My Approach 1. Probability $$=\frac{\binom {n}{2}\,+\,\binom {m}{2}}{\binom {n+m}{2}}$$ Either choose $2$ balls from the $n$ white or $2$ from the $m$ black. 2. Probability $$=\frac{\binom {n}{1}\,\cdot\,\binom {n-1}{1}\,+\,\binom {m}{1}\,\cdot\,\binom {m-1}{1}}{\binom {n+m}{2}}$$ First choose $1$ ball from the black, then $1$ from the remaining, and the same for the white balls. Am I correct?
For the first one: $$\large{\frac{\binom{m}{2}+\binom{n}{2}}{\binom{m+n}{2}}}$$ Explanation: * *$\large{\binom{m+n}{2}}$ ways of selecting two balls. *$\large{\binom{m}{2}}$ ways of selecting two black balls. *$\large{\binom{n}{2}}$ ways of selecting two white balls. For the second one: $$\frac{m^2+n^2}{(m+n)^2}$$ Explanation: * *$(m+n)^2$ ways of selecting two balls, one at a time, with replacement. *$m^2$ ways of selecting two black balls. *$n^2$ ways of selecting two white balls.
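A quick numeric comparison of the two formulas for sample values (a sketch):

```python
from math import comb

m, n = 4, 7
p_without = (comb(m, 2) + comb(n, 2)) / comb(m + n, 2)
p_with = (m**2 + n**2) / (m + n)**2
print(p_without, p_with)  # 0.4909..., 0.5371...
```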
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Curvature of Regular Tilings of the Hyperbolic Plane Given a regular tiling of the hyperbolic plane, with p-sided polygons and q polygons meeting at each vertex, is the Gaussian curvature of the plane determined by (p,q)? Intuitively it seems like having five pentagons meeting at each point would need greater (negative) curvature than four pentagons. But I'm just guessing? (And have never formally studied hyperbolic geometry.)
This depends on the edge length of the polygons. Usually the convention is that you use a hyperbolic plane with curvature $-1$ unless indicated otherwise. In such a plane, you can observe that small figures look almost Euclidean, but the larger a figure is, the more pronounced the difference in geometry becomes. Actually the angle deficit can be directly used as a measure of area. So for five pentagons meeting at each corner, you need a smaller interior angle, which means a larger angle deficit and thus a larger area than for four pentagons meeting at each corner. But you could go about this the other way. You could fix the edge length of your pentagon, and then observe that the curvature has to be different in the two situations. Because essentially all lengths in hyperbolic geometry are relative to the curvature of the plane, and Gaussian curvature carries units of an inverse square of length: if you quadruple the magnitude of the curvature, all lengths double when measured in the intrinsic unit $1/\sqrt{|K|}$. Fixing the edge length of the polygon, you would observe an increasing angle deficit the more negative your curvature becomes, so you can fit in more of them around each corner, just as your intuition told you. The formula $\text{surplus angle}=\text{curvature}\times\text{area}$ holds in general. For hyperbolic geometry you have negative curvature and an angle deficit which is a negative surplus. For elliptic geometry surplus and curvature are positive, and for Euclidean both are zero. To fit more pentagons around a corner, you need more deficit and therefore have to increase either the absolute value of the curvature or the area of each pentagon. If you have a $p$-sided polygon, the Euclidean interior angle is $\frac{p-2}p\pi$. If you fit $q$ polygons around each corner the hyperbolic interior angle is $\frac2q\pi$. So the surplus angle is $\left(\frac2q-\frac{p-2}p\right)\pi$. For four pentagons this is $-0.1\pi$, for five pentagons it is $-0.2\pi$, so the value of the curvature would have to be twice as high in the latter case if you fix the area, or the area would have to be twice as large (roughly a $\sqrt2$ increase in edge length, by the Euclidean approximation for small polygons) if you fix the curvature.
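To make this concrete (a hedged sketch: it assumes the standard edge-length formula $\cosh(\ell/2)=\cos(\pi/q)/\sin(\pi/p)$ for a regular $\{p,q\}$ tiling of the curvature $-1$ plane):

```python
import math

def edge_length(p, q):
    return 2 * math.acosh(math.cos(math.pi / q) / math.sin(math.pi / p))

print(edge_length(5, 4))  # pentagons, four per vertex
print(edge_length(5, 5))  # pentagons, five per vertex: longer edges
```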
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Group homomorphisms from $\mathbb{C}^\ast$ to $ \mathbb{Z}$ I want to find $Hom_{\mathtt{Grp}}(\mathbb{C}^\ast,\mathbb{Z})$, where $\mathbb{C}^\ast$ is the multiplicative group, and $\mathbb{Z}$ is additive. $\mathbb{C}$ is the additive group of complex numbers. We have the following map: $\large{\mathbb{C} \xrightarrow{exp} \mathbb{C}^\ast \xrightarrow{?} \mathbb{Z}}$ where the kernel of $\exp$ is $2\pi i\,\mathbb{Z}\cong\mathbb{Z}$. And I don't know if this can help; any hint?
This isn't a full answer, but I suspect that this Hom group may be the trivial group. Suppose $\phi:\mathbb{C}^* \to \mathbb{Z}$ is a group homomorphism. We know that $\phi(1) = 0$ since $1$ is the identity. Then $$ 2\,\phi(-1) = \phi\big((-1)^2\big) = \phi(1) = 0 \implies \phi(-1) = 0 $$ By a similar argument, every root of unity must go to zero: if $\zeta^k = 1$ for some $k \in \mathbb{Z}_{>0}$, then $$ k\,\phi(\zeta) = \phi\big(\zeta^{k}\big) = \phi(1) = 0 \implies \phi(\zeta) = 0 $$ So we have a dense subset of the unit circle (the roots of unity) that must all get sent to zero. I don't quite know how to use this, but it seems likely to me that this will force $\phi$ to send everything to zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to factorize :$f(x)=x^n+x+1 \ \ \ \ \ \ : n=3k+2 ,k\in \mathbb{N}$ How to factorize : $$f(x)=x^n+x+1 \ \ \ \ \ \ : n=3k+2 ,\ k\in \mathbb{N}$$ And : $$g(x)=x^n+x-1 \ \ \ \ \ \ : n=3k+2 ,\ k=2m-1 \ \ \ , \ \ m\in\mathbb{N}$$ My try : $$f(x)=x^n+x+1=x^{3k+2}+x+1$$ $$=(x^{3k+2}+x^{3k+1}+x^{3k})-(x^{3k+1}+x^{3k})+(x+1)$$ Now what ?
Hint: evaluate $f(x)$ at $\omega$, where $\omega=e^{2\pi i/3}$, so $\omega^3=1$, and $\omega^2 + \omega + 1 = 0$.
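The hint points toward showing that $x^2+x+1$ divides $f(x)$; a quick sympy check of that divisibility for small $k$ (a sketch, with an arbitrary range):

```python
from sympy import symbols, div

x = symbols('x')
for k in range(6):
    n = 3 * k + 2
    q, r = div(x**n + x + 1, x**2 + x + 1, x)
    assert r == 0, (n, r)    # remainder vanishes: x^2 + x + 1 | x^n + x + 1
print("verified for n = 2, 5, 8, 11, 14, 17")
```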
{ "language": "en", "url": "https://math.stackexchange.com/questions/2306997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Lie derivative and representations of a Lie algebra I'm reading a book on integrable systems and am trying to understand Lie groups. The author states a property I cannot understand: Let me define the protagonists: L is the Lie derivative, m is an element of a Lie group and X and Y are in its Lie algebra. The dot, unless I'm mistaken, refers to a certain action of X on m. I can understand the first and the last equality, the problem lies in the second equality. Why is it so? Is it true?
Well, this probably comes from an older definition of a Lie algebra, where for a given Lie group $G$ one defines its associated Lie algebra to be $$ \mathfrak{g}= \{ A \in M_{n \times n}(\mathbb{C}) \thinspace | \thinspace \forall t \in \mathbb{R}: \exp(tA) \in G\}. $$ So, if you have an action of the Lie group $G$ on a set $V$ (or vector space, module, etc.), then this action defines a new one on the same $V$, this time of the Lie algebra, given as follows: $$ \begin{split} \mathfrak{g} \times V &\rightarrow V, \\ (X, u) &\mapsto X.u= \frac{d}{dt}(\exp(tX).u)_{t=0}. \end{split} $$ You can check on your own that this is indeed an action. What you see in the book is essentially this construction, applied to the corresponding Lie bracket. Somewhere in here: http://www.maths.gla.ac.uk/~ajb/dvi-ps/lie-bern.pdf you can find answers to all the possible questions you can come up with. However, if you need something else please do let me know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniform digit generator The question goes as follows: Let there be a random digit generator, generating the numbers $0,1,2,...,9$ uniformly and independently. The generator writes the numbers from left to right. a) What is the expected value for the number of different digits appearing between two zeroes? b) What is the distribution of the number of times the number $0$ appeared between two appearances of $9$? c) Assuming the first digit is 8. What is the expected value of the number made up from the first four digits? Well, 'c' was pretty simple, and is just $8000 + 4.5*100 + 4.5*10 + 4.5*1 = 8499.5$, since the expectation of the digit generated is $4.5$ and they are independent. I had problems with solving 'a' and 'b', and would be happy to see both rigorous approach and simple approach if there are. Thanks.
One rigorous solution for b) can be this: Let $d$ be the number of (not necessarily distinct) digits between two appearances of the number 9, and $x$ the number of zeroes that appear in these digits; then $$P(x=k|d) = \frac{{d \choose k} 8^{d-k}}{9^d}$$ because that's the number of ways you can accommodate $k$ zeroes and $d-k$ other digits (distinct from zero or nine) in $d$ slots, and the universe has $9^d$ possibilities (any digit distinct from 9 in any of the $d$ places). Since $d$ can be viewed as the number of trials you have to do before getting another 9, $d$ follows a geometric distribution with $p = \frac{1}{10}$: $$P(d) = (\frac{9}{10})^d \frac{1}{10}$$ Now, by the law of total probability $$P(x = k) = \sum_{d=k}^{\infty} P(x=k|d) P(d) = \sum_{d=k}^{\infty} \frac{{d \choose k} 8^{d-k}}{9^d} (\frac{9}{10})^d \frac{1}{10}$$ If you work through this equation using $u = d-k$ instead of $d$ (and the identity $\sum_{u\geq0}\binom{u+k}{k}x^u = (1-x)^{-(k+1)}$ with $x=\frac45$), you get $$P(x = k) =\frac{1}{10^{k+1}} \sum_{u=0}^{\infty} {{u+k} \choose k} (\frac{4}{5})^{u} = \frac{5^{k+1}}{10^{k+1}} = \frac{1}{2^{k+1}}$$
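A Monte Carlo sanity check of $P(x=k)=1/2^{k+1}$ (a sketch; the stream length and seed are arbitrary):

```python
import random
from collections import Counter

random.seed(0)
counts, total = Counter(), 0
seen_nine, zeros = False, 0
for _ in range(2_000_000):
    d = random.randrange(10)
    if d == 9:
        if seen_nine:              # completed a stretch between two 9s
            counts[zeros] += 1
            total += 1
        seen_nine, zeros = True, 0
    elif seen_nine and d == 0:
        zeros += 1

for k in range(5):
    print(k, round(counts[k] / total, 4), 1 / 2 ** (k + 1))
```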
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Area between $r=4\sin(\theta)$ and $r=2$ I'm trying to find the area between $r=4\sin(\theta)$ and $r=2$. I found the points of intersections to be $\pi/6,5\pi/6$. Which implies the area is $$A=\frac{1}{2}\int_{\pi/6}^{5\pi/6}(4\sin(\theta))^2-2^2d\theta.$$ Is this correct? Or did I find the area for the following region
The desired red region is just the area of a circle with radius $2$ minus the area of the blue region: $$ \pi(2)^2 - A = 4\pi - \frac{1}{2}\int_{\pi/6}^{5\pi/6}(4\sin(\theta))^2-2^2d\theta = 4\pi - \left( 2\sqrt 3 + \frac{4\pi}{3} \right) = \frac{8\pi}{3} - 2\sqrt 3 $$
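A numerical cross-check of both areas (a Python sketch using a midpoint Riemann sum):

```python
from math import sin, pi, sqrt

a, b, N = pi / 6, 5 * pi / 6, 200_000
h = (b - a) / N
# blue region: (1/2) * integral of (16 sin^2(t) - 4) dt over [pi/6, 5pi/6]
blue = 0.5 * h * sum(16 * sin(a + (k + 0.5) * h) ** 2 - 4 for k in range(N))
print(blue, 2 * sqrt(3) + 4 * pi / 3)             # both ~ 7.6529
print(4 * pi - blue, 8 * pi / 3 - 2 * sqrt(3))    # both ~ 4.9135
```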
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Functor $F$ is equivalence of categories implies $F$ is full. I am trying to understand the answer given to this question. From what I understand they are saying that if there exists an $f$ such that $Ff = g$ then by naturality it must have the property that $GFf = Gg$ from which faithfulness of $G$ implies $Ff = g$. The problem is that I don't see why we are guaranteed to have a morphism $f$ with this property. The definition of a natural transformation states that for every $f: X \rightarrow Y$ we have $\eta_Y \circ F(f) = G(f) \circ \eta_X$, so I do not see how we can use naturality unless we already know some $f$ exists with $Ff = g$.
You are misreading the answer. They aren't assuming such an $f$ exists: they are explaining that you can find the formula for such an $f$ by assuming it exists and figuring out what it must be. Namely if $f:X\to Y$ exists, it must be given by the formula $f=\eta_Y^{-1} \circ Gg \circ \eta_X$. Now stop assuming that $f$ satisfies $Ff=g$ and simply define $f=\eta_Y^{-1} \circ Gg \circ \eta_X$. You can now prove directly from this formula (using naturality of $\eta$ several times) that $GFf=Gg$, and so $Ff=g$. Here are the details of the proof that $GFf=Gg$. We have $$GFf=GF\eta_Y^{-1}\circ GFGg\circ GF\eta_X.$$ Now by naturality of $\eta$ (applied to the map $\eta_X:X\to GFX$), $GF\eta_X\circ\eta_X=\eta_{GFX}\circ\eta_X$. Since $\eta_X$ is an isomorphism, $GF\eta_X=\eta_{GFX}$. Similarly, $GF\eta_Y=\eta_{GFY}$, so we have $$GFf=\eta_{GFY}^{-1}\circ GFGg\circ \eta_{GFX}.$$ Finally, naturality of $\eta$ (applied to the map $Gg:GFX\to GFY$) gives $GFGg\circ\eta_{GFX}=\eta_{GFY}\circ Gg$, and so $$GFf=Gg.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
How to create a 3 parameter function that gets its maximum in the center of the cube? I have a problem where I have a 3 dimensional board, and I want to assign to the center of that cube (aka the board) maximum values and to the edges of the board lower values (preferably in a way so that the upper left side will get higher values than the lower right side) - so I have a cube with R rows, C columns and depth D. I have a coordinate inside the cube with coordinates (i,j,d), with the following constraint on the values (0,0,0)<=(i,j,d)<(R,C,D). I want a function that will map each value (i,j,d) to distinct values between 0 inclusive and RCD non-inclusive, in a way that the middle of the cube will get the highest values, then the upper left side and then the lower right side.
Let's assume each cell is identified by (i,j,d) and the maximum values are (I,J,D). Assume that i,j, and d start at 1, and that I, J, and D are odd so that there is a unique middle value on each axis, $M_I = (I+1)/2, M_J = (J+1)/2$, and $M_D = (D+1)/2$ Then the distance from the center, for example in i, is $$|i-M_I|$$ We can make the maximum value 1 in the center and apply a penalty for moving away. The simplest way to do this would be: $$V = 1 - \frac{|i-M_I|}{3(I-M_I)} - \frac{|j-M_J|}{3(J-M_J)} -\frac{|d-M_D|}{3(D-M_D)} $$ But this is symmetric, with all the corners giving 0. If we want to introduce asymmetry, so that the upper left will be higher than the lower right, we can define $$f(i-M_I) = i-M_I, for \ \ i\geq M_I$$ $$f(i-M_I) = \frac{2}{3}\left(M_I - i\right), for\ \ i<M_I$$ This function f is similar to the absolute value, but increases more slowly for negative values than positive ones, so we will get an asymmetric penalty like this: $$V = 1 - \frac{f(i-M_I)}{3(I-M_I)} - \frac{f(j-M_J)}{3(J-M_J)} -\frac{f(d-M_D)}{3(D-M_D)} $$
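A direct implementation of this score (a sketch; the function and variable names are mine, and it assumes 1-based indices with $I$, $J$, $D$ odd, as in the derivation above). Ranking all cells by this score would then give the asker's distinct values $0,\dots,RCD-1$:

```python
def score(i, j, d, I, J, D):
    """1 at the exact center, falling off toward the faces; large-index
    (lower-right) cells are penalized more than small-index (upper-left) ones."""
    def f(x, M):                       # asymmetric distance from the midpoint M
        return x - M if x >= M else (2.0 / 3.0) * (M - x)
    MI, MJ, MD = (I + 1) / 2, (J + 1) / 2, (D + 1) / 2
    return (1 - f(i, MI) / (3 * (I - MI))
              - f(j, MJ) / (3 * (J - MJ))
              - f(d, MD) / (3 * (D - MD)))

print(score(3, 3, 3, 5, 5, 5))   # 1.0 at the center of a 5x5x5 board
print(score(5, 5, 5, 5, 5, 5))   # 0.0 at the lower-right corner
print(score(1, 1, 1, 5, 5, 5))   # ~0.333 at the upper-left corner
```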
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How To Calculate Length Of Screw Thread? I'm having a tough time searching for the answer to this as most of the results are in the realm of mechanical engineering and appear to be unrelated, I apologize if this is a really obvious question. Say there is a circular arc in $2$ dimensions covering $90$ degrees at a radius of $21$. I know the length of the arc would be $\frac{21}{2}\pi$ or about $32.99$, but what if it were then stretched through the third dimension by some number $x$? How do you calculate the screw thread length?
As an intuitively convincing method, you could consider straightening out the arc into a line of length $\frac{21}2\pi$ and then stretching it along another axis by length x and using Pythagorean theorem to calculate its new length: $\sqrt{(\frac{21}2\pi)^2+x^2}$.
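To back up the intuition, here is a numeric check (a Python sketch; the axial stretch value is a made-up example): the stretched arc is a helix segment, and a fine polyline approximation of its length agrees with the Pythagorean formula.

```python
from math import pi, sqrt, cos, sin, hypot

r, rise = 21.0, 10.0          # arc radius; 'rise' is an arbitrary example stretch x
theta = pi / 2                # the 90-degree sweep

N = 100_000
pts = [(r * cos(t), r * sin(t), rise * t / theta)
       for t in (theta * k / N for k in range(N + 1))]
poly = sum(sqrt((a[0] - b[0])**2 + (a[1] - b[1])**2 + (a[2] - b[2])**2)
           for a, b in zip(pts, pts[1:]))

print(poly, hypot(r * theta, rise))   # both ~ sqrt((21*pi/2)^2 + 10^2) ~ 34.469
```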
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Find a basis for orthogonal complement in R⁴ How do I approach part 2? I found the projection of 1. to be (6,-2,2,-2) but what do I do now?
For vector $\mathbf v = (x_1, x_2, x_3,x_4)$, the dot products of $\mathbf v$ with the two given vectors respectively are zero. $$\begin{align*} \begin{bmatrix}1&2&3&4\\2&5&0&1\end{bmatrix} \begin{bmatrix}x_1\\x_2\\x_3\\x_4\end{bmatrix} &= \begin{bmatrix}0\\0\end{bmatrix}\\ \begin{bmatrix}1&2&3&4\\0&1&-6&-7\end{bmatrix} \mathbf v &= \begin{bmatrix}0\\0\end{bmatrix}\\ \begin{bmatrix}1&0&15&18\\0&1&-6&-7\end{bmatrix} \mathbf v &= \begin{bmatrix}0\\0\end{bmatrix}\\ \end{align*}$$ Let $x_3 = a$, $x_4 = b$, then $x_1 = -15a - 18b$, and $x_2 = 6a + 7b$. $$\mathbf v = \begin{bmatrix}-15a - 18b\\6a+7b\\a\\b\end{bmatrix} = a\begin{bmatrix}-15\\6\\1\\0\end{bmatrix} + b\begin{bmatrix}-18\\7\\0\\1\end{bmatrix}$$ So $(-15,6,1,0)$ and $(-18, 7,0,1)$ together is a basis. Setting $a=1, b=-1$ gives $(3,-1,1,-1)$, which is one of the vectors in your basis above.
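A quick numpy verification of this basis (a sketch):

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [2, 5, 0, 1]])
basis = np.array([[-15, 6, 1, 0],
                  [-18, 7, 0, 1]])

print(A @ basis.T)                    # 2x2 zero matrix: orthogonal to both rows
print(np.linalg.matrix_rank(basis))   # 2: the two vectors are independent
```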
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Prove that if $(C_n)$ is a sequence of connected subsets of $X$ such that $C_n\cap C_{n+1}\neq\emptyset$ then $\bigcup C_n$ is connected. Suppose that $(C_n)$ is a sequence of connected subsets of $X$ such that $C_n\cap C_{n+1}\neq\emptyset$ for each $n\in\mathbb{N}.$ It is required to prove that $\bigcup C_n$ is connected. The following is my attempt. Let $P(n)$ be the statement $\bigcup_{k=1}^n C_k$ is connected. Obviously $P(1)$ is true. Now let $n\in\mathbb{N}$. Suppose $P(n)$. Suppose the function $f:\bigcup_{k=1}^{n+1}C_k\to\{0,1\}$ is continuous where the set $\{0,1\}$ is endowed with its discrete topology. Since $A=\bigcup_{k=1}^n C_k$ is connected and $f\restriction_A$ is continous we have $f\restriction_A$ is constant; say $f\restriction_A=1$. Now let $x\in C_{n+1}$ and $a\in C_n\cap C_{n+1}$. Then $f(x)=f(a)$ because $C_{n+1}$ is connected. But $f(a)=1$ as $a\in A$. Thus $f\restriction_{C_{n+1}}\equiv 1$. Therefore $f$ is constant on $\bigcup_{k=1}^{n+1}C_k$. Hence $\bigcup_{k=1}^{n+1}C_k$ is connected. Now by induction $P(n)$ is true for all $n\in\mathbb{N}$, i.e. $\bigcup_{k=1}^n C_k$ is connected for all $n\in\mathbb{N}$. Now suppose $\bigcup C_n$ is not connected. Then there exists a continuous surjective function $f:\bigcup C_n\to\{0,1\}$. Thus there exist $a,b\in\bigcup C_n$ such that $f(a)=0$ and $f(b)=1$. So there exist $n_a,n_b\in\mathbb{N}$ such that $a\in C_{n_a}$ and $b\in C_{n_b}$. WLOG suppose $n_a\leq n_b$. Then $a,b\in \bigcup_{k=1}^{n_b} C_k$ and $\bigcup_{k=1}^{n_b} C_k$ is connected and therefore $f\restriction_{\bigcup_{k=1}^{n_b} C_k}$ is constant as it is continuous. Therefore $0=f(a)= f(b)=1$; contradiction. Hence $\bigcup C_n$ is connected. Is the above proof alright? Thanks.
In your proof, since you have already assumed that $f:\bigcup_{1}^{n+1}C_k\rightarrow\{0,1\}$ is a continuous function (we know that there exists at least one such continuous function, hence the assumption), it is clear that $\bigcup_1^nC_k$ and $C_{n+1}$, being connected sets, each map to a single element of $\{0,1\}$. If you further assume that they map to distinct elements, say $\bigcup_1^nC_k$ to $0$ and $C_{n+1}$ to $1$, then a contradiction arises because $X=(\bigcup_1^nC_k)\cap C_{n+1}\neq\emptyset$, and for any $x\in X$ the value $f(x)$ would have to be both $0$ and $1$. Thus, by contradiction, the entire set $\bigcup_1^{n+1}C_k$ is mapped to a single element of $\{0,1\}$. Since every continuous $f:\bigcup_1^{n+1}C_k\to\{0,1\}$ is therefore constant, $\bigcup_1^{n+1}C_k$ is connected, and by your induction argument $\bigcup_1^{n}C_k$ is connected for all $n$. Thus I feel that the first paragraph of the proof is well and good, but the second one is redundant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Solution to $u'=ru$ in distributional sense? How do we show that $u(x)=ce^{rx}$ is the only solution to $u'=ru$ in $\mathcal D'(X)$? I tried to decompose a $\phi\in\mathcal D(X)$ into parts and let $u$ acts on each of them, but I couldn't show that the sum converges in $\mathcal D(X)$. I am lost here, can anyone please help? More generally, for equation of the form $$ u^{(n)} + c_1 u^{(n-1)} + \dots + c_n u =0, $$ why is it true that classical solutions are all posible solutions. What is the reasoning behind that?
The calculations are the same as in classical real analysis: In $\mathcal D'(X)$ we can multiply by $C^\infty$ functions, and since $e^{-rx}$ is never $0$ the equation $u'=ru$ will not change its set of solutions when multiplied with $e^{-rx}$. Thus, $u'=ru$ is equivalent with $e^{-rx} u' - r e^{-rx} u = 0$. This can be rewritten as $(e^{-rx} u)' = 0$ which means that $e^{-rx} u = c$, where $c$ is a constant. Thus $u = c e^{rx}$. Solution of $u''-u=0$: The equation can be rewritten as $(u'+u)' = u'+u,$ so by the previous result we have $$u'(x)+u(x) = (u'+u)(x) = c_1 e^x.$$ Just as for classical solutions, and for the same reasons, we first seek one particular solution and a set of solutions to the homogeneous equation. One particular solution is $u_p(x) = \frac12 c_1 e^x$ and the homogeneous solutions are $u_h(x) = c_2 e^{-x}$ by a modification of the previous result. Thus the general solution is $$u(x) = u_p(x) + u_h(x) = \frac12 c_1 e^x + c_2 e^{-x}.$$ (Of course, by adjusting $c_1$ we can drop the factor $\frac12$) How did I come up with $(u'+u)' = u'+u$? Start with $u''-u = 0$. Rewrite this as $(D^2-1)u=0$. Factor the operator to get $(D-1)(D+1)u=0$. Thus we have $D(D+1)u = (D+1)u,$ i.e. $(u'+u)' = u'+u.$ Generally, when we have an equation of the form $$u^{(n)} + c_1 u^{(n-1)} + \dots + c_n u =0,$$ where $c_1, \ldots, c_n$ are constants, we can factor the differential operator and get $$(D-r_1)(D-r_2)\cdots(D-r_n)u = 0$$ where $r_1, \ldots, r_n$ are the solutions to the characteristic equation $$r^n + c_1 r^{n-1} + \cdots + c_n = 0.$$ Then we can solve the equation by solving for one "differential factor" at a time through multiplication by $e^{-r_k x}$.
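As a cross-check of the classical side of this argument (the distributional solutions coincide with the classical ones, as shown above), sympy recovers the same two-parameter family for $u''-u=0$:

```python
from sympy import Function, dsolve, Eq, symbols

x = symbols('x')
u = Function('u')

sol = dsolve(Eq(u(x).diff(x, 2) - u(x), 0), u(x))
print(sol)   # Eq(u(x), C1*exp(-x) + C2*exp(x)), matching the factored-operator solution
```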
{ "language": "en", "url": "https://math.stackexchange.com/questions/2307911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
BMO2 2017 Question 4 - Bobby's Safe Bobby’s booby-trapped safe requires a $3$-digit code to unlock it. Alex has a probe which can test combinations without typing them on the safe. The probe responds Fail if no individual digit is correct. Otherwise it responds Close, including when all digits are correct. For example, if the correct code is $014$, then the responses to $099$ and $014$ are both Close, but the response to $140$ is Fail. If Alex is following an optimal strategy, what is the smallest number of attempts needed to guarantee that he knows the correct code, whatever it is? I think the optimal number is $13$ (start by trying $000$, $111$, $\ldots$, $999$), but it's hard to find bounds here. Any help?
We shall first prove that the task requires at least $13$ attempts. In the worst case scenario, Alex's first six attempts all fail, which means he has at least $(10-6)^3=64$ remaining combinations to guess. If he gets a "Close" outcome in the seventh round, he will then have at least $$9+9+9+3+3+3+1=37$$ possibilities for the correct code. This means he will need at least $\left\lceil\log_2(37)\right\rceil=6$ more rounds, whence Alex cannot complete the task in fewer than $13$ trials. We now claim that $13$ trials suffice. For the first ten tries, Alex takes the guesses $000$, $111$, $222$, $\ldots$, $999$. * *If he gets nine failed attempts, then the only one that gets the "Close" outcome is the correct code. *Suppose for the moment that Alex gets exactly two "Close" outcomes $aaa$ and $bbb$. Then, pick a combination $ccc$ known to fail. Test $acc$, $cac$, and $cca$ to see which digit is $a$ and which is $b$. *Finally, assume that Alex gets three "Close" outcomes $aaa$, $bbb$, and $ccc$. Then, there are six possibilities left: $abc$, $acb$, $bac$, $bca$, $cab$, and $cba$. Alex can guess $add$ and $bdd$, where $ddd$ is an already known failed guess. This will establish the first digit of the combination. Without loss of generality, the first digit is now known to be $a$. There are only two possibilities left: $abc$ and $acb$. Take the guess $dbc$. If this results in a fail, then $acb$ is the correct combination; otherwise, $abc$ is the correct combination. Let $M(n)$ denote the minimum number of attempts guaranteed to crack the digit code of length $n\in\mathbb{Z}_{>0}$. Then, we obviously have $M(1)=9$, $M(2)=11$, and $M(3)=13$. Indeed, $$M(n) \geq \max\Big\{\big\lceil n\,\log_2(10-k)\big\rceil +k\,\Big|\,k\in\{0,1,2,\ldots,9\}\Big\}=:m(n)\,.$$ We see that $m(1)=9$, $m(2)=11$, $m(3)=12$, $m(4)= 15$, $m(5)= 18$, $m(6)= 21$, $m(7)= 24$, and for $n\geq 8$, $$m(n)= \big\lceil n\,\log_2(10)\big\rceil\,.$$ I believe that the inequality $M(n)\geq m(n)$ is not sharp for all $n\geq 4$. I struggled to find, for example, a $15$-step strategy for the case where $n=4$, and I conjecture that $$M(4)=16\,.$$
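A quick computation of the lower bound $m(n)$ quoted at the end (a Python sketch; $k$ guesses that are guaranteed to fail shrink the alphabet to $10-k$, and the binary Fail/Close responses then need $\lceil n\log_2(10-k)\rceil$ more rounds):

```python
from math import ceil, log2

def m(n):
    return max(ceil(n * log2(10 - k)) + k for k in range(10))

print([m(n) for n in range(1, 9)])   # [9, 11, 12, 15, 18, 21, 24, 27]
```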
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Ellipse in a Rectangle What is the equation for an ellipse (or rather, family of ellipses) which has as its tangents the lines forming the rectangle $$x=\pm a, y=\pm b\;\; (a,b>0)$$? This question is a modification/extension of this other question here posted recently.
By exploiting the affine map $(x,y)\mapsto\left(\frac{x}{a},\frac{y}{b}\right)$ the question boils down to finding the family of ellipses inscribed in a square with vertices at $(\pm 1,\pm 1)$. In an ellipse the line joining the midpoints of parallel chords always goes through the center. Additionally, the orthoptic curve of an ellipse is the director circle. It follows that all the ellipses that are tangent to the sides of the previous square fulfill $A^2+B^2=2$, where $A$ and $B$ denote their semi-axes, are centered at the origin, and are symmetric with respect to the diagonals of the square. Here is a straightedge-and-compass construction. * *Take some point $P$ on the perimeter of the square and by reflecting it with respect to the center and the diagonals of the square construct the rectangle $PQRS$; *Consider the tangents at $P,Q,R,S$ to the circumcircle of $PQRS$; *Two intersections of such tangents are the vertices of the inscribed ellipse through $P$, which is then simple to draw.
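For a concrete handle on the family, here is one convenient parametrization (my own addition, not part of the construction above): $x = a\cos\theta$, $y = b\cos(\theta - t)$ with a parameter $t\in(0,\pi)$ traces an ellipse inscribed in the rectangle, and in the unit-square picture its semi-axes lie along the diagonals with $A^2+B^2=2$, matching the director-circle condition. A numeric check:

```python
import numpy as np

a, b, t = 3.0, 2.0, 1.0                      # rectangle half-widths; t is arbitrary
th = np.linspace(0, 2 * np.pi, 200_001)
x, y = a * np.cos(th), b * np.cos(th - t)    # candidate inscribed ellipse

print(x.max(), x.min(), y.max(), y.min())    # reaches +-a and +-b, never beyond
A2, B2 = 1 + np.cos(t), 1 - np.cos(t)        # squared semi-axes in the unit square
print(A2 + B2)                               # 2.0
```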
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Proof using Induction? Any thoughts on how to prove the following proposition If $x_1>0$ and $a>0$ and $x_{ n+1 }=\frac { 1 }{ 2 } \left( x_{ n }+\frac { a^{ 2 } }{ x_{ n } } \right) $ then $a\leq x_{n+1}\leq x_n$.
hint $$x_{n+1}-a=\frac {(x_n-a)^2 }{2x_n}$$ $$x_{n+1}-x_n=\frac {a^2-x_n^2}{2x_n} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
What's the probability of getting 1000 heads in a row? I'm reading The Master Algorithm by Pedro Domingos and I'm having a hard time understanding something he wrote on page 74: "If you do a million runs of a thousand coin flips, it's practically certain that at least one run will come up all heads." My intuition tells me this is false. My understanding of probability would indicate that the chance of encountering $1000$ heads in a row after trying $1000000$ times is: $$\frac{1}{2^{1000}} *1000000$$ which is minuscule and hardly "practically certain." Is my understanding correct, or am I missing something?
The chances that flipping 1000 coins gives all heads is given by $\frac{1}{2^{1000}}$ as you predicted. However, the odds that this will happen, given that you try it $1000000$ times is: $$1 - \left(1 - \frac{1}{2^{1000}}\right)^{1000000}$$ The certainty of which, I'm not too sure about. My calculator can't seem to handle it. EDIT: Seems like the probability is still negligible. Author is wrong on this account. However, the point being made is that adding on extra trials causes an exponential growth of probability, making impossible things happen if you're willing to run enough trials.
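The number is easy to pin down with logarithms (a sketch; it uses the fact that $1-(1-p)^N \approx Np$ when $Np \ll 1$):

```python
from math import log10

log10_p_run = -1000 * log10(2)        # log10 of 2**-1000
log10_p_any = 6 + log10_p_run         # one million runs: multiply by 10**6
print(log10_p_run, log10_p_any)       # ~ -301.03 and ~ -295.03
# So P(at least one all-heads run) ~ 9.3e-296: utterly negligible.
```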
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 0 }
Prove the series $\sum_{n=1}^{\infty}\frac{(-1)^n}{\ln{n}+\sin{n}}$ converges How can one prove that this series $$\sum_{n=1}^{\infty}\frac{(-1)^n}{\ln{n}+\sin{n}}$$ converges? I can't do a comparison test with the Leibniz formula for $\pi$ because the terms are not $>0$ for all $n$. I can't do a ratio test because I can't compute the limit, the alternating series test can't be applied, and the series is not absolutely convergent. I'm out of ideas. Any clues?
Let $$a_n = \frac{(-1)^n}{\sin n + \ln n }$$ We wish to show that $\sum a_n$ converges. In order to do this, first note that $a_n$ is negative when $n$ is odd, and positive when $n$ is even. We will write $\sum a_n < \sum b_i + \sum c_j$, where $b_i$ are negative terms (with only odd indices $i$) that are smaller in absolute value than the negative terms $a_i$, and $c_j$ are positive terms (with only even indices $j$) that are larger in absolute value than the positive terms $a_j$. We want to choose $\{b_i\}$ and $\{c_j\}$ to satisfy the following conditions: * *$\sum b_i$ (sum taken over odd $i$) converges by the alternating series test *$\sum c_j$ (sum taken over even $j$) converges by the alternating series test *$|a_i| > |b_i|$ or equivalently, $a_i < b_i$, for all odd $i$ *$a_j < c_j$ for all even $j$ For $i$ odd, let $$b_i = \frac{-1}{2 + \ln n } > a_i$$ For $j$ even, let $$c_j = \frac{1}{-2 + \ln n } > a_i$$ (I am choosing $2$ and $-2$ here because they are greater than the maximum of $\sin n$ and less than the minimum of $\sin n$, respectively) Note that $b_i$ and $c_j$ are both monotonically decreasing for $i,j > 10$. (I am choosing $10$ here because it is greater than $e^2$, to avoid negative denominators due to $-2 + \ln n$) Therefore, by the alternating series test, we know that the following sums must converge: $$\displaystyle\sum_{i=11,\, i \text{ odd}}^\infty b_i$$ $$\displaystyle\sum_{j=10,\, j \text{ even}}^\infty c_j$$ Therefore, $\sum a_n$ is bounded above by the sum of two convergent series: $$\displaystyle\sum_{n=10}^\infty a_n < \left(\displaystyle\sum_{i=11,\, i \text{ odd}}^\infty b_i \right) + \left(\displaystyle\sum_{j=10,\, j \text{ even}}^\infty c_j\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Show that the derivative of this function is positive Suppose that $n>1$, $g\in(0,1)$ and $f(g)\in(0,1)$. Suppose that $\frac{df(g)}{dg}\geq0$. Define $B(g,n)$ as: $B(g,n)=\sum _{k=1}^n \frac{n!}{k!(n-k)!}g^{n-k}(1-g)^{k-1}(1-(1-f(g))^k)$ Show that: $\frac{dB(g,n)}{dg}>0$. Some of these assumptions can be relaxed (for example the $\frac{df(g)}{dg}\geq0$ I suspect), but I am not especially interested in that. What I have done: * *I have shown that this is the case for $n=2$, $n=3$ and $n=4$, but did not find any pattern that helped me generalize to $n$. *I took the derivative with Mathematica and obtained that little monster: $ \frac{dB(g,n)}{dg}=-\frac{g^n \left(-n \left(\frac{(g-1) f(g)+1}{g}\right)^{n-1} \left(\frac{(g-1) f'(g)+f(g)}{g}-\frac{(g-1) f(g)+1}{g^2}\right)-n \left(\frac{1}{g}\right)^{n+1}\right)}{g-1}-\frac{n g^{n-1} \left(\left(\frac{1}{g}\right)^n-\left(\frac{(g-1) f(g)+1}{g}\right)^n\right)}{g-1}+\frac{g^n \left(\left(\frac{1}{g}\right)^n-\left(\frac{(g-1) f(g)+1}{g}\right)^n\right)}{(g-1)^2}$. *(Here I might be wrong) I think that the problem can be slightly simplified by assuming that $\frac{df(g)}{dg}=0$. Because assuming $\frac{df(g)}{dg}>0$ only "helps us" in proving the that the derivative is positive, then it suffices to show that our desired result holds when $\frac{df(g)}{dg}=0$. Thanks in advance!
This is not a full solution, but some possibly helpful thoughts too long to fit a comment. Notice that $$B(g,n) = \sum _{k=1} ^n \frac {n!} {k!(n-k)!} g^{n-k} (1-g)^{k-1} - \sum _{k=1} ^n \frac {n!} {k!(n-k)!} g^{n-k} (1-g)^{k-1} (1-f(g))^k = \\ \frac 1 {1-g} \sum _{k=1} ^n \frac {n!} {k!(n-k)!} g^{n-k} (1-g)^k - \frac 1 {1-g} \sum _{k=1} ^n \frac {n!} {k!(n-k)!} g^{n-k} (1-g)^k (1-f(g))^k = \\ \frac 1 {1-g} \{ [g + (1-g)]^n - g^n \} - \frac 1 {1-g} \{ [g + (1-g)(1-f(g))]^n - g^n \} = \\ \frac 1 {g-1} \{ [g - (g-1)(1-f(g))]^n - 1 \} \ ,$$ whence $$\frac {\Bbb d B(g,n)} {\Bbb d g} = \frac { \{n [g - (g-1)(1-f(g))]^{n-1} [1 - (1-f(g)) + (g-1) f'(g)] \} (g-1)} {(g-1)^2} - \frac {\{ [g - (g-1)(1-f(g))]^n - 1 \}} {(g-1)^2} = \\ \frac { \{n [g - (g-1)(1-f(g))]^{n-1} [f(g) + (g-1) f'(g)] \} (g-1)} {(g-1)^2} - \frac {\{ [g - (g-1)(1-f(g))]^n - 1 \}} {(g-1)^2} \ .$$ Since the denominator is positive, you have then to show that $$\{n [g - (g-1)(1-f(g))]^{n-1} [f(g) + (g-1) f'(g)] \} (g-1) - \{ [g - (g-1)(1-f(g))]^n - 1 \} \ge 0 \ .$$ Notice that $$g - (g-1)(1-f(g)) = 1 + (g-1)f(g) < 1 \ ,$$ whence it follows that $$- \{ [g - (g-1)(1-f(g))]^n - 1 \} \ge 0 \ .$$ and that $$n [g - (g-1)(1-f(g))]^{n-1} \to 0 \ ,$$ so that $\lim \limits _{n \to \infty} \dfrac {\Bbb d B} {\Bbb d g} \ge 0$. Unfortunately, it is not clear whether this limit is uniform. It also follows that a sufficient (but not necessary) condition to get the desired conclusion is $f + (g-1)f' \le 0$.
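A numeric check of the closed form and of the monotonicity claim (a sketch; the increasing $f(g)=0.3+0.5g$ is a made-up choice satisfying the hypotheses):

```python
import numpy as np
from math import comb

def B_sum(g, n, f):
    return sum(comb(n, k) * g**(n - k) * (1 - g)**(k - 1) * (1 - (1 - f)**k)
               for k in range(1, n + 1))

def B_closed(g, n, f):
    return ((g - (g - 1) * (1 - f))**n - 1) / (g - 1)

f = lambda g: 0.3 + 0.5 * g
for n in (2, 5, 9):
    gs = np.linspace(0.01, 0.99, 500)
    vals = np.array([B_closed(g, n, f(g)) for g in gs])
    assert np.allclose(vals, [B_sum(g, n, f(g)) for g in gs])
    assert np.all(np.diff(vals) > 0)          # B is strictly increasing in g here
print("closed form matches the sum; B increasing on the sample grid")
```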
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Why doesn't $[a]^{[b]} = [a^b]$? When we construct the modular numbers $\Bbb Z/n\Bbb Z$ so that addition works as $[a]+[b]=[a+b]$ and multiplication works as $[a][b]=[ab]$, we get for free that $a[b]=[ab]$ also works. Why can't we also define a sense of exponentiation $[a]^{[b]}$ as $[a^b]$? We do have that $[a]^b = [a^b]$ because $$[a]^b = [a]\cdots [a] = [a\cdots a] = [a^b]$$ but what is different about exponentiation vs addition and multiplication that we can't define $[a]^{[b]} = [a^b]$? Example where the formula doesn't work: Let our modular system be $\Bbb Z/4\Bbb Z$, then $$[5]=[1], \text{ but} \\ [2]^{[5]} = [2^5] = [0] \ne [2] = [2^1] = [2]^{[1]}$$ I'm looking for an intuitive explanation as to why this doesn't work right when the similar formulas for $+$ and $\cdot$ do.
The key notion here is a congruence — "congruent modulo $n$" was constructed to be an equivalence relation that respects addition and multiplication, but need not preserve anything not constructed from them. So, it's not really about addition/multiplication being special at all — it's about the relation being constructed. As an example consider instead the rational numbers modulo $1$; that is, $p \equiv q$ if and only if $p-q$ is an integer. This is a congruence for addition: $[p+q]=[p'+q]$ whenever $[p] = [p']$, but it can't be such for multiplication: e.g. $[1/2] = [3/2]$, but $[1/2 \cdot 1/2] \not\equiv [3/2 \cdot 1/2]$. Similarly, we could consider the equivalence relation on the integers defined by $x \equiv y$ if and only if either $x$ and $y$ are both zero or $xy$ is a nonzero square. This equivalence relation is a congruence for multiplication: $[xy]=[x'y]$ whenever $[x]=[x']$, but it can't possibly respect addition: $[1] \equiv [4]$, but $[1+1] \not\equiv [1+4]$ A note on language: normally the kind of structure is implicit, and we just speak of congruences. e.g. if we're talking about rings, then we simply say "congruence relation" for an equivalence relation that respects addition, negation, and multiplication.
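The failure is easy to see computationally (a small sketch):

```python
n = 4
# [5] = [1] in Z/4Z, yet the proposed exponentiation disagrees on representatives:
print(pow(2, 5, n), pow(2, 1, n))   # 0 vs 2, so [a]^[b] := [a^b] is ill-defined

# Addition and multiplication survive because "congruent mod n" is a congruence:
assert (5 + 3) % n == (1 + 3) % n
assert (5 * 3) % n == (1 * 3) % n
```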
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Calculus Spivak Chapter 2 problem 16(c) The question asks to prove that if $\frac mn \lt \sqrt{2}$, then there is another rational number $\frac {m'}{n'}$ with $\frac mn \lt \frac {m'}{n'} \lt \sqrt{2}$. Intuitively, it's clear that such a number exists, but I don't understand the solution to this problem. It states: let $m_1 = m + 2n$ and $n_1 = m + n$, and choose $m' = m_1 + 2n_1 = 3m + 4n$, and $n' = m_1 + n_1 = 2m + 3n$. Apparently $\frac {(m + 2n)^2}{(m+n)^2} \gt 2$, but can someone explain why and how plugging in those equations for $m'$ and $n'$ ensures that $\frac {m'}{n'}$ lies between $ \frac mn$ and $\sqrt {2}$?
I actually have a slightly different answer to the above, which I think is closer to the book as it relies directly on parts (a) and (b). If anyone spots anything wrong I'd appreciate if you comment below and point out any mistakes: We have proven in part (a) that: $\frac{m^2}{n^2} < 2 \implies \frac{(m + 2n)^2}{(m + n)^2}>2$ or equivalently since $m,n \in \mathbb{N}$ and therefore all terms are positive: $\frac{m}{n} < \sqrt{2} \implies \frac{m + 2n}{m + n}>\sqrt{2}$ Similarly in part (b) we have proven that: $\frac{m}{n} > \sqrt{2} \implies \frac{m + 2n}{m + n}<\sqrt{2}$ Therefore if we start with a ratio $\frac{m}{n} < \sqrt{2}$ we can use (a) to get a ratio $\frac{m + 2n}{m + n}>\sqrt{2}$ and then use (b) in sequence to get a ratio $\frac{3m + 4n}{2m + 3n}<\sqrt{2}$. We just need to prove that $\frac{m}{n} < \frac{3m + 4n}{2m + 3n}$. We have: $ \begin{aligned} \frac{m}{n}<\frac{3m+4n}{2m+3n} &\iff \\ m(2m+3n)<n(3m+4n) &\iff \\ 2m^2 +3mn < 3mn + 4n^2 &\iff \\ 2m^2 < 4n^2 & \iff \\ \frac{m^2}{n^2} < 2 & \iff \\ \frac{m}{n} < \sqrt{2} & \end{aligned} $ Please do let me know if this is a proper solution. I also should note that the idea to apply (a) and (b) in sequence and thus "try" the ratio $\frac{3m+4n}{2m+3n}$ originated from the second item proven in (a) and (b). Specifically we had proven: (a): $\quad \frac{m^2}{n^2} < 2 \implies \frac{(m + 2n)^2}{(m+n)^2} - 2 < 2 - \frac{m^2}{n^2}$ (b): $\quad \frac{m^2}{n^2} > 2 \implies \frac{(m + 2n)^2}{(m+n)^2} - 2 > 2 - \frac{m^2}{n^2}$ Notice that the expressions in the inequalities to the right of the "implies" sign can be considered as "measures of the distance". For example $\frac{(m + 2n)^2}{(m+n)^2} - 2$ is a measure of the distance of point $\frac{m+2n}{m+n}$ from the point $\sqrt{2}$. The same can be said of $2 - \frac{m^2}{n^2}$ which can be considered as a measure of the distance of point $\frac{m}{n}$ from point $\sqrt{2}$. This means that starting from any ratio $r_1=\frac{m}{n}<\sqrt{2}$ and creating a series of ratios $r_2=\frac{m+2n}{m+n}$, $r_3=\frac{3m+4n}{2m+3n}$, ... you would be "hopping" from one side of $\sqrt{2}$ to the other (on the number line). Hopping to the right of it will take you a bit closer to it, then hopping to the left in succession will get you slightly further away. In part (c) I think we are essentially proving that the decrease when hopping to the right is more than the increase when hopping back to the left, and thus we are getting "ever closer" to the point $\sqrt{2}$ with every pair of successive "hops".
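A quick numeric illustration of the resulting sequence of rationals (a sketch; exact arithmetic via fractions avoids any floating-point doubt about the strict inequalities):

```python
from fractions import Fraction

r = Fraction(1)                    # any rational below sqrt(2)
for _ in range(6):
    m, n = r.numerator, r.denominator
    assert m * m < 2 * n * n       # r < sqrt(2), checked exactly
    r = Fraction(3 * m + 4 * n, 2 * m + 3 * n)
    print(r, float(r))             # increases toward sqrt(2) ~ 1.41421356...
```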
{ "language": "en", "url": "https://math.stackexchange.com/questions/2308909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
$(T^* T)^{-1}$ is self adjoint Suppose $U$, $V$ are finite-dimensional inner product spaces, and $T \in \mathcal{L}(U, V )$ is invertible. I need to show that $T^* T \in \mathcal{L}(V)$ is positive definite and invertible. Also show that the inverse of $T^* T$ is self-adjoint and positive definite. For the first part I think I'm on the right track but not sure, since $T$ is invertible, I said $T^*$ is also invertible and as a result $T^* T$ is invertible. Further as $\|T^*(u)\|^2$ is greater than or equal to zero for all values of $u$, and we know that the null space of $T^* T$ is $\{0\}$, we can say that $\|T^*(u)\|^2$ is greater than zero for all non-zero values of $u$, and thus we have $\langle T^*(u),T^*(u)\rangle$ is greater than zero which can further be simplified to say $\langle TT^*(u),u \rangle$ is greater than zero i.e $T^*T$ is positive semi definite. That's what I did for the first part, pretty lost on the second part.
Let $S = (T^{\ast}T)^{-1}$. For any $v\in V$, let $u = S(v)$; then $$ 0\leq \|Tu\|^2 = \langle T^{\ast}Tu,u\rangle = \langle v,S(v)\rangle $$ so $S$ is positive semidefinite. Since it is invertible, it must be positive definite. Moreover, $S$ is self-adjoint: $T^{\ast}T$ is self-adjoint because $(T^{\ast}T)^{\ast} = T^{\ast}T^{\ast\ast} = T^{\ast}T$, and the inverse of an invertible self-adjoint operator is again self-adjoint, since $$ S^{\ast} = \big((T^{\ast}T)^{-1}\big)^{\ast} = \big((T^{\ast}T)^{\ast}\big)^{-1} = (T^{\ast}T)^{-1} = S\,. $$ In particular, all the eigenvalues of $S$ are real (in fact, positive reals).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a limit using the epsilon delta definition in C I am trying to prove that the $$\lim_{z \rightarrow z_0} z^2+c = z_0^2+c,$$ where $c$ is some complex constant. I was taught that typically it is a good idea to begin from the end of the proof and work backwards. Therefore, I will start with $$|z^2-z^2_0| < \epsilon$$ $$\Rightarrow |z+z_0|\cdot|z-z_0| < \epsilon.$$ Now, if $|z-z_0|<|z+z_0|$, then it follows that $$|z-z_0|^2<|z-z_0| \cdot |z+z_0| < \epsilon$$ $$\Rightarrow |z-z_0| < \sqrt\epsilon.$$ In that case, we would be able to set $\delta = \sqrt\epsilon$, and the proof would follow as such: $$$$ Suppose $\epsilon > 0$. We will define $\delta = \sqrt\epsilon$. Since $\epsilon > 0$, it follows that $\delta > 0$. Now, $\forall z \in \mathbb{C}$, the expression $$0 < |z-z_0| < \delta$$ implies $$0 < |z-z_0| < \sqrt\epsilon.$$ It then follows $$0 < |z+z_0| \cdot |z-z_0| < |z+z_0| \cdot \sqrt\epsilon.$$ $$$$ I now find myself in another predicament because I feel like I need to make the assumption that $|z+z_0|<|z-z_0|$ in order to continue with the proof. By making this assumption, I will then know that $|z+z_0| < \sqrt\epsilon$. $$\Rightarrow |z^2-z_0^2| < |z+z_0| \cdot \sqrt\epsilon < \sqrt\epsilon \cdot \sqrt\epsilon = \epsilon$$ $$\Rightarrow |z^2-z_0^2| < \epsilon.$$ $$$$ This proof seems to depend on the assumption that $|z+z_0| < |z-z_0|$, which confuses me because when working backwards, I made the assumption that $|z+z_0| > |z-z_0|$. However, I don't believe that irregularity poses a threat to the proof itself. What does ruin the proof is that gaping assumption that $|z+z_0| < |z-z_0|$. I have tried doing the proof by cases, where I first assume that $|z+z_0| < |z-z_0|$, then I assume that $|z+z_0| \geq |z-z_0|$. I will now show the progress I have made in the second case, which is close to none. $$$$ Suppose $\epsilon > 0$. We will define $\delta = \sqrt\epsilon$. Since $\epsilon > 0$, it follows that $\delta > 0$. Now, $\forall z \in \mathbb{C}$, the expression $$0 < |z-z_0| < \delta$$ implies $$0 < |z-z_0| < \sqrt\epsilon.$$ It then follows $$0 < |z+z_0| \cdot |z-z_0| < |z+z_0| \cdot \sqrt\epsilon.$$ $$$$ That's about as far as I've figured. However, what I have found is that if we assume that $$0 < |z-z_0| < \sqrt\epsilon$$ which we have been, it follows that $$(x-x_0)^2+(y-y_0)^2 < \epsilon,$$ by definition of the definition of the modulus ($|z-z_0| = \sqrt{(x-x_0)^2+(y-y_0)^2}$), where $z=x+iy$ and $z_0=x_0+iy_0$. Then, $$x^2-2xx_0+x_0^2+y^2-2yy_0+y_0^2 < \epsilon$$ $$\Rightarrow x^2+2xx_0+x_0^2+y^2+2yy_0+y_0^2 < \epsilon + 4xx_0 + 4yy_0$$ $$\Rightarrow |z+z_0| < \sqrt{\epsilon + 4xx_0 + 4yy_0}.$$ $$$$ I don't really see this helping either however. Thanks for the help!
You want to show that you can find a $\delta > 0$ such that for all $z \in U_{\delta}(z_0) := \{z \in \mathbb{C}: |z-z_0| < \delta \}$, you have that $z^2 + c \in U_{\epsilon}(z_0^2 + c)$, i.e. $|z^2 - z_0^2| < \epsilon$. Now for any $\delta > 0$, there exists a constant $C_\delta$ such that $|z+z_0| \leq C_{\delta}$ for all $z \in U_{\delta}(z_0)$: $$|z+z_0| \leq |z| + |z_0| \leq 2|z_0| + \delta =: C_{\delta}, \quad z \in U_{\delta}(z_0).$$ Note that $C_\delta$ decreases as $\delta$ decreases, which is good. We see that for all $\delta < 1$, we have $|z+z_0| < C := 2|z_0| + 1$ for all $z \in U_{\delta}(z_0)$. Therefore, if $\delta < \text{min}(1,\epsilon/C)$ we find that for $z \in U_{\delta}(z_0)$ we have $$ |z^2 - z_0^2| = |z+z_0||z-z_0| \leq C|z-z_0| < C\,\delta < \epsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Class $K^{\infty}$ is elementary provided that $K$ is. Let $K$ be an elementary class of $\Sigma$-structures. Show that the class $K^{\infty}$ formed by the structures of $K$ with infinite domain is also elementary. I'm really clueless about this problem. I know that $K$ is elementary if there is some $\Phi \subseteq \mathrm{Sent}_{\Sigma}$ such that $K=\mathcal{Mod}(\Phi)$, that is $$K= \{ \mathcal{A} \mid \mathcal{A} \models \varphi \,\, \forall \varphi \in \Phi\}$$ so $$K^{\infty}=\{ \mathcal{A} \in K \mid |\mathcal{A}|=\infty\}$$ Obviously $K^{\infty}\subseteq K$, but I don't know what strategy to follow. Any hint would be highly appreciated. Thank you.
Suppose $\Phi$ is a set of sentences defining $K$ - that is, $K=\{\mathcal{A}: \mathcal{A}\models\Phi\}$. Now $K^\infty\subseteq K$ as you observe, so you want to find a bigger set of sentences $\Psi\supseteq \Phi$ such that $K^\infty=\{\mathcal{A}: \mathcal{A}\models\Psi\}$. (Why bigger? Well, satisfying more sentences is a stronger constraint, and there are fewer things satisfying it.) That means we want to add some sentences to $\Phi$. Now the obvious sentence to add is "the domain is infinite"; the structures satisfying $\Phi$ together with "the domain is infinite" are exactly the infinite models of $\Phi$, that is, the elements of $K^\infty$! So we're done, right? Just set $\Psi=\Phi\cup\{$"the domain is infinite"$\}$. But this doesn't work, since "the domain is infinite" isn't a first-order sentence. So instead, we need to somehow express "the domain is infinite" in a first-order way. On the plus side, we don't have to do this with a single sentence, we're allowed to add a bunch of sentences. Which leaves us with: Can you think of a family of sentences $\Theta=\{\theta_i: i\in\mathbb{N}\}$ such that the models of $\Theta$ are exactly the infinite structures? HINT: It's enough for the models of $\theta_i$ to be the structures with size at least $i$, for every $i\in\mathbb{N}$ ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ideal in the ring of symmetric polynomials I need to find generators of the ideal $J$ in the ring of symmetric polynomials of 3 variables $(x_1, x_2, x_3)$ such that $\forall j \in J: x_1 = x_2 \Rightarrow j=0$. It is clear that the polynomials $(x_1-x_2)(x_2-x_1)$ and $(x_1 - x_2)^2$ are generators. So are these polynomials enough to generate the whole of $J$, and if yes, how can I prove it? Thanks!
Take a symmetric function $f(x_1, x_2, x_3)$. View it as an element of $\mathbf{Q}(x_2,x_3)[x_1]$. For polynomials over a field, we know that $g(t) \in k[t]$ has a root at $a$ if and only if $(t - a) \mid g$. Thus since $f(x_2,x_2,x_3) = 0$ it must be that $(x_2 - x_1) \mid f$ in $\mathbf{Q}(x_2,x_3)[x_1]$. By Gauss's Lemma, we can say that $(x_2 - x_1) \mid f$ in $\mathbf{Q}[x_1,x_2,x_3]$ (or $\mathbf{Z}[x_1,x_2,x_3]$ if $f$ had integer coefficients). Therefore write $f = (x_1 - x_2)f_1$. If we now apply the permutation $(1,2)$ we get $f = (x_2 - x_1)\cdot (1,2)f_1$. Hence $(1,2)f_1 = -f_1$. But from this we know that $f_1$ must be zero if $x_1 = x_2$. Therefore $f = (x_1 - x_2)^2f_2$. Thanks to @darij grinberg for pointing this out: notice that $(x_1 - x_2)^2$ is not symmetric because it isn't invariant under switching $x_1$ and $x_3$ or $x_2$ and $x_3$. Notice that actually the ideal $J$ is the set of polynomials that vanish when $x_1 = x_2$ or $x_1 = x_3$ or $x_2 = x_3$ by symmetry. Thus using the same logic as above, if $f \in J$ then $f = (x_1 - x_2)^2(x_1 - x_3)^2(x_2 - x_3)^2f_3$. So $J$ is generated by $(x_1 - x_2)^2(x_1 - x_3)^2(x_2 - x_3)^2$.
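A small sympy check of the two key properties of the claimed generator, symmetry and vanishing on the diagonals (a sketch):

```python
from itertools import permutations
from sympy import symbols, expand

x1, x2, x3 = xs = symbols('x1 x2 x3')
G = ((x1 - x2) * (x1 - x3) * (x2 - x3))**2   # the claimed generator of J

for p in permutations(xs):                   # invariant under all 6 permutations
    assert expand(G.subs(dict(zip(xs, p)), simultaneous=True) - G) == 0
assert G.subs(x1, x2) == 0                   # vanishes when two variables coincide
print("G is symmetric and lies in J")
```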
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f'(x)\le g'(x)$, prove $f(x)\le g(x)$ I have to do the following exercise: Let $f$ and $g$ two differentiable functions such that $f(0)=g(0)$ and $f'(x)\leq g'(x)$ for all $x$ in $\mathbb{R}$. Prove that $f(x)\leq g(x)$ for any $x\geq0$. Now, I know this is true because the first derivative of a function is the angular coefficient of the function in a point $x$. So, $f'(x)\leq g'(x)$ means, in other words, that the function $g(x)$ grows faster than $f(x)$. I think this is the base for a more formal proof, could someone help me to figure out a more formal proof?
For $x \ge 0$, we have $f(x) = f(0) + \displaystyle \int_0^x f'(s) ds = g(0) + \displaystyle \int_0^x f'(s) ds, \tag{1}$ since $f(0) = g(0)$; and since $f'(x) \le g'(x)$, we have $\displaystyle \int_0^x f'(s) ds \le \displaystyle \int_0^x g'(s) ds; \tag{2}$ thus, $f(x) = g(0) + \displaystyle \int_0^x f'(s) ds \le g(0) + \displaystyle \int_0^x g'(s) ds = g(x). \tag{3}$ NB: Note added in edit: I think it is only fair to admit this is sort of a "low-tech" argument, since it does in fact assume $f, g \in C^1$. Something suitable for first-year calculus. Rigel's answer above seems to indicate a more sophisticated view of the matter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What does it mean, that "The coordinates of $a_n$ in the ... canonical base are ...? I would like to understand an article, which explains vectorial products in $n$ dimensions. As far as I know, the basis elements are vectors. If I multiply them with scalars and add them together, then I can produce all the vectors in that vector space. But in this article, the basis vectors are used as entries in a matrix. I don't understand how that is possible. Canonical base is maybe something else. I've found a Wikipedia article about it, but it says that "it refers to the standard basis" The relevant part of the article, which I can't understand: There are given $n-1$ vectors $ a_i = (a_{i1}, \dots, a_{in}), i=1,\dots,n-1 $ in $ \mathbb{R}^n $. We are looking for an $a_n \in \mathbb{R}^n $ which is perpendicular to all the others. Let the coordinates of $a_n$ in the $e_1, \dots, e_n$ canonical base be $b_1, \dots, b_n$ In this case, it is clear that: $$ a_n = \begin{vmatrix} e_1 & e_2 & \cdots & e_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n} \end{vmatrix} = b_1 e_1 + \dots + b_n e_n $$ Note: I can not link the article, because it is not in English.
The determinant defining vector $a_n$ is simply an abuse of notation. It means that $a_n$ may be expressed as the cofactor expansion of the determinant of the "matrix" $$ A = \left( \begin{matrix} e_1 & e_2 & \cdots & e_n \\ a_{11} & a_{12} & \cdots & a_{1n} \\ \cdots & \cdots & \cdots & \cdots \\ a_{n-1,1} & a_{n-1,2} & \cdots & a_{n-1,n} \end{matrix} \right) $$ along the first row, where the $e_i$ are vectors and the $a_{i,j}$ coefficients. Thus, if we denote the cofactor matrix of $A$ by $\text{C}(A)$ and we write $a_n = b_1e_1 + \cdots + b_{n}e_{n}$ where $b_i$ are the real numbers defined as $$b_i := \text{C}(A)_{1,i} = \text{the (1, $i$)-th coefficient of the matrix $C(A)$}$$ then it follows from the Laplace expansion theorem that $$a_n\cdot a_i = 0 \quad \text{ if } \quad i \neq n $$ and $a_n$ is perpendicular to the $a_i$.
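Here is the construction in code (a sketch; the function name and the sample vectors are mine): the cofactors along the symbolic first row give the coordinates $b_i$, and the resulting vector is orthogonal to every $a_i$.

```python
import numpy as np

def cross_n(vectors):
    """Generalized cross product of n-1 vectors in R^n via first-row cofactors."""
    A = np.asarray(vectors, dtype=float)       # shape (n-1, n)
    b = np.empty(A.shape[1])
    for i in range(A.shape[1]):
        minor = np.delete(A, i, axis=1)        # remove column i
        b[i] = (-1)**i * np.linalg.det(minor)  # signed cofactor of e_{i+1}
    return b

a = [[1., 2., 0., 1.], [0., 1., 3., 2.], [2., 0., 1., 1.]]   # example vectors in R^4
v = cross_n(a)
print(np.asarray(a) @ v)   # ~ [0, 0, 0]: v is perpendicular to all of them
```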
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to find a 3x3 matrix with determinant =0 from which I can delete random column and random row to make it nonzero? I need to find a $3 \times 3$ matrix whose determinant is $0$. I also need to be able to delete a randomly chosen column and a randomly chosen row and have the determinant of the remaining $2\times 2$ matrix be nonzero. Is it even possible? Thank you.
The matrix $$\begin{pmatrix}1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9\end{pmatrix}$$works just fine: * *the columns are in arithmetic progression, which means that the middle column is the arithmetic mean of the extremal columns, and thus they are not independent. *for any $2\times 2$ submatrix, the difference of the columns is a multiple of $\left(\begin{smallmatrix}1\\ 1 \end{smallmatrix}\right)$, but neither column is, so the columns must generate a space of dimension $2$, hence they are linearly independent.
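An exhaustive check of all nine $2\times2$ minors (a sketch):

```python
import numpy as np
from itertools import combinations

M = np.arange(1, 10).reshape(3, 3).astype(float)
print(np.linalg.det(M))                         # ~0: the matrix is singular
for rows in combinations(range(3), 2):
    for cols in combinations(range(3), 2):
        assert abs(np.linalg.det(M[np.ix_(rows, cols)])) > 1e-9
print("every 2x2 minor is nonzero")
```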
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
How many $4$-letter strings either start with c or end with two vowels? If the alphabet consists of $\{a,b,c,d,e,f\}$, how many four letter strings either start with c or end with two vowels? My reasoning was as follows: Starts with c but does not have vowel in third position + starts with c but does not have a vowel in fourth position + does not start with c but has vowels in the last two positions $$= 1 \cdot 6 \cdot 4 \cdot 6 + 1 \cdot 6 \cdot 6 \cdot 4 + 5 \cdot 6 \cdot 2 \cdot 2 = 408$$ Is my reasoning correct?
Easier to do inclusion-exclusion. Starts with C + ends with two vowels - (starts with C and ends with two vowels) $1*6*6*6 + 6*6*2*2 - 1*6*2*2 = 336$ You were okay in outline, but two things went wrong: you counted the strings that start with c and have no vowel in either the 3rd or the 4th position twice ($1 \cdot 6 \cdot 4 \cdot 4 = 96$ of them), and you left out the strings that start with c and end with two vowels ($1 \cdot 6 \cdot 2 \cdot 2 = 24$ of them); indeed $408 - 96 + 24 = 336$. Because those things have a way of sneaking in, and if this got even the tiniest bit more complicated you'd have many, many things to keep track of, I prefer the inclusion-exclusion method.
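A brute-force enumeration confirms the count (a sketch; vowels taken to be $a$ and $e$):

```python
from itertools import product

letters, vowels = "abcdef", "ae"
count = sum(1 for w in product(letters, repeat=4)
            if w[0] == "c" or (w[2] in vowels and w[3] in vowels))
print(count)   # 336
```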
{ "language": "en", "url": "https://math.stackexchange.com/questions/2309888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why is an open upper half plane not homeomorphic to an infinite band together with its boundary? Why is an open upper half plane not homeomorphic to an infinite band together with its boundary?
It is due to the fact that any homeomorphism maps the boundary of one onto the boundary of the other. The boundary of the range, which is the infinite band, is not connected, but the domain, i.e. the upper half plane, has a connected boundary. A homeomorphism is a continuous map, and images of connected sets under continuous maps are connected. And so what can you conclude?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is T one-to-one iff T is onto? In Friedberg's Linear Algebra, the following proof that T is one-to-one if and only if T is onto is presented. Here, $T:V\rightarrow W$ is linear. "We have that T is one-to-one if and only if N(T)=0, if and only if nullity(T) = 0, if and only if rank(T) = dim(V), if and only if rank(T) = dim(W), and if and only if dim(R(T))=dim(W)." My question is: how does it follow that rank(T) = dim(W) from knowing that rank(T) = dim(V)?
This holds only if V and W have equal (finite) dimensions. From the rank-nullity theorem one has: $nullity(T)+rank(T) = dim(V)$ Since $T$ is injective, $nullity (T)=0$. This implies $rank (T)=dim (V)=dim (W) $, which gives surjectivity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can we prove that there does not exist any metric space with $4,5,6$ dense subsets? Let $(X,d)$ be a metric space. Which of the following is possible? (A) $X$ has exactly $3$ dense subsets. (B) $X$ has exactly $4$ dense subsets. (C) $X$ has exactly $5$ dense subsets. (D) $X$ has exactly $6$ dense subsets. I think the answer is (A), since if we consider $\mathbb R$ with the usual metric then $\mathbb R$ contains three dense subsets: $\mathbb Q$, $\mathbb Q^c$ and $\mathbb R$ itself. Am I correct? Are there other metric spaces which make (B), (C), (D) true? If not, how can we prove that there does not exist any metric space with $4$, $5$, or $6$ dense subsets?
Hint: Consider the set $X = A\cup D$ with $A=\{0\}$ and $D=\left\{\frac1n: n\geq 1\right\}$ equipped with the metric from ${\Bbb R}$. Show that the only dense subsets are $A\cup D$ and $D$ (so there are precisely two dense subsets). Now, this example is constructed using a discrete set $D$ with precisely one accumulation point $0$. Try now to construct sets using a discrete set and having 2, 3,... accumulation points and draw conclusions from this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to solve this absolute value equation I have the following integral $$\int_0^{\pi/2} |\sin x-\cos x|dx. $$It's a simple integral but when I try to resolve the absolute value I get stuck. I took $\sin x-\cos x>0$ and squaring this I found that $\sin 2x<1$. When I apply $\arcsin$ it would mean $$2x<\frac\pi2 \implies x<\frac\pi4$$ but the interval differs from the one in my book. When I apply $\arcsin$ does the sign change? Why?
\begin{align} \int_0^{\pi/2} |\sin(x)-\cos(x)|\,dx &= \int_0^{\pi/4} |\sin(x)-\cos(x)|\,dx+\int_{\pi/4}^{\pi/2} |\sin(x)-\cos(x)|\,dx\\ &=-\int_0^{\pi/4} (\sin(x)-\cos(x))\,dx+\int_{\pi/4}^{\pi/2} (\sin(x)-\cos(x))\,dx\\ &=\big[\cos(x)+\sin(x)\big]_0^{\pi/4}+\big[-\cos(x)-\sin(x)\big]_{\pi/4}^{\pi/2}\\ &=(\sqrt2-1)+(\sqrt2-1)\\ &=2\sqrt2-2 \end{align}
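A quick numeric confirmation of the value (a sketch using a midpoint sum):

```python
from math import sin, cos, pi, sqrt

N = 1_000_000
h = (pi / 2) / N
approx = h * sum(abs(sin((k + 0.5) * h) - cos((k + 0.5) * h)) for k in range(N))
print(approx, 2 * sqrt(2) - 2)   # both ~ 0.82843
```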
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Help with finding side of a triangle I have a right angled triangle question and have through my working figured out that the angles are $90^\circ$, $67^\circ$, and $23^\circ$. I have one side opposite $23^\circ$ which is $13$cm. I need to find the side length opposite the right angle. I've tried the sine rule and I get a negative number. $$\frac{\sin 23}{13} = \frac{\sin 90}{CB}$$ Through rearranging (possibly incorrectly?) $$CB = \frac{\sin 90}{\ \frac{\sin 23}{13}\ } \approx -19$$ when typed in my calculator. What am I doing wrong? Any help is much appreciated; sorry if the question is very basic.
This is a more complete solution to the ones mentioned above: I think that a more straightforward way would be to use the sine here. As @zubzub mentioned, the side opposite a right angle is a hypotenuse (by definition). If we call the hypotenuse $h$, since sine = opposite/hypotenuse, $\sin 23º = 13/h$, so by rearrangement $(\sin 23º) (h) = 13$, or $h = 13/ \sin 23º$. In future problems like this, keep 3 things in mind: 1) Make sure you know which units you are using. Degrees often have whole numbers and are much larger (from 0 to 360) than radians, which are often in multiples of $\pi$ (such as $5/3 \pi$). 2) Use the appropriate rule: the sine rule and cosine rule apply for non-right angled triangles, while for right angled triangles the basic trigonometric functions (sine, cosine, tangent and their inverses) are more practical, since they can be computed by definition. 3) Make sure to check your order of operations: $\sin(23/13)$ is a very different thing to $\sin(23)/13$! Check your calculator to see whether it follows the order of operations or not (most scientific calculators do), and add brackets in the right places when you are not sure whether to place them.
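The negative number is almost certainly the degree/radian issue from point 1 (a small sketch illustrating it):

```python
from math import sin, radians

print(13 / sin(23))            # ~ -15.4: sin evaluated at 23 *radians*, nonsense here
print(13 / sin(radians(23)))   # ~ 33.3: the hypotenuse in cm, as expected
```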
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Set with sum of elements equal to sum of squares The question I'm attempting to solve is For a set $S$ such that $\sum S_i = \sum S_i^2$, which is greater: $\sum S_i^3$ or $\sum S_i^4$? My first thought was that $$\sum S_i - \sum S_i^2 = 0 \implies \sum S_i(1-S_i) = 0$$ hints at some kind of symmetry which may be leveraged, since the question can be rephrased as proving that $\sum S_i^3(1 - S_i)$ is less than, or greater than, $0$ for all values. Unfortunately I couldn't find a way to proceed from here. I found the parametric solution $$a=\frac{k(k+1)}{k^2+1}$$ and $$b=\frac{k+1}{k^2+1}$$ for $a+b = a^2 + b^2$, and I could manually try some values to see which is larger, but I am looking for a proof. I tried to show $(a^4+b^4)-(a^3+b^3)$ was always on one side of $0$, but I ended up with a polynomial of degree 7, which has odd degree so must cross the x-axis! I'm guessing my work up to this point was wrong - this post is getting a bit long, but I can post if it would be useful. I have the feeling that I am missing a very simple and intuitive proof, but it has eluded me so far.
If the $S_i$ can have any sign nothing can be said (see e.g. the example of Paul, where $\sum S_i^3 < \sum S_i^4$, whereas for $S = \{.5, .5, (1-\sqrt{3})/2\}$ the opposite inequality holds). Hence I am assuming that $S_i \geq 0$ for every $i$. We have that $$ S_i^2 = (S_i)^{1/2} (S_i)^{3/2} \leq \frac{1}{2} S_i + \frac{1}{2}S_i^3 $$ hence $$ \sum S_i^2 \leq \frac{1}{2} \sum S_i + \frac{1}{2} \sum S_i^3. $$ Since $\sum S_i = \sum S_i^2$, it follows that $\sum S_i^3 \geq \sum S_i = \sum S_i^2$. On the other hand $$ S_i^3 = S_i \cdot S_i^2 \leq \frac{1}{2} S_i^2 + \frac{1}{2}S_i^4, $$ so that, using $\sum S_i^3 \geq \sum S_i^2$, $$ \sum S_i^4 \geq 2 \sum S_i^3 - \sum S_i^2 \geq \sum S_i^3. $$
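A quick numerical sanity check of the chain $\sum S_i^2 \le \sum S_i^3 \le \sum S_i^4$ (my own sketch, using the parametric two-element solutions $a=\frac{k(k+1)}{k^2+1}$, $b=\frac{k+1}{k^2+1}$ quoted in the question):

    import numpy as np

    # For each k, (a, b) satisfies a + b = a^2 + b^2 by construction.
    for k in [0.5, 1.0, 2.0, 5.0]:
        a = k * (k + 1) / (k**2 + 1)
        b = (k + 1) / (k**2 + 1)
        s = np.array([a, b])
        assert abs(s.sum() - (s**2).sum()) < 1e-12   # the constraint holds
        print(k, (s**2).sum(), (s**3).sum(), (s**4).sum())  # nondecreasing in each row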
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show that if $AB=BA=0$ then $rank(A+B)=rank(A)+rank(B)$ We are supposed to show that if $AB=BA=0$ then $rank(A+B)=rank(A)+rank(B)$ Maybe these facts help to answer the question: $rank(A+B)\le rank(A)+rank(B)$ and $rank(B)=\dim(Im(B))$. But I got stuck when trying to prove that $rank(A+B)\ge rank(A)+rank(B)$
Take $$ A=B=\begin{pmatrix} 0 & 1 \cr 0 & 0 \end{pmatrix}. $$ Then $AB=BA=0$, but $$ 1=rank(A+B)\neq rank(A)+rank(B)=2. $$ So Daniel is right.
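To confirm the counterexample numerically (a minimal sketch of my own):

    import numpy as np

    A = np.array([[0, 1],
                  [0, 0]])
    B = A  # take B equal to A

    print(A @ B)                          # the zero matrix, so AB = BA = 0
    print(np.linalg.matrix_rank(A + B))   # 1
    print(np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B))  # 2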
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
recurrence relation for loans You borrow $4000$ dollars, at $12$ percent compounded monthly, to buy a car. If the loan is to be paid back over two years, what is the monthly payment? Note you pay back the same amount every month. Use a recurrence relation (for loans) to solve the problem. What I have tried so far is $P(A|P, i\%, N)$ This I substitute to $4000(1\%, 24)$ and the table of discrete compounds gives me $(A|P , 1\%,24) = 0.0471$ $4000(0.0471) = 188.4$ with $a_0 = 4000$ and $a_n= a_{n-1} - 188.4$ gives $a_{24}= - 521.6$ However, $4000 \times 1.12 = 4480 $ and $4480 \ne 521.6 $ The second thing I tried was to calculate with $a_{n-1}\times\left(1+\frac {0.12} {100}\right)-1666.6667$ and this gives $a_{24}= 61.115$ It should be $a_{24}= 0$, what am I missing?
Not gonna do your problem for you. However, here is a full explanation for the derivation of the formula for the monthly payment, and a more in-depth description can be found at my blog post on this page. Given the initial balance, the number of payments, and the monthly interest rate, finding the monthly payment can be a tricky task. However, we can set up an iterated function to calculate it. First, we assign variables to the givens and the unknowns. Our givens are $L$, the loan amount; $m$, the monthly interest rate; and $n$, the number of payments. Our unknowns are $p$, the monthly payment; $i_k$, the interest part of the kth payment; $l_k$, the principal part of the kth payment; and $B_k$, the balance after the kth payment. We already know one thing about the relationship between $i_k$ and $B_k$; namely, that $$i_k=mB_{k-1}$$ because the interest of the kth payment is the monthly interest rate times the current balance before the payment is made. We also know that $$l_k=p-i_k$$ because whatever part of a payment is not interest is the principal part of the payment. Also, because whatever is not interest goes towards the loan, $$B_k=B_{k-1}-l_k$$ We can now substitute $p-i_k$ for $l_k$ and we get $$B_k=B_{k-1}-(p-i_k)$$ $$B_k=B_{k-1}-p+i_k$$ We also can substitute $mB_{k-1}$ for $i_k$, so $$B_k=B_{k-1}-p+mB_{k-1}$$ $$B_k=(m+1)B_{k-1}-p$$ We have turned $B_k$ into a sequence. In order to finish defining the sequence, we need only one term of the sequence. We already know that before any payments, the balance is $L$, so we can let $B_0=L$. Then we can define $B_k$ as $$B_k=f(B_{k-1})$$ where $$f(x)=(m+1)x-p$$ However, since $B_0=L$, then $$B_k=f^k(L)$$ and by our formula for the kth iteration of functions of the form $f(x)=cx+d$, $$f^k(x)=(m+1)^kx+p\frac{1-(m+1)^k}{m}$$ and $$B_k=(m+1)^kL+p\frac{1-(m+1)^k}{m}$$ But this is still in terms of $p$, which is an unknown as of yet. However, we can solve for $p$ because we know another value of $B_k$. Since, after the last payment, the balance will be $0$, we can say that $B_n=0$. We can substitute and solve for $p$: $$0=(m+1)^nL+p\frac{1-(m+1)^n}{m}$$ $$p\frac{(m+1)^n-1}{m}=(m+1)^nL$$ $$p=\frac{m(m+1)^nL}{(m+1)^n-1}$$
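Applied to the question's numbers (a sketch of my own, taking $L=4000$, $m=0.01$, $n=24$):

    # Monthly payment p = m (m+1)^n L / ((m+1)^n - 1), derived above.
    L, m, n = 4000.0, 0.01, 24
    p = m * (1 + m)**n * L / ((1 + m)**n - 1)
    print(round(p, 2))   # about 188.29, consistent with the table value 4000 * 0.0471

    # Check that the recurrence B_k = (m+1) B_{k-1} - p drives the balance to 0.
    B = L
    for _ in range(n):
        B = (1 + m) * B - p
    print(round(B, 6))   # essentially 0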
{ "language": "en", "url": "https://math.stackexchange.com/questions/2310906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why doesn't this linear transformation exist? In Friedberg's Linear Algebra, the following theorem is presented: "Let V and W be vector spaces over F, and suppose that V is finite-dimensional with a basis $\left\{v_1,...,v_n\right\}$. For any vectors $w_1,...,w_n$ in W there exists exactly one linear transformation $T:V\rightarrow W$ such that $T(v_i)=(w_i)$ for $i=1,...,n$." I think I understand what this is saying, but then in the exercises there is a true/false question that is supposedly false that is as follows: "Given $x_1,x_2\in V$ and $y_1,y_2\in W$, there exists a linear transformation $T:V\rightarrow W$ such that $T(x_1)=y_1$ and $T(x_2)=y_2$." My question is why is this false? Doesn't it follow from the theorem?
Let $V=W=\mathbb{R}^2$. Let $x_1 = (1,0), x_2 = (2,0), y_1=(0,1), y_2=(1,3)$. Suppose there exists a linear transformation $T$ as required. Then $$T(2,0)=(1,3).$$ Also $$T(2,0)=2T(1,0)=2(0,1)=(0,2).$$ This gives a contradiction. The reason this happens, as mentioned in the comments, is that $x_1$ and $x_2$ are linearly dependent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Studying the derivative of the integral function $\frac{1}{x}\int_0^x\arctan(e^t)\mathrm{d}t$ I was trying to calculate the derivative of the function $$ F(x) =\frac{1}{x}\int_0^x\arctan(e^t)\mathrm{d}t $$ I thought the fastest way was to use the Leibniz rule for the derivative of a product, $$ (f\cdot g)' = f'g + g'f $$ and, choosing $f(x) = \frac{1}{x}$ and $g(x) = \int_0^x\arctan(e^t)\mathrm{d}t$, applying for the second one's derivative the fundamental theorem of calculus, I obtained $$ -\frac{1}{x^2}\int_0^x\arctan(e^t)\mathrm{d}t + \frac{1}{x}\left[\arctan(e^t)\right]\Bigg|_{t = 0}^{t = x} = -\frac{1}{x^2}\int_0^x\arctan(e^t)\mathrm{d}t + \frac{1}{x}\left[\arctan(e^x)-\frac{\pi}{4}\right] $$ Now here come the problems, since I don't know how to evaluate the limit as $x\rightarrow0$ for the first term of the expression, while for the second one, as $x\rightarrow0$, $\frac{1}{x}\left[\arctan(e^x)-\frac{\pi}{4}\right]\rightarrow\frac{1}{2}$. So I plotted the whole thing and I saw something very strange: The blue one is the function (which is right), the red one is the derivative as calculated before. As you can notice, it looks like the derivative has a discontinuity at the point 0, while looking at the graph of the function $F(x)$ one would say that there's no such discontinuity. I tried to evaluate the whole thing with Mathematica, but I did not solve the problem: there are strange things happening at the origin. Now, there are two possibilities:
* The derivative is wrong, but I wonder where, as it's so simple and linear
* Grapher app from Mac OS X cannot handle such functions in a proper way

Can you find out the bug?
With your notation $$ g(x)=\int_0^x\arctan(e^t)\,dt $$ we have $$ F'(x)=\frac{xg'(x) - g(x)}{x^2}=\frac{x\arctan(e^x)-g(x)}{x^2} $$ for $x\ne0$. On the other hand, the function $F$ can be extended by continuity at $0$ as $$ \lim_{x\to0}F(x)=\frac{\pi}{4} $$ and $$ \lim_{x\to0}F'(x)= \lim_{x\to0}\frac{1}{2x}\frac{xe^x}{1+e^{2x}}=\frac{1}{4} $$ so $F$ (extended) is also differentiable at $0$. I used l’Hôpital for both limits. The bug in your argument is that you wrongly do $$ \frac{d}{dx} g(x) = \Bigl[ \arctan(e^t)\Bigr]_{t=0}^{t=x} $$ instead of $$ \frac{d}{dx}g (x) =\arctan(e^x) $$ according to the fundamental theorem of calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find $\lim_{x\rightarrow \frac{\pi}{4}}\frac{\cos(2x)}{\sin(x-\frac{\pi}{4})}$ without L'Hopital I tried: $$\lim_{x\rightarrow \frac{\pi}{4}}\frac{\cos(2x)}{\sin(x-\frac{\pi}{4})} = \lim_{x\rightarrow \frac{\pi}{4}}\frac{\frac{\cos(2x)}{x-\frac{\pi}{4}}}{\frac{\sin(x-\frac{\pi}{4})}{x-\frac{\pi}{4}}}$$ and $$\begin{align}\frac{\cos(2x)}{x-\frac{\pi}{4}} &= \frac{\cos(2x)}{\frac{4x-\pi}{4}} \\&= \frac{4\cos(2x)}{4x-\pi} = \,\,???\end{align}$$ What do I do next?
Just as an alternate approach: I said $$\frac{\cos(2\theta)}{\sin\theta\cos\frac{\pi}{4}-\cos\theta\sin\frac{\pi}{4}} = \frac{\cos(2\theta)}{\frac{1}{\sqrt{2}}(\sin\theta-\cos\theta)}.$$ Noting that $\cos(2\theta)=\cos^2\theta-\sin^2\theta$ and $\sin\theta-\cos\theta=-(\cos\theta-\sin\theta)$, we have $$\frac{\cos^2\theta-\sin^2\theta}{-\frac{1}{\sqrt{2}}(\cos\theta-\sin\theta)} = -\sqrt{2}\cdot\frac{(\cos\theta-\sin\theta)(\cos\theta+\sin\theta)}{\cos\theta-\sin\theta}=-\sqrt{2}\,(\cos\theta+\sin\theta),$$ which equals $-\sqrt{2}\left(\frac{\sqrt{2}}{2}+\frac{\sqrt{2}}{2}\right)=-2$ when you take the limit $\theta\to\frac{\pi}{4}$. Note that I just reciprocated the $-\frac{1}{\sqrt{2}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Help with this integral: $\int_{0}^{1}\frac{1}{1+x^{2}}dx$ (Riemann) I'm stuck when I try to solve this integral: $$\int_{0}^{1}\frac{1}{1+x^{2}}dx$$ I try this: $$\int_{0}^{1}\frac{1}{1+x^{2}}dx=\lim_{n\rightarrow\infty}\sum_{i=1}^{n}\frac{1}{1+(\frac{i}{n})^{2}}\frac{1}{n}=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{i=1}^{n}\frac{1}{1+(\frac{i}{n})^{2}}$$ Can someone help with the next step? I'm stuck.
Not sure why you can't use the well-known antiderivative $\arctan x$, to get $$\int_0^1\frac1 {1+x^2}dx=\Big[\arctan x\Big]_0^1=\frac \pi4-0=\frac \pi4$$...
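Since the question asked about the Riemann sum itself, here is a small numerical sketch of my own showing it converge:

    import numpy as np

    # Right-endpoint Riemann sum (1/n) * sum_{i=1}^{n} 1 / (1 + (i/n)^2)
    for n in [10, 100, 1000, 100000]:
        i = np.arange(1, n + 1)
        print(n, np.sum(1.0 / (1.0 + (i / n)**2)) / n)
    print(np.pi / 4)   # the limit, about 0.7853981...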
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What does the set of the geometric images of z, such that $z+\overline{z}+|z^2|=0$, define? What does the set of the geometric images of z, such that $z+\overline{z}+|z^2|=0$, define? A) A circumference of center (0,-1) and radius 1. B) Two lines, of equations $y=x+2$ and $y=x$. C) A circumference of center (1,0) and radius 1. D) A circumference of center (-1,0) and radius 1. I tried to simplify the expression: $$z+\overline{z}+|z^2|=0 \Leftrightarrow \\ x+yi+x-yi+|(x+yi)^2|=0\Leftrightarrow \\ 2x+\sqrt{(x^2-y^2)^2+(2xy)^2}=0 \Leftrightarrow \\ \sqrt{(x^2-y^2)^2+(2xy)^2}=-2x \Leftrightarrow \\ (x^2-y^2)^2+(2xy)^2=4x^2 \Leftrightarrow \\ x^4 - 2 x^2 y^2 + y^4+4x^2y^2 = 4x^2 \Leftrightarrow \\ x^4+y^4+2x^2y^2 = 4x^2 \Leftrightarrow \\ ???$$ How do I continue from here? I have also been thinking that if the sum of those three numbers is zero then they could be the vertices of a triangle. I rewrote the expression: $$\rho \cdot cis(\theta)+\rho \cdot cis(-\theta)+|\rho^2 \cdot cis(2\theta)|=0 \Leftrightarrow \\ \rho \cdot cis(\theta)+\rho \cdot cis(-\theta)+\rho^2 =0 \Leftrightarrow \\ \rho(cis(\theta)+cis(-\theta)+\rho) = 0 \Leftrightarrow \\ \rho = 0 \lor cis(\theta)+cis(-\theta)+\rho = 0 \Leftrightarrow \\ cis(\theta)+cis(-\theta) = -\rho \Leftrightarrow \\ \cos(\theta)+\sin(\theta)i+\cos(-\theta)+\sin(-\theta)i = -\rho \Leftrightarrow \\ \cos(\theta)+\cos(\theta) = -\rho \Leftrightarrow \\ 2\cos(\theta) = -\rho \Leftrightarrow \\ \rho = -2\cos(\theta)$$ This means that $\rho$ will be between 0 and 2. If $|z^2|=\rho^2 = \rho^2 cis(0)$, then one of the vertices is $4$. But what do I do next? How do I solve this?
Put $z=\rho (\cos (t)+i \sin (t))$. We have $$z+\overline {z}=2\rho \cos (t) $$ $$|z^2|=|z|^2=\rho^2$$ thus $$\rho=-2\cos (t) $$ and $$z=-2\cos^2 (t)-2i\cos (t)\sin (t) $$ or $$x_z=-1-\cos (2t) $$ $$y_z=-\sin (2t) $$ thus $$(x_z+1)^2+y_z^2=1$$ it is a circle with radius $1$ and center $(-1,0) $.
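One can also verify the parametrization numerically (a quick check of my own):

    import numpy as np

    # Points z = rho * cis(t) with rho = -2 cos(t) (interior t, so that rho > 0).
    t = np.linspace(np.pi / 2, 3 * np.pi / 2, 7)[1:-1]
    z = -2 * np.cos(t) * np.exp(1j * t)

    print(np.abs(z + np.conj(z) + np.abs(z)**2))  # ~0: the original equation holds
    print(np.abs(z + 1))                          # ~1: the points lie on |z + 1| = 1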
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Volume of revolution setup I've been given a question as so Consider the region $R = \{(x,y)\; |\; 0 ≤ y \;and \;y^2≤ x ≤ y+2\}$ Draw R and calculate the exact volume by rotating the region R about the line x=0 So this is what $R$ looks like (shaded in black).. I think. And the equation I came up with was to integrate from 0 to 2, of the equation $π((y+2)-y^2)^2 \;dy$ Is this correct? I've never attempted a question setup like this before. Thank you so much!
Close. Just remember that: $$ \pi(R - r)^2 \neq \pi(R^2 - r^2) $$ So the correct setup is: $$ V = \pi\int_0^2 \left[(y + 2)^2 - (y^2)^2 \right] \, dy $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluate $\sqrt{41}$ to $n$ decimal places without using a calculator. Evaluate $\sqrt{41}$ to $n$ decimal places without using a calculator. $$x^2 -10x-16=0$$ I was asked to solve the above quadratic giving the solutions to one decimal place. Using the quadratic formula I got: $x= 5\pm \sqrt{41}$ Using a calculator that is: $11.4(1dp)$ or $-1.4(1dp)$ Then I wondered if I could solve this without a calculator. It can be done by trial and error squaring values and gradually getting closer to the required accuracy. Here are the values that I squared to get to the required accuracy for the above case. $6.5,6.4,6.45,6.43,6.41$ and $6.405$. But is there another cleaner way of solving a surd to some number of decimal places?
Hint. Construct a recurrence sequence $x_n=f(x_{n-1})$ for $n\geq 1$, with $$f(x)=\frac{1}{2}\left(x+\frac{41}{x}\right).$$ A reasonable starting point would be $x_0=7$ (note that $6^2<41<7^2$). For details take a look here.
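Here is the recurrence in code (a sketch of my own), starting from $x_0=7$:

    from decimal import Decimal, getcontext

    getcontext().prec = 30          # work with 30 significant digits
    x = Decimal(7)                  # starting point, since 6^2 < 41 < 7^2
    for _ in range(6):
        x = (x + Decimal(41) / x) / 2
        print(x)
    # The iterates converge quadratically to sqrt(41) = 6.4031242374...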
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
In a nice protomodular category, is normality stable under joins? Call a subobject normal if it's an equivalence class of some internal equivalence relation on the codomain. In a finitely complete category, normality is closed under finite intersection and finite product and is also pullback stable. This can be proved using a metatheorem which reduces to verification in the category of sets. In the category of groups, normality is also closed under joins in the subobject poset (see this MSE question). Is the finite analogue of this statement true in some nice protomodular categories, e.g. homological ones?
This paper of Tomas Everaert and Tim Van der Linden shows that it is true for semi-abelian categories (see Proposition 2.7). Exactness implies that every normal monomorphism is a kernel, and they need the existence of coproduct to define joins of subobjects; so I'm not sure that this would still hold in a category that is only homological.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
CDF of max($X$, $Y$) - where is the mistake? $X$ and $Y$ are independent r.v.'s and we know $F_X(x)$ and $F_Y(y)$. Let $Z=max(X,Y)$. Find $F_Z(z)$. Here's my reasoning: $F_Z(z)=P(Z\leq z)=P(max(X,Y)\leq z)$. I claim that we have 2 cases here: 1) $max(X,Y)=X$. If $X<z$, we are guaranteed that $Y<z$, so $F_Z(z)=P(Z\leq z)=P(X<z)=F_X(z)$ 2) $max(X,Y)=Y$. Similarly, $F_Z(z)=P(Z\leq z)=P(Y<z)=F_Y(z)$ Since we're interested in either case #1 or #2, $F_Z(z)=F_X(z)+F_Y(z)-F_X(z)*F_Y(z)$ However, it's wrong and I know it. But I would like to know where the flaw in my reasoning is. I know the answer to this problem, I just want to know at what moment my reasoning fails.
I think you are fine with separating the cases, but then do not take care of them correctly. This is because when you say in your case 1 that the maximum is $X$, you are "conditioning on" $X>Y$, and that changes the space over which you calculate the probabilities. Assuming $X$ and $Y$ have densities $f_X,f_Y$ (so that ties have probability zero), there are two disjoint cases, exactly one of which happens: case 1: $X<Y<z$; case 2: $Y<X<z$. That is \begin{align} \Pr(\max\{X,Y\}<z)&=\Pr(X<Y<z)+\Pr(Y<X<z)\\ &=\int_{x=-\infty}^z\int_{y=x}^zf_Y(y)f_X(x)dydx+\int_{y=-\infty}^z\int_{x=y}^zf_Y(y)f_X(x)dxdy\\ &=\int_{y=-\infty}^z\int_{x=-\infty}^yf_Y(y)f_X(x)dxdy+\int_{y=-\infty}^z\int_{x=y}^zf_Y(y)f_X(x)dxdy\\ &=\int_{y=-\infty}^zf_Y(y)\left(\int_{x=-\infty}^yf_X(x)dx+\int_{x=y}^zf_X(x)dx\right)dy\\ &=\int_{y=-\infty}^zf_Y(y)\left(\int_{x=-\infty}^zf_X(x)dx\right)dy\\ &=F_Y(z)F_X(z). \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Multiple integral related to zeta function I am trying to calculate the following integral $$\int_V \frac{d^d\vec{r}}{e^{x_1+...+x_d}-1},$$ where $V=[0,\infty)^d$ and $\vec{r}=(x_1,...,x_d)$. I know that the result should be related to the Riemann zeta function, but I do not see how to do it quickly and elementarily (i.e. without the knowledge of all possible relations for the zeta function). Any suggestion or hint?
By Fubini's theorem $$\int_{(0,+\infty)^d}\frac{x_1}{e^{x_1+\ldots+x_d}-1}\,d\mu_d = \int_{(0,+\infty)^{d-1}}\int_{0}^{+\infty}\frac{x_1}{e^{x_1+\ldots+x_d}-1}\,dx_1\,d\mu_{d-1} $$ and the last integral equals $$ \int_{(0,+\infty)^{d-1}}\text{Li}_2\left(e^{-(x_2+\ldots+x_d)}\right)\,d\mu_{d-1}=\int_{(0,1)^{d-1}}\frac{\text{Li}_2(v_2\cdots v_d)}{v_2\cdots v_d}\,dv_2\cdots dv_d$$ or $$ \int_{(0,1)^{d-2}}\frac{\text{Li}_3(v_3\cdots v_d)}{v_3\cdots v_d}\,dv_3\cdots dv_d = \int_{0}^{1}\frac{\text{Li}_d(w)}{w}\,dw=\text{Li}_{d+1}(1)=\color{red}{\zeta(d+1)}.$$ With the same technique we have: $$\int_{(0,+\infty)^d}\frac{d\mu_d}{e^{x_1+\ldots+x_d}-1} = \text{Li}_d(1)=\color{red}{\zeta(d)}.$$
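A numerical sanity check of the last reduction step (my own sketch, using mpmath's polylogarithm):

    import mpmath as mp

    # Check int_0^1 Li_d(w)/w dw = zeta(d+1) for a few values of d.
    for d in [2, 3, 4]:
        lhs = mp.quad(lambda w: mp.polylog(d, w) / w, [0, 1])
        print(d, lhs, mp.zeta(d + 1))   # the two columns agree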
{ "language": "en", "url": "https://math.stackexchange.com/questions/2311941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Can the following trigonometric equation be transformed into the other? Can $$16\sec^2⁡(x)\tan^4⁡(x)+88\sec^4⁡(x)\tan^2⁡(x)+16\sec^6⁡(x)$$ be proven equal to $$24\sec^6(x)-8\sec^4(x)+96\sec^4(x)\tan^2(x)-16\sec^2(x)\tan^2(x)$$ I have made about six attempts, but I keep getting stuck. I thought I'd ask, maybe someone else can figure it out before me or verify that we cannot transform from one equation to another. The reason why this is important to me is because I met a question asking for the differentiation of $\tan^2(x)$ 4 times. Both expressions above are different ways to express the SAME first order derivative, so they are indeed equal. However, the second expression has been derived by replacing $\tan^2(x)$ by $\sec^2(x)-1$, and then carrying on with differentiating to get the third and fourth derivatives. However, I didn't make such a substitution, hence I ended up with the first derivative. So, I'm trying to figure out a strategy to get the derivative right in the exam. It starts with knowing whether one of these expressions can somehow be converted into the other.
Sometimes the easiest thing to do is convert everything into sines and cosines. \begin{array}{l} 16 \sec^2⁡(x) \tan^4⁡(x) + 88 \sec^4⁡(x) \tan^2⁡(x) + 16 \sec^6⁡(x) \\ =\dfrac{16\sin^4(x)}{\cos^6(x)}+\dfrac{88\sin^2(x)}{\cos^6(x)} +\dfrac{16}{\cos^6(x)} \\ =\dfrac{16\sin^4(x) + 88 \sin^2(x) + 16}{\cos^6(x)} \end{array} \begin{array}{l} 24\sec^6(x)-8\sec^4(x)+96\sec^4(x)\tan^2(x)-16\sec^2(x)\tan^2(x) \\ =\dfrac{24}{\cos^6(x)}-\dfrac{8}{\cos^4(x)}+\dfrac{96\sin^2(x)}{\cos^6(x)} -\dfrac{16 \sin^2(x)}{\cos^4(x)} \\ =\dfrac{24-8\cos^2(x)+96\sin^2(x)-16\sin^2(x)\cos^2(x)}{\cos^6(x)} \\ =\dfrac{24-(8-8\sin^2(x))+96\sin^2(x)-(16\sin^2(x) - 16\sin^4(x))}{\cos^6(x)} \\ =\dfrac{16\sin^4(x) + 88\sin^2(x) + 16}{\cos^6(x)} \end{array}
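One can also let a CAS confirm the identity (a quick check of my own; sympy may need trigsimp here):

    import sympy as sp

    x = sp.symbols('x')
    e1 = 16*sp.sec(x)**2*sp.tan(x)**4 + 88*sp.sec(x)**4*sp.tan(x)**2 + 16*sp.sec(x)**6
    e2 = (24*sp.sec(x)**6 - 8*sp.sec(x)**4
          + 96*sp.sec(x)**4*sp.tan(x)**2 - 16*sp.sec(x)**2*sp.tan(x)**2)
    print(sp.trigsimp(e1 - e2))   # expect 0: the two expressions are identical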
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate the value of the series $\,\sum_{n=1}^\infty\frac{1}{2n(2n+1)(2n+2)}$ Calculate the infinite sum $$\dfrac{1}{2\cdot 3\cdot 4}+ \dfrac{1}{4\cdot 5\cdot 6}+\dfrac{1}{6\cdot 7\cdot 8}+\cdots$$ I know this series is convergent by Comparison Test, but I can't understand how can I get the value of the sum. Is there any easy way to calculate this? Please someone help.
Hint: $$(2n)(2n+1)(2n+2) = (2n+1)^3 - (2n+1)$$
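Following the hint through partial fractions, the value works out (if I have computed correctly) to $\frac34-\ln 2\approx 0.0568528$; a quick numerical check of the partial sums (my own sketch):

    import math

    # Partial sum of sum_{n>=1} 1 / (2n (2n+1) (2n+2))
    s = sum(1.0 / (2*n * (2*n + 1) * (2*n + 2)) for n in range(1, 200001))
    print(s)                    # about 0.05685281...
    print(0.75 - math.log(2))   # matches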
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Whether a series is convergent or divergent Is it true that if $\sum u_n$ is convergent, where $u_n$'s are positive real numbers then $\sum \dfrac{u_1+u_2+...+u_n}{n}$ is divergent? I know that if $\lim_{n\to\infty}u_n =0$ then $\lim_{n\to\infty}\dfrac{u_1+u_2+...+u_n}{n}=0$ and it is the necessary condition for a series to be convergent. Someone help please.
The post has been edited to say the $u_i$'s are positive. Under that assumption, $\sum \dfrac{u_1+u_2+...+u_n}{n}$ diverges because $\sum_{n=1}^m \dfrac{u_1+u_2+...+u_n}{n}>\sum_{n=1}^m \dfrac{u_1}{n}$ and the latter goes to infinity as $m\to\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Solve the recurrence relation. Need some help x(n)= x(n-1) + 2*n + 3 ; Given x(0)=4; I'm using backwards substitution x(n)= [x(n-2) + 2(n-1) + 3] + 2*n + 3 x(n)= [x(n-3) + 2(n-2) + 3] + 2(n-1) + 3 + 2*n + 3 so on and so forth...... so I write the general expression as : x(n) = x(n-i) + [something here] + i*3 but I am not sure because I'm getting terms like 2(n-3) + 2(n-2) + 2(n-1)+ 2(n) ... do I write these terms as (2-i-1) ? any help will be appreciated
Indeed, you were going in the right track !!!. \begin{align} x_{n} & = x_{n - 1} + 2n + 3 = x_{n - 2} + \left[2\left(n - 1\right) + 3\right] + \left(2n + 3\right) \\[5mm] & = x_{n - 3} + \left[2\left(n - 2\right) + 3\right] + \left[2\left(n - 1\right) + 3\right] + \left(2n + 3\right) = \cdots = \overbrace{\quad x_{0}\quad}^{=\ 4} + \sum_{k = 1}^{n}\left(2k + 3\right) \\[5mm] & = 4 + 2\,\frac{n\left(n + 1\right)}{2} + 3n = n^{2} + 4n + 4 = \boxed{\left(n + 2\right)^{2}} \end{align}
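A two-line check of the closed form against the recurrence (my own sketch):

    # x(n) = x(n-1) + 2n + 3 with x(0) = 4 should equal (n + 2)^2.
    x = 4
    for n in range(1, 11):
        x = x + 2*n + 3
        assert x == (n + 2)**2
    print("closed form verified for n = 1..10")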
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On proof of showing uniqueness of adjoint equivalence functor Suppose that $C,D$ are categories, and fix a functor $F:C\to D$. Then if there is some $G$ such that $(F,G,e,\epsilon)$ is an adjoint equivalence, and if $G'$ is another such functor, $G$ and $G'$ are isomorphic by a unique isomorphism. (Here adjoint equivalence means that $(F,G,e,\epsilon)$ is an adjoint with $e,\epsilon$ being natural isomorphisms) Proof: Fix $F:C\to D$, and define the category $E_F$ by
* $Ob(E_F)$ consists of triplets $(G,e,\epsilon)$ such that $(F,G,e,\epsilon)$ is an adjoint equivalence.
* A morphism $(G_1,e_1,\epsilon_1)\to (G_2,e_2,\epsilon_2)$ is given by a natural transformation $f:G_1\rightarrow G_2$ such that $f\circ id_F$ composed with $e_1$ coincides with $e_2$, and $\epsilon_2$ composed with $id_F \circ f$ coincides with $\epsilon_1$ (this condition can be stated much more compactly by drawing a commutative diagram, but I'm not sure how I could draw it here - Sorry about that)

Then the claim is that any two objects in this category are isomorphic, and this isomorphism is unique. I have no problem understanding the part about objects being isomorphic, but I don't quite follow the uniqueness proof. This goes as follows: By the commutativity condition imposed on $f\in \text{Hom}((G_1,e_1,\epsilon_1),(G_2,e_2,\epsilon_2))$, given $x\in C$ and $y\in D$, $$ f_{F(x)}\circ e_{1,x}=e_{2,x}\;\;\; \text{and} \;\;\; \epsilon_{2,y}\circ F(f_y)=\epsilon_{1,y} $$ which can be put in the form $$ f_{F(x)}=e_{2,x}\circ e^{-1}_{1,x}\;\;\;\;\;\text{and}\;\;\; F(f_y)=\epsilon^{-1}_{2,y}\circ \epsilon_{1,y} $$ Applying $G_2$ to the second equality yields $G_2F(f_y)=G_2(\epsilon_{2,y})^{-1}\circ G_2(\epsilon_{1,y})$ which from naturality of $e_2$ implies $$ e_{2,G_2(y)}\circ f_y=G_2(\epsilon_{2,y})^{-1}\circ G_2(\epsilon_{1,y})\circ e_{2,G_1(y)} $$ (Apply the naturality diagram of $e_2$ to $G_1(y)\to G_2(y)$ by $f_y$) Therefore $f_y=G_2(\epsilon_{1,y})\circ e_{2,G_1(y)}$ which uniquely determines $f$. Question: How does the "therefore" bit follow from the previous line?
The previous line is equivalent to $$G_2(\epsilon_{2,y})\circ e_{2,G_2(y)}\circ f_y= G_2(\epsilon_{1,y})\circ e_{2,G_1(y)}.$$ The "therefore" follows from the "triangle identity" between the unit and counit for $F\dashv G_2$: $$G_2(\epsilon_{2,y})\circ e_{2,G_2(y)}=id_{G_2(y)}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to handle integrals of the form $\int_0^1 \sin^2(\pi x) f(x) dx$? In another question here on the site I was looking for an integral bound and it turned out one has to know how to deal with oscillatory integrals of the form $$\int_0^{1} \sin^2(\pi x) f(x) dx,$$ where $f$ is a smooth function. There should be some trick involving Fourier-analysis and the Riemann-Lebesgue Lemma, but I've never heard of this and a Google search didn't help much. Can anyone show me what the general method is in solving such integrals? Thanks!
$$\int_{0}^{1}\sin^2(\pi x)\,f(x)\,dx = \frac{1}{2}\int_{0}^{1}f(x)\,dx-\frac{1}{2}\int_{0}^{1}\cos(2\pi x)f(x)\,dx $$ and if $f$ is a $C^1$ function we have $$ \frac{1}{2}\int_{0}^{1}\cos(2\pi x)\,f(x)\,dx = -\frac{1}{4\pi}\int_{0}^{1}\sin(2\pi x)\,f'(x)\,dx $$ by integration by parts. If $f(x)$ behaves like $\frac{1}{x+k}$, this process greatly improves the convergence and allows to estimate $\int_{0}^{1}\sin^2(\pi x)\,f(x)\,dx$ with an arbitrary precision. For short: separate the mean value from the mean-zero part and apply integration by parts to the mean-zero part.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that if $\frac1a+\frac1b+\frac1c = a+b+c$, then $\frac1{3+a}+\frac1{3+b}+\frac1{3+c} \leq\frac34$ Show that if $a,b,c$ are positive reals, and $\frac1a+\frac1b+\frac1c = a+b+c$, then $$\frac1{3+a}+\frac1{3+b}+\frac1{3+c} \leq\frac34$$ The corresponding problem replacing the $3$s with $2$ is shown here: How to prove $\frac 1{2+a}+\frac 1{2+b}+\frac 1{2+c}\le 1$? The proof is not staggeringly difficult: The idea is to show (the tough part) that if $\frac1{2+a}+\frac1{2+b}+\frac1{2+c}=1$ then $\frac1a+\frac1b+\frac1c \ge a+b+c$. The result then easily follows. A pair of observations, both under the constraint of $\frac1a+\frac1b+\frac1c = a+b+c$:
* If $k\geq 2$ then $\frac1{k+a}+\frac1{k+b}+\frac1{k+c}\leq\frac3{k+1}$. This is proven for $k=2$ but I don't see how for $k>2$
* If $0<k<2$ then there are positive $(a,b,c)$ satisfying the constraint such that $\frac1{k+a}+\frac1{k+b}+\frac1{k+c}>\frac3{k+1}$.

So one would hope that the $k=3$ case, lying farther within the "valid" region, might be easier than $k=2$ but I have not been able to prove it. Note that it is not always true, under our constraint, that $\frac1{2+a}+\frac1{2+b}+\frac1{2+c}\geq \frac1{3+a}+\frac1{3+b}+\frac1{3+c}+\frac14$, which if true would prove the $k=3$ case immediately.
For the case $k≥3$, $(k\in\mathbb Z)$: The given equality implies $ab+bc+ca=abc(a+b+c)$ We know that $(x+y+z)^2≥3(xy+yz+zx)$; put $x=ab,y=bc,z=ca\Rightarrow (ab+bc+ca)^2\geq3abc(a+b+c)=3(ab+bc+ca)$ $\Rightarrow abc(a+b+c)=ab+bc+ca\geq3$ and $a+b+c\geq3$ (since $(a+b+c)^2\geq3(ab+bc+ca)\geq9$) $\dfrac {1}{k+a}+\dfrac {1}{k+b}+\dfrac {1}{k+c}≤\dfrac {3}{k+1} \Leftrightarrow\dfrac {3k^2+2k(a+b+c)+(ab+bc+ca)}{k^3+k^2(a+b+c)+k(ab+bc+ca)+abc}\leq\dfrac {3}{k+1}$ $\Leftrightarrow3k^3+3k^2+(2k^2+2k)(a+b+c)+(k+1)(ab+bc+ca)\leq3k^3+3k^2(a+b+c)+3k(ab+bc+ca)+3abc$ $\Leftrightarrow3k^2\leq(k^2-2k)(a+b+c)+(2k-1)(ab+bc+ca)+3abc$ The last inequality is true by the AM–GM inequality (and by $k^2-2k-1\geq0$): $ \underbrace {(a+b+c)+...+(a+b+c)}_{k^2-2k\geq3 \text{ times}}+\underbrace {(ab+bc+ca)+...+(ab+bc+ca)}_{2k-1 \text{ times}}+\underbrace {3abc}_{1 \text{ time}}\geq$ $k^2\sqrt[k^2]{3abc(a+b+c)^{k^2-2k}(ab+bc+ca)^{2k-1}}=k^2\sqrt[k^2]{3(a+b+c)^{k^2-2k-1}(ab+bc+ca)^{2k}}\geq3k^2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove that a function $f : X \to X$ is injective if and only if it has a left inverse. Prove that a function $f : X \to X$ is injective if and only if it has a left inverse. Can someone help me with this? What I've done is form predicates of the form $LI(f) \Rightarrow INJ(f)$ to begin step-by-step. I understand the left hand side will be quantified to $\exists f,g: X \to X: g \circ f = id_X$. Am I in the right direction here? Or am I missing something more that should be quantified? I know the following:
* the identity function, $id_X: X \to X$, is defined by $\forall x \in X, id_X(x)=x$.
* A function $g \in \mathcal{F}$ is called a left inverse of a function $f \in \mathcal{F}$ if $g \circ f =id_X$.
You can show injective implies a left inverse by constructing g. First create a helper relation $ g' = \{ ( f(x), x ) : x \in X \}$ . This relation is a (partial) function due to the injective property ensuring that for every element in the domain there is at most one image. Then you can construct $ g(x) = \begin{cases} g'(x) & \text{when } g'(x) \text{ is defined} \\ x & \text{otherwise} \end{cases} $ which is a left inverse. You can show the converse by noting that if $g \circ f = id_X$ and $f(x) = f(y)$, then $x = g(f(x)) = g(f(y)) = y$, so $f$ is injective.
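The construction translates almost verbatim into code (a sketch of my own, for a finite example):

    # Left inverse of an injective f : X -> X on a finite set.
    X = list(range(5))
    f = lambda x: (2 * x) % 5        # injective on X since 2 is invertible mod 5

    g_prime = {f(x): x for x in X}   # the helper relation g' = {(f(x), x)}
    g = lambda y: g_prime.get(y, y)  # fall back to the identity where g' is undefined

    assert all(g(f(x)) == x for x in X)   # g o f = id_X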
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How shall I identify all assumptions required by a Theorem in Evans's PDE textbook? My general question is given in the title. Let us consider the Hopf-Lax formula - Theorem 4 of Section 3.3.2 of the PDE book by Evans, Click here. [Theorem 4] If $x\in \mathbb R^n$ and $t>0$, then the solution $u= u(x, t)$ of the minimization problem (17) is $$u(x, t) = \ldots$$ The assumption given for the Hopf-Lax formula above is "If $x\in \mathbb R^n$ and $t>0$". But obviously, one should impose some conditions on the functions $L$ and $g$, at least a measurability condition. Therefore, I wonder exactly what conditions are needed for the Hopf-Lax formula to be valid. From the context, $L$ is a Lagrangian which may be required to be strongly convex as in (19) on the same page. In addition, $g$ is mentioned to be Lipschitz in (20). So my understanding of the Hopf-Lax formula is [Restatement of Hopf-Lax] $L$ is strongly convex, $g$ is Lipschitz. If $x\in \mathbb R^n$ and $t>0$, then .... In fact, I have many similar questions on other Theorems. Is there any unified approach to figure out the exact conditions for theorems? Or otherwise, do we have to read the proof back and forth and extract the conditions by ourselves?
The hypotheses are listed directly above the theorem statement (e.g., "Recall we are assuming..."). It is common in textbooks or papers to list some of the hypotheses right before the theorem to make the theorem statements more concise. In general, you should aim to understand the proof in sufficient detail that you can deduce on your own what assumptions are necessary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2312931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem with strong law of large numbers, but not identical Suppose that $X_n$ are independent random variables where $\sup_n E|X_n| < \infty$. Then, $\frac{\sum_{i=1}^{n}X_i}{n^a}$ converges to 0 almost surely for all $a>1$. I think that $\frac{1}{n^{a-1}}$ converges to 0 and $\frac{\sum_{i=1}^{n}X_i}{n}$ converges because the sup of the expectations is bounded, but since the $X_n$ are not identically distributed, the SLLN cannot be applied directly. And that makes it hard for me to follow the process of proving the SLLN. Could you please help me?
Let $\Delta\equiv \sup_n\mathsf{E}|X_n|$. Then $$ \sum_{k=1}^\infty\mathsf{P}\left(\left|\frac{X_k}{k^\alpha}\right|\ge 1\right)\le \sum_{k=1}^\infty \frac{\Delta}{k^\alpha}<\infty, $$ and for $Y_n=X_n/n^{\alpha}1\{|X_n|\le n^{\alpha}\}$, $$ \sum_{k=1}^\infty\mathsf{E}Y_k \quad\text{and}\quad \sum_{k=1}^\infty \operatorname{Var}(Y_k) $$ converge as well$^{(*)}$. Hence, by Kolmogorov's three-series theorem, $\sum_{k=1}^n X_k/k^{\alpha}$ converges a.s. in $\mathbb{R}$. Convergence of $\sum_{k=1}^n X_k/n^{\alpha}$ to $0$ follows from Kronecker's lemma. $^{(*)}$ \begin{align} \sum_{k=1}^\infty \operatorname{Var}(Y_k)&\le \sum_{k=1}^\infty \frac{1}{k^{2\alpha}}\mathsf{E}[X_k^21\{|X_k|\le k^{\alpha}\}] \\ &\le \sum_{k=1}^\infty \frac{1}{k^{\alpha}}\mathsf{E}[|X_k|1\{|X_k|\le k^{\alpha}\}]\le \sum_{k=1}^\infty\frac{\Delta}{k^{\alpha}}<\infty. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find exponent of prime $p$ in prime factorization of a number $x$ Say we have a number $x$ such that $$ x = a^{r}.b^{s}.c^{t}.p^{u} $$ Is there a formula or method which can directly give me the exponent of a particular prime in this prime factorization. For small $x$ calculating it is not a problem but when $x$ is of the order of $10^{8}$ finding the exponent of a particular prime becomes tough. e.g. For $p$ it should give $u$, for $a$ it should give $r$ and so on. If somehow $x$ can be represented as $\frac{a!}{b!}$ then also it can be done.
Expanding on FredAkalin's answer, we can write the algorithm in runnable Python as follows:

    def prime_exp(x, p):
        count = 0
        while x % p == 0:   # loop while x is exactly divisible by p
            count += 1      # increase the exponent value by 1
            x //= p         # divide x by p
        return count

We can run this on the number $364=2^2\times 7\times 13$ with prime_exp(364, 2):
We first have count = 0.
We compute 364 mod 2 = 0, so now we have count = 1 and x = 364 / 2 = 182.
We loop back round and compute 182 mod 2 = 0, so now we have count = 2 and x = 182 / 2 = 91.
We loop back round again and compute 91 mod 2 = 1, so we exit the while loop.
We therefore return the value of count = 2.
We can see this is equal to the exponent of $2$ in the prime factorisation of $364$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that a real polynomial $x^n+ a_1x^{n-1}+ \cdots +a_n$ cannot be completely resolved into linear factors if $a_1^2<a_2$ Prove that a real polynomial $x^n+ a_1x^{n-1}+ \cdots +a_n$ cannot be completely resolved into linear factors if $a_1^2<a_2$. Here's what I've got. Let $\alpha_1, \dots, \alpha_n$ be the roots of the polynomial. Then $\displaystyle{a_1^2-a_2=(\alpha_1+ \cdots +\alpha_n)^2 - \sum_{i<j}\alpha_{i}\alpha_{j} = \alpha_1^2+ \cdots +\alpha_n^2 + \sum_{i<j}\alpha_{i}\alpha_{j}}$, but I have no idea how to prove this nonnegative. Please help.
Assume that $\alpha_1,...,\alpha_n$ are $n$ real roots of the polynomial. Notice that $$\sum_{i<j} \alpha_i \alpha_j = a_2 > a_1^2 \geq 0\implies a_1^2-a_2 = \sum_i \alpha_i^2+\sum_{i<j}\alpha_i \alpha_j>0$$ which is a contradiction with our assumption that $a_1^2-a_2<0$. Hence, these real roots cannot exist at the same time.
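For a concrete instance (a quick check of my own): $x^2+x+2$ has $a_1^2=1<2=a_2$, and indeed its roots are not real.

    import numpy as np

    # Roots of x^2 + x + 2, where a_1^2 = 1 < 2 = a_2.
    print(np.roots([1, 1, 2]))   # a complex-conjugate pair, no real roots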
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
In a metric space $(X,d)$ if two sequences {$x_n$} and {$y_n$} are Cauchy then d($x_n$,$y_n$) is convergent Let $(X,d)$ be a metric space and {$x_n$}, {$y_n$} be two arbitrary Cauchy sequences in $X$; then {$d(x_n,y_n)$} is convergent. I think it is sufficient to show that {$d(x_n,y_n)$} is Cauchy in $\mathbb R$. But I can't use the triangle inequality properly, so I can't reach the result. Need someone's help please.
Let $\epsilon>0$ be given. As $\{x_n\}$ is Cauchy, there exists $N_1$ such that $d(x_n,x_m)<\frac\epsilon2$ for all $n,m>N_1$. Similarly, there exists $N_2$ such that $d(y_n,y_m)<\frac\epsilon2$ for all $n,m>N_2$. Therefore, with $N:=\max\{N_1,N_2\}$, we have $$d(x_n,y_n)\le d(x_n,x_m)+d(x_m,y_m)+d(y_m,y_n)< d(x_m,y_m)+\epsilon $$ and the same with $n\leftrightarrow m$, hence $$|d(x_n,y_n)-d(x_m,y_m)|<\epsilon $$ for all $n,m>N$. Since $\mathbb R$ is complete, the Cauchy sequence $\{d(x_n,y_n)\}$ converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
What is the difference between orthogonal subspaces and orthogonal complements? I am learning linear algebra through professor Giblert Strang's lectures on MIT OCW. The professor says that the row space and the null space of a matrix are orthogonal subspaces. This I can follow, since any vector in the nullspace takes any linear combination of the rows to zero. He then says that the row space and null space are more than just orthogonal subspaces, they are orthogonal complements, Because The "nullspace contains all the vectors that are perpendicular to the row space", and vice versa. Consider: $$A= \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}=0 $$ A is rank 2, Dimension of nullspace of A=1 Null space: $$X=c \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$$ Graphically, the row space of A is the x-y plane, and the null space is the z-axis. It's easy to see that the nullspace does not contain all the vectors that are perpendicular to a vector in the row space. If we look at $[1 ~0~ 0]$, it has the entire y-z plane perpendicular to it, not just the z-axis. All I can see is two orthogonal subspaces. I do not understand what additional property has earned these subspaces the term orthogonal complements. Refer: Lecture Video (from 31:16 to 33:00) Lecture Notes Page 2 paragraph 1
There are two conceptual issues here: (1) The orthogonal complement $X^{\perp}$ of a vector subspace $X \leq \Bbb V$ consists of the vectors in $\Bbb V$ that are perpendicular to all of the vectors in $X$, not just at least one vector. In practice, it is convenient to use the characterization that $v \in \Bbb V$ is orthogonal to the subspace $X \leq \Bbb V$ iff for any (equivalently every) basis $(E_a)$ of $X$ we have $v \perp E_a$ for all basis elements $E_a$. Using this characterization, it is straightforward to check that the row space and null space are orthogonal complements. (2) Given a vector space $\Bbb V$, the vector subspaces $X, Y \leq \Bbb V$ are complementary iff (a) $X$ and $Y$ are transverse, that is, $X \cap Y = \{ 0 \}$ and (b) $X$ and $Y$ together span $\Bbb V$, that is, $X + Y = \Bbb V$. Given the first condition, the second condition is equivalent to $X \oplus Y = \Bbb V$. Given an inner product space $(\Bbb V, \,\cdot\,)$, two vector subspaces $X, Y \leq \Bbb V$ are orthogonal (we denote $X \perp Y$) iff every element of $X$ is orthogonal to every element of $Y$. If $X$ and $Y$ are orthogonal and complementary, they are orthogonal complements.
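For the matrix in the question, one can check the complementarity numerically (a sketch of my own using scipy):

    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    N = null_space(A)     # orthonormal basis of the null space (the z-axis)
    print(N.T)            # approximately [[0, 0, 1]]
    print(A @ N)          # approximately 0: every row is orthogonal to the null space
    print(np.linalg.matrix_rank(A) + N.shape[1])   # 2 + 1 = 3, the full dimension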
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
For a polynomial with integer coefficients, is it true that if constant term is prime then it cannot be the root of the polynomial. For a polynomial with integer coefficients, is it true that if constant term is prime then it cannot be the root of the polynomial. Let $p$ be a polynomial with constant term $a_0$ and if $a_0$ is prime then $p(a_0) \ne 0$ I just thought of this while working on some other problem. Is this true ? My attempt :- $$|a_0/a_n| = |r_1 ... r_n|$$ If, $r_1 = a_0$ $$1 = |a_n||r_2 ... r_n|$$ Also $$|a_1/a_n| = |r_2...r_n + r_1r_3...r_n + ... + r_1...r_{n-1}|$$ Or $$|a_1| = |r_1|\left| \dfrac{1}{r_1} + \dfrac{1}{r_2} + \cdots + \dfrac1{r_n}\right|$$ I am having difficulty in proving that $\left| \dfrac{1}{r_1} + \dfrac{1}{r_2} + \cdots + \dfrac1{r_n}\right|$ not a integer if $|r_1 ... r_n|^{-1}$ is a integer. Any hints ? Ok this is false but can somebody prove/disprove this if $p(1) \ne p(-1) \ne 0$ ?
Counter example: $$ p(x) = (x-3)(x-1) = x^{2}-4x+3 $$ where the roots are, as noted by @hardmath, $\left( 3, 1 \right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
How would one find the number that is a given percentage between two numbers? Assume I have two numbers, say, 23 and 1150. Also assume I have a given percentage, say, 32.6%. What formula would I use to find the number that is 32.6% between 23 and 1150?
The number that is $x\%$ of the way between $a$ and $b$ is given by $$a + (b-a)\cdot \frac x{100}.$$ For example, the number $30\%$ of the way between $10$ and $20$ is $$10 + (20-10) \cdot \frac{30}{100} = 13.$$ Why this formula? The factor $b-a$ gives you the length of the interval. The factor $\dfrac x{100}$ tells you how much of this length you need in order to get $x\%$. Finally, the term $a$ ensures that you start moving "$x\%$ of the way into the interval" from the beginning of the interval.
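In code this is a one-liner (a sketch of my own), often called linear interpolation:

    def between(a, b, pct):
        """Return the number pct percent of the way from a to b."""
        return a + (b - a) * pct / 100.0

    print(between(23, 1150, 32.6))   # 390.402, for the original example
    print(between(10, 20, 30))       # 13.0, the worked example above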
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Splitting a quadrilateral into two triangles if it has a vertex with an inner angle bigger than 180° I'm trying to write an app in which I need to test if a 3D quadrilateral has some angle equal to or bigger than 180 degrees, i.e., it is a degenerate or concave quadrilateral. If this is true, I have to split it into two triangles. I have no problem dividing the quadrilateral in two but don't know how to solve the test condition. Thanks in advance for your help. Edit: I only know the coordinates of the vertices
Let's assume you have a quadrilateral in 3D, defined by four vertices $$\begin{array}{l} \vec{v}_1 = ( x_1 , y_1 , z_1 ) \\ \vec{v}_2 = ( x_2 , y_2 , z_2 ) \\ \vec{v}_3 = ( x_3 , y_3 , z_3 ) \\ \vec{v}_4 = ( x_4 , y_4 , z_4 ) \end{array}$$ If one of the vertices is not in the same plane as the other three, the quadrilateral is non-planar, or a skew quadrilateral. There is typically more than one way to interpret the surface -- or rather, which vertex is the one that is non-planar with the other three, since any three (that are not collinear) can define a plane -- so we need further information than just the vertex coordinates to make the decision in such cases. Because of this, I shall assume the quadrilateral is planar, and that the surface unit normal vector of the quadrilateral is $\hat{n}$. If we compute four vector cross products between each pair of consecutive edge vectors, $$\begin{array}{l} \vec{c}_1 = \left ( \vec{v}_4 - \vec{v}_1 \right ) \times \left ( \vec{v}_2 - \vec{v}_1 \right ) \\ \vec{c}_2 = \left ( \vec{v}_1 - \vec{v}_2 \right ) \times \left ( \vec{v}_3 - \vec{v}_2 \right ) \\ \vec{c}_3 = \left ( \vec{v}_2 - \vec{v}_3 \right ) \times \left ( \vec{v}_4 - \vec{v}_3 \right ) \\ \vec{c}_4 = \left ( \vec{v}_3 - \vec{v}_4 \right ) \times \left ( \vec{v}_1 - \vec{v}_4 \right ) \end{array}$$ and the quadrilateral is convex, then all the cross product vectors are in the same half-space as the surface normal vector: $$\begin{cases} \vec{c}_1 \cdot \hat{n} \ge 0 \\ \vec{c}_2 \cdot \hat{n} \ge 0 \\ \vec{c}_3 \cdot \hat{n} \ge 0 \\ \vec{c}_4 \cdot \hat{n} \ge 0 \end{cases}$$ The equal case in the above occurs only when consecutive edge vectors are parallel, for example if $\vec{v}_3 - \vec{v}_2 = \lambda \left( \vec{v}_2 - \vec{v}_1 \right)$, $\lambda \in \mathbb{R}$. In fact, all of the cross products should be parallel to the surface unit normal, and you can even use the above to calculate the surface normal: $$\hat{n} = \frac{\vec{c}_1}{\lVert\vec{c}_1\rVert} = \frac{\vec{c}_2}{\lVert\vec{c}_2\rVert} = \frac{\vec{c}_3}{\lVert\vec{c}_3\rVert} = \frac{\vec{c}_4}{\lVert\vec{c}_4\rVert}$$ In practice, you might wish to use the longest one for numerical stability. If you find that, for example, $$\vec{c}_2 \cdot \hat{n} \lt 0$$ it means the inner angle at vertex $\vec{v}_2$ is greater than 180°.
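Since the question is about writing an app, here is one way the test could look in code (a sketch of my own, not the only possible implementation):

    import numpy as np

    def reflex_vertices(v1, v2, v3, v4):
        """Indices (0-based) of vertices whose inner angle is >= 180 degrees,
        assuming the four points form a planar quadrilateral, given in order."""
        v = [np.asarray(p, dtype=float) for p in (v1, v2, v3, v4)]
        # Cross product of the two edge vectors meeting at each vertex.
        c = [np.cross(v[i - 1] - v[i], v[(i + 1) % 4] - v[i]) for i in range(4)]
        n = max(c, key=np.linalg.norm)   # use the longest one as the normal
        n = n / np.linalg.norm(n)
        return [i for i in range(4) if np.dot(c[i], n) <= 0]

    # A concave quadrilateral in the z = 0 plane: vertex 2 (0-based) is reflex.
    print(reflex_vertices((0, 0, 0), (4, 0, 0), (1, 1, 0), (0, 4, 0)))  # [2]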
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $f(x_n)$ converges for all bounded continuous functions $f$, does then $x_n$ converge? Let $x_n\in \mathbb{R}$ be a sequence and $x\in \mathbb{R}$ such that $f(x_n) \longrightarrow f(x)$ (as $n \longrightarrow \infty$) for all bounded continuous functions $f \colon \mathbb{R} \rightarrow \mathbb{R}$. Do we then have $x_n \longrightarrow x$ (as $n \longrightarrow \infty$)?
Suppose otherwise. Then there is an interval $(x-a,x+a)$ such that $x_n\notin(x-a,x+a)$ for infinitely many $n$'s. Define$$f(y)=\left\{\begin{array}{ll}x+a&\text{ if }y\geqslant x+a\\y&\text{ if }y\in(x-a,x+a)\\x-a&\text{ if }y\leqslant x-a.\end{array}\right.$$Then $f$ is bounded and continuous with $f(x)=x$, and $\bigl|f(x_n)-f(x)\bigr|\geqslant a$ for infinitely many $n$'s, so $f(x_n)\not\to f(x)$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2313955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Validity of Ito's formula for "piecewise-defined" Ito processes Let $b_i: \mathbb{R}^{m} \to \mathbb{R}$ be smooth functions, for each $i \in \{ 1, \ldots,m\}$ and let $(\Omega, \mathbb{F}, \mathbb{P})$ be a probability space equipped with a one-dimensional Brownian motion $W$. We divide the time interval $[0,T]$ evenly into partitions $\{t_r\}_{r=0}^{2N}$ such that $t_r = \frac{rT}{2N}$. On this probability space, we define one-dimensional processes $X^i$ such that they satisfy the following SDE with sign changes in the diffusion: $$ X^i_t = \int_0^t b_i(X^1_s, \ldots, X^m_s) \,ds + \,W_t, \quad \quad t_0 \leq t \leq t_1, \quad \quad 1 \leq i \leq m.$$ Afterwards, we change the sign of the diffusion for each interval $[t_{i-1}, t_i]$. More precisely, \begin{eqnarray} X^i_t & = & X^i_{t_1} + \int_{t_1}^t b_i(X^1_s, \ldots, X^m_s) \,ds - \,(W_t-W_{t_1}), \quad \quad t_1 \leq t \leq t_2, \quad \quad 1 \leq i \leq m. \\ X^i_t & = & X^i_{t_2} + \int_{t_2}^t b_i(X^1_s, \ldots, X^m_s) \,ds + \,(W_t -W_{t_2}), \quad \quad t_2 \leq t \leq t_3, \quad \quad 1 \leq i \leq m, \\ & \vdots & \\ X^i_t & = & X^i_{t_{2N-1}} + \int_{t_{2N-1}}^t b_i(X^1_s, \ldots, X^m_s) \,ds - \,(W_t-W_{t_{2N-1}}), \quad \quad t_{2N-1} \leq t \leq t_{2N}, \quad \quad 1 \leq i \leq m. \end{eqnarray} I am wondering the following: $1. \quad \quad $ Is it true that the quadratic variation $\langle X^i \rangle_t=t,$ for each $t \in [0,T]$? It seems likely, but I am not certain. $2. \quad \quad $ Is the Ito's formula applicable to processes $X^i$? Again, they seem to be continuous semimartingales, therefore the Ito's formula should be applicable, but I am not entirely sure.
You can write $$ X_t = \int_0^t b(X_s)ds + \int_0^t \sigma(s) dW_s, $$ where $$ \sigma(t) = \sum_{n=1}^{2N} (-1)^{n-1} \mathbf{1}_{[t_{n-1},t_n)}(t). $$ By the Lévy martingale characterization theorem, the process $$ B_t = \int_0^t \sigma(s) dW_s $$ is a standard Wiener process. In particular, $\sigma(s)^2 = 1$ for all $s$, so $\langle X^i\rangle_t = \int_0^t \sigma(s)^2\,ds = t$. Therefore, the answers to both of your questions are positive.
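A quick simulation of the quadratic variation claim (my own sketch; the sign flips do not change the squared increments):

    import numpy as np

    rng = np.random.default_rng(0)
    T, steps, N = 1.0, 2**14, 4
    dt = T / steps
    dW = rng.normal(0.0, np.sqrt(dt), steps)

    # sigma = +1 on even-numbered subintervals, -1 on odd ones.
    t = (np.arange(steps) + 0.5) * dt
    sign = np.where(np.floor(t * 2 * N / T).astype(int) % 2 == 0, 1.0, -1.0)
    dB = sign * dW

    print(np.sum(dB**2))   # realized quadratic variation, close to T = 1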
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f(z)=\int_C\frac {g(x)}{x-z}dx$ is holomorphic? Problem : C is a smooth simple closed curve in $\mathbb C$, $g(x)$ is a continuous function on C, and $f(z)=\int_C\frac {g(x)}{x-z}dx\quad (z\notin C)$. Show that $f(z)$ is holomorphic in $\mathbb C\setminus C$ My try : $\lim_{h\to0}\frac{f(z+h)-f(z)}{h} =\lim_{h\to0}\int_C\frac{g(x)}{[x-(z+h)][x-z]}dx=\int_C\lim_{h\to0}\frac{g(x)}{[x-(z+h)][x-z]}dx=\int_C\frac{g(x)}{[x-z]^2}dx$ So $f(z)$ is holomorphic in $\mathbb C\setminus C$. Is it right? I think I didn't use the fact that C is a smooth simple closed curve and $g(x)$ is continuous. And I'm not confident about the way I prove $\lim_{h\to0}\int_C\frac{g(x)}{[x-(z+h)][x-z]}dx=\int_C\lim_{h\to0}\frac{g(x)}{[x-(z+h)][x-z]}dx$ I think the interchange of integral and limit is possible by uniform convergence: $\sup_{x\in C}|\frac{g(x)}{[x-(z+h)][x-z]}-\frac{g(x)}{[x-z]^2}|=\sup |g(x)\frac{h}{[x-z]^2[x-(z+h)]}|\le|\frac{\max g(x)}{\min [x-z]^3}||h|$ . So it is uniformly convergent. (I think I used the fact $g(x)$ is continuous here around $\max g(x)$ but, still, I think I didn't use the fact that C is a smooth simple closed curve.) Is my proof correct? Thanks for reading.
The interchange of the limit and the integral is not justified, as you note, so you need to try something else. The following is the usual way to get round your problem, and consists of expanding the integral kernel $(\xi-z)^{-1}$ into a uniformly convergent power series, so that you can interchange summation and integration: in general, it is much harder to prove one can interchange limits and integrals, but uniform convergence of series does the trick here. Let $C$ be any piecewise smooth curve and define $f : \mathbb C\smallsetminus C\longrightarrow \mathbb C$ so that $$ f(z)=\int_C\frac {g(\xi)}{\xi-z}d\xi$$ To show that $f$ is holomorphic, take a $z_0\in \mathbb C\smallsetminus C$ and pick a ball $B = B(z_0,r)$ strictly missing $C$. I will show that $f$ admits a power series development in $B$, so that $f$ is analytic at $z_0$. Because $z_0$ is arbitrary, this shows $f$ is analytic and hence holomorphic throughout its domain. For $z\in B$ and $\xi \in C$ we have $$\left|\frac{z-z_0}{\xi-z_0}\right|<1-\delta$$ for some $0 < \delta <1$. Thus we may consider the power series development $$\frac{1}{\xi-z} =\frac{1}{\xi-z_0} \sum_{n\geqslant 0 } \left(\frac{z-z_0}{\xi-z_0}\right)^n $$ that is valid in $B$. This converges uniformly over $C$ (for each fixed $z\in B$) by the estimate made above, and thus it is valid to interchange the order of integration and summation to obtain that for $z\in B$ we have $$ f(z)=\sum_{n\geqslant 0} a_n (z-z_0)^n$$ where $a_n = \displaystyle \int_C \frac{g(\xi)}{(\xi-z_0)^{n+1}}d\xi$ for each $n\in \mathbb N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2314183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }