Category Theory: Functors as Diagrams I have begun classes on Category Theory, and I do not understand the concept of 'functors as diagrams'; the book I am using is Category Theory and Applications by Marco Grandis. I have been asked to prove the following statements:

* Every functor defined on a preordered set $S$ is automatically a commutative diagram.
* A functor $1 \longrightarrow C$ amounts to an object of $C$, while a functor $2 \longrightarrow C$ amounts to a morphism of $C$.
* A functor $X : (\mathbb{N},\leq) \to C$, defined on the ordered set of natural numbers (i.e. the ordinal $\omega$), amounts to a sequence of consecutive morphisms of $C$ \begin{equation*} X_0 \longrightarrow X_1 \longrightarrow X_2 \longrightarrow \dots \longrightarrow X_n \longrightarrow \dots \end{equation*} while the functor $X : (\mathbb{N},+)\to C$, defined on the additive monoid of natural numbers, amounts to an endomorphism $u_1 : X_* \to X_*$ (and its powers $u_n=(u_1)^n$, including $u_0=\mathrm{id}(X_*)$).
The basic idea behind understanding diagrams as functors is that we are associating some sort of map (the functor) with its image (the visual diagram). As you can read on Wikipedia, a diagram in a category $C$ is a covariant functor $D:J \rightarrow C$, where $J$ is called an index category. It doesn't really matter what the objects and arrows in $J$ are; instead, it is the connectivity that is important. $D$ associates objects and arrows in $J$ with those in $C$ so that the relationships are preserved. You might find this blog link with nice images helpful. In statement 2, $1$ is the category with a single object and a single arrow. What is the possible image of the functor $1\rightarrow C$? Similar thinking is useful to decode $2 \rightarrow C$, where $2$ is the category $\bullet \rightarrow \bullet$. Build upon this idea in statement 3. What does the category $(\mathbb{N}, \le)$ look like? Then what does the possible image of a functor $X:(\mathbb{N}, \le)\rightarrow C$ look like? For the case involving $(\mathbb{N},+)$, think carefully about how a monoid is understood as a category.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4443201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_0^1 \left(\{ax\}-\frac{1}{2}\right)\left(\{bx\}-\frac{1}{2}\right) dx$ for $a,b\in\mathbb{Z}^+$ Evaluate $$\int_0^1 \left(\{ax\}-\frac{1}{2}\right)\left(\{bx\}-\frac{1}{2}\right) dx$$ where $a$ and $b$ are positive integers and $\{x\}:=x-\lfloor x\rfloor$. WLOG, we may assume that $a\le b$. Via the substitution $x \mapsto x/a$, we obtain $$\frac{1}{a}\int_0^a \left(\{x\}-\frac{1}{2}\right)\left(\left\{\frac{bx}{a}\right\}-\frac{1}{2}\right) dx$$ I know that we should partition the interval $(0,1)$ in such a way that the values of $\lfloor x\rfloor$ and $\lfloor bx/a\rfloor$ are constant on each subinterval. How do I think of such a partition? I also tried to express $a=gu$ and $b=gv$ where $g:=\gcd(a,b)$. But then again I face the same issue of partitioning $(0,1)$.
The answer is $\color{blue}{d^2/(12ab)}$ where $\color{blue}{d=\gcd(a,b)}$. A reasonably short solution is based on $$\newcommand{\saw}[1]{\left(\!\left(#1\right)\!\right)}\saw{x}:=\begin{cases}\{x\}-1/2,&x\notin\mathbb{Z}\\\hfill 0,\hfill&x\in\mathbb{Z}\end{cases}=-\sum_{n=1}^\infty\frac{\sin 2n\pi x}{n\pi}$$ in $L^2(0,1)$; the same holds if $x$ is replaced by $ax$ or $bx$, hence Parseval's theorem yields $$\int_0^1\saw{ax}\saw{bx}dx=\frac1{2\pi^2}\sum_{\substack{m,n>0\\am=bn}}\frac1{mn}=\frac1{2\pi^2}\sum_{k=1}^\infty\frac1{(bk/d)(ak/d)}=\frac{d^2}{12ab}$$ ($am=bn$ holds if and only if $m=bk/d$ and $n=ak/d$ for some $k\in\mathbb{Z}^+$). There's also an elementary one, using $\saw{nx}=\sum_{k=0}^{n-1}\saw{x+k/n}$ for $n\in\mathbb{Z}^+$, so that $$\int_0^1\saw{ax}\saw{bx}dx=\sum_{m=0}^{a-1}\sum_{n=0}^{b-1}f(m/a,n/b),\\f(u,v):=\int_0^1\saw{x+u}\saw{x+v}dx=f(u-v,0),\\f(t,0)=\frac1{12}-\frac{t(1-t)}2\quad\text{for}\quad 0\leqslant t\leqslant 1.$$
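The closed form $d^2/(12ab)$ can be sanity-checked numerically; the snippet below is my own addition (the midpoint rule and the test pairs $(a,b)$ are arbitrary choices, not part of the answer):

```python
from math import gcd, floor

def saw(x):
    # the sawtooth ((x)) = {x} - 1/2 (its value at integers never arises at midpoints)
    return (x - floor(x)) - 0.5

def integral(a, b, M=200_000):
    # midpoint rule for the integral of ((ax))((bx)) over [0, 1]
    return sum(saw(a*(i + 0.5)/M) * saw(b*(i + 0.5)/M) for i in range(M)) / M

approx = {(a, b): integral(a, b) for a, b in [(4, 6), (3, 5), (2, 2)]}
exact = {(a, b): gcd(a, b)**2 / (12*a*b) for (a, b) in approx}
print(approx, exact)
```

The approximations agree with $d^2/(12ab)$ to within the midpoint-rule error, which is dominated by the finitely many jump discontinuities of the sawtooth.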
{ "language": "en", "url": "https://math.stackexchange.com/questions/4443410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
On the logical connective 'implies' It may appear at first glance that this question has been asked over and over here, but I feel that the question in my mind is slightly different from what has already been asked. Here it is: what would have happened if $P \implies Q$ had been taken to be false when $P$ is false and $Q$ is true? I read several answers to the question "Why is 'false implies true' true?" and could gather some information from them. One point is that, irrespective of the truth value of $Q$, if $P$ is false we treat $P \implies Q$ as vacuously true. But this didn't answer my question: what would have happened if I had taken it to be false? Another answer did address my question, but I wasn't much satisfied with it. That answer was: if we had taken 'false implies true' to be false, then the truth tables of $\implies$ and $\leftrightarrow$ would have been one and the same. Any answer about the consequences of taking 'false implies true' to be false is highly appreciated. I mean, would there have been any logical fallacy, contradiction or paradox of some kind if I had assumed 'false implies true' to be false?
What would have happened if $P \implies Q$ was taken to be false when $P$ was false and $Q$ was true? To avoid inconsistencies, at the very least, you would have to invalidate at least one step in the following proof that $\neg P \implies [P \implies Q]$ (screenshot from my proof checker, not reproduced here). You might, for example, have to disallow or somehow restrict proof by contradiction (line 5), or elimination of '$\neg\neg$' (line 6).
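The observation quoted in the question, that the modified connective would coincide with $\leftrightarrow$, is easy to verify mechanically; this check is my own addition:

```python
from itertools import product

def implies_alt(p, q):
    # the modified connective: false when p is true and q is false (as usual),
    # and ALSO false when p is false and q is true
    return not ((p and not q) or (not p and q))

def iff(p, q):
    return p == q

# all four rows of the truth table
table = [(p, q, implies_alt(p, q), iff(p, q)) for p, q in product([False, True], repeat=2)]
print(table)  # the last two columns coincide in every row
```

So the modified arrow is exactly the biconditional, and the language loses a one-directional conditional.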
{ "language": "en", "url": "https://math.stackexchange.com/questions/4443559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
How can I prove that $f(x)=\det (A+xB)= \alpha x+\beta $? I have two matrices $A$ and $B$ such that: $$A=A(a,b,c)=\begin{pmatrix} a & c & c & \dots & c \\ b & a & c & \dots & c\\ b & b & a & \dots & c\\ \vdots &\vdots &\vdots & \ddots &\vdots\\ b & b & b &\dots& a \end{pmatrix} \hspace{1cm} \text{and} \hspace{1cm} B=A(1, 1, 1)$$ and we have: $$f(x)=\det(A+xB)$$ I have to prove the existence of two real numbers $\alpha$ and $\beta$ such that: $$f(x)=\alpha x +\beta$$ We only need to prove their existence, not compute their values, because later in the same exercise we calculate $f(-c)$ and $f(-b)$, deduce $\alpha$ and $\beta$, and then deduce $\det(A)$. That's why I think computing the determinant directly won't be a good idea. Any ideas on how to do so?
In $A+xB$, every entry is of the form $\mathrm{const}+x$. We do not change the determinant if we subtract the first column from every other column. After that, only the first column depends on $x$, i.e., the first column is of the form $v+xw$ and all other columns are constant. As $\det$ is linear in every column, the result follows.
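A numeric illustration of this fact (my own sketch, with arbitrarily chosen values $a=2$, $b=3$, $c=5$ and $n=4$): if $f(x)=\det(A+xB)$ is affine in $x$, its second difference over equally spaced points must vanish.

```python
def det(M):
    # determinant via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if abs(M[p][i]) < 1e-12:
            return 0.0
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= fac * M[i][c]
    return d

def A_of(a, b, c, n=4):
    # a on the diagonal, c above it, b below it
    return [[a if i == j else (c if j > i else b) for j in range(n)] for i in range(n)]

def f(x, a=2.0, b=3.0, c=5.0, n=4):
    A, B = A_of(a, b, c, n), A_of(1.0, 1.0, 1.0, n)
    return det([[A[i][j] + x*B[i][j] for j in range(n)] for i in range(n)])

second_diff = f(0.0) - 2*f(1.0) + f(2.0)  # vanishes (up to rounding) iff f is affine here
print(second_diff)
```

Any quadratic or higher-order dependence on $x$ would show up as a non-negligible second difference.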
{ "language": "en", "url": "https://math.stackexchange.com/questions/4443817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
3D forms of a two variables separated function I am looking for all the possible forms in 3D space of the function defined as $$ \Psi(x,t) = \psi(x) e^{-it}$$ There is this funny constraint: $$|\Psi (x,t)|^2 = \psi^{\ast}(x)\psi(x) e^{it} e^{-it}$$ $$|\Psi (x,t)|^2 = |\psi (x)|^2$$ For every $t$, so $|\Psi|$ doesn't depend on t. So far, I've found one possible form, a function looking like a spring like in this image (but with the x-axis inside of the spring): As time goes by (yes, I am of the ones that have a very hard time to see time as an extra axis) the spring rotates exactly as a screw does Do you guys know another form of this function in 3D? EDIT: After the reply below, I am looking now forward to vizualize what the $\Psi$ function, where $\psi$ has the form $A(x)e^{it}$, would look like. I am wondering if it looks like the function that is in the two following images... or not: Here the $A(x)$ would be a sort of gaussian?
Write $\psi(x) = A(x) e^{i \phi(x)}$. Therefore, $ \Psi(x,t) = A(x) e^{i(\phi(x) - t )} = A(x) \cos(\phi(x) - t) + i A(x) \sin(\phi(x) - t). $ Let $y = A(x) \cos(\phi(x) - t)$ and $z = A(x) \sin(\phi(x) - t)$. To generate the screw in the picture, set $A(x) = A$, a constant, and $\phi(x) = x$. Then $(x,y, z) = ( x, A \cos(x - t), A \sin(x - t))$. For a fixed $t$ this is a spiral (helix) along the $x$ axis. As $t$ advances, the spiral rotates.
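A small script (my own sketch, not from the answer) that generates points of the curve $(x, A\cos(x-t), A\sin(x-t))$ and confirms that $|\Psi|^2$ does not depend on $t$:

```python
import cmath

def Psi(x, t, A=1.0):
    # psi(x) = A e^{ix}: constant amplitude, phase phi(x) = x
    return A * cmath.exp(1j * (x - t))

def helix_point(x, t, A=1.0):
    w = Psi(x, t, A)
    return (x, w.real, w.imag)  # (x, A cos(x - t), A sin(x - t))

print(helix_point(0.7, 0.0), helix_point(0.7, 2.5))
# |Psi|^2 at a fixed x, for several values of t:
vals = [abs(Psi(0.7, t))**2 for t in (0.0, 1.3, 2.6)]
print(vals)
```

The three values of $|\Psi|^2$ agree, reflecting that the time dependence is a pure phase.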
{ "language": "en", "url": "https://math.stackexchange.com/questions/4444031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of the series $\sum_{n=1}^\infty \ln(n) \ln\left(1+ \frac{1}{n(n+2)}\right)$ I need your help to show that the series $\sum_{n=1}^\infty \ln(n)\ln\left(1+ \frac{1}{n(n+2)}\right)$ converges. I have tried to use the ratio and root test, but in both cases the limit is $1$. Could you help me, please? Thanks in advance.
Since $$ \ln (1 + x) \le x $$ for every $x>0$, you have that $$ \ln \left( {1 + \frac{1}{{n\left( {n + 2} \right)}}} \right) \le \frac{1}{{n\left( {n + 2} \right)}} $$ for every $n \in \mathbb N$. Now $$ \ln (n)\ln \left( {1 + \frac{1}{{n\left( {n + 2} \right)}}} \right) \le \frac{{\ln n}}{{n\left( {n + 2} \right)}} $$ But $ \ln n < \sqrt n $ for every $n\geq 1$, thus $$ \ln (n)\ln \left( {1 + \frac{1}{{n\left( {n + 2} \right)}}} \right) \le \frac{1}{{\sqrt n \left( {n + 2} \right)}} \le \frac{1}{{n^{{\textstyle{3 \over 2}}} }} $$ and the claim follows by the direct comparison test.
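A numeric check of my own (not part of the answer) that the partial sums settle down, consistent with convergence by comparison:

```python
from math import log

def term(n):
    return log(n) * log(1 + 1/(n*(n + 2)))

def partial(N):
    return sum(term(n) for n in range(1, N + 1))

s3, s4, s5 = partial(10**3), partial(10**4), partial(10**5)
print(s3, s4, s5)
```

The increments shrink rapidly, as expected for a series dominated by $n^{-3/2}$.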
{ "language": "en", "url": "https://math.stackexchange.com/questions/4444585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is $d^{2} x=0$? We want to find $d^{2} z$: $$ (4 x-3 z-16) d x+(8 y-24) d y+(6 z-3 x+27) d z =0 $$ So, in my book, we apply the differential operator $d$ to the above equation, using the product rule: $$ (4 d x-3 d z) d x+(8 d y) d y+(6 d z-3 d x) d z+(6 z-3 x+27) d^{2} z=0 $$ So my question is: where did $d^{2} x$ and $d^{2} y$ go? Are they equal to $0$? If so, why is $d^{2} z$ not $0$? Edit: here is the problem; the $d^{2} z$ above isn't asked for in the question, but it's needed to find the quantities that are asked for. Problem 1 (65 points) We consider the function $$ F(x, y, z)=2 x^{2}+4 y^{2}+3 z^{2}-3 x z-16 x-24 y+27 z+94 $$ and let $z=z(x, y)$ be an implicit function defined by the equality $F(x, y, z)=-1$. 1.1. Calculate $\frac{\partial z}{\partial x}, \frac{\partial z}{\partial y}, \frac{\partial^{2} z}{\partial x^{2}}, \frac{\partial^{2} z}{\partial y^{2}}$ and $\frac{\partial^{2} z}{\partial x \partial y}$ at the point $(x=1, y=3, z=-5)$.
You are forgetting to use the product rule. If I have $d(8q\,dq)$, then the resulting differential will be $8\,dq^2 + 8q\,d^2q$. $$ d\left((4 x-3 z-16)\, dx +(8 y-24)\, dy +(6 z-3 x+27) \,dz\right) =d( 0) $$ This yields $$ (4x\,dx - 3z\,dz)\,dx + (4 x-3 z-16)\,d^2x + (8\,dy)\,dy + (8y-24)\,d^2y + (6\,dz - 3\,dx)\,dz + (6 z-3 x+27 )d^2z$$ Now, this only makes sense if you use the algebraically-manipulable version of the second derivative. In that version, the second derivative of $y$ with respect to $x$ is: $$\frac{d^2y}{dx^2} - \frac{dy}{dx}\frac{d^2x}{dx^2}$$ This is described more fully in my paper, "Extending the Algebraic Manipulability of Differentials".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4444722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Behavior of Matrices in SO(3) Show that each matrix in SO$(3)$ equals $e^X$ for some skew-symmetric $X$. Here, SO(3) refers to the special orthogonal group, the rotation group of $\mathbb{R}^3$. I am also supposed to use the following facts, which I have already derived, in proving the above result:

* Given $B = \begin{pmatrix} 0 & -\theta & 0 \\ \theta & 0 & 0 \\ 0 &0 &0 \end{pmatrix}$, $e^B =\begin{pmatrix} \cos \theta & -\sin \theta & 0\\ \sin \theta & \cos \theta & 0\\ 0 & 0 & 1 \end{pmatrix}$, where $e^B$ denotes the exponential of $B$. (Note that this is what I calculated the exponential to be; is this correct?)
* For any orthogonal matrix $A$, we have $Ae^BA^T = e^{ABA^T}$. (I have already proven this result.)

Again, I need to use results 1 and 2 in the proof of my question, but I'm not seeing how these two results combine to show my desired result. Since $e^B$ is one of the 3-dimensional rotation matrices (assuming that I calculated my exponential correctly), do I need to make some sort of argument based on rotation? Or is there a simpler way?
Let $g\in SO(3,\mathbb{R})$ be any element. (1) We first show that $1$ is an eigenvalue of $g$. Consider the characteristic polynomial $f_g$ of $g$. It is clear that $f_g\in \mathbb{R}[x]$ and $\deg(f_g)=3$. Thus $f_g$ has a real root, say $\lambda_1$. Let $\alpha_1$ be a unit eigenvector for $\lambda_1$. Since $g$ preserves lengths, we get $\lambda_1=\pm 1$. If $\lambda_1=-1$, let $\lambda_2, \lambda_3$ be the other eigenvalues. Then $\lambda_2\lambda_3=-1$ (the product of all three eigenvalues is $\det g = 1$). Since $\lambda_2,\lambda_3$ satisfy a monic real quadratic $x^2+ax-1$, which has only real roots, and every eigenvalue of an orthogonal matrix has modulus $1$, we have $\{\lambda_2,\lambda_3\}=\{\pm 1 \}$. Thus $1$ is an eigenvalue of $g$. (2) Let $\alpha_1$ be a unit eigenvector of $g$ with eigenvalue $1$ and let $W=\operatorname{Span}\{\alpha_1\}$. Consider $W^{\perp}$, which has dimension 2 and is invariant under $g$. Thus $g|_{W^{\perp}}\in SO(2,\mathbb{R})$. Thus there exist an orthonormal basis $\alpha_2,\alpha_3$ and an angle $\theta$ such that $$g\begin{pmatrix}\alpha_2 \\ \alpha_3 \end{pmatrix}=\begin{pmatrix}\alpha_2 & \alpha_3 \end{pmatrix}\begin{pmatrix} \cos(\theta)&\sin(\theta)\\ -\sin(\theta)& \cos(\theta)\end{pmatrix}.$$ The rest is easy.
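Fact 1 of the question (the exponential of the skew block $B$ being a rotation) can be checked numerically; this is my own sketch, using a power series for $e^B$ truncated at 30 terms:

```python
from math import cos, sin

def mat_mul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def expm(B, terms=30):
    # e^B via the truncated power series of B^n / n!
    R = [[float(i == j) for j in range(3)] for i in range(3)]  # running sum, starts at I
    P = [row[:] for row in R]                                   # current term B^n / n!
    for n in range(1, terms):
        P = [[x / n for x in row] for row in mat_mul(P, B)]
        R = [[r + p for r, p in zip(rr, rp)] for rr, rp in zip(R, P)]
    return R

theta = 0.8
B = [[0.0, -theta, 0.0], [theta, 0.0, 0.0], [0.0, 0.0, 0.0]]
expected = [[cos(theta), -sin(theta), 0.0],
            [sin(theta),  cos(theta), 0.0],
            [0.0,         0.0,        1.0]]
err = max(abs(expm(B)[i][j] - expected[i][j]) for i in range(3) for j in range(3))
print(err)
```

The error is at the level of floating-point rounding, matching the closed form in the question.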
{ "language": "en", "url": "https://math.stackexchange.com/questions/4444873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of the multiplicative Legendre symbol Let $p$ be an odd prime. Show that $\displaystyle\sum_{a=1}^{p-2}\left(\frac{a(a+1)}{p}\right)=-1$. Nota bene: $\left(\frac{a}{p}\right)$ is the $\textbf{Legendre symbol}$.
For $a\in\mathbb F_p^*$ we have $(\frac{a(a+1)}{p})=(\frac{(a+1)/a}{p})$ (since $(\frac{a^2}p)=1$ and since the Legendre symbol is multiplicative), so $$\sum\limits_{a\in\mathbb F_p^*}\left(\frac{a(a+1)}{p}\right)=\sum\limits_{a\in\mathbb F_p^*}\left(\frac{1+1/a}{p}\right)=\sum\limits_{b\in\mathbb F_p^*}\left(\frac{1+b}{p}\right)=-\left(\frac1p\right)+\sum\limits_{c\in\mathbb F_p}\left(\frac cp\right)$$ and since the sum over $c$ is $0$ (as many quadratic residues as non-residues) and since $\left(\frac1p\right)=1$, you get your result.
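A brute-force verification for small primes (my own addition), computing the Legendre symbol by Euler's criterion $\left(\frac{a}{p}\right)\equiv a^{(p-1)/2}\pmod p$:

```python
def legendre(a, p):
    r = pow(a, (p - 1) // 2, p)     # Euler's criterion
    return -1 if r == p - 1 else r  # otherwise r is 0 or 1

# for a in 1..p-2 neither a nor a+1 is divisible by p, so no zero terms occur
results = {p: sum(legendre(a*(a + 1) % p, p) for a in range(1, p - 1))
           for p in (3, 5, 7, 11, 13, 17)}
print(results)  # every value is -1
```

Each sum is $-1$, in agreement with the identity.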
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is convolution integrable in $L_1(\mathbb{R}^n)$? Let $h\in L_1(\mathbb{R}^n)$, i.e. $\int\limits_{\mathbb{R}^n}|h(y)|\,dy<+\infty$. Consider the function $F(x)=\int\limits_{\mathbb{R}^n}|h(x-y)|\,dy$. It is known that $F(x)$ is bounded and uniformly continuous. I am trying to find conditions under which $F(x)\in L_1(\mathbb{R}^n)$, i.e. $$ \int\limits_{\mathbb{R}^n}F(x)\,dx= \int\limits_{\mathbb{R}^n}\biggl(\int\limits_{\mathbb{R}^n}|h(x-y)|\,dy\biggr)\,dx<+\infty. $$ But I can't even find a specific example for which my assumption holds. Is this possible? Because I think $F(x)$ will also be an increasing function, so its integral will always diverge.
No. Indeed, $F(x)\equiv \|h\|_{L_1(\mathbb{R}^n)}$, so that $$ \int\limits_{\mathbb{R}^n}F(x)\,dx=\|h\|_{L_1(\mathbb{R}^n)}\int \limits_{\mathbb{R}^n}\,dy=\infty. $$ Thanks for comments to @Chris and @RyszardSzwarc!
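A numeric illustration of my own (the triangle-shaped $h$ is an arbitrary choice): $F$ really is the constant $\|h\|_{L_1}$.

```python
def h(y):
    return max(0.0, 1.0 - abs(y))  # triangle supported on [-1, 1], so ||h||_1 = 1

def F(x, M=20_000, L=50.0):
    # F(x) = integral of |h(x - y)| dy, truncated to [-L, L] (h has compact support)
    dy = 2*L/M
    return sum(abs(h(x - (-L + (i + 0.5)*dy))) for i in range(M)) * dy

print(F(0.0), F(3.0), F(-7.5))  # each approximately 1
```

The value is independent of $x$, so $F$ is never integrable over $\mathbb{R}$ unless $h=0$.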
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}\le \frac{n(n-1)}{2(2n-1)^2}$ Let $n \ge 2$ be an integer and let $x_1,...,x_n$ be positive reals such that $$\sum_{i=1}^nx_i=\frac{1}{2}$$ Prove that $$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}\le \frac{n(n-1)}{2(2n-1)^2}$$ Here is the source of the problem (in French). Edit: I'll present my best bound yet on $$\sum_{1\le i<j\le n}\frac{x_ix_j}{(1-x_i)(1-x_j)}=\frac{1}{2}\left(\sum_{k=1}^n\frac{x_k}{1-x_k}\right)^2-\frac{1}{2}\sum_{k=1}^n\frac{x_k^2}{(1-x_k)^2}$$ This formula was derived in @GCab's answer. First let $a_k=x_k/(1-x_k)$, so we want to prove $$\left(\sum_{k=1}^na_k\right)^2-\sum_{k=1}^na_k^2\le \frac{n(n-1)}{(2n-1)^2}$$ But since each $x_k<\frac12$ gives $1-x_k>\frac12$, we have $$\frac{x_k}{1-x_k}<2x_k\implies \sum_{k=1}^na_k<1 \quad (1)$$ Hence $$\left(\sum_{k=1}^na_k\right)^2\le \sum_{k=1}^na_k$$ Meaning $$\left(\sum_{k=1}^na_k\right)^2-\sum_{k=1}^na_k^2\le\sum_{k=1}^na_k(1-a_k)$$ Now consider the following function $$f(x)=\frac{x}{1-x}\left(1-\frac{x}{1-x}\right)$$ $f$ is concave on $(0,1)$ and by the tangent line trick we have $$f(x)\le f'(a)(x-a)+f(a)$$ Set $a=\frac{1}{2n}$ to get $$a_k(1-a_k)\le\frac{4n^2\left(2n-3\right)}{\left(2n-1\right)^3}\left(x_k-\frac{1}{2n}\right)+ \frac{2(n-1)}{(2n-1)^2}$$ Now we sum to finish: $$\sum_{k=1}^na_k(1-a_k)\le \frac{2n(n-1)}{(2n-1)^2}$$ Maybe by tweaking $(1)$ a little bit we can get rid of this factor of $2$.
Hint: writing $T_n$ and $S_n$ for $$ \begin{array}{l} T_n = \sum\limits_{1 \le i < j \le n} {a_i a_j } \\ S_n = \sum\limits_{1 \le i,j \le n} {a_i a_j } = \sum\limits_{1 \le i \le n} {\sum\limits_{1 \le j \le n} {a_i a_j } } = \sum\limits_{1 \le i \le n} {a_i } \sum\limits_{1 \le j \le n} {a_j } = \left( {\sum\limits_{1 \le i \le n} {a_i } } \right)^2 \\ \end{array} $$ you also have that $$ \begin{array}{l} S_n = \sum\limits_{1 \le i,j \le n} {a_i a_j } = \sum\limits_{1 \le i < j \le n} {a_i a_j } + \sum\limits_{1 \le i = j \le n} {a_i a_j } + \sum\limits_{1 \le j < i \le n} {a_i a_j } = \\ = 2T_n + \sum\limits_{1 \le i \le n} {a_i ^2 } \\ \end{array} $$
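The two identities in the hint are easy to confirm numerically (my own check, with arbitrary values):

```python
a = [0.3, 1.2, -0.7, 2.5]
n = len(a)
T = sum(a[i]*a[j] for i in range(n) for j in range(i + 1, n))  # T_n
S = sum(a)**2                                                  # S_n
print(S, 2*T + sum(x*x for x in a))  # the two numbers agree
```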
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
Let $z_1$ and $z_2$ be the roots of $az^2+bz+c=0;\,a,b,c\in \mathbb{C}, a\ne 0$ and $w_1,w_2$ be roots of $(a+\bar{c})z^2+(b+\bar{b})z+(\bar{a}+c)=0$ Let $z_1$ and $z_2$ be the roots of $az^2+bz+c=0;\,\,a,b,c\in \mathbb{C}, a\ne 0$ and $w_1,w_2$ be roots of $(a+\overline{c})z^2+(b+\overline{b})z+(\overline{a}+c)=0$. If $|z_1|<1,|z_2|<1$, then prove that $|w_1|=|w_2|=1$ My Progress: We can see that whenever $z$ is a root of $(a+\overline{c})z^2+(b+\overline{b})z+(\overline{a}+c)=0,\,\, \frac{1}{\overline{z}}$ is also a root. Case 1: $w_1=\frac{1}{\overline{w_1}}$ and $w_2=\frac{1}{\overline{w_2}}$ $\implies |w_1|=|w_2|=1$ Case 2: $w_1=\frac{1}{\overline{w_2}}$ and $w_2=\frac{1}{\overline{w_1}}$ $\implies w_1\overline {w_2}=1$
Worst comes to worst, you can bash this. Let $a+\overline{c} = t$ and $b+\overline{b} = 2s$ with $s$ real. Then your quadratic is: $$tw^2+2sw+\overline{t} = 0\implies w_{1,2} = \dfrac{-s\pm\sqrt{s^2-|t|^2}}{t}.$$ Now, observe that the entry inside the square root in the discriminant is real. This means that when you compute the norm of the numerator of $w_1$, (with the choice of plus sign in the middle): $$\left|-s+\sqrt{s^2-|t|^2}\right|^2 = (-s+\sqrt{s^2-|t|^2})(-s-\sqrt{s^2-|t|^2}) = |t|^2$$ and so $|w_1| = 1.$ Can do the same for $|w_2| = 1.$ Key observation here is that $s$ is real and so is $s^2-|t|^2,$ which enables us to compute the norm squared as itself times its conjugate. Lastly, $|z_1|, |z_2| < 1$ ensures $t\neq 0$ from Vieta.
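A numeric spot check of the statement (my own addition, with arbitrarily chosen roots inside the unit disk):

```python
import cmath

def roots(A, B, C):
    # quadratic formula for A z^2 + B z + C = 0
    D = cmath.sqrt(B*B - 4*A*C)
    return (-B + D) / (2*A), (-B - D) / (2*A)

z1, z2 = 0.3 + 0.4j, -0.5 + 0.1j        # |z1|, |z2| < 1
a, b, c = 1 + 0j, -(z1 + z2), z1*z2     # a z^2 + b z + c has roots z1, z2 (Vieta)
w1, w2 = roots(a + c.conjugate(), b + b.conjugate(), a.conjugate() + c)
print(abs(w1), abs(w2))  # both equal 1 up to rounding
```

Note that $b+\overline{b}$ is real and the leading and constant coefficients are conjugates, which is exactly the structure the answer exploits.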
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Geodesics on a pseudosphere I am trying to show the following: Let \begin{equation} \gamma(t) = \begin{pmatrix} \frac{1}{t} \\ \sqrt{1 - \frac{1}{t^2}} + \cosh^{-1}(t) \end{pmatrix} \end{equation} for $t \geq 1$ be the unit-speed parametrization of the tractrix and \begin{equation} \sigma(u,v) = \begin{pmatrix} \frac{1}{u} \cos v \\ \frac{1}{u} \sin v \\ \sqrt{1 - \frac{1}{u^2}} + \cosh^{-1}(u) \end{pmatrix} \end{equation} the surface of revolution obtained by rotating the tractrix around the z-axis. Now I want to show that the geodesics on the pseudosphere (tractroid in this case) are represented in the $(u,v)$-coordinates by arcs of circles centered on the $v$-axis. The first thing that I noted is that the curve is not unit-speed parametrized. So I calculated the coefficients of the first fundamental form (for a surface patches) \begin{align*} E(u,v) &= \dot{f}^2(u) + \dot{g}^2(u) = \frac{3 + u^2}{u^2(u^2 - 1)}, \\ F(u,v) &= \langle \sigma_u(u,v), \sigma_v(u,v) \rangle = 0, \\ G(u,v) &= \frac{1}{u^2} = f^2(u) \end{align*} and now the geodesic equations are \begin{align*} \frac{d}{dt}\left( E \dot{u} + F \dot{v} \right) &= \frac{1}{2}\left( E_u \dot{u}^2 + 2 F_u \dot{u} \dot{v} + G_u \dot{v}^2 \right), \\ \frac{d}{dt}\left( F \dot{u} + G \dot{v} \right) &= \frac{1}{2}\left( E_v \dot{u}^2 + 2 F_v \dot{u} \dot{v} + G_v \dot{v}^2 \right) \end{align*} therefore \begin{align*} \frac{d}{dt} \left(\frac{3 + u^2}{u^2(u^2 - 1)} \dot{u} \right) &= -\frac{4 u \dot{u}^2 }{(u^2 - 1)^2} - \frac{\dot{v}^2 - 3 \dot{u}^2}{u^3}, \\ \frac{d}{dt}\left( \frac{1}{u^2}\dot{v} \right) &= 0 \end{align*} solving the second gives \begin{equation*} \dot{v} = C u^2 \end{equation*} which can also be obtained by clairauts theorem where $\psi$ is the angle between $\dot{\gamma}$ and the meridians of the surface \begin{equation*} \dot{v} = \frac{\sin \psi}{f} = u \sin \psi \end{equation*} and $f \sin \psi$ being constant gives \begin{equation*} f^2 \dot{v} = \frac{1}{u^2} \dot{v} = f \sin \psi 
\stackrel{!}{=} \mbox{const}. \end{equation*} Plugging this $\dot{v}$ into the first geodesic equation leaves us with \begin{equation*} \frac{d}{dt} \left(\frac{3 + u^2}{u^2(u^2 - 1)} \dot{u} \right) = \frac{\dot{u}\left(3 - \frac{4 u^4}{(u^2 - 1)^2} \right)}{u^3} - C^2 u. \end{equation*} From this I can't figure out how the geodesics are arcs of circles centered on the $v$-axis. So I tried using the fact that \begin{equation} I(\dot{\gamma},\dot{\gamma}) \stackrel{!}{=} 1 \end{equation} which gives \begin{equation} E \dot{u}^2 + G \dot{v}^2 = 1 \end{equation} or \begin{equation} \dot{u}^2 \frac{3 + u^2}{u^2 - 1} + C^2 u^4 = u^2 \end{equation} thus \begin{equation} \dot{u} = \pm \frac{\sqrt{u^2(u^2C^2 - 1)}}{\sqrt{ \frac{3 + u^2}{1 - u^2} }}. \end{equation} From this I would get an expression for $\frac{\dot{v}}{\dot{u}}$ hence by separation of variables $(v - v_0) = (\ldots)$ where the right hand side involves an elliptic integral. I would expect that from $\frac{\dot{v}}{\dot{u}} = F(u)$ where $F$ is some function depending only on $u$, I can use separation of variables to get something like \begin{equation} (v-v_0)^2 + u^2 = \frac{1}{C} \end{equation} but from the geodesic equation and the ODE it is not possible to obtain such a form. If I would switch the sign from $\cosh^{-1}(t)$ to $-\cosh^{-1}(t)$ this would work. I would be thankful if anybody can give me a hint.
Comment: if $(f, v)$ are taken as polar coordinates, this is the hyperbolic geodesic representation, which has circles passing through the origin and centered on the $x$-axis (or on any radial line); but these are not straight lines on the 2-D surface in 3-space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of two things combined larger than probability of one of them? I am going through a course on basic statistics, and the instructor presented a problem with a solution. To me it looks like the problem does not have a solution at all, let alone the one posted by the instructor. Most likely I am wrong; could you point out where? Below is the problem: A station along Route 66 sells gas and postcards. The probability that a customer buys postcards is .4. The probability that a customer leaves without buying anything is .3. The probability that the customer buys both gas and postcards is .6. What is the probability that the customer buys gas? Answer: .5 The way I see it: P(gas and postcard) = 0.6, P(postcard) = 0.4, P(gas or postcard) = 0.7. However, P(postcard) should be >= P(gas and postcard), because P(postcard) = P(postcard and gas) + P(postcard and no gas). What am I missing?
A station along Route 66 sells gas and postcards. The probability that a customer buys postcards is .4. The probability that a customer leaves without buying anything is .3. The probability that the customer buys both gas and postcards is .6. What is the probability that the customer buys gas? Answer: .5 can be read two ways, both of which lead to negative probabilities. One is that the $0.4$ probability is for postcards with or without gas, in which case the other information suggests the probability of buying postcards but not gas is an impossible $-0.2$, as in this diagram with the four possible purchase probabilities adding up to $1$: while the other is that the $0.4$ probability is for postcards without gas, in which case the other information suggests the probability of buying gas but not postcards is an impossible $-0.3$, as in this diagram Another possibility is that the question has been transcribed wrongly, with two of the numbers transposed, and should have said something like: A station along Route 66 sells gas and postcards. The probability that a customer buys postcards (with or without gas) is $\mathbf{0.6}$. The probability that a customer leaves without buying anything is $0.3$. The probability that the customer buys both gas and postcards is $\mathbf{0.4}$. What is the probability that the customer buys gas (with or without postcards)? Answer: $0.5$ which can be illustrated with this diagram where $0.2=0.6-0.4$ and $0.1=1-0.6-0.3$ and $0.5=0.4+0.1$
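The arithmetic in both readings can be spelled out (my own check):

```python
# First reading of the original numbers: P(postcards) = 0.4, P(both) = 0.6
p_postcards_only = 0.4 - 0.6   # negative, hence impossible
print(p_postcards_only)

# Transposed reading: P(postcards) = 0.6, P(both) = 0.4, P(nothing) = 0.3
p_gas_only = 1 - 0.6 - 0.3     # 0.1
p_gas = 0.4 + p_gas_only       # 0.5, the posted answer
print(p_gas)
```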
{ "language": "en", "url": "https://math.stackexchange.com/questions/4445915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniqueness of solution for second order differential equation with two Dirichlet conditions Given is the differential equation $-cu''(x) + (u'(x))^2 - 1 = 0$ in $(-1,1)$ and $u(-1)=u(1)=0$ for some constant $c>0$. I need to show that there exists a unique solution $u\in C^2(-1,1)\cap C([-1,1])$ for the Dirichlet problem given above. I already know that $u(x)= -c \log\left(\frac{ e^{\frac{x}{c}} + e^{-\frac{x}{c}}}{e^{\frac{1}{c}} + e^{-\frac{1}{c}}} \right)$ is a solution, i just can't show that it is unique. I have also tried it via contradiction but it didn't work. Has anybody got an idea in which book or paper to differential equations one might find a theorem on the uniqueness of a solution of second order differential equations? I only found some theorems for problems with one Dirichlet condition and one condition for the first derivative but for two Dirichlet conditions and no condition for the first derivative I just could not find anything! Thank you all for your help, best regards!
Assume $u,v:[-1,1] \to \mathbb{R}$ are different solutions of this BVP. W.l.o.g. assume that $v(\xi)< u(\xi)$ for some $\xi \in (-1,1)$. Choose $\varepsilon > 0$ minimal such that $$ v_\varepsilon(x):=v(x)+ \varepsilon \ge u(x) \quad (x \in [-1,1]). $$ Since $\varepsilon$ is minimal there is some $x_0 \in (-1,1)$ such that $v_\varepsilon(x_0)=u(x_0)$. Since $v_\varepsilon - u \ge 0$ on $[-1,1]$ and $v_\varepsilon(x_0) - u(x_0)= 0$ we have $v'_\varepsilon(x_0)=u'(x_0)$. Since $v_\varepsilon$ is a solution of the differential equation both functions $u, v_\varepsilon$ are solutions of the same initial value problem at $x_0$. Hence $v_\varepsilon = u$, a contradiction (as $v_\varepsilon(1) \not= u(1)$).
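The closed-form solution mentioned in the question can be verified numerically with finite differences (my own sketch; the value $c=0.7$ and the sample points are arbitrary choices):

```python
from math import exp, log

c = 0.7

def u(x):
    # the candidate solution from the question
    return -c * log((exp(x/c) + exp(-x/c)) / (exp(1/c) + exp(-1/c)))

h = 1e-5
residuals = []
for x in (-0.5, 0.0, 0.8):
    up = (u(x + h) - u(x - h)) / (2*h)           # central difference for u'
    upp = (u(x + h) - 2*u(x) + u(x - h)) / (h*h)  # central difference for u''
    residuals.append(-c*upp + up*up - 1)          # should be ~0
print(residuals, u(-1.0), u(1.0))
```

The residuals vanish to finite-difference accuracy and the boundary values are $0$, confirming it solves the BVP whose solution the answer shows is unique.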
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f_n(x) = (x-\frac{1}{n})^2$ for $x\in[0,1]$. Does the sequence $\{f_n\}$ converge pointwise on the set $[0,1]$? The question also asks for the limit function. So far I have: $$f(x)=\lim_{n\rightarrow\infty}f_n(x)$$ $$f(x)=\lim_{n\rightarrow\infty}(x^2-\frac{2x}{n}+\frac{1}{n^2})$$ $$f(x)=x^2 \;\forall x\in[0,1]$$ I am stuck from here.
Your approach is correct, and indeed $f_{n}\to x^{2}$ pointwise on $[0,1]$. The definition of pointwise convergence of a sequence of functions $(f_{n}(x))_{n\in \mathbb{N}}$ on a set $I\subseteq \mathbb{R}$ says: $f_{n}\to f$ pointwise on $I$ if, and only if, $$(\color{red}{\forall x\in I})(\forall \varepsilon >0)(\color{red}{\exists N\in \mathbb{N}})(\forall n\in \mathbb{N})(n>N\implies |f_{n}(x)-f(x)|<\varepsilon).$$ Let the sequence of functions be $f_{n}(x)=\left(x-\frac{1}{n}\right)^{2}$ and let us show that $f_{n}\to f$ pointwise on $[0,1]$, with $f(x)=x^{2}$. Let $x\in [0,1]$ and let $\varepsilon >0$. By the Archimedean property there exists a natural number $N>\frac{3}{\varepsilon}$. If $n>N$ we have \begin{align*} |f_{n}(x)-f(x)|&=\left|\left(x-\frac{1}{n}\right)^{2}-x^{2}\right|\\&=\left|x^{2}-\frac{2x}{n}+\frac{1}{n^{2}}-x^{2}\right|\\ &\leqslant \frac{|2x|}{n}+\frac{1}{n^{2}},\quad 0\leqslant x\leqslant 1,\\&\leqslant \frac{2}{n}+\frac{1}{n^{2}}\\&\leqslant \frac{3}{n}\\&<\frac{3}{N}\\&<\varepsilon. \end{align*} Therefore $f_{n}\to f$ pointwise on $[0,1]$. Moreover $f_{n}\rightrightarrows f$ uniformly on $[0,1]$. The definition of uniform convergence of a sequence of functions $(f_{n}(x))_{n\in \mathbb{N}}$ on a set $I\subseteq \mathbb{R}$ says: $f_{n}\rightrightarrows f$ uniformly on $I$ if, and only if, $$(\forall \varepsilon >0)(\color{red}{\exists N\in \mathbb{N}})(\forall n\in \mathbb{N})(\color{red}{\forall x\in I})(n>N\implies |f_{n}(x)-f(x)|<\varepsilon).$$ Let the sequence of functions be $f_{n}(x)=\left(x-\frac{1}{n}\right)^{2}$ and let us show that $f_{n}\rightrightarrows f$ uniformly on $[0,1]$, with $f(x)=x^{2}$. Let $\varepsilon >0$ and choose $N\in\mathbb{N}$ such that $\frac{1}{N^{2}}<\frac{\varepsilon}{2}$ and $\frac{2}{N}<\frac{\varepsilon}{2}$; since $0\leqslant x\leqslant 1$ gives $\frac{2x}{n}\leqslant \frac{2}{n}$, this choice of $N$ does not depend on $x$.
Thus for all $n>N$ and all $x\in[0,1]$ we have \begin{align*} |f_{n}(x)-f(x)|&=\left|\left(x-\frac{1}{n}\right)^{2}-x^{2}\right|\\&=\left|-\frac{2x}{n}+\frac{1}{n^{2}}\right|\\&\leqslant \frac{2x}{n}+\frac{1}{n^{2}}\\&\leqslant \frac{2}{n}+\frac{1}{n^{2}}\\&<\frac{2}{N}+\frac{1}{N^{2}}\\&<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}\\&=\varepsilon \end{align*} (the last strict inequality holding for any $N$ with $\frac{2}{N}<\frac{\varepsilon}{2}$ and $\frac{1}{N^{2}}<\frac{\varepsilon}{2}$). Therefore $f_{n}\rightrightarrows f$ uniformly on $[0,1]$. Alternatively, notice that $$\sup_{x\in [0,1]}\left|f_{n}(x)-f(x)\right|\to 0,\quad n\to +\infty.$$ Therefore $f_{n}\rightrightarrows f$ uniformly on $[0,1]$, and hence $(f_{n}(x))_{n\in \mathbb{N}}$ is also uniformly Cauchy.
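The sup-norm criterion mentioned at the end can be checked directly (a small snippet of my own):

```python
def f_n(x, n):
    return (x - 1/n)**2

def sup_diff(n, M=1000):
    # approximate sup over [0,1] of |f_n(x) - x^2| on a grid of M+1 points
    return max(abs(f_n(i/M, n) - (i/M)**2) for i in range(M + 1))

print([sup_diff(n) for n in (10, 100, 1000)])
# the exact supremum is 2/n - 1/n^2, attained at x = 1
```

The supremum tends to $0$, which is exactly uniform convergence.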
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computing limits using Taylor expansions and $o$ notation on both sides of a fraction Let's define $o(g(x))$ as usually: $$ \forall x \ne a.g(x) \ne 0 \\ f(x) = o(g(x)) \space \text{when} \space x \to a \implies \lim_{x \to a} \frac{f(x)}{g(x)}=0 $$ In theorem $7.8$, Tom Apostol in his Calculus Vol. $1$ gave and proved the following basic rules for algebra of o-symbols * *$o(g(x)) \pm o(g(x)) = o(g(x)) $ *$o(cg(x)) = o(g(x)) $, if $c \ne 0$ *$f(x) \cdot o(g(x)) = o(f(x)g(x)) $ *$o(o(g(x))) = o(g(x)) $ *$\frac{1}{1 + g(x)} = 1 - g(x) + o(g(x))$ I will use a concrete example to demonstrate the confusion I have, but the question is probably more generally applicable. By using Taylor expansions, compute $\lim_{x \to 0} \frac{1 - \cos x^2}{x^2 \sin x^2}$. \begin{equation} \cos x^2 = 1 - \frac{x^4}{2} + o(x^5) \\ \sin x^2 = x^2 + o(x^4) \end{equation} $$ \lim_{x \to 0} \frac{1 - \cos x^2}{x^2 \sin x^2} = \lim_{x \to 0} \frac{1 - (1 - \frac{x^4}{2} + o(x^5))}{x^2 (x^2 + o(x^4))} = \lim_{x \to 0} \frac{\frac{x^4}{2} + o(x^5)}{x^4 + o(x^6)} $$ What would be an easy way to solve that, without flattening the fraction by the application of case $5$ of the theorem $7.8$ above, while using the provided definition for $o$ notation (and without using a more advanced methods, like L'Hopital's rule)? I saw somewhere $\lim_{x \to 0} \frac{\frac{1}{2} + \frac{o(x^5)}{x^4}}{1 + \frac{o(x^6)}{x^4}} = \frac{1}{2}$, without explanation, suggesting it should be a trivial matter, but I'm not sure why that would be the case. Appendix in how I solved that, relying only on the definition and the $T7.8$. One way to proceed could be to use case $3$ from Theorem $7.8$, in reverse (i.e. $o(x^6) = x^4 o(x^2)$) $$ \lim_{x \to 0} \frac{\frac{x^4}{2} + o(x^5)}{x^4 + o(x^6)} = \lim_{x \to 0} \frac{\frac{1}{2} + o(x)}{1 + o(x^2)} $$ Then I could apply case $5$ from the above theorem and applying case $4$ of the theorem (i.e. $o(o(x^2)) = o(x^2)$), to get $\frac{1}{1+o(x^2)} = 1 - o(x^2)$. 
After applying case $2$ with $c = -1$, we get: $$ \lim_{x \to 0} \frac{\frac{1}{2} + o(x)}{1 + o(x^2)} = \lim_{x \to 0} (\frac{1}{2} + o(x)) (1 + o(x^2)) = \frac{1}{2} $$
A half is the correct answer. Anything that is $o(x^5)$ or $o(x^6)$ is negligible compared with the $x^4$ terms as $x \to 0$: after dividing numerator and denominator by $x^4$, the definition gives $\frac{o(x^5)}{x^4} = x \cdot \frac{o(x^5)}{x^5} \to 0$ and likewise $\frac{o(x^6)}{x^4} = x^2 \cdot \frac{o(x^6)}{x^6} \to 0$, so the quotient tends to $\frac{1/2}{1} = \frac{1}{2}$. Remember that $o(x)$ is just a notation for some other function with that property; it may stand for a long polynomial tail, as it happens to in this case.
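For what it's worth, a quick numerical sanity check of the limit (a sketch; $x$ is kept small but not tiny, to avoid catastrophic cancellation in $1-\cos$):

```python
import math

def f(x):
    # the original quotient (1 - cos(x^2)) / (x^2 * sin(x^2))
    return (1 - math.cos(x * x)) / (x * x * math.sin(x * x))

# approaching 0 from the right, the values settle near 1/2
samples = [f(x) for x in (0.3, 0.1, 0.05)]
```

The values are already within a fraction of a percent of $1/2$ at $x = 0.3$.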
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Summation of roots In this post, Calculate summation of square roots we are shown how to sum square roots. My question is, can we get similarly simple expressions if instead of square roots we choose some other exponent. More specifically, is there a simple approximating expression for $\sum_{i=1}^N i^{2/3}$
Well, writing $\;\displaystyle\sum_{k=1}^N k^a=H_N^{\left(-a\right)}\;$ is just a rewriting using generalized harmonic numbers, and writing this as a partial zeta sum or truncated zeta function $\zeta_N(-a)$ is exactly the same thing! You also get $\;\displaystyle\sum_{k=1}^N k^a =\zeta(-a) - \zeta(-a, N+1)\;$ using the Hurwitz zeta function. For $a$ a positive integer you get Faulhaber's formula, and for real or even complex values $\,a\neq -1\,$ excellent approximations are provided by the Euler-Maclaurin formula (with $s=-a$) : $$\sum_{k=1}^N k^a \sim\zeta(-a)+\frac {N^{a+1}}{a+1}+\frac {N^a}2-\sum_{i\ge 1}\frac{B_{2i}}{(2i)!}\frac{(s+2i-2)!\;N^{a-2i+1}}{(s-1)!}$$ (note that this is an asymptotic expansion, so the sum on the right has to remain finite, with $B_j$ the Bernoulli numbers)
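The stabilization predicted by Euler-Maclaurin is easy to observe numerically: subtracting the two leading terms $\frac{N^{a+1}}{a+1}+\frac{N^a}{2}$ from the partial sum leaves a quantity that converges to a constant (namely $\zeta(-a)$). A sketch with $a=2/3$:

```python
def tail_constant(N, a=2/3):
    # partial sum minus the two leading Euler-Maclaurin terms;
    # this should converge to zeta(-a) as N grows
    s = sum(k ** a for k in range(1, N + 1))
    return s - N ** (a + 1) / (a + 1) - N ** a / 2

c1, c2, c3 = tail_constant(500), tail_constant(1000), tail_constant(2000)
```

Successive differences shrink, consistent with the remaining correction being of order $N^{a-1}$.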
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given $[A,B]=0$ then $[A,f(B)]=0$ Given $A$ and $B$ two operators that commute ($[A,B]=0$) then $A$ commutes with an arbitrary function of $B$ I recently saw this property of commutators in a quantum mechanics course. We didn't get any proof of the property, and we weren't told whether this property is true for any function $f(B)$ or if it needs to fulfil some conditions. I've tried to prove it on my own but I have no idea how to approach the problem, as I haven't taken any course on operator theory. I'd appreciate it if anyone could prove this here and tell me whether it is true for any function. Thanks in advance.
I fear that by asking this here instead of PSE, you are inviting people to overthink it. Almost certainly your context assumes $f(x)$ is well behaved, analytic, etc, and is assumed to amount to its Taylor expansion around the origin. So, by linearity of the commutator, you are supposed to prove that A commutes with all monomials $B^n$. You can prove this by induction, given the identity $$ [A,BC]=[A,B]C + B[A,C]. $$ Now take $C=B^n$ and apply induction. It's not meant to be rocket science; this is a routine identity you'll be using more than once a week.
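The induction step can be checked on a concrete pair of matrices; a minimal sketch with $2\times 2$ matrices (here $B$ is itself a polynomial in $A$, which guarantees $[A,B]=0$; the "function" $f(B)=B^3+B$ is just an illustrative monomial combination):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def commutator(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    return [[XY[i][j] - YX[i][j] for j in range(2)] for i in range(2)]

A = [[1, 1], [0, 1]]
B = matmul(A, A)                            # B = A^2, so [A, B] = 0
assert commutator(A, B) == [[0, 0], [0, 0]]
fB = matadd(matmul(matmul(B, B), B), B)     # f(B) = B^3 + B
c = commutator(A, fB)                       # should vanish by the induction argument
```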
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove sum of two independent Poisson processes is another Poisson process I was trying to prove that the sum of two independent Poisson processes is another Poisson process. I know how to prove that the sum of the Poisson distributions is another Poisson distribution. But I think that is not enough. How can I continue from there?
If $\{N_1(t):t\geqslant 0\}$ and $\{N_2(t):t\geqslant 0\}$ are independent Poisson processes with rates $\lambda_1$ and $\lambda_2$, then for any $0\leqslant t_1 < \cdots < t_m$ \begin{align} N_1(t_1), N_1(t_2)-N_1(t_1),\ldots, N_1(t_m)-N_1(t_{m-1})\\ N_2(t_1), N_2(t_2)-N_2(t_1),\ldots, N_2(t_m)-N_2(t_{m-1}) \end{align} are independent, and hence $$ N(t_1), N(t_2)-N(t_1),\ldots, N(t_m)-N(t_{m-1}) $$ are independent. Moreover, for each $s,t\geqslant 0$ \begin{align} N(s+t)-N(s) &= N_1(s+t)+N_2(s+t)-(N_1(s)+N_2(s))\\ &= N_1(s+t)-N_1(s) + N_2(s+t)-N_2(s), \end{align} so as $N_1(s+t)-N_1(s)\sim\mathsf{Pois}(\lambda_1 t)$ and $N_2(s+t)-N_2(s)\sim\mathsf{Pois}(\lambda_2 t)$, it follows that $$ N(s+t)-N(s)\sim\mathsf{Pois}((\lambda_1+\lambda_2)t), $$ and hence the superposition $N(t)=N_1(t)+N_2(t)$ is a Poisson process with rate $\lambda_1+\lambda_2$.
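The distributional part of the argument, that the convolution of $\mathsf{Pois}(\lambda_1 t)$ and $\mathsf{Pois}(\lambda_2 t)$ is $\mathsf{Pois}((\lambda_1+\lambda_2)t)$, can be verified numerically; a small sketch with arbitrary rates:

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

lam1, lam2 = 1.3, 0.7
# pmf of N1 + N2 by discrete convolution vs. the Pois(lam1 + lam2) pmf
max_err = max(
    abs(sum(pois_pmf(j, lam1) * pois_pmf(k - j, lam2) for j in range(k + 1))
        - pois_pmf(k, lam1 + lam2))
    for k in range(20)
)
```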
{ "language": "en", "url": "https://math.stackexchange.com/questions/4446957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Covering the sphere with connected closed sets I am given the following question . Can $S^{2}$ be covered by finite number of connected closed sets $A_{1},...,A_{n}$ such that * *Each has diameter less than a given $\epsilon$ . *$\cup A_{n}= S^{2}$ *$\displaystyle A_{k}\cap\bigg(\bigcup_{i=1}^{k-1}A_{i}\bigg)$ is connected for all $k\leq n$ ? My attempt:- My first thought was to view it as a standard classic football and prove it for pentagons and hexagons. But then I realized that it is not very simple as ordering the sets such that property $3$ becomes a problem and moreover the pentagons and hexagons have a curvature in this case. So I thought of covering with small enough "curved" squares . But I am struggling to make the argument fully rigorous. I view the height of the sphere which is $1$ as the interval $[0,1]$ and divide into $\frac{1}{n^{2}}=\frac{1}{m}$ many may pieces where $\epsilon>\frac{1}{n}$. Now the surface area of the sphere between $0\leq h\leq \frac{1}{m}$ is just $\frac{2\pi}{m}$ . And so I divide this surface area into $m$ many "curved squares" of area $\frac{2\pi}{m^{2}}$ and stack them so as the property $3$ holds. Now it is hard to find the exact diameter $=\sup\{d(x,y):\,x,y\in A_{i}\}$ . But as the area of the square is taken to be small enough , we can say that it is less than $\epsilon$. If not then we can again scale it to make it so. Now we proceed inductively and cover the sphere up with such squares for each interval of heights $[\frac{k}{m},\frac{k+1}{m}]$ for $k=1,...,m-1$. Now this seems intuitively okay but I am worried about the connectedness part as we move up a level in height and I can't seem to make it fully rigorous. Can someone explain to me if I am going wrong somewhere or can suggest me a better way of solving this ? Any help is appreciated. PS. This was asked in a Algebraic Topology course as a similar argument was used to lift a path $\alpha :[0,1]\to X$ given a space $Y$ and a covering map $p$ .
Look at a globe: Meridians and latitude circles divide $S^2$ into closed triangular surface pieces (around north and south pole) and closed quadrangular surface pieces. Taking sufficiently many meridians and latitude circles ensures that all these pieces have diameter less than the given $ϵ$. Number the pieces as follows: * *Number the latitude circles from north to south by $L_1,\ldots, L_m$ and the meridians counterclockwise by $M_1,\ldots,M_n$. Set formally $M_{jn+i} = M_i$ for all $j \in \mathbb N$ and $i = 1,\ldots, n$. *For $i=1,\ldots, n$ let $A_i$ be the triangular surface piece bounded by $M_i, M_{i+1}$ and $L_1$. *For $j = 1,\ldots, m-1$ and $i = jn+1,\ldots,jn +n$ let $A_i$ be the quadrangular surface piece bounded by $M_{i}, M_{i+1}$ and $L_j, L_{j+1}$. *For $i = mn+1,\ldots,mn +n$ let $A_i$ be the triangular surface piece bounded by $M_{i}, M_{i+1}$ and $L_m$. This gives us $N = (m+1)n$ surface pieces. It is clear that $\displaystyle A_{k}\cap\bigg(\bigcup_{i=1}^{k-1}A_{i}\bigg)$ is connected for all $k\leq N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving $\cos(\log_{4x}(x+1))-\cos(\log_{4x}(4-x))\lt\log_{4x}(4-x)-\log_{4x}(x+1)$ There is an interesting inequality I've stumbled upon on the Internet. It has logarithms and trigonometry, but in contrast to something like this that uses trigonometry and logarithms separately, this one takes such kind of inequality to a whole new level! See for yourself. The task is to solve the following inequality over reals: $$\cos(\log_{4x}(x+1))-\cos(\log_{4x}(4-x))\lt\log_{4x}(4-x)-\log_{4x}(x+1)$$ I've found the domain of the inequality by applying restrictions to real-valued logarithms: its base must not be neither zero nor one, and its argument must be positive. $\begin{cases}4x\gt0,\\4x\neq1,\\4-x\gt0,\\x+1\gt0.\end{cases}\Leftrightarrow\begin{cases}x\gt0,\\x\neq0.25,\\x\lt4,\\x\gt-1.\end{cases}\Leftrightarrow x\in(0;0.25)\cup(0.25;4)$ I tried to use the fact that the difference of cosines has the domain $[-2;2]$ no matter the arguments of the cosines. My assumption comes from the following facts: * *The domain of $\cos(x)$ is $[-1;1]$. *The smaller the subtrahend, the bigger the difference. *The bigger the subtrahend, the smaller the difference. Thus, the domain of $\cos(a)-\cos(b)$ is $[\min(\cos(a))-\max(\cos(b));\max(\cos(a))-\min(\cos(b))]=[-1-1;1-(-1)]=[-2;2]$. QED. This did not work, however, because the right hand side sadly isn't always greater than two. From a mere observation, if we let $u=\log_{4x}(x+1)$, $v=\log_{4x}(4-x)$, and $f(x)=cos(x)$, then the inequality becomes $f(u)-f(v)\lt v-u$, leading to the inequality $f(u)+u\lt f(v)+v$. This may be useful in some way, I guess. I've also tried to step away from arbitrary logs and only use natural ones: $$\cos\left(\dfrac{\log (x+1)}{\log (4x)}\right)-\cos\left(\dfrac{\log (4-x)}{\log (4x)}\right)\lt\dfrac{\log (4-x)-\log(x+1)}{\log (4x)}$$ ...and I'm stuck here. No way to manipulate those cosines of which I'm aware. So, how to approach this kind of inequalities properly?
The points where the two sides can change order are the endpoints of the domain and the point of equality: $$ x= 0,\ \tfrac14,\ \tfrac32,\ 4,$$ where $x=\tfrac32$ comes from $x+1=4-x$, at which both sides vanish. Write $u=\log_{4x}(x+1)$ and $v=\log_{4x}(4-x)$; the inequality reads $\cos u-\cos v< v-u$, i.e. $h(u)<h(v)$ for $h(t)=t+\cos t$. Since $h'(t)=1-\sin t\ge 0$ with only isolated zeros, $h$ is strictly increasing, so the inequality holds exactly when $u<v$. For base $4x>1$ (i.e. $x>\tfrac14$) this means $x+1<4-x$, that is $x<\tfrac32$; for base $4x<1$ (i.e. $0<x<\tfrac14$) the logarithm is decreasing, so $u<v$ would require $x+1>4-x$, which is impossible there. Hence the solution set is the interval $\left(\tfrac14,\tfrac32\right)$; evaluating the two sides at sample points on either side of the critical points confirms this.
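A numerical spot check of the solution set $(\tfrac14,\tfrac32)$ at three sample points (a sketch, using $\log_{4x} y = \ln y / \ln 4x$):

```python
import math

def lhs(x):
    b = math.log(4 * x)
    return math.cos(math.log(x + 1) / b) - math.cos(math.log(4 - x) / b)

def rhs(x):
    b = math.log(4 * x)
    return math.log(4 - x) / b - math.log(x + 1) / b

inside   = lhs(1.0) < rhs(1.0)   # x = 1 lies in (1/4, 3/2): inequality holds
right_of = lhs(2.0) < rhs(2.0)   # x = 2 lies outside: inequality fails
left_of  = lhs(0.1) < rhs(0.1)   # x = 0.1 in (0, 1/4): fails (base < 1)
```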
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof: $\sin \mathrm{A}+\sin \mathrm{B}+\sin \mathrm{C} \leq \cos \frac{\mathrm{A}}{2}+\cos \frac{\mathrm{B}}{2}+\cos \frac{\mathrm{C}}{2}$ Let $\mathrm{I} \subseteq \mathbf{R}$ be an interval and $\mathrm{f}: \mathrm{I} \rightarrow \mathbf{R}$. We have this inequality for any $x, y, z \in I$, $$ f\left(\frac{x+y}{2}\right)+f\left(\frac{y+z}{2}\right)+f\left(\frac{z+x}{2}\right) \geq f(x)+f(y)+f(z) . $$ If A, B, C are the measures of the angles of a triangle $ \mathrm {ABC} $, expressed in radians, show that: $$\sin \mathrm{A}+\sin \mathrm{B}+\sin \mathrm{C} \leq \cos \frac{\mathrm{A}}{2}+\cos \frac{\mathrm{B}}{2}+\cos \frac{\mathrm{C}}{2}$$ I have to prove the inequality by using f. I don't know what "function" to take in order to prove inequality.
Following up on my comment, let $\,f(t) = \sin(t)\,$ and $\,x = B+C\,$, $\,y = C+A\,$, $\,z = A+B\,$. * *$f(x) = \sin \left(B+C\right) = \sin \left(\pi - A\right) = \sin \left(A\right)$ *$f\left(\frac{y+z}{2}\right) = f\left(\frac{(C+A)+(A+B)}{2}\right) = f\left(\frac{(A+B+C) + A}{2}\right) = f\left(\frac{\pi}{2} + \frac{A}{2}\right) = \cos\left(\frac{A}{2}\right)$ Then: $\quad\quad\quad\quad f\left(\frac{x+y}{2}\right)+f\left(\frac{y+z}{2}\right)+f\left(\frac{z+x}{2}\right) \geq f(x)+f(y)+f(z) \quad\quad\quad\quad\quad\text{(1)} \\ \iff\quad \cos \frac{A}{2} + \cos \frac{B}{2} + \cos \frac{C}{2} \ge \sin A + \sin B + \sin C$ What remains to be proved is inequality $\,(1)\,$ itself. That follows because $\,\sin(t)\,$ is concave on $\,[0, \pi]\,$, so $\,f\left(\frac{x+y}{2}\right) \ge \frac{f(x)+f(y)}{2}\,$ by Jensen's inequality. Writing it also for $\,(y,z)\,$ and $\,(z,x)\,$ then adding the three inequalities gives $\,(1)\,$.
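A quick numerical check of the final inequality on a few triangles (a sketch; equality is expected for the equilateral triangle, where all angles are $\pi/3$):

```python
import math

def gap(A, B):
    # cos(A/2) + cos(B/2) + cos(C/2) - (sin A + sin B + sin C), with C = pi - A - B
    C = math.pi - A - B
    return (math.cos(A / 2) + math.cos(B / 2) + math.cos(C / 2)
            - math.sin(A) - math.sin(B) - math.sin(C))

triangles = [(1.0, 1.0), (0.5, 0.7), (math.pi / 3, math.pi / 3), (2.0, 0.6)]
gaps = [gap(A, B) for (A, B) in triangles]
```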
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solve $y'+2xy^2 = 1$ Solve $y'+2xy^2 = 1$ I've tried to test if the equation is exact or if it can be made exact. Nothing worked and then I understood that it's a non-linear equation. I've tried a lot to think of a substitution so that I can do separation of variables but that also didn't work. It's also not even a Bernoulli's equation. This might be a simple problem for some but I can't get my head around it. Can I please get some hint? I'm not asking for a full solution so please don't close this question. I can't solve it and I need some help doing it on my own.
This is not a full solution, just some pointers which might help on the way towards a solution. This is a Riccati equation which, according to Wolfram Alpha, has solutions in the so-called Airy functions $\operatorname{Ai}$ and $\operatorname{Bi}$, which are defined as solutions of the related differential equation $$\frac{d^2y}{dx^2} -xy = 0$$ These solutions cannot be expressed in elementary functions, but are rather commonly expressed as trigonometric integrals $$Ai(x) = \frac 1 \pi \int_0^\infty \cos\left(\frac{t^3}3 +xt\right)dt$$ Moreover, we can easily see by ocular inspection that there can't be a polynomial solution to our question: if $y$ were a polynomial of degree $d$, then $y'$ would have degree $d-1$ while $2xy^2$ would have degree $2d+1$, so the two could never combine to the constant $1$. If we slightly modify the differential equation to $$y' + y^2 = 1$$ it becomes a separable equation: $$\frac{y'}{(1+y)(1-y)} = 1$$ which we can integrate $$\int\frac{dy}{(1+y)(1-y)} = \int 1 dx$$ This gives us a hint that exponential functions will be involved, as the left hand side gives us logarithms in $y$ (after some partial fraction decomposition). How to handle the $2x$ factor, I don't know. Maybe we can do $\sqrt{2x}$ and a variable substitution of some kind.
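Indeed, integrating the modified separable equation $y'+y^2=1$ yields exponentials: $y(x)=\tanh(x+c)$ is a solution, which is easy to confirm numerically (a sketch using a central finite difference for $y'$):

```python
import math

def residual(x, h=1e-6):
    # y = tanh solves y' + y^2 = 1; check with a central difference for y'
    dy = (math.tanh(x + h) - math.tanh(x - h)) / (2 * h)
    return dy + math.tanh(x) ** 2 - 1

res = max(abs(residual(x)) for x in (-2.0, -0.5, 0.0, 1.0, 3.0))
```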
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f(x)=-e^{-x}$ on $(0,\infty)$, Prove that $\inf_{x>0}$ $f(x)=-1$. If $f(x)=-e^{-x}$ on $(0,\infty)$, Prove that $inf_{x>0}$ $f(x)=-1$. For all $x>0$, $f(x)>-1$ then $-1$ is a lower bound of $f$ on $(0,\infty)$. I did it up to here. And I want to find a any $\delta$ to solve the problem, as in the following example. But I don't know what to do. I need some help. Example: $g(x)=\frac{1}{x}$, Prove $\inf_{x>0}$ $g(x)=0$. Since $0<g(x)$ for every $x>0$, $0$ is a lower bound of f on $(0,\infty)$. Suppose $\delta$ is any positive real number. If we choose $x>\frac{1}{\delta}$, then $f(x)=\frac{1}{x}<\delta$ and so $\delta$ is not a lower bound of f on $(0,\infty)$. Since $\delta>0$ was arbitrary, if follows that $0=\inf_{x>0}f(x)$.
$f(x)=-e^{-x}$, so $f'(x)=e^{-x}>0$ for every $x$; hence $f$ is monotonically increasing on $(0,\infty)$ and $$\lim_{x\to 0^+}f(x)=-e^{0}=-1.$$ Since $e^{-x}<1$ for $x>0$, we have $f(x)=-e^{-x}>-1$ for all $x>0$, so $-1$ is a lower bound, as you observed. To finish in the style of your example, let $\delta\in(0,1)$ be arbitrary and show that $-1+\delta$ is not a lower bound: we need some $x>0$ with $$-e^{-x}<-1+\delta \iff e^{-x}>1-\delta \iff x<\ln\frac{1}{1-\delta}.$$ Since $\ln\frac{1}{1-\delta}>0$, any $x\in\left(0,\ln\frac{1}{1-\delta}\right)$ works; note that we do not need $f$ to attain the value $-1+\delta$, only to go below it. (For $\delta\ge 1$ this is immediate, since $f(x)<0$ everywhere.) Since $\delta>0$ was arbitrary, $\inf_{x>0}f(x)=-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which ranges of $k$ is this collection a topology on $\mathbb{R}^2$? Let $S$ be a (non-empty) subset of $\mathbb{R}$. $$ \mathscr{T} := \left\{ \emptyset, \mathbb{R}^2 \right\} \bigcup \left\{ G_k \colon k \in S \right\}, $$ where $$ G_k := \left\{ \, (x, y) \in \mathbb{R}^2 \, \colon \, y < x-k \, \right\}. $$ The above collection $\mathscr{T}$ is a topology on $\mathbb{R}^2$ if $S = \mathbb{R}$ or if $S = \mathbb{N}$. Am I right? What if $S = \mathbb{Q}$? I think then $\mathscr{T}$ won't be a topology on $\mathbb{R}^2$, because if $\left( k_n \right)_{n \in \mathbb{N} }$ be a sequence of rational numbers converging from the right to $\sqrt{2}$, for example, then we would obtain $$ \cup_{n = 1}^\infty G_{k_n} = \left\{ \, (x, y) \in \mathbb{R}^2 \, \colon \, y < x - \sqrt{2} \, \right\}, $$ which would not be in our collection $\mathscr{T}$. Am I right?
Your reasoning is correct. Here's a more general observation. Your sets have the following interesting property: if $x_n$ is a decreasing and convergent (in the standard, Euclidean topology) sequence then $$G_{\lim_{n=1}^\infty x_n}=\bigcup_{n=1}^\infty G_{x_n}$$ which I leave as an exercise. Lemma. This collection is a topology if and only if $S$ contains $\inf$ of its every subset. Proof. "$\Rightarrow$" Assume that $A\subseteq S$, $A\neq\emptyset$ and let $s=\inf A$. If $s\in A$, then we are done. Otherwise there is a decreasing sequence $(s_n)\subseteq A$ such that $s_n\to s$. But then $G_s=G_{\lim s_n}=\bigcup G_{s_n}$ has to belong to the topology. Which means that $s\in S$. "$\Leftarrow$" Intersection of finitely many $G_k$ is of the form $G_k$ of course. Assume that we have a collection $\{G_i\}_{i\in I}$. We want to show that the union is in the topology. Since $I\subseteq S$ then by our assumption $s=\inf I$ is in $S$ as well. I leave as an exercise that $\bigcup_{i\in I}G_i=G_s$, and thus we have a topology. $\Box$ A straight forward conclusion is that indeed your collection is a topology for $S=\mathbb{R}$ and $S=\mathbb{N}$, but not for $S=\mathbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4447933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix exponential of infinite antisymmetric matrix with entries only next to its diagonal What is the exponential $\exp (t A)$ of the operator $A$ whose components are given by $A_{nm} = \delta_{nm-1} \sqrt{n+1} - \delta_{nm+1}\sqrt{n}$ where the $n,m \in \mathbb{N}_0$. If we just consider indices up to 1, the answer is easy. However, it doesn't seem to be straight forward to generalize to infinite dimensions. I tried to do it with mathematica by increasing the dimension, but it didn't seem to converge.
Define the infinite matrix $$ A_{n,m} := \delta_{n,m-1} \sqrt{n+1} - \delta_{n,m+1}\sqrt{n} \tag{1} $$ where $\,n,m\ge 0.\,$ Given a variable $\,t\,$ define another infinite matrix $$ B := \exp(t\,A) = \sum_{n=0}^\infty A^n \frac{t^n}{n!}. \tag{2} $$ The result is that $$ B_{n,m} = e^{-t^2/2}\,t^{m-n}\sqrt{\frac{m!}{n!}}\sum_{k=0}^n (-1)^k{n \choose k} \frac{t^{2k}}{(k+m-n)!} \tag{3} $$ (with the convention that $\frac{1}{j!}=0$ for negative integers $j$, so the terms with $k<n-m$ vanish). Note that $\, B_{n,m} = (-1)^{n+m}B_{m,n}. $
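The closed form $(3)$ can be checked against a truncated matrix exponential; a sketch (truncating at $25\times 25$ introduces only a negligible boundary error for small $t$, since the entries compared sit far from the boundary):

```python
import math

N, t = 25, 0.3   # truncation size and (small) time parameter

A = [[(math.sqrt(n + 1) if m == n + 1 else (-math.sqrt(n) if m == n - 1 else 0.0))
      for m in range(N)] for n in range(N)]

B = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]   # identity
term = [row[:] for row in B]
for j in range(1, 50):               # exp(tA) = sum_j (tA)^j / j!
    term = [[sum(term[i][k] * A[k][l] for k in range(N)) * t / j
             for l in range(N)] for i in range(N)]
    for i in range(N):
        for l in range(N):
            B[i][l] += term[i][l]

# formula (3) predicts B[0][0] = e^{-t^2/2}, B[0][1] = t e^{-t^2/2},
# and by the antisymmetry, B[1][0] = -B[0][1]
err00 = abs(B[0][0] - math.exp(-t * t / 2))
err01 = abs(B[0][1] - t * math.exp(-t * t / 2))
err10 = abs(B[1][0] + t * math.exp(-t * t / 2))
```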
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $\sup$ and $\inf$ of $x\sin(\frac{1}{x})$ on $(0,\infty)$. Find $\sup$ and $\inf$ of $x\sin(\frac{1}{x})$ on $(0,\infty)$. I found an example. Example: Let $f(x)=x\sin(\frac{1}{x})$. We are interested in its behaviour for $x\geq1$. $f'(x)=\sin(\frac{1}{x})-\frac{1}{x}\cos(\frac{1}{x})$. $f''(x)=-\frac{1}{x^3}\sin(\frac{1}{x})$, and this is negative for any $x\geq1$. Then, $f'$ is decreasing. But $\lim_{x\rightarrow\infty}f'(x)=0$. Then $f'(x)>0$, for all $x\geq1$. So $f$ is increasing on $[1,\infty)$. Therefore the sequence $n\sin(\frac{1}{n})$ is increasing for all $n\geq1$. Therefore $$\inf n\sin(\frac{1}{n})=1\cdot\sin(\frac{1}{1}) = \sin(1).$$ $$\sup n\sin(\frac{1}{n})=\lim_{n\rightarrow\infty}n\sin(\frac{1}{n})=\lim_{n\rightarrow\infty}\frac{\sin(\frac{1}{n})}{\frac{1}{n}}=1.$$ I understood that the above example is when $x$ is more than $1$. However, I have to find inf and sup in the interval $(0,\infty)$, so I have to find inf and sup in the interval $(0,1)$. But I don't know what to do. I need some help.
hint We know that $$(\forall X>0)\;\; \sin(X)<X$$ So $$(\forall x>0)\;\;x\sin(\frac 1x)<\color{red}{1}$$ As you said $$\lim_{n\to+\infty}n\sin(\frac 1n)=\color{red}{1}$$ thus $$\sup_{x>0}f(x)=\color{red}{1}$$ (and the supremum is not attained). For the infimum, the envelope bound $$(\forall x>0)\;\; -x\le f(x)\le x$$ is a useful guide: the curve of $f$ touches the line $y=-x$ when $\sin(\frac 1x)=-1$, i.e. at $x=\frac{1}{\frac{3\pi}{2}+2k\pi}$, and the largest such point $x=\frac{2}{3\pi}$ gives the value $-\frac{2}{3\pi}\approx -0.212$. However, this is not quite the infimum: substituting $u=\frac 1x$ gives $$\inf_{x>0}x\sin\left(\frac 1x\right)=\inf_{u>0}\frac{\sin u}{u},$$ and $\frac{\sin u}{u}$ attains its minimum on the first negative lobe, at the root $u^*\approx 4.4934$ of $\tan u = u$, so $$\inf_{x>0}f(x)=\frac{\sin u^*}{u^*}\approx -0.2172,$$ which is attained and hence is in fact a minimum.
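Substituting $u=1/x$ turns the problem into minimizing $\sin(u)/u$ over $u>0$; a quick numerical scan over the first negative lobe (a sketch) locates the minimum:

```python
import math

# after substituting u = 1/x, minimize sin(u)/u; scan a grid over [3, 7]
us = [3 + i * 1e-4 for i in range(40000)]
vals = [math.sin(u) / u for u in us]
mn = min(vals)
u_star = us[vals.index(mn)]
envelope_value = -2 / (3 * math.pi)    # value where f touches the line y = -x
```

The scan puts the minimum near $u^* \approx 4.4934$ (the root of $\tan u = u$), with value $\approx -0.2172$, slightly below the envelope value $-\frac{2}{3\pi} \approx -0.2122$.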
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can $\sum_{n=1}^{k} \frac1{\sqrt{n}}$ be an integer for $k>1$? In one of new YouTube videos, I've seen the problem: to show that $$16 < \sum_{n=1}^{80} \frac1{\sqrt{n}} < 17$$ I've solved the problem easily, using $$\int_2^{81}\frac1{\sqrt{x}}\, dx < \sum_{n=2}^{80} \frac1{\sqrt{n}} < \int_1^{81}\frac1{\sqrt{x}}\, dx$$ But this problem aimed me on another problem: Can the sum $\sum_{n=1}^{k} \frac1{\sqrt{n}}$ be an integer number at some positive integer $k>1$? Is this problem well-known open or closed problem? Or maybe there is some approach allowing to solve this problem easily without using hard mathematical skills? I've seen the same problem without square roots and it was solved easily.
This is not an integer, for the same reason that $H_n$ is not an integer. Take the largest integer $k$ such that $2^k \le n$ (so $k\ge 1$ for $n\ge 2$). If $M = 2^{k - 1} \cdot (2n - 1)!!$, then $\frac{\sqrt{M}}{\sqrt{i}}=\sqrt{M/i}$ is an algebraic integer for any $i \neq 2^k$ between $1$ and $n$: writing $i=2^a m$ with $m$ odd, we have $a\le k-1$ (since $2\cdot 2^k>n$ and $i\ne 2^k$) and $m \mid (2n-1)!!$, so $M/i$ is an integer. But $M/2^k=(2n-1)!!/2$ is not an integer, and the square root of a rational number that is not an integer is not an algebraic integer. So $$\sqrt{M} \cdot \sum_{i = 1}^n \frac{1}{\sqrt{i}} = \sum_{i=1}^n \sqrt{M/i}$$ is not an algebraic integer. If the sum $\sum_{i=1}^n \frac{1}{\sqrt i}$ were an integer $q$, then $\sqrt{M}\,q$ would be an algebraic integer (as $\sqrt{M}$ is a root of $x^2-M$), a contradiction.
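As a numerical aside, the bracketing from the question's motivating problem checks out, and the integer bookkeeping behind the argument can be inspected for a small case (a sketch with $n=10$, where $2^k=8$):

```python
import math

# bracketing from the motivating problem: 16 < sum_{n=1}^{80} 1/sqrt(n) < 17
s80 = sum(1 / math.sqrt(n) for n in range(1, 81))

# bookkeeping behind the algebraic-integer argument, for n = 10:
n = 10
k = max(j for j in range(10) if 2 ** j <= n)        # k = 3, 2^k = 8
dfact = math.prod(range(1, 2 * n, 2))               # (2n-1)!!
M = 2 ** (k - 1) * dfact
bad = [i for i in range(1, n + 1) if M % i != 0]    # only i = 2^k should fail
```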
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Prove that $\lim\limits_{(x,y) \to (0,0)} \frac{3x^{2}y}{x^{4}-2x^{2}y + 5y^{2}}$ does not exist WolframAlpha says that $$\lim\limits_{(x,y) \to (0,0)} \frac{3x^{2}y}{x^{4}-2x^{2}y + 5y^{2}}$$ does not exist, but I cannot find a way to prove it: using $\gamma_1(t) = (0,t)$, $\gamma_2(t) = (t,0)$ and $\gamma_3(t) = (t,t)$ gives me zero. Can anyone please help me?
$(x,y)=(t,t^2)$ gives $\frac{3x^2y}{x^4-2x^2y+5y^2}=\frac{3t^4}{t^4-2t^4+5t^4}=\frac{3t^4}{4t^4}=\frac{3}{4}$ so the limit is $\frac{3}{4}$ along this path to the origin, while $(x,y)=(t,t)$ gives $\frac{3x^2y}{x^4-2x^2y+5y^2}=\frac{3t^3}{t^4-2t^3+5t^2}=\frac{3t}{t^2-2t+5}\to 0$ so the limit is $0$ along this other path to the origin. Since two paths to the origin give different limiting values, the limit does not exist.
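Numerically, the two paths indeed give different limiting values (a sketch):

```python
def q(x, y):
    return 3 * x**2 * y / (x**4 - 2 * x**2 * y + 5 * y**2)

t = 1e-3
along_parabola = q(t, t**2)   # identically 3/4 along y = x^2
along_diagonal = q(t, t)      # tends to 0 along y = x
```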
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$l_1$ and $l_2$ norm minimization with a constraint While working on the algorithm, I need to solve the following problem $$ \min_{x \in \mathbb{R}^n} \| x \|_1 + \frac{\alpha}{2}\| x - y \|^2 \\ \mathrm{s.t.} \ \| x - s \|^2 \le r$$ where $y,s \in \mathbb{R}^n, \alpha > 0 $ and $r > 0$. The Lagrangian function for this problem is given by $$ L(x,\lambda) = \| x \|_1 + \frac{\alpha}{2}\| x -y \|^2 + \lambda(\|x-s\|^2 -r).$$ Let $(x^*,\lambda^*)$ be the optimal solution of this Lagrangian function. If $\lambda^* = 0$, then $x^* = \mathrm{Prox}_{\frac{1}{\alpha}\|\cdot\|_1}(y)$ where $\mathrm{Prox}_h(u) = \arg \min_{v \in \mathbb{R}^n}\{h(v) + \frac{1}{2}\|u - v\|^2 \}$ If $\lambda^* > 0$, I obtain that $$ 0 \in \partial \| x^* \|_1 + \alpha(x^*-y) + 2\lambda^*(x^* -s ) \ \ \mathrm{and} \ \ \|x^* -s \|^2 = r.$$ Using the first relation, I have $$ x^* = \mathrm{Prox}_{\frac{1}{\alpha+2\lambda^*}\|\cdot\|_1}\left(\frac{\alpha}{\alpha + 2\lambda^*}y + \frac{2\lambda^*}{\alpha + 2\lambda^*}s \right)$$ $$ x^*_i =\left\{ \begin{array}{ll} \frac{\alpha}{\alpha + 2\lambda^*}y_i + \frac{2\lambda^*}{\alpha + 2\lambda^*}s_i- \frac{1}{\alpha + 2\lambda^*} & \alpha y_i + 2\lambda^*s_i \ge 1 \\ 0 & -1 \le \alpha y_i + 2\lambda^*s_i \le 1\\ \frac{\alpha}{\alpha + 2\lambda^*}y_i + \frac{2\lambda^*}{\alpha + 2\lambda^*}s_i + \frac{1}{\alpha + 2\lambda^*} & \alpha y_i + 2\lambda^*s_i \le -1 \end{array}\right. $$ and using the second relation, I have $$ \| x^* - s \|^2 = r \Leftrightarrow \sum_{i=1}^n (x^*_i -s_i)^2 = r $$ However, I am not sure how to obtain $\lambda^*$ from here. Could you please tell me how to obtain it?
In case $\lambda^* > 0$, the constraint $\|x^* - s\|^2 \le r$ is active. Thus, you have to choose $\lambda^* > 0$ such that $$ \| x^* - s\|^2 = r, $$ i.e., you have one equation to determine the scalar $\lambda^*$.
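Since $\|x^*(\lambda)-s\|^2$ decreases continuously in $\lambda$, that scalar equation can be solved by bisection. A sketch with hypothetical data (the soft-thresholding formula for $x^*(\lambda)$ is the one derived in the question; $\alpha$, $y$, $s$, $r$ are made-up values for which the constraint is active):

```python
import math

alpha, y, s, r = 1.0, [3.0, 3.0], [0.0, 0.0], 1.0

def x_of(lam):
    # componentwise soft-thresholding solution from the optimality condition
    out = []
    for yi, si in zip(y, s):
        z = alpha * yi + 2 * lam * si
        mag = max(abs(z) - 1.0, 0.0)
        out.append(math.copysign(mag, z) / (alpha + 2 * lam))
    return out

def phi(lam):
    return sum((xi - si) ** 2 for xi, si in zip(x_of(lam), s)) - r

lo, hi = 0.0, 100.0            # phi(lo) > 0 > phi(hi): the constraint is active
for _ in range(200):
    mid = (lo + hi) / 2
    if phi(mid) > 0:
        lo = mid
    else:
        hi = mid
lam_star = (lo + hi) / 2
```

For this data the equation $8/(1+2\lambda)^2 = 1$ can be solved by hand, giving $\lambda^* = (2\sqrt 2 - 1)/2$, which the bisection reproduces.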
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I prove that the perimeter is at most 60? Problem: Let $\Delta$ be a triangle in the plane. Let $P$ be the perimeter of the triangle and $A$ be the area. Let $a,b,c$ be the lengths of the sides and suppose they are positive integers. Suppose finally that $A=P$. How can I prove that $P \leq 60$? My attempt: I tried using Heron's formula, getting: $$a+b+c = \sqrt{\frac P 2 \left(\frac{P}{2}-a\right)\left(\frac P 2-b\right)\left(\frac P 2 - c\right)}$$ from which I obtained: $$4P=-P^3 + 4 (a^2b+a^2c+ab^2+b^2c+ac^2+bc^2+abc)$$ and I noticed that $abc \leq \left(\frac{P}{2}\right)^2$ but I am not able to continue. Any help will be appreciated. Remark: as suggested in the comments, defining $u=a+b-c$, $v=a+c-b$ and $w=b+c-a$ we get $16(u+v+w)=uvw$. Noticing that $uvw$ should be even and that none of $u,v,w$ can be odd, we can write $u=2k$, $v=2h$, $w=2n$, in such a way that we deduce $4(k+h+n)=khn$.
$$hnk=4(h+n+k), h,n,k\in\mathbb{N}$$ $$(hk-4)n=4(h+k)$$ WLOG $h\geq n \geq k > 0$. Then $$(hk-4)k \leq 4(h+k) \Rightarrow hk^2-8k \leq 4h \Rightarrow hk^2-8h\leq 4h\Rightarrow hk^2\leq12h\Rightarrow k\leq 3$$ $$n=\frac{4(h+k)}{hk-4}$$ At $k=1$: $$n=\frac{4h+4}{h-4}=4+\frac{20}{h-4}$$ $$h+n+k=h+5+\frac{20}{h-4}=9+(h-4)+\frac{20}{h-4}$$ $$(h-4)|20 \Rightarrow (h-4)+\frac{20}{h-4}\leq 21 \Rightarrow h+n+k\leq 30$$ At $k=2$: $$n=\frac{4h+8}{2h-4}=2+\frac{8}{h-2}$$ $$h+n+k=h+4+\frac{8}{h-2}=6+(h-2)+\frac{8}{h-2}$$ $$(h-2)|8 \Rightarrow (h-2)+\frac{8}{h-2}\leq 9 \Rightarrow h+n+k\leq 15$$ At $k=3$: $$n=\frac{4h+12}{3h-4}=1+\frac{h+16}{3h-4}$$ $$n\geq k=3 \Rightarrow \frac{h+16}{3h-4}\geq 2\Rightarrow h+16\geq 6h-8 \Rightarrow h\leq \frac{24}{5}<5$$ $$h\leq 4\Rightarrow h+n+k\leq 4+4+3=11$$ Then $h+n+k\leq 30$ in all possible cases. Perimeter of triangle $$P=a+b+c=u+v+w=2(h+k+n)\leq 60$$
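A brute-force enumeration confirms the bound and finds all the equable integer triangles (a sketch; via Heron's formula, $A=P$ is equivalent to $16(a+b+c) = (-a+b+c)(a-b+c)(a+b-c)$):

```python
sols = []
for c in range(1, 101):
    for b in range(1, c + 1):
        for a in range(1, b + 1):
            # triangle inequality plus the A = P condition in integer form
            if a + b > c and 16 * (a + b + c) == (-a + b + c) * (a - b + c) * (a + b - c):
                sols.append((a, b, c))

perimeters = sorted(a + b + c for (a, b, c) in sols)
```

The bound $P\le 60$ is attained by $(6,25,29)$.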
{ "language": "en", "url": "https://math.stackexchange.com/questions/4448841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
APMO 2020 Geometry Problem | Proving lines to be concurrent PROBLEM Let $\Gamma$ be the circumcircle of $∆ABC$. Let $D$ be a point on the side $BC$. The tangent to $\Gamma$ at $A$ intersects the parallel line to $BA$ through $D$ at point $E$. The segment $CE$ intersects $\Gamma$ again at $F$. Suppose $B$, $D$, $F$, $E$ are concyclic. Prove that $AC, BF, DE$ are concurrent. MY APPROACH Let $X$ be some point on tangent line then, $$\angle XAB=\angle ACB=\angle ACD \quad(1)$$ Since $AB ||DE$, we cany say that $$\angle AED=\angle XAB \quad(2)$$ From $(1)$ and $(2)$, we can say that $$\angle AED=\angle ACD$$ Therefore $ADCE$ is cyclic. Consider the Circles of $BDEF$, Circles of $ADCE$ and Circumcircle of $∆ABC$. We see that $AC,BF$ and $DE$ are the radical axes of these circles which means they are concurrent at the radical Centre. Hence, Proved! But APMO-$2020$ haven't included my solution in their Official Answers. Is something wrong with my Proof? DIAGRAM
You have a beautiful, neat proof for this problem, and it is absolutely correct; in fact it looks much neater than the official solutions. There is nothing wrong with your solution. But when exams are created, the creators do not know all the possible solutions to a problem, and they don't list all of them. However, as we try to solve the problems ourselves, we can discover many new proofs that were not originally given as official solutions. This is your own solution to the problem, and it doesn't have to be among the official ones. Hope this helps :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4449004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Two distinct lines for which there is no plane that contains both So if a plane contains a line, does that mean every point of that line is a point on the plane? And if so, would lines that are skewed and perpendicular be an example of the statement?
So if a plane contains a line, does that mean every point of that line is a point on the plane? Yes, that is by definition. Would lines that are skewed and perpendicular be an example of the statement? Sure. Any pair of lines on a plane will either be parallel or they'll intersect (just like in $\Bbb R^2$). So, any pair of skew lines will work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4449293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Idea for proving there are the same number of left cosets as right cosets of $H\leq G$ (for finite and infinite groups)? I'm trying to prove that there are the same number of left and right cosets of a subgroup $H$ of any group $G$ (finite or infinite) by finding a bijection mapping the left cosets of $H$ onto the right cosets of $H$. It seems a lot of people have asked about this problem, too. I'm thinking I could define $\phi:H\to aH$ by $h\mapsto ah$ and $\psi:H\to Hb$ by $h\mapsto hb$. Then after proving $\phi$ and $\psi$ are bijections, that would mean $\phi^{-1}$ exists and is a bijection, so then the composition $\psi\circ\phi^{-1}$ is a bijection from $aH$ to $Hb$, thus proving the collections of cosets have the same cardinality. Would this work? (PS: I'm aware the function $\phi:aH\to Ha$ where $ah\mapsto ha^{-1}$ is a possibility)
I'm trying to prove that there are the same number of left and right cosets of a subgroup $H$ of any group $G$ (finite or infinite) by finding a bijection mapping the left cosets of $H$ onto the right cosets of $H$. Well, either you want to show that a left coset and a right coset have the same cardinality, or that the collection of left cosets and the collection of right cosets have the same cardinality. These are different things. You've correctly shown that $gH$ is equinumerous to $Hg'$ for any $g,g'\in G$. But this doesn't show that the set of left cosets $G/H=\{gH\ |\ g\in G\}$ is equinumerous to the set of right cosets $H\backslash G=\{Hg\ |\ g\in G\}$. For that you would consider the following function $$f:G/H\to H\backslash G$$ $$f(gH)=Hg^{-1}$$ The most important thing you need to show is that it is well defined, i.e. $gH=g'H$ if and only if $Hg^{-1}=Hg'^{-1}$, which I leave as an exercise. With that, the inverse is simply given by $Hg\mapsto g^{-1}H$.
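The map $gH \mapsto Hg^{-1}$ can be tested on a small concrete group; a sketch with $S_3$ (permutations as tuples) and the order-2 subgroup generated by a transposition:

```python
from itertools import permutations

def comp(p, q):                      # composition: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

G = list(permutations(range(3)))
H = {(0, 1, 2), (1, 0, 2)}           # subgroup {e, (0 1)}

left  = {frozenset(comp(g, h) for h in H) for g in G}
right = {frozenset(comp(h, g) for h in H) for g in G}

# check that gH -> H g^{-1} is well defined (single-valued) on cosets
f = {}
well_defined = True
for g in G:
    gH = frozenset(comp(g, h) for h in H)
    Hginv = frozenset(comp(h, inv(g)) for h in H)
    if gH in f and f[gH] != Hginv:
        well_defined = False
    f[gH] = Hginv
```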
{ "language": "en", "url": "https://math.stackexchange.com/questions/4449472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Simple Problem using Bayes Rule This is Exercise 1 in Chapter 2 of the Probabilistic Robotics book by S. Thrun etal. Problem. A robot uses a range sensor that can measure ranges from $1$m to $3$m. For simplicity, assume that actual ranges are distributed uniformly in this interval. Unfortunately, the sensor can be faulty. When it is faulty, it constantly outputs a range below $1$m, regardless of the actual range in the sensor's measurement cone. We know that the prior probability of a faulty sensor is $0.01$. Suppose the robot queried its sensor $N$ times and every single time the measurement value is below $1$m. What is a posterior probability of a sensor fault as a function of $N$? Attempted Solution. Let $\{X = 0\} = \{\text{sensor is faulty}\}$ and let $Z_k$ denote the $k^{\text{th}}$ sensor measurement. From the problem statement, \begin{equation} \begin{aligned} P(X = 0) &= 0.01,\\ P(Z_k < 1) &= 1/3,\\ P(Z_1 < 1, \ldots, Z_N < 1 | X = 0) &= 1. \end{aligned} \end{equation} I assume that the sensor measurements are independent, i.e., \begin{equation} P(Z_1 < 1, \ldots, Z_N < 1) = P(Z_1 < 1) \cdot \ldots \cdot P(Z_N < 1) = (1/3)^N. \end{equation} The probability we are after is $P(X = 0 | Z_1 < 1, \ldots, Z_N < 1)$. Using Bayes rule: \begin{align} P(X = 0 | Z_1 < 1, \ldots, Z_N < 1) &= \frac{P(Z_1 < 1, \ldots, Z_N < 1 | X = 0) P(X = 0)}{P(Z_1 < 1, \ldots, Z_N < 1)} \\ &= \frac{1 \cdot 0.01}{(1/3)^N}. \end{align} This is not a probability since it is not bounded by 1. Where is the mistake?
I will follow the OP's interpretation of the exercise. Then there are two ways the sensor can generate $N$ results below $1$m. Either the sensor is faulty, with probability $0.01$; or it is working, with probability $0.99$, and measures $N$ values lower than $1$m, which has probability $(1/3)^N$. In Bayesian statistics we take the sum of these two probabilities [$= 0.01 + 0.99\cdot(1/3)^N$] and calculate the relative frequency of the desired result [faulty $= 0.01$]. If we do this, the posterior probability that the sensor is faulty is given by: $$P = \frac {0.01} {0.01 + 0.99 \cdot (1/3)^N}$$ This formula gives the correct result for $N = 0$. Furthermore, if $N$ is very large the probability goes to $1$.
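A quick numerical check of this posterior formula (a minimal sketch; the function name and the sampled values of $N$ are my choices):

```python
# Posterior P(sensor faulty | N readings below 1 m), from the formula above.
def posterior(N, prior=0.01, p_below_if_working=1/3):
    return prior / (prior + (1 - prior) * p_below_if_working**N)

for N in [0, 1, 5, 10]:
    print(N, posterior(N))
```

As expected, the posterior starts at the prior $0.01$ for $N=0$ and climbs rapidly toward $1$.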
{ "language": "en", "url": "https://math.stackexchange.com/questions/4449733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\lfloor \tan (n)\rfloor =n$ have infinitely many solutions? Background This post can be understood as saying that the gap between $\tan n$ and $n$ has no upper limit: Does the sequence $n+\tan(n), n \in\mathbb{N}$ have a lower bound? I want to know how close $\tan n$ can be to $n$, so I try to find solutions with difference less than 1, $$|\tan n -n|<1,$$ or similarly $$\lfloor \tan (n)\rfloor =n,$$ where $\lfloor x\rfloor$ denotes the floor function. I did a quick search for $n<10^7$ and found no solution other than $n = 0,1$. Question Are there infinitely many integers $n$ that satisfy the equation, and how big will the next solution be?
According to https://oeis.org/A249836 (6th comment), $n = 0, 1$ are the only currently known solutions. You can also refer to https://oeis.org/A258024.
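The OP's quick search is easy to reproduce; here is a sketch of such a brute-force scan (heuristic only: for large $n$, floating-point argument reduction limits the accuracy of `tan(n)`, so this is evidence, not proof):

```python
import math

# Scan for integers with floor(tan(n)) == n, i.e. tan(n) in [n, n+1).
solutions = [n for n in range(100_000) if math.floor(math.tan(n)) == n]
print(solutions)  # expected: [0, 1]
```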
{ "language": "en", "url": "https://math.stackexchange.com/questions/4449933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
$\sum_{k=0}^\infty\frac{1}{k+1}\binom{3k+1}{k}\left(\frac{1}{2}\right)^{3k+2}$ converges to $\frac{3-\sqrt{5}}{2}$? I stumbled upon the expression $$ \sum_{k=0}^\infty \frac{1}{k+1} \binom{3k+1}{k} \left( \frac{1}{2} \right)^{3k+2} $$ and it seems to converge to $$ \frac{3-\sqrt{5}}{2} $$ Are they equal? How can one prove it? Not sure if it's helpful: $\frac{1}{k+1}\binom{3k+1}{k}, \; k\ge0$ is the OEIS sequence $A006013$. I only have fundamental knowledge in combinatorics, so I could only check numerically that the convergence seems to hold. However, I wouldn't mind learning new theory.
Some thoughts: We use the integral representation $$\binom{3k + 1}{k} = \frac{1}{2\pi}\int_{-\pi}^\pi (1 + 2^{-1}\mathrm{e}^{\mathrm{i}t})^{3k + 1}(2^{-1}\mathrm{e}^{\mathrm{i}t})^{-k}\mathrm{d} t. \tag{1}$$ (Note: Similar to https://functions.wolfram.com/GammaBetaErf/Binomial/07/02/) We have \begin{align*} &\sum_{k=0}^\infty \frac{1}{k+1} \binom{3k+1}{k} \left( \frac{1}{2} \right)^{3k+2}\\[6pt] =\,& \frac{1}{2\pi}\int_{-\pi}^\pi \sum_{k=0}^\infty \frac{1}{k + 1}2^{-(3k + 2)}(1 + 2^{-1}\mathrm{e}^{\mathrm{i}t})^{3k + 1}(2^{-1}\mathrm{e}^{\mathrm{i}t})^{-k}\, \mathrm{d} t\\[6pt] =\,& \frac{1}{2\pi}\int_{-\pi}^\pi \frac{1 + z}{4}\sum_{k=0}^\infty \frac{1}{k + 1} \left(\frac{(1 + z)^3}{8z}\right)^k\Big\vert_{z = 2^{-1}\mathrm{e}^{\mathrm{i}t}}\, \mathrm{d} t \\[6pt] =\,& \frac{1}{2\pi}\int_{-\pi}^\pi \frac{-2z}{(1 + z)^2} \ln \left(1 - \frac{(1 + z)^3}{8z} \right)\Big\vert_{z = 2^{-1}\mathrm{e}^{\mathrm{i}t}}\,\mathrm{d} t \tag{2}\\[6pt] =\,& \frac{3 - \sqrt 5}{2} \tag{3} \end{align*} where we have used $$\sum_{k=0}^\infty \frac{1}{k + 1} a^k = - \frac{\ln(1 - a)}{a}, \quad 0 < |a| < 1$$ and $\frac{1}{32} \le \frac{(1 - 1/2)^3}{4} \le |\frac{(1 + 2^{-1}\mathrm{e}^{\mathrm{i}t})^3}{8\cdot 2^{-1}\mathrm{e}^{\mathrm{i}t}}| \le \frac{(1 + 1/2)^3}{4} = 27/32 < 1$ in (2). Proof of the integral representation (1): Using Cauchy integral formula, we have $$\Big[(1 + z)^{3k + 1}\Big]^{(k)}(0) = \frac{k!}{2\pi \mathrm{i}} \oint\limits_{|z| = 1/2} \frac{(1 + z)^{3k + 1}}{z^{k + 1}} \mathrm{d} z$$ which results in (with the substitution $z = 2^{-1}\mathrm{e}^{\mathrm{i} t}$) $$\binom{3k + 1}{k} = \frac{1}{2\pi \mathrm{i}} \oint\limits_{|z| = 1/2} \frac{(1 + z)^{3k + 1}}{z^{k + 1}} \mathrm{d} z = \frac{1}{2\pi \mathrm{i}} \int_{-\pi}^\pi \frac{(1 + 2^{-1}\mathrm{e}^{\mathrm{i} t})^{3k + 1}}{(2^{-1}\mathrm{e}^{\mathrm{i} t})^{k + 1}} 2^{-1}\mathrm{e}^{\mathrm{i} t} \mathrm{i}\, \mathrm{d} t.$$ The desired result follows.
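The claimed value is easy to confirm numerically before committing to the contour computation; here is a sketch (the truncation point $400$ is my choice; the terms decay roughly like $(27/32)^k$, so this is far more than double precision needs):

```python
import math

# Partial sum of sum_{k>=0} C(3k+1,k)/(k+1) * (1/2)^(3k+2), vs (3-sqrt(5))/2.
# Integer/integer division keeps the huge binomials exact until the final rounding.
s = sum(math.comb(3*k + 1, k) / ((k + 1) * 2**(3*k + 2)) for k in range(400))
target = (3 - math.sqrt(5)) / 2
print(s, target)
```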
{ "language": "en", "url": "https://math.stackexchange.com/questions/4450199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
First homology group of a closed non-orientable 2-manifold via the cellular homology groups Let $N_h$ be a closed non-orientable 2-manifold of genus $h\geq 1$. I am trying to compute the first homology group $H_1(N_h)$. To do so, it is sufficient to compute the cellular homology group $H_1^{CW}(N_h)\cong H_1(N_h)$. Since $N_h$ has one $0$-cell $\{p\}$, $h$ $1$-cells $\{a_1,\ldots, a_h\}$ and one $2$-cell $\{F\}$, the cellular chain complex: $$\ldots \longrightarrow C_3^{CW}(N_h)\stackrel{d_3}{\longrightarrow} C_2^{CW}(N_h) \stackrel{d_2}{\longrightarrow}C_1^{CW}(N_h) \stackrel{d_1}{\longrightarrow}C_0^{CW}(N_h) \stackrel{d_0}{\longrightarrow}0,$$ is equivalent to: $$\ldots \longrightarrow 0\stackrel{d_3}{\longrightarrow} \langle F\rangle_{ab} \stackrel{d_2}{\longrightarrow} \langle a_1,\ldots, a_h\rangle_{ab} \stackrel{d_1}{\longrightarrow}\langle p\rangle_{ab}\stackrel{d_0}{\longrightarrow}0,$$ where $\langle S\rangle_{ab}$ denotes the free abelian group generated by a set $S$. The map $d_1$ is the trivial map. And for some reason, the map $d_2$ is given by: $$d_2(F)=2(a_1+\ldots+a_h)$$ From here, it is pretty clear that $\textrm{im}(d_2)$ is the subgroup generated by $2(a_1+\ldots+a_h)$, which is isomorphic to $2\cdot\mathbb Z$, and: $$H_1(N_h)\cong\frac{\overbrace{\mathbb Z\oplus \cdots\oplus \mathbb Z}^{h}}{2\cdot\mathbb Z}\cong \overbrace{\mathbb Z \oplus \cdots\oplus \mathbb Z}^{h-1}\oplus \mathbb Z_2.$$ Why is $d_2(F)=2(a_1+\ldots+a_h)$? I am not able to get this expression from the definition of the cellular map $d_n$: $$d_n(\underbrace{e_\alpha^n}_{n\text{-cell}})=\sum_{\beta} d_{\alpha\beta}\, e_\beta^{n-1},$$ where the sum runs over the $(n-1)$-cells $e_\beta^{n-1}$ and $d_{\alpha\beta}$ is the degree of the map: $$S_\alpha^{n-1}\stackrel{\text{attaching map}}{\longrightarrow} X^{n-1} \stackrel{\text{quotient map}}{\longrightarrow} X^{n-1}/X^{n-2}\stackrel{\text{homeomorphism}}{\longrightarrow} \bigvee_{k=1}^hS^{n-1}_k\stackrel{\beta\text{-projection map}}{\longrightarrow}S^{n-1}_\beta\qquad (\star)$$ Any help would be appreciated.
You need to know how the $2$-cell $F$ is attached to the $1$-skeleton $\bigvee_{k=1}^hS_k^1$, namely by the word $a_1^2a_2^2\dotsc a_h^2$ (this word describes the homotopy class of the attaching map in the fundamental group $\pi_1(\bigvee_{k=1}^hS_k^1)=F(a_1,\dotsc,a_h)$, the free group on $a_1,\dotsc,a_h$, where $a_i$ denotes the loop traversing $S_i^1$ once counter-clockwise for each $i=1,\dotsc,h$). If you follow this attaching map by projecting onto the $i$-th $1$-cell $S_i^1$, you end up with the map $S^1\rightarrow S_i^1$ represented by the word $a_i^2$, i.e. the map that traverses the circle $S_i^1$ twice in the counter-clockwise direction. This map has degree $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4450332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Special definition of “algebraic curve” for Riemann surfaces? In the book Algebraic Curves and Riemann Surfaces by Rick Miranda, page 169, I see the following definition: (Last part of Definition 1.1) A complex Riemann surface $X$ is an algebraic curve if the space $\mathcal M(X)$ of global meromorphic functions separates the points and tangents of $X.$ Of course, usually, “algebraic curve” means the vanishing set of a polynomial in two variables. So how is the definition above related to the usual meaning of an algebraic curve?
It takes an argument to show that this condition (that $\mathcal{M}(X)$ separates points and tangents) is sufficient to guarantee that $X$ admits a holomorphic embedding into a complex projective space $\mathbb{P}^n$, $i:X\to \mathbb{P}^n$. Once you have that, Chow's Theorem implies that $X$ is in fact defined by algebraic equations. In particular, this shows that $X$ admits the structure of a projective algebraic variety. The main theorem that one needs here is: Riemann Existence Theorem: If $X$ is a compact Riemann surface, then $\mathcal{M}(X)$ separates points and separates tangents. The content of this is that there are "enough" global functions on the surface. That is by no means obvious and requires some hard work to prove. I would look at Forster's Lectures on Riemann Surfaces if you want to see this proven. By the way, there are some ways to sidestep the Riemann Existence Theorem in some sense, but there is, as always, a conservation of difficulty. If you prove the Kodaira Embedding Theorem as in Huybrechts' Complex Geometry, you can conclude that compact Riemann surfaces are projective and hence algebraic, again by Chow's Theorem. The issue is that this assumes results from Hodge Theory which have their own analytic technicalities. Long story short, these conditions turn out to be equivalent to being algebraic (and you should probably not worry about it further for a while unless it is the sort of thing you find interesting).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4450531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f(x)$ is irreducible, is $f(x^k)$ irreducible? Let $f(x)\in\mathbb{Z}[x]$ be an irreducible polynomial of degree $\ge 2$. Is it true that $f(x^k)$ is irreducible for $k\ge 2$? If not, under what hypotheses can we guarantee a positive answer? For $\alpha=\sqrt[6]{\sqrt{2}+\sqrt{3}}$, I saw that it satisfies the polynomial over $\mathbb{Q}$ given by $x^{24}-10x^{12}+1$. I wanted to check whether this polynomial is irreducible, and I came to the above general question.
Here is a sufficient condition: $f(x)$ is irreducible over $\mathbb{Q}$ and in some field $K$ (the splitting field of $f(x)$) we have $f(x) = (x-\alpha_1)\cdot \ldots \cdot (x-\alpha_n)$, and moreover each $x^k - \alpha_i$, $1\le i \le n$, is irreducible over $K$. This applies in particular to your polynomial, see this answer. $\bf{Added:}$ The point is that over some extension $K$ the polynomial $f(x^k)$ factors into irreducibles that are polynomials in $x^k$. $\bf{Added:}$ Let $\alpha$ be any root of $f(x)$. We have $f(x^k)$ irreducible if and only if the degree of $\sqrt[k]{\alpha}$ over $\mathbb{Q}$ is $k n$. This is equivalent to: the degree of $\sqrt[k]{\alpha}$ over $\mathbb{Q}(\alpha)$ is $k$.
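As a small numeric sanity check of the polynomial from the question (this only verifies that $\alpha=\sqrt[6]{\sqrt{2}+\sqrt{3}}$ is a root of $x^{24}-10x^{12}+1$, not irreducibility itself):

```python
import math

alpha = (math.sqrt(2) + math.sqrt(3)) ** (1 / 6)
residual = alpha**24 - 10 * alpha**12 + 1
print(residual)  # ~ 0 up to floating-point error
```

Exactly, $\alpha^{12}=(\sqrt2+\sqrt3)^2=5+2\sqrt6$ and $\alpha^{24}=49+20\sqrt6$, so the residual vanishes.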
{ "language": "en", "url": "https://math.stackexchange.com/questions/4450723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Orthogonal polynomials and determinant of the Jacobi matrix In the book 'Orthogonal Polynomials of Several Variables' by Charles F. Dunkl and Yuan Xu, page 11, is the picture below. I assume the matrix in the picture is related to Corollary 1.3.10: for the case of orthonormal polynomials $p_n$, $xp_n(x)=a_{n-1}p_{n-1}(x)+b_n p_n(x)+a_n p_{n+1}(x)$ with $a_n=k_n/k_{n+1}=\sqrt{d_n d_{n+2}}/d_{n+1}$, where $k_n$ is the coefficient of the leading (highest) power of $x$, and $b_n=\int_a^b xp_n(x)^2 d\mu(x)$, which is on the previous page. I understand the matrix identity, but I can't understand why he says 'hence $P_n(x) = \det(xI_n-J_n)$' in the picture. I know the matrix can be diagonalized by a similarity transform, or even by a unitary matrix, and will have the same determinant. If one could easily show that the eigenvalues (which are the diagonal elements in the diagonalized matrix) were such that each had a maximum power of $x$ for all different powers of $x$, then I could understand it, because eigenvectors of symmetric matrices are orthogonal. Note that by using the upper case $P$ in $P_n(x)$ it is meant that generally they are not normalized; instead I would assume they are scaled so that the highest power of $x$ has coefficient 1, i.e. they are monic polynomials? Anyway, can anyone explain what one is supposed to understand from that matrix identity? What does that matrix identity have to do with orthogonality? Note: disregard the 'mouse' pointer symbol in the first row of the matrix. From where does that $P_n(x)=\ldots$ arise?
Earlier in the book, on pages 9 and 10, for the special case of monic orthogonal polynomials, his Corollary 1.3.8 gives $P_{n+1}(x)=(x+B_n)P_n(x)-C_nP_{n-1}(x)$, where $C_n=\frac {d_{n+1}d_{n-1}}{d_n^2}$, which equals $a_{n-1}^2$, and $B_n=-\frac{d_n}{d_{n+1}}\int_a^b xP_n(x)^2d\mu(x)=-b_n=-\int_a^b xp_n(x)^2d\mu(x)$, where $p_n(x)$ is orthonormalized, i.e. $\int_a^b p_n(x)^2d\mu(x)=1$, while for the monic polynomials $\int_a^b P_n(x)^2d\mu(x)=\frac{d_{n+1}}{d_n}$. So we have the same recurrence, and assuming the initial values are the same, the solution is unique. That is, assume $P_0=1$ and note that $d_0=1$ by his definition, and $d_n=\det(g_{ij})_{i,j=1}^n$, where $g_{ij}=\langle f_i,f_j\rangle$ are the entries of the Gram matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4450867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Other approaches to evaluate $\lim_{h\to 0} \frac{4^{x+h}+4^{x-h}-4^{x+\frac12}}{h^2}$ $$\lim_{h\to 0} \frac{4^{x+h}+4^{x-h}-4^{x+\frac12}}{h^2}=?$$ I evaluated the limit by using l'Hôpital's rule: $$\lim_{h\to 0} \frac{4^{x+h}+4^{x-h}-4^{x+\frac12}}{h^2}=4^x\lim_{h\to0}\frac{4^h+4^{-h}-2}{h^2}=4^x\lim_{h\to0}\frac{\ln(4)(4^h-4^{-h})}{2h}=4^x\lim_{h\to0}\frac{(\ln4)^2(4^h+4^{-h})}{2}=4^x\times4(\ln2)^2=4^{x+1}(\ln2)^2$$ I want to learn other approaches to this problem, so could you please evaluate the limit in other ways?
A possible way is using the Taylor expansion $$2\cosh t = e^t+e^{-t} = 2+t^2 + o(t^2)$$ Hence, \begin{eqnarray*}\frac{4^{x+h}+4^{x-h}-4^{x+\frac12}}{h^2} & = & 4^x\frac{e^{h\ln 4}+e^{-h\ln 4}-2}{h^2}\\ & = & 4^x\cdot \frac{2+h^2\ln^2 4 + o(h^2)-2}{h^2} \\ & = & 4^x\cdot\ln^2 4 + o(1) \\ & \stackrel{h\to 0}{\longrightarrow} & 4^x\cdot\ln^2 4 \end{eqnarray*}
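A quick finite-difference check of the result (a sketch; the sample point $x=0.7$ and step $h=10^{-4}$ are arbitrary choices; note that $4^{x+1/2}=2\cdot 4^x$, which is why the numerator behaves like $4^x(4^h+4^{-h}-2)$):

```python
import math

x, h = 0.7, 1e-4
q = (4**(x + h) + 4**(x - h) - 4**(x + 0.5)) / h**2
limit = 4**(x + 1) * math.log(2)**2   # = 4^x * ln(4)^2
print(q, limit)
```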
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Asymptotic for $\sum_{k=1}^n k^n$ Consider the OEIS sequence A031971, which is defined as: $$a_n=\sum\limits_{k=1}^n k^n\quad\color{gray}{(1,\,5,\,36,\,354,\,4425,\,67171,\,1200304,\,\ldots)}\tag{1}$$ I'm interested in the asymptotic behavior of $a_n$ as $n\to\infty$. Empirically, it appears that $$a_n\stackrel{\color{gray}?}\sim\frac{e}{e-1}\,n^n\cdot\left(1-\frac{e+1}{2\,(e-1)^2}\,n^{-1}+c\,n^{-2}+\mathcal O\!\left(n^{-3}\right)\right),\tag{2}$$ where $c\approx0.6310116...$ (I haven't found a plausible closed form for it). The leading term $\frac{e}{e-1}\,n^n$ is given in the OEIS. How can we prove this formula and find higher terms in it?
The leading behavior may be understood by writing $$\sum_{k=0}^n k^n = n^n\sum_{k=0}^n\left(\frac kn\right)^n = n^n\sum_{m=0}^n\left(1-\frac mn\right)^n,$$ where $m=n-k$. The terms in the last sum successively approach $e^{-m}$ as $n$ increases without bound, rendering the asymptotic result $$\sum_{k=0}^n k^n \sim n^n\sum_{m=0}^\infty e^{-m} = n^n\,\frac{e}{e-1}.$$
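The leading term, and the empirical $1/n$ correction from the question, can be checked directly with exact integer arithmetic (a sketch; $n=200$ is an arbitrary test size):

```python
import math

n = 200
# Exact big-integer quotient, rounded to a float only at the end.
ratio = sum(k**n for k in range(1, n + 1)) / n**n
leading = math.e / (math.e - 1)
corrected = leading * (1 - (math.e + 1) / (2 * (math.e - 1)**2 * n))
print(ratio, leading, corrected)
```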
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
A continuous map $f:S^2\rightarrow S^2$ such that $f(x)\neq f(-x)$ for all $x$ is surjective For all $x\in S^2$, let $-x$ denote its antipode. Let $f:S^2\rightarrow S^2$ be a continuous map such that $f(x)\neq f(-x)$ for all $x\in S^2$. Show that $f$ must be surjective. I'm working through an old practice exam, and I've been stuck on this question for hours. I think this has something to do with the Borsuk Ulam Theorem, or maybe Brouwer degree but I am not sure how to construct a function $S^2\rightarrow \mathbb{R}$ that makes $f$ not being surjective a contradiction, or that contradicts anything about degrees. I've found a somewhat similar question here, but not sure how to adapt it to this problem. I know that if $f$ is not surjective, $\exists y\in S^2$ such that $y\notin f(S^2)$, so I can construct a well-defined map $$g:S^2\rightarrow S^2,\hspace{2cm} g(x)=\frac{f(x)-y}{|f(x)-y|}$$ that's homotopic to $f$. Not sure how that helps. I also thought of the map $$h:S^2\rightarrow S^2,\hspace{2cm}h(x)=\frac{f(x)-f(-x)}{|f(x)-f(-x)|}$$ which we can well-define, and is homotopic to both $f$ and $f\circ A$ where $A:x\rightarrow -x$. I also know that if $f$ is not surjective then its degree is $0$, and that the degree of homotopic maps is equal, so the degree of $g$, $h$ and $f\circ A$ is zero, but not sure if there's anything wrong with that either. The most elementary answer possible would be most appreciated.
It sounds like you have the right idea! Particularly in thinking about the Borsuk-Ulam theorem. Here's a hint: Remember that $S^2 \cong \mathbb{R}^2 \cup \{ \infty \}$ (this is stereographic projection). So (towards a contradiction) say we're given a map $f : S^2 \to S^2$ isn't surjective. Well without loss of generality we can say it misses $\infty$, so that composing with stereographic projection gets us a map $f' : S^2 \to \mathbb{R}^2$. Now, since $\forall x . f(x) \neq f(-x)$, we see that the same must be true for $f'$... Let me know if you need more than this, but I'll leave it here so it's still technically a hint :P Do you see how to get a contradiction from here? I hope this helps ^_^
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\lim_{x \to 0+} {x^a}=0$ and $\lim_{x \to \infty} {x^a}=\infty$ I want to show that when $a>0$, $\lim_{x \to 0+} {x^a}=0$ and $\lim_{x \to \infty} {x^a}=\infty$. I tried to find some proper $\epsilon$ and $\delta$ to prove them directly using the definition of a limit, but I failed. Should I use the differentiation rules for logarithmic or exponential functions?
(2) $f$ is said to tend to $\infty$ at $\infty$ if for all $M>0$ there exists $N>0$ such that $f(x)>M$ for all $x>N$. Can you take it from here? Let $M>0$ and set $N = \sqrt[a]{M}$; then for all $x > N$, $$x^a > N^a = (\sqrt[a]{M})^a = M.$$ (1) We need to prove that $\forall \epsilon > 0$ $\exists \delta > 0$ such that, when $0 < x < 0 + \delta$, then $|f(x) - 0| < \epsilon$. Taking $\delta = \epsilon^{1/a}$, we get $$0 < x < \delta \;\Rightarrow\; |f(x) - 0| = x^a < \delta^a = \epsilon,$$ so the condition is satisfied.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $E(X_n^2) = \infty$ then $\limsup \frac{|X_n|}{\sqrt{n}} \geq a$ almost surely. We are given $X_1,X_2,\ldots$ an i.i.d. sequence of random variables such that $$\Bbb{E}(X_1^2)=\infty$$ I claim that for all $a>0$ $$\Bbb{P}\left(\limsup_{n\rightarrow \infty} \frac{|X_n|}{\sqrt{n}}\geq a\right)=1$$ My idea was to use Borel-Cantelli, but somehow I'm a bit confused, since I never used that $\Bbb{E}(X_1^2)=\infty$. I wanted to do this as follows: Let $\Lambda_n=\left\{\frac{|X_n|}{\sqrt{n}}\geq a\right\}$; then $$\sum_{n=1}^\infty \Bbb{P}(\Lambda_n)=\sum_{n=1}^\infty \Bbb{P}\left(\frac{|X_n|}{\sqrt{n}}\geq a\right)=\sum_{n=1}^\infty 1-\Bbb{P}\left(\frac{|X_n|}{\sqrt{n}}\leq a\right)$$ Now if $$\sum_{n=1}^\infty 1-\Bbb{P}\left(\frac{|X_n|}{\sqrt{n}}\leq a\right)<\infty$$ then it would mean that for all but finitely many $n\in \Bbb{N}$, the probability $\Bbb{P}\left(\frac{|X_n|}{\sqrt{n}}\leq a\right)$ is close to $1$. Here I think I need some argument to show that this is not possible, right? If this works, I could then apply Borel-Cantelli and would be done. I'm not sure if this is correct, though. (I also thought about the central limit theorem, but I don't think it is useful here.)
Note that for a non-negative random variable $Y$, we have $$E(Y) = \int_0^\infty P(Y > y)\, dy$$ Since $S(y) = P(Y > y) = 1 - F_Y(y)$ is a decreasing function of $y$, we have $\int_n^{n+1} P(Y > y)\, dy \leq P(Y \geq n)$ for each $n \geq 1$, and summing gives $$\int_1^\infty P(Y > y)\, dy \leq \sum_{n=1}^\infty P(Y \geq n).$$ Since $\int_0^1 P(Y > y)\, dy \leq 1$, this yields $$E(Y) - 1 \leq \sum_{n=1}^\infty P(Y \geq n)$$ Define the events $\Lambda_n$ as you did: i.e. $\Lambda_n = \{ |X_n| \geq a \sqrt{n} \} = \left\lbrace \left(\frac{X_n}{a} \right)^2 \geq n\right\rbrace$. Then, with $Y = \frac{X_1^2}{a^2}$, $$\begin{align*} \sum_{n=1}^\infty P(\Lambda_n) &= \sum_{n=1}^\infty P\left(\left(\frac{X_n}{a} \right)^2 \geq n \right)\\ &= \sum_{n=1}^\infty P\left(\left(\frac{X_1}{a} \right)^2 \geq n \right) \\ &\geq E(Y) - 1 \\ &= \infty \end{align*}$$ since $E(Y) = E(X_1^2)/a^2 = \infty$, which is exactly where the hypothesis is used. Since the events $\Lambda_n$ are independent, the second Borel-Cantelli lemma allows you to conclude that $P\left( \limsup_{n \to \infty} \Lambda_n \right) = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Possible positions of the knight after moving $n$ steps on a chessboard. Problem There is a knight on an infinite chessboard. After moving one step, there are $8$ possible positions, and after moving two steps, there are $33$ possible positions. Let the number of possible positions after moving $n$ steps be $a_n$; find a formula for $a_n$. I found this sequence at http://oeis.org/A118312 but I can't understand this recurrence relation: $$a_n = 3a_{n-1} - 3a_{n-2} + a_{n-3}, \quad\quad n\geq3$$ Can someone give some intuition for this relationship?
Mordechai Katzman demonstrates in section $3$ of his paper Counting monomials (pages $5$-$8$) that $$a_n = \begin{cases} 1 & n = 0 \\ 8 & n = 1 \\ 33 & n = 2 \\ 1 + 4n + 7n^2 & n \ge 3 \end{cases}$$ We can now prove by induction that $$a_n = 3a_{n-1} - 3a_{n-2} + a_{n-3} = 1 + 4n + 7n^2, \quad\quad n\geq3 \tag{1}$$ To test whether $(1)$ holds for $n \ge 3$, we use as seeds the values of the polynomial $1 + 4n + 7n^2$ at $n = 0, 1, 2$ (note that these differ from the actual counts $1, 8, 33$, since the closed form only starts at $n = 3$): $$a_0 = 1 + 4(0) + 7(0)^2 = 1$$ $$a_1 = 1 + 4(1) + 7(1)^2 = 12$$ $$a_2 = 1 + 4(2) + 7(2)^2 = 37$$ For the base cases, we have $$a_3 = 3a_2 - 3a_1 + a_0 = 3\cdot37 - 3\cdot12 + 1 = 1 + 4(3) + 7(3)^2 = 76$$ $$a_4 = 3a_3 - 3a_2 + a_1 = 3\cdot76 - 3\cdot37 + 12 = 1 + 4(4) + 7(4)^2 = 129$$ $$a_5 = 3a_4 - 3a_3 + a_2 = 3\cdot129 -3\cdot76 + 37 = 1 + 4(5) + 7(5)^2 =196$$ Now, we must prove using $(1)$ that $$a_{n+1} = 3a_n - 3a_{n-1} + a_{n-2} = 1 + 4(n+1) + 7(n+1)^2 = 7n^2 + 18n + 12$$ Substituting for $a_n, a_{n-1}$ and $a_{n-2}$, we get \begin{align} a_{n+1} &= 3\left(1 + 4n + 7n^2\right) -3\left(1 + 4(n-1) + 7(n-1)^2\right) + \left(1 + 4(n-2) + 7(n-2)^2\right)\\ & = 7n^2 + 18n + 12 \end{align} $\blacksquare$
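The counts themselves are easy to reproduce with a breadth-style expansion on the infinite board (a sketch; it counts the positions reachable in exactly $n$ moves):

```python
# Count positions reachable in exactly n knight moves on an infinite board.
moves = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
cur = {(0, 0)}
counts = []
for n in range(5):
    counts.append(len(cur))
    cur = {(x + dx, y + dy) for (x, y) in cur for (dx, dy) in moves}
print(counts)  # [1, 8, 33, 76, 129], matching 1 + 4n + 7n^2 for n >= 3
```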
{ "language": "en", "url": "https://math.stackexchange.com/questions/4451894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
how to approach the discrete counterpart of a continuous CDF of uniform random variable I am following an online example to get the CDF of a function $Y$ of uniform random variable $x \in [0, 1]$ $$ Y = \frac{30}{2-x} $$ Based on the example, CDF is a distribution of probability for the case when random $Y$ is less than or equal to a given number $y$, that is $$ P(Y \leq y) = P\left(\frac{30}{2-x}\leq y\right) = P\left(x \leq 2 - \frac{30}{y}\right) = 2 - \frac{30}{y} $$ Assuming that $Y$ is the temperature of a room, and the only fact we know is that the temperature is changed based on a function of uniform variable $x$, such that the random temperature is bound in [15, 30] celsius degrees, once an hour. After the temperature is stabilized, it will be constant until the next hour. We would like to know the chance to see the room temperature below a certain celsius degress (within [15, 30]), which could be told by $$ P(Y\leq y) = 2 - \frac{30}{y} $$ This is not a uniform distribution. For example, the change to see the temperature between [15,20] degrees is $$P(15 \leq Y\leq 20) = 50\%$$ but the change to see the temperature between [20, 25] is $$P(20 \leq Y\leq 25) = 30\%$$ In all analyses, we consider the temperature distribution is continuous, any temperature between 15 and 30 is possible. But in the actual case, the temperature can only be changed in the increment of 0.1, which means we could only have $$15, 15.1, 15.2, 15.3, \cdots 24.8, 24.9, \cdots, 29.7, 29.8, 29.9$$ The random process should be modified as $$ Y^* = 0.1\times\left\lfloor\frac{300}{2-x}\right\rfloor $$ How do we approach $P(Y^*\leq y)$ when we only have some discrete values in the above case? 
I don't have a clue yet, so I am trying the simulation below r = 0:0.00000001:1; N = length(r); y = floor(300./(2-r))*0.1; P = ( length(find(y<=25)) - length(find(y<=20)) )/N % probability of temperature between [20, 25] I got the result $$P(20\leq Y^*\leq 25) = 29.7318\%$$ Similarly, the simulation gets $$P(15 \leq Y^*\leq 20) = 49.4218\%$$ It makes sense that the probability is lower, since only some discrete values are included, but is there any way to get a closed form for $P(Y^*\leq y)$? Thanks. While reading one response, I am wondering whether it is correct to sum all (continuous) probabilities over $[15, 15.1)$ for the probability of getting the discrete value 15, all continuous probabilities over $[15.1, 15.2)$ for the probability of getting the discrete value 15.1, and so on. Then we would have $$P(Y^*=15) = P(15\leq Y < 15.1) = 2 - \frac{30}{15.1} - \left[2 - \frac{30}{15}\right] = 0.0132450$$ and so forth. Now if we add them all up, $$P(Y^*=15) + P(Y^*=15.1) + \cdots + P(Y^*=20) = 0.507463$$ This number is actually more than 50%! Does it mean $P(15\leq Y \leq 20) < P(15 \leq Y^* \leq 20)$? I don't quite get it: when I do the simulation I get a result less than 50% for the discrete case, but analytically it is more than 50%.
If $Y^*$ is the same as $Y$ but rounded down to a $0.1$ of a degree, i.e. $Y^*=\lfloor 10Y\rfloor /10$ then you can say for any $y$ which is one of the possible values from $15.0$ through to $29.9$ * *$P(Y^* \lt y) = P(Y \lt y) = 2 - \frac{30}{y}$ *$P(Y^* \le y) = P(Y \lt y+0.1) = 2 - \frac{30}{y+0.1}$ *$P(Y^* = y) = P(Y \lt y+0.1)-P(Y \lt y)=\frac{30}{y}-\frac{30}{y+0.1}=\frac{30}{10y^2+y}$ This will for example give you $P(15 \le Y^* < 20) = \frac12$ as you might expect, but $P(15 \le Y^* \le 20) = \frac{34}{67}$ which is slightly more. The subtle $< , \le$ distinction happens with discrete distributions but not with continuous distributions.
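The closed form can be evaluated exactly with rationals (a sketch; the helper names are mine, and `y` is assumed to be an integer or exact `Fraction` multiple of $0.1$, which is why `Fraction` is used instead of floats):

```python
from fractions import Fraction

# P(Y* < y) = P(Y < y) = 2 - 30/y, and P(Y* <= y) = P(Y < y + 0.1),
# for y one of the attainable values 15.0, 15.1, ..., 29.9.
def p_lt(y):
    return 2 - Fraction(30) / Fraction(y)

def p_le(y):
    return 2 - Fraction(30) / (Fraction(y) + Fraction(1, 10))

print(p_le(20) - p_lt(15))   # P(15 <= Y* <= 20) = 34/67
print(p_lt(20) - p_lt(15))   # P(15 <= Y* <  20) = 1/2
```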
{ "language": "en", "url": "https://math.stackexchange.com/questions/4452085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to compute $\frac{63.5\times 0.5\times 10\times 60}{2\times 96500}$ without a calculator? Thankfully, I have been provided 4 options: (a) $0.0987$ (b) $0.0897$ (c) $0.0798$ (d) $0.0789$ My attempt $$\begin{aligned} \frac{63.5\times 0.5\times 10\times 60}{2\times 96500} &= \frac{63.5\times 0.5\times 600}{2\times 96500} \\ &= \frac{63.5 \times 300}{2\times 96500} \\ &= \frac{63.5\times 150}{96500} \\ &= \frac{635\times 15}{96500} \\ &= \frac{6350+3000+150+25}{96500} \\ &= \frac{9500+25}{96500} \\ &= \frac{9525}{96500} \end{aligned}$$ I'm still stuck with a long division. $$\require{enclose} \begin{array}{rll} 0.09 && \\[-3pt] 96500 \enclose{longdiv}{9525}\kern-.2ex \\[-3pt] \underline{868500} \\[-3pt] \end{array}$$ [I couldn't display the long division nicely.] We do not need to continue the long division further. We can understand the answer will be (a) by checking the options. My question This is actually a chemistry question from a competitive exam, which involves stoichiometric calculations. I had the most trouble doing the long division. I had to guess that the number is $9$ and had to multiply $9$ by $96500$. Is there a quicker and easier way?
\begin{align}&\frac{63.5\times 0.5\times 10\times 60}{2\times 96500}\\&=\frac{635\times5\times3}{96500}\\ &=\frac{127\times3}{965\times4} \quad\text{(dividing numerator and denominator by $5^2=25$)}\\&=\frac{128\times3-3}{965\times4} \quad\text{(retaining $965$ as it is near $1000$)}\\&=\frac{96-0.75}{965}\\&>\frac{95.25}{1000}\\&=0.09525.\end{align} (a) $0.0987$ (b) $0.0897$ (c) $0.0798$ (d) $0.0789$ The only option that exceeds $0.09525$ is (a).
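For completeness, the exact value is a one-liner to check (a sketch, obviously not available in the exam itself):

```python
value = 63.5 * 0.5 * 10 * 60 / (2 * 96500)
print(value)  # 0.09870466..., i.e. option (a)
```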
{ "language": "en", "url": "https://math.stackexchange.com/questions/4452284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why can't a second order differential equation have 2 Neuman conditions? I read that a problem such as: \begin{equation} \begin{cases} -\frac{\partial^2 y}{\partial x^2} + y = f(x)\\ \mathrm{condition} \ n°1\\ \mathrm{condition} \ n°2 \end{cases} \end{equation} Could have a unique solution if $\mathrm{condition} \ 1$ and $\mathrm{condition} \ 2$ were: * *Initial conditions (Cauchy problem) *2 Dirichlet conditions ($y(a) = y_a, \ y(b) = y_b \ | \ b > a, \ x \in [a, b]$) *1 Dirichlet condition + 1 Neuman condition ($y'(a) = y_a, \ y(b) = y_b \ | \ b > a, \ x \in [a, b]$ or the contrary) Why can't we have 2 Neuman conditions ?
Because you can’t have all your conditions with derivates. You need at least 1 normal condition because of the integration constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4452663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Derivative of the inverse of a symmetric matrix w.r.t itself I'm trying to take the derivative of a symmetric matrix $\mathbf{C}$ with respect to itself. $$ \begin{equation} \frac{\partial \mathbf{C}^{-1}}{\partial \mathbf{C}} \end{equation} $$ Using indicial notation, the above equation can be rewritten as follows $$ \begin{equation} \frac{\partial C_{ij}^{-1}}{\partial C_{kl}} \end{equation} $$ At first I used the following formula, $$ \begin{equation} \frac{\partial C_{ij}^{-1}}{\partial C_{kl}} = -C^{-1}_{ik}C^{-1}_{lj} \end{equation} $$ But I quickly realized that this loses the symmetry of the problem. I read The Matrix Cookbook and the other posts about the same problem, but unfortunately I couldn't understand the things they've done. For example in this article, at Eq.(100), the authors have used the property below when taking the derivative of Eq.(99) $$ \begin{equation} \frac{\partial \mathbf{C}^{-1}}{\partial \mathbf{C}} = -\mathbf{C}^{-1} \boxtimes \mathbf{C}^{-T} \mathbf{I}_s \end{equation} $$ Where $\boxtimes$ is the square product, $\mathbf{I}_s$ is the symmetric fourth-order identity tensor, and they are defined as follows $$ \begin{align} (\mathbf{A} \boxtimes \mathbf{B})_{ijkl} &= \mathbf{A}_{ik}\mathbf{B}_{jl} \\ (\mathbf{I}_s)_{ijkl} &= \frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}) \end{align} $$ I couldn't understand how they achieved this result or how I can derive it myself.
In the single-variable case, we have that $$\dfrac{d}{dt}C(t)^{-1}=-C(t)^{-1}\dfrac{dC(t)}{dt}C(t)^{-1}.$$ This can be obtained by differentiating the expression $C(t)C(t)^{-1}=I$ on both sides, with some simple algebra. This directly generalizes to the multivariable case by expressing $$C=\sum_{i,j}c_{ij}e_ie_j^T.$$ Then, we have that $$\dfrac{d}{dc_{kl}}C^{-1}=-C^{-1}\dfrac{dC}{dc_{kl}}C^{-1}=-C^{-1}e_ke_l^TC^{-1},$$ from which we get $$\dfrac{d}{dc_{kl}}C_{ij}^{-1}=-e_i^TC^{-1}e_ke_l^TC^{-1}e_j=-C_{ik}^{-1}C_{lj}^{-1}.$$ When $C$ is symmetric, it can be written as $$C=\sum_{i}c_{ii}e_ie_i^T+\sum_{i>j}c_{ij}(e_ie_j^T+e_je_i^T).$$ $$\begin{array}{rcl} \dfrac{d}{dc_{kl}}C^{-1}&=&-C^{-1}\dfrac{dC}{dc_{kl}}C^{-1}=-C^{-1}(e_ke_l^T+e_le_k^T)C^{-1},\ \mathrm{for}\ k\ne l\\ \dfrac{d}{dc_{kk}}C^{-1}&=&-C^{-1}\dfrac{dC}{dc_{kk}}C^{-1}=-C^{-1}e_ke_k^TC^{-1} \end{array}$$ Then we have that $$\begin{array}{rcl} \dfrac{d}{dc_{kl}}C_{ij}^{-1}&=&-e_i^TC^{-1}(e_ke_l^T+e_le_k^T)C^{-1}e_j=-C_{ik}^{-1}C_{lj}^{-1}-C_{il}^{-1}C_{kj}^{-1},\ \mathrm{for}\ k\ne l\\ \dfrac{d}{dc_{kk}}C_{ij}^{-1}&=&-e_i^TC^{-1}e_ke_k^TC^{-1}e_j=-C_{ik}^{-1}C_{kj}^{-1}.\end{array}$$
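A finite-difference check of the symmetric off-diagonal case, for a hand-coded $2\times 2$ example (a sketch; the matrix entries and step size are arbitrary choices):

```python
# Check d(C^-1)_{ij}/dc_{kl} for symmetric C, perturbing the off-diagonal
# entry b = c_01 = c_10 (the k != l case, with (k, l) = (0, 1)).
def inv2(a, b, c):                      # inverse of [[a, b], [b, c]]
    d = a * c - b * b
    return [[c / d, -b / d], [-b / d, a / d]]

a, b, c = 3.0, 1.0, 2.0
eps = 1e-6
Minv = inv2(a, b, c)
Mp, Mm = inv2(a, b + eps, c), inv2(a, b - eps, c)
fd = [[(Mp[i][j] - Mm[i][j]) / (2 * eps) for j in range(2)] for i in range(2)]

# Formula: -C^-1_{ik} C^-1_{lj} - C^-1_{il} C^-1_{kj}
pred = [[-Minv[i][0] * Minv[1][j] - Minv[i][1] * Minv[0][j] for j in range(2)]
        for i in range(2)]
print(fd)
print(pred)  # the two should agree to finite-difference accuracy
```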
{ "language": "en", "url": "https://math.stackexchange.com/questions/4453422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How do I solve this nonlinear ODE given the asymptotic series solutions as follows? Differential equation: $$-{\frac { \left( {\frac {\rm d}{{\rm d}R}}f \left( R \right) \right) ^{2}}{2\,f \left( R \right) }}+{\frac {{\rm d}^{2}}{{\rm d}{R}^{2}}}f \left( R \right) +{\frac {{\frac {\rm d}{{\rm d}R}}f \left( R \right) }{R}}+2\,f \left( R \right) -2\, \left( f \left( R \right) \right) ^{2}-2\,{\frac {{B}^{2}}{f \left( R \right) {R}^{2}}}=0. $$ Series solution at $R \to 0$: $${R}^{-2}+1+ \left( {\frac {{B}^{2}}{3}}+{\frac{1}{3}} \right) {R}^{2} + \left( {\frac{2}{33}}+{\frac {2\,{B}^{2}}{33}} \right) {R}^{4}+ \left( {\frac{31}{2277}}+{\frac {53\,{B}^{2}}{2277}}+{\frac {2\,{B}^{4}}{207}} \right) {R}^{6}+ \left( {\frac{70}{29601}}+{\frac {136\,{B}^{2}}{29601}}+{\frac {2\,{B}^{4}}{897}} \right) {R}^{8}+O \left( {R}^{10} \right).$$ Series solution as $R \to \infty$: $$1-{\frac {{B}^{2}}{{R}^{2}}}+{\frac {-2\,{B}^{4}-2\,{B}^{2}}{{R}^{4}}} +{\frac {-7\,{B}^{6}-23\,{B}^{4}-16\,{B}^{2}}{{R}^{6}}}+{\frac {-30\,{B}^{8}-216\,{B}^{6}-474\,{B}^{4}-288\,{B}^{2}}{{R}^{8}}}+O \left( {R}^{-10} \right) .$$ $B$ is a free parameter. I tried solving it as a boundary value problem, using the series solutions at very small $R$ and at large $R$ as boundary conditions. I tried Newton iteration and imaginary time propagation for that, but that worked only for $B=1$. If I try solving it as an initial value problem starting from some $R_{\max}$, even RKF45 doesn't give good results. Can someone please suggest either an analytical or a numerical way of solving this equation? Or perhaps a way of analyzing the properties of this differential equation other than a Frobenius series solution? (If the question lacks details, please let me know before downvoting.) Edit: 2 downvotes without any explanation. I mean if you don't wanna respond, then don't respond. Why do you have to ruin my chances of getting any help? This is the worst forum. Most of the time people just keep downvoting without any explanation.
First, I'd prefer to do a simple transformation $f(R) = (u(R))^2$ suggested by @EliBartlett. After some manipulation with a computer algebra system I got $$ R\frac{d}{dR}\left(R \frac{du}{dR}\right) = R^2 (u^3 - u) + \frac{B^2}{u^3} $$ Frobenius analysis shows that $u \sim \frac{1}{R}$ near $R = 0$. Let's introduce $x = \log R$ and $w = \log u$. The domain for $x$ becomes $(-\infty, +\infty)$ and the boundary conditions are $w \to -x$ when $x \to -\infty$ and $w(+\infty) = 0$. Also $R \frac{d}{dR} = \frac{d}{dx}$. The equation becomes $$ \frac{d^2}{dx^2} e^w = e^{2x} (e^{3w} - e^w) + B^2 e^{-3w}. $$ Consider a finite interval $x \in [-L, L]$ and a regular grid with step $h = \frac{2L}{N}$. The discrete problem becomes $$ \frac{e^{w_{n+1}} - 2e^{w_n} + e^{w_{n-1}}}{h^2} = e^{2x_n} (e^{3w_n} - e^{w_n}) + B^2 e^{-3w_n}, \quad n = 1, \dots, N-1.\\ w_0 = -x_0, \quad w_N = 0. $$ To avoid numerical cancellation let's divide the $n$-th equation by $e^{w_n}$: $$ \frac{e^{w_{n+1} - w_n} - 2 + e^{w_{n-1} - w_n}}{h^2} = e^{2x_n} (e^{2w_n} - 1) + B^2 e^{-4w_n}, \quad n = 1, \dots, N-1.\\ w_0 = -x_0, \quad w_N = 0. $$ This is a system that can be plugged into the Newton solver. I've implemented this in Python (can be found here). Unfortunately, it works only when $B \lesssim 1.117$. I tried to make Newton's method robust by introducing safety factors, but no luck. I also tried scipy nonlinear solvers and they also hit a wall of no-convergence near $B = 1.117$. I feel that the problem has no solution for such $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4453550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to show that the winding number must be $0$ or $\pm$ $1$ in this case? Prerequisite: $K: [0,1] → \mathbb{C}$ is a piecewise continuously differentiable, closed path that meets the non-positive real axis only at one point, $K^{-1} (\{x|x \le 0\})= \{ t_0 \}$ with $0 < t_0 < 1$ and $K(t_0)=−r < 0$, and does not meet the origin. Show: $\eta (0) \in \{0, \pm 1 \}$ (where $\eta$ is the winding number), and the values $0$, $1$, and $− 1$ occur according as $K$ runs locally at $t_0$ on one side of the negative real axis, or crosses the negative real axis from top to bottom, or from bottom to top. I tried to prove by contradiction that if we assume that $n=2$, then the closed path would obviously intersect the negative real axis twice, at two real numbers $w=(a,0)$ and $z=(b,0)$. Would that be enough to prove the statement?
Pick a continuous branch $\theta(t)$ of $\operatorname{Arg}\gamma(t)$ such that $\theta(t_0)=\pi$. By assumption, $\theta(0),\theta(1)\notin\{\ldots,-3\pi,-\pi,\pi,3\pi,\ldots\}$. By intermediate value theorem, the values of $\theta(0)$ and $\theta(1)$ must lie inside the open interval $(-\pi,3\pi)$, otherwise $\theta(t)$ will be equal to $-\pi$ or $3\pi$ for some $t\in[0,1]$. Hence $|\theta(1)-\theta(0)|<4\pi$ and it must be equal to $0$ or $2\pi$. This means the winding number of $\gamma$ about the origin is $0$ or $\pm1$.
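The argument-tracking idea in this answer is easy to see numerically: sample a closed path, take a continuous branch of the argument, and divide the total change by $2\pi$. Below is a small illustration (the two example curves are my own, chosen to meet the negative axis at exactly one point): the unit circle crosses the axis once and has winding number $1$, while a circle tangent to the axis from above stays on one side and has winding number $0$.

```python
import numpy as np

def winding_number(z):
    """Winding number about 0 of a closed sampled path z (z[0] and z[-1] coincide)."""
    theta = np.unwrap(np.angle(z))        # continuous branch of arg z(t)
    return (theta[-1] - theta[0]) / (2 * np.pi)

t = np.linspace(0.0, 1.0, 20001)

# Unit circle: meets the negative real axis only at K(1/2) = -1, crossing it once.
circle = np.exp(2j * np.pi * t)
# Circle tangent to the negative axis from above at -1: runs locally on one side.
tangent = -1 + 0.5j + 0.5 * np.exp(2j * np.pi * t)

print(round(winding_number(circle)), round(winding_number(tangent)))   # 1 0
```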
{ "language": "en", "url": "https://math.stackexchange.com/questions/4453846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is a homothetic change of metric? What does it mean that using a homothetic change of metric we can set the volume equal to 1? I'm reading Aubin's work on the Yamabe problem (the book Some Nonlinear Problems in Riemannian Geometry, page 150). He writes that by a homothetic change of metric we can set the volume equal to one: "So henceforth, without loss of generality, we suppose the volume equal to one." What does he mean? Can we set the volume to be 1 for any problem, or is there some condition? It seems that this can simplify many estimates.
The book is using homothetic in the sense described for example here. Essentially, you just scale the entire manifold up or down without changing its shape, and after doing this you can assume the volume is equal to 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4454020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculating residue of $f(z)=\frac{z^2+z^3}{({\sin z})^3}$ at its singularities So the singularities are at $n\pi$, $n\in \mathbb{Z}$. I have already calculated the residue at $z=0$ (by shifting the series) and it equals $1$, but for the other points $n\pi$ I am stuck. Help would be appreciated
Hint:- $z^{2}=(z-n\pi)^{2}+2n\pi(z-n\pi)+n^{2}\pi^{2}$ and $z^{3}=(z-n\pi)^{3}+3n\pi(z-n\pi)^{2}+3n^{2}\pi^{2}(z-n\pi)+n^{3}\pi^{3}$. Use these, together with $\sin^{3}(z)=(-1)^{n}\sin^{3}(z-n\pi)$, and write $\frac{z^{2}+z^{3}}{\sin^{3}(z)}=(-1)^{n}\frac{(z-n\pi)^{2}+2n\pi(z-n\pi)+n^{2}\pi^{2}+(z-n\pi)^{3}+3n\pi(z-n\pi)^{2}+3n^{2}\pi^{2}(z-n\pi)+n^{3}\pi^{3}}{(\sin(z-n\pi))^{3}}$. And now calculate the residue as you would by computing the coefficient of $\frac{1}{z-n\pi}$ in the Laurent series expansion. Another hint:- This whole thing can be done by calculating the residue at $0$ of $$(-1)^{n}\frac{z^{2}+2n\pi z+n^{2}\pi^{2}+z^{3}+3n\pi z^{2}+3n^{2}\pi^{2} z+n^{3}\pi^{3}}{\sin^{3}(z)}$$ Caution:- I often make errors in calculation so the expressions might not be entirely accurate but I think the hint is enough to provide you with a method to proceed by!!!. I wish you a successful dig for the residue.
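One way to check residue computations like this is to evaluate $\frac{1}{2\pi i}\oint f(z)\,dz$ on a small circle around each pole by the trapezoid rule (spectrally accurate for a smooth periodic integrand). In the sketch below, the residue at $0$ should be $1$, as the asker already found; the value at $\pi$ is compared against $-\left(1+3\pi+\tfrac{\pi^{2}+\pi^{3}}{2}\right)$, which is what I obtain by carrying out the expansion about $z=\pi$ using $z^{2}=(z-\pi)^{2}+2\pi(z-\pi)+\pi^{2}$ and $1/\sin^{3}w = w^{-3}(1+w^{2}/2+\cdots)$, so treat that closed form as my derivation rather than part of the original hint.

```python
import numpy as np

def residue(f, center, radius=1.0, n=20000):
    """(1/2*pi*i) * contour integral of f on a circle around `center` (trapezoid rule)."""
    theta = 2 * np.pi * np.arange(n) / n
    w = radius * np.exp(1j * theta)            # dz = i*w dtheta, so the i's cancel
    return np.mean(f(center + w) * w)

f = lambda z: (z**2 + z**3) / np.sin(z)**3

res0 = residue(f, 0.0)        # expected: 1 (as in the question)
res1 = residue(f, np.pi)      # expected: -(1 + 3*pi + (pi^2 + pi^3)/2), by my expansion
print(res0.real, res1.real)
```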
{ "language": "en", "url": "https://math.stackexchange.com/questions/4454212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Portfolio value on selling and buying calls Consider buying a call option with strike $K − δ$, selling two call options with strike $K$ and buying a call option with strike $K + δ$, where $K, δ > 0$ and $K > \delta$, all with maturity $T > 0$. Draw the terminal payoff function of this portfolio of options. Does this portfolio have a positive value at time zero? So this is what I did: $C(S, t) = e^{−r(T −t)}E^Q(1_{S_T >K− δ} + 1_{S_T >K + δ} - 2* 1_{S_T >K} | St = S)$ $C(S, t) = e^{−r(T −t)} Q(S_T >K− δ) + Q(S_T >K + δ) - 2Q (S_T >K) $ $Q(Log\frac{S_T}{S_0} >Log\frac{K− δ}{S_0})$ = $Q(\frac{Log\frac{S_T}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}} >\frac{Log\frac{K− δ}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}})$ $=Q(z >\frac{Log\frac{K− δ}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}})$ =$Φ^c(\frac{Log\frac{K− δ}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}}))$ So $C(S, t) = e^{−r(T −t)} Q(S_T >K− δ) + Q(S_T >K + δ) - 2Q (S_T >K) $ $= e^{−r(T −t)}[Φ^c(\frac{Log\frac{K− δ}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}})+Φ^c(\frac{Log\frac{K+ δ}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}})-2Φ^c(\frac{Log\frac{K}{S_0}-(r-\frac{σ^2}2)t}{σt^{0.5}})]$ I'm not sure what to do from here.
A call option with strike $K$ maturing at $t=T$ has terminal payoff $(S_T - K)^+$. This holds from the definition of a call option in a model-free manner, so you should not be applying the Black-Scholes equation here. The case you have been given is known as a "butterfly spread". Here, you have terminal payoff equal to $$\varphi (S_T) = (S_T - (K - \delta))^+ - 2(S_T - K)^+ +(S_T - (K+\delta))^+$$ For instance, if $K = 1$ and $\delta = 0.1$, the payoff graph is a triangular "tent": zero outside $(0.9, 1.1)$, rising linearly to a peak of height $0.1$ at $S_T = 1$. One can check that for $\delta > 0$, $\varphi \geq 0$ and $\varphi \neq 0$. Thus, no-arbitrage pricing tells us that at $t = 0$, the portfolio has positive value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4454388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inequality involving sums with binomial coefficient I am trying to show upper- and lower-bounds on $$\frac{1}{2^n}\sum_{i=0}^n\binom{n}{i}\min(i, n-i)$$ (where $n\geq 1$) in order to show that it basically grows as $\Theta(n)$. The upper-bound is easy to get since $\min(i, n-i)\leq i$ for $i\in\{0, \dots n\}$ so that $$\frac{1}{2^n}\sum_{i=0}^n\binom{n}{i}\min(i, n-i)\leq \frac{1}{2^n}\sum_{i=0}^n\binom{n}{i}i = \frac{n}{2}.$$ Thanks to Desmos, I managed to find a lower bound, but I am struggling to actually prove it. Indeed, I can see that the function $f(n)=\frac{n-1}{3}$ does provide a lower bound. One can in fact rewrite $$\frac{n-1}{3}=\frac{1}{2^n}\sum_{i=0}^n\binom{n}{i}\frac{2i-1}{3}.$$ I was thus hoping to show that for each term we have $\frac{2i-1}{3}\leq \min(i, n-i)$, but this is only true if $i\leq \frac{3n+1}{5}$ and not generally for $i\leq n$. I imagine there is a clever trick to use at some point but for some reason I am stuck here. Any help would be appreciated, thank you! EDIT: Thank you everyone for all the great and diverse answers! I flagged River Li's answer as the "accepted" one because of its simplicity due to the use of the Cauchy-Schwarz inequality, which does not require a further use of Stirling's approximation. Note that the other answers which involve such an approximation are much tighter though, but proving $\Theta(n)$ growth was sufficient here.
Let's first note that $\binom{n}{i}\cdot i = n\cdot \binom{n-1}{i-1}$. For odd $n=2m+1$, this makes $$ \begin{align} S_n = \sum_{i=0}^{n}\binom{n}{i}\cdot\min(i,n-i) &= 2\sum_{i=0}^m\binom{n}{i}\cdot i = 2n\sum_{i=1}^m\binom{n-1}{i-1} = 2n\sum_{j=0}^{m-1}\binom{2m}{j} \\ &= n\cdot\left( \sum_{j=0}^{2m}\binom{2m}{j} - \binom{2m}{m} \right) = n\cdot\left( 2^{2m} - \binom{2m}{m} \right) \end{align} $$ where we have used that $\binom{2m}{j}=\binom{2m}{2m-j}$. This makes $$ \frac{S_n}{2^n} = \frac{n}{2}\cdot\left (1 - \frac{\binom{2m}{m}}{2^{2m}} \right) \approx \frac{n}{2}\cdot\left( 1 - \frac{1}{\sqrt{\pi m}} \right). $$ For even $n=2m$, we get $$ \begin{align} S_n = \sum_{i=0}^{n}\binom{n}{i}\cdot\min(i,n-i) &= 2\sum_{i=0}^m\binom{n}{i}\cdot i - \binom{2m}{m}\cdot m = 2n\sum_{i=1}^m\binom{n-1}{i-1} - \binom{2m}{m}\cdot m \\ &= n\sum_{j=0}^{n-1}\binom{n-1}{j} - \binom{2m}{m}\cdot m = n\cdot\left( 2^{n-1} - \frac{1}{2}\binom{2m}{m} \right) \end{align} $$ which once more makes $$ \frac{S_n}{2^n} = \frac{n}{2}\cdot\left(1-\frac{\binom{2m}{m}}{2^{2m}}\right). $$ In both cases, you get $n/2$ as an upper bound. However, there are strong bounds on $\binom{2m}{m}/2^{2m}$ which can be applied: $$ \frac{e^{-1/8m}}{\sqrt{\pi m}} \le \frac{\binom{2m}{m}}{2^{2m}} \le \frac{1}{\sqrt{\pi m}} $$ E.g., see Jack D'Aurizio's derivation of this, or Wikipedia. Additional bounds have been provided by robjohn. The following bounds seem to be the tightest proven so far: $$ \frac{4^me^{-1/8m}}{\sqrt{\pi m}} < \binom{2m}{m} < \frac{4^m}{\sqrt{\pi\left( m+\frac{1}{4} \right)}} $$ The following bound is even tighter, but I have no proof of it, just numerical evidence: $$ \frac{4^m}{\sqrt{\pi\left( m+\frac{1}{4}+\frac{1}{32m} \right)}} < \binom{2m}{m} $$ It's the same as the above up to second order approximation, so not a big difference, but easier to compute.
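The closed form $S_n/2^n = \frac{n}{2}\bigl(1-\binom{2m}{m}/2^{2m}\bigr)$ can be checked exactly against the direct sum with integer arithmetic; this sketch also checks the $\frac{n}{2}(1-1/\sqrt{\pi m})$ approximation at a large $n$ (the tolerance is my own choice):

```python
from math import comb, pi, sqrt

def S_direct(n):
    return sum(comb(n, i) * min(i, n - i) for i in range(n + 1))

def S_closed(n):
    m = n // 2
    if n % 2 == 1:                                     # n = 2m + 1
        return n * (4**m - comb(2 * m, m))
    return n * (2**(n - 1) - comb(2 * m, m) // 2)      # n = 2m; comb(2m, m) is even

assert all(S_direct(n) == S_closed(n) for n in range(1, 40))

# approximation check at n = 1000 (m = 500): exact big-int division keeps precision
ratio = S_closed(1000) / 2**1000
print(ratio, 500 * (1 - 1 / sqrt(pi * 500)))
```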
{ "language": "en", "url": "https://math.stackexchange.com/questions/4454637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
Can a manifold be reconstructed from its charts? I'm learning special relativity and I am having a confusion on this mathematical point. Whenever any sort of motion or non-motion happens in the world, it can only be perceived by the scientist in a chart (inertial frame). But we say there exists a common manifold which is being charted by different observers when doing observations. How can one, just from seeing the charts with transition maps and metric (the spacetime metric), know the existence of a manifold? In a more simplistic sense, could I reconstruct how the earth sits in space just from seeing the pages of a world map (with marked distances)?
To my mind there are two ways to answer this question. From a mathematical point of view, a manifold is a set with a collection of maps (the charts) which are bijections with open sets of $R^n$, subject to some conditions. So in principle, from the definition, we know what the manifold is. From another point of view, we can also start from a set of charts and try to glue the pieces together two by two, as we do with road maps, or with an atlas. It is more intuitive, but it is not the usual definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4454786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
math of Mixing colours My brother asked me if I can give a mathematical formula for adding colours like in real life, where adding red, green and blue gives white, with these axioms: * *the addition operation is commutative, meaning red + blue = blue + red *the addition operation is linear (just for simplicity); we both studied linear algebra and that was the first idea for building a mathematical system for mixing colours. *the addition is associative, meaning red + red + blue + green = red + blue + green + red = white + red = pink *the system is mainly about primary, secondary and known colours: red, green, blue, white, black, and the colours from combining two or more known ones (yellow, orange, purple, ...) *the system is preferably a discrete one (a continuous system would have hues and shades); a discrete system can be established from a continuous one by dividing it into small intervals, each with a specific colour, like naming an interval red even if it includes shades of red (light red, dark red) With these conditions in mind I thought about the unit circle in 3D with red = (1,0,0), green = (0,1,0), and blue = (0,0,1), and I would normalize the result to keep the system closed. The problem is that associativity is violated in this system; any help in improving the system, or a new system that works, is fine. (Also my brother is interested in additive colours, namely RGB, and not in subtractive colours such as CMY.)
You could try something like this. A paint pot is a vector in $\mathbb{N}^n$, where $n$ is the number of primary colours. A pot $(a_1, \dots, a_n)$ contains $a_i$ atoms of paint of colour $i$. The standard basis vectors are very small pots of a single colour; the zero vector is the empty pot. To describe the colour of the contents of a nonempty pot, apply the map $$ (a_1,\dots, a_n)\mapsto \frac{1}{\sum a_i}(a_1, \dots, a_n) $$ where we view vectors in $\mathbb{Q}^n$ with nonnegative entries that moreover sum to 1 as colours in the obvious way.
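In this model, pot addition is ordinary vector addition, so commutativity and associativity come for free, and only the final colour read-out normalizes. A minimal sketch of the brother's examples (the "white" and "pink" interpretations below follow the question's axiom 3):

```python
import numpy as np

red, green, blue = np.eye(3, dtype=int)       # tiny single-colour pots

def colour(pot):
    """Map a nonempty pot to its colour: the normalised mixture vector."""
    return pot / pot.sum()

# addition of pots is vector addition: commutative and associative by construction
assert np.array_equal(red + blue, blue + red)
assert np.array_equal((red + red) + (blue + green), red + (blue + (green + red)))

white = red + green + blue
pink = red + white                            # "white + red = pink"
print(colour(white), colour(pink))
```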
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Must a certain continuous map have 0 in its image, given that its restriction to the unit sphere is homotopic to the identity? Suppose $f:\mathbb{B}^n \to \mathbb{R}^n$ is continuous (here $\mathbb{B}^n$ refers to the $n$-dimensional unit ball). Suppose also that its restriction $g := f|_{\mathbb{S}^{n-1}}$ does not have $0$ in its image, and $g:\mathbb{S}^{n-1} \to \mathbb{R}^n \setminus\{0\}$ is homotopic to the identity map. Is it necessarily true that $f(x) = 0$ for some $x\in \mathbb{B}^n$? I have proved it in the affirmative below. I'm hoping there may be a proof that uses only elementary results (i.e. avoids Borsuk-Ulam). I used this result in the corrected proof of this question about Chebyshev sets.
If, by contradiction, $f$ has no zero, then $f|_{\mathbb S^{n-1}}$ is homotopic to a constant in $\mathbb R^n\setminus\{0\}$ by $$ H(x,t)=f(tx), \quad x\in S^{n-1}, \ t\in[0,1]. $$ Therefore, by transitivity, the identity map on $\mathbb S^{n-1}$ is homotopic to a constant. However the inclusion of $\mathbb S^{n-1}$ in $\mathbb R^n\setminus\{0\}$ is easily seen to be a homotopy equivalence so we deduce that $\mathbb S^{n-1}$ is contractible which it obviously isn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Forming a committee with three restrictions Suppose that in the US Supreme Court a committee of seven politicians is chosen from five republicans, ten democrats and eight independents. How many different ways can the committee be chosen if the committee must include exactly one republican, at least three democrats and at least one independent? I first considered the number of ways of forming a committee with at least three democrats and thought this was equal to $$ C(10,3)\times C(13,4)+C(10,4)\times C(13,3)+C(10,5)\times C(13,2)=165,516. $$ However the answer in my book is $73,080$. I am yet to consider the other restrictions and yet my answer is too large. Where am I double counting?
Apart from the Democrats, you have lumped everyone else together, which is patently wrong and will overcount by violating the constraint on Republicans. Start by choosing $1$ Republican from $5$, then follow the restrictions for the other two groups to get the right answer. $\binom51\left[\binom{10}3\binom83 +\binom{10}4\binom82 +\binom{10}5\binom81\right]$
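The expression in the answer evaluates to the book's figure, and a brute-force enumeration over all $\binom{23}{7}$ committees confirms it (the enumeration is my own check, not part of the original answer):

```python
from math import comb
from itertools import combinations

# closed form from the answer
closed = comb(5, 1) * (comb(10, 3) * comb(8, 3)
                       + comb(10, 4) * comb(8, 2)
                       + comb(10, 5) * comb(8, 1))

# brute force over all committees of 7 from 5 R + 10 D + 8 I
people = ('R',) * 5 + ('D',) * 10 + ('I',) * 8
brute = sum(1 for t in combinations(people, 7)
            if t.count('R') == 1 and t.count('D') >= 3 and t.count('I') >= 1)

print(closed, brute)   # both 73080, matching the book's answer
```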
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
definition of lie bracket of linear connection and a $\operatorname{End}(E)$-valued $k$-form I'm currently studying LECTURES ON CHERN-WElL THEORY, in which the trace function on a vector bundle $(M,E)$ is defined by setting $$ \operatorname{tr}(\omega\otimes A) = (\operatorname{tr}A)\omega ,\quad A\in \operatorname{End}(E),\omega \in \mathcal{A}^*(M) $$ but Lemma 1.7 says that for all $A\in \mathcal{A}^*(M;\operatorname{End}(E))$ and every linear connection $\nabla$ on $E$, $$ d\operatorname{tr}A= \operatorname{tr}[\nabla,A] $$ holds. What confuses me is that the book defines the Lie bracket of $\operatorname{End}(E)$-valued forms, while $\nabla$ is NOT an $\operatorname{End}(E)$-valued $1$-form. So, can anyone explain what this Lie bracket $[\nabla,A]$ is?
I think the author tried to define a product of two matrices whose entries are forms, and its action on $\Gamma(E)$. In this view the only thing left is to define the exterior operator on $\mathcal{A}^*(M) \otimes \mathfrak{gl}(n,\mathbb{R})$ and how it acts on $\Gamma(E)$, since $\nabla=\omega+d$. If we define this exterior operator (different from the one on $\mathcal{A}^*(M)$) on $\mathcal{A}^*(M) \otimes \mathfrak{gl}(n,\mathbb{R})$ by (a frame $\{e_i\}$ of $E$ is already chosen) $$ dA = d(A^i_j)=(dA^i_j),A\in \mathcal{A}^*(M) \otimes \mathfrak{gl}(n,\mathbb{R}) ,\quad de=d(x^ie_i)=(dx^1,\cdots,dx^r)^T,e\in\Gamma(E) $$ then the definition fits all the actions I want on $\operatorname{End}(E)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Spivak's Calculus: Using derivatives prove that if $n \geq 1$, then $(1+x)^n > 1+nx$ for $-10$. The following is a problem from Spivak's Calculus, Ch. 11 *Use derivatives to prove that if $n \geq 1$, then $$(1+x)^n > 1+nx, \text{ for } -1<x<0 \text{ and } x>0$$ (notice that equality holds for $x=0$) The solution in the solution manual is a bit terse Let $g(x)=(1+x)^n-(1+nx)$. Then $g(0)=0$, but $$g'(x)=n(1+x)^{n-1}-n\tag{1}$$ Since $n-1 \neq 0$ this means that $$\begin{align}g'(x) & < 0 \text{ for } -1<x<0, \\ & >0 \text{ for } x>0 \end{align}\tag{2}$$ Thus $g(x)>0$ for $-1<x<0$ and $x>0$ I'd like to fill in the steps in more detail. Also, my solution differs from the solution above and I'd like to know why. Consider, for example, the case $n=2$. Then $$g(x)=(1+x)^2-(1+2x)$$ $$g'(x)=2(1+x)-2$$ $$g(-1.5)=(1-1.5)^2-(1+2\cdot (-1.5))=\frac{1}{4}+2>0$$ $$g'(-1.5)=2(1-1.5)-2=-3$$ Why doesn't the solution account for the interval $(-2,-1)$? Here is what I mean. Consider $(1)$ $$g'(x)=n[(1+x)^{n-1}-1]$$ $$g'(x)<0 \implies (1+x)^{n-1}-1<0$$ $$\implies (1+x)^{n-1}<1$$ If $n-1$ is odd, then $$1+x<1 \implies x<0$$ If $n-1$ is even, then $$1+x<1 \implies x<0$$ or $$1+x>-1 \implies x>-2$$ Therefore, for any $n\geq 1$ * *if $n$ is even, then $g$ is decreasing on $(-\infty,0)$ and increasing on $(0,\infty)$. $0$ is a local minimum. *if $n$ is odd, then $g$ is decreasing on $(-2,0)$ and increasing on $(-\infty,-2)$ and $(0,\infty)$. Therefore, there is another root of $g(x)$ other than $0$. $0$ is a local minimum, $-2$ is a local maximum. Therefore * *if $n$ is even, $g(x)>0$ for all $x \neq 0$ *if $n$ is odd, $g(x)>0$ for all $x$ above some value $x_0$ such that $g(x_0)=0$, excluding $0$. Is my reasoning incorrect or is the solution manual incorrect?
The reason this holds is the convexity of $(1 + x)^n$. The line $y = 1 + nx$ is the tangent line to the graph of $y = (1 + x)^n$ at $(0,1)$ and the solution given is a long-winded way of saying the graph of a convex function lies above its tangent line, essentially using that its second derivative is positive. Note $n$ has to be strictly greater than $1$ for the exercise to hold true.
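Note that the exercise only makes claims on $(-1,0)$ and $(0,\infty)$, which is why the solution manual never discusses $(-2,-1)$. A quick numerical sampling of $g(x)=(1+x)^n-(1+nx)$ on the stated domain (the grids and range of $n$ below are my own choices) confirms strict positivity there for $n>1$:

```python
import numpy as np

def g(x, n):
    return (1 + x)**n - (1 + n * x)

x = np.concatenate([np.linspace(-0.999, -1e-4, 2000),   # sample of (-1, 0)
                    np.linspace(1e-4, 10.0, 2000)])     # sample of (0, infinity)

for n in range(2, 8):
    assert np.all(g(x, n) > 0), n       # strict inequality off x = 0
    assert g(0.0, n) == 0.0             # equality at x = 0

print("(1+x)^n > 1+nx checked on samples of (-1,0) and (0,10]")
```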
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cannot find the x and y angles. Is there something about the congruent lines in the two triangles that I need to use to find the answer? I got the triangle on the left as 60° 60° 60°, and the middle angle at the bottom is 46°, but that's about it. What am I missing to get x and y?
Since the segments marked with a red | are the same length, the other two (nonequiangular) triangles in the picture are both isosceles. $\Delta QST$ isosceles means both of the unknown angles in it have the same measure (i.e. both are $x^\circ$). Thus, $x^\circ + x^\circ + 74^\circ = 180^\circ$. Solving gives $x^\circ = 53^\circ$. $\Delta QSP$ isosceles means the angles opposite its congruent sides are congruent angles, so you can find the unknown angles in $\Delta QSP$ using similar reasoning. Bottom line: If you know any one of the angles in an isosceles triangle you can figure out the other two.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4455890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is a continuous function on compact convex set where the boundary is mapped to the set a self mapping? Let $K ⊂ R^n$ be convex and compact with $0$ in the interior of $K$. Let $f ∈ C(K, R^n)$ with $f(∂K) ⊂ K$. If this is the case, do we in fact have $f(K) \subset K$? It is probably not the case that the image of a convex set is convex, as such statements are hard to prove, but we at least know the image is compact and connected (also path-connected). At the same time, just being continuous is not a strong enough property to make further conclusions. The image of the boundary is not necessarily the boundary unless $f$ is a diffeomorphism, but here we don't even have a homeomorphism, just continuity.
No, not necessarily. Let's consider the closed, convex, compact unit disc $K \subseteq \Bbb{R}^2$. Define the continuous function: $$f(x,y) = (2 - 2\|(x, y)\|, 0) = \left(2 - 2\sqrt{x^2 + y^2},0\right).$$ The boundary of $K$ consists of all points such that $\|(x, y)\| = 1$. Thus, $$f(\partial K) = \{(0,0)\} \subseteq K.$$ But, the point $(0, 0) \in K$ maps to $(2, 0) \notin K$, so $f$ is not a self-map on $K$.
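The two properties of the counterexample are simple enough to verify numerically: boundary points (where $\|(x,y)\|=1$) all map to the origin, which lies in $K$, while the centre maps to $(2,0)$ outside the unit disc. A minimal check:

```python
import numpy as np

def f(p):
    x, y = p
    r = np.hypot(x, y)
    return np.array([2 - 2 * r, 0.0])

# every boundary point of the unit disc maps (essentially) to the origin, inside K
for theta in np.linspace(0, 2 * np.pi, 100):
    q = f((np.cos(theta), np.sin(theta)))
    assert np.linalg.norm(q) <= 1e-12

# ... but the centre of K escapes K
center_image = f((0.0, 0.0))
print(center_image, np.linalg.norm(center_image))   # [2. 0.] with norm 2 > 1
```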
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$\mathbb{P}(-1\leqslant X\leqslant\frac{1}{2})$ from $\varphi_{X}(t)=\frac{1}{7}\left(2+e^{-it}+e^{it}+3e^{2it}\right).$ Let $X$ be a random variable with characteristic function given by $$\varphi_{X}(t)=\frac{1}{7}\left(2+e^{-it}+e^{it}+3e^{2it}\right).$$ Determine $\mathbb{P}(-1\leqslant X\leqslant\frac{1}{2})$. Notice $$\begin{align*} \frac{1}{7}\left(2+e^{-it}+e^{it}+3e^{2it}\right)&=\frac{3}{7}i\sin(2t)+\frac{2}{7}\cos(t)+\frac{3}{7}\cos(2t)+\frac{2}{7}\\ &=\frac{3}{7}(i\sin(2t)+\cos(2t))+\frac{2}{7}(\cos(t)+1)\\ &=\mathbb{E}[\cos(tX)]+i\mathbb{E}[\sin(tX)]. \end{align*}$$ I feel like I am very close to something, but I can't progress beyond this.
Remember that $$\varphi_X(t) = \operatorname{E}[e^{iXt}],$$ so if $X$ has discrete support on some set $\Omega$, $$\operatorname{E}[e^{iXt}] = \sum_{x \in \Omega} \Pr[X = x] e^{itx}.$$ This immediately suggests that $X \in \{-1, 0, 1, 2\}$ and we have $$\Pr[X = -1] = 1/7, \\ \Pr[X = 0] = 2/7, \\ \Pr[X = 1] = 1/7, \\ \Pr[X = 2] = 3/7.$$ In particular, $\Pr[-1 \leq X \leq \tfrac{1}{2}] = \Pr[X = -1] + \Pr[X = 0] = \tfrac{3}{7}$.
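This can be double-checked by confirming that the proposed pmf reproduces the given characteristic function at arbitrary $t$, and then summing the relevant masses (the sketch below is just a numerical sanity check of the answer):

```python
import numpy as np
from fractions import Fraction

pmf = {-1: Fraction(1, 7), 0: Fraction(2, 7), 1: Fraction(1, 7), 2: Fraction(3, 7)}
assert sum(pmf.values()) == 1

def phi_from_pmf(t):
    return sum(float(p) * np.exp(1j * x * t) for x, p in pmf.items())

def phi_given(t):
    return (2 + np.exp(-1j * t) + np.exp(1j * t) + 3 * np.exp(2j * t)) / 7

for t in np.linspace(-5, 5, 11):
    assert abs(phi_from_pmf(t) - phi_given(t)) < 1e-12

prob = pmf[-1] + pmf[0]     # P(-1 <= X <= 1/2) = P(X = -1) + P(X = 0)
print(prob)                 # 3/7
```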
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimum possible sum of squares of two numbers with sum $k$? If the sum of two numbers is k. Find the minimum value of the sum of their squares. These are my calculations so far. a + b = k <-- google says that I should put x + y = k rather than a + b = k a² + b² = y (but I don't know why I should do that) (k - b)² + b² = y k² - 2kb + 2b² = y <-- now I'm stuck here. I don't know which differentiation variable to take. If it is 'b', then why? So here I am. Kindly show me how to solve the problem and where I went wrong in my computations :P
Very simple approach: You know $a + b = k$. So $$(a+b)^2 = a^2 + 2ab + b^2 = k^2.$$ If you could somehow remove the $2ab$ term, then you'd have the desired sum of squares. Well, you also know that $$(a-b)^2 = a^2 - 2ab + b^2,$$ so if you add the two together, you'd get $$(a+b)^2 + (a-b)^2 = 2a^2 + 2b^2 = k^2 + (a-b)^2.$$ Therefore, $$a^2 + b^2 = \frac{k^2 + (a-b)^2}{2}.$$ Now, $k$ is a constant, and $(a-b)^2$, being the square of a real number, is never negative, so the right hand side is minimized when you can make $a-b = 0$, and the minimum value attained is $k^2/2$.
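The identity behind this answer, $a^2+b^2=\frac{k^2+(a-b)^2}{2}$ with $b=k-a$, can be spot-checked numerically; the value $k=5$ and the sampling grid below are arbitrary choices for illustration:

```python
import numpy as np

k = 5.0
a = np.linspace(-10, 15, 100001)
b = k - a
s = a**2 + b**2

# the identity holds pointwise, so the minimum is k^2/2, attained at a = b = k/2
assert np.allclose(s, k**2 / 2 + (a - b)**2 / 2)
print(s.min(), k**2 / 2)
```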
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
$A,B$ such that $A\cap B=\emptyset$ and $A\cup B=\mathbb{R}$ and $B=\{x+y : x,y\in A\}$? If sets $A,B$ satisfy $A\cap B=\emptyset$, $A\cup B=I$, and $B=\{x+y : x,y\in A\}$, can $I$ be the set of real numbers $\mathbb{R}$? I think the answer is yes, but I can't construct such sets. If $A$ is the set of odd numbers and $B$ is the set of even numbers, then $I$ is the set of integers $\mathbb{Z}$. But I don't see how to distribute the irrational numbers and the remaining rational numbers into the set $A$.
$A$ can be $ \ldots \cup [-2,-1) \cup [1,2) \cup [4,5) \cup \ldots$ and $B$ will then be $ \ldots \cup [-1,1) \cup [2,4) \cup [5,7) \cup \ldots$
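In other words, $A$ is the union of the intervals $[3n+1,\,3n+2)$ over $n\in\mathbb{Z}$, and $B$ is its complement. Membership depends only on $x \bmod 3$, so the key closure property $A+A\subseteq B$ and the partition are easy to sample-check (the random samples and the brute-force search for decompositions of $B$-elements are my own illustration):

```python
import numpy as np

def in_A(x):        # A = union over n of [3n+1, 3n+2)
    return 1 <= x % 3 < 2

def in_B(x):        # B = complement of A, union over n of [3n+2, 3n+4)
    return not in_A(x)

rng = np.random.default_rng(1)
xs = rng.uniform(-50, 50, 4000)

# sums of two elements of A land in B: offsets in [1,2)+[1,2)=[2,4) mod 3
As = [x for x in xs if in_A(x)]
assert all(in_B(x + y) for x in As[:60] for y in As[:60])

# conversely, each sampled element of B decomposes as a sum of two A-elements
Bs = [x for x in xs if in_B(x)][:60]
Agrid = [g for g in np.arange(-60, 60, 0.01) if in_A(g)]
assert all(any(in_A(b - g) for g in Agrid) for b in Bs)
print("partition of R with A + A = B verified on samples")
```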
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Evaluating an integral using the saddle point approximation The following integral appears in a certain physical problem: $$ \lambda_m = \int_{0}^{2\pi} \frac{d\phi}{2\pi} e^{K\cos \phi + im\phi} $$ where $m$ is an integer and $K$ is a large real number. I'm asked to evaluate it using the saddle point approximation (which in the math literature is called Laplace's method as far as I'm aware). The real part of the argument of the exponential is the largest when $\phi = 0$, so I approximated it by $$ e^{K \cos \phi + im\phi} \approx e^{K-\frac{1}{2}\phi^{2}K} $$ which means that $$ \lambda_{m}\approx \frac{e^{K}}{2\pi}\int\limits _{0}^{2\pi}d\phi\,e^{-\frac{1}{2}\phi^{2}K} $$ But apparently this derivation is wrong. The biggest problem is that $m$ doesn't appear in the approximation. Presumably, the correct approximation should be: $$ \lambda_{m}\approx\frac{e^{K}}{2\pi}\int_{-\infty}^{\infty}d\theta\,e^{-K\theta^{2}/2+im\theta}=\frac{e^{K}}{\sqrt{2\pi K}}e^{-\frac{m^{2}}{2K}} $$ However, I don't understand what saddle point was used here and why the limits of the integral were changed.
Without the Laplace method If $m$ is an integer $$\lambda_m = \frac{1}{2\pi}\int_{0}^{2\pi} e^{K\cos (\phi) + im\phi}\,d\phi=I_m(K)$$ where $I_m$ is the modified Bessel function of the first kind. For large values of $K$, use the asymptotic expansion developed by Hankel $$I_m(K) =\frac{e^K}{\sqrt{2 \pi K}}\Bigg[1-(4m^2-1)\sum_{p=0}^\infty 4^p\,\frac{\left(\frac{3}{2}-m\right)_p \left(\frac{3}{2}+m\right)_p}{(p+1)!\,(8K)^{p+1}}\Bigg]$$ that is to say $$I_m(K) =\frac{e^K}{\sqrt{2 \pi K}}\Bigg[1-\frac{(4m^2-1)}{1!\,(8K)}+\frac{(4m^2-1)(4m^2-9)}{2!\,(8K)^2} +\cdots\Bigg] $$
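Both approximations, the Gaussian saddle-point result $\frac{e^K}{\sqrt{2\pi K}}e^{-m^2/(2K)}$ from the question and the Hankel expansion above, can be compared against a direct numerical evaluation of the integral; the trapezoid rule is spectrally accurate here because the integrand is smooth and periodic. The choices $K=50$, $m=2$ and the grid size below are mine:

```python
import numpy as np

def lam(m, K, n=200_000):
    """lambda_m = (1/2pi) * integral of exp(K cos(phi) + i m phi), trapezoid rule."""
    phi = 2 * np.pi * np.arange(n) / n
    return np.mean(np.exp(K * np.cos(phi)) * np.exp(1j * m * phi))

K, m = 50.0, 2
exact = lam(m, K).real                      # the imaginary part vanishes for integer m

saddle = np.exp(K) / np.sqrt(2 * np.pi * K) * np.exp(-m**2 / (2 * K))
hankel = np.exp(K) / np.sqrt(2 * np.pi * K) * (
    1 - (4 * m**2 - 1) / (8 * K)
      + (4 * m**2 - 1) * (4 * m**2 - 9) / (2 * (8 * K)**2)
)
print(exact / saddle, exact / hankel)       # both ratios close to 1
```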
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Asymptotic behaviour of $\sum_{i=n}^{\infty} a_i^2$ if $\sum_{i=1}^{\infty} a_i^2 i^{2\gamma} < \infty$ Consider a sequence of positive numbers $(a_i)_{i \in \mathbb{N}}$ such that for some $\gamma > 0$ we have $$ \sum_{i=1}^{\infty} a_i^2 i^{2\gamma} < \infty. $$ I was wondering how the tail sum $\sum_{i=n}^{\infty} a_i^2$ behaves asymptotically, as $n \to \infty$. As a heuristic, consider $a_i = i^{-\gamma - \frac{1}{2} - \epsilon}$ for some $\epsilon > 0$. We know this satisfies the assumption and you can show that $\sum_{i=n}^{\infty} a_i^2 \approx n^{-2\gamma - 2\epsilon}$ by comparing it to $\int_n^{\infty} x^{-2\gamma - 1 - 2\epsilon} dx$. So intuitively I would think that for general $(a_i)$, the tail sum can be bounded by $c\ n^{-2\gamma}$ for some $c > 0$. Just using Cauchy-Schwarz gives us $$ \sum_{i = n}^{\infty} a_i^2 = \sum_{i=n}^{\infty} a_i^2 i^{\gamma} i^{-\gamma} \leq \sqrt{\sum_{i=n}^{\infty} a_i^4 i^{2\gamma} \sum_{i=n}^{\infty} i^{-2\gamma}}. $$ We know the first sum is finite (since $a_i \to 0$), and the second sum behaves as $\sum_{i=n}^{\infty} i ^{-2\gamma} \approx n^{-2\gamma + 1}$, again using the integral trick. Therefore the tail behaves as $n^{-\gamma + 1/2}$. This doesn't seem to be the best possible. You could also write $$ \sum_{i=n}^{\infty} a_i^2 = \sum_{i=n}^{\infty} a_i i^{\gamma} a_i i^{-\gamma} \leq \sqrt{\sum_{i=n}^{\infty} a_i^2 i^{2\gamma} \sum_{i=n}^{\infty} a_i^2 i^{-2\gamma}}. $$ Again, the first term is finite, and it seems that the second sum should have the desired asymptotic decay (when considering it with $a_i = i^{-\gamma - \frac{1}{2} - \epsilon}$ for example). However, I don't see how to show this last part in general. What am I missing?
Your tail sum satisfies $$\lim_{n \rightarrow \infty }\Big(\sum_{i=n}^{\infty}a_{i}^2\Big)n^{2\gamma} = 0.$$ To prove this, set $a_{i}^2 = b_{i}$ and $2\gamma = \beta$. Note: $$\sum_{j=n}^{m} b_{j} = \sum_{j=n}^{m}b_{j}j^{\beta}j^{-\beta}. $$ Set $$B(x) = \sum_{0 \leq a \leq x} b_{a}a^{\beta}$$ with $b_{0} = 0$. By the Abel summation formula (https://en.wikipedia.org/wiki/Abel%27s_summation_formula) we have $$\sum_{j=n}^{m}b_{j}j^{\beta}j^{-\beta} = B(m)m^{-\beta} - B(n)n^{-\beta} - \int_{n}^{m} B(x)\left(-\beta x^{-\beta-1}\right)dx.$$ Since $B$ is bounded (the series defining it converges) and $m^{-\beta}\to 0$, we can take $m \rightarrow \infty$ to get $$\sum_{j=n}^{\infty}b_{j} = -B(n)n^{-\beta}+\int_{n}^{\infty} \beta\, B(x)\, x^{-\beta-1}\,dx =\int_{n}^{\infty} \beta \,(B(x)-B(n))\, x^{-\beta -1}\,dx = o(n^{-\beta}),$$ using $n^{-\beta} = \int_{n}^{\infty}\beta x^{-\beta-1}\,dx$ in the middle step; the final estimate holds because $\sup_{x\ge n}|B(x)-B(n)| \to 0$ as $n\to\infty$, by the Cauchy criterion for the convergent series defining $B$.
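A small numerical illustration of this (my own sketch; the particular sequence, the choice $\gamma=1$, and the truncation point are arbitrary): take $a_i^2 = i^{-2\gamma-1}/\log^2(i+1)$, which satisfies $\sum_i a_i^2 i^{2\gamma} < \infty$, and watch $n^{2\gamma}\sum_{i\ge n} a_i^2$ decrease toward $0$ (slowly, roughly like $1/\log^2 n$):

```python
import math

GAMMA = 1.0
M = 200_000  # truncation point standing in for infinity

def b(i):
    # b_i = a_i^2 with a_i = i^(-GAMMA - 1/2) / log(i + 1)
    return i ** (-2 * GAMMA - 1) / math.log(i + 1) ** 2

def scaled_tail(n):
    return n ** (2 * GAMMA) * sum(b(i) for i in range(n, M))

vals = [scaled_tail(n) for n in (10, 100, 1000)]
print(vals)  # strictly decreasing, consistent with o(n^{-2*gamma})
```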
{ "language": "en", "url": "https://math.stackexchange.com/questions/4456972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Fredholm alternative vs variational methods I was reading an exercise in a lecture notes of variational methods in order to show that if $\rho: \overline{\Omega} \rightarrow \mathbb{R}$ is continuous in a bounded domain $\Omega \subset \mathbb{R}^N$, then a necessary condition so that $-\Delta u = \lambda_j u + \rho(x)$ has a weak solution in $H_0^1(\Omega)$ is that $\int_{\Omega} \rho(x) v dx = 0$ for all $v \in \ \text{ker} \ (\Delta + \lambda_j Id)$. I did this exercise with Fredholm alternative (a tool of functional analysis), but this exercise in a lecture notes of variational methods led me to think in a comparation between Fredholm alternative and variational methods, so my question is: are there some advantages in Fredholm alternative compared to variational methods and vice versa to solve a PDE problem? If possible, give me examples, please. Thanks in advance! $\textbf{P.S.:}$ by "variational methods" I mean the study of find weak solutions of a PDE problem associating a functional to this problem and find a critical point for this functional that is a weak solution for the problem.
You cannot analyze the solvability of an equation like $$ -\Delta u + b\cdot\nabla u = f $$ by energy-minimization methods. The bilinear form associated with weak solutions of the above equation is not symmetric, whereas the bilinear form you get by studying critical points of a quadratic energy functional is inevitably symmetric. So these kinds of variational methods do not work for such equations.
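To make the asymmetry concrete, here is a toy discretization (my own sketch, not from the answer): central differences for the 1D operator $-u''+b\,u'$ give a non-symmetric matrix as soon as $b\neq 0$, so it cannot arise as the gradient system of a quadratic energy, while the pure $-u''$ part stays symmetric.

```python
def convection_diffusion_matrix(N, h, b):
    # interior finite-difference stencil of -u'' + b u' with central differences
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        A[i][i] = 2.0 / h**2
        if i > 0:
            A[i][i - 1] = -1.0 / h**2 - b / (2 * h)
        if i < N - 1:
            A[i][i + 1] = -1.0 / h**2 + b / (2 * h)
    return A

A = convection_diffusion_matrix(5, 1.0 / 6, 3.0)  # b != 0: non-symmetric
S = convection_diffusion_matrix(5, 1.0 / 6, 0.0)  # pure -u'': symmetric
print(A[0][1], A[1][0], S[0][1], S[1][0])
```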
{ "language": "en", "url": "https://math.stackexchange.com/questions/4457130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
log det on density matrix plus identity A very naive question: given a pure quantum state $|\phi\rangle$, and the associated density matrix $\rho=|\phi\rangle\langle\phi|$, does there exist an efficient quantum operator/procedure that gives me $$\log\operatorname{det}(I+\rho)\quad?$$ Would I need any oracles? Best, Whoopy
Assuming that the state space is finite dimensional, there is a more pedestrian proof than the very nice one by Cosmas Zachos: in a basis where $|\phi\rangle$ is the first basis vector, the matrix $\rho=|\phi\rangle\langle\phi|$ is the projection matrix $$ \rho=\begin{pmatrix}1&0&\dots&0\\0&0&\dots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\dots&0 \end{pmatrix}. $$ Then clearly $\operatorname{det}(I+\rho)=2$ and $\log\operatorname{det}(I+\rho)=\log 2$.
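The same conclusion can be checked numerically without choosing a special basis (a throwaway sketch of mine; the hand-rolled determinant is only there to avoid dependencies): for any pure state, $I+\rho$ has eigenvalues $2,1,\dots,1$, so its determinant is $2$.

```python
import math, random

def det(M):
    # Gaussian elimination with partial pivoting; works over complex numbers
    n = len(M)
    M = [row[:] for row in M]
    d = 1 + 0j
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

random.seed(1)
v = [complex(random.random(), random.random()) for _ in range(4)]
nrm = math.sqrt(sum(abs(z) ** 2 for z in v))
phi = [z / nrm for z in v]  # normalized pure state
rho = [[phi[i] * phi[j].conjugate() for j in range(4)] for i in range(4)]
M = [[(i == j) + rho[i][j] for j in range(4)] for i in range(4)]
print(abs(det(M)))  # ~ 2, so log det(I + rho) = log 2
```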
{ "language": "en", "url": "https://math.stackexchange.com/questions/4457384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find the probability of drawing a certain two cards from a pack Could I please ask for help on the last part of this question: Two cards are drawn without replacement from a pack of playing cards. Calculate the probability: a) That both cards are aces b) That one (and only one) card is an ace c) That the two cards are of different suits d) Given that at least one ace is drawn, find the probability that the two cards are of different suits. Here's my attempt (for parts a, b, and c I get the answer given in the book): Let: $A =$ Event that both cards are aces $B =$ Event that one and only one card is an ace $C =$ Event that the two cards are of different suits. a) $P(A) = \frac{4}{52} \cdot \frac{3}{51} = \frac{1}{221}$ (as we must pick an ace AND another ace) b) $P(B) = \frac{4}{52} \cdot \frac{48}{51} + \frac{48}{52} \cdot \frac{4}{51} = \frac{32}{221}$ (as we can pick an ace then a non-ace, or a non-ace then an ace) c) $P(C) = \frac{13}{52} \cdot \frac{39}{51} \cdot 4 = \frac{13}{17}$ (as we can pick any given suit first, followed by a card not of that suit, and this can be done in four ways, one for each suit). d) Let $D =$ Event that at least one ace is drawn. $P(D) = P(A) + P(B)$ (because at least one ace is drawn only if "both cards are aces" or "one and only one card is an ace"), so $P(D) = \frac{1}{221} + \frac{32}{221} = \frac{33}{221}$. Now I need to calculate $P(C \mid D) = \frac{P(C \cap D)}{P(D)}$. So if I can calculate $P(C \cap D)$ then I can divide this by $P(D)$ to get the answer. I (wrongly, it appears!) reasoned like so: to end up with two cards where "at least one is an ace and both are of different suits" you can only have this by either having "the first card be an ace and the second a card of a different suit from that ace" OR having "the first card be of a certain suit and the second card an ace of another suit". Let @ stand for any suit.
so \begin{align*} P(C \cap D) & = P(\text{ace of @}) \cdot P(\text{not @}) + P(\text{@}) \cdot P(\text{ace not of @})\\ & = \frac{4}{52} \cdot \frac{39}{51} \cdot 4 + \frac{13}{52} \cdot \frac{3}{51} \cdot 4\\ & = \frac{5}{17} \end{align*} Well this leads to $P(C \mid D) = \frac{5}{17} \cdot \frac{221}{33} = \frac{65}{33}$. Answer given in book is $\frac{25}{33}$. Thanks for any help.
Two cards are drawn without replacement from a standard deck of playing cards. Given that at least one ace is drawn, find the probability that the two cards are of different suits. Method 1: We correct your attempt. As you observed, the probability that at least one ace is drawn is $$\Pr(\text{at least one ace is drawn}) = \frac{4 \cdot 3 + 4 \cdot 48 + 48 \cdot 4}{52 \cdot 51} = \frac{33}{221}$$ If we draw two aces, they must be of different suits. This event can occur in $4 \cdot 3$ ways. If we draw one of the four aces with the first draw, there are $51 - 3 - 12 = 36$ cards left in the deck which are not aces and are of a different suit than the ace. Hence, the number of ways of drawing an ace and then drawing a non-ace of a different suit is $4 \cdot 36$. If we draw one of the $48$ non-aces with the first draw, there are $3$ aces left in the deck with a different suit than the non-ace we drew first. Hence, the number of ways of drawing a non-ace and then drawing an ace of a different suit is $48 \cdot 3$. Hence, the probability that the two cards are of different suits and at least one ace is drawn is $$\Pr(\text{of different suits} \cap \text{at least one ace is drawn}) = \dfrac{4 \cdot 3 + 4 \cdot 36 + 48 \cdot 3}{52 \cdot 51} = \frac{25}{221}$$ Hence, the probability that the two cards are of different suits given that at least one ace is drawn is \begin{align*} \Pr(\text{of different suits} \mid & \text{at least one ace is drawn})\\ \qquad & = \frac{\Pr(\text{of different suits} \cap \text{at least one ace is drawn})}{\Pr(\text{at least one ace is drawn})}\\ \qquad & = \frac{\dfrac{4 \cdot 3 + 4 \cdot 36 + 48 \cdot 3}{52 \cdot 51}}{\dfrac{4 \cdot 3 + 4 \cdot 48 + 48 \cdot 4}{52 \cdot 51}}\\ \qquad & = \frac{4 \cdot 3 + 4 \cdot 36 + 48 \cdot 3}{4 \cdot 3 + 4 \cdot 48 + 48 \cdot 4}\\ \qquad & = \frac{25}{33} \end{align*} Method 2: We work with combinations to avoid considering the order in which the cards are drawn, which simplifies the calculations. 
The probability of drawing at least one ace when two cards are drawn is $$\Pr(\text{at least one ace}) = \frac{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{48}{1}}{\dbinom{52}{2}}$$ since we must either select two of the four aces or one of the four aces and one of the $48$ non-aces while selecting two of the $52$ cards in the deck. The probability that the two cards are of different suits and at least one ace is drawn is $$\Pr(\text{of different suits} \cap \text{at least one ace is drawn}) = \frac{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{36}{1}}{\dbinom{52}{2}} = \frac{25}{221}$$ since either two aces are drawn or one ace and one of the $36$ non-aces that are of a different suit than the ace are drawn when two cards are drawn from the deck of $52$ cards. The probability of drawing two cards of different suits given that at least one ace has been selected is \begin{align*} \Pr(\text{of different suits} & \mid \text{at least one ace is drawn})\\ \qquad & = \frac{\Pr(\text{of different suits} \cap \text{at least one ace is drawn})}{\Pr(\text{at least one ace is drawn})}\\ \qquad & = \frac{\frac{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{36}{1}}{\dbinom{52}{2}}}{\frac{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{48}{1}}{\dbinom{52}{2}}}\\ \qquad & = \frac{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{36}{1}}{\dbinom{4}{2} + \dbinom{4}{1}\dbinom{48}{1}}\\ \qquad & = \frac{25}{33} \end{align*} Note: Observe that in both methods, after we cancel the common denominators, the numerator reduces to the number of cases in which at least one ace is drawn and cards are of different suits and the denominator reduces to the number of cases in which at least one ace is drawn. This is because the sample space for the conditional probability that the cards are of different suits given at least one ace is drawn is the set of cases in which at least one ace is drawn.
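Both methods can be confirmed by brute force over all $52\cdot 51$ ordered two-card draws (my own quick check; the card encoding is an arbitrary choice):

```python
from fractions import Fraction
from itertools import permutations

deck = [(rank, suit) for suit in range(4) for rank in range(13)]  # rank 0 = ace
pairs = list(permutations(deck, 2))  # all 52 * 51 ordered draws

at_least_one_ace = [p for p in pairs if p[0][0] == 0 or p[1][0] == 0]
also_diff_suits = [p for p in at_least_one_ace if p[0][1] != p[1][1]]

prob = Fraction(len(also_diff_suits), len(at_least_one_ace))
print(prob)  # 25/33
```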
{ "language": "en", "url": "https://math.stackexchange.com/questions/4457545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to programmatically calculate real eigenvalues and (optionally) complex eigenvectors for a big matrix I am trying to solve the 1D time-independent Schrödinger equation $$ -\frac{\hbar^{2}}{2 m} \frac{\partial^{2} \psi}{\partial x^{2}}+V(x) \psi=E \psi $$ for a periodic potential, to simulate a crystal lattice, using computational methods: $$\dfrac{\partial^2\psi}{\partial x^2}=\dfrac{\psi_{i+1}-2\psi_i+\psi_{i-1}}{\Delta x^2}$$ As I understand it, I have to find the eigenvalues of this matrix $$ \left[\begin{array}{cccc} \frac{1}{\Delta y^{2}}+L^{2} m V_{1} & -\frac{1}{2 \Delta y^{2}} & 0 & 0 \ldots \\ -\frac{1}{2 \Delta y^{2}} & \frac{1}{\Delta y^{2}}+L^{2} m V_{2} & -\frac{1}{2 \Delta y^{2}} & 0 \ldots \\ \ldots & \ldots & \ldots & -\frac{1}{2 \Delta y^{2}} \\ \ldots 0 & 0 & -\frac{1}{2 \Delta y^{2}} & \frac{1}{\Delta y^{2}}+L^{2} m V_{N-1} \end{array}\right]\left[\begin{array}{cc} \psi_1\\ \psi_2\\ \ldots\\ \psi_{N-1}\end{array}\right]=L^2mE\left[\begin{array}{cc} \psi_1\\ \psi_2\\ \ldots\\ \psi_{N-1}\end{array}\right];\quad L^2=2/\hbar^2 $$ Perturbation theory for weak coupling gives a continuous spectrum, so I probably have to take a really big matrix, with a characteristic polynomial of huge order. I am looking for methods, or at least a Python library, to solve such a big eigenvalue problem. If it is important, I have nearly 5 GB of RAM and an AMD® Athlon Gold 3150U CPU, but I'll probably be able to use my university server.
Ok, I don't need to calculate complex eigenvectors, because all eigenstates are independent and are real at some point in time. So with numpy, for the potential $$U=\dfrac{U_0}{10(\cos((i-5)\pi/5))^2+1};\quad U_0\approx 1\;\mathrm{eV}$$ where I took $U_0$ to be nearly $1.6$ eV, because experimentally silicon has a 1.21 eV conduction band, and where $i$ is the node number (10 nodes per atom and 100 atoms), I used this code to calculate the eigenvectors:

import numpy as np
import math
import scipy.linalg as la
from numpy.linalg import eigvalsh

def potential(i):
    return -1/(10*(math.cos((i-5)*math.pi/5)-1)**2+1)

def element(i, j):
    if i == j:
        return 0.1524 + potential(i)
    if abs(i-j) == 1:
        return -0.1524/2
    return 0

a = []
for i in range(0, 5000):
    b = []
    for j in range(0, 5000):
        b.append(element(i, j))
    a.append(b)
arr = np.array(a)
print('matrix created')

eigvals, eigvecs = la.eig(arr)
print('eigenvalues and eigenvectors calculated')
eigvals = eigvals.real

with open('Energies.txt', 'w') as f:
    for val in eigvals:
        f.write(str(val) + '\n')
print("eigenvalues stored")

with open('Psi_functions.txt', 'w') as f:
    for vec in eigvecs.T:  # scipy returns the eigenvectors as the columns of eigvecs
        f.write(str(vec) + '\n')

If anybody needs the Psi functions for their own purposes, you can get them here. It's much simpler to calculate only the eigenvalues, so I made 200 nodes per atom: $$U=\dfrac{U_0}{10(\cos((i/20-5)\pi/5))^2+1};\quad U_0\approx 1\;\mathrm{eV}$$ and used eigvalsh(arr) instead of la.eig(arr). So I got these energies:
{ "language": "en", "url": "https://math.stackexchange.com/questions/4457736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A subgroup of full measure is dense, given a Haar measure I want to know why, if $\mu$ is a Haar measure on a compact group $G$ and $\mu(A)=\mu(G)$, then $A$ is dense in $G$. This fact is mentioned on the Wikipedia page, but I couldn't find a proof of it.
False as stated. Let $G$ be $\mathbb Z^2$ with the discrete topology. Haar measure is counting measure. Let $A = \left\{(x,0) \mid x \in \mathbb Z\right\}$. Then $A$ is closed, $\mu(A) = +\infty = \mu(G)$. But $A$ is not dense in $G$. To get the correct statement: replace $\mu(A) = \mu(G)$ by $\mu(G \setminus A) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4458038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Preimage of simply connected set Let $G\subset\mathbb{C}$ be a domain and $f:G\to\mathbb{C}$ a holomorphic, non-constant function. Prove or disprove: If $f(G)$ is simply connected, then $G$ is simply connected. My idea: I know that for the complex exponential function $\exp(\mathbb{C})=\mathbb{C}^*$ holds (and $\mathbb{C}^*$ is not simply connected). I wanted to use the complex logarithm for my task, but I'm unsure, especially since it is defined across different branches. So any hints are greatly appreciated!
Let $G=\mathbb C\setminus\{1\}$ and $f(z)=z^2$. Then, $f(G)=\mathbb C$ is simply connected, but $G$ is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4458217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate all the possibilities over a series This is a question that I came across, and I'm not able to understand how I should approach it. Question: "You can use only the letters of the English alphabet (i.e., 26), repetition of letters is allowed, and all characters should be lowercase. If you have to construct a string with a length of minimum 3 characters to maximum 8 characters, how many total possibilities of strings would you have?" I think this has some link with combinations and permutations, but I don't think I'm doing it right, because when I wrote a simple piece of code to check how many such strings could be made with only 3 characters I got the answer 17576, but I don't get the same answer with either combinations or permutations; maybe I'm missing something here. Image of the result of the combination and permutation for 3 characters. What would be the correct formula for calculating this, and how should I approach it for the series [3, 4, 5, 6, 7, 8]?
Repetitions are allowed. The number $_{26} C _3$ is the number of ways of choosing 3 distinct letters: the number of things like $\{a,b,c\}$ or $\{c,x,z\}$, but not something like $\{a,a,b\}$. There is no mention of putting the letters in an order, so this is not the way to do it. The number $_{26} P _3$ is the number of ways of choosing 3 distinct letters and putting them in an order: things like "$abc$" or "$how$", but not something like "$too$", since there the letters are not distinct. We can write $_{26} P _3 = 26\cdot25\cdot24$: choose the first letter from the 26 choices, then choose the second from the 25 remaining choices, then choose the third from the 24 remaining choices. In your case the letters need not be distinct: you are allowed to have the strings "$aaa$" and "$boo$", for example. This makes the formula simpler than the above.
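Since repetition is allowed and the positions are ordered, a length-$L$ string is just $L$ independent choices among $26$ letters, giving $26^L$ strings of length $L$; the original question's answer is then the sum over $L=3,\dots,8$. A quick check of this reading (my own):

```python
assert 26 ** 3 == 17576  # matches the asker's brute-force count for length 3

total = sum(26 ** L for L in range(3, 9))  # lengths 3 through 8
print(total)  # 217180146456
```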
{ "language": "en", "url": "https://math.stackexchange.com/questions/4458362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to evaluate $\int_{0}^{\infty} \frac{1}{x^{n}+1} d x?$ I first investigate the integral $$\int_{0}^{\infty} \frac{1}{x^{6}+1} d x$$ using contour integration along the semicircle $\gamma=\gamma_{1} \cup \gamma_{2},$ $\textrm{ where }$ $$ \gamma_{1}(t)=t+i 0(-R \leq t \leq R) \textrm{ and } \gamma_{2}(t)=R e^{i t} (0<t<\pi) .$$ $$ \begin{aligned} \int_{0}^{\infty} \frac{1}{x^{6}+1} d x &=\frac{1}{2} \int_{-\infty}^{\infty} \frac{d x}{x^{6}+1} \\ &=\frac{1}{2} \oint_{\gamma} \frac{d z}{z^{6}+1} \\ &=\frac{1}{2} \cdot 2 \pi i \sum_{k=0}^{2} \operatorname{Res}\left(\frac{1}{z^{6}+1} , z_k \right) \end{aligned} $$ with its simple poles at $ z_{k}=e^{\frac{(2 k+1) \pi}{6} i}$, where $k=0,1,2$. Now let's evaluate the residues of $\frac{1}{z^{6}+1}$ at $z_{k}$: $$ \operatorname{Res}\left( \frac{1}{z^{6}+1} , z_{k}\right)=\frac{1}{6 z _k^{5}}=\frac{z_{k}}{6 z_{k}^{6}}=-\frac{z_{k}}{6} $$ Putting the residues back yields $$ \begin{aligned} \int_{0}^{\infty} \frac{1}{x^{6}+1} d x &=\pi i\left(-\frac{1}{6}\left(z_{0}+z_{1}+z_{2}\right)\right) \\ &=-\frac{\pi i}{6}\left(e^{\frac{\pi}{6} i}+e^{\frac{\pi}{2} i}+e^{\frac{5 \pi}{6} i}\right) \\ &=-\frac{\pi i}{6}\left(\frac{\sqrt{3}}{2}+\frac{1}{2} i+i-\frac{\sqrt{3}}{2}+\frac{1}{2} i\right) \\ &=\frac{\pi}{3} \end{aligned} $$ Similarly, we can deal with the integral in general $$I_{2n}=\int_{0}^{\infty} \frac{1}{x^{2n}+1} d x$$ with the same contour $\gamma$ and $n$ simple poles whose residues are $$ \operatorname{Res}\left( \frac{1}{z^{2n}+1} , z_{k}\right)=\frac{1}{2n z _k^{2n-1}}=\frac{z_{k}}{2n z_{k}^{2n}}=-\frac{z_{k}}{2n} $$ $$ \begin{aligned} \int_{0}^{\infty} \frac{1}{x^{2 n}+1} d x&=\pi i \sum_{k=0}^{n-1} \operatorname{Res}\left(\frac{1}{z^{2 n}+1}, z_k\right) \\ &=\pi i \sum_{k=0}^{n-1}\left(-\frac{z _k}{2 n}\right)\\&=-\frac{\pi i}{2 n} \sum_{k=0}^{n-1} z_{k}\\&=-\frac{\pi i}{2 n} \sum_{k=0}^{n-1} e^{\frac{2 k+1}{2 n} \pi i}\\ &=-\frac{\pi i}{2 n} e^{\frac{\pi i}{2 n}} \cdot \frac{1-e^{\frac{\pi i}{n}\cdot n}}{1-e^{\frac{\pi i}{n}}}\\ &=-\frac{\pi i}{2 n} e^{\frac{\pi i}{2 n}} \cdot \frac{2}{1-e^{\frac{\pi}{n} i}}\\ &=-\frac{\pi i}{n}\cdot \frac{e^{\frac{\pi i}{2 n}}}{1-e^{\frac{\pi}{n} i}}\\&= -\frac{\pi i}{n} \cdot \frac{1}{e^{-\frac{\pi i}{2 n}}-e^{\frac{\pi i}{2 n}}}\\&= \frac{\pi}{2 n} \csc \frac{\pi}{2 n} \end{aligned} $$ However, I can’t use the same contour to deal with the odd one $$\int_{0}^{\infty} \frac{1}{x^{2 n-1}+1} d x$$ as the simple pole $-1$ lies on $\gamma_1$. My Question: How can we evaluate the odd one?
This can be calculated explicitly for every real $n>1$, odd or even: the substitution $u=\frac{1}{1+x^{n}}$ turns it into a Beta integral, $$\int_{0}^{\infty} \frac{d x}{x^{n}+1}=\frac{1}{n}\, B\!\left(\frac{1}{n},\, 1-\frac{1}{n}\right)=\frac{\pi}{n \sin (\pi / n)},$$ by the formula $B(t,1-t)=\pi/\sin \pi t$, where $B$ is the Beta function. For $n=6$ this recovers your $\pi/3$.
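A numerical cross-check of the Beta-function value $\int_0^\infty \frac{dx}{x^n+1}=\frac{\pi}{n\sin(\pi/n)}$ for odd exponents (my own sketch; the substitution $x=t/(1-t)$ maps $(0,\infty)$ to $(0,1)$, and the midpoint rule avoids the endpoints):

```python
import math

def integral(n, N=100_000):
    # midpoint rule for the integral over (0, infinity) of 1/(x^n + 1),
    # after substituting x = t / (1 - t), dx = dt / (1 - t)^2
    h = 1.0 / N
    s = 0.0
    for i in range(N):
        t = (i + 0.5) * h
        x = t / (1 - t)
        s += 1.0 / (x ** n + 1) / (1 - t) ** 2
    return s * h

for n in (3, 5, 7):
    print(n, integral(n), math.pi / (n * math.sin(math.pi / n)))
```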
{ "language": "en", "url": "https://math.stackexchange.com/questions/4458822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Could the linear algebra concepts regarding $n\times n$ matrices be used on any second order tensors? The metric tensor is actually a $(2,0)$ tensor defined on a manifold, but in all practical use I've seen, it appears no different from a regular square matrix, in the sense that it is just a collection of numbers of size $n\times n$. It feels a bit perverse, but could one define things such as trace, determinant, eigenvector, rank, etc., on this object after viewing it in that way? In this interpretation, would it obey all the laws of linear algebra, such as the determinant being independent of basis? Related, also related
A metric tensor acts like an inner product (or bilinear form) at each point. We use it to compare vector fields and eventually to develop the connection operator which is important in the theory of curvature etc. The associated matrix simply gathers the effects it has on pairs of basis fields. For example: $$g_{ij}|_p = g(\partial_i,\partial_j)\big|_p.$$ Studying the coefficients of the characteristic poly for this matrix can of course be done. However this matrix is not applied to one vector as in the eigenvalue problem. It appears as in $\langle v,w\rangle = v^tAw$ instead of as in $Av$. I've never seen a metric tensor with degeneracies (not full rank). This would be problematic, I imagine. I recommend John Lee's Introduction to Smooth Manifolds and his follow up text on Curvature for more. Also Michael Artin's Algebra for discussion on Bilinear Forms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4458984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Explicit examples of formal power series which are not rational functions This MO question https://mathoverflow.net/questions/249541/formal-power-series-is-taylor-expansion-of-rational-function-iff-hankel-determin states that if $k$ is a field and $k[[T]]$ is the power series ring over $k$, then $u(T)\in k[[T]]$ is the power series of some rational function if and only if a condition on Hankel determinants holds. However, I could not think of any explicit power series which does not satisfy this condition. I know that if $k=\mathbb{R},\mathbb{C}$, there are a lot of examples of power series which are not rational functions. But are there any explicit examples of power series which work for an arbitrary field $k$?
It's easiest to use condition (2): there is a finite sequence $q_0,\ldots, q_N$, not all zero, such that for all sufficiently large $m$, $$a_m q_N + a_{m+1} q_{N-1} + \cdots + a_{m + N}q_0 = 0.$$ In other words, the coefficients must obey a linear recurrence (with a finite number of exceptions). If you choose coefficients "sufficiently random" you would not expect this to hold. For an explicit counterexample, you can take $f(T) = \sum_{n=0}^\infty a_n T^n$ where $a_n = 1$ if $n = 2^m$ for some $m$, and otherwise $a_n = 0$. Then you will have arbitrarily long "gaps" in the sequence, with only coefficients equal to $0$. If the recurrence relation were obeyed, then any $N$ consecutive zeros $a_m, \ldots, a_{m+N-1}$ would mean that all later coefficients must be $0$, since you can solve for $a_{m+N}$ as a linear combination of $a_{m}, a_{m+1}, \ldots, a_{m+N-1}$, so that $a_{m+N} = 0$, and so on ad infinitum. Then we conclude $f(T)$ must be a polynomial, which we know it is not. So no such recurrence can hold.
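The gap argument is easy to watch in code (my own illustration): in the coefficient sequence $a_n = [n \text{ is a power of } 2]$, every window length $N$ admits $N$ consecutive zero coefficients that are still followed by a nonzero one, which is exactly what an eventually-valid order-$N$ recurrence would forbid.

```python
def a(n):
    return int(n >= 1 and n & (n - 1) == 0)  # 1 iff n is a power of 2

coeffs = [a(n) for n in range(2 ** 12 + 1)]

for N in range(1, 11):
    # find N consecutive zeros that are nevertheless followed by another 1
    m = next(m for m in range(len(coeffs) - N)
             if all(c == 0 for c in coeffs[m:m + N]) and any(coeffs[m + N:]))
    print(N, "consecutive zeros starting at index", m)
```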
{ "language": "en", "url": "https://math.stackexchange.com/questions/4459197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Proportion of $0,1,-1$s in the values of a rational-valued irreducible character In a finite group $G$, is there a lower bound for how many times $\chi(g)$ can take the values $0$ or $\pm 1$? Here, I'm considering $\chi$ irreducible and rational valued.
I have never seen this particular question before, so maybe someone else will know whether there is a lower bound. There certainly isn't one for zeros in irreducible characters, the proportion of which can take any value in $[0,1]$. The best I can do is to prove that any lower bound is at most $1/6$. I haven't checked this completely, but the group given by $$\langle a,b,c,d,e,f|a^2=b^2=c^2=d^3=e^3=f^p=1,b^a=bc,d^a=d^b=d^2,e^b=e^2,f^a=f^{-1}\rangle$$ (where all other conjugations are trivial, i.e., $f^b=f$, etc.) should have a character, the inflation of the 2-dimensional character for the $D_8$ quotient by $\langle d,e,f\rangle$, where the proportion tends to $1/6$ as $p$ grows. For $p=7$ the proportion is $13/51=0.255$, $p=11$ gives $17/75=0.227$, $p=29$ gives $35/183=0.191$, and so on. Since this character takes values $0,2,-2$, it's just the classes on which it takes the value $0$ that we need to consider. I haven't counted the number of such classes, but there are clearly three of orders $2$ or $4$. There should be two of order $6$, two of order $12$, and $p-1$ of order $2p$, so $p+6$ such classes in total. I haven't computed exactly the number of classes, but it's around $6p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4459339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the dot product of a vector with itself not a linear function? If an inner product is linear by definition, i.e., $\langle\mathbf{v+w},\mathbf{u}\rangle=\langle\mathbf{v},\mathbf{u}\rangle+\langle\mathbf{w},\mathbf{u}\rangle $ and $\langle a\mathbf{v},\mathbf{w}\rangle=a\langle\mathbf{v},\mathbf{w}\rangle$, and the dot product is an inner product, then why is $f(\mathbf{x})=\mathbf{x}\cdot\mathbf{x }$ not a linear function? Thanks
If an inner product is linear by definition ... No, an inner product $\langle\cdot,\cdot\rangle$ on a (real) vector space $V$ is bilinear, not linear: it is linear in each component. Moreover, the domain of an inner product on $V$ is $V\times V$, while that of a linear map on $V$ is $V$. These two objects have completely different domains.
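A two-line numeric check makes this concrete (the example values are my own): $f(\mathbf{x})=\mathbf{x}\cdot\mathbf{x}$ fails both additivity, because of the cross term $2\,\mathbf{x}\cdot\mathbf{y}$, and homogeneity, because $f(a\mathbf{x})=a^2f(\mathbf{x})$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f = lambda v: dot(v, v)
x, y = [1.0, 2.0], [3.0, -1.0]

print(f([x[0] + y[0], x[1] + y[1]]), f(x) + f(y))  # 17.0 vs 15.0: additivity fails
print(f([2 * c for c in x]), 2 * f(x))             # 20.0 vs 10.0: homogeneity fails
```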
{ "language": "en", "url": "https://math.stackexchange.com/questions/4459712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
If Complex Numbers Describe a Circle and Split-complex Numbers Describe a Hyperbola, Can One Make a Hypercomplex Number System to Describe any Shape? I was thinking about other complex-like systems the other day, and I decided to define a number $o$ such that $o^2 = 1, o \ne \pm 1$. I wondered if there was a formula like Euler's formula for this number system, where an exponential expression can be converted to a trigonometric expression. As it turns out, there is: $$ \begin{align} e^{o\theta} & = \sum_{n = 0}^\infty \frac{(o\theta)^n}{n!} \\ & = \sum_{n = 0}^\infty \frac{(o\theta)^{2n}}{(2n)!} + \sum_{n = 0}^\infty \frac{(o\theta)^{2n + 1}}{(2n + 1)!} \\ & = \sum_{n = 0}^\infty \frac{\theta^{2n}}{(2n)!} + o\sum_{n = 0}^\infty \frac{\theta^{2n + 1}}{(2n + 1)!} \\ & = \cosh \theta + o \sinh \theta. \end{align} $$ It seems that where Euler's formula for complex numbers is related to a unit circle, Euler's formula for these numbers is related to a unit hyperbola. So, there is a sense in which $a + bi, i^2 = -1$ are circular numbers and $a + bo, o^2 = 1, o \ne \pm 1$ are hyperbolic numbers. I tried this again with a number system built on the definition $k^2 = 0, k \ne 0$. Euler's formula for this system is: $$ \begin{align} e^{k\theta} & = \sum_{n = 0}^\infty \frac{(k\theta)^n}{n!} \\ & = 1 + k\theta + \sum_{n = 2}^\infty \frac{k^n \theta^n}{n!} \\ & = 1 + k\theta, \end{align} $$ since $k^n = 0$ for all $n \ge 2$. This is analogous to the parametric equation of a "unit line", $(1, t)$. As it turns out, these two number systems are already well-known: the hyperbolic numbers are called split-complex numbers (with $j$ as $o$) and the constant numbers are called dual numbers (with $\varepsilon$ as $k$). 
It seems intuitively true that this can be reverse-engineered. If one can express a shape by parametric equations, write Taylor series for those coordinate functions, and show that the sum of those series can be represented in terms of the Taylor series of the exponential function in some complex-like number system, then one obtains a set of "hypercomplex" numbers which in some sense describes that geometry (as defined by the shape). For example, assume complex numbers are not yet defined and one wants to make a number system that describes a circle: one could start from the fact that the parametric equation for a circle is $(\cos t, \sin t)$. My question is whether this reasoning is true or useful, and whether something like this has been done before. Thank you for your answers.
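The split-complex identity above can be checked numerically by representing $a+bo$ as the matrix $\begin{pmatrix} a & b \\ b & a\end{pmatrix}$, so that the off-diagonal generator squares to the identity, and summing the exponential series directly (a throwaway sketch of mine):

```python
import math

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(X, terms=40):
    # exp(X) via its power series; 40 terms is plenty for small entries
    R = [[1.0, 0.0], [0.0, 1.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for n in range(1, terms):
        P = mat_mul(P, X)
        fact *= n
        R = [[R[i][j] + P[i][j] / fact for j in range(2)] for i in range(2)]
    return R

theta = 0.7
E = mat_exp([[0.0, theta], [theta, 0.0]])  # exp(o * theta)
print(E[0][0] - math.cosh(theta), E[0][1] - math.sinh(theta))  # both ~ 0
```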
No. In the 2D case, those shapes characterize the sets of points that have magnitude $1$. Since the real numbers $1$ and $-1$ have magnitude $1$, the shape should pass through these points. Further investigation shows that it should cross them at right angles for the multiplication to work; that no line through the origin should cross the shape twice; that, in order that $(-1)j=-j$, $-1$ should have twice the argument (that is, the area between the unit curve and the origin) of $j$, which means the curve should be symmetric with respect to both the real and the $j$ axis; etc. Basically, in 2 dimensions there are only 3 shapes that fit: the unit circle (circle with radius 1 centered at the origin), the hyperbola with apexes at $1$ and $-1$ and diagonal asymptotes, and two vertical lines perpendicular to the real line through $1$ and $-1$. This third shape defines the dual numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4459901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Subgroups for Galois group of cyclotomic extension My teacher gave me this problem as personal homework. I need to determine whether $\cos(2\pi/13)$ and $\cos(2\pi/55)$ lie in $K = \mathbb{Q}[\cos(2\pi/37),\cos(2\pi/15),\cos(2\pi/11)]$. For $\cos(2\pi/13)$ I said that $\mathbb{Q}[\cos(2\pi/13)] \subset \mathbb{Q}[\zeta_{13}]$ and $K \subset \mathbb{Q}[\zeta_{37\cdot15\cdot11}]$, and we know that $\mathbb{Q}[\zeta_{m}] \cap \mathbb{Q}[\zeta_{n}] = \mathbb{Q}$ when $\gcd(m,n) = 1$. So $\cos(2\pi/13) \notin K$. But that does not work for $\cos(2\pi/55)$. So I came up with another idea. Let $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/\mathbb{Q}) = \{\sigma_i \mid \sigma_i : \omega \to \omega^{i}\text{ for }\gcd(i,37\cdot15\cdot11) = 1\}\simeq \mathbb{Z}_{37\cdot15\cdot11}^*$. I want to look at $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/K)$ and $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/\mathbb{Q}[\cos(2\pi/55)])$ and compare them. That's where I am stuck. I know that I need to find the elements of $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/\mathbb{Q})$ that stabilize $K$ and $\mathbb{Q}[\cos(2\pi/55)]$, but I don't understand how to do that. Basically, the question is what $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/K)$ and $G(\mathbb{Q}[\zeta_{37\cdot15\cdot11}]/\mathbb{Q}[\cos(2\pi/55)])$ look like, and whether this will give me the solution to my problem.
Your argument for $\cos(2\pi/13)$ is fine (as $\cos(2\pi/13)\not \in \Bbb{Q}=\Bbb{Q}(\zeta_{11\cdot 15\cdot 37})\cap \Bbb{Q}(\zeta_{13})$). Then look at the automorphism of $\Bbb{Q}(\zeta_{11\cdot 15\cdot 37})$ sending $\zeta_{11\cdot 15 \cdot 37}$ to $ \zeta_{11\cdot 15 \cdot 37}^n$, where $ n\equiv -1\bmod 15,n\equiv 1\bmod 11,n\equiv 1\bmod 37$. Does it fix $\cos(2\pi/37),\cos(2\pi/15),\cos(2\pi/11),\cos(2\pi/55)$ ?
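For concreteness, the exponent $n$ in the hint can be found by brute force (my own arithmetic check; recall that $\sigma_n$ fixes $\cos(2\pi/N)$ exactly when $n\equiv\pm1\bmod N$):

```python
from math import gcd

mod = 15 * 11 * 37
n = next(m for m in range(1, mod)
         if m % 15 == 14 and m % 11 == 1 and m % 37 == 1)

print(n, n % 55)              # 5699 and 34
assert gcd(n, mod) == 1       # so sigma_n really is an automorphism
assert n % 55 not in (1, 54)  # 34 is not +-1 mod 55
```

So this automorphism fixes $\cos(2\pi/15)$ (as $n\equiv-1$), and $\cos(2\pi/11)$, $\cos(2\pi/37)$ (as $n\equiv1$), hence fixes $K$, yet it moves $\cos(2\pi/55)$.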
{ "language": "en", "url": "https://math.stackexchange.com/questions/4460074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conclusion of $\dim X_n = \dim H(X_n)$ It's part of a proof that the Euler characteristic of a chain complex equals the Euler characteristic of its homology. Let $0\longrightarrow X_{n+1}\overset{f_{n+1}}\longrightarrow X_n\overset{f_n}\longrightarrow X_{n-1}\overset{f_{n-1}}\longrightarrow \cdots\longrightarrow X_1\overset{f_1}\longrightarrow X_0\longrightarrow 0$ denote a chain complex of finite dimensional vector spaces. I want to prove that $\dim X_n = \dim H(X_n)$, where $H(X_n)$ is the homology group. First of all, for a short exact sequence of vector spaces $0\longrightarrow U\overset{f}\longrightarrow V\overset{g}\longrightarrow W \longrightarrow 0$, we have $\dim V = \dim U + \dim W$ (already proved). Then, for $$\cdots \longrightarrow X_{k+1}\overset{f_{k+1}}\longrightarrow X_k\overset{f_k}\longrightarrow X_{k-1}\overset{f_{k-1}}\longrightarrow \cdots$$ we take the s.e.s. $$ 0\longrightarrow Im(f_{k+1})\longrightarrow Ker(f_k)\longrightarrow H(X_k)=\frac{Ker(f_k)}{Im(f_{k+1})}\longrightarrow 0 $$ and then we have $$\dim H(X_k)= \dim \ker f_k - \dim \mathrm{Im} f_{k+1}\:\:\:\:\:(*)$$ From the s.e.s. $0\longrightarrow \ker(f_k)\longrightarrow X_k\longrightarrow \mathrm{Im}(f_k)\to 0$, we have $$\dim X_k = \dim \ker f_k + \dim \mathrm{Im} f_k \:\:\:\:\:(**)$$ I omitted the boring details. I'm stuck on the following: combining $(**)$ with $(*)$, we have $$ \dim(X_k) = \dim H(X_k) + \dim \mathrm{Im}f_{k+1} + \dim \mathrm{Im} f_k. $$ I think I'm missing something simple to conclude that $\dim X_k = \dim H(X_k)$ and finish the question. Can someone help me?
What if we define the following chain complex $0\to k^n\xrightarrow{\mathrm{id}} k^n\to 0$? This chain complex has no homology, but the vector spaces are nonzero.
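What does hold, and what the quoted proof is after, is equality of the alternating sums $\sum_k(-1)^k\dim X_k=\sum_k(-1)^k\dim H(X_k)$. A small sanity check on a three-term complex (my own illustration, with a hand-rolled exact rank routine rather than any particular library):

```python
from fractions import Fraction

def rank(mat):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# chain complex 0 -> k --f2--> k^2 --f1--> k -> 0 with f1∘f2 = 0
dims = [1, 2, 1]                      # dims of X_0, X_1, X_2
f1 = [[0, 1]]                         # X_1 -> X_0
f2 = [[1], [0]]                       # X_2 -> X_1
ranks = [rank(f1), rank(f2)]          # ranks of the two boundary maps

# dim H_k = dim ker(map out of X_k) - rank(map into X_k)
h0 = dims[0] - ranks[0]               # ker = all of X_0, minus im f1
h1 = (dims[1] - ranks[0]) - ranks[1]  # ker f1 minus im f2
h2 = dims[2] - ranks[1]               # ker f2, nothing maps in

euler_X = dims[0] - dims[1] + dims[2]
euler_H = h0 - h1 + h2
```

Note $\dim X_1 = 2 \neq 0 = \dim H(X_1)$, in line with the counterexample above, while the Euler characteristics agree.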
{ "language": "en", "url": "https://math.stackexchange.com/questions/4460228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating arc length of Bezier curves by hand I was wondering if there was a viable method to calculate arc length of a quadratic Bezier curve without using coding. If I know the points of P0, P1 and P2, would it be possible to calculate the arc length of quadratic Bezier curve?
Yes, you can do this calculation "by hand", without using computer numerical methods. First, convert your quadratic Bezier curve into polynomial form, $P(t) = At^2 + Bt + C$. If the curve's control points are $P_0$, $P_1$, $P_2$, then $A = P_0 -2P_1 + P_2$, $B= 2P_1 - 2P_0$, and $C=P_0$. Now you have to calculate the integral that represents arclength. This integral is of the form $\int{\sqrt{t^2 + pt + q}\,dt}$, so it can be computed by hand, though it's fairly complicated. The details are given here. For further info, just do a Google search for "arclength of quadratic Bezier curve".
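The conversion and the arclength integrand are easy to sanity-check numerically before grinding through the closed form. Here is a minimal sketch (my own code; the helper name and the midpoint-rule quadrature are my choices, the exact value still comes from the closed-form integral):

```python
import math

def quad_bezier_arclength(p0, p1, p2, n=1000):
    """Numerically integrate |P'(t)| over [0, 1] for a quadratic Bezier.

    Polynomial form P(t) = A t^2 + B t + C with
    A = p0 - 2 p1 + p2,  B = 2(p1 - p0),  C = p0,
    so P'(t) = 2 A t + B.  Composite midpoint rule with n subintervals.
    """
    ax, ay = p0[0] - 2 * p1[0] + p2[0], p0[1] - 2 * p1[1] + p2[1]
    bx, by = 2 * (p1[0] - p0[0]), 2 * (p1[1] - p0[1])
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n              # midpoint of the i-th subinterval
        dx = 2 * ax * t + bx
        dy = 2 * ay * t + by
        total += math.hypot(dx, dy) / n
    return total
```

A degenerate curve with collinear control points is a handy test case, since its arclength is just the distance between the endpoints.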
{ "language": "en", "url": "https://math.stackexchange.com/questions/4460366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The number of distinct pairs of integers $(m,n)$ satisfying $|1+mn|<| m+n|<5$ is? $|1+mn|<5$ $\Rightarrow -6 < mn < 4$ $|m+n|<5$ $\Rightarrow -5<m+n<5$ Now as $m$ and $n$ are integers, I tried to make cases such that the product $mn$ equals $-5,-4,-3,-2,-1,0,1,2$ or $3$ (as $-6 < mn < 4$), but I end up getting too many pairs, and I am also confused about what to do in the case where $mn=0$.
The official solution is right: There are 12 solutions: * *(-4, 0) *(-3, 0) *(-2, 0) *(0, -4) *(0, -3) *(0, -2) *(0, 2) *(0, 3) *(0, 4) *(2, 0) *(3, 0) *(4, 0)
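The count is easy to confirm by brute force (a quick sketch of mine; the search bound is safe because a large $|m|$ together with $|m+n|<5$ forces $|1+mn|$ far above the allowed range):

```python
# |m+n| < 5 bounds the variables jointly: if |m| > 10 then |n| >= |m| - 4 > 6,
# so |1 + m*n| > 59 and the inequality |1+mn| < |m+n| cannot hold.
sols = sorted((m, n) for m in range(-10, 11) for n in range(-10, 11)
              if abs(1 + m * n) < abs(m + n) < 5)
```

The enumeration also shows that every solution has one coordinate equal to $0$, which explains why the case analysis by the value of $mn$ collapses to the case $mn=0$.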
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Turning non martingale into martingale by changing filtration I would like to know if there is some theorem that allows us to turn a process $(X_t,\mathcal{F}_t)_{t \geq 0}$ (under the canonical filtration) which is not a martingale into one, by maybe changing the filtration. In my particular case, I have some one dimensional process that has some trigonometric functions and a deterministic function of time. More concretely, I have a process of the form $$X_t:=c(t) [\cos^{2/a} B_{f(t)} + \sin^{2/a} B_{f(t)}]$$ where $(B_t)_{t \geq 0}$ is a classical Brownian motion, and we can pick any $f(t)$ we like for our goal. I have checked that this is not a martingale using Ito's Lemma, for some functions $f(t)$ like $\log(t)$, $\sin(t), \cos(t)$, etc.
To the question in the title: this is not possible, since if $X$ is a martingale with respect to any filtration $\mathcal{G}$, then it is also a martingale with respect to its natural filtration. This is because if $X$ is a martingale with respect to $\mathcal{G}$, in particular $X$ must be adapted with respect to $\mathcal{G}$ and hence $\mathcal{F}_t \subseteq \mathcal{G}_t$ for every $t \geq 0$. Therefore, by the tower property of conditional expectations it holds for $s \leq t$ that $$\mathbb{E}[X_t | \mathcal{F}_s] = \mathbb{E}[\mathbb{E}[X_t | \mathcal{G}_s] | \mathcal{F}_s] = \mathbb{E}[X_s |\mathcal{F}_s] = X_s,$$ so $X$ is a martingale with respect to $\mathcal{F}$. The answer to the question in the body is also negative, since the time-change of a martingale is again a (local) martingale, if we assume the time-change to be continuous. If we allow for general $f$, not necessarily increasing and continuous, the answer is also negative for your problem: Letting $a=1$ we have $\cos^2(B_{f(t)}) + \sin^2(B_{f(t)}) = 1$ and therefore $X_{t} = c(t)$ for any choice of $f$. Now if $c$ is not constant, no choice of $f$ will turn $X$ into a martingale.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Künneth theorem for profinite cohomology. Let $G,H$ be groups and $k$ a field; there is a well-known formula for the group cohomology of the product: $$H^\ast(G\times H,k)\cong H^\ast(G,k)\otimes H^\ast(H,k)$$ I was wondering whether this is also true for $G,H$ profinite groups. My thought was that one should argue that the formula commutes with limits and thereby reduce the statement to the case of finite groups. However, this seems to gloss over many details, so I would love to see a reference.
A reference can be found in the book Cohomology of Number Fields Theorem 2.4.6, which states: Let $G$ and $H$ be profinite groups and let $A$ be a discrete $H$-module, regarded as a $(G \times H)$-module via trivial action of the group $G$. Then the Hochschild-Serre spectral sequence $$ E_{2}^{p q}=H^{p}\left(G, H^{q}(H, A)\right) \Rightarrow H^{n}(G \times H, A) $$ degenerates at $E_{2}$. Furthermore, it splits in the sense that there is a decomposition $$ H^{n}(G \times H, A) \cong \bigoplus_{p+q=n} H^{p}\left(G, H^{q}(H, A)\right), $$ from which the Künneth theorem over a field follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding image and nucleus of $ f: x \in \mathbb{R}^3 \mapsto (aa^T +bb^T)x \in \mathbb{R}^3 $ I got a problem. Let $a, b ∈ \mathbb{R}^3$ be two linearly independent nonzero vectors in a three-dimensional real space, and let $c ∈ \mathbb{R}^3$ be a nonzero vector orthogonal to $a, b$. Find the image and nucleus of the linear map: $$ f: x \in \mathbb{R}^3 \mapsto (aa^T +bb^T)x \in \mathbb{R}^3 $$ I tried to solve this, but what came through my head is: $$ \operatorname{Im}{f} = \left\{\left(\begin{array}{l} \left(a_{1}^{2}+b_{1}^{2}\right) x_{1}+\left(a_{1} a_{2}+b_{1} b_{2}\right) x_{2}+\left(a_{1} a_{3}+b_{1} b_{3}\right) x_{3} \\ \left(a_{1} a_{2}+b_{1} b_{2}\right) x_{1}+\left(a_{2}^{2}+b_{2}^{2}\right) x_{2}+\left(a_{2} a_{3}+b_{2} b_{3}\right) x_{3} \\ \left(a_{1} a_{3}+b_{1} b_{3}\right) x_{1}+\left(a_{2} a_{3}+b_{2} b_{3}\right) x_{2}+\left(a_{3}^{2}+b_{3}^{2}\right) x_{3} \end{array}\right) \, \middle| \, x_1, x_2, x_3 \in \mathbb{R}\right\} $$ I also do not understand what this question would imply. Could you give me some hints to solve this?
Observe that $f(x)=(aa^T+bb^T)x=(a^Tx)\,a+(b^Tx)\,b$, so the image of $f$ is contained in $\mathrm{span}\{a,b\}$. In particular $f(a)=\|a\|^2\,a+(b^Ta)\,b$ and $f(b)=(a^Tb)\,a+\|b\|^2\,b$; their coordinate vectors with respect to the basis $(a,b)$ are the columns of the Gram matrix of $a,b$, which is invertible because $a,b$ are linearly independent, so $f(a),f(b)$ are linearly independent. Moreover $f(c)=(a^Tc)\,a+(b^Tc)\,b=0$, since $c$ is orthogonal to both $a$ and $b$. Therefore the kernel of $f$ is $\mathrm{span}\{c\}$ and the image of $f$ is $\mathrm{span}\{ a,b \}$.
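A concrete numerical check (vectors of my own choosing, with $c$ orthogonal to $a$ and $b$ but $a$ not orthogonal to $b$): every value of $f$ lands in $\mathrm{span}\{a,b\}$, and $f(c)=0$.

```python
def matvec(a, b, x):
    """Apply (a a^T + b b^T) to x in R^3: result is (a^T x) a + (b^T x) b."""
    adx = sum(ai * xi for ai, xi in zip(a, x))   # a^T x
    bdx = sum(bi * xi for bi, xi in zip(b, x))   # b^T x
    return [ai * adx + bi * bdx for ai, bi in zip(a, b)]

a = [1, 2, 0]
b = [0, 1, 1]
c = [2, -1, 1]   # orthogonal to both a and b (check: a·c = 0, b·c = 0)
```

Here $f(a)=5a+2b$ and $f(b)=2a+2b$, so the image really is all of $\mathrm{span}\{a,b\}$, while $f(c)=0$.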
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show that $\forall b > 1$, $\exists x \neq 0$ such that $x = b \sin x$ This is a problem from my calculus / analysis course. Show that $\forall b > 1$, $\exists x \neq 0$ such that $x = b \sin x$. My first thought is to use the intermediate value theorem: Pick any $b>1$. Let $f(x) = x - b \sin x$. Clearly $f$ is continuous on $\mathbb{R}$. Then $f(b^2) = b^2 - b \sin (b^2) > 0$. Now, I need to find some $c > 0$ such that $f(c) < 0$. Then by the intermediate value theorem, I can show that there exists some $x \in [c,b^2]$ such that $f(x) = 0$, which is the desired result. However, I cannot find such $c$. Is there any hint on this? Or are there any other approaches to this question?
$f(0)=0$ and $f'(0)=1-b<0$. By the definition of the derivative, there exists $x>0$ sufficiently small such that $$ \frac{1-b}2<\frac{f(x)-f(0)}x -(1-b)<\frac{-(1-b)}2 $$ The right inequality can be rewritten $$ \frac{f(x)}x< \frac{1-b}2$$ i.e. $f(x) < \frac{(1-b)x}2 < 0$. Now you can finish with IVT.
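The existence argument can be seen numerically as well (an illustration of mine with $b=2$; the bracket $[1,2]$ is found by inspection): $f(1)=1-2\sin 1<0$ and $f(2)=2-2\sin 2>0$, so bisection homes in on the promised nonzero root.

```python
import math

def f(x, b=2.0):
    return x - b * math.sin(x)

lo, hi = 1.0, 2.0          # f(lo) < 0 < f(hi), so IVT gives a root in between
for _ in range(60):        # bisection: halve the bracket 60 times
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
```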
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$\frac{1^2}{1\cdot3} + \frac{2^2}{3\cdot5} + \frac{3^2}{5\cdot7}+\cdots+\frac{500^2}{999\cdot1001} = ?$ I found this problem in a high school text book. Let $ \displaystyle s = \frac{1^2}{1\cdot3} + \frac{2^2}{3\cdot5} + \frac{3^2}{5\cdot7}+\cdots+\frac{500^2}{999\cdot1001}$. Find $s$. How I tried: Observe that $T_n = \frac{n^2}{(2n-1)(2n+1)}$. Here, $T_n$ is the $n$th term of the sequence. So, we need to find the value of $\sum_{n=1}^{500}\frac{n^2}{(2n -1)(2n+1)}$. We can find its value with Telescope Cancellation Method, but it requires breaking the expression we got into simpler terms. How to simplify $T_n = \frac{n^2}{(2n-1)(2n+1)}$ ?
$$T_n = \frac{n^2}{4n^2-1} = \frac{1}{4}\left( \frac{4n^2-1+1}{4n^2-1} \right) = \frac{1}{4}\left( 1 + \frac{1}{4n^2-1} \right)=\frac{1}{4}\left( 1 + \frac{1}{(2n-1)(2n+1)} \right). $$ Decomposing with partial fractions, we have: $$ \frac{1}{(2n-1)(2n+1)} = \frac{\frac{1}{2}}{2n-1} - \frac{\frac{1}{2}}{2n+1} = \frac{1}{2}\left( \frac{1}{2n-1} - \frac{1}{2n+1} \right). $$ And we see that most of the terms in $\displaystyle\sum_{n=1}^{500} T_n$ will cancel, apart from a few at either end.
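Carrying the cancellation out gives $s=\frac{500}{4}+\frac18\left(1-\frac{1}{1001}\right)=\frac{125250}{1001}$ (my own computation), which an exact-arithmetic check confirms:

```python
from fractions import Fraction

# exact sum of the 500 terms, and the telescoped closed form
s = sum(Fraction(n * n, (2 * n - 1) * (2 * n + 1)) for n in range(1, 501))
closed = Fraction(500, 4) + Fraction(1, 8) * (1 - Fraction(1, 1001))
```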
{ "language": "en", "url": "https://math.stackexchange.com/questions/4461891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
For fixed $t$ with $0 \le t < 1$, prove that $x \mapsto [x, t]$ defines a homeomorphism from a space $X$ to a subspace of the cone $CX$. For fixed $t$ with $0 \le t < 1$, prove that $x \mapsto [x, t]$ defines a homeomorphism from a space $X$ to a subspace of the cone $CX$. Let $\varphi : X \to \varphi(X)$ be the map defined by $x \mapsto [x,t]$. Then $\varphi$ is clearly surjective as the codomain is defined to be the image of it. To prove that $\varphi$ is injective I considered the standard approach to assume that $\varphi(x)=\varphi(y) \implies [x,t]=[y,t]$, but I don't think I can conclude from here that $x=y$? I am also wondering what is the idea with $t <1$ being a strict inequality? Are we excluding the vertex of the cone here, if so why?
Drawing a picture helps. Basically you're just trying to show that slices of the cylinder embed into the cone as long as $t\neq 1.$ Fix $0\le t<1.$ Then, $[x,t]\sim [y,t]\Leftrightarrow x=y$ because by definition of $\sim$, the equivalence classes are singletons when $0\le t<1.$ You have shown that $\varphi $ is continuous. So all you need to show is that it is an open map $\textit{onto}$ its image, $\varphi(X)$ with the $\textit{subspace}$ topology of the quotient topology on $C(X).$ Therefore, let $U\subseteq X$ be open. We want to show that $\varphi(U)$ is open in $\varphi (X).$ This means we should find an open set $V$ in $C(X)$ such that $V\cap \varphi(X)=\varphi(U).$ But this is easy. Note that $U\times [0,1)$ is open and saturated in $X\times[0,1]$ (it contains no points with $t=1$, so it is a union of equivalence classes), hence its image $V$ under the quotient map is an open set in $C(X).$ And $V\cap \varphi(X)=U\times \{t\}=\varphi(U).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $P(A\cap B\cap C)=P(A\cap B)P(C)$, does it follow that $P(A\cap C)=P(A)P(C)$ and $P(B\cap C)=P(B)P(C)$? If $P(A\cap B\cap C)=P(A\cap B)P(C)$, does it follow that $P(A\cap C)=P(A)P(C)$ and $P(B\cap C)=P(B)P(C)$? In other words, if the conjunction of two events $A$ and $B$ is independent of a third event $C$, is it always true that event $C$ is independent from events $A$ and $B$ separately? I've gone back and forth between believing that it is likely true and not true. I can't see how to prove it from the basic axioms of probability, but it also seems challenging to think of a counterexample where the conjunction of $A$ and $B$ is independent of $C$, but either $A$ or $B$ are dependent on $C$ (or vice versa).
Take any dependent events $A$ and $C$ and put $B=A^c$ (the complement of $A$). Then $A\cap B=\emptyset$, so $P(A\cap B\cap C)=0=P(A\cap B)P(C)$ holds trivially, while $A$ and $C$ remain dependent.
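A concrete finite check of this construction (the sample space and events are my own choice):

```python
from fractions import Fraction

omega = {1, 2, 3}                               # uniform probability space
P = lambda E: Fraction(len(E & omega), len(omega))

A = {1}
B = omega - A                                   # B = complement of A
C = {1}                                         # dependent on A

hypothesis = P(A & B & C) == P(A & B) * P(C)    # both sides are 0
a_indep_c = P(A & C) == P(A) * P(C)             # 1/3 vs 1/9: fails
```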
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Equivalent definition of essential supremum Let $(X,\mathcal{A},\mu)$ be a measure space and let $f\colon X\to [-\infty,+\infty]$ be a measurable function. Denote with $\mathcal{N}_\mu$ the collection of $\mu$-null sets. In some texts the definition of essential supremum is $$\operatorname{esssup}f:=\inf\left\{\sup_{x\in X\setminus N} f(x)\;\middle|\; N\in\mathcal{N}_\mu \right\}\tag 1$$ In other texts it is: $$\operatorname{esssup}f:=\inf\left\{a\ge 0\;\middle|\;\mu\left(\{x\in X\;\middle|\; f(x)>a\}\right)=0 \right\}\tag2$$ Question Are $(1)$ and $(2)$ equivalent? Why?
I think you need $a\in \mathbb{R}$ (rather than $a\ge 0$) and $f(x)$ without an absolute value in definition $(2)$; otherwise the constant function $f\equiv -1$ is a counterexample. But if you correct this, the two definitions are equivalent: Let $A:= \{\sup_{x\in X\setminus N}f(x)\mid N\in \mathcal{N}_{\mu}\}$ and $B:= \{a\in \mathbb{R}\mid \mu(\{x\in X\mid f(x)>a\})=0\}$. We show $\inf A=\inf B$: "$\geq$": For $N\in \mathcal{N}_{\mu}$ we have $\{x\in X \mid f(x)>\sup_{x\in X\setminus N}f(x)\}\subseteq N$, and hence $\mu(\{x\in X \mid f(x)>\sup_{x\in X\setminus N}f(x)\})=0$. Therefore, $A\subseteq B$ and $\inf A\geq \inf B$. "$\leq $": Let $a$ be given such that $\mu(\{x\in X\mid f(x)>a\})=0$ and define $N:= \{x\in X\mid f(x)>a\}$. Then $N\in \mathcal{N}_\mu$ and for $x\in X\setminus N$ we have $f(x)\leq a$, hence $\sup_{x\in X\setminus N}f(x)\leq a$. So for all $x\in B$ there is a $y\in A$ with $y\leq x$. Therefore, $\inf A\leq \inf B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find all continuous functions $f:[0, \infty) \to (0, \infty)$ such that $(\forall x > 0)$ $ 2x \int_0^x f(t) dt = f(x) $ Find all continuous functions $f:[0, \infty) \to (0, \infty)$ such that $(\forall x > 0)$ $$ 2x \int_0^x f(t) dt = f(x) $$ My work: First, for $x \not = 0$ the given equality can be written as $$\int_0^x f(t) dt = \frac{f(x)}{2x}$$ When we differentiate that we get $$f(x) = \frac{1}{2} \frac{xf'(x) - f(x)}{x^2}$$ which is $$2x^2 f(x) = f'(x) \cdot x - f(x)$$ My idea was to try to find a function whose derivative is $2x^2 f(x) - f'(x) \cdot x + f(x)$, but the closest I got was this function: $$e^{x^2} \cdot x \cdot f(x)$$ which is not exactly what I am looking for. Second, what I tried was the following: Let $$ F(x) = \int_0^x f(t) dt$$ The condition then becomes $$2x F(x) = F'(x)$$ I tried to integrate $F(x)$ using integration by parts, but I didn't get anything useful. Can someone please give me a hint or any other help? Thanks!
You already have $2x F(x) = F'(x)$, which is preferable to using $f'$ because we are only given that $f$ is continuous, so we don't know whether $f'$ exists. Solving the equation for $F$ is possible because $F'/F$ is just the so-called logarithmic derivative of $F$: just integrate both sides over an interval where $F(x)\neq0$: $$2x=\frac{F'(x)}{F(x)}$$ thus $$\int_a^b \!2x\, dx = \int_a^b\frac{F'(x)}{F(x)} dx $$ Then substitute $y=F(x)$ and $dy = F'(x) dx$: $$b^2-a^2 = \int_{F(a)}^{F(b)} \frac{dy}{y} = \ln|y|\Bigg|_{y=F(a)}^{y=F(b)} = \ln|F(b)| - \ln|F(a)| $$ or $$\ln|F(x)| = x^2 + C$$ $$|F(x)| = \exp(x^2+C) = K\cdot\exp (x^2)$$ and finally absorbing the $\pm$ from the absolute value into the integration constant $K$. $$\begin{align} F(x) &= K\cdot\exp (x^2) \\ F'(x) &= f(x) = 2Kx\cdot\exp (x^2) \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Bilinear form Find q $\in\ V$ such that $\phi(p)=\psi(p,q)$ for all $p\in V$ Let $V$ be the $\mathbb{R}$-Vector space of polynomials with degree $\leq$ 2. Let $\psi\colon V \times V \rightarrow \mathbb{R}$ be the bilinear form $$(p,q) \mapsto \int_{0}^{1}p(x)q(x)dx$$ Let $\phi$ be a linear form $p \mapsto p(0)$. Find a $q \in V$, such that $\phi(p)=\psi(p,q)$ for all $p\in V$ I still have trouble understanding, what this $q$ that I'm looking for actually is and especially how I would manage to find it.
There are clever choices of "sampling functions" $p$ that you can use to tease out the coefficients of this polynomial $q$, but I'm going to show you the robust (if verbose) method to determine $q$. The vector space $V$ has a basis $\{1, x, x^2\}$ over $\mathbb{R}$. In other words, each $p \in V$ is expressed uniquely as $$ p = ax^2 + bx + c $$ for some $a, b, c \in \mathbb{R}$. Or in the language of coordinates, with respect to the monomial basis, $p$ has coordinates $$ \begin{bmatrix} a \\ b \\ c \end{bmatrix} \in \mathbb{R}^3. $$ Let's express our unknown special function $q \in V$ in this basis too: $$ q = Ax^2 + Bx + C. $$ Our goal is to determine $A, B, C \in \mathbb{R}$ such that $\phi(p) = \psi(p, q)$ for all $p \in V$. First, in this basis, what is $\phi(p)$? Evaluate at $x=0$: $\phi(p) = p(0) = c$. Now, the bilinear form involves the product $pq$, so let's work that out in coordinates: \begin{align} p(x) \, q(x) &= \bigl( ax^2 + bx + c \bigr) \bigl( Ax^2 + Bx + C \bigr) \\ &= aA\,x^4 + (bA + aB)\,x^3 + (cA + bB + aC)\,x^2 + (cB + bC)\,x + cC. \end{align} Now, we're supposed to integrate this over $[0, 1]$. Since $$ \int_0^1 k \, x^n \, dx = \biggl. \frac{k}{n+1} x^{n+1} \, \biggr\rvert_0^1 = \frac{k}{n+1} $$ and integration is a linear operator, \begin{align} \psi(p,q) &= \int_0^1 \, p(x) \, q(x) \, dx \\ &= \int_0^1 \, \Bigl( aA\,x^4 + (bA + aB)\,x^3 + (cA + bB + aC)\,x^2 + (cB + bC)\,x + cC \Bigr) \, dx \\ &= \tfrac15 aA + \tfrac14 (bA + aB) + \tfrac13 (cA + bB + aC) + \tfrac12 (cB + bC) + cC \\ &= \bigl( \tfrac15 a + \tfrac14 b + \tfrac13 c \bigr) A + \bigl( \tfrac14 a + \tfrac13 b + \tfrac12 c \bigr) B + \bigl( \tfrac13 a + \tfrac12 b + c \bigr) C \end{align} In coordinates, $$ \psi(p, q) = \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} $$ Now, the requirement that $\psi(p, q) = \phi(p)$ becomes $$ \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} a & b & c \end{bmatrix} \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix} $$ Since we need this equation to hold for any $a, b, c \in \mathbb{R}$, we need to solve the equation $$ \begin{bmatrix} \tfrac15 & \tfrac14 & \tfrac13 \\ \tfrac14 & \tfrac13 & \tfrac12 \\ \tfrac13 & \tfrac12 & \tfrac11 \end{bmatrix} \begin{bmatrix} A \\ B \\ C \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}. $$ Can you take it from here?
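If you want to check your hand computation afterwards: solving the final system exactly (a sketch of mine using stdlib fractions; the resulting $q(x)=30x^2-36x+9$ is my own computation, easy to re-verify against $\int_0^1 pq\,dx = p(0)$ on the monomial basis):

```python
from fractions import Fraction

# moment matrix: with unknowns (A, B, C) and basis (x^2, x, 1),
# the entry in row i, column j is 1/(5 - i - j)
M = [[Fraction(1, 5 - i - j) for j in range(3)] for i in range(3)]
rhs = [Fraction(0), Fraction(0), Fraction(1)]

# Gauss-Jordan elimination with exact rational arithmetic
for col in range(3):
    piv = next(r for r in range(col, 3) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(3):
        if r != col:
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
            rhs[r] -= factor * rhs[col]

coeffs = [rhs[i] / M[i][i] for i in range(3)]
A, B, C = coeffs   # q(x) = A x^2 + B x + C
```

The exact arithmetic matters here: the matrix is Hilbert-like and badly conditioned, so floating point is a poor fit for confirming small integer coefficients.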
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why is the triangle inequality equivalent to $a^4+b^4+c^4\leq 2(a^2b^2+b^2c^2+c^2a^2)$? Consider the existential problem of a triangle with side lengths $a,b,c\geq0$. Such a triangle exists if and only if the three triangle inequalities $$a+b\geq c,\quad b+c\geq a\quad\text{and}\quad c+a\geq b\tag{0}$$ are all satisfied. Alternatively, if $\ell_1\leq\ell_2\leq\ell_3$ are the values of $a,b$ and $c$ ordered in ascending order, then the triangle exists iff $\ell_1+\ell_2\geq\ell_3$. Interestingly, the three triangle inequalities can be recast into a single quartic polynomial inequality. Let $0,x,y\in\mathbb R^2$ be the three vertices of the triangle, with $\|x\|=a,\,\|y\|=b$ and $\|x-y\|=c$. Then $c^2=\|x-y\|^2=\|x\|^2-2\langle x,y\rangle+\|y\|^2=a^2+b^2-2\langle x,y\rangle$. Therefore $x^Ty=\langle x,y\rangle=\frac{1}{2}(a^2+b^2-c^2)$ and $$\pmatrix{x^T\\ y^T}\pmatrix{x&y}=\frac{1}{2}\pmatrix{2a^2&a^2+b^2-c^2\\ a^2+b^2-c^2&2b^2}.\tag{1}$$ The RHS of $(1)$ must be positive semidefinite because the LHS is a Gram matrix. Conversely, if the RHS is indeed PSD, it can be expressed as a Gram matrix. Hence we obtain $x$ and $y$ and the triangle exists. As $2a^2$ and $2b^2$ are already nonnegative, the RHS of $(1)$ is positive semidefinite if and only if $(2a^2)(2b^2)-(a^2+b^2-c^2)^2\geq0$, by Sylvester's criterion. That is, the triangle exists if and only if $$-(a^4+b^4+c^4)+2(a^2b^2+b^2c^2+c^2a^2)\geq0.\tag{2}$$ This polynomial inequality can be derived by more elementary means. See circle-circle intersection on Wolfram MathWorld. The geometric explanation for the necessity of $(2)$ is given by Heron's formula, which states that the square root of the LHS is four times the area of the triangle. Since both $(0)$ and $(2)$ are necessary and sufficient conditions for the existence of the required triangle, the two sets of conditions must be equivalent to each other. Here are my questions. Is there any simple way to see why $(0)$ and $(2)$ are equivalent? 
Can we derive one from the other by some basic algebraic/arithmetic manipulations?
The quartic polynomial in (2) may be shown by algebra to equal the product $(a+b+c)(-a+b+c)(a-b+c)(a+b-c),$ so for nonnegative $a,b,c$ the triangle inequality implies that the quartic expression is nonnegative. But the quartic as defined above is also nonnegative if two or all four of the factors above are negative. So we have to exclude these alternatives to prove equivalence. Assume that $a+b-c$ and $a-b+c$ are negative. Simply adding these up gives $2a<0$, contradicting the hypothesis that $a$ is nonnegative. Similar contradictions occur if we try other pairs of factors being negative: $(-a+b+c<0, a-b+c<0) \to (c<0)$ $(-a+b+c<0, a+b-c<0) \to (b<0)$ Thus by indirect proof the equivalence of (0) and (2) is established.
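The factorization step is easy to spot-check by brute force (my own sanity check; since both sides have degree at most $4$ in each variable, agreement on a $7\times7\times7$ integer grid actually forces the polynomial identity):

```python
def quartic(a, b, c):
    # LHS of (2): -(a^4 + b^4 + c^4) + 2(a^2 b^2 + b^2 c^2 + c^2 a^2)
    return -(a**4 + b**4 + c**4) + 2 * (a*a*b*b + b*b*c*c + c*c*a*a)

def factored(a, b, c):
    # Heron-style factorization claimed above
    return (a + b + c) * (-a + b + c) * (a - b + c) * (a + b - c)

grid = range(-3, 4)   # 7 values per variable
identity_holds = all(quartic(a, b, c) == factored(a, b, c)
                     for a in grid for b in grid for c in grid)
```

For the $(3,4,5)$ right triangle both sides give $16\cdot(\text{area})^2 = 16\cdot 36 = 576$, matching Heron's formula.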
{ "language": "en", "url": "https://math.stackexchange.com/questions/4462950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Radius of convergence of the power series, obtained by the Taylor expansion of $f(z) = \frac{(z+20)(z+21)}{(z-20i)^{21} (z^2 +z+1)}$ about $z = 0$. Write down the radius of convergence of the power series, obtained by the Taylor expansion of the analytic functions about the stated point, in $f(z) = \frac{(z+20)(z+21)}{(z-20i)^{21} (z^2 +z+1)}$ about $z = 0$. My attempt: Since power series are continuous on the disk of convergence, the radius of convergence is the distance to the nearest point of discontinuity. $f(z)$ is not analytic at $z=20i$, hence the radius of convergence would be $|0-20i|= \sqrt{20}$. Am I correct?
The radius of convergence is the distance from $0$ to the nearest singularity. And the nearest singularities are $-\frac12\pm\frac{\sqrt3}2i$ (the roots of $z^2+z+1$), whose distance to $0$ is $1$.
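For completeness, a numerical check of the distances involved (an illustrative snippet of mine): the zeros of $z^2+z+1$ lie at distance $1$ from the origin, while the pole at $20i$ sits at distance $20$ (not $\sqrt{20}$).

```python
import cmath

# the two roots of z^2 + z + 1 (primitive cube roots of unity)
roots = [(-1 + cmath.sqrt(-3)) / 2, (-1 - cmath.sqrt(-3)) / 2]

# distance from 0 to the nearest singularity of f
nearest = min(abs(z) for z in roots + [20j])
pole_dist = abs(20j)   # the pole of order 21 at 20i is much farther away
```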
{ "language": "en", "url": "https://math.stackexchange.com/questions/4463105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }