Help with Maclaurin series of $\cos(\ln(x+1))$? Hi, I've almost completed a maths question but I'm stuck: I just can't seem to get to the final result. The question is: Find the Maclaurin series of $g(x) = \cos(\ln(x+1))$ up to order 3. I have used the standard formulas, which I won't type out as I'm not great with MathJax yet, sorry. But I have obtained: $$ \ln(x+1) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots $$ $$ \cos x = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \frac{x^6}{720} + \cdots $$ I'm sure what I have done above is correct; however, I can't seem to get to the final solution shown in the answers and would like some help please. Thanks
Since you also don't need very high orders, the straightforward calculation of derivatives is tractable, though not preferable computationally. If $f(x) = \cos \log(1+x)$, then $$f'(x) = -\frac{\sin \log (1+x)}{1+x}, \quad f'(0) = 0.$$ Then $$f''(x) = -\frac{\cos \log (1+x)}{(1+x)^2} + \frac{\sin \log(1+x)}{(1+x)^2}, \quad f''(0) = -1.$$ Finally, $$f'''(x) = \frac{\sin \log(1+x)}{(1+x)^3} + \frac{2\cos \log(1+x)}{(1+x)^3} + \frac{\cos \log(1+x)}{(1+x)^3} - \frac{2 \sin \log (1+x)}{(1+x)^3}, \quad f'''(0) = 3.$$ Then $$f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f'''(0)}{3!}x^3 + O(x^4) = 1 - \frac{x^2}{2} + \frac{x^3}{2} + O(x^4).$$ It is worth noting that $$f^{(n)}(x) = \frac{A_n \cos \log (1+x) + B_n \sin \log (1+x)}{(1+x)^n},$$ for suitable constants $A_n$, $B_n$. A proof by induction is straightforward and yields insight into the recursion relations defining $\{(A_n, B_n)\}_{n \ge 1}$: $$\begin{align*} f^{(n+1)}(x) &= -A_n \frac{\sin \log (1+x)}{(1+x)^{n+1}} - nA_n \frac{\cos \log (1+x)}{(1+x)^{n+1}} - n B_n \frac{\sin \log(1+x)}{(1+x)^{n+1}} + B_n \frac{\cos \log (1+x)}{(1+x)^{n+1}} \\ &= \frac{(B_n - nA_n) \cos \log(1+x) + (-A_n - nB_n)\sin \log(1+x)}{(1+x)^{n+1}}. \end{align*}.$$ Therefore, $$\begin{align*} A_{n+1} & = -nA_n + B_n, \\ B_{n+1} &= -A_n - nB_n, \\ A_0 &= 1, \\ B_0 &= 0. \end{align*}$$ In matrix form, this recurrence is equivalent to $$\begin{bmatrix}A_{n+1} \\ B_{n+1}\end{bmatrix} = \begin{bmatrix} -n & 1 \\ -1 & -n \end{bmatrix} \begin{bmatrix} A_n \\ B_n \end{bmatrix},$$ consequently $$\begin{bmatrix}A_n \\ B_n\end{bmatrix} = M_n \begin{bmatrix} 1 \\ 0 \end{bmatrix},$$ where $$M_n = \prod_{k=0}^{n-1} \begin{bmatrix} -k & 1 \\ -1 & -k \end{bmatrix}$$ and $f^{(n)}(0) = A_n$. 
This lets us continue our calculation of higher orders with relative ease if we keep track of the matrix product $M_n$, which is always of the form $$M_n = \begin{bmatrix} a & b \\ -b & a \end{bmatrix};$$ for example, $$M_3 = \begin{bmatrix} 3 & 1 \\ -1 & 3\end{bmatrix}, \quad M_4 = \begin{bmatrix} -3 & 1 \\ -1 & -3\end{bmatrix}\begin{bmatrix} 3 & 1 \\ -1 & 3\end{bmatrix} = \begin{bmatrix} -10 & 0 \\ 0 & -10 \end{bmatrix},$$ so that $A_4 = -10$ and the next coefficient is $-10/4! = -5/12$.
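If you want to push the recursion further without computing derivatives by hand, it is easy to code. A minimal sketch in Python (exact arithmetic via `fractions`; the function name is ours):

```python
from fractions import Fraction
from math import factorial

def maclaurin_coeffs(order):
    """Taylor coefficients of cos(log(1+x)) at 0, via the recurrence
    A_{n+1} = -n*A_n + B_n, B_{n+1} = -A_n - n*B_n, with A_0 = 1, B_0 = 0
    and f^(n)(0) = A_n."""
    A, B = 1, 0
    coeffs = []
    for n in range(order + 1):
        coeffs.append(Fraction(A, factorial(n)))
        A, B = -n * A + B, -A - n * B  # tuple assignment: old A, B on the right
    return coeffs
```

Here `maclaurin_coeffs(4)` returns $1, 0, -\frac12, \frac12, -\frac{5}{12}$, matching the coefficients obtained above.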
{ "language": "en", "url": "https://math.stackexchange.com/questions/3080664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Sylow 2 subgroups of S4 I am trying to find all the Sylow 2 subgroups of S4 using Sylow's theorems. Now, I know that a Sylow 2 subgroup of S4 has size 8, and that there are either 1 or 3 of them (as the number of Sylow 2-subgroups has the form 1+2k and divides 3, the index). Now my lecturer states "stabilisers of the 3 different bisections of {1,2,3,4} yield 3 distinct Sylow 2 subgroups". Now my questions are: 1) What does it mean by stabilisers of 3 different bijections of {1,2,3,4}? Is that to say the elements of S4 which send {1,2} and {3,4} to {1,2},{3,4}, e.g. (12)(34)? 2) How does he know that there are 8 elements of this set? Perhaps this will become clearer after the 1st question is answered. Many thanks, group theory is hard.
1) Yes, you've got it. There are three unique ways to write $\{1,2,3,4\}$ as a union of two sets of size two. These are $\{1,2\}\cup\{3,4\}$, $\{1,3\}\cup\{2,4\}$, and $\{1,4\}\cup\{2,3\}$. These should not be called "bijections of $\{1,2,3,4\}$", but this is what was probably meant. Let's look at the first one, the one you mentioned: $\{1,2\}\cup\{3,4\}$. We can consider the subgroup $H$ of $S_4$ consisting of permutations that preserve this decomposition, i.e., which send $\{1,2\}$ to either $\{1,2\}$ or $\{3,4\}$. You can see $$H = \{(1), (1 2), (3 4), (1 2)(3 4), (1 3)(2 4), (1 4)(2 3), (1 3 2 4), (1 4 2 3)\}.$$ 2) It's clear there are going to be 8 elements in $H$, since we have 4 choices for where $1$ is sent, then we have no choice for where to send $2$ (it must stick next to where $1$ is sent), and then we have $2$ choices for where to send $3$, and then no choice for $4$. So $H$ is a 2-Sylow subgroup of $S_4$. Two others come from the other two decompositions. There can't be more than three of them, as you said. So these are all of them. In case you're curious, you can show that $H$ (and thus all of the 2-Sylow subgroups) is isomorphic to $D_4$, the dihedral group of order 8. You can see this pretty concretely if you know how $D_4$ acts on the vertices of a square by rotation and reflection. If you label the vertices of a square clockwise by $1$, $3$, $2$, and $4$, then all of these rotations and reflections keep opposite corners together, i.e., they do what elements of $H$ did. The other 2-Sylow subgroups come from labeling the vertices differently (and that's exactly the fact that the 2-Sylow subgroups are conjugate in $S_4$, since conjugacy is pretty explicitly just relabeling in the symmetric groups).
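If you want to double-check the count by brute force, here is a small Python sketch (names are ours) enumerating the permutations of $S_4$ that preserve the decomposition $\{1,2\}\cup\{3,4\}$:

```python
from itertools import permutations

def preserves(p, blocks):
    # p is a tuple with p[i-1] = image of i; check the partition is preserved
    images = {frozenset(p[i - 1] for i in block) for block in blocks}
    return images == {frozenset(b) for b in blocks}

H = [p for p in permutations(range(1, 5)) if preserves(p, [{1, 2}, {3, 4}])]
print(len(H))  # 8, matching the counting argument above
```

The list agrees with the eight permutations written out above, and one can also check directly that `H` is closed under composition.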
{ "language": "en", "url": "https://math.stackexchange.com/questions/3080776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Are the letters $O$ and $\infty $ homeomorphic? I feel they are homeomorphic But if we remove the intersection point from $\infty $ we get two disconnected components, right ?
As you have realized, a useful topological invariant is the set of path components $\pi_0$. A continuous map $f\colon X\rightarrow Y$ induces a map on path components $f_*\colon \pi_0(X)\rightarrow \pi_0(Y)$. This is because it respects the equivalence relation which defines path components: if $\beta\colon I\rightarrow X$ is a path from $x$ to $x'$ (both points in $X$), then $f\circ \beta \colon I\rightarrow Y$ is a path from $f(x)$ to $f(x')$. Importantly, homeomorphisms induce bijections on $\pi_0$ (this is an easy exercise). This is often useful in "local" considerations like the one you make in your question. Indeed, if $f$ is a homeomorphism and $x\in X$, then $f$ restricts to a homeomorphism from $X\setminus \{x\}$ to $Y\setminus \{f(x)\}$. But this will then induce a bijection between $\pi_0 (X\setminus \{x\})$ and $\pi_0 (Y\setminus \{f(x)\})$.
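As a purely discrete sanity check (an analogy, not the actual topological argument), one can model each letter as a finite graph and count connected components after deleting a vertex; the names below are ours:

```python
from collections import deque

def components(adj, removed=None):
    """Count connected components of a graph after deleting one vertex."""
    verts = set(adj) - {removed}
    seen, count = set(), 0
    for v in verts:
        if v in seen:
            continue
        count += 1
        queue = deque([v])
        while queue:
            u = queue.popleft()
            if u in seen:
                continue
            seen.add(u)
            queue.extend(w for w in adj[u] if w in verts and w not in seen)
    return count

O = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}  # a 4-cycle, modelling "O"
eight = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1],
         3: [0, 4], 4: [0, 3]}  # two triangles sharing vertex 0, modelling the figure eight
print(components(eight, removed=0))  # 2
```

Removing any vertex of the cycle leaves one component, while removing the crossing vertex of the figure eight leaves two; this is exactly the $\pi_0$ obstruction described above.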
{ "language": "en", "url": "https://math.stackexchange.com/questions/3080920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What does it mean to find the distance from the origin to a plane in $\mathbb R^3$? In do Carmo, one exercise gives a plane in $\mathbb R^3$, $ax +by +cz+d = 0$, and tells us to show that $|d|/\sqrt{a^2 + b^2 + c^2}$ measures the distance from the plane to the origin. However, this seems a bit ambiguous since we don't know what the plane actually is. By distance, does he mean minimal distance?
For the plane $ax+by+cz + d = 0$, a normal vector is: $$\vec n = \langle a,b,c\rangle$$ The vector from a fixed point $(x_0,y_0,z_0)$ to an arbitrary point $(x,y,z)$ on the plane is: $$\vec v = \langle x-x_0,\,y-y_0,\,z-z_0\rangle$$ If we consider the origin in particular, $(x_0,y_0,z_0) = (0,0,0)$, so $$\vec v = \langle x,y,z\rangle$$ The MINIMUM distance from the origin to the plane is the length of the projection of $\vec v$ onto $\vec n$: $$\frac{|\vec n\cdot \vec v|}{|\vec n|}$$ Plugging in the vectors and computing the dot product yields: $$\frac{|ax+by+cz|}{\sqrt {a^2+b^2+c^2}}$$ From the plane's equation, $-d = ax+by+cz$. Thus the minimum distance from the origin to a plane is given by: $$\frac{|d|}{\sqrt {a^2+b^2+c^2}}$$
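A quick numerical sanity check (a sketch; names are ours): the closest point of the plane to the origin is the foot of the perpendicular $t\,(a,b,c)$ with $t=-d/(a^2+b^2+c^2)$, and its length matches the formula:

```python
import math

def origin_to_plane(a, b, c, d):
    # foot of the perpendicular from the origin onto ax + by + cz + d = 0
    t = -d / (a * a + b * b + c * c)
    foot = (a * t, b * t, c * t)
    assert abs(a * foot[0] + b * foot[1] + c * foot[2] + d) < 1e-9  # lies on the plane
    return math.sqrt(sum(u * u for u in foot))

a, b, c, d = 2.0, -1.0, 2.0, 9.0
print(origin_to_plane(a, b, c, d))           # 3.0
print(abs(d) / math.sqrt(a*a + b*b + c*c))   # 3.0
```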
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Proof verification: Show that $f$ is continuous if $f(\overline{A})\subset\overline{f(A)}$. Let $X,Y$ be metric spaces and define $f: X\to Y$. Show that $f$ is continuous iff $f(\overline{A})\subset\overline{f(A)}$ for each $A\subseteq X$. My proof: $\Rightarrow$ Let $f:X\to Y$ be continuous and $A\subseteq X$. Let $y\in f(\overline{A})$, that is, there exists $x\in\overline{A}$ such that $f(x)=y$; since $x\in\overline{A}$, either $x\in A$ or $x$ is a limit point of $A$. I) If $x\in A$, as $f(x)=y$, then $y\in f(A)\subseteq\overline{f(A)}\implies y\in \overline{f(A)}$. II) If $x$ is a limit point of $A$, there exists $(x_{N})\subseteq A$ such that $\lim_{N\to\infty}{x_{N}}=x$, so $f(x_{N})=y_{N}\in f(A)$. Take the limit $\lim_{N\to \infty}{f(x_{N})}=\lim_{N\to\infty}{y_{N}}$. Since $f$ is continuous, we can exchange the limit: $f(\lim_{N\to\infty}{x_{N}})=\lim_{N\to\infty}{y_{N}}\implies f(x)=\lim_{N\to\infty}{y_{N}}$, but $f(x)=y$ by hypothesis, so $\lim_{N\to\infty}{y_{N}}=y$; then $y$ is a limit point of $f(A)$. Therefore, $y\in\overline{f(A)}$. From I) and II), we can conclude that $f(\overline{A})\subseteq \overline{f(A)}$. The other direction of the proof is clear to me, so I need verification of the $\Rightarrow$ proof. Question: Is this proof sufficient? Thanks!
A more direct approach from the definition of closure: Let $f$ be continuous and suppose $y \in f[\overline{A}]$. So $y=f(x)$ with $x \in \overline{A}$. Now let $O$ be an open neighbourhood of $y$, then $f^{-1}[O]$ is an open neighbourhood of $x$, so $f^{-1}[O] \cap A$ is non-empty (as $x \in \overline{A}$), say that $x' \in f^{-1}[O] \cap A$. But then $f(x') \in f[A]$ and $f(x') \in O$ so that $O$ intersects $f[A]$. As $O$ was an arbitrary open neighbourhood of $y$, $y \in \overline{f[A]}$ and the inclusion has been shown.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
If union of n subspaces of V is a subspace of V, then one of the n subspaces must contain the other n-1 subspaces Help prove this? I can prove it for n=2, but I'm stuck on proving it for general n. Thanks! My proof for n=2. Forward direction: Consider A and B with A $\cup$ B a subspace of V. We prove by contradiction, assuming $\exists$ x $\in$ A s.t. x $\notin$ B and $\exists$ y $\in$ B s.t. y $\notin$ A. Consider x+y. Since x $\in$ A and y $\in$ B, if A$\cup$B is a subspace, we expect x+y $\in$ A$\cup$B. However, since y$\notin$A, we get x+y $\notin$A by closure under addition and scalar multiplication: if x+y $\in$ A, then -x+(x+y) = y $\in$ A, which is a contradiction. Thus x+y $\notin$ A and, by similar logic, x+y $\notin$ B. Since x+y $\notin$ A and x+y $\notin$ B, x+y $\notin$ A $\cup$ B, which is a contradiction. Thus one of the subspaces must contain the other. Reverse direction: WLOG, A $\subset$ B, so A $\cup$ B is equal to B. Since A and B are subspaces, they both contain the 0 vector, so A$\cup$B also does. For a1, a2 $\in$ A$\cup$B, we have a1, a2 $\in$ B, so a1+a2 $\in$A$\cup$B --> closed under addition. Consider b$\in$ A$\cup$B and a scalar $\lambda$. Then b$\in$ B and $\lambda$b $\in$ B, so $\lambda$b $\in$ A$\cup$B --> closed under scalar multiplication. Thus A$\cup$B is a subspace. Edit: Does it work that since I proved it for n=2, we can use induction to just look at V$\cup$W for dim(W)=1 as one of the comments pointed out? Doesn't that just reduce it to the 2 subspace case again? And since we know that the union is a subspace, is that too easy?
Let $U_1, U_2, \dots, U_n$ be subspaces of $V$ such that their union is also a subspace of $V$. We can suppose that $U_1 \nsubseteq U_2 \nsubseteq \dots \nsubseteq U_{n-1} \nsubseteq U_n \nsubseteq U_1$, because if $U_i \subseteq U_j$ for some $i \ne j$ then we use induction on $n$ to conclude that there is $k \in \{1, \dots, n\} \setminus \{i\}$ such that $U_k$ contains all the other subspaces. Let $u_k \in U_k \setminus U_{k+1}$ for $k \in \{1, \dots, n\}$, where the indices of subspaces are taken modulo $n$. Suppose we have $N = n^2 - n + 1$ coefficient vectors in $\mathbb{F}^n$ such that any $n$ of them are linearly independent. Using them as coefficients, we construct the following $$ x_1 = a_{11} u_1 + a_{12} u_2 + \dots + a_{1n} u_n\\ x_2 = a_{21} u_1 + a_{22} u_2 + \dots + a_{2n} u_n\\ \dots\\ x_N = a_{N1} u_1 + a_{N2} u_2 + \dots + a_{Nn} u_n\\ $$ As all $u_i$ are in $U_1 \cup U_2 \cup \dots \cup U_n$, so are all $x_i$. By the pigeonhole principle (since $N > n(n-1)$), there is some $U_p$ that contains $n$ different $x_i$. WLOG, $x_1, \dots, x_n \in U_p$. $$\begin{bmatrix} a_{11} & a_{12} & \dots & a_{1n}\\ a_{21} & a_{22} & \dots & a_{2n}\\ \dots & \dots & \dots & \dots\\ a_{n1} & a_{n2} & \dots & a_{nn}\\ \end{bmatrix} \begin{bmatrix}u_1\\u_2\\\dots\\u_n\end{bmatrix} = \begin{bmatrix}x_1\\x_2\\\dots\\x_n\end{bmatrix} $$ As we chose the coefficient vectors so that any $n$ of them are linearly independent, the matrix above is invertible. Therefore, each $u_i$ is written as a linear combination of the $x_i$, and so each $u_i \in U_p$. This is a contradiction, since $u_{p-1} \notin U_p$. Therefore, we could not suppose that $U_1 \nsubseteq U_2 \nsubseteq \dots \nsubseteq U_{n-1} \nsubseteq U_n \nsubseteq U_1$, and so we conclude by induction that there is a subspace that contains all the others.
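One caveat worth making explicit: the argument assumes we can find $N = n^2-n+1$ vectors in $\mathbb{F}^n$ with any $n$ of them linearly independent, which requires the field to be large enough (it works over $\mathbb{R}$ or any infinite field). Over a very small field the statement itself can fail, as this quick check over $\mathbb{F}_2$ shows:

```python
# Over F_2, the plane F_2^2 is the union of its three 1-dimensional subspaces,
# none of which contains another -- so some hypothesis on the scalars is needed.
V = {(a, b) for a in (0, 1) for b in (0, 1)}
lines = [{(0, 0), (1, 0)}, {(0, 0), (0, 1)}, {(0, 0), (1, 1)}]
union = set().union(*lines)
print(union == V)  # True
```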
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Differentiation under the integral sign - what transformations to use? Need some help with this integral $$I (\alpha) = \int_1^\infty {\arctan(\alpha x) \over x^2\sqrt{x^2-1}} dx$$ Taking the first derivative with respect to $\alpha$ $$I' (\alpha) = \int_1^\infty { dx\over (1+\alpha^2 x^2) x\sqrt{x^2-1} }$$ What transformations to use in order to solve $I'(\alpha)$?
Substitute $$u=\sqrt{x^2-1}\implies du=\frac{x}{\sqrt{x^2-1}}dx\implies dx=\frac{\sqrt{x^2-1}}{x}du$$ Then (writing $a$ for $\alpha$) $$\int { dx\over (1+a^2 x^2) x\sqrt{x^2-1} }=\int { du\over x^2(1+a^2 x^2)}=\int { du\over (u^2+1)(a^2u^2+a^2+1) }$$ Perform partial fraction decomposition $$\int { du\over (u^2+1)(a^2u^2+a^2+1) }=\int\frac{du}{u^2+1}-a^2\int\frac{du}{a^2u^2+a^2+1}$$ Surely you know that $$\int\frac{du}{u^2+1}=\arctan(u)+C$$ To solve $$\int\frac{du}{a^2u^2+a^2+1}$$ use the substitution $$v=\frac{au}{\sqrt{a^2+1}}\implies du=\frac{\sqrt{a^2+1}}{a}\,dv$$ $$\int\frac{du}{a^2u^2+a^2+1}=\int\frac{\sqrt{a^2+1}\,dv}{a((a^2+1)v^2+a^2+1)}=\frac{1}{a\sqrt{a^2+1}}\int\frac{dv}{v^2+1}=\frac{\arctan(v)}{a\sqrt{a^2+1}}+C$$ Now plug back in $x$; you get $$\int_{1}^{\infty} { dx\over (1+a^2 x^2) x\sqrt{x^2-1} }=\left[\arctan\left(\sqrt{x^2-1}\right)-\frac{a\arctan\left(\frac{a\sqrt{x^2-1}}{\sqrt{a^2+1}}\right)}{\sqrt{a^2+1}}\right]_{1}^{\infty}$$ I think you can handle the rest of the calculation.
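If you want a numerical sanity check of that bracket (a sketch; names are ours): substituting $x=\sec t$ turns the integral into the smooth $\int_0^{\pi/2}\frac{\cos^2 t}{\cos^2 t+a^2}\,dt$, which Simpson's rule can compare against $\frac{\pi}{2}\bigl(1-\frac{a}{\sqrt{1+a^2}}\bigr)$, the value the bracket evaluates to:

```python
import math

def I_prime(a, n=2000):
    # after x = sec(t), the integrand becomes cos^2(t) / (cos^2(t) + a^2) on [0, pi/2]
    f = lambda t: math.cos(t) ** 2 / (math.cos(t) ** 2 + a * a)
    h = (math.pi / 2) / n
    s = f(0) + f(math.pi / 2)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return s * h / 3  # composite Simpson's rule (n even)

a = 1.0
closed_form = (math.pi / 2) * (1 - a / math.sqrt(1 + a * a))
print(abs(I_prime(a) - closed_form) < 1e-8)  # True
```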
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If $\ln(x)$ is gamma distributed, what is the distribution of $x$? Additionally, if someone could help calculate the mean and variance of $X$, that would be greatly appreciated.
Just to carry out what gt6989b said: if $Y = \ln X \sim \operatorname{Gamma}(\alpha,\beta)$ (rate $\beta$), then for $x > 1$ $$ f_X(x) = \frac{f_Y(\ln x)}{x} = \frac{\beta^\alpha}{\Gamma(\alpha)x}(\ln x)^{\alpha-1}e^{-\beta\ln(x)} = \frac{\beta^\alpha}{\Gamma(\alpha)}(\ln x)^{\alpha-1}x^{-\beta-1} $$ (note $e^{-\beta\ln x} = x^{-\beta}$, so the density decays as a power of $x$, not as a constant $e^{-\beta}$).
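A quick Monte Carlo check of this log-gamma density (a sketch; names are ours). Since $E[X]=E[e^Y]$ is the gamma moment generating function evaluated at $1$, it equals $\bigl(\frac{\beta}{\beta-1}\bigr)^\alpha$ for rate $\beta>1$, and a simulation should reproduce that:

```python
import math
import random

random.seed(0)
alpha, rate = 2.0, 3.0
n = 200_000
# random.gammavariate takes (shape, scale), so scale = 1/rate
samples = [math.exp(random.gammavariate(alpha, 1.0 / rate)) for _ in range(n)]
mc_mean = sum(samples) / n
exact_mean = (rate / (rate - 1.0)) ** alpha  # gamma MGF at 1
print(mc_mean, exact_mean)  # both near 2.25
```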
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why can't I make the substitution $ u = \sin (ax + b) $ to evaluate $ \int \sin (ax + b) \cos (ax + b) dx$? Evaluate $ \int \sin (ax + b) \cos (ax + b) dx$. To do this, I started off by substituting $ u = \sin (ax + b) $. That made $ du = a\cos (ax + b) \, dx $ and I wrote the integral as $ \frac 1a \int u \ du $ to get the final answer: $$ \frac 1{2a} \sin^2 (ax + b) $$ This answer, however, is wrong. My textbook uses a different method and arrives at a different answer. I understand how to arrive at the (right) answer but I want to know why I can't get the same answer by substitution here. My textbook starts off by rewriting the expression as $ \frac {\sin 2 (ax + b)}{2} $ and then substitutes $ 2 (ax + b) = u $ to get this answer: $$ - \frac { \cos 2 (ax + b) } {4a } $$
Your substitution is fine. With $u=\sin(ax+b)$ we get $du=a\cos(ax+b)\,dx$, so $$\int\sin(ax+b)\cos(ax+b)\,dx=\frac1a\int u\,du=\frac{u^2}{2a}=\frac{\sin^2(ax+b)}{2a},$$ which is the same as your answer. Since $\sin^2\theta = \frac{1-\cos 2\theta}{2}$, this equals $$\frac{1}{4a}-\frac{\cos 2(ax+b)}{4a},$$ i.e. the textbook's answer plus the constant $\frac{1}{4a}$, and two antiderivatives of the same function may differ by a constant.
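The point, that the two antiderivatives differ only by a constant, is easy to confirm numerically (a sketch; names are ours):

```python
import math
import random

a, b = 2.0, 0.7  # arbitrary test values
F = lambda x: math.sin(a * x + b) ** 2 / (2 * a)    # substitution answer
G = lambda x: -math.cos(2 * (a * x + b)) / (4 * a)  # textbook answer

random.seed(1)
diffs = {round(F(x) - G(x), 12) for x in (random.uniform(-5, 5) for _ in range(100))}
print(diffs)  # a single value: 1/(4a) = 0.125
```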
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
For polynomial $g(x)$ satisfying $(g(a))^2+(g'(a))^2=0$, evaluate $\lim_{x\to a}\frac{g(x)}{g'(x)}\left\lfloor\frac{g'(x)}{g(x)}\right\rfloor$ If $g(x)$ is a polynomial function and $$(g(\alpha))^2+(g'(\alpha))^2=0$$ then evaluate $$\displaystyle \lim_{x\rightarrow \alpha}\frac{g(x)}{g'(x)}\bigg\lfloor \frac{g'(x)}{g(x)}\bigg\rfloor $$ Try: from $$(g(\alpha))^2+(g'(\alpha))^2=0\quad\implies\quad g(\alpha)=g'(\alpha) = 0 \tag{1}$$ means polynomial $g(x)=0$ has a repeated root, $x=\alpha$. Using $$\frac{g'(x)}{g(x)}-1<\bigg\lfloor\frac{g'(x)}{g(x)}\bigg\rfloor \leq \frac{g'(x)}{g(x)} \tag{2}$$ So $$\lim_{x\rightarrow \alpha}\bigg(\frac{g'(x)}{g(x)}-1\bigg)\frac{g(x)}{g'(x)}<\lim_{x\rightarrow \alpha}\bigg\lfloor\frac{g'(x)}{g(x)}\bigg\rfloor \frac{g(x)}{g'(x)}\leq \lim_{x\rightarrow \alpha}\frac{g'(x)}{g(x)} \frac{g(x)}{g'(x)} \tag{3}$$ with Squeeze Theorem, the limit must be equal to $1$. But I have a doubt for left side how can I prove $$\displaystyle \lim_{x\rightarrow \alpha}\frac{g(x)}{g'(x)} = 0 \tag{4}$$ Could some help me to explain it? Thanks.
$\lim_{x\rightarrow a} \frac{g(x)}{g'(x)}=0$: you can apply L'Hospital's Rule and get $$\lim_{x\rightarrow a} \frac{g(x)}{g'(x)}=\lim_{x\rightarrow a} \frac{g'(x)}{g''(x)}.$$ If $g''(a)\neq 0$ you are done; otherwise you can keep going until you get $$\lim_{x\rightarrow a} \frac{g^{(n)}(x)}{g^{(n+1)}(x)}=\frac{g^{(n)}(a)}{c}=\frac{0}{c}=0,$$ where $c=g^{(n+1)}(a) \neq 0$. All this is true provided $g^{(k)}(x)\neq 0$ for every $x \neq a$ near $a$, so that the quotients are defined. But if, say, $\forall x\in \Bbb R: g(x)=0$, then the limit you are asking for is not defined. **I am new here, so excuse me if I make some mistakes; trying my best :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3081913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Any way to solve $|x-8| = |2-x|-6$ algebraically? Everything I've tried has given me $x = 2$ (which is obviously incorrect, since $-6 \neq 6$). The actual answer is $x \geq 8$ which I obtained by observing a graph. Would love assistance!
For this equation $$|x-8| = |2-x| - 6$$ there are two modulus terms, $|x-8|$ and $|2-x|$, which change behaviour at the points $x=2$ and $x=8$. Now break into the following cases. Case 1: $x\geq8$. Here $|x-8|=x-8$ and $|2-x|=x-2$, which reduces the equation to $$x-8 = x-2-6,$$ an identity; so every $x\geq8$ is a solution of the equation. Case 2: $2\leq x<8$. Here $|x-8|=8-x$ and $|2-x|=x-2$, which reduces the equation to $$8-x = x-2-6,$$ i.e. $2x=16$, so $x=8$, which is not in this interval; no solution from this case. Case 3: $x<2$. Here $|x-8|=8-x$ and $|2-x|=2-x$, so the equation becomes $$8-x=2-x-6,$$ i.e. $8=-4$, which is impossible; again no solution. Conclusion: the solution set is exactly $x\geq8$. Another way of investigating this is by inspection. After working the cases, just put values greater than or equal to $8$ into the equation and you will find that it holds for all of them, whereas for other values it does not. Try putting $x=2$ or any other value less than $8$; the equation won't hold.
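A brute-force check over the integers supports the case analysis (exact integer arithmetic, so no rounding issues):

```python
solutions = [x for x in range(-100, 101) if abs(x - 8) == abs(2 - x) - 6]
print(solutions == list(range(8, 101)))  # True: every sampled x >= 8 works, nothing else does
```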
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that a lower semi-continuous function attains its minimum. (Proof verification) (By contradiction) Let $f: [0,1]\to \mathbb{R}$ be a lower semi-continuous function, then $$ \liminf_{x\to a} f(x) \geq f(a), \forall a \in [0,1]$$ I have to prove that $f$ attains its minimum on $[0,1]$, that is: $\exists x_0 \in [0,1]$ such that $f(x_0) \le f(x)$, $\forall x \in [0,1]$. My proof: Assume that $f$ is lower semi-continuous, but $f$ doesn't attain its minimum on $[0,1]$. Since $f$ is lower semicontinuous at $x_0$, $$\forall \epsilon > 0, \exists \delta > 0 \mbox{ such that } |x - x_0| < \delta \Rightarrow f(x) > f(x_0) - \epsilon, \forall x \in (x_0 - \delta,x_0 + \delta)$$ Now since $f$ doesn't attain its minimum on $[0,1]$, $$\forall x_0 \in [0,1], \exists x \in [0,1] \mbox{ s.t. } f(x_0) > f(x)$$ Let $\epsilon = f(x_0) - f(x) > 0$; $\exists \delta > 0$ such that $|x-x_0| < \delta$ and $f(x) > f(x_0) - \epsilon \Rightarrow f(x_0) - f(x) > \epsilon = f(x_0) - f(x)$, which is a contradiction. Am I right? Thanks.
Since $f$ is lower-semicontinuous, for each $x\in [0,1],$ there is an open interval $I_x\subseteq [0,1]$ such that $\inf\{f(y):y\in I_x\}\ge f(x)-1.$ The $I_x$ form an open cover of $[0,1]$ so passing to a finite subcover, we conclude that $f$ is bounded below. So, letting $y=\inf\{f(x):x\in [0,1]\}$, we can find a sequence $(x_n)\subseteq [0,1]$ such that $f(x_n)\to y.$ And of course, there is a subsequence $(x_{n_k})\subseteq (x_n)$ such that $x_{n_k}\to x_0\in [0,1]$. Then, $f(x_0)\le \liminf_{x\to x_0} f(x)\le \liminf_{k\to \infty}f(x_{n_k})=\lim_{k\to \infty}f(x_{n_k})=y,$ which implies that $f(x_0)=y.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove the inequality $\frac{e^x+e^{-x}}{2} \leq e^{x^2/2}$ for all real numbers $x$. How do I prove what's written in the title? I was able to get an incomplete proof for the case $x>2$. Here's my try: Use $e^x = \sum_{j=1}^{\infty} \frac{x^j}{j!}$. Now we can see that if $x$ is a real number, then: $$e^{x^2/2}-e^x/2-e^{-x}/2=\sum_{j=1}^\infty \frac{\frac{x^{2j}}{2^{j-1}}-x^j-(-x)^j}{2j!}$$ and we need to show this is positive. Now, if $j$ is odd, then the numerator is just $x^{2j}/2^{j-1}$ which is always greater than zero. But if $j$ is even, multiply the numerator by $2^{j-1}$, and check if the result is positive. The result is $x^{2j} -2^{j} \cdot x^j=x^{j}(x^j-2^j)$ So if $x$ is greater then $2$, we get a positive result, but otherwise, we don't. So how do I continue for other values of $x$?
Your computation is flawed. The expansion of $\cosh{x}$ is $\sum_{j \geq 0}{\frac{x^{2j}}{(2j)!}}$, but the expansion of $e^{x^2/2}$ is $\sum_{j \geq 0}{\frac{x^{2j}}{2^j \cdot j!}}$. So you just need to prove that $j!\,2^j \leq (2j)!$ for each $j \geq 0$ (with equality only for $j \leq 1$), and the inequality follows by comparing the two series term by term.
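Both the term-by-term inequality and the resulting bound are easy to spot-check (a sketch):

```python
import math

# term-by-term: j! * 2^j <= (2j)!  (equality only for j = 0, 1)
for j in range(30):
    assert math.factorial(j) * 2 ** j <= math.factorial(2 * j)

# and hence cosh(x) <= exp(x^2 / 2) pointwise
for k in range(-50, 51):
    x = k / 10
    assert math.cosh(x) <= math.exp(x * x / 2) + 1e-12
print("ok")
```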
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Unipotent matrix similar to an upper-triangular matrix "Any unipotent matrix is similar to an upper-triangular matrix with 1's on the diagonal"... This is usually alleged, but I have no idea how to demonstrate that, starting with the definition : $A$ is unipotent if and only if there is $k\in \mathbb{N}$ so that $(A-I_n)^k=0$. And I browsed Internet for hints but found nothing useful. I am not looking here for a ready-made solution, but I would like to understand what is the procedure, what are the steps one has to make, in order to proceed from definition to the result I stated above. Thanks in advance, if someone is able to detail the path to do it.
Here I work over the complex field $\Bbb C$. The main steps are (1.): show that $1$ is the only possible eigenvalue of $A$; (2.) cast $A$ into Jordan form. To wit: First, look at what the condition $(A - I_n)^k = 0 \tag 1$ reveals about the eigenvalues of $A$: that they must all be $1$, for if $A \vec x = \lambda \vec x = \lambda I_n \vec x, \; \vec x \ne 0, \tag 2$ then $(A - I_n) \vec x = (\lambda I_n - I_n) \vec x = (\lambda - 1)I_n \vec x = (\lambda - 1) \vec x, \tag 3$ from which we find $(\lambda - 1)^k \vec x = (A - I_n)^k \vec x = 0; \tag 4$ now since $\vec x \ne 0$ we infer that $(\lambda - 1)^k = 0 \Longrightarrow \lambda = 1. \tag 5$ Now, we may cast $A$ into Jordan normal form; that is, we may find a nonsingular matrix $P$ such that $PAP^{-1} = D + N, \tag 6$ where $D$ is a diagonal matrix whose diagonal entries are the eigenvalues of $A$, and $N$ is strictly upper triangular; since the only eigenvalue of $A$ is $1$, we have $PAP^{-1} = I + N, \tag 7$ which is the requisite result. $OE\Delta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is there a unique ordinal $\alpha$ for every infinite cardinal $\kappa$ such that $\kappa = \aleph_\alpha$? For finding the $\alpha$ I literally can't get any further than writing out the definitions. For the second part: Suppose $\aleph_\alpha = \aleph_\beta$ and $\beta \neq \alpha$. Then either $\alpha \in \beta$ or $\beta \in \alpha$. Assume w.l.o.g. that $\alpha \in \beta$. If $\beta$ is a limit ordinal, then $\aleph_\beta = \bigcup \{\aleph_\gamma \ | \ \gamma < \beta\}$, so maybe that is strictly greater than $\aleph_\alpha$, but I can't prove that, and I don't know what to do if $\beta$ is not a limit ordinal.
First, use transfinite induction to prove that $\alpha\le \aleph_{\alpha}$ for every ordinal $\alpha$. From that we get $\kappa\le\aleph_{\kappa}$, so the following is well defined: $$\alpha=\min\{\beta\in On\mid \kappa\le \aleph_{\beta}\}$$ Now try proving that this $\alpha$ is indeed the $\alpha$ you are searching for. For the other part, define $$\alpha=\min\{\beta\in On\mid \exists\gamma\in On(\gamma\ne \beta\land \aleph_\beta=\aleph_\gamma)\}$$ and $$\beta=\min\{\gamma\in On\mid \gamma\ne\alpha\land \aleph_\alpha=\aleph_\gamma\}$$ Then $\alpha<\beta$, and we get $\aleph_\alpha<\aleph_{\alpha+1}\le\aleph_\beta$, contradicting $\aleph_\alpha=\aleph_\beta$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f(x) = (x-2)(x-4)(x-6) +2$ then $f$ has all real roots between $0$ and $6$. True or false? $f(x) = (x-2)(x-4)(x-6) +2$ then $f$ has all real roots between $0$ and $6$ $($ true or false$)?$ Here $f(0) = -46$ and $f(6) = 2$ since function is continuous so it must have at least one root between $0$ and $6$, but how to check if it has all its roots between $0$ and $6$, without really finding out the roots?
Well, $$\alpha >0 \to f(-\alpha)=2-(\alpha+2)(\alpha+4)(\alpha+6) < -46$$ and $$f(6+\alpha)=2+(4+\alpha)(2+\alpha)(\alpha)> 2$$ So at the very least all its real roots are $\in (0,6)$ You just need to show all of its roots are real.
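Sign changes finish the job: $f(0)=-46$, $f(3)=5$, $f(5)=-1$, $f(6)=2$, so the cubic has three real roots, one in each of $(0,3)$, $(3,5)$, $(5,6)$, all inside $(0,6)$. A quick bisection sketch (names are ours):

```python
def f(x):
    return (x - 2) * (x - 4) * (x - 6) + 2

def bisect(lo, hi, tol=1e-12):
    # assumes f changes sign on [lo, hi]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

roots = [bisect(*ab) for ab in [(0, 3), (3, 5), (5, 6)]]
print(roots)  # three real roots, all strictly between 0 and 6
```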
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Boundedness of a linear operator Let $X$ be the real normed linear space of all real sequences which are eventually zero, with the 'sup' norm, and let $T:X \to X$ be the bijective linear operator defined by $$T(x_1,x_2,x_3,....)=\left(x_1,\frac{x_2}{2^2},\frac{x_3}{3^2},....\right)$$ How to check whether $T$ and $T^{-1}$ are bounded or not? $$\left\lVert Tx\right\rVert=\sup \Big\{\vert x_1 \vert,\frac{\vert x_2 \vert}{2^2},...\Big\}=\sup_n\Big\{\frac{\vert x_n \vert}{n^2}\Big\} \leq \sup_n\Big\{\frac{\vert x_n \vert}{n}\Big\}$$ How to bound the RHS above by $K \lVert x \rVert$, if possible? Any hint? On the other hand, $T^{-1}:X \to X$ is the map $$T^{-1}(x_1,x_2,...)=(x_1,2^2x_2,3^2x_3,...)$$ $$\left\lVert T^{-1}x\right\rVert=\sup_n\Big\{n^2 \vert x_n \vert\Big\},$$ and for the unit vector $e_n$ this gives $\lVert T^{-1}e_n\rVert = n^2$, so $T^{-1}$ is not bounded. Am I right? Any help?
1. $\sup_n \{\frac{|x_n|}{n}\} \le \sup_n\{|x_n|\} =||x||.$ 2. Your considerations concerning $T^{-1}$ are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Do siblings share 50% of their genes? (This is my first question on Maths Stack Exchange; I'm not sure if this is a maths question or a physics question ... or something else.) I was watching this video about psychology, and in it the presenter asserts (at about 5:45) that: Identical twins share ... 100% of their genes ... whereas non-identical twins, ... just like any brother and sister, share only 50% of their genes. Now this doesn't seem quite right to me. I can accept that: 1) Identical twins share 100% of each other's genes (being split from the same egg). 2) Brothers and sisters (twins or not) share 50% of their parents' genes. Am I wrong in thinking that brothers and sisters don't necessarily share 50% of each other's genes?
Assuming the parents share no common alleles, each sibling will inherit 50% of each parent's set: at every gene, each child receives one of that parent's two alleles at random. So for every gene, there is a 50% probability of both siblings inheriting the same allele from a given parent. Therefore the expectation is that the siblings share half of their alleles. However, since it is likely that the parents actually do share some (indeed, rather many) alleles, the expectation should be somewhat higher than one half.
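The expected value of one half is easy to simulate under the stated assumption that all four parental alleles are distinct at each locus (a sketch; names are ours):

```python
import random

random.seed(0)
LOCI, PAIRS = 1000, 200
shared = total = 0
for _ in range(PAIRS):
    for _ in range(LOCI):
        # mom carries alleles {0, 1}, dad carries {2, 3}; each child draws one from each
        sib1 = (random.choice((0, 1)), random.choice((2, 3)))
        sib2 = (random.choice((0, 1)), random.choice((2, 3)))
        shared += (sib1[0] == sib2[0]) + (sib1[1] == sib2[1])
        total += 2
print(shared / total)  # close to 0.5
```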
{ "language": "en", "url": "https://math.stackexchange.com/questions/3082903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is it true that $\overline{(0,\epsilon)\cup(\mathbb Q\cap(\epsilon,1))}=[0,1]$ for $\epsilon\in(0,1)$? Let $\epsilon\in(0,1)$ and $E=(0,\epsilon)\cup(\mathbb Q\cap(\epsilon,1))$. Is it true that $\overline{E}=[0,1]$? I know that $\overline{E}\subset[0,1]$, but how to show that $\overline{E}\supset[0,1]$? I've tried to show that if I have an open interval $(a,b)$ then $\overline{\mathbb Q\cap(a,b)}=[a,b]$. By the way, is this true?
The result is indeed true. Here is a way to show it. Since $E$ is just a finite union of sets, we have $$ \overline{E} = \overline{(0,\varepsilon) \cup (\mathbb{Q}\cap(\varepsilon,1))}=\overline{(0,\varepsilon)}\cup\overline{\mathbb{Q}\cap(\varepsilon,1)} = [0,\varepsilon] \cup \overline{\mathbb{Q}\cap(\varepsilon,1)}. $$ If we manage to show that for an interval $(a,b)$, the equality $\overline{\mathbb{Q}\cap(a,b)}=[a,b]$ is true, as you have asked, then we are done. To show this, first notice that from standard properties of the closure we already have the inclusion $\overline{\mathbb{Q}\cap(a,b)} \subseteq \overline{\mathbb{Q}}\cap\overline{(a,b)}=[a,b]$. Now, to show the reverse inclusion, we must show that for an arbitrary point $x \in [a,b]$, any neighborhood of $x$ intersects the set $\mathbb{Q}\cap(a,b)$. Since any neighborhood of $x$ contains open balls of arbitrarily small radii centered at $x$, it is sufficient to prove that any sufficiently small ball centered at $x$ intercepts $\mathbb{Q}\cap(a,b)$. If $x \in (a,b)$, since this set is open, for any sufficiently small radius $r>0$, the interval $(x-r,x+r)$ is contained in $(a,b)$, but the density of $\mathbb{Q}$ then implies the existence of a rational number $q \in (x-r,x+r)$, and by construction this rational number is in $\mathbb{Q}\cap(a,b)$. If $x=a$, even though in this case we cannot obtain $r>0$ such that $(x-r,x+r) \subset (a,b)$, we can guarantee that the right-side $(x,x+r)$ of the interval will be contained in $(a,b)$. Density again implies that there is a rational number $q \in (x,x+r)$, therefore in $\mathbb{Q}\cap(a,b)$, so $(x-r,x+r)$ intersects $\mathbb{Q}\cap(a,b)$. The last case where $x=b$ is analogous to the previous one, just use the left-side $(x-r,x)$ of the interval and argue in the same way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When does the first repetition in $\;\lfloor x\rfloor, \lfloor x/2 \rfloor, \lfloor x/3\rfloor, \lfloor x/4\rfloor, \dots\;$ appear? Let $\lfloor x\rfloor$ denote the floor of $x$. When does the first repetition in $\lfloor x\rfloor$, $\lfloor x/2\rfloor$, $\lfloor x/3\rfloor$, $\lfloor x/4\rfloor$, ... approximately appear, as a function of $x$? It seems to be around ~ $c \sqrt x$. Example: $x = 2500$: 2500, 1250, 833, 625, 500, 416, 357, 312, 277, 250, 227, 208, 192, 178, 166, 156, 147, 138, 131, 125, 119, 113, 108, 104, 100, 96, 92, 89, 86, 83, 80, 78, 75, 73, 71, 69, 67, 65, 64, 62, 60, 59, 58, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 43, 42, 41, 40, 40, 39, 39, 38, 37, 37, 36, 36, 35, 35, ...
It's essentially the same as Jyrki Lahtonen's answer, but they invited me, so here's mine. Well, it's the same until the part where I go into detail about estimating where in that interval of potential values we actually get the first pair of equal values. Let the sequence $a_n$, for $n=1,2,\dots$, be defined as $\left\lfloor \frac xn\right\rfloor$ for some positive real $x$. We seek the least $n$ for which $a_n=a_{n+1}$, or equivalently the greatest $a$ which appears twice in the sequence. Also, for convenience, define $b_n=\frac xn$. Note that the differences $b_n-b_{n+1}=\frac{x}{n(n+1)}$ form a decreasing sequence. First, a fact about the floor and ceiling function: $\lfloor u\rfloor + \lfloor v\rfloor\le \lfloor u+v\rfloor \le \lfloor u\rfloor + \lceil v\rceil$, with equality when $v$ is an integer. What does this mean for our sequence $a_n$? When $\frac xn - \frac x{n+1} \ge 1$, $a_n-a_{n+1}\ge 1$ as well; we can't have two consecutive entries equal until $b_n$ has two consecutive entries that differ by less than $1$. From that, we will get our lower bound: if $a_n=a_{n+1}$, then $b_n-b_{n+1}=\frac{x}{n(n+1)}<1$ and $x < n^2+n=(n+\frac12)^2-\frac14$. Solving for $n$ in terms of $x$, $n > \sqrt{x+\frac14}-\frac12$. Let $$N(x)=\left\lfloor\sqrt{x+\frac14}+\frac12\right\rfloor$$ be the first integer value that gets us the inequality. Now, for the upper bound. No matter how far we go, until $a_n$ drops all the way to zero, we still have the chance of $a_n$ and $a_{n+1}$ being different. Looking at one difference just won't be enough. Instead, we stack differences together; if $a_n-a_{n+k} < k$, then since $a_n$ is a nonincreasing sequence of integers, some two consecutive values in that range must be equal. By the other half of our key inequality, this is guaranteed to happen when $b_n-b_{n+k} \le k-1$. Start at the first possible place for two values to be equal; we're looking for the least $k$ such that $b_{N(x)}-b_{N(x)+k} \le k-1$. 
This inequality becomes $$k-1 \ge \frac{x}{N(x)}-\frac{x}{N(x)+k} = \frac{kx}{N(x)(N(x)+k)}$$ $$(k-1)N^2(x)+k(k-1)N(x) \ge kx$$ This is - well, it's a mess, because of the floor in the definition of $N$. So, we approximate: from the definition of $N$ we have $\sqrt{x+\frac14}-\frac12 < N(x) \le \sqrt{x+\frac14}+\frac12$, which rearranges to $N^2(x)-N(x)\le x< N^2(x)+N(x)$; here we need the upper bound $x < N^2(x)+N(x)$. If $$(k-1)N^2(x)+k(k-1)N(x) \ge k(N^2(x)+N(x))$$ $$(k^2-2k)N(x) \ge N^2(x)$$ $$N(x)\le (k-1)^2-1$$ then, since $k(N^2(x)+N(x)) > kx$, $(k-1)N^2(x)+k(k-1)N(x) > kx$ and we have a $k$ that works. This is true precisely when $k\ge\sqrt{N(x)+1}+1$; we will, of course, take the first successful value, which places a repetition no later than $n=N(x)+k-1$. The least $n$ with $a_n=a_{n+1}$ must satisfy $$\sqrt{x}\approx N(x) \le n \le N(x)+\lfloor\sqrt{N(x)+1}\rfloor+1 $$ $$\le \left\lfloor\sqrt{x+\frac14}+\frac12\right\rfloor + \left\lfloor\sqrt{\sqrt{x+\frac14}+\frac32}\right\rfloor+1\approx \sqrt{x}+\sqrt[4]{x}$$ And now, for something new. Where in that interval will it happen? For a randomly chosen $x$, it's essentially random - but biased. The deviation $1-(b_n-b_{n+1})$ increases approximately linearly with $n$ starting at zero for $n=N(x)$, so the sum of $j$ of them grows like $j^2$. The probability of our first duplication coming in the first $j$ chances is thus approximately proportional to $j^2$, and the location follows a wedge distribution; the probability of it being at $N(x)+j$ is approximately $\frac{2j+1}{N(x)}$ for $0\le j<\sqrt{N(x)}$. But we can do better than that. Write $N^2(x)=x-c$; rearranging our inequalities $N^2-N\le x< N^2+N$, $$x-N(x) < N^2(x) \le x+N(x)$$ Then $\frac{x}{N(x)}=\frac{N^2(x)+c}{N(x)}=N(x)+\frac{c}{N(x)}$. This fractional offset $\frac{c}{N(x)}$, between $-1$ and $1$, is what actually determines where in the interval we finally reach a spot with two consecutive $a_n$ equal. 
As we repeatedly subtract quantities slightly less than $1$ from $b_n$, its fractional part increases until it ticks over an integer - and when that happens, we get our first repeat in the $a_n$. Let $\frac{x}{N(x)+k}=N(x)-k+e_k$. As already noted, $e_0=\frac{c}{N(x)}\in [-1,1)$. For $e_0\in [-1,0)$, we seek the first $k$ such that $e_k \ge 0$. For $e_0\in [0,1)$, we seek the first $k$ such that $e_k \ge 1$. We will then have $a_{N(x)+k-1}=a_{N(x)+k}$. Clear the denominator to get $$x = N^2(x) - k^2 + N(x)e_k + ke_k$$ $$0 = N(x)(e_k-e_0) + ke_k - k^2$$ $$k = \frac{e_k +\sqrt{e_k^2+4(e_k-e_0)N(x)}}{2}$$ For negative $e_0$, the key point comes when $e_k\approx 0$, and $2k\approx \sqrt{-4e_0 N(x)}$, or $k\approx \sqrt{-e_0 N(x)}$. For positive $e_0$, the key point comes when $e_k\approx 1$, and $2k-1\approx \sqrt{1+4(1-e_0) N(x)}$. Solve that to $k\approx \sqrt{(1-e_0)N(x)}+\frac12$. So then, the amount $k$ we need to add to $N(x)$ is about $\sqrt{N(x)}$ times the square root of either $-e_0$ or $1-e_0$. It takes longest when $e_0$ is equal to $-1$ or $0$, at $x=N^2-N$ or $x=N^2$, and shortest when $x$ is slightly less than one of those values. And that's all I have to say on this one.
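As a sanity check on the window $[N(x),\,N(x)+O(\sqrt{N(x)})]$, here is a small brute-force sketch in Python; the helper names `first_repeat` and `N` are mine, not from the answer.

```python
import math

def first_repeat(x):
    """Least n with floor(x/n) == floor(x/(n+1))."""
    n = 1
    while x // n != x // (n + 1):
        n += 1
    return n

def N(x):
    """floor(sqrt(x + 1/4) + 1/2), computed exactly with integer sqrt."""
    return (math.isqrt(4 * x + 1) + 1) // 2

# spot check against the worked example x = 2500 (the pair 43, 43)
n0 = first_repeat(2500)

# the first repeat stays inside the window [N(x), N(x) + ~sqrt(N(x))]
in_window = all(
    N(x) <= first_repeat(x) <= N(x) + math.isqrt(N(x) + 1) + 1
    for x in range(1, 2000)
)
```

For $x=2500$ this gives $N(x)=50$ and the first repeat at $n=57$, i.e. $k=7$ steps above $N(x)$, consistent with $k\approx\sqrt{50}+\tfrac12$.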
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 4, "answer_id": 2 }
Finding limit of $\sin(x^2/2)/\sqrt{2}\sin^2(x/2)$ as $x\rightarrow0$ Can anybody help me find the limit as $x$ tends to $0$, for $$\frac{\sin(x^2/2)}{\sqrt{2}\sin^2(x/2)}.$$ How can I simplify the expression or use equivalent transformations to find a limit (without using L'Hospital)?
Since $\lim_{x\to0}\dfrac{\sin x}x=1$, i.e. $\sin u\sim u$ as $u\to0$, in multiplicative expressions you can replace $\sin u$ by $u$. Hence $$\lim_{x\to0}\frac{\sin\left(\dfrac{x^2}2\right)}{\sqrt2\,\sin^2\left(\dfrac x2\right)} =\lim_{x\to0}\frac{\dfrac{x^2}2}{\sqrt2\,\left(\dfrac x2\right)^2}=\frac{\dfrac12}{\dfrac{\sqrt2}4}=\sqrt2.$$
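A quick numerical sketch (the function name `ratio` is just illustrative):

```python
import math

def ratio(x):
    """The expression sin(x^2/2) / (sqrt(2) * sin(x/2)^2)."""
    return math.sin(x * x / 2) / (math.sqrt(2) * math.sin(x / 2) ** 2)

# sample at x = 0.1, 0.01, ..., 1e-5: values approach sqrt(2)
samples = [ratio(10.0 ** -k) for k in range(1, 6)]
```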
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Smoothness is local Let us consider a map $f:M\to R$ where $M$ is a smooth manifold. If every point $p\in M$ has a neighborhood $U$ such that $f|_U$ is smooth, prove that $f$ is a smooth function. My idea is to prove that any two coordinate charts from any two atlases are smoothly compatible (if $f|_U$ is smoothly than $f\circ \varphi^{-1}$ is smoothly for any $\varphi$ from the atlas that defines the smooth structure on $U$). Is that ok? If yes, how can I prove that? Thank you!
I think it is better to prove the statement directly from the definition. Given $p\in M$ and a neighborhood $U$ of $p$ such that $f|_U$ is smooth, there exist coordinate charts $(U\cap U_\alpha, \varphi_\alpha|_{U\cap U_\alpha})$ (where $(U_\alpha, \varphi_\alpha)$ is a chart of $M$ around $p$) and $(V,\psi)$ around $p$ and $f|_U(p)=f(p)$ respectively, such that $\psi\circ f\circ \varphi_\alpha^{-1}|_{\varphi_\alpha(U\cap U_\alpha)}:\varphi_\alpha(U\cap U_\alpha)\to \psi(V)$ is smooth. But this is exactly the definition of smoothness of $f:M\to N$ at $p$, using the charts $(U\cap U_\alpha, \varphi_\alpha|_{U\cap U_\alpha})$ and $(V,\psi)$, where $f(U\cap U_\alpha)=f|_U (U\cap U_\alpha)\subset V$. Since $p$ was arbitrary, $f$ is smooth on all of $M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Even holomorphic function on the punctured disk has a primitive Let $f$ such that $f$ is holomorphic on $\{z|0<|z|<1\}$, and $f$ is even. I need to show that f has a primitive. Any ideas?
Let $0 < r < 1$. Then we have $$\int_{\partial B_r(0)} f(z)\, dz = \int_0^{2\pi} f(re^{it})ire^{it}\, dt = \int_0^{\pi} f(re^{it})ire^{it}\, dt + \int_\pi^{2\pi} f(re^{it})ire^{it}\, dt.$$ For the second integral we have $$ \int_\pi^{2\pi} f(re^{it})ire^{it}\, dt = \int_0^{\pi} f(re^{i(t+ \pi)})ire^{i(t+ \pi) }\, dt = -\int_0^{\pi} f(-re^{it})ire^{it }\, dt= -\int_0^{\pi} f(re^{it})ire^{it }\, dt,$$ by the evenness of $f$. Hence the integral over every circle $\partial B_r(0)$ vanishes, and since every loop in the punctured disk is homotopic to a multiple of such a circle, every loop in the punctured disk integrates to $0$. Now pick any point $z_0$ in the punctured disk and define $F(z) = \int_\gamma f(w)\, dw,$ where $\gamma$ is a path in the punctured disk connecting $z_0$ to $z$. This is well-defined precisely because the integral of $f$ along every loop in the punctured disk is zero, and the usual argument then gives $F'=f$.
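To see the cancellation numerically, one can approximate the contour integral with the trapezoid rule. This is only a sketch, using the even example $\cos(z)/z^2$ (residue $0$, so the integral vanishes) contrasted with the odd $\sin(z)/z^2$ (residue $1$), for which the argument fails:

```python
import cmath
import math

def circle_integral(f, r, n=4096):
    """Trapezoid rule for the contour integral of f over |z| = r."""
    h = 2 * math.pi / n
    total = 0j
    for k in range(n):
        z = r * cmath.exp(1j * h * k)
        total += f(z) * 1j * z * h
    return total

even = lambda z: cmath.cos(z) / (z * z)   # even, holomorphic for z != 0
odd = lambda z: cmath.sin(z) / (z * z)    # odd: residue 1 at the origin

I_even = circle_integral(even, 0.5)       # should be ~0
I_odd = circle_integral(odd, 0.5)         # should be ~2*pi*i
```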
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Discrete Spherical Symmetry Group Take two spheres each having a certain number (say 5) of identical dots on them. What is the approach to proving/disproving that they are equivalent under the set of spherical rotations? One could label the points: say (A,B,C,D,E),(1,2,3,4,5) Align (A,1),(X,Y) with (X,Y) being successive attempted matches, say (B,2),(B,3)... ; then check all of the rest for matching. But this seems rather inelegant. In general, it involves 5x4 test alignments and 3x3 tests. One would prefer some kind of rank/determinant calculation. This seems reasonable since alignments are linear when expressed in terms of cartesian coordinates with spherical rotation mappings, having only two degrees of freedom, but the formulation eludes me.
Well, you could calculate the various point-to-point distances in either setting. If the two arrangements really are equivalent under a rotation, then the multisets of pairwise distances must agree, so the first mismatch you find is already enough to decide that they are not equivalent - you are not even forced to calculate them all. (Conversely, if the full distance matrices can be matched up under some relabelling of the dots, the two configurations on the sphere are congruent, i.e. they agree up to a rotation, possibly combined with a reflection.) --- rk
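Here is a minimal sketch of the distance check in Python (helper names are mine); it verifies that applying a common rotation leaves the sorted multiset of pairwise distances unchanged:

```python
import math
import random

def sorted_distances(points):
    """Sorted multiset of pairwise chord distances - a rotation invariant."""
    return sorted(
        math.dist(p, q) for i, p in enumerate(points) for q in points[i + 1:]
    )

random.seed(0)

def random_unit_vector():
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

dots = [random_unit_vector() for _ in range(5)]

# rotate every dot by the same angle about the z-axis
t = 1.234
rotated = [
    [math.cos(t) * x - math.sin(t) * y, math.sin(t) * x + math.cos(t) * y, z]
    for x, y, z in dots
]

match = all(
    abs(a - b) < 1e-9
    for a, b in zip(sorted_distances(dots), sorted_distances(rotated))
)
```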
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\partial B(x_0, r) \subseteq \{x \in X : d(x_0,x) = r \}$ hold in an arbitrary metric space? Let $X$ be a metric space, and let $B(x_0, r)$ denote the open ball of radius $r$ centred at $x_0 \in X$. Does the statement $\partial B(x_0, r) \subseteq \{x \in X : d(x_0,x) = r \}$ hold true always? A similar question posted in Math StackExchange As answered in the link above, we know that the boundary is not the set $\{x \in X : d(x_0,x) = r \}$. however, it seems that the statement $\partial B \subseteq \{x \in X : d(x_0,x) = r \}$ is true. Considering the example in the link above, the boundary of any set in the discrete metric is the empty set which is a subset of $\{x \in X : d(x_0,x) = r \}$. But I am not too certain. Anyone mind showing me some counterexample? Thank you.
Let $(X,d)$ be a metric space, and let $B(x,r)\subseteq X$ be an open ball. Since $\text{Int}\left(B(x,r)\right)=B(x,r)$ (an open ball is an open set) and $\text{Cl}\left(B(x,r)\right)\subseteq\{y \in X :d(y,x)\leq r\}$ (a closed ball is a closed set), it follows that we have \begin{aligned}\partial B(x,r)=\text{Cl}\left(B(x,r)\right) \setminus B(x,r) &\subseteq \{y \in X :d(y,x)\leq r\} \setminus B(x,r) \\&=\{y \in X :d(y,x)= r\}.\end{aligned} Therefore, $\partial B(x,r) \subseteq \{y \in X :d(y,x) = r\}$ holds for any metric space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Exponential series with $k$ as base I've tried to understand why $\displaystyle\sum_{k=0}^{\infty} \frac{k^x}{k!}$ for lets say $x = 4$ equals $15e$. It's clear why $\displaystyle\sum_{k=0}^{\infty} \frac{x^k}{k!} = e^x$ and that $\displaystyle\sum_{k=0}^{\infty} \frac{1^k}{k!}=e$ It's also unclear for me why $\displaystyle\sum_{k=0}^{\infty} \frac{k}{k!}=e$ I've tried to argue that $\displaystyle\sum_{k=0}^{\infty} \frac{e^k}{k!}= \displaystyle\sum_{k=0}^{\infty}\frac{k}{\ln(k!)}$ but that doesn't bring me further. Hope someone here has got an idea for me Thanks.
It is simpler than you think: it is just a recursive property. When $x=1$ $$ \sum_{k=0}^{\infty}\frac{k}{k!}=\sum_{k=1}^{\infty}\frac{k}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k}{k!}=\sum_{k=1}^{\infty}\frac{k}{(k-1)!\,k} $$ $$ \sum_{k=0}^{\infty}\frac{k}{k!}=\sum_{k=1}^{\infty}\frac{1}{(k-1)!} $$ $$ \sum_{k=0}^{\infty}\frac{k}{k!}=\sum_{k=0}^{\infty}\frac{1}{k!}=e $$ When $x=2$: $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=\sum_{k=1}^{\infty}\frac{k^2}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=\sum_{k=1}^{\infty}\frac{k^2}{(k-1)!\,k} $$ $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=\sum_{k=1}^{\infty}\frac{k}{(k-1)!} $$ $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=\sum_{k=0}^{\infty}\frac{k+1}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=\sum_{k=0}^{\infty}\frac{k}{k!}+\sum_{k=0}^{\infty}\frac{1}{k!} $$ From $x=1$: $$ \sum_{k=0}^{\infty}\frac{k^2}{k!}=e+e=2e $$ When $x=3$ $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=\sum_{k=1}^{\infty}\frac{k^3}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=\sum_{k=1}^{\infty}\frac{k^3}{(k-1)!\,k} $$ $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=\sum_{k=1}^{\infty}\frac{k^2}{(k-1)!} $$ $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=\sum_{k=0}^{\infty}\frac{(k+1)^2}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=\sum_{k=0}^{\infty}\frac{k^2}{k!}+2\sum_{k=0}^{\infty}\frac{k}{k!}+\sum_{k=0}^{\infty}\frac{1}{k!} $$ From $x=2$ and $x=1$: $$ \sum_{k=0}^{\infty}\frac{k^3}{k!}=2e+2e+e=5e $$ When $x=4$: $$ \sum_{k=0}^{\infty}\frac{k^4}{k!}=\sum_{k=1}^{\infty}\frac{k^4}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^4}{k!}=\sum_{k=1}^{\infty}\frac{k^3}{(k-1)!} $$ $$ \sum_{k=0}^{\infty}\frac{k^4}{k!}=\sum_{k=0}^{\infty}\frac{(k+1)^3}{k!} $$ $$ \sum_{k=0}^{\infty}\frac{k^4}{k!}=\sum_{k=0}^{\infty}\frac{k^3}{k!}+3\sum_{k=0}^{\infty}\frac{k^2}{k!}+3\sum_{k=0}^{\infty}\frac{k}{k!}+\sum_{k=0}^{\infty}\frac{1}{k!} $$ From $x=1,2,3$: $$ \sum_{k=0}^{\infty}\frac{k^4}{k!}=5e+6e+3e+e=15e $$ And that's all.
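The recursion can be checked numerically by truncating the series; a short sketch (the names `partial_sum` and `ratios` are illustrative):

```python
import math

def partial_sum(x, terms=100):
    """Truncation of sum_{k>=0} k^x / k!  (converges very fast)."""
    return sum(k ** x / math.factorial(k) for k in range(terms))

# dividing by e should recover 1, 2, 5, 15 for x = 1, 2, 3, 4
ratios = [partial_sum(x) / math.e for x in range(1, 5)]
```

These ratios are the Bell numbers, which is exactly what the recursion above computes.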
{ "language": "en", "url": "https://math.stackexchange.com/questions/3083891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is the textbook solution wrong by a sign? Laurent Series Find the Laurent series of $\frac{e^z}{z^2 -1}$ about $z = 1$. Here is my solution: Factor denominator $\frac{e^z}{(z-1)(z+1)}$ let $w = z - 1$, and so $z = w + 1$, substitute in $\frac{e^{w+1}}{w(w+2)}$ Do partial fraction decomposition to get rid of the exponential in numerator $\frac{e^{w+1}}{w(w+2)}$ = $\frac{A}{w} + \frac{B}{w+2}$ $e^{w+1} = A(w+2) + Bw$ If $w=0$, $e=2A$, $A = e/2$. If $w = -2$, $\frac{1}{e} = -2B$, $B = \frac{-1} {2e}$ Thus our new equation is $\frac{e}{2w} + \frac{-1}{2e(w+2)}$ Because the first term $\frac{e}{2w}$ is already in terms of $w = (z-1), we leave it be. The second term we can expand using geometric series expansion $\frac{-1}{4e} \cdot \frac{1}{1 - (\frac{-w}{2})}$ and so the Laurent series we get is $$\frac{e}{2w} - \frac{1}{4e} \cdot \{1 - \frac{w}{2} + \frac{w^2}{4} - \frac{w^3}{8} ... \} $$ However, the textbook solution has same terms, but no negative sign. Where did I go wrong?
You considered $e^{w+1}$ as a rational function (which it is not!). Instead you should expand it at $w=0$ as $$e^{w+1}=e\cdot e^w=e \left(1+w+\frac{w^2}{2}+\frac{w^3}{6}+\dots\right).$$ Hence, after decomposing the rational function $\frac{1}{w(w+2)}$, $$\frac{e^{w+1}}{w(w+2)}=e^{w+1}\left(\frac{1}{2w} - \frac{1/4}{1+w/2}\right)\\= e\left(1+w+\frac{w^2}{2}+\frac{w^3}{6}+\dots\right)\left(\frac{1}{2w} - \frac{1}{4}\left(1-\frac{w}{2}+\frac{w^2}{4}-\dots\right)\right).$$ Can you take it from here? Finally the Laurent expansion should be $$\frac{e}{2w} + \frac{e}{4} \left(1 + \frac{w}{2} + \frac{w^2}{12} +\dots\right).$$
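A numerical sanity check of the expansion, assuming double precision and a small $w$ (names are illustrative):

```python
import math

e = math.e

def g(w):
    """f(z) = e^z / (z^2 - 1) written in the variable w = z - 1."""
    return math.exp(w + 1) / (w * (w + 2))

def laurent(w):
    """Principal part plus the first three regular terms above."""
    return e / (2 * w) + (e / 4) * (1 + w / 2 + w ** 2 / 12)

w = 1e-2
err = abs(g(w) - laurent(w))   # should be of order w^3
res = 1e-6 * g(1e-6)           # w * g(w) -> residue e/2 as w -> 0
```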
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there any property shared by all possible matrices $M$ such that $A=M\cdot B$, with $M$ being lower triangular and A and B 1 dimension arrays I'm a little rusty on my linear algebra, but I would like to explore the solutions for the matrix $M$ that transforms $A$ into $B$. I generated code to find some solutions for low dimensional spaces, but I didn't find a pattern to the solution set other than the transformation itself. Is there any property for the possible $M$ that can be identified?
Note that you can look at each row of $M$ independently. (Write the relation as $b=Ma$, with $a$ the known input vector and $b$ the target; these play the roles of $B$ and $A$ in the question.) For row $i$ you have one equation and $i$ unknowns, so typically you expect an $(i-1)$-dimensional space of solutions for that row. More specifically, let $b_i$ be the $i$th entry of $b$, $[a]_i$ the vector consisting of the first $i$ entries of $a$, and $[m]_i$ the first $i$ entries (in other words, the nonzero part) of the $i$th row of $M$. You need $$b_i = [m]_i \cdot [a]_i$$ which has general solution $$[m]_i = b_i [a]_i / \| [a]_i\|^2 + c$$ for $c$ any vector in the orthogonal complement of $[a]_i$; you can compute an explicit basis for this complement if needed using Gram-Schmidt. Notice there always exist infinitely many solutions ($i-1$ dimensions of them) unless $[a]_i$ is all zeros. In that case there is no solution, unless $b_i$ is also zero, in which case you can set the row $[m]_i$ to whatever you want.
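A minimal sketch of the construction, taking $c=0$ (the minimum-norm choice) in each row; the function name is hypothetical:

```python
def lower_triangular_solution(a, b):
    """One lower-triangular M with M a = b: each row is the
    minimum-norm solution b_i [a]_i / ||[a]_i||^2, zero-padded."""
    n = len(a)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        head = a[: i + 1]
        s = sum(c * c for c in head)
        if s == 0.0:
            if b[i] != 0.0:
                raise ValueError(f"row {i}: no solution")
            continue  # any row works here; keep zeros
        for j in range(i + 1):
            M[i][j] = b[i] * head[j] / s
    return M

a = [1.0, 2.0, -1.0, 3.0]
b = [2.0, 0.5, -3.0, 1.0]
M = lower_triangular_solution(a, b)

residual = max(
    abs(sum(M[i][j] * a[j] for j in range(4)) - b[i]) for i in range(4)
)
is_lower_triangular = all(
    M[i][j] == 0.0 for i in range(4) for j in range(i + 1, 4)
)
```

Adding any vector orthogonal to $[a]_i$ to row $i$ gives the other solutions.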
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
CW-Structure of Spin(n) I cannot find any information about the CW-Structure of $Spin(n)$ groups. Clearly $\pi_1=\pi_2=0$ and I think $H_3(Spin(n))=\mathbb Z$ $(n\geq 5)$, so the $3$-skeleton is $S^3$. What is the $4$-skeleton?
For each $n$ there is a fibration sequence $$Spin_n\rightarrow Spin_{n+1}\rightarrow S^n$$ covering the corresponding fibration sequence $SO_n\rightarrow SO_{n+1}\rightarrow S^n$. The point is that the inclusion $Spin_n\hookrightarrow Spin_{n+1}$ is $(n-1)$-connected, so the $4$-skeleton of $Spin_n$ is the same as the $4$-skeleton of $Spin_6$ for each $n\geq 6$. Now $Spin_5\cong Sp_2$ and it is known that $Sp_2\simeq S^3\cup_{\nu'} e^7\cup e^{10}$, so the $4$-skeleton of $Spin_5$ is $S^3$. Of course $Spin_3\cong S^3$ and $Spin_4\cong S^3\times S^3$, which have $4$-skeletons $S^3$ and $S^3\vee S^3$, respectively. Also $Spin_6\cong SU_4$ and the $5$-skeleton of $SU_4$ is $\Sigma \mathbb{C}P^2\simeq S^3\cup_{\eta_3}e^5$, so by the previous comments, in fact the $5$-skeleton of $Spin_n$ for $n\geq 6$ is $S^3\cup_{\eta_3}e^5$. We also have a fibration $G_2\rightarrow Spin_7\rightarrow S^7$, and it's not that difficult to see that $Spin_7\simeq S^3\cup_{\eta_3}e^5\cup e^6\cup e^6\cup\dots$, which gives the $6$-skeleton of $Spin_n$ for $n\geq 7$. In fact, the previous fibration splits when localised away from $2$, so there is a $\frac{1}{2}$-local homotopy equivalence $Spin_7\simeq G_2\times S^7$. Finally we mention that there is a homeomorphism $Spin_8\cong Spin_7\times S^7$, so you get the $7$-skeleton of $Spin_8$ from that of $Spin_7$ wedged with an extra $7$-cell. I'm not aware of any further isomorphisms for the higher rank spinor groups. A full CW decomposition was given by Araki in his paper On the homology of spinor groups, which was predated by some related results of Borel in Sur l'homologie et la cohomologie des groupes de Lie compacts connexes. As you might expect, the results tend to get a bit complicated as the rank grows, and whilst Araki calculates explicit attaching maps, he does not explicitly calculate their homotopy classes, so depending on what information you are looking for, you may be more interested in his homological calculations. 
And to this end I should refer you to the paper The integral homology and cohomology rings of $SO(n)$ and $Spin(n)$ of Pittie, which does exactly what it says.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Math competition problem involving ratios of areas. What's a good way to attack similar problems? 9 lines each separately partition a square into two quadrilaterals with areas having the ratio 2: 3. Show that 3 of these lines intersect at the same point. Any answer or hints is greatly appreciated, as I'm very puzzled by this problem. Clarification I have only tried drawing a few drawings and examples of the problem, since when I tried to read it over again, I guess I don't fully understand it either. I translated it from my main language, but the way I understand it, you have a square, and draw 9 lines from some side to a side. Now there's going to a be lots of different shapes. And there's two quadrilaterls with areas ratio 2:3, and out from that it should be proven, that 3 lines intersect, atleast that's what I think.
You ask: "What is a good way to attack such problems?" Instead of giving a solution that would not differ much from the solutions given by @Aretino or @Jaap Scherphuis, I will stress a feature that is often useful in these issues. I will call it pompously "the principle of area balance". Take a look at the following picture. Fig. 1 : Representation of area loss and area gain when quadrilateral $ABFE$ is transformed into $ABF'E'$. Imagine you have already found a solution, i.e., a line $EF$ such that area($ABFE$) : area($CDEF$) $=2:3$. Then it suffices to move $E$ upwards into $E'$ and $F$ downwards into $F'$ by the same distance to obtain another solution. Why? If $M$ denotes the midpoint of the line segment $[EF]$, triangles $MEE'$ and $MFF'$ are images of each other under the half-turn about $M$, thus have the same area. In this way, there is a perfect balance between area loss and area gain. So out of one solution one can generate an infinity of solutions, and conversely every line splitting the square in the ratio $2:3$ with this orientation must pass through the same pivoting point $M$ - this explains the central (!) rôle of $M$. Then, as this property (dividing the area of the square in the ratio $2:3$) is invariant under $\tfrac{\pi}{2}$ rotations around the centre of the square, these rotations generate 3 avatars of $M$: four pivot points in all, so with 9 lines the pigeonhole principle forces at least 3 of them through a common pivot. Connected : https://scholarworks.umt.edu/cgi/viewcontent.cgi?article=1437&context=tme
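The pivot can be made concrete on the unit square: a line meeting the left side at $(0,p)$ and the right side at $(1,q)$ cuts off a bottom trapezoid of area $(p+q)/2$, so a $2:3$ split forces $p+q=4/5$, and every such line passes through the fixed point $(1/2,\,2/5)$. A quick sketch (variable names are mine):

```python
import random

random.seed(1)

def bottom_area(p, q):
    """Area below the segment from (0, p) to (1, q) in the unit square."""
    return (p + q) / 2

pivot_y = 2.0 / 5.0   # the pivot (1/2, 2/5): bottom part has area 2/5

ok = True
for _ in range(1000):
    p = random.uniform(0.0, 0.8)
    q = 0.8 - p                   # forces bottom : top = 2 : 3
    y_at_half = (p + q) / 2       # height of the line at x = 1/2
    ok = ok and abs(bottom_area(p, q) - 0.4) < 1e-12
    ok = ok and abs(y_at_half - pivot_y) < 1e-12
```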
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
For which values of $x$ the matrix is invertible? The following matrix has coefficients in $\Bbb Z_{11}$: $\left(\begin{matrix} 1 & 0 & 3 & 0 & 5 \\ 0 & 3 & 0 & 5 & 0 \\ 3 & 0 & x & 0 & 7 \\ 0 & 5 & 0 & 7 & 0 \\ 5 & 0 & 7 & 0 & 9 \end{matrix}\right)$ To determine for which values of $x$ it is invertible, I tried to find the correspondent triangular matrix so I can easily calculate the determinant and then understand for which values $x$ is $0$. I have come to this point: $\left(\begin{matrix} 1 & 0 & 3 & 0 & 5 \\ 0 & 1 & 0 & 7 & 0 \\ 0 & 0 & 2x & 0 & 3 \\ 0 & 0 & 0 & 5 & 0 \\ 0 & 0 & 3 & 0 & 6 \end{matrix}\right)$ I don't know how to remove the $3$ to make the matrix triangular. Any help?
The original matrix $A$ will fail to be invertible if and only if there is a nonzero vector $v=(v_1,\ldots,v_5)^T$ such that $Av=0$. By the pattern of zeros of $A$ we see that the equations from $Av=0$ for $v_2,v_4$ are independent of those for $v_1,v_3,v_5$. Moreover we have $3v_2+5v_4=0=5v_2+7v_4$, whose coefficient matrix has determinant $3\cdot7-5\cdot5=-4\not\equiv0\pmod{11}$, so $v_2=0=v_4$. Now we have to impose that the matrix for $v_1,v_3,v_5$ is not invertible. That matrix is equivalent modulo $11$ to $$\begin{pmatrix}1 & 3 & 5 \\ 3 & x & -4\\ 5 & -4 & -2\end{pmatrix}.$$ Its determinant is equivalent modulo $11$ to $6x+3$, so $\det(A)\equiv 0\pmod{11}$ iff $x\equiv (-3)6^{-1}\equiv (-3)2\equiv 5\pmod{11}$. Hence the matrix is invertible exactly for $x\not\equiv 5\pmod{11}$, i.e. for the ten values $x\in\{0,1,2,3,4,6,7,8,9,10\}$.
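A brute-force verification over all residues (the helper `det` computes an exact integer determinant by Laplace expansion; names are mine):

```python
def det(m):
    """Integer determinant by Laplace expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum(
        (-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
        for j in range(len(m))
    )

def A(x):
    return [
        [1, 0, 3, 0, 5],
        [0, 3, 0, 5, 0],
        [3, 0, x, 0, 7],
        [0, 5, 0, 7, 0],
        [5, 0, 7, 0, 9],
    ]

# residues x in Z_11 for which the matrix is singular mod 11
singular_x = [x for x in range(11) if det(A(x)) % 11 == 0]
```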
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inner product on a sequence and its limit I am stuck on a question, and it seems like I'm missing a really obvious Cauchy-Schwarz application or something, but I am left scratching my head. Let $(x_n):n \in \mathbb{N}$ be a sequence in a Hilbert space $H$. Let $x$ satisfy $\|x_n\|\to \|x\|$ and $\langle x,x_n\rangle \to \langle x,x\rangle$. Show that $x_n \to x$. I have found so far that $\|x_n-x\|^2=\langle x_n,x_n-x\rangle -\langle x,x_n-x\rangle $, and I know that the rightmost term tends to zero which helps, but I don't know about the first one. Any solutions? Thanks in advance.
The left term also tends to zero, since $$ \langle x_n, x_n - x \rangle = \|x_n\|^2 -\langle x_n,x \rangle \to \|x\|^2 - \|x\|^2 = 0, $$ where both hypotheses are used: $\|x_n\|^2\to\|x\|^2$, and $\langle x_n,x \rangle=\overline{\langle x,x_n \rangle}\to\overline{\langle x,x \rangle}=\|x\|^2$.
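The algebraic identity driving the proof, $\|x_n-x\|^2=\|x_n\|^2-2\langle x_n,x\rangle+\|x\|^2$ (real case), can be spot-checked numerically in $\mathbb{R}^8$ (a sketch; names are mine):

```python
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [random.uniform(-1, 1) for _ in range(8)]

checks = []
for _ in range(100):
    xn = [random.uniform(-1, 1) for _ in range(8)]
    diff = [a - b for a, b in zip(xn, x)]
    lhs = dot(diff, diff)                              # ||xn - x||^2
    rhs = dot(xn, xn) - 2 * dot(xn, x) + dot(x, x)     # expanded form
    checks.append(abs(lhs - rhs) < 1e-9)
```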
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of picking a number from a set of unique integers Suppose I have a set of $k$ integers such that every number is unique. Let $A = \{1,2,3,4,...,k\}$ Now suppose that we rearrange these numbers to a random permutation in the set. I want to find the probability of finding a fixed number $x$ at any position of the set. Here's my understanding: The probability of the first number in the set being $x$ is $\frac{1}{k}$ The probability of the second number in the set being $x$ is $(1-\frac{1}{k})\frac{1}{k}$ The probability of the third number being $x$ is $(1-\frac{1}{k})^2 \frac{1}{k}$ That leads to the probability of the $n^{th}$ term being $x$ to be $(1-\frac{1}{k})^{n-1} \frac{1}{k}$ Is this correct?
Hint: your error is at the second position. $$P(\text{$x$ at }2)=P(\text{$x$ not at }1)\cdot P(\text{$x$ at }2\mid \text{$x$ not at }1)=\left(1-\frac{1}{k}\right)\cdot \frac{1}{k-1}=\frac{k-1}{k}\cdot \frac{1}{k-1}=\frac{1}{k}.$$ The conditional probability is $\frac1{k-1}$, not $\frac1k$, because once you know $x$ is not in the first place, only $k-1$ positions remain for it. Now try the third position, and then the fourth. You can also ask yourself: why should the first position have a better chance than any other of holding $x$?
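An exact enumeration over all $5!$ permutations confirms the uniform answer (a sketch; names are illustrative):

```python
from fractions import Fraction
from itertools import permutations

k, x = 5, 3
perms = list(permutations(range(1, k + 1)))

# exact probability that x sits at each of the k positions
probs = [
    Fraction(sum(1 for p in perms if p[pos] == x), len(perms))
    for pos in range(k)
]
```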
{ "language": "en", "url": "https://math.stackexchange.com/questions/3084957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Use the Central Limit Theorem to deduce that if $λ$ is large, then $X$ approximately has a normal distribution. The time instants of incoming requests at a data server can be modelled with a Poisson process. Let $X$ be the number of requests in one hour and let $λ$ be the intensity (requests per hour) of the Poisson process. - Use the Central Limit Theorem to deduce that if $λ$ is large, then $X$ approximately has a normal distribution. Also specify its parameters. Can I assume that, if $X\sim\operatorname{Poisson}(\lambda)$ then: $$ \frac{X-\lambda}{\sqrt\lambda} \overset{\text{distribution}}\longrightarrow N(0,1) \text{ as } \lambda\to\infty \\ $$ It is a correct way to prove it? The parameters for the normal distribution are $\mu=0$ and $\sigma^2=1$ right?
Consider two iid Poisson variables of parameter $\lambda=1$. Their pmf is $$p_1(k)=\frac1{ek!}.$$ The pmf of the sum of these variables is given by $$p_2(k)=\sum_{i+j=k}\frac1{e^2i!j!}=\frac1{e^2k!}\sum_i\binom ki=\frac{2^k}{e^2k!},$$ which is simply a Poisson law of parameter $\lambda=2$. More generally, you could show that the sum of $\lambda$ iid Poisson variables of parameter $1$ follows a Poisson law of parameter $\lambda$ (for integer $\lambda$). Each summand has mean $1$ and variance $1$, so by the CLT, $$\frac{X-\lambda}{\sqrt\lambda}\xrightarrow{\ d\ } N(0,1),$$ i.e. $X$ is approximately normal with parameters $\mu=\lambda$ and $\sigma^2=\lambda$.
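A quick numerical comparison of the Poisson CDF with the limiting normal CDF at $\lambda=400$ (helper names are mine; no continuity correction is used, so the agreement is only to about a percent):

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), summed directly."""
    term = math.exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

lam = 400
# compare P(X <= lam + c*sqrt(lam)) against Phi(c) at a few points
errors = [
    abs(poisson_cdf(int(lam + c * math.sqrt(lam)), lam) - std_normal_cdf(c))
    for c in (-2, -1, 0, 1, 2)
]
```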
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does this math trick work? 35 by 11 is 385 because 3+5 is 8, so it's the digit in the middle. Same for: 72 by 11 is 792 because 7+2 is 9, so it's the digit in the middle. I see it works because 35 by 10 is 350, or 72 by 10 is 720. The 0 is replaced with the extra digit. The last digit is 5 by 1 or 2 by 1, so it stays the same. But why should the middle digit be the sum of the first and last?
When performed as a written calculation, $ab\times11$ is $$\ \ \ \ ab\\ab\\\ \ \overline{acb}$$ (adding $ab$ to $ab$ shifted one place), so that the digits are $a,\ c=a+b,\ b$. This breaks when there is a carry, i.e. when $a+b\ge10$: the carry bumps the leading digit, e.g. $76\times11=836$.
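The trick, including the carry case, in a few lines (the function name is illustrative):

```python
def times_eleven(n):
    """Two-digit trick: put a+b between the digits a and b,
    carrying into the leading digit when a+b >= 10."""
    a, b = divmod(n, 10)
    mid = a + b
    if mid < 10:
        return 100 * a + 10 * mid + b
    return 100 * (a + 1) + 10 * (mid - 10) + b

# the trick agrees with ordinary multiplication for every two-digit n
all_correct = all(times_eleven(n) == 11 * n for n in range(10, 100))
```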
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Finding Dual Basis from a basis in $\mathbb R^2$ I am given the fact that these two vectors form the basis B of $\mathbb R^2$: $$ B=\{\begin{bmatrix} 2 \\ 1 \\ \end{bmatrix} \begin{bmatrix} 3 \\ 1 \end{bmatrix}\} $$ and then asked to find the dual basis or the basis of $(\mathbb R^2)^*$. I would really appreciate if anyone could explain the procedure to follow for these types of questions.
$(\mathbb{R}^2)^*$ consists of linear maps $\ell:\mathbb{R}^2\to \mathbb{R}$ which have standard matrix representation $$ \begin{bmatrix} a&b \end{bmatrix}$$ where $a=\ell(e_1)$ and $b=\ell(e_2)$ for $e_1,e_2$ the standard basis. Let's write your basis vectors as $b_1,b_2$ respectively. The dual basis is the basis $(\phi_1,\phi_2)$ for $(\mathbb{R}^2)^*$ satisfying $\phi_i(b_j)=\delta_{ij}$ for $\delta_{ij}=1$ with $i=j$ and $0$ else. So, $\phi_1(b_1)=1$ and $\phi_1(b_2)=0$. Notice that $b_2-b_1=e_1$. So, $$\phi_1(e_1)=\phi_1(b_2-b_1)=\phi_1(b_2)-\phi_1(b_1)=-1.$$ Similarly, $3b_1-2b_2=e_2$ so $$ \phi_1(e_2)=\phi_1(3b_1-2b_2)=3\phi_1(b_1)-2\phi_1(b_2)=3.$$ Hence, $\phi_1$ has matrix representation $$ \begin{bmatrix} -1&3 \end{bmatrix}.$$ Applying the same reasoning, we get $$ \phi_2(e_1)=\phi_2(b_2-b_1)=\phi_2(b_2)-\phi_2(b_1)=1$$ $$ \phi_2(e_2)=\phi_2(3b_1-2b_2)=3\phi_2(b_1)-2\phi_2(b_2)=-2.$$ Hence, $\phi_2$ has matrix representation $$ \begin{bmatrix} 1&-2 \end{bmatrix}.$$ A simple calculation reveals that indeed these $\phi_1,\phi_2$ satisfy $\phi_i(b_j)=\delta_{ij}$.
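In matrix terms, the dual basis covectors are exactly the rows of the inverse of the matrix whose columns are $b_1,b_2$; here is a small sketch confirming the computation above:

```python
def dual_basis_2d(b1, b2):
    """Dual functionals as row covectors: the rows of the inverse
    of the 2x2 matrix whose columns are b1 and b2."""
    a, c = b1   # first column
    b, d = b2   # second column
    det = a * d - b * c
    if det == 0:
        raise ValueError("vectors do not form a basis")
    # inverse of [[a, b], [c, d]] is (1/det) [[d, -b], [-c, a]]
    phi1 = (d / det, -b / det)
    phi2 = (-c / det, a / det)
    return phi1, phi2

phi1, phi2 = dual_basis_2d((2, 1), (3, 1))

def apply(phi, v):
    return phi[0] * v[0] + phi[1] * v[1]
```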
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the sum of $x_1+x_2+x_3$ of intercept points Suppose that the straight line $L$ meets the curve $y=3x^3-15x^2+7x-8$ in three points $(x_1,y_1)$, $(x_2,y_2)$ and $(x_3,y_3)$. Then $x_1+x_2+x_3=?$ A) 3 $\quad$ B) 4 $\quad$ C) 5 $\quad$ D) 6 $\quad$ E) 7 At the beginning, my main idea is to use Vieta's theorem $x_1+x_2+x_3=-\frac{b}{a}$, and find the answer is $5$, it is correct if $y=0$. But when I use software to draw the grapic of $y=3x^3-15x^2+7x-8$ and a line intercept at three points,I also changed the coefficient of the straight line and use calculator did the summation,still can get $x_1+x_2+x_3=5$, I want to know the general solution of this question
Let $y=ax+b$ be the equation of the straight line $L$ (any non-vertical line; a vertical line meets the curve only once). Then $x_1,x_2,x_3$ are the solutions of the equation $3x^3-15x^2+(7-a)x-8-b=0$. Then Vieta says: $x_1+x_2+x_3= - \frac{-15}{3}=5.$
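A numerical sketch with one concrete, hypothetically chosen line, $y=-11x-8$, picked so that the intersections land at $x=0,2,3$; bisection recovers the roots and their sum is $5$:

```python
def curve(x):
    return 3 * x**3 - 15 * x**2 + 7 * x - 8

# with a = -11, b = -8 the difference is 3x^3 - 15x^2 + 18x = 3x(x-2)(x-3)
a, b = -11.0, -8.0

def h(x):
    return curve(x) - (a * x + b)

def bisect(lo, hi, n=200):
    """Plain bisection; assumes a sign change on [lo, hi]."""
    for _ in range(n):
        mid = (lo + hi) / 2
        if (h(lo) > 0) == (h(mid) > 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

roots = [bisect(-1.0, 1.0), bisect(1.5, 2.5), bisect(2.5, 3.5)]
```

Changing $a$ and $b$ moves the individual roots, but Vieta keeps the sum at $5$ as long as three intersections exist.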
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What are the completions of first-order group theory? A completion of some theory $T$ (i.e. a set of first order statements) is a consistent theory $T' \supseteq T$ such that for every first order statement $\phi$, either $\phi \in T'$ or $\lnot \phi \in T'$. For example, the completions of the theory of algebraically closed fields consist of: * *The theory of algebraically closed fields of characteristic $0$ (which the complex numbers are a model of). *For every prime $p$, the theory of algebraically closed fields of characteristic $p$. The first order theory of groups is expressed in the language with a single binary operator, and is axiomatized by the following statements: * *$\forall a.b.c. (a \ast b)\ast c = a \ast (b \ast c)$ *$\exists e. \forall a. e \ast a = a = a \ast e$ *$\forall a. \exists z. \forall b. (a \ast z) \ast b = b = b \ast (a \ast z) \land (z \ast a) \ast b = b = b \ast (z \ast a)$ My question is, what are the completions of this theory? Given a group $G$, we define $Th(G)$ (the theory of $G$) as the set of true statements about $G$. Note that although every completion arises as the theory of some group, two groups might have the same theory. Since there are $\aleph_0$ statements, there are at most $2^{\aleph_0} = \mathfrak c$ consistent and complete theories, and so many "collisions" will occur. An obvious example is that two isomorphic groups will have the same theory. More generally two groups have the same theory iff they are elementarily equivalent (by definition).
This is totally intractable. For instance, the complete theory of every finite group is a completion (and these completions are distinct for non-isomorphic finite groups, since elementarily equivalent finite structures are isomorphic), so describing all the completions is at least as hard as classifying finite groups. It is also easy to see that there are $\mathfrak{c}$ different completions: for instance, any subset of the statements "there exists an element of order $p$", where $p$ ranges over all primes, can be true in a completion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove combinatorial $\sum_{k=0}^{n/2} {n\choose2k} = \sum_{k=0}^{n/2 - 1} {n\choose2k+1} = 2^{n-1}$ I have problems solving the following formula for even positive integers $n$: $$\sum_{k=0}^{n/2} {n\choose 2k} = \sum_{k=0}^{n/2 - 1} {n\choose 2k+1} = 2^{n-1}$$ I tried to prove it by induction but it didn't work. Is there any combinatorial proof?
Hint: You can get the result by expanding these two expressions using the binomial theorem $$(1-1)^n=0\ \ \ \text{ and } \ \ \ (1+1)^n=2^n.$$
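Spelling the hint out: $(1+1)^n$ says the even-indexed and odd-indexed binomial coefficients together sum to $2^n$, while $(1-1)^n$ says their difference is $0$, so each part equals $2^{n-1}$. A quick check:

```python
from math import comb

for n in (2, 4, 10, 40):                                   # even n
    even_sum = sum(comb(n, 2*k) for k in range(n//2 + 1))  # k = 0 .. n/2
    odd_sum  = sum(comb(n, 2*k + 1) for k in range(n//2))  # k = 0 .. n/2 - 1
    # (1+1)^n gives even_sum + odd_sum = 2^n,
    # (1-1)^n gives even_sum - odd_sum = 0, so each is 2^(n-1)
    assert even_sum == odd_sum == 2**(n - 1)
```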
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to transition from context-free grammar $G$ to a context-free grammar which starts and ends with specific letters? Given a context-free grammar $G$ whose terminal letters are $\{a,b,c,d\}$, how can we transition to a context-free grammar which contains the words from the language that $G$ creates but which start with $a$ and end with $d$? I guess we could create new production rules where for example $A\to aX$ from $G$ would also be in $G'$, but how can we determine whether a production rule for a middle state comes from some rule which starts with $a$ and eventually ends with $d$?
Note that with the construction from the question "How to define a grammar which creates a language from words of another grammar without one of the letters?" you see how messengers can be sent inside the derivation (or derivation tree) to perform certain actions or to check certain properties of the tree. In your case there are two such actions: (1) the first symbol should be an $a$, (2) the last symbol should be a $d$. This calls for two markers, or in other words two new copies of each of the nonterminals: $A_f$ tracks the first symbol of the derivation and $A_\ell$ the last. If $A\to \alpha$ is in the original set of productions $P$ then it is also in the new set $P'$. For the new copy $A_f$ we have cases to consider. If $\alpha$ starts with $a$, so $\alpha =a\beta$, then $A_f\to a\beta$ is in $P'$. Also, when $\alpha$ starts with a nonterminal, so $\alpha=B\beta$, then we get $A_f\to B_f\beta$ in $P'$. Similarly for the final letter $d$ and variables $A_\ell$. Finally the construction is complicated by the fact that initially the axiom is marked by both $f$ and $\ell$. Sometimes it is easier to assume that the original grammar is in a kind of normal form. In this case Chomsky normal form seems best. Those grammars have only productions of the type $A\to BC$ or $A\to t$. The simplicity of this format means there are far fewer cases to consider.
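The "first symbol must be $a$" half of the marker construction can be sketched in a few lines of Python (a toy illustration, not the full construction: the $\ell$-marker and the doubly marked axiom are handled analogously, and the naming convention `A_f` for the marked copy is hypothetical):

```python
def first_marked(P, a='a'):
    """Extra productions for the 'A_f' copies that force the first
    derived symbol to be the terminal a.  Productions are pairs
    (A, alpha) with alpha a non-empty tuple of symbols; nonterminals
    are uppercase strings (no epsilon-productions assumed)."""
    Pf = set()
    for A, alpha in P:
        head = alpha[0]
        if head == a:                     # alpha = a.beta: keep it, marked
            Pf.add((A + '_f', alpha))
        elif head.isupper():              # alpha = B.beta: push marker into B
            Pf.add((A + '_f', (head + '_f',) + alpha[1:]))
    return Pf

P = {('S', ('A', 'B')), ('A', ('a',)), ('A', ('b',)), ('B', ('d',))}
Pf = first_marked(P)
assert ('S_f', ('A_f', 'B')) in Pf
assert ('A_f', ('a',)) in Pf           # the b-production is filtered out:
assert ('A_f', ('b',)) not in Pf       # S_f now derives only words starting with a
```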
{ "language": "en", "url": "https://math.stackexchange.com/questions/3085897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Easy example of a herbrand structure Can someone give me an easy example of a Herbrand structure? I can't really visualise the difference between a Herbrand and a normal structure.
Example Consider the very simple FOL formula: $R(c)$. The domain of the Herbrand structure is the set of all ground terms [i.e. closed terms] of the language. In the above case, we have only the individual constant $c$ as ground term. Thus, the domain is $H = \{ c \}$. With it, we define the Herbrand interpretation: an interpretation in which all constants and function symbols are assigned very simple meanings. Specifically, every constant is interpreted as itself, and every function symbol is interpreted as the function that applies it. The interpretation also defines predicate symbols as denoting a subset of the relevant Herbrand base, effectively specifying which ground atoms are true in the interpretation. This allows the symbols in a set of clauses to be interpreted in a purely syntactic way, separated from any real instantiation. Again, we have a very simple Herbrand interpretation $H_S$: $H_S = (H, R^H)$, where $H$ is the domain defined above and $R^H$ is the subset of $H$ interpreting the relation symbol $R$. If we want $H_S$ to satisfy $R(c)$, we take $R^H = \{ c \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the value $\sum_{n\geq2}^{\infty}(-1)^{n+1}\frac{n}{n^2-1}$ converges to? How to find the value this sum converges to?$$\sum_{n\geq2}^{\infty}(-1)^{n+1}\frac{n}{n^2-1} $$ I've tried writing it like this $$\sum_{n\geq2}^{\infty}(-1)^{n+1}·n·\Bigg(\frac{1/2}{n-1}-\frac{1/2}{n+1}\Bigg) $$ and writing a few terms, but they won't cancel and I ended up with no ideas, any hint? I haven't learnt integration nor differentiation FYI.
Note that$$(-1)^{n+1}\frac n{n^2-1}=\frac12\times\frac{(-1)^{n+1}}{n-1}+\frac12\times\frac{(-1)^{n+1}}{n+1}.$$But$$\sum_{n=2}^\infty\frac{(-1)^{n+1}}{n-1}=\sum_{n=1}^\infty\frac{(-1)^n}n=-\log(2)$$and$$\sum_{n=2}^\infty\frac{(-1)^{n+1}}{n+1}=\sum_{n=3}^\infty\frac{(-1)^n}n=\left(\sum_{n=1}^\infty\frac{(-1)^n}n\right)+1-\frac12=-\log(2)+\frac12.$$
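Combining the two pieces, the sum is $\frac12(-\log 2)+\frac12\left(-\log 2+\frac12\right)=\frac14-\log 2\approx-0.4431$. A numeric check (averaging two consecutive partial sums, which accelerates convergence of alternating series):

```python
from math import log

s_prev = s = 0.0
for n in range(2, 100002):
    s_prev, s = s, s + (-1)**(n + 1) * n / (n*n - 1)

exact = 0.25 - log(2)        # (1/2)(-log 2) + (1/2)(-log 2 + 1/2)
approx = (s + s_prev) / 2    # average of consecutive partial sums
assert abs(approx - exact) < 1e-6
```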
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Homogeneous or non-homogeneous? The second order differential equation is given by $ \frac{d^{2}y}{dx^{2}} + \sin (x+y) = \sin x$. Is this a homogeneous differential equation? Well, I guess this is not a homogeneous differential equation since the form of this equation is not $a(x)y'' + b(x)y' +c(x)y = 0$. But the answer given is that it's homogeneous. How can this equation be homogeneous?
You are correct, as it is not a linear ODE, it is neither homogeneous nor inhomogeneous. The cited characterization is most likely based on the fact that $y=0$ is a solution, but that is only a necessary condition for linearity, not a sufficient one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Defining a tricky function: $(A \rightarrow \mathcal{P}(B)) \rightarrow\mathcal{P}(A \rightarrow B)$ How would I define a function of the form: \begin{align*} \phi: (A \rightarrow \mathcal{P}(B)) \rightarrow\mathcal{P}(A \rightarrow B) \end{align*} I know what behaviour I want, I'm just struggling to define it. For example, consider a function \begin{align*} &f: \{0, 1\} \rightarrow \mathcal{P}(\{0, 1, 2, 3\}) \\ &f(0) = \{0, 1\} \\ &f(1) = \{2, 3\} \\ \end{align*} I want $\phi(f)$ to be: \begin{align*} &\phi(f) = \{g_1, g_2, g_3, g_4 \} \\ &g_1(0) = 0 \qquad g_1(1) = 2 \\ &g_2(0) = 0 \qquad g_2(1) = 3 \\ &g_3(0) = 1 \qquad g_3(1) = 2 \\ &g_4(0) = 1 \qquad g_4(1) = 3 \\ \end{align*} That is, I want all possible combinations of $\{0, 1\} \times \{2, 3\}$ as functions.
The elements of $\phi(f)$ are exactly those $g \colon A \to B$ for which $$\forall a \in A: g(a) \in f(a).$$ They are choice functions: for every $a \in A$, they choose an element $g(a) \in f(a)$. That such a function exists in general (when every $f(a)$ is non-empty) is exactly the axiom of choice; for finite $A$, as in your example, no choice principle is needed.
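For finite sets, as in your example, $\phi(f)$ is just a Cartesian product of the value sets; here is a sketch in Python, representing functions as dictionaries:

```python
from itertools import product

def phi(f):
    """All choice functions g: A -> B with g(a) in f(a) for every a in A."""
    keys = sorted(f)
    return [dict(zip(keys, choice))
            for choice in product(*(sorted(f[k]) for k in keys))]

gs = phi({0: {0, 1}, 1: {2, 3}})
assert len(gs) == 4    # the four functions g1, ..., g4 from the question
assert {(g[0], g[1]) for g in gs} == {(0, 2), (0, 3), (1, 2), (1, 3)}
```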
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Complex analysis proof triangle inequality: Given: $|z+w|^2=|z|^2+|w|^2+2Re(z\bar w)$ Prove: $|z+w|\leq |z|+|w|$ Work done so far: Let $z=x+iy$ and $w=a+bi$, then: $$|x+iy+a+ib|=|z+w|=\sqrt{(x+a)^2+(y+b)^2}$$ $$\sqrt{x^2+y^2}+\sqrt{a^2+b^2}=|z|+|w|$$ Squaring it I get, $$x^2+y^2+2|z||w|+a^2+b^2$$ After this I am lost, any idea how to proceed or if I am doing it wrong, what is the right way. Any help is appreciated!
HINT: Please do not write out real and imaginary parts. Just use the inequality you were given. Compare $|z+w|^2$ and $(|z|+|w|)^2$, and recall what you know (or can prove) if $a,b\ge 0$ and $a^2\le b^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Inequality. ${{\sqrt{a}+\sqrt{b}+\sqrt{c}} \over {2}} \ge {{1} \over {\sqrt{a}}} + {{1} \over {\sqrt{b}}} + {{1} \over {\sqrt{c}}}$ Question. If ${{a} \over {1+a}}+{{b} \over {1+b}}+{{c} \over {1+c}}=2$ and $a$, $b$, $c$ are all positive real numbers, prove that $${{\sqrt{a}+\sqrt{b}+\sqrt{c}} \over {2}} \ge {{1} \over {\sqrt{a}}} + {{1} \over {\sqrt{b}}} + {{1} \over {\sqrt{c}}}$$ My approach. If we let $$x:=\frac{1}{1+a}, y:=\frac{1}{1+b}, z:=\frac{1}{1+c}$$ Then we can know that $${{a} \over {1+a}}+{{b} \over {1+b}}+{{c} \over {1+c}}=2 \Leftrightarrow abc=a+b+c+2 \Leftrightarrow x+y+z=1$$ (By definition of x, y, z) And, $$a=\frac{1}{x}-1=\frac{1-x}{x}=\frac{y+z}{x} (\because x+y+z=1)$$ $$\therefore (a, b, c)=(\frac{y+z}{x}, \frac{z+x}{y}, \frac{x+y}{z})$$ T.S. $$\sum_{cyc}\frac{\sqrt{a}}{2}>\sum_{cyc}\frac{1}{\sqrt{a}}$$ $$ \Leftrightarrow \sum_{cyc}(\sqrt{a}-\frac{2}{\sqrt{a}}) \ge 0$$ $$ \Leftrightarrow \sum_{cyc}(\sqrt{\frac{y+z}{x}}-2\sqrt{\frac{x}{y+z}}) \ge 0$$ $$ \Leftrightarrow \sum_{cyc}\frac{(y-x)+(z-x)}{\sqrt{x(y+z)}}\ge 0$$ $$ \Leftrightarrow \sum_{cyc}(x-y)(\frac{1}{\sqrt{y(z+x)}}-\frac{1}{\sqrt{x(y+z)}})\ge 0$$ $$ \Leftrightarrow \sum_{cyc}(x-y)\frac{\sqrt{x(y+z)}-\sqrt{y(z+x)}}{\sqrt{xy(x+z)(y+z)}} \ge 0$$ But I don't know the next stage. What should I do?
Now, use $$\sqrt{x(y+z)}-\sqrt{y(x+z)}=\frac{x(y+z)-y(x+z)}{\sqrt{x(y+z)}+\sqrt{y(x+z)}}=\frac{z(x-y)}{\sqrt{x(y+z)}+\sqrt{y(x+z)}}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Show that the function $H(x, y) = x^2 + y^2 + |x − y|^{-1}$ achieves its global minimum somewhere on the set $\{(x, y) \in \mathbb{R}^2 : x \ne y\}$. Show that the function $H(x, y) = x^2 + y^2 + |x − y|^{-1}$ achieves its global minimum somewhere on the set $\{(x, y) \in \mathbb{R}^2 : x \ne y\}$. I kind of understand the minimum cannot be $x=y$ since $1/(x-y)=1/0$, which is infinity. I am trying to prove it via contradiction, but I am confused about how to find the minimum of a function with two variables. Any advice or hints will be appreciated. Thank you!
What this question is really asking is if the function has a global minimum at all, as it isn't even defined on the set $x=y$ as you mentioned. The set $\{(x,y)\in \mathbb{R}^2:x\ne y\}$ is open, and therefore any local (and thus the global) minimum must have the partial derivatives equal to $0$. The function is symmetric, so we assume WLOG $x>y$ and the function is now $H(x,y)=x^2+y^2+\frac{1}{x-y}$. Thus $H_x(x,y)=2x-\frac{1}{(x-y)^2}$, $H_y(x,y)=2y+\frac{1}{(x-y)^2}$. These must both be $0$, so their sum must be $0$, so $2x+2y=0$, i.e. $x=-y$. Plugging this back into $H_x$ and setting it equal to $0$ gives $2x=\frac{1}{(2x)^2}$, so $8x^3=1$, so $x=\frac12$, and thus $y=-\frac12$. To see that this is actually a minimum, we compute the Hessian at this point: using $H_{xx}=H_{yy}=2+\frac{2}{(x-y)^3}$ and $H_{xy}=-\frac{2}{(x-y)^3}$ with $x-y=1$, it is $$ \begin{bmatrix} 4 & -2 \\ -2 & 4 \\ \end{bmatrix} $$ which has eigenvalues $2$ and $6$, so it is positive definite; thus this is a local minimum. Moreover $H(x,y)\to\infty$ both as $\|(x,y)\|\to\infty$ and as $x-y\to 0$, so $H$ does attain a global minimum on the open set, and it must occur at a critical point. As $(\frac12,-\frac12)$ is the only critical point in $x>y$, the global minimum is $H(\frac12,-\frac12)=\frac32$, attained by symmetry at $H(-\frac12,\frac12)=\frac32$ as well.
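A dependency-free numeric check of the critical point and the minimum value (gradient by central differences plus random sampling near the point; the point $(\frac12,-\frac12)$ and the value $\frac32$ are taken from the computation above):

```python
import random

def H(x, y):
    # |x - y| makes the formula valid on both sides of the line x = y
    return x*x + y*y + 1.0/abs(x - y)

p = (0.5, -0.5)
h = 1e-6

# central-difference gradient at the claimed critical point
gx = (H(p[0] + h, p[1]) - H(p[0] - h, p[1])) / (2*h)
gy = (H(p[0], p[1] + h) - H(p[0], p[1] - h)) / (2*h)
assert abs(gx) < 1e-6 and abs(gy) < 1e-6
assert abs(H(*p) - 1.5) < 1e-12

# H is never smaller than 3/2 at sampled nearby points
random.seed(0)
for _ in range(1000):
    q = (p[0] + random.uniform(-0.3, 0.3), p[1] + random.uniform(-0.3, 0.3))
    if q[0] != q[1]:
        assert H(*q) >= 1.5 - 1e-9
```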
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Compute $\int_C ze^{\sqrt{x^2+y^2}} \mathrm ds$ Compute $\int_C ze^{\sqrt{x^2+y^2}} \mathrm ds$ where $$C:x^2+y^2+z^2=a^2, x+ y=0, a \gt 0$$ At first I thought to parametrize this as: $x=a \cos t , y=a \sin t, z =0$, but then the integral will result in $0$ and this might not be true.
The curve $C$ is a circle in the plane $x+y=0$ centered at the origin with radius $a$, so it is symmetric with respect to the plane $z=0$. Moreover the integrand is odd with respect to $z$, and therefore, by symmetry, the given integral $\int_C ze^{\sqrt{x^2+y^2}} \mathrm ds$ is zero. BTW a convenient parametrization for the circle $C$ could be: $$x(t)=-y(t)=\frac{a\cos(t)}{\sqrt{2}},\quad z=a\sin(t)\quad \text{with $t\in [0,2\pi]$}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3086923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solve the equation |x-1|=x-1 Solve the equation:$|x-1|=x-1$ My solution: Case 1 :$ x\ge1$, Hence $x-1=x-1$, therefore infinite solution Case 2 :$ x<1$, Hence $1-x=x-1$,$x=1$, hence no solution But the solution i saw concept used is $ x\le1$ in lieu of $ x<1$ Hence final answer is $[1,\infty]$, is this concept correct
Your answer is right, apart from the square bracket pointed out in @ElevenEleven's comment ($\infty$ can't be the upper end of a closed interval). Another way to get it is to note that $|a|=a$ only when $a\geq 0$, so for $a=x-1$, $$|x-1|=x-1$$ implies $$x-1\geq 0$$ which gives you the answer without needing to consider different cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Compact operator by proving Ascoli-Arzelà I need to prove that this operator satisfies Ascoli-Arzelà's hypothesis. $T: C^0[0,1] \rightarrow C^0[0,1] $, defined by $Tu(x)=\int_0^x a(x,t) u(t)dt$, where $a\in C^0 ([0,1] \times [0,1])$. Equiboundedness is okay; I need to prove equicontinuity: take $\{ u_n \} \subset C^0[0,1]$, $||u_n|| \leq M$. So I get $|Tu_n(x)-Tu_n(y)| \leq \ldots \leq L|x-y| + C$, by bounding several times. Is it enough to conclude?
You need to prove that for every $\varepsilon > 0$ and $x \in [0,1]$ there is a $\delta > 0$ such that $|x-y| < \delta$ implies that $|Tu_n(x) - Tu_n(y)| < \varepsilon$. The kind of upper bound you exhibit is insufficient to do this (assuming from context that $C$ is a positive constant) since for $\varepsilon = C/2$ your upper bound is always bigger than $\varepsilon$. Instead, assume without loss of generality that $y > x$ and bound \begin{align*} |Tu_n(x) - Tu_n(y)| &= \bigg| \int_0^x (a(x,t) - a(y,t)) u_n(t) dt + \int_x^y a(y,t) u_n(t) dt \bigg| \\ & \leq \int_0^x |a(x,t) - a(y,t)| |u_n(t)| dt + \int_x^y |a(y,t) u_n(t)| dt \\& \leq \int_0^x |a(x,t) - a(y,t)| M dt + M \|a\|_\infty |y-x| \end{align*} Now $a$ is continuous on a compact set and hence uniformly continuous so there is a $\delta < (2M \|a\|_\infty)^{-1} \varepsilon$ such that $|x-y| < \delta$ implies that $|a(x,t) - a(y,t)| \leq M^{-1} \frac{\varepsilon}{2}$ for every $t$. So for $|x-y| < \delta$, \begin{align} |Tu_n(x) - Tu_n(y)| < \int_0^x \frac{\varepsilon}{2} dt + \frac{\varepsilon}{2} \leq \varepsilon \end{align} as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Riemann-integration problem Here is the exercise: Let $f:[a,b]\rightarrow \mathbb{R}$ be Riemann-integrable. Prove that $f^+$, $f^-$ and $|f|$ are also Riemann-integrable, when $$f^+=\begin{cases} f(x) & f(x)\geq 0 \\ 0 & otherwise \end{cases}$$ $$f^-=\begin{cases} -f(x) & f(x)\leq 0 \\ 0 & otherwise \end{cases}$$ This problem seems so obvious. Why wouldn't $f^+$ be integrable? Anyway, I need to prove this using this hint: "$f$ is Riemann-integrable if for every $\epsilon>0$ there exist step functions $h\leq f\leq g$ such that $\int g -\int h <\epsilon$." I have no idea where to start...
We prove the Riemann integrability of $f^+$; a similar proof can be done for $f^-$. As $f$ is Riemann-integrable, for all $\epsilon>0$ there exist step functions $h \leq f\leq g$ such that $\int g -\int h <\epsilon$. Now define $h^+ = \max(h,0)$ and $g^+ = \max(g,0)$. You can verify that: * *$h^+, g^+$ are step functions. *You have $h^+ \le f^+ \le g^+$ for all $x \in [a,b]$. *And also $g^+ - h^+ \le g-h$ for all $x \in [a,b]$. This implies $0 \le \int (g^+-h^+) \le \int (g-h) < \epsilon$ and concludes the proof. Finally, $|f|=f^++f^-$ is then Riemann-integrable as a sum of Riemann-integrable functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Investigate the convergence of $ \sum_{n=1}^{\infty } (-1)^{n}\frac{n+2}{n(n+1)} $ I am supposed to investigate the convergence of $ \sum_{n=1}^{\infty } (-1)^{n}\frac{n+2}{n(n+1)} $. I'm unsure whether to use Leibniz' criterion or a comparison test and I really can't start. Thanks
You can directly use Leibniz' criterion, but you have to show the absolute value of the general term is non-increasing. Here, I suggest an (arguably) simpler way to see what's happening and prove convergence: You have $$ \frac{n+2}{n(n+1)} = \frac{1}{n+1} + \frac{2}{n(n+1)} $$ and therefore $$ \sum_{n=1}^\infty (-1)^n \frac{n+2}{n(n+1)} = \sum_{n=1}^\infty \frac{(-1)^n}{n+1} + 2\sum_{n=1}^\infty \frac{(-1)^n}{n(n+1)} $$ The first series on the right-hand-side converges conditionally by Leibniz's criterion, the second converges absolutely.
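With the question's sign convention $(-1)^{n+1}$, the two pieces evaluate (via the standard alternating harmonic sums) to $1-\log 2$ and $2(2\log 2-1)$, so the series converges to $3\log 2-1\approx 1.0794$; a numeric check using averaged partial sums:

```python
from math import log

s_prev = s = 0.0
for n in range(1, 100001):
    s_prev, s = s, s + (-1)**(n + 1) * (n + 2) / (n * (n + 1))

exact = (1 - log(2)) + 2*(2*log(2) - 1)   # = 3*log(2) - 1
approx = (s + s_prev) / 2                 # average of consecutive partial sums
assert abs(approx - exact) < 1e-6
```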
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Calculate the value of $\int_0^\infty \frac{\sqrt{x}\cos(\ln(x))}{x^2+1}\,dx$ I'm asked to evaluate the integral $\displaystyle\int_0^\infty \frac{\sqrt{x}\cos(\ln(x))}{x^2+1}\,dx$. I tried defining a function $f(z)=\frac{e^{(1/2+i)\operatorname{Log}(z)}}{z^2+1}$, taking $\operatorname{Log}$ with a branch cut along the positive real axis: ($\operatorname{Log}(z)=\ln(|z|)+i\arg(z))$. Using the residue theorem with the "pacman" contour. However when trying to bound the integral around a small circle around $0$, I cannot conclude it converges to $0$. My attempt was $|\int_{\gamma_\epsilon}f|\leq 2\pi\epsilon|e^{(0.5+i)(\ln|\epsilon|+i\theta))}|\frac{1}{\epsilon^2-1}\leq C\epsilon^{-0.5}.$ I'd love it if someone could either suggest a different way to bound the integral around $0$ of this function, or maybe suggest an easier complex function to work with. Edit: The wonderful "Related" algorithm of this site managed to link me to this answer Looking at it, a more general statement is proved, but the proof fails when we have $\alpha=0.5+i$ (The circle around $0$ doesn't converge to $0$ by the proof given there, as a matter of fact any $\alpha$ with $Re(\alpha)>0$ would fail.)
As @Adrian suggested, define $\log z =\log |z|+i\arg(z)$ where $\arg(z)\in (0,2\pi)$ and let the contour be a keyhole contour. Then $$ \left|\int_{\gamma_R}\frac{e^{(1/2+i)\log z}}{z^2+1}\,dz\right|\le \int_{\gamma_R}\frac{e^{1/2 \log|z|-\arg(z)}}{R^2-1}\,|dz|\le C\frac{R^{3/2}}{R^2-1}\stackrel{R\to\infty}\longrightarrow 0, $$ $$ \left|\int_{\gamma_r}\frac{e^{(1/2+i)\log z}}{z^2+1}\,dz\right|\le \int_{\gamma_r}\frac{e^{1/2 \log|z|-\arg(z)}}{1-r^2}\,|dz|\le Cr^{3/2}\stackrel{r\to 0}\longrightarrow 0. $$ Thus it follows by residue theorem $$ \lim_{\epsilon\to 0}\left(\int_{\gamma_\epsilon} f(z)dz +\int_{\gamma_{-\epsilon}} f(z)dz\right) =2\pi i\left(\text{res}_{z=i}f(z)+\text{res}_{z=-i}f(z)\right). $$ We find$$ \lim_{\epsilon\to 0}\int_{\gamma_\epsilon} f(z)dz=\int_0^\infty \frac{\sqrt{x}e^{i\ln x}}{x^2+1}\,dx, $$ $$ \lim_{\epsilon\to 0}\int_{\gamma_{-\epsilon}} f(z)dz=-\int_0^\infty \frac{e^{(1/2+i)(\ln x+2\pi i)}}{x^2+1}\,dx=+e^{-2\pi}\int_0^\infty \frac{\sqrt{x}e^{i\ln x}}{x^2+1}\,dx. $$ And also $$ \text{res}_{z=i}f(z)=\frac{e^{(1/2+i)\frac{\pi i}{2}}}{2i}=\frac{e^{-\pi/2+\pi i/4}}{2i}, $$ $$ \text{res}_{z=-i}f(z)=-\frac{e^{(1/2+i)\frac{3\pi i}{2}}}{2i}=-\frac{e^{-3\pi/2+3\pi i/4}}{2i}. $$ Thus the given integral is $$ \frac{\pi}{1+e^{-2\pi}}\Re\left(e^{-\pi/2+\pi i/4}-e^{-3\pi/2+3\pi i/4}\right)=\frac{\pi\cosh(\frac{\pi}{2})}{\sqrt{2}\cosh(\pi)}\sim 0.4805. $$ (I found that this value coincides with the integral numerically by wolframalpha.)
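A numeric cross-check of the closed form: substituting $x=e^t$ turns the integral into $\int_{-\infty}^\infty \frac{e^{3t/2}\cos t}{e^{2t}+1}\,dt$, whose integrand decays exponentially in both directions, so a plain trapezoid rule on a truncated interval is very accurate:

```python
from math import exp, cos, cosh, sqrt, pi

# substitute x = e^t: the integrand becomes e^{3t/2} cos(t) / (e^{2t} + 1),
# which decays like e^{-|t|/2} as t -> +/- infinity
def g(t):
    return exp(1.5*t) * cos(t) / (exp(2*t) + 1.0)

a, b, n = -60.0, 60.0, 120000
h = (b - a) / n
trapezoid = h * (0.5*(g(a) + g(b)) + sum(g(a + i*h) for i in range(1, n)))

closed_form = pi * cosh(pi/2) / (sqrt(2) * cosh(pi))   # ~ 0.4805
assert abs(trapezoid - closed_form) < 1e-6
```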
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
point cloud in complex plane I want to draw the point cloud represented by the following term. $$M_{4}=\left\{z \in \mathbb{C} : | z-1|=\frac{1}{2}| z-j|\right\}$$ $j$ equals $i$, the imaginary square root of $-1$. I have made several attempts to get a solution for the equation. This is the one that looks the most promising. I don't really have an approach on how to continue. I think I have to generate a $j$, but I haven't seen the right way to do so. Thank you in advance. $ |x+j y-1|=\frac{1}{2}|x+j y-j| $ $ |(x-1)+jy|=\frac{1}{2}|x+(j y-j)| $ $ \sqrt{(x-1)^2+jy^2}=\frac{1}{4}\sqrt{x^2+(j y-j)^2} $ $ (x-1)^2+jy^2=\frac{1}{4}(x^2+(j y-j)^2) $ $ x^2 -2x +1 -y^2=\frac{1}{4}(x^2+(jy^2 -2jy^2 +j^2) $ $ x^2 -2x +1 -y^2=\frac{1}{4}(x^2-y^2 +2y^2 -1) $ $ \frac{3} {4}x^2 - 2x + \frac{5}{4}y^2 - \frac{1}{2}y = 0$
By point cloud you presumably mean the locus of points satisfying the equation. Square both sides of $|z-1|=\frac{1}{2}|z-j|$ to get $4|z-1|^2=|z-j|^2$. With $z=x+jy$ this reads $4\left[(x-1)^2+y^2\right]=x^2+(y-1)^2$, which expands to $3x^2+3y^2-8x+2y+3=0$, and completing the square gives $$\left(x-\tfrac{4}{3}\right)^2+\left(y+\tfrac{1}{3}\right)^2=\tfrac{8}{9}.$$ So $M_4$ is a circle (an Apollonius circle) with center $\tfrac{4}{3}-\tfrac{1}{3}j$ and radius $\tfrac{2\sqrt{2}}{3}$. One remark on your attempt: the imaginary unit must not survive inside the squares, since $|x+jy|=\sqrt{x^2+y^2}$; thus $|(x-1)+jy|=\sqrt{(x-1)^2+y^2}$, and the stray $j$'s are what produced the wrong signs in your final equation.
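As a sanity check (assuming the locus works out to the Apollonius circle with center $\frac43-\frac13 j$ and radius $\frac{2\sqrt2}{3}$, which follows from squaring both sides of $|z-1|=\frac12|z-j|$), every point of that circle satisfies the original condition:

```python
from math import sqrt, pi, cos, sin

center = complex(4/3, -1/3)   # derived from 4|z-1|^2 = |z-i|^2
radius = 2*sqrt(2)/3

for k in range(360):
    t = 2*pi*k/360
    z = center + radius*complex(cos(t), sin(t))
    # every point of the circle satisfies |z-1| = (1/2)|z-i|
    assert abs(abs(z - 1) - 0.5*abs(z - 1j)) < 1e-12
```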
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is this inequality derived? Let $T: \ell_2 \to \ell_2$ be defined by $T((x_1,x_2,\dots,x_n,\dots))=(x_2-x_1, x_3-x_2,\dots,x_{n+1}-x_n,\dots)$. Then I have to find the norm of $T$. Here is the answer to this question: https://math.stackexchange.com/a/1647794/581242 I am not able to see how the first inequality is derived. $\|(Tx)\| = \sqrt{\sum_{i=1}^\infty |x_{i+1}-x_i|^2} \leq \sqrt{\sum_{i=1}^\infty |x_{i+1}|^2 + \sum_{i=1}^\infty|x_i|^2} \leq 2\|x\|$
It appears that there was a small typo: use the inequality $|a-b|^{2} \leq 2(a^{2}+b^{2})$, so the middle expression should read $\sqrt{2\sum_{i=1}^\infty |x_{i+1}|^2 + 2\sum_{i=1}^\infty|x_i|^2}$. The final bound $\leq 2\|x\|$ is correct; only the first inequality is misstated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the definition of $e^{ix}$? This might seem silly, but in proving Euler Formula taught in Calculus classes, we make the assumption that $\frac{d}{dx}e^{ix} = ie^{ix}$ However $e^{g}$ Pre-Euler’s Formula, only takes in real numbers for g. If we tried to use the Chain Rule where $g = ix$ we have no definition for that. My question is, how do we know, Pre-Euler’s Formula, that $e^{z}$ exists for complex values of z, and how do we know that $e^{ix}$ is differentiable at $ix$. Basically what is the definition of $e^{ix}$ before knowing Euler's Formula.
When I was teaching, I always used the series definition, as explained in @ErikParkinson’s answer. But you may also define $$ e^z=\lim_{n\to\infty}\left(1+\frac zn\right)^n\,. $$ When you look at this closely, the formula $e^{it}=\cos t+i\sin t$ becomes very reasonable.
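The limit definition is easy to test numerically; with $z=i\pi$ the limit should be $e^{i\pi}=-1$:

```python
import cmath, math

z = 1j * math.pi
n = 10**6
w = (1 + z/n)**n          # the limit definition, truncated at a large n

assert abs(w - cmath.exp(z)) < 1e-4    # close to exp(i*pi)...
assert abs(cmath.exp(z) + 1) < 1e-12   # ...and exp(i*pi) = -1
```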
{ "language": "en", "url": "https://math.stackexchange.com/questions/3087919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
if $f$ has Newton Polygon consisting of one segment $(0,0)$ to $(n,m)$ with $m,n$ coprime, then $f$ cannot be factored Let $f(X)\in 1+ X\mathbb{Z}_p[X]$ have Newton Polygon consisting of one segment joining $(0,0)$ to $(n,m)$ with $m,n$ coprime. I have to show that $f(X)$ cannot be factored as a product of two polynomials with coefficients in $\mathbb{Z}_p$. I know that the slope of the Newton Polygon is $m/n$ and since $m,n$ coprime, this does not lie in $\mathbb{Z}_p$. I also know there exists a theorem which says something about the number of same slopes, but I do not fully understand this theorem. Can someone help me to understand this question and hopefully to understand the theorem better? Thanks!
If you understand that every root $\rho$ of $f$ satisfies $v(\rho)=-m/n$, then you see that this will happen for both of $g$ and $h$ if $f=gh$. Now, what can the Newton polygon of $g$ be? It will be of width $r$ for some integer with $0<r<n$, since neither $g$ nor $h$ is constant. And the right-hand vertex? Since the slope is still $m/n$, the vertex will be at $(r, \frac{rm}n)$. But the $y$-coordinate has to be an integer if the polynomial has $\Bbb Z_p$-coefficients. Thus $g\notin\Bbb Z_p[X]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding sum of non-arithmetic series I have to find the following sum: $$ S = \small{1*1+2*3+3*5+4*7+...+100*199} $$ I figured out that for each element in this series the following holds: $$ a_n = a_{n-1} + 4n - 3 $$ But I don't know where to go from here; I tried subtracting some other series but that did not work very well.
$a_n=\sum_{r=1}^n(4r-3)+a_0=\dfrac n2(1+4n-3)+a_0=2n^2-n+a_0$. Here $a_0=0$, so $a_n=n(2n-1)$, matching the terms $1\cdot1,\,2\cdot3,\,3\cdot5,\dots$ Hence $$\sum_{n=1}^{100}a_n=2\sum_{n=1}^{100}n^2-\sum_{n=1}^{100}n=2\cdot\frac{100\cdot101\cdot201}{6}-\frac{100\cdot101}{2}=671650.$$ Alternatively, use undetermined coefficients: try $$a_n=b_n+a+bn+cn^2,$$ so that $$4n-3=a_n-a_{n-1}=b_n-b_{n-1}+b+c(2n-1)=b_n-b_{n-1}+2cn+b-c.$$ Matching coefficients, set $2c=4$ and $b-c=-3$, i.e. $c=2$, $b=-1$; this forces $b_n=b_{n-1}$, and choosing $a=0$ gives $b_n=b_0=a_0=0$, hence again $a_n=2n^2-n$.
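The $n$th term is $n(2n-1)$, so the sum can be checked directly against the closed form $2\sum n^2-\sum n$:

```python
S = sum(n * (2*n - 1) for n in range(1, 101))   # 1*1 + 2*3 + ... + 100*199

n = 100
closed = 2 * (n*(n + 1)*(2*n + 1)) // 6 - n*(n + 1) // 2   # 2*sum n^2 - sum n

assert S == closed == 671650
```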
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
Apply the Implicit Function Theorem to find a root of polynomial Calculate the value of the real solution of the equation $x^7+0.99x-2.03=0$, and give an estimate for the error. The hint is: use the Implicit Function Theorem. I don't know how to use the IFT in this case; I'm not familiar with it. I think of constructing a function $F:\mathbb{R}^3 \times \mathbb{R} \to \mathbb{R}$ with some parameters, of which one is the root. Maybe $$F(c_1,c_2,c_3,x) = c_{1}x^7 + c_{2}x - c_{3}.$$ But I'm not sure about this. Can someone help me?
Let $F(x,y,z)=x^7+y\,x-z$; then $F(1,1,2)=0$. We have $$ \frac{\partial F}{\partial x}=7\,x^6+y\implies\frac{\partial F}{\partial x}(1,1,2)\ne0. $$ By the IFT, you can solve for $x$ in the equation $F(x,y,z)=0$ on a neighborhood of $(1,1,2)$. That is, there is a $C^1$ function $\phi(y,z)$ such that $\phi(1,2)=1$ and $F(\phi(y,z),y,z)=0$. What you want now is $\phi(0.99,2.03)$. You cannot obtain an exact formula, but you can find an approximation: $$ \phi(0.99,2.03)\approx\phi(1,2)+\frac{\partial \phi}{\partial y}(1,2)(.99-1)+\frac{\partial \phi}{\partial z}(1,2)(2.03-2). $$ You can find the values of the partial derivatives of $\phi$ from the IFT.
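Numerically (a quick sanity check, not part of the theorem): the IFT gives $\phi_y=-F_y/F_x=-\frac18$ and $\phi_z=-F_z/F_x=\frac18$ at $(1,1,2)$, so the linearization yields $\phi(0.99,2.03)\approx 1.005$, which agrees with the actual root to about $6\cdot10^{-5}$:

```python
def F(x, y, z):
    return x**7 + y*x - z

# partial derivatives at the base point (x, y, z) = (1, 1, 2)
Fx, Fy, Fz = 7*1**6 + 1, 1, -1
phi_y, phi_z = -Fy/Fx, -Fz/Fx                         # -1/8 and 1/8
estimate = 1 + phi_y*(0.99 - 1) + phi_z*(2.03 - 2)    # = 1.005

# actual root of x^7 + 0.99x - 2.03 = 0, by bisection on [0.9, 1.1]
def f(x):
    return F(x, 0.99, 2.03)

lo, hi = 0.9, 1.1
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2

assert abs(estimate - 1.005) < 1e-12
assert abs(root - estimate) < 1e-3
```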
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Exercise in abstract algebra Assume that $M$ is a set, and that $R$ is a ring. Let $\cal{F}$ be the collection of all functions $M\rightarrow{R}$. Prove that the following two statements are equivalent. * *$\cal{F}$ is a field. *$M$ is a singleton (i.e. consisting of one element), and $R$ is a field. The implication 2. $\implies$ 1. is straightforward. If $M=\{m\}$, one identifies any particular element $r\in{R}$ with the function $f(m) = r$, and checks that $\cal{F}$ satisfies the field axioms. But I am stuck on the implication 1. $\implies$ 2.
Suppose M has at least two distinct elements : $m_1$ and $m_2$. Then, consider $f :M\to R$ such that $f(m_1)=1$ and $f(m)= 0$ otherwise; and $g:M\to R$ such that $g(m_2)=1$ and $g(m) =0$ otherwise. Then, as functions (that is, as elements of $F$), $f$ and $g$ are non zero but $fg=0$. So $F$ can't be a field.
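Concretely, representing elements of $\cal F$ as dictionaries (here with $R=\mathbb{Q}$, using Python's `Fraction`), the two functions above are nonzero but multiply pointwise to the zero function, so $\cal F$ has zero divisors:

```python
from fractions import Fraction

M = (1, 2)                                   # M has two distinct elements
f = {1: Fraction(1), 2: Fraction(0)}         # f(m1) = 1, 0 elsewhere
g = {1: Fraction(0), 2: Fraction(1)}         # g(m2) = 1, 0 elsewhere

product = {m: f[m] * g[m] for m in M}        # pointwise multiplication in F
zero = {m: Fraction(0) for m in M}

assert f != zero and g != zero               # f, g are nonzero elements of F
assert product == zero                       # yet f*g = 0: zero divisors
```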
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Small question on Pythagoras theorem The projections-of-the-legs over the hypotenuse should add up to the hypotenuse $c$. Is there any alternative way to prove below? $$a\cos \alpha + b\sin \alpha = \sqrt{a^2+b^2}$$
Well, $$\cos \alpha = \frac ac\quad \&\quad \sin \alpha = \frac bc$$ so $$a\cos \alpha +b\sin \alpha = \frac 1c \times (a^2+b^2)=\frac {c^2}c=c$$ Of course, that last step requires the Pythagorean Theorem ("PT"). It is worth remarking that, without using PT the argument shows $$a\cos \alpha +b\sin \alpha = \frac {a^2+b^2}c$$ and since the OP has shown that $$a\cos \alpha +b\sin \alpha = c$$ we can combine the two arguments to get $$\frac {a^2+b^2}c=c \implies a^2+b^2=c^2$$ so the two arguments together yield a proof of PT.
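A quick numeric check of $a\cos\alpha+b\sin\alpha=\sqrt{a^2+b^2}$, taking $\alpha$ to be the angle with $\cos\alpha=a/c$ and $\sin\alpha=b/c$:

```python
from math import atan2, cos, sin, hypot

for a, b in [(3.0, 4.0), (1.0, 1.0), (5.0, 12.0)]:
    alpha = atan2(b, a)          # then cos(alpha) = a/c and sin(alpha) = b/c
    c = hypot(a, b)              # c = sqrt(a^2 + b^2)
    assert abs(a*cos(alpha) + b*sin(alpha) - c) < 1e-12
```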
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to prove the following formula using an indirect proof I need to prove that the premise $A \to (B \vee C)$ leads to the conclusion $(A \to B) \vee (A \to C)$. Here's what I have so far. From here I'm stuck (and I'm not even sure if this is correct). My idea is to use negation intro by assuming the opposite and coming up with a contradiction. I assumed $A$ which led to $B \vee C$ and, as you can see, I'm trying or elim but the only way I can think of doing this is to use conditional intro and then or intro but that seems to only work for a single subproof. In other words, I can't use the assumption of $B$ to say $A \to B$. This is called an indirect proof.
Hint: if you assume $A \to (B \lor C)$, $\lnot(A \to B)$ and $A$, then you can conclude $B \lor C$ and $\lnot B$. Can you take it from there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Translate this sentence to predicate logic The question given asks to translate to predicate logic: Every positive real number has a unique positive real root. My solution to this problem is to separate it into the appropriate quantifiers. C(x) = "Every positive real number x" S(x) = "x has a positive real root" the final logic form being: ∀x(C(x)→S(x)) It seems too easy to be correct. I need help in confirming my answer.
Couple of things: First, you don't want to define: $C(x)$ = "Every positive real number $x$" The 'every' needs to be taken care of by a quantifier so that should not be part of the meaning of this formula. Indeed, the formula $C(x)$ should simply express a property of $x$, just like your: $S(x)$ = "$x$ has a positive real root" The obvious thing to do is to define: $C(x)$ = "$x$ is a positive real number" Second, if we now look at $$\forall x (C(x) \rightarrow S(x))$$ then we see that this means: "Every positive real number has a positive real root" .. but you were supposed to translate "Every positive real number has a unique positive real root" So, you're missing the "unique" part. What to do? One thing you can do is to simply redefine: $S(x)$ = "$x$ has a unique positive real root" But given as you are using predicate logic, I think it is much better to further analyze this into objects and predicates. In particular, that "unique positive real root" is an object itself, so it makes sense to use a (separate) variable for it, say $y$ As such, instead of having a 1-place predicate $S(x)$ that says that "x has a unique positive real root", you want to use a 2-place predicate: $H(x,y)$: "$x$ has $y$ as a positive real root" OK, so now we can nicely re-express that "Every positive real number has a positive real root": $$\forall x (C(x) \rightarrow \exists y \ H(x,y))$$ Hmm, but we still leave out the "uniqueness" part. Well, the cool thing is that in predicate logic you can capture that using the identity relationship $=$. That is, you want to say that not only does there exist some $y$ that is a positive real root of $x$, but that this $y$ is the only positive real root of $x$, i.e. that there are no other positive real roots of $x$. As such, we can write: $$\forall x (C(x) \rightarrow \exists y (H(x,y) \land \neg \exists z (H(x,z) \land z \not = y)))$$ See how that works? 
There is a positive real root $y$, but there is no other positive real root $z$. Another way to think about this is that every positive real root will have to be $y$. So: $$\forall x (C(x) \rightarrow \exists y (H(x,y) \land \forall z (H(x,z) \rightarrow z = y)))$$ That is: there is a positive real root $y$, and everything that is a positive real root will have to be $y$, thus making $y$ the one and only positive real root. Finally, though a little less intuitive, you can do: $$\forall x (C(x) \rightarrow \exists y \ \forall z (H(x,z) \leftrightarrow z = y))$$ This is equivalent to the last one, because $y$ is of course equal to $y$, so if you set $z$ to $y$, then you can go from right to left, and hence $y$ becomes a positive real root of $x$; going left to right, we again get that any positive real root will have to be $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3088860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\lim_n{f(\frac{x}{n})}=0$ for all $x\in (0,1)$ implies $\lim_{x\to0}f(x)=0$ (edit: $f$ is not continuous) Let $f:(0,1)\rightarrow \mathbb{R}$. For all $x\in (0,1)$ I have that $\lim_n{f(\frac{x}{n})}=0$. Is it true that $\lim_{x\to0}f(x)=0$? And if $f$ is continuous? For the second point I just use the definitions of continuity and limits: Let $x_0\in(0,1)$ and consider the sequence $\{f(\frac{x_0}{n})\}$; I know that for all $\varepsilon_1>0$ $\exists N>0$ such that if $n>N$ then $|f(\frac{x_0}{n})|<\varepsilon_1$. $f$ is continuous at $\frac{x_0}{n}$, so for all $\varepsilon_2>0$ $\exists \delta_2>0$ such that if $|x-\frac{x_0}{n}|<\delta_2$ then $|f(x)-f(\frac{x_0}{n})|<\varepsilon_2$. So I have that $|x|\le|x-\frac{x_0}{n}|+|\frac{x_0}{n}|<\delta_2+|\frac{x_0}{n}|=\delta^*$, because $x_0$ is fixed. On the other hand $|f(x)|\le|f(x)-f(\frac{x_0}{n})|+|f(\frac{x_0}{n})|<\varepsilon_1+\varepsilon_2$. But I don't know how to handle the non-continuous case: I tried to disprove it using various non-continuous functions, but I failed. (Sorry for my bad English, I'm Italian)
I will provide a proof for the continuous case. This result requires the Baire Category Theorem. If $\epsilon >0$ then $(0,1)=\cup_n A_n$ where $A_n=\{x:|f(\frac x k )| \leq \epsilon \ \forall k \geq n\}$; each $A_n$ is closed because $f$ is continuous. Since $(0,1)$ is of second category it follows that there is some interval $(a,b)$ and some $n_0$ such that $|f(\frac x n )| \leq\epsilon$ whenever $a<x<b$ and $ n \geq n_0$ $\cdots (1)$. Now the length of the interval $(\frac a x, \frac b x)$ is greater than $1$ whenever $0<x<b-a$, hence there is an integer $n$ in this interval. This integer is also greater than $n_0$ if $\frac a x >n_0$, i.e. if $x <\frac a {n_0}$. Replacing $x$ by $nx$ in (1) we get $|f(x)| \leq\epsilon$ whenever $x$ is sufficiently small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a simple closed curve in a compact surface be dense? I do not see an argument immediately that it cannot be, but it feels dubious. Does genus have anything to do with it?
A simple closed curve in a surface $X$ is a continuous injection $f:S^1\to X$. Since $S^1$ is compact, the image of $f$ is compact and hence closed, so if it were dense it would be all of $X$. But this is impossible: a continuous injection from a compact space into a Hausdorff space is a homeomorphism onto its image, and a surface is not homeomorphic to $S^1$. More generally, the same argument applies to any Hausdorff space $X$ which is not homeomorphic to $S^1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Making $x$ the subject of $x^3-2x+y=1$ How do I make $x$ the subject of this formula? $$x^3-2x+y=1\tag1$$ My attempt: I tried completing the square $$(x-1)^3+y-1=1\tag2$$ $$(x-1)^3=2-y\tag3$$ $$x-1=\sqrt[3]{2-y}\tag4$$ $$x=1+\sqrt[3]{2-y}\tag5$$ but my teacher said it is wrong! Can you please help? Thanks in advance!
You make a mistake in the first step when you attempt to complete the square. Remember: you are working with an $x^3$, not an $x^2$, and because of this, any "completing of the cube" would require an $ax^2$ term as well. Unfortunately, due to the fact that $x^3 - 2x + y = 1$ is not one-to-one, there is no easy way to rewrite your function in the form $x = f(y)$.
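A quick numeric illustration of the failure of injectivity (taking $y=1$, so the equation reads $x^3-2x=0$): three distinct values of $x$ satisfy it, so no single formula $x = f(y)$ can invert the relation.

```python
import math

# g(x) = x^3 - 2x; the equation x^3 - 2x + y = 1 reads g(x) = 1 - y.
g = lambda x: x**3 - 2*x

# For y = 1 we need g(x) = 0, which has three distinct real solutions:
roots = [0.0, math.sqrt(2), -math.sqrt(2)]
for x in roots:
    assert abs(g(x)) < 1e-12
```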
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\mathscr{B} = \{ [a, b) | a< b \in \mathbb{R} \}$ is a basis for a Topology in $\mathbb{R}$ I just want to ask if my proof for this problem is correct. $$\mathscr{B} = \{ [a, b) | a< b \in \mathbb{R} \}$$ is a basis for a Topology in $\mathbb{R}$ . Here is my proof: * *Let $x \in \mathbb{R}$. Choose $B \in \mathscr{B}$ st $a \leq x <b$. Then $x \in B.$ *Let $B_1= [a_1, b_1), B_2= [a_2, b_2) \in \mathscr{B}$. If $B_1 \cap B_2 = \varnothing$ then we are done. If otherwise, then there exists $x$ in the intersection. Choose $B_3= [a_3, b_3) \in \mathscr{B}$ st $a_3= \mbox{max}\{a_1, a_2 \}, b_3 =\mbox{min}\{b_1, b_2 \}$. Then $x \in B_3$ and $B_3 \subseteq B_1 \cap B_2$. Therefore $\mathscr{B}$ is a basis.
Simpler and direct is $[a,b) \cap [r,s) = [\max(a,r), \min(b,s))$. In addition, it is necessary to note that every $r \in \mathbb{R}$ is in some base set. $[r, r+1)$ for example.
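The intersection formula is easy to sanity-check in code; a minimal sketch (the `intersect` helper is illustrative, not from the question):

```python
def intersect(a, b, r, s):
    """Intersection of half-open intervals [a, b) and [r, s); None if empty."""
    lo, hi = max(a, r), min(b, s)
    return (lo, hi) if lo < hi else None

assert intersect(0, 5, 3, 8) == (3, 5)   # [0,5) ∩ [3,8) = [3,5)
assert intersect(0, 2, 2, 4) is None     # [0,2) ∩ [2,4) is empty
```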
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Purpose of Variance I know how to calculate variance, in probability and with experimental data etc., but what I can't seem to understand is the real-world application of it. I'm just wondering if someone could give an explanation of its importance in statistics and experimental data, by giving an example.
For a practical example: suppose you are planning a party at your house, with hundreds of guests coming, and you want to serve each of them Coca-Cola. The problem is that everyone has a different capacity: a kid might need just a glass, while an adult would need more than that. Finding the mean helps you imagine an "average guest" and buy the right total amount. After the party you look at the data: part of the people didn't have any drink, some had just a glass, and the rest had more than a couple of glasses. (Probably some really liked it! :P) Since you had the correct mean, you didn't fall short of drinks. But the variance tells you something the mean doesn't: how the consumption varied around that average. More variance $\iff$ those who liked the drink took much more than average and those who disliked it took much less. Less variance $\iff$ most people drank close to your estimated amount. In short: the mean is the amount equivalent to each person who takes the drink; the variance is a measure of the variation of the quantity around that central tendency.
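A minimal numeric sketch of the point (made-up consumption data): two samples with the same mean but very different variances.

```python
from statistics import mean, pvariance

# Two parties with the same average consumption per guest (hypothetical data)
uniform_party = [5, 5, 5, 5]       # everyone drinks the same amount
polarized_party = [0, 2, 8, 10]    # some drink nothing, some drink a lot

assert mean(uniform_party) == mean(polarized_party) == 5
assert pvariance(uniform_party) == 0
assert pvariance(polarized_party) == 17   # (25 + 9 + 9 + 25) / 4
```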
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability that sum of integer reciprocals is larger than a fixed number. Suppose $n$ numbers are drawn independently from the list of $m$ integers $\{1,2,3,\ldots ,m\}$ uniformly at random. Denote these $n$ picks as $x_1,x_2,\ldots x_n$. Note that $n\geq m$ is possible. Fix a positive integer $C$. I am trying to determine the probability that $$\sum_{i = 1}^{n} \frac{1}{x_n}\geq C.$$ However I am not really sure where to start as I have not done much work with probability before. Is there some way to get such a probability?
This is actually more of a consideration than an answer, but hopefully it may be of some help. We have $n$ discrete uniform i.i.d. random variables $X_k$, ranging from $1$ to $m$, and we want to find the distribution of the sum of their inverses. For the purpose of finding an approximation for high values of $m$ and $n$, we should go through the Characteristic Function of each variable $1/X_k$, exploiting the fact that the CF of the sum will be the $n$-th power of the single CF. After getting the global CF we can invert it to get the sought pdf. The single CF is given by $$ \eqalign{ & \varphi _{1/X} (t) = E\left[ {e^{\,i\,t/X} } \right] = {1 \over m}\sum\limits_{k = 1}^m {e^{\,i\,t/k} } = \cr & = e^{\,i\,t} {1 \over m}\sum\limits_{k = 1}^m {e^{\, - i\,t\left( {k - 1} \right)/k} } \cr} $$ and the problem reduces to finding a suitable approximation for the sum, or rather for its logarithm. It might help to approximate each variable with a continuous uniform one ranging from $1/2$ to $m+1/2$, with probability density $1/m$. Geometrically that means approximating a discrete histogram of $m$ bars of height $1/m$ with $m$ vertical bands (rectangles) of width $1$ and height $1/m$, centered around each integral point. Then the Characteristic Function of each continuous variable $1/X_k$ would be $$ \eqalign{ & \varphi _{1/X} (t) = E\left[ {e^{\,i\,t/X} } \right] = {1 \over m}\int_{\;x = 1/2}^{\,m+1/2} {e^{\,i\,t/x} dx} = \cr & = {1 \over m}\left( {\int_{\;x = 1/2}^{\,m+1/2} {\cos \left( {{t \over x}} \right)dx} + i\int_{\;x = 1/2}^{\,m+1/2} {\sin \left( {{t \over x}} \right)dx} } \right) \cr} $$
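Before any analytic approximation, the tail probability is easy to estimate by simulation; a minimal Monte Carlo sketch (the values of $m$, $n$, $C$ and the trial count are arbitrary choices):

```python
import random

def estimate(m, n, C, trials=20000, seed=0):
    """Monte Carlo estimate of P(sum_i 1/x_i >= C) for x_i uniform on {1..m}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = sum(1 / rng.randint(1, m) for _ in range(n))
        if s >= C:
            hits += 1
    return hits / trials

p2 = estimate(m=6, n=10, C=2)
p4 = estimate(m=6, n=10, C=4)
# Same seed => same samples, so the tail estimate is monotone in C
assert 0 <= p4 <= p2 <= 1
```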
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
proof that e is the sum of the reciprocals of factorials Ok so we know that $e= \lim_{n\to\infty}(1+1/n)^n$, and we know by the binomial theorem that $\sum_{k=0}^n {n \choose k} (1/n)^k = (1+ \frac{1}{n})^n$. To simplify further to $\sum_{k=0}^\infty \frac{1}{k!} = e$ we must evaluate the following limit: $\lim_{n\to\infty}{n \choose k} \frac{1}{n^k}=\lim_{n\to\infty}\frac{(n)(n-1)(n-2)\cdots(n-k+1)}{k!\, n^k}$, and this is supposed to equal $\frac{1}{k!}$. This is not clear to me algebraically, so may someone please clear this up step by step so I can understand?
We have $$e=\lim_{n\to\infty}\sum_{k=0}^n\frac{\binom{n}{k}}{n^k}=\lim_{n\to\infty}\sum_{k=0}^\infty\frac{\binom{n}{k}}{n^k}=\sum_{k=0}^\infty\frac{1}{k!},$$ where the second equality holds because $\binom{n}{k}=0$ for $k>n$, and the last step interchanges the limit with the infinite sum (justified, e.g., by Tannery's theorem / dominated convergence, since $0\le\binom{n}{k}/n^k\le 1/k!$ and $\sum_k 1/k!<\infty$), using that with $k$ fixed $$\lim_{n\to\infty}\frac{\binom{n}{k}}{n^k}=\frac{1}{k!}\lim_{n\to\infty}\prod_{j=1}^{k-1}\left(1-\frac{j}{n}\right)=\frac{1^{k-1}}{k!}=\frac{1}{k!}.$$
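A quick numeric sketch of how fast the partial sums $\sum_{k=0}^{K} 1/k!$ approach $e$:

```python
import math

# Accumulate partial sums of 1/k! ; `term` always holds 1/k!
s, term = 0.0, 1.0
for k in range(20):
    s += term
    term /= (k + 1)

# 20 terms already agree with e to near machine precision
assert abs(s - math.e) < 1e-12
```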
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $(a,b)$ is a multiple of $(c,d)$, show that $(a,c)$ is a multiple of $(b,d)$ I need help with this problem: If $(a,b)$ is a multiple of $(c,d)$ with $abcd\neq0$, show that $(a,c)$ is a multiple of $(b,d)$. This is surprisingly important: call it a challenge question. You could use numbers first to see how $a,b,c$ and $d$ are related. The question will lead to: If $A = \left[\begin{array}{l}a&b\\c&d\end{array}\right]$ has dependent rows then it has dependent columns. I tried to do it this way: $(a,b)=x(c,d)\Rightarrow(a,b)=(xc,xd)$ $(a,c)=(xc,c)= c(x,1)$ and $(b,d)=(xd,d)= d(x,1)$ I don't know what to do after that, what should I do next?
First note the following: $(a,b) = (a,(\frac{b}{a}) a)$. [As $abcd \not =0$ we can assume that $\frac{b}{a}$ exists] So for some scalar $x$ we note: $(c,d) = x(a,b) = (xa,xb)$ $=(xa,x(\frac{b}{a}) a)$. Thus $c$ can be written $c=xa$ and $d$ can be written $d=x(\frac{b}{a}) a$. Thus $(b,d) = (\frac{b}{a} a, x(\frac{b}{a})a) = \frac{b}{a}(a,xa) = \frac{b}{a}(a,c)$. So $(b,d) = y(a,c)$ where $y = \frac{b}{a}$.
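A quick numeric check of the conclusion with concrete (arbitrary) values, using exact rational arithmetic:

```python
from fractions import Fraction

# Pick c, d and a scalar x, and set (a, b) = x * (c, d)  (so (a,b) ∥ (c,d)).
c, d, x = Fraction(2), Fraction(3), Fraction(5)
a, b = x * c, x * d                      # (a, b) = (10, 15)

# Claim: (b, d) is a multiple of (a, c), with multiplier y = b / a.
y = b / a
assert (b, d) == (y * a, y * c)
```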
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Periodic solutions of a planar ODE Linearizing the equation of the two-body problem at a circular solution, I came across the following planar second-order system of differential equations $$ \begin{cases} 2\frac{1}{\omega^2} \ddot{u}=3u\cos2\omega t+3v\sin2\omega t+u\\ 2\frac{1}{\omega^2}\ddot{v}=3u\sin2\omega t-3v\cos2\omega t+v \end{cases} $$ It is easy to see that $(u,v)=(-\sin\omega t,\cos\omega t)$ is a $\frac{2\pi}{\omega}$-periodic solution. I'm wondering if there are other independent ones. Do you have any suggestion to find them?
If we define $\eta = u + i v$, then the set of equations can be rewritten as $$ 0 = - \frac{2}{\omega^2} \ddot{\eta} + 3 e^{2 i \omega t} \bar{\eta} + \eta. $$ If $\eta$ is periodic with period $2 \pi/\omega$, then it will be expressible as a power series of the form $$ \eta = \sum_{m = - \infty}^\infty a_m e^{i m \omega t}. $$ Putting this ansatz into the ODE and manipulating the series appropriately, we find that if $\eta$ is of this form and satisfies the above ODE, then we must have $$ (1 + 2 m^2) a_m + 3 a^*_{2-m} = 0 $$ for all $m$. This "recursion" relation relates the coefficients $a_m$ in pairs; it relates $a_1$ to itself, $a_0$ to $a_2$, $a_{-1}$ to $a_3$, and so forth. Note that all of these pairs of coefficients are determined independently: the value of $\{a_0, a_2\}$ are unaffected by the values of $\{a_{-1}, a_3\}$ or any other pair. Moreover, if we set $m \to 2-m$ in the above recursion relation and conjugate it, we obtain $$ (1 + 2(2-m)^2) a^*_{2-m} + 3 a_m = 0, $$ and combining the above two equations yields $$ \left[ (1 + 2m^2) (1 + 2(2-m)^2) - 9 \right] a_m = 0 $$ for all $m$. Assuming we want $a_m \neq 0$, this implies the quantity in brackets must vanish. But the only roots of this polynomial are $m = 0, 1, 2$ (with 1 a double root). Thus, the only $a_m$ coefficients that can be non-zero are $a_0$, $a_1$, and $a_2$. Taking these cases separately: * *For $a_1 \neq 0$, the recursion relation is $3 a_1 + 3 a^*_1 = 0$, which implies that $a_1$ is pure imaginary. Thus, the solution is $\eta = A i e^{i \omega t}$ for $A \in \mathbb{R}$, which (taking the real and imaginary parts) yields the solution you found: $u(t) = - A \sin(\omega t)$, $v(t) = A \cos (\omega t)$. *For the pair $\{a_0, a_2\}$, we have $$a_0 = - 3 a_2^*.$$ Thus, we have a general solution of the form $$ \eta(t) = a_0 - \frac{1}{3} a_0^* e^{2 i \omega t} $$ for an arbitrary complex number $a_0$. 
Expressing this in terms of two real coefficients $a_0 = A + iB$, the solution then becomes $$ u(t) = A\left( 1 - \frac{1}{3} \cos (2 \omega t)\right) - \frac{B}{3} \sin (2 \omega t), \\ v(t) = B\left( 1 + \frac{1}{3} \cos (2 \omega t) \right) - \frac{A}{3} \sin (2 \omega t). \\ $$ The solutions found by Robert Israel correspond to $B = -3, A = 0$ and $A = -3, B = 0$ respectively. These three independent solutions, and linear combinations thereof, are the only possible solutions with a period of $2\pi/\omega$.
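As a sanity check on this two-parameter family, one can plug $u(t)$ and $v(t)$ back into the original system numerically; a minimal sketch (the values of $\omega$, $A$, $B$ are arbitrary, and second derivatives are approximated by central differences):

```python
import math

w = 1.3            # omega (arbitrary nonzero value)
A, B = 0.7, -0.4   # free coefficients of the two-parameter family

def u(t):
    return A * (1 - math.cos(2*w*t)/3) - (B/3) * math.sin(2*w*t)

def v(t):
    return B * (1 + math.cos(2*w*t)/3) - (A/3) * math.sin(2*w*t)

def dd(f, t, h=1e-5):
    """Central-difference approximation of the second derivative."""
    return (f(t + h) - 2*f(t) + f(t - h)) / h**2

for t in [0.0, 0.3, 1.1, 2.5]:
    r1 = 2/w**2 * dd(u, t) - (3*u(t)*math.cos(2*w*t) + 3*v(t)*math.sin(2*w*t) + u(t))
    r2 = 2/w**2 * dd(v, t) - (3*u(t)*math.sin(2*w*t) - 3*v(t)*math.cos(2*w*t) + v(t))
    assert abs(r1) < 1e-3 and abs(r2) < 1e-3   # residuals vanish up to FD error
```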
{ "language": "en", "url": "https://math.stackexchange.com/questions/3089957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solve the recurrence relation $a_n=6a_{n-1}-9a_{n-2}-8$ for $n\geq2$, $a_0=0$, $a_1=1$ My task: $a_n=6a_{n-1}-9a_{n-2}-8$ for $n\geq2$, $a_0=0$, $a_1=1$ My solution $x^{2}-6x+9$ $\Delta=0$ $x_0=3 $ So I am gonna use following formula: $a_n=ar^{n}+bnr^{n}$ $a_n=a*(3)^{n}+bn*3^{n}$ $-8$ is the problem, so I am looking for $c$ that $b_n:=a_n+c\implies b_n=6b_{n-1}-9b_{n-2}$ $$b_n=6(b_{n-1}-c)-9(b_{n-2}-c)-8+c=6b_{n-1}-9b_{n-2}-8+6c$$ I am setting $c=\frac{4}{3}$ so $$b_n=6b_{n-1}-9b_{n-2}\implies\exists a,\,b:\,b_n=a*3^{n}+bn*3^{n}.$$From $b_0=\frac{4}{3},\,b_1=\frac{7}{3}$, after finding $a,\,b$. Then $a_n=b_n-\frac{1}{2}$. $$a=\frac{4}{3}$$ $$b=-\frac{5}{9}$$ $b_2=22$ $a_2=22-\frac{4}{3}=\frac{62}{3}$ Actual $a_2=-2$ So $a_2$ from $b_n$ method is not equal to actual $a_n$. Can I use this $b_n$ method if delta equals 0? or should $c=-\frac{4}{3}$?
I think that your choice of $c$ is wrong. From $$b_n=6(b_{n-1}-c)-9(b_{n-2}-c)-8+c=6b_{n-1}-9b_{n-2}-8+4c$$ this gives you $c=2$ and $b_n=a_n+2$, so $$b_0=2, \quad b_1=3.$$ After solving $$b_n=a\cdot3^{n}+bn\cdot3^{n}$$ you get $a=2$, $b=-1$.
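A quick sketch verifying the resulting closed form $a_n = 2\cdot 3^n - n\cdot 3^n - 2$ against the original recurrence:

```python
# Closed form from the answer: b_n = 2*3^n - n*3^n, and a_n = b_n - 2.
def a_closed(n):
    return 2 * 3**n - n * 3**n - 2

# Generate a_n directly from the recurrence a_n = 6 a_{n-1} - 9 a_{n-2} - 8.
a = [0, 1]                         # a_0 = 0, a_1 = 1
for n in range(2, 20):
    a.append(6*a[n-1] - 9*a[n-2] - 8)

assert all(a[n] == a_closed(n) for n in range(20))
```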
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Algorithm for optimal assignment of tasks to a team of people Is there an algorithm to get a team of people to complete a certain number of tasks the fastest, where the time taken to complete a certain task is different for different people? Each task must be done fully by one person (eg can't have person A do the first half of task 1, then person B do the second half of task 1), and all members of the team should be working simultaneously and continuously. To give an example, there are tasks 1-5 and people A-D. * *Person A takes a1 minutes to complete task 1, a2 for task 2 etc. *Let tA be the total time person A spends working on tasks, ie if A is to complete tasks 3 and 4, tA = a3+a4. *Each task must be completed by one person, no task requires any other tasks to be previously completed. *All 4 people can work simultaneously to complete all the tasks, total time is given by T = max{tA,tB,tC,tD}, which we want to minimise. Obviously the algorithm would ideally be generalisable to m tasks and n people. Also if such an algorithm doesn't currently exist, how would you suggest going about constructing one? Currently I can only think of assigning the task relating to the biggest difference between the fastest and second fastest times to complete it, and I don't really know where to go from there. I get the feeling like this is similar to some bin-packing algorithm, however each bit of 'rubbish' is differently sized depending on which bin it is put in. Also I think this is similar to what I found here Assignment problem with divisible tasks and hours, but here people can split tasks, and the question wasn't answered fully. I hope I've clarified enough of the problem, can answer any questions if needed.
This problem is $NP$-complete, even in the case of two people and where both people take the same amount of time for each task (i.e.: $n = 2$ and $a_i = b_i$ for all $i=1,\ldots,m$) since this is essentially the PARTITION problem. As such, it is unlikely that you will find a polynomial-time algorithm that solves the problem exactly. However, it is possible that suitable approximations and/or super-polynomial-time solutions might exist for your purposes.
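Although no efficient exact algorithm is expected, tiny instances can be solved by exhaustive search; a minimal sketch (hypothetical task times; the enumeration is exponential in the number of tasks):

```python
from itertools import product

def min_makespan(times):
    """times[p][t] = time person p needs for task t.
    Brute force: try every assignment of tasks to people and
    return the smallest possible makespan max_p(load_p)."""
    n_people, n_tasks = len(times), len(times[0])
    best = float('inf')
    for assign in product(range(n_people), repeat=n_tasks):
        loads = [0] * n_people
        for task, person in enumerate(assign):
            loads[person] += times[person][task]
        best = min(best, max(loads))
    return best

# Two people, three tasks (made-up times).
times = [[1, 4, 2],    # person A
         [3, 1, 5]]    # person B
assert min_makespan(times) == 3   # A does tasks 0 and 2, B does task 1
```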
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Will the following sequence ever repeat? I'm unsure if the notation used by the author is common, so I will define some terms before stating the problem. {$0, 1$}$^\infty$ is the set of all functions $f:\mathbb{N} \rightarrow ${$0, 1$}. {$0, 1$}$^n$ is the set of all functions $f:$ {$1, ..., n$}$ \rightarrow ${$0, 1$}. {$0, 1$}$^*$ = $\cup_{i=1}^\infty${$0,1$}$^i$. Simply put {$0, 1$}$^\infty$ is the set of all infinite binary sequences, and {$0, 1$}$^n$ is the set of all binary sequences of length $n$, while {$0, 1$}$^*$ is the set of all finite binary sequences. For any binary strings $X, Z$ we write $X \preceq Z$ if $X$ is a prefix of $Z$, that is, if $\exists Y$ such that $XY = Z$. The problem: Define an (infinite) binary sequence $S \in$ {$0, 1$}$^ \infty$ to be prefix-repetitive if there are infinitely many strings $w \in ${$0, 1$}$^*$ such that $ww \preceq S$ Prove: If the bits of the sequence $S \in ${$0, 1$}$^\infty$ are chosen by independent tosses of a fair coin, then $Prob[S$ is prefix-repetitive$] =0$ My attempt. Firstly, if $S$ is prefix-repetitive then $ \forall N \in\mathbb{N}, \exists n>N$ such that $aa \preceq S$ for some $a \in $ {$0, 1$}$^n$. We shall call this statement Lemma $1$. Proof: choose $N \in \mathbb{N}$. Let $B$ be the set of all strings $b \in ${$0, 1$}$^*$ such that $bb \preceq S$. By the definition of prefix-repetitive $|B| = \infty$. Suppose there does not exist a string $a \in B$ such that $a \in $ {$0, 1$}$^n$ for some $n>N$. Then $B \subseteq \cup_{i=1} ^N$ {$0,1$}$^i$. However $|\cup_{i=1} ^N$ {$0,1$}$^i|$ is finite. A contradiction. Secondly, let $M\in \mathbb{N}$, then $Prob[\exists a \in ${$0, 1$}$^{m}$ such that $aa \preceq S$ for some $m>M] =$ $\sum_{i=M+1} ^\infty Prob[\exists a \in ${$0, 1$}$^{i}$ such that $aa \preceq S] = $ $\sum_{i=M+1} ^\infty \frac{1}{2^i} = \frac {1}{2^{M}}$ We shall call this result lemma $2$. Lastly, we prove the main statement. Let $N\in \mathbb{N}$. 
By successive use of lemma $1$, $\exists n_1, n_2, \ldots$ such that $N<n_1<n_2<\cdots$ and $a_ka_k \preceq S$ for some $a_k \in \{0,1\}^{n_k}$. By lemma $2$, $Prob[a_k$ exists$] = \frac{1}{2^{n_k}}$, and $Prob[a_k$ exists $\forall k] = \frac{1}{2^{n_1}} \cdot \frac{1}{2^{n_2}} \cdots = 0$. I'm not familiar with this area of mathematics and was wondering if my proof was valid. If not, how could I prove the main statement? I would really appreciate any help/thoughts!
Let $R_n$ be the event that the second block of $n$ bits matches the first block of $n$ bits, that is, the event that $ww\prec S$ for some $w\in\{0,1\}^n$, and let $N=N(S)=\sum_{n\ge1}\mathbb 1_{R_n}(S)$ be the number of $n$ such that $S\in R_n$, that is, the number of $n$ such that $ww\prec S$ for $w$ of length $n$. It might be finite or it might be infinite. If $N(S)<\infty$ then $S$ is not prefix repetitive, but if $N(S)=\infty$ then $S$ is prefix repetitive. Now note that $P(R_n)=2^{-n}$, so $EN = EN(S) = \sum_{n\ge1} 2^{-n} < \infty$. Hence $P(N(S)<\infty) = 1$. (By the Borel-Cantelli lemma, or the fact that random variables with finite expectations are finite with probability 1.) So with probability 1, $S$ is not prefix repetitive. Where does this come up? What are you reading?
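A quick Monte Carlo sketch of the key estimate: the empirical mean of $N(S)$ over random finite bit strings should sit near $E[N]=\sum_n 2^{-n}\approx 1$ (the truncation to 80 bits, the seed, and the trial count are arbitrary choices):

```python
import random

def repeat_count(bits):
    """Number of n with bits[:n] == bits[n:2n] -- the N(S) of the answer,
    truncated to n <= len(bits)//2."""
    half = len(bits) // 2
    return sum(bits[:n] == bits[n:2*n] for n in range(1, half + 1))

rng = random.Random(12345)
trials = 5000
total = sum(repeat_count([rng.randint(0, 1) for _ in range(80)])
            for _ in range(trials))
avg = total / trials

# E[N] = sum over n of 2^{-n} ~ 1; allow a loose statistical band around it
assert 0.8 < avg < 1.2
```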
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show local convexity Given a metric space $(X,d)$ where $X=C([0,1])$ and $d$ is defined as: $$d(f,g)=\inf \{\epsilon : \mu \{x \in [0,1] : |f(x)-g(x)|>\epsilon\}<\epsilon\}$$ ($\mu$ is a Lebesgue measure). I want to show this space is not locally convex. If it is not locally convex then $\exists u=\{g|d(f,g)<\epsilon \}$ in which $g_1, g_2\in u$ such that $d(f,rg_1+(1-r)g_2)>\epsilon,\ r\in [0,1]$. any hints on how to proceed?
It is easier to consider the space $Y$ of step functions, i.e., linear combinations of indicator functions of intervals, with the same metric (which, by the way, describes convergence in measure). I claim that the only convex neighbourhood $U$ of $0$ is the full space $Y$: Indeed $U$ contains some ball $B_\varepsilon=\{f\in Y: \mu(\{t\in[0,1]: |f(t)|>\varepsilon\})<\varepsilon\}$. Given any $g\in Y$ you can write $$ g=\frac{1}{n} \left( ngI_{[0,1/n]} + \sum_{k=1}^{n-1} ngI_{(k/n,(k+1)/n]}\right). $$ If $n$ is bigger than $1/\varepsilon$, each term is in $B_\varepsilon \subseteq U$, and since $U$ is convex you get $g\in U$. If you insist on having the space of continuous functions, you can do the same with a continuous partition of unity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $n^{23}+6n^{13}+4n^{3}$ is a multiple of $11$ I was checking the following Fermat's little theorem exercise: Show that $n^{23}+6n^{13}+4n^{3}$ is a multiple of $11$ I've started by stating each congruence individually, supposing that $n$ is coprime to $11$; for the first term I have: $$n^{10} \equiv 1 \mod {11}$$ $$n^{23} \equiv n^{3} \mod {11}$$ I've stated the second one this way $$6n^{10} \equiv 1 \mod {11}$$ But honestly I don't know how to go ahead as long as I don't have a number to evaluate with $11$. Also I'm considering that I have: $$n + 6n + 4n = 11n$$ This may have some relation with the proof but could be affected by the powers of each term. Any help will be really appreciated.
Fermat's little theorem only applies when prime $p$ does not divide $a$ in $a^{p-1} \equiv 1 \pmod p$. In your expression, if $n$ is a multiple of $11$ the theorem doesn't apply but the expression is trivially a multiple of $11$. If $n$ is not a multiple of $11$ the theorem applies. Now note: $n^{10} \equiv 1 \pmod{11}$ $n^{20} \equiv 1^2 = 1 \pmod {11}$ So modulo $11$, your expression reduces to $n^3 + 6n^3 + 4n^3 =11n^3$, which is clearly a multiple of $11$.
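A brute-force sketch confirming the claim: the value of the expression mod 11 depends only on $n \bmod 11$, so checking eleven residues covers every integer.

```python
# n^23 + 6 n^13 + 4 n^3 mod 11 depends only on n mod 11,
# so n = 0..10 exhausts all cases (n = 0 covers multiples of 11).
assert all((n**23 + 6*n**13 + 4*n**3) % 11 == 0 for n in range(11))
```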
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Expected value of $\sin(x)$ I have a uniformly distributed random variable $ \omega $ in the range $[-\frac\pi2, \frac\pi2]$. Then I have the function $ s = \sin(\omega) $ and I want to calculate the expected value of this function $ s $. So far I know that the density of the uniformly distributed random variable is $$ \frac1{\frac\pi2 - (-\frac\pi2)} = \frac 1\pi $$ Then I don't know if the correct way of calculating the expected value is $$ E = \int_{-\frac\pi2}^{\frac\pi2} \frac1\pi \sin(x) dx $$ or if I'm completely off.
The expected value of any random variable $s(\omega)$, where $\omega$ has probability density function $f(\omega)$, is given by: $$ E(s(\omega)) = \int_{-\infty}^{\infty} s(\omega)f(\omega)d\omega$$ since $\omega$ is distributed uniformly on the interval $[-\pi/2,\pi/2]$ we have $$f(\omega) = \frac{1}{(\pi/2-(-\pi/2))} = \frac{1}{\pi}$$ Now, $$E(s(\omega))=\int_{-\infty}^{\infty} \sin(\omega)f(\omega)d\omega$$ or, $$E(s(\omega))=\int_{-\infty}^{\infty} \sin(\omega)\frac{1}{\pi}d\omega$$ The limits for $\omega$ are $[-\pi/2,\pi/2]$, so the integral is $$E(s(\omega))=\int_{-\pi/2}^{\pi/2} \sin(\omega)\frac{1}{\pi}d\omega$$ The value of this integral is $0$, since the integrand is odd and the interval is symmetric about $0$.
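A quick numeric sketch confirming the result (midpoint Riemann sum; the grid size is an arbitrary choice):

```python
import math

# Midpoint Riemann sum of (1/pi) * sin(w) over [-pi/2, pi/2].
N = 10000
a, b = -math.pi/2, math.pi/2
h = (b - a) / N
integral = sum(math.sin(a + (k + 0.5)*h) for k in range(N)) * h / math.pi

# The odd integrand cancels over the symmetric interval
assert abs(integral) < 1e-6
```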
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Strictly positive inner product for a pair of non-zero, positive operators. Let $ A,B $ be non-zero positive operators on an infinite-dimensional separable Hilbert space $(H , \langle \cdot, \cdot \rangle)$. I am required to prove that there exists $u' \in H$ such that \begin{alignat*}{2} \langle Au' , u'\rangle >0 \ \ \text{and} \ \ \langle Bu', u' \rangle >0. \end{alignat*} I am quite stuck with this problem. Here are a few things I have tried. It is obvious that there exist $v,w \in H$ such that \begin{alignat*}{2} \langle Av, v \rangle > 0 \ \ \text{and} \ \ \langle Bw, w\rangle>0. \end{alignat*} And I have tried to calculate \begin{alignat*}{2} \langle A(v + w), v +w \rangle \end{alignat*} for which I would now have to show, for instance, $ \text{Re} \langle Av, w \rangle \geq 0 $. Alternatively I could try to directly find a positive operator $E$ such that \begin{alignat*}{2} \langle Eu \ , \ u\rangle \leq \langle Au, u \rangle \ \ \text{and} \ \ \langle Eu , u \rangle \leq \langle Bu, u \rangle. \end{alignat*} I have also tried to apply orthogonal projections, polarisation, etc., but to no success. Hopefully it's some trivial detail which I have missed. The spectral theorem for bounded self-adjoint operators is not at my disposal. Could anyone provide me with some hint? Thanks!
Let $v,w\in H$ be as you have defined them. For $t\in[0,1]$ put $x_t=tv+(1-t)w$, and define $f,g:[0,1]\to [0,\infty)$ by $$f(t)=\langle Ax_t,x_t\rangle,\quad g(t)=\langle Bx_t,x_t\rangle.$$ Note that $f$ and $g$ are non-zero polynomials (of degree at most $2$): indeed $f(1)=\langle Av,v\rangle>0$ and $g(0)=\langle Bw,w\rangle>0$. Argue that there is some point $t_0\in[0,1]$ such that both $f(t_0)>0$ and $g(t_0)>0$, which proves the result.
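A finite-dimensional toy illustration of the argument (a hypothetical $2\times2$ example with $A=\operatorname{diag}(1,0)$, $B=\operatorname{diag}(0,1)$, $v=(1,0)$, $w=(0,1)$): the quadratic forms along the segment $x_t$ are both positive at an interior point.

```python
def qA(x):  # <Ax, x> with A = diag(1, 0): equals x1^2
    return x[0]**2

def qB(x):  # <Bx, x> with B = diag(0, 1): equals x2^2
    return x[1]**2

# x_t = t*v + (1-t)*w = (t, 1-t)
f = lambda t: qA((t, 1 - t))   # = t^2,     positive at t = 1
g = lambda t: qB((t, 1 - t))   # = (1-t)^2, positive at t = 0

t0 = 0.5
assert f(t0) > 0 and g(t0) > 0   # a single u' = x_{1/2} works for both forms
```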
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
how to prove a spanning set of polynomial I am struggling so much with understanding the concepts of subspace and span. The question is: in $P_2$, given $W=\{(x+1)(ax+b)\mid a,b \in \mathbb{R}\}$, show that $\{x^2+x,\ x^2+2x+1\}$ is a spanning set of $W$. I don't know if I got this concept right, but I've tried to do things by letting $p(x)=x^2+x$, $q(x)=x^2+2x+1$, then multiplying them by coefficients $\alpha$ and $\beta$ respectively and adding, to fit the form of $W$. But then I got an answer saying $a=\alpha + \beta$, $a+b = \alpha+ 2\beta$, $b=\beta$, which means... no solution? I am guessing? So this does not span $W$. Am I right?
Hint: First, $W$ is a $2$-dimensional vector space (easy to see). Now, $\{x^2+x,\ x^2+2x+1\}$ is linearly independent (easy to see). Let $a=1,b=0$: we get $x^2+x$. Now let $a=1,b=1$: we get $x^2+2x+1$. So both polynomials lie in $W$, and two linearly independent vectors in a $2$-dimensional space span it. Thus $W=\operatorname{span} \{x^2+x,\ x^2+2x+1\}$.
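One can also exhibit the combination explicitly: $(x+1)(ax+b) = (a-b)(x^2+x) + b(x^2+2x+1)$, which shows the spanning directly. A quick coefficient check of this identity (a pure-Python sketch, polynomials as coefficient tuples):

```python
def coeffs_product(a, b):
    """Coefficients (c0, c1, c2) of (x+1)(ax+b) = a x^2 + (a+b) x + b."""
    return (b, a + b, a)

def coeffs_combo(a, b):
    """(a-b)*(x^2+x) + b*(x^2+2x+1), collected by degree."""
    alpha, beta = a - b, b
    return (beta, alpha + 2*beta, alpha + beta)

for a in range(-3, 4):
    for b in range(-3, 4):
        assert coeffs_product(a, b) == coeffs_combo(a, b)
```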
{ "language": "en", "url": "https://math.stackexchange.com/questions/3090901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Unitary Central Character by Schur's Lemma Consider an irreducible smooth representation $\pi$ of the group $G=GL_n(\mathbb{Q}_p)$ with center $Z$. Does there exist a unitary central character for $\pi$? More precisely, is there a (quasi-)character $\omega: G \to \mathbb{C}^{\times}$ such that $\pi \otimes \omega$ when restricted to the center $Z$ is a unitary character for $Z$? I find this result casually stated in many references, where they say it follows from Schur's lemma. But I am unable to see it directly from Schur's lemma.
There is a central character $\omega:Z \rightarrow \mathbb{C}^{\times}$ by Schur, but it need not be unitary. E.g., consider $|\det|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Is there arbitrage? An economist writes a 1-period expectation model for valuing options. The model assumes that the stock starts at S and moves to $2S$ or $\frac{1}{2}S$ in 1 year's time with equal probability. Strike is equal to $K$ Assume rates are zero. I found that the value of the call option using the economist's model is $$S-\frac{1}{2}K$$ and using the Binomial 1-period pricing model it is $$\frac{2}{3}S-\frac{1}{3}K$$ So we have 2 models * *1-period binomial model *1-period expectation model Is there arbitrage between the two models? If so, how can we capture it?
First of all, economists think in terms of risk premia. There's a concept in the economic branch of asset pricing, called the "stochastic discount factor", which differentiates economists from mathematicians. That being said, the question asks you to find the value of the option today. As an economist, you will attach probability $\frac{1}{2}$ to each outcome; hence, Call Price Today = $\frac{1}{2}$ (Payoff from Call Price Up) + $\frac{1}{2}$ (Payoff from Call Price Down) Bear in mind that the tree you construct for the stock is equivalent to the tree constructed for the calls. Then, denote the payoff in the usual way, i.e. $\max(S(1)-K, 0)$, which leads to the following: Payoff from Call Price Up = $\max (2S-K, 0)$ Payoff from Call Price Down = $\max (0.5S-K, 0)$ Now, make the assumption that the strike price, $K$, is such that $$S(down) < K < S(up)$$ Therefore, in the down state the call is OTM, with a payoff of zero, while in the up state it is ITM and worth $2S - K$. Therefore, the economist will price the option as: $p(Call) = \frac{1}{2} * (2S - K) + \frac{1}{2} * (0)$ For a mathematician using risk-neutral pricing, things are slightly different. There's no need to know the actual probability, as it is possible to construct synthetic ones. However, note that economics and mathematics are highly interlaced here: risk-neutral pricing rests on important economic intuition too, centred on risk aversion. Hence, in the binomial tree, you simply calculate the risk-neutral probability as you did and find that: $p(Call) = \frac{1}{3} (2S-K) + \frac{2}{3} (0) $ Here comes the tricky part of the question. I think what is meant is that the 1-period expectation model presents a mis-pricing, which makes the price of the call overvalued. If you were an arbitrageur, you would know that because you calculated what the price of the call is using risk-neutrality. Hence, you would sell the overvalued call and buy the underlying.
Selling the call while holding the underlying in this way is known as a "covered call". I hope this helps!
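A minimal numeric sketch of the two valuations (the numbers $S=100$, $K=100$ are hypothetical choices of mine; zero rates as in the question):

```python
S, K = 100.0, 100.0          # hypothetical numbers; zero rates
up, down = 2.0 * S, 0.5 * S

# economist's model: real-world probabilities 1/2 each
econ = 0.5 * max(up - K, 0.0) + 0.5 * max(down - K, 0.0)

# risk-neutral model: q solves q*2S + (1-q)*S/2 = S, i.e. q = 1/3
q = (S - down) / (up - down)
rn = q * max(up - K, 0.0) + (1 - q) * max(down - K, 0.0)

print(econ, rn)   # 50.0 and ~33.33: the expectation model overprices the call
```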
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bound and limsup for cumulative sum of a random walk It is well-known from the law of iterated logarithm, that, if $X_k$ are symmetric Bernoulli random variables $\pm 1$, then $S_n= X_1 + X_2 + ... + X_n$ has this property: $$\limsup_{n \to \infty} \frac{S_n}{\sqrt{2 n \log \log n}} = 1 \qquad \text{a.s.}$$ giving an order of magnitude of $n^{1/2+\varepsilon}$ for the maximum of $|S_n|$. What bounds can be given about the cumulative sum: $$T_n = S_1 + S_2 + ... + S_n$$? I can imagine its order of magnitude will be around $n^{3/2+\varepsilon}$, but is this quantity approached infinitely often? What is the $\limsup$?
For any $\epsilon>0$, since $|T_i-T_{i-1}| \le i$, using Azuma's inequality \begin{align} \sum_n P\left( \left|T_n \right|\ge \epsilon n^{\alpha}\right) \le \sum_n 2 \exp\left( \frac{-\epsilon^2 n^{2\alpha}}{2 \sum_{i=1}^n i^2} \right) = \sum_n 2 \exp\left( \frac{-3 \epsilon^2 n^{2\alpha-1}}{ (n+1)(2n+1)} \right) \le \sum_n 2 e^{-3 \epsilon^2 C n^{2\alpha-3} } \end{align} So, the RHS is finite if and only if $\alpha>3/2$. If $\alpha>3/2$, then Borel-Cantelli implies that $\frac{T_n}{n^{\alpha}} \to 0$ almost surely. If $\alpha\le 3/2$, then Borel-Cantelli does not help here. On the other hand, the $T_n$'s are not independent, so I wouldn't know what to use instead of Azuma's inequality. This may suggest that the bound $\alpha\ge 3/2+\epsilon$ is 'kind of tight', or, well, at least not that trivial!
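As a sanity check on the $n^{3/2}$ scale, here is a Monte Carlo sketch I added; writing $T_n=\sum_{k=1}^n (n-k+1)X_k$ gives the exact variance $\operatorname{Var}(T_n)=\sum_{j=1}^n j^2=\frac{n(n+1)(2n+1)}{6}\sim \frac{n^3}{3}$, which the simulation should reproduce:

```python
import random

random.seed(12345)

def T(n):
    # T_n = S_1 + ... + S_n for a symmetric +-1 walk
    s = t = 0
    for _ in range(n):
        s += random.choice((-1, 1))
        t += s
    return t

n, trials = 400, 300
m2 = sum(T(n) ** 2 for _ in range(trials)) / trials
# Var(T_n) = n(n+1)(2n+1)/6, so m2 / n^3 should be close to 1/3
print(m2 / n ** 3)
```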
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Concerning the identity in sums of Binomial coefficients Consider the following identity $$\sum_{k=1}^{n}\binom{k}{2}=\sum_{k=0}^{n-1}\binom{k+1}{2}=\sum_{k=1}^{n}k(n-k)=\sum_{k=0}^{n-1}k(n-k)=\frac16(n+1)(n-1)n$$ As we can see, the partial sums of binomial coefficients are expressed in terms of a $3$-rd order polynomial $P_3(n)$, where $n$ is the variable of the upper bound of summation. We assume that the order of the resulting polynomial $P_3(n)$ depends on the subscript of the binomial coefficient being summed up (in our case the order of the polynomial is $3=2+1$, where $2$ is the subscript of the bin. coef.) The question: Does there exist a generalized method to represent the sum of binomial coefficients $\sum_{k}^{n}\binom{k}{s}$ in terms of certain polynomials $P_{s+1}(n)=\sum_{k}^{n} F_s(n,k)$ for every non-negative integer $s$? I.e. can we always find a function $F_s(n,k)$ such that $\sum_{k}^{n}\binom{k}{s}=\sum_{k}^{n}F_s(n,k)$? We assume that the order of the polynomial is $s+1$ by means of the example above. The sub-question: (In case of a positive answer to the first question.) If there exists a method to represent the sums of bin. coef. in terms of polynomials in $n$, how exactly do the summation limits of $\sum_{k}^{n}\binom{k}{s}$ affect the form of the polynomial $P_{s+1}(n)$?
I would say that you have a good answer already. But there are other possible answers which seem reasonable. Further restriction might force the favored solution above. In the case $k=3$ (which is the only one I will discuss in any detail) $$\sum_{s=1}^n\binom{s}3= \\ \sum_{s=1}^ns\binom{n-s}2=\sum_{s=1}^n(n-s)\binom{s}2=\frac14\sum_{s=1}^n{s(n-s)(n-2)}=\frac1{24}\sum_{s=1}^n{(n+1)(n-1)(n-2)}$$ It is easy to see how to generalize these to other $k.$ The first three belong to the infinite family $$\frac14\sum_{s=1}^n{s(n-s)(\alpha n-(2\alpha -2)s-2)} \tag{*}$$ Going back to the favored solution: $$\sum_{s=1}^n\binom{s}3=\sum_{s=1}^n\frac{s^3}{6}-\frac{s^2}{2}+\frac{s}{3}=\sum_{s=1}^n\frac{n^2s}2-n{s}^{2}+\frac{s^3}2-\frac{ns}{2}+\frac{s^2}2.$$ If you wanted the thing on the right to be $$\sum_{s=1}^nA{n^2s}+Bn{s}^{2}+C{s^3}+D{ns}+E{s^2}$$ then you do need to have $E=\frac12$. But the rest have two degrees of freedom: $$C=-2A-\frac43B \ \ \ \ \ \ \ D=A-\frac13B-\frac23$$ For further restriction we might want to have $D=-E=-\frac12$ so that $A{n^2s}+Bn{s}^{2}+C{s^3}+D{ns}+E{s^2}=0$ when $s=n$. In this case the summand factors (of course), giving the family $(*)$ above. If we want the right-hand side to be $$K\sum_{s=1}^ns(An+(1-A)s+B)(Cn+(1-C)s+D)$$ it is possible to work out the requirements. I came up with $6$ families of solutions. Of course the set of solutions is invariant under switching the two terms and/or substituting $s=n+1-s.$
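The chain of rewritings in the first display can be spot-checked numerically (a script added for illustration):

```python
from math import comb

# verify all five expressions for sum binom(s,3) agree, term-limit n = 1..29
for n in range(1, 30):
    lhs = sum(comb(s, 3) for s in range(1, n + 1))
    assert lhs == comb(n + 1, 4)
    assert lhs == sum(s * comb(n - s, 2) for s in range(1, n + 1))
    assert lhs == sum((n - s) * comb(s, 2) for s in range(1, n + 1))
    assert 4 * lhs == sum(s * (n - s) * (n - 2) for s in range(1, n + 1))
    assert 24 * lhs == sum((n + 1) * (n - 1) * (n - 2) for s in range(1, n + 1))
print("all five expressions agree for n < 30")
```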
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Formula for the sequence 0,3,8,15,24 ... Out of my own interest I've been practicing finding formulas for sequences, and I've been having trouble finding one for the nth term of this sequence. 0,3,8,15,24 ... Clearly, starting from $3$, you add $5,7,9,11,\dots$ to the previous number, but if anyone had some insight about how to express this in a formula, that would be much appreciated.
The $n$th term, indexing from $n=0$, is $$ a(n) = n(n+2) = (n+1)^2 - 1. $$
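A quick check of the formula against the sequence rebuilt from its differences (an added illustration, indexing from $n=0$):

```python
# rebuild the sequence from its successive differences 3, 5, 7, 9, 11
seq, term = [0], 0
for n in range(1, 6):
    term += 2 * n + 1
    seq.append(term)
print(seq)  # [0, 3, 8, 15, 24, 35]
assert seq == [n * (n + 2) for n in range(6)]
```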
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 10, "answer_id": 3 }
Matrix function that gives a scalar I have the following function: $$f(z) = z\vec{b}^T[I-zA]^{-1}\vec{1},$$ where $z$ is a complex scalar with $Re(z)<0$ (for simplicity, WLOG, we can take $z$ to be real), $b$ is a vector, $1$ is a vector of ones. $I$ is the identity matrix, and $A$ is some arbitrary matrix so that $I-zA$ is non-singular. I want to show that $Re(f)<0$ for some values of $z$. How can I solve the equation $f(z)=0$? Some help would be appreciated.
So, assuming that $z \in \mathbf{R}$ we have * *$z = 0 \implies f(z) = 0$ *$z \neq 0 \implies b^T(I - zA)^{-1}\cdot \mathbf{1} = 0$ Now, using the formula of inversion of the sum of matrices (see, e.g. here) we get $$ b^T\left(I + \frac{z}{1 - \text{tr}(A)z}A\right)1 = 0 \implies b^T\cdot \mathbf{1} + \frac{z}{1 - \text{tr}(A)z}b^TA\cdot \mathbf{1} = 0. $$ Expressing $z$ from the latter equation we get that $$ z = \frac{b^T1}{b^T \cdot \mathbf{1} \text{tr}(A) - b^TA \cdot \mathbf{1}}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Literature concerning Hawkes Processes I am looking for an introduction to Hawkes processes (self-exciting point processes). Are there books or lecture notes that explain the math behind them, or a chapter in a more general book? I haven't found anything good so far, so every recommendation is appreciated.
Hawkes Processes by Patrick J. Laub, Thomas Taimre, Philip K. Pollett is a short article that introduces Hawkes Processes. For the more mathematical theory, this can be found in Daley and Vere-Jones' An Introduction to the Theory of Point Processes: Volume I. Note however that they are focused on the general theory of point processes, and they show how this general theory applies to Hawkes processes only in examples.
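To complement the references, here is a minimal sketch (added by me, with illustrative parameter values) of Ogata's thinning algorithm for a Hawkes process with exponential kernel, $\lambda(t)=\mu+\sum_{t_i<t}\alpha e^{-\beta(t-t_i)}$; stationarity requires the branching ratio $\alpha/\beta<1$:

```python
import math, random

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Ogata's thinning: between events the intensity only decays, so the
    intensity just after the current time bounds it until the next event."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        lam_bar = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        t += rng.expovariate(lam_bar)          # candidate arrival
        if t > t_max:
            return events
        lam_t = mu + sum(alpha * math.exp(-beta * (t - ti)) for ti in events)
        if rng.random() * lam_bar <= lam_t:    # accept with prob lam_t/lam_bar
            events.append(t)

ev = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, t_max=200.0)
# branching ratio 1/2, so the long-run event rate is mu/(1 - 1/2) = 2
print(len(ev))
```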
{ "language": "en", "url": "https://math.stackexchange.com/questions/3091933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do we have an explicit expression of this function? Let $\Delta^{n-1} \equiv \{(p_1, \ldots, p_n) \, |\, p_i \geq 0, \sum_i p_i = 1\}$ be the $n-1$ dimensional simplex. Define $f : \Delta^{n-1} \times \Delta^{n-1} \rightarrow \Delta^{n-1}$ such that $$ f(p, q) = \Big(\frac{p_1 q_1}{\sum_i p_i q_i}, \ldots, \frac{p_n q_n}{\sum_i p_i q_i}\Big).$$ Suppose $h : \Delta^{n-1} \rightarrow \mathbb{R}$ be a function such that $h(f(p, q)) = \sum_i p_i q_i$. Do we have an explicit expression of $h$? Thank you!
No, we don't because there is no such function for $n\ge 2$. Suppose $$ \exists h:\Delta^{n-1}\to \Bbb R\ \ : \ \ h(f(p,q))=\sum_{i=1}^n p_iq_i. $$ For $p=q=(\frac{1}{2},\frac{1}{2},0,0,\ldots,0)$, it holds $f(p,q)=(\frac{1}{2},\frac{1}{2},0,0,\ldots,0)$ and $$h\left(\frac{1}{2},\frac{1}{2},0,0,\ldots,0\right)=h(f(p,q))=p\cdot q =\frac{1}{2}.$$ However, For $p'=(\frac{1}{3},\frac{2}{3},0,0,\ldots,0)$ and $q'=(\frac{2}{3},\frac{1}{3},0,0,\ldots,0),$ it holds $f(p',q')=(\frac{1}{2},\frac{1}{2},0,0,\ldots,0)$ and $$h\left(\frac{1}{2},\frac{1}{2},0,0,\ldots,0\right)=h(f(p',q'))=p'\cdot q' =\frac{4}{9}.$$ This leads to a contradiction.
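The counterexample is easy to verify numerically (an added sketch of the two pairs used above):

```python
def f(p, q):
    s = sum(pi * qi for pi, qi in zip(p, q))
    return tuple(pi * qi / s for pi, qi in zip(p, q))

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

p1 = q1 = (0.5, 0.5)
p2, q2 = (1/3, 2/3), (2/3, 1/3)
print(f(p1, q1), f(p2, q2))      # both pairs map to (0.5, 0.5) ...
print(dot(p1, q1), dot(p2, q2))  # ... yet the dot products 1/2 and 4/9 differ
```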
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Asymptotics of Hypergeometric $_2F_1(a;b;c;z)$ for large $|z| \to \infty$? I found this list of asymptotics of the Gauss Hypergeometric function $_2F_1(a;b;c;z)$ here on Wolfram's site for large $|z| \to \infty$ In particular there is a general formula for $|z| \to \infty$ $$ _2F_1(a;b;c;z) \approx \frac{\Gamma(b-a)\Gamma(c)}{\Gamma(b)\Gamma(c-a)} (-z)^{-a} +\frac{\Gamma(a-b)\Gamma(c)}{\Gamma(a)\Gamma(c-b)} (-z)^{-b} $$ How is this derived? Also, is this always true (meaning, for all $a$, $b$, $c$)? There are no sources on the site I linked. Is there also a way to determine the next-order terms?
Converting ${_2\hspace{-1px}F_1}$ to the Meijer G-function, we obtain $${_2\hspace{-1px}F_1}(a, b; c; z) = \frac {\Gamma(c)} {\Gamma(a) \Gamma(b)} G_{2, 2}^{1, 2} \left( -z \middle| {1 - a, 1 - b \atop 0, 1 - c} \right) = \\ \frac {\Gamma(c)} {2 \pi i \Gamma(a) \Gamma(b)} \int_{\mathcal L} \frac {\Gamma(y + a) \Gamma(y + b) \Gamma(-y)} {\Gamma(y + c)} (-z)^y dy.$$ A left loop is a valid contour for $|z| > 1$, and the sum of the residues at $y = -a - k$ and $y = -b - k$ gives a complete asymptotic expansion for large $|z|$. Excluding the logarithmic cases, $$\operatorname*{Res}_{y = -a - k} \frac {\Gamma(y + a) \Gamma(y + b) \Gamma(-y)} {\Gamma(y + c)} (-z)^y = \frac {(-1)^k \Gamma(b - a - k) \Gamma(a + k)} {\Gamma(c - a - k)} \frac {(-z)^{-a - k}} {k!}.$$
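The two-term formula is easy to test numerically. The sketch below (added for illustration; the parameters $a=0.3$, $b=1.5$, $c=2.5$, $z=-1000$ are arbitrary choices made so that Euler's integral representation $\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_0^1 t^{b-1}(1-t)^{c-b-1}(1-zt)^{-a}\,dt$ has a smooth integrand) compares the asymptotic formula against direct quadrature:

```python
import math

a, b, c, z = 0.3, 1.5, 2.5, -1000.0

def hyp2f1_euler(a, b, c, z, n=20000):
    """2F1(a,b;c;z) via Gauss's Euler integral (needs c > b > 0, z < 1),
    composite Simpson's rule on [0, 1]."""
    f = lambda t: t ** (b - 1) * (1 - t) ** (c - b - 1) * (1 - z * t) ** (-a)
    h = 1.0 / n
    s = f(0.0) + f(1.0)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(k * h)
    return math.gamma(c) / (math.gamma(b) * math.gamma(c - b)) * s * h / 3

def hyp2f1_asym(a, b, c, z):
    # the two leading terms quoted above (a - b not an integer)
    t1 = math.gamma(b - a) * math.gamma(c) / (math.gamma(b) * math.gamma(c - a)) * (-z) ** (-a)
    t2 = math.gamma(a - b) * math.gamma(c) / (math.gamma(a) * math.gamma(c - b)) * (-z) ** (-b)
    return t1 + t2

exact = hyp2f1_euler(a, b, c, z)
approx = hyp2f1_asym(a, b, c, z)
print(exact, approx)   # agreement to roughly O(1/|z|)
```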
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
prove $\int_0^\infty \frac{\log^2(x)}{x^2+1}\mathrm dx=\frac{\pi^3}{8}$ with real methods Context: I looked up "complex residue" on google images, and saw this integral. I, being unfamiliar with the use of contour integration, decided to try proving the result without complex analysis. Seeing as I was stuck, I decided to ask you for help. I am attempting to prove that $$J=\int_0^\infty\frac{\log^2(x)}{x^2+1}\mathrm dx=\frac{\pi^3}8$$ With real methods because I do not know complex analysis. I have started with the substitution $x=\tan u$: $$J=\int_0^{\pi/2}\log^2(\tan x)\mathrm dx$$ $$J=\int_0^{\pi/2}\log^2(\cos x)\mathrm dx-2\int_{0}^{\pi/2}\log(\cos x)\log(\sin x)\mathrm dx+\int_0^{\pi/2}\log^2(\sin x)\mathrm dx$$ But frankly, this is basically worse. Could I have some help? Thanks. Update: Wait I think I actually found a viable method $$F(\alpha)=\int_0^\infty \frac{x^{\alpha}}{x^2+1}\mathrm dx$$ As I have shown in other posts of mine, $$\int_0^\infty\frac{x^{2b-1}}{(1+x^2)^{a+b}}\mathrm dx=\frac12\mathrm{B}(a,b)=\frac{\Gamma(a)\Gamma(b)}{2\Gamma(a+b)}$$ so $$F(\alpha)=\frac12\Gamma\left(\frac{1+\alpha}2\right)\Gamma\left(\frac{1-\alpha}2\right)$$ And from $$\Gamma(1-s)\Gamma(s)=\frac\pi{\sin \pi s}$$ we see that $$F(\alpha)=\frac\pi{2\cos\frac{\pi \alpha}{2}}$$ So $$J=F''(0)=\frac{\pi^3}8$$ Okay while I have just found a proof, I would like to see which ways you did it.
This integral can be related quite straightforwardly to the Dirichlet Beta Function $\beta(s)$ via its integral representation $$\beta(s)~=~\frac1{\Gamma(s)}\int_0^\infty \frac{t^{s-1}}{e^{t}+e^{-t}}\mathrm dt$$ Therefore, enforce the substitution $x=e^{-t}$ within your integral $J$ to obtain \begin{align*} J=\int_0^\infty\frac{\log^2(x)}{1+x^2}\mathrm dx &= \int_{-\infty}^\infty\frac{t^2}{1+e^{-2t}}e^{-t}\mathrm dt\\ &=\int_0^\infty\frac{t^2}{1+e^{-2t}}e^{-t}\mathrm dt+\underbrace{\int_{-\infty}^0\frac{t^2}{1+e^{-2t}}e^{-t}\mathrm dt}_{t~\mapsto~ -t}\\ &=\int_0^\infty\frac{t^2}{e^t+e^{-t}}\mathrm dt+\int_0^\infty\frac{t^2}{1+e^{2t}}e^t\mathrm dt\\ &=2\int_0^\infty\frac{t^2}{e^t+e^{-t}}\mathrm dt\\ &=2\Gamma(3)\beta(3) \end{align*} $$\therefore~J~=~2\int_0^\infty\frac{t^2}{e^t+e^{-t}}\mathrm dt~=~\frac{\pi^3}8$$ The result follows from the known value $\beta(3)=\frac{\pi^3}{32}$, for which a proof can be found here, for instance. The validity of the integral representation can be shown by expanding the denominator as a geometric series and interchanging summation and integration, which is allowed in this case, followed by applying the definition of the Dirichlet Beta Function and the Gamma Function.
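A quick numerical confirmation of the final value (added; Simpson's rule on the truncated integral — the tail beyond $t=40$ is of order $t^2e^{-t}$ and hence negligible):

```python
import math

def integrand(t):
    # t^2 / (e^t + e^{-t})
    return t * t / (math.exp(t) + math.exp(-t))

T, n = 40.0, 4000      # truncation point and (even) number of Simpson panels
h = T / n
s = integrand(0.0) + integrand(T)
for k in range(1, n):
    s += (4 if k % 2 else 2) * integrand(k * h)
J = 2 * s * h / 3
print(J, math.pi ** 3 / 8)   # both ~ 3.8758
```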
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 3 }
How to find $x_n$ from $x_{n+1} = \frac{x_n}{1-a+a x_n}$? For $n\geq 0$ let $x_{n+1} = \frac{x_n}{1-a+a x_n}$, where $a\in (0,1)$. I would like to know if it is possible to express $x_n$ as a function of $a$ and $x_0$ for all $n\geq0$.
Note that $$\frac1{x_{n+1}}=\frac{1-a}{x_n}+a$$ which implies that $$\frac1{x_{n+1}}-1=(1-a)\left(\frac1{x_n}-1\right)$$ hence $$\frac1{x_n}-1=(1-a)^n\left(\frac1{x_0}-1\right)$$ from which an explicit formula for $x_n$ in terms of $n$, $a$ and $x_0$ follows. (Of course, there is some well known theory behind all this...)
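A quick numeric check of the closed form implied by the last display, i.e. $x_n = \left(1+(1-a)^n\left(\frac1{x_0}-1\right)\right)^{-1}$ (a script added for illustration, with arbitrary values $a=0.3$, $x_0=2$):

```python
def x_closed(n, a, x0):
    # from 1/x_n - 1 = (1-a)^n (1/x_0 - 1)
    return 1.0 / (1.0 + (1.0 - a) ** n * (1.0 / x0 - 1.0))

a, x0 = 0.3, 2.0
x = x0
for n in range(1, 11):
    x = x / (1 - a + a * x)          # the recursion from the question
    assert abs(x - x_closed(n, a, x0)) < 1e-9
print("closed form matches the recursion")
```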
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
But what is a continuous function? I have a very basic problem. I am confused about "continuous function" term. What really is a continuous function? A function that is continuous for all of its domain or for all real numbers? Let's say: $\ln|x|$ - the graph clearly says it's continuous for all real numbers except for $0$ which is not part of the domain. So is this function continuous or not? I could say same about $\tan{x}$ or $\frac{x+1}{x}$ And also what about: $\ln{x}$ - the graph clearly says it's continuous for all of its domain: $(0; \infty)$ - so is this $f$ continuous or not? Thanks for clarification.
The exact answer depends on your chosen definition of "function" (there is more than one). For most uses, a function is regarded as being continuous on an interval $(a,b)$ if for every number $c$ in $(a,b)$, $f(c)=\lim_{x\to c} f(x)$. In your example $f(x)=\ln{x}$ is continuous on the interval $(0,\infty)$ and either undefined or complex/multivalued everywhere else, depending on whether you consider the codomain (range) of $f$ to include the complex numbers or not. In other words, no function is ever just 'continuous' - it is continuous within an interval (which may or may not be its domain).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
For complex vectors $z_1$ and $z_2$, How do I show that if $|z_1+z_2|=|z_1|+|z_2|$ then the vectors $z_1$ and $z_2$ are parallel or antiparallel. In my complex analysis class, we went over a geometric proof of this with the triangle inequality, but I'm trying to find a more algebraic proof. I'm also trying not to use Arg, because we haven't gone over it in my class yet; all I really know about an argument is that if $z_1$, $z_2$, and $0$ are collinear, then they'd all have the same angle off the real axis. I haven't really gotten particularly far. I can convert the absolute values (based on the definition of modulus: for a complex number $z=x+iy$, $|z|=\sqrt{x^2+y^2}$). I can then square both sides of the equation twice and then simplify to get something that looks really easy to work with, but I'm not sure what the results of that could tell me about how the three points are collinear. Any help would be much appreciated.
I will answer the question in the heading (not including $0$). Let $z_k=x_k+iy_k$. Square both sides and remove common terms to get: $(x_1+iy_1)(x_2-iy_2)+(x_1-iy_1)(x_2+iy_2)=2(x_1x_2+y_1y_2)=2\sqrt{(x_1^2+y_1^2)(x_2^2+y_2^2)}$. Now square both sides and eliminate common terms to get $2x_1x_2y_1y_2=x_1^2y_2^2+x_2^2y_1^2$. Assume none of the terms $=0$, divide by the product and get $2=\frac{x_1y_2}{x_2y_1}+\frac{x_2y_1}{x_1y_2}$ Notice that the two terms on the right are reciprocals, so that with $u=$ one ratio, we have a quadratic $u^2-2u+1=0$, so that the ratio $=1$, forcing the vectors to be scalar multiples of each other.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Integral $\int\frac{1}{1+x^3}dx$ Calculate$$\int\frac{1}{1+x^3}dx$$ After calculating the partial fractions I got: $$\frac{1}{3}\int\frac{1}{x+1}dx+\frac{1}{3}\int\frac{2-x}{x^2-x+1}dx=\frac{1}{3}\ln(x+1)+\frac{1}{3}\int\frac{2-x}{x^2-x+1}dx$$ I have no idea on how to proceed. Am I missing a substitution or something?
Alternative approach: Partial fractions. Recall that for any complex $z$, and any $n=1,2,...$ $$z^{1/n}=|z|^{1/n}\exp\left[\frac{i}{n}(2\pi k+\arg z)\right],\qquad k=0,1,...,n-1$$ Then plug in $z=-1$ and $n=3$ to see that $\arg z=\arg(-1)=\pi$ so that in fact, $$1+x^3=\prod_{k=0}^2\left(x-\exp\frac{i\pi(2k+1)}3\right)$$ So, letting $r_k=\exp\frac{i\pi(2k+1)}3$, $$\frac1{1+x^3}=\prod_{k=0}^2\frac1{x-r_k}$$ Look! That's a thing we can do partial fractions on! To do so, we say that $$\prod_{k=0}^2\frac1{x-r_k}=\sum_{k=0}^2\frac{b(k)}{x-r_k}$$ $$\left(\prod_{a=0}^2(x-r_a)\right)\prod_{k=0}^2\frac1{x-r_k}=\left(\prod_{a=0}^2(x-r_a)\right)\sum_{k=0}^2\frac{b(k)}{x-r_k}$$ $$1=\sum_{k=0}^2\frac{b(k)}{x-r_k}\prod_{a=0}^2(x-r_a)$$ $$1=\sum_{k=0}^2b(k)\prod_{a=0\\ a\neq k}^2(x-r_a)$$ then for any $j=0,1,2$, we may plug in $x=r_j$ and notice that all the terms of the sum vanish except for the case $k=j$, which gives $$1=b(j)\prod_{a=0\\ a\neq j}^2(r_j-r_a)$$ $$b(j)=\prod_{a=0\\ a\neq j}^2\frac1{r_j-r_a}$$ Which is an explicit formula for our partial fractions coefficients. 
Anyway, we may now integrate: $$I=\int\frac{\mathrm dx}{1+x^3}=\int\sum_{k=0}^2\frac{b(k)}{x-r_k}\mathrm dx$$ $$I=\sum_{k=0}^2b(k)\int\frac{\mathrm dx}{x-r_k}$$ Which is very easily shown to be $$I=\sum_{k=0}^{2}b(k)\ln\left|x-r_k\right|$$ $$I=b(0)\ln\left|x-r_0\right|+b(1)\ln\left|x-r_1\right|+b(2)\ln\left|x-r_2\right|$$ And since $b(k)=\prod\limits_{a=0\\ a\neq k}^2\frac1{r_k-r_a}$, we have that $$ b(0)=\frac1{(r_0-r_1)(r_0-r_2)}\\ b(1)=\frac1{(r_1-r_0)(r_1-r_2)}\\ b(2)=\frac1{(r_2-r_0)(r_2-r_1)}$$ And from $\exp(i\theta)=\cos\theta+i\sin\theta$, $$ r_0=\exp\frac{i\pi}3=\frac{1+i\sqrt3}2\\ r_1=\exp\frac{3i\pi}3=-1\\ r_2=\exp\frac{5i\pi}3=\frac{1-i\sqrt3}2 $$ So $$ b(0)=-\frac16(1+i\sqrt3)\\ b(1)=\frac13\\ b(2)=\frac16(-1+i\sqrt3) $$ And finally $$I=-\frac{1+i\sqrt3}6\ln\left|x-\frac{1+i\sqrt3}2\right|+\frac13\ln\left|x+1\right|+\frac{-1+i\sqrt3}6\ln\left|x+\frac{-1+i\sqrt3}2\right|+C$$ In fact, using the same method, it can be shown that $$\int\frac{\mathrm dx}{1+x^n}=\sum_{k=0}^{n-1}\ln\left|x-\exp\frac{i\pi(2k+1)}{n}\right|\prod_{a=0\\a\neq k}^{n-1}\frac1{\exp\frac{i\pi(2k+1)}{n}-\exp\frac{i\pi(2a+1)}{n}}$$
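The residue formula $b(k)=\prod_{a\neq k}(r_k-r_a)^{-1}$ can be checked numerically (an added illustration; note in particular that the residue at $r_1=-1$ comes out to $\tfrac13$, matching the real partial-fraction coefficient of $\frac1{x+1}$ found in the question):

```python
import cmath

# the three cube roots of -1 and their residues
r = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 3) for k in range(3)]
b = []
for k in range(3):
    prod = 1
    for a in range(3):
        if a != k:
            prod *= r[k] - r[a]
    b.append(1 / prod)

x = 0.73   # arbitrary real test point
lhs = 1 / (1 + x ** 3)
rhs = sum(bk / (x - rk) for bk, rk in zip(b, r))
print(abs(lhs - rhs))   # ~ 0: the decomposition is exact
```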
{ "language": "en", "url": "https://math.stackexchange.com/questions/3092884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to check convergence of sequence in complete metric space. Let $\{x_n\}$ and $\{y_n\}$ be two sequences in a complete metric space $(X,d)$ such that * *$d(x_n,x_{n+1})\le\frac{1}{n^2}$ *$d(y_n,y_{n+1})\le \frac{1}{n}$ , for all $n\in \mathbb{N}.$ Then which sequence would converge? Justify. Now since the space is complete so every Cauchy sequence is convergent. Let $X=\mathbb{R}$ and $d$, the usual metric. So I take the sequence of partial sums of Harmonic series $\{H_n\}_{n\ge1}$, then $$d(H_n,H_{n+1})=|H_{n+1}-H_n|=1/(n+1)\le1/n$$ But $H_n$ is not a Cauchy sequence so it does not converge in $X$. So 2 need not converge. Is my reasoning correct? Also I have a feeling that 1 would always converge, but I haven't be able to prove it. Can anyone help me with that? Thanks.
Your reasoning for the second sequence is good. For the first one, use the triangle inequality to show that, for $m < n$, $$d(x_n, x_m) \le \sum_{k = m}^{n - 1} d(x_k, x_{k+1}).$$ That $\{x_n\}$ is Cauchy then follows from the convergence of $\sum \frac{1}{n^2}$.
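Numerically, the contrast between the two bounding series is the whole story (an added illustration: the tail of $\sum 1/k^2$ is small, which gives the Cauchy property, while the harmonic tail keeps growing):

```python
# tails of the two series between index 1000 and 100000
tail_sq = sum(1 / k**2 for k in range(1001, 100001))    # < 1/1000
tail_harm = sum(1 / k for k in range(1001, 100001))      # ~ ln(100) ~ 4.6
print(tail_sq, tail_harm)
```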
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The set of all possible values of $AC$ is an open interval $(m,n)$ Side $\overline{AB}$ of $\triangle ABC$ has length $10$. The bisector of angle $A$ meets $\overline{BC}$ at $D$, and $CD = 3$. The set of all possible values of $AC$ is an open interval $(m,n)$. What is $m+n$? A) 16 B) 17 C) 18 D) 19 E) 20 Could someone help me understand what is wrong with my approach (none of the answers matches mine!): Letting $BD$ equal $x$ (which must be positive) and using the angle bisector theorem, we get that $AC$ is $\frac{30}{x}$. Using the triangle inequality, we get that $$7 < x+\frac{30}{x}$$ $$x<7+\frac{30}{x}$$ and $$\frac{30}{x} < 13+x.$$ We can multiply all three inequalities through by $x$ because $x>0$, which does not change the direction of the inequalities. Doing so and moving all terms to the LHS, we get $x^2-7x+30>0$ (1), $x^2-7x-30<0$ (2) and $x^2+13x-30>0$ (3). (1) is always true because we can re-write it in the form of $(x-a)^2+b>0$ where $a$ and $b$ are both positive. (2) can be rewritten as $(x+3)(x-10)<0$, and thus $-3<x<10$. But $x>0$, so $0<x<10$. Finally, (3) can also be written as $(x+15)(x-2)>0$, where only $x>2$ works because $x<-15$ would be negative. Then combining all the scenarios, $x$ can range from $2$ to $10$, but $12$ is not an option... I've double/triple checked my work, but there must be something I'm missing.
I believe you have the correct range for $x$ of $2 \lt x \lt 10$, but that is for $\overline{BD}$. However, the question asks for $\overline{AC}$. Since $\overline{AC} = \frac{30}{x}$, its range is $3 \lt \overline{AC} \lt 15$, so $m = 3$ and $n = 15$, giving the correct answer of $m + n = 3 + 15 = 18$, i.e., choice $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Polar plots of $\sin(kx)$ The plots of $\sin(kx)$ over the real line are somehow boring and look essentially all the same: For larger $k$ you cannot easily tell which $k$ it is (not only due to Moiré effects): But when plotting $\sin(kx)$ over the unit circle by $$x(t) = \cos(t) (1 + \sin(kt))$$ $$y(t) = \sin(t) (1 + \sin(kt))$$ interesting patterns emerge, e.g. for $k = 1,2,\dots,8$ Interlude: Note that these plots are the stream plots of the complex functions $$f_k(z)=\frac{1}{2i}(z^k - \overline{z^k})z $$ on the unit circle (if I didn't make a mistake). Note that $f_k(z)$ is not a holomorphic function. You may compare this with the stream plot of $$g_k(z)=\frac{1}{2i}(z^k - \overline{z^k}) = f_k (z)/z$$ with $g_k(e^{i\varphi}) = \sin(k\varphi) $: [End of the interlude.] Even for larger $k$ one still could tell $k$: Furthermore you can see specific effects of rational frequencies $k$ which are invisible in the linear plots. Here are the plots for $k=\frac{2n +1}{4}$ with $n = 1,2,\dots,8$: The main advantage of the linear plot of $\sin(kx)$ is that it has a simple geometrical interpretation resp. construction: It's the plot of the y-coordinate of a point which rotates with constant speed $k$ on the fixed unit circle: Alternatively, you can look at the sine as the projection of a helix seen from the side. This was the idea behind one of the earliest depictions of the sine found at Dürer: Compare this to the cases of cycloids and epicycles. These also have a simple geometrical interpretation - being the plots of the x- and y-coordinates of a point on a circle that rolls on the line resp. moves on another circle with constant speed My question is: By which geometrical interpretation resp. construction (involving circles or ellipses or whatsoever) can the polar plots of $\sin$ be seen resp. generated? 
Which construction relates to the construction of $\sin$ by a rotating point on a circle in the way that the construction of epicycles relates to the construction of cycloids? Just musing: Might this question have to do with this other question on Hidden patterns in $\sin(kx^2)$? (Probably not because you cannot sensibly plot $\sin(kx^2)$ radially, since there is no well-defined period.)
I did not grasp exactly what you are asking; however, it might be of interest to know that in "old times" electrical engineers used to visualize the phase and frequency of a sinusoidal wave by feeding it to the $x$ axis of an oscilloscope in combination with a known and tunable signal (sinusoidal, triangular, ...) fed to the $y$ axis, producing a Lissajous figure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 1 }
If $f:M \mapsto M'$ is bijective and $d'(f(x),f(y))\ge d(x,y)~\forall x,y\in M$ then $M$ compact $\implies M'$ compact Let $(M,d)$ and $(M',d')$ be metric spaces and let $f:M\mapsto M'$ be a bijective function such that $$d'(f(x),f(y))\ge d(x,y)~\forall x,y\in M$$ Is it true that if $M$ is compact then so is $M'$? I found that the function $f(x)=\begin{cases} x & \text{ if } x\in[0,1/2)\\ x+1/2 & \text{ if } x\in[1/2,1]\end{cases}$ satisfies the conditions for $M=[0,1]$ and $M'=[0,1/2)\cup[1,3/2]$ and $M'$ is not compact. Is this true? What if $f$ had to be continuous?
Yes, your example works. If you're worried, then try going into more detail. How do you know $M$ is compact? (Cite a theorem.) How do you know $M'$ is not compact? (Find a sequence which contains no convergent subsequence, or indeed a non-convergent Cauchy sequence would do!) Are you sure that $d(x, y) \le d'(f(x), f(y))$? Prove it! If $f$ is continuous, then $M'$ must be compact, as the continuous image of a compact set is compact (regardless of whether $d(x, y) \le d'(f(x), f(y))$ holds true or not).
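A brute-force grid check of the expanding property $d(x,y)\le d'(f(x),f(y))$ for the example $f$ from the question (added for illustration; it is evidence, not a proof, and a small floating-point slack is allowed):

```python
def f(x):
    # the example map [0,1] -> [0,1/2) U [1,3/2]
    return x if x < 0.5 else x + 0.5

pts = [i / 500 for i in range(501)]
expanding = all(abs(f(x) - f(y)) >= abs(x - y) - 1e-12
                for x in pts for y in pts)
print(expanding)
```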
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of two irrational numbers being rational or irrational I am currently doing a project on irrational and transcendental numbers and part of this project requires me to look at sums and products of irrational numbers. I am aware that the sum of 2 irrational numbers can be rational or irrational but was wondering if anyone knew of a definite way to look at the numbers and say if their sum/product will be rational/irrational. Is there some sort of theorem than can be applied or is the only way of knowing just working it out? Thanks in advance for any help.
No, there is not. If there was, we would know whether $e+\pi$ is rational or not. But, in fact, that's an open problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Homomorphism $f: \mathbb{Z}_{12} \longrightarrow \mathbb{Z}_{30}$ Suppose that we want to construct a non-surjective homomorphism $$ f: \mathbb{Z}_{12} \longrightarrow \mathbb{Z}_{30} $$ Since $\mathbb{Z}_{12}$ is cyclic, $f$ is completely determined from the image of $\overline{1}$ (its generator), $f(\overline{1})$. For the homomorphism to be well-defined, $f(\overline{1})$ must equal $\overline{d}$, where $d$ is a common divisor of $12$ and $30$. If $f(\overline{1})=\overline{1}$, $f$ is surjective. By excluding this case, we're left with the possible $f$s: $$ f(x)=d \cdot x, \quad d\in \{2,3,6\} $$
The one requirement for $f(1)$ which must be fulfilled is $$ 0=f(0)=f(12\cdot 1)=12f(1) $$ Among elements in $\Bbb Z_{30}$, these are exactly the elements which are multiples of $5$.
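This is easy to confirm by brute force (an added sketch: it lists the admissible images $d=f(\overline1)$ and checks that each induces a well-defined homomorphism $x\mapsto dx \pmod{30}$):

```python
# d must satisfy 12*d = 0 in Z_30
valid = [d for d in range(30) if (12 * d) % 30 == 0]
print(valid)  # [0, 5, 10, 15, 20, 25] -- exactly the multiples of 5

# each such d respects addition in Z_12
for d in valid:
    assert all((d * ((x + y) % 12)) % 30 == ((d * x) % 30 + (d * y) % 30) % 30
               for x in range(12) for y in range(12))
```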
{ "language": "en", "url": "https://math.stackexchange.com/questions/3093778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }