Minimum value of $A = (x^2 + 1)(y^2 +1)(z^2 +1)$? Given: $\begin{cases}x;y;z \in\Bbb R\\x+y+z=3\end{cases}$ Then what is the minimum value of $A = (x^2 + 1)(y^2 +1)(z^2 +1)$ ? I start with $AM-GM$ So $A \geq 8xyz$ Then I need to prove $xyz\geq 1$ right? How to do that and if it's a wrong way, please help me to solve it !!!
With Lagrange multipliers: consider $f(x,y,z)=(x^2+1)(y^2+1)(z^2+1)$ and $$ L(x,y,z,t)=f(x,y,z)-t(x+y+z-3) $$ Then you have \begin{align} \frac{\partial L}{\partial x}&=2x(y^2+1)(z^2+1)-t \\[1ex] \frac{\partial L}{\partial y}&=2y(x^2+1)(z^2+1)-t \\[1ex] \frac{\partial L}{\partial z}&=2z(x^2+1)(y^2+1)-t \end{align} These should vanish together, so we obtain, for instance, $$ 2x(y^2+1)(z^2+1)=2y(x^2+1)(z^2+1) $$ which yields $xy^2+x=x^2y+y$, hence $xy(x-y)=x-y$. Thus either $xy=1$ or $x=y$. Similarly, $xz=1$ or $x=z$. Thus we have four cases to examine. The case $x=y$ and $y=z$ yields $x=y=z=1$. The case $xy=1$ and $xz=1$ yields $y=z=1/x$; the constraint $x+2/x=3$ then gives $x=2,y=1/2,z=1/2$ (the other root, $x=1$, recovers the previous point). Since the function $f$ as well as the constraint are symmetric in $x,y,z$, the remaining two cases will provide $x=1/2,y=2,z=1/2$ and $x=1/2,y=1/2,z=2$. We have $f(1,1,1)=8$ and $f(2,1/2,1/2)=125/16$.
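As a sanity check, a brute-force scan of the constraint plane (a sketch, not part of the Lagrange argument) agrees with the candidate values above:

```python
import itertools

def f(x, y, z):
    return (x**2 + 1) * (y**2 + 1) * (z**2 + 1)

# Scan the constraint plane x + y + z = 3 on a grid; with step 0.05 the grid
# contains the stationary points (1, 1, 1) and (2, 1/2, 1/2) found above.
step = 0.05
vals = [-4 + i * step for i in range(221)]  # covers [-4, 7]
best = min(f(x, y, 3 - x - y) for x, y in itertools.product(vals, vals))

assert best < 8                      # so (1, 1, 1) is not the minimum
assert abs(best - 125 / 16) < 1e-3   # the smaller candidate f(2, 1/2, 1/2)
```

The grid range and step are arbitrary choices; since $f\to\infty$ as any variable grows, the search window is wide enough to bracket the constrained minimum.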
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Integration in a non-standard way: evaluating $ \int \frac 1 { x ^ 2 } \, \mathrm d x $ without applying the power rule - Is it nonsense? I want to evaluate the integral $$ \int \frac 1 { x ^ 2 } \, \mathrm d x $$ in a non-standard way and without applying the power rule. Let $ x \ne 0 $ and $$ \int \frac 1 { x ^ 2 } \, \mathrm d x = f ( x ) + C \text . $$ For any $ a \in \mathbb R \setminus \{ 0 \} $ we have $$ f ( a x ) = \int \frac 1 { a ^ 2 x ^ 2 } \, \mathrm d ( a x ) = \frac 1 a \int \frac 1 { x ^ 2 } \, \mathrm d x = \frac 1 a \bigl ( f ( x ) + C \bigr ) \text . $$ Then, we have $$ f ( x ) = a f ( a x ) \text , $$ for any $ a \in \mathbb R \setminus \{ 0 \} $. Putting $ x = 1 $, we have $$ f ( a ) = \frac { f ( 1 ) } a \text . $$ This is the critical point for me. I'm not sure I'm not wrong here. I will just continue. Here, I will replace the constant $ a $ with the variable $ x $. My argument for doing this is as follows: $ a $ covers all nonzero real numbers. So, I will write the last equation like this (but, I'm not sure): $$ f ( x ) = \frac { f ( 1 ) } x \text . $$ We continue. Let $ u = x $ and $ v = f ( x ) $, so that $ \mathrm d u = \mathrm d x $ and $ \mathrm d v = \frac 1 { x ^ 2 } \, \mathrm d x $. $$ \int x \cdot \frac 1 { x ^ 2 } \, \mathrm d x = x \cdot f ( x ) - \int f ( x ) \, \mathrm d x = \ln x + C _ 1 \text ; $$ $$ \implies x \cdot \frac { f ( 1 ) } x - \int \frac { f ( 1 ) } x \, \mathrm d x = \ln x + C _ 1 \text ; $$ $$ \implies f ( 1 ) - f ( 1 ) \left ( \ln x + C _ 2 \right ) = \ln x + C _ 1 \text ; $$ $$ \implies \bigl ( f ( 1 ) + 1 \bigr ) \ln x = f ( 1 ) ( 1 - C _ 2 ) - C _ 1 \text . $$ The right side is a constant. The left side must also be a constant. $$ \implies f ( 1 ) + 1 = 0 \implies f ( 1 ) = - 1 \text . $$ Finally, we get $$ f ( x ) = - \frac 1 x \text ; $$ $$ \int \frac 1 { x ^ 2 } \, \mathrm d x = - \frac 1 x + C \text . $$ Remark 1. My goal was to find antiderivative of $ \frac 1 { x ^ 2 } $. 
Because, $$ \int \frac 1 { x ^ 2 } \, \mathrm d x = \text {antiderivative of } \frac 1 { x ^ 2 } + \text{constant.} $$ In short, the function $ f ( x ) $ was an antiderivative. Remark 2. Initially $ a $ is a constant. However, it covers all real numbers except zero. Therefore, I replaced it with the variable $ x $. How much of my work does math allow? Is the method I am using a nonsense? Otherwise, can the errors it contains be corrected?
It's best to use definite integrals; this not only distinguishes certain variables your reasoning risks conflating (though you've done your best to explain why the steps are legal), but also makes it shorter. Define $f(x):=\int_\infty^x\frac{dt}{t^2}$ so $f(ax)=\int_\infty^{ax}\frac{dt}{t^2}=\int_\infty^x\frac{du}{au^2}=\frac{f(x)}{a}$; in particular, $f(a)=\frac{f(1)}{a}$. For $x>0$, integrate by parts with $u:=t,\,v:=f(t)$ so$$\begin{align}\ln x&=\int_1^x\tfrac{dt}{t}\\&=[tf(t)]_1^x-\int_1^xf(t)dt\\&=\underbrace{xf(x)-f(1)}_0-\int_1^x\tfrac{f(1)}{t}dt\\&=-f(1)\ln x\\\implies f(1)&=-1.\end{align}$$
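If it helps to see the claim numerically, here is a rough midpoint-rule check of $f(x)=\int_\infty^x\frac{dt}{t^2}=-\frac1x$ and of the scaling identity $f(ax)=f(x)/a$ used above (the truncation point and step count are arbitrary choices):

```python
def f(x, upper=1.0e4, n=400000):
    """Midpoint-rule approximation of f(x) = int_infty^x dt/t^2 for x > 0,
    truncating the improper integral at `upper`; the truncation alone
    contributes an error of about 1/upper."""
    h = (upper - x) / n
    s = sum(1.0 / (x + (i + 0.5) * h) ** 2 for i in range(n)) * h
    return -s  # int_infty^x = -int_x^upper (up to the truncated tail)

# f(x) should equal -1/x:
assert abs(f(2.0) - (-0.5)) < 1e-3

# The scaling identity f(ax) = f(x)/a:
for a in (0.5, 3.0):
    assert abs(f(a * 1.5) - f(1.5) / a) < 2e-3
```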
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove $(x-1)(y-1)>(e-1)^2$ where $x^y=y^x$, $y>x>0$. Let $x,y$ be different positive real numbers satisfying $x^y=y^x$. Prove $(x-1)(y-1)>(e-1)^2$. We may suppose $a=\frac{y}{x}>1$. Then we obtain $x=a^{\frac{1}{a-1}},y=a^{\frac{a}{a-1}}.$ But how to go on?
Sketch of a proof: Fact 1: Let $f(u) = \frac{\ln u}{u}$. Then, $f(u)$ is strictly increasing on $(0, \mathrm{e})$, and strictly decreasing on $(\mathrm{e}, \infty)$. WLOG, assume $y < x$. From $x^y = y^x$, we have $\frac{\ln x}{x} = \frac{\ln y}{y}$. By Fact 1, it is easy to prove that $1 < y < \mathrm{e} < x$. We need to prove that $y > 1 + \frac{(\mathrm{e} - 1)^2}{x-1} \triangleq y_0$. Clearly, $y_0\in (1, \mathrm{e})$. By Fact 1, it suffices to prove that $f(y) > f(y_0)$ or $\frac{\ln x}{x} > f(y_0)$. It suffices to prove that $$\frac{\ln x}{x} > \frac{\ln\left( 1 + \frac{(\mathrm{e} - 1)^2}{x-1}\right)}{1 + \frac{(\mathrm{e} - 1)^2}{x-1}}, \ \forall x > \mathrm{e}.$$ It is true. But my proof is not nice. (Hint: Use Fact 2, then use the derivative.) Fact 2: $\frac{\ln v}{v} \ge \frac{(2\mathrm{e} + 10)v - 16\mathrm{e} + 4\mathrm{e}^2} {(3\mathrm{e} - 3)v^2 + (16\mathrm{e} - 4\mathrm{e}^2)v - 19\mathrm{e}^2 + 7\mathrm{e}^3}$ for all $v \ge \mathrm{e}$. (Hint: Take the derivative.) Remark 1: I propose the following problem: Let $0 < y < x$ such that $\frac{\ln x}{x} = \frac{\ln y}{y} = a$ ($a>0$). Prove that $$(x-1)(y-1) > \left(\mathrm{e} - 1 + \frac{5}{8}\left(\frac{1}{\mathrm{e}} - a\right)\right)^2.$$ By the way, there are many similar problems in MSE or AoPS. These problems have the following description: Let $f(z)$ be a unimodal function. Let $f(x) = f(y) = a$. Then $g(x, y) \ge h(a)$. See my closed post: Inequalities involving roots of some functions (e.g., $\frac{\ln x}{x}$, $x\ln x$). For example: estimate the bound of the sum of the roots of $1/x+\ln x=a$ where $a>1$; or, with $f(x) = (x-1)\ln x$ and $0 < a < b$ satisfying $f(a) = f(b)$, prove that $\frac{1}{\ln a}+\frac{1}{\ln b} < \frac{1}{2}$
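A quick numerical sanity check of the claimed bound, using the parametrisation from the question (the sample values of $a$ are arbitrary):

```python
import math

# Solutions of x^y = y^x with y/x = a > 1, parametrised as in the question:
# x = a^{1/(a-1)}, y = a^{a/(a-1)} = a * x.
def pair(a):
    x = a ** (1.0 / (a - 1.0))
    return x, a * x

target = (math.e - 1.0) ** 2
for a in (1.1, 1.5, 2.0, 3.0, 5.0, 10.0):
    x, y = pair(a)
    assert abs(y * math.log(x) - x * math.log(y)) < 1e-9   # x^y = y^x holds
    assert (x - 1.0) * (y - 1.0) > target                  # the claimed bound
# As a -> 1, both x and y tend to e and the product tends to (e - 1)^2,
# so the constant on the right-hand side cannot be improved.
```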
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
point of intersection of two lines in barycentric coordinate system I am looking for an efficient way to determine the intersection point of two lines which go through a triangle (face) of a 3D triangular surface mesh. For both lines I know the two points at which they intersect with the edges of a triangle (face). Denoted $P_A^1, P_B^1$ for the first line and $P_A^2, P_B^2$ for the second line (See illustrative example figure). Based on my investigations my preferred approach would be to: * *Transform the points of intersection for both lines ($P_A^1, P_B^1$ and $P_A^2, P_B^2$) into barycentric coordinates resulting in $B_A^1, B_B^1$ and $B_A^2, B_B^2$. *Define two lines: $L_1$ which goes through $B_A^1, B_B^1$ and $L_2$ going through $B_A^2, B_B^2$ based on the two-point form defined in section 4.1.1. of this document *Determine the point $B_I$ as the barycentric coordinates where $L_1$ and $L_2$ intersect based on the equations in section 4.3 of this document. *Transform $B_I$ back into the cartesian coordinate system to have its 3D coordinates, denoted as $P_I$. Unfortunately steps 2 and 3 do not lead to meaningful results (i.e., the intersection points fall outside the triangle) and I have doubts that I am applying the equations in section 4.1.1. and section 4.3 properly. Even when using a simple example based on the upper illustration in this figure here by defining $B^1_A=(1,0,0), B^1_B=(0,1/2,1/2)$ and $B^2_A=(0,1,0), B^2_B=(1/2,0,1/2)$ I cannot determine $B_I$ correctly as $B_I=(1/3,1/3,1/3)$ with the above procedure.
For clarity, let's use $(u, v, w)$ for barycentric coordinates. Let's say the first line passes through points $(u_1 , v_1 , w_1)$ and $(u_2 , v_2 , w_2)$ in barycentric coordinates; and the second line through points $(u_3, v_3, w_3)$ and $(u_4, v_4, w_4)$. If we parametrise these using $t_1$ and $t_2$, we have lines $$\left\lbrace ~ \begin{aligned} L_1(t_1) &= \Bigl(u_1 + t_1 (u_2 - u_1) ,~ v_1 + t_1 (v_2 - v_1) ,~ w_1 + t_1 ( w_2 - w_1) \Bigr) \\ L_2(t_2) &= \Bigl(u_3 + t_2 (u_4 - u_3) ,~ v_3 + t_2 (v_4 - v_3) ,~ w_3 + t_2 ( w_4 - w_3) \Bigr) \\ \end{aligned} \right.$$ At the intersection, $L_1(t_1) = L_2(t_2)$. By definition, barycentric coordinates satisfy $u + v + w = 1$. Therefore, we can pick any pair ($u, v$ or $u, w$ or $v, w$) of barycentric coordinates. If we pick $u$ and $v$ (and therefore $w = 1 - u - v$), we have a system of two linear equations and two unknowns $t_1$ and $t_2$: $$\left\lbrace ~ \begin{aligned} u_1 + t_1 ( u_2 - u_1 ) &= u_3 + t_2 ( u_4 - u_3 ) \\ v_1 + t_1 ( v_2 - v_1 ) &= v_3 + t_2 ( v_4 - v_3 ) \\ \end{aligned}\right.$$ which has exactly one solution: $$\left\lbrace ~ \begin{aligned} t_1 &= \frac{ u_1 (v_4 - v_3) + u_3 (v_1 - v_4) + u_4 (v_3 - v_1) }{ (u_1 - u_2)(v_4 - v_3) - (u_4 - u_3)(v_1 - v_2) } \\ t_2 &= \frac{ u_1 (v_2 - v_3) + u_2 (v_3 - v_1) + u_3 (v_1 - v_2) }{ (u_1 - u_2)(v_4 - v_3) - (u_4 - u_3)(v_1 - v_2) } \\ \end{aligned}\right.$$ where the denominator is the same for both, and is zero if and only if the lines are parallel. If $0 \le t_1 \le 1$, then the intersection is on the line segment between the specified points on the first line. If $0 \le t_2 \le 1$, then the intersection is on the line segment between the specified points on the second line. 
The intersection is at $(u, v, w)$, $$\left\lbrace ~ \begin{aligned} u &= u_1 + t_1 ( u_2 - u_1 ) = u_3 + t_2 ( u_4 - u_3 ) \\ v &= v_1 + t_1 ( v_2 - v_1 ) = v_3 + t_2 ( v_4 - v_3 ) \\ w &= w_1 + t_1 ( w_2 - w_1 ) = w_3 + t_2 ( w_4 - w_3 ) \\ \end{aligned} \right.$$ where you can use either of the right sides. The point is inside the triangle if and only if $0 \le u \le 1$, $0 \le v \le 1$, and $0 \le w \le 1 \iff 0 \le u + v \le 1$. If you have the Cartesian coordinates $(x_1, y_1, z_1)$ corresponding to $(u_1, v_1, w_1)$, and so on, you can calculate the 3D coordinates of the intersection $(x, y, z)$ using $$\left\lbrace ~ \begin{aligned} x &= x_1 + t_1 ( x_2 - x_1 ) = x_3 + t_2 ( x_4 - x_3 ) \\ y &= y_1 + t_1 ( y_2 - y_1 ) = y_3 + t_2 ( y_4 - y_3 ) \\ z &= z_1 + t_1 ( z_2 - z_1 ) = z_3 + t_2 ( z_4 - z_3 ) \\ \end{aligned} \right.$$ again using either one of the right sides, as they are equal at the intersection by definition.
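A minimal implementation of these formulas (a sketch; exact rational arithmetic via `fractions` avoids round-off), checked on the example from the question, where both lines meet at the centroid $(1/3,1/3,1/3)$:

```python
from fractions import Fraction as F

def barycentric_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1, p2 with the line through p3, p4,
    all points given in barycentric coordinates (u, v, w).
    Returns (t1, t2, point), or None if the lines are parallel."""
    (u1, v1, _), (u2, v2, _) = p1, p2
    (u3, v3, _), (u4, v4, _) = p3, p4
    den = (u1 - u2) * (v4 - v3) - (u4 - u3) * (v1 - v2)
    if den == 0:
        return None  # parallel lines
    t1 = (u1 * (v4 - v3) + u3 * (v1 - v4) + u4 * (v3 - v1)) / den
    t2 = (u1 * (v2 - v3) + u2 * (v3 - v1) + u3 * (v1 - v2)) / den
    point = tuple(a + t1 * (b - a) for a, b in zip(p1, p2))
    return t1, t2, point

# The example from the question: B_A^1=(1,0,0), B_B^1=(0,1/2,1/2),
# B_A^2=(0,1,0), B_B^2=(1/2,0,1/2).
t1, t2, p = barycentric_intersection(
    (F(1), F(0), F(0)), (F(0), F(1, 2), F(1, 2)),
    (F(0), F(1), F(0)), (F(1, 2), F(0), F(1, 2)))
assert p == (F(1, 3), F(1, 3), F(1, 3))
assert 0 <= t1 <= 1 and 0 <= t2 <= 1  # intersection lies on both segments
```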
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The relation $a^n|b^n$ implies that $a|b$, where $a$ and $b$ are positive integers and $n\ge1$. Let $gcd(a, b) = d$ Let $a = rd$ and $b = sd$, where $gcd(r, s) = 1$. $\therefore gcd(r^n, s^n) = 1$ Now, how do I show that $r = 1$?
Okay. It shouldn't be hard to prove $\gcd(m^k, n^k) = [\gcd(m,n)]^k$. You can do it by considering prime factors, or by considering $$\gcd(m^k,n^k)=\\\gcd(m^{k-1}\frac m{\gcd(m,n)}\gcd(m,n), n^{k-1}\frac n{\gcd(m,n)}\gcd(m,n))=\\\gcd(m,n)\gcd(\frac m{\gcd(m,n)}m^{k-1},\frac n{\gcd(m,n)}n^{k-1})=\\\gcd(m,n)\gcd(m^{k-1},n^{k-1})$$ and using induction. Or we could do it your way. If $\gcd(m,n) = d$ and $m=rd$, $n=sd$ with $\gcd(r,s) =1$, then $\gcd(m^k,n^k)=\gcd(d^kr^k,d^ks^k)=d^k\gcd(r^k,s^k) = d^k\cdot 1 = d^k$. ....... So now bear in mind $g|h \iff \gcd(g,h) = g$. (That's too obvious to require any discussion, isn't it? If $\gcd(g,h)=g$, then $g$ is a common divisor of $g$ and $h$, so $g|h$. And if $g|h$, then $g$ is a common divisor of $g$ and $h$, and there can't be any greater divisor of $g$ than $g$ itself... right?) So we are done. ...... $a^n|b^n\implies a^n = \gcd(a^n,b^n) = d^n\gcd(r^n,s^n) =d^n\cdot 1 =d^n\implies a=d=\gcd(a,b) \implies a|b$. That's it.
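A brute-force check of the lemma and of the statement itself over a small range (purely illustrative):

```python
from math import gcd

# The lemma gcd(m^k, n^k) = gcd(m, n)^k:
for m in range(1, 40):
    for n in range(1, 40):
        for k in (2, 3, 4):
            assert gcd(m**k, n**k) == gcd(m, n) ** k

# ... and the statement itself: a^n | b^n forces a | b.
for a in range(1, 30):
    for b in range(1, 30):
        for n in (2, 3):
            if b**n % a**n == 0:
                assert b % a == 0
```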
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
No differentiable function on $[-1,1]$ with derivative $\operatorname{sgn}(x)$ How can I prove for certain that there exists no differentiable function $g(x)\colon [-1,1]\rightarrow \mathbb{R}$ with $$g'(x) = \operatorname{sgn}(x) = \begin{cases} +1, & \text{for $x > 0$} \\ 0, & \text{for $x=0$} \\ -1, & \text{for $x<0$} \end{cases} $$ I'm currently jumping to the absolute value function but that's not differentiable at $x = 0$ and that doesn't exactly disprove the existence of others. Looking at the limits \begin{align} \lim_{h\to 0^{+}} g'(h) &=1\\ &= \lim_{h\to 0^{+}} \frac{g(0+h)-g(0)}{h}\\ &\neq \lim_{h\to 0^{-}}\frac{g(0+h)-g(0)}{h}\\ &= -1\\ &= \lim_{h\to 0^{-}}g'(h)\\ &\neq 0\\ &= g'(0) \end{align} which implies discontinuity at $h = 0$. Continuity and differentiability are topics that I still don't quite grasp properly. Can I use the contraposition from Darboux's Theorem for this specific case?
Suppose that such a function $g$ existed. Then based on your definition of $g'$ we can conclude that $$g(x) = \begin{cases} -x+c_1, &x<0\\ a, &x = 0\\ x+c_2, &x > 0 \end{cases}$$ where we don't know the value of $a$ yet; note, however, that a differentiable function is continuous, so letting $x \to 0^{\pm}$ forces $a = c_1 = c_2$. The only question is whether there is some choice of constants making $g'(x) = \operatorname{sgn}(x)$. If that were to work, then the following would hold: $$0 = g'(0) = \lim_{h\to 0}\frac{g(h) - g(0)}{h}. $$ Because the definition of $g$ differs between the left and right of $0$, we need to consider the left and right limits, though as we will see it suffices to consider the right-hand limit: $$\lim_{h\to 0^{+}}\frac{g(h) - g(0)}{h} = \lim_{h\to 0^{+}}\frac{h+c_2 - c_2}{h} = \lim_{h\to 0^{+}}\frac{h}{h} = 1 \ne 0.$$ Thus, we can conclude that there is no such function $g$ with $g'(x) = \operatorname{sgn}(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Does any sequence with values in a directed set have a monotone subsequence? Definition: a directed set is a set $M$ together with a preorder $\geq$ (reflexive and transitive order) such that every pair of elements in $M$ has an upper bound ($\forall x,y \in M, \ \exists z \in M,\ z \geq x \wedge z \geq y$) For sequences in the reals, there always exists a monotone subsequence. The proof of this fact uses the total order property of the real numbers. But I was wondering, would it work for directed sets? In other words, does a sequence with values in a directed set $(M, \geq)$ always have a monotone subsequence? I tried to find a counterexample, but I have yet to work with directed sets to be able to visualize them at this level.
Let $M=\wp(\Bbb N)$, directed by inclusion, and let $a_n=\{n\}$ for $n\in\Bbb N$; then $\langle a_n:n\in\Bbb N\rangle$ is a sequence in $M$ with no monotone subsequence: any two distinct singletons are incomparable under $\subseteq$, so no subsequence can be monotone.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3995983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
R-orientations of Manifolds Let $M$ be an $n$-manifold and $R$ a commutative ring with $1$. Then for every $x\in M$ the relative homology group $H_{n}(M,M-{x};R)$ is isomorphic to $R$ and an $R$-orientation of $M$ is defined to be a function that assigns to each $x\in M$ a unit in $H_{n}(M,M-{x};R)$ subject to a local consistency condition. This definition is where I am struggling. This isomorphism is merely a group isomorphism, so how does it make sense to talk about a unit in $H_{n}(M,M-{x};R)$? There is no canonical group isomorphism $H_{n}(M,M-{x};R)\rightarrow R$, and even if there was this doesn't seem helpful since homology groups aren't rings. It seems like we are being invited to use what ever ismorphism $H_{n}(M,M-{x};R)\rightarrow R$ in order to associate the homology groups $H_{n}(M,M-{x};R)$ with $R$. This would be fine if it's the case that if $\alpha$ and $\beta$ are any two isomorphisms $H_{n}(M,M-{x};R)\rightarrow R$ that there exists a ring isomorphism $\phi:R\rightarrow R$ such that $\phi\circ \beta=\alpha\circ id_{H_n}$. And also if $B$ is a ball in $\mathbb{R}^{n}$ containging $x$ that the isomorphism $H_{n}(M,M-{B};R)\rightarrow H_{n}(M,M-{x};R)$ must send units to units. But are these things true?
No, it is not merely a group isomorphism. It is a module isomorphism. Let $R$ be a ring, and consider $R$ as a module over itself. Suppose $\varphi: R \to R$ is a module homomorphism. Then $\varphi(r) = \varphi(1 \cdot r) = \varphi(1) \cdot r$ for all $r \in R$, by the fact that this is a module homomorphism. So every module endomorphism of $R$ is of the form "multiplication by an element of $R$". Write $\varphi_a: R \to R$ for the map $\varphi_a(r) = ar$. Then $\varphi_a \varphi_b = \varphi_{ab}$. It follows that $\varphi_a$ is invertible as a module homomorphism if and only if $a$ is invertible as an element of $R$. As products of units are units, it follows that any module isomorphism $\varphi: R \to R$ sends units to units. Note that "units in $R$" are definable module-theoretically, which provides an alternative proof. An element $r \in R$ is a unit if and only if the submodule generated by $r$ --- also known as the ideal $(r)$ --- is the entire module $R$. A unit of a ring is the same as a generator of $R$, considered as a module over itself (that is, we say $m \in M$ is a generator of a module if $mR = M$; notice that if $M = R$ considered as a module over itself, the condition $rR = R$ is precisely the condition that $r$ is a unit). As isomorphisms send module generators to module generators, it follows that a module isomorphism $R \to R$ sends units to units.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
For the given function $f(x)=\frac{1}{\sqrt {1+x}} +\frac{1}{\sqrt {1+a}} + \sqrt{\frac{ax}{ax+8}}$, prove that $1<f(x)<2$ For the given function $f(x)=\frac{1}{\sqrt {1+x}} +\frac{1}{\sqrt {1+a}} + \sqrt{\frac{ax}{ax+8}}$, prove that $1<f(x)<2$ for positive $a$ and $x\ge 0$ I tried the most rudimentary method, i.e., differentiating with respect to $x$, and hoped to get at least two solutions of $f'(x)=0$; unfortunately I wasn't able to find any, implying the function is monotonic. Whether it is increasing or decreasing still remains a mystery, given that we don't know what $a$ is. What method can be used to solve this?
This is actually a very devious inequality problem disguised as a calculus problem, and it took me very long to realise this. But eventually what gave it away was that calculating the second derivative of $f(x)$ would be a monstrous task, and that the $\sqrt{\dfrac{ax}{ax+8}}$ term seemed very suspicious (As we shall see, dividing both numerator and denominator by $ax$ would yield something very interesting and useful.) First let us rename $x$ as $b, b \geq 0$. The case where $b=0$ is trivial, since we have $\dfrac{1}{\sqrt{1+a}} + \dfrac{1}{\sqrt{1+b}} + \sqrt{\dfrac{ab}{ab+8}} = \dfrac{1}{\sqrt{1+a}} +1 $. Thus, we may assume $a,b >0$. Let $a=2x,b=2y$, where $x, y >0$. Thus, $$\dfrac{1}{\sqrt{1+a}} + \dfrac{1}{\sqrt{1+b}} + \sqrt{\dfrac{ab}{ab+8}} =\dfrac{1}{\sqrt{1+2x}} + \dfrac{1}{\sqrt{1+2y}} + \dfrac{1}{\sqrt{1+\dfrac{2}{xy}}}.$$ Let $2z=\dfrac{2}{xy} \Rightarrow xyz=1$. So really, this problem is equivalent to us proving the following inequality: Inequality: Let $x,y,z \in \mathbb{R^+}$, $xyz=1$. Prove that $1<\dfrac{1}{\sqrt{1+2x}} + \dfrac{1}{\sqrt{1+2y}} + \dfrac{1}{\sqrt{1+2z}} <2. $ Now, the lower bound is relatively easy, but the upper bound is a real killer. Lower Bound: Since $1+2x >1 , \sqrt{1+2x} < 1+2x \Rightarrow \dfrac{1}{\sqrt{1+2x}} > \dfrac{1}{1+2x}$. Thus it suffices for us to prove that: \begin{align} \sum_{\text{cyc}} \dfrac{1}{1+2x} \geq 1 & \iff \dfrac{(1+2y)(1+2z) + (1+2x)(1+2z) + (1+2x)(1+2y)}{(1+2x)(1+2y)(1+2z)} \geq 1 \\ & \iff 3+4(x+y+z) + 4(yz+xz+xy) \geq 9+2(x+y+z) + 4(yz+xz+xy) \\ & \iff 2(x+y+z) \geq 6 \\ & \iff x+y+z \geq 3 \\ \end{align} Which follows immediately from AM-GM and the given condition. Upper Bound: The following proof borrowed an important idea introduced in Yufei Zhao's inequalities notes: Inequalities. Sometimes switching the roles of the constraint and the inequality can yield wonders! We begin by letting $p=\dfrac{1}{\sqrt{1+2x}}, q=\dfrac{1}{\sqrt{1+2y}}, r=\dfrac{1}{\sqrt{1+2z}} $, and $p,q,r<1$. 
We have to prove that $xyz=1 \Rightarrow p+q+r <2$. Instead of proving this directly, we will prove the contrapositive: $p+q+r \geq 2 \Rightarrow xyz <1 \Rightarrow xyz \neq 1$. Next we carry out the following algebraic manipulations: \begin{align} p=\dfrac{1}{\sqrt{1+2x}} & \Rightarrow p^2=\dfrac{1}{1+2x} \\ & \Rightarrow \dfrac{1}{p^2} = 1+2x \\ & \Rightarrow x=\dfrac{1-p^2}{2p^2} = \dfrac{(1+p)(1-p)}{2p^2} < \dfrac{1-p}{p^2}\\ \end{align} But \begin{align} p+q+r \geq 2 & \Rightarrow p \geq 2-q-r \\ & \Rightarrow 1-p \leq -1+q+r = qr-(1-q)(1-r) < qr \\ & \Rightarrow x < \dfrac{qr}{p^2}. \end{align} Similarly, we obtain that $y < \dfrac{pr}{q^2}$ and $z < \dfrac{pq}{r^2}$. Thus, $xyz <\dfrac{qr}{p^2} \cdot \dfrac{pr}{q^2} \cdot \dfrac{pq}{r^2} =1$, and we are done. Final note: The upper bound $2$ cannot be improved. Taking $x = y \to 0^+$ (so that $z = 1/(xy) \to \infty$) gives $$\dfrac{1}{\sqrt{1+2x}} + \dfrac{1}{\sqrt{1+2y}} + \dfrac{1}{\sqrt{1+2z}} \to 1 + 1 + 0 = 2,$$ so the supremum over the constraint set is exactly $2$, even though it is never attained.
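For what it's worth, a numerical spot-check of the bounds on the constraint surface $xyz=1$ (the sample points are chosen arbitrarily):

```python
import math

def S(x, y):
    """The sum for a constrained triple: z is forced by xyz = 1."""
    z = 1.0 / (x * y)
    return sum(1.0 / math.sqrt(1.0 + 2.0 * t) for t in (x, y, z))

# Spot-check 1 < S < 2 on some constrained triples.
for x, y in [(0.01, 0.01), (0.1, 10.0), (1.0, 1.0), (2.0, 3.0), (100.0, 100.0)]:
    assert 1.0 < S(x, y) < 2.0

# Both bounds are approached but never attained:
assert S(1e-8, 1e-8) > 1.999   # x, y -> 0 (so z -> infinity) pushes S toward 2
assert S(1e8, 1e8) < 1.001     # x, y -> infinity pushes S toward 1
```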
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Power Series with $a_{n}$ and $a_{k}$ If $P(x)= \sum_{n=0}^{\infty} a_{n} x^{n}$. It is known that $P$ satisfies: $$P^{\prime}(x)=2xP(x)$$ for all $ x\in \mathbb{R}$ and $$P(0)= 1$$ (1) Prove that $a_{2k+1}=0$ for every $ k\in \left \{1,2,3,4,\dots \right \}$ Is it true that if two power series have the same coefficients then they are equal? I can demonstrate the first equation by transforming the series into an integral, but I can't prove (1) and (2) because $k$ confuses me; I was thinking that $k$ is for $P'(x)$, just as $n$ is for $P(x)$. I also need guidance with the proof requested. Thanks.
It is true that if two power series have the same coefficients then they are equal. More is true actually. If two power series are equal on some interval then their coefficients must be equal. Assuming that the series has a positive radius of convergence $R$, we can differentiate term by term to obtain a series for $P'(x)$. $$P'(x)=\sum_{k=1}^{\infty}{ka_kx^{k-1}}$$ This series also has radius of convergence $R$, and since $P'(x)=2xP(x)\implies P'(0)=a_1=0$ Also, since $P'(x)=2xP(x)$ $$\sum_{k=1}^{\infty}{ka_kx^{k-1}}=\sum_{k=0}^{\infty}{2a_kx^{k+1}}$$ making a shift of index we see that, $$\sum_{k=1}^{\infty}{(k+1)a_{k+1}x^k}=\sum_{k=1}^{\infty}{2a_{k-1}x^k}$$ Since these power series are equal on some interval with radius $R>0$, their coefficients must be equal. Thus we have, $$(k+1)a_{k+1}=2a_{k-1}$$ Now you can probably finish the proof on your own using induction.
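To see the recurrence in action, one can generate the coefficients with exact rational arithmetic; the even coefficients come out as $1/k!$, consistent with the solution $P(x)=e^{x^2}$ of the differential equation (that closed form is an observation, not something the proof requires):

```python
from fractions import Fraction
from math import factorial

# a_0 = 1 from P(0) = 1, a_1 = 0 from P'(0) = 0, then (k+1) a_{k+1} = 2 a_{k-1}.
N = 20
a = [Fraction(0)] * (N + 1)
a[0] = Fraction(1)
for k in range(1, N):
    a[k + 1] = 2 * a[k - 1] / (k + 1)

assert all(a[2 * k + 1] == 0 for k in range(N // 2))            # part (1)
assert all(a[2 * k] == Fraction(1, factorial(k)) for k in range(N // 2 + 1))
```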
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inequality for standard normal distribution Suppose $X\sim N(0,\,1)$ and $c, \, y\in\Bbb R$ with $c>0$. Prove that: $$|P(X<y)-P(X<cy)| \leq \frac{|c-1|\max(1,1/c)}{\sqrt{2\pi e}} $$
WLOG, suppose $c>1$ (for $c<1$, we work with $\frac{1}{c}$ instead) and $y>0$ (for $y<0$, we work with $-X$). The left hand side becomes $$ \begin{align} P(X<cy)-P(X<y) &= P(y<X<cy) \\ &= \int_y^{cy}\frac{e^{-\frac{t^2}{2}}}{\sqrt{2\pi} }dt \\ &\le \int_y^{cy}\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi} }dt =(c-1)y\frac{e^{-\frac{y^2}{2}}}{\sqrt{2\pi} }\\ \end{align} $$ But we have, since $y e^{-y^2/2}$ is maximised over $y>0$ at $y=1$, $$ye^{-\frac{y^2}{2}} \le \frac{1}{\sqrt{e}}$$ Hence, the left hand side is at most $(c-1)\frac{1}{\sqrt{2\pi e}}$ (QED).
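A numerical spot-check of both the auxiliary bound $ye^{-y^2/2}\le 1/\sqrt e$ and the full inequality (the sample values of $c$ and $y$ are arbitrary):

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

bound_const = 1.0 / math.sqrt(2.0 * math.pi * math.e)

# The auxiliary bound: max of y*exp(-y^2/2) over y > 0 is 1/sqrt(e), at y = 1.
ys = [i * 0.01 for i in range(1, 1001)]
assert max(y * math.exp(-y * y / 2.0) for y in ys) <= math.exp(-0.5) + 1e-12

# Spot-check the full inequality for a range of c > 0 and y:
for c in (0.2, 0.5, 0.9, 1.5, 2.0, 10.0):
    for y in (-3.0, -1.0, -0.1, 0.0, 0.7, 1.0, 5.0):
        lhs = abs(Phi(y) - Phi(c * y))
        rhs = abs(c - 1.0) * max(1.0, 1.0 / c) * bound_const
        assert lhs <= rhs + 1e-12
```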
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $T(n) = 2^n T(n-1) + 2^n$ Is there an exact (and if not, an asymptotic) solution to the following recurrence relation? $$ T(n) = 2^n T(n-1) + 2^n, \text{for } n > 0 $$ If yes, I'd also like to know whether this relation is an instance of a more general class of relations for which you can apply the same technique for solving it.
A sketch: * *Let $u_n$ denote the value of $T(n)$ obtained if $T(0)=0$. Prove by induction $T(n)=2^{n(n+1)/2}T(0)+u_n$. *Define $v_n:=u_n/2^{n(n+1)/2}$ so $T(n)=2^{n(n+1)/2}(T(0)+v_n)$. Prove by induction $v_n=v_{n-1}+2^{-n(n-1)/2}$. *Since $v_0=0$, $T(n)=2^{n(n+1)/2}(T(0)+\sum_{k=1}^n2^{-k(k-1)/2})$.
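The closed form from the sketch can be checked against the recurrence directly with exact arithmetic (a quick verification, not part of the derivation):

```python
from fractions import Fraction

def T_closed(n, T0):
    """Closed form from the sketch:
    T(n) = 2^{n(n+1)/2} * (T(0) + sum_{k=1}^n 2^{-k(k-1)/2})."""
    s = sum(Fraction(1, 2 ** (k * (k - 1) // 2)) for k in range(1, n + 1))
    return 2 ** (n * (n + 1) // 2) * (Fraction(T0) + s)

for T0 in (0, 1, 5):
    T = Fraction(T0)
    for n in range(1, 12):
        T = 2**n * T + 2**n          # the recurrence itself
        assert T == T_closed(n, T0)
```

The sum $\sum_{k\ge1}2^{-k(k-1)/2}$ converges (to roughly $1.64$), so asymptotically $T(n)=\Theta\bigl(2^{n(n+1)/2}\bigr)$ for any fixed $T(0)$ with $T(0)+\sum_k 2^{-k(k-1)/2}\neq 0$.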
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linearly independent set in $\mathbb R$ over $\mathbb Q$ The set $$\left\{\sqrt[2^n]{2},\quad n\in\mathbb N\right\}\subset\mathbb R$$ is linearly independent over $\mathbb Q$. Is there an easy or elementary way to see this? I.e., why does the equation $\alpha_1 2^{2^{-i_1}}+\cdots +\alpha_N 2^{2^{-i_N}}=0$ not have a non-trivial solution in $\alpha_i\in\mathbb Q$ for any choice of $N\in\mathbb N$ and $i_1,\dots,i_N\in\mathbb N$?
More generally, for each $N \in \mathbb N$, the set $\left\{\sqrt[2^n]{2} : n=0,\dots,N\right\}$ is linearly independent because $X^{2^N}-2$ is irreducible over $\mathbb Q$ due to Eisenstein's criterion (at $p=2$). Indeed, irreducibility says $\theta:=\sqrt[2^N]{2}$ has degree $2^N$ over $\mathbb Q$, so $1,\theta,\theta^2,\dots,\theta^{2^N-1}$ are linearly independent over $\mathbb Q$; and $\sqrt[2^n]{2}=\theta^{2^{N-n}}$, so for $n=1,\dots,N$ the given elements are distinct powers from this list, while the $n=0$ element is $2=2\cdot\theta^0$. A nontrivial rational dependence among them would therefore be a nontrivial dependence among $1,\theta,\dots,\theta^{2^N-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3996903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Q: Let $X$ be a continuous random variable s.t. $X\sim U(-1,3)$. Find $F_Y$, for $Y=X^4$. Q: Let $X$ be a continuous random variable s.t. $X\sim U(-1,3)$. Find $F_Y$, for $Y=X^4$. My attempts so far: $F_Y(t)=P(Y\le t)=P(X^4\le t)=P(-\sqrt[4]{t}\le X\le\sqrt[4]{t})=F_X(\sqrt[4]{t})-F_X(-\sqrt[4]{t})$. Since $X\sim U(-1,3)$ then $F_X(t)=\begin{cases}0&, t<-1 \\ \frac{t+1}{4}&, -1\le t\le 3 \\ 1&, 3<t &\end{cases}$ What I don't understand is how to find the right way to split $F_Y(t)=F_X(\sqrt[4]{t})-F_X(-\sqrt[4]{t})$.
Let's step back for a minute and think about what happens to $Y$ for various values of $X$. When $-1 \le X \le 1$, then $0 \le Y \le 1$. So when we pick some $y$ between $0$ and $1$, $$\Pr[Y \le y] = \Pr[-y^{1/4} \le X \le y^{1/4}].$$ But when $1 < y \le 3^4 = 81$, instead of writing $-y^{1/4} \le X \le y^{1/4}$, the left-hand side inequality stops at $-1$ because that's where the support of $X$ ends. And when $y > 81$, both sides are cut off by the support, and the event becomes $-1 \le X \le 3$. So in summary we have $$\Pr[Y \le y] = \begin{cases} 0, & y < 0 \\ \Pr[-y^{1/4} \le X \le y^{1/4}], & 0 \le y \le 1 \\ \Pr[-1 \le X \le y^{1/4}], & 1 < y \le 81 \\ 1, & y > 81. \end{cases}$$ Now there are no further restrictions and we can proceed with the evaluation of each case.
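The resulting piecewise CDF can be cross-checked numerically (a sketch; the grid size is an arbitrary choice):

```python
def F_Y(y):
    """The piecewise CDF derived above, for X ~ U(-1, 3) and Y = X^4."""
    if y < 0:
        return 0.0
    if y <= 1:
        return 2 * y ** 0.25 / 4        # P(-y^{1/4} <= X <= y^{1/4})
    if y <= 81:
        return (y ** 0.25 + 1) / 4      # P(-1 <= X <= y^{1/4})
    return 1.0

# Cross-check against a fine discretisation of the uniform density on (-1, 3).
N = 200000
fours = [(-1 + 4 * (i + 0.5) / N) ** 4 for i in range(N)]
for y in (0.2, 0.5, 1.0, 7.0, 16.0, 81.0, 100.0):
    empirical = sum(1 for q in fours if q <= y) / N
    assert abs(empirical - F_Y(y)) < 1e-3

# Continuity at the break points y = 1 and y = 81:
assert abs(F_Y(1.0) - 0.5) < 1e-12
assert abs(F_Y(81.0) - 1.0) < 1e-12
```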
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Number of Paths on a Grid From S to M I was reading about the derivation of the formula for the number of paths from one corner to another corner of a H by W grid here and I wondered whether it is possible to apply the result $\binom{(H-1)+(W-1)}{H-1}$ to find the number of paths from a given square on the top row of the grid to another selected square in the bottom row. For example the number of paths from B to J. I thought of reducing the grid to just the columns B to D, counting the paths there and then adding the possible paths from every other square outside the reduced grid. However I had trouble finding a formula for the possible paths from outside of the reduced grid. In a path you can't repeat a square, and you can move to any adjacent square.
A possible (but involved) approach You could represent moves by complex numbers: $1$ is move right, $-1$ is move left, $i$ is move up, $-i$ is move down. Then a path is a sequence such as $1,-i, -1, ...$ The conditions for a valid path then have simple arithmetical equivalences. To move from B to J The total sum of the numbers must be $2-5i$. To stay in the grid For any natural number $n$, the sum $S_n$ of the first $n$ numbers must satisfy $4\ge \Re (S_n)\ge -1, 0\ge \Im (S_n)\ge -5. $ To not visit any square twice No sum of successive numbers must have sum $0$. This looks suitable to be programmed if that is of interest.
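A sketch of how those three conditions might be coded; the grid layout (columns as real parts, rows as non-positive imaginary parts) and the start/target cells are hypothetical choices, since the original letters B and J refer to a figure:

```python
def valid_path(moves, start, target, width, height):
    """Check the three arithmetical conditions from the answer.
    Moves are complex numbers: 1 right, -1 left, 1j up, -1j down.
    `start`/`target` are cell positions as complex numbers; the grid is
    0 <= Re <= width-1 and -(height-1) <= Im <= 0."""
    pos, visited = start, {start}
    for m in moves:
        pos += m
        if not (0 <= pos.real <= width - 1 and -(height - 1) <= pos.imag <= 0):
            return False          # a partial sum left the grid
        if pos in visited:
            return False          # some run of successive moves summed to 0
        visited.add(pos)
    return pos == target          # total sum equals target - start

# Example on a 6x6 grid with target - start = 2 - 5i, as in the answer:
start, target = 1 + 0j, 3 - 5j
assert valid_path([1, 1] + [-1j] * 5, start, target, 6, 6)
assert not valid_path([1, -1] + [1, 1] + [-1j] * 5, start, target, 6, 6)
```

With a validity check like this, the paths could be counted by depth-first search over move sequences, which matches the answer's remark that the conditions are suitable for programming.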
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof by induction with an nxn-matrix The matrix A $ \in \mathbb{R}^{n\times n}$ is of the form \begin{equation*} A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ & & \ddots & \ddots & 0 \\ \vdots & & & \ddots & 1 \\ 0 & \cdots & & \cdots & 0 \\ \end{pmatrix} \end{equation*} Now I want to compute $e^{tA}$ and $e^{tA} = \sum_{k=0}^{\infty} \frac{1}{k!}\cdot (tA)^{k}$. I observed that $A^{2}$ is equal to the matrix A only with the "diagonal" of ones moved 1 up. \begin{equation*} A^{2} = \begin{pmatrix} 0 & 0 & 1 & \cdots & 0 \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ & & \ddots & \ddots & 1 \\ \vdots & & & \ddots & 0 \\ 0 & \cdots & & \cdots & 0 \\ \end{pmatrix} \end{equation*} And $A^{3}$ is equal to the matrix A with the "diagonal" of ones moved 2 up. So to compute $e^{tA}$ I want to prove that $A^{n}$ is equal to the matrix A with the "diagonal" of ones moved up n-1 times, which results in $A^{n} = 0$. I have tried to prove it with induction. So claim: If A $ \in \mathbb{R}^{n\times n}$ is of the above form, then $A^{n} = 0$. For n = 2, the 2 x 2 matrix is equal to: \begin{equation*} A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \end{equation*} And $A^{2} = 0$. So the claim holds for n = 2. But I don't know how to do the induction step in this case. Other ways to compute $e^{tA}$ with this matrix A are also welcome.
Prove the following by induction on $n$: Lemma: Let $A$ be an $m \times m$ matrix of any size $m$ such that $$A_{ij}= \left\{ \begin{array}{lc} 1 &\mbox{ if } j-i=1 \\ 0 & \mbox{ otherwise} \end{array} \right. $$ Then, for all $n$ we have $$(A)^n_{ij}= \left\{ \begin{array}{lc} 1 &\mbox{ if } j-i=n \leq m \\ 0 & \mbox{ otherwise} \end{array} \right. $$ The inductive step is immediate $$ (A)^{n+1}_{ij}=(A)^n_{i1}A_{1j}+(A)^n_{i2}A_{2j}+...+(A)^n_{im}A_{mj} $$ Just note here that $$ (A)^n_{ik}A_{kj}=0 $$ unless $k-i=n$ and $j-k=1$. Alternate proof Just observe that $A$ is the matrix corresponding to the Linear Transformation $T: \mathbb R^m \to \mathbb R^m$. $$ T(e_k)=\left\{ \begin{array}{lc} e_{k+1} &\mbox{ if } k<m \\ 0 & \mbox{ if } k=m \end{array} \right. $$ Prove by induction on $n$ that $$ T^n(e_k)=\left\{ \begin{array}{lc} e_{k+n} &\mbox{ if } k+n \leq m \\ 0 & \mbox{ if } k+n>m \end{array} \right. $$
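Both formulations are easy to check numerically for a small $m$; the following sketch verifies the shifting of the superdiagonal, the nilpotency $A^m = 0$, and the resulting finite sum for $e^{tA}$, whose $(i,j)$ entry is $t^{j-i}/(j-i)!$ for $j \ge i$:

```python
import math

def matmul(X, Y):
    m = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

m = 4
A = [[1 if j - i == 1 else 0 for j in range(m)] for i in range(m)]

# A^n has ones exactly where j - i = n, until everything vanishes at n = m.
P = A
for n in range(2, m + 1):
    P = matmul(P, A)
    assert all(P[i][j] == (1 if j - i == n else 0)
               for i in range(m) for j in range(m))
assert all(v == 0 for row in P for v in row)  # A^m = 0

# Hence e^{tA} is a finite sum of m terms; entry (i, j) is t^{j-i}/(j-i)!.
t = 0.5
E = [[sum(t**k * (1 if j - i == k else 0) / math.factorial(k)
          for k in range(m)) for j in range(m)] for i in range(m)]
assert abs(E[0][3] - t**3 / 6) < 1e-15
assert E[2][2] == 1.0
```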
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove that $(ab = -b )\land (b\ne 0)\implies a+1=0$ I am having some troubles proving the implication that if $ab=-b$, and $b$ is not equal to $0$, then $a=-1$. I am only supposed to use axioms of integers to prove the proposition. I have written a proof, though I am not sure if it achieves its purpose. I am looking for some help to improve my proof where it may be weak. Let $a,b$ be elements of integers. If $ab = -b$, then $a+1 = 0$ $ab = -b \implies ab+b = -b+b \implies ab+b = 0$ If $ab+b = 0$, then $ab+b = 0 = a+1$. $a+1 = 0 \implies b(a+1) = 0 \implies ba+b = 0 \implies ab+b = 0$. $ab+b = a+1 \implies b(a+1) = a+1 $ Here is where I get stuck. I need to prove $ab+b = a+1 \implies b(a+1) = a+1 \implies b=1$ Is there a way I can do so without division? So far I have this: $\implies (a+b)|(ba+b)=(a+b)|(a+b) \implies b=1$ Though I would like to avoid division if possible? If $b=1$, then $ab=-b \implies a(1)=-(1) \implies a=-1$ As proven, $ab = -b$, then $a+1=0$. If you have another approach to proving the implication, I'd really appreciate it if you could share it with me. I'm looking for some alternate methods to prove it.
$$ ab = -b $$ $$ ab + b = 0$$ Factoring with the distributive property, $$ b (a+1) = 0 $$ Since the integers have no zero divisors, for $b(a+1)$ to be zero either $b$ or $a+1$ has to be zero. Since it is given that $b$ is not zero, $a+1$ will be zero $$ a+1=0\implies a = -1 $$ Hence proved that $a+1$ is zero as well as $a = -1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometrical proof for property of chord of contact with a circle Let their be a circle with center O, and radius r. Let their be a point P outside the circle at a distance d from O. Let a variable line passing through P cut the circle in 2 points, A and B. We define a point C between A and B such that PC is harmonic mean of PA and PB. Prove that the locus of C is chord of contact of P with respect to the circle. I can do it through coordinate geometry, I just wanted to find a geometrical method. I tried looking at its inversion, and trying to find relations but I couldn't. Any help is appreciated. Thanks.
Let the circle have centre $O$.Let the points at which the tangents touch the circle be $D$ and $E$.A variable line through point $P$ cuts the circle at points $A$ and $B$. $DE$ cuts this line at $C$. So proving that $PC$ is the harmonic mean of $PA$ and $PB$ will prove the given statement. Let $M$ be the midpoint of $AB$. Then $OM\perp AP$. Now, observe that, pentagon $MODPE$ is cyclic. Hence, $PC\cdot CM=DC\cdot CE$. Also, since $DC\cdot CE=AC\cdot CB$, $AC\cdot CB=PC\cdot CM$. $AC=PA-PC$ $BC=PC-PB$ $CM=\frac {1}{2}(AC-BC)=\frac {1}{2}(PA+PB)-PC$ Plugging these values into the equation gives, $PC\{\frac {1}{2}(PA+PB)-PC\}=(PA-PC)(PC-PB)$ $\Rightarrow PC=\frac {2PA\cdot PB}{PA+PB}$ Hence, $PC$ is the harmonic mean of $PA$ and $PB$.
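Not part of the synthetic proof above: a numerical spot-check of the harmonic-mean relation on a concrete configuration (a sketch; the setup — unit circle, $P=(d,0)$, chord of contact the vertical line $x=1/d$ — and all names are my own choices).

```python
import math

d = 3.0                      # P = (d, 0), unit circle centred at O
P = (d, 0.0)

def check(theta):
    # Line through P with direction u = (cos t, sin t); intersect x^2+y^2 = 1:
    # |P + s u|^2 = 1  ->  s^2 + 2 s (P.u) + |P|^2 - 1 = 0.
    dx, dy = math.cos(theta), math.sin(theta)
    b = P[0] * dx + P[1] * dy
    c = P[0] ** 2 + P[1] ** 2 - 1.0
    disc = b * b - c
    if disc <= 0:
        return None          # line misses the circle
    s1 = -b - math.sqrt(disc)
    s2 = -b + math.sqrt(disc)
    PA, PB = abs(s1), abs(s2)
    # C = intersection of the line with the chord of contact x = 1/d.
    sC = (1.0 / d - P[0]) / dx
    PC = abs(sC)
    return PC, 2 * PA * PB / (PA + PB)   # PC vs. harmonic mean of PA, PB

# a few directions that do cross the circle (near theta = pi for P = (3,0))
results = [check(t) for t in (2.9, 3.0, 3.05, 3.1)]
```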
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is power series $\sum_{n=1}^\infty\frac{x^n}{n}$ uniformly convergent in $(-1,1)$ I know that if $\sum a_nx^n$ has radius of convergence R then this power series converges uniformly for every compact subset of $(-R,R)$ and now no doubt converges pointwise in $(-R,R)$ but little bit confused what can we say about uniform convergence in $(-R,R)$. As for example what can we say about uniform convergence of series $\sum_{n=1}^\infty\frac{x^n}{n}$ in (-1,1)?
Here's a mostly painless way to show unboundedness, in order to apply Arthur's hint. As you note $s = \sum_j x^j/j$ converges over compact sets, and moreover so does the series of term-wise derivatives $\sum_{j \geq 1}x^{j-1} = \sum_{j \geq 0}x^j = 1/(1-x)$. A standard theorem on term-wise differentiation of power series gives $s' = 1/(1-x)$ and so integrating and using that $s(0) = 0$ one gets $$ s(x) = -\ln(1-x) $$ and then $s(x) \xrightarrow{x \to 1^-} +\infty$.
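An editorial addition, not part of the answer: a small numerical illustration of why unboundedness kills uniform convergence (a sketch; each partial sum is bounded on $(-1,1)$ by a harmonic number, while the limit $-\ln(1-x)$ blows up near $1$).

```python
import math

def partial_sum(x, N):
    return sum(x ** n / n for n in range(1, N + 1))

# On a compact subset such as [-1/2, 1/2] the convergence is fast:
err_compact = abs(partial_sum(0.5, 60) - (-math.log(1 - 0.5)))

# But the limit s(x) = -ln(1-x) is unbounded as x -> 1^-:
values = [-math.log(1 - x) for x in (0.9, 0.99, 0.999, 0.9999)]

# For fixed N the partial sum is bounded on (-1,1), so the sup of
# |s - s_N| over (-1,1) cannot tend to 0; near x = 1 the gap is huge:
x = 1 - 1e-6
gap = -math.log(1 - x) - partial_sum(x, 50)
```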
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How do I show that some eigenvalues of two square matrices of different dimensions are the same? I have three matrices in a field $F$: $X \in F^{a,a}, Y \in F^{b,b}, Z \in F^{a,b}$, where $a,b \in \mathbb{N}, a \geq b$ and $\text{rank}(Z) = b$. The following term describes their relation: $X \cdot Z = Z \cdot Y$ I want to show that all eigenvalues of $Y$ are eigenvalues of $X$.
Note: Not all of the determinants you wrote down are well-defined, since $XZ$ and $ZY$ are not square if $a \neq b$. Also, as stated, this does not seem quite correct. Set $a = 2, b = 1$ and $$X = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}, \quad Y = (1), \quad Z = \begin{pmatrix} 1 \\ 0 \end{pmatrix},$$ then $a \geq b$, the rank of $Z$ is equal to $b = 1$, and $$XZ = \begin{pmatrix} 1 \\ 0 \end{pmatrix} = ZY.$$ Then $X$ has the eigenvalues 1 and 2, but $Y$ has the eigenvalue 1. Note that you can replace the 2 in $X$ with any other value. Generally, with problems like these it's a good idea to start by checking whether you can somehow make eigenvectors of one of the matrices into an eigenvector of the other one, and here this gives us some insight, as well. If $x \neq 0$ is an eigenvector of $Y$ to the value $\lambda$, then $$\lambda Zx = Z(\lambda x) = Z Yx = XZx,$$ hence, if $Zx \neq 0$, then it is an eigenvector of $X$ to the value $\lambda$. But the full rank condition on $Z$ and $a \geq b$ means that its kernel is trivial, so if $x \neq 0$ then also $Zx \neq 0$. So every eigenvalue of $Y$ is an eigenvalue of $X$. But, looking at the counterexample above, that's all that you can show from the given requirements. $X$ might have a bunch of other eigenvalues that you do not know anything about from the given data.
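The counterexample above is tiny enough to verify mechanically (a sketch added editorially; plain nested lists rather than a matrix library, and the eigenvalues of the diagonal $X$ are just read off its diagonal):

```python
def matmul(P, Q):
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[i][k] * Q[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

X = [[1, 0],
     [0, 2]]
Y = [[1]]
Z = [[1],
     [0]]

lhs = matmul(X, Z)          # X Z
rhs = matmul(Z, Y)          # Z Y

# X is diagonal with entries 1 and 2; Y has the single eigenvalue 1.
# XZ = ZY holds even though X has an eigenvalue that Y lacks, matching
# the conclusion: every eigenvalue of Y is one of X, but not conversely.
eigs_X = {X[0][0], X[1][1]}
eigs_Y = {Y[0][0]}
```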
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about example 7.1.7 Alaca/Williams. So they are trying to prove that given $K=\mathbb{Q(\sqrt{2}+i)}$: $O_K= \mathbb{Z}+\mathbb{Z}\sqrt{-1} + \mathbb{Z}\sqrt{2} + \mathbb{Z}(\frac{1}{2}(\sqrt{2}+\sqrt{-2}))$ You have that $K$'s subfields are $\mathbb{Q},\mathbb{Q}(\sqrt{-1}),\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{-2})$ Now,given $\alpha \in O_K$, if $\alpha$ belongs to any of the subfields, you can easily show that $\alpha \in \mathbb{Z}+\mathbb{Z}\sqrt{-1} + \mathbb{Z}\sqrt{2} + \mathbb{Z}(\frac{1}{2}(\sqrt{2}+\sqrt{-2}))$ Suppose that $\alpha$ does not belong to any of the subfields: Now here is what I do not get, they state that since $\alpha \in K$, you ca express it as: $\alpha = a + b i+ c\sqrt{2}+ d\sqrt{2}i$ with $a,b,c,d \in \mathbb{Q}$ (Why? Is this a general thing? Is it because the extension is Galois?) And then they proceed with my second doubt, they state that the conjugates of $\alpha$ are: $\alpha' = a - b i+ c\sqrt{2}- d\sqrt{2}i$ $\alpha'' = a + b i-c\sqrt{2}- d\sqrt{2}i$ $\alpha''' = a - b i- c\sqrt{2}+ d\sqrt{2}i$ I don't get why this is? I mean in this particular case the conjugate are directly related to the subfields, but I don't get how you get to those equations for the conjugates. This could just be a lot of operating though, this book has a habit of not showing any of the work (Although I get why, some of the work ommited is not at all trivial, for example, finding the subfields was a hassle, they just invoke them. And I am trying to do the work for this examples so when I get to do excercises I will have a better idea on how to solve them)
Okay so the first step is to convince yourself that $K = \Bbb Q(\sqrt{2},i)$, if that hasn't been done in the mentioned argument. This isn't hard, but it's far from obvious if you haven't done it before, so let's delve into it. Note that since $\sqrt{2}+i \in \Bbb Q(\sqrt{2},i)$, we already have $K \subset \Bbb Q (\sqrt{2},i)$. To achieve the equality, we can show that $\sqrt{2},i \in K$. But moreover, if we show that one of these is in $K$, since their sum is already an element of $K$, we get the other for free. So, by direct computation: $$ K \ni (\sqrt{2}+i)^3 = (1+2\sqrt{2}i)(\sqrt{2}+i) = -\sqrt{2}+5i, $$ and therefore $(\sqrt{2}+i)^3 + (\sqrt{2}+i) = 6i \in K$, hence $i \in K$ and $K = \Bbb Q(\sqrt{2},i)$. Thus we have a tower of extensions $K = \Bbb Q(i)(\sqrt{2}) - \Bbb Q(i) - \Bbb Q$, and there's a general way to compute a $k$-basis of $E$ if it lies in a tower $E - F -k$. Namely: consider a $k$-basis $\{x_i\}$ of $F$, an $F$-basis $\{y_j\}$ of $E$, and then do all possible products $\{x_iy_j\}_{i,j}$. Moreover, when the extension is already in the form $\Bbb Q(\alpha)$, it's not hard to show that a $\Bbb Q$-basis is $\{1,\alpha,\ldots, \alpha^{d-1}\}$ with $d = \deg m(\alpha, \Bbb Q)$. Thus as $\Bbb Q$-vector spaces we have $\Bbb Q(i) = \Bbb Q 1 + \Bbb Q i$ and $\Bbb Q(\sqrt{2}) = \Bbb Q 1+ \Bbb Q\sqrt{2}$, hence a $\Bbb Q$ basis for $K$ consists of all possible products of basic elements $$ B = \{1,\sqrt{2},i,\sqrt{2}i\}. $$ That solves the first question. As for the second: via our new characterization of $K$, we know that automorphisms are determined by the images of $i$ and $\sqrt{2}$, these have quadratic minimal polynomials, and so their images must be $\pm i$ and $\pm \sqrt{2}$ respectively. Hence we have $4$ different admissible morphisms, each corresponds to a different conjugate. Edit: we did not need any Galois theory but the extension is indeed Galois, as it is the splitting field of $\{X^2-2,X^2+1\}$ and its Galois group then must be the Klein four group. 
Explicit generators are $$\sigma(a+ib+c\sqrt{2}+di\sqrt{2}) = a+ib-c\sqrt{2}-di\sqrt{2}$$ and $$c(a+ib+c\sqrt{2}+di\sqrt{2}) = a-ib+c\sqrt{2}-di\sqrt{2},$$ i.e. the map that sends $i\mapsto i, \sqrt{2} \mapsto - \sqrt{2}$ and complex conjugation.
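A tiny numerical cross-check of the conjugate description, added editorially (a sketch; the quartic $x^4-2x^2+9$ used below is the minimal polynomial of $\alpha=\sqrt2+i$, obtained from $\alpha^2=1+2\sqrt2\,i$ and hence $(\alpha^2-1)^2=-8$, consistent with the four admissible morphisms $i\mapsto\pm i$, $\sqrt2\mapsto\pm\sqrt2$):

```python
sqrt2 = 2 ** 0.5

def minpoly(x):
    # alpha^2 = 1 + 2*sqrt(2) i, so (alpha^2 - 1)^2 = -8,
    # i.e. x^4 - 2 x^2 + 9 = 0 for every conjugate of alpha.
    return x ** 4 - 2 * x ** 2 + 9

# the four conjugates produced by the sign choices on i and sqrt(2)
conjugates = [s * sqrt2 + t * 1j for s in (1, -1) for t in (1, -1)]
residuals = [abs(minpoly(c)) for c in conjugates]
distinct = len({(round(c.real, 9), round(c.imag, 9)) for c in conjugates})
```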
{ "language": "en", "url": "https://math.stackexchange.com/questions/3997961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of binary square matrices whose every contiguous 2 by 2 submatrix has exactly 2 ones (zeroes) What is the number of binary $n\times n$ matrices whose every contiguous $2\times 2$ submatrix has exactly two ones (zeroes)? That is, if $a_{ij}\in\{0,1\}$ for $i,j\in\{1,2,\dots,n\}$ are matrix entries, then $$a_{i,j} + a_{i,j+1} + a_{i+1,j} + a_{i+1,j+1}= 2$$ for $i \in \{1,\dots,n-1\}$ and $j \in \{1,\dots,n-1\}$, where $n\ge 2$. After writing out some cases for small $n$, it appears to me that the answer might be $2^{n+1}-2$. Is there an elegant way to prove this? For example, for $n=2$ we have $6$ matrices $$ \begin{bmatrix} 1 &0 \\ 1 &0 \\ \end{bmatrix}, \begin{bmatrix} 1 &1 \\ 0 &0 \\ \end{bmatrix}, \begin{bmatrix} 1 &0 \\ 0 &1 \\ \end{bmatrix}, \begin{bmatrix} 0 &1 \\ 1 &0 \\ \end{bmatrix}, \begin{bmatrix} 0 &0 \\ 1 &1 \\ \end{bmatrix}, \begin{bmatrix} 0 &1 \\ 0 &1 \\ \end{bmatrix}. $$ For example, for $n=3$ we have $14$ matrices: * *$4$ rotations of $\begin{bmatrix} 1 &0 &1 \\ 1 &0 &1 \\ 0 &1 &0 \\ \end{bmatrix}$, $2$ rotations of $\begin{bmatrix} 1 &0 &1 \\ 1 &0 &1 \\ 1 &0 &1 \\ \end{bmatrix}$, $1$ matrix $\begin{bmatrix} 1 &0 &1 \\ 0 &1 &0 \\ 1 &0 &1 \\ \end{bmatrix}$. *One additional $(J-M)$ matrix for every of $7$ matrices $M$ from previous case. $J$ is a matrix filled with $1$'s at every entry.
We do casework on the first row of the $n\times n$ table: * *The first row alternates, i.e. it is $1,0,1,0,\dotsb$ or $0,1,0,1,\dotsb$ *The first row contains two equal consecutive entries. Call a row with no two equal consecutive entries a simple row; there are exactly $2$ simple rows of each length. In the first case the answer is exactly $2^n$: if the first row is simple, every later row must be simple as well, and conversely any two simple rows stacked on each other give $2\times 2$ sums equal to $2$, so each of the $n$ rows can be chosen independently among the $2$ simple rows, giving $2^n$ tables. In the second case the rest of the table is constructed uniquely: the two cells under a pair of equal consecutive entries $v,v$ of the first row must sum to $2-2v$, so both are forced to be $1-v$, and from these two forced cells the rest of the second row is determined cell by cell by the $2\times 2$ condition. Moreover the second row again has two equal consecutive entries (the two forced cells are equal), so the third row is determined in the same way, and so on. So the number of tables of the second kind equals the number of non-simple first rows, which is $2^n-2$, and the answer is $2^n+2^n-2=2^{n+1}-2$
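A brute-force check of the count $2^{n+1}-2$ for small $n$, added editorially (a sketch; exhaustive enumeration is feasible up to $n=4$, where there are $2^{16}$ candidate matrices):

```python
from itertools import product

def count_valid(n):
    total = 0
    for bits in product((0, 1), repeat=n * n):
        a = [bits[i * n:(i + 1) * n] for i in range(n)]
        if all(a[i][j] + a[i][j + 1] + a[i + 1][j] + a[i + 1][j + 1] == 2
               for i in range(n - 1) for j in range(n - 1)):
            total += 1
    return total

# expected: 2^(n+1) - 2, i.e. 6, 14, 30 for n = 2, 3, 4
counts = {n: count_valid(n) for n in (2, 3, 4)}
```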
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Compute the integral $\int_{\delta_{B_1(-2)}} \frac{1}{z(z+2)^3}dz$ Could you please help me to solve this integral. I have no idea on how to proceed. Thank you. I think, i should rewrite $\frac{1}{z(z+2)^3}$ as a sum from a geometric series and then use the Cauchy integral formula, but I don´t see a trick.. $\int_{\delta_{B_1(-2)}} \frac{1}{z(z+2)^3}dz=\oint\frac{\frac{1}{z}}{(z+2)^3}dz$ Then $\oint\frac{\frac{1}{z}}{(z+2)^3}dz=\left(\frac{2\pi i }{2!}\right)\left[\frac{d^2f(z)}{dz^2}\right]_{z=-2}$ $=\left(\frac{2\pi i }{2!}\right)\left[\frac{d^2}{dz^2}(\frac{1}{z})\right]_{z=-2}$ $=\left({\pi i}\right)\left[{\frac{2}{(z)^3}}\right]_{z=-2}$ $=\color{blue}{-\frac{1\pi i}{4}}$ Is that correct? *Solution (geometric progression) $\int_{\delta_{B_1(-2)}} \frac{1}{z(z+2)^3}dz$ $\frac{1}{z(z+2)^3} = 2*(1-(1+\frac{z}{2}))^{-1}*\frac{1}{z(z+2)^3}$ $=\frac{2}{z(z+2)^3}*\frac{1}{1-(1+\frac{z}{2})}$ We know, that $\frac{1}{1-(1+\frac{z}{2})}=\sum_{n=0}^∞ {(1+\frac{z}{2})^n}=1+(1+\frac{z}{2})^1+(1+\frac{z}{2})^2+(1+\frac{z}{2})^3+ ...)$ So we get $\frac{2}{z(z+2)^3}*\frac{1}{1-(1+\frac{z}{2})}=\frac{2}{z(z+2)^3}*(1+(1+\frac{z}{2})^1+(1+\frac{z}{2})^2+(1+\frac{z}{2})^3+ ...)=$ $=\frac{4}{(z+2)^3}+\frac{z}{(z+2)^3}+\frac{1/2}{(z+2)}+\frac{1}{4}+\frac{2+z}{8}+...$ => $f(z)=1/2, z=-2 => 2\pi if(z)=2\pi i*1/2=\pi i$ In this solution I have $\color{blue}{\pi i}$ And in the top solution I have another answer $\color{blue}{-\frac{\pi i}{4}}$. What's my mistake?
The only enclosed pole is a third-order one at $z=-2$, so the residue is $\tfrac12\lim_{z\to-2}\tfrac{d^2}{dz^2}[(z+2)^3f(z)]$. I leave you to evaluate this, then multiply by $2i\pi$.
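An editorial addition: a quick numerical check of the value $-\pi i/4$ (a sketch; the trapezoidal rule on a smooth periodic integrand converges extremely fast, and the pole at $z=0$ lies outside $|z+2|=1$):

```python
import cmath, math

def f(z):
    return 1.0 / (z * (z + 2) ** 3)

N = 2000
total = 0j
for k in range(N):
    t = 2 * math.pi * k / N
    z = -2 + cmath.exp(1j * t)                 # the circle |z + 2| = 1
    dz = 1j * cmath.exp(1j * t) * (2 * math.pi / N)
    total += f(z) * dz

# residue at -2 is (1/2) d^2/dz^2 [1/z] at z = -2, i.e. -1/8,
# so the integral should be 2*pi*i * (-1/8) = -pi*i/4
expected = -1j * math.pi / 4
```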
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\sigma$-algebra intuition So, i am trying to learn measure theory, for applications in probability theory. However, i am having some issues fully understanding the definition of a $\sigma$-algebra. I am working with the following definition of a $\sigma$-algebra: * *Definition: Let $\Omega \ne \emptyset$. Then $\mathcal{B} \subseteq \mathcal{P}(\Omega)$ is a $\sigma$-algebra if * *(i) $ \Omega \in \mathcal{B} $ *(ii) if $B \in \mathcal{B}$, then $B^c \in \mathcal{B}$ *(iii) If $B_n \in \mathcal{B}$ for all $n \in \mathbb{N}$ then $\cup_{n=1}^{\infty}B_n \in \mathcal{B}$ I do not fully understand what is going on in (iii). My intuition would say, that it should mean "If a set is in the sigma-algebra, then every possible union you could make with other sets in the sigma-algebra, is also in the sigma-algebra", is this correct? Just looking at the notation i would think that it means that the union of all sets in the sigma-algebra, should also be in the sigma-algebra, but that would just return $\Omega$ as in (i), so that cannot be correct?
The key word which is not written in the definition is countable. A $\sigma$-algebra is intuitively a subset of $\mathcal{P}(\Omega)$ which is stable under countable "operations" (such as union, intersection and complementation) on its elements. In particular, (iii) does not refer to the single union of all sets in $\mathcal{B}$: it says that for every countable choice of sets $B_1, B_2, \dots$ from $\mathcal{B}$ (repetitions allowed), the union $\cup_{n=1}^\infty B_n$ again lies in $\mathcal{B}$ — so your first intuition, that every countable union you can form with sets of the $\sigma$-algebra stays in it, is the correct one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
find an unbiased estimator of $\sigma$ based on $\sum X_i^2$ Let $X_1,...X_n$ be iid drawn from $N(\sigma,\sigma^2)$ where $\sigma\gt0$ is the unknown parameter. Find an unbiased estimator of $\sigma$ based on $\sum X_i$, find an unbiased estimator of $\sigma$ based on $\sum X_i^2$, and use these results to determine the minimal sufficient statistic $T=(\sum X_i, \sum X_i^2)$ is complete? It's easy to find an unbiased estimator based on $\sum X_i$. $E\left(\frac {\sum X_i} n\right)=\sigma$, so $\bar X$ is an unbiased estimator of $\sigma$. But $E(X_i^2)=var(X_i)+(E(X_i))^2=2\sigma^2$, so it's hard to get $E(\sum X_i^2)$ to be $\sigma$. $E(\frac {\sum X_i^2)} {2n})=\sigma^2$, but how can you spit out a $\sigma$? Neither $\sqrt {\frac {\sum X_i^2)} {2n}}$ nor $\frac n 2\cdot \frac {\sum X_i^2}{\sum X_i}$ seem to work, because you can't put the expectation into the square root, and I don't think you can split an expectation across a numerator and denominator. Or can you? Am I even allowed to include $\sum X_i$ here? Help
Denote $Y_n = \frac{1}{2n} \sum_{i=1}^n X_i^2$ with $X_i$ iid and following the normal distribution $N(\sigma,\sigma^2)$. We will construct an unbiased estimator of the form $\theta(Y_n,n) = \sqrt{Y_n}g(n)$ The estimator $\theta(Y_n,n)$ must satisfy the 2 following conditions for all $n$ $$E(\theta(Y_n,n))= \sigma \tag{1}$$ and $$\theta(Y_n,n) \xrightarrow{n \rightarrow +\infty} \sigma \tag{2} $$ For the second condition (2), we know already that $\sqrt{Y_n}\rightarrow \sigma$, so it suffices to construct $g(n)$ such that $g(n)\xrightarrow{n \rightarrow +\infty} 1 \tag{3}$ For the first condition (1), we have $Y_n = \frac{\sigma^2}{2n} \sum_{i=1}^n (\frac{X_i}{\sigma})^2 =\frac{\sigma^2}{2n} Z_n$ with $Z_n$ following the noncentral chi-squared distribution $\chi^2_{n,n}$ of $n$ degrees of freedom and non-centrality parameter $\lambda = n$. Condition (1) is thus equivalent to $$E(\sigma\frac{1}{\sqrt{2n}}\sqrt{Z_n}g(n))= \sigma$$ or $$g(n)=\frac{\sqrt{2n}}{E(\sqrt{Z_n})}=\frac{\sqrt{2n}}{E(\sqrt{\chi^2_{n,n}})} \tag{4}$$ For information, $E(\sqrt{\chi^2_{n,n}})$ has a semi-analytical form $$E(\sqrt{\chi^2_{n,n}})=\sqrt{2}\Gamma(\frac{1+n}{2}) \text{Hypergeometric1F1Regularized}(-\frac{1}{2},\frac{n}{2}, -\frac{n}{2}) \tag{5}$$ Now, it suffices to check whether $g(n)$ in (4) satisfies condition (3). I did check with Mathematica and effectively, we have $$\frac{\sqrt{2n}}{E(\sqrt{\chi^2_{n,n}})}\xrightarrow{n \rightarrow +\infty} 1 $$ (for information, you can also check this by using Wolfram Alpha by typing the formula (5), for example with $n=1000$ here). Conclusion: One of the unbiased estimators of $\sigma$ from $\sum_{i=1}^n X_i^2$ is $$\frac{\sqrt{2n}}{E(\sqrt{\chi^2_{n,n}})} \sqrt{\frac{1}{2n}\sum_{i=1}^n X_i^2} $$
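A Monte Carlo sanity check of the correction factor $g(n)=\sqrt{2n}/E(\sqrt{\chi^2_{n,n}})$ tending to $1$, added editorially (a sketch; the sample sizes, tolerances and helper names are arbitrary choices of mine, and the check is statistical rather than exact — note $\sigma$ scales out of $Z_n$, so we may simulate with $\sigma=1$):

```python
import random

random.seed(0)

def g_estimate(n, samples=4000):
    """Monte Carlo estimate of sqrt(2n) / E(sqrt(Z_n)), where
    Z_n is a sum of n squared N(1,1) draws (sigma scaled out)."""
    acc = 0.0
    for _ in range(samples):
        z = sum(random.gauss(1.0, 1.0) ** 2 for _ in range(n))
        acc += z ** 0.5
    mean_sqrt = acc / samples
    return (2 * n) ** 0.5 / mean_sqrt

g_small = g_estimate(5)      # noticeable Jensen-gap correction for small n
g_large = g_estimate(200)    # should already be very close to 1
```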
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can I do this to retrieve the Euler numbers from the power series of $\sec(x)$ I know that we can derive the general formula series for functions like $\tan(x)$ or $\sec(x)$ elegantly using the compact sigma notation. I am still very clumsy with manipulating with these compacts so instead I just divide 1 for $\cos(x)$ (since $\sec(x)=\frac{1}{\cos(x)}$) and I obtain the following power series: $\sec(x)=\color{green}{1}+\dfrac{\color{green}{1}x^2}{2!}+\dfrac{\color{green}{5}x^4}{24}+\dfrac{\color{green}{61}x^6}{720}+\dfrac{277x^8}{8064}+\dfrac{50521x^{10}}{3628800}...$ We know that the compact and useful general formula for $\sec(x)$ is $$\sum_{n=0}^{\infty}\dfrac{(-1)^{n}E_{2n}x^{2n}}{(2n)!}$$ The even Euler numbers are $E_0=1$, $E_2=1$, $E_4=5$, $E_6=61$, $E_8=1385$, $E_{10}=50521$, $E_{12}=2702765...$ So the colored number matches the Euler numbers But as you can see, due to the long division, and using symbolab to sum up terms in between during the process of long division, the fifth term $\dfrac{277}{8064}x^8$ is obscured, and it doesn't show the eighth Euler number, which is $1385$. So, in order to improve the visibility, I reason as followed: The Taylor formula tells us that the 8th term of this power series of $\sec(x)$ must be $\dfrac{f^{8}(0)}{(8!)}x^8$, the denominator of $\dfrac{277}{8064}x^8$ must be a number $x$ that is multiplied with $8!$ to produce 8064, in other words, $8!x=8064$. I then divide $\frac{8064}{8!}$ and obtain $\frac{1}{5}$. This means multiply the numerator with $5$, you get $277\cdot{5}=1385$, the $E_8$. To confirm this, I simply use the compact formula to calculate the 8th power and I get $\dfrac{1385}{40320}$, simplify both for $5$, I get the simplified version made by symbolab. 
To further confirm this, I repeat the process for the $12th$ term of this series, which is $\dfrac{540553}{95800320}$, take $\frac{95800320}{12!}$, I get $5$, multiply this to the numerator, and I obtain the 12th Euler number $2702765$ My question is, is this the right way to retrieve the original numbers after simplification made by symbolab? Why does this work?
You were right. To see why the method works, compare each term with the series given by Symbolab, i.e. $$ \dfrac{E_{2n}}{(2n)!} = \dfrac{a}{b} $$ in lowest terms. You are multiplying by the number $c$ such that $bc = (2n)!$, and if you put this in the equation you can easily see that $E_{2n} = ac$.
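The long division can be automated, which makes the retrieval transparent (an editorial sketch using exact rational arithmetic; the coefficient of $x^{2n}$ in $\sec x$, taken before any fraction simplification, times $(2n)!$ is $E_{2n}$ directly):

```python
from fractions import Fraction
from math import factorial

N = 13  # work with power series modulo x^N

# cos x = sum_m (-1)^m x^(2m)/(2m)!
cos = [Fraction(0)] * N
for m in range(0, N, 2):
    cos[m] = Fraction((-1) ** (m // 2), factorial(m))

# sec = 1/cos via power-series long division: enforce sec * cos = 1.
sec = [Fraction(0)] * N
sec[0] = Fraction(1)
for k in range(1, N):
    sec[k] = -sum(cos[j] * sec[k - j] for j in range(1, k + 1))

# E_{2n} is the coefficient of x^{2n} multiplied by (2n)!.
euler = [sec[2 * n] * factorial(2 * n) for n in range(6)]
```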
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{d}{dt} \Big|_{t=0}\mbox{tr}(e^{X+tY})=\mbox{tr}(e^XY)$ I’m asked to prove $\frac{d}{dt}\Big|_{t=0}\mbox{tr}(e^{X+tY})=\mbox{tr}(e^XY)$ for any $X,Y$ in $M_n(\mathbb{C})$. My attempt is to assume both $X$ and $Y$ are diagonalizable, and since the set of all diagonalizable matrices is dense in $M_n(\mathbb{C})$, if we can show this is true for diagonalizable matrices, then we are done. I expected this will somehow simplify the proof, but seems it does not work well unless I further assume $X,Y$ can be diagonalizable at the same time. Any suggestions on this?
The trace is a linear map (just the sum of diagonal values), so you can put the derivative inside the trace: $$\frac{d}{dt}\Big|_{t=0}tr(e^{X+tY})=tr\Big(\frac{d}{dt}\Big|_{t=0}e^{X+tY}\Big)$$ One step needs care: since $X$ and $Y$ need not commute, $\frac{d}{dt}e^{X+tY}$ is not simply $e^{X+tY}Y$; Duhamel's formula gives instead $$\frac{d}{dt}e^{X+tY}=\int_0^1 e^{s(X+tY)}\,Y\,e^{(1-s)(X+tY)}\,ds$$ But under the trace this does not matter: by cyclicity, $tr(e^{s(X+tY)}Ye^{(1-s)(X+tY)})=tr(e^{X+tY}Y)$ for every $s$, so the integral collapses and $$\frac{d}{dt}\Big|_{t=0}tr(e^{X+tY})=tr(e^{X+tY}Y)\Big|_{t=0}=tr(e^XY)$$
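An editorial addition: a finite-difference check with deliberately non-commuting matrices (a sketch; a truncated Taylor series stands in for a proper matrix exponential, which is fine here because the entries are small, and all names are mine):

```python
def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(P, Q, s=1.0):
    n = len(P)
    return [[P[i][j] + s * Q[i][j] for j in range(n)] for i in range(n)]

def expm(M, terms=40):
    """Taylor-series matrix exponential; adequate for small matrices."""
    n = len(M)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, M)]
        result = madd(result, term)
    return result

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

X = [[0.2, 0.5], [-0.3, 0.1]]
Y = [[0.4, -0.2], [0.7, 0.3]]        # X and Y do not commute

h = 1e-5   # central difference of t -> tr(e^{X+tY}) at t = 0
lhs = (tr(expm(madd(X, Y, h))) - tr(expm(madd(X, Y, -h)))) / (2 * h)
rhs = tr(matmul(expm(X), Y))
```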
{ "language": "en", "url": "https://math.stackexchange.com/questions/3998839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Why is the subspace topology same as the Order topology? Hi i am reading Topology by Munkres and there in the proof of theorem 27.1 it is written that Given $a<b$, let $\mathcal A$ be a covering of $[a,b]$ by sets open in $[a,b]$ in the subspace topology(which is the same as the order topology). I can't understand why is the subspace topology same as the order topology in this case? Is this always true? Btw i know how subspace topology and order topologies are defined. If $(X,\tau)$ is a topological space and $A$ be any subset of X then we define $\tau_A$ to be the set with subsets as : $\tau_A=\{U\cap A | U\in \tau\}$. And order topology is defined in a different way.So how can they be same here? For reference i am attaching the screenshot where i have highlighted the part where this is mentioned.
Well, in general to check that two topologies are the same, you need to check that an open set in one topology is also open in the other, and vice versa. Recall that in an ordered set, you define the order topology by declaring your open sets to be generated by a basis of open rays and open intervals; on $[a,b]$ (with its induced order) these basic sets are the intervals $(c,d)$ together with the rays $[a,d)$ and $(c,b]$. On the other hand, the subspace topology on the set $[a,b]$ has as a basis the intersections of open intervals of $\Bbb R$ with $[a,b]$, and these intersections are exactly the sets $(c,d)$, $[a,d)$, $(c,b]$ and $[a,b]$ itself. Thus, in this case both topologies have a common basis, so they agree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are the following function 1st derivatives continuous at (0,0)? Apologies: I've had to rephrase this question quite a few times Suppose the following function: $$f(x,y)=\frac{2x^3y^2}{x^4+y^2}$$ $$f(0,0)=0$$ Show it is differentiable/not differentiable at $(0,0)$ I attempted this question first by applying the definition of differentiability which yielded the following expression: $$\lim_{(x,y)\to(0,0)}\frac{f(x,y)}{\sqrt{x^2+y^2}} = \lim_{(x,y)\to(0,0)}\frac{2x^3y^2}{(x^4+y^2)\sqrt{x^2+y^2}}$$ If the above equals 0, then the function is infact differentiable at $(0,0)$. I can't seem to find a counterexample which proves that it is discontinuous so I attempted to prove its continuity by applying the $\epsilon-\delta$ definition of limits. Unfortunately, no matter whether I express the function in terms of polarcoordinates or not, I cannot bound the following $$|\frac{2x^3y^2}{(x^4+y^2)\sqrt{x^2+y^2}}| < \epsilon$$ for all $\epsilon$. I'd be greatly appreciative for a solution to this kind of problem!
Credits to Kavi Rama Murthy for pointing out the $|2x^2y|\le x^4+y^2$ inequality for me. Note that: * *$(x^2-|y|)^2 \geq 0 \implies x^4 - 2x^2|y| + y^2 \geq 0 \implies x^4 + y^2 \geq 2x^2|y| = |2x^2y|$ *$(|x|-|y|)^2 \geq 0 \implies x^2 - 2|xy| + y^2 \geq 0 \implies x^2 + y^2 \geq 2|xy| \implies x^2+y^2 \geq |xy| \implies \sqrt{x^2+y^2} \geq \sqrt{|xy|}$ So it turns out the function is differentiable at (0,0). Given $\epsilon > 0$, choose $\delta = \epsilon$. If $0 < \sqrt{x^2+y^2} < \delta$ then: $$\left|\frac{2x^3y^2}{(x^4+y^2)\sqrt{x^2+y^2}}\right| = \left|\frac{(2x^2y)(\sqrt{|xy|})(\sqrt{|xy|})}{(x^4+y^2)\sqrt{x^2+y^2}}\right| = \left|\frac{2x^2y}{x^4+y^2}\right|\cdot\frac{\sqrt{|xy|}}{\sqrt{x^2+y^2}}\cdot\sqrt{|xy|} \leq (1)(1)\sqrt{|xy|} \leq \sqrt{x^2+y^2} < \delta = \epsilon $$ Thus the definition of limits is satisfied, therefore $f(x,y)$ is differentiable at $(0,0)$
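An editorial addition: a numeric check of the bound (a sketch; the claim being tested is that the difference quotient $|f(x,y)|/\sqrt{x^2+y^2}$ never exceeds $\sqrt{x^2+y^2}$, including at points with $xy<0$, so it vanishes at the origin):

```python
import random, math

random.seed(1)

def f(x, y):
    return 2 * x ** 3 * y ** 2 / (x ** 4 + y ** 2)

max_excess = -1.0
for _ in range(20000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    r = math.hypot(x, y)
    if r == 0 or y == 0 and x == 0:
        continue
    ratio = abs(f(x, y)) / r
    # the epsilon-delta bound: ratio <= sqrt(|x y|) <= r
    max_excess = max(max_excess, ratio - r)
```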
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
fast string fixing Hi, I'm trying to prove something and I would appreciate some help. Given a string of parentheses ( "(", ")" ) of length $2n$, selected uniformly at random, prove that with probability $1-o(1)$ it is possible to turn it into a valid sequence of parentheses by changing at most $c\sqrt{n\log n}$ characters, where $c$ is a constant.
Given a string of parentheses, and $i\in \{1,\dots,2n\}$, let $o_i$ be the number of open parentheses in the first $i$ symbols of the string, and let $c_i$ be the number of closed parentheses in this prefix. There are two quantities of interest: * *Let $X=o_{2n}-c_{2n}$ be the excess of open parentheses over closed parentheses. *Let $Y=\max_{0\le i\le 2n}(c_i-o_i)$ be the worst case excess of closed parentheses over open parentheses in any prefix. I claim that you can make the string valid in $O(X+Y)$ moves. This can be done as follows: * *As long as there is an unmatched closed parenthesis, flip the leftmost unmatched closed parenthesis. This decreases $Y$ by at least one, and increases $X$ by two. *Then, until the numbers of parentheses of each type are equal, flip the rightmost open parenthesis. This decreases $X$ by two without changing $Y$. Therefore, all that remains is to show that $X+Y$ is at most $c\sqrt{n\log n}$ with probability at least $1-o(1)$. It is easy to see this is true of $X$, since $X$ is approximately normal with a mean of $0$ and a standard deviation of $O(\sqrt{n})$. The variable $Y$ is much trickier. However, using the reflection principle, you can show that $Y$ shares the same probabilistic bound. To give some details, identify parenthesis strings with lattice walks, where an open parenthesis is the step $(1,1)$ and a closed parenthesis is the step $(1,-1)$. Then $Y$ is the absolute value of the lowest height of the path, while $X$ is the final height of the path. You can then show that, for any $k\ge 0$, $$ P(Y> k)\le 2P(X> k)\tag{*} $$ This is because paths whose lowest height is at most $-k$ come in two flavors: * *Those paths whose final height is less than $-k$; by the symmetry of the walk, the probability of this is exactly $P(X>k)$. *Those whose final height is more than $-k$, but which attain $-k$ at some intermediate point. 
By taking the first point at which this path hits $-k$ and reflecting everything after that point (that is, switching $(1,-1)$ steps with $(1,1)$ steps), you get a path whose final height is less than $-k$. Since this process is reversible, the probability of these paths is at most $P(X>k)$.
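An editorial addition: a simulation of the repair procedure (a sketch; the function names and the concrete flip-count bound checked below — at most $|X|+Y+2$ flips — are my own bookkeeping, consistent with the $O(X+Y)$ claim in the answer):

```python
import random

random.seed(42)

def repair(s):
    """Greedy repair: fix prefix deficits left-to-right, then the surplus."""
    s = list(s)
    flips = 0
    bal = 0
    for i, ch in enumerate(s):
        bal += 1 if ch == '(' else -1
        if bal < 0:                  # leftmost unmatched ')': flip it
            s[i] = '('
            bal += 2
            flips += 1
    # now bal >= 0 is the surplus of '(' over ')'; flip rightmost opens
    for i in range(len(s) - 1, -1, -1):
        if bal == 0:
            break
        if s[i] == '(':
            s[i] = ')'
            bal -= 2
            flips += 1
    return ''.join(s), flips

def is_valid(s):
    bal = 0
    for ch in s:
        bal += 1 if ch == '(' else -1
        if bal < 0:
            return False
    return bal == 0

def stats(s):
    bal, low = 0, 0
    for ch in s:
        bal += 1 if ch == '(' else -1
        low = min(low, bal)
    return bal, -low            # X = final excess, Y = worst prefix deficit

results = []
for _ in range(200):
    n = 50
    s = ''.join(random.choice('()') for _ in range(2 * n))
    X, Y = stats(s)
    fixed, flips = repair(s)
    results.append((is_valid(fixed), flips <= abs(X) + Y + 2, flips))
```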
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ring homomorphism $f: \Bbb{Z}_p[X] \to \Bbb{Z}_p^{n\times n}$. Prove it is never bijective and find the amount of elements in Im$(f)$. Let $f: \mathbb{Z}_p[X] \to \mathbb{Z}_p^{n \times n}$ with $n \geq 2$ be a ring homomorphism with $f(1) = I_n, f(X) = M$. a) Prove it is never injective and never surjective. b) Find the minimal polynomial and the amount of elements $ \operatorname{Im}(f)$ with $p=3, n=6$ and with $M$ given: $f(X)= M = \left( \begin{matrix} 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 0 & 1 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 2 & 0 & 0 \\ 0 & 0 & 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \end{matrix} \right).$ a) I first proved that $f$ is fully determined by $f(X)$. $f(p(X)) = f(\sum_{i=1}^n a_iX^i) = \sum_{i=1}^n f(a_i)f(X^i) = \sum_{i=1}^n a_if(X)^i$, so we see that indeed $f$ is determined by the value of $f(X)$. (These operations are allowed since $f$ is a ring homomorphism). Then I tried proving that it is never injective. Since $M \in \mathbb{Z}_p^{n \times n}$, I know that there exists a $q \in \mathbb{N}$ such that $M^q = 0$ with $q$ a divisor of $p$. Consider $p(X) = X^i$, then $f(p(X)) = p(f(X)) = p(0) = 0$, which means that $p(X) \in \text{ker}(f)$ and since the kernel is not equal to $\{0\}$ we know it is not injective. I am not sure how to prove that it is never surjective. b) I think that the minimal polynomial is equal to $\phi_M(X) = (X-2)^2(X-1)$ since there are 2 Jordan blocks for eigenvalue 2 and 1 for eigenvalue 1. I don't really know how to calculate the amount of elements that $ \operatorname{Im}(f)$ has.
For a): Your proof of non-injectivity is not correct. For example if $M$ is the identity matrix, then $M^k=I\ne0$ for all $k$. One way would be to argue via cardinalities: $\Bbb Z_p[X]$ is infinite while $\Bbb Z_p^{n\times n}$ is finite, so .... A bit cleaner/more general argument would be to look at the dimension. $\Bbb Z_p^{n\times n}$ has dimension $n^2$, hence the set $\{I,M,\dots,M^{n^2}\}$ is linearly dependent, so there are coefficients $a_0,\dots,a_{n^2}$, not all equal to $0$, such that $$a_0I+\dots+a_{n^2}M^{n^2}=0$$ Now see how you can make a non-zero polynomial $p$ out of this such that $p(M)=0$. Hint for the surjective part: $\Bbb Z_p[X]$ is commutative while $\Bbb Z_p^{n\times n}$ is not. For b): For the minimal polynomial you need to look at the size of the largest Jordan block for each eigenvalue and not the number of Jordan blocks. Assume that we found the minimal polynomial $\phi$ and say it is of degree $d$. We know that $\ker f=\langle\phi\rangle$, hence $$\operatorname{im} f\cong \Bbb Z_p[X]/\langle\phi\rangle$$ In general for a field $K$ and a polynomial $p$ over $K$ of degree $n$ we have that $K[X]/\langle p\rangle$ is a $K$-vector space of dimension $n$ (if you don't know this try to prove this, e.g. by explicitly writing down a basis). So in this case we know that $\operatorname{im} f$ is a $\Bbb Z_p$-vector space of dimension $d$. What does this tell us about the cardinality?
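For part b) the structure can be confirmed computationally (an editorial sketch; arithmetic over $\Bbb Z_3$ with plain lists). Reading off the given $M$, the largest Jordan block for the eigenvalue $2$ has size $3$ (rows 1–3), which — following the hint about block sizes — suggests the minimal polynomial is $(X-2)^3(X-1)$ of degree $4$ rather than the degree-3 guess, and hence $|\operatorname{im} f| = 3^4 = 81$:

```python
from itertools import product

p, n = 3, 6
M = [[2, 0, 0, 0, 0, 0],
     [1, 2, 0, 0, 0, 0],
     [0, 1, 2, 0, 0, 0],
     [0, 0, 0, 2, 0, 0],
     [0, 0, 0, 1, 2, 0],
     [0, 0, 0, 0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

def madd(A, B):
    return [[(A[i][j] + B[i][j]) % p for j in range(n)] for i in range(n)]

def scal(c, A):
    return [[(c * v) % p for v in row] for row in A]

I = [[int(i == j) for j in range(n)] for i in range(n)]
Zero = [[0] * n for _ in range(n)]

A2 = madd(M, scal(p - 2, I))            # M - 2I (mod 3)
A1 = madd(M, scal(p - 1, I))            # M - I  (mod 3)

# (X-2)^2 (X-1) does NOT annihilate M, while (X-2)^3 (X-1) does:
too_small = matmul(matmul(A2, A2), A1)
annihilator = matmul(matmul(matmul(A2, A2), A2), A1)

# deg(min poly) = 4, so im(f) is the Z_3-span of {I, M, M^2, M^3}:
powers = [I, M, matmul(M, M), matmul(matmul(M, M), M)]
span = set()
for coeffs in product(range(p), repeat=4):
    S = Zero
    for c, P in zip(coeffs, powers):
        S = madd(S, scal(c, P))
    span.add(tuple(map(tuple, S)))
```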
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
The negation of $\exists x \forall y \neg \forall z(P(x,y) \iff Q(x,y) \land R(x,y,z))$ I am trying to understand the negation of $\exists x \forall y \neg \forall z(P(x,y) \iff Q(x,y) \land R(x,y,z))$ As a side example if I have a statement $\neg \forall x P(x)$, then this is equivalent to $\exists x \neg P(x)$ where the negation is $\forall x P(x)$, motivating what I think is the answer to my overall question. Then I believe the answer should be: $\forall x \exists y \forall z(P(x,y) \iff Q(x,y) \land R(x,y,z))$ (i.e simply remove the $\neg$ from the $\forall z$) but I am a bit paranoid that the answer could be: $\forall x \exists y \forall z(P(x,y) \iff \neg Q(x,y) \lor \neg R(x,y,z))$, passing the negation through to the inner bracket. Any insights which is correct appreciated.
Your initial feeling is right:$$\neg\exists x\forall y\neg\forall z\phi(x,\,y,\,z)\iff\forall x\neg\forall y\neg\forall z\phi(x,\,y,\,z)\iff\forall x\exists y\forall z\phi(x,\,y,\,z).$$Alternatively,$$\neg\exists x\forall y\neg\forall z\phi(x,\,y,\,z)\iff\neg\exists x\neg\exists y\forall z\phi(x,\,y,\,z)\iff\forall x\exists y\forall z\phi(x,\,y,\,z).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Distance between a point $x\in H$ and $H\cap \overline{B}(0,1)$, with $H$ a hyperplane Let $X$ be a normed space. Let $f:X\to\mathbb{R}$ be a linear functional, and consider the hyperplane $H=f^{-1}(r)$, for some $r>0$. Let $x\in H$ be such that $1<\left\|{x}\right\|\le 1+\epsilon$, for some $\epsilon>0$. If $B$ is the unit closed ball and $H\cap B\neq\emptyset$, my question is: Is it true that $d(x,H\cap B)\le\epsilon$ ? In any case, I am trying to prove that $d(x,H\cap B)$ has to be "small", so $2\epsilon$ instead of $\epsilon$ would do as well. But I'm still unable to find a suitable $y\in H\cap B$ such that $d(x,y)$ is small. In $\mathbb{R}^2$ (visually speaking), it becomes obvious that such a $y$ has to exist, but I'm stuck when $X$ is any normed space.
The answer is no, and here is a counterexample in $X=\mathbb R^2$, with the norm $\|(x,y)\|=\max\{|x|,|y|\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
solve the Bernoulli equation xy' - y = xy^2 Solve the Bernoulli equation $xy' - y = xy^2$. I started with dividing both sides by $x$, and ended up with $y' - \frac{y}{x} = y^2$. Then, I divided both sides by $y^2$ and got $\frac{y'}{y^2} - \frac{1}{xy} = 1$. Can someone help me finish this problem?
Start over. Divide through by $y^2$. Express in terms of differentials (dy and dx). The LHS is an exact differential.
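To spell out the hint (my completion; $C$ and $C_1$ are arbitrary constants, and $y \equiv 0$ is the singular solution lost when dividing by $y^2$): dividing $xy' - y = xy^2$ through by $y^2$ gives
$$\frac{x y'}{y^{2}} - \frac{1}{y} = x, \qquad\text{and}\qquad d\!\left(-\frac{x}{y}\right) = \left(\frac{x y'}{y^{2}} - \frac{1}{y}\right)dx,$$
so integrating both sides,
$$-\frac{x}{y} = \frac{x^{2}}{2} + C \quad\Longrightarrow\quad y = \frac{2x}{C_{1} - x^{2}}, \qquad C_{1} = -2C.$$
One can check directly that $xy' - y = \dfrac{4x^{3}}{(C_{1}-x^{2})^{2}} = xy^{2}$, as required.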
{ "language": "en", "url": "https://math.stackexchange.com/questions/3999972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the expected value and variance of $f\left(x\right)=\left(\frac{1}{2}\right)^{x},\ x=1,2,3,\ldots$ Find the expected value and variance of $f\left(x\right)=\left(\frac{1}{2}\right)^{x},\ x=1,2,3,\ldots$ where $X$ is a discrete random variable. I try $E\left[X\right]=\sum_{i=1}^{k}x_{i}f\left(x_{i}\right)=\sum_{i=1}^{\infty}x_{i}\left(\frac{1}{2}\right)^{x_{i}}=1\cdot\left(\frac{1}{2}\right)+2\cdot\left(\frac{1}{2}\right)^{2}+3\cdot\left(\frac{1}{2}\right)^{3}+\cdots$ $$E\left[X\right]=1\cdot\left(\frac{1}{2}\right)+2\cdot\left(\frac{1}{2^{2}}\right)+3\cdot\left(\frac{1}{2^{3}}\right)+\cdots$$ That sum converges to $2$, so $$E\left[X\right]=2$$ $Var\left(X\right)=E\left[\left(X-\mu\right)^{2}\right]=E\left[X^{2}\right]-E\left[X\right]^{2}$ $E\left[X^{2}\right]=\sum_{i=1}^{\infty}x_{i}^{2}\left(\frac{1}{2}\right)^{x_{i}^{2}}\approx0.7678$ $E\left[X\right]^{2}=4$ Not sure if those are the right steps in order to solve the problem. I know it is wrong due to the negative resulting variance.
Should be $E(X^2)=\sum x_i^2 (\frac 1 2)^{x_i}$. Do not plug the function of x into the density.
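To see what the correct sums evaluate to, here is a quick numerical check (truncating the series; this is the geometric distribution with $p = 1/2$, so the exact values are $E[X]=2$, $E[X^2]=6$, $\operatorname{Var}(X)=2$):

```python
# Numerical check of the moments of P(X = k) = (1/2)^k, k = 1, 2, 3, ...
# The tail beyond k = 200 is negligible (of order 200 / 2^200).
terms = range(1, 200)
EX  = sum(k    * 0.5**k for k in terms)   # E[X]   -> 2
EX2 = sum(k**2 * 0.5**k for k in terms)   # E[X^2] -> 6
var = EX2 - EX**2                         # Var(X) -> 2
print(EX, EX2, var)
```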
{ "language": "en", "url": "https://math.stackexchange.com/questions/4000108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the limits of integration for the volume of a region inside a cube Premise We are restricted to the region $x, y, z \in [f, 1]$ where $0 \leq f < 1$. The surface $y^2=4xz$ divides this cube into two regions. We are interested in finding the volume of the region where $y^2 - 4xz \geq 0$. My Attempt Let $t=\frac{y^2}{4}$ for brevity. When $f=0$, it is relatively easy to figure out the limits. We can separate the region into two where $x \leq t$ and $x > t$. In the first case, we have $y^2 - 4xz \geq 0$ for all $z \in [0, 1]$ because $xz \leq x\cdot 1 \leq t$. Similarly, in the second case, we have $y^2 - 4xz \geq 0$ for all $z \in [0, \frac{t}{x}]$ because $xz \leq x\cdot \frac{t}{x} = t$. Applying these limits, we have $$ \begin{align} V &= \int_{0}^{1}{dy\int_{0}^{t}{dx\int_{0}^{1}{dz}}} + \int_{0}^{1}{dy\int_{t}^{1}{dx \int_{0}^{\frac{t}{x}}{dz}}}\\ &= \int_{0}^{1}{dy\int_{0}^{t}{dx\cdot (1)}} + \int_{0}^{1}{dy\int_{t}^{1}{dx\cdot \frac{t}{x}}}\\ &= \frac{1}{12} + \frac{1 + 3\log{2}}{18} \\ &\approx 0.254 \end{align} $$ However, when $f > 0$, I cannot seem to understand the proper limits for $z$. Is it $f$ to $\frac{t}{x}$ or is the lower limit $f^2$? What happens when $f$ is less than or equal to $\frac{y^2}{4}$? I tried a few times, but the results ended up being negative, which is clearly wrong as volumes are nonnegative. Specific Questions Primary objective I would like help in figuring out the limits in the case where $f>0$. Secondary objectives * *Is this approach sensible? Perhaps there is a better way to solve this problem? *Are there any glaring oversights? *What books/courses/reference should I follow to be able to solve these types of problems?
We need to find the volume of the part of the cube formed by $x, y, z \in [f, 1], 0 \leq f \lt 1$, with the condition $y^2 \geq 4xz$ (outside the cone). For the cube, the upper limit for $x, y, z$ is $1$. For the cone, please note that when $x = z = 1$, $y = 2$ (I am only considering the first octant, given our cube is in the first octant), but as we are restricted by $y = 1$ for the cube, we have $4xz \leq 1 \implies x, z \leq \min (\frac{1}{4f}, 1)$ for $f \ne 0$. At the same time, for points outside the cone, $y^2 \geq 4f^2 \implies y \geq 2f$ for $x = z = f$, and $y \geq 2 \sqrt f \ \gt f$ (as $f \lt 1$) for $x = 1, z = f$ or $z = 1, x = f$. These are important observations for finding the right limits for our volume integral. What they indicate is that if $f \leq 0.25$, the following $3$ vertices of the cube are outside the cone and the other $5$ are inside - $(f,1,f),(f,1,1), (1,1,f)$. In fact, at $f = 0.25$, $(f,1,1)$ and $(1,1,f)$ are on the cone and only $(f,1,f)$ is outside. Also note that at $f = 0.5$, $(f,1,f)$ is on the cone and the rest are inside, so for $f \geq 0.5$ the volume is zero (all points of the cube are inside the cone, $y^2 \leq 4xz$). So here is the integral to find the desired volume i) $0 \leq f \leq 0.25$ (the first integral is zero at $f = 0.25$) $\displaystyle \int_{f}^{0.25} \int_{f}^{1} \int_{2\sqrt{xz}}^{1} dy \ dx \ dz + \int_{0.25}^{1} \int_{f}^{1/(4z)} \int_{2\sqrt{xz}}^{1} dy \ dx \ dz$ And your answer is indeed correct for $f = 0$. ii) $0.25 \leq f \leq 0.5$ (zero for $f = 0.5$) $\displaystyle \int_{f}^{1/(4f)} \int_{f}^{1/(4z)} \int_{2\sqrt{xz}}^{1} \ dy \ dx \ dz$ To your other specific questions: no, there does not seem to be anything wrong with your approach. But your approach did not extend to the other cases, probably because you did not visualize the surface well enough. A $3D$ sketch using an online tool helps. I literally had to keep a cube in front of me while answering the question.
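As a sanity check of the $f=0$ case, a rough Monte Carlo estimate (my sketch, not a proof — uniform sampling of the unit cube) reproduces the exact value $\frac{1}{12} + \frac{1 + 3\log 2}{18} \approx 0.2544$:

```python
# Monte Carlo estimate of the volume of { y^2 >= 4xz } inside the unit cube.
import random, math

random.seed(0)
N = 200_000
hits = 0
for _ in range(N):
    x, y, z = random.random(), random.random(), random.random()
    if y*y >= 4*x*z:
        hits += 1
estimate = hits / N
exact = 1/12 + (1 + 3*math.log(2))/18      # ≈ 0.25441
print(estimate, exact)                     # the two should agree to ~0.005
```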
{ "language": "en", "url": "https://math.stackexchange.com/questions/4000248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
how to prove $f^\prime(x)$ is not bounded? $f$ is a differentiable function in $(0, 1)$. The limit $\lim_{x \to 0^+}f(x)$ doesn't exist. Prove that $f^\prime(x)$ is not bounded in $(0, 1)$. I can see why it's true, but I am struggling proving it. I hope you guys can help me. Thank you!
Put $\alpha=\limsup_{x \to 0^{+}} f(x), \beta = \liminf_{x \to 0^{+}} f(x)$. We may choose strictly decreasing sequences $a_n, b_n \to 0^{+}$ with $a_n \neq b_n$ for each $n$, and with $f(a_n) \to \alpha, f(b_n) \to \beta$. Note the former condition implies $a_n - b_n \to 0$. Hence $$\limsup_{n \to \infty} \left|\frac{f(a_n) - f(b_n)}{a_n - b_n}\right| = \limsup_{n \to \infty} \overbrace{|\alpha - \beta|}^{>0}\left|\frac{1}{a_n - b_n}\right| = +\infty$$ and the result follows from the MVT.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4000407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the maximum error of the following polynomial interpolation, but the values are too small $$f(x) = \log_{10}(x)$$ Error formula: $$e \le \max_{x\in[1.35,\,1.45]} \left|(x-1.35)(x-1.37)(x-1.40)(x-1.45)\right| \cdot \max_{x\in[1.35,\,1.45]} \left|\frac{-\frac{6}{\ln(10)}\cdot\frac{1}{x^4}}{4!}\right| = \max_{x\in[1.35,\,1.45]} \left|(x-1.35)(x-1.37)(x-1.40)(x-1.45)\right|\cdot 0.0327$$ The problem is that in that range, the value of $(x-1.35)(x-1.37)(x-1.40)(x-1.45)$ is so small my calculator only displays zeros. I solved this another way by calculating the derivative and then factoring it, and using the factors on the zeroes. For factoring I used WolframAlpha. https://www.wolframalpha.com/input/?i=4%28x%5E3-4.1775x%5E2%2B5.81575x+-+2.69817%29+factor The result I got then was $$e \le 2.933\cdot 10^{-6} \cdot 0.0327 = 9.59091\cdot 10^{-8}$$ Is this correct? Is there any way of solving this without WolframAlpha?
For the four data points, the interpolating polynomial is $$P_3(x) = 0.416667 x^3 - 1.85 x^2 + 3.03996 x - 1.62717$$ The formula for the error bound is given by: $$E_n(x) = {f^{(n+1)}(\xi(x)) \over (n+1)!} \times (x-x_0)(x-x_1)...(x-x_n)$$ So, we have $$E_3(x) = {f^{(4)}(\xi(x)) \over 4!} \times (x-1.35)(x-1.37)(x-1.4)(x-1.45)$$ The fourth derivative of $f(x)$ is $$f^{(4)} = -\dfrac{6}{x^4 \ln(10)}$$ The error is given by $$E_3(x) = \max_{1.35 \le x \le 1.45}\left|-\dfrac{6}{24 \ln(10) x^4}\right| \max_{1.35 \le x \le 1.45}\left| (x-1.35)(x-1.37)(x-1.4)(x-1.45)\right|$$ For the first max, it is clear that it is at the left endpoint $$\max_{1.35 \le x \le 1.45}\left|-\dfrac{6}{24 \ln(10) x^4}\right| \approx 0.0326881 $$ We find the second max at $x\approx1.43287$ as $$\max_{1.35 \le x \le 1.45}\left| (x-1.35)(x-1.37)(x-1.4)(x-1.45)\right| \approx 2.93358×10^{-6}$$ Note, we use calculus by finding the derivative, setting it to zero and then testing the absolute value at each critical point and the endpoints. Thus $$ E_3(x) \le 9.5893156398 \times 10^{-8}$$ If we plot the interpolating polynomial and points, we have If we overlay the $P_3(x)$ and $\log_{10}(x)$ plots, we have The error seems to match the results and this makes sense because a log plot looks linear.
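The two maxima can also be confirmed numerically with a simple grid search (a cross-check of the numbers above, not a proof):

```python
# Grid search for max |(x-1.35)(x-1.37)(x-1.40)(x-1.45)| on [1.35, 1.45],
# and the resulting error bound; compare with 2.93358e-6 and 9.589e-8.
import math

nodes = [1.35, 1.37, 1.40, 1.45]

def w(x):
    prod = 1.0
    for t in nodes:
        prod *= (x - t)
    return abs(prod)

xs = [1.35 + i * (0.10 / 100000) for i in range(100001)]
max_w = max(w(x) for x in xs)                 # ≈ 2.93358e-6, near x ≈ 1.43287
max_f4 = 6 / (24 * math.log(10) * 1.35**4)    # |f''''|/4! is largest at x = 1.35
bound = max_f4 * max_w                        # ≈ 9.589e-8
print(max_w, bound)
```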
{ "language": "en", "url": "https://math.stackexchange.com/questions/4000610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extending the Spectral Theorem of Unbounded Self-Adjoint Operators on Infinite-Dimensional Hilbert Spaces I'm a physics student trying to do the maths of the Hilbert space in quantum mechanics with a bit more rigour than I'm accustomed to. I am trying to find ways to extend the spectral theorem for unbounded self-adjoint operators on an infinite dimensional Hilbert space to other operators. The form of the spectral theorem I'm working with is from Hall, which says that a self-adjoint operator is unitarily equivalent to a multiplication operator on some other Hilbert space. (Page 207, Theorem 10.10.) Specifically, I have an operator $\hat{H}$ on the infinite dimensional Hilbert space, which is not itself self-adjoint but is related to self-adjoint operators by at least two simple relations. First, both $$ \hat{H}\hat{\gamma}^0 \; \text{and} \; \hat{\gamma}^0\hat{H} $$ are self-adjoint, where $\hat{\gamma}^0$ can be written in block-diagonal form as $$ \hat{\gamma}^0 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. $$ It therefore has all sorts of useful properties, such as being bounded, Hermitian, invertible, unitary, and $(\hat{\gamma^0})^2=1$. I also have the knowledge that $$ (\hat{H})^2 $$ is self-adjoint. Is there any way of proving that $\hat{H}$ must be isomorphic (though presumably not in general by a unitary transformation) to a multiplication operator on a Hilbert space, from the fact that $\hat{H}^2$ and/or $\hat{H}\hat{\gamma}^0$ are? It feels like there might or might not be something obvious I'm missing. Would any further properties of $\hat{H}$ be needed?
I'd suggest checking Frederic Schuller's Youtube Channel. He covers those kinds of topics from a physics perspective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4000829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Elementary proof of Power Rule For differentiation $f'(x)=rx^{r-1}$ for $f(x)=x^r$ I could not find any elementary proof of the Power Rule for differentiation: Given $x,r \in \mathbb R$, $x>0$, and a function $f(x)=x^r$, its derivative is $f'(x)=rx^{r-1}$. Definition: Let a sequence of rational numbers $\{r_n\}$ tend to $r$; then $x^r=\lim_{r_n \to r}x^{r_n}$. All the proofs I have seen utilize the derivative of $\log x$, but I don't think this is elementary, because one has to show that for $e=\text{Euler's number}$, $t \in \mathbb R$ and $n \in \mathbb N$, $\lim_{n \to \infty}(1+\frac{1}{n})^n=\lim_{t \to 0}(1+t)^\frac{1}{t}=e$, which can be proved using the Power Rule. Of course the proof of the Power Rule is straightforward if one is familiar with the result of the Power Rule for rational $r$ and the theorem on differentiating the limit of a sequence of functions whose derivatives converge uniformly.
For real exponents in general, I don't think you can avoid the exponential function and the logarithm because the definition of $x^r$ tends to be $\exp(r\log x)$. However, for integer exponents there are a number of simple proofs. Here is a lesser-known one. Let $f:\mathbb{R}\mapsto\mathbb{R}$ be the function defined by $f(x)=x^n$ for all $x$, with $n \in \mathbb{N}$. We find that \begin{align} f'(a) &= \lim_{x \to a}\frac{f(x)-f(a)}{x-a} \\[6pt] &= \lim_{x \to a}\frac{x^n-a^n}{x-a} \\[6pt] &= \lim_{x \to a}\frac{(x-a)(x^{n-1}+x^{n-2}a+x^{n-3}a^2+\ldots+xa^{n-2}+a^{n-1})}{x-a} \\[6pt] &= \lim_{x \to a}x^{n-1}+x^{n-2}a+x^{n-3}a^2+\ldots+xa^{n-2}+a^{n-1} \\[6pt] &= a^{n-1}+a^{n-2}a+a^{n-3}a^2+\ldots+aa^{n-2}+a^{n-1}\\[6pt] &= \underbrace{a^{n-1}+a^{n-1}+a^{n-1}+\ldots+a^{n-1}+a^{n-1}}_{\text{$n$ terms}}\\[6pt] &= na^{n-1}\blacksquare \end{align} This proof can be extended to negative integer exponents by writing $x^{-n}$ as $1/x^n$ and using the quotient rule.
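As a numerical sanity check of the rule (finite differences only — the algebraic proof above is the real argument):

```python
# Central-difference check of (x^n)' = n x^(n-1) for a few integer n,
# including a negative exponent via the quotient-rule extension.
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

for n in [2, 3, 5, -2]:
    x = 1.7
    approx = deriv(lambda t: t**n, x)
    exact = n * x**(n - 1)
    print(n, approx, exact)      # each pair agrees to roughly 1e-9
```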
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
Finding a linear transformation such that $T((4,1)) = (0,21)$ and $T((1,5))=(-19,10)$ Finding a linear transformation such that $T((4,1)) = (0,21)$ and $T((1,5))=(-19,10)$ So, I'm trying to find multiple different ways to do this. I know that every linear transformation can be represented by a matrix, so I'm letting $A$ be a $2 \times 2$ matrix, defining $T(x) = Ax$, and trying to figure out what $A$ has to be. Is this a good way to go about this problem? I'm ending up with two separate systems of linear equations, each with two equations; thus I have four variables and four equations, so that's good I guess, but I feel that I'm getting lost in the weeds and there may be an easier way to do this... Would appreciate multiple perspectives on this one... Thank you!!
Let's do this the way you initially tried (and yes, it's a good way to go about the problem). Let $$A = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right].$$ We are given $$A \left[\begin{array}{c}4\\1 \end{array} \right] = \left[\begin{array}{c} 0\\21 \end{array} \right], \text{ and}$$ $$A \left[\begin{array}{c} 1\\5 \end{array} \right] = \left[\begin{array}{c} -19\\10 \end{array} \right].$$ That is, we have $$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[\begin{array}{c}4\\1 \end{array} \right] = \left[\begin{array}{c} 0\\21 \end{array} \right], \text{ and}$$ $$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \left[\begin{array}{c} 1\\5 \end{array} \right] = \left[\begin{array}{c} -19\\10 \end{array} \right],$$ giving us the system $$4a+b=0$$ $$4c+d=21$$ $$a+5b=-19$$ $$c+5d=10.$$ Focussing on the system of two equations with the $a$ and $b$ variables, i.e. the system $$4a+b=0$$ $$a+5b=-19,$$ we have an easy system of two simultaneous equations with two variables. Solving this, we have $a=1$ and $b=-4$. Now do the same with the two equations involving the $c$ and $d$ variables, and you're done. Of course, the strategy outlined in the comments is equally valid, but you asked for multiple perspectives!
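If you ever want to check such a computation mechanically, a tiny Cramer's-rule solver (plain Python, my sketch) reproduces the same numbers:

```python
# Solve the two 2x2 systems from the answer by Cramer's rule.
def solve2(a11, a12, b1, a21, a22, b2):
    det = a11*a22 - a12*a21
    return ((b1*a22 - b2*a12) / det, (a11*b2 - a21*b1) / det)

a, b = solve2(4, 1, 0, 1, 5, -19)    # 4a + b = 0,   a + 5b = -19
c, d = solve2(4, 1, 21, 1, 5, 10)    # 4c + d = 21,  c + 5d = 10
print(a, b, c, d)                    # 1.0 -4.0 5.0 1.0
```

So $A = \left[\begin{smallmatrix} 1 & -4 \\ 5 & 1 \end{smallmatrix}\right]$, and one can verify directly that $A(4,1)^T=(0,21)^T$ and $A(1,5)^T=(-19,10)^T$.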
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
How did Euler obtain these two formulae? Are they correct? I am reading this book on trigonometric series, "Тригонометрические ряды от Эйлера до Лебега" (Trigonometric series from Euler to Lebesgue); it is in Russian, and my Russian is abysmal. But there is a very interesting formula, it is: $$\dfrac{1-r\cos(x)}{1-2r\cos(x)+r^2}=1+r\cos(x)+r^2\cos(2x)+r^{3}\cos(3x)+\cdots$$ $$\dfrac{r\sin(x)}{1-2r\cos(x)+r^2}=r\sin(x)+r^2\sin(2x)+r^{3}\sin(3x)+\cdots$$ As $r=1$ or $r=-1$, we have: $$-\dfrac{1}{2}=\cos(x)+\cos(2x)+\cos(3x)+\cdots$$ $$\dfrac{1}{2}=\cos(x)-\cos(2x)+\cos(3x)-\cdots$$ He integrated the second series and obtained the trigonometric series: $$\dfrac{x}{2}=\sin(x)-\dfrac{1}{2}\sin(2x)+\dfrac{1}{3}\sin(3x)-\cdots$$ The first and second series are divergent, so I think the result doesn't make sense, although the formal method of deriving it may seem correct. The third series is correct on the interval $(-\pi, \pi)$, because the series expansion converges to that function there (correct me if I am wrong here; I haven't studied Fourier series). I wish to ask how he obtained the above formulae. Are these formulae incorrect because of the issue of convergence? Also, which papers contain these results?
In Euler's time, the concept of convergence had not been properly defined. Much of his work lacks the rigor we are now used to. At that time, even renowned mathematicians used to manipulate divergent series freely and, at the end of their calculations, examine the result: if it made sense, they kept it; if not, they discarded it. They developed an intuition about what could and could not be done, but they did not have the theory to back up many of the algebraic manipulations they did. The formulas can be proved using $\displaystyle \cos(x) =\frac{e^{i x}+e^{-ix}}{2}$ and $\displaystyle\sin(x) =\frac{e^{i x} - e^{-ix}}{2i}$ to turn the sums into geometric series.
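For $|r| < 1$, where the series genuinely converge, the two generating-function identities are easy to check numerically (a quick sketch; $r = \pm 1$ is exactly the divergent boundary case):

```python
# Check  (1 - r cos x)/(1 - 2 r cos x + r^2) = 1 + sum r^n cos(nx)
# and    r sin x   /(1 - 2 r cos x + r^2) =     sum r^n sin(nx)
# for a sample r with |r| < 1.
import math

r, x = 0.5, 1.2
N = 200                                   # r^200 ~ 6e-61: tail is negligible
cos_series = 1 + sum(r**n * math.cos(n*x) for n in range(1, N))
sin_series = sum(r**n * math.sin(n*x) for n in range(1, N))
denom = 1 - 2*r*math.cos(x) + r*r
print(cos_series, (1 - r*math.cos(x)) / denom)
print(sin_series, r*math.sin(x) / denom)
```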
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Non-uniqueness of MLE of multivariate Laplace distribution? Suppose $\{X_i\}_{i=1}^n\overset{i.i.d}{\sim}X$, and $X\in\mathbb{R}^d$ has density, $$f_{\theta}(x)=c\exp\left\{-||x-\theta||\right\},\theta\in\mathbb{R}^d,$$ where $||\cdot||$ denotes the Euclidean norm. Show that the MLE $\hat{\theta}$ exists but is not unique when $n$ is even. I know how to prove that the MLE $\hat{\theta}$ exists, by noticing $\underset{\left\|\theta\right\|\rightarrow\infty}{\lim} \log f(\theta)=-\infty$, however I don't know how to prove it is not unique. I understand that the log-likelihood function, $l(\theta)=-\sum_{i=1}^n||x_i-\theta||$ (up to a constant), is a concave, but not strictly concave, function, but this does not guarantee the non-uniqueness. I also tried taking the first and second gradient of $l(\theta)$, but even with a negative semidefinite Hessian I still don't get the non-uniqueness... ---Update--- The original question is here. This is from Mathematical Statistics (Bickel).
The minimizer of such a sum of distances is given by the Geometric Median. For example, for $d = 1$ you get the median, which is not unique for an even number of distinct points. For higher dimensions you need to take care of the case where the points are collinear, which basically means that the problem is, again, equivalent to the 1D case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
finding the mean in normal distribution I am struggling on this question where I need to find the mean given the standard deviation. The question is the following: A certain brand of flood lamps has a length of life that is normally distributed, with a standard deviation of 392 hours. If 8 % of the lamps last more than 5600 hours, what is the mean length of life? This is what I've tried so far: let $X$ = length of life $X\sim\mathcal{N}(\text{mean = unknown }(u), SD = 392)$ $P(X > 5600) = 0.08$ $1 - P(X < 5600) = 0.08$ (stuck after this part) $1 - P(\frac{5600-u}{392} < 5600) = 0.08$ (incorrect) I am having trouble working backwards to find the mean. Could someone point me in the right direction?
$$\mathbb{P}[X>5600]=8\%=0.08$$ $$\mathbb{P}[X\leq 5600]=92\%=0.92$$ that is $$\Phi\Bigg[\frac{5600-\mu}{392}\Bigg]=0.92$$ Now using Z-table you get $$\frac{5600-\mu}{392}=1.4051$$ that is $$\mu=5049$$ How to read a Z-table I used a calculator. With Paper -table you get that the quantile corresponding to the probability of $0.92$ is between $1.40$ and $1.41$, more or less exactly in the middle...
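If you have Python 3.8+ at hand, the standard library reproduces the same numbers without a Z-table:

```python
# mu = 5600 - sigma * z, where z is the 0.92 quantile of the standard normal.
from statistics import NormalDist

z = NormalDist().inv_cdf(0.92)    # ≈ 1.4051
mu = 5600 - 392 * z               # ≈ 5049
print(z, mu)
```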
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Motion on a slope - A Level A particle P of mass 5kg and particle Q of mass 1kg are connected by a light inextensible string. P lies on a slope inclined at a 60 degree angle to the horizontal. The string passes from P parallel to the line of greatest slope, and runs over a light pulley that can freely rotate. It then descends vertically to Q. The coefficient of friction between P and the slope is 0.7. Find the force of friction acting on P and its direction, and find the acceleration of P when the system is released from rest. I was able to find the frictional force (17.15N up the slope) but I can't find the acceleration. I have tried to use the equation 5gsin(60)-17.15-9.8 = 5a, with 9.8 being the weight of Q, but I get the answer of 3.1m/s/s for a, but the mark scheme says that a (acceleration) is 2.58m/s/s and I don't see how. Could you please give me some advice on how to get 2.58?
Your setup is almost correct; the only slip is the mass on the right-hand side. The string is inextensible, so P and Q accelerate together, and Newton's second law applies to the combined system of mass $5+1=6\,\text{kg}$. The equation $$5g\sin60° -17.15-9.8=6a$$ then has the solution $\boxed{a=2.58\,\text{m}\,\text{s}^{-2}}$ as required. Hope this helps. Ask anything if not clear :)
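Plugging the numbers in (a quick check of the arithmetic, with $g = 9.8\,\text{m/s}^2$ as in the question):

```python
# Friction = mu * N = 0.7 * 5g cos(60°); then F = ma for the 6 kg system.
import math

g = 9.8
friction = 0.7 * 5 * g * math.cos(math.radians(60))              # = 17.15 N
a = (5 * g * math.sin(math.radians(60)) - friction - 1 * g) / (5 + 1)
print(friction, a)                                               # 17.15, ≈ 2.58
```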
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In general, can we prove that admissible quadratic polynomials must yield at least one prime value? By admissible, I'm referring to an integer-valued quadratic function with no common factors, i.e. one that meets the basic requirements to be able to produce primes. To be more specific, a quadratic polynomial $f(x):=ax^2+bx+c$ with $a>0$ where $x\in\mathbb Z \implies f(x) \in\mathbb Z$, and where there exist $m,n\in\mathbb Z$ with $\gcd(f(m),f(n))=1$ and $ m \neq n$. In short, a quadratic which has no a priori reason to be prime-free. First, I know the infinitude of quadratic primes is an open question, so I suspect this will be too, but I'll ask anyway. Do we know of any quadratic functions that should be able to produce primes yet they don't seem to? By this, I mean an admissible quadratic as described above but for which computer searches have been unable to find any prime values. I'm assuming not, in which case my primary question is: have we reached the point yet where we can prove, in general, that an admissible quadratic function as described above must yield at least one prime value? Or is that still out of reach?
Upon further thought, I realized that the answer must be no, we currently can't prove it. If we could, a proof of the infinitude of polynomial primes immediately follows, since you can always split one into arbitrarily many parts. For example, the first six values of $x^2+1$ are $\{2,5,10,17,26,37\}$. But by substituting in $3x-2,3x-1,3x$ to $x^2+1$, it can be completely split into * *$9x^2-12x+5$ yielding $\{2,17,\ldots\}$ *$9x^2-6x+2$ yielding $\{5,26,\ldots\}$ *$9x^2+1$ yielding $\{10,37,\ldots\}$ each of which is itself admissible and contains a prime. To ensure admissibility, you either have to split into $k$ pieces where $k$ is not a residue of the original polynomial, or factor out $k$ if it is a residue, but aside from that, I see no reason why this wouldn't work in general, and since I'm sure I'm not the first person to have noticed this, it means that we can't prove that an admissible quadratic must have at least one prime.
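The three-way split is easy to verify by machine over a small range (a quick check of the values, nothing more):

```python
# The substitutions x -> 3x-2, 3x-1, 3x should partition the values of x^2 + 1.
N = 50
original = [x*x + 1 for x in range(1, 3*N + 1)]
pieces = []
for f in (lambda x: 9*x*x - 12*x + 5,    # x -> 3x - 2
          lambda x: 9*x*x - 6*x + 2,     # x -> 3x - 1
          lambda x: 9*x*x + 1):          # x -> 3x
    pieces += [f(x) for x in range(1, N + 1)]
print(sorted(pieces) == sorted(original))   # True: an exact three-way split
```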
{ "language": "en", "url": "https://math.stackexchange.com/questions/4001869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $H$ be a Sylow $p$-subgroup of $S_{2p}$, with $p > 2$ a prime number, and let $N(H)$ be the normalizer of $H$. Find the cardinality of $N(H)$. I've got some trouble doing this exercise: Let $H$ be a Sylow $p$-subgroup of $S_{2p}$, with $p > 2$ a prime number. Let $N(H)$ be the normalizer of $H$. Find the cardinality of $N(H)$. I know that $H$ is generated by two disjoint $p$-cycles, so $H$ is isomorphic to $\mathbb{Z}_{p}\times \mathbb{Z}_{p}$, but I'm stuck.
We note that $N(H) = \text{Stab}(H)$ under the conjugation action of $S_{2p}$ on its Sylow $p$-subgroups. By the Orbit-Stabilizer Theorem, we have $n_{p} \cdot |N(H)| = |S_{2p}| = (2p)!$, where $n_{p}$ is the number of Sylow $p$-subgroups, so it suffices to determine $n_{p}$. First count the ordered pairs of disjoint $p$-cycles. We select the elements for the first $p$-cycle in $2p \cdot (2p-1) \cdot (2p-2) \cdots (p+1)$ ways; two $p$-cycles are equivalent if one is obtained from the other by cyclic rotation, so we divide by $p$ to get $2p(2p-1) \cdots (p+1)/p$ choices for the first $p$-cycle. Similarly, the second $p$-cycle is chosen in $p!/p = (p-1)!$ ways. By the rule of product there are $(2p)!/p^{2}$ such ordered pairs. However, distinct pairs can generate the same subgroup, so this is not yet $n_{p}$. A Sylow $p$-subgroup $H = \langle\sigma\rangle \times \langle\tau\rangle$ is determined by the partition of $\{1,\dots,2p\}$ into the supports of $\sigma$ and $\tau$, and the $p$-cycles lying in $H$ are exactly the nontrivial powers $\sigma^{i}$ and $\tau^{j}$. Hence each $H$ is generated by exactly $2(p-1)^{2}$ of the ordered pairs, and $$n_{p} = \frac{(2p)!}{2p^{2}(p-1)^{2}}, \qquad |N(H)| = \frac{(2p)!}{n_{p}} = 2p^{2}(p-1)^{2}.$$ As a sanity check, for $p=3$ this gives $n_{3} = 720/72 = 10 \equiv 1 \pmod 3$, consistent with Sylow's theorem, and $|N(H)| = 72$.
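A brute-force check for $p = 3$ (i.e. in $S_6$) is instructive here, because counting pairs of disjoint $p$-cycles is not the same as counting subgroups — several pairs generate the same subgroup. The plain-Python check below (my own sketch) finds $80$ generating pairs but only $10$ distinct Sylow $3$-subgroups, each arising from $2(p-1)^2 = 8$ pairs, and a normalizer of order $720/10 = 72$:

```python
# Enumerate S_6, its 3-cycles, the Sylow 3-subgroups generated by pairs of
# disjoint 3-cycles, and the normalizer of one such subgroup.
from itertools import permutations

n = 6
S6 = list(permutations(range(n)))            # all 720 elements of S_6, as tuples

def compose(a, b):                           # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(n))

def inverse(a):
    inv = [0] * n
    for i, ai in enumerate(a):
        inv[ai] = i
    return tuple(inv)

def support(a):
    return frozenset(i for i in range(n) if a[i] != i)

# A permutation of 6 points moving exactly 3 of them is necessarily a 3-cycle.
three_cycles = [g for g in S6 if len(support(g)) == 3]

identity = tuple(range(n))
subgroups = set()
pairs = 0
for a in three_cycles:
    for b in three_cycles:
        if support(a) & support(b):
            continue                         # not disjoint
        pairs += 1
        elems, x = set(), identity
        for _ in range(3):                   # x runs over a^0, a^1, a^2
            y = x
            for _ in range(3):               # y runs over x b^0, x b^1, x b^2
                elems.add(y)
                y = compose(y, b)
            x = compose(x, a)
        subgroups.add(frozenset(elems))      # the 9-element subgroup <a, b>

H = next(iter(subgroups))
normalizer = sum(1 for g in S6
                 if frozenset(compose(compose(g, h), inverse(g)) for h in H) == H)

print(len(three_cycles), pairs, len(subgroups), normalizer)   # 40 80 10 72
```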
{ "language": "en", "url": "https://math.stackexchange.com/questions/4002063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix Derivative Product Rule $\frac{d}{dx} \langle x,y\rangle Ax $ Let $A\in\mathbb{R^{n\times n}}$,$x,y\in\mathbb{R^{n}}$. What is the derivative of the following expression? $$ \frac{d}{dx} \langle x,y\rangle Ax $$ What I tried: Applying the product rule doesn't work because the first term (see next equation) is inconsistent dimensionally, or I am applying it in the wrong way. Let $f(x) = \langle x,y \rangle, g(x) = Ax$, then $$ \frac{d}{dx} f(x)g(x) = \frac{df(x)}{dx} g(x) + f(x)\frac{dg(x)}{dx} = xAx + \langle x,y \rangle A$$. Here the first term in the last equation doesn't make sense dimensionally, any help or suggestion would be appreciated.
Define the vector-valued function $$\eqalign{ f &= (y^Tx)(Ax) \\ }$$ and calculate its differential and gradient $$\eqalign{ df &= (Ax)(y^Tdx) + (y^Tx)(A\,dx) \\ &= \Big(Axy^T + (y^Tx)A\Big)\,dx \\ &= A\Big(xy^T + (y^Tx)I\Big)\,dx \\ \frac{\partial f}{\partial x} &= A\Big(xy^T + (y^Tx)I\Big) \\ }$$ which is another way to derive JG's result while avoiding index notation. Your error was to assume that the product rule is valid for the gradient of a vector-valued function like $f$, but it isn't. The problem is that a gradient operation changes the tensorial character of any term to which it is applied (i.e. it turns vectors into matrices), and matrix multiplication is inherently non-commutative. Index notation was developed to handle precisely such calculations, but with standard matrix notation you must proceed very cautiously.
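A finite-difference check of the final gradient formula (a small self-contained sketch; the example matrix and vectors are arbitrary choices of mine):

```python
# Verify  d/dx [ <x,y> A x ] = A x y^T + <x,y> A  on a 2D example.
A = [[1.0, 2.0], [3.0, 4.0]]
y = [1.0, -1.0]
x = [0.3, 0.7]

def f(x):
    s = y[0]*x[0] + y[1]*x[1]                     # <x, y>
    return [s*(A[0][0]*x[0] + A[0][1]*x[1]),
            s*(A[1][0]*x[0] + A[1][1]*x[1])]      # <x,y> A x

s = y[0]*x[0] + y[1]*x[1]
Ax = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
J = [[Ax[i]*y[j] + s*A[i][j] for j in range(2)]   # analytic Jacobian:
     for i in range(2)]                           # (A x) y^T + <x,y> A

Jnum = [[0.0, 0.0], [0.0, 0.0]]                   # numerical Jacobian
h = 1e-6
for j in range(2):
    xp = x[:]; xp[j] += h
    xm = x[:]; xm[j] -= h
    fp, fm = f(xp), f(xm)
    for i in range(2):
        Jnum[i][j] = (fp[i] - fm[i]) / (2*h)

print(J)
print(Jnum)     # matches J to ~1e-9
```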
{ "language": "en", "url": "https://math.stackexchange.com/questions/4002136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How can we prove that Gaussian integers are algebraic integers? I heard in a lecture that every gaussian integer is in fact a root of a monic 2nd degree polynomial, it has been a day now and I can't figure this out.
There are a few things to say here: It's a general fact that the sum of two algebraic numbers/integers is also one and clearly, $bi = b\sqrt{-1}$ and $a$ are algebraic integers so $a+bi$ is one too. But in this example, we can be more explicit: $a+bi$ is the root of the polynomial $x^2 - 2ax + (a^2+b^2) = 0$.
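One can also just check the claim numerically for small $a, b$ (trivial, but reassuring):

```python
# Verify that z = a + b i is a root of x^2 - 2 a x + (a^2 + b^2).
for a in range(-3, 4):
    for b in range(-3, 4):
        z = complex(a, b)
        val = z*z - 2*a*z + (a*a + b*b)
        assert abs(val) < 1e-9
print("all roots verified")
```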
{ "language": "en", "url": "https://math.stackexchange.com/questions/4002323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute the difference of three series (motivated by an integral formula) Let \begin{align} L&=2^{3 / 2} \pi^{4} \frac{1}{4} \sum_{n, m, k=1}^{\infty} n^{2} m a_{n} a_{m} b_{k}\left(\delta_{m, n+k}+\delta_{n, m+k}-\delta_{k, n+m}\right) \\[5pt] R_{1}&=2^{3 / 2} \pi^{4} \frac{1}{4} \sum_{n, m, k=1}^{\infty} n^{3 / 2} m^{3 / 2} a_{n} a_{m} b_{k}\left(\delta_{m, n+k}+\delta_{n, m+k}+\delta_{k, n+m}\right) \\[5pt] R_{2}&=\frac{1}{2} 2^{3 / 2} \pi^{4} \frac{1}{4} \sum_{n, m, k=1}^{\infty} n^{1 / 2} m^{1 / 2} k^{2} a_{n} a_{m} b_{k}\left(\delta_{m, n+k}+\delta_{n, m+k}-\delta_{k, n+m}\right), \end{align} where the $\delta$ denote the Kronecker delta. How can we compute $$L-R_1-R_2 \ ?$$ The motivation for this computation comes from an integration by parts formula analyzed on MathOverflow.
With the help of the Kronecker delta's we can carry out the summation over $k$, $$L-R_1-R_2=2^{3/2}\pi^4 \frac{1}{4}\sum_{n=1}^\infty\sum_{m=1}^\infty a_na_m\biggl[\bigl(m n^2-(m n)^{3/2}-(m-n)^2 \sqrt{m n}\bigr) b_{|n-m|}$$ $$-\bigl(m n^2+(m n)^{3/2}-(m+n)^2 \sqrt{m n}\bigr) b_{n+m}\biggr].$$ Without further knowledge of the coefficients this cannot be evaluated further.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4002492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find $\int_{0}^{\infty}\frac{x^{2}}{e^{x}+1}dx$ using known result for $\int_{0}^{\infty}\frac{x^{2}}{e^{x}-1}dx$ I need to find integral of $\int_{0}^{\infty}\frac{x^{2}}{e^{x}+1}dx$. From this Riemann_zeta_function , it gives $\zeta\left( s\right) =\frac{1}{\Gamma\left( s\right) }\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}dx$. Hence for $s=3$ this gives \begin{align} \int_{0}^{\infty}\frac{x^{2}}{e^{x}-1}dx & =\zeta\left( 3\right) \Gamma\left( 3\right) \nonumber\\ & =2\zeta\left( 3\right) \tag{1}% \end{align} My question is, how to use the above result to find integral of $\int% _{0}^{\infty}\frac{x^{2}}{e^{x}+1}dx$? Mathematica gives \begin{equation} \int_{0}^{\infty}\frac{x^{2}}{e^{x}+1}dx=\frac{3}{2}\zeta\left( 3\right) \tag{2} \end{equation} I tried to see if there is change of variable or other trick, but I do not see one. I do know about a direct method given in this answer here, but I want to see if it is possible, with some trick, to use the result from (1) to find (2).
Let $\displaystyle \mathcal I =\int_{0}^{\infty}\frac{x^{2}}{e^{x}-1}\,\mathrm{d}x $ and $\displaystyle \mathcal J = \int_{0}^{\infty}\frac{x^{2}}{e^{x}+1}\,\mathrm{d}x. $ You already know $\mathcal I =2\zeta\left( 3\right)$ but you seek $\mathcal J$. $\displaystyle \mathcal I =\int_{0}^{\infty}\frac{(2x)^{2}}{e^{2x}-1}\,\mathrm{d}(2x) = \int_0^\infty \frac{8x^2}{e^{2x}-1}\, \mathrm{d}x$ But $\displaystyle \frac{8x^2}{e^{2x}-1} = \frac{4x^2}{e^x-1}-\frac{4x^2}{e^x+1}$ Thus $\displaystyle \mathcal I = 4\int_{0}^{\infty} \frac{x^2}{e^x-1}\,\mathrm{d}x-4\int_{0}^{\infty} \frac{x^2}{e^x+1}\,\mathrm{d}x $ So $\mathcal I = 4 \mathcal I-4 \mathcal J $ and it follows that $\displaystyle \mathcal J =\frac{3}{2}\zeta(3). $
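As a numerical sanity check of $\mathcal J=\tfrac32\zeta(3)$ (a sketch of my own; composite Simpson's rule on $[0,60]$ and a truncated $\zeta(3)$ partial sum, where the cutoffs are arbitrary choices — the integrand is negligible beyond $x=60$):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

J = simpson(lambda x: x * x / (math.exp(x) + 1.0), 0.0, 60.0, 6000)
zeta3 = sum(1.0 / n**3 for n in range(1, 200001))
print(J, 1.5 * zeta3)   # both ≈ 1.803085...
```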
{ "language": "en", "url": "https://math.stackexchange.com/questions/4002633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to solve equation with multiple variables with each of them having different "weight"? I need to solve an equation where the goal is to buy a specific number of snacks for the lowest price. There are 7 types of snack packages, each of them contains a different number of snacks and has a different price, here is the list: * *A: 0.0051 $ / 4 snacks *B: 0.0102 $ / 4 snacks *C: 0.0204 $ / 8 snacks *D: 0.0408 $ / 17 snacks *E: 0.0816 $ / 35 snacks *F: 0.1632 $ / 58 snacks *G: 0.3264 $ / 58 snacks Now I need to know how to get the best "combo of packages" for the lowest price when buying 10 snacks, 20 snacks..... up to 100 snacks. (When 10 is needed, buying some extra is not a problem - such as buying 12 instead of 10.) I tried bruteforcing (such as starting from the maximum number of the cheapest package and then combining with other packages), but I think there must be some smarter way to solve this. :)
You want to minimize the total cost, i.e., $$0.0051A+0.0102B+0.0204C+0.0408D+0.0816E+0.1632F+0.3264G$$ under the condition that $$ 4A+4B+8C+17D+35E+58F+58G\geq T,$$ where $T$ is your target number of snacks. Given the prices and quantities of snacks per package, the solution will be such that $G=0$ (because $F$ includes the same number of snacks and it's half the price) and $C>0$ implies that the solution is not unique (as $B$ results in the same price per snack).
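For the original poster's computational question, the covering problem is small enough for exact dynamic programming. A sketch (my addition; package data copied from the question, prices in integer units of 0.0001 $ to avoid floating-point error — for these particular prices the optimum turns out to be all-$A$ purchases, since $A$ has the lowest price per snack):

```python
# (price in units of 0.0001 $, snack count) per package
packages = [(51, 4), (102, 4), (204, 8), (408, 17), (816, 35), (1632, 58), (3264, 58)]

def min_cost(target):
    # dp[s] = cheapest way to obtain at least s snacks (overshoot allowed)
    INF = float("inf")
    dp = [0] + [INF] * target
    for s in range(1, target + 1):
        dp[s] = min(cost + dp[max(0, s - qty)] for cost, qty in packages)
    return dp[target]

for t in range(10, 101, 10):
    print(t, "snacks:", min_cost(t) / 10000, "$")
```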
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there any analytic method for solving $\ddot{\phi} + 2\cot{\theta}{\dot\theta}{\dot\phi} =0$ Hi, I am trying to solve the following two equations: $$\ddot{\phi} + 2\cot{\theta}{\dot\theta}{\dot\phi} =0$$ $$\ddot{\theta}-\sin\theta\cos\theta{\dot{\phi}}^2=0$$ where $\dot{\theta}=\frac{d\theta}{dt}$ and $\dot{\phi}=\frac{d\phi}{dt}$ and $\ddot{\theta}=\frac{d^2\theta}{dt^2}$ and $\ddot{\phi}=\frac{d^2\phi}{dt^2}$. I am trying to solve this analytically and looked up the standard techniques but couldn't find out how to solve these. Basically I want $\phi$ in terms of $\theta$, that is, $\phi(\theta)$. Is there any way to do this?
Here is my attempt. Divide the first equation by $\dot\phi$ (assuming $\dot\phi\neq 0$); then we get $$ \frac{d\ln\dot\phi}{dt}+2\frac{d\ln(\sin\theta)}{dt}=0.$$ We then obtain the integral of motion $$\ln\dot\phi+2\ln\sin\theta=C.$$ Hence, we find $\dot\phi\sin^2\theta=e^C=B$. Plugging this in the second equation, we find $$ \ddot\theta-B^2\frac{\cos\theta}{\sin^3\theta}=0.$$ Multiplying this by $\dot\theta$, we find $$\frac12\frac{d\dot\theta^2}{dt}+\frac b2\frac{d\sin^{-2}\theta}{dt}=0,$$ where $b=B^2$. Then we get another integral of motion, a Hamiltonian $H$. $$\frac{\dot\theta^2}2+\frac b{2\sin^2\theta}=H.$$ I believe this is right, but I don't have time to check it carefully now. I hope this helps.
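The two conserved quantities can be checked numerically: the sketch below (my addition) integrates the original system with a hand-rolled RK4 stepper — the initial conditions are arbitrary, chosen so $\theta$ stays away from $0$ and $\pi$ — and verifies that $B=\dot\phi\sin^2\theta$ and $H$ stay constant along the trajectory.

```python
import math

def deriv(s):
    th, thd, ph, phd = s
    return (thd,
            math.sin(th) * math.cos(th) * phd**2,                  # theta''
            phd,
            -2.0 * (math.cos(th) / math.sin(th)) * thd * phd)      # phi''

def rk4(s, h):
    # one classical Runge-Kutta step of size h
    def shift(u, k, c):
        return tuple(ui + c * ki for ui, ki in zip(u, k))
    k1 = deriv(s)
    k2 = deriv(shift(s, k1, h / 2))
    k3 = deriv(shift(s, k2, h / 2))
    k4 = deriv(shift(s, k3, h))
    return tuple(si + h / 6 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

def invariants(s):
    th, thd, ph, phd = s
    B = phd * math.sin(th)**2
    H = 0.5 * thd**2 + B * B / (2 * math.sin(th)**2)
    return B, H

state = (1.0, 0.2, 0.0, 0.3)      # (theta, theta', phi, phi')
B0, H0 = invariants(state)
for _ in range(5000):
    state = rk4(state, 0.001)
B1, H1 = invariants(state)
print(B0 - B1, H0 - H1)           # both drifts ≈ 0
```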
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Question on colimit over the category of simplices I get slightly confused on the colimit definition: $$X \cong \lim_{\Delta^n \rightarrow X} \Delta^n $$ where the symbol $\lim$ is actually for denoting colimit. I know that colimit is defined for a diagram $F: J \rightarrow C$ as the initial object of the category $\text{CoCone}(F)$. What is the index category $J$ for the above-mentioned case? In the category of simplices, $\Delta \downarrow X$, the objects are given by natural transformations $\Delta^n \rightarrow X$ or $\Delta^m \rightarrow X$ and the morphisms by natural transformations such as $\Delta^n \rightarrow \Delta^m$ induced by $[n] \rightarrow [m]$. This should serve as a CoCone(?) of something. Is this way of thinking correct?
You are entirely correct that the index category is the category of simplices (Grothendieck's category of elements is an alternative name) of $X$. I think you have a misconception about what the data of a colimit depends on. Here we are misusing notation: a colimit depends on both the index category and the functor. Now the category of simplices $Elt(X)$ has a forgetful functor $U:Elt(X)\to \mathrm{Set}_{\Delta}$ which is given on objects by $$ (\Delta^n \to X) \mapsto \Delta^n. $$ Now let's explore what a cocone for this functor is. By definition a cocone on $U$ is an object $Y$ and a natural transformation $U\to \delta(Y)$, where $\delta$ is the functor $\mathrm{Set}_\Delta\to \mathrm{Fun}(Elt(X),\mathrm{Set}_\Delta)$ sending an object to the constant functor on it. Using the Yoneda lemma this can be seen to be the same as an $n$-simplex of $Y$ for each $n$-simplex of $X$, satisfying a naturality condition which heuristically says that this collection defines a simplicial subset of $Y$. Now it is clear that $X$ defines a cocone for the forgetful functor. So suppose $(Y,\alpha)$ is another cocone for $U$. We want to build a map $X\to Y$ making everything commute. We build it by sending $x \in X_n$ to $\alpha_{x}$, seen as an $n$-simplex in $Y$ by the Yoneda lemma. I leave it to you to show that this is well-defined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Reducing" splitting fields Let $f(X)=X^n+a_{n-1}X^{n-1}+\ldots+a_{1}X+a_{0} \in \Bbb Q[X]$, irreducible in $\Bbb Q[X].$ Let $K_{1},K_{2},\ldots,K_{n}$ be the $n$ roots of $f(X)$ in $\mathbb{C}$. Why is $\Bbb Q[K_{1},K_{2},\ldots,K_{n}] = \Bbb Q[K_{1},K_{2}, \ldots, K_{n-1}]?$ I think that I need to use the multiplicative property of dimensions of field extensions but I don't know if I'm right. Any help is appreciated!
The sum of the roots $K_1+\dots+K_{n-1}+K_n$ is $-a_{n-1}\in Q\subseteq Q[K_1,\dots, K_{n-1}]$. Also $K_1+\dots+K_{n-1} \in Q[K_1,\dots, K_{n-1}]$. So $K_n\in Q[K_1,\dots, K_{n-1}]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
what value of K does the system have a unique solution $\begin{cases}x_1 + kx_2 - x_3 = 2\\2x_1 - x_2 + kx_3 = 5\\x_1 + 10x_2 -6x_3= 1\\ \end{cases}$ So I have $\left[\begin{array}{ccc|c}1&K&-1&2\\2&-1&K&5\\1&10&-6&1\end{array}\right]$ I've been trying echelon form, where I took $R_2 = R_2 - 2R_1$ and $R_3 = R_3-R_1$ and reduced it. So I have $\left[\begin{array}{ccc|c}1&K&-1&2\\0&-1-2K&K+2&1\\0&10-K&-5&-1\end{array}\right]$ But now I am not sure how I could remove $10-K$ with $-1-2K$. Any help would be appreciated
Without solving the problem for you: you can ALWAYS proceed in the following way if you don't spot any easy factorisation. Suppose that, in your matrix, you have two values $a_{i,j}$ and $a_{i+1,j}$ you want to reduce in that way. You can ALWAYS exploit the property of linearity and proceed with the substitution $R_{i+1} = a_{i,j}R_{i+1} - a_{i+1,j}R_{i}$, provided that $a_{i,j} \neq 0$. In your case, proceed with the substitution: $R_3 = (-1-2K)R_3 - (10-K)R_2$, provided (i.e. check at the end whether the solution allows it) that $-1-2K \neq 0$.
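An alternative that sidesteps the elimination entirely: the square system has a unique solution exactly when the determinant of the coefficient matrix is nonzero. A sketch (my addition, using the system as stated, with second row $(2,\,-1,\,k)$; exact integer arithmetic):

```python
def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def delta(k):
    return det3([[1, k, -1],
                 [2, -1, k],
                 [1, 10, -6]])

# delta(k) expands to k^2 + 2k - 15 = (k + 5)(k - 3),
# so the solution is unique precisely when k != 3 and k != -5
print([k for k in range(-10, 11) if delta(k) == 0])   # → [-5, 3]
```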
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Evaluate $\int \frac{\left(x^2+x+3\right)\left(x^3+7\right)}{x+1}dx$. Evaluate: $$\int \frac{\left(x^2+x+3\right)\left(x^3+7\right)}{x+1}dx$$ The only thing I can think of doing here is long division to simplify the integral down and see if I can work with some easier sections. Here's my attempt: \begin{align} \int \frac{\left(x^2+x+3\right)\left(x^3+7\right)}{x+1}dx &= \int \left(\:x^4+3x^2+4x+3+\frac{18}{x+1} \right )dx \\ &= \int \:x^4dx+\int \:3x^2dx+\int \:4xdx+\int \:3dx+\int \frac{18}{x+1}dx \\ &= \frac{x^5}{5}+x^3+2x^2+3x+18\ln \left|x+1\right|+ c, c \in \mathbb{R} \end{align} The only issue I had is that the polynomial long division took quite some time. Is there another way to do this that is less time consuming? The reason I ask this is that, this kind of question can come in an exam where time is of the essence so anything that I can do to speed up the process will benefit me greatly.
If you do the substitution $x+1=t\iff x=t-1$ you only need do a (long) multiplication: $$\int \frac{\left(x^2+x+3\right)\left(x^3+7\right)}{x+1}dx= \int \frac{\left((t-1)^2+t+2\right)\left((t-1)^3+7\right)}{t}dt= $$ $$=\int \frac{\left(t^2-t+3\right)\left(t^3-3t^2+3t+6\right)}{t}dt=\int \frac{t^5-4 t^4+9 t^3-6 t^2+3 t+18}{t}dt=$$ $$=\int\left( t^4-4 t^3+9 t^2-6 t+3 +\frac{18}{t}\right)dt $$ And now it's simple.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4003737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
A bag contains 7 white balls and 3 red balls. A ball is taken and its colour noted. It is not replaced. A second ball... The question is: A bag contains 7 white balls and 3 red balls. A ball is taken and its colour noted. It is not replaced. A second ball is taken and its colour noted. Find the probability of obtaining two white balls. Solution: Let $A$ be the event 'the first ball is white'. Let $B$ be the event 'the second ball is white'. $P(A \cap B)=P(A) P(B \mid A)=\frac{7}{10} \times \frac{6}{9}=\frac{7}{15}$ I don't understand how $P(B \mid A)=\frac{6}{9}$. Intuitively, it makes sense: $\frac{6}{9}$ represents the probability that the second ball is white given that the first ball is white, and this is sampling without replacement. But $P(B \mid A)=\frac{n(A \cap B)}{n(A)}$, so am I to understand that $n(A)=9$ and $n(A \cap B)=2$? The fact that $n(A)=9$ and $n(A \cap B)=2$ doesn't make sense to me, even though $P(B \mid A)=\frac{6}{9}$ seems intuitively correct. Thanks..
$n(A \cap B)$ and $n(A)$ are not counting the number of balls in the bag. They are counting the number of ways to satisfy the conditions of $A \cap B$ and $A$. That is: $n(A \cap B)$ is the total number of all possible ways of getting both the first and second ball as whites. There are $7$ possible ways of getting the first white ball; then, after that white ball is removed, there are $6$ possible ways of getting the second white ball. So, $n(A \cap B) = 7 \cdot 6$. $n(A)$ is the total number of all possible ways of getting the first ball as white. The second ball can be anything. So, there are $7$ ways of getting the first white ball; then, after that first white ball is removed from the bag, $10 - 1 = 9$ balls remain. So there are $9$ ways of getting the second ball. Hence $n(A) = 7 \cdot 9$. Thus $n(A \cap B) / n(A) = (7 \cdot 6) / (7 \cdot 9) = 6/9$. And this brute force counting approach matches the more "intuitive" $P(B | A) = 6/9$.
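The counting above can be confirmed by brute-force enumeration of all ordered draws (a small sketch of my own, with the ten balls labelled 0–9 and the first seven taken as white):

```python
from itertools import permutations
from fractions import Fraction

balls = ["W"] * 7 + ["R"] * 3
draws = list(permutations(range(10), 2))      # all ordered draws without replacement

n_A  = sum(1 for i, j in draws if balls[i] == "W")                       # first ball white
n_AB = sum(1 for i, j in draws if balls[i] == "W" and balls[j] == "W")   # both white

print(n_A, n_AB)                              # 7*9 = 63 and 7*6 = 42
print(Fraction(n_AB, len(draws)))             # P(A ∩ B) = 42/90 = 7/15
print(Fraction(n_AB, n_A))                    # P(B | A) = 42/63 = 6/9 = 2/3
```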
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A basic question about prenex normal form I have come across the following formula: $$(\exists xAx\rightarrow\exists yBy)\rightarrow(\exists x\Box Ax\rightarrow\exists y\Box By)$$ I wonder what amounts to a prenex form of this formula, that is, putting all quantifiers in front of a quantifier-free formula. Following the prenex rules for implication, I obtain the following: $$\exists y\forall x(Ax\rightarrow By)\rightarrow\exists y\forall x(\Box Ax\rightarrow\Box By)$$ But now, $x$ and $y$ occur in both formulas. I don't know how to proceed. Can anyone help?
Here's one way to do it, step by step. $$ (\exists x A x \to \exists y B y) \to (\exists x \square A x \to \exists y \square B y) $$ replace $x \to y $ with $(\lnot x) \lor y$. $$ (\exists x A x \not\to \exists y B y) \lor (\forall x \lozenge \lnot A x \lor \exists y \square B y ) $$ Replace $x \not\to y$ with $x \land \lnot y$. $$ (\exists x A x \land \lnot \exists y B y) \lor \forall x \lozenge \lnot A x \lor \exists y \square B y $$ Move $\lnot$ inside the quantifier. $$ (\exists x A x \land \forall y \lnot B y) \lor \forall x \lozenge \lnot A x \lor \exists y \square B y $$ Rename the variables so that no bound variables have the same name. $$ (\exists z A z \land \forall w \lnot B w) \lor \forall x \lozenge \lnot A x \lor \exists y \square B y $$ None of the variables are shared between multiple subexpressions, so we can extract them in whatever order we want. $$ \exists z \mathop. \exists y \mathop. \forall w \mathop. \forall x \mathop. (Az \land \lnot B w) \lor \lozenge \lnot A x \lor \square B y $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
The value of $\sin\left(\frac{1}{2}\cot^{-1}\left(\frac{-3}{4}\right)\right)$ What is the value of this expression : $$\sin\left(\frac{1}{2}\cot^{-1}\left(\frac{-3}{4}\right)\right)$$ Using calculator and wolfram alpha, the answer is, $-\frac{1}{\sqrt{5}}$ But, by solving it myself the result comes out to be different. My solution is as follow: $$\begin{aligned} Put,\, &\cot^{-1}\left(\frac{-3}{4}\right) = \theta \\\implies &\cot(\theta) = \frac{-3}{4} =\frac{b}{p} \\So,\,&\cos(\theta) = \frac{-3}{5}\end{aligned}$$ $$\begin{aligned}\\Then, \\\sin\left(\frac{1}{2}\cot^{-1}\left(\frac{-3}{4}\right)\right) &= \sin\left(\frac{\theta}{2}\right) \\&= \sqrt{\frac{1-\cos{\theta}}{2}} \\&= \sqrt{\frac{1+\frac{3}{5}}{2}} \\&= \frac{2}{\sqrt{5}} \end{aligned}$$ This solution is used by a lot of websites. So, I got two different values of single expression but I am not sure which one is correct. Can you point out where I have done the mistake?
Both answers can be correct — the discrepancy comes from the convention used for $\cot^{-1}$ of a negative number, and the half-angle formula in principle allows four values: $$\sin\left(\frac{\theta}{2}\right)=\pm\sqrt{\frac{1-\cos\theta}{2}},\qquad \cos\theta=\pm\frac35,$$ $$\pm \sqrt{\dfrac{1\pm\frac35 }{2}}=\pm \dfrac{2}{\sqrt 5},\ \pm\dfrac{1}{\sqrt 5}.$$ With the principal value $\cot^{-1}\left(-\frac34\right)\in\left(\frac\pi2,\pi\right)$ (range $(0,\pi)$, as in your own solution) you have $\cos\theta=-\frac35$ and $\frac\theta2\in\left(\frac\pi4,\frac\pi2\right)$, which gives $+\frac{2}{\sqrt5}$. Calculators and Wolfram Alpha instead use $\cot^{-1}x=\arctan\frac1x\in\left(-\frac\pi2,0\right)$ for $x<0$, so $\cos\theta=\frac35$ with $\frac\theta2<0$, which gives $-\frac{1}{\sqrt5}$. So both answers are correct under their respective conventions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Convergence of Sum of a series $$\frac{(1+r)^{N+1}-(1+r)-rN}{r^2(1+r)^N}$$ I'm calculating the sum of a special series and finally got a general formula for the series. $r$ is a constant in the formula. And I'm curious whether this series will be convergent or not. Could someone explain how to calculate the limit as $N$ tends to infinity? Thank you very much.
Welcome to MSE! If you write $\alpha = 1+r$ to simplify notation you get $$ \frac{1}{(\alpha-1)^2} \left ( \alpha - \frac{\alpha}{\alpha^n} - \frac{\alpha n}{\alpha^n} + \frac{n}{\alpha^n} \right ) $$ But we know that * *$\frac{\alpha}{\alpha^n} \to 0$ when $|\alpha| > 1$ *$\frac{\alpha n}{\alpha^n} \to 0$ when $|\alpha| > 1$ *$\frac{n}{\alpha^n} \to 0$ when $|\alpha| > 1$ This is basically because $\alpha^n$ grows exponentially in $n$, which dominates any linear (or constant) growth. You can use L'Hôpital's rule if you want to verify this yourself. Then, applying these limits, we see that your limit converges if and only if $|\alpha| > 1$, and then it converges to $$\frac{\alpha}{(\alpha - 1)^2}$$ I'll leave it to you to convert this back into a formula in terms of $r$. If you like, you can code this up quite quickly in Sage and convince yourself that this formula is correct. Checking with $n \approx 500$ should be good enough to see the convergence for most values of $\alpha$. I hope this helps ^_^
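If Python is handier than Sage, the convergence is quick to check numerically (a sketch of my own; the formula requires $|1+r|>1$, e.g. any $r>0$):

```python
def partial(r, n):
    # the expression from the question
    return ((1 + r)**(n + 1) - (1 + r) - r * n) / (r**2 * (1 + r)**n)

def limit(r):
    # alpha / (alpha - 1)^2 with alpha = 1 + r
    return (1 + r) / r**2

for r in (0.5, 1.0, 3.0):
    print(r, partial(r, 200), limit(r))   # the two columns agree for large n
```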
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Shortest time to travel around quicksand You want to travel from one side of a quicksand $(1,0)$ pit to another side of the quicksand pit $(-1,0)$. The speed you can run is determined by how far you are away from the quicksand $|v(x,y)| = \sqrt{x^2+y^2}$. What is the optimal path you should travel such that the time taken to travel is minimized? In my attempted solution: * *Use cylindrical coordinates $(r,\theta)$ and find an expression for the total time taken $T$ *Attempt to minimize $T$ using Beltrami's identity *Solve the resulting differential equation to find $r(\theta)$ However, when I do this, I find $r = c_2 e^{c_1 \theta}$. This is strange because I would have expected a symmetrical path around the y axis. Also, when I plug in the start and end points I get nonsensical values for the constants $c_1$ and $c_2$. Can someone help me figure out what I'm doing wrong?
From $$ T = \int_{0}^{\pi}\sqrt{1+\left(\frac{r'(\theta)}{r(\theta)}\right)^2}d\theta $$ we obtain the Euler-Lagrange equations $$ r(\theta ) r''(\theta )-r'(\theta )^2=0 $$ with solution $$ r(\theta) = c_2e^{c_1\theta} $$ now applying the boundary conditions $$ \cases{ r(0) = c_2 = 1\\ r(\pi) = e^{\pi c_1}= 1 } $$ we determine $c_1 = 0, c_2 = 1$ so the minimum time orbit is the semi-circumference with radius $r = 1$
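A quick numerical check that the unit semicircle wins (my addition): writing $r=e^{u(\theta)}$ with $u(0)=u(\pi)=0$, the travel time becomes $T=\int_0^\pi\sqrt{1+u'(\theta)^2}\,d\theta\geq\pi$, with equality only for $u\equiv 0$, so any perturbed path costs more. A sketch comparing $u\equiv 0$ against one perturbation:

```python
import math

def travel_time(du, n=20000):
    # T = ∫₀^π sqrt(1 + u'(θ)²) dθ via composite Simpson's rule (n even)
    h = math.pi / n
    s = 0.0
    for i in range(n + 1):
        w = 1 if i in (0, n) else (4 if i % 2 else 2)
        s += w * math.sqrt(1.0 + du(i * h)**2)
    return s * h / 3

T_circle = travel_time(lambda t: 0.0)                  # r(θ) = 1
T_bulge  = travel_time(lambda t: 0.5 * math.cos(t))    # r(θ) = exp(0.5 sin θ)
print(T_circle, T_bulge)   # π ≈ 3.14159..., and something strictly larger
```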
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\int_{0}^{1}\frac{\|f\|_{L^p(0,x)}}{x}dx $ controlled by $\|f\|_{L^p(0,1)}$ Let $f\in L^{p}(0,1)$ for some $1<p<\infty$. Is $$\int_{0}^{1}\frac{\|f\|_{L^p(0,x)}}{x}dx $$ controlled by $\|f\|_{L^p(0,1)}$ ? Equivalently: Does $\frac{\|f\|_{L^p(0,x)}}{x}$ behave like $\frac{\|f\|_{L^p(0,1)}}{x^{\alpha}}$ for some $\alpha<1$ as $x\rightarrow 0^+$ ? I think it is stupid to try to prove the pointwise inequality $\|f\|_{L^p(0,x)}\leq C x^{1-\alpha}\|f\|_{L^p(0,1)}$. Simply because for each fixed "very small" $x$ we can find a function supported in $[0,x]$ so that $\|f\|_{L^p(0,x)}=\|f\|_{L^p(0,1)}$ and the inequality fails.
I think I figured this out. A counterexample: Let $f(x):=\frac{\chi_{[0,1]}(x)}{x^{\frac{1-\epsilon}{p}}}$. Then $\int_{0}^{1}\frac{\|f\|_{L^p(0,x)}}{x}dx=\frac{C}{\epsilon^{1+\frac{1}{p}}}$. While $\|f\|_{L^p(0,1)}=\frac{C}{\epsilon^{\frac{1}{p}}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Distance between a point inside a circle and circumference on a line along a point outside of circle Point $A$ is located outside of a circle centered at $C$ and with radius $r$. Point $B$ is given point inside the circle. How to calculate $d$, the length of line segment between $B$ and circumference on the line $\overline{AB}$. Would there be a solution regardless of where $B$ is inside the circle? Update: given to the problem is coordinates $A$, $B$ and $r$. I will need to extend the same problem to 3D geometry with a sphere and 3D coordinates. I would appreciate any help.
A method for you to use. Yes there is always a solution. To find it you can do the following, where the centre of the circle is taken as the origin. You can find the equation of the line $AB$ in the form $y=mx+c$. Then the point $P(x,y)$ where the line crosses the circle satisfies the equation $$ x^2+(mx+c)^2=r^2.$$ Solve this equation for $x$ and then find $y$ from $y=mx+c$. There will be two points, choose the one on the same side of the circle as $A$. Finally calculate the distance $BP$.
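A sketch of this computation in code (my addition), written so that the same function covers the planned 3-D sphere case: it solves the quadratic $|\,\vec a + t(\vec b-\vec a)\,|^2=r^2$ for the line through $A$ and $B$ (with the centre shifted to the origin) and takes the root on $A$'s side, as in the recipe above.

```python
import math

def dist_to_boundary(A, B, C, r):
    # A outside, B inside the circle/sphere centred at C with radius r.
    # Returns |BP|, where P is the boundary crossing on the same side as A.
    # Works in any dimension (circle in 2-D, sphere in 3-D).
    a = [ai - ci for ai, ci in zip(A, C)]       # A relative to the centre
    b = [bi - ci for bi, ci in zip(B, C)]       # B relative to the centre
    d = [bi - ai for ai, bi in zip(a, b)]       # direction A -> B
    qa = sum(di * di for di in d)
    qb = 2.0 * sum(ai * di for ai, di in zip(a, d))
    qc = sum(ai * ai for ai in a) - r * r
    t = (-qb - math.sqrt(qb * qb - 4 * qa * qc)) / (2 * qa)   # smaller root = crossing nearer A
    p = [ai + t * di for ai, di in zip(a, d)]
    return math.dist(p, b)

print(dist_to_boundary((2, 0), (0, 0), (0, 0), 1))            # 1.0
print(dist_to_boundary((0, 0, 3), (0, 0, 0), (0, 0, 0), 2))   # 2.0
```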
{ "language": "en", "url": "https://math.stackexchange.com/questions/4004982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the stationary distribution of a Markov Chain one over the mean sojourn time? I am reading the book "Markov Chains and Mixing Times" by Levin et al. and i have a problem with proposition 1.14, page 12. There a stationary distribution is constructed by defining for fixed initial state z $$\tilde \pi(y):=\mathbf E_y(\text{number of visits to y before returning to z})=\sum_{t=0}\mathbf P(X_t=y,\tau_z^+>t)$$ $\tau_z^+$ is the first return time to z. The proof is all well, but in the end a stationary distribution $\pi$ is constructedby normalizing such that $$\pi(x)=\frac{\tilde \pi(x)}{\mathbf E_z(\tau_z^+)}$$ Then, without further comment it is stated that this means in particular that $$\pi(x)=\frac{1}{\mathbf E_x(\tau_x^+)}$$ How do i see this?
The statement is true, but I think the way it is presented is a mistake by the authors. What is obvious is that $$\pi(z) = \frac{1}{\mathbb{E}_z (\tau_z^+)}.$$ Indeed, starting from $z$, the Markov chain visits $z$ exactly once (at time $t=0$) before returning to $z$, so $\tilde{\pi} (z) = 1$. However, the stationary measure we got depends a priori on the state $z$ chosen to construct it. If we use another state $z'$, then we get another stationary measure $\pi'$ such that $\pi'(z') = \frac{1}{\mathbb{E}_{z'} (\tau_{z'}^+)}$, but there is no guarantee that $\pi'(z') = \pi(z')$! This is actually false for certain examples of Markov chains (take e.g. a Markov chain on two states with the identity as the transition matrix). What saves us is that, if the Markov chain is irreducible, there is indeed a unique stationary probability measure, which implies that $\pi = \pi'$. Anyway, at this point, the proof must somehow use the fact that the Markov chain is irreducible. I think that this is a mistake by the authors since they do not explicitly mention the irreducibility at this point of the proof, and the uniqueness is only proved in the following section.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Positive integer solution to $a^3=3b^2-2$ The equation $a^3=3b^2-2$ seem to only have one positive integer solution $(a,b)=(1,1)$, but I am unable to prove that. What I did: Google told me that the elliptic curve theory might help me, but I don't really know anything about it. This thread in MO looks similar to what we have, but I was unable to apply it in my case.
Let $a$ and $b$ be integers such that $a^3=3b^2-2$. First note that $a$ is not divisible by $3$, and that $a$ and $b$ are both odd. In the UFD $\Bbb{Z}[\sqrt{6}]$ we have the factorization $$3a^3=9b^2-6=(3b+\sqrt{6})(3b-\sqrt{6}),$$ where the gcd of the two factors divides their difference $2\sqrt{6}$ as well as their product $3a^3$. As noted $a$ is odd, and so the two factors are coprime. The prime factorization $3=(3+\sqrt{6})(3-\sqrt{6})$ in $\Bbb{Z}[\sqrt{6}]$ shows that, after changing the sign of $b$ if necessary, we have $$3b+\sqrt{6}=u(3+\sqrt{6})(c+d\sqrt{6})^3,\tag{1}$$ for some unit $u\in\Bbb{Z}[\sqrt{6}]$ and some integers $c$ and $d$. The unit group of $\Bbb{Z}[\sqrt{6}]$ is generated by $-1$ and $5+2\sqrt{6}$, meaning that $u=\pm(5+2\sqrt{6})^k$ for some integer $k$. Adjusting $c$ and $d$ if necessary, we may assume without loss of generality that $k\in\{-1,0,1\}$ and that we have the $+$-sign. Then we are left with solving $$3b+\sqrt{6}=(5+2\sqrt{6})^k(3+\sqrt{6})(c+d\sqrt{6})^3,$$ for some integers $c$ and $d$, and some $k\in\{-1,0,1\}$. Expanding the product for each of the three values of $k$ and comparing coefficients of $\sqrt{6}$, we end up with the following three Diophantine equations: \begin{eqnarray*} 1&=&-c^3+9c^2d-18cd^2+18d^3\\ 1&=&c^3+9c^2d+18cd^2+18d^3\\ 1&=&11c^3+81c^2d+198cd^2+162d^3\\ \end{eqnarray*} These are three cubic Thue equations, for which there exist effective methods for finding all integral solutions. The first has the unique solution $(c,d)=(-1,0)$, the second has the unique solution $(c,d)=(-1,0)$, and the third has no integral solutions. These two solutions correspond to $b=-1$ and $b=1$, respectively, and this shows that these are the only integral solutions.
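A brute-force search (a sanity check only, of course not a proof) confirms that no other positive solution appears in a large range:

```python
import math

def solutions(limit):
    # positive integers a <= limit with a^3 = 3b^2 - 2 for some positive integer b
    out = []
    for a in range(1, limit + 1):
        n = a**3 + 2
        if n % 3 == 0:
            b = math.isqrt(n // 3)
            if b * b == n // 3:
                out.append((a, b))
    return out

print(solutions(100000))   # → [(1, 1)]
```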
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Rewriting a Binomial expansion and expanding the rewritten version after making a convenience substitution to get a totally different answer I know that if I have a function $$f(x) = \frac{1}{x-1}$$ that if I perform a binomial expansion on I get the result $$f(x)\approx -1 -x -x^2-x^3-x^4 ...$$ However if I rewrite the function $$f(x) = \frac{1}{x-1} = \frac{1}{x(1-\frac{1}{x})} = \bigg[\big(1-\frac{1}{x}\big)\big(x\big) \bigg]^{-1}$$ Then we make the expansion of $\bigg[\big(1-\frac{1}{x}\big)\big(x\big) \bigg]^{-1}$ by using the subsitution $\frac{1}{x} = z$ We get $$(1-z)^{-1} = 1+z+z^2+z^3+z^4... = 1 + \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3}+ \frac{1}{x^4}...$$ so that $$x^{-1}(1-z)^{-1} = \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \frac{1}{x^4}+ \frac{1}{x^5}...$$ This is completely different result! Why is this the case?
The expansion of $\frac 1 {x-1}$ you are using is valid only for $|x|<1$. When you change $x$ to $\frac 1 x$ and use the expansion you are assuming that $|x|>1$. Since there is no number with $|x|<1$ and $|x| >1$, you cannot use both of them for the same $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is polynomial algebra the only minimal dense subalgebra of $C^{\infty}(U)$ Let $U$ be a connected open set of $\mathbb{R}^{n}$. Consider $C^{\infty}(U)$ with the sup norm; we say $A\subset C^{\infty}(U)$ is a minimal dense subalgebra of $C^{\infty}(U)$ if and only if any proper subalgebra contained in $A$ is not dense in $C^{\infty}(U)$. I want to ask: are the polynomial algebras of $U$, with coefficients in $\mathbb{Q}$ or in $\mathbb{R}\setminus\mathbb{Q}$, the only two minimal dense subalgebras of $C^{\infty}(U)$?
For proper $U$, polynomials in even powers or trigonometric polynomials, i.e. the span of $\{e^{ikt}\}$, will do. Also, if $U=[a,b]\subset (0,1)$ then polynomials with integer coefficients are good.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $X$ be a set of $5$ elements. Find the number of ordered pairs $(A,B)$ of subsets of $X$ such that $A, B\neq \emptyset$ and $A\cap B=\emptyset$ I know this question involves use of combinatorics, but all the solutions I have seen just give the equation without explaining it. Can I get an explanation, because I know the answer and I know the method, I just don’t know how to apply.
If $|A| = k>0$ then you have ${5\choose k}$ possibilities for $A$ and $2^{5-k}-1$ possibilities for $B$ since $B\subseteq X\setminus A$ and $B\ne \emptyset$. So you have to sum (use binomial theorem): \begin{align}\sum _{k=1}^5{5\choose k}(2^{5-k}-1) &= \color{red}{\sum _{k=1}^5{5\choose k}2^{5-k}} -\color{blue}{\sum _{k=1}^5{5\choose k}}\\ &=\color{red}{(2+1)^5-2^5} -\color{blue}{(1+1)^5+1} \\&= 3^5-2^6+1\\ &= 243-64+1 =180 \end{align}
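Brute force confirms the count (a sketch of my own): each element of $X$ independently goes to $A$ only, to $B$ only, or to neither — that is exactly the $3^5$ — and we discard the assignments where $A$ or $B$ ends up empty.

```python
from itertools import product

count = 0
for labels in product((0, 1, 2), repeat=5):   # 0 = neither, 1 = in A, 2 = in B
    if 1 in labels and 2 in labels:           # A and B both nonempty
        count += 1

print(count)                 # 180
print(3**5 - 2 * 2**5 + 1)   # inclusion-exclusion gives the same number
```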
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cramer's rule doesn't work in this case. Why? Here I have these equations, x + y + z = 2, 2x + 3y + 2z = 5, 2x + 3y + 15z = 5 When solving normally, we get (x,y,z) = (1,1,0) but when I used Cramer's rule, we get Delta = 0, Delta x = 13, Delta y = 13, Delta z = 0. It means that the given system of equations is inconsistent. How can this be possible? EDIT : The determinant is not 0, it's 13. I just wrote the terms wrong when calculating :p
$$\Delta=\det\left( \begin{array}{ccc} 1 & 1 & 1 \\ 2 & 3 & 2 \\ 2 & 3 & 15 \\ \end{array} \right)=13$$ and $$\Delta_x=\det\left( \begin{array}{ccc} 2 & 1 & 1 \\ 5 & 3 & 2 \\ 5 & 3 & 15 \\ \end{array} \right)=13$$ To solve with Cramer's rule you must do $$x=\frac{\Delta_x}{\Delta}=\frac{13}{13}=1$$ And so on for the other unknowns
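For completeness, the full Cramer computation in exact arithmetic (a sketch of my own; the column replacement is written out by hand):

```python
from fractions import Fraction

def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 1, 1], [2, 3, 2], [2, 3, 15]]
rhs = [2, 5, 5]

def replaced(j):
    # copy of A with column j replaced by the right-hand side
    return [[rhs[r] if c == j else A[r][c] for c in range(3)] for r in range(3)]

D = det3(A)
solution = [Fraction(det3(replaced(j)), D) for j in range(3)]
print(D, solution)   # 13 [Fraction(1, 1), Fraction(1, 1), Fraction(0, 1)]
```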
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given conditions on differentiability, what can we say about second derivative? Let $f:[0,10] \rightarrow [10,30]$ be a continuous and twice differentiable function such that $f(0)=10$ and $f(10) =20$. Suppose $\mid f'(x) \mid \leqslant 1 \; \forall x \in [0,10]$. Then the value of $f''(5)$ is $(a)\ 0$ $(b)\ 1/2$ $(c)\ 1$ $(d)$ Can't be determined My trial: We need the function to be twice differentiable. So let $f''(x)=c$ where $c$ is a constant. Then $f(x) =\frac{cx^{2}}{2} + dx + e$ where $c,d,e \in \mathbb{R}$ Also, $f(0) =10$ implies $e=10$ and $f(10) = 20 \;$ implies $ \; 50c + 10d =10$ or $5c + d=1$ Now, $f'(5) = 5c +d = 1$ (surprise, shawty!) therefore $c = \frac{1-d}{5} , d \in \mathbb{R}$ How to proceed? What values of $d$ to choose such that $\mid f'(x) \mid \leqslant 1$? Any other easier approach is also welcome. Thanks. Can we somehow use MVT or Brouwer's Fixed Point here?
Hint: One can prove that $f'(x)=1$ for all $x\in [0,10]$ (it must average $1$ over $[0,10]$ and can never exceed $1$), and therefore $f''(5)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4005971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Surface integral first kind I have to calculate the surface integral $$\iint_S z\,dS$$ where $S$ is the part of the cone $z=\sqrt{x^2+y^2}, 1<z<2$ cut out by $(x^2+y^2)^2=2(x^2-y^2)$. I am only not sure how to use the condition that $1<z<2$. I thought to plug this into the formula and get $\iint \sqrt{x^2+y^2}\sqrt{1+\left(\frac{x}{\sqrt{x^2+y^2}}\right)^2+\left(\frac{y}{\sqrt{x^2+y^2}}\right)^2}\, dA$. And then, using the given cylinder and converting to polar coordinates, I get $-\frac{\pi}{4}\le \phi\le \frac{\pi}{4}, 0\le r\le \sqrt{2\cos(2\phi)}$; it shouldn't be hard to calculate this integral. Is this correct or is there something I'm missing?
While most of what you have mentioned is correct, you have not considered that $z \geq 1$. The limits of $\theta$ will also be different. $-\frac{\pi}{4} \leq \theta \leq \frac{\pi}{4}$ is valid only when $z$ starts from $0$. Also just to notice, there is no surface area of the cone cut out by the cylinder above $z = \sqrt2$. Cone surface is given by $z = \rho$; Cylinder is given by $\rho = \sqrt{2\cos2\theta}$ Maximum radius of the cylinder's cross section is $\sqrt2 \ $ at $\theta = 0$ so for $z \gt \sqrt2$, the cylinder is completely inside the cone. For a cone with $z=r, \ |r'_{\rho} \times r'_{\theta}| = \rho\sqrt2$. At $z = 1, \rho = 1 = \sqrt{2 \cos2\theta} \implies -\frac{\pi}{6} \leq \theta \leq \frac{\pi}{6}$. As $z$ increases, $\rho$ increases and range of $\theta$ that cuts out the surface area of the cone reduces. So we cannot have $\theta \gt \frac{\pi}{6}$. Also note that there is one more petal between $\frac{3\pi}{4} \leq \theta \leq \frac{5\pi}{4}$. So we need to multiply integral by $2$. Substituting $z = \rho$ and multiplying by factor $\rho \sqrt2$, the integral becomes, $S = 2 \sqrt2 \displaystyle \int_{-\pi/6}^{\pi/6} \int_{1}^{\sqrt{2\cos (2\theta)}} \rho^2 \ d\rho \ d\theta$ Alternatively, $S = 2\sqrt2 \displaystyle \int_1^\sqrt2 \int_{-\frac{1}{2} \arccos\big(\frac{\rho^2}{2}\big)}^{\frac{1}{2} \arccos\big(\frac{\rho^2}{2}\big)} \rho^2 \ d\theta \ d\rho$
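The two equivalent iterated integrals at the end can be cross-checked numerically (a sketch of my own, with a simple Simpson rule; the inner $\rho$-integral of the first form is done in closed form, $\int_1^{\sqrt{2\cos 2\theta}}\rho^2\,d\rho=\frac13\bigl((2\cos2\theta)^{3/2}-1\bigr)$):

```python
import math

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# S = 2*sqrt(2) * ∫_{-π/6}^{π/6} (1/3)((2 cos 2θ)^{3/2} - 1) dθ
S1 = 2 * math.sqrt(2) * simpson(
    lambda t: ((2 * math.cos(2 * t))**1.5 - 1) / 3, -math.pi / 6, math.pi / 6, 20000)

# S = 2*sqrt(2) * ∫_1^{√2} ρ² arccos(ρ²/2) dρ   (the θ-range has total width arccos(ρ²/2))
S2 = 2 * math.sqrt(2) * simpson(
    lambda p: p * p * math.acos(min(1.0, p * p / 2)), 1.0, math.sqrt(2), 20000)

print(S1, S2)   # the two orders of integration agree to several decimals
```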
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $f:[0,\infty) \to [0,\infty)$ be increasing and satisfy $f(0)=0$ and $f(x)>0 \forall x>0$. Let $f:[0,\infty) \to [0,\infty)$ be increasing and satisfy $f(0)=0$ and $f(x)>0 \forall x>0$. If $f$ also satisfies $f(x+y) \leq f(x)+f(y) \forall x,y \geq 0$, then $f \circ d$ is a metric whenever $d$ is metric. Show each of the following conditions is sufficient to ensure that $f(x+y) \leq f(x)+f(y) \forall x,y \geq 0$: a) $f$ has a second derivative satisfying $f'' \leq 0$; b) $f$ has a decreasing first derivative. c) $f(x)/x$ is decreasing for $x>0$. My Work Thus Far: Let $g(x)=f(x)x$, for $x>0$. According to the question we have that $f''<0$ then $f$ has a decreasing first derivative, meaning $f'$ is decreasing. Now, taking the derivative of $g$ yields: $g'(x)=xf'(x)-f(x)x^2=f'(x)x-f(x)x^2$, where $x>0$. I know that $-f(x)x^2$ is decreasing, but how do I know for sure that $f'(x)x$ is decreasing so I can deduce that $g'(x)<0, \forall x>0$ and that $g(x)$ is a decreasing function?
By condition $c$, since $x \leq x+y$ and $y \leq x+y$ and $f(t)/t$ is decreasing, you have $\dfrac{f(x+y)}{x+y}\leq \dfrac{f(x)}{x}$ and $\dfrac{f(x+y)}{x+y}\leq \dfrac{f(y)}{y}$ for $x,y>0$. Equivalently: $$f(x+y)\cdot\dfrac{x}{x+y}\leq f(x), \ f(x+y)\cdot\dfrac{y}{x+y}\leq f(y)$$ Summing both inequalities gives: $$f(x+y)\left(\dfrac{x}{x+y}+\dfrac{y}{x+y}\right)=f(x+y)\leq f(x)+f(y)$$ which is exactly the required subadditivity. (If $x=0$ or $y=0$ the inequality is immediate from $f(0)=0$.)
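A quick numerical illustration (my example, not part of the proof) with $f(x)=\sqrt{x}$, which satisfies condition (c) since $\sqrt{x}/x = x^{-1/2}$ is decreasing: subadditivity holds, and consequently $\sqrt{d}$ satisfies the triangle inequality when $d$ is the usual metric on $\mathbb{R}$.

```python
import math
import random

random.seed(0)
f = math.sqrt  # f(0) = 0, increasing, f(x)/x decreasing for x > 0

pairs = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(1000)]
# largest observed violation of f(x+y) <= f(x) + f(y); should never be positive
max_violation = max(f(x + y) - f(x) - f(y) for x, y in pairs)

# f composed with |a - b|: check the triangle inequality on random triples
triples = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(1000)]
max_tri = max(f(abs(a - c)) - f(abs(a - b)) - f(abs(b - c))
              for a, b, c in triples)
print(max_violation, max_tri)
```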
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Follow-up Question about Cesaro mean proof I am trying to understand the proof behind the Cesaro mean converging. I am using https://math.stackexchange.com/a/2342856/633922 (hopefully it is also correct) as a guide because it seems very direct. I will comment on the steps I understand and where I need help. The statement: If $(x_n)$ converges to $x$, the sequence of averages $y_n=\dfrac{x_1+x_2+\cdots+x_n}{n}$ also converges to the same limit. Proof: Since $(x_n)$ converges, given an arbitrary $\epsilon >0$, there exists an $N_1\in\mathbb{N}$ such that whenever $n\geq N_1$ we have $|x_n-x|<\epsilon$. (Definition of convergent sequence) Now, $$\begin{align*} \left|\frac{x_1+x_2+\cdots+x_n}{n}-x\right|=&\left|\frac{(x_1-x)+\cdots+(x_{N_1-1}-x)}{n}+\frac{(x_{N_1}-x)+\cdots+(x_{n}-x)}{n}\right|\\ \leq& \left|\frac{(x_1-x)+\cdots+(x_{N_1-1}-x)}{n}\right|+\left|\frac{(x_{N_1}-x)+\cdots+(x_{n}-x)}{n}\right|\text{ (Triangle inequality)} \end{align*} $$ Now we want to make a statement about the first $N_1-1$ terms, $\color{red}{why?}$ That is: By the Archimedean principle we can find an $N_2$ such that whenever $n\geq N_2$ we have that $$\left|\frac{(x_1-x)+\cdots+(x_{N_1-1}-x)}{n}\right|<\epsilon $$ (Thought: is it because $x_1,\dots,x_{N_1-1}$ is finite?) Now we can choose an $N_3=\max\{N_1,N_2\}$ such that for all $n\geq N_3$ we have (My thought: Is this because choosing the max of both will always guarantee the final inequality to always work?) $$ \left|\frac{x_1+x_2+\cdots+x_n}{n}-x\right|\leq \underbrace{\epsilon}_{N_1-1}+\underbrace{\color{red}{\frac{n-N_1}{n}}}_{\text{why and how?}}\epsilon< 2\epsilon $$ And this finishes the proof. I always assumed the ending statement has to be (something) $<\epsilon$, or is this saying that each sum on the right side of the triangle inequality is less than $\epsilon/2$? I would really appreciate help with the areas I am thoroughly confused about.
* *The first part is controlled because $N_1$ is fixed and the denominator $n$ can be arbitrarily large. *For the second part you are using the estimate $|x_n-x|<\epsilon$ for large $n$. Note that there are totally $n-N_1$ terms in the numerator, the absolute value of each is less than $\epsilon$. To summarize, you are first given $\epsilon>0$. Then you have $N_1$ such that $n\ge N_1$ implies $|x_n-x|<\epsilon$, in particular $$ |x_{N_1+1}-x|<\epsilon,\quad |x_{N_1+2}-x|<\epsilon,\cdots,|x_{n}-x|<\epsilon\tag{1} $$ which implies by the triangle inequality that $$ \left|\frac{(x_{N_1+1}-x)+\cdots+(x_{n}-x)}{n}\right|<\frac{(n-N_1)\epsilon}{n}\tag{1'} $$ Then, for this (fixed) $N_1$, since $$ \lim_{n\to\infty}\frac{|(x_1-x)+\cdots+(x_{N_1}-x)|}{n}=0 $$ there exists $N_2$ such that $n\ge N_2$ implies $$ \frac{|(x_1-x)+\cdots+(x_{N_1}-x)|}{n}<\epsilon\tag{2} $$ So if you pick $N_3=\max(N_1,N_2)$, then $n\ge N_3$ implies both (1') and (2). If you can show that for every $\epsilon>0$, there exists $N>0$ such that $n\ge N$ implies $|a_n|<2\epsilon$, it follows that $$ \lim_{n\to\infty}a_n=0 $$ You don't have to have exactly $\epsilon$ in the estimate.
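A small numerical illustration of the theorem (my own addition): take $x_n = 5 + 1/n$, so $x_n \to 5$. Here $y_n = 5 + H_n/n$ with $H_n$ the harmonic number, so the averages converge to the same limit, just more slowly.

```python
x_limit = 5.0
xs = [5.0 + 1.0 / n for n in range(1, 100001)]  # x_n -> 5

# y_n = (x_1 + ... + x_n) / n
partial = 0.0
ys = []
for n, x in enumerate(xs, start=1):
    partial += x
    ys.append(partial / n)

err_head = abs(ys[9] - x_limit)     # error of y_n at n = 10
err_tail = abs(ys[-1] - x_limit)    # error of y_n at n = 100000
print(err_head, err_tail)
```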
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Showing that $V=\operatorname{ker} f^{p} \oplus \operatorname{Im} f^{p}$ where $\operatorname{Im} f^{p} =\operatorname{Im} f^{p+1}$ This is a problem from Blyth and Robertson's Further Linear Algebra: If $V$ is a finite dimensional vector space and $f: V\to V$ is a linear mapping, prove that there is a positive integer $p$ such that $\operatorname{Im} f^{p}=\operatorname{Im} f^{p+1}$ and deduce that $$(\forall k\ge 1)\; \operatorname{Im} f^{p}=\operatorname{Im} f^{p+k}, \ker f^{p}=\ker f^{p+k}$$ Show also that $V=\operatorname{Im} f^{p} \oplus \ker f^{p}$. My attempt: Let $\displaystyle m=\min_{k\in\mathbb{N}} (\dim f^{k}(V))$. Then there must be a $p\in \mathbb{N}$ such that $m=\dim f^{p}(V)$. It is easy to see that $f^{p+k}(V)\subset f^{p}(V)$, so, $m=\dim f^{p}(V)\ge\dim f^{p+k}(V)$ and by definition of $m$, $\dim f^{p+k}(V)\ge m$ for any $k\in \mathbb{N}$. So, we have that $\dim f^{p+k}(V)=m$ for any $k\in \mathbb{N}$. Hence, we have $f^{p+k}(V)=f^p(V)$. Also, we have that $\ker f^{p}\subset \ker f^{k+p}$. By the rank nullity theorem we have $\dim \ker f^p =\dim \ker f^{k+p}$. Hence $\ker f^{p}=\ker f^{p+k}$. Let $v\in V$. Then $f^{p}(v)\in f^{2p}(V)$ because $f^{p}(V)=f^{2p}(V)$. Thus, $f^{p}(v)=f^{2p}(w)$ for some $w\in V$. Thus, $f^{p}(v-f^{p}(w))=0$. Hence $v-f^{p}(w)\in \ker f^{p}$. Hence, we can write $v=v_0+f^{p}(w)$ where $v_0\in \ker f^{p}$. This shows that $V=\operatorname{Im} f^{p} + \ker f^{p}$. Now, by the dimension formula and rank nullity theorem, $\dim (\ker f^{p} \cap \operatorname{Im} f^{p})=\dim \ker f^{p} + \dim \operatorname{Im} f^{p} - \dim V=0$. So, $\ker f^{p} \cap \operatorname{Im} f^{p}=\{ 0 \}$. Hence $V=\operatorname{Im} f^{p} \oplus \ker f^{p}$. Is this proof correct? Is there a more direct way to do this?
Your proof seems correct to me. If $N_i$ denotes the kernel of $f^i$ then the chain $\{0\} = N_0 \subseteq N_1 \subseteq \cdots$, must eventually stabilise, because there are at most $d$ strict inclusions, where $d = \dim V$. Likewise, if $R_i$ is the range of $f^i$, then $V = R_0 \supseteq R_1 \supseteq \cdots$ must stabilise for the same reason. For what it's worth, the decomposition of $V$ that you find is sometimes called the Fitting decomposition of $V$ with respect to $f$. That is, $V$ decomposes as a direct sum $N \oplus R$ such that $f|_{N}$ is nilpotent and $f|_{R}$ is invertible. Really cool!
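A small computational illustration of the stabilising chain of ranks (my example matrix: a nilpotent block plus an invertible part). Note that $\operatorname{rank} f^{2p} = \operatorname{rank} f^p$ is exactly the statement $\operatorname{Im} f^p \cap \ker f^p = \{0\}$, since it says $f^p$ is injective on $\operatorname{Im} f^p$.

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # Gaussian elimination over the rationals (exact arithmetic)
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][col] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][col] != 0:
                factor = A[i][col] / A[r][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# f = nilpotent 2x2 block (Jordan block for 0) plus an invertible 1x1 part
F = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 2]]

powers = [F]
for _ in range(4):
    powers.append(mat_mul(powers[-1], F))
ranks = [rank(P) for P in powers]  # ranks of F^1, F^2, F^3, F^4, F^5
print(ranks)
```

The rank drops once and then stabilises, so $p = 2$ here.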
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve with KKT min $x_1-4x_2+x_3$ Given min $x_1-4x_2+x_3$ s.t $x_1+2x_2+2x_3=-2$ $||x||^2\leq1$ (i)Given a KKT point of the problem above,must it be an optimal solution? (ii) Find the optimal solution of the problem using KKT conditions. for (i) taking the point $(0,-0.89,-0.11)$ we satisfy slater's condition and the objective function is convex and the first contrains is affine and second is convex with inequality. therefore a KKT point is optimal point. for (ii) I know I just need to find 1 KKT point $L(x,\lambda,\eta)=x_1-4x_2+x_3+\lambda(x_1+2x_2+2x_3+2)+\eta(x_1^2+x_2^2+x_3^2-1)$ (1) $\frac{\partial L}{\partial x_1}=1+\lambda+2\eta x_1 = 0$ (2) $\frac{\partial L}{\partial x_2}=-4+2\lambda+2\eta x_2 = 0$ (3) $\frac{\partial L}{\partial x_3}=1+2\lambda+2\eta x_3 = 0$ (4) $\lambda(x_1+2x_2+2x_3+2)=0$ (5) $\eta(x_1^2+x_2^2+x_3^2-1)=0$ (6) $(x_1+2x_2+2x_3+2)=0$ (7) $x_1^2+x_2^2+x_3^2-1\leq0$ if $\lambda=\eta=0$ we get from (1) that $1=0$ therefore not feasible if $\lambda\neq0,\eta=0$ from (1) we get that $\lambda=-1$ and from (2) we get that $\lambda=2$ not feasible again. if $\eta\neq0,\lambda=0$ we get from (1) and (3) that $x_1=x_3=\frac{-1}{2\eta}$ and $x_2=\frac{2}{\eta}$ plugging into (6) we get that $\eta=\frac{-5}{4}<0$ not possible because $\eta\geq0$ therefore not feasible again. if $\lambda,\eta\neq0$ from (2)-(3) we get $\frac{-5}{2\eta}+x_2=x_3$ and (2)-2*(1) we get that $\frac{-6}{4\eta}+\frac{1}{2}x_2=x_1$ plugging into (6) we get that $\frac{1}{\eta}=\frac{9}{13}x_2+\frac{4}{13}$ plugging all that we got into (5) we know that $x_1^2+x_2^2+x_3^2=1$ because $\eta\neq0$ and I get that $x_2=0.0503$ or $x_2=-0.764$ the second options is not feasible because $\eta\geq0$ and from the first one I don't get the right answer(checked in matlab) I wonder what am I doing wrong here, any help please? Opt vector is $(-0.5194,0.1074,-0.8478)$ opt val $-1.7969$
plugging all that we got into (5) (..) I get that $x_2=0.0503$ or $x_2=-0.764$ Something went wrong in this step. I put (5) into Wolfram Alpha and got $x_2 \approx 0.1074$ as one of the solutions.
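Independently of the KKT bookkeeping, one can brute-force this particular problem: the optimum of a linear objective over the plane-slice of the unit ball lies on the circle where the plane meets the unit sphere, so a grid search over that circle recovers the optimal point. This parametrisation is my own sketch, not taken from the question.

```python
import math

c = (1.0, -4.0, 1.0)                     # objective coefficients
n = (1.0 / 3, 2.0 / 3, 2.0 / 3)          # unit normal of x1 + 2x2 + 2x3 = -2

# the plane meets the unit sphere in a circle: centre p, radius rad
p = tuple(-2.0 / 9 * v for v in (1.0, 2.0, 2.0))  # closest plane point to 0
rad = math.sqrt(1.0 - sum(v * v for v in p))

# two orthonormal directions spanning the plane: u _|_ n, and w = n x u
u = (2.0 / math.sqrt(5), -1.0 / math.sqrt(5), 0.0)
w = tuple(n[(i + 1) % 3] * u[(i + 2) % 3] - n[(i + 2) % 3] * u[(i + 1) % 3]
          for i in range(3))

best_val, best_x = float("inf"), None
steps = 100000
for k in range(steps):
    t = 2 * math.pi * k / steps
    x = tuple(p[i] + rad * (math.cos(t) * u[i] + math.sin(t) * w[i])
              for i in range(3))
    val = sum(ci * xi for ci, xi in zip(c, x))
    if val < best_val:
        best_val, best_x = val, x
print(best_val, best_x)
```

This reproduces the stated optimum $\approx(-0.5194, 0.1074, -0.8478)$ with value $\approx -1.797$.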
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can $B(a)$ be utilized in the attached question in graph theory Let $G := (V,E)$ be a graph. Show that there is a function $α : V → \{0,1\}$ such that, for each $v ∈ V$, at least half of the neighbours of $v$ have a different $α$-value than $v$. Here's the question. It is part of my Graph Theory homework and I've done every other problem except this one... I've tried an approach utilizing the fact that every graph $G$ contains a bipartite subgraph with at least half the edges, but that didn't work. Can somebody tell me how this hint is related to other theorems in Graph Theory? Because I'm really a little stuck around here.
As you write in the comments, let $B(\alpha)$ denote the number of edges of $G$ that have different $\alpha$-values on the endpoints. Hint: choose $\alpha$ so that $B(\alpha)$ is maximal and show that it has the desired property. Solution: If there was a vertex $v$ such that less than half of its neighbours have $\alpha$-values different from $\alpha(v)$, then changing $\alpha(v)$ would increase $B(\alpha)$.
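The maximisation argument is constructive: repeatedly flipping the $\alpha$-value of a "bad" vertex strictly increases $B(\alpha)$, and since $B(\alpha)$ is bounded by the number of edges, the process terminates with the desired colouring. A sketch on a random graph (my own illustration):

```python
import random

random.seed(1)
n_vertices = 60
# random graph, each edge present with probability 1/4
edges = [(i, j) for i in range(n_vertices) for j in range(i + 1, n_vertices)
         if random.random() < 0.25]
adj = {v: [] for v in range(n_vertices)}
for i, j in edges:
    adj[i].append(j)
    adj[j].append(i)

alpha = {v: 0 for v in range(n_vertices)}

def bad_vertex():
    # a vertex with fewer than half of its neighbours on the other side
    for v in range(n_vertices):
        diff = sum(alpha[u] != alpha[v] for u in adj[v])
        if 2 * diff < len(adj[v]):
            return v
    return None

flips = 0
while (v := bad_vertex()) is not None:
    alpha[v] ^= 1   # each flip strictly increases B(alpha), so this terminates
    flips += 1

ok = all(2 * sum(alpha[u] != alpha[v] for u in adj[v]) >= len(adj[v])
         for v in range(n_vertices))
print(flips, ok)
```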
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$ \sum_{i=1}^{10} x_i =5 $, and $ \sum_{i=1}^{10} x_i^2 =6.1 $ ; Find the largest value of these ten numbers Ten non-negative numbers are such that $ \sum_{i=1}^{10} x_i =5 $, and $ \sum_{i=1}^{10} x_i^2 =6.1 $. What is the greatest value of the largest of these numbers? It presents a lot of confusion to me, but I think the largest number may be an integer. Please help me with the procedure. Source : Moscow Institute of Physics and Technology admission assessment for computer Science
Just apply Cauchy-Schwarz or AM-QM inequality to the nine smaller numbers, you get (WLOG assume $x_1 \le x_2 \le \cdots \le x_{10}$) $$ \left(\sum_{i=1}^9 x_i\right)^2 \le 9 \left( \sum_{i=1}^9 x_i^2\right)\\ \iff (5-x_{10})^2 \le 9(6.1-x_{10}^2) \\ \iff 10x_{10}^2 - 10x_{10} - 29.9 \le 0 \\ \implies x_{10} \le 2.3.\blacksquare$$
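The equality case of the AM-QM step forces the nine smaller numbers to be equal, so the extremal configuration has $x_{10}=2.3$ and nine copies of $(5-2.3)/9 = 0.3$; a quick check that this configuration actually meets both constraints:

```python
# equality case: nine equal numbers plus the maximal tenth one
x_max = 2.3
rest = [(5 - x_max) / 9] * 9          # nine copies of 0.3
xs = rest + [x_max]

s1 = sum(xs)                          # should be 5
s2 = sum(x * x for x in xs)           # should be 6.1
print(s1, s2)
```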
{ "language": "en", "url": "https://math.stackexchange.com/questions/4006963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
$\mathcal A$ is a Banach algebra. Can there be two different $*$ operators which both make $\mathcal A$ a $C^*$-algebra? I really have no idea where to start. The only thing I know is that there cannot be different $C^*$-norms (whether complete or not) on a $C^*$-algebra, but I find that I know barely anything about algebraic isomorphisms between $C^*$-algebras. If such an isomorphism (which is not a $*$-isomorphism) exists, it must map some self-adjoint element to a non-self-adjoint one. I can neither find an example nor show it is impossible.
The answer pointed out by @Mark presumes that you want to keep the norm, that is, there is no adjoint operation "$^{\bigstar}$" on $\mathcal A$, other than the default adjoint, such that $(\mathcal A, ^{\bigstar}, \|\cdot\|)$ is a C*-algebra, where $\|\cdot\|$ is the default norm. However, it is possible to find a different adjoint operation "$^{\bigstar}$", and a different norm $|||\cdot|||$, such that $(\mathcal A, ^{\bigstar}, |||\cdot|||)$ is a C*-algebra. All you need to do is choose an automorphism $$ \varphi :\mathcal A \to \mathcal A, $$ which preserves everything but the star and norm, and define a new star and norm by $$ a^\bigstar := \varphi^{-1}\big (\varphi(a)^*\big ), \quad \text{and} \quad |||a||| := \|\varphi(a)\|. $$ One such automorphism may be taken to be $$ \varphi(a) = uau^{-1}, $$ where $u$ is a non-unitary, invertible element, such that $u^*u$ is non-central. One more point: on a commutative C*-algebra one cannot find another adjoint operation, even if one is willing to consider a change of norm. The reason is that self-adjoint elements may be characterized as those with real spectrum and, moreover, the space of self-adjoint elements determines the adjoint operation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4007120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluating $\sum_{n=1}^{\infty} \int_{1}^{\infty} \frac {\cos(2 \pi n t)}{t^z} dt$ I need help evaluating any of: $$\sum_{n=1}^{\infty} \int_{1}^{\infty} \frac {\cos(2 \pi n t)}{t^z} dt$$ $$\sum_{n=1}^{\infty} \int_{1}^{\infty} \frac {\cos(2 \pi (2n - 1) t)}{t^z} dt$$ where $z \in \Bbb{C}$. Also, I would like to ask if this is true: $$\sum_{n=1}^{\infty} \int_{1}^{\infty} \frac {\cos(2 \pi n t)}{t^z} dt = \int_{1}^{\infty} \frac {\sum_{n=1}^{\infty} \cos(2 \pi n t)}{t^z} dt$$ If yes, then how does one cleverly calculate: $$\sum_{n=1}^{\infty} \cos(2 \pi n t)$$
In a very general manner, the sum and the integral can be interchanged when both resulting expressions converge (for instance, under uniform or dominated convergence). For the evaluation of the integral I would do the following: $$ \sum_{n=1}^\infty \int_1^\infty \frac{\cos(2\pi nt)}{t^z}dt=\sum_{n=1}^\infty \int_1^\infty \frac{\frac{e^{i2\pi nt}+e^{-i2\pi nt}}{2}}{t^z}dt\\ =\frac{1}{2}\sum_{n=1}^\infty \int_1^\infty \frac{e^{i2\pi nt}+e^{-i2\pi nt}}{t^z}dt $$ Now try interchanging the sum and integral; I think this would be a good way to approach the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4007310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove $\sup\{r\geq 0: \exists M\in\mathbb{R}[\forall n\in \mathbb{N} [\, |r^n a_n|<M\ ]]\} = \sup\{ r\geq 0 :\sum_n |r^n a_n| \in \mathbb{R} \}$ Given a complex power series $\sum_n a_n z^n$ for $z,a_n\in\mathbb{C}$, let $$A:=\{r\geq 0: \exists M\in\mathbb{R}[\forall n\in \mathbb{N} [\, |r^n a_n|<M\ ]] \}$$ $$B := \{ r\geq 0 :\sum_n |r^n a_n| \in \mathbb{R} \}$$ Prove that $\sup A = \sup B$. Now my colleagues and I are becoming crazy with this one. We know that $B\subset A$, which implies that $\sup B \leq \sup A$, so I am trying to prove that $\sup B < \sup A$ leads to a contradiction, but after many trials, I can't find a reason to imply the conclusion, although I am not able either to find a counter-example to $\sup A = \sup B$, for every time I try I get simply that the radius of convergence of the failed counter-example series is actually the $\max A$, which still confirms the expected result. On the other hand, $$\begin{split} &r>\sup B\ &\Rightarrow l :=\limsup_{n\to\infty} |a_n r^n|\neq 0\\ &l \in \mathbb{R} &\Rightarrow \exists M\in\mathbb{R}[\forall n\in\mathbb{N}[\ |a_n r^n| < M\ ]]\\ &&\Rightarrow l \in A\\ &&\Rightarrow \sup A \neq \sup B \end{split}$$ which is a reasoning that I find correct, but I guess it is not, right? I am completely puzzled, and I've been thinking in circles for hours. Please, is $\sup A = \sup B$? Give a proof, or at least lead me to it. Thank you in advance.
Clearly $B\subseteq A$ and hence $\sup B\leq\sup A$. We now show that $\sup A\leq\sup B$. We argue by contradiction: suppose that $\sup A>\sup B$. Choose $r_{1}$ such that $\sup B<r_{1}<\sup A$. Since $r_{1}$ is not an upper bound of $A$, there exists $r\in A$ such that $r_{1}<r$. Choose $M>0$ such that $|r^{n}a_{n}|<M$ for all $n$. Observe that \begin{eqnarray*} & & \sum_{n=0}^{\infty}\left|r_{1}^{n}a_{n}\right|\\ & = & \sum_{n=0}^{\infty}\left|\left(\frac{r_{1}}{r}\right)^{n}r^{n}a_{n}\right|\\ & \leq & \sum_{n=0}^{\infty}M\left(\frac{r_{1}}{r}\right)^n\\ & < & \infty \end{eqnarray*} because $0<\frac{r_{1}}{r}<1$. This shows that $r_{1}\in B$, which is a contradiction.
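A concrete illustration of why the suprema agree even though the sets can differ at the endpoint (my example, not part of the proof): take $a_n \equiv 1$. Then $A = [0,1]$ (the terms $r^n$ stay bounded up to $r=1$) while $B = [0,1)$ (summability fails at $r=1$), yet $\sup A = \sup B = 1$.

```python
# a_n = 1 for all n: terms bounded at r = 1, but not summable there
def terms_bounded(r, N=10000, M=10.0):
    return all(r ** n < M for n in range(N))

def partial_sum(r, N=10000):
    return sum(r ** n for n in range(N))

in_A_at_1 = terms_bounded(1.0)      # r = 1 belongs to A
near_1_sum = partial_sum(0.999)     # finite: close to 1/(1 - 0.999) = 1000
sum_at_1 = partial_sum(1.0)         # grows without bound as N grows
print(in_A_at_1, near_1_sum, sum_at_1)
```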
{ "language": "en", "url": "https://math.stackexchange.com/questions/4007484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Riemann Integrability of the indicator function-- small clarification Suppose $a\le s<t\le b$. Define $f:[a, b]\mapsto \mathbb{R}$ by $$f(x) = \begin{cases} 1 \; \; s<x<t \\ 0 \; \; \text{otherwise} \end{cases}$$ Prove that $f$ is Riemann integrable on $[a, b]$ and that $\int_a^b f = t-s$. I am trying to construct a proof of this statement, but I don't see how the assertion $\int_a^b f = t-s$ is true. If we think of $\int_a^b f$ in terms of area under a curve, I would say that $\int_a^b f = t-s$ would hold iff $f(x) = 1$ for when $s\le x\le t$ (replacing strict inequalities with non-strict ones) and $0$ otherwise. Can someone please explain how $\int_a^b f = t-s$ holds given how $f$ is defined originally? Note: Please don't give a proof of the statement since that is my HW assignment. I'd appreciate just some intuitive clarifications/ explanations.
Notice that $$\int_a^b f\,dx=\int_{a}^s f\,dx+\int_s^t f\,dx+\int_t^b f\,dx=\int_s^t 1\,dx$$ where the first and third integrals vanish because $f\equiv 0$ there, and changing $f$ at the two endpoints $s$ and $t$ (finitely many points) does not change the value of the integral, which is why the strict inequalities in the definition of $f$ do not matter. When the function takes the value $1$, the integral is just the length of the interval. In terms of area, it is the area of a rectangle of height $1$ and width $t-s$.
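To see concretely why the Darboux sums squeeze down to $t-s$ (a numerical sketch, my own addition, with $s=0.3$, $t=0.7$ on $[0,1]$): on each subinterval of a uniform partition, the supremum of $f$ is $1$ exactly when the subinterval meets $(s,t)$, and the infimum is $1$ exactly when the subinterval lies strictly inside $(s,t)$.

```python
s, t, a, b = 0.3, 0.7, 0.0, 1.0
n = 1000
pts = [a + (b - a) * i / n for i in range(n + 1)]

# upper Darboux sum: subintervals on which sup f = 1 (they meet (s, t))
upper = sum(pts[i + 1] - pts[i]
            for i in range(n) if pts[i] < t and pts[i + 1] > s)
# lower Darboux sum: subintervals on which inf f = 1 (strictly inside (s, t))
lower = sum(pts[i + 1] - pts[i]
            for i in range(n) if s < pts[i] and pts[i + 1] < t)
print(lower, upper)
```

Both sums are within a few mesh-widths of $t-s=0.4$, and the gap shrinks as $n$ grows.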
{ "language": "en", "url": "https://math.stackexchange.com/questions/4007634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sandwich rule for continuity Let $c ∈ X ⊆ R $ . Suppose that $f : X → \mathbb{R}$, $g : X → \mathbb{R}$, and $h : X → \mathbb{R}$ are functions such that $g(x) ≤ f(x) ≤ h(x)$ for all $x ∈ X$. Suppose that $g$ and $h$ is continuous at $c$ and $g(c) = h(c).$ Question: Show that f is continuous at c. Attempt: I have started by writing down the following. Let $\epsilon >0$. We know that $g$ is continuous hence we have $\delta_g$ such that $|x-c|< \delta_g$ $\implies$ $|g(x)-g(c)|< \epsilon$. We know that $h$ is continuous hence we have $\delta_h$ such that $|x-c|< \delta_h$ $\implies$ $|h(x)-h(c)|< \epsilon$. How do I connect $f$ to $h$ and $g$?
First note that setting $x=c$ in the inequality implies that $g(c) \le f(c) \le h(c) = g(c)$ and therefore $g(c) = f(c) = h(c)$. For $\epsilon > 0$ you have found $\delta_g > 0$ and $\delta_h > 0$ such that $$ |x-c|< \delta_g \implies |g(x)-g(c)| < \epsilon \implies g(x) > g(c) - \epsilon\\ |x-c|< \delta_h \implies |h(x)-h(c)| < \epsilon \implies h(x) < h(c) + \epsilon $$ Now set $\delta = \min(\delta_g, \delta_h)$. Then $|x-c| < \delta$ implies $$ f(c) - \epsilon = g(c) - \epsilon < g(x) \le f(x) $$ and $$ f(x) \le h(x) < h(c) + \epsilon = f(c) + \epsilon $$ and therefore $$ f(c) - \epsilon < f(x) < f(c) + \epsilon \iff |f(x) - f(c)| < \epsilon \, . $$
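A quick numerical illustration of the squeeze (my example, not from the problem): take $g(x)=-x^2 \le f(x)=x^2\sin(1/x) \le h(x)=x^2$ with $f(0):=0$, so $g(0)=h(0)=0$ and the argument above forces continuity of $f$ at $c=0$, even though $f$ oscillates wildly there.

```python
import math

def f(x):
    # squeezed between -x**2 and x**2, so continuous at 0 with f(0) = 0
    return 0.0 if x == 0 else x * x * math.sin(1.0 / x)

# |f(x) - f(0)| is bounded by x**2, so it collapses quickly as x -> 0
deviations = [abs(f(10.0 ** -k) - f(0.0)) for k in range(1, 8)]
print(deviations)
```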
{ "language": "en", "url": "https://math.stackexchange.com/questions/4007901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cosine of complex variable Show that $|\cos z|\leq2$ for all $z \in B(0,r)\subset \mathbb{C}$ for some $r>0$. Can we have $|\cos z|\leq 1$ for all $z \in B(0,r)\subset \mathbb{C}$ for some $r>0$? I have found the answer for the first part but can't think of a solution for the second question. For the first part $$|\cos z|=\biggl|\frac{e^{iz}+e^{-iz}}{2}\biggr|$$ $$ =\biggl|\frac{e^{ix}\cdot e^{-y}+e^{-ix}\cdot e^{y}}{2}\biggr|$$ $$ \leq\biggl|\frac{e^{ix}\cdot e^{-y}}{2}\biggr|+ \biggl|\frac{e^{-ix}\cdot e^{y}}{2}\biggr| $$ $$ \text{But }\:|e^{-ix}|=|e^{ix}|=1$$ $$ =\biggl|\frac{e^{-y}}{2}\biggr|+ \biggl|\frac{e^{y}}{2}\biggr|$$ $$= \cosh y$$ $$\text{So }\: |\cos z|\leq \cosh y$$ and since $\cosh 0=1$ we can find a neighborhood around the point $0$ such that $|\cos z|\leq 2$. Since $\cosh x$ is a strictly increasing function I can't use it to prove that $|\cos z|\leq 1 $ for some neighborhood. Another method I can think of is taking partial derivatives of $|\cos z|$ along the $x$- and $y$-axes to check the direction in which it increases or decreases. But is there a simpler way of checking whether such a neighborhood exists?
For $r>0$, consider a disc $B(0,r)$. Then at points on the imaginary axis $(x=0)$ you have $f(y)=\cos (iy)=\cosh y=\frac{e^y+e^{-y}}{2}$. Since $f(y)$ attains its minimum value $1$ at $y=0$ and is an increasing function ($f'(y)>0$ for $y>0$), we have $f(y)\ge 1$, and by evenness the same holds for $y<0$. Thus $|\cos z|\le 1$ for all $z\in B(0,r)$ is impossible regardless of the choice of $r$: in fact $|\cos(iy)|>1$ for every $y\neq 0$.
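Both facts, $\cos(iy)=\cosh y \ge 1$ on the imaginary axis and the bound $|\cos z|\le\cosh(\operatorname{Im} z)$ derived in the question, are easy to spot-check numerically:

```python
import cmath
import math

# on the imaginary axis, |cos(iy)| = cosh(y) >= 1, with equality only at y = 0
ys = [0.0, 0.01, 0.1, 1.0]
vals = [abs(cmath.cos(1j * y)) for y in ys]

# and |cos z| <= cosh(Im z) at a few sample points
samples = [complex(0.3, 0.2), complex(-1.0, 0.5), complex(2.0, -0.7)]
bound_ok = all(abs(cmath.cos(z)) <= math.cosh(z.imag) + 1e-12 for z in samples)
print(vals, bound_ok)
```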
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Solve $\begin{cases}x^2+y^4=20\\x^4+y^2=20\end{cases}$ Solve $$\begin{cases}x^2+y^4=20\\x^4+y^2=20\end{cases}.$$ I was thinking about letting $x^2=u,y^2=v.$ Then we will have $$\begin{cases}u+v^2=20\Rightarrow u=20-v^2\\u^2+v=20\end{cases}.$$ If we substitute $u=20-v^2$ into the second equation, we will get $$v^4-40v^2+v+380=0$$ which I can't solve because we haven't studied any methods for solving equations of fourth degree (except $ax^4+bx^2+c=0$). Any other methods for solving the system?
First, you can guess the answer, which might be $(u, v) = (4, 4)$. Since there might exist other solutions, we need to think more. The next thing is: I believe that you can draw the graph of $u = 20 - v^{2}$ and $v = 20 -u^{2}$ on the $(u, v)$-plane, which gives two parabolas that are reflections of each other with respect to the line $u = v$. And, you have another guess: The other solutions do not satisfy $u\geq 0$ and $v\geq 0$, which should happen since $u = x^{2}$ and $v = y^{2}$. (Note that there are 3 more intersection points) So we can say confidently that the only solution is $(u, v) = (4, 4)$, but this is not a proof. There's another root that you can guess that corresponds to another intersection point with $u = v$ other than $(4, 4)$. If you set $u = v$, then you obtain $u = -5, 4$ by solving a quadratic equation. At last, to prove that this is the only solution, note that the degree 4 polynomial should be divisible by $(v-4)$ and $(v+5)$ since $4, -5$ are roots of it. We have $$ v^{4} - 40v^{2} +v + 380 = (v-4)(v+5)(v^{2} - v - 19) $$ and $v^{2} - v - 19 = 0 \Leftrightarrow v = \frac{1 \pm \sqrt{77}}{2}$, where in both cases the corresponding pair $(u, v) = (20 - v^{2}, v)$ fails to satisfy $(u \geq 0) \wedge (v \geq 0)$. So the only answer is $(u, v) = (4, 4) \Leftrightarrow (x, y) = (\pm 2, \pm 2)$.
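A quick exact check of the claimed solutions $(x,y)=(\pm 2,\pm 2)$ and of the quartic factorisation used above (integer arithmetic, so there is no rounding):

```python
# the four claimed solutions of the original system
solutions = [(2, 2), (2, -2), (-2, 2), (-2, -2)]
sol_ok = all(x**2 + y**4 == 20 and x**4 + y**2 == 20 for x, y in solutions)

def quartic(v):
    return v**4 - 40 * v**2 + v + 380

def factored(v):
    return (v - 4) * (v + 5) * (v**2 - v - 19)

# two degree-4 polynomials agreeing on 21 points must be identical
fact_ok = all(quartic(v) == factored(v) for v in range(-10, 11))
print(sol_ok, fact_ok, quartic(4), quartic(-5))
```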
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 3 }
Calculating probability of next event given remaining count and range for each (Apologies if my use of terms is off - I am not a mathematician by trade and what I did learn way back was in a different language) What I know: the total length of the stream, the body of possible events, the pop-count for each event, and the range of each event (essentially the index of the last instance of this event). What I want: Given an incoming stream of events I am trying to predict what the next event is likely to be, i.e. calculate the probability of the next event being a particular one of the known population. Example: Let's say the incoming stream is ABABCCCC (not known by the receiver) Known by the receiver: Length=8

       Count   Range
  A:     2       3
  B:     2       4
  C:     4       8

Ignoring the range component we can trivially calculate the probability of each event by dividing its pop-count into the total message length:

       Count   Range   Prob
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
For a point K inside a triangle show an equality Let ABC be a triangle whose heights are $h_a,h_b$ and $h_c$. Let $K$ be any point inside the triangle, and $d_a,d_b$ and $d_c$ the distances of $K$ from the sides $a,b$ and $c$, respectively. Show that $$\dfrac{d_a}{h_a}+\dfrac{d_b}{h_b}+\dfrac{d_c}{h_c}=1.$$ We can draw a line from $K$ to each of $A, B,$ and $C$, forming three triangles $KAB, KBC,$ and $KCA$. We know $$S_{KBC}+S_{KAC}+S_{KAB}=S_{ABC}$$ or $$\dfrac{ad_a}{2}+\dfrac{bd_b}{2}+\dfrac{cd_c}{2}=\dfrac{ah_a}{2}=\dfrac{bh_b}{2}=\dfrac{ch_c}{2}.$$ If we multiply the last by 2, we get $$ad_a+bd_b+cd_c=ah_a=bh_b=ch_c.$$ I don't see anything else. Any help would be appreciated. (I would love to hear your thoughts on the problem)
Now from your last equation, call $M=ah_a=bh_b=ch_c$. Then you have that $\frac{ad_a+bd_b+cd_c}{M}=1$, that is, $\frac{ad_a}{M}+\frac{bd_b}{M}+\frac{cd_c}{M}=1$, that is, $\frac{ad_a}{ah_a}+\frac{bd_b}{bh_b}+\frac{cd_c}{ch_c}=1$, which is what you want.
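The area identity at the heart of the proof, $\frac{d_a}{h_a}=\frac{[KBC]}{[ABC]}$ and its two analogues (each small triangle shares its base with $ABC$), can be checked numerically for random interior points; a sketch with a concrete triangle of my choosing:

```python
import random

random.seed(2)

def area(p, q, r):
    # shoelace formula for the area of a triangle
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)
S = area(A, B, C)

devs = []
for _ in range(100):
    # random interior point K as a convex combination of the vertices
    w = sorted([random.random(), random.random()])
    l1, l2, l3 = w[0], w[1] - w[0], 1.0 - w[1]
    K = (l1 * A[0] + l2 * B[0] + l3 * C[0],
         l1 * A[1] + l2 * B[1] + l3 * C[1])
    # d_a/h_a + d_b/h_b + d_c/h_c = ([KBC] + [KAC] + [KAB]) / [ABC]
    total = (area(K, B, C) + area(K, A, C) + area(K, A, B)) / S
    devs.append(abs(total - 1.0))
max_dev = max(devs)
print(max_dev)
```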
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Evaluate the lateral surface defined by a set rotated 360 degree around the z axis Let $D=\left\{(y,z): \sqrt{y} \leq z \leq 2-\sqrt{y} \right\}$. Rotate $D$ $360$ degree around the $z$ axis generating the volume $V_z$ and evaluate the lateral surface of $V_z$, $|\partial V_z|$. For evaluating the lateral surface of $V_z$, I considered these two curves: * *$\gamma_1(t) = (t,\sqrt{t})$, $0 \leq t \leq 1$; *$\gamma_2(t) = (t,2- \sqrt{t})$, $0 \leq t \leq 1$. So I continued by doing the integral $$|\partial V_z| = 2\pi\int_{0}^{1}t\sqrt{1+\frac{1}{4t}}dt +2\pi\int_{0}^{1}t\sqrt{1+\frac{1}{4t}}dt = 4\pi\int_{0}^{1}t\sqrt{1+\frac{1}{4t}}dt.$$ Any tips on how to solve this integral? Is it possible to evaluate the surface with a different approach?
The surface area of a solid of revolution around the $z$-axis is given by $$S = 2\pi\int_{a}^{b} f(z) \sqrt{1 + f'(z)^{2}}\, dz.$$ In your case we have two functions: $f_1(z)=z^2$ (from $z=\sqrt{y}$) over the interval $[0,1]$ and $f_2(z)=(2-z)^2$ (from $z=2-\sqrt{y}$) over the interval $[1,2]$. Therefore $$|\partial V_z|=2\pi\int_{0}^{1} f_1(z) \sqrt{1 + f_1'(z)^{2}}\, dz+2\pi\int_{1}^{2} f_2(z) \sqrt{1 + f_2'(z)^{2}}\, dz$$ Note that, by symmetry, the two integrals are equal. Moreover $$\int_{0}^{1} f_1(z) \sqrt{1 + f_1'(z)^{2}}\, dz=\int_{0}^{1} z^2 \sqrt{1 + 4z^2}\, dz=\int_{0}^{\text{arcsinh}(2)}\cosh^2(t)\sinh^2(t)\,dt$$ where we applied the substitution $2z=\sinh(t)$. Can you take it from here? You may also take a look at How to evaluate integral: $\int x^2\sqrt{x^2+1}\;dx.$ P.S. Your approach is correct and after letting $t=z^2$ we find the same integral $$\int_{0}^{1}t\sqrt{1+\frac{1}{4t}}\,dt=\int_{0}^{1} z^2 \sqrt{1 + 4z^2}\, dz.$$ I don't see an easier way to find this area.
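The substitution $t=z^2$ mentioned in the postscript can be confirmed numerically; a rough midpoint-rule sketch comparing the two forms of the integral:

```python
import math

def midpoint(f, a, b, n=100000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# original form from the question, and the substituted form t = z**2
I1 = midpoint(lambda t: t * math.sqrt(1 + 1 / (4 * t)), 0.0, 1.0)
I2 = midpoint(lambda z: z * z * math.sqrt(1 + 4 * z * z), 0.0, 1.0)
print(I1, I2)
```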
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why defining homotopy on functions instead of spaces? Apparently, the definition of homotopy formalizes the idea of continuous transformation between two things. (*) Let's take this motivating example: I have $ S,S' \subset \mathbb{R}^3 $ two surfaces in space. If I wanted to define a continuous transformation between them I would request for a continuous function $\Gamma:S\times [0,1] \rightarrow \mathbb{R}^3$ to exist such that $\Gamma(\cdot,0)=id|_S $ and $Im(\Gamma(\cdot,1))=S' $. Instead, the definition in use is not between spaces ($S$ and $S'$) but between functions. I think my definition (*) would be a special case of the definition in use. In fact, by posing $f:=\Gamma(\cdot,0)=id|_S$ and $f':=\Gamma(\cdot,1)=id|_{S'}$ I have that $f\simeq f'$. Can someone clarify what's the limitations (or what's wrong with) my definition (*)?
I think this is a very nice question, and your definition (*) is very close to the definition of an isotopy. You want to also require that $\Gamma(S,t)$ is an embedding for all $t$ so that you can't collapse everything to a point (as was pointed out in the comments). Otherwise, any two maps would be equivalent. I think it's useful as well to compare isotopy to the notion of homotopy equivalence. As an example the circle is homotopy equivalent to itself via the identity map, but it has many distinct (up to isotopy) embeddings in $\mathbb{R}^3$. Specifically, the unknot and trefoil can be shown to be non-isotopic using various knot invariants.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Numerically evaluate a series involving Stirling numbers of the second kind How can I numerically evaluate the following for $n$ above one million: $$ \sum_{k\ge n} \sum_{i=0}^{n-1} (-1)^i {{n-1}\choose{i}} \left( \frac{n-1-i}{n} \right)^{k-1} k (k+1)$$? I have tried changing the order of summation and then getting a closed form for the geometric series in $k$, but the terms are large (even for $n = 100$) and I run into precision problems.
Too long for comments. Starting from the point where @Phicar wrote $$S_n=2\sum _{k= n}^\infty\left (\frac{n-1}{n}\right )^{k-1}\binom{k+1}{2}$$ we have $$S_n=\sum_{k= n}^\infty k (k+1) \left(\frac{n-1}{n}\right)^{k-1}=(n-1)^{n-1} n^{3-n} (5 n-3)$$ and for large values of $n$ $$S_n=\frac 1e\left(5 n^3-\frac{n^2}{2}-\frac{n}{24}+\frac{1}{16}+\frac{95}{1152 n}+\frac{917}{11520 n^2}+O\left(\frac{1}{n^3}\right)\right)$$ which seems to be quite different; however, none of these match the numbers that you provided as examples.
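The closed form $(n-1)^{n-1} n^{3-n} (5n-3)$ for $\sum_{k\ge n} k(k+1)\left(\frac{n-1}{n}\right)^{k-1}$ is easy to check against direct summation for small $n$ (for $n=2$ both give $14$):

```python
def closed_form(n):
    return (n - 1) ** (n - 1) * n ** (3 - n) * (5 * n - 3)

def direct(n, terms=5000):
    # truncated tail sum; the ratio r = (n-1)/n makes the tail negligible
    r = (n - 1) / n
    return sum(k * (k + 1) * r ** (k - 1) for k in range(n, n + terms))

checks = [(n, closed_form(n), direct(n)) for n in range(2, 8)]
max_rel_err = max(abs(c - d) / c for _, c, d in checks)
print(checks, max_rel_err)
```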
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
closeness of a set of probability distributions how can one check if a family of distributions is closed? what does closeness of a set of distributions mean? I understand the concept of closed sets in the context ofEuclidean geometry, but I have difficulty understanding what these concepts mean for a set of probabilities.
It all depends (by definition, really) on what topology you put on the set of distributions. There are probably several that are used (weak topology, vague topology IIRC), depending on context. So go find out what that context is and you'll know what closed means. It probably won't be easy, depending on how strong your background in general topology or functional analysis or measure theory is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4008978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a consistent Estimator for $\mathbb{E}(X^2)$ Given $X_1,\dots,X_n\stackrel{iid}{\sim}f(x;\mu,\sigma^2)$ with population mean $\mu$ and population standard deviation $\sigma$. I want to find a consistent estimator for $\mathbb{E}(X^2)$. By the variance formula we know $\mathbb{E}(X^2)=\mathrm{Var}(X_i)+[\mathbb{E}(X_i)]^2=\sigma^2+\mu^2$. So if I am not mistaken, $\sigma^2+\mu^2$ is our estimand and we want to find an estimator that will converge to this estimand as $n\rightarrow\infty$. $\color{red}{\text{Correct me if I'm wrong!}}$ We know $\bar{X}\stackrel{p}{\longrightarrow}\mu$, and since squaring is continuous, $\bar{X}^2\stackrel{p}{\longrightarrow}\mu^2$. $\color{red}{\text{Correct me if I'm wrong!}}$ But I do not know how to proceed from here. It does not feel right to say $\mathbb{E}(\bar{X}^2)\stackrel{p}{\longrightarrow}\mu^2+\sigma^2$. I am assuming I will have to show it converges using Chebyshev's inequality. I know in general $\mathbb{E}(\bar{X}^2)=\frac{\sigma^2}{n}+\mu^2$, so I was considering $n\bar{X}$ $\color{red}{\text{(is this a good idea?)}}$ Any help or guidance would be appreciated!
If you want to estimate $\mathbb{E}[X^2]$, a natural estimator would be to simply take the sample mean of $X_i^2$. Then by the weak law of large numbers, $\frac{1}{n} \sum^n_{i=1} X_i^2 \stackrel{p}{\rightarrow} \mathbb{E}[X^2]$. If you want to use your approach (which seems to also work when pushed a bit), a useful tool would be the continuous mapping theorem. Thus we know $\overline{X} \stackrel{p}{\rightarrow} \mu$, and so since squaring is continuous, indeed $\overline{X}^2 \stackrel{p}{\rightarrow} \mu^2$. For the variance, we can use the usual sample variance estimator, i.e. $\frac{1}{n} \sum^n_{i=1} (X_i - \overline{X})^2$, which converges to $\sigma^2$ in probability. Then the sum of these two estimators will also converge to $\mu^2 + \sigma^2$ in probability (again, by the continuous mapping theorem).
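A quick simulation sketch (standard-library Python; the parameter values $\mu = 2$, $\sigma = 3$ are arbitrary examples, so $\mathbb{E}[X^2] = \mu^2 + \sigma^2 = 13$). It also illustrates that the two routes in the answer coincide exactly, since $\bar{X}^2 + \frac{1}{n}\sum_i (X_i - \bar{X})^2 = \frac{1}{n}\sum_i X_i^2$ algebraically:

```python
import random

def second_moment_estimators(xs):
    # Two consistent estimators of E[X^2]:
    #  (1) the plug-in sample mean of X_i^2
    #  (2) (sample mean)^2 + (1/n) * sum of squared deviations
    n = len(xs)
    xbar = sum(xs) / n
    plug_in = sum(x * x for x in xs) / n
    combined = xbar ** 2 + sum((x - xbar) ** 2 for x in xs) / n
    return plug_in, combined

random.seed(0)
mu, sigma = 2.0, 3.0   # example parameters: E[X^2] = 13
xs = [random.gauss(mu, sigma) for _ in range(200_000)]
plug_in, combined = second_moment_estimators(xs)
```

Both numbers agree to floating-point precision and land close to 13; increasing the sample size tightens the estimate, as consistency predicts.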
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what is the circle with the smallest radius that contains the three points $(0,0)$ , $(1,1)$, and $(2,3)$? How to write a model that gives us a circle with the smallest radius that contains the three points $(0,0)$, $(1,1)$, and $(2,3)$? I've tried to model this as follows: if $(x, y)$ is the location and $r$ is the radius: $ \min \ \ x^2+y^2 = r^2$ $ x^2+y^2 \le 0$ $ (x-1)^2+(y-1)^2 \le 1$ $ (x-2)^2+(y-3)^2 \le \sqrt{13}$
How about: min $r^2$ subject to: $(x-0)^2 + (y-0)^2 \leq r^2$ $(x-1)^2 + (y-1)^2 \leq r^2$ $(x-2)^2 + (y-3)^2 \leq r^2$
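For three points one can also solve this model directly: the smallest enclosing circle is either the circle whose diameter is a pair of the points (when it covers the third) or the circumcircle of the triangle. A hedged Python sketch (the function is my own illustration, not a library routine):

```python
import math

def min_enclosing_circle_3(p1, p2, p3):
    # Returns (cx, cy, r). Try each pair-as-diameter circle first;
    # fall back to the circumcircle if none of them covers all points.
    pts = [p1, p2, p3]
    best = None
    for i in range(3):
        for j in range(i + 1, 3):
            a, b = pts[i], pts[j]
            cx, cy = (a[0] + b[0]) / 2, (a[1] + b[1]) / 2
            r = math.dist(a, b) / 2
            if all(math.dist((cx, cy), p) <= r + 1e-12 for p in pts):
                if best is None or r < best[2]:
                    best = (cx, cy, r)
    if best is not None:
        return best
    # Circumcircle via the standard determinant formula
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy, math.dist((ux, uy), p1))
```

Here the pair $(0,0)$ and $(2,3)$ already works: the circle centered at $(1, 1.5)$ with radius $\sqrt{13}/2 \approx 1.803$ contains $(1,1)$, so the optimum of the model above is $r = \sqrt{13}/2$.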
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\operatorname{tr}(BA)$ where $A$ is low rank and has a subset of eigenvalues from $B^{-1}$ Most of the question is in the title. Suppose that I have $B$ a square $N\times N$ positive definite matrix, and $A$ a low-rank square matrix with its non-zero eigenvalues being a subset of the spectrum of $B^{-1}$. Is there a simple way to compute the trace : $\mathrm{tr}(BA)$? My intuition is that if $M$ is the rank of $A$, then $B$ and $A$ can share all $N$ eigenvectors and that one can diagonalize $A$ and $B$ and get automatically $\mathrm{tr}(BA) = \sum_{i:\lambda_i>0} \frac{\lambda_i}{\lambda_i} = M$ where $\lambda_i$ are the eigenvalues of $A$. But I am not sure how to obtain the proper ordering etc. Thanks for your help!
Without further information, there is no way to use the eigenvalues of $A$ in this computation. However, we can make use of the fact that $A$ is low-rank. In particular, if $A = CF$ is a rank factorization (so that $C$ is $N \times r$ and $F$ is $r \times N$), then we have $$ \operatorname{tr}(BA) = \operatorname{tr}(BCF) = \operatorname{tr}(FBC). $$ That is, we may rewrite the expression as the trace of an $r \times r$ matrix.
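A small numerical illustration of that reduction, in plain Python (the matrices are arbitrary examples of mine; $B$ is symmetric and diagonally dominant, hence positive definite, and $A = CF$ has rank $2$):

```python
def matmul(A, B):
    # Naive matrix product for small lists-of-lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def trace(M):
    return sum(M[i][i] for i in range(len(M)))

B = [[4, 1, 0, 0], [1, 3, 1, 0], [0, 1, 5, 1], [0, 0, 1, 2]]
C = [[1, 0], [2, 1], [0, 3], [1, 1]]   # 4 x 2
F = [[1, 0, 2, 1], [0, 1, 1, 0]]       # 2 x 4
A = matmul(C, F)                       # rank-2, 4 x 4

lhs = trace(matmul(B, A))                # trace of a 4 x 4 product
rhs = trace(matmul(matmul(F, B), C))     # trace of a 2 x 2 product
```

The cyclic property guarantees `lhs == rhs` exactly here (integer arithmetic), while the right-hand computation only ever forms an $r \times r$ matrix.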
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Describe all functions which are defined as $f:\mathbb R \to \mathbb R $ and satisfying $f(x)=\frac1{f(x)}$ Describe all functions which are defined as $f:\mathbb R \to \mathbb R $ and satisfy $$f(x)=\dfrac1{f(x)}$$ My Attempt: If $f$ is a constant function, then $$(f(x))^2=1$$ so $f(x)=1$ or $f(x)=-1$. I cannot think of a non-constant function satisfying this. What about $f$ being continuous?
$f$ satisfies this equation iff there exists a set $A$ such that $f =2\chi_A-1$. If $f$ is continuous then it is identically $1$ or identically $-1$. [For the proof define $A$ as $\{x: f(x)=1\}$; $ \chi_A$ is defined by $ \chi_A(x)=1$ if $ x \in A$ and $0$ if $x \notin A$].
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Equivalent definitions of convex hull $\newcommand{\N}{\mathbb{N}}$ The convex hull $H$ of a subset $A$ of $X$ is defined as $$ H = \{z \in X~|~\exists x,y \in A:\exists t \in [0,1]:z=(1-t)x+ty\} $$ Then, can we show that any convex combination of elements of $A$ is in $H$? That is, $$ \forall n \in \N: P(n) $$ where $$ P(n) \Leftrightarrow \forall (x_i)_{i \leq n} \subseteq A: \forall (t_i)_{i \leq n} \subseteq [0,1]: \sum_{i=1}^n{t_i = 1} \Rightarrow \sum_{i=1}^n {t_i x_i} \in H $$ I tried to show by mathematical induction: First, it is obvious that $A \subseteq H$ because we can pick an element of $A$ twice; for all $x \in A$, $(1-t)x+tx = x \in H$. Let $n = 1$. Then, $x_1 \in A$ and $t_1 = 1$ implies that $\sum_{i=1}^n {t_i x_i} = t_1 x_1 =x_1 \in A \subseteq H$. Therefore, $P(1)$ holds. Now, let $n$ be arbitrary and suppose $P(n)$ holds. Let $(x_i)_{i \leq n+1} \subseteq A$, $(t_i)_{i \leq n+1} \subseteq [0,1]$, and $\sum_{i=1}^{n+1}t_i = 1$. Case 1) $t_{n+1} = 1$. Then, $t_i = 0$ for all $i \leq n$, which gives $\sum_{i=1}^{n+1}t_ix_i = x_{n+1} \in A \subseteq H $. Case 2) $t_{n+1} \neq 1$. From $\sum_{i=1}^{n+1}t_i = \sum_{i=1}^{n}t_i + t_{n+1} = 1$, we have $\sum_{i=1}^{n}\frac{t_i}{1-t_{n+1}} = 1$. Thus, by $P(n)$, $\sum_{i=1}^{n}\frac{t_i}{1-t_{n+1}}x_i \in H$. However, we cannot conclude that $$ \sum_{i=1}^{n+1}t_i x_i = (1-t_{n+1})\sum_{i=1}^{n}\frac{t_i}{1-t_{n+1}}x_i + t_{n+1}x_{n+1} \in H $$ because it is not guaranteed that $$ \sum_{i=1}^{n}\frac{t_i}{1-t_{n+1}}x_i \in A $$ whereas $x_{n+1} \in A$.
There is a major problem with your definition of the convex hull $H$: it is not a convex set! As an example, let $A$ be 3 points in the plane that form the vertices of a triangle. Now, all of the edge points are included in $H$ because they are a convex combination of two of the vertices. However, no interior point of the triangle can be written as a convex combination of two of the vertices (since a convex combination of two points is collinear with them). This is a problem: now the line connecting the midpoints of two edges is not contained in $H$, breaking convexity. I realized this issue while thinking about the end of your argument. Because you showed that $y := \sum_{j = 1}^n \frac{t_j}{1 - t_{n + 1}} x_j \in H$, you'd be done if you could conclude $$(1 - t_{n + 1})y + t_{n + 1}x_{n + 1} \in H$$ using the convexity of $H$. So I tried to verify that $H$ is convex and failed for a while.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the minimum value of cos(2x)+cos(4x)+cos(6x)+cos(8x)+...+cos(20x) I want to find the minimum value of the series $\cos(2x)+\cos(4x)+\cos(6x)+\cos(8x)+\dots+\cos(2nx)$, where $x$ could be $2\pi t$. Can anyone share a method of how to determine the minimum value of the series?
You can use $\displaystyle \sum_{k=1}^{n} \cos(2kx)=\Re\left(\sum_{k=1}^{n} e^{2ikx}\right)$ and sum the geometric series on the right.
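Carrying this through: summing the geometric series $\sum_{k=1}^{n} e^{2ikx}$ and taking real parts gives the closed form $\sum_{k=1}^{n}\cos(2kx) = \frac{\sin(nx)\cos((n+1)x)}{\sin x}$ for $\sin x \neq 0$, which is cheap to scan for a minimum. A Python sketch (the grid scan is only an illustration, not a rigorous optimizer):

```python
import math

def cos_sum_direct(x, n):
    return sum(math.cos(2 * k * x) for k in range(1, n + 1))

def cos_sum_closed(x, n):
    # Real part of the geometric series sum_{k=1}^n e^{2ikx}
    s = math.sin(x)
    if abs(s) < 1e-12:          # x a multiple of pi: every term is cos(0) = 1
        return float(n)
    return math.sin(n * x) * math.cos((n + 1) * x) / s

n = 10
xs = [i * math.pi / 20000 for i in range(20001)]   # one period is pi
min_val = min(cos_sum_closed(x, n) for x in xs)
```

The direct sum and the closed form agree pointwise, and the grid minimum gives a good numerical approximation of the true minimum over a period.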
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
General solution of $\cos (2\arctan x) = \frac{{1 - x^2 }}{{1 + x^2 }}?$ How to find the general solution of $$\cos (2\arctan x) = \frac{{1 - x^2 }}{{1 + x^2 }}$$
You can start by substituting both sides into the $\arccos()$ function. (Take $x \ge 0$ first, so that $2\arctan x \in [0,\pi)$ and hence $\arccos(\cos(2\arctan x)) = 2\arctan x$; both sides of the original equation are even in $x$, so this case is enough.) $$\arccos(\cos(2 \arctan x)) = \arccos\Biggl(\frac{1-x^2}{1+x^2}\Biggl)$$ $$2 \arctan x =\arccos\Biggl(\frac{1-x^2}{1+x^2}\Biggl)$$ Then you can take the derivative of both sides, using $\sqrt{4x^2}=2x$ for $x > 0$: $$\begin{align} \ \frac{2}{1+x^2} &=\ \frac{1}{\sqrt{1-\Bigl(\frac{1-x^2}{1+x^2}\Bigr)^2}} \frac{4x}{(1+x^2)^2} \\ &=\ \sqrt{\frac{(1+x^2)^2}{4x^2}} \frac{4x}{(1+x^2)^2} \\ \end{align}$$ If you organize the right-hand side of the equation, you eventually get: $$\bbox[yellow] {\frac{2}{1+x^2}=\frac{2}{1+x^2}}$$ So the two sides have the same derivative, and they also agree at $x=0$ (both equal $0$), hence they are equal for all $x \ge 0$; by evenness, the equation holds for all real $x$ values.
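Alternatively, the identity follows at once from the double-angle formula $\cos 2\theta = \frac{1-\tan^2\theta}{1+\tan^2\theta}$ with $\theta = \arctan x$, and a quick numerical spot-check (Python) confirms it on both signs of $x$:

```python
import math

def lhs(x):
    return math.cos(2.0 * math.atan(x))

def rhs(x):
    return (1.0 - x * x) / (1.0 + x * x)

# Spot-check negative, zero, and large arguments
samples = [-100.0, -5.0, -1.0, -0.3, 0.0, 0.3, 1.0, 5.0, 100.0]
max_err = max(abs(lhs(x) - rhs(x)) for x in samples)
```

Every sample agrees to machine precision, consistent with the identity holding on all of $\mathbb{R}$.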
{ "language": "en", "url": "https://math.stackexchange.com/questions/4009924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to generate a random vector, guaranteed to be within the hemisphere with respect to another vector? Given a normalized vector N, how can one generate a random direction vector that is guaranteed to be in the hemisphere with respect to N (i.e. the hemisphere where N is exactly in the middle)? The way I am currently doing this is to sample a random direction vector d and dot it with N and keep that vector if the dot product is greater than 0. This method is not guaranteed to generate a vector in the hemisphere that I am interested in as ~50% of the random vectors would have a 0 or negative dot product result. I saw somewhere that there is a way to transform the randomly generated vector d to place it in the right hemisphere using a transformation matrix but I don't know how to do it. Can someone write a [pseudo]code for how one generate a direction vector using the transformation method?
To answer your followup question from the comments: If you're in 3D, we have the convenient fact that the $z$ coordinate of a random point on the unit sphere is uniformly distributed. So if you take $z\sim U(\cos\theta,1)$, $\phi\sim U(0,2\pi)$, $r=\sqrt{1-z^2}$, then $\left(r\cos\phi,r\sin\phi, z\right)$ is a random point on the sphere within $\theta$ radians of the positive $z$-axis. To transform this to be within $\theta$ radians of $N$, multiply it by an orthonormal matrix whose last column is $N$. (In other words, apply the transformation $(x,y,z)\mapsto(xV,yW,zN)$ where $V$ and $W$ are unit vectors that are orthogonal to $N$ and to each other.)
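A direct transcription of this recipe into standard-library Python, with $\theta = 90^\circ$ so that $z \sim U(0,1)$; the helper-axis trick used here to build the orthonormal frame $(V, W, N)$ is one common choice, not the only one:

```python
import math
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(a):
    s = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    return (a[0] / s, a[1] / s, a[2] / s)

def sample_hemisphere(n):
    """Uniform random unit vector within 90 degrees of the unit vector n."""
    z = random.uniform(0.0, 1.0)              # z of a uniform sphere point is uniform
    phi = random.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    lx, ly, lz = r * math.cos(phi), r * math.sin(phi), z
    # Orthonormal frame (v, w, n): pick any axis not parallel to n
    helper = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    v = normalize(cross(n, helper))
    w = cross(n, v)                           # already unit length
    return (lx * v[0] + ly * w[0] + lz * n[0],
            lx * v[1] + ly * w[1] + lz * n[1],
            lx * v[2] + ly * w[2] + lz * n[2])
```

Every sample is a unit vector with non-negative dot product against `n`, so no rejection step is needed.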
{ "language": "en", "url": "https://math.stackexchange.com/questions/4010111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
if $a^{2}+b^{2}=1$ Prove that $-\sqrt{2} \le a+b \le \sqrt{2}$ $a,b \in \mathbb{R}$ My attempt: $$a^{2}+b^{2}=1 \iff a^{2}+2ab+b^{2}=1+2ab$$ $$(a+b)^2=1+2ab \iff | a+b | =\sqrt{1+2ab} \iff a+b = \pm\sqrt{1+2ab}$$ But notice that $\sqrt{1+2ab} \in \mathbb{R} \iff 1+2ab \geq 0 \iff 2ab \geq -1$. Let's take the case where $2ab=-1$: $$a^{2}+2ab+b^{2}=0 \iff a+b=0 \iff a=-b.$$ When I plug this result into the original equation, I get: $$b=\pm\frac{1}{\sqrt{2}}$$ And now, I don't know where to go.
Hint (1): Why did you choose the case $2ab = -1$ ? Well, as you showed, this is the smallest possible value for $2ab$ . What can you say about $|a+b|$ when $2ab > -1$ ? Hint (2) for a different method: Consider a circle of radius 1 around the center of coordinates. The equation of this circle is $x^2 + y^2 = 1$ . Now consider the set of parallel lines $x+y = c$ where $c$ can be any number. Some of these lines touch the circle. Among those lines, which one has the highest value of $c$ ?
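For what it's worth, the asker's own approach can be finished without any case analysis, since AM–GM gives $2ab \le a^2 + b^2 = 1$ (a sketch in LaTeX):

```latex
(a+b)^2 = a^2 + 2ab + b^2 = 1 + 2ab \le 1 + (a^2 + b^2) = 2
\quad\Longrightarrow\quad |a+b| \le \sqrt{2}.
```

Equality in $2ab \le a^2 + b^2$ forces $a = b$, so the bounds $\pm\sqrt{2}$ are attained exactly at $a = b = \pm\frac{1}{\sqrt{2}}$.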
{ "language": "en", "url": "https://math.stackexchange.com/questions/4010205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 3 }