Fixed field of a subgroup of a Galois group For the Galois group $\operatorname{Gal}(\mathbb{Q}(\sqrt2, \sqrt3, \sqrt5)/\mathbb{Q})$, I'm trying to understand how to find the permutations of the roots and how the subgroups of the Galois group are related to their fixed fields. Take, for example, the permutation that takes $\sqrt3$ to $-\sqrt3$, $\sqrt5$ to $-\sqrt5$, and fixes the other roots (let's call the permutation $\alpha$). Is $\{\varepsilon, \alpha\}$ a subgroup of the Galois group? If so, is its fixed field $\mathbb{Q}(\sqrt2, \sqrt{15})$? Or is it $\mathbb{Q}(\sqrt2)$? Or something else entirely? If I used any terminology incorrectly, or used incorrect logic, please correct me. Thanks in advance!
By definition, the Galois group $\operatorname{Gal}(L/K)$ consists of all $K$-linear automorphisms of $L$, where $K$-linearity is in the sense of linear algebra. For your example, you have $\mathbb Q$-linear maps acting on $L = \mathbb Q(\sqrt 2, \sqrt 3, \sqrt 5)$, and $L$ is the $\mathbb Q$-algebra generated by the set $\{1,\sqrt 2, \sqrt 3, \sqrt 5\}$; that means we only need to know where $\{1,\sqrt 2, \sqrt 3, \sqrt 5\}$ are sent to fully determine an automorphism of $L$. $\mathbb Q$-linearity precisely means that $\varphi(1) = 1$ for any such automorphism $\varphi$. But where do the other roots go? Let's say we want to check where $\alpha$ goes, and let $f$ be its minimal polynomial over $\mathbb Q$. Then $0 = \varphi(f(\alpha)) = f(\varphi(\alpha))$, thus $\varphi(\alpha)$ is a root of the same polynomial as $\alpha$ ($f$ and $\varphi$ commute because $f$ is a polynomial and $\varphi$ is a ring homomorphism). So, in your case, $\sqrt 2$ goes to $\pm \sqrt 2$ because the minimal polynomial of $\sqrt 2$ is $x^2 - 2$; similarly, $\sqrt 3$ maps to $\pm \sqrt 3$ and $\sqrt 5$ maps to $\pm\sqrt 5$. Thus, there are only $8$ possible automorphisms, so the Galois group is a group with $8$ elements. Now, for the subgroups, the fundamental theorem of Galois theory tells us that any subgroup of the Galois group corresponds to a subextension of fields. How do you get this field? Well, for a subgroup $H$, the intermediate field $L'$ is given by the set $\{ x\in L \ | \ \varphi(x) = x,\ \forall \varphi\in H\}$. And yes, to your question, the subgroup generated by $\varphi = [\sqrt 2\mapsto \sqrt 2, \sqrt 3\mapsto -\sqrt 3, \sqrt 5\mapsto -\sqrt 5]$ is $\{\mathrm{id}, \varphi\}$ because $\varphi^2 = \mathrm{id}$. Timbuc already wrote an explicit basis for $L$, and $\varphi$ fixes $\{1,\sqrt 2, \sqrt {15}, \sqrt {30}\}$, which corresponds to the field $\mathbb Q(1,\sqrt 2, \sqrt {15}, \sqrt {30}) = \mathbb Q(\sqrt 2, \sqrt{15})$ since the other generators can be obtained by multiplication.
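A small combinatorial sketch of this count (not from the answer; it identifies each automorphism with a sign pattern and checks which basis elements the map $\varphi$ fixes):

```python
from itertools import product

# A basis of L = Q(sqrt2, sqrt3, sqrt5) over Q: square roots of the
# squarefree products 2^a * 3^b * 5^c, encoded as exponent triples.
basis = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Each automorphism is determined by a sign choice for sqrt2, sqrt3, sqrt5;
# it multiplies sqrt(2^a * 3^b * 5^c) by s2^a * s3^b * s5^c.
automorphisms = list(product((1, -1), repeat=3))

def fixed_basis(signs):
    s2, s3, s5 = signs
    return {(a, b, c) for (a, b, c) in basis if s2**a * s3**b * s5**c == 1}

phi = (1, -1, -1)         # sqrt2 -> sqrt2, sqrt3 -> -sqrt3, sqrt5 -> -sqrt5
fixed = fixed_basis(phi)  # exponent triples of 1, sqrt2, sqrt15, sqrt30
```

The fixed set comes out as exactly $\{1,\sqrt2,\sqrt{15},\sqrt{30}\}$, matching the field $\mathbb Q(\sqrt2,\sqrt{15})$ above.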
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Evaluate $ \int^\infty_0\int^\infty_0 x^a y^{1-a} (1+x)^{-b-1}(1+y)^{-b-1} \exp(-c\frac{x}{y})dxdy $ Evaluate $$ \int^\infty_0\int^\infty_0 x^a y^{1-a} (1+x)^{-b-1}(1+y)^{-b-1} \exp(-c\frac{x}{y})dxdy $$ under the condition $a>1$, $b>0$, $c>0$. Note that none of $a$, $b$ and $c$ is an integer. Mathematica found the following form, but I would prefer a more compact expression for faster numerical evaluation. $$ \Bigg(2 c^2 \Gamma (a-2) \Gamma (b-2) \Gamma (b-1) \Gamma (b+1) \Gamma (-a+b+1) \, _2F_2(3,b+1;3-a,3-b;-c) \nonumber\\ ~~~ -\pi \csc (\pi a) \Big(\pi c^b \Gamma (b+1) \Gamma (2 b-1) (\cot (\pi (a-b))+\cot (\pi b)) \, _2F_2(b+1,2 b-1;b-1,-a+b+1;-c) \nonumber\\ ~~~~~ ~~~~~ +(a-1) a c^a \Gamma (b-1) \Gamma(b-a) \Gamma (-a+b+1) \Gamma (a+b-1) \, _2F_2(a+1,a+b-1;a-1,a-b+1;-c)\Big)\Bigg) \nonumber\\ ~~~~~ ~~~~~ \bigg/\Big(c^a{\Gamma (b-1) \Gamma (b+1)^2 \Gamma (-a+b+1)}\Big) $$
$\int_0^\infty\int_0^\infty x^ay^{1-a}(1+x)^{-b-1}(1+y)^{-b-1}e^{-c\frac{x}{y}}~dx~dy$ $=\Gamma(a+1)\int_0^\infty y^{1-a}(1+y)^{-b-1}U\left(a+1,a-b+1,\dfrac{c}{y}\right)dy$ (according to http://en.wikipedia.org/wiki/Confluent_hypergeometric_function#Integral_representations) $=\dfrac{\Gamma(a+1)\Gamma(b-a)}{\Gamma(b+1)}\int_0^\infty y^{1-a}(1+y)^{-b-1}{_1F_1}\left(a+1,a-b+1,\dfrac{c}{y}\right)dy+c^{b-a}\Gamma(a-b)\int_0^\infty y^{1-b}(1+y)^{-b-1}{_1F_1}\left(b+1,b-a+1,\dfrac{c}{y}\right)dy$ (according to http://en.wikipedia.org/wiki/Confluent_hypergeometric_function#Kummer.27s_equation) $=\dfrac{\Gamma(a+1)\Gamma(b-a)}{\Gamma(b+1)}\int_\infty^0\left(\dfrac{1}{y}\right)^{1-a}\left(1+\dfrac{1}{y}\right)^{-b-1}{_1F_1}(a+1,a-b+1,cy)~d\left(\dfrac{1}{y}\right)+c^{b-a}\Gamma(a-b)\int_\infty^0\left(\dfrac{1}{y}\right)^{1-b}\left(1+\dfrac{1}{y}\right)^{-b-1}{_1F_1}(b+1,b-a+1,cy)~d\left(\dfrac{1}{y}\right)$ $=\dfrac{\Gamma(a+1)\Gamma(b-a)}{\Gamma(b+1)}\int_0^\infty y^{a+b-2}(y+1)^{-b-1}{_1F_1}(a+1,a-b+1,cy)~dy+c^{b-a}\Gamma(a-b)\int_0^\infty y^{2b-2}(y+1)^{-b-1}{_1F_1}(b+1,b-a+1,cy)~dy$ $=\dfrac{\Gamma(a+1)\Gamma(b-a)}{\Gamma(b+1)}\int_0^\infty\sum\limits_{n=0}^\infty\dfrac{(a+1)_nc^ny^{a+b+n-2}(y+1)^{-b-1}}{(a-b+1)_nn!}dy+c^{b-a}\Gamma(a-b)\int_0^\infty\sum\limits_{n=0}^\infty\dfrac{(b+1)_nc^ny^{2b+n-2}(y+1)^{-b-1}}{(b-a+1)_nn!}dy$ $=\dfrac{\Gamma(a+1)\Gamma(b-a)}{\Gamma(b+1)}\sum\limits_{n=0}^\infty\dfrac{(a+1)_nc^nB(a+b+n-1,2-a-n)}{(a-b+1)_nn!}+c^{b-a}\Gamma(a-b)\sum\limits_{n=0}^\infty\dfrac{(b+1)_nc^nB(2b+n-1,2-b-n)}{(b-a+1)_nn!}$ (according to http://en.wikipedia.org/wiki/Beta_function#Properties) $=\sum\limits_{n=0}^\infty\dfrac{\Gamma(a+1)\Gamma(b-a)(a+1)_n\Gamma(a+b+n-1)\Gamma(2-a-n)c^n}{\Gamma(b+1)\Gamma(b+1)(a-b+1)_nn!}+\sum\limits_{n=0}^\infty\dfrac{\Gamma(a-b)(b+1)_n\Gamma(2b+n-1)\Gamma(2-b-n)c^{b-a+n}}{\Gamma(b+1)(b-a+1)_nn!}$ 
$=\sum\limits_{n=0}^\infty\dfrac{\Gamma(a+1)\Gamma(2-a)\Gamma(b-a)\Gamma(a+b-1)(a+1)_n(a+b-1)_n(-1)^nc^n}{(\Gamma(b+1))^2(a-1)_n(a-b+1)_nn!}+\sum\limits_{n=0}^\infty\dfrac{\Gamma(a-b)\Gamma(2-b)\Gamma(2b-1)(b+1)_n(2b-1)_n(-1)^nc^{b-a+n}}{\Gamma(b+1)(b-1)_n(b-a+1)_nn!}$ (according to http://en.wikipedia.org/wiki/Pochhammer_symbol#Properties) $=\dfrac{\Gamma(a+1)\Gamma(2-a)\Gamma(b-a)\Gamma(a+b-1)}{(\Gamma(b+1))^2}{_2F_2}(a+1,a+b-1;a-1,a-b+1;-c)+\dfrac{\Gamma(a-b)\Gamma(2-b)\Gamma(2b-1)c^{b-a}}{\Gamma(b+1)}{_2F_2}(b+1,2b-1;b-1,b-a+1;-c)$
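One step that is easy to spot-check numerically is the Beta-function evaluation used above, $\int_0^\infty y^{p-1}(1+y)^{-p-q}\,dy=B(p,q)$. A stdlib-only sketch (the parameters $p=2.5$, $q=1.5$ are illustrative choices, not values from the problem):

```python
import math

# Verify  ∫_0^∞ y^(p-1) (1+y)^(-p-q) dy = B(p, q) = Γ(p)Γ(q)/Γ(p+q).
# The substitution y = u/(1-u) maps it to ∫_0^1 u^(p-1) (1-u)^(q-1) du,
# which we evaluate with composite Simpson's rule (n must be even).
def beta_numeric(p, q, n=20000):
    f = lambda u: u ** (p - 1) * (1 - u) ** (q - 1)
    h = 1.0 / n
    total = f(0.0) + f(1.0)          # both endpoint values vanish for p, q > 1
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return total * h / 3

def beta_exact(p, q):
    return math.gamma(p) * math.gamma(q) / math.gamma(p + q)
```

For $p=2.5$, $q=1.5$ the exact value is $\pi/16$, and the quadrature agrees to several digits.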
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
The supremum-like definition of greatest common divisor I have reasonable experience with analysis, but I have just recently begun studying abstract algebra from Dummit and Foote. I am frightened that I am already getting tripped up on p. 4, with the definition of greatest common divisor. My intuitive understanding of greatest common divisor, from grade school, would be the maximal common element of two integers' factor trees. I am having trouble aligning this with Dummit and Foote's definition, which parallels the definition of a supremum. First, they make a predefinition: if $a,b \in \mathbb{Z}$ with $a \neq 0$, then they say $a$ divides $b$, or $a|b$, if $\exists c \in \mathbb{Z} : b= ac$. Then, they proceed with the definition: if $a,b \in \mathbb{Z} - \{0\}$, then there exists a unique positive integer $d$, called the greatest common divisor of $a$ and $b$, satisfying: * *$d | a$ and $d | b$ (so $d$ is a common divisor of $a$ and $b$) *if $e|a$ and $e|b$ then $e|d$ (so $d$ is the greatest such divisor) Now I can't actually figure out how to prove the second property. What I mean is that, suppose I assume $d|a$,$d|b$,$e|a$,$e|b$. Then why does $d>e \implies e|d$?
Let $g$ be the gcd of $a$ and $b$. Then we can write $$a=mg$$ $$b=ng$$ where $m$ and $n$ are coprime, i.e., their gcd is $1$ (any common factor of $m$ and $n$ could be absorbed into $g$, contradicting maximality). Now let us assume $e$ divides both $a$ and $b$. A rigorous way to conclude is via Bézout's identity: $g = ua + vb$ for some integers $u, v$, so $e\mid a$ and $e\mid b$ together imply $e \mid ua+vb = g$. That is, every common divisor of $a$ and $b$ is a divisor of $g$.
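A quick brute-force check of this property over small pairs (a sketch, not part of the proof):

```python
from math import gcd

# For every pair (a, b) in a small range, list all common divisors and
# confirm each one divides gcd(a, b).
def common_divisors(a, b):
    return [e for e in range(1, min(a, b) + 1) if a % e == 0 and b % e == 0]

ok = all(gcd(a, b) % e == 0
         for a in range(1, 60)
         for b in range(1, 60)
         for e in common_divisors(a, b))
```

For instance, the common divisors of $12$ and $18$ are $1,2,3,6$, and all of them divide $\gcd(12,18)=6$.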
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve $5000 n \log(n) \leq 2^{n/2}$ I'm trying to solve the following problem: What is the smallest value of n so that an algorithm with a runtime of $5000 n \log(n)$ runs faster than an algorithm with a runtime of $2^{n/2}$ on the same computer? So I figured it was just a matter of solving the equation $5000 n \log(n) \leq 2^{n/2}$, but I have no idea how I should do it; I've tried for a while using algebra but that's leading me nowhere.
Take log of both sides (I'm assuming that "log" here means $\log_2$) to get $$ \log(5000) + \log(n) + \log \log n \le \frac{n}{2} $$ Since $\log n$ and $\log \log n$ are both small compared to $n$, you can say that $n$ is somewhere around $2\log(5000) \approx 25$. So start with $n = 20$ and $n = 52$, say; for one of these the left side should be smaller than the right; for the other, it should be greater. Apply bisection at most 6 times to get your answer.
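The same estimate can be checked by direct search; a minimal sketch, assuming $\log$ means $\log_2$ (with the natural or base-10 log the crossover moves slightly):

```python
import math

# Smallest n >= 2 with 5000 * n * log2(n) <= 2^(n/2).  (n = 1 is excluded
# because log2(1) = 0 makes the inequality hold trivially there.)
def crossover():
    n = 2
    while 5000 * n * math.log2(n) > 2 ** (n / 2):
        n += 1
    return n

n0 = crossover()
```

Both sides are increasing, but the right side eventually grows much faster, so once the inequality holds it keeps holding; the search lands on $n_0 = 41$, consistent with the bisection estimate above.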
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Polynomial Diophantine Equation If $x$,$y$ $\in \mathbb Z$, find all the solutions of $$y^3=x^3+8x^2-6x+8$$ I have tried factorizing the equation but the polynomial on $\text{R.H.S.}$ doesn't have any integral roots. Further, I deduce that the parity of $x$ and $y$ is same. However, I can't seem to find a way to go further. Any help will be appreciated. Thanks!
Suppose that $x\geq 0$. Then we have $x^3 < x^3 + 8x^2 - 6x + 8 < (x+3)^3$, so $y \in \{x+1,x+2\}$. $y=x+1$ has no integer solutions, and plugging in $y=x+2$ yields the two solutions $x=0$ and $x=9$. For negative $x$, we can do something similar, though the details are messier: Suppose $x\leq 0$, and let $u=-x$, $v=-y$. Then $v^3 = u^3 - 8u^2 -6u - 8$. We have $(u-12)^3<u^3 - 8u^2 -6u - 8<u^3$, so $v \in \{u-11, u-10, \ldots, u-1\}$. Plugging in yields no integer solutions except for the ones we already found, $(x,y) \in\{(0,2),(9,11)\}$. There may be a way to avoid difficulty in the last step, but the basic idea is to use bounds to reduce the problem to checking finitely many cases.
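A brute-force confirmation over a finite window (safe because the bounds above rule out solutions with large $|x|$); the cube-root helper is just for illustration:

```python
def integer_cube_root(n):
    # exact integer cube root of n if one exists, else None (handles negatives)
    sign = -1 if n < 0 else 1
    m = abs(n)
    r = round(m ** (1 / 3))
    for y in (r - 1, r, r + 1):   # guard against float rounding of m**(1/3)
        if y >= 0 and y ** 3 == m:
            return sign * y
    return None

solutions = []
for x in range(-1000, 1001):
    y = integer_cube_root(x ** 3 + 8 * x ** 2 - 6 * x + 8)
    if y is not None:
        solutions.append((x, y))
```

The search returns exactly the two solutions found above, $(0,2)$ and $(9,11)$.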
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Polar coordinate double integral I have to integrate the following integral: $$ \iint \limits_A \sin({x_1}^2 + {x_2}^2) \, dx_1dx_2 $$ over the set: $A=\{x \in \mathbb{R}^2: 1 \leq {x_1}^2 + {x_2}^2 \leq 9,\ x_1 \geq -x_2\}$ I understand geometrically what I have to do (i.e. subtract one half of a circle of radius 1 from half of a circle of radius 3), but how do I use polar coordinates to calculate this integral?
Hint: You can use that $1\le r\le 3 \text{ and } -\frac{\pi}{4}\le\theta\le\frac{3\pi}{4}$ to set up the integral.
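In those coordinates the integrand becomes $\sin(r^2)$ with Jacobian $r$, and since $\int r\sin(r^2)\,dr=-\tfrac12\cos(r^2)$, the value works out to $\tfrac\pi2(\cos 1-\cos 9)$. A quick numerical sketch of that claim (my evaluation, not part of the hint):

```python
import math

# Midpoint rule for ∫_{-π/4}^{3π/4} ∫_1^3 sin(r^2) r dr dθ.  The integrand
# does not depend on θ, so the θ-integral contributes a factor of π.
def polar_integral(n=2000):
    r0, r1 = 1.0, 3.0
    h = (r1 - r0) / n
    s = sum(math.sin((r0 + (i + 0.5) * h) ** 2) * (r0 + (i + 0.5) * h)
            for i in range(n))
    return s * h * math.pi   # θ-range has length 3π/4 - (-π/4) = π

closed_form = math.pi / 2 * (math.cos(1) - math.cos(9))
```

The quadrature and the closed form agree to about four decimal places.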
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that factor modules are isomorphic. I'm trying to prove (from a previous post) that if $A=k[x,y,z]$ and $I=(x,y)(x,z)$ then $$\dfrac{(x,y)/I}{(x,yz)/I} \cong\dfrac{A}{(x,z)}.$$ I did this by defining the homomorphism $\phi: A \to ((x,y)/I)/((x,yz)/I)$ by $f(x,y,z) \mapsto \dfrac {f(x,y,0)+ I}{(x,yz)/I}$ with the extra property that $\phi$ kills all the constants in $f$ (not only all $z$). Then since it is surjective and its kernel equals $(x,z)$ I am done...Does this seem correct? Second, I am trying to prove that $(x,yz)/I \cong A/(x,y,z)$. How can I do this? I forgot to mention that we consider the $A$-module $M=A/I$, and that we want to prove that submodules are isomorphic.
$\dfrac{(x,y)/I}{(x,yz)/I} \simeq\dfrac{(x,y)}{(x,yz)}=\dfrac{(x,y)}{(x,y)\cap (x,z)}\simeq\dfrac{(x,y)+(x,z)}{(x,z)}=\dfrac{(x,y,z)}{(x,z)}=\dfrac{(x,z)+(y)}{(x,z)}\simeq$ $\dfrac{(y)}{(x,z)\cap (y)}=\dfrac{(y)}{(xy,zy)}\simeq\dfrac{A}{(x,z)}$. $\dfrac{(x,yz)}{I}=\dfrac{(x,yz)}{(x^2,xy,xz,yz)}\simeq\dfrac{(x)}{(x^2,xy,xz)}\simeq\dfrac{A}{(x,y,z)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that if $a_{2n} \rightarrow g$ and $a_{2n+1} \rightarrow g$ then $a_n \rightarrow g$ The problem is in the question: Prove that if sequences $a_{2n} \rightarrow g$ and $a_{2n+1} \rightarrow g$ then $a_n \rightarrow g$. I don't know how to prove that - it seems obvious when we look at the definition that for sufficiently large $n$ (let's say $n>N$) we have $|a_{2n}-g|$ and $|a_{2n+1}-g|$ less than any given $\epsilon$ and these are all elements of $a_n$ when $n>N$ (even and odd terms). Does this need more formal proof?
Yes: for a given $\varepsilon>0$, you get an $N_1$ from $(a_{2n})$ and an $N_2$ from $(a_{2n+1})$, and then every $n > \max(2N_1, 2N_2+1)$ is either even, $n = 2k$ with $k > N_1$, or odd, $n = 2k+1$ with $k > N_2$; in both cases $|a_n - g| < \varepsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1105930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is Fourier transform method suitable for solving equation $\int g(x-t)e^{-t^2/2} dt = e^{-a|x|}$ Is the Fourier transform method suitable for solving the following equation? \begin{align*} \int g(x-t)e^{-t^2/2} dt = e^{-a|x|} \end{align*} Suppose we take the Fourier transform of the above equation (with $a=1$, which is the case treated below): \begin{align*} &G(\omega) \sqrt{2\pi} e^{-\omega^2/2}=\frac{2}{1+\omega^2}\\ &G(\omega)=\frac{2e^{\omega^2/2}}{\sqrt{2\pi}(1+\omega^2)} \end{align*} But the function on the right is not integrable. What can we do in a case like this? Is there a strategy for solving general cases like this?
The Fourier transform method is useful in showing that there is no solution. Since $G$ is unbounded, what your argument shows is that there is no solution in $L^1$. It is clear also that $G$ is not in $L^2$, so that the is no solution in $L^2$. Moreover, $G$ is not a tempered distribution, so that there is no solution in the space of tempered distributions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Any example of strongly convex functions whose gradients are Lipschitz continuous in $\mathbb{R}^N$ Let $f:\mathbb{R}^N\to\mathbb{R}$ be strongly convex and its gradient is Lipschitz continuous, i.e., for some $l>0$ and $L>0$ we have $$f(y)\geq f(x)+\nabla f(x)^T(y-x)+\frac{l}{2}||y-x||^2,$$ and $$||\nabla f(y)-\nabla f(x)||\leq L||y-x||$$ for any $x,y\in\mathbb{R}^N$. For such a function, the only example I can come up with is the quadratic function $$f(x)=x^T A x +B x + C$$ where $A$ is positive definite. I wonder if there is any other example? Thanks. Note: the strong convexity and Lipschitz continuity hold for the whole $\mathbb{R}^N$; otherwise $e^x$ ($x\in\mathbb{R}$) is good enough in $[0,1]$. -- New Remark: I ask this question because these two assumptions are often seen in optimization papers. Functions that satisfy one of the two are easy to think of; to satisfy these two assumptions at the same time, I really doubt how many functions exist. -- Finally came up with an example by myself: $$f(x)=\log(x+\sqrt{1+x^2})+x^2,x\in\mathbb{R}.$$ Strong convexity: $$\nabla f(x) = \frac{1}{\sqrt{1+x^2}}+2x,$$ $$\nabla^2 f(x) = -\frac{x}{(1+x^2)^{3/2}}+2.$$ Since $$\left|\frac{x}{(1+x^2)^{3/2}}\right|=\left|\frac{x}{\sqrt{1+x^2}}\right|\frac{1}{1+x^2}<1,$$ strong convexity follows from $$3 > \nabla^2 f(x)>1.$$ Lipschitz continuous gradient: \begin{align} \left|\nabla f(x+h)-\nabla f(x)\right|&\stackrel{(a)}{=}\left|\nabla f(x)+\nabla^2 f(y)h-\nabla f(x)\right|\\ &=\left|\nabla^2 f(y)h\right|\\ &\leq 3|h| \end{align} where $(a)$ is from the mean value theorem and $y$ is some number in $[x,x+h]$ for $h\geq 0$ or $[x+h,x]$ for $h<0$.
Here's an example on $\mathbb R: f(x) = x^2-\cos x.$ A way to make lots of examples: Let $f$ be any continuous function on $[0,\infty)$ bounded between two positive constants, $0 < l \le f(s) \le L.$ For $x\ge 0,$ set $$g(x) =\int_0^x \int_0^t f(s)\, ds\,dt.$$ Extend $g$ to an even function on all of $\mathbb R.$ Then $g$ satisfies the requirements (with these constants $l$ and $L$).
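For the first example both constants can be checked numerically: $f''(x)=2+\cos x\in[1,3]$, so every difference quotient of the gradient $f'(x)=2x+\sin x$ lies in $[1,3]$. A small random sketch:

```python
import math
import random

# f(x) = x^2 - cos(x) has f''(x) = 2 + cos(x) in [1, 3]: strong convexity
# with l = 1 and a Lipschitz gradient with L = 3.  Sample difference
# quotients of the gradient f'(x) = 2x + sin(x) to confirm.
random.seed(0)

def grad(x):
    return 2 * x + math.sin(x)

ok_strong, ok_lipschitz = True, True
for _ in range(10000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    if x == y:
        continue
    slope = (grad(y) - grad(x)) / (y - x)  # mean-value slope of the gradient
    ok_strong = ok_strong and slope >= 1 - 1e-9
    ok_lipschitz = ok_lipschitz and slope <= 3 + 1e-9
```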
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Asymptotic of $\sum_{n = 1}^{N}\frac{1}{n}$ versus $\sum_{1 \leq n \leq x}\frac{1}{n}$ Consider the series $\sum_{n = 1}^{N}\frac{1}{n}$. It is well known that we have the asymptotic: $$\sum_{n = 1}^{N}\frac{1}{n} = \log N + \gamma + \frac{1}{2N} - \frac{1}{12N^{2}} + O(N^{-3}).$$ My question: Consider the similar sum $F(x) = \sum_{1 \leq n \leq x}\frac{1}{n}$. Note that $F(N) = \sum_{n = 1}^{N}\frac{1}{n}$. Then does $F$ have the same asymptotic above? That is, is $$F(x) = \sum_{1 \leq n \leq x}\frac{1}{n} = \log x + \gamma + \frac{1}{2x} - \frac{1}{12x^{2}} + O(x^{-3})?$$ I think this should be true, but the only thing I can come up with is to look at $F(\lfloor x \rfloor)$ and use the asymptotic for $\sum_{n = 1}^{\lfloor x \rfloor}\frac{1}{n}$, but I am unsure how to compare $\log x$ and $\log \lfloor x \rfloor$ in a way that the error that occurs is $O(x^{-3})$.
Note that $$\log x - \log \lfloor x \rfloor=\log x - \log\left(x - [x]\right)=-\log\left(1-\frac{[x]}{x}\right)=\frac{[x]}{x}+O(x^{-2}),$$ where $[x]$ is the fractional part of $x$, and is clearly $O(1)$. That is, the oscillations around the smooth curve $\log x$ contribute already at the same order as $1/x$. Therefore, the best you can say is that $$ F(x)=\log x + \gamma + O(x^{-1}). $$
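A numerical sketch of why the error really is of order $1/x$ (and changes sign as $x$ crosses integers), using a hard-coded value of $\gamma$:

```python
import math

EULER_GAMMA = 0.5772156649015329

def F(x):
    # the sum over integers 1 <= n <= x
    return sum(1.0 / n for n in range(1, math.floor(x) + 1))

def error(x):
    return F(x) - (math.log(x) + EULER_GAMMA)

# At integer x the error is about +1/(2x); just below the next integer it
# is about -1/(2x): an O(1/x) oscillation that no constant term absorbs.
samples = {x: error(x) for x in (100, 100.99, 1000, 1000.99)}
```

The sign flip between $x=100$ and $x=100.99$ is the sawtooth contribution of $\{x\}/x$ from the answer above.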
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Taylor series for $\sqrt{x}$? I'm trying to figure Taylor series for $\sqrt{x}$. Unfortunately all web pages and books show examples for $\sqrt{x+1}$. Is there any particular reason no one shows Taylor series for exactly $\sqrt{x}$?
If $$ \sqrt{x}=a_0+a_1x+a_2x^2+\dots $$ then $$ x=a_0^2+(a_0a_1+a_1a_0)x+(a_0a_2+a_1a_1+a_2a_0)x^2+\dots $$ and if you want the identity theorem to hold this is impossible: comparing constant terms forces $a_0=0$, but then the coefficient of $x$ on the right is $2a_0a_1=0$, while it must equal $1$.
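By contrast, the series for $\sqrt{1+x}$ (equivalently, expanding $\sqrt{x}$ about $x=1$) works fine; a quick sketch using the generalized binomial coefficients:

```python
import math

# sqrt(1+x) = Σ C(1/2, k) x^k for |x| < 1, while no power series centered
# at 0 can represent sqrt(x), as argued above.
def binom_half(k):
    # generalized binomial coefficient C(1/2, k)
    num, den = 1.0, 1.0
    for j in range(k):
        num *= 0.5 - j
        den *= j + 1
    return num / den

def sqrt_series(x, terms=60):
    # partial sum of the Taylor series of sqrt(1+x) about 0, for |x| < 1
    return sum(binom_half(k) * x ** k for k in range(terms))
```

For example, `sqrt_series(0.21)` recovers $\sqrt{1.21}=1.1$ to high precision.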
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "44", "answer_count": 8, "answer_id": 2 }
Limit of complex numbers What would be the limit of following term? $$\lim_{n \to \infty} \frac{e^{inx}}{2^n}$$ I tried to convert the $e^{inx}$ into trigonometric form and tried to do some simplification but got stuck after that. Thanks. :)
Hint Just as you tried $$\frac{e^{inx}}{2^n}=\frac{\cos(nx)+i\sin(nx)}{2^n}$$ Since each sine and cosine lies between $-1$ and $1$, the numerator is bounded (in fact $|e^{inx}|=1$ for real $x$) while the denominator goes to $\infty$; thus, the limit is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Value of $\pi$ by Aryabhata Aryabhata gave an accurate approximate value of $\pi$. He wrote the following in the Aryabhatiya: add 4 to 100, multiply by 8 and then add 62,000. The result is approximately the circumference of a circle of diameter twenty thousand. By this rule the relation of the circumference to diameter is given. This gives $\pi= \frac{62832}{20000}=3.1416$, which approximates 3.14159265... (the correct value of $\pi$). From where did Aryabhata get all the above values (4, 100, 8, 62,000)?
Kim Plofker says that it probably was by computing the circumference of an inscribed polygon with 384 sides. She gives Sarasvati Amma as a source; I have also seen a reference to Florian Cajori with the same claim.
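Both the rule and the 384-gon computation can be reproduced directly (the 384-gon interpretation is the historical conjecture mentioned above, not a certainty):

```python
import math

# Aryabhata's rule: ((4 + 100) * 8 + 62000) / 20000 = 62832 / 20000.
aryabhata = ((4 + 100) * 8 + 62000) / 20000

# Perimeter / diameter for a regular 384-gon inscribed in a circle is
# n * sin(pi / n); at n = 384 it also rounds to 3.1416.
polygon_384 = 384 * math.sin(math.pi / 384)
```

The inscribed-polygon value, $384\sin(\pi/384)\approx 3.14156$, rounds to the same four decimals as Aryabhata's $3.1416$.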
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Conditional Probability - Basic Question On question 2b. below, why does knowing the extra bit of information A increase the probability of D?
From the question I get the feeling that you want an answer regarding intuition and not so much mathematics. If I'm wrong, please comment and I will change my answer. From the probabilities given, we see that it is much more likely that the whole shipment consists of forgeries (10%) than it is likely that only 2,3 or 4 paintings are forgeries (2%, 1% and 2%). Hence, knowing that there is at least 1 forgery in the shipment, increases the chance that the whole shipment is "tainted".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Help with Baby Rudin Exercise 3.17. Partial converse to the fact that convergence implies Cesaro summability. Warning: Pretty involved problem. If $\{s_n\}$ is a complex sequence, define its arithmetic mean $\sigma_n$, by $$\sigma_n=\frac{s_0+s_1+\cdots+s_n}{n+1}.$$ Put $a_n=s_n-s_{n-1}$ for $n\ge 1$. Assume $M<\infty$, $|na_n|\le M$ for all $n$, and $\lim\sigma_n=\sigma$. Prove that $\lim s_n=\sigma$, by completing the following outline: If $m<n$, then $$s_n-\sigma_n={m+1\over n-m}(\sigma_n-\sigma_m)+\frac1{n-m}\sum_{k=m+1}^n(s_n-s_k).\tag{1}\label{1}$$ For the $k$ in the last term, show $$|s_n-s_k|\le{n-m-1\over m+2}M.\tag{2}\label{2}$$ Fix $\epsilon>0$ and associate with each $n$ the integer $m$ that satisfies $$m\le{n-\epsilon\over 1+\epsilon}<m+1.$$ Then $$(m+1)/(n-m)\le\frac1\epsilon \text{ and } |s_n-s_k|\le M\epsilon.\tag{3}\label{3}$$ Hence $$\limsup_{n\to\infty}|s_n-\sigma|\le M\epsilon.\tag{4}\label{4}$$ I've worked through $\eqref{1}-\eqref{3}$, but having trouble with $\eqref{4}$. My attempt: Choose $m$ and $n$ so that $|\sigma-\sigma_m|<\epsilon$, then $$\begin{align} \limsup_{n\to\infty}|s_n-\sigma| & \le\limsup_{n\to\infty}|s_n-\sigma_n|+\limsup_{n\to\infty}|\sigma_n-\sigma|\\ & =\limsup_{n\to\infty}|s_n-\sigma_n|\\ & \le{m+1\over n-m}\limsup_{n\to\infty}|(\sigma_n-\sigma_m)|\\ &\quad\quad+\frac1{n-m}\limsup_{n\to\infty}\sum_{k=m+1}^n|s_n-s_k|\\ & \le\frac1\epsilon|\sigma-\sigma_m|+M\epsilon\\ &\color{red}{\le 1+M\epsilon}. \end{align}$$ I don't know how to do this without getting the $1$ term.
I agree with most of your work. A couple of notes: * *"Choose $m$ and $n$ so that $|\sigma-\sigma_m|<\epsilon$" doesn't really make sense since $m$ is predetermined by your choice of $n$ and $\epsilon$. *All of the instances of $n$ and $m$ should still be contained within the $\limsup$'s. So with these small alterations you can still get $$\limsup_{n\to\infty}|s_n-\sigma|\leq\frac{1}{\epsilon}\limsup_{n\to\infty}|\sigma_n-\sigma_m|+M\epsilon.$$ But since $\{\sigma_n\}$ is convergent and $\frac{n-\epsilon}{1+\epsilon}<m+1$ implies $m\to\infty$ as $n\to\infty$, we have $$\limsup_{n\to\infty}|\sigma_n-\sigma_m|=0$$ and the desired result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Moment generating function of $f(x)=\frac{1-\cos(x)}{\pi x^2}$ does not exist How to prove that $\int_{\mathbb{R}}\exp(tx)\frac{1-\cos(x)}{x^2}dx$ is not finite for any fixed $t \neq 0$? (At $t=0$ the integral is finite, so the point is divergence for every nonzero $t$.) Thank you!
Let $t>0$. Since $1-\cos x\ge0$, for any $k\in\mathbb{N}$ we have $$ \int_{k\pi}^{(k+1)\pi}\exp(t\,x)\frac{1-\cos x}{x^2}\,dx\ge\frac{e^{k\pi t}}{(k+1)^2\pi^2}\int_{k\pi}^{(k+1)\pi}(1-\cos x)\,dx=\frac{e^{k\pi t}}{(k+1)^2\pi} $$ and $$ \sum_{k=1}^\infty\frac{e^{k\pi t}}{(k+1)^2}=+\infty. $$ For $t<0$, the same estimate on the intervals $[-(k+1)\pi,-k\pi]$ shows that the integral over $(-\infty,0]$ diverges.
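The divergence of this lower-bound series is easy to see numerically (a sketch with $t=1$):

```python
import math

# Terms of the lower bound  e^{kπt} / ((k+1)^2 π)  for t = 1: they grow
# without bound, so the partial sums (and hence the integral) diverge.
t = 1.0
terms = [math.exp(k * math.pi * t) / ((k + 1) ** 2 * math.pi)
         for k in range(1, 11)]
partial_sum = sum(terms)
```

Already the tenth term exceeds $10^{10}$: the exponential factor overwhelms the polynomial denominator.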
{ "language": "en", "url": "https://math.stackexchange.com/questions/1106911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$f$ has a local maximum at a point $x \in E$. Prove that $f'(x)=0$ Suppose that $f$ is a differentiable real function in an open set $E \subset \mathbb{R^n}$, and that $f$ has a local maximum at a point $x \in E$. Prove that $f'(x)=0$
Suppose that at some point $p=(p_1,p_2,\ldots,p_n)$ and for some $i$, we have $\frac{\partial f}{\partial x_i}=c\neq0$. We can regard $f$ as a differentiable function in a single variable $t$, where you only vary $x_i=t$ and leave $x_j$ constant for $j\neq i$. Denote this as $g(t)$. By definition, $g'(p_i)=c$. Clearly $g(t)$ does not have a local maximum at $t=p_i$. It follows that $f(x)$ does not have a local maximum at $x=p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Hahn-Banach theorem exercise Let $X$ be a Banach space (over $\mathbb{R}$) and $u,v\in X$ such that $\|u\|=\|v\|=1$ and $\|2u+v\|=\|u-2v\|=3$. Show that there is $f\in X'$ of unit norm such that $f(u)=f(v)=1$. My idea is building directly the functional on $Y=\operatorname{Span}\{u,v\}$, something like $f(\alpha u+ \beta v)=\alpha+\beta$, and then use HB theorem to extend it to the whole space, but I am missing the role of the second condition and (consequently) failing to prove $\|f\|=1$. P.S: This is not homework, just an exercise taken from a course for independent study.
It suffices to prove that $\Vert \alpha u + \beta v\Vert=|\alpha + \beta|$. Maybe you can proceed through the following steps (I don't know): * *Prove that it is true for $\alpha$ and $\beta$ integer numbers. *Prove that it is true for $\alpha$ and $\beta$ rational numbers, based on the last step. Finally, since $(\alpha,\beta)\mapsto\Vert\alpha u + \beta v\Vert$ is continuous on $\mathbb R\times \mathbb R$ and $\mathbb Q\times \mathbb Q$ is a dense subset, the conclusion follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Long exact sequence into short exact sequences This question is the categorical version of this question about splitting up long exact sequences of modules into short exact sequence of modules. I want to understand the general mechanism for abelian categories. Here's the decomposition I managed to get: But it doesn't involve any cokernels like the module version does. The closest I get to a cokernel is via the following argument (which i'm not even sure is correct): Since exactness is autodual, we have for any composition $g\circ f$ $$\mathrm{coim}g=\mathrm{coker}f\iff \mathrm{im}f=\mathrm{ker}g$$ This, along with the Image-Coimage isomorphism, yields the isomorphism of the objects $$\mathrm{Im}f\cong\mathrm{Coker}g$$ I'm not sure whether this is correct reasoning since it also leads to the isomorphism $\mathrm{Coker}g\cong\mathrm{Ker}g$ whenever $(f,g)$ is exact. So: * *What is the general decomposition of a long exact sequence into short exact sequences in an abelian category? *Is the argument I gave for $\mathrm{Im}f\cong\mathrm{Coker}g$ correct?
You are right, it finally involves cokernels (= coimages of the next arrows). Your first line is correct: taking the cokernel of both sides of $\def\im{{\rm im}\,} \im f=\ker g$, we get $\def\coker{{\rm coker}\,} \def\coim{{\rm coim}\,} \coker f=\coim g$. For the converse, take the kernel of both sides. Then, for the objects, your second line should read $${\rm Im\,}g\cong {{\rm Coker}\,f}\,.$$ The statement ${\rm Coker\,}g\cong{\rm Ker\,}g$ does not follow. And yes, this is how the long exact sequence splits into short exact sequences.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }
Find the right cosets of $H$ in $G$ simple example Question: Let $G$ be a group and $H<G$ a subgroup with $|G:H|=2$ Show that the right cosets of $H$ in $G$ are $H$ and $G\backslash H$ Answer given: There are two right cosets, they are disjoint and their union is $G$. One coset is $H$ and so the other is $G\backslash H$ My interpretation: I understand that the two right cosets comes from $|G:H|=2$. Why are they disjoint? The definition I'm working with for right cosets is: Let $H\leq G$ and let $x\in G$. Then the subset $Hx=\{hx\mid h\in H\}\subseteq G$ is called a right coset of $H$ in $G$. From this I can see that $H$ is a right coset of $G$ from letting $x=e$ How is $G\backslash H$ the other right coset? I can see that $G\backslash H\leq G$ and is disjoint to $H$ by definition, but how did we arrive at this result? Is this just a standard result?
A nice way to see that distinct right cosets are disjoint is to do the following: Let $H\leq G$, and for $a,b \in G$, define the relation $a \sim b$ if and only if $a \in Hb$. If we can show this is an equivalence relation, we will have right cosets of $H$ as equivalence classes, which shows they partition $G$ into disjoint subsets. Reflexivity is obvious, as $a=ea \in Ha$ for all $a \in G$. Now if $a \in Hb$, $a=hb$, so $b=h^{-1}a \in Ha$, which shows the relation is symmetric. Finally, if $a \in Hb$ and $b \in Hc$, we have $a=hb$ and $b=h'c$, so $a=(hh')c \in Hc$, so the relation is transitive. On a somewhat related note, for finite groups, Lagrange's Theorem is a trivial consequence of this construction, as one need only show that a typical coset of $H$ can be bijectively identified with $H$. The map from $H$ into $Ha$ that sends $h\mapsto ha$ suffices. Cosets of $H$ partition $G$ into equal sized chunks, which is a very nice property.
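A concrete sketch in $\mathbb Z_{12}$ with the index-$2$ subgroup of even residues, matching the question's setting: the two right cosets are exactly $H$ and $G\setminus H$.

```python
# G = Z_12 under addition, H = even residues (a subgroup of index 2).
G = set(range(12))
H = {0, 2, 4, 6, 8, 10}

def right_coset(x):
    # the right coset H + x, computed modulo 12
    return frozenset((h + x) % 12 for h in H)

cosets = {right_coset(x) for x in G}   # distinct cosets partition G
```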
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can anyone sketch the proof or provide a link that there is always a prime between $n^3$ and $(n+1)^3$ In a recent forum discussion on number theory, it was mentioned that A. E. Ingham had proven that there is always a prime between $n^3$ and $(n+1)^3$. Does anyone know if there is a link available on the web or knows a rough sketch of the proof. Does it use sieve theory? I am very interested in checking out the proof.
Back in the mid-80s, when I first opened Apostol's "Introduction to Analytic Number Theory," Apostol stated the theorem that there exists a real $\alpha$ such that $\left\lfloor\alpha^{3^n}\right\rfloor$ was always prime. Apostol noted, though, that the existing proof was non-constructive. I realized relatively quickly that if you could prove there was always a prime between $n^3$ and $(n+1)^3$, you'd have an easy constructive proof. So I highly doubt that there was a proof before 1967, when Wikipedia says Ingham died. Apostol's book was first published in 1976.
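Whatever the history, the statement is easy to verify for small $n$ by direct search (a sketch, not a proof):

```python
def is_prime(m):
    # trial division, sufficient for the small range checked here
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def prime_between_cubes(n):
    # smallest prime p with n^3 < p < (n+1)^3, or None if there is none
    for p in range(n ** 3 + 1, (n + 1) ** 3):
        if is_prime(p):
            return p
    return None

checked = all(prime_between_cubes(n) is not None for n in range(1, 51))
```

The search terminates almost immediately for each $n$, since prime gaps near $n^3$ are tiny compared to the length of the interval.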
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Orthogonal complement of the kernel of $u\in B(H, H')$ Let $H,H'$ be Hilbert spaces and $u \in B(H,H')$. Let $u^\ast$ denote the adjoint. I know (and can show) that $(\mathrm{im} u)^\bot = \ker u^\ast$. From this I would deduce that $(\ker u^\ast)^\bot = \mathrm{im} u$. But instead, $$ (\ker u^\ast)^\bot = \overline{\mathrm{im} u}$$ and it is not intuitive for me. How to (geometrically, if possible) see that $ (\ker u^\ast)^\bot = \overline{\mathrm{im} u}$ and not $ (\ker u^\ast)^\bot = \mathrm{im} u$?
If $X\subseteq H$, then $X^\perp$ is closed. Let $\{y_n\}$ be any convergent sequence in $X^\perp$ with $y_n\to y$. Then for any $x\in X$, $$\langle x,y\rangle=\langle x,y-y_n\rangle+\langle x,y_n\rangle=\langle x,y-y_n\rangle.$$ Applying Cauchy-Schwarz, this gives $$|\langle x,y\rangle|\leq \|x\|\|y-y_n\|$$ which tends to zero since $y_n\to y$. Therefore $\langle x,y\rangle=0$ for all $x\in X$, so $y\in X^\perp$. Edit: I must have misread the original question, as I thought you were only asking why $(\ker u^*)^\perp$ must be closed. At this point we know that $\overline{\text{ran } u}\subseteq (\ker u^*)^\perp$. To see the opposite inclusion, let $x\in (\text{ran } u)^\perp$. For any $y\in H$, we then have $$0=\langle uy,x\rangle=\langle y,u^*x\rangle,$$ so $u^*x=0$ and therefore $(\text{ran } u)^\perp\subseteq \ker u^*$. Taking orthogonal complements gives $$(\ker u^*)^\perp\subseteq (\text{ran } u)^{\perp\perp}=\overline{\text{ran } u}.$$ Thus $\overline{\text{ran } u}= (\ker u^*)^\perp$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
ODEs - Separable differential equation, is an explicit answer necessary in an exam? Find the general solution for the following first order differential equation: $$\frac{dy}{dt}=(1-y)(3-y)(5-t).$$ I end up getting to $$\exp(10x-x^2+2C)=(y-3)(y-1).$$ Is there any more simplifying that can be done to express $y$ explicitly? Wolfram Alpha has an explicit answer, but that may be due simply to its computational power... Do you think it is likely that, in an exam, it is required to simplify further?
I think you made a sign error. Separating the equation, you get $$\dfrac{dy}{(1-y)(3-y)} = (5-t)dt$$ The partial fraction decomposition gives you $$\dfrac{dy}{y-3} - \dfrac{dy}{y-1} = (10 - 2t)\ dt$$ On integration this gives you $$\dfrac{y-3}{y-1} = Ce^{10t -t^2}$$ Solving the last one for $y$ gives $$y = \dfrac{3-Ce^{10t-t^2}}{1-Ce^{10t-t^2}} $$
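One way to double-check the separation-of-variables work numerically (a minimal sketch, standard library only): solving the implicit relation $(y-3)/(y-1) = Ce^{10t-t^2}$ for $y$ gives $y = \dfrac{3-Ce^{10t-t^2}}{1-Ce^{10t-t^2}}$, and a central-difference derivative of that expression should match the right-hand side $(1-y)(3-y)(5-t)$. Here $C = 0.5$ is an arbitrary sample constant.

```python
import math

# Numerical check that y(t) = (3 - C e^{10t - t^2}) / (1 - C e^{10t - t^2})
# satisfies y' = (1 - y)(3 - y)(5 - t).  C = 0.5 is a sample constant of
# integration; avoid t-values where the denominator 1 - C e^{10t-t^2} vanishes.

C = 0.5

def y(t):
    u = C * math.exp(10 * t - t * t)
    return (3 - u) / (1 - u)

def rhs(t):
    return (1 - y(t)) * (3 - y(t)) * (5 - t)

def dydt(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2 * h)  # central difference

for t in [-1.0, -0.5, 0.0, 0.25]:
    assert abs(dydt(t) - rhs(t)) < 1e-4 * max(1.0, abs(rhs(t)))
```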
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Elementary proof of the fact that any orientable 3-manifold is parallelizable A parallelizable manifold $M$ is a smooth manifold such that there exist smooth vector fields $V_1,...,V_n$ where $n$ is the dimension of $M$, such that at any point $p\in M$, the tangent vectors $V_1(p),...,V_n(p)$ provide a basis for the tangent space at $p$. Equivalently, a manifold is parallelizable if its tangent bundle is trivial. There is a theorem that states that any compact orientable 3-manifold is parallelizable, and there is a proof of this result which uses $spin^c$ structures and the Stiefel-Whitney class. I am wondering whether there exists a more elementary, perhaps more straightforward proof. Otherwise, I would be grateful for some intuition on why this is true. Also, is the theorem still true without the compactness assumption? If so, is there a relatively simple proof in that case?
Kirby gives a nice proof that every orientable 3-manifold is parallelizable at https://math.berkeley.edu/~kirby/papers/Kirby%20-%20The%20topology%20of%204-manifolds%20-%20MR1001966.pdf . (See the unnumbered non-blank page between page 46 and page 47.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 5, "answer_id": 3 }
Understanding the arc length integral formula I believe the proof in my book is slightly more informal than the proof that uses the Mean Value Theorem. Could someone tell me what exactly the difference is, and if there are any mistakes in the proof below? Thanks. Proof of the arc length integral formula Divide your interval $[a,b]$ into $n$ pieces of width $\Delta x$, then zoom into the subinterval $[x_{i-1},x_i]$. The arc length in this interval is approximately $$\sqrt{\Delta x^2+\Delta y^2}=\sqrt{1+\left(\frac{\Delta y}{\Delta x}\right)^2}\Delta x$$ As $\Delta x$ goes to zero, $\frac{\Delta y}{\Delta x}$ is equal to the slope at $x=x_{i-1}$, that is $f'(x_{i-1})$. The Riemann sum becomes $$\sum_{i=1}^n \sqrt{1+[f'(x_{i-1})]^2}\Delta x$$ As $n\to\infty$, the arc length is $$\int_a^b \sqrt{1+[f'(x)]^2}\,dx$$
Elaboration of my comment: You want to convert an infinite sum to an integral. As you probably understand, that can be interpreted as summing infinitely many rectangles and deciding what the area converges to. But this is not enough! The rectangles have to be infinitely small as well. Take a look at the picture. There are infinitely many rectangles, but since they are not infinitely small, the area does not converge to the area under the curve.
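The convergence the answer describes can be watched happen for a concrete curve. A sketch for $f(x)=x^2$ on $[0,1]$, whose arc length has the closed form $\sqrt5/2 + \operatorname{asinh}(2)/4$ (a standard table integral): the left-endpoint Riemann sum from the proof approaches it as the rectangles get thinner.

```python
import math

def arc_length_sum(fprime, a, b, n):
    """Left-endpoint Riemann sum for arc length, as in the proof above."""
    dx = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + i * dx          # x_{i-1}, the left endpoint
        total += math.sqrt(1.0 + fprime(x) ** 2) * dx
    return total

# Closed-form arc length of y = x^2 on [0, 1]:
exact = math.sqrt(5) / 2 + math.asinh(2) / 4

approx = arc_length_sum(lambda x: 2 * x, 0.0, 1.0, 20000)
assert abs(approx - exact) < 1e-4
```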
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number theory: prove that if $a,b,c$ odd then $2\gcd(a,b,c) = \gcd(a+b,b+c, c+a)$ Please help! I am lost with the following: Prove that if $a,b,c$ are odd integers, then $2 \gcd(a,b,c) = \gcd( a+b, b+c, c+a)$ Thanks a lot!!
Let $d=\gcd(a,b,c)$. Then solve $d=ax+by+cz$. Use that $2a=(a+b)+(a+c)-(b+c)$, $2b=(a+b)+(b+c)-(a+c)$ and $2c=(a+c)+(b+c)-(a+b)$. Then $$2d=2ax+2by+2cz = (a+b)(x+y-z) + (a+c)(x-y+z) + (b+c)(y+z-x)$$ So we have a solution to: $$2d = (a+b)X+(a+c)Y + (b+c)Z$$ So we know $\gcd(a+b,a+c,b+c)\mid 2\gcd(a,b,c)$. The other direction is easier (and there, you use that $a,b,c$ are odd.)
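A brute-force check of the identity over a grid of odd triples (an illustration of the statement, not a substitute for the Bézout argument above), together with one even counterexample showing why the oddness hypothesis matters:

```python
import math
from itertools import product

def gcd3(a, b, c):
    return math.gcd(math.gcd(a, b), c)

# Verify 2*gcd(a,b,c) == gcd(a+b, b+c, c+a) over a grid of odd integers.
odds = range(1, 40, 2)
for a, b, c in product(odds, repeat=3):
    assert 2 * gcd3(a, b, c) == gcd3(a + b, b + c, c + a)

# The hypothesis matters: with even entries the identity can fail.
assert 2 * gcd3(2, 4, 6) != gcd3(2 + 4, 4 + 6, 6 + 2)
```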
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Invertible "Sigmoid + x" function I need an invertible function that represents a smooth transition between two straight, parallel line segments, like this: Depicted is $f(x) = -0.3/(1+e^{-10*(x-p)})+0.3/2+x$ (where $p$ is the location of the lowest slope), but it does not seem to be invertible in terms of standard mathematics. Can you recommend a function that looks similar, but is invertible?
I think a relatively simple way to obtain an "invertible sigmoid" is to find a suitable cubic polynomial. The method I propose here has the drawback that it does not let you control both the tangency points at the same time. Let's first simplify the setting. Let's consider the two points $P_1=(-2,-1)$ and $P_2=(2,1)$ and the two lines $r_1:y=x+1$ and $r_2:y=x-1$. We now wish to find a cubic polynomial that is always increasing and is tangent to the lines at $P_1$ and $P_2$. In this particular situation the points are symmetrical w.r.t. the origin, so our polynomial must be of the form $y=px^3+qx$. If we impose the aforementioned conditions, we get: 1) passage through $P_1$: $px_1^3+qx_1=y_1\Rightarrow 8p+2q=1$ (this is equivalent to the request that the cubic pass through the other point). 2) tangency at $P_1$: $3px_1^2+q=1\Rightarrow 12p+q=1$ (Again, tangency at $P_2$ is automatically satisfied) These equations completely determine $p$ and $q$, namely $p=1/16$ and $q=1/4$, so the polynomial we are searching for is \begin{equation} f(x)=\frac{1}{16}x^3+\frac{1}{4}x \end{equation} Take this as your "base" function: we will tweak it, adjusting it to the real case. Let the two parallel lines be $r: y=ax+b$ and $s: y=ax+c$ with $c<b$ (i.e. $r$ is the "leftmost" line), with $a>0$. Their zeroes are $-b/a$ and $-c/a$ respectively, and the distance between them is $\Delta x=(b-c)/a$. With this in mind, let's first tweak the two lines $r_1$ and $r_2$ so that the distance between their zeroes matches $\Delta x$. Let's define $\tilde{r}_1: y=kx+1$ and $\tilde{r_2}: y=kx-1$. The zeroes are respectively $-1/k$ and $1/k$; their distance is therefore $2/k$ and so in order to match $r$ and $s$ we must set $k=2a/(b-c)$. 
With this in mind, we can tweak our cubic, setting \begin{equation} f_1(x):=f(kx)=\frac{1}{16}(kx)^3+\frac{1}{4}(kx)=\frac{kx}{4}\left(\frac{k^2}{4}x^2+1\right) \end{equation} Now, to adjust the slope to the required slope ($a$), we can multiply the cubic equation by an appropriate factor, which is $a/k$: \begin{equation} f_2(x):=\frac{a}{k}f_1(x)=\frac{ax}{4}\left(\frac{k^2}{4}x^2+1\right) \end{equation} Now, with a suitable translation, we can move our function around so that a tangency point matches a given one: \begin{equation} F(x):=f_2(x-x_0)+y_0=\frac{a(x-x_0)}{4}\left(\frac{k^2}{4}(x-x_0)^2+1\right)+y_0=\\ =\frac{a(x-x_0)}{4}\left(a^2\frac{(x-x_0)^2}{(b-c)^2}+1\right)+y_0 \end{equation} (The new tangency points are now $\tilde{P}_1=(-2/k,-a/k)$ and $\tilde{P}_2=(2/k,a/k)$, so if we want, for example, to move $\tilde{P}_1$ to a point on $r$ which has coordinates $(P_x,P_y)$, we will have to set in the previous equation $x_0=P_x+2/k$ and $y_0=P_y+a/k$.) You requested that this function be invertible; well, it is, thanks to Cardano's formulas. Our original cubic has an inverse which is explicitly \begin{equation} f^{-1}(y)=\left(8y-8\sqrt{y^2+1/27}\right)^{\frac{1}{3}}-\frac{4}{3}\left(8y-8\sqrt{y^2+1/27}\right)^{-\frac{1}{3}} \end{equation} To get the inverse $F^{-1}$ of the function $F$, we just have to do the transformations backwards: \begin{equation} F^{-1}(y)=\frac{1}{k}f^{-1}\left(\frac{k}{a}(y-y_0)\right)+x_0 \end{equation} It might be a lengthy procedure (even if not so complex), but at least you get all analytical results. SUMMARY: Let the two lines be $r:y=ax+b$ and $s:y=ax+c$, with $c<b$. Let $P=(P_x,P_y)$ be a point on $r$. Let $x_0=P_x+(b-c)/a$ and $y_0=P_y+(b-c)/2$. 
Then a cubic function tangent to $r$ at $P$ which joins $s$ smoothly is given by the function \begin{equation} F(x)=\frac{a(x-x_0)}{4}\left(a^2\frac{(x-x_0)^2}{(b-c)^2}+1\right)+y_0 \end{equation} Its inverse is \begin{equation} F^{-1}(y)=\frac{b-c}{2a}f^{-1}\left(\frac{2}{b-c}(y-y_0)\right)+x_0 \end{equation} where $f^{-1}(y)$ is given above.
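The base pair $f$, $f^{-1}$ can be verified numerically. The one coding subtlety is that $8y-8\sqrt{y^2+1/27}$ is negative for $y>0$, so a real cube root helper is needed (Python's `** (1/3)` on a negative float would yield a complex value). A minimal round-trip sketch:

```python
import math

def cbrt(v):
    """Real cube root, valid for negative inputs too."""
    return math.copysign(abs(v) ** (1.0 / 3.0), v)

def f(x):
    return x**3 / 16.0 + x / 4.0

def f_inv(y):
    # Cardano's formula for x^3 + 4x - 16y = 0, as derived in the answer.
    u = cbrt(8.0 * y - 8.0 * math.sqrt(y * y + 1.0 / 27.0))
    return u - (4.0 / 3.0) / u

for x in [-3.0, -1.0, -0.1, 0.5, 2.0, 4.0]:
    assert abs(f_inv(f(x)) - x) < 1e-8
```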
{ "language": "en", "url": "https://math.stackexchange.com/questions/1107926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
why is $\lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}}$=-3? Exercise taken from here: https://mooculus.osu.edu/textbook/mooculus.pdf (page 42, "Exercises for Section 2.2", exercise 4). Why is $\lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}}$=-3*? I always find 3 as the solution. I tried two approaches: Approach 1: $$ \lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}} = \lim_{x\to -\infty} \frac{3x+7}{x} = \lim_{x\to -\infty} \frac{3x}{x}+\lim_{x\to -\infty} \frac{7}{x}\\ = \lim_{x\to -\infty} 3+\lim_{x\to -\infty} \frac{7}{x} = 3 + 0 = 3 $$ Approach 2: $$ \lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}} = \lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}} \times \frac{\frac{1}{x}}{\frac{1}{x}} = \lim_{x\to -\infty} \frac{3+\frac{7}{x}}{\frac{\sqrt{x^2}}{x}}\\ = \lim_{x\to -\infty} \frac{3+\frac{7}{x}}{\frac{\sqrt{x^2}}{\sqrt{x^2}}} = \lim_{x\to -\infty} \frac{3+\frac{7}{x}}{\sqrt{\frac{x^2}{x^2}}}\\ = \frac{\lim_{x\to -\infty}3+\lim_{x\to -\infty}\frac{7}{x}}{\lim_{x\to -\infty}\sqrt{1}}\\ = \frac{3 + 0}{1} = 3 $$ * -3 is given as the answer by the textbook (cf. page 247) as well as wolfram|alpha EDIT: I just re-read page 40 of the textbook and realized that I made a mistake in my approach 2. Instead of multiplying with $\frac{1}{x}$ I should have multiplied with $\frac{-1}{x}$ which is positive because as $x\to -\infty$, x is a negative number. It thus reads: $$ \lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}} = \lim_{x\to -\infty} \frac{3x+7}{\sqrt{x^2}} \times \frac{\frac{-1}{x}}{\frac{-1}{x}} = \lim_{x\to -\infty} \frac{-3+\frac{-7}{x}}{\frac{\sqrt{x^2}}{-x}}\\ = \lim_{x\to -\infty} \frac{-3+\frac{-7}{x}}{\frac{\sqrt{x^2}}{\sqrt{x^2}}} = \lim_{x\to -\infty} \frac{-3+\frac{-7}{x}}{\sqrt{\frac{x^2}{x^2}}}\\ = \frac{\lim_{x\to -\infty}-3+\lim_{x\to -\infty}\frac{-7}{x}}{\lim_{x\to -\infty}\sqrt{1}}\\ = \frac{-3 + 0}{1} = -3 $$
For $x \to -\infty$, we have that $\sqrt{x^2} = |x| = -x$.
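A quick numerical illustration of the sign flip: since $\sqrt{x^2}=|x|$, the quotient tends to $-3$ as $x\to-\infty$ and to $+3$ as $x\to+\infty$.

```python
import math

def q(x):
    return (3 * x + 7) / math.sqrt(x * x)

assert abs(q(-1e9) - (-3)) < 1e-6   # limit at -infinity is -3
assert abs(q(1e9) - 3) < 1e-6       # limit at +infinity is +3
assert math.sqrt((-5.0) ** 2) == 5.0 == -(-5.0)  # sqrt(x^2) = |x| = -x for x < 0
```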
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluating Integrals using Lebesgue Integration Suppose we are to evaluate: $$I = \int_{0}^{1} f(x) dx$$ Where $$f(x)=\begin{cases}1 \space \text{if} \space x\space \text{is rational}, & \newline 0 \space \text{if} \space x \space \text{is irrational} \\ \end{cases}$$ I have been told that this can be done using measure theory. Will anyone care to explain how possibly? I am new to measure theory, so I am just researching, please do not say "no attempt shown" this is because I dont know Lebesgue yet, but I heard it has great applications on this?
Lebesgue integration tells you that the value is zero. Basically, much like how you can split integrals over intervals in Riemann integration, you can split integrals over arbitrary measurable sets in Lebesgue integration. Here we write: $$\int_{[0,1]} f(x) dx = \int_{[0,1] \cap \mathbb{Q}} 1 dx + \int_{[0,1] \cap \mathbb{Q}^c} 0 dx$$ Now that we've written it as an integral of constant functions, we just multiply the constants by the measures of the corresponding sets, obtaining $$\int_{[0,1]} f(x) dx = 1 \cdot 0 + 0 \cdot 1 = 0.$$ since $[0,1] \cap \mathbb{Q}$ has measure zero. Explaining how to prove that it has measure zero would require a significant amount of explanation of the definitions and theorems from measure theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
Abelian categories with tensor product Is there a standard notion in the literature of abelian category with tensor product? The definition ought to be wide enough to encompass all the usual examples of abelian categories with standard `tensor product'. I'd guess something like "symmetric monoidal bi-functor $\otimes \colon \mathcal{A} \times \mathcal{A} \rightarrow \mathcal{A}$ which preserves finite colimits" would do, but I wonder if there is a reference for this?
Let $(\mathcal{A},\otimes)$ be a monoidal category. It is called abelian monoidal if $\mathcal{A}$ is an abelian category and $\otimes$ is an additive bifunctor (see here, Section 1.5). An example given the linked paper is the category of bimodules over a ring (in fact any closed abelian monoidal category can be exactly embedded in a bimodule category).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What's your favorite proof accessible to a general audience? What math statement with proof do you find most beautiful and elegant, where such is accessible to a general audience, meaning you could state, prove, and explain it to a general audience in roughly $5 \pm\epsilon$ minutes. Let's define 'general audience' as approximately an average adult with education and experience comparable to someone holding a bachelor's degree in any non science major (e.g. history) from an average North American university.
I would explain the Pigeon Hole principle, or one of its many guises. In fact, I remember explaining the PHP to a non math student while playing bridge. I told him that one gets at least $4$ cards of some suit- which is really the PHP. You can come up with many other interesting "real life" examples, and it's fun.
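The bridge version of the Pigeon Hole principle is easy to simulate: any 13-card hand from a 52-card deck contains at least $\lceil 13/4\rceil = 4$ cards of some suit. A small random check (illustration only; the suit labels are arbitrary):

```python
import random
from collections import Counter

def max_suit_count(hand):
    """hand: list of (rank, suit) pairs; return the largest suit multiplicity."""
    return max(Counter(suit for _rank, suit in hand).values())

deck = [(rank, suit) for rank in range(13) for suit in "SHDC"]
rng = random.Random(0)
for _ in range(1000):
    hand = rng.sample(deck, 13)
    assert max_suit_count(hand) >= 4   # pigeonhole: ceil(13/4) = 4
```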
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121", "answer_count": 39, "answer_id": 4 }
What's your favorite proof accessible to a general audience? What math statement with proof do you find most beautiful and elegant, where such is accessible to a general audience, meaning you could state, prove, and explain it to a general audience in roughly $5 \pm\epsilon$ minutes. Let's define 'general audience' as approximately an average adult with education and experience comparable to someone holding a bachelor's degree in any non science major (e.g. history) from an average North American university.
There's no way to tune a piano in perfect harmony. There are twelve half-steps in the chromatic scale, twelve notes in each octave of the keyboard. Start at middle "C", and ascend a perfect fifth to "G". That's seven half steps up, with a frequency ratio of 3/2. Drop an octave to the lower "g" -- that's twelve half steps down, and a frequency ratio of 1/2. Continuing around the "circle of fifths" twelve times, and dropping an octave seven times, brings you back to middle "C", a frequency ratio of 1. So, $1 = (\frac{3}{2})^{12} \times (\frac{1}{2})^7$, or $3^{12}=2^{19}$. Ask your piano tuner next time about those fifths.
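The whole argument fits in a few lines of exact rational arithmetic: twelve pure fifths minus seven octaves leaves the ratio $3^{12}/2^{19}$, the Pythagorean comma, which is close to but not equal to 1.

```python
from fractions import Fraction

# Twelve pure fifths up, seven octaves down:
comma = Fraction(3, 2) ** 12 * Fraction(1, 2) ** 7

assert comma == Fraction(3**12, 2**19)
assert 3**12 == 531441 and 2**19 == 524288   # unequal, so comma != 1
assert comma != 1
print(float(comma))   # the Pythagorean comma, about 1.01364
```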
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "121", "answer_count": 39, "answer_id": 36 }
Considering $ (1+i)^n - (1 - i)^n $, Complex Analysis I have been working on problems from Complex Analysis by Ahlfors, and I got stuck on the following problem: Evaluate: $$ (1 + i)^n - (1-i)^n $$ I have just "reduced" to: $$ (1 + i)^n - (1-i)^n = \sum_{k=0} ^n \binom{n}{k} i^k(1 - (-1)^k) $$ by using expansion of each term. Thanks.
There are a number of spiffy techniques one could use on this problem, but Ahlfors doesn't get to conjugation and modulus until 1.3 and geometry of the complex plane until Section 2 (of Chapter 1), while this is still in 1.1. [I dug up my copy of the third edition to see how much was discussed to that point.] abel shows one approach in using the binomial theorem that lies within the (rather) limited means available. Given what the author covers in this section, this is another possibility: $$ ( 1 \ + \ i )^n \ + \ ( 1 \ - \ i )^n \ \ = \ \ ( 1 \ + \ i )^n \ \left[ \ 1 \ + \ \frac{( 1 \ - \ i )^n}{( 1 \ + \ i )^n} \ \right] $$ $$ = \ \ ( 1 \ + \ i )^n \ \left[ \ 1 \ + \ \left(\frac{ 1 \ - \ i }{ 1 \ + \ i } \right)^n \ \right] \ \ = \ \ ( 1 \ + \ i )^n \ \left[ \ 1 \ + \ \left(\frac{ [ \ 1 \ - \ i \ ] \ [ \ 1 \ - \ i \ ] }{ [ \ 1 \ + \ i \ ] [ \ 1 \ - \ i \ ] } \right)^n \ \right] $$ [the conjugate is being applied as shown in that section, but Ahlfors hasn't called it that yet] $$ = \ \ ( 1 \ + \ i )^n \ \left[ \ 1 \ + \ \left(\frac{ 1 \ - \ 2i \ - \ 1 }{ 2 } \right)^n \ \right] \ \ = \ \ ( 1 \ + \ i )^n \ [ \ 1 \ + \ ( - i) ^n \ ] \ \ . $$ The binomial theorem can now be applied to the first factor: $$ ( 1 \ + \ i )^n \ + \ ( 1 \ - \ i )^n $$ $$ = \ \ \left( \ 1 \ + \ \left( \begin{array}{c} n \\ 1 \end{array} \right) i \ + \ \left( \begin{array}{c} n \\ 2 \end{array} \right) i^2 \ + \ \ldots \ + \ \left( \begin{array}{c} n \\ n-1 \end{array} \right) i^{n-1} \ + \ i^n \ \right) \ [ \ 1 \ + \ ( - i) ^n \ ] \ \ . $$ [The "typoed" version of the problem that David Cardozo originally posted is analogous: $$ ( 1 \ + \ i )^n \ - \ ( 1 \ - \ i )^n $$ $$ = \ \ \left( \ 1 \ + \ \left( \begin{array}{c} n \\ 1 \end{array} \right) i \ + \ \left( \begin{array}{c} n \\ 2 \end{array} \right) i^2 \ + \ \ldots \ + \ \left( \begin{array}{c} n \\ n-1 \end{array} \right) i^{n-1} \ + \ i^n \ \right) \ [ \ 1 \ - \ ( - i) ^n \ ] \ \ . 
\ \ ] $$ $ \ \ $ Presumably, we'd like to consolidate this a bit. Because of that $ \ (-i)^n \ $ term in the second factor, that factor has a cycle of period 4. We see that this product is zero for $ \ n \ = \ 4m \ + \ 2 \ $ , with integer $ \ m \ \ge \ 0 \ $ . [These will be "out-of-phase" with abel's expressions, since I am using Ahlfors' version of the problem.] For the other cases, we will write the first factor as $$ \left[ \ 1 \ - \ \left( \begin{array}{c} n \\ 2 \end{array} \right) \ + \ \left( \begin{array}{c} n \\ 4 \end{array} \right) \ + \ \text{etc.} \ \right] \ \ + \ \ i \ \left[ \ \left( \begin{array}{c} n \\ 1 \end{array} \right) \ - \ \left( \begin{array}{c} n \\ 3 \end{array} \right) \ + \ \left( \begin{array}{c} n \\ 5 \end{array} \right) \ - \ \text{etc.} \ \right] \ \ . $$ For $ \ n \ = \ 4m \ $ , the factor $ \ [ \ 1 \ + \ ( - i) ^n \ ] \ = \ 2 \ $ and the imaginary part of the binomial series is zero, owing to the symmetry of the binomial coefficients. The real part also simplifies due to this symmetry, so we have $$ ( 1 \ + \ i )^{4m} \ + \ ( 1 \ - \ i )^{4m} \ \ = \ \ \left[ \ 2 \cdot 1 \ - \ 2 \ \left( \begin{array}{c} 4m \\ 2 \end{array} \right) \ + \ 2 \ \left( \begin{array}{c} 4m \\ 4 \end{array} \right) \ - \ \ldots \ + \ \left( \begin{array}{c} 4m \\ 2m \end{array} \right) \ \right] \cdot \ 2 \ \ . $$ The remaining cases are somewhat more complicated to work out: for $ \ n \ = \ 4m \ + \ 1 \ $ and $ \ n \ = \ 4m \ + \ 3 \ $ , respectively, we obtain $$ \left( \ \left[ \ 1 \ - \ \left( \begin{array}{c} n \\ 2 \end{array} \right) \ + \ \left( \begin{array}{c} n \\ 4 \end{array} \right) \ \ldots \ \pm \ n \ \right] \ \ + \ \ i \ \left[ \ n \ - \ \left( \begin{array}{c} n \\ 3 \end{array} \right) \ + \ \left( \begin{array}{c} n \\ 5 \end{array} \right) \ \ldots \ \pm \ 1 \ \right] \ \right) \ \cdot \ ( 1 \ \mp \ i) \ \ . $$ For either of these cases, since $ \ n \ $ is odd, the number of binomial coefficients is even. 
So the real part has the symmetry in which the first half of the terms are identical to the second half of them; also, the symmetry among the coefficients produces a sum which is always a power of 2 . In the imaginary part, we do not get a simple alternation of signs (which we know gives a sum of zero for the binomial coefficients), but the "double-alternating" signs proves to have the same effect; the result is that the imaginary part is zero for these cases as well. Hence, the expression $ \ ( 1 \ + \ i )^n \ + \ ( 1 \ - \ i )^n \ $ is always real; by analogous reasoning, the expression $ \ ( 1 \ + \ i )^n \ - \ ( 1 \ - \ i )^n \ $ is always pure imaginary. We find the sequences (including the values abel presents) for $ \ 0 \ \le \ n \ \le \ 9 \ $ $$ \ ( 1 \ + \ i )^n \ + \ ( 1 \ - \ i )^n \ \ : \ \ 2 \ , \ 2 \ , \ 0 \ , \ -4 \ , \ -8 \ , \ -8 \ , \ 0 \ , \ 16 \ , \ 32 \ , \ 32 \ \ \text{and} $$ $$ \ ( 1 \ + \ i )^n \ - \ ( 1 \ - \ i )^n \ \ : \ \ 0 \ , \ 2i \ , \ 4i \ , \ 4i \ , \ 0 \ , \ -8i \ , \ -16i \ , \ -16i \ , \ 0 \ , \ 32i \ \ . $$ [Incidentally, these results indicate the interesting identities$ ^* $ $$ ( 1 \ + \ i )^{4m} \ + \ ( 1 \ - \ i )^{4m} \ \ = \ \ ( 1 \ + \ i )^{4m+1} \ + \ ( 1 \ - \ i )^{4m+1} \ \ \text{and} $$ $$ ( 1 \ + \ i )^{4m+2} \ - \ ( 1 \ - \ i )^{4m+2} \ \ = \ \ ( 1 \ + \ i )^{4m+3} \ - \ ( 1 \ - \ i )^{4m+3} \ \ ] $$ $ ^* $ with unintentional alliteration on the theme of $ \ i \ $ ... $ \ \ $ To be sure, this is a cumbersome description of the result, but it is a consequence of using Cartesian coordinates for the description of the complex values. Once you reach Section 2 and the use of polar coordinates, you will have the far more compact expressions $$ \ ( 1 \ + \ i )^n \ + \ ( 1 \ - \ i )^n \ \ = \ \ 2^{(n+2)/2} \ \cos\left( \frac{n \pi}{4} \right) \ \ \text{and} $$ $$( 1 \ + \ i )^n \ - \ ( 1 \ - \ i )^n \ \ = \ \ 2^{(n+2)/2} \ \sin\left( \frac{n \pi}{4} \right) \ i \ \ . 
$$ (The exponential factor simply grows by a factor of $ \ \sqrt{2} \ $ at each successive stage, but its product with the trigonometric factors create the complications in the sequences above. The trigonometric factors also immediately explain the "out-of-phase" behavior between the two versions of the expression we have been evaluating. These products follow from the methods being described by dustin and RikOsuave. )
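A numeric spot-check of both the tabulated sequences and the polar closed forms at the end of the answer:

```python
import math

def s_plus(n):
    return (1 + 1j) ** n + (1 - 1j) ** n

def s_minus(n):
    return (1 + 1j) ** n - (1 - 1j) ** n

plus_expected = [2, 2, 0, -4, -8, -8, 0, 16, 32, 32]
minus_expected = [0, 2j, 4j, 4j, 0, -8j, -16j, -16j, 0, 32j]

for n in range(10):
    assert abs(s_plus(n) - plus_expected[n]) < 1e-9
    assert abs(s_minus(n) - minus_expected[n]) < 1e-9
    # Closed forms: 2^((n+2)/2) cos(n pi/4) and 2^((n+2)/2) sin(n pi/4) i
    r = 2 ** ((n + 2) / 2)
    assert abs(s_plus(n) - r * math.cos(n * math.pi / 4)) < 1e-9
    assert abs(s_minus(n) - r * math.sin(n * math.pi / 4) * 1j) < 1e-9
```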
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Why do some other people use dek and el rather than letters as the eleventh and twelfth digits in the dozenal or duodecimal system? I've noticed on a YouTube video titled Base $12$ - Numberphile that some other people who use the duodecimal system use dek and el for the eleventh and twelfth digits. I know for one thing that them plus the regular digits equals twelve digits using base $12$, but why do they use additional digits that differ from the first two letters of the English alphabet?
As mentioned in the comments, it works fine if you use a's and b's, but they tried to use names for "ten" and "eleven" which suggest their meaning: "dek" from "decem", the Latin word for 10, and "el" from the English word "eleven". It was a stylistic choice and nothing more.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\sin \left( {5x} \right) = 2\sin \left( {3x} \right)\sin \left( {4x} \right)$ I would appreciate help solving the equation $$ \sin \left( {5x} \right) = 2\sin \left( {3x} \right)\sin \left( {4x} \right), \qquad x \in \mathbb{R}. $$ I notice that $$x = k\pi, \quad k \in \mathbb{Z}$$ are solutions.
Expressing everything in terms of $\sin(x)= s$ and $\cos(x) = c$, the equation says $$ -64\,{c}^{5}{s}^{2}+16\,{c}^{4}s+48\,{c}^{3}{s}^{2}-12\,{c}^{2}s-8\,c{ s}^{2}+s = 0$$ Taking the resultant of the left side and $c^2 + s^2 - 1$ with respect to $c$, we get $$ {s}^{2} \left( 4096\,{s}^{12}-14336\,{s}^{10}+19968\,{s}^{8}-13952\,{s }^{6}+4976\,{s}^{4}-776\,{s}^{2}+25 \right) = 0 $$ There are $9$ real solutions of this polynomial equation, all of which are in the interval $[-1,1]$. Except for $0$, I don't think they can be expressed in terms of radicals. For each nonzero solution $s$, one of the two corresponding values $\pm \sqrt{1-s^2}$ of $\cos(x)$ gives you a solution of the original equation; for $s=0$, both $\cos(x) = \pm 1$ give solutions. So there are $10$ solutions for $x$ in each interval $[a,a+2\pi)$.
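The non-radical roots are still easy to approximate. A bisection sketch (standard library only) that confirms $x=k\pi$ solves the equation and locates one root in $(0.1, 0.5)$; the bracket endpoints are just sample values where the function changes sign.

```python
import math

def g(x):
    return math.sin(5 * x) - 2 * math.sin(3 * x) * math.sin(4 * x)

# x = k*pi are exact solutions (the s = 0 factor above):
for k in range(-3, 4):
    assert abs(g(k * math.pi)) < 1e-12

def bisect(f, lo, hi, tol=1e-12):
    assert f(lo) * f(hi) < 0        # a sign change brackets a root
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# One of the roots that (per the answer) is presumably not expressible in
# radicals lies between 0.1 and 0.5:
root = bisect(g, 0.1, 0.5)
assert abs(g(root)) < 1e-9
```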
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is a linear transformation a $(1,1)$ tensor? Wikipedia says that a linear transformation is a $(1,1)$ tensor. Is this restricting it to transformations from $V$ to $V$ or is a transformation from $V$ to $W$ also a $(1,1)$ tensor? (where $V$ and $W$ are both vector spaces). I think it must be the first case since it also states that a linear functional is a $(0,1)$ tensor and this is a transformation from $V$ to $R$. If it is the second case, could you please explain why linear transformations are $(1,1)$ tensors.
It's very common in tensor analysis to associate endomorphisms on a vector space with (1,1) tensors. Namely because there exists an isomorphism between the two sets. Define $E(V)$ to be the set of endomorphisms on $V$. Let $A\in E(V)$ and define the map $\Theta:E(V)\rightarrow T^1_1(V)$ by \begin{align*} (\Theta A)(\omega,X)&=\omega(AX). \end{align*} We show that $\Theta$ is an isomorphism of vector spaces. Let $\{e_i\}$ be a basis for $V$ and let $\{\varepsilon^i\}$ be the corresponding dual basis. First, we note $\Theta$ is linear by the linearity of $\omega$. To show injectivity, suppose $\Theta A = \Theta B$ for some $A,B\in E(V)$ and let $X\in V$, $\omega \in V^*$ be arbitrary. Then \begin{align*} (\Theta A)(\omega,X)&=(\Theta B)(\omega,X)\\ \\ \iff \omega(AX-BX)&=0. \end{align*} Since $X$ and $\omega$ were arbitrary, it follows that \begin{align*} AX&=BX\\ \iff A&=B. \end{align*} To show surjectivity, suppose $f\in T^1_1$ has coordinate representation $f^j_i \varepsilon^i \otimes e_j$. We wish to find $A\in E(V)$ such that $\Theta A = f$. We simply choose $A\in E (V)$ such that $A$ has the matrix representation $(f^j_i)$. If we write the representation of our vector $X$ and covector $\omega$ as \begin{align*} X&=X^i e_i\\ \omega&=\omega_i \varepsilon^i, \end{align*} we have \begin{align*} (\Theta A)(\omega, X)&=\omega(AX)\\ \\ &=\omega_k \varepsilon^k(f^j_i X^i e_j)\\ \\ &=f^j_i X^i \omega_k \varepsilon^k (e_j)\\ \\ &=f^j_i X^i \omega_k \delta^k_j\\ \\ &=f^k_i X^i \omega_k. \end{align*} However we see \begin{align*} f(\omega,X)&=f(\omega_k\varepsilon^k,X^ie_i)\\ \\ &=\omega_k X^i f(\varepsilon^k,e_i)\\ \\ &=f^k_i X^i \omega_k. \end{align*} Since $X$ and $\omega$ were arbitrary, it follows that $\Theta A = f$. Thus, $\Theta$ is linear and bijective, hence an isomorphism.
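In coordinates the map $\Theta$ is simply "a matrix paired against a covector and a vector", and the injectivity/surjectivity argument amounts to the fact that $(\Theta A)(\varepsilon^j, e_i)$ recovers the entry $A^j_i$. A small pure-Python illustration on $V=\mathbb{R}^3$ with an arbitrary sample matrix:

```python
# (Theta A)(omega, X) = omega(A X) = sum_{k,i} omega_k A[k][i] X[i]

A = [[2, 0, 1],
     [1, 3, 0],
     [0, 1, 4]]   # an arbitrary endomorphism of R^3 in the standard basis

def theta_A(omega, X):
    n = len(A)
    return sum(omega[k] * A[k][i] * X[i] for k in range(n) for i in range(n))

def basis(n, j):
    """Standard basis vector e_j (equals its own dual eps^j in coordinates)."""
    return [1 if t == j else 0 for t in range(n)]

# The matrix entry A^j_i is recovered as (Theta A)(eps^j, e_i),
# so Theta A determines A -- this is injectivity/surjectivity in coordinates.
n = len(A)
recovered = [[theta_A(basis(n, j), basis(n, i)) for i in range(n)] for j in range(n)]
assert recovered == A
```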
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 1 }
If $G$ is a group whereby $(a\cdot b)^{i} =a^i\cdot b^i$ for three consecutive integers $i$ for all $a, b \in G$, show $G$ is abelian. If $G$ is a group in which $(a\cdot b)^{i} =a^i\cdot b^i$ for three consecutive integers $i$ for all $a, b \in G$, show that $G$ is abelian. Proof: Let $x$ be the smallest of the 3 consecutive integers. Then, we have $(1)(a\cdot b)^{x} =a^x\cdot b^x$, $(2)(a\cdot b)^{x+1} =a^{x+1}\cdot b^{x+1}$ and $(3)(a\cdot b)^{x+2} =a^{x+2}\cdot b^{x+2}$. Using $(2)$ and multiplying $a^{-1}$ on the left and $b^{-1}$ on the right, we get $baba...ba = a^{x}b^{x}(4)$ whereby there are $x$ number of $a$ and $x$ number of $b$ on both sides. Using $(1)$ and multiplying both sides on the right by $ab$, we have $\overbrace{abab...ab}^{(x+1)ab} = (a^{x}b^{x})ab (5)$. Substitute $(4)$ into $(5)$, we get $\overbrace{ab...ab}^{(x+1)ab} =\overbrace{ba...ba}^{(x)ba}ab(6)$. Using $(3)$ and multiplying $a^{-1}$ on the left and $b^{-1}$ on the right, we get $\overbrace{ba...ba}^{(x+1)ba} = a^{x+1}b^{x+1} = \overbrace{abab...ab}^{(x+1)ab}(7)$. Combining $(6)$ and $(7)$, and multiply $a^{-1}b^{-1}...a^{-1}b^{-1}$ on the left, we get $ab = ba$. Hence $G$ is abelian.
Use $\backslash$overbrace{below}^{above}, as in (right click and select to see LaTeX commands): $$ \overbrace{a\ldots a}^{27} $$ The proof looks fine to me.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1108950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Which arrangement produces the largest number? I learnt that the power tower $2\uparrow3\uparrow4\uparrow...\uparrow n$ is larger than any power tower with a different order of the numbers $2,3,4,...,n$. Is this also true for Conway chains and for Bowers array notation? Are $$2\rightarrow 3\rightarrow 4\rightarrow...\rightarrow n$$ and {$2,3,4,...,n$} also larger than any other number of this form with a different order of the numbers?
No for both. For example, for $n = 4$ we have $2 \rightarrow 3 \rightarrow 4 = 2 \rightarrow 4 \rightarrow 3 = 2 \uparrow \uparrow 65536$, whereas $3 \rightarrow 2 \rightarrow 4 = 3 \uparrow \uparrow 3^{27}$. We can show by induction that $2 \rightarrow 3 \rightarrow n < 3 \rightarrow 2 \rightarrow n$, as $$ 2 \rightarrow 3 \rightarrow 1 = 8 < 9 = 3 \rightarrow 2 \rightarrow 1 $$ and assuming $2 \rightarrow 3 \rightarrow n = 2 \rightarrow 4 \rightarrow (n-1) < 3 \rightarrow 2 \rightarrow n = 3 \rightarrow 3 \rightarrow (n-1)$, we have $$ 2 \rightarrow 3 \rightarrow (n+1) = 2 \rightarrow (2 \rightarrow 2 \rightarrow (n+1)) \rightarrow n = 2 \rightarrow 4 \rightarrow n $$ $$ = 2 \rightarrow (2 \rightarrow 4 \rightarrow (n-1)) \rightarrow (n-1) < 2 \rightarrow (3 \rightarrow 3 \rightarrow (n-1)) \rightarrow (n-1) $$ $$ < 3 \rightarrow (3 \rightarrow 3 \rightarrow (n-1)) \rightarrow (n-1) = 3 \rightarrow 3 \rightarrow n = 3\rightarrow 2 \rightarrow (n+1). $$ It follows that $2 \rightarrow 3 \rightarrow X < 3 \rightarrow 2 \rightarrow X$ for any chain $X$, since when you evaluate the chains you will get the same expressions, except one will have chains starting with $3 \rightarrow 2$ and one will have chains starting with $2 \rightarrow 3$. Whenever the expressions reduce a chain to a 3-chain, we will have $3 \rightarrow 2 \rightarrow n$ evaluate to a higher value than $2 \rightarrow 3 \rightarrow n$, so in the end $3 \rightarrow 2 \rightarrow X$ will evaluate to a higher value than $2 \rightarrow 3 \rightarrow X$. In particular, $2 \rightarrow 3 \rightarrow \cdots \rightarrow n < 3 \rightarrow 2 \rightarrow \cdots \rightarrow n$. For Bowers arrays the situation is more extreme. Any expression of the form {2,2,...} will evaluate to the number 4, as applying the evaluation rules will either keep the first two entries the same, or will replace the array with {2, {2,1,...},...} = {2,2,...}. So the array will eventually evaluate to {2,2} = 4. 
Further, any array {2,b,c,d,...} with four or more entries (not counting trailing 1's) will evaluate to 4 as well, as {2,b,c,d,...} will evaluate to {2,b',c-1,d,...} and then {2,b'',c-2,d,...} eventually reaching {2,n,1,d,...}. This evaluates to {2,2,{2,n-1,1,d,...},d-1,...}, which we have already determined equals 4. In particular, {2,3,4,...,n} will evaluate to 4 for n > 4, whereas {3,2,4,...,n} will grow extremely fast.
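The Conway-chain reduction rules can be run directly for tiny inputs, which verifies the small comparisons the induction above rests on (anything like $2\rightarrow3\rightarrow4$ is far beyond computation, so only small chains are evaluated). The rules implemented are the standard ones: drop trailing 1's, collapse $X\rightarrow1\rightarrow q$ to $X$, and expand $X\rightarrow p\rightarrow q = X\rightarrow(X\rightarrow(p-1)\rightarrow q)\rightarrow(q-1)$.

```python
def chain(c):
    """Evaluate a Conway chain given as a list, e.g. [2, 3, 2] = 2 -> 3 -> 2.
    Only feasible for very small values."""
    c = list(c)
    if len(c) == 1:
        return c[0]
    if len(c) == 2:
        return c[0] ** c[1]
    if c[-1] == 1:                 # X -> 1 = X
        return chain(c[:-1])
    if c[-2] == 1:                 # X -> 1 -> q = X
        return chain(c[:-2])
    # X -> p -> q = X -> (X -> (p-1) -> q) -> (q-1)
    head, p, q = c[:-2], c[-2], c[-1]
    return chain(head + [chain(head + [p - 1, q]), q - 1])

assert chain([2, 3, 1]) == 8     # 2 -> 3 = 2^3, the induction base case
assert chain([3, 2, 1]) == 9     # 3 -> 2 = 3^2
assert chain([2, 3, 2]) == 16    # 2^^3
assert chain([3, 2, 2]) == 27    # 3^^2, again larger than 2 -> 3 -> n
assert chain([2, 2, 5]) == 4     # chains starting 2 -> 2 always give 4
```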
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Example of Parseval's Theorem In the textbook "Mathematics for Physics" by Stone and Goldbart the following example is given as an illustration of Parseval's Theorem: Up to (2.42) I understand everything, but I don't understand the statement: " Finally, as $\sin^2(\pi(\zeta-n))=\sin^2(\pi \zeta)$ " Can you explain why this equality holds?
$\sin{\pi n} = 0$ when $n \in \mathbb{Z}$. Thus, because $\cos{\pi n} = (-1)^n$, we have $$\sin{\pi (\zeta - n)} = \sin{\pi \zeta} \cos{\pi n} - \sin{\pi n} \cos{\pi \zeta} = (-1)^n \sin{\pi \zeta}. $$ Squaring both sides and using $\left((-1)^n\right)^2 = 1$ gives $\sin^2(\pi(\zeta-n)) = \sin^2(\pi\zeta)$.
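A quick numerical spot-check of this identity (the sample values of $\zeta$ and $n$ below are arbitrary):

```python
import math

# Spot-check sin^2(pi*(zeta - n)) == sin^2(pi*zeta) for integer n.
for zeta in (0.3, 1.7, -2.4):
    for n in (-3, 0, 1, 4):
        lhs = math.sin(math.pi * (zeta - n)) ** 2
        rhs = math.sin(math.pi * zeta) ** 2
        assert abs(lhs - rhs) < 1e-12
```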
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is there a proof for the fact that, if you perform the same operation on both sides of an equality, then the equality holds? Is there a proof for this, or is it just taken for granted? Does one need to prove it for every separate case (multiplication, addition, etc.), or only when you are operating with different elements (numbers, matrixes, etc.)?
No, it needs no proof. In general one has that if $x=y$ and $f$ is a function that has $x$ and $y$ in its domain then $f(x)=f(y)$. Since $a+b$ is just another way of writing $+(a,b)$, one has that if $a=c$ and $b=d$ then $a+b=c+d$ (same for multiplication).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why does $\lim_{x\rightarrow 0}\frac{\sin(x)}x=1$? I am learning about the derivative $\frac{d}{dx}[\sin(x)] = \cos(x)$. The proof stated: From $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$... I realized I don't know why that is true, so I wanted to understand it before moving on. But unfortunately I don't have the complete notes for this proof. * *It started with a unit circle, and then drew a triangle at $(1, \tan(\theta))$ *It shows the area of the big triangle is $\frac{\tan\theta}{2}$ *It shows this area is greater than that of the sector, which is $\frac{\theta}{2}$. Here is my question: how does this "section" of the circle equal $\frac{\theta}{2}$? (It looks like a pizza slice.) *From there, it stated the area of the smaller triangle is $\frac{\sin(\theta)}{2}$. I understand this part, since the area of a triangle is $\frac{1}{2}(\text{base} \times \text{height})$. *Then they multiply each expression by $\frac{2}{\sin(\theta)}$ to get $\frac{1}{\cos(\theta)} \ge \frac{\theta}{\sin(\theta)} \ge 1$ And the incomplete notes ended here; I am not sure how the teacher got to the conclusion $\lim_{x \to 0} \frac{\sin(x)}{x} = 1$. I thought it might be something to do with reversing the inequality... Is the answer obvious from this point? And how does the calculation in step 3 work?
hint: use that $$\cos(\theta)\le \frac{\sin(\theta)}{\theta}\le 1$$ and take the limit as $\theta$ goes to zero; by the squeeze theorem, $\frac{\sin\theta}{\theta}\to 1$.
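Numerically, the squeeze is easy to see; a small sketch:

```python
import math

# cos(theta) <= sin(theta)/theta <= 1, and both bounds tend to 1.
for theta in (0.5, 0.1, 0.01, 0.001):
    ratio = math.sin(theta) / theta
    assert math.cos(theta) <= ratio <= 1
assert abs(math.sin(1e-8) / 1e-8 - 1) < 1e-12
```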
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Embeddings of a subfield of $ \mathbb{C} $ I'm trying to understand / solve the following problem: Let $ L \subset \mathbb{C} $ be a field and $ L \subset L_1 $ its finite extension ($ [L_1 : L] = m $). Prove that there are exactly $ m $ disintct embeddings $$ \sigma_1, \dots, \sigma_m: L_1 \hookrightarrow \mathbb{C} $$ which are identity on $ L $. I can use Abel's theorem to reduce the problem to the case where $ L_1 = L(a) $. And now I've been given a hint that if that's the case, then the number of such embeddings is equal to the number of roots of $f(x) \in L[x]$ minimal for $ a $. I can't really think of an answer why that is. I would appreciate some hints
Let $f(x)$ be the minimal polynomial of $a$ over $L$; it then has to be of degree $m$. This means we can express $a^m$ as an $L$-linear combination of lower powers $a^k$ ($k<m$), and by definition, $a$ doesn't satisfy any other algebraic equations that don't follow from $f(a)=0$. Now, if we replace the root $a$ by an indeterminate $x$, we see that $L[x]/(f)\cong L(a)=L_1$. (More directly, you can prove that the kernel of the evaluation $L[x]\to L(a),\ x\mapsto a$ is just the ideal $(f)$.) This could be read as if the root $a$ could equally well be just a 'formally adjoined element' $x$ requested to satisfy $f(x)=0$. Since $f$ is irreducible in $L[x]$, and has no multiple roots (because $\gcd(f,f')=1$), it has $m$ roots in $\Bbb C$. Say these are $z_1,\dots,z_m$. Now, if $\sigma:L_1\to\Bbb C$ is an embedding, then $f(a)=0$ implies $f(\sigma(a))=0$ as $\sigma$ is a homomorphism, i.e. $\sigma(a)=z_k$ for some $k$. And conversely, given any root $z_k$ of $f$ in $\Bbb C$, we can define $\sigma_k$ to take $a\mapsto z_k$, and as it wants to be a homomorphism, it needs to satisfy $$\sigma_k(\sum_{j=0}^{m-1}\lambda_ja^j)=\sum_{j=0}^{m-1}\lambda_j{z_k}^j$$ with $\lambda_j\in L$, which already defines $\sigma_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve this Linear Algebra problem involving a system of linear equations? The following is what I have so far. I'm not sure how to use my echelon matrix to find out which values for the variables can provide an answer to the question or how to prove it. I was thinking of plugging in arbitrary numbers for $x_3\ y_1\ y_2$ but not sure if this is the way to approach this.
The augmented coefficient matrix and its row reduction: $$\begin{pmatrix}1&\!\!-2&1&y_1\\ 2&1&q&y_2\\ 0&5&\!\!-1&y_3\end{pmatrix}\stackrel{R_2-2R_1}\rightarrow\begin{pmatrix}1&\!\!-2&1&y_1\\ 0&5&q-2&y_2-2y_1\\ 0&5&\!\!-1&y_3\end{pmatrix}\stackrel{R_3-R_2}\rightarrow$$$${}$$ $$\rightarrow\begin{pmatrix}1&\!\!-2&1&y_1\\ 0&5&q-2&y_2-2y_1\\ 0&0&1-q&y_3-y_2+2y_1\end{pmatrix}$$ Thus, for example: $$\text{If}\;\;q=1\implies y_3-y_2+2y_1=0$$ and the system has a solution (but not a unique one! Why?). Try to take it from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $\frac{(n-a)^2}{n}$ can be written as $\left(1-\frac{n}{a}\right)^2\cdot\frac{n}{(n/a)^2}$ I have got as far as $(a^2/n)-2a+n$ but I can not see how to proceed. Can anyone help please?
$$\frac{(n-a)^2}{n} =(n-a)^2\left(\frac{1}{n}\right) =(n-a)^2\left(\frac{1}{n}\right)\frac{a^2}{a^2} =\left(\frac{n-a}{a}\right)^2 \frac{a^2}{n}= \left(\frac{n}{a}-1\right)^2 \frac{a^2}{n} = \left(\frac{n}{a}-1\right)^2 \frac{n^2}{n^2} \frac{a^2}{n} = (-1)^2\left(1-\frac{n}{a}\right)^2 \frac{n}{(\frac{n}{a})^2}=\left(1-\frac{n}{a}\right)^2 \frac{n}{(\frac{n}{a})^2}$$ The real trick here is multiplying by $1=\frac{a^2}{a^2}=\frac{n^2}{n^2}$.
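The identity can also be checked exactly with rational arithmetic (the sample $n$, $a$ pairs below are arbitrary nonzero values):

```python
from fractions import Fraction

# Exact check of (n-a)^2/n == (1 - n/a)^2 * n / (n/a)^2.
for n, a in [(7, 3), (10, 4), (5, 2)]:
    n, a = Fraction(n), Fraction(a)
    lhs = (n - a) ** 2 / n
    rhs = (1 - n / a) ** 2 * n / (n / a) ** 2
    assert lhs == rhs
```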
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Vectors of force and their angles This is a homework problem which has me stumped. I have just begun Calculus 3 and this is the first section introducing vectors. . The embedded picture states the problem and shows some of my hen's scratching in an effort to solve it. After studying the example problem from the same sub-chapter, I understand that the two vectors given, when summed, is the vector that is acting in opposition to the vector pulling the weight down. That is $\mathbf{F_1} + \mathbf{F_2} = \langle 0,50 \rangle$. This leads to the following system of equations: $$ \begin{array}{rcl} -\left|\mathbf{F_1}\right|cos(\alpha) + \left|\mathbf{F_2}\right|cos(60^\circ) & = & 0 \\ \left|\mathbf{F_1}\right|sin(\alpha) + \left|\mathbf{F_2}\right|sin(60^\circ) & = & 50 \\ \end{array} $$ Basically, I'm having troubles figuring out how to eliminate one of the two unknowns: $\alpha$ and $\left|\mathbf{F_2}\right|$. Using the system of equations, I can tell you that $\left|\mathbf{F_2}\right|$ is as follows: $$ \begin{array}{rcl} -\left|\mathbf{F_1}\right|cos(\alpha) + \left|\mathbf{F_2}\right|cos(60^\circ) & = & 0 \\ -35cos(\alpha) + \left|\mathbf{F_2}\right|\frac{1}{2} & = & 0 \\ -70cos(\alpha) + \left|\mathbf{F_2}\right| & = & 0 \\ \left|\mathbf{F_2}\right| & = & 70cos(\alpha) \\ \end{array} $$ Also, knowing that the interior angles of a triangle must sum $180^\circ$, it's also quite a simple matter to demonstrate that $\alpha$ is as follows (assigning $\beta$ to the unnamed angle): $$ \begin{array}{rcl} 180 & = & 60 + \alpha + \beta \\ \alpha & = & 120 - \beta \\ \end{array} $$ From the scratching notes you can see in the picture, you can see that I've tried to draw in the unseen vector $\langle 0,50 \rangle$ in the hopes of making a right triangle to get myself closer to the answer. This didn't help me. I had the inspiration that some of the trigonometric functions and identities I learned (too many years ago now) would be of use. 
So I pulled out my pre-calc book to look up things like SSA and so forth. However, none seem to be the answer. For example, SSA only works if you know two of the three sides and one angle. Well, I've got two of the three required criteria. I also refreshed myself on the law of sines. However, this doesn't bring me closer to (what I think are) the solutions. So, once again, I can tell you this much: $$ \text{The law of sines} \\ \frac{sin(A)}{a} = \frac{sin(B)}{b} = \frac{sin(C)}{c} \\ \text{I don't know C, or c, but this shouldn't matter} \\ \Rightarrow \frac{sin(60)}{35} = \frac{sin(\alpha)}{\left|\mathbf{F_2}\right|} $$ However, it's pretty obvious that, though I can calculate the left side, it doesn't bring me close enough to get either $\alpha$ or $\left| \mathbf{F_2} \right|$. So, if someone could kindly point me in the right direction, I'd be very grateful. Thanks.
Using Lami's theorem here (the angle between $F_2$ and the weight is $150^\circ$, so the $F_1$ term is $F_1/\sin 150^\circ = 2F_1$): $\frac{F_2}{\sin (90^\circ+\alpha)}=2F_1=\frac{50}{\sin (120^\circ-\alpha)}$ $\frac{F_2}{\cos \alpha}=2F_1=\frac{50}{\sin (120^\circ-\alpha)}$ $\frac{F_2}{\cos \alpha}=\frac{50}{\sin 120^\circ\cos\alpha-\cos 120^\circ\sin\alpha}$ $\frac{F_2}{\cos \alpha}=\frac{50}{\frac{\sqrt3}{2}\cos\alpha+\frac{1}{2}\sin\alpha}$ $\frac{F_2}{\cos \alpha}=\frac{100}{\sqrt3\cos\alpha+\sin\alpha}$ Substituting your result above, $F_2=70 \cos \alpha$: $70=\frac{100}{\sqrt3\cos\alpha+\sin\alpha}$ $\sqrt3\cos\alpha+\sin\alpha=\frac{10}{7}$ Writing the left side in the form $R\sin(\alpha+\theta)$ with $R=2$, $\theta=60^\circ$: $\sqrt3\cos\alpha+\sin\alpha=2\sin(\alpha+60^\circ)$ $2\sin(\alpha+60^\circ)=\frac{10}{7}$ $\sin(\alpha+60^\circ)=\frac{5}{7}$ $\alpha+60^\circ=45.584^\circ$ $\alpha=-14.4^\circ$ and $F_2=67.79$. I am getting the angle to be negative, meaning $\sin$ is negative and $\cos$ is positive, i.e. the fourth quadrant; a third-quadrant angle (where $\sin$ is also negative) would fall outside the range of the picture. It sounds a bit improbable to have a negative angle in real life, but according to the question, it makes sense. Good luck.
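A numerical check of this result against the original equilibrium equations (assuming $|F_1| = 35$ and a $50$-unit weight, as in the question):

```python
import math

# Check alpha ≈ -14.4°, F2 ≈ 67.8 against the two equilibrium equations.
alpha = math.degrees(math.asin(5 / 7)) - 60     # from 2*sin(alpha + 60°) = 10/7
F1 = 35
F2 = 70 * math.cos(math.radians(alpha))
horiz = -F1 * math.cos(math.radians(alpha)) + F2 * math.cos(math.radians(60))
vert = F1 * math.sin(math.radians(alpha)) + F2 * math.sin(math.radians(60))
assert abs(horiz) < 1e-9 and abs(vert - 50) < 1e-9
assert abs(alpha - (-14.4)) < 0.05 and abs(F2 - 67.8) < 0.05
```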
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
On adding terms to limits Is it always possible to add terms into limits, like in the following example? (Or must certain conditions be fulfilled first, such as for example the numerator by itself must converge etc) $\lim_{h \to 0} {f(x)} = \lim_{h \to 0} \frac{e^xf(x)}{e^x}$
It's not clear to me what "add terms into limits" is supposed to mean in general. In your example, you're not really doing anything: $f(x)$ and $\dfrac{e^x f(x)}{e^x}$ are equal, so any well-defined operation on them produces the same results.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1109918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finite cyclic subgroups of $GL_{2} (\mathbb{Z})$ How could we prove that any element of $GL_2(\mathbb{Z})$ of finite order has order 1, 2, 3, 4, or 6? I am aware of the proof supplied here at this link: https://www.maa.org/sites/default/files/George_Mackiw20823.pdf. But I am curious if there are any other proofs.
Suppose that $A$ has finite order in ${\rm GL}(2,\Bbb Q)$. Then we know there is $k$ such that $X^k-1$ annihilates $A$. If $m_A$ is the minimal polynomial of $A$, $\deg m_A\leqslant 2$. Moreover, $m_A$ divides $X^k-1$ so $m_A$ has as its irreducible factors the irreducible factors of $X^k-1$. Since everything is monic, irreducibility over $\Bbb Q$ is the same as irreducibility over $\Bbb Z$. Of course $X^k-1$ factors into the cyclotomic polynomials, which are irreducible$^{1}$. We also know the $n$-th cyclotomic polynomial has degree $\varphi(n)$. And it happens that $\varphi(1)=1,\varphi(2)=1,\varphi(3)=2,\varphi(4)=2,\varphi(6)=2$, but any other number fails to have $\varphi(n)$ at most $2$. For the sake of it, here are (invertible) matrices over $\Bbb Z$ of orders $2,3,4,6$ $$\begin{pmatrix}-1&0\\0&1\end{pmatrix},\begin{pmatrix}-2&-3\\1&1\end{pmatrix}, \begin{pmatrix}2&1\\-5&-2\end{pmatrix},\begin{pmatrix}3&-7\\1&-2\end{pmatrix}$$ * *Here you can find several proofs.
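A quick computational check that the four sample matrices really have orders $2, 3, 4, 6$ (plain $2\times 2$ integer multiplication, no libraries):

```python
# 2x2 integer matrix multiplication and order computation.
def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def order(M, cap=12):
    I = [[1, 0], [0, 1]]
    A = I
    for k in range(1, cap + 1):
        A = mul(A, M)
        if A == I:
            return k
    return None  # order exceeds cap

mats = [[[-1, 0], [0, 1]],
        [[-2, -3], [1, 1]],
        [[2, 1], [-5, -2]],
        [[3, -7], [1, -2]]]
assert [order(M) for M in mats] == [2, 3, 4, 6]
```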
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Summation of an infinite series The sum is as follows: $$ \sum_{n=1}^{\infty} n \left ( \frac{1}{6}\right ) \left ( \frac{5}{6} \right )^{n-1}\\ $$ This is how I started: $$ = \frac{1}{6}\sum_{n=1}^{\infty} n \left ( \frac{5}{6} \right )^{n-1} \\ = \frac{1}{5}\sum_{n=1}^{\infty} n \left ( \frac{5}{6} \right )^{n}\\\\ = \frac{1}{5}S\\ S = \frac{5}{6} + 2\left (\frac{5}{6}\right)^2 + 3\left (\frac{5}{6}\right)^3 + ... $$ I don't know how to group these in to partial sums and get the result. I also tried considering it as a finite sum (sum from 1 to n) and applying the limit, but that it didn't get me anywhere! PS: I am not looking for the calculus method. I tried to do it directly in the form of the accepted answer, $$ \textrm{if} \ x= \frac{5}{6},\\ S = x + 2x^2 + 3x^3 + ...\\ Sx = x^2 + 2x^3 + 3x^4 + ...\\ S(1-x) = x + x^2 + x^3 + ...\\ \textrm{for x < 1},\ \ \sum_{n=1}^{\infty}x^n = -\frac{x}{x-1}\ (\textrm{I looked up this eqn})\\ S = \frac{x}{(1-x)^2}\\ \therefore S = 30\\ \textrm{Hence the sum} \sum_{n=1}^{\infty} n \left ( \frac{1}{6}\right ) \left ( \frac{5}{6} \right )^{n-1} = \frac{30}{5} = 6 $$
hint: differentiate the identity $$\sum_{k=0}^{\infty} x^k = \frac{1}{1-x} $$
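Following the hint, differentiating the geometric series gives $\sum_{k\ge1} kx^{k-1} = \frac{1}{(1-x)^2}$, which at $x=5/6$ yields the value $6$ found in the question. A numerical sanity check:

```python
# Partial sums of sum_{n>=1} n*(1/6)*(5/6)**(n-1) approach 6.
s = sum(n * (1 / 6) * (5 / 6) ** (n - 1) for n in range(1, 500))
assert abs(s - 6) < 1e-9
```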
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
prove the given inequality (for series ) For any given $n \in \Bbb N,$ prove that, $$1+{1\over 2^3}+\cdots+{1\over n^3} <{3\over 2}.$$
Hint: Notice that since $f(x)=\frac{1}{x^3}$ is decreasing for every $x>0$, we have $\frac{1}{k^3}<\int_{k-1}^{k}\frac{dx}{x^3}$ for $k\ge 2$. Hence, for $n\ge 2$ (the case $n=1$ is trivial), $$\sum_{k=1}^n \dfrac{1}{k^3}<1+\int_1^n\dfrac{dx}{x^3}=1+\frac12-\frac{1}{2n^2}<\frac32$$
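A numerical illustration (the partial sums actually approach $\zeta(3)\approx 1.2020569$, comfortably below $3/2$):

```python
# Partial sums of 1/k^3 stay below 3/2; they approach zeta(3) ≈ 1.2020569.
s = 0.0
for k in range(1, 100001):
    s += 1 / k ** 3
    assert s < 1.5
assert 1.202 < s < 1.2021
```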
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Basin of attraction of the fixed map $f(x) = x-x^3$ Prove that the interval $(-\sqrt 2 ,\sqrt 2 )$ is the basin of attraction of the fixed point $0$ of the map $f(x)=x-x^3$, for $x \in \mathbb{R}$. How one would prove this? In the examples I've seen so far they usually prove that a fixed point has a certain basin of attraction by proving that the function is decreasing or increasing for certain values within the basin of attraction. In this case however the values 'jump' from positive to negative making it impossible to use that method. I've been trying some options using the absolute value, but I can't figure it out. Could you please show me a plausible proof for this situation?
Let $F(x)=\left|f(x)\right|$. I claim that $F(x)<\left|x\right|$ for all nonzero $x\in\left(-\sqrt{2},\sqrt{2}\right)$. This is quite easy to see from a graph. It can be proved using the factorization $$F(x) = \left|x-x^3\right| = \left|x\right|\,\left|1-x^2\right|.$$ Now, the claim is trivial for $0 < x \leq 1$ since then $0\le1-x^2<1$ so $$F(x) = \left|x\right|\,\left|1-x^2\right| < |x|.$$ If $1<x<\sqrt{2}$, then $1<x^2<2$ so that $0<x^2-1<1$ and, again, $\left|1-x^2\right|<1$. With this lemma out of the way, your problem is easy. Any seed $x_1\in\left(0,\sqrt{2}\right)$ leads to a decreasing sequence, that is bounded below by zero, under iteration of $F$. Thus, there is a limit; that limit must be zero, since zero is the only fixed point of $F$ in $[0,\sqrt{2})$. Any seed in $\left(-\sqrt{2},0\right)$ leads to a non-negative first iterate to which the previous analysis applies. These results extend to $f$ since the absolute value of an orbit of $f$ is exactly an orbit of $F$. Finally, the basin is no larger than $\left(-\sqrt{2},\sqrt{2}\right)$, since those endpoints form an orbit of period 2 for $f$.
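The dynamics can be observed directly; a short sketch (the seed values and iteration count are arbitrary choices — convergence is slow because the fixed point is not hyperbolic):

```python
import math

# Iterating f(x) = x - x**3: seeds in (-sqrt(2), sqrt(2)) are driven toward 0,
# while the endpoints ±sqrt(2) swap places (a period-2 orbit).
f = lambda x: x - x ** 3
for x0 in (1.4, -1.2, 1.0, 0.5):
    x = x0
    for _ in range(10000):
        x = f(x)
    assert abs(x) < 0.05
a = math.sqrt(2)
assert abs(f(a) + a) < 1e-12 and abs(f(-a) - a) < 1e-12   # period-2 orbit
```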
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Integral $\int \frac{x+2}{x^3-x} dx$ I need to solve this integral but I get stuck, let me show what I did: $$\int \frac{x+2}{x^3-x} dx$$ then: $$\int \frac{x}{x^3-x} + \int \frac{2}{x^3-x}$$ $$\int \frac{x}{x(x^2-1)} + 2\int \frac{1}{x^3-x}$$ $$\int \frac{1}{x^2-1} + 2\int \frac{1}{x^3-x}$$ now I need to resolve one integral at the time so: $$\int \frac{1}{x^2-1}$$ with x = t I have: $$\int \frac{1}{t^2-1}$$ Now I have no idea about how to procede with this...any help?
The Heaviside cover-up method for solving partial fraction decompositions deserves to be more widely known. We want to find $A,B,C$ in this equation: $$\frac{x+2}{x(x-1)(x+1)}=\frac{A}{x}+\frac{B}{x-1}+\frac{C}{x+1}$$ To find $A$, multiply by $x$: $$\frac{x+2}{(x-1)(x+1)}=A+\frac{Bx}{x-1}+\frac{Cx}{x+1}$$ Now put $x=0$ to get $\dfrac{2}{-1}=A$. It looks like a swindle, because the starting equation is not valid when $x=0$. But if this bothers you, you can make it rigorous by taking the limit as $x \to 0$ in the second equation. To find $B$, multiply by $x-1$ and put $x=1$ to get $\dfrac{3}{2}=B$. To find $C$, multiply by $x+1$ and put $x=-1$ to get $\dfrac{1}{2}=C$. (Things are not quite so simple if the denominator has a repeated root, but it's still doable. See the link for details.)
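The three coefficients found by the cover-up method can be verified exactly with rational arithmetic (the sample points below are arbitrary, avoiding the poles $0,\pm1$):

```python
from fractions import Fraction

# Verify A = -2, B = 3/2, C = 1/2 in the partial fraction decomposition.
A, B, C = Fraction(-2), Fraction(3, 2), Fraction(1, 2)
for x in [Fraction(2), Fraction(3), Fraction(-5), Fraction(1, 7)]:
    lhs = (x + 2) / (x * (x - 1) * (x + 1))
    rhs = A / x + B / (x - 1) + C / (x + 1)
    assert lhs == rhs
```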
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Generating a data set with given mean and variance Suppose we have to create $n$ integers in a given range, say between 1 and 1000, with a given mean and variance. My question is: is there an algorithm that can tell us whether such a data set exists, and how to create it if it exists? Thanks in advance for any help.
Let $p_1, p_2, ..., p_n$ be unknown reals, say $n=1000$, and let $M\ge0$, and $s^2$ be the required mean and variance, respectively. The following conditions are to be met: $$\sum_{i=1}^n p_i=1$$ $$\ \ \ \sum_{i=1}^n ip_i=M$$ $$\ \ \ \sum_{i=1}^n (i-M)^2p_i =s^2$$ with the restriction that $$\ 0\le p_i \le 1$$ for all $i$. You can probably solve this system of equations for $n=1000$. Choose 997 $p_i$'s between $0$ and $1$ for which $$\sum_{i=1}^{997} p_i\lt1,$$ and $$\sum_{i=1}^{997} ip_i\lt M.$$ and $$\sum_{i=1}^{997} (i-M)^2p_i \lt s^2.$$ Then try to solve the remaining three equations. If there are solutions between $0$ and $1$ then you are OK, if not then choose new $p_i$'s. Finally take the following intervals: $$[0,p_1), \ [p_1,p_1+p_2), \ [p_1+p_2, p_1+p_2+p_3),\ ..., \ [\sum_{i=1}^{999} p_i,1].$$ Then get an ordinary random number generator (RAND) that will give you independent random numbers uniformly distributed over the interval $[0,1]$. If RAND falls in the $i^{th}$ interval defined above then your number is $i$. These randomly selected integers will follow a distribution with the desired mean and variance.
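A toy version of this scheme with $n=5$ (the target mean $M$, variance $s^2$, and the two fixed $p_i$ below are arbitrary choices; if the solved $p_i$ landed outside $[0,1]$ one would retry with new choices, as described above):

```python
import numpy as np

# Fix p1, p2, then solve the three remaining linear conditions for p3, p4, p5.
M, s2 = 3.2, 1.0
p = np.zeros(6)                     # 1-indexed: p[1], ..., p[5]
p[1], p[2] = 0.05, 0.10
A = np.array([[1.0, 1.0, 1.0],
              [3.0, 4.0, 5.0],
              [(3 - M) ** 2, (4 - M) ** 2, (5 - M) ** 2]])
b = np.array([1 - p[1] - p[2],
              M - 1 * p[1] - 2 * p[2],
              s2 - (1 - M) ** 2 * p[1] - (2 - M) ** 2 * p[2]])
p[3:6] = np.linalg.solve(A, b)
assert np.all(p[1:] >= 0) and np.all(p[1:] <= 1)    # a valid distribution here
i = np.arange(1, 6)
assert abs(p[1:] @ i - M) < 1e-9                    # mean condition
assert abs(p[1:] @ (i - M) ** 2 - s2) < 1e-9        # variance condition
```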
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that $BI$, $AE$ and $CF$ are concurrent? Let $ABC$ be a triangle, and $BD$ be the angle bisector of $\angle B$. Let $DF$ and $DE$ be altitudes of $\triangle ADB$ and $\triangle CDB$ respectively, and $BI$ is an altitude of $\triangle ABC$. Prove that $AE$, $CF$ and $BI$ are concurrent. I saw that $BEDF$ is a kite as well as a cyclic quadrilateral, and even $I$ would lie on that circle. My approach was to assume $H$ as the intersection of $AE$ and $CF$, and prove that $BH\perp AC$, but it didn't work. Can anyone help? :)
Consider that: $$\frac {AF}{BF}=\frac{FA}{FD}\cdot \frac{FD}{FB}=\cot A\cdot \tan \frac{B}{2}$$ $$\frac {BE}{EC}=\frac{BE}{DE}\cdot \frac{DE}{CE}=\cot \frac{B}{2}\cdot \tan C$$ $$\frac{IC}{IA}=\frac{IC}{IB}\cdot \frac{IB}{IA}=\cot C\cdot \tan A$$ Multiply those equalities side by side: the product is $1$, so $AE$, $CF$, $BI$ are concurrent by Ceva's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove that the edge coloring number is smaller than or equal to two times the maximum degree Let G be a graph with maximum degree ∆(G) and χ’(G) the edge coloring number. Prove that χ’(G) ≤ 2∆(G) without using Vizing's theorem. I really don't have a clue on how to tackle this problem. Can anybody push me in the right direction? Thanks in advance
For any graph $G$ with maximum degree $\Delta(G)=0$ we have $\chi'(G)=0$, and for any graph $G$ with maximum degree $\Delta(G)=1$ we have $\chi'(G)=1$. Assume as an inductive hypothesis that $\chi'(H) \leq 2\Delta(H)$ for all graphs $H$ with $\Delta(H)<\Delta(G)$, or with $\Delta(H)=\Delta(G)$ and $E(H)<E(G)$. Now if we remove a vertex of maximum degree $\Delta(G)$ from $G$, the remaining graph can be edge-coloured with at most $2\Delta(G)$ colours by hypothesis. Now the removed vertex and its edges can be reinserted as follows: insert the vertex and choose an edge to reinsert. The connecting vertex has at most $\Delta(G)-1$ associated colours, so we have at least $\Delta(G)+1$ colours to choose from. Inserting a second edge, we have (at most) $\Delta(G)-1$ colours on the connecting vertex and $1$ colour already chosen on the edge already inserted, so still at least $\Delta(G)$ colours to choose from. We can continue adding edges in this way until the last edge, when there are at most $\Delta(G)-1$ colours on the connecting vertex and $\Delta(G)-1$ colours on the edges already chosen, leaving at least $2$ colours for the final edge choice, as required. As you can probably see, this means that, perhaps with a little more work to close out the $\Delta(G)=0$ case, we can actually infer that $\chi'(G) \leq 2\Delta(G)-1$ for $\Delta(G)>0$.
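A closely related greedy argument can be demonstrated in code: every edge is adjacent to at most $2(\Delta-1)$ other edges, so colouring edges greedily from a palette of $2\Delta-1$ colours always succeeds. This is a sketch of that bound, not the inductive proof above:

```python
from itertools import combinations

# Greedy edge colouring with palette {0, ..., 2*Delta - 2}.
def greedy_edge_colour(n, edges):
    colour = {}
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    Delta = max(deg)
    for u, v in edges:
        used = {colour[e] for e in colour if u in e or v in e}
        colour[(u, v)] = min(c for c in range(2 * Delta - 1) if c not in used)
    return colour, Delta

edges = list(combinations(range(5), 2))        # K5, Delta = 4
colour, Delta = greedy_edge_colour(5, edges)
assert len(set(colour.values())) <= 2 * Delta - 1
for e, f in combinations(edges, 2):            # proper: adjacent edges differ
    if set(e) & set(f):
        assert colour[e] != colour[f]
```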
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the real and imaginary part of the following I'm having trouble finding the real and imaginary part of $z/(z+1)$ given that z=x+iy. I tried substituting that in but its seems to get really complicated and I'm not so sure how to reduce it down. Can anyone give me some advice?
$$\begin{align} &\frac{z}{z+1}\\ =& \frac{x+iy}{x+iy+1}\\ =&\frac{x+iy}{x+iy+1}\times \frac{x-iy+1}{x-iy+1} \end{align}$$ I'm sure you can take the rest from here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Angular momentum in Cylindrical Coordinates How to calculate the angular momentum of a particle in a cylindrical coordinates system $$x_1 = r \cos{\theta}$$ $$x_2 = r \sin{\theta}$$ $$x_3 = z$$ Thanks.
I assume you are seeking the infinitesimal deformation operators generating rotations, as in quantum mechanics, so, then, $\vec{p}\sim \nabla$. In that case, since $$ \vec{R}= z \hat z + r \hat r ,\\ \nabla = \hat z \partial_z + \hat r \partial_r + \hat \theta \frac{1}{r} \partial _\theta, $$ $$ \vec L = \vec{R} \times \nabla = (z \hat z + r \hat r ) \times \left ( \hat z \partial_z + \hat r \partial_r + \hat \theta \frac{1}{r} \partial _\theta \right ) \\ = \hat z \partial_\theta- \hat r \frac{z}{r} \partial_\theta +\hat \theta (z\partial_r -r\partial_z). $$ Confirm by limiting cases z=0, θ=0, etc.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1110982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Forming equations for exponential growth/decay questions Problem Dry cleaners use a cleaning fluid that is purified by evaporation and condensation after each cleaning cycle. Every time the fluid is purified, 2.1% of it is lost. The fluid has to be topped up when half of the original fluid remains. a) Create a model which represents this situation. b) After how many cycles will the fluid need to be topped up? Progress I am thinking that I will need to use something like $y=ca^x$ where $c$ is the initial amount and a equals the decay factor. However I am not certain if this is correct since an initial amount is not given. $100$ was the first number to come to mind for the initial amount, I just didn't know if it'd have any influence if the number was different for the initial amount just merely because of part b) asking how many cycles.
Let the initial volume of the fluid be $V_0$ (introducing the density $\rho$ changes nothing: the mass lost is $2.1\%$ of the mass present, so $\rho$ cancels and we can work with volumes directly). Assume the loss is uniform, so each purification removes $2.1\%$ of the current volume. After the first cycle, $$ V_1 = V_0 - 0.021V_0 = 0.979V_0.$$ After the second cycle, $$V_2 = V_1(1-0.021) = 0.979^2V_0.$$ After $n$ cycles, $$V_n = 0.979^nV_0.$$ The fluid must be topped up when $V_n = 0.5V_0$, i.e. $0.5V_0 = 0.979^nV_0$, so $$\log(0.5) = n\log(0.979) \implies n = \frac{\log(0.5)}{\log(0.979)} \approx 32.66,$$ so the fluid needs topping up after $33$ cycles.
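The computation can be double-checked with a short script; both the closed form and a direct simulation give 33 cycles:

```python
import math

# Closed form: solve 0.979**n = 0.5.
n = math.log(0.5) / math.log(0.979)        # ≈ 32.66
assert math.ceil(n) == 33

# Direct simulation of the cycles.
V, cycles = 1.0, 0
while V > 0.5:
    V *= 0.979
    cycles += 1
assert cycles == 33
```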
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example: convergence in distributions Give an example $X _n \rightarrow X$ in distribution, $Y _n \rightarrow Y$ in distribution, but $X_n + Y_n$ does not converge to $X+Y$ in distribution. I got a trivial one. $X_n$ is $\mathcal N(0,1)$ $\forall n$, $Y_n=-X_n$, $X$ and $Y$ are also $\mathcal N(0,1)$, then $X _n \rightarrow X$ and $Y _n \rightarrow Y$, but $X_n+Y_n=0$ does not converge to $X+Y$ which is $\mathcal N(0,2)$ distributed. Do you have a more interesting example?
Notice that if $(X_n)$ and $(Y_n)$ are weakly convergent, then the sequence $(X_n+Y_n)_{n\geqslant 1}$ is tight, hence we can extract a weakly convergent subsequence. Therefore, the problem may come from the non-uniqueness of the potential limiting distributions. Consider $(\xi_i)_{i\geqslant 1}$ a sequence of i.i.d. centered random variables with unit variance. If $n$ is even, define $$X_n:=\frac 1{\sqrt n}\sum_{i=1}^n\xi_i\mbox{ and }Y_n:=-\frac 1{\sqrt n}\sum_{i=1}^n\xi_i,$$ and if $n$ is odd, then $$X_n:=\frac 1{\sqrt n}\sum_{i=1}^n\xi_i\mbox{ and }Y_n:=\frac 1{\sqrt n}\sum_{i=n+1}^{2n}\xi_i.$$ Then $X_n\to X$ and $Y_n\to Y$ where $X$ and $Y$ are standard normal, but the sequence $(X_{2n}+Y_{2n})_{n\geqslant 1}$ is null while the sequence $(X_{2n+1}+Y_{2n+1})_{n\geqslant 1}$ converges weakly to a (non-degenerate) normal distribution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I prove that two recursion equations are equivalent? I have two recursion equations that seem to be equivalent. I need a method to show the equivalence relation between them. The equations calculate number of ones in binary representation of the number. The equations are given below: 1) $$ f(0) = 0 $$ $$ f(n) = \begin{Bmatrix} f(n-1)+1 & \text{if n is odd} \\ f(\frac{n}{2}) & \text{if n is even} \end{Bmatrix} $$ 2) $$ g(0) = 0$$ $$g(n)=g(n-2^{\lfloor log_2{(n)}\rfloor})+1$$ I thought about using induction, but I have no clue how to use it along with recursive equations. Any help will be appreciated.
You can do it with induction I believe. The induction hypotheses will have to be chosen cleverly to simplify the expression for $g(n)$. My suggestion would be to do induction steps on an exponential scale. That is, assume that $g(n) = f(n)$ for all $n$ less than or equal to $2^m$. Then, prove that $g(n) = f(n)$ for $n = \left\{2^m+1, \ldots , 2^{m+1}\right\}$. The reason we want to do this is because, for $n = \left \{ 2^m+1, \ldots, 2^{m+1}-1\right\}$, $\lfloor \log_2(n)\rfloor$ has the constant value $m$. For $n = 2^{m+1}$, $\lfloor \log_2(n)\rfloor = m+1$.
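Before (or alongside) the induction, one can at least confirm that the two recursions agree on a large range; a brute-force check (using `bit_length`, since `n.bit_length() - 1` equals $\lfloor\log_2 n\rfloor$ for $n\ge1$):

```python
from functools import lru_cache

# Both recursions compute the binary popcount; check them against each other.
@lru_cache(maxsize=None)
def f(n):
    if n == 0:
        return 0
    return f(n - 1) + 1 if n % 2 else f(n // 2)

@lru_cache(maxsize=None)
def g(n):
    if n == 0:
        return 0
    return g(n - 2 ** (n.bit_length() - 1)) + 1   # strip the leading binary 1

for n in range(4096):
    assert f(n) == g(n) == bin(n).count("1")
```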
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to solve the recurrence relation $t_n=(1+c q^{n-1})p~t_{n-1}+a +nbq$? How to solve $$t_n=(1+c q^{n-1})p~t_{n-1}+a +nbq,\quad n\ge 2.$$ given that $$t_1=b+(1+c~p)(a~q^{-1}+b),\qquad p+q=1.$$ N.B- Some misprints in the question I corrected. Sorry for the misprint $an+b$ is actually $a+nbq$.
let $$\dfrac{h_{n-1}}{h_{n}}=(1+cq^{n-1})p$$ so we have $$\dfrac{h_{n-2}}{h_{n-1}}=(1+cq^{n-2})p$$ $$\cdots\cdots$$ $$\dfrac{h_{1}}{h_{2}}=(1+cq)p$$ $$\dfrac{h_{1}}{h_{n}}=\prod_{k=1}^{n-1}(1+cq^k)p\Longrightarrow h_{n}=\dfrac{h_{1}}{\prod_{k=1}^{n-1}(1+cq^k)p}$$ then we have $$t_{n}=\dfrac{h_{n-1}}{h_{n}}t_{n-1}+an+b$$ $$\Longrightarrow h_{n}t_{n}=h_{n-1}t_{n-1}+(an+b)h_{n}$$ so we have $$h_{n}t_{n}=\sum_{i=2}^{n}\left(h_{i}t_{i}-h_{i-1}t_{i-1}\right)+h_{1}t_{1}=\sum_{i=2}^{n}(ai+b)h_{i}+h_{1}t_{1}$$ so $$t_{n}=\dfrac{\sum_{i=2}^{n}(ai+b)h_{i}+h_{1}t_{1}}{h_{n}}=\prod_{k=1}^{n-1}(1+cq^k)p\cdot\left(\dfrac{(\sum_{i=2}^{n}(ai+b)h_{i}}{h_{1}}+t_{1}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof with orthogonal vectors in real analysis If $x,y \in \mathbb{R}^n$, then $x$ and $y$ are called perpendicular (or orthogonal) if $\langle x,y\rangle =0$. If $x$ and $y$ are perpendicular, prove that $|x+y|^2=|x|^2+|y|^2$. Seems pretty basic, but I'm missing something.
$$\big|x+y\big|^2= \langle x+y,x+y \rangle=\langle x,x+y\rangle+\langle y,x+y\rangle$$ $$= \langle x,x \rangle + \langle x,y \rangle + \langle y,x\rangle+ \langle y,y\rangle$$ $$=\big|x\big|^2+\big|y\big|^2$$ as $\langle x,y \rangle = \langle y,x \rangle =0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Number of circular permutation of word 'CIRCULAR' Hey please help me with this question... Find the number of circular permutation of the word 'CIRCULAR'. Number of circular permutaion is (n-1)!
An essential point to note is that all orbits under cyclic rearrangement have $8$ distinct elements, since there are some letters that occur only once and can serve as marker. Now one can just count the number of permutations of the letters, and divide by $8$, namely $\frac{8!}{2!2!1!1!1!1!}/8=1260$. Just to show the contrast with not having a singleton letter as marker, let me also count the cyclically distinct permutations of AAAABBCC. Now there are $\frac{8!}{4!2!2!}=420$ permutations to begin with, for which we need to count the cyclic orbits. Most orbits will have $8$ elements, but some orbits will have only $4$ elements because a cyclic shift of $4$ maps the permutation to itself (even larger symmetry is not possible). The number of permutations with that property is $\frac{4!}{2!1!1!}=12$: they are formed of a permutation of AABC repeated twice. So the solution of this alternative problem is $(420-12)/8+12/4=51+3=54$.
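Both counts can be confirmed by brute force: list all arrangements and identify each rotation orbit by its lexicographically smallest rotation:

```python
from itertools import permutations

# Count orbits of arrangements under cyclic rotation via a canonical form.
def cyclic_classes(word):
    def canon(t):
        return min(t[i:] + t[:i] for i in range(len(t)))
    return len({canon(p) for p in permutations(word)})

assert cyclic_classes("CIRCULAR") == 1260
assert cyclic_classes("AAAABBCC") == 54
```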
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equation of circle touching a parabola Suppose we have a parabola $y^2=4x$. How do we write the equation of a circle touching the parabola at $(4,4)$ and passing through the focus? I know that for this parabola the focus lies at $(1,0)$, so we may take the general equation of a circle $$x^2 +y^2 + 2gx + 2fy +c =0$$ and substitute the given points into it, but that gives only 2 equations in 3 unknowns. I think I am missing something; what should I do?
The tangent to the parabola at $(4, 4)$ has slope $1/2$, so the normal (which contains the radius) has slope $-2$. Let the center of the circle touching the parabola $y^2 = 4x$ at $(4,4)$ be $(4 + t,\ 4 - 2t)$. Now, equating the squared distance from the center to the focus $(1,0)$ with the squared radius (the distance to the tangency point $(4,4)$), $$5t^2 = (3+t)^2 + (4-2t)^2$$ you can find $t$, which will give you the center and the radius of the circle.
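Finishing the computation (this expansion is not in the original answer): the $t^2$ terms cancel, so the equation is actually linear in $t$.

```python
# expanding 5 t^2 = (3 + t)^2 + (4 - 2t)^2:
# 5 t^2 = 5 t^2 - 10 t + 25   =>   t = 5/2
t = 2.5
cx, cy = 4 + t, 4 - 2 * t     # center (6.5, -1.0)
r2 = 5 * t**2                 # squared radius 31.25

# the center is equidistant from the tangency point (4, 4) and the focus (1, 0)
assert abs((4 - cx)**2 + (4 - cy)**2 - r2) < 1e-12
assert abs((1 - cx)**2 + (0 - cy)**2 - r2) < 1e-12
```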
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Does $\int_0^\infty \sin(x^{2/3})\, dx$ converge? My Try: We substitute $y = x^{2/3}$. Therefore, $x = y^{3/2}$ and $\frac{dx}{dy} = \frac{3}{2}y^{1/2}$. Hence, the integral after substitution is: $$ \frac{3}{2} \int_0^\infty \sin(y)\sqrt{y}\, dy$$ Let's look at: $$\int_0^\infty \left|\sin(y)\sqrt{y} \right| dy = \sum_{n=0}^\infty \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| \sqrt{y}\, dy \ge \sum_{n=0}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\left|\sin(y)\right| dy \\= \sum_{n=1}^\infty \sqrt{n\pi} \int_{n\pi}^{(n+1)\pi}\sqrt{\sin(y)^2}\, dy$$
$\sin x^{2/3}$ remains above $1/2$ for $x$ between $[(2n+1/6)\pi]^{3/2}$ and $[(2n+5/6)\pi]^{3/2}$, so the integral rises by more than $\left([2n+5/6]^{3/2}-[2n+1/6]^{3/2}\right)\pi^{3/2}/2$ during that time. $$[2n+5/6]^{3/2}-[2n+1/6]^{3/2}=\frac{[2n+5/6]^3-[2n+1/6]^3}{[2n+5/6]^{3/2}+[2n+1/6]^{3/2}}\\ >\frac{8n^2}{2[2n+1]^{3/2}}$$ That increases as a function of $n$, so the integral does not converge.
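A small numerical illustration of the growing windows (illustrative only, not part of the original answer): the lengths of the stretches where the integrand stays above $1/2$ increase with $n$, so the contributions of those stretches cannot shrink.

```python
import math

def window_length(n):
    # length of the n-th stretch where sin(x^(2/3)) >= 1/2
    a = ((2 * n + 1 / 6) * math.pi) ** 1.5
    b = ((2 * n + 5 / 6) * math.pi) ** 1.5
    return b - a

lengths = [window_length(n) for n in range(1, 8)]
# the windows keep getting longer, so the rises (>= length / 2) do not shrink
assert all(y > x for x, y in zip(lengths, lengths[1:]))
```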
{ "language": "en", "url": "https://math.stackexchange.com/questions/1111952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Getting the standard deviation from the pdf A normally distributed random variable with mean $\mu$ has a probability density function given by $\dfrac{\gamma}{\sqrt{2\pi\sigma}}$ $\exp(-\dfrac{\gamma ^2}{\sigma} \dfrac{(x-\mu)^2}{2}) $ So the standard deviation is the square root of the variance, which is $E[(x-\mu)^2]$. However, I don't know how to proceed with this information. How can I get the standard deviation from the pdf?
Usually one writes $$ \frac1{\sigma\sqrt{2\pi}}\exp\left(-\frac 1 2 \left( \frac{x-\mu}\sigma \right)^2\right). \tag 1 $$ If one must add that extra parameter $\gamma$, then one has $$ \frac\gamma{\sigma\sqrt{2\pi}}\exp\left(-\frac 1 2 \gamma^2\left( \frac{x-\mu}\sigma \right)^2\right). \tag 2 $$ This amounts to just putting $\sigma/\gamma$ where $\sigma$ had been. If one can show that for $(1)$, the standard deviation is $\sigma$, then it follows that for $(2)$ the standard deviation is $\sigma/\gamma$. PS: I see the question has undergone further editing, having $\sqrt{\sigma}/\gamma$ where $\sigma$ appears in $(1)$. In that case, the standard deviation would be $\sqrt{\sigma}/\gamma$. end of PS The variance is $$ \int_{-\infty}^\infty (x-\mu)^2 f(x)\,dx. $$ In the case of $(1)$, this is \begin{align} & \int_{-\infty}^\infty (x-\mu)^2 \frac1{\sigma\sqrt{2\pi}}\exp\left(-\frac 1 2 \left( \frac{x-\mu}\sigma \right)^2\right)\,dx \\[10pt] = {} & \sigma^2 \int_{-\infty}^\infty \left( \frac{x-\mu}\sigma \right)^2 \frac1{\sqrt{2\pi}}\exp\left(-\frac 1 2 \left( \frac{x-\mu}\sigma \right)^2\right)\,\frac{dx}\sigma \\[10pt] = {} & \frac{\sigma^2}{\sqrt{2\pi}} \int_{-\infty}^\infty w^2 \exp\left(-\frac 1 2 w^2\right)\,dw. \end{align} So it is enough to show that without the $\sigma^2$ we get $1$. Since we have an even function over an interval symmetric about $0$, the integral is \begin{align} & \frac2{\sqrt{2\pi}} \int_0^\infty w^2 \exp\left(-\frac 1 2 w^2\right)\,dw = \frac2{\sqrt{2\pi}} \int_0^\infty w \exp\left(-\frac 1 2 w^2\right)(w\,dw) \\[10pt] = {} & \frac2{\sqrt{\pi}} \int_0^\infty \sqrt{u} e^{-u}\,du = \frac2{\sqrt{\pi}} \Gamma\left(\frac 3 2 \right) = \frac2{\sqrt{\pi}}\cdot\frac 1 2 \Gamma\left(\frac 1 2 \right). \end{align} Now recall that $\Gamma(1/2)= \sqrt{\pi}$. As to how we know that, that is the topic of another question already posted here.
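A numerical cross-check of the conclusion (the sample values $\mu=0$, $\sigma=2$, $\gamma=3$ are mine, not from the post): integrating the question's density shows it has total mass $1$ and standard deviation $\sqrt{\sigma}/\gamma$.

```python
import math

# sample values (mine): mu = 0, sigma = 2, gamma = 3
MU, SIGMA, GAMMA = 0.0, 2.0, 3.0

def pdf(x):
    # the density from the question: gamma/sqrt(2*pi*sigma) * exp(-gamma^2 (x-mu)^2 / (2 sigma))
    return GAMMA / math.sqrt(2 * math.pi * SIGMA) * math.exp(
        -GAMMA**2 * (x - MU)**2 / (2 * SIGMA))

def trapezoid(f, a, b, n=20000):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

total = trapezoid(pdf, -10, 10)
variance = trapezoid(lambda x: (x - MU)**2 * pdf(x), -10, 10)
sd = math.sqrt(variance)

assert abs(total - 1.0) < 1e-6                     # a genuine density
assert abs(sd - math.sqrt(SIGMA) / GAMMA) < 1e-6   # sd = sqrt(sigma)/gamma
```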
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
inequality symbols question (beginning algebra) Please help me with this problem: "In each of the following exercises $x$ and $y$ represent any two whole numbers. As you know, for these numbers exactly one of the statements $x < y, x = y$, or $x > y$ is true. Which of these is the true statement for each of the following exercises?" My reasoning is: e. Since $y$ is not greater than $x$ and $y$ is not equal to $x$, it follows that $y$ is less than $x$, which is equivalent to the statement $x > y$ (from the above). So my answer: $x > y$ is the only true statement. f. Since $y$ is not less than $x$ and $y$ is not equal to $x$, it follows that $y$ is greater than $x$, which is equivalent to the statement $x < y$ (from the above). So my answer: $x < y$ is the only true statement. Am I right? (In the image, the answers suggested by the book are in red (I have an instructor's copy for self-study), and they contradict my reasoning...) P.S. sorry for my English
Your reasoning is correct, but it actually does not contradict the answers suggested in the book. As voldemort pointed out in a comment, $x>y$ is logically equivalent to $y<x$ [e.g., $3>2$ and $2<3$]. This is because $>$ and $<$ are converses of each other: $x>y$ says exactly the same thing as $y<x$, and likewise $x<y$ is logically the same as $y>x$. In case you are interested, note that even though $>$ and $<$ are converse to one another, neither relation is reflexive; that is, $x<x$ is never true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The cardinality of the set of open subsets and related proofs. I have been going around in circles trying to prove these things for the last week; I would really appreciate any ideas on any of the following proofs. Let $C$ be a linear continuum with no endpoints, $D$ a countable dense subset of $C$, and $O$ the set of all open subsets of $C$. Show the following:

*A subset $A$ of $C$ is open iff $A$ is the union of a set of open intervals of $C$ with endpoints in $D$.

*$|O| = 2^{\aleph_0}$

*Every well-ordered strictly increasing family of open sets is countable.

*For every $\delta < \omega_1$ there is a strictly increasing family $\langle A_\xi: \xi < \delta \rangle$ of open subsets of $C$.

I only know that for the second one I should find an embedding into the rationals or something similar, but I can't figure out how to do it.
These are a handful of questions that have been asked before. Let me provide a few helpful hints:

*One direction is trivial; for the other direction show that every open set is the union of open intervals, and every open interval is the union of open intervals with endpoints in $D$.

*Use the fact that if $U$ is open, then $U$ is fully determined by the countable set of intervals with endpoints in $D$ which are subsets of $U$.

*If $\langle A_\xi\mid \xi<\delta\rangle$ is an increasing sequence of open sets, then by the fact from the previous hint, this defines a well-ordered family of subsets of $\Bbb N$ ordered by $\subseteq$ which has order type $\delta$. Therefore $\delta<\omega_1$ (assign to $\xi$ the least natural number that appears in the $\xi$-th set for the first time).

*Since every countable linear order embeds into $\Bbb Q$ (and therefore into $D$), embed $\delta$ into $D$ and use it to construct these open sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving augmented linear system $\left(\begin{smallmatrix}1&0&-2&~~~&0\\0&1&0&&0\\0&0&0&&0\end{smallmatrix}\right)$ $$\left(\begin{smallmatrix}1&0&-2&~~~&0\\0&1&0&&0\\0&0&0&&0\end{smallmatrix}\right)\to \mathbb{L}=\langle\left(\begin{smallmatrix}2\\0\\1\end{smallmatrix}\right)\rangle$$ I have seen a lot of tutorials, but none explaining how to assemble the solution set $\mathbb{L}$ when the right-hand side of the system is all zeros. In other words, how can $(2, 0, 1)$ be a solution?
You're trying to solve the matrix equation $$\left(\begin{matrix} 1 & 0 & -2\\ 0 & 1 & 0\\ 0 & 0 & 0 \end{matrix}\right) \left(\begin{matrix} x \\ y \\ z \end{matrix}\right) =\left(\begin{matrix} 0 \\ 0 \\ 0 \end{matrix}\right) $$ Substituting $$\left(\begin{matrix} 2 \\ 0 \\ 1 \end{matrix}\right)$$ for $$\left(\begin{matrix} x \\ y \\ z \end{matrix}\right)$$ will verify that it is a solution.
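The same substitution in code (a trivial matrix-vector product; the variable names are mine):

```python
A = [[1, 0, -2],
     [0, 1, 0],
     [0, 0, 0]]
v = [2, 0, 1]
Av = [sum(a * x for a, x in zip(row, v)) for row in A]
assert Av == [0, 0, 0]   # (2, 0, 1) solves the homogeneous system
```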
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
(Infinite) Nested radical equation, how to get the right solution? I've been tasked with coming up with exam questions for a high school math contest to be hosted at my university. I offer the following equation, $$\sqrt{x+\sqrt{x-\sqrt{x+\sqrt{x-\cdots}}}}=2$$ and ask for the solution for $x$. Here's what I attempted so far. The first utilizes some pattern recognition, but it gives me two solutions (only one of which is correct). $$\begin{align*} \sqrt{x+\sqrt{x-\sqrt{x+\sqrt{x-\cdots}}}}&=2\\ \sqrt{x-\sqrt{x+\sqrt{x-\cdots}}}&=4-x\\ \sqrt{x+\sqrt{x-\cdots}}&=x-(4-x)^2\\ 2&=x-(4-x)^2&\text{(from line 1)}\\ (x-6)(x-3)&=0 \end{align*}$$ $x=6$ is the extraneous solution. Where did I go wrong, and how can I fix this? I know there's a closed form for non-alternating nested radicals $\sqrt{n+\sqrt{n+\cdots}}$ and $\sqrt{n-\sqrt{n-\cdots}}$, but I can't seem to find anything on alternating signs.
You know that: $$\sqrt{x+\sqrt{x-\sqrt{x+\sqrt{x-\cdots}}}}=A = 2$$ and hence $$A = \sqrt{x+\sqrt{x-A}} = 2$$ or equivalently $$\sqrt{x+\sqrt{x-2}} = 2$$ Clearly, $\sqrt{x-2}$ is well defined when $$x \geq 2. ~~~(1)$$ Then, squaring both sides, you get: $$x+ \sqrt{x-2} = 4 \Rightarrow \sqrt{x-2} = 4-x ~~~(2).$$ Since $\sqrt{x-2} \geq 0$, then also $4-x \geq0$, and hence $$x \leq 4. ~~~(3)$$ Joining conditions $(1)$ and $(3)$, one obtains the existence set for $x$: $$x \geq 2 \wedge x \leq 4 \Rightarrow 2\leq x \leq 4. ~~~(4)$$ Going back to $(2)$, we can square both sides and we get: $$x-2 = (4-x)^2 \Rightarrow x-2=16+x^2-8x \Rightarrow x^2-9x+18=0 \Rightarrow $$ $$\Rightarrow (x-3)(x-6) = 0. ~~~(5)$$ The solutions of $(5)$ are $x_1 = 3$ and $x_2 = 6$, but according to $(4)$, only $x_1 = 3$ is feasible.
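A numerical sanity check (the iteration scheme below is mine): evaluating the alternating nested radical from the inside out shows it equals $2$ at $x = 3$ but not at $x = 6$, confirming that $6$ is extraneous.

```python
import math

def nested(x, depth=60):
    # build sqrt(x + sqrt(x - sqrt(x + ...))) from the inside out
    v = math.sqrt(x)
    for _ in range(depth):
        v = math.sqrt(x - v)   # the "minus" level
        v = math.sqrt(x + v)   # the "plus" level (outermost sign)
    return v

assert abs(nested(3) - 2) < 1e-9    # x = 3 really gives 2
assert abs(nested(6) - 2) > 0.5     # x = 6 is extraneous (value is about 2.79)
```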
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proving Set Operations I'm trying to prove that if $A$ is a subset of $B$ then $A \cup B = B$, but I am having trouble trying to prove this mathematically. I know that since $A$ is a subset of $B$, every element $x$ of $A$ is also in $B$. So when I form $A \cup B$, the elements of $A$ are already in $B$; so the result of the union should be $B$. How would I show this mathematically? What about if $A$ is a subset of $B$; then $A \cap B = A$?
The first thing you need to do is make sure you know what the definitions of union, intersection, and subset really mean. Union: $A\cup B = \{x : x\in A \space\text{or}\space x\in B\}$. Intersection: $A\cap B = \{x : x\in A \space\text{and}\space x\in B\}$. Subset: We say that $A$ is a subset of $B$, written $A\subseteq B$, provided that for all $x$, if $x\in A$, then $x\in B$. That is $$ (A\subseteq B) \Longleftrightarrow (\forall x)(x\in A\to x\in B) \Longleftrightarrow (\forall x\in A)(x\in B). $$ The two problems you are considering are proved by showing mutual subset inclusion; that is, if you can show that, for your first problem, that $A\cup B \subseteq B$ and also that $B\subseteq A\cup B$, then you will have shown that $A\cup B = B$. Similarly, for your second problem, if you can show that $A\cap B \subseteq A$ and also that $A\subseteq A\cap B$, then you will have shown that $A\cap B = A$. Given these facts (refer back to them often while reading on), try to follow the two proofs below (and let me know if you have questions). Problem 1: If $A$ is a subset of $B$, then $A\cup B = B$. Proof. Suppose $A\subseteq B$. Then we have the following: ($\subseteq$): Pick $x\in A\cup B$. Thus, either $x\in A$ or $x\in B$. Now, if $x\in B$, we are done; thus, suppose $x\in A$. Then, since $A\subseteq B$, we have that $x\in B$ also. In either case, $x\in B$, so that $A\cup B \subseteq B$. ($\supseteq$): Pick $x\in B$, which implies that $x\in A\cup B$. Thus, $B\subseteq A\cup B$. Since we have shown that $A\cup B \subseteq B$ and also that $B\subseteq A\cup B$, we necessarily have that $A\cup B = B$. Problem 2: If $A$ is a subset of $B$, then $A\cap B = A$. Proof. Suppose $A\subseteq B$. Then we have the following: ($\subseteq$): Pick $x\in A\cap B$. Then $x\in A$ and $x\in B$. Since $x\in A$, we have that $A\cap B\subseteq A$. ($\supseteq$): Pick $x\in A$. Then, since $A\subseteq B$, we have that $x\in B$ also. 
Therefore, since $x\in A$ and $x\in B$, it follows that $x\in A\cap B$. Thus, $A\subseteq A\cap B$. Since we have shown that $A\cap B \subseteq A$ and also that $A\subseteq A\cap B$, we necessarily have that $A\cap B = A$.
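A randomized spot-check of both identities (illustrative only; a finite test is of course no substitute for the proofs above):

```python
import random

random.seed(1)
# whenever A is a subset of B, we should have A ∪ B = B and A ∩ B = A
for _ in range(100):
    B = set(random.sample(range(30), k=12))
    A = set(random.sample(sorted(B), k=5))
    assert A | B == B
    assert A & B == A
```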
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solution to Differential Equation I'm looking for a solution to the following differential equation: $$ y'' = \frac{c_1}{y} - \frac{c_2}{y^2} $$ where $c_1$ and $c_2$ are non-zero constants, and y is always positive. The resulting function should be periodic. Any help is appreciated.
$$y' y'' = \frac12 \frac{d}{dx} (y'^2) = c_1 \frac{y'}{y} - c_2 \frac{y'}{y^2} $$ Integrate both sides to get $$\frac12 y'^2 = c_1 \log{y} + \frac{c_2}{y} + K_1$$ where $K_1$ is a constant of integration. Now take the square root of both sides and integrate to get $$\pm x+K_2 = \frac1{\sqrt{2}}\int \frac{dy}{\sqrt{K_1+c_1 \log{y}+c_2 y^{-1}}}$$ At this point, nothing further comes to mind.
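Although the quadrature above has no elementary closed form, the periodicity the question expects can be checked numerically. The sign choice below is mine: with $c_1 = c_2 = -1$ the effective potential $-c_1\log y - c_2/y = \log y + 1/y$ has a minimum at $y = 1$, so solutions near it oscillate. A small RK4 sketch:

```python
# sign choice (mine): c1 = c2 = -1, so y'' = -1/y + 1/y^2 and the
# effective potential V(y) = log(y) + 1/y has a minimum at y = 1
def accel(y):
    return -1.0 / y + 1.0 / y**2

def rk4_step(y, v, h):
    k1y, k1v = v, accel(y)
    k2y, k2v = v + h / 2 * k1v, accel(y + h / 2 * k1y)
    k3y, k3v = v + h / 2 * k2v, accel(y + h / 2 * k2y)
    k4y, k4v = v + h * k3v, accel(y + h * k3y)
    return (y + h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

y, v, h = 1.2, 0.0, 0.001     # start displaced from the equilibrium y = 1
ys = []
for _ in range(20000):        # integrate to t = 20, a few oscillation periods
    y, v = rk4_step(y, v, h)
    ys.append(y)

assert 0.5 < min(ys) < 0.9    # it dips below the start...
assert 1.1 < max(ys) < 1.3    # ...and comes back up: bounded oscillation
```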
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How would the intersection of two uncountable sets form a countably infinite set? This is based off my last question How would the intersection of two uncountable sets be finite? Here is the problem (from Discrete Mathematics and its Applications), the book's definition of countable, and the definition of having the same cardinality. I was able to get 11c pretty easily. What I thought was the intersection of the same uncountable set, say [1,2], that is [1,2]∩[1,2], would be [1,2], an uncountable set. Via help, I was able to understand 11a. That is, if you have two uncountable sets, say (−∞,0] and [0,∞), the intersection of those two sets would be that one value, zero, meaning it is finite, hence countable. What I am struggling with is applying that same idea to 11b. What I thought of was having two intervals that didn't end quite at the same spot, say (−∞,3]∩[0,∞), but the intersection of those would be [0, 3], which is itself an uncountable set. From 11a, what endpoints would you set on the intervals so that A ∩ B would be countably infinite?
How about $[0, 1] \bigcup \{2, 3, 4, 5, \dots \}$ and $[5, 6] \bigcup \{7, 8, 9, 10, \dots\}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Linear transformation of variable under the integral sign. Easy change of variables question I realize this might be a basic question, but I need a sanity check. Let $f(\vec{x})$ be a function that takes $n$-dimensional vectors and returns a real number. Suppose the goal is to compute $$\int_{\mathbb{R}^n} f(\vec{x})\, d\vec{x},$$ but in fact it is easier to work with the transformed variable $T\vec{x}$, where $T$ is some $n\times n$ matrix. What exactly is the change of variables procedure for $$\int_{\mathbb{R}^n} f(T(\vec{x}))\, d\vec{x}?$$ Do you simply divide the second integral by $\det(T)$?
The other answer uses the same name for the transformed variable, which can be confusing for some. Let's call $\vec{y} = T \vec{x}$. Then \begin{align*} \int_{\mathbb{R}^n} f(\vec{x}) d \vec{x} &= \int_{\mathbb{R}^n} f(T^{-1}\vec{y}) \frac{1}{|\det T | } d \vec{y} \end{align*} and also \begin{align*} \int_{\mathbb{R}^n} f(T\vec{x}) d \vec{x} &= \int_{\mathbb{R}^n} f(\vec{y}) \frac{1}{|\det T|} d \vec{y} \end{align*}
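A quick numerical check of the second identity (the 2-D Gaussian and the sample matrix $T=\left[\begin{smallmatrix}2&1\\0&3\end{smallmatrix}\right]$ with $\det T = 6$ are my choices, not from the original answer):

```python
import math

# f is a normalized 2-D Gaussian, so its integral over the plane is 1
def f(x, y):
    return math.exp(-(x * x + y * y) / 2) / (2 * math.pi)

# assumed sample matrix T = [[2, 1], [0, 3]], det T = 6
def f_of_Tx(x, y):
    return f(2 * x + 1 * y, 0 * x + 3 * y)

def grid_integral(g, lim=8.0, n=400):
    # midpoint rule on an n-by-n grid over [-lim, lim]^2
    h = 2 * lim / n
    return h * h * sum(
        g(-lim + (i + 0.5) * h, -lim + (j + 0.5) * h)
        for i in range(n) for j in range(n))

I1 = grid_integral(f)        # ~ 1
I2 = grid_integral(f_of_Tx)  # ~ 1/6, i.e. (1 / |det T|) * integral of f
assert abs(I1 - 1.0) < 1e-3
assert abs(I2 - 1.0 / 6.0) < 1e-3
```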
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unconditional probability I understand the tree diagram but my answer is wrong. Urn I contains three red chips and one white chip. Urn II contains two red chips and two white chips. One chip is drawn from each urn and transferred to the other urn. Then a chip is drawn from the first urn. What is the probability that the chip ultimately drawn from urn I is red?
Here is a tree diagram for the scenario. A reminder on how to use the tree diagram, on each branch the probability of traveling along that branch from the previous branching point is written. For example, to travel along the topmost branch on the left corresponds to the event of pulling a red from the first urn and a white from the second, which occurs with probability $\frac{3}{4}\cdot\frac{1}{2}=\frac{3}{8}$. To arrive at a particular leaf corresponds to having traveled along the branches to get there, and so occurs with probability equal to the product of the probabilities associated with each branch. For example, the topmost leaf is $\frac{3}{8}\cdot\frac{1}{2}=\frac{3}{16}$. The event we are curious about is "pull a red from urn 1 at the end", and the corresponding leaves for that event are put in blocks. Adding these together gives the final answer $\frac{3}{16}+\frac{9}{32}+\frac{3}{32}+\frac{1}{8}=\frac{11}{16}$.
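Since the tree-diagram image may not render here, the same four leaves can be computed exactly (this enumeration, using Python's `fractions`, is not part of the original answer):

```python
from fractions import Fraction

F = Fraction
half = F(1, 2)

# (probability of the transfer pair, red chips in urn I afterwards, total chips)
cases = [
    (F(3, 4) * half, 2, 4),  # R leaves urn I, W arrives  -> urn I holds 2R 2W
    (F(3, 4) * half, 3, 4),  # R leaves, R arrives        -> 3R 1W
    (F(1, 4) * half, 3, 4),  # W leaves, W arrives        -> 3R 1W
    (F(1, 4) * half, 4, 4),  # W leaves, R arrives        -> 4R 0W
]
p_red = sum(pi * F(r, tot) for pi, r, tot in cases)
assert p_red == F(11, 16)
```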
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
can probability be negative I am solving a question that says: Given that $$P(B)=P(A\cap B)=\frac{1}{2}$$ and $$P(A)=\frac{3}{8}$$ Find $$P(A\cap {B}^{c})$$ My answer is $$P(A\cap {B}^{c})=P(A)-P(A\cap B)=-\frac{1}{8}$$
It's because the two conditions contradict each other. The first says that $P(B)$ is $\frac{1}{2}$, and also that whenever $B$ happens $A$ happens, so already we know that $P(A) \ge \frac{1}{2}$. But the second says that $P(A) \lt \frac{1}{2}$. The derivation you have carried out is another way to say the same thing (as a proof by contradiction), because negative probability violates Kolmogorov's first probability axiom. On the other hand, negative "probability" can have meaning if we expand our scope somewhat.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1112991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discrete Time Fourier Transform of the signal represented by $x[n] = n^2 a^n u[n]$ I have a homework problem that I am just not sure where to start with. I have to take the Discrete Time Fourier Transform of a signal represented by: $$x[n] = n^2 a^n u[n]$$ given that $|a| < 1$, $\Omega_0 < \pi$, and u[n] being the unit step function. There is a hint saying that "Calculus and derivatives will help!" but that actually confuses me more than it helps. However, regardless, I'm just not sure how to even get started evaluating that. I know how to take the DTFT of a signal, but just can't figure out how to get it into a usable form. I also am allowed to use the following conversion: $$ x[n] = a^nu[n] \iff X(\Omega) = \frac{e^{j\Omega}}{e^{j\Omega} - a}$$ I am not looking for the answer, but rather just some pointers as to how to start manipulating the original signal.
To include this as the answer that most helped me, since it was given as a comment to my original post. The most straightforward solution is to use the DTFT property that multiplication by $n$ in the time domain corresponds to differentiation in the frequency domain: $n\,x[n] \iff j\frac{d}{d\Omega}X(\Omega)$. Therefore, the solution is to take the transform from the provided conversion and apply $j\frac{d}{d\Omega}$ twice. The first application gives $$n\,a^nu[n] \iff \frac{ae^{j\Omega}}{(e^{j\Omega} - a)^2}$$ and the second gives the answer: $$X(\Omega) = \frac{ae^{j2\Omega} + a^2e^{j\Omega}}{(e^{j\Omega} - a)^3}$$
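The closed form can be cross-checked against a truncated partial sum of the defining series (the test values $a = 0.4$, $\Omega = 0.7$ are arbitrary choices of mine). Writing $z = e^{j\Omega}$, applying $j\,d/d\Omega$ twice to $z/(z-a)$ yields $az(z+a)/(z-a)^3$:

```python
import cmath

a, omega = 0.4, 0.7          # arbitrary test values with |a| < 1
z = cmath.exp(1j * omega)

# partial sum of the defining series: sum_n n^2 a^n e^{-j*omega*n}
direct = sum(n**2 * a**n * cmath.exp(-1j * omega * n) for n in range(300))

# closed form obtained by applying j d/dOmega twice to e^{jW}/(e^{jW} - a)
closed = a * z * (z + a) / (z - a)**3
assert abs(direct - closed) < 1e-10
```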
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
understanding a statement in Gill, Murray and Wright "Practical Optimization" Hi: I'm reading the book "Practical Optimization" and there's a part in Chapter 3 that I can't prove to myself but I'm sure it's true. On page 64, they define the Taylor expansion of $F$ about $x^{*}$: (3.3) $F(x^{*} + \epsilon p) = F(x^{*}) + \epsilon p^{T}g(x^{*}) + \frac{1}{2} \epsilon^2 p^{T} G(x^{*} + \epsilon \theta p)p$ where $g$ is the gradient of $F$ and $G$ is the Hessian of $F$. They want to prove that a necessary condition for $x^{*}$ to be a local minimum of $F$ is that $g(x^{*}) = 0 $. i.e: $x^{*}$ is a stationary point. They proceed using a contradiction argument as follows. Assume that $x^{*}$ is a local minimum of $F$ and that it is not a stationary point. If $g(x^{*})$ is non-zero, then there must exist a vector $p$ for which (3.4) $p^{T}g(x^{*}) < 0$. Any vector that satisfies(3.4) is called a descent direction at $x^{*}$. This is fine so far. It is the next statement that I don't see. The statement is "Given any descent direction $p$, there exists a positive scalar $\bar\epsilon$ such that for any positive $\epsilon$ satisfying $\epsilon \le \bar\epsilon$, it holds that $\epsilon p^{T}g(x^{*}) + \frac{1}{2} \epsilon^2 p^{T} G(x^{*} + \epsilon \theta p)p < 0$. I'm wondering how this statement can be proven. It's obvious to the authors and probably others but not to me. Thanks.
Divide the inequality by $\epsilon$. Then for $\epsilon\to0$ the left-hand side tends to $p^Tg(x^*)<0$. (Assuming that $G$ is continuous). Thus, there is $\bar\epsilon$ such that the expression is negative for all $\epsilon\in(0,\bar\epsilon)$.
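The statement can be made concrete with a toy example (the function and numbers below are mine, not from the book): along any $p$ with $p^Tg(x^*) < 0$, the objective decreases for every sufficiently small $\epsilon > 0$.

```python
# toy example: f(x, y) = x^2 + 3 y^2, evaluated around the point (1, 1)
def f(x, y):
    return x**2 + 3 * y**2

g = (2.0, 6.0)        # gradient of f at (1, 1)
p = (-2.0, -6.0)      # p^T g = -40 < 0: a descent direction
f0 = f(1.0, 1.0)

# for every small enough eps > 0, f decreases along p
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    assert f(1.0 + eps * p[0], 1.0 + eps * p[1]) < f0
```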
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing lim sup and lim inf of $\exp(n\sin(\frac{n\pi}{2}))+\exp(\frac{1}{n}\cos(\frac{n\pi}{2}))$ and $\cosh(n\sin(\frac{n^2+1}{n}\frac{\pi}{2}))$? It's the first time I encounter lim sup and lim inf and I only just know about their definitions. I have difficulties finding out about lim sup and lim inf of the following sequences $$\exp\left(n\sin\left(\frac{n\pi}{2}\right)\right) + \exp\left(\frac{1}{n}\cos\left(\frac{n\pi}{2}\right)\right)$$ and $$\cosh\left(n\sin\left(\frac{n^2+1}{n}\frac{\pi}{2}\right)\right).$$ For the first one, I have the strong intuition that lim sup is infinite and maybe lim inf is 1? For the second one I'm really lost... Can you help me? Tell me how you proceed? Thank you very much!!
Note: After submitting the answer I realized that I've made a mistake. The conclusions that I draw are to be referred to $\sup$ and $\inf$ of the sequences, and not $\limsup$ and $\liminf$. I apologize for the error. I wanted to delete the answer, but then reconsidered it and I'll leave it here in case someone will find it useful. The reader should read those $\limsup$ and $\liminf$ as $\sup$ and $\inf$. Let $$a_n = \exp\left(n\sin\left(\frac{n\pi}{2}\right)\right) + \exp\left(\frac{1}{n}\cos\left(\frac{n\pi}{2}\right)\right)$$ We have that $$\begin{align} n = 2k &\implies \sin\left(n\frac\pi2\right) = 0\text{ and }\cos\left(n\frac\pi2\right) \in \{-1, 1\} \implies a_{2k} = 1 + \exp\left(\frac1n(-1)^k\right)\\ n = 2k + 1 &\implies \cos\left(n\frac\pi2\right) = 0\text{ and }\sin\left(n\frac\pi2\right) \in \{-1, 1\} \implies a_{2k+1} = \exp\left(n(-1)^k\right) + 1 \end{align}$$ We see that, as $k \to +\infty$, $a_{2k} \to 2$ and this subsequence attains its minimum and maximum values respectively when $k = 1$ and $k = 2$ (because it's oscillating around $2$ and the limit is $2$). So $$\left\{a_{2k}\right\} \subseteq \left[1+\exp\left(-\frac12\right), 1 + \exp\left(\frac14\right)\right]$$ For the second one, the exponent $n(-1)^k$ diverges: as $k \to +\infty$, when $k$ is even $a_{2k+1} \to +\infty$, while when $k$ is odd the exponent tends to $-\infty$, so $a_{2k+1} \to 1$. Hence $$\left\{a_{2k+1}\right\} \subseteq \left(1, +\infty\right)$$ Hence, we conclude that $$\liminf a_n = 1,\qquad\limsup a_n = +\infty$$ For the second exercise, observe that $\cosh x \in [1, +\infty)$ and the minimum value is attained when $x = 0$. Also note that $\cosh x$ is an even function. We will now study the inner function to determine its behavior.
Let $$b_n = n\sin\left(\frac{n^2+1}n\frac\pi2\right) = n\sin\left(\left(n+\frac1n\right)\frac\pi2\right)$$ For odd $n$ we have $\sin\left(\left(n+\frac1n\right)\frac\pi2\right) = \pm\cos\frac{\pi}{2n}$, so $|b_n| = n\cos\frac{\pi}{2n} \to +\infty$, and both signs occur infinitely often; hence we can extract subsequences $$b_{n_k} \to +\infty, \qquad b_{l_k} \to -\infty$$ Moreover, for $n = 1$ the sine vanishes: $b_1 = \sin(\pi) = 0$. Putting it together: $$\cosh\left(b_{n_k}\right) = \cosh\left(b_{l_k}\right) \to +\infty, \qquad \cosh(b_1) = 1$$ Since $\cosh x \geq 1$ for every $x$, denoting the whole sequence with $a_n$, we have: $$\liminf a_n = 1,\qquad\limsup a_n = +\infty$$ where $1$ is also a minimum (attained at $n = 1$).
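A finite numerical exploration (this only illustrates the $\sup$/$\inf$ claims, it cannot prove anything about limits):

```python
import math

# finitely many terms of a_n = cosh(n * sin(((n^2 + 1) / n) * pi / 2))
a = [math.cosh(n * math.sin((n * n + 1) / n * math.pi / 2)) for n in range(1, 501)]

assert min(a) == a[0] == 1.0   # the minimum 1 is attained at n = 1
assert max(a) > 1e100          # along odd n >= 3 the terms blow up
```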
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing coercivity of a function I am well attuned to the definition of a coercive function, which is that $\lim_{\|x\| \to \infty}f(x) = \infty$, i.e. the values of $f$ go to infinity as the norm goes to infinity. So Ex.1 $f(x_1,x_2) = x_1^4 + x_2^4 - 3x_1x_2$ I thought this was immediately obvious because $x_1^4$ and $x_2^4$ grow faster than the remaining term. Would this be considered a sufficient proof? I am unsure in particular how taking the limit would be shown. Additionally, I don't see why functions like Ex.2 $f:\mathbb{R}^n \to \mathbb{R}, f(x) = a^tx$ aren't coercive, or Ex.3 $f(x_1,x_2) = x_1^2 + x_2^2 - 2x_1x_2$ isn't coercive. Since they are both unbounded, doesn't taking the norm of $x \to \infty$ show that these functions also approach infinity?
Ex.1 No, your answer is not rigorous. It is true, but you need to prove it. My suggestion is to show that $$\lim_{\|(x_1,x_2)\| \to +\infty} \frac{x_1 x_2}{x_1^4+x_2^4}=0.$$ Ex.2 If $x \perp a$, then $f(x)=0$. And since on the subspace $\{a\}^\perp$ there are vectors of arbitrarily large norm... Ex. 3 If $x_1=x_2=t$, then $f(t,t)=0$ for any $t>0$. Letting $t \to +\infty$...
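Ex.3 can be illustrated numerically (a trivial check of mine, not part of the original answer): $f$ vanishes on the whole diagonal, which contains points of arbitrarily large norm.

```python
# Ex.3: f(x1, x2) = x1^2 + x2^2 - 2 x1 x2 = (x1 - x2)^2 vanishes on the diagonal
f = lambda x1, x2: x1**2 + x2**2 - 2 * x1 * x2

for t in (1.0, 10.0, 1e3, 1e6):
    assert f(t, t) == 0.0   # zero at points of arbitrarily large norm
# hence f cannot tend to infinity with the norm: not coercive
```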
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding the nth term in a recursive coupled equation. I'm probably missing something simple, but if I have the recursive sequence: $$ a_{i+1} = \delta a_i+\lambda_1 b_i $$ $$ b_{i+1} = \lambda_2 a_i + \delta b_i $$ how would I find a formula for $a_n$, $b_n$, or even $\frac{a_n}{b_n}$, given, for example, $a_0 = 1$, $b_0=0$? I've tried expanding it out and looking for patterns but to no avail - I'm sure there must be an analytic solution to this, I really would rather not do it computationally!
Let $X_i:=[a_i,b_i]^T$ then $$X_{i+1}=AX_i$$ where $$A:=\begin{bmatrix}\delta&\lambda_1\\\lambda_2&\delta\end{bmatrix}$$ Then it is clear that $$X_i=A^{i}X_0$$ To get a nice formula for $A^i$ first write it as $P^{-1}JP$, where $J$ is its Jordan form. Notice that if $\lambda_1\lambda_2\neq0$ $J$ is going to be diagonal (even better). In any case, the worst that can happen is that $J=D+N$ where $D$ is diagonal and $N^2=0$. Then, $$A^i=P^{-1}J^iP=P^{-1}(D^i+iD^{i-1}N)P$$ where the powers $D^i$ and $D^{i-1}$ are easy to write because it is just raising the diagonal elements to $i$.
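A small sanity check that iterating the recursion agrees with multiplying by powers of $A$ (the sample values $\delta=2$, $\lambda_1=3$, $\lambda_2=5$ are assumed, not from the question):

```python
# assumed sample values: delta = 2, lambda1 = 3, lambda2 = 5, a0 = 1, b0 = 0
delta, l1, l2 = 2, 3, 5

def step(a, b):
    # one application of the coupled recursion
    return delta * a + l1 * b, l2 * a + delta * b

def mat_mult(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[delta, l1], [l2, delta]]
P = [[1, 0], [0, 1]]   # will accumulate A^i
a, b = 1, 0
for _ in range(10):
    a, b = step(a, b)
    P = mat_mult(P, A)

# after 10 steps, (a, b) equals A^10 applied to (1, 0): the first column of A^10
assert (a, b) == (P[0][0], P[1][0])
```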
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove, using the mean value theorem, that $x+1 \lt e^x \lt xe^x+1$ for $x \gt 0$ Prove, using the mean value theorem, that $x+1 \lt e^x \lt xe^x+1$ for $x \gt 0$. What I tried so far: Let there be a function $f(x)=xe^x$. So $f'(x)=xe^x+e^x$. Let there be arbitrary interval $(0,b)$. So by the mean value theorem there is $c \in (0,b)$ so that: $f'(c)= \frac{f(b)-f(0)}{b-0}$, thus $ce^c+e^c=\frac{be^b}{b}=e^b$. So $e^c(c+1)=e^b$. So $c+1 \lt e^b$, but I don't really know how to continue. Any hints will be greatly appreciated. Thank you!
Hint: $$x+1 \lt e^x \lt xe^x+1 \iff 1 \lt \frac{e^x-e^0}{x - 0} \lt e^x$$ Now can you apply the Mean-Value Theorem?
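A quick numerical sanity check of the double inequality (the test points are arbitrary positive values of mine):

```python
import math

# spot-check x + 1 < e^x < x e^x + 1 at a few positive points
for x in (0.01, 0.5, 1.0, 5.0):
    assert x + 1 < math.exp(x) < x * math.exp(x) + 1
```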
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding an isomorphism of groups Let $R$ be a commutative ring, $R_U$ be the group of units of R. Show that (i) $\mathbb{C}_U \cong \left(\left(\mathbb{R},+\right)/\mathbb{Z}\right)\times\left(\mathbb{R},+\right)$ (ii) $(\mathbb{Z}[x] / (x^2-x))_U \cong \mathbb{Z}_2 \times \mathbb{Z}_2$ where $\cong$ stands for isomorphic as groups. (i) I thought about using the polar form of complex numbers. At first I had a problem with the fact that (for using with the angle) the real numbers can be negative. So I tried the Euler identity to the rescue: $e^{i\pi} = -1$. So if I map $\gamma : (r,\phi) \mapsto r e^{i\pi(1+\phi)}$, this seemed to work. But what about the zero? How can I get this to be injective? (ii) First as $\# \mathbb{Z}_2 \times \mathbb{Z}_2 = 4$, the target group will have 4 elements. But what are they? I tried $\mathbb{Z}[x] / (x^2-x) = \{ax + b\ \big\vert\ a,b \in \mathbb{Z}\}$ (right?). $(ax+b)(cx+d) = 1$ didn't lead me to a useful solution. Can you please help me to go on?
$\mathbb C_U=\mathbb C^{\times}\simeq(\mathbb R/\mathbb Z,+)\times(\mathbb R_{> 0},\cdot)$ by using the isomorphism you suggested. Moreover, $(\mathbb R_{>0},\cdot)\simeq(\mathbb R,+)$ by using logarithm. $\mathbb{Z}[x] / (x^2-x)\simeq\mathbb Z[x]/(x-1)\times\mathbb Z[x]/(x)$ by CRT. Moreover, $\mathbb Z[x]/(x-1)\simeq\mathbb Z[x]/(x)\simeq\mathbb Z$, so $(\mathbb{Z}[x] / (x^2-x))_U\simeq\mathbb Z_U\times\mathbb Z_U$.
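For (ii), the unit group can also be verified by brute force (the pair representation and the search bound below are mine): writing $ax+b$ as the pair $(b, a)$ and multiplying with the reduction $x^2 = x$, a search over small coefficients finds exactly the four units $1$, $-1$, $2x-1$, $1-2x$, each its own inverse.

```python
from itertools import product

# elements of Z[x]/(x^2 - x) as pairs (b, a) meaning ax + b
def mul(p, q):
    b, a = p
    d, c = q
    # (ax + b)(cx + d) = ac x^2 + (ad + bc) x + bd = (ac + ad + bc) x + bd
    return (b * d, a * c + a * d + b * c)

R = 3   # search bound (assumed; every unit turns out to be self-inverse)
elems = list(product(range(-R, R + 1), repeat=2))
units = [p for p in elems if any(mul(p, q) == (1, 0) for q in elems)]

# the four units: 1, -1, 2x - 1, 1 - 2x, forming a Klein four-group
assert sorted(units) == sorted([(1, 0), (-1, 0), (-1, 2), (1, -2)])
```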
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a formula for the coefficients of the Newton-Cotes method in numerical integration? We know the coefficients of the Newton-Cotes method in numerical integration are:
2 points: $0.5$, $0.5$
3 points: $1/6$, $2/3$, $1/6$
4 points: $1/8$, $3/8$, $3/8$, $1/8$
and so on. I ask whether there is a formula for all the coefficients of this method.
Maybe read: http://people.clas.ufl.edu/kees/files/NewtonCotes.pdf It presents a method easily convertible to MATLAB that can be used to generate the coefficients:

function q = NewtonCotesCoefficients(degree)
    q = 0:degree;
    m = fliplr(vander(q));
    for ii = 0:degree
        b(ii + 1) = degree^(ii + 1)/(ii + 1);
    end
    q = m'\b';
end
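For readers without MATLAB, here is an equivalent sketch in Python (my translation of the same moment-matching idea; it uses exact rational arithmetic and normalizes the weights by the interval length so they match the $1/6$, $2/3$, $1/6$ pattern in the question):

```python
from fractions import Fraction

def newton_cotes(degree):
    """Normalized closed Newton-Cotes coefficients on nodes 0..degree.

    Solves sum_j w_j * j**i = degree**(i+1)/(i+1) for i = 0..degree
    (exactness on monomials up to x**degree), then divides by the
    interval length so the weights sum to 1.
    """
    n = degree
    # moment-matching linear system, exact rationals throughout
    A = [[Fraction(j) ** i for j in range(n + 1)] for i in range(n + 1)]
    b = [Fraction(n) ** (i + 1) / (i + 1) for i in range(n + 1)]
    # Gaussian elimination (exact, so pivoting only avoids zero pivots)
    for col in range(n + 1):
        piv = next(r for r in range(col, n + 1) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # back substitution
    w = [Fraction(0)] * (n + 1)
    for r in range(n, -1, -1):
        s = sum(A[r][c] * w[c] for c in range(r + 1, n + 1))
        w[r] = (b[r] - s) / A[r][r]
    return [wi / n for wi in w]
```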
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Field homomorphisms of $\mathbb{R}$ How to prove that $\mathrm{Hom}(\mathbb{R})=\mathrm{Aut}(\mathbb{R})$ ? (We treat it as field homomorphisms. ) I know that $\mathrm{Aut}(\mathbb{R})=\{\mathrm{id}\}$ and $\mathrm{Mon}(\mathbb{R})=\mathrm{Aut}(\mathbb{R})$.
Hint: Let $\phi: \mathbb R \to \mathbb R$ is a ring homomorphism. Prove the followings: (1). $\phi(r) = r, \forall r \in \mathbb Q.$ (2). $\phi(x) > 0, \forall x >0.$ (3). $|x-y|<\frac{1}{m} \Rightarrow |\phi(x) - \phi(y)|<\frac{1}{m}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1113934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Simplify $\arctan (\frac{1}{2}\tan (2A)) + \arctan (\cot (A)) + \arctan (\cot ^{3}(A)) $ How to simplify $$\arctan \left(\frac{1}{2}\tan (2A)\right) + \arctan (\cot (A)) + \arctan (\cot ^{3}(A)) $$ for $0< A< \pi /4$? This is one of the problems in a book I'm using. It is actually an objective question, with 4 options given, so I just put $A=\pi /4$ (even though technically it's disallowed, as $0< A< \pi /4$) and got the answer $\pi$, which was one of the options, so that must be the answer (and it is weirdly written in the options as $4 \arctan (1)$). Still, I'm not able to actually solve this problem. I know the formula for the sum of three arctans, but it gets just too messy, looks hard to simplify, and it is not obvious that the answer will be constant for all $0< A< \pi /4$. And I don't know of any other way to approach such problems.
As $0<A<\dfrac\pi4$, we have $\cot A>1$ and hence $\cot^3A>1$. Like showing $\arctan(\frac{2}{3}) = \frac{1}{2} \arctan(\frac{12}{5})$, $\arctan(\cot A)+\arctan(\cot^3A)=\pi+\arctan\left(\dfrac{\cot A+\cot^3A}{1-\cot A\cdot\cot^3A}\right)$ Now $\dfrac{\cot A+\cot^3A}{1-\cot A\cdot\cot^3A}=\dfrac{\tan^3A+\tan A}{\tan^4A-1}=\dfrac{\tan A}{\tan^2A-1}=-\dfrac{\tan2A}2$ and $\arctan(-x)=-\arctan(x)$, so this term cancels the first one, $\arctan\left(\frac12\tan2A\right)$, and the whole expression equals $\pi$.
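A quick numerical check that the expression is constantly $\pi$ on $(0,\pi/4)$, consistent with the manipulation in the answer:

```python
import math

def expr(a):
    """arctan(tan(2a)/2) + arctan(cot a) + arctan(cot^3 a), 0 < a < pi/4."""
    cot = 1.0 / math.tan(a)
    return (math.atan(0.5 * math.tan(2 * a))
            + math.atan(cot)
            + math.atan(cot ** 3))

# sample points inside (0, pi/4) ~ (0, 0.785)
assert all(abs(expr(a) - math.pi) < 1e-9 for a in (0.1, 0.3, 0.5, 0.7))
```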
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Heat Equation, possible solutions NOTE: This is a homework problem. Please do not solve. I was given a problem that asked me to find a function of the form $u_n(x,t)=\chi_n(x) \cdot T_n(t) $ that solves the heat equation with the following conditions: $u_t = u_{xx}\\ u(0,t)=0\\ u_x(1,t)=0$ That is all of the information, but I am unsure how to solve such a problem without an initial heat distribution. I considered using the final condition, but all that tells me is that $u(1,t) = f(t)$ for some function $f(t)$, not what that $f(t)$ could be. Is a solution even possible based on this information?
* *Plug $u(x,t)=T(t)\,X(x)$ into the heat equation. *Obtain an equation where on the left hand side you have $T$ and $T'$ and on the right hand side $X$ and $X''$. *If a function that depends only on $t$ is equal to a function that depends only on $x$, what type of functions can they be? *Obtain a second order linear ordinary differential equation for $X$ and solve using the boundary conditions. *Obtain a first order linear ordinary differential equation for $T$ and solve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving with Big O Notations Is there a way I can prove that $O(3^{2n})$ does NOT equal $10^n$? How would that be done? Also, is it okay to simplify $O(3^{2n})$ to $O(9^n)$ to do so?
It seems that you want to show that $10^n\notin O(3^{2n})$. (And yes, it is fine to rewrite $O(3^{2n})$ as $O(9^n)$, since $3^{2n}=(3^2)^n=9^n$.) To prove that $10^n$ is not $O(3^{2n})$ it is enough to show that for any $M$ there is $n$ such that $10^n > M\cdot3^{2n}$, in particular \begin{align} 10^n &> M\cdot 9^n\\ \frac{10^n}{9^n} &> M \\ n &> \log_{\frac{10}{9}} M \end{align} so $n = \left\lfloor\frac{\log M}{\log\frac{10}{9}}\right\rfloor + 1$ suffices. I hope this helps $\ddot\smile$
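The bound on $n$ from the inequality chain above can be checked mechanically (a small sketch; the name `witness` is mine):

```python
import math

def witness(M):
    """An n with 10**n > M * 9**n, following the inequality chain above."""
    return math.floor(math.log(M) / math.log(10 / 9)) + 1

# for every proposed constant M, the witness n defeats it
assert all(10 ** witness(M) > M * 9 ** witness(M)
           for M in (1, 10, 10 ** 6, 10 ** 12))
```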
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Double Factorial I am having trouble proving/understanding this question. Let $n=2k$ be even, and $X$ a set of $n$ elements. Define a factor to be a partition of $X$ into $k$ sets of size $2$. Show that the number of factors is equal to $1 \cdot 3 \cdot 5 \cdots (2k-1)$. Let's suppose our set $X=\{x_1,x_2,\dots,x_n\}$ contains $4$ elements, i.e. $X=\{x_1,x_2,x_3,x_4\}$; then $k=2$, and one factor is $\{x_1,x_2\},\ \{x_3,x_4\}$, but the number of factors is supposed to equal $3$. What am I doing wrong trying to understand this?
You can also partition the set as $\{x_1,x_3\}$ and its complement and as $\{x_1,x_4\}$ and its complement. Giving three factors in total. In general there are always $2k-1$ two element sets containing some fixed element. This observation also suggest a way towards a proof.
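One can confirm the count $1\cdot3\cdot5\cdots(2k-1)$ by brute force for small $n$ (an illustrative sketch, not the requested proof):

```python
from itertools import permutations
from math import prod

def count_factors(n):
    """Count partitions of {0, ..., n-1} into n/2 unordered pairs (n even)."""
    seen = set()
    for perm in permutations(range(n)):
        pairing = frozenset(frozenset(perm[i:i + 2]) for i in range(0, n, 2))
        seen.add(pairing)
    return len(seen)

for k in (1, 2, 3):
    assert count_factors(2 * k) == prod(range(1, 2 * k, 2))  # 1*3*...*(2k-1)
```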
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
I can't remember a fallacious proof involving integrals and trigonometric identities. My calc professor once taught us a fallacious proof. I'm hoping someone here can help me remember it. Here's what I know about it: * *The end result was some variation of 0=1 or 1=2. *It involved (indefinite?) integrals. *It was simple enough for Calc II students to grasp. *The (primary?) fallacy was that the arbitrary constants (+ C) were omitted after integration. I'm not certain, but I have a strong hunch it involved a basic trigonometric identity.
The simplest one I have is not actually 0=1 but $\pi=0$. This is one of my favourites,the most shortest and has confused a lot of people. $\int \frac{dx}{\sqrt{1-x^2}} = sin^{-1}x$ But we also know that $\int - \frac{dx}{\sqrt{1-x^2}} = cos^{-1}x$ So therefore $sin^{-1}x=-cos^{-1}x$ But also, $sin^{-1}x+cos^{-1}x=\pi/2$ $\implies \pi/2=0$ $\implies \pi=0$. I'm so evil. :)
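Since the question specifically asks where the missing $+C$ bites, the resolution (spelled out, since the answer leaves it implicit) is that both antiderivative formulas are only determined up to a constant, and on $(-1,1)$ the two particular antiderivatives differ by exactly that constant:

```latex
% Resolution of the "pi = 0" paradox: the dropped constants of integration.
% Both antiderivatives are valid, but they differ by a constant on (-1, 1):
\int \frac{dx}{\sqrt{1-x^2}} = \sin^{-1}x + C_1 = -\cos^{-1}x + C_2,
\qquad C_2 = C_1 + \frac{\pi}{2}.
% Equating sin^{-1}x with -cos^{-1}x silently sets C_1 = C_2 -- the fallacy.
```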
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 4, "answer_id": 0 }
A sequence of Continuous Functions Converges Uniformly over $\mathbb{R}$ if it Converges Uniformly over $\mathbb{Q}$ I'm trying to show that if ${f_n}$ is a sequence of real functions that is continuous over all of $\mathbb{R}$ and that converges uniformly to $f$ over $\mathbb{Q}$, then it converges uniformly to $f$ over $\mathbb{R}$. The hint I'm given is to use the Cauchy criterion for uniform convergence. I know that $f$ is continuous over $\mathbb{Q}$ and that $\mathbb{Q}$ is dense in $\mathbb{R}$, but using the definitions of continuity and denseness I'm still having trouble showing that $f_n$ is uniformly Cauchy. Any help at all would be much appreciated!
Let $\epsilon>0$. Since $(f_n)$ converges uniformly on $\mathbb Q$, there is $p$ such that $|f_n(r)-f_m(r)|<\frac{\epsilon}{3}$ for all $n,m\geq p$ and all $r\in \mathbb Q$. Now fix $n,m\geq p$ and $x\in \mathbb R$. Since $f_n$ and $f_m$ are continuous at $x$ and $\mathbb Q$ is dense in $\mathbb R$, we may choose $r\in \mathbb Q$ (depending on $x$, $n$ and $m$) with $|f_n(x)-f_n(r)|<\frac{\epsilon}{3}$ and $|f_m(x)-f_m(r)|<\frac{\epsilon}{3}$. Then $$|f_n(x)-f_m(x)|\leq |f_n(x)-f_n(r)|+|f_n(r)-f_m(r)|+|f_m(r)-f_m(x)|<\epsilon,$$ so $(f_n)$ is uniformly Cauchy on $\mathbb R$ and therefore converges uniformly on $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The maximum of $\binom{n}{x+1}-\binom{n}{x}$ The following question comes from an American Olympiad problem. The reason why I am posting it here is that, although it seems really easy, it allows for some different and really interesting solutions. Do you want to give it a try? Let $n$ be one million. Find the maximum value attained by $\binom{n}{x+1}-\binom{n}{x}$, where $0<x<n$ is an integer. Edit: I saw the answers below, and I can say that I posted this one because there is at least one really nice solution, which is not so "mechanical" :) The value of $n$ has no particular meaning; it stands just for a "sufficiently large integer".
$$t_x=\binom n{x+1}-\binom nx=\frac{(n-2x-1)}{(x+1)}\binom nx$$ $$\frac{t_x}{t_{x-1}}=\frac{(n-x+1)(n-2x-1)}{(x+1)(n-2x+1)}=z$$ If $z\ge1$: $$(n-x+1)(n-2x-1)\ge(x+1)(n-2x+1)\iff (n-2x)^2\ge n+2\implies x\le z_0:=\frac{n-\sqrt{n+2}}{2}$$ Then the maximum occurs at: $$x=\lfloor z_0\rfloor$$
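Setting the consecutive ratio $t_x/t_{x-1}$ equal to $1$ reduces, after expanding, to $(n-2x)^2 = n+2$, which gives the maximizer $x_0=\big\lfloor\frac{n-\sqrt{n+2}}{2}\big\rfloor$. Here is a brute-force verification of that threshold for small $n$ (treat the simplification as a claim to check; it is mine, not part of the original problem):

```python
from math import comb, floor, sqrt

def t(n, x):
    """The difference C(n, x+1) - C(n, x)."""
    return comb(n, x + 1) - comb(n, x)

# t attains its maximum value at x0 = floor((n - sqrt(n+2)) / 2)
# (when there is a tie, t still takes the maximum value there)
for n in range(3, 200):
    best = max(t(n, x) for x in range(n))
    x0 = floor((n - sqrt(n + 2)) / 2)
    assert t(n, x0) == best
```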
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Transition probability matrix of Markov chain Given that $g(x)=\begin{cases} 1/3 \quad\text{for } x=0\\ 1/3 \quad \text{for } x=1\\ 1/3 \quad \text{for } x=2\end{cases}$ Explain why independent draws $X_1,X_2,\dots$ from $g(x)$ give rise to a Markov chain. What is the state space and what is the transition probability matrix $P$? My thoughts: $P = \begin{pmatrix} 1/3 & 1/3 & 1/3\\ 1/3 & 1/3 & 1/3\\ 1/3 & 1/3 & 1/3\end{pmatrix}$ I don't see how successive draws depend on each other since they are independent and thus how this is a Markov chain. Can anyone please explain this to me?
Markov Chains are stochastic processes $\{X_n\}$ such that $X_n$ depends only on $X_{n-1}$, in the sense that $\mathbb{P}(X_n = a_n \mid X_{n-1} = a_{n-1}, \ldots, X_1 = a_1) = \mathbb{P}(X_n = a_n \mid X_{n-1} = a_{n-1})$. This still holds here, as both sides equal $1/3$, although the "dependence" is in this case is rather trivial.
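Concretely, the "trivial dependence" shows up in the transition matrix: every row equals $g$, so conditioning on the current state changes nothing (a small sketch):

```python
from fractions import Fraction

g = [Fraction(1, 3)] * 3
P = [g[:] for _ in range(3)]  # row i = distribution of X_{n+1} given X_n = i

# the two-step matrix P @ P is again P: the chain forgets its past at once
P2 = [[sum(P[i][k] * P[k][j] for k in range(3)) for j in range(3)]
      for i in range(3)]
assert P2 == P
# each row is a probability distribution
assert all(sum(row) == 1 for row in P)
```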
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Functions - finding the domain Question: Consider the function: $$f(x) = \log(2x + 1) - \log(x - 3)$$ What will be the domain of this function? I used two approaches to solve this question. Both approaches got me different answers. Consider that we do not merge the two $\log$s together. As we know that the value inside a $\log$ should be greater than zero: $$2x + 1 \gt 0$$ $$x - 3 \gt 0$$ From these two inequalities, we get that $x \gt 3$. Now consider that we merge the two $\log$s together: $$\log(2x + 1) - \log(x - 3) = \log\left(\frac{2x + 1}{x - 3}\right)$$ We now know that the value inside the log has to be greater than zero. This can happen in two cases: either both the numerator and denominator are positive, or both are negative. Thus we get: $$x \in \left(-\infty, -\frac{1}{2}\right) \cup (3, \infty)$$ This is perfectly valid for the simplified expression. However, if we don't simplify, the interval $\left(-\infty, -\frac{1}{2}\right)$ becomes invalid for $\log(x-3)$, as the value inside is negative. Which answer is correct?
Consider whether the domain of this contains "-1": $$y=\sqrt x-\sqrt x$$ Or if the following contains "2": $$\frac{(x+1)(x-2)}{x-2}$$
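The same phenomenon drives the $\log$ question: at a point like $x=-1$ the combined form is defined while the original difference is not (quick check; Python's `math.log` raises `ValueError` on non-positive input):

```python
import math

x = -1.0
combined = math.log((2 * x + 1) / (x - 3))  # argument is (-1)/(-4) = 1/4 > 0

def defined_separately(x):
    """True iff log(2x+1) - log(x-3) makes sense term by term."""
    try:
        math.log(2 * x + 1) - math.log(x - 3)
        return True
    except ValueError:
        return False

assert not defined_separately(-1.0)  # outside the domain of the original
assert defined_separately(4.0)       # x > 3: both forms are defined
```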
{ "language": "en", "url": "https://math.stackexchange.com/questions/1114992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of logarithm function, $\lim\limits_{n\to\infty}\frac{n}{\log \left((n+1)!\right)}$ Determine $$\lim_{n \to \infty} \frac{n}{\log \left((n+1)!\right)}.$$ Now, I know that $\log x < \sqrt{x} < x$ and trying to apply comparison test, but it does not work. Please help.
We have that $$ \lim_{n\to\infty}\frac n{\log(n+1)!}=\lim_{n\to\infty}\frac n{\log 2+\log3+\ldots+\log(n+1)}. $$ Since the logarithm is a monotone function, we can estimate the sum with integrals from below and above in the following way $$ \int_1^{n+1}\log x\mathrm dx\le\sum_{i=2}^{n+1}\log i\le\int_2^{n+2}\log x\mathrm dx. $$ Also, we have that $$ \int_1^{n+1}\log x\mathrm dx=(n+1)(\log(n+1)-1)+1 $$ and $$ \int_2^{n+2}\log x\mathrm dx=(n+2)(\log(n+2)-1)-2(\log 2-1). $$ Hence, $$ \sum_{i=2}^{n+1}\log i\sim n\log n $$ as $n\to\infty$, where $\sim$ means that the ratio of the sequences tend to $1$ as $n\to\infty$. So the limit in question is $0$.
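Numerically the decay to $0$ is visible, though slow, since the sequence behaves like $1/\log n$; `math.lgamma(n + 2)` computes $\log\big((n+1)!\big)$ without overflowing:

```python
import math

def a(n):
    """n / log((n+1)!), using lgamma(n + 2) = log((n + 1)!)."""
    return n / math.lgamma(n + 2)

vals = [a(10 ** k) for k in range(1, 7)]
assert all(u > v for u, v in zip(vals, vals[1:]))  # strictly decreasing
assert vals[-1] < 0.1                              # already small at n = 10**6
```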
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
factor the following expression $25x^2 +5xy -6y^2$ How to factor $$25x^2 +5xy -6y^2$$ I tried with $5x(5x+y)-6y^2$. I'm stuck here. I can't continue.
The trick here is to manipulate the expression $25x^2+5xy-6y^2$. Try the following: \begin{align} 25x^2+5xy-6y^2 &= 25x^2+15xy-10xy-6y^2\tag{manipulate}\\[0.5em] &= 5x(5x+3y)-2y(5x+3y)\tag{factoring}\\[0.5em] &= (5x-2y)(5x+3y)\tag{group} \end{align} Is this clear now?
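A quick sanity check that the factorization reproduces the original quadratic for many integer pairs:

```python
def original(x, y):
    return 25 * x ** 2 + 5 * x * y - 6 * y ** 2

def factored(x, y):
    return (5 * x - 2 * y) * (5 * x + 3 * y)

assert all(original(x, y) == factored(x, y)
           for x in range(-10, 11) for y in range(-10, 11))
```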
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 2 }
Can a non-periodic function have a Fourier series? Consider two periodic functions. Assume their sum is not periodic. The periodic functions can be represented by a Fourier series. If you add up the Fourier series, you get a series that represents their sum. But their sum is not periodic, yet you have described it using a Fourier series. I thought that non-periodic functions can't be represented by a Fourier series. Why isn't this a contradiction?
A Fourier series specifies the amplitudes of the different harmonics, which are integer multiples of a base frequency. It is easy to see that this base frequency simply doesn't exist in your case. A Fourier transform of such a function does exist, though, and it is trivially $$F(\omega)=\delta(\omega-\omega_1)+\delta(\omega-\omega_2)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 6, "answer_id": 0 }
Given $S \subset \Bbb{R}$, show $\textbf{int}(S)+\textbf{ext}(S)+\partial S =\Bbb{R}$ The way I proved it is that we know $\Bbb{R}$ is open, so $\text{int}\,\Bbb{R}=\Bbb{R}$. Any point in $\text{int}\,S$ is inside $\text{int}\,\Bbb{R}$, and any point in $\text{ext}\,S$ is inside $\text{int}\,\Bbb{R}$; any point that is in neither $\text{int}\,S$ nor $\text{ext}\,S$ is still inside $\text{int}\,\Bbb{R}$. So $\text{int}\,\Bbb{R}$ is the collection of all of $\text{int}\,S$, $\text{ext}\,S$, and the boundary of $S$, which means $\Bbb{R}$ is the union of $\text{int}\,S$, $\text{ext}\,S$ and the boundary of $S$. It seems like my proof is not complete, and I can't be sure whether I can call it a proof. Can anyone help me?
Since $\text{ext}(S), \space \text{int}(S), \space \partial S \subseteq \Bbb{R}$ it follows that the union $$\text{ext}(S) \cup \text{int}(S) \cup \partial S \subseteq \Bbb{R}$$ Now you need to show the reverse inclusion. Recall: $$\text{ext}(S) = \Bbb{R} \setminus \overline{S} \\ \text{int}(S) =\Bbb{R} \setminus \overline{\Bbb{R} \setminus S} \\ \text{and} \quad \partial S = \overline{S}\setminus \text{int}(S)$$ Now let $x \in \Bbb{R}$. It should be clear that $x \in \overline{S}$ or $x \in \Bbb{R} \setminus \overline{S}$, as $\Bbb{R} = \overline{S} \cup \left(\Bbb{R} \setminus \overline{S}\right)$. If $x \in \Bbb{R} \setminus \overline{S}$ then $x \in \text{ext}(S)$ by definition. Else, $x \in \overline{S}$. Can you argue from here that either $x \in \text{int}(S)$ or $x \in \overline{S} \setminus \text{int}(S) = \partial S$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find the least number b for divisibility What is the smallest positive integer $b$ so that 2014 divides $5991b + 289$? I just need hints--I am thinking modular arithmetic? This question was supposed to be solvable in 10 minutes...
Using the Extended Euclidean Algorithm as implemented in the Euclid-Wallis Algorithm: $$ \begin{array}{r} &&2&1&38&2&25\\\hline 1&0&1&-1&39&-79&2014\\ 0&1&-2&3&-116&235&-5991\\ 5991&2014&1963&51&25&1&0\\ \end{array} $$ Therefore, $2014\cdot235-5991\cdot79=1\implies5991\cdot79+1\equiv0\pmod{2014}$. Multiply the last equivalence by $289$ to get the equivalence $$ 5991\cdot b+289\equiv0\pmod{2014} $$ for $b\equiv289\cdot79\pmod{2014}$.
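Python 3.8+ exposes the same extended-Euclid computation through `pow(a, -1, m)`, so the whole search collapses to a few lines and yields the smallest positive solution, $b = 677$:

```python
inv = pow(5991, -1, 2014)      # modular inverse via extended Euclid
b = (-289 * inv) % 2014        # solve 5991*b + 289 = 0 (mod 2014)

assert (5991 * b + 289) % 2014 == 0
assert b == (289 * 79) % 2014  # matches b = 289*79 (mod 2014) derived above
assert b == 677
```

Since every solution is congruent to $677 \pmod{2014}$, $677$ is indeed the least positive one.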
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding the volume using cylindrical shells about the x-axis So I have spent about an hour on this problem and figured it was time to ask for some advice. The problem is to find the volume using cylindrical shells by rotating the region bounded by $$8y = x^3,\qquad y = 8,\qquad x = 0$$ about the x-axis. I changed the one on the far left to $x = 2y^{1/3}$. I said the region of the integral went from $0$ to $8$ and that the integrand is $$2\pi y \left(2 y^{1/3}\right)\,dy$$ I figured the height would be $x = 2y^{1/3}$ and that the radius is just $y$. Any help would be greatly appreciated since I can't find any help for this online. I searched around and found some similar problems but it still doesn't make sense to me. Thanks!
I agree with you. The volume is \begin{align*} \int_0^8 2\pi y x\,dy &= \int_0^8 2\pi y \left(2\sqrt[3]{y}\right)\,dy \\ &= 4\pi \int_0^8y^{4/3} \,dy \\ &= 4\pi \left[\frac{3}{7}y^{7/3}\right]^8_0 = \frac{12\pi}{7} \cdot 2^7 = \frac{1536\pi}{7} \end{align*}
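The value $\frac{1536\pi}{7}\approx 689.3$ can be confirmed with a quick numerical integration (composite Simpson's rule, a throwaway sketch):

```python
import math

def shell(y):
    # 2*pi * radius * height = 2*pi * y * (2 * y**(1/3))
    return 2 * math.pi * y * 2 * y ** (1.0 / 3.0)

def simpson(f, a, b, n=10000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

assert abs(simpson(shell, 0, 8) - 1536 * math.pi / 7) < 1e-3
```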
{ "language": "en", "url": "https://math.stackexchange.com/questions/1115572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }