Order estimates QUESTION: Suppose $y(x) = 3 + O (2x)$ and $g(x) = \cos(x) + O (x^3)$ for $x \ll 1$. Then, for $x \ll 1:$ (a) $y(x)g(x) = 3 + O (x^2)$ (b) $y(x)g(x) = 3 + O (x^4)$ (c) $y(x)g(x) = 3 + O (x^6)$ (d) None of these MY WORKINGS: $y(x) = 3+O(2x) = 3 + O(x) \implies y(x)g(x) = 3(\cos(x)) + 3(O(x^2)) + O(x)\cos(x) + O(x^4)$ Which simplifies to: $3 + O(x^2) + O(x^3) + O(x^4) + O(x) + O(x^3)$, given that $\cos(x) = 1 - \frac{x^2}{2!} \cdots = 1 + o(x^2)$ Now, the answer is (d), none of the above, but in the solutions they simplify $3 + O(x^2) + O(x^3) + O(x^4) + O(x) + O(x^3)$ to $3 + O(x)$ which I don't understand.
Well, the fact is that $\forall \beta<\alpha,\ O(x^\alpha)\subset o(x^\beta)\subset O(x^\beta)$. Proof: Indeed, by definition $o(x^\beta)\subset O(x^\beta)$. Moreover, let $\alpha > \beta$ and $g(x)\in O(x^\alpha)$. In a neighborhood of $x_0=0$ it holds that $|g(x)|\le C|x^\alpha|=|x^\beta|\cdot C\left|x^{\alpha-\beta}\right|$. But since $\alpha-\beta>0$, $C|x^{\alpha-\beta}|$ becomes arbitrarily small as $x\to 0$, which means that $g\in o(x^\beta)$. Given this, it is clear that $O(x)+O(x^2)+O(x^3)+O(x^3)+O(x^4)$ can be replaced by the coarsest approximation, which is $O(x)$ (the other terms are redundant).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1308962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding a variable substitution for a double integral I have the following double integral $$ \iint_D xdxdy $$ where D is given by the inequalities $$ x^2+xy+y^2 \le 4, x\ge 0 $$ In the solution given to the problem they apply this variable substitution when solving the integral: $$ u = \frac{\sqrt{3}}{2}x $$ $$ v = \frac{x}2 + y $$ My question is how they arrive at this substitution? Is there a method for finding substitutions like these? While I understand that this substitution works, there is no explanation for how they arrive at this substitution at all in my solution.
You can transform the region D by completing the square $$x^2+xy+y^2=\frac{3}{4}x^2+\frac{1}{4}x^2+xy+y^2=(\frac{1}{2}x+y)^2+(\frac{\sqrt{3}}{2}x)^2\leq 4$$ Let $u = \frac{\sqrt{3}}{2}x, v=\frac{x}{2}+y$. And notice that $x\geq 0$ gives $u\geq 0$ too.
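As a quick sanity check of this substitution (a sketch of my own, not part of the original answer), one can verify the completed square and the change-of-variables factor symbolically; sympy is assumed to be available:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.sqrt(3)/2 * x
v = x/2 + y

# completed square: u^2 + v^2 equals x^2 + x*y + y^2
print(sp.simplify(u**2 + v**2 - (x**2 + x*y + y**2)))   # 0

# Jacobian of (u, v) with respect to (x, y)
J = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
               [sp.diff(v, x), sp.diff(v, y)]])
print(J.det())   # sqrt(3)/2, so dx dy = (2/sqrt(3)) du dv
```

In the new variables the region is the half-disk $u^2+v^2\le 4$, $u\ge 0$, where polar coordinates finish the computation.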
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why more than 3 dimensions in linear algebra? This might seem a silly question, but I was wondering why mathematicians came up with more than 3 dimensions when studying vector spaces, matrices, etc. I cannot visualise more than 3 dimensions, so I am not seeing the goal of having more than 3 dimensions. Apart from the fact that the general rules of linear algebra can be applied mathematically to vector spaces of higher dimensions, what is the practical purpose of having more than 3 dimensions? Linear algebra is also the study of systems of linear equations that can have more than 3 unknowns, is this maybe related?
You might want to look at applied sciences. Weather forecasting seems to be a nice example which shows how a rather easy question leads to high dimensional vector spaces. Suppose you want to predict the temperature for tomorrow. Obviously you need to take today's temperature into account so you start with a function $$f:\mathbb R\rightarrow\mathbb R,~x\mapsto f(x),$$ where $x$ is the current temperature and $f(x)$ your prediction. But there is more than just the current temperature you have to consider. The humidity is important as well, so you modify your function and get $$\tilde{f}:\mathbb R^2\rightarrow\mathbb R,~(x,y)\mapsto \tilde{f}(x,y),$$ where $y$ is the current humidity. Now, the barometric pressure is important as well, so you modify again and get $$\hat{f}:\mathbb R^3\rightarrow\mathbb R,~(x,y,z)\mapsto \hat{f}(x,y,z),$$ where $z$ is the current barometric pressure. Already this function can't be visualized, as it takes a 4-dimensional coordinate system to graph it. When you now take into account that there are many more factors to consider (e.g. wind speed, wind direction, amount of rainfall) you easily get a domain with 5, 6, 7 or more dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Absolute value of vector not equal to magnitude of vector I've come across the following inequality for a norm (where the norm defines the length of the vector): $$\lvert x \rvert \leq \lvert \lvert x \rvert \rvert \leq \sqrt{n} \lvert x \rvert$$ where $x$ is a vector. Firstly, what is this inequality called? Secondly, in what situation (please also provide an example) does the magnitude of the vector ($\lvert x \rvert$) not equal the norm of the vector ($\lvert \lvert x \rvert \rvert $)? Thank you!
This is an instance of norm equivalence, here between some norm $\lVert.\rVert$ and the Euclidean norm $\lVert.\rVert_2$. For any two norms $\lVert.\rVert_a$ and $\lVert.\rVert_b$ (on a finite-dimensional vector space) one can give such an estimate $$ m \lVert x \rVert_a \le \lVert x \rVert_b \le M \lVert x \rVert_a $$ with specific constants $m$ and $M$. E.g. if $\lVert.\rVert = \lVert.\rVert_1$ then $\lVert(1,2)\rVert_1 = \lvert 1 \rvert + \lvert 2 \rvert = 3$ and $\lVert(1,2)\rVert_2 = \sqrt{1^2 + 2^2} = \sqrt{5}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Number of: *permutations of ‘n’ things, taken ‘r’ at a time, when ‘m’ specified things always come together* I have read in many sources: Number of permutations of ‘n’ things, taken ‘r’ at a time, when ‘m’ specified things always come together $= m! \cdot (n-m+1)!$ However, no one gave the proof. I reached this far: * *First we choose r out of n: $^nC_r=\frac{n!}{(n-r)!r!}$ *Then choose m out of r: $^rC_m=\frac{r!}{(r-m)!m!}$ *Next arrange the m elements: $m! (r-m+1)!$ *Next I have to multiply all of the above and do the cancellation. So I reached: $\frac{n!(r-m+1)!}{(n-r)!(r-m)!}$ But I don't get how to proceed to obtain the given $m! \cdot (n-m+1)!$
This is a permutation question, so the objects can come in different orders. * *Treat the $m$ specified objects as a single entity/object. Remember, inside this entity, the $m$ objects can be arranged in $m!$ ways. *Now the total count of objects is $(n-m+1)$. E.g. if $n = 3$ and $m = 2$, then treating the $2$ specified objects as a single entity leaves us with $2$ $(3-2+1=2)$ objects. *Since it is a permutation, these $(n-m+1)$ objects can be arranged in $(n-m+1)!$ ways, i.e. $^{(n-m+1)}P_{(n-m+1)}$. *Now, from step 1 and step 3: the $m$ objects can be arranged in $m!$ ways, AND the $(n-m+1)$ objects can be arranged in $(n-m+1)!$ ways. Hence the total number of arrangements is (step 1) $\times$ (step 3), which is $m!\cdot(n-m+1)!$. Thanks!!
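Since the argument above is informal, here is a small brute-force check (my own sketch; the sizes n = 5, m = 3 are purely illustrative) that the number of arrangements of n distinct objects in which the m specified objects sit together is m!·(n−m+1)!:

```python
from itertools import permutations
from math import factorial

n, m = 5, 3                      # illustrative sizes
special = set(range(m))          # the m specified objects

count = 0
for p in permutations(range(n)):
    positions = sorted(i for i, obj in enumerate(p) if obj in special)
    if positions[-1] - positions[0] == m - 1:   # they occupy consecutive slots
        count += 1

print(count, factorial(m) * factorial(n - m + 1))   # both print 36
```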
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Proof that $f(x)=0 \forall x \in [a,b]$ Lemma: If $f \in C([a,b])$ and $\int_a^b f(x) h(x) dx=0 \ \forall h \in C^2([a,b])$ with $h(a)=h(b)=0$ then $f(x)=0 \ \forall x \in [a,b]$. Proof of lemma: Suppose that there is a $x_0 \in (a,b)$ such that $f(x_0) \neq 0$, for example without loss of generality we suppose that $f(x_0)>0$. Because of continuity there is an interval $[x_1, x_2] \subset (a,b)$ such that $x_0 \in (x_1, x_2)$ and $f(x)>0 \ \forall x \in (x_1, x_2)$. We define the function $g(x)=\left\{\begin{matrix} (x_2-x)^3 (x-x_1)^3 & , x \in (x_1, x_2)\\ \\ 0 & , x \in [a,b] \setminus{(x_1,x_2)} \end{matrix}\right.$. Then $g \in C^2([a,b])$ and $g(a)=g(b)=0$. From the hypothesis we have: $$\int_a^b f(x)g(x) dx=0$$ But $\int_a^b f(x)g(x) dx= \int_{x_1}^{x_2} f(x)g(x) dx>0$, contradiction. First of all, why do we say that there is an interval $[x_1, x_2] \subset (a,b)$ such that $x_0 \in (x_1, x_2)$ and $f(x)>0 \forall x \in (x_1, x_2)$? Why don't we pick the closed interval $[x_1, x_2]$ ? Also why does it hold that $\int_a^b f(x) g(x) dx= \int_{x_1}^{x_2} f(x) g(x) dx$? Furthermore, the prof told us that we couldn't take the function $g(x)=\left\{\begin{matrix} (x_2-x)^2 (x-x_1)^2 & , x \in (x_1, x_2)\\ \\ 0 & , x \in [a,b] \setminus{(x_1,x_2)} \end{matrix}\right.$ but the powers both of $(x_2-x), (x-x_1)$ have to be greater or equal to $3$. Why is it like that?
Short answer: * *Just a matter of taste. *Because $g(x)$, and therefore $f(x)g(x)$, is zero outside $(x_1,x_2)$. *Because the function must be twice continuously differentiable: with exponents $2$ the second derivative of the piecewise definition would jump at $x_1$ and $x_2$, while with exponents $\ge 3$ it vanishes there, so the glued function is $C^2$.
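To make point 3 concrete (my own sketch, not from the answer), one can compare the second derivatives of the two candidate bump functions at the endpoints with sympy; gluing with the zero function is $C^2$ only if they vanish there:

```python
import sympy as sp

x, x1, x2 = sp.symbols('x x1 x2', real=True)

for power in (2, 3):
    g = (x2 - x)**power * (x - x1)**power
    g2 = sp.diff(g, x, 2)
    # outside (x1, x2) the function is identically 0, so C^2 at the endpoints
    # requires g'' to vanish there as well
    print(power, sp.simplify(g2.subs(x, x1)), sp.simplify(g2.subs(x, x2)))

# power 2: g'' equals 2*(x2 - x1)**2 at both endpoints (nonzero), not C^2
# power 3: g'' is 0 at both endpoints, so the glued function is C^2
```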
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding $\lim\limits_{n\to\infty }\frac{1+\frac12+\frac13+\cdots+\frac1n}{1+\frac13+\frac15+\cdots+\frac1{2n+1}}$ I need to compute: $\displaystyle\lim_{n\rightarrow \infty }\frac{1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}}{1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots+\frac{1}{2n+1}}$. My Attempt: $\displaystyle\lim_{n\rightarrow \infty }\frac{1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{n}}{1+\frac{1}{3}+\frac{1}{5}+\frac{1}{7}+\cdots+\frac{1}{2n+1}}=\lim_{n\rightarrow \infty }\frac{2s}{s}=2$. Is that ok? Thanks.
Hint The numerator is $H_n$ and the denominator is $H_{2n+1}-\frac12H_n$. Also, $$\frac{H_n}{H_{2n+1}-\frac12H_n}=\frac1{-\frac12+\frac{H_{2n+1}}{H_n}}$$ and $$H_n\sim\ln n$$
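A quick numerical check of the hint (my own sketch) shows the ratio creeping towards $2$, consistent with $H_n\sim\ln n$:

```python
from math import fsum

def ratio(n):
    num = fsum(1 / k for k in range(1, n + 1))            # H_n
    den = fsum(1 / (2 * k + 1) for k in range(0, n + 1))  # 1 + 1/3 + ... + 1/(2n+1)
    return num / den

for n in (10, 1000, 100000):
    print(n, ratio(n))
# tends (slowly) to 2, since both sums grow only logarithmically
```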
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
The perpendicular distance from the origin to a point in the plane The plane $3x-2y-z=-4$ passes through $A(1,2,3)$ and is parallel to $u=2i+3j$ and $v=i+2j-k$. The perpendicular distance from the origin to the plane comes from the normal form $r\cdot n = d$, but how does one determine the foot of the perpendicular (call it $N$) on the plane, and what are the coordinates of $N$?
There is a general procedure & formula derived in Reflection formula by HCR to calculate the point of reflection $\color{blue}{P'(x', y', z')}$ of the any point $\color{blue}{P(x_{o}, y_{o}, z_{o})}$ about the plane: $\color{blue}{ax+by+cz+d=0}$ & hence the foot of perpendicular say point $N$ is determined as follows $$\color{blue}{N\equiv\left(\frac{x_{o}+x'}{2}, \frac{y_{o}+y'}{2}, \frac{z_{o}+z'}{2}\right)}$$ Where $$\color{red}{x'=x_{o}-\frac{2a(ax_{o}+by_{o}+cz_{o}+d)}{a^2+b^2+c^2}}$$ $$\color{red}{y'=y_{o}-\frac{2b(ax_{o}+by_{o}+cz_{o}+d)}{a^2+b^2+c^2}}$$ $$\color{red}{z'=z_{o}-\frac{2c(ax_{o}+by_{o}+cz_{o}+d)}{a^2+b^2+c^2}}$$ As per your question, the foot of perpendicular $N$ drawn from the origin $\color{blue}{(0, 0, 0)\equiv(x_{o}, y_{o}, z_{o})}$ to the given plane: $\color{blue}{3x-2y-z+4=0}$ is determined by setting the corresponding values in the above expression as follows $$\color{}{x'=0-\frac{2(3)(3(0)-2(0)-(0)+4)}{(3)^2+(-2)^2+(-1)^2}}=-\frac{24}{14}=-\frac{12}{7}$$ $$\color{}{y'=0-\frac{2(-2)(3(0)-2(0)-(0)+4)}{(3)^2+(-2)^2+(-1)^2}}=\frac{16}{14}=\frac{8}{7}$$ $$\color{black}{z'=0-\frac{2(-1)(3(0)-2(0)-(0)+4)}{(3)^2+(-2)^2+(-1)^2}}=\frac{8}{14}=\frac{4}{7}$$ Now, setting these values, we get foot of perpendicular $N$ $$N\equiv\left(\frac{0+\left(-\frac{12}{7}\right)}{2}, \frac{0+\frac{8}{7}}{2}, \frac{0+\frac{4}{7}}{2}\right)$$ $$\color{blue}{N\equiv\left(-\frac{6}{7}, \frac{4}{7}, \frac{2}{7}\right)}$$
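The result is easy to double-check numerically (a quick sketch of mine, using numpy): $N$ must satisfy the plane equation and $\vec{ON}$ must be parallel to the normal $(3,-2,-1)$; the last line recomputes $N$ via the standard projection formula $N=-\frac{d}{\lVert n\rVert^2}\,n$ for a plane written as $n\cdot x + d = 0$:

```python
import numpy as np

n = np.array([3.0, -2.0, -1.0])      # normal of 3x - 2y - z + 4 = 0
d = 4.0
N = np.array([-6/7, 4/7, 2/7])       # claimed foot of perpendicular

print(np.dot(n, N) + d)              # 0.0 -> N lies on the plane
print(np.cross(N, n))                # [0, 0, 0] -> ON is parallel to the normal

# the same point via the projection formula N = -(d/|n|^2) n
print(-(d / np.dot(n, n)) * n)       # [-0.857..., 0.571..., 0.285...] = (-6/7, 4/7, 2/7)
```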
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does $e^{-(x^2/2)} \approx \cos[\frac{x}{\sqrt{n}}]^n$ hold for large $n$? Why does this hold: $$ e^{-x^2/2} = \lim_{n \to \infty} \cos^n \left( \frac{x}{\sqrt{n}} \right) $$ I am not sure how to solve this using the limit theorem.
In a neighbourhood of the origin, $$\log\cos z = -\frac{z^2}{2}\left(1+O(z^2)\right)\tag{1} $$ hence for any $x$ and for any $n$ big enough: $$ \log\left(\cos^n\frac{x}{\sqrt{n}}\right)=-\frac{x^2}{2}\left(1+O\left(\frac{1}{n}\right)\right)\tag{2}$$ and the claim follows by exponentiating $(2)$: $$ \cos^n\frac{x}{\sqrt{n}} = e^{-x^2/2}\cdot\left(1+O\left(\frac{1}{n}\right)\right).\tag{3}$$
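For a quick numerical illustration of $(3)$ (my own sketch; the value $x=1.7$ is arbitrary):

```python
import numpy as np

x = 1.7                                   # any fixed point
for n in (10, 100, 10000):
    approx = np.cos(x / np.sqrt(n))**n
    print(n, approx, np.exp(-x**2 / 2))
# the gap shrinks like O(1/n), as in equation (3)
```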
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 0 }
Constructing a multiplication table for a finite field Let $f(x)=x^3+x+1\in\mathbb{Z}_2[x]$ and let $F=\mathbb{Z}_2(\alpha)$, where $\alpha$ is a root of $f(x)$. Show that $F$ is a field and construct a multiplication table for $F$. Can you please help me approach this problem? I've tried searching around, but I don't really know what I'm looking for! Thanks.
By the division algorithm, any polynomial $g\in\mathbb{Z}_2[x]$ can be uniquely written as $$g=a_0+a_1x+a_2x^2+qf$$ for some $q\in\mathbb{Z}_2[x]$ and some $a_0,a_1,a_2\in\mathbb{Z}_2$ (depending on $g$, of course). Thus, the quotient ring $\mathbb{Z}_2[x]/(f)$ consists precisely these eight cosets (corresponding to each possible choice of the $a_i$): $$\begin{array}{cc} 0 + (f) &\quad 1 + (f) \\ x + (f) &\quad 1 + x + (f) \\ x^2 + (f) &\quad 1 + x^2 + (f) \\ x + x^2 + (f) &\quad 1 + x + x^2 + (f) \\ \end{array}$$ Use the definition of addition and multiplication in a quotient ring to construct the multiplication table. For example, $$\begin{align*} \biggl[x + (f)\biggr]\cdot \biggl[x^2 + (f)\biggr]&=x^3 + (f)\\\\ &= \biggl[0 +(f)\biggr] + \biggl[x^3+(f)\biggr]\\\\ &= \biggl[f +(f)\biggr] + \biggl[x^3+(f)\biggr]\\\\ &= \biggl[1 + x + x^3+(f)\biggr] + \biggl[x^3+(f)\biggr]\\\\ &= 1 + x + 2x^3+(f)\\\\ &=1+x+0x^3+(f)\\\\ &=1+x+(f) \end{align*}$$ You can prove that $F\cong\mathbb{Z}_2[\alpha]\cong\mathbb{Z}_2[x]/(f)$ is a field because: $\mathbb{Z}_2[x]$ is a PID, hence a non-zero ideal of $\mathbb{Z}_2[x]$ is maximal iff it is prime iff it is generated by an irreducible element, so $\mathbb{Z}_2[x]/(f)$ is a field iff $f$ is irreducible, and you can either check directly that $f$ doesn't factor non-trivially, or observe that since $\deg(f)\leq 3$ it suffices to check that $f$ has no roots in $\mathbb{Z}_2$, which it doesn't because $f(0)=1$ and $f(1)=1$.
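Here is a small computational sketch of mine (not part of the answer) that builds the multiplication table of $\mathbb{Z}_2[x]/(f)$, encoding $a_0+a_1x+a_2x^2$ as the 3-bit integer whose bit $i$ is $a_i$, and then checks that every nonzero element is invertible, i.e. that the quotient really is a field:

```python
MOD = 0b1011   # x^3 + x + 1

def mul(a, b):
    # multiply polynomials over Z_2 (carry-less multiplication) ...
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        b >>= 1
    # ... then reduce modulo x^3 + x + 1 (the product has degree at most 4)
    for i in range(4, 2, -1):
        if p & (1 << i):
            p ^= MOD << (i - 3)
    return p

table = [[mul(a, b) for b in range(8)] for a in range(8)]
for row in table:
    print(row)

# field check: every nonzero element has a multiplicative inverse
assert all(any(mul(a, b) == 1 for b in range(1, 8)) for a in range(1, 8))
```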
{ "language": "en", "url": "https://math.stackexchange.com/questions/1309954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove the following limit using the N-delta definition $\lim\limits_{x \to 0^+} {\ln x} = -\infty$ My attempt: $\forall\ N<0,\ \exists \ \delta>0 \ st. \forall x,\ c < x < \delta + c \implies f(x) < N$ Let $N$ be given. Consider $\ \ln x < N$ $\; \; \; \; \; \; \; \; \; \; \; \; \; e^{\ln x} < e^N$ $\; \; \; \; \; \; \; \; \; \; \; \; \; x < e^N$ so I set $\delta = e^N$. The correct solution is $\delta = \frac{1}{e^m}$, and I'm not sure how they arrived at that. Can anyone show me the way they arrive at that with a little explanation?
Your work is correct. But usually, the statement is: $$\forall \ M > 0, \ \exists \ \delta > 0, \ 0 < x < \delta \implies f(x) < -M. $$ That is, $M$ is taken positive. In this case, the choice would be $e^{-M}$, which is $1/e^{M}$. Note that this is equivalent to what you have done: You arrived at $\delta = e^N$, where $N < 0$. Now write that as $\delta = 1/e^{-N}$, and here $-N > 0$. That's the same thing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lower bounding the eigenvalue of a matrix Suppose I have the following symmetric matrix $$ A'=\begin{bmatrix} A + b b^T & b \\ b^T & 1 \end{bmatrix} $$ where $A$ is positive definite $n \times n$ symmetric matrix and $b$ is a $n \times 1$ vector. Suppose $\|b\|_2 \leq B_1$, and all eigenvalues of $A$ are between $[B_2, B_3]$. What is a bound in terms of $B_1$, $B_2$, and $B_3$ for the smallest eigenvalue of $A'$? (It is straight forward to show that $A'$ is positive definite.)
Let $z=(x,x_{n+1})$, $x\in\mathbb R^n$. Then $$ z^TA'z = x^TAx+ (b^Tx)^2 + 2 x_{n+1} (b^Tx) + x_{n+1}^2 \ge x^TAx+ (b^Tx)^2 -((b^Tx)^2+ x_{n+1}^2) + x_{n+1}^2 = x^TAx, $$ which tells that $A'$ is positive semi-definite. The right-hand side does not depend on $x_{n+1}$, we do not get positive definiteness here. Using the positive definiteness of $A$ gives $$z^TA'z = x^TAx+ (b^Tx)^2 + 2 x_{n+1} (b^Tx) + x_{n+1}^2\\ \ge B_2 \|x\|^2+ (b^Tx)^2 + 2 x_{n+1} (b^Tx) + x_{n+1}^2. $$ Estimating $$ -2 x_{n+1} (b^Tx) \le (1-\epsilon) x_{n+1}^2 + \frac1{1-\epsilon}(b^Tx)^2 $$ with some $\epsilon\in(0,1)$ gives $$ z^TA'z \ge B_2 \|x\|^2 - \frac\epsilon{1-\epsilon}(b^Tx)^2 + \epsilon \,x_{n+1}^2. $$ Estimating the term containing $\|x\|^2$ gives $$ B_2 \|x\|^2 - \frac\epsilon{1-\epsilon}(b^Tx)^2 \ge \left(B_2 - \frac\epsilon{1-\epsilon}B_1^2\right)\|x\|^2, $$ setting $\epsilon:=\frac{B_2}{B_2+2B_1^2}$ gives $\frac\epsilon{1-\epsilon}B_1^2=\frac12B_2$. Thus, we obtain the lower bound $$ z^TA'z \ge \frac{B_2}2\|x\|^2 + \frac{B_2}{B_2+2B_1^2} x_{n+1}^2 \ge \frac{B_2}{\max(2,B_2+2B_1^2)} \|z\|^2 $$ and hence the smallest eigenvalue $\lambda_1$ of $A'$ is bounded below by $$ \frac{B_2}{\max(2,B_2+2B_1^2)} \le \lambda_1, $$ hence the smallest eigenvalue is bounded away from zero. One can try to balance the $\epsilon$-dependent by solving with $\epsilon\in(0,1)$ $$ B_2 - \frac\epsilon{1-\epsilon}B_1^2=\epsilon. $$ In case $\epsilon<1$ this is equivalent to $$ f(\epsilon)=\epsilon^2-\epsilon(1+B_2+B_1^2) +B_2=0. $$ Assume $b\ne0$ and $B_1>0$. Since $f(0)=B_2>0$, $f(1)=-B_1^2<0$, there is a root in $(0,1)$, but no negative root, and the smallest root is given by $$ \epsilon^*=\frac12\left( 1+B_2+B_1^2 -\sqrt{( 1+B_2+B_1^2)^2 - 4B_2} \right), $$ this $\epsilon^*$ constitutes another (optimal?) lower bound for the smallest eigenvalue, $\epsilon^*\le\lambda_1$. In the case $B_1=0$ it holds $\epsilon^*=\min(1,B_2)$, which is optimal.
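A randomized numerical check of the derived bound (my own sketch; the distributions used to generate $A$ and $b$ are arbitrary choices, and $B_1$, $B_2$ are taken as the exact norm and smallest eigenvalue):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

for _ in range(1000):
    # random symmetric positive definite A and random vector b
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    eigs = rng.uniform(0.5, 3.0, size=n)
    A = Q @ np.diag(eigs) @ Q.T
    b = rng.standard_normal(n)

    B1 = np.linalg.norm(b)
    B2 = eigs.min()

    Ap = np.block([[A + np.outer(b, b), b[:, None]],
                   [b[None, :], np.ones((1, 1))]])
    lam1 = np.linalg.eigvalsh(Ap).min()

    bound = B2 / max(2.0, B2 + 2 * B1**2)
    assert lam1 >= bound - 1e-9      # the lower bound from the answer holds
```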
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$a^2=b^3+bc^4$ has no solutions in non-zero integers This problem is from a number theory book: $$a^2=b^3+bc^4$$ has no solutions in non-zero integers. The book's hint: first show that $b$ must be a perfect square. How does one do that?
It is clear that $b\ge 0$. Suppose $b\gt 1$, and let $p$ be a prime that divides $b$. Let $p^k$ be the highest power of $p$ that divides $b$. There are $2$ cases. If $p$ does not divide $c$, then since $p^{3k}$ divides $b^3$, it follows that the highest power of $p$ that divides $a^2$ is $p^k$, so $k$ is even. If $p$ divides $c$, then the highest power of $p$ that divides $bc^4$ is $k+4t$ for some $t$. If $3k\ne k+4t$, then the highest power of $p$ that divides $a^2$ is $p^u$, where $u=\min(3k, k+4t)$. If $k$ is odd, then $u$ is odd, impossible. Finally, suppose $3k=k+4t$. Then $2k=4t$, so $k$ must be even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Let $ I $ be an ideal in $\mathbb Z [i]$. Show that $\mathbb Z[i] /I $ is finite. Let $I$ be an ideal in $\mathbb Z[i]$. I want to show that $\mathbb Z[i]/I$ is finite. I start with: $\mathbb Z[i]/I$ is isomorphic to $\mathbb Z$. $\mathbb Z$ is an integral domain, so $I$ is prime. Here I get stuck. Thanks for the help.
Let $I \subseteq \mathbb{Z}[i]$ be a nonzero ideal. Since $\mathbb{Z}[i]$ is a principal ideal domain, it follows that $I = (\alpha)$ for some $\alpha \in \mathbb{Z}[i].$ Let $a + I$ be a coset of $I$ with $a = k \alpha + \beta$ and $\delta(\beta) < \delta(\alpha).$ In particular, every coset of $I$ in $\mathbb{Z}[i]$ is represented by an element with norm less than $\delta(\alpha).$ Now let $z = x+iy$ have norm $q.$ Then $x^2+y^2 = q$ with $|x|,|y| \leqslant q$, meaning that there is only a finite number of possibilities for $z.$ Thus there are finitely many possible representatives of distinct cosets in $\mathbb{Z}[i].$ Therefore $\mathbb{Z}[i]/I$ is finite. $\hspace{1mm} \Box$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Name of the highest power of 2 smaller than or equal to a given number For a number $x$, I would like to know whether there is a common name for the number $2^n$ such as $2^n \leq x < 2^{n+1}$ (e.g. If $x = 7$, then $2^n = 4$, $n = 2$). I have some computer science related article where I extensively use such a number and I need a name to give it in order to explain how an algorithm works without having to repeat the number definition over and over every time I need to use it. I currently call it a "base $2$", saying for example that "$4$ is the base $2$ of $7$" (see example above), and that we need to "compute the base $2$ of the number", but this name feels wrong. Do you know whether a common name exists for such a number? Note: actually, the article I am talking about deals with Gray codes. I am looking for a term that looks like it comes from math and not from computer science since many terms from computer science that deal with powers of two tend to be references to the usual binary representations of numbers. As an example, with Gray codes $2^3$ is 0b1100 and not 0b1000 so I am trying to avoid names that would literally mean the $n$th set bit, hence the question on Math.SE. Note 2: as it has been highlighted in the many answers and comments, the goal of this question, once clearly reformulated, is to find a terse, pronounceable name for the function $2^{\lfloor \log_2(x) \rfloor}$ so that it is possible to say that "some number is the [insert name here] of $x$".
How about truncate? That is, you truncate the number by setting all but its most significant binary digit to 0.
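In code this is a one-liner for positive integers; a possible sketch (the function name is just illustrative):

```python
def msb_value(x: int) -> int:
    """Largest power of 2 that is <= x (x must be a positive integer)."""
    return 1 << (x.bit_length() - 1)

print(msb_value(7))    # 4
print(msb_value(8))    # 8
print(msb_value(1))    # 1
```

Using `math.floor(math.log2(x))` also works, but can suffer from floating-point rounding for very large integers, so the bit-length form is safer.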
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 14, "answer_id": 13 }
Subsets of set satisfying open set condition Suppose an iterated function system of similarity transformations $S_1, S_2, \dotsc, S_k:\mathbb{R}^n\to\mathbb{R}^n$ (with unique invariant set $F$) satisfies the open set condition for some non-empty bounded open set $O\subset \mathbb{R}^n$, so that $$\bigcup_{i=1}^k S_i(O)\subset O$$ with the union disjoint. If $U\subset O$ is open and arbitrary, I would like to prove that $U$ is also a suitable choice in the OSC, so that $$\bigcup_{i=1}^k S_i(U)\subset U.$$ Clearly $\bigcup_{i=1}^k S_i(U)\subset \bigcup_{i=1}^k S_i(O)\subset O$, but is it the case that the setup necessarily implies $\bigcup_{i=1}^kS_i(U)\subset U$ with this union disjoint?
The statement is false. Consider the IFS on $\mathbb R$ consisting of the similarities $S_1(x)=x/2$ and $S_2(x)=x/2+1/2$. The open unit interval $(0,1)$ verifies OSC for this IFS. However, no interval of the form $(a,b)$ with $0<a<b<1$ will verify OSC.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Product topology - definition Can someone please give me a detailed explanation of the concept of product topologies? I just can't get it. I have looked in a number of decent textbooks(Munkres, Armstrong, Bredon, Wiki :P, Class notes, a youtube video). This is what it seems like to me: We have two topological spaces $(X,\tau_1)$ and $(Y,\tau_2)$ and we take their product topology: $$(X\times Y,\tau_1\times \tau_2)$$ Where this product topology $\tau_1\times \tau_2$ consists of unions of all elements of $\tau_1$ with all element of $\tau_2$ I.e. The first element of $\tau_1$ is taken in union with every element of $\tau_2$ and then the second element and so on, and all unions and intersections of these are taken. Now I am confused as well since apparently the product topology is immediately $T_{3.5}$ but I have seen that the product of two hausdorff spaces is hausdorff, then what's the deal with this? Are two hausdorff spaces actually $T_{3.5}$ and then $T_2$ is absorbed?
The product topology (on a product of two spaces $(X,\tau_1)$ and $(Y,\tau_2)$) consists of all unions of sets of the form $U \times V$, where $U \in \tau_1$ and $V \in \tau_2$. One easily checks that this forms a topology. A more general way of defining it, which works for products of any number of spaces $(X_i, \tau_i), i \in I$, is that it is the intersection of all topologies $\tau$ on $\prod_{i \in I} X_i$ that are such that for all $i$, the projection $p_i: (\prod_{i \in I} X_i, \tau) \rightarrow (X_i, \tau_i)$ is continuous. It's a small proof to show that for two spaces this coincides with the above definition, and it shows that the product topology is natural (it's the minimal topology that makes all projections continuous) and also is the category-theoretical product (if you care for such things). Now, the product topology is quite natural for the lower separation axioms: $X \times Y$ is a $T_i$ space for $i=0,1,2,3,3{1\over 2}$ iff $X$ and $Y$ are both $T_i$ spaces. (For $T_4$ spaces this can fail.) It's certainly not true that $T_{3\frac{1}{2}}$ is automatic for products, as you seem to think. It does need the same to already hold for both factor spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Clarify: "$S^0$, $S^1$ and $S^3$ are the only spheres which are also groups" The zero, one, and three dimensional spheres $S^0$, $S^1$ and $S^3$ are in bijection with the sets $\{a\in \mathbb{K}:|a|=1\}$ for $\mathbb{K} = \mathbb{R}, \mathbb{C}, \mathbb{H}$ respectively. The real, complex and quaternionic multiplication therefore provide a group operation on these spheres. This is mentioned in the book: Kristopher Tapp (2011), Matrix Groups for Undergraduates, Indian Edition, pp. 40. Following this there is a statement: It turns out that $S^0$, $S^1$ and $S^3$ are the only spheres which are also groups. Can someone please clarify this statement? How are these three the only spheres which are also groups? For example, I could take any sphere $S^k$ ($k\ge 1$) and get a bijection $f:S^k \to S^1$ and define a binary operation on $S^k$ by $$a*b = f^{-1}(f(a)\cdot f(b))$$ and $S^k$ would be a group under this operation. So what exactly is meant by the above statement? In what sense are these the only three spheres which are also groups?
The spheres $S^0,S^1$ and $S^3$ are the only spheres that are Lie groups (a group that is a differentiable manifold as well). The proof uses the group cohomology (i.e., studying groups using cohomology theory) of spheres. Check out the De Rham cohomology of the $n-$dimensional sphere, which states that $H^1(S^n\times I)$ and $H^3(S^n\times I)$ both equal $0$ as long as $n$ does not equal $1$ or $3$ (in this case). Here, $I$ is a real and open interval (e.g., the real numbers). Check out the following links: http://en.wikipedia.org/wiki/De_Rham_cohomology#The_n-sphere http://planetmath.org/spheresthatareliegroups
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Solving an easy first-order linear differential equation. Question Solve $y'=2x(1+x^2-y)$. My attempt Rearranging gives $y'+2xy=2x(1+x^2)$. Thus, the integrating factor is $e^{\int2x\,dx}=e^{x^2}$ and multiplying the equation throughout by this gives $e^{x^2}y'+2xe^{x^2}y=2xe^{x^2}(1+x^2)\Rightarrow\dfrac{d}{dx}\left(e^{x^2}y\right)=2xe^{x^2}+2x^3e^{x^2}$, which can be integrated directly. Then, $e^{x^2}y=\int2xe^{x^2}\,dx+2\int x^3e^{x^2}\,dx=e^{x^2}+2\int x^3e^{x^2}\,dx$ Is this correct, and is there some easy way of working out $\int x^3e^{x^2}\,dx$? Wolfram has it as $e^{x^2}(x^2-1)$. Maybe I write $2x^3e^{x^2}$ as $x(2x^2e^{x^2})$?
Make a change of variables $$1+x^2 - y = u, \quad y =1+x^2 - u, \quad y' = 2x-u' $$ then the DE $y' = 2x(1+x^2 - y)$ is turned into $$2x-u' = 2xu $$ Multiplying by $e^{x^2}$, we get $$e^{x^2}(u'+2xu) = \left(e^{x^2}u\right)' = 2xe^{x^2}$$ which on integration yields $$e^{x^2}u = e^{x^2} + c\to u = 1+ce^{-x^2},\quad y = x^2 -ce^{-x^2} $$
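One can confirm the general solution with sympy (a quick check of my own, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

ode = sp.Eq(y(x).diff(x), 2*x*(1 + x**2 - y(x)))
print(sp.dsolve(ode, y(x)))     # something equivalent to y(x) = x**2 + C1*exp(-x**2)

# direct substitution check of y = x^2 + C*exp(-x^2)
C = sp.symbols('C')
cand = x**2 + C*sp.exp(-x**2)
print(sp.simplify(cand.diff(x) - 2*x*(1 + x**2 - cand)))   # 0
```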
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluate $\lim\limits_{x\to\infty}(\sin\sqrt{x+1}-\sin\sqrt{x})$ When using Maclaurin series, the limit is $$\lim\limits_{x\to\infty}\frac{1}{\sqrt{x+1}+\sqrt{x}}=0$$ If we expand the expression with two limits $$\lim\limits_{x\to\infty}\sin\sqrt{x+1}-\lim\limits_{x\to\infty}\sin\sqrt{x}$$ it diverges. Which solution is right?
Hint: The mean value theorem applied to $t\mapsto\sin\sqrt{t}$ on $[x,x+1]$ implies there exists $c\in]x,x+1[$ such that $$\sin(\sqrt{x+1})-\sin(\sqrt{x})=\frac{\cos\sqrt{c}}{2\sqrt{c}}$$ Then $$\left|\sin(\sqrt{x+1})-\sin(\sqrt{x})\right|\le\frac{1}{2\sqrt{c}}\le\frac{1}{2\sqrt{x}}\xrightarrow[x\to\infty]{}0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 4 }
Fourier transform of a radial function Consider a function $f \in L^2(\mathbb{R}^n)$ such that $f$ is radial. My question is, is the Fourier transform $\hat{f}(\xi)$ automatically radial (I can see it is even in each variable $x_i$), or we need some conditions on $f$? Thanks for your help.
Preliminaries: i) $f$ is radial iff $f\circ T = f$ for every orthogonal transformation $T$ on ${R}^n.$ ii) An orthogonal transformation $T$ preserves the inner product: $\langle Tx,Ty \rangle = \langle x,y \rangle$ for all $x,y \in \mathbb {R}^n.$ iii) If $T$ is orthogonal, then $|\det J_T|=1,$ where $J_T$ is the Jacobian matrix of $T.$ So suppose $f\in L^1(\mathbb {R}^n)$ and $f$ is radial. Fix an orthogonal transformation $T.$ Then $$\hat {f} (Tx) = \int_{\mathbb {R}^n} f(t) e^{-i\langle Tx,t \rangle}\,dt = \int_{\mathbb {R}^n} f(Ts) e^{-i\langle Tx,Ts \rangle}\,ds= \int_{\mathbb {R}^n} f(s) e^{-i\langle x,s \rangle}\,ds = \hat f (x).$$ Thus $\hat f$ is radial as desired. That was for $L^1,$ but the question was about $L^2$ as @AdamHughes reminded me. See the comments below to get the result for $L^2.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1310936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
$H_0^1(\Omega)$ space where $\Omega$ is an open bounded subset of $\mathbb R^N$. I have some trouble with proper understanding of $H_0^1$ space. I confess that I am now beginning to study functional analysis and maybe my question may seem rather trivial. However I would like to know if $$H_0^1(\Omega)$$ where $\Omega$ is an open bounded subset of $\mathbb R^N$, is a finite dimensional Hilbert space.
It is not finite dimensional. One way of seeing this is to consider the simple case $\Omega = ~ ]0,1[ \subset \Bbb R$. Then $H^1_0(\Omega)$ is the set of square integrable functions whose derivative is also square integrable and whose trace on the boundary is $0$. A subset of this is the set of continuous functions on $]0,1[$ that vanish at $0$ and $1$. This set is infinite dimensional: think of Fourier series (e.g. the functions $\sin(n\pi x)$). In fact, $H^1_0(\Omega)$ is a separable Hilbert space. This means that it admits a countable Hilbert basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $F$ a sheaf and $S\subset F$ a subfunctor, then $S$ is a subsheaf if and only if... This is Proposition 1 from Maclane & Moerdijk's Sheaves in Geometry and Logic, part II, section 1. Proposition 1. Let $F$ be a sheaf on $X$ and $S\subset F$ a subfunctor. $S$ is a subsheaf if and only if, for every open set $U$ on $X$, every element $f\in F(U)$, and every open covering $U=\bigcup U_i$, one has $f\in S(U)$ if and only if $f|_{U_i}\in S(U_i)$ for all i. Proof: The stated condition is clearly necessary for $S$ to be a sheaf. Conversely, consider the commutative diagram $$\require{AMScd} \begin{CD} S(U) @>>> \prod S(U_i) @>>> \prod S(U_i\cap U_j)\\ @VVV & @VVV & @VVV\\\ F(U) @>>> \prod F(U_i) @>>> \prod F(U_i\cap U_j) \end{CD}$$ The bottom row an equalizer. The last condition of the proposition states that the left hand square is a pullback. I have to prove that the top row is an equalizer. Can someone help me complete/formalize the proof?
Having reduced the problem this far, the argument no longer needs any information about $S$ or $F$: such a diagram's upper left corner is an equalizer in any category. That said, I'll write a proof using the names you've provided. Suppose $f$ is equalized by the two top-right maps. Then the composition of $f$ with the middle vertical map is equalized by the two bottom-right maps, so factors through $FU$. Now we have maps to $FU$ and $\prod SU_i$ which agree at $\prod FU_i$, so they simultaneously factor through $SU$, and in particular $f$ does, so $SU\to \prod SU_i$ is an equalizer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the most unusual proof you know that $\sqrt{2}$ is irrational? What is the most unusual proof you know that $\sqrt{2}$ is irrational? Here is my favorite: Theorem: $\sqrt{2}$ is irrational. Proof: $3^2-2\cdot 2^2 = 1$. (That's it) That is a corollary of this result: Theorem: If $n$ is a positive integer and there are positive integers $x$ and $y$ such that $x^2-ny^2 = 1$, then $\sqrt{n}$ is irrational. The proof is in two parts, each of which has a one line proof. Part 1: Lemma: If $x^2-ny^2 = 1$, then there are arbitrarily large integers $u$ and $v$ such that $u^2-nv^2 = 1$. Proof of part 1: Apply the identity $(x^2+ny^2)^2-n(2xy)^2 =(x^2-ny^2)^2 $ as many times as needed. Part 2: Lemma: If $x^2-ny^2 = 1$ and $\sqrt{n} = \frac{a}{b}$ then $x < b$. Proof of part 2: $1 = x^2-ny^2 = x^2-\frac{a^2}{b^2}y^2 = \frac{x^2b^2-y^2a^2}{b^2} $ or $b^2 = x^2b^2-y^2a^2 = (xb-ya)(xb+ya) \ge xb+ya > xb $ so $x < b$. These two parts are contradictory, so $\sqrt{n}$ must be irrational. Two things to note about this proof. First, this does not need Lagrange's theorem that for every non-square positive integer $n$ there are positive integers $x$ and $y$ such that $x^2-ny^2 = 1$. Second, the key property of positive integers needed is that if $n > 0$ then $n \ge 1$.
$$\boxed{\text{If the boxed statement is true, then the square root of two is irrational.}}$$ Lemma. The boxed statement is true. Proof. Assume for a contradiction that the boxed statement is false. Then it has the form "if $S$ then $T$" where $S$ is false, but a conditional with a false antecedent is true. Theorem. The square root of two is irrational. Proof. * *The boxed statement is true. (By the Lemma.) *If the boxed statement is true, then the square root of two is irrational. (This is the boxed statement itself.) *The square root of two is irrational. (Modus ponens.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "114", "answer_count": 19, "answer_id": 14 }
Prove $\sin(kx) \rightharpoonup 0$ as $k \to \infty$ in $L^2(0,1)$ I want to show that $u_k(x)= \sin(kx) \rightharpoonup 0$ as $k \to \infty$ in $L^2(0,1)$. We know trivially that $0 \in L^2(0,1)$. I need to show that $\langle u^*,\sin(kx) \rangle \to \langle u^*, 0 \rangle$ for each bounded linear functional $u^* \in L^2(U)$, where $L^2(U)$ is a dual space of itself (since $L^2$ is a Hilbert space). I think I need to show that, as $k \to \infty$, $$\int_0^1 u^* \sin(kx) \, dx \to 0.$$
Let $f \in L^2[0,1]$ with Fourier expansion $f(x) = \frac{a_0}{2} + \sum_{n=1}^\infty a_n \sin (2\pi n x) + b_n \cos (2\pi n x)$. Then Bessel's inequality gives (up to a normalizing constant) $$ \frac{a_0^2}{2} + \sum_{n=1}^\infty (a_n^2 + b_n^2) \le C\int_0^1 |f(x)|^2 \, dx < \infty$$ so the Fourier coefficients, and hence the inner products $|\langle f ,\sin (2\pi n x)\rangle |$, tend to zero. The same argument works for any orthogonal system, in particular for $\{\sin kx\}$ on $(0,1)$. The intuition here is that $\sin kx$ (or $e^{2\pi i \, kx}$) oscillates so violently that, integrated against any fixed $f \in L^2$, we may as well just substitute $0$. Weak convergence formalizes this asymptotic behavior in many cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why conjugate when switching order of inner product? There is an axiom of inner product spaces that states: * *$\overline{\langle x,y\rangle } = \langle y,x\rangle$ Basically (without any conceptual understanding) it seems like all you have to do when you swap the order of the arguments in an inner product space is take their conjugate. How does this make any sense? I know if we are dealing with an inner product space over $\mathbb{R}$ then the conjugate of a real number is just the real number itself so there is no change. But how does this make sense over the field $\mathbb{C}$?
Conjugation is there to make sure the signs work out. If you don't conjugate, then you'll find that $\langle ix, ix \rangle = -\langle x, x\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 0 }
Proving a formula using another formula These questions are from the book "What is Mathematics": Prove formula 1: $$1 + 3^2 + \cdots + (2n+1)^2 = \frac{(n+1)(2n+1)(2n+3)}{3}$$ formula 2: $$1^3 + 3^3 + \cdots + (2n+1)^3 = (n+1)^2(2n^2+4n+1)$$ Using formulas 4 and 5; formula 4: $$1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{n(n+1)(2n+1)}{6}$$ formula 5: $$1^3 + 2^3 + \cdots + n^3 = \left(\frac{n(n+1)}{2}\right)^2$$ My approach for proving the first one was to do something like a subtraction between formula 4 and 1, substituting the value of $n $ in formula 4 for $2n + 1$. I was left with a formula which I proved by mathematical induction gives me $2^2 + 4^2 + \cdots + (2n)^2$. I am wondering if this is the correct approach before doing the second proof? Is there a better and simpler way of doing this?
I would go like this: $(2k+1)^2 = 4k^2+4k+1 \Rightarrow 1+3^2+5^2+\cdots + (2n+1)^2=\displaystyle \sum_{k=0}^n (2k+1)^2=4\displaystyle \sum_{k=0}^n k^2 + 4\displaystyle \sum_{k=0}^n k + (n+1)= 4\cdot\dfrac{n(n+1)(2n+1)}{6}+4\cdot\dfrac{n(n+1)}{2}+(n+1)=...$
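Finishing the algebra is mechanical; a symbolic check (my own sketch) that the expression above simplifies to formula 1:

```python
import sympy as sp

n, k = sp.symbols('n k', integer=True, nonnegative=True)

lhs = 4*n*(n + 1)*(2*n + 1)/6 + 4*n*(n + 1)/2 + (n + 1)
rhs = (n + 1)*(2*n + 1)*(2*n + 3)/3

print(sp.simplify(lhs - rhs))                                    # 0
print(sp.simplify(sp.summation((2*k + 1)**2, (k, 0, n)) - rhs))  # 0
```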
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
If $A$ is normal and $A$ and $B$ commute, then $A^*$ and $B$ commute Let $A$ is a normal matrix: $A^*\! A = A A^*\!\!$,$\,$ and $AB = BA$. Prove that $A^*\!B=BA^*\!\!$. I can prove that if $\det A\ne 0$ by multiplication $AB=BA$ by $A^*$ left and right and using some manipulation. But I have no idea what to do if $\det A = 0$.
We can use the fact that $$\def\tr{\mathrm{tr}} X=0\iff \tr(XX^*)=0. $$ Since $A$ and $B$ commute, $A^*$ and $B^*$ commute as well. Together with the cyclic property of trace, $\mathrm{tr}(XY)=\mathrm{tr}(YX)$, we find that in each term of $$ \begin{split} \tr[(A^*B-BA^*)(A^*B-BA^*)^*] &= \tr(A^*BB^*A)+\tr(BA^*AB^*) -\tr(BA^*B^*A)-\tr(A^*BAB^*) \end{split} $$ is equal to a constant, say, $\tr(A^*AB^*B)$. E.g., $\tr(A^*BB^*A)=\tr(AA^*BB^*)=\tr(A^*AB^*B)$ and $\tr(BA^*B^*A)=\tr(BB^*A^*A)=\tr(A^*ABB^*)=\tr(A^*AB^*B)$. Hence the trace of $(A^*B-BA^*)(A^*B-BA^*)^*$ is zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Simplify $\prod_{k=1}^5\tan\frac{k\pi}{11}$ and $\sum_{k=1}^5\tan^2\frac{k\pi}{11}$ My question is: If $\tan\frac{\pi}{11}\cdot \tan\frac{2\pi}{11}\cdot \tan\frac{3\pi}{11}\cdot \tan\frac{4\pi}{11}\cdot \tan\frac{5\pi}{11} = X$ and $\tan^2\frac{\pi}{11}+\tan^2\frac{2\pi}{11}+\tan^2\frac{3\pi}{11}+\tan^2\frac{4\pi}{11}+\tan^2\frac{5\pi}{11}=Y$ then find $5X^2-Y$. I couldn't find any way to simplify it. Please help. Thanks.
The main building block of our solution will be the formula \begin{align*}\prod_{k=0}^{N-1}\left(x-e^{\frac{2k i\pi}{N}}\right)=x^N-1.\tag{0} \end{align*} It will be convenient to rewrite (0) for odd $N=2n+1$ in the form \begin{align*} \prod_{k=1}^{n}\left[x^2+1-2x\cos\frac{\pi k}{2n+1}\right]=\frac{x^{2n+1}-1}{x-1}. \tag{1} \end{align*} Replacing therein $x\leftrightarrow -x$ and multiplying the result by (1), we may also write \begin{align*} \prod_{k=1}^{n}\left[\left(x^2-1\right)^2+4x^2\sin^2\frac{\pi k}{2n+1}\right]=\frac{1-x^{4n+2}}{1-x^2}. \tag{2} \end{align*} * *Setting in (1) $x=-i$, we get $$\left(2i\right)^n\prod_{k=1}^n\cos\frac{\pi k}{2n+1}=\frac{i^{2n+1}-1}{i-1} \qquad \Longrightarrow\qquad \prod_{k=1}^n2\cos\frac{\pi k}{2n+1}=1.$$ *Setting in (2) $x=1$ and computing the corresponding limit on the right, we get $$\prod_{k=1}^n2\sin\frac{\pi k}{2n+1}=\left[\lim_{x\to 1}\frac{1-x^{4n+2}}{1-x^2}\right]^{\frac12}=\sqrt{2n+1}.$$ *Combining the two results yields $$\boxed{\quad\prod_{k=1}^n\tan\frac{\pi k}{2n+1}=\sqrt{2n+1}\quad}$$ and to find $X$, it suffices to set $n=5$. *To find $Y$, let us rewrite (1) in the form (set $x=-e^{i\gamma}$) $$\prod_{k=1}^n \left[2\cos\gamma+2\cos\frac{\pi k}{2n+1}\right]=\frac{\cos\frac{\left(2n+1\right)\gamma}{2}}{\cos\frac{\gamma}{2}}$$ Taking the logarithm and differentiating twice with respect to $\gamma$, we find $$\sum_{k=1}^{n}\frac{1}{\left(\cos\gamma+\cos\frac{\pi k}{2n+1}\right)^2} =-\frac{1}{\sin\gamma}\frac{\partial}{\partial \gamma}\left(\frac{1}{\sin\gamma}\frac{\partial}{\partial \gamma}\ln \frac{\cos\frac{\left(2n+1\right)\gamma}{2}}{\cos\frac{\gamma}{2}}\right).\tag{3}$$ *Computing the right side of (3) and setting therein $\gamma=\frac{\pi}{2}$, we finally arrive at $$\sum_{k=1}^{n}\frac{1}{\cos^2\frac{\pi k}{2n+1}}=2n(n+1)\qquad \Longrightarrow\quad \boxed{\quad\sum_{k=1}^{n}\tan^2\frac{\pi k}{2n+1}=n(2n+1)\qquad}$$ This yields $Y=55$.
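A direct numerical check of the two boxed identities for $n=5$ (my own sketch), which also evaluates the requested $5X^2-Y$:

```python
from math import tan, pi, prod, sqrt

n = 5
X = prod(tan(k*pi/(2*n + 1)) for k in range(1, n + 1))
Y = sum(tan(k*pi/(2*n + 1))**2 for k in range(1, n + 1))

print(X, sqrt(2*n + 1))     # both ~3.3166... (i.e. sqrt(11))
print(Y, n*(2*n + 1))       # both 55, up to rounding
print(5*X**2 - Y)           # ~0
```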
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Proving pseudo-hyperbolic distance is distance The pseudo-hyperbolic distance on the unit disk is defined as: $$\rho(z,w)=\left|\dfrac{z-w}{1-\bar wz}\right|.$$ I'd like to prove it's a distance. The real problem is, as always, the triangle inequality, because the other properties are mostly obvious. That is, I need to prove: $$\rho(z,w)\leq\rho(z,t)+\rho(t,w),$$ for all $z,w,t\in\mathbb{D}$. I tried writing $z,t,w$ as real part plus $i$ times imaginary part, and ended up with a messy expression Wolfram can't handle. I tried polar coordinates, and the mess is even worse, and Wolfram's help is even less. I Googled first, but only found stuff about the Hyperbolic distance, and a pdf having this as an exercise, suggesting to also show that: $$\rho(z,w)\leq\frac{\rho(z,t)+\rho(t,w)}{1+\rho(z,t)\rho(t,w)}.$$ But that didn't help. So here I am. How do I solve this?
It helps to know that $\rho$ is invariant under Möbius transformations. Indeed, $\rho(z,w)=|\phi(z)|$ where $\phi$ is any Möbius map such that $\phi(w)=0$ (they all agree up to rotation, which doesn't change the modulus). Since Möbius maps form a group, applying one of them to both $z$ and $w$ does not affect the above formula for $\rho$. In order to prove the triangle inequality, first map $t$ to $0$ by a suitable Möbius map. This reduces the task to $$\left|\frac{z-w}{1-z\overline w}\right|\le |z|+|w| \tag{1}$$ Let $|z|=a$, $|w|=b$, and let $\theta $ be the polar angle between $z$ and $w$. Then $(1)$ becomes $$ \frac{a^2+b^2-2ab\cos\theta }{1+a^2b^2-2ab\cos\theta } \le (a+b)^2 \tag{2}$$ To estimate the left side, rewrite it as $$ 1 - \frac{(1-a^2)(1-b^2) }{1+a^2b^2-2ab\cos\theta } \le 1 - \frac{(1-a^2)(1-b^2) }{1+a^2b^2+2ab } =\frac{(a+b)^2}{(1+ab)^2} $$ and $(2)$ follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Surjectivity and injectivity I need to determine injectivity and surjectivity for $f:\mathbb R^2 \longrightarrow \mathbb R$ where $f(x,y)=5xe^y$. For injectivity: $f(0,0)=f(0,1)$ but $(0,0) \neq (0,1)$. For surjectivity I must show that the function covers the codomain, so that for every $z \in \mathbb R$ there must be some $(x,y) \in \mathbb R^2$ such that $f(x,y)=z$, right? How does that work? Thanks!
For surjectivity you have to show that for any $z \in \def\R{\mathbf R}\R$ there is $(x,y) \in \R^2$ such that $f(x,y) = z$. Hint. To do so, use that you (hopefully) know that $\exp \colon \R \to (0,\infty)$ is bijective. If for example $z > 0$, then we can choose $x = \frac 15$. Which leaves us with $$ f\left(\frac 15, y\right) = e^y \stackrel != z $$ So $y = \textbf ?$ will do. For $z < 0$ you can use almost the same trick with another $x$-value. This leaves you with $z = 0$, but this is easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Number system and $\pi$ Ok, we all use the decimal system with digits from 0 to 9. And we have $\pi$ with an infinite number of decimals. We also have the binary system, or hexadecimal. Is there any number system (base) where $\pi$ has a finite number of digits?
If a number has a finite expansion, in a rational base, using rational digits, then the number is rational. This is because the sum and product of rational numbers is rational. Since $\pi$ is irrational, it therefore has no terminating expansion in any rational base. Note: Even some rational numbers have non-terminating expansions in base 10. For example, $1/3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1311995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Changing variables in a multiple integral (commutativity of the convolution operation) In the space $\mathbb{R}^n$, $n\geq 1$, the Lebesgue measure is denoted by $dx=dx_1\dots dx_n$ and $\int_{\mathbb{R}^n}f(x)dx$ stands for $\int_{\mathbb{R}^n}f(x_1,\dots, x_n)dx_1\dots dx_n$. I want to prove that the convolution operation is commutative, i.e. $f\ast g=g\ast f$. More precisely, $$ \int_{\mathbb{R}^n}f(x-y)g(y)dy=\int_{\mathbb{R}^n}f(y)g(x-y)dy $$ This is a consequence of changing variables. But if I say $x-y=u$ then $(-1)^n dy=du$ and the result is $$ \int_{\mathbb{R}^n}f(x-y)g(y)dy=(-1)^n\int_{\mathbb{R}^n}f(y)g(x-y)dy $$ Where is my mistake?
The problem is how you're parameterizing the whole space. On the left-hand side, each coordinate integral runs from $-\infty$ to $\infty$. After the substitution $u = x-y$, each coordinate integral runs the opposite way, from $\infty$ to $-\infty$; reversing the orientation of all $n$ of them contributes another factor $(-1)^n$, which cancels the $(-1)^n$ coming from $du=(-1)^n\,dy$. Equivalently, for the Lebesgue integral the change-of-variables formula uses $|\det J|=1$, so no sign appears at all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proof by counting two ways Proof by counting two ways: \begin{equation}\sum_{k_1+k_2+...+k_m=n}{k_1\choose a_1}{k_2\choose a_2}...{k_m\choose a_m}={n+m-1\choose a_1+a_2+...+a_m+m-1}\end{equation} I have a proof by algebra for it, but I want to seek a proof by counting it two ways. Can you help me?
Let $\ell=n-\sum_{i=1}^ma_i$; then $$\dbinom{n+m-1}{a_1+\ldots+a_m+m-1}$$ is the number of ways to distribute $\ell$ indistinguishable balls amongst $$\sum_{i=1}^ma_i+m=\sum_{i=1}^m(a_i+1)$$ distinguishable boxes. For $i=1,\ldots,m$ let $A_i$ be a distinct set of $a_i+1$ distinguishable boxes, and let $A=\bigcup_{i=1}^mA_i$. $\binom{k_i}{a_i}$ is the number of ways to distribute $k_i+1$ indistinguishable balls amongst the $a_i+1$ distinguishable boxes in $A_i$ so that each box gets at least one ball. Thus, it’s also the number of ways to distribute $(k_i+1)-(a_i+1)=k_i-a_i$ indistinguishable balls amongst the $a_i+1$ boxes in $A_i$. It follows that $$\sum_{k_1+\ldots+k_m=n}{k_1\choose a_1}{k_2\choose a_2}\ldots{k_m\choose a_m}$$ is the total number of ways to distribute $\ell$ indistinguishable balls amongst the $\sum_{i=1}^m(a_i+1)$ boxes in $A$, summed over all $m$-tuples $\langle\ell_1,\ldots,\ell_m\rangle$ of non-negative integers such that $\sum_{i=1}^m\ell_i=\ell$, where $\ell_i$ is the number of balls distributed amongst the $a_i+1$ boxes in $A_i$. It follows immediately that $$\sum_{k_1+\ldots+k_m=n}{k_1\choose a_1}{k_2\choose a_2}\ldots{k_m\choose a_m}=\binom{n+m-1}{a_1+\ldots+a_m+m-1}\;.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Markov chain - is my diagram/matrix correct? A boy goes to school on a bike or on foot. If one day he goes on foot, then on the second day he takes a bike with probability $0.8$. If he goes on a bike one day, then he falls off the bike with probability $0.3$ and goes on foot the next day. What is the probability that one the last day of school he goes to school on a bike? We assume that the school year has just started. I can see that we need to define a Markov chain. I've tried coming up with a directed graph which I cannot draw here, but I've pasted it here This is what the matrix would look like: $$A= \begin{array}{l} \mbox{bike} \\ \mbox{falls} \\ \mbox{on foot} \\ \mbox{doesn't fall} \end{array} \left( \begin{array}{cccc} 0 & 0.3 & 0 & 0.7 \\ 0 & 0 & 1 & 0\\ 0.8 & 0 & 0.2 & 0 \\ 1 & 0 & 0 & 0 \end{array} \right) $$ This matrix is ergodic and solving the equation $[x_1, x_2, x_3, x_4]A = [x_1, x_2, x_3, x_4], \ \ x_1 + x_2 + x_3 +x_4=1$ I get that $x_1=8/19$. So it would seem that the probability that on the last day of school the boy travels on a bike is $8/19$. Is that true? Is my solution correct?
I think you can simplify things a lot by just using two states, foot (state $1$) and bicycle (state $2$). Then you have the transition matrix: $$\begin{pmatrix} 0.2 & 0.8 \\ 0.3 & 0.7 \end{pmatrix}$$ In fact, the way you've done it is a bit confusing because falling off a bicycle is not a state in the same sense that going to school by foot/bicycle is. For one time step (a day) his state is the mode of transport he uses. But he doesn't spend a full day falling off a bike, so it doesn't make sense to use this as a state.
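Computing the stationary distribution of this two-state chain numerically (a sketch of my own) gives the long-run probability of riding the bike as $8/11$; note this differs from the $8/19$ of the four-state setup, which counts the intermediate fall/no-fall states as extra time steps:

```python
import numpy as np

P = np.array([[0.2, 0.8],     # state 0: foot
              [0.3, 0.7]])    # state 1: bike

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.isclose(w, 1))])
pi = pi / pi.sum()
print(pi)            # [3/11, 8/11] ~ [0.27, 0.73]  (foot, bike)

# sanity check by iterating the chain: both rows converge to the same distribution
print(np.linalg.matrix_power(P, 200)[0])
```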
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\mathbb{C}$ is a one-dimensional complex vector space. What is its dimension when regarded as a vector space over $\mathbb{R}$? $\mathbb{C}$ is a one-dimensional complex vector space. What is its dimension when regarded as a vector space over $\mathbb{R}$? I don't understand how $\mathbb{C}$ is one-dimensional. Please help me understand that. Also, I'm pretty sure that when the field is reals we have $\dim(\mathbb{C})=2$. Since when $\alpha$ is real and $z=a+bi$ is complex we have $\alpha z=\alpha a+\alpha bi=\alpha a(1,0)+\alpha b(0,i)$. How does this look? Any solutions or help is greatly appreciated.
The complex numbers as a vector space over the field of real numbers is of dimension $2$. The two vectors $1$ and $i$ form a basis and any complex vector $a+ib$ is a linear combination of the two vectors $1$ and $i$, multiplied by real scalars $a$ and $b$ and added. The complex numbers as a vector space over the field of complex numbers is of dimension $1$. Choose any vector in this space, say the vector $1$. You can multiply it by a scalar $a+bi$ (a scalar because the field is the field of complex numbers) to get an arbitrary vector $a+bi$; so every vector in the complex vector space is obtained by multiplying the single vector $1$ by a scalar, and $\{1\}$ is a basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Show that every subspace of $\mathbb{R}^n$ is closed Show that every subspace of $\mathbb{R^n}$ is closed. I'm not sure how to do this or even what closed means. I don't even have a starting point. Any hints or solutions are greatly appreciated.
Let $S$ be a linear subspace of $\mathbb{R}^n$. Consider a sequence $\{x_n\}_{n\in\mathbb{N}}$ of points in $S$ which converges to a point $y\in\mathbb{R}^n$. Show that in fact $y$ must lie in $S$. (Here I am using a characterization of "closed" which is equivalent to that described in the other answers). You could use the continuity of any projection map to the subspace, if that's available to you or easy to prove. You could also show it directly by just doing the analysis. (Hint: the finite-dimensionality should matter)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
Find an inner product that makes a given set of linearly independent vectors orthogonal I need to find an inner product such that, given a set $S$ of linearly independent vectors in a Hilbert space $H$, $S$ will be orthogonal with respect to this product. I thought the Gram–Schmidt process would help, but it doesn't, because for that process you already need an inner product.
Start with a linearly independent set $\{ x_1,\cdots,x_n\}$. Define $$ U : \mathbb{C}^{n} \rightarrow H $$ by $$ U(\alpha_1,\cdots,\alpha_n) = \alpha_1 x_1+\cdots+\alpha_n x_n $$ Let $P$ be the orthogonal projection of $H$ onto the closed subspace spanned by $\{x_1,\cdots,x_n\}$. Define $$ (x,y)_{\mbox{new}} = (U^{-1}Px,U^{-1}Py)_{\mathbb{C}^{n}}+((I-P)x,(I-P)y)_{H}. $$ Because $(I-P)x_k=0$ for $1 \le k \le n$ and $U^{-1}x_k$ is the $k$-th standard basis element in $\mathbb{C}^{n}$, then $(x_j,x_k)_{\mbox{new}}=\delta_{j,k}$ and $(x_j,(I-P)y)_{\mbox{new}}=0$ for all $1 \le j \le n$ and $y\in H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
If all pairs of addends that sum up to $N$ are coprime, then $N$ is prime. I think this must be a known theorem, but I've tried searching for it on google without much luck. I would state it as follows: If for all possible pairs of addends that sum to the same number N each of those pairs is comprised of two numbers that are coprime, then N is prime. It is the inverse of that which was discussed/proved here: If the sum of positive integers $a$ and $b$ is a prime, their gcd is $1$. Proof? Instead of starting with a prime number and wanting to prove that any two numbers summing to it are coprime, in this case I'm starting with a set of addend pairs that sum to the same number, noticing that those addend pairs are always coprimes, and wanting to come to the conclusion that the common sum must be a prime. I would also like to know what this theorem is called if it is indeed a known theorem.
I do not think it has a name, but we can prove it right now. We will prove it by proving the contrapositive: If $N$ is not prime, then there exists a pair of addends $X,Y$ summing to $N$ such that $X$ and $Y$ are not coprime. Proof: Let $N = ab$ where $a, b> 1$ (since $N$ is composite). Then we can take $X = a(b-1)$ and $Y = a$, so that $X + Y = ab = N$ and $\gcd(X,Y) = a > 1$. This completes the proof. $\diamondsuit$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Combination of trees If you have 12 trees with five of one kind, four of another and three of a third kind how many combinations of these trees can be planted in twelve holes?
When you have an $n$ element set with all distinct elements, you can arrange the elements of the set into a line in $n!$ ways (i.e. $n\times(n-1)\times(n-2)\times...\times1$). The reason for this is that there are $n$ options for which element is placed in the first spot, then $n-1$ remaining options for which of the remaining elements is placed in the second spot, etc., until you get to the last spot and there is only one element available to choose from. Imagine that your example involved $12$ trees of all different colors. There would therefore be $12\times11\times10\times...\times1=12!$ (nearly half a billion) ways to arrange those distinct trees in a line of $12$ holes in the ground. However, your example does not involve $12$ distinct trees. Instead, it involves $3$ distinct types of trees in a total set of $12$ trees. This greatly limits the possible number of meaningfully different linear arrangements, since trees of the same type could exchange spots and nobody would know the difference. So how do we compensate for overcounting the possible linear arrangements of trees? We do this by dividing out the number of ways we could exchange same-type trees in the line of holes. Let's say $5$ of the trees are spruce. Based on the rule mentioned in the first paragraph, we could independently arrange those trees in $5!$ ways once you already fix $5$ total spots in which to put spruce trees. If you also have $4$ oak trees and $3$ hawthorn trees, you could also independently rearrange those in $4!$ and $3!$ linear ways, respectively. So we divide out $5!\times4!\times3!$ from our initial permutation of $12!$ arrangements. The result is that there are $\frac{12!}{5!\times4!\times3!}= 27,720$ ways to linearly arrange those three types of trees in twelve holes.
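If you want a quick check of the arithmetic, here is a throwaway one-liner (plain Python, nothing beyond the standard library is assumed):

```python
from math import factorial

print(factorial(12) // (factorial(5) * factorial(4) * factorial(3)))  # 27720
```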
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transform a polynomial so that positive roots are shifted right and negative roots are shifted left I'm trying to figure out if it is possible to shift the roots of a polynomial outward, instead of to the left or right. It's relatively simple to shift all the solutions in one direction by substituting $(x-k)$ or $(x+k)$ for $x$ within the equation, but is it possible to shift all the positive solutions to the right and all the negative solutions to the left?
Let $p(z) = a_0 + a_1z + a_2 z^2 + \dots + a_m z^m$ and let $z_1, \dots, z_m \in \mathbb{C}$ be the roots of $p$. For $\lambda \in \mathbb{C} \setminus \{0\}$, set $q(z) = b_0 + b_1z + \dots + b_mz^m$, where $b_j = \lambda^{-j} a_j$. Then the zeros of $q$ are $\lambda z_1, \dots, \lambda z_m$. Therefore if $\lambda > 1$, all roots are moved away from $0$ (in particular, positive roots move right and negative roots move left), and if $0 < \lambda < 1$, all roots are moved closer to $0$. For $\lambda = -1$, negative roots become positive and positive roots become negative.
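Here is a small numerical illustration of the scaling trick (a sketch only; the cubic below is made up for the example, and numpy is assumed to be available). Note that numpy.roots expects coefficients from the highest degree down, the reverse of the ordering used above:

```python
import numpy as np

# hypothetical test polynomial: p(z) = (z-2)(z-1)(z+1) = 2 - z - 2z^2 + z^3
a = [2, -1, -2, 1]                      # a_0, a_1, a_2, a_3
lam = 3.0                               # lambda > 1: roots move away from 0
b = [c / lam**j for j, c in enumerate(a)]

print(np.roots(a[::-1]))                # approximately  2, 1, -1
print(np.roots(b[::-1]))                # approximately  6, 3, -3  (= lambda times the old roots)
```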
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Riemann zeta function, representation as a limit is it true that $$\displaystyle \zeta(s) = \ \lim_{\scriptstyle a \to 0^+}\ 1 + \sum_{m=1}^\infty e^{\textstyle -s m a } \left\lfloor e^{\textstyle(m+1)a} - e^{\textstyle m a} \right\rfloor$$ my proof : \begin{equation}F(z) = \zeta(-\ln z) = \sum_{n=1}^\infty z^{\ln n}\end{equation} which is convergent for $|z| < \frac{1}{e}$. now I consider the functions : $$\tilde{F}_a(z) = \sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=0}^\infty z^{a n} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $$ because $\displaystyle\lim_{ a \to 0^+} a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor = \ln n$, we get that : $$\lim_{\scriptstyle a \to 0^+} \ \tilde{F}_a(z) = \sum_{n=1}^\infty z^{\ln n} = \zeta(-\ln z)$$ (details) $\displaystyle\sum_{m=0}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $ is also convergent for $z < \frac{1}{e}$ because $\displaystyle\sum_{m=0}^\infty (z^a e^a)^{m}$ is convergent for $z < \frac{1}{e}$ and $\displaystyle\sum_{m=0}^\infty z^{am} \left\{e^{a(m+1)} - e^{a m} \right\} $ is convergent for $z < 1$. to justify $\displaystyle\sum_{n=1}^\infty z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = 1 + \sum_{m=1}^\infty z^{a m} \left\lfloor e^{a(m+1)} - e^{a m} \right\rfloor $ : if $\left\lfloor \frac{\ln n}{a} \right\rfloor = m \ne 0$ then $\displaystyle\frac{\ln n}{a} \in [m, m+1[ \implies n \in [ e^{am}, e^{a(m+1)}[ $ . how many different $n$'s is that ? $\left\lfloor e^{a(m+1)} - e^{am} \right\rfloor $.
$$\begin{align}\left|e^{-nas}\left\lfloor e^{(n+1)a}-e^{na}\right\rfloor\right|&\leq e^{-nas}\left(e^{(n+1)a}-e^{na}\right)\\&=e^{-nas}e^{na}\left( e^{a}-1\right)\\&=\frac{e^a-1}{e^{na(s-1)}} \end{align}$$ As $a\to 0$, $e^a-1\to 0$ and $e^{na(s-1)}\to 1$. So each term goes to zero as $a\to 0$. That seems to contradict your conclusion. Also, why is $$\sum z^{a \left\lfloor \textstyle \frac{\ln n}{a} \right\rfloor } = \sum z^{a n} \left\lfloor e^{a(n+1)} - e^{a n} \right\rfloor?$$ That seems like a huge leap.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Roots of a power series in an interval Let $ a_0 + \frac{a_1}{2} + \frac{a_2}{3} + \cdots + \frac{a_n}{n+1} = 0 $. Prove that $ a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = 0 $ has a real root in the interval $ (0,1) $. I found this problem in some real analysis course notes, but I don't even know how to attack it. I tried to claim that all coefficients are zero, but that is clearly not true: there are many cases where the sum is $0$ but $ a_i \ne 0$ for some $i$. I have tried differentiating/integrating, isolating and substituting some coefficients ($ a_0 $ and $a_n $ were my favorite candidates), and working with factorials (and derivatives and factorials), but I could not find a way to prove it. I have many pages of useless scratch work. Any tips are welcome.
Consider the function $f(x)=a_0x+ \frac{a_1}{2}x^2+\cdots + \frac{a_n}{n+1}x^{n+1}$. Then $f(1)=0$ is given, and $f(0)=0$ is clear. Rolle's theorem now shows there is $x \in (0,1)$ such that $f'(x)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1312930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Solving a Diophantine equation with LTE Show that the only positive integer value of $a$ for which $$4(a^n+1)$$ is a perfect cube for all positive integers $n$ is $1$. Rewriting the equation we obtain: $$4(a^n+1)=k^3$$ It is obvious that $k$ is even, i.e. that its prime factorization contains the prime factor $2$. I tried to solve this with LTE and other methods, but in vain. How can I solve it? Edit: Suppose that a prime factor $p$ of $k$ ($k=p_1^{3q}\cdot p_2^{3q}\cdot ....\cdot p_j^{3q}$) is a divisor of $a+1$ (i.e. $p\mid a+1$); then the greatest power of $p$ that divides $a^n+1$ has to be a multiple of $3$: $$\upsilon_p(a+1)+\upsilon_p(n)=3q$$ but we can note that $3q$ depends not only on $a$ but also on $n$, so this is impossible by hypothesis. But if $p$ isn't a divisor of $a+1$, how can I continue?
No need of LTE. Suppose $$4(a^n+1)=k^3 \tag{$\star$}$$ for some $a>1$. Then, since $a^n+1>2$, we must have $$a^n+1=16b$$ $$ a^n=16b-1 \tag{P(n)}$$ for some $b\ge1$ (in fact $b$ is a cube, but it is an unnecessary information for our purposes). However, if $P(n)$ holds, then multiplying both sides by $a$ we see that $P(n+1)$ also holds if and only if there exists a positive integer $c$ such that $a-1=16c$:$$a^{n+1}=16ab-a \\ a^{n+1}=16ab-(a-1)-1 \\ a^{n+1}=16\left(ab-\frac{a-1}{16}\right)-1.$$ But this yields that $P(n)$ is equivalent to $$a^n-1=16b-2 \\ (a-1)\left(a^{n-1}+a^{n-2}+\cdots+1\right)=16b-2 \\ 16c\left(a^{n-1}+a^{n-2}+\cdots+1\right)=16b-2,$$ which is clearly impossible, hence we conclude $a=1$ alone satisfies $(\star)$ for all positive integers $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show determinant of $\left[\begin{matrix} A & 0 \\ C & D\end{matrix}\right] = \det{A}\cdot \det{D}$ Let $A \in \mathbb{R}^{n, n}$, $B \in \mathbb{R}^{n, m}$, $C \in \mathbb{R}^{m, n}$ and $D \in \mathbb{R}^{m, m}$ be matrices. Now, I have seen on Wikipedia the explanation of why determinant of $\left[\begin{matrix} A & 0 \\ C & D\end{matrix}\right] = \det{A}\cdot \det{D}$, but I still did not get it. Specifically, the explanation is: This can be seen ... from a decomposition like: $\left[\begin{matrix} A & 0 \\ C & D\end{matrix}\right] = \left[\begin{matrix} A & 0 \\ C & I_{m}\end{matrix}\right]\left[\begin{matrix} I_n & 0 \\ 0 & D\end{matrix}\right]$ I understood that the equation is true from the standard rules of matrix-matrix multiplication, but it is still not too clear why this should prove what we want to prove or show. If $A$, $B$, $C$ and $D$ were regular reals (and $I_{i}$ was $1$), then the equation and the explanation would be obvious, because of the standard rules of calculating determinants... But in this case, I cannot understand why the equation shows that the final determinant is $$\det{A} \cdot \det{D}$$ Those 2 matrices $\left[\begin{matrix} A & 0 \\ C & I_{m}\end{matrix}\right]$ and $\left[\begin{matrix} I_n & 0 \\ 0 & D\end{matrix}\right]$ basically could not be triangular or diagonal matrices, from my understanding...
The first thing to note is of course the fact that $\det (AB)=\det A \cdot\det B$. This is well known - if you search you will find several proofs - in some texts this condition is used as an axiom when defining the determinant. So this allows you to assert \begin{equation} \det \begin{bmatrix} A & 0 \\ C & D\end{bmatrix} = \det \begin{bmatrix} A & 0 \\ C & I_{m}\end{bmatrix} \det \begin{bmatrix} I_n & 0 \\ 0 & D\end{bmatrix}.\end{equation} Now it is very simple - if you just apply Laplace expansion (cofactor expansion - which is the usual way to determine the determinant by hand) the answer follows directly, since you can expand \begin{equation}\det \begin{bmatrix} A & 0 \\ C & I_{m}\end{bmatrix}\end{equation} starting at the last column and then the second last column etc. to see that \begin{equation}\det \begin{bmatrix} A & 0 \\ C & I_{m}\end{bmatrix}=1^m \cdot \det A = \det A.\end{equation} Now similarly you can expand \begin{equation} \det \begin{bmatrix} I_n & 0 \\ 0 & D\end{bmatrix} \end{equation} by the first row, then the second, etc. to see that \begin{equation} \det \begin{bmatrix} I_n & 0 \\ 0 & D\end{bmatrix}=1^n \cdot \det D = \det D.\end{equation} If you have difficulty understanding this last part (cofactor expansion), then I think you should spend some time with a good text book to make sure you understand exactly how it works. Basically it allows you to calculate determinants very easily in some cases by making intelligent choices with regards to the rows and columns you are expanding by. In this case in particular there are rows/columns which contain only a single 1 which effectively means you can reduce immediately to determining the cofactor, or minor for that matter since the 1 is in a diagonal position.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$p$(ain)-adic number sequence I am trying to figure out how $p$-adic numbers work and currently am having trouble wrapping my head around how they work, so I made a pun! HAH! Jokes aside, I am working on this question: Show that the sequence $(3,34,334,3334,.....)$ is equal to $2/3$ in $\hat{\mathbb{Z}}_5$. I assume the question means it converges to $2/3$ in the $5$-adic integers. My big issue is that I am struggling to properly attack it. One question that immediately comes to mind is: does $334$ mean $334=3\cdot 5^0+3\cdot 5^1+4\cdot 5^2$, as all $p$-adic numbers can be written in such a form, or what exactly does it mean? If so, how would I go about demonstrating this exactly? Their initial peculiarity, especially the fact that larger powers of $p$ mean smaller distances, is throwing me off quite a bit.
Here, the notation is in the usual base $10$, so the number $a_n = 33\ldots34$ simply denotes $$a_n = 3 \times 10^n + 3 \times 10^{n-1} + \ldots + 3 \times 10 + 4 = 1 + 3\sum_{k = 0}^n 10^k = 1 + \frac{10^{n+1}-1}{3}$$ Now, in the $5$-adic topology, the sequence $(10^n)_{n \ge 0}$ goes to $0$ at infinity because $\left|10^n\right|_5 = \frac{1}{5^n} \longrightarrow 0$, so you get $$\lim_{n \to \infty} a_n = 1 + \frac{0-1}{3} = \frac{2}{3}$$
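If a numerical sanity check helps: since $|3|_5=1$, the $5$-adic distance from $a_n$ to $2/3$ equals $|3a_n-2|_5=|10^{n+1}|_5=5^{-(n+1)}$. A throwaway sketch with exact integer arithmetic (the helper names are mine, not from the post):

```python
def a(n):                          # a_1 = 34, a_2 = 334, a_3 = 3334, ...
    return (10**(n + 1) - 1) // 3 + 1

def v5(m):                         # 5-adic valuation of a nonzero integer
    v = 0
    while m % 5 == 0:
        m //= 5
        v += 1
    return v

for n in range(1, 7):
    print(n, v5(3 * a(n) - 2))     # prints n+1, so |a_n - 2/3|_5 = 5^{-(n+1)} -> 0
```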
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Functions proof. Find all functions $$f: \mathbb{Z} \rightarrow \mathbb{Z}$$ such that $$f(a)^2+f(b)^2+f(c)^2=2f(a)f(b)+2f(b)f(c)+2f(c)f(a)$$ for all integers $$a, b, c$$ satisfying $$a+b+c=0$$ I have no idea how to even begin this one? Any comments?
There are three different families of solutions for this equation. For any constant $k$, we have the following possibilities: $$\begin{align*}f(2n)=0, && f(2n+1)=k && \forall n \in \mathbb{Z}\end{align*}$$ or $$\begin{align*}f(4n)=0, && f(4n+1)=f(4n-1)=k, && f(4n+2)=4k && \forall n \in \mathbb{Z}\end{align*}$$ or $$f(n) = kn^2 \quad \forall n \in \mathbb{Z}$$ We can (and should) check that each of these solutions satisfies the functional equation. I will now show that these are the only possibilities. First, by taking $a=b=c=0$, we obtain that $3f(0)^2=6f(0)^2$, from which we get $f(0)=0$. Now take $a=n, b=-n$ and $c=0$ to get that $$f(n)^2+f(-n)^2=2f(n)f(-n)$$ for all $n$, from which we get $f(-n)=f(n)$ for all $n$. Let $f(1)=k$. I claim that there is a function $g:\mathbb{Z}\to\mathbb{N}$ such that $f(n)=kg(n)^2$ for all $n\in\mathbb{Z}$ We can take $g(0)=0$ and $g(1)=1$. Now suppose that we have a value for $g(n)$. Then taking $a=n+1, b=-n$ and $c=-1$ in the functional equation (noting that $g(-n)=g(n)$), we get $$f(n+1)^2 + k^2g(n)^4 + k^2 = 2kf(n+1)g(n)^2 + 2k^2g(n)^2+2kf(n+1)$$ and so $$(f(n+1)-kg(n)^2-k)^2 = 4k^2g(n)^2$$ giving us $$f(n+1)=k(g(n)^2\pm 2g(n)+1)=k(g(n)\pm 1)^2$$ and so we can take $$g(-n-1)=g(n+1)=g(n)\pm 1$$ which is an integer. I claimed that we can make $g(n)$ a non-negative integer.The only case in which the above makes $g(n+1)$ negative, is if $g(n)=0$ and $g(n+1)=-1$. But then we can take $g(n+1)=1$ instead, which is non-negative. In this way, we can define $g(n)$ for all integers $n$. If $k=0$ then we see that $f(n)=0$ for all integers $n$, which satisfies the functional equation. From now on, we can assume that $k \neq 0$. Now taking $a=2n, b=-n$ and $c=-n$ in the functional equation gives us that $$f(2n)^2+2f(n)^2=4f(2n)f(n)+2f(n)^2$$ for all $n$, and so we get that $f(2n)=0$ or $f(2n)=4f(n)$ for all $n$. In terms of $g$, this gives us that for any integer $n$, either $g(2n)=0$ of $g(2n)=2g(n)$. Now I claim that if there is any integer $m$ such that $f(m)=0$, then $f$ is periodic with period $m$. i.e. $f(n+m)=f(n)$ for all integers $n$. Indeed, take $a=n+m, b=-n$ and $c=-m$ in the functional equation. We get that $$f(m+n)^2+f(n)^2=2f(m+n)f(n)$$ giving us $f(m+n)=f(n)$ as desired. We see that this holds for $g$ as well, since $g(m)=0$ if and only if $f(m)=0$. Now we know that either $g(2)=0$ or $g(2)=2$. If $g(2)=0$ then we see that $f$ is periodic with period $2$, which gives us the family of solutions $$\begin{align*}f(2n)=0, && f(2n+1)=k && \forall n \in \mathbb{Z}\end{align*}$$ Otherwise, we have that $g(2)=2$. We then get that either $g(4)=0$ or $g(4)=4$. If $g(4)=0$, then from $g(n+1)=g(n)\pm 1$, we see that $g(3)=1$. We also have that $f$ is periodic with period $4$, and so in this case we get the family of solutions $$\begin{align*}f(4n)=0, && f(4n+1)=f(4n-1)=k, && f(4n+2)=4k && \forall n \in \mathbb{Z}\end{align*}$$ Finally, suppose that $g(4)=4$. We see that $g(3)=3$. I claim that in this case, $g(n)=n$ for all non-negative integers $n$. This holds for $n \leq 4$. Suppose that the claim holds for all non-negative integers $n\leq 2k$. (For some $k>1$) We will show that it also holds for $n=2k+1$ and $n=2k+2$. We know that $g(2k+2)=0$ or $g(2k+2)=2g(k+1)$. But $g(2k+2) \geq g(2k)-2=2k-2>0$, and so we must have that $g(2k+2)=2g(k+1)=2k+2$. We then easily see that $g(2k+1)=2k+1$, since the only other possibility is $g(2k+1)=2k-1$, which makes it impossible that $g(2k+2)=2k+2$. 
Thus the claim holds for $n=2k+1$ and $n=2k+2$ as well, and hence for all natural numbers $n$ by induction. Thus $f(n)=kn^2$ for all non-negative integers $n$, and since $f(-n)=f(n)$, we get that $f(n)=kn^2$ for all integers $n$, giving us the final family of solutions $$f(n) = kn^2 \quad \forall n \in \mathbb{Z}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Can I show $\int^\infty_0\frac{\cos^2x}{x^2}dx=\infty$ by saying $\int^\infty_0\frac{1-\sin^2x}{x^2}dx=\infty-\frac\pi2$? $${I=\int^{\infty}_{0}\frac{\cos^{2} x}{x^2}\;dx=\infty}$$ Attempt: $$\begin{align}&= \int^{\infty}_{0}\frac{1- \sin^{2} x}{x^2}\, dx \tag1 \\[8pt] &= \infty-\int^{\infty}_{0}\frac{\sin^{2} x}{x^2}\, dx \tag2 \\[8pt] &= \infty-\frac{\pi}{2} \tag3 \end{align}$$ Hence, $I$ is divergent.
This looks a little better, I'd say. $$\int_0^\infty\frac{\cos^2x}{x^2}dx>\int_0^{\pi/4}\frac{\cos^2x}{x^2}dx\ge\frac12\int_0^{\pi/4}\frac{dx}{x^2}=\infty$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sum with Generating Functions Find the sum $$\sum_{n=2}^{\infty} \frac{\binom n2}{4^n} ~~=~~ \frac{\binom 22}{16}+\frac{\binom 32}{64}+\frac{\binom 42}{256}+\cdots$$ How can I use generating functions to solve this?
If $f(z) = \sum\limits_{n=0}^\infty a_n z^n$ converges for $|z| < \rho$, then for any $m \ge 0$, $$\frac{z^m}{m!} \frac{d^m}{dz^m} f(z) = \sum_{n=0}^\infty a_n\binom{n}{m} z^n \quad\text{ for } |z| < \rho.$$ Applying this to $$f(z) = \frac{1}{1-z} = \sum_{n=0}^\infty z^n,$$ we get $$\sum_{n=2}^\infty \binom{n}{2} z^n = \frac{z^2}{2!}\frac{d^2}{dz^2}\frac{1}{1-z} = \frac{z^2}{(1-z)^3}$$ Substituting $z = \frac14$ gives $$\sum_{n=2}^\infty \binom{n}{2} \left(\frac14\right)^n = \frac{(1/4)^2}{(1-1/4)^3} = \frac{4}{27}$$
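A quick numerical sanity check of the value $4/27\approx 0.1481$ (a throwaway sketch using only Python's standard library):

```python
from math import comb

partial = sum(comb(n, 2) / 4**n for n in range(2, 60))
print(partial, 4 / 27)             # both are about 0.148148...
```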
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why can't we determine the limit of $\cos x$ and $\sin x$ at $x=\infty $ or $x=-\infty$? I'm confused about why we can't determine the limit of $\cos x$ and $\sin x$ as $x \to \infty$, even though they are defined over $\mathbb{R}.$ When I use Wolfram Alpha, I get the following result (link to page): which shows only that there are $2$ limits :$-1$ and $ 1 $. Can someone show me why we can't determine $\lim \sin x$ and $\lim \cos x$ at $x=\infty $ or $x=-\infty$ ? Thank you for your help.
They can't have a limit because they're non-constant periodic functions. What Wolfram Alpha outputs are the limit inferior and the limit superior of these functions, which always exist as soon as the functions are bounded. In case you haven't seen these notions yet, by definition: $$\limsup_{x\to\infty}f(x)=\lim_{x\to\infty}\sup_{t\ge x}f(t)$$ Note that the function $g(x)=\sup_{t\ge x}f(t)$ is non-increasing, hence it has a limit if $f$ is bounded from below. Similarly, $$\liminf_{x\to\infty}f(x)=\lim_{x\to\infty}\inf_{t\ge x}f(t)$$ which exists because the function $h(x)=\inf_{t\ge x}f(t)$ is non-decreasing, hence it has a limit if $f$ is bounded from above. Also, one can prove that for any number $a\in[-1,1]$ there exists a sequence $(x_1, \dots, x_n,\dots) \to \infty$ such that $(f(x_1),\dots,f(x_n),\dots)\to a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Some questions about the cartesian product I understand that the cartesian product $A \times B$ is a set with elements of the form $(a,b)$ where $a\in A$, $b\in B$. My question arises from the fact that $\Bbb{R}^3$ was described to me as $\Bbb{R} \times \Bbb{R} \times \Bbb{R}$, but elements of $\Bbb{R}^3$ have the form $(x,y,z)$, while elements of $\Bbb{R} \times \Bbb{R} \times \Bbb{R}$ should have the form $((x,y),z)$ where $(x,y)\in \Bbb{R}^2,z\in\Bbb{R}$. If these sets are different, how do we construct $\Bbb{R}^n$ with elements of the form $(x_1,x_2,...,x_n)$?
$A \times A \times A$ is usually defined as $(A \times A) \times A$ when the Cartesian product of two sets has been defined. This corresponds to your first view of $\mathbb{R}^3$. On the other hand, powers of sets are also defined, namely $A^B$ is defined as the set of all functions from $B$ to $A$. Defining the natural number $n+1$ as the set $\{0,\ldots,n\}$ (and $0 = \emptyset$), as is usual, we can define $A^n$ as the set of all functions from the set $n$ to $A$. It's quite easy to see that we can identify an $f \in A^2 = A^{\{0,1\}}$ with its tuple of values $(f(0), f(1))$ and so with $A \times A$ as a Cartesian product, and similarly $A^3$ with $(A \times A) \times A$ etc. so that (up to obvious bijections; the sets are not the same as pure sets, but can be easily identified using "trivial" or "natural" bijections) we can consider powers of a set as iterated products (like we have for numbers). The view of $\mathbb{R}^n$ as $n$-tuples corresponds to the "power" view most naturally, but as said, is easily identified with iterated Cartesian products as well. Also see this answer, e.g.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1313958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Trace of AB = Trace of BA We can define the trace of $A$ as $\sum_{i} \langle e_i, Ae_i\rangle$ where the $e_i$'s are the standard column vectors, and $\langle x, y\rangle =x^t y$ for suitable column vectors $x, y$. With this set-up, I want to prove that the traces of AB and BA are the same, so it's enough to prove that $$\sum_{i} \langle e_i, ABe_i\rangle =\sum_{i} \langle e_i, BAe_i\rangle$$ but how do I conclude that?
by definition $$\begin{align}trace(AB) &= (AB)_{11}+(AB)_{22}+\cdots+(AB)_{nn}\\ &=a_{11}b_{11}+a_{12}b_{21}+\cdots + a_{1k}b_{k1} \\ &+ a_{21}b_{12}+a_{22}b_{22}+\cdots + a_{2k}b_{k2}\\ &+\vdots \\ &+a_{n1}b_{1n}+a_{n2}b_{2n}+\cdots + a_{nk}b_{kn}\end{align}$$ if you view the sum according to the columns, then you see that it is the $trace(BA).$ therefore, $$trace(AB) = trace(BA). $$
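A quick numerical spot check (a throwaway sketch, with numpy assumed available); the identity also holds for rectangular $A$ and $B$ as long as both products are defined:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))
print(np.trace(A @ B), np.trace(B @ A))   # the two numbers agree up to rounding
```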
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Solve the following PDE using Fourier transform Solve the following 3-D wave equation using Fourier transform $$PDE: u_{tt}=C^2[u_{xx}+u_{yy}+u_{zz}],\qquad-\infty<x,y,z<\infty,\qquad t>0$$ $$BC: u(x,y,z,t)\rightarrow 0\qquad as \qquad r^2=x^2+y^2+z^2\rightarrow \infty \qquad $$ $$IC: u(x,y,z,0)=f(r),\qquad r=\sqrt{x^2+y^2+z^2} , \qquad u_t(x,y,z,0)=0 $$
Just take the 3-dimensional Fourier transform of the equation, and it is essentially solved. Let $\hat{u}$ be the 3-D Fourier transform of $u$; the variables $x,y,z$ transform to $s_1,s_2,s_3$. Taking the Fourier transform of the original equation, we have $$ \hat{u}_{tt}=-C^2(s_1^2+s_2^2+s_3^2)\hat{u}\\ \hat{u}(s_1,s_2,s_3,0)=\hat{f}\\ \hat{u}_t(s_1,s_2,s_3,0)=0 $$ This is a 2nd-order ODE in $t$. It can be solved by classical methods (without loss of generality we assume $C\ge0$): $$ \hat{u}=\hat{f}\cos\left(Ct\sqrt{s_1^2+s_2^2+s_3^2}\right) $$ Then, taking the inverse Fourier transform, you obtain the solution of the original PDE. I omit that step here because it is routine. PS: Note that when you take the inverse Fourier transform, the zero boundary condition (decay at infinity) is used.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can a polynomial equation always be manipulated to give a recurrence formula? Let $p(x)$ be a real (or maybe complex) polynomial. Suppose we wish to (numerically) solve $p(x) = 0$. This can of course be done, for example, with Newton's method, but I was wondering what happens if you "solve for $x$" from the equation somehow and then start iterating. For example, if $p(x) = x^5 + x^2 -1$, you can rearrange to get $$x=\frac{1}{\sqrt{x^3+1}}$$ and starting from $x_0=1$ and recursing (plugging $x$ back into the formula $\frac{1}{\sqrt{x^3+1}}$) gives $x = 0.80873...$ I wonder if it's always possible to derive a recurrence equation from a polynomial that finds its roots starting from some initial value? Clearly, if you start at a root the iteration stays at the root, but perhaps there is some open set around the root which the iteration pulls into the root.
It depends on the right-hand side of $x=g(x)$: you need $g$ to be a contraction mapping, or the initial iterate to lie in a compact set that $g$ maps into itself, or at least the further iterates to lie within such a compact set.
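For the asker's concrete example $x^5+x^2-1=0$, rewritten as $x=g(x)=(x^3+1)^{-1/2}$, one can check that $|g'(x)|<1$ near the root, so the iteration does contract there. A minimal sketch:

```python
def g(x):                           # the rearrangement used in the question
    return 1.0 / (x**3 + 1.0) ** 0.5

x = 1.0
for _ in range(60):
    x = g(x)

print(x)                            # about 0.80873
print(x**5 + x**2 - 1)              # about 0, so x is (numerically) a root
```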
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Laurent expansion of $\frac{2}{(z-1)(3-z)}$ The question asks me to find all the possible Laurent series expansions of $$f(z)=\frac{2}{(z-1)(3-z)}$$ about the origin, so $$z_0 =0 $$ First I convert $f(z)$ into partial fractions to get $$f(z)=\frac{1}{z-1}+\frac{1}{3-z}$$ We can see there are three domains $$D_1: |z|<1$$ $$D_2 :1<|z|<3$$ $$D_3 :|z|>3$$ I first look at $\frac{1}{z-1}$ and write it in the form $-(1-z)^{-1}$. Written in this form, I can see that the expansion converges in $D_1$: applying the Maclaurin series I get $$-\sum^\infty_{n=0} z^n$$ How do I put the expression $\frac{1}{z-1}$ into a form that converges in $D_2$ and in $D_3$?
Hint: $$\frac{1}{z-1}=-\frac{1}{1-z}=\frac{z^{-1}}{1-z^{-1}}$$ and $$\frac{1}{3-z}=\frac{1}{3}\frac{1}{1-\frac{z}{3}} = -\frac{z^{-1}}{1-3z^{-1}}$$ Can you see how to get a Laurent series for $\frac{z^{-1}}{1-z^{-1}}$ so that it converges when $|z|>1$ - that is, when $|z^{-1}|<1$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uniform convergence implies continuity and differentiability? For example: Suppose I have the following series: $$\sum_{k=0}^{\infty}e^{-k}\sin(kt)$$ The Weierstrass M-test shows that the series is uniformly convergent on $\mathbb R$. Does this imply differentiability and continuity on $\mathbb R$ as well?
Yes: since each partial sum is continuous, the uniform convergence of the series implies the continuity of the limit on every compact subset of $\mathbb{R}$, and thus the continuity of the limit sum on $\mathbb{R}$. For the differentiability, you can check that the series of term-by-term derivatives converges normally (hence uniformly) on $\mathbb{R}$, since $$ \sum_{k=0}^{\infty}\left|k\:e^{-k}\cos(kt)\right|\leq\sum_{k=0}^{\infty}k\:e^{-k}=\frac{e}{(e-1)^2}<+\infty, $$ giving the desired differentiability of the limit sum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does notation $(a_1, a_2,\cdots, a_n)$ mean in book "The Classical Introduction to Modern Number Theory"? I am reading the book on number theory and I have problems understanding this definition: DEFINITION: Let $a_1, a_2, \cdots, a_n\in\mathbb{Z}$; we define $$\left(a_1, a_2, \cdots, a_n\right):=\left\{a_1x_1+\cdots+a_nx_n\,:\,x_1,\cdots, x_n\in\mathbb{Z}\right\}$$ If I got it right, then every set $A=(a,b)$ contains all integer numbers. Let me make it a little bit more clear. $$A=(5,7)\\ 0*5+0*7=0\\ (-4)*5+3*7=1\\ (-1)*5+1*7=2\\ (-5)*5+4*7=3$$ et cetera... I can do same thing but swap the $+$ and $-$ signs in coefficients, and get all the negative integers too. And I can do it with every other pair of numbers, and triplet and so on. So my question is, did I get this right?
It is correct that $(5,7)$ is the set of all integers. However, this does not make the definition meaningless. For example, if all $a_i$ are even, clearly each element in $(a_1, \dots, a_n)$ will be even. So the set $(a_1, \dots, a_n)$ is not always the set of all integers. Presumably, the book will proceed to show that $(a_1, \dots, a_n)= d \mathbb{Z}$ for some (non-negative) integer $d$ and this $d$ is a GCD of $a_1, \dots, a_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the residue of this function Find the residue at $z=0$ of the function $f(z)=\frac{\cot z}{z^4}$. I know that $z_0=0$ is a pole of order $k=5$, and $$Res(f;z_0)=\frac{\phi^{(k-1)}(z_0)}{(k-1)!}$$ where $\phi(z)=(z-z_0)^k f(z)$, but I cannot get the right answer, which is $-\frac{1}{45}$.
Laurent series approach: It is easy to see that the Laurent series of $\cot(z)$ around $z=0$ is $$ \cot(z)=\frac 1z - \frac z3 - \frac{z^3}{45} - \cdots $$ Thus $$ \frac{\cot(z)}{z^4}=\frac{1}{z^5} - \frac{1}{3z^3} - \frac{1}{45z} - \cdots $$ Hence, as pointed out in the comments, the order of the pole is $5$ and indeed $\operatorname{Res}\left(\frac{\cot z}{z^4}; 0\right)=a_{-1}=-1/45$. Without using the Laurent series: It is more complicated, since you need to compute 4 derivatives. If you insist, you will need to do as follows: because the pole is of order $5$, \begin{align} \operatorname{Res}\left(\frac{\cot z}{z^4}; 0\right)=a_{-1}=\frac{1}{4!}\lim_{z \to 0}\frac{d^4}{dz^4}\left( z^5 \frac{\cot(z)}{z^4}\right) & =\frac{1}{24}\lim_{z \to 0}\frac{d^4}{dz^4}\left( z\cot(z)\right)\\ & = \left(\frac{1}{24}\right)\left(\frac{-8}{15}\right)\\ & = -\frac{1}{45} \end{align} However it is a little bit messy (click here to see why $\lim \frac{d^4}{dz^4}(z\cot(z))=-8/15$ ). For this case, I of course prefer the Laurent series option.
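As an independent sanity check (not part of the argument above), the residue can be verified symbolically, assuming SymPy is installed:

```python
import sympy as sp

z = sp.symbols('z')
f = sp.cot(z) / z**4
print(sp.residue(f, z, 0))          # -1/45
print(sp.series(f, z, 0, 1))        # shows the Laurent terms 1/z**5, -1/(3*z**3), -1/(45*z), ...
```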
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Assume that f is a one to one function: If $f(x) = x^5 + x^3 +x$ , find $f^{-1}(3)$ and $f(f^{-1}(2))$ If $f(x) = x^5 + x^3 +x$ , find $f^{-1}(3)$ and $f(f^{-1}(2))$. How do I go about solving this? For example, since I am given $f$ inverse, should I set $x^5 +x^3 + x = 3$?
In this case, you don't have to do very much. You can see just by examining the coefficients that $f(1) = 1^5 + 1^3 + 1 = 3$, so $f^{-1}(3) = 1$. Since we're assuming that $f$ is one-to-one, that means precisely that $f(f^{-1}(x)) = x$ for all $x$. So $f(f^{-1}(2)) = 2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $A$ and $A^T$ do not have the same eigenvectors in general I understood that $A$ and $A^T$ have the same eigenvalues, since $$\det(A^T - \lambda I)= \det\left((A - \lambda I)^T\right) = \det(A - \lambda I)$$ The problem is to show that $A$ and $A^T$ do not have the same eigenvectors. I have seen some posts around, but I have not yet understood why. Could you please provide a thorough explanation of why, in general, $A$ and $A^T$ do not have the same eigenvectors?
The matrix $A=\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$, and its transpose $A^T$, have only one eigenvalue, namely $1$. However, the eigenvectors of $A$ are of the form $\begin{bmatrix} a\\ 0 \end{bmatrix}$, whereas the eigenvectors of $A^T$ are of the form $\begin{bmatrix} 0\\ a \end{bmatrix}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1314980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 2 }
Visual representation of matrices I am used to seeing most basic mathematical objects being visually represented (for instance, a curve in the plane divided by the xy axis; the same goes for complex numbers, vectors, and so on....), However, I never saw a visual representation of a matrix. I do not mean the disposition in rows and columns, of course. I would like to know whether they can be graphically represented in some intuitive way. If so, how? Could you illustrate it, say, with a 2x2 quadratic matrix? Thanks a lot in advance
You can represent a $2 \times 2$ matrix $A = \left[\begin{smallmatrix}a & b\\c & d\end{smallmatrix}\right]$ as a parallelogram $Q_A \subset \mathbb{R}^2$ with vertices $(0,0), (a,c), (b,d)$ and $(a+b,c+d)$. If one identifies the plane $\mathbb{R}^2$ with $M^{2\times 1}(\mathbb{R})$, the space of $2 \times 1$ column matrices, then $Q_A$ is the image of the unit square $[0,1] \times [0,1]$ under the linear transformation $$[0,1] \times [0,1] \ni \begin{bmatrix}x \\ y\end{bmatrix} \quad \mapsto \quad \begin{bmatrix}x' \\ y'\end{bmatrix} = \begin{bmatrix}a & b\\c & d\end{bmatrix} \begin{bmatrix}x \\ y\end{bmatrix} \in Q_A $$ Since a linear transformation is uniquely determined by its action on a basis, this provides a faithful representation of the $2 \times 2$ matrices. Under this representation, some geometry-related operations now correspond to familiar geometric shapes, e.g. * *The matrix $\left[\begin{smallmatrix}s & 0\\0 & s\end{smallmatrix}\right]$ represents a scaling of geometric objects. It corresponds to a square of side length $s$, axis aligned with the standard $x$- and $y$-axis. *The matrix $\left[\begin{smallmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \;\cos\theta\end{smallmatrix}\right]$ represents a counterclockwise rotation by the angle $\theta$. It corresponds to the unit square rotated counterclockwise through the angle $\theta$. *The matrices $\left[\begin{smallmatrix}1 & m\\ 0& 1\end{smallmatrix}\right]$ and $\left[\begin{smallmatrix}1 & 0\\ m& 1\end{smallmatrix}\right]$ represent shear mappings in the horizontal and vertical directions. They can be visualized as parallelograms with one pair of sides staying horizontal or vertical, respectively. These shapes provide a useful visual mnemonic for the effects of those matrices (when viewed as transformations of the plane). Finally, one can also use this to introduce the concept of determinant to students. * *What is the determinant of a matrix $A$? It is just the area of $Q_A$. *What does $\det A < 0$ mean? It just means that $Q_A$ has been flipped.
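If it helps to see the picture numerically, here is a small sketch (numpy assumed available) that computes the vertices of $Q_A$ and its signed area for the horizontal shear from the list above:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])                  # horizontal shear with m = 1

corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]]).T
print((A @ corners).T)                      # vertices of Q_A: (0,0), (1,0), (1,1), (2,1)
print(np.linalg.det(A))                     # 1.0 = signed area of Q_A
```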
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Ordering relations smallest/minimal elements definitions In How to Prove It: A Structured Approach, 2nd Edition, page 192, the author introduces the following definitions of smallest and minimal elements of partial orders: Definition 4.4.4. Suppose R is a partial order on a set A, B $\subseteq$ A, and b $\in$ B. Then b is called an R-smallest element of B (or just a smallest element if R is clear from the context) if $\forall$x $\in$ B(bRx). It is called an R-minimal element (or just a minimal element) if $\lnot$$\exists$x $\in$ B(xRb $\land$ x $\neq$ b). Then the author introduces the following example: Let L = { (x, y) $\in$ ℝ x ℝ | x ≤ y }, as before. Let B = { x $\in$ ℝ | x ≥ 7}. Does B have any L-smallest or L-minimal elements? What about the set C = { x ∈ ℝ | x > 7 }? Solution: Clearly 7 ≤ x for every x $\in$ B, so ∀x $\in$ B(7Lx) and therefore 7 is a smallest element of B. It is also a minimal element, since nothing in B is smaller than 7, so ¬∃x ∈ B(x ≤ 7 ∧ x $\neq$ 7). There are no other smallest or minimal elements. Note that 7 is not a smallest or minimal element of C, since 7 $\not\in$ C . According to Definition 4.4.4, a smallest or minimal element of a set must actually be an element of the set. In fact, C has no smallest or minimal elements. The part about B makes perfect sense to me, but I'm confused about: In fact, C has no smallest or minimal elements As far as my understanding goes, C does have an L-smallest element, which happens to be 8 (the example says that 7 is not a smallest/minimal element of C, which is obvious since 7 is not a member of C): $$\forall x \in C(8Lx)$$ Which is obviously true since 8 ≤ 8, 8 ≤ 9, 8 ≤ 10, and so on. Also, 8 looks like an L-minimal element of C as well, since: $$\lnot \exists x \in C(xL8 \land x \neq 8)$$ The only element of C which is smaller or equal to 8 is 8, but 8 = 8. Why does the author says that C has no smallest or minimal elements?
There is no real number which can serve as a smallest or minimal element of $C$. The number $8$ is the smallest natural number in $C$, but $C$ is a set of real numbers: it contains, for instance, $7.5 < 8$, and more generally for every $y\in C$ the number $\frac{7+y}{2}$ also lies in $C$ and satisfies $\frac{7+y}{2}<y$. So no element of $C$ can be smallest or minimal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there an infinite number of positive integer pairs $(p>q)$ such that $3(2^p+1)=(2^q+1)^2$ Is there an infinite number of positive integer pairs $(p>q)$ with $$3(2^p+1)=(2^q+1)^2$$ I add some of my approach: $$3\cdot 2^p+3=4^q+2^{q+1}+1$$ which gives $$2^{2q-1}+2^q=3\cdot 2^{p-1}+1$$ I don't see how to proceed from this point.
Look at the binary digits of both sides. Or consider the remainders $\bmod 4$
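In case the hint is too terse, here is one way it can be unpacked (my own sketch, not the answerer's write-up): for $p\ge 2$ and $q\ge 1$, $$3(2^p+1)=3\cdot 2^p+3\equiv 3 \pmod 4, \qquad (2^q+1)^2=2^{2q}+2^{q+1}+1\equiv 1 \pmod 4,$$ so the two sides can never be equal; and since $p>q\ge 1$ forces $p\ge 2$, there are no such pairs at all (in particular, not infinitely many).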
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Principal ideals containing an ideal in a Noetherian integral domain Let $R$ be a Noetherian integral domain and $I$ a nonzero ideal consisting only of zero divisors on $R/(x)$, where $x$ is a nonzero element of $I$. Could we always find an element $y\notin (x)$ such that $yI\subseteq (x)$? Thanks for any help!
Isn't this obvious? $I$ is contained in the union of associated primes of $R/(x)$, so there is such a prime $\mathfrak p$ with $I\subset\mathfrak p$. Now write $\mathfrak p=\operatorname{Ann}(\hat y)$ for some non-zero $\hat y\in R/(x)$, and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Order $n^2$ different reals, such that they form a $\mathbb{R^n}$ basis I've been trying to solve this linear algebra problem: You are given $n^2 > 1$ pairwise different real numbers. Show that it's always possible to construct with them a basis for $\mathbb{R^n}$. The problem seems intuitive enough, but I couldn't come up with a solution. I tried using the Leibniz formula for determinants, but I can't justify why I should always be able to arrange the numbers in such a way that $\det \ne 0$. I also thought about first ordering the $n^2$ numbers, and then filling up an $n \times n$ matrix in a specific pattern, but I also couldn't close that argument. Anyway, any help in the right direction would be appreciated :)!
We may prove the statement by mathematical induction. The base case $n=2$ is easy and we shall omit its proof. Suppose $n>2$. We call the target matrix $A$ and we partition it in the following way: \begin{align*} A=\left[ \begin{array}{ccccc} \pmatrix{|\\ |\\ \mathbf v_1\\ | \\ |} &\pmatrix{|\\ |\\ \mathbf v_2\\ | \\ |} &\cdots &\pmatrix{|\\ |\\ \mathbf v_n\\ | \\ |}\\ a_{n1}&a_{n2}&\cdots&a_{nn} \end{array} \right] \end{align*} where each $\mathbf v_j=(a_{1j},a_{2j},\ldots,a_{n-1,j})^\top$ is an $(n-1)$-dimensional vector. By induction hypothesis, we may assume that the entries of the submatrix $M_{n1}=[\mathbf v_2,\ldots,\mathbf v_n]$ have been chosen so that $M_{n1}$ is nonsingular. Therefore, by deleting some row $\color{red}{k}$ of the submatrix $[\mathbf v_{\color{red}{3}},\ldots,\mathbf v_n]$, one can obtain an $(n-2)\times(n-2)$ nonsingular submatrix. What does that mean? It means that by varying the choice of $a_{\color{red}{k}1}$, we can always pick $\mathbf v_1$ so that with $M_{n2}=[\mathbf v_1, \mathbf v_3,\ldots,\mathbf v_n]$, we have $\det M_{n2}\ne-\det M_{n1}$. It remains to pick the entries of the last row of $A$ from the $n$ numbers left. By Laplace expansion, $(-1)^{n+1}\det A$ is equal to $$ a_{n1}\det M_{n1} - a_{n2}\det M_{n2} + \ldots\tag{1} $$ where the ellipses denote other summands that do not involve $a_{n1}$ or $a_{n2}$. If we swap the choices of $a_{n1}$ and $a_{n2}$, the signed determiant becomes $$ a_{n2}\det M_{n1} - a_{n1}\det M_{n2} + \ldots\tag{2} $$ instead. Since the difference between $(1)$ and $(2)$ is $(a_{n1}-a_{n2})(\det M_{n1}+\det M_{n2}) \ne 0$, we see that at least one set of choices would make $\det A$ nonzero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Linear Algebra: orientations of vector spaces (problem) This is an exercise from J.Munkres's Analysis on Manifolds: Consider the vectors $\mathbf{a_i}$ in $\mathbb{R}^3$ such that:$$[\mathbf{a_1},\mathbf{a_2},\mathbf{a_3},\mathbf{a_4}]=\begin{bmatrix} 1 &0&1&1 \\ 1&0&1&1\\1&1&2&0\end{bmatrix}$$ Let $V$ the subspace of $\mathbb{R}^3$ spanned by $\mathbf{a_1}$ and $\mathbf{a_2}$. Show that 1) $\mathbf{a_3}$ and $\mathbf{a_4}$ also span $V$, and that 2) the frames $(\mathbf{a_1},\mathbf{a_2})$ and $(\mathbf{a_3},\mathbf{a_4})$ belong to opposite orientation of $V$. I'm having troubles showing part 2): J.Munkres gives the following definitions concerning frames and orientations of vector spaces: Orientations of $n$-dimensional vector spaces. Do those definitions apply in this problem?
I'll only explain part 2. $\{a_1,a_2\}$ forms a basis for $V$. I want to express $a_3, a_4$ as linear combinations of the basis: $a_3=a_1+a_2$, $a_4=a_1-a_2$. Therefore, in the basis $\{a_1,a_2\}$, $a_3, a_4$ can be expressed in component form as $(1,1), (1,-1)$. Observe that $\det \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}=-2$ is negative. By the definition of orientation, the frames $\{a_1, a_2\}$, $\{a_3, a_4\}$ have different orientations. If you like, you can also do it in another way, without choosing a basis: think of the cross product. If $a_1 \times a_2$ points in the same direction as $a_3\times a_4$, then the frames have the same orientation; if the cross products point in opposite directions, the frames have opposite orientations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In $\Bbb R^3$, is there a general principle governing these "visual" angles? I believe most of you have drawn the xyz coordinate system hundreds of times and so have I. You may have drawn it like these, on various occasions: (the reverse directions of the axis are not shown.) All look ok, don't they? But if you draw like these: then they would just look perfectly weird, because you are never supposed to see such things in real life! Thereby arises my question: is there something to which the "visual" angles in these pictures must conform so that things wouldn't look out of place? Like an inequality or, more probably, a group of them? (well, I don't think it's likely to be equations) Here "visual" angles take their literal meaning, say, if $\angle xOy$ looks like $120$ degrees measured from the picture, then its "visual" angle is $120$ degrees despite the fact that $\angle xOy=90$ degrees. If we denote the "visual angles" of $\angle xOy$, $\angle yOz$ and $\angle xOz$ by $\alpha$, $\beta$ and $\gamma$ respectively, then does there exist a general principle which they have to fulfil so that picture won't look visually unacceptable? And by the way, do the "visual" angles have to do with the optics of our eyes?
I don't think there is some general principle for this. If anything, this is entirely a social construct. All of the pictures that you have drawn above are valid, and we could probably come up with various surfaces such that each is most easily visualized using each of these "visual angles". The only thing that might take some getting used to is if you switched the x and y in these images. The reason this would be weird is because we typically choose the orientation given by "right-hand rule" for $\mathbb{R}^3$. So, if you chose the alternate orientation ("left-hand rule" I guess?) it would look strange, but it wouldn't necessarily be wrong in any meaningful sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is every finite field a quotient ring of $\mathbb{Z}[x]$? Is every finite field a quotient ring of $\mathbb{Z}[x]$? For example, how can a field with 27 elements be written as a quotient ring of $\mathbb{Z}[x]$?
Every finite field has an order which is a power of a prime. Every finite field of order $p$ is isomorphic to the integers modulo $p$. Every finite field of order $p^k$ is isomorphic to the ring of polynomials over the field with $p$ elements modulo an irreducible polynomial of degree $k$. There are no other finite fields. So yes: taking $\mathbb Z[x]$ first through the quotient by the ideal generated by a prime $p$, and then by the ideal generated by a polynomial $d$ that is irreducible modulo $p$, yields a field; that is, $\mathbb Z[x]/(p,d)$ is a field with $p^{\deg d}$ elements. For example, a field with $27$ elements is $\mathbb Z[x]/(3,\;x^3+2x+1)$, since $x^3+2x+1$ has no root modulo $3$ and hence is irreducible over the field with $3$ elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
An inequality with $\sum_{k=2}^{n}\left(\frac{2}{k}+\frac{H_{k}-\frac{2}{k}}{2^{k-1}}\right)$ Show that $$\sum_{k=2}^{n}\left(\dfrac{2}{k}+\dfrac{H_{k}-\frac{2}{k}}{2^{k-1}}\right)\le 1+2\ln{n}$$ where $ n\ge 2,\ H_{k}=1+\dfrac{1}{2}+\cdots+\dfrac{1}{k}$. Maybe use this: $\ln{k}<H_{k}<1+\ln{k}$?
We have, by partial summation: $$\begin{eqnarray*} \sum_{k=2}^{n}\frac{H_k}{2^{k-1}}&=&H_n\left(1-\frac{1}{2^{n-1}}\right)-\sum_{k=2}^{n-1}\left(1-\frac{1}{2^{k-1}}\right)\frac{1}{k+1}\\&=&1-\frac{H_n}{2^{n-1}}+\sum_{k=2}^{n}\frac{2}{k\, 2^{k-1}}\tag{1}\end{eqnarray*}$$ hence: $$ \sum_{k=2}^{n}\frac{H_k-\frac{2}{k}}{2^{k-1}}=1-\frac{H_n}{2^{n-1}}\tag{2}$$ and: $$\sum_{k=2}^{n}\left(\frac{2}{k}+\frac{H_k-\frac{2}{k}}{2^{k-1}}\right)\leq 2 H_n-1\leq 2\log n+1.\tag{3}$$
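Identity $(2)$ can be spot-checked with exact rational arithmetic (a throwaway sketch using Python's fractions module):

```python
from fractions import Fraction as F

def H(n):
    return sum(F(1, j) for j in range(1, n + 1))

for n in range(2, 9):
    lhs = sum((H(k) - F(2, k)) / 2**(k - 1) for k in range(2, n + 1))
    rhs = 1 - H(n) / 2**(n - 1)
    print(n, lhs == rhs)            # True for every n
```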
{ "language": "en", "url": "https://math.stackexchange.com/questions/1315922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Critical point of a function - $\Bbb R^n$ Analysis Consider $f=(f_1,f_2,f_3): U \rightarrow \mathbb{R}^3$, a function that is not identically zero, with $f\in C^1$ and of rank $3$ at every point of the open set $U \subset \mathbb{R}^n$, $n \geq 3$. Show that $g(x)= f_1^2(x)+f_2^2(x)+f_3^2(x)$, $x \in U$, has no maximum in $U$. Suggestion: argue by contradiction, considering $\nabla g$; look at the sign of $g$.
There is a geometric intuition behind your question. Observe that $g$ is the composition of $f$ and the square of the distance to the origin in $\mathbb{R}^3$. Namely, let $s : \mathbb{R}^3 \to \mathbb{R}$ be $s(x,y,z) := x^2 + y^2 + z^2$, the square of the distance to $(0,0,0)$. Then $g = s \circ f$. Since $f$ has rank 3 at every point $x_0$, there are vectors $v$ at $x_0$ such that the derivative of $f$ at $x_0$ takes $v$ to a vector $df(v)$ at $f(x_0)$ pointing outside of the sphere of radius $\sqrt{g(x_0)}$ centered at the origin. So $x_0$ cannot be a maximum of $g$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Euclidean domains with multiplicative and super triangular norms I want to prove that if the norm function $N$ of a Euclidean domain $R$ satisfies the conditions * *$N(ab)=N(a)N(b)$ *$N(a+b) \le \max\{N(a),N(b)\}$ then $R$ is a field or $R$ is a homomorphic image of a polynomial ring $F[x]$ where $F$ is some field. I know the converse is true, for if $R$ is a field then the trivial norm satisfies the conditions; and if $R=F[x]$ then we can define $N(f)=2^{\deg(f)}$. I can prove that condition (1) implies that $a$ is a unit iff $N(a)=1$, but I have no good idea of how to use condition (2). Can anybody help me?
This is a sketch of the proof; it is a nice exercise from N. Jacobson, Algebra, Vol. 1, p. 149. From (1) we can deduce that $a$ is invertible iff $N(a)=1$. From (2) we can deduce that the sum of two invertible elements is again invertible or $0$. Hence the set of invertible elements together with $0$ is a field, which we call $F$. If $R=F$ we are done. Otherwise, let $t \in R$ be a non-invertible element of least possible norm. Using division with remainder together with (1) and the minimality of $N(t)$, we can get a base-$t$ representation for any element of $R$ with coefficients from $F$, so we obtain a surjective homomorphism from the polynomial ring $F[x]$ onto $R$ sending $x$ to $t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected number of dice rolls for a sequence of dice rolls ending at snake eyes If I roll a pair of dice repeatedly and stop only when I get snake eyes (both dice show 1), what is the expected number of dice rolls that will occur? I know the answer is 36, but I'm having trouble understanding why that is the answer.
That happens because the mean of a geometric distribution with $p=\frac{1}{36}$ is exactly $\frac{1}{p}=36$. The probability that a double one occurs at the $k$-th throw is given by: $$ \mathbb{P}[X=k] = \frac{1}{36}\left(1-\frac{1}{36}\right)^{k-1},\tag{1}$$ hence: $$ \mathbb{E}[X]=\sum_{k\geq 1}k\cdot\mathbb{P}[X=k]=\frac{1}{36}\sum_{k=1}^{+\infty}k\left(\frac{35}{36}\right)^{k-1},\tag{2}$$ but since for any $|x|<1$ we have: $$ \sum_{k\geq 0}x^k = \frac{1}{1-x},\tag{3}$$ by differentiating both sides of $(3)$ with respect to $x$ we have: $$ \sum_{k\geq 1}k x^{k-1} = \frac{1}{(1-x)^2}\tag{4}$$ so the claim follow by evaluating $(4)$ at $x=\frac{35}{36}$.
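For intuition, the value $36$ is easy to confirm with a quick Monte Carlo simulation (a throwaway sketch, not part of the derivation):

```python
import random

def rolls_until_snake_eyes():
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
            return n

trials = 200_000
print(sum(rolls_until_snake_eyes() for _ in range(trials)) / trials)   # close to 36
```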
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Regarding chains and antichains in a partially ordered infinite space I've been given this as an exercise. If P is a partially ordered infinite space, there exists an infinite subset S of P that is either chain or antichain. This exercise was given in the Axiom of Choice section of the class. I answered it using Zorn's lemma but I'm not 100% sure. Can you give me any hints? I used Hausdorff's maximal principle and said that there would be a chain that is maximal. And I used Zorn's lemma on the partially ordered subspace of P that contains all the subsets of P that are antichains. But I don't know if these solutions are correct and even if they are I'm stuck at the infinite part. I was informed that this question is missing context or other details. I'm sorry but that was exactly how it was given to me so I don't know how to correct it.
HINT: The easiest argument is to use the infinite Ramsey theorem; you need just two colors, one for pairs that are related in $P$, and one for pairs that are incomparable in $P$. There is a fairly easy proof of the theorem at the link.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding Function's Extension and Its Unique Existence. Let $$A= \left\{\frac j{2^n}\in [0,1] \mid n = 1,2,3,\ldots,\;j=0,1,2,\ldots,2^n\right\} $$ and let $$ f:A\rightarrow R $$ satisfy the following condition: There is a sequence $ \epsilon_n \gt 0 $ with $\sum_{n=1}^\infty \epsilon_n \lt \infty $ and $$\left|f\left(\frac{j-1}{2^n}\right)-f\left(\frac j{2^n}\right)\right| \lt \epsilon_n $$ for all $ n\gt0, j=1,2,\ldots,2^n $. Prove that $f$ has a unique extension to a continuous function from $[0,1]$ to $R$. I think it is maybe related to the contraction mapping principle, but I could not apply that theory to this problem. Is there anybody who can prove this? p.s. The formatting was helped by @Math1000. I appreciate that.
An alternate answer that uses some of Alex Ravsky's method, but sticks to analysis methods, and in my opinion provides a more constructive demonstration: For $x \in [0, 1]$, let $$x = \sum_{k = 1}^{\infty} \omega_{k}(x) 2^{-k},\; \omega_{k} \in \{0, 1 \}$$ i.e. suppose $(\omega_{k}(x))_{k \in \mathbb{N}}$ is the binary expansion of $x$, and define the extension of $f$ by $$f(x) = \lim_{n \to \infty} f\left(\sum_{k = 1}^{n} \omega_{k}(x) 2^{-k}\right).$$ We claim first that the sequence converges by showing it to be Cauchy for a given $x \in [0, 1]$, which we now fix. Let $\epsilon > 0$, and pick $N \in \mathbb{N}$ such that if $n \geq N$, then $\sum_{k = n}^{\infty} \epsilon_{k} < \epsilon$. Pick $m, n \geq N$. Then as Ravsky points out, \begin{align*} \left|f\left(\sum_{k = 1}^{n}\omega_{k}(x) 2^{-k}\right) - f\left(\sum_{k = 1}^{m}\omega_{k}(x) 2^{-k}\right)\right| & < \epsilon \end{align*} Boom. Cauchy, and thus convergent, so our extension is defined. Now, let $|y - x| < 2^{-(N + 2)}$. Having chosen $x$ and $y$ close enough, we have that $\omega_{k}(x) = \omega_{k}(y)$ for all $k \leq N$. Thus $$|f(x) - f(y)| \leq \sum_{k = N + 1}^{\infty} \epsilon_{k} < \epsilon,$$ completing the proof of continuity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does an integral change signs when flipping the boundaries? Let us define a very simple integral: * *$f(x) = \int_{a}^{b}{x}$ for $a,b\ge 0$. Why do we have the identity $\int_{a}^{b}{x} = -\int_{b}^{a}{x}$? I drew the graphs and thought about it but to me integration, at least in two-dimensions, is just taking the area underneath a curve so why does it matter which direction you take the sum?
There is nothing to prove here. It is just a definition (more precisely, just a notation). Note that for $b \geq a$, from the point of view of Lebesgue integration, the value of the integral only depends on the domain $[a,b]$ (and does not depend on whether we consider the function from $a$ to $b$ or from $b$ to $a$), and this value is written as $\int _a^b f$. The point is that if we make such a definition (or notation), then we can write the calculations involving the change of variable formula for integration in 1D more conveniently. More precisely, if we have such a definition, then we do not need to consider separately the two cases, when the change of variable function is monotonically increasing or decreasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 10, "answer_id": 9 }
Fourier transform of the principal value distribution I would like to compute the Fourier transform of the principal value distribution. Let $H$ denote the Heaviside function. Begin with the fact that $$2\widehat{H} =\delta(x) - \frac{i}{\pi} p.v\left(\frac{1}{x}\right).$$ Rearranging gives that the principal value distribution is, up to a constant $$\delta(x) - 2\widehat{H}.$$ If we take the Fourier transform of this, we get $$1- 2H(-x) ,$$ which seems wrong. First, why does this method produce nonsense? Second, what is a good way to do this computation?
Another solution The distribution $\mathrm{pv} \frac{1}{x}$ satisfies $x \, \mathrm{pv} \frac{1}{x} = 1.$ Therefore, $$ 2\pi \, \delta(\xi) = \mathcal{F} \{ 1 \} = \mathcal{F} \{ x \, \mathrm{pv} \frac{1}{x} \} = i \frac{d}{d\xi} \mathcal{F} \{ \mathrm{pv} \frac{1}{x} \} $$ Thus, $ \mathcal{F} \{ \mathrm{pv} \frac{1}{x} \} = -i \pi \, \operatorname{sign}(\xi) + C $ for some constant $C$. But $\mathrm{pv} \frac{1}{x}$ is odd so its Fourier transform must also be odd, and since $-i \pi \, \operatorname{sign}(\xi)$ is odd while $C$ is even, we must have $C=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What exactly is a trivial module? Yes, this is a quite basic question, but I have to admit to being absolutely confused about this notion. Searching on the web, I managed to find two possible definitions of trivial modules, referring actually to two different mathematical objects. The first one is just the singleton set with the only possible module structure, also called the zero module. The second one, which is the definition that is leading me to be confused and a bit stuck after so many hours of studying, is the following: Let $A$ be a ring, $M$ an abelian group. $M$ is called a trivial module if it is a module endowed with the trivial action. But...what exactly is a trivial action? Yes, of course the first thing I think of is the trivial $ax=x$ for each $x \in M$, but there is something wrong with it, because directly from the axioms of modules I have: for each $x \in M, (1+1)x = x$ (because the action is trivial), and $(1+1)x = 1x+1x=x+x$, which implies $x = 0$, i.e. $M$ is the group $0$. Please, could you help me in understanding what I am missing? Thank you very much!!! Ps: this question is related to this one (From $G$-mod to $\mathbb{Z}G$-mod and a related question.) where I believed I had understood this definition :)
You seem to be just confusing two different notions of module. (1) A $G$-module is an abelian group $M$ with an action of a group $G$ compatible with addition. (This can well be the trivial action.) (2) An $R$-module, or just module, is an abelian group $M$ together with scalar multiplication by the elements of the ring $R$, similarly to the case of vector spaces. (This can also be understood as an instance of the former.) To make matters worse, it is also not uncommon to have an $R$-module with an additional group action.
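To make this concrete (my own illustration, not part of the answer above): if a group $G$ acts trivially on an abelian group $M$, i.e. $g\cdot m=m$ for all $g\in G$ and $m\in M$, then the induced $\mathbb{Z}G$-module structure is $$\Big(\sum_{g\in G} a_g\, g\Big)\cdot m \;=\; \sum_{g\in G} a_g\,(g\cdot m) \;=\; \Big(\sum_{g\in G} a_g\Big)\, m .$$ The unit of $\mathbb{Z}G$ is $1\cdot e$, and indeed $(1\cdot e)\cdot m=m$, so the module axioms hold. The computation $(1+1)x=x$ in the question would only apply if every ring element (such as $1+1=2\cdot e$) acted as the identity; the trivial action only requires the group elements to act as the identity, and $2\cdot e$ then acts as multiplication by $2$.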
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Solving for n in the equation $\left ( \frac{1}{2} \right )^{n}+\left ( \frac{1}{4} \right )^{n}+\left ( \frac{3}{4} \right )^{n}=1$ Solving for $n$ in the equation $$\left ( \frac{1}{2} \right )^{n}+\left ( \frac{1}{4} \right )^{n}+\left ( \frac{3}{4} \right )^{n}=1$$ Can anyone show me a numerical method step-by-step to solve this? Thanks
The function : $$y=0.25^x+0.5^x+0.75^x-1$$ is decreasing. For example $y(1)=\frac{1}{2}$ and $y(2)=-\frac{1}{8}$. So, the root for $y=0$ is between $x=1$ and $x=2$. In this case, among many numerical methods, the dichotomic method is very simple. The successive values $x_k$ are : $$x_{k+1}=x_{k}+\frac{\delta_k}{2^k}$$ where $\delta_k=\pm 1$; the sign is $+$ if $y_k=(0.25^{x_k}+0.5^{x_k}+0.75^{x_k}-1) >0$ and the sign is $-$ if $y_k<0$. ALGORITHM: $x:=1$ $d:=1$ repeat $ \quad \quad d:=\frac{d}{2}$ $ \quad \quad y:=0.25^x+0.5^x+0.75^x-1$ $ \quad \quad $ if $y>0$ then $x:=x+d$ else $x:=x-d$ until $d<10^{-15} $ (or another limit depending on the wanted accuracy). RESULT : $x=1.73050735785763$ Of course, the convergence is slower than with the Newton-Raphson method for example. But, in both cases, the time of computation is so small that this is negligible. On the other hand, the time spent in programming the algorithm is smaller with the dichotomic method : that is the most important point in practice.
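For readers who want to run the algorithm above, here is a small Python version (my own sketch; the bracket $[1,2]$ and the tolerance come from the answer, the function and variable names are arbitrary choices):

```python
def f(x):
    # y = 0.25^x + 0.5^x + 0.75^x - 1, a decreasing function of x
    return 0.25**x + 0.5**x + 0.75**x - 1.0

def bisect(lo=1.0, hi=2.0, tol=1e-15):
    # f(lo) > 0 > f(hi), so the root lies in [lo, hi]; halve the bracket each step
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(bisect())  # prints about 1.7305073578576..., matching the RESULT above
```

The bisection gains one binary digit of the root per iteration, which is exactly the $\delta_k/2^k$ update written above.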
{ "language": "en", "url": "https://math.stackexchange.com/questions/1316957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 2 }
Why the lens space L(2,1) is homeomorphic to $\mathbb{R}P^3$? According to one definition of lens space $L(p,q)$, which is gluing two solid tori with a map $h:T^2_1 \rightarrow T^2_2$. And $h(m_1)=pl_2+qm_2$, $l_i$ means longitude and $m_i$ means meridian of the boundary torus. I cannot understand why $L(2,1)$ is homeomorphic to $\mathbb{R}P^3$. Can someone give me some hints? Thank you.
Here's a sort of diagrammatic argument from an old homework assignment: Construct $\mathbb{R}P^3$ as the quotient of $B^3 \subset \mathbb{R}^3$ under the antipodal map $a: \partial B^3 \to \partial B^3$. Let $K$ be the knot in $\mathbb{R}P^3$ obtained as the quotient of the vertical segment $V=\{(0,0,z) \in \mathbb{R}^3 : -1 \leq z \leq 1\}$ in $B^3$. As depicted in the figure below, a normal neighborhood $N(K)$ of $K \subset \mathbb{R}P^3$ is a solid torus. We claim its complement is also a solid torus. As depicted, there is a simple closed curve in $\partial N(K)$ that bounds an embedded disk $(D,\partial D) \subset (\mathbb{R}P^3 \setminus \mathring{N}(K), \partial N(K))$. Manipulating the identification diagram (not shown, but achieved by cutting $B^3 \setminus V$ along the disk and gluing via the antipodal map on $S^2 \setminus \{(0,0,\pm 1)\}$), we can see that the complement of this disk in $\mathbb{R}P^3 \setminus N(K)$ is a 3-ball. It follows that $\mathbb{R}P^3 \setminus \mathring{N}(K)$ is a solid torus. As discussed below the figure, we can apply a half-twist to $\partial N(K)$ to identify it with a standard torus, taking $\partial D$ to a $(2,1)$-curve in $\partial(S^1 \times D^2)$. This decomposition corresponds to a homeomorphism from $\mathbb{R}P^3$ to the lens space $L(2,1)$. $\square$ Left: $\mathbb{R}P^3$ is depicted as a quotient of the closed 3-ball $B^3$ via the antipodal map $a: \partial B^3 \to \partial B^3$. The vertical line depicts the knot $K \subset \mathbb{R}P^3$. Middle: The knot's normal neighborhood $N(K)$ situated inside of $\mathbb{R}P^3$. The boundary of $N(K)$, seen here as a dotted cylinder, is a torus. We can visualize a deformation retract of $\mathbb{R}P^3 \setminus N(K)$ to the equatorial $\mathbb{R}P^1=S^1$ by taking successively "larger" neighborhoods $N(K)$. Right: We see a simple closed curve in $\partial N(K)$ which bounds a disk in $\mathbb{R}P^3 \setminus K$. The antipodal map sends the "upper circle" to the "lower circle" with a rotation by $\pi$, so the depicted curve becomes a $(2,1)$-curve in $\partial N(K)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Special properties in the direct solving of sparse symmetric linear systems In the area of computational solving of large sparse linear systems, some solvers specialize only on symmetric sparse matrices, be it positive definite or indefinite as compared to general (non-symmetric) sparse systems solver. What mathematical properties does symmetric sparse matrix possess that makes it computationally more efficient to solve as compared to a nonsymmetric matrix?
A great advantage of the sparse Cholesky factorization for symmetric and positive definite matrices is that you do not need to do pivoting for numerical stability but only focus on the symbolic diagonal pivoting to minimize fill-in. So you can completely separate the symbolic and numeric factorization and reuse the structure of the triangular factor when only the values of the original matrix change but not its structure. This is not the case for symmetric indefinite and nonsymmetric sparse direct solvers where you need to do pivoting not only to reduce fill-in but also for numerical stability reasons. Hence the symbolic and numeric factorization are not completely separable.
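To illustrate the symbolic/numeric split described above, here is a sketch in Python using CHOLMOD through scikit-sparse (the sksparse.cholmod module). The matrices are made-up examples and the API names are taken from that package's documentation, so treat this as an assumption-laden illustration of the idea rather than a definitive recipe:

```python
import numpy as np
import scipy.sparse as sp
from sksparse.cholmod import analyze  # CHOLMOD wrapper from scikit-sparse

n = 200
# Two SPD matrices with the SAME sparsity pattern but different numerical values
P = sp.random(n, n, density=0.02, format="csc", random_state=0)
A1 = (P @ P.T + n * sp.identity(n)).tocsc()
A2 = (A1 + 50 * sp.identity(n)).tocsc()   # same structure, new values

symbolic = analyze(A1)        # symbolic step: fill-reducing ordering and structure of L only
f1 = symbolic.cholesky(A1)    # numeric factorization reusing the symbolic analysis
f2 = symbolic.cholesky(A2)    # reused again: no stability pivoting, no new symbolic work

b = np.ones(n)
x1 = f1.solve_A(b)            # triangular solves with the cached factors
x2 = f2.solve_A(b)
```

For an LU-based solver on a nonsymmetric (or symmetric indefinite) matrix this separation is much weaker, because the pivot order, and hence the structure of the factors, can change with the numerical values.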
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find $\lim_{x \to 0}\frac{\cos(ax)-\cos(bx) \cos(cx)}{\sin(bx) \sin(cx)}$ How to find $$\lim\limits_{x \to 0}\frac{\cos (ax)-\cos (bx) \cos(cx)}{\sin(bx) \sin(cx)}$$ I tried using L'Hospital's rule but it's not working! Help please!
Before using L'Hospital, turn the products to sums $$\frac{\cos(ax)-\cos(bx)\cos(cx)}{\sin(bx)\sin(cx)}=\frac{2\cos(ax)-\cos((b-c)x)-\cos((b+c)x)}{\cos((b-c)x)-\cos((b+c)x)}.$$ Then by repeated application $$\frac{2a\sin(ax)-(b-c)\sin((b-c)x)-(b+c)\sin((b+c)x)}{(b-c)\sin((b-c)x)-(b+c)\sin((b+c)x)},$$ and $$\frac{2a^2\cos(ax)-(b-c)^2\cos((b-c)x)-(b+c)^2\cos((b+c)x)}{(b-c)^2\cos((b-c)x)-(b+c)^2\cos((b+c)x)}.$$ The limit is $$\frac{2a^2-(b-c)^2-(b+c)^2}{(b-c)^2-(b+c)^2}=-\frac{a^2-b^2-c^2}{2bc}.$$
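A quick numerical sanity check of the closed form (my own addition; the values of $a,b,c$ below are arbitrary choices):

```python
import math

a, b, c = 1.0, 2.0, 3.0
expected = -(a**2 - b**2 - c**2) / (2 * b * c)  # closed form derived above; equals 1.0 here

def ratio(x):
    return (math.cos(a*x) - math.cos(b*x) * math.cos(c*x)) / (math.sin(b*x) * math.sin(c*x))

for x in (0.1, 0.01, 0.001):
    print(x, ratio(x))  # tends to `expected` as x -> 0
```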
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Laplace equation on a disk I have the Laplace equation $$\Delta u=\frac{1}{r} \frac{\partial}{\partial r } \left(r \frac{\partial u}{\partial r} \right)+\frac{1}{r^2} \frac{\partial^2 u }{\partial \theta^2}=0$$ on a unit disk $$0<r \leq 1$$ We note that we have the boundary condition for R $$|u(0,\theta)|<\infty \rightarrow |R(0)|<\infty$$ and the boundary conditions for $\theta$ are $$\Theta(- \pi)=\Theta(+ \pi)$$ and $$\Theta'(- \pi)=\Theta'(+ \pi)$$ I assume the solution is of the form $$u(r,\theta)=R(r)\Theta(\theta)$$ and subbing this into $$\Delta u$$ I get 2 ODE's, $$-\frac{r}{R(r)}(rR'(r))'=k$$ and $$\frac{\Theta''(\theta)}{\Theta(\theta)}=k$$ I first look at the case for when $$k=p^2>0$$ This gives $$\Theta=ae^{p \theta}+be^{-p \theta}$$ So by subbing in the boundary conditions for $\theta$, I get $$ae^{p \pi}+be^{-p \pi}=ae^{-p \pi}+be^{p \pi}$$ Why does this give me no solution? When I look at the case $$k=0$$ the corresponding ODE for R(r) gives $$R=c_1 \ln r+c_2$$ Why does this being subject to the boundary condition $|R(0)|<\infty$ give me $R=c_2$? and why do I get the solution for this case being $u_0 (r,\theta)=1$?
Why does this give me no solution? You have $ae^{p\pi}+be^{-p\pi}=ae^{-p\pi}+be^{p\pi}$. That is equivalent to $a(e^{p\pi}-e^{-p\pi})=b(e^{p\pi}-e^{-p\pi})$, which implies either $a=b$ or $e^{p\pi}=e^{-p\pi}\iff p=ki$, but in our case $p^2=k>0$, so that $k$ must be 0. If $p\neq0$ you have no solution, with $p=0$ you get a constant function which is found also in the $k=0$ case, so nothing from this case that you don't get from other cases. But if $p\neq0$, the solution becomes $\Theta(\theta)=\frac{a}{2}\cosh(p\theta)$, which satisfies one boundary condition (the one on $\Theta$, being the $\cosh$ an even function), but not the other, as $\Theta'(\theta)=\frac{ap}{2}\sinh(p\theta)$, which is odd. Why does this being subject to the boundary condition $|R(0)|$ give me $R=c_2$? Well, suppose otherwise. If $c_1\neq0$, then for $r\to0$ we have $|R(r)|\to+\infty$, but the boundary condition states otherwise. So $c_1=0$ and $R(r)=c_2$. And why do I get the solution for this case being $u_0(r,\theta)=1$? I'd rather say you get a constant function. The $R$ part we have already shown to be constant, and the $\theta$ part has a zero second derivative and must thus be $\Theta(\theta)=a\theta+b$, which satisfies the boundary conditions iff $a=0$, and ends up being forced by the boundary conditions to be $\Theta(\theta)\equiv b$, a constant. $u_0(r,\theta)=R(r)\Theta(\theta)=b\cdot c_2$, so it is constant. Why it is one, I wouldn't know. Maybe there is some kind of normalization imposed by whatever you are following. Hope this answers you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which mathematics theory studies structures like these? Let $A_p$ be the set of all numbers whose prime factors are all in the first $p$ prime numbers. Example: $A_2= \{2,3,4,6,8,9,12,16,18,\ldots \}$ (all of these numbers can be generated by repeatedly multiplying only $2$ and $3$, the first two prime numbers). As $p \to \infty$ the intuition is that this set would cover all of the natural numbers. Would that make this set $(A_\infty)$ equivalent in some sense to $\mathbb N$ itself? Does a theorem proved on $\mathbb N$ imply that it is also true on the set that I defined (and vice versa)? Is there any branch of mathematics that deals with these kinds of structures? Is there a specific theorem that specifically deals with the question mentioned above? Thanks in advance, and sorry for my poor mathematical literary skills.
You haven't really defined a meaning for your $A_\infty$, unless you think "the first infinity prime numbers" make sense. If you choose a definition for $A_\infty$ -- such as the union of all $A_p$ for finite $p$, $$ A_\infty = \bigcup_{p\in\mathbb N} A_p $$ then it will be easy to show that $A_\infty=\mathbb N$. (Note that you really ought to let $1$ be an element of each of your $A_p$s. Since $1$ has no prime factors, in particular it has no prime factors outside the first $p$ prime numbers, and you can generate it by multiplying together none of the first $p$ primes). (I'm also assuming for simplicity that $0$ does not count as a "natural number" for you). But there's no general theory that says that if you have defined some objects $B_n$ for $n\in\mathbb N$, then the notation $B_\infty$ must mean such-and-such. That's a definition you have to decide on in each particular case. Mathematically, it would be perfectly valid to define $A_\infty$ to mean the set $\{42,117\}$ -- there is no automatic relation between $A$ with a number as a subscript and $A$ with an "$\infty$" symbol as a subscript. The worst that can happen is that your readers will be confused.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
When does the cardinality disappear? Pardon me, if this question sounds stupid. I am learning real analysis on my own and stumbled on this contradiction while reading this -- http://math.kennesaw.edu/~plaval/math4381/setseq.pdf. I appreciate any help or pointers. Consider this example: We choose an open interval of $(0, \frac{1}{n})$ for each n from N, the set of natural numbers i.e. N = {1, 2, 3, ...}. Let us call this interval I(n) as in the pdf. Now, let us form intersections of all the I(n) we have at any point. I(1) is (0, 1), which has the cardinality of R. Then we form I(2) and look for intersection between I(1) and I(2), which is I(2). When I use n = 3, I get I(3) and the intersection of I(1), I(2), and I(3) is I(3). But, I can transform $(0, \frac{1}{2})$ and then map it back to R. So |I(2)| = |I(1)| -- that is, the cardinalities of I(1) and I(2) are the same. Now, I can repeat this argument for each n. Thus, |I(n)| = |I(1)| for all n in N. Thus, the intersection of {I(k)}, k in {1, 2, 3, ..., n}, has the cardinality of R. But, as $n \to \infty$, the intersection is the empty set, whose cardinality is 0. So, I see that the cardinality of intersections of I(n) went from R to 0. But at every step, the cardinality does not change. Nor do I see it decrease. My question then is -- how does the cardinality simply disappear when I don't see it changing at any step? Did I make one or more wrong assumptions?
In essence this boils down to a question about the interchange of the limit and the cardinality function. In general it is not true that $$ \lim_{n\to\infty} card(C_n) = card\left(\lim_{n\to\infty} C_n\right) $$
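Spelled out for the intervals in the question (my own elaboration of the answer above): each $I(n)=(0,\tfrac1n)$ has the cardinality $\mathfrak c$ of $\mathbb R$, and so does every finite intersection $I(1)\cap\dots\cap I(n)=I(n)$, so $$\lim_{n \to \infty} \operatorname{card}\big(I(1)\cap\dots\cap I(n)\big)=\mathfrak c ,\qquad\text{while}\qquad \operatorname{card}\Big(\bigcap_{n=1}^{\infty} I(n)\Big)=\operatorname{card}(\varnothing)=0,$$ because any $x>0$ is excluded from $I(n)$ as soon as $\tfrac1n<x$, and $0$ itself is in no $I(n)$. Nothing "disappears step by step": the infinite intersection is simply a different set from every finite one, and cardinality is not continuous under this kind of limit.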
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
What is the definition of differentiability? Some places define it as: If the Left hand derivative and the Right hand derivative at a point are equal then the function is said to be differentiable at that point. Others define it based on the condition of the existence of a unique tangent at that point. Which one of these is correct? Or are both of them wrong? According to the first definition, the curve need not be continuous at that point and can have a point discontinuity or a hole, like this: Moreover it doesn't stop a curve, with a jump discontinuity but with same slope on both sides of it, from being differentiable. And lastly, if a function is not defined at a point, then is the function discontinuous there too?
A function is differentiable (has a derivative) at point x if the following limit exists: $$ \lim_{h\to 0} \frac{f(x+h)-f(x)}{h} $$ The first definition is equivalent to this one (because for this limit to exist, the two limits from left and right should exist and should be equal). But I would say stick to this definition for now as it's simpler for beginners. The second definition is not rigorous, it is quite sloppy to say the least. Also, there's a theorem stating that: if a function is differentiable at a point x, then it's also continuous at the point x.
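A standard example (my addition) of how the one-sided limits can disagree: take $f(x)=|x|$ at $x=0$. Then $$\lim_{h\to0^+}\frac{|0+h|-|0|}{h}=\lim_{h\to0^+}\frac{h}{h}=1, \qquad \lim_{h\to0^-}\frac{|0+h|-|0|}{h}=\lim_{h\to0^-}\frac{-h}{h}=-1,$$ so both one-sided derivatives exist but differ, the two-sided limit does not exist, and $f$ is not differentiable at $0$, even though it is continuous there. This also shows that the converse of the theorem quoted at the end fails: continuity does not imply differentiability.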
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
A finite set is closed Question: Prove that a finite subset in a metric space is closed. My proof-sketch: Let $A$ be a finite set. Then $A=\{x_1, x_2,\dots, x_n\}.$ We know that $A$ has no limit points. What's next? Definition: A set $E$ is called a closed set if $E$ contains all of its limit points. Context: Principles of Mathematical Analysis, Rudin
If $M$ is a metric space then every finite subset $A =\{x_1, \ldots, x_n\} \subseteq M$ is closed. In fact, if $a \notin A$ then $d(a,A)$ is the least of the numbers $d(a,x_1) ,\ldots, d(a,x_n)$, and thus $d(a,A) > 0$. Hence the open ball $B\big(a, d(a,A)\big)$ contains no point of $A$, so the complement of $A$ is open and $A$ is closed. Equivalently, no point $a \notin A$ can be a limit point of $A$ (and no point of $A$ can be either, since $A$ is finite), so $A$ contains all of its limit points, which is exactly Rudin's definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 7, "answer_id": 2 }
Prove that if A is singular, then adj(A) is also singular Prove that if A is singular, then adj(A) is also singular. How do you prove this without proving by contradiction?
From the basic equation: $A\,\text{adj}(A)=\det(A)I$ we have: $$ \det(A\,\text{adj}A)=\det(\det(A)I)\\ \det(A)\det(\text{adj}(A))=\det(A)^n \det(I)=\det(A)^n $$ When $A$ is invertible we may divide by $\det(A)$ and obtain $\det(\text{adj}(A))=\det(A)^{n-1}$. Both sides of this identity are polynomials in the entries of $A$ and they agree on the dense set of invertible matrices, so (over $\mathbb R$ or $\mathbb C$, by continuity) the identity $\det(\text{adj}(A))=\det(A)^{n-1}$ holds for every $A$. Since $A$ is singular and $n\ge 2$, $$ \det(\text{adj}(A))=\det(A)^{n-1}=0 $$ and then $\text{adj}(A)$ is also singular
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
The Standardization of Matrix by Vector Multiplication I apologize for the trivialness of my question but it has been bugging me as to why the standard for multiplying a matrix by a vector that will give a column matrix means that the vector has to be a column matrix? To me it seems more natural to write the vector horizontally and then match the same components to the matrix which will give the same results. Was it just the preference of the person who defined it or is there some reason for choosing the notation? Not only does it make matrix by column matrix (vector) multiplication awkward but it also seems to produce an unintuitive way of multiplying matrices by matrices and vectors by vectors (specifically the dot product). Please respond without using matrix multiplication in your answer, if possible, because matrix multiplication is often defined by vector notation. Thanks, Jackson
Let $u$ be one of those dreaded column vectors. Then $$ (A u)^\top = u^\top A^\top \Rightarrow A u = (u^\top A^\top)^\top $$ This means one can get the same result by left multiplying the transposed vector $u^\top$ (now a row vector) with the transposed matrix $A^\top$, getting a row vector result and transposing the result if needed.
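A tiny numerical check of this identity (my own snippet; the matrix and vector are arbitrary examples):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
u = np.array([[5.0],
              [6.0]])              # a column vector, shape (2, 1)

left = (A @ u).T                   # transpose of the matrix-vector product
right = u.T @ A.T                  # the row vector u^T times the transposed matrix

print(np.allclose(left, right))    # True: (A u)^T == u^T A^T
```

So writing the vector horizontally is perfectly possible, at the cost of also transposing the matrix, which is exactly the content of the identity above.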
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to prove that $\lim_{n \to \infty} \frac{3^n-1}{2 \cdot 3^n} = \frac{1}{2}$? I used this limit as an argument in a proof I wrote (Proof by induction that $\sum\limits_{k=1}^n \frac{1}{3^k}$ converges to $\frac{1}{2}$). I was told I should "prove" the limit but given no indication as to how to go about it. I didn't even know it was possible to formally prove a limit, but if it is I'd love to know how to do it.
$$\lim_{n\rightarrow\infty}\frac{3^n-1}{2\cdot 3^n}=\frac{1}{2}\lim_{n\rightarrow\infty}\left(\frac{3^n}{3^n}-\frac{1}{3^n}\right)=\frac{1}{2}\lim_{n\rightarrow\infty}\left(1-\frac{1}{3^n}\right)=\frac{1}{2}(1-0)=\frac{1}{2}$$
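Since the question also asks how one formally proves a limit, here is an $\varepsilon$-$N$ argument making the computation above rigorous (my own write-up). Given $\varepsilon>0$, choose $N$ with $3^{-N}<2\varepsilon$, which is possible because $3^{-n}\to 0$. Then for all $n\ge N$, $$\left|\frac{3^n-1}{2\cdot 3^n}-\frac12\right| = \left|\frac{(3^n-1)-3^n}{2\cdot3^n}\right| = \frac{1}{2\cdot 3^n} \le \frac{1}{2\cdot 3^N} < \varepsilon,$$ which is exactly the definition of $\lim_{n\to\infty}\frac{3^n-1}{2\cdot3^n}=\frac12$.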
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
How to prove $\cot(\pi/15)-4\sin(\pi/15)=\sqrt{15}$ I need some help with this demonstration, please; I have tried some identities but got nothing. I wanted to use the product formula $$\sin(\pi/15)\cdot \sin(2\pi/15)\cdots\sin(7\pi/15)=\frac{\sqrt{15}}{2^7}$$
We may prove: $$ \cos\frac{\pi}{15}-4\sin^2\frac{\pi}{15}=\sqrt{15}\sin\frac{\pi}{15} $$ by squaring both sides. By setting $\theta=\frac{\pi}{15}$, that leads to: $$ \frac{13}{2}-2\cos(\theta)-\frac{15}{2}\cos(2\theta)+2\cos(3\theta)+2\cos(4\theta) = \frac{15}{2}-\frac{15}{2}\cos(2\theta)$$ or to: $$ -\cos(\theta)+\cos(3\theta)+\cos(4\theta) = \frac{1}{2} $$ so we just have to prove that $\cos(\theta)$ is a root of: $$ p(x) = 16x^4+8x^3-16x^2-8x+1.$$ That easily follows from: $$ \Phi_{30}(x) = x^8+x^7-x^5-x^4-x^3+x+1.$$
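A quick numerical confirmation of both the identity and the polynomial claim (my own check, not part of the argument above):

```python
import math

t = math.pi / 15
lhs = math.cos(t) / math.sin(t) - 4 * math.sin(t)  # cot(pi/15) - 4*sin(pi/15)
print(lhs, math.sqrt(15))                          # both are about 3.8729833...

x = math.cos(t)
p = 16 * x**4 + 8 * x**3 - 16 * x**2 - 8 * x + 1   # p(cos(pi/15)) should vanish
print(p)                                           # about 0, up to rounding error
```

Since $\cos\frac{\pi}{15}-4\sin^2\frac{\pi}{15}>0$ and $\sqrt{15}\sin\frac{\pi}{15}>0$, squaring both sides loses no information, so the algebra above really does settle the original (unsquared) identity.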
{ "language": "en", "url": "https://math.stackexchange.com/questions/1317960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Binary tree bijection I've been studying for an upcoming exam in combinatorics and I came across something interesting by accident. We have the two combinatorial constructions: $$\mathbb{U}\cong SEQ(\mathbb{ZU})$$ And $$\mathbb{T}\cong \mathbb{Z}*SEQ_2(\mathbb{T})+\epsilon$$ The first one I interpret as planar trees where the size is determined by number of edges. The second one (and I need to confirm if this is correct), I'm interpreting as planar complete binary trees where size relates to the number of internal nodes (nodes that are not leaves). When you convert these to generating functions: $$U(z)=\frac{1}{1-zU(z)}$$ Which rearranges to: $$U(z)=zU^2(z)+1$$ And $$T(z)=zT^2(z)+1$$ Clearly the two generating functions are the same, so there must be a bijection between the two sets I interpret them to represent. I'd love to spend time working it out, but I've got exams to cram for, so I was wondering if someone could enlighten me specifically using my interpretations (or the closest thing if I've got them wrong). I can't afford to waste any more time on it! EDIT I forgot to mention that $\mathbb{Z}$ is the atomic class, and $\epsilon$ is the empty set, and that these are unlabelled objects. EDIT 2 Fixed an error in the expression for $\mathbb{T}$
I agree with your interpretation of the second construction. The most natural interpretation of the first, however, seems to me to be ordered plane forests of rooted trees, where size is determined by the number of nodes. Ah, I see: that’s the same as rooted plane trees with size determined by the number of nodes not counting the root, which in turn is equivalent to your interpretation. In The Book of Numbers J.H. Conway and R.K. Guy illustrate the desired bijection as follows. First, here are the $5$ rooted plane trees with $3$ edges. The vertices have been labelled with the integers $2,3,4$, and $5$, and sister vertices are shown with multiplication signs between them. 2 | 3 3 × 2 3 2 | \ / | | 4 4 4 × 2 4 × 3 4 × 3 × 2 | | \ / \ / \ | / 5 5 5 5 5 We can interpret each tree as representing an exponentiation: $$\begin{array}{lll} \large5^{4^{3^2}}=5^{\left(4^{\left(3^2\right)}\right)}&&5^{4^{3\cdot 2}}=5^{\left(\left(4^3\right)^2\right)}\\ 5^{4^3\cdot2}=\left(5^{\left(4^3\right)}\right)^2&&5^{4\cdot3^2}=\left(5^4\right)^{\left(3^2\right)}\\ 5^{4\cdot3\cdot2}=\left(\left(5^4\right)^3\right)^2 \end{array}$$ In each entry the expression on the left matches the tree, while the expression on the right is fully parenthesized (apart from the outermost pair of parentheses). The fully parenthesized versions are then easily converted to binary trees by adding $3$ internal nodes, especially if exponentiation is replaced by an arbitrary binary operation $*$: $$\begin{array}{lll} 5*(4*(3*2))&&5*((4*3)*2)\\ (5*(4*3))*2&&(5*4)*(3*2)\\ ((5*4)*3)*2 \end{array}\tag{1}$$ 3 2 4 3 4 3 5 4 \ / \ / \ / \ / 4 * * 2 5 * 5 4 3 2 * 3 \ / \ / \ / \ / \ / \ / 5 * 5 * * 2 * * * 2 \ / \ / \ / \ / \ / * * * * * It’s not hard to see how to extend this example to the general case, and the forms at $(1)$ make it clear that we’re dealing with the Catalan numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convergence/divergence of the series $\sum_{n=1}^{\infty}\frac{1}{n^\sqrt{n}}$ $$\sum_{n=1}^{\infty}\frac{1}{n^\sqrt{n}}$$ Determine whether this series is convergent or not, with explanation. Each element is positive, so I've tried bounding it by another convergent series, but couldn't see how. I couldn't apply integral test, because I couldn't integrate it. I'm struggling to figure out which convergence/divergence test I should use. (I can use absolute convergence theorems too) I would really appreciate some help! Thank you!
$1\lt \sqrt2\lt\sqrt{n}\,\,\,\,\,\,\,\,\,\,\forall n\ge3\implies0\lt\dfrac{1}{n^{\sqrt{n}}}\lt\dfrac{1}{n^{\sqrt2}}\,\,\,\,\,\,\,\,\,\,\forall n\ge3$. Since $\sum_{n\ge1}\dfrac{1}{n^{\sqrt2}}$ converges (a $p$-series with $p=\sqrt2>1$), the given series converges by direct comparison.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A topological group is embeddble in a product of a family of second-countable topological groups if and only if it is $\omega$-narrow How to prove the following property: a topological group is topologically isomorphic to a subgroup of the product of some family of second-countable topological groups if and only if it is ω-narrow
This is a result of my first supervisor Igor Y. Guran. Its proof it rather long and can be found, for instance, in a book “Topological groups and related structures” by his supervisor Alexander V. Arhangel'skii and co-student Mikhail G. Tkachenko (Atlantis Press, Paris; World Sci. Publ., NJ, 2008), where this results is formulated as Theorem 3.4.23.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difference between Gentzen and Hilbert Calculi What is the difference between Gentzen and Hilbert Calculi? From my understanding of Rautenberg's Concise Introduction to Mathematical Logic, Gentzen calculus is based on sequents and Hilbert calculus, on tautologies. But isn't every Gentzen sequent a tautological modus ponens? For instance, the sequent $X\vdash a \wedge b |X \vdash a,b$ can be written as the tautology $\forall X \forall a \forall b\:( X\vdash a \wedge b \rightarrow X \vdash a,b$), can't it?
You can see a very detailed overview into : Francis Pelletier & Allen Hazen, Natural Deduction : Sequent Calculus was invented by Gerhard Gentzen (1934), who used it as a stepping-stone in his characterization of natural deduction [...]. It is a very general characterization of a proof; the basic notation being $ϕ_1,\ldots,ϕ_n ⊢ ψ_1,\ldots,ψ_m$, which means that it is a consequence of the premises $ϕ_1,\ldots,ϕ_n$ that at least one of $ψ_1,\ldots,ψ_m$ holds. If $\Gamma$ and $\Sigma$ are sets of formulas, then $\Gamma ⊢ \Sigma$ means that it is a consequence of all the formulas of $\Gamma$ that at least one of the formulas in $\Sigma$ holds. Sequent systems take basic sequents such as $ϕ ⊢ ϕ$ as axiomatic, and then a set of rules that allow one to modify proofs, or combine proofs. The modification rules are normally stated in pairs, ‘$x$ on the left’ and ‘$x$ on the right’: how to do something to the premise set $\Gamma$ and how to do it to the conclusion set $\Sigma$. So we can understand the rules as saying “if there is a consequence of such-and-so form, then there is also a consequence of thus-and-so form”. These rules can be seen as being of two types: structural rules that characterize the notion of a proof, and logical rules that characterize the behavior of connectives. For example, the rule that from $\Gamma ⊢ \Sigma$ one can infer $\Gamma,ϕ ⊢ \Sigma$ (“thinning on left”) characterizes the notion of a proof (in classical logic), while the rule that from $\Gamma,ϕ ⊢ \Sigma$ one can infer $\Gamma, (ϕ∧ψ) ⊢ \Sigma$ (“∧-Introduction on left”) characterizes (part of) the behavior of the logical connective ∧ when it is a premise. In the paper you can find an historical overview and several useful discussions; relevant for Rautenberg's textbook, see : * *3.4 From Sequent Natural Deduction to Sequent Calculus *3.8 Natural Deduction with Sequences .
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Is there a name of such functions? Let $U$ be an open subset of $ \mathbb R^n$ and consider $f :\mathbb R^n \to \mathbb R$ with the properties that $ f( \partial U)=0$ and $f$ takes negative values on $U$. My questions: * *Is there any name for such functions in Mathematics Literature? *Given any set $U$ as above can we always find a continuous function with above properties? *Are these functions important at all? Thank you for your help.
There's an old theorem (due to Whitney, I think) that says the following: Given any smooth manifold $M$ and any closed subset $K\subseteq M$, there exists a smooth function $f\colon M\to \mathbb R$ that satisfies $f=0$ on $K$ and $f>0$ on $M\smallsetminus K$. You can find a proof in my Introduction to Smooth Manifolds (2nd ed.), Theorem 2.29. In your situation, if you apply this with $K=\mathbb R^n\smallsetminus U$ and then take the negative of the resulting function, you get a smooth solution to question 2. Is there a name for such functions? They might sometimes be called (negative) bump functions, but that term more commonly is applied to a function whose zero set contains a given closed set (or equivalently, whose support is contained in some given open set). In the special case that $\partial U$ is a smooth hypersurface, a smooth function $f$ such that $f<0$ (or $>0$) on $U$, $f^{-1}(0) = \partial U$, and $df\ne 0$ on $\partial U$ is called a defining function for $U$. As to your third question, bump functions and defining functions are extremely useful throughout differential geometry and PDE theory. But functions of the type guaranteed by the Whitney theorem I quoted above seem to be mainly curiosities. They provide an answer to the question "How special are the zero sets of smooth functions?" (Answer: not special at all, except for the fact that they're closed.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Express 99 2/3% as a fraction? No calculator My 9-year-old daughter is stuck on this question and normally I can help her, but I am also stuck on this! I have looked everywhere to find out how to do this but to no avail so any help/guidance is appreciated: The possible answers are: $1 \frac{29}{300}$ $\frac{269}{300}$ $9 \frac{29}{30}$ $\frac{299}{300}$ $1 \frac{299}{300}$ Whenever I try this I am doing $99.66 \times 100$ over $10000$, i.e. $\frac{9966}{10000}$, and then simplify down, but I am not getting the answer. I get $\frac{4983}{5000}$? Then I can't simplify more - unless I am doing this totally wrong? Hello All - just an update - I managed to teach this to her right now, and she cracked it pretty much the first time! Thanks sooooooo much for the answers and here is some of the working she did:
$99 \frac{2}{3} \% = 99 \frac{2}{3} \cdot \frac{1}{100} = \frac{299}{3} \cdot \frac{1}{100} = \frac{299}{300}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65", "answer_count": 14, "answer_id": 7 }
Finding incomplete geodesics I have a problem with the notion of incomplete geodesics. Can someone give me a minimal example for such a geodesic? In particular, I am trying to solve the following exercise: Consider the upper half plane $\mathbb{H}:= \{(x,y):y>0\}\subset \mathbb{R}^2$ equipped with the metric: $g_{q}:=\frac{1}{y^q}\delta_i^j$ for some real number $q>0,q\not=2$ (for $q=2$ you obtain the Poincaré half-plane model.) Show that $(\mathbb{H},g_q)$ is not geodesically complete. (Hint: Consider unit-speed geodesics starting at $(x_0,y_0):=(0,1)$) So far I found the following system of ordinary differential equations: $x''\cdot y=q\cdot (x'y')$ $y''\cdot y=\frac{q}{2}((y')^2-(x')^2)$ $y^q=(x')^2+(y')^2$
Hint: Try integrating up (if $q > 2$) or down (if $q < 2$) the $y$-axis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Explain a couple steps in proof that ${n \choose r} = {n-1 \choose r-1} + {n-1 \choose r}$ Show ${n \choose r}$ = ${n-1 \choose r-1}$ + ${n-1 \choose r}$ I found a similar question on here but I am looking for a little bit more of an explanation on how they simplified Right Hand Side $$= \frac{(n-1)!}{(r-1)!(n-r)!} + \frac{(n-1)!}{r!(n-r-1)!}$$ $$= \frac{r(n-1)!+(n-r)(n-1)!}{r!(n-r)!}$$ $$= \frac{n!}{r!(n-r)!}$$ $$= \binom{n}{r}$$, the first equality because of definition ,the second equality because of summing fractions the third because of $n(n-1)!=n!$, the fourth by definition. I would like more of an explanation on equality 2 and equality 3. I understand that they are summing fractions but I am interested in how they came up with the common denominator and then simplified.
For a combinatorial proof, $\binom nr$ is the number of $r$-element subsets of $\{1,2,\ldots,n\}$. Each such subset either contains $n$ or does not. $\binom {n-1}{r-1}$ counts the former, $\binom {n-1}r$ counts the latter. Therefore the quantities $\binom nr$ and $\binom {n-1}{r-1} + \binom {n-1}r$ are equal.
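Regarding the algebraic steps asked about in the question (equalities 2 and 3), here is the common-denominator computation spelled out, as a complement to the combinatorial proof above (my own addition). The common denominator is $r!\,(n-r)!$: multiply the first fraction by $\frac{r}{r}$, using $r!=r\,(r-1)!$, and the second by $\frac{n-r}{n-r}$, using $(n-r)!=(n-r)\,(n-r-1)!$, to get $$\frac{r\,(n-1)!}{r!\,(n-r)!}+\frac{(n-r)\,(n-1)!}{r!\,(n-r)!} = \frac{\big(r+(n-r)\big)\,(n-1)!}{r!\,(n-r)!} = \frac{n\,(n-1)!}{r!\,(n-r)!} = \frac{n!}{r!\,(n-r)!},$$ where the last step is the identity $n\,(n-1)!=n!$.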
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
When does $(x^x)^x=x^{(x^x)}$ in real numbers? I have tried to solve this equation: $(x^x)^x=x^{(x^x)}$ in real numbers. I got only $x=1,x=-1,x=2$; are there other solutions? Note: $x$ is a real number. Thank you for your help.
Just a sketch for $x>0$: Let $y=x^x$ and rewrite $x^y=y^x$ by taking $\log$s as $\log (x)/x = \log(y)/y$. For each $x \in (1, e)$ there is a unique $y(x) \in (e,\infty)$ such that $f(x) = f(y(x))$, where $f(t)=\log(t)/t$. One can show that $y(x)$ is a decreasing function of $x$. So we are looking for solutions of $y(x) = x^x$. Because $g(x) = y(x)-x^x$ is strictly decreasing, it can have only one root. So there is only one positive $x$ that satisfies the equation with $x\neq 1$. EDIT: For non-integer $x < 0$, things get complicated because we will involve complex numbers and raising numbers to complex powers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1318919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Do $\Bbb Q (\sqrt 2)$ and $\Bbb Q [\sqrt 2]$ mean the same? Do $\Bbb Q (\sqrt 2)$ and $\Bbb Q [\sqrt 2]$ mean the same? I'm trying to refer to the field of the real numbers of the form $a + b \sqrt 2$ where $a$ and $b$ are rationals. E: I'm sorry, my question was unclear, I was using $\sqrt 2$ as an example number, but from what I read in the answers, if I choose a number like $\pi$ then $\Bbb Q (\pi)$ and $\Bbb Q [\pi]$ would be different, correct?
Yes, they are the same here, but it is because $\sqrt 2$ is algebraic over the field $\mathbb{Q}$; they need not be the same if you replace $\mathbb{Q}$ by an arbitrary ring which is not a field, or $\sqrt 2$ by a transcendental number. In general, the bracket means "the smallest ring containing the given ring and the element in the bracket", while the parenthesis means "the smallest field containing the given ring and the element in the parenthesis." So, when the given ring is itself a field and the adjoined element is algebraic over it, they are basically the same.
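To see concretely why the ring $\mathbb Q[\sqrt2]$ is already a field (my own one-line check): for $a,b\in\mathbb Q$ not both zero, $$\frac{1}{a+b\sqrt2}=\frac{a-b\sqrt2}{(a+b\sqrt2)(a-b\sqrt2)}=\frac{a-b\sqrt2}{a^2-2b^2}=\frac{a}{a^2-2b^2}-\frac{b}{a^2-2b^2}\,\sqrt2\;\in\;\mathbb Q[\sqrt2],$$ where $a^2-2b^2\neq0$ because $\sqrt2$ is irrational. For the edit about $\pi$: no such trick exists; since $\pi$ is transcendental we have $1/\pi\notin\mathbb Q[\pi]$ (otherwise $\pi$ would satisfy a polynomial equation with rational coefficients), so indeed $\mathbb Q[\pi]\subsetneq\mathbb Q(\pi)$.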
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }