What is the difference between line search and gradient descent? I understand the gradient descent algorithm, but am having trouble seeing how it relates to line search. Is gradient descent a type of line search?
The following figure shows the hierarchy of some of the line search optimization methods for quadratic forms. Indeed, the methods are categorized based on the choice of the descent direction $p_k$ and the step length $\alpha_k$. Just recall that \begin{align} \phi(x) &= \frac{1}{2}x^{\top}A \, x - b^\top x + c,\\ x_k &= x_{k-1} + \alpha_{k} p_k, \\[0.5em] r_{k-1} &= -\nabla\phi(x_{k-1}) = b - Ax_{k-1}, \end{align} where $A$ is required to be positive definite. Now, suppose that we are in the $k$th iteration of the algorithm with $r_{k-1}\ne 0$ (otherwise the solution is found) and look at the following diagram. You should also be familiar with the following results.

Lemma. In exact line search, the coefficients for updating solutions are defined as $\alpha_{k}:= \arg\underset{\alpha \in \mathbb{R}}{\min} \phi(x_{k-1}+\alpha p_k)$. This can be further simplified to $\alpha_{k}=p_k^\top r_{k-1} /p_k^\top A\, p_k$.

Theorem. In exact line search, if $p_k^\top r_{k-1} \ne 0$ then it is guaranteed that $\phi(x_{k}) < \phi(x_{k-1})$. More specifically, we have $$\phi(x_{k}) = \phi(x_{k-1}) -\frac{1}{2}\frac{(p_k^\top r_{k-1})^2}{p_k^\top A \, p_k}.$$
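A minimal runnable sketch of this setting (my own illustration, not from the answer; the matrix `A`, vector `b`, and tolerance are arbitrary choices). It combines the steepest-descent direction $p_k = r_{k-1}$ with the exact step length from the Lemma:

```python
import numpy as np

# Steepest descent with exact line search on phi(x) = 0.5*x'Ax - b'x.
# Direction p_k is the residual; alpha_k = (p'r)/(p'Ap) as in the lemma.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])   # symmetric positive definite (example)
b = np.array([1.0, 2.0])

x = np.zeros(2)
for k in range(100):
    r = b - A @ x                  # r_{k-1} = -grad phi(x_{k-1})
    if np.linalg.norm(r) < 1e-10:  # r = 0: solution found
        break
    p = r                          # steepest-descent direction
    alpha = (p @ r) / (p @ (A @ p))
    x = x + alpha * p

print(x, np.linalg.solve(A, b))    # the two vectors agree
```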
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What do the eigenvalues of a matrix tell us about the original matrix? I have a problem of... Let $A$ be a 2x2 matrix such that it is not invertible and 2 is an eigenvalue of $A$. a) Find all eigenvalues of $A+I$. b) Prove or disprove A+I is invertible. Since it's not invertible, it has an eigenvalue of 0. So I can think of a matrix easily such as the one below with eigenvalues of 2 and 0... $\begin{bmatrix} 0 & a \\ 0 & 2 \end{bmatrix}$ Where $a$ is just some unknown. However, I'm assuming there are many matrices that have eigenvalues of 2 and 0 for a 2x2. So I am having trouble even seeing what the eigenvalues will even tell me about the original matrix. Do eigenvalues tell you anything about the original structure of the matrix? Also I haven't learned about eigenvectors yet in class.
You don't have to find the original matrix to answer the question. As you said, the eigenvalues must be $\lambda =0,2$. Claim: $\mu =1,3$ are the eigenvalues of $A+I$. Let's check: let $x$ be an eigenvector of $A$ corresponding to the eigenvalue $2$; then $$(A+I)x=Ax+x=2x+x=3x.$$ Thus, $\mu=3$ is an eigenvalue of $A+I$. Similarly, an eigenvector for the eigenvalue $0$ shows that $\mu=1$ is the other eigenvalue. Since neither eigenvalue of $A+I$ is $0$, $A+I$ is invertible.
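A quick numerical sanity check of this (my own addition; the value of `a` is an arbitrary example):

```python
import numpy as np

# One concrete singular 2x2 matrix with eigenvalue 2, as in the question.
a = 5.0
A = np.array([[0.0, a],
              [0.0, 2.0]])

print(np.linalg.eigvals(A))              # [0. 2.]
print(np.linalg.eigvals(A + np.eye(2)))  # [1. 3.] -> A + I is invertible
```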
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\cos\alpha+\cos2\alpha+\cdots+\cos n\alpha=\frac{1}{2}\left(\frac{\sin\left(n+\frac{1}{2}\right)\alpha}{\sin\frac{1}{2}\alpha}-1\right)$ I have to prove using mathematical induction that: $$\cos\alpha+\cos2\alpha+\cdots+\cos n\alpha=\frac{1}{2}\left(\frac{\sin\left(n+\frac{1}{2}\right)\alpha}{\sin\frac{1}{2}\alpha}-1\right)$$ If I substitute $n=1$ then I get: $$\cos\alpha=\frac{1}{2}\left(\frac{\sin\frac{3}{2}\alpha}{\sin\frac{1}{2}\alpha}-1\right)$$ But I don't know what I should do next to prove that the equation also holds for $n+1$.
Hint: Write \begin{align} \cos\alpha = \operatorname{Re} e^{i\alpha} \end{align} then the sum becomes \begin{align} \operatorname{Re}\left(e^{i\alpha}+e^{i2\alpha}+\ldots+e^{in\alpha} \right) \end{align} which is a geometric series. Edit: It's not hard to see \begin{align} e^{i\alpha}+e^{i2\alpha} + \ldots + e^{in\alpha} = \frac{e^{i\alpha}-e^{i(n+1)\alpha}}{1-e^{i\alpha}}. \end{align} I will leave it to the reader to put it in real form.
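A quick numerical spot-check of the identity (my own sketch; the values of `n` and `alpha` are arbitrary):

```python
import numpy as np

n, alpha = 7, 0.9
lhs = sum(np.cos(k * alpha) for k in range(1, n + 1))
rhs = 0.5 * (np.sin((n + 0.5) * alpha) / np.sin(0.5 * alpha) - 1)
print(lhs, rhs)  # the two printed values agree
```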
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving an epigraph is not convex Let $g: [-2, 2] \rightarrow \mathbb{R}$ be defined by $g(x) = \begin{cases} 2x^2 \quad \text{ if } -2 \leq x \leq 0 \\ x+2 \quad \text{if } 0 < x \leq 2 \end{cases}$ Prove by definition: "Epi $g$ is not convex". I'm not sure how to prove the above. I tried by stating "Epi $g$ is not convex $\iff g$ is not convex''. I defined $g_1 : [-2,0] \rightarrow \mathbb{R}$ by $g_1(x) = 2x^2$ and $g_2 : (0,2] \rightarrow \mathbb{R}$ by $g_2(x) = x + 2$. I know that $\lim_{x \uparrow 0} g_1(x) = 0$ and $\lim_{x \downarrow 0} g_2(x)= 2$. At the point $x = 0$ we see that $g(x)$ is not continuous. Applying the fact "$g$ is convex $\implies g$ is continuous" yields that $g(x)$ is not convex, hence Epi $g$ is not convex. What do you think? Is this an appropriate proof? I think it isn't. I think you have to prove it by contradiction: you should test the convexity of $g$ with the definition "for all $x, y \in \mathbb{R}$ and every $\lambda \in [0,1]$, $g(\lambda x + (1 - \lambda) y) \leq \lambda g(x) + (1 - \lambda) g(y)$", and you will end with something like $2 \leq 0$, which isn't true, and therefore $g$ is not convex, etc. But how do you construct such a proof with a piecewise-defined function like $g(x) = \begin{cases} 2x^2 \quad \text{ if } -2 \leq x \leq 0 \\ x+2 \quad \text{if } 0 < x \leq 2 \end{cases}$ ?
$$ {\rm epi}\ g=\{ (x,y)\mid g(x)\leq y \}. $$ Note that $(0,0),\ (2,4)\in {\rm epi}\ g$. Hence if ${\rm epi}\ g$ were convex then $$ \frac{1}{2} \{ (0,0)+(2,4)\}=(1,2)\in {\rm epi}\ g.$$ But $$ g(1)=3\Rightarrow \big[(1,t)\in {\rm epi}\ g \iff t\geq 3\big],$$ so $(1,2)\notin {\rm epi}\ g$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1973925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\sin 2x = 2\sin x \cos x$ using Taylor Series and Cauchy products I have that the Taylor series of $\sin x$ and $\cos x$ are \begin{equation*} \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}x^{2n+1} \\ \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}x^{2n} \end{equation*} which I understand yields the product series $\sum_{n=0}^{\infty} c_{n}x^n$ where \begin{equation*} c_n = \begin{cases} \sum\limits_{k=0}^{m} \frac{(-1)^{k}(-1)^{m-k}}{(2k+1)!(2m-2k)!} & n = 2m + 1 \\ \hspace{32 pt} 0 & n = 2m \end{cases} \end{equation*} $\forall \hspace{3 pt} m \in \mathbb{N}$. I then know by simple substitution that \begin{equation*} \frac{1}{2} \sin 2x = \sum_{n=0}^{\infty} \frac{(-1)^{n}2^{2n}}{(2n+1)!}x^{2n+1} \end{equation*} I know I need to show that the odd $c_n$'s and the coefficients of the above series are equal (the even ones are irrelevant, as they are all $0$ and so have no bearing on the sum), but I am having trouble doing so. Can someone please show how to deduce said equality? Thanks in advance.
Note that $$ \begin{align} c_{2n+1}&=\sum_{k=0}^{n} \frac{(-1)^{k}(-1)^{n-k}}{(2k+1)!(2n-2k)!}\\ &=\sum_{k=0}^{n} \frac{(-1)^{n}(2n+1)!}{(2n+1)!(2k+1)!(2n-2k)!}\\ &=\sum_{k=0}^{n} \frac{(-1)^{n}}{(2n+1)!}{2n+1 \choose 2k+1} \end{align} $$ and that $$ \begin{align} \sum_{k=0}^n{2n+1 \choose 2k+1}&=\sum_{k=0}^n({2n \choose 2k}+{2n \choose 2k+1})\\ &=\sum_{k=0}^{2n}{2n \choose k}\\ &=2^{2n} \end{align} $$ Hence $c_{2n+1}=\dfrac{(-1)^{n}2^{2n}}{(2n+1)!}$, which is exactly the coefficient of $x^{2n+1}$ in the series for $\frac{1}{2}\sin 2x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove by induction that $\sum\limits_{i=1}^n \frac{1}{n+i} \leq \frac{3}{4}$ Prove by induction that $\sum\limits_{i=1}^n \frac{1}{n+i} \leq \frac{3}{4}$. I have to prove this inequality using induction. I proved it for $n=1$, and now I have to prove it for $n+1$ assuming it for $n$ as the induction hypothesis, but this seems impossible to me because the difference between the sum for $n$ and the sum for $n+1$ is a positive value. Adding a positive value on both sides of the inequality, I don't know how to prove that it is always less than or equal to $\frac{3}{4}$.
Note that the bound can also be proved directly, without induction, but here is a way to set up the inductive step. Define $$S_n := \sum_{i=1}^n \frac{1}{n+i}$$ Then $$S_{n+1} = \sum_{i=1}^{n+1} \frac{1}{n+1+i} = \sum_{i=1}^{n+1} \frac{1}{n+(i+1)} = \sum_{i=2}^{n+2} \frac{1}{n+i}.$$ Hence $$S_{n+1} = S_n - \frac{1}{n+1} + \frac{1}{2n+1} +\frac{1}{2n+2}.$$ Now proceed...
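A numerical illustration (my own, not part of the answer) of why the bound holds: the recurrence gives $S_{n+1}-S_n=\frac{1}{2n+1}-\frac{1}{2n+2}>0$, so $S_n$ increases, and its limit is $\ln 2\approx 0.693<\frac{3}{4}$:

```python
import math

def S(n):
    return sum(1.0 / (n + i) for i in range(1, n + 1))

for n in (1, 2, 5, 100, 10**5):
    print(n, S(n))     # increasing, always below 3/4
print(math.log(2))     # 0.6931..., the limit
```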
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
How would you go about this algebra problem? (order of operations?) Okay, so my teacher gave us this to solve: $$a + b/c = 2$$ For this equation he explained that there were many ways to do it. However, we were told to do it with at least two negative numbers. I was a bit stuck on this one; however, more than a couple of students in my class figured it out anyway. So one of the solutions to this problem was $ -1 -1/-1 = 2$ ($-$ & $+$ becomes $-$ and then $-$ & $-$ becomes $+$), of course assigning $a$, $b$ and $c$ the value $-1$. Another solution was, for example, $-5 + 1/-2$ ($a=-5$, $b=1$ & $c=-2$, and the $- / -$ is $+$). However, there was still something that didn't seem right, although I could see the logic of how these made the answer. But then I realized what was bugging me: the order of operations, which states that you should always do multiplication and division before addition and subtraction (which is something these equations do not follow). So I raised my hand and asked the teacher about this, and he just told me that this wasn't the case for these kinds of math problems. He also told me that he had no logical answer as to why. However, I also asked another (outside) person about this (who is exceptionally good at math), and she told me that it didn't matter if it was letters (algebra) or just normal numbers; the order of operations would still have to be used, in which case what I learned in class is wrong. So my question is: what is right and what is wrong? And also why? If anyone could help me with this it'd be awesome! Kind regards ~
Your teacher is wrong; the order of operations always applies. $$-1-1/(-1)=0$$ as that means $$-1-\frac{1}{-1}=-1-(-1)=-1+1 = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Prove the following equality: $\int_{0}^{\pi} e^{4\cos(t)}\cos(4\sin(t))\;\mathrm{d}t = \pi$ Prove the following equality: $$ \int_{0}^{\pi} e^{4\cos(t)}\cos(4\sin(t))\;\mathrm{d}t = \pi $$
Hint: Consider the other integral $S=\int_0^\pi e^{4\cos t}\sin(4\sin t)\,dt$ and the sum $$C+iS=\int_0^\pi e^{4(\cos t+i\sin t)}\,dt=\int_0^\pi e^{4e^{it}}\,dt.$$ Put $4e^{it}=u$, i.e. $t=-i\ln u+2i\ln 2$; the integral becomes $$i\int_{-4}^{4}\frac{e^u\,du}{u},$$ where the path of integration is the image of $[0,\pi]$, the upper semicircle of radius $4$. I'll surely be downvoted.
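A quick numerical check that the value is indeed $\pi$ (my own addition, not part of the hint):

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda t: np.exp(4 * np.cos(t)) * np.cos(4 * np.sin(t)),
                0, np.pi)
print(val, np.pi)  # both print as 3.141592653589793 (up to quad's tolerance)
```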
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If for every $z,$ either $|f(z)| \le 1$ or $|f'(z)| \le 1,$ then $f$ is a linear polynomial I am working on the following exercise: Let $f$ be entire and assume that for every $z,$ either $|f(z)| \le 1$ or $|f'(z)| \le 1$ (or both). Then $f$ is a linear polynomial. I have a few questions about this. First, I believe I solved it and would like someone to verify that my proof is correct: Proof. We can use the generalised Liouville theorem. It states that if $f$ is bounded by $A + B|z|^n$ then $f$ is a polynomial of degree at most $n-1$. Applying this theorem to $|f'|\le 1$ we get that $f'$ is a polynomial of degree at most $0$ at points where $|f'|\le 1$. Hence at these points $f$ is a polynomial of degree at most $1$. At the other points $f$ has degree at most $0$. Hence $f$ has degree at most $1$ everywhere. Is this proof correct? My other question about this exercise is this: The hint I have is to write $f$ as $$ f(z) = f(z_0) + \int_{z_0}^z f'(w) dw$$ where $z_0 = t_0 z$ where $t_0 = \sup \{t_1 \mid 0 \le t \le t_1, |f(tz)|\le 1 \}$. So one expresses $f(z)$ as an integral along a line from $z_0$ to $z$. Taking the absolute value on both sides and using the triangle inequality: $$ |f(z)| \le |f(z_0)| + \int_{z_0}^z |f'(w)| dw$$ I was tempted to continue by adding $$ \le |f(z_0)| + \int_{z_0}^z 1 dw$$ but there is no reason why $f'$ should be bounded by $1$ between $z_0$ and $z$. So my second question is: How do I use this hint? Is there a mistake in the hint? Should it be $f'$ in the definition of $t_0$?
I think there's a mistake in the hint. Inspired by the hint, I think we can prove it as follows. For any $z\in \mathbb{C}\setminus\{0\}$, suppose $\vert f(z)\vert>1$; then $\vert f'(z)\vert\leq 1$. Let $t_0=\inf\lbrace t_1\in [0,1] \mid \vert f'(tz)\vert\leq 1 \text{ for all } t\in[t_1,1]\rbrace$. Since $f$ is entire, both $f$ and $f'$ are continuous, so $\vert f(t_0z)\vert\le 1$ (and $\vert f'(t_0z)\vert =1$ when $t_0>0$). By the rectangle integral formula, we have $f(z)=f(z_0)+\int_{z_0}^{z}f'(\omega)\,d\omega$, where $z_0=t_0z$ and the integration path is the line segment connecting $z_0$ and $z$. Let $A=\max\{\vert f(0)\vert ,\ 1\}$. Since $\vert f'\vert\le 1$ on that segment, $\vert f(z)\vert \leq A+\left\vert\int_{z_0}^{z}f'(\omega)\,d\omega\right\vert\leq A+\vert z\vert$. Together with $\vert f(z)\vert\le 1$ at the remaining points, the generalized Liouville theorem now gives that $f$ is a polynomial of degree at most $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $E$ and $F$ are connected subsets of $M$ with $E\cap F\ne\emptyset$, show that $E\cup F$ is connected. If $E$ and $F$ are connected subsets of $M$ with $E\cap F\ne\emptyset$, show that $E\cup F$ is connected. My attempt: Suppose $E\cup F$ is disconnected. $\boldsymbol{\Rightarrow}$ there exist open sets $A, B\ne \emptyset$ such that $A\cap B=\emptyset$ and $A\cup B=E\cup F.$ Consider $E_1=A\cap E$ and $E_2=B\cap E$. $A\cap B=\emptyset\ \boldsymbol{\Rightarrow} (A\cap E)\cap (B\cap E)=\emptyset.$ $A\cup B=E\cup F\ \boldsymbol{\Rightarrow}\ (A\cap E)\cup (B\cap E)=E$ Claim: $A\cap E\ne \emptyset$ and $B\cap E\ne \emptyset$. Suppose $A\cap E=\emptyset \boldsymbol{\Rightarrow} B=E \boldsymbol{\Rightarrow}A=F$. This is a contradiction since $E\cap F\ne \emptyset$. Maybe I should prove $E_1$ and $E_2$ are open in $E$. I don't know how to proceed from here.
This hint should help you solve the problem: Since $E\cap F\neq \varnothing$, we have a point $x\in E\cap F$. In particular, $x\in E$ as well as $x\in F$. Now, consider the connected component containing $x$. Does it contain $E$? Does it contain $F$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there an entire function with $f(\mathbb{Q}) \subset \mathbb{Q}$ and a non-finite power series representation having only rational coefficients? I'm trying to answer the following question: Is there an entire function $f(z) := \sum \limits_{n=0}^\infty c_nz^n$ such that (1) $f(\mathbb{Q}) \subset \mathbb{Q}$, (2) $\forall n: c_n \in \mathbb{Q}$, and (3) $f$ is not a polynomial? I'm trying to show that no such function exists. Here's why I think so: assume such a function existed. We would get $f(10^k) \in \mathbb{Q}$ for all $k \in \mathbb{Z}$. So the decimal representation of $f(10^k)$ either terminates at some digit or consists of repeating digits. Now my gut is telling me that if this is true for $f(10^n)$ with $n \in \mathbb{N}$, it won't be for $f(10^{-n})$ (e.g. for $c_n$ with a finite digit representation: that's because the number of zeroes between each non-zero digit would increase indefinitely). But, is this correct at all? And if so, how do I show it rigorously?
If you allow meromorphic functions (and as a consequence, finite radius of convergence), you have $$ \frac{1}{1-x}=1+x+x^2+x^3+\cdots $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 0 }
Evaluating $\frac{1}{\pi} \int_a^b \frac{1}{\sqrt{x(1-x)}}dx$ I can't calculate the following integral: $$\frac{1}{\pi} \int_a^b \frac{1}{\sqrt{x(1-x)}}dx,$$ where $[a, b] \subset [0,1]$. Can someone, please, give me a hint? Thank you!
Another simple substitution exploiting the symmetry of the integrand: $x=1/2 + u \implies dx=du$. $$\int\frac{1}{\sqrt{x(1-x)}}dx=\int\frac{du}{\sqrt{(1/2+u)(1/2-u)}}=\int\frac{du}{\sqrt{1/4-u^2}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Degree of field extension $\mathbb{Q}\left(\sqrt{1 + \sqrt{3}}\right):\mathbb{Q}$ I think the degree $$ \left[\mathbb{Q}\left(\sqrt{1 + \sqrt{3}}\right):\mathbb{Q}\right] $$ is equal to four, but how do I find the minimum polynomial of such an extension? If I square the term I take care of one square root, but squaring again doesn't help me get rid of the $\sqrt{3}$ term. Any advice is very much appreciated!
$x = \sqrt{1 + \sqrt 3} \implies x^2-1 =\sqrt{3} \implies x^4-2x^2-2 = 0$. This polynomial is irreducible by Eisenstein's criterion, since $2$ divides all the coefficients except the first, and $2^2=4$ does not divide the constant term. Hence the degree of the extension is the degree of the minimal polynomial, which is $4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that an arithmetic function is multiplicative, but not completely multiplicative? Define an arithmetic function $\rho$ by $\rho(1)=1$ and $\rho(n)=2^m$, where $m$ is the number of distinct prime numbers in the prime factorization of $n$. Prove that $\rho$ is multiplicative, but not completely multiplicative. This is my first introduction to arithmetic functions, so I'm not quite sure how to prove these kinds of things. I can plug numbers into the $\rho$ function, but how can I show this through a formal proof?
For multiplicativity: if $\gcd(m,n)=1$ then the distinct primes of $mn$ are exactly the distinct primes of $m$ together with those of $n$, so $\rho(mn)=2^{j+k}=2^j 2^k=\rho(m)\rho(n)$, where $j$ and $k$ count the distinct primes of $m$ and $n$. To see that $\rho$ is not completely multiplicative, just show that $\rho(2)^2\ne \rho(2^2)$. Clearly $\rho(2)=\rho(4)=2$, so you're done.
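A brute-force check of both claims (my own sketch; `rho` is my name for the function):

```python
from math import gcd
from sympy import primefactors

def rho(n):
    return 2 ** len(primefactors(n)) if n > 1 else 1

# Multiplicative: for coprime m, n the sets of distinct primes are disjoint.
for m in range(2, 40):
    for n in range(2, 40):
        if gcd(m, n) == 1:
            assert rho(m * n) == rho(m) * rho(n)

# Not completely multiplicative:
print(rho(2) ** 2, rho(4))  # 4 2
```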
{ "language": "en", "url": "https://math.stackexchange.com/questions/1974955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving the area of a triangle formed at a parallelogram midpoint is 1/4 of the parallelogram ABCD is a parallelogram. X is the midpoint of AD and Y is the midpoint of BC. Show that the area of $\triangle {ABX}$ is $\frac{1}{4}$ the area of ABCD. Can you help me with this proof? Where should I start? I think it should be by proving $\triangle{DBC} \cong \triangle{DBA}$ using SAS, as DB is a common side, DC = AB since ABCD is a parallelogram, and $\angle {BDC} = \angle{DBA}$ are alternate angles. And I can also predict the use of the midpoint theorem here. Many thanks!
The triangle and the parallelogram have the same perpendicular height $h$, namely the distance from $B$ to the line $AD$. Since $AX=\frac{1}{2}AD$, the area of $\triangle ABX$ is $\frac{1}{2}\cdot\frac{AD}{2}\cdot h=\frac{1}{4}\,(AD\cdot h)$, i.e. $\frac{1}{4}$ the area of the parallelogram.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
positive elements in $\mathbb{M}_n(A)$ are ultraweakly dense in the positive part of $\mathbb{M}_n(A^{**})$ I am trying to read the book C*-Algebras and Finite-Dimensional Approximations. In the proof of Theorem 2.3.8, I can't understand the claim that the positive elements in $\mathbb{M}_n(A)$ are ultraweakly dense in the positive part of $\mathbb{M}_n(A^{**})$, where $A$ is a C*-algebra. 1. We know the ultraweak topology coincides with the weak operator topology on bounded sets, but how does one understand this claim using the Kaplansky density theorem? 2. To which topologies does the Kaplansky density theorem apply, besides the strong operator topology and the weak operator topology? Please help me.
It is true in general that $M_n (A)^+$ is ultraweakly dense in $M_n(A'')^+$. The first thing is to notice that $M_n (A'')=M_n (A)''$. So we can simply work with an inclusion $A\subset A''$ with $A$ (wot, sot, ultra wot) dense. Note first that, by using Kaplansky, the unit ball of $A$ is sot (and so, wot and ultrawot) dense in the unit ball of $A''$. This allows us to work with bounded nets. Next, $A^{\rm sa}$ is wot dense in $(A'')^{\rm sa}$. Indeed, if $x=x^*$ and $x_n\to x$ (wot), then $(x_n+x_n^*)/2\to x$ (wot). Now choose $x\in (A'')^+$ with $\|x\|\leq1$. By the above, there is a net of selfadjoints $x_n$ with $x_n\to x$ (wot). If we consider the nets of positive and negative parts, $\{x_n^+\}$ and $\{x_n^-\}$ (where $x_n=x_n^+-x_n^-$ and $x_n^+x_n^-=0$), by wot-compactness of the unit ball we may replace them by convergent subnets. Write $y=\lim_{wot} x_n^+$ and $z=\lim_{wot} x_n^-$. Then $y^{1/2}=\lim_{sot}(x_n^+)^{1/2}$ and similarly for $z$. We also have $(x_n^+)^{1/2}(x_n^-)^{1/2}=0$. As the sot preserves products of bounded nets, we get $y^{1/2}z^{1/2}=0$; but then $yz=0$. As $x=y-z$ it follows that $z=x^-=0$. Thus $$x=y=\lim_{wot}x_n^+. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Second derivative of bump function? My question is: does there exist a function $f\in C^{2}\left(\mathbb{R}\right)$ such that $$\begin{cases} f\left(x\right) & =1\quad\textrm{when }\left|x\right|<1,\\ f\left(x\right) & =0\quad\textrm{when }\left|x\right|\geq2,\\ f\left(x\right) & \in\left[0,1\right],\ \forall x\in\mathbb{R},\\ \left|f'\left(x\right)\right| & \leq2,\ \forall x\in\mathbb{R},\\ \left|f''\left(x\right)\right| & \leq2,\ \forall x\in\mathbb{R}? \end{cases}$$ Thanks.
Observation: Since $f$ is continuous, $\lim_{|x|\rightarrow 1^-}f(x)=1$, so the first inequality is not strict. (Alternately, $f^{-1}(1)$ is closed). Therefore, $f(1)=1$ and $f(2)=0$. So, by the mean value theorem, there is some $c\in(1,2)$ such that $f'(c)=\frac{f(2)-f(1)}{2-1}=-1$. Since $f$ is constant for $|x|<1$ and $|x|>2$ and the first derivative is continuous, $f'(1)=0$ and $f'(2)=0$. Now, we can see that $c$ above must be $\frac{3}{2}$ since otherwise, if $c<\frac{3}{2}$, then there is some $d$ in $(1,c)$ so that $f'(d)=\frac{f(c)-f(1)}{c-1}=-\frac{1}{c-1}$. Since $c<\frac{3}{2}$, $c-1<\frac{1}{2}$, so $\frac{1}{c-1}>2$, which contradicts the final assumption. This last observation can be extended as follows: suppose that $a\in(1,\frac{3}{2})$, then $|f'(a)|\leq 2(a-1)$ since, once again by the mean value theorem, there would be a point $b\in(1,a)$ so that $f''(b)=\frac{f'(a)-f'(1)}{a-1}=\frac{f'(a)}{a-1}$, by the last condition, $\frac{|f'(a)|}{a-1}\leq 2$ and the result follows. Therefore, we see that $f'(a)\geq -2(a-1)$ for all $a\in(1,\frac{3}{2})$. Similarly, $f'(a)\geq -2(2-a)$ for all $a\in(\frac{3}{2},2)$. Moreover, since the given bounds are not differentiable, the inequality must be strict at one point (and hence in a neighborhood of that point). However, $-1=f(2)-f(1)=\int_1^2 f'(x)dx> \int_1^{3/2} -2(x-1)dx+\int_{3/2}^2 -2(2-x)dx=-\frac{1}{4}-\frac{1}{4}=-\frac{1}{2}$. This is a contradiction. I feel like I've made a mistake somewhere (since the argument didn't need to be as tight as I expected it to be), but I can't spot it - let me know if you see a hole!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why won't a series converge if the limit of the sequence is 0? Just thinking about it in terms of logic, shouldn't the series of a sequence whose limit as $n$ approaches infinity is 0 converge? I know that the $n$th term test for divergence says that if a series is convergent, then the limit of its sequence is 0 and I also know there are some sequences for which it has been "proven" that their series does not converge even though the sequence converges to 0, but I just don't believe these tests. If we stretch $n$ out to infinity and the terms are approaching 0, then how is it possible for the sum of the terms to "overflow" and diverge if the terms are becoming negligibly small?
Do you think the series $$1+\frac12+\frac12 + \frac14+\frac14+\frac14+\frac14 + \frac18 + \cdots$$ converges? Note, there are $2$ terms equal to $\frac12$, $4$ terms equal to $\frac14$, $8$ terms equal to $\frac18$ and so on, with $2^i$ terms equal to $\frac{1}{2^i}$ for each $i\in\mathbb N$. You will probably agree that this series diverges. In fact, if you give me a number $m\in \mathbb N$, I can calculate exactly how many terms of the series you have to add together for the sum to reach $m$. For example, it takes $1$ term to reach $1$, it takes $1+2=3$ terms to reach $2$, and then $1+2+4=7$ terms to reach $3$. You can show, with a simple inductive argument, that you will reach $m$ after $$1+2+4+\dots + 2^{m-1}$$ terms, which is actually equal to $2^m-1$ and is certainly a finite number. It's good to understand the concept why this series diverges. The thing is that yes, the terms go to $0$, but they don't do so "fast enough". The problem is that once the terms hit $\frac14$, they stick at that number for $4$ steps, long enough for the sum to increase by $1$. And imagine what happens way way way down the line. The terms are equal to $\frac{1}{1024}$ for a whole $1024$ terms, for example. Sure, they will eventually fall to an even lower number, but they will stay on that number for even longer, again long enough for the whole sum to increase by $1$. Side fact: the series I wrote down at the start has the bonus property that each term is at least as large as the corresponding term of the series $$1+\frac12+\frac13+\frac14+\frac15+\cdots$$ which is known as the harmonic series and is the most famous divergent series. The same grouping idea, applied instead to the termwise smaller series $1+\frac12+\frac14+\frac14+\frac18+\frac18+\frac18+\frac18+\cdots$, shows that the harmonic series diverges too: the sum of its first $2^m$ terms is at least $1+\frac{m}{2}$.
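A small script (my own addition) confirming the count of $2^m-1$ terms:

```python
def terms_to_reach(m):
    """Number of terms of 1 + 1/2 + 1/2 + 1/4 + 1/4 + 1/4 + 1/4 + ...
    needed for the partial sum to reach m (these dyadic sums are exact)."""
    total, count, i = 0.0, 0, 0
    while total < m:
        for _ in range(2 ** i):      # 2^i copies of 1/2^i
            total += 1.0 / 2 ** i
            count += 1
            if total >= m:
                break
        i += 1
    return count

for m in range(1, 6):
    print(m, terms_to_reach(m), 2 ** m - 1)  # the last two columns agree
```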
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 6, "answer_id": 1 }
Is there a statement independent from PA and does not increase the consistency strength? There are many known independence results of $\mathsf{PA}$, for example, Goodstein theorem, Paris-Harrington theorem, and the reflection principle. But these examples imply the consistency of $\mathsf{PA}$. I guess not all independent statements of $\mathsf{PA}$ imply the consistency of $\mathsf{PA}$. I have tried to find such examples, but I can't even prove the existence of such a proposition. I would appreciate your help.
You can attack this kind of question by thinking about the Lindenbaum-Tarski algebra for PA. This can be viewed as a partial order $\preceq$ on the set of all sentences in the language of arithmetic, where we have $\phi \preceq \psi$ if and only if $PA + \psi \vdash \phi$. (It is possible to use the opposite order, but I prefer to have stronger sentences be higher in the order than weaker sentences). One fact about this order is that, when we use PA or other sufficiently strong theories, the order is dense: if $\phi \prec \psi$ then there is a sentence $\theta$ with $\phi \prec \theta \prec \psi$. One way to construct $\theta$ is to begin with a sentence $\chi$ which is independent of $\text{PA} + \phi + \lnot \psi$ and then let $\theta = \psi \lor (\phi \land \chi)$. So, to answer the question, if we let $\phi$ be a sentence provable in PA, and we let $\psi$ be Con(PA), then we have $\phi \prec \psi$, and so by density there is a sentence $\theta$ strictly between them. This sentence $\theta$ is not provable in PA, because it is strictly above $\phi$, but it does not imply Con(PA), because it is strictly below Con(PA). It is much more challenging to find "natural" examples of sentences independent of PA that are implied by, but do not imply, Con(PA). The general construction shows there are at least some sentences with that property, though. The example constructed as above is essentially $$\text{Con}(\text{PA}) \lor \text{Con}(\text{PA} + \lnot\text{Con}(\text{PA}))$$ where $\text{Con}(\cdot)$ is the Gödel/Rosser consistency sentence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Is it possible for an integral between $a$ and $a$ to have a value other than 0 Does there exist a function $f: \mathbb{R} \to \mathbb{R}$ with a constant $a \in \mathbb{R}$ such that $$\int_{a}^{a} f(x) \, dx \neq 0 \quad$$ holds?
Assuming that the integral is defined, the answer is NO. $$\int_{a}^{a} f(x) dx =\int_{\{a\}} f(x) dx$$ and since the integral on the right is taken over a set of measure $0$, the integral is $0$. However, notice that if instead of Lebesgue measure we put the counting measure on $\mathbb{R}$ (call it $\mu$), then $$\int_{a}^{a} f(x) d \mu =\int_{\{a\}} f(x) d \mu=f(a)$$ Thus, any measurable function that is non-zero at $a$ will give you a non-zero integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
Interesting mathematical artifact: Equality sign wrong for exponential function. I found an interesting case where it seems like an equality sign works wrong. Let's consider the following construction: $\frac{1+\Lambda}{2} e^{i\Lambda \phi}$ where $\Lambda = \pm1$, so $\Lambda^2=1$. Then I apply Euler formula: $\frac{1+\Lambda}{2} e^{i\Lambda \phi} = \frac{1+\Lambda}{2} (\cos \phi + i\Lambda \sin \phi)= \frac{1}{2} \cos \phi + \frac{\Lambda}{2} \cos \phi + \frac{i\Lambda}{2} \sin \phi + \frac{i}{2} \sin \phi = \frac{1+\Lambda}{2} e^{i\phi}$ where I have used $\sin(\Lambda \phi) = \Lambda \sin \phi$ and $\cos (\Lambda \phi) = \cos \phi$. However, this is just wrong! $e^{i\Lambda \phi}\neq e^{i \phi}$ even though the equality sign was not broken anywhere in between (at least it doesn't seem to be broken to me). What am I doing wrong?
The problem is the division by zero: The identity $\tfrac{1+\Lambda}{2}e^{i\Lambda\phi}=\tfrac{1+\Lambda}{2}e^{i\phi}$ holds as you have shown. For $\Lambda=1$, this implies $e^{i\Lambda\phi}=e^{i\phi}$, which is obviously correct. However, for $\Lambda=-1$, this means that $\tfrac{1+\Lambda}{2}=0$ and therefore, the last implication $\tfrac{1+\Lambda}{2}e^{i\Lambda\phi}=\tfrac{1+\Lambda}{2}e^{i\phi}\implies e^{i\Lambda\phi}=e^{i\phi}$ is not correct in this case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1975969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Can an unbounded sequence have a convergent Cesàro mean? I was wondering if an unbounded sequence may have a convergent Cesàro mean ($\frac{1}{n}\sum_{k=1}^n a_k$). I was maybe thinking of $$a_n = (-n)^n$$ as a sequence having a convergent mean, but I might be wrong. Anyway, how would you proceed to prove such an intuition?
Take the following example: $u_1=0$ and, for $n\geq 1$, $u_{2n}=-\sqrt{n}$ and $u_{2n+1}=\sqrt{n}$. The partial sums cancel in pairs, so the Cesàro means satisfy $v_{2n+1}=0$ and $v_{2n}=-\frac{\sqrt{n}}{2n}$. $(u_n)$ is unbounded. $(v_n)$ goes to $0$.
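A numerical illustration of this example (my own sketch, with $u_1=0$ as above):

```python
import math

def u(k):
    # u_{2n} = -sqrt(n), u_{2n+1} = +sqrt(n); u_1 = sqrt(0) = 0
    return -math.sqrt(k // 2) if k % 2 == 0 else math.sqrt((k - 1) // 2)

total = 0.0
for k in range(1, 10**6 + 1):
    total += u(k)
    if k % 200000 == 0:
        print(k, total / k)  # Cesaro means shrinking toward 0
```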
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
combinatorics: sum of product of integer compositions I am trying to solve a problem from Stanley's book; it says: Fix $k,n \in \mathbb{P}$. Show that: \begin{align} \sum a_1 a_2 \cdots a_k = \binom{n+k-1}{2k-1} \end{align} where the sum ranges over all compositions $(a_1 , a_2 , \ldots , a_k)$ of $n$ into $k$ parts. I am trying to reason like this: we need to find the coefficient $c_n = \sum a_1 a_2 \cdots a_k$ from this generating function -- \begin{align} \sum_n c_n x^n &= \sum_n \sum a_1 a_2 \cdots a_k x^n \\ &= \sum_n \sum a_1 a_2 \cdots a_k x^{a_1 + a_2 + \cdots + a_k}\\ &= \sum_n \sum a_1x^{a_1} a_2x^{a_2} \cdots a_kx^{a_k} \end{align} After that, I have no clue: how do I solve this? Moreover, what is the range of the inner sum? If we consider Mark Riedel's answer, and assume $n=4$, $k=2$, then the sum will be \begin{align} (z + 2z^2)^2 = z^2 + 4z^3 + 4z^4 \end{align} On the other hand, the compositions will be $(1,3), (2,2), (3,1)$; therefore the above sum should be counted as: \begin{align} (1\cdot 3)z^{1+3} + (2\cdot 2)z^{2+2} + (3\cdot 1)z^{3+1} &= 1z^1\cdot 3z^3 + 2z^2\cdot 2z^2 + 3z^3\cdot 1z^1\\ &= 3z^4 + 4z^4 + 3z^4 = 10z^4 \end{align} What's going on? What am I missing?
Compositions of $n$ into $k$ parts can be represented as a string of $n$ stars and $k-1$ bars. The $k-1$ bars break the stars into $k$ groups, and the number of stars in the $i^{th}$ group represents $a_i$. For each of these compositions, we are adding $a_1a_2\dots a_k$. This represents the number of ways to choose one star from each of the $k$ blocks, and color it red. For example, with $n=6, k=3$, and the composition $\star\star|\star\star\star|\star$, the number of ways to color one star in each group red like $\star\color{red}\star|\star\color{red}\star\star|\color{red}\star$ would be $2\cdot 3\cdot 1$. However, there is a more direct way to count these arrangements of black stars, red stars, and bars. If you ignore the black stars, then what remains is an alternating pattern of red stars and bars; in the example, this is $\color{red}\star|\color{red}\star|\color{red}\star$. There are $k+(k-1)=2k-1$ symbols which are red stars or bars, and $n+k-1$ symbols total. Therefore, there are $\binom{n+k-1}{2k-1}$ arrangements, since an arrangement is specified by choosing which of the $2k-1$ symbols are red stars or bars, and then assigning these in an alternating fashion.
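A brute-force verification of the identity for small $n$ and $k$ (my own sketch, not part of the answer):

```python
from itertools import product
from math import comb, prod

def comp_sum(n, k):
    # sum of a_1*...*a_k over all compositions of n into k positive parts
    return sum(prod(c) for c in product(range(1, n + 1), repeat=k)
               if sum(c) == n)

for n in range(1, 8):
    for k in range(1, n + 1):
        assert comp_sum(n, k) == comb(n + k - 1, 2 * k - 1), (n, k)
print(comp_sum(4, 2))  # 10 = C(5, 3), matching the n=4, k=2 example
```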
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976347", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Show that $L(HK) = L(H) + L(K) - L(H\cap K)$, where L is the length of the composition series of the group and H and K are normal subgroups of G. I already proved that $L(G) = L(H) + L(\dfrac{G}{H})$, and then I'm asked to prove that $L(HK) = L(H) + L(K) - L(H\cap K)$, but I fail to see a connection with what I already proved.
Hint: Remember that $$\frac{HK}{K}\simeq\frac{H}{H\cap K}$$ and show that $L(M/N)=L(M)-L(N)$ for any normal subgroup $N$ of the group $M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find integer that satisfies two congruences Find the integer between $0$ and $29\times 23$ $= 667$ that satisfies the two following congruences: $x ≡ 15$ (mod $23$) $x ≡ 1$ (mod $29$).
You can just look for a pattern. Each time you add $29$ to a number, the remainder when divided by $23$ increases by $6$, so you can see that: $$1≡1\pmod {23}, \;30≡7\pmod {23}, \;59≡13\pmod {23},\;\;...\;\;291≡15\pmod {23}$$ and there's your answer.
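The same search, done by brute force (my own addition):

```python
# Scan 0 <= x < 29*23 = 667 for the unique solution of both congruences.
for x in range(29 * 23):
    if x % 29 == 1 and x % 23 == 15:
        print(x)  # 291
```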
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Difficult limit problem involving sine and tangent I encountered the following problem: $$\lim_{x\to 0} \left(\frac 1{\sin^2 x} + \frac 1{\tan^2x} -\frac 2{x^2} \right)$$ I have tried to separate it into two limits (one with sine and the other with tangent) and applied L'Hôpital's rule, but even third derivative doesn't work. I also tried to simplify the expression a bit: $$\frac 1{\sin^2 x} + \frac 1{\tan^2 x} = \frac{1+\cos^2 x}{\sin^2 x} = \frac{ 1}{1-\cos x} + \frac 1{1+\cos x} -1$$ But I cannot make it work either. I would like answers with series expansion. Thanks in advance.
$$\frac{1}{\sin^2x}-\frac{1}{x^2}=\frac{x-\sin x}{x^3}\frac{x+\sin x}{x}\frac{x^2}{\sin^2x}\to\frac{1}{6}\cdot 2\cdot 1$$ Now try the remaining terms.
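For the record, a symbolic check of the full limit (my addition; the factorization above gives $\frac13$ for the sine piece, and the remaining tangent piece contributes $-\frac23$, so the total is $-\frac13$):

```python
import sympy as sp

x = sp.symbols('x')
expr = 1/sp.sin(x)**2 + 1/sp.tan(x)**2 - 2/x**2
print(sp.limit(expr, x, 0))  # -1/3
```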
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Can we show that all $2 \times 2$ matrices are sums of matrices with determinant 1? I came across a paper on the Sums of 2-by-2 Matrices with Determinant One. In the paper, which I have conveniently indicated here for reference, the author claims, but without proof, that a $2 \times 2$ matrix is a sum of elements of the special linear group, $SL_2(\mathbb{F})$, whose elements, $U$, are also $2 \times 2$ matrices, such that $|U|=1$. I was thinking of proving this by either technique. Let $A= \begin{bmatrix}a & b\\c & d\end{bmatrix}$, with $a, b, c, d \in \mathbb{F}$. Technique 1. Consider the following 4 matrices with determinant 1: $M_1= \begin{bmatrix}e & 0\\0 & 1/e\end{bmatrix}$, $M_2 = \begin{bmatrix}0 & -f\\1/f & 0\end{bmatrix}$, $M_3 = \begin{bmatrix}1/h & 0\\0 & h\end{bmatrix}$, and $M_4 = \begin{bmatrix}0 & 1/g\\-g & 0\end{bmatrix}$. We show that $\sum\limits_{i=1}^4 M_i = A$. Thus, we have $e + 1/h = a$, $1/e + h = d$, $-f + 1/g = b$, $1/f - g = c$. Technique 2. Consider $\{U_i\}_{i=1}^\infty \subset SL_2(\mathbb{F})$. Show that the sum of a countable number of $U_i$s is $A$. The problem I have here is that I don't know how to proceed from here. I don't know if either of these will be considered correct, though. I hope someone could help me out here. Thanks.
The decomposition of a matrix into a sum of matrices with determinant one is not unique. For example, $$ \begin{bmatrix} a&b\\c&d \end{bmatrix}= \begin{bmatrix} a&-1\\1&0 \end{bmatrix} + \begin{bmatrix}-1&b\\0&-1 \end{bmatrix} + \begin{bmatrix} 1&0\\c&1 \end{bmatrix} +\begin{bmatrix}0&1\\-1&d\end{bmatrix} $$ is another such decomposition into matrices with determinant 1. Another possibility is to further decompose these four matrices individually.
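A quick numerical check of this decomposition (my own addition; the entries `a, b, c, d` are arbitrary):

```python
import numpy as np

a, b, c, d = 2.0, -1.0, 3.0, 0.5
parts = [np.array([[a, -1], [1, 0]]),
         np.array([[-1, b], [0, -1]]),
         np.array([[1, 0], [c, 1]]),
         np.array([[0, 1], [-1, d]])]

print([round(np.linalg.det(P)) for P in parts])  # [1, 1, 1, 1]
print(sum(parts))                                # recovers [[a, b], [c, d]]
```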
{ "language": "en", "url": "https://math.stackexchange.com/questions/1976815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Particular solution of $y''+4y=x\cos x$ Find both the general solution $y_h$ and a particular solution $y_p$ for $$y''+4y=x\cos x$$ So far I've got $y_h$ from factoring the characteristic polynomial: $$y_h=C_1\sin2x+C_2\cos2x$$ But the $y_p$ part troubles me, any help?
For $y_p$, use the method of variation of parameters. Take $y_{1}=\sin 2x$ and $y_{2}=\cos 2x$ (the solutions from the complementary function) and let $g(x)=x\cos x$ be the forcing function. The Wronskian is $$ W=\left|\begin{array}{cc} y_{1} & y_{2} \\ y_{1}^{\prime} & y_{2}^{\prime} \end{array}\right| =\left|\begin{array}{cc} \sin 2 x & \cos 2 x \\ 2 \cos 2 x & -2 \sin 2 x \end{array}\right|=-2, $$ and the particular solution is $$ y_{p} =-y_{1} \int \frac{y_{2}\, g(x)}{W}\, d x+y_{2} \int \frac{y_{1}\, g(x)}{W}\, d x. $$ Carry out the two integrals to finish.
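As a check (my addition, not part of the answer), SymPy solves the ODE directly, and its particular part can be compared with what the integrals above produce:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x, 2) + 4*y(x), x*sp.cos(x)), y(x))
print(sol)  # expect: y(x) = C1*sin(2x) + C2*cos(2x) + x*cos(x)/3 + 2*sin(x)/9
```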
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
When to stop rolling a die in a game where 6 loses everything You play a game using a standard six-sided die. You start with 0 points. Before every roll, you decide whether you want to continue the game or end it and keep your points. After each roll, if you rolled 6, then you lose everything and the game ends. Otherwise, add the score from the die to your total points and continue/stop the game. When should one stop playing this game? Obviously, one wants to maximize total score. As I was asked to show my preliminary results on this one, here they are: If we simplify the game to getting 0 on 6 and 3 otherwise, we get the following: $$\begin{align} EV &= \frac{5}{6}3+\frac{25}{36}6+\frac{125}{216}9+\ldots\\[5pt] &= \sum_{n=1}^{\infty}\left(\frac{5}{6}\right)^n3n \end{align}$$ which is divergent, so it would make sense to play forever, which makes this similar to the St. Petersburg paradox. Yet I can sense that I'm wrong somewhere!
The question is missing the concept of utility, a function that specifies how much you value each possible outcome. In the game, the utility of ending the game with a certain score would be the value you place on that outcome. Although you could certainly argue it is implied in the question that the utility of a score is simply the score, I would like to add an answer that takes a non-trivial utility into account. If each point translated to 1000 USD, for example, you might have a utility that looks more like $U(x) = \log(x)$ than $U(x) = x$. Let's say that $U(x)$ is the utility of score $x$ for $x \ge 0$ and assume that $U$ is monotonically non-decreasing. Then we might say that the optimal strategy is that which maximizes $E[U(X)]$, where $X$ is a random variable representing the final score if one plays with the policy where you roll the die if and only if your current score is less than $t \in \mathbb{Z}_{\ge 0}$. (It is clear that the optimal policy must have this form because the utility is non-decreasing.) Let $Z$ denote current score. Suppose we are at a point in the game where our current score is $z \ge 0$. Then $$E[U(X)|Z = z] = \frac{1}{6} \left( U(0) + \sum_{i=1}^5 E[U(X)|Z=z+i] \right) \text{ if } z < t$$ $$E[U(X)|Z = z] = U(z) \text{ if } z \ge t$$ Note that for many choices of $U(x)$ the recurrence relation is very difficult to simplify, and that, in the case of choosing to roll the die, we must consider the expected change in utility from that roll and all future rolls. The figures below are examples of what the above recurrence relation gives for $U(x) = x$ and $U(x) = \log_2(x + 1)$. The expression $E[U(X)]$ means $E[U(X)|Z=0]$, because at the start we have $0$ points. The horizontal axis corresponds to different policies, and the vertical axis corresponds to expected utility under each policy. Gist with Python code
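For the plain $U(x)=x$ case, the recurrence is easy to evaluate exactly (my own sketch; the function and variable names are mine). It recovers the familiar answer that one should roll while the score is below $15$, the point where the expected one-roll gain $\frac{15-z}{6}$ turns nonpositive:

```python
from fractions import Fraction

def expected_score(t):
    """E[X] under the policy: roll while current score < t; here U(x) = x."""
    e = {}
    for z in range(t + 5, t - 1, -1):   # z >= t: stop and keep z
        e[z] = Fraction(z)
    for z in range(t - 1, -1, -1):      # z < t: roll (a 6 sends you to 0)
        e[z] = Fraction(1, 6) * sum(e[z + i] for i in range(1, 6))
    return e[0]

best = max(range(30), key=expected_score)
print(best, float(expected_score(best)))  # threshold 15 (tied with 16)
```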
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 11, "answer_id": 8 }
Sum of reciprocals of the perfect powers What is the sum of all of the numbers which can be written as the reciprocal of an integer that is a perfect power with exponent greater than one (excluding powers of one)? I'm asking about the sum over the distinct numbers which can be written as such reciprocals. $$ \mbox{This sum would start off as}\quad \frac{1}{4} + \frac{1}{8} + \frac{1}{9} + \frac{1}{16} + \frac{1}{25} + \cdots $$ Note that I didn't count $1/16$ twice, as we are only adding numbers which can be written in that form. I can see how the alternative sum (counting each representation separately) would probably have been easier to handle, since it equates to the sum of $\zeta\left(s\right) - 1$ over all integers $s$ larger than one. I know that the first sum must converge, because I have worked out that the sum of numbers which can be written in the form $1/\left(a^{b} - 1\right)$ converges to $1$, where $a,b > 1$. Is there any way of evaluating the first sum? Also, does the second sum converge as well, and how can we evaluate it? I have made a program ($\texttt{python}$) which carries out partial sums for these sorts of series. I'll attach the source code if anyone wants it.
Your sum contains the sum of all reciprocals of non-unit natural number powers of primes, that is, every number of the form $\frac{1}{p^k}$ for prime $p$ and $k\in \{2,3,...\}=\mathbb{N}\setminus\{1\}$. Since each number of this form is uniquely determined by the choice of $p$ and $k$, we obtain $\sum_{p}\sum_{i=2}^{\infty}\dfrac{1}{p^i}=\sum_p\dfrac{1}{p(p-1)}$, where $p$ ranges over the primes. Indeed, $\sum_{i=1}^{\infty}\dfrac{1}{p^i}=\dfrac{1}{p-1}$ (this follows from the geometric series; an easy way to see it is to consider the repeating decimal $0.111...$ in base $p$), and we have removed the leading term $\dfrac{1}{p}$, hence $\sum_{i=2}^{\infty}\dfrac{1}{p^i}=\dfrac{1}{p-1}-\dfrac{1}{p}=\dfrac{p}{p(p-1)}-\dfrac{p-1}{p(p-1)}=\dfrac{p-(p-1)}{p(p-1)}=\dfrac{1}{p(p-1)}$ As for the case at hand, we are no longer concentrating only on primes, but the prior counting result holds for non-prime bases as well. The problem now is: how do we avoid double counting numbers such as $2^4=4^2=16$? Again the solution is to pick on primes: the prime here is $2$, thus we will count that as $2^4$, and this tactic, in fact, handles all powers of $4$. But that doesn't solve the issue raised by numbers such as $36=2^2\cdot 3^2$. Here we have distinct primes that are part of the factorization; in this case, we can count it as $6^2$. We also have to deal with cases such as $4^2\cdot 3^2=2^4\cdot 3^2=12^2=144$. For this, we consider it as $12^2$. By now the pattern is maybe emerging: we want to take our sum of (non-unit) reciprocal powers over every base whose exponents in its prime factorization have gcd of 1. That is, we are interested in numbers $b=p_1^{k_1}\cdot p_2^{k_2}\cdot ...\cdot p_f^{k_f}$ for which $\gcd(k_1,...,k_f)=1$. The reason why this is the criterion we require is that we seek to eliminate recounts of numbers we already have, and any number which has $\gcd(k_1,...,k_f)=r\neq 1$ is the $r$th power of a number which does have exponent gcd $1$ (take the $r$th root); hence we have a unique way of representing every number we care about as a power of a number in the above form with $\gcd(k_1,...,k_f)=1$. Thus our sum is, in terms of the above notation: $\sum_b\sum_{i=2}^{\infty}\dfrac{1}{b^i}=\sum_b\dfrac{1}{b(b-1)}$. I am not sure if there is a nice way of characterizing the relevant numbers $b$ or not, but provided a suitably nice characterization can be found this may prove useful.
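Numerically (my own addition): a base $b$ has exponent-gcd $1$ exactly when $b$ is not itself a perfect power, so the sum can be estimated by sieving out perfect-power bases:

```python
N = 10**6
is_power = [False] * (N + 1)
a = 2
while a * a <= N:
    p = a * a
    while p <= N:
        is_power[p] = True
        p *= a
    a += 1

# sum of 1/(b(b-1)) over "primitive" bases b (not themselves perfect powers)
total = sum(1.0 / (b * (b - 1)) for b in range(2, N + 1) if not is_power[b])
print(total)  # ~0.8744, the sum of reciprocals of the distinct perfect powers
```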
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
Radical series and simple factors in the composition series The context is finitely generated modules over a finite-dimensional $\mathbb{K}$-algebra $A$, where $\mathbb{K}$ is an arbitrary field. Now, if we consider $A$ as a left $A$-module as usual, this module $A$ has many composition series which, according to the Jordan-Holder theorem, have the simple quotients in common, meaning that each simple module appears the same number of times in any series (and since any simple module has to be a quotient of $A$, these are also all the simple modules). We also have the notion of the radical series of $A$ $$A \supseteq \operatorname{rad}(A) \supseteq \operatorname{rad}(A)^2 \supseteq \dots \supseteq \operatorname{rad}(A)^l \supseteq 0 $$ in which, due to the basic properties of the radical and the fact that $\operatorname{rad}^n(A) = \operatorname{rad}(\operatorname{rad}^{n-1}(A))$, we have that each quotient is a semisimple $A$-module. Decomposing these quotients into direct sums of simple modules, we get a list of simple modules. My question is if all isomorphism types of simple $A$-modules also appear in this list, and if their multiplicity here is the same as the one they would have in any composition series. I think the answer is yes to both questions, but I am not able to show this.
All isotypes of simple left $A$-modules already appear in $A/\operatorname{rad}(A)$. This is because the two rings have the "same set" of simple modules, and since $A$ is Artinian, each one is a direct summand of $A/\operatorname{rad}(A)$. Since you can refine each semisimple quotient $\operatorname{rad}^i(A)/\operatorname{rad}^{i+1}(A)$ into a composition series, you can chain them all together to get a composition series for $A$, so yes, the multiplicities of simple modules appearing throughout are governed by the uniqueness of the composition series for $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given $f''(x) = -4f(x)$ for all $x \in$ $\mathbb{R}$ and $f(0) = f'(0) = 1$, how do I find $f(x)$ Let $f$ be a function so that $f''(x) = -4f(x)$ for all $x \in \mathbb{R}$ and $f(0) = f'(0) = 1$. i) Prove that $2f(x)\sin(2x)+f'(x)\cos(2x)=1$ for all $x \in \mathbb{R}$ ii) Prove that $2f(x)\cos(2x)-f'(x)\sin(2x)=2$ for all $x \in \mathbb{R}$ iii) Using (i) and (ii), prove that $f(x)=\frac12\sin(2x)+\cos(2x)$ for all $x \in \mathbb{R}$
If we put $t=2x$ and let prime denote the derivative with respect to $t$, then it is easy to see that we have $f''+f=0$, and this has the unique solution $f= f(0)\cos t + f'(0)\sin t$. Note that the given conditions imply $f(0)=1$ and $f'(0)=1/2$ (by the chain rule, $df/dt = \frac{1}{2}\,df/dx$), so that $f(x) = \cos 2x +(1/2)\sin 2x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that $f(g(x)) = x$ and $g(f(x))= x$, with $f(x) = x^e \bmod n$ and $g(x) = x^d \bmod n$ I want to solve the following problem: Let $d$ and $e$, both natural numbers, be each other's inverses modulo $\varphi(n)$, where $n = p\cdot q$ is a product of two different prime numbers $p$ and $q$. Let $M = \{0,1,2,\dots,(n-1)\}$ be the set of nonnegative integers smaller than $n$. Define two functions $f: M \rightarrow M$ and $g: M \rightarrow M$ as \begin{align*} f(x) = x^e \bmod n \quad \mbox{and}\quad g(x) = x^d \bmod n \end{align*} Show that $f(g(x)) = x$ and $g(f(x))= x$ for all $x \in M$. I understand that $f(x)$ and $g(x)$ will always produce numbers between $0$ and $n$, since $x$ is smaller than $n$. In that respect, $f(x) = g(x)$ no matter what $e$ and $d$ we choose. But I don't understand why $f(g(x)) = x$ and $g(f(x))= x$.
I think I figured it out. First I have to prove that $x^{k\varphi(n) + 1} \equiv x \pmod{n}$, even when I don't know if $\gcd(x,n) = 1$. We look at the system \begin{align} \begin{cases} y \equiv x \pmod p \\ y \equiv x \pmod q \end{cases} \end{align} Since $q$ and $p$ are two different prime numbers, they are relatively prime to each other. Then we have \begin{align*} &\varphi(n) = \varphi(p)\cdot \varphi(q) \end{align*} and so, by Euler's theorem (when $p \nmid x$; if $p \mid x$ both sides are $\equiv 0 \pmod p$, and similarly for $q$), \begin{align*} &x^{k\varphi(n) + 1} = (x^{\varphi(p)})^{k\varphi(q)} \cdot x \equiv 1^{k\varphi(q)} x \equiv x \pmod{p}\\ &x^{k\varphi(n) + 1} = (x^{\varphi(q)})^{k\varphi(p)} \cdot x \equiv 1^{k\varphi(p)} x \equiv x \pmod{q} \end{align*} Thus, a solution to the set of congruences above is \begin{align*} y = x^{k\varphi(n) + 1} \end{align*} By the Chinese Remainder Theorem, this solution is unique modulo $p\cdot q =n$. Thus, \begin{align*} x^{k\varphi(n) + 1} \equiv x \pmod{n} \end{align*} Then, I can apply the solution as proposed by marwalix, namely \begin{align*} &f(g(x)) = x^{ed} = x^{k\varphi(n) + 1} \equiv x \pmod{n}\\ &g(f(x)) = x^{de} = x^{k\varphi(n) + 1} \equiv x \pmod{n} \end{align*}
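A toy numerical illustration of the whole statement (my own addition; the primes and exponent are small arbitrary choices):

```python
p, q, e = 5, 11, 3
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)   # inverse of e mod phi(n); needs Python 3.8+

f = lambda x: pow(x, e, n)   # f(x) = x^e mod n
g = lambda x: pow(x, d, n)   # g(x) = x^d mod n

# Holds for every x in M, even those sharing a factor with n (e.g. 5, 22):
assert all(g(f(x)) == x == f(g(x)) for x in range(n))
print(d)  # 27, since 3*27 = 81 = 2*phi(n) + 1
```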
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If f is injective can you prove this relation? Given that $f$ maps $E$ to $F$ and $A$ is a subset of $E$, prove that if $f$ is injective then $f^{-1}(f(A))$ is a subset of $A$. Actually, I'm good at performing similar proofs, but I didn't understand why $f$ should be injective. I proved it without using this hypothesis and didn't know how to benefit from it. Please help me because I have an exam. Thanks :)
Choose $x\in f^{-1}(f(A))$. Then $f(x)\in f(A)$. Hence there exists $a\in A$ such that $f(x)=f(a)$. Injectivity of $f$ then implies that $x=a$. In particular, $x\in A$. Hence, $f^{-1}(f(A))\subset A$. To show that injectivity is really needed, consider for instance the non-injective function $$f:\{a,b,c\}\rightarrow\{a,b,c\}: \begin{cases} a\mapsto a\\ b\mapsto a\\ c\mapsto a \end{cases}.$$ If we take $A=\{a\}$, then $f^{-1}(f(A))=f^{-1}(\{a\})=\{a,b,c\}$ which is not a subset of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Are these two measures the same? Let $f_k :\{0,1\}^{\infty} \to \{0,1\}^k$ denote the projection map onto the first $k$ components. Now given a probability measure $P$ on the measurable space $( \Omega=\{0,1\}^{\infty},P(\Omega))$, we can push forward this probability measure using $f_k$ and we obtain a probability measure on $(\{0,1\}^k,P(\{0,1\}^k))$. Suppose $P_1$ and $P_2$ are two probability measures on $( \Omega=\{0,1\}^{\infty},P(\Omega))$ such that the probability measures induced on $(\{0,1\}^k,P(\{0,1\}^k))$ by $P_1$ and $P_2$ are the same for each $k$. Does this say that $P_1$ and $P_2$ are the same measure on $(\{0,1\}^{\infty},P(\{0,1\}^{\infty}))$? I think that $P_1$ and $P_2$ should be the same, but I don't have any formal argument. Any ideas?
The condition on $\mathbb{P}_1$ and $\mathbb{P}_2$ means that they agree on all cylinder sets, i.e. sets of the form $$ E_n=\{\omega\in \Omega:\omega_1=a_1,\dots,\omega_n=a_n\} $$ where $n$ is a natural number and $a_1,\dots,a_n\in\{0,1\}$. The cylinder sets (together with the empty set) form a $\pi$-system which generates the product $\sigma$-algebra on $\Omega$, and it follows from the $\pi-\lambda$ theorem that if two probability measures agree on a $\pi$-system, then they agree on the $\sigma$-algebra generated by this $\pi$-system. Therefore $\mathbb{P}_1=\mathbb{P}_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1977839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove or disprove: if $f_n$ converges uniformly on $A$, then $f_n$ converges uniformly on $\overline{A}$. I am interested in the statement Let $A$ be a set and $f_n$ be a sequence of functions defined on $\overline{A}$. Assume that $f_n$ pointwise converges to a function $f$ on $\overline{A}$. If $f_n$ converges uniformly on $A$ to $f$, then $f_n$ converges unifomly on $\overline{A}$. I initially wanted to use the contraposition of this statement to answer this question. But trying to prove it lead me to think that it may false. But cannot find counter-examples. My question is then $2$-fold: * *If the statement is false, what is a simple counter-example? *Under what conditions is the statement known to be true? It is true if $\overline{A}\setminus A$ is finite for example. I assume no a priori condition on $f_n$ and $f$.
False. Let $A=\mathbb Q$ and define $f_n(x)=f(x)=0$ for each $x\in\mathbb Q$ and $n\in\mathbb N$. If $x\in\mathbb R\setminus \mathbb Q$, then let $f_n(x)=x/n$ for each $n\in\mathbb N$ and $f(x)=0$. Clearly, $(f_n)_{n\in\mathbb N}$ converges pointwise to $f$ on $\overline A=\mathbb R$. However, convergence is not uniform on $\overline A$, even though it is on $A$. Incidentally, this counterexample reveals that the additional assumption that the limit function $f$ is continuous (even uniformly continuous) on $\overline A$ is still not strong enough to make the claim true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Basis of a finite free $R$-algebra starting with $1$, where $R$ is local Let $A$ be a finite free $R$-algebra where $R$ is a local ring. Does $A$ always have a basis $a_1, \ldots, a_n$ with $a_1=1$?
Yes, that's true: More generally, if $k:=R/{\mathfrak m}$ is the residue field of $R$ and if $M$ is a finite free $R$-module such that $\{\overline{m_1},\ldots,\overline{m_n}\}$ is a basis of $M\otimes_R k=M/{\mathfrak m}M$, then ${\mathscr B}:=\{m_1,\ldots,m_n\}$ is a basis of $M$. For the proof, note that the matrix expressing ${\mathscr B}$ in terms of any fixed basis ${\mathscr C}$ of $M$ has non-vanishing determinant in $k=R/{\mathfrak m}$, hence its determinant is invertible in $R$ since $R$ is local. Applying this with $M=A$: since $\bar 1\neq 0$ in the $k$-algebra $A/{\mathfrak m}A$, we can extend $\bar 1$ to a basis $\{\bar 1,\overline{a_2},\ldots,\overline{a_n}\}$ of $A/{\mathfrak m}A$, and then $\{1,a_2,\ldots,a_n\}$ is a basis of $A$ by the above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving two lines are perpendicular Given $z_1,z_2,z_3 $ and $z_4$ are complex numbers, prove that the line joining $z_1,z_2$ and the line joining from $z_3,z_4$ are perpendicular iff $Re\{(z_1-z_2)(\bar z_3-\bar z_4)\}=0$. Try not to use polar form. I try to start with writing $Re\{(z_1-z_2)(\bar z_3-\bar z_4)\}=Re\{z_1\bar z_3\}-Re\{z_1\bar z_4\}-Re\{z_2\bar z_3\}+Re\{z_2\bar z_4\}$ (I'm not sure if it's right) Then how can I make use of the perpendicular condition? Any hints for the reverse direction, or I just have to reverse the argument? Thank you!
Let $z_1-z_2=ae^{i\theta}$ and $z_3-z_4=be^{i\phi}$, then $$(z_1-z_2)\overline{(z_3-z_4)} = abe^{i(\theta-\phi)}$$ $$\operatorname{Re}[(z_1-z_2)\overline{(z_3-z_4)}] = ab\cos (\theta-\phi)$$ \begin{align*} \operatorname{Re}[(z_1-z_2)\overline{(z_3-z_4)}]=0 & \iff \cos (\theta-\phi) =0 \\ & \iff \theta-\phi=\left(n+\frac{1}{2} \right) \pi \\ & \iff (z_1-z_2)\perp (z_3-z_4) \end{align*} where $a$, $b> 0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Separable equation $\sin3x\,\mathrm{d}x+2y(\cos3x)^3\,\mathrm{d}y=0$ Find the solution by treating this as a separable equation $$\sin3x\mathrm{d}x +2y\cos^33x\mathrm{d}y=0$$ So far I got this $$\frac{\sin3x}{\cos^33x}\mathrm{d}x + 2y\mathrm{d}y = 0$$ $$ 2y\mathrm{d}y = -\frac{\sin3x}{\cos^33x}\mathrm{d}x$$ Integrate both sides $$\int2y\mathrm{d}y = -\int\frac{\sin3x}{\cos^33x}\mathrm{d}x$$ let $U=\sin3x$ and $\mathrm{d}u=3\cos3x \mathrm{d}x$ $$y^2=\int\frac{u}{(\mathrm{d}u)^3}$$ Now $\mathrm{d}u$ has become a denominator and is cubed; I'm not sure how to proceed. Thanks in advance
Hint:$$y^2=-\int\frac{\sin3x}{\cos^33x}dx$$ $$\cos 3x=u\Rightarrow-3\sin3xdx=du\Rightarrow-\sin3xdx=\frac{du}{3}$$ $$y^2=\int\frac{du}{3u^3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find partial derivatives in very abstract case when F is just F(x1,x2) and how to express it correctly? I've just started learning differential calculus and there's one task that I don't completely understand. It sounds like: "Given $Y=F(x_1,x_2)+f(x_1)+g(x_2)$, find $\frac {\partial Y}{\partial X_1}$, $\frac {\partial ^2 Y}{\partial X_1^2}$ and $\frac {\partial ^2 Y}{\partial X_1 \partial X_2}$". They don't describe exactly what the F, f and g functions look like. Still, it's quite easy to find the partial derivative for the $f(x_1)$ and $g(x_2)$ ($\frac {\partial Y}{\partial X_1} f(x_1) = f'(x_1)$ and $\frac {\partial Y}{\partial X_1} g(x_2) = 0$ as $x_2 = const$). But what to do with the "big" two-argument function? What notation to use? Will $$\frac {\partial Y}{\partial X_1} F(x,y) = \frac {\partial Y}{\partial X_1} F(x,y)_{x_2}$$ be just the required answer? ($x_2$ subscript means that we consider it a constant in this case) But how to write the answers for $\frac {\partial ^2 Y}{\partial X_1^2}$ and $\frac {\partial ^2 Y}{\partial X_1 \partial X_2}$? Will the result for $\frac {\partial ^2 Y}{\partial X_1^2}$ look just like the one above? And what about $\frac {\partial ^2 Y}{\partial X_1 \partial X_2}$? I've tried WolframAlpha just to give me some clues, but it uses some notation I don't understand: $$\frac {\partial Y}{\partial X_1} F(x,y) = F^{(1,0)}(x,y)$$ I'm getting quite confident when it comes to differentiating some "concrete" functions but I somehow get stuck when it comes to such abstract situations. Will appreciate any help, Thanks, Paul
$$Y=F(x_1,x_2)+f(x_1)+g(x_2) \\ \implies \frac{\partial}{\partial x_1} Y = \frac{\partial}{\partial x_1} \left[F(x_1,x_2)+f(x_1)+g(x_2)\right] \\ \begin{align}\implies \require{enclose}\enclose{box}{\frac{\partial Y}{\partial x_1}} &= \frac{\partial F(x_1,x_2)}{\partial x_1}+ \frac{\partial f(x_1)}{\partial x_1}+\frac{\partial g(x_2)}{\partial x_1} \\ &= \frac{\partial F(x_1,x_2)}{\partial x_1}+ \frac{df(x_1)}{dx_1}+0 \\ &\enclose{box}{= \frac{\partial F(x_1,x_2)}{\partial x_1}+ \frac{df(x_1)}{dx_1}}\end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need some hints for proving a logarithmic inequality. $$\frac{\log_ax}{\log_{ab}x} + \frac{\log_b{x}}{\log_{bc}x} + \frac{\log_cx}{\log_{ac}x} \ge 6$$ Did as you suggested and got this; I'm stuck again: $$\log_ab + \log_bc + \log_ca \ge 3$$
If you write $\frac{\log_a x}{\log_{ab} x}=\frac{\frac{\ln x}{\ln a}}{\frac{\ln x}{\ln ab}} = \frac{\ln ab}{\ln a} = 1+\frac{\ln b}{\ln a}$, and something similar with the other fractions, you can then use the familiar inequality $$\frac{x}{y}+\frac{y}{z}+\frac{z}{x}\ge3 \qquad (x,y,z>0),$$ which can be derived easily from the inequality between arithmetic and geometric means.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Show that if two maximal values are equal on continuous functions, then there exists $\psi \in [a,b]$ with $f(\psi) = g(\psi)$ Let $f,g : [a,b] \rightarrow \mathbb{R}$ be continuous. We know that $f$ and $g$ have maximal values, as they are continuous on a closed interval. Let $M_f$ be the maximal value of $f$, and $M_g$ the maximal value of $g$. Show that if $M_f$ = $M_g$, then there exists $\psi \in [a,b]$ with $f(\psi) = g(\psi)$ Would it suffice to show that $\psi$ = maximal values, and show that this is an example which shows the exist of such a $\psi$?
Suppose that $f(x_1) = M_f$ and $g(x_2) = M_g$ and assume without loss of generality that $x_1 < x_2$. Now consider $h:=f-g$ restricted to the interval $[x_1,x_2]$. Note that $$ h(x_1) = f(x_1) - g(x_1) = M_g - g(x_1) \geq 0 $$ and $$ h(x_2) = f(x_2) - M_f \leq 0 $$ So if $h(x_1) = 0$ or $h(x_2) = 0$ we are done. Else, the intermediate value theorem applies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1978935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are Cartesian and spherical coordinates smoothly compatible? And is the transition map a global diffeomorphism? Consider the transition from spherical coordinates $(r, \theta, \varphi)$ to Cartesian coordinates $(x, y, z)$, given by the map $$F:(0,\infty) \times [0, \pi] \times [0, 2 \pi) \to \mathbb R^3,\qquad (r,\theta,\varphi)\mapsto (x,y,z)$$ where \begin{align} x &= r \sin \theta \cos \varphi \\ y &= r \sin \theta \sin \varphi \\ z &= r \cos \theta \end{align} with the inverse relations \begin{align} r&=\sqrt{x^2 + y^2 + z^2} \\ \theta &= \arccos\frac{z}{\sqrt{x^2 + y^2 + z^2}} = \arccos\frac{z}{r} \\ \varphi &= \text{angle}(y,x) \end{align} where the function $\text{angle}:\mathbb R^2\backslash\{0\}\to [0,2\pi)$ is defined (awkwardly) as \begin{align}\label{eq:angle_function} \text{angle}(y,x)= \left\{ \begin{matrix} \arctan(\frac y x) &\text{if } x > 0 \text{ and } y\geq 0\\[2px] \arctan(\frac y x) +2\pi &\text{if } x > 0 \text{ and } y< 0\\[2px] \arctan(\frac y x) + \pi &\text{if } x < 0 \\[2px] +\frac{\pi}{2} &\text{if } x = 0 \text{ and } y > 0 \\[2px] +\frac{3 \pi}{2} &\text{if } x = 0 \text{ and } y < 0 \end{matrix}\right. \end{align} The Jacobian matrix of $F$ is given by \begin{align} J_{\mathbf F}(r, \theta, \varphi) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial \theta} & \dfrac{\partial x}{\partial \varphi} \\[10px] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial \theta} & \dfrac{\partial y}{\partial \varphi} \\[10px] \dfrac{\partial z}{\partial r} & \dfrac{\partial z}{\partial \theta} & \dfrac{\partial z}{\partial \varphi}\end{bmatrix} = \begin{bmatrix} \sin \theta \cos \varphi & r \cos \theta \cos \varphi & - r \sin \theta \sin \varphi \\ \sin \theta \sin \varphi & r \cos \theta \sin \varphi & r \sin \theta \cos \varphi \\ \cos \theta & - r \sin \theta & 0 \end{bmatrix} \end{align} which has determinant $\det J_{\mathbf F}(r, \theta, \varphi) = r^2\sin\theta$. I have some questions about this map: Is $F$ a diffeomorphism between its domain and its image $\mathbb R^3\backslash\{0\}$? If so, can someone show how to prove this? References to proofs are also appreciated. At least I see that $F$ is bijective and smooth, but I am not so sure about the inverse, especially the $\varphi$ part, although intuitively it seems clear. I found that the map is discussed in John Lee's Introduction to Smooth Manifolds on page 167, where he proves that $F$ (but restricted to $0<\theta<\pi$) is a local diffeomorphism. Then he states that $F$ is also a diffeomorphism whenever the domain is restricted to an open set in $\mathbb R^3$, but I don't see how he comes to that conclusion. Could someone explain this? Moreover, (as I already asked above) does this result also extend to 'my' domain of $F$? And finally, I imagine that the fact that $F$ is an (at least local) diffeomorphism implies that spherical coordinates are smoothly compatible with the standard differentiable structure on $\mathbb R^3$, so that a map between manifolds with domain or codomain $\mathbb R^3$ is smooth w.r.t. Cartesian coordinates if and only if it is smooth w.r.t. spherical coordinates. Am I right about this? Thanks! (Note that in Lee's textbook the roles of $\theta$ and $\phi$ are interchanged.)
What you can use here is 1) A smooth map whose differential at $p$ is an isomorphism is a diffeomorphism in a neighbourhood of $p$. 2) A bijective local diffeomorphism is a diffeomorphism. So in particular from the Jacobian determinant you see that $F$ is a local diffeomorphism at any point for which $r\neq 0$ and $\theta\neq 0, \theta\neq\pi$. Therefore with domain $(0, \infty)\times(0,\pi)\times(0,2\pi)$ the map $F$ is a diffeomorphism. To cover the whole sphere you need more than one chart.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Group a range of integers such as no pair of numbers is shared by two or more groups This is a duplicate of another question from StackOverflow. I've been advised to post it on Mathematics by another user who clearly has more experience in combinatorics than myself, and, although I have my doubts, I hope he is right. You are given two numbers, $N$ and $G$. The goal is to create an algorithm to split a range of integers $[1-N]$ in equal groups of $G$ numbers each, in reasonable time. Each pair of numbers must be placed in exactly one group. Order does not matter. For example, given $N=9$ and $G=3$, I could get these 12 groups: 1-2-3 1-4-5 1-6-7 1-8-9 2-4-6 2-5-8 2-7-9 3-4-9 3-5-7 3-6-8 4-7-8 5-6-9 As you can see, each possible pair of numbers from 1 to 9 is found in exactly one group. I should also mention that such grouping cannot be done for every possible combination of $N$ and $G$.
Talking about pairs of numbers makes me naturally think of edges in graphs. Each pairing can be represented by an edge of the graph. Since we want all possible pairings (without repeats), we're looking at the desired state of a complete graph on $N$ vertices, $K_N$. Each group is a complete subgraph on $G$ vertices that produces pairs in accordance with its edges. The problem can be regarded as finding a decomposition of $K_N$ into copies of $K_G$ - a set of edge-disjoint subgraphs that account for all the edges in $K_N$. There are $N(N-1)/2$ edges in $K_N$, and $N-1$ edges from each vertex, and similarly there are $G(G-1)/2$ edges in $K_G$, and $G-1$ edges from each vertex. So as a minimum consideration we need: $$\begin{align} G-1 &\mid N-1 \tag{1} \\ G(G-1)/2 &\mid N(N-1)/2 \tag{2} \end{align}$$ * *the first so that the $k=(N-1)/(G-1)$ groups involving any one element exhaust the edges from (pairs with) that element, and *the second to ensure that the groups can exhaust all pairings exactly. For example, with $G=3$, $N$ must be odd and either $N$ or $N-1$ must be divisible by $3$ (to satisfy the second condition). So, for example, $N=5, G=3$ is not possible. However for $(N,G) = (7,3)$, we have $2 \mid 6$ and $3 \mid 21$, so those criteria are met and the $K_7$ graph decomposes into $7$ $K_3$ graphs: Numbering the vertices from $1$ at the top clockwise, this corresponds to groups $\{1,2,6\},\{1,3,7\},\{1,4,5\},\{2,3,4\},\{2,5,7\},\{3,5,6\},\{4,6,7\}$ For $G=4$, the smallest $N$ for which divisibility is met is $13$, and that works as follows: $\{1,2,3,4\}$ $\{1,5,6,7\}$ $\{1,8,9,10\}$ $\{1,11,12,13\}$ $\{2,5,8,11\}$ $\{2,6,9,12\}$ $\{2,7,10,13\}$ $\{3,5,9,13\}$ $\{3,6,10,11\}$ $\{3,7,8,12\}$ $\{4,5,10,12\}$ $\{4,6,8,13\}$ $\{4,7,9,11\}$ My expectation is that whenever the divisibility criteria are met, it will be relatively simple to generate the required sets just by tracking which pairings have been used.
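For completeness, here is a small Python sanity check (the helper name is mine, not a standard routine) that a proposed grouping really does cover every pair exactly once, applied to the $N=9$, $G=3$ example from the question:

```python
from itertools import combinations

def verify_decomposition(n, groups):
    """Check that every pair from 1..n appears in exactly one group."""
    seen = set()
    for g in groups:
        for pair in combinations(sorted(g), 2):
            if pair in seen:
                return False          # some pair covered twice
            seen.add(pair)
    return len(seen) == n * (n - 1) // 2   # every pair covered

groups_9_3 = [(1,2,3),(1,4,5),(1,6,7),(1,8,9),(2,4,6),(2,5,8),
              (2,7,9),(3,4,9),(3,5,7),(3,6,8),(4,7,8),(5,6,9)]
print(verify_decomposition(9, groups_9_3))  # True
```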
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
trouble with double/iterated integration of $\int^1_0[\int^1_0v(u+v^2)^4du]dv$ I have: For $\int^1_0v(u+v^2)^4du$: u substitution (using x instead since there's a u in there already) with $x=(u+v^2), dx/du=1$ $v\int^1_0x^4=v\frac{1}{5}x^5=v\frac{(u+v^2)^5}{5}-\frac{v(0+v^2)^5}{5}|^1_0=\frac{(v+v^3)^5}{5}-\frac{v^{15}}{5}$ then $ \frac{1}{5}[\int^1_0(v+v^3)^5dv-\int^1_0v^{15}dv]$ used substitution again with $x= (v+v^3),dx/du=(1+3v^2)$ for the first one $\int^1_0\frac{x^5dx}{(1+3v^2)}->\frac{1}{1+3v^2}\int^1_0x^5dx=\frac{1}{1+3v^2}\frac{1}{6}x^6|^1_0->\frac{1}{1+3v^2}\frac{1}{6}(v+v^3)^6|^1_0$ $= \frac{1}{24}*1-0$ for the second: $\int^1_0 v^{15}=\frac{1}{16}v^{16}|^1_0=\frac{1}{16}$ $\frac{1}{5}[\frac{1}{24}-\frac{1}{16}] = -\frac{1}{240}$ The answer is supposed to be 31/30.
An alternate approach would be to change the order of integration. This will greatly reduce the difficulty of the integral. As the limits of the inner integral do not depend upon $v$ this trivially becomes: $$\int^1_0[\int^1_0v(u+v^2)^4\ dv]\ du$$ $$=\int^1_0\left[\frac{1}{10}\bigg((u+v^2)^5\bigg)_0^1\right]\ du$$ $$=\int^1_0\frac{1}{10}\left[(u+1)^5-(u+0)^5\right]\ du$$ $$=\frac{1}{10}\int^1_0(u+1)^5-u^{5}\ du$$ $$=\frac{1}{10}\bigg(\frac{1}{6}(u+1)^6-\frac{1}{6}u^6\bigg)_0^1$$ $$=\frac{1}{10}\bigg(\frac{1}{6}(1+1)^6-\frac{1}{6}1^6-\left(\frac{1}{6}(1+0)^6-\frac{1}{6}0^6\right)\bigg)$$ $$=\frac{31}{30}$$
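As a quick check, sympy confirms the value, and also that the order of integration is immaterial here (this is just a verification sketch):

```python
import sympy as sp

u, v = sp.symbols('u v')
f = v * (u + v**2)**4

# both orders of integration give the same value, 31/30
order1 = sp.integrate(f, (v, 0, 1), (u, 0, 1))
order2 = sp.integrate(f, (u, 0, 1), (v, 0, 1))
print(order1, order2)  # 31/30 31/30
```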
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solving a system of equations involving functions Solve this system of equations: $$ \left\{\begin{array}{cccccr} \displaystyle{\mathrm{f}\left(x\right)} & \displaystyle{+} & \displaystyle{3\mathrm{f}\left(x - 1 \over x\right)} & \displaystyle{=} & \displaystyle{7x} & \displaystyle{\qquad\qquad\qquad\qquad\left(\mathrm{A}\right)} \\ \displaystyle{\mathrm{f}\left(x - 1 \over x\right)} & \displaystyle{+} & \displaystyle{3\mathrm{f}\left(1 \over 1 - x\right)} & \displaystyle{=} & \displaystyle{7x - 7 \over x} & \displaystyle{\left(\mathrm{B}\right)} \\ \displaystyle{\mathrm{f}\left(1 \over 1 - x\right)} & \displaystyle{+} & \displaystyle{3\mathrm{f}\left(x\right)} & \displaystyle{=} & \displaystyle{7 \over 1 - x} & \displaystyle{\left(\mathrm{C}\right)} \end{array}\right. $$ I've never solved a system of equations with functions, so I'm brand new to this concept. Could someone clue me in or provide a solution? Thanks!
When given such a system of equations, you are expected to find a function (here $f$) that satisfies all those equations for all values of $x$. Let $\frac{x-1}x=y$ and $\frac1{1-x}=z$. Then $$f(x)+3f(y)=7x\tag1$$ $$f(y)+3f(z)=7y\tag2$$ $$f(z)+3f(x)=7z\tag3$$ Eliminate $f(z)$ by subtracting $(3)$ from $(2)$ thrice: $$f(y)-9f(x)=7y-21z\tag4$$ Eliminate $f(y)$ by subtracting $(4)$ from $(1)$ thrice: $$f(x)+27f(x)=28f(x)=7x-21y+63z$$ $$4f(x)=x-3y+9z$$ Hence we have the desired expression for $f(x)$: $$f(x)=\frac14\left(x-\frac{3(x-1)}x+\frac9{1-x}\right)$$ $$=\frac{x^2(1-x)-3(x-1)(1-x)+9x}{4x(1-x)}$$ $$=\frac{x^2-x^3+3-6x+3x^2+9x}{4x(1-x)}$$ $$=\frac{-x^3+4x^2+3x+3}{4x(1-x)}$$
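If you want to double-check the final formula, a short sympy verification (just a sketch) confirms that it satisfies all three equations identically:

```python
import sympy as sp

x = sp.symbols('x')
f = lambda t: sp.Rational(1, 4) * (t - 3*(t - 1)/t + 9/(1 - t))

y = (x - 1) / x
z = 1 / (1 - x)

print(sp.simplify(f(x) + 3*f(y) - 7*x))          # 0  (equation A)
print(sp.simplify(f(y) + 3*f(z) - 7*(x - 1)/x))  # 0  (equation B)
print(sp.simplify(f(z) + 3*f(x) - 7/(1 - x)))    # 0  (equation C)
```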
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Second order linear differential equations: Particular Integral How do they find the particular integral? These two lines are not giving me any clue. Could someone please explain?
The method of variation of parameters, however, provides one extra term. $y_p=-\frac{1}{4}\mathrm{e}^{-2x}+\frac{x}{2}\mathrm{e}^{-2x}$ Solution Procedure: Two fundamental solutions are $y_1(x)=\mathrm{e}^{-2x}$ and $y_2(x)=\mathrm{e}^{-4x}$. Thus, $y_p=y_2(x)\int\frac{y_1(x)f(x)}{W(y_1,y_2)}\mathrm{d}x-y_1(x)\int\frac{y_2(x)f(x)}{W(y_1,y_2)}\mathrm{d}x$ where, $W(y_1,y_2)=y_1(x)y_2'(x)-y_1'(x)y_2(x)$ and $f(x)=\mathrm{e}^{-2x}$.
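The ODE itself is only shown in the image, but the data quoted in this answer (fundamental solutions $e^{-2x}$, $e^{-4x}$ and right-hand side $f(x)=e^{-2x}$) pin it down as $y''+6y'+8y=e^{-2x}$. Under that assumption, a quick sympy check of $y_p$:

```python
import sympy as sp

x = sp.symbols('x')
y_p = -sp.Rational(1, 4)*sp.exp(-2*x) + (x/2)*sp.exp(-2*x)

# residual of y'' + 6y' + 8y - e^(-2x); it simplifies to 0
residual = sp.diff(y_p, x, 2) + 6*sp.diff(y_p, x) + 8*y_p - sp.exp(-2*x)
print(sp.simplify(residual))  # 0
```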
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$V = V_1\oplus V_2\oplus....\oplus V_k$, show that $f V =f V_1\oplus f V_2\oplus...\oplus f V_k$. Let $T$ be a linear operator on the vector space $V$ over the field $F$. If $f$ is a polynomial over $F$ and $a$ is in $V$, let $fa = f(T)a$. If $V_1, . . . , V_k$ are $T$-invariant sub-spaces and $V = V_1\oplus V_2\oplus....\oplus V_k$, show that $f V =f V_1\oplus f V_2\oplus...\oplus f V_k$. Let $a = v_1 + v_2 +...+ v_k$. Since the $V_i$'s are $T$-invariant we have $Tv_i \in V_i$ and hence $fv_i = f(T)v_i \in fV_i$. Can we conclude from here? Please tell me if my logic is wrong.
You almost completed one part of the question. Since the $V_i$ are $T$-invariant, they are also $f(T)$ invariant and so $fv_i = f(T)v_i \in f(T)(V_i) = fV_i$ which shows that any vector $fa \in fV$ can be written as a sum of vectors from $fV_i$. What is left is to show that the sum is in fact a direct sum, or, in other words, each $fa$ can be represented uniquely as a sum of vectors from $fV_i$. To see this, write $0 = w_1 + \dots + w_k$ with $w_i \in fV_i$. Since $V_i$ is $f(T)$-invariant, we have $fV_i \subseteq V_i$ and we also have a representation of $0$ as a sum of elements from $V_i$. Since the $V_i$ form a direct sum, we have $w_i = 0$ for all $i$ which shows that the $fV_i$ also form a direct sum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Functional equation (N to N) Find all $f : \mathbb{N} \to \mathbb{N}$ which satisfy the equation: $f(d_1)f(d_2)...f(d_n)=N$ Where $N$ is a natural number and $d_i, 1 \leq i \leq n$ are all of the divisors of $N$.
Here is my attempt. Any $N$ will have a prime power factorisation, so its divisors are all powers of primes, or a product of prime powers. Consider $N$ = $p^2$ with $p$ prime. Taking $f(p) = p$ and $f(1) = 1$ as given, we have $f(1)f(p)f(p^2)=p^2$, and so $f(p^2) = p$. Generalising, $f(p^k)=p$, for any positive integer $k$. Now when we have more than one prime in the prime power factorisation of $N$, and at least one of those primes has multiplicity greater than $1$, there will be divisors of $N$ which are not prime powers (composite numbers with more than one prime factor). e.g. $pq$ divides $p^2q^2$. But $f$ acting on these divisors must give $1$, since otherwise there is no way that $f(c_1)f(c_2)$ can equal $1$, where $c_1$ and $c_2$ are composite numbers, each with more than one prime factor. To see this, recall that the $f$ outputs natural numbers, and $1 \times 1 = 1$ is the only way to obtain $1$ from the product of two natural numbers. Summarising: $f(1) = 1$, $f(p^k) = p$, for $p$ prime and any positive integer $k$. For all natural numbers $l$ with more than one prime factor, $f(l) = 1$.
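An empirical check of this answer in Python (using sympy's divisors and factorint; this is verification, not proof):

```python
from math import prod
from sympy import divisors, factorint

def f(m):
    if m == 1:
        return 1
    pf = factorint(m)          # prime factorisation {p: exponent}
    if len(pf) == 1:           # m is a prime power p^k
        return next(iter(pf))  # f(p^k) = p
    return 1                   # m has more than one prime factor

# product of f over all divisors of N should equal N
for N in range(1, 500):
    assert prod(f(d) for d in divisors(N)) == N
print("product over divisors equals N for all N up to 499")
```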
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If the Fourier integral of a function exists, is it always equal to the Fourier cosine integral of that function? I read that every function which is piecewise continuous and absolutely integrable has a Fourier integral representation, which reduces to a Fourier sine or cosine integral according as the function is odd or even. I also read that a function which is neither even nor odd has both a Fourier cosine and a Fourier sine integral representation. Does that mean that if the Fourier integral of a function exists, it is always equal to the Fourier cosine integral of that function?
A function $f(x)=f_{e}(x)+f_{o}(x)$ where $f_e$ is even, $f_o$ is odd: $$ f_e(x)=\frac{f(x)+f(-x)}{2},\;\; f_o(x)=\frac{f(x)-f(-x)}{2}. $$ If $f$ is even, then $f_e=f$ and $f_o=0$. If $f$ is odd, then $f_e=0$ and $f_o=f$. Hence the Fourier integral of $f$ splits as the Fourier cosine integral of $f_e$ plus the Fourier sine integral of $f_o$; it reduces to a pure cosine integral precisely when $f_o=0$, i.e. when $f$ is even. So no: the Fourier integral is not in general equal to the Fourier cosine integral of the function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the $\lambda$-generalized eigenspace invariant? The following comes from a textbook; I am very confused about the last sentence. For a matrix $A$, a subspace $V$ is invariant w.r.t. $A$ if $AV\subseteq V$. From my understanding we need to show $\forall x\in V_{\lambda_i},Ax\in V_{\lambda_i}$, i.e. $(A-\lambda_iI)^n(Ax)=0$.
Indeed, you need to show $(A-\lambda_iI)^n(Ax)=0$. But this is easy since if $f(A)$ is a (analytic) function of operator $A$, then $[f(A),A]=0$, i.e. you have an operator identity $f(A) \cdot A = A \cdot f(A)$ and thus $$(A-\lambda_iI)^n(Ax)=\left[(A-\lambda_iI)^n\cdot A\right]x=\left[A\cdot (A-\lambda_iI)^n\right]x=A\left((A-\lambda_iI)^nx\right)=0$$ In this problem, just by using the property that every operator commutes with $I$ and itself, you can show \begin{align*} (A-\lambda_iI)^n\cdot A &= \sum _{k=0}^n\binom{n}{k} A^k (-\lambda _i I)^{n-k} \cdot A=\sum _{k=0}^n \binom{n}{k} A^k \left(A\cdot (-\lambda _i I)^{n-k} \right) \\ &= \sum _{k=0}^n \binom{n}{k}\left(A^k \cdot A\right) (-\lambda _i I)^{n-k}=\sum _{k=0}^n \binom{n}{k}\left(A\cdot A^k \right) (-\lambda _i I)^{n-k}\\ &=A\cdot \sum _{k=0}^n \binom{n}{k}\ A^k (-\lambda _i I)^{n-k} =A\cdot (A-\lambda_iI)^n \end{align*}
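The commutation identity is the whole point, and it is easy to sanity-check numerically with a random matrix (an illustration only, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
lam = 2.0
B = np.linalg.matrix_power(A - lam*np.eye(4), 4)   # (A - lambda I)^n
print(np.allclose(B @ A, A @ B))   # True: polynomials in A commute with A
```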
{ "language": "en", "url": "https://math.stackexchange.com/questions/1979891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the following limit problem I'm trying to find the following limit $$\lim_{x\to 0} \left(1-\sin x \cos x\right)^{\csc 2x}$$ How to prove the above limit equals $e^{-{1\over2}}$?
$$ \lim_{x\to 0} \left(1-\sin x \cos x\right)^{\csc 2x}=\lim_{x\to 0} \left(1-\sin x \cos x\right)^{\frac{1}{\sin 2x}}=$$ $$ \lim_{x\to 0} \left(1-\sin x \cos x\right)^{\frac{1}{\sin x\cos x}\frac{1}{2}}=e^{-1\cdot\frac{1}{2}}$$
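A quick numerical check of the claimed value:

```python
import math

for x in (0.1, 0.01, 0.001):
    print((1 - math.sin(x) * math.cos(x)) ** (1 / math.sin(2 * x)))
print(math.exp(-0.5))   # the claimed limit, about 0.6065
```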
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Bellman Ford Algorithm Clarifications I'm a little hung up on the Bellman-Ford algorithm. Here is my current understanding and some questions: 1) The root is defined as a source node that has only outgoing paths from it, and the goal of the algorithm is to find a path from this source node to every other node in the graph $G$: there is a spanning, directed tree from the root. 2) There can only be one root and there must exist a path from the root to every other node in the graph $G$. Do we need to always assume this? It feels like we should have to make this assumption as our goal is to form a shortest path from the root to every other node in the graph, and if there exists some node such that there is only an outgoing path from it and it isn't the source, then I don't think we'd be able to reach it. I just want to be sure that this is the case. 3) A sequence is formed during each pass of the algorithm and there will be a maximum of n-1 passes as there are n nodes and our goal is only to connect them, analogous to an MST. 4) A sequence is an ordered set of nodes, starting from the root and branching outwards to depict the path from the root that was taken. 5) This is more of a question regarding the root, related to (1)... Can we arbitrarily assign a node as the root even if it has an inflowing arc and just ignore that inflow? Am I on the right track in my understanding?
The selection of the source node depends on you. You can choose any node to be your source, irrespective of its non-zero in-degree. The algorithm tries to generate the shortest path distance from your selected source node to all other nodes in the graph. It is safe to assume that the graph is connected. To check this, you can first run a DFS over the graph to find the connected components. If the graph is connected, then run the Bellman-Ford algorithm.
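For reference, here is a minimal Bellman-Ford sketch in Python (variable names are mine); it relaxes all edges $n-1$ times from whatever source you choose, and flags negative cycles:

```python
def bellman_ford(n, edges, source):
    """edges: list of directed (u, v, w) triples on vertices 0..n-1."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):              # at most n-1 relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:               # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative cycle reachable from source")
    return dist                          # unreachable vertices stay at inf

print(bellman_ford(3, [(0, 1, 4), (1, 2, -2)], 0))  # [0, 4, 2]
```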
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pairing function for ordered pairs Is there a pairing function like Cantor's (https://en.wikipedia.org/wiki/Pairing_function) that would map ordered pairs (of integers) to different integers? ie: (M, N) -> L1 (N, M) -> L2 Where L1 != L2 All input integers could be positive, but the output does not have to be positive..so perhaps something like: newpair(M, N) = if (M < N): cantorpair(M, N) else: -1 * cantorpair(M, N) Or is there some well known / standard pairing function for ordered 2-tuples?
Cantor's pairing function should already work, as does a prime number encoding, e.g. $$ \langle M, N \rangle = 2^M 3^N $$ Example: $$ \langle 1, 2 \rangle = 2^1 3^2 = 18 \\ \langle 2, 1 \rangle = 2^2 3^1 = 12 $$
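A short Python illustration of both encodings (the Cantor formula below is the standard one for non-negative integers):

```python
def cantor_pair(m, n):
    # standard Cantor pairing for non-negative integers; order-sensitive
    return (m + n) * (m + n + 1) // 2 + n

def prime_pair(m, n):
    return 2**m * 3**n

print(cantor_pair(1, 2), cantor_pair(2, 1))  # 8 7
print(prime_pair(1, 2), prime_pair(2, 1))    # 18 12
```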
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrating on open vs. closed intervals What is one difference in the values of $$\int\limits_{\left[0,1\right]}y\, dx$$$$\int\limits_{\left(0,1\right)}y\, dx$$ and how would you calculate the values? For the sake of simplicity, let $y=x$. Conceptualizing integration as the area bounded by the function, the $x$-axis and the limits of integration, the latter should be smaller.
It should be intuitive that $\displaystyle \int_{(0, 1)} f(x) \ dx = \int_{[0, 1]} g(x) \ dx$ where $g(x) = \begin{cases} f(x) & \ \text{ if }\ x \in (0, 1) \\ 0 & \ \text{ if } \ x \in \{0, 1\}\end{cases}$. We claim that $\displaystyle \int_{[0, 1]} f(x) \ dx = \int_{[0, 1]} g(x) \ dx$, or more generally, changing the value of $f$ at finitely many points has no effect on the value of the definite integral. Sketch of proof: Provided a function $f$ is integrable on an interval $[a, b]$, the definite integral is rigorously defined as follows: there is a unique $I$ such that, for any given partition $\mathcal{P}$ of an interval $[a, b]$, we have: $$L(f, \mathcal{P}) \leq I = \int_a^b f(x) \ dx \leq U(f, \mathcal{P})$$ Where $\displaystyle L(f, \mathcal{P}) = \sum_{i} (x_{i+1} - x_i)\inf \Big( \{f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$ where $x_i$'s $\in \mathcal{P}$ and likewise $\displaystyle U(f, \mathcal{P}) = \sum_i (x_{i+1} - x_i)\sup \Big( \{ f(x) \ | \ x \in [x_i, x_{i+1}] \} \Big)$ Now suppose we change the value of $f$ at a point $y \in [a, b]$. For any given partition, we can "refine" this partition to encapsulate $y$ inside an arbitrarily small interval, in effect making its associated term in the $L(f, \mathcal{P}')$ and $U(f, \mathcal{P}')$ summations arbitrarily insignificant (limiting to zero in successive such refinements of the partition).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Show that a given harmonic function has a specific form Suppose $u$ is a harmonic function in $\mathbb{R}^n$ and satisfies $$|u(x)| \leq C |x|~\forall x \in \mathbb{R}^n.$$ Show that $u(x)=q \cdot x$ where $q$ is a constant vector. I'm struggling with this problem. I tried defining a new function, $v(x)=u(x)/x$ for $x \neq 0$, and using the given bound $C$; that leads to a constant vector. But since this involves dividing by a vector, it couldn't be correct. My gut feeling says that this could be done using the maximum principle. Any help is much appreciated. Thank you.
Hint: Because $u$ is harmonic, it is real analytic, and so it can be uniformly approximated in some large open ball by a high-degree polynomial. Deduce that most of the polynomial's coefficients are zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $a<g(x)<x$ on $(a,b)$, what can we conclude about $g$? I have the following GRE question that I have some trouble seeing. If $g$ is a function defined on the open interval $(a,b)$ such that $a < g(x) < x$ for all $x \in (a,b)$, then $g$ is A) an unbounded function B) a nonconstant function C) a nonnegative function D) a strictly increasing function E) a polynomial function of degree 1 I answered D), because I thought I could take the derivative on the inequality $a < g(x) < x$ and get $0< g'(x)<1$, showing that the function is strictly increasing. However the answer says it should be B) and I don't really see how they concluded this. Could anyone help me with this problem? Thanks in advance!
Differentiation is not monotonic, unlike integration: from $a<g(x)<x$ you cannot conclude anything about $g'$ (indeed, $g$ need not even be differentiable), so the reasoning behind D) breaks down. To see that B) is correct: if $g$ were a constant $c$, then $c<x$ for all $x\in(a,b)$ would force $c\le a$, contradicting $a<g(x)=c$. Hence $g$ must be nonconstant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find the sum of $\sum\limits_{n=1}^{\infty} \frac{x^{2n}}{(2n)!}$ Find the sum of $\sum_{n=1}^{\infty} \dfrac{x^{2n}}{(2n)!}$ on its interval of convergence. We can see that the domain of convergence is $D=R$. Then let: $$f(x)=\sum_{n=1}^{\infty} \dfrac{x^{2n}}{(2n)!}$$ $$f'(x)=\sum_{n=1}^{\infty} \dfrac{x^{2n-1}}{(2n-1)!}$$ $$f''(x)=\sum_{n=1}^{\infty} \dfrac{x^{2n-2}}{(2n-2)!}$$ Thus $f''(x)=f(x)$, solve this differential equation, we'll get the solution. Is my solution right? I just begin to study the power series. Thank you so much.
Note that your equation should be $f''(x)=f(x)+1$. Using the series for $e^x$, we get $$ \begin{align} e^x&=\sum_{k=0}^\infty\frac{x^k}{k!}\tag{1}\\ e^{-x}&=\sum_{k=0}^\infty(-1)^k\frac{x^k}{k!}\tag{2} \end{align} $$ Average $(1)$ and $(2)$ $$ \frac{e^x+e^{-x}}2=\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}\tag{3} $$ Subtract $1$ $$ \frac{e^x-2+e^{-x}}2=\sum_{k=1}^\infty\frac{x^{2k}}{(2k)!}\tag{4} $$ $(4)$ can be written as $\cosh(x)-1$.
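Numerically, the partial sums of the series do match $\cosh(x)-1$; a quick check:

```python
import math

def partial_sum(x, terms=30):
    # sum_{n=1}^{terms} x^(2n) / (2n)!
    return sum(x**(2*n) / math.factorial(2*n) for n in range(1, terms + 1))

x = 1.3
print(partial_sum(x), math.cosh(x) - 1)   # the two printed values agree
```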
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
$\sqrt{1 + \sqrt{1 + \sqrt{1 + ...}}} = \frac{1+\sqrt{5}}{2} = \phi$, is this a coincidence? I was playing around with square roots today when I "discovered" this. $\sqrt{1 + \sqrt{1 + \sqrt{1 + ...}}} = x$ $\sqrt{1 + x} = x$ $1 + x = x^2$ Which, via the quadratic formula, leads me to the golden ratio. Is there any significance to this or is it just a random coincidence?
If you play around a little more, you will also notice that: $$ \frac{1+\sqrt{5}}{2} = 1+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{1+\dfrac{1}{1+ \ldots} } } } $$ which simplifies to $x = 1+ \frac 1x \implies x^2=x+1$. It's no coincidence. I mean to say, it comes directly from the equation itself. Just to give you another example: The equation $x^2 = 4+x$ is satisfied by the number $\frac{1+\sqrt{17}}{2}$. Now, we can use the same logic to extend this fellow: $$ x = \sqrt{4 + x} = \sqrt{4 + \sqrt{4 + \sqrt{4 + \sqrt{4 + \ldots}}}} $$ While at the same time, this also expands as a continued fraction, namely: $$ x = 1 + \frac{4}{x} = 1 + \frac{4}{1 + \frac{4}{1 + \frac{4}{1 + \frac{4}{1 + \ldots} } } } $$ You see, it's not a coincidence, yet it's wonderful. The question arises: Can we do this with other quadratic polynomials? Take for example, $ax^2+bx+c=0$. Then $ax^2 = -bx-c$ and $x^2 = -\frac{b}{a}x -\frac{c}{a}$. This will expand now in an interesting way: $$ x = \sqrt{-\frac{c}{a}-\frac{b}{a}x} = \sqrt{-\frac{c}{a}-\frac{b}{a}\sqrt{-\frac{c}{a}-\frac{b}{a}\sqrt{-\frac{c}{a}-\frac{b}{a} \ldots}}} $$ And as a continued fraction: $$ x = -\frac{b}{a} - \frac{c}{ax} =-\frac{b}{a} - \frac{c}{a(-\frac{b}{a} - \frac{c}{a(-\frac{b}{a} - \frac{c}{a \ldots} )}) } $$ That is your license to play around. Please do so. Also, see what you get if $ax^3+bx^2+cx+d=0$, and if you can find something interesting here do comment.
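You can also watch the fixed-point iteration converge numerically; a tiny Python check:

```python
x = 1.0
for _ in range(50):
    x = (1 + x) ** 0.5          # iterate x -> sqrt(1 + x)
print(x, (1 + 5**0.5) / 2)      # both are the golden ratio, about 1.6180339887
```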
{ "language": "en", "url": "https://math.stackexchange.com/questions/1980909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Image and preimage of a function Given the function $f(x)=x^{2}-4x-5$, $A=[0,3)$ and $B=[0,1]$, find $f(A)$ and $f^{-1}(B)$. I found $f(A)$ by looking at the graph $f([0,3))=[-9,-5]$ but how would I calculate this without the graph? If I plug in $0$ into $f$ I get $-5$, and if I plug in $3$ I get $-8$, when I should get $-9$.
More general if $f: \mathbb{R} \rightarrow \mathbb{R}, f(x) = ax^2 + bx + c, a \gt 0$ then $f$ is decreasing on $(-\infty, - \frac b {2a}]$ and increasing on $[- \frac b {2a}, + \infty)$. You can use this together with the fact that $f$ is continuous to get the images you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What's the difference between $∀x\,∃y\,L(x, y)$ and $∃y\,∀x\,L(x, y)$? * *Everybody loves somebody. $∀x\,∃y\,L(x, y)$ *There is somebody whom everybody loves. $∃y\,∀x\,L(x, y)$ What's the difference between these two sentences? If they are same, can I switch $\exists y$ and $\forall x$?
If $L$ satisfies 2., then it necessarily satisfies 1. Therefore you can switch $\exists y$ and $\forall x$ to go from 2. to 1., but not the other way around. Counterexample: let $L$ be a relation over set $S=\{a,b,c\}$, and suppose $L(a,b)$, $L(b,c)$, $L(c,a)$. You can easily verify that 1. holds here, but 2. does not.
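The counterexample is small enough to check mechanically; a quick Python translation of the two statements:

```python
S = {'a', 'b', 'c'}
L = {('a', 'b'), ('b', 'c'), ('c', 'a')}   # a loves b, b loves c, c loves a

stmt1 = all(any((x, y) in L for y in S) for x in S)   # for every x there is y with L(x,y)
stmt2 = any(all((x, y) in L for x in S) for y in S)   # there is y with L(x,y) for every x
print(stmt1, stmt2)   # True False
```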
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 3 }
are the integers modulo 4 a field? Basically are the integers mod 4 a field? I want to know because I am reading a text and it has a problem assuming the integers modulo any number are a field
No: in $\mathbf Z/4\mathbf Z$ we have $2\cdot 2=0$, so $2$ is a zero divisor and has no inverse; in general $\mathbf Z/n\mathbf Z$ is a field exactly when $n$ is prime. There is, however, a field with $4$ elements; it is just not $\mathbf Z/4\mathbf Z$. If $q=p^n$, then $\mathbf F_q$ denotes the field with $q$ elements. You have to know that for any integer $n\ge 1$, there exists a finite field with $p^n$ elements, and this field is unique up to an isomorphism. It is even unique in the still more restrictive sense: a field with $p^n$ elements is unique within a given algebraic closure of the prime field $\mathbf F_p$. Furthermore, for any two such finite fields, $$\mathbf F_{p^m}\subseteq\mathbf F_{p^n}\iff m\mid n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Polynomial n variables differentiable on $\mathbb{R^n}$ How would I define a polynomial of $n$ variables? And how would I go on to prove that any polynomial in $n$ variables is differentiable on $\mathbb{R^n}$? (assuming this function is continuous on $\mathbb{R^n}$) I'm struggling to find a formal definition and I assume I use the chain rule for the second part but other than that I'm not sure what to do.
Aloizio's answer should already be enough (and you should accept it), but explicitly, we'd have $$p(x_1,\cdots,x_n) = \sum a_{i_1\cdots i_n} x_1^{i_1}\cdots x_n^{i_n}.$$Since $$\Bbb R^n \ni (x_1,\cdots,x_n) \mapsto x_i \in \Bbb R$$is differentiable for each $i$, it follows that $p$ is differentiable as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculus: Finding limit $$\lim_{x\to0}\frac{x^3}{\sin^3x}.$$ Do I use L'Hopital's rule for this? But I can't seem to find the answer.
Hint: Since $x \mapsto x^3$ is continuous and $x \mapsto 1/x$ is too, we have $$\lim_{x \to 0 }\frac{x^3}{\sin^3x} = \left( \frac{1}{\color{red}{\lim_{x \to 0}\frac{\sin x}{x}}}\right)^3.$$That limit in red you absolutely must know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Fourier transform properties (integration) proof From Signals and Systems by Alan V. Oppenheim: there's a property of the Fourier transform stated as below. The Fourier transform of $\int_{-\infty}^{t} x(\tau)\, d\tau$ equals $\frac{ X(j\omega)}{j\omega} + \pi \delta(\omega)X(0)$. Can someone prove this?
I know it is kind of late for answering the question, but it might help somebody else. I would approach it using the convolution property and the Heaviside Step Distribution u(t). First of all, notice that: $$f(t)*u(t) = \int_{-\infty}^{+\infty}f(s)u(t-s)ds$$ Since, for $t-s < 0 \Longrightarrow s > t$, the integrand is zero, then: $$f(t)*u(t) = \int_{-\infty}^{t}f(s)ds$$ Now, all that is left is to use the convolution property of the Fourier Transform: $$\mathscr{F}\Big(\int_{-\infty}^{t} f(s)ds\Big) = \mathscr{F}(f(t)*u(t)) = F(\omega) U(\omega)$$ Since the fourier transform of the heaviside distribution is: $$\mathscr{F}(u(t)) = \frac{1}{i\omega} + \pi \delta(\omega)$$ Then, we get: $$\mathscr{F}\Big(\int_{-\infty}^{t} f(s)ds\Big) = F(\omega) \Big(\frac{1}{i\omega} + \pi \delta(\omega)\Big)$$ The trick here, is to see that, for all $\omega \neq 0$, the Dirac's Delta distribution is actually zero, so we take $F(0)$ instead of $F(\omega)$ for the "second product": $$\mathscr{F}\Big(\int_{-\infty}^{t} f(s)ds\Big) = \frac{F(\omega)}{i\omega} + \pi F(0)\delta(\omega)$$ QED. Hope that it helps!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is a function that is defined on two disjoint sets continuous? Okay, so let $f$ be a function, and $A$ and $B$ two disjoint sets, with $f$ continuous on $A$ and $f$ continuous on $B$. My teacher told me that $f$ is continuous on the union of these two disjoint sets if and only if they are mutually separated, meaning neither one contains a boundary point of the other. I couldn't understand the reason behind this. Could someone help out?
Definitely false: an easy general method for making counterexamples is take $A$ and $B$ as disjoint subsets of some larger topological space $X$. Then take a function $f^*:X\rightarrow Y$ that is continuous, and restrict its domain to $A\cup B$ to get a new function $f:A\cup B\rightarrow Y$. This function is continuous because $f^*$ was. To make it a counterexample to your teacher's claim, all that you require is that $\overline{A}\cap\overline{B}\neq\emptyset$ (they are not separated).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to prove that $a\le b$, if $a<b+c$ for each $c>0$? How to prove that $a\le b$, if $a<b+c$ for each $c>0$? I tried to prove it with the reductio ad absurdum method and with the trichotomy property of two real numbers $a,b$: $a=b$, $a<b$ or $a>b$, but I couldn't make it work. Any advice would be helpful!
Suppose $a>b$, then $a-b>0$. Let $c=a-b$ and we have $$a<b+(a-b)=a$$ We found a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1981885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Why is this corollary of Liouville true I know that if $f(z)$ is an entire function and $f(z)$ is bounded, then $f(z)$ is constant. I also know how this can be used to prove the fundamental theorem of algebra. However, in my book after Liouville it states as a corollary Suppose that f is entire and n a natural number such that $$|f(z)| \le K(1+|z|^{n})$$ for all $z \in \mathbb{C}$, then f must be a polynomial of degree at most n. But I am a bit confused by this: for one, $(1+|z|^{n}) \ge 1$, so wouldn't this simply imply that $f$ is constant again, by Liouville? I'm sure there is a big flaw in my reasoning somewhere and I am looking to sort it out. Anyways, if this is indeed the case, then what is the proof? I think it probably wouldn't be a very long one, seeing as the book just slipped it in at the end of the page as a remark. Thanks
By Cauchy's integral formula, integrating over a circle $C_R$ centered at the origin with radius $R>|z|$, we see that \begin{align} |f^{(n)}(z)| \leq C \int_{C_R} \frac{|f(\zeta)|}{|z-\zeta|^{n+1}}\ |d\zeta| \leq CK \int_{C_R} \frac{1+|\zeta|^n}{\left||\zeta|-|z| \right|^{n+1}}|d\zeta| \leq C'\frac{(1+R^n)R}{(R-|z|)^{n+1}}. \end{align} Taking, say, $R=2|z|$ for $|z|\ge 1$ (and $R=2$ for $|z|<1$), the right-hand side is bounded by a constant independent of $z$. Hence $f^{(n)}$ is a bounded entire function, so it is constant by Liouville, and $f$ is a polynomial of degree at most $n$. Moreover, it's easy to see $f(z) = z^n+1$ satisfies the bound and is not constant: the point is that $K(1+|z|^n)$ is a growing bound, not a constant one, so Liouville does not apply to $f$ itself.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a commonly used notation for flipped composition? We have $ (f \circ g) = x \mapsto f(g(x)) $ however since I read left to right it always seems backwards to me. Is there a symbol like $ ( g \ggg f) = x \mapsto f(g(x)) $? It is especially grating in situations like. $$ \require{AMScd} \begin{CD} X @>{f \circ g}>> Y \\ @VVgV @AAfA \\ g(X) @>{id}>> g(X)\end{CD} $$
Some math books (especially abstract algebra) may write composition in the reverse order: $\sigma\tau$ means: first $\sigma$ then $\tau$. For notation they write $$ x^{\sigma \tau} = \big(x^\sigma\big)^\tau $$ For example, this may be seen with field automorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
erdos.sdslabs problem Let $n$ be the largest positive integer, such that $n!$ can be expressed as the product of $(n−2014^{2015})$ consecutive integers. Let $x$ be equal to $n$ mod $38980715857$. Find x mod $8037517$ ? Question source is this. .
By solving it with number theory, one finds that the value of $n$ is $n=(2014^{2015}+1)!-1$: writing $a=2014^{2015}$, we have $n+1=(a+1)!$, so $n!=\frac{(n+1)!}{n+1}=\frac{(n+1)!}{(a+1)!}$ is the product of the $n-a$ consecutive integers $a+2,\dots,n+1$, and one can show no larger $n$ works. Now, given that $x=n\bmod 38980715857$: since $38980715857 \le 2014^{2015}+1$, it divides $(2014^{2015}+1)!$, hence $n\equiv -1$ and (as Wolfram Alpha confirms) $x=38980715856$. Further, $x\bmod 8037517$ is equal to: $6795923$.
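A small Python check of the construction (for the toy case $a=3$, where the classic answer is $n=23$) and of the modular arithmetic:

```python
from math import factorial

a = 3
n = factorial(a + 1) - 1   # n = 23 for a = 3, the classic small case
# n! equals (n+1)!/(a+1)!, the product of the n-a integers a+2..n+1:
print(factorial(n) == factorial(n + 1) // factorial(a + 1))  # True

m1, m2 = 38980715857, 8037517
x = m1 - 1         # n = (2014**2015+1)! - 1 is -1 mod m1 (m1 divides the factorial)
print(x, x % m2)   # 38980715856 6795923
```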
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to translate geometric intuitions about vector fields into algebraic equations By considering stereographic projections, I was asked to find a smooth vector field on $S^2$ which vanishes at 1 point, and one that vanishes at 2 points. The intuition, I think, for the vanishing at 1 point is to have all vectors emanating from either the north pole or south pole; for the vanishing at 2 points I think it should be swirling around one of the axes. But I don't know which fields in $\mathbb{R}^2$ will get mapped to these under the push forward of the coordinate maps. Whats the approach to figuring this out?
First, to address the question in your title: I think the only honest answer is that there is no "standard algorithm" for translating intuitions into equations. It takes lots of practice and lots of trial and error. Try to stretch your geometric intuition as far as you can, and then try to write down formulas to prove your intuition correct. The things that hang you up will lead to new insights, which you can feed back into your intuition for the next pass. Lee Mosher is probably right that stereographic projection is a red herring for the problem of finding a vector field that vanishes at exactly two points -- there are simpler ways to write down such a vector field, such as the one tangent to latitude circles. But to find a vector field that vanishes at exactly one point, stereographic projection can be extremely helpful. [Hint: think about a coordinate vector field on $\mathbb R^2$.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$n_1,...,n_k$ pairwise coprime $\!\iff\! {\rm lcm}(n_1,...,n_k)=n_1...n_k$ [lcm = product for coprimes] $n_1,...,n_k$ pairwise coprime $\iff LCM(n_1,...,n_k)=n_1...n_k$ Recently, I was told this as part of a larger proof concerning direct products of groups. I am wondering why this is true.
Can you prove the following two statements? * *If $a \mid c$ and $b \mid c$ with $\gcd(a,b) = 1$ then $ab \mid c$. *If $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$ then $\gcd(a,bc) = 1$. Then, use induction to show that $n_1 \dots n_k$ divides $\text{lcm}(n_1,\dots,n_k)$. Since $n_1\dots n_k$ is a common multiple that divides the least common multiple, it must be equal to the least common multiple.
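A concrete illustration in Python (math.lcm is variadic from Python 3.9; this is a sanity check, not the induction itself):

```python
from math import gcd, lcm, prod

nums = (4, 9, 25)                   # pairwise coprime
print(lcm(*nums) == prod(nums))     # True
print(lcm(4, 6), 4 * 6)             # 12 24: gcd(4,6)=2, so lcm < product
```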
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Roots of the polynomial $x^{p-1}+x^{p-2}+\ldots+x+1$ when $p$ is a prime. I'm studying Galois Theory and I have some doubts about the roots of the polynomial $x^{p-1}+x^{p-2}+\ldots+x+1$ when $p$ is a prime. Let $\zeta_n$ be an $n$th root of unity, i.e, $\zeta^n-1$, and $\zeta_n \neq 1$. Then, as $(x^n-1)=(x-1)(x^{n-1}+x^{n-2}+\ldots+x+1)$, the minimal polynomial of $\zeta_n$ is $x^{n-1}+x^{n-2}+\ldots+x+1$, because it is irreducible (am I right?). Now, let $p$ be a prime and suppose that I want to find the automorphisms of $\mathbb{Q}(\zeta_p)$ that fixes $\mathbb{Q}$, that is, the Galois group Gal($\mathbb{Q}(\zeta_p)/\mathbb{Q})$. I know that $\zeta_p$ must be send to another root of its minimal polynomial, i.e other root of $x^{p-1}+x^{p-2}+\ldots+x+1$. Looking in some exercises, I read that $\zeta_p$ can be sent to $\zeta_p^k$ for any $k \in \{1,\ldots,p-1 \}$. But that means that $\zeta_p^k$ is a root of $x^{p-1}+x^{p-2}+\ldots+x+1$ for any $k \in \{1,\ldots,p-1 \}$. Is that correct? How can i prove that?
It's easy to show that $\zeta_p^k$ is a root of $f(x) = x^p - 1 = (x-1)(x^{p-1} + x^{p-2} + \cdots + x + 1)$ for all $1 \leq k < p$. Since it is obviously not a root of $(x-1)$, it must be a root of the other factor of $f$. P.S. it's an important / not-too-difficult exercise to show that $\operatorname{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q}) \cong \mathbb{Z}_p^\times$ (notice that every automorphism is determined by it's action on $\zeta_p$).
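A numerical sanity check that each $\zeta_p^k$, $1\le k<p$, kills $x^{p-1}+\cdots+x+1$ (equivalently, that the full sum of powers vanishes):

```python
import cmath, math

p = 7
zeta = cmath.exp(2j * math.pi / p)              # a primitive p-th root of unity
for k in range(1, p):
    val = sum((zeta**k)**j for j in range(p))   # 1 + w + ... + w^(p-1) at w = zeta^k
    print(k, abs(val) < 1e-9)                   # True for every k
```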
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to prove $Hom_A(P_0,X),Hom_A(P_1,X)$ are injective right $R$-modules? I asked a question here:https://mathoverflow.net/questions/252724/how-to-prove-hom-ap-0-x-hom-ap-1-x-are-injective-right-r-modules. But there are no responses and I really want to know how to solve it. So I repost it here. Hope for any help. Let $A$ be a finite dimensional k-algebra. Suppose $X$ is a left $A$-module such that every indecomposable projective or injective $A$-left module is isomorphic to a direct summand of $X$. Let $\dots \rightarrow P_1 \rightarrow P_0 \rightarrow X \rightarrow 0$ be a projective resolution of $X$. $R:=End_A(X)$. Then we get the exact sequence $$0 \rightarrow Hom_A(X,X)=R\rightarrow Hom_A(P_0,X)\rightarrow Hom_A(P_1,X)$$ of right $R$-modules and $R$-homomorphisms. Suppose $X$ is projective and injective as right $R$-module. Since $P_0,P_1\in add(X)$, we know $Hom_A(P_0,X),Hom_A(P_1,X)$ are projective right $R$-modules. My question is how to prove $Hom_A(P_0,X),Hom_A(P_1,X)$ are also injective right $R$-modules?
Let $P$ be a projective $A$-module. Then we have $P\oplus Q=A^{(I)}$, for some set $I$ (free module), so $\def\H{\operatorname{Hom}_A}\H(P,X)$ is a direct summand of $\H(A^{(I)},X)\cong(\H(A,X))^I\cong X^I$ (direct product). A direct product of injective modules is injective, as well as direct summands thereof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
In how many different ways can boys and girls sit at desks such that at each desk only one girl and one boy sits? There are $n$ boys, $n$ girls and $n$ desks. In how many different ways can boys and girls sit at desks such that at each desk only one girl and one boy sits? I have a solution, but I have a little doubt that it is incomplete. So my solution is as follows: My worked solution: For the first desk we can choose one girl in $\binom n1$ ways and one boy in $\binom n1$ ways, and we can permute them in $2!$ ways. As a result we have $2!\binom n1 \binom n1$. In the same way, we can work out for the second desk $2!\binom {n-1}1 \binom {n-1}1$ and so on. For the $n$th desk we have $2!\binom {1}1 \binom {1}1$ ways. $Total = 2!\binom n1 \binom n1 2!\binom {n-1}1 \binom {n-1}1 ... 2!\binom {1}1 \binom {1}1 = 2^n(n!)^2$
To remove from unanswered queue: Yes, you are correct. Approaching via multiplication principle, first choose whether the boy is on the left, or the right for each desk in sequence. Then in sequence, choose which boy is at the desk and which girl is at the desk for a final total of: $$2^n(n!)(n!)$$
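A brute-force Python enumeration over actual seatings confirms the count for small $n$:

```python
from itertools import permutations
from math import factorial

def count_seatings(n):
    people = [('B', i) for i in range(n)] + [('G', i) for i in range(n)]
    count = 0
    for seating in permutations(people):   # seats: desk0-left, desk0-right, desk1-left, ...
        if all(seating[2*d][0] != seating[2*d + 1][0] for d in range(n)):
            count += 1                     # every desk holds one boy and one girl
    return count

for n in (1, 2, 3):
    print(count_seatings(n), 2**n * factorial(n)**2)   # 2 2, 16 16, 288 288
```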
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove by Induction with Summation Struggling with this, especially with the double summation; if anyone can help, it would be much appreciated! $$\forall n \in \Bbb N : \quad \sum_{i=1}^n \sum_{c=1}^i c = \sum_{i=1}^n i(n − i + 1) .$$ It needs to be answered in the following format: 1- Prove for n=1 2- assume n=k 3- prove for k+1
Rewrite the identity for $n+1$: * *Left hand side $$\sum_{i=1}^{n+1} \sum_{c=1}^i c =\sum_{i=1}^{n} \sum_{c=1}^i c+\sum_{c=1}^{n+1} c.$$ *Right hand side $$\sum_{i=1}^{n+1} i(n+1 − i + 1)=\sum_{i=1}^{n+1} i(n− i + 1)+\sum_{i=1}^{n+1}i=\sum_{i=1}^{n} i(n− i + 1)+\sum_{i=1}^{n+1}i.$$ As the additional terms are the same, if the identity holds for $n$, then it holds for $n+1$.
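If you want to convince yourself of the identity itself before doing the induction, a one-line check in Python:

```python
def lhs(n):
    return sum(sum(range(1, i + 1)) for i in range(1, n + 1))

def rhs(n):
    return sum(i * (n - i + 1) for i in range(1, n + 1))

print(all(lhs(n) == rhs(n) for n in range(1, 30)))  # True
```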
{ "language": "en", "url": "https://math.stackexchange.com/questions/1982891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Using $3$ points create new coordinate system and create array of points on $XY$ plane in original coordinate system This is my first post on stack exchange :) Basically what I am trying to do is something called palletizing in the robotics world. Given $3$ points I would like to create a new coordinate system and then create a lattice structure of points on the new $XY$ plane, but I would like the final coordinates of the points on the $XY$ plane to be in the original coordinate system. I found this link HERE which does almost exactly what I'd like. Any help is much appreciated.
Here is a possible approach to your problem. Let $A$, $B$, and $C$, be the 3 points of interest. 1. Define your coordinate system by three orthogonal unit vectors with $A$ as the origin. Perhaps defined as $V_1 = \frac{B-A}{\|B-A\|}$ $V_2 = \frac{V_1 \times (C-A)}{\|V_1 \times (C-A)\|}$ $V_3 = V_1 \times V_2$ 2. Define your lattice in the new coordinate system. 3. Calculate the transform between your original coordinate system and your new coordinate system defined by $V_1, V_2,$ and $V_3$. 4. Transform your points in the lattice from your new coordinate system to your old coordinate system.
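A minimal numpy sketch of this recipe (function names are mine; with this construction $V_2$ is normal to the plane of the three points, so the lattice plane is spanned by $V_1$ and $V_3$, and the lattice points come out directly in the original coordinate system, folding steps 3 and 4 together):

```python
import numpy as np

def make_frame(A, B, C):
    """Orthonormal vectors V1, V2, V3 from three points, origin at A."""
    V1 = (B - A) / np.linalg.norm(B - A)
    V2 = np.cross(V1, C - A)
    V2 = V2 / np.linalg.norm(V2)        # normal to the plane of A, B, C
    V3 = np.cross(V1, V2)               # completes the frame, lies in the plane
    return V1, V2, V3

def lattice_in_world(A, V1, V3, nx, ny, dx, dy):
    """nx-by-ny grid in the plane through A spanned by V1, V3,
    returned in the original coordinate system."""
    return np.array([A + i*dx*V1 + j*dy*V3
                     for i in range(nx) for j in range(ny)])

A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
V1, V2, V3 = make_frame(A, B, C)
print(lattice_in_world(A, V1, V3, 2, 2, 0.5, 0.5))
```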
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given a compact $S$ in $\mathbb R^m$ with the following property: For every pair of points $a, b \in S$ and for every $\varepsilon > 0$ there exists a finite set of points $\{x_0, x_1,..., x_n\}$ in $S$ with $x_0 = a$ and $x_n = b$ such that $\|x_k - x_{k-1}\|<\varepsilon$ for $k = 1, 2, \ldots, n$. Prove or disprove: $S$ is connected. Why does the given proof say that $A$ is closed in $S$ and $B$ is closed in $S$, and also that $x=y$?
We have that $S=A\cup B$ where $A,B$ are non-empty sets that are open in $S$ and satisfy $A\cap B=\emptyset$. Then $A=S\setminus B$ and $B=S\setminus A$, so each is the complement in $S$ of an open set, and therefore, by definition, $A$ and $B$ are closed in $S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many ways are there to arrange two digits into an n digit number, when both digits must be used? I know the answer to this question is $2^n-2$, but I am unsure of how this answer is gotten. Could someone please explain to me how the answer is gotten step by step? Also, how many ways would there be to arrange 3,4, or 5 digits into an n digit number? Thanks!
If the number must contain both digits then the number of ways is $$ \sum_{k=1}^{n-1} \left( \begin{array}{c} n\\ k \end{array}\right) = 2^n - 2 $$ Suppose the two digits are $a$ and $b$, $a\neq b$. Then the number of ways to choose $k$ of the digits to be equal to $a$ is $$ \left( \begin{array}{c} n\\ k \end{array}\right) $$ All remaining digits are $b$, so there is no choice to make for them. Now you simply sum up all the possibilities $(k = 1,\ldots,n-1)$, ignoring the cases $k = 0,n$, in which the number would only contain a single digit.
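A brute-force check in Python (treating all $2^n$ digit strings as admissible, i.e. assuming both digits are nonzero so there is no leading-zero issue):

```python
from itertools import product

def count(n, a='1', b='2'):
    # length-n strings over {a, b} that use both symbols
    return sum(set(s) == {a, b} for s in product((a, b), repeat=n))

for n in (2, 3, 4):
    print(count(n), 2**n - 2)   # 2 2, 6 6, 14 14
```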
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Dirac delta distribution and fourier transform Dirac delta distribution is defined as $f(t_{0})=\int_{-\infty }^{\infty } \! f(t)\delta(t-t_{0}) \, dt $ where $f(t)$ is a smooth function. Then my question is: calculate the Fourier transform $\hat \delta(\omega)$ of $\delta (t-t_{0})$. Solution: $$\hat \delta(\omega)=\frac{1}{\sqrt{2 \pi} }\int_{-\infty }^{\infty } \! \delta (t-t_{0}) e^{-j \omega t}\, dt $$ $$\hat \delta(\omega)=\frac {1}{\sqrt{2 \pi}}e^{-j \omega t_{0}}$$ Can someone explain how they got this solution and write out the steps in between? On the internet I always find some general formulas and I don't know how to use them.
We have \begin{align}f(t_{0})=:\int_{-\infty }^{\infty } \! f(t)\delta(t-t_{0}) \, \text{d}t &&(1) \\ \hat g(\omega):=\widehat{g(\cdot)}(\omega):=\int_{-\infty }^{\infty }e^{-j \omega t}g(t) \ \text{d}t &&(2) \end{align} and therefore we get: $$\widehat{\delta(\cdot-t_0)}(\omega) \stackrel{(2)}{=} \frac{1}{\sqrt{2 \pi} }\int_{-\infty }^{\infty } \underbrace{e^{-j \omega t}}_{=:f(t)} \delta (t-t_{0})\ \text{d}t\stackrel{(1)}{=}\frac{1}{\sqrt{2 \pi}}f(t_0)=\frac{1}{\sqrt{2 \pi}}e^{-j \omega t_{0}}$$ To be precise this is the Fourier transform of $\delta(\cdot-t_0)$ and not $\delta$. For $\delta=\delta(\cdot)$ we'd have to set $t_0=0$ in above formula and we get $\hat\delta(\omega)=(2\pi)^{-1/2}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How can I prove $d_1(x,y) \leq n d_\infty (x,y)$ $$d_\infty (x,y) = \max{|x_i - y_i| | i=1,2,...,n}$$ $$d_1 (x,y)= \sum_{i=1}^n |x_i - y_i|$$ $$d_\infty : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$$ $$d_1: \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$$ $$x,y\in \mathbb{R}^n$$ I would like to prove that $d_1(x,y) \leq n d_\infty (x,y)$. Attempt: Since $$d_1(x,y) = \sum_{i=1}^n |x_i - y_i|=|x_1 - y_1|+ \dots + |x_n - y_n|\leq \max|x_1 - y_1| +\dots + \max|x_i - y_i |=$$ $$= n\max |x_i - y_i |=nd_{\infty}(x,y).$$
As C. Falcon says, for this bound you should replace $d_{\infty}$ with $d_{2}$. By Cauchy-Schwarz we have $$\left(\sum_{i=1}^n |x_i - y_i|\cdot 1\right)^2\le \left(\sum_{i=1}^n |x_i - y_i|^2\right)(1^2+\cdots+1^2).$$ Then $$\sum_{i=1}^n |x_i - y_i|\le \sqrt{n}\sqrt{\sum_{i=1}^n |x_i - y_i|^2}.$$ Hence $d_{1}(x,y)\le \sqrt{n}\,d_{2}(x,y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of quasi-projective varieties that are not (topologically) quasi-affine I'm trying to think of a quasi-projective variety that is not isomorphic to a quasi-affine one. I imagine that it must be $Y \subseteq \mathbb{P}^n$ with $n \geq 3$, and maybe $\operatorname{dim} Y \geq 2$ as well. I am also interested in finding a low (co)dimensional example of a quasi-projective variety that is not homeomorphic to a quasi-affine one.
I think $\mathbb{P}^2 - pt$ is an example, but I would have to think longer about why this can't be quasi-affine. (I think you can argue that if it were quasi-affine, it would have non-constant global functions; but then you would be able to find non-constant global functions on $\mathbb{P}^2$, which is impossible. Hartogs' extension theorem works over $\mathbb{C}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
I throw $4$ dice, what is the probability of having at least one $6$? Say that I have an event that happens with probability $p$ -- let's say the probability of rolling a $6$ when I throw a die. Is there a formula that tells me the probability of having at least one of these events across $n$ simultaneous trials? Let's say I throw $4$ dice, what is the probability of having at least one $6$?
We can use something called complementary counting to find the probability that an event doesn't happen. Then we subtract this probability from $1$ to find the probability that the event does happen. In your example, if you have $n$ trials and the probability of getting a 6 is $p$ on each trial, then the probability that any given roll is not a 6 is $1-p$. Then out of $n$ trials, the probability that none of them are a 6 is $(1-p)^n$. Then we see that if we take $1-(1-p)^n$, this gives us the probability that at least one of the trials is a 6.
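For the concrete question here ($p=1/6$, $n=4$) this gives $1-(5/6)^4 = 671/1296 \approx 0.5177$. A minimal Python sketch comparing the closed form against a Monte Carlo estimate:

    import random

    p, n = 1/6, 4
    exact = 1 - (1 - p)**n          # 1 - (5/6)**4 = 671/1296 ≈ 0.5177
    trials = 100_000
    hits = sum(any(random.randint(1, 6) == 6 for _ in range(n))
               for _ in range(trials))
    print(exact, hits / trials)     # the two values agree to about 2 decimals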
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to convert this Cartesian double integral to polar coordinates? So most of the converting cartesian to polar integrals I've seen (both on this website and in class) involve some sort of circular integral region. I was doing practice problems for my exam and I ran across one that does not and I'm stuck on how to solve it. The integral is the following and the instructions say to solve it by converting to polar coordinates (which is annoying because this integral would be so easy to do normally): $$ \int _0 ^1 \int_x^1 x^2 dydx $$ How should I approach this? I tried setting $1 = y$ and $x = y$ and got $1 = r\cos(\theta)$ and $1 = \tan\theta$, but these bounds don't really seem to help me integrate.
Don't try to do this sort of thing by "pure algebra" - always draw the region of integration. If you do this you will see easily that $\theta$ varies from $\pi/4$ to $\pi/2$. So we have $$I=\int_{\pi/4}^{\pi/2}\int_?^? x^2\,J\,dr\,d\theta$$ where $J$ is the Jacobian. To find the limits for $r$, draw a line on your diagram starting at the origin and heading in the $\theta$ direction (where $\theta$ is between $\pi/4$ and $\pi/2$). You can see that the values of $r$ which are in your region and on this line go from a minimum of $0$ to a maximum on the horizontal line $y=1$. To find the $r$ value on this line we have $$r\sin\theta=y=1$$ and so $r_{\rm max}=1/\sin\theta$. Hence $$I=\int_{\pi/4}^{\pi/2}\int_0^{1/\sin\theta} x^2\,J\,dr\,d\theta\ .$$ You should also know that the Jacobian for polar coordinates is $r$ and that $x=r\cos\theta$. Hence $$I=\int_{\pi/4}^{\pi/2}\int_0^{1/\sin\theta} (r\cos\theta)^2\,r\,dr\,d\theta\ .$$
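As a numerical cross-check, both the Cartesian and the polar forms evaluate to $1/12$; a minimal SciPy sketch (assuming scipy is installed):

    from math import pi, sin, cos
    from scipy.integrate import dblquad

    # Cartesian: x from 0 to 1, y from x to 1, integrand x^2
    cart, _ = dblquad(lambda y, x: x**2, 0, 1, lambda x: x, lambda x: 1)

    # Polar: theta from pi/4 to pi/2, r from 0 to 1/sin(theta),
    # integrand (r cos(theta))^2 times the Jacobian r
    polar, _ = dblquad(lambda r, t: (r * cos(t))**2 * r,
                       pi / 4, pi / 2, lambda t: 0, lambda t: 1 / sin(t))

    print(cart, polar)  # both ≈ 0.083333... = 1/12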
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983858", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving distance between bounded and compact set in $\mathbb{R}$ Let $E,F \subseteq \mathbb{R}$ be two non-empty closed sets, with $E$ bounded. Show that there are points $x \in E, y \in F$ such that $\text{dist}(E,F) = \lvert x - y\rvert$, where $\text{dist}(E,F)$ is defined as $\inf\{\lvert x - y \rvert: x \in E, y \in F\}$. I know that $E$, being closed and bounded, is compact, and intuitively we should have $x$ on the "boundary" of $E$ and similarly for $y$, but I'm struggling to find a way to say everything precisely. I've looked at some of the other similar questions but none of them have been that helpful for me. I've also read that a point has a minimum distance from a compact set, and this sounds like it could be useful.
Since \begin{align} \operatorname{dist}(E, F) = \inf\{|x-y| : x \in E, y \in F\}, \end{align} there exists a sequence of pairs $(x_n , y_n) \in E\times F$ such that $|x_n-y_n| \rightarrow \operatorname{dist}(E, F)$. Now, since $\{x_n\}\subset E$ is bounded, it contains a convergent subsequence, say $\{x_{n_k}\}$, i.e. $x_{n_k} \rightarrow x \in E$ (the limit lies in $E$ because $E$ is closed). Moreover, since \begin{align} |x_{n_k}-y_{n_k}| \rightarrow \operatorname{dist}(E, F), \end{align} for all large $k$ we have \begin{align} |y_{n_k}| \leq |x_{n_k}-y_{n_k}|+|x_{n_k}| \leq \operatorname{dist}(E, F)+1+M, \end{align} where $M$ is a bound on $E$, i.e. $|x|\leq M$ for all $x \in E$. Thus $\{y_{n_k}\}$ is also bounded, so it has a further subsequence converging to some $y \in F$ ($F$ is closed). Passing to this subsequence, $|x-y| = \lim |x_{n_k}-y_{n_k}| = \operatorname{dist}(E,F)$, as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1983960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are these derivatives correct (the respective functions involve a square root, fraction and expansion)? Question: Use rules of differentiation to answer the following. There is no need to simplify your answer. 1. If $y = 3x{\sqrt x}$, find $\frac{dy}{dx}$. My working: $y = 3x^{1+\frac{1}{2}}$ $y = 3x^{\frac{3}{2}}$ $\frac{dy}{dx} = 3 \left(\frac{3}{2}x^{\frac{3}{2}-1}\right)$ $\frac{dy}{dx} = 3 \left(\frac{3}{2}x^{\frac{1}{2}}\right)$ $\frac{dy}{dx} = \frac{9}{2}x^{\frac{1}{2}}$ (differentiating $x^n$) 2. If $f(x) = \frac{2x-1}{5x}$, find $f'(x)$. My working: $f(x) = \frac{2x-1}{5x}$ $f'(x) = \frac{2(1x^{1-1})-1}{5(1x^{1-1})}$ $f(x) = \frac{1}{5}$ (differentiating $x^n$) 3. If $y = (2-3x)^2$, find $\frac{dy}{dx}$. My working: $y = 9x^2-12x+4$ $f'(x) = 9(2x^{2-1})-12(1x^{1-1})+0$ $f'(x) = 18x-12$ (differentiating $x^n$ and differentiating a constant)
Your work for Question 1 is fine. For Question 2, use the Quotient Rule (you seem to have assumed that the derivative of a fraction is the derivative of the numerator over the derivative of the denominator, which isn't the case): \begin{align} f' \left ( x \right ) & = \frac{5x \cdot 2 - \left ( 2x - 1 \right ) \cdot 5}{\left ( 5x \right )^2} = \frac{1}{5x^2} \end{align} In the comments you mentioned that your course permits neither the use of the Product Rule nor of the Quotient Rule. In that case, you can split the numerator into two fractions (this is probably a simpler method anyway, but it wasn't the first way that came to mind): \begin{align} f' \left ( x \right ) & = \frac{d}{dx} \left ( \frac{2 x}{5 x} - \frac{1}{5x} \right ) = \frac{d}{dx} \left ( \frac{2}{5} - \frac{1}{5x} \right ) = \frac{d}{dx} \left ( - \frac{1}{5x} \right ) = - \frac{1}{5} \left ( - x^{-2} \right ) = \frac{1}{5x^2} \end{align} For Question 3, you could have also used the Chain rule, although what you've done is fine.
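All three target results can also be checked with a computer algebra system; a minimal SymPy sketch:

    import sympy as sp

    x = sp.symbols('x', positive=True)
    print(sp.diff(3*x*sp.sqrt(x), x))                 # 9*sqrt(x)/2
    print(sp.simplify(sp.diff((2*x - 1)/(5*x), x)))   # 1/(5*x**2)
    print(sp.expand(sp.diff((2 - 3*x)**2, x)))        # 18*x - 12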
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute operator norm of $l^1$ bounded operator I have a problem with the following exercise: We have the operator $T: l^1 \to l^1$ given by $$T(x_1,x_2,x_3,\dots)=\left(\left(1-\frac11\right)x_1, \left(1-\frac12\right)x_2, \dots\right)$$ for $(x_1,x_2,x_3,\dots)$ in $l^1$. Showing that this operator is bounded is easy, but I am stuck on showing that the norm $\|T\| = 1$. I know that for bounded operators the norm is defined as $\|T\|=\sup{\left\{\|T(x)\|: \|x\| \le 1\right\}}$. I am also wondering if there exists an $x$ in $ l^1$ such that $\|x\|=1 $ and $\|T(x)\|= \|T\|$. Thank you! :)
$$\left|\left| T((x_n)_{n\in\mathbb{N}})\right|\right|_{\ell_1}=\left|\left| \left(\left(1-\frac{1}{n}\right)x_n\right)_{n\in\mathbb{N}}\right|\right|_{\ell_1}=\sum_j \left|\left(1-\frac{1}{j}\right)x_j\right|\leqslant \sum_j |x_j |=||(x_n )_{n\in\mathbb{N}} ||_{\ell_1}$$ hence $$||T||\leqslant 1$$ but $$||T||\geqslant \sup_j ||Te_j || =\sup_j \left(1-\frac{1}{j}\right) =1$$ Thus $$||T||=1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Square class of algebraic extension of finite fields This is Q17 in Chapter 2 of the book "Introduction to Quadratic Forms over Fields". Let $F$ be an algebraic extension of a finite field $\mathbb F$. Show that $|F^*/(F^*)^2| \le 2$. If the extension is finite, then $F$ is itself a finite field and, in odd characteristic, indeed $|F^*/(F^*)^2| = 2$ (basically because $F^*$ is cyclic of even order). An example is $$ F = \bigcup _{n=1}^\infty \mathbb F_5\left(\sqrt[2^{\ n}]{2}\right),$$ where $|F^*/(F^*)^2|=1$ (also an exercise in the book). I don't even know if I could use any theory of quadratic forms to tackle this. The closest result is of course Pfister's theorem, which says $$I(F)/I^2(F) \cong F^*/(F^*)^2,$$ where $I(F)$ is the fundamental ideal in the Witt ring $W(F)$.
Corollary 3.6. Let $F=\mathbb{F}_q$ ($q$ odd). A) If $q\equiv 1 \pmod 4$, then $W(F)\cong \mathbb{Z}_2[\dot{F}/\dot{F}^2].$ B) If $q\equiv 3 \pmod 4$, then $W(F)\cong \mathbb{Z}_4.$ Since every finite algebraic extension of a finite field is itself a finite field, the corollary applies, and you can use $I/I^2\cong\dot{F}/\dot{F}^2$ to make the desired assertion, since you know the Witt rings up to isomorphism. On the other hand, if $F$ is the algebraic closure of some finite field $GF(p),$ for a prime $p,$ then $\dot{F}/\dot{F}^2=1,$ and you have the desired claim. For an infinite extension $F/GF(p)$, for some prime $p,$ that is not the algebraic closure, you should consider that this will be the direct limit of a set of finite fields (like the union you mentioned), which produces a direct limit of the groups of units. You can then make an argument about the square classes of each finite extension, obtain a direct limit of the square class groups, and classify the square class group $\dot{F}/\dot{F}^2$ up to isomorphism (in this case either trivial or cyclic of order 2).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Trigonometric inequality $\cos x+ \sin x>0$ Solve the inequality: $\cos x+ \sin x >0$ Why can't I square this to get $\sin 2x>0$? And what is the first step here then?
$$\cos x+ \sin x >0\Rightarrow \frac{\cos x}{\sqrt2}+\frac{\sin x}{\sqrt2}=\sin\left(\frac{\pi}{4}+x\right)\gt0$$ It follows $$x\in\bigcup_{k\in\mathbb{Z}}\space\left]-\frac{\pi}{4}+2k\pi,\frac{3\pi}{4}+2k\pi\space\right[$$ (As for squaring: $(\cos x+\sin x)^2 = 1+\sin 2x$, not $\sin 2x$, and squaring an inequality is only valid when both sides are known to be nonnegative, so it cannot replace the original condition.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
What is the number that, when divided by $3$, $5$, $7$, leaves remainders of $2$, $3$, $2$, respectively? What is the number? The LCM of the divisors is 105. I think this has something to do with the Chinese remainder theorem, but I am not sure how to apply this knowledge.
We can solve in a simple way without the Chinese Remainder Theorem. Let $n = 3x + 2 = 5y + 3 = 7z + 2$. Then $n = 5y + 3 \equiv 2 \pmod 3 \implies 2y \equiv 2 \pmod 3 \implies y \equiv 1 \pmod 3$. Hence let $y = 3k + 1$, which gives $n = 5y + 3 = 15k + 8$. Again $n = 15k + 8 \equiv 2 \pmod 7 \implies k + 1 \equiv 2 \pmod 7 \implies k \equiv 1 \pmod 7$. Hence let $k = 7m + 1$. This gives $n = 15k + 8 = 105m + 23$. Thus in general the number $n$ is of the form $n = 105m + 23$. The smallest such number is obtained at $m = 0$, giving $n = 23$.
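A brute-force search confirms both the smallest solution and the general form; a minimal Python sketch:

    sols = [n for n in range(2 * 105)
            if n % 3 == 2 and n % 5 == 3 and n % 7 == 2]
    print(sols)  # [23, 128], i.e. the solutions are exactly n = 105*m + 23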
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Find basis vectors of the vector space of all $4 \times 4$ magic squares I'm taking a course in linear algebra and I need to solve this problem: Let's define a magic square as a matrix in which the sum of the entries along each row, each column, the main diagonal and the main anti-diagonal is the same. (1) Prove that $4 \times 4$ magic squares form a vector space. (2) Find a basis of this vector space. There are more questions in the exercise, but I guess these are the most important ones, and they will help me solve the others. I have already searched almost the whole Internet, but I'm not able to find the answer. Thank you!
Here's another basis, with an easy proof of linear independence: the entry marked with a star is the only nonzero entry in that location in any of the eight matrices. $$\pmatrix{1*&0&0&0\cr0&0&0&1\cr0&1&0&0\cr0&0&1&0\cr}\quad\pmatrix{0&1*&0&0\cr0&0&0&1\cr0&0&1&0\cr1&0&0&0\cr}\quad\pmatrix{0&0&1*&0\cr0&0&0&1\cr0&0&1&0\cr1&1&-1&0\cr}\quad\pmatrix{0&0&0&1*\cr0&0&0&1\cr0&-1&2&0\cr1&2&-1&-1\cr}\quad\pmatrix{0&0&0&0\cr1*&0&0&-1\cr0&1&-1&0\cr-1&-1&1&1\cr}\quad\pmatrix{0&0&0&0\cr0&1*&0&-1\cr0&0&-1&1\cr0&-1&1&0\cr}\quad\pmatrix{0&0&0&0\cr0&0&1*&-1\cr0&-1&0&1\cr0&1&-1&0\cr}\quad\pmatrix{0&0&0&0\cr0&0&0&0\cr1*&1&-1&-1\cr-1&-1&1&1\cr}$$ A similar approach is taken in Ward, Vector spaces of magic squares, Math Mag 53 (1980) 108-111.
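One can verify with NumPy both that each of the eight matrices satisfies the magic-square constraints and that they are linearly independent (the rank computation below mirrors the starred-entry argument); a minimal sketch:

    import numpy as np

    B = [np.array(M) for M in [
        [[1,0,0,0],[0,0,0,1],[0,1,0,0],[0,0,1,0]],
        [[0,1,0,0],[0,0,0,1],[0,0,1,0],[1,0,0,0]],
        [[0,0,1,0],[0,0,0,1],[0,0,1,0],[1,1,-1,0]],
        [[0,0,0,1],[0,0,0,1],[0,-1,2,0],[1,2,-1,-1]],
        [[0,0,0,0],[1,0,0,-1],[0,1,-1,0],[-1,-1,1,1]],
        [[0,0,0,0],[0,1,0,-1],[0,0,-1,1],[0,-1,1,0]],
        [[0,0,0,0],[0,0,1,-1],[0,-1,0,1],[0,1,-1,0]],
        [[0,0,0,0],[0,0,0,0],[1,1,-1,-1],[-1,-1,1,1]],
    ]]

    def is_magic(M):
        # all row sums, column sums and both diagonal sums agree
        s = M[0].sum()
        return (all(r.sum() == s for r in M)
                and all(c.sum() == s for c in M.T)
                and np.trace(M) == s
                and np.trace(np.fliplr(M)) == s)

    assert all(is_magic(M) for M in B)
    assert np.linalg.matrix_rank(np.array([M.ravel() for M in B])) == 8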
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 1 }
Definition of Homeomorphism Background: Let $(X, \tau_X)$ and $(Y,\tau_Y)$ be topological spaces. A bijection $\gamma:X\to Y$ is called a homeomorphism if: (i) $\gamma$ is continuous, and (ii) $\gamma$ has a continuous inverse $\gamma^{-1}:Y\to X$. Also recall that $\gamma$ is (topologically) continuous if the preimage of each open set $U\in\tau_Y$ is also open. Given that $\gamma^{-1}$ is also required to be continuous, we have that $\gamma$ induces a bijection on the topologies $\tau_X$ and $\tau_Y$. My question is: why do we need the initial bijection condition? Given a map $\phi:X\to Y$ which induces some bijection $\tau_X\to\tau_Y$, why isn't $\phi$ a homeomorphism?
Let $\langle X,\tau_X\rangle$ be any space. Let $D=\{0,1\}$ with the indiscrete topology $\{\varnothing,D\}$, and let $Y=X\times D$ with the product topology $\tau_Y$. The projection map $\pi_X:Y\to X:\langle x,d\rangle\mapsto x$ induces a bijection from $\tau_X$ to $\tau_Y$, but it’s not a homeomorphism. Specifically, the induced bijection sends $U\in\tau_X$ to $U\times D\in\tau_Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence of metrics Let $U\subset\mathbb{R}^2$ be an open neighborhood of the origin and let $f_n:U\rightarrow \mathbb{R}_{>0}$, $n\in \mathbb{N}$, be a sequence of differentiable functions which converges uniformly on $U$ to an integrable function $f:U\rightarrow \mathbb{R}_{>0}$. Fix two points $p_1,p_2\in U$ and call $\Gamma= \Gamma_{p_1}^{p_2}$ the set of differentiable paths $\gamma:I\rightarrow U$, $\gamma(t)=(\gamma_1(t),\gamma_2(t))$, such that $\gamma(0)=p_1$, $\gamma(1)=p_2$ and $\gamma(I)\subset U$. For every $\gamma\in \Gamma$ I'm quite sure the following is true: $$\lim_{n\rightarrow \infty}\int_0^1\sqrt{f_n(\gamma(t))(\dot\gamma_1^2+\dot\gamma_2^2)}dt=\int_0^1\sqrt{f(\gamma(t))(\dot\gamma_1^2+\dot\gamma_2^2)}dt,$$ since the functions $f_n$ converge uniformly. I'm not sure if it's also true that $$\lim_{n\rightarrow \infty}\inf_{\gamma\in\Gamma}\int_0^1\sqrt{f_n(\gamma(t))(\dot\gamma_1^2+\dot\gamma_2^2)}dt = \inf_{\gamma\in\Gamma}\int_0^1\sqrt{f(\gamma(t))(\dot\gamma_1^2+\dot\gamma_2^2)}dt $$ Is this equality true, or does it only hold under additional hypotheses?
This equality is true. For each $\epsilon>0$ we can choose a path $\gamma$ between $p_1$ and $p_2$ such that $l_g(\gamma)< d_g(p_1,p_2)+\epsilon$. Since $f_n$ converges to $f$ uniformly, we can pick an $N>0$ such that for each $n>N$ we have also $l_{g_n}(\gamma)<l_g(\gamma)+\epsilon$. Therefore $d_{g_n}(p_1,p_2)\leq l_{g_n}(\gamma)<d_g(p_1,p_2)+2\epsilon$. Since $\epsilon$ is arbitrary, we obtain $\lim_{n\to\infty} d_{g_n}(p_1,p_2)\leq d_g(p_1,p_2)$. Since the image of the path $\gamma$ is compact, $\frac{f_n}{f}$ converges to $1$ uniformly along $\gamma$. Therefore $l_{g_n}(\gamma)$ converges to $l_g(\gamma)$, proving the opposite inequality. In more detail, given points $p,q$, the sequence $d_{g_n}(p,q)$ converges to some limit $L>0$. For small $\epsilon>0$ negligible compared to $L$ choose $n$ large enough so that $d_{g_n}(p,q)>L-\epsilon$. Then for each curve $\gamma$ between $p$ and $q$ we have $l_{g_n}(\gamma)>L-\epsilon$. In particular if $\gamma$ is a minimizing curve for $g$ we argue as above to get that $L=d_g(p,q)$. If $\gamma$ is not minimizing choose a sufficiently good approximation and argue as before.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find the biggest potential P (Particle) can have along this curve C? The curve $C \subset \mathbb{R}^2$ is given by $x=t\cos(t)$ and $y=\sin(t)$, where $t \in \mathbb{R}_{\geq 0}$. (1) Find the parametrization of the curve. (2) Find the biggest potential P can have along this curve. This may be a part of the exercise as well: the vector field $(x+2xy)\,i+(y+x^2-y^2)\,j$. I am not sure where to even begin. I am not sure how to apply what I know of gradients, line integrals, Lagrange multipliers, etc. to this problem. Any hints are very appreciated.
(1) The answer is in the question: \begin{cases} x=t\cos t\\ y=\sin t \end{cases} (2) $P$ is under the influence of $\vec{F}=(x+2xy,y+x^2-y^2)$. A potential of $\vec{F}$ is a mapping $f:\mathbb{R}^2\rightarrow \mathbb{R}$ such that $$ \vec{F}=\nabla f $$ Solving for $f$ yields $$ f(x,y)=\frac{x^2}{2}+x^2y+\frac{y^2}{2}-\frac{y^3}{3}+K, $$ where $K$ is an arbitrary constant in $\mathbb{R}$. Now, you want to maximize this potential on the curve, i.e. you want to maximize $f(x,y)$ subject to \begin{cases} x=t\cos t\\ y=\sin t \end{cases} Substituting $x$ and $y$ by these expressions yields $$ f(t)=\frac{t^2}{2}\cos^2(t)+t^2\cos^2(t)\sin(t)+\frac{\sin^2(t)}{2}-\frac{\sin^3(t)}{3}+K $$ This function, however, is unbounded, as a plot of $f(t)$ shows. So either I misinterpreted the question, or something is wrong.
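One can confirm that $f$ is indeed a potential by checking $\nabla f = \vec F$ symbolically; a minimal SymPy sketch:

    import sympy as sp

    x, y = sp.symbols('x y')
    f = x**2/2 + x**2*y + y**2/2 - y**3/3
    F = (x + 2*x*y, y + x**2 - y**2)
    assert sp.simplify(sp.diff(f, x) - F[0]) == 0   # f_x = x + 2xy
    assert sp.simplify(sp.diff(f, y) - F[1]) == 0   # f_y = y + x^2 - y^2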
{ "language": "en", "url": "https://math.stackexchange.com/questions/1984912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the coefficient of $x^{80}$ in the power series expansion $\dfrac{x^2(1+x^2+x^5)}{(1-x)^2(1+x)}$ Find the coefficient of $x^{80}$ in the power series expansion $$\dfrac{x^2(1+x^2+x^5)}{(1-x)^2(1+x)}.$$ I don't know how to find coefficients in power series, solutions are greatly appreciated!
Write it as $$\frac{x^2(1+x^2+x^5)}{(1-x)(1-x^2)}=x^2(1+x^2+x^5)\left(\sum_{h=0}^\infty x^h\right)\left(\sum_{s=0}^\infty x^{2s}\right)=\\=(x^2+x^4+x^7)\sum_{k=0}^\infty x^k\#\left\{(h,s)\,:\,0\le h,s\wedge h+2s=k\right\}=$$ if you call $a_k:=\#\left\{(h,s)\,:\,0\le h,s\wedge h+2s=k\right\}$, it continues as $$=\sum_{k=0}^\infty (a_{k-7}+a_{k-4}+a_{k-2})x^k$$ So, you just have to compute $a_{73}+a_{76}+a_{78}$. You can easily see that a pair $(h,s)$ is uniquely determined by the choice of $s$ such that $0\le 2s\le k$. Hence $a_k=\begin{cases}0&\text{if }k<0\\ 1+\left\lfloor\frac k2\right\rfloor&\text{if }k\ge0\end{cases}$.
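With $a_k = 1+\lfloor k/2\rfloor$ this gives $a_{73}+a_{76}+a_{78} = 37+39+40 = 116$; a minimal SymPy sketch confirming the coefficient directly:

    import sympy as sp

    x = sp.symbols('x')
    f = x**2 * (1 + x**2 + x**5) / ((1 - x)**2 * (1 + x))
    print(sp.series(f, x, 0, 81).removeO().coeff(x, 80))   # 116

    a = lambda k: 0 if k < 0 else 1 + k // 2
    print(a(73) + a(76) + a(78))                           # 37 + 39 + 40 = 116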
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Algebraic closures and monomorphisms Let $F$ be a field and $f\in F[X]$, an irreducible polynomial over $F$, $C$ an algebraic closure of $F$ and $a,b \in C$ two roots of $f$. In an exercise I proved the existence of an $F-$monomorphism $\tau :C \longrightarrow C$ such that $\tau (a)=b$. But I also have to prove that $\tau$ is an automorphism (from $C$ into $C$). I am a little bit stuck for ideas to prove this last one and it's ridiculous because I think I proved the hard part... Can someone help me please? Thank you in advance!
Hint: Any element $c\in C$ is algebraic over $F$ and hence is a root of some nonzero polynomial $g\in F[X]$. Observe that $\tau$ must map roots of $g$ to roots of $g$, and there are a finite number of roots of $g$ in $C$. (More generally, this line of argument shows that if $C$ is any algebraic extension of a field $F$ and $\tau:C\to C$ is an $F$-monomorphism, then $\tau$ is surjective.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Number of ways a 6-letter word contains at least 1 "a" and at least 1 "b, c, or d". Repeats are allowed. I tried to approach this problem by using complementary counting. The number of ways to get a 6-letter word with no restriction is $26^6$. Then we can subtract the number of ways where the word does not contain "a, b, c, or d", which is $22^6$. But then I realized there was a problem with this. This problem is weird in that the word has to contain "a" and ANY of "b, c, or d." For example, the following would be valid words: "abzzzx", "axkkdz", "yuabdz". However, it would not be valid if the word is "bcdbcz" or "azzzzz". I hope this makes sense. I was thinking maybe inclusion-exclusion needs to be used somewhere, but I'm not sure if this thinking is even right. This is an extension to a counting problem I had in class (I thought of it).
Casework usually works when you want complementary counting, but the problem has a few details to take care of. Case (1): No $a,b,c,$ or $d$. Case (2): Yes $a$, no $b,c,$ or $d$. Case (3): No $a$, yes $b,c,$ or $d$. These cases are mutually exclusive, and so overcounting is not possible. The sum of these cases is the complement of what you seek. Case (1): There are $22^6$ ways. Case (2): There are $23^6$ ways of constructing a $6$ letter word with $23$ words in the alphabet (no $b,c$ or $d$), and $22^6$ of these ways do not include $a$ (or the number of ways of constructing a $6$ letter word with $22$ words in the alphabet). Hence there are $23^6-22^6$ ways. Case (3): There are $25^6$ ways of constructing the word without $a$. Of these ways, there are $22^6$ ways of constructing the word without any of the three letters, so there are $25^6-22^6$ ways total. Then, there are $26^6 - (22^6) - (23^6 - 22^6) - (25^6 - 22^6) = 26^6 - 23^6 - 25^6 + 22^6$ ways total.
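Numerically, $26^6 - 23^6 - 25^6 + 22^6 = 30{,}119{,}166$. A brute-force check of the same casework on a shrunken alphabet (small enough to enumerate) supports the pattern; a minimal Python sketch:

    from itertools import product

    print(26**6 - 23**6 - 25**6 + 22**6)   # 30119166

    # same casework with a 6-letter alphabet {a..f} and 4-letter words
    alpha, L = "abcdef", 4
    direct = sum(1 for w in product(alpha, repeat=L)
                 if "a" in w and any(c in w for c in "bcd"))
    print(direct, 6**L - 3**L - 5**L + 2**L)   # both print 606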
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solve the differential equation $ \dfrac{ d^2x }{ dt^2 } + 6 \dfrac{ dx }{ dt } + 9x = 4t^2 + 5 $ using variation of parameters Would solving $ \dfrac{ d^2x }{ dt^2 } + 6 \dfrac{ dx }{ dt } + 9x = 4t^2 + 5 $ using variation of parameters require integration by parts or can I solve it without knowing integration by parts? I'm not sure if I'm just using the method wrong or if it requires integration by parts. I'm new to variation of parameters, and I haven't encountered integration by parts. Thanks. The characteristic equation of the homogeneous part is $ r^2 + 6r + 9 = 0 $ $ \Rightarrow (r + 3)^2 = 0 $ $ \Rightarrow r = -3 $ (a repeated root). $ \therefore x(t) = C_1e^{-3t} + C_2te^{-3t}$ is the general solution for the homogeneous differential equation. I now find the particular solution. $-e^{-3t} \displaystyle\int \dfrac{te^{-3t}(4t^2 + 5)}{W(x_1, x_2)} dt + te^{-3t} \displaystyle\int \dfrac{e^{-3t}(4t^2 + 5)}{W(x_1, x_2)} dt$ The Wronskian $W(x_1, x_2) = e^{-6t} $ $-e^{-3t} \displaystyle\int \dfrac{te^{-3t}(4t^2 + 5)}{W(x_1, x_2)} dt + te^{-3t} \displaystyle\int \dfrac{e^{-3t}(4t^2 + 5)}{W(x_1, x_2)} dt $ $= -e^{-3t} \displaystyle\int \dfrac{te^{-3t}(4t^2 + 5)}{e^{-6t}} dt + te^{-3t} \displaystyle\int \dfrac{e^{-3t}(4t^2 + 5)}{e^{-6t}} dt$
Considering the general case of $$I_k=\int t^k e^{r t}\,dt$$ Using integration by parts $$u=t^k \implies du=k t^{k-1}\,dt$$ $$dv=e^{r t}\,dt\implies v=\frac{e^{r t}}{r}$$ makes $$I_k=\frac{t^k e^{r t}}{r}-\frac k r\int t^{k-1} e^{r t}\,dt=\frac{t^k e^{r t}}{r}-\frac k r I_{k-1}$$ with $I_0=\frac{e^{r t}}{r}$. If you already heard about the incomplete gamma function, almost from definition, you would have $$I_k=-\frac{t^{k+1}} {(-r t)^{k+1}} \Gamma (k+1,-r t)$$ which, in the case where $r<0$ reduces to $$I_k=-\frac{\Gamma (k+1,-r t)} {(-r )^{k+1}} $$
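The reduction formula can be checked symbolically for the first few $k$; a minimal SymPy sketch (assuming $r>0$ so that SymPy returns the closed-form antiderivative):

    import sympy as sp

    t = sp.symbols('t')
    r = sp.symbols('r', positive=True)
    for k in range(1, 4):
        I_k = sp.integrate(t**k * sp.exp(r*t), t)
        I_km1 = sp.integrate(t**(k - 1) * sp.exp(r*t), t)
        # I_k should equal t^k e^{rt}/r - (k/r) I_{k-1}
        assert sp.simplify(I_k - (t**k * sp.exp(r*t)/r - k/r * I_km1)) == 0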
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Taylor Series of trigonometric function I searched quite a bit online but could only find the MACLAURIN SERIES of $\sin x$ and $\cos x$: $$\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!}+\cdots$$ $$\cos x = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \frac{x^6}{6!}+\cdots$$ Can anyone explain how we can express the TAYLOR SERIES of $\sin x$ and $\cos x$, and also show me the derivation? Thanks in advance!
The Taylor expansion of any function about any point $c$ is obtained by expanding in powers of $x-c$ and evaluating the derivatives at $c$ instead of at zero: $$f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(c)}{n!}(x-c)^n.$$ The Maclaurin series is the special case $c=0$. For example, for $f=\sin$ this gives $$\sin x=\sin c+\cos c\,(x-c)-\frac{\sin c}{2!}(x-c)^2-\frac{\cos c}{3!}(x-c)^3+\cdots$$
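As a concrete illustration, a minimal SymPy sketch expanding $\sin x$ about $c=\pi/4$:

    import sympy as sp

    x = sp.symbols('x')
    print(sp.series(sp.sin(x), x, sp.pi/4, 4))
    # sqrt(2)/2 + sqrt(2)*(x - pi/4)/2 - sqrt(2)*(x - pi/4)**2/4
    #   - sqrt(2)*(x - pi/4)**3/12 + O((x - pi/4)**4)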
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Alternative solution and generalization to a puzzle "gasoline crisis". Suppose that on a circular route, the gas stations located along the route contain just enough gas for one full trip. Prove that if one starts at the right gas station with an empty tank, one can complete the route. The solution that is offered is: Suppose that one drives the route starting with plenty of gas. Then after completing the trip and emptying each gas station, one has the same amount of gas one started with. Notice that the fuel level fluctuates, and at some station $k$ the amount of fuel on arrival is minimized. Starting at station $k$ is the solution. I suppose station $k$ is not unique; there can be multiple stations with the right amount of fuel? But my main question is whether there is another way of proving this, and can this problem be generalized?
I suppose station $k$ is not unique, there can be multiple stations with the right amount of fuel? That's right. If there are multiple "minimal" stations, then the driver can start at any of these stations. By the time the driver arrives at the next minimal station, she will just have run out of gas, so she can fill up and continue on. is [there] another way of proving this? If we consider a necklace of the net change in gasoline for each fill-and-drive-to-the-next-station step, then this necklace's beads will add up to 0. Thus this problem is equivalent to showing that there's a way to orient the necklace such that all of the partial sums are non-negative. can this problem be generalized? Certainly! Here are a couple of ideas: (1) Instead of the route on a circular track, try a route which is an Eulerian circuit on some graph. (2) The car could have a fuel efficiency which is a function of the amount of gas in the car.
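The minimal-station argument is constructive and easy to implement: accumulate the net fuel change station by station and start just after the point where the running total is minimal. A minimal Python sketch (the example arrays below are made up for illustration):

    # gas[i]: fuel available at station i; cost[i]: fuel needed to reach station i+1.
    # Assumes sum(gas) == sum(cost), i.e. "just enough gas for one full trip".
    def start_station(gas, cost):
        level, min_level, start = 0, 0, 0
        for i in range(len(gas)):
            level += gas[i] - cost[i]
            if level < min_level:          # new minimum of the running fuel level
                min_level, start = level, (i + 1) % len(gas)
        return start

    gas, cost = [1, 2, 3, 4, 5], [3, 4, 5, 1, 2]   # hypothetical example data
    s = start_station(gas, cost)                    # 3 for this example
    tank = 0
    for k in range(len(gas)):                       # verify: tank never goes negative
        i = (s + k) % len(gas)
        tank += gas[i] - cost[i]
        assert tank >= 0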
{ "language": "en", "url": "https://math.stackexchange.com/questions/1985532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }