Alternating sum of combinatorics I have tried to do this exercise using index degradation and Newton's binomial; if you can give me some idea of how to solve this problem, I would appreciate it. Let $a,b\in\mathbb{Z}^+$ such that $${1999 \choose 0}-{1999 \choose 2}+{1999 \choose 4}-\dots-{1999 \choose 1998}=a^b$$ The question is to find the smallest value of $a+b$.
Visualization I will first explain how to visualize the back propagation of Pascal's identity, just in case you haven't seen this method before. Let's look at these identities graphically on Pascal's triangle. Pascal's identity: $$\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$$ Whenever we see a number in Pascal's triangle, we know that it is the sum of the two numbers above it. For the purposes of this visualization, if you see a number inside a circle, it represents the multiplier of that particular binomial coefficient. Finally, don't forget that the triangle is symmetrical. Symmetric identity: $$\binom{n}{k} = \binom{n}{n-k}$$ Proof Apply Pascal's identity. Notice that each row has the same sum. Just to make it easier to see, positive multipliers are green and negative multipliers are red. Notice two patterns:

* Every time we back propagate two rows, the multiplier is doubled. (Remember also that due to the symmetric identity, it is valid to start from the right instead of the left.)
* If there are the same number of green and red circles on a particular row, then the signs of the multipliers two rows back are flipped. So if the initial row started $+$, $-$, etc., then two rows back it will start $-$, $+$, etc.

So if we are trying to find: $${1999 \choose 0}-{1999 \choose 2}+{1999 \choose 4}-\cdots-{1999 \choose 1998}$$ then we know this is equal to: $$-4\left[{1995 \choose 0}-{1995 \choose 2}+{1995 \choose 4}-\dots-{1995 \choose 1994}\right]$$ which is equal to: $$(-4)^2 \left[{1991 \choose 0}-{1991 \choose 2}+{1991 \choose 4}-\dots-{1991 \choose 1990}\right]$$ We can keep doing this until we get: $$(-4)^{499} \left[{3 \choose 0}-{3 \choose 2}\right]$$ which now becomes very easy to simplify to $2^{999}$. Reflection Try to see how this recursion pattern is parallel to the technique where you find the real part of $(1+i)^{1999}$ by converting $1+i$ to polar form and looking at the pattern when multiplying/dividing by $(1+i)^4$.
In my opinion, this proof is a little bit more obtuse and not as elegant as the $(1+i)^{1999}$ method, but I think there is value to knowing this technique. It is a fairly straightforward strategy without any need to invoke derivatives or constructing a counting proof, which makes it an excellent "last resort" or "brute force" strategy.
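As a quick sanity check (my own addition, not part of the solution), the whole alternating sum can be verified with exact integer arithmetic:

```python
from math import comb

# Alternating sum C(1999,0) - C(1999,2) + C(1999,4) - ... - C(1999,1998)
alt_sum = sum((-1) ** j * comb(1999, 2 * j) for j in range(1000))
```

This confirms $a^b = 2^{999}$, consistent with the recursion above.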
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the distance along a sphere from an angle? Imagine there are two points on planet Earth and a light is shone from one to the other by reflecting off an object 500km up (think of this as a mirror oriented parallel to the surface right below it). Let us assume the Earth is a perfect sphere. As the distance between the points increases, the angle that the light is received at increases with a limit of 90 degrees which corresponds to a tangent of the sphere. I would like to know how far apart the two points along the sphere are as a function of this angle. To try to solve it I first notice that the angle of incidence equals the angle of reflection. So we can draw an isosceles triangle with the reflector at the top and the two points as the other two vertices. The height of the triangle is a function of how far apart the two points are. But now I am stuck. The radius of the Earth is 6371km.
Analytical geometry calculation. You want the intersection between (meridians of) the sphere and a cone of semi-vertical angle $\theta$ from a celestial mirror at $O$. The sphere is $$ z^2+r^2=R^2 \tag1$$ and a ray (generator of the cone) from the mirror satisfies $$ r \cot \theta-z = R+h.\tag2$$ Writing $a=R+h$ and eliminating $z$ between (1) and (2), $$ r^2( 1+\cot^2 \theta) -2 a r \frac{\cos \theta}{\sin \theta} + (a^2-R^2)=0, \tag3$$ or, multiplying through by $\sin^2\theta$ and using $a^2-R^2=h(2R+h)$, $$ r^2-2 a r \cos\theta\sin\theta+ h (2R+h)\sin^2 \theta =0.$$ The quadratic has two roots $$ \frac{r}{\sin \theta}= a \cos \theta \pm \sqrt{ a^2 \cos^2 \theta -h(2R+h)}, \tag4$$ the $-$ sign for the required nearby spherical cap and the $+$ sign if the ray is produced to pierce the far hemisphere. By the geometry of right triangles we can find when the ray is tangential to the sphere: $$ (z_m,r_m)= \left(\dfrac{h(2R+h)}{R+h},\dfrac{R\sqrt{h(2R+h)}}{R+h}\right)\tag5$$ is the required relation. If $OM$ is along the polar axis connecting the north/south poles, then the latitude and co-latitude of the required circle of horizon are $$ \cos^{-1}\frac{r_m}{R}, \quad \sin^{-1}\frac{r_m}{R}.\tag6$$
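A quick numerical sanity check of (1), (2), (4) and (5) (the values of $R$, $h$, $\theta$ below are arbitrary choices for testing, not from the question):

```python
import math

R, h = 6371.0, 500.0            # Earth radius and mirror height, km
a = R + h
theta = math.radians(60.0)      # arbitrary semi-vertical angle

# nearby root of r^2 - 2 a r cos(theta) sin(theta) + h(2R+h) sin^2(theta) = 0,
# i.e. formula (4) with the minus sign
r = math.sin(theta) * (a * math.cos(theta)
                       - math.sqrt(a**2 * math.cos(theta)**2 - h * (2*R + h)))
z = r / math.tan(theta) - a     # from (2): z = r cot(theta) - a
on_sphere = z**2 + r**2         # should equal R^2 by (1)

# tangency: discriminant of (4) vanishes when cos(theta) = sqrt(h(2R+h))/a
theta_t = math.acos(math.sqrt(h * (2*R + h)) / a)
r_tangent = a * math.sin(theta_t) * math.cos(theta_t)   # double root of the quadratic
r_m = R * math.sqrt(h * (2*R + h)) / a                  # r_m from (5)
```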
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Discrete-time Input-to-State Stability From a famous control paper titled 'Input-to-state stability for discrete-time nonlinear systems', the following holds true: given a discrete-time system $$x(k+1)=f(x(k),u(k))$$ let $V(x)=|x|^2$, where $|\cdot|$ denotes the L2 norm. Given that $$V(x(k+1))-V(x(k))\leq-aV(x(k))+b|u|^2,$$ where $a,b>0$, then $V(x)$ is an ISS-Lyapunov function for the system and the following inequality can be established (the definition of Input-to-State stability): $$|x(k)|\leq\beta(|x(0)|,k)+\gamma(\lVert u\rVert_\infty),$$ where $\lVert u\rVert_\infty=\sup(|u(k)|:k\in\mathbb{Z}_+)<\infty$, $\gamma:\mathbb{R}_{\geq0}\rightarrow\mathbb{R}_{\geq0}$ is a class $\textit{K}$ function (i.e. $\gamma$ is continuous, strictly increasing and $\gamma(0)=0$) and $\beta:\mathbb{R}_{\geq0}\times\mathbb{R}_{\geq0}\rightarrow\mathbb{R}_{\geq0}$ is a class $\textit{KL}$ function (i.e. $\beta(s,t)$ is a strictly increasing function with respect to $s$ and a decreasing function with respect to $t$, and $\beta(s,t)\rightarrow0$ as $t\rightarrow\infty$). My question is how to obtain the expressions of $\beta$ and $\gamma$ in the above context? They should be in terms of $k$, $a$, $b$. I have tried to read the paper but could not figure out the exact expressions. Thanks!
OK, after some further reading it seems that there is a general form of the solution: $$ |x(k)|\leq\left(1-\left(1-\rho\right)a\right)^{\frac{k}{2}}|x(0)|+\left(\frac{b}{a\rho}\right)^\frac{1}{2}|u|_\infty, $$ given that $0<a<1$ and for an arbitrary $\rho$ such that $0<\rho<1$. Let me know if this is not correct.
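A small simulation (my own sketch, with arbitrary values of $a$, $b$, $\rho$, $x(0)$ and input $u$) that drives the worst case $V(x(k+1)) = (1-a)V(x(k)) + b|u(k)|^2$, saturating the Lyapunov decrease with equality, and checks the proposed bound at every step:

```python
import math

a, b, rho = 0.5, 1.0, 0.5       # assumed constants with 0 < a < 1, 0 < rho < 1
x0 = 3.0
u = [math.sin(1.3 * k) for k in range(60)]   # an arbitrary bounded input
u_inf = max(abs(v) for v in u)

x, violations = x0, 0
for k in range(60):
    bound = (1 - (1 - rho) * a) ** (k / 2) * abs(x0) + math.sqrt(b / (a * rho)) * u_inf
    if abs(x) > bound + 1e-12:
        violations += 1
    # worst-case update: V(x+) - V(x) = -a V(x) + b |u|^2 holds with equality
    x = math.sqrt((1 - a) * x * x + b * u[k] ** 2)
```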
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transforming a summation over multiple variables with a condition Given a general sum with multiple variables from some finite set, I can transform it to be a nested sum or a product of distinct sums over one variable each: $$ S = \sum_{\substack{a,b,c, d}} f(a)f(b)f(c)f(d) = \left(\sum_a f(a)\right)\left(\sum_b f(b)\right)\left(\sum_c f(c)\right) \left(\sum_d f(d)\right)$$ But can I do something similar if I have a similar sum with a condition like $$ S = \sum_{\substack{a,b,c, d\\a+b=c+d}} f(a)f(b)f(c)f(d) = \sum_{a}\sum_{b}\sum_c\sum_{d}f(a)f(b)f(c)f(d) \cdot \delta\{a+b=c+d\} $$ Can I transform it to be a product of simpler sums, or a nested sum? One way I can think of is to throw away the condition, then I'll be able to write the sum as a product of independent sums (exactly as shown above), and finally multiply it by the number of 4-tuples $(a, b, c, d)$ matching the condition divided by the total number of 4-tuples $(a, b, c, d)$, but I am not sure if it's correct, or if there is a simpler solution.
From $$ a+b = c +d, $$ we obtain $$ d = a+b-c, $$ and then $$ \sum_a \sum_b \sum_c \sum_d f(a) f(b) f(c) f(d)\,\delta\{a+b=c+d\} = \sum_a \sum_b \sum_c f(a) f(b) f(c) f(a+b-c). $$ (When the variables range over a finite set, a term on the right-hand side is included only when $a+b-c$ lies in that set.) In the right-hand side, the three summation symbols can of course be interchanged freely without affecting the answer.
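A quick numerical check of this reduction on a small finite index set (the set and the test function below are arbitrary choices; note the membership test on $a+b-c$):

```python
import math

S = range(6)                      # an arbitrary finite index set
f = lambda t: 0.5 * t * t + 1.0   # an arbitrary test function

constrained = sum(f(A) * f(B) * f(C) * f(D)
                  for A in S for B in S for C in S for D in S
                  if A + B == C + D)
reduced = sum(f(A) * f(B) * f(C) * f(A + B - C)
              for A in S for B in S for C in S
              if A + B - C in S)
```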
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating $\lim_{n \to \infty} \frac{(2n)!}{2^n (n!)^2}$ Can you please explain how I should evaluate this limit $$\lim_{n \to \infty} \frac{(2n)!}{2^n (n!)^2}$$ I know the answer is $\geq1$, but I don't know how I can just simplify like this; I get stuck here: $$\frac{2n(2n-1)(2n-2)\cdots1}{2^n(n(n-1)(n-2)\cdots1)}$$ I don't know if I should simplify it like that. As I said, the answer is $\geq1$, but my textbook doesn't explain why.
Let $a_n=\frac{(2n)!}{2^n (n!)^2}$ and consider the limit of the ratio $$\frac{a_{n+1}}{a_n}=\frac{(2n+2)!}{2^{n+1} ((n+1)!)^2}\cdot \frac{2^n (n!)^2}{(2n)!}=\frac{(2n+2)(2n+1)}{2 (n+1)^2}\to \ ?$$ Do you know any theorem about limit of sequences which involves such ratio?
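Numerically the ratio indeed approaches $2 > 1$, which (by the ratio test for sequences) forces $a_n \to \infty$; a quick check with exact rational arithmetic:

```python
from fractions import Fraction
from math import comb

def a(n):
    # a_n = (2n)! / (2^n (n!)^2) = C(2n, n) / 2^n, kept exact
    return Fraction(comb(2 * n, n), 2 ** n)

ratios = [a(n + 1) / a(n) for n in range(1, 30)]
```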
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $m\in\mathbb N$ be such that $m>1$ and let $n$ be another natural number such that $n\mid(m^2+1).$ If $n>m,$ then how can one prove that $n>m+\sqrt m$? (Basically I have no clue how to approach this problem but still tried to write a few lines based on my little knowledge.) We have: $n>m>1.$ Let $n=m+x$ with $x\le\sqrt m.$ Then $n$ divides both $(m^2+1)$ and $(m+x)(m-x)=m^2-x^2.$ This implies $n\mid\{(m^2+1)-(m^2-x^2)\}=(x^2+1),$ where $x^2+1\le m+1.$ I was actually looking for a contradiction. But I don't know how to proceed further with it. Is this even a correct way to approach the problem? Please help! Thanks in advance.
Yes, just go one step further with your proof: Since $n \mid (x^2+1)$, it must be that $x^2 + 1 \geq n \gt m$, therefore $x^2 \geq m$, so $x \geq \sqrt{m}$. But if $x = \sqrt{m}$, then from the above we know $m+1 \geq n \gt m$, therefore $n = m+1$ and $n = m+\sqrt{m}$, so $\sqrt{m}=1$ and $m=1$, contradicting the assumption $m>1$. Therefore it must be that $x \gt \sqrt{m}$ and $n = m+x \gt m+\sqrt{m}$.
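A brute-force check of the statement for small $m$ (my own verification, not part of the proof):

```python
import math

counterexamples = []
for m in range(2, 200):
    target = m * m + 1
    for n in range(m + 1, target + 1):
        # n divides m^2 + 1 and n > m, but n fails to exceed m + sqrt(m)
        if target % n == 0 and n <= m + math.sqrt(m):
            counterexamples.append((m, n))
```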
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Ways to write $n=p^k$ as a product of integers Let's say that $F(n)$ is the number of ways to write $n$ as a product of integers greater than $1$. For example, $F(12)=4$ since $12=2\cdot 2 \cdot 3$, $12=2\cdot 6$, $12=3\cdot 4$ and $12=12$. Given $n=p^k$ where $p$ is a prime number, what is the value of $F(n)$? I know how to manage this problem when $n=p_1\cdots p_k$ where $p_1,\ldots , p_k$ are different primes (the result is $B(k)$, where $B(n)$ is the Bell-Number), but how can I do it in this case? Note: The order of the factors does not matter; that is, $a\cdot b$ and $b \cdot a$ do not count as different ways to write a number
As already answered, the number is given by the number of partitions. We add two more facts about it.

1. By the fundamental theorem for finite abelian groups, the number of abelian groups of order $n=p_1^{n_1}\dots p_k^{n_k}$ is the product of the partition numbers of the $n_i$.
2. For the partition function $p(n)$ there are recursion formulas, asymptotic formulas and even an exact formula. The latter is due to Rademacher (going back to work of Hardy and Ramanujan). We have $$p(n) = \frac{1}{\pi \sqrt{2}} \sum_{k=1}^{\infty} \sqrt{k}\ A_k(n)\ F_k'(n),$$ with $$A_k(n) = \sum_{0 \le m < k, \gcd(m, k) = 1} e^{i\pi\left(s(m, k) - 2nm/k\right)}$$ and $$F_k(x) = \frac{1}{\sqrt{x - \frac{1}{24}}} \sinh\left(\frac{\pi}{k} \sqrt{\frac{2}{3}\left(x - \frac{1}{24}\right)}\right)$$ Here $s(m, k)$ is the Dedekind sum given by $$s(m, k) = \sum_{n=1}^{k} \left(\left(\frac{n}{k}\right)\right)\left(\left(\frac{mn}{k}\right)\right)$$ where $((x))$ is the sawtooth function $$((x)) = \begin{cases} x - \lfloor x \rfloor - \frac{1}{2}, &\mbox{if } x \in \mathbb{R} \setminus \mathbb{Z}\\ 0, &\mbox{if }x \in \mathbb{Z} \end{cases}$$
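The claim $F(p^k) = p(k)$ is easy to check computationally for small cases (a sketch of mine: the recursion counts factorizations with factors taken in non-decreasing order, so order is ignored):

```python
def partitions(n):
    # number of integer partitions of n, by dynamic programming
    p = [1] + [0] * n
    for part in range(1, n + 1):
        for total in range(part, n + 1):
            p[total] += p[total - part]
    return p[n]

def F(n, smallest=2):
    # factorizations of n into factors >= smallest, order ignored; F(1) = 1 (empty product)
    if n == 1:
        return 1
    count = 0
    d = smallest
    while d * d <= n:
        if n % d == 0:
            count += F(n // d, d)
        d += 1
    if n >= smallest:
        count += 1   # the single-factor factorization "n" itself
    return count
```

For example $F(16)=5$, matching $p(4)=5$, and $F(2\cdot3\cdot5)=5$, the Bell number $B(3)$ mentioned in the question.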
{ "language": "en", "url": "https://math.stackexchange.com/questions/3719661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find number of ordered pairs $(a, b)$ satisfying $a^2+b^2=2^3\cdot3^4\cdot5\cdot17^2$ Find number of ordered pairs $(a, b)$ satisfying $a^2+b^2=2^3\cdot3^4\cdot5\cdot17^2$. By rearranging the terms, I found a pair (918, 306). But I wonder if there is a systematic way to solve for the number of pairs? Any hint will be appreciated!
Not a 'real' answer, but it was too big for a comment. I wrote and ran some Mathematica code: In[1]:= Length[FullSimplify[ Solve[{a^2 + b^2 == 2^3*3^4*5*17^2, b > a}, {a, b}, Integers]]] Running the code gives: Out[1]= 12 Looking for the solutions, we can see: In[2]:= FullSimplify[ Solve[{a^2 + b^2 == 2^3*3^4*5*17^2, b > a}, {a, b}, Integers]] Out[2]= {{a -> -954, b -> -162}, {a -> -954, b -> 162}, {a -> -918, b -> -306}, {a -> -918, b -> 306}, {a -> -702, b -> -666}, {a -> -702, b -> 666}, {a -> -666, b -> 702}, {a -> -306, b -> 918}, {a -> -162, b -> 954}, {a -> 162, b -> 954}, {a -> 306, b -> 918}, {a -> 666, b -> 702}} So we can see that for ordered pairs $(a,b)$ with $b>a$ there are $12$ solutions to that problem.
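The same count can be reproduced without Mathematica by brute force (a quick cross-check of mine):

```python
from math import isqrt

N = 2**3 * 3**4 * 5 * 17**2
sols = set()
for A in range(-isqrt(N), isqrt(N) + 1):
    rem = N - A * A
    B = isqrt(rem)
    if B * B == rem:              # rem is a perfect square
        for Bsign in (B, -B):
            if Bsign > A:         # same b > a restriction as in the Solve call
                sols.add((A, Bsign))
```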
{ "language": "en", "url": "https://math.stackexchange.com/questions/3720150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are the last two digits in the ordinary decimal representation of $3^{400}$ My question is to find the last two digits of $3^{400}$, and I found that the last digit is $1$ but I'm not sure about the other one. Any help would be appreciated; thank you for your time. Edit: I've found that $3^{\phi(100)} \equiv 1 \pmod{100}$ with $\phi(100) = 40$, and $3^{400} \equiv 1 \pmod{100}$, so the last digit of $3^{400}$ is $1$.
The multiplicative group of integers mod $100$ has order $\phi (100) = 100 \cdot \frac 12 \cdot \frac 45 = 40$. Hence, for any $a$ coprime to $100$, $a^{40} \equiv 1 \pmod {100}$, so $3^{400} = (3^{40})^{10} \equiv 1 \pmod {100}$. The last two digits are $01$.
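This is immediate to confirm with modular exponentiation:

```python
from math import gcd

assert gcd(3, 100) == 1        # Euler's theorem applies: 3 is a unit mod 100
last_two = pow(3, 400, 100)    # fast modular exponentiation
```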
{ "language": "en", "url": "https://math.stackexchange.com/questions/3720307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Counter example for a matrix $A^3=A$ but $\mbox{rank}(A) \neq \mbox{tr}(A^2)$ If $A$ satisfies $A^3=A$ and is diagonalizable, then $\mbox{rank}(A) = \mbox{tr}(A^2)$. How can one find a counterexample, i.e. a matrix with $A^3=A$ but $\mbox{rank}(A) \neq \mbox{tr}(A^2)$?
There is no counterexample, regardless of the underlying field or whether $A$ is diagonalisable. When $A^3=A$, $\operatorname{rank}(A)$ must be equal to $\operatorname{tr}(A^2)$. (When the field has characteristic $p>0$, this identity should be understood as one over $\mathbb F$. That is, we actually mean $\varphi(\operatorname{rank}(A))=\operatorname{tr}(A^2)$ where $\varphi$ is the ring homomorphism from $\mathbb Z$ to $\mathbb F$.) Denote the underlying field by $\mathbb F$ and let $A$ be $n\times n$. Since $(x^2-1)x$ is an annihilating polynomial of $A$ and $x^2-1,x$ are relatively prime, $\mathbb F^n = \ker((A^2-I)A) = \ker(A^2-I)\oplus\ker(A)$. Also, as $A^2-I$ and $A$ commute with $A$, both $\ker(A^2-I)$ and $\ker(A)$ are invariant subspaces of $A$. Therefore \begin{aligned} \operatorname{tr}(A^2) &= \operatorname{tr}(A^2|_{\ker(A^2-I)}) + \operatorname{tr}(A^2|_{\ker(A)})\\ &= \operatorname{tr}(A^2|_{\ker(A^2-I)})\\ &= \operatorname{tr}(I|_{\ker(A^2-I)}) \quad(\because A^2=I \text{ on } \ker(A^2-I))\\ &= \dim\ker(A^2-I) \quad\text{(we mean $\varphi(\dim\ker(\cdots))=\operatorname{tr}(\cdots)$ here)}\\ &= n - \dim\ker(A) \quad(\because \mathbb F^n=\ker(A^2-I)\oplus\ker(A))\\ &= \operatorname{rank}(A). \end{aligned}
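One can also test the identity with exact arithmetic over $\mathbb Q$ (a sketch of mine): conjugating $D=\operatorname{diag}(1,-1,0)$ by an invertible matrix $S$ gives an $A$ with $A^3=A$, and rank and trace can then be compared directly. The matrices $S$, $S^{-1}$ below are hand-picked examples.

```python
from fractions import Fraction

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(M):
    # Gaussian elimination over the rationals
    M = [row[:] for row in M]
    n, r = len(M), 0
    for col in range(n):
        piv = next((i for i in range(r, n) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [x - factor * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

Fr = Fraction
# A = S D S^{-1} with D = diag(1, -1, 0); S^{-1} computed by hand (det S = 2)
S    = [[Fr(1), Fr(1), Fr(0)], [Fr(0), Fr(1), Fr(1)], [Fr(1), Fr(0), Fr(1)]]
Sinv = [[Fr(1, 2), Fr(-1, 2), Fr(1, 2)],
        [Fr(1, 2), Fr(1, 2), Fr(-1, 2)],
        [Fr(-1, 2), Fr(1, 2), Fr(1, 2)]]
D    = [[Fr(1), Fr(0), Fr(0)], [Fr(0), Fr(-1), Fr(0)], [Fr(0), Fr(0), Fr(0)]]

A  = matmul(matmul(S, D), Sinv)
A2 = matmul(A, A)
A3 = matmul(A2, A)
```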
{ "language": "en", "url": "https://math.stackexchange.com/questions/3720581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Behaviour of sine sums I was considering the following function $$f_a(x)=\sum_{n=0}^{\lfloor x\rfloor}\sin^2(an)$$ and, as expected, $f_a(x)\approx x/2$ for every $a$ (except $2\pi$ and similar). This is because the function $\sin^2(ax)$ is "on average" equal to $1/2$. That is: we are adding $x$ numbers all close to $1/2$, and if one of them is bigger (than $1/2$) then not much later we are going to find another that is smaller (than $1/2$), cancelling the effect. At first I thought that the behaviour of $f_a$ would depend on how good an approximation $a$ is to a rational multiple of $\pi$ (problems of convergence and similar always take this into consideration), but it is not the case. The problem begins when one considers (I found this before the $f_a$) the functions $$g_a(x)=\sum_{n=0}^{\lfloor x\rfloor}\sin(an)$$ I'm puzzled by these: they are (almost always) approximately equal to $A\sin^2\left(Bx\right)$, where $A$ and $B$ are, I think, constants (or almost constant) which depend on $a$. First of all, why does this happen? If $\sin^2(ax)$ is, on average, equal to $1/2$, then $\sin(an)$ is equal to $0$ on average, so we should observe $g_a\approx0$ always. Instead, $g_a$ is either positive (and oscillating) or negative (also oscillating), except for very few values where it becomes negative (positive) but very small. So they are, in a sense, "very positive" or "very negative". Either way, very far from zero. Maybe it has to do with the fact that $$\frac{1}{x}\int_0^x\sin^2(at)\,dt$$ converges to $1/2$ while oscillating and $$\frac{1}{x}\int_0^x\sin(at)\,dt$$ converges to $0$ but is always positive or negative so that, while it is $0$ on average, it is more positive somehow. Can you explain this phenomenon? Also, this is not the whole story: there are some values of $a$ (for example $a=2.8$) for which $g_a$ is "made" of two sine waves (like a standing wave), but this time the "very positive or very negative" doesn't show up.
For some other values ($a=23$), $g_a$ is made up of three sine waves. I suspect this can be explained with the rational approximation thing. Note: (you may want to graph $g_a$ times a big constant like 10 and zoom out to see the various sine waves) Thanks!! Edit: We have the following identity: $$\sin(\varphi)+\sin(\varphi+\alpha)+\sin(\varphi+2\alpha)+\dots+\sin(\varphi+n\alpha)=$$ $$=\frac{\sin\left(\frac{(n+1)\alpha}{2}\right)\sin\left(\varphi+\frac{n\alpha}{2}\right)}{\sin\left(\frac{\alpha}{2}\right)}$$ So making $\varphi=0$ and $\alpha=a$ we have a precise expression $$g_a(x)=\frac{\sin\left(\frac{(\lfloor x\rfloor+1) a}{2}\right)\sin\left(\frac{\lfloor x\rfloor a}{2}\right)}{\sin\left(\frac{a}{2}\right)}$$ which looks similar enough to $A\sin^2(Bx)$. This does not resolve the "$\sin$'s average is $0$ so $g_a$ should be $0$" issue. Also, and I find this very curious, if we replace $\lfloor x\rfloor$ with $x$ in the formula we get a completely different function, and the "made up of various sine waves for some choices of $a$" is far from clear from the formula. I'm guessing it's a matter of how synced the two waves in the numerator are: if they are very, very similar then their product is very similar to $\sin^2$. This explains the "very positive" or "very negative", but no more, because the frequencies are $a/2$ for both waves. Another observation: if we use $\cos$ instead, the opposite phenomenon occurs. For most values of $a$, the sum is "made up" of various waves.
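The exact expression in the edit is easy to confirm numerically (a quick check of the closed form against the direct sum, for a few arbitrary values of $a$ and $x$):

```python
import math

def g_direct(a, x):
    return sum(math.sin(a * n) for n in range(math.floor(x) + 1))

def g_closed(a, x):
    k = math.floor(x)
    return math.sin((k + 1) * a / 2) * math.sin(k * a / 2) / math.sin(a / 2)

max_err = max(abs(g_direct(a, x) - g_closed(a, x))
              for a in (0.7, 2.8, 23.0) for x in (7.2, 50.9, 200.0))
```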
The function has a closed form, found with a cosine-sum formula analogous to the sine-sum formula you cite $$\sum_{n=0}^k\cos(n \varphi) = \frac{\cos(k\varphi/2)\sin((k+1)\varphi/2)}{\sin(\varphi/2)}$$ thus $$ f_a(x) = \sum_{n=0}^{\lfloor x \rfloor}\sin^2(an) = \sum_{n=0}^{\lfloor x \rfloor}\frac12 - \sum_{n=0}^{\lfloor x \rfloor}\frac12 \cos(2an)=\\$$ $$ = \frac{1+\lfloor x \rfloor}2 - \frac{\cos(\lfloor x \rfloor a)\sin\left((\lfloor x \rfloor + 1)a\right)}{2\sin(a)} $$ which indeed satisfies $$ f_a(x) = \frac x 2 + O(1) $$
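As a sanity check, the closed form $f_a(x) = \frac{1+\lfloor x\rfloor}{2} - \frac{\cos(\lfloor x\rfloor a)\sin((\lfloor x\rfloor+1)a)}{2\sin a}$ (obtained from the real part of $\sum_{n=0}^{k} e^{2ian}$) can be compared against the direct sum numerically:

```python
import math

def f_direct(a, x):
    return sum(math.sin(a * n) ** 2 for n in range(math.floor(x) + 1))

def f_closed(a, x):
    k = math.floor(x)
    return (1 + k) / 2 - math.cos(k * a) * math.sin((k + 1) * a) / (2 * math.sin(a))

max_err = max(abs(f_direct(a, x) - f_closed(a, x))
              for a in (0.7, 1.0, 2.8) for x in (5.3, 40.9, 123.0))
```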
{ "language": "en", "url": "https://math.stackexchange.com/questions/3720695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Evaluation of $S_{k,j}=\sum_{n_1,\ldots,n_k=1}^\infty\frac{n_1\cdots n_j}{(n_1+\cdots+n_k)!}$ for $0\leqslant j\leqslant k>0$ For a positive integer $k$, and an integer $j$ with $0\leqslant j\leqslant k$, the problem of evaluating $$S_{k,j}=\sum_{n_1,\ldots,n_k=1}^\infty\frac{n_1\cdots n_j}{(n_1+\cdots+n_k)!}$$ appears as an extension of the problem 3.137 in the book "Limits, Series, and Fractional Part Integrals" by O. Furdui (which asks for evaluation of $S_{k,k}$). It's stated as an "open problem" there, and a quick search over the Internet reveals a few solutions, like this one. In the answer below, I'm sharing a solution that looks straightforward to me. I wonder whether there is anything similar online; I didn't find it.
I'm using the Cauchy integral formula ($n$ is a nonnegative integer): $$\frac{1}{n!}=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z\,dz}{z^{n+1}},$$ where $C$ is, say, the circle $|z|=r$, with $r>1$ to ensure the convergence: \begin{align*} S_{k,j}&=\frac{1}{2\pi\mathrm{i}}\sum_{n_1,\ldots,n_k=1}^{\infty}n_1\cdots n_j\oint_C\frac{e^z\,dz}{z^{n_1+\cdots+n_k+1}} \\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z}{z}\left(\sum_{n=1}^\infty\frac{n}{z^n}\right)^j\left(\sum_{n=1}^\infty\frac{1}{z^n}\right)^{k-j} dz \\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{e^z}{z}\left(\frac{z}{(z-1)^2}\right)^j\frac{dz}{(z-1)^{k-j}} \\&=\frac{1}{2\pi\mathrm{i}}\oint_C\frac{z^{j-1}e^z}{(z-1)^{k+j}}\,dz. \end{align*} For $j>0$, using the "coefficient-of" notation, we get readily $$S_{k,j}=e[z^{k+j-1}](1+z)^{j-1}e^z=e\sum_{r=0}^{j-1}\binom{j-1}{r}\frac{1}{(k+r)!}.$$ For $j=0$, the residue at $z=0$ enters the picture, resulting in $$S_{k,0}=(-1)^k+e[z^{k-1}](1+z)^{-1}e^z=(-1)^k\left(1-e\sum_{r=0}^{k-1}\frac{(-1)^r}{r!}\right).$$
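The closed forms can be sanity-checked numerically by truncating the multiple sum (my own check; truncation at $n_i \le 25$ is far more than enough, since the factorial denominator decays rapidly):

```python
import math, itertools

def S_direct(k, j, N=25):
    # truncated version of the k-fold sum
    total = 0.0
    for ns in itertools.product(range(1, N + 1), repeat=k):
        num = 1
        for v in ns[:j]:
            num *= v
        total += num / math.factorial(sum(ns))
    return total

def S_closed(k, j):
    if j > 0:
        return math.e * sum(math.comb(j - 1, r) / math.factorial(k + r) for r in range(j))
    return (-1) ** k * (1 - math.e * sum((-1) ** r / math.factorial(r) for r in range(k)))

max_err = max(abs(S_direct(k, j) - S_closed(k, j))
              for k in (1, 2, 3) for j in range(k + 1))
```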
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Special types of prime numbers We know that there are infinitely many primes by Euclid's proof, and we also know that there are infinitely many primes in arithmetic progressions by Dirichlet's theorem. My question is: are there other special types of prime numbers for which it has been proven that there are infinitely many of them, like the Fermat primes or the Sophie Germain primes or any other special types?
* Famous Dirichlet theorem: Let $\ a\ $ and $\ b\ $ be relatively prime natural numbers. Then, there are infinitely many primes $\ p\equiv a\mod b.$
* Fascinating Friedlander–Iwaniec theorem: there are infinitely many primes $\ p=a^2+b^4\ $ where $\ a\ $ and $\ b\ $ are natural numbers.
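For illustration, the first few Friedlander–Iwaniec primes $p=a^2+b^4$ are easy to enumerate (a small script of mine, with naive trial-division primality testing):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

limit = 200
fi_primes = sorted({a * a + b ** 4
                    for a in range(1, limit) for b in range(1, limit)
                    if a * a + b ** 4 <= limit and is_prime(a * a + b ** 4)})
```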
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Sum of series $\sum_{n=1}^{\infty}\frac{1}{n(n+\frac{1}{2})}$ $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n(n+\frac{1}{2})}$ I am trying to solve an integral and this ended up being the last part I need. Wolfram Alpha gives $4-2\ln{4}$, but I don't know how they actually got that value. Can anyone help or give a starting point? A telescoping series didn't work.
Hint: $a_n = \dfrac{4}{2n} - \dfrac{4}{2n + 1}$.
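Summing the hint against the alternating harmonic series gives $4(1-\ln 2) = 4 - 2\ln 4$; a quick numerical confirmation:

```python
import math

partial = sum(1.0 / (n * (n + 0.5)) for n in range(1, 1_000_000))
target = 4 - 2 * math.log(4)   # equals 4 - 4 ln 2
```

The tail beyond $10^6$ terms is below $10^{-6}$, since each term is less than $1/n^2$.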
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing $\frac{d\theta }{ d \tan \theta}=\frac{ 1}{ 1+ \tan^2 \theta}$ I suppose that $$ \frac{d\theta }{ d \tan \theta}=\frac{d \arctan x }{ d x}= \frac{1}{1+x^2}=\frac{ 1}{ 1+ \tan^2 \theta} $$ So is $$ \frac{d\theta }{ d \tan \theta}=\frac{ 1}{ 1+ \tan^2 \theta} $$ correct? And $$ \frac{d (\theta) }{ d \tan \frac{\theta}{2}}=\frac{ 2}{ 1+ \tan^2 \frac{\theta}{2}} \; ? $$
The following is the way I look at the derivation of $\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{1+ \tan^2 \theta} \tag 0$ and related identities; starting with $\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{\dfrac{d\tan \theta}{d\theta}} \tag 1$ we use the definition $\tan \theta = \sin \theta / \cos \theta$ and the quotient rule for derivatives to obtain: $\dfrac{d\tan \theta}{d\theta} = \dfrac{d}{d\theta}\dfrac{\sin \theta}{\cos \theta} = \dfrac{(\cos \theta)(\cos \theta) - (-\sin \theta)(\sin \theta)}{\cos^2 \theta}$ $= \dfrac{\cos^2 \theta + \sin^2 \theta}{\cos^2 \theta} = \dfrac{\cos^2 \theta}{\cos^2 \theta} + \dfrac{\sin^2 \theta}{\cos^2 \theta} = 1+ \tan^2 \theta; \tag 2$ now by (1), $\dfrac{d\theta}{d\tan \theta} = \dfrac{1}{1+ \tan^2 \theta}; \tag 3$ having obtained this result, we may calculate $\dfrac{d\theta}{d\tan (\theta/2)}$, additionally invoking the chain rule; we set $u(\theta) = \dfrac{\theta}{2}, \tag4$ whence $\dfrac{du(\theta)}{d\theta} = \dfrac{1}{2}, \tag 5$ and $\dfrac{d\tan (\theta/2)}{d\theta} = \dfrac{d\tan u(\theta)}{d\theta} = \dfrac{d\tan u(\theta)}{du} \dfrac{du(\theta)}{d\theta}$ $=\dfrac{1}{2}(1 + \tan^2(u(\theta))) = \dfrac{1 + \tan^2 (\theta/2)}{2}, \tag 6$ and thus $\dfrac{d\theta}{d\tan (\theta/2)} = \dfrac{2}{1 + \tan^2 (\theta/2)}. \tag 7$ $OE\Delta$.
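Both identities (3) and (7) can be verified independently with finite differences (a numerical sanity check of mine, using central differences at a few arbitrary angles):

```python
import math

h = 1e-6
def deriv(fn, t):
    # central finite difference approximation to fn'(t)
    return (fn(t + h) - fn(t - h)) / (2 * h)

max_err = 0.0
for theta in (0.3, 0.7, 1.2):
    # (3): d(theta)/d(tan theta) = 1 / (d tan(theta)/d theta) = 1/(1 + tan^2 theta)
    lhs3 = 1.0 / deriv(math.tan, theta)
    rhs3 = 1.0 / (1 + math.tan(theta) ** 2)
    # (7): d(theta)/d(tan(theta/2)) = 2/(1 + tan^2(theta/2))
    lhs7 = 1.0 / deriv(lambda t: math.tan(t / 2), theta)
    rhs7 = 2.0 / (1 + math.tan(theta / 2) ** 2)
    max_err = max(max_err, abs(lhs3 - rhs3), abs(lhs7 - rhs7))
```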
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Transform expression for Taylor series $f^{(k)}(x) = \frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k-1}}}$ For $x_0=1$ calculate the Taylor series. The above expression has already been proven by induction. $T_{f,x_0}= \sum^\infty_{k=0}\frac{f^{(k)}(x_0)}{k!}\cdot(x-x_0)^k$ $\begin{align} T_{f,1} &= \sum^\infty_{k=0}\frac{f^{(k)}(1)}{k!}\cdot(x-1)^k \\ &= \sum^\infty_{k=0} \frac{(-1)^{k+1}}{2^k}\cdot \prod^{k-1}_{j=1}(2j-1)\cdot \frac{1}{\sqrt{1^{2k-1}}} \cdot\frac{1}{k!}\cdot(x-1)^k \\ &= \sum^\infty_{k=0} \frac{1}{2k-1}\prod^{k}_{j=1}(2j-1) \cdot \frac{1}{k!} \cdot(-1) \cdot \frac{(-1)^k}{2^k} \cdot (x-1)^k \\ &=\sum^\infty_{k=0} \frac{1}{2k-1}\prod^{k}_{j=1}[(2j-1) \cdot \frac{1}{j}] \cdot (-1) \cdot (\frac{(-1)\cdot(x-1)}{2})^k \\ &=\sum^\infty_{k=0} \frac{1}{2k-1}\prod^{k}_{j=1}(2-\frac{1}{j}) \cdot (-1) \cdot (\frac{1-x}{2})^k \end{align}$ Is there a way to transform this expression further?
I have the feeling that something special is going on when the $k!$ suddenly disappears. Starting from the beginning $$f^{(k)}(x) = \frac{(-1)^{k+1}}{2^k}\prod^{k-1}_{j=1}(2j-1)\frac{1}{\sqrt{x^{2k-1}}}=\frac{(-1)^{k+1}}{2^k\sqrt{x^{2k-1}}}\prod^{k-1}_{j=1}(2j-1)$$ Now $$\prod^{k-1}_{j=1}(2j-1)=\frac{2^{k-1} }{\sqrt{\pi }}\Gamma \left(k-\frac{1}{2}\right)$$ makes $$\frac {f^{(k)}(1)}{k!}=\frac{(-1)^{k+1} }{2 \sqrt{\pi }\,k!}\Gamma \left(k-\frac{1}{2}\right)$$ $$T_{f,1}=\frac{1 }{ 2\sqrt{\pi }}\sum_{k=0}^\infty \frac{(-1)^{k+1} }{k!}\Gamma \left(k-\frac{1}{2}\right)(x-1)^k=\sum_{k=0}^\infty \binom{\frac{1}{2}}{k}(x-1)^k$$ which is in fact a very, very simple function.
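Indeed the last sum is the binomial series of $\sqrt{x}$ centred at $x_0=1$; a quick numerical confirmation (the coefficients $\binom{1/2}{k}$ are built iteratively):

```python
import math

def taylor_sqrt(x, terms=200):
    total, coeff = 0.0, 1.0   # coeff = binom(1/2, k), starting at k = 0
    for k in range(terms):
        total += coeff * (x - 1) ** k
        coeff *= (0.5 - k) / (k + 1)   # binom(1/2, k+1) from binom(1/2, k)
    return total

max_err = max(abs(taylor_sqrt(x) - math.sqrt(x)) for x in (0.4, 0.9, 1.0, 1.6))
```

The series converges for $|x-1|<1$, which the test points respect.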
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
integral calculation mistake I try to solve this: $ \int_{0}^{\pi /2}\sin x \cos x\sqrt{1+\cos^{2}x } dx $ This is what I do: $ \cos x = t; -\sin x dx = dt; dx = \frac{-dt}{\sqrt{1-t^{2}}} $ $ \frac{-\sqrt{1-t^{2}}*t*\sqrt{1+t^{2}}}{-\sqrt{1-t^{2}}} dt $ $-\int_{0}^{1} t * \sqrt{1+t^{2}} dt$ $1+t^2 = a; 2tdt = da; tdt = da/2$ $-\int_{0}^{2}\sqrt{a}da = - \frac{2\sqrt{2}}{3}$ But the answer is $\frac{2\sqrt{2}}{3} - \frac{1}{3}$ Where do I make the mistake?
Your second substitution is $a=1+t^2$: if $t \in [0,1]$, then $a \in [1,2]$ and not $a \in [0,2]$. Note also that as $x$ runs from $0$ to $\pi/2$, $t=\cos x$ runs from $1$ to $0$, so flipping the limits back to $0 \to 1$ absorbs the minus sign: $$\int_0^{\pi/2}\sin x\cos x\sqrt{1+\cos^2 x}\,dx=\int_0^1 t\sqrt{1+t^2}\,dt=\frac12\int_1^2\sqrt a\,da=\frac{2\sqrt2}{3}-\frac13.$$
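The value $\frac{2\sqrt2}{3}-\frac13$ checks out against direct numerical integration (composite Simpson's rule, my own cross-check):

```python
import math

def integrand(x):
    return math.sin(x) * math.cos(x) * math.sqrt(1 + math.cos(x) ** 2)

def simpson(f, a, b, n=10_000):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

value = simpson(integrand, 0.0, math.pi / 2)
exact = 2 * math.sqrt(2) / 3 - 1 / 3
```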
{ "language": "en", "url": "https://math.stackexchange.com/questions/3721874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding Mass of Object Given Density I need to find the mass of an object that lies above the disk $x^2 +y^2 \le 1$ in the $x$-$y$ plane and below the sphere $x^2 + y^2 + z^2 = 4$, if its density is $\rho(x, y, z)=2z$. I know that the mass will be $\iiint_R 2z$ $dV$, and I just need to determine the region $R$ which bounds the object. If I were to use spherical coordinates, I'd have $(r, \theta, \phi)$ where $0 \le r \le 1$ (since the radial distance is restricted by the disk), $0 \le \theta \le 2\pi$ (since we can complete a full revolution just fine), however I am unsure how to determine the upper limit of $\phi$. Am I going in the right direction using spherical coordinates? And how would I find the upper limit of $\phi$? Thanks.
Try to do it using cylindrical coordinates instead. You can find the limits of $z$ by solving the sphere and a cylinder (extended along the $z$-axis from the disc) of radius $r$. $$(x^2+y^2) + z^2 = 4$$ $$r^2+z^2 = 4$$ $$z = \sqrt{4-r^2}$$ $$\iiint_V \rho(x,y,z)\,dx\,dy\,dz = \int_0^1\int_0^{2\pi} \int_0^{\sqrt{4-r^2}} 2z\,dz\,(rd\theta)\,dr $$ $$M = 2\pi\int_0^1 r(4-r^2)\,dr = 2\pi\left( 2-\frac{1}{4} \right) = \frac{7\pi}{2} $$
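A midpoint-rule check of the remaining radial integral (the $z$ and $\theta$ integrations are already done in closed form above):

```python
import math

n = 100_000
# midpoint rule for 2*pi * integral of r*(4 - r^2) over r in [0, 1]
radial = sum(((i + 0.5) / n) * (4 - ((i + 0.5) / n) ** 2) for i in range(n)) / n
mass = 2 * math.pi * radial
```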
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove there exists $\alpha \ge 0$ s.t $\int_0^\alpha f(x)dx =\int_0^\infty g(x)dx$ given that $f,g\ge 0$, $F(x)$ diverges and $G(x)$ converges This is one of the problems we got as an assignment: if $f(x),g(x)$ are two integrable functions on $[0,t]$ for any $0<t\in \Bbb{R}$, and suppose that:

* $f(x)\ge 0,\ g(x)\ge 0$ for all $x\ge 0$;
* $\int_0^\infty f(x)dx$ diverges and $\int_0^\infty g(x)dx$ converges.

Prove that there exists some $\alpha \ge 0$ such that $\int_0^\alpha f(x)dx =\int_0^\infty g(x)dx$. So I obviously see that if $\int_0^\infty g(x)dx=0$ then $\alpha =0$. I also know that both functions are non-negative, so their integrals are increasing; therefore if $\int_0^\infty g(x)dx = S$ then $S>0$. But now I don't understand how this gets me to the value that I'm trying to get to... also with integrals I try to somehow visualize, and I don't understand how this is true at all. I mean, if $\int_0^\infty f(x)dx$ diverges, how can I find such a specific value? If it diverges it can "start diverging" at any point...
If $\int_0^{\infty}g(x)dx=0$, then clearly $\alpha=0$. If not, then let $\int_0^{\infty}g(x)dx=L\gt 0$ (since $g\ge 0$). Now consider $F(x)=\int_0^{x}f(t)dt$. Then $F(0)=0$, $F(x)$ is continuous, and $\displaystyle\lim_{x\to \infty}F(x)=\infty$ (why? because $F(x)$ is non-decreasing and $\int_0^\infty f(x)dx$ diverges, so the limit is $+\infty$). Can you do it from here?
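A concrete instance of the intermediate-value argument (my own illustration, with hand-picked $f$ and $g$): take $f(x)=1/(1+x)$, whose integral diverges, so $F(\alpha)=\ln(1+\alpha)$, and $g(x)=e^{-x}$, so $L=\int_0^\infty g=1$; bisection on the continuous increasing $F$ locates $\alpha$ with $F(\alpha)=L$, which here is $\alpha=e-1$:

```python
import math

L = 1.0                              # integral of g(x) = exp(-x) over [0, inf)
F = lambda alpha: math.log(1 + alpha)  # F(alpha) = integral of 1/(1+x) over [0, alpha]

lo, hi = 0.0, 1.0
while F(hi) < L:        # F is unbounded, so such a hi exists
    hi *= 2
for _ in range(80):     # bisection: F(lo) < L <= F(hi) is maintained
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(mid) < L else (lo, mid)
alpha = (lo + hi) / 2
```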
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can I say a set has measure $1$? Suppose $(\Omega,\mathscr{E},\mathbb{P})$ is a measure space such that $\mathbb{P}(\Omega)=1$. Suppose $A_i \in \mathscr{E}$ for $1 \leq i \leq n$ where $n \in \mathbb{N}$. Suppose $\mathbb{P}(\bigcup_{1 \leq i \leq n} A_i)=1$. Can I conclude that there exists $1 \leq i \leq n$ such that $\mathbb{P}(A_i)=1$? If not, how can I find a counterexample? I know the inclusion-exclusion principle, but I do not know if we can use it here and how. In addition, are there conditions (e.g. $(\Omega,\mathscr{E},\mathbb{P})$ has no atoms) such that the statement holds?
No. There seems to be no inclusion hypothesis in your statement. Consider $\Omega=[0,1]$ with the Borel $\sigma$-algebra generated by the open sets and the usual probability $dx$. Then you can have $$A_j = \left(\frac{j-1}{n},\frac{j}{n}\right)\,.$$ Clearly the union has probability one but they are all disjoint and each has probability $1/n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Finite-index normal subgroup acts amenably on compact Hausdorff space $X$ Let $\Gamma$ be a discrete group acting on a compact Hausdorff space $X$. The action is called (topologically) amenable if there exists a net of continuous maps $m_i: X\rightarrow \text{Prob}(\Gamma)$ such that for each $s \in \Gamma$, $$\lim_{i\rightarrow\infty}\left(\sup_{x\in X}\left\Vert s.m_{i}^{x}-m_{i}^{s.x}\right\Vert _{1}\right)=0,$$ where $s.m_i^x(g)=m_i^x(s^{-1}g)$. Here $\text{Prob}(\Gamma)$ denotes the set of probability measures on $\Gamma$ and continuous means that for every convergent net $x_j \rightarrow x\in X$ we have $m_i^{x_j}(g) \rightarrow m_i^x (g)$ for all $g \in \Gamma$ Question: Let $\Gamma$ be a discrete group acting on $X$. Is it true that if $N\vartriangleleft\Gamma$ is a finite-index normal subgroup that acts amenably on the space $X$, then the action $\Gamma \curvearrowright X$ is amenable as well?
This is a corollary of the following fact, found as Proposition 5.1.11 in Brown & Ozawa's "$C^*$-Algebras and Finite-Dimensional Approximations". Let $\Gamma$ be a discrete group, $N$ be a normal subgroup and $\overline \Gamma=\Gamma/N$. If $Y$ is a compact amenable $\overline \Gamma$-space and $X$ is a $\Gamma$-space which is amenable as an $N$-space, then $X\times Y$ (with the diagonal $\Gamma$-action) is an amenable $\Gamma$-space. Given this, the result is straightforward: since $\Gamma/N$ is a finite group, its trivial action on a one-point space $\{p\}$ is amenable. As the action $N\curvearrowright X$ is also amenable, this implies that the diagonal action $\Gamma\curvearrowright X\times\{p\}=X$ is amenable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that if $X$ and $Y$ are independent with the same exponential distribution, then $Z= |X - Y|$ has the same exponential distribution $$P\left(Z\le z\right)=P\left(\left|X-Y\right|\le z\right)=P\left(-z\le X-Y\le z\right)=P\left(Y-z\le X\le Y+z\right)$$ This means that, because $\space f(x,y)=f(x)f(y) \space$ as they are independent, we get to calculate this integral: $$\int _0^{\infty }\:\int _{y-z}^{y+z}\:\left(\lambda e^{-\lambda x}\right)^2dxdy$$ We get $$\left(e^{\left(-2\cdot \lambda \cdot z\right)}\cdot \left(e^{\left(4\cdot \lambda \cdot z\right)}-1\right)\right)/4$$ which is not what we want. So where's the mistake?
The main error arises from not considering when the interval $[y-z, y+z]$ is not a subset of $[0,\infty)$. That is to say, when $y < z$, then $y-z < 0$. So you need to take this into account when integrating over $x$. The second error is in writing the integrand as the square of the marginal density $f_X(x)$, when the outer integral is with respect to $y$. The way to set up the integral is $$\begin{align*} &\Pr[Y - z \le X \le Y + z] \\ &\quad = \int_{y=0}^z \int_{x=0}^{y+z} f_{X \mid Y}(x \mid y) f_Y(y) \, dx \, dy + \int_{y=z}^\infty \int_{x=y-z}^{y+z} f_{X \mid Y}(x \mid y) f_Y(y) \, dx \, dy\end{align*}$$ where $f_{X \mid Y}(x \mid y) = f_X(x)$ because $X$ and $Y$ are independent. As the above is not sufficiently detailed for the audience, I will proceed with a full explanation as follows. Note we want to compute $\Pr[Y - z \le X \le Y + z]$ for IID $$X, Y \sim \operatorname{Exponential}(\lambda)$$ with rate parametrization $$f_X(x) = \lambda e^{-\lambda x}, \quad x \ge 0, \\ f_Y(y) = \lambda e^{-\lambda y}, \quad y \ge 0.$$ We will do this the mechanical way as requested, then show an alternative computation that is easier. First note $$\begin{align*} &\Pr[Y - z \le X \le Y + z] \\ &= \Pr[0 \le X \le Y+z \mid Y \le z]\Pr[Y \le z] + \Pr[Y - z \le X \le Y + z \mid Y > z]\Pr[Y > z], \end{align*}$$ where we conditioned the probability on the event $Y > z$. This gives us the aforementioned sum of double integrals. Next, we consider the following idea: let $$f_{X,Y}(x,y) = f_X(x) f_Y(y) = \lambda^2 e^{-\lambda (x+y)}, \quad x, y \ge 0$$ be the joint density. We want to compute $F_Z(z) = \Pr[|X - Y| \le z]$. To do this, we want to integrate the joint density over the region for which $|X - Y| \le z$, when $X, Y \ge 0$; i.e., when $(X,Y)$ is in the first quadrant of the $(X,Y)$ coordinate plane. Notably, when $Y = X$, then $|X-Y| = 0$, and as the distance away from this line increases, $|X-Y|$ increases. 
So this region comprises a "strip" of width $\sqrt{2} z$ centered over $Y = X$ in the first quadrant. We can also see this by simply sketching the region bounded by the lines $X \ge 0$, $Y \ge 0$, $Y \le X+z$, $Y \ge X-z$. But because this region is symmetric about $Y = X$, and the joint density is also symmetric about this line, the integral can be written symmetrically: $$\int_{x=0}^\infty \int_{y=x}^{x+z} f_{X,Y}(x,y) \, dy \, dx + \int_{y=0}^\infty \int_{x=y}^{y+z} f_{X,Y}(x,y) \, dx \, dy,$$ and these two pieces are equal in value. This avoids the computation of two separate double integrals with different values. To really solidify the point, here are some diagrams of the regions of integration. The region described in the first setup looks like this: The blue region is the first integral, and the orange is the second (where I've only plotted up to $x, y \le 10$ since obviously a plot to $\infty$ is not possible), for the choice $z = 2$. This illustrates why we must split the region into two parts, because if we integrated $x$ on $[y-z, y+z]$ for the blue region, you'd be including a portion in the second quadrant that is not allowed. But this is not the only way to split up the region: This is the second approach I described. The blue region is the first integral, and the orange region is the second. This is possible because the order of integration of one integral is the reverse of the order in the other, whereas in the first setup, the order of integration is the same (horizontal strips). Here, we integrate the blue region in vertical strips, and the orange region in horizontal strips. And because of the symmetry, we don't even have to compute both--the contribution of the orange region is equal to the contribution of the blue.
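As a sanity check of the claim being proved (my addition, not part of the original answer): with $\lambda=1$, the empirical distribution of $Z=|X-Y|$ for simulated exponential pairs should match $P(Z\le z)=1-e^{-z}$, and its mean should be $1/\lambda$.

```python
import random, math

random.seed(0)
lam = 1.0
N = 200_000
zs = [abs(random.expovariate(lam) - random.expovariate(lam))
      for _ in range(N)]

# if Z = |X - Y| is Exponential(lam), then P(Z <= z) = 1 - exp(-lam * z)
for z in (0.5, 1.0, 2.0):
    emp = sum(v <= z for v in zs) / N
    print(z, emp, 1 - math.exp(-lam * z))   # empirical vs exact CDF

print(sum(zs) / N)   # close to 1/lam = 1
```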
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Using Uniqueness Result for Analytic Functions I am reviewing for an Analysis qual and stumbled upon this question. In particular, I am having difficulties with part (ii). My attempt is the following: Using the hint, let $\Omega = \mathbb{C}$, $S=\{1/n : n\in \mathbb{N}\}$, and $g(z)=z^2$. We have that since $S \subset \Omega$ and both $f$ and $g$ are entire, then $f$ and $g$ are analytic on $S$. Per the uniqueness result, if $g(z)=f(z)$ for all $z\in S$, we know that since $0$ is a limit point of $S$ that is in $\Omega$, then it must be the case that $g(z)=f(z)$ for all $z\in \Omega$. However, we are given that $|f(i)| =2$, yet $|g(i)| = 1$. So in this case, just because $|g(z)| = |f(z)|$ for all $z\in S$, we don't have $g(z)=f(z)$. My strategy is then to find different functions $g$ such that $|g(z)| = |f(z)|$ for all $z\in S$ and $|g(i)|=2$. After finding all these different $g's$, I should have all the possible values of $|f(-i)|$ by just calculating $|g(-i)|$. However, I'm having trouble finding even a single function $g$ that satisfies these two conditions, much less finding all of them. Is there some systematic way I can go about finding these different $g$ functions?
First, if $f$ is any entire function, $\overline{f(\bar{z})}$ is always entire, because $\overline{\sum_{n=0}^\infty a_n \bar{z}^n} = \sum_{n=0}^\infty \bar{a_n}z^n,$ which converges exactly when $\sum_{n=0}^\infty a_n z^n$ does. As $f$ is holomorphic at $0$ and not identically $0$, there is a unique integer $n \geq 0$ so $\lim_{z \to 0} \frac{f(z)}{z^n}$ is a nonzero complex number ($n$ is the order of the zero of $f$ at $0$). Our hypotheses show that $f$ has a zero of degree $2$ at $0$. So we can express $f(z) = z^2 h(z)$ for some entire function $h$, and our hypotheses show that $|h(z)| = 1$ for $z = 1/n, n \geq 1$. In particular, $|h(0)| = 1$ by continuity. So there's a neighborhood of $0$ on which $h(z)$ is nonzero. Now put $g(z) = \frac{1}{\overline{h(\bar{z})}}$, which is holomorphic on a neighborhood of $0$, by part i. We observe that for $1/n$, $n\geq 1$, $$\frac{h(z)}{g(z)} = h(z) \overline{h(z)} = |h(z)|^2 = 1.$$ So by the identity principle, $g(z)$ and $h$ agree in a neighborhood of $0$, and we discover that in fact on all $z \in \mathbb{C}$, $h(z) = \frac{1}{\overline{h(\bar{z})}},$ or equivalently that $h(\bar{z}) = \frac{1}{\overline{h(z)}}$. For $|z| = 1$, $|f(z)| = |h(z)|$ and we see that for $|z| = 1$, $|f(\bar{z})| = \frac{1}{|f(z)|}$. So we conclude $|f(-i)| = 1/|f(i)| = 1/2.$
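To see the conclusion in action, here is one concrete function satisfying all the constraints (my own example, not from the answer): $f(z) = z^2 e^{-i(\ln 2)z}$. On the real axis $|e^{-i(\ln 2)x}| = 1$, so $|f(1/n)| = 1/n^2$, while $|f(i)| = e^{\ln 2} = 2$ and $|f(-i)| = e^{-\ln 2} = 1/2$:

```python
import cmath, math

def f(z):
    # hypothetical example: f(z) = z^2 * exp(-i * ln(2) * z); the factor
    # h(z) = exp(-i ln2 z) has modulus 1 on the real axis
    return z * z * cmath.exp(-1j * math.log(2) * z)

vals = [abs(f(1 / n)) * n * n for n in (1, 2, 5, 10)]
print(vals)            # each 1.0, i.e. |f(1/n)| = 1/n^2
print(abs(f(1j)))      # 2.0
print(abs(f(-1j)))     # 0.5, i.e. 1/|f(i)|
```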
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that if $n\times n$ matrix $A$ is nonsingular, then it is invertible. I know that nonsingularity means having unique soln. $x$ for every RHS b in the system $Ax= b$. But how do I use this definition to come up with a proof for the problem ? I really have tried but to no avail.
Recall that $$\mathbf{M}^{-1}=\frac{1}{\det(\mathbf{M})}\operatorname{adj}(\mathbf{M})$$ Simply show that $\operatorname{adj}(\mathbf{M})$ always exists (this is not difficult), and thus $\mathbf{M}^{-1}$ always exists if $\det(\mathbf{M}) \neq 0$.
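In the $2\times 2$ case the adjugate is completely explicit, which makes the existence claim concrete; a small numeric sketch (my addition, restricted to $2\times 2$ for brevity):

```python
def adj2(M):
    # adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]];
    # it exists for every matrix, singular or not
    (a, b), (c, d) = M
    return [[d, -b], [-c, a]]

def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

M = [[-8.0, 1.0], [1.0, 5.0]]
dM = det2(M)                       # -41.0, nonzero, so M is invertible
A = adj2(M)
Minv = [[A[i][j] / dM for j in range(2)] for i in range(2)]

# check that M * Minv is the identity
prod = [[sum(M[i][k] * Minv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod)   # identity up to rounding
```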
{ "language": "en", "url": "https://math.stackexchange.com/questions/3722868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
If $f\in L^1(\Bbb R)$ is continuous then $f\in C_0(\Bbb R)$ Suppose $f \in L^1(\Bbb R)$ is continuous. Then is it necessarily true that $f\in C_0(\Bbb R)$? It is easily seen to be true if $f$ is uniformly continuous, but I can't see whether it is true if $f$ is merely continuous. Thanks in advance.
You can even have continuous unbounded real maps that are Lebesgue integrable. See here for an example.
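Since the linked example is not reproduced here, a standard construction of this kind (my own sketch, not necessarily the one linked): put a triangular spike of height $n$ and base width $1/n^3$ at each integer $n\ge 1$. The function is continuous and unbounded, yet each spike contributes area $\frac{1}{2n^2}$, so the total integral is $\sum_n \frac{1}{2n^2} = \pi^2/12 < \infty$.

```python
def f(x):
    # triangular spike of height n and base 1/n^3 centered at each
    # integer n >= 1; zero elsewhere
    n = round(x)
    if n < 1:
        return 0.0
    w = 1.0 / (2 * n ** 3)           # half-width of the spike at n
    d = abs(x - n)
    return n * (1 - d / w) if d < w else 0.0

print(f(7))        # 7.0 : unbounded along the integers
print(f(7.4))      # 0.0 : vanishes between spikes

# each spike has area n * (1/n^3) / 2 = 1/(2 n^2), so the integral of f
# over [0, inf) is sum 1/(2 n^2) = pi^2/12, which is finite
total = sum(1.0 / (2 * n ** 2) for n in range(1, 10 ** 6))
print(total)       # ~ pi^2 / 12 ~ 0.8225
```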
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Complex polarization identity proof getting stuck towards the end w.r.t. the imaginary part I'm working on a homework problem regarding the proof for the polarization identity for complex scalars. I've taken a look at another question on this community (Polarization Identity for Complex Scalars) and have tried working it out on my own, but am getting stuck towards the end, particularly towards dealing with the imaginary part. I'll elaborate on my approach. Starting with the definition: $$ \langle x, y \rangle = \frac{1}{4} \left( \Vert x + y \Vert^2 - \Vert x - y \Vert^2 - i\Vert x - iy \Vert^2 + i \Vert x + iy \Vert^2 \right) $$ Focusing only on the imaginary part (i.e. $i \Vert x + iy \Vert^2 - i \Vert x - iy \Vert^2$): $$ \begin{align} i \Vert x + iy \Vert^2 - i \Vert x - iy \Vert^2 & = i \left( \Vert x + iy \Vert^2 - \Vert x - iy \Vert^2 \right) \\ & = i \left( \langle x + iy, x + iy \rangle - \langle x -iy, x- iy \rangle \right) \\ & = i \left[ (\langle x, x \rangle + \langle x, iy \rangle + \langle iy, x \rangle + \langle iy, iy \rangle ) - ( \langle x, x \rangle + \langle x, -iy \rangle + \langle -iy, x \rangle + \langle -iy, -iy \rangle ) \right] \\ & = 2i \left( \langle x, iy \rangle + \langle iy, x \rangle \right) \end{align} $$ Using $\langle x, iy \rangle = -i \langle x, y \rangle$ and $\langle iy, x \rangle = \overline{\langle x, iy \rangle} = \overline{-i\langle x, y \rangle} = i\overline{\langle x, y \rangle}$, $$ \begin{align} 2i\left( \langle x, iy \rangle + \langle iy, x \rangle \right) & = 2i \left( -i\langle x, y \rangle + i \overline{\langle x, y \rangle} \right) \\ & = 2 \langle x, y \rangle - 2\overline{\langle x, y \rangle} \end{align} $$ I'm not sure how to proceed from here. I believe that I should end up with something like $-2(-2 \mathfrak{I} \langle x, y \rangle )$ but how does the last line become expressed as this? Thanks.
Hint: For $z \in \mathbb{C},$ $$z-\overline{z}=2i\,\Im{z},$$ where $\Im z$ denotes the imaginary part of $z.$ Btw, once you have shown that $$ \Vert x + y \Vert^2 - \Vert x - y \Vert^2=4\Re \langle x,y\rangle,$$ then replacing $y$ by $iy:$ $$ \Vert x + iy \Vert^2 - \Vert x - iy \Vert^2=4\Re\langle x,iy\rangle=4\Im \langle x,y\rangle.$$
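A numeric sanity check of the full identity (my addition), using the convention that the inner product is linear in the first argument and conjugate-linear in the second, as in the question:

```python
import random

random.seed(1)

def ip(x, y):
    # <x, y> = sum x_k * conj(y_k): linear in x, conjugate-linear in y
    return sum(a * b.conjugate() for a, b in zip(x, y))

def nsq(x):
    return ip(x, x).real

def polarization(x, y):
    return (nsq([a + b for a, b in zip(x, y)])
            - nsq([a - b for a, b in zip(x, y)])
            - 1j * nsq([a - 1j * b for a, b in zip(x, y)])
            + 1j * nsq([a + 1j * b for a, b in zip(x, y)])) / 4

x = [complex(random.random(), random.random()) for _ in range(3)]
y = [complex(random.random(), random.random()) for _ in range(3)]
print(abs(polarization(x, y) - ip(x, y)))   # ~ 0
```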
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it possible to define the notion of a Submanifold of Euclidean space through properties of its tangent cone? The tangent cone to a set $\mathcal S \subset\Bbb R^n$ at a point $x \in \Bbb R^n$ is the set of all vectors $w \in \Bbb R^n$ for which there exists sequences $x_i \in \mathcal S$ and $\tau_i> 0$, with $x_i\to x$ and $\tau_i\searrow 0 $ such that $w = \lim\limits_{i \to \infty} \frac{x_i - x}{\tau_i}$. It is well known that if $\mathcal M\subset\Bbb R^n$ is a differentiable embedded submanifold of dimension $k \le n$, then the tangent cone at every point is a $k$-dimensional vector subspace of $\Bbb R^n$ and amounts to the concept of a tangent space. My question is if $k$-dimensional differentiable embedded submanifolds of $\Bbb R^n$ could be defined as subsets of $\Bbb R^n$ for which the tangent cone at every point is a $k$-dimensional vector subspace of $\Bbb R^n$?
I don't think so The subsets of $\mathbb{R}^n$ with that property seem to include much, much more than submanifolds. As a counterexample in $\mathbb{R}^2$, consider the union of a circle and one of its tangent lines, which intersect at a point $p$. Since these constructions are local, there is no issue at any point other than $p$. Let $t$ be one of the unit tangents of the circle/line at $p$, and $w\neq 0$ be an element of the tangent cone. There must be sequences $x_i$, $\tau_i$ such that $\lim\frac{x_i-p}{\tau_i}=w$, so eliminating terms as necessary and using the fact that proportional sequences have proportional limits, we also have $w\propto\lim\frac{x_i-p}{\|x_i-p\|}=\pm t$. As another counterexample which fails to be a submanifold even locally, consider $\mathbb{Q}^m\subset\mathbb{R}^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating matrix equation A $2x2$ matrix $M$ satisfies the conditions $$M\begin{bmatrix} -8 \\ 1 \end{bmatrix} = \begin{bmatrix} 3 \\ 8 \end{bmatrix}$$ and $$M\begin{bmatrix} 1 \\ 5 \end{bmatrix} = \begin{bmatrix} -8 \\ 7 \end{bmatrix}.$$ Evaluate $$M\begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$ What is the question essentially asking? Isn't this just a matrix equation?
We know that $$M\begin{pmatrix} -8 & 1 \\ 1 & 5 \end{pmatrix} = \begin{pmatrix} 3 &-8\\ 8 & 7 \end{pmatrix}.$$ Multiply that equation from right by $$ \begin{pmatrix} -8 & 1 \\ 1 & 5 \end{pmatrix}^{-1}=\frac{1}{-41}\begin{pmatrix} 5 & -1 \\ -1 & -8 \end{pmatrix} $$ to get $M$.
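Carrying the hint through with exact rational arithmetic (my own check): one finds $M\begin{bmatrix}1\\1\end{bmatrix} = \frac{1}{41}\begin{bmatrix}-84\\31\end{bmatrix}$.

```python
from fractions import Fraction as F

A = [[F(-8), F(1)], [F(1), F(5)]]     # columns are the given input vectors
B = [[F(3), F(-8)], [F(8), F(7)]]     # columns are the given outputs

detA = A[0][0] * A[1][1] - A[0][1] * A[1][0]          # -41
Ainv = [[ A[1][1] / detA, -A[0][1] / detA],
        [-A[1][0] / detA,  A[0][0] / detA]]

# M = B * A^{-1}
M = [[sum(B[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]

result = [M[0][0] + M[0][1], M[1][0] + M[1][1]]       # M * (1, 1)^T
print(result)   # [Fraction(-84, 41), Fraction(31, 41)]
```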
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What are the definable subsets of $(\mathbb{R}, +)$? Consider the structure $(\mathbb{R},+)$. What are the subsets of that structure that are definable without parameters? I conjecture there are only four, namely $\emptyset$, $\mathbb{R}$, $\{0\}$, and $\mathbb{R} - \{0\}$. Is this correct?
Turning the comment thread above into an answer: Most of the time it's easier to analyze the orbit relation of a structure than its definable sets per se. This is the relation $a\sim b$ iff there is an automorphism sending $a$ to $b$. Since automorphisms preserve $\models$, we know that every definable set is closed with respect to $\sim$ - or, more concretely, every definable set is a union of $\sim$-classes. For example, in the case above we have $a\sim b$ iff both $a$ and $b$ are nonzero or both $a$ and $b$ are zero, so there are two $\sim$-classes ($\{0\}$ and $\mathbb{R}\setminus\{0\}$). Consequently there are only $2^2=4$ possible candidates for definable sets and it's easy to check that each of these four is in fact definable. Now in general this won't always be enough to fully classify the definable subsets of a structure.. For example, the structure $(\mathbb{N};<)$ is rigid - it has no nontrivial automorphisms - but it can only have countably many (parameter-freely-)definable subsets. This is where more intricate techniques such as quantifier elimination come in. But for the specific problem above, looking at automorphisms is sufficient.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"infinitely oscillating" manifold Find out whether $M:=\{\left(x\cdot\cos\left(\frac{1}{x}\right),\,x\cdot\sin\left(\frac{1}{x}\right)\right)|\,x>0\}$ is a submanifold of $\mathbb{R}^2$ or not. My guess is that it's not since I've tried to construct a homeomorphism from $\mathbb{R}$ to $M$ $\left(x\mapsto \left(x\cdot\cos\left(\frac{1}{x}\right),\,x\cdot\sin\left(\frac{1}{x}\right)\right)\right)$ and failed in proving bijectivity. I then tried to construct multiple charts from $\mathbb{R}$ and $\mathbb{R}^2$ but it hasn't worked out so far because of the way $M$ looks near the origin. Thank you very much in advance.
It is a submanifold. Consider the map $f : (0,\infty) \to \mathbb R^2, f(x) = (x\cos(\frac{1}{x}),x\sin(\frac{1}{x}))$. We have $M = f((0,\infty))$. The map $f$ is injective since $\lVert f(x) \rVert = \sqrt{x^2\cos^2(\frac{1}{x}) + x^2\sin^2(\frac{1}{x})} = \sqrt{x^2} = x$ which implies $f(x_1) \ne f(x_2)$ for $x_1 \ne x_2$. Thus $\tilde f : (0,\infty) \stackrel{f}{\to} M$ is a bijection. Moreover, $g = \lVert - \rVert_M : M \to (0,\infty)$ is continuous and we have $g \circ \tilde f = id$, therefore $\tilde f$ is a homeomorphism. We have $f'(x) = (\cos(\frac{1}{x}) + \frac{1}{x}\sin(\frac{1}{x}), \sin(\frac{1}{x}) - \frac{1}{x}\cos(\frac{1}{x}))$ and we get $$\lVert f'(x) \rVert = \sqrt{\cos^2(\frac{1}{x}) + \frac{2}{x}\sin(\frac{1}{x})\cos(\frac{1}{x}) + \frac{1}{x^2}\sin^2(\frac{1}{x}) + \sin^2(\frac{1}{x}) - \frac{2}{x}\sin(\frac{1}{x})\cos(\frac{1}{x}) + \frac{1}{x^2}\cos^2(\frac{1}{x})} \\ = \sqrt{1 + \frac{1}{x^2}} \ne 0 .$$ Thus $f'(x) \ne 0$ so that $f$ is an immersion. Hence $f$ is a smooth embedding and $M$ is a smooth submanifold of $\mathbb R^2$.
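Both key computations are easy to check numerically (my addition): $\lVert f(x)\rVert = x$ and $\lVert f'(x)\rVert = \sqrt{1+1/x^2}$, the latter via a central finite difference.

```python
import math

def f(x):
    return (x * math.cos(1 / x), x * math.sin(1 / x))

def norm(v):
    return math.hypot(v[0], v[1])

h = 1e-6
checks = []
for x in (0.3, 1.0, 2.5):
    # central finite difference for f'(x)
    fp = ((f(x + h)[0] - f(x - h)[0]) / (2 * h),
          (f(x + h)[1] - f(x - h)[1]) / (2 * h))
    checks.append((norm(f(x)) - x,                         # should be ~0
                   norm(fp) - math.sqrt(1 + 1 / x ** 2)))  # should be ~0
print(checks)
```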
{ "language": "en", "url": "https://math.stackexchange.com/questions/3723891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding volume of the region enclosed by $x^2+y^2-6x=0, z = \sqrt{36-x^2-y^2}$ and $z=0$ I'm trying to calculate the volume of the region enclosed by the cylinder $x^2+y^2-6x=0$, the semicircle $z = \sqrt{36-x^2-y^2}$ and the plane $z=0$. I tried moving the region towards -x so that the cylinder has it's center at zero. I ended up using these equations and proposing this integral using cylindrical coordinates: New cylinder --> $x^2+y^2=9$ New semicircle --> $z = \sqrt{36-(x+3)^2-y^2}$ $$\int_{0}^{2\pi}\int_{0}^{3}\int_{0}^{\sqrt{36-(rcos(\theta)+3)^2-(rsin(\theta))^2}} r\cdot dzdrd\theta$$ For some reason I can't solve it this way and I don't know what am I doing wrong.
If you take cylindrical coordinates $x = r \cos \theta, y=r \sin \theta, z=z$ with Jacobian $J=r$, then the base disk $x^2+y^2\le 6x$ is described by $0\le r\le 6\cos\theta$ with $-\pi/2\le\theta\le\pi/2$ (for other $\theta$ the bound $6\cos\theta$ is negative), so the volume is $$\int_{-\pi/2}^{\pi/2}\int_{0}^{6 \cos \theta}\int_{0}^{\sqrt{36-r^2}}r\,dz\,dr\,d\theta$$
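With the polar description of the disk ($0\le r\le 6\cos\theta$, $-\pi/2\le\theta\le\pi/2$), the $z$- and $r$-integrals give $\int_{-\pi/2}^{\pi/2} 72(1-|\sin\theta|^3)\,d\theta = 72\pi - 96 \approx 130.19$. A crude midpoint-rule check of the remaining double integral (my addition):

```python
import math

# midpoint-rule evaluation of
#   V = int_{-pi/2}^{pi/2} int_0^{6 cos t} r * sqrt(36 - r^2) dr dt
nt, nr = 1000, 1000
V = 0.0
for i in range(nt):
    t = -math.pi / 2 + (i + 0.5) * math.pi / nt
    R = 6 * math.cos(t)
    for j in range(nr):
        r = (j + 0.5) * R / nr
        V += r * math.sqrt(36 - r * r) * (R / nr) * (math.pi / nt)

print(V, 72 * math.pi - 96)   # both ~ 130.19
```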
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
AHSME Challenge Math Question Cowboy A cowboy is $4$ miles south of a stream which flows due east. He is also $8$ miles west and $7$ miles north of his cabin. He wishes to water his horse at the stream and return home. The shortest distance (in miles) he can travel and accomplish this is: a. $4 + \sqrt{185}$ b. $16 $ c. $17 $ d. $18 $ e. $\sqrt{32} + \sqrt{137}$ The correct answer is C, however I did not get that. I got my incorrect answer (A) by adding $7$ and $4$. I used the Pythagorean Theorem to find the hypotenuse length of an $8\times 11$ triangle which is $\sqrt{185}$. Could you please explain how to get it? Thank you.
Your answer proposes the following trajectory for the cowboy: But that's not optimal: it's better for the cowboy to be moving east the whole time, not just after getting water. To see the optimal path, imagine the cowboy's evil twin, who is 8 miles further north of the cowboy: his position is the reflection of the cowboy's position through the edge of the stream. Also, the evil twin can fly. Whatever path the cowboy takes, the evil twin can imitate, by first following the reflection of that path to the bank of the river, then following the cowboy back to his cabin. Conversely, however the evil twin gets to the cabin, the cowboy can imitate the evil twin's path, reflecting its first segment. The evil twin's best path is the straight line from his location to the cabin. So the cowboy's best path is the red trajectory shown for him in the image above. Its length is $\sqrt{8^2 + (4 + 4 + 7)^2} = 17$.
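The reflection argument can be double-checked by brute force (my addition): place the cowboy at the origin, the stream along $y=4$, the cabin at $(8,-7)$, and minimize the total distance over the point $(t,4)$ where he touches the stream.

```python
import math

# cowboy at (0, 0); stream is the line y = 4; cabin at (8, -7)
def trip(t):
    # total distance if he touches the stream at (t, 4)
    return math.hypot(t, 4) + math.hypot(8 - t, 4 + 7)

best = min(trip(i / 1000) for i in range(0, 8001))
print(best)                      # ~ 17
print(math.hypot(8, 4 + 4 + 7))  # 17.0, the reflected straight line
```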
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Calculating gradient, how to do this? Given $f(x,y)$, a function that has continuous partial derivatives at every point, such that $\nabla f(0,-18)=-2i+3j$ We define a new function $g(x,y)=f(xy+x^2,xy-y^2)$ calculate $\nabla g(3,-3)$ How I tried to solve this: I need to find: $$\nabla g(3,-3) = g_x'(3,-3)i+g_y'(3,-3)j=f(xy+x^2,xy-y^2)_x'(3,-3)i+f(xy+x^2,xy-y^2)_y'(3,-3)j$$ and I got stuck here; I don't have $f$ to calculate the partial derivatives for it...
We can make use of the multivariable chain rule (see this answer) which states that given a function $h: \mathbb{R}^n \to \mathbb{R}^m$ and a function $f: \mathbb{R}^m \to \mathbb{R}$ we have $$ \nabla (f \circ h) = \left(h'\right)^T \cdot \left(\nabla f \circ h\right) $$ where "$\cdot$" represents matrix multiplication. In our case, we see that if we define $h(x,y) = (xy+x^2,xy-y^2)$ then it follows from the definition of $g$ that $$g(x,y) = (f \circ h)(x,y)= f(xy+x^2,xy-y^2) \tag{1}$$ which means that calculating $\nabla g(3,-3)$ is equal to calculating $\nabla (f \circ h)(3,-3) $. Now, by definition, we know that $$ h' = \begin{pmatrix} \frac{\partial}{\partial x}(xy+x^2) & \frac{\partial}{\partial y}(xy+x^2)\\ \frac{\partial}{\partial x}(xy-y^2) & \frac{\partial}{\partial y}(xy-y^2) \end{pmatrix} = \begin{pmatrix} y+2x & x\\ y & x-2y \end{pmatrix} $$ which means that the transposed matrix $\left(h'\right)^T$ is $\begin{pmatrix} y+2x & y\\ x & x-2y \end{pmatrix} $. If we then evaluate this matrix at $(3,-3)$ we get $$ \left(h'\right)^T(3,-3) = \begin{pmatrix} -3+2(3) & -3\\ 3 & 3-2(-3) \end{pmatrix} = \begin{pmatrix} -3+6 & -3\\ 3 & 3+6 \end{pmatrix} = \begin{pmatrix} 3 & -3\\ 3 & 9 \end{pmatrix} $$ On the other hand, we see that $$ h(3,-3) = (3(-3)+3^2, 3(-3) -(-3)^2) = (-9+9, -9-9) = (0,-18) $$ which tells us that $$ \left(\nabla f \circ h\right)(3,-3) = \nabla f\left(h(3,-3)\right) = \nabla f\left(0,-18\right) = \begin{pmatrix} -2 \\3 \end{pmatrix} $$ using the convention of the gradient as a column vector. 
Finally, putting all this together tells us that $$ \nabla g(3,-3) = \nabla (f \circ h)(3,-3) = \left[\left(h'\right)^T(3,-3)\right] \cdot \left[\left(\nabla f \circ h\right)(3,-3)\right] = \begin{pmatrix} 3 & -3\\ 3 & 9 \end{pmatrix} \cdot \begin{pmatrix} -2 \\3 \end{pmatrix} = \begin{pmatrix} -2(3)+3(-3) \\-2(3)+3(9) \end{pmatrix}= \begin{pmatrix} -6-9 \\-6+27 \end{pmatrix}= \begin{pmatrix} -15 \\21 \end{pmatrix} = -15 \boldsymbol{\hat\imath} + 21 \boldsymbol{\hat\jmath} $$
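Since the chain rule shows the answer depends on $f$ only through $\nabla f(0,-18)$, we can test with the simplest such $f$ (my own check): $f(u,v) = -2u+3v$, whose gradient is $(-2,3)$ everywhere. A finite-difference gradient of the resulting $g$ should then match $(-15, 21)$:

```python
def f(u, v):
    # any f with grad f(0, -18) = (-2, 3) works; this one has that
    # gradient at every point
    return -2 * u + 3 * v

def g(x, y):
    return f(x * y + x * x, x * y - y * y)

h = 1e-6
gx = (g(3 + h, -3) - g(3 - h, -3)) / (2 * h)
gy = (g(3, -3 + h) - g(3, -3 - h)) / (2 * h)
print(gx, gy)   # approximately -15 and 21
```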
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Power series approximation for $\ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)})$ to calculate $ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $ Problem Approximate $f(x) = \ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)})$ and then calculate $ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $ My attempt Let $$f(x) = \ln((1+x)^{(1+x)}) + \ln((1-x)^{(1-x)}) \iff $$ $$f(x) = (1+x)\ln(1+x) + (1-x)\ln(1-x) \quad $$ We know that the basic Taylor series for $\ln(1+x)$ is $$ \ln(1+x) = \sum_{n=0}^\infty (-1)^n \frac{x^{n+1}}{n+1} \quad (1)$$ As far as $\ln(1-x)$ is concerned $$y(x) = \ln(1-x) \iff y'(x) = \frac{-1}{1-x} = - \sum_{n=0}^\infty x^n \text{ (geometric series)} \iff$$ $$y(x) = \int -\sum_{n=0}^\infty x^n = - \sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \quad (2)$$ Therefore from $f(x), (1), (2)$ we have: $$ f(x) = (1+x)\sum_{n=0}^\infty (-1)^n \frac{x^{n+1}}{n+1} - (1-x)\sum_{n=0}^\infty \frac{x^{n+1}}{n+1} \iff$$ $$ f(x) = \sum_{n=0}^\infty \frac{2x^{n+2} + (-1)^n x^{n+1} - x^{n+1} }{n+1} $$ Why I hesitate It all makes sense to me up to this point. But the exercise has a follow up sub-question that requires to find: $$ \sum_{n=1}^\infty \frac{1}{n(2n+1)} $$ I am pretty sure that this sum is somehow connected with the previous power series that we've found, but I can't find a way to calculate it, so I assume that I have made a mistake. Any ideas?
$$ f(x) = \sum_{n=0}^\infty \frac{2x^{n+2} + (-1)^n x^{n+1} - x^{n+1} }{n+1} $$ Supposing the above is right. We want to change the $n+2$'s to $n+1$'s. To do this, write, by letting $m+1 = n+2$, $$ \sum_{n=0}^\infty \frac{2x^{n+2}}{n+1} = \sum_{m=1}^\infty \frac{2x^{m+1}}{m} = \sum_{n=1}^\infty \frac{2x^{n+1}}{n},$$ where, in the last step, we simply changed the dummy variable $m$ to $n$. I haven't read it very carefully, but sometimes you can get $2n+1$ in the denominator when you're only summing over odd integers.
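For the follow-up sum itself (this closed form is my addition): partial fractions give $\frac{1}{n(2n+1)} = 2\left(\frac{1}{2n}-\frac{1}{2n+1}\right)$, and comparing against the alternating harmonic series $\ln 2 = 1 - \frac12 + \frac13 - \cdots$ yields $\sum_{n\ge 1}\frac{1}{n(2n+1)} = 2-2\ln 2$. A numeric check:

```python
import math

# 1/(n(2n+1)) = 2 * (1/(2n) - 1/(2n+1)); matching terms of the
# alternating harmonic series gives the closed form 2 - 2*ln(2)
N = 10 ** 6
s = sum(1.0 / (n * (2 * n + 1)) for n in range(1, N + 1))
print(s, 2 - 2 * math.log(2))   # both ~ 0.613706
```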
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
General form for this problem I encountered this problem by Polya about counting the number of ways that a dollar can be changed. Suppose that there are pennies (worth 1), nickels (worth 5), dimes (worth 10), quarters (worth 25) and fiftycent coins (worth 50). The number of ways to change a dollar (worth 100) can be written as the following generating function: $$ D(z) = \sum_p z^p \sum_n z^{5n} \sum_d z^{10d} \sum_q z^{25q} \sum_f z^{50f} $$ where $D(z)$ is: $$ \frac{1}{(1-z)(1-z^5)(1-z^{10})(1-z^{25})(1-z^{50})} $$ I understand the generating function, but is there a general form to express its coefficients given any set of denominations? i.e. How to derive $[z^n]D(z)$, where: $$ D(z) = a_0z^0 + a_1z^1 + a_2z^2 + ... $$ and the coefficient $a_k$ of $z^k$ express the number of ways that an amount worth $k$ can be arrived at given denominations $\{d_1,d_2,d_3,...,d_n\}$, i.e. in the example above, we have $n=5$ and $d_1=1,d_2=5,d_3=10,d_4=25,d_5=50$. EDIT: It looks like a general form for this problem is hard to compute ... (the problem hints that a computer simulation may be needed) ... However, it looks like the coefficient $[z^N]D(z)$ is asymptotic to the following formula, where $N$ represents the denomination of the bill, i.e. if it is a dollar, we have $N=100$ $$ \frac{N^{t-1}}{d_1 d_2\cdots d_t\,(t-1)!} $$ Is there an explanation why $[z^N]D(z)$ has this asymptotic form?
If you have a closed formula for your generating function $D(z)$, all you have to do to obtain its coefficients is differentiate several times and evaluate at zero. This works since if $$ D(z) = a_0 + a_1 z + a_2 z^2 + ...$$ then we have that $$a_k = \frac{D^{(k)}(0)}{k!},$$ and this is easy enough to compute for finite $k$ with a bit of Maple/Mathematica/Sympy. If this is not the kind of closed formula you're after, you will have to look at the comments of Matti P and Somos to find a different closed expression in terms of finite sums.
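In practice the coefficients are also easy to extract with a simple dynamic program over the denominations, which amounts to multiplying out the factors of the generating function one at a time (a standard approach; the code is my sketch). For the dollar problem it reproduces the classical answer of 292 ways:

```python
def change_ways(amount, denominations):
    # dp[a] = number of ways to make amount a; processing one coin type
    # at a time multiplies in one factor 1/(1 - z^d) of D(z)
    dp = [1] + [0] * amount
    for d in denominations:
        for a in range(d, amount + 1):
            dp[a] += dp[a - d]
    return dp[amount]

print(change_ways(100, [1, 5, 10, 25, 50]))   # 292
```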
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that $ x\cdot\cos(x)+\sin(x)/2=\sum_{n=2}^\infty (-1)^n\cdot\frac{2n}{n^2-1}\cdot\sin(nx)$ when $x\in [-\pi,\pi]$ To show this, I have used definitions for $\cos(x)$ and $\sin(x)$: $$x\cdot \cos(x)+1/2\sin(x)=x\cdot \sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}\cdot x^{2n}+1/2\cdot \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}\cdot x^{2n+1}$$ However, i do not how I'm supposed to proceed from here. You've got any ideas? This comes from the fourier series of $x\cdot \cos(x)$. We have the fourier coefficients as: $$ c_{-1}=-\frac{i}4, c_0=0,c_1=\frac{i}4$$ and $$c_n=(-1)^{n-1} \cdot \frac{in}{n^2-1}$$ when $|n|\geq 2$
Hint: $$\dfrac{2n\sin nx}{(n+1)(n-1)}=\dfrac{\sin nx}{n-1}+\dfrac{\sin nx}{n+1}$$ Now $(-1)^n\dfrac{\sin nx}{n-1}$ is the imaginary part of $$\dfrac{(-1)^ne^{inx}}{n-1}=e^{ix}\cdot-\dfrac{(-e^{ix})^{n-1}}{n-1}$$ Now $$\sum_{n=2}^\infty-\dfrac{(-e^{ix})^{n-1}}{n-1}=\ln(1+e^{ix})=\ln (e^{ix/2})+\ln\left(2\cos\dfrac x2\right)=(2k\pi+\dfrac x2)i+\ln\left(2\cos\dfrac x2\right)$$ Put $k=0$ to find the principal value. Similarly for $$\dfrac{\sin nx}{n+1}$$ Finally use How to prove Euler's formula: $e^{i\varphi}=\cos(\varphi) +i\sin(\varphi)$?
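A numeric check of the expansion being proved (my addition): partial sums of the series approach $x\cos x + \frac{\sin x}{2}$ for $x$ inside $(-\pi,\pi)$, albeit slowly since the coefficients decay only like $1/n$.

```python
import math

def partial_sum(x, N):
    return sum((-1) ** n * 2 * n / (n * n - 1) * math.sin(n * x)
               for n in range(2, N + 1))

errs = []
for x in (0.5, 1.0, 2.0):
    target = x * math.cos(x) + math.sin(x) / 2
    errs.append(abs(partial_sum(x, 20000) - target))
print(errs)   # all small
```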
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
System of differential equations depending on parameter This is the first time I've gotten a question like this: How to solve the system $y'=3by+(1-2b)z$, $z'=by+z+e^{4x}$, where $b\in\mathbb{R}$? Any help is welcome.
Hint: Subtracting both differential equations gives us: $$(y-z)'-2b(y-z)=-e^{4x}$$ This DE can easily be solved. It's a first-order linear DE.
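As a sanity check of the reduction (my addition): for $b=1$, solving $w'-2w=-e^{4x}$ with $w=y-z$ gives $w(x)=(w_0+\tfrac12)e^{2x}-\tfrac12 e^{4x}$; integrating the original system with RK4 reproduces this.

```python
import math

b = 1.0
def rhs(x, y, z):
    return (3 * b * y + (1 - 2 * b) * z,
            b * y + z + math.exp(4 * x))

# classic fourth-order Runge-Kutta on the system, from x = 0 to x = 1
x, y, z = 0.0, 1.0, 0.0
h = 1e-4
for _ in range(10000):
    k1 = rhs(x, y, z)
    k2 = rhs(x + h / 2, y + h / 2 * k1[0], z + h / 2 * k1[1])
    k3 = rhs(x + h / 2, y + h / 2 * k2[0], z + h / 2 * k2[1])
    k4 = rhs(x + h, y + h * k3[0], z + h * k3[1])
    y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    z += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    x += h

w0 = 1.0                                    # y(0) - z(0)
w_exact = (w0 + 0.5) * math.exp(2 * x) - 0.5 * math.exp(4 * x)
print(y - z, w_exact)                       # agree to many digits
```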
{ "language": "en", "url": "https://math.stackexchange.com/questions/3724937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Am I misapplying the chain rule when differentiating $x^{5x+7}$ with respect to $x$? The problem I am attempting to solve is: \begin{align} y=x^{5x+7} \\ \text{Find $\frac{dy}{dx}$} \end{align} Here is my working so far: $$\begin{align} \text{let }u &= 5x+7 \\ \frac{dy}{dx}&=\frac{dy}{du} \cdot \frac{du}{dx} \\ \frac{dy}{du}&=ux^{u-1}=(5x+7)x^{5x+6} \\ \frac{du}{dx}&=5 \\ \therefore \frac{dy}{dx}&=(25x+35)x^{5x+6} \end{align}$$ (Additionally, I'm having trouble using the align environment in Mathjax. This question wasn't formatted very well. If someone could give me some help in the comments, then I would be very thankful.)
If $u= 5x+7$ and $y = x^{5x+7} = x^u$ then if you attempt to solve $\frac {dy}{dx}=\frac{dy}{du}\frac {du}{dx}$ then you must solve $\frac {dy}{du} = \frac {dx^u}{du}$ but $x$ is dependent on $u$ so you'd have to solve $\frac {dx^u}{du} = \frac {dx^u}{dx}\frac {dx}{du}$ but to solve $\frac {dx}{du}$ we have $x$ is dependent on $u$ and .... and we'd have an infinite loop. (By the way, if we treat $x$ as constant we'd have $\frac {dx^u}{du}=\ln(x)x^u$; not $\frac {dx^u}{dx} = ux^{u-1}$ which is what we'd have if we treated $u$ as a constant and differentiated with respect to $x$.) To do this properly we need to express $y =x^{5x+7}$ entirely in terms of $u$. So if $u = 5x +7$ then $x = \frac {u-7}5$ and so $y =x^{5x+7} = x^u = (\frac {u-7}5)^u$ which ..... just doesn't help. Better we do: let $y = x^{5x + 7} = e^{\ln x(5x+7)}$ then let $u = \ln x(5x+7)$ so $y=e^u$ and we can do $\frac {dy}{dx} = \frac {dy}{du}\frac {du}{dx}$. $\frac {dy}{du}=\frac{de^u}{du} = e^u= y$ was easy enough. And we can use the product rule to figure $\frac {du}{dx}$. Let $w= \ln x$ and $v = 5x+7$ then $\frac {du}{dx} =\frac {dv\cdot w}{dx} = \frac{dv}{dx} w + \frac {dw}{dx}v$. And $\frac {dv}{dx} = \frac {d(5x+7)}{dx}= 5$ and $\frac {dw}{dx}= \frac {d\ln x}{dx} = \frac 1x$ so $\frac {du}{dx} = (5\ln x + \frac 1x(5x + 7))=(5\ln x + 5 +\frac 7x)$ and so $\frac {dy}{dx}=\frac {dy}{du}\frac {du}{dx} = \frac {dy}{du}(\frac {dv}{dx}w + \frac {dw}{dx}v) = e^u(5\ln x + 5 +\frac 7x)= x^{5x+7}(5\ln x + 5 + \frac 7x)$.
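A finite-difference check (my addition) of the result of the logarithmic differentiation, $\frac{d}{dx}x^{5x+7} = x^{5x+7}\left(5\ln x + 5 + \frac{7}{x}\right)$ for $x>0$:

```python
import math

def f(x):
    # x^(5x+7) = exp((5x+7) * ln x) for x > 0
    return math.exp((5 * x + 7) * math.log(x))

def fprime(x):
    return f(x) * (5 * math.log(x) + 5 + 7 / x)

h = 1e-7
rel_errs = []
for x in (0.5, 1.0, 2.0):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    rel_errs.append(abs(numeric / fprime(x) - 1))
print(rel_errs)   # all tiny
```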
{ "language": "en", "url": "https://math.stackexchange.com/questions/3725023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Geometric approach to $\lim_{n\to\infty}\left(\frac{x_{n+1}}{x_n}\right)^n$ where $x_1=1$ and $x_{n+1}=\sqrt{1+x^2_n}$? I was solving a question : Let $x_1=1$ and $x_{n+1} = \sqrt{1+x^2_n } \ \ \forall \ \ n\in \mathbb{N}$ Then evaluate $$\lim_{n \to \infty} \left( \frac{x_{n+1}}{x_n} \right)^n$$ The way I did it was : $$x_1^2=1$$ $$x_2^2=2$$ $$x_3^2=3$$ $$x_4^2=4$$ $$x_{n+1}^2=(n+1)$$ $$\therefore \lim_{n \to \infty} \left( \frac{x_{n+1}^2}{x_n^2} \right)^\frac{n}{2}$$ $$A = \lim_{n \to \infty} \left( \frac{n+1}{n} \right)^{\frac{n}{2}}$$ $$\ln(A) = \lim_{n \to \infty} \frac{1}{2} \frac{\ln\left( 1+ \frac{1}{n} \right)}{\frac{1}{n}} $$ Applying L'Hopital's rule : $$\ln(A) = \frac{1}{2}$$ $$A = \sqrt{e}$$ But what I think is that this could also be solved using a geometric approach. I believe so because the recurrence relation given is similar to the Pythagorean theorem. $$x_{n+1}^2 = 1^2 + x_n^2$$ I tried to draw the diagram for something like this : However, all I could conclude was the indeterminate form that was appearing: the limit is of the $n$-th power of the secant of the base angle of the triangle ($(\sec \alpha)^n$) with $x_n$ as the base, i.e. the limit becomes : $$\lim_{n \to \infty} \sec^n (\alpha_n)$$ and I can see that as $n$ approaches $\infty$, $\alpha_n$ will keep decreasing until we can say $x_n \approx x_{n+1}$, so it is a $(1)^\infty$ form. So my question is : How would one prove the above question using geometry?
You can define as $\alpha_n$ the angle of the Pythagorean triplet $(1, x_n, x_{n+1})$ between the sides of length $x_n$ and $x_{n+1}$. Its value comes from $$ \tan\alpha_n = \frac{\sin\alpha_n}{\cos\alpha_n} = \frac{1/x_{n+1}}{x_n/x_{n+1}} \implies \alpha_n = \cot^{-1} x_n. $$ From your drawing, one can see (as you suggested) that $$ \frac{x_{n+1}}{x_n} = \sec\alpha_n = \sec\cot^{-1}x_n = \sqrt{\frac{x_n^2+1}{x_n^2}} = \sqrt{\frac{n+1}{n}}. $$ Therefore, $$ \boxed{\lim_{n \rightarrow +\infty} \left( \frac{x_{n+1}}{x_n} \right)^n = \lim_{n \rightarrow +\infty} \left( 1 + \frac{1}{n} \right)^{n/2} = \sqrt{e}.} $$
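If one wants to double-check the boxed value without any algebra, the recursion can be iterated directly; a small sketch (the function name and the choice $n = 10^5$ are arbitrary):

```python
import math

def ratio_power(n):
    # x_1 = 1, x_{k+1} = sqrt(1 + x_k^2); returns (x_{n+1} / x_n)^n
    x = 1.0                        # x_1
    for _ in range(n - 1):         # advance to x_n
        x = math.sqrt(1.0 + x * x)
    x_next = math.sqrt(1.0 + x * x)
    return (x_next / x) ** n
```

For $n = 10^5$ the result is already within about $10^{-5}$ of $\sqrt{e} \approx 1.64872$.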
{ "language": "en", "url": "https://math.stackexchange.com/questions/3725296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 1, "answer_id": 0 }
Proof of Lagrange's Mean Value Theorem? The mean value theorem for mappings says: Let $f(x,y)$ be differentiable in $D$ ($D$ open and connected). For every $p=(x_1,y_1), q=(x_2,y_2)$ there exists a point $s \in [p,q]$ such that: $$f(q)-f(p) = \nabla f(s)\cdot(q-p)= f_x(s)(x_2-x_1)+f_y(s)(y_2-y_1)$$ Note: the interval $[p,q]$ is the following set of points $\{(1-t)p+tq \mid 0 \le t \le 1\}$ How may I prove that theorem? (It goes beyond my current level of studies.) Any help is appreciated :-)
Assuming $[p,q]\subseteq D$ (see @LurchedSawyer's answer): Let $g(t)=f((1-t)p+tq)$. Then $$g'(t)=\nabla f((1-t)p+tq)\cdot(q-p)$$ by the chain rule. Using the usual one-dimensional mean value theorem you may find a value $u\in [0,1]$ such that $$g'(u)=g(1)-g(0)=f(q)-f(p)$$ Setting $$s=(1-u)p+uq$$ gives: $$f(q)-f(p)=g'(u)=\nabla f(s)\cdot(q-p)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3725463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Exterior derivative of a vector field The exterior derivative of a scalar function is $d f(x,y,z) = ( \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz )$ Am I correct in assuming then that $d\left( F_x(x,y,z) e_x + F_y(x,y,z) e_y + F_z(x,y,z) e_z \right)$ would be $\left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) dx \wedge dy + \left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dz \wedge dx + \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dy \wedge dz$
$$d(F_xdx + F_ydy + F_zdz) = dF_x \wedge dx + dF_y \wedge dy + dF_z \wedge dz$$ and use that $$dF = \frac{\partial F}{\partial x}dx + \frac{\partial F}{\partial y}dy + \frac{\partial F}{\partial z}dz$$ according to your first claim.
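Carrying out this recipe on one of the three terms shows explicitly where the curl components come from, using $dx\wedge dx=0$ and $dy\wedge dx=-dx\wedge dy$:

```latex
\begin{aligned}
dF_x \wedge dx
&= \left(\frac{\partial F_x}{\partial x}\,dx
      + \frac{\partial F_x}{\partial y}\,dy
      + \frac{\partial F_x}{\partial z}\,dz\right)\wedge dx \\
&= -\frac{\partial F_x}{\partial y}\,dx\wedge dy
   + \frac{\partial F_x}{\partial z}\,dz\wedge dx.
\end{aligned}
```

Collecting the analogous contributions from $dF_y\wedge dy$ and $dF_z\wedge dz$ reproduces exactly the three terms conjectured in the question.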
{ "language": "en", "url": "https://math.stackexchange.com/questions/3725638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $ \sum_{i=1}^{N} a_i \leq \sqrt{N \sum_{i=1}^{N}a_i^2} $ Prove that $ \sum_{i=1}^{N} a_i \leq \sqrt{N \sum_{i=1}^{N}a_i^2} $. Well, I chose $u=(1,\ldots,1)$ and $v=(a_1,\ldots,a_N)$ with the $a_i$ positive, and applied $u \cdot v \leq |u||v|$. With this I now want to prove that $\sum_{i,j}^{N}\frac{\partial u}{\partial x_i}\frac{\partial v}{\partial x_j}\leq N |\nabla u||\nabla v|$. I know that I am very close, but if somebody can help me, I will appreciate it. Thank you.
Recall the Cauchy-Schwarz inequality for sums: $$\left( \sum_1^N a_n b_n \right)^2 \le \left( \sum_1^N a_n^2 \right) \left( \sum_1^N b_n^2 \right)$$ Let $b_n = 1$. Then the inequality simplifies to: $$\left( \sum_1^N a_n \right)^2 \le \left( \sum_1^N a_n^2 \right) N$$ Taking the square root of each side gives the desired result.
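The $b_n = 1$ specialization is easy to check numerically; a throwaway sketch (sample size and seed are arbitrary):

```python
import math
import random

random.seed(0)
a = [random.uniform(-10.0, 10.0) for _ in range(50)]

lhs = sum(a)
rhs = math.sqrt(len(a) * sum(x * x for x in a))
# Cauchy-Schwarz with b_n = 1 guarantees lhs <= rhs (indeed |lhs| <= rhs)
```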
{ "language": "en", "url": "https://math.stackexchange.com/questions/3725762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate the residue of $\exp\left(\frac{z+1}{z-1}\right)$ at every point of $\mathbb{C}$ I have to calculate the residue of $\exp\left(\frac{z+1}{z-1}\right)$ at every point of $\mathbb{C}$. So I tried to compute the Laurent series expansion $\forall z_0 \in \mathbb{C}$. For $z_0 = 0$ we obtain that $f(z)=\sum_{k \geq 0}\frac{(z+1)^k}{(z-1)^k}$ but I don't understand what the coefficient $a_{-1}$ is. Thanks in advance.
The only singularity of $f$ is at $z=1$, but that's an essential singularity. I get $$f(z)=\sum_{n=0}^\infty\frac{(z+1)^n}{n!(z-1)^n}$$ but that's not a Laurent series as it stands. But also $$f(z)=\exp\left(1+\frac{2}{z-1}\right)=e\exp\left(\frac{2}{z-1}\right) =e\sum_{n=0}^\infty\frac{2^n}{n!(z-1)^n}.$$ Now that is a Laurent series at $z=1$, and the coefficient of $(z-1)^{-1}$ is $2e$.
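The value $2e$ can be corroborated numerically by evaluating $\frac{1}{2\pi i}\oint f(z)\,dz$ on a circle around $z=1$ with the trapezoid rule; a sketch (the function name, radius, and number of sample points are arbitrary choices):

```python
import cmath
import math

def residue_at_1(n=4096, r=1.0):
    # (1 / (2*pi*i)) * integral of exp((z+1)/(z-1)) over |z - 1| = r,
    # parametrized as z = 1 + r e^{it}, dz = i r e^{it} dt
    total = 0j
    for k in range(n):
        t = 2.0 * math.pi * k / n
        w = r * cmath.exp(1j * t)              # this is z - 1
        z = 1.0 + w
        total += cmath.exp((z + 1) / (z - 1)) * (1j * w)
    integral = total * (2.0 * math.pi / n)
    return integral / (2j * math.pi)
```

This returns approximately $5.43656$, i.e. $2e$, with negligible imaginary part (the trapezoid rule converges extremely fast for smooth periodic integrands).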
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a natural way to totally order the set of unlabeled binary trees on $n$ nodes? Let $C_n$ be the $n^{th}$ Catalan number. There are $C_n$ unlabeled binary trees having $n$ internal nodes. I want to totally order these trees in some (hopefully not too complicated) natural manner. Perhaps there is some "standard" way of doing this. When I look at pictures of, say the 14 binary trees on 4 nodes, they never seem to be listed in any particular order. Notice that what I want to order is the trees themselves not the vertices within each tree. I tried classifying by height and then by height of shortest branches from left to right but I'm not sure if what I have is correct. I think there must be some well known way to order these trees?
For $k = 0$ to $n - 1$ (or vice versa) enumerate all the possible left subtrees with $k$ nodes. For each possible left subtree with $k$ nodes, enumerate all the possible right subtrees with $n - k - 1$ nodes. Recursively, this gives for every positive integer $n$ a fairly natural way to totally order the set of all unlabeled binary trees with $n$ nodes, by referring to the same procedure to totally order the set of all trees with any given smaller number of nodes.
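This recursive order is straightforward to implement; a sketch representing the empty tree as `None` and an internal node as a `(left, right)` pair (all names are my own):

```python
def trees(n):
    # all unlabeled binary trees with n internal nodes, listed in the
    # order described: by left-subtree size k, then recursively by
    # left subtree, then by right subtree
    if n == 0:
        return [None]                      # the empty tree
    out = []
    for k in range(n):
        for left in trees(k):
            for right in trees(n - 1 - k):
                out.append((left, right))
    return out
```

The position in the returned list is then the total order, and the list lengths reproduce the Catalan numbers $1, 1, 2, 5, 14, \dots$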
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to prove that this function all over the positive integers gives us this sequence? On the one hand, I have this sequence : $0,1,1,2,2,2,3,3,3,3,...$ which is the sequence where each nonnegative integer $n$ appears $n+1$ times consecutively. On the other hand, I have this function : $a_n=\lfloor\frac{\sqrt {1+8n}-1}{2}\rfloor$ where $n\ge0$ $a_0=0$ ; $a_1=1$ ; $a_2=1$; $a_3=2$ This function seems to be a formula for this sequence. However, if it is the case, how can we prove it? And if it isn't, what is the explanation? To begin with, I did something : $\frac{\sqrt {1+8n}-1}{2}=t+b$ where $t\in \mathbb N$ and $b\in [0,1[$ After some simplification, I get this : $8n= 4t^2+4b^2+8tb+4t+4b$ After this, I don't know how to continue...
Idea: In the sequence, $a_n$ becomes $m$ when $n=\sum\limits_{i=0}^m i=\dfrac{m(m+1)}2$; i.e., $m^2+m-2n=0$. Solving this quadratic for $m$, we get $m=\dfrac{-1+\sqrt{1+8n}}2$.
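One can also confirm computationally that the closed form reproduces the sequence; a quick sketch using integer square root (valid here since $\lfloor(\sqrt{m}-1)/2\rfloor=\lfloor(\lfloor\sqrt{m}\rfloor-1)/2\rfloor$ for integer $m$):

```python
import math

def a(n):
    # floor((sqrt(1 + 8n) - 1) / 2), computed exactly in integers
    return (math.isqrt(1 + 8 * n) - 1) // 2

# the target sequence: each nonnegative integer m appears m + 1 times
expected = [m for m in range(50) for _ in range(m + 1)]
```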
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Maximal tamely ramified abelian extension of $\mathbb{Q}_p$ is finite over the maximal unramified extension $\mathbb{Q}_p^{nr}$? I came across a curious exercise in Neukirch's algebraic number theory book. Exercise 2 page 176 (Chapter II section 9) asks the following: Prove that the maximal tamely ramified abelian extension V of $\mathbb{Q}_p$ is finite over the maximal unramified extension $\mathbb{Q}_p^{nr}$ of $\mathbb{Q}_p$. I suspect that this is wrong because one can always consider the extensions $$ L_n = \mathbb{Q}_p^{nr} (p^{\frac{1}{p^n - 1}}) $$ $L_n$ is tamely ramified and has degree $p^n - 1$ since $X^{p^n - 1} - p$ is irreducible in $\mathbb{Q}_p$ (by the Eisenstein criterion). Am I missing something here? Here is why I think $L_n$ is an abelian extension. Call $K_n = Q_p^{nr}(p^{1/n})$ for $n$ prime to $p$. The extension $K_n/Q_p$ (I think) is abelian for all $n$ prime to $p$, and here is why: it suffices to show that $K_n/Q_p^{nr}$ is abelian (since $Q_p^{nr}/Q_p$ is already abelian). Any $Q_p^{nr}$-linear automorphism $\sigma$ of $K_n$ is determined by the image $\sigma(p^{1/n})$. The Galois group elements of $Gal(K_n/Q_p^{nr})$ are then the elements $\sigma_i$ with $$\sigma_i(p^{1/n}) = p^{1/n} \zeta_{n}^{i}$$ for $i = 0, 1, \dots, n-1$ and these commute since $\zeta_n$ (a primitive $n$-th root of unity) is in $Q_p^{nr}$. $$ \sigma_i \sigma_j(p^{1/n}) = \zeta_{n}^{i} \zeta_{n}^{j} p^{1/n}$$
$V$ is a tamely ramified extension of $Q_p$ and it contains $Q_p^{nr}=\bigcup_{p\ \nmid\ m} Q_p(\zeta_m)$ so $$V=\bigcup_j Q_p^{nr}(p^{1/n_j})$$ for some $p\nmid n_j$ with $n_j \mid n_{j+1}$. Since any subextension of an abelian extension is abelian, it suffices to find for which $n$ the extension $Q_p^{nr}(p^{1/n})/Q_p$ is abelian and tame, i.e. when $Q_p(p^{1/n})/Q_p$ is abelian and tame. This happens iff $n \mid p-1$. Whence $V=Q_p^{nr}(p^{1/(p-1)})$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Easy proof that $\{(x,y) \in \mathbb{R}^2 : y=\tan(x) \}$ is a closed set. I'd like to know whether there's an "easy" proof that $$A:= \{(x,y) \in \mathbb{R}^2 : y=\tan(x) \}$$ is a closed set. I've tried to prove that its complement is open, but given an $(x,y)$ such that $\tan(x) \neq y$, it's a bit of a grind to find (in the general case) an open set that contains $(x,y)$ and does not intersect $A$. Is there a simpler argument using basic Euclidean topology? Some of the comments here gave me an idea: If $U:=\{(x,y) \in \mathbb{R}^2: \cos(x) \neq 0\}$ and $f:U \to \mathbb{R}$ is such that $f(x,y)=y-\tan(x)$, then $A=f^{-1}(\{0\})$, therefore $A$ is closed in $U$. Does this imply that $A$ is closed in $\mathbb{R}^2$, though?
The graph of a continuous function $f: X \to Y$ is closed in $X \times Y$, assuming $Y$ is Hausdorff (you can google this result or, even better, prove it). The $\tan$ function is not continuous everywhere; however, it is continuous on each interval $(-\frac{\pi}{2}+ n \pi , \frac{\pi}{2}+ n \pi )$. Hence, you only need to check closedness on the slices that are not covered by those intervals, so only for $x =\frac{\pi}{2}+n \pi$: just find an open set that does not intersect the graph of $\tan$ for each point $(\frac{\pi}{2}+n \pi, y)$. This is much easier to do using basic facts about the $\tan$ function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Given $V \in \mathbb{R}^{n\times(n-r)}$, why $V^TAV = 0$ implies $\operatorname{rank}(A) \leq r$? I am doing a problem from convex optimization written by Stephen P Boyd. I am having trouble understanding the solution. The original problem statement and solution is as follow: 2.13 Conic hull of outer products. Consider the set of rank-k outer products, defined as $\left\{X X^{T} \mid X \in \mathbf{R}^{n \times k}, \ \textbf{rank} X=k\right\} .$ Describe its conic hull in simple terms. Solution. We have $X X^{T} \succeq 0$ and $\textbf{rank}\left(X X^{T}\right)=k .$ A positive combination of such matrices can have rank up to $n,$ but never less than $k .$ Indeed, Let $A$ and $B$ be positive semidefinite matrices of rank $k,$ with $\textbf{rank}(A+B)=r<k .$ Let $V \in \mathbf{R}^{n \times(n-r)}$ be a matrix with $\mathcal{R}(V)=\mathcal{N}(A+B),$ i.e. $$V^{T}(A+B) V=V^{T} A V+V^{T} B V=0$$ since $A, B \succeq 0,$ this means $$V^{T} A V=V^{T} B V=0$$ which implies that $\textbf{rank} A \leq r$ and $\textbf{rank} B \leq r .$ We conclude that $\textbf{rank}(A+B) \geq k$ for any $A, B$ such that $\textbf{rank}(A, B)=k$ and $A, B \succeq 0$. It follows that the conic hull of the set of rank- $k$ outer products is the set of positive semidefinite matrices of rank greater than or equal to $k,$ along with the zero matrix. In the solution above, there are two steps that I don't understand. * *Why $\mathcal{R}(V) = \mathcal{N}(A+B)$ implies $V^T(A+B)V = 0$? (The notation here, $\mathcal{R,N}$ means range and nullspace, respectively.) *Why $V^TAV = 0$ implies $\textbf{rank} A \leq r$?
If ${\cal R}(V) = \ker(A+B)$ then $(A+B)V x = 0$ for all $x$, hence $(A+B)V=0$. Hence it follows that $V^T(A+B)V = 0$. Note that if $A$ is symmetric positive semidefinite then using the spectral decomposition we can write $A = C^T C$ for some $C$. So, if $V^TAV = 0$ then $(CV)^T(CV) = 0$ and so $CV =0$ and so $C^TCV=AV = 0$. Also, note that the proof as you have shown only establishes that matrices in the conic hull have rank $\ge k$, but does not show that for any $r =k+1,...,n$ there is a conical combination that has rank $r$. It is not hard to demonstrate, but the above is not a complete proof. Pick $A\ge 0$ of rank $r \in \{k,...,n\}$ and suppose that $U$ is an orthogonal matrix such that $U^TAU = \Lambda = \operatorname{diag} \{\lambda_1,...,\lambda_r,0,..., 0\}$, where $\lambda_1,...,\lambda_r$ are all the strictly positive eigenvalues. If $b \in \{0,1\}^r$, let $\Lambda_b = \operatorname{diag} \{ b_1 \lambda_1,..., b_r \lambda_r, 0,...,0 \}$. Let $B= \{ b \in \{0,1\}^r \mid \text{exactly }k\text{ of the }b_i\text{ are 1}\}$ and note that if $b \in B$ then $\Lambda_b$ has rank $k$ and hence so does $U \Lambda_b U^T$. Finally, note that $\Lambda = {r \over k}{1 \over \binom{r}{k} }\sum_{b \in B} \Lambda_b$ and so $A = {r \over k}{1 \over \binom{r}{k} }\sum_{b \in B} U \Lambda_b U^T$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3726932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\tan{x}\tan{y}\tan{z}=1$ in an acute triangle In an acute $\Delta ABC$, if $x,y,z$ are such that $$\cos{A}=\cos{y}\sin{z},\qquad\cos{B}=\cos{z}\sin{x},\qquad\cos{C}=\cos{x}\sin{y},$$ show that $$\tan{x}\tan{y}\tan{z}=1$$ or, equivalently, $$\sin{x}\sin{y}\sin{z}=\cos{x}\cos{y}\cos{z}.$$ I want to use $$\cos^2{A}+\cos^2{B}+\cos^2{C}+2\cos{A}\cos{B}\cos{C}=1$$ so we have $$\sum_{cyc}\cos^2{y}(1-\cos^2{x})+2\cos{x}\cos{y}\cos{z}\sin{x}\sin{y}\sin{z}=1$$ then $$\sum_{cyc}\cos^2{x}-\sum_{cyc}\cos^2{x}\cos^2{y}+2\cos{x}\cos{y}\cos{z}\sin{x}\sin{y}\sin{z}=1$$ and then I am stuck.
Let $$ \tan x\tan y\tan z=t $$ then $$ \sin x\sin y\sin z=t\cos x\cos y\cos z $$ so we get $$ \sum_{cyc}\cos^2x-\sum_{cyc}\cos^2x\cos^2y+2t\cos^2x\cos^2y\cos^2z=1 $$ then $$ \begin{aligned} & (2t-1)\cos^2x\cos^2y\cos^2z \\\\[1ex] =& 1-\sum_{cyc}\cos^2x+\sum_{cyc}\cos^2x\cos^2y-\cos^2x\cos^2y\cos^2z \\\\ =& \left(1-\cos^2x\right)\left(1-\cos^2y\right)\left(1-\cos^2z\right) \\\\[1ex] =& \sin^2x\sin^2y\sin^2z \end{aligned} $$ therefore $$ 2t-1=\frac{\sin^2x\sin^2y\sin^2z}{\cos^2x\cos^2y\cos^2z}=\tan^2x\tan^2y\tan^2z=t^2 $$ so $$t=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lemma 4.1.3 in Category Theory in Context: Adjoint Functors I'm having hard time proving this lemma, so hints would be greatly appreciated. The naturality of collections of isomorphisms is defined as follows: I was able to prove the right direction. i.e. showing that if the collection is natural, the left square commute iff right square commutes. But I'm having trouble with the reverse direction. It seems hard for me to somehow use an implication, to prove that the isomorphisms are natural. Hints would be greatly appreciated, thanks!
I’ll only show naturality in $D$, but I hope this helps. Suppose we have $k: d \to d’$. By assumption, we have some isomorphism of sets $D(Fc, d) \cong C(c, Gd)$. Hence consider a morphism $f^{\#}: Fc \to d$. Then this corresponds to some $f^{\flat}: c \to G(d)$, which we may compose with $G(k)$ to obtain $G(k) \circ f^{\flat}: c \to Gd’$. Now observe that the diagram on the right commutes. By our assumption the diagram on the left must commute. As a result, we see that $(G(k) \circ f^{\flat})^{\#} = k \circ f^{\#}$, so that $(k \circ f^{\#})^{\flat} = G(k) \circ f^{\flat}$. Hence the diagram commutes. As Riehl points out, this demonstrates that we have naturality in $D$. By a similar argument, you can get the naturality in $C$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Order of an element divides $m$ when $a^m \equiv 1 \pmod n$ https://brilliant.org/wiki/order-of-an-element/ I was referring to the above link for the order of an element, and in the basic properties section, while proving property $1$, the step "$d \le \gcd(m,d)$, due to the minimality of $d$" is written. Is it because $mx+dy\ge d$, i.e. $\gcd(m,d)\ge d$? But that inequality holds only when $x$ and $y$ are both positive, and there are cases when $x$ is positive and $y$ is negative, in which case $mx+dy \le d$, i.e. $\gcd(m,d)\le d$. Can someone help me understand what the minimality of $d$ actually means here, and how the inequality $d \le \gcd(m,d)$ is obtained?
The order is defined to be the smallest positive integer $d$ such that $a^d \equiv 1 \pmod{n}$ holds. Since we have shown that $a^{\gcd(d,m)} \equiv 1 \pmod{n}$ and we know that $\gcd(d,m)>0$, the minimality of $d$ forces $d \le \gcd(d,m)$.
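The property in the title — the order divides any exponent $m$ with $a^m \equiv 1 \pmod n$ — is easy to test exhaustively for small moduli; a brute-force sketch (the modulus and ranges are arbitrary):

```python
import math

def mult_order(a, n):
    # multiplicative order of a modulo n; assumes gcd(a, n) = 1
    d, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        d += 1
    return d

def order_divides_exponents(n, max_m=60):
    # check: whenever a^m = 1 (mod n), the order of a divides m
    for a in range(2, n):
        if math.gcd(a, n) != 1:
            continue
        d = mult_order(a, n)
        for m in range(1, max_m):
            if pow(a, m, n) == 1 and m % d != 0:
                return False
    return True
```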
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\operatorname E[e^{-X}Y]\le c$ imply $\operatorname E[Y]\le c\operatorname E[e^X]$? Quick question: Let $X,Y$ be real-valued random variables, $Y$ being nonnegative, such that $\operatorname E[e^{-X}Y]\le c$ for some $c\ge0$. Can we somehow show that $\operatorname E[Y]\le c\operatorname E[e^X]$? It's clearly true when $X$ is nonrandom. In the situation I've got in mind, $X$ is $\sigma(Y)$ measurable, which might be useful.
This is equivalent to asking for the inequality $E[ZW]\le E[Z]E[W]$ to hold for all $Z\ge0$ and $W> 0$ (set $W=e^X$ and $Z=e^{-X}Y$). In other words, is it true that if $Z,W>0$, then $\operatorname{Cov}(Z,W)\le 0$? This is not the case, see for instance $U$ uniformly distributed on $(0,1]$, $Z=U^3$ and $W=U^5+1$. We have $$E[ZW]-E[Z]E[W]=\frac19+\frac14-\frac14\cdot\left(\frac16+1\right)=\frac5{72}$$ Also notice that if we recover the original random variables via the relations $Z=e^{-X}Y$ and $W=e^X$, we obtain $Y=U^8+U^3$ and $X=\ln(U^5+1)$, and since the map $t^8+t^3$ is strictly increasing for $t\ge0$, we have that $U=h(Y)$ for some continuous function $h:[0,\infty)\to [0,\infty)$. Therefore $\sigma(Y)=\sigma(U)=\sigma(X)$.
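The covariance $5/72$ in this counterexample can be verified by numerical integration over $(0,1]$; a midpoint-rule sketch (the grid size is arbitrary):

```python
n = 10000
us = [(k + 0.5) / n for k in range(n)]     # midpoint rule on (0, 1)

ez = sum(u**3 for u in us) / n             # E[Z],  Z = U^3
ew = sum(u**5 + 1 for u in us) / n         # E[W],  W = U^5 + 1
ezw = sum(u**3 * (u**5 + 1) for u in us) / n

cov = ezw - ez * ew                        # should be close to 5/72
```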
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Convert sum of $N$ bits, $k$ shift right, to propositional formula. This problem is somehow related to multiplication circuits Input: $N$ bits, integer $K$ * *Output: a propositional formula that is satisfiable if and only if the sum of the bits, shifted right by $K$ and taken mod 2, equals 1 (i.e. bit $K$ of the sum is $1$) Example: * *N = {1,0,1,1}, K = 1 -> output = 1 *N = {1,1,1,1}, K = 1 -> output = 0 *N = {1,1,1,1}, K = 2 -> output = 1
I found an exponential solution for it; I hope that other answers will come out with something more efficient or prove that it is hard to find a better answer. I will start with the example of N = {1,0,1,1}, K = 1. It's easy to solve it on paper by summing those numbers and deleting the last digit. $1+0+1+1 = 11$ (in binary), and deleting the last digit gives $1$. Since it is a sum, the order of the digits is not important; the only thing that matters is the number of $1$'s, so we can look at the truth table of all the possibilities and concatenate it with $\lor$: $$(i_1 \land i_2 ...\land i_n) \lor (i_1 \land i_2 ...\land i_n) \lor (i_1 \land i_2 ...\land i_n)...$$ All the $i$ that represent zeroes should also be prefixed with $\lnot$ For example N = 3, K = 1 * *000 -> false *001 -> false *010 -> false *011 -> true *100 -> false *101 -> true *110 -> true *111 -> true Taking all the true results gives: $$(\lnot i_1 \land i_2 \land i_3) \lor (i_1 \land \lnot i_2 \land i_3) \lor (i_1 \land i_2 \land \lnot i_3) \lor (i_1 \land i_2 \land i_3)$$ We can quickly calculate how many different counts of $1$ we can have with $N$ bits that will yield a satisfying result; there are a total of $N$ possibilities. For each possibility with $P$ bits set to $1$, there are $\frac{N!}{P!(N-P)!}$ combinations. We can make a small optimization and instead of enumerating the $1$ combinations enumerate the $0$ combinations and negate the final result, but still, the final result is exponentially large.
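A brute-force sketch of this construction, together with a check that the resulting DNF is equivalent to the direct condition that bit $K$ of the popcount is $1$ (all names here are my own):

```python
from itertools import product

def target(bits, k):
    # the spec: the popcount, shifted right by k, has last bit 1
    return (sum(bits) >> k) & 1 == 1

def dnf_terms(n, k):
    # exponential DNF: one conjunct (stored as the full assignment)
    # per satisfying assignment
    return [bits for bits in product((0, 1), repeat=n) if target(bits, k)]

def eval_dnf(terms, bits):
    # a conjunct is satisfied iff the assignment matches it exactly
    return any(all(b == t for b, t in zip(bits, term)) for term in terms)
```

For N = 3, K = 1 this produces the four conjuncts of the truth table above, and the DNF agrees with the spec on all $2^3$ assignments.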
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is the Jackson integral of $e_q(x)$ , $e_q(x)$ itself? Some info- * *Jackson integral (q-analog of standard integration) simply defined as -$$\int f(x){\mathrm d}_qx=(1-q)x\sum_{k=0}^\infty q^kf(q^kx)$$ *q-exponential defined as $$e_q(x)=\sum_{n=0}^\infty \frac{x^n}{[n]_q!} = \sum_{n=0}^\infty \frac{x^n (1-q)^n}{(q;q)_n} = \sum_{n=0}^\infty x^n\frac{(1-q)^n}{(1-q^n)(1-q^{n-1}) \cdots (1-q)}$$ My question - $$\text{is}\int e_q(x){\mathrm d}_qx = e_q(x) ?$$ My approach- $$\int e_q(x){\mathrm d}_qx=(1-q)x\sum_{k=0}^\infty q^ke_q(q^kx) \\ =(1-q)x\sum_{k=0}^\infty q^k\sum_{n=0}^\infty(q^kx)^n\frac{(1-q^k)^n}{(q^k;q^k)_n} \\ $$ How should I proceed?
Write out $e_q(q^kx)$ explicitly and then interchange the summations: $$ \begin{array}{ll} \displaystyle \int e_q(x)\,\mathrm{d}_qx & \displaystyle =(1-q)x\sum_{k=0}^\infty q^k e_q(q^kx) \\[5pt] & \displaystyle =(1-q)x\sum_{k=0}^\infty q^k \sum_{n=0}^\infty \frac{(q^kx)^n}{[n]_q!} \\[5pt] & \displaystyle = (1-q)x\sum_{n=0}^\infty \frac{x^n}{[n]_q!} \sum_{k=0}^\infty (q^{n+1})^k \\[5pt] & \displaystyle =\sum_{n=0}^\infty \frac{x^{n+1}}{[n]_q!} \frac{1-q}{1-q^{n+1}} \\[5pt] & \displaystyle = \sum_{n=0}^\infty \frac{x^{n+1}}{[n+1]_q!} \\[5pt] & = e_q(x)-1. \end{array} $$ In particular, $\displaystyle \int x^n\,\mathrm{d}_qx=\frac{x^{n+1}}{[n+1]_q} $.
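The identity $\int e_q(x)\,\mathrm{d}_qx = e_q(x) - 1$ can also be checked numerically with truncated sums; a sketch with the arbitrary choices $q = 0.5$, $x = 0.3$ (well inside the radius of convergence $1/(1-q)$):

```python
def q_factorial(n, q):
    # [n]_q! with [j]_q = (1 - q^j) / (1 - q)
    f = 1.0
    for j in range(1, n + 1):
        f *= (1.0 - q**j) / (1.0 - q)
    return f

def e_q(x, q, terms=60):
    return sum(x**n / q_factorial(n, q) for n in range(terms))

def jackson_integral(f, x, q, terms=200):
    # (1 - q) x sum_{k >= 0} q^k f(q^k x)
    return (1.0 - q) * x * sum(q**k * f(q**k * x) for k in range(terms))

q, x = 0.5, 0.3
lhs = jackson_integral(lambda t: e_q(t, q), x, q)
rhs = e_q(x, q) - 1.0
```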
{ "language": "en", "url": "https://math.stackexchange.com/questions/3727934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rotation Lemma explanation. I'm reading the following rotation lemma on graphs. There's this statement in the proof: "If $y \in V(P)$, rotate $P'$ along the edge $\{v,y\}$", which I don't see how it could happen when applied to the example I've drawn. Rotations: Suppose $P = v_0v_1...v_t$ is a path in $G$. Suppose $\left\{v_{i}, v_{t}\right\} \in E(G), 1 \leq i \leq t-2$. Rotation along $\left\{v_{i}, v_{t}\right\}$: $P^{\prime}=v_{0} v_{1} \ldots v_{i} v_{t} v_{t-1} \ldots v_{i+1}$ is also a path of length $t$. Lemma 5.3.7. Let $P=v_{0} v_{1} \ldots v_{t}$ be a longest path in a graph $G,$ and let $R$ be the set of endpoints reachable from $v_{0}$ by sequences of rotations. Then $N_{G}(R) \subseteq N_{P}(R)$. Proof * *After rotating along $\left\{v_{i}, v_{t}\right\},$ only $v_{i}, v_{t}$ get new neighbours on the path. *Let $v \in R$. Rotate to path $P^{\prime}$ with $v$ as an endpoint. *Let $y \in N(v) \backslash R$ * *If $y \notin V(P)$, extend $P^{\prime}$ to $y \Rightarrow$ longer path than $P$ *If $y \in V(P)$, rotate $P^{\prime}$ along the edge $\{v, y\}$ *$\Rightarrow$ a neighbour $x$ of $y$ on $P^{\prime}$ is an endpoint of the new path, so $x \in R$ *If $x$ also a neighbour of $y$ on $P$, then $y \in N_{P}(R)$ *Otherwise must have rotated along an edge incident to $y \Rightarrow y \in N_{P}(R)$
If you rotate a path along the last edge, this does nothing. You get the original path back. In the example you drew, rotating $P'$ along the edge $\{v,y\}$ gives the path $P'$ back again, and it's still true that the endpoint of the resulting path (namely, $v$) is a neighbor of $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3728266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
X Hausdorff: every point has precompact neighborhood iff X has a basis of precompact open sets Claim Suppose $X$ is Hausdorff, then: Every point $p\in X$ has a precompact neighborhood in $X$ $\Longleftrightarrow$ $X$ has a basis of precompact open subsets Question: It's not clear to me why it's necessary for X to be Hausdorff; so, what is missing from the following proof that X being Hausdorff provides? Proof: "$\Longrightarrow$" Suppose that every point $p \in X$ has a precompact neighborhood in $X$. We will construct a basis using precompact open subsets of $X$. Denote the set of precompact neighborhoods for each point as $\mathbb{U} = \left\{ U_p \right\}_{p\in X}$. Denote the topology of $X$ as $\tau$. We claim that $\forall u \in \tau$ and $\forall U_p \in \mathbb{U}$, the intersection, $u_p = u \cap U_p$ is precompact. This follows from the fact that every closed subset of a compact subspace is compact. Thus, $\overline{u}_p \subset \overline{U}_p$ implies that $\overline{u}_p$ is compact. Therefore, the following is a precompact basis: $\mathcal{B} = \left\{u_p: \exists u \in \tau, \exists U_p \in \mathbb{U}\,\, \mathrm{ s.t. }\,\, u_p = u \cap U_p \right\}$ "$\Longleftarrow$" Suppose $X$ has a basis of precompact open subsets, which we'll denote $\mathcal{B}$. $X$ is open, so $X =\underset{u\in \mathcal{B}}\cup u$. Thus, $p \in X$ implies that $\exists u \in \mathcal{B}$ such that $p \in u$. So, every $p$ has a precompact neighborhood.
$X=(\mathbb{R}\times\mathbb{Z})/\sim$, where $\sim$ is the equivalence relation generated by $(x,n)\sim (x,m)$ for all $n,m\in \mathbb{Z}$ and all $x\ne 0$. $X$ is non-Hausdorff. $X$ is locally compact but does not have a basis of precompact open subsets. This comes from "Introduction to Topological Manifolds" by J.M. Lee problem 4-22. The proof by Henno Bransdma above assumes the existence of a precompact neighborhood that does not exist for the origins of this set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3728390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Unable to Solve a quiz question asked in mathematics exam ( Quantitative Aptitude) I am self studying for an exam and I am unable to solve this quiz question. Adding its image -> I tried finding numbers in the sentences but couldn't find any, and I think that's the wrong approach. Can anyone please tell me how to solve this question. Answer is B.
Numbers are spelled out in each phrase. The first has eleven (Tinselevent), the second nine. Look for the other two. There is no excuse for this being called mathematics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3728507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Direction vector from 2 orthogonal angles I have an acoustic sensor that measures angle on a single axis. If the sensor is pointing upwards, then I can measure an angle $\theta$ that has the range $-\pi/2 < \theta < \pi/2$. A value of $\theta = 0$ means that an object (sound source) is directly above it. In other words, it measures the angle from a surface normal with respect to the ground, on a single axis. If I mount a second sensor with a baseline sitting at 90 degrees from the first sensor, rotated on the z axis, I measure two angles, $\theta$ and $\phi$. $\theta$ defines a unit vector on the x,z plane (z is up) and $\phi$ defines a unit vector on the y,z plane. Both vectors point upwards when the object is directly above the sensor. $\theta$ varies as the object moves in the $\pm$ x direction and similarly $\phi$ varies according to the position on the y axis. It cannot measure distance, only direction. How can I turn these two measurements ($\theta$ and $\phi$) into a direction vector that points towards the object? Note that this is not the same as the spherical coordinate system that measures azimuth and inclination. In my case $\theta$ and $\phi$ represent angles from a surface normal on 2 orthogonal axes.
Does this figure help you? The "perspective of sensor" triangles give you the angles you need. EDIT: The boxed equations are the ones you want, if i've interpreted your question correctly. (See new image) You can figure out a direction vector, because my $\beta$ and $\alpha$ are the co-latitude and azimuthal angles in polar coordinates, respectively.
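Under the geometry described in the question — $\theta$ measured from the vertical in the x–z plane (so $\tan\theta = x/z$) and $\phi$ likewise in the y–z plane ($\tan\phi = y/z$) — the unnormalized direction is simply $(\tan\theta, \tan\phi, 1)$. A sketch under that assumption (the function name is mine, not part of the answer):

```python
import math

def direction(theta, phi):
    # assumes tan(theta) = x/z and tan(phi) = y/z, with z pointing up;
    # valid for -pi/2 < theta, phi < pi/2 (object above the sensor plane)
    v = (math.tan(theta), math.tan(phi), 1.0)
    norm = math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)
    return tuple(c / norm for c in v)
```

With both angles zero the vector points straight up, as the question's setup requires.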
{ "language": "en", "url": "https://math.stackexchange.com/questions/3728684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equivalence definition of Lower Integral Let $f$ be a real-valued bounded function on $[a,b]$. For every partition $P:x_0,...,x_N$ of $[a,b]$, define $m_k(f,P)=\inf_{x_{k-1}\le x\le x_k}f(x)$ and $m^*_k(f,P)=\inf_{x_{k-1}< x< x_k}f(x)$ (which does not include the endpoints) for all $k=1,...,N$. The lower integral of $f$ is usually defined by $L=\sup\{\sum_{k=1}^Nm_k(f,P)(x_k-x_{k-1}): P:x_0,...,x_N$ is a partition of $[a,b]\}$. But intuitively, the value of the lower integral should remain the same if we replace $m$ by $m^*$, i.e. $L=L^*:=\sup\{\sum_{k=1}^Nm^*_k(f,P)(x_k-x_{k-1}): P:x_0,...,x_N$ is a partition of $[a,b]\}$, since only 2 points are removed from each subinterval and this should not affect the whole integral. It is clear that $L\le L^*$, since each $m_k(f,P)\le m_k^*(f,P)$ by the property of the infimum, but I am stuck on showing the other direction of the equality.
For a given partition $P = (x_0,x_1, \ldots x_n)$, let $L(f,P) = \sum_{k=1}^nm_k(x_k - x_{k-1})$ denote the usual lower Darboux sum and let $L^*(f,P) = \sum_{k=1}^nm_k^*(x_k - x_{k-1})$ denote the lower sum with infima taken over open subintervals. You already have shown that $L(f,P) \leqslant L^*(f,P)$ which implies that $$L = \sup_P L(f,P) \leqslant \sup_PL^*(f,P) = L^*$$ To prove that $L = L^*$, it is enough to show that for any $\epsilon >0$ there exists a partition $Q$ such that $L^* - L(f,Q) < \epsilon$. Since $f$ is bounded, we have $m < f(x) < M$ for all $x \in [a,b]$. Also, for any $\epsilon > 0$, there exists a partition $P = (x_0,x_1,\ldots, x_n)$ such that $L^* - L^*(f,P) < \frac{\epsilon}{2}$ (since $L^* = \sup_PL^*(f,P)$). Define the partition $Q = (x_0, x_0+\delta, x_1-\delta, x_1,x_1+\delta,\ldots, x_n-\delta,x_n)$ where $$0 < \delta < \min\left(\frac{\min_{1\leqslant j \leqslant n}(x_j - x_{j-1})}{2}, \frac{\epsilon}{4n(M-m)}\right)$$ We have $$L(f,Q) = \sum_{k=1}^n\left(\inf_{x \in [x_{k-1},x_{k-1} + \delta]}f(x)\cdot\delta + \inf_{x \in [x_{k-1}+ \delta,x_{k} - \delta]}f(x)\cdot (x_k - x_{k-1} - 2\delta)+ \inf_{x \in [x_{k}-\delta,x_{k} ]}f(x) \cdot\delta\right)$$ Since $\inf_{x \in [x_{k-1},x_{k-1} + \delta]}f(x), \, \,\inf_{x \in [x_{k}-\delta,x_{k}]}f(x) \geqslant m$ and $\inf_{x \in [x_{k-1}+ \delta,x_{k} - \delta]}f(x) \geqslant m_k^*$ it follows that $$L(f,Q) \geqslant \sum_{k=1}^n\left(m\cdot\delta + m_k^*\cdot (x_k - x_{k-1} - 2\delta)+ m \cdot\delta\right) \\ = \sum_{k=1}^nm_k^*\cdot (x_k - x_{k-1}) - 2\delta\sum_{k=1}^nm_k^* + 2nm\delta$$ The first sum on the RHS is just $L^*(f,P)$ and for the second sum we have $2\delta\sum_{k=1}^nm_k^* \leqslant 2nM\delta$. Thus, $$L(f,Q) \geqslant L^*(f,P) - 2n(M-m)\delta > L^* - \frac{\epsilon}{2} - 2n(M-m) \frac{\epsilon}{4n(M-m)}= L^*- \epsilon$$ Therefore, $L = \sup_P L(f,P) = L^*$ since for any $\epsilon > 0$ there exists a partition $Q$ such that $L^* - L(f,Q) < \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3728829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding an irreducible polynomial over the rationals I'm very confused by a homework question: "Find the irreducible polynomial for $ \sin{2\pi/5}$ over $\mathbb Q$." I found that $16t^{4}-20t^{2}+5=0$, but this is not monic? It is also irreducible by Eisenstein, but minimal polynomials are always monic?
$p(t)=t^{4}-\frac{5}{4}t^{2}+\frac{5}{16}$ is the minimal polynomial of $\sin 2\pi/5$ over $\mathbb Q$. It is irreducible in $\mathbb Q[x]$ according to Gauss's lemma as $16t^{4}-20t^{2}+5$ is irreducible in $\mathbb Z[x]$.
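A quick floating-point sanity check in Python (this only verifies the roots numerically; it is of course not a proof of irreducibility). The check that $\sin(\pi/5)$ is also a root of the same quartic is my addition, a fact not stated in the answer:

```python
import math

alpha = math.sin(2 * math.pi / 5)
beta = math.sin(math.pi / 5)    # the other positive root of the same quartic

def p(t):
    return 16 * t**4 - 20 * t**2 + 5

def p_monic(t):
    # dividing by the leading coefficient 16 gives the monic minimal polynomial
    return t**4 - (5 / 4) * t**2 + 5 / 16

vals = [p(alpha), p(beta), p_monic(alpha)]
```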
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Taylor series for $\ln(\frac{1-z^2}{1+z^3})$ I've tried $$\ln(1-z^2)-\ln(1+z^3)=-\sum_{n\geq 1} \frac{z^{2n}}{n}+\sum_{n\geq 1}\frac{(-1)^{n}z^{3n}}{n}$$ but I didn't manage to combine it into one series. Any help is appreciated.
Starting from @J.G.'s answer, everything simplifies to $$\log\left(\frac{1-z^2}{1+z^3}\right)=\sum_{n=2}^\infty \frac{2 \cos \left(n\frac{\pi }{3}\right)-1}{n} z^n$$
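A quick numerical check of this closed form in Python (partial sum vs. a direct evaluation of the logarithm at a point inside the radius of convergence):

```python
import math

def series(z, terms=200):
    # partial sum of the closed form above
    return sum((2 * math.cos(n * math.pi / 3) - 1) / n * z**n
               for n in range(2, terms + 1))

def direct(z):
    return math.log((1 - z**2) / (1 + z**3))

z = 0.3
err = abs(series(z) - direct(z))

# spot-check a coefficient: for n = 6 the formula gives (2*cos(2*pi) - 1)/6 = 1/6
c6 = (2 * math.cos(6 * math.pi / 3) - 1) / 6
```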
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Computing $\lim_{x\rightarrow 0}{\frac{xe^x- e^x + 1}{x(e^x-1)}}$ without L'Hôpital's rule or Taylor series This limit really stumped me because I'm not allowed to use L'Hôpital's rule or Taylor series. Please help! I think the limit is $\frac{1}{2}$, but I don't know how to prove it without L'Hôpital's rule or Taylor series: $$\lim_{x\rightarrow 0}{\frac{xe^x- e^x + 1}{x(e^x-1)}}$$
How about using Cauchy's mean value theorem (L'Hôpital's rule can be seen as a specialization of this)? Let $f(x)=xe^x-e^x+1$ and $g(x)=xe^x-x$; then $f(0)=0=g(0)$ and by the (generalized) mean value theorem, there is $c_x$ between $0$ and $x$ such that $$f'(c_x)(g(x)-g(0))=g'(c_x)(f(x)-f(0)).$$ This can be expressed as $$ \frac{f(x)}{g(x)}=\frac{f(x)-f(0)}{g(x)-g(0)}=\frac{f'(c_x)}{g'(c_x)}=\frac{c_xe^{c_x}}{c_xe^{c_x}+ e^{c_x}-1}=\frac{e^{c_x}}{e^{c_x}+\frac{e^{c_x}-1}{c_x}}$$ As $x\rightarrow 0$, $c_x\rightarrow 0$ and so $$\lim_{x\rightarrow0}\frac{f(x)}{g(x)}=\lim_{x\rightarrow0}\frac{e^{c_x}}{e^{c_x}+\frac{e^{c_x}-1}{c_x}}=\frac{1}{2}$$ Here we have used the fact that $\lim_{h\rightarrow0}\frac{e^h-1}{h}=\exp'(0)=1$.
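A quick numerical sanity check of the claimed limit in Python (just evaluating the quotient along sequences tending to $0$ from both sides):

```python
import math

def h(x):
    return (x * math.exp(x) - math.exp(x) + 1) / (x * (math.exp(x) - 1))

# catastrophic cancellation in the numerator limits how small x can usefully be
xs = [0.1, 0.01, 1e-3, 1e-4, -1e-4, -1e-3]
vals = [h(x) for x in xs]
```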
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Maximum number of acute triangles in a regular convex polygon triangulated into $n-2$ triangles by its diagonals. I have been reading about convex polygons, and I found the following: We say that a simple polygon is convex if all its interior angles are less than $\pi$. If $P$, a regular convex polygon, is divided into $n-2$ triangles with diagonals, what is the maximum number of acute triangles one can have? I don't understand what is meant by "$n-2$ triangles with diagonals". Thank you. Note: I have now understood what is meant by "$n-2$ triangles with diagonals", but I have tried a lot to solve the problem using everything that has been written to me, and I have not succeeded; I do not know how to use the condition that the triangles' angles have to be acute. I would appreciate it if you could help me solve it. Thank you.
To reword that sentence, If $P$ is a convex polygon with $n$ sides then you can use several diagonals of $P$ to subdivide $P$ into $n-2$ triangles. In fact we can be still more precise than that: If $P$ is a convex polygon with $n$ sides then you can use $n-3$ diagonals of $P$ to subdivide $P$ into $n-2$ triangles. For example, if $n=4$, and so $P$ is a quadrilateral, then you can use $1$ diagonal to subdivide $P$ into $2$ triangles. Next, if $n=5$ and so $P$ is a pentagon, then you can use 2 diagonals of $P$ to subdivide $P$ into $3$ triangles. And so on.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
When is $\mathbb Z[\sqrt d]$ a Euclidean or principal ideal domain? Let $d$ be an integer $\neq 1$ such that $d$ is not divisible by the square of any prime. Consider the ring $\mathbb Z[\sqrt d]$. My question is: when is this ring a Euclidean domain, and, when it is not an ED, when is it a PID? Is there any criterion to ensure this? How can one tell by observation? Note that $d$ may be negative also.
* *There is no simple rule that classifies when $\mathbf Z[\sqrt{d}]$ is a PID or Euclidean when $d$ is squarefree and positive, and there is no reason to expect a simple rule. (But note, as Gerry Myerson points out, that $\mathbf Z[\sqrt{d}]$ is the "wrong ring" to be thinking about when $d \equiv 1 \bmod 4$ since the full ring of algebraic integers of $\mathbf Q(\sqrt{d})$ in that case is the bigger ring $\mathbf Z[(1+\sqrt{d})/2]$. The ring $\mathbf Z[\sqrt{d}]$ is never a UFD for $d \equiv 1 \bmod 4$ for structural reasons.) *The list of norm-Euclidean real quadratic rings is known. See the Wikipedia page for quadratic integers, but note that it is about when the full ring of integers is norm-Euclidean. That is never $\mathbf Z[\sqrt{d}]$ when $d \equiv 1 \bmod 4$, so if you discard such $d$ then you are left with $d = 2, 3, 6, 7, 11, 19$. Note that being a Euclidean domain is a weaker property than being norm-Euclidean: maybe the ring is Euclidean with respect to some bizarre function, not its (absolute) norm function. An example of that is $\mathbf Z[\sqrt{14}]$, which is not norm-Euclidean but was proved to be Euclidean by Malcolm Harper. *If you want to accept the Generalized Riemann Hypothesis for zeta-functions of number fields, then PID = Euclidean for real quadratic rings of algebraic integers. For example, if $d$ is positive, squarefree, and not $1 \bmod 4$ then $\mathbf Z[\sqrt{d}]$ is Euclidean (not necessarily norm-Euclidean!) if and only if it is a PID. Neither of these properties is easy to characterize, but the properties are equivalent to each other if you accept GRH. This is a special case of a theorem of Weinberger in 1973: GRH for zeta-functions of all number fields implies the ring of integers $\mathcal O_K$ of a number field $K$ is Euclidean if it is a PID (has class number $1$) and the unit group $\mathcal O_K^\times$ is infinite. Weinberger's paper is On Euclidean rings of algebraic integers, pp. 
321-332 in "Analytic number theory'' (Proc. Sympos. Pure Math., Vol. XXIV). Amer. Math. Soc., Providence (1973).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is a real number a proper subset of $ℚ$! (Dedekind cut)? I just started studying some set theory using "Classic Set Theory: For Guided Independent Study" and I've been stuck at the 8th and 9th page for 2 days lol. It says: A Dedekind left set is a subset $r$ of $ℚ$ with the following properties: * *$r$ is a proper, non-empty subset of $ℚ$ *if $q∈r$ and $p<q$, then $p∈r$ *$r$ has no greatest element A real number is a Dedekind left set and $ℝ$ is the set of all such real numbers. Let $q∈ℚ$. Then the real number corresponding to $q$ is $Q=\{p∈ℚ:p<q\}$ Everything is clear except "a real number is a Dedekind left set", how?? How is $\sqrt2$ a non-empty subset of $ℚ$? How does it even make sense? Any help please?! Thank you
Well, $\sqrt2$ is the set of all rational numbers $x$ which are negative or satisfy $x^2<2$. This set is certainly not empty, it contains all the negative rationals, etc. But wait, you might say, this is somehow circular. How do you know to define it by $x^2<2$? Well, $\sqrt\cdot$ is not an integral part of our language. Instead we have $2$, which is a rational number, and then we say that $\sqrt2$ is the unique positive solution to $x^2-2=0$, so we define the Dedekind cut as above, and we can then show that in the field $\Bbb R$, given by these Dedekind cuts, $\sqrt2$ is in fact the cut we defined, i.e. the positive solution for $x^2-2=0$.
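If it helps to see the cut concretely as a set with a membership test, here is a small Python sketch (the function name is mine, just for illustration): membership of a rational in the left set for $\sqrt2$ is decided purely with rational arithmetic, so no square roots are ever needed.

```python
from fractions import Fraction

def in_sqrt2_cut(p):
    """Is the rational p in the Dedekind left set representing sqrt(2)?"""
    p = Fraction(p)
    return p < 0 or p * p < 2

# 7/5 = 1.4 is in the cut (49/25 < 2), while 3/2 is not (9/4 > 2)
checks = [in_sqrt2_cut(-5), in_sqrt2_cut(Fraction(7, 5)), in_sqrt2_cut(Fraction(3, 2))]
```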
{ "language": "en", "url": "https://math.stackexchange.com/questions/3729946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Inequality of arithmetic means of two sets Let $a,b\in\mathbb{Z}^+$ and let $Q=\{x_1, x_2, x_3,\dots, x_a\}$ be a subset of the natural numbers $1, 2, 3,\dots, b$ such that whenever $x_i+x_j<b+1$ with $1 ≤ i ≤ j ≤ a$, the sum $x_i+x_j$ is also an element of $Q$. Prove that: $ \frac{x_1+x_2+x_3+\dots+x_a}{a} ≥ \frac{b+1}{2}$. So basically, you have to prove that the arithmetic mean of a set $Q$ satisfying the condition is greater than or equal to the arithmetic mean of the natural numbers $1, 2, 3,\dots, b$. Any help would be appreciated. Thanks in advance!
Hint: Assume $x_1 < x_2 < \cdots < x_a$. For each $i$, show that $ x_i + x_{a+1-i} \geq b+1$. Proof by contradiction: suppose $x_i + x_{a+1-i} < b+1$, which satisfies the condition in the problem. What does it make sense to consider next? Do some work to reach a contradiction. Corollary: summing over $i = 1, \ldots, a$ gives $ 2\sum x_i \geq a (b+1)$, and the result follows.
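The statement itself can be brute-force verified for small $b$ in Python (an exhaustive check over subsets, not a proof):

```python
from itertools import combinations

def closed_under_small_sums(q, b):
    # the problem's condition, with i <= j allowing i = j (so x + x counts too)
    s = set(q)
    return all(x + y in s for x in q for y in q if x + y < b + 1)

ok = True
for b in range(1, 9):
    for r in range(1, b + 1):
        for q in combinations(range(1, b + 1), r):
            if closed_under_small_sums(q, b):
                if sum(q) / len(q) < (b + 1) / 2:
                    ok = False   # would be a counterexample
```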
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
On ranges of nuclear operators Consider nuclear (trace class) operators acting on a separable infinite-dimensional Hilbert space. Does there exist a nuclear operator $A$ such that, for any other nuclear operator $B$, $\mathrm{ran}(B) \subset \mathrm{ran}(A)$? It is known that a nuclear operator cannot have a closed range unless it is finite-dimensional, so if the Hilbert space is infinite-dimensional, there does not exist a nuclear operator whose range is the whole space. But I am not sure if one can still construct an operator with the "largest" possible range. I would appreciate any insight about this.
No. This fails even if you require compact and not trace-class. Note first that the range of any compact operator has an orthonormal basis. This follows from the polar decomposition, as we can write $T=V|T|$ with $|T|$ positive and $V$ a partial isometry. As $|T|$ is self-adjoint and compact, its range has an orthonormal basis of eigenvectors. And $V$ maps this orthonormal basis to an orthonormal basis of the range of $T$. Fix any dense subspace $H_0\subset H$ with $\{e_j\}$ an orthonormal basis of $H_0$. Let $f\in H\setminus H_0$ with $\|f\|=1$. Define $$ Ax=\langle x,f\rangle\,f+\sum_j\frac1j\,\langle x,e_j\rangle\,e_j. $$ Then the range of $A$ contains $Af=f+\sum_j\frac1j\langle f,e_j\rangle\,e_j$ which is not in $H_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that there exists an angle $\alpha$ and $r \in \Bbb R$ such that $a\cos x + b\sin x = r\cos\alpha$ Let's say that we have an expression $a\cos x + b\sin x$ where $a \in \Bbb R$ and $b \in \Bbb R$. I was learning about finding the minimum and maximum values of an expression of this form for some given value of $a$ and $b$ by expressing it in terms of a single trigonometric function. My textbook did it by assuming that $a = m\sin\phi$ and $b = m\cos\phi$, where $m \in \Bbb R$ and $\phi$ is some angle. But I couldn't wrap my head around the fact that any two real numbers can be expressed as the product of another real number and a trigonometric function for some angle. So, I decided to take another approach which is highly similar to this one. It is solely based on the assumption that the expression can be expressed in the form of $r\cos\theta$, where $r \in \Bbb R$ and $\theta$ is some angle. Once this assumption is proved, here's how I will continue it: $$a\cos x + b\sin x = r\cos\theta$$ Let's say that $\theta = \alpha + x$. So: $$a\cos x + b\sin x = r\cos(\alpha + x) = (r\cos\alpha)\cos x + (-r\sin\alpha)\sin x$$ This gives us the values of $a$ and $b$ as $r\cos\alpha$ and $-r\sin\alpha$ respectively. So, it would work perfectly if I can prove the assumption mentioned above. Unfortunately, I haven't been able to prove it yet. I was successful in proving its converse, though, i.e. for a given expression, say $p\cos\gamma$, where $p \in \Bbb R$ and $\gamma$ is some angle, it can be expressed in the form of $c\cos\delta + d\sin\delta$ where $c \in \Bbb R$, $d \in \Bbb R$ and $\delta$ is some angle. This is highly similar to what I've stated above (what I'd do once the assumption is proved). First, we assume that $\gamma = \beta + \delta$, where $\beta$ and $\delta$ are two angles that fit in the equation.
$$\therefore p\cos\gamma = p\cos(\beta + \delta) = p(\cos\beta\cos\delta - \sin\beta\sin\delta) = (p\cos\beta)\cos\delta + (-p\sin\beta)\sin\delta$$ Substituting $p\cos\beta$ by $c$ and $-p\sin\beta$ by $d$, we can arrive at $c\cos\delta + d\sin\delta$. I don't know if this will be helpful in proving the initial assumption that an expression $a\cos x + b\sin x$ can be expressed as $r\cos\theta$ for some angle $\theta$ and for some real value of $r$. I'd really appreciate help in proving this. Thanks! PS : I'm not familiar with Euler's formula
We begin by observing that $$a\cos x+b\sin x =\sqrt{a^2+b^2}\left\{\frac{a}{\sqrt{a^2+b^2}}\cdot\cos x +\frac{b}{\sqrt{a^2+b^2}}\cdot\sin x\right\}$$ Now, define $\phi\in[0,2\pi)$ such that $$\cos\phi=\frac{a}{\sqrt{a^2+b^2}}\text{ and }\sin\phi=\frac{b}{\sqrt{a^2+b^2}}$$ Note that such a value of $\phi$ is unique. Therefore, we have $$a\cos x + b\sin x =\sqrt{a^2+b^2}\left(\cos\phi \cos x + \sin\phi \sin x\right)=r\cos\alpha$$ with $r=\sqrt{a^2+b^2}$ and $\alpha = \phi-x$. This finishes the proof.
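A numerical check of this construction in Python (using `atan2` to produce the unique angle $\phi$ with the required cosine and sine):

```python
import math

def r_alpha(a, b, x):
    r = math.hypot(a, b)      # sqrt(a^2 + b^2)
    phi = math.atan2(b, a)    # cos(phi) = a/r, sin(phi) = b/r
    return r, phi - x         # alpha = phi - x, as in the proof

a, b, x = 3.0, -4.0, 0.7
r, alpha = r_alpha(a, b, x)
lhs = a * math.cos(x) + b * math.sin(x)
rhs = r * math.cos(alpha)
```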
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why do we use the rank of a matrix to compute the Schatten norms? The Schatten norm is defined as $$\|A\|_{S_P} = \left(\sum_{i=1}^{r(A)}\sigma_i^P(A)\right)^{\frac{1}{P}}$$ where $r(A)$ represents the rank of the matrix $A$. Why do we use the rank of the matrix to compute the Schatten $p$-norm?
If you assume (as you do, although you don't say) that the singular values are ordered from biggest to smallest, you have $$ \sum_{i=1}^{r(A)}\sigma_i^P(A)=\sum_{i=1}^{n}\sigma_i^P(A), $$ since the singular values $\sigma_{r(A)+1},\ldots,\sigma_n$ are zero.
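A dependency-free illustration in Python with a diagonal example (for a diagonal matrix the singular values are just the absolute values of the diagonal entries, so no linear-algebra library is needed; the matrix is my own toy choice):

```python
# diag(3, 2, 0, 0) has rank 2 and singular values (3, 2, 0, 0)
sigma = [3.0, 2.0, 0.0, 0.0]
rank = sum(1 for s in sigma if s > 0)

P = 3
norm_over_rank = sum(s**P for s in sigma[:rank]) ** (1 / P)
norm_over_all = sum(s**P for s in sigma) ** (1 / P)   # trailing zeros add nothing
```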
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\prod_{k=2}^{99}\frac{k^{3}-1}{k^{3}+1}$ is greater than $\frac{2}{3}$. I have to prove that the product $$\prod_{k=2}^{99}\frac{k^{3}-1}{k^{3}+1}$$ is greater than $\displaystyle\frac{2}{3}$. I've tried to write $k^{3}-1$ as $(k-1)(k^{2}+k+1)$ and in other ways, but I couldn't finish it.
$\begin{array}\\ f(n) &=\prod_{k=2}^{n}\dfrac{k^{3}-1}{k^{3}+1}\\ &=\dfrac{\prod_{k=2}^{n}(k^{3}-1)}{\prod_{k=2}^{n}(k^{3}+1)}\\ &=\dfrac{\prod_{k=2}^{n}(k-1)(k^2+k+1)}{\prod_{k=2}^{n}(k+1)(k^2-k+1)}\\ &=\dfrac{\prod_{k=1}^{n-1}k}{\prod_{k=3}^{n+1}k}\dfrac{\prod_{k=3}^{n+1}((k-1)^2+(k-1)+1)}{\prod_{k=2}^{n}(k^2-k+1)} \qquad\text{(this is the only clever step)}\\ &=\dfrac{2}{n(n+1)}\dfrac{\prod_{k=3}^{n+1}(k^2-2k+1+k-1+1)}{\prod_{k=2}^{n}(k^2-k+1)}\\ &=\dfrac{2}{n(n+1)}\dfrac{\prod_{k=3}^{n+1}(k^2-k+1)}{\prod_{k=2}^{n}(k^2-k+1)}\\ &=\dfrac{2}{n(n+1)}\dfrac{(n+1)^2-(n+1)+1}{2^2-2+1}\\ &=\dfrac{2}{n(n+1)}\dfrac{n^2+2n+1-n-1+1}{3}\\ &=\dfrac23\dfrac{n^2+n+1}{n^2+n}\\ &=\dfrac23(1+\dfrac1{n^2+n})\\ &\gt \dfrac23 \qquad\text{for all } n\\ \end{array} $
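The closed form can be confirmed with exact rational arithmetic in Python (`fractions` keeps everything exact, so this is an identity check for each tested $n$, not a floating-point approximation):

```python
from fractions import Fraction

def f(n):
    # the product directly
    prod = Fraction(1)
    for k in range(2, n + 1):
        prod *= Fraction(k**3 - 1, k**3 + 1)
    return prod

def closed_form(n):
    # (2/3) * (n^2 + n + 1) / (n^2 + n), as derived above
    return Fraction(2, 3) * Fraction(n * n + n + 1, n * n + n)

results = [(n, f(n) == closed_form(n), f(n) > Fraction(2, 3)) for n in (5, 50, 99)]
```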
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Geometry Proof to Find Maximum area of $\triangle PIE$ Circle $\omega$ is inscribed in unit square $PLUM,$ and points $I$ and $E$ lie on $\omega$ such that $U, I,$ and $E$ are collinear. Find, with proof, the greatest possible area for $\triangle PIE.$ I'm not sure if there is a solution possible without trigonometry. Also, for my diagram in my solution, I'm not sure how to center it. Sorry about that.
Let $O$ be the center of circle $\omega.$ Let $X$ be the foot of the altitude from $P$ to $IE,$ and let $Y$ be the foot from $O$ to $IE.$ Denote line segment $\overline{YO}$ as length $d,$ the radius as $r,$ and $\angle XUP$ as $\theta.$ $\textbf{Claim:}$ The greatest area of $\triangle PIE$ is $\frac{1}{4}.$ $\textbf{Proof:}$ To find the area of $\triangle PIE,$ we can find the lengths of $\overline{IE}$ and $\overline{PX},$ and then use the formula of the area of a triangle to conclude. Let's start by finding lengths $\overline{IO}$ and $\overline{YO},$ and then apply the Pythagorean Theorem to get our $\overline{IY},$ then multiply by two to get the base of $\triangle PIE.$ Clearly, segment $\overline{IO}$ is the radius of the circle, which has length $1/2.$ Then, by taking $\sin \theta,$ we have $$\sin \theta = \frac{\overline{YO}}{\overline{UO}}=\frac{d}{\sqrt{2}/2} \implies d = \frac{\sqrt{2}}{2} \cdot \sin \theta.$$ Similarly with $\overline{PX},$ we take $\sin \theta$ and get $$\sin \theta = \frac{\overline{PX}}{\overline{UP}} = \frac{\overline{PX}}{\sqrt{2}} \implies \overline{PX} = \sqrt{2} \cdot \sin \theta.$$ Thus, after finding these two lengths, we know the largest possible area of $\triangle PIE$ is $$\displaystyle{\max\left(\frac{1}{2} \cdot \overline{PX} \cdot \overline{IE}\right) = \max \left(\frac{1}{2} \cdot \sqrt{2} \cdot \sin \theta \cdot 2\sqrt{r^2-d^2}\right)}.$$ Note that $\overline{IE} = 2\sqrt{r^2-d^2}$ by the Pythagorean Theorem, where clearly in the diagram $\overline{IO}$ is the radius and $\overline{YO}$ is distance $d.$ Simplifying our equation above to lowest terms, we get: \begin{align*} \max \left(\frac{1}{2} \cdot \sqrt{2} \cdot \sin \theta \cdot 2\sqrt{r^2-d^2}\right) &= \max \left(\frac{1}{2} \cdot \sqrt{2} \cdot \sin \theta \cdot 2\sqrt{\left(\frac{1}{2} \cdot \frac{1}{2}\right)-\left(\frac{1}{2} \cdot \sin^2 \theta\right)}\right) \\ &= \max \left(\frac{1}{2} \cdot \sqrt{2} \cdot \sin \theta \cdot \sqrt{1 - 2 \sin^2 \theta} 
\right). \\ \end{align*} Then, let's substitute $\alpha = \sin \theta.$ Thus, to maximize the area of $\triangle PIE,$ all we need to do is find the maximum of $\frac{1}{2} \cdot \sqrt{2} \cdot \alpha \cdot \sqrt{1-2\alpha^2}.$ This means we need to find the value of $\alpha$ that maximizes $$\alpha^2 \left(1-2\alpha^2\right) = \alpha^2 - 2\alpha^4.$$ Taking the derivative of $\alpha^2 - 2\alpha^4,$ we get $$\frac{\mathrm{d}}{\mathrm{d}\alpha} \left(\alpha^2 - 2\alpha^4\right) = 2\alpha - 8\alpha^3.$$ Solving $2\alpha - 8\alpha^3 = 0$ gives $\alpha = \frac{1}{2}$ (taking the positive root), and substituting back, the largest area of $\triangle PIE$ is $\frac{1}{2} \cdot \sqrt{2} \cdot \frac{1}{2} \cdot \sqrt{1-2\cdot\frac14} = \boxed{\frac{1}{4}},$ as desired. $\qquad\blacksquare$ $\textbf{Claim:}$ There is a $\theta$ that achieves the maximum area stated above. $\textbf{Proof:}$ We have found that the maximum occurs at $\sin \theta = \frac{1}{2},$ i.e. at $\theta = 30^{\circ}.$ Hence, proven. $\qquad\blacksquare$
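A quick numerical confirmation of the maximization (a brute-force scan in Python over the admissible range $0<\theta<\pi/4$, where $1-2\sin^2\theta\ge0$):

```python
import math

def area(theta):
    s = math.sin(theta)
    return 0.5 * math.sqrt(2) * s * math.sqrt(max(1 - 2 * s * s, 0.0))

# scan the admissible range; the claimed maximizer is theta = pi/6 (30 degrees)
n_steps = 100000
thetas = [i * (math.pi / 4) / n_steps for i in range(1, n_steps)]
best = max(thetas, key=area)
```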
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Prove that $ \int_{\gamma} \frac{1}{z} dz = 0$ Let $\displaystyle\gamma$ be a closed curve entirely contained in $A =\mathbb C \setminus\{z\in\mathbb C: Re(z)\leq 0\}$. I found a similar problem here: Find $\int_{\gamma}\frac{dz}z$, but there they concluded that the value of the contour integral is $\displaystyle i\pi$. How does the result change for this problem?
If you know Cauchy's integral theorem you can use that $\gamma$ is closed and $\frac{1}{z}$ is holomorphic in $A$, so the theorem says that the integral is zero. However, I think the easier way is to notice that in $A$ the principal branch of the complex logarithm is well defined, so you can use that $\frac{1}{z} = \frac{\mathrm{d}}{\mathrm{d}z} \log(z)$, and the integral is $$ \int_{\gamma} \frac{1}{z} \,\mathrm{d}z = \log(\gamma(1)) - \log (\gamma(0)) = 0$$ because the curve is closed, so $\gamma(0)=\gamma(1)$. Edit: Here I'm thinking of $\gamma$ parameterized on the unit interval, i.e. $\gamma: [0,1] \rightarrow A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the interior of the union of $n$ closed balls equal to the union of the interiors of the $n$ closed balls? I am reading "Calculus on Manifolds" by Michael Spivak. I am solving problem 1-22 on p.10 now. If the following equality holds, I can solve the problem. Let $B_1, \dots, B_n$ be closed balls in $\mathbb{R}^m$. Intuitively, I guess the following equality holds, but I cannot prove that. Does the following equality hold? $$\operatorname{Int}(B_1 \cup \dots \cup B_n) = \operatorname{Int}(B_1) \cup \dots \cup \operatorname{Int}(B_n)$$
Your intuition is right about one inclusion: the union of the interiors is always contained in the interior of the union. But the two sets need not be equal, even for closed sets. Here is a counterexample: take the two intervals $[1,2]$ and $[2,3]$ in $\mathbb R$. The interior of the union $[1,3]$ is $(1,3)$, which contains the point $2$, while the union of the interiors is $(1,2)\cup(2,3)$, which does not. So the interior of the union is not equal to the union of the interiors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3730921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many equivalence classes will there be? Consider the subset $T\subseteq \mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ whose triples are the corner angles (in degrees) of a (real) triangle. For example $(30, 70, 80)\in T$ but $(10, 30, 50) \not\in T$ (since $10 + 30 + 50 < 180$), and $(−10, 20, 170) \not\in T$ (since there cannot be a negative angle). We define a relation on $T$ by $(a_1, b_1, c_1)\sim(a_2, b_2, c_2)$ if and only if the triangles that these triples come from have the same largest angle. This is the exact question from my e-book, and I am literally confused about how to tackle it. I know it must involve combinations, but that way there will be a lot of cases. If I am right, then I can solve this via $a+b+c=180$ (probably $15931$ is the answer), but I'm not sure about applying a particular formula here, and I know my professor isn't going to allow me to use that formula because we haven't covered it in our course. So is there any other way to solve this? I really appreciate you reading this question. It would be very helpful if you answered it. Thanks
For $(a,b,c)$ to represent angles of a triangle, we should have $a+b+c=180$. Two triangles are equivalent iff their largest angles are the same. For example, $(61,60,59) \sim (61,61,58)$. Note that at least one of $a,b$ or $c$ must be at least $60$ for the triangle to exist. So to count the number of equivalence classes, we just have to look at the possible values that the largest angle can take. Suppose $a$ is the largest angle, then $60 \leq a \leq 178$. So the number of equivalence classes will be $119$. Note that the size of the classes can be different. For example $$[(60,60,60)]=\{(60,60,60)\} \qquad \text{ and } \qquad [(61,60,59)]=\{(61,60,59), (61,61,58)\}.$$
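This count is easy to verify exhaustively in Python, enumerating all ordered triples of positive integer angles summing to $180$ (the total number of such triples, $15931$, also matches the count guessed in the question):

```python
largest_angles = set()
triples = 0
for a in range(1, 179):
    for b in range(1, 180 - a):
        c = 180 - a - b          # c >= 1 is guaranteed by the range of b
        largest_angles.add(max(a, b, c))
        triples += 1
```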
{ "language": "en", "url": "https://math.stackexchange.com/questions/3731078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to find the area of a rectangle inscribed in an ellipse. A rectangle is inscribed in the ellipse $4x^2+9y^2=144$: its vertices lie on the ellipse and its sides are parallel to the ellipse's axes. The longer side, which is parallel to the major axis, relates to the shorter side as $3:2$. Find the area of the rectangle. I can find the values of $a$ and $b$: $$\frac{4x^2}{144}+\frac{9y^2}{144}=1$$ $$\frac{x^2}{6^2}+\frac{y^2}{4^2}=1$$ Comparing with $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ gives $a=6$ and $b=4$. From here I have no idea how to proceed.
Let the top-right corner be at $(x,y)$. Squaring the aspect ratio, we have the system $$\begin{cases}4x^2+9y^2=144,\\4x^2=9y^2\end{cases}$$ the solution of which is $x^2=18,y^2=8$. Area $$4\sqrt{18\cdot8}=48.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3731491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
What does $\int_{-\infty}^{\infty}\sum_{i=0}^{\infty}\frac{x^i}{i!}dx$mean? I was reading a research paper and I came across this sentence which I didn't really understand. We first cast Eq.1 into its standard form: $$\int_{-\infty}^{\infty}\sum_{i=0}^{\infty}\frac{x^i}{i!}dx$$ $$T^2-944T+155184-h=0$$ Eq. 1 is stated at the beginning of the paper and is the following: $$h(T)=520(212-T)+{(212+T)}^2$$ What are the integral and sums supposed to mean and/or refer to?
I don't know the context, but $$\sum_{i=0}^{\infty} \frac{x^i}{i!} = e^x$$ because the sum is the Taylor series of $e^x$. If this is the case, the integral has a clearer meaning.
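A quick check of that identification in Python (partial sums of the series against `math.exp` at a few sample points):

```python
import math

def partial_exp(x, terms=30):
    # partial sum of the Taylor series of e^x
    return sum(x**i / math.factorial(i) for i in range(terms))

errors = [abs(partial_exp(x) - math.exp(x)) for x in (-2.0, 0.5, 3.0)]
```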
{ "language": "en", "url": "https://math.stackexchange.com/questions/3731598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about finding the value of an infinite sum What is the value of: $$\sum_{k=0}^\infty \frac{1}{(4k+1)^2}?$$ I realised that $$\sum_{n=2,4,6,8,...} \frac{1}{n^2} + \sum_{n=1,3,5,7,...} \frac{1}{n^2} = \sum_{n \geq 1 } \frac{1}{n^2}$$ $$\sum_{n \geq 1 } \frac{1}{4n^2}+\sum_{n \geq 0 } \frac{1}{(2n+1)^2} = \sum_{n \geq 1 } \frac{1}{n^2}$$ $$\sum_{n \geq 0 } \frac{1}{(2n+1)^2} = \frac{3}{4}\frac{\pi^2}{6} \qquad \Rightarrow \qquad \sum_{n=1,3,5,7,...} \frac{1}{n^2} = \frac{1}{8} \cdot \pi^2$$ So $$\sum_{k=0}^\infty \frac{1}{(4k+1)^2} + \sum_{k=0}^\infty \frac{1}{(4k+3)^2} = \frac{\pi^2}{8}$$ But I cannot find the value of the second summation. Any suggestions?
The sum can be expressed in terms of Catalan's constant. In the following video, the sum is computed in a step by step manner: https://www.youtube.com/watch?v=r2OJtsHNDZA.
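For reference, the closed form hinted at here is $\sum_{k\ge0}\frac1{(4k+1)^2}=\frac{\pi^2}{16}+\frac{G}{2}$, where $G\approx0.9159655942$ is Catalan's constant; this follows from combining the sum $\frac{\pi^2}{8}$ of the two subseries (derived in the question) with their difference $G=\sum_{k\ge0}\frac{(-1)^k}{(2k+1)^2}$. A quick numerical check in Python:

```python
import math

G = 0.9159655941772190  # Catalan's constant, to double precision

N = 200000
partial = sum(1.0 / (4 * k + 1)**2 for k in range(N))
closed = math.pi**2 / 16 + G / 2

# the tail sum_{k>=N} 1/(4k+1)^2 is below 1/(16*(N-1)) ~ 3e-7
err = abs(partial - closed)
```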
{ "language": "en", "url": "https://math.stackexchange.com/questions/3731838", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find $\sum_{n=-\infty}^{\infty} (0.5)^n$ We have: $$L=\lim_{n\to\infty} \sum_{k=-n}^n \frac1{2^k}.$$ The limit surely diverges to $\infty$, but I can't think of a proper way to show this. Please suggest how I can show that $L=\infty$. Thanks in advance.
$$\sum_{k=-n}^n\frac1{2^k}\ge2^n$$ just by the term $k=-n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3731964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Determinant of $2 \times 2$ block matrix whose diagonal blocks are zero $\Bbb A$ is an $n\times n$ matrix, and $\Bbb B$ is an $m \times m$ matrix. What is the determinant of matrix $\Bbb C$? \begin{equation*} \mathbb{C}= \begin{pmatrix} \begin{array}{@{}c|c@{}} \begin{matrix} 0 \end{matrix} & \mathbb{A} \\ \hline \mathbb{B} & \begin{matrix} 0 \end{matrix} \end{array} \end{pmatrix} \end{equation*} I thought it could be just ($\det\Bbb A\cdot\det\Bbb B$) or ($-\det\Bbb A\cdot\det\Bbb B$), but I'm not sure, as it seems too easy.
This determinant can be computed by the Laplace expansion theorem (the generalized form). Fix the first $n$ rows $1, 2, \ldots, n$, and let the column indices $1 \le j_1 < j_2 < \cdots < j_n \le n+m$ range over the $n$-element subsets of $\{1, 2, \ldots, n + m\}$. Since the corresponding minor is non-zero only if $(j_1, j_2, \ldots, j_n) = (m + 1, m + 2, \ldots, m + n)$, it follows that \begin{align*} \det C = & C\begin{pmatrix}1 & 2 & \cdots & n \\ m + 1 & m + 2 & \cdots & m + n\end{pmatrix}(-1)^{1 + \cdots + n + m + 1 + \cdots + m + n}C\begin{pmatrix}n + 1 & n + 2 & \cdots & n + m \\ 1 & 2 & \cdots & m \end{pmatrix} \\ = & (-1)^{n(n + 1) + mn}\det(A)\det(B). \end{align*}
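Since $n(n+1)$ is always even, the sign reduces to $(-1)^{mn}$. Here is a brute-force check of that formula in Python with exact rational arithmetic (a naive Laplace-expansion determinant of my own, fine for tiny matrices):

```python
from fractions import Fraction
from random import Random

def det(M):
    # Laplace expansion along the first row (exponential, fine for tiny matrices)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

rng = Random(0)
ok = True
for n in range(1, 4):
    for m in range(1, 4):
        A = [[Fraction(rng.randint(-5, 5)) for _ in range(n)] for _ in range(n)]
        B = [[Fraction(rng.randint(-5, 5)) for _ in range(m)] for _ in range(m)]
        zero_nm = [[Fraction(0)] * m for _ in range(n)]
        zero_mn = [[Fraction(0)] * n for _ in range(m)]
        # C = [[0, A], [B, 0]], built row by row
        C = [zero_nm[i] + A[i] for i in range(n)] + \
            [B[i] + zero_mn[i] for i in range(m)]
        if det(C) != (-1) ** (m * n) * det(A) * det(B):
            ok = False
```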
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Explanation for behaviour of graph of $y=x^2e^{-x^2}$ (Maxwell-Boltzmann distribution) Consider the function $$y=x^2e^{-x^2}$$ The graph initially behaves like a parabola; in the later part the exponential factor dominates, i.e., the graph looks exponential after the maximum of the curve. This graph is related to the Maxwell-Boltzmann distribution. Please help me so that I can easily remember the properties of this graph.
You actually gave the mathematical explanation. The graph is below. Over the range $[-1,1]$ the exponential doesn't change that much: it is $1$ at the center and $\frac 1e \approx 0.3679$ at the ends, less than a factor of $3$. The parabola is $0$ at the middle and $1$ at the ends, an infinite ratio, so it dominates the product over this interval. As you get outside that interval, the exponential dominates: from $1$ to $3$ the parabola rises by a factor of $9$, but the exponential drops by a factor of about $2980$.
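The quoted ratios can be checked in a few lines of Python:

```python
import math

y = lambda x: x**2 * math.exp(-x**2)

para_ratio = 3**2 / 1**2                  # parabola grows by 9 from x=1 to x=3
exp_ratio = math.exp(-1) / math.exp(-9)   # exponential shrinks by e^8 ~ 2981
prod_ratio = y(1.0) / y(3.0)              # net collapse of the product: e^8 / 9

# consistent with the maximum of y sitting at |x| = 1
peak_at_one = y(1.0) > y(0.9) and y(1.0) > y(1.1)
```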
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A proof of existence of canonical divisors I am confused by the proof of Lemma 1.5.10 in Algebraic Function Fields and Codes by Henning Stichtenoth. Let $0\ne\omega\in\Omega_F$. Then there is a uniquely determined divisor $W\in M(\omega)$ such that $A≤W$ for all $A\in M(\omega)$. $\Omega_F$ is the set of Weil differentials of the function field $F/K$, and $M(\omega)$ is the set of divisors $A$ such that $\omega$ vanishes on the adele space $\mathcal A_F(A)+F$. In the proof he says: ... we can choose a divisor $W\in M(\omega)$ of maximal degree. Suppose $W$ does not have the property of our lemma. Then there exists a divisor $A_0\in M(\omega)$ with $A_0\not\le W$, i.e. $v_Q(A_0)>v_Q(W)$ for some place $Q$. We claim that $W+Q\in M(\omega)$, which is a contradiction to the maximality of $W$. In fact, consider an adele $\alpha=(\alpha_P)\in\mathcal A_F(W+Q)$. We can write $\alpha=\alpha^\prime+\alpha^{\prime\prime}$ with $$\alpha_P^\prime=\begin{cases}\alpha_P&\text{ for }P\ne Q,\\0&\text{ for }P=Q,\end{cases}\quad\alpha_P^{\prime\prime}=\begin{cases}0&\text{ for }P\ne Q,\\\alpha_Q&\text{ for }P=Q.\end{cases}$$ Then $\alpha^\prime\in\mathcal A_F(W)$ and $\alpha^{\prime\prime}\in\mathcal A_F(A_0)$. As far as I know, $\alpha^\prime\in\mathcal A_F(W)$ only if $0=v_Q(\alpha_Q^\prime)\ge-v_Q(W)$, but I can see nowhere that $v_Q(W)\ge0$, as well as $v_P(A_0)\ge0$. Forgive me if this is trivial.
Note that $\alpha'_P$ is the component of the adele $\alpha'$ at the place $P$, and that $\nu_P(0) = \infty$ by definition. Therefore, $\nu_Q(\alpha'_Q) = \nu_Q(0) = \infty > -\nu_Q(W)$ since $\nu_Q(W)$ is necessarily finite. Therefore, you get that $\alpha'\in \mathcal A_F(W)$. The same reasoning also gives you that $\alpha''\in \mathcal A_F(A_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Nice inequality: prove that $\Gamma\Big(\frac{\sin(x)}{x}\Big)\leq \frac{\pi}{\pi-x}$ Playing with GeoGebra I get: Let $0\leq x<\pi$; then we have: $$\Gamma\Big(\frac{\sin(x)}{x}\Big)\leq \frac{\pi}{\pi-x}$$ where $\Gamma$ is the Gamma function. I have tried to use the Wendel inequality to prove that the ratio of the LHS and the RHS is one when $x$ tends to $\pi$, without success. The derivative is here but I can't handle it. I have tried power series of the Gamma function but I think it reveals nothing good. So now I think it's not a trivial problem and I can't solve it. My question: how to solve it properly? Thanks in advance for your comments/answers.
Since $\Gamma(s)=\Gamma(1+s)/s$, the claimed inequality is equivalent to $\Gamma\left(1+\frac{\sin x}{x}\right)\leq \frac{\pi \sin(x)}{x(\pi -x)}$, and over $[0,\pi]$ both bounds hold through $1$: $$ \Gamma\left(1+\frac{\sin x}{x}\right) \leq 1 \leq \frac{\pi \sin(x)}{x(\pi -x)}.$$
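A quick numerical sanity check of the two bounds (my own sketch, not part of the original answer; the grid of 199 interior points is an arbitrary choice):

```python
import math

# Verify Gamma(1 + sin(x)/x) <= 1 and pi*sin(x) >= x*(pi - x) on a grid
# of interior points of (0, pi); chaining the two through 1 gives the claim.
xs = [k * math.pi / 200 for k in range(1, 200)]
lhs_ok = all(math.gamma(1 + math.sin(x) / x) <= 1 for x in xs)
rhs_ok = all(math.pi * math.sin(x) >= x * (math.pi - x) for x in xs)
print(lhs_ok, rhs_ok)  # True True
```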
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability question - Normal and Uniform In a factory there are $2$ machines that create tubes (they are independent from each other). The length of the tubes of machine A is distributed normally with an expectancy of $101$ cm. and a variance of $102$. ($\mu = 101, V[A]=102$) The tubes of the second machine are distributed uniformly $U(85, 115)$. A proper length of a tube is $[90,110]$, and it is known that $90\%$ of all the tubes are fine. * *What is the percentage of the tubes created in Machine A? I've tried to solve it, but I get a weird answer: $X \sim N(101, \sqrt{102})$ - Machine A (and $\sqrt{102}$ because $\sigma = \sqrt{\mathbb{V}[X]}$ ) $Y \sim \text{Uniform}(85,115)$ - Machine B So we can build this equation: $a \cdot \mathbb{P}(90 \leq X \leq 110) + (1-a) \cdot \mathbb{P}(90 \leq Y \leq 110) = 0.9$ Where $a$ is the percentage of all the tubes created at machine A (and thus $1-a$ is the percentage of machine B; however, I am not sure if this is the right way of doing it). I calculate $\mathbb{P}(90 \leq X \leq 110) = P(Z \leq 0.89) - P(Z \leq -1.09) = 0.6754$ And: $\mathbb{P}(90 \leq Y \leq 110) = \frac{1}{115-85} \cdot (110-90) = \frac{2}{3}$ And I get: $a \cdot 0.6754 + (1-a) \cdot \frac{2}{3} = 0.9$ But $a = 26.7$ which is greater than $1$. I am very confused what I did wrong here. What I think the mistake is: I am not sure if the information that the machines are independent from each other means that: $P_{A} + P_{B} = 1$, meaning, I am not sure that the percentage machine A creates + percentage of machine B creates = $1$, but it seemed logical, no? What is the problem? Thank you!
* *What is requested is a conditional probability. Then to get $a$ you have to solve $$\frac{0.6755\cdot a}{ 0.6755\cdot a +(1-a)\cdot \frac{2}{3}}=0.9$$ leading to $a\approx 0.899$. I want to underline that "variant" has no meaning in statistics: we have "variance" and "standard deviation". If your $102$ is expressed in cm it cannot be a variance but a standard deviation... so verify what your data mean... anyway, the procedure does not change.
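For readers who want to reproduce the number, here is a small sketch solving the answer's equation by bisection (plain Python; the probabilities $0.6755$ and $2/3$ are taken from the thread):

```python
# Solve 0.6755a / (0.6755a + (1-a)*2/3) = 0.9 for a by bisection.
def f(a):
    return 0.6755 * a / (0.6755 * a + (1 - a) * 2 / 3) - 0.9

lo, hi = 0.0, 1.0       # f(0) < 0 and f(1) = 0.1 > 0, so a root lies between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round((lo + hi) / 2, 3))  # 0.899
```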
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convolution with Dirac delta How to solve this expression: $$\int_{-\infty}^{\infty} \left[ \delta(k-k_0)f(k)\right]*f(k)dk=?$$ Here $\delta$ represents the Dirac delta function and $*$ represents the convolution over the $k$ variable. What I think: $$\int_{-\infty}^{\infty} \left[ \delta(k-k_0)f(k)\right]*f(k)dk=\int_{-\infty}^{\infty} \delta(k-k_0)*\left[f(k)f(k)\right]dk = \int_{-\infty}^{\infty} f(k-k_0)^2dk =\int_{-\infty}^{\infty}f(k)^2dk $$ However, I have doubts about the solution as the influence of the Dirac function seems to disappear?
$$\left[\delta\left(k-k_0\right)\ f(k)\ *\ f(k)\right](x)=f\left(k_0\right)\ f\left(x-k_0\right)$$ $$\int\limits_{-\infty}^{\infty}\left[\delta\left(k-k_0\right)\ f(k)\ *\ f(k)\right](x)\ dx=\int\limits_{-\infty}^{\infty}f\left(k_0\right)\ f\left(x-k_0\right)\ dx=f\left(k_0\right)\int\limits_{-\infty}^{\infty}f(u)\ du$$ So the delta does not disappear: it survives as the factor $f\left(k_0\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3732854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving $\prod_{i = 1}^n X_i \xrightarrow[n \to \infty]{(\mathbb{P})} 0 \iff \prod_{i = 1}^\infty \mathbb{E}[\sqrt{X_i}] = 0$ I am having a hard time proving the following : Let $(X_i)_{i \geq 1}$ be a sequence of independent random variables which take their value in $\mathbb{R}^{+*}$ and such that $\mathbb{E}[X_i]=1$ for all $i \geq 1$. Then prove that : $$\prod_{i = 1}^n X_i \xrightarrow[n \to \infty]{(\mathbb{P})} 0 \iff \prod_{i = 1}^\infty \mathbb{E}[\sqrt{X_i}] = 0$$ Intuitively it seems quite natural since most of the mass of the $X_i$ is on $[0,1]$. Taking the square-root just means we are concentrating further the mass of the $X_i$ around one. Thus it should go to zero. Yet I don't know how to prove this result. I tried taking the logarithm to manipulate sums yet doesn't seem to work.
Fix a positive $\varepsilon$. For any integer $n$, you can control $E[\sqrt{\prod_i X_i}]$ by separating the events $\prod_i X_i < \varepsilon$ and $\prod_i X_i \geq \varepsilon$. The first term is bounded by $\sqrt{\varepsilon}$. For the second term, use Cauchy-Schwarz: $E[1_{\prod_i X_i \geq \varepsilon } \cdot \sqrt{\prod_i X_i}] \leq \sqrt{P(\prod_i X_i \geq \varepsilon)}\,\sqrt{E[\prod_i X_i]} = \sqrt{P(\prod_i X_i \geq \varepsilon)}$, since independence and $E[X_i]=1$ give $E[\prod_i X_i]=1$. The hypothesis tells you this term tends to $0$ as $n$ tends to infinity; since independence also gives $E[\sqrt{\prod_i X_i}]=\prod_i E[\sqrt{X_i}]$, this proves the "$\Rightarrow$" direction.
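A concrete illustration of the key identity $E[\sqrt{\prod_i X_i}]=\prod_i E[\sqrt{X_i}]$ (my own example, not from the answer): take $X$ equal to $1/2$ with probability $2/3$ and $2$ with probability $1/3$, so $E[X]=1$ but $E[\sqrt{X}]=2\sqrt{2}/3<1$, and enumerate all outcomes for $n=10$ independent copies:

```python
import math
from itertools import product

# X takes 1/2 w.p. 2/3 and 2 w.p. 1/3: E[X] = 1, E[sqrt(X)] = 2*sqrt(2)/3 < 1.
# Enumerating all 2^10 outcome paths confirms E[sqrt(prod X_i)] = E[sqrt(X)]^n,
# which tends to 0 as n grows.
vals = [(0.5, 2 / 3), (2.0, 1 / 3)]
n = 10
e_sqrt_prod = sum(
    math.sqrt(math.prod(v for v, _ in path)) * math.prod(p for _, p in path)
    for path in product(vals, repeat=n))
print(abs(e_sqrt_prod - (2 * math.sqrt(2) / 3) ** n) < 1e-12)  # True
```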
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is This Proof for "If $\sup A < \sup B$, show that there exists an element $b\in B$ that is an upper bound for $A$" correct? This question is from Understanding Analysis (Stephen Abbott), Exercise $1.3.9$. The Question is If $\sup A < \sup B$, show that there exists an element $b \in B$ that is an upper bound for $A$. My proof is as follows: If there exists an element $b\in B$ that is an upper bound for $A$ then $(\exists b\in B)(\forall a\in A) a <b$. Assume (for the sake of contradiction) that $\sup(A) < \sup(B)$ but $(\forall b\in B)(\exists a\in A)b \leq a$. Since $\sup(A) \geq a \geq b (\forall a \in A, b \in B)$ and $\sup(A) < \sup(B)$, $\sup(A)$ is an upper bound for B which is less than $\sup(B)$ which is a contradiction. Therefore, if $\sup A < \sup B$, there exists an element $b\in B$ that is an upper bound for $A$. Is this proof correct?
Your proof is correct but be careful with quantifiers. $(\forall a \in A ,b \in B\, a \geq b)$ is not true. We only know that for each $b \in B$ there exists $a \in A$ such that $a \geq b$, but in general this is not true for every element $a \in A$. Here is another proof which I find to be more natural: Let $c \in ]\sup A, \sup B[$. There exists $b \in B$ such that $c \leq b$, and thus for all $a \in A$ we have $a \leq \sup A \leq c \leq b$, so $b$ is an upper bound for $A$. You will find that this proof is rather intuitive when making a drawing of the situation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the true status of the Lehmer totient problem? The Lehmer totient problem: For a prime number $\ n\ $ we have $\ \varphi(n)=n-1\ $. In particular, we have $\ \varphi(n) \mid n-1\ $. Is there a composite number $\ n\ $ with $\ \varphi(n)\mid n-1\ $? It can easily be shown that such a number must be a Carmichael number. What is the real status of this problem? I found some pages on the internet claiming a proof, but neither Wikipedia nor MathWorld considers this problem to be solved. The best lower bound is claimed to be $10^{22}$ in MathWorld, but Wikipedia still gives $10^{20}$ as the best bound. Which is true? And is the problem solved or not?
A quick search through arXiv yields the following results: (1) This proof for the Lehmer totient problem has been withdrawn: On the Lehmer's problem involving Euler's totient function (Huan Xiao) (2) I highly doubt the validity of the following proof, although it is not yet withdrawn: An analytical proof for Lehmer's totient conjecture using Mertens' theorems (Ahmad Sabihi) Regarding the lower bounds for $n$ and $\omega(n)$ when $\varphi(n) \mid (n-1)$, I quote from the MathWorld website: The best current result is $n > {10}^{22}$ and $\omega(n) \geq 14$, improving the ${10}^{20}$ lower bound of Cohen and Hagis (1980) - "On the Number of Prime Factors of $n$ if $\varphi(n) \mid (n-1)$" - since there are no Carmichael numbers less than ${10}^{22}$ having $\geq 14$ distinct prime factors (Pinch). When $3 \mid n$, then it is known that $$\omega(n) \geq 40000000$$ and $$n > {10}^{360000000}$$ by computational work of P. Burcsi, S. Czirbusz, and G. Farkas (2011).
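A brute-force sketch (the bound 2000 is an arbitrary choice of mine) confirming that, in this small range, the only solutions of $\varphi(n)\mid(n-1)$ are the primes:

```python
from math import gcd

# Below 2000, every n with phi(n) | (n - 1) is prime -- consistent with
# Lehmer's conjecture that no composite solution exists.
def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

hits = [n for n in range(2, 2000) if (n - 1) % phi(n) == 0]
print(all(is_prime(n) for n in hits), len(hits))  # True 303
```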
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Does $\left\lVert a-b \right\rVert_\infty < \epsilon \implies (\int a-b)< \int \epsilon$? $a,b$ are continuous functions. I think this is true, as constants are continuous functions, but I'm not sure if it holds for all cases. Edit: integrated between two real numbers (i.e., over a bounded interval).
This holds only if the measure space is finite; otherwise you'll have only $\int (a-b) \le \int \varepsilon$, because both sides may be infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If R is a ring, and A has all the sets in R and its complements, is A an algebra? (Halmos Measure Theory question) The question I have is related to problem 4.5 in chapter 1 of Halmos' text. Some definitions related to the question are the following. If $X$ is a set then a ring $\textbf{R}$ is a non-empty class of subsets such that if $E,F\in \textbf{R}$ then $E\cup F, E-F\in \textbf{R}$, where $E-F=E\cap F^c$. Similarly, an algebra is a non-empty class of sets such that if $E,F\in \textbf{R}$ then $E\cup F\in \textbf{R}$, and if $E\in\textbf{R}$ then $E^c\in\textbf{R}$. The problem in the text is as follows. If $\textbf{R}$ is a ring, and $A=\{E\subseteq X|E\in \textbf{R}$ or $E^c\in\textbf{R}\}$, then show that $A$ is an algebra. My trouble is in the following line of argument in showing so. If $E\in\textbf{R}$ and $F\in A$ is such that $F^c\in\textbf{R}$, why does it imply that $E\cup F\in A$? I'm surely missing something. Here's what I've got till now. $E\cup F^c\in R\subseteq A\Rightarrow$ either $E\cup F^c\in R$ or $E^c \cap F\in R$. In the case of the latter, $E\cup F=(E-F)\cup (F-E)\cup (E\cap F)\in R$ by definition, but I'm stuck with the former and have tried other arguments like showing that it's in the intersection of all algebras that contain the ring and $X$, but can't wrap my head around why $E\cup F$ in this case must be in $A$, and am starting to doubt whether $A$ would even be an algebra. Any pointers to this effect would help, thanks!
Note that $E$ and $F^c$ are in $\mathbf{R}$, hence so is $F^c-E=F^c\cap E^c = (F\cup E)^c = (E\cup F)^c$. But if $(E\cup F)^{c}\in\mathbf{R}$, then $E\cup F\in A$.
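A finite sanity check of the statement (my own sketch; the universe $X=\{0,1,2,3\}$ and the ring $R$ of all subsets of $\{0,1\}$ are arbitrary choices):

```python
from itertools import combinations

# X = {0,1,2,3}; R = all subsets of {0,1} (a ring: closed under union and
# difference). Form A = {E : E in R or X\E in R} and verify A is closed
# under complement and finite union, i.e., that A is an algebra.
X = frozenset(range(4))
R = {frozenset(s) for r in range(3) for s in combinations((0, 1), r)}
subsets = [frozenset(s) for r in range(5) for s in combinations(sorted(X), r)]
A = {E for E in subsets if E in R or (X - E) in R}
print(len(A),
      all((X - E) in A for E in A),
      all((E | F) in A for E in A for F in A))  # 8 True True
```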
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Linear Least Squares with Monotonicity Constraint I'm interested in the multidimensional linear least squares problem: $$\min_{x}||Ax-b||^2$$ subject to a monotonicity constraint for $x$, meaning that the elements of $x$ are monotonically increasing: $x_0 \leq x_1$, $x_1 \leq x_2$, ... , $x_{n-1} \leq x_n$. I basically have two questions regarding this problem: 1.) Is there maybe literature regarding this problem out there? I wasn't able to find anything online so far. 2.) If not, is it maybe possible to rewrite my problem in such a way that I could use already existing methods like Non-Negative Least Squares (NNLS) or a Constrained Least Squares (CLS) method? Regarding the NNLS, I had the idea to formulate my problem in terms of an $\tilde{x} := (x_0, x_1-x_0,\; ...\;,x_n - x_{n-1})$ as this would also achieve monotonicity if every term is non-negative, but I can't seem to do it, maybe I'm missing something here? Many thanks in advance!
Your idea to reformulate the problem so that the variables are $x_0$ and $y_i = x_i - x_{i-1}$ for $i =1, \ldots, n$ will work. Let $y$ be the vector whose components are $x_0, y_1, \ldots, y_n$ and define $$ M = \underbrace{\begin{bmatrix} 1 & 0 & \cdots & 0 \\ 1 & 1 & \cdots & 0 \\ \vdots & & & \vdots \\ 1 & 1 & \cdots & 1 \end{bmatrix}}_{(n+1)\times(n+1)}. $$ Notice that $M y = x$. Expressed in terms of $y$, your optimization problem is to minimize $\| AM y - b \|^2$ subject to the constraint that $y_i \geq 0$ for $i = 1, \ldots, n$. In this reformulated problem, the optimization variable is the vector $y$.
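A minimal sketch of this reparametrization in plain Python (the helper names are mine, not from the answer):

```python
# Build the (n+1)x(n+1) lower-triangular matrix M of ones and check that it
# maps the difference vector y = (x_0, x_1-x_0, ..., x_n-x_{n-1}) back to x.
def build_M(n):
    return [[1 if j <= i else 0 for j in range(n + 1)] for i in range(n + 1)]

def mat_vec(M, v):
    return [sum(r * c for r, c in zip(row, v)) for row in M]

x = [2.0, 3.5, 3.5, 7.0]                       # a monotone vector
y = [x[0]] + [x[i] - x[i - 1] for i in range(1, len(x))]
M = build_M(len(x) - 1)
print(mat_vec(M, y))  # [2.0, 3.5, 3.5, 7.0]
```

After the substitution, one can minimize $\|(AM)y-b\|^2$ with the bound constraints $y_i\ge 0$ for $i\ge 1$ using any bound-constrained least-squares solver, for instance SciPy's `scipy.optimize.lsq_linear` with lower bounds $(-\infty,0,\dots,0)$.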
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Spreading tickets in a lottery actually diminishes your chances? Here is the scenario: There is a lottery running for $n$ terms, which means that it is repeated. In each term, there are a total of $T$ tickets and one prize. You currently own $t$ tickets and your dilemma is to either use all of your tickets in one go or spread them over $n$ terms. The probability of winning at least one prize (naturally, only one in the first scenario) when you group or evenly spread your tickets can be calculated respectively: $P_1=\frac{t}{T}$ and $P_2=1-(\frac{T-\frac{t}{n}}{T})^n$ Let's say that $T=100$, $t=12$ and $n=2$: $P_1=\frac{12}{100}=0.12$ and $P_2=1-(\frac{100-\frac{12}{2}}{100})^2=0.1164$, hence $P_1>P_2$. Even if I try to spread the tickets unevenly, like 11 tickets in one term and 1 in the other, the relation is the same: $P_1=0.12$ and $P_2=1-(\frac{100-11}{100})\cdot(\frac{100-1}{100})=0.1189$, still $P_1>P_2$. The mathematical model where the tickets are distributed unevenly becomes: $P_2=1-\prod_{i=1}^n\frac{T-t_i}{T}$ where $t_i$ is the number of tickets spent in each term. I tried plotting the mathematical models on Desmos and playing around with different combinations of variables, but it always seemed that using all tickets together, even if by a minuscule margin, gives better chances of winning anything at all than spreading them, in every case. Will this always be the case; should we always use all of our tickets in one go? How can it be mathematically proved then? I think that the number of prizes shouldn't change the outcome, should it? Thank you for reading!
Suppose $n=2$. Using $x$ tickets in the first lottery and $t-x$ in the second yields a probability of winning $1-(1-\tfrac{x}{T})(1-\tfrac{t-x}{T})$. The maximum over the range $0\leq x \leq t$ is attained at the endpoints, so it is better to enter only one lottery, either the first or the second. This is true in general for $n>2$ (by induction). The idea is that you can always repeat the previous argument for two of the lotteries (say, the 4th and the 10th) keeping the number of tickets used in the others fixed, and deduce that the optimal strategy would be to use all the tickets dedicated to the 4th and the 10th lottery in only one of them.
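The $n=2$ step is easy to check numerically with the question's own numbers (a sketch; expanding the product gives win probability $t/T - x(t-x)/T^2$, which is minimized in the middle and maximized at the endpoints):

```python
# Splitting t = 12 tickets as (x, t - x) over two terms with T = 100 tickets
# each: the win probability is maximized at x = 0 or x = t (no splitting).
T, t = 100, 12
def p_win(x):
    return 1 - (1 - x / T) * (1 - (t - x) / T)

probs = [p_win(x) for x in range(t + 1)]
best = max(range(t + 1), key=lambda x: probs[x])
print(best in (0, t), round(max(probs), 4))  # True 0.12
```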
{ "language": "en", "url": "https://math.stackexchange.com/questions/3733941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Constructing a right triangle with a given hypotenuse segment and given point of tangency for its incircle Given a hypotenuse $AB$ and an arbitrary point $C$ on $AB$. How to construct a right triangle with the given hypotenuse $AB$ such that point $C$ is the point of tangency of the inscribed circle? My attempt: First draw a circle with $AB$ as diameter (Thales' theorem). If I get the incenter of the triangle then the rest is easy, but how to locate the point $G$ on the circle, or how to locate the incenter, with the given information? Any hints or ideas? Here is my construction: $F$ is the midpoint of $CD$.
Let $ABG$ be the triangle you want to construct, $F$ its incenter. In the circle $\Gamma$ with diameter $AB$, let $D$ be the endpoint of the diameter perpendicular to $AB$ and on the opposite side of $AB$ to $G$. Since $AGB$ is a right triangle, $G$ lies on $\Gamma$. Because $GF$ bisects $\angle AGB$, it meets $\Gamma$ at $D$. Furthermore, it is easy to see that $\angle AFB = 135^{\circ}$. It follows that $F$ is on the circle $\Gamma'$ centered at $D$ passing through $A$ and $B$. Hence $G$ can be constructed as follows. First construct the point $D$. Then let $F$ be the intersection of $\Gamma'$ with the perpendicular to $AB$ through $C$. Lastly, let $G$ be the other intersection of $DF$ with $\Gamma$.
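A coordinate sanity check of this recipe (my own sketch, with $A=(-1,0)$, $B=(1,0)$ on the unit circle and an arbitrary sample value $c=0.3$ for $C$):

```python
import math

# Carry out the construction for A = (-1,0), B = (1,0), C = (c,0) and verify
# that the incenter of the resulting triangle ABG sits directly above C,
# so the incircle touches AB exactly at C.
c = 0.3
D = (0.0, -1.0)                          # endpoint of the diameter perp. to AB
F = (c, -1.0 + math.sqrt(2.0 - c * c))   # on circle about D of radius |DA| = sqrt(2)
dx, dy = F[0] - D[0], F[1] - D[1]
t = -2.0 * (D[0] * dx + D[1] * dy) / (dx * dx + dy * dy)
G = (D[0] + t * dx, D[1] + t * dy)       # second intersection of DF with the circle

A, B = (-1.0, 0.0), (1.0, 0.0)
a, b, g = math.dist(B, G), math.dist(A, G), math.dist(A, B)
# incenter = side-length weighted average of the vertices
I = ((a * A[0] + b * B[0] + g * G[0]) / (a + b + g),
     (a * A[1] + b * B[1] + g * G[1]) / (a + b + g))
print(abs(I[0] - c) < 1e-9, abs(math.hypot(*G) - 1.0) < 1e-9)  # True True
```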
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proving angles in a circle are equal $A$, $B$, $R$ and $P$ are four points on a circle with centre $O$. $A$, $O$, $R$ and $C$ are four points on a different circle. The two circles intersect at the points $A$ and $R$. $CPA$, $CRB$ and $AOB$ are straight lines. Prove that angle $CAB$ = angle $ABC$. Not really sure how to start here. I am thinking about proving the sides $AC = CB$, but not sure how I can do that. The only striking thing is that $AB$ is a diameter, so angle $APB = 90$. Then letting $PAB = x$, one gets $PBA = 90 - x$ and also $CPB = 90$. However, can't get much further from here.
Since $AB$ is a diameter, as you've already stated, this means $\measuredangle ARB = 90^{\circ}$ as well. As such, you also have $\measuredangle ARC = 90^{\circ}$. Thus, in the circle on the left side, you have $AC$ is its diameter. This means $\measuredangle COA = 90^{\circ}$ (note you could also get $\measuredangle COA = \measuredangle ARC$ from that both angles subtend the same chord of $AC$) and, thus, $\measuredangle COB = 90^{\circ}$ also. Since $|OA| = |OB|$, you have with $CO$ being a common side that the Pythagorean theorem in $\triangle COA$ and $\triangle COB$ gives $|CA| = |CB|$. This means $\triangle ABC$ is isosceles so $\measuredangle CAB = \measuredangle ABC$. Update: Instead of using the Pythagorean theorem, I could've used that side-angle-side (SAS) matches so $\triangle COA \cong \triangle COB$ which directly gives $\measuredangle CAB = \measuredangle ABC$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Open and Closed subsets of $\mathbb{R}^n$ Let $A,B\subseteq \mathbb{R}^n$, and define $A+B=\{a+b : a\in A,b\in B\}$. Then which of the following is/are true? $(1)$ If $A$ and $B$ are open, then $A+B$ is open. $(2)$ If $A$ is open and $B$ is closed, then $A+B$ is closed. $(3)$ If $A$ is closed and $B$ open, then $A+B$ is open. $(4)$ If $A$ and $B$ are closed, then $A+B$ is closed. My attempt To prove $(1)$ Let $a+b \in A+B$. Since $A$ and $B$ are open, there are $\epsilon_1,\epsilon_2$ such that $B(a,\epsilon_1)\subseteq A$ and $B(b, \epsilon_2) \subseteq B$ Here $B(a,\epsilon_1)=\{x\in \mathbb{R}^n : ||x-a||\lt \epsilon_1\}$ Let $\epsilon=\min \{\epsilon_1,\epsilon_2\}$ Let $z\in B(a+b,\epsilon) $ be arbitrary. Then $||(a+b)-z||\lt \epsilon$ or $||a-(z-b)||\lt \epsilon \lt \epsilon_1$ $\Rightarrow z-b=a_1 \in A$ $\Rightarrow z=a_1+b \in A+B$ Hence $B(a+b,\epsilon) \subseteq A+B$, implying $A+B$ is open if $A$ and $B$ are open. $(2)$ It is false by taking $A=(0,1) $ and $B=\{0\}$ in $\mathbb{R}$ To prove $(3)$ This is much the same as $(1)$ Let $a+b\in A+B$. Since $B$ is open there is $\epsilon \gt 0$ such that $B(b,\epsilon)\subseteq B$ Let $z\in B(a+b,\epsilon)$ $||(a+b)-z||\lt \epsilon$ $\Rightarrow ||b-(z-a)||\lt \epsilon $ $z-a=b_1 \in B$ $z=a+b_1 \in A+B$ and thus $A+B$ is open. $(4)$ It is false; I have two examples. Eg $(a)$ $A=\mathbb{N}$ and $B=\{-n+\frac 1n: n\in \mathbb{N}\}$ in $\mathbb{R}$. Then $A$ and $B$, being discrete subsets, are closed. But $A+B$ contains the sequence $\{1/n \}$ converging to $0$ but $0\notin A+B$ Eg $(b)$ A book I referred to asked to think of $\tan x$ on $(-\pi/2,\pi/2)$ in $\mathbb{R}^2$ but I can't proceed. Can you please help me complete it? Sorry for a long solution but, being a budding mathematician, I want correct proofs. Can you please go through it and point out mistakes, if any? Any alternative ideas will be appreciated. Thanks for your valuable time.
Your answers seem right to me. Here is a more general way to look at (1) and (3). Claim 1. If $A$ is open and $B$ is arbitrary then $A+B$ is open. Proof. Note that $A+B=\bigcup_{b\in B}A+\{b\}$. Each set $A+\{b\}$ is open (by your argument for "open + closed is open"; but this specific case is easier.) Now $A+B$ is a union of open sets so it's open. For $\tan x$ I guess you could do something like let $A$ be the graph of $\tan x$ on that interval and let $B=\{(0,r):r\in\mathbb{R}\}$. Those are closed sets but $A+B$ is $(-\pi/2,\pi/2)\times\mathbb{R}$ which is not closed. (I like your example better.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Localization commutes with Hom for finitely presented modules I am trying to solve an exercise given in Vakil's Algebraic Geometry notes. Suppose $M$ is a finitely presented $A$-module. Then $M$ fits inside an exact sequence $A^q\rightarrow A^p\rightarrow M\rightarrow 0$. I'd like to understand why in this case we get an isomorphism $S^{-1}\text{Hom}_A(M,N)\cong \text{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N)$. This problem is towards the beginning of the book, so in particular there should be a way to solve it without heavy duty commutative algebra. So far, I've only come up with the following: We can use the universal property of localization of modules so that for any map from $\text{Hom}_A(M,N)$ to $\text{Hom}_{S^{-1}A} (S^{-1}M,S^{-1}N)$ (in which the elements of $S$ are invertible), there exists a unique map from $S^{-1}\text{Hom}_A(M,N)$ to $\text{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N)$. However, what should this map explicitly be? Is this the way to go about showing these two are isomorphic? EDIT: There is a question about the same problem, but I am specifically asking about how to construct a map between the two sets. The solution in the related question uses facts about flat modules which I am trying to eschew.
As $S^{-1}$ is a functor we have a map $$\mathrm{Hom}_A(M,N)\rightarrow \mathrm{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N)$$ and as multiplication by an element $s\in S$ gives an isomorphism in the module $\mathrm{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N)$ this map extends to a map $$\tag{$\star$} S^{-1}\mathrm{Hom}_A(M,N)\rightarrow \mathrm{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N)$$ Now you have that * *The map ($\star$) is an isomorphism for $M=A$. More generally, it is an isomorphism for $M=A^n$. *The map ($\star$) is natural in $M$, in particular if we take an exact sequence $$A^q\rightarrow A^p\rightarrow M\rightarrow 0.$$ Then we get a diagram $$\begin{array}{c} 0 &\rightarrow & S^{-1}\mathrm{Hom}_A(M,N) & \rightarrow & S^{-1}\mathrm{Hom}_A(A^p,N) & \rightarrow & S^{-1}\mathrm{Hom}_A(A^q,N)\\ &&\downarrow && \downarrow && \downarrow\\ 0 &\rightarrow & \mathrm{Hom}_{S^{-1}A}(S^{-1}M,S^{-1}N) & \rightarrow & \mathrm{Hom}_{S^{-1}A}(S^{-1}A^p,S^{-1}N) & \rightarrow & \mathrm{Hom}_{S^{-1}A}(S^{-1}A^q,S^{-1}N) \end{array}$$ By the previous point the maps on the 3rd and 4th columns are isomorphism. Hence by Five Lemma the map in the second column is an isomorphism and we finish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Interesting Partition Questions There is a good question here. My question is: "$n$ is a positive integer and $\lfloor x\rfloor$ denotes the largest integer smaller than or equal to $x$. Prove that $\lfloor n / 3\rfloor+1$ is the number of partitions of $n$ into distinct parts where each part is either a power of two or three times a power of two." There is a Theorem related to this question. Theorem: $ p(n \mid \text {parts in } N)=p(n \mid \text { distinct parts in } M) \quad \text { for } n \geq 1 $ where $N$ is any set of integers such that no element of $N$ is a power of two times an element of $N,$ and $M$ is the set containing all elements of $N$ together with all their multiples of powers of two. Can anyone help? Thanks.
Let’s use a generating function. If $p(n)$ is the number of partitions of $n$ into numbers of the form $2^k$ or $3\cdot 2^k$, then we have the following generating function: $$\sum_{n=0}^\infty p(n)x^n = \prod_{k=0}^\infty (1+x^{2^k})(1+x^{3\cdot 2^k})$$ Recall the following identity, which follows from the fact that every nonnegative integer has a unique binary representation: $$\prod_{k=0}^\infty (1+x^{2^k})=1+x+x^2+...=\frac{1}{1-x}$$ From this, it follows that our generating function is given by $$\sum_{n=0}^\infty p(n)x^n=\frac{1}{(1-x)(1-x^3)}$$ On the other hand, we have that $$\begin{align} \sum_{n=0}^\infty (\lfloor n/3\rfloor +1)x^n &= 1+x+x^2+2x^3+2x^4+2x^5+3x^6+... \\ &= (1+x+x^2)(1+2x^3+3x^6+4x^9+...) \\ &= \frac{1+x+x^2}{(1-x^3)^2} \\ &= \frac{1}{(1-x)(1-x^3)} \end{align}$$ Well, whaddaya know?! The two generating functions are equal to each other! Thus, we have the desired result: $$p(n)=\lfloor n/3\rfloor +1$$ QED! Thanks for the fun problem!
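The identity is also easy to confirm by brute force (a sketch, not part of the proof; the range $1\le n\le 59$ is an arbitrary choice):

```python
# Count partitions of n into *distinct* parts, each of the form 2^k or
# 3*2^k, and compare with floor(n/3) + 1.
def count_partitions(n):
    parts = sorted({p for k in range(n.bit_length() + 1)
                      for p in (2 ** k, 3 * 2 ** k) if p <= n})

    def count(i, remaining):
        if remaining == 0:
            return 1
        if i == len(parts) or parts[i] > remaining:
            return 0
        # either skip parts[i], or use it exactly once (distinct parts)
        return count(i + 1, remaining) + count(i + 1, remaining - parts[i])

    return count(0, n)

print(all(count_partitions(n) == n // 3 + 1 for n in range(1, 60)))  # True
```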
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding zeroth coefficient of a Chebyshev polynomial expansion Let $v_\theta = (\cos\theta,\sin\theta)$ be a unit vector in the plane. I have a kernel $p(\theta,\theta') = p(v_\theta\cdot v_{\theta'})$ that satisfies $$\int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\,d\theta' = 1\;\;\;(*)$$ for all $\theta\in [0,2\pi]$. I also have Chebyshev polynomials $T_0,T_1,\dots$ such that $T_k(\cos\theta) = \cos(k\theta)$, normalized such that $$\{T_0/\sqrt{\pi}\}\cup\{\sqrt{2/\pi}T_k\}_{k=1}^\infty$$ form an orthonormal basis of $L^2(-1,1)$ with weight $1/\sqrt{1-t^2}$. Now I write the Chebyshev expansion of my kernel: $$p(t) = \sum_{k=0}^\infty p_kT_k(t), \;\;\;\;\; t\in(-1,1)$$ and I want to show that $p_0 = \frac{1}{2\pi}$. My progress so far: by orthonormality, we have \begin{align*} \int_0^{2\pi} &\frac{1}{\sqrt{\pi}}T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\ &= \sum_{k=0}^\infty\int_0^{2\pi} T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p_kT_k(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1-(v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\ &= \sqrt{2}p_0. \end{align*} Also, noting that $T_0\equiv 1$, I know \begin{align*} \int_0^{2\pi} &\frac{1}{\sqrt{\pi}}T_0(v_\theta\cdot v_{\theta'})\frac{\sqrt{2}}{\sqrt{\pi}}p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'\\ &=\int_0^{2\pi} \frac{\sqrt{2}}{\pi} p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta'. \end{align*} Then it would suffice to show $$\int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\frac{1}{\sqrt{1 - (v_\theta\cdot v_{\theta'})^2}}\,d\theta' = \frac{1}{2}.$$ This is where I'm stuck: I'm not sure how to use (*) in the expression above.
Indeed, since the expression above is constant in $v_\theta$ as we showed earlier, we are free to pick a particular value, say $v_\theta = (1,0)$, to make this $$\int_0^{2\pi} \frac{p(\cos\theta')}{|\sin\theta'|}\,d\theta',$$ but still I am not sure what to do with this.
As it turns out, using orthonormality was a red herring, and the solution is actually quite simple. Choosing $v_\theta = (1,0)$, we compute \begin{align*} 1 &= \int_0^{2\pi} p(v_\theta\cdot v_{\theta'})\,d\theta'\\ &= \sum_{k=0}^\infty \int_0^{2\pi} p_kT_k(v_\theta\cdot v_{\theta'})\,d\theta'\\ &= \sum_{k=0}^\infty \int_0^{2\pi} p_kT_k(\cos\theta')\,d\theta'\\ &= \sum_{k=0}^\infty \int_0^{2\pi} p_k\cos(k\theta')\,d\theta'\\ &= 2\pi p_0 + \sum_{k=1}^\infty \underbrace{\int_0^{2\pi} p_k\cos(k\theta')\,d\theta'}_{= 0}, \end{align*} and so $p_0 = 1/2\pi$.
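A numerical cross-check with a concrete kernel (my own choice, not from the thread): $p(t)=(1+t)/(2\pi)$ satisfies the normalization $(*)$, and computing $p_0=\frac1\pi\int_0^\pi p(\cos u)\,du$ (the substitution $t=\cos u$ in the weighted Chebyshev inner product) by the midpoint rule indeed returns $1/(2\pi)$:

```python
import math

# Zeroth Chebyshev coefficient of p(t) = (1 + t)/(2*pi) via the midpoint
# rule on p_0 = (1/pi) * integral_0^pi p(cos u) du; expected value 1/(2*pi).
p = lambda t: (1 + t) / (2 * math.pi)
N = 20000
p0 = sum(p(math.cos((k + 0.5) * math.pi / N)) for k in range(N)) \
     * (math.pi / N) / math.pi
print(abs(p0 - 1 / (2 * math.pi)) < 1e-8)  # True
```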
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Variance of a discrete random variable that takes on 2 values. Suppose I have a random variable that takes on a value of 10 with p(x=10)=.7 and a value of 20 with p(x=20) = .3. The E(X) = .7(10)+.3(20) = 13. The variance would be the expected value of the squared differences from the mean for each x. Var(X) = .7(13-10)^2 + .3(20-13)^2 = 21. The posted solution has (.7)(.3)(10)^2 which I've identified as (.7)(.3)(20-10)^2. This also equals 21. I just can't see how that identity works. In general it would look like this: Var(X) = p*(1-p)*(b-a)^2 where P(x=b) = p and P(x=a) = 1-p. I'm just not seeing it for some reason, but it appears to always work. I've tried expanding all the expressions out and comparing, but I can't get them to match up.
Consider a Bernoulli random variable $Y$ with parameter $p=0.3$: i.e., $\Pr[Y=1]=p$, $\Pr[Y=0]=1-p$. It is known (and easy to verify) that $\operatorname{Var}[Y] = p(1-p)$. Set $b=20$, $a=10$. Note that $X$ has the same distribution as $(b-a)Y+b$ (can you see why?). Therefore, $$ \operatorname{Var}[X] = \operatorname{Var}[(b-a)Y+b] = \operatorname{Var}[(b-a)Y] = (b-a)^2\operatorname{Var}[Y] = (b-a)^2\cdot p(1-p) $$ where the second and third equality are by properties of the variance (are you familiar with them? $\operatorname{Var}[\alpha X]=\alpha^2\operatorname{Var}[X]$ and $\operatorname{Var}[X+\beta]=\operatorname{Var}[X]$).
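A direct numerical check with the question's numbers (a trivial sketch):

```python
# Compare p*(1-p)*(b-a)^2 with the definition of variance for the
# two-point distribution P(X=b) = p, P(X=a) = 1-p.
p, a, b = 0.3, 10, 20
mean = (1 - p) * a + p * b                              # 13.0
var_direct = (1 - p) * (a - mean) ** 2 + p * (b - mean) ** 2
var_formula = p * (1 - p) * (b - a) ** 2
print(round(var_direct, 6), round(var_formula, 6))  # 21.0 21.0
```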
{ "language": "en", "url": "https://math.stackexchange.com/questions/3734864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Curvature of a Frenet curve on a sphere The question is how to prove that the curvature of any Frenet curve on a sphere with radius $R$ is greater than or equal to $1/R$. I have managed to prove so far that the Gauss curvature of the sphere $x^2+y^2+z^2=R^2$ is $1/R^2$, but I don't know if this helps at all
Suppose $\alpha(s)$ is a unit speed curve lying in the sphere of radius $R$ centered at the origin. Then $\alpha(s) \cdot \alpha (s) = R^2, \tag 1$ whence $\dot \alpha(s) \cdot \alpha(s) = 0; \tag 2$ since $\dot \alpha(s) = T(s), \tag 3$ the unit tangent vector to $\alpha(s)$, (2) becomes $T(s) \cdot \alpha (s) = 0; \tag 4$ differentiating this equation yields $\dot T(s) \cdot \alpha(s) + T(s) \cdot \dot \alpha(s) = 0; \tag 5$ we now recall (3), viz. $\dot \alpha(s) = T(s) \tag 6$ and the Frenet-Serret equation $\dot T(s) = \kappa(s) N(s); \tag 7$ then (5) yields $\kappa(s) N(s) \cdot \alpha(s) + T(s) \cdot T(s) = 0; \tag 8$ also, $T(s) \cdot T(s) = 1, \tag 9$ $T(s)$ being a unit vector. (8) may now be written $\kappa(s) N(s) \cdot \alpha(s) = -1; \tag{10}$ note this forces $\kappa(s) \ne 0; \tag{11}$ taking absolute values in (10) we find $\kappa(s) \vert N(s) \cdot \alpha(s) \vert = 1; \tag{12}$ by Cauchy-Schwarz, $ \vert N(s) \cdot \alpha(s) \vert \le \vert \alpha(s) \vert \vert N(s) \vert = R, \tag{13}$ since $\vert \alpha(s) \vert = R \tag{14}$ and $\vert N(s) \vert = 1; \tag{15}$ assembling (12) and (13) together we have $\kappa(s) R \ge 1, \tag{16}$ or $\kappa(s) \ge \dfrac{1}{R}, \tag{17}$ $OE\Delta$.
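A finite-difference sanity check of the bound (my own sketch; the sample curve $\theta=t,\ \varphi=2t$ on a sphere of radius $R=2$, the step $h=10^{-4}$ and the tolerance are all arbitrary choices):

```python
import math

# For a regular curve a(t), kappa = |a' x a''| / |a'|^3; on a sphere of
# radius R the theorem says kappa >= 1/R. Check this numerically on a
# non-great-circle spherical curve using central differences.
R = 2.0
def alpha(t):  # spherical coordinates theta = t, phi = 2t
    return (R * math.sin(t) * math.cos(2 * t),
            R * math.sin(t) * math.sin(2 * t),
            R * math.cos(t))

def deriv(f, t, h=1e-4):
    p, m = f(t + h), f(t - h)
    return tuple((x - y) / (2 * h) for x, y in zip(p, m))

def second(f, t, h=1e-4):
    p, c, m = f(t + h), f(t), f(t - h)
    return tuple((x - 2 * y + z) / h ** 2 for x, y, z in zip(p, c, m))

def curvature(t):
    d1, d2 = deriv(alpha, t), second(alpha, t)
    cx = (d1[1] * d2[2] - d1[2] * d2[1],
          d1[2] * d2[0] - d1[0] * d2[2],
          d1[0] * d2[1] - d1[1] * d2[0])
    return math.sqrt(sum(c * c for c in cx)) / sum(v * v for v in d1) ** 1.5

ok = all(curvature(0.2 + 0.05 * k) >= 1 / R - 1e-4 for k in range(50))
print(ok)  # True
```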
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rank of $A^n$ and $A^{n+1}$ Suppose $A$ is an $n\times n$ real matrix. Then is it always true that rank($A^n$) = rank($A^{n+1}$) for a matrix $A$? This doubt came while solving the attached question: If A is a 10×10 real matrix, then which of the following is true: * *rank($A^8$)=rank($A^9$) *rank($A^9$)= rank($A^{10}$) *rank($A^{10}$)=rank($A^{11}$) *rank($A^8$)=rank($A^7$) Attempt: I can take a nilpotent matrix of maximal index 10 for a matrix of order 10, and therefore options 1, 2, 4 are rejected, but option 3 still is correct. So I thought: is there any generalization, or did I analyze the question incorrectly? Please throw some light.
Yes, it is always true. One argument is to use Jordan form: a matrix $A$ is necessarily similar to a block diagonal matrix of the form $$ \pmatrix{M & 0\\0& N}, $$ where $M$ is invertible and $N$ is nilpotent. Since $N^n = 0$, the ranks of $A^n,A^{n+1}$ must be equal to the ranks of $$ \pmatrix{M & 0\\0& N}^n = \pmatrix{M^n & 0\\0& 0}, \quad \pmatrix{M & 0\\0& N}^{n+1} = \pmatrix{M^{n+1} & 0\\0& 0}, $$ which is to say that the rank of $A^n$ and of $A^{n+1}$ is simply the size of $M$. So, the ranks are indeed equal.
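A small exact-arithmetic check of the rank stabilization (my own sketch; the $4\times4$ example is a nilpotent $3\times3$ Jordan block together with an invertible $1\times1$ block, mirroring the block decomposition in the argument):

```python
from fractions import Fraction

# Toy check of rank(A^n) = rank(A^(n+1)) for an n x n matrix, using exact
# Gaussian elimination over the rationals (no numpy needed).
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def rank(A):
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[0, 1, 0, 0],   # 3x3 nilpotent Jordan block ...
     [0, 0, 1, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 2]]   # ... plus an invertible 1x1 block
P, ranks = A, []
for _ in range(5):
    ranks.append(rank(P))
    P = mat_mul(P, A)
print(ranks)  # [3, 2, 1, 1, 1] -- stabilizes from the n-th power on
```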
{ "language": "en", "url": "https://math.stackexchange.com/questions/3735153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }