Find the sum to $n$ terms of the series: $1^2.1+2^2.3+3^2.5+....$ Find the sum to $n$ terms of the series: $$1^2.1+2^2.3+3^2.5+.....$$ My Attempt: Here, $n^{th}$ term of $1,2,3,....=n$ $n^{th}$ term of $1^2,2^2,3^2,....=n^2$ Also, $n^{th}$ term of $1,3,5,....=2n-1$ Hence, $n^{th}$ term of the given series is $t_n=n^2(2n-1)$
As $$T_n=n^2(2n-1)=2n^3-n^2,$$ we have $$S_n=\sum T_n$$ i.e. $$S_n=\sum_{k=1}^{n}\left(2k^3-k^2\right)$$ $$S_n=2\sum_{k=1}^{n}k^3-\sum_{k=1}^{n}k^2$$ Hence, $$S_n=\frac{(n(n+1))^2}{2}-\frac{n(n+1)(2n+1)}{6}$$
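A quick numerical sanity check of this closed form against a brute-force sum (my addition, not part of the original answer):

```python
def s_closed(n):
    # S_n = (n(n+1))^2 / 2 - n(n+1)(2n+1)/6
    return (n * (n + 1)) ** 2 // 2 - n * (n + 1) * (2 * n + 1) // 6

def s_brute(n):
    # directly sum T_k = k^2 (2k - 1)
    return sum(k * k * (2 * k - 1) for k in range(1, n + 1))

for n in range(1, 30):
    assert s_closed(n) == s_brute(n)
print(s_closed(3))  # 1*1 + 4*3 + 9*5 = 58
```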
{ "language": "en", "url": "https://math.stackexchange.com/questions/2680816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Theorem Explanation: Sequences and Convergence I'm currently studying analysis and came across this theorem: Let $(x_n)$ be a sequence of real numbers and let $x \in {\Bbb R}$. If $(a_n)$ is a sequence of positive real numbers with $\displaystyle\lim_{n\rightarrow \infty}{a_n} =0$ and if for some constant $C > 0$ and some $m \in {\Bbb N}$ we have $|x_n - x| \leq Ca_n$ $\forall n \geq m$, then: $\displaystyle\lim_{n\rightarrow\infty}{x_n} = x$. I don't understand what the theorem means so I can't even begin to start constructing a proof for it. I've read the statement over and over again; however, I just can't make sense of it. Can anyone please explain what this theorem is saying? Thank you.
This is a different version of the squeeze theorem. It is basically saying that since we know that $a_n$ gets really small as $n \to \infty$, if we know that there is a constant $C> 0$ such that $|x-x_n| \leq Ca_n$ (the distance between $x_n$ and $x$ is at most a multiple of how close $a_n$ is to $0$, since $a_n = |a_n - 0|$), then $x_n$ gets really close to $x$ as $n\to \infty$. Example: We know that $a_n = \frac{1}{n} \to 0$ as $n\to \infty$, and one can check by a calculation that $$\left|\frac{5n^2 + n}{n^2 + n^{1/2} + 1} - 5\right| \leq \frac{3}{n}$$ for $n \geq 25 = m$. If we write $x_n = \frac{5n^2 + n}{n^2 + n^{1/2} + 1}$ and $x = 5$, then $$|x_n - x| = \left|\frac{5n^2 + n}{n^2 + n^{1/2} + 1} - 5\right| \leq \frac{3}{n} = 3a_n =Ca_n,$$ and so the theorem that you are wanting to prove says that $x_n \to x$.
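The claimed bound $|x_n - 5| \le 3/n$ for $n \ge 25$ can be spot-checked numerically (my addition, just a check rather than a proof):

```python
# numerically spot-check the bound |x_n - 5| <= 3/n for n >= 25
def x(n):
    return (5 * n**2 + n) / (n**2 + n**0.5 + 1)

assert all(abs(x(n) - 5) <= 3 / n for n in range(25, 5000))
print(abs(x(25) - 5), 3 / 25)  # the bound holds at n = 25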
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rolling dice probability I must solve the given task: Three dice are rolled. What is the probability that their sum is $5$? I guess I have to use a combination formula, but I do not know the basic approach. Can you please explain. My thoughts: A sum of $5$ can only be obtained from the numbers $1$, $2$ and $3$: $221$, $113$. I guess those are permutations. How must I continue?
If the outcome of the experiment is a triple $(d_1,d_2,d_3)$ with the scores of the three dice, you have $6^3$ possible outcomes, all with the same probability. $3$ of these are a permutation of $1-1-3$ and $3$ are a permutation of $1-2-2$. $6$ favourable outcomes in a universe of $6^3$ possible outcomes gives you a probability of $\frac{6}{6^3}=6^{-2}$ of reaching a sum of $5$.
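The count of favourable outcomes can be confirmed by brute-force enumeration (my addition):

```python
from itertools import product

# enumerate all 6^3 equally likely outcomes of three dice
outcomes = list(product(range(1, 7), repeat=3))
favourable = [o for o in outcomes if sum(o) == 5]
print(len(favourable), len(outcomes))  # 6 216, i.e. probability 6/216 = 1/36
```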
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $3^{2n-1} + 2^{n+1}$ is always a multiple of $7$. I'm trying to prove the following statement: $P(n) = 3^{2n-1} + 2^{n+1}$ is always a multiple of $7$ $\forall n\geq1$. I want to use induction, so the base case is $P(1) = 7$ so that's okay. Now I need to prove that if $P(n)$ is true then $P(n+1)$ is true. So there exists a $d \in \mathbb{N}$ such that $$ 3^{2n-1} + 2^{n+1} = 7d $$ From this I need to say that there exists a $k \in \mathbb{N}$ such that: $$ 3^{2n+1} + 2^{n+2} = 7k $$ With a little algebraic manipulation, I have managed to say: $$ 2 \cdot 3^{2n+1} + 9 \cdot 2^{n+2} = 7\cdot(18d) $$ But now I am stuck. How should I keep going?
Alternatively, suppose $3^{2n-1}+2^{n+1}= 7x$. Then, $x=\frac17\left(3^{2n-1}+2^{n+1}\right)=\frac{1}{21}\left(3^{2n}+6(2^{n})\right)=\frac{1}{21}\left(9^{n}+6(2^{n})\right)$. Modulo $3$: since $9\equiv 0 \pmod 3$ and $6\equiv 0 \pmod 3$, we get $9^{n}+6(2^{n})\equiv 0 \pmod3$. Modulo $7$: $9\equiv 2 \pmod7\Rightarrow9^n\equiv 2^n \pmod7$, and $6\equiv -1 \pmod7$, hence $6\cdot2^n\equiv (-1)\cdot2^n\pmod7$. Therefore, $9^{n}+6(2^{n})\equiv 2^n-2^n\equiv 0 \pmod7$. Since $9^{n}+6(2^{n})$ is divisible by both $3$ and $7$, we have $9^{n}+6(2^{n})\equiv 0 \pmod{21}$, hence $\frac{1}{21}\left(9^{n}+6(2^{n})\right)=\frac{1}{7}\left(3^{2n-1}+2^{n+1}\right)$ is an integer.
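The divisibility claim is easy to sanity-check computationally (my addition; a check for finitely many $n$, not a proof):

```python
# check 3^(2n-1) + 2^(n+1) ≡ 0 (mod 7) for the first few hundred n
for n in range(1, 300):
    assert (3 ** (2 * n - 1) + 2 ** (n + 1)) % 7 == 0
print("3^(2n-1) + 2^(n+1) is divisible by 7 for n = 1..299")
```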
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Spectral families of commuting operators Consider two self-adjoint bounded operators $A$ and $B$ on a separable Hilbert space. According to the spectral theorem we can write $$ A=\int_{-\infty}^{\infty} x \, d E^{A}_x, \quad B=\int_{-\infty}^{\infty} y \, d E^{B}_y $$ where $E^{A}_x$ and $E^{B}_y$ are the spectral families of projectors of $A$ and $B$ respectively. Is there a simple way to prove that if $[A,B]=AB-BA=0$, then $[E^{A}_x,E^{B}_y]=0$ for all $x,y$?
From $AB=BA$, you get $A^nB=BA^n$ for all $n$, and immediately $p(A)B=Bp(A)$ for any polynomial $p$. By Stone-Weierstrass, $f(A)B=Bf(A)$ for any $f\in C(\sigma(A))$. Now let $$\Sigma=\{\Delta:\ \Delta\ \text{ is Borel and } E^A(\Delta)B=BE^A(\Delta)\}. $$ From the fact that $E^A$ is a spectral measure, it is quickly deduced that $\Sigma$ is a $\sigma$-algebra. If $V\subset\sigma(A)$ is any open set, it may be written as a disjoint union of intervals, which allows us to see that there exists a sequence $\{f_n\}\subset C(\sigma(A))$ such that $f_n\nearrow 1_V$ pointwise. Then, for any $x\in H$, \begin{align} \langle BE^A(V)x,x\rangle &=\langle E^A(V)x,B^*x\rangle =\int_{\sigma(A)}1_V\,d E^A_{x,B^*x}\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{x,B^*x} =\lim_n\langle f_n(A)x,B^*x\rangle\\ \ \\ &=\lim_n\langle Bf_n(A)x,x\rangle=\lim_n\langle f_n(A)Bx,x\rangle\\ \ \\ &=\lim_n\int_{\sigma(A)}f_n\,d E^A_{Bx,x} =\int_{\sigma(A)}1_V\,d E^A_{Bx,x}\\ \ \\ &=\langle E^A(V)Bx,x\rangle. \end{align} As $x$ was arbitrary, $E^A(V)B=BE^A(V)$. So $V\in\Sigma$, and thus $\Sigma$ contains all open subsets of $\sigma(A)$, and then the whole Borel $\sigma$-algebra of $\sigma(A)$. Thus $E^A(\Delta)B=BE^A(\Delta)$ for any Borel $\Delta\subset\sigma(A)$. So far we haven't even used that $B$ is selfadjoint; but now we can use the fact to repeat the above argument for a fixed $\Delta_1\subset\sigma(A)$, to obtain $$ E^A(\Delta_1)E^B(\Delta_2)=E^B(\Delta_2)E^A(\Delta_1) $$ for any pair of Borel sets $\Delta_1\subset\sigma(A)$, $\Delta_2\subset\sigma(B)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Find a basis for all vectors perpendicular to $x-2y+3z=0$. I am looking at the following question: Find a basis for all vectors perpendicular to $x-2y+3z=0$. Clearly $<1,-2,3>$ is a vector that must be in the basis or a scalar multiple of $<1,-2,3>$ must be in the basis. But the solution given says that this vector is enough to describe a basis for all orthogonal vectors to the plane. However, my initial intuition when solving this problem told me that this was not enough to describe the basis of all perpendicular vectors. My understanding is that any basis of a vector space is a minimal set of vectors whose linear combinations span the space. My answer was that we would need to take a basis of the plane, say, $<2,1,0>$ and $<-3,0,1>$ and then combine that with $<1,-2,3>$. The reason for this is that a perpendicular vector need not be fixed. That is, we can move anywhere on the plane, and then move in the orthogonal dimension to that plane, by taking any scalar multiple of $<1,-2,3>$ and adding it to a linear combination of $<2,1,0>$ and $<-3,0,1>$. How does the basis $<1,-2,3>$ capture the orthogonal vector $<2,1,0>+<-3,0,1>+<1,-2,3>=<0,-1,4>$? Perhaps my plane/vector geometry is off. Also, my answer seems to suggest that the basis for the set of perpendicular vectors is also a basis for $\mathbb{R}^3$, which also seems odd.
The equation $$x-2y+3z=0$$ is the equation of a plane, which is a two-dimensional subspace of the three-dimensional vector space $\mathbb{R}^3.$ The set of vectors perpendicular to this plane constitutes a one-dimensional vector space, which is just a line, spanned by any non-zero vector on that line. Since $<1,-2,3>$ is a non-zero vector on that line, the line is spanned by this vector. Thus a basis for your line consists of just that vector.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$\sum a_n$ converges conditionally and $\sum b_n$ converges absolutely, then will $\sum a_nb_n$ converge absolutely? Suppose $\sum a_n$ converges conditionally and $\sum b_n$ converges absolutely, then will $\sum a_nb_n$ converge absolutely? I know that $\sum|a_n|$ does not converge while $\sum a_n$ does and that $\sum|b_n|$ does converge, and also that $\lim_{n \to \infty} |a_nb_n|=0$ but I'm not sure how to proceed from here.
Since $\sum a_{n}$ converges, $a_{n}\to 0$, so $|a_{n}|<1$ eventually, and hence $|a_{n}b_{n}|\leq|b_{n}|$ eventually. Since $\displaystyle\sum|b_{n}|<\infty$, the comparison test gives $\displaystyle\sum|a_{n}b_{n}|<\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Suppose a large number $N$ of people each flip a coin 100 times, how would you find the percentage of them that would get less than 45 heads? Assume that 100 flips is large enough that the binomial count of heads can be approximated by a normal distribution. In the problem, $N$ is unknown. I think you would need to use the 68-95-99.7 rule, but I'm not sure exactly how/what formula I would use. Help is appreciated. Thanks!
Let $X$ denote the number of heads we obtain in $100$ flips. Normal approximation without continuity correction: $$\mu=np$$ $$\sigma^2=npq$$ where $$Z=\frac{X-\mu}{\sqrt{npq}}\sim N(0,1)$$ Then we have $$\begin{align*} P(X\lt 45) &\approx\Phi\left({\frac{45-50}{\sqrt{100\cdot0.5\cdot0.5}}}\right)\\\\ &=\Phi(-1)\\\\ &\approx0.1587 \end{align*}$$ With continuity correction: $$\begin{align*} P(X\lt 45) &\approx\Phi\left({\frac{44.5-50}{\sqrt{100\cdot0.5\cdot0.5}}}\right)\\\\ &=\Phi(-1.1)\\\\ &\approx0.1335 \end{align*}$$ Exact probability using the binomial distribution: $$\begin{align*} P(X<45) &=\sum_{k=0}^{44} {n\choose k}{0.5^ k}{0.5^{n-k}}\\\\ &=\sum_{k=0}^{44} {100 \choose k}{0.5^ k}{0.5^{100-k}}\\\\ &=0.1356265 \end{align*}$$ This can be calculated in R: > sum(dbinom(0:44,100,.5)) [1] 0.1356265
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simplifying $\frac{1}{2\cdot\sqrt{e^x}}\cdot e^x$ We have to simplify: $$\frac{1}{2\cdot\sqrt{e^x}}\cdot e^x$$ I came to the conclusion that the answer would be: $$\frac{e^x}{2\cdot\sqrt{e^x}}$$ But I was wrong and it was: $$\frac{\sqrt{e^x}}{2}$$ Where did I go wrong and why? Thanks
$$\frac{1}{2\sqrt{e^x}}\cdot e^x=\frac{1}{2\sqrt{e^x}}\cdot (\sqrt{e^x}\sqrt{e^x})=\frac{\require{cancel}\cancel{\sqrt{e^x}}}{2\cancel{\sqrt{e^x}}}\cdot\sqrt{e^x}=\frac{\sqrt{e^x}}2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2681943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Finding a closed form for $\sum\limits_{\substack{0\le n\le N\\0\le m\le M}}\left|nM-Nm\right|$ I am trying to figure out if there is a closed form for the following sum: $$ \sum_{\substack{0\le n\le N \\ 0\le m\le M}}\left|(N-n)(M-m)-nm\right|=\sum_{\substack{0\le n\le N \\ 0\le m\le M}}\left|nM-Nm\right|. $$ Clearly, from symmetry, if we remove the absolute values, the sum evaluates to $0$. Using the symmetry, I tried to evaluate the sum as $$ 2\cdot\sum_{\substack{0\le n\le N \\ 0\le m\le M \\ (N-n)(M-m)\gt nm}}\left((N-n)(M-m)-nm\right). $$ However, that expression ended up containing sums of floor functions, for which I did not know a closed form. Is there another way to obtain a closed form for the above sum? Edit: If no closed form can be provided, an efficient algorithm to compute the above sum given $N,M$ would also be good. Edit 2: Since placing the bounty, I was able to find a closed form for the sum: $$\frac{1}{6}\left[MN(2 M N + 3 (M + N + 1)) + M^2 + N^2-\gcd(M,N)^2\right].$$ So now I change the question to a challenge: derive the above form from the sum. The prettiest derivation (if there are any) will get the bounty.
Hint: The problem is governed by the solutions of $Mn=Nm$, the main diagonal of the rectangle. WLOG $M\ge N$, and let us assume for now that $M,N$ are relatively prime, so that equality only occurs at the corners. By symmetry, we just look below the diagonal and for a given $m$, $$0\le n\le\left\lfloor{\frac{Nm}M}\right\rfloor=\left\lfloor{Qm}\right\rfloor.$$ Then, $$S:=\sum_{m=0}^{M-1}\sum_{n=0}^{\left\lfloor{Qm}\right\rfloor}(Nm-Mn)=M\sum_{m=0}^{M-1}\sum_{n=0}^{\left\lfloor{Qm}\right\rfloor}(Qm-n)\\ =M\sum_{m=0}^{M-1}\left(Qm-\frac12\left\lfloor{Qm}\right\rfloor\right)\left(\left\lfloor{Qm}\right\rfloor+1\right)\\ =\frac M2\sum_{m=0}^{M-1}\left(Qm+\{Qm\}\right)\left(Qm-\{Qm\}+1\right)\\ =\frac M2\sum_{m=0}^{M-1}\left((Qm)^2+Qm-\{Qm\}^2+\{Qm\}\right).$$ As in the sums of fractional parts, all fractions from $0/M$ to $(M-1)/M$ appear (in some order), $$12S=6M\left(Q^2\frac{(M-1)M(2M-1)}6+Q\frac{(M-1)M}2-\frac{(M-1)M(2M-1)}{6M^2}+\frac{(M-1)M}{2M}\right)\\ =2M^2N^2+M^2-3MN^2-3MN+N^2+3NM^2-1.$$ To this, we need to add the omitted term ($m=M$), which is found to be $$T:=\sum_{n=0}^{N}(NM-Mn)=\frac{MN(N+1)}2,$$ and finally $$2(S+T)=\frac{2M^2N^2+M^2+3MN^2+3MN+N^2+3NM^2-1}6.$$ Now if $M,N$ are not relatively prime, one may split the summation in $G=\gcd(M,N)$ subsums of $\dfrac MG$ terms and a final $M^{th}$ term. This amounts to replacing the final $-1$ by $-G^2$.
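The closed form stated in "Edit 2" of the question can be verified against a brute-force evaluation of the double sum (my addition):

```python
from math import gcd

def brute(N, M):
    # direct evaluation of the double sum
    return sum(abs(n * M - N * m) for n in range(N + 1) for m in range(M + 1))

def closed(N, M):
    # the closed form from "Edit 2" in the question
    g = gcd(M, N)
    return (M * N * (2 * M * N + 3 * (M + N + 1)) + M * M + N * N - g * g) // 6

for N in range(1, 11):
    for M in range(1, 11):
        assert brute(N, M) == closed(N, M)
print(brute(4, 6), closed(4, 6))  # the two agree
```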
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Why $\frac{1}{z^2+1}$ has simple poles at $\pm i$. $\frac{1}{(z+i)(z-i)}$ has simple poles at $\pm i$. I know that a pole is simple if, among the negative coefficients of the Laurent series, only $a_{-1} \ne 0$. So I tried to derive the series: \begin{align*} \frac{1}{(z+i)(z-i)} &= \frac{1}{2i}\left(\frac{1}{z-i}-\frac{1}{z+i}\right). \end{align*} But here I have two terms $\frac{1}{z-i}-\frac{1}{z+i}$. So instead of a single $a_{-1}(z-z_0)^{-1}$ I have two terms. Does this mean that the function has two poles? And what if I had three such terms?
$\frac{1}{z+i}$ is holomorphic in a neighborhood of $i$, hence $\frac{1}{z+i}= \sum_{n=0}^{\infty}a_n(z-i)^n$ there. The Laurent expansion around $i$ now reads as follows: $\frac{1}{z^2+1}= \frac{1}{2i}\left(\frac{1}{z-i}-\frac{1}{z+i}\right)=\frac{1}{2i}\left(\frac{1}{z-i}-\sum_{n=0}^{\infty}a_n(z-i)^n \right)$. The only negative power of $z-i$ that appears is $(z-i)^{-1}$, with coefficient $\frac{1}{2i}\neq 0$, so the pole at $i$ is simple; the same argument works at $-i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Prove that $f$ is uniformly continuous iff there exist sequences $a_n,b_n$ such that if $\lim a_n=\lim b_n \implies \lim f(a_n)=\lim f(b_n)$ Suppose $f$ is a real-valued function, and $a_n,b_n$ are real sequences. Prove that $f$ is uniformly continuous $\iff (\lim a_n=\lim b_n \implies \lim f(a_n)=\lim f(b_n))$ * *Suppose $f$ is uniformly continuous. Then, in particular, $f$ is continuous. Suppose $a_n\to x$ as $n\to\infty$. We know by continuity of $f$ that: $$\lim a_n=x\implies \lim f(a_n)=f(x)$$ We know that $\lim a_n=\lim b_n=x$ so that also: $$\lim b_n=x\implies \lim f(b_n)=f(x)$$ Together this gives that if $f$ is uniformly continuous, that $\lim f(a_n)=\lim f(b_n)$. * *Suppose $f$ is not uniformly continuous. We then know that: $$\forall_{\delta>0}\exists_{\epsilon>0}:|x-a|<\delta \not \Rightarrow |f(x)-f(a)|<\epsilon$$ Suppose we know that $\lim a_n=a=\lim b_n=b$, we want to show that $\lim f(a_n)\neq f(b_n)$. As the sequences are arbitrary in $\mathbb{R}$, we know that: $a-b=0\implies|a-b|<\delta$. As $f$ is not uniformly continuous, we know that for a particular $\delta>0$ we pick, the following holds: $$|a-b|<\delta \not \Rightarrow |f(a)-f(b)|<\epsilon \ \ \ (*)$$ Now suppose that $\lim f(a_n)=\lim f(b_n)$ so that $f(a)=f(b)$. Then for any $\epsilon>0$: $$|a-b|=0<\delta\implies|f(a)-f(b)|=0<\epsilon$$ This is contradictory with the statement $(*)$; non-uniform continuity. Thus: $f(a)$ cannot equal $f(b)$ if $a=b$ so that if $f$ is not uniformly continuous, we know that: $$\lim a_n = \lim b_n \not \Rightarrow \lim f(a_n) = \lim f(b_n)$$ $\tag*{$\Box$}$
If you go for contradiction, it would mean whatever $\delta$ you take, we have that $\forall$ $\epsilon$ we have $|x-a_n|< \delta \not \Rightarrow |f(x) - f(a_n)| < \epsilon$. Now take $x=b_n$, and since you can take whatever $\delta$ you want, we have that $\lim a_n = \lim b_n$. Now what does that imply?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is $\log|f|$ for a holomorphic function continuous when viewed as a map to the extended real line? Let $f$ be a holomorphic function on $\mathbb{C}$ that is not identically zero, consider $\log|f|$, and view this as a map to $\mathbb{R} \cup \{\infty\} \cup \{-\infty\}.$ If $f(z)=0$ we of course let $\log|f(z)| = -\infty.$ It is easy to see that this gives an upper semicontinuous function. It seems to me that this function is actually lower semicontinuous, so that it is a continuous function. Is this true? For the definition of semicontinuity, see: https://en.wikipedia.org/wiki/Semi-continuity#Formal_definition
$g(z)=\log|f|$ is clearly continuous on $\{z\in\Bbb C:f(z)\ne0\}$. Since $f$ is holomorphic and not identically $0$, its zeroes are isolated. Then, if $f(z_0)=0$, we have $\lim_{z\to z_0}g(z)=-\infty$. Thus, $g\colon\Bbb C\to\Bbb R\cup\{\pm\infty\}$ is continuous. However, $\lim_{z\to\infty}g(z)$ exists if and only if $f$ is a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examine whether $\sum_{n=1}^\infty{\sin^2\left(\frac{1}{n}\right)}$ converges or not I have the series $\sum_{n=1}^\infty{\sin^2\left(\frac{1}{n}\right)}$ and I'm trying to examine whether it converges or not. My Attempts: * *I first tried finding whether it diverges by checking if $\lim_{n\to\infty}{\sin^2\left(\frac{1}{n}\right)} \ne 0$. $$ \lim_{n\to\infty}{\sin^2\left(\frac{1}{n}\right)}= \lim_{n\to\infty}{\sin\left(\frac{1}{n}\right)}\cdot\lim_{n\to\infty}{\sin\left(\frac{1}{n}\right)}=0 $$ *Since I didn't get a confirmation from the first try, I then tried the d'Alembert's Criterion which didn't get me very far. $$ \frac{a_{n+1}}{a_n}= \frac{\sin^2\left(\frac{1}{n+1}\right)}{\sin^2\left(\frac{1}{n}\right)}= \frac{ -\dfrac{2\cos\left(\frac{1}{n}\right)\sin\left(\frac{1}{n}\right)}{n^2} }{ -\dfrac{2\cos\left(\frac{1}{n+1}\right)\sin\left(\frac{1}{n+1}\right)}{\left(n+1\right)^2} }= \frac{ \cos\left(\frac{1}{n}\right)\sin\left(\frac{1}{n}\right)\left(n+1\right)^2 }{ \cos\left(\frac{1}{n+1}\right)\sin\left(\frac{1}{n+1}\right)n^2 }=\ ... $$ *Finally, I tried Cauchy's Criterion, but I didn't get any conclusive result either. $$ \sqrt[n]{a_n}= \sqrt[n]{\sin^2\left(\frac{1}{n}\right)}= \sin^{\frac{2}{n}}\left(\frac{1}{n}\right)=\ ... $$ Question: I've been thinking for while of using the Comparison Test, but I'm not sure which series to compare mine to. How can I examine whether the series converges or not?
Just use the fact that$$(\forall n\in\mathbb{N}):\sin^2\left(\frac1n\right)\leqslant\frac1{n^2}.$$
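The comparison can also be seen numerically (my addition, just an illustration of the bound):

```python
import math

# compare the partial sums of sin^2(1/n) with the value of sum 1/n^2
partial = sum(math.sin(1 / n) ** 2 for n in range(1, 100_001))
bound = math.pi ** 2 / 6  # = sum of 1/n^2, the comparison series
print(partial < bound)  # True: every term satisfies sin^2(1/n) <= 1/n^2
```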
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Help me motivate a topic. I'm teaching a topic for the first time, and I'm struggling to motivate it. I usually know where things lead and what they're eventually used for, but in this case I'm a bit stumped. (I suspect Galois Theory?!) Say you have a cubic equation $ax^3+bx^2+cx+d=0$ with roots $\alpha,\beta$ and $\gamma$. We can show that $\displaystyle{\alpha+\beta+\gamma = -\frac{b}{a}}$, $\displaystyle{\alpha\beta+\alpha\gamma+\beta\gamma=\frac{c}{a}}$ and $\displaystyle{\alpha\beta\gamma=-\frac{d}{a}}$. Given a cubic, say $2x^3-3x^2+4x-1=0$, we're then asked to find things like $\alpha^2+\beta^2+\gamma^2$ or $\displaystyle{\frac{1}{\alpha}+\frac{1}{\beta}+\frac{1}{\gamma}}$. This is all very straightforward, but what're these quantities good for? Given a cubic, say $2x^3-3x^2+4x-1=0$, with roots $\alpha,\beta$ and $\gamma$, we're asked to find another cubic with roots $2\alpha-1,2\beta-1$ and $2\gamma-1$. Substituting $x=\frac{1}{2}(w+1)$ gives $w^3+5w+2=0$. Again, this is quite straightforward, but where is this leading? What comes next? I'm a pure mathematician, so I don't need an application in the real world. I'd just like to know its context in Mathematics and, more importantly, some references/links to the next step.
As others have noted, a big part of this is the aim to solve polynomial equations (implicitly, usually, by radicals, as opposed to using elliptic functions or modular functions). Also as noted, an interesting tangential point is that symmetric functions in the roots can be expressed in terms of the "standard" ones (and/or the sums of powers, by the Girard-Newton identities). This does predate Galois theory by many decades, if not a century or two. An often-neglected point is that use of Lagrange resolvents (a late 18th-century idea) leads one to discover the formulas of del Ferro, Ferrari, Tartaglia, and Cardano. Knowing how to manipulate symmetric polynomials is essential. Similarly, to express roots of unity in terms of radicals, although by now general Galois theory proves that this is possible, the usual incarnations of it nowadays do not mention any tangible device to do it. I.e., Lagrange resolvents are not usually high-lighted. But, again, if one uses Lagrange resolvents to set things up, and knows how to employ symmetric polynomials, one can obtain the expressions in radicals. (The question of why we might care about complicated expressions in terms of radicals is not quite answered by this, but one might claim that the details and ideas in the very process itself are of surprising interest.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Sliding mode stability Given the scalar system $$ \dot{x} = -\text{sgn}(x) \tag{1} $$ with $$ \text{sgn}(x) = \begin{cases} -1 & x < 0 \\ 0 & x = 0 \\ 1 & x > 0 \,. \end{cases} $$ What is an easy method to check for stability? And: Say I use quasi sliding mode with $$ \dot{x} = -\tanh(a x) \,. \tag{2} $$ If it is shown that $(2)$ is stable for all $a > 0$, does that also show stability of $(1)$ because $$ \lim_{a \rightarrow \infty} \tanh(a x) = \text{sgn}(x) \,? $$ Edit: For the general question, it is required that both the original and the approximation function have exactly the same equilibria, i.e. that the approximation doesn't add any new equilibria.
The answer to the second question is generally NO. Consider a family $f_a(x) = x (x - \frac{1}{a}) (x + \frac{1}{a})$, $a > 0$. One has $\lim\limits_{a \to \infty} f_a(x) = x^3$, uniformly for $x$ in compact subsets of $\mathbb{R}$. The equilibrium $0$ is (even asymptotically) stable for any $\dot{x} = f_a(x)$, whereas it is unstable for the limiting equation $\dot{x} = x^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to prove an implication within an if and only if Suppose you need to prove that $A\iff (B\implies C)$. The two ways to prove this are: (1a): Suppose $A$ and $B$ are true. Prove that $C$ is true. (1b): Suppose $B$ and $C$ are true. Prove that $A$ is true. (2a): Suppose $A$ and $B$ are true. Prove that $C$ is true. (2b): Suppose $A$ is not true and $B$ is true. Prove that $C$ cannot be true. Are these ways correct? I always get confused about what you can assume and what you have to prove when there are multiple implications and such in one statement.
It helps to call $D$ the statement $B\implies C$. One has to prove $A\iff D$. So we need to show that $A$ implies $D$, and $D$ implies $A$. This means again, that, assuming $A$ it must follow $C$ if we assume $B$, and conversely, that whenever $C$ follows from $B$, then $A$ follows. Now check your $4$ statements according to this reasoning.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Expected number of days until it rains d days in a row when P(rain)=p What is the expected number of days until it rains $d$ days in a row (including the $d$ days) when the probability it rains on a single day is $p$? I managed to figure out a solution for the case $d=2, p=1/2$ but don't know how to generalize it to different values of $d$ and $p$ when the probabilities don't line up nicely with the Fibonacci series. My proof Let $P_u(n)$ denote the chance it rained for $d$ consecutive days during any of $n$ days. I managed to work out the value of $P_u(n)$ using this question. Let $a_n$ be the number of combinations of $n$ days such that there were no $2$ consecutive rainy days and the last day was rainy. Let $b_n$ be the number of combinations of $n$ days where there were no $2$ consecutive rainy days and the last day was not rainy. $$a_1 = 1, \; b_1 = 1 \\ a_n = b_{n-1}, \; b_n=a_{n-1}+b_{n-1}$$ Calculating the first few values we get $$a_2 = 1, \; b_2 = 2 \\ a_3 = 2, \; b_3 = 3 \\ a_4 = 3, \; b_4 = 5$$ We can notice that (where $F_n$ is the $n^{th}$ number in the Fibonacci sequece) $$a_n = F_n, \; b = F_{n+1}$$ The total number of days it didn't rain $2$ consecutive days is then $a_n + b_n = F_n + F_{n+1} = F_{n+2}$ with the total number of combinations being $2^n$, so $$P_u(n) = 1 - \frac{F_{n+2}}{2^n}$$ Let $P_e(n)$ denote the chance the chance $n$ is the last of the $d$ days in which it rained and it is the first time it rained for $d$ consecutive days. $$E_{days} = 1P_e(1) + 2P_e(2) + 3P_e(3) + 4P_e(4) + ... \\ = P_e(\ge1) + P_e(\ge2) + P_e(\ge3) + P_e(\ge4) + ... \\ = (1 - P_u(0)) + (1 - P_u(1)) + (1 - P_u(2)) + ... \\ = \frac{F_{0+2}}{2^0} + \frac{F_{1+2}}{2^1} + \frac{F_{2+2}}{2^2} + ... 
= \sum_{n=0}^{\infty} \frac{F_{n+2}}{2^n}$$ $$E_{days} = \sum_{n=0}^{\infty} \frac{F_{n+2}}{2^n} = \sum_{n=0}^{\infty} \frac{F_n}{2^n} + \sum_{n=0}^{\infty} \frac{F_{n+1}}{2^n} \\ = \frac14 \sum_{n=0}^{\infty} \frac{F_n}{2^{n-2}} + \frac12 \sum_{n=0}^{\infty} \frac{F_{n+1}}{2^{n-1}} \\ = \frac14(E_{days} + 2) + \frac12(E_{days} + 2)$$ $$E_{days} = \frac34E_{days} + \frac32$$ $$E_{days} = 6$$
Consider $d=2$ first. Let $M:=E_{days}$ be the requested expected number of days, and consider a recurrence relation for $M$. We wait for the first rainy day during a random number of days that is geometrically distributed with expected value $\frac1p$. The next day is rainy with probability $p$, and then the trials are finished; in this case the expected number of days we wait for two consecutive rainy days equals $\frac1p+1$. If the next day is not rainy, we start the trials again, and again wait $M$ days on average for two consecutive rainy days; in this case the total average number of days is $\frac1p+1+M$. Combining these two cases by the Law of Total Expectation, we have: $$M=p\cdot \left(\frac{1}{p}+1\right) + (1-p) \cdot \left(\frac1p+1+M\right).$$ From this equation, $$M=\frac{p+1}{p^2}. $$ Note that for $p=\frac12$, $M=6$. Next consider arbitrary $d\geq 2$. Again we wait an average of $\frac1p$ days for the first rainy day. Over the next few days we obtain the following possibilities, where $R/C$ denotes a rainy/clear day. * *$C$ with probability $1-p$: this case $M=\left(\frac1p+1+M\right)$, *$RC$ with probability $p(1-p)$: this case $M=\left(\frac1p+2+M\right)$, *$RRC$ with probability $p^2(1-p)$: this case $M=\left(\frac1p+3+M\right)$, so on * *$\underbrace{R\ldots R}_{d-2}C$ with probability $p^{d-2}(1-p)$: this case $M=\left(\frac1p+d-1+M\right)$, and finally * *$d-1$ rainy days $R\ldots R$ with probability $p^{d-1}$: then $M=\left(\frac1p+d-1\right)$. By the Law of Total Expectation we get $$ M=(1-p)\left(\frac1p+1+M\right)+p(1-p)\left(\frac1p+2+M\right)+p^2(1-p)\left(\frac1p+3+M\right)+\ldots+p^{d-2}(1-p)\left(\frac1p+d-1+M\right)+p^{d-1}\left(\frac1p+d-1\right). $$ Solving this equation for $M$, we obtain $$M=\frac{1-p^d}{(1-p)p^d}.$$ Addition. Let $q=1-p$. 
Rewrite the last equation as $$\tag{1}\label{1} M=q\left(\frac1p+M\right)\left(1+p+\ldots+p^{d-2}\right)+q\left(1+2p+3p^2+\ldots+(d-1)p^{d-2}\right) + p^{d-1}\left(\frac1p+d-1\right). $$ Here $$1+p+\ldots+p^{d-2} = \frac{1-p^{d-1}}{q},$$ $$1+2p+3p^2+\ldots+(d-1)p^{d-2}=\frac{d}{dp}(p+p^2+\ldots+p^{d-1})=\frac{d}{dp}(1+p+p^2+\ldots+p^{d-1})=\frac{d}{dp}\left(\frac{1-p^d}{1-p}\right)=\frac{(d-1)p^d-dp^{d-1}+1}{q^2}$$ Substitute this value into (\ref{1}): $$ M=\left(\frac1p+M\right)(1-p^{d-1}) + \frac{(d-1)p^d-dp^{d-1}+1}{q} + p^{d-1}\left(\frac1p+d-1\right) $$ Simplifying this equation and leading r.h.s. to a common denominator, obtain $$ Mp^{d-1}=\frac{1-p^d}{pq} $$
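The formula $M=\frac{1-p^d}{(1-p)p^d}$ can be checked against a direct simulation (my addition, not part of the original answer):

```python
import random

def closed_form(p, d):
    # M = (1 - p^d) / ((1 - p) p^d), the formula derived above
    return (1 - p ** d) / ((1 - p) * p ** d)

def simulate(p, d, trials=100_000, rng=random.Random(0)):
    # average number of days until the first run of d rainy days
    total = 0
    for _ in range(trials):
        days = streak = 0
        while streak < d:
            days += 1
            streak = streak + 1 if rng.random() < p else 0
        total += days
    return total / trials

print(closed_form(0.5, 2))  # 6.0, matching the Fibonacci argument in the question
print(closed_form(0.5, 3))  # 14.0
print(simulate(0.5, 2))     # close to 6
```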
{ "language": "en", "url": "https://math.stackexchange.com/questions/2682996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you solve $\dot{X} = UX$? Let $X(t) = \left[\begin{matrix}a(t) & b(t) \\c(t) & d(t)\end{matrix}\right]$ and let $U$ be a nonsingular matrix. How do you solve $$\frac{d}{dt} X(t)=UX(t)$$ I presume there is some general method to solve these kinds of ODEs, but I cannot find anything about it online. BTW: ----------------- I know that you can get 4 ODEs for the four unknown functions of $t$. The problem is that each ODE involves other unknown functions as well, e.g. $$\frac{d a}{d t}=u_{11}a+u_{12}c$$
You are correct, it can be solved column-wise, and in analogy to the scalar form of this differential equation. The solution is $$X(t)=\exp\left(U t\right) X(0)$$ where $X(0)$ is the initial condition (matrix) and $$\exp(Ut)=\sum_{n=0}^\infty \frac{(Ut)^n}{n!}$$ is the matrix exponential, which can be computed explicitly if eigenvectors and eigenvalues are known. Let $U=W \Lambda V^T$ with diagonal matrix $\Lambda$ and left and right eigenvector matrices $V$ and $W$. Then, $$\exp(Ut)=W \exp(\Lambda t) V^T$$ where $$\Lambda = \textrm{diag}(\lambda_1 ,\ldots,\lambda_N )$$ is the diagonal matrix of the $N$ eigenvalues of $U$ $$\exp(\Lambda t) = \textrm{diag}(\exp(\lambda_1 t),\ldots,\exp(\lambda_N t))$$ If you assume the functions $a(t)$, $b(t)$ etc. to be scalars, you have $N=2$ and can compute the eigenvalues from the trace and determinant of $U$, $\lambda_1+\lambda_2=U_{11}+U_{22}$ and $\lambda_1\lambda_2=U_{11}U_{22}-U_{21}U_{12}$.
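Here is a minimal pure-Python sketch for the $2\times 2$ case (my own illustration, not from the answer): it approximates $\exp(Ut)$ by a truncated power series and checks numerically that $X(t)=\exp(Ut)X(0)$ satisfies $\dot X = UX$; the particular $U$ and $X(0)$ below are arbitrary examples.

```python
def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(U, t, terms=30):
    # truncated series: sum_{n < terms} (U t)^n / n!
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    Ut = [[U[i][j] * t for j in range(2)] for i in range(2)]
    for n in range(1, terms):
        power = mat_mul(power, Ut)
        fact *= n
        result = [[result[i][j] + power[i][j] / fact for j in range(2)]
                  for i in range(2)]
    return result

U = [[0.0, 1.0], [-1.0, 0.0]]   # example generator (a rotation)
X0 = [[1.0, 2.0], [3.0, 4.0]]   # arbitrary initial condition

def X(s):
    return mat_mul(mat_exp(U, s), X0)

# central-difference check that dX/dt = U X at t = 0.7
t, h = 0.7, 1e-6
deriv = [[(X(t + h)[i][j] - X(t - h)[i][j]) / (2 * h) for j in range(2)]
         for i in range(2)]
UX = mat_mul(U, X(t))
print(all(abs(deriv[i][j] - UX[i][j]) < 1e-5
          for i in range(2) for j in range(2)))  # True
```

For serious work one would use a library routine (e.g. a matrix-exponential function from a numerical package) rather than the raw series, which can be inaccurate for large $\|Ut\|$.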
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Given that $P(x)$ is a polynomial such that $P(x^2+1) = x^4+5x^2+3$, what is $P(x^2-1)$? How would I go about solving this? I can't find a clear relation between $x^2+1$ and $x^4+5x^2+3$ to solve $P(x^2-1)$.
One more alternative is to start by deducing that $P(x)$ is a quadratic, based on the fact that its degree has to be $2$ in order to output a quartic when the argument is a quadratic expression. So let $P(x) = ax^2 + bx + c$. When $x^2 + 1 = 0$, $x = \pm i$, and substituting gives $P(0) = 1 - 5 + 3 = -1$; since also $P(0) = c$, we immediately get $c=-1$. When $x=0, x^2 + 1 = 1$ so we have that $P(1) = 3$ and $P(1) = a+b-1$, so $a+b = 4$. When $x = 1, x^2 + 1 = 2$, so we have that $P(2) = 9$ and $P(2) = 4a + 2b - 1$, so $4a + 2b = 10$. Solving the latter two for $a$ and $b$, we get $a=1, b=3$. Hence $P(x) = x^2 + 3x - 1$ and $P(x^2-1)$ can be worked out.
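The result can be checked symbolically in a couple of lines (Python/SymPy):

```python
import sympy as sp

x = sp.symbols('x')
P = x**2 + 3*x - 1   # the polynomial found above

# confirm the defining identity P(x^2 + 1) = x^4 + 5x^2 + 3
assert sp.expand(P.subs(x, x**2 + 1)) == sp.expand(x**4 + 5*x**2 + 3)

# the requested expression
answer = sp.expand(P.subs(x, x**2 - 1))
print(answer)   # x**4 + x**2 - 3
```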
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Breaking up a Stick into different pieces Assume you have a stick of length 1. Choose 5 breaking points randomly along the stick such that the stick is divided into 6 parts. What is the probability that no part is greater than 1/2?
Similar to what @hardmath said, you can break the line into a predetermined number of equal segments (for example 10, 20, ..., 100, etc.). Example: let's break the line into 20 segments. The total number of ways the 6 line segments can be selected is the number of positive integer solutions of $x_1+x_2+x_3+x_4+x_5+x_6 = 20$, which is $\binom{19}{5}$. To get our event $E$ (one segment is $\geq 10$), let $x_1$ be that segment; the possible cases for $x_1$ are $10, 11, 12, 13, 14, 15$. It cannot be more than $15$ because in that case some other segment would have to be $0$, which is not possible. When $x_1 = 10 : x_2+x_3+x_4+x_5+x_6 =10 : \binom{9}{4}$ $x_1 = 11 : x_2+x_3+x_4+x_5+x_6 =9 : \binom{8}{4}$ $x_1 = 12 : x_2+x_3+x_4+x_5+x_6 =8 : \binom{7}{4}$ $x_1 = 13 : x_2+x_3+x_4+x_5+x_6 =7 : \binom{6}{4}$ $x_1 = 14 : x_2+x_3+x_4+x_5+x_6 =6 : \binom{5}{4}$ $x_1 = 15 : x_2+x_3+x_4+x_5+x_6 =5 : \binom{4}{4}$ Total number of cases for event $E$: $6\cdot(\binom{4}{4}+\binom{5}{4}+\binom{6}{4}+\binom{7}{4}+\binom{8}{4}+\binom{9}{4}) = 1512$ Probability of event $E$: $\frac{1512}{11628} \approx 0.13$ $P(E) = 0.152$ in case of 30 segments $P(E) = 0.167$ in case of 50 segments We can see that the probability of $E$ converges to some number $A$; our answer $= 1 - A$.
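A Monte Carlo simulation of the original (continuous) problem agrees with where these discretized values are heading. By inclusion-exclusion, each of the 6 pieces exceeds $1/2$ with probability $(1/2)^5$ and no two pieces can do so simultaneously, so $A = 6/32 = 0.1875$ and the answer is $1-A = 0.8125$:

```python
import random

random.seed(0)
trials = 200_000
count = 0
for _ in range(trials):
    cuts = sorted(random.random() for _ in range(5))
    pieces = [b - a for a, b in zip([0.0] + cuts, cuts + [1.0])]
    if max(pieces) <= 0.5:
        count += 1

estimate = count / trials
exact = 1 - 6 * 0.5**5   # = 0.8125
print(estimate, exact)
assert abs(estimate - exact) < 0.01
```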
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Deriving the variance of the Bernoulli distribution For a Bernoulli distribution, $\mu_X = p$. I can easily derive this from the general equation for mean of a discrete random variable: $$ \mu_X=\sum_{i=1}^kx_iPr(X=x) $$ $$ \mu_X=1(p)+0(1-p)=p $$ I know that the variance of the Bernoulli distribution is supposed to be $\sigma_x^2=p(1-p)$. But I can not seem to derive that properly from the general equation for variance of a discrete random variable: $$ \sigma_x^2=\sum_{i=1}^k(x_i-\mu_X)Pr(X=x_i) $$ $$ \sigma_x^2=(x_0-p)(1-p)+(x_1-p)(p) $$ $$ \sigma_x^2=(0-p)(1-p)+(1-p)(p) $$ $$ \sigma_x^2=-p(1-p)+(1-p)(p) $$ $$ \sigma_x^2=-p+p^2+p-p^2 $$ $$ \sigma_x^2=0 $$ This is obviously incorrect; what am I doing incorrectly in my derivation?
By definition, $$Var(X)=E(X^2)-E(X)^2$$ Consider a success being a $1$ and a failure being a $0$. Then we have $$\begin{align*} Var(X) &=\sum_{x=0}^1(x^2)Pr(X=x)-\left(\sum_{x=0}^1(x)Pr(X=x)\right)^2\\\\ &=\left(1^2\cdot p + 0^2 (1-p)\right)-\left(1 \cdot p + 0 (1-p)\right)^2\\\\ &=p-p^2\\\\ &=p(1-p) \end{align*}$$
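A two-line numeric check also makes the missing square in the question's derivation visible (plain Python; $p = 0.3$ is an arbitrary choice):

```python
p = 0.3
mu = 1 * p + 0 * (1 - p)                        # E[X] = p

# definitional variance, WITH the square that the question's formula omitted
var = (0 - mu) ** 2 * (1 - p) + (1 - mu) ** 2 * p
assert abs(var - p * (1 - p)) < 1e-12           # equals p(1 - p)

# the unsquared version from the question always collapses to 0
wrong = (0 - mu) * (1 - p) + (1 - mu) * p
assert abs(wrong) < 1e-12
print(var)   # ≈ 0.21
```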
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How can I find the appropiate launching angle given different heights? The problem states the following: A soccer player $\textrm{20.0 m}$ from the goal stands ready to score. In the way stands a goalkeeper, $\textrm{1.70 m}$ tall and $\textrm{5.0 m}$ out from the goal, whose crossbar is at $\textrm{2.44 m}$ high. The stricker kicks the ball toward the goal at 18 $\frac{m}{s}$. For what range of angles does the player score a goal meaning the ball passes above the goalkeeper but below the crossbar? What I tried to do is to follow the equations of parabolic motion and I defined them as: $\textrm{For y:}$ $\textrm{Launching angle:}\,\omega$ $$y=y_{0}+v_{0y}\sin\omega\times t-\frac{1}{2}\times g t^{2}$$ Since what it is being asked is related to the maximum height this is found by knowing the vertex of the parabola or the maximum of the function which I found below: $$v_{0y}\sin\omega-gt=0$$ $$t=\frac{v_{0y}\sin\omega}{g}$$ Replacing this in the original function would render the maximum $\textrm{y}$: $$y=y_{0}+v_{0y}\sin\omega\times (\frac{v_{0y}\sin\omega}{g})-\frac{1}{2}\times g (\frac{v_{0y}\sin\omega}{g})^{2}$$ Simplifying: $$y=y_{0}+\frac{v^{2}_{0y}\sin^{2}\omega}{2g}$$ I used the coordinates origin as $\textrm{(0,0)}$ So when the ball is at the ground being kicked by the soccer player would be: $$y=0+\frac{v^{2}_{0y}\sin^{2}\omega}{2g}$$ Therefore by replacing the known values $g=9.8\,\frac{m}{s^{2}},\,v=18\,\frac{m}{s}\,\textrm{crossbar height = 2.44 m}$ I calculated the value of omega as follows: $$\sin^{2}\omega=\frac{2\times9.8\times 2.44}{18^{2}}$$ $$\sin\omega=\sqrt{\frac{2\times9.8 \times 2.44}{18^{2}}}=0.3842$$ $$\omega=\sin^{-1}0.3842\approx 0.3943\,\textrm{rad or}\,22.5940^{\circ}$$ Therefore that would be the maximum angle or the upper boundary of the launch angle from which the ball would be kicked so that it does not fly higher than the crossbar thus to ensure it will score the goal. 
The other angle I thought would take into account the height of the goalkeeper, from the data is known as $\textrm{1.7 m}$. By reusing the previous formula I got to: $$\sin\omega=\sqrt{\frac{2\times9.8 \times 1.7}{18^{2}}}=0.3207$$ $$\omega=\sin^{-1}0.3207\approx 0.3265\,\textrm{rad or}\,18.7044^{\circ}$$ However I dismissed the last result as if such angle is used the goalkeeper would catch the ball. The other part is the horizontal component. I defined the equation as: $$x=x_{0}+v_{0x}\cos\omega \times t$$ I figured to use the earlier $\textrm{t}$ and to find the range would meant that it would be twice tha time that was used to achieve the maximum height and that range must be the goal. Therefore the earlier equation would become into: $$x=x_{0}+v_{0x}\cos\omega \times 2t$$ $$x=x_{0}+v_{0x}\cos\omega \times (\frac{2v_{0y}\sin\omega}{g})$$ $$x=x_{0}+\frac{2v^{2}_{0y}\sin\omega\cos\omega}{g}$$ $$x=x_{0}+\frac{2v^{2}_{0y}\sin2\omega}{g}$$ By using the data $\textrm{20 m}$ and using $\textrm{coordinates origin at (0,0)}$: $$x=0+\frac{v^{2}_{0y}\sin2\omega}{g}$$ $$\sin2\omega=\frac{20\times 9.8}{18^{2}}=0.6049$$ $$\omega=\sin^{-1}\frac{0.6049}{2}=0.3073\,\textrm{rad or}\,17.6060^{\circ}$$ Therefore these range of angles would be the ones to score a goal. But none of these are right according to my book. Since it says that the range of angles would be between $20.4^{\circ}$ and $26.57^{\circ}$ What is it the step or the interpretation which I did it wrong?. Can someone show a better method or put me in the right track?.
The initial speed is given as $v_0=18m/s$. For $y$, $$y=y_0+(v_0\sin\omega)t-\frac{1}{2}gt^2$$ for $x$, $$x=x_0+(v_0\cos\omega)t$$ (note both equations contain $v_0$, not $v_{0x}$ and not $v_{0y}$) Let's put the player at $x=0m$, so $x_0=0m$, the goal at $x=20m$, and the goalkeeper at $x=15m$. When does the ball pass by the goalkeeper? When $$x=(v_0\cos\omega)t=15m$$ $$t=\frac{15m}{v_0\cos\omega}$$ How high is the ball at that time? $$y=(v_0\sin\omega)\frac{15m}{v_0\cos\omega}-\frac{1}{2}g\left(\frac{15m}{v_0\cos\omega}\right)^2$$ To pass by the goalkeeper, $$(v_0\sin\omega)\frac{15m}{v_0\cos\omega}-\frac{1}{2}g\left(\frac{15m}{v_0\cos\omega}\right)^2>1.7m$$ When does the ball pass by the goal? When $$x=(v_0\cos\omega)t=20m$$ $$t=\frac{20m}{v_0\cos\omega}$$ How high is the ball at that time? $$y=(v_0\sin\omega)\frac{20m}{v_0\cos\omega}-\frac{1}{2}g\left(\frac{20m}{v_0\cos\omega}\right)^2$$ To pass into the goal, $$(v_0\sin\omega)\frac{20m}{v_0\cos\omega}-\frac{1}{2}g\left(\frac{20m}{v_0\cos\omega}\right)^2<2.44m$$ You can just plot this and you will see that the ball passes over the goalkeeper for angles above 20.4 degrees, but misses the goal for angles above 26.6 degrees.
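The same two conditions can be scanned numerically instead of plotted (Python; the scan is restricted to launch angles below $40°$, which is enough for this geometry):

```python
import math

v0, g = 18.0, 9.8
keeper_x, keeper_h = 15.0, 1.70   # goalkeeper is 15 m from the striker
goal_x, bar_h = 20.0, 2.44

def height_at(x, omega):
    """Ball height when it reaches horizontal distance x, launch angle omega (rad)."""
    t = x / (v0 * math.cos(omega))
    return (v0 * math.sin(omega)) * t - 0.5 * g * t * t

def scores(omega):
    return height_at(keeper_x, omega) > keeper_h and height_at(goal_x, omega) < bar_h

# scan 10.00 to 39.99 degrees in 0.01-degree steps
good = [d / 100 for d in range(1000, 4000) if scores(math.radians(d / 100))]
print(min(good), max(good))   # roughly 20.4 and 26.6 degrees
```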
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Upper bound on cumulative power of system of limitedly intersecting subsets? We have a set $S$ of power $n$ and $k < n$ subsets $S_1, \ldots, S_k \subseteq S$ such that $|S_i \cap S_j| \le 1$ when $i \ne j$. Is there any nontrivial upper bound on total power of sets $S_1, \ldots, S_k$? In particular, is it true that $$\sum_{i=1}^k |S_i| = O(n)$$
The Bonferroni inequalities (basically a truncated version of the inclusion–exclusion principle) can be brought to bear: $$ \left|\bigcup_{i=1}^kS_i\right| \ge \sum_{i=1}^k |S_i| - \sum_{i<j}\left|S_i\cap S_j\right| \ge \sum_{i=1}^k|S_i| - \frac{k(k-1)}{2}, $$ hence $$ \sum_{i=1}^k|S_i| \le \left|\bigcup_{i=1}^kS_i\right| + \frac{k(k-1)}{2} \le n + \frac{n(n-1)}{2}, $$ which is enough to bound your sum with $ O(n^2) $. I'm not sure you can do any better, although I wouldn't be surprised if your stricter bound were to be found to hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to derive $¬X \lor ¬Y \lor ¬Y$ from $Z ⊃ (¬X \lor ¬Y)$ and $¬Z ⊃ ¬ Y$ I know that one can derive $¬X \lor ¬Y \lor ¬Y$ (which simplifies to $¬X \lor ¬Y$, right?) from $Z ⊃ (¬X \lor ¬Y)$ and $¬Z ⊃ ¬ Y$ but I don't know how you do this. Maybe one just has to use constructive dilemma?
If you have Hypothetical Syllogism and Contraposition, you can do it without having to assume $Z \lor \neg Z$: $1. Z \rightarrow (\neg X \lor \neg Y) \quad Premise$ $2. \neg Z \rightarrow \neg Y \quad Premise$ $3. Y \rightarrow Z \quad Contraposition \ 2$ $4. Y \rightarrow (\neg X \lor \neg Y) \quad Hypothetical \ Syllogism \ 1,3$ $5. \neg Y \lor \neg X \lor \neg Y \quad Implication \ 4$ $6. \neg X \lor \neg Y \lor \neg Y \quad Commutation \ 5$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability of symmetric difference How do we prove the following inequality? $$\mathbb{P}[A\triangle B ]\geq \max\left \{ \mathbb{P}[A-B],\mathbb{P}[B-A] \right \}$$ I already proved that $$\mathbb{P}[A\triangle B ]=\mathbb{P}[A]+\mathbb{P}[B]-2\mathbb{P}[A\cap B]$$ $$\left | \mathbb{P}[A]-\mathbb{P}[B] \right |\leq \mathbb{P}[A\triangle B ]$$
We have $$P(A-B)=P(A)-P(A\cap B)$$ $$P(B-A)=P(B)-P(B\cap A)$$ Using these we can rewrite the first relation as $$P(A\triangle B)=P(A-B)+P(B-A)$$ Since probabilities are non-negative: $$P(A\triangle B)\ge\max(P(A-B),P(B-A))$$
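For a concrete finite illustration (Python sets with the uniform measure on $\{0,\dots,9\}$):

```python
from fractions import Fraction

omega = set(range(10))
P = lambda S: Fraction(len(S), len(omega))   # uniform probability

A = {0, 1, 2, 3}
B = {2, 3, 4, 5}

assert P(A ^ B) == P(A - B) + P(B - A)       # symmetric difference is a disjoint union
assert P(A ^ B) >= max(P(A - B), P(B - A))   # the inequality in question
print(P(A ^ B), P(A - B), P(B - A))
```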
{ "language": "en", "url": "https://math.stackexchange.com/questions/2683936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
proving that $n^n \le (n!)^2$ I want to prove that $n^n \le (n!)^2$. Now I tried by induction: for $n=1$ ,$1=1$ and $P(1)$ is true I suppose that $P(n)$ is true and I have to demonstrate that $P(n+1)$ is true $$((n+1)!)^2=(n+1)^2*(n!)^2 \le (n+1)^2*n^n=(n+1)(n+1)*n^n=(n+1)^n$$ But I'm not sure about the last passage
Before editing for $n^n \ge (n!)^2$ Note that it is false; indeed $$n^n=\overbrace{n\cdot n \cdot n\cdot...\cdot n} ^{n \, terms} $$ $$(n!)^2=(\overbrace{n\cdot (n-1) \cdot (n-2)\cdot...\cdot 1}^{n \, terms})^2=\prod_{k=1}^{n}k(n+1-k)\ge\prod_{k=1}^{n}n= n^n$$ since $k(n+1-k)-n=(k-1)(n-k)\ge 0$ for every $1\le k\le n$. After editing for $n^n \le (n!)^2$ induction step (the base cases $n=1,2$ are checked directly) $$(n+1)^{n+1}=n^n(n+1)\frac{(n+1)^n}{n^n}=n^n(n+1)\left(1+\frac1n\right)^n\stackrel{Ind. Hyp.}\le(n!)^2(n+1)e \le ((n+1)!)^2$$ where the last inequality holds for $n\ge 2$, since it reduces to $e\le n+1$.
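Both directions can be sanity-checked numerically; the key term-by-term bound is $k(n+1-k)\ge n$, which holds because $k(n+1-k)-n=(k-1)(n-k)\ge 0$ (plain Python):

```python
import math

for n in range(1, 13):
    # pairing bound behind (n!)^2 = prod_k k(n+1-k) >= n^n
    assert all(k * (n + 1 - k) >= n for k in range(1, n + 1))
    assert n ** n <= math.factorial(n) ** 2
print("n^n <= (n!)^2 checked for n = 1..12")
```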
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Show that if $f :[0,\infty[ \rightarrow\mathbb{R}$ is continuous then $f$ attains a maximum. Show that if $f :[0,\infty[ \rightarrow\mathbb{R}$ is continuous with $\lim \limits_{x \to 0} f(x) = 0$ and $\lim\limits_{x \to\infty} f(x) = 0$ then $f$ attains a maximum. Suppose by contradiction that the real $M=\max_{x\in[0,\infty[}f(x)$ does not exist and suppose that $f \not\le 0$, $\forall x\in[0,\infty[$. Otherwise, $M=0$. Then by definition, $\forall M\in\mathbb{R}$, $ \exists x \in[0, \infty[$ such that $f(x) \ge M$. So, let $M\ge 0$. By assumption, $\lim\limits_{x \to\infty} f(x) = 0$ $\Leftrightarrow$ $\exists A_M$, $\forall x \gt A_M \Rightarrow$ $\lvert f(x)\rvert \le M$. So, $\not\exists x \in]A_M, \infty[$ that verifies $f(x) \ge M$. Since $ x \not\in]A_M, \infty[$, $x$ has to be in the interval $[0,A_M]$. But, since $f$ is continuous on a closed interval, by the extreme value theorem, $\exists \beta \in \mathbf{Im_f} =\{f(x) \in\mathbb{R}: x \in[0,A_M]\} $ $\mathbf {s.t}$ $\forall x \in [0,A_M]$ we have $f(x)\le\beta$. This is a contradiction to our conclusion that there is not a maximum for $f$. Can someone tell me if my proof is correct; if not, why, and how to fix it?
HINT.- $f([0,\infty[)$ is connected and $\lim \limits_{x \to 0} f(x) = \lim \limits_{x \to \infty} f(x) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find a linear transformation such that $S^2 = T$ (general case) Suppose a linear transformation $T:\mathbb{R}^3\rightarrow \mathbb{R}^3$, $T\begin{pmatrix} x\\ y\\ z \end{pmatrix} = \begin{pmatrix} x+y\\ y+z\\ z+x \end{pmatrix}$ (just an example). How do i find a linear transformation S such that $S^2=S\circ S=T$ ? it's part of a basic linear algebra course. Would appreciate a general answer (not specific to the example above). Iv'e already tried several ways, didn't manage to get somewhere.. Thanks. EDIT: The answer is supposed to be based only on basic matrices and linear transformations material, no Diagonalization and eigenvalues
Here is an explicit counterexample: Consider the linear transformation $T:\mathbb{R} \to \mathbb{R}$, $x \mapsto -x$. If $S:\mathbb{R} \to \mathbb{R}$, $x \mapsto ax$ (for some $a \in \mathbb{R}$) is a linear transformation satisfying $S \circ S = T$, then $-1 = T(1) = (S \circ S)(1) = a^2 \cdot 1$. But since we are working over the reals, we cannot have $a^2 = -1$. So in general, we cannot find the transformation you are looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Complex Analysis Sketch the image under the function $w = e^z$ of each of the following subsets of the z plane: (a) $\{z \in \Bbb C: Re (z) = -2\}$ (b) $\{z \in \Bbb C: Im (z) = 5\pi\}$ (c) $\{z = x + iy : 0 \le x \le 1, 0 \le y \le \pi\}$ (d) $\{z = x + iy : -2 \le x \le -1, -\pi \le y \le 4\pi\}$ (e) $\{z : Im z \ge 0\}$ I know that (a) is mapped onto a circle of radius $\frac{1}{e^2}$, and b is mapped onto a ray from the origin with argument of $\pi$ (or 5$\pi$) For the rest of the questions, I am unsure of what to do. Any help appreciated, and thanks in advance!
a) $z = -2 + iy\\ w = e^z = e^{-2} e^{iy} = e^{-2}(\cos y + i\sin y)$ $(\cos y + i\sin y)$ describes a circle b) $z = x + 5\pi i\\ w = (e^x)e^{5\pi i} = -e^x$ The image of $w$ is the negative real numbers c) $w = e^x(\cos y + i\sin y)$ and I will let you guess what that maps to as the limits of $x,y$ change
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding equation and centre of circle through 3 points using matrices Based on answer given here: Get the equation of a circle when given 3 points. We can find equation of circle through points $(1,1), (2,4), (5,3)$ by taking: $\left|\begin{array}{cccc} x^2+y^2&x&y&1\\ 1^2+1^2&1&1&1\\ 2^2+4^2&2&4&1\\ 5^2+3^2&5&3&1\\ \end{array}\right|=0$ Can anyone explain why this works? I know generally what determinant equal zero means but can't see why doing this works.
You’re looking for the coefficients $a$, $b$, $c$ and $d$ of the general equation $$a(x^2+y^2)+bx+cy+d=0 \tag 1$$ of a circle. For each known point on the circle, substituting its coordinates into equation (1) results in a linear equation in these coefficients, so in effect you’re solving the system of linear equations $$\begin{align}(x_1^2+y_1^2)\,a+x_1b+y_1c+d &= 0 \\ (x_2^2+y_2^2)\,a+x_2b+y_2c+d &= 0 \\ (x_3^2+y_3^2)\,a+x_3b+y_3c+d &= 0 \end{align}.$$ The coefficient matrix of this system is of course $$\begin{bmatrix}x_1^2+y_1^2 & x_1 & y_1 & 1 \\ x_2^2+y_2^2 & x_2 & y_2 & 1 \\ x_3^2+y_3^2 & x_3 & y_3 & 1 \end{bmatrix}.$$ If the three points are non-collinear, then the circle is unique and this matrix will have full rank so that its null space, and hence the solution space of the system, is one-dimensional. (The infinite number of solutions reflects the fact that equation (1) is not unique: you can multiply by any non-zero constant to get an equivalent equation for the same circle.) For any other point on the circle, the coefficients must also satisfy the associated linear equation, but in order to maintain the one-dimensional solution space this linear equation must not be independent of the three that you already have, i.e., it must be a linear combination of those three equations. This implies that for any point $(x,y)$ on the circle, the rows of the matrix $$\begin{bmatrix}x^2+y^2 & x & y & 1 \\ x_1^2+y_1^2 & x_1 & y_1 & 1 \\ x_2^2+y_2^2 & x_2 & y_2 & 1 \\ x_3^2+y_3^2 & x_3 & y_3 & 1 \end{bmatrix}$$ are linearly dependent, which in turn means that its determinant vanishes. This method of constructing an equation for a curve is widely applicable. For instance, there’s a similar determinant for a general conic, although that requires five independent points on the curve.
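Here is the null-space computation in action for the three points from the question (Python/NumPy; the circle coefficients come out of the null space of the $3\times4$ point matrix, and the resulting center and radius agree with the usual perpendicular-bisector calculation):

```python
import numpy as np

pts = [(1, 1), (2, 4), (5, 3)]

# rows (x^2 + y^2, x, y, 1), one per known point
A = np.array([[x*x + y*y, x, y, 1] for x, y in pts], dtype=float)

# one-dimensional null space -> coefficients (a, b, c, d), up to scale
_, _, vt = np.linalg.svd(A)
a, b, c, d = vt[-1]

# each given point satisfies a(x^2 + y^2) + bx + cy + d = 0
for x, y in pts:
    assert abs(a * (x*x + y*y) + b*x + c*y + d) < 1e-9

# center and radius recovered from the general form
cx, cy = -b / (2*a), -c / (2*a)
r = np.sqrt(cx*cx + cy*cy - d/a)
print(cx, cy, r)   # center (3, 2), radius sqrt(5)
assert np.allclose([cx, cy, r], [3.0, 2.0, np.sqrt(5.0)])
```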
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A quicker way to do a Lagrange multiplier problem I was working on the problem: minimize $x + 4z$ subject to $x^2 + y^2 + z^2 \le 2 $. I have it solved, I want a faster method for use in standardized exams. My work: I tackled this using Lagrange Multipliers, considering the interior by looking for points where all individual partial derivatives of $x+ 4z$ are zero, (of which there are none). Then considering the boundary $$ (x + 4z) - \lambda ( x^2 + y^2 +z^2 - 2) $$ From here I differentiated w.r.t x,y,z, $\lambda$ and set equal to 0 to yield $$ 1 - 2\lambda x = 0 \rightarrow 1 = 2\lambda x $$ $$ - 2 \lambda y = 0 \rightarrow 0 = 2 \lambda y \rightarrow y=0$$ $$ 4 - 2 \lambda z = 0 \rightarrow 4 = 2\lambda z$$ $$ - (x^2 + y^2 +z^2 -2 ) = 0 \rightarrow x^2 +z^2 = 2$$ Looking at equations 1, 3 we have $$ \frac{1}{2} = \lambda x, 2 = \lambda z $$ And therefore $$ \frac{1}{4} + 4 = \lambda^2 (x^2 +z^2 ) = 2 \lambda ^2 $$ $$ \frac{17}{8} = \lambda ^2 $$ And thus $$ \lambda = \pm \sqrt{ \frac{17}{8} } $$ $x = \frac{1}{2\lambda}, z = \frac{2}{\lambda} $ Yields $$ x + 4z = \frac{1}{2\lambda} + 4 \frac{2}{\lambda} = \frac{17}{2 \lambda} = \pm \sqrt{17} \sqrt{2} = \pm \sqrt{34}$$ Clearly $-\sqrt{34}$ is smaller, so we opt for that as our solution. Now while this works, and makes sense, its not satisfactory as it TAKES SO LONG. And on a Math GRE where the expectation is to do this under 30 seconds a problem, I was hoping there was a faster method. Any suggestions? [Also open to ways to speed up the process, since even the same method with a different angle might be superior]
By Cauchy-Schwarz, $$|x + 4z| \le \sqrt{1^2 + 4^2} \sqrt{x^2+z^2} \le \sqrt{34}.$$ Then think about when Cauchy-Schwarz attains equality.
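Equality forces $(x,z)$ to be (anti)parallel to $(1,4)$ on the boundary sphere, which pins down the minimizer; a quick numeric confirmation (Python):

```python
import math

# equality case: (x, z) anti-parallel to (1, 4), y = 0, on the boundary x^2+y^2+z^2 = 2
s = math.sqrt(2.0 / 17.0)
x, y, z = -s, 0.0, -4 * s

assert abs(x*x + y*y + z*z - 2.0) < 1e-12          # constraint is active
assert abs((x + 4*z) + math.sqrt(34.0)) < 1e-12    # objective attains -sqrt(34)
print(x + 4*z)
```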
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
When is it necessary to prove well definedness? Last semester, we tackled group theory. Our professor did not require us to show that a mapping $\phi:G\to G'$ is well-defined before proving that it is a group homomorphism. Now, our new professor requires us to show that $\phi$ is well-defined first before proving that $\phi:R\to R'$ is a ring homomorphism. So, in general, when is it necessary to show that a mapping is well-defined?
When there's a real chance that the map is not well-defined. For instance, the map $\psi\colon\mathbb{Z}_2\longrightarrow\mathbb Z$ defined by $\psi(\overline x)=x$ is not well-defined, since $\overline 0=\overline 2$ but $\psi(\overline0)\neq\psi(\overline2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$\exists g\in G: H\cap gPg^{-1}$ is a Sylow $p-$subgroup of $H$ Question: If $P$ is a Sylow $p-$subgroup of $G$ and $H\leq G$ with $p||H|$ then $\exists g\in G: H\cap gPg^{-1}$ is a Sylow $p-$subgroup of $H$. Attempt: We consider the action $H\times G/P\to G/P$ with $(h,xP)\to hxP$ and we know that $$|G/P|=\sum_{x\in S}|[xP]_{H}|=\sum_{x\in S}|H:H\cap xPx^{-1}|$$ since $|[xP]_{H}|=|H:Stab_H(xP)|=|H\cap xPx^{-1}|$. Now, $P$ is a Sylow $p-$subgroup of $G$ so $p\nmid |G/P|=m$ so from above $\exists g\in G:p\nmid |H:H\cap gPg^{-1}|$ * *If $|H:H\cap gPg^{-1}|\not=1$: we have that $p\nmid |H:H\cap gPg^{-1}|$ and $H\cap gPg^{-1}$is a $p-$subgroup of $H$(please explain this statement) so $H\cap gPg^{-1}$ is a Sylow $p-$subgroup of $H$. *If $|H:H\cap gPg^{-1}|=1$ then $H=H\cap gPg^{-1}\Rightarrow H\leq gPg^{-1}(= $$p-$group) and therefore $H\cap gPg^{-1}$ is a Sylow $p-$subgroup of $H$. Is this proof correct? Is there a quicker way to prove the initial statement?
For the "please explain this statement" part: The conjugate $gPg^{-1}$ has the same order with $P$, hence it is a power of $p$. Therefore, $H\cap gPg^{-1}$, being a subgroup of $gPg^{-1}$, has order dividing $|gPg^{-1}|$, therefore, it a power of $p$. Hence, it is a group of order $p^k$ for some $k\neq 0$, hence a $p$-subgroup of $H$. Now, in the first bullet, the only "rough" part is why $H\cap gPg^{-1}$ is a Sylow $p$-subgroup. That is because it a non-trivial $p$-subgroup with $p$ not dividing $[H:H\cap gPg^{-1}]$, hence a maximal $p$-subgroup (aka Sylow). In the second bullet, it is clear that $H\cap gPg^{-1}=H$ hence it is trivially a $p$-Sylow subgroup. Keep in mind, η Ταλέλλη δεν κάνει λάθος...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2684925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Calculete $a$ and $b$ in a limit Calculate $a$ and $b$ in $$\lim_{x \to 1} \frac{ax^2+(3a+1)x+3}{bx^2+(2-b)x-2} = \frac{3}{2}$$ I tried this $$\lim_{x \to 1} \frac{(ax+1)(x+3)}{(bx+2)(x-1)} = \frac{3}{2}$$ but I could not see the next step I tried to look but it did not help. Solve for $a$ and $b$ in a limit and Find A and B in this limit
Your first step $$\lim_{x \to 1} \frac{ax^2+(3a+1)x+3}{bx^2+(2-b)x-2} =\lim_{x \to 1} \frac{(ax+1)(x+3)}{(bx+2)(x-1)}$$ is correct, now observe that you need to cancel out the term $(x-1)$ in order to have a finite limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Understanding $\int_\limits{0}^{1}\frac{1}{|t-s|^{\frac{1}{3}}}ds$ I am trying to understand the following integral $\int_\limits{0}^{1}\frac{1}{|t-s|^{\frac{1}{3}}}ds =\int_\limits{0}^{t}\frac{1}{(t-s)^{\frac{1}{3}}}ds +\int_\limits{t}^{1}\frac{1}{(s-t)^{\frac{1}{3}}}ds=2\int_\limits{0}^{1} \frac{dy}{y^{\frac{1}{3}}}\leqslant 3 $ I am not understanding the step $\int_\limits{0}^{1}\frac{1}{|t-s|^{\frac{1}{3}}}ds =\int_\limits{0}^{t}\frac{1}{(t-s)^{\frac{1}{3}}}ds +\int_\limits{t}^{1}\frac{1}{(s-t)^{\frac{1}{3}}}ds$. Question: Why is there a break in the interval $[0,1]$? What happened to the modulus? How can one integrate with modulus? I do not know this technique and I have not been able to figure it out. Thanks in advance!
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \int_{0}^{1}{\dd s \over \verts{t - s}^{1/3}} & = \int_{-t}^{1 - t}\verts{s}^{-1/3}\,\dd s \\[1cm] & = \bracks{t < 0}\int_{-t}^{1 - t}s^{-1/3}\,\dd s + \bracks{0 < t < 1}\pars{% \int_{-t}^{0}\pars{-s}^{-1/3}\,\dd s + \int_{0}^{1 - t}s^{-1/3}\,\dd s} \\[2mm] & + \bracks{t > 1}\int_{-t}^{1 - t}\pars{-s}^{-1/3}\,\dd s \\[1cm] & = \bracks{t < 0}\bracks{{3 \over 2}\,\pars{1 - t}^{2/3} - {3 \over 2}\,\pars{-t}^{2/3}} + \bracks{0 < t < 1}\bracks{{3 \over 2}\,t^{2/3} + {3 \over 2}\,\pars{1 - t}^{2/3}} \\[2mm] & + \bracks{t > 1}\bracks{-\,{3 \over 2}\,\pars{t - 1}^{2/3} + {3 \over 2}\,t^{2/3}} \\[1cm] & = \begin{array}{|l|}\hline\mbox{}\\ \ds{\ {3 \over 2}\bracks{t < 0}\bracks{\pars{1 - t}^{2/3} - \pars{-t}^{2/3}} + {3 \over 2}\bracks{0 < t < 1}\bracks{t^{2/3} + \pars{1 - t}^{2/3}}\ } \\[2mm] \ds{\ +\ {3 \over 2}\bracks{t > 1}\bracks{t^{2/3} -\pars{t - 1}^{2/3}}\ } \\ \mbox{}\\ \hline \end{array} \end{align}
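For the range $0<t<1$ relevant to the question, the two one-sided antiderivatives give $\int_0^t (t-s)^{-1/3}\,ds = \tfrac32 t^{2/3}$ and $\int_t^1 (s-t)^{-1/3}\,ds = \tfrac32 (1-t)^{2/3}$; a quadrature spot-check (Python/SciPy, telling `quad` about the integrable singularity at $s=t$ via `points`):

```python
from scipy.integrate import quad

def closed_form(t):
    # (3/2) t^(2/3) + (3/2) (1 - t)^(2/3), valid for 0 < t < 1
    return 1.5 * (t ** (2 / 3) + (1 - t) ** (2 / 3))

for t in (0.1, 0.3, 0.5, 0.9):
    val, _ = quad(lambda s: abs(t - s) ** (-1 / 3), 0.0, 1.0, points=[t])
    assert abs(val - closed_form(t)) < 1e-5
    assert closed_form(t) <= 3   # the bound used in the question
print("quadrature agrees with the closed form")
```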
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
What are some good examples of "almost" isomorphic graphs? I'm examining isomorphisms of simple finite undirected graphs. In order to test whether or not two graphs are isomorphic, there are a lot of "simple" tests one can do, namely, compare the number of vertices, number of edges, degree sequences, and look for edge cycles. In addition to this, there are a few examples of graphs for which efficient isomorphism tests are known, Wikipeda lists a few examples of this, including, for example, planar graphs. What are some good examples of pairs of graphs that are not isomorphic, but pass all the simple tests for isomorphism, and don't fall into a class of graphs for which a polynomial-time algorithm is known?
One class of examples comes from Latin square graphs. If $L$ is an $n\times n$ Latin square, take the graph with the $n^2$ triples $(i,j,L_{i,j})$ as vertices, with two triples adjacent if they agree on one of the three coordinates. The graphs from distinct Latin squares are isomorphic if the Latin squares are isotopic, but the Latin squares arising as the multiplication tables of finite groups are isotopic if and only if the groups are isomorphic. The smallest relevant examples occur on 16 vertices. For a second class, start with an $n\times n$ Hadamard matrix. We construct a bipartite graph on $4n$ vertices. The first step is to convert the Hadamard matrix to a 01-matrix of order $2n\times 2n$, by replacing each entry by a $2\times2$ matrix as follows: $0$ becomes the $2\times2$ zero matrix, 1 becomes the $2\times2$ identity matrix and $-1$ becomes \[ \begin{pmatrix}0&1\\ 1&0 \end{pmatrix}. \] Denote the resulting matrix by $\tilde{H}$. The Hadamard graph is the bipartite graph with adjacency matrix \[ \begin{pmatrix} 0&\tilde{H}\\ \tilde{H}^T&0\end{pmatrix}. \] The Hadamard graphs are isomorphic if and only if the Hadamard matrices are equivalent (permutations of rows and columns, rescaling). There are five equivalence classes of Hadamard matrices of order 16, giving non-isomorphic graphs on 64 vertices. For both families the graphs produced will be cospectral when the input matrices are of the same order.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685594", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluating the stochastic integral of Brownian motion without using Ito's formula? I am currently trying to understand some lecture notes (which offer little depth) and the following exercise (without a solution) occurs: Using the definition $$ \int_0^t f_s \hspace{1mm} dW_s := \text{l.i.m}_{\lambda \rightarrow 0} \left[ \sum_i f_{s_i} (W_{s_{i+1}} - W_{s_i}) \right] \hspace{10mm} (*) $$ where l.i.m = limit in (square) mean, show that $$ \int_0^t W_s \hspace{1mm} dW_s = \frac{W_t^2}{2} + \frac{t}{2} $$ Can someone please help me to understand how to do this? Plugging $W_s$ into the formula $(*)$ we get $$ \int_0^t W_s \hspace{1mm} dW_s = \text{l.i.m}_{\lambda \rightarrow 0} \left[ \sum_i W_{s_i} (W_{s_{i+1}} - W_{s_i}) \right] $$ Unfortunately, I am not sure where to go from here. I am also not certain what $\lambda$ is supposed to denote, since it is not defined in the lecture notes. I suspect (by comparing this formula to the formula for Riemann integration) that $$ \lambda = s_{i+1} - s_i $$
The integral value should be: $$\int_0^tW_s\;dW_s=\frac{W_t^2}{2}-\frac{t}{2}$$ Notice that $$\sum_iW_{s_i}(W_{s_{i+1}}-W_{s_i})=\frac{1}{2}\sum_i(W_{s_{i+1}}^2-W_{s_i}^2)-\frac{1}{2}\sum_i(W_{s_{i+1}}-W_{s_i})^2$$ The first sum is a telescoping sum that reduces to $\frac{W_t^2}{2}$ (since $W_0 = 0$). The second sum becomes half the quadratic variation of $W_s$. So your integral is...
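This can be seen numerically too: simulate paths, form the left-endpoint Itô sums from the definition in the question, and compare with $W_t^2/2 - t/2$ (Python/NumPy):

```python
import numpy as np

rng = np.random.default_rng(42)
t, n, paths = 1.0, 2000, 2000
dt = t / n

dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.cumsum(dW, axis=1)
W_prev = np.hstack([np.zeros((paths, 1)), W[:, :-1]])   # left endpoints W_{s_i}

ito_sum = np.sum(W_prev * dW, axis=1)    # sum_i W_{s_i} (W_{s_{i+1}} - W_{s_i})
target = W[:, -1] ** 2 / 2 - t / 2       # W_t^2 / 2 - t / 2

mse = np.mean((ito_sum - target) ** 2)   # shrinks like 1/n (quadratic-variation term)
print(mse)
assert mse < 0.01
```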
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If a Vector Space $V$ has dimension $n$ and $T:V\to V$ is linear, then is the following true? Suppose we have that a basis for $V$ is $B=\{v_1,v_2,\ldots ,v_n\}$ and $v_1,v_2,\ldots,v_k$ are a basis for $\operatorname{im}(T)$. Does it follow that $v_{k+1}, v_{k+2},\ldots, v_n$ is a basis for $\ker(T)$? What if $T$ is idempotent?
Suppose $T(v_1) = v_1, \quad T(v_2) = v_2, \quad T(v_3) = v_1+v_2.$ Then $\{v_1,v_2\}$ is a basis of $\operatorname{im}(T)$ but $\{v_3\}$ is not a basis of $\ker(T).$ In this case $\{v_1+v_2-v_3\}$ is a basis of $\ker(T).$
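In coordinates (taking $\{v_1,v_2,v_3\}$ as the standard basis of $\mathbb{R}^3$), this counterexample is easy to verify (Python/NumPy):

```python
import numpy as np

# T(v1) = v1, T(v2) = v2, T(v3) = v1 + v2
T = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])

# v3 alone is NOT in the kernel ...
assert np.linalg.norm(T @ np.array([0.0, 0.0, 1.0])) > 0

# ... but v1 + v2 - v3 is
assert np.allclose(T @ np.array([1.0, 1.0, -1.0]), 0)

# im(T) = span{v1, v2}, so the rank is 2 and dim ker(T) = 1
assert np.linalg.matrix_rank(T) == 2
print("kernel is spanned by v1 + v2 - v3, not by v3")
```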
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Exact Solution for Simple Sliding Linkage I am working with a linkage that consists of two cylinders with lengths $b$ and $c$ that actuate a platform to height $h$ and angle $\theta$. The $h$ link is always perpendicular to the bottom two $a$ links which are fixed. The top two $a$ links are collinear. The value of $a$ is a known constant, and the values of $h$ and $\theta$ change based on the lengths of the cylinders. The dashed lines are shown as a reflection of the left side to illustrate the difference in angle between $b$ and $c$. I have derived the formulas to calculate the exact lengths of the two cylinders $b$ and $c$ for given values of $h$ and $\theta$. $$b^2 = {(h+a\sin\theta)}^2 + {(a-a\cos\theta)}^2$$ $$c^2 = {(h-a\sin\theta)}^2 + {(a-a\cos\theta)}^2$$ My problem is that I also want to solve it the other way, using values for $b$ and $c$ to find $h$ and $\theta$. From the above two equations, I derived the following: $$h = \frac{b^2 - c^2}{4a\sin\theta} $$ I am currently using the approximation that $a\sin\theta=\frac{b-c}{2}$ which allows me to use the above equation to find $h$. This assumes that the very thin triangle on the right has no area, which is only actually true when the platform is horizontal. The approximation is pretty close, but I would like to know the exact solution to calculate $h$ and $\theta$ from given values of $b$ and $c$.
I don't know that there is a "pretty" way to solve it symbolically. It is, however, technically possible to separate the variables and derive "exact" equations in either variable alone, which could then be solved numerically to the desired precision. One way to do that: $$ b^2 - a^2(1-\cos\theta)^2 = {(h+a\sin\theta)}^2 \\ c^2 - a^2(1-\cos\theta)^2 = {(h-a\sin\theta)}^2 $$ Multiplying the above: $$ \big(b^2 - a^2(1-\cos\theta)^2\big)\big(c^2 - a^2(1-\cos\theta)^2\big) = \big(h^2 - a^2 (1 - \cos^2 \theta)\big)^2 \tag{1} $$ Using the other posted equation: $$h = \frac{b^2 - c^2}{4a\sin\theta} \tag{2} \;\;\implies\;\; h^2 = \frac{(b^2-c^2)^2}{16a^2(1 - \cos^2 \theta)}$$ Substituting $h^2$ from $(2)$ back into $(1)$ gives in the end an $8^{th}$ degree polynomial equation in $\cos \theta$.
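A numerical round trip through the forward formulas confirms both the exact relation $(2)$ for $h$ and the separated equation $(1)$ (a sketch; the particular values of $a$, $h$, $\theta$ are just test inputs):

```python
import math

a, h, theta = 2.0, 5.0, 0.3   # arbitrary test geometry

b = math.sqrt((h + a*math.sin(theta))**2 + (a - a*math.cos(theta))**2)
c = math.sqrt((h - a*math.sin(theta))**2 + (a - a*math.cos(theta))**2)

# exact relation (2): h = (b^2 - c^2) / (4 a sin(theta))
h_recovered = (b**2 - c**2) / (4*a*math.sin(theta))

# separated equation (1) should also balance at the true theta
u = a*a*(1 - math.cos(theta))**2
lhs1 = (b*b - u) * (c*c - u)
rhs1 = (h*h - a*a*(1 - math.cos(theta)**2))**2
```

In practice one would run a one-dimensional root finder on equation $(1)$ in $\cos\theta$ and then recover $h$ from $(2)$.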
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Parity of an odd integer I'm not sure if I'm just tired and I'm missing something obvious, but how come I'm obtaining the following: $$(2m+1)^n=\sum_{k=0}^n \binom{n}{ k} (2m)^k = 2 \sum_{k=0}^n \binom{n}{ k} 2^{k-1}m^k $$ This seems to imply any power of an odd integer is even, but $3^2=9$ is an obvious counterexample.
All has been said: $k=0$ spoils the party: $\sum_{k=0}^{n} \binom{n}{k}(2m)^k=$ $ \binom{n}{0}(2m)^0 + 2\sum_{k=1}^{n}\binom{n}{k}(2)^{k-1}m^k.$ The first term in the above sum $=1.$ Hence?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2685974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Concerning the invertibility of $T-2I$ and $T-6I$ when $\dim E(8,T) = 4$. Prior to reading the following post please consider the following notation and results. * *$\mathbf{F}$ is either $\mathbf{R}$ or $\mathbf{C}$ *$T\in\mathcal{L}(V)$ means that $T$ belongs to the set of all linear transformations over the vector space $V$. *$\mathcal{M}(T)$ denotes the matrix of the linear transformation $T$. *$E(\lambda,T) = \operatorname{null}(T-\lambda I)$ is the eigenspace of $\lambda\in\mathbf{F}$ $5.30$ If $T\in\mathcal{L}(V)$ has an upper triangular matrix with respect to some basis of $V$ then the invertibility of $T$ is equivalent to all diagonal entries on the matrix of $T$ being non-zero. I would like to know if the following proof is correct. Theorem. Given that $T\in\mathcal{L}(\mathbf{F}^5)$ and $\dim E(8,T) = 4$, then either $T-2I$ or $T-6I$ is invertible. Proof. Assume on the contrary that both $T-2I$ and $T-6I$ are not invertible and let $v_1,v_2,v_3,v_4,v_5$ be the basis of $\mathbf{F}^5$ obtained by first extending the basis $v_1,v_2,v_3,v_4$ of $E(8,T)$. Regardless of the image of $v_5$ under $T$ it is not difficult to see that $\mathcal{M}(T)$ is upper-triangular and by extension so is $\mathcal{M}(T-2I)$ and $\mathcal{M}(T-6I)$ then from theorem $\textbf{5.30}$ at least one of the entries on the diagonal of both $\mathcal{M}(T-2I)$ and $\mathcal{M}(T-6I)$ is zero but $(\mathcal{M}(T-2I))_{ii} = 6$ and $(\mathcal{M}(T-6I))_{ii} = 2$ for all $i\in\{1,2,3,4\}$ which implies that $(\mathcal{M}(T-2I))_{55} = (\mathcal{M}(T-6I))_{55} = 0$ but then $(\mathcal{M}(T))_{55} = 2$ and $(\mathcal{M}(T))_{55} = 6$ resulting in a contradiction. $\blacksquare$
Your proof is correct. However, you made a mistake in the statement of theorem (5.30): it's not “all entries”; it's “all diagonal entries”. Besides, when you applied it, you only mentioned the diagonal entries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Tangents of a curve whose trace remains in a half-plane. I need to prove the following result: If a curve's trace is contained in a half-plane then the tangents at the intersection of the curve with the line defining the half-plane $R$ coincide with $R$. In other words: If $\alpha:I \to \mathbb{R}^2$ is a regular curve and $H^+,H^-$ denote the two closed half-planes determined by the line $R$, then show that for every $t$ with $\alpha(t) \in \alpha(I) \cap R$, the tangent line at $t$ satisfies $R \equiv \alpha(t)+L(\alpha'(t))$. I have doubts that I have enough tools to prove this result. Could you provide some hints on how to solve it?
Assume that $R$ does not pass through $0$. For any half-plane that does not contain $0$ there exists a unique vector $x \in \mathbb{R}^2$ such that $$y \in H^+ \Leftrightarrow x.y \geq 1$$ and $$y \in H^- \Leftrightarrow x.y \leq 1.$$ If we assume that $\alpha$ lies in $H^+$ and touches the line $R$ at $t_0$ then $x.\alpha(t) \geq 1$ for all $t \in \mathbb{R}$ and $x.\alpha(t_0)=1$. Define the map $f:\mathbb{R} \rightarrow \mathbb{R}, ~ t \mapsto x. \alpha(t)$, then by the linearity of the dot product we can see that $f'(t) = x.\alpha'(t)$. Since $t_0$ is a minimum point of $f$ then $f'(t_0)=x. \alpha'(t_0)=0$. The result now follows as we can define $R$ to be the line $t \mapsto x+ tx^\perp$. If $R$ contains $0$ then we can define choose $x \in \mathbb{R}^2$ such that $$y \in H^+ \Leftrightarrow x.y \geq 0$$ and $$y \in H^- \Leftrightarrow x.y \leq 0.$$ We now employ the same argument as above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why: $z^{-1}=\cos (-\phi)+i\sin(-\phi)$ Let's suppose that we have a complex number with $$r=1 \implies z=\cos\phi+i\sin\phi$$ Then why is $$z^{-1} =\cos (-\phi)+i\sin (-\phi)=\cos\phi-i\sin\phi$$
As your complex number has $r=1$, you can express it like $z=e^{i\theta}$, where $\theta$ is the argument. Then, $$z^{-1}=\frac{1}{z}=\frac{1}{e^{i\theta}}=e^{-i\theta}$$ Now, using the trigonometric form of complex numbers, $$e^{-i\theta}=\cos(-\theta)+i\sin(-\theta)=\cos(\theta)-i\sin(\theta),$$ where we used that $\cos(\theta)=\cos(-\theta)$ and $\sin(\theta)=-\sin(-\theta)$ for all $\theta$. Edit: The power series of the exponential, the sine and the cosine are $$e^x=\sum_{n=0}^\infty \dfrac{x^n}{n!}=1+x+\dfrac{x^2}{2!}+\dfrac{x^3}{3!}+\dfrac{x^4}{4!}+...$$ $$\sin x=\sum_{n=0}^\infty \dfrac{(-1)^n}{(2n+1)!}x^{2n+1}=x-\dfrac{x^3}{3!}+\dfrac{x^5}{5!}-\dfrac{x^7}{7!}+\dfrac{x^9}{9!}+...$$ $$\cos x=\sum_{n=0}^\infty \dfrac{(-1)^n}{(2n)!}x^{2n}=1-\dfrac{x^2}{2!}+\dfrac{x^4}{4!}-\dfrac{x^6}{6!}+\dfrac{x^8}{8!}+...$$ Now, substituting $i\theta$ instead of $x$ in the power series of $e^x$, we have \begin{align*} e^{i\theta} & =1+i\theta+\dfrac{(i\theta)^2}{2!}+\dfrac{(i\theta)^3}{3!}+\dfrac{(i\theta)^4}{4!}+...=1+i\theta+\dfrac{i^2\theta^2}{2!}+\dfrac{i^3\theta^3}{3!}+\dfrac{i^4\theta^4}{4!}+... \\ \\ & =1+i\theta-\dfrac{\theta^2}{2!}-\dfrac{i\theta^3}{3!}+\dfrac{\theta^4}{4!}+...=\left(1-\dfrac{\theta^2}{2!}+\dfrac{\theta^4}{4!}-...\right)+i\left(\theta-\dfrac{\theta^3}{3!}+\dfrac{\theta^5}{5!}-...\right) \\ \\ & =\cos\theta +i\sin\theta \end{align*}
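The identity is easy to confirm numerically with Python's `cmath` (a quick sketch for one sample angle):

```python
import cmath
import math

theta = 0.7                          # any sample angle
z = cmath.exp(1j * theta)            # z = cos(theta) + i sin(theta), |z| = 1

inverse = 1 / z                      # z^{-1}
conjugate_form = complex(math.cos(theta), -math.sin(theta))   # cos - i sin
```

Since $|z|=1$, the inverse is just the complex conjugate, which is what the trigonometric form expresses.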
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 4 }
What kind of fraction will have this property? List out the decimal expansions of $i/19$ for $i=1,\dots,18$: the digits form a square, and the sum of each column is the same, 81. The fractions $i/7$ also have such a property: 142857 285714 428571 571428 714285 857142 Any other fractions have such a property? What kind of fractions will have such a property?
You are after numbers that are the decimal period of so-called cyclic numbers. $7$ and $19$ in this context are called full reptend primes (OEIS A001913): Primes p such that the decimal expansion of 1/p has period p-1, which is the greatest period possible for any integer. The first few are $7, 17, 19, 23, 29, 47, 59, 61, 97.$
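The list can be recomputed directly: a prime $p$ (other than $2$ and $5$) is full reptend exactly when the multiplicative order of $10$ modulo $p$ equals $p-1$, i.e. when the decimal period of $1/p$ has maximal length. A stdlib-only sketch:

```python
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def order_of_10(p):
    # multiplicative order of 10 mod p = decimal period length of 1/p
    k, r = 1, 10 % p
    while r != 1:
        r = (r * 10) % p
        k += 1
    return k

full_reptend = [p for p in range(3, 100)
                if is_prime(p) and p not in (2, 5) and order_of_10(p) == p - 1]
```

Running this reproduces the OEIS A001913 list quoted above.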
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integrating w.r.t. two different variables I can't really follow what's going on here to get from line 2 to line 3: \begin{align} & \int^\infty_0 \frac{d^2 p}{(2\pi)^2} \frac{\Lambda^2}{(p^2+\Lambda^2)(p^2+m^2)} \\[10pt] = {} & \int^1_0 dz \, \frac{d^2 p}{(2\pi)^2}\frac{\Lambda^2}{(p^2+z\Lambda^2+(1-z) m^2)^2} \\[10pt] = {} & \int^1_0dz \, \frac{1}{4\pi}\frac{\Lambda^2}{z\Lambda^2+(1-z)m^2} \\[10pt] = {} & \frac{1}{4\pi}\frac{\Lambda^2}{\Lambda^2-m^2}\ln\frac{\Lambda^2}{m^2} \end{align} There are two different integration variables and then one of them vanishes. I've tried evaluating the integral w.r.t $p$ and then again w.r.t. $p$ because it's $d^2 p$, but that didn't work (just using wolfram alpha). From line one to line two I can follow, it's a Feynman parametrization, but what do you do to get from line 2 to line 3?
The integral over momentum space is missing in Line 2 of the OP. That is to say that the expression $\displaystyle d^2 p\,\frac{\Lambda^2}{(p^2+z\Lambda^2+(1-z) m^2)^2}$ should read $$\int_{-\infty}^\infty\int_{-\infty}^\infty \frac{\Lambda^2}{(p_1^2+p_2^2+z\Lambda^2+(1-z) m^2)^2}\,dp_1\,dp_2$$ which after a transformation to polar coordinates $(p_1,p_2)\mapsto(\rho,\phi)$ becomes $$\begin{align} \int_{-\infty}^\infty\int_{-\infty}^\infty \frac{\Lambda^2}{(p_1^2+p_2^2+z\Lambda^2+(1-z) m^2)^2}\,dp_1\,dp_2&= \int_0^{2\pi}\int_0^\infty \frac{\Lambda^2}{(\rho^2+z\Lambda^2+(1-z) m^2)^2}\,\rho\,d\rho\,d\phi\\\\ &=2\pi \int_0^\infty \frac{\Lambda^2\,\rho}{(\rho^2+z\Lambda^2+(1-z) m^2)^2}\,d\rho\\\\ &=2\pi\Lambda^2 \left.\left(-\frac12\, \frac{1}{\rho^2+z\Lambda^2+(1-z)m^2}\right)\right|_{\rho=0}^{\rho\to \infty}\\\\ &=2\pi\left(\frac{\Lambda^2}{2}\, \frac{1}{z\Lambda^2+(1-z)m^2}\right) \end{align}$$ Finally, dividing by $4\pi^2$ and integrating over $z$ from $0$ to $1$ yields the coveted result.
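The final $z$-integral can be sanity-checked numerically against the closed form $\frac{\Lambda^2}{\Lambda^2-m^2}\ln\frac{\Lambda^2}{m^2}$ (a midpoint-rule sketch; the values of $\Lambda$ and $m$ are illustrative, not taken from the source):

```python
import math

Lam, m = 3.0, 1.0        # illustrative cutoff and mass, with Lam != m
N = 100000               # midpoint-rule panels

# integrate Lam^2 / (z*Lam^2 + (1 - z)*m^2) over z in [0, 1]
total = 0.0
for i in range(N):
    z = (i + 0.5) / N
    total += Lam**2 / (z * Lam**2 + (1 - z) * m**2)
numeric = total / N

closed_form = Lam**2 / (Lam**2 - m**2) * math.log(Lam**2 / m**2)
```

The two values agree to well within the quadrature error of the midpoint rule.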
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question on evaluating $\int_{C}\frac{e^{iz}}{z(z-\pi)}dz$ without the residue theorem I am trying to figure out how to evaluate the integral $\int_{C}\frac{e^{iz}}{z(z-\pi)}dz$ where $C$ is any circle centered at the origin with radius greater than $\pi$. I can see that $\frac{e^{iz}}{z(z-\pi)}$ is analytic everywhere except where $z=0$ and $z=\pi$, both of which are in the region bounded by $C$. I can also see that by using the Taylor expansion of $e^{iz}$ that $$\int_{C}\frac{e^{iz}}{z(z-\pi)}dz = \sum_{n=0}^{\infty}\frac{i^{n}}{n!}\int_{C}\frac{z^{n-1}}{z-\pi}dz$$ I'm supposed to apply Cauchy's Theorem or Cauchy's Integral Theorem to evaluate the integral along this curve but I am not sure how to do so. I do not yet have the residue theorem in my tool box.
By symmetry the residues of $\frac{e^{iz}}{z(z-\pi)}$ at $z=0$ and $z=\pi$ are the same and they equal $-\frac{1}{\pi}$. It follows that for any $R>\pi$ we have $$ \oint_{\|z\|=R}\frac{e^{iz}}{z(z-\pi)}\,dz = 2\pi i\cdot\left(-\frac{2}{\pi}\right) = -4i.$$
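Independently of any residue machinery, the value $-4i$ can be checked by direct numerical quadrature of the contour integral, parametrizing $z=Re^{i\phi}$; the trapezoidal rule converges very fast for smooth periodic integrands. ($R$ and $N$ below are arbitrary choices with $R>\pi$.)

```python
import cmath
import math

R, N = 4.0, 4000                 # any R > pi; N quadrature nodes

total = 0j
for k in range(N):
    phi = 2 * math.pi * k / N
    z = R * cmath.exp(1j * phi)
    dz = 1j * z * (2 * math.pi / N)          # dz = i R e^{i phi} dphi
    total += cmath.exp(1j * z) / (z * (z - math.pi)) * dz
```

`total` should agree with $-4i$ to high accuracy.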
{ "language": "en", "url": "https://math.stackexchange.com/questions/2686903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
When an odd polynomial is a one-one map on $\mathbb{R}$ Let $f(x)=c_1x+c_3x^3+c_5x^5+\cdots+c_{2m+1}x^{2m+1}$, $c_1,c_3,c_5,\ldots,c_{2m+1} \in \mathbb{R}$, $m \in \mathbb{N}$, namely, $f$ is an odd polynomial over $\mathbb{R}$. When such a polynomial is a one-one map on $\mathbb{R}$? Is it always one-one? Example: $f(x)=x+x^3$. If $f(a)=f(b)$ for $a,b \in \mathbb{R}$, then $a+a^3=b+b^3$, so $(a-b)+(a^3-b^3)=0$, and then, $(a-b)(1+(a^2+ab+b^2))=0$. Therefore, $a-b=0$ or $1+(a^2+ab+b^2)=0$. In the first case $a=b$ and we are done, while in the second case, the discriminant is $-4-3a^2 < 0$, so there are no $a,b \in \mathbb{R}$ satisfying $1+(a^2+ab+b^2)=0$. I guess that a general solution should be similar to the special case of the example: If $c_1a+c_3a^3+c_5a^5+\cdots+c_{2m+1}a^{2m+1}=c_1b+c_3b^3+c_5b^5+\cdots+c_{2m+1}b^{2m+1}$, then $(a-b)(c_1+c_3(a^4+a^3b+a^2b^2+ab^3+b^4)+\cdots)=0$, so (first case) $a-b=0$ and we are done, or (second case) $c_1+c_3(a^4+a^3b+a^2b^2+ab^3+b^4)+\cdots=0$, which I am not sure I know how to show that it is necessarily non-zero (it is an even polynomial).
The answer is no, it is not always one-to-one. Consider the following odd polynomial (note that an odd polynomial here may contain only odd powers of $x$) $$P(x)=x(x-1)(x+1)= x^3-x$$ $$ P(-1)=P(0)=P(1)=0$$ It is not one-to-one. However, a function will be one-to-one if its derivative is always positive or always negative.
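A concrete odd polynomial (only odd powers) that fails to be one-to-one is $f(x)=x^3-x$; a quick stdlib check that three distinct inputs share the output $0$:

```python
def f(x):
    # odd polynomial: only odd powers, with c1 = -1 and c3 = 1
    return x**3 - x

values = [f(-1), f(0), f(1)]   # three distinct inputs, same output
```

Since $f(-1)=f(0)=f(1)=0$, the map is not injective, while $f'(x)=3x^2-1$ indeed changes sign.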
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How to find UMVUE of $\theta^k$ when $x_1, \ldots, x_n$ is a sample from Bernoulli$(\theta)$? Let $x_1, x_2, \ldots, x_n$ be a random sample from the Bernoulli ($\theta$). The question is to find the UMVUE of $\theta^k$. I know the $\sum_1^nx_i$ is the complete sufficient statistics for $\theta$. Is $\left(\frac{\sum_1^nx_i}{n}\right)^k$ the estimator or any other possible estimator? Could someone just help me?
Having that $$\theta^m=P\{ X_1=1,X_2=1,...,X_m=1\}$$ An unbiased estimator for $\theta^m$ is $$T= \begin{cases} 1, & if \ \ X_1=X_2= \, ... \,=X_m =1 \\ 0, & in \ other \ case \end{cases}$$ But $$\begin{align} E[T|S=s] & = P\{X_1=1,X_2=1,...,X_m=1|S=s\}=\frac{P\{X_1=1,X_2=1,...,X_m=1,S=s\}}{P\{S=s\}} = \\\\ & = \begin{cases} 0, & if \ \ m>s \\ \frac{\theta^m\binom{n-m}{s-m}\theta^{s-m}(1-\theta)^{n-s}}{\binom{n}{s}\theta^s(1-\theta)^{n-s}}, & if \ \ m\leq s \end{cases} \end{align}$$ By the theorem of Lehmann-Scheffé, the UMVUE for $\theta^m$ is, after simplifying the latter expression: $$E[T|S=s]=\begin{cases} 0, & if \ \ m>s \\ \frac{s!(n-m)!}{n!(s-m)!}, & if \ \ m\leq s \end{cases}$$
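Unbiasedness of the resulting estimator can be verified exactly with rational arithmetic, since $E[h(S)]=\sum_s \binom{n}{s}\theta^s(1-\theta)^{n-s}h(s)$ must equal $\theta^m$ (a sketch; $n$, $m$ and the rational $\theta$ below are illustrative):

```python
from fractions import Fraction
from math import comb, factorial

n, m = 5, 2
theta = Fraction(1, 3)           # illustrative rational success probability

def umvue(s):
    # the Lehmann-Scheffe estimator evaluated at S = s
    if s < m:
        return Fraction(0)
    return Fraction(factorial(s) * factorial(n - m),
                    factorial(n) * factorial(s - m))

# E[umvue(S)] under S ~ Binomial(n, theta), computed exactly
expectation = sum(comb(n, s) * theta**s * (1 - theta)**(n - s) * umvue(s)
                  for s in range(n + 1))
```

The exact expectation comes out to $\theta^m$, as the tower property predicts.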
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Summation Manipulation Problem Is the following statement true: $$\sum^k_{j=0} \left(\sum^j_{i=0}a_i b_{j-i}\right) d_{k-j} = \sum^k_{j=0} \left(\sum^j_{i=0}b_i d_{j-i}\right) a_{k-j}$$ I'm not sure if this is true as I've been unable to prove this directly. However, I think this is true. If we fix $k$ and then $j$, we have fixed the subscript of $d$. So, there is a term $a_m b_n d_o$ on the LHS such that $m + n + o = k$. However on the RHS, if we fix the same value of $k$ as above and strategically fix a different value of $j$ so that the subscript of $a$ is $m$, then we can find a particular $i$ such that we also have $a_m b_n d_o$. That is, the subscript of $b$ plus the subscript of $d$ on the RHS is equal to $j$, so we can find an $n$ and $o$ such that $n+o = k - m = j$. If it is true, is there a more direct (or different) proof of this? If not, why is it not true?
Both sides are equal to: $$\sum_{m+n+o=k}a_mb_nd_o$$ where $m,n,o$ denote nonnegative integers. The expression: $$\sum_{m+n+o=k}a_mb_nd_o$$ abbreviates: $$\sum_{\langle m,n,o\rangle\in S}a_mb_nd_o$$ where $S=\{\langle m,n,o\rangle\in\mathbb N^3\mid m+n+o=k\}$ and $\mathbb N=\{0,1,2,\dots\}$. Further $|S|=\binom{k+2}2$ (which can be found with stars and bars).
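Both orderings can be brute-force compared against the symmetric triple sum for arbitrary coefficient sequences (the particular values below are just test data):

```python
# arbitrary test coefficients
a = [2, -1, 3, 5, 0, 7]
b = [1, 4, -2, 0, 3, 1]
d = [5, 2, 2, -3, 1, 0]
k = 5

lhs = sum(sum(a[i] * b[j - i] for i in range(j + 1)) * d[k - j]
          for j in range(k + 1))
rhs = sum(sum(b[i] * d[j - i] for i in range(j + 1)) * a[k - j]
          for j in range(k + 1))

# both should equal the symmetric triple sum over m + n + o = k
triple = sum(a[m] * b[n] * d[k - m - n]
             for m in range(k + 1) for n in range(k + 1 - m))
```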
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the number of ways of arranging the letters Find the number of ways of arranging the letters $\text{AAAAA, BBB, CCC, D, EE & F}$ in a row if no two $\text{C's}$ are together? My Attempt: Well, I should I arrive at the answer if I subtract the cases where 3 $\text{C's}$ and 2 $\text{C's}$ appear together from the total. Total possibilities $= \frac{15!}{5!3!3!2!}$ Total possibilities where 3 $\text{C's}$ appear $=\frac{13!}{5!3!2!}$ However, I am not able to find the possibilities for 2 $\text{C's}$ being together and get to the answer. Any help would be appreciated.
While I prefer the approach explained by Alaleh A and Parcly Taxel, here is how you can solve the problem using the Inclusion-Exclusion Principle. The number of distinguishable arrangements of $5$ A's, $3$ B's, $3$ C's, $1$ D, $2$ E's, and $1$ F is $$\frac{15!}{5!3!3!1!2!1!}$$ as you found. From these, we must subtract those arrangements in which a pair of C's are adjacent. A pair of C's are adjacent: We have $14$ objects to arrange, $5$ A's, $3$ B's, $1$ CC, $1$ C, $1$ D, $2$ E's, and $1$ F. They can be arranged in $$\frac{14!}{5!3!1!1!1!2!1!}$$ ways. However, if we subtract $\frac{14!}{5!3!1!1!1!2!1!}$ from the total, we will have subtracted too much since we will have subtracted each arrangement with two pairs of adjacent C's twice (such arrangements have three consecutive C's), once when we designated the first two C's as the adjacent pair and once when we designated the last two C's as the adjacent pair. Since we only want to subtract such arrangements once, we must add them back. Two pairs of adjacent C's: As mentioned above, this means the three C's are consecutive. Hence, we have $13$ objects to arrange, $5$ A's, $3$ B's, $1$ CCC, $1$ D, $2$ E's, and $1$ F. The number of such arrangements is $$\frac{13!}{5!3!1!1!2!1!}$$ as you found. Hence, the number of arrangements of $5$ A's, $3$ B's, $3$ C's, $1$ D, $2$ E's, and $1$ F in which no two C's are consecutive is $$\frac{15!}{5!3!3!1!2!1!} - \frac{14!}{5!3!1!1!2!1!} + \frac{13!}{5!3!1!1!2!1!}$$
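The same inclusion–exclusion scheme can be validated by brute force on a smaller analogue where enumeration is feasible (here $2$ A's and $3$ C's, my own choice of miniature instance):

```python
from itertools import permutations
from math import factorial

# small analogue of the same counting problem: A,A,C,C,C with no two C's adjacent
letters = "AACCC"

brute = len({"".join(p) for p in permutations(letters)
             if "CC" not in "".join(p)})

# inclusion-exclusion, exactly as in the answer:
total    = factorial(5) // (factorial(2) * factorial(3))  # all arrangements
one_pair = factorial(4) // factorial(2)                   # objects A,A,CC,C
triple   = factorial(3) // factorial(2)                   # objects A,A,CCC
formula  = total - one_pair + triple
```

Both counts give $10-12+3=1$ (the single word CACAC), supporting the structure of the full computation.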
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 1 }
Extension of rational map from normal variety to algebraic group Assume $k$ is an algebraically closed field, $X$ is a normal variety over $k$, $G$ is an algebraic group over $k$. I heard that Weil proved any rational map from $X$ to $G$ which defined over a codimension $>1$ open subset can be extended to the whole $X$, and the idea was to consider the diagonal. What is the whole proof? I think this generalizes the case $G=\Bbb G_a$ (well-known fact about extension of rational functions) and there are counterexamples without the assumption on $G$(e.g $\Bbb A^2-0 \rightarrow \Bbb P^1$).
I'm not too convinced by the statement but, here's something that may be relevant. I'm paraphrasing from page 151-152, Chapter 8 section f of James Milne's ``Algebraic Groups: the theory of group schemes of finite type over a field". The proofs are given in a further reference: MILNE, J. S. 1986. Abelian varieties, pp. 103–150. In Arithmetic geometry (Storrs, Conn., 1984). Springer-Verlag, Berlin. * *Every rational map from a normal variety to a complete variety is defined on an open subset whose complement has codimension $\geq 2$. *Every rational map from a smooth variety to a connected group variety is defined on an open set whose complement is either empty or has pure codimension 1. *Combining 1)+2), every rational map from a smooth variety $X$ to an abelian variety $A$ is defined on all of $X$. Maybe some part of, or some combination of 1), 2), 3) answers some part of your question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If matrices $A,B$ are similar, find nonsingular $S$ s.t. $B=S^{-1}AS$ Consider the matrices below $$A=\begin{bmatrix}9&4&5\\-4&0&-3\\-6&-4&-2\end{bmatrix}$$ and $$B=\begin{bmatrix}2&1&0\\0&2&0\\0&0&3\end{bmatrix}$$ These matrices have the same eigenvalues $\{2,2,3\}$ and the same Jordan Canonical Form so they are similar. In trying to find $S$ s.t. $B=S^{-1}AS$ I set $$S=\begin{bmatrix}a&b&c\\d&e&f\\g&h&k\end{bmatrix}$$ and then tried to solve the system of 9 equations with 9 unknowns $$B=S^{-1}AS\Leftrightarrow SB=AS$$ but Matlab showed it is rank deficient so it provided only the zero solution. How can I find such a matrix $S$? Is there a systematic way to do that in the general case when both $A,B$ are $n\times n$ matrices?
Notice that $B$ is precisely the Jordan form of $A$, so it suffices to find a Jordan basis for $A$. We have: $$A - 2I =\begin{bmatrix}7&4&5\\-4&-2&-3\\-6&-4&-4\end{bmatrix},\quad (A - 2I)^2 =\begin{bmatrix}3&0&3\\-2&0&-2\\-2&0&-2\end{bmatrix}$$ so $\ker (A - 2I)^2 = \operatorname{span}\{e_1- e_3, e_2\} = \operatorname{span}\left\{\pmatrix{1 \\ 0\\ -1}, \pmatrix{0\\1\\0}\right\}$. $$A - 3I = \begin{bmatrix}6&4&5\\-4&-1&-3\\-6&-4&-5\end{bmatrix} \implies \ker (A - 3I) = \operatorname{span}\left\{\pmatrix{-3 \\ 2 \\ 2}\right\}$$ Therefore, one Jordan basis is $$\left\{(A - 2I)e_2, e_2, \pmatrix{-3 \\ 2 \\ 2}\right\} = \operatorname{span}\left\{\pmatrix{4 \\ -2\\ -4}, \pmatrix{0\\1\\0}, \pmatrix{-3 \\ 2 \\ 2}\right\}$$ So the similarity matrix is $$S = \begin{bmatrix}4&0&-3\\-2&1&2\\-4&0&2\end{bmatrix}$$
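The computed $S$ can be verified directly: $SB = AS$, equivalently $B=S^{-1}AS$, using exact integer arithmetic (a stdlib sketch):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[ 9,  4,  5],
     [-4,  0, -3],
     [-6, -4, -2]]
B = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]
S = [[ 4, 0, -3],    # columns: (A-2I)e2, e2, and the eigenvector for 3
     [-2, 1,  2],
     [-4, 0,  2]]

lhs = matmul(S, B)   # S B
rhs = matmul(A, S)   # A S
```

Equality of `lhs` and `rhs` confirms that the columns of $S$ really form a Jordan basis for $A$.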
{ "language": "en", "url": "https://math.stackexchange.com/questions/2687921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How to deal with $(-1)^{k-1}$ It's a problem on mathematical induction. $$1^2-2^2+3^2-.....+(-1)^{n-1}n^2=(-1)^{n-1}\frac{n.(n+1)}{2}$$ I have proved it for values of $n=1,2$. Now I assume for $n=k$ $$P(k):1^2-2^2+3^2-.....+(-1)^{k-1}k^2=(-1)^{k-1}\frac{k.(k+1)}{2}$$. $$P(k+1):1^2-2^2+3^2-.....+(-1)^{k-1}k^2+(k+1)^2=(-1)^{k-1}\frac{k.(k+1)}{2}+(k+1)^2\\=\frac{(k+1)}{2} [(-1)^{k-1}.k+2k+2]$$ I need suggestion to deal with the $(-1)^{k-1}$ so that I can prove the whole. Any help is appreciated.
you have to prove that $$(-1)^{k-1}\frac{k(k+1)}{2}+(-1)^k(k+1)^2=(-1)^k\frac{(k+1)(k+2)}{2}$$ or $$(-1)^{k-1}\frac{k(k+1)}{2}=(-1)^k\frac{(k+1)(k+2)}{2}-(-1)^k(k+1)^2$$ the right-hand side is given by $$(-1)^k(k+1)\left(\frac{(k+2)}{2}-k-1\right)$$ and this is equal to $$(-1)^k(k+1)\left(\frac{k+2-2k-2}{2}\right)$$ Can you finish?
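The closed form itself is easy to confirm numerically for many $n$, which is a useful sanity check while setting up the induction:

```python
def lhs(n):
    # 1^2 - 2^2 + 3^2 - ... + (-1)^(n-1) n^2
    return sum((-1)**(k - 1) * k * k for k in range(1, n + 1))

def rhs(n):
    # (-1)^(n-1) * n(n+1)/2  (n(n+1) is always even, so // is exact)
    return (-1)**(n - 1) * n * (n + 1) // 2

checks = all(lhs(n) == rhs(n) for n in range(1, 50))
```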
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Double Antiderivation problem I have to find $f(x)$ given $f''(x)$ and certain initial conditions. $$f''(x) = 8x^3 + 5$$ and $f(1) = 0$ and $f'(1) = 8$ $$f'(x) = 8 \cdot \frac{x^4}{4} + 5x + C = 2x^4 + 5x + C$$ Since $f'(1) = 8 \Rightarrow 2 + 5 + C = 8$ so $C = 1$ $$ f'(x) = 2x^4 + 5x + 1$$ $$f(x) = 2 \cdot \frac{x^5}{5} + 5 \cdot \frac{x^2}{2} + x + D = \frac{2}{5} x^5 + \frac{5}{2} x^2 + x + D$$ Since $f(1) = 0$ then: $$\frac{2}{5} + \frac{5}{2} + 1 + D = 0$$ $$\frac{29}{10} + 1 + D = 0$$ So $D = \dfrac{-39}{10}$ So $$f(x) = \frac{2}{5} \cdot x^5 + \frac{5}{2} \cdot x^2 + x - \frac{39}{10}$$ Does that look right?
Let us check: $$f(x) = \frac{2}{5} \cdot x^5 + \frac{5}{2} \cdot x^2 + x - \frac{39}{10}\implies f'(x)= 2x^4+5x+1\implies f''(x)=8x^3+5$$ $$f(1)=\frac{2}{5}+ \frac{5}{2} + 1 - \frac{39}{10}=\frac{4+25+10-39}{10}=0$$ $$f'(1)=2+5+1=8$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When point is inside of circle Let we have points $A_1,A_2,A_3,A_4$.In time $t$ point $A_i$ has coordinates $(x_i,y_i) + (v_{xi},v_{yi}) * t$, all parametrs are given. Describe algorithm to find all $t$ when point $A_4$ inside circle circumscribed around $A_1,A_2,A_3$ , or to find first moment when this happens.
Let $A_i(X_i,Y_i)$ where $$X_i=x_i+v_{xi}t\quad\text{and}\quad Y_i=y_i+v_{yi}t$$ for $i=1,2,3$ and $4$. According to MathWorld, the center of the circle passing through three points $(X_1,Y_1),(X_2,Y_2)$ and $(X_3,Y_3)$ is given by $$\left(-\frac{b}{2a},-\frac{c}{2a}\right)$$ and its radius is given by $$\sqrt{\frac{b^2+c^2}{4a^2}-\frac da}$$ where $$a=\begin{vmatrix} X_1 & Y_1 & 1 \\ X_2 & Y_2 & 1 \\ X_3 & Y_3 & 1 \\ \end{vmatrix},\ b=-\begin{vmatrix} X_1^2+Y_1^2 & Y_1 & 1 \\ X_2^2+Y_2^2 & Y_2 & 1 \\ X_3^2+Y_3^2 & Y_3 & 1 \\ \end{vmatrix},$$ $$c=\begin{vmatrix} X_1^2+Y_1^2 & X_1 & 1 \\ X_2^2+Y_2^2 & X_2 & 1 \\ X_3^2+Y_3^2 & X_3 & 1 \\ \end{vmatrix},\ d=-\begin{vmatrix} X_1^2+Y_1^2 & X_1 & Y_1 \\ X_2^2+Y_2^2 & X_2 & Y_2 \\ X_3^2+Y_3^2 & X_3 & Y_3 \\ \end{vmatrix} $$ Therefore, under the conditions $$a\not=0\quad\text{and}\quad \frac{b^2+c^2}{4a^2}-\frac da\gt 0$$ which are needed in order for such a circle to exist, we have $$\begin{align}&\text{$A_4$ is inside the circle} \\\\&\iff\sqrt{\left(X_4-\left(-\frac{b}{2a}\right)\right)^2+\left(Y_4-\left(-\frac{c}{2a}\right)\right)^2}\le \sqrt{\frac{b^2+c^2}{4a^2}-\frac da} \\\\&\iff \left(X_4+\frac{b}{2a}\right)^2+\left(Y_4+\frac{c}{2a}\right)^2\le \frac{b^2+c^2}{4a^2}-\frac da \\\\&\iff X_4^2+\frac{bX_4}{a}+Y_4^2+\frac{cY_4}{a}+\frac da\le 0 \\\\&\iff a^2X_4^2+abX_4+a^2Y_4^2+acY_4+ad\le 0\end{align}$$ The LHS of the last inequality is a sixth degree polynomial on $t$.
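At any fixed $t$, the determinant formulas for the center can be cross-checked against the defining property of the circumcenter (equidistance from the three points); a stdlib sketch on one sample triangle of my own choosing:

```python
def det3(M):
    # 3x3 determinant by cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

P = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]   # sample triangle
(x1, y1), (x2, y2), (x3, y3) = P
s1, s2, s3 = (x1*x1 + y1*y1, x2*x2 + y2*y2, x3*x3 + y3*y3)

a = det3([[x1, y1, 1], [x2, y2, 1], [x3, y3, 1]])
b = -det3([[s1, y1, 1], [s2, y2, 1], [s3, y3, 1]])
c = det3([[s1, x1, 1], [s2, x2, 1], [s3, x3, 1]])

cx, cy = -b / (2 * a), -c / (2 * a)
radii = [((x - cx)**2 + (y - cy)**2)**0.5 for x, y in P]
```

All three radii agree, so the determinant center matches the geometric circumcenter; the full algorithm would evaluate these determinants symbolically in $t$ and solve the resulting degree-six inequality.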
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why $A_\varepsilon \subset \bigcup_{\sigma \in \mathfrak{F}} S_\sigma,\;?$ Let $(X,\mu)$ be a measure space and $f_1,\cdots,f_d\in L^\infty(X)$. Set $$g:=\displaystyle\sum_{k=1}^d|f_k|^2\;\;\text{and}\;\;c:=\|g\|_\infty.$$ Let $\sigma:=\{a_1,b_1,\cdots,a_d,b_d\}$ be such that $a_i,b_i\in \mathbb{Q}_+$ for all $i$. Set $$S_\sigma=\left\{x \in X;\; \left[\Re(f_k(x))\right]^2>a_k,\; \left[\Im(f_k(x))\right]^2>b_k,\;\;k=1,\cdots,d\right\}.$$ and $$\mathfrak{F}=\left\{\{a_1,b_1,\cdots,a_d,b_d\}\subset \mathbb{Q}_+^{2d};\;\;\sum_{k=1}^d (a_k+b_k) > c-\varepsilon,\;\forall \varepsilon>0\right\}.$$ Why $$A_\varepsilon \subset \bigcup_{\sigma \in \mathfrak{F}} S_\sigma,\;\forall \varepsilon>0\;?$$ with $$A_\varepsilon=\left\{x\in X;\; g(x)>c-\varepsilon\right\}.$$
Let $x\in A_\varepsilon$. Choose non negative rational numbers $a_i$ and $b_i$ such that $$ 0\lt \left[\Re(f_i(x))\right]^2- a_i\lt \frac{ \varepsilon}{2d} \mbox{ and } 0\lt \left[\Im(f_i(x))\right]^2 -b_i \lt \frac{ \varepsilon}{2d} .$$ Let $\sigma:=\{a_1,b_1,\cdots,a_d,b_d\}$. Then $\sigma$ belongs to $\mathfrak{F}$ and $x$ belongs to $S_\sigma$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Diffeomorphism invariance, Lie derivative There is written in the Hamilton's Ricci flow book about Lie Derivative this: The Lie derivative, which measures the infinitesimal lack of diffeomorphism invariance of a tensor with respect to a 1-parameter group of diffeomorphisms generated by a vector field, has the following properties: (1) If $f$ is a function, then $\cal L_x f = Xf$, (2) If $Y$ is a vector field, then ${\cal L_X} Y = [X,Y]$. I do not understand what intuitively is the "diffeomorphism invariance".
To make things simple, let $X,Y$ be vector fields on $\mathbb{R}^n$. Then $\mathcal{L}_XY$ gives the instantaneous rate of change of $Y$ along the flow $\phi_t$ which $X$ induces. You can show that, $$ \mathcal{L}_XY(p) = [X,Y](p) = \lim_{t \to 0} \frac{(\phi_{-t})_*\,Y( \phi_t(p)) - Y(p)}{t}$$ Here $\{\phi_t\}$ is the $1$-parameter group of diffeomorphisms, and the pushforward $(\phi_{-t})_*$ carries the vector $Y(\phi_t(p))$ back to the tangent space at $p$, so that the difference is taken in a single vector space. By definition of the limit, the Lie derivative is just asking, ``how does $Y$ change in the direction of the velocity of the phase flow?". Letting $t$ be very small we have, $$ \mathcal{L}_XY(p)\, t \approx (\phi_{-t})_*\,Y(\phi_t(p))-Y(p)$$ i.e. the Lie derivative tells us how $Y$ varies with respect to the flow: under the coordinate change $\phi_t$, how different is $Y(\phi_t(p))$, transported back to $p$, from $Y(p)$? If this difference vanishes for all $t$, then $Y$ is invariant under each diffeomorphism $\phi_t$. Hence the Lie derivative is measuring the infinitesimal lack of invariance of $Y$ (which is a tensor) w.r.t. the diffeomorphism $\phi_t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Homology of the closed topologist sine curve The closed topologist sine curve, $X$, is the subspace of $R^2$ consisting of all the points $(x,\sin(1/x))$ for $x \in (0,1]$, all points $(0,y)$ for $y \in [-1,1]$, and an arc from $(0,-1)$ to $(1,\sin(1))$. Compute the singular homology of $X$, using a suitable Mayer-Vietoris sequence. So to utilize the Mayer-Vietoris sequence I need an excisive couple; one condition that will guarantee that a pair is an excisive pair is if the union of their interiors covers the space. $X_1$ = $(x,\sin(1/x))$ for $x \in (0,1]$ and the arc from $(0,-1)$ to $(1,\sin(1))$ $X_2$ = all points $(0,y)$ for $y \in [-1,1]$ and the arc from $(0,-1)$ to $(1,\sin(1))$ I don't know; I'm just taking a shot in the dark, I guess. Can anyone offer some insight for me? Thanks!!
You do not consider the closed topologist's sine curve, but the Warsaw circle. See To show that Warsaw circle is simply connected. Let $S = \{(x,\sin\frac{1}{x}):0<x \le 1 \}$, $L = \{0\} \times [-1,1]$ and $T = L \cup S$. This is the closed topologist's sine curve. Your space $X$ is obtained by joining $(0,-1), (1,\sin(1)) \in T$ by an arc $A$ such that $A$ does not meet other points of $T$. Your subspaces $X_1, X_2$ do not form an excisive couple. The points of $L \subset X_2$ are no interior points of $X_1$ and $X_2$. Choose a homeomorphism $h : [0,1] \to A$. Let $X_1 = h((0,1))$ and $X_2 = X \setminus \{h(1/2)\}$. Then you have an excisive couple. $X_1$ is contractible, thus $$H_n(X_1) = \begin{cases} 0 & n > 0 \\ \mathbb Z & n = 0\end{cases}$$ The space $X_2$ contains $T$ as a strong deformation retract and $T$ has two path components ($L$ and $S$) which are both contractible. Hence $$H_n(X_2) \approx H_n(T) \approx H_n(L) \oplus H_n(S) = \begin{cases} 0 & n > 0 \\ \mathbb Z \oplus \mathbb Z & n = 0\end{cases}$$ Moreover, $X_1 \cap X_2$ is the disjoint union of two copies $I_1, I_2$ of an open interval. Thus $$H_n(X_1 \cap X_2) \approx H_n(I_1) \oplus H_n(I_2) = \begin{cases} 0 & n > 0 \\ \mathbb Z \oplus \mathbb Z & n = 0\end{cases}$$ Finally, since $X$ is path-connected $$H_0(X) = \mathbb Z .$$ By Mayer-Vietoris we get for $n > 0$ an exact sequence $$0 \to H_n(X) \to H_{n-1}(X_1 \cap X_2) \to H_{n-1}(X_1) \oplus H_{n-1}(X_2) \to H_{n-1}(X) \to \ldots$$ For $n > 1$ we know that $H_{n-1}(X_1 \cap X_2) = 0$, thus $H_n(X) = 0$. Let us come to $n = 1$. 
We get the exact sequence $$0 \to H_1(X) \to H_0(X_1 \cap X_2) \to H_0(X_1) \oplus H_0(X_2) \to H_0(X) \to 0$$ If $K$ denotes the kernel of $H_0(X_1) \oplus H_0(X_2) \to H_0(X)$, we get a short exact sequence $$0 \to K \to H_0(X_1) \oplus H_0(X_2) \to H_0(X) \to 0$$ Since $H_0(X) = \mathbb Z$, this sequence splits and we get $$H_0(X_1) \oplus H_0(X_2) \approx K \oplus H_0(X) .$$ We already know $H_0(X_2) = \mathbb Z \oplus \mathbb Z$ and $H_0(X_1) = H_0(X) = \mathbb Z$, thus $$\mathbb Z \oplus \mathbb Z \oplus \mathbb Z \approx K \oplus \mathbb Z .$$ This shows that $K \approx \mathbb Z \oplus \mathbb Z$. But $K$ agrees with the image of $H_0(X_1 \cap X_2) = \mathbb Z \oplus \mathbb Z \to H_0(X_1) \oplus H_0(X_2)$. This is only possible if this map is injective. Thus we must have $H_1(X) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How can I prove this? (infinite summation) I have to prove the following $$\sum_{k=1}^n \frac{(-1)^{k-1}}{k} {n \choose k} =1+\frac{1}{2}+...+\frac{1}{n}$$ I tried to prove it using the fact that $$\sum_{k=0}^n {(-1)^{n-k}} {n \choose k} k^m = \begin{cases} 0, & \text{if $m<n$ } \\ n!, & \text{if $m=n$ } \end{cases} $$ but my biggest problem is the factor $ (-1)^{n-k} $. Any thoughts on that or another way to approach this?
An overkill. By Melzak's identity with $f\equiv1$ we have $$\sum_{k=1}^{n}\dbinom{n}{k}\frac{\left(-1\right)^{k-1}}{x+k}=\frac{1}{x}-\frac{1}{x\dbinom{x+n}{n}}=\frac{\dbinom{x+n}{n}-1}{x\dbinom{x+n}{n}}$$ then taking $x\rightarrow0$ and recalling that $$\frac{d}{dx}\dbinom{x+n}{n}=\dbinom{x+n}{n}\left(\psi^{\left(0\right)}\left(n+x+1\right)-\psi^{\left(0\right)}\left(x+1\right)\right)$$ where $\psi^{\left(0\right)}\left(x\right)$ is the Digamma function, we have $$\sum_{k=1}^{n}\dbinom{n}{k}\frac{\left(-1\right)^{k-1}}{k}=\color{red}{\sum_{m=1}^{n}\frac{1}{m}}.$$
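Whichever route one takes, the identity is easy to check numerically. Below is a quick Python sketch (my addition, using exact rational arithmetic) comparing the alternating binomial sum with the harmonic number for small $n$:

```python
from math import comb
from fractions import Fraction

def alt_binom_sum(n):
    # sum_{k=1}^n (-1)^(k-1)/k * C(n, k), computed exactly
    return sum(Fraction((-1) ** (k - 1), k) * comb(n, k) for k in range(1, n + 1))

def harmonic(n):
    # H_n = 1 + 1/2 + ... + 1/n
    return sum(Fraction(1, m) for m in range(1, n + 1))

for n in range(1, 25):
    assert alt_binom_sum(n) == harmonic(n)
print("identity holds exactly for n = 1, ..., 24")
```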
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Quotient group of $D(2,3,7)$ under the Klein Quartic The Klein Quartic is a quotient space of the hyperbolic plane. Let $k$ be its quotient map. Given two isometries $f,g$ of the hyperbolic plane, we say that $f \cong g$ iff $k \circ f = k \circ g$. The Von Dyck group $D(2,3,7)$ is a group of isometries of the hyperbolic plane. It can be presented by $\langle r,m|r^7 = m^2 = (rm)^3 = 1 \rangle$. My question is, what is $D(2,3,7)/\cong$ (both the group, and the quotient map)? Note: The Klein Quartic can be tiled by $(2,3,7)$ triangles, so the group elements will correspond to those (half of those) triangles (and in particular will be a finite group).
The group is isomorphic to $GL(3,2)$. The quotient map $k$ is generated by $$k(r) = \begin{bmatrix}0&1&1\\0&0&1\\1&0&0\end{bmatrix}$$ $$k(m) = \begin{bmatrix}0&0&1\\0&1&0\\1&0&0\end{bmatrix}$$ These matrices were found by a computer program looking for matrices that satisfied the Von Dyck group relations and that generated $GL(3,2)$. There are many other options.
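One can verify the three relations directly with a short script (a sketch of the kind of check such a program would do), multiplying the matrices mod $2$:

```python
# Check that k(r) and k(m) satisfy r^7 = m^2 = (rm)^3 = 1 in GL(3,2),
# i.e. as 3x3 matrices over the field with two elements.
R = [[0, 1, 1], [0, 0, 1], [1, 0, 0]]
M = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def mul(a, b):
    # matrix product with entries reduced mod 2
    return [[sum(a[i][k] * b[k][j] for k in range(3)) % 2 for j in range(3)]
            for i in range(3)]

def power(a, n):
    p = I3
    for _ in range(n):
        p = mul(p, a)
    return p

assert power(R, 7) == I3
assert power(M, 2) == I3
assert power(mul(R, M), 3) == I3
print("all three Von Dyck relations hold")
```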
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that set $A$ a finite abelian group and $B$ multiplicative group of all homomorphisms from $A$ to units of $C$ are isomorphic Let $A$ be a finite abelian group and let $B := \{f: A \to \mathbb C^{×} | f$ is a group homomorphism $\}$. It can be easily checked that $B$ is an abelian group via $fg(x) = f(x)g(x)$. Prove that $A$ is isomorphic to $B$. Also, prove that if $f$ is not an identity element of $B$ then $\sum _{a \in A} f(a) = 0$. This problem is from a module theory class which previously dealt with group theory. I have no idea how this would relate to module theory. I attempted to look at a special case, $A = \mathbb{Z / 2Z}$. For notational simplicity denote the two elements by $1,0$. To find a corresponding isomorphism, we must send $1$ to some $f$. say $f(0) = a, f(1) = b$. Then $a = f(0) = f(2 * 1) = 2f(1) = 2b$, and similiarly $a = 4b$, so that $a = b = 0$, which is inpossible since the image of $f$ must lie outside of $0$. Am I doing something wrong here? Any input for solving the problem - not just critic of my attempt - would be greatly appreciated. The second part, which I tried to prove by assuming the first, didn't work out so well. How is module theory related to this?
The problem with your special case is that we want the multiplicative group of the units of $\mathbb{C}$, so we have that $f(2 \cdot 1) = f(1)^2$, not $f(2 \cdot 1) = 2f(1)$. Since $f$ is a homomorphism, it maps the identity element of $A$ to the identity element of $\mathbb{C}^\times$, and so $f(0) = 1$. Since $f(1)^2 = f(0) = 1$, we get that $f(1) = -1$ or $f(1) = 1$. In the general case, the classification of finite abelian groups tells us that $$ A \simeq \mathbb{Z} / n_1 \mathbb{Z} \times \mathbb{Z} / n_2 \mathbb{Z} \times \cdots \times \mathbb{Z} / n_k \mathbb{Z} $$ for some natural numbers $n_1, n_2, \dots, n_k$. The homomorphism $f : A \to \mathbb{C}^\times$ is then uniquely determined by the images of the elements $\eta_1 = (1, 0, 0, \dots, 0), \eta_2 = (0, 1, 0, \dots, 0), \dots, \eta_k = (0, 0, \dots, 0, 1)$. Since $n_i \eta_i = 0$ for each $i$, we have that $f(\eta_i)^{n_i} = f(0) = 1$, and so $f(\eta_i)$ is some $n_i^\text{th}$ root of unity. Let $\zeta_i$ be a primitive $n_i^\text{th}$ root of unity. To construct the isomorphism between $A$ and the group of homomorphisms from $A$ to $\mathbb{C}^\times$, we can map each element $(a_1, a_2, \dots, a_k)$ of $A$ to the function which maps $\eta_i$ to $\zeta_i^{a_i}$ for each $i$. We then just need to check that this does indeed define an isomorphism. As for the final problem, let $f$ be some non-identity element of $B$. Then $f(\eta_i) \neq 1$ for some $i$. Suppose without loss of generality that $f(\eta_1) \neq 1$. We then have that $$ \sum_{a \in A} f(a) = \sum_{a_1=0}^{n_1-1} \sum_{a_2=0}^{n_2-1} \cdots \sum_{a_k=0}^{n_k-1} f((a_1, a_2, \dots, a_k)) = \sum_{a_1=0}^{n_1-1} \sum_{a_2=0}^{n_2-1} \cdots \sum_{a_k=0}^{n_k-1} f((a_1, 0, \dots, 0)) f((0, a_2, \dots, a_k)) = \sum_{a_1=0}^{n_1-1} f((1, 0, \dots, 0))^{a_1} \sum_{a_2=0}^{n_2-1} \cdots \sum_{a_k=0}^{n_k-1} f((0, a_2, \dots, a_k)). 
$$ But $f((1, 0, \dots, 0))$ is an $n_1^\text{th}$ root of unity which is not equal to $1$, and for any $n_1^\text{th}$ root of unity $\omega \neq 1$, we have that $$ \sum_{a_1=0}^{n_1-1} \omega^{a_1} = 0, $$ and so we have that $$ \sum_{a \in A} f(a) = 0. $$
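For the cyclic building blocks, this final cancellation can also be seen numerically; here is a small Python sketch summing each character of $\mathbb{Z}/n\mathbb{Z}$ (with $n = 12$ as an arbitrary example of my choosing):

```python
import cmath

n = 12
# the characters of Z/nZ are f_k(a) = exp(2*pi*i*k*a/n) for k = 0, ..., n-1
for k in range(1, n):
    total = sum(cmath.exp(2j * cmath.pi * k * a / n) for a in range(n))
    assert abs(total) < 1e-9          # every non-trivial character sums to 0
trivial = sum(cmath.exp(2j * cmath.pi * 0 * a / n) for a in range(n))
assert abs(trivial - n) < 1e-9        # the trivial character sums to n instead
print("non-trivial characters of Z/%d sum to 0" % n)
```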
{ "language": "en", "url": "https://math.stackexchange.com/questions/2688992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\int_a^\infty f(x) \, dx$ converges $\Rightarrow$ $\lim_{x\to \infty}f(x)=0$ $\int_a^\infty f(x) \, dx$ converges $\Rightarrow$ $\lim_{x\to \infty}f(x)=0$. Give a proof or counterexample.(Assume $f(x)$ positive and continuous.) I can show that $\int_R^{R'}f(x)\,dx \approx 0$ for $R,R'$ large. $f(c)(R'-R) \approx 0$ for some $c \in (R,R')$ by mean value theorem. Then I don't know how to prove.
Construct a piecewise-linear function $f$ such that around every positive integer $n$ the function describes an isosceles triangle with height $1$ and base length $\frac{1}{2^n}$ (and $f = 0$ elsewhere). Clearly $f(x) \nrightarrow 0$ as $x \to \infty$, since $f(n) = 1$ for every positive integer $n$; however, the integral $\int_0^\infty f(x)\,dx$ is the sum of the areas of the triangles: $$\int_0^\infty f(x)\,dx = \sum_{n=1}^\infty \frac{1}{2}\cdot 1 \cdot \frac{1}{2^n} = \frac{1}{2} \sum_{n=1}^\infty \frac{1}{2^n} = \frac{1}{2}$$
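A numerical sketch of this construction (a triangle of half-width $\frac{1}{2^{n+1}}$ around each integer $n$) confirms both halves of the counterexample:

```python
def f(x):
    # triangle of height 1 and base 2^(-n) centered at each integer n >= 1
    n = round(x)
    if n < 1:
        return 0.0
    half_base = 2.0 ** (-n) / 2
    return max(0.0, 1 - abs(x - n) / half_base)

# f(n) = 1 at every positive integer, so f(x) does not tend to 0 ...
assert all(f(n) == 1.0 for n in range(1, 30))
assert f(1.4) == 0.0                      # ... and f vanishes between the spikes

# ... yet the total area is the convergent sum of the triangle areas
total_area = sum(0.5 * 1 * 2.0 ** (-n) for n in range(1, 60))
assert abs(total_area - 0.5) < 1e-12
print("total area:", total_area)
```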
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Derivative of eigenvectors of a symmetric matrix-valued function Given a real symmetric $3\times3$ matrix $\mathsf{A}_{ij}$ and its derivative (w.r.t. some parameter, let's call it time) $\dot{\mathsf{A}}_{ij}$, I want to measure/obtain the rotation (rate and direction) of the eigenvectors (the eigenvectors of a real symmetric matrix form an orthonormal matrix). How can this be done? Edit Since the eigenvectors of a real symmetric matrix are mutually orthogonal, the change of the eigenvectors can only be an overall rotation. An infinitesimal rotation is uniquely determined by the rate $\boldsymbol{\omega}$ such that $\dot{\boldsymbol{x}}=\boldsymbol{\omega}\times\boldsymbol{x}$ for any vector $\boldsymbol{x}$. My question then becomes how to obtain $\boldsymbol{\omega}$.
Suppose that a given (differentiable) matrix-valued function $\mathrm A : \mathbb R \to \mbox{Sym}_n(\mathbb R)$, where $\mbox{Sym}_n(\mathbb R)$ denotes the set of $n \times n$ real symmetric matrices, does have a time-varying spectral decomposition $$\mathrm A (t) = \mathrm V (t) \, \Lambda (t) \,\mathrm V^\top (t)$$ where the columns of orthogonal matrix $\mathrm V (t)$ and the diagonal entries of diagonal matrix $\Lambda (t)$ at a given $t$ are the (unit) eigenvectors and eigenvalues of $\mathrm A (t)$, respectively. Differentiating with respect to time, we obtain a nonlinear matrix differential equation in $\rm V$ and $\Lambda$ $$\dot{\mathrm A} (t) = \dot{\mathrm V} (t) \, \Lambda (t) \,\mathrm V^\top (t) + \mathrm V (t) \, \dot\Lambda (t) \,\mathrm V^\top (t) + \mathrm V (t) \, \Lambda (t) \,\dot{\mathrm V}^\top (t)$$ where $\dot{\mathrm A}$ serves as known input. From $\mathrm A (0)$, we obtain initial conditions $\mathrm V (0)$ and $\Lambda (0)$. We have: * *$\binom{n+1}{2}$ ordinary differential equations. *$n^2$ (algebraic) quadratic equations (to ensure that $\mathrm V$ stays orthogonal). *$n^2 + n = (n+1) n = 2\binom{n+1}{2}$ functions to determine. Unfortunately, I do not know how to solve this matrix ODE. In fact, I am not even sure that a time-varying spectral decomposition of a symmetric matrix-valued function is actually legal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Prove that $D_{\mathbf{v}}{f}(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot{} \mathbf{v}$ Given the directional derivative $$D_{\mathbf{v}}{f}(\mathbf{x}) = \lim_{h \rightarrow 0}{\dfrac{f(\mathbf{x} + h\mathbf{v}) - f(\mathbf{x})}{h}}$$ and that $f$ is differentiable at $\mathbf{x}$, how do I prove that $$D_{\mathbf{v}}{f}(\mathbf{x}) = \nabla f(\mathbf{x}) \cdot{} \mathbf{v}$$ where $\nabla f(\mathbf{x}) = $ grad $(f(\mathbf{x}))$? My attempt: $$(D_{\mathbf{v}}{f}(\mathbf{x}))_{x_1} = \lim_{h \rightarrow 0}{\dfrac{f(x_1 + hv_1, 0, 0, ...) - f(x_1, 0, 0, ...)}{h}} = \lim_{h \rightarrow 0}{\dfrac{f(x_1 + hv_1, 0, 0, ...) - f(x_1, 0, 0, ...)}{hv_1}} v_1$$ Then $(D_{\mathbf{v}}{f}(\mathbf{x}))_{x_1} = \lim_{h \rightarrow 0}{\dfrac{f(x_1 + hv_1) - f(x_1)}{h}} = \lim_{hv_1 \rightarrow 0}{\dfrac{f(x_1 + hv_1) - f(x_1)}{hv_1}} v_1 = \dfrac{\partial f}{\partial x_1} v_1$ So all I have to prove is: $$D_{\mathbf{v}}{f}(\mathbf{x}) = \sum_i (D_{\mathbf{v}}{f}(\mathbf{x}))_{x_i}$$
Let $g(h):=f(x+hv)$ where $x:=(x_1,...,x_n)$ and $v:=(v_1,...,v_n)$ and $h\in\mathbb{R}$. Then since $f$ is differentiable we have $g(h)$ is differentiable. Therefore $$g'(h)=\frac{d}{dh}f(x+hv)=\frac{d}{dh}f(x_1+hv_1,...,x_n+hv_n)=\frac{d}{dh}f(y_1,...,y_n)$$ where $y_k:=x_k+hv_k$. Applying the rule of total differentiation we have $$g'(h)=\frac{\partial f}{\partial y_1}\frac{dy_1}{dh}+...+\frac{\partial f}{\partial y_n}\frac{dy_n}{dh}=\frac{\partial f}{\partial y_1}v_1+...+\frac{\partial f}{\partial y_n}v_n=\langle \nabla_y f,v\rangle$$ As $h\to 0$ we get $y\to x$ and therefore $$g'(0)=\langle \nabla_xf,v\rangle$$
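The conclusion $D_{\mathbf v}f(\mathbf x) = \nabla f(\mathbf x)\cdot \mathbf v$ can be sanity-checked against a finite-difference approximation; here is a sketch for one sample function of my choosing (any smooth $f$ would do):

```python
import math

def f(x, y):
    return math.sin(x) * math.exp(y)

def grad_f(x, y):
    return (math.cos(x) * math.exp(y), math.sin(x) * math.exp(y))

def directional_fd(x, y, v, h=1e-6):
    # central-difference approximation of D_v f at (x, y)
    return (f(x + h * v[0], y + h * v[1]) - f(x - h * v[0], y - h * v[1])) / (2 * h)

x0, y0, v = 0.7, -0.3, (2.0, 1.5)
gx, gy = grad_f(x0, y0)
exact = gx * v[0] + gy * v[1]            # gradient dotted with v
approx = directional_fd(x0, y0, v)
assert abs(exact - approx) < 1e-6
print(exact, approx)
```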
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof verification : $X$ and $Y=g(X)$ are independent random variables $\implies$ $Y$ is degenerate THE PROBLEM : Source : Alan F. Karr (Probability), p.$96$, problem $3.10.(b)$ MY SOLUTION : Suppose $B \subset \mathbb{R}$ such that $P\left\{Y \in B\right\}=1$. We want to show that $B$ is singleton. First of all $B \neq \phi$ since $g(0) \in B$. Assume, for sake of contradiction, $B$ is not singleton. Let $A \subset B$ such that $P\left\{Y \in A\right\}>0$ and $P\left\{Y \in B \setminus A\right\}>0$. Such a set $A$ can always be constructed because of the assumption. Note that $$P\left\{X \in g^{-1}(A), Y \in A^c\right\} = 0 \neq P\left\{X \in g^{-1}(A)\right\} \cdot P\left\{Y \in B \setminus A\right\}$$ Both the terms in the right side are strictly positive by assumption $(P\left\{X \in g^{-1}(A)\right\}=P\left\{Y \in A\right\}>0)$. Hence, $X$ and $Y$ are not independent. Thus, we have arrived at a contradiction. Please verify whether or not my solution is technically okay and if it can be improved in any regards. Thanks in advance.
I may be using facts above what you currently know* but here goes: Y is a distraction. We have that $X$ & $g(X)$ are independent. By definition, $$\sigma(X) \ \text{&} \ \sigma(g(X)) \ \text{are independent.}$$ $$\to \sigma(X) \ \text{&} \ \sigma(X) \ \text{are independent} \tag{Why? Hint: subset}$$ This means that $X$ is independent of itself and thus is constant or at least almost surely constant! Why? Let $B \in \mathscr B$. $$P(X \in B, X \in B) = P(X \in B)P(X \in B)$$ $$P(X \in B, X \in B) = P(X \in B)$$ Equating the RHS's, we have $P(X \in B) = 0,1$ I think this means $X$ is constant, but this certainly means that $X$ is almost surely constant i.e. $\exists d \ \in \ \mathbb R$, s.t. $$P(X=d)=1$$ $$\to P(g(X)=g(d))=1 \tag{Why? Hint: subset}$$ $$\to P(Y=g(d))=1$$ Now choose $c=g(d)$. Then $$P(Y=c)=1 \ \text{QED}$$ *Do you know Kolmogorov 0-1 Law?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Construct example - $A$ and $B$ have strictly positive eigenvalues, but $A+B$ and $AB$ have strictly negative eigenvalues. As a follow-up to this question I asked, I wondered what would happen if I imposed the weaker condition of having positive eigenvalues, rather than being positive definite. How do I construct an example of two matrices $A$ and $B$ such that: 1) $A$ and $B$ have strictly positive eigenvalues. 2) $A + B$ has strictly negative eigenvalues (is this even possible?). 3) $AB$ has strictly negative eigenvalues. Generally, I'm unsure how to begin going about constructing an example of a matrix that satisfies these properties.
For an example where $A$ and $B$ have all positive eigenvalues while $AB$ has all negative eigenvalues, consider $$ A = \pmatrix{10 & 8\cr -9 & -7\cr},\ B = \pmatrix{1 & 0\cr 0 & 2\cr} $$ $A$ and $B$ both have eigenvalues $1$ and $2$, while $AB$ has a double eigenvalue $-2$.
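This is easy to confirm numerically (a small NumPy sketch):

```python
import numpy as np

A = np.array([[10.0, 8.0], [-9.0, -7.0]])
B = np.diag([1.0, 2.0])

# A and B each have the (positive) eigenvalues 1 and 2 ...
assert np.allclose(np.sort_complex(np.linalg.eigvals(A)), [1.0, 2.0])
assert np.allclose(np.sort_complex(np.linalg.eigvals(B)), [1.0, 2.0])

# ... while AB has the double eigenvalue -2; the eigenvalue is defective
# (a Jordan block), so allow a slightly looser numerical tolerance
assert np.allclose(np.linalg.eigvals(A @ B), -2.0, atol=1e-5)
print(np.linalg.eigvals(A @ B))
```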
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
$|\mathbb{Z}|=|\mathbb{N}|$ Hello all, if you could give some critique on my proof I would be grateful. Show that $|\mathbb{Z}|=|\mathbb{N}|$ for, $$\displaystyle f(n) = \begin{cases} 2|n|+1, \text{if} & n\leq0 \\ 2n, \text{if} & n>0\end{cases}$$ By the definition of cardinality (def. 2.10b) we note that if a function is bijective, cardinality of the sets equal to one another. So, we want to show that $f(n)$ is bijective. To prove that $f:\mathbb{Z}\rightarrow \mathbb{N}$ injective, we must show $\forall n_1,n_2\in \mathbb{Z}$, if $f(n_1)=f(n_2)$ then $n_1=n_2$ for both conditions of $f(n)$, so, $f(n)=2|n|+1 $, if $ n\leq 0$ $2|n_1|+1 = 2|n_2|+1$ $2|n_1|=2|n_2|$ $|n_1|=|n_2|$ and $f(n)=2n$, if $n>0$ $2n_1=2n_2$ $n_1=n_2$ Thus we see that $f(n)$ is clearly injective. To prove $f:\mathbb{Z}\rightarrow \mathbb{N}$ is surjective, meaning $\exists n\in \mathbb{Z},\forall r\in \mathbb{N}$ s.t. $f(n)=r$, again for both conditions, $f(n)=2|n|+1 $, if $ n\leq 0$ $r=2|n|+1$ $r-1=2|n|$ $|n|=\frac{r-1}{2}$ and $f(n)=2n$, if $n>0$ $r=2n$ $n=\frac{r}{2}$ Thus we see that for both conditions $f(n)=r$, and function $f:\mathbb{Z}\rightarrow \mathbb{N}$ is surjective. Hence, we prove $f:\mathbb{Z}\rightarrow \mathbb{N}$ is bijective as it is injective and surjective. Therefore, by definition of cardinality, we say that cardinality of $\mathbb{N}$ is equal to $\mathbb{Z}$. $\blacksquare$
Using common sense: * *send even naturals to positives, *send odd naturals to negatives. This is clearly reversible and establishes a bijection. Some care is required in the vicinity of $0$, but if necessary you can adjust by translation.
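One concrete way to realize this (my choice: starting the naturals at $0$, with the adjustment near $0$ mentioned above) is the zig-zag map $f(n) = n/2$ for even $n$ and $f(n) = -(n+1)/2$ for odd $n$; a short sketch checks it hits every integer exactly once:

```python
def f(n):
    # even naturals -> non-negative integers, odd naturals -> negative integers
    return n // 2 if n % 2 == 0 else -(n + 1) // 2

image = [f(n) for n in range(21)]              # f(0), ..., f(20)
assert sorted(image) == list(range(-10, 11))   # every integer in [-10, 10], once
print(image)
```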
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can this be done with Stokes' Theorem? (Exercise) Problem description: Let $\gamma$ denote the curve of intersection of the two surfaces $z=x^2+y^2$ and $z=1+2x$. Calculate the line integral $W=\oint_\gamma \boldsymbol{F}\cdot d\boldsymbol{r}$, where $\boldsymbol{F}=(0,x,-y)$ and $d\boldsymbol{r}=(dx,dy,dz)$. Now, by easy substitution of the first equation into the second we get the relationship between $x$ and $y$ as $(x-1)^2 + y^2=2$. Parametrization gives $x=1+\sqrt{2}\cos{t}\quad$ and $\quad y=\sqrt{2}\sin{t}$. Lastly the complete parametrization of the curve $\gamma$ is $\boldsymbol{r}=(1+\sqrt{2}\cos{t},\sqrt{2}\sin{t},3+2\sqrt{2}\cos{t})$, where I have inserted the new expression for $x$ into the second equation. Well, from here we can calculate $W$ over $0\leq t \leq 2\pi$ and get the answer of $6\pi$ relatively quickly. But I am wondering if Stokes' Theorem could be used here? My textbook implies it could be done. The result is then $W=\frac{3}{\sqrt{5}}\iint_S dS$. How do I know what the area of S is, that I am guessing is an ellipse? It should evidently become $2\sqrt{5}\pi$, so maybe the axes are $2$ and $\sqrt{5}$, respectively? But how do I reach this result otherwise?
If you have a closed path integral, you can use Stokes' theorem. $\oint_\gamma F\cdot \ d\gamma = \iint \nabla \times F\cdot dS$ where $S$ is the elliptical disc on the plane bounded by the paraboloid. $\nabla \times F = (-1,0,1)$ $dS = (-\frac {\partial z}{\partial x},-\frac {\partial z}{\partial y}, 1)\ dA= (-2,0,1)\ dA$ $A$ is the projection of $S$ onto the $xy$ plane, which is the disc $(x-1)^2+y^2 \le 2$ of area $2\pi$. $\iint 3\ dA = 3 A$ $6\pi$
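The direct parametrized line integral agrees, as a quick Riemann-sum sketch confirms (the integrand is a trigonometric polynomial, so even a modest uniform grid is essentially exact):

```python
import math

def line_integral(n=1000):
    # Riemann sum of ∮ F · dr with F = (0, x, -y) along
    # r(t) = (1 + √2 cos t, √2 sin t, 3 + 2√2 cos t), 0 <= t < 2π
    s2 = math.sqrt(2)
    dt = 2 * math.pi / n
    total = 0.0
    for i in range(n):
        t = i * dt
        x = 1 + s2 * math.cos(t)
        y = s2 * math.sin(t)
        dy = s2 * math.cos(t) * dt        # y'(t) dt
        dz = -2 * s2 * math.sin(t) * dt   # z'(t) dt
        total += x * dy + (-y) * dz       # F·dr = 0·dx + x·dy - y·dz
    return total

W = line_integral()
assert abs(W - 6 * math.pi) < 1e-9
print(W, 6 * math.pi)
```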
{ "language": "en", "url": "https://math.stackexchange.com/questions/2689900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Reference request: Hard measure theory / functional analysis problems I want to revise basic measure theory and functional analysis and I was wondering if there's a good source of challenging problems? Ideally I'm hoping these problems will help me go through the material again, as I know that this is the best way to learn. Preferably, there are worked out solutions to refer to as well, but at the very least hints? One possibility is to say "just look at the problems at the end of the chapters in Rudin" but I'm wondering if there are any other good sources you know of.
You might try Biler and Witkowski, Problems in Mathematical Analysis
{ "language": "en", "url": "https://math.stackexchange.com/questions/2690023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of divisors of the number $2079000$ which are even and divisible by $15$ Find the number of divisors of $2079000$ which are even and divisible by $15$? My Attempt: Since they are divisible by $15$ and are even, $2$ and $5$ have to included from the numbers prime factors. $2079000 = 2^3 \cdot 3^3 \cdot 5^3 \cdot 7 \cdot 11$ Therefore, the number of divisors should be $2 \cdot 2 \cdot (3+1) \cdot (1+1) \cdot (1+1)$ But however this answer is wrong. Any help would be appreciated.
Your factorization of $2079000$ is incorrect. \begin{align*} 2079000 & = 2079 \cdot 1000\\ & = 2079 \cdot 10^3\\ & = 2079 \cdot 2^3 \cdot 5^3\\ & = 3 \cdot 693 \cdot 2^3 \cdot 5^3\\ & = 3 \cdot 3 \cdot 231 \cdot 2^3 \cdot 5^3\\ & = 3 \cdot 3 \cdot 3 \cdot 77 \cdot 2^3 \cdot 5^3\\ & = 2^3 \cdot 3^3 \cdot 5^3 \cdot 7 \cdot 11 \end{align*} If a divisor of $2079000$ is a multiple of $2$ and $15 = 3 \cdot 5$, it must be a multiple of $2 \cdot 15 = 30$ since $2$ and $15$ are relatively prime. If a divisor of $2079000$ is a multiple of $30 = 2 \cdot 3 \cdot 5$, then $\frac{1}{30}$ of it must be a factor of $$\frac{2079000}{30} = \frac{2^3 \cdot 3^3 \cdot 5^3 \cdot 7 \cdot 11}{2 \cdot 3 \cdot 5} = 2^2 \cdot 3^2 \cdot 5^2 \cdot 7 \cdot 11$$ Hence, the number of such divisors is $$(2 + 1)(2 + 1)(2 + 1)(1 + 1)(1 + 1) = 3 \cdot 3 \cdot 3 \cdot 2 \cdot 2 = 108$$
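A brute-force check agrees with the count (even and divisible by $15$ is the same as divisible by $30$):

```python
N = 2079000

# a divisor is even and divisible by 15 iff it is divisible by 30
count = sum(1 for d in range(30, N + 1, 30) if N % d == 0)
assert count == 108
print(count)
```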
{ "language": "en", "url": "https://math.stackexchange.com/questions/2690113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 1 }
Connected Set in $ {R}^{2} $ Is $A=\{(x,y): x^2+y^2=1\}$ is connected in $ℝ^2$? From its graph, I would conclude that it's not path connected.
$$ A = \left\{(x,y) : x^2 + y^2 = 1 \right\} = \left\{(x,y) : x = \cos \theta, y = \sin \theta, 0 \leq \theta < 2\pi\right\} $$ The interval $\Theta = \left[0,2\pi \right)$ is connected; since $A$ is the image of a connected set under a continuous map, it is also connected (as proved here)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fraction and simplification solve: $\frac{1}{x(x-1)} + \frac{1}{x} = \frac{1}{x-1}$ What are the possible answers ? (A) -1 (B) Infinitely Many Solutions (C) No solution (D) 0 The answer from where i've referred this is (B), but when i simplify it I get (D) My solution: $$\frac{1}{x(x-1)} + \frac{1}{x} = \frac{1}{x-1}$$ $$ \frac{x +x(x-1)}{x(x-1)\cdot x} = \frac{1}{x-1} \text{ (took l.c.m on l.h.s)}$$ $$ \frac{x + (x^2 -x)}{(x^2 - x)\cdot x}= \frac{1}{x-1}$$ $$\frac{x^2}{x^3 - x^2} = \frac{1}{x-1}$$ $$ x^2(x-1) = x^3 - x^2$$ $$ x^3 - x^2 = x^3 - x^2$$ Have I simplified it correctly?
You can make it much simpler. First you have to set the domain of validity: you must have $x\ne 0,1$. Next, on this domain, remove the denominators by multiplying both sides by the l.c.m. of the denominators, and simplify; you get: $$1+(x-1)=x\iff x=x.$$ Hence any number $x\ne 0,1$ is a solution.
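A quick numerical spot-check (with a few arbitrary values satisfying $x \ne 0, 1$):

```python
for x in [-3.0, 0.5, 2.0, 7.25]:
    lhs = 1 / (x * (x - 1)) + 1 / x
    rhs = 1 / (x - 1)
    assert abs(lhs - rhs) < 1e-12     # both sides agree wherever defined
print("equation holds for every tested x != 0, 1")
```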
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Generating multivariate Gaussian samples--Why does it work? I came across the method for generating multivariate normal samples on wikipedia: https://en.wikipedia.org/wiki/Multivariate_normal_distribution#Drawing_values_from_the_distribution A widely used method for drawing (sampling) a random vector $X$ from the $N$-dimensional multivariate normal distribution with mean vector $μ$ and covariance matrix $Σ$ works as follows:[28] * *Find any real matrix $A$ such that $AA^T = Σ$. When $Σ$ is positive-definite, the Cholesky decomposition is typically used, and the extended form of this decomposition can always be used (as the covariance matrix may be only positive semi-definite) in both cases a suitable matrix $A$ is obtained. An alternative is to use the matrix $A = UΛ^½$ obtained from a spectral decomposition $Σ = UΛU^T$ of $Σ$. The former approach is more computationally straightforward but the matrices $A$ change for different orderings of the elements of the random vector, while the latter approach gives matrices that are related by simple re-orderings. In theory both approaches give equally good ways of determining a suitable matrix $A$, but there are differences in computation time. *Let $Z = (z_1, …, z_N)^T$ be a vector whose components are $N$ independent standard normal variates (which can be generated, for example, by using the Box–Muller transform). *Let $X$ be $μ + AZ$. This has the desired distribution due to the affine transformation property. Why does the cholesky decomposition matrix '$A$' multiplied by the vector of samples chosen from the standard normal distribution '$Z$' plus '$μ$' give us our result (ie $X = μ + AZ$)? Why does this work? What is the proof?
Simply take the vector you have generated, $\boldsymbol{x} = \boldsymbol\mu + \boldsymbol A\boldsymbol z$, and compute its covariance: $$\mathbb E[(\boldsymbol{x}-\boldsymbol{\mu})(\boldsymbol{x}-\boldsymbol{\mu})^T] = \mathbb E[\boldsymbol A\boldsymbol z\boldsymbol z^T\boldsymbol A^T] = \boldsymbol A\,\mathbb E[\boldsymbol z\boldsymbol z^T]\,\boldsymbol A^T = \boldsymbol A \boldsymbol I \boldsymbol A^T = \boldsymbol A \boldsymbol A^T = \boldsymbol\Sigma,$$ as desired. (The mean is immediate: $\mathbb E[\boldsymbol x] = \boldsymbol\mu + \boldsymbol A\,\mathbb E[\boldsymbol z] = \boldsymbol\mu$.)
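The computation can be mirrored empirically; here is a NumPy sketch (with an arbitrary $\boldsymbol\mu$ and $\boldsymbol\Sigma$ of my choosing) using the Cholesky factor:

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0, 0.5])
Sigma = np.array([[2.0, 0.6, 0.3],
                  [0.6, 1.0, 0.2],
                  [0.3, 0.2, 0.5]])

A = np.linalg.cholesky(Sigma)            # A with A @ A.T == Sigma
assert np.allclose(A @ A.T, Sigma)

Z = rng.standard_normal((3, 200_000))    # columns are iid N(0, I) vectors
X = mu[:, None] + A @ Z                  # X = mu + A Z

# empirical mean and covariance approximate mu and Sigma
assert np.allclose(X.mean(axis=1), mu, atol=0.02)
assert np.allclose(np.cov(X), Sigma, atol=0.05)
print(np.cov(X).round(2))
```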
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linear combination of non-identically distributed, independent exponential random variables I am working on the following homework assignment: Under the assumptions of the Normal Simple Linear Regression model, $Y_i|X_i \sim N(\beta_0 + \beta_1 X_i, \sigma^2)$. Consider the model where $Y_i|X_i \sim Exp(\frac{1}{\beta_1 X_i})$, that is, where $E[Y_i|X_i] = \beta_1 X_i$. Let $(X_i, Y_i), \,i=1,...,n$ be a random sample. * *Find the Maximum Likelihood Estimator, $\hat{\beta_1}$, for $\beta_1$ *What is the distribution of $\hat{\beta_1}$? I have managed to solve the first part of the assignment. However, I am struggling with the second part. For the first part, my likelihood function was given by $$\cal{L}(\beta_1) = \prod_{i=1}^n f_{Y_i|X_i}(y_i) = \prod \frac{1}{\beta_1 x_i}e^{-\frac{y_i}{\beta_1 x_i}} = \left(\frac{1}{\beta_1}\right)^n \left(\prod_{i=1}^n \frac{1}{x_i}\right) \left(e^{-\sum_{i=1}^n \frac{y_i}{\beta_1 x_i}} \right)$$ for $y_i > 0$. I applied the natural logarithm, differentiated with respect to $\beta_1$, set the derivative equal to zero and got $$\hat{\beta_1} = \frac{1}{n} \sum_{i=1}^n \frac{y_i}{x_i} = \sum_{i=1}^n k_iy_i \qquad k_i=\frac{1}{nx_i}$$ Now, for the second part, we have $$ \hat{\beta_1} = \frac{1}{n} \sum_{i=1}^n \frac{Y_i}{x_i} = \sum_{i=1}^n k_iY_i \qquad k_i=\frac{1}{nx_i} $$ where $x_i$ is an observation, $Y_i \sim Exp(\frac{1}{\beta_1 x_i})$ and $\beta_1 x_i > 0$. I have searched online and I found that the linear combination of exponential variables is a hyper-exponential random variable when each of the coefficients is a probability. I'd love for this to be my case, however, I don't have enough evidence to justify that each $k_i \in [0,1]$. In fact, I can't even affirm that $k_i > 0$ for $i = 1,\dots,n$, so I think it's safe to say that it's not a hyper-exponential random variable.
Using the moment generating function: $$M_{\hat{\beta_1}}(t) = E[e^{\hat{\beta_1}t}] = E\left[\prod_{i=1}^ne^{k_i Y_i t}\right] = \prod_{i=1}^n E[e^{k_i Y_i t}] = \prod_{i=1}^n M_{Y_i}(k_i t) = \prod_{i=1}^n (1-k_i \beta_1 x_i t)^{-1}$$ Finally, by substituting $k_i$, I get $$M_{\hat{\beta_1}}(t) = \left(1 - \dfrac{\beta_1}{n}t \right)^{-n}$$ So $\hat{\beta_1} \sim Gamma(n, \frac{\beta_1}{n})$.
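A Monte Carlo sketch (with an arbitrary $\beta_1$ and positive covariates of my choosing) matches the $Gamma(n, \frac{\beta_1}{n})$ conclusion, whose mean is $\beta_1$ and whose variance is $\beta_1^2/n$:

```python
import numpy as np

rng = np.random.default_rng(1)
beta1 = 2.0
x = rng.uniform(0.5, 3.0, size=20)       # fixed positive covariates, n = 20
n = len(x)

reps = 200_000
# Y_i | x_i ~ Exponential with mean beta1 * x_i, independently
Y = rng.exponential(scale=beta1 * x, size=(reps, n))
beta1_hat = (Y / x).mean(axis=1)         # the MLE, one value per replication

# Gamma(shape=n, scale=beta1/n): mean beta1, variance beta1^2 / n
assert abs(beta1_hat.mean() - beta1) < 0.01
assert abs(beta1_hat.var() - beta1 ** 2 / n) < 0.01
print(beta1_hat.mean(), beta1_hat.var())
```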
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that $x^2 - 37y^2 =2$ does not have integer solutions We need to prove that $x^2 - 37y^2 =2$ does not have integer solutions. I have two angles I thought about approaching it from: * *Since 37 is prime, I can show that for $x$ not divisible by $37$, we have $x^{36} ≡ 1mod(37)$ but I don't see how that's useful *I could manipulate the equation and make it to: $x^2 - 4 = 37y^2 - 2$ $\implies (x-2)(x+2) = 37y^2 - 2$ Then if the RHS is even, then $y^2$ is even $\implies$ $y^2$ ends with $0, 4,$ or $6$ $\implies$ $37y^2$ ends with $0, 8,$ or $2$ $\implies 37y^2 -2$ ends with $0, 6,$ or $8$ But then I reach a dead end here too Any suggestions or ideas?
$x^2 - 37y^2 = 2$ is even. So as $odd \pm even = odd$, $even + even = even$, $odd + odd = even$, we can see that $x^2$ and $37y^2$ are either both even or both odd, and we could pursue that and get a contradiction. But now would be a nice time to point out that for every integer $m$ we have $m^2 \not \equiv 2 \mod 4$ and $m^2 \not \equiv 3 \mod 4$, and that either $m^2 \equiv 0 \mod 4$ or $m^2 \equiv 1 \mod 4$. Proof: Let $m = 2k + i$ where $i= 0,1$. Then $m^2 \equiv 4k^2 + 4ki +i^2 \equiv i^2 \mod 4$. And $i^2$ is either $0$ or $1$. So $x^2 \equiv \{0,1\} \mod 4$ and $37y^2 \equiv y^2 \equiv \{0,1\} \mod 4$. So $x^2 - 37y^2 \equiv \{0,1\} - \{0,1\} \equiv \{0-0,0-1,1-0,1-1\} \equiv \{0,3,1,0\} \mod 4$. And $x^2 - 37y^2 \equiv 2 \mod 4$ is the one possibility that can never happen.
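The mod-$4$ argument amounts to a finite check, which a two-line enumeration confirms:

```python
# x^2 mod 4 and 37*y^2 mod 4 depend only on x mod 4 and y mod 4,
# so checking all 16 residue pairs covers every integer solution
residues = {(x * x - 37 * y * y) % 4 for x in range(4) for y in range(4)}
assert residues == {0, 1, 3}     # the residue 2 never occurs
print(residues)
```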
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Is the function $\sin^{x}(x)$ discontinuous on the entire $\mathbb{R}$? I have come across the function $\sin^{x}(x)$ (or, in a different notation, $\sin(x)^{x}$) while practicing calculating its first derivative. I then plot it in Desmos since I don't know how to graph this by hand yet. It seems that the graphing device suggests this function is discontinuous on the interval $[\pi ,2\pi ]$. The graph is also peculiar on the interval $[0 ,\pi ]$, while replicating itself on the intervals $(2\pi ,3\pi )$, $(3\pi ,4\pi )$, $(5\pi ,6\pi )$, etc. The behaviour of this function on the entire interval $(-\infty ,0)$ is also interesting. It doesn't look anywhere similar to $[0,+∞)$. The first derivative of this function is $\sin^{x}(x)\left(\ln(\sin(x))+\frac{x\cos(x)}{\sin(x)}\right)$. As with most other complex transcendental functions, this is not a simple derivative to work with. Could you show me some strategies to explore and graph this function manually? I would like to particularly find the red circle point found on this graph: I have also tried to plot this on Wolfram and its shape is similar. Is this an accurate shape at all or is the graphing device erroneous somehow?
The canonical way to define a power $a^b$ is to do $$ a^b:=e^{b\log a}. $$ This works great, but the problem is that it doesn't make sense when $a<0$. And that's the state of things. There is no natural way to define arbitrary powers of negative numbers (think $(-1)^{1/2}$ for an easy example). So your function is not defined whenever $\sin x<0$, which is all the intervals $((2k-1)\pi,2k\pi)$. That's why your graph has so many gaps. On the left axis, you have negative powers of $\sin x$, and that's why you get the asymptotic behaviour towards $x=-k\pi$, $k\in\mathbb N$. I don't think you can expect to find the roots of the derivative analytically. At best, you could try Newton's method.
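Python's math/cmath split illustrates the point: for $x$ in one of the gaps (e.g. $x = 4.5$, where $\sin x < 0$) there is no real value of $\sin(x)^x$, while the principal complex value $e^{x\log\sin x}$ exists but is not real:

```python
import math
import cmath

x = 4.5
s = math.sin(x)               # about -0.978, so x lies in a gap interval
assert s < 0

# real exponentiation rejects a negative base with a fractional exponent
try:
    math.pow(s, x)
    raised = False
except ValueError:
    raised = True
assert raised

# the principal complex value e^{x log s} exists, but has nonzero imaginary part
w = cmath.exp(x * cmath.log(s))
assert abs(w.imag) > 1e-9
print(w)
```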
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Binomial limit as $n$ approaches infinity $\displaystyle \lim_{n\rightarrow\infty}\binom{n}{x}\left(\frac{m}{n}\right)^x\left(1-\frac{m}{n}\right)^{n-x}$ Solution I tried: $\displaystyle \lim_{n\rightarrow\infty}\left(\frac{m}{n}\right)^x\left(1-\frac{m}{n}\right)^{n-x}=\lim_{n\rightarrow\infty}\left(\frac{m}{n}+1-\frac{m}{n}\right)^n=1$ I have edited my post. This is wrong; how do I find the right answer? Help me.
Note that we have $$\binom{n}{x}\left(\frac{m}{n}\right)^x\left(1-\frac mn\right)^{n-x}=\frac{m^x}{x!}\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{x-1}{n}\right)\left(1-\frac mn\right)^{n-x}$$ Therefore, for fixed $x$, we have $$\lim_{n\to \infty}\binom{n}{x}\left(\frac{m}{n}\right)^x\left(1-\frac mn\right)^{n-x}=\frac{m^x}{x!}e^{-m}$$
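Numerically, the terms converge to the Poisson probability $\frac{m^x}{x!}e^{-m}$, as a short sketch shows (with $m = 3$, $x = 4$ as an arbitrary example of my choosing):

```python
from math import comb, exp, factorial

def binom_term(n, m, x):
    # C(n, x) * (m/n)^x * (1 - m/n)^(n - x)
    p = m / n
    return comb(n, x) * p ** x * (1 - p) ** (n - x)

def poisson_term(m, x):
    return m ** x * exp(-m) / factorial(x)

m, x = 3.0, 4
for n in (10 ** 2, 10 ** 4, 10 ** 6):
    print(n, binom_term(n, m, x))
print("limit:", poisson_term(m, x))
assert abs(binom_term(10 ** 6, m, x) - poisson_term(m, x)) < 1e-4
```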
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
The norm or the singular values of the sum of identity matrix and a rank-$1$ matrix Let $A$ be an $N \times N$ rank-$1$ matrix. I am interested in finding the norm or the maximum singular value of $(A-cI)$ where $I$ is $N \times N$ identity matrix and $c>0$ is a scalar constant.
If $N=1$ then $\|A-cI\| = |A-c|$. Assume that $N>1$. Note that since $\ker A$ is non-trivial, we have $(A-cI)x = -cx$ for some non-zero $x$, and so $\|A-cI\| \ge |c|$. If $A$ is rank one it can be written as $A=u v^T$ for two vectors $u,v$. Without loss of generality we can take $\|u\| = 1$. If $u,v$ are collinear, then $A=k u u^T$, for some $k$, and the eigenvalues of the symmetric $A$ are $k,0$, hence $\|A-cI\| = \max(|c|,|k-c|)$. Assume that $u,v$ are not collinear (equivalently, they are linearly independent). Now assume that $N=2$. The $N>2$ case will be dealt with subsequently. Let $B=(uv^T -c I)^T (uv^T-cI) = (vu^T - cI)(uv^T - cI) = v v^T +c^2I -c(u v^T + v u^T)$. We want to compute $\sqrt{\lambda_\max(B)}$. Note that $\lambda_\max(B) = c^2+\lambda_\max(C)$, where $C=v v^T -c(u v^T + v u^T)$. In the basis $u,v$, the matrix $C$ has the representation $\begin{bmatrix} -c u^Tv & -c \|v\|^2 \\ u^T v - c & \|v\|^2-c u^T v \end{bmatrix} = \begin{bmatrix} 0 & -c \|v\|^2 \\ u^T v - c & \|v\|^2 \end{bmatrix} - c u^TvI$. The eigenvalues of the last matrix are ${1 \over 2} (\|v\|^2 \pm \sqrt{\|v\|^4+4 \|v\|^2c(c-u^Tv)})$, and hence the eigenvalues of $B$ (which are non-negative) are ${1 \over 2} (\|v\|^2 \pm \sqrt{\|v\|^4+4 \|v\|^2c(c-u^Tv)}) +c (c-u^Tv)$. Hence $\|A-cI\| = \sqrt{{1 \over 2} (\|v\|^2 + \sqrt{\|v\|^4+4 \|v\|^2c(c-u^Tv)}) + c(c-u^Tv)}$. If $N>2$, then $B$ has additional eigenvalues at $c^2$, hence the formula remains the same, since we know that the norm of $A-cI$ restricted to the subspace $\operatorname{sp}\{u,v\}$ is no less than $|c|$.
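The closed-form expression can be tested against a direct singular-value computation; in the hand-checkable example $u = e_1$, $v = e_2$, $c = 1$ the norm comes out to the golden ratio $\frac{1+\sqrt 5}{2}$, and a random instance agrees as well (a NumPy sketch, with values of my choosing):

```python
import numpy as np

def norm_formula(u, v, c):
    # sqrt( (||v||^2 + sqrt(||v||^4 + 4||v||^2 c(c - u.v)))/2 + c(c - u.v) ),
    # for ||u|| = 1 with u, v linearly independent
    nv2 = v @ v
    uv = u @ v
    return np.sqrt((nv2 + np.sqrt(nv2 ** 2 + 4 * nv2 * c * (c - uv))) / 2
                   + c * (c - uv))

# hand-checked example: u = e1, v = e2, c = 1 gives the golden ratio
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
direct = np.linalg.norm(np.outer(u, v) - np.eye(2), 2)
assert np.isclose(direct, (1 + np.sqrt(5)) / 2)
assert np.isclose(norm_formula(u, v, 1.0), direct)

# a random instance in higher dimension
rng = np.random.default_rng(2)
u = rng.normal(size=4)
u /= np.linalg.norm(u)
v = rng.normal(size=4)
c = 1.7
direct = np.linalg.norm(np.outer(u, v) - c * np.eye(4), 2)
assert np.isclose(direct, norm_formula(u, v, c))
print(direct)
```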
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I find the truth value in this logic problem: $(( p \lor s) \land \neg q) \rightarrow ( r \rightarrow s)$? The problem is as follows: $$[\left ( p \vee s\right) \wedge \sim q\,] \rightarrow \left ( r \rightarrow s \right )$$ From the preceding statement find the truth value of $\textrm{p, q, r, s}$ The alternatives in my book are the following: * *TFTF *TTFT *TTTF *TFFT *TFFF However I'm confused exactly how do I get to any of those answers. I tried to build up a truth table to get the answer, since the number of combinations for the four different variables would mean $2^{4}=16$. The sketch of the table what I build is shown below: The process was tedious and I'm not sure if the result it is correct but the thing is I don't know if this is what it is being asked. Is there any way to reduce this table to any of the alternatives given? I have forgotten exactly if there is a way to solve this problem without resorting with this approach or any shortcut? Can somebody help me to find the right answer or to guide me what to do?.
The problem is as follows: $$[\left ( p \vee s\right) \wedge \sim q\,] \rightarrow \left ( r \rightarrow s \right )$$ From the preceding statement find the truth value of $\textrm{p, q, r, s}$ Your truth table is incorrect. Try this truth table generator: http://web.stanford.edu/class/cs103/tools/truth-table-tool/ Hint: From its truth table, your statement is true for all combinations of truth values for p, q, r, s except for p=T, q=F, r=T and s=F.
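The same conclusion can be reached by brute force; here is a short sketch (mine, not from the post) that enumerates all $2^4$ assignments and collects the ones making the statement false:

```python
from itertools import product

def implies(x, y):
    # X -> Y is (not X) or Y
    return (not x) or y

def statement(p, q, r, s):
    return implies((p or s) and not q, implies(r, s))

falsifying = [vals for vals in product([True, False], repeat=4)
              if not statement(*vals)]
# exactly one assignment falsifies the statement: p=T, q=F, r=T, s=F
```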
{ "language": "en", "url": "https://math.stackexchange.com/questions/2691983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Dimension of $V(g_1(\mathbf{x})- g_1(\mathbf{y}), ...,g_s(\mathbf{x})- g_s(\mathbf{y}) )$ compared to $V(g_1(\mathbf{x}), ...,g_s(\mathbf{x}) )$? Let $g_1, ..., g_s$ be non-constant homogeneous polynomials in $n$ variables with coefficients in $\mathbb{C}$. Let $\mathbf{x}$ and $\mathbf{y}$ be two sets of $n$ variables. Let $V = V(g_1(\mathbf{x})- g_1(\mathbf{y}), ...,g_s(\mathbf{x})- g_s(\mathbf{y}) ) \subseteq \mathbb{A}_{\mathbb{C}}^{2n}$ and $W = V(g_1(\mathbf{x}), ...,g_s(\mathbf{x}) ) \subseteq \mathbb{A}_{\mathbb{C}}^{2n}$. Is it always true that $\dim V \leq \dim W$? or maybe even that $\dim V = \dim W$? I would greatly appreciate any comments.
Here is a heuristic argument for the equality $\dim V=\dim W$ when the $g_i$'s are polynomials with constant term equal to zero (but not necessarily homogeneous). 1) Consider the morphism $g=(g_1,\cdots, g_s):\mathbb A^n\to \mathbb A^s$ and let $$S=V_{\mathbb A^n}(g_1,\cdots, g_s)\subset \mathbb A^n, \;\Sigma=\overline {g(S)}\subset \mathbb A^s$$ Since $S=g^{-1}(0)$ is a non-empty fiber of $g$, we expect $\dim S\stackrel {?}{=}n-\dim \Sigma$ 2) Since the morphism $p:\mathbb A^{2n}\to \Sigma:(x,y)\mapsto g(x)$ is dominant and since $W=p^{-1}(0)$ we expect $$\dim W\stackrel {?}{=}2n-\dim \Sigma $$ 3) We have a dominant morphism $g\times g:\mathbb A^{2n}\to \Sigma\times \Sigma:(x;y)\mapsto (g(x),g(y))$. The variety $V\subset \mathbb A^{2n}$ is the inverse image under that morphism of the diagonal subvariety $\Delta =\{(\sigma,\sigma )\vert \sigma\in \Sigma \}\subset \Sigma\times \Sigma$. Thus we expect $$\dim V\stackrel {?}{=}2n-\dim \Delta=2n-\dim \Sigma$$ 4) The expected equalities above show why it is reasonable to hope that $\dim V=\dim W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is it possible to draw a homomorphism between all groups of the same finite order For example, if I have $2$ groups of order $n$, then could I label their elements $1,2,...,n$ and $1,2,...,m$ and say that $\phi(n)=m$
Suppose $n$ is 2. We apply your procedure as follows: * *For the first group we label the identity as 1 and the other element as 2 *For the second group we label the identity as 2 and the other element as 1 Then $\phi$ is a well-defined function, but it's not a group homomorphism, since it doesn't map the identity to the identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does the notation of “1=/= 0” mean for “Let R be a ring with unity 1=/= 0”? Should I read it literally? Or does it mean that the multiplicative identity on the left hand side is not equal to the additive identity on the right hand side?
I think the answers are clear. Here is one more just to be explicit. We are concerned with commutative rings with a multiplicative identity. Suppose I have a set $S=\{a,b,c\}$ with commutative operations satisfying $a=a+c=b+b=ab$, $b=a+a=b+c=bb$, $c=a+b=c+c=ac=bc=cc.$ Then one way or another you can check the rules for a ring and verify that the structure is a ring with additive identity $c$ and multiplicative identity $b$. A shorter way to say that is $0=c$ and $1=b$. Without the rule you mention, $T=\{x\}$ would count as a ring with $x+x=xx=x.$ Since $x+x=x$ for the only element of $T$, we get $x=0$; similarly $xx=x$ gives $x=1.$ It turns out that $T$ is the only structure which follows all the ring rules except the requirement $0 \neq 1$. It is desirable not to have $T$ count as a ring. That uniqueness claim isn't hard to prove, but first you need some smaller results such as $0a=0$. I do not recall for sure, but it might be cleanest to use $0\neq 1$ in an early proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Qualitative graph for a function of $x^{3}$. I have this function: $$f(x)=\frac{x(x^2-3)}{x^3-3x-3}$$ I need to draw its graph. I've tried with a classic study of a function but it cames out a mess. Any idea to simplify the study for draw the graph? Thank you.
HINT Note that $$f(x)=\frac{x(x^2-3)}{x^3-3x-3}=\frac{x^3-3x}{x^3-3x-3}=1+\frac{3}{x^3-3x-3}$$ For a first sketch * *determine the domain *find the value at some "special" and/or "simple" points such as $x=0,1$, etc. *find the values for which the denominator $=0$ (and thus the vertical asymptotes) *find the limits at $\pm \infty$ Then for a complete study we need to use derivatives.
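For the vertical asymptote, the single real root of the denominator can be located numerically; a small sketch of mine, using plain bisection:

```python
def denom(x):
    return x**3 - 3*x - 3

# denom'(x) = 3x^2 - 3 vanishes at x = -1, 1 and denom(-1) = -1 < 0, so the
# local maximum is negative and there is exactly one real root; it lies in
# [2, 2.2] since denom(2) = -1 < 0 < denom(2.2).
lo, hi = 2.0, 2.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if denom(lo) * denom(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = 0.5 * (lo + hi)
```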
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Smooth approximation of three-phased linear models I am looking for a smooth (continuously differentiable) approximation of the following two three-phased functions with breakpoints at $B_1$ and $B_2$:$$ y_1(x, B_1, B_2, a, b) = \begin{cases} a; & x < B_1\\ a + b(x - B_1); & B_1 \leqslant x \leqslant B_2\\ a + b(B_2 - B_1); & x > B_2 \end{cases}, $$ and$$ y_2(x, B_1, B_2, a, b_1, b_2, b_3) = \begin{cases} a + b_1(x - B_1); & x < B_1\\ a + b_2(x - B_1); & B_1 \leqslant x \leqslant B_2\\ a + b_2(B_2 - B_1) + b_3(x - B_2); & x > B_2 \end{cases}. $$ The derivative equals $b$ or $b_2$ at $\dfrac{1}{2}(B_1+B_2)$ for functions $y_1$ and $y_2$, and equals $b_1$ at $x\ll B_1$ and $b_3$ at $x\gg B_2$ for function $y_2$. I would like one additional parameter $s$ that would describe how closely the smooth function would approximate the piecewise linear versions. The function should extrapolate more or less linearly on both sides. Any thoughts about what could be good functions for this? I want the functions to be smooth and continuously differentiable as that would help me to fit these parameters as I could then provide the analytical first-order derivative.
Ha, with the help of my brother I found the answer myself in the end. If we define \begin{align*} f(x, b, s) &= \frac{bx}{2} + \frac{1}{2}\sqrt{\smash[b]{b(4s+bx^2)}} \end{align*} (a smooth version of $b\max(x,0)$), then my piecewise linear function $y_1(x, a, b, B_1, B_2, s)$ can be approximated as \begin{align*} y_1(x, a,b,B_1,B_2,s) &= a + f(x-B_1,b,s)-f(x-B_2,b,s) \end{align*} if the intercept is $a$ and the slope of the middle part is $b$. Likewise, my piecewise linear function $y_2(x, a, b_1, b_2, b_3, B_1, B_2, s)$ with slopes $b_1$, $b_2$ and $b_3$ in the 3 linear parts can be approximated as \begin{align*} y_2(x, a,b_1,b_2,b_3,B_1,B_2,s) &= a + b_1 x + f(x-B_1,b_2-b_1,s)-f(x-B_2,b_2-b_3,s) \end{align*} These approach the original piecewise model more closely as $s\to0$. The same system can be used for functions with any number of breakpoints. They are also much easier to fit than the original piecewise model, due to them being continuously differentiable.
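A direct implementation sketch (mine): here `f` is taken in the smooth-max form $\tfrac{bx}{2}+\tfrac12\sqrt{b(4s+bx^2)}$, a smooth version of $b\max(x,0)$ that reproduces the stated slopes; `b` (and, in `y2`, the slope differences fed to `f`) are assumed positive.

```python
import math

def f(x, b, s):
    # smooth version of b*max(x, 0); the kink at 0 sharpens as s -> 0 (b > 0)
    return 0.5 * b * x + 0.5 * math.sqrt(b * (4.0 * s + b * x * x))

def y1(x, a, b, B1, B2, s):
    return a + f(x - B1, b, s) - f(x - B2, b, s)

def y2(x, a, b1, b2, b3, B1, B2, s):
    return a + b1 * x + f(x - B1, b2 - b1, s) - f(x - B2, b2 - b3, s)
```

For small `s` the derivative of `y1` is close to $0$, $b$, $0$ in the three regions, and that of `y2` close to $b_1$, $b_2$, $b_3$.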
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Adjoint of projection onto direct sum of Hilbert spaces Let $K_n$ be Hilbert spaces and define \begin{equation*} K := \bigoplus_{\ell_2} K_n = \left\{ (x_1,x_2,\ldots) \in \bigoplus_{n=1}^\infty K_n : \sum_{n=1}^\infty \|x_n\|^2 < \infty \right\} \end{equation*} It is easy to see that $K$ is a Hilbert space with the inner product $(x,y) = \displaystyle{\sum_{n=1}^\infty (x_n,y_n) } $. I'm interested in the adjoint of the operator projection \begin{equation*} \begin{split} & \pi_n :\bigoplus_{\ell_2}K_n \rightarrow K_n \\ & (x_1, \ldots,x_n, \ldots) \mapsto x_n. \end{split} \end{equation*} My idea is that ${\pi_n}^*$ is the natural inclusion from $K_n$ to $K$, but I can't find the correct way to prove it. Is my idea right? How can I prove it? Than you for your help.
You are right, the adjoint of $\pi_n : K \to K_n$ is the canonical inclusion $\iota_n : K_n \to K$. Let $(x_1, x_2, \ldots) \in K$ and $y_n \in K_n$. We have: \begin{align} \Big\langle \pi_n(x_1, x_2, \ldots), y_n\Big\rangle_K &= \langle x_n, y_n\rangle_{K_n} \\ &= \Big\langle (x_1, x_2, \ldots), (\underbrace{0, \ldots, 0}_{n-1}, y_n, 0, \ldots)\Big\rangle_{K}\\ &= \Big\langle (x_1, x_2, \ldots), \iota_n(y_n)\Big\rangle_{K}\\ \end{align} Therefore $\pi_n^* = \iota_n$.
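A finite-dimensional toy case makes the computation concrete (my own sketch, with $K=\mathbb R^2\oplus\mathbb R^3\cong\mathbb R^5$ and real matrices, where the adjoint is the transpose):

```python
import numpy as np

# matrix of pi_2 : R^2 (+) R^3 -> R^3 (pick off the last three coordinates)
P2 = np.hstack([np.zeros((3, 2)), np.eye(3)])

# its adjoint w.r.t. the standard inner products is the transpose, which is
# exactly the matrix of the canonical inclusion iota_2 : R^3 -> R^5
iota2 = P2.T

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # element of K
y = np.array([1.0, -2.0, 3.0])            # element of K_2
lhs = (P2 @ x) @ y                        # <pi_2 x, y>
rhs = x @ (iota2 @ y)                     # <x, iota_2 y>
```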
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Isometries of the hyperbolic plane in the Beltrami–Klein disk model I am interested in the isometries of the hyperbolic plane in the Beltrami–Klein disk model. (https://en.wikipedia.org/wiki/Beltrami%E2%80%93Klein_model) The Wikipedia article does not say anything about the structure of the isometries in this model. Since the isometries in the upper half-plane model are well-known, I did a change of variables to go from upper half-plane to the Beltrami–Klein disk. After some calculations, I determined that every isometry in the Beltrami-Klein disk model must be a projective transformation $\mathbb R \mathbb P^2 \to \mathbb R \mathbb P^2$ which maps the "unit disk" $\{ [x:y:1] \in \mathbb R \mathbb P^2 ~|~ x^2 + y^2 < 1 \}$ to itself. (The unit disk is, after all, the disk in the Beltrami-Klein disk model of hyperbolic space.) Some questions I have: * *Is every projective transformation of this form an isometry of the hyperbolic plane? (Is there an easy way to see this?) *The set of such projective transformations is a subgroup of $SL(3,\mathbb R)$. Is there a nice characterization of this group? Update: I just realized that every projective transformation which maps the unit disk to itself must be an element of $SO(2,1)$. Here is the reason: such a projective transformation must map the unit circle to itself, so the corresponding linear transformation $\mathbb R^3 \to \mathbb R^3$ must preserve the quadratic form $x^2+y^2-z^2$. Therefore, if the answer to Question 1 is "yes," then the answer to Question 2 should be $SO(2,1)$.
In addition to Lee Mosher's very nice answer, here are some texts that some of my friends have recommended: * *John Ratcliffe: Foundations of Hyperbolic Manifolds *Cannon, Floyd, Kenyon, Parry: Hyperbolic Geometry
{ "language": "en", "url": "https://math.stackexchange.com/questions/2692957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to find the trapping region. Show that the system, $$x'=-x-y+x(x^2+2y^2)$$ $$y'=x-y+y(x^2+2y^2)$$ has at least one peridic solution. I know that I need to use the Poincare Bendixon Theorem, but I'm not to sure how to find the trapping region. When my teacher did an example in class he basically did a proof by picture and made it seem like all arrows were pointed inward within the trapping region. I'm wondering how would I find and rigorously prove that a trapping region really has all arrows pointed inward? Any help is appreciated, thanks! EDIT: I believe I have figured out the trapping region to be the ellipse $x^2+2y^2=1$ and I believe I have proven it by looking at cases depending on which quadrant the coordinates is in. Now my question is, how do I deal with the fixed point $(0,0)$ within the trapping region?
Because of the form of the linear part of the vector field it seems advisable to explore the dynamics of the Euclidean radius. For simplicity of computation, use $E=\frac12r^2=\frac12(x^2+y^2)$ to get $$ \frac{d}{dt}E=x\dot x+y\dot y=-2E+2E(x^2+2y^2) $$ so that $$ 2E(2E-1)\le\dot E\le 2E(4E-1) $$ This means that $\dot E$ is negative for $0< E< \frac14$ and positive for $E>\frac12$. Looking in the direction of negative time (the time-reversed ODE), this means that the annulus $\frac14<E<\frac12$, that is $\sqrt{\frac12}<r<1$, is invariant in that time direction and thus has to contain a limit cycle, which is also a periodic solution. Some numerical solutions; the circles shown have radii $0.6$ to $1.1$ in steps of $0.1$.
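To see the trapping numerically, one can integrate the time-reversed system (a sketch of mine, plain RK4, not part of the answer above): starting inside the annulus, $E=\frac12(x^2+y^2)$ should stay between $\frac14$ and $\frac12$ for all reversed time.

```python
def reversed_field(x, y):
    # negate the original right-hand side: in reversed time the annulus
    # 1/4 < E < 1/2 is forward-invariant
    w = x * x + 2.0 * y * y
    return (x + y - x * w, -x + y - y * w)

def rk4_step(x, y, dt):
    k1 = reversed_field(x, y)
    k2 = reversed_field(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = reversed_field(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = reversed_field(x + dt * k3[0], y + dt * k3[1])
    return (x + dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

x, y = 0.9, 0.0                  # E = 0.405, inside the annulus
energies = []
for _ in range(20000):
    x, y = rk4_step(x, y, 1e-3)
    energies.append(0.5 * (x * x + y * y))
```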
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Distance between a point and a line and between two lines Let $P = (-5, 3, 4)$, $Q = (-6, 0, 3)$, $R = (-7, 1, 6)$ and $S = (-4, 2, 2)$. Let $A$ be the line passing through $P$ and $Q$, and let $B$ be the line passing through $R$ and $S$. a) What is the distance between $R$ and $A$? b) What is the distance between $A$ and $B$? I am quite confused on how to start with this problem. Firstly, I am not entirely sure how I will find the distance between the point and the line. Would that distance simply be the normal vector multiplied by the projection? If so, how exactly would I calculate the projection here? No equations for the lines are given so I am quite confused. Also, for the shortest distance between two lines, will it be a similar approach of finding the normal vector and projection? I am not entirely sure how to proceed here. Any help would be highly appreciated!
Even if you're only in Calc I, you can still do this. Write an equation for the distance from the point to an arbitrary point on the line and then differentiate the expression you come up with with respect to $x$. The derivative is zero where the distance is at a minimum, and this gives you the $x$ value of the point on the line that is closest to the given point. You can then get the $y$ and $z$ values from the original equation for the line.
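As a cross-check of the minimization approach, the standard cross-product formulas give both distances directly (a sketch of mine; these formulas are not part of the answer above):

```python
import math

def sub(p, q):
    return tuple(a - b for a, b in zip(p, q))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

P, Q, R, S = (-5, 3, 4), (-6, 0, 3), (-7, 1, 6), (-4, 2, 2)
d1 = sub(Q, P)                       # direction of line A
d2 = sub(S, R)                       # direction of line B

# (a) distance from R to line A: |PR x d1| / |d1|
dist_R_A = norm(cross(sub(R, P), d1)) / norm(d1)

# (b) distance between the skew lines A and B: |PR . (d1 x d2)| / |d1 x d2|
n = cross(d1, d2)
dist_A_B = abs(dot(sub(R, P), n)) / norm(n)
```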
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Prove that $ \int_{0}^{c} \frac{\sin(\frac{x}{2})x~dx}{\sqrt{\cos(x) - \cos(c)}} = \sqrt{2} \pi \ln(\sec(\frac{c}{2}))$ As the title says, I want to find a way calculate the following integral$ \int_{0}^{c} \frac{\sin(x/2)x~dx}{\sqrt{\cos(x) - \cos(c)}}$, which I know is equal to $\sqrt{2} \pi \ln(\sec(\frac{c}{2}))$. At first glance, I thought this would not be very difficult to prove (and maybe it isn't), but after some straighforward manipulations, I was unable to make the $\pi$ factor appear. Obs: $ 0<c < \pi$.
We can transform \begin{align} I(c)&=\int_{0}^{c} \frac{\sin(x/2)}{\sqrt{\cos(x) - \cos(c)}}x\,dx\\ &=\frac{1}{\sqrt{2}}\int_{0}^{c} \frac{\sin(x/2)}{\sqrt{\cos^2(x/2) - \cos^2(c/2)}}x\,dx\\ &=2\sqrt{2}\int_0^{c/2}\frac{\sin y}{\sqrt{\cos^2y-\cos^2(c/2)}}y\,dy \end{align} Now, denoting $C=\cos (c/2)$ and enforcing the substitution $\cos y=Cu$, it comes \begin{equation} I(c)=2\sqrt{2}\int_1^{1/C}\frac{\arccos(Cu)}{\sqrt{u^2-1}}\,du \end{equation} By differentiation of this expression with respect to $C$, noticing that $\arccos(1)=0$, we obtain \begin{align} \frac{dI(c)}{dC}&=2\sqrt{2}\left[-\frac{1}{C^2}\frac{\arccos(1)}{\sqrt{\tfrac{1}{C^2}-1}}-\int_1^{1/C}\frac{u}{\sqrt{u^2-1}\sqrt{1-C^2u^2}}\,du\right]\\ &=\frac{-2\sqrt{2}}{\sqrt{1-C^2}}\int_0^{\frac{\sqrt{1-C^2}}{C}}\frac{dw}{\sqrt{1-\left( \frac{Cw}{\sqrt{1-C^2}} \right)^2}}\\ &=\frac{-2\sqrt{2}}{\sqrt{1-C^2}}\left[\frac{\sqrt{1-C^2}}{C}\arcsin\left(\frac{Cw}{\sqrt{1-C^2}}\right)\right]_{w=0}^{w=\frac{\sqrt{1-C^2}}{C}}\\ &=-\sqrt{2}\frac{\pi}{C} \end{align} (we have used the substitution $u^2=1+w^2$). Then, \begin{equation} I(c)=-\pi\sqrt{2}\ln\left( \frac{C}{A} \right) \end{equation} where $A$ is a constant. For $c=0$, we have $C=1 $ and $I(0)=0$, thus $A=1$. Finally, \begin{equation} I(c)=\pi\sqrt{2}\ln\left(\sec\left( \frac{c}{2} \right)\right) \end{equation}
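The identity can also be confirmed numerically; in this sketch (mine) the substitution $x=c-s^2$ removes the endpoint singularity, and $\cos x-\cos c$ is rewritten as $2\sin(c-\tfrac{s^2}{2})\sin(\tfrac{s^2}{2})$ to avoid cancellation for small $s$:

```python
import math

def integrand(s, c):
    # integrand of I(c) after x = c - s^2, dx = -2s ds (orientation flipped)
    x = c - s * s
    denom = math.sqrt(2.0 * math.sin(c - 0.5 * s * s) * math.sin(0.5 * s * s))
    return math.sin(0.5 * x) * x / denom * 2.0 * s

def lhs(c, n=100_000):
    # midpoint rule on s in [0, sqrt(c)]; midpoints avoid the 0/0 at s = 0
    h = math.sqrt(c) / n
    return h * sum(integrand((k + 0.5) * h, c) for k in range(n))

def rhs(c):
    return math.sqrt(2.0) * math.pi * math.log(1.0 / math.cos(0.5 * c))
```

Both sides agree to many digits for any $0 < c < \pi$.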
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
Solving the differential equation $2y'\sin x + y\cos x = y^3(x\cos x - \sin x) $ Does anyone know how to solve the following differential equation: $$2y'\sin x + y\cos x = y^3(x\cos x - \sin x) $$ I tried dividing both sides by sine, then by cosine, which in either case brought me nowhere. I tried isolating $y'$ but only ended up getting a complex expression which is seemingly impossible to integrate. Are there any other approaches here?
Hint Substitute $u=\frac 1 {y^2}$ Then $$u'=-2 \frac {y'} {y^3}$$ $$2y'\sin x + y\cos x = y^3(x\cos x - \sin x)$$ $$-u' + u\cot (x) = (x\cot(x) - 1)$$ $$ {u'} -u\cot (x) = (1-x\cot(x) )$$ $$.............$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693500", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Sum of little o and big o Consider the following expression: $$o\left(\frac{1}{nh_n^p}\right) + O\left(\frac{1}{n}\right) \ \text{as} \ n \rightarrow \infty$$ where $p$ is a positive integer and $h_n$ is a function of $n$. We assume that $nh_n \rightarrow \infty$ as $n \rightarrow \infty$ and $h_n \rightarrow 0$ as $n \rightarrow \infty$. I am told that the expression is equal to $$o\left(\frac{1}{nh_n^p}\right)$$ Why is this the case? What I did was the following. We know that $O\left(\frac{1}{n}\right) = o(1)$ and so $$o\left(\frac{1}{nh_n^p}\right) + O\left(\frac{1}{n}\right) = o\left(\frac{1}{nh_n^p}\right) + o\left(1\right) = o\left(\frac{1+nh_n^p}{nh_n^p} \right)$$.
Under the assumption that $h_n\to0$ and $p\geq 1$, we have $$ h_n^p = o(1) $$ so that $nh_n^p = o(n)$ and therefore $1/nh_n^p = \omega(1/n)$, i.e., $$ \frac{1}{n} = o\!\left(\frac{1}{nh_n^p}\right)\,. $$ It follows that $$o\!\left(\frac{1}{nh_n^p}\right)+O\!\left(\frac{1}{n}\right) = o\!\left(\frac{1}{nh_n^p}\right)+o\!\left(\frac{1}{nh_n^p}\right) = o\!\left(\frac{1}{nh_n^p}\right).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to make a number out of other numbers in mathematica? I’m not entirely sure if this is possible, but I would like to make numbers composed of other numbers. For example, given $a$, $b$, and $c$, I’d like to make a number such that the digits are $a.bc$. So, if $a = 1$, $b = 2$, and $c = 3$, we would have the number $1.23$. Is this possible? Thanks!
I think what you want is: FromDigits[{a, b, c}] // Simplify 100 a + 10 b + c For your example: FromDigits[{1, 2, 3}]/100. 1.23
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Quotient space and quotient map Let $X:= [-2,2]$ a topological space with the canonical topology and let $A:=[-1,1]\subset X$ and $f:X \rightarrow X/A$ the quotient map. How can I find an explicit form of the quotient space $X/A$? And is $X$ a compact space? (I only know that it is complete, but I don't know if it is also compact) Have someone any suggestions please? Thanks!
The quotient of a compact space is always compact, as $q: X \to X / \sim$ is always continuous (where $q$ is the standard quotient map), and images of compact spaces are compact. In your case the function $f: X=[-2,2] \to [-1,1]$ where $f(x) = x+1$ for $x\in [-2,-1]$, $f(x) = 0$ for $x \in [-1,1]$ and $f(x) = x-1$ for $x \in [1,2]$ is continuous and obeys $f(x) = f(y)$ iff $x \sim y$ in the equivalence relation determined by identifying $A$ to a point. A standard theorem then implies that $X/A = X / \sim \simeq f[X] = [-1,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $B$ if a finite Boolean algebra, then it contains an atom Let $B=\{a_1,\dots,a_n,0,1\}$ be a finite Boolean algebra. I want to show that there exists an atom $x\in B$. So I want to show that there exists $x\in B$, such that for each $a\in B$ for which $a<x$, we have $a=0$. I was thinking of maybe looking at $a=\bigwedge\{a_1,\dots,a_n\}$. If $a\neq 0$, then it seems to me that $a$ is an atom. So assume $a=0$. Is it possible to shrink the size of $\{a_1,\dots,a_n\}$ so that we get $a\neq 0$? And for the biggest set for which $a> 0$ we could argue that $a$ is an atom? (Short remark: I do know that every finite Boolean algebra equals $\mathcal P(\operatorname{At}(B))$, but to me it seems that we use the fact that $\operatorname{At}(B)$ is nonempty, or at least, that seems to be assumed. So I want to prove this from scratch.)
Assume that $x\in B$ is not an atom. Then there are $x_a,x_b\in B$ such that $x_a\vee x_b=x$, $x_a,x_b\neq x$, and $x_a,x_b\neq 0$. If $x_a$ is not an atom, then we can find $x_{aa},x_{ab}$ such that $x_{aa}\vee x_{ab}=x_a$, $x_{aa},x_{ab}\neq x_a$, and $x_{aa},x_{ab}\neq 0$. Continuing in this way we find $$x=x_1\vee x_2\vee ...\vee x_n$$ such that $x_i\neq0$ and, because $B$ is finite, if $x_i=x_a\vee x_b$ with $x_a,x_b\neq 0,x_i$, then $x_a,x_b\in\{x_1,...,x_n\}$. When this happens $n$ can be decreased maintaining the same property. Take $n$ minimum with the property above. Each of the $x_i$ must be atoms, otherwise one can split them and reduce $n$ further.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2693996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
the sequence $n!+2,...,n!+n$ is made up of only composite numbers I have found the claim that given that $n\geq2$, we have that the sequence of $n-1$ numbers $n!+2,n!+3,\dots,n!+n$ is made up of only composite numbers. Is there a proof of this? I found this pretty fascinating but I am not sure how to go around it. It seems to hold for the first few examples $n=2$: $S=\{4\}$ $n=3$: $S=\{8,9\}$ $n=4$: $S=\{26,27,28\}$
$n!$ is a rather special integer. It is composed of 1 factor of each from the list $\{1,2,3, \cdots, n-1, n\}.$ So, $$n \geq 2 \Rightarrow 2|n! \Rightarrow n!+2 \equiv 0 \mod(2)$$ $$ n \geq 3 \Rightarrow 3|n! \Rightarrow n!+3 \equiv 0 \mod(3)$$ $$\vdots$$ $$ n \geq k \Rightarrow k|n! \Rightarrow n! + k \equiv 0 \mod(k)$$ $$\vdots$$ $$ n \geq n-1 \Rightarrow n-1|n! \Rightarrow n! + (n-1) \equiv 0 \mod(n-1)$$ $$n \geq n \Rightarrow n|n! \Rightarrow n! + n \equiv 0 \mod(n).$$ The last two steps shown here are rather obvious but I want to make it reasonably clear why every term in the list is composite. Note that one of the nice things about this is that we have shown that there exists arbitrarily long lists of consecutive composite integers!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2694102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Is $\lim_{x→a} f(x) $ equivalent to $\lim_{h→0} f(a+h)$? $\lim_{x→a} f(x) = \lim_{h→0} f(a+h)$ does this always hold true? Intuitively these two things seem to describe same concept, and I've seen some algebraic manipulations that implicitly use this identity. Are there any scenarios where this falls apart? Also, if it does hold true, why is that (putting the intuition aside)? Thanks.
If $\lim_{x\rightarrow a}f(x)=L$, then by $\epsilon$-argument, somehow it is like $0<|x-a|<\delta$, then $|f(x)-L|<\epsilon$, for $0<|h|<\delta$, then $|(a+h)-a|=|h|$, so $0<|(a+h)-a|<\delta$, set $x=a+h$, then $|f(a+h)-L|=|f(x)-L|<\epsilon$, this shows $\lim_{h\rightarrow 0}f(a+h)=L$. Another way is similar.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2694210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Approximating the limit of a Cauchy sequence in a Banach space Let $E$ be a Banach space and consider a sequence $(x_n)_n$ in $E$ satisfying the following condition: $$||x_n-x_{n-1}||\leq 3^{-n}\mbox{ for all }n\in\mathbb{N}.$$ Clearly $(x_n)_n$ is a Cauchy sequence and therefore converges to an element $x\in E$. Question: How to prove that $||x-x_n||\leq \frac{1}{2}3^{-n}$ for all $n\in\mathbb{N}$?
We can prove it using triangle equation: $$||x-x_n|| \leq \sum_{k=n}^{\infty}||x_{k+1}-x_k|| \leq \sum_{k=n}^{\infty}3^{-k-1}=\frac{3^{-n-1}}{1-\frac{1}{3}} = \frac{3^{-n}}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2694316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove that $\lim_{h\to 0} \frac{g(a+h)-2g(a)+g(a-h)}{h^2} = g''(a)$ How do I prove that $$\displaystyle\lim_{h\to0} \dfrac{g(a+h)-2g(a)+g(a-h)}{h^2} = g''(a),$$ where $g$ is of class $C^2$? My attempt: By Mean Value Theorem, there are $c_0\in (a-h, a)$ and $c_1\in(a, a+h)$ such that $$g(a)-g(a-h) = g'(c_0)\cdot h$$ $$g(a+h)-g(a) = g'(c_1)\cdot h$$ Then $\displaystyle\lim_{h\to0} \dfrac{g(a+h)-2g(a)+g(a-h)}{h^2} = \displaystyle\lim_{h\to0} \dfrac{g(a+h)-g(a)-(g(a)-g(a-h))}{h^2} = \\ \displaystyle\lim_{h\to0} \dfrac{g'(c_1)-g'(c_0)}{h}= \displaystyle\lim_{h\to0} \dfrac{g'(c_1)-g'(c_0)}{c_1-c_0}\cdot\dfrac{c_1-c_0}{h}=\\ \displaystyle\lim_{h\to0} \dfrac{g'(c_1)-g'(c_0)}{c_1-c_0}\cdot\displaystyle\lim_{h\to0}\dfrac{c_1-c_0} {h} $ The first limit is equals to $g''(a)$. Then the second limit should be equals to $1$, but I can't see why.
\begin{align*} g(a+h)&=g(a)+g'(a)h+\dfrac{1}{2}g''(\xi_{h})h^{2}\\ g(a-h)&=g(a)-g'(a)h+\dfrac{1}{2}g''(\eta_{h})h^{2}, \end{align*} where $\xi_{h}$ lies in between $a$ and $a+h$, and $\eta_{h}$ lies in between $a$ and $a-h$, so \begin{align*} \dfrac{1}{h^{2}}[g(a+h)+g(x-h)-2g(a)]=\dfrac{1}{2}(g''(\xi_{h})+g''(\eta_{h}))\rightarrow\dfrac{1}{2}(g''(a)+g''(a))=g''(a) \end{align*} by the continuity of $g''$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2694441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Find a matrix A such that $\operatorname{rank}{A} = \operatorname{rank}{A^2} \neq \operatorname{rank}{A^3}$ Let $A$ be a complex square matrix of order 2 ($A \in M_{2,2}$). Then, does there exist $A$ such that $\operatorname{rank}{A} = \operatorname{rank}{A^2} \neq \operatorname{rank}{A^3}$? If that doesn't exist, how can I prove it?
$\DeclareMathOperator{\rank}{rank}$ In general, for a linear transformation $T: V \to V$, one has $T^{k+1} V \subseteq T^k V$ for all $k \ge 0$. If one ever has equality for a particular $k$, i.e. $T^{k+1} V = T^k V$, then $T^{k + l} V = T^k V$ for all $l \ge 1$. In fact, this holds for $l =1$ by assumption, and if it holds for any particular $l$, then also $$T^{k + l + 1} V = T T^{k + l} V = T T^k V = T^{k + 1} V = T^k V.$$ So in particular, for any square matrix $A$ of any size, if $\rank(A^2) = \rank(A)$, then $\rank(A^l) = \rank(A)$ for all $l \ge 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2694566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }