Power series: how to shift the index of a summation with powers $2n+1$ I am taking an ordinary differential equations class, and we are currently learning about power series. One thing that comes up is index shifting; for the most part I can shift the index quite easily, but in the following case I end up with fractional indices. I have an ODE of the form y'' + 2xy' + y = 0, with one of the solutions being (in power series form): y = $\sum_{n=0}^{\infty} b_n x^{2n+1}$ finding y' and y'', gives me: y' = $\sum_{n=1}^{\infty} (2n+1)b_n x^{2n}$ y'' = $\sum_{n=2}^{\infty} (2n)(2n+1)b_n x^{2n-1}$ plugging these into the ODE, I get: $\sum_{n=2}^{\infty} (2n)(2n+1)b_n x^{2n-1} + 2x\sum_{n=1}^{\infty} (2n+1)b_n x^{2n} + \sum_{n=0}^{\infty} b_n x^{2n+1} = 0$ which is simplified as $\sum_{n=2}^{\infty} (2n)(2n+1)b_n x^{2n-1} + \sum_{n=1}^{\infty} 2(2n+1)b_n x^{2n+1} + \sum_{n=0}^{\infty} b_n x^{2n+1} = 0$ From there, I can put y and y' together as such: $b_0x + \sum_{n=1}^{\infty} [b_n + 2(2n+1)b_n]x^{2n+1} + \sum_{n=2}^{\infty} (2n)(2n+1)b_n x^{2n-1} = 0$ And now I'm stuck on how to shift the index so I can add both of the remaining summations together. The problem comes from having a power of $2n$ instead of just $n$: I end up with indices that are fractions, which cannot be right. I tried to look around on Google, but every example shows how to shift indices with a power series in $x^{n+a}$ and not $x^{2n+a}$. Can anyone be kind enough to show me how to shift the index for this problem (and perhaps give me another example which is similar to this)? Thank you
Hint you can downshift or upshift. $$\sum_{n=k}^N f(n)=$$ $$\sum_{n=k-1}^{N-1}f(n+1)=\sum_{n=k+1}^{N+1}f(n-1)$$
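As a worked illustration of the hint applied to the sums in the question (my own addition): the shift acts on the summation index, not on the exponent. Replacing $n$ by $n+1$ in the $y''$ sum gives $$\sum_{n=2}^{\infty} (2n)(2n+1)b_n x^{2n-1} = \sum_{n=1}^{\infty} (2n+2)(2n+3)b_{n+1} x^{2n+1},$$ which now carries the same power $x^{2n+1}$ as the other two sums. With powers $x^{2n+a}$, shifting $n$ by $1$ changes the exponent by $2$, so no fractional indices ever arise.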
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
line integral, hard function to differentiate $$\int_\gamma \frac{(x^2+y^2-2)\,dx+(4y-x^2-y^2-2) \, dy}{x^2+y^2-2x-2y+2}$$ where $\gamma$ is the curve $y = 2\sin\left(\frac{\pi x}{2}\right)$ from $(2,0)$ to $(0,0)$. I think there should be a shortcut to this problem that I cannot see; if that is not the case, I will keep trying to simplify it. Thanks in advance
Complete the squares in the denominator $(x-1)^2+(y-1)^2$ and change the variables $x-1\mapsto x$ and $y-1\mapsto y$. You get the vector field \begin{align} &\left[\frac{x^2+2x+y^2+2y}{x^2+y^2},\frac{4y-x^2-2x-y^2-2y}{x^2+y^2}\right]=\\ &=\left[1+\frac{2x}{x^2+y^2},-1+\frac{2y}{x^2+y^2}\right]-2\left[\frac{-y}{x^2+y^2},\frac{x}{x^2+y^2}\right]. \end{align} The first term is conservative (with an easy potential), the second term has zero curl (easy to check). P.S. Actually the second term has a potential too if you restrict you domain e.g. to the plain without the negative $y$-axis, but the potential is less obvious (easier to get in the polar coordinates).
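As a quick cross-check (my own sketch, not part of the original answer), sympy confirms that the second term has zero scalar curl away from the origin:

```python
import sympy as sp

x, y = sp.symbols('x y')
# Second term of the decomposition: the "rotational-looking" field.
P = -y / (x**2 + y**2)
Q = x / (x**2 + y**2)
# Scalar curl dQ/dx - dP/dy should simplify to 0 for (x, y) != (0, 0).
print(sp.simplify(sp.diff(Q, x) - sp.diff(P, y)))  # 0
```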
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Probability of a random graph being triangle-free Let $S$ be the set of all graphs with vertex set $\{1,2,\ldots,n\}$. A random graph $G\in S$ has probability $2^{-{n \choose 2}}$. Show that a random graph almost surely contains a triangle. My attempt so far: we want to show that as $n$ goes to infinity, the probability of a triangle-free graph goes to zero. I tried to find a general formula for the probability of a triangle-free graph with $n$ vertices, but it seems impossible. I know that for any three vertices, the probability of not being a triangle is $\frac{7}{8}$, but I can't calculate the probability of the union of all such three vertices. Is there any other way to approach this problem?
Let me preface this answer with the following: The overestimation of triangles is a much better argument for your particular problem as the probability of an edge appearing is fixed for all $n$. If you allow $p=p(n)$ then you have to be more careful. The technique that follows is technical, but it can be adapted very easily to show for any $p\ne o(\frac{\ln n}{n})$ that a.a.s. $G$ will have a triangle. First observe that since the probability that any $G$ exists is $2^{-\binom{n}{2}}$ it is equivalent to consider each of the $\binom{n}{2}$ edges being chosen independently to appear in $G$ with probability $p=1/2$. If we let $X$ be the discrete nonnegative random variable that counts the number of triangles in $G$, then we wish to show that $P(X=0)\to 0$ as $n\to\infty$. We will use the second moment method to get our result. By Chebychev's inequality and the fact that $X$ is a nonnegative random variable, then we know that $$ P(X=0)\le P(|X-E(X)|\ge E(X))\le \frac{Var(X)}{E(X)^2}=\frac{E(X^2)-E(X)^2}{E(X)^2}=\frac{E(X^2)}{E(X)^2}-1. $$ So if we can show that $\frac{E(X^2)}{E(X)^2}\to 1$ as $n\to\infty$, then we are done. Observe that $E(X)=\binom{n}{3}p^3\approx c_1n^3$ so that $E(X)^2\approx c_2n^6$. Suppose we enumerate all the $\binom{n}{3}$ possible triples and for $1\le i\le \binom{n}{3}$ we let $$ T_i=\begin{cases}1, &\text{if the }i\text{th triple is a triangle}\\0, &\text{otherwise}\end{cases}. $$ Then $X=\sum T_i$ so that $$ X^2=\left(\sum_{i=1}^{\binom{n}{3}} T_i\right)^2=\sum_{1\le i,j\le\binom{n}{3}} T_iT_j $$ and $$ E(X^2)=\sum_{1\le i,j\le\binom{n}{3}} P(T_iT_j=1). $$ The RHS can be partitioned by the number of vertices that the $i$th and $j$th triangles can have in common: zero, one, two or three in common. Thus $$ \begin{align*} E(X^2)&=\sum_{1\le i,j\le\binom{n}{3}} P(T_iT_j=1)\\ &=\sum_{zero} P(T_iT_j=1)+\sum_{one} P(T_iT_j=1)+\sum_{two} P(T_iT_j=1)+\sum_{three} P(T_iT_j=1)\\ &=\binom{n}{3}\binom{n-3}{3}\frac{1}{2^6}+n\binom{n-1}{4}\frac{1}{2^6}+\binom{n}{2}\binom{n-2}{2}\frac{1}{2^5}+\binom{n}{3}\frac{1}{2^3}\\ &\approx c_3n^6+c_4n^5+c_5n^4+c_6n^3. \end{align*} $$ Thus $$ \frac{E(X^2)}{E(X)^2}\approx\frac{c_3n^6+c_4n^5+c_5n^4+c_6n^3}{c_2n^6}=\frac{c_3}{c_2}+\frac{c_4}{c_2n}+\frac{c_5}{c_2n^2}+\frac{c_6}{c_2n^3}. $$ Observing that $c_2=c_3=\frac{1}{3^22^8}$ then as $n\to\infty$ we have $$ \frac{E(X^2)}{E(X)^2}\approx\frac{c_3}{c_2}+\frac{c_4}{c_2n}+\frac{c_5}{c_2n^2}+\frac{c_6}{c_2n^3}\to 1 $$ and our conclusion follows.
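As an illustration (my own sketch, not from the answer), a short Monte Carlo run shows the triangle-free probability collapsing as $n$ grows; here $\operatorname{tr}(A^3)>0$ exactly when the graph contains a triangle:

```python
import numpy as np

rng = np.random.default_rng(0)

def triangle_free(n):
    # Sample G(n, 1/2) as a symmetric 0/1 adjacency matrix.
    U = np.triu(rng.random((n, n)) < 0.5, k=1).astype(int)
    A = U + U.T
    # trace(A^3) counts closed 3-walks, which exist iff there is a triangle.
    return np.trace(A @ A @ A) == 0

for n in [3, 5, 8, 12]:
    print(n, np.mean([triangle_free(n) for _ in range(20000)]))
# estimates decrease rapidly toward 0 (for n = 3 the exact value is 7/8)
```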
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Expression for every combination of multiple sets I'd like to express every combination that can be represented using unions or intersections of multiple sets. For example, if we have 3 sets $A_1,A_2,A_3$, the following combinations can be generated by unions and/or intersections between $A_1, A_2, A_3$. $$A_1 \setminus (A_2 \cup A_3)$$ $$A_2 \setminus (A_1 \cup A_3)$$ $$A_3 \setminus (A_1 \cup A_2)$$ $$(A_1 \cap A_2) \setminus A_3$$ $$(A_1 \cap A_3) \setminus A_2$$ $$(A_2 \cap A_3) \setminus A_1$$ $$A_1 \cap A_2 \cap A_3$$ Using a Venn diagram, each combination corresponds to a distinct region enclosed by the boundaries of $A_1, A_2, A_3$. I'd like to formalize the combination for multiple sets $A_i \; (1 \leq i \leq n)$. I considered the following expression, but I cannot prove the expression is true. $$\bigcap_{r \in N} A_r \setminus \bigcup_{s \in N^\complement} A_s \quad (\emptyset\not=N\subseteq\{1,2,\dots,n\}, N^\complement=\{1,2,\dots,n\}\setminus N)$$ If the expression is true, please prove it. If not, please correct the expression so that it represents all combinations of multiple sets. In addition, I think that the cardinality of the union of multiple sets can be represented using the expression as follows; $$\left|\bigcup_{i=1}^n A_i\right| = \sum_{\emptyset\not=N\subseteq\{1,2,\dots,n\}} \left| \bigcap_{r \in N} A_r \setminus \bigcup_{s \in N^\complement} A_s \right|$$ For the cardinality of the union of multiple sets, there is the inclusion–exclusion principle. $$\left|\bigcup_{i=1}^n A_i\right| = \sum_{\emptyset\neq J\subseteq\{1,2,\ldots,n\}}(-1)^{|J|-1} \Biggl|\bigcap_{j\in J} A_j\Biggr|$$ From this, we can have $$\sum_{\emptyset\not=N\subseteq\{1,2,\dots,n\}} \left| \bigcap_{r \in N} A_r \setminus \bigcup_{s \in N^\complement} A_s \right| = \sum_{\emptyset\neq J\subseteq\{1,2,\ldots,n\}}(-1)^{|J|-1} \Biggl|\bigcap_{j\in J} A_j\Biggr|$$ Is this correct?
Your expression is correct. Given any point $x\in\bigcup_{i=1}^n A_i$, let $N$ be the (necessarily nonempty) set of all $1\le i\le n$ for which $x\in A_i$; then $x$ is in the set $\bigcap_{r\in N} A_r \setminus \bigcup_{s\in N^c} A_s$ for this $N$ and for none of the others. That proves your disjoint union claim, and the other claims follow quickly from that one.
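For concreteness, here is a small script (my own addition) that checks the disjoint-union claim for three explicit sets:

```python
from itertools import combinations

A = {1: {1, 2, 3, 4}, 2: {3, 4, 5}, 3: {4, 5, 6, 7}}
idx = set(A)

regions = []
for r in range(1, len(idx) + 1):
    for N in combinations(sorted(idx), r):
        inside = set.intersection(*(A[i] for i in N))
        outside = set().union(*(A[i] for i in idx - set(N)))
        regions.append(inside - outside)

union = set().union(*A.values())
# The regions are pairwise disjoint and together cover the union.
assert sum(len(R) for R in regions) == len(union)
print([sorted(R) for R in regions])
```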
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding lengths in a diagram. Find $x$ and $y$. Given that: $$a_1+a_2+a_3-a_4+a_5=180^\circ$$ $$\text{cos}(180^\circ-a_5)=0.4$$ I managed to solve this problem using Mathematica, based on an equation with a whole bunch of ArcTan. However, I am looking for an easier way to solve it based on its geometrical properties. Thank you! Remark: * *Thanks @Blue for editing the question! *code for solving the problem in Mathematica: ratio = Tan[ArcCos[0.4]]; Solve[ArcTan[ratio*x/(10-x)]+ArcTan[ratio*x/(3-x)]+Pi-ArcTan[ratio*x/(x-1)]+Pi-ArcCos[0.4]-(Pi-ArcTan[ratio*x/(x-0.1)])==Pi&&x>1&&x<3,x]
$$x=t\cos a_5,\\y=t\sin a_5 $$ where $a_5$ is given by the second equation. Then there remains a sum of four angles which can indeed be expressed as arc tangents of terms $t_k:=y/(x-x_k)$. As $$\tan(a_1+a_2+a_3-a_4)=\frac{\tan(a_1+a_2)+\tan(a_3-a_4)}{1-\tan(a_1+a_2)\tan(a_3-a_4)} =\frac{\dfrac{t_1+t_2}{1-t_1t_2}+\dfrac{t_3-t_4}{1+t_3t_4}}{1-\dfrac{t_1+t_2}{1-t_1t_2}\dfrac{t_3-t_4}{1+t_3t_4}}$$ you will get a ratio of cubic/quartic polynomials in $t$. I don't see a possible simplification among the terms so this problem seems to be equivalent to the resolution of a quartic equation. There will be no easy shortcut.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can we show $\cos^6x+\sin^6x=1-3\sin^2x \cos^2x$? How can we simplify $\cos^6x+\sin^6x$ to $1−3\sin^2x\cos^2x$? One reasonable approach seems to be using $\left(\cos^2x+\sin^2x\right)^3=1$, since it contains the terms $\cos^6x$ and $\sin^6x$. Another possibility would be replacing all occurrences of $\sin^2x$ by $1-\cos^2x$ on both sides and then comparing the results. Are there other solutions, simpler approaches? I found some other questions about the same expression, but they simplify it to another form: Finding $\sin^6 x+\cos^6 x$, what am I doing wrong here?, Alternative proof of $\cos^6{\theta}+\sin^6{\theta}=\frac{1}{8}(5+3\cos{4\theta})$ and Simplify $2(\sin^6x + \cos^6x) - 3(\sin^4 x + \cos^4 x) + 1$. Perhaps also Prove that $\sin^{6}{\frac{\theta}{2}}+\cos^{6}{\frac{\theta}{2}}=\frac{1}{4}(1+3\cos^2\theta)$ can be considered similar. The expression $\cos^6x+\sin^6x$ also appears in this integral Find $ \int \frac {\tan 2x} {\sqrt {\cos^6x +\sin^6x}} dx $ but again it is transformed to a different form than the one required here. Note: The main reason for posting this is that this question was deleted, but I still think that the answers there might be useful for people learning trigonometry. Hopefully this new question will not be closed for lack of context, and the answers from the deleted question can be moved here.
Any symmetric polynomial in $X$ and $Y$ can be expressed as a polynomial in $S=X+Y$ and $P=XY$. If $X=\cos^2x$ and $Y=\sin^2x$, then $S=\cos^2x+\sin^2x=1$, so a symmetric polynomial expression in $\cos^2x$ and $\sin^2x$ can be written as a polynomial in $P=\cos^2x\sin^2x$. If the symmetric polynomial is also homogeneous, all terms in the new expression are homogeneous as well. In this case, $P$ appears at most with degree $1$, because $P$ counts for degree $2$ in $X$ and $Y$. Hence $X^3+Y^3=a+b(X+Y)^3+c(X+Y)XY$; evaluating at $X=Y=0$ entails $a=0$. With $X=1$ and $Y=0$ we get $b=1$; with $X=1$ and $Y=1$ we get $$ 1+1=8+2c $$ so $c=-3$; therefore $$ \cos^6x+\sin^6x=1-3\cos^2x\sin^2x $$ As another example, $X^4+Y^4=a(X+Y)^4+bXY(X+Y)^2+c(XY)^2$. Evaluating at $X=1$ and $Y=0$ gives $1=a$; evaluating at $X=1$ and $Y=-1$ gives $2=c$; evaluating at $X=Y=1$ gives $2=16a+4b+c$, so $b=-4$. Therefore $$ \cos^8x+\sin^8x=1-4\cos^2x\sin^2x+2\cos^4x\sin^4x $$
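A quick numeric spot check of both identities derived above (my own addition):

```python
import math

for x in [0.3, 1.1, 2.7]:
    c, s = math.cos(x), math.sin(x)
    assert math.isclose(c**6 + s**6, 1 - 3 * s**2 * c**2)
    assert math.isclose(c**8 + s**8, 1 - 4 * c**2 * s**2 + 2 * c**4 * s**4)
print("both identities check out numerically")
```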
{ "language": "en", "url": "https://math.stackexchange.com/questions/2023931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 11, "answer_id": 8 }
Show that $1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2$ by induction I am having some trouble with proofs by induction. I cannot find a solution for the inductive step: $1^3 + 2^3 + ... + n^3 = (n(n+1)/2)^2$ I already did the induction steps: Basis: P(1) = $1^3 = (1(1+1)/2)^2$ (This is true) Inductive step: Assume $P(k) = ((k)(k+1)/2)^2$ To be proven: $((k)(k+1)/2)^2 + (k+1)^3 = ((k+1)(k+2)/2)^2$ My problem is that I do not know how I can put the $ + (k+1)^3$ inside $((k)(k+1)/2)^2$. Simplifying the left and right part of the statement does not help: Simplifying the left side: $((k)(k+1)/2)^2 + (k+1)^3 = ((k^2+k)/2)^2 + (k+1)^3 $ Simplifying the right side: $((k+1)(k+2)/2)^2 = ((k^2+3k+2)/2)^2$ So I am left with: $((k^2+k)/2)^2 + (k+1)^3 = ((k^2+3k+2)/2)^2$ That is the same as: $1/4 (k^2+k)^2 + (k+1)^3 = 1/4((k^2+3k+2))^2$ Going further with the left side: $1/4 (k^2+k)^2 + (k+1)^3 = (1/4)(k^4 + 2k^3 + k^2) + k^3+3 k^2+3 k+1$ Going further with the right side: $1/4((k^2+3k+2))^2 = 1/4 (k^4+6 k^3+13 k^2+12 k+4)$ Now I am stuck with: $(1/4)(k^4 + 2k^3 + k^2) + k^3+3 k^2+3 k+1 = 1/4 (k^4+6 k^3+13 k^2+12 k+4)$ Now I am kind of left with garbage. Am I missing something? What am I doing wrong? Where can I find good resources to learn how to solve this issue?
\begin{align} \underbrace{1^3+2^3+\ldots+n^3}_{\left[\frac{n(n+1)}{2} \right]^2}+(n+1)^3 =& \left[\frac{n(n+1)}{2} \right]^2+(n+1)^3 \\ =& \frac{n^2\color{red}{(n+1)^2}}{4} + (n+1)\color{red}{(n+1)^2} \\ =& \left\lgroup \frac{n^2}{4} + (n+1) \right\rgroup \color{red}{(n+1)^2} \\ =& \left\lgroup \frac{n^2 +4(n+1)}{4} \right\rgroup \color{red}{(n+1)^2}\\ =&\left\lgroup \frac{(n+2)^2}{4} \right\rgroup \color{red}{(n+1)^2} \\ =&\left\lgroup \frac{(n+2)}{2} \right\rgroup^2 \color{red}{(n+1)^2} \\ =&\left\lgroup \frac{\big(\color{red}{(n+1)}+1\big)}{2} \right\rgroup^2 \color{red}{(n+1)^2} \\ =&\left[ \frac{\color{red}{(n+1)}\big(\color{red}{(n+1)}+1\big)}{2} \right]^2 \end{align}
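A one-liner (my addition) confirming the closed form for small $n$:

```python
for n in range(1, 50):
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
print("1^3 + ... + n^3 = (n(n+1)/2)^2 verified for n = 1..49")
```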
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Probability of drawing exactly 13 black & 13 red cards from deck of 52 We have a normal deck of $52$ cards and we draw $26$. What's the probability of drawing exactly $13$ black and $13$ red cards? Here's what I have so far. Consider a simplified deck of $8$ (with $4$ $B$'s and $4$ $R$'s): we have 6 permutations of $BBRR,RRBB,RBRB,RBBR,BRBR,BRRB$, each with probability $p=\frac{4^23^2}{(8*7*6*5)}$, therefore the overall probability is $6p = 0.5143$. I could extend this method to 52 if I knew how to find the number of multi-set permutations, but I'm not sure how to get that. I thought it was $\frac{nPr}{n_B!n_R!}$ but this gives $8!/(8-4)!/4!^2 = 2.9166$ for the 8 card example, which is incorrect (so I made a mistake).
The OP should be commended for approaching the problem by first thinking about a smaller analog that's easy to solve explicitly -- and then for rejecting an idea because it gives the wrong answer for the smaller analog. That is exactly the right thing to do when faced with a problem that seems complicated by its very size. You have to choose two red cards and two black cards. There are $4 \choose 2$ ways to choose the red cards and $4 \choose 2$ ways to choose the black ones. There are $8 \choose 4$ ways to choose the four cards if you don't care about the colors, so the chance you get two reds and two blacks is $\frac {{4 \choose 2}^2}{8 \choose 4}=\frac {36}{70}.$ For $13$ reds out of $26$ cards drawn from a standard deck it is $\frac {{26 \choose 13}^2}{52 \choose 26}\approx 0.218$
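Both probabilities are easy to evaluate exactly (my addition):

```python
from math import comb

print(comb(4, 2) ** 2 / comb(8, 4))      # 36/70 ≈ 0.514  (8-card deck)
print(comb(26, 13) ** 2 / comb(52, 26))  # ≈ 0.218        (52-card deck)
```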
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Infimum of probability and probability of infimum I am studying the Borel Cantelli proof and there is the following step: $$\Pr\left( \bigcap \limits_{N=1}^{\infty} \bigcup\limits_{n=N}^{\infty}E_n\right) \le \inf_{N\ge1} \Pr\left( \bigcup\limits_{n=N}^{\infty} E_n\right)$$ What happened here? I guess that: $$\Pr\left(\bigcap \limits_{N=1}^{\infty}\bigcup\limits_{n=N}^{\infty} E_n\right) = \Pr\left(\inf_{N\ge1}\bigcup\limits_{n=N}^{\infty} E_n\right) \le \inf_{N\ge1} \Pr\left( \bigcup\limits_{n=N}^{\infty} E_n\right)$$ But why is this true?
Note that for any positive integer $K$ you have the inclusion $\bigcap \limits_{N = 1}^\infty \bigcup \limits_{n = N}^\infty E_n \subset \bigcup \limits_{n = K}^\infty E_n$. Using the monotonicity of measures we can deduce $P\left(\bigcap \limits_{N = 1}^\infty \bigcup \limits_{n = N}^\infty E_n \right) \le P\left(\bigcup \limits_{n = K}^\infty E_n\right)$. Now the left-hand side is a lower bound for the term on the right-hand side for all $K$, so taking the infimum over all $K$ yields the assertion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Pointwise convergence doesn't imply $L^p$ convergence if $p=\infty$ under some hypothesis I just proved that if $1\leq p<\infty$ and $f,f_n$ is a sequence of measurable functions such that $f_n(x)\rightarrow f(x)$ a.e $x\in X$ and $\exists g\in L^p(\mu)$ such that $|f_n(x)|\leq g(x)$ a.e $x\in X$, then $f_n\rightarrow f$ in $L^p.$ Why is it that this is not true if $p=\infty$?
When $p=+\infty$, the statement reads as follows: If $\left(f_n\right)_{n\geqslant 1}$ is a sequence of measurable functions such that $f_n(x) \to f(x)$ almost everywhere and there exists a constant $M$ such that $\left|f_n(x)\right|\leqslant M$ for every $n$ and almost every $x$, then $\lVert f_n-f\rVert_{\mathbb L^{\infty}}\to 0$. But we can imagine a sequence of functions taking the values $0$ and $1$, with pointwise but not uniform convergence. We choose for example $f_n(x):=1$ if $0\lt x\lt 1/n$ and $0$ otherwise (on the unit interval, with Borel $\sigma$-algebra and Lebesgue measure).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove: If $p$ is prime and $(a,p)=1$, then $1 + a + a^2 + \dots + a^{p-2}\equiv 0 \pmod p$ Prove: If $p$ is prime and $ (a,p)=1 $, then $\ 1 + a + a^2 + ... + a^{p-2}≡0 \pmod p$ As an example, for $a=2$ and $p=5$, then: $$1+2+2^2+2^3=1+2+4+8=15≡0 \pmod 5$$ This can also be written as: $$1 \pmod 5 + 2 \pmod 5 + 4 \pmod 5 + 3 \pmod 5 ≡ 10 \pmod 5 ≡ 0 \pmod 5$$ Which is a complete residue system (mod 5). This is where I'm stuck. At a glance it looks like Fermat's little theorem, but it seems like I have to prove this creates a CRS for any $a$ and any $p$. How can I proceed? Thanks!
If $p=2$, then $a^{p-2}=a^0=1$, so in this case the statement is false, because $1\not\equiv 0\pmod 2$. If $p\neq 2$, then $1+a+\dots+ a^{p-2}=\frac{a^{p-1}-1}{a-1}$. By Fermat's theorem, if $(a,p)=1$, then $a^{p-1}\equiv 1 \pmod p$ (if this doesn't hold, the factor $p$ of $a^{p-1}-1$ may disappear), so you now need to check that $p\nmid a-1$, because otherwise, for example with $p=7$ and $a=8$, you get $$1+8+\dots+8^5\equiv 6\not\equiv 0 \pmod 7$$ If $p\neq 2$ and $(a(a-1),p)=1$, then the statement is true.
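A quick numeric check (my own) that matches the criterion above, over a few odd primes:

```python
from math import gcd

for p in [3, 5, 7, 11, 13]:
    for a in range(2, 40):
        if gcd(a, p) != 1:
            continue
        s = sum(pow(a, k, p) for k in range(p - 1)) % p
        # The sum vanishes mod p exactly when p does not divide a - 1.
        assert (s == 0) == (a % p != 1)
print("criterion verified for p in {3, 5, 7, 11, 13}, a < 40")
```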
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
A bijective arcwise isometry is an isometry? Let $X,Y$ be length spaces. Suppose $\, f:X \to Y$ is an arcwise isometry, i.e $L_X(\gamma) = L_Y(f \circ \gamma)$ for every continuous path $\gamma:I \to X$. (In particular, $f$ takes non-recitifiable paths to non-recitifiable paths). In addition, assume $f$ is a bijection. Is it true that $f$ is an isometry? The naive approach would be to say $$ d_Y(f(p),f(q))= \inf \{ L_Y(\beta): \beta \, \, \text{is a path from } f(p) \text{ to } f(q)\} \stackrel{(*)}{=}\inf \{ L_X(\alpha): \alpha \, \, \text{is a path from } p \text{ to } q\}=d_X(p,q),$$ so the answer is positive. However, there is a problem with this argument: Equality $(*)$ relies upon the assumption we have a "length-preserving" bijection between the two sets of paths, namely $\alpha \to f \circ \alpha$. The problem is that we do not know $f^{-1}$ is continuous*, so for a path $\beta$ in $Y$, $f^{-1} \circ \beta$ is not necessarily a path in $X$ (paths must be continuous by definition). This causes an obstacle in showing surjectivity: If $\beta$ is a path in $Y$, we would like to say $\beta=f(f^{-1}(\beta))$, but why is $f^{-1}(\beta)$ a legitimate path? *We do know $f$ is continuous, in fact it's $1$-Lipschitz.
Hint: consider the age old counterexample $f : [0,2\pi) \to S^1$ given by $f(t) = (\cos(t),\sin(t))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Convexity Geometric Brownian Motion Let $B(t)$ be a standard Brownian motion with filtration $\{ \mathcal{F}_t: t \geq 0\}$, and let $\mu, \sigma >0$ be real parameters. $S(t)$ is modeled by \begin{align} S(t) = e^{\mu t + \sigma B(t)- \frac{1}{2}\sigma^2 t}. \end{align} In order to find an expression for $\mathbb{P}\big(\ S(2t)>2S(t)\ \big)$, I set the two expressions equal: \begin{align} S(2t) &= 2 S(t) \\ e^{2 \mu t + \sigma B(2t) - \sigma^2 t} &= 2 e^{\mu t + \sigma B(t)- \frac{1}{2}\sigma^2 t} \\ e^{\mu t + \sigma ( B(2t) - B(t))- \frac{1}{2}\sigma^2 t} &= 2 \\ \mu t + \sigma ( B(2t) - B(t))- \frac{1}{2}\sigma^2 t &= \ln(2). \end{align} I understand that the increment $B(2t) -B(t)$ is independent and that we are studying the realized volatility. However, I am wondering how we can find an expression for $\mathbb{P}\big(\ S(2t)>2S(t)\ \big)$? (Is convexity the correct name for the property $S(2t)>2S(t)$?)
By taking the logarithm on both sides we have $P(S(2t)>2S(t))=P(B_{2t}-B_t>\frac{\log(2)-\mu t+0.5\sigma^2t}{\sigma})$. We know that $B_{2t}-B_t$ is normally distributed with mean $0$ and variance $t$, therefore we can replace $B_{2t}-B_t$ by $\sqrt{t}Y$ where $Y$ is a standard normal variable. Finally, $P(S(2t)>2S(t))=P(Y>\frac{\log(2)-\mu t+0.5\sigma^2t}{\sigma\sqrt{t}})$ $=1-F(\frac{\log(2)-\mu t+0.5\sigma^2t}{\sigma\sqrt{t}})$ where $F$ is the cdf of a standard normal variable.
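A Monte Carlo cross-check of the closed form (my own sketch; the parameter values are arbitrary):

```python
import numpy as np
from statistics import NormalDist

mu, sigma, t = 0.05, 1.0, 1.0
rng = np.random.default_rng(1)

n = 10**6
B_t = rng.normal(0.0, np.sqrt(t), n)   # B_t
inc = rng.normal(0.0, np.sqrt(t), n)   # independent increment B_{2t} - B_t
S_t = np.exp(mu * t + sigma * B_t - 0.5 * sigma**2 * t)
S_2t = np.exp(2 * mu * t + sigma * (B_t + inc) - sigma**2 * t)

mc = np.mean(S_2t > 2 * S_t)
exact = 1 - NormalDist().cdf(
    (np.log(2) - mu * t + 0.5 * sigma**2 * t) / (sigma * np.sqrt(t)))
print(mc, exact)  # the two values agree to a few decimals
```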
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Continuous function and Borel sets If $f:\mathbb{R}^p \rightarrow \mathbb{R}$ is continuous and $B\subset \mathbb{R}$ is a Borel set, how can one show that $f^{-1}(B)$ is also a Borel set? I was trying to construct a $\sigma$-ring of sets whose preimages are Borel, but they're not necessarily all Borel in that $\sigma$-ring... What would be the main idea for the proof here?
A Borel set is any set in a topological space that can be formed from open sets through the operations of countable union, countable intersection, and relative complement. Since the inverse image of an open set under a continuous function is open, and the inverse image of a countable union is the countable union of the inverse images (and likewise for countable intersections and relative complements), the inverse image of a Borel set under a continuous function is a Borel set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Finding $\lim\limits_{x \rightarrow +\infty}\frac{\log_{1.1}x}{x}$ analytically $$\lim_{x \rightarrow +\infty}\frac{\log_{1.1}x}{x}$$ I can solve this easily by generating the graph with my calculator, but is there a way to do this analytically?
Use the definition: $\;\log_{1.1}x=\dfrac{\ln x}{\ln 1.1}$, so $$\frac{\log_{1.1}x}{x}=\frac 1{\ln 1.1}\dfrac{\ln x}{x}\xrightarrow[x\to+\infty]{}\frac 1{\ln 1.1}\cdot 0=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2024997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Find the ordinary generating function for the number of at most binary trees on n unlabelled nodes. An at most binary tree is an unordered rooted tree in which each node has at most two children. I want to find the ordinary generating function for the number of at most binary trees on $n$ unlabelled nodes. I am trying to solve this problem using the theory of species. I am thinking the following way: Let $t_n$ be the number of trees with $n$ vertices and let $T(x) = \sum_n t_n x^n$ be their ordinary generating function. We first pick the root of the tree, and then any of the remaining branches must be an at most binary unordered rooted tree. Thus, the generating function for any of the branches is $(1+T(x) + T(x)^2/2)$. Thus, multiplying our choice for the root and the remaining branches, we have that $T(x) = x(1+T(x)+T(x)^2/2)$. From here I should just solve the recurrence and find the ordinary generating function. I'm not sure if this reasoning is right, but in a previous problem I found the exponential generating function for the number of at most binary trees on $n$ labelled nodes, and I got a very similar result. I am wondering if the way I approached this problem is right and, if not, if anyone could give me a hint for it. Thanks!
Your $t_n$ are the Wedderburn–Etherington numbers, OEIS A001190. Specifically, $t_n=a_{n+1}$, where the $a_n$ are the Wedderburn-Etherington numbers. The OEIS entry has copious references but neither a closed form nor an explicit generating function. It does note that the generating function $A(x)$ for the $a_n$ satisfies $$A(x)=x+\frac12\left(A(x)^2+A(x^2)\right)$$ and $$A(x)=1-\sqrt{1-2x-A(x^2)}\;.$$ Since $a_0=0$, $T(x)=\frac{A(x)}x$. For $n\ge 1$ the Wedderburn–Etherington numbers satisfy the recurrences $$\begin{align*} a_{2n-1}&=\sum_{k=1}^{n-1}a_ka_{2n-k-1}\\ a_{2n}&=\frac{a_n(a_n+1)}2+\sum_{k=1}^{n-1}a_ka_{2n-k}\;, \end{align*}$$ With base case $a_1=1$. This yields the recurrences $$\begin{align*} t_{2n}&=\sum_{k=0}^{n-1}t_kt_{2n-k-1}\\ t_{2n-1}&=\frac{t_{n-1}(t_{n-1}+1)}2+\sum_{k=1}^{n-1}t_{k-1}t_{2n-1-k} \end{align*}$$ for $n\ge 1$, with base case $t_0=1$.
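A short script (my addition) evaluating these recurrences; the output reproduces the start of OEIS A001190:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def a(n):
    # Wedderburn-Etherington numbers: a(0) = 0, a(1) = 1.
    if n <= 1:
        return n
    m = (n + 1) // 2
    if n % 2 == 1:  # n = 2m - 1
        return sum(a(k) * a(n - k) for k in range(1, m))
    # n = 2m
    return a(m) * (a(m) + 1) // 2 + sum(a(k) * a(n - k) for k in range(1, m))

print([a(n) for n in range(1, 11)])  # [1, 1, 1, 2, 3, 6, 11, 23, 46, 98]
print([a(n + 1) for n in range(9)])  # t_n = a_{n+1} for n = 0..8
```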
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Throwing darts probability Assume a sequence from $1$ to $n$, with $e_i$ denoting the $i$-th element. We throw a dart at random into the array: if we hit $e_i$ or $e_j$ then $X_{i,j}$ becomes $1$, if we hit between $e_i$ and $e_j$ then $X_{i,j}$ becomes $0$, and otherwise we throw another dart. Once $X_{i,j}$ is assigned a value, the game is over. Assume $j>i$. In other words, we keep throwing the darts until we hit one of the elements $e_i,e_{i+1},\ldots,e_j$. If we hit either $e_i$ or $e_j$, $X_{i,j}$ is assigned a $1$ and the game is over. If we hit any of the elements $e_{i+1},\ldots,e_{j-1}$, $X_{i,j}$ is assigned a $0$ and the game is over. If we hit any of the elements $e_1,\ldots,e_{i-1},e_{j+1},\ldots,e_n$, the game continues and we throw the dart again. What is the probability $$P(X_{i,j}=1)=\phantom{} ?$$ Let $E$ be the event of a hit between $e_i$ and $e_j$ (including these two elements), meaning that $X_{i,j}$ has become either $0$ or $1$ and the game has finished. In the event of $\bar{E}$, the dart is thrown again. Then $$P(X_{i,j}=1 | E)=\frac{2}{j-i+1}$$ Thus, we know the probability that $X_{i,j}=1$, given that the game has ended. How do we deduce the overall probability that $X_{i,j}=1$? If we turn to Bayes, we have $$P(X_{i,j}=1)=\frac{P(X_{i,j}=1\cap E)}{P(E|X_{i,j}=1)}$$ But we know $$P(E|X_{i,j}=1)=1$$ Thus $$P(X_{i,j}=1)=P(X_{i,j}=1|E)P(E)$$ But the text I am reading states At each step, the probability that $X_{i,j}= 1$ conditioned on the event that the game ends in that step is exactly $\frac{2}{j-i+1}$. Therefore, overall, the probability that $X_{i,j}=1$ is $\frac{2}{j-i+1}$. Where am I making the mistake?
The game is a sequence of trials, each in one of three states, $A_t,B_t,C_t$.   State $A_t$ is that $\{X_{i,j}=1\}$, state $B_t$ that $\{X_{i,j}=0\}$, and state $C_t$ that the value is undefined, for a given trial $t$ of the game. Assuming each section of the array is equally likely to be hit, and the space "between" $e_i, e_j$ are the sections $e_{i+1},\ldots,e_{j-1}$ then $$\mathsf P(A_t)= 2/n, \mathsf P(B_t)=(j-i-1)/n, \text{ and }\mathsf P(A_t\cup B_t)=(j-i+1)/n$$ So $$\mathsf P(A_t\mid A_t\cup B_t)= 2/(j-i+1)$$ This is the probability that the game will be in the favoured state, given that the game is in the end state.   Since the game will end on the first independent trial where the end state happens, which will eventually happen, then: $$\begin{align}\mathsf P(A_N)&=\mathsf P(A_N\cap(A_N\cup B_N))\\&=\mathsf P(A_N\mid(A_N\cup B_N))\cdot\mathsf P(A_N\cup B_N) \\ &= \frac 2{j-i+1}\cdot 1\end{align}$$ (Although the argument that $\mathsf P(A_N\mid(A_N\cup B_N))=\mathsf P(A_t\mid(A_t\cup B_t))$ irrespective of $N$ being the end trial is a mite circular.) More formally we might say that the probability the game ends in state $A$ is, by the law of total probability, the series of probabilities that it is in state $A$ on a trial after being in state $C$ for all previous trials, for all possible trials. $$\begin{align}\mathsf P(A_N) & = \sum_{t=1}^\infty \mathsf P(A_t\cap\bigcap_{s=1}^{t-1} C_s) \\ &= \sum_{t=1}^\infty \mathsf P(A_t)\prod_{s=1}^{t-1}\mathsf P(C_s) \\&= \frac{2}{n}\sum_{t=1}^\infty\left(1-\frac{j-i+1}{n}\right)^{t-1} \\&= \frac 2n\cdotp\frac{n}{j-i+1}\end{align}$$ Thanks to the Geometric series (which you apparently figured out while I was typing this up while distracted... $\ddot\frown$)
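A tiny simulation (my addition) agreeing with $2/(j-i+1)$:

```python
import random

def play(n, i, j):
    # Positions are 1-indexed; throw until the dart lands in {i, ..., j}.
    while True:
        hit = random.randint(1, n)
        if i <= hit <= j:
            return hit in (i, j)

n, i, j, trials = 20, 5, 11, 200000
est = sum(play(n, i, j) for _ in range(trials)) / trials
print(est, 2 / (j - i + 1))  # both ≈ 0.2857
```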
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Should I study projective geometry or commutative algebra as prerequisite to start algebraic geometry? I am looking to study Algebraic Geometry but some books list projective geometry as a prerequisite and some list commutative algebra. I have taken one semester of abstract algebra, real analysis, complex analysis, topology, combinatorics, and differential geometry. I have not taken a course in projective geometry nor commutative algebra. Which would be more important as a prerequisite if I want to start learning algebraic geometry? Do you have any books to recommend?
Marco Flores has given a very good answer. I would like to echo one aspect of his answer to suggest that if one wants to learn algebraic geometry, one can just start learning it. There are several good books available beginning at an undergraduate level, such as Reid's book that Marco Flores mentions. Later on, one can choose various approaches: the commutative algebra approach of Hartshorne's book, the differential geometry/topology approach of Griffiths and Harris, the related (but more bare-bones) approach of Mumford's book on projective varieties, ... . There are lots of "road map" questions about algebraic geometry texts here on Math.SE and also on MO that you can consult for guidance. Finally, let me mention two answers on MO that push against the idea that algebraic geometry should be regarded as a branch of commutative algebra: this one and this one. I agree with their sentiment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Finding $n$ for the given probability. A certain explosive device will detonate if anyone of $n$ short-lived fuses lasts longer than $0.8$ seconds. Let $X_i$ represent the life of the $ith$ fuse. It can be assumed that each $X_i$ is uniformly distributed over the interval $(0,1)$ , also $X_i $'s are independent. We need to find the number of fuses required if one wants to be 95% certain that the device will detonate. That is : $P(detonation) = 0.95$ => $P(X_1 > 0.8)+P(X_2 > 0.8)+P(X_3 > 0.8)$. . . $+P(X_n > 0.8)=0.95$ => $\sum_{i=1}^{n}P(X_i > 0.8)=0.95$. Since $X_i$~$U(0,1)$ the above statement gives : $\sum_{i=1}^{n}(0.2)=0.95$ => $0.2n = 0.95$ which gives $n \approx 5$ . Is this correct ?
The probability that a fuse lasts less than $0.8$ seconds is $0.8$. The probability that none of $n$ independent fuses lasts longer than $0.8$ seconds is therefore $0.8^n$. So the probability that at least one of them lasts longer than $0.8$ becomes $1-0.8^n$, which should be greater than or equal to $0.95$. That is $$1-0.8^n\ge0.95,\,n\in\mathbb{N}$$ So $$n\ge\lceil\log(0.05)/\log(0.8)\rceil=14$$
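In code (my addition):

```python
import math

n = math.ceil(math.log(0.05) / math.log(0.8))
print(n)                           # 14
print(1 - 0.8 ** n >= 0.95)        # True
print(1 - 0.8 ** (n - 1) >= 0.95)  # False, so 14 is minimal
```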
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving a Lebesgue integral exists: $\int_0^{\pi/2}\left(\frac{1}{2(e^x -1)}-\frac{1}{\tan(x)\sin(x)}\right)d\lambda(x)$ Can you tell me how I would go about proving that the Lebesgue integral $$\int_{0}^{\pi/2}\left(\frac{1}{2(e^x -1)} - \frac{1}{\tan(x)\sin(x)}\right)d\lambda(x)$$ exists? I've already shown that $\dfrac{1}{2(e^x -1)} - \dfrac{1}{\tan(x)\sin(x)}$ is continuous over the interval $(0,\pi/2)$ but I don't know how to proceed from here. Thank you for the help.
As written, the given integral is divergent since the integrand, as $x \to 0^+$, admits the Laurent series expansion $$ \frac{1}{2(e^x -1)} - \frac{1}{\tan(x)\sin(x)}=-\frac1{x^2}+\frac1{2x}-\frac1{12}+O\left(x\right) $$ which is not Lebesgue integrable over $(0,\pi/2)$.
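The expansion is easy to confirm with sympy (my own sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = 1 / (2 * (sp.exp(x) - 1)) - 1 / (sp.tan(x) * sp.sin(x))
print(sp.series(f, x, 0, 1))
# -1/x**2 + 1/(2*x) - 1/12 + O(x)
```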
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Applications of words? What are some real-life applications of (Sturmian) words? I'm doing an undergraduate thesis on the Fibonacci infinite word $f$, and although what I'm doing is purely theoretical (by counting maximal occurrences of factors of $f$), I want to put in the introduction a one- or two-sentence application of words. But I can't seem to find one online. Thanks for the help.
Let me quote the introduction of J. Berstel, Sturmian and Episturmian Words. A Survey of Some Recent Results, Algebraic Informatics LNCS 4728, pp 23-47 (2007) Sturmian words have a geometric description as digitized straight lines. Computer representation of lines has been an active subject of research, although early theory of Sturmian words remained unnoticed in the pattern recognition community. The paper [1] is a review of recognition of straight lines with respect to interaction with other disciplines. [1] Klette, R., Rosenfeld, A.: Digital Straightness—a review Discrete Appl. Math. 139, 197–230 (2004)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Path on the torus Prove that the following map is smooth and analyse its image as the real number $\alpha$ varies (what does it look like? Is it a submanifold?) \begin{align*} f_{\alpha}:\mathbb{R}&\to\mathbb{S}^1\times\mathbb{S}^1\\ t&\mapsto (e^{2\pi it}, e^{2\pi \alpha it}) \end{align*} I've managed to prove that $f_{\alpha}$ is smooth and that $\text{Im}(f_{\alpha})$ is a closed curve on the torus $\Leftrightarrow \alpha\in\mathbb{Q}$. But I'm having trouble formalizing whether or not it is a submanifold of the torus. I actually don't know if it is true when the image is not a closed curve. How can I solve this?
Should it be $f_{\alpha}(t)=(e^{2\pi i t}, e^{2\pi \alpha i t})$ ? If so, sure it is a manifold when $\alpha\in\mathbb Q$ (with the proper identification it is simply a curve on $\mathbb R^2$). In this case, it is a curve that spirals around the torus until it closes (the original answer included a figure of such a spiral). When $\alpha\not\in\mathbb Q$ the curve is dense in the torus. Since rational numbers and irrational numbers are both dense, it is difficult to say much more about what happens as $\alpha$ varies.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Arranging colored balls in a line. Two balls in each of 5 distinct colors are collected (10 balls in total). * *In how many ways can the balls be arranged? *In how many ways can the balls be arranged so that no two balls of the same color are next to one another? The first is easy. If I have 10 spots and 10 balls, then I have 10 choices in the first place, 9 in the second, and so on. This yields $10!$ ways. The second is somewhat of a curve ball. I suppose that the answer would be $10! - N $, where $N$ is the number of arrangements with at least one pair of same-colored balls next to one another. I'm not sure how to approach finding $N$ and would like some help. Thanks
For the first case, we could arrange the balls in $10!$ different ways if they were unique. However, they aren't unique. For each color, there are two balls. Therefore, we can swap any of the balls that are the same color and have the same arrangement, so there are $2!$ different arrangements that are the same for each color. Therefore, we get $\frac{10!}{(2!)^5}$ different ways to arrange the $10$ colored balls. We have the $10!$ for each arrangement and divide by $2!$ $5$ times for the $5$ different colors. For the second question, this requires the inclusion exclusion principle. First we must find the number of combinations with one color next to the same colored orb. To do this, suppose that, say the blue balls, are one unit. Then there are $\frac{9!}{(2!)^4}$ arrangements where the blue balls are next to each other. We continue pairing colors that will become a unit like this and then apply the inclusion exclusion principle to see that the number of arrangements where no two similarly colored balls are next to each other is given by the following equation: $\frac{10!}{(2!)^5}-{5\choose1}\frac{9!}{(2!)^4}+{5\choose2}\frac{8!}{(2!)^3}-{5\choose3}\frac{7!}{(2!)^2}+{5\choose4}\frac{6!}{(2!)^1}-{5\choose5}\frac{5!}{(2!)^0}=39,480$ I only very briefly explained the use of the Inclusion-Exclusion Principle, but you can find more at its wikipedia page
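Both numbers can be checked in a few lines (my addition): the inclusion-exclusion formula evaluates to $39480$, and a brute force on a smaller case (2 colors, 4 balls) agrees with the analogous formula:

```python
from math import comb, factorial
from itertools import permutations

total = sum((-1) ** k * comb(5, k) * factorial(10 - k) // 2 ** (5 - k)
            for k in range(6))
print(total)  # 39480

# Brute force for 2 colors x 2 balls: the analogous formula gives
# 4!/(2!)^2 - C(2,1)*3!/2! + C(2,2)*2! = 6 - 6 + 2 = 2 (ABAB and BABA).
good = {p for p in permutations("AABB")
        if all(p[i] != p[i + 1] for i in range(3))}
print(len(good))  # 2
```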
{ "language": "en", "url": "https://math.stackexchange.com/questions/2025990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Polynomial division in $\Bbb Z/n\Bbb Z$ So, I know how polynomial division works in principle, but I currently have no idea what is asked here: We have to divide two polynomials, f = $4t^4-2t^3-3$ and g = $2t^2-3$, but in the polynomial ring $F_{p}[t]$ with p prime (F = $\mathbb{Z/pZ}$). So how does the algorithm for polynomial division change now?
The high-school polynomial division algorithm works over any commutative coefficient ring, as long as the divisor $g$ has invertible leading coefficient. Then the leading term of $g$ always divides any monomial of higher degree, thus the high-school division algorithm works to kill all higher degree terms in the dividend, leaving a remainder of degree smaller than $\,\deg g$. Alternatively we can reduce to the case where the divisor $\,g\,$ is monic (lead coef $= 1)$ as follows. Make the divisior $g$ monic by dividing it by its lead coef $c,\,$ then divide by the monic $\,g/c\,$ yielding $\, f = q(g/c)+ r = (q/c) g + r\,$ as desired, after moving the division by $\,c\,$ into the quotient. The division algorithm generally fails if the lead coef of $\,g\,$ is not invertible, e.g. $ \: x = 2x\:q + r\:$ has no solution for $ \:r\in \mathbb Z,\ q\in \mathbb Z[x],\:$ since evaluating at $ \:x=0\:$ $\Rightarrow$ $ \:r=0,\:$ evaluating at $ \:x=1\:$ $\Rightarrow$ $\:2\:|\:1\:$ in $\mathbb Z,\,$ contradiction. Notice that the same proof works in any coefficient ring $ \:R\:$ in which $2$ is not invertible. Conversely, if $2$ is invertible in $ \:R,$ say $ \:2u = 1\:$ for $ \:u\in R,\:$ then division is possible: $ \: x = 2x\cdot u + 0.$ However, we can generalize the division algorithm to the non-monic case as follows. Theorem (nonmonic Polynomial Division Algorithm) $\ $ Let $\,0\neq F,G\in A[x]\,$ be polynomials over a commutative ring $A,$ with $\,a\,$ = lead coef of $\,F,\,$ and $\, i \ge \max\{0,\,1+\deg G-\deg F\}.\,$ Then $\qquad\qquad \phantom{1^{1^{1^{1^{1^{1}}}}}}a^{i} G\, =\, Q F + R\ \ {\rm for\ some}\ \ Q,R\in A[x],\ \deg R < \deg F$ Proof $\ $ Hint: use induction on $\,\deg G.\,$ See this answer for a full proof.
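In practice sympy carries this out directly, e.g. over $\mathbb F_5$ (my sketch; the choice $p=5$ is arbitrary, and the leading coefficient $2$ of $g$ is invertible there):

```python
from sympy import symbols, Poly, GF

t = symbols('t')
f = Poly(4*t**4 - 2*t**3 - 3, t, domain=GF(5))
g = Poly(2*t**2 - 3, t, domain=GF(5))

q, r = f.div(g)          # division with remainder in F_5[t]
print(q, r)
assert f == q * g + r    # the division identity holds
```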
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
projection matrix using A = QR in a projection matrix where P = $A(A^TA)^{-1}A^T$, if you let A=QR where Q is orthogonal and R is invertible, P can then be expressed as P = $QQ^T$. I can follow all those steps, but I seem to also be able to come up with an incorrect solution, and can't figure out what rule I'm violating. P = $A(A^TA)^{-1}A^T$ and A = QR P = $QR((QR)^TQR)^{-1}(QR)^T$ P = $QR(R^TQ^TQR)^{-1}(QR)^T$ at this point if we simplify $Q^TQ$=I then it leads to the correct P=$QQ^T$ but if we don't combine and keep going... P = $QRR^{-1}Q^{-1}(Q^T)^{-1}(R^T)^{-1}(QR)^T$ P = $QRR^{-1}Q^{-1}(Q^T)^{-1}(R^T)^{-1}R^TQ^T$ P = $QIQ^{-1}(Q^T)^{-1}IQ^T$ = $QQ^{-1}(Q^T)^{-1}Q^T = II = I$ This is supposed to be the case when Q is square, but not in all cases. I'm pretty sure I did something illegal, but I can't seem to find where.
Hint: you are inverting $Q$. Is this well-defined for nonsquare matrices?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Change of mean for a standard normal variable Beginning to learn a bit of probability theory and I'm a little confused with this one. I know that given a standard normal variable $X \sim \mathcal{N}(0, 1)$, if $Y = X + c$ then $Y \sim \mathcal{N}(c, \sigma^2)$. Now if $Y = aX+b$, is $Y \sim \mathcal{N}(b/a, \sigma^2)$?
Let $X \sim \mathcal{N}(0, 1)$. Then, $E[X+c] = E[X] + E[c] = E[X] + c = c$. Also, $\text{Var}[X+c] = \text{Var}[X] = 1. $ So $Y \sim \mathcal{N}(c,1)$. If instead we have $Y=aX+b$, then $E[Y] = E[aX+b] = E[aX] + E[b] = a E[X] + b = a \cdot 0 + b$. Also, $\text{Var}[aX+b] = \text{Var}[aX] = a^2 \text{Var}[X] = a^2 $. So $Y \sim\mathcal{N}(b, a^2 )$. Take a look at the properties of expectation/variance, they will come up a lot. https://en.wikipedia.org/wiki/Expected_value#Properties https://en.wikipedia.org/wiki/Variance#Basic_properties EDIT: I'm assuming that you know these transformations still result in a normal distribution and am simply checking their mean & variance. The other answer provided is much more thorough . . .
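A numeric confirmation (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 3.0, -1.0
X = rng.standard_normal(10**6)
Y = a * X + b
print(Y.mean(), Y.var())  # ≈ -1 (= b) and ≈ 9 (= a^2)
```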
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Deformation retraction restricted to a path Let $X$ be a topological space with subspace $Y$. Suppose that we have a (strong) deformation retraction onto $Y$. That is, a continuous map $F:X\times [0,1]\to X$ such that for all $x\in X$, $F(x,0)=x$, $F(x,1)\in Y$, and for all $y\in Y$ and all $t\in [0,1]$, $F(y,t)=y$. Is it true that for any $x\in X$ and $t\in [0,1]$, $F(F(x,t),1)=F(x,1)$? It corresponds to my intuition regarding deformation retracts as paths $F(x,\cdot):I\to X$ in $X$ such that all points on the path have the same endpoint in $Y$. However I was unable to prove the result. Any thoughts on this? Any help is much appreciated.
Thank you for your answers. Indeed I now realize that the conjecture is false. In fact, I came up with quite a silly (but easy) counterexample myself: Let $X=\{a,b,c\}$ with trivial topology, $Y=\{a,b\}$. Define $F:X\times [0,1]\to X$ (continuous because $X$ has the trivial topology) by $$ F(a,t)=a,\quad F(b,t)=b \quad \mbox{ for all } t\in[0,1],\quad F(c,t)=\begin{cases} c &\mbox{ if }\quad t=0\\ a &\mbox{ if }\quad t\in(0,1)\\ b &\mbox{ if }\quad t=1 \end{cases} $$ Now (similarly to the example by Luiz) $$ F(F(c,t),1)=F(c,1)\iff t\in \{0,1\} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Homeomorphism of the cone on a space The unreduced cone $\text{Cone}(X)$ on a space $X$ is given by $$ \text{Cone}(X)=X\times I/ X\times \{1\}$$ where $I$ is a unit interval. Show that Cone($S^{n-1}$) is homeomorphic to $D^{n}$ where $S^{n-1}=\left\{x\in R^{n}| \left\| x \right\|=1 \right\}$ and $D^{n}=\left\{x\in R^{n}| \left\| x \right\| \leq 1 \right\}$. Intuitively I want to construct a homeomorphism by projecting some part of the cone into the interior of the sphere. But I don't know how to carry it out by using accurate mathematical language.
Let $p$ denote the apex of the cone $\text{Cone}(S^{n-1})$, i.e. the image of $S^{n-1}\times\{1\}$ under the quotient map. For each $x\in S^{n-1}$ and each $t\in[0,1]$ let $c(x,t)\in \text{Cone}(S^{n-1})$ denote the image of $(x,1-t)$ under the quotient map, written suggestively as $$ c(x,t)=(1-t)p+tx$$ Then let $f: \text{Cone}(S^{n-1})\to D^n$ be defined by $$ f(c(x,t))=tx $$ This is a well-defined continuous bijection from $\text{Cone}(S^{n-1})$ onto $D^n$ (the apex, where $t=0$, is sent to the origin regardless of $x$). Since $\text{Cone}(S^{n-1})$ is compact and $D^n$ is Hausdorff, the inverse is automatically continuous, so $f$ is a homeomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\mathbb R^3$ minus a line is connected. Let $S\subseteq \mathbb R^3$ be homeomorphic to $\mathbb R$. Prove that $\mathbb R^3 \setminus S$ is connected. I haven't been able to solve this, although my topology skills are pretty weak. My friend told me he managed to prove this using results from his "dimension topology" class. Although I think this should be solvable using more mainstream stuff like homology or something. But I'm not sure. Regards.
Here's a nice hint: The zero-th homology $H_0(\mathbb{R}^3 - S)$ must be a direct sum of $n$ copies of $\mathbb{Z}$, where $n$ denotes the number of connected components of $\mathbb{R}^3 - S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 5, "answer_id": 2 }
Using ZFC axioms, prove that the set $\{ \emptyset \}$ exists. As the title describes, I want to prove that the set $\{ \emptyset \}$ exists, using ZFC axioms. I have an answer that I wish to check if I understood ZFC correctly. Is it that simple as: 1) The empty set axiom - There is a set having no elements. we get $\{ \}$. 2) The Power Set axiom - For every set $A$, there is a set $B$ whose elements are the subsets of $A$. We get $\{ \emptyset \}$. Thanks.
This really depends on what axioms you have at your disposal. Your suggestion works, if you have the empty set axiom and power set. You can also use the empty set and pairing. You can also use just the axiom of infinity (well, and extensionality; you can't do anything without extensionality!): Let $A$ be an inductive set whose existence is guaranteed by the axiom; then by the axiom $\varnothing\in A$, and therefore $\varnothing\cup\{\varnothing\}\in A$. In particular, $\{\varnothing\}$ exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2026971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Real-valued function with strictly positive integral on every subinterval of [0, 1] Assume we have a Riemann-integrable function $h: [0, 1] \to \mathbb R$ such that \begin{equation} \int_a^b h(x)dx > 0\quad \forall a, b: 0 \leq a < b \leq 1. \end{equation} Further assume that $f, g: [0, 1] \to \mathbb R$ are continuous functions with \begin{equation} f(x) \geq g(x) \quad\forall x \in [0, 1]. \end{equation} I suspect that in this case \begin{equation} \int_0^1 f(x)h(x)dx \geq \int_0^1 g(x)h(x)dx. \end{equation} However, I cannot prove this (I need this as part of another proof). I would like to stay within "elementary" analysis, i.e., do not resort to measure theory (no "almost everywhere" stuff). Many thanks.
I have found a proof that works even under weaker conditions. $f$ and $g$ need to be merely Riemann-integrable and $\int_a^b h(x) \, dx \ge 0$ needs to hold (i.e. the integral does not have to be strictly positive). We can assume wlog $g \equiv 0$. Let $Z_n$ be a sequence of meshes of $[0, 1]$ with $|Z_n| \to 0$. Since $h$ is Riemann-integrable, we know $\int_0^1 h(x) \, dx = \lim \limits_{n \to \infty} \overline{S}(Z_n, h)$, where $\overline{S}(Z, h) = \sum \limits_{I \in Z} |I| \sup \limits_{x \in I} h(x)$. Since the integral of $h$ is nonnegative on every interval, we know that $\sup \limits_{x \in I} h(x) \ge 0$ holds for all (nondegenerate) intervals $I \subset [0, 1]$. But from this we can conclude that $\sup \limits_{x \in I} \{f(x) h(x)\} \ge 0$. Since $f \cdot h$ is Riemann-integrable, this yields the assertion: $$\int f(x) h(x) \, dx = \lim \limits_{n \to \infty} \overline{S}(Z_n, f \cdot h) = \lim \limits_{n \to \infty} \sum \limits_{I \in Z_n} |I| \sup \limits_{x \in I}\{ f(x) h(x)\} \ge \lim \limits_{n \to \infty} \sum \limits_{I \in Z_n} 0 = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finitely generated as a field vs. as an algebra? Let $K$ be a field, $L$ a $K$-algebra. For $x_1, \ldots, x_n \in L$, denote $K[x_1, \ldots, x_n]$ for the smallest subalgebra of $L$ containing $K$ and $x_1, \ldots, x_n$. If $L$ is a field, denote $K(x_1, \ldots, x_n)$ for the smallest subfield of $L$ containing $K$ and $x_1, \ldots, x_n$. Clearly $K[x_1, \ldots, x_n] \subseteq K(x_1, \ldots, x_n)$. If $L$ is a field and $L = K(x_1, \ldots, x_n)$ for some $x_1, \ldots, x_n \in L$, can we conclude that $L = K[y_1, \ldots, y_m]$ for some $y_1, \ldots, y_m \in L$? Both hints and full answers are appreciated.
No. For example, $\;L=\Bbb Q(\pi)\;$ is a field, but it can't be that $\;L=\Bbb Q[\alpha]\;$ , as: (1) If $\;\alpha\;$ is algebraic over $\;\Bbb Q\;$ then $\;\Bbb Q[\alpha]=\Bbb Q(\alpha)\neq\Bbb Q(\pi)\;$ , as the last one is infinite dimensional over $\;\Bbb Q\;$ whereas the first one is finite dimensional, and (2) if $\;\alpha\;$ is transcendental over $\;\Bbb Q\;$ then $\;\Bbb Q[\alpha]\;$ is not even a field and in fact $\;\Bbb Q[\alpha]\cong\Bbb Q[x]=$ the "usual" polynomial ring of rational polynomials
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof why $\frac{1}{n^x}+\frac{1}{(n+1)^x}=\frac{H_{{n+1,x}}}{n^xH_{n,x}}+\frac{H_{{n-1,x}}}{(n+1)^xH_{n,x}}$ I found this in a not very straightforward way, and it seems like a rather strange-looking identity, but there is probably a simple proof. $$\frac{1}{n^x}+\frac{1}{(n+1)^x}=\frac{H_{{n+1,x}}}{n^xH_{n,x}}+\frac{H_{{n-1,x}}}{(n+1)^xH_{n,x}}$$ Where $H_{n,x}$ is the Generalized Harmonic number.
Using the simple computation $$ \frac{H_{n,x}}{n^x}+\frac{H_{n,x}}{(n+1)^x}=\frac{H_{n+1,x}-(n+1)^{-x}}{n^x}+\frac{H_{n-1,x}+n^{-x}}{(n+1)^x}=\frac{H_{n+1,x}}{n^x}+\frac{H_{n-1,x}}{(n+1)^x}, $$ the result follows immediately upon dividing both sides by $H_{n,x}$.
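A quick numeric check of the identity (my addition):

```python
import math

def H(n, x):
    # Generalized harmonic number H_{n,x}.
    return sum(1.0 / k**x for k in range(1, n + 1))

n, x = 7, 2.5
lhs = 1 / n**x + 1 / (n + 1)**x
rhs = H(n + 1, x) / (n**x * H(n, x)) + H(n - 1, x) / ((n + 1)**x * H(n, x))
print(math.isclose(lhs, rhs))  # True
```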
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
System of four equations of four variables including second powers. I've been tasked with solving the following system of equations and it seems like I am stuck: $$a-x^2=y$$$$a-y^2=z$$$$a-z^2=t$$$$a-t^2=x,$$where $a$ is a real number, for which $0\leq a\leq 1$. I thought the best way would be to subtract some equations from each other and then exploit $x^2-y^2=(x+y)(x-y)$. Even some estimates could be useful, since we have an estimate for $a$. However, I put this system of equations into WolframAlpha and the solutions (depending on $a$) looked very unwieldy. In general, I have very little experience in solving quadratic systems of equations like this one; could somebody please point me in the right direction? Thanks a lot!
Because of the symmetry, it is natural to assume $x=y=z=t$, which gives you an easily solvable quadratic. The solutions are $\frac 12(-1 \pm \sqrt{4a+1})$, which are real when $a \ge -\frac 14$, covering your region of interest. Another approach is to assume $x=z, y=t$, which gives $a-(a-x^2)^2=x$ and the additional two solutions $x=\frac 12(1\pm \sqrt {4a-3}), y=\frac 12(1\mp \sqrt {4a-3})$, which are real when $a \ge \frac 34$. If you don't assume the equality, you get the sixteenth degree polynomial of Dr. Sonnhard Graubner. These will reduce the degree to $12$, but there is still a long way to go.
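Both families are easy to verify numerically, e.g. at $a=1$ (my own sketch):

```python
import math

def check(a, x, y, z, t):
    return all(math.isclose(u, v, abs_tol=1e-12) for u, v in
               [(a - x*x, y), (a - y*y, z), (a - z*z, t), (a - t*t, x)])

a = 1.0
for s in (1, -1):                      # symmetric family x = y = z = t
    x = (-1 + s * math.sqrt(4*a + 1)) / 2
    print(check(a, x, x, x, x))        # True
for s in (1, -1):                      # family x = z, y = t (needs a >= 3/4)
    x = (1 + s * math.sqrt(4*a - 3)) / 2
    y = (1 - s * math.sqrt(4*a - 3)) / 2
    print(check(a, x, y, x, y))        # True
```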
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Equivalence of convergence of a series and convergence of an infinite product Let $(a_n)_n$ be a sequence of non-negative real numbers. Prove that $\sum\limits_{n=1}^{\infty}a_n$ converges if and only if the infinite product $\prod\limits_{n=1}^{\infty}(1+a_n)$ converges. I don't necessarily need a full solution, just a clue on how to proceed would be appreciated. Could any convergence test like the ratio or the root test help?
A way is to use the logarithm and the equivalence $$\ln(1+x) \sim x$$ around the origin. It is interesting to note that for complex series (or series not having a constant sign) $\sum z_n$, the absolute convergence of the series implies the convergence of the product $\displaystyle \prod (1+z_n)$, but the converse is not always true. See my counterexamples web site.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Bode plot of the open loop given the state space - SIMO I have a SIMO system. $\xi$ is the input and $Y$ is the output. The state space model is given by \begin{align} \dot{X} &= AX + B\xi\\ y&=r-Cx \end{align} $A$ is $5 \times 5$ matrix. $B$ is $5 \times 1$. The controller $K$ is a state feedback controller such that $A - BK$ is Hurwitz. The schematic overview is shown in the figure. How can I find out the open loop or closed loop bandwidth for this system? Edit: By bandwidth I mean the frequency at which the open loop transfer function crosses the $0$dB line. Edit2: Maybe the question wasn't very clear. After I design the feedback control law, my system is \begin{equation} \dot{X} = (A - BK)X \end{equation} Now the transfer function is essentially from reference $r$ to a chosen output $y$. But given $A-BK$, how do I find the transfer function? Note that the feedback is a state feedback and hence is very difficult to write as a transfer function.
Given a state space model of the following form, $$ \dot{x} = A\,x + B\,u, \tag{1} $$ $$ y = C\,x + D\,u. \tag{2} $$ The openloop transfer function of this system can be found by taking the Laplace transform and assuming all initial conditions to be zero (such that $\mathcal{L}\{\dot{x}(t)\}$ can just be written as $s\,X(s)$). Doing this for equation $(1)$ yields, $$ s\,X(s) = A\,X(s) + B\,U(s), \tag{3} $$ which can be rewritten as, $$ X(s) = (s\,I - A)^{-1} B\,U(s). \tag{4} $$ Substituting this into equation $(2)$ and defining the openloop transfer function $G(s)$ as the ratio between output ($Y(s)$) and input ($U(s)$) yields, $$ G(s) = C\,(s\,I - A)^{-1} B + D. \tag{5} $$ In a normal block diagram representation the controller has as an input $r-y$, with $r$ the reference value you would like to have for $y$, and an output $u$, which would be the input to $G(s)$. For now $r$ can be set to zero, so the controller can be defined as the transfer function from $-y$ to $u$. For an observer based controller ($L$ and $K$ such that $A-B\,K$ and $A-L\,C$ are Hurwitz) for a state space model we can write the following dynamics, $$ u = -K\,\hat{x}, \tag{6} $$ $$ \dot{x} = A\,x - B\,K\,\hat{x}, \tag{7} $$ $$ \dot{\hat{x}} = A\,\hat{x} + B\,u + L(y - C\,\hat{x} - D\,u) = (A - B\,K - L\,C + L\,D\,K) \hat{x} + L\,y. \tag{8} $$ Similar to equations $(1)$, $(2)$ and $(5)$, the transfer function of the controller $C(s)$, defined as the ratio of $U(s)$ and $-Y(s)$, can be found to be, $$ C(s) = K\,(s\,I - A + B\,K + L\,C - L\,D\,K)^{-1} L. \tag{9} $$ If you want to find the total openloop transfer function from "$-y$" to "$y$" you have to keep in mind that in general $G(s)$ and $C(s)$ are matrices of transfer functions, so the order of multiplication matters. Namely you first multiply the error ($r-y$) with the controller and then the plant, the openloop transfer function can be written as $G(s)\,C(s)$. The closedloop transfer function can then be found with, $$ \frac{Y(s)}{R(s)} = (I + G(s)\,C(s))^{-1} G(s)\,C(s). \tag{10} $$ It can also be found directly using equations $(2)$ and $(6)$, and the closedloop state space model dynamics, $$ \begin{bmatrix} \dot{x} \\ \dot{\hat{x}} \end{bmatrix} = \begin{bmatrix} A & -B\,K \\ L\,C & A - B\,K - L\,C \end{bmatrix} \begin{bmatrix} x \\ \hat{x} \end{bmatrix} + \begin{bmatrix} 0 \\ -L \end{bmatrix} r, \tag{11} $$ $$ \frac{Y(s)}{R(s)} = \begin{bmatrix} C & -D\,K \end{bmatrix} \begin{bmatrix} s\,I - A & B\,K \\ -L\,C & s\,I - A + B\,K + L\,C \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ -L \end{bmatrix}. \tag{12} $$
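Numerically this is straightforward to automate. Below is a sketch (entirely my own; the 2-state plant, the pole locations, and the assumption that $r$ enters through $B$ are all made up for illustration, standing in for the question's 5-state SIMO system) that forms the closed loop $\dot{x}=(A-B\,K)x+B\,r$, $y=C\,x$ and reads a bandwidth off its frequency response:

```python
import numpy as np
from scipy import signal

# Hypothetical small plant (stands in for the 5x5 A, 5x1 B of the question).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))

# State feedback making A - BK Hurwitz (poles chosen arbitrarily here).
K = signal.place_poles(A, B, [-2.0, -5.0]).gain_matrix

# Closed loop from r to y, assuming r enters through B.
cl = signal.StateSpace(A - B @ K, B, C, D)
w, mag_db, _ = signal.bode(cl, w=np.logspace(-2, 3, 1000))

# -3 dB bandwidth relative to the low-frequency (DC) gain.
bw = w[np.argmax(mag_db < mag_db[0] - 3.0)]
print("closed-loop bandwidth ~", bw, "rad/s")
```

For the crossover definition used in the question (the frequency where the open-loop magnitude crosses 0 dB), the same bode call can be applied to the loop transfer function instead of the closed loop.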
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$\forall n\in\mathbb N$ prove that at least one of the numbers $3^{3n}-2^{3n}$, $3^{3n}+2^{3n}$ is divisible by $35$. $3^{3n}-2^{3n}=27^n-8^n=(27-8)(27^{n-1}+27^{n-2}\cdot 8+...+27^1\cdot8^{n-2}+8^{n-1})$ If $n$ is odd, $3^{3n}+2^{3n}=27^n+8^n=(27+8)(27^{n-1}-27^{n-2}\cdot 8+...-27^1\cdot8^{n-2}+8^{n-1})$ If $n$ is even and a power of $2$, $3^{3n}+2^{3n}$ can't be factorized. If $n$ is even, $n=m\cdot 2^k,m>1,k>0$ and $m$ is odd $\Rightarrow$ $$27^n+8^n=(27^{2^k}+8^{2^k})\sum_{i=1}^m 27^{(m-i)2^k}(-8^{2^k})^{i-1}$$ How to check divisibility using these cases?
Hint $\ {\rm mod}\ 35\!:\,\ 27\equiv -8\,\ $ so $\,\ \overbrace{27^{\large n}}^{\Large 3^{\Large 3n}}\equiv (-8)^{\large n}\equiv \pm\! \overbrace{8^{\large n}}^{\Large 2^{\Large 3n}}\,$ depending on parity of $n.\ $ QED.
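A two-line numerical sanity check of the hint, in plain Python:

```python
# mod 35: 27 = -8, so 27^n - 8^n or 27^n + 8^n vanishes according to the parity of n.
for n in range(1, 11):
    print(n, (3**(3*n) - 2**(3*n)) % 35, (3**(3*n) + 2**(3*n)) % 35)
    assert (3**(3*n) - 2**(3*n)) % 35 == 0 or (3**(3*n) + 2**(3*n)) % 35 == 0
```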
{ "language": "en", "url": "https://math.stackexchange.com/questions/2027993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
heterogeneous recurrence with f(n) as constant How to solve $s_{n+1}=4s_{n-1}-3s_n+5$, where $f(n)=5$, with conditions $s_0=-3$, $s_1=3$? I calculated the general solution $s_n=c_1*(-4)^n+c_2*1^n$ of the homogeneous recurrence. The roots are $q_1=-4$ and $q_2=1$, but I have a problem with the particular solution using the method of prediction (undetermined coefficients) — the constant $5$ is what troubles me. The full solution (homogeneous + particular) should be $s(n) = n - (-4)^n - 2$. I study in Polish, so I can't find the right words to describe this.
$s_{n+1}=4s_{n-1}-3s_n+5$ $q^2+3q-4=0$ $\Delta=3^2-4*1*(-4)=25$ $\sqrt\Delta=5$ $q_1=\frac{-3-5}2=-4$ $q_2=\frac{-3+5}2=1$ homo general $s_n=c_1*1^n+c_2*(-4)^n$ $k=1$ where $k$ is the multiplicity of the root $q=1$ hetero particular $s_n=Q(n)*q^n*n^k$ $s_n=A*1^n*n^1=An$; $1^n=1$ because $1^0=1$, $1^1=1$ and so on $A(n+1)=4(A(n-1))-3An+5$ $An+A=4An-4A-3An+5$ $An+A=An-4A+5$ $5A=5$ so $A=1$ hetero particular $s_n=n$ hetero general $s_n=c_1*1^n+c_2*(-4)^n+n$ Now plug in the initial conditions, remembering that the particular part contributes $+1$ at $n=1$: $\begin{cases} -3=s_0=c_1*1^0+c_2*(-4)^0+0=c_1+c_2 \\ 3=s_1=c_1*1^1+c_2*(-4)^1+1=c_1-4c_2+1 \end{cases}$ From the first equation $c_1=-3-c_2$, so $3=-3-c_2-4c_2+1$, hence $-5c_2=5$, i.e. $c_2=-1$ and $c_1=-2$. hetero general $s_n=-2*1^n-1*(-4)^n+n=n-(-4)^n-2$, which matches the solution stated in the question.
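A quick check of the final closed form against the recurrence, in plain Python:

```python
# s_{n+1} = 4*s_{n-1} - 3*s_n + 5, s_0 = -3, s_1 = 3; closed form s_n = n - (-4)^n - 2.
s = [-3, 3]
for n in range(1, 10):
    s.append(4 * s[n - 1] - 3 * s[n] + 5)
print(s == [n - (-4)**n - 2 for n in range(len(s))])  # True
```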
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Resultant contains all common roots as linear factors? Let $f,g \in \mathbb{C}[x,y,z]$ be homogeneous polynomials, so they define projective plane curves $C$ and $D$ in $\mathbb{C}P^2$. We are interested in Bezout's theorem applied to $C \cap D$. Write $f$ and $g$ as polynomials in $z$: $$ f(x,y,z) = \sum_{i = 0}^ma_i(x,y)z^{m-i}, \quad g(x,y,z) = \sum_{j = 0}^nb_j(x,y)z^{n-j}, $$ where $a_i, b_j \in \mathbb{C}[x,y]$ are homogeneous of degree equal to their index. The resultant of $R(f,g)$ is a homogeneous polynomial in $x,y$, so we may factor it into linear factors. $R(f,g)$ vanishes at some $x_0,y_0$ if and only if $f(x_0,y_0,z), g(x_0,y_0,z) \in \mathbb{C}[z]$ have a common root. My question is: if $f(x_0,y_0,z)$ and $g(x_0,y_0,z)$ have many distinct common roots $z_1,\dotsc,z_k$, then we have different points $P_i = (x_0:y_0:z_i) \in C \cap D$. But how do we define the intersection multiplicity of each $P_i$? Section 4.2 of this book defines it to be the multiplicity of the factor $(x_0y - y_0x)$ in $R(f,g)$, but that would be the same for all $P_i$ which is very strange.
The multiplicity of that factor would be the sum of the intersection multiplicities at the $P_i$. To get individual intersection multiplicities, you want that number $k$ to be 1. In $\mathbb{C}[x,y,z]$ you can accomplish that by first applying a generic linear transformation, $(x,y,z) \mapsto$ linear combinations of $x,y,z$. An easy (but computationally inefficient) way to do this is: pick two constants $t_1,t_2$ that are algebraically independent over the field of constants over which $f,g$ are defined, and then do $(x,y,z) \mapsto (x+t_1 z,y+t_2 z,z)$. After that, $k$ will each time be $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why don't quaternions contradict the Fundamental Theorem of Algebra? I don't pretend to know anything much about the Fundamental Theorem of Algebra (FTA), but I do know what it states: for any polynomial with degree $n$, there are exactly $n$ solutions (roots). Well, when it comes to quaternions, apparently $i^2=j^2=k^2=-1$, but $i\ne j\ne k\ne i$. So now, we have apparently found three solutions to the second-degree polynomial $x^2=-1$. I'm not aware of the justification of the FTA, nor am I aware of Hamilton's justification for quaternions. However, I know a contradiction when I see one. What am I missing here?
Let me put my comments in an answer. As SpamIAm said, the FTA has a generalization in any integral domain : If $R$ is an integral domain, then any polynomial $P \in R[x]$ of degree $n$ has at most $n$ roots (counted with multiplicity) The proof is that : $\quad $(a field is commutative, otherwise we say a non-commutative field) * *Any integral domain can be embedded in its field of fraction $K$ *Any field $K$ can be embedded in its algebraic closure $\overline{K}$ *In an algebraically closed field $\overline{K}$, the FTA is Any polynomial $P \in \overline{K}[x]$ of degree $n$ has exactly $n$ roots (counted with multiplicity) proof : * *since $\overline{K}$ is algebraically closed $P \in \overline{K}[x]$ has at least one root $a \in \overline{K}$ *if $F$ is a field, then $P \in F[x]$ and $ P(a) = 0$ for some $ a \in F \implies P(x) = (x-a)Q(x)$ for some $Q\in F[x]$ (see polynomial long division) *Repeating the factorization on $Q(x)$ you have $P(x) = C \prod_{j=1}^n (x-a_j)$ *If $R$ is an integral domain and $P(x) = C \prod_{j=1}^n (x-a_j)$ with $a_j \in R$ and $C \ne 0$ then $P(b) = 0 \implies b = a_j$ for some $j$ This way you can see what steps fail for $\mathbb{H}$ a non-commutative field.
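To make the failure concrete: in $\Bbb H$ the equation $x^2=-1$ has infinitely many solutions, namely every "pure" unit quaternion $bi+cj+dk$ with $b^2+c^2+d^2=1$. A throwaway Python sketch — the 4-tuple multiplication below is just the Hamilton product written out by hand, not a library API:

```python
import math

def qmul(p, q):
    # Hamilton product of quaternions represented as (w, x, y, z)
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

for b, c, d in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.6, 0.8, 0.0),
                (1/math.sqrt(3),) * 3]:
    print(qmul((0.0, b, c, d), (0.0, b, c, d)))  # always (-1, 0, 0, 0), up to rounding
```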
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 4, "answer_id": 0 }
Inequality with $x,y,z\geq 0$, $x+y+z=1.$ With $x,y,z\geq 0$, $x+y+z=1$.Prove that $$\sqrt{x+y^2}+\sqrt{y+z^2}+\sqrt{z+x^2}\geq 2 \tag{i}$$ The hint is using a lemma: If $a,b,c,d\geq 0 $satisfying $a+b=c+d$ and$|a-b|\leqslant|c-d|$ then we have $\sqrt{a}+\sqrt{b}\geq \sqrt{c}+\sqrt{d}$ How to prove this lemma? And is there a different way to prove the inequality (i)?
There is the following solution by Vo Quoc Ba Can. We need to prove that $$\sum\limits_{cyc}\left(\sqrt{x+y^2}-y\right)\geq1$$ or $$\sum\limits_{cyc}\frac{x}{\sqrt{x+y^2}+y}\geq1.$$ Now, by AM-GM $$\sum\limits_{cyc}\frac{x}{\sqrt{x+y^2}+y}=\sum\limits_{cyc}\frac{x(x+y)}{(x+y)\sqrt{x+y^2}+y(x+y)}\geq\sum\limits_{cyc}\frac{x(x+y)}{\frac{1}{2}((x+y)^2+x+y^2)+y(x+y)}.$$ Thus, it remains to prove that $$\sum\limits_{cyc}\frac{x(x+y)}{2x^2+4y^2+5xy+xz}\geq\frac{1}{2},$$ which is $$\sum\limits_{cyc}(4x^4y^2+16x^4yz+3x^3y^2z-19x^3z^2y-4x^2y^2z^2)\geq0,$$ which is obvious. Done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
ending zeros in 100! I'm working through Hammack's Book of Proof. Section 3.2 has a weird question, and unfortunately it's even-numbered, so there is no answer key. "There are two 0's at the end of 10! = 3,628,800. Using only pencil and paper, determine how many 0's are at the end of the number 100!." I used the special case of De Polignac's formula for factorials to get tz(100!) = 24. I believe this is the right answer. But the thing is, this question (and this solution) is totally unlike everything else I've seen in the book. This chapter is about 'Counting' (factorials, unique lists, etc). I'm wondering if there is some other way to get the answer using tools I'd seen in the text so far, and not this kind of weird excursion into number theory. Like some intuitive way to think through the question.
I believe the best way is what you did! Indeed the simple formula applied to that gives you $$n = \text{int}\left(\frac{100}{5^1}\right) + \text{int}\left(\frac{100}{5^2}\right) = 20 + 4 = 24$$
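A quick cross-check of the formula against the actual digits of $100!$, in Python:

```python
from math import factorial

zeros = 100 // 5 + 100 // 25          # Legendre/De Polignac: exponent of 5 in 100!
digits = str(factorial(100))
print(zeros, len(digits) - len(digits.rstrip('0')))  # both print 24
```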
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
What is exactly the relation between vectors, matrices, and tensors? I am trying to understand what Tensors are (I have no physics or pure math background, and am starting with machine learning). In an introduction to Tensors it is said that tensors are a generalization of scalars, vectors and matrices: Scalars are 0-order tensors, vectors are 1-order tensors, and matrices are 2-order tensors. n-order tensors are simply an n-dimensional array of numbers. However, this does not say anything about the algebraic or geometric properties of tensors, and it seems from reading around the internet that tensors are algebraically defined, rather than defined as an array of numbers. Similarly, vectors are defined as elements of a set that satisfy the vector space axioms. I understand this definition of a vector space. And I understand that matrices are representations of linear transformations of vectors. Question: I am trying to understand what a Tensor is more intuitively, and what the algebraic and intuitive/geometric relation is between tensors on the one hand, and vectors/matrices on the other. (taking into account that matrices are representations of linear transformations of vectors)
Vectors and tensors are mathematical objects that are invariant under the choice of coordinate system. Matrices and arrays, for that matter, are representations of rank-2 tensors or vectors in a given coordinate system. As for the intuitive understanding part, take the analogy of a vector. By definition a vector is a magnitude and a direction. If I said "5 miles an hour that a way" and pointed, that's a velocity vector. Until you put a coordinate system on it you can't write it down as an ordered pair. A tensor, likewise, is the underlying linear transformation, which is invariant. Once you impose a coordinate system you can write it down as a matrix. That's why tensors are often used to describe material properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Fourier transform of $F(x)$ $$F(x)=\int^{\infty}_{-\infty} e^{-i \xi x}f(x)\ dx$$ Fourier transform of $F$: $$\int^{\infty}_{-\infty}\left[\int^{\infty}_{-\infty} f(\xi)e^{-x^2\xi^2}\ d\xi\right]\ dx$$ Is that the way to proceed? If yes, how do I continue and if No, show me how I should proceed.
By Fourier inversion, $f(x)=\frac{1}{2\pi}\int^{\infty}_{-\infty} F(\xi)\, e^{i\xi x}\, d\xi$. Hence the Fourier transform of $F$ is $$\int^{\infty}_{-\infty} F(x)\, e^{-i\xi x}\, dx=\int^{\infty}_{-\infty} F(x)\, e^{ix(-\xi)}\, dx=2\pi\, f(-\xi).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find $\lim \limits_{n \to \infty}{1*4*7*\dots(3n+1) \over 2*5*8* \dots (3n+2)}$ I am trying to find $$\lim \limits_{n \to \infty}{1*4*7*\dots(3n+1) \over 2*5*8* \dots (3n+2)}$$ My first guess is to look at the reciprocal and isolate factors: $${2 \over 1}{5 \over 4}{8 \over 7} \dots {3n+2 \over 3n+1}= {\left(1+1\right)}\left(1+{1 \over 4}\right)\left(1+{1 \over 7}\right) \dots \left(1+{1 \over 3n+1}\right)$$ Now we take the natural log: $$\ln\left(1+1\right) + \ln\left(1+{1 \over 4}\right) +\ln\left(1+{1 \over 7}\right) + \dots +\ln\left(1+{1 \over 3n+1}\right)$$ Using $\ln(1+x) \le x$, we get: $$\ln\left(1+1\right) + \ln\left(1+{1 \over 4}\right) +\ln\left(1+{1 \over 7}\right) + \dots +\ln\left(1+{1 \over 3n+1}\right) \le 1 + {1 \over 4} + {1 \over 7} + \dots + {1 \over 3n+1}$$ Now, I'm stuck. I suppose I might use the fact that the RHS is similar to the harmonic series and show that it converges to some log, but I'm not sure how to do that. Do you have any clues?
You are on the right track. Use the inequality $\ln(1+x)\ge x-x^2/2$ for $x\ge0$: $$ \sum_{k=0}^n\ln\Bigl(1+\frac1{3\,k+1}\Bigr)\ge\sum_{k=1}^n\Bigl(\frac{1}{3\,k+1}-\frac12\,\frac{1}{(3\,k+1)^2}\Bigr)=\sum_{k=1}^n\frac{1}{3\,k+1}-\frac12\sum_{k=1}^n\frac{1}{(3\,k+1)^2}. $$ The first sum goes to $+\infty$ because the series $\sum_{k=0}^\infty1/(3\,k+1)$ is divergent, and the second sum is bounded because the series $\sum_{k=0}^\infty1/(3\,k+1)^2$ is convergent. This means that the original limit is equal to $0$.
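A numerical look at the partial products (Python), consistent with the limit $0$; the decay is in fact roughly like $N^{-1/3}$, which is why it looks slow:

```python
from math import prod

for N in (10, 100, 1000, 10000, 100000):
    p = prod((3*k + 1) / (3*k + 2) for k in range(N + 1))
    print(N, p)
```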
{ "language": "en", "url": "https://math.stackexchange.com/questions/2028920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Is there a bijection from $[0,1]$ to $\mathbb{R}$? I'm looking for a bijection from the closed interval $[0,1]$ to the real line. I have already thought of $\tan(x-\frac{\pi}{2})$ and $-\cot\pi x$, but these functions aren't defined on 0 and 1. Does anyone know how to find such a function and/or if it even exists? Thanks in advance!
A continuous bijection can't exist because $[0,1]$ is a compact set and continuous functions send compact sets to compact sets. You can look for a non-continuous bijection, which exists because $[0,1]$ and $\mathbb R$ have the same cardinality. This follows from the Cantor–Bernstein theorem, which states: "given two sets $A, B$, if there exist two injective functions $f:A\rightarrow B$ and $g:B\rightarrow A$, then there exists a bijective function $h:A\rightarrow B$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Advice on proof in linear algebra. I just wrote my first proof in linear algebra so I'd love some advice on the things that go well and what could be improved upon. It's a proof by induction. Theorem: Let $A_n$ be an $n\times n$ matrix of the form: $\begin{pmatrix} 2 & 1 & 0 & 0 && & \cdots & 0\\ 1 & 2 & 1 & 0 && & \cdots & 0\\ 0 & 1 & 2 & 1 && & \cdots & 0\\ 0 & 0 & 1 & 2 && & \cdots & 0\\ \vdots& & & & \ddots & && \vdots\\ 0&\cdots&&&&2&1&0\\ 0&\cdots&&&&1&2&1\\ 0&\cdots&&&&0&1&2\\ \end{pmatrix}$ Then $det(A_n)=3(n-1)$. Proof: We'll give a proof by mathematical induction. Let $n=2$: $\begin{vmatrix} 2&1\\ 1&2\\ \end{vmatrix}=4-1=3$. Let $n=3$: $\begin{vmatrix} 2&1&0\\ 1&2&1\\ 0&1&2\\ \end{vmatrix}=2\begin{vmatrix} 2&1\\ 1&2\\ \end{vmatrix}-\begin{vmatrix} 1&0\\ 1&2\\ \end{vmatrix}=8-2=6$ Let $n=4$: $\begin{vmatrix} 2&1&0&0\\ 1&2&1&0\\ 0&1&2&1\\ 0&0&1&2\\ \end{vmatrix}=2\begin{vmatrix} 2&1&0\\ 1&2&1\\ 0&1&2\\ \end{vmatrix}-\begin{vmatrix} 1&0&0\\ 1&2&1\\ 0&1&2\\ \end{vmatrix}=2\begin{vmatrix} 2&1&0\\ 1&2&1\\ 0&1&2\\ \end{vmatrix}-\begin{vmatrix} 2&1\\ 1&2\\ \end{vmatrix}=12-3=9$ Notice how due to the repetitive nature of our matrix, we can devise a recursive formula for our determinant: $|A_n|=2|A_{n-1}|-|A_{n-2}|$. If we assume: $|A_n|=3(n-1)$, $|A_{n-1}|=3(n-2)$, $|A_{n-2}|=3(n-3)$, then: $|A_n|=6(n-2)-3(n-3)=3(n-1)$ Since we checked $n=2,3,4$, we conclude that $|A_n|=3(n-1)$ for all $n \in \mathbb{N}$ with $n>1$.
You could remark on some of the easy steps you did, making the proof more understandable yet (even though they are very simple). For example, you could write $3=3(2-1)$ (same with the other two) to show that the formula holds for the smaller values of $n$. Moreover, and more importantly, you should explain how you used Laplace's formula to derive the recurrence relationship (which involves another induction proof). It is not enough to show that the relationship is true for some values of $n$, but, of course, it will help you to generalize the expression.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Examples of pairwise independent but not independent continuous random variables By considering the set $\{1,2,3,4\}$, one can easily come up with an example (attributed to S. Bernstein) of pairwise independent but not independent random variables. Could anybody give an example with continuous random variables?
An answer of mine on stats.SE gives essentially the same answer as the one given by vadim123. Consider three standard normal random variables $X,Y,Z$ whose joint probability density function $f_{X,Y,Z}(x,y,z)$ is not $\phi(x)\phi(y)\phi(z)$ where $\phi(\cdot)$ is the standard normal density, but rather $$f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z) & ~~~~\text{if}~ x \geq 0, y\geq 0, z \geq 0,\\ & \text{or if}~ x < 0, y < 0, z \geq 0,\\ & \text{or if}~ x < 0, y\geq 0, z < 0,\\ & \text{or if}~ x \geq 0, y< 0, z < 0,\\ 0 & \text{otherwise.} \end{cases}\tag{1}$$ We can calculate the joint density of any pair of the random variables, (say $X$ and $Z$) by integrating out the joint density with respect to the unwanted variable, that is, $$f_{X,Z}(x,z) = \int_{-\infty}^\infty f_{X,Y,Z}(x,y,z)\,\mathrm dy. \tag{2}$$ * *If $x \geq 0, z \geq 0$ or if $x < 0, z < 0$, then $f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z), & y \geq 0,\\ 0, & y < 0,\end{cases}$ and so $(2)$ reduces to $$f_{X,Z}(x,z) = \phi(x)\phi(z)\int_{0}^\infty 2\phi(y)\,\mathrm dy = \phi(x)\phi(z). \tag{3}$$ *If $x \geq 0, z < 0$ or if $x < 0, z \geq 0$, then $f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z), & y < 0,\\ 0, & y \geq 0,\end{cases}$ and so $(2)$ reduces to $$f_{X,Z}(x,z) = \phi(x)\phi(z)\int_{-\infty}^0 2\phi(y)\,\mathrm dy = \phi(x)\phi(z). \tag{4}$$ In short, $(3)$ and $(4)$ show that $f_{X,Z}(x,z) = \phi(x)\phi(z)$ for all $x, z \in (-\infty,\infty)$ and so $X$ and $Z$ are (pairwise) independent standard normal random variables. Similar calculations (left as an exercise for the bemused reader) show that $X$ and $Y$ are (pairwise) independent standard normal random variables, and $Y$ and $Z$ also are (pairwise) independent standard normal random variables. But $X,Y,Z$ are not mutually independent normal random variables. Indeed, their joint density $f_{X,Y,Z}(x,y,z)$ does not equal the product $\phi(x)\phi(y)\phi(z)$ of their marginal densities for any choice of $x, y, z \in (-\infty,\infty)$
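A Monte Carlo illustration of this construction (Python/NumPy): sampling $X,Y,W$ i.i.d. standard normal and setting $Z=|W|\,\operatorname{sign}(XY)$ puts the mass exactly on the four regions of $(1)$ with the factor $2$, so — if I have read the density correctly — it is one way to draw from this joint distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y, W = rng.standard_normal((3, 10**6))
Z = np.abs(W) * np.sign(X * Y)   # sign pattern forced into the four allowed regions

print(np.mean((X > 0) & (Z > 0)))            # ~0.25 = P(X>0)P(Z>0): pairwise independent
print(np.mean((Y > 0) & (Z > 0)))            # ~0.25
print(np.mean((X > 0) & (Y > 0) & (Z > 0)))  # ~0.25, not 1/8: not mutually independent
```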
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 1 }
Do the even cycles of a graph form a subspace of the cycle space? From Wikipedia: The cycle space of an undirected graph is the set of its Eulerian subgraphs. This set of subgraphs can be described algebraically as a vector space over the two-element finite field. The vector addition operation is the symmetric difference. My main question is if the number of independent even cycles in a graph is well-defined or if it is dependent on the choice of cycles. It occurred to me that if the subset of even cycles is closed under vector addition, then it would form a subspace of the cycle space and the number of independent even cycles is the unique value that is the dimension of that subspace. I've found a reference (Theorem 2.2.9) that states The multiplicity of the eigenvalue ${}-2$ in the line graph $L(H)$ is equal to the maximal number of independent even cycles in $H$. Is this maximal number just the size of a basis (i.e. the dimension), or is such a basis not even well-defined?
Let $\mathscr{C}_0(G)=\{E\in\mathscr{C}(G):|E|\text{ is even}\}$. Let $E_0,E_1\in\mathscr{C}_0(G)$, with $|E_0|=2m$ and $|E_1|=2n$. If $k=|E_0\cap E_1|$, then $$|E_0\mathbin{\triangle}E_1|=|E_0\setminus E_1|+|E_1\setminus E_0|=(2m-k)+(2n-k)=2(m+n-k)\;,$$ so $E_0\mathbin{\triangle}E_1\in\mathscr{C}_0(G)$, and $\mathscr{C}_0(G)$ is indeed a subspace of $\mathscr{C}(G)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How thick should a cylindrical coin be for it to act as a fair three-sided die? When flipping a coin of radius $r>0$ and thickness $t>0$ in the real world, there is some non-zero probability of getting neither heads nor tails, but instead landing on the thin lateral side. My question is, how thick does this lateral face need to be relative to $r$ so that the coin has equal chance to land on each of its 3 faces? That is, how do we choose $t$ so that the chance of landing on the lateral is $\frac{1}{3}$? Clearly this problem requires some simplification, so we may assume the coin does not bounce and lands on a flat surface, and, well, any other assumptions implicit in the next paragraph: I have heard from a friend that there is a method of solving this which entails centering the coin in a sphere, and adjusting $t$ so that a randomly chosen line segment between the center and a point on the sphere has an equal probability of intersecting each face of the coin. Is this a common modeling technique? How should we choose the radius of this sphere? Does it not matter given that the coin is fully contained? And secondly, how can we proceed to find $t$ as a function of $r$ once this model has been established?
This is an old and charming question. The thick coin is a cylinder. Cut it vertically through its diameter. You get a rectangular cross-section with sides $e$ and $D$. The thickness is $e$ and $D$ is its diameter. When you flip the coin the decision variables are the angles formed by the rectangle's diagonals. These angles determine whether the rest position will be on some head-tail or on the edge. Name $\vartheta_e$ the angle between the diagonals having the edge as the opposite side. The fact is that the angle is a random variable having uniform distribution $U[0,2\pi]$. Thus $\frac{2\vartheta_e}{2\pi}=\frac{1}{3}$ and $\vartheta_e=\frac{\pi}{3}$. But $\tan(\vartheta_e/2)=e/D$ and $e=D\cdot\tan(\pi/6)=D/\sqrt{3}$. I've seen some wrong answers having high scores for this question. I do not know, but I hope this can be amended.
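For what it's worth, the sphere-projection model sketched in the question gives a different value: requiring the lateral face to cover $1/3$ of the circumscribed sphere's area (by Archimedes' hat-box theorem the band fraction is $\frac{t}{2\sqrt{r^2+t^2/4}}$) leads, if I haven't slipped, to $t=r/\sqrt2$, i.e. $t/D=1/(2\sqrt2)$. A two-line comparison in Python — which model is "right" depends entirely on the physics one assumes:

```python
import math

print(math.tan(math.pi / 6))    # diagonal-angle model above: t/D = 1/sqrt(3), about 0.577
print(1 / (2 * math.sqrt(2)))   # 1/3-of-sphere-area model:   t/D is about 0.354
```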
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 3 }
Lagrange maximization with inequalities I need to derive the maximum of the following summation, using Lagrange multipliers. $$\max_{x_m} \left( \sum_m a_m log(x_m)\right) $$ s.t. $$0 \le x_m \le 1$$ $$ \sum_m x_m = 1$$ The solution is a closed form $ x_m = \frac{a_m}{\sum_m a_m} $. I formulated the Lagrangian but I am confused about the signs and the multipliers. $ L(x,\lambda,\mu) = \sum_m a_m log(x_m) + \sum_m \lambda_m (1-x_m) + \mu (\sum_mx_m-1) $. Is this formulation correct? What is wrong? Note: only one $\mu$ for the one equality constraint.
I think that the Lagrangian is: \begin{equation} L(x,\lambda,\mu) = \sum_m a_m log(x_m) + \mu \left(\sum_mx_m-1\right) \end{equation} Now we have: \begin{equation} \begin{cases} \frac{a_1}{x_1}+\mu=0\\ \frac{a_2}{x_2}+\mu=0\\ \vdots\\ \frac{a_m}{x_m}+\mu=0\\ \sum_mx_m=1 \end{cases} \end{equation} so \begin{equation} \begin{cases} x_1=-\frac{a_1}{\mu}\\ x_2=-\frac{a_2}{\mu}\\ \vdots\\ x_m=-\frac{a_m}{\mu}\\ \sum_mx_m=1 \end{cases} \end{equation} and from the last equation: $-\frac{1}{\mu}\sum_ma_m=1$ so $\mu=-\sum_ma_m$. We can write the system like this: \begin{equation} \begin{cases} x_1=-\frac{a_1}{-\sum_ma_m}\\ x_2=-\frac{a_2}{-\sum_ma_m}\\ \vdots\\ x_m=-\frac{a_m}{-\sum_ma_m}\\ \mu=-\sum_ma_m \end{cases} \begin{cases} x_1=\frac{a_1}{\sum_ma_m}\\ x_2=\frac{a_2}{\sum_ma_m}\\ \vdots\\ x_m=\frac{a_m}{\sum_ma_m}\\ \mu=-\sum_ma_m \end{cases} \end{equation}
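A numerical sanity check (Python/NumPy, with an arbitrary placeholder vector $a$): the closed form should dominate the objective over random feasible points, and the interior solution also explains why the ignored constraints $0\le x_m\le 1$ are inactive.

```python
import numpy as np

rng = np.random.default_rng(2)
a = np.array([2.0, 1.0, 4.0, 3.0])            # placeholder coefficients
x_star = a / a.sum()                          # candidate maximizer

samples = rng.dirichlet(np.ones(a.size), size=100_000)  # random points of the simplex
vals = (np.log(samples) * a).sum(axis=1)
print((a * np.log(x_star)).sum(), vals.max())  # closed form >= best random sample
```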
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the Lie algebra of the Euclidean group? I am trying to find the Lie algebra for $E(n) = \left\{\begin{bmatrix}1 & 0^t \\ \mathbf{x} & A \end{bmatrix}: A \in SO(n), \mathbf{x} \in \mathbb{E}^n \right\}$. In particular, I would like to show that $\mathfrak{e}(n) = \left\{\begin{bmatrix}0 & 0^t \\ \mathbf{b} & B \end{bmatrix}:B \in \mathfrak{so}(n),\mathbf{b} \in \mathbb{E}^n \right\}$ using only the definition that a Lie algebra is the tangent space at the identity of the Lie group. I've managed to show that $\mathfrak{so}(n)$ is the set of skew-symmetric matrices but I'm not sure how to proceed from there. Thank you in advance.
Proof: suppose that $\gamma(t)$ is a path in the Lie group with $\gamma(0) = I$. $\gamma$ must have the form $$ \gamma(t) = \pmatrix{1&0\\ \mathbf x(t) & A(t)} $$ It follows that $\gamma'(0)$ has the form $$ \gamma'(0) = \pmatrix{0&0\\ \mathbf x'(0) & A'(0)} $$ which is of the desired form. On the other hand, take any $B \in \mathfrak{so}(n)$ and $\mathbf b \in \Bbb R^n$. We can define $A(t)$ in $SO(n)$ such that $A(0) = I$ and $A'(0)=B$ (for instance $A(t)=e^{tB}$), and define $$ \gamma(t) = \pmatrix{ 1 & 0\\ t\mathbf b & A(t) } \implies \gamma'(0) = \pmatrix{ 0&0\\\mathbf b & B } $$ thus, we have both inclusions and the sets are equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Radius of convergence of the power series $\sum_n a_n x^n$ where $a_n={{\sin (n!)}\over {n!}}$ Find the radius of convergence of the power series $\sum_{n=0}^{\infty} a_n x^n$ where $a_n={{\sin (n!)}\over {n!}}.$ Now using the ratio test $$R=\lim_{n\rightarrow \infty}\left|{{a_n}\over {a_{n+1}}}\right|\\=\lim_{n\rightarrow \infty}\left|n\cdot{\sin(n!)\over \sin((n+1)!)}\right|$$ Now, $n\rightarrow \infty$ but the limit of ${\sin(n!)\over \sin((n+1)!)}$ is not known to me. If in degrees, I could say it converges to $1$, as $360$ divides every integer of the form $n!$ for all $n\ge 360$, and then $R$ would be $\infty.$ But as the question says, I have to find this in radians. The options are $1.\ R\ge 1$, $2.\ R\ge 2\pi$, $3.\ R\le 4\pi$, $4.\ R\ge \pi.$ Please help. Thanks.
If you use the root test: since $|\sin(n!)| \le 1$ and $(n!)^{1/n} \sim n/e \to \infty$ by Stirling's formula, we get $$\left|{{\sin (n!)}\over {n!}}\right|^{1/n} \le \left(\frac{1}{n!}\right)^{1/n} \sim \frac{1}{n/e} \to 0,$$ so the radius of convergence is infinite and the series converges everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$R^{3}$ is open and closed I have thought about 2 ways to prove this, but they are not complete. What should I add to finish the proof? Proof 1: For all $x\in \mathbb{R^{3}}$ there is $r>0$ such that $B(x,r)\subset \mathbb{R^{3}}$; any $r>0$ will satisfy the requirement — because $\mathbb{R^{3}}$ is dense? Which arguments about $\mathbb{R^{3}}$ can I use? And therefore $\mathbb{R^{3}}$ is open. On the other hand, $A=\mathbb{R^{3}}\setminus \mathbb{R^{3}}=\emptyset$ and the empty set is open and closed, so $A$ is open (and closed?). Proof 2: Every limit point of $\mathbb{R}^{3}$ is contained in $\mathbb{R}^{3}$, so $\mathbb{R^{3}}$ is closed. Which facts about limit points can I use to prove it is open?
Take $x\in\mathbb{R}^{3}$. Then, for all $\varepsilon >0$, $B_{\varepsilon}(x)\subset\mathbb{R}^{3}$. By contradiction: suppose that there exists $y\in B_{\varepsilon}(x)$ such that $y\notin\mathbb{R}^{3}$. Then $y\in{\mathbb{R}^{3}\setminus\mathbb{R}^{3}}=\emptyset$ (every point of $B_{\varepsilon}(x)$ is a triple of real numbers). Clearly this is a contradiction. Now, since $\mathbb{R}^{3}$ is open, $\emptyset$ is closed. In the other direction, we want to prove that $\emptyset$ is open, i.e., if $x\in\emptyset$, then there exists $\delta>0$ such that $B_{\delta}(x)\subset\emptyset$. This is vacuously true, since the empty set has no elements. Then, $\emptyset$ is open and its complement $\mathbb{R}^{3}$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2029916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove that $\sin(x)≤\frac{4}{\pi^2}x(\pi-x) $ for all $x\in[0,\pi]$? How to prove the inequality $$ \sin(x)≤\frac{4}{\pi^2}x(\pi-x) $$ for all $x\in[0,\pi]$? As both functions are symmetric to $\frac{\pi}{2}$ it suffices to prove it for $x\in\left[0,\frac{\pi}{2}\right]$. Furthermore one can see that $\frac{4}{\pi^2}(\pi-x)$ is the tangent to $\frac{\sin(x)}{x}$ in the point $\left(\frac{\pi}{2},\frac{2}{\pi}\right)$ so it would be enough to prove that $\frac{d^2}{dx^2}\left[\frac{\sin(x)}{x}\right]≤0$ in $\left[0,\frac{\pi}{2}\right]$. Is this the right way or are there more elegant approaches?
Let $f(x)=\frac{4x(\pi-x)}{\pi^2}-\sin{x}$. Since $f(\pi-x)=f(x)$, it's enough to prove our inequality for $x\in\left[0,\frac{\pi}{2}\right]$. $f'(x)=\frac{4(\pi-2x)}{\pi^2}-\cos{x}$ and $f''(x)=\sin{x}-\frac{8}{\pi^2}$, which says that $f$ is a concave function on $\left[0,\arcsin\frac{8}{\pi^2}\right]$ and $f$ is a convex function on $\left[\arcsin\frac{8}{\pi^2},\frac{\pi}{2}\right]$. Easy to see that $f\left(\arcsin\frac{8}{\pi^2}\right)>0$, $f(0)=0$, $f\left(\frac{\pi}{2}\right)=0$ and $f'\left(\frac{\pi}{2}\right)=0$. Thus, the graph of $f$ lies above the chord $AB$, where $A(0,0)$ and $B\left(\arcsin\frac{8}{\pi^2},f\left(\arcsin\frac{8}{\pi^2}\right)\right)$, on $\left[0,\arcsin\frac{8}{\pi^2}\right]$, and the graph of $f$ lies above the $x$-axis (its tangent at $\frac{\pi}{2}$) on $\left[\arcsin\frac{8}{\pi^2},\frac{\pi}{2}\right]$. Hence $f(x)\ge 0$ on $\left[0,\frac{\pi}{2}\right]$. Done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
How to relate areas of circle, square, rectangle and triangle if they have the same perimeter? I was given a question which was like: Suppose that a circle, square, rectangle and triangle have the same perimeter. How are their areas related? My work: I broke the question into parts and tried to prove each separately: STEP $1$. Suppose the given perimeter is $P$. So the radius of the circle will be ${r=P/2\pi}$, and hence the area of the circle will be ${P^2/4\pi}$. The side of the square will be $P/4$ and hence the area of the square will be $P^2/16$. It is obvious that ${P^2/4\pi}$ > $P^2/16$. So the area of the circle is more than the area of the square. STEP $2$: Whether by using the $AM>GM$ inequality or by using a bit of differentiation, we can get that the product of two quantities with a fixed sum is maximum when they are equal. So, the area of the square is more than the area of the rectangle. Here the hardest part comes (at least for me). I can't relate the area of the triangle to any of the circle, square or rectangle. It would have been easy if the triangle were equilateral or right-angled, but the question is talking about a general triangle. However, when I considered the situation with different examples (assuming the perimeter and calculating the area), I got the relation: Circle > Square > Rectangle > Triangle. I shall be highly thankful if you guys can establish the relation in a purely mathematical way. EDIT: This question does not ask whether the circle has the largest area among all 2-D figures or not. The OP just wants to relate the areas of circle, square, rectangle, triangle in a pure mathematical way.
To generalize: it is easily shown that for any regular $n$-gon with side length $a$ the radius $r_n$ of its inscribed circle is $$r_n=\frac{a/2}{\tan(\pi/n)},$$ hence the ratio of its perimeter $P$ to the diameter of its inscribed circle is $$\pi_n:=n\cdot\tan(\pi/n).$$ Now verify that the polygon's area is $\pi_n\cdot r_n^2$ and its perimeter equals $2\pi_n\cdot r_n$. (For example take $n=3$, a triangle with side $a$. Then $r_3=\frac{a/2}{\sqrt{3}}$ and $\pi_3=3\cdot\tan(\pi/3)=3\sqrt{3}$. We calculate its area to be $\pi_3\cdot r_3^2=\frac{\sqrt{3}}{4}\cdot a^2$.) Given $P$, the radius is $r_n=\frac{P}{2\pi_n}$, hence the area is $$\frac{P^2}{4}\cdot\frac{1}{n\tan(\pi/n)}.$$ Now $n\tan(\pi/n)$ is strictly decreasing in $n$ with limit $\pi$, so $\displaystyle\frac{1}{n\tan(\pi/n)}$ is strictly increasing with limit $\frac{1}{\pi}$: for a fixed perimeter, the area grows with the number of sides and tends to the circle's area $\frac{P^2}{4\pi}$.
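A short numerical illustration of the last formula (Python); the areas for fixed perimeter $P=1$ increase in $n$ toward the circle's value:

```python
import math

P = 1.0
for n in (3, 4, 6, 12, 100):
    print(n, P**2 / (4 * n * math.tan(math.pi / n)))  # area of the regular n-gon
print("circle", P**2 / (4 * math.pi))                 # 0.0795..., the supremum
```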
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Holomorphic function $f$ on the unit disc such that $|f(z)|\leq 1.$ Let $f$ be a holomorphic function on $D=\{z\in\mathbb{C}:|z|\leq 1\}$ such that $|f(z)|\leq 1.$ $$g(z)=\begin{cases} \dfrac{f(z)}{z} &\text{ if } z\neq 0\\[8pt] f'(0) &\text{ if } z=0.\end{cases}$$ Which of the following statements are true? $1.$ $g$ is holomorphic on $D.$ $2.$ $|g(z)|\leq 1$ for all $z\in D.$ $3.$ $|f'(z)|\leq 1$ for all $z\in D.$ $4.$ $|f'(0)|\leq 1.$ By the Riemann removability theorem it is clear that $g$ is holomorphic on $D.$ I don't know how to handle the other options. Please help me. Thanks a lot.
Hint: apply the maximum modulus principle to your function $g$ on $\mathbb{D}$. The answer is that $g$ is holomorphic and $|f'(0)|\le 1$, i.e. statements 1 and 4.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof that sum of squares of error for simple linear regression follows chi-square distribution I can understand that if $Y_1,\dots,Y_n$ are random samples from $N(\mu,\sigma)$, then the sum of squared differences between $Y_i$ and $\bar Y$, divided by $\sigma^2$, follows a chi-square distribution with $n-1$ degrees of freedom. But I can't easily prove that the sum of squares of error follows a chi-square distribution with $n-2$ degrees of freedom, because it is the difference between $Y_i$ and the estimated $Y_i$. How can I prove this without using matrix form?
Without involving asymptotic results, you have to assume that the error terms follow a Gaussian (uncorrelated) distribution with a constant variance, i.e., $\epsilon_i \sim N(0, \sigma^2)$. Next, you can show that each fitted value $\hat{Y}_i$ is normally distributed as well, which follows from $$ \hat{Y}_i = \mathbb{E}[Y_i] + \sum_{j=1}^n w_{ij} \epsilon _j, \quad i=1,...,n, $$ thus $\hat{Y}_i \sim N(\mathbb{E}[Y_i], \sigma^2 \sum_{j=1}^n w_{ij}^2)$ for each $i$. Now, recall that if $X_i \sim N(\mu, \sigma^2)$, i.i.d., then $$ \sum_{i=1}^n\frac{(X_i - \mu)^2}{\sigma^2} \sim \chi^2(n). $$ Thus, $$ \sum_{i=1}^n \frac{(Y_i - \hat{Y_i})^2}{\sigma^2}\sim \chi^2{(n-2)}. $$ The "reduction" of $2$ df stems from the estimation of $\beta_0$ and $\beta_1$: formally, the residual vector is the projection of $(\epsilon_1,\dots,\epsilon_n)$ onto the $(n-2)$-dimensional orthogonal complement of the column space of the design matrix. So, strictly speaking the residual sum of squares follows a $\sigma^2 \chi^2_{(n-2)}$ distribution. To figure out the exact form of the $w_{ij}$ you will need some simple algebra on the OLS estimators, but note that $w_{ij} = g(x_j, \bar{x}_n, \sum_i (x_i -\bar{x}_n)^2)$, so they are considered "constant" terms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A very short proof of $e$ is irrational I was talking with my friend and he came up with this very short proof Given $x\in \mathbb{R}$, if $xy \notin \mathbb{Z}$ for any $y\in \mathbb{Z}$, then $x$ is irrational. Since $e = \sum \frac{1}{n!}$, we see that $ey \notin \mathbb{Z}$ for any $y \in \mathbb{Z}$. So $e$ is irrational. Is his argument correct? If not, why?
Perhaps you can take the fractional part of $n!\, e$: that number is $$ 0< n!\, e - \text{whole number } = \frac{1}{n+1} + \frac{1}{(n+1)(n+2)}+\dots <1$$ (in fact it is less than $\frac1n$, by comparison with a geometric series). If $e$ were rational, say $e=p/q$, then $n!\, e$ would be an integer for every $n\ge q$, so this number would eventually have to be an integer. However, this error term is strictly between $0$ and $1$ always. So $e \notin \mathbb{Q}$. https://en.m.wikipedia.org/wiki/Proof_that_e_is_irrational
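A high-precision numerical illustration of that fractional part (Python with mpmath; purely a sanity check, not part of the proof):

```python
from mpmath import mp, e, factorial, frac

mp.dps = 60  # enough digits that n! * e keeps a trustworthy fractional part
for n in (2, 5, 10, 20, 30):
    print(n, frac(factorial(n) * e))  # stays in (0, 1), shrinking roughly like 1/(n+1)
```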
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Does $\int_\limits{0}^{\infty} \frac{\sin(2x)}{x}dx $ absolutely converge? My teacher gave me this task to prove it, but I have no idea how to begin. Can I have any clue?
Note that we can write $$\begin{align} \int_0^{N\pi/2}\left|\frac{\sin(2x)}{x}\right|\,dx&=\sum_{n=1}^N \int_{(n-1)\pi}^{n\pi}\frac{|\sin(x)|}{x}\,dx\\\\ &\ge \frac1\pi \sum_{n=1}^N\frac1n \int_{(n-1)\pi}^{n\pi}|\sin(x)|\,dx\\\\ &=\frac2\pi \sum_{n=1}^N \frac1n \end{align}$$ Inasmuch as the harmonic series diverges, the integral fails to be absolutely convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Implicit Differentiation: Find $\frac{d^2y}{dx^2}$ Find $\frac{d^2y}{dx^2}$ of: $$4y^2+2=3x^2$$ My Attempt I attempted the problem by first solving for the first derivative: $8y*y'=6x$ $y'=\frac{3x}{4y}$ Then I differentiated again; however I was a bit confused, and ended up getting $y''=\frac{6y-3x(2*y')}{16y^2}$ Would I substitute the first derivative back in to get: $y''=\frac{6y-6x(\frac{3x}{4y})}{4y^2}$ and then finally Final Answer $$y''=\frac{6y^2-9x^2}{4y^3}$$ Is my final answer correct? If not, what is the correct answer please?
For future reference: if $y$ is defined implicitly by $F(x,y)=0$, then $$\frac{d^2y}{dx^2}=-\frac {\frac{\partial^2 F}{\partial x^2}\left(\frac{\partial F}{\partial y}\right)^2 -2\,\frac{\partial^2 F}{\partial x\partial y}\cdot\frac{\partial F}{\partial y}\cdot\frac{\partial F}{\partial x} +\frac{\partial^2 F}{\partial y^2}\left(\frac{\partial F}{\partial x}\right)^2} {\left(\frac{\partial F}{\partial y}\right)^3}$$
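As a worked check of this formula on the question's curve, write $F(x,y)=3x^2-4y^2-2$, so that $F(x,y)=0$ is the given relation. Then $$F_x=6x,\quad F_y=-8y,\quad F_{xx}=6,\quad F_{yy}=-8,\quad F_{xy}=0,$$ and indeed $y'=-F_x/F_y=\frac{3x}{4y}$, matching the first derivative in the question. The formula then gives $$\frac{d^2y}{dx^2}=-\frac{6(-8y)^2+(-8)(6x)^2}{(-8y)^3}=\frac{384y^2-288x^2}{512y^3}=\frac{12y^2-9x^2}{16y^3}=-\frac{3}{8y^3},$$ where the last step uses $3x^2=4y^2+2$. Comparing with the attempt above, this suggests the quotient rule step should yield $\frac{12y-12xy'}{16y^2}$ rather than $\frac{6y-6xy'}{16y^2}$.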
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $X$ contains a strictly increasing sequence which converges to $\sup X$ Suppose that $X ⊂ \mathbb{R}$ is bounded above and that $\sup X \notin X$. Prove that $X$ contains a strictly increasing sequence which converges to $\sup X$. I've started by assuming there to be an increasing sequence and using its definition to show that as $m$ tends to infinity, $a_n < l$ for all $n < m$. This satisfies the first criterion of the supremum, but I'm not sure where to go next. Thanks.
Assuming you have a sequence to begin with doesn't seem like a good way to go -- you need to CONSTRUCT a sequence that satisfies the given properties. Focus on the definition of supremum as least upper bound. Show that for each $n\in\mathbb{N}$, there must exist $a_n\in X$ so that $\lvert a_n-\sup X\rvert<\frac{1}{n}$. Clearly, $a_n$ defined this way converges to the supremum. Now, there's no guarantees that $(a_n)$ is an increasing sequence. But, we can find a subsequence which does what we want. Prove that $(a_n)$ contains a strictly increasing subsequence. How? Consider the fact that $X$ doesn't contain its supremum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2030851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Distributing 12 distinct balls to 5 different persons. Actually, my doubt is about the number of ways of distributing 12 distinct balls to 5 different persons such that 3 persons get 2 balls each and 2 persons get 3 balls each. There is another question on this site about "How many ways to divide group of 12 people into 2 groups of 3 people and 3 groups of 2 people?" But my question is slightly different. A solution with explanation is needed. Thank you anyway for your help!
I believe that the correct answer is the product of: * *The number of ways to choose the persons: $\frac{5!}{3!2!}=10$ *The number of ways to choose the balls: $\frac{12!}{2!2!2!3!3!}=1663200$
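A one-liner verification of both factors (Python):

```python
from math import comb, factorial

persons = comb(5, 3)                                          # which 3 people get 2 balls
balls = factorial(12) // (factorial(2)**3 * factorial(3)**2)  # multinomial 12!/(2!2!2!3!3!)
print(persons, balls, persons * balls)                        # 10 1663200 16632000
```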
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
find the maximum $\frac{\frac{x^2_{1}}{x_{2}}+\frac{x^2_{2}}{x_{3}}+\cdots+\frac{x^2_{n-1}}{x_{n}}+\frac{x^2_{n}}{x_{1}}}{x_{1}+x_{2}+\cdots+x_{n}}$ Given a positive integer $n\ge 2$ and positive real numbers $a<b$, if the real numbers $x_{1},x_{2},\cdots,x_{n}\in[a,b]$, find the maximum of the value $$\dfrac{\frac{x^2_{1}}{x_{2}}+\frac{x^2_{2}}{x_{3}}+\cdots+\frac{x^2_{n-1}}{x_{n}}+\frac{x^2_{n}}{x_{1}}}{x_{1}+x_{2}+\cdots+x_{n}}$$ It seems related to the Pólya–Szegő inequality: http://journalofinequalitiesandapplications.springeropen.com/articles/10.1186/1029-242X-2013-591
Let $M$ be the maximum value (it exists because a continuous function on a compact set attains a maximum value there) and $$f(x_1,x_2,...x_n)=\frac{x_1^2}{x_2}+\frac{x_2^2}{x_3}+...+\frac{x_n^2}{x_1}-M(x_1+x_2+...+x_n).$$ Since $f$ is a convex function of each $x_i$ separately, we obtain $$0=\max_{\{x_1,x_2,...,x_n\}\subset[a,b]}f=\max_{\{x_1,x_2,...,x_n\}\subset\{a,b\}}f$$ From here if $n$ is even we have $$M=\frac{\frac{a^2}{b}+\frac{b^2}{a}}{a+b}=\frac{a}{b}+\frac{b}{a}-1,$$ which occurs for $x_1=x_3=...=a$ and $x_2=x_4=...=b$. If $n$ is odd we have for $n=2m+1$: $$M=\frac{m\left(\frac{a^2}{b}+\frac{b^2}{a}\right)+a}{(m+1)a+mb},$$ which occurs for $x_1=x_3=...=x_{2m+1}=a$ and $x_2=x_4=...=x_{2m}=b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
Proof: If a function is in the Schwartz Space, then this function is uniformly continuous I'm not really clear on this: I want to justify that if $f\in\mathcal{S}(\mathbb{R})$ then $f$ is uniformly continuous. So far, I know how to get bounds involving $|x|$ since $f$ is in the Schwartz space, but I can't proceed with the uniform continuity proof because I don't know how to bound $|y-x|$ to find a $\delta$, depending on $\varepsilon>0$, such that $|f(y)-f(x)|<\varepsilon$. Thank you so much
I'll prove a slightly stronger result. Consider the space $C_0(\mathbb R)$ of the continuous functions from $\mathbb R$ to $\mathbb C$ such that $\lim_{|x|\to \infty} f(x) = 0$. The functions from this space are uniformly continuous. Consider $\epsilon >0$ and $f\in C_0(\mathbb R)$. From the limit above, there exists a radius $R>0$ such that $|f(x)|<\epsilon/2$ when $x\in \mathbb R$ and $|x|>R$. (Therefore, $f$ is small far from the point 0.) Now, let $B=\{x\in \mathbb R:|x|\leq R+1\}$. Since $B$ is compact and $f|_B:B \to \mathbb C$ is continuous we have that $f|_B$ is uniformly continuous (on compact spaces, to be continuous = to be uniformly continuous). So, there exists $\delta\in (0,1)$ such that $|f(x)-f(y)|<\epsilon/2$ for every $x,y\in B$ where $|x-y|<\delta$. (Therefore, $f$ is uniformly continuous near 0.) If $x,y\in \mathbb R$ and $|x-y|<\delta$, we have the following cases: * *If $x,y \in B$ then $|f(x)-f(y)|<\epsilon/2<\epsilon,$ *If $x\not \in B$ (the case $y\notin B$ is symmetric) then $|x|>R+1$ and therefore $|y|>R$, because $|x-y|<\delta<1$. So we obtain $|f(x)|<\epsilon/2$ and $|f(y)|<\epsilon/2$. Therefore $$|f(x)-f(y)|\leq|f(x)|+|f(y)|<\epsilon.$$ From 1. and 2. we conclude that $|f(x)-f(y)|<\epsilon$ when $|x-y|<\delta$. Now, observe that $S(\mathbb R) \subset C_0(\mathbb R)$. Indeed, if $f\in S(\mathbb R)$ then $(1+|x|)|f(x)|$ is bounded, that is, there is $M>0$ such that $(1+|x|)|f(x)|<M$ for all $x\in \mathbb R$. Observe that $$ |f(x)|<\frac{M}{1+|x|}$$ and as a consequence $\lim_{|x|\to \infty}f(x) = 0$. Therefore $f\in C_0(\mathbb R)$. Then, every $f\in S(\mathbb R)$ is uniformly continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Quotient of finitely presented group Suppose $G, H$ are two groups such that $G$ is finitely presented (with the number of defining relators at least 1) and let $\phi : G \rightarrow H$ be an epimorphism. Does it imply that $H$ is a finitely presented group?
A method for constructing an example which shows that the answer is negative (see for instance this reference) is to go for a finitely presented group $G$ whose centre $Z(G)$ is not finitely generated. Then, by a standard result (which I think has been recorded by Bernhard Neumann), $G/Z(G)$ is not finitely presented.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the remainder when a large number is divided by 13 Let a number $x = 135792468135792468$. Find the remainder when $x$ is divided by $13$. Is it possible to use Fermat's little theorem on this? I notice that the number is also repeating after $8$. Would really appreciate any help, thanks!
You noticed how the number repeats, so you can see that it equals $135792468\times1000000001$. Now test $1000000001$ for divisibility by $13$: repeatedly add $4$ times the rightmost digit to the rest of the number; if you reach a multiple of $13$ (you reach $26$ in this case), the original number is divisible by $13$. Hence $13$ divides $x$, and the remainder is $0$.
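A direct check in Python, confirming both the factorization and the remainder:

```python
x = 135792468135792468
print(x == 135792468 * 1000000001)  # True: the repetition is multiplication by 10**9 + 1
print(1000000001 % 13, x % 13)      # 0 0
```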
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
LPP auxiliary problem optimal solution Let's look at a linear programming problem $$\max\{\langle c,x\rangle \ \colon Ax=b, \ x\geq 0\}$$ and its auxiliary problem $$\max\{\langle \overline{c},\overline{x}\rangle\ \colon \overline{A}\overline{x}=b, \ \overline{x}\geq 0\}.$$ I want to prove that if the LPP feasible region is not empty, then there is $M_0$, such that for all $M\geq M_0$ the auxiliary problem has a solution and for the optimal solution $x_{n+1}=x_{n+2}=\ldots =x_{n+k}=0$. For notation: $$\langle c,x\rangle =c_1x_1+c_2x_2+\ldots +c_nx_n,$$ $$x=(x_1,x_2,\ldots , x_n), \ \overline{x}=(x_1,x_2,\ldots ,x_n,x_{n+1},\ldots ,x_{n+k}) $$ $$\langle \overline{c},\overline{x}\rangle =\langle c,x\rangle -M(x_{n+1}+x_{n+2}+\ldots +x_{n+k}).$$ $$Ax=b\Leftrightarrow \begin{cases}a_{11}x_1+a_{12}x_2+\ldots +a_{1n}x_n=b_1\\ a_{21}x_1+a_{22}x_2+\ldots +a_{2n}x_n=b_2\\ \ldots \\ a_{m1}x_1+a_{m2}x_2+\ldots +a_{mn}x_n=b_m \end{cases}$$ $$\overline{A}\overline{x}=b\Leftrightarrow \begin{cases}a_{11}x_1+a_{12}x_2+\ldots +a_{1n}x_n+x_{n+1}=b_1\\ a_{21}x_1+a_{22}x_2+\ldots +a_{2n}x_n+x_{n+2}=b_2\\ \ldots \\ a_{m1}x_1+a_{m2}x_2+ \ldots +a_{mn}x_n+x_{n+k}=b_m \end{cases}$$
Let $x$ be an optimal solution to the original problem with objective value $p := c^Tx$, and let $\tilde{x}$ be the vector $(x,0,0,\ldots,0)$ (the corresponding solution in the auxiliary problem), which is feasible and also yields an objective value of $p$. Using the revised simplex method, the reduced cost of $x_{n+j}$ at $\tilde{x}$ is $c_B^T B^{-1} e_j - M$, with $e_j$ the $j^{th}$ unit vector. For $M_0 = \max_j \{ c_B^T B^{-1} e_j \}$, the reduced costs become nonpositive for the extra variables whenever $M \geq M_0$, proving that the solution $\tilde{x}$ is optimal with $x_{n+j}=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to show that $\iint_{S}\vec{F}\cdot d\vec{S}=0$ with the vector field $\vec{F}=\big\langle0,0,z\big\rangle$? Problem: If $S$ is the cylindrical surface parametrized by $\phi(\theta,u)=(\cos{(\theta)},\sin{(\theta)},u)$, $\theta\in[0,2\pi]$ and $u\in[0,1]$, then $\iint_{S}\vec{F}\cdot d\vec{S}=0$ for the vector field $\vec{F}=\big\langle0,0,z\big\rangle$. Solution: $\iint_{S}\vec{F}\cdot d\vec{S}=\iint_{D}\vec{F}(\phi(\theta,u))\cdot \vec{n}\ dA$ I know that $\vec{F}(\phi(\theta,u))=F(\cos{(\theta)},\sin{(\theta)},u)=\big\langle0,0,u\big\rangle$ But I don't know how to get $\vec{n}$. What is it?
With the given parametrization, $$d\vec{S} = \left(\frac{\partial \phi}{\partial \theta} \times \frac{\partial \phi}{\partial u}\right) d\theta \, du = (-\sin\theta,\cos\theta,0)\times(0,0,1)\, d\theta\, du = (\cos\theta,\sin\theta,0)\, d\theta\, du,$$ so $\vec{n}=(\cos\theta,\sin\theta,0)$. This normal is perpendicular to the $\vec{F}$ which you are given, since $\vec{F}(\phi(\theta,u))=\langle0,0,u\rangle$; hence $\vec{F}\cdot\vec{n}=0$ everywhere and the flux vanishes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The "lower part" of $BV$ function is always lower semi-continuous. Let $u\in BV(\Omega)$ be a function of bounded variation, where $\Omega\subset\mathbb R^N$ is open bounded smooth boundary. Define $$ u^-(x):=\sup\left\{t\in\mathbb R:\,\lim_{r\to 0}\frac{\mathcal L^N(B(x,r)\cap\{u<t\})}{r^N}=0\right\}. $$ Then $u^-$ is $\mathcal H^{N-1}$ a.e. well defined. My question: do we have $u^-$ is lower semi-continuous? In one dimension this is true. But what about multi-dimensions? PS: by lower semi-continuous we mean for any $x_n\to x$, we have $$ \liminf_{n\to \infty} u^-(x_n)\geq u^-(x) $$
Not necessarily. Consider $u(x)=\chi_E(x)$ where $E$ is a set which has an inner cusp, for instance the subset of $\mathbb R^2$ given by $$E=\left\{(x_1,x_2):\,x_2< \sqrt{|x_1|}\right\}. $$ Then $u$, restricted to a bounded open set, is BV, and $u^-(x)=u(x)$ for every $x$ except for the origin, in which $u^-(0)=1$, therefore lower semicontinuity fails for example along the sequence $(0,\frac1n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Winding number: times f(w) goes around unit disc I have a problem which is like this: Let $S^1=\partial D(0,1)$ and consider a differentiable function $f:S^1\rightarrow S^1$. Can you compute the number of times that $f(w)$ goes around $S^1$ per each time that $w$ goes around $S^1$? I have tried to reparameterize the first $S^1$ with a $\gamma$ path and then parametrize the second $S^1$ in terms of $f(\gamma)$, but I do not get any further. Any hints or ideas?
In fact, holomorphic functions/residue theory isn't required. See Baby Rudin, exercises 23-26 of ch. 8. A brief summary: (1) Given a closed differentiable curve $\gamma:[a,b]\longrightarrow\Bbb C\setminus\{0\}$, let $$ \text{Ind}(\gamma) = \frac1{2\pi i}\int_a^b\frac{\gamma'(t)}{\gamma(t)}\,dt. $$ (Yes, it is a disguised line integral.) (2) Considering $\gamma\exp(-\int(\gamma'/\gamma))$ and the periodicity of the complex exponential it is easy to prove that $\text{Ind}(\gamma)\in\Bbb Z$. (3) Compute $\text{Ind}(\gamma)$ for the curves $\gamma(t) = \exp(int)$, $t\in[0,2\pi]$. (4) Prove (essentially) that the integer-valued $\text{Ind}(\gamma)$ depends continuously on $\gamma$. (5) Using (4) you can define $\text{Ind}(\gamma)$ for continuous curves.
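A numerical rendering of steps (1)–(3) in Python/NumPy: for $\gamma(t)=\exp(int)$ the index comes out as $n$, computed here via the total change of a continuously chosen argument, which is the same quantity as the integral in (1).

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 4001)
for n in (1, 2, -3):
    z = np.exp(1j * n * t)
    theta = np.unwrap(np.angle(z))            # continuous argument along the curve
    print(n, (theta[-1] - theta[0]) / (2 * np.pi))  # ~n
```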
{ "language": "en", "url": "https://math.stackexchange.com/questions/2031884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Group (co)homology and classifying spaces I would like to ask where I can find in the literature the proof of the following fact: the group cohomology of the group $G$ is naturally isomorphic with the ordinary (say singular) cohomology of the classifying space $BG$ of $G$.
This is explained in Chapter 8 of Weibel's "Introduction to Homological Algebra", specifically Example 8.2.3. The idea is to revise the construction of $BG$ as the geometric realization of a certain simplicial set — the nerve $N G$ of $G$ viewed as a category: $$BG = |NG|.$$ For any simplicial set $X$ its simplicial homology coincides with the cellular (and hence singular) homology of its geometric realization: $$H_\bullet^\text{Simpl.} (X;\mathbb{Z}) \cong H_\bullet^\text{Sing.} (|X|;\mathbb{Z}).$$ On the other hand, the complex for calculating the simplicial homology of $NG$ coincides with the standard complex for calculating group homology (the one that comes from the bar-resolution), hence $$H_\bullet (G,\mathbb{Z}) \cong H_\bullet^\text{Simpl.} (NG; \mathbb{Z}) \cong H_\bullet^\text{Sing.} (BG; \mathbb{Z}).$$ Another reference: Section I.6 in Brown's "Cohomology of Groups" (GTM 87). This is more or less the same argument. The author checks that $C_\bullet (EG)$ is a free resolution of $\mathbb{Z}$ by $\mathbb{Z}G$-modules, which coincides with the standard bar-resolution. It only remains to note that if you take the complex of coinvariants $C_\bullet (EG)_G$, then you calculate both the group homology $H_\bullet (G,\mathbb{Z})$ and $H_\bullet (BG;\mathbb{Z})$. Actually, this is the right way to see where the bar-resolution comes from.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Help with $\int \cos^6{(x)} \,dx$ Problem: \begin{eqnarray*} \int \cos^6{(x)} dx \\ \end{eqnarray*} Answer: \begin{eqnarray*} \int \cos^4{(x)} \,\, dx &=& \int { \cos^2{(x)}(\cos^2{(x)}) } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \int { \frac{(1+\cos(2x))^2}{4} } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \int { \frac{\cos^2(2x) + 2\cos(2x)+1}{4} } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \int { \frac{\frac{1+\cos(4x)}{2} + 2\cos(2x)+1}{4} } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \int { \frac{1+\cos(4x) + 4\cos(2x)+2}{8} } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \int { \frac{\cos(4x) + 4\cos(2x)+3}{8} } \,\, dx \\ \int \cos^4{(x)} \,\, dx &=& \frac{\sin(4x)+ 8 \sin(2x)+12x}{32} \\ \text{Let }I_6 &=& \int \cos^6{(x)} \,\, dx \\ \end{eqnarray*} To perform this integration, I use integration by parts with $u = \cos^5(x)$ and $dv = \cos(x) dx$. \begin{eqnarray*} I_6 &=& \sin(x)\cos^5(x) - \int \sin(x) 5\cos^4(x)(-\sin(x)) \,\, dx \\ I_6 &=& \sin(x)\cos^5(x) + \int 5\cos^4(x)(\sin(x))^2 \,\, dx \\ I_6 &=& \sin(x)\cos^5(x) + \int 5\cos^4(x)(1 - \cos^2(x)) \,\, dx \\ I_6 &=& \sin(x)\cos^5(x) + \int 5\cos^4(x) \,\, dx - 5I_6 \\ 6I_6 &=& \sin(x)\cos^5(x) + \int 5\cos^4(x) \,\, dx \\ 6I_6 &=& \sin(x)\cos^5(x) + \frac{5\sin(4x)+ 40 \sin(2x)+60x}{32} + C_1 \\ 6I_6 &=& \frac{32\sin(x)\cos^5(x) + 5\sin(4x)+ 40 \sin(2x)+60x}{32} + C_1 \\ I_6 &=& \frac{32\sin(x)\cos^5(x) + 5\sin(4x)+ 40 \sin(2x)+60x}{192} + C \\ \end{eqnarray*} I believe that the above result is wrong. Using an online integral calculator, I get: \begin{eqnarray*} I_6 &=& \frac{\sin(6x) + 9\sin(4x) + 45 \sin(2x) + 60x}{192} + C \\ \end{eqnarray*} I am hoping that somebody can tell me where I went wrong. Bob
In your solution, when substituting the already known expression for $I_4$ (in the third line from the bottom), you forgot to multiply it by $5$. That's the only error there. Put it back in there, and you'll have a correct answer. Your answer would still look different from the output of that online integrator, but the two are in fact equivalent via trigonometric identities. On a side note, this integral can also be found without integration by parts, but by using the same approach that worked for $I_4$ if you write $\cos^6(x)=(\cos^2(x))^3$.
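If in doubt, a symbolic check with SymPy confirms the equivalence claimed in the last paragraph: with the factor of $5$ in place, the antiderivative differentiates back to $\cos^6 x$ and differs from the online integrator's output by a trigonometric identity only (both prints should give $0$):

```python
import sympy as sp

x = sp.symbols('x')
F = (32*sp.sin(x)*sp.cos(x)**5 + 5*sp.sin(4*x) + 40*sp.sin(2*x) + 60*x) / 192
G = (sp.sin(6*x) + 9*sp.sin(4*x) + 45*sp.sin(2*x) + 60*x) / 192
print(sp.simplify(sp.diff(F, x) - sp.cos(x)**6))  # expected: 0
print(sp.simplify(F - G))                         # expected: 0
```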
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Solving inequality involving floor function We have this inequality (over the real numbers): $$x^2-2x\le \frac{\sqrt{1-\lfloor x\rfloor^2}}{\lfloor x \rfloor + \lfloor -x \rfloor}$$ How can we solve it using both algebraic and geometric methods?
Hint: $x$ cannot be an integer, otherwise the denominator of the RHS would be $0$. So $x$ is not an integer, which gives $\lfloor x \rfloor+\lfloor -x \rfloor=-1$. Also $\lfloor x \rfloor\in\{-1,0,1\}$, since it must be an integer and $1-\lfloor x\rfloor^2\ge0$, which is equivalent to $\lfloor x\rfloor^2\le1$. Then you solve on each interval $-1<x<0$, $0<x<1$, and $1<x<2$. EDIT: For a geometric solution, I would plot $x^2-2x$ and I would observe that the RHS is either $0$ or $-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
How can Airy's equation $y'' - xy = 0$ be used to model diffraction of light? Differential Equations with Boundary-Value Problems by Dennis G. Zill and Michael R. Cullen states that the equation is used to model the diffraction of light. It doesn't explain how; it just goes on to solve it using a series solution. Does anyone have any idea how it is used, or any references? Or if anyone knows any other applications of this equation, that would be interesting also :)
Here is a reference to a 1977 article by H. M. Nussenzveig in Scientific American about the Theory of the Rainbow. The captions of the figures in the article are informative. The differential equation also appears in Professor Nussenzveig's 1992 book Diffraction Effects in Semiclassical Scattering.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimum value of angles of a triangle In a triangle $ABC$, if $\sin A+\sin B+\sin C\leq1$, then prove that $$\min(A+B,B+C,C+A)<\pi/6$$ where $A,B,C$ are angles of the triangle in radians. If we assume $A>B>C$, then $\sum \sin A\leq 3 \sin A$, and $A\geq \frac{A+B+C}{3}=\pi/3$. Also $\sum \sin A\geq 3\sin C$ and $C\leq \frac{A+B+C}{3}=\pi/3$. But I could not proceed with this. Please help me in this regard. Thanks.
Since you assumed $A\geq B\geq C$, it must be that $\dfrac{A}{2}+C\leq\dfrac{\pi}{2}$ (this is equivalent to $C\leq B$). Hence, $\sin\tfrac{A}{2}<\sin(\tfrac{A}{2}+C) = \cos(\tfrac{B-C}{2})$. Finally, $$1\geq \sin A+\sin B+\sin C = \sin A+2\sin\tfrac{B+C}{2}\cos\tfrac{B-C}{2} = 2\cos\tfrac{A}{2}\big(\sin\tfrac{A}{2}+\cos\tfrac{B-C}{2}\big)>4\cos\tfrac{A}{2}\sin\tfrac{A}{2} = 2\sin A\Rightarrow$$ $\sin A< \tfrac{1}{2}$. Since $A$ is the largest angle, $A\geq\tfrac{\pi}{3}$, and together with $\sin A<\tfrac12$ this forces $A>\frac{5\pi}{6}$. Then $B+C<\tfrac{\pi}{6}$ and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Rearranging angular velocity equation to make $T$ the subject I want to rearrange the formula for angular velocity $\omega = \dfrac{2\pi}{T}$, to make $T$ the subject as I wish to find the period. Would the correct answer be $T = \frac{\omega}{2\pi}$ or would it be $T = \frac{2\pi}{\omega}$? And is there a certain rule you should follow when rearranging ?
It is quite simple. You have $\omega = \dfrac{2\pi}{T}$; since you want to make $T$ the subject, multiply the whole equation by $T$ and you will get $$\omega{\cdot T} = \dfrac{2\pi}{T}{\cdot T} = 2\pi$$ Dividing both sides by $\omega$ then gives $T = \frac{2\pi}{\omega}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2032923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is $(x,y) \mapsto 0$ on $\mathbb{Q}\backslash\{0\}$ associative and commutative? I have the following definition of operations on the following sets: * *$(x,y) \mapsto 9xy$ on $\mathbb{Z}$ *$(x,y) \mapsto 0$ on $\mathbb{Q}\backslash\{0\}$ I have to determine whether the operations on the given sets are associative, commutative, have a neutral element, and have inverse elements. For $(x,y) \mapsto 9xy$ I have that it is associative, commutative, and has the neutral element $1 \in \mathbb{Z}$, but does not have inverse elements as $(9xy)^{-1} \notin \mathbb{Z}$. Could you please help me with $(x,y) \mapsto 0$? I don't understand the operation. It always maps $(x,y) \mapsto 0$, so how do I prove if this is associative, commutative etc.?
For the first one, it is indeed associative and commutative because the usual multiplication of integers is. It does not have a neutral element though, for the following reason: if $u \in \Bbb Z$ is this neutral element, then $9xu = x$ for all $x \in \Bbb Z$. For $x=1$ this would imply $9u = 1$, whence $u = \frac 1 9$ which is not in $\Bbb Z$. For the second, $\Bbb Q \setminus \{0\}$ is not even closed under the operation $(x,y) \mapsto 0$, so it makes no sense to speak about associativity and the rest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to show that $\left|\sum_{k=0}^\infty\frac{(ix)^k}{(k+1)!}\right|\le \left|\sum_{k=0}^\infty\frac{(ix)^k}{k!}\right|=|e^{ix}|=1$ with restrictions How to show that $\left|\sum_{k=0}^\infty\frac{(ix)^k}{(k+1)!}\right|\le \left|\sum_{k=0}^\infty\frac{(ix)^k}{k!}\right|=|e^{ix}|=1$ with restrictions, for $x\in\Bbb R$. To prove this inequality we can't use anything related to derivatives, integrals, geometric statements about sine or cosine, or uniform convergence. We can use limits and basic facts about the convergence properties of these power series. We already know that $|e^{ix}|=1$ for $x\in\Bbb R$. The inequality is a slight rewrite of $$\frac{|e^{ix}-1|}{|x|}=\left|\sum_{k=0}^\infty\frac{(ix)^k}{(k+1)!}\right|\le 1,\quad\forall x\in\Bbb R$$ which is what needs to be proved. I don't know exactly what to do here; I'm completely lost. The best I can think of is to prove something like $$\forall\epsilon>0,\exists N\in\Bbb N:\left|\sum_{k=0}^n\frac{(ix)^k}{(k+1)!}-L\right|<\epsilon,\quad\forall n\ge N$$ for some $0\le L<1$. The exercise leaves the hint $\lim_{z\to 0}\frac{\exp(z)-1}{z}=1$ for $z\in\Bbb C\setminus\{0\}$, but I don't see how to relate this to our problem, because we need the result for any $x$, not just for $x=0$. Some hint or solution will be appreciated, thank you.
Just for the record I will add a second proof. We know that $$e^{ix}=\cos(x)+i\sin(x)$$ and $|e^{ix}|=1$ for all $x\in\Bbb R$. And we want to prove $$\frac{|e^{ix}-1|}{|x|}\le 1$$ From the triangle inequality and $|e^{ix}|=1$ we have the bound $$|e^{ix}-1|\le |e^{ix}|+1=2\le|x|$$ then for $|x|\ge 2$ the inequality is clear. Now, we will study the case for $|x|<2$. From Euler's formula we have that $$\frac{|e^{ix}-1|}{|x|}=\frac{|\cos(x)+i\sin(x)-1|}{|x|}=\\=\frac{\sqrt{(1-\cos (x))^2+\sin^2(x)}}{|x|}=\frac{\sqrt{2(1-\cos(x))}}{|x|}\le 1$$ Then our inequality can be written as $$1-\cos(x)=\sum_{k=1}^\infty(-1)^{k+1}\frac{x^{2k}}{(2k)!}\le \frac{|x|^2}2\implies\sum_{k=2}^\infty(-1)^k\frac{x^{2k}}{(2k)!}\ge 0$$ Now observe that for $|x|<2$ and $k\ge 2$ the sequence $(x^{2k}/(2k)!)$ decreases monotonically. Then for these alternating series we have the bound $$|s-s_n|\le |c_{n+1}|$$ where $s:=\sum_{k=0}^\infty (-1)^k c_k$ is an alternating series where $(c_n)\to 0$ monotonically, $s_n$ is a partial sum of the series, and $c_{n+1}$ is an element of the monotone sequence. Then $$\left|\sum_{k=2}^\infty(-1)^k\frac{x^{2k}}{(2k)!}-\frac{x^4}{4!}\right|\le\frac{x^6}{6!}\implies \sum_{k=2}^\infty(-1)^k\frac{x^{2k}}{(2k)!}\ge \frac{x^4}{4!}-\frac{x^6}{6!}=\frac{x^4}{4!}\left(1-\frac{x^2}{30}\right)\ge 0$$ whenever $|x|<2$.$\Box$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find an inverse of a multivariable function? I have a function $f:\mathbb{R}^{2} \rightarrow \mathbb{R}^{2}$ defined as: $$f(x,y) = (3x-y, x-5y)$$ I proved that it's a bijection, now I have to find the inverse function $f^{-1}$. Because $f$ is a bijection, it has an inverse and this is true: $$(f^{-1}\circ f)(x,y) = (x,y)$$ $$f^{-1}(3x-y,x-5y) = (x,y)$$ I don't know where to go from here. For a one-variable function I would substitute a variable for the argument of $f^{-1}$, express $x$ in terms of that variable, and then just switch places. I tried to do a substitution like this: $$3x-y = a$$ $$x-5y = b$$ And then express $x$ and $y$ in terms of $a$ and $b$, and get this: $$f^{-1}(x,y) = (\frac{15x-3y}{42}, \frac{x-3y}{14})$$ But I'm not sure if I'm allowed to swap $x$ for $a$, and $y$ for $b$. Any hint is highly appreciated.
You can split this into two separate functions $u, v:\Bbb R^2\to \Bbb R$ the following way: $$ u(x, y) = 3x-y\\ v(x, y) = x-5y $$ and we have $f(x, y) = (u(x, y), v(x, y))$. What we want is $x$ and $y$ expressed in terms of $u$ and $v$, i.e. solve the above set of equations for $x$ and $y$, so that you get two functions $x(u, v), y(u, v)$. Then $f^{-1}(u, v) = (x(u,v), y(u,v))$.
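If you want to let a computer do the elimination, the same solve fits in a couple of lines (a sketch of mine using sympy; the symbol names $u,v$ follow the answer above):

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')
sol = sp.solve([sp.Eq(3*x - y, u), sp.Eq(x - 5*y, v)], [x, y])
print(sol)  # {x: 5*u/14 - v/14, y: u/14 - 3*v/14}
```

So $f^{-1}(u,v)=\left(\frac{5u-v}{14},\,\frac{u-3v}{14}\right)$, which agrees with the (unsimplified) expression found in the question.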
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to determine the number of coin tosses to identify one biased coin from another? If coin $X$ and coin $Y$ are biased, and have probabilities $p$ and $q$ respectively of turning up heads, then given one of these coins at random, how many times must the coin be flipped in order to identify whether we're dealing with coin $X$ or coin $Y$? We assume a 0.5 chance that we can get either coin.
Some factors to think about: * *How different are the probabilities? *How sure do we want to be? If the probabilities of heads are close together, like $p=.501$ and $q=.500$, it will take many trials to really see any difference, but if the probabilities of heads are drastically different, like $p=.9$ and $q=.4$, fewer trials are needed. All we need is to tell whether the two ranges of statistically plausible outcomes overlap at all. Doing the math: Mean number of heads for X: $np$ | Standard Deviation for X: $\sqrt{np(1-p)}$ Mean number of heads for Y: $nq$ | Standard Deviation for Y: $\sqrt{nq(1-q)}$ In order to determine that we have one coin, we need to be sure we DON'T have the other coin. If we do enough trials and our head-success count is outside of our allowed parameters, we decide it is the other coin. For example, X has probability $.3$, Y has probability $.6$. We will use an $a=.05$ significance value. Therefore, Mean X$=.3n$, Standard Deviation X$=\sqrt{.3n(1-.3)}\approx.46\sqrt{n}$. We run $n$ trials and get a proportion $g$. test$=$normalpdf$(.3n,.46\sqrt{n},gn)$ If this test probability is less than $.05$, we conclude at the $a=.05$ significance level that the coin's bias is different enough from coin X's bias for the coin not to be coin X. But if the test probability is more than $.05$, we fail to prove, at the $a=.05$ level, that the coin is not coin X. Does this help?
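A rough rule of thumb in code (my own simplification of the above, not a rigorous test): demand that the two head-count distributions sit $z$ standard deviations away from the midpoint between their means, which gives $n \ge (2z\,\sigma/|p-q|)^2$ with $\sigma$ the worst-case per-flip standard deviation:

```python
import math

def trials_needed(p, q, z=1.96):
    """Heuristic flip count separating Binomial(n, p) from Binomial(n, q)
    by z standard deviations on each side of the midpoint (normal approx.)."""
    sd = max(math.sqrt(p * (1 - p)), math.sqrt(q * (1 - q)))
    return math.ceil((2 * z * sd / abs(p - q)) ** 2)

print(trials_needed(0.3, 0.6))      # 41 -- clearly different coins
print(trials_needed(0.500, 0.501))  # 3841600 -- nearly identical coins need millions
```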
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Uniqueness of multiplicative inverses in $\Bbb Z_n$ (or any abelian monoid) Assume that an integer $a$ has a multiplicative inverse modulo an integer $n$. Prove that this inverse is unique modulo $n$. I was given a hint that proving this Lemma: \begin{align} n \mid ab \ \wedge \ \operatorname{gcd}\left(n,a\right) = 1 \qquad \Longrightarrow \qquad n \mid b \end{align} should help me in finding the answer. Here are my steps in trying to solve the problem: \begin{align} \operatorname{gcd}\left(n,a\right) = 1 & \qquad \Longrightarrow \qquad sn + ta = 1 \qquad \Longrightarrow \qquad sn = 1 - ta \\ & \qquad \Longrightarrow \qquad 1 \equiv ta \mod n \qquad \Longrightarrow \qquad ta \equiv 1 \mod n . \end{align} I know that having the GCD of $n$ and $a$ equal to 1 proves there is a multiplicative inverse mod $n$, but I'm not sure how to prove $n \mid b$ and how it helps prove the multiplicative inverse is unique.
What you have shown already is a technique that, using the Extended Euclidean Algorithm, will give you the inverse (if it exists). Note also that you can determine if an element $a \mod n$ has an inverse by checking that $a$ and $n$ have no common divisors other than $1$ (that is, $\gcd(a,n)=1$). Now to answer your question. Suppose that you have two multiplicative inverses $b,c$ such that $ab\equiv 1 \equiv ac \mod n$. (Then you also have $ba\equiv 1 \equiv ca \mod n$) Consider $b \equiv b\cdot 1 \equiv bac \equiv 1\cdot c \equiv c \mod n$. That shows that modulo $n$, there exists a unique inverse. This technique is very useful and you can also use it to show that: there exists exactly one $0$-vector in every vector space, group, ring... and every invertible element in a group, ring... has a unique inverse. Note: For instance $2\cdot 3 \equiv 1 \mod 5$, but also $2\cdot 8 \equiv 1 \mod 5$ (Why is this not a contradiction to the above?)
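Since the Extended Euclidean Algorithm is mentioned above, here is a minimal implementation (mine, for illustration) that computes the Bézout coefficients and hence the inverse when it exists:

```python
def egcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = egcd(b, a % b)
    return g, t, s - (a // b) * t

def mod_inverse(a, n):
    g, s, _ = egcd(a, n)
    if g != 1:
        raise ValueError("no inverse: gcd(a, n) != 1")
    return s % n

print(mod_inverse(3, 5))  # 2, since 3*2 = 6 == 1 (mod 5)
```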
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
For any prime number $p$ and natural number $i < p$, prove that $p$ divides ${p \choose i}$. For any prime number $p$ and natural number $i < p$, prove that $p$ divides ${p \choose i}$. Also, what happens when $p$ is not a prime. Is this still true? I tried writing out the formula for combination but couldn't get further.
By a combinatorial argument, or by manipulating factorials, we have $$i\binom{p}{i}=p\binom{p-1}{i-1}\ .$$ Since $\binom{p-1}{i-1}$ is an integer, $$p\mid i\binom{p}{i}\ .$$ But $p$ is prime and $1\le i<p$, so $p$ and $i$ have no common factor, so $$p\mid \binom{p}{i}\ .$$ For the case when $p$ is not prime, just take $p=4$ and try out various values of $i$.
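To spell out the composite case suggested above: $\binom{4}{2}=6$ and $4\nmid 6$, so the statement fails for $p=4$ (even though $\binom{4}{1}=\binom{4}{3}=4$ happen to be divisible by $4$).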
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How does the Laplace transform ℒ{ sin(t)/t } solve the definite integral ∫ (sin(t)/t) dt from 0 to ∞? How does the answer for the Laplace transform $$\mathcal L \left\{ \frac{\sin t}{t} \right\}= \frac{\pi}{2}-\tan^{-1}(s)$$ solve the definite integral $$\int_0^{\infty} \frac{\sin t}{t} dt = \frac{\pi}{2} $$ How are they related? Why does this solve the definite integral? Thank you.
Your statement is $$\int_0^{\infty} dt \frac{\sin{t}}{t} e^{-s t} = \frac{\pi}{2} - \arctan{s} $$ Plug in $s=0$ to both sides. There are lots of ways to prove the Laplace transform itself. One way to do it is to use the Fourier-transform relation for the sinc term.
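For completeness, here is one standard way to obtain the quoted transform (a sketch using differentiation under the integral sign; it is not part of the posted answer): with $F(s)=\int_0^\infty \frac{\sin t}{t}e^{-st}\,dt$ for $s>0$, $$F'(s)=-\int_0^\infty \sin t\; e^{-st}\,dt=-\frac{1}{s^2+1},$$ so $F(s)=C-\arctan s$ for some constant $C$; since $F(s)\to0$ as $s\to\infty$, we get $C=\pi/2$, i.e. $F(s)=\frac{\pi}{2}-\arctan s$.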
{ "language": "en", "url": "https://math.stackexchange.com/questions/2033868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Length of Hilbert Curve in 3 Dimensions The Hilbert Curve is a continuous space filling curve. The length of the $n^{th}$ iteration in two dimensions can be calculated by $2^n-\frac{1}{2^n}$. The curve can be generalized to fill volumes; what is the length of the $n^{th}$ iteration of the Hilbert Curve in three dimensions?
In $2$ dimensions the square is divided into $2^n$ by $2^n$ smaller squares ($n$ is the iteration). Each square contains two half-segments of the curve, each of length $1/2^{n+1}$, except for the two end squares, which contain only one, giving us $\frac{2(2^n \cdot 2^n -1)}{2^{n+1}} = 2^n - \frac{1}{2^n}$. In 3 dimensions there are still two half-segments in all but the two end cubes, so we need only adjust things to account for the $3$rd axis, giving $\frac{2(2^n \cdot 2^n \cdot 2^n -1)}{2^{n+1}} = 4^n - \frac{1}{2^n}$. In general we have $\frac{2(2^{dn}-1)}{2^{n+1}} = 2^{(d-1)n} - \frac{1}{2^n}$, where $d$ stands for the number of dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determine the original group according to its quotient Suppose that $G$ is a group and the group of integers $\mathbb Z$ is its normal subgroup with $G/\mathbb Z\cong\mathbb Z$. Then can I say that $G$ is isomorphic to $\mathbb Z\times\mathbb Z$?
The answer is no. ${\mathbb Z}^2$ is the only abelian group with that property, but there is a nonabelian group defined by the presentation $\langle x,y \mid y^{-1}xy=x^{-1} \rangle$. Let $x$ generate the normal infinite cyclic subgroup $N$ and choose $y \in G$ such that $yN$ is a generator of $G/N$. Then $G=\langle x,y \rangle$ and, since $y^{-1}xy$ must also generate $N$, we have $y^{-1}xy = x$ or $x^{-1}$. So there are just two isomorphism classes of groups with this property.
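As an aside (this identification is mine, not part of the answer): the nonabelian group above is the fundamental group of the Klein bottle, i.e. the semidirect product $\mathbb{Z}\rtimes\mathbb{Z}$ in which a generator of the acting copy of $\mathbb{Z}$ inverts the normal copy.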
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $A^2=0$, then $\mathrm{rank}(A) \le \frac{n}{2}$ For my matrix algebra class I need to prove the following: If $A^2=0$, prove $\mathrm{rank}(A) \le \frac{n}{2}$. So if A is nilpotent prove $\mathrm{rank}(A) \le \frac{n}{2}$. I know already how to solve this, but my initial way of solving is false. I am looking for the mistake, but cannot find one. I know there already exists a question where this is asked. I'm just curious about my particular mistake. Proof: $A=\begin{bmatrix}a_1&&a_2&& ...&&a_n\end{bmatrix}$ and $A=\begin{bmatrix}a^T_1\\a^T_2\\...\\a^T_n\end{bmatrix}$ $AA=\begin{bmatrix}a_1 && a_2 && ... && a_n\end{bmatrix}\begin{bmatrix}a^T_1\\a^T_2\\...\\a^T_n\end{bmatrix}$ $=\begin{bmatrix}a_1a^T_1&&...&&a1a^T_n\\...\\a_na^T_1 && ...&& a_na^T_n\end{bmatrix}$ $=\begin{bmatrix} a_1 \cdot a_1&&...&&a_1 \cdot a_n\\...\\a_n \cdot a_1 && ...&& a_n \cdot a_n\end{bmatrix}$ $=0_{nxn}$ So we know the diagonal is zero, thus $a_i \cdot a_i = 0$, and this equals $\vert\vert{a_i}\vert\vert^2$. The square root of this equals the length, therefore the length is equal to 0. The only vector with this property is the zero vector. Therefore all vectors $a_i$ must be equal to the zero vector. The latter, however, is not true; is there anyone who can spot my mistake?
Pretty late to the party but adding another way of solving since it's different and uses a nice idea, in case anyone stumbles upon this post for help. We know from the Sylvester inequality $$\mathrm{rank}(XY)\geq\mathrm{rank}(X)+\mathrm{rank}(Y)-n$$ Setting $X=Y=A$ we get $$\mathrm{rank}(AA)\geq\mathrm{rank}(A)+\mathrm{rank}(A)-n$$ or equivalently $$\mathrm{rank}(A^2)\geq2\mathrm{rank}(A)-n$$ and since $$\mathrm{rank}(A^2)=0$$ because $A^2=O_n$, we get $$0\geq2\mathrm{rank}(A)-n$$ or equivalently $$\mathrm{rank}(A)\leq \frac{n}{2}$$ and we are done . $\square$
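For completeness, here is my reading of where the attempted proof breaks down (the answer above does not address this part of the question): writing $A$ simultaneously as a row of columns $a_i$ and as a column of rows $a_i^T$ silently assumes $A=A^T$. In general the diagonal entries of $A^2$ are $\sum_k a_{ik}a_{ki}$, not $\|a_i\|^2$, so nothing forces the rows to vanish. A tiny numerical illustration:

```python
import numpy as np

A = np.array([[0., 1.],
              [0., 0.]])          # A != 0, yet A @ A = 0
assert np.allclose(A @ A, 0)
print(np.linalg.matrix_rank(A))   # 1, consistent with rank(A) <= 2/2
```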
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Gradient of quadratic forms involving matrix powers Let $f:\mathbb{R}^{n \times n} \to \mathbb{R}$ be defined as: $$ f(A)= x^T (A^2)^i y + v^T A^i w, $$ where $i \in \mathbb{N}$ and $x,y,v,w$ are some fixed column vectors. One can assume that $A$ is a symmetric matrix. I am interested in computing the gradient of $f$ with respect to $A$. The only rule that I know is: $$ \frac{\partial x^T (A^TA)y }{\partial A}=A(xy^T+yx^T). $$ Can anyone help me to find the derivative $\frac{\partial f(A)}{\partial A}$?
Warning: Lots of tedious algebra below, so there's definitely room for errors. I'll see if I can validate this result by other means. To make the notation less confusing, I'll write the exponent as $N$ rather than $i$ so that I can use lower-case Latin letters such as $i$ for indices. I will also adopt the Einstein convention, i.e. doubled indices are to be summed over. First, let's translate $f(A)$ into a sum over indices. Inserting dummy indices for each of the matrix multiplications yields \begin{align} v^T A^N w &=v_k (A^N)_{kl}w_l \\ &=v_k A_{kj_1}A_{j_1j_2}\cdots A_{j_{N-1}l}w_l,\\\\ x^T (A^2)^N y &=x_k (A^2)^N_{kl}y_l\\ &=x_k(A^2)_{k j_1}(A^2)_{j_1j_2}\cdots(A^2)_{j_{N-1}l}y_l\\ &=x_kA_{ki_1}A_{i_1j_1}A_{j_1i_2}A_{i_2j_2}\cdots A_{j_{N-1}i_N}A_{i_N l}y_l. \end{align} We can now differentiate with respect to a matrix element $A_{ab}$. We have $(\partial A_{ij}/\partial A_{ab})=\delta_{ia}\delta_{jb}$, so the linear term gives \begin{align} \frac{\partial}{\partial A_{ab}}\left(v^T A^N w\right) &=v_k (\delta_{ka}\delta_{j_1b})A_{j_1j_2}\cdots A_{j_{N-1}l}w_l+\cdots+v_k A_{kj_1}A_{j_1j_2}\cdots (\delta_{j_{N-1}a}\delta_{lb})w_l \\ &=v_aA_{bj_1}\cdots A_{j_{N-1}l}w_l+\cdots+v_k A_{kj_1}A_{j_1j_2}\cdots A_{j_{N-2}a}w_b \\ &=(v^T)_a (A^{N-1} w)_b+(v^T A)_a(A^{N-2}w)_b+\cdots+(v^T A^{N-1})_a (w)_b\\ &=(v)_a (w^T A^{N-1})_b+(Av)_a(w^T A^{N-2})_b+\cdots+(A^{N-1}v)_a (w^T)_b. \end{align} where in the last line I have both used $A^T=A$ and swapped column vectors with row vectors (and vice versa). Similarly, for the quadratic term (going directly to the result) we obtain \begin{align} \frac{\partial}{\partial A_{ab}}\left(x^T (A^2)^N y\right) &=(x^T)_a (A^{2N-1}y)_b+(x^T A)_a (A^{2N-2}y)_b+\cdots +(x^T A^{2N-1})_a (y)_b\\ &=x_a (y^T A^{2N-1})_b+(Ax)_a (y^T A^{2N-2})_b+\cdots +(A^{2N-1}x)_a (y^T)_b\\ \end{align} Since $\left(\frac{\partial}{\partial A} f(A)\right)_{ab}=\frac{\partial}{\partial A_{ab}} f(A)$, we can combine these two terms and place them in matrix form: $$\boxed{\frac{\partial}{\partial A} f(A)=\left(x y^T A^{2N-1}+A x y^T A^{2N-2}+\cdots+A^{2N-1} xy^T\right)\\\hspace{2cm}+ \left( vw^T A^{N-1}+Avw^T A^{N-2}+\cdots A^{N-1}vw^T \right).}$$
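One way to gain confidence in the boxed formula is a finite-difference check (my own verification script, using numpy; it tests the gradient at a random symmetric $A$, entry by entry):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 4, 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # random symmetric A
x, y, v, w = (rng.standard_normal(n) for _ in range(4))

def f(A):
    return x @ np.linalg.matrix_power(A, 2 * N) @ y + v @ np.linalg.matrix_power(A, N) @ w

def grad(A):
    G = np.zeros((n, n))
    for j in range(2 * N):   # sum_j A^j x y^T A^{2N-1-j}
        G += np.linalg.matrix_power(A, j) @ np.outer(x, y) @ np.linalg.matrix_power(A, 2 * N - 1 - j)
    for j in range(N):       # sum_j A^j v w^T A^{N-1-j}
        G += np.linalg.matrix_power(A, j) @ np.outer(v, w) @ np.linalg.matrix_power(A, N - 1 - j)
    return G

# central finite differences, one matrix entry at a time
eps, FD = 1e-6, np.zeros((n, n))
for a in range(n):
    for b in range(n):
        E = np.zeros((n, n)); E[a, b] = eps
        FD[a, b] = (f(A + E) - f(A - E)) / (2 * eps)

print(np.max(np.abs(FD - grad(A))))  # should be tiny (finite-difference error only)
```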
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find symmetric matrix of polynomial equation Let $f$ be the polynomial below, where $(x, y, z, t)$ ranges over $\mathbb{R}^4$: $$f(x) = 4x^2+4y^2+4z^2+4t^2+8xy+6xt+6yz+8zt$$ Find the symmetric matrix $A$ and determine whether $A$ is positive definite or not. I understand how to find a symmetric matrix and check whether or not it is positive definite. I'm having trouble starting. Would I change this equation to a matrix? Any guidance on how to start this problem would be appreciated.
$\begin{eqnarray*}f(x) &=& 4x^2+4y^2+4z^2+4t^2+8xy+6xt+6yz+8zt\\ &=&\begin{bmatrix} x&y&z&t \end{bmatrix} \begin{bmatrix} 4&4&0&3\\4&4&3&0\\0&3&4&4\\3&0&4&4 \end{bmatrix} \begin{bmatrix}x\\y\\z\\t\end{bmatrix} \end{eqnarray*} $ Its eigenvalues are $-3$, $3$, $5$, $11$, so it's not positive definite.
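A quick numerical confirmation of the eigenvalues (my own check, using numpy):

```python
import numpy as np

A = np.array([[4, 4, 0, 3],
              [4, 4, 3, 0],
              [0, 3, 4, 4],
              [3, 0, 4, 4]], dtype=float)
print(np.linalg.eigvalsh(A))  # [-3.  3.  5. 11.] -- one negative eigenvalue,
                              # so the form is indefinite, not positive definite
```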
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that a linear subspace of $C([0,1])$ is closed Prove that the set $$ W= \{f \in C([0, 1]):f(0)=0\}$$ is a closed linear subspace of $C([0,1]).$ Here $C([0, 1])$ is the space of continuous functions equipped with the uniform norm $||f||_\infty = \sup_{x\in[0,1]}|f(x)|$. I need some help in how to proceed with this problem. First, I want to prove that $W$ is, indeed, a subset of $C([0,1])$. For this, we have that for any two functions $f,g\in W$, $f+g\in W$, since $f(0) + g(0) = 0$. Then, also for any $\lambda \in \mathbb{R}$, we have that $\lambda f \in W$, since $\lambda f(0) = 0$. Finally, we also have $0 \in W$. So $W$ is a linear subspace of $C([0, 1])$. My problem is how to prove that $W$ is closed. Based on the definition of a closed set, I wanted to show that a sequence of continuous functions $f_n$ converges uniformly to $f$ and that this $f$ is also in $W$. I started with something like this: Let $f_n$ be a sequence of continuous functions such that $f_n(0) = 0$ for any $n\in \mathbb{N}$. We need to prove that $f_n \to f$, uniformly. i.e $$ \|f_n - f\|_\infty = \sup_{x\in[0,1]} |f_n(x) - f(x)|\to 0$$ as $n\to \infty$. How can I proceed?
The linear functional $T(x)=x(0)$ is a continuous linear map from $C[0,1]$ to $\mathbb{C}$ because it is bounded, i.e., $|T(x)|=|x(0)| \le \|x\|$. Therefore the inverse image of $\{0\}$ under $T$, which is the subspace $W$, is closed.
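Not part of the posted answer, but to finish the sequential route the question started: suppose $f_n\in W$ and $\|f_n-f\|_\infty\to0$ for some $f\in C([0,1])$. Then $$|f(0)|=|f(0)-f_n(0)|\le\sup_{x\in[0,1]}|f(x)-f_n(x)|=\|f-f_n\|_\infty\to0,$$ so $f(0)=0$ and $f\in W$. Hence $W$ contains the limit of every convergent sequence of its elements, which is exactly what it means for $W$ to be closed in $(C([0,1]),\|\cdot\|_\infty)$.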
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A condition for irreducibility Let $X$ be a closed projective set. Prove that $X$ is an irreducible set if and only if $X \cap U_i$ is irreducible for every $i=0,\dots,n$, where $\cup U_i$ is an open cover of $\mathbb{P^n}$. For "$\Rightarrow$" I succeeded.
It is a topological exercise! Let $X$ be a topological space, let $\{U_i\}_{i\in I}$ be an open covering of $X$; a subset $Y$ of $X$ is irreducible only if $\forall i\in I,\,Y\cap U_i=Y_i$ is irreducible. Vice versa, if $Y_i$ is irreducible and $\forall i,j\in I,\,Y\cap U_i\cap U_j=Y_{ij}\neq\emptyset$ then $Y$ is irreducible. Proof. Let $Y\subseteqq X$; if $Y$ is reducible, let $Z$ be an irreducible component of $Y$, then \begin{equation*} \exists h\in I\mid Y_h=Z\cap U_h\neq\emptyset \end{equation*} because $Y_h$ and $Z\cap U_h$ are irreducible. By hypothesis $\forall k\in I,\,Z\cap U_h\cap U_k=Z_{hk}$ is a non-empty open subset of $Z$ and so $Z_{hk}$ is dense in $Z$. By the same reasoning, $Z_{hk}$ is a non-empty, open and dense subset of $Y_k$; then \begin{equation*} \forall k\in I,\,Z_{hk}\subseteqq Y_k\Rightarrow Y_k\subseteqq Z\Rightarrow Z=Y. \end{equation*} The other implication is tautological! Q.E.D. $\Box$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2034844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Functional in Hilbert space Let $H$ be a Hilbert space and $0\neq x\in H$. I want to prove that there is a unique $f\in H^*$ such that $\|f\|=1$ and $f(x)=\|x\|$. Any ideas on how to approach this problem?
Let $K=\Bbb R\text{ or } \Bbb C$ be the field of scalars. Let $V$ be the subspace of $H$ defined by $$V=\{\lambda x\}_{\lambda\in K}$$ Consider the continuous form $\phi\in V^*$ defined by $$\phi(\lambda x)=\lambda\space ||x||$$ It is clear that $\phi( x)= ||x||$. Besides, for all $\lambda x\in V$ one has $$|\phi(\lambda x)|=|\lambda|\space ||x||=||\lambda x||$$ Hence $$||\phi||_V=1$$ Now by the Hahn-Banach theorem we can extend $\phi$ to the whole $H$ with preservation of the norm. (In this way we get a form $f\in H^*$ such that the restriction $f_{|V}=\phi$ and $||f||=||\phi||=1$)
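The argument above settles existence; uniqueness is not addressed there, so here is a sketch (mine, using the Riesz representation theorem). Every $f\in H^*$ has the form $f(y)=\langle y,z\rangle$ for a unique $z\in H$, with $\|f\|=\|z\|$. The conditions $\|z\|=1$ and $\langle x,z\rangle=\|x\|$ mean that equality holds in the Cauchy-Schwarz inequality $|\langle x,z\rangle|\le\|x\|\,\|z\|$, which forces $z=\lambda x$ for a scalar $\lambda$; then $\langle x,\lambda x\rangle=\bar\lambda\|x\|^2=\|x\|$ gives $z=x/\|x\|$. So $f$ is uniquely determined.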
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula As far as every question I've seen concerning "what is $\sum_{k=1}^nk^p$" is always answered with "Faulhaber's formula" and that is just about the only answer. In an attempt to make more interesting answers, I ask that this question concern the problem of "Methods to compute $\sum_{k=1}^nk^p$ without Faulhaber's formula for fixed $p\in\mathbb N$". I've even checked this post of common questions without finding what I want. Rule #1: Any method to compute the sum in question for arbitrary $p$ is good, either recursively or in some manner that is not in itself a closed form solution. Even algorithms will suffice. Rule #2: I don't want answers confined to "only some values of $p$". (A good challenge I have on the side is a generalized geometric proof, as that I have not yet seen) Exception: If your answer does not generalize to arbitrary $p$, but it still generalizes to an infinite amount of special $p$'s, that is acceptable. Preferably, the method is to be easily applied, unique, and interesting. To start us off, I have given my answer below and I hope you all enjoy.
By the binomial theorem, $$(x+1)^{n+1}=\sum_{h=0}^{n+1} {n+1 \choose h}x^h$$ $$(x+1)^{n+1}-x^{n+1}=\sum_{h=0}^n {n+1 \choose h}x^h$$ Sum this equality for $x=1,\dotsb,k$: $$\sum_{x=1}^k((x+1)^{n+1}-x^{n+1})=(k+1)^{n+1}-1=\sum_{x=1}^k\sum_{h=0}^n {n+1 \choose h}x^h=\sum_{h=0}^n{n+1 \choose h}\sum_{x=1}^kx^h=(n+1)\sum_{x=1}^kx^n+\sum_{h=0}^{n-1}{n+1 \choose h}\sum_{x=1}^kx^h$$ Which means $$(n+1)\sum_{x=1}^kx^n=(k+1)^{n+1}-1-\sum_{h=0}^{n-1}{n+1 \choose h}\sum_{x=1}^kx^h$$ So you can find the sum of the $n$th powers if you have all the previous ones. The base case is $$\sum_{x=1}^kx^0=k$$ Then $$\sum_{x=1}^kx^1=\frac{1}{2}\left((k+1)^2-1-{2 \choose 0}k\right)=\frac{k^2+k}{2}$$ $$\sum_{x=1}^kx^2=\frac{1}{3}\left((k+1)^3-1-{3 \choose 0} k - {3 \choose 1} \frac{k^2+k}{2}\right)=\frac{k^3}{3}+\frac{k^2}{2}+\frac{k}{6}$$ Etcetera.
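The recursion translates directly into code (a sketch of mine using exact rational arithmetic; for large $n$ one would memoize the recursive calls):

```python
from fractions import Fraction
from math import comb

def power_sum(n):
    """Coefficients c with sum_{x=1}^k x**n == sum_i c[i] * k**i."""
    if n == 0:
        return [Fraction(0), Fraction(1)]                 # sum_{x=1}^k x^0 = k
    c = [Fraction(comb(n + 1, i)) for i in range(n + 2)]  # (k+1)^{n+1} ...
    c[0] -= 1                                             # ... minus 1 ...
    for h in range(n):                                    # ... minus C(n+1,h) * S_h(k)
        for i, a in enumerate(power_sum(h)):
            c[i] -= comb(n + 1, h) * a
    return [a / (n + 1) for a in c]                       # divide by n+1

print(power_sum(2))  # [0, 1/6, 1/2, 1/3], i.e. k/6 + k^2/2 + k^3/3
```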
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "38", "answer_count": 14, "answer_id": 1 }
Interchanging the sides of a Heegaard splitting Suppose that $M$ is a closed orientable connected $3$-manifold and that $M = U \cup V$ is a Heegaard splitting of $M$ (i.e. $U$ and $V$ are both handlebodies with a common boundary). What are some necessary/sufficient conditions for there to exist a homeomomrphism $h : M \to M$ where $h(U) = V$ and $h(V) = U$?
For what it's worth, your question can be translated into a group theoretic criterion, expressed in the mapping class group $\text{MCG}(S)$ of the surface $S = U \cap V$; this is the group of homeomorphisms of $S$ modulo the normal subgroup of homeomorphisms isotopic to the identity. A homeomorphism $h : M \to M$ which swaps the two sides $U$ and $V$ restricts to a homeomorphism $\Phi : S \to S$. Furthermore, if one considers all homeomorphisms $\Phi : S \to S$, the ones that extend to a side-swapping homeomorphism of $M$ are invariant up to composition by homeomorphisms of $S$ isotopic to the identity. Thus your question can be equivalently reformulated as follows: * *Question: What is a necessary and sufficient condition for the existence of $\phi \in \text{MCG}(S)$ that is represented by the restriction of a side-swapping homeomorphism of $M$? Let $\text{MCG}_U(S) < \text{MCG}(S)$ denote the subgroup of all mapping classes that are represented by the restriction to $S$ of a homeomorphism of the handlebody $U$ (this is a well-studied subgroup of $\text{MCG}(S)$, used by Masur, Canary, and others in understanding hyperbolic structures on handlebodies with applications to the proof of Thurston's ending lamination conjecture). Similarly denote $\text{MCG}_V(S)$. These two subgroups are conjugate in $\text{MCG}(S)$: the set of conjugators is represented by the restrictions to $S$ of homeomorphisms $U \mapsto V$. Furthermore this set of conjugators is a left coset of the normalizer $N_U < \text{MCG}(S)$ of $\text{MCG}_U(S)$ which I'll denote $N^V_U$. Similarly let $N^U_V$ denote the left coset of the normalizer $N_V < \text{MCG}(S)$ of $\text{MCG}_V(S)$ represented by the restrictions to $S$ of homeomorphisms from $V \mapsto U$. Notice that $(N^U_V)^{-1} = N^V_U$. * *Answer: $N^V_U \cap (N^V_U)^{-1} \ne \emptyset$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integrate $\int_0^{\pi/2}\frac{\cos^2x}{a\cos^2x + b\sin^2x}\,dx$ I don't know how to deal with this integral $$I=\displaystyle\int_0^{\pi/2}\frac{\cos^2x}{a\cos^2x + b\sin^2x}\,dx$$ I reached the step $$I =\displaystyle\ \int_0^{\pi/2}\frac{1}{a + b\tan^2x}dx$$ Now what should I do? Please help.
Now substitute $\displaystyle\ u=\tan x \implies x= \tan^{-1} u \implies dx=\frac{1}{1+u^2} du$ $\displaystyle\ =\int_0^{\infty}\frac{1}{(1+u^2)(a + bu^2)}du$ Partial Fractions $\displaystyle\ =\int_0^{\infty}\frac{1}{(1+u^2)(a + bu^2)}du= \frac{1}{a-b} \left(\int_{0}^{\infty} \frac{du}{1+u^{2}} - b \int_{0}^{\infty} \frac{du}{a+bu^{2}} \right)$ Evaluating $\displaystyle\ =\frac{1}{a-b} \left(\int_{0}^{\infty} \frac{du}{1+u^{2}} - b \int_{0}^{\infty} \frac{du}{a+bu^{2}} \right)$ $\displaystyle\ =\frac{1}{a-b} \left(\Big[\tan^{-1} u\Big]_0^{\infty} - b \left[\frac{1}{\sqrt{ab}}\tan^{-1}\!\left(\frac{\sqrt{b}\,u}{\sqrt{a}}\right)\right]_0^{\infty}\right)$ Put in the values.
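Putting in the values (this last step is carried out here for completeness, assuming $a,b>0$ and $a\neq b$): both bracketed antiderivatives tend to $\frac{\pi}{2}$ at the upper limit and vanish at $0$, so $$I=\frac{1}{a-b}\left(\frac{\pi}{2}-\frac{b}{\sqrt{ab}}\cdot\frac{\pi}{2}\right)=\frac{\pi}{2}\cdot\frac{1-\sqrt{b/a}}{a-b}=\frac{\pi}{2\sqrt{a}\,\bigl(\sqrt{a}+\sqrt{b}\bigr)}.$$ As a sanity check, letting $b\to a=1$ gives $\frac{\pi}{4}=\int_0^{\pi/2}\cos^2 x\,dx$, as it should.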
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
I'm looking for some mathematics that will challenge me as a year $12$ student. I am an upcoming year $12$ student, school holidays are coming up in a few days and I've realised I'm probably going to be extremely bored. So I'm looking for some suggestions. I want a challenge, some mathematics that I can attempt to learn/master. Obviously nothing impossible, but mathematics is my number $1$ favorite thing and I really want something to keep me busy and something that can further my understanding of mathematics. Also I would be interested in any mathematically focused book suggestions. So far in school I've done the usual: Matrices, transformation matrices, Sine, Cosine and Tangent (graphs and proofs), lots and lots of parabolas/quadratics, statistics, growth and decay, calculus intro, calculus differentiation and integration, vectors, proof by induction and complex numbers. Any suggestions would be heavily appreciated.
You could look at some of the Questions tagged recreational-mathematics, soft-question and big-list (sort them by "votes").
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 16, "answer_id": 15 }
Calculate the expected value of X I have no idea how to solve this problem, any help would be greatly appreciated: During the course of $9$ lessons the teacher randomly selects one student (from a class of $30$), asks him several questions and either grades him (with the probability of $1/3$) or not (with the probability of $2/3$). It is possible for the same student to be chosen on more than one lesson. Let $X$ be the total number of students with at least one grade at the end of those $9$ lessons and calculate the expected value of $X$.
Let $X_i$ be the indicator random variable that the $i$-th student is selected and graded at least once in the nine lessons. (Having a value of $1$ if the event happens, or $0$ otherwise.) Then the count of students graded is: $\sum\limits_{i=1}^{30} X_i$ Now find the expectation of this count. (Hint: use the Linearity of Expectation.)

Follow-up from the asker: "Ok, it is my bad for not having said what the problem is: 1) I have basically thought of defining $X_i$ as 1 if the event occurs and 0 if it doesn't. My problem is with working out $P(X_i=1)$."

It is the probability that a particular student is selected and graded at least once in a sequence of nine lessons. In any particular lesson, the probability that that student is selected is $1/30$, and when selected the (conditional) probability of being graded is $1/3$. The probability the student is selected and graded in a particular lesson is therefore obvious. Thus the probability this happens at least once among nine lessons is readily determined (such as by considering the complementary event: that it does not happen in any of them).

Follow-up from the asker: "2) Considering that $P(X_i=1)=0$ for $i > 9$, can we not just sum up to 9?"

We cannot, because that premise is not true. $X_i$ is the indicator that the $i$-th student on the class roll receives some grade. Because there is apparently no bias in selection or grading, for all $i$ in $1$ to $30$, $\mathsf P(X_i=1)$ equals the same value.
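Carrying the hints to a closed form (the explicit numbers here are mine, not the answerer's): in any one lesson a given student is selected and graded with probability $\frac{1}{30}\cdot\frac{1}{3}=\frac{1}{90}$, so $$\mathsf P(X_i=1)=1-\left(\tfrac{89}{90}\right)^{9},\qquad \mathsf E[X]=30\left(1-\left(\tfrac{89}{90}\right)^{9}\right)\approx 2.86.$$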
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question on proof that $|G| = pqr$ is not simple Assume $|G| = pqr$ where $p,q,r$ are primes with $p < q < r$. Then $G$ is not simple. I have a problem understanding the proof (see for example here). In the proof one assumes that $n_p,n_q,n_r > 1$ (number of each $p,q,r$-Sylow subgroups respectively) and then by Sylow we have $$n_r | pq \qquad \text{and} \qquad n_r = 1 + kr, k\in \mathbb{N}_0$$ Now one deduces that $n_r = pq$, which I do not understand.
Since $n_r$ is not $1$, we have $k>0$, which implies $n_r=1+kr > r > q > p$. The only divisor of $pq$ that exceeds $q$ is $pq$ itself, so $n_r = pq$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Determinant of a block anti-diagonal matrix Let $a = \begin{pmatrix} O & \cdots & O & A \\ O & \cdots& B & O\\ \vdots & \ddots & O & O\\ C & \dots & O & O \end{pmatrix}$, where $A, B, C$ are $2n \times 2n $ matrices over ring of integers modulo $m$ that is, $\mathbb{Z}_m$. Is $det(a) = det(AB\dots C)?$
Think of rearranging the matrix, by swapping columns, say, so that it becomes block diagonal. You will have to keep track of the signs.
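To make the sign bookkeeping explicit (my own computation, relying on the stated $2n\times 2n$ block sizes): if there are $m$ blocks, reversing their order can be done with $\binom{m}{2}$ swaps of adjacent block-columns, and each swap of two width-$2n$ blocks amounts to $(2n)^2$ column transpositions, contributing a sign of $(-1)^{(2n)^2}=+1$. Hence $\det(a)=\det(A)\det(B)\cdots\det(C)$, and since the determinant is multiplicative over the commutative ring $\mathbb{Z}_m$, this equals $\det(AB\cdots C)$; so the answer to the question is yes.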
{ "language": "en", "url": "https://math.stackexchange.com/questions/2035936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A doubt about ramification in cyclotomic fields I'm studying the Algebraic Number Theory Notes of Robert B. Ash. I really like his notes, but I don't understand a suggestion he gives at page 7 of chapter 8 (Factoring of prime ideals in Galois Extensions). He's using all the theory of that chapter to discover new properties of cyclotomic fields. So pick $\zeta$ a primitive $m^{th}$ root of unity and let $L=\mathbb Q(\zeta)$, $ A=\mathbb Z$ and $K=\mathbb Q$. Consider a rational prime $p$ that does not divide $m$. Let $B$ be the integral closure of $A$ in $L$, and say that $(p)$ factors in $B$ as $Q_1\cdots Q_g$. We know that the relative degree $f$ is the same for all $Q_i$. He wants to find the Frobenius automorphism $\sigma$ explicitly. We know that $\sigma$ has the property that $\sigma(x)\equiv x^p\pmod {Q_i}$ for all $i$ and for all $x\in B$. From this, why do we deduce that $\sigma(\zeta)=\zeta^p$?
All finite extensions of $\Bbb F_p$ are separable (in fact Galois), so all polynomials are separable (note that some authors take "separable" to mean "distinct roots"; I use the "irreducible factors have distinct roots" definition. If you are using the former, then this is clearly not always true, as $x^{pk}-1$ has repeated roots modulo $p$). Now, the point of the suggestion is that $\zeta\mapsto\zeta^p$ generates the Galois group of $x^m-1$ over $\Bbb F_p$ because it generates the Galois group of every finite extension of $\Bbb F_p$. We know that it permutes the roots of $x^m-1$ in the big field as well, so it lifts to an element of $\text{Gal}(L/\Bbb Q)$. What Ash is referencing is his theorem 8.1.8, which says that the map from the decomposition group is surjective in the case that the residue field extension is separable, on page 3 of his notes. This is used to lift the mod $Q$ Frobenius to the Frobenius element in the big Galois group.
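To make the final deduction explicit (a sketch consistent with Ash's setup, not a quote from his notes): since $p\nmid m$, the polynomial $x^m-1$ is coprime to its derivative $mx^{m-1}$ modulo $p$, so its $m$ roots in the residue field $B/Q_i$ are distinct; in particular the roots of unity $1,\zeta,\dots,\zeta^{m-1}$ remain pairwise distinct mod $Q_i$. Now $\sigma(\zeta)$ is again an $m$-th root of unity, and $\sigma(\zeta)\equiv\zeta^p\pmod{Q_i}$. Since $\zeta^p$ is the only $m$-th root of unity in its residue class, this congruence forces $\sigma(\zeta)=\zeta^p$.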
{ "language": "en", "url": "https://math.stackexchange.com/questions/2036084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }