Specializations of elementary symmetric polynomials Let $\mathcal{S}_{x}=\{x_{1},x_{2},\ldots,x_{n}\}$ be a set of $n$ indeterminates. The $h^{th}$ elementary symmetric polynomial is the sum of all monomials with $h$ factors \begin{eqnarray*} e_{h}(\mathcal{S}_{x}) & = & \sum_{1\leqslant i_{1}<i_{2}<\ldots<i_{h}\leqslant n}x_{i_{1}}x_{i_{2}}\ldots x_{i_{h-1}}x_{i_{h}} \end{eqnarray*} which, from a generating function standpoint, arises as the coefficient of $z^{h}$ in the following product of linear factors \begin{eqnarray*} \prod_{i=1}^{n}(1+x_{i}z) & = & (1+x_{1}z)(1+x_{2}z)(1+x_{3}z)\ldots(1+x_{n}z)\\ & = & \sum_{h=0}^{n}e_{h}(\mathcal{S}_{x})z^{h} \end{eqnarray*} Some usual specializations of the set $\mathcal{S}_{x}$ lead to known families of numbers and multiplicative identities: binomial coefficients for $x_{i}=1$, $q$-binomial coefficients for $x_{i}=q^{i}$, and Stirling numbers of the first kind for $x_{i}=i$: (i) For $\mathcal{S}_{1}=\{1_{1},1_{2},1_{3},\ldots,1_{n}\}$ \begin{eqnarray*} (1+z)^{n} & = & (1+1_{1}z)(1+1_{2}z)(1+1_{3}z)\ldots(1+1_{n}z)\\ & = & \sum_{h=0}^{n}{n \choose h}z^{h} \end{eqnarray*} we have binomial coefficients $e_{h}(\mathcal{S}_{1})={n \choose h}$ (ii) For $\mathcal{S}_{q^{i}}=\{q,q^{2},q^{3},\ldots,q^{n-1},q^{n}\}$ \begin{eqnarray*} \prod_{i=1}^{n}(1+q^{i}z) & = & (1+q^{1}z)(1+q^{2}z)(1+q^{3}z)\ldots(1+q^{n}z)\\ & = & \sum_{h=0}^{n}{n \choose h}_{q}q^{{h+1 \choose 2}}z^{h} \end{eqnarray*} we get the $q$-binomial coefficients (or Gaussian coefficients) $e_{h}(\mathcal{S}_{q^{i}})={n \choose h}_{q}q^{{h+1 \choose 2}}$ (iii) And for $\mathcal{S}_{i}=\{1,2,3,\ldots,n-1\}$ \begin{eqnarray*} \prod_{i=1}^{n-1}(1+iz) & = & (1+1z)(1+2z)(1+3z)\ldots(1+(n-1)z)\\ & = & \sum_{h=0}^{n-1}\left[\begin{array}{c} n\\ n-h \end{array}\right]z^{h} \end{eqnarray*} the elementary symmetric polynomial generates the Stirling numbers of the first kind $e_{h}(\mathcal{S}_{i})=\left[\begin{array}{c} n\\ n-h \end{array}\right]$ In this context, are there other specializations of the set $\mathcal{S}_{x}=\{x_{1},x_{2},\ldots,x_{n}\}$ which lead to other families of numbers or identities?
You have mentioned what are known as the (stable) principal specializations of the ring of symmetric functions $\Lambda$. If you haven't already, you should check out section 7.8 of Stanley's Enumerative Combinatorics Vol. II, where he summarizes the specializations you have mentioned. In particular, using these specializations on the other bases for the symmetric functions, e.g. the homogeneous (complete) symmetric functions, you can derive similar standard combinatorial formulas. Stanley also mentions another interesting specialization for symmetric functions called the exponential specialization, which is the unital ring homomorphism $ex:\Lambda \rightarrow \mathbb{Q}[t],$ that acts on the basis of monomial symmetric functions by $m_\lambda\mapsto \frac{t^n}{n!}$ if and only if $\lambda=(1,1,\ldots,1)$ is the partition of $n$ consisting of all $1$'s; otherwise the map sends $m_\lambda$ to $0$. According to Stanley, this specialization is some sort of limiting case of the principal specialization. The combinatorial significance of the $ex$ specialization is that it allows one to prove that $$[x_1\cdots x_n]\left(\sum_\lambda s_\lambda(x_1,\ldots,x_n)\right)=e_2(n),$$ where $s_\lambda$ is the Schur function for a partition $\lambda$ and $e_2(n)$ is the number of involutions in the symmetric group $\mathfrak{S}_n$. This has some significance in the RSK correspondence. P.S. I would make this a comment but I don't have enough reputation.
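As a quick sanity check on the specializations (i) and (iii) from the question, here is a short sketch (the helper names are mine; it assumes only the definitions above and the standard recurrence $c(n,k)=c(n-1,k-1)+(n-1)\,c(n-1,k)$ for unsigned Stirling numbers of the first kind):

```python
from math import comb
from functools import lru_cache

def elem_sym_coeffs(xs):
    """Coefficients e_0..e_n of prod(1 + x_i * z), by repeated polynomial multiplication."""
    coeffs = [1]
    for x in xs:
        # multiply the current polynomial by (1 + x*z)
        coeffs = [a + x * b for a, b in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

@lru_cache(None)
def stirling1(n, k):
    """Unsigned Stirling numbers of the first kind, via the usual recurrence."""
    if n == k == 0: return 1
    if n == 0 or k == 0: return 0
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

n = 6
# (i) x_i = 1 gives binomial coefficients
assert elem_sym_coeffs([1] * n) == [comb(n, h) for h in range(n + 1)]
# (iii) x_i = i for i = 1..n-1 gives Stirling numbers of the first kind
assert elem_sym_coeffs(list(range(1, n))) == [stirling1(n, n - h) for h in range(n)]
print("specializations (i) and (iii) check out for n =", n)
```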
{ "language": "en", "url": "https://math.stackexchange.com/questions/801399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
How find this sum $\sum_{k=0}^{p}t^k\binom{n}{k}\binom{m}{p-k}$ Find the closed form $$\sum_{k=0}^{p}t^k\binom{n}{k}\binom{m}{p-k}$$ since $$\binom{n}{k}\binom{m}{p-k}=\dfrac{n!}{(n-k)!k!}\cdot\dfrac{m!}{(p-k)!(m-p+k)!}$$ then I can't
Let's find the generating function $F(z):=\sum_p a_pz^p$, where $a_p$ is your sum. Notice that your sum is a convolution of $t^k\binom{n}{k}$ and $\binom{m}{k}$. Therefore $$\begin{align}F(z)&=\left(\sum_k t^k\binom{n}{k}z^k\right)\left(\sum_k\binom{m}{k}z^k\right)\\&=(1+tz)^n(1+z)^m\end{align}$$ Notice that for $m=0$, for example, the sum is $\sum_{k=0}^{p}\binom{n}{k}t^k$, which for $p=0,1,...,n$ are the truncations of $(1+t)^n$. These don't have a closed(er) form.
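The generating-function step is easy to spot-check symbolically; a sketch assuming sympy, with arbitrary sample values $n=4$, $m=3$, $t=2$:

```python
import sympy as sp

z = sp.symbols('z')
n, m, t = 4, 3, 2                      # arbitrary sample values
F = sp.expand((1 + t*z)**n * (1 + z)**m)
for p in range(n + m + 1):
    # a_p as the convolution sum from the question
    a_p = sum(t**k * sp.binomial(n, k) * sp.binomial(m, p - k) for k in range(p + 1))
    assert F.coeff(z, p) == a_p
print("F(z) = (1+tz)^n (1+z)^m matches the convolution coefficients")
```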
{ "language": "en", "url": "https://math.stackexchange.com/questions/801462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Deterministic Push-Down Automata Does there exist Deterministic Push-Down Automata for the language below. Any kind of answer will be highly appreciated! $$L =ba^nb^n U bba^nb^{2n}$$
Probably $U$ denotes union, and you mean $L = \{ ba^nb^n \mid\ n\ge 0\} \cup \{ bba^nb^{2n} \mid\ n\ge 0\}$. Yes that can be done by a deterministic PDA. The first two letters of the string decide how to handle the remainder of the string. Pushing the $a$'s and popping the $b$'s in appropriate ratio is a standard task.
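Not a PDA transition table, but a minimal deterministic recognizer sketch in Python that mirrors the strategy described (branch on the leading $b$'s, then match the $a$'s against the trailing $b$'s in ratio $1{:}1$ or $1{:}2$); it assumes, as above, that $n\ge 0$ is allowed:

```python
def in_L(w: str) -> bool:
    """Deterministic membership check for L = {b a^n b^n} U {bb a^n b^(2n)}."""
    if w.startswith('bb'):
        body, ratio = w[2:], 2      # bb a^n b^{2n}
    elif w.startswith('b'):
        body, ratio = w[1:], 1      # b a^n b^n
    else:
        return False
    n = 0
    while n < len(body) and body[n] == 'a':
        n += 1
    return body[n:] == 'b' * (ratio * n)

print(in_L('b'), in_L('baabb'), in_L('bbab'), in_L('bbaabbbb'), in_L('ba'))
# True True False True False
```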
{ "language": "en", "url": "https://math.stackexchange.com/questions/801529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equivalence of different definitions of continuity Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a real function. $f$ is continuous at point $c$ iff $$(\forall\epsilon>0)(\exists\delta>0)(\forall x)(|x-c|<\delta\Rightarrow|f(x)-f(c)|<\epsilon)$$ Continuity is also defined at a point $c$ if $\lim\limits_{x\rightarrow c}f(x)=f(c)$ i.e. $$(\forall\epsilon>0)(\exists\delta>0)(\forall x)(0<|x-c|<\delta\Rightarrow|f(x)-f(c)|<\epsilon)$$ So this is a trivial question but how do I show these definitions are equivalent? They are almost the same except that in the limit definition we have $0<|x-c|<\delta$ So how can I get rid of it in this case. As I said it is trivial and one can see it but can't see how to formally show it, so any help please.
Take the first definition as given. Then clearly the second condition follows, since the range of $x$ values being considered is a subset of the range provided by the first definition. If we take the second definition as given, we have to show that the weaker condition $$0<|x-c|<\delta\implies |f(x)-f(c)|<\epsilon$$ implies the stronger one $$|x-c|<\delta\implies |f(x)-f(c)|<\epsilon$$ So we need only verify that for $x=c$, we still have $|f(x)-f(c)|<\epsilon$. But this is easy, because when $x=c$, $|f(x)-f(c)|=|f(c)-f(c)|=0<\epsilon$ regardless of the value of $\epsilon>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/801630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does this sum converge or diverge? Does the infinite sum $\large{\sum_{n=1}^\infty \frac{1}{n^{x_{\small{n}}}}}$ converge or diverge if $x_n$ is a random variable (generated independently within each term) that takes values between $0$ and $2$ with equal probability? I have a suspicion that it diverges, but I don't know how to prove it.
I don't know if I exactly have a proof, but here's a thought. The infinite series of reciprocals of the prime numbers diverges. How likely is it that $ \ x_n \ > \ 1 \ $ "often enough" to produce a series with terms that can bring the series to convergence despite that? That is, can there be a high enough "density" of terms that make the series convergent against the sum of terms that would cause divergence? Perhaps there is an argument something like comparing $ \ \sum_{n=1}^{\infty} \ \frac{1}{n^{x_n}} \ $ to $ \ \sum_{n=1}^{\infty} \ \frac{1}{p_n} \ $ , which has a lower "density" of terms than, say, the harmonic series. (I suspect the probability of having a convergent series is essentially zero.)
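Not a proof, but a Monte Carlo sketch (assuming numpy) supports the divergence suspicion. Note that the $n$-th term has expectation $\int_0^2 n^{-x}\,\frac{dx}{2}=\frac{1-n^{-2}}{2\ln n}$, and $\sum 1/\ln n$ diverges, so the partial sums should keep growing:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 10**6
n = np.arange(2, N)                     # start at n = 2 (the n = 1 term is just 1)
x = rng.uniform(0, 2, size=n.size)      # an independent exponent for each term
partial = np.cumsum(n ** (-x))
for m in (10**3, 10**4, 10**5, n.size):
    print(m, partial[m - 1])            # steadily growing partial sums
```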
{ "language": "en", "url": "https://math.stackexchange.com/questions/801709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to convert expectation to integration $S: \{1,-1\}^n \rightarrow \{0,1\}$ and $E(S(x))=p$, Where $E$ denotes the expectation, and is taken over $x$ , where $x$ is uniformly distributed on $\{-1,1\}^n$. Then how to prove the following, \begin{equation*} E_x\Bigg[S(x) \sum_{i=1}^n x_i \Bigg] \leq \int_{0}^{\infty}Pr \Big(S(x)\sum_{i=1}^n x_i >y \Big)dy \end{equation*}
Answer: Using the fact that, for every nonnegative random variable $Y$, $$ E(Y)=\int_0^\infty P(Y\gt y)\,\mathrm dy=\int_0^\infty P(Y\geqslant y)\,\mathrm dy. $$ Proof: $$ Y=\int_0^Y\mathrm dy=\int_0^\infty \mathbf 1_{Y\gt y}\,\mathrm dy=\int_0^\infty \mathbf 1_{Y\geqslant y}\,\mathrm dy. $$
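The identity is easy to test numerically; a sketch assuming scipy, for an exponential random variable, where both sides equal $1$:

```python
import numpy as np
from scipy.integrate import quad

# Y ~ Exponential(1): E[Y] = 1 and P(Y > y) = exp(-y)
tail_integral, _ = quad(lambda y: np.exp(-y), 0, np.inf)
sample_mean = np.random.default_rng(0).exponential(size=10**6).mean()
print(tail_integral, sample_mean)   # both ≈ 1.0
```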
{ "language": "en", "url": "https://math.stackexchange.com/questions/801793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve for $x$ in $2\log(x+11)=(\frac{1}{2})^x$ Solve for $x$. $$2\log(x+11)=(1/2)^x$$ My attempt: $$\log(x+11)=\dfrac{1}{(2^x)(2)}$$ $$10^{1/(2^x)(2)}= x+11$$ $$x=10^{1/(2^x)(2)}-11$$ I'm not sure what to do next, because I have one $x$ in the exponent while the other is on the left side of the equation.
Hints: the unknown appears both inside the logarithm and in the exponent, so there is no algebraic closed form. If the right-hand side were a constant, say $\frac14$, you could solve directly: $$2\log(x+11)=\frac14\implies \log(x+11)=\frac18\implies \color{red}{x+11=10^{1/8}}\ldots$$ (with $e^{1/8}$ in place of $10^{1/8}$ if $\log$ is the natural logarithm). This suggests an iteration: plug a guess for $x$ into the right-hand side, solve for a new $x$, and repeat. Alternatively, if $\log$ is base $10$, inspection gives the exact solution $x=-1$, since $2\log(10)=2=(1/2)^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/801885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Natural Deduction Given the following premises:

 1. P AND Q
 2. P IMPLIES R
 3. Q IMPLIES R

I need to demonstrate that this entails: Q AND R. The way I tackled the problem was:

 4. Q (AND ELIMINATION on Line 1)
 5. R (IMPLICATION ELIMINATION on Lines 3, 4)
 6. Q AND R (AND INTRODUCTION on Lines 4, 5)

However, the textbook solution derives P OR Q and then uses both statements 2 and 3 to imply R. Isn't having Q alone sufficient to imply R and complete the proof? PS: Apologies for the poor formatting, I'm not sure how I can do better.
Proof : $$\begin{align} (1) & P \land Q && [\text{assumed}] \\ (2) & P && [\land \text{-elim(1)}] \\ (3) & Q && [\land \text{-elim(1)}] \\ (4) & P \rightarrow R && [\text{assumed}] \\ (5) & R && [\rightarrow \text{-elim}(2,4)] \\ (6) & Q \land R && [\land\text{-intro}(3,5)] \\ \end{align}$$ Thus we have proved: $P \land Q, P \rightarrow R \vdash Q \land R$. The third premise $Q \rightarrow R$ seems unnecessary to me. But you are right; we may replace step (4) with $Q \rightarrow R$ and then derive $R$ from (3) and (4) by $\rightarrow$-elim, then conclude as before. In this case, we have $P \land Q, Q \rightarrow R \vdash Q \land R$, and it is $P \rightarrow R$ that is redundant. The "detour" through $P \lor Q$ seems unnecessary; in this case we have to derive $P \vdash R$ and $Q \vdash R$ (as in the two versions of the proof above) and then use $\lor$-elim to conclude $P \lor Q \vdash R$. Thus we have $P \land Q, P \rightarrow R, Q \rightarrow R \vdash Q \land R$, and we have used all the premises.
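For readers who like machine-checked derivations, here is a minimal sketch of the first version in Lean 4 syntax (assumed); it makes the redundancy visible, since the hypothesis h3 is accepted but never used:

```lean
example (P Q R : Prop) (h1 : P ∧ Q) (h2 : P → R) (h3 : Q → R) : Q ∧ R :=
  ⟨h1.right, h2 h1.left⟩   -- ∧-intro from ∧-elim and →-elim; h3 is unused
```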
{ "language": "en", "url": "https://math.stackexchange.com/questions/801967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
cross product in cylindrical coordinates Hi, I know this is a really simple question but it has me confused. I want to calculate the cross product of two vectors $$ \vec a \times \vec r. $$ The vectors are given by $$ \vec a= a\hat z,\quad \vec r= x\hat x +y\hat y+z\hat z. $$ The vector $\vec r$ is the radius vector in Cartesian coordinates. My problem is: I want to calculate the cross product in cylindrical coordinates, so I need to write $\vec r$ in this coordinate system. The cross product in Cartesian coordinates is $$ \vec a \times \vec r=-a y\hat x+ax\hat y, $$ however how can we do this in cylindrical coordinates? Thank you
The radius vector $\vec{r}$ in cylindrical coordinates is $\vec{r}=\rho\hat{\rho}+z\hat{z}$. Calculating the cross-product is then just a matter of vector algebra: $$\vec{a}\times\vec{r} = a\hat{z}\times(\rho\hat{\rho}+z\hat{z})\\ =a(\rho(\hat{z}\times\hat{\rho})+z(\hat{z}\times\hat{z}))\\ =a\rho(\hat{z}\times\hat{\rho})\\ =a\rho\hat{\phi},$$ where in the last line we've used the orthonormality of the triad $\{\hat{\rho},\hat{\phi},\hat{z}\}$.
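A quick numeric sanity check of $a\hat z\times\vec r=a\rho\,\hat\phi$ at an arbitrary point (a sketch assuming numpy; in Cartesian components $\hat\phi=(-\sin\phi,\cos\phi,0)$):

```python
import numpy as np

a, rho, phi, z = 2.0, 1.5, 0.7, -0.3           # an arbitrary test point
x, y = rho * np.cos(phi), rho * np.sin(phi)

cart = np.cross([0.0, 0.0, a], [x, y, z])      # a ẑ × r, computed in Cartesian
phi_hat = np.array([-np.sin(phi), np.cos(phi), 0.0])
print(np.allclose(cart, a * rho * phi_hat))    # True
```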
{ "language": "en", "url": "https://math.stackexchange.com/questions/802077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Intricate proof by induction: $2+8+24+64+...+(n)(2^n)=2(1+(n-1)(2^n))$ Help the King out... $$2+8+24+64+...+(n)(2^n)=2(1+(n-1)(2^n))$$ I am at the step where I am proving $P(k+1)$ to be true: $$2(1+(k-1)(2^k))+(k+1)(2^{k+1})=2(1+((k+1)-1)(2^{k+1}))$$
See part I of my answer here for the background to the following systematic approach. We have here $f(k) = k\cdot2^k$ and $g(n) = 2 + (n-1)\cdot 2^{n+1}$ Inductive step: 1: Assume true for $n$, that is $\sum_{k=1}^{n}f(k) = g(n)\tag{1}$ 2: Let $m = n + 1$ $\begin{align}f(m) &= m\cdot2^m\\\\ g(m) - g(m-1) &= \left(2 + (m-1)\cdot2^{m+1}\right) - \left(2 + (m-2)\cdot2^{m}\right)\\&=(2m-2 - m + 2)\cdot2^m\\&=m\cdot2^m\\\\\therefore f(m) &= g(m) - g(m-1)\\\text{i.e } \color{blue}{f(n+1)}&=\color{blue}{g(n+1) - g(n)}\tag{2}\end{align}$ We thus have $$\begin{align}\sum_{k=1}^{n+1}f(k)=\sum_{k=1}^{n}f(k) + f(n+1) = \underbrace{g(n)}_{\text{from }(1)} + \underbrace{g(n+1) - g(n)}_{\text{from }(2)} = g(n+1)\end{align}$$ implies that the statement is true for $n+1$.
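The algebra above is easy to spot-check numerically; a one-screen sketch in Python:

```python
def lhs(n): return sum(k * 2**k for k in range(1, n + 1))
def rhs(n): return 2 * (1 + (n - 1) * 2**n)

assert all(lhs(n) == rhs(n) for n in range(1, 50))
print("identity verified for n = 1..49")
```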
{ "language": "en", "url": "https://math.stackexchange.com/questions/802164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When working proof exercises from a textbook with no solutions manual, how do you know when your proof is sound/acceptable? When working proof exercises from a textbook with no solutions manual, how do you know when your proof is sound/acceptable? Often times I "feel" as if I can write a proof to an exercise but most of those times I do not feel confident that the proof that I am thinking of is good enough or even correct at all. I can sort of think a proof in my head, but am not confident this is a correct proof. Any input would be appreciated. Thanks.
Use a computer with automated proof checking software, also called a proof assistant or interactive theorem prover. Typically you will need to write your proof in a special, machine-readable format (be careful of translation/copy errors), but past that point this field is well studied and computer-based proof checking is generally reliable. Wikipedia has a comparison table of different software for this purpose: http://en.wikipedia.org/wiki/Proof_assistant . (Whether this is preferable to, easier than, or even faster than hand-verifying your proof step-by-step is another question.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/802276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 7, "answer_id": 4 }
If $(B \cap C) \subset A$, then $(C\setminus A) \cap (B\setminus A) = \emptyset$ Question: Prove/disprove: For all sets $A,B,C$, if $B \cap C \subset A$, then $(C \backslash A) \cap (B \backslash A) = \emptyset$ I'm a bit confused about the question, or where to start. When we learned how to prove these, the examples given were usually either sets that were equal (in which case we could prove that they were subsets of each other) or cases where there weren't subsets at all. Unfortunately, looking at my professor's solution is only making things more confusing, as I can't find any properties of these sets that he is using in his answer. His answer:

 1. Let us assume that $B \cap C \subset A$. This implies that
 2. $(B \cap C) \cap A^c \subset A \cap A^c = \emptyset$
 3. $(B \cap A^c) \cap (C \cap A^c) = \emptyset$
 4. $(B \backslash A) \cap (C \backslash A) = \emptyset$

I understand that lines 3 and 4 are correct. The thing I don't understand is the jump from lines 1 and 2, and how he goes about getting that.
$ \newcommand{\calc}{\begin{align} \quad &} \newcommand{\calcop}[2]{\notag \\ #1 \quad & \quad \text{"#2"} \notag \\ \quad & } \newcommand{\endcalc}{\notag \end{align}} $ (This is not a direct answer, but an alternative approach.) I would prefer a more 'logical' approach, where you start with the most complex side, $\;(C \setminus A) \cap (B \setminus A) = \emptyset\;$, apply the definitions, and then simplify using the laws of logic. That results in $$\calc (C \setminus A) \cap (B \setminus A) \;=\; \emptyset \calcop{\equiv}{basic property of $\;\emptyset\;$} \langle \forall x :: \lnot(x \in (C \setminus A) \cap (B \setminus A)) \rangle \calcop{\equiv}{definition of $\;\cap\;$, and of $\;\setminus\;$ twice} \langle \forall x :: \lnot(x \in C \land x \not\in A \;\land\; x \in B \land x \not\in A) \rangle \calcop{\equiv}{logic: simplify} \langle \forall x :: \lnot(x \in C \land x \in B \;\land\; x \not\in A) \rangle \calcop{\equiv}{logic: DeMorgan -- keeping $\;B,C\;$ together as in our goal} \langle \forall x :: \lnot(x \in C \land x \in B) \;\lor\; x \in A \rangle \calcop{\equiv}{logic: $\;\lnot P \lor Q\;$ is another way to write $\;P \Rightarrow Q\;$} \langle \forall x :: x \in C \land x \in B \;\Rightarrow\; x \in A \rangle \calcop{\equiv}{definitions of $\;\cap\;$ and $\;\subseteq\;$} C \cap B \;\subseteq\; A \endcalc$$ This shows that the two given statements are even equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/802349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is $S_{\ast}\left(X,A\right)$ free? Why is $S_{\ast}\left(X,A\right)$ free? It is the quotient of the two free abelian groups $S_{\ast}\left(X\right)$ and $S_{\ast}\left(A\right)$.
The quotient is free because the smaller group is generated by a subset of a basis of the larger one. Indeed, $SX$ is freely generated by all singular simplices in $X$, and $SA$ is generated by the set of simplices in $X$ whose image is contained in $A$; the quotient is then free on the remaining basis elements, namely the simplices not contained in $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/802447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find matrix determinant How do I reduce this matrix to row echelon form and hence find the determinant, or is there a way that I am unaware of that finds the determinant of this matrix without having to reduce it to row echelon form, given this is all I know and there exists no additional information? $\left[ \begin{array}{cccc} 1+x & 2 & 3 & 4 \\ 1 & 2+x & 3 & 4 \\ 1 & 2 & 3+x & 4 \\ 1 & 2 & 3 & 4+x \\ \end{array} \right]$
Assume $f(x)=\Delta$, the determinant. It's a fourth-degree polynomial. $C_1\to C_1+C_2+C_3+C_4$ makes every entry of the first column equal to $x+10$, so $x+10$ is a factor. $f(0)=0\implies 0$ is a root. // Repeated rows $f'(0)=0+0+0+0\implies 0$ is a repeated root. // Repeated rows $f''(0)=0\implies 0$ is repeated for the last time. // Repeated rows All this $\implies f(x)=x^3(x+10)$, since the coefficient of $x^4$ is obviously $1$. For the differentiation, differentiate one row at a time and treat the others as constant (like the product rule), then add the results. It's pretty easy to see that each resulting determinant has two equal rows (without even writing them out).
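A symbolic spot-check of the conclusion (a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')
# entry (i, j) is j+1, with x added on the diagonal
M = sp.Matrix(4, 4, lambda i, j: (j + 1) + (x if i == j else 0))
print(sp.factor(M.det()))   # x**3*(x + 10)
```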
{ "language": "en", "url": "https://math.stackexchange.com/questions/802517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Dropping letters in post boxes In how many different ways can 5 letters be dropped in 3 different post boxes if any number of letters can be dropped in all of the post boxes?
In general, the number of ways of dropping $k$ letters into $m$ boxes is $$\sum_{x_1+x_2+...+x_m=k,\,0\leq x_i\leq k}1=\binom{m+k-1}{k}.$$ In our case $m=3,k=5$: $$\sum_{x_1+x_2+x_3=5,\,0\leq x_i\leq 5}1=\binom{5+3-1}{5}=21.$$ Below is the list of all droppings $$(5,0,0),(0,5,0),(0,0,5)$$ $$(4,0,1),(4,1,0),(0,1,4),(0,4,1),(1,0,4),(1,4,0)$$ $$(3,1,1),(1,3,1),(1,1,3)$$ $$(2,3,0),(2,0,3),(3,0,2),(3,2,0),(0,2,3),(0,3,2)$$ $$(2,2,1),(2,1,2),(1,2,2)$$
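The count is easy to confirm by brute force (plain Python):

```python
from itertools import product
from math import comb

k, m = 5, 3
drops = [c for c in product(range(k + 1), repeat=m) if sum(c) == k]
print(len(drops), comb(m + k - 1, k))   # 21 21
```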
{ "language": "en", "url": "https://math.stackexchange.com/questions/802643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Is there any relation between $\pi$, $\sqrt{2}$ or a generic polygon? I'm a programmer, always looking for new formulas and new ways of computing things. To satisfy my curiosity, I would like to know if there are any formulas, or I should say equalities, that make use of both $\pi$ and $\sqrt{2}$. I would also like to know if it's possible to generalize this to any n-sided polygon (or even a 3D figure), since $\sqrt{2}$ usually appears in connection with quadrilaterals only. Of course I would like to know about any possible domain, but since we should start from something, I would say that the domain of polygons and polyhedra triggers my interest in the first place.
Stirling's approximation: $$ n! \sim \sqrt{2 \pi n}\left(\frac{n}{e}\right)^n $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/802751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 2 }
A problem of diagram chasing Consider the following diagram of functions between sets: I know that the $4$ inner triangles (i.e. $\{X,X',Z\}$,$\{X',Y',Z\}$...) are all commutative diagrams and moreover that $f_1$ and $f_3$ are bijective functions. Can I conclude that the outer square $\{X,X',Y',Y\}$ is a commutative diagram? Thanks in advance.
I think that you can't. Take for example $X=X'=Z=\{a\}$, $Y=Y'=\{a,b\}$, and $f_1,f_3$ the identity maps. Then set $f_2\colon a\mapsto a$ and $f_4\colon a\mapsto b$. Then every triangle commutes but the big square doesn't. More generally, you can take $X=X'$, $Y=Y'$, and $f_1,f_3$ the identity maps. Then you pick two different maps $f_2,f_4\colon X\to Y$ and you set $Z$ as the equalizer of those maps and $X\to Z$ the map that you get composing $f_2$ with the equalizer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/802828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that convergence of double sequence Suppose $f:X\rightarrow \mathbb R$ has the property $$\sup\left \{ \sum_{a\in F} \left |f(a) \right | : F \text{ is a finite subset of } X \right \}< \infty.$$ 1. Show that $\left \{ \, a \in X : f(a)\neq 0 \right \}$ is a countable set. 2. If $a_{kj}\in \mathbb R$ (under the analogous absolute summability assumption), show that $$\sum_{k=1}^{\infty}\sum_{j=1}^{\infty}a_{kj}=\sum_{j=1}^{\infty}\sum_{k=1}^{\infty}a_{kj}$$ I need your help. Thank you for reading my problem.
Let $S$ be the supremum of $\sum_{a\in F}|f(a)|$, taken over all finite subsets $F \subset X$. Then for each $m \in \mathbb N$, the set $$ A_m = \left \{ \ a \in X : |f(a)| \ge \frac 1m \right \} $$ has at most $m \cdot S$ elements. Therefore $\left \{ \ a \in X : f(a)\neq 0 \right \} = \bigcup_m A_m$ is countable as a countable union of finite sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/802904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to think when solving $3\frac{\partial f}{\partial x}+5\frac{\partial f}{\partial y}=0$? Solve this differential equation $$3\frac{\partial f}{\partial x}+5\frac{\partial f}{\partial y}=0$$ Usually, when we get these problems, they tell us what variable change is smart to do and we just chunk through the chain rule and end up with an answer. Now, you have to think for yourself what variables to put. Therefore, I did not know how to do. I know the variable substitution that they made, but not why the made it. How should I think here? My first guess was $e^{5x-3y}+C$ which does indeed solve it, but that solution is not general enough. Any one function $g(5x-3y)$ will do it, according to the solution manual.
Basically, when given a change of variables $u=u(x,y),v=v(x,y)$, you will use the chain rule to transform your equation with unknowns $\dfrac{\partial f}{\partial x}$ and $\dfrac{\partial f}{\partial y}$ (i.e. the gradient $\nabla f(x,y)$) into an equation with unknowns $\dfrac{\partial f}{\partial u}$ and $ \dfrac{\partial f}{\partial v}$ (i.e. the gradient $\nabla f(u,v)$). So you may want to think about $0=3\dfrac{\partial f}{\partial x}+5\dfrac{\partial f}{\partial y}$ as $0=\dfrac{\partial x}{\partial u}\dfrac{\partial f}{\partial x}+\dfrac{\partial y}{\partial u}\dfrac{\partial f}{\partial y}=\dfrac{\partial f}{\partial u}$. So $\dfrac{\partial x}{\partial u}=3$ and $\dfrac{\partial y}{\partial u}=5$, and $x(u,v)=3u+g(v)$ and $y(u,v)=5u+h(v)$. Choose $g$ and $h$ for the transformation to be invertible and computations easy. For example, take $x=3u-2v$ and $y=5u-3v$ (so $u=-3x+2y$ and $v=-5x+3y$); we obtain that $\dfrac{\partial f}{\partial u}=0$, so $f$ is a constant function of $u$: $f(u,v)=\phi(v)$ for some function $\phi$. Back to the $(x,y)$-coordinates, we have $f(x,y)=\phi(3y-5x)$ for some function $\phi$.
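One can let a CAS confirm that this general solution satisfies the PDE; a sketch assuming sympy, with $\phi$ left as an unspecified function:

```python
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.Function('phi')
f = phi(3*y - 5*x)                        # the general solution found above
print(3*sp.diff(f, x) + 5*sp.diff(f, y))  # 0, since f_x = -5*phi' and f_y = 3*phi'
```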
{ "language": "en", "url": "https://math.stackexchange.com/questions/802954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Möbius transformations lines and circles I am looking for a basic outline of a proof I know that all MT's are of the form $\frac{ax+b}{cx+d}$ For $c=0$, I know that lines/circles are preserved because translations and dilations do not change a line/circle from being a line/circle But I am not sure how to prove this for all cases My exam is actually tomorrow, so it would be great if someone could help me today :)
To begin with, if $c\neq0,$ then put $f_1(z)=z+d/c,$ $f_2(z)=1/z,$ $f_3(z)=\frac{bc-ad}{c^2}z,$ and $f_4(z)=z+a/c.$ Then $$(f_4\circ f_3\circ f_2\circ f_1)(z)=\frac{az+b}{cz+d}.$$ So, it suffices to show that each of $f_1,f_2,f_3,f_4$ maps generalized circles to generalized circles. The fact that $ad-bc\ne0$ will be essential here. The only somewhat tricky part is showing that this is true for $f_2.$ The idea is to show that lines are mapped to generalized circles through the origin and vice versa, while circles that don't pass through the origin are again mapped to circles that don't pass through the origin. The $c=0$ case is straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/803068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof for $\sin(x) > x - \frac{x^3}{3!}$ They are asking me to prove $$\sin(x) > x - \frac{x^3}{3!},\; \text{for} \, x \, \in \, \mathbb{R}_{+}^{*}.$$ I didn't understand how to approach this kind of problem so here is how I tried: $\sin(x) - x +\frac{x^3}{6} > 0 \\$ then I computed the derivative of that function to determine the critical points. So: $\left(\sin(x) - x +\frac{x^3}{6}\right)' = \cos(x) -1 + \frac{x^2}{2} \\ $ The critical points: $\cos(x) -1 + \frac{x^2}{2} = 0 \\ $ It seems that x = 0 is a critical point. Since $\left(\cos(x) -1 + \frac{x^2}{2}\right)' = -\sin(x) + x \\ $ and $-\sin(0) + 0 = 0 \\$ The function has no local minima and maxima. Since the derivative of the function is positive, the function is strictly increasing, so the lowest value is f(0). Since f(0) = 0 and the function is strictly increasing for $x>0$, I proved that $ \sin(x) - x +\frac{x^3}{6} > 0$ there. I'm not sure if this solution is right. And, in general, how do you tackle these kinds of problems?
You just have to prove your inequality when $x\in(0,\pi)$, since otherwise the RHS is below $-1$. Consider that for any $x\in(0,\pi/2)$, $$ \sin^2 x < x^2 \tag{1}$$ by the concavity of the sine function. By setting $x=y/2$, $(1)$ gives: $$ \forall y\in(0,\pi),\qquad \frac{1-\cos y}{2}<\frac{y^2}{4}\tag{2}, $$ so: $$ \cos y > 1-\frac{y^2}{2} \tag{3} $$ for any $y\in(0,\pi)$. By integrating $(3)$ with respect to $y$ over $(0,x)$ we get our inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/803127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 9, "answer_id": 1 }
Determine the region bounded by the inequalities Determine the region bounded by the inequalities: $$ 0 \leq x + y \leq 1 \\ 0 \leq x - y \leq x + y $$ I don't know what to solve for first, so I just added them: $$ 0 \leq x \leq 1 + x + y \\ $$ I guess I can subtract $x$: $$ -x \leq 0 \leq 1 + y \\ $$ Or: $$ -y - 1 \leq 0 \leq x \\ $$ So from this inequality, it looks like some area in the 4th quadrant because $x \geq 0$ means everything to the right of the $y$-axis, and $-y - 1 \leq 0$ means $- 1 \leq y$ which is above the line $y = -1$. However, it looks like I'm analyzing incorrectly as the answer says that it is some area above $y = 0$. I'm not sure what I'm doing wrong.
Note that you can also write your pair of inequalities as a single linear chain of inequalities: $$0 \leq x-y \leq x+y \leq 1.$$ So all the information you need is contained in the three inequalities of the chain: $$\begin{cases}0 \leq x-y,\\x-y \leq x+y,\\x+y \leq 1.\end{cases} \iff \begin{cases}y \leq x,\\0 \leq y,\\y \leq 1-x.\end{cases}$$ The region bounded by these three lines is found to be the interior of the right isosceles triangle with vertices $(0,0),(1,0),(\frac12,\frac12)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/803228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
I am having problems proving that the limit of a certain multivariable function is equal to 0. What I need to prove is the following: $$\lim_{(x,y)\rightarrow(0,0) }xy^2e^{x^2/y^4}=0$$ for $x,y \in D=\{(x,y):0\leq y \leq 1, 0\leq x\leq y^2\}$. I tried solving the problem using the 'sandwich'theorem and ended up with the solution below: $$0\leq \lim_{(x,y)\rightarrow(0,0) }xy^2e^{x^2/y^4}\leq\lim_{(x,y)\rightarrow(0,0)} y^4e^{x^2/y^4}\leq\lim_{(x,y)\rightarrow(0,0) }y^4e^{x^2}= 0^4e^{0^2}=0\cdot1=0$$ It would be highly appreciated if someone could verify my answer, and perhaps give me some useful tips. Thank you ps: sorry for my bad English; I speak French.
This inequality is false: $$\lim_{(x,y)\rightarrow(0,0)} y^4e^{x^2/y^4}\leq\lim_{(x,y)\rightarrow(0,0) }y^4e^{x^2}.$$ However, you can say that $e^{x^2/y^4}\le e^1$ (by assumption $x/y^2\le 1$) and write $$\lim_{(x,y)\rightarrow(0,0)} y^4e^{x^2/y^4}\leq e \lim_{(x,y)\rightarrow(0,0) }y^4 =0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/803325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Integral $I:=\int_0^1 \frac{\log^2 x}{x^2-x+1}\mathrm dx=\frac{10\pi^3}{81 \sqrt 3}$ Hi how can we prove this integral below? $$ I:=\int_0^1 \frac{\log^2 x}{x^2-x+1}\mathrm dx=\frac{10\pi^3}{81 \sqrt 3} $$ I tried to use $$ I=\int_0^1 \frac{\log^2x}{1-x(1-x)}\mathrm dx $$ and now tried changing variables to $y=x(1-x)$ in order to write $$ I\propto \int_0^1 \sum_{n=0}^\infty y^n $$ however I do not know how to manipulate the $\log^2 x$ term when doing this procedure when doing this substitution. If we can do this the integral would be trivial from here. Complex methods are okay also, if you want to use this method we have complex roots at $x=(-1)^{1/3}$. But what contour can we use suitable for the $\log^2x $ term? Thanks
Hint: Consider the change of variable $x=1/t$; hence you have $$2I = \int^\infty_0 \frac{\log^2(t)}{t^2-t+1}\,dt$$ Now integrate the function $$f(z) =\frac{\log^3(z)}{z^2-z+1}$$ along a keyhole contour indented at $0$.
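For a numerical sanity check of the claimed closed form (a sketch assuming scipy; quad copes with the integrable $\log^2$ singularity at $0$):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.log(x)**2 / (x**2 - x + 1), 0, 1)
print(val, 10 * np.pi**3 / (81 * np.sqrt(3)))   # both ≈ 2.2101
```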
{ "language": "en", "url": "https://math.stackexchange.com/questions/803389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 8, "answer_id": 2 }
Optimal Strategy for Rock Paper Scissors with different rewards Imagine Rock Paper Scissors, but where winning with a different hand gives a different reward. * *If you win with Rock, you get \$9. Your opponent loses the \$9. *If you win with Paper, you get \$3. Your opponent loses the \$3. *If you win with Scissors, you get \$5. Your opponent loses the \$5. *If you tie, you get $0 My first intuition would be that you should play Rock with a probability of 9/(9+3+5), Paper with 3/(9+3+5) and Scissors with 5/(9+3+5) however this seems wrong, as it doesn't take into consideration the risk you expose yourself to (if you play Paper, you have an upside of \$3 but a downside of \$5). So I put the question to you, in such a game -- what is the ideal strategy. Edit: By "ideal" strategy, I mean playing against an adversarial player who knows your strategy.
If "optimal" means Nash equilibrium (i.e. a state that is stable wrt. small perturbations of strategies), then it can be computed. If you assume that $x_1$ is the probability of the first player to play Rock, $x_2$ his probability to play Scissors and $1-x_1-x_2$ his probability to play Paper, and similarly for $y_i$, then the payoff of the first player is $$f(x_1, x_2, y_1, y_2) = x_1 (9y_2 - 3 (1-y_1-y_2)) + x_2 (-9 y_1 + 5(1-y_1-y_2)) + (1-x_1-x_2)(3y_1-5y_2)$$ or something like that. The condition on Nash is that all partial derivatives vanish; you can probably easily compute the probabilities and check whether you guessed the right solutions (the solution should be unique in this case with $x_i$ and $y_i$ nonzero). However, in different circumstances, optimal may mean different things; if they are good friends and know that it's a zero sum game, they can also play both Rock all the time.
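For the adversarial reading of "ideal" (the question's edit), the zero-sum equilibrium can also be computed by linear programming; a sketch assuming scipy, with the payoff matrix taken from the rules above:

```python
import numpy as np
from scipy.optimize import linprog

# Row player's payoff matrix; order: Rock, Paper, Scissors.
A = np.array([
    [ 0, -3,  9],    # Rock ties Rock, loses 3 to Paper, wins 9 vs Scissors
    [ 3,  0, -5],    # Paper wins 3 vs Rock, ties Paper, loses 5 to Scissors
    [-9,  5,  0],    # Scissors loses 9 to Rock, wins 5 vs Paper, ties Scissors
])

# Maximize v subject to (A^T p)_j >= v for each column j, sum(p) = 1, p >= 0.
# Variables are (p_R, p_P, p_S, v); linprog minimizes, so the objective is -v.
c = [0, 0, 0, -1]
A_ub = np.hstack([-A.T, np.ones((3, 1))])   # encodes v - (A^T p)_j <= 0
b_ub = np.zeros(3)
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              A_eq=[[1, 1, 1, 0]], b_eq=[1],
              bounds=[(0, None)] * 3 + [(None, None)])
print(res.x[:3], res.x[3])   # ≈ [0.294 0.529 0.176], game value 0
```

The equilibrium mix comes out as $(5/17,\,9/17,\,3/17)$ for (Rock, Paper, Scissors) with game value $0$, so the naive weights $(9/17,\,3/17,\,5/17)$ guessed in the question are indeed not the equilibrium.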
{ "language": "en", "url": "https://math.stackexchange.com/questions/803488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
A fair coin is flipped 2k times. What is the probability that it comes up tails more often than it comes up heads? I'm studying for a probability exam and came across this question. I watched the video solution to it but I don't really understand it. I was hoping someone could explain this problem to me. Are there different ways to go about this?
Hint:

 * Fair coin $\implies$ Probability of tails occurring more $=$ probability of heads occurring more $= p$, say.
 * Probability of exactly equal number of heads and tails $=1-2p$. Can you find this one?
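Putting the two bullets together numerically: the tie probability (the quantity the hint asks for) is the central binomial term $\binom{2k}{k}4^{-k}$, so $p=\frac12\left(1-\binom{2k}{k}4^{-k}\right)$; a two-line check with the standard library:

```python
from math import comb

for k in (1, 2, 5, 10):
    p_equal = comb(2 * k, k) / 4**k     # P(#heads == #tails)
    print(k, (1 - p_equal) / 2)         # P(more tails than heads)
```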
{ "language": "en", "url": "https://math.stackexchange.com/questions/803628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Sentence $\varphi$ of set theory that is satisfied by all well-founded models of ZFC, but which is not a theorem of ZFC. I think I read somewhere the following. If a first-order sentence $\varphi$ in the language of set theory holds for every well-founded model of ZFC, then nonetheless: * *$\varphi$ may fail for a non-well-founded model; *in other words, $\varphi$ needn't be a theorem of ZFC. What is an example of such a $\varphi$?
Every statement which is in its essence a true [first-order] number theoretic statement in the universe must be true in every well-founded model. The most striking examples of such statements are the consistency of various theories.$\DeclareMathOperator{\con}{con}$ For example, if there are well-founded models of $\sf ZFC$, then $\con\sf(ZFC)$ holds. It follows that every well-founded model satisfies $\con\sf(ZFC)$. Similarly if there is a model with an inaccessible cardinal, then $\con\sf(ZFC+I)$ holds, so it must hold in every well-founded model; and if there is a model with a proper class of supercompact cardinals, then in every well-founded model it is true that there is a model with a proper class of supercompact cardinals. On the other hand, if there is a model of $\sf ZFC$ then there is a model of $\sf ZFC+\lnot\con(ZFC)$. And this model is necessarily not well-founded, and in fact not even an $\omega$-model (meaning: it has non-standard integers).
{ "language": "en", "url": "https://math.stackexchange.com/questions/803713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does the imag. part of the graph of $\zeta(n^{ix})$ resemble the tangent function? If you input $\zeta(n^{ix})$ into the Wolfram Alpha search bar, in the plot, you get an infinitely repeating sinusoidal curve, which resembles the real part, and you get an infinitely repeating tangent curve, which resembles the imaginary part. Yet, plotting $n^{ix}$ alone does not give a tangent-like imaginary curve. So, why does a tangent-like curve appear? Note: the constant $n$ is any arbitrary real number. Link for confirmation.
Two facts explain the qualitative picture from your link. First: $\zeta(z)\;$ has a pole at $z=1= e^{i\cdot 0}\;$ with $\zeta(e^{ix})=-\frac{i}{x} + \dots\;$ for $x\approx 0;\;$ second: $e^{ix}=e^{i(x+2\pi)},\;$ and this gives the periodic structure. Further: since $\zeta(-1)=-\frac{1}{12}\;$ you have $\Im \zeta(e^{i\pi})=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/803787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Surely You're Joking, Mr. Feynman! $\int_0^\infty\frac{\sin^2x}{x^2(1+x^2)}\,dx$ Prove the following \begin{equation}\int_0^\infty\frac{\sin^2x}{x^2(1+x^2)}\,dx=\frac{\pi}{4}+\frac{\pi}{4e^2}\end{equation} I would love to see how Mathematics SE users prove the integral preferably with the Feynman way (other methods are welcome). Thank you. (>‿◠)✌ Original question: And of course, for the sadist with a background in differential equations, I invite you to try your luck with the last integral of the group. \begin{equation}\int_0^\infty\frac{\sin^2x}{x^2(1+x^2)}\,dx\end{equation} Source: Integration: The Feynman Way
This integral is readily evaluated using Parseval's theorem for Fourier transforms. (I am certain that Feynman had this theorem in his tool belt.) Recall that, for transform pairs $f(x)$ and $F(k)$, and $g(x)$ and $G(k)$, the theorem states that $$\int_{-\infty}^{\infty} dx \, f(x) g^*(x) = \frac1{2 \pi} \int_{-\infty}^{\infty} dk \, F(k) G^*(k) $$ In this case, $f(x) = \frac{\sin^2{x}}{x^2}$ and $g(x) = 1/(1+x^2)$. Then $F(k) = \pi (1-|k|/2) \theta(2-|k|)$ and $G(k) = \pi \, e^{-|k|}$. ($\theta$ is the Heaviside function, $1$ when its argument is positive, $0$ when negative.) Using the symmetry of the integrand, we may conclude that $$\begin{align}\int_0^{\infty} dx \frac{\sin^2{x}}{x^2 (1+x^2)} &= \frac{\pi}{2} \int_0^{2} dk \, \left ( 1-\frac{k}{2} \right ) e^{-k} \\ &= \frac{\pi}{2} \left (1-\frac1{e^2} \right ) - \frac{\pi}{4} \int_0^{2} dk \, k \, e^{-k} \\ &= \frac{\pi}{2} \left (1-\frac1{e^2} \right ) + \frac{\pi}{2 e^2} - \frac{\pi}{4} \left (1-\frac1{e^2} \right )\\ &= \frac{\pi}{4} \left (1+\frac1{e^2} \right )\end{align} $$
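A numerical cross-check of the final value (a sketch assuming scipy; the singularity at $0$ is removable):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x)**2 / (x**2 * (1 + x**2)), 0, np.inf)
print(val, np.pi / 4 * (1 + np.exp(-2)))   # both ≈ 0.89169
```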
{ "language": "en", "url": "https://math.stackexchange.com/questions/803954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 3, "answer_id": 2 }
get an element by finitely generated set I want to know how to express an element of a finitely generated group in terms of a generating set; is there a general way to calculate this? For example, $SL(2,\mathbb{Z})=\left\langle a,b \mid a=\begin{pmatrix}0 &1\\-1 &0\end{pmatrix}, b=\begin{pmatrix}1&1\\-1&0\end{pmatrix}\right\rangle$; how to write $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ as a product of elements of $\{a,b,a^{-1},b^{-1}\}$? Thanks.
The question in your first paragraph does not quite make sense: how is the element to be given in general, if not by a product of generators? In specific instances that question could make sense, such as the instance of $SL_2(\mathbb{Z})$ where elements are given by matrices. For the special case of $SL(2,\mathbb{Z})$, one way to calculate is to use the presentation $\langle a,b \, | \, a^4 = b^6 = 1, a^2 = b^3 \rangle$, which expresses $SL(2,\mathbb{Z})$ as the free product of $\mathbb{Z}/4$ and $\mathbb{Z}/6$ amalgamated over $\mathbb{Z}/2$. Using this presentation it follows that $a^2=b^3$, which is minus the identity matrix, generates the center of the group. It also follows that every element of $SL(2,\mathbb{Z})$ can be represented uniquely as the product of an element of the center times a word $w$ which alternates between the letter $a$ and one of the letters $b,b^{2}$. So then, suppose you are given a matrix $M \in SL(2,\mathbb{Z})$, and you wish to compute the corresponding word $w$. There is an inductive procedure that you can use to figure out the last "letter" of $w$, which will be one of the three elements $a,b,b^2$. Once you know that last letter, you can postmultiply $M$ by the inverse of that letter, which will shorten the representing word. So, try some calculations, and you should be able to figure out the algorithm which from the matrix $M$ produces that last letter.
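For the concrete matrix in the question, a brute-force search over short words is also practical; a sketch assuming numpy (the inverse matrices are hard-coded using $\det = 1$, and capital letters stand for inverses):

```python
import numpy as np
from itertools import product

a = np.array([[0, 1], [-1, 0]]); A = np.array([[0, -1], [1, 0]])   # a and a^{-1}
b = np.array([[1, 1], [-1, 0]]); B = np.array([[0, -1], [1, 1]])   # b and b^{-1}
gens = {'a': a, 'A': A, 'b': b, 'B': B}

def find_word(target, max_len=6):
    """Search words in a, b, a^{-1}, b^{-1} (shortest first) hitting the target."""
    for length in range(1, max_len + 1):
        for word in product('aAbB', repeat=length):
            M = np.eye(2, dtype=int)
            for ch in word:
                M = M @ gens[ch]
            if np.array_equal(M, target):
                return ''.join(word)

print(find_word(np.array([[1, 1], [0, 1]])))   # 'aB', i.e. a * b^{-1}
```

Indeed $ab^{-1}=\begin{pmatrix}0&1\\-1&0\end{pmatrix}\begin{pmatrix}0&-1\\1&1\end{pmatrix}=\begin{pmatrix}1&1\\0&1\end{pmatrix}$.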
{ "language": "en", "url": "https://math.stackexchange.com/questions/804100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that two paths on opposing corners of the unit square must cross. I'm looking for a simple argument to the following: Given two (continuous) paths on the unit square, one from (0,0) to (1,1) and the other from (1,0) to (0,1), prove that the paths cross at some point $(x_0, y_0)$. I have constructed a topological argument for why this is true using compactness (and a proof by contradiction), but the person I'm working with seems to think there is a "simple" and "well-known" argument that says the two paths must cross. I haven't been able to find such a result. Does anyone know of one? Thanks!
The result is Lemma 2 of this paper by Maehara, which uses the Brouwer Fixed Point Theorem. (I'm not sure if this argument qualifies as "simple" since this theorem is usually proved using homology.) See also this MO thread.
{ "language": "en", "url": "https://math.stackexchange.com/questions/804150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How can we prove that the rank of a matrix is a non-convex function of that matrix? How can we prove that $\operatorname{rank}(\mathbf{X})$ is a non-convex function of $\mathbf{X}$?
It seems pretty clear that if we take $X = \begin{bmatrix} 1 & 0 \\ 0 & 0\end{bmatrix}$ and $Y = \begin{bmatrix} 0 & 0 \\ 0 & 1\end{bmatrix}$, then $\operatorname{rank}(tX+(1-t)Y) = 2$ for $t\ne 0,1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/804246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Checking if a monic polynomial can be decomposed into linear factors I have questions about how to determine if a polynomial can be decomposed into linear factors. If it is not solvable in radicals by Galois Theory, then I am done. But do I have to resort to Galois Theory? Let the polynomial be: $$f(x) = x^5 + a x^4 + bx^3 + c x^2 + d x + e $$ where $a,b,c,d$ and $e$ are integers. I know that, based on the rational root theorem, I would need to check all factors of “$\pm e$.” However, I do not know the exact values of “$a,b,c,d$ and $e$.” I just know certain properties of them. Also, I cannot use Eisenstein's Criterion since $p^2 \mid e$. Also, I want to use this for higher-order monic polynomials with integer coefficients. Is there a way to answer this in terms of “$a,b,c,d$ and $e$?” Also, based on Galois Theory, how can I determine this from “$a,b,c,d$ and $e$” without having to resort to the abstract aspects?
There are general formulas for the solutions of equations such as: $ax + b = c$ $ax^2 + bx + c = 0 $ $ax^3 + bx^2 + cx + d = 0$ $ax^4 + bx^3 + cx^2 + dx + e = 0$ In other words, general radical solutions exist up to degree four. Whether you need to use Galois theory actually depends on the polynomial itself. For example, $x^5 + x^4 - 3x - 3 = 0$ can be solved with radicals since you can just factor it out to $(x^4 - 3)(x + 1)$ and $x^4 - 3$ can be solved with radicals. But what if you have $x^5 - x - 1 = 0$? It is irreducible, and its Galois group is $S_5$, which is not solvable, so there is no radical solution: you will actually have to solve it numerically. Whether or not you use Galois theory for $x^5 - x = 1$ or other irreducible polynomials is up to you. Although many of my friends that deal with these types of problems say that Galois theory is the most efficient way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/804401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do these integrals have a closed form? $I_1 = \int_{-\infty }^{\infty } \frac{\sin (x)}{x \cosh (x)} \, dx$ The following integrals look like they might have a closed form, but Mathematica could not find one. Can they be calculated, perhaps by differentiating under the integral sign? $$I_1 = \int_{-\infty }^{\infty } \frac{\sin (x)}{x \cosh (x)} \, dx$$ $$I_2 = \int_{-\infty }^{\infty } \frac{\sin ^2(x)}{x \sinh (x)} \, dx$$
For $I_2$, we can use the well-known result (valid for $|a|<|b|$): $$ \int_{-\infty }^{\infty } \frac{\sinh (ax)}{\sinh(bx)}dx=\frac{\pi}{b}\tan\frac{a\pi}{2b}. $$ Note $\sinh(ix)=i\sin(x)$ and $\tan(ix)=i\tanh(x)$, so the factors of $i$ cancel: $$ \int_{-\infty }^{\infty } \frac{\sin (ax)}{\sinh(bx)}dx=\frac1i\int_{-\infty }^{\infty } \frac{\sinh (iax)}{\sinh(bx)}dx=\frac{\pi}{b}\tanh\frac{a\pi}{2b}. $$ Now, for $I_2$, define $$ I_2(a)=\int_{-\infty }^{\infty } \frac{\sin^2(ax)}{x\sinh (x)}dx. $$ Then $I_2(0)=0$ and $I_2(1)=I_2$. Now \begin{eqnarray} I_2'(a)&=&\int_{-\infty }^{\infty } \frac{2\sin(ax)\cos(ax)}{\sinh (x)}dx\\ &=&\int_{-\infty }^{\infty } \frac{\sin(2ax)}{\sinh (x)}dx\\ &=&\pi\tanh(a\pi). \end{eqnarray} So $$ I_2=I_2(1)=\int_0^1\pi\tanh(a\pi)\,da=\ln(\cosh\pi). $$
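A numerical cross-check of the closed form for $I_2$ (a sketch assuming scipy; the integrand is even, so $I_2=2\int_0^\infty$):

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.sin(x)**2 / (x * np.sinh(x)), 0, np.inf)
print(2 * val, np.log(np.cosh(np.pi)))   # both ≈ 2.45046
```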
{ "language": "en", "url": "https://math.stackexchange.com/questions/804483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 5, "answer_id": 0 }
How to prove Ass$(R/Q)=\{P\}$ if and only if $Q$ is $P$-primary when $R$ is Noetherian? Let $R$ be a Noetherian ring, $P$ be a prime ideal, and $Q$ an ideal of $R$. How to prove that $$ \text{Ass}(R/Q)=\{P\} $$ if and only if $Q$ is $P$-primary? Update In fact, I have proved that if $Q$ is primary, then Ann$(R/Q)$ is primary. Let $P=\text{rad}(\text{Ann}(R/Q))$, we have $Q$ is $P$-primary and $\text{Ass}'(R/Q))=\{P\}$ (where $\text{Ass}'$ consists of all primes occur as the radical of some $\text{Ann}(x)$ )
Here's a relatively elementary proof, which is (in my opinion) one of many extremely beautiful proofs in the theory of associated primes and primary decomposition: An ideal $Q$ is primary iff every zerodivisor in $R/Q$ is nilpotent, i.e. the set of zerodivisors is equal to the nilradical. Since zerodivisors are a union of associated primes in a Noetherian ring, the nilradical is the intersection of all minimal primes, and every minimal prime is an associated prime, one sees that $Q$ is primary iff $\DeclareMathOperator{\Ass}{Ass}$ $$\bigcup_{p \in \Ass(R/Q)} p = \bigcap_{p \in \text{Min}(R/Q)} p = \bigcap_{p \in \Ass(R/Q)} p$$ which occurs iff $|\Ass(R/Q)| = 1$ (since neither side is empty). It follows that $\sqrt{Q} = P$ for the single associated prime $\{P\} = \Ass(R/Q)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/804556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find determinant of this matrix? Is there a manual method to find $\det\left(XY^{-1}\right)$ ? Let $$X=\left[ {\begin{array}{ccccc} 1 & 2 & 2^2 & \cdots & 2^{2012} \\ 1 & 3 & 3^2 & \cdots & 3^{2012} \\ 1 & 4 & 4^2 & \cdots & 4^{2012} \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 1 & 2014 & 2014^2 & \cdots & 2014^{2012} \\ \end{array} } \right], $$ $$Y=\left[ {\begin{array}{ccccc}\frac{2^2}{4} & \frac{3^2}{5} & \dfrac{4^2}{6} & \cdots & \dfrac{2014^2}{2016} \\ 2 & 3 & 4 & \cdots & 2014 \\ 2^2 & 3^2 & 4^2 & \cdots & 2014^{2} \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 2^{2012} & 3^{2012} & 4^{2012} & \cdots & 2014^{2012} \\ \end{array} } \right]. $$ Thanks in advance.
Consider something a bit more general. Let $$X=\left[ {\begin{array}{cc} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ 1 & x_3 & x_3^2 & \cdots & x_3^{n-1} \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \\ \end{array} } \right], $$ and $$Y=\left[ {\begin{array}{cc}\frac{x_1^2}{x_1+r} & \frac{x_2^2}{x_2+r} & \dfrac{x_3^2}{x_3+r} & \cdots & \dfrac{x_n^2}{x_n+r} \\ x_1 & x_2 & x_3 & \cdots & x_n \\ x_1^2 & x_2^2 & x_3^2 & \cdots & x_n^2 \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \cdots & x_n^{n-1} \\ \end{array} } \right]. $$ Your problem corresponds to $x_i=i+1,$ $n=2013,$ $r=2.$ Let $Y'$ be the matrix obtained by rescaling the columns of $Y$ by multiplying column $i$ by $(x_i+r)/x_i.$ So $$\det Y'=\det Y\prod_{i=1}^n\frac{x_i+r}{x_i} $$ and $$Y'=\left[ {\begin{array}{cc}x_1 & x_2 & x_3 & \cdots & x_n \\ x_1+r & x_2+r & x_3+r & \cdots & x_n+r \\ x_1(x_1+r) & x_2(x_2+r) & x_3(x_3+r) & \cdots & x_n(x_n+r) \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ x_1^{n-2}(x_1+r) & x_2^{n-2}(x_2+r) & x_3^{n-2}(x_3+r) & \cdots & x_n^{n-2}(x_n+r) \\ \end{array} } \right]. $$ Now show that $$\det Y'=-r\det X,$$ from which your determinant can easily be evaluated. This is done by a series of row operations on $Y'.$ First subtract row $1$ from row $2.$ Then swap the first two rows. Then divide row $1$ by $r$. (These steps account for the factor $-r.$) We now have the matrix $$\left[ {\begin{array}{cc}1 & 1 & 1 & \cdots & 1 \\ x_1 & x_2 & x_3 & \cdots & x_n \\ x_1(x_1+r) & x_2(x_2+r) & x_3(x_3+r) & \cdots & x_n(x_n+r) \\ \vdots & \vdots & \vdots & \cdots & \vdots \\ x_1^{n-2}(x_1+r) & x_2^{n-2}(x_2+r) & x_3^{n-2}(x_3+r) & \cdots & x_n^{n-2}(x_n+r) \\ \end{array} } \right]. $$ Now subtract $r$ times row $2$ from row $3$. Then subtract $r$ times row $3$ from row $4.$ Continue in this way, finally subtracting $r$ times row $n-1$ from row $n.$ The resulting matrix will be $X^T.$
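A small-case symbolic check of the row-reduction argument (a sketch assuming sympy; it takes $n=5$, $r=2$, $x_i=2,\dots,6$, for which the argument yields $\det(XY^{-1})=\det X/\det Y=-\frac1r\prod_i\frac{x_i+r}{x_i}$):

```python
import sympy as sp

n, r = 5, 2
xs = [sp.Integer(i) for i in range(2, n + 2)]     # x_i = 2, 3, ..., 6
X = sp.Matrix(n, n, lambda i, j: xs[i]**j)
Y = sp.Matrix(n, n, lambda i, j: xs[j]**2 / (xs[j] + r) if i == 0 else xs[j]**i)
lhs = (X * Y.inv()).det()
rhs = -sp.Rational(1, r) * sp.Mul(*[(xi + r) / xi for xi in xs])
print(lhs == rhs, lhs)   # True -14/3
```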
{ "language": "en", "url": "https://math.stackexchange.com/questions/804657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Very good linear algebra book. I plan to self-study linear algebra this summer. I am sorta already familiar with vectors, vector spaces and subspaces, and I am really interested in everything about matrices (diagonalization, ...), linear maps and their matrix representation, and eigenvectors and eigenvalues. I am looking for a book that handles every one of the aforementioned topics in detail. I also want to build a solid basis of the mathematical way of thinking to get ready for an exciting abstract algebra course next semester, so my main aim is to work on proofs of somewhat hard problems. I got Lang's "Intro. to Linear Algebra" and it is too easy, superficial. Can you advise me a good book for all of the above? Please take into consideration that it is for self-study, so it's got to work on its own. Thanks.
S. Winitzki, Linear Algebra via Exterior Products (free book, coordinate-free approach)
{ "language": "en", "url": "https://math.stackexchange.com/questions/804716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 12, "answer_id": 9 }
How to verify whether a solution to an optimization problem is correct. Consider a general optimization problem: minimize $f(x)$ subject to $g(x) \le 0$ and $h(x)=0$, where $x$ denotes a vector, $f:\mathbb{R}^n \to \mathbb{R}$ is the objective, and $g$ and $h$ are (possibly vector-valued) constraint functions. Suppose somebody gave me a solution $x^*$; how can I verify whether this solution is correct? One straightforward idea is to check whether $x^*$ satisfies the constraints. But how can I determine whether $x^*$ will minimize $f(x)$?
You could try and plot it (using some mathematics software) and see if the solution is actually a minimum within the constrains.
{ "language": "en", "url": "https://math.stackexchange.com/questions/804801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Any open interval in R is union of intervals of the form (a,b] As part of a proof that the Borel $\sigma$-algebra $\mathcal{B}(\mathbb{R})$ is generated by the collection of subintervals of the reals of the form $(a,b]$, my measure theory textbook (Cohn) asserts that any open interval $(x,y)$ can be written as the union of a sequence of sets of the form $(a,b]$. This seems intuitively true, but I'm stuck as to how one would formally prove that. Can anyone point me in the right direction?
Hint: Consider $\bigcup_{n\in\mathbb{N}}{(a,b-\frac{1}{n}]}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/804893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Sage usage to calculate a cardinality I would like to compute the cardinality of an elliptic curve group over the finite field $\mathbb{F}_{991}$. I'm trying to use sage but I still have an error in the syntax (I never used it before and I tryed to adapt a code). Here is what I have: sage: E = EllipticCurve(GF(991)) sage: E Elliptic Curve defined by y^2 = x^3 + 446*x + 471 over Finite Field of order 991 Does some one know how I should modify it?
Alternatively, you can use MAGMA online. I usually do it like this: K:=GF(991); g:=Generator(K); E:=EllipticCurve([0,0,0,446*g,471*g]); #E; and MAGMA returns 984.
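For completeness, in Sage itself the fix is to pass the coefficients $[a_4, a_6]$ of $y^2=x^3+a_4x+a_6$ to the constructor (a sketch; syntax as in recent Sage versions):

```
sage: E = EllipticCurve(GF(991), [446, 471])   # y^2 = x^3 + 446x + 471
sage: E.cardinality()
984
```

This matches the count returned by MAGMA above.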
{ "language": "en", "url": "https://math.stackexchange.com/questions/805075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Domain of the Function Square Root of 12th Degree Polynomial Find the domain of $$f(x)=\frac{1}{\sqrt{x^{12}-x^9+x^4-x+1}}$$ My try: the domain is given by $$x^{12}-x^9+x^4-x+1 \gt 0$$ $\implies$ $$x(x-1)(x^2+x+1)(x^8+1)+1 \gt 0$$ Please help me see how to proceed further.
If $x$ is outside the interval $[0,1]$, the product $x(x-1)$ is positive (both factors have the same sign) and $(x^2+x+1)(x^8+1)>0$ always, so $$x(x-1)(x^2+x+1)(x^8+1)+1\gt 0.$$ For $x\in [0,1]$, group the terms differently: since $(x-1)(x^2+x+1)=x^3-1$, $$x(x-1)(x^2+x+1)(x^8+1)+1=(x^4-x)(x^8+1)+1=x^{12}+x^4(1-x^5)+(1-x).$$ Each of the three summands is nonnegative on $[0,1]$, and they cannot all vanish at once (at $x=1$ the first term equals $1$), so the sum is strictly positive there too. Hence for all $x\in \mathbb R$ $$x(x-1)(x^2+x+1)(x^8+1)+1\gt 0,$$ and the domain is all of $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Decimal form of irrational numbers In the decimal form of an irrational number like: $$\pi=3.141592653589\ldots$$ do we have all the digits from $0$ to $9$? I checked $\pi$ and all ten digits appear among its first decimal places. Is this true in general for irrational numbers? In other words, for an irrational number $$x=\sum_{n\in \mathbb{Z}} a_n 10^n,$$ does $a_n$ take all the values between $0$ and $9$?
This gives a nice opportunity to use cardinality arguments to show that the answer is negative. Pick two digits $n,k$ such that $\{n,k\}\neq\{0,9\}$. There is a bijection between the numbers in $[0,1]$ whose decimal form includes only $n$ and $k$, and the set of infinite binary sequences. Therefore the set $\{x\in[0,1]\mid x\text{ has only }n,k\text{ as decimal digits}\}$ is uncountable. So it must include at least one irrational number, and in fact almost the entire set is made of irrational numbers. The same can be done with three, four, five, six, seven, eight or nine digits. The argument is just the same. Alternatively one can use diagonalization to show that the set is uncountable, it's the same method as you would use otherwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Proving that the harmonic p series converges for p>1 and diverges for p<=1 Can someone please check if I have done this correctly? The harmonic p-series: $$ \sum_{n=1}^\infty \frac{1}{n^p}$$ Let $$f(n)=\frac{1}{n^p},\qquad f(x)=\frac{1}{x^p}.$$ Since $f(x)$ is a positive, decreasing, continuous function, applying the integral test: $$\int_0^\infty \frac{1}{x^p}\,dx = \int_0^\infty x^{-p}\,dx =\frac{1}{1-p} \lim_{h\to \infty} (h^{1-p} - 1)$$ For this integral (and the harmonic p-series) to converge, $\lim_{h\to \infty} h^{1-p} = \lim_{h\to \infty} \frac{1}{h^{p-1}}$ must be finite. For this to be the case, $p-1>0$. (Should this be $p-1\ge 0$?) Therefore for convergence of the p-series, $p>1$, and for divergence, $p\le 1$, by the integral test.
You're almost correct, but take care with the limits of the integral: the given series has the same nature (convergent or divergent) as the integral $$\int_{\color{red}{\pmb1}}^\infty\frac{dx}{x^p}$$ which is convergent if and only if $p>1$. The integral test compares the series with an integral starting at the index where the series starts; the behavior of $\frac1{x^p}$ near $x=0$ is irrelevant. In fact $\int_0^1 x^{-p}\,dx$ diverges for $p\ge 1$, which is why starting the integral at $0$ leads astray.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the inverse of the $\mbox{vec}$ operator? There is a well known vectorization operator $\mbox{vec}$ in matrix analysis. I've vectorized my matrix equations, did some transformations of the vectorized equations, and now I want to get back to the matrix form. Is there a special operator for it?
Adding to the excellent answer by Rodrigo de Azevedo, I would like to point out that there is an explicit formula for the inverse $\operatorname{vec}_{m\times n}^{-1}$, given by $$ \mathbb{R}^{mn}\ni x \mapsto \operatorname{vec}_{m\times n}^{-1}(x) = \big((\operatorname{vec} I_n)^\top \otimes I_m\big)(I_n \otimes x) \in \mathbb{R}^{m\times n}, $$ where $I_n$ denotes the $n\times n$ identity matrix and $\otimes$ denotes the Kronecker product. The formula above can be verified in the following way: Let $X\in\mathbb{R}^{m\times n}$ be such that $\operatorname{vec}{X}=x\in\mathbb{R}^{mn}$, and let $X_k$ denote the $k$-th column of $X$. Additionally, let $M=I_n\otimes x \in\mathbb{R}^{mn^2\times n}$ and let $M_k$ denote the $k$-th column of $M$. Finally, we let $e_k\in\mathbb{R}^n$ be the $k$-th column of $I_n$. Note that $M_k=e_k\otimes\operatorname{vec}{X}$. Recall the identity $\operatorname{vec}(ABC) = (C^\top\otimes A)\operatorname{vec}{B}$, which we shall make heavy use of. Observing that $\operatorname{vec}(e_k^\top\otimes X)=M_k$, we see that $$ \big((\operatorname{vec} I_n)^\top \otimes I_m\big)M_k = \big((\operatorname{vec} I_n)^\top \otimes I_m\big)\operatorname{vec}(e_k^\top\otimes X) = \operatorname{vec}\big(I_m(e_k^\top\otimes X)\operatorname{vec}{I_n}\big) = \operatorname{vec}\big((e_k^\top\otimes X)\operatorname{vec}{I_n}\big) = \operatorname{vec}\big(\operatorname{vec}(XI_ne_k)\big) = \operatorname{vec}(Xe_k) = X_k, $$ and thus $$ \big((\operatorname{vec} I_n)^\top \otimes I_m\big)M = \big((\operatorname{vec} I_n)^\top \otimes I_m\big) \begin{bmatrix} M_1 & \ldots & M_n \end{bmatrix} = \begin{bmatrix} X_1 & \ldots & X_n \end{bmatrix} = X. $$
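For readers who want to see the formula in action, here is a small numerical check (a sketch using NumPy; `order="F"` gives the column-stacking convention of $\operatorname{vec}$):

```python
import numpy as np

m, n = 3, 4
rng = np.random.default_rng(0)
X = rng.standard_normal((m, n))

x = X.reshape(-1, 1, order="F")            # vec(X): stack the columns

# Explicit inverse from above: ((vec I_n)^T (kron) I_m)(I_n (kron) x)
vec_In = np.eye(n).reshape(-1, 1, order="F")
X_rec = np.kron(vec_In.T, np.eye(m)) @ np.kron(np.eye(n), x)

print(np.allclose(X_rec, X))               # True
```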
{ "language": "en", "url": "https://math.stackexchange.com/questions/805565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
The ability of a logical statement to represent a two-place truth function. How can I determine which two-place truth functions can be represented using a logical statement built out of a subset of two logical connectives in $ \{\rightarrow, \wedge, \vee ,\equiv \}$? For example $\{\rightarrow, \wedge\}$.
For any two place truth function X, we can write its truth table as follows:

p q X(p,q)
0 0   ?1
0 1   ?2
1 0   ?3
1 1   ?4

where, of course, ?1, ?2, ?3, and ?4 belong to {0, 1}. Notice that all wffs of propositional logic can get built up from the variables and the connectives. For example (using Polish notation) the wff CKpqNDrArs can get built up via the sequence (p, q, Kpq, r, r, s, Ars, DrArs, NDrArs, CKpqNDrArs). Thus, we can build up any wff using say two (or 1 or 3 or 4) connectives in this way, and see how their truth tables work and see if the columns of the truth tables end up repeating or if we get new columns. For instance... if we just have implication "C", we can write

p q | Cpq CpCpq Cpp Cqp Cqq CqCpq CCpqp CCpqq CCpqCpq
0 0 |  1    1    1   1              0     0
0 1 |  1    1    1   0              0     1
1 0 |  0    0    1   1              1     1
1 1 |  1    1    1   1              1     1

I've left some blank, since they have duplicate values to something else we already have. I generated this example (by hand) by using (p, q, Cpq) as the initial set and finding all possible substitutions in Cxy from that set. Then, leaving only those columns which are not a duplicate of some other column, we can use each wff above a column to see if we get a new column not in our list. Eventually, since the number of possible columns is finite, we can see which truth functions can get represented (and we could also prove, using the set of all formulas above the columns as I used the initial set above, that on the next step we'll only get columns that we already have). Since this procedure can generate all formulas in two variables, it will follow that no other binary truth-function can get generated using just the connectives we've selected.
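This closure computation is easy to mechanize. Here is a sketch in Python (names and encoding are my own) that enumerates all two-place truth functions reachable from $p$ and $q$ under a chosen set of connectives; truth functions are encoded as 4-tuples of values over the rows $(0,0),(0,1),(1,0),(1,1)$:

```python
def imp(a, b):   # material implication, componentwise on truth tables
    return tuple(int(not x or y) for x, y in zip(a, b))

def conj(a, b):  # conjunction
    return tuple(x & y for x, y in zip(a, b))

P = (0, 0, 1, 1)  # truth table of p over rows 00, 01, 10, 11
Q = (0, 1, 0, 1)  # truth table of q

def closure(connectives):
    """All binary truth functions reachable from p, q under the connectives."""
    found = {P, Q}
    while True:
        new = {c(a, b) for c in connectives for a in found for b in found}
        if new <= found:
            return found
        found |= new

print(len(closure([imp])))        # functions representable with -> alone
print(len(closure([imp, conj])))  # functions representable with -> and /\
```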
{ "language": "en", "url": "https://math.stackexchange.com/questions/805634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
find all values of k for which A is invertible $\begin{bmatrix} k &k &0 \\ k^2 &2 &k \\ 0& k & k \end{bmatrix}$ What I did is find the determinant first: $$\det= k(2k-k^2)-k(k^3-0)+0(k^3 -0)=2k^2-k^3-k^4$$ When $\det = 0$ the matrix isn't invertible: $$2k^2-k^3-k^4=0$$ $$k^2(k^2 +k-2)=0$$ $$k^2+k-2=0$$ $$(k+2)(k-1)=0$$ $k = -2$ or $k = 1$. I am lost here: how do I find the values of $k$ for which the matrix is invertible?
Almost all square matrices are invertible; it is very special, i.e. singular, for a square matrix to be non-invertible. As you say, $\det = 2k^2-k^3-k^4$. This factorises to give $k^2(2+k)(1-k)$. Your matrix is therefore invertible for all values of $k$ except $k=0$, $k=-2$ and $k=1$ (note that you dropped the root $k=0$ coming from the factor $k^2$). There are just three isolated values that make the matrix non-invertible. We can even let $k$ be any complex number, i.e. $k$ can range over an infinite, two-dimensional plane; out of all the infinitely many points in the complex plane, only three isolated points give a singular matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Markov property for a Stochastic Process My question: Every stochastic process $X(t), t\geq 0$ with state space $\mathcal{S}$ and independent increments has the Markov property, i.e., for each $y\in \mathcal{S}$ and $0\leq t_0< t_1<\cdots <t_n<\infty$ we have $$ P[X(t_n)\leq y|X(t_0),X(t_1), \ldots, X(t_{n-1})] =P[X(t_n)\leq y|X(t_{n-1})] $$ This theorem is a statement in Kannan's book, An Introduction to Stochastic Processes, on page 93. There is a sketch of a proof, but as a beginner I do not find it intelligible. I would like to see a detailed proof, or a good reference on this theorem.
For every nonnegative $s$ and $t$, let $X^t_s=X_{t+s}-X_t$; then the hypothesis is that the processes $X^t=(X^t_s)_{s\geqslant0}$ and ${}^t\!X=(X_s)_{s\leqslant t}$ are independent. For every $s\geqslant t$, $X_s=X_t+X^t_{s-t}$, hence the process $(X_s)_{s\geqslant t}$ is a deterministic function of the random variable $X_t$, which is ${}^t\!X$-measurable, and of the process $X^t$, which is independent of ${}^t\!X$. Thus, the conditional distribution of the future $(X_s)_{s\geqslant t}$ conditionally on the past ${}^t\!X$ depends only on the present $X_t$. This is the Markov property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A logarithmic integral $\int^1_0 \frac{\log\left(\frac{1+x}{1-x}\right)}{x\sqrt{1-x^2}}\,dx$ How to prove the following $$\int^1_0 \frac{\log\left(\frac{1+x}{1-x}\right)}{x\sqrt{1-x^2}}\,dx=\frac{\pi^2}{2}$$ I thought of separating the two integrals and using the beta or hypergeometric functions, but these do not seem to be the best ideas for approaching the problem. Any other ideas?
After this corpse has risen from the dead anyway, let me give an additional solution which is based on complex analysis but avoids the use of branch cuts, so it should be viewed as complementary to @RandomVariables approach. First, perform a substitution $x\rightarrow\sin(t)$ which brings our integral into the form $$ I=\int_0^{\pi/2}\frac{\log(1+\sin(t))-\log(1-\sin(t))}{\sin(t)} dt $$ Now we introduce a parameter $a$ and differentiate w.r.t. it: $$ I'(a)=\int_0^{\pi/2}\frac{1}{(a+\sin(t))\sin(t)}-\frac{1}{(a-\sin(t))\sin(t)} dt $$ Please note the singularity at zero is removable, so we don't need a Cauchy principal value. Combining the two fractions, we can reduce the above to $$ I'(a)=-\int_0^{\pi/2}\frac{2\,dt}{a^2-\sin^2(t)}=-4\int_0^{\pi/2}\frac{dt}{2a^2-1+\cos(2t)}=-\int_0^{2\pi}\frac{dt}{2a^2-1+\cos(t)} $$ We now can perform the usual $e^{it}\rightarrow z$ substitution to map the problem onto a contour integral over the unit circle $C_+$ traversed counterclockwise: $$ I'(a)=-\frac{2}{i}\int_{C_+}\frac{dz}{2z(2a^2-1)+z^2+1} $$ The poles are given by $z_{\pm}=1-2a^2\pm 2a\sqrt{a^2-1}$. At this stage a little problem appears: for $a < 1$ the poles degenerate and become double poles on our contour, which we don't like, so we choose $a>1$ and take the limit $a\rightarrow 1_+$ in the end. For this choice of $a$ only $z_+$ lies inside our contour of integration. The corresponding residue is $$ \text{Res}(z=z_+)=\frac{1}{z_+-z_-}=\frac{1}{4 a\sqrt{a^2-1} } $$ and therefore, by the residue theorem, $$ I'(a)=-\frac{2}{i}\cdot 2\pi i\cdot\frac{1}{4a\sqrt{a^2-1}}=-\frac{\pi}{a \sqrt{a^2-1}} $$ Now what is left is integrating back in $a$. The corresponding integral is elementary and yields $$ I(a)=\pi\arctan\left(\sqrt{\frac{1}{a^2-1}}\right)+C $$ The constant of integration can be fixed by the observation that $I(\infty)=0$, yielding $C=0$. Taking finally the limit $a\rightarrow 1_+$ yields $$ I=I(1)=\pi \times \arctan(+\infty)=\pi \times \frac{\pi}{2}=\frac{\pi^2}{2} $$ in accordance with other answers. Remark: We see that we can avoid branch cuts completely but instead get new complications in choosing the correct limit of $a$ and finding the poles which are inside our contour. In summary, both methods work and we just exchange some comparably hard problems at the end of the day, so it's just a matter of taste which one we choose.
{ "language": "en", "url": "https://math.stackexchange.com/questions/805893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 10, "answer_id": 5 }
What does the dot product of two vectors represent? I know how to calculate the dot product of two vectors alright. However, it is not clear to me what, exactly, the dot product represents. For the product of two numbers, $2$ and $3$, we say that it is $2$ added to itself $3$ times, or something like that. But when it comes to vectors $\vec{a} \cdot \vec{b}$, I'm not sure what to say. "It is $\vec{a}$ added to itself $\vec{b}$ times," which doesn't make much sense to me.
First of all, if we write $\vec{a} = a \vec{u}$ and $\vec{b} = b \vec{v}$, where $a$ and $b$ are the length of $\vec{a}$ and $\vec{b}$ respectively, then $$\vec{a} \cdot \vec{b} = (a \vec{u})\cdot (b \vec{v}) = ab \,\, \vec{u} \cdot \vec{v};$$ this is a pretty natural property for a product to have. Now as for $\vec{u} \cdot \vec{v}$, this is equal to $\cos \theta,$ where $\theta$ is the angle between $\vec{u}$ and $\vec{v}$. As King Squirrel notes, this is also the length of the projection of $\vec{u}$ onto the line through $\vec{v}$, and also the length of the projection of $\vec{v}$ onto the line through $\vec{u}$. So altogether we get $$\vec{a} \cdot \vec{b} = a b \, \cos \theta,$$ and it has the interpretation in terms of projecting one vector onto another that King Squirrel discusses.
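A quick numerical illustration of $\vec a\cdot\vec b = ab\cos\theta$ (a sketch; the vectors and angle are arbitrary choices of mine):

```python
import numpy as np

theta = 0.7                                          # angle between the vectors
a = 2.0 * np.array([1.0, 0.0])                       # length 2 along the x-axis
b = 3.0 * np.array([np.cos(theta), np.sin(theta)])   # length 3 at angle theta

print(a @ b, 2 * 3 * np.cos(theta))                  # both ~ 4.589
```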
{ "language": "en", "url": "https://math.stackexchange.com/questions/805954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "124", "answer_count": 12, "answer_id": 0 }
Is injective function $f:A \to A$ always surjective? Ok, so while browsing a book (namely Herbert Enderton's book "Elements of Set Theory") I stumbled upon a curiosity which provoked me to try to prove this. Here is how I went about it, but I do not think my solution is correct. All answers as well as corrections are more than welcome. Proof: Since $f: A\to A$ and $f$ is injective we have $$(x,y)\in f \implies x \in A \;\;\land \;\; y\in A \implies (y,z)\in f \;\; \land \;\; z\neq x $$ By using this step repeatedly, we will eventually exhaust set $A$ of members, and thus the range is equal to the domain. However, I am not sure that I have covered the case where there is an element which is in relation with $x$. Also, I am not sure what would happen if the domain and range are the set of real numbers (as I have only managed to study up to the natural numbers so far). All input is highly appreciated.
Your statement works for finite $A$: if $$ f: A\to A $$ is injective then it is surjective, since injectivity gives $|f(A)| = |A|$, and the only subset of the finite set $A$ with as many elements as $A$ is $A$ itself. This is not true when $A$ is infinite. Consider for example the function $f : \mathbb{Z} \to \mathbb{Z}$ given by $f(x) = 2x$. This function is injective, but not surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Help Understanding Evaluation of Integral Please help me to understand the evaluation of this integral. $$\int_0^1\int_u^{\min(1,u+z)} 2\;dv\;du$$ I know that the correct answer is $$ f(z) = \left\{ \begin{array}{lr} 1 & & z \geq 1\\ 2z & & z \leq 0\\ -(z-2)z && \mathrm{else} \end{array} \right. $$ But I've been staring at this for a while and I don't understand the evaluation when $z$ is greater than or equal to $1$. It seems like it should be $2-2u$. And in the "else" case, I don't know how to evaluate that. Any help would be appreciated.
Note that the limits of integration on $u$ are $u \in [0,1]$. So for $u$ in this interval, and for $z \ge 1$, it follows that $u + z \ge 1$, hence $\min\{1, u+z \} = 1$, and the integral becomes $$\int_{u=0}^1 \int_{v = u}^1 2 \, dv \, du = \int_{u=0}^1 2(1-u) \, du = 1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/806110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral $\int_0^{\pi/4}\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}=\frac{\pi}{8a}$ I am trying to prove this interesting integral $$ \mathcal{I}:=\int_0^{\pi/4}\frac{dx}{{\sin 2x}\,(\tan^ax+\cot^ax)}=\frac{\pi}{8a},\qquad \mathcal{Re}(a)\neq 0. $$ This result is breathtaking, but I am more stumped than usual. It truly is magnificent. I am not sure how to approach this; note $\sin 2x=2\sin x \cos x$. I am not sure how to approach this because of the term $$ (\tan^ax+\cot^ax) $$ in the denominator. I was trying to use the identity $$ \tan \left(\frac{\pi}{2}-x\right)=\cot x $$ since this method solves a similar kind of integral, but didn't get anywhere. A bad idea I tried was to differentiate with respect to $a$: $$ \frac{dI(a)}{da}=\int_0^{\pi/4}\partial_a \left(\frac{1}{{\sin 2x}\,(\tan^ax+\cot^ax)}\right)dx=-\int_0^{\pi/4} \frac{\cot^a x \log(\cot x )+\log(\tan x ) \tan^a x}{\sin 2x \, (\cot^a x+\tan^a x)^2}dx $$ which seems more complicated when I break it up into two integrals. How can we solve the integral? Thanks.
$I\equiv\int_{0}^{\pi/4}\frac{dx}{\sin(2x)\left[\tan^{a}(x) + \cot^{a}(x)\right]}=\frac{\pi}{8|a|},\qquad \Re(a) \neq 0$. Note that the integral obviously depends only on $|a|$, since replacing $a$ by $-a$ just swaps the two terms $\tan^{a}$ and $\cot^{a}$. \begin{align} I&=\frac12\ \overbrace{\int_{0}^{\pi/2}\frac{dx}{\sin(x) \left[\tan^{|a|}(x/2) + \cot^{|a|}(x/2)\right]}}^{t\ \equiv\ \tan(x/2)} \\[3mm]&=\frac12\int_{0}^{1}\frac{2\,dt/(1 + t^{2})}{\left[2t/(1 + t^{2})\right]\left(t^{|a|} + t^{-|a|}\right)} =\frac12\ \overbrace{\int_{0}^{1} \frac{t^{|a| - 1}}{t^{2|a|} + 1}\,dt}^{t^{|a|}\ \equiv\ x\ \implies\ t = x^{1/|a|}} \\[3mm]&=\frac12\int_{0}^{1}\frac{\left(x^{1/|a|}\right)^{|a| - 1}}{x^{2} + 1}\,\frac{1}{|a|}\,x^{1/|a| - 1}\,dx =\frac{1}{2|a|}\ \overbrace{\int_{0}^{1}\frac{dx}{x^{2} + 1}}^{=\ \pi/4} \end{align} $$\color{#00f}{\large I\equiv\int_{0}^{\pi/4}\frac{dx}{\sin(2x)\left[\tan^{a}(x) + \cot^{a}(x)\right]}=\frac{\pi}{8|a|}}\,,\qquad\Re(a) \not= 0 $$
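A numerical spot check of the closed form (a sketch; the values of $a$ are arbitrary choices, and $\cot^a$ is written as $\tan^{-a}$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda x, a: 1 / (np.sin(2*x) * (np.tan(x)**a + np.tan(x)**(-a)))
for a in (0.5, 2.0, -3.0):
    val, _ = quad(f, 0, np.pi/4, args=(a,))
    print(val, np.pi / (8 * abs(a)))      # the two columns agree
```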
{ "language": "en", "url": "https://math.stackexchange.com/questions/806195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Using Integration By Parts results in 0 = 1 I've run into a strange situation while trying to apply Integration By Parts, and I can't seem to come up with an explanation. I start with the following equation: $$\int \frac{1}{f} \frac{df}{dx} dx$$ I let: $$u = \frac{1}{f} \text{ and } dv = \frac{df}{dx} dx$$ Then I find: $$du = -\frac{1}{f^2} \frac{df}{dx} dx \text{ and } v = f$$ I can then substitute into the usual IBP formula: $$\int udv = uv - \int v du$$ $$\int \frac{1}{f} \frac{df}{dx} dx = \frac{1}{f} f - \int f \left(-\frac{1}{f^2} \frac{df}{dx}\right) dx$$ $$\int \frac{1}{f} \frac{df}{dx} dx = 1 + \int \frac{1}{f} \frac{df}{dx} dx$$ Then subtracting the integral from both sides, I've now shown that: $$0 = 1$$ Obviously there must be a problem in my derivation here... What wrong assumption have I made, or what error have I made? I'm baffled.
The problem here is in how you applied the integration by parts formula $$uv-\int \frac{du}{dx}v\,dx $$ with $u=\frac 1f$ and $dv=\frac {df}{dx}dx$. When you use the by parts formula, your first task is to get $v$, and for that you need to compute $\int dv$, right? You have done it all right, but the problem comes when you write $\int df =f$: here you also need to add a constant $C$, or your proof will get pretty much messed up. In the end, where you got $0=1$, you are lacking that $C$ in the equation. When you take antiderivatives you can't just provide one answer; if you write $\int df=f$ only, then you are getting only one of infinitely many solutions, because $\frac {d}{df}(f)=1$ but $\frac {d}{df}(f+2)$ is also $1$. In fact, take the derivative of $f+C$ (w.r.t. $f$) where $C$ is any constant: you will always end up with $1$. So that's the error; next time be careful with constants. Hope this answered your question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 5, "answer_id": 4 }
Prove that $A^k = 0 $ iff $A^2 = 0$ Let $A$ be a $ 2 \times 2 $ matrix and let $k \geq 2$ be a positive integer. Prove that $A^k = 0 $ iff $A^2 = 0$. I can do this exercise if I may use $ \det (A^k) = (\det A)^k $, but in the book this question comes before that fact. Thank you very much for your help!
The solution using the minimal polynomial and Cayley-Hamilton is a bit of an overkill and somewhat of a magic solution. I prefer the following non-magic solution (for the non-trivial implication, and there is no need to assume the matrix is $2\times 2$). Think of $A$ as a linear operator $A: V \to V$ with $V$ an $n$-dimensional vector space, and suppose $A^t=0$. We'll show that $A^n=0$. Now, for each $m\ge 1$ consider the space $K_m$, the kernel of $A^m$. It is immediate that $K_{m}\subseteq K_{m+1}$. Since $A^t=0$ it follows that $K_t=V$. It is also immediate that if $K_m=K_{m+1}$, then $K_m=K_{r}$ for all $r>m$. Thus, the sequence of kernels is an increasing sequence of subspaces that stabilizes as soon as one step equals the next, and it is eventually all of $V$. Now we will use the fact that $V$ is of dimension $n$. Considering the dimensions of the kernels, the above implies that the sequence of dimensions is strictly increasing until it stabilizes. Since it reaches $V$, the dimensions reach $n$, and since the dimension of $K_1$ is not zero (a nonzero nilpotent operator cannot be injective), the sequence of dimensions starts at $1$ or more. That means it has to stabilize after no more than $n$ steps, and thus $K_m=V$ for some $m\le n$. But then $K_n=V$, and thus $A^n=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 8, "answer_id": 2 }
Sum of products of binomial coefficient $-1/2 \choose x$ I am having trouble with showing that $$\sum_{m=0}^n (-1)^n {-1/2 \choose m} {-1/2 \choose n-m}=1$$ I know that this relation can be shown by comparing the coefficients of $x^n$ in the power series for $(1+x)^{-1}$ and $(1+x)^{-1/2} (1+x)^{-1/2}$.
Consider the sum \begin{align} S_{n} = (-1)^{n} \sum_{m=0}^{n} \binom{-1/2}{m} \binom{-1/2}{n-m}. \end{align} Using the results: \begin{align} \binom{-1/2}{m} &= \frac{(-1)^{m}(1/2)_{m}}{m!} \\ \binom{-1/2}{n-m} &= \frac{(-1)^{n-m} (1/2)_{n-m}}{(n-m)!} = (-1)^{n+m} \frac{(1/2)_{n} (-n)_{m}}{(1)_{n} (1/2-n)_{m}} \\ \end{align} the series becomes \begin{align} S_{n} &= (-1)^{n} \sum_{m=0}^{n} \binom{-1/2}{m} \binom{-1/2}{n-m} = \frac{(1/2)_{n}}{(1)_{n}} \ \sum_{m=0}^{n} \frac{(1/2)_{m}(-n)_{m}}{m! (1/2-n)_{m}} = \frac{(1/2)_{n}}{(1)_{n}} \ {}_{2}F_{1}(1/2, -n; 1/2-n; 1). \end{align} Since the series terminates, the Chu-Vandermonde identity ${}_{2}F_{1}(a,-n;c;1) = \frac{(c-a)_{n}}{(c)_{n}}$ applies with $a=1/2$, $c=1/2-n$: \begin{align} {}_{2}F_{1}(1/2, -n; 1/2-n; 1) = \frac{(-n)_{n}}{(1/2-n)_{n}} = \frac{(-1)^{n}\, n!}{(-1)^{n}\,(1/2)_{n}} = \frac{(1)_{n}}{(1/2)_{n}}, \end{align} so that $S_{n} = \frac{(1/2)_{n}}{(1)_{n}}\cdot\frac{(1)_{n}}{(1/2)_{n}} = 1$. Hence, \begin{align} (-1)^{n} \sum_{m=0}^{n} \binom{-1/2}{m} \binom{-1/2}{n-m} = 1. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/806605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is a proof still valid if only the writer understands it? Say that there is some conjecture that someone has just proved. Let's assume that this proof is correct--that it is based on deductive reasoning and reaches the desired conclusion. However, if he/she is the only person (in the world) that understands the proof, say, because it is so complicated, conceptually, and long, does this affect the validity of the proof? Is it still considered a proof? Essentially, what I'm asking is: does the validity of a proof depend on the articulation of the author, and whether anyone else understands it? The reason I ask is that the idea behind a proof is to convince others that the statement is true, but what if no-one understands the proof, yet it's a perfectly legitimate proof?
There only appears to be a problem because we are using the same word for closely-related but distinct concepts (not an uncommon situation in philosophy), namely * *"proof" as in formal proof, which Wikipedia defines as a finite sequence of sentences each of which is an axiom or follows from the preceding sentences in the sequence by a rule of inference *"proof" as in "any argument that the listener finds sufficiently convincing" The situation you describe contains a proof according to the first meaning, but not the second. Conundrum resolved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "52", "answer_count": 16, "answer_id": 1 }
Sum of these quotients can not be an integer Suppose $a$ and $b$ are positive integers that are relatively prime (i.e., $\gcd(a,b)=1$). Prove that, for all $n\in \mathbb{N}$, the sum $$ \frac{1}{a}+\frac{1}{a+b}+\frac{1}{a+2b}+\cdots+\frac{1}{a+nb} $$ is not an integer. I think I have tried every way I could, but none led me to the complete answer. Do you have any idea?
A good way to demonstrate it could be to find an integer $K$ such that, multiplying the sum by $K$, the result is not an integer. Finding such a $K$ would clearly imply that the sum is not an integer. For example, we could choose as $K$ a value obtained by starting from the product $a(a+b)(a+2b)(a+3b)\cdots(a+nb)$ and eliminating one factor (or more factors) from the sequence. Multiplying the whole sum by this $K$, all terms of the sum except one (or except those corresponding to the eliminated factors) become integers. Showing that the product of the remaining term (or terms) with $K$ is not an integer would provide the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Comparing two linear functionals Let $X$ be a Banach space and let $h:X\to\Bbb C$ and $f:X\to\Bbb C$ be two bounded linear functionals such that whenever $f(x)=0$ for some $x\in X$, we also have $h(x)=0$. Prove that there exists a $\lambda\in \Bbb{C}$ such that for any $x\in X$ we have $h(x)=\lambda f(x)$.
If $f=0$ then choose $\lambda=0$. If $f\neq0$, let $x_0\in X$ be such that $f(x_0)\neq0$; by rescaling $x_0$ we may suppose that $f(x_0)=1$. For any $x\in X$ we have $f(x-f(x)x_0)=0$ and so $h(x-f(x)x_0)=0$. This means $h(x)=f(x)h(x_0)$. Thus it's enough to set $\lambda=h(x_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/806773", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Propositional logic De Morgan's theorem question The theorem states that $(A\wedge B) = \neg (\neg A\vee \neg B)$, where $A$ and $B$ are propositional formulas. Can't I turn $\neg (\neg A\vee \neg B)$ into $(\neg \neg A\vee \neg \neg B)$ and then cancel the double negations so it's $(A\vee B)$? That seems to be allowed if $A$ and $B$ were propositions.
No: a negation in front of a parenthesized formula does not simply move onto each part; distributing it is exactly De Morgan's law, which flips the $\vee$ to $\wedge$ along the way: $$\neg(\neg A \vee \neg B) \iff \neg\neg A \wedge \neg\neg B \iff A \wedge B $$ (If your proposed step were valid, it would make $A\wedge B$ equivalent to $A\vee B$, which fails e.g. when $A$ is true and $B$ is false.)
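If it helps, here is a tiny brute-force truth-table check (a sketch) of both claims:

```python
from itertools import product

for A, B in product([False, True], repeat=2):
    # De Morgan: not(not A or not B) agrees with (A and B) on every row
    assert (not ((not A) or (not B))) == (A and B)

# ...whereas A and B versus A or B differ on the row A=True, B=False:
print(True and False, True or False)   # False True
```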
{ "language": "en", "url": "https://math.stackexchange.com/questions/806897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Derive a transformation matrix that mirrors the image over a line passing through the origin with angle $\phi$ to the $x$-axis. Question: Using homogeneous coordinates, derive a $3$x$3$ transformation matrix $M$ that mirrors an image over a line passing through the origin, with angle $\phi$ to the $x$-axis. Comment: This is from an old exam in computer graphics. I don't remember how we did this back in linear algebra, so I'd be grateful if someone could show me the steps. If you don't know what "homogeneous coordinates" means, pay no attention to it; a $2$x$2$ transformation matrix without homogeneous coordinates would suffice and I can do the rest.
If vector $A$ is reflected across vector $B$ to create vector $C$, * *The midpoint of $A$ and $C$ is along $B$: $C + (A - C)/2 \in kB$, so $A + C \in kB$ *Length is preserved: $|A| = |C|$ Your $B$ vector is $\begin{bmatrix} \cos(\phi) \\ \sin(\phi)\end{bmatrix}$ First consider the x-axis unit: $e_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$ translates to $f(e_1) = \begin{bmatrix} a \\ b \end{bmatrix}$: $$\begin{bmatrix} 1 \\ 0 \end{bmatrix} + \begin{bmatrix} a \\ b \end{bmatrix} = k\begin{bmatrix} \cos(\phi) \\ \sin(\phi)\end{bmatrix}$$ $$\begin{cases} a + 1 = k \cos(\phi) \\ b = k \sin(\phi) \\ a^2 + b^2 = 1 \quad \text{(because length is preserved)} \end{cases}$$ $$\downarrow$$ $$\begin{cases} k = 2\cos(\phi) \\ a = 2\cos(\phi)^2 - 1 \\ b = 2\cos(\phi)\sin(\phi) \end{cases}$$ Same for the y-unit $e_2 = \begin{bmatrix} 0 \\ 1 \end{bmatrix}$ translates to $f(e_2) = \begin{bmatrix} c \\ d \end{bmatrix}$: $$\begin{bmatrix} 0 \\ 1 \end{bmatrix} + \begin{bmatrix} c \\ d \end{bmatrix} = k\begin{bmatrix} \cos(\phi) \\ \sin(\phi)\end{bmatrix}$$ $$\begin{cases} c = k \cos(\phi) \\ d + 1 = k \sin(\phi) \\ c^2 + d^2 = 1 \quad \text{(again because length is preserved)} \end{cases}$$ $$\downarrow$$ $$\begin{cases} k = 2\sin(\phi) \\ c = 2\cos(\phi)\sin(\phi) \\ d = 2\sin(\phi)^2 - 1 \end{cases}$$ Since we know that $f(v) = f(xe_1 + ye_2) = xf(e_1) + yf(e_2)$, it follows: $$\begin{align} f\left(\begin{bmatrix} x \\ y \end{bmatrix}\right) &= \begin{bmatrix} 2\cos(\phi)^2 - 1 & 2\cos(\phi)\sin(\phi) \\ 2\cos(\phi)\sin(\phi) & 2\sin(\phi)^2 - 1\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ &= \begin{bmatrix} \cos(2\phi) & \sin(2\phi) \\ \sin(2\phi) & -\cos(2\phi)\end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \\ \end{align}$$
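Since the original question asked for a $3\times3$ homogeneous matrix in code, here is a minimal sketch (in Python rather than C++; the function name is my own) that embeds the $2\times2$ block derived above:

```python
import numpy as np

def mirror_matrix(phi):
    """3x3 homogeneous matrix reflecting across the line through the
    origin at angle phi to the x-axis."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c,   s,   0.0],
                     [s,  -c,   0.0],
                     [0.0, 0.0, 1.0]])

# Reflecting across the 45-degree line swaps x and y:
print(mirror_matrix(np.pi / 4) @ np.array([2.0, 1.0, 1.0]))   # [1. 2. 1.]
```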
{ "language": "en", "url": "https://math.stackexchange.com/questions/807031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equivalent norm in Sobolev space Let $\rho\in H^{1}(0,\pi)$ be a function, and consider the functional $$ I(\rho)=\bigg(\int_{0}^{\pi}{\sqrt{\rho^2(t)+\dot\rho^2(t)}\,dt}\bigg)^2. $$ I'm asking if it is equivalent to the norm $$ \lVert \rho \rVert_{H^1}=\lVert \rho \rVert_{L^2}+\lVert \dot\rho \rVert_{L^2} $$ on $H^{1}(0,\pi)$. Obviously $I(\rho)\lesssim \lVert \rho \rVert_{H^1}^2$ (by Cauchy-Schwarz); I'm asking if the reverse inequality holds.
To be precise, you are asking if $\sqrt{I(\rho)}$ is equivalent to $\|\rho\|_{H^1}$. As you noted, $\sqrt{I(\rho)}$ is dominated by $\|\rho\|_{H^1}$. However, the converse fails. Consider $\rho(x)=\sqrt{x+\epsilon}$. Since $\rho'(x) = \dfrac{1}{2\sqrt{x+\epsilon}}$, we have $\|\rho\|_{H^1}\to\infty$ as $\epsilon \to 0$. On the other hand, $\sqrt{I(\rho)}$ stays bounded as $\epsilon\to 0$: $$\int_{0}^{\pi}{\sqrt{\rho^2(x)+\dot\rho^2(x)}\,dx} \le \int_{0}^{\pi}{\sqrt{\pi+\epsilon + \frac{1}{ 4(x+\epsilon)} }\,dx} = O(1) $$ since the singularity at $x=0$ is like $1/\sqrt{x}$, which is integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/807156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Calculus Proof Unsure Let $$g(x)=x^2\sin\left(\frac{1}{x}\right)+\frac{1}{2}x$$ Show that $g'(0)>0$ but there is no neighborhood of $0$ on which $g$ is increasing. (More precisely, every interval containing $0$ has subintervals on which $g$ is decreasing.) For the first part, I used the limit definition of the derivative to calculate $g'(0)=0.5$. I have an intuitive understanding of the second part, but I am having trouble coming up with an approach to the proof. Any help would be appreciated.
This is a very good question. Clearly $$g'(x) = 2x\sin\left(\frac{1}{x}\right) - \cos\left(\frac{1}{x}\right) + \frac{1}{2},\,\,\text{if }x \neq 0$$ and $$g'(0) = \lim_{x \to 0}\frac{g(x) - g(0)}{x} = \lim_{x \to 0}x\sin\left(\frac{1}{x}\right) + \frac{1}{2} = \frac{1}{2}$$ Thus we have $g'(0) > 0$. But if we look at the function $g'(x)$, it consists of three parts: 1) $2x\sin(1/x)$, which tends to $0$ as $x \to 0$, 2) $-\cos(1/x)$, which oscillates between $-1$ and $1$, and 3) $1/2$, which remains constant. Thus as $x \to 0$, $g'(x)$ oscillates between values near $(-1 + (1/2))$ and $(1 + (1/2))$, i.e. between $-1/2$ and $3/2$. It follows that if we take any interval $(-h, h)$ around $0$ then we have $g'(x) < 0$ at some points in this interval (because $g'$ keeps returning to values near the negative value $-1/2$). Hence $g(x)$ is not increasing in any interval containing $0$. If we observe our argument carefully, we find that the modified function $f(x) = x^{2}\sin(1/x) + kx$ has the same behavior as $g(x)$ if $0 < k < 1$. If $k = 1/2$ then $f(x) = g(x)$. We can analyze the behavior of $f(x)$ in the case $k \geq 1$ with some more difficulty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/807259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Do we have such kind of estimates? Let $0<a_0\leq a(x)$ be a smooth function on $\mathbb{T}=[0,2\pi]$ with $a(0)=a(2\pi)$. Does it hold that $$ \int_{\mathbb{T}}a(x)|\partial_x\phi|^2 dx\geq \int_{\mathbb{T}}|\partial_xa|^2|\phi|^2 dx $$ for all $\phi\in H_{per}^1(\mathbb{T})$? More precisely, for all $\phi$ with $$ \phi(0)=\phi(2\pi)=0,\quad \int_{\mathbb{T}}\phi\, dx=0 $$ Thanks.
Controlling $|\phi|$ by $|\phi'|$ sounds reasonable. But we can't control $|a'|$ by $|a|$. Example: let $a(x) = 2+\sin nx$, where $n$ is large. As $n\to \infty$, $$\int_{\mathbb{T}}a(x)|\partial_x\phi|^2 dx$$ stays bounded but $$\int_{\mathbb{T}}|\partial_xa|^2|\phi|^2 dx$$ blows up (unless $\phi\equiv 0$). You may want to consider $$\int_{\mathbb{T}}a(x)|\partial_x\phi|^2 dx\geq C\int_{\mathbb{T}}|\partial_xa|^2|\phi|^2 dx$$ with $C$ depending on $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/807334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Permutation and Combination Puzzle - Spy Keypad Keypad:

1 2 3
4 5 6
7 8 9

J. Bond has to break into the headquarters of an evil organization and steal important documents. The documents are in a safe that can only be opened by entering the correct code into the keypad, which is a 3 × 3 grid as shown above. Bond has been told that every two consecutive digits in the code will always be adjacent keys on the keypad. For example, the digit 1 will only be followed by a 2 or 4, the digit 5 will only be followed by a 2, 4, 6 or 8, and so on. So 3252 and 12369 are valid codes, but 1234 is not (3 is not adjacent to 4 on the keypad) and 55 is not (5 is not adjacent to 5 on the keypad). Bond also knows the first digit of the code and the length of the code. From this, he would like to compute the number of possible codes he has to try. For instance, if the first digit is 4 and the length of the code is 3, then there are 8 possible codes, namely {412, 414, 452, 454, 456, 458, 474, 478}. In each of the following cases, given the first digit of the code and the number of digits in the code, help Bond compute the total number of possible secret codes. (a) First digit 2, number of digits 8. (b) First digit 5, number of digits 10. (c) First digit 9, number of digits 13. This was a real tough one that I simply cannot get through. Help please?
First of all, note that there are $3$ classes of numbers: corner ($1,3,7,9$), edge ($2,4,6,8$) and centre ($5$). Now clearly a corner number can be followed by any $1$ of $2$ edge numbers. An edge number can be followed by any $1$ of $2$ corner numbers or a centre number. And a centre number can only be followed by $1$ of $4$ edge numbers. Using these relations, define $3$ functions $C(n)$,$E(n)$ and $Cen(n)$ where $C(n)$ represents the total possible number of strings of length $n$ starting with a corner number. Likewise for the other 2 functions. Now we have $$C(n)=2\cdot E(n-1)$$ $$E(n)=2\cdot C(n-1)+Cen(n-1)$$ $$Cen(n)=4 \cdot E(n-1)$$ Also you can determine the initial values for the $3$ functions $$C(1)=E(1)=Cen(1)=1$$ Using these and the recurrence relations you're good to go.
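Here is a short sketch that turns these recurrences directly into numbers (my own code; as a sanity check, it reproduces the $8$ codes of length $3$ starting from the edge key $4$ given in the problem):

```python
from functools import lru_cache

CORNER, EDGE, CENTRE = "corner", "edge", "centre"
CLASS = {1: CORNER, 3: CORNER, 7: CORNER, 9: CORNER, 5: CENTRE,
         2: EDGE, 4: EDGE, 6: EDGE, 8: EDGE}

@lru_cache(maxsize=None)
def count(cls, n):
    """Number of valid codes of length n starting on a key of class cls."""
    if n == 1:
        return 1
    if cls == CORNER:
        return 2 * count(EDGE, n - 1)
    if cls == EDGE:
        return 2 * count(CORNER, n - 1) + count(CENTRE, n - 1)
    return 4 * count(EDGE, n - 1)          # centre key

print(count(CLASS[4], 3))                   # 8, matching the worked example
print(count(CLASS[2], 8),                   # (a) 1536
      count(CLASS[5], 10),                  # (b) 16384
      count(CLASS[9], 13))                  # (c) 196608
```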
{ "language": "en", "url": "https://math.stackexchange.com/questions/807431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Explaining something to the half I'm a private tutor in my free time, teaching some basic high school mathematics, and I've often been asked: "Why is something to the half equal to the root of that something?" I'm having problems explaining it. I have an idea of why in my head, but obviously this idea is not strong enough, as I can't explain it properly. Can anyone lay it out?
We have for $x\in\Bbb{R}_{>0}$ the functional equation $x^ax^b =x^{a+b}$, so $x^{\frac{1}{2}}x^{\frac{1}{2}}=x^{\left(\frac{1}{2}+\frac{1}{2}\right)}=x^{1}$. Since finding a square root of $x$ is equivalent to finding an $y\in\Bbb{R}$ with $y\cdot y=x$, we can conclude $\sqrt{x}=x^{\frac{1}{2}}$ (for the standard branch of the root and the $\exp$-function).
{ "language": "en", "url": "https://math.stackexchange.com/questions/807541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integration $\int_{-1}^1 \sqrt{\frac{r^2-x^2}{1-x^2}}dx$ I am interested in the following integral: (r is a constant) $$\int_{-1}^1 \sqrt{\frac{r^2-x^2}{1-x^2}}dx$$ Initially I thought of a trigonometric substitution, or a substitution like $z^2=r^2-x^2$, but to no avail. Is it possible to find an analytical solution?
The antiderivative involves elliptic integrals (which are not the nicest I know). From there, the integral is given by $$\int_{-1}^1 \sqrt{\frac{r^2-x^2}{1-x^2}}dx=2 r E\left(\frac{1}{r^2}\right)$$ provided that $\Re(r)\geq 1\lor \Re(r)\leq -1\lor r\notin \mathbb{R}$
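Here $E(m)$ is the complete elliptic integral of the second kind in the parameter convention $E(m)=\int_0^{\pi/2}\sqrt{1-m\sin^2\theta}\,d\theta$. A quick numerical check (a sketch; `scipy.special.ellipe` uses this same convention, and $r=2$ is an arbitrary choice):

```python
import numpy as np
from scipy.special import ellipe
from scipy.integrate import quad

r = 2.0
num, _ = quad(lambda x: np.sqrt((r**2 - x**2) / (1 - x**2)), -1, 1)
print(num, 2 * r * ellipe(1 / r**2))   # both ~ 5.87
```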
{ "language": "en", "url": "https://math.stackexchange.com/questions/807636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\lim_{n\rightarrow \infty} \sqrt[n]{c_1^n+c_2^n+\ldots+c_m^n} = \max\{c_1,c_2,\ldots,c_m\}$ Let $m\in \mathbb{N}$ and $c_1,c_2,\ldots,c_m \in \mathbb{R}_+$. Show that $$\lim_{n\rightarrow \infty} \sqrt[n]{c_1^n+c_2^n+\ldots+c_m^n} = \max\{c_1,c_2,\ldots,c_m\}$$ My attempt: Since $$\lim_{n\rightarrow \infty} \sqrt[n]{c_1^n+c_2^n+\ldots+c_m^n} \leq \lim_{n\rightarrow \infty}\sqrt[n]{\max\{c_1,c_2,\ldots,c_m\}} = \lim_{n\rightarrow \infty} \sqrt[n]{n}\sqrt[n]{\max\{c_1,c_2,\ldots,c_m\}}=\lim_{n \rightarrow \infty} \max\{\sqrt[n]{c_1^n},\sqrt[n]{c_2^n},\ldots,\sqrt[n]{c_m^n}\}=\lim_{n \rightarrow \infty}\max\{c_1,c_2,\ldots,c_m\}=\max\{c_1,c_2,\ldots,c_m\}$$ it follows that $\lim_{n\rightarrow \infty} \sqrt[n]{c_1^n+c_2^n+\ldots+c_m^n}$ is bounded, but I don't think it's monotonically decreasing, at least I can't prove this. Can anybody tell me whether the approach I have chosen is a good one, whether what I have done is correct and how to finish the proof?
You can see there is an error in your approach if you consider a simple example. Let $c_1=2$ and $c_2=\cdots=c_m=0$. Then $$\lim_{n\to\infty}\sqrt[n]{c_1^n+c_2^n+\cdots+c_m^n}=\lim_{n\to\infty}\sqrt[n]{2^n}=2$$ but $$\lim_{n\to\infty}\sqrt[n]{\max\{c_1,c_2\ldots,c_m\}}=\lim_{n\to\infty}\sqrt[n]{2}=1$$ so your first inequality does not always hold. (For a correct approach, let $c=\max\{c_1,\ldots,c_m\}$ and squeeze: $c^n\le c_1^n+\cdots+c_m^n\le m\,c^n$, so $c\le\sqrt[n]{c_1^n+\cdots+c_m^n}\le \sqrt[n]{m}\,c\to c$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/807759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove that the divergence of a unit vector equals 2/magnitude of the vector Let $\vec r=(x,y,z)$. Firstly, find $\vec \nabla (\frac 1 r)$, where $r$ is the magnitude of $\vec r$. I think I've done this correctly to get $-x(x^2+y^2+z^2)^{-\frac32} \hat i-y(x^2+y^2+z^2)^{-\frac32} \hat j-z(x^2+y^2+z^2)^{-\frac32} \hat k$ Secondly, prove that $\vec \nabla\cdot \frac{\vec r}{r}=\frac2r$. I've really got no idea for the second part.
Let $\vec r=(x,y,z)=v_x\hat{i}+v_y\hat{j}+v_z\hat{k}$ so that $v_x=x$, $v_y=y$ and $v_z=z$. Note that the magnitude of the vector $\vec r$ is given by $$r=(v_x^2+v_y^2+v_z^2)^{1/2}=(x^2+y^2+z^2)^{1/2}$$ For the second part, the divergence operator on vector $\vec r$ (denoted by $\nabla\cdot\vec r$) results in a signed scalar, and is given by the following equation:- $$\nabla\cdot\vec r = \frac{\partial v_x}{\partial x}+\frac{\partial v_y}{\partial y}+\frac{\partial v_z}{\partial z}$$ As we need to apply the divergence operator to $\frac{\vec r}{r}$ we can use the product rule, making use of your answer to the first part (which is indeed correct, and highlighted in blue):- $$\nabla\cdot\frac{\vec r}{r}=\frac{1}{r}(\nabla\cdot\vec r)+\vec r\cdot(\color{blue}{\nabla\frac{1}{r}})\\=\frac{1}{r}\left(\frac{\partial v_x}{\partial x}+\frac{\partial v_y}{\partial y}+\frac{\partial v_z}{\partial z}\right)+\left(v_x\hat{i}+v_y\hat{j}+v_z\hat{k}\right)\cdot\left(\color{blue}{-\frac{x}{r^{3}}\hat{i}-\frac{y}{r^{3}}\hat{i}-\frac{z}{r^{3}}\hat{k}}\right)\\=\frac{1}{r}\left(\frac{\partial x}{\partial x}+\frac{\partial y}{\partial y}+\frac{\partial z}{\partial z}\right)-\left(\frac{x^2+y^2+z^2}{r^3}\right)\\=\frac{3}{r}-\frac{1}{r}=\frac{2}{r}$$
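A symbolic spot check of both parts with SymPy (a sketch):

```python
import sympy as sp

x, y, z = sp.symbols("x y z", real=True, positive=True)
r = sp.sqrt(x**2 + y**2 + z**2)

grad = [sp.diff(1 / r, s) for s in (x, y, z)]      # part 1: grad(1/r)
div = sum(sp.diff(s / r, s) for s in (x, y, z))    # part 2: div(r_vec / r)

print(sp.simplify(grad[0] + x / r**3))             # 0, matching -x/r^3
print(sp.simplify(div - 2 / r))                    # 0, i.e. div = 2/r
```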
{ "language": "en", "url": "https://math.stackexchange.com/questions/807853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
In triangle ABC the measure of angle B is 90°, AC is 500, and BC is 140; which ratio represents the tangent of angle A? Need help answering this question: in triangle $ABC$, the measure of angle $B$ is $90^\circ$, $AC$ is $500$, and $BC$ is $140$. Which ratio represents the tangent of angle $A$?
Since you have a right triangle, you can use the Pythagorean theorem to find side $AB$: $AB^2+BC^2=AC^2$, so $AB^2=500^2-140^2=230400$, giving $AB=480$. Tangent is opposite over adjacent, so it is $\frac{BC}{AB}=\frac{140}{480}=\frac{7}{24}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/807950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subgroups of GL(2,C) isomorphic to Z Let $\mathbb Z\to \mathrm{GL}_2(\mathbb C)$ be an injective homomorphism. I'm wondering about the possibilities for the image of $\mathbb Z$. I think the image is always conjugate to a subgroup of matrices of the form $$\left( \begin{array}{cc} \lambda_1 & b \\ 0 & \lambda_2\end{array}\right),$$ where $b \in n\mathbb Z$ (for some $n$) and $\lambda_1$ and $\lambda_2$ are in $\mathbb C$? My question is what other non-trivial and easy to state conditions does this cyclic subgroup (of infinite order) have to satisfy? If $b=0$, then $\lambda_1$ or $\lambda_2$ is of infinite order (in $\mathbb C^\ast$) and both are non-zero. What if $b\neq 0 $?
The Jordan normal form of the image of $1\in\mathbb Z$ is either $$\begin{pmatrix}\lambda_1&0\\0&\lambda_2\end{pmatrix}, $$ which is the case you handled, or it is $$\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix}. $$ In the second case, $$\begin{pmatrix}\lambda&1\\0&\lambda\end{pmatrix}^n=\begin{pmatrix}\lambda^n&n\lambda^{n-1}\\0&\lambda^n\end{pmatrix}, $$ so any (nonzero) $\lambda$ will do.
{ "language": "en", "url": "https://math.stackexchange.com/questions/808034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of quasigroups with no identity elements If you scroll to the bottom of this page, there is a table claiming quasigroups have divisibility but not identity (in general). What would be some examples of quasigroups without an identity element?
A finite quasigroup is essentially a Latin square used as a "multiplication" table. Consider, for $n \gt 2$, a Latin square on $n$ symbols, and label the rows (resp. columns) with a permutation of the symbols chosen so that no row (resp. no column) of the square reads the same as the sequence of column (resp. row) labels. This determines a quasigroup without identity, if the entries of the Latin square are considered the result of the binary operation on the row symbol and column symbol assigned to that entry. The rows and columns may then be permuted to any common order you please, and for symbol set $\{1,2,3\}$ a specific example of the Latin square, with rows and columns labeled $1,2,3$ in the canonical order, would be: $$ \begin{bmatrix} 1 & 3 & 2 \\ 3 & 2 & 1 \\ 2 & 1 & 3 \end{bmatrix} $$ In this fashion $2 * 3 = 1$, but no element acts as a left (resp. a right) identity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/808122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Why does an $n \times n$ rotation matrix have $\frac{1}{2}n(n-1)$ undetermined parameters? Consider an orthogonal transformation between Cartesian coordinate systems in $n$-dimensional space. The $n \times n$ rotation matrix $$R = \left(a_{ij}\right)$$ has $n^2$ entries. These are not independent; they are related by the orthogonality conditions $$a_{ik}a_{jk} = \delta_{ij},$$ which are $\frac{1}{2}\!\!\left(n^2 + n\right)$ independent equations in the $a_{ij}$. Thus, $$n^2 - \frac{1}{2}\!\!\left(n^2 + n\right) = \frac{1}{2}n\left(n-1 \right)$$ of the $a_{ij}$ are left undetermined. Why is the last step justified? If there are $n$ independent equations, is it always possible to solve them for $n$ unknowns (even if the equations involve the sums of quadratic terms, as they do here)? If so, why? Under what general conditions is it possible to solve a system of $n$ independent equations for $n$ unknowns?
If you want something a bit more concrete, pick one row at a time. The first row is any norm 1 vector, so that is $n-1$ parameters. The second row is any norm 1 vector perpendicular to the first row, so that is $n-2$ parameters. The $k$th row is any norm 1 vector perpendicular to the first $k-1$ rows, so that is $n-k$ parameters. You end up with $(n-1) + (n-2) + \cdots + 2 + 1 + 0 = \frac12 n(n-1)$ parameters.
{ "language": "en", "url": "https://math.stackexchange.com/questions/808195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why are integrals called integrals? What is the historical background for this term? I cannot quite see what is integral about an integral, even if we go back to the viewing it as the area under a curve. It seems to me a strange choice of word.
"I cannot quite see what is *integral* about an integral" From your statement above, it appears you are thinking of an alternate meaning of the word "integral." Specifically, A is integral to B if it is a necessary component of B (e.g., "this scene is integral to the plot"). But that is not how it is used in mathematics. Think instead of integration in society (as contrasted with segregation) - to bring pieces together into a whole. An integral is a mathematical instance of integration in this sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/808285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "43", "answer_count": 6, "answer_id": 0 }
Is there a standard recurrence relation to solve this? I have an infinite supply of $m\times 1$ and $1\times m$ bricks. I have to find the number of ways I can arrange these bricks to construct a wall of dimensions $m\times n$. My problem is how to approach the question. Is there a recurrence relation describing the problem?
Hint. A recurrence relation would be a good idea.

* Place an $m\times1$ brick along the side of length $m$. What do you now have to do to complete the construction of the wall?
* Are there any other ways that you could have started?

Good luck!
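For readers who follow the hint through (spoiler): under the natural reading of the brick orientations, each $m\times n$ wall ends either in one vertical $m\times1$ brick or in a block of $m$ columns filled by horizontal $1\times m$ bricks, suggesting $f(n)=f(n-1)+f(n-m)$. A sketch implementing that recurrence (an assumption of mine, not stated in the hint):

```python
def wall_tilings(m, n):
    """Ways to tile an m x n wall with m x 1 and 1 x m bricks:
    f(n) = f(n-1) + f(n-m), with f(n) = 1 for 0 <= n < m."""
    f = [1] * m + [0] * max(0, n - m + 1)
    for i in range(m, n + 1):
        f[i] = f[i - 1] + f[i - m]
    return f[n]

print([wall_tilings(2, n) for n in range(1, 9)])   # Fibonacci numbers for m = 2
```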
{ "language": "en", "url": "https://math.stackexchange.com/questions/808359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Confusion over Matrix rotation I want to make a function in C++ that accepts an angle 'a' and a vector 'v' as arguments and returns a matrix. 'a' should represent the amount that is rotated around vector 'v', an arbitrary axis, and the matrix returned should contain values that make the rotation around 'v' by that many degrees possible. However, I have no idea how the matrices should end up looking, or if I do, how and why the values are assigned. Here's an example. Values that are passed in: a = 120, v = vector(1,1,1). Magic happens. Matrix data returned:

{ 0, 0, 1, 0
  1, 0, 0, 0
  0, 1, 0, 0
  0, 0, 0, 1 }

So the issue is, why are the 1's and 0's placed in such ways? What is the relationship between that specific matrix and a matrix that contains cosines, sines, $-$sines, or $1-\cos$? As far as I know, they're all rotation matrices. Thanks in advance.
See Quaternion-derived rotation matrix here $a=(a_x,a_y,a_z)$ is the axis, $c=\cos \theta$, $s=\sin \theta$ where $\theta$ is the angle of rotation.
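Equivalently, one can build the matrix directly from Rodrigues' rotation formula $R = cI + s[u]_\times + (1-c)\,uu^\top$ for a unit axis $u$. A sketch (my own code); note that for $\theta = 120^\circ$ about $(1,1,1)$ it reproduces exactly the upper-left $3\times3$ block of the matrix in the question:

```python
import numpy as np

def axis_angle_matrix(axis, theta):
    """3x3 rotation by theta (radians) about the given axis (Rodrigues)."""
    x, y, z = np.asarray(axis, float) / np.linalg.norm(axis)
    c, s = np.cos(theta), np.sin(theta)
    C = 1.0 - c
    return np.array([
        [c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
        [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
        [z*x*C - y*s, z*y*C + x*s, c + z*z*C],
    ])

print(np.round(axis_angle_matrix([1, 1, 1], np.radians(120)), 6))
# [[0. 0. 1.]
#  [1. 0. 0.]
#  [0. 1. 0.]]
```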
{ "language": "en", "url": "https://math.stackexchange.com/questions/808444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Degrees of maps in algebraic topology Please can I have some tips on how to construct maps between topological spaces of a given degree? For example, how would you go about building a map of degree $3$ from $\mathbb{CP}^1\times\mathbb{CP}^2 \to \mathbb{CP}^3$? Or a map from $S^2\times S^2 \to \mathbb{CP}^2$ of even degree? I don't know where to start. Are there any particular techniques that are useful?
In general it is hard to write down maps of a given degree, or even to determine whether such maps exist. I don't know if there is a map $\mathbb{CP}^1\times\mathbb{CP}^2 \to \mathbb{CP}^3$ of degree three, but there are maps $S^2\times S^2 \to \mathbb{CP}^2$ of even degree. In fact, every map $S^2\times S^2 \to \mathbb{CP}^2$ has even degree. To see this, recall that $H^*(S^2\times S^2; \mathbb{Z}) \cong \mathbb{Z}[\alpha, \beta]/(\alpha^2, \beta^2)$ and $H^*(\mathbb{CP}^2; \mathbb{Z}) \cong \mathbb{Z}[\omega]/(\omega^3)$. Consider a map $f : S^2\times S^2 \to \mathbb{CP}^2$. Note that $f^*(\omega^2) = (f^*\omega)^2$ and since $f^*\omega \in H^2(S^2\times S^2; \mathbb{Z}) \cong \mathbb{Z}\alpha\oplus\mathbb{Z}\beta$, $f^*\omega = x\alpha + y\beta$ for some $x, y \in \mathbb{Z}$. Therefore $$f^*(\omega^2) = (f^*\omega)^2 = (x\alpha + y\beta)^2 = 2xy\alpha\beta.$$ As $\alpha\beta$ is a generator of $H^4(S^2\times S^2; \mathbb{Z})$, $\deg f = \pm 2xy$ which is even (the sign depends on the orientations).
{ "language": "en", "url": "https://math.stackexchange.com/questions/808538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
An equation that generates a beautiful or unique shape for motivating students in mathematics Could anyone here provide us with an equation that generates a beautiful or unique shape when we plot it? For example, this one is old but gold; I found this equation on the internet: $$ \large\color{blue}{ x^2+\left(\frac{5y}{4}-\sqrt{|x|}\right)^2=1}. $$ When I plot it on Wolfram Alpha, the output is the well-known heart-shaped curve. The reason why I post this question is not only for fun or for the sake of curiosity, but also to motivate my students and the kids around me to like and learn mathematics more enthusiastically, because motivating students to be enthusiastically receptive is one of the most important aspects of mathematics education. A good teacher should focus attention on the less interested students as well as the motivated ones. I have learnt from my $3$-year experience of teaching that good strategies for increasing students' motivation in mathematics are enticing the class with a "Gee-Whiz" mathematical result and using recreational subjects that consist of puzzles, games, paradoxes, experiments, and pictures/video animations. We all know, 'a picture is worth a thousand words'.
Here is a way to generate bunches of intriguing (most often periodic) curves, drawn by adding unit length complex numbers of the form $$e^{2\pi i m} \ \ \ \text{with} \ \ \ m:=\dfrac{n}{a}+\dfrac{n^2}{b}+\dfrac{n^3}{c}$$ for $0 \le n < abc$, where $a,b,c$ are fixed positive integers. Some of them are displayed below with the corresponding values of $a,b,c$. Please note that two same curves, like the hourglass-like shapes in positions 1 and 3, can sometimes be generated with different values of $a,b,c$. Here is the Matlab program that has generated these 25 curves:

clear all; close all;
set(gcf,'color','w'); axis equal off; hold on
for P=1:5
  for Q=1:5
    V=ceil(9*rand(1,3)); a=V(1); b=V(2); c=V(3); L=a*b*c;
    S=zeros(1,L+1);
    for n=0:L
      m=n/a+(n^2)/b+(n^3)/c;
      S(n+1)=exp(2*pi*i*m);
    end
    S=cumsum(S);
    M=mean(S); S=S-M; R=max(abs(S)); S=S/R;
    shi=3*(P+i*Q);
    plot(shi+S);
    text(real(shi),-1.5+imag(shi),num2str(V),'horizontalalignment','center');
  end
end

Remarks: 1) This idea comes from the explained logo one can find here: https://math.stackexchange.com/users/119775/david 2) About spirographs, one can use the following splendid simulation: https://nathanfriend.io/inspirograph/
{ "language": "en", "url": "https://math.stackexchange.com/questions/808650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 5, "answer_id": 2 }
$SL(2, \mathbb F_3)$ does not have a subgroup of order $12$ Using the characteristic polynomial I can prove that $SL(2, \mathbb F_3)$ does not have an element of order $12$, but how can I prove that $SL(2, \mathbb F_3)$ does not have a subgroup of order $12$?
Outline for a proof: With the characteristic polynomial, you can see that $A^3 = I$ for $A \neq I$ in $G = \operatorname{SL}(2,3)$ if and only if $tr(A) = -1$. Count that there are $8$ elements of order $3$ in $G$. A subgroup $H$ of order $12$ would have to contain every element of order $3$ (being of index $2$, $H$ is normal, and every element of order $3$ maps to the identity in $G/H \cong \Bbb Z/2$). Conclude $H \cong A_4$. On the other hand $H$ must contain $-I$ (by Cauchy $H$ contains an involution, and $-I$ is the only element of order $2$ in $G$), so $H$ has a subgroup of order $6$. This is a contradiction since $A_4$ has no subgroup of order $6$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/808721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Questions--Heat equation with $x>0,t>0$ I have the following problem: $$u_t=u_{xx}, \quad x>0, t>0$$ $$u(x=0,t)=0 , \quad t>0$$ $$u(x,t=0)=f(x), \quad x>0$$ The solution of the problem is: $$u(x,t)=\int_0^{+\infty} a(k) \sin(kx) e^{-k^2t} dk$$ $$u(x,0)=f(x)=\int_0^{+\infty} a(k) \sin(kx) dk$$ $$\sin(k'x) f(x)= \sin(k'x) \int_0^{+\infty} a(k) \sin(kx) dk \Rightarrow \int_{0}^{\infty}\sin(k'x) f(x) dx = \int_0^{+\infty}\!\!\int_0^{+\infty} a(k) \sin(kx) \sin(k'x)\, dk\, dx$$ We know the integral: $$\int_{-\infty}^{+\infty} e^{-i(k-k')x}dx= 2 \pi \delta(k-k')$$ $$e^{-ikx} e^{ik'x}=\cos(kx) \cos(k'x)+\sin(kx) \sin(k'x)+ i(\cos(kx) \sin(k'x)-\sin(kx) \cos(k'x)) $$ Why do we know that $\int_{-\infty}^{+\infty}e^{-ikx} e^{ik'x}\,dx$ is real, so that $\int_{-\infty}^{+\infty}(\cos(kx) \sin(k'x)-\sin(kx) \cos(k'x))\,dx=0$? Also, why is $\int_{-\infty}^{+\infty} (\cos(kx) \cos(k'x)+\sin(kx) \sin(k'x))\,dx=2 \int_{-\infty}^{+\infty} \sin(kx) \sin(k'x)\, dx$?
Of course use separation of variables: Let $u(x,t)=X(x)T(t)$ , Then $X(x)T'(t)=X''(x)T(t)$ $\dfrac{T'(t)}{T(t)}=\dfrac{X''(x)}{X(x)}=-k^2$ $\begin{cases}\dfrac{T'(t)}{T(t)}=-k^2\\X''(x)+k^2X(x)=0\end{cases}$ $\begin{cases}T(t)=c_3(k)e^{-tk^2}\\X(x)=\begin{cases}c_1(k)\sin xk+c_2(k)\cos xk&\text{when}~k\neq0\\c_1x+c_2&\text{when}~k=0\end{cases}\end{cases}$ $\therefore u(x,t)=\int_0^\infty a(k)e^{-tk^2}\sin xk~dk+\int_0^\infty b(k)e^{-tk^2}\cos xk~dk$ $u(0,t)=0$ : $\int_0^\infty b(k)e^{-tk^2}~dk=0$ $b(k)=0$ $\therefore u(x,t)=\int_0^\infty a(k)e^{-tk^2}\sin xk~dk$ $u(x,0)=f(x)$ : $\int_0^\infty a(k)\sin xk~dk=f(x)$ $\mathcal{F}_{s,k\to x}\{a(k)\}=f(x)$ $a(k)=\mathcal{F}^{-1}_{s,x\to k}\{f(x)\}$ $\therefore u(x,t)=\int_0^\infty\mathcal{F}^{-1}_{s,x\to k}\{f(x)\}e^{-tk^2}\sin xk~dk$
{ "language": "en", "url": "https://math.stackexchange.com/questions/808828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Finding pure strategy and payoff matrix in game theory "A two person game begins with the random selection of an integer $x$ from the set $\{1,2,3\}$; each choice is equally likely. Then the two players, not knowing the value of $x$, simultaneously select integers from $\{1,2,3\}$. Each player's objective is to choose an integer $\geq x$. If P1 is successful, he wins; if both are successful, then the player who chose the smallest integer wins; in all other cases it is a draw. The winner gets one dollar from the loser." I have drawn my tree for this, but I am really struggling to understand how to find my pure strategy solution for each player and then the payoff matrix.
You already did the hardest part; now write 3 payoff matrices. The first matrix $M_1$ corresponds to the game if Nature chose 1: $$ \begin{array}{c|ccc} P1\backslash P2 & 1 & 2 & 3\\ \hline 1 & 0 & 1 & 1\\ 2 &-1 &0 &1 \\ 3 & -1 &-1 &0 \\ \end{array} $$ The second matrix $M_2$ corresponds to the game if Nature chose 2, and the third $M_3$ if Nature chose 3. Of course, none of these matrices makes sense on its own, since the players do not observe Nature's choice. But now compute the expected matrix $\frac{1}{3} M_1+\frac 13 M_2 + \frac 13 M_3$; using the expected matrix you can compute the Nash equilibrium (or equilibria) as usual.
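As a sketch of the last step in code (the matrices $M_2$, $M_3$ below encode my reading of the rules, namely that P2 succeeding alone is only a draw; the resulting zero-sum game is then solved for the row player by linear programming):

```python
import numpy as np
from scipy.optimize import linprog

M1 = np.array([[0, 1, 1], [-1, 0, 1], [-1, -1, 0]])
M2 = np.array([[0, 0, 0], [1, 0, 1], [1, -1, 0]])    # Nature chose 2 (my reading)
M3 = np.array([[0, 0, 0], [0, 0, 0], [1, 1, 0]])     # Nature chose 3 (my reading)
A = (M1 + M2 + M3) / 3                               # expected payoff to P1

# Row player: maximize v subject to (A^T p)_j >= v, sum(p) = 1, p >= 0.
n = A.shape[0]
res = linprog(np.r_[np.zeros(n), -1.0],              # variables (p, v), minimize -v
              A_ub=np.c_[-A.T, np.ones(n)], b_ub=np.zeros(n),
              A_eq=np.c_[np.ones((1, n)), 0.0], b_eq=[1.0],
              bounds=[(0, None)] * n + [(None, None)])
p, v = res.x[:n], res.x[n]
print(np.round(p, 3), round(v, 3))                   # optimal mix and game value
```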
{ "language": "en", "url": "https://math.stackexchange.com/questions/808887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\alpha$ is a plane curve if and only if all its osculating planes intersect at one point Let $\alpha$ be a regular curve. Prove that $\alpha$ is planar if and only if all the osculating planes intersect at one point. I know that $\alpha$ is planar iff the binormal vector is constant, or iff the osculating plane is the same at every point. However, I don't know how to prove this. NOTE: Originally, the statement said that "a curve is planar if and only if all the tangent planes intersect at one point". I understand that the tangent plane here means the osculating plane, right?
HINT: Say all the osculating planes pass through the origin. This means that $$\alpha(s)=\lambda(s)T(s)+\mu(s)N(s)$$ for some functions $\lambda$, $\mu$. Now differentiate and use Frenet.
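Not part of the original hint, but a hedged sketch of where it leads (assuming $s$ is arclength and the Frenet frame exists, i.e. $\kappa\neq0$): differentiating $\alpha=\lambda T+\mu N$ and using the Frenet equations $T'=\kappa N$, $N'=-\kappa T+\tau B$ gives $$T=\alpha'=(\lambda'-\kappa\mu)\,T+(\mu'+\kappa\lambda)\,N+\mu\tau\,B,$$ so $\lambda'-\kappa\mu=1$, $\mu'+\kappa\lambda=0$ and $\mu\tau=0$. If $\tau\neq0$ on some interval, then $\mu\equiv0$ there, so $\mu'=0$ forces $\kappa\lambda=0$, hence $\lambda\equiv0$, contradicting $\lambda'=1$. Therefore $\tau\equiv0$, which is exactly the condition for $\alpha$ to be planar. The converse is immediate: for a planar curve every osculating plane is the plane of the curve, so they all share any point of that plane.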
{ "language": "en", "url": "https://math.stackexchange.com/questions/809004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find a formula for $1 + 3 + 5 + \ldots +(2n - 1)$, for $n \ge 1$, and prove that your formula is correct. I think the formula is $n^2$. Define $p(n): 1 + 3 + 5 + \ldots +(2n − 1) = n^2$ Then $p(n + 1): 1 + 3 + 5 + \ldots +(2n − 1) + 2n = (n + 1)^2$ So $p(n + 1): n^2 + 2n = (n + 1)^2$ The equality above is incorrect, so either my formula is wrong, or my proof of the implication is wrong, or both. Can you elaborate? Thanks.
The issue here is that $p(n+1)$ is not the statement $$ 1+3+5+\cdots+(2n-1)+2n=(n+1)^2; $$ it is the statement $$ 1+3+5+\cdots+(2n-1)+(2n+1)=(n+1)^2. $$ Why? The left side of your formula is the sum of all odd numbers between $1$ and $2n-1$. So, when you replace $n$ by $n+1$, you get the sum of all odd numbers between $1$ and $2(n+1)-1=2n+1$.
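With the corrected term the induction closes, since $n^2+(2n+1)=(n+1)^2$. A quick brute-force confirmation of the formula (a sanity check of my own, not a proof):

```python
# Check that 1 + 3 + ... + (2n - 1) == n^2 for small n.
for n in range(1, 100):
    assert sum(range(1, 2 * n, 2)) == n ** 2
print("formula holds for n = 1..99")
```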
{ "language": "en", "url": "https://math.stackexchange.com/questions/809071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Algebra Manipulation Contest Math Problem The question was as follows: The equations $x^3+Ax+10=0$ and $x^3+Bx^2+50=0$ have two roots in common. Compute the product of these common roots. Because $x^3+Ax+10=0$ and $x^3+Bx^2+50=0$ it means that $x^3+Ax+10=x^3+Bx^2+50$ Take $x^3+Ax+10=x^3+Bx^2+50$ and remove $x^3$ from both sides, you get $Ax+10=Bx^2+50$ or $Bx^2-Ax+40=0$ By the quadratic equation, we get $\frac {A \pm \sqrt {(-A)^2 - 4*40B}}{2B}=\frac {A \pm \sqrt {A^2 - 160B}}{2B}$ This gives us two answers: $\frac {A + \sqrt {A^2 - 160B}}{2B}$ and $\frac {A - \sqrt {A^2 - 160B}}{2B}$ $\frac {A + \sqrt {A^2 - 160B}}{2B} * \frac {A - \sqrt {A^2 - 160B}}{2B}=\frac {A^2 - {A^2 - 160B}}{4B^2}$ This simplifies as $\frac {160B}{4B^2}=\frac{40}{B}$ $\frac{40}{B}$ is an answer, but in the solutions, they expected an integer answer. Where did I go wrong?
Hint: The common roots must be both roots of $$- (x^3 + Ax +10 ) + (x^3 + Bx^2 + 50) = Bx^2 - Ax + 40 $$ Let this quadratic polynomial be denoted by $f(x)$. Hint: We have $$ f(x) ( \frac{1}{B} x + \frac{5}{4} ) = x^3 + Bx^2 + 50. $$ This gives $B^2 = 4A$ and $160=5AB$, so $5B^3 = 640 $. This gives $B = 4 \sqrt[3]{2} $, $ A = 4\sqrt[3]{4}$. This does not give me an integer answer for $ \frac{40}{B} = 5 \sqrt[3]{4}$, so perhaps they had an error?
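A small sympy sketch confirming the hint's conclusion; it simply takes $B$ from $5B^3=640$ and checks numerically that both roots of $Bx^2-Ax+40$ satisfy both cubics (the variable names are mine, not from the problem):

```python
import sympy as sp

x = sp.symbols('x')
B = sp.real_root(128, 3)            # 5*B**3 = 640  =>  B = 4*cbrt(2)
A = B**2 / 4                        # B**2 = 4*A    =>  A = 4*cbrt(4)
r1, r2 = sp.solve(B * x**2 - A * x + 40, x)

for r in (r1, r2):                  # both common roots satisfy both cubics
    assert abs(complex((r**3 + A * r + 10).evalf())) < 1e-9
    assert abs(complex((r**3 + B * r**2 + 50).evalf())) < 1e-9

# Product of the common roots: 40/B = 5*cbrt(4), about 7.937 (not an integer).
print(sp.simplify(r1 * r2), (r1 * r2).evalf())
```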
{ "language": "en", "url": "https://math.stackexchange.com/questions/809170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limit with a big exponentiation tower Find the value of the following limit: $$\huge\lim_{x\to\infty}e^{e^{e^{\biggl(x\,+\,e^{-\left(a+x+e^{\Large x}+e^{\Large e^x}\right)}\biggr)}}}-e^{e^{e^{x}}}$$ I don't even know how to start. (This problem was shared on Brilliant.org.) One of the ideas I tried was to take the natural log of the expression and reduce it to $\ln(a/b)$, then use L'Hopital's rule, but that led nowhere. I know the value of the limit is $e^{-a}$, but how can I prove it?
Let $ A=\exp\left(x+e^{-\left(a+x+e^x+e^{e^x}\right)}\right) $ and $B=\exp(x) $. Then we can easily conclude that $A/B$ tends to $1$, but a little more analysis allows us to infer that $A-B$ also tends to $0$. We have $$A-B=B\cdot\frac{\exp(\log A-\log B) - 1}{\log A-\log B} \cdot(\log A-\log B) $$ and the middle fraction tends to $1$ and hence the limit of $A-B$ is same as that of $$B(\log A-\log B) $$ ie $$e^x\cdot e^{-\left(a+x+e^x+e^{e^x}\right)}$$ and thus $A-B$ tends to $0$. It follows that $C/D\to 1$ where $C=e^A, D=e^B$. Applying same idea we note that limit of $C-D$ is same as that of $$D(\log C-\log D)=D(A-B) $$ which is same as that of $$DB(\log A-\log B) $$ ie $$e^{e^x} e^x\cdot e^{-\left(a+x+e^x+e^{e^x}\right)}$$ Thus we conclude that $C-D\to 0$. Next let $E=e^C, F=e^D$ then $E/F\to 1$ and limit of $E-F$ is same as that of $$F(\log E-\log F) =F(C-D) $$ which is same as that of $$FDB(\log A-\log B) $$ ie $$e^{e^{e^x}} e^{e^x} e^x\cdot e^{-\left(a+x+e^x+e^{e^x}\right)}$$ The desired limit is then $e^{-a} $.
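For what it's worth, the limit can be sanity-checked numerically. Here is a small mpmath sketch of my own, with $a=1$ and a moderate $x$; the working precision is chosen so that $e^{e^{e^x}}$, which has roughly $700$ digits at $x=2$, is still fully resolved:

```python
from mpmath import mp, mpf, exp, nstr

mp.dps = 800              # enough digits to subtract two ~10^703-sized numbers
a, x = mpf(1), mpf(2)
eps = exp(-(a + x + exp(x) + exp(exp(x))))
diff = exp(exp(exp(x + eps))) - exp(exp(exp(x)))

print(nstr(diff, 25))         # 0.3678794411714423...
print(nstr(exp(-a), 25))      # e^{-1} = 0.3678794411714423...
```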
{ "language": "en", "url": "https://math.stackexchange.com/questions/809234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 4, "answer_id": 3 }
On proving that events have nonempty intersection if the sum of their complements' probabilities is smaller than 1 Suppose for events $A_1, A_2,\ldots,A_n$ we have that: $$\sum\limits_{i=1}^n {\mathbb P}(A^{c}_i) < 1 $$ Does this imply: $$\bigcap_{i=1}^{n} A_i \neq \emptyset\ ? $$ I think it does, but I couldn't manage to prove it. Could anybody please give some hints? Thanks a lot!
Here's a hint: Start with n = 2. Suppose that $$\sum\limits_{i=1}^2 {\mathbb P}(A^{c}_i) = 1$$ We also know that $$P(A_1^c \cup A_2^c)=P(A_1^c)+P(A_2^c)-P(A_1^c \cap A_2^c) $$ Since probability is bounded at 1, it is clear that $P(A_1^c \cap A_2^c)$ must equal 0. By De Morgan's laws, we know that $$P(A_1^c \cap A_2^c) = P((A_1 \cup A_2)^c)$$ Since $$P((A_1 \cup A_2)^c) = 0$$ Then $$P(A_1 \cup A_2) = 1$$ From here, $$P(A_1 \cup A_2) = P(A_1) + P(A_2) - P(A_1 \cap A_2)$$ $$1 = P(A_1) + P(A_2) - P(A_1 \cap A_2)$$ $$1 = (1 - P(A_1^c)) + (1 - P(A_2^c)) - P(A_1 \cap A_2)$$ $$1 = 2 - [P(A_1^c) + P(A_2^c)] - P(A_1 \cap A_2)$$ From the last equation we can see that if, as we first conjectured, $P(A_1^c) + P(A_2^c) = 1$, then $P(A_1 \cap A_2) = 0$. Otherwise, in order for the equality to be satisfied, $P(A_1 \cap A_2)$ must be greater than 0. Now try to extend this to n > 2 and your proof will be complete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/809322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Computing eigenvalues with characteristic polynomial I have two questions about computing eigenvalues with the characteristic polynomial. * *Do eigenvalues exist if and only if I can factor the polynomial? For example, I know I can calculate the roots of $ t^2 - 3t + 3 $, but I would use the quadratic formula for that. *An exercise asks me to find the eigenvalues of a matrix $$A = \left( \begin{array}{cc} 1 & 2 \\ 3 & 2 \end{array}\right)$$ Computing $\det(A-tI) $ I get $ t^2 - 3t + 3 $. Computing $\det(tI -A) $ I get $ t^2 - 3t - 4 = (t-4)(t + 1) $ I didn't come up with the last solution, but I'm not even sure why it is correct to compute $\det(tI -A) $ instead of $\det(A-tI) $, and in which cases it is more convenient to do that.
Eigenvalues may be complex: if you calculate the characteristic polynomial and set it equal to zero to find the roots, then, just as a quadratic may have no real solutions, the characteristic polynomial may have complex roots. Take a rotation matrix as an example (see: Eigenvalues of a rotation).
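On the second part of the question: $\det(tI-A)=(-1)^n\det(A-tI)$, so for a $2\times2$ matrix both conventions give the same polynomial and always the same roots. The discrepancy in the question looks like an arithmetic slip, since $\det(A-tI)=(1-t)(2-t)-6=t^2-3t-4$ as well. A quick sympy check (my own sketch, not from the answer):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[1, 2], [3, 2]])
I = sp.eye(2)
print(sp.factor((A - t * I).det()))   # (t - 4)*(t + 1)
print(sp.factor((t * I - A).det()))   # same polynomial, since n = 2 is even
print(A.eigenvals())                  # {4: 1, -1: 1}
```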
{ "language": "en", "url": "https://math.stackexchange.com/questions/809410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Matrix multiplication: $X_{r \times c}$ and $Y_{c \times d}$ Matrix $X$ has $r$ rows and $c$ columns, and matrix $Y$ has $c$ rows and $d$ columns, where $r, c$, and $d$ are different. Which of the following must be false? * *The product $YX$ exists *The product of $XY$ exists and has $r$ rows and $d$ columns *The product $XY$ exists and has $c$ rows and $c$ columns The answer says only 2 is false, but isn't 2 the only correct choice?
It helps to visualize matrix multiplication (diagram omitted here; see the standard Wikimedia illustration): the number of columns of the first multiplicand has to match the number of rows of the second multiplicand. Looking at the three choices: * *The product YX exists This would require that $d$, the number of columns in Y, equals $r$, the number of rows in X. Since $r \ne d$, this is not the case. *The product of XY exists and has $r$ rows and $d$ columns Since $c = c$, the product exists. It has in fact $r$ rows and $d$ columns. *The product XY exists and has $c$ rows and $c$ columns Since $c = c$, the product exists. But it has $r$ rows and $d$ columns. Therefore, choice 2 is the only true one.
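A tiny numpy illustration of the shape rule (example sizes are my own choice):

```python
import numpy as np

r, c, d = 2, 3, 4
X, Y = np.ones((r, c)), np.ones((c, d))
print((X @ Y).shape)    # (2, 4): X @ Y exists, with r rows and d columns
# Y @ X would raise a ValueError, since Y has d columns but X has r rows.
```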
{ "language": "en", "url": "https://math.stackexchange.com/questions/809640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this quadratic congruence equation? Well, we have: $$n^2+n+2+5^{4n+1}\equiv0\pmod{13}$$ I'm a little bit confused; I think I can solve this using the remainders of $n^2$, $n$ and $5^{4n+1}$ modulo $13$. By the way, I have no idea about the Chinese Remainder Theorem, so no need to use it. Thanks in advance. Edit: $4 \le n \le 25$
It is easy to see that $$5^2\equiv -1 \pmod{13}$$ So, we have $$5^4\equiv 1 \pmod{13}$$ Therefore, for any $n$, we have $$5^{4n+1}\equiv 5\pmod{13}$$ So, the equation simplifies to $$n^2+n+7\equiv 0\pmod{13}$$ By Vieta, the two roots multiply to $7$ and sum to $-1$ modulo $13$, so we look for a factorization of one of $7, 20, 33, \ldots$ (the numbers $\equiv 7 \pmod{13}$). Looking at the factors of $20$, we notice that $2$ and $10$ sum to $12$, which is $\equiv -1 \pmod{13}$, so $n\equiv 2$ and $n\equiv 10 \pmod{13}$ are the desired solutions.
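Since everything is periodic mod 13, a brute-force check over one period settles it, and also handles the edit's range $4\le n\le 25$ (my own sketch):

```python
# Solutions over one period mod 13.
sols = [n for n in range(13) if (n * n + n + 2 + pow(5, 4 * n + 1, 13)) % 13 == 0]
print(sols)                                          # [2, 10]
print([n for n in range(4, 26) if n % 13 in sols])   # [10, 15, 23]
```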
{ "language": "en", "url": "https://math.stackexchange.com/questions/809774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the Koch curve homeomorphic to $[0,1]$? Henning Makholm has provided a nice proof that the limiting curve is a continuous function from $[0,1]$ to the plane. I was curious whether the function is a homeomorphism. A quick search gave me many sources mentioning that $[0,1]$ and the Koch curve are indeed homeomorphic (as a proof that they have the same topological dimension), yet without showing why they are homeomorphic. What would be the simplest way to show it?
The map from the interval onto the Koch curve is a continuous bijection from a compact space to a Hausdorff one. So it's closed, since closed subsets of compact spaces are compact, images of compact spaces are compact, and compact subspaces of Hausdorff spaces are closed, and thus a homeomorphism. To check this is indeed an injection from $I$ to its image, consider a picture of one stage in the construction (not reproduced here): each point on some curve $\gamma_i$ is drawn directly above the point on $\gamma_{i+1}$ mapped to by the same point in $I$, with consecutive marked points $A,\dots,F$ on $\gamma_{i+1}$ and $E$ the tip of the new triangle. One sees that only $D$ and $F$ are closer together on $\gamma_{i+1}$ than on $\gamma_i$. But even so they will never get closer than $3^{-i}$, the length of the segment containing just $E$ (at the very least no closer than half this). To see this, observe that only the tip of the new triangle lies directly above $F$'s segment, so that we can bound the distance between the whole segment $AE$ and the next triangle to the right (out of frame) by $3^{-i}$. And continuing the construction we can easily bound the image of $AE$ in the limit curve away from the out-of-frame triangle by $\frac{1}{2\cdot 3^{i}}$, since in later stages no point can move further than $\frac{1}{2\cdot 3^{i}}=\sum_k 3^{-i-1-k}$. Indeed at stage $j$ no point moves further than $\frac{\sqrt{3}}{2\cdot 3^{j}}\leq 3^{-j}$. So if $|\gamma_i(x)-\gamma_i(y)|>\delta$, then on the limit curve $|\gamma(x)-\gamma(y)|\geq \min(\delta,\frac{1}{2\cdot 3^i})$; in particular, $\gamma(x)\neq \gamma(y)$ and $\gamma$ is injective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/809873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Can $L^p$ norm convergence plus pointwise monotonicity imply pointwise convergence? Let $(f_n)_{n=1}^\infty$ be a sequence of measurable functions such that $\lim_{n\to\infty}\|f_n-f\|_p=0$. If for any $x\in \Omega$, $\{f_{n}(x)\}_{n=1}^\infty$ is a monotonic sequence, can we deduce that $f_n\to f$ almost everywhere?
For any $x\in\Omega$, the monotonic sequence $(f_n(x))$ has a limit, possibly equal to $-\infty$ or $+\infty$; so the sequence $(\vert f_n(x)-f(x)\vert)$ has a limit in $[0,\infty]$. Let $g$ be the measurable function (with values in $[0,\infty]$) defined by $g(x)=\lim_{n\to\infty} \vert f_n(x)-f(x)\vert^p$. By Fatou's lemma and since $\Vert f_n-f\Vert_p\to 0$, we have $$\int_\Omega g\, d\mu\leq \liminf_{n\to\infty}\int_\Omega \vert f_n-f\vert^pd\mu =0\, .$$ Since $g\geq 0$, it follows that $g(x)=0$ almost everywhere; in other words, $f_n(x)\to f(x)$ a.e.
{ "language": "en", "url": "https://math.stackexchange.com/questions/809949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Isn't it possible that $D_4$ has also a subgroup with $1$ element?? A consequence of the Lagrange theorem: Let $G$ a finite group and $H$ a subgroup of G. Then $|H| \mid |G|$. is that each subgroup $\neq <i_d>$ of $D_4$, which has $8$ elements , has either $2$ or $4$ elements.. But.... $1 \text{ divides also }8$..Isn't it possible that $D_4$ has also a subgroup with $1$ element??
$D_4$ has a subgroup with one element as does every group: the trivial group $\langle id\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/810032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find the Taylor series of $f(x)=\arctan x$. I want to find the Taylor series of $f(x)=\arctan x,\; x\in[-1,1],\;\xi=0$. This is what I have tried so far: $$f'(x)=\frac{1}{1+x^2}=\frac{1}{1-(-x^2)}=\sum_{n=0}^{\infty} (-x^2)^n.$$ How can I continue?
Thus, we have $$f(x)=\int f'(x)\,dx=\int\sum_{n\ge 0}(-1)^nx^{2n}\,dx\ =\\ =\ \sum_{n\ge 0}(-1)^n\frac{x^{2n+1}}{2n+1}\ +C$$ Then find $C$ by plugging in $x=0$. What will you get if you plug in $x=1$?
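Following the answer's prompts: $C=\arctan 0=0$, and plugging in $x=1$ (justified at the boundary by Abel's theorem) gives the Leibniz series $\frac{\pi}{4}=1-\frac13+\frac15-\cdots$. A tiny numeric check of the partial sums (my own sketch):

```python
import math

# Partial sums of sum (-1)^n x^(2n+1)/(2n+1) at x = 1 approach pi/4 (slowly).
s = sum((-1) ** n / (2 * n + 1) for n in range(100_000))
print(s, math.pi / 4)   # 0.78539..., agreeing to about 5 digits
```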
{ "language": "en", "url": "https://math.stackexchange.com/questions/810121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do I prove this? (Bartle, Introduction to Analysis, p. 275) Define $x_n = \frac{a(a+1)\cdots(a+n)b(b+1)\cdots(b+n)}{n!c(c+1)\cdots(c+n)}$. Show that $\sum x_n$ converges if $c>a+b$ and diverges if $c<a+b$. I was trying to apply Raabe's Test, and I found that $\lim n(|\frac{x_{n+1}}{x_n}| - 1) = a+b-c$. However, I think this does not answer the question. How do I prove this?
You have $$\frac{x_n}{x_{n-1}}=\frac{n^2+(a+b)n+ab}{n^2+cn},\tag{1}$$ hence if $a+b\geq c$ (with $a,b,c>0$) you have: $$ \frac{x_n}{x_{n-1}}\geq 1+\frac{ab}{n^2+cn}, \tag{2}$$ so: $$ x_n \geq x_0\prod_{k=1}^{n}\left(1+\frac{ab}{k^2+ck}\right),\tag{3}$$ and since the product on the right converges to a positive limit, $x_n$ cannot be infinitesimal, hence $\sum x_n$ cannot converge. On the other hand, if $a+b<c$ then $d=c-(a+b)>0$ and: $$ \frac{x_n}{x_{n-1}}=1+\frac{ab-dn}{n^2+cn}\ll\left(1-\frac{1}{n}\right)^d, $$ hence: $$ x_n \ll \frac{1}{n^{d}}$$ and, if $c>a+b+1$, the series $\sum x_n$ converges. Since the $\ll$-symbol in the above lines can be replaced by the $\sim$-symbol, I believe that the correct statement is: Given $x_n = \frac{(a)_{n+1}(b)_{n+1}}{n!\cdot(c)_{n+1}}$, $\sum x_n$ converges if $c>a+b+1$ and diverges if $c<a+b+1$. As a matter of fact, if we take $a=b=1$ and $c=\frac{5}{2}$, we have: $$ x_n = \frac{3\sqrt{\pi}}{4}\cdot \frac{(n+1)\Gamma(n+2)}{\Gamma(n+7/2)}\sim\frac{1}{\sqrt{n}},$$ hence $\sum x_n$ does not converge even though $c>a+b$, contradicting the claim as stated.
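A quick numeric illustration of the corrected threshold $c=a+b+1$ (my own sketch; it uses the ratio in $(1)$ to generate the terms stably, avoiding overflowing factorials):

```python
def partial_sum(N, a, b, c):
    xn = a * b / c                 # the n = 0 term: a*b / (0! * c)
    s = xn
    for n in range(1, N + 1):
        xn *= (n + a) * (n + b) / (n * (n + c))   # the ratio from (1)
        s += xn
    return s

for c in (2.5, 3.5):               # with a = b = 1 the threshold is c = 3
    print(c, partial_sum(10**4, 1, 1, c), partial_sum(10**5, 1, 1, c))
# c = 2.5: partial sums keep growing (divergence, terms ~ 1/sqrt(n))
# c = 3.5: partial sums stabilize (convergence, terms ~ 1/n^1.5)
```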
{ "language": "en", "url": "https://math.stackexchange.com/questions/810196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Constructing Galois extensions. On the Wikipedia page I found the following statement. A result of Emil Artin allows one to construct Galois extensions as follows: If $E$ is a given field, and $G$ is a finite group of automorphisms of $E$ with fixed field $F$, then $E$ over $F$ is a Galois extension. Could somebody please explain why the above statement is true? Any hint is welcome. Thanks in advance.
First we show that $E/F$ is finite, of degree less than or equal to $m = |G|$. Let $\alpha \in E$. Let $\alpha=\alpha_1,\dots,\alpha_r$ be the orbit of $\alpha$ under $G$, $r \leq m$. Then define $f(X) = \prod(X-\alpha_i)$. $f$ is separable by definition and is fixed by every element of $G$, since any element of $G$ simply permutes the roots. Since $F = E^G$, this means $f \in F[X]$, so in particular $\alpha$ is algebraic over $F$. Also, the minimal polynomial of $\alpha$ over $F$ divides $f$, so is separable. Thus we have shown that $E/F$ is both algebraic and separable, and that for all $\alpha \in E$, $[F(\alpha):F]\leq m$. Now let $\alpha$ be an element such that $[F(\alpha):F]$ is maximal. If $F(\alpha) \not = E$, take $\beta \in E\backslash F(\alpha)$. $E/F$ is separable, so by the primitive element theorem, $F(\alpha,\beta)= F(\gamma)$ for some $\gamma \in E$, with $[F(\gamma):F]>[F(\alpha):F]$. Contradiction. Now all that remains to be shown is that $G = Aut(E/F)$. Certainly $G \subset Aut(E/F)$. But $E/F$ is finite, so $|Aut(E/F)|\leq[E:F]\leq|G|\leq|Aut(E/F)|$. Thus we have equality throughout, so in particular $G = Aut(E/F)$. This is a great theorem, because it allows us to prove things like the fact that for an integral domain $R$, the field of rational functions $\operatorname{Frac}(R)(X_1,\dots,X_n)$ is finite and Galois over its subfield of symmetric functions (with Galois group the symmetric group on the $n$ variables). This is both hard to do otherwise, quite interesting, and useful in e.g. finding polynomials which have insoluble Galois groups. It also allows us to show that the characterisation of Galois extensions as those where $[E:F] = |Aut(E/F)|$ is equivalent to the definitions "$E/F$ is normal and separable" or "$E^{Aut(E/F)} = F$".
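A concrete instance (my own example, not from the original answer): take $E=\mathbb{Q}(\sqrt{2},\sqrt{3})$ and let $G$ be the four automorphisms flipping the signs of $\sqrt{2}$ and $\sqrt{3}$ independently. The fixed field is $\mathbb{Q}$, so Artin's result gives at once that $E/\mathbb{Q}$ is Galois of degree $4$ with group $G\cong\mathbb{Z}/2\times\mathbb{Z}/2$.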
{ "language": "en", "url": "https://math.stackexchange.com/questions/810294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How come the function and the inverse of the function are the same? What is the inverse of the function: $$f(x)=\frac{x+2}{5x-1}$$ ? Answer: $$f^{-1}(x)=\frac{x+2}{5x-1}$$ Can one of you explain how the inverse is the same exact thing as the original equation?
The inverse is not in general "the same exact thing as the original equation". Generally, $f(x)\ne f^{-1}(x)$, but this is not always true. For example, consider the function $f(x) = -x$. This function is just the function that negates its input. Of course, if your negate your input twice, you get the original input. Put another way, to reverse the operation of negating your input, you simply negate it again. It just happens to be the case that your function satisfies the same property. Namely, $$f(f(x)) = \frac{f(x)+2}{5f(x)-1} = \frac{\frac{x+2}{5x-1}+2}{5\frac{x+2}{5x-1} - 1} = x$$ and hence $f(x)$ is its own inverse. (ie, to reverse $f(x)$, simply apply it again!)
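A one-line symbolic check that this particular $f$ is an involution (a sympy sketch of my own):

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 2) / (5 * x - 1)
print(sp.simplify(f.subs(x, f)))   # x, so f is its own inverse
```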
{ "language": "en", "url": "https://math.stackexchange.com/questions/810394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
optimal way to approximate second derivative Suppose there is a function $f: \mathbb R\to \mathbb R$ and that we only know $f(0),f(h),f'(h),f(2h)$ for some $h>0$, and we can't know the value of $f$ with $100\%$ accuracy at any other point. What is the optimal way of approximating $f''(0)$ with the given data? I'd say that $f''(0)=\frac{f'(h)-f'(0)}{h}+O(h)$ and $f'(0)=\frac{f(h)-f(0)}{h}+O(h)$, therefore we get $$f''(0)=\frac{f'(h)-\frac{f(h)-f(0)}{h}}{h}+O(h)$$ But that can't be the optimal way, since we know $f(2h)$ and I didn't use it at all. Could someone shed some light?
Let's say that $$y_k=\{f(0),f(h),f'(h),f(2h)\}$$ are given and $$x_k=\{f(0),f'(0),f''(0),f'''(0)\}$$ are unknown. Taylor's theorem gives a way of writing each $y_k$ as a linear combination of $x_j$'s, dropping all the terms starting with $f^{(4)}(0)$. For example: $$f(2h)=f(0)+f'(0)(2h)+\frac12f''(0)(2h)^2+\frac16f'''(0)(2h)^3. $$ Conversely, let $y=Ax$ be the linear equations relating $y$'s and $x$'s. These are four linear equations in four unknowns. To find the solution $x_2=f''(0)$ in terms of $y$'s just solve the system of linear equations for $x$. To find how accurate the approximation is, compute the Taylor expansion of $y$'s to a higher order and substitute into the expression for $x_2$. The answer will always have the form $$ f''(0) = \alpha_1 y_1+\cdots+\alpha_4 y_4 + \Theta(h^m), $$ where $m$ is the order of the approximation. This approximation is "best" in the sense that it is the highest-order approximation possible (in this case $x_2 = f''(0) + O(h^2)$) with the given data. This is also the way all the standard finite-difference formulas are derived.
{ "language": "en", "url": "https://math.stackexchange.com/questions/810454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
translate sentences into logic I would like to translate the English sentence below into a predicate logic formula: "The parents of a green dragon are green." Using predicates dragon, childOf and green, how would I go about this? I understand that it may help to work the sentence into something that looks like logic, but I am getting stuck on how to represent "parents", as it is not a predicate. Do either of these translations help me? Are they correct interpretations of the original sentence? If a dragon is the child of green parents then it is green. All dragons who are children of green parents are green. ∀(X) . dragon(X) ∧ childOf(X) ...? Please help. Thank you in advance.
Being a parent is a relation. Let $D$ be the predicate of being a dragon, $G$ the predicate of being green, and $P$ the "is a parent of" relation. Then we have: $\forall x\forall y\,\big((G(x)\wedge D(x)\wedge P(y,x))\implies G(y)\big)$ (note that the implication sits inside the scope of both quantifiers, so $y$ is not left free).
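To see the quantifier scoping in action, here is a toy finite-model check (entirely my own construction, with a hypothetical three-element universe):

```python
# P(y, x) means "y is a parent of x".
universe = {'a', 'b', 'c'}
dragon = {'a'}
green = {'a', 'b', 'c'}
parent_of = {('b', 'a'), ('c', 'a')}

holds = all(not (x in green and x in dragon and (y, x) in parent_of) or y in green
            for x in universe for y in universe)
print(holds)   # True: both parents of the green dragon 'a' are green
```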
{ "language": "en", "url": "https://math.stackexchange.com/questions/810541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Partial fractions on $(cx^2+dx+e)^n$ If I have $$\frac{ax+b}{(cx^2+dx+e)^n}$$ with real coefficients and $(cx^2+dx+e)$ has complex roots, what does $$\frac{ax+b}{[c(x-\alpha)(x-\alpha^*)]^n}$$ turn into, in terms of partial fractions?
Based on differentiating this answer w.r.t $x$, you have: $$ \frac{1}{ \left( x-\mu \right) ^ {1+n} \left( x-\nu \right) ^ {1+n} }=\sum _{m=0}^{n}{2\,n-m\choose n} \left( -\nu+\mu \right) ^{m-1-2\,n} \left( {\frac { \left( -1 \right) ^{n+m}}{ \left( -x+\nu \right) ^{m+1}}}-{\frac { \left( -1 \right) ^{n}}{ \left( -x+\mu \right) ^{m+1}}} \right) $$ which, if you add up terms backwards $(m\rightarrow n-m)$, is equivalent to: $$\sum _{m=0}^{n}{n+m\choose n} \frac{1}{\left( -\nu+\mu \right) ^{1+n+m}}\left( {\frac { \left( -1 \right) ^{-m}}{ \left( -x+\nu \right) ^{n-m+1}}}-{\frac { \left( -1 \right) ^{n}}{ \left( -x+\mu \right) ^{n-m+1}}} \right) $$ It is simple to multiply by the numerator in your question if you so wish.
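If you just want the expansion for a concrete quadratic rather than the general closed form, sympy's apart should be able to produce it over $\mathbb{Q}(i)$ via the extension option (a sketch with my own example, $n=2$ over $(x^2+1)^2$):

```python
import sympy as sp

x = sp.symbols('x')
expr = (3 * x + 2) / (x**2 + 1)**2          # roots are the conjugate pair ±i
print(sp.apart(expr, x, extension=sp.I))    # terms in 1/(x - I), 1/(x + I)
                                            # and their squares
```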
{ "language": "en", "url": "https://math.stackexchange.com/questions/810667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Arrangements of children Can somebody please double check my work? $n$ children must be arranged in a line. $k$ pairs of children want to be next to each other, and each member of a pair will be unhappy if they are not next to each other. The other children don't care where they are (i.e., they will be happy anywhere). If all the arrangements are equally likely, what is the probability that all the children will be happy? There are $(n - k)$ bins in which a pair or an unpaired kid can be placed (considering each as a distinguishable "atom"). So there are $(n - k)!$ ways to arrange the pairs and kids. Each pair can be arranged in $2$ ways, so there are $2^k$ ways to arrange all the pairs (considering them as "compound" entities). So there are $2^k (n - k)!$ arrangements which make the kids happy, so $p = \frac{2^k}{P(n,k)}$ I am asking for help because either my notes were wrong when I made them or I am wrong now. I'm not confident that I can tell. (In particular, they differ by a factor of $k!$, so the answer in my notes is basically $\frac{2^k}{\binom{n}{k}}$.)
We need to choose where the leftmost of each pair of fussy children will sit. Write down $n-k$ stars, where the other $n-k$ children will sit, like this $$\ast\quad\ast\quad\ast\quad\ast\quad\ast\quad\ast\quad\ast\quad\ast\quad\ast\quad\ast$$ There are $n-k$ positions where these leftmost children can sit: In one of the $n-k-1$ gaps between stars, or at the left end. There are $\binom{n-k}{k}$ ways to choose these positions. These positions can be filled by children from the fussy pairs in $k!2^k$ ways. There is then only $1$ way to fill the positions to the right of the chosen positions. The rest of the positions can be filled with unfussy children in $(n-2k)!$ ways. So the required probability is $$\dfrac{\binom{n-k}{k}k!\,2^k(n-2k)!}{n!}.$$ The expression can be simplified in various ways. In particular, since $\binom{n-k}{k}=\frac{(n-k)!}{k!(n-2k)!}$, it simplifies to $\frac{(n-k)!2^k}{n!}$.
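Both this expression and the asker's $2^k(n-k)!/n!$ agree, and a brute force over all permutations confirms them for a small case (my own sketch):

```python
from itertools import permutations
from math import comb, factorial

n, k = 6, 2
pairs = [(0, 1), (2, 3)]            # children 0..5; two fussy pairs

good = sum(all(abs(p.index(a) - p.index(b)) == 1 for a, b in pairs)
           for p in permutations(range(n)))

print(good / factorial(n))                                      # 0.1333...
print(comb(n - k, k) * factorial(k) * 2**k * factorial(n - 2*k)
      / factorial(n))                                           # same value
print(2**k * factorial(n - k) / factorial(n))                   # same value
```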
{ "language": "en", "url": "https://math.stackexchange.com/questions/810747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Computing the coordinates of a Bezier Curve I just started messing with Bezier Curves over the past couple days and I'm trying to get some of the basics down. I have this problem. Consider a quadratic Bezier curve with control points (0, 0), (2, 2), and (4, 0). What are the coordinates of the curve at t = 0.3? How would I go about solving this? Would I just use the quadratic Bezier curve formula and go from there? If anyone could walk me through this and explain, that would be much appreciated.
The curve point $\mathbf{C}(t)$ at parameter value $t$ is given by the standard formula $$ \mathbf{C}(t) = (1-t)^2\mathbf{P}_0 + 2t(1-t)\mathbf{P}_1 + t^2\mathbf{P}_2 $$ In our case, we have $\mathbf{P}_0 = (0,0)$, $\mathbf{P}_1 = (2,2)$, $\mathbf{P}_2 = (4,0)$, and we're interested in the parameter value $t = 0.3$. Plugging all these into the formula, we get \begin{align*} \mathbf{C}(0.3) &= (0.7)^2(0,0) + 2(0.3)(0.7)(2,2) + (0.3)^2(4,0) \\ &= (0.49)(0,0) + (0.42)(2,2) + (0.09)(4,0) \\ &= (1.2, 0.84) \end{align*} Note that $0.49+0.42+0.09=1$. This is significant. You can get the same answer by using de Casteljau's algorithm. This says that $$ \mathbf{C}(t) = (1-t)\big[(1-t)\mathbf{P}_0 + t\mathbf{P}_1\big] + t\big[(1-t)\mathbf{P}_1 + t\mathbf{P}_2\big] $$ which has a nice geometric interpretation. Plugging $t=0.3$ into this, we get \begin{align*} \mathbf{C}(0.3) &= (0.7)\big[0.7\mathbf{P}_0 + 0.3\mathbf{P}_1\big] + (0.3)\big[0.7\mathbf{P}_1 + 0.3\mathbf{P}_2\big] \\ &= (0.7)(0.6,0.6) + (0.3)(2.6, 1.4) \\ &= (1.2, 0.84) \end{align*} You'll probably learn something if you draw a picture showing the original points $\mathbf{P}_0 = (0,0)$, $\mathbf{P}_1 = (2,2)$, $\mathbf{P}_2 = (4,0)$, the intermediate de Casteljau points $(0.6,0.6)$, and $(2.6, 1.4)$, and the final point $(1.2, 0.84)$.
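For completeness, here is a small de Casteljau implementation reproducing the computation (my own sketch; it works for any degree, not just quadratics):

```python
def lerp(p, q, t):
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def de_casteljau(points, t):
    # Repeatedly interpolate adjacent control points until one point remains.
    while len(points) > 1:
        points = [lerp(p, q, t) for p, q in zip(points, points[1:])]
    return points[0]

print(de_casteljau([(0, 0), (2, 2), (4, 0)], 0.3))   # (1.2, 0.84)
```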
{ "language": "en", "url": "https://math.stackexchange.com/questions/810926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }