On a group whose group of inner automorphisms is isomorphic to $S_{3}$. Let $\frac{G}{Z(G)}\cong S_{3}$, where $S_{3}$ is the permutation group on 3 letters and the center $Z(G)$ of $G$ is non-trivial. Does there exist an automorphism $\alpha$ of $G$ such that $\alpha(g)\neq g$ for all $g\in G-Z(G)$? Thanks.
Let $G=S_3\times \mathbb{Z}_5,$ then $Z(G)=Z(S_3)\times Z(\mathbb{Z}_5)=\{\omega\}\times\mathbb{Z}_5,$ where $\omega$ is the trivial permutation of $S_3.$ Also $$G/Z(G)\cong S_3,$$ and since $\operatorname{gcd}(5,6)=1$ we have $$\operatorname{Aut}(G)\cong S_3\times\mathbb{Z}_4$$ (in fact one can compute both of these groups explicitly, if needed). Let us choose a particular automorphism $\psi : S_3\to S_3$ such that $\psi(\rho)=\rho^2, \psi(\sigma)=\rho\sigma,$ where $\rho=(1\,2\,3), \sigma=(1\,2),$ and let $\varphi:\mathbb{Z}_5\to\mathbb{Z}_5$ be such that $\varphi(1)=2.$ Note that $\psi(\omega)=\omega, \psi(\rho^2\sigma)=\rho^2\sigma$ and $\varphi(0)=0$ are the only fixed points. Moreover, $\alpha=\psi\times\varphi$ is an automorphism of $G$ satisfying $$\alpha(g)\neq g$$ for any $g\in G\setminus\{(\omega, 0), (\rho^2\sigma, 0)\}.$ However, since any automorphism of $S_3$ has at least two fixed points, this construction cannot be pushed further. All the central extensions of $S_3$ are of the form $G\cong S_3\times A,$ and in general, $${\rm Aut}(S_3)\times{\rm Aut}(A)\subseteq{\rm Aut}(S_3\times A).$$ So I guess one can find a counterexample by picking an $A$ such that the above is a proper inclusion. But I cannot think of anything off the top of my head right now.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the limit of $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$ I have had big problems finding the limit of the sequence $x_1=0 , x_{n+1}=\frac{1}{1+x_n}$. So far I've only succeeded in proving that for $n\geq2$: $x_n>0\Rightarrow x_{n+1}>0$ (Hopefully that much is correct: It is true for $n=2$, and $x_{n+1}>0$ holds exactly when $\frac{1}{1+x_n}>0$, which leads to the inequality $x_n>-1$, which is true by the induction assumption that $x_n>0$.) On everything else I failed to come up with answers that make sense (such as proving that $x_{n+1}>x_n$ for all $n\geq1$). I'm new to recursive sequences, so it all seems like a world behind the mirror right now. I'd appreciate any help, thanks!
Here is a way to show that the sequence converges to the unique positive solution to $a=1/(1+a)$: Define $f(x)=1/(1+x)$. Then $f'(x)=-1/(1+x)^2$. For every $n$, the mean value theorem gives $$x_{n+1}-a=f(x_n)-f(a)=(x_n-a)f'(c_n)$$ for some $c_n$ between $x_n$ and $a$. Since $-1<f'(x)<0$ for all $x>0$, this shows that $\lvert x_n-a\rvert$ decreases with $n$. Moreover, we already have $f'(1/2)=-4/9$, and $f'$ is increasing, so $-4/9<f'(c_n)<0$ for all $n\ge2$. Thus $\lvert x_n-a\rvert$ decreases at least as fast as $(4/9)^n$, i.e., geometrically. In particular, $x_n-a\to0$. Addendum: To do this without differentiation, just rely on the computation $$f(x)-f(y)=\frac1{1+x}-\frac1{1+y}=\frac{y-x}{(1+x)(1+y)}$$ instead.
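For readers who want to see the geometric contraction numerically, here is a small Python sketch (not part of the original argument; the iteration count is an arbitrary choice). The limit is the positive root $a=(\sqrt5-1)/2$ of $a^2+a-1=0$.

```python
import math

# Numerical sketch: iterate x_{n+1} = 1/(1+x_n) from x_1 = 0 and watch the
# error |x_n - a| shrink by a factor below 4/9 at every step (for n >= 2).
def f(x):
    return 1.0 / (1.0 + x)

a = (math.sqrt(5) - 1) / 2   # positive solution of a = 1/(1+a)
x = 0.0                       # x_1 = 0
errors = []
for _ in range(30):
    x = f(x)
    errors.append(abs(x - a))

# Ratios of consecutive errors, skipping the first couple of iterates.
ratios = [errors[i + 1] / errors[i] for i in range(2, 20)]
print(max(ratios))  # stays below 4/9
```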
{ "language": "en", "url": "https://math.stackexchange.com/questions/235578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Show the convolution of a $C_c^\infty (\Bbb R^n)$ function with a $L^p(\Bbb R^n)$ function is in $C^\infty(\Bbb R^n)$, $1\le p\le\infty$ Let $f \in L^p\left(\Bbb R^n\right)$ and $g \in C_c^\infty \left(\Bbb R^n\right)$. Show $f \ast g \in C^\infty\left(\Bbb R^n\right)$ for $1 \le p \le \infty$. Let $x=(x_1,x_2,\ldots,x_n)$ and $y=(x_1+h,x_2,\ldots,x_n)$, $h \ne 0$. The first step is to show $$ \partial_{x_1} \left(f \ast g\right)= f \ast \left(\partial_{x_1}g\right), $$ through the dominated convergence theorem. Is it possible to bound $$ \int_{\Bbb R^n} \left|f(t)\right| \frac{\left|g(x-t)-g(y-t)\right|}{|x-y|}dt $$ with the maximal function of $g$? Or is resorting to the mean value theorem $$ g(x-t)-g(y-t)=\int_0^1 Jg\left(y-t+s(x-y)\right)ds \cdot (x-y) $$ the only way, where $Jg$ is the Jacobian matrix of $g$?
Here is a proof without using dominated convergence: Let $e_1 = (1,0,\ldots,0) \in \mathbb{R}^n$, $x \in \mathbb{R}^n$, $t \in \mathbb{R}$. Then $$(f \ast g)(x+t \cdot e_1)-(f \ast g)(x) = \int_{\mathbb{R}^n} (g(x+t \cdot e_1-y)-g(x-y)) \cdot f(y) \, dy \\ = \int_{\mathbb{R}^n} \int_0^t \partial_1 g(x+\tau \cdot e_1-y) \, d\tau f(y) \, dy \\ \stackrel{\text{Fubini}}{=} \int_0^t \int_{\mathbb{R}^n} \partial_1 g(x+\tau \cdot e_1-y) \cdot f(y) \, dy \, d\tau \\ = \int_0^t \underbrace{((\partial_1 g) \ast f)}_{\text{continuous}} (x+\tau \cdot e_1) \, d\tau$$ Thus (by the First Fundamental Theorem of Calculus): $t \mapsto (f \ast g)(x+t \cdot e_1)$ is differentiable and (for $t=0$) we obtain $\partial_1 (f \ast g) = f \ast (\partial_1 g)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Theorem Fourier Analysis The inner product of the two-dimensional sequences $f(x,y)$ and $g(x,y)$ is equal to the inner product of their Fourier transforms, that is: $$\sum_{x=-\infty}^{\infty}\sum_{y=-\infty}^{\infty}f(x,y)g^*(x,y)=\dfrac{1}{4\pi^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi}F(w_x,w_y)G^*(w_x,w_y)\,dw_x\,dw_y.$$ I tried using the inverse Fourier transform, then rearranging the integrals and using the Dirac delta function. But I don't know why the integrals have limits $(-\pi,\pi)$.
The set $\{e^{imx}e^{iny},m,n\in\Bbb Z\}$ forms a Hilbert basis of the space $H_1:=L^2((-\pi,\pi)^2)$ with the canonical inner product. Denote by $e_{m,n}$ the sequence whose terms are all $0$, except the $(m,n)$-th, which is $1$. Then $\{e_{m,n},m,n\in\Bbb Z\}$ forms a Hilbert basis for $H_2:=\ell^2(\Bbb Z^2)$. The two mentioned Hilbert spaces are isometric (say $\iota\colon H_1\to H_2$, with $\iota(e^{imx}e^{iny})=e_{m,n}$), so for each $x\in H_1$, $\lVert \iota(x)\rVert_{H_2}^2=\lVert x\rVert_{H_1}^2$. Such an equality is true for $x\pm iy$ and $x\pm y$, which gives $$\langle \iota(x),\iota(y)\rangle_{H_2}=\langle x,y\rangle_{H_1},$$ which is what was wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to invert this function? I need to invert this function: $$ y=\frac{\ln(x)}{\ln(x-1)}+1 $$ The domain is real (for $x>1$ and $x\neq 2$). Why can't we just divide it like this: $$ y=\ln(x-(x-1))+1 $$ and then it's: $$ y=\ln(1)+1 $$ so it seems wrong. Where did I make the mistake?
In general, $\dfrac{\ln a}{\ln b}\ne \ln(a-b)$. Remarks: $1.$ The false simplification was probably motivated by $\ln\left(\frac{a}{b}\right)=\ln a-\ln b$, which is true for positive $a$ and $b$. $2.$ (added) If $x\ne 1$, then the equation can be manipulated to $y\ln(x-1)=\ln x+\ln(x-1)$. We recognize $y\ln(x-1)$ as the logarithm of $(x-1)^y$. So we can rewrite our equation as $(x-1)^y=x(x-1)$, which, since $x\ne 1$, can be simplified to $(x-1)^{y-1}=x$. It is likely that the solution can be written in terms of the Lambert $W$-function. A solution in terms of elementary functions seems highly unlikely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
If $\mathcal{A}$ generates the topology $\mathcal{T}$ and the $\sigma$-field $\mathcal{F}$, then is $\mathrm{Borel} (\mathcal{T}) =\mathcal{F}$? Let $\mathcal{A}$ be a collection of subsets of $\mathbb{X}$. Let $\mathcal{T}$ be the topology generated by the collection $\mathcal{A}$ and $\mathcal{F}$ the $\sigma$-field generated by $\mathcal{A}$. Denote by $\mathrm{Borel}(\mathcal{T})$ the $\sigma$-field of Borel sets of $\mathbb{X}$ with respect to the topology $\mathcal{T}$. Question 1: Is it true that $\mathrm{Borel} (\mathcal{T}) =\mathcal{F}$? Thanks. Edit: Question 2: Suppose now that $\mathbb{X}$ is countable and discrete with respect to some metric $d$ that generates the topology $\mathcal{T}$. Is it true that $\mathrm{Borel} (\mathcal{T}) =\mathcal{F}$? Question 3: If the answer to question 2 is no, is there still some condition (topological, metric, or a measurability condition) on $\mathbb{X}$ or $\mathcal{A}$ that is enough for the answer to be yes?
Let $X$ be uncountable and $\mathcal{A}$ be the set of all singletons. The topology generated contains all subsets of $X$, but the $\sigma$-algebra generated contains only those subsets that are countable or have a countable complement. So the answer is no. Edit: To the second and third question: It is enough that $\mathcal{A}$ is countable for the two $\sigma$-algebras to coincide. Since $\mathcal{A}\subseteq\mathcal{T}$, we always have $\sigma(\mathcal{A})\subseteq\sigma(\mathcal{T})=\mathrm{Borel} (\mathcal{T})$. Now, if $\mathcal{A}$ is countable, then the set of finite intersections of elements of $\mathcal{A}$ is countable too and forms a basis for the topology $\mathcal{T}$. So every open set is a countable union of these finite intersections and is therefore in $\sigma(\mathcal{A})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/235841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Where am I going wrong with this proof of the expected value of a geometric random variable? I know that the expected value of a geometrically distributed random variable is $\frac1p$, but how do we get there? This is what I have so far: $$\sum_{x=1}^\infty xP(X=x)$$ where $X$ is the number of trials up to and including the first success. Since it's geometric we have: $$\begin{align} \sum_{x=1}^\infty xp(1-p)^{x-1}\\ \frac{p}{1-p} \sum_{x=1}^\infty x(1-p)^x\\ .... \end{align}$$ How do we sum that?
An experiment has probability of success $p\gt 0$, and probability of failure $1-p$. We repeat the experiment until the first success. Let $X$ be the total number of trials. We want $E(X)$. Do the experiment once. So we have used $1$ trial. If this trial results in success (probability: $p$), then the expected number of further trials is $0$. If the first trial results in failure (probability: $1-p$), our experiment has been wasted, and the expected number of trials remains at $E(X)$. Thus $$E(X)=1+(p)(0)+(1-p)E(X).$$ If $E(X)$ exists, we can solve for $E(X)$ and obtain $E(X)=\dfrac{1}{p}$.
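A quick numerical sanity check of this answer (a sketch; $p=0.3$ is an arbitrary choice): the partial sums of the series from the question do approach $1/p$.

```python
# Partial sums of sum_{x>=1} x * p * (1-p)^(x-1) should approach 1/p.
p = 0.3
partial = sum(x * p * (1 - p) ** (x - 1) for x in range(1, 2000))
print(partial, 1 / p)  # both about 3.333
```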
{ "language": "en", "url": "https://math.stackexchange.com/questions/235927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
What's the proper way to calculate probability for a card game? I'm creating AI for a card game, and I run into problem calculating the probability of passing/failing the hand when AI needs to start the hand. Cards are A, K, Q, J, 10, 9, 8, 7 (with A being the strongest) and AI needs to play to not take the hand. Assuming there are 4 cards of the suit left in the game and one is in AI's hand, I need to calculate probability that one of the other players would take the hand. Here's an example: AI player has: J Other 2 players have: A, K, 7 If a single opponent has AK7 then AI would lose. However, if one of the players has A or K without 7, AI would survive. Now, looking at possible distribution, I have: P1 P2 AI --- --- --- AK7 loses AK 7 survives A7 K survives K7 A survives A 7K survives K 7A survives 7 KA survives AK7 loses Looking at this, it seems that there is 75% chance of survival. However, I skipped the permutations that mirror the ones from above. It should be the same, but somehow when I write them all down, it seems that chance is only 50%: P1 P2 AI --- --- --- AK7 loses A7K loses K7A loses KA7 loses 7AK loses 7KA loses AK 7 survives A7 K survives K7 A survives KA 7 survives 7A K survives 7K A survives A K7 survives A 7K survives K 7A survives K A7 survives 7 AK survives 7 KA survives AK7 loses A7K loses K7A loses KA7 loses 7AK loses 7KA loses 12 loses, 12 survivals = 50% chance. Obviously, it should be the same (shouldn't it?) and I'm missing something in one of the ways to calculate. Which one is correct?
It depends on how the cards are drawn, which you haven't described. For example, if each card is dealt to a random player, one at a time, then the first calculation is correct. On the other hand, if the cards are first shuffled, the deck is then split at a random position, and one player gets the bottom half while the other gets the top half, then the second calculation is correct. In particular, using the first method of dealing, the probability of player 1 getting no cards at all is $(1/2)^3 = 0.125$, while using the second method, it is $1/(3+1) = 0.25$.
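The two dealing models described above can be enumerated exhaustively; this sketch uses the card labels from the question and reproduces both survival probabilities.

```python
from itertools import product, permutations

cards = ["A", "K", "7"]

# Model 1: each card is dealt independently to player 1 or player 2.
deals = list(product([1, 2], repeat=3))             # 8 equally likely outcomes
losses = sum(1 for d in deals if len(set(d)) == 1)  # one player holds A, K and 7
print(losses / len(deals))      # 2/8 = 0.25, so AI survives with probability 3/4

# Model 2: shuffle the 3 cards, cut at a uniformly random position (0..3),
# player 1 takes the bottom part and player 2 the top part.
outcomes = [(order, cut) for order in permutations(cards) for cut in range(4)]
losses2 = sum(1 for order, cut in outcomes if cut in (0, 3))  # one player gets all
print(losses2 / len(outcomes))  # 12/24 = 0.5, so AI survives with probability 1/2
```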
{ "language": "en", "url": "https://math.stackexchange.com/questions/235986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $A \subseteq X\land B \subseteq Y$ are any sets, prove that $f(A\cap f^{-1}(B)) \subseteq f(A) \cap B$ If $A \subseteq X\land B \subseteq Y$ are any sets, and $f:X\to Y$, prove that $f(A\cap f^{-1}(B)) \subseteq f(A) \cap B$ Here is what I've done for the proof, I just need a little bit of guidance in finishing it up. Proof: Suppose $A \subseteq X \text{ and } B \subseteq Y$, Let $z \in f(A \cap f^{-1}(B))$ be arb. First I unpacked the goal statement: $ \Rightarrow z \in f(A) \cap B $ $ \Rightarrow z \in f(A) \land z \in B $ $ \Rightarrow \exists x \in A \text{ s.t } z = f(x) \land z \in B $ Next I unpacked the assumptions: $ \Rightarrow \exists x \in A \cap f^{-1}(B) \text{ s.t } z = f(x) $ $ \Rightarrow \exists x \in A \land x \in f^{-1}(B) \text{ s.t } z = f(x) $ $ \Rightarrow \exists (x \in A \text{ s.t } z = f(x)) \land (x \in Y \land x \in B \text{ s.t } z = f(x)) $ So I've proven that $ \exists x \in A \text{ s.t } z = f(x) $, but how do I go about proving that $x \in Y$ and $z \in B$?
In fact, we have equality. For the reverse inclusion: $$f(A) \cap B \subset f \left(A \cap f^{-1}(B)\right)$$ Let $y \in f(A) \cap B$; then $y \in f(A)$ and $y \in B$. Since $y \in f(A)$, we have: $$\exists x \in A, y = f(x)$$ We deduce that $f(x) = y \in B$, so $x \in f^{-1}(B)$. We have $x \in A$ and $x \in f^{-1}(B)$, hence: $$x \in A \cap f^{-1}(B)$$ Hence: $$y = f(x) \in f \left(A \cap f^{-1}(B)\right)$$ Finally: $$f(A) \cap B \subset f \left(A \cap f^{-1}(B)\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/236033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Summation over exponent $\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}3$ Why does $$\sum_{i=0}^k 4^i= \frac{4^{k+1}-1}{3}$$ Where does that 3 come from? Ok, from your answers I looked it up on the Wikipedia article on geometric progressions, but to derive the formula it says to multiply by $(1-r)$, not $(r-1)$; why is this case different?
The other answers are correct, this is the finite geometric series for $r=4$. You asked why this is so, and in fact, this is not too difficult to derive. Let $S$ be the sum we want to evaluate, which means $S=1+r+r^2+r^3+\ldots+r^n$ If we multiply by $r$, $rS=r+r^2+r^3+\ldots+r^n+r^{n+1}$ Now subtract $rS-S = S(r-1) = r^{n+1}-1$ $\Longrightarrow S= \dfrac{r^{n+1}-1}{r-1}$. So for your series, evaluating this for $r=4$ gives $S=\dfrac{4^{n+1}-1}{4-1}$. Thus the $3$ comes from the $r-1$ term in the denominator. Note: In the final step we are dividing by $r-1$, so this formula is no longer valid if $r=1$, since we cannot divide by zero. Comment response: In the way it's done on Wikipedia, the sum is $S=\dfrac{1-r^{n+1}}{1-r}$, and actually this is equivalent to what I wrote. Note that $(1-r^{n+1})=-(r^{n+1}-1)$ and the same goes for the denominator; $1-r = -(r-1)$. So since the sign of the terms change on both top and bottom, the fraction remains the same. Thus both expressions are equivalent.
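The identity in question is easy to check with exact integer arithmetic; this is just a sketch confirming the formula for small $k$.

```python
# Check sum_{i=0}^{k} 4^i == (4^(k+1) - 1) / 3 for small k, using exact integers.
for k in range(20):
    lhs = sum(4 ** i for i in range(k + 1))
    rhs = (4 ** (k + 1) - 1) // 3   # exact: 4^(k+1) - 1 is always divisible by 3
    assert lhs == rhs
print("identity holds for k = 0..19")
```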
{ "language": "en", "url": "https://math.stackexchange.com/questions/236106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 3 }
Example of a general random variable with finite mean but infinite variance Given a probability triple $(\Omega, \mathcal{F}, \mu)$ of Lebesgue measure $[0,1]$, find a random variable $X : \Omega \to \mathbb{R}$ such that the expected value $E(X)$ converges to a finite, positive value, but $E(X^2)$ diverges.
One answer is the Pareto distribution with parameters $\alpha , x_0$, which are both positive. The distribution is given by: $$f_X(x)= \begin{cases} \alpha\,\frac{x_0^\alpha}{x^{\alpha+1}} & x \ge x_0, \\ 0 & x < x_0. \end{cases}$$ Note that $E[X] = \infty$ for $\alpha \leq 1$ and is finite otherwise. The variance is not finite for $\alpha \in (1,2]$, hence it satisfies your question for $\alpha\in(1,2]$. In general, $E[X^n]= \infty$ for $n\geq \alpha$. EDIT: Clarified the answer as suggested.
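One concrete way to realize this on the probability triple from the question (my addition, not stated in the answer) is the inverse-CDF construction $X(\omega)=x_0\,\omega^{-1/\alpha}$ on $(0,1]$. The sketch below takes the illustrative choices $x_0=1$, $\alpha=3/2$ and uses a crude midpoint rule on $(\varepsilon,1)$: the truncated first moment stabilizes near $\alpha/(\alpha-1)=3$, while the truncated second moment keeps growing as $\varepsilon\to0$.

```python
alpha, x0 = 1.5, 1.0   # illustrative parameters with 1 < alpha < 2

def truncated_moment(n, eps, steps=200000):
    """Midpoint rule for int_eps^1 (x0 * u^(-1/alpha))^n du, i.e. E[X^n]
    with the smallest eps-portion of (0,1] cut away.  The rule is crude near
    the singularity at 0, but good enough to show the trend."""
    h = (1.0 - eps) / steps
    return sum((x0 * (eps + (i + 0.5) * h) ** (-1.0 / alpha)) ** n * h
               for i in range(steps))

m1 = [truncated_moment(1, eps) for eps in (1e-2, 1e-4, 1e-6)]
m2 = [truncated_moment(2, eps) for eps in (1e-2, 1e-4, 1e-6)]
print(m1)  # increases toward alpha/(alpha-1) = 3
print(m2)  # keeps growing: the second moment diverges
```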
{ "language": "en", "url": "https://math.stackexchange.com/questions/236181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Question on a proof of a sequence I have some questions 1) In the forward direction of the proof, it employs the inequality $|x_{k,i} - a_i| \leq (\sum_{j=1}^{n} |x_{k,j} - a_j|^2)^{\frac{1}{2}}$. What exactly is this inequality? 2) In the backwards direction they claim to use the inequality $\epsilon/n$. I thought that when we choose $\epsilon$ in our proofs, it shouldn't depend on $n$ because $n$ is always changing?
1) In the forward direction of the proof, it employs the inequality $|x_{k,i} - a_i| \leq (\sum_{j=1}^{n} |x_{k,j} - a_j|^2)^{\frac{1}{2}}$. What exactly is this inequality? Obviously, for any $i$ you have $$|x_{k,i}-a_i|^2 \le \sum_{j=1}^{n} |x_{k,j} - a_j|^2.$$ (You are working with a sum of non-negative numbers, one summand cannot be larger than the whole sum.) Therefore $$|x_{k,i}-a_i|=\sqrt{|x_{k,i}-a_i|^2} \le \sqrt{\sum_{j=1}^{n} |x_{k,j} - a_j|^2}.$$ 2) In the backwards direction they claim to use the inequality $\epsilon/n$. I thought that when we choose $\epsilon$ in our proofs, it shouldn't depend on $n$ because $n$ is always changing? In the whole proof $n$ is fixed - it is the dimension of $\mathbb R^n$; the variable used for indices in the sequence is $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Are there five complex numbers satisfying the following equalities? Can anyone help on the following question? Are there five complex numbers $z_{1}$, $z_{2}$ , $z_{3}$ , $z_{4}$ and $z_{5}$ with $\left|z_{1}\right|+\left|z_{2}\right|+\left|z_{3}\right|+\left|z_{4}\right|+\left|z_{5}\right|=1$ such that the smallest among $\left|z_{1}\right|+\left|z_{2}\right|-\left|z_{1}+z_{2}\right|$, $\left|z_{1}\right|+\left|z_{3}\right|-\left|z_{1}+z_{3}\right|$, $\left|z_{1}\right|+\left|z_{4}\right|-\left|z_{1}+z_{4}\right|$, $\left|z_{1}\right|+\left|z_{5}\right|-\left|z_{1}+z_{5}\right|$, $\left|z_{2}\right|+\left|z_{3}\right|-\left|z_{2}+z_{3}\right|$, $\left|z_{2}\right|+\left|z_{4}\right|-\left|z_{2}+z_{4}\right|$, $\left|z_{2}\right|+\left|z_{5}\right|-\left|z_{2}+z_{5}\right|$, $\left|z_{3}\right|+\left|z_{4}\right|-\left|z_{3}+z_{4}\right|$, $\left|z_{3}\right|+\left|z_{5}\right|-\left|z_{3}+z_{5}\right|$ and $\left|z_{4}\right|+\left|z_{5}\right|-\left|z_{4}+z_{5}\right|$is greater than $8/25$? Thanks!
I think the answer is no. Suppose we can find such $z_1,\dotsc,z_5$. Then all of them are non-zero. Let $\theta_{ij}$ be the angle between $z_i$ and $z_j$. For all $i,j$, we have \begin{align*} \frac{8}{25} & \leq |z_i| + |z_j| - |z_i + z_j| \\ & = \frac{(|z_i| + |z_j|)^2 - |z_i + z_j|^2}{|z_i| + |z_j| + |z_i + z_j|} \\ & \leq \frac{2|z_i||z_j|(1-\cos{\theta_{ij}})}{2|z_j|} \\ & = |z_i| (1-\cos{\theta_{ij}}).\end{align*} Reorder $z_1,\dotsc,z_5$ so that $|z_1|\leq \dotsb \leq |z_5|$. Now as $|z_1| \leq \frac{1}{5}$, the above gives $\theta_{1j} \geq \cos^{-1}\left(-\frac{3}{5}\right)=2.21$ rad. This means if we join each $z_i$ to the origin separately, there is a region of $4.42$ rad around $z_1$ without any other $z_i$'s. So $z_2,\dotsc,z_5$ must be within a region of at most $2\pi - 4.42 = 1.87$ rad. (I rounded up/down frequently so that the bounds actually become cruder at each stage.) In particular, there is some $i \neq 1$ such that $\theta_{ij} \leq \frac{1.87}{3} = 0.63$ radians, so that $|z_i| \geq 1.67$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to do \mathbf in handwriting? I have a bunch of vectors and matrices that are written using \mathbf, eg $\mathbf{w}$ and $\mathbf{W}$. What are standard ways of writing these in hand-writing? I know this is not exactly a maths question, but I'm not sure where else to ask it really?
Since it is hopeless to write bold-face characters by hand, people introduced symbols like $$ \vec{w}, \quad \underline{w}, $$ for vectors. Matrices are usually written in roman characters, but with upper case: $A$, $B$, $W$, etc. Mathematicians do not like strange symbols for vectors, and they tend to write simply $v$, $w$ as if they were writing scalar quantities. The context makes the difference. Physicists and engineers prefer arrows, as far as I know.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Proving a reduction formula for the antiderivative of $\cos^n(x)$ I want to show that for all $n\ge 2$, it holds that $$ \int \cos^n x\ dx = \frac{1}{n} \cos^{n-1} x \sin x + \frac{n-1}{n}\int \cos^{n-2} x\ dx. $$ I'm not even getting the result for the induction base $(n=2)$: Using integration by parts, I only get $$ \int \cos^2 x\ dx = \cos x \sin x + \int \sin^2 x\ dx. $$ I'm suspecting that I need to use some trigonometric identity here.
You only need that $$\sin^2 x+\cos^2 x=1$$ Let $$\varphi(n)=\int \cos^n x dx$$ Integrate by parts $$\begin{cases}\cos^{n-1} x =u\\ \cos x dx =dv\end{cases}$$ Then $$\begin{align}\varphi(n)&=\int \cos^n x dx\\ &=uv-\int v du\\&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \sin ^2x \, d x\end{align}$$ But $\sin^2 x=1-\cos^2 x$, so $$\begin{align}\varphi(n)&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \sin ^2x \, d x \\&=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x \left(1-\cos^2 x\right) d x \\ &=\cos^{n-1}x\sin x+\int (n-1)\cos^{n-2}x dx-\int (n-1)\cos^{n-2}x \cos^2 x d x \\&=\cos^{n-1}x\sin x+(n-1)\int \cos^{n-2}x dx-(n-1)\int \cos^{n }x dx \\&=\cos^{n-1}x\sin x+(n-1)\int \cos^{n-2}x dx -(n-1)\varphi(n)\end{align}$$ Now solve for $\varphi(n)$.
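The reduction formula can be checked numerically in its definite-integral form on $[0,T]$ (the boundary term at $0$ vanishes because $\sin 0=0$); this is a sketch with an illustrative choice of $n$ and $T$ and a simple midpoint rule.

```python
import math

def integral_cos_pow(n, T, steps=20000):
    """Midpoint rule for int_0^T cos(x)^n dx."""
    h = T / steps
    return sum(math.cos((i + 0.5) * h) ** n * h for i in range(steps))

# Definite form of the reduction formula:
# int_0^T cos^n = (1/n) cos^(n-1)(T) sin(T) + ((n-1)/n) int_0^T cos^(n-2)
n, T = 5, 1.2
lhs = integral_cos_pow(n, T)
rhs = (1 / n) * math.cos(T) ** (n - 1) * math.sin(T) \
    + (n - 1) / n * integral_cos_pow(n - 2, T)
print(abs(lhs - rhs))  # close to zero
```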
{ "language": "en", "url": "https://math.stackexchange.com/questions/236543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Are these charts on the circle compatibly oriented? I've tried a few methods but I can't seem to work this one out. Consider the charts $$f(s) = (\cos s, \sin s) \in \mathbb{R}^2$$ for $-\pi < s < \pi$ and $$g(t)=(\frac{2t}{t^2 + 1}, \frac{t^2 - 1}{t^2 + 1})$$ for $t \in \mathbb{R}$. Are these charts on the circle compatibly oriented?
We will use the identities $$ \sin(2 \theta) = \frac{2 \tan(\theta)}{1 + \tan^2(\theta)}, \;\;\; \cos(2 \theta) = \frac{1 - \tan^2(\theta)}{1 + \tan^2(\theta)}. $$ Making the change of variable $t = \tan(\theta)$, we define a chart by $$ \psi(\theta) = g(\tan(\theta)) = (\sin(2\theta), -\cos(2\theta)) : (-\frac{\pi}{2}, \frac{\pi}{2}) \rightarrow \mathbb{R}^2. $$ Drawing the situation, we see that both $f$ and $\psi$ trace the circle counter-clockwise as we go from lower values of $s$ (or $\theta$) to higher values. This implies that the charts are consistently oriented. To show this rigorously, write $$ \cos(s) = \sin(2\theta) = \cos(\frac{\pi}{2} - 2\theta). $$ Taking into account the domains of $\theta$ and $s$, we then have $$ 2\theta = \left\{ \begin{array}{lr} s + \frac{\pi}{2}, \;\;\; -\pi < s < \frac{\pi}{2}\\ s - \frac{3\pi}{2},\;\;\;\; \frac{\pi}{2} \leq s < \pi. \end{array} \right.$$ (Why is there a discontinuity in the description?) We see that the coordinate transformation from $\theta$ to $s$ has positive Jacobian, and as this is also true for the transformation from $t$ to $\theta$, the charts are indeed consistently oriented.
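The positivity of the transition map's derivative can also be checked by finite differences; in this sketch `s_of_t` is a hypothetical helper recovering the $f$-parameter $s\in(-\pi,\pi)$ of the point $g(t)$, and the only discontinuity is the $2\pi$ jump at $t=-1$, where $g(t)=(-1,0)$.

```python
import math

def g(t):
    return (2 * t / (t * t + 1), (t * t - 1) / (t * t + 1))

def s_of_t(t):
    """Transition map: the parameter s with f(s) = (cos s, sin s) = g(t)."""
    x, y = g(t)
    return math.atan2(y, x)

# Away from the jump at t = -1, the finite-difference derivative ds/dt
# should be strictly positive (consistent orientation).
h = 1e-6
samples = [t / 10 for t in range(-9, 60)]   # avoids t = -1
derivs = [(s_of_t(t + h) - s_of_t(t - h)) / (2 * h) for t in samples]
print(min(derivs))  # positive
```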
{ "language": "en", "url": "https://math.stackexchange.com/questions/236622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solutions of a first-order homogeneous system Question: Can one of the following functions be a solution of a first-order autonomous homogeneous system? (1) $x(t)=(3e^{t}+e^{-t},e^{2t})$ (2) $x(t)=(3e^{t}+e^{-t},e^{t})$ (3) $x(t)=(3e^{t}+e^{-t},t e^{t})$ (4) $x(t)=(3e^{t},t^2 e^{t})$ I know that every solution can be written as a linear combination of $t^j e^{\alpha t}$, therefore I would say that only (4) can be a solution. True?
It's a first-order system in two variables, so each component satisfies a second-order characteristic equation. The two characteristic roots show up as linear combinations of exponentials in the solution. It is also possible that the two roots are repeated, in which case you do not have two distinct exponentials; instead you get the exponential of the repeated root together with an extra factor of $t$. Possible solutions therefore look like $$\bigl(Ae^{at}+Be^{bt},\; Ce^{at}+De^{bt}\bigr) \quad\text{or}\quad \bigl((A+Bt)e^{at},\;(C+Dt)e^{at}\bigr).$$ Only (2) matches this form. (4) cannot be a solution because a $t^2$ factor would require a triple repeated root, which is not possible for a two-dimensional first-order system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/236672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence of $\sum_{n=1}^\infty \frac{a_n}{n}$ with $\lim(a_n)=0$. Is it true that if $(a_n)_{n=1}^\infty$ is any sequence of positive real numbers such that $$\lim_{n\to\infty}(a_n)=0$$ then, $$\sum_{n=1}^\infty \frac{a_n}{n}$$ converges? If yes, how to prove it?
No: take $a_n:=\frac 1{\log n}$ (for $n\geqslant 2$); then, comparing the sum of the decreasing function $t\mapsto\frac1{t\log t}$ with its integral, $$\sum_{j=2}^{N}\frac 1{j\log j}\geqslant \int_2^{N+1}\frac {dt}{t\log t}=\bigl[\log(\log t)\bigr]_2^{N+1}\xrightarrow[N\to\infty]{}\infty,$$ so the series diverges even though $a_n\to 0$.
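A numerical sketch of this counterexample: the partial sums keep growing like $\log\log N$, which is unbounded (although very slowly).

```python
import math

# Partial sums of sum 1/(j log j) grow like log log N -- no finite limit.
def partial_sum(N):
    return sum(1.0 / (j * math.log(j)) for j in range(2, N + 1))

for N in (10**3, 10**4, 10**5, 10**6):
    print(N, partial_sum(N), math.log(math.log(N)))
```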
{ "language": "en", "url": "https://math.stackexchange.com/questions/236776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How to show that two equivalence classes are either equal or have an empty intersection? For $x \in X$, let $[x]$ be the set $[x] = \{a \in X | \ x \sim a\}$. Show that given two elements $x,y \in X$, either a) $[x]=[y]$ or b) $[x] \cap [y] = \varnothing$. How I started it is, if $[x] \cap [y]$ is not empty, then $[x]=[y]$, but then I am kind of lost.
Once you have proved that $[x] \cap [y] \neq \varnothing$ implies $[x] = [y]$, you are done. To do so, just notice that if the intersection is not empty (say it contains $a$), then every element of $[x]$ is equivalent to $a$, and so is every element of $[y]$, so you get your result by transitivity ($[x] \subset [y]$ and $[y] \subset [x]$).
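The equal-or-disjoint dichotomy can be illustrated on a concrete equivalence relation (congruence mod 3 on a small finite set; the choice is purely illustrative).

```python
X = range(12)

def equiv(a, b):
    # A sample equivalence relation: congruence modulo 3.
    return a % 3 == b % 3

classes = [frozenset(a for a in X if equiv(x, a)) for x in X]

# Any two classes are either identical or disjoint.
for C in classes:
    for D in classes:
        assert C == D or C.isdisjoint(D)
print("every pair of classes is equal or disjoint")
```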
{ "language": "en", "url": "https://math.stackexchange.com/questions/236851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Functions that are compatible with equivalence relations over its domain and codomain Given a function $f : X \to Y$, an equivalence relation $\sim_X$ on $X$ and an equivalence relation $\sim_Y$ on $Y$, there is a notion of ``compatibility'' between $f$, $\sim_X$ and $\sim_Y$ if the following holds: $x_0 \mathbin{\sim_X} x_1 \implies f(x_0) \mathbin{\sim_Y} f(x_1)$. If it holds, we can define a quotient over $f$ as the function $f/(\sim_X, \sim_Y) : X/{\sim_X} \to Y/{\sim_Y}$ that maps each $E \in X/{\sim_X}$ to the unique element of $Y/{\sim_Y}$ that contains $f(E)$. Is this notion (of compatibility and quotient of functions) already well-studied, and is there a proper term for this ``compatibility''? Perhaps $f$ is some kind of homomorphism... In the special case where ${\sim_X} = {\sim_Y}$ and $X = Y$, we can say that $\sim_X$ is a congruence relation that is invariant under $f$. Update: Wikipedia calls such an $f$ a morphism from $\sim_X$ to $\sim_Y$. However, as I am using this concept together with other categories and morphisms, such terminology can be confusing. Is there a standard name for the category $\mathbf{C}$ of equivalence relations (or partitions, I suppose) and such morphisms between them, so I can state without ambiguity that $f \in \hom_\mathbf{C}({\sim_X}, {\sim_Y})$?
See, for example, page 118 of Bourbaki's "Theory of Sets Volume 1."
{ "language": "en", "url": "https://math.stackexchange.com/questions/236923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Could $\frac x0 = \pm\infty$? Possible Duplicate: Is it wrong to tell children that 1/0 = NaN is incorrect, and should be ∞? I remember that dividing by zero is frowned upon, because it is said that there is no real answer. With the concept of limits, going from the negative direction to zero would give $-\infty$, and going towards zero from the positive direction would give $+\infty$. This is partially the reason that $\frac x0 = $ undefined, even with using limits. But could $\frac x0$ be equal to $\pm\infty$? I suspect this is not the case, so please explain why this is incorrect.
I do not entirely agree with the answers posted so far. First, a comment on something in the question: One should not write $\dfrac x0 =\text{undefined}$. Rather, one should say that the value of the expression $\dfrac x0$ is undefined. This is not that "is" of equality; this is the "is" of predication. In some contexts, it makes sense to put a single $\infty$ at both ends of the real line $\mathbb R$, so that $\mathbb R \cup \{\infty\}$ is topologically a circle. That makes sense when dealing with either rational functions or trigonometric functions. It makes rational functions defined and continuous everywhere on $\mathbb R\cup\{\infty\}$, and it makes trigonometric functions such as the tangent defined and continuous everywhere on $\mathbb R$, with values in $\mathbb R\cup\{\infty\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Calculate a point on the line at a specific distance. I have two points which make a line $l$, say $(x_1,y_1)$ and $(x_2,y_2)$. I want a new point $(x_3,y_3)$ on the line $l$ at a distance $d$ from $(x_2,y_2)$, in the direction away from $(x_1,y_1)$. How should I do this in one or two equations?
Any point $p$ on your line can be written as $$ p = (x_1, y_1) + \lambda \cdot (x_2 - x_1, y_2 - y_1) $$ for some $\lambda \in \mathbb R$, as these points form a line on which $(x_1, y_1)$ (corresponding to $\lambda = 0$) and $(x_2, y_2)$ (with $\lambda = 1$) lie. A point lies on the same side of $(x_1, y_1)$ as $(x_2, y_2)$ when $\lambda > 0$, and "behind" $(x_2, y_2)$ if $\lambda > 1$. The distance of a point on $l$ to $(x_2, y_2)$ is given by \begin{align*} d^2 &= \bigl((1-\lambda)x_1 + \lambda x_2 - x_2\bigr)^2 + \bigl((1-\lambda)y_1 + \lambda y_2 - y_2\bigr)^2\\ &= (1-\lambda)^2(x_1 - x_2)^2 + (1-\lambda)^2(y_1 - y_2)^2\\ \iff d &= \left|1-\lambda\right| \cdot \bigl((x_1-x_2)^2 + (y_1-y_2)^2\bigr)^{1/2} \end{align*} So we want a $\lambda > 1$ (that is $|1-\lambda| = \lambda - 1$) at distance $d$, giving $$ \lambda = 1 + \frac d{\bigl((x_1-x_2)^2 + (y_1-y_2)^2\bigr)^{1/2}} $$
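The formula above packs into a one-line computation; `point_beyond` is a hypothetical helper name, and the example values are illustrative.

```python
import math

def point_beyond(p1, p2, d):
    """Return the point on the line through p1 and p2 lying at distance d
    from p2, on the side away from p1 (lambda = 1 + d / |p2 - p1|)."""
    (x1, y1), (x2, y2) = p1, p2
    length = math.hypot(x2 - x1, y2 - y1)
    lam = 1 + d / length
    return (x1 + lam * (x2 - x1), y1 + lam * (y2 - y1))

print(point_beyond((0, 0), (3, 4), 5))  # (6.0, 8.0)
```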
{ "language": "en", "url": "https://math.stackexchange.com/questions/237090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Existence of two solutions I am having a problem with the following exercise. I need to show that the equation $x^2 = \cos x$ has two solutions. Thank you in advance.
$f(x) = x^2 - \cos x$ is a continuous function. Since $f(0) = -1$ and $f(\frac{\pi}{2}) = \frac{\pi^2}{4}$, $f$ has at least one zero in the interval $(0, \pi/2)$. Its derivative $f'(x) = 2x + \sin x$ is strictly positive in the interval $(0,\pi/2)$ so $f$ is strictly increasing and we conclude that $f$ has exactly one root $f(x_0) = 0$ in the interval $(0,\pi/2)$. For $x \geq \pi/2$, $f(x) \geq \frac{\pi^2}{4} - 1 > 0$, so $f$ has no roots in $[\pi/2, +\infty)$. Since $f$ is symmetric with respect to the $y$-axis, meaning $f(x) = f(-x)$, $f$ has only two roots -- $x_0$ and $-x_0$.
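The intermediate-value step can also be checked numerically; a minimal bisection sketch (not part of the original answer) locates the root $x_0\in(0,\pi/2)$:

```python
import math

def f(x):
    return x * x - math.cos(x)

def bisect(lo, hi, tol=1e-12):
    # assumes f(lo) < 0 < f(hi), which holds for lo = 0, hi = pi/2 as shown above
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x0 = bisect(0.0, math.pi / 2)
```

By the symmetry $f(x)=f(-x)$, the other root is $-x_0$.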
{ "language": "en", "url": "https://math.stackexchange.com/questions/237159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How do I get the residue of the given function? I'm reading the solution of the integral $$\int\limits_{-\infty}^{\infty} dx\frac{e^{ax}}{1+e^x}$$ by the residue method. I understood everything except how to get the residue of $\frac{e^{az}}{1+e^z}$ (the book just states that the residue is $-e^{i\pi a}$). I know there is a simple pole at $z=i\pi$, and that is the point where I want the residue. Since it is a simple pole I tried using the formula $a_{-1}=\lim_{z\to z_0}f(z)(z-z_0)$ with the series expansion of the exponential function, and I got to this formula $$a_{-1}=-e^{i\pi a}\left[\frac{\left(1+\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}(z-i\pi)\right)^a}{\sum\limits_{n=1}^{\infty}\frac{(z-i\pi)^{n-1}}{n!}}\right]_{z=i\pi}$$ but I believe that's wrong, and I couldn't find my mistake or another way of solving it.
Let $g,h$ be holomorphic on a domain $\Omega$ and $z_0 \in \Omega$ such that $h(z_0)=0$, $h'(z_0) \not= 0$ and $g(z_0) \not= 0$. Then the equality $$\text{Res}_{z=z_0} \frac{g}{h} = \frac{g(z_0)}{h'(z_0)}$$ holds. In this example we obtain $$\text{Res}_{z=\imath \, \pi} f = \frac{e^{\imath \, \pi \cdot a}}{e^{\imath \, \pi}} = - e^{\imath \, \pi \cdot a}$$
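One can sanity-check this numerically, since for a simple pole $(z-z_0)f(z)\to \operatorname{Res}_{z=z_0}f$; the value $a=0.3$ below is an arbitrary choice for the check:

```python
import cmath, math

a = 0.3                          # arbitrary exponent for the numerical check
z0 = 1j * math.pi                # simple pole of e^{az} / (1 + e^z)

def f(z):
    return cmath.exp(a * z) / (1 + cmath.exp(z))

eps = 1e-6
numeric = eps * f(z0 + eps)      # approximates (z - z0) f(z) just off the pole
exact = -cmath.exp(1j * math.pi * a)
```

The two values agree to roughly the size of `eps`, as expected for a simple pole.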
{ "language": "en", "url": "https://math.stackexchange.com/questions/237236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is the set of all real numbers uncountable? I understand Cantor's diagonal argument, but it just doesn't seem to jive for some reason. Lets say I assign the following numbers ad infinitum... * *$1\to 0.1$ *$2\to 0.2$ *$3\to 0.3$ ... *$10\to 0.10$ *$11\to 0.11$ and so on... How come there's supposedly at least one more real number than you can map to a member of $\mathbb{N}$?
Cantor's diagonal argument is a trick to show that given any list of reals, a real can be found that is not in the list. First a few properties: * *Two decimal expansions represent different numbers if they differ in some digit — provided neither expansion ends in an infinite tail of $9$s or $0$s, the one case where two different expansions name the same number. *If a number has the previous property with respect to every number in a set, it is not in the set. Cantor's diagonal is a clever way of finding a number which satisfies these properties: the diagonal number is altered so that it doesn't share its first digit with the first number of the list, nor its second digit with the second, and so on (choosing the new digits among, say, $5$ and $6$ avoids the $9$s/$0$s caveat). Thus the number differs from every entry of the list. Note also that your proposed list contains only terminating decimals, so a number like $1/3 = 0.333\ldots$ never appears on it at all. This is why Cantor's diagonal argument proves that the reals are uncountable.
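As an illustration (not part of the original answer), the diagonal construction is easy to run on finite prefixes of a list; digits $5$ and $6$ are used so the result never ends in all $0$s or all $9$s:

```python
def diagonal(expansions):
    """Build a digit string differing from the n-th expansion in its n-th digit.
    Each entry is the digit string after the decimal point; finite entries
    are treated as padded with trailing zeros."""
    out = []
    for n, e in enumerate(expansions):
        d = e[n] if n < len(e) else '0'
        out.append('6' if d == '5' else '5')
    return ''.join(out)

lst = ['1000', '2000', '3000', '1100']   # 0.1, 0.2, 0.3, 0.11 from the question's list
diag = diagonal(lst)
```

By construction `diag` disagrees with the $n$-th entry in its $n$-th digit, so the real number $0.\mathtt{diag}\ldots$ is on no finite prefix of the list.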
{ "language": "en", "url": "https://math.stackexchange.com/questions/237300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Finding positive Bézout coefficients If I have Bézout coefficients (obtained using extended Euclidean algorithm) and one of them is negative, what is the easiest way to obtain positive Bézout coefficients?
Given natural numbers $a$, $b$, $n$ we have the diophantine equation $$ax+by=n$$ for which we can find a solution using Euclid's algorithm (provided $(a,b)\mid n$). Let $(x,y)$ and $(x',y')$ be two distinct solutions; then subtracting $ax+by=n$ and $ax'+by'=n$ we find $a(x-x')=b(y'-y)$. The question is what this tells us about $x-x'$ and $y'-y$. It is difficult to deduce anything directly, since $a$ and $b$ need not be coprime, so let $a' = \frac{a}{(a,b)}$, $b' = \frac{b}{(a,b)}$; we have $a'(a,b)(x-x')=b'(a,b)(y'-y)$, and since $(a',b')=1$ it follows that $a'\mid y'-y$ and $b'\mid x-x'$, so we must in fact have $y' = y + a'k$ and $x' = x-b'k$ for some integer $k$. Reversing this, we can say that given any one solution $(x,y)$ we can parametrize all solutions by an integer $k$: $(x-b'k,y + a'k)$. If any positive solution to the diophantine equation exists, you can easily find it with this formula. If it doesn't, you can use this formula to show that too.
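A sketch in Python (the helper names are mine): compute one Bézout pair with the extended Euclidean algorithm, then slide along the solution family $(x-b'k,\,y+a'k)$ until the desired coefficient is positive.

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def make_x_positive(a, b, x, y, g):
    """Shift a Bezout pair along (x + b'k, y - a'k) until x > 0."""
    ap, bp = a // g, b // g
    while x <= 0:
        x += bp
        y -= ap
    return x, y

a, b = 240, 46
g, x, y = ext_gcd(a, b)
x2, y2 = make_x_positive(a, b, x, y, g)
```

Each shift changes $ax+by$ by $a b' - b a' = 0$, so the equation is preserved at every step.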
{ "language": "en", "url": "https://math.stackexchange.com/questions/237372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
In any set of n different natural numbers, exists subset of more than n/3 numbers, such as there are no three numbers in it : a+b=c I need to prove, that in any set of n different natural numbers, exists subset of more than n/3 numbers, such as there are no three numbers in it, one of which is the sum of two others. Can anyone help me with this?
I think this probabilistic argument should work: Let $S$ be our set of size $n$. Pick some very large prime $p$ of the form $p = 3k+2$ (this is possible by Dirichlet's theorem on prime numbers in arithmetic progressions). We can assume $p > \max S$, so that we can consider $S$ as a subset of $\mathbb Z/p\mathbb Z$. Now choose an $x \in (\mathbb Z/p\mathbb Z)^\times$ uniformly at random and consider $xS \subseteq \mathbb Z/p \mathbb Z$. Note that the set $A = \{k+1,\ldots,2k+1\} \subset \mathbb Z/p\mathbb Z$ is sum-free modulo $p$ and contains $k+1$ numbers. The probability that for a particular number $s \in S$ the number $xs$ falls into $A$ is $\frac{k+1}{3k+1} > \frac{1}{3}$ (as $xs$ is uniform over the $p-1 = 3k+1$ nonzero residues). By linearity of expectation, the set $xS$ contains on average $\frac{k+1}{3k+1}n$ elements in $A$. That means there has to be at least one choice of $x$ such that $xS \cap A$ has size at least $\frac{k+1}{3k+1}n > n/3$. Then $S \cap x^{-1}A$ is a subset of $S$ which is sum-free (modulo $p$, hence also as a subset of $\mathbb N$), and has size $> n/3$.
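For small sets the argument can even be run by brute force over all dilations $x$ instead of sampling; a sketch (all names mine), using the smallest prime $p\equiv 2\pmod 3$ above $\max S$:

```python
def sum_free_subset(S):
    """Find a sum-free subset of S of size > len(S)/3 via the
    dilation-mod-p trick from the argument above (p = 3k+2 prime > max S)."""
    def is_prime(m):
        return m > 1 and all(m % d for d in range(2, int(m ** 0.5) + 1))
    p = max(S) + 1
    while not (p % 3 == 2 and is_prime(p)):
        p += 1
    k = (p - 2) // 3
    A = set(range(k + 1, 2 * k + 2))       # the middle third, sum-free mod p
    best = set()
    for x in range(1, p):                  # try every dilation instead of sampling
        T = {s for s in S if (x * s) % p in A}
        if len(T) > len(best):
            best = T
    return best

S = set(range(1, 16))                      # {1, ..., 15}
T = sum_free_subset(S)
```

The expectation bound guarantees some dilation lands more than $n/3$ elements of $S$ in $A$, and the resulting subset is sum-free as a set of integers.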
{ "language": "en", "url": "https://math.stackexchange.com/questions/237448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Prove sum is bounded I have the following sum: $$ \sum\limits_{i=1}^n \binom{i}{i/2}p^\frac{i}{2}(1-p)^\frac{i}{2} $$ where $p<\frac{1}{2}$ I need to prove that this sum is bounded. i.e. it doesn't go to infinity as n goes to infinity.
Since, by Stirling's approximation, $n! \approx c n^{n+1/2}e^{-n}$ (where $c = \sqrt{2\pi}$) and $\binom{i}{i/2} = \frac{i!}{(i/2)!^2}$, $\binom{i}{i/2} \approx \frac{ci^{i+1/2}e^{-i}}{(c(i/2)^{i/2+1/2}e^{-i/2})^2} = \frac{i^{i+1/2}e^{-i}}{c(i/2)^{i+1}e^{-i}} =2^{i+1}/(c\sqrt{i}) $. Since $0 < p < 1/2$, $p(1-p) = p-p^2 = 1/4 - 1/4+p-p^2 = 1/4-(1/2-p)^2$, so $p^{i/2}(1-p)^{i/2} = (p(1-p))^{i/2} = (1/4-d)^{i/2} $ where $d = (1/2-p)^2$. So $\binom{i}{i/2}p^{i/2}(1-p)^{i/2} \approx 2^{i+1}/(c\sqrt{i})(1/4-d)^{i/2} = \frac{2}{c\sqrt{i}}(1-4d)^{i/2} $, and since $d < 1/4$, letting $r = \sqrt{1-4d}$, $\binom{i}{i/2}p^{i/2}(1-p)^{i/2} \approx \frac{2}{c\sqrt{i}}r^i$ where $r < 1$, and the sum of these is less than a geometric series with ratio less than $1$ and so converges.
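As a numerical cross-check, restrict to the even indices $i=2m$ (where the central binomial coefficient is an ordinary integer); the partial sums then approach the closed form $1/\sqrt{1-4p(1-p)}-1$ and stay bounded, consistent with the geometric-series comparison above:

```python
from math import comb

p = 0.3
x = p * (1 - p)                    # = 0.21 < 1/4, since p != 1/2
partial = 0.0
for m in range(1, 200):            # even terms i = 2m of the original sum
    partial += comb(2 * m, m) * x ** m
closed_form = 1 / (1 - 4 * x) ** 0.5 - 1   # sum_{m>=1} C(2m, m) x^m
```

With $p=0.3$ this gives $1/\sqrt{0.16}-1 = 1.5$, so the partial sums stay well below any divergence.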
{ "language": "en", "url": "https://math.stackexchange.com/questions/237495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
is $0.\overline{99}$ the same as $\lim_{x \to 1} x$? So we had an interesting discussion the other day about 0.999... repeated to infinity, actually being equal to one. I understand the proof, but I'm wondering then if you had the function... $$ f(x) = x* \frac{(x-1)}{(x-1)} $$ so $$ f(1) = NaN $$ and $$ \lim_{x \to 1} f(x) = 1 $$ what would the following be equal to? $$ f(0.\overline{999}) = ? $$
The answer is that $$0.\overline{9} = 1.$$ So when you want to find $f(0.\overline{9})$ then that is the exact same as writing $f(1)$ which is not defined. About the function $f$: As we have just noted, $f(1)$ is not defined. However, as you point out $\lim_{x\to 1} f(x)$ is defined and is equal to $1$. If you are interested, the fact that $f$ is not defined means that $f$ is not continuous at $1$. To answer specifically the question in the title, you indeed have that, $$0.\overline{9} = \lim_{x\to 1} x$$ Sidenote: When you write $0.\overline{999}$ you can just write $0.\overline{9}$. They are the same thing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Prove whether or not the series converges? $\sum_{n=2}^{\infty} \frac{(-1)^n}{n^{1/n}}$ Any help is much appreciated on this!
You can use the Test for Divergence here. For a series to be convergent you need the limit of the terms to go to zero, i.e. that $$ \lim_{n\to \infty} \frac{(-1)^n}{n^{1/n}} = 0. $$ Since $n^{1/n}>0$, that is equivalent to requiring that $$ \lim_{n\to \infty} n^{1/n} = \infty. $$ So if this limit is not infinity, then the series is divergent. Now, writing $y = n^{1/n}$ you have $\ln(y) = \frac{\ln(n)}{n}$, and the limit of $\ln(y)$ as $n\to \infty$ is $0$ (for instance by L'Hôpital's rule). So that means that $$ \lim_{n\to \infty} n^{1/n} =\lim_{n\to \infty} y = \lim_{n\to \infty} e^{\ln(y)} = e^{\lim_{n\to \infty} \ln(y)} = e^0 = 1. $$ So this series is divergent.
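A quick numeric look (not in the original answer) confirms $n^{1/n}\to 1$ rather than $\infty$, so the terms of the series do not tend to $0$:

```python
def nth_root(n):
    return n ** (1.0 / n)

# ln(y) = ln(n)/n -> 0, so n^(1/n) -> e^0 = 1
vals = [nth_root(n) for n in (10, 10 ** 3, 10 ** 6)]
```

The values decrease toward $1$, so $|(-1)^n/n^{1/n}|\to 1\neq 0$.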
{ "language": "en", "url": "https://math.stackexchange.com/questions/237654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the solution to this system of equations? $$ xy^3z^3 = yx^3z^3 = zx^3y^3 $$ Is there a way to solve this system? I think the answer is $1$, but I can't verify my intuition.
Clearly if one of the variables is $0$ then all three products are $0$ and the equations are satisfied. Suppose then that's not the case. Dividing out common factors, we rewrite the system as $$y^2 = x^2,\ \ x^2 = z^2,\ \ y^2 = z^2$$ Letting $x$ be a free variable, we have $y = \pm x$ and $z=\pm x$.
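A quick check (illustrative only) that the zero-coordinate solutions and the family $y=\pm x$, $z=\pm x$ all satisfy the chain of equalities, while a generic triple does not:

```python
def equal_products(x, y, z):
    # the three expressions x y^3 z^3, y x^3 z^3, z x^3 y^3
    return x * y ** 3 * z ** 3 == y * x ** 3 * z ** 3 == z * x ** 3 * y ** 3

cases = [(2, 2, 2), (2, -2, 2), (2, 2, -2), (2, -2, -2), (0, 5, 7)]
```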
{ "language": "en", "url": "https://math.stackexchange.com/questions/237728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Generating function for binomial coefficients $\binom{2n+k}{n}$ with fixed $k$ Prove that $$ \frac{1}{\sqrt{1-4t}} \left(\frac{1-\sqrt{1-4t}}{2t}\right)^k = \sum\limits_{n=0}^{\infty}\binom{2n+k}{n}t^n, \quad \forall k\in\mathbb{N}. $$ I already tried induction over $k$, but I have trouble showing the statement holds for $k=0$ or $k=1$.
We proceed by induction on $k$. For $k = 0$, we have $\displaystyle \sum\limits_{m=0}^{\infty} \binom{2m}{m}t^m = \frac{1}{\sqrt{1-4t}}$ and for $k = 1$, we have \begin{align*} \sum\limits_{m=0}^{\infty} \binom{2m+1}{m+1}t^m &= \sum\limits_{m=0}^{\infty} \left(2 - \frac{1}{m+1}\right)\binom{2m}{m}t^m \\&= \frac{2}{\sqrt{1-4t}} - C(t)\\&= \frac{2}{\sqrt{1-4t}} -\frac{2}{1+\sqrt{1-4t}} \\&= \frac{2}{\sqrt{1-4t}(1+\sqrt{1-4t})}\end{align*} where $\displaystyle C(t) = \sum\limits_{m=0}^{\infty} \frac{1}{m+1}\binom{2m}{m}t^m = \frac{2}{1+\sqrt{1-4t}}$ is the generating function of the Catalan numbers. (Note that $\binom{2m+k}{m+k} = \binom{2m+k}{m}$, so these sums match the statement.) Assuming the result holds for $k$, we prove it for $k+1$: \begin{align*}\sum\limits_{m=0}^{\infty} \binom{2m+k+1}{m+k+1}t^m &= \sum\limits_{m=0}^{\infty} \left(2 - \frac{k+1}{m+k+1}\right)\binom{2m+k}{m+k}t^m\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}} - \frac{k+1}{t^{k+1}}\int_0^t \sum\limits_{m=0}^{\infty} \binom{2m+k}{m+k}t^{m+k}\,\mathrm{d}t\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}} - \frac{2^k(k+1)}{t^{k+1}}\int_0^t \frac{t^k(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}}\,\mathrm{d}t\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}} - \frac{k+1}{2^kt^{k+1}}\int_0^t \frac{(1-\sqrt{1-4t})^{k}}{\sqrt{1-4t}}\,\mathrm{d}t\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}} - \frac{k+1}{2^{k+1}t^{k+1}} \frac{(1-\sqrt{1-4t})^{k+1}}{k+1}\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-k}}{\sqrt{1-4t}} - 2^{k+1}(1+\sqrt{1-4t})^{-(k+1)}\\ &= \frac{2^{k+1}(1+\sqrt{1-4t})^{-(k+1)}}{\sqrt{1-4t}}\\ \end{align*}
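The identity is easy to test numerically for small $k$ by truncating the series (the sample points and tolerance below are arbitrary choices for the check):

```python
from math import comb, sqrt

def lhs(t, k):
    s = sqrt(1 - 4 * t)
    return (1 / s) * ((1 - s) / (2 * t)) ** k

def rhs(t, k, terms=200):
    # truncated series; the terms decay roughly like (4t)^n for 0 < t < 1/4
    return sum(comb(2 * n + k, n) * t ** n for n in range(terms))

checks = [(0.1, 0), (0.1, 1), (0.05, 3), (0.2, 2)]
```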
{ "language": "en", "url": "https://math.stackexchange.com/questions/237810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 6, "answer_id": 2 }
topology puzzle - without cutting the rope, separate two rings Hello, I wonder whether this puzzle is possible to solve. If possible, what kind of thing should I learn to solve it? The problem is to turn the left configuration into the right one without cutting the rope; only stretching and bending are allowed. I found this puzzle here->(www.ocf.berkeley.edu/~wwu/riddles/hard.shtml/) I hope this problem leads me to learn math intuitively.
Here's one way to think of it. Call the loops $L_1$ and $L_2$, joined by the stem which attaches to $L_1$ at $S_1$ and $L_2$ at $S_2$. You will notice there is a point $A$ where $L_1$ crosses and is above $L_2$ and another point $B$ where $L_1$ crosses and is below $L_2$. Now contract the stem until it vanishes so that $S_1$ and $S_2$ become a single point $S$ and you will see that one of the points $A$ or $B$ will also move to $S$ under this deformation. If you try to draw the two rings now joined at the single point $S$ you will see they are no longer linked - there is only one crossing point now so you simply have two rings sitting on top of one another joined at a point. All you need to do now is stretch out the stem again and the rings will look like the second picture.
{ "language": "en", "url": "https://math.stackexchange.com/questions/237886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Mathematical induction on inequality: $2^n \ge 3n^2 +5$ for $n\ge8$ I want to prove $2^n \ge 3n^2 +5$--call this statement $S(n)$--for $n\ge8$. Basis step with $n = 8$: LHS $\ge$ RHS, so $S(8)$ is true. Then I proceed to the inductive step by assuming $S(k)$ is true, so that $2^k \ge 3k^2 +5$. Then $S(k+1)$ is $2^{k+1} \ge 3(k+1)^2 + 5$. I need to prove it, so I continue with $2^{k+1} = 2(2^k) \ge 2(3k^2 +5)$. I'm stuck here... I don't know how to continue; please explain it to me step by step. I've searched for a whole day already, and every answer I found came without explanation. I can't understand them, sorry for the trouble.
Observe that since $k\ge8>6$, then $$3(k+1)^2+5=3k^2+6k+8<3k^2+k^2+8=4k^2+8<6k^2+8<6k^2+10.$$ Rewriting the far right expression as $2(3k^2+5)$, we use the fact that $S(k)$ holds (and the fact that multiplication by $2$ preserves order) to conclude that $$3(k+1)^2+5<2(3k^2+5)\leq 2(2^k)=2^{k+1},$$ so $S(k+1)$ holds.
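For reassurance one can verify $S(n)$ directly over a range, and confirm that $n=8$ really is the first $n$ where it holds:

```python
def holds(n):
    # S(n): 2^n >= 3 n^2 + 5
    return 2 ** n >= 3 * n * n + 5

early_holds = [n for n in range(8) if holds(n)]     # should be empty
checked = all(holds(n) for n in range(8, 200))      # S(n) for 8 <= n < 200
```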
{ "language": "en", "url": "https://math.stackexchange.com/questions/237958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Applying Central Limit Theorem to show that $E\left(\frac{|S_n|}{\sqrt{n}}\right) \to \sqrt{\frac{2}{\pi}}\sigma$ In the book Probability Essentials, by Jacod and Protter, the following question has bugged me for a long while and I'm wondering if it is bugged. The question is an application of Central Limit Theorem: Let $(X_j)_{j\geq1}$ be iid with $E[X_j] =0$ and $\sigma_{X_j}^2 = \sigma^2 < \infty$. Let $S_n = \sum_{j=1}^n X_j$. Show that $$ \lim_{n\rightarrow \infty} E\left\{\frac{|S_n|}{\sqrt{n}}\right\} = \sqrt{\frac{2}{\pi}}\sigma$$ What I tried and am aware of: Interchanging E and lim is wrong as $\frac{|S_n|}{\sqrt{n}}$ does NOT converge a.s to $|Z|$ where $ Z \sim \mathcal{N}(0,\sigma^2)$, but does so in distribution. Note the use of continuous mapping theorem. Weak convergence aka convergence in distribution implies $E[f(X_n)] \rightarrow E[f(X)]$ when $X_n \rightarrow^d X$ for f continuous and bounded. $f(x)=|x|$ is not bounded. I tried using a truncated version of $|X|$. But at one stage I had to swap limits and couldn't justify the steps. I appreciate any help. Also kindly avoid Skorokhod's theorem if possible as it has not been covered. Although if you have an idea using that, I'm all ears. Note: The RHS is $E|Z|$.
We assume $\sigma=1$. * *Show that for all $K$ and all $n$, $$P\left(\frac{|S_n|}{\sqrt n}\geqslant K\right)\leqslant\frac 1{K^2}.$$ *By the Cauchy–Schwarz inequality (applied to $\frac{|S_n|}{\sqrt n}\mathbf 1_{\{|S_n|\geqslant K\sqrt n\}}$), this implies $$\int \frac{|S_n|}{\sqrt n}dP\leqslant \int_{|S_n|<K\sqrt n} \frac{|S_n|}{\sqrt n}dP+\frac 1{K}.$$ *Use the definition of weak convergence together with $x\mapsto \min\{|x|,K\}$, a continuous bounded map. Actually, there is a deeper result (not needed here) called the invariance principle (see Billingsley's book Convergence of probability measures) which states the following. Take $\{X_n\}$ a sequence of i.i.d. centered random variables, with $EX_n^2=1$, and define for each $n$ a random function $f_n(\omega,\cdot)$ in the following way: * *$f_n(\omega,kn^{-1})=\frac 1{\sqrt n}\sum_{j=1}^kX_j(\omega)$ for $0\leqslant k\leqslant n$. *$f_n(\omega,\cdot)$ is piecewise linear. The measures associated with these functions converge in law to a Brownian motion. Now, to get the result, take the continuous functional $F\colon C[0,1]\to \Bbb R$ given by $F(f)=|f(1)|$ (it is continuous but not bounded, so the uniform-integrability step above is still needed). The first version of the invariance principle, found by Erdős and Kac, was about the functional $F(f):=\sup_{0\leqslant x\leqslant 1}f(x)$. Then Donsker generalized it.
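A Monte Carlo sanity check of the limit (illustrative only, with Rademacher $\pm1$ steps so that $EX_j=0$ and $\sigma=1$; the sample sizes are arbitrary):

```python
import math, random

random.seed(0)

def estimate(n, trials):
    """Average of |S_n| / sqrt(n) over independent runs of n coin-flip steps."""
    total = 0.0
    for _ in range(trials):
        s = sum(random.choice((-1, 1)) for _ in range(n))
        total += abs(s) / math.sqrt(n)
    return total / trials

est = estimate(256, 5000)
target = math.sqrt(2 / math.pi)   # E|Z| for a standard normal Z, about 0.7979
```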
{ "language": "en", "url": "https://math.stackexchange.com/questions/238031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian. Prove that if $g^2=e$ for all $g$ in $G$ then $G$ is Abelian. This question is from group theory in Abstract Algebra and no matter how many times my lecturer teaches it for some reason I can't seem to crack it. (Please note that $e$ in the question is the group's identity.) Here's my attempt though... First I understand Abelian means that if $g_1$ and $g_2$ are elements of a group $G$ then they are Abelian if $g_1g_2=g_2g_1$... So, I begin by trying to play around with the elements of the group based on their definition... $$(g_2g_1)^r=e$$ $$(g_2g_1g_2g_2^{-1})^r=e$$ $$(g_2g_1g_2g_2^{-1}g_2g_1g_2g_2^{-1}...g_2g_1g_2g_2^{-1})=e$$ I assume that the $g_2^{-1}$'s and the $g_2$'s cancel out so that we end up with something like, $$g_2(g_1g_2)^rg_2^{-1}=e$$ $$g_2^{-1}g_2(g_1g_2)^r=g_2^{-1}g_2$$ Then ultimately... $$g_1g_2=e$$ I figure this is the answer. But I'm not totally sure. I always feel like I do too much in the pursuit of an answer when there's a simpler way. Reference: Fraleigh p. 49 Question 4.38 in A First Course in Abstract Algebra.
For any $g, h \in G$, consider the element $g\cdot h\cdot h\cdot g.~$ Since $g^2 = g\cdot g= e$ for all $g \in G$, we find that $$g\cdot h\cdot h\cdot g = g\cdot(h\cdot h)\cdot g = g\cdot e\cdot g = g\cdot g = e.$$ But, $g\cdot h$ has unique inverse element $g\cdot h$, while we have just proved that $(g\cdot h)\cdot (h\cdot g) = e$, and so it must be that $g\cdot h = h\cdot g$ for all $g, h \in G$, that is, $G$ is an abelian group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 14, "answer_id": 5 }
Basis for this $\mathbb{P}_3$ subspace. Just had an exam where the last question was: Find a basis for the subset of $\mathbb{P}_3$ where $p(1) = 0$ for all $p$. I answered $\{t,t^2-1,t^3-1\}$, but I'm not entirely confident in the answer. Did I think about the question in the wrong way?
I'm assuming by $\mathbb{P}_3$ you mean the vector space of polynomials of degree 3 or less. It has dimension $4$, and since you have one condition (namely $p(1)=0$), we expect the subspace to have dimension 3, one less. So it is enough to find three linearly independent basis vectors. In this example, that means three linearly independent polynomials, all with $p(1)=0$. In your answer ($\{t, t^2-1, t^3-1\}$), the first one must be wrong, since $t(1)=1$. If you replace $t$ by $t-1$, then you have three linearly independent polynomials, all satisfying $p(1)=0$.
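In coordinates $(c_0,c_1,c_2,c_3)$ for $c_0+c_1t+c_2t^2+c_3t^3$, the corrected basis $\{t-1,\,t^2-1,\,t^3-1\}$ is easy to verify (a sketch; the helper names are mine):

```python
def p_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

basis = [(-1, 1, 0, 0),   # t - 1
         (-1, 0, 1, 0),   # t^2 - 1
         (-1, 0, 0, 1)]   # t^3 - 1

def combo(a, b, c):
    # coordinates of a(t - 1) + b(t^2 - 1) + c(t^3 - 1)
    return tuple(a * u + b * v + c * w for u, v, w in zip(*basis))
```

Independence is visible from `combo`: the $t$, $t^2$, $t^3$ coordinates of the combination are exactly $a$, $b$, $c$, so only $a=b=c=0$ gives the zero polynomial.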
{ "language": "en", "url": "https://math.stackexchange.com/questions/238220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does a Chi-Square random variable $\chi^2_1$ mean that only one normal random variable was taken? I'm trying to understand how Chi-Square variables work. So far, I know that a Chi-Square random variable, $\chi^2$, means that one random value has been taken from a normally distributed graph. Let's say it was the standard normal distribution. This means, $\chi^2$ has a high probability of being zero or near zero. Here's what I don't understand; How many degrees of freedom does $\chi^2$ have? If it only represents one random variable, then it has zero degrees of freedom, doesn't it? For example, if I take 5 random variables from a normal distribution, is it $\chi^2_1+\chi^2_1+\chi^2_1+\chi^2_1+\chi^2_1= \chi^2_5$ or is one somehow not counted?
As Robert Israel has pointed out, the sum of squares of $n$ independent random variables with a standard normal distribution has a chi-square distribution with $n$ degrees of freedom. Take them from a normal distribution whose expectation is $\mu$ and whose standard deviation is $\sigma$, you have have $$ \left(\frac{X_1-\mu}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-\mu}{\sigma}\right)^2 $$ has chi-square distribution with $n$ degrees of freedom. So why might it appear that one of them is not counted? The answer to that comes from such results as this: Suppose instead of the population mean $\mu$, you subtract the sample mean $\overline X$. Then you have $$ \left(\frac{X_1-\overline X}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-\overline X}{\sigma}\right)^2,\tag{1} $$ and this has a chi-square distribution with $n-1$ degrees of freedom. In particular, if $n=1$, then the sample mean is just the same as $X_1$, so the numerator in the first term is $X_1-X_1$, and the sum is necessarily $0$, so you have a chi-square distribution with $0$ degrees of freedom. Notice that in $(1)$, you have $n$ terms in the sum, not $n-1$, and they're not independent (since if you take away the exponents, you get $n$ terms that necessarily always add up to $0$) and the standard deviation of the fraction that gets squared is not actually $1$, but less than $1$. So why does it have the same probability distribution as if there were $n-1$ of them, and they were indepedent, and those standard deviations were each $1$? The simplest way to answer that may be this: $$ \begin{bmatrix} X_1 \\ \vdots \\ X_n \end{bmatrix} = \begin{bmatrix} \overline X \\ \vdots \\ \overline X \end{bmatrix} + \begin{bmatrix} X_1 - \overline X \\ \vdots \\ X_n - \overline X \end{bmatrix} $$ This is the decomposition of a vector into two components orthogonal to each other: one in a $1$-dimensional space and the other in an $n-1$ dimensional space. 
Now think about the spherical symmetry of the joint probability distribution, and about the fact that the second projection maps the expected value of the random vector to $0$. Later edit: Sometimes it might seem as if two of them are not counted. Suppose $X_i$ is a normally distributed random variable with expected value $\alpha+\beta w_i$ and variance $\sigma^2$, and they're independent, for $i=1,\ldots,n$. When $w_i$ is observable and $\alpha$, $\beta$, are not, one may use least-squares estimates $\hat\alpha$, $\hat\beta$. Then $$ \left(\frac{X_1-(\alpha+\beta w_1)}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-(\alpha+\beta w_n)}{\sigma}\right)^2 \sim \chi^2_n $$ but $$ \left(\frac{X_1-(\hat\alpha+\hat\beta w_1)}{\sigma}\right)^2 + \cdots + \left(\frac{X_n-(\hat\alpha+\hat\beta w_n)}{\sigma}\right)^2 \sim \chi^2_{n-2}. $$ A similar sort of argument involving orthogonal projections explains this. One needs these results in order to derive things like confidence intervals for $\mu$, $\alpha$, and $\beta$.
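The missing degree of freedom can be seen in simulation: for $n$ standard normals, the mean of $\sum_i (X_i-\overline X)^2$ is $n-1$, not $n$. A small sketch (sample sizes arbitrary):

```python
import random

random.seed(1)

def sum_sq_dev(n):
    xs = [random.gauss(0, 1) for _ in range(n)]
    m = sum(xs) / n                       # sample mean
    return sum((x - m) ** 2 for x in xs)  # chi-square with n - 1 df

n, trials = 5, 20000
mean_stat = sum(sum_sq_dev(n) for _ in range(trials)) / trials
```

With $n=5$ the simulated mean settles near $4$, matching a $\chi^2_{n-1}$ distribution.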
{ "language": "en", "url": "https://math.stackexchange.com/questions/238339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In the ring $\mathbb Z[i]$ explain why our four units $\pm1$ and $\pm i$ divide every $u\in\mathbb Z[i]$. This is obviously an elementary question, but Gaussian integers are relatively new to me. I found this exercise in the textbook, and my professor overlooked it, but I'm curious. Is this basically the same thing as, for lack of a better term, "normal" integers? As in, $\pm1$ divides everything?
Hint $\ $ Units $\rm\:u\mid 1\mid x.\ $ Alternatively $\rm\ u\mid u(u^{-1}x) = (uu^{-1})x = x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/238406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Ratio of lengths in isosceles triangle In $\triangle ABC$ , $BC = AC$. Also $D$ is a point on side $AC$ such that $BD = AB$. Find the ratio $\frac{AB}{AD}$. Justify your answer. The answer is supposed to be $\frac1 {cosA}$ where $A = \angle BAC$. I can't figure out how to get there: Related Topics: Similarity, Areas, Golden Ratio
Draw the perpendicular from $B$ to $AC$, meeting $AC$ at $X$. Then $\dfrac{AX}{AB}=\cos A$. But $AD=2AX$, and therefore $\dfrac{AB}{AD}=\dfrac{1}{2\cos A}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Simple inequality on real numbers Let $a, b > 0$ such that $a + b ≥ 1$. Show that $a^4 + b^4 ≥ \frac18$. What is the best possible approach on this problem?
$2(a^2 + b^2) = (a+b)^2 + (a-b)^2 \geq (a+b)^2$. Similarly we have $a^4+b^4 \geq \frac{(a^2+b^2)^2}{2}$. Combining the two, $$a^4+b^4 \geq \frac{(a+b)^4}{8};$$ the rest follows easily, since $a+b \geq 1$.
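A random sanity check of the chain of inequalities (seeded for reproducibility; a small tolerance absorbs floating-point round-off near the equality case $a=b=\frac12$):

```python
import random

random.seed(2)

def worst_value(trials=10000):
    """Smallest a^4 + b^4 seen over random pairs with a + b >= 1."""
    worst = float('inf')
    for _ in range(trials):
        a = random.uniform(0.0, 3.0)
        b = max(random.uniform(0.0, 3.0), 1.0 - a)   # force a + b >= 1
        worst = min(worst, a ** 4 + b ** 4)
    return worst

worst = worst_value()
```

The sampled minimum stays at or above $\frac18$, with near-equality when $a\approx b\approx\frac12$.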
{ "language": "en", "url": "https://math.stackexchange.com/questions/238534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About the asymptotic behaviour of $\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}$ Let $\{a_n\}_{n\in\mathbb{N}}$ be an increasing sequence of natural numbers, and $$ f_A(x)=\sum_{n\in\mathbb{N}}\frac{x^{a_n}}{a_n!}. $$ There are some cases in which the limit $$ l_A=\lim_{x\to+\infty} \frac{1}{x}\,\log(f_A(x)) $$ does not exist. However, if $\{a_n\}_{n\in\mathbb{N}}$ is an arithmetic progression, we have $l_A=1$ (it follows from a straightforward application of the discrete Fourier transform). Consider now the case $a_n=n^2.$ * *Is it true that there exists a positive constant $c$ for which $$\forall x>0,\quad e^{-x}f_A(x)=\sum_{k\in\mathbb{N}}x^k\left(\sum_{0\leq j\leq\sqrt{k}}\frac{(-1)^{k-j^2}}{(j^2)!\,(k-j^2)!}\right)\geq c\;?$$ *Is it true that $l_A=1$?
There is an error in the argument. See the comments. It is a nice exercise to show $l_A = 1$ for every polynomial $a_n = n^k$. Let me sketch the proof for $k = 2$. Observe that for $x \geq 0$: $$\sum_{n=0} \frac{x^{n^2}}{(n^2)!} \leq \sum_{n=0} \frac{x^n}{n!} = e^x$$ simply because the terms on the left form a subset of the terms on the right. The next thing to do is to bound our sum from below. We have: $$\sum_{n=0} (2n + 1)\frac{x^{(n+1)^2}}{\left((n+2)^2\right)!} \geq \sum_{n=0}\frac{x^n}{(n+2)!}$$ because in each group of size $(n+1)^2 - n^2 = 2n + 1$ the last numerator is the biggest and the first denominator is the smallest one. For sufficiently large $x$ (for example $x \geq 2$): $$\sum_{n=0} \frac{x^{(n+2)^2}}{\left((n+2)^2\right)!} \geq \sum_{n=0} (2n+1)\frac{x^{(n+1)^2}}{\left((n+2)^2\right)!}$$ thus $$\sum_{n=0} \frac{x^{n^2}}{(n^2)!} - x - 1= \sum_{n=2} \frac{x^{n^2}}{(n^2)!} \geq \sum_{n=2} \frac{x^{n-2}}{n!} = \frac{1}{x^2}\sum_{n=2} \frac{x^n}{n!} = \frac{1}{x^2}(e^x - x - 1)$$ Therefore, for sufficiently large $x$: $$\frac{e^x}{x^2} \leq \frac{e^x}{x^2} + x + 1 - \frac{x+1}{x^2} \leq \sum \frac{x^{n^2}}{(n^2)!} \leq e^x$$
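Independently of the gaps flagged above, the claim $l_A=1$ for $a_n=n^2$ can be explored numerically with a log-sum-exp (a sketch; the cutoff `nmax = 60` comfortably covers the dominant terms for these values of $x$):

```python
import math

def log_f(x, nmax=60):
    """log of sum_{n >= 0} x^(n^2) / (n^2)!, computed stably in log space."""
    logs = [n * n * math.log(x) - math.lgamma(n * n + 1) for n in range(nmax)]
    m = max(logs)
    return m + math.log(sum(math.exp(L - m) for L in logs))

ratios = [log_f(x) / x for x in (10.0, 100.0, 1000.0)]
```

The ratios $\log f_A(x)/x$ increase toward $1$ while staying below it, as $f_A(x)\leq e^x$ forces.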
{ "language": "en", "url": "https://math.stackexchange.com/questions/238615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Plotting an integral of a function in Octave I try to integrate a function and plot it in Octave. Integration itself works, i.e. I can evaluate the function g like g(1.5) but plotting fails. f = @(x) ( (1) .* and((0 < x),(x <= 1)) + (-1) .* and((1 <x),(x<=2))); g = @(x) (quadcc(f,0,x)); x = -1.0:0.01:3.0; plot(x,g(x)); But receive the following error: quadcc: upper limit of integration (B) must be a single real scalar As far as I can tell this is because the plot passes a vector (namely x) to g which passes it down to quadcc which cannot handle vector arguments for the third argument. So I understand what's the reason for the error but have no clue how to get the desired result instead. N.B. This is just a simplified version of the real function I use, but the real function is also constant on a finite set of intervals ( number of intervals is less than ten if that matters). I need to integrate the real function 3 times in succession (f represents a jerk and I need to determine functions for acceleration, velocity and distance). So I cannot compute the integrals by hand like I could in this simple case.
I do not know if it is still valuable to answer this question at this point, but it is indeed a valuable question in terms of how to use (and think in terms of) GNU Octave. GNU Octave is built around vectors and matrices. We have a function $f$ f = @(x) ( (1) .* and((0 < x),(x <= 1)) + (-1) .* and((1 <x),(x<=2))); which, in Octave terms, is a vectorized function (i.e. if $x$ is a vector, then $f(x)$ is also a vector). In order to plot the integral of $f$, its integral $g$ must likewise accept a vector and return a vector. The problem here is that quadcc is not vectorized: its integration limits must be scalars, so you cannot pass a vector $x$ as the upper limit. To my knowledge, integration routines are usually written in scalar form (not vectorized!). To do what the question asks, the simplest way is to resort to a loop, e.g. defining $g$ as function y = g(x) f = @(x) ( (1) .* and((0 < x),(x <= 1)) + (-1) .* and((1 <x),(x<=2))); y = zeros(size(x)); % preallocate instead of growing y inside the loop for i = 1:numel(x) y(i) = quadcc(f, 0, x(i)); % scalar upper limit, one element at a time end end Equivalently, you can wrap the scalar call with arrayfun: g = @(x) arrayfun(@(t) quadcc(f, 0, t), x). Either way you have a vectorized version of the integral of $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
10 voters for A, 8 voters for B, 6 voters for C, probability that 2 for each are chosen The problem in full is: If 10 voters are for A, 8 voters are for B, and 6 voters are for C, what is the probability that a random selection of 6 (no two can be the same) voters will yield 2 voters for each candidate? and a follow up is Instead, suppose you call 6 numbers chosen randomly (same number can be chosen) from the pool of 24 voters--now what is the probability that the calling will yield 2 voters for each candidate? my work so far is as follows: for the first: (C(10,2)*C(8,2)*C(6,2) / 3!) / (24*23*22*21*20*19) and for the second: (C(10,2)*C(8,2)*C(6,2) / 3!) / (24^6) my thought process is that you must get 2 from each group, you must divide by 3! to eliminate combinations including the same people in different order, and the denominator is the total number of ways to choose/call 6 people in each different case. Thanks for any help in advance!
First problem: For me (and perhaps soon for you), the easiest way is to note that there are $\dbinom{24}{6}$ equally likely ways to choose $6$ people from the $24$. Now we count the "favourables." The number of ways to choose $2$ from the first group, $2$ from the second, and $2$ from the third, is $\dbinom{10}{2}\dbinom{8}{2}\dbinom{6}{2}$. For the probability, divide this by $\dbinom{24}{6}$. Second problem: This time, we are choosing with replacement. To stay close to your approach, there are $24^6$ sequences of length $6$ made up of not necessarily distinct voters. These are all equally likely. There is some ambiguity in the statement of the problem. Do the $2$ voters of each type have to be (i) not necessarily distinct or (ii) distinct? (i) Not necessarily distinct: We ask how many strings there are that contain $2$ voters of each type. The positions of the voters of the first type can be chosen in $\dbinom{6}{2}$ ways. For each choice, there are $10^2$ ways to fill these positions with such voters. For each such choice, there are $\dbinom{4}{2}8^2$ ways to deal with the second type of voter. And then there are $\dbinom{2}{2}6^2$ ways for the third type, where of course the $\dbinom{2}{2}$ is superfluous. That gives a total of $\dbinom{6}{2}10^2\dbinom{4}{2}8^2\dbinom{2}{2}6^2$. Divide. (ii) Distinct: The calculation is very similar, except that $10^2$ is replaced by $(10)(9)$, and $8^2$ by $(8)(7)$, and $6^2$ by $(6)(5)$. Remarks: Here is an approach to the first problem that is closer in spirit to yours. We can, as you did, observe that there are $(24)(23)(22)(21)(20)(19)$ strings of length $6$ made up of distinct voters. These are all equally likely. Now we need to count the strings made up of $2$ voters of each type. The particular voters can be chosen in $\dbinom{10}{2}\dbinom{8}{2}\dbinom{6}{2}$ ways. For each such way, there are $6!$ ways to arrange them in a row, giving a total of $\dbinom{10}{2}\dbinom{8}{2}\dbinom{6}{2}6!$ ways. 
For the probability, divide by $(24)(23)(22)(21)(20)(19)$. For the second problem, I somewhat prefer to work directly with probabilities. We solve the problem for interpretation (i), the not necessarily distinct case. The analysis for Case (ii) is similar. We first find the probability that the first $2$ chosen people are of the first type, the next $2$ are of the second type,and the last $2$ of the third type. This probability is $a=\left(\dfrac{10}{24}\right)^2\left(\dfrac{8}{24}\right)^2\left(\dfrac{6}{24}\right)^2$. But there are $k=\dbinom{6}{2}\dbinom{4}{2}\dbinom{2}{2}$ ways to choose the positions. So our probability is $ka$.
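Not part of the original answer, but the counts above are easy to sanity-check by machine. A short Python sketch (the variable names are my own) computes both probabilities and confirms that the unordered count divided by $\binom{24}{6}$ agrees with the ordered count divided by $(24)(23)\cdots(19)$:

```python
from math import comb, factorial

# First problem: choose 6 of 24 voters without replacement,
# demanding 2 supporters of each candidate.
favourable = comb(10, 2) * comb(8, 2) * comb(6, 2)   # 45 * 28 * 15
p_first = favourable / comb(24, 6)

# Same probability via ordered selections, as in the closing remark.
p_first_ordered = favourable * factorial(6) / (24 * 23 * 22 * 21 * 20 * 19)
assert abs(p_first - p_first_ordered) < 1e-15

# Second problem, interpretation (i): 6 calls with replacement,
# exactly 2 calls landing on each candidate's supporters.
p_second = (comb(6, 2) * 10**2 * comb(4, 2) * 8**2 * comb(2, 2) * 6**2) / 24**6

print(p_first, p_second)   # roughly 0.1404 and 0.1085
```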
{ "language": "en", "url": "https://math.stackexchange.com/questions/238749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
approximate error between integral an sum I am new here. My problem: There is an integral $I:=\int_0^1 f(x)\,dx$ for $f\colon [0,1]\to\mathbb{R}$ and I want to compute it by $H_n:=\frac{f\left(0\right)}{2n}+\frac{1}{n}\left[f\left(\frac{1}{n}\right)+f\left(\frac{2}{n}\right)+\dotsm + f\left(\frac{n-1}{n}\right)\right]+\frac{f\left(1\right)}{2n}$ Whats an easy way to prove the error $|I-H_n|\leq \frac{L}{4n}$? The function $f$ suffices the lipsch. condition $\Vert f(x_1)-f(x_2) \Vert\leq L \vert x_1-x_2\Vert$? I didn't attend any lecture about numerical analysis yet. I know how it looks like. I take equidistant steps an only compute the mean of these values. From the lip. condition I know that $\max_{x\in [0,1]} \frac{d}{dx}f(x)\leq L$. Is there some literature or an EASY way to see this inequality? I think induction in $n$ won't make sense. I don't even know if this approximation has a name in the literature.
This approximation is called the trapezoid rule. Write $I - H_n = \sum_{j=1}^n E_j$ where $$E_j = \int_{(j-1)/n}^{j/n} f(x)\ dx - \frac{f((j-1)/n) + f(j/n)}{2n} $$ For convenience I'll write $(j-1)/n = a$, $j/n = b$, $1/n = b - a = \delta$, so $$E_j = \int_a^b f(x)\ dx - \frac{\delta}{2}(f(a) + f(b))$$ Given the Lipschitz condition and the values $f(a)$ and $f(b)$, for $a \le x \le b$ we have $$f(x) \le g(x) = \cases{f(a) + L(x-a) & for $a \le x \le (a+b)/2$\cr f(b) + L(b-x) &for $(a+b)/2 < x \le b$\cr}$$ so $\int_a^b f(x)\ dx \le \int_a^b g(x)\ dx$, which works out to $$\frac{\delta^2}{4} L + \frac{\delta}{2} (f(a) + f(b))$$ Thus $E_j \le \dfrac{\delta^2}{4} L$. Similarly, we get a lower bound $E_j \ge -\dfrac{\delta^2}{4} L$. So $|E_j| \le \dfrac{\delta^2}{4} L = \dfrac{L}{4n^2}$ and $|I - H_n| \le \sum_{j=1}^n |E_j| \le \dfrac{L}{4n}$.
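To see the bound in action, here is a small Python check (my own illustration, not part of the original answer; the test function is my own choice) using $f(x)=|x-0.3|$, which is Lipschitz on $[0,1]$ with $L=1$ and has exact integral $0.29$:

```python
def composite_trapezoid(f, n):
    """H_n from the question: the trapezoid rule with n equal steps on [0, 1]."""
    h = 1.0 / n
    return h * (f(0.0) / 2 + sum(f(j * h) for j in range(1, n)) + f(1.0) / 2)

# f(x) = |x - 0.3| is Lipschitz on [0, 1] with constant L = 1,
# and its exact integral is 0.3**2/2 + 0.7**2/2 = 0.29.
f = lambda x: abs(x - 0.3)
I = 0.29
for n in (4, 16, 64, 256):
    err = abs(I - composite_trapezoid(f, n))
    assert err <= 1.0 / (4 * n)   # the bound L/(4n) with L = 1
    print(n, err, 1.0 / (4 * n))
```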
{ "language": "en", "url": "https://math.stackexchange.com/questions/238801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Tangent Spaces and Morphisms of Affine Varieties In page 205 of "Algebraic Curves, Algebraic Manifolds and Schemes" by Shokurov and Danilov, the tangent space $T_x X$ of an affine variety $X$ at a point $x \in X$ is defined as the subspace of $K^n$, where $K$ is the underlying field, such that $\xi \in T_x X$ if $(d_x g)(\xi)=0$ for any $g \in I$, where $I$ is the ideal of $K[T_1,\cdots,T_n]$ that defines $X$ and by definition $(d_x g)(\xi)=\sum_{i=1}^n \frac{\partial g}{\partial T_i}(x) \xi_i$, where partial derivatives are formal. So far so good. Next, it is mentioned, that if $f:X \rightarrow Y$ is a morphism of affine varieties, then we obtain a well-defined map $d_x f : T_x X \rightarrow T_{f(x)} Y$. How is this mapped defined and why is it well-defined?
Let $y = f(x)$. Then $f$ induces a map on stalks, which in turn induces a map on cotangent spaces: $$\mathfrak{m}_{y,Y}/\mathfrak{m}_{y,Y}^2 \rightarrow \mathfrak{m}_{x,X}/\mathfrak{m}_{x,X}^2$$ Taking the dual over $K$ yields the map on tangent spaces. To put this into context, your definition of the tangent space makes use of the explicit basis $$\frac{\partial}{\partial T_i} \bigg|_x$$ That is, partial differentiation with respect to the variable $T_i$, followed by evaluation at $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
proof: set countable iff there is a bijection In class we had the following definiton of a countable set: A set $M$ is countable if there is a bijection between $\mathbb N$ and $M$. In our exam today, we had the following thesis given:If $A$ is a countable set, then there is a bijection $\mathbb N\rightarrow A$. So I am really not sure if the thesis and therefore the equivalence in the definition is right. So is it correct? And how do you proove it? Thanks a lot!
Suppose $B$ and $C$ are arbitrary sets, and suppose the following statement holds: "There is a bijection between $B$ and $C$." Observe that it isn't specified whether there is a bijection $B\to C$ or whether there is a bijection $C\to B$. (This may be the source of your confusion.) As it turns out, there is no need to specify. If there is a bijection $B\to C$, then the inverse of that bijection is a bijection $C\to B$; likewise, the existence of a bijection $C\to B$ guarantees the existence of a bijection $B\to C$. All of the following statements, then, are equivalent: "There is a bijection between $B$ and $C$." (No direction specified.) "There is a bijection $B\to C$." (One direction specified.) "There is a bijection $C\to B$." (Other direction specified.) "There is a bijection $B\to C$ and a bijection $C\to B$." (Both directions specified.) Consequently, saying that there is a bijection between $\Bbb N$ and $A$ (i.e.: $A$ is countable) is equivalent to saying that there is a bijection $A\to\Bbb N$, or saying that there is a bijection $\Bbb N\to A$, or saying there is a bijection $A\to\Bbb N$ and a bijection $\Bbb N\to A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/238934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Exponents in Odd and Even Functions I was hoping someone could show or explain why it is that a function of the form $f(x) = ax^d + bx^e + cx^g+\cdots $ going on for some arbitrary length will be an odd function assuming $d, e, g$ and so on are all odd numbers, and likewise why it will be even if $d, e, g$ and so on are all even numbers. Furthermore, why is it if say, $d$ and $e$ are even but $g$ is odd that $f(x)$ will then become neither even nor odd? Thanks.
It’s just a matter of checking the definitions of even function and odd function. Let’s look at a simple case that displays all of the possible behaviors of the more general case: consider $f(x)=ax^m+bx^n$. * *Suppose that $m$ and $n$ are even, say $m=2k$ and $n=2\ell$; then $$(-x)^m=(-x)^{2k}=\left((-x)^2\right)^k=\left(x^2\right)^k=x^{2k}=x^m\;,$$ and similarly $(-x)^n=x^n$, so $$f(-x)=a(-x)^m+b(-x)^n=ax^m+bx^n=f(x)\;,$$ and by definition $f$ is an even function. *Now suppose that $m$ and $n$ are odd, say $m=2k+1$ and $n=2\ell+1$. Then $$(-x)^m=(-x)^{2k+1}=(-x)^{2k}(-x)=x^{2k}(-x)=-x^{2k+1}=-x^m\;,$$ and similarly $(-x)^n=-x^n$, so $$f(-x)=a(-x)^m+b(-x)^n=-ax^m-bx^n=-\left(ax^m+bx^n\right)=-f(x)\;,$$ and by definition $f$ is an odd function. *Finally, suppose that $m$ is odd and $n$ is even. (The argument is the same if $m$ is even and $n$ is odd, just reversing the rôles of $m$ and $n$.) Then by what we’ve already seen we know that $$f(-x)=a(-x)^m+b(-x)^n=-ax^m+bx^n\;,$$ which is neither $f(x)$ nor $-f(x)$ (assuming $a$ and $b$ are nonzero). Thus, $f$ is neither even nor odd. Nothing essentially different happens if there are more than two terms. In fact the names even function and odd function come from the behavior of even and odd powers, respectively.
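A quick numerical spot-check of the three cases (my own illustration, not part of the original answer), using integer inputs so the arithmetic is exact:

```python
even = lambda x: 2 * x**4 + 3 * x**2   # all exponents even
odd  = lambda x: x**5 - 4 * x**3       # all exponents odd
mix  = lambda x: x**3 + x**2           # one odd exponent, one even

for x in (1, 2, 3):
    assert even(-x) == even(x)                         # even function
    assert odd(-x) == -odd(x)                          # odd function
    assert mix(-x) != mix(x) and mix(-x) != -mix(x)    # neither
```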
{ "language": "en", "url": "https://math.stackexchange.com/questions/238980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Question from Folland, criteron for a function to belong to $L^p$ This question is from Folland 6.38, Show that $f \in L^p $ iff $\sum_{k=-\infty}^ {\infty} 2^{pk} \mu \{{x: |f(x)|>2^{k}}\} \lt \infty$ * *If $f \in L^p $, I applied the Chebyshev's inequality *But for the other direction, I don't know how to begin. Any advice would be appreciated. Thanks
On $E_k = \{ 2^k < \left| f(x) \right| \leq 2^{k+1} \}$, we have $$ 2^{pk} < \left|f(x)\right|^{p} \leq 2^{p(k+1)}.$$ Thus integrating on $E_k$ we have $$ 2^{pk} \mu (E_k) < \int_{E_k} \left|f(x)\right|^{p} \, d\mu \leq 2^{p(k+1)} \mu(E_k).$$ Summing through $k$, we obtain $$ \sum_{k=-\infty}^{\infty} 2^{pk} \mu (E_k) < \int \left|f(x)\right|^{p} \, d\mu \leq 2^{p} \sum_{k=-\infty}^{\infty} 2^{pk} \mu(E_k).$$ Finally, note that $$ \sum_{j=-\infty}^{k} 2^{pj} = \frac{2^{kp}}{1 - 2^{-p}}. $$ Thus we have $$ \begin{align*} \sum_{k=-\infty}^{\infty} 2^{pk} \mu (E_k) &= (1 - 2^{-p}) \sum_{k=-\infty}^{\infty} \sum_{j=-\infty}^{k} 2^{jp} \mu (E_k) \\ &= (1 - 2^{-p}) \sum_{j=-\infty}^{\infty} \sum_{k=j}^{\infty} 2^{jp} \mu (E_k) \\ &= (1 - 2^{-p}) \sum_{j=-\infty}^{\infty} 2^{jp} \mu \{ 2^j < \left| f(x) \right| \}, \end{align*} $$ from which the conclusion follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Identity Theorem Let $(\alpha_n)$ be a sequence of complex numbers such that $$ \sum_{n=1}^\infty \frac{\alpha_n}{k^n} = 0 $$ for all $k = 1, 2, 3, ...$ Prove that $\alpha_n=0$ for all $n$ As far as a solution goes, really not sure where to start. Any hints would be appreciated. Thanks.
Consider the power series $f(z)=\sum_{n=1}^\infty\alpha_nz^n$. From $f(1)=0$ you will know that $\lim_{n\to\infty}\alpha_n=0$, so the radius of convergence of the power series is at least $1$. That is to say, $f$ is a well defined holomorphic function on $\mathbb{D}:=\{z\in\mathbb{C}:|z|<1\}$. Your assumption says that $f(\frac{1}{k})=0$ for every $k\ge 1$, but $\lim_{k\to\infty}\frac{1}{k}=0\in \mathbb{D}$, which implies that $f$ must be identically $0$ on $\mathbb{D}$, i.e. $\alpha_n=0$ for every $n\ge 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Showing that $|||A-B|||\geq \frac{1}{|||A^{-1}|||}$? $A,B\in M_n$, $A$ is non-singular and $B$ is singular. $|||\cdot|||$ is any matrix norm on $M_n$, how to show that $|||A-B||| \geq \frac{1}{|||A^{-1}|||}$? The hint is let $B=A[I-A^{-1}(A-B)]$, but I don't know how to use it. Appreciate any help! update: is $\geq$,not $\leq$.Sorry!
This is wrong. In what follows, we assume that we are working with the operator norm. If $A$ is non-singular, it is easy to check that $P$ is non-singular for all $P\in B\left(A;\frac{1}{\Vert A^{-1}\Vert}\right)$. The proof runs as follows. Let $P\in B\left(A,\frac{1}{\|A^{-1}\|}\right)$. We shall prove that $P\in GL_n(\mathbb{R})$. For each $x\in\mathbb{R}^n$, \begin{eqnarray*} \|x\|&=&\|A^{-1}(Ax)\|\leqslant \|A^{-1}\|\|Ax\|\\& \leqslant & \|A^{-1}\|\left(\|(A-P)x\|+\|Px\|\right)\leqslant \|A^{-1}\|\left(\|A-P\|\|x\|+\|Px\|\right). \end{eqnarray*} Thus, $$ \|x\|\leqslant \frac{\|A^{-1}\|}{1-\|A-P\|\|A^{-1}\|}\|Px\|,\text{ for all }x\in\mathbb{R}^n. $$ This implies that $P\in GL_n(\mathbb{R})$ because if $ Px=0$, we have $x=0$. Thus we have proved that $ B\left(A,\frac{1}{\|A^{-1}\|}\right)\subset GL_n(\mathbb{R})$.
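The inequality itself is easy to test numerically for the spectral (operator $2$-) norm. The following pure-Python sketch (my own; helper names are mine) draws random invertible $A$ and singular $B$ in $M_2$ and checks $\Vert A-B\Vert \ge 1/\Vert A^{-1}\Vert$:

```python
import math, random

def op_norm_2x2(M):
    """Spectral norm of a 2x2 matrix: the largest singular value,
    read off from the eigenvalues of M^T M."""
    (a, b), (c, d) = M
    p, q, s = a*a + c*c, a*b + c*d, b*b + d*d   # entries of M^T M
    t, det = p + s, p*s - q*q
    lam_max = (t + math.sqrt(max(t*t - 4*det, 0.0))) / 2
    return math.sqrt(lam_max)

def inv_2x2(M):
    (a, b), (c, d) = M
    det = a*d - b*c
    return ((d/det, -b/det), (-c/det, a/det))

random.seed(0)
for _ in range(1000):
    A = ((random.uniform(-1, 1), random.uniform(-1, 1)),
         (random.uniform(-1, 1), random.uniform(-1, 1)))
    row = (random.uniform(-1, 1), random.uniform(-1, 1))
    s = random.uniform(-2, 2)
    B = (row, (s*row[0], s*row[1]))              # singular: rows proportional
    D = ((A[0][0]-B[0][0], A[0][1]-B[0][1]),
         (A[1][0]-B[1][0], A[1][1]-B[1][1]))     # D = A - B
    assert op_norm_2x2(D) >= 1.0 / op_norm_2x2(inv_2x2(A)) - 1e-9
```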
{ "language": "en", "url": "https://math.stackexchange.com/questions/239230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Basis, dense subset and an inequality Let $V \subset H$, where $V$ is separable in the Hilbert space $H$. So there is a basis $w_i$ in $V$ such that, for each $m$, $w_1, ..., w_m$ are linearly independent and the finite linear combinations are dense in $V$. Let $y \in H$, and define $y_m = \sum_{i=1}^m a_{im}w_i$ such that $y_m \to y$ in $H$ as $m \to \infty$. Then, why is it true that $\lVert y_m \rVert_H \leq C\lVert y \rVert_H$? I think if the $w_i$ were orthonormal this is true, but they're not. So how to prove this statement?
For simplifying calculations, we can assume that $w_m$ is an orthonormal basis, by the following argument: By the Gram–Schmidt orthogonalization process we can define $v_m$ out of the $w_1,..,w_m$'s, such that $v_1,..v_m$ is an orthonormal basis of $\langle w_1,..,w_m\rangle$. Then rewriting the coordinates accordingly, we have $$y_m=\sum_{i\le m} b_{i(m)} v_i $$ Then, $||y_m||^2=\sum_{i\le m}(b_{i(m)})^2$, and, using the continuity of $x\mapsto \langle x,v_i\rangle$, we see that $b_{i(m)}$ tends to $\langle y,v_i\rangle=:b_i$ as $m\to\infty$, and that $$||y||^2=\sum_{i=1}^\infty {b_i}^2.$$ But maybe all this is not needed for your inequality; all that is needed is that the scalar product is continuous, and hence so is the norm, so $$||y_m||\to ||y||$$ It follows that, if $y\ne 0$, the sequence $\displaystyle\frac{||y_m||}{||y||}$ converges (to $1$), hence is bounded. Note, however, that if $y$ happens to be $0$, the claim can fail.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unable to find Lipschitz constant for $y'=(t-1)\sin(y)$ Given the problem: $$y′ = (t − 1)\sin(y),\;\;\;y(1) = 1$$ find an approximation for $y(2)$. Give an upper bound for the global error taking $n = 4$ (i.e., $h = \frac{1}{4}$) The goal is to find an upper bound, not actually find an approximation for $y(2)$. However I don't think it is possible to find an upper bound as we need to use the equation $$|E_n| \leq \frac{T}{L}[(e^{(t_n - t_0)L} - 1)]$$ where $T$ is the max truncation error and $L$ is the Lipschitz constant. But I am unable to get a Lipschitz constant from $y′ = (t − 1)\sin(y)$, I don't think it's possible without being given a bound on $t$. $$|f(t, u) - f(t,v)| = |(t-1)\sin(u) - (t-1)\sin(v)|= |t-1||\sin(u) - \sin(v)|$$ $\sin\;$ has Lipschitz constant of $2$ so we can say $$\leq 2|t-1||u - v|$$ And where can I go from here...? Is the question just not well defined or am I missing something?
Letting $f(t,y) = (t-1)\sin(y)$, you can use that an upper bound on $|\partial_y f(t,y)|$ will be a Lipschitz constant (this is an application of the mean value theorem). To see this, note, as the commenter mentioned, you only need to consider values of $1\leq t \leq 2$. It should be clear that a bound on this partial exists for any $t$ in this interval, and any value of $y$.
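As a quick illustration (mine, not the original answer's): on the relevant strip $1 \le t \le 2$ we have $|\partial_y f| = |(t-1)\cos(y)| \le |t-1| \le 1$, so $L = 1$ works. A crude grid search confirms the bound:

```python
import math

# f(t, y) = (t - 1) * sin(y), so df/dy = (t - 1) * cos(y).
# Scan t in [1, 2] and y over a full period of cos.
grid_max = max(abs((1 + i / 1000 - 1) * math.cos(j / 10))
               for i in range(1001)
               for j in range(-63, 64))
assert grid_max <= 1.0
print(grid_max)   # exactly 1.0, attained at t = 2, y = 0
```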
{ "language": "en", "url": "https://math.stackexchange.com/questions/239360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve the recurrence relation:$ T(n) = \sqrt{n} T \left(\sqrt n \right) + n$ $$T(n) = \sqrt{n} T \left(\sqrt n \right) + n$$ Master method does not apply here. Recursion tree goes a long way. Iteration method would be preferable. The answer is $Θ (n \log \log n)$. Can anyone arrive at the solution.
Use substitution method: $$\Large\begin{align*} \text{T}(n) &= \sqrt{n}\ \text{T}(\sqrt{n})+n\\ &= n^{\frac{1}{2}}\ \text{T}\left(n^{\frac{1}{2}} \right )+n\\ &= n^{\frac{1}{2}}\left( n^{\frac{1}{2^2}}\ \text{T}\left(n^{\frac{1}{2^2}} \right )+n^{\frac{1}{2}} \right )+n\\ &= n^{\frac{1}{2}+\frac{1}{2^2}}\ \text{T}\left(n^{\frac{1}{2^2}}\right ) +n^{\frac{1}{2}+\frac{1}{2}}+n\\ &= n^{\frac{1}{2}+\frac{1}{2^2}}\ \text{T}\left(n^{\frac{1}{2^2}}\right ) +2n\\ &= n^{\frac{1}{2}+\frac{1}{2^2}}\left(n^{\frac{1}{2^3}}\ \text{T}\left(n^{\frac{1}{2^3}}\right ) +n^{\frac{1}{2^2}} \right )+2n\\ &= n^{\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3}}\ \text{T}\left(n^{\frac{1}{2^3}}\right ) +n^{\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^2}} +2n\\ &= n^{\frac{1}{2}+\frac{1}{2^2}+\frac{1}{2^3}}\ \text{T}\left(n^{\frac{1}{2^3}}\right ) +3n\\ \vdots \\ &= n^{\sum_{i=1}^{k}\frac{1}{2^i}}\ \text{T}\left(n^{\frac{1}{2^k}}\right ) +kn\\ \end{align*}$$ assuming $\text{T}(2) = 2$, since $n = 2$ is the smallest value the argument can reach. So, $$\begin{align*} n^{\frac{1}{2^k}} &= 2\\ \frac{1}{2^k}\log_2(n) &= \log_2(2) \\ \log_2(n) &= {2^k} \\ \log_2\log_2(n) &= k\log_2(2) \\ \log_2\log_2(n) &= k \end{align*}$$ therefore, the recurrence relation will look like: $$\large \begin{align*} \text{T}(n)&=n^{\sum_{i=1}^{k}\frac{1}{2^i}}\ \text{T}\left(n^{\frac{1}{2^k}}\right ) +kn\\ &=n^{\sum_{i=1}^{\log_2\log_2(n)}\frac{1}{2^i}}\ \text{T}\left(n^{\frac{1}{2^{\log_2\log_2(n)}}}\right ) +n \log_2\log_2(n)\\ \end{align*}$$ where $$\sum_{i=1}^{\log_2\log_2(n)}\frac{1}{2^i} = 1-\frac{1}{\log_2(n)} < 1 \quad\text{for all } n\geq 2$$ so, $$\large \text{T}(n) = \mathcal{O}\left( n \log_2 \log_2 n \right)$$
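One can also check the asymptotics empirically. Below is a short Python sketch (my own, with $T(n)=n$ for $n\le 2$ as the base case) that evaluates the recurrence at $n = 2^{2^m}$ and compares against $n\log_2\log_2 n$:

```python
import math

def T(n):
    """T(n) = sqrt(n) * T(sqrt(n)) + n, with base case T(n) = n for n <= 2."""
    if n <= 2:
        return float(n)
    r = math.sqrt(n)
    return r * T(r) + n

for m in (4, 5, 6):                # n = 2^16, 2^32, 2^64
    n = 2.0 ** (2 ** m)
    ratio = T(n) / (n * math.log2(math.log2(n)))
    print(n, ratio)                # ratios slowly approach 1 from above
```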
{ "language": "en", "url": "https://math.stackexchange.com/questions/239402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
Convergence of CG method I have a question like how can we mathematically prove that for a general matrix Conjugate Gradient method will always converge within n steps in exact arithmetic ? where n is the size of the matrix.
The CG method, like other Krylov methods, finds a "best" solution to $A x=b$ on successively larger subspaces. In particular, the n'th subspace is $\text{span}(x_0, Ax_0, A^2 x_0, ..., A^n x_0)$, and "best" means the point where the residual is orthogonal to the subspace. Since you keep adding vectors to your subspace and the full space is finite dimensional, eventually there are only 2 possibilities. Either 1) the subspace fills up the whole space, in which case the "best" solution corresponds to the true solution, or 2) at some point $A^{k+1} x_0 \in \text{span}(x_0, Ax_0, A^2 x_0, ..., A^k x_0)$. This might seem bad, but it's actually a "happy coincidence" since it means $\text{span}(x_0, Ax_0, A^2 x_0, ..., A^k x_0)$ is a direct sum of eigenspaces. Everything that starts in there will stay in there under application of $A$, and everything orthogonal to it will stay orthogonal. (Note that $A$ is symmetric, so its eigenvectors are orthogonal.) If the true solution is in this eigenspace we would have found it, and if it isn't we can just pick a point orthogonal to the eigenspace and start again, having reduced the problem by $k$ dimensions.
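To make this concrete, here is a minimal conjugate-gradient sketch in plain Python (my own illustration, not tuned for real use) showing that for a $4\times4$ symmetric positive definite system the iterates reach the exact solution in at most $4$ steps, up to floating-point roundoff:

```python
def matvec(A, x):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def dot(u, v):
    return sum(u_i * v_i for u_i, v_i in zip(u, v))

def cg(A, b):
    """Plain conjugate gradients for symmetric positive definite A,
    started from x0 = 0; in exact arithmetic it terminates in <= n steps."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual b - A x0
    p = r[:]
    for _ in range(n):
        rr = dot(r, r)
        if rr < 1e-28:            # residual already (numerically) zero
            break
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = [x_i + alpha * p_i for x_i, p_i in zip(x, p)]
        r = [r_i - alpha * q_i for r_i, q_i in zip(r, Ap)]
        beta = dot(r, r) / rr
        p = [r_i + beta * p_i for r_i, p_i in zip(r, p)]
    return x

A = [[4.0, 1.0, 0.0, 0.0],        # a 4x4 SPD matrix
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
x_true = [1.0, -2.0, 3.0, 0.5]
b = matvec(A, x_true)
x = cg(A, b)                       # at most 4 iterations
assert max(abs(x_i - t_i) for x_i, t_i in zip(x, x_true)) < 1e-8
```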
{ "language": "en", "url": "https://math.stackexchange.com/questions/239516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof relating to Orthogonal Complement a.) Show that if $A=A^T$ is a symmetric matrix, then $A\mathbf{x}=\mathbf{b}$ has a solution iff b is orthogonal to $\ker A$. b.) Prove that if $K$ is a positive semi-definite matrix and $\mathbf{f}\notin \operatorname{rng}K$, then the quadratic function $$p(\mathbf{x}) = \mathbf{x}^\mathrm{T}K\mathbf{x} -2\mathbf{x}^\mathrm{T}\mathbf{f} + c$$ has no minimum value. c.) Suppose $\{\mathbf{v_1},\ \cdots,\ \mathbf{v_n}\}$ span a subspace $V \subset \mathbb{R}^m$. Prove that $\mathbf{w}$ is orthogonal to $V$ iff $\mathbf{w}\in \operatorname{coker}A$ where $A=\begin{pmatrix}\mathbf{v_1} & \mathbf{v_2} & \cdots & \mathbf{v_n}\end{pmatrix}$ is the matrix with the indicated columns. My attempt: a.) We know that $A=A^\mathrm{T}$, then a vector $\mathbf{x}\in \mathbb{R}^n$ lies in $\ker A$ iff $A\mathbf{x} = \mathbf{0}$. By matrix multiplication we know that the $i^{\text{th}}$ entry of $A\mathbf{x}$ equals the vector product of the $i^{\text{th}}$ row $\mathbf{r_i}^T$ of $A$ and $\mathbf{x}$, hence $\mathbf{r_i}^{T}\cdot \mathbf{x} = \mathbf{r_i} \cdot \mathbf{x} = 0$ iff $\mathbf{x}$ is orthogonal to $\mathbf{r_i}$. Therefore $\mathbf{x}\in \ker A$ iff $\mathbf{x}$ is orthogonal to all the rows of $A$. Thus $A\mathbf{x} = \mathbf{b}$ has a solution iff $\mathbf{b}$ is orthogonal to $\ker A$. Is this correct? b.) I do not know how to do this c.) In this do I have to prove that $\mathbf{w} \in \operatorname{coker}A$ is orthogonal to the range of $A$? I am not exactly sure what they are asking.
For a, your argument is showing that the rowspace is the orthogonal complement of the nullspace. That is not what the question is asking (although it is related). Notice that $\mathbf{b}$ is an element in the columnspace of $A$. The columnspace, rowspace and nullspace of a symmetric matrix are related in a very specific manner. See if you can use these relations. For part b, consider the critical points of the quadratic form. Where do they occur? Does your function have any? For c, what can you say about the left-nullspace and the columnspace of a matrix?
{ "language": "en", "url": "https://math.stackexchange.com/questions/239570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Set of Increasing Functions on an Interval is Closed To prove the existence of nowhere monotone continuous functions, one step is to show that for any interval $I \subset [0,1]$ of positive length, the set $A_I= \{f \in C([0,1]): f|_I$ is increasing$\}$ is closed (i.e. see http://www.apronus.com/math/nomonotonic.htm). I am not sure how to do prove this. My idea is that if $(f_n) \in A_I$ converges to $f$, then for all $x <y \in I$, consider $f(x)-f(y) = (f(x)-f_n(x))+(f_n(x)-f_n(y))+(f_n(y)-f(y))$. The first and third brackets can be made arbitrarily small and the middle bracket is negative, so the left side intuitively should also be negative, which would show that $f \in A_I$. I am not sure how to actually prove this. Thank you.
It is much easier, and directly follows from the result that if $a_n \le b_n$ for all $n$, and if $a_n \to a$ and $b_n \to b$, then $a \le b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$G=\langle a,b\mid aba=b^2,bab=a^2\rangle$ is not metabelian of order $24$ This is my self-study exercise: Let $G=\langle a,b\mid aba=b^2,bab=a^2\rangle$. Show that $G$ is not metabelian. I know; I have to show that $G'$ is not an abelian subgroup. The index of $G'$ in $G$ is 3 and doing Todd-Coxeter Algorithm for finding any presentation of $G'$ is a long and tedious technique (honestly, I did it but not to end). Moreover GAP tells me that $|G|=24$. May I ask you if there is an emergency exit for this problem. Thanks for any hint. :)
We have a homomorphism $a\rightarrow (1,2)(3,4,8,5,6,7)$, $b\rightarrow (1,3,6,2,5,4)(7,8)$. The image has order 24 so it is an isomorphism by your T-C result. The kernel $G'$ of the map to $Z_3$ is generated by $ab, ba$ which is quaternion of order 8; so the group is not metabelian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A Book for abstract Algebra I am self learning abstract algebra. I am using the book Algebra by Serge Lang. The book has different definitions for some algebraic structures. (For example, according to that book rings are defined to have multiplicative identities. Also modules are defined slightly differently....etc) Given that I like the book, is it OK to keep reading this book or should I get another one? Thank you
It is not a book, but in case it helps: there are a lot of materials (video lectures, lecture notes, book references, assignments, ...) provided for free at the MIT OpenCourseWare site (http://ocw.mit.edu/courses/find-by-topic/).
{ "language": "en", "url": "https://math.stackexchange.com/questions/239734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 10, "answer_id": 9 }
Double Expectation of a Conditional Expectation If there are two sub-sigma algebras $\mathcal{G}$ and $\mathcal{H}$ of $\mathcal{F}$, neither a subset of the other from a probability space $(Y,\mathcal{F},P)$ and a random variable $X$ which is not measurable with respect to either $\mathcal{G}$ or $\mathcal{H}$, can I apply double expectation on the conditional expectation of $X|\mathcal{G}$ like this: $$ E[E[X|\mathcal{G}]|\mathcal{H}] = E[X|\mathcal{G}] $$ Thanks.
The property doesn't hold when $\Omega=\{a,b,c\}$, $\cal F:= 2^\Omega$, $\cal G:=\{\emptyset,\{a,b\},\{c\},\Omega\}$, $\cal H:=\{\emptyset,\{a\},\{b,c\},\Omega\}$ and $X:=\chi_{\{b\}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/239792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Find the sum of the first $n$ terms of $\sum^n_{k=1}k^3$ The question: Find the sum of the first $n$ terms of $$\sum^n_{k=1}k^3$$ [Hint: consider $(k+1)^4-k^4$] [Answer: $\frac{1}{4}n^2(n+1)^2$] My solution: $$\begin{align} \sum^n_{k=1}k^3&=1^3+2^3+3^3+4^3+\cdots+(n-1)^3+n^3\\ &=\frac{n}{2}[\text{first term} + \text{last term}]\\ &=\frac{n(1^3+n^3)}{2} \end{align}$$ What am I doing wrong?
Let $f(n)=\Big(\frac{n(n+1)}2\Big)^2$; then $f(n-1)=\Big(\frac{n(n-1)}2\Big)^2$. Now we know that $(n+1)^2 - (n-1)^2=4n$, which implies $n^2\Big((n+1)^2 - (n-1)^2\Big)=4n^3=\big(n(n+1)\big)^2 - \big(n(n-1)\big)^2$ $\implies$ $\frac{\big(n(n+1)\big)^2}4 - \frac{\big(n(n-1)\big)^2}4=\Big(\frac{n(n+1)}2\Big)^2-\Big(\frac{n(n-1)}2\Big)^2=n^3=f(n)-f(n-1)$ $\implies$ $\sum_{n=1}^m \Big(f(n)-f(n-1)\Big)=\sum_{n=1}^m n^3=f(m)-f(0)=f(m)=\Big(\frac{m(m+1)}2\Big)^2$, where we have used $f(0)=0$.
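Both the telescoping identity and the closed form are trivial to verify by machine (a check of my own, in exact integer arithmetic):

```python
f = lambda n: (n * (n + 1) // 2) ** 2

for n in range(1, 500):
    assert f(n) - f(n - 1) == n**3                        # the telescoping step
    assert sum(k**3 for k in range(1, n + 1)) == f(n)     # the closed form
```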
{ "language": "en", "url": "https://math.stackexchange.com/questions/239909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Solve the recurrence $T(n) = 2T(n-1) + n$ Solve the recurrence $T(n) = 2T(n-1) + n$ where $T(1) = 1$ and $n\ge 2$. The final answer is $2^{n+1}-n-2$ Can anyone arrive at the solution?
This recurrence $$T(n) = 2T(n-1) + n$$ is difficult because it contains $n$. Let $D(n) = T(n) - T(n-1)$ and compute $D(n+1) = 2D(n) + 1$; this recurrence is not so difficult. Of course $D(2) = T(2) - T(1) = 4 - 1 = 2 + 1$. The sequence $D(n)$ for $n = 2, 3, 4, \dots$ goes: $2 + 1$, $2^2 + 2 + 1$, $2^3 + 2^2 + 2 + 1$, so $D(n) = 2^{n}-1$ (this also holds for $n = 1$ with the convention $T(0) = 0$). Now $$T(n) = \sum_{i=1}^n D(i) = \sum_{i=1}^n 2^{i} - \sum_{i=1}^n 1 = 2^{n+1}-2-n.$$
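A short Python check of both the difference sequence $D(n)=T(n)-T(n-1)=2^n-1$ and the closed form (my own verification, not part of the original answer):

```python
def T(n):
    t = 1                          # T(1) = 1
    for k in range(2, n + 1):
        t = 2 * t + k              # T(k) = 2 T(k-1) + k
    return t

for n in range(2, 60):
    assert T(n) - T(n - 1) == 2**n - 1     # D(n) = 2^n - 1
for n in range(1, 60):
    assert T(n) == 2**(n + 1) - n - 2      # closed form
```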
{ "language": "en", "url": "https://math.stackexchange.com/questions/239974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 9, "answer_id": 2 }
All derivatives zero at a point $\implies$ constant function? Suppose $f: \mathbb{R} \rightarrow \mathbb{R} $ is a continuous function, and there exists some $a \in \mathbb{R}$ where all derivatives of $f$ exist and are identically $0$, i.e. $f'(a) = 0, f''(a) = 0, \ldots$ Must $f$ be a constant function? or if not, are there examples of non-constant $f$ that satisfy these properties? What if the hypothesis is changed so that the derivatives of $f$ are identically $0$ on an open interval, i.e. $f'(A) = 0, f''(A) = 0, \ldots$ for some open interval $A$. Are the answers still the same?
As others have pointed out, the canonical counter-example is the function $f(x)=e^{-1/x^2}$ (extended by $f(0)=0$). But what is special about this function? The answer is that it is very badly behaved near $0$ in the complex plane, because $-1/x^2$ is arbitrarily large and positive along the imaginary axis close to $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 4, "answer_id": 0 }
Solving differential equation invariant I have eqn: $\frac{dx}{dt} = -y(t)$ and $\frac{dy}{dt} = x(t)$ I know that $(x(0),y(0))= (1,0)$. I want to solve eqn and show that it admits an invariant $I = x(t)^2 + y(t)^2$. I know $x' = -y$, $y' = x$, $x^{\prime\prime} = -y' = -x$ I know general solution of $x" = -x$ is $x = a\sin x = b\cos x$. I know $x(0) = a\sin 0 + b\cos 0 = 1$ So $b = 1$ How can I show $a = 1$? (I think it should!) I tried $x' = a\cos x - b\sin x$ since $y = -x$ but it just gives $ a = 0$.
You have $\frac{dx}{dt} = -y(t)$ and $x(t)=a\sin(t)+\cos(t)$ so take the derivative and use $y(0)=0$. You will find $a=0$, as you have already discovered but do not believe. If you had $a=1$ then you would not have $y(0)=0$. So you have $(x(t),y(t)) = \left((\cos(t),-\sin(t)\right)$. This is a parametric equation of a circle of radius $1$ centred on the origin.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Flow network - minimum capacity cuts proof Let's start out by reviewing max-flow min-cut, as well as the flow networks they operate on. http://en.wikipedia.org/wiki/Flow_network http://en.wikipedia.org/wiki/Max-flow_min-cut_theorem Let $G = (V,E)$ be a flow network. Prove a minimum cut is also a minimum-capacity cuts of $G$. Thanks for any help, this problem has been throwing me for a loop! Note: NOT homework, extra problems from our book I'm working through for exam preparation.
Here is a way to do this. Consider a max flow $f$ and the residual network $E_f$. From the max flow min cut theorem it is easy to get that $E_f$ doesn't contain any edge that goes from $S$ to $T$. Neither does it contain any edges from $S'$ to $T'$. Then it also has no edges going from $S \cup S'$ to $T \cap T'$. But if no edge from $E_f$ crosses the cut $(S \cup S',T \cap T')$, then $$ c(S \cup S', T \cap T') = f(S \cup S', T \cap T'), $$ and using the min cut max flow theorem once again we see that cut $(S \cup S', T \cap T')$ is also minimal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$L_1$ norm of Gaussian random variable Ok. This is a bit confusing. Let $g$ be a Gaussian random variable (normalized, i.e. with mean 0 and standard deviation 1). Then, in the expression $$\|g\|_{L_1}=\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}}|x|\exp(\frac{-x^2}{2})dx =\sqrt{\frac{2}{\pi}},$$ shouldn't the term $|x|$ not appear in the integral?
It should! The $L_1$-norm of a random variable $X$ with density $f(x)$ equals $$\int_{\Omega} |X(w)| d\mu(\omega) = \int_{-\infty}^{+\infty} |x| f(x) dx,$$ where $(\Omega, \mu)$ is the probability space $X$ is defined on.
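A numeric check of the stated value (a quick sketch; the grid spacing and cutoff at $\pm 10$ are arbitrary choices):

```python
import math

# Riemann sum of ∫ |x| φ(x) dx for the standard normal density φ
step = 1e-3
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
total = step * sum(abs(i * step) * phi(i * step) for i in range(-10_000, 10_000))

expected = math.sqrt(2 / math.pi)   # ≈ 0.7979
err = abs(total - expected)
assert err < 1e-4
```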
{ "language": "en", "url": "https://math.stackexchange.com/questions/240202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Number of solutions $x_1x_2\dots x_k = n, x_i, n \in \mathbb{N}$ Here's a question I've been asked: Let $n\in \mathbb{N}$ and let $d_k(n)$ be the number of solutions of $$x_1\dots x_k = n, \hspace{5mm}x_i\in \mathbb{N}$$ I need to show $$d_k(n) = \sum_{d|n}d_{k-1}(d), \hspace{5mm} k \ge 2$$ and that for any $\epsilon > 0$, $d_k(n) = O(n^\epsilon)$ where the implied constant depends only on $k$. I.e $d_k(n) \ll f(k)n^\epsilon$ for some function $f$. I have to admit, I'm lacking an idea of what $d_k(n)$ looks like. Setting up the base case for an inductive argument isn't difficult, but I'm having a hard time picturing how this being true for an arbitrary $k$ implies the statement for $k+1$. The statement $\epsilon > 0$, $d_k(n) = O(n^\epsilon)$ is also very strange to me, having a CS background. I can kind of see how the choice of $\epsilon$ is arbitrary, since it just varies the implied constant in $O(n^\epsilon$). I can't seem to reconcile any of this in my head. Advice? Thanks for your time.
For the summation, note that $d_k(n)$ is the number of ordered $k$-tuples $(x_1,x_2, \dots, x_k)$ with $x_1x_2\cdots x_k=n$. If $d$ is any divisor of $n$, consider the $k$-tuples whose last term $x_k$ is equal to $n/d$. How many of these are there? The $(k-1)$-tuple $(x_1,x_2,\dots, x_{k-1})$ must then have product $d$, and by definition there are $d_{k-1}(d)$ such $(k-1)$-tuples. Finally, sum over all $d$ that divide $n$. For the estimate, probably you are expected to use the just proved sum result, and to use an upper bound on the number of divisors of $n$ to bound $d_k(n)$. Remark: If you are familiar with the term, note that the relationship just proved shows that $d_k(n)$ is multiplicative, that is, that $d_k(ab)=d_k(a)d_k(b)$ whenever $a$ and $b$ are relatively prime. That implies that we can write down $d_k(n)$ once we know the structure of the prime power factorization of $n$.
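A small Python sketch (illustrative only; `divisors` is a naive helper written for this example) checking the recurrence and the multiplicativity remark on small numbers:

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def d(k, n):
    if k == 1:
        return 1                              # only the 1-tuple (n)
    return sum(d(k - 1, dd) for dd in divisors(n))   # the recurrence

# brute force: ordered triples (a, b, c) with a*b*c == n
def brute3(n):
    return sum(1 for a in divisors(n) for b in divisors(n)
               if (a * b) <= n and n % (a * b) == 0)

n = 12
assert d(2, n) == len(divisors(n))            # d_2(n) = number of divisors
assert d(3, n) == brute3(n)
# multiplicativity: d_k(ab) = d_k(a) d_k(b) for gcd(a, b) = 1
assert d(3, 4 * 9) == d(3, 4) * d(3, 9)
ok = True
```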
{ "language": "en", "url": "https://math.stackexchange.com/questions/240282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that if m/n is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better Claim: If $m/n$ is a good approximation of $\sqrt{2}$ then $(m+2n)/(m+n)$ is better. My attempt at the proof: Let d be the distance between $\sqrt{2}$ and some estimate, s. So we have $d=s-\sqrt{2}$ Define $d'=m/n-\sqrt{2}$ and $d''=(m+2n)/(m+n)-\sqrt{2}$ To prove the claim, show $d''<d'$ Substituting in for d' and d'' yields: $\sqrt{2}<m/n$ This result doesn't make sense to me, and I was wondering whether there is an other way I could approach the proof or if I am missing something.
First Proof of Convergence Although not explicitly asked for by the OP, a convergent sequence can be defined. Let $F(x) = \frac{x+2}{x+1}$ and $p_0 \gt 0$ a rational number. Define the sequence $$\tag 1 p_{k+1} = F(p_k), \quad \text{ for } k \ge 0$$ Now J. W. Tanner's answer tells us that $(p_k)$ is alternating about $\sqrt 2$ with each term getting closer. To show convergence we can also assume that $p_0 \lt \sqrt 2$, so that we have an increasing sub-sequence $$\tag 2 {(p_{2k})}_{\, k \ge 0} \text{ with each } p_{2k} \lt \sqrt 2$$ But to skip the odd entries, we can construct the function $$\tag 3 G(x) = F \circ F(x) = \frac{3x+4}{2x+3}$$ so that $$\tag 4 p_{2(k+1)} = \frac{3p_{2k}+4}{2p_{2k}+3}$$ The increasing bounded sequence $\text{(2)}$ has a limit $\alpha \gt 0$ and the following is also true, $$\tag 5 \lim_{k\to +\infty} p_{2k} - p_{2(k+1)} = \lim_{k\to +\infty} p_{2k} - \frac{3p_{2k}+4}{2p_{2k}+3} = 0$$ Using more basic properties of limits, we can also write $$\tag 6 \alpha - \frac{3\alpha+4}{2\alpha+3} = 0$$ and simple algebra shows that $\alpha = \sqrt 2$. Using the fact that each next term is better as we alternate around $\sqrt2$, we must conclude that the starting sequence $\text{(1)}$ also converges to it. Second Proof of Convergence Following J. W. Tanner's hints (see comments), there is a simpler way to prove convergence via the following identity, $\tag 7 \text{For every } x \in \Bbb R \setminus \{-1\}, \quad F(x) - \sqrt 2 = (x-\sqrt2) \left(\frac{1}{1+x}\right)\left(1-\sqrt2\right)$ We can use $\text{(7)}$ since the terms of our sequence $(p_k)$ contain only positive numbers. Moreover, to show convergence we can assume that $|p_0 - \sqrt 2| \lt 1$. But then $\tag 8 \text{For every } k \ge 0, \quad |p_{k+1}-\sqrt2|<(\sqrt2 -1)^k$ and we have convergence.
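A quick numeric illustration of both proofs' conclusions — strictly shrinking error and alternation around $\sqrt 2$ (a sketch with the arbitrary starting point $p_0 = 1$):

```python
import math

p = 1.0                        # p_0 < sqrt(2)
root = math.sqrt(2)
errs, below = [], []
for _ in range(20):
    errs.append(abs(p - root))
    below.append(p < root)
    p = (p + 2) / (p + 1)      # p_{k+1} = F(p_k)

# each step is strictly better, and early iterates alternate around sqrt(2)
assert all(errs[i + 1] < errs[i] for i in range(5))
assert all(below[i] != below[i + 1] for i in range(5))
assert abs(p - root) < 1e-12   # convergence to sqrt(2)
```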
{ "language": "en", "url": "https://math.stackexchange.com/questions/240420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Integrate $\sqrt{1+9x^4} \, dx$ I have puzzled over this for at least an hour, and have made little progress. I tried letting $x^2 = \frac{1}{3}\tan\theta$, and got into a horrible muddle... Then I tried letting $u = x^2$, but still couldn't see any way to a solution. I am trying to calculate the length of the curve $y=x^3$ between $x=0$ and $x=1$ using $$L = \int_0^1 \sqrt{1+\left[\frac{dy}{dx}\right]^2} \, dx $$ but it's not much good if I can't find $$\int_0^1\sqrt{1+9x^4} \, dx$$
If you set $x=\sqrt{\frac{\tan\theta}{3}}$ you have: $$ I = \frac{1}{2\sqrt{3}}\int_{0}^{\arctan 3}\sin^{-1/2}(\theta)\,\cos^{-5/2}(\theta)\,d\theta, $$ so, if you set $\theta=\arcsin(u)$, $$ I = \frac{1}{2\sqrt{3}}\int_{0}^{\frac{3}{\sqrt{10}}} u^{-1/2} (1-u^2)^{-7/4} du,$$ now, if you set $u=\sqrt{y}$, you have: $$ I = \frac{1}{4\sqrt{3}}\int_{0}^{\frac{9}{10}} y^{-3/4}(1-y)^{-7/4}\,dy $$ and this can be evaluated in terms of the incomplete Beta function.
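Independently of the closed form, the original arc-length integral can be checked numerically; here is a hedged sketch using composite Simpson's rule (the step counts are arbitrary):

```python
import math

def simpson(f, a, b, n):                  # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

f = lambda x: math.sqrt(1 + 9 * x ** 4)   # arc-length integrand for y = x^3
L1 = simpson(f, 0.0, 1.0, 200)
L2 = simpson(f, 0.0, 1.0, 400)

assert abs(L1 - L2) < 1e-6                # the two refinements agree
assert 1.0 < L1 < math.sqrt(10)           # integrand lies between 1 and sqrt(10)
```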
{ "language": "en", "url": "https://math.stackexchange.com/questions/240494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Radius, diameter and center of graph The eccentricity $ecc(v)$ of $v$ in $G$ is the greatest distance from $v$ to any other node. The radius $rad(G)$ of $G$ is the value of the smallest eccentricity. The diameter $diam(G)$ of $G$ is the value of the greatest eccentricity. The center of $G$ is the set of nodes $v$ such that $ecc(v) = rad(G)$ Find the radius, diameter and center of the graph Appreciate as much help as possible. I tried following an example and still didn't get it, when you count the distance from a node to another, do you count the starting node too or you count the ending node instead. And when you count, do you count the ones above and below or how do you count? :)
To find the radius of a connected graph $G$, denoted $r$: $r=\frac{v+e}{e-2}$, where $v$ is the number of vertices of the graph and $e$ is the number of edges of the graph.
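Since the graph image is missing from the question, here is a hedged sketch of applying the question's definitions directly — eccentricities via BFS — on a hypothetical path graph $0-1-2-3$ (the example graph is my choice, not the one from the question):

```python
from collections import deque

# hypothetical example graph: the path 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def bfs_dist(src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

ecc = {v: max(bfs_dist(v).values()) for v in adj}   # eccentricity of each node
radius = min(ecc.values())
diameter = max(ecc.values())
center = sorted(v for v in adj if ecc[v] == radius)

assert ecc == {0: 3, 1: 2, 2: 2, 3: 3}
assert (radius, diameter, center) == (2, 3, [1, 2])
```

Note that distance here is the number of edges on a shortest path, so neither endpoint is "counted" — the distance from a node to itself is 0 and to a neighbour is 1.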
{ "language": "en", "url": "https://math.stackexchange.com/questions/240556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 3, "answer_id": 2 }
Proof of Riemann Stieltjes Integral Can anyone help me to understand why the last inequality holds? The $2M\epsilon$ comes from the facts that $M_i-m_i \le 2M$ and $\Delta\alpha < \epsilon$, but why did the other term appear?
Let $ a_j $ be such numbers that $ x_{a_j} = u_j $ and $ b_j $ s. t. $ x_{b_j} = v_j $. Let $ k $ be the number for which $ [u_k, v_k] $ is the last of the intervals covering $ E $. Now we can break down the main sum this way: $ \sum_1^n(M_i-m_i)\Delta\alpha_i = \sum_1^{a_1}(M_i-m_i)\Delta\alpha_i + \sum_{a_1+1}^{b_1}(M_i-m_i)\Delta\alpha_i + \sum_{b_1+1}^{a_2}(M_i-m_i)\Delta\alpha_i + \sum_{a_2+1}^{b_2}(M_i-m_i)\Delta\alpha_i + ... + \sum_{a_k+1}^{b_k}(M_i-m_i)\Delta\alpha_i + \sum_{b_k+1}^{n}(M_i-m_i)\Delta\alpha_i$ In the sums of the form $ \sum_{a_j+1}^{b_j}(M_i-m_i)\Delta\alpha_i $ we have $ M_i - m_i \le 2M $ and $ \alpha(x_{b_j}) - \alpha(x_{a_j}) = \alpha(v_j) - \alpha(u_j) \le \epsilon $. In the others, we can use $ M_i - m_i \le \epsilon $. Try manipulating the sums and you should get the desired inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recursive Integration over Piecewise Polynomials: Closed form? Is there a closed form to the following recursive integration? $$ f_0(x) = \begin{cases} 1/2 & |x|<1 \\ 0 & |x|\geq1 \end{cases} \\ f_n(x) = 2\int_{-1}^x(f_{n-1}(2t+1)-f_{n-1}(2t-1))\mathrm{d}t $$ It's very clear that this converges to some function, and that quite rapidly, as seen in this image, showing the first 8 terms: Furthermore, the derivatives of it have some very special properties. Note how the (renormalized) derivatives consist of repeated and rescaled functions of the previous degree, which is obviously a result of the definition of the recursive integral: EDIT I found the following likely Fourier transform of the expression above. I do not have a formal proof but it holds for all terms I tried it with (first 11). $$ \mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t} $$ Here is an image of how that looks (first 10 terms in the interval $[-8\pi,8\pi]$): With this, my question alternatively becomes: What, if there is one, is the closed form inverse Fourier transform of $\mathcal{F}_x\left[f_n(x)\right](t)=\frac{1}{\sqrt{2\pi}}\frac{2^n \sin \left(2^{-n} t\right)}{t} \prod _{k=1}^n \frac{2^{k} \sin \left(2^{-k} t\right)}{t}$, especially for the case $n\rightarrow\infty$? As a side note, it turns out that this particular product is a particular Borwein integral (Wikipedia), using as a sequence $a_k = 2^{-k}$, which exactly sums to 1. The extra term in the front makes this true for the finite sequence as well. In the limit $k \to \infty$, that term just becomes $1$, not changing the product at all. It is therefore just a finite depth correction.
This equation \begin{equation}f'(x)=2f(2x+1)-2f(2x-1), \quad f(0) = 1, \tag{$*$} \end{equation} has a finite solution which is also known as the $\mathrm{up}(x)$ or $\mathrm{hut}(x)$ function. It has compact support $\mathrm{supp}\,\mathrm{{up}}(x)=[-1,1]$ and its Fourier transform is $\hat{f}(t)=\prod\limits_{k=1}^{\infty}\mathrm{sinc}{(t\cdot 2^{-k})}.$ So, $\mathrm{up}(x)$ is defined by inverse Fourier transform as follows $$ \mathrm{up}(x)=\frac{1}{2\pi}\int\limits_{\mathbb{R}}e^{-itx}\hat{f}(t)\,dt= \frac{1}{2\pi}\int\limits_{\mathbb{R}}e^{-itx}\prod\limits_{k=1}^{\infty}\mathrm{sinc}{(t\cdot 2^{-k})}\,dt. $$ Equation $(*)$ and its finite solution $\mathrm{up}(x)$ was independently introduced in 1971 by V.L. Rvachev, V.A. Rvachev and W. Hilberg. It's interesting to note that a simpler version of $(*)$ was introduced by J. Fabius only 5 years earlier: \begin{equation}f'(x)=2f(2x), \quad f(0) = 0. \tag{$**$} \end{equation} By the way, the signs of the first 2 derivatives of $\mathrm{up}(x)$ are the elements of Thue-Morse sequence $\{+1,-1,-1,+1\}$ as well. P.S. I've uploaded some plots here of generalizations of $\mathrm{up}(x)$ which is known as the $\mathrm{h}_a(x)$ function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 1 }
Sufficient condition for differentiability at endpoints. Let $f:[a,b]\to \mathbb{R}$ be differentiable on $(a,b)$ with derivative $g=f^{\prime}$ there. Assertion: If $\lim_{x\to b^{-}}g(x)$ exists and is a real number $\ell$ then $f$ is differentiable at $b$ and $f^{\prime}(b)=\ell$? Is this assertion correct? If so provide hints for a formal $\epsilon-\delta$ argument. If not, can it be made true if we strengthen some conditions on $g$ (continuity in $(a,b)$ etc.)? Provide counter-examples. I personally think that addition of the continuity of $g$ in the hypothesis won't change anything as for example $x\sin \frac{1}{x}$ has a continuous derivative in $(0,1)$ but its derivative oscillates near $0$. I also know that the converse of this is not true. Also if that limit is infinite, then $f$ is not differentiable at $b$ right?
This is clearly false as stated. For example, consider $f:[0,1] \to \mathbb{R}$, $f(1)=1, f(x)=0$ otherwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/240757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
what's the name of the theorem:median of right-triangle hypotenuse is always half of it This question is related to one of my previous questions. The answer to that question included a theorem: "The median on the hypotenuse of a right triangle equals one-half the hypotenuse". When I wrote the answer out and showed it a friend of mine, he basically asked me how I knew that the theorem was true, and if the theorem had a name. So, my question: -Does this theorem have a name? -If not, what would be the best way to describe it during a math test? Or is it better to write out the full prove every time?
Let us consider the right triangle ABC with the right angle at A, and let AD be the median drawn from the vertex A to the hypotenuse BC. We need to prove that the length of the median AD is half the length of the hypotenuse BC. Draw the straight line DE passing through the midpoint D parallel to the leg AC till the intersection with the other leg AB at the point E. The angle BAC is the right angle by the condition. The angles BED and BAC are congruent, as they are corresponding angles at the parallel lines AC and ED and the transversal AB. Therefore, the angle BED is the right angle. Now, since the straight line DE passes through the midpoint D and is parallel to AC, it cuts the side AB into two congruent segments of equal length: AE = EB. So, the triangles AED and BED are right triangles that have congruent legs AE and EB and the common leg DE. Hence, these triangles are congruent in accordance with the postulate SAS. It implies that the segments AD and DB are congruent as corresponding sides of these triangles. Since DB has length half the length of the hypotenuse BC, we have proved that the median AD has length half the length of the hypotenuse.
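A coordinate sanity check of the statement (an illustrative sketch with an arbitrary 3-4-5 triangle):

```python
import math

# hypothetical right triangle with the right angle at A
A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
D = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # midpoint of the hypotenuse BC

AD = math.dist(A, D)
BC = math.dist(B, C)
assert abs(AD - BC / 2) < 1e-12              # median = half the hypotenuse
```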
{ "language": "en", "url": "https://math.stackexchange.com/questions/240819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Combinatorics problem with negative possibilities I know how to solve the basic number of solutions equations, such as "find the number of positive integer solutions to $x_1 + x_2 + x_3 = 12$, with ". But I have no clue how to do this problem: Find the number of solutions of $x_1+x_2-x_3-x_4 = 0$ in integers between -4 and 4, inclusive. If I try and solve it like the basic equations, I get $C(n+r-1,r) = C(0+9-1,9) = C(8,9)$, which is obviously improper. Can someone point me in the right direction on how to solve this type of problem?
Hints: (1). Let $y_1=x_1$, $y_2=x_2$, $y_3=-x_3$, $y_4=-x_4$. We want to solve $y_1+y_2+y_3+y_4=0$, with the $y_i\dots$. (2) Let $z_i=y_i+4$. We want to solve $z_1+z_2+z_3+z_4=\dots$.
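Following the hints, a brute-force check against the stars-and-bars count with inclusion–exclusion on the upper bounds (a sketch; the inclusion–exclusion step is my addition, not part of the hints):

```python
from itertools import product
from math import comb

# brute force: integer solutions of x1 + x2 - x3 - x4 = 0, each xi in [-4, 4]
brute = sum(1 for x in product(range(-4, 5), repeat=4)
            if x[0] + x[1] - x[2] - x[3] == 0)

# after the hints' substitutions: z1 + z2 + z3 + z4 = 16 with 0 <= zi <= 8,
# counted by stars and bars with inclusion-exclusion on the caps zi <= 8
formula = sum((-1) ** j * comb(4, j) * comb(16 - 9 * j + 3, 3)
              for j in range(2))

assert brute == formula
```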
{ "language": "en", "url": "https://math.stackexchange.com/questions/240890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
On operator norm of Hilbert spaces I was going through some lecture notes on operators on Hilbert space and on a proof it is stated note that $||T||=\sup_{||x||=||y||=1}\text{Re}\langle Tx,y\rangle$ However I cannot see why this is true. Is this something that holds only for symmetric operators or is it more general? Can someone enlighten me? The above can be found in here, it is in the proof of lemma 1.
This is more general. By definition, $$\|T\| = \sup_{x \ne 0} \|Tx\|/\|x\| = \sup_{\|x\|=1} \|Tx\|$$ But for any $z$ in the Hilbert space there is $y$ with $\|y \| = 1$ such that $\langle z, y \rangle = \|z\|$, while of course for any $y$ with $\|y\|=1$, $\text{Re} \langle z, y \rangle \le |\langle z, y \rangle| \le \|z\|$.
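A numeric illustration with a random matrix on $\mathbb{R}^5$ (a sketch; for real matrices the supremum is the largest singular value, attained at the corresponding singular vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))      # a generic operator on R^5

U, s, Vt = np.linalg.svd(A)
op_norm = s[0]                       # ||A|| = largest singular value

# the supremum of <Ax, y> over unit x, y is attained at singular vectors
x_star, y_star = Vt[0], U[:, 0]
attained = float(y_star @ A @ x_star)
assert abs(attained - op_norm) < 1e-10

# and random unit vectors never exceed it
samples = []
for _ in range(200):
    x = rng.standard_normal(5); x /= np.linalg.norm(x)
    y = rng.standard_normal(5); y /= np.linalg.norm(y)
    samples.append(float(y @ A @ x))
assert max(samples) <= op_norm + 1e-12
```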
{ "language": "en", "url": "https://math.stackexchange.com/questions/241020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
show that the equation $x^5+5px^3+5p^2x+q=0$ will have a pair of equal roots, if $q^2+4p^5=0$ how can I show that the equation $x^5+5px^3+5p^2x+q=0$ will have a pair of equal roots, if $q^2+4p^5=0$. can anyone help me.thanks a lot.
A polynomial has a pair of equal roots if it has a common factor with its derivative, so you might want to compute the derivative and then see whether that condition gives it a common root with the original polynomial. There is a more systematic approach, involving the resultant (which you can look up). The polynomial and its derivative have a common root if and only if the resultant of the polynomial and its derivative is zero. So, you can calculate the resultant, then see whether $q^2+4p^5$ is a factor of the resultant. These seem like unlikely approaches for something at the level of algebra-precalculus, though.
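A quick computer-algebra sanity check of the claim for one sample pair $(p,q)$ with $q^2+4p^5=0$ (an illustrative sketch, not the resultant computation suggested above):

```python
from sympy import symbols, gcd, diff, degree

x = symbols('x')
p, q = -1, 2                       # a sample pair satisfying the condition
assert q ** 2 + 4 * p ** 5 == 0

f = x ** 5 + 5 * p * x ** 3 + 5 * p ** 2 * x + q
g = gcd(f, diff(f, x))             # nontrivial gcd <=> repeated root
assert degree(g, x) >= 1
has_repeated_root = True
```

Here $f = x^5-5x^3+5x+2 = (x^2-x-1)^2(x+2)$, so the repeated roots are the golden ratio and its conjugate.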
{ "language": "en", "url": "https://math.stackexchange.com/questions/241087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Growth in direct product I started reading with growth of groups. The problem is the following: Problem : Let $G$ be an infinite finitely generated group. a. If $G$ is polynomial growth then so is $G^m$ (a direct product of $G$). Moreover, the growth function $\gamma_G$ is not equivalent of $\gamma_{G^m}$ unless $m=1$. b. If $G$ is exponential growth then so is $G^m$, and their growth functions are equivalent. Could any one give me a hint. Thanks in advance.
Let $S= \{e_1, \dots ,e_r \}$ be a generator set of $G$. For $G^m$, take the generator set $S_m= \{(e_1,1),\dots,(e_r,1),(1,e_1) ,\dots ,(1,e_r)\}$. Claim: For $m=2$, $\ell g(x,y)=\ell g(x)+ \ell g(y)$. If $x=w_1$ and $y=w_2$ where $w_1$ and $w_2$ are words over $S$, then $(x,y)=(w_1,1) \cdot (1,w_2)$, so $\ell g(x,y) \leq \ell g(x)+ \ell g(y)$. If $(x,y)=w_3$ where $w_3$ is a word over $S_2$, you can write $w_3=(w_1,1) \cdot (1,w_2)$ where $w_1$ and $w_2$ are words over $S$ (since $G \times \{1\}$ and $\{1\} \times G$ commute and have trivial intersection); in particular, $\ell g(w_3)=\ell g(w_1)+ \ell g(w_2)$. Therefore, $\ell g(x,y) \geq \ell g(x)+ \ell g(y)$. Now by induction, you can show that: Lemma: $\displaystyle \ell g(x_1, \dots ,x_m)= \sum\limits_{i=1}^m \ell g(x_i)$. You deduce that $\left\{ \begin{array}{ccc} B_{G^m}(k) & \to & B_{G}(k)^m \\ (x_1,\dots,x_m) & \mapsto & (x_1,\dots,x_m) \end{array} \right.$ and $\left\{ \begin{array}{ccc} B_{G}(k)^m & \to & B_{G^m}(km) \\ (x_1,\dots,x_m) & \mapsto & (x_1,\dots,x_m) \end{array} \right.$ are well-defined and injective, hence: $$\gamma_{G^m}(k/m) \leq \gamma_{G^m}(k) \leq \gamma_G(k)^m \leq \gamma_{G^m}(km).$$ Consequently, $\gamma_{G}^m \sim \gamma_{G^m}$. Then the given assertions follow.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving an equation with invertible matrices I need to prove that if $A$, $B$ and $(A + B)$ are invertible then $(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B$ I'm a bit lost with this one, I can't find a way to make any assumptions about $(A^{-1} + B^{-1})$, Neither by using $A^{-1}$, $B^{-1}$ and $(A + B)^{-1}$. If someone could clue me in I'll be grateful. UPDATE: Thanks for all of your input - it really helped, I got a solution, I'd like to know if my way of proving it is valid: $(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B$ multiplying both sides by $A^{-1}$ and $B^{-1}$ $A^{-1}(A^{-1} + B^{-1})^{-1}B^{-1} = (A+B)^{-1}$ now multiplying both sides by $(A + B)$, I get to $(A+B)(A^{-1}(A^{-1} + B^{-1})^{-1}B^{-1}) = I$ Lets call $(A^{-1}(A^{-1} + B^{-1})^{-1}B^{-1})$ -> $C$ Now I determine that $C$ is $(A+B)^{-1}$, because $(A+B)C$ equals the Identity Matrix. So lastly to verify that I place $C$ in the original equation: $(A^{-1} + B^{-1})^{-1} = A(A^{-1}(A^{-1} + B^{-1})^{-1}B^{-1})B$ and from this I get: $(A^{-1} + B^{-1})^{-1} = (A^{-1} + B^{-1})^{-1}$
$$ A+B=B(B^{-1}+A^{-1})A, $$ So $$ (A+B)^{-1}=A^{-1}(A^{-1}+B^{-1})^{-1}B^{-1}. $$ Finally,$$ A(A+B)^{-1}B=A(A^{-1}(A^{-1}+B^{-1})^{-1}B^{-1})B=(A^{-1}+B^{-1})^{-1}. $$
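A quick numeric check of the identity (a sketch; the shift by $4I$ is just a cheap way to make the random matrices safely invertible):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # generically invertible
B = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # and so is A + B

inv = np.linalg.inv
lhs = inv(inv(A) + inv(B))
rhs = A @ inv(A + B) @ B
assert np.allclose(lhs, rhs)
identity_holds = bool(np.allclose(lhs, rhs))
```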
{ "language": "en", "url": "https://math.stackexchange.com/questions/241287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How to show that a form on $\mathbb{C}$ defines a holomorphic $1$-form on $\mathbb{C}/\Gamma$? Can anyone give me a hint on how to start solving this problem? Let $\Gamma \subset \mathbb{C}$ be a lattice. Show that the form $dz$ on $\mathbb{C}$ defines a holomorphic $1$-form on $\mathbb{C}/\Gamma$. Any help appreciated!
Thank you for the information provided. Here are some little hints to get you started: * *Your idea is the right one. Choose a chart, find the appropriate function, and prove it. *Your holomorphic $f$ will be in fact $1$. *Your $Y$ is in fact the whole torus $\mathbb{C}/\Gamma$. The solution, if you write it down, should look almost trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Intersection multiplicity of two curves. I want to compute the intersection multiplicity of $YZ=X^2$ and $YZ=(X+Z)^2$ at $P=(0:1:0)$ In an affine nbd of $P$, let $(X:Y:Z)=(x:1:z)$ $$I_P=\dim \mathcal{O}_{\mathbb{A}_k^2,(0,0)}/(x^2-z,x^2+2xz+z^2-z)$$ In my previous question, I learned that if $(0,0)$ is the only zero, I can change the local ring to $k[x,z]$. But the problem occurs since there are intersection points not only $(0,0)$, but also $(-2,4)$. I saw the Fulton's book(section 2.9 prop 6), which says that $$k[x,z]/I \simeq \mathcal{O}_{(0,0)}/I\mathcal{O}_{(0,0)} \times \mathcal{O}_{(-2,4)}/I\mathcal{O}_{(-2,4)}$$ So now how can I compute the intersection multiplicity?
That there are multiple points of intersection does not really affect your calculation. In order to compute the intersection multiplicity at $P,$ we must pass to the local ring, which forgets other intersection points. Thus, you can compute $$I_P=\dim_k k[x,z]_{(x,z)}/(z-x^2,z-(x+z)^2)_{(x,z)},$$ i.e. we can localize $k[x,z]$ at the prime $(x,z)$ and then take the quotient. Of course, localization commutes with quotients, so we have $$I_P=\dim_k (k[x,z]/(z-x^2,z-(x+z)^2))_{(x,z)}=\dim_k (k[x]/(x^2-(x+x^2)^2))_{(x)}=\dim_k (k[x]/(x^3(-2-x)))_{(x)}=\dim_k k[x]/(x^3)$$ since $-2-x$ is a unit in the localization $k[x]_{(x)}$. This shows that the multiplicity is $I_P=3.$
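The substitution step can be checked with a computer algebra system (an illustrative sketch):

```python
from sympy import symbols, roots

x = symbols('x')
# eliminating z via z = x^2 from the two curves
expr = x ** 2 - (x + x ** 2) ** 2
# expr factors as -x^3 * (x + 2)
assert (expr + x ** 3 * (x + 2)).expand() == 0

r = roots(expr, x)            # {0: 3, -2: 1}
assert r[0] == 3              # multiplicity 3 at the origin, so I_P = 3
```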
{ "language": "en", "url": "https://math.stackexchange.com/questions/241432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Hint about an identity Does anyone have a hint on how to prove: $\sum_{n=0}^N \sum_{k=1}^N g_k y_n = \sum_{n=1}^N \sum_{k=1}^n g_k y_{n-k}+\sum_{n=1}^N\sum_{k=1}^n g_n y_{N-k+1}$ Thanks in advance!
It's almost always helpful to expand both sides to see where all the terms come from: LHS: We can factor out $g_{k}$ from the inner sum, since it is independent of both $n$ and $N$. Thus, we have $g_{1}(y_{0}+y_{1}+\ldots+y_{N})+g_{2}(y_{0}+\ldots+y_{N})+\ldots+g_{N}(y_{0}+\ldots +y_{N})$, which we can easily see is $$y_{0}(g_{1}+g_{2}+\ldots+g_{N})+y_{1}(g_{1}+\ldots+g_{N})+\ldots+y_{N}(g_{1}+\ldots+g_{N})$$ RHS: I think your first sum is incorrect; it should read $\sum_{n=1}^{N} \sum_{k=1}^{n} g_{k}y_{n-k}$. If we expand this first sum, where each set of square brackets contains the next value for $n$ (the first set for $n=1$, the next for $n=2$, etc.), we get the following:$[g_{1}y_{0}]+[g_{1}y_{1}+g_{2}y_{0}]+[g_{1}y_{2}+g_{2}y_{1}+g_{3}y_{0}]+\ldots+[g_{1}y_{N-1}+g_{2}y_{N-2}+\ldots g_{N}y_{0}]$ Which, after rearranging, is equal to $y_{0}(g_{1}+g_{2}+\ldots+g_{N})+y_{1}(g_{1}+g_{2}+\ldots+g_{N-1})+\ldots y_{N-1}g_{1}$ Before expanding the second sum, let's see what we have. Each of the $y_{k}$ Has $N-k$ of the terms we want it to be multiplied by. So $y_{0}$ is being multiplied by all the $g_{k}$ we require, $y_{1}$ by all but $g_{N}$, and so on. Let us move onto the second sum. Expanding similarly, $[g_{1}y_{N}]+[g_{2}y_{N}+g_{2}y_{N-1}]+\ldots+[g_{N}y_{N}+g_{N}y_{N-1}+\ldots g_{N}y_{1}]$. By rearranging, we have $y_{1}g_{N}+y_{2}(g_{N-1}+g_{N})+\ldots+y_{N}(g_{1}+g_{2}+\ldots+g_{N})$ Add the two sums on the RHS and collect your terms, and your identity follows.
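A brute-force check of the identity (a sketch with arbitrary random data; indexing follows the question, $y_0,\dots,y_N$ and $g_1,\dots,g_N$):

```python
import random

random.seed(0)
N = 7
y = [random.randint(-5, 5) for _ in range(N + 1)]       # y_0 .. y_N
g = [None] + [random.randint(-5, 5) for _ in range(N)]  # g_1 .. g_N (1-indexed)

lhs = sum(g[k] * y[n] for n in range(0, N + 1) for k in range(1, N + 1))
rhs = (sum(g[k] * y[n - k] for n in range(1, N + 1) for k in range(1, n + 1))
       + sum(g[n] * y[N - k + 1] for n in range(1, N + 1) for k in range(1, n + 1)))
assert lhs == rhs
```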
{ "language": "en", "url": "https://math.stackexchange.com/questions/241494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Faster than Fast Fourier Transform? Is it possible to make an algorithm faster than the fast Fourier transform to calculate the discrete Fourier transform (are there proofs for or against it)? OR, one that only approximates the discrete Fourier transform, but is faster than $O(N\log N)$ and still gives reasonably good results? Additional requirements: 1) Let's leave quantum computing out 2) I don't mean faster in the sense of how it's implemented for some specific hardware, but in the "Big-O notation sense", i.e. that it would run, e.g., in linear time. Sorry for my English
There is an algorithm called sparse fast Fourier transform (sFFT), which is faster than FFT algorithms when the Fourier coefficients are sparse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/241547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
covariance of integral of Brownian What is the covariance of the process $X(t) = \int_0^t B(u)\,du$ where $B$ is a standard Brownian motion? i.e., I wish to find $E[X(t)X(s)]$, for $0<s<t<\infty$. Any ideas? Thank you very much for your help!
$$\mathbb E(X(t)X(s))=\int_0^t\int_0^s\mathbb E(B(u)B(v))\,\mathrm dv\,\mathrm du=\int_0^t\int_0^s\min\{u,v\}\,\mathrm dv\,\mathrm du$$ Edit: As @TheBridge noted in a comment, the exchange of the order of integration is valid by Fubini theorem, since $\mathbb E(|B(u)B(v)|)\leqslant\mathbb E(B(u)^2)^{1/2}\mathbb E(B(v)^2)^{1/2}=\sqrt{uv}$, which is uniformly bounded on the domain $[0,t]\times[0,s]$ hence integrable on this domain.
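For $0<s\le t$ the double integral evaluates to $s^2(3t-s)/6$ — this closed form is my addition, not part of the quoted answer — and a midpoint-rule sketch agrees with it:

```python
# midpoint-rule evaluation of ∫_0^t ∫_0^s min(u, v) dv du
s, t = 1.0, 2.0
n = 400
hu, hv = t / n, s / n
total = sum(min((i + 0.5) * hu, (j + 0.5) * hv)
            for i in range(n) for j in range(n)) * hu * hv

closed = s ** 2 * (3 * t - s) / 6   # = 5/6 for s = 1, t = 2
assert abs(total - closed) < 1e-2
```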
{ "language": "en", "url": "https://math.stackexchange.com/questions/241618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Finding a High Bound on Probability of Random Set first time user here. English not my native language so I apologize in advance. Taking a final in a few weeks for a graph theory class and one of the sample problems is exactly the same as the $k$-edge problem. We need to prove that if each vertex is selected with probability $\frac{1}{2}$, then the probability of the selected set being an independent set is $\geq (\frac{2}{3})^k$ (instead of $\frac{1}{2^k}$, as in the original problem). Here is what I am trying: I am looking into calculating the number of independent sets for each possible number of vertices $i$, for $i=0$ to $i=n$. Once I calculate the probability of there being an independent set for the number of vertices $n$, I can take the expectation and union bound.
You can adapt the answer of mjkxxxx to the mathstackexchange problem that you mentioned : From each edge, you can choose one vertex. You get a collection of $K$ vertices with $K\leqslant k$ (because one same vertex can be chosen for different edges), say $v_1, \dots, v_K$. Each chosen vertex $v_i$ is linked by an edge to $k_i$ different vertices, with $k_i\geqslant 1$, and $\sum_{i=1}^{K} k_i=k $ (if the graph is simple). Now, if, for any vertex $v_i$, either you don't select $v_i$, or you don't select any of the vertices linked to $v_i$, then you will get an independent set. The probability of this event is greater than (and not equal, because some vertices can be linked to several $v_i$'s) : $$ \prod_{i=1}^K \left(\frac12 +\frac12 (\frac12 )^{k_i}\right)$$ Then, using the convexity of the function $x\mapsto \log\left(\frac12 +(\frac12 )^{x+1} \right)$ and Jensen's inequality, the probability is greater than $$ \exp\left(K \log\left(\frac12 +(\frac12 )^{(\frac1K\sum k_i)+1} \right) \right) = \left(\frac12 +(\frac12 )^{\frac{k}{K}+1} \right)^K $$ Then setting $x=k/K$, the above formula can be written $$ \left(\left(\frac12 + (\frac12 )^{x+1} \right)^{\frac1x}\right)^k $$ and the function $x\mapsto \left(\frac12 + (\frac12 )^{x+1} \right)^{\frac1x}$ is increasing from $\frac34$ when $x=1$ to $1$ when $x\rightarrow\infty$. We conclude that the probability of picking an independent set is greater than $\left(\frac34\right)^k$. Remark 1 : This bound is optimal. When the graph is the union of couples of vertices linked by an edge, all the $k_i$'s are equal to $1$, all the inequalities become equalities and the probability of picking an independent set is exactly $(3/4)^k$. Remark 2 : If your graph was not simple, the probability would be greater than $(3/4)^{k'}$ where $k'$ is the number of useful edges.
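Remark 1 can be checked by enumeration: on $k$ disjoint edges, exactly $3^k$ of the $4^k$ equally likely vertex subsets are independent (a small illustrative sketch with $k=3$):

```python
from itertools import combinations

k = 3
edges = [(0, 1), (2, 3), (4, 5)]          # k disjoint edges on 2k vertices
vertices = range(2 * k)

independent = 0
for r in range(2 * k + 1):
    for subset in combinations(vertices, r):
        S = set(subset)
        # independent iff no edge has both endpoints selected
        if not any(u in S and v in S for u, v in edges):
            independent += 1

# each vertex picked with probability 1/2, so P(independent) = 3^k / 4^k
assert independent == 3 ** k
```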
{ "language": "en", "url": "https://math.stackexchange.com/questions/241681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove complement of this 12-vertex graph is a tree Let G12 be a simple graph of 12 vertices, and H12 its complement. It is known that G12 has 7 vertices of degree 10, 2 vertices of degree 9, 1 vertex of degree 8 and 2 vertices of degree 7. Prove or disprove H12 is a tree.
In the complement graph, every vertex is adjacent to the vertices it was not adjacent to originally, so from the degree sequence of $G_{12}$ we can obtain the degree sequence of $H_{12}$. For example each of the 7 degree 10 vertices in $G_{12}$ are adjacent to only one vertex in $H_{12}$ (remember that their degree is at most 11, as they can't be adjacent to themselves). So in $H_{12}$ we have seven vertices of degree 1, two vertices of degree 2, one vertex of degree 3 and two vertices of degree 4. Given this sequence we now want to know if $H_{12}$ is a tree. If we knew that $H_{12}$ was connected, we could just count the number of edges (a connected graph on $n$ vertices with exactly $n-1$ edges is a tree - not a bad exercise in itself), so if we assume $H_{12}$ is connected, we can count the edges (half the sum of the degrees) and see that it is a tree. We can go a little further by actually drawing the graph, which we can do by (going back small step) asking whether the degree sequence is graphic. In this case we have the sequence $\{4,4,3,2,2,1,1,1,1,1,1,1\}$. This sequence is graphic if and only if the sequence $\{3,2,1,1,1,1,1,1,1,1,1\}$ is graphic - what this implies is that we can construct the graph by taking the vertex of largest degree $d$ and assuming it is adjacent the next $d$ vertices of largest degree. Then one vertex is taken care of, and $d$ of the remaining vertices have one of their edges taken care of. Applying this to our sequence we get: However this is not the only graph with this degree sequence, there are other possibilities, at least one of which (I checked) has a cycle. So really what we have is that there is at least one possible graph fitting the description of $G_{12}$ that has a complement graph $H_{12}$ that is a tree. If you want a more arcane test, you can construct a multinomial with the degree sequence that has integer solutions if and only if the sequence can be realized as a tree (Gupta, Joshi & Tripathi, 2007).
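The degree-sequence reasoning can be sketched in code — the edge count $n-1$ plus a Havel–Hakimi graphicality test (an illustrative implementation, not from the answer):

```python
# degree sequence of H12 derived above
seq = [4, 4, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1]
n = len(seq)
edges = sum(seq) // 2
assert edges == n - 1          # necessary for a tree (given connectedness)

def havel_hakimi(s):
    """Return True iff the degree sequence s is graphic."""
    s = sorted(s, reverse=True)
    while s and s[0] > 0:
        d = s.pop(0)           # take the largest degree...
        if d > len(s):
            return False
        for i in range(d):     # ...and connect it to the next d largest
            s[i] -= 1
            if s[i] < 0:
                return False
        s.sort(reverse=True)
    return True

graphic = havel_hakimi(seq)
assert graphic
```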
{ "language": "en", "url": "https://math.stackexchange.com/questions/241733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Basic theory about divisibility and modular arithmetic I am awfully bad with number theory so if one can provide a quick solution of this, it will be very much appreciated! Prove that if $p$ is a prime with $p \equiv 1 \pmod 4$ then there is an integer $m$ such that $p$ divides $m^2 + 1$
The Legendre symbol satisfies: $$\left(\frac{-1}{p}\right) = (-1)^\tfrac{p-1}{2} =\begin{cases} \;\;\,1\mbox{ if }p \equiv 1\pmod{4} \\ -1\mbox{ if }p \equiv 3\pmod{4} \end{cases}.$$ Thus for $p\equiv 1 \pmod4, \ \left(\frac{-1}{p}\right)=1$ and the result follows. Also see this.
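This can even be made constructive: by Wilson's theorem, $m=\left(\frac{p-1}{2}\right)!$ satisfies $m^2\equiv -1 \pmod p$ whenever $p\equiv 1 \pmod 4$. A quick computational check (the factorial is hopelessly slow for large $p$, but fine for illustration):

```python
from math import factorial

def sqrt_of_minus_one(p):
    """For a prime p = 1 (mod 4), return m with m^2 + 1 = 0 (mod p)."""
    assert p % 4 == 1
    return factorial((p - 1) // 2) % p

for p in [5, 13, 17, 29, 37, 41]:
    m = sqrt_of_minus_one(p)
    assert (m * m + 1) % p == 0
print("verified")
```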
{ "language": "en", "url": "https://math.stackexchange.com/questions/241808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sum of the eigenvalues of adjacency matrix Let $G$ be a simple undirected graph with $n$ vertices, and let $A_G$ be the corresponding adjacency matrix. Let $\kappa_1, \dots , \kappa_n$ be the eigenvalues of the adjacency matrix $A_G$. I have read that $$ \kappa_1 + \dots + \kappa_n = 0, $$ and I have checked this by hand for $n \leq 3$ along with a few "random" graphs. It is true in all the cases I have checked. It seems that we want to write $A_G = U^{T} D U$, where $D$ is a diagonal matrix with trace $\text{tr}(D) = 0$, but I have been unable to supply a proof. How would one prove this?
If there are no self loops, the diagonal entries of an adjacency matrix are all zeros, which implies $trace(A_G)=0$. Also, it is a symmetric matrix. Now use the connection between the trace of a symmetric matrix and the sum of its eigenvalues (they are equal). To prove this: since $A_G$ is symmetric, $A_G=U^{-1}DU$ for some orthogonal (in particular unitary) matrix $U$, by the spectral theorem. Now, note that the trace has the cyclic property, i.e. $trace(ABC)=trace(BCA)$. So \begin{align} 0=trace(A_G)=trace(U^{-1}DU)=trace(DUU^{-1})=trace(D) \end{align} and $trace(D)$ is the sum of the eigenvalues.
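A quick numerical illustration with NumPy (the 4-cycle is an arbitrary example graph):

```python
import numpy as np

# Adjacency matrix of a 4-cycle (any simple graph without loops works)
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

eigenvalues = np.linalg.eigvalsh(A)   # A is symmetric, so use eigvalsh
print(eigenvalues.sum())              # ~0, equal to trace(A) = 0
```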
{ "language": "en", "url": "https://math.stackexchange.com/questions/241875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
prove sum of divisors of a square number is odd Don't know how to prove that the sum of all divisors of a square number is always odd. ex: $27 \vdots 1,3,9,27$; $27^2 = 729 \vdots 1,3,9,27,81,243,729$; $\sigma_1 \text{(divisor function)} = 1 + 3 + 9 + 27 + 81 + 243 + 729 = 1093$ is odd; I think it is somehow connected to the fact that every odd divisor gets some kind of a pair when a number is squared and 1 doesn't get it, but I can't formalize it. Need help.
Sum of divisors $=\prod_{p_i\mid n}(1+p_i+p_i^2+\cdots+p_i^{2P_i})$ where the $p_i$ are the distinct primes dividing $n$ and $p_i^{2P_i}\mid\mid n$ (the exponents are even because $n$ is a square). Now, for odd $p_i$, $p_i^r+p_i^{2P_i+1-r}$ is even (odd plus odd) for $0< r\le 2P_i+1-r\implies 0<r\le P_i$, so pairing $r$ with $2P_i+1-r$ leaves only $p_i^0=1$ unpaired and each factor is odd. (If $p_i=2$, every term except the leading $1$ is even, so the factor is again odd.) Alternatively, $p_i^k\equiv p_i\pmod 2$ for $k\ge 1$, so for odd $p_i$, $$1+p_i+p_i^2+\cdots+p_i^{2P_i}\equiv 1+\sum_{1\le s\le 2P_i}p_i\equiv 1+2P_ip_i\equiv 1\pmod 2.$$
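A brute-force check of the claim for small squares (nothing clever, just a direct divisor sum):

```python
def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

# sigma(k^2) is odd for every k in this range
for k in range(1, 60):
    assert sigma(k * k) % 2 == 1, k

print(sigma(27 ** 2))  # 1093, odd, matching the example in the question
```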
{ "language": "en", "url": "https://math.stackexchange.com/questions/241935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 1 }
$p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{(\frac{p-1}{2})}*(p-1)! \equiv 1 \pmod p$ $p$ an odd prime, $p \equiv 3 \pmod 8$. Show that $2^{\left(\frac{p-1}{2}\right)}\cdot(p-1)! \equiv 1 \pmod p$ From Wilson's thm: $(p-1)!= -1 \pmod p$. hence, need to show that $2^{\left(\frac{p-1}{2}\right)} \equiv -1 \pmod p. $ we know that $2^{p-1} \equiv 1 \pmod p.$ Hence: $2^{\left(\frac{p-1}{2}\right)} \equiv \pm 1 \pmod p. $ How do I show that this must be the negative option?
By Euler's Criterion, $2^{(p-1)/2} \equiv 1 \pmod p$ would require an $x \in \Bbb N$ such that $2 \equiv x^2 \pmod p$, i.e. $2$ would have to be a quadratic residue mod $p$. However, no such $x$ exists in $\Bbb Z_p$ when $p \equiv 3 \pmod 8$: by the second supplement to quadratic reciprocity, $\left(\frac{2}{p}\right)=(-1)^{(p^2-1)/8}=-1$ for $p \equiv 3, 5 \pmod 8$. For example, in $\Bbb Z_3$, $1^2=2^2=1 \neq 2$, and in $\Bbb Z_{11}$, no square is $\equiv 2 \pmod {11}$. Hence $2^{(p-1)/2} \equiv -1 \pmod p$.
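The nonresidue fact is easy to verify computationally for small primes, using Python's three-argument `pow` for modular exponentiation:

```python
def is_prime(n):
    """Trial division, fine for this range."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in range(3, 500):
    if is_prime(p) and p % 8 == 3:
        # Euler's criterion: 2^((p-1)/2) mod p should equal p - 1, i.e. -1
        assert pow(2, (p - 1) // 2, p) == p - 1, p

print("2 is a nonresidue for every prime p = 3 (mod 8) below 500")
```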
{ "language": "en", "url": "https://math.stackexchange.com/questions/242007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Showing that $\langle p\rangle=\int\limits_{-\infty}^{+\infty}p |a(p)|^2 dp$ How do I show that $$\int \limits_{-\infty}^{+\infty} \Psi^* \left(-i\hbar\frac{\partial \Psi}{\partial x} \right)dx=\int \limits_{-\infty}^{+\infty} p \left|a(p)\right|^2dp\tag1$$ given that $$\Psi(x)=\frac{1}{\sqrt{2 \pi \hbar}}\int \limits_{-\infty}^{+\infty} a(p) \exp\left(\frac{i}{\hbar} px\right)dp\tag2$$ My attempt: $$\frac {\partial \Psi(x)}{\partial x} = \frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} \frac{\partial}{\partial x} \left(a(p)\exp\left(\frac{i}{\hbar} px\right)\right)dp\tag3$$ $$=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)\frac{i}{\hbar}p \cdot dp\tag4$$ Multiplying by $-i\hbar$: $$-i\hbar \frac {\partial \Psi}{\partial x}=\frac{1}{\sqrt{2\pi \hbar}} \int\limits_{-\infty}^{+\infty} a(p) \cdot \exp\left(\frac{i}{\hbar} px\right)p \cdot dp\tag5$$ At this point I'm stuck because I don't know how to evaluate the integral without knowing $a(p)$. And yet, the right hand side of equation (1) doesn't have $a(p)$ substituted in.
You almost got it, you just have to plug your result (5) into the left hand side of (1) and also use \begin{equation} \Psi^*(x)=\frac{1}{\sqrt{2\pi}}\int\thinspace dq\thinspace a^*(q)\exp(-iqx) \end{equation} where I have set $\hbar=1$. The left hand side of (1) then reads \begin{eqnarray} \int dx \thinspace \Psi^*(-i\frac{\partial}{\partial x})\Psi&=&\frac{1}{2\pi}\int dp\int dq\int dx\thinspace a^*(q)\thinspace p \thinspace a(p)\exp\Big(i(p-q)x\Big)\\\\\\&=&\frac{1}{2\pi}\int dp\int dq\thinspace a^*(q)\thinspace p \thinspace a(p)\Big[\int dx\exp\Big(i(p-q)x\Big)\Big]\\\\\\\end{eqnarray} The integral in $x$ is simply the plane wave representation of a Dirac delta function $\int dx\exp\Big(i(p-q)x\Big)=2\pi\thinspace\delta(p-q)$. Plugging this above and integrating in $q$ gives you what you want, i.e. \begin{eqnarray} \int dx \thinspace \Psi^*(-i\frac{\partial}{\partial x})\Psi&=&\int dp\int dq\thinspace a^*(q)\thinspace p \thinspace a(p)\delta(p-q)\\ &=&\int dp\thinspace p\thinspace a^*(p)\thinspace a(p)\\ &=&\int dp\thinspace p\thinspace |a(p)|^2 \end{eqnarray}
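As a numerical sanity check of (1) (with $\hbar=1$): take a Gaussian wave packet $\Psi(x)=\pi^{-1/4}e^{ip_0x}e^{-x^2/2}$, whose momentum distribution $|a(p)|^2$ is a Gaussian centered at $p_0$, so both sides of (1) equal $p_0$. The grid sizes below are arbitrary choices of mine:

```python
import cmath, math

p0 = 1.7                       # chosen momentum center (arbitrary)
N, L = 4000, 20.0              # grid points, half-width of the x grid
dx = 2 * L / N
xs = [-L + i * dx for i in range(N + 1)]
norm = math.pi ** -0.25        # so that |psi|^2 integrates to 1
psi = [norm * cmath.exp(1j * p0 * x - x * x / 2) for x in xs]

# central-difference derivative, Riemann sum of psi* (-i dpsi/dx)
expect_p = 0.0
for i in range(1, N):
    dpsi = (psi[i + 1] - psi[i - 1]) / (2 * dx)
    expect_p += (psi[i].conjugate() * (-1j) * dpsi).real * dx

print(expect_p)  # ~ 1.7 = p0
```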
{ "language": "en", "url": "https://math.stackexchange.com/questions/242117", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What's the derivative of $\int_1^x\sin(t)dt$? What's the derivative of the integral $$\int_1^x\sin(t) dt$$ Any ideas? I'm getting a little confused.
You can use the fundamental theorem of calculus, but if you have not yet covered that theorem, in short, you'll be taking the derivative - with respect to x - of the integral of $\sin(t)dt$ when the integral is evaluated from $1$ to $x$: $$\frac{d}{dx}\left(\int_1^x \sin(t) \text{d}t\right) = \frac{d}{dx} [-\cos t]_1^x = \frac{d}{dx}\left(-\cos(x) - (-\cos(1))\right) = \sin(x).$$ and you'll no doubt be encountering the Fundamental Theorem of Calculus very, very soon: For any integrable function $f$, and constant $a$: $$\frac{d}{dx} \left(\int_a^x f(t)dt \right)= f(x),$$ (provided $f$ is continuous at $x$).
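The theorem is easy to probe numerically: approximate $F(x)=\int_1^x \sin(t)\,dt$ by the trapezoid rule and differentiate it with a central difference (the step sizes below are arbitrary choices):

```python
import math

def F(x, n=20000):
    """Trapezoidal approximation of the integral of sin(t) from 1 to x."""
    h = (x - 1.0) / n
    total = 0.5 * (math.sin(1.0) + math.sin(x))
    for i in range(1, n):
        total += math.sin(1.0 + i * h)
    return total * h

x, eps = 2.0, 1e-4
deriv = (F(x + eps) - F(x - eps)) / (2 * eps)   # numerical d/dx of F
print(deriv, math.sin(x))                       # both ~ 0.909297
```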
{ "language": "en", "url": "https://math.stackexchange.com/questions/242203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How to explain that division by $0$ yields infinity to a 2nd grader How do we explain that dividing a positive number by $0$ yields positive infinity to a 2nd grader? The way I intuitively understand this is $\lim_{x \to 0}{a/x}$ but that's asking too much of a child. There's got to be an easier way. In response to the comments about it being undefined, granted, it is undefined, but it's undefined because of flipping around $0$ in positive or negative values and is in any case either positive or negative infinity. Yet, $|\frac{2}{0}|$ equals positive infinity in my book. How do you convey this idea?
One way to explain that division of $x$ by $0$ is undefined is by contradiction. Suppose $x/0 = a$ and suppose $x$ is a nonzero value. Then, by cross multiplication, we get $0\cdot a = x$. At this point ask the child what number times $0$ equals a nonzero number. After a little thought the child will most likely say that any number times zero is $0$, so that $0\cdot a = x$, with $x$ a nonzero number, is not possible. Next consider $x = 0$, so you have $0/0$. Let $0/0 = b$. Then you cross multiply to get $0\cdot b = 0$. Now ask the child to come up with a number that satisfies this equation. The child will most likely realize that any number will do and pick one, say $5$. $0\cdot 5 = 0$, true. Now say, what about $0\cdot 6$? The child will say that equals zero too. So, going back to $x/0$, there is no solution, and in the case of $0/0$, in effect, any solution will do. Neither of these is allowed in mathematics. The above is not a proof of course but it might help a little. Note: the explanation doesn't really work for the case where $x/0 = 0$ or $0/0 = 0$. I imagine this observation would have to be modified a lot to be useful but perhaps it would be a good starting point for explaining that division by $0$ is undefined. Also, a way I use to think of limit is to imagine what happens when you place smaller and smaller numbers under, say $1$. $1/(1/2)$, $1/(1/100)$, $1/(1/1000000)$ etc. I imagine that any child will know that you flip the denominator to simplify these equations so you get $1\cdot (2/1) = 2$; $1\cdot (100/1) = 100$; $1\cdot (1000000/1) = 1000000$ etc. These two approaches combined might be used to explain one reason for limit. One thing that limit allows is to go as close as you would like to something that is not possible, i.e. division by zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 13, "answer_id": 2 }
Proof that the common zeros of multihomogeneous polynomials is a Zariski closed subset of a product of projective spaces Let $K$ be an algebraically closed field. Let $n, m \ge 0$ be integers. A polynomial $F \in K[x_0,\dots,x_n,y_0,\dots,y_m]$ is called bihomogeneous of bidegree $(p,q)$ if $F$ is a homogeneous polynomial of degree $p$ (resp. $q$) when considered as a polynomial in $x_0,\dots,x_n$ (resp. $y_0,\dots,y_m$) with coefficients in $K[y_0,\dots,y_m]$ (resp. $K[x_0,\dots,x_n]$). Let $P^n, P^m$ be projective spaces over $K$. We consider $P^n$ and $P^m$ as topological spaces equipped with the Zariski topology. We consider $P^n\times P^m$ as a topological space equipped with the product topology. Let $(F_i)_{i\in I}$ be a family of bihomogeneous polynomials in $K[x_0,\dots,x_n,y_0,\dots,y_m]$. Let $Z = \{(x, y) \in P^n\times P^m \mid F_i(x, y) = 0$ for all $i \in I\}$. Then how do you prove that $Z$ is a closed subset of $P^n\times P^m$? This is a related question
The result you ask about is not true. For example take $n=m=1$ and the family consisting of the sole polynomial $F=x_0y_1-x_1y_0$, which is bihomogeneous of bidegree $(1,1)$. Its zero locus is the diagonal $\Delta =\lbrace (a,a)\mid a\in \mathbb P^1\rbrace \subset \mathbb P^1\times \mathbb P^1$ which is not closed in the product topology of $\mathbb P^1\times \mathbb P^1$, since the only closed sets of that topology are unions of points, vertical lines and horizontal lines. The result holds however if you endow $\mathbb P^n\times \mathbb P^m$ with the topology induced by the Segre embedding $\mathbb P^n\times \mathbb P^m \hookrightarrow \mathbb P^{(n+1)(m+1)-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Completely additive functions Other than the logarithm, does anybody know of any other important completely additive functions? By completely additive I mean $f(ab)=f(a)+f(b)$ for any integers $a,b$.
There is a general description of such functions. For each prime $p$, choose $f(p)$ arbitrarily. Then for a general integer $n=\prod_ip_i^{n_i}$, we have $f(n)=\sum_i n_i f(p_i)$. Possible choices for $f(p)$ include $\log p$ (which gives the logarithm) and $1$ for every $p$ (which gives $\Omega(n)$, the number of prime factors counted with multiplicity), but there are uncountably many more choices. (PS: Gerry's example of the "$p$-adic order function" comes from setting $f(p) = 1$ for one specific prime $p$ and $f(q) = 0$ for all other primes $q$.)
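For instance, the choice $f(p)=1$ for every prime gives $\Omega(n)$, the number of prime factors counted with multiplicity; complete additivity (for all $a,b$, not just coprime pairs) is easy to check by brute force:

```python
def big_omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

# complete additivity: f(ab) = f(a) + f(b) for ALL a, b
for a in range(2, 40):
    for b in range(2, 40):
        assert big_omega(a * b) == big_omega(a) + big_omega(b)

print(big_omega(12))  # 3, since 12 = 2*2*3
```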
{ "language": "en", "url": "https://math.stackexchange.com/questions/242440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Two curious "identities" on $x^x$, $e$, and $\pi$ A numerical calculation on Mathematica shows that $$I_1=\int_0^1 x^x(1-x)^{1-x}\sin\pi x\,\mathrm dx\approx0.355822$$ and $$I_2=\int_0^1 x^{-x}(1-x)^{x-1}\sin\pi x\,\mathrm dx\approx1.15573$$ A furthur investigation on OEIS (A019632 and A061382) suggests that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$ (i.e., $\left\vert I_1-\frac{\pi e}{24}\right\vert<10^{-100}$ and $\left\vert I_2-\frac\pi e\right\vert<10^{-100}$). I think it is very possible that $I_1=\frac{\pi e}{24}$ and $I_2=\frac\pi e$, but I cannot figure them out. Is there any possible way to prove these identities?
Somebody actually posted a very interesting approach on MathOverflow. Here is a link to that idea. It mainly involves these two integral representations, multiplying them together and applying the Euler reflection formula. Indeed, letting $\ell(u):=u-\ln u$ for $u>0$, note that for $x\in(0,1)$ $$\qquad \Gamma(x)x^{-x}=\int_0^\infty e^{-x\ell(u)}\,du=\int_0^\infty e^{-x\ell(u)}\,\frac{du}u $$ and also $$\qquad \Gamma(1-x)(1-x)^{x-1}=\int_0^\infty e^{(x-1)\ell(v)}\,dv=\int_0^\infty e^{(x-1)\ell(v)}\,\frac{dv}v. $$ This is really for the sake of alternate ideas.
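Independently of any proof, the second identity $I_2=\pi/e$ can be checked numerically to high accuracy; a plain composite Simpson's rule suffices, since the integrand is smooth on $[0,1]$ and vanishes at both endpoints:

```python
import math

def f(x):
    return x ** (-x) * (1 - x) ** (x - 1) * math.sin(math.pi * x)

# composite Simpson's rule on [0, 1]
n = 2000                                  # even number of subintervals
h = 1.0 / n
total = f(0.0) + f(1.0)                   # both endpoint values are ~0
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(i * h)
I2 = total * h / 3

print(I2, math.pi / math.e)  # both ~ 1.15573
```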
{ "language": "en", "url": "https://math.stackexchange.com/questions/242587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "78", "answer_count": 3, "answer_id": 2 }
Can a cubic that crosses the x axis at three points have imaginary roots? I have a cubic polynomial, $x^3-12x+2$, and when I try to find its roots by hand, I get two complex roots and one real one. The same happens if I use Mathematica. But when I plot the graph, it crosses the x-axis at three points. So if a cubic crosses the x-axis at three points, can it have imaginary roots? I think not, but I might be wrong.
Since I do not have a copy of Mathematica with me right now, I will post a link to the Wolfram Alpha results: http://www.wolframalpha.com/input/?i=solve+x3%E2%88%9212x%2B2+%3D+0, which are the same as the Mathematica results, since Wolfram Alpha uses Mathematica as a computational engine.
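For what it's worth, the three real roots can be confirmed without any computer algebra system, just by checking sign changes of $f(x)=x^3-12x+2$ and appealing to the intermediate value theorem (the bracketing intervals are my own choices):

```python
def f(x):
    return x ** 3 - 12 * x + 2

# f changes sign on each interval, so each interval contains a real root
intervals = [(-4, -3), (0, 1), (3, 4)]
for a, b in intervals:
    assert f(a) * f(b) < 0

# locate each root by bisection; a cubic has at most three roots,
# so these three real roots leave no room for imaginary ones
roots = []
for a, b in intervals:
    for _ in range(60):
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    roots.append((a + b) / 2)

print(roots)  # ~ [-3.545, 0.167, 3.377]
```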
{ "language": "en", "url": "https://math.stackexchange.com/questions/242703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Learning about the universe or special/general relativity I have done a standard course in differential geometry/Riemannian geometry. Am I now able to understand the concepts people talk about when they say things like "spacetime is curved" and when I see things like outer space looking curved. Does one need to know other physics concepts or can I just learn it from now? Can anyone recommend me anything to learn it from? I want to learn about things like black holes and metrics and want to avoid things like electromagnetism etc. Thanks
The canonical book for GR is Robert Wald's book (General Relativity). Physicists are usually a bit scared of it because it has so much differential geometry. It is an excellent book, and I think it is quite good for mathematicians. If you want to learn a more physical approach to GR, I recommend Weinberg's GR book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Is this a correct proof that an isomorphic isometry preserves separability? Let $(X,\|\cdot\|_X)$ be a separable normed space and let $(Y,\|\cdot\|_Y)$ be a normed space. Assume they are both infinite dimensional. Let $$T:(X,\|\cdot\|_X) \longrightarrow (Y,\|\cdot\|_Y)$$ be an isomorphic isometry. What I'm interested to know is whether $(Y,\|\cdot\|_Y)$ will also be separable. What I am really interested to know is whether or not there can exist such a map between $c$ and $l^\infty$, but it doesn't hurt to be a bit more general. My attempt: Take $y\in Y$. Then there exist $x \in X$ such that $T(x) = y$. Also there exist a countable dense subset $X'$ of $X$ such that for every $\epsilon > 0$ there exist $x' \in X'$ such that $\| x' - x \|_X < \epsilon$. But since $T$ is an isometry we get $$\| x' - x \|_X = \| T(x' - x) \|_Y = \| T(x') - y \|_Y < \epsilon$$ But $T(x') \in T(X')$ and $T(X')$ is countable (T is bijective) so therefore $(Y,\|\cdot\|_Y)$ must be separable. Thank you in advance!
Yes, your proof is correct. But I would write it slightly differently: first I would introduce $X'$, and take its image in $Y$ as a countable subset. Afterwards I would prove that $T(X')$ is dense in $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/242890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime? It is clear that 11 and 101 are primes whose digit sum is 2. I wonder whether there are more, or infinitely many, such primes. At first, I was thinking of the number $10^n+1$. Soon, I realized that $n\neq km$ for odd $k>1$, otherwise $10^m+1$ is a factor. So, here is my question: Are there infinitely many integers $n\ge 0$ such that $10^{2^n}+1$ is prime? After a few minutes: I found that if $n=2$, $10^{2^n}+1=10001=73\times137$, not a prime; if $n=3$, $10^{2^n}+1=17\times5882353$, not a prime; $n=4$, $10^{2^n}+1=353\times449\times641\times1409\times69857$, not a prime. Now I wonder if 11 and 101 are the only two primes with this property.
Since no one else has mentioned it: Standard heuristics in number theory suggest that there are only finitely many primes of the form $(2k)^{2^n}+1$ for any integer $k>0.$ The probability that a random number around $(2k)^{2^n}+1$ is prime is roughly $1/(2^n\log(2k))$; if you take into account the congruence conditions for such numbers and treat the chance that such a number is prime as a random variable, then the expectation is $C_k/2^n$ and the sum over these values converges. If you sum this 'probability' over $n\ge24$ the expected number of primes of this form is less than 0.000001.
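The concrete computations quoted in the question are easy to verify exactly, since Python integers have arbitrary precision:

```python
def is_prime_trial(n):
    """Deterministic trial division -- fine for the small cases here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# n = 0, 1 give the primes 11 and 101
assert is_prime_trial(10 ** (2 ** 0) + 1)
assert is_prime_trial(10 ** (2 ** 1) + 1)

# the factorizations quoted in the question, verified exactly
cases = {2: [73, 137], 3: [17, 5882353], 4: [353, 449, 641, 1409, 69857]}
for n, factors in cases.items():
    prod = 1
    for f in factors:
        prod *= f
    assert prod == 10 ** (2 ** n) + 1

print("n = 0, 1 give primes; n = 2, 3, 4 are composite as stated")
```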
{ "language": "en", "url": "https://math.stackexchange.com/questions/242949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }