Algorithm to multiply nimbers Let $a,b$ be nimbers. Is there an efficient algorithm to calculate $a*b$, the nim-product of $a$ and $b$? The following rule seems like it could be helpful: $$ 2^{2^m} * 2^{2^n} = \begin{cases} 2^{2^m} 2^{2^n} & \text{if $m \ne n$} \\ 3(2^{2^m - 1}) & \text{if $m = n$} \\ \end{cases}. $$ Juxtaposition denotes ordinary ordinal multiplication here (not nim-multiplication).
An algorithm is given at https://www.ics.uci.edu/~eppstein/numth/ (C++ implementation of J.H.Conway's "nimber" arithmetic.). The function to actually perform the multiplication is at nimber.C:316:nim_times.
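For illustration, here is a minimal Python sketch of that recursion (my own sketch, not Eppstein's code). It uses the rule quoted in the question: for a Fermat 2-power $F=2^{2^k}$, nim-multiplying $F$ by anything smaller is ordinary multiplication, while $F * F = F + F/2$; nim-addition is bitwise XOR.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def nim_mult(a, b):
    """Nim-product of natural numbers a and b (unoptimized sketch)."""
    if a < 2 or b < 2:
        return a * b  # 0 and 1 multiply as usual
    # largest Fermat 2-power F = 2^(2^k) with F <= max(a, b) < F*F
    F = 2
    while F * F <= max(a, b):
        F *= F
    a1, a0 = divmod(a, F)  # split the bit pattern into high/low halves
    b1, b0 = divmod(b, F)
    t = nim_mult(a1, b1)
    # expand (a1*F + a0)(b1*F + b0), using F*F = F ^ (F // 2)
    high = t ^ nim_mult(a1, b0) ^ nim_mult(a0, b1)
    return (high * F) ^ nim_mult(a0, b0) ^ nim_mult(t, F // 2)

assert nim_mult(2, 2) == 3 and nim_mult(4, 4) == 6  # matches the rule above
```

Memoization keeps the recursion tractable; a serious implementation (such as the one linked) works on bit fields directly.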
{ "language": "en", "url": "https://math.stackexchange.com/questions/909304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Find the probability of winning at this lottery. So, the problem I found goes like this: You have $n$ different numbers, numbered from $ 1 $ to $n$. You can randomly choose $m$ (different) of them. The computer also randomly selects $m$ (different) of them. If you and the computer have exactly $k$ common numbers, then you win a certain amount of money. The problem asks us to find the probability of winning. I have solved some easier problems involving probabilities. But here, the only thing I could think of was that the probability for a certain sequence of $m$ numbers to emerge is: $$ \frac{1}{\dbinom{n}{m}} $$ How do you solve it? I'm still getting used to this type of problem and could really use some help.
Let us assume you have picked your $m$ numbers. Now it's the computer's turn. It has to match $k$ of your numbers. Which $k$? These can be chosen in $\binom{m}{k}$ ways. Then it has to produce $m-k$ numbers which do not match any of yours. This can be done in $\binom{n-m}{m-k}$ ways. So the number of ways the computer can match $k$ of your numbers is $\binom{m}{k}\binom{n-m}{m-k}$. For the probability, divide $\binom{m}{k}\binom{n-m}{m-k}$ (the number of "favourables") by the number of (equally likely) choices the computer can make. This is $\binom{n}{m}$.
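As a quick numeric check of this formula (a minimal sketch; the 6-of-49 parameters are just an example):

```python
from math import comb

def win_prob(n, m, k):
    """P(the computer matches exactly k of your m chosen numbers out of n)."""
    return comb(m, k) * comb(n - m, m - k) / comb(n, m)

# e.g. a 6-of-49 lottery, exactly 3 matching numbers
print(win_prob(49, 6, 3))  # ~0.01765
```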
{ "language": "en", "url": "https://math.stackexchange.com/questions/909472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
What is $\lim_{n\to \infty}\frac{2n \choose {n}}{4^n}$? What is the result of the following limit? $$\lim_{n\to \infty}\frac{2n \choose {n}}{4^n}$$ since $$\sum_{k=0}^{2n}{2n \choose {k}}=2^{2n}=4^n$$ then $$\frac{4^n}{2n+1}\leq{2n \choose {n}}\leq 4^n$$ and the limit is clearly $\in [0,1]$, but what is it exactly?
Hint: Use Stirling approximation $$ n! \sim n^n e^{-n} \sqrt{2 \pi n} $$
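Carrying the hint out gives $\binom{2n}{n}\sim \frac{4^n}{\sqrt{\pi n}}$, so the limit is $0$. A quick numeric check (a sketch):

```python
from math import comb, pi, sqrt

for n in (10, 100, 1000, 10000):
    ratio = comb(2 * n, n) / 4**n
    print(n, ratio, ratio * sqrt(pi * n))  # last column -> 1, so ratio -> 0
```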
{ "language": "en", "url": "https://math.stackexchange.com/questions/909678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Expressing $ 12\sin( \omega t - 10) $ in cosine form $$ 12\sin( \omega t - 10) $$ I understand how it's solved when using the graphical method; however, I'm having trouble understanding something about the trigonometric identities method. The solution in the textbook goes like this (it wants positive amplitudes): (All angles are in degrees) $$ 12\cos( \omega t - 10 - 90) $$ $$ 12\cos( \omega t - 100) $$ I know that in order to convert from sine to cosine angle you either add or subtract $90$ degrees. What I don't understand is whether I should add or subtract to get the equivalent with positive amplitude. The way I approach this is that I imagine the graph where $+\cos \omega t$ is the positive $x$-axis, $-\cos \omega t$ is the negative $x$-axis, $+\sin \omega t$ is the negative $y$-axis and $-\sin \omega t$ is the positive $y$-axis. Since I want to change from positive amplitude sine to positive amplitude cosine I add $90$ degrees. But apparently that is incorrect. Please explain this to me.
The identities you can use are: \begin{align} \sin x&=\cos(90°-x)\\ \cos x&=\cos(-x) \end{align} Therefore $$ \sin(\omega t-10°)=\cos(90°-(\omega t-10°))= \cos(100°-\omega t)=\cos(\omega t-100°). $$ Of course, you could also directly use $$ \sin x=\cos(90°-x)=\cos(x-90°). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/909871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Density of Pythagorean triples We define a Pythagorean triple as a triple $<a,b,c>$ such that $a,b,c\in \mathbb N$ and $a^2+b^2=c^2$. In order to avoid duplicates, we say that a triple $<a,b,c>$ is legit iff $b>a$. Let $\mathcal P$ be the set of all legit Pythagorean triples. We define $$L_{PT}^N=\{<a,b,c> | <a,b,c> \in \mathcal P\wedge b\leq N\}$$ (If it's more convenient we can define it for $b^2\leq N$, $c\leq N$ or $c^2\leq N$). What is the density of $|L_{PT}^N|$ as a function of $N$? e.g. is $|L_{PT}^N|=\Theta(N^2)?\Theta(N)?$ We say that a triple $<a,b,c>$ is minimal if $\gcd(a,b,c)=1$. Let $\mathcal P_M$ be the set of all legit, minimal triples. Let $$L_{MPT}^N=\{<a,b,c> | <a,b,c> \in \mathcal P_M\wedge b\leq N\}$$ What is the density of $|L_{MPT}^N|$ as a function of $N$? e.g. is $|L_{MPT}^N|=\Theta(N)?$
It was proved by Lehmer that the number of primitive triples with hypotenuse less than $x$ is asymptotic to $$ \frac{x}{2\pi} \approx 0.15915494309x $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/909954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Expected value expressed by CDF. I have found the following formula for the expected value: $$\operatorname{E}[X] = \int_0^\infty \int_0^x \! \mathrm{d}t \, \mathrm{d}F(x) = \int_0^\infty \int_t^\infty \! \mathrm{d}F(x)\mathrm{d}t = \int_0^\infty \! (1-F(t))\,\mathrm{d}t$$ and I don't understand this equality: $$\int_0^\infty \int_0^x \! \mathrm{d}t \, \mathrm{d}F(x) = \int_0^\infty \int_t^\infty \! \mathrm{d}F(x)\mathrm{d}t.$$ At first I thought it was just Fubini, but the ranges of the integrals don't match. Probably it's something obvious but for now I cannot come up with an idea so I'd be glad for any tip.
Define the function $f\left(x,t\right)$ by $\left(x,t\right)\mapsto1$ if $t<x$ and $\left(x,t\right)\mapsto0$ otherwise. Applying Fubini you find: $$\int_{0}^{\infty}\!\!\int_{0}^{x}dtdF\left(x\right)=\int_{0}^{\infty}\!\!\int_{0}^{\infty}f\left(x,t\right)dtdF\left(x\right)=\int_{0}^{\infty}\!\!\int_{0}^{\infty}f\left(x,t\right)dF\left(x\right)dt=\int_{0}^{\infty}\!\!\int_{t}^{\infty}dF\left(x\right)dt$$
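A quick Monte Carlo sanity check of the resulting identity $\operatorname{E}[X] = \int_0^\infty (1-F(t))\,dt$, using $X\sim\operatorname{Exp}(1)$ (a sketch; both printed numbers should be close to $1$):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.exponential(size=10**6)  # X ~ Exp(1), so E[X] = 1
t = np.linspace(0, 50, 100_001)
tail = np.exp(-t)                      # 1 - F(t) = e^{-t} for Exp(1)
dt = t[1] - t[0]
print(samples.mean(), tail.sum() * dt) # crude Riemann sum of the tail integral; both ≈ 1
```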
{ "language": "en", "url": "https://math.stackexchange.com/questions/910177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve for $\theta$: $a = b\tan\theta - \frac{c}{\cos\theta}$ This question was initially posted on SO (Link). I'm not sure the answer given there was correct. I cannot get the results from those expressions to match my CAD model. The title pretty much sums it up. How do I solve for theta given the following equation. $$a = b\tan\theta - \frac{c}{\cos\theta}$$ I am not a student and this is not homework. It's been quite a while since I've done any significant trig and I'm out of time to figure this out.
The given equation is equivalent to $$ b\sin(\theta)-a\cos(\theta)=c. $$
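One standard way to finish, sketched via the harmonic-addition identity (assuming $c^2\le a^2+b^2$ so that a real solution exists): $$b\sin\theta-a\cos\theta=\sqrt{a^2+b^2}\,\sin(\theta-\varphi)=c,\qquad \tan\varphi=\frac{a}{b},$$ which gives one family of solutions $\theta=\varphi+\arcsin\dfrac{c}{\sqrt{a^2+b^2}}$.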
{ "language": "en", "url": "https://math.stackexchange.com/questions/910269", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 8, "answer_id": 6 }
Derivative of the Inverse Cumulative Distribution Function for the Standard Normal Distribution As the title says, I am trying to find the derivative of the inverse cumulative distribution function for the standard normal distribution. I have this figured out for one particular case, but there is an extra layer of complexity that has me stumped. Let $0 \le p \le 1$ and let $z = \Phi^{-1}(p)$, where $\Phi^{-1}(p)$ is the inverse cumulative distribution function for the standard normal distribution. Then: $$\frac{\partial \Phi^{-1}(p)}{\partial p} = \left(\frac{\partial \Phi(z)}{\partial z}\right)^{-1},$$ where $\Phi(z)$ is the cumulative distribution function for the standard normal distribution. This yields: $$= \left(\frac{1}{\sqrt{2\pi}} \exp(-z^2/2) \right)^{-1} = \frac{\sqrt{2\pi}}{\exp(-z^2/2)}.$$ I think/hope this is right so far. But now I have $p_1$ and $p_2$ and I need to find the derivative of $$\frac{\partial \Phi^{-1}\left(\frac{p_1}{p_1+p_2}\right)}{\partial p_1}$$ and $$\frac{\partial \Phi^{-1}\left(\frac{p_1}{p_1+p_2}\right)}{\partial p_2}.$$ Any help would be appreciated.
$\Phi:\mathbb R \to (0,1)$ and $\Phi^{-1}:(0,1) \to \mathbb R$ are strictly increasing continuous bijective functions If $z=\Phi^{-1}(p)$ then $p=\Phi(z)$ and $\dfrac{dp}{dz}=\Phi^\prime(z)=\phi(z)$, so $\dfrac{dz}{dp} = \dfrac{1}{\phi(z)}= \dfrac{1}{\phi(\Phi^{-1}(p))}$
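For the two-parameter part of the question, the chain rule finishes it. Writing $q=\dfrac{p_1}{p_1+p_2}$, a sketch of the remaining computation: $$\frac{\partial}{\partial p_1}\Phi^{-1}(q)=\frac{1}{\phi(\Phi^{-1}(q))}\cdot\frac{p_2}{(p_1+p_2)^2},\qquad \frac{\partial}{\partial p_2}\Phi^{-1}(q)=-\frac{1}{\phi(\Phi^{-1}(q))}\cdot\frac{p_1}{(p_1+p_2)^2}.$$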
{ "language": "en", "url": "https://math.stackexchange.com/questions/910355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Does $\sum_{i=1}^\infty a_i/i < \infty$ imply that $a_i$ has Cesaro mean zero? If $(a_i)_{i=1}^\infty$ is a sequence of positive real numbers such that: $$ \sum_{i=1}^\infty \frac{a_i}{i} < \infty. $$ Does this mean that the sequence $(a_i)_{i=1}^\infty$ has Cesaro mean zero? As in $$ \lim_{n\to\infty} \frac{1}{n} \sum_{i=1}^n a_i = 0.$$
Summation by parts gives: $$\sum_{i=1}^{n}a_i=\sum_{i=1}^{n}\frac{a_i}{i}+\sum_{j=1}^{n-1}\sum_{k=j+1}^{n}\frac{a_k}{k},\tag{1}$$ while the convergence of $\sum_{i=1}^{+\infty}\frac{a_i}{i}$ gives that for any $\varepsilon>0$ there exists $M_\varepsilon$ such that $$\sum_{i\geq M_\varepsilon}\frac{a_i}{i}\leq \varepsilon.$$ Hence, by assuming $\sum_{n=1}^{+\infty}\frac{a_n}{n}=C$, we have, through $(1)$: $$\sum_{i=1}^{KM_\varepsilon}a_i\leq C+CM_\varepsilon+\varepsilon(K-1)M_\varepsilon,\tag{2}$$ hence: $$\limsup_{K\to +\infty}\frac{1}{KM_\varepsilon}\sum_{i=1}^{K M_\varepsilon}a_i \leq \varepsilon.\tag{3}$$ Since $\varepsilon$ was arbitrary, this proves that $\{a_i\}_{i\in\mathbb{N}^*}$ is Cesàro summable to zero as conjectured.
{ "language": "en", "url": "https://math.stackexchange.com/questions/910439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
converting geometric infinite series to another infinite series Let $\{x_n\}_{n=1}^\infty$ be a sequence satisfying the recurrence relation: $$ x_n = a\left(1- \sum_{k=0}^{n-1}x_k\right) $$ where $ x_0 = 1 $, and $a \in [0,1]$ is chosen so that $$ \sum_{k=1}^{\infty} x_k = 1$$ Given a positive integer $d$, how do I generate a sequence $\{y_n\}$ such that $$ \sum_{k=p}^{p+(d-1)} y_k = x_{\frac {p+(d-1)}d}$$ For example, if $a = 0.5$ and $d = 4$, $$ \sum_{k=1}^{4} y_k = x_1 = 0.5 $$ and $$ \displaystyle \sum_{k=5}^{8} y_k = x_2 = 0.25 $$ I originally thought that this would be related to compound interest, but doing the arithmetic by hand, I have not found this to be the case. I have limited mathematical knowledge, so if the answer requires anything beyond algebra, please explain or cite references to the form you are using. If the title of the question can be made clearer, please feel free to edit. My use case is a computer application that will calculate $ g(x) $ from $x = 0$, so iterative solutions work for me.
To begin, rewrite $$f(x) = xa^x=a x a^{x-1}=a\frac{d(a^x)}{da}$$ So, $$\displaystyle \sum_{x=1}^{\infty} f(x) = a \sum_{x=1}^{\infty}\frac{d(a^x)}{da}=a\frac{d}{da}\sum_{x=1}^{\infty}a^x=a\frac{d}{da}\Big(\frac{a}{1-a}\Big)=\frac{a}{(1-a)^2}$$ So, if $$\displaystyle \sum_{x=1}^{\infty} f(x) =1$$ solving $$\frac{a}{(1-a)^2}=1$$ corresponds to the quadratic $a^2-3a+1=0$, whose roots are $$a_{\pm}=\frac{1}{2} \left(3\pm\sqrt{5}\right)$$ but since $a \in [0,1]$ the only solution is $$a_{-}=\frac{1}{2} \left(3-\sqrt{5}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/910510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\int \sqrt{1+\sin ^2 x} dx$ an elliptic integral? It seems to be an elliptic integral of the second kind, but when $k=i$? This is going by the definition that $E(\theta,k)=\int_{0}^{\theta} \sqrt{1-k^2 \sin^2x}dx$. That seems a bit off. Or is this not one at all due to the indefinite nature of the integral?
Consider the elliptic integral of the second kind $E(\theta,m):=\int\limits _0^\theta \sqrt{1-m \sin^2 x}\,dx$. (Note that my convention will be to write in terms of $m=k^2$ rather than $k$ itself.) Naively, the range of allowed $m$ is $m\in [0,1]$. The question posted above is then a special case of the following: What is the meaning of $E(\theta,-m)$ for $0\leq m \leq 1$? To answer this, we can manipulate $E(\theta,-m)$ into a more recognizable form. Substituting $u=\pi/2-x$, we have $$1+m \sin^2 x =1+m(1-\sin^2 u)=(1+m)\left(1-\frac{m}{1+m}\sin^2 u\right)$$ and therefore \begin{align} E(\theta,-m) &=(1+m)^{1/2}\int\limits _{\pi/2-\theta}^{\pi/2} \sqrt{1-\frac{m}{1+m}\sin^2 u}\,du \\ &=(1+m)^{1/2} E\left(\frac{\pi}{2},\frac{m}{1+m}\right)-(1+m)^{1/2} E\left(\frac{\pi}{2}-\theta,\frac{m}{1+m}\right). \end{align} Note that the function in the first term is in fact the complete elliptic integral $E\left(\frac{m}{m+1}\right).$ For the special case of $m=1$, we conclude that \begin{align} E(\theta,-1) &=\int_{0}^\theta \sqrt{1+\sin^2 x}\,dx \\ &= \sqrt{2}E\left(\frac{1}{2}\right)-\sqrt{2}E\left(\frac{\pi}{2}-\theta,\frac{1}{2}\right)=\sqrt{2}\int_{\pi/2-\theta}^{\pi/2} \sqrt{1-\frac{1}{2}\sin^2 x}\,dx. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/910597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Homework | Find the general solution to the recurrence relation A question I have been stuck on for quite a while is the following Find the general solution to the recurrence relation $$a_n = ba_{n-1} - b^2a_{n-2}$$ Where $b \gt 0$ is a constant. I don't understand how the general solution can be found with $b$ and $b^2$ in the relation. Any help or advice would be greatly appreciated. EDIT Using $a_n = t^n$ I found the quadratic equation $t^2 - bt + b^2$ Which then comes to: $$\frac{b \pm \sqrt{b^2 - 4b^2}}{2}$$ Therefore I have complex roots as $b^2 - 4b^2 = -3b^2$ will be a negative number. How do I continue from this point? EDIT Using $a_n = b^nc_n$ I came to $c_n = c_{n-1} - c_{n-2}$. Substituting $t^n$ for $c_n$ I get the quadratic $t^2 - t + 1$. Which solves to: $$ \frac{1 \pm i\sqrt{3}}{2} $$ $\Rightarrow D = \frac{1}{2}\sqrt{1 + 2\sqrt{3}}$ and $\tan\theta = \frac{1}{\sqrt{3}}$ $\Rightarrow a_n = \left(\frac{\sqrt{1 + 2\sqrt{3}}}{2}\right)(A\cos(n\theta) + B\sin(n\theta))$
When you have $a_n=ba_{n-1}-b^2a_{n-2}$, you can see $$b^\color{red}{0}\cdot a_\color{blue}{n}=b^\color{red}{1}\cdot a_{\color{blue}{n-1}}-b^\color{red}{2}\cdot a_{\color{blue}{n-2}}$$ where $0+n=1+(n-1)=2+(n-2)$. In such a case, dividing both sides by $b^n$ gives you $$\frac{a_n}{b^n}=\frac{a_{n-1}}{b^{n-1}}-\frac{a_{n-2}}{b^{n-2}}\iff c_n=c_{n-1}-c_{n-2}$$ where $c_n=a_n/b^n$. Solving $t^2=t-1$ gives us $t=\frac{1-\sqrt 3i}{2}(=\alpha), \frac{1+\sqrt 3i}{2}(=\beta)$. Hence, we have $$c_{n+1}-\alpha c_n=\beta (c_n-\alpha c_{n-1})=\cdots =\beta^n(c_1-\alpha c_0),$$ $$c_{n+1}-\beta c_n=\alpha (c_n-\beta c_{n-1})=\cdots =\alpha^n(c_1-\beta c_0).$$ Subtracting the latter from the former gives you $$\frac{a_n}{b^n}=c_n=\frac{\beta^n-\alpha^n}{\beta-\alpha}c_1+\frac{\alpha\beta(\alpha^{n-1}-\beta^{n-1})}{\beta-\alpha}c_0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/910702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Showing that $\exp(\sum_{n=1}^\infty a_nX^n)=\prod_{n=1}^\infty\exp(a_nX^n)$ for formal power series I've just come across formal power series and am not very fluent with them yet. I'd like to show that $\exp(\sum_{n=1}^\infty a_nX^n)=\prod_{n=1}^\infty\exp(a_nX^n)$. Can anybody help?
$$\exp\left(\sum\limits_{n=1}^{+\infty}a_n X^n\right)=\exp\left({\lim\limits_{n\to +\infty}\sum\limits_{k=1}^{n}a_k X^k}\right)=\lim\limits_{n \to +\infty}\exp \left(\sum\limits_{k=1}^{n}a_k X^k\right)=\lim\limits_{n \to \infty}\prod\limits_{k=1}^{n}\exp \left(a_k X^k\right)=\\=\prod\limits_{n=1}^{+\infty}\exp \left(a_n X^n\right) $$ We may interchange $\lim$ and $\exp$ because $\exp$ is continuous with respect to the $X$-adic topology on formal power series; this is where the sum starting at $n=1$ matters, since then all partial sums have zero constant term and the limits above converge formally.
{ "language": "en", "url": "https://math.stackexchange.com/questions/910796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Openness of path connected components of open subsets of $\mathbb C$ Let $\Omega\subset \Bbb{C}$ be an open set. My textbook states that every path connected component of $\Omega$ is open. I can't seem to understand why that is. Why does every point have to contained in a path-connected neighbourhood which lies entirely inside the path connected component?
Let $P$ be any path component of $\Omega$. Let $p \in P$ be any point. Since $\Omega$ is open, there exists an open ball $B$ such that $p \in B$ and $B \subset \Omega$. Since $B$ is path connected, it must be contained in $P$, by definition of path component. Therefore $P$ is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/910884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why is there a 'missing' $1$ in the Euler–Mascheroni constant? It is easy to show that: $$ \sum_{k=1}^n \frac{1}{k} > \ln(n+1), $$ but the Euler–Mascheroni constant is defined as: $$ \gamma = \lim_{n \to \infty} \left( \sum_{k=1}^n \frac{1}{k} - \ln(n) \right). $$ My question is, why was $\gamma$ defined using $\ln(n)$ and not $\ln(n+1)$? Are the two definitions identical, or does it simply turn out to be more convenient for other applications to define $\gamma$ using $\ln(n)$?
$$ \left( \sum_{k=1}^{n} \dfrac{1}{k} - \ln(n+1) \right) - \left( \sum_{k=1}^{n} \dfrac{1}{k} - \ln(n) \right)=\ln(n)-\ln(n+1)=\ln\left(\frac{n}{n+1}\right)$$ And $$\lim_n \ln\left(\frac{n}{n+1}\right)=\ln 1=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/910965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the expected number of coin tosses needed to obtain a head? Due to my recent misunderstandings regarding the 'expected value' concept I decided to post this question. Although I have easily found the answer on the internet I haven't managed to fully understand it. I understood that the formula for the expected value is: $$E(X) = x_1p_1 + x_2p_2 + \dots + x_np_n$$ The $x$'s are the possible values that the random variable can take and the $p$'s are the probabilities that each value is taken. So, if I get a head on the first try, then $ p_1 = \frac{1}{2} , x_1 = 1 $ If I get a head on the second try, then $ p_2 = \frac{1}{4} , x_2 = 2 $ And then, I would have that: $$E(X) = \frac{1}{2}\cdot 1+ \frac{1}{4}\cdot 2 +\dots$$ So my reasoning led me to an infinite sum which I don't think I can evaluate that easily. In the 'standard' solution of this problem, the expected value is found in a recursive manner. So the case in which the head doesn't appear in the first toss is treated recursively. I haven't understood that step. My questions are: is my judgement correct? How about that recursion step? Could somebody explain it to me?
Let $X$ be the number of tosses, and let $e=E(X)$. It is clear that $e$ is finite. We might get a head on the first toss. This happens with probability $\frac{1}{2}$, and in that case $X=1$. Or else we might get a tail on the first toss. In that case, we have used up $1$ toss, and we are "starting all over again." So in that case the expected number of additional tosses is $e$. More formally, the conditional expectation of $X$ given that the first toss is a tail is $1+e$. It follows (Law of Total Expectation) that $$e=(1)\cdot\frac{1}{2}+(1+e)\cdot\frac{1}{2}.$$ This is a linear equation in $e$. Solve. Remark: The "infinite series" approach gives $$E(X)=1\cdot\frac{1}{2}+2\cdot\frac{1}{2^2}+3\cdot\frac{1}{2^3}+\cdots.$$ This series, and related ones, has been summed repeatedly on MSE.
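A quick simulation confirming $e=2$ (a minimal sketch):

```python
import random

def tosses_until_head(rng):
    n = 1
    while rng.random() < 0.5:  # tail with probability 1/2; toss again
        n += 1
    return n

rng = random.Random(0)
trials = 10**6
print(sum(tosses_until_head(rng) for _ in range(trials)) / trials)  # ≈ 2
```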
{ "language": "en", "url": "https://math.stackexchange.com/questions/911050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
A problem about martingale with stopping time. In Durrett's "Probability: Theory and Examples": Suppose $X_n$ is a supermartingale and $H_n$ is predictable. Define: $$(H\cdot X)_n\triangleq\sum^n_{m=1}H_m(X_m-X_{m-1}) $$ $N$ is a stopping time and let $H_m=1_{\{N\ge m\}}$ Then the author claims: $$(H\cdot X)_n=X_{N\wedge n}-X_0 $$ I am confused about this claim: if we consider the coefficient of $X_0$, according to the definition, the coefficient should be $-1_{\{N\ge 1\}}$, not $-1.$ If you don't understand the background you can refer to: http://www.math.duke.edu/~rtd/PTE/PTE4_1.pdf Page 200, third line from the bottom.
It follows from the very definition of a stopping time (page 155) that $N \geq 1$. Hence, $$-1_{\{N \geq 1\}} = -1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/911093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finite rings without unity that are subrings of finite rings with unity I know that a ring $R$ without unity can be embedded as a subrng of a ring with underlying additive structure $R \oplus \mathbb{Z}$, a ring with unity. But this does not yield a finite ring. But I read somewhere that $\mathbb{Z}$ can be replaced by $\mathbb{Z}/p\mathbb{Z}$, where $p$ is the characteristic of $R$ (seen as the least number of terms needed for $a+a+\ldots+a=0$ for any $a \in R$). Suppose $R$ has additive structure $\mathbb{Z}_2 \oplus \mathbb{Z}_3$, so $p=2$, but then if $a \in \mathbb{Z}_3$ then $2a=(1+1)a=0$, on the other hand $a+a+a=0$. How is that possible?
Indeed, the well-known Dorroh adjunction of $1$ is not useful in many contexts because it doesn't preserve crucial properties of the source rng and/or doesn't satisfy various minimality properties. Below is an alternative, which alleviates some of these problems, addressing the issue you mention: W.D. Burgess; P.N. Stewart. The characteristic ring and the "best" way to adjoin a one. J. Austral. Math. Soc. 47 (1989) 483-496.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
procedure to pair all people with all others in a group I am looking for an easy method to ensure that all people in a group get to meet all others. The "speed dating" method is to have two rows of people facing each other, and then rotate one of the rows. This works for half of the pairings. How do I get the remaining pairings to happen properly in a simple way?
When $n$ is odd, label each person with elements of $\mathbb Z_{n}$, and each round of introductions is also labeled by an element of $\mathbb Z_n$. Then in round $k$, person $a$ is introduced to person $b$, if $a+b=k$, with the unique person $i$ so that $2i=k$ not introduced to anybody. This yields $n$ rounds of $\frac{n-1}{2}$ pairs, the best you can do. So persons $a$ and $b$ meet in round $a+b$, and person $a$ is left out of the introductions in round $2a$. When $n$ is even, pick a single individual $X$ out of the set. Then apply the odd case to the $n-1$ other people, but introduce $X$ to the person left out each round. So person $X$ meets person $a$ in round $2a\pmod{n-1}$. This yields $\frac n2$ introductions per round in $n-1$ rounds, again the best you can do. So, when $n=6$ we label the people $\{X,0,1,2,3,4\}$, and we get:

Round 0: X0 14 23
Round 1: X3 01 24
Round 2: X1 02 34
Round 3: X4 03 12
Round 4: X2 04 13

If you absolutely must start with the speed dating approach, with $n_1$ men and $n_2$ women, then after those first $\max(n_1,n_2)$ rounds of speed dating introductions, the rest of the rounds might as well just follow my approach for introducing the $n_1$ men to each other and the $n_2$ women to each other. You can't do better. Starting with speed dating would then require $$\max\left(n_1+2\left\lfloor\frac{n_1-1}{2}\right\rfloor+1,n_2+2\left\lfloor\frac{n_2-1}{2}\right\rfloor+1\right)$$ rounds of introductions. There is a subtle reason that it is easier when $n$ is odd rather than when $n$ is even. When $n$ is even, we need a binary operation $\star$ on $\{1,2,3,\dots,n\}$ which maps $(i,j)$ to the round number where $i,j$ meet, or $n$ when $i=j$. We obviously need $a\star b=b\star a$ for all $a,b$, and $a\star b=a\star c$ implies $b=c$. There is no good simple arithmetic commutative binary operation on $\mathbb Z_n$ such that $a\star a$ is independent of $a$. (If $n=2^k$, however, we can use the vector space of dimension $k$ over $\mathbb Z_2$ and define $a\star b=a+b$, since $a+a=0$ for all $a\in \mathbb Z_2^k$.) On the other hand, if $n$ is odd, we have a binary commutative operation on $\mathbb Z_n$ in which $a\star b$ is the round when $a$ and $b$ are introduced, and $a\star a$ is the round when $a$ is left out. If we label the rounds so that $a$ is left out in round $a$, then this means: $$a\star a = a\\a\star b=b\star a\\a\star b=a\star c\implies b=c$$ Then in any ring in which $2=1+1$ is a unit, this is easily defined as $a\star b=\frac{a+b}{2}$.
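A small Python sketch of this scheme (my own illustrative code; person $X$ is labeled $n-1$):

```python
def rounds(n):
    """Round-robin pairings following the Z_n scheme described above."""
    if n % 2 == 1:
        # odd n: in round k, a meets b when a + b ≡ k (mod n)
        return [[(a, (k - a) % n) for a in range(n) if a < (k - a) % n]
                for k in range(n)]
    m = n - 1
    sched = []
    for k in range(m):
        # run the odd scheme on people 0..m-1 ...
        pairs = [(a, (k - a) % m) for a in range(m) if a < (k - a) % m]
        idle = next(i for i in range(m) if (2 * i) % m == k)  # person left out
        pairs.append((idle, m))  # ... and introduce X (labeled m) to them
        sched.append(pairs)
    return sched

for k, rnd in enumerate(rounds(6)):
    print(k, rnd)  # reproduces the n = 6 table above (with X = 5)
```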
{ "language": "en", "url": "https://math.stackexchange.com/questions/911268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
difference between the polynomials I have a homework assignment that I do not know how to solve. I don't understand how to calculate $f(x)$ in this assignment. $f(t)$ is the difference between the polynomials $2t^3-7t^2-4$ and $t^2-7t-3$. Calculate $f(3)$. What should I do to calculate $f(t)$? Thanks!
Should be $f(t)$ not $f(x)$. Subtract the two polynomials to find $f(t)$. This will leave you with $2t^{3}-8t^{2}+7t-1$. Now plug in $t=3$ and you are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
The intersection of $3$ sets is empty; would the intersection of $4$ sets be empty? Let me clarify some more. Let's say we have four sets $A,B,C,$ and $D$. If the intersection of any three sets is empty, by default is the intersection of all four sets empty?
Yes, if $A$, $B$, $C$, and $D$ are sets such that the intersection of any three is empty, then because intersection is associative, we have that $A \cap B \cap C \cap D = (A \cap B \cap C) \cap D = \emptyset \cap D = \emptyset$. Just in case you aren't sure what associative means: it means the grouping doesn't matter, so in the case of three sets we get $(A \cap B) \cap C = A \cap (B \cap C)$. Together with commutativity, this lets you intersect four sets in any order you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
How can I numerically evaluate the total derivative of a multivariate function? I think I understand now the intuitive reasoning behind the total derivative of a multivariate function $z = z(x, y)$, which is $$ dz = \frac{\partial{z}}{\partial{x}}dx + \frac{\partial{z}}{\partial{y}}dy $$ So let's take an example, $z = x^2 + y^2$, a paraboloid and a surface of revolution. Here $z_x = 2x$ and $z_y = 2y$, so $$ dz = 2xdx + 2ydy. $$ How would I evaluate this numerically? For example, if I had a program that calculated the infinitesimal change $dz$ at a point $\langle x_0, y_0 \rangle$, I could plug in the values of $x_0$ and $y_0$ for $x$ and $y$ respectively, but what would I plug in for $dx$ and $dy$? EDIT: My guess is that you actually can't evaluate it directly as a number, but instead have to use it as a combination of infinitesimals, kind of like how you can't treat problems involving the imaginary unit $i$ as all real numbers. So instead of treating $dz$ as some actual value, you use it to derive other values that are useful. That's my guess but I'm not sure if it's valid or not.
The differential is not a numerical value, so it's meaningless to associate a number to a differential. But it does have a representation with respect to a basis, in which the differential is regarded as a kind of vector. Say instead we use the pair $(2x,2y)$ to represent the differential $dz$. In practice, we also don't care about $dz$ itself. What we care about is how it applies to a vector $w=(u,v)$. So now we can give $dz(w)$ the numerical value $2(ux+vy)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
probability rolling a die 5 times I can't solve this problem: What is the probability that, when rolling a die 5 times, the number of times you get a 1 or 2 is greater than the number of times you get a 6? Any help?
@Stones Yes, but the table is not quite right. If there are three or more 1s or 2s, then it does not matter how many 6s there are. So N5 the number of ways of getting five 1s or 2s is 32. Similarly, N4 the number of ways of getting four is $5\times 2^4\times 4=320$ and N3 the number of ways of getting three is $10\times 2^3\times 4^2=1280$. If we get just one 1/2 then we cannot get any 6s, so N1 is $5\times 2\times 3^4=810$. That leaves the trickier case of two 1s/2s. If there are no 6s, then we have N2a as $10\times 2^2\times 3^3=1080$. If there is one 6, then we have N2b as $10\times 3\times 2^2\times 1\times 3^2=1080$. Adding those up, we get 4602, so the probability is 767/1296 = 0.5918 approx. [The coefficients 1, 5, 10 are just the binomial coefficients, because we are looking at the number of ways of picking 5, 4, 3 things from 5. The extra 3 for N2b comes because there are 3 ways of choosing which of the three results not a 1 or 2 is the 6.]
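A brute-force check over all $6^5 = 7776$ outcomes (a sketch):

```python
from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=5)
            if sum(d <= 2 for d in roll) > sum(d == 6 for d in roll))
print(count, count / 6**5)  # 4602, ≈ 0.5918 (= 767/1296)
```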
{ "language": "en", "url": "https://math.stackexchange.com/questions/911571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Square Integrable local martingale or locally square integrable martingale? I have a question about martingales. What is the difference between "locally square integrable martingale" and "square integrable local martingale"? In particular, which set does $M_{loc}^2$ represent?
Protter gives the following definition: Let $X$ be a stochastic process. A property $\pi$ is said to hold locally if there exists a sequence of stopping times $(T_n)_{n \geq 1}$ increasing to $\infty$ almost surely such that $X^{T_n} 1_{\{T_n>0\}}$ has property $\pi$ for each $n \geq 1$. If we speak of a square integrable local martingale $(X_t)_{t \geq 0}$, then

* $(X_t)_{t \geq 0}$ is a local martingale, i.e. there exist stopping times $(T_n)$, $T_n \uparrow \infty$, such that the stopped process $(X_t^{T_n} 1_{\{T_n>0\}})_{t \geq 0}$ is a martingale.
* $X_t \in L^2(\mathbb{P})$ for all $t \geq 0$.

In contrast, a locally square integrable martingale $(X_t)_{t \geq 0}$ satisfies

* There exists a sequence of stopping times $(T_n)$, $T_n \uparrow \infty$, such that $X_{t}^{T_n} \in L^2$ for all $n \geq 1$, $t \geq 0$.
* $(X_t)_{t \geq 0}$ is a martingale.

Sometimes, the notion "locally square integrable martingale" is also used for processes for which there exist stopping times $(T_n)$ such that $(X_t^{T_n})_{t \geq 0}$ is a martingale and $X_t^{T_n} \in L^2$ - in this case, "local" refers both to the integrability and the martingale property. This highly depends on the book (or paper) you are reading.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why $\sin(n\pi) = 0$ and $\cos(n\pi)=(-1)^n$? I am working out a Fourier Series problem and I saw that the suggested solution used $\sin(n\pi) = 0$ and $\cos(n\pi)=(-1)^n$ to simplify the expressions while finding the Fourier Coefficients $a_0$, $a_n$, $b_n$. I am aware that $\sin(x)$ has a period of $2\pi$. So I am thinking that every half period, the graph of $\sin(x)$ has to cut through the $x$ axis, thus giving us the value $0$. Am I right to think that way or is there some more important reason for that? Also, how do they come up with $\cos(n\pi) = (-1)^n$?
On the unit circle, the $x$ coordinate of a point at angle $\theta$ is $\cos\theta$ and the $y$ coordinate is $\sin\theta$. Now, the point at angle $n\pi$, where $n=0,1,2,3,\ldots$, always lies on the $x$-axis, where $y=0$, so $\sin(n\pi)=0$. Meanwhile $\cos(n\pi)$ alternates between $x=1$ and $x=-1$: $\cos(0\cdot\pi)=1=(-1)^0$, $\cos(1\cdot\pi)=-1=(-1)^1$, $\cos(2\cdot\pi)=1=(-1)^2,\ldots$
{ "language": "en", "url": "https://math.stackexchange.com/questions/911716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 7, "answer_id": 4 }
What is the motivation for quaternions? I know imaginary numbers solve $x^2 +1=0$, but what is the motivation for quaternions?
Hamilton (and Graves) wanted to generalize $\mathbb C$ - if viewed as $\mathbb R^2$ with a multiplication that turns it into a field with a multiplicative absolute value. They were looking for something similar in $\mathbb{R}^n$ for $n>2$. It turns out that Hamilton spent 13 years in vain with $n=3$ although it was essentially known since Diophantus that what he was looking for was impossible. He finally figured out that he could succeed for $n=4$ if he gave up commutativity. (This is a short summary of chapter 20 of Stillwell's wonderful "Mathematics and Its History".)
{ "language": "en", "url": "https://math.stackexchange.com/questions/911807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 6, "answer_id": 3 }
Pointwise estimate for a sequence of mollified functions In the answer to Characterisation of one-dimensional Sobolev space Tomás wrote ... let $\eta_\delta$ be the standard mollifier sequence. Let $u_\delta=\eta_\delta\star u$ and note that for any $c\in (a,b)$ $$|u_\delta(x)-u_\epsilon(x)|\le \int_c^x |u'_\delta (t)-u'_\epsilon(t)|dt+|u_\delta (c)-u_\epsilon(c)|\tag{1}.$$ Since I am new to this subject, I'd like to know which theorem/lemma Tomás used to get inequality (1).
By the fundamental theorem of calculus, we have that $$u_\delta(x)=u_\delta (c)+\int_c^xu_\delta'(t)dt,$$ Can you conclude now?
{ "language": "en", "url": "https://math.stackexchange.com/questions/911868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 1, "answer_id": 0 }
Graph of $\sqrt{x}$ Why does the graph only go to the right and up? Wouldn't there be negative values of $y$ as well, since, for example, $\sqrt{4}$ would be both $2$ and $-2$?
A function $f$ is a map $f: X\rightarrow Y$ which for every input $x\in X$ gives precisely one output $f(x)\in Y$. Taking $\sqrt4=\pm2$ is not a function, because it has two outputs. The notation $\sqrt{x}$ is used to denote the function $\sqrt{\:\:\:}\:: \mathbb{R}^{+}\rightarrow\mathbb{R}^{+}$ with positive image, ignoring the negative numbers which are square roots. When you graph $\sqrt{x}$, you are graphing this function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/911947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Check if two vector equations of parametric surfaces are equivalent Give the vector equation of the plane through these lines: $\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}4\\1\\1\end{pmatrix}+\lambda\cdot\begin{pmatrix}0\\2\\1\end{pmatrix}\,\,\,$ and $\,\,\,\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}4\\0\\3\end{pmatrix}+\mu\cdot\begin{pmatrix}0\\2\\1\end{pmatrix}$. My answer is: $\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}4\\1\\1\end{pmatrix}+\lambda\cdot\begin{pmatrix}0\\-1\\2\end{pmatrix}+\mu\cdot\begin{pmatrix}0\\-3\\1\end{pmatrix}$. The solutions manual suggests the following equation, which is the equation of a straight line (probably a typo): $\begin{pmatrix}x\\y\\z\end{pmatrix}=\begin{pmatrix}4\\1\\1\end{pmatrix}+\lambda\cdot\begin{pmatrix}0\\2\\1\end{pmatrix}+\mu\cdot\begin{pmatrix}0\\2\\1\end{pmatrix}$ Is my solution the right solution? Could someone provide a general way to check?
But the really simple way is to pick three arbitrary points (not on the same line) from the first plane and check if they all lie in the second one.
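A sketch of that check automated with cross products (illustrative code, assuming each pair of direction vectors is linearly independent): two parametric planes coincide iff their normals are parallel and the vector connecting their base points lies in the plane.

```python
import numpy as np

def same_plane(p1, d1a, d1b, p2, d2a, d2b):
    n1 = np.cross(d1a, d1b)  # normal of the first plane
    n2 = np.cross(d2a, d2b)  # normal of the second plane
    return (np.allclose(np.cross(n1, n2), 0)                # normals parallel
            and np.isclose(np.dot(n1, np.subtract(p2, p1)), 0))

print(same_plane([4, 1, 1], [0, -1, 2], [0, -3, 1],   # the asker's plane
                 [4, 1, 1], [0, 2, 1],  [0, -1, 2]))  # plane of the two lines -> True
```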
{ "language": "en", "url": "https://math.stackexchange.com/questions/912048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is there an integer $N>0$ such that $\varphi(n) = N$ has infinitely many solutions? Let $\varphi: \mathbb{N} \to \mathbb{N}$ be the totient function. Is there an integer $N > 0$ such that there are infinitely many integers $n > 0$ such that $$\varphi(n) = N?$$
No. We use the fact that if $n = p_1^{e_1} \cdots p_k^{e_k}$, with the $p_i$ different primes and the $e_i$ positive, we have $\phi(n) = p_1^{e_1-1}(p_1-1) \cdots p_k^{e_k-1}(p_k-1)$. Now consider an integer $N>0$ and suppose that $\phi(n) = N$. Then $n$ cannot be divisible by primes larger than $N+1$, since if $q > N+1$ divides $n$, we have $N+1 \leq q-1 \mid \phi(n) = N$, contradiction. The set of primes that may divide $n$, therefore, is finite. Let $p$ be such a prime. Then if $p^k \mid n$ we have $p^{k-1} \mid N$, which leaves a finite number of possibilities for $k$. Thus the number of primes that may divide $n$ is bounded and the exponent of each prime is bounded as well, thus there are at most finitely many $n$ for which $\phi(n) = N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/912137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Rationale for expressing as a direct sum and a direct product In "Ireland and Rosen" page 35, it says if $R_1, R_2, ..., R_n$ are rings, then $R_1 \oplus R_2 \oplus \dots \oplus R_n = S$ is the direct sum of the $R_i$. Later in a proposition it says if $S = R_1 \oplus R_2 \oplus \dots \oplus R_n$, then the group of units $U(S) = U(R_1) \times \dots \times U(R_n)$. I would appreciate help understanding why in the first instance $S$ is expressed as a direct sum, and the group of units (over the same set of $R_i$) is expressed as a direct product. Thanks
It is standard to use $\oplus$ for the biproduct of modules. (aside: for infinite products of modules, $\oplus$ is interpreted as the coproduct) If the modules $M$ and $N$ have an algebra structure, then they induce a canonical algebra structure on their direct sum $M \oplus N$. It is standard, but strange, notation to write $R \oplus S$ for the algebra constructed by forgetting the algebra structure on $R$ and $S$, taking the direct sum of the modules, and then putting the canonical algebra structure back on the direct sum. I really don't understand why this notation is used, since it is rather misleading, as it is very much not the coproduct of the rings. (although it is their product)
{ "language": "en", "url": "https://math.stackexchange.com/questions/912229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to derive the closed form of the sum of $kr^k$ $$ \sum_{k=0}^{n}kr^k = r\frac{1-(n+1)r^n + nr^{n+1}}{ (1 - r)^2 } $$ How to derive it? I read about some finite calculus, and i understand how to tackle sums of $x^2$, $x^3$, etc.. But I don't know if the same methods can be used on this sum?
Hint: $kr^k = r k r^{k-1} = r \frac{d}{dr}r^{k}$, now interchange summation and derivative.
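Carrying the hint out (a sketch, valid for $r\neq1$): $$\sum_{k=0}^{n}kr^k=r\frac{d}{dr}\sum_{k=0}^{n}r^k=r\frac{d}{dr}\left(\frac{1-r^{n+1}}{1-r}\right)=r\,\frac{1-(n+1)r^n+nr^{n+1}}{(1-r)^2}.$$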
{ "language": "en", "url": "https://math.stackexchange.com/questions/912354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Limit of a sequence of products How do you prove the following? $$\lim_{n\,\to\,\infty}\,\frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}\ =\ 0$$
We verify easily that for $k\geq 1$, we have $\displaystyle \frac{2k-1}{2k}\leq \sqrt{\frac{k}{k+1}}$. Hence $$\frac{1\cdot3\cdot5\cdots(2n-1)}{2\cdot4\cdot6\cdots(2n)}\leq \prod_{k=1}^n \sqrt{\frac{k}{k+1}}= \frac{1}{\sqrt{n+1}}$$ and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/912421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Complex number equations I cannot solve two problems regarding complex equations. 1) Let $z^2+w^2=0$, prove that $$z^{4n+2}+w^{4n+2}=0, n \in \mathbb{N^{*}}$$ What I tried: $$z^2 \cdot z^{4n}+w^2 \cdot w^{4n}=0 \iff w^2(w^{4n}-z^{4n})=0$$ but it doesn't really prove anything. 2) Let $z=\frac{1+i\sqrt{3}}{2}$, evaluate $1+z^{1997}-z^{1998}+z^{1999}$ I can think of writing $z=e^{\frac{i\pi}{3}}$ but I have to solve it without anything but complex number properties (and algebra of course).
1) Rearrange your equation; alternatively, just plug in $z^2 = -w^2$. 2) You are right: use your $z = e^{i\frac{\pi}{3}}$. You just need to think about what the exponents are mod 6. That's all that matters here. Do you see why?
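Carrying both hints out (a sketch): for 1), $$z^{4n+2}=(z^2)^{2n+1}=(-w^2)^{2n+1}=-w^{4n+2};$$ and for 2), since $z=e^{i\pi/3}$ satisfies $z^6=1$ and $1997\equiv5$, $1998\equiv0$, $1999\equiv1\pmod 6$, $$1+z^{1997}-z^{1998}+z^{1999}=1+z^5-1+z=z^{-1}+z=2\cos\tfrac{\pi}{3}=1.$$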
{ "language": "en", "url": "https://math.stackexchange.com/questions/912497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
When does $(e^a)^b = e^{ab}$ hold? For a complex number $A$ and a real number $B$, when does the well-known formula $(e^A)^B = e^{AB}$ fail? Or does it hold at all for complex A? Since $e^{2\pi i} = 1$, if this formula holds for all complex numbers $A$ and real numbers $B$, then it would imply that $e^{2\pi ti}=( e^{2\pi i} )^t = 1^t=1$ for all $t$, which is obviously false.
Let's try to see what $(e^a)^b$ means first: Well, $(e^a)^b= e^{b \cdot \ln(e^a)}$, by definition. Now, if only it were true that $\ln(e^a)=a$ then we would be happy, and $(e^a)^b$ would be our familiar $e^{ab}$. Now, let $a=x+iy$ where $x,y$ are reals. Then, $e^a=e^x \cdot e^{iy}$. What is $\ln(e^a)$ then? It should be $\ln(e^x)+\ln(e^{iy})$. There are no problems with $\ln(e^x)$, as $x$ is real. However, a lot of care must be taken while dealing with complex logs. For example see here. A brief reason: because $e^{iy}=e^{i(2\pi+y)}$, we are in a spot of bother. (This point is very well taken in your question.) So, there is no problem with the formula $(e^a)^b=e^{ab}$ if $a$ is real. The troubles begin when $a$ is complex, for the reasons mentioned above. Thus the answer to your question is that the formula holds for real $a$ and any $b \in \mathbb{C}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/912670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Expected Value of Identically distributed random variables I have a very quick question regarding the expected value of two random variables $X,Y$ that are identically distributed and not necessarily independent. Is this equation valid? $E[XY]=E[X^2]$ If this is not true, why not? And what relations can I get (regarding expected value, variance and covariance) when two random variables are identically distributed?
By definition: $$cov(X, Y) = E(X-EX)(Y-EY) = EXY - (EX)(EY).$$ If $X, Y$ are independent, then $cov(X, Y) = 0,$ so that $EXY = (EX)(EY).$
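Regarding the proposed identity: identical distribution alone is not enough. A quick numeric illustration with $X,Y$ i.i.d. Bernoulli$(1/2)$ (a sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10**6)  # X and Y i.i.d. Bernoulli(1/2):
y = rng.integers(0, 2, 10**6)  # identically distributed, independent
print((x * y).mean(), (x * x).mean())  # ≈ 0.25 vs ≈ 0.5, so E[XY] ≠ E[X²] here
```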
{ "language": "en", "url": "https://math.stackexchange.com/questions/912747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does the sum of the reciprocals of all primes of the form $4k+1$ converge? Let $S=\{p\in \mathbb{Z}^+ : p\ \text{is prime and}\ p\equiv 1 \mod \ 4\}.$ Is $\displaystyle\sum_{p\in S}\frac{1}{p}$ finite or infinite, and where can I find more information about it?
Dirichlet's theorem on arithmetic progressions says there are infinitely many primes in every arithmetic progression $an+b$ where $a$ and $b$ are coprime. In particular, there are infinitely many primes of the form $4n+1$. The proof of the theorem makes use of analysis and in fact shows that the sum $\sum_{p = an+b} \frac{1}{p}$ is divergent. For more information, see here: Dirichlet's Theorem on Primes in Arithmetic Progressions from Wikipedia.
{ "language": "en", "url": "https://math.stackexchange.com/questions/912824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
An interesting linear algebra question Let $A$ and $u$ be an $n\times n$ matrix and an $n\times 1$ vector over $\mathbb{C}$. Denote by $\overline{A}$ the matrix with $(\overline{A})_{ij}=A_{ij}^*$, the complex conjugate ($\overline{A}$ is not the conjugate transpose matrix), and similarly $\overline{u}$. Prove that if $\lambda$ is a nonnegative eigenvalue of $A\overline{A}$, i.e. $\exists v\ne 0:A\overline{A}v=\lambda v$, then $\exists u\ne 0$ such that: $$A\overline{u}=\sqrt{\lambda}u$$
Actually I have found a simple way to directly point out the vector $u$:

* $u=\sqrt{\lambda}A\overline{v}+\lambda v$ if $\sqrt{\lambda}A\overline{v}+\lambda v\ne0$
* If $\sqrt{\lambda}A\overline{v}+\lambda v=0$ then $A\overline{v}=-\sqrt{\lambda}v$, so we choose $u=iv$

Also thanks for the elegant answer from User1551. Thanks to Leeuwen's comment, I think the more appropriate choice is $u=A\overline{v}+\sqrt{\lambda}v$
{ "language": "en", "url": "https://math.stackexchange.com/questions/912889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
If $f,g$ are entire functions and$\ fg\equiv 0$ then either $f \equiv 0$ or $g\equiv0. $ Let $f,g$ be entire functions such that $g \not\equiv 0.$ If $fg\equiv0$ in $\mathbb{C},$ could anyone advise me how to show $f \equiv0$ in $\mathbb{C} \ ?$ Thank you.
Suppose there exists $z$ such that $f(z) \ne 0$. Then $f$ is non-zero in some neighbourhood of $z$, so $g$ must be zero in the same neighbourhood. And if an entire function is identically zero in the neighbourhood of any point, it is zero in the whole of $\mathbb C$ (by the identity theorem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/912950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Equation $3x^4 + 2x^3 + 9x^2 + 4x + 6 = 0$ Solve the equation $$3x^4 + 2x^3 + 9x^2 + 4x + 6 = 0$$ Having a complex root of modulus $1$. To get the solution, I tried to take a complex root $\sqrt{\frac{1}{2}} + i \sqrt{\frac{1}{2}}$ but couldn't get the solution right. Please help me.
Let the root be $\cos y+i\sin y.$ Using the complex conjugate root theorem, $\cos y-i\sin y$ must be another root. So, if the four roots are $\cos y\pm i\sin y,u, v$ using Vieta's formula, $(\cos y+i\sin y)(\cos y-i\sin y)u\cdot v=\dfrac63\implies v=\dfrac2u$ So we have $$3[x-(\cos y+i\sin y)][x-(\cos y-i\sin y)](x-u)\left(x-\dfrac2u\right)=3x^4 + 2x^3 + 9x^2 + 4x + 6$$ $$\iff3[(x^2-2x\cos y+1)]\left[x^2-\left(u+\frac2u\right)x+2\right]=3x^4 + 2x^3 + 9x^2 + 4x + 6$$ $$\iff3\left[x^4-x^3\left(2\cos y+u+\frac2u\right)+x^2\left[1+2+2\cos y\left(u+\frac2u\right)\right]+\cdots\right]=3x^4 + 2x^3 + 9x^2 + 4x + 6$$ Equating the coefficients of $x^3,x^2$ $$2\cos y+u+\frac2u=-\frac23\ \ \ \ \ (1)$$ and $$1+2+2\cos y\left(u+\frac2u\right)=\frac93\iff2\cos y\left(u+\frac2u\right)=0$$ If $\cos y=0\implies\sin y=\pm1\implies \cos y\pm i\sin y=\pm i$ which does not satisfy the given equation So, $u+\dfrac2u$ must be $0$ and from $(1),2\cos y=-\dfrac23\iff\cos y=-\dfrac13\implies\sin y=\pm\dfrac{2\sqrt2}3$ Observation: $x^2+2$ is a factor of $$3x^4 + 2x^3 + 9x^2 + 4x + 6$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/913039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Linear inequality problem: $2x + 1 > 10$ $2x + 1 > 10$ $2x > 9$ $x > 4.5$ The answer in the book says: $x\lt 4.5$. Am I doing it wrong?
$$2x+1>10 \iff x\gt 4.5$$ In other words, your solution to the posted inequality is correct! I suspect the book's (wrong) solution must be a typo/misprint, either in the solution, or in the book's statement of the problem. The book's solution satisfies the following inequality $$-2x + 1 \gt 10 \iff -2x \gt 9 \iff x\lt -4.5,$$ but does not satisfy the posted inequality. (Plug in, say $1<4.5$ into the original inequality and test it for yourself: Certainly, claiming that $2(1) + 1 = 3\gt 10$ is absurd.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/913114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Problem about random walk and stopping time. Here is an example in "Probability with Martingales". My questions are: (1) Does equation (a) hold for $T=\infty$? (2) The equation: $$\mathbb{E}M_T^\theta=1=\mathbb{E}[(\operatorname{sech} \theta)^Te^\theta]$$ The author said when $T=\infty$, $\mathbb{E}[(\operatorname{sech} \theta)^Te^\theta]=0$. So the equation doesn't hold?? (3) Why, if $T=\infty$, does $(\operatorname{sech}\theta)^T \uparrow 0$? In my opinion, if $T=\infty$, $(\operatorname{sech}\theta)^T \equiv 0$. Thanks and regards.
The author did not say that the expectation is $0$: he rather meant that $M_{T(\omega)}^\theta=0$ if $T(\omega)=\infty$. This is justified because $0\lt \mathrm{sech}(x)\lt 1 $ for each $x\neq0$. We thus have $$\mathbb E[M_{T}^\theta]=\mathbb E[M_T^\theta\chi\{T<\infty\}].$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/913202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is a geometric structure? Every elementary book on abstract algebra usually begins with giving a definition of algebraic structures; generally speaking, one or several functions from a Cartesian product of a point-set to the set. My question is this: Is there a property that unifies different geometric structures like topology (I consider it a geometric structure), differential structure, incidence structure and so on? Can one say a geometric structure on a set one way or another involves a subset of its powerset?
It is known that, for a compact Hausdorff topological space, the continuous functions into $\mathbb{C}$ characterize the topology on the space. As far as I know similar statements hold for smooth manifolds (using smooth functions) and algebraic varieties (using polynomials). So one possible answer is that a geometric structure is an algebra of functions on your space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/913283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
What is the infinite sum of $a^{b^x}$? What would $$\sum^{\infty}_{n=0}(1/2)^{4^n}$$ be and how to determine it? Note that this is not a typo; it is of the form $a^{b^x}$. If it were $(1/2)^{4n}$ it would of course be trivial and could be treated using the geometric series summation formula $1/(1-r)$ with $r$ being $1/16$. I can see this converges by the ratio test. My issue is working out its sum, more for fun really. It expands to $(1/2) + (1/2)^{4} + (1/2)^{4^2} + \ldots + (1/2)^{4^n}$, and there doesn't seem to be anything simple to do. I have attempted to look for analogies by treating it as a function and integrating, but it doesn't seem expressible with elementary functions. Is this a problem that cannot be tackled by elementary methods (the only methods I currently have at my disposal)? What things should I study to be able to handle these kind of sums?
By asking what the infinite sum is, would you like to write it down as a decimal (using base 10)? Review of decimals: A decimal $.a_1a_2a_3...$ really is itself an infinite series: $$\sum^{\infty}_{n=1}\frac{a_n}{10^{n}}$$ We are comfortable with such series at least partly because we know exactly how to interpret the accuracy of such an expression: If $a$ is the above decimal and we approximate it by using the first $n$ digits, then we are within $(.1)^{n}$ of the real value of $a$. "Evaluating" the given series: Even if there isn't a nice way to express your given series, all is not lost; we need only know how accurate given approximations are. In fact, we can compare the tail of the given series with a geometric series: for all $k>0$, $$\sum^{\infty}_{n=k}(1/2)^{4^n} < \sum^{\infty}_{n=4^{k}}(1/2)^{n} = 2 \times (1/2)^{4^{k}}.$$ In other words, we have an upper bound on the error produced by the partial sum $\sum^{k-1}_{n=0}(1/2)^{4^n}$. And this upper bound approaches zero very very quickly: for $k = 5,$ we have the following inequality: $2 \times (1/2)^{4^{k}} < (.1)^{307}$. That means that if you evaluate $\sum^{4}_{n=0}(1/2)^{4^n}$, you will be accurate to more than 300 decimal places! It begins like this: $0.562515258789062$. Finally, note that you are already given the binary expansion of this number by the series itself: $.1001000000000001\ldots$ (the $1$'s appear exactly at positions $4^{n}$, $n = 0,1,2,\ldots$; after position $16$, the next $1$ is at position $64$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/913378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I solve $x^5 +x^3+x = y$ for $x$? I understand how to solve quadratics, but I do not know how to approach this question. Could anyone show me a step-by-step solution expressing $x$ in terms of $y$? The explicit question out of the book is to find $f^{-1}(3)$ for $f(x) = x^5 +x^3+x$ So far I have reduced $x^5 +x^3+x = y$ to $y/x - 3/4 = (x^2 + 1/2)^2$ or $y = x((x^2+1/2)^2 + 3/4)$ but I'm still just as lost.
A few of the answers suggest guessing the answer by inspection. Indeed, in most calculus problems that could be on an exam it is very likely that the solution to the polynomial is meant to be 'obvious', frequently one of $0,\pm1,\pm2$. This is just to keep numbers and computations reasonable, but is not an absolute guarantee (especially if you get to use a calculator). The mantra is usually "the uglier it looks, the more likely it is that there is a simple answer and argument." But there's also a formal justification. Our first hope would be for a rational solution to $f(x)=3$. Sounds like a lot of possibilities, right? But by the rational roots theorem the only possibilities here are $\pm1,\pm3$. That's a nice, small list, and it's easy to verify that one of these actually works, thus solving the problem. If there wasn't a rational root we'd be stuck with a more problematic situation, but it's always a good idea to check for predictable, nice answers first, especially when it involves so little effort to do so.
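A tiny check of the rational-root candidates (a sketch):

```python
f = lambda x: x**5 + x**3 + x
print([x for x in (1, -1, 3, -3) if f(x) == 3])  # [1], so f^-1(3) = 1
```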
{ "language": "en", "url": "https://math.stackexchange.com/questions/913487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 2 }
Two point boundary value problem $x''+x =0$ Side conditions $x(0)=0$ and $x(1)=1$. I know that I need to find the roots first but don't know how to continue. Using $x=e^{\lambda t}$, the roots are found from $\lambda^2 + 1 = 0$, which gives us $\pm i$ as the roots.
Your roots are $$ \lambda = i,\quad \lambda =-i. $$ Now just construct the solution as $$ x(t) =c_1e^{it} +c_2 e^{-it}. $$ You can simplify the above if you want. I think you know how to advance. Added: As I said you can simplify the above answer to the form $$ x(t) = A\sin(t) + B\cos(t). $$
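Applying the boundary conditions to finish (a sketch; note $\sin 1\neq0$, so the problem is well posed): $$x(0)=B=0,\qquad x(1)=A\sin 1=1\ \Longrightarrow\ x(t)=\frac{\sin t}{\sin 1}.$$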
{ "language": "en", "url": "https://math.stackexchange.com/questions/913554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about left and right derivative. Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a function such that $\forall x\in\mathbb{R}$ there exist: $$f'_+(x)=\lim_{\delta\rightarrow 0^+}\frac{f(x+\delta)-f(x)}{\delta}$$ $$f'_-(x)=\lim_{\delta\rightarrow 0^-}\frac{f(x+\delta)-f(x)}{\delta}$$ and $\forall x:f'_-(x)=2f'_+(x)$. How can I prove that $f$ is constant? I'm really getting nowhere with this kind of problem. EDIT: Actually I have an idea of using a similar technique as in https://math.stackexchange.com/a/79617/161212 but it's still going nowhere.
A) In fact, if a function $f$ has at each point a left and a right derivative, it has a derivative except possibly for $x\in D$, with $D$ at most countable. See: http://books.google.fr/books?id=rbCmt-2NxtIC&printsec=frontcover&hl=fr#v=onepage&q&f=false Look at page 174, point 7) where you will find a proof. B) Now you have that your continuous function $f$ has a zero derivative, except possibly for $x\in D$, with $D$ at most countable. Then $f$ is constant. The classic proof is as follows. a) Suppose first that a continuous function $g$ has a derivative for $x\not \in D$ (supposed at most countable), and $g^{\prime}(x)>0$ for $x\not \in D$. Take $a,b$ with $b>a$. Let $d \not \in g(D)$ ($g(D)$ is at most countable) such that $d\leq g(a)$, and $E=\{x\in [a,b]; g(x)\geq d\}$. Let $c={\rm Sup}(E)$. As $g$ is continuous, we have $c\in E$. Suppose that $c<b$. Then we must have $g(c)=d$. Hence $c\not \in D$. As $g^{\prime}(c)>0$, we get that there exists $\alpha>0$ such that in $[c,c+\alpha[$ we have $g(x)\geq g(c)=d$, a contradiction. Hence $b\in E$, and we have $g(b)\geq d$. As this is true for all $d\not \in g(D)$ with $d\leq g(a)$, we get $g(b)\geq g(a)$, and $g$ is increasing. b) Take now $\varepsilon>0$, and $g(x)=f(x)+\varepsilon x$; off $D$ we have $g^{\prime}(x)=\varepsilon>0$. By a), $g$ is increasing, so for $b>a$ we get $f(b)-f(a)\geq -\varepsilon(b-a)$. As this is true for every $\varepsilon>0$, we get $f(b)\geq f(a)$, so $f$ is increasing. c) But $-f(x)$ also has a zero derivative if $x\not \in D$. Hence $-f(x)$ is increasing, and so $f$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/913632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many ways are there of choosing $k$ distinct items from a set of $n$? Specifically, say I have the integers $1,2,3,\dots,n$ (a set of $n$ integers). I want to select numbers one after another (not at the same time) until I have $k$ distinct numbers. How many ways are there of doing this? Someone told me $nPk$ but I don't understand why. Is there another way to approach it?
From $n$ distinct elements we can choose $k$ elements in $\binom{n}{k}$ ways, and each such choice can be ordered in $k!$ ways, so there exist $$\binom{n}{k}k!=\frac{n!}{k!(n-k)!}k!=\frac{n!}{(n-k)!}$$ possibilities
{ "language": "en", "url": "https://math.stackexchange.com/questions/913756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Finding Interpretations for First Order Logic I'm currently looking at first order logic and I'm having a difficult time with the following question: Now I don't want the answer because that won't really help me. What I am looking for is help with the best approaches or techniques that can be used to solve these types of questions. I've sat now for about 2 hours and I just come up blank. Any help from experienced members would really help.
It is difficult to give hints without essentially solving the problems, but here goes! We will take the natural numbers as meaning $1,2,3,\dots$. If you are using $0,1,2,\dots$ (which I prefer) the examples either need no adjustment or can be trivially adjusted. Finding interpretations in which the given sentences are true should not be difficult. Your sentences all have the basic structure $A\to B$, and in each case it is easy to find interpretations under which $A$ is false and therefore the implication is true. More simply, use for $p(x,y)$, $p(x)$, and $q(x)$ relations that are always true. Or else you can proceed like this. For each of the sentences, make up some simple interpretation for the predicate symbols, and check whether under that interpretation the given sentence is true or false. Whatever the result, you will have done half the problem! We now give some specific hints. We hope that they help lead to answers, but leave some work to you. a) To make this false, recall the example you were undoubtedly given, which looks like this. Let $p(s,t)$ hold if $s$ is the mother of $t$. Everybody has a mother, but it is not true that there is someone who is everybody's mother. Imitate "mother" in the natural numbers. But there is a simpler example involving $\le$, and an even simpler example using $=$. b) For a falsifying interpretation, let $q(x)$ be the expression $x=17$. You can find a suitable $p(x)$. c) For a falsifying interpretation, for $p(x)$ use $x\le 17$. You can find a suitable $q(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/913926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why don't semi-direct products determine a group uniquely? While reading some group theory notes I came across this fact: Proposition: If $G$ is the inner semidirect product of $H,K$ ($G=HK$, $H\cap K=\left\{1\right\}$ and $H\trianglelefteq G$) then $G\cong H\rtimes_fK$ where $f$ is explicitly given by $k\mapsto f_k$ with $f_k(h)=khk^{-1}$. Now I had no problem with this until I read the classification of finite groups of order $pq$ for primes $p,q$ where $p\equiv 1\mod q$. After application of the Sylow Theorems one obtains such $H,K$. But then, the writer turned to the problem of counting the homomorphisms $f:K\to \operatorname{Aut}(H)$. This seems to be the way all other group theory books deal with this, but it is never explained why it is needed. Unless I am confused, the Proposition implies that $G$, up to isomorphism, is determined by $H,K,f$ and $f$ is explicitly given. If $H\rtimes_gK\not\cong H\rtimes_fK$ then it can't be that $G\cong H\rtimes_gK$ as that would contradict the transitivity of $\cong$. If we only had the existence of $f$ in the Proposition, then I agree we would need to determine all such $f$, as $G$ would be the product of $H,K$ and one of these $f$. So why do we need to do this even when we have a specific $f$?
$\newcommand{\Aut}[0]{\mathrm{Aut}}$In case this is what you are interested in, given $H$ and $K$, two different $f$ may well lead to isomorphic groups $G$. For instance, in the case you mentioned, consider the primes $p = 7$ and $q = 3$. Let $K = \langle b \rangle$ and $H = \langle a \rangle$. Consider the homomorphisms $f, g : K \to \Aut(H)$ determined by $$ f : b \mapsto (a \mapsto a^{2}) $$ and $$ g : b \mapsto (a \mapsto a^{4}). $$ Then $$ H \rtimes_{f} K \cong H \rtimes_{g} K. $$ Actually, when $p \equiv 1 \pmod{q}$, you should have seen that there are two isomorphism classes of groups of order $pq$. One, the cyclic group, corresponds to the trivial homomorphism $K \to \Aut(H)$, the other to all other homomorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/913997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove a random variable has normal distribution Let $X$ be a standard normal random variable and $a>0$ a constant. Define: $Y = \begin{cases} \phantom{-}X & \text{if $\,|X| < a$}; \\ -X & \text{otherwise}. \end{cases}$ Show that $Y$ has a standard normal distribution and that the vector $(X,Y)$ does not have a two-dimensional normal distribution. I tried approaching this using the expectation, saying for $g$ a measurable function \begin{align*} E[g(Y)] &= \int_{\mathbb{R}} g(y)\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} dx \\ &= \int_{-\infty}^{-a} g(x)\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} dx + \int_{-a}^a g(-x)\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} dx + \int_a^{\infty} g(x) \frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}} dx \end{align*} as $x^2 = (-x)^2$ I could reason that $Y$ must have normal distribution, but this just doesn't seem clean enough. As to the second question I thought I could express $(X,Y)$ as a function of $X$ and then: $E[g(X,Y)] = E[g(h(X))] = E[f(X)]$ where $f = g \circ h$ a measurable function at which point I was confused by the dimensions.
The distribution of a real-valued random variable $Y$ is determined by its cdf $F(y) := \mathbb{P}(Y \leq y)$ (because sets of the form $\{(-\infty, y]\}$ generate the Borel $\sigma$-algebra on $\mathbb{R}$). Let $N$ denote a standard normal random variable and let $y \in \mathbb{R}$; there are three cases: (i) $y \leq -a$, (ii) $-a < y \leq a$, and (iii) $a < y$. Consider case (ii): \begin{align*} P(Y \leq y) &= P(Y \leq -a) + P(-a < Y \leq y) \\ &= P(-X \leq -a) + P(-a < X \leq y) \\ &= P(X \geq a) + P(-a < N \leq y) \\ &= P(N \geq a) + P(-a < N \leq y) \\ &= P(N \leq -a) + P(-a < N \leq y) \\ &= P(N \leq y). \end{align*} The first equality follows by disjointness and finite additivity; the second because $\{\omega : Y \leq -a\} = \{ \omega : -X \leq -a\}$ and $\{-a < Y \leq y\} = \{-a < X \leq y\}$ for $-a < y \leq a$. The second-to-last line follows by symmetry of $N$. The case (i) is easy and (iii) is symmetric. For the second question, have you drawn the support of the measure on $\mathbb{R}^2$?
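A quick simulation makes both claims plausible (a sketch assuming NumPy and SciPy are available; here $a=1$):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = 1.0
x = rng.standard_normal(200_000)

# Y = X where |X| < a, and Y = -X otherwise.
y = np.where(np.abs(x) < a, x, -x)

# Kolmogorov-Smirnov test against the standard normal cdf:
# a large p-value is consistent with Y ~ N(0, 1).
print(stats.kstest(y, 'norm'))

# (X, Y) is *not* bivariate normal: X + Y would have to be normal,
# but X + Y = 2X on {|X| < a} and exactly 0 elsewhere -- an atom at 0.
print(np.mean(x + y == 0.0))  # roughly P(|X| >= 1), about 0.317
```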
{ "language": "en", "url": "https://math.stackexchange.com/questions/914105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Closed-forms of infinite series with factorial in the denominator How to evaluate the closed-forms of series \begin{equation} 1)\,\, \sum_{n=0}^\infty\frac{1}{(3n)!}\qquad\left|\qquad2)\,\, \sum_{n=0}^\infty\frac{1}{(3n+1)!}\qquad\right|\qquad3)\,\, \sum_{n=0}^\infty\frac{1}{(3n+2)!}\\ \end{equation} Of course Wolfram Alpha can give us the closed-forms \begin{align} \sum_{n=0}^\infty\frac{1}{(3n)!}&=\frac{e}{3}+\frac{2\cos\left(\frac{\sqrt{3}}{2}\right)}{3\sqrt{e}}\\ \sum_{n=0}^\infty\frac{1}{(3n+1)!}&=\frac{e}{3}+\frac{2\sin\left(\frac{\sqrt{3}}{2}-\frac{\pi}{6}\right)}{3\sqrt{e}}\\ \sum_{n=0}^\infty\frac{1}{(3n+2)!}&=\frac{e}{3}-\frac{2\sin\left(\frac{\sqrt{3}}{2}+\frac{\pi}{6}\right)}{3\sqrt{e}} \end{align} but how to get those closed-forms by hand? I can only notice that \begin{equation} \sum_{n=0}^\infty\frac{1}{n!}=\sum_{n=0}^\infty\frac{1}{(3n)!}+\sum_{n=0}^\infty\frac{1}{(3n+1)!}+\sum_{n=0}^\infty\frac{1}{(3n+2)!}=e \end{equation} Could anyone here please help me? Any help would be greatly appreciated. Thank you. PS: Please don't work backward.
$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ With $\ds{\ell = 0,1,2}$: \begin{align} {\cal I}_{\ell}&\equiv\sum_{n = 0}^{\infty}{1 \over \pars{3n + \ell}!}= \sum_{n = 0}^{\infty}\sum_{k=0}^{\infty}{\delta_{k,3n + \ell} \over k!} =\sum_{n,k = 0}^{\infty}{1 \over k!} \oint_{\atop{\atop\verts{z}\ =\ a\ >\ 1}}{1 \over z^{3n + \ell - k + 1}} \,{\dd z \over 2\pi\ic} \\[3mm]&=\oint_{\atop{\atop\verts{z}\ =\ a\ >\ 1}}{1 \over z^{\ell + 1}} \bracks{\sum_{n = 0}^{\infty}\pars{1 \over z^{3}}^{n}} \bracks{\sum_{k = 0}^{\infty}{z^{k} \over k!}}\,{\dd z \over 2\pi\ic} \\[3mm]&=\oint_{\atop{\atop\verts{z}\ =\ a\ >\ 1}}{1 \over z^{\ell + 1}} \,{1 \over 1 - 1/z^{3}}\,\expo{z}\,{\dd z \over 2\pi\ic} =\oint_{\atop{\atop\verts{z}\ =\ a\ >\ 1}} {z^{2 - \ell} \over z^{3} - 1}\,\expo{z}\,{\dd z \over 2\pi\ic} \\[3mm]&=\sum_{m = -1}^{1}{z_{m}^{2 - \ell}\expo{z_{m}} \over 3z_{m}^{2}}\qquad \mbox{where}\qquad z_{m} \equiv \exp\pars{2m\pi\ic \over 3}\,,\quad m = -1,0,1 \end{align} Then, \begin{align} {\cal I}_{\ell} &= {1 \over 3} \sum_{m = -1}^{1}z_{m}^{-\ell}\expo{z_{m}} ={1 \over 3}\,\expo{} + {2 \over 3}\,\Re\pars{z_{1}^{-\ell}\expo{z_{1}}} ={1 \over 3}\,\expo{} +{2 \over 3}\,\Re\pars{\expo{-2\ell\pi\ic/3}\exp\pars{\expo{2\pi\ic/3}}} \\[3mm]&={1 \over 3}\,\expo{} +{2 \over 3}\, \Re\pars{\expo{-2\ell\pi\ic/3}\exp\pars{-\,\half + {\root{3} \over 2}\,\ic}} \\[3mm]&={1 \over 3}\,\expo{} +{2 \over 3\root{\expo{}}}\, \Re\exp\pars{\bracks{{\root{3} \over 2} - {2\pi \over 3}\,\ell}\ic} \end{align} $$ {\cal I}_{\ell}\equiv\sum_{n = 0}^{\infty}{1 \over \pars{3n + \ell}!} ={1 \over 3}\,\expo{} +{2 \over 3\root{\expo{}}}\,\cos\pars{{\root{3} \over 2} - {2\pi \over 3}\,\ell} \,,\qquad\ell = 0,1,2 $$ $$\begin{array}{rclcl} {\cal I}_{0}&=&\color{#66f}{\large\sum_{n = 0}^{\infty}{1 \over \pars{3n}!}} &=&{1 \over 3}\,\expo{} +{2 \over 3\root{\expo{}}}\,\cos\pars{{\root{3} \over 2}} \\[5mm] {\cal I}_{1}&=&\color{#66f}{\large\sum_{n = 0}^{\infty}{1 \over \pars{3n + 1}!}} &=&{1 \over 3}\,\expo{} +{2 \over 3\root{\expo{}}}\ \overbrace{\cos\pars{{\root{3} \over 2} - {2\pi \over 3}}} ^{\ds{\color{#c00000}{\sin\pars{{\root{3} \over 2} - {\pi \over 6}}}}} \\[5mm] {\cal I}_{2}&=&\color{#66f}{\large\sum_{n = 0}^{\infty}{1 \over \pars{3n + 2}!}} &=&{1 \over 3}\,\expo{} +{2 \over 3\root{\expo{}}}\ \underbrace{\cos\pars{{\root{3} \over 2} - {4\pi \over 3}}} _{\ds{\color{#c00000}{-\sin\pars{{\root{3} \over 2} + {\pi \over 6}}}}} \end{array} $$
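For what it's worth, the three closed forms can be confirmed numerically (a sketch assuming the mpmath library is available):

```python
from mpmath import mp, e, cos, sin, pi, sqrt, factorial, nsum, inf

mp.dps = 30

def series(l):
    # sum_{n=0}^{infinity} 1 / (3n + l)!
    return nsum(lambda n: 1 / factorial(3 * n + l), [0, inf])

closed = [
    e/3 + 2*cos(sqrt(3)/2) / (3*sqrt(e)),
    e/3 + 2*sin(sqrt(3)/2 - pi/6) / (3*sqrt(e)),
    e/3 - 2*sin(sqrt(3)/2 + pi/6) / (3*sqrt(e)),
]

for l in range(3):
    print(series(l) - closed[l])  # each difference is ~ 1e-30
```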
{ "language": "en", "url": "https://math.stackexchange.com/questions/914176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Extremal Finite Set Book Recommendation I want to read Extremal Finite Set Combinatorics in some detail. (By 'Extremal Finite Set Combinatorics' I mean the subject which covers theorems like Sperner's Theorem, Erdos-Ko-Rado Theorem, deBruijn-Erdos Theorem etc.) Can somebody please recommend me a good book for self-study for this purpose? Thanks.
I would recommend Ian Anderson's book: Combinatorics of Finite Sets. It is very readable and gives a good introduction. Another good choice is Bollobas's book: Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Combinatorial Probability. Also readable, with a bit more emphasis on probabilistic ideas. There are also a couple of chapters of an unfinished book here http://www.renyi.hu/~ohkatona/ For linear algebraic techniques there is a great preliminary manuscript due to Frankl and Babai, which is linked to here: https://mathoverflow.net/questions/17006/linear-algebra-proofs-in-combinatorics One more nice introduction in a broader setting is the notes of this BSM class; they are available somewhere online I think but I can't find them at the moment: http://www.bsmath.hu/11spring/com2a_11s.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/914260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
The alternating group is generated by three-cycles Prove that, for $n \geq 3$, the three-cycles generate the alternating group $A_n$ Proof: We multiply on the left by 3-cycles to "reduce" an even permutation $p$ to the identity, using induction on the number of indices fixed by a permutation. How the indices are numbered is irrelevant. If $p$ contains a $k$-cycle with $k \geq 3$, we may assume that it has the form $p=(123\dots k)\dots$ Multiplying on the left by $(321)$ gives $$p'= (321)(123 \dots k)\dots=(1)(2)(3\dots k)\dots$$ More fixed indices. What do you think?
The idea of the proof is to show that any product of two transpositions is a product of $3$-cycles; since every member of $A_n$ is a product of an even number of transpositions, this suffices. To begin with, suppose that the two transpositions $a_1$ and $a_2$ both move a common letter $a$ of the set $\{1,2,\dots,n\}$. Then $a_1$ has the form $(a\ b)$ and $a_2$ has the form $(a\ c)$. In this case we have $a_1 a_2 = (a\ b)(a\ c)= (a\ c\ b)$ and we are done. Now suppose that $a_1$ and $a_2$ move different letters: $a_1 = (a\ b)$ and $a_2 = (c\ d)$. Then $a_1a_2 = (a\ b)(c\ d) = (d\ a\ c)(a\ b\ d)$. And we are done.
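Both identities can be sanity-checked mechanically (a plain-Python sketch; permutations are encoded as dicts on $\{a,b,c,d\}=\{0,1,2,3\}$ and products act right-to-left, as in the answer):

```python
def cycle(*elts, n=4):
    # Build the permutation of {0, ..., n-1} given by one cycle.
    p = {i: i for i in range(n)}
    for i, x in enumerate(elts):
        p[x] = elts[(i + 1) % len(elts)]
    return p

def compose(f, g):
    # (f o g)(x) = f(g(x)); g is applied first, matching right-to-left products.
    return {x: f[g[x]] for x in g}

a, b, c, d = 0, 1, 2, 3
# (a b)(a c) = (a c b)
assert compose(cycle(a, b), cycle(a, c)) == cycle(a, c, b)
# (a b)(c d) = (d a c)(a b d)
assert compose(cycle(a, b), cycle(c, d)) == compose(cycle(d, a, c), cycle(a, b, d))
print("both identities hold")
```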
{ "language": "en", "url": "https://math.stackexchange.com/questions/914338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 2, "answer_id": 0 }
Determining whether points are collinear $(1,1)(3,9)(6,21)$ The way I figured that this should be solved is by finding the slope of: $(1,1)(3,9)$ Then, $(3,9)(6,21)$ Finally $(1,1)(6,21)$ Which are 4, 4, and 4 respectively. So I assume that they are collinear. Am I correct? And if not, please provide me with an explanation as to what needs to be done to find the answer rather than a direct answer.
Yes, you are correct. It is sufficient to calculate only two slopes to decide that three points are collinear.
{ "language": "en", "url": "https://math.stackexchange.com/questions/914443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
About the definition of complex multiplication Some people say that the complex product is the way it is to respect the distributive law of multiplication. However, the distributive law acts on whole numbers, like: $$(a+b)(c+d) = ac + ad + bc + bd$$ The multiplication for complex numbers would be something of the form: $$(z_1+z_2)(z_3+z_4) = z_1z_3 + z_1z_4 + z_2z_3 + z_2z_4$$ where $z_n$ is a complex number of the form $a+bi$. I don't see why the distributive law must act inside the number: $$(a+bi)(c+di) = ac + adi + bci + bdi^2$$ What's the real reason for this definition?
Let $z_1 = a, z_2 = bi, z_3 = c, z_4 = di$. Then the rule \begin{equation*} \tag{$\spadesuit$}(z_1 + z_2)(z_3 + z_4) = z_1 z_3 + z_1 z_4 + z_2 z_3 + z_2 z_4 \end{equation*} tells us that \begin{align*} (a + bi)(c + di) &= ac + adi + bci + bd i^2 \\ &= ac - bd + (ad + bc)i. \end{align*} So if we want ($\spadesuit$) to be true, we are forced to use the standard formula for multiplication of complex numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/914571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Easy exponentiation method Is there a simple way of evaluating, say, $x^{3/2}$? For example, one way of evaluating $16^{3/2}$ is to calculate the square root of $16^3$, but I was wondering if there is a simpler mental trick for doing this that generalizes to all possible exponentiations.
If you could see that $16=4^2$ then you could do this $$\left(4^2\right)^{3/2}=4^3$$ If you can see such a number that would be the fastest method, another method would be the one rae306 mentioned.
{ "language": "en", "url": "https://math.stackexchange.com/questions/914697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
General term of the partial sums of a sequence I am trying to find the limit of an infinite series given as $$\sum_{n=2}^\infty\frac{1}{n^2-1}.$$ I came across the following general term of the sequence of partial sums $$3/4-\left(\frac{1}{2n}+\frac{1}{2(n+1)}\right).$$ I would appreciate assistance to understand how this expression is arrived at. I have tried breaking down the original expression into partial fractions, but cannot get to the given result.
You have $$ \frac{1}{n^2-1} = \frac 1 2 \left(\ \underbrace{\frac{1}{n-1}-\frac{1}{n+1}\ }_{\text{Call this $\{$A$\}$}} \right) $$ Consequently \begin{align} & \frac12\left( \left(\frac{1}{2-1} - \frac{1}{2+1}\right) + \left(\frac{1}{3-1} - \frac{1}{3+1}\right) + \left( \frac{1}{4-1} - \frac{1}{4+1} \right) + \cdots + \{\text{A}\} \right) \\[10pt] = {} & \frac 1 2 \left( \left(\frac 1 1 - \frac 1 3\right) + \left( \frac 1 2 - \frac 1 4 \right) + \left( \frac 1 3 - \frac 1 5 \right) + \left( \frac 1 4 - \frac 1 6 \right) + \left( \frac 1 5 - \frac 1 7 \right) + \cdots +\{\text{A}\} \right). \end{align} So $+1/3$ cancels $-1/3$; $+1/4$ cancels $-1/4$; $+1/5$ cancels $-1/5$, and so on. Only a few terms at the beginning and the end do not cancel, and you're left with an expression for the partial sum whose complexity does not grow with the number of terms.
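As a quick sanity check of the resulting partial-sum formula (a plain-Python sketch using exact rationals; the sum starts at $n=2$):

```python
from fractions import Fraction

def partial_sum(N):
    # sum_{n=2}^{N} 1/(n^2 - 1), computed exactly
    return sum(Fraction(1, n*n - 1) for n in range(2, N + 1))

for N in (5, 10, 100):
    formula = Fraction(3, 4) - Fraction(1, 2*N) - Fraction(1, 2*(N + 1))
    assert partial_sum(N) == formula
print("partial sums match; the limit is 3/4")
```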
{ "language": "en", "url": "https://math.stackexchange.com/questions/914853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
What is $f(f^{-1}(A))$? Suppose that $f : E \rightarrow F$. What is $f(f^{-1}(A))$? Is it always $A$? $f^{-1}$ is the inverse function. This is not a homework, I'm confused by this statement.
$ \newcommand{\calc}{\begin{align} \quad &} \newcommand{\calcop}[2]{\\ #1 \quad & \quad \text{"#2"} \\ \quad & } \newcommand{\endcalc}{\end{align}} $(This is essentially the same answer as the one by Henno Brandsma, but a bit more expanded in notation I like better, taking smaller steps.) Which elements $\;y\;$ are in $\;f[f^{-1}[A]]\;$? Let's expand the definitions. (Implicitly, we let $\;x \in E\;$ and $\;y \in F\;$.) $$\calc y \in f[f^{-1}[A]] \calcop{\equiv}{definition of $\;\cdot[\cdot]\;$} \langle \exists x : x \in f^{-1}[A] : f(x) = y \rangle \calcop{\equiv}{basic property of $\;\cdot^{-1}[\cdot]\;$} \langle \exists x : f(x) \in A : f(x) = y \rangle \calcop{\equiv}{logic: substitute for $\;y\;$ in left hand part from right hand part} \langle \exists x : y \in A : f(x) = y \rangle \calcop{\equiv}{logic: extract part not using $\;y\;$ out of $\;\exists\;$} y \in A \;\land\; \langle \exists x :: f(x) = y \rangle \calcop{\equiv}{make implicit range explicit} y \in A \;\land\; \langle \exists x : x \in E : f(x) = y \rangle \calcop{\equiv}{definition of $\;\cdot[\cdot]\;$} y \in A \;\land\; y \in f[E] \calcop{\equiv}{definition of $\;\cap\;$} y \in A \cap f[E] \endcalc$$ By set extensionality, $\;f[f^{-1}[A]] \;=\; A \cap f[E]\;$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/914957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Trying to Integrate$ \iint xy\log|x-y|\, dy\,dx $ Hello I am trying to integrate $$ I:=\int_{a}^{b}\int_{a}^{b}xy\log\left(\,\left\vert\,x - y\,\right\vert\,\right) \,{\rm d}y\,{\rm d}x,\qquad 0 < a <b $$ for $x,y\in \mathbb{R}$. I added the bounds of integration to deal with any convergence problems as suggested. We can re-write this as $$ I=\frac{1}{2}\int_a^b \int_a^b xy\log \left((x-y)^2\right)\, dy\,dx. $$ From here we can re-write the log as $$ \frac{1}{2}\int_a^b \int_a^b xy \left(\int\frac{2}{x-y}dx\right)dy\, dx. $$ I am not sure how to approach this from here till end. Thanks.
First notice the integrand is symmetric in $x$ and $y$, so it suffices to double the $y > x$ portion; that is $$ I:=2\int_{a}^{b}\int_{x}^{b}xy\log\left(y - x \right) \,{\rm d}y\,{\rm d}x,\qquad 0 < a <b$$ Change variables to $u = y - x$ and $v = y + x$, so that ${\rm d}x\,{\rm d}y = \frac12\,{\rm d}u\,{\rm d}v$. The triangle of integration rotates to another triangle and we get $$\int_{0}^{b - a}\int_{ 2a + u }^{2b - u}{v - u \over 2}{v + u \over 2}\log\left(u \right) \,{\rm d}v\,{\rm d}u,\qquad 0 < a <b$$ $$= {1 \over 4}\int_{0}^{b - a}\log\left(u \right)\int_{ 2a + u }^{2b - u}(v^2 - u^2) \,{\rm d}v\,{\rm d}u,\qquad 0 < a <b$$ $$ = {1 \over 4}\int_{0}^{b - a}\log\left(u \right)\left(-{4 \over 3}\right)(2 a^3 + 3 a^2 u - (b - u)^2 (2 b + u)) \,du$$ The integral rather sucks but it can be done in an elementary fashion through an integration by parts.
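The reduction can be spot-checked numerically (a sketch assuming SciPy; the integrand has an integrable logarithmic singularity, so the quadrature may emit a warning):

```python
import numpy as np
from scipy import integrate

a, b = 1.0, 2.0

# Original integral: int_a^b int_a^b x*y*log|x - y| dy dx
orig, _ = integrate.dblquad(lambda y, x: x*y*np.log(abs(x - y)),
                            a, b, lambda x: a, lambda x: b)

# Reduced single integral from the change of variables above
inner = lambda u: np.log(u) * (-4.0/3.0) * (
    2*a**3 + 3*a**2*u - (b - u)**2 * (2*b + u))
red, _ = integrate.quad(inner, 0.0, b - a)

print(orig, red / 4.0)  # the two values should agree up to quadrature error
```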
{ "language": "en", "url": "https://math.stackexchange.com/questions/915045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to compute dimension of $O(n,\mathbb{R})$ Let $f:GL(n,\mathbb{R})\to GL(n,\mathbb{R})$ be the smooth map $A\mapsto A^TA$. Observe that $f$ has constant rank on $GL(n,\mathbb{R})$ by chain rule and that $O(n,\mathbb{R})$ is the preimage of $I$, therefore is a regular submanifold. To compute the dimension of $O(n,\mathbb{R})$, it thus suffices to compute the rank of $f$ at any point, e.g at $I$. In an appropriate choice of bases, the matrix of $Df|_{I}$ is given by $$\left[\frac{\partial f^{ij}}{\partial x^{kl}}|_I\right]$$ where $(i,j)$ with $1\le i,j\le n$ gives the row index and $(k,l)$ $1\le k,l\le n$ gives the column index. So how do I proceed from here really?
The dimension of $O(n)$ and $SO(n)$ is $\frac{n^2-n}{2}$. This is best seen by computing their Lie algebra $\mathfrak{so}(n)$, which obviously has vector space dimension $\frac{n^2-n}{2}$. For details, see for example here. See also the answer in the question Show that an orthogonal group is a $\frac{n(n-1)}{2}$-dim. $C^\infty$-manifold and find its tangent space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/915138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why induction cannot be used for infinite sets? Explain why induction cannot be used to conclude: $$\left(\bigcup_{n=1}^\infty A_n\right)^c = \bigcap_{n=1}^\infty A_n^c$$
You should look at how your text defines the set $\left( \bigcup_{n=1}^\infty A_n\right)$. My guess is that it defines it as the set of all elements $x$ for which there exists an $A_n\in \{A_i\}_{i=1}^\infty$ with the property that $x\in A_n$. To expand on the previous answer by Pinilla: we can't conclude that $\sum_{i=1}^\infty \frac{1}{i} $ is finite even though $\sum_{i=1}^N \frac{1}{i}$ is finite for every positive integer $N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/915234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Irreducible components of fiber bundle Suppose $\pi:X \rightarrow Y$ is a (locally trivial) fiber bundle $F$, where all spaces are Noetherian. Suppose $F$ and $Y$ are irreducible; show that $X$ is also irreducible. Here a similar question is discussed; but this question should be simpler, and should have an easier proof. Thanks in advance.
The following proof works if one assumes that the fiber $F$ is proper. I am suspicious it might not be true if the fiber is not proper. Let $X = Z_1 \cup Z_2$ where each $Z_i$ is closed. Then since each fiber $\iota_y : F \hookrightarrow X$ is irreducible, we have that either $\iota_y^{-1}(Z_i)$ is of the same dimension as $F$ and equal to all of $F$, or has dimension strictly less than that of $F$. Note that the morphism $X \rightarrow Y$ is proper if and only if $F$ is proper. To see this, properness is local on the target, and restricting to an open cover our map is of the form $U \times F \rightarrow U$. This morphism is proper if and only if $F$ is. Assume $F$ and hence $X \rightarrow Y$ is proper, by upper-semicontinuity of the dimension of the fiber with respect to the morphism $Z_i \rightarrow Y$, we see that there is a closed subscheme $W_i \subset Y$ where for every closed point $y \in W_i$, $\iota_y^{-1}(Z_i)$ consists of the entire fiber $F$. Since $W_1 \cup W_2 = Y$, it must be that $W_1 = Y$ or $W_2 = Y$. Of course, this implies $Z_1 = X$ or $Z_2 = X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/915300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equation involving a partial trace Is there, in general, a solution to the following equation? $\text{Tr}_{V_1}(A(X\otimes I_{V_2})) = B$ where * *A is an operator on $V_1\otimes V_2$, *$B$ is an operator on $V_2$, *$I_{V_2}$ is the identity on $V_2$. *$X$ is an unknown operator on $V_1$. Assuming that $V_1\cong V_2$ and reasonable invertibility constraints, can we solve for $X$? Intuitively, I am wondering if composing and partially tracing on one component loses information.
If $V_1,V_2$ are finite dimensional, we can write $A=\sum_{i=1}^nA_i\otimes B_i$, where $A_1,\ldots,A_n$ are linear independent operators acting on $V_1$ and $B_1,\ldots,B_n$ are linear independent operator acting on $V_2$. Your equation is equivalent to $\sum_{i=1}^ntr(A_iX)B_i=B$. You can solve this equation iff $B\in \text{span}\{B_1,\ldots,B_n\}$.
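In finite dimensions this is just a linear system in the $m^2$ entries of $X$; the following is a minimal numeric sketch (assuming NumPy; the helper names `ptrace_V1` and `Phi` are just for this example) that builds the system and solves it by least squares:

```python
import numpy as np

m = n = 3  # dim V1 = m, dim V2 = n
rng = np.random.default_rng(1)
A = rng.standard_normal((m*n, m*n))

def ptrace_V1(M):
    # Partial trace over the first factor: sum_i M[(i,k),(i,l)]
    return np.einsum('ikil->kl', M.reshape(m, n, m, n))

def Phi(X):
    # The linear map X -> Tr_{V1}( A (X tensor I_{V2}) )
    return ptrace_V1(A @ np.kron(X, np.eye(n)))

# Vectorize Phi as an (n^2 x m^2) matrix and solve Phi(X) = B by least squares.
M = np.column_stack([Phi(E.reshape(m, m)).ravel() for E in np.eye(m*m)])
B = rng.standard_normal((n, n))
x, *_ = np.linalg.lstsq(M, B.ravel(), rcond=None)
X = x.reshape(m, m)
print(np.allclose(Phi(X), B))  # True iff B lies in the range of the map
```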
{ "language": "en", "url": "https://math.stackexchange.com/questions/915374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simplifying the sum of powers of the golden ratio I seem to have forgotten some fundamental algebra. I know that: $(\frac{1+\sqrt{5}}{2})^{k-2} + (\frac{1+\sqrt{5}}{2})^{k-1} = (\frac{1+\sqrt{5}}{2})^{k}$ But I don't remember how to show it algebraically. Factoring out the common factor on the LHS gives $(\frac{1+\sqrt{5}}{2})^{k-2}(1+(\frac{1+\sqrt{5}}{2}))$ which doesn't really help
$$ \left (\frac{1+\sqrt{5}}{2} \right )^{k-2} + \left (\frac{1+\sqrt{5}}{2} \right )^{k-1} = \left ( \frac{1+ \sqrt{5}}{2}\right )^{k-2} \left ( 1+ \frac{1+ \sqrt{5}}{2}\right)$$ It is known that the Greek letter phi (φ) represents the golden ratio, whose value is: $$\phi=\frac{1+ \sqrt{5}}{2}$$ One of its identities is: $$\phi^2=\phi+1$$ Therefore: $$ 1+ \frac{1+ \sqrt{5}}{2}= \left ( \frac{1+\sqrt{5}}{2}\right)^2$$ So: $$ \left ( \frac{1+ \sqrt{5}}{2}\right )^{k-2} \left ( 1+ \frac{1+ \sqrt{5}}{2}\right)= \left ( \frac{1+\sqrt{5}}{2}\right)^k$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/915456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show $\prod \limits_{\substack{k=0\\k\neq k_0}}^{K-1}\left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right)=(-1)^{K-1} K$ I know it is true that, for any $k_0 \text{ s.t. } 0\leq k_0 < K$, $$\prod \limits_{\substack{k=0\\k\neq k_0}}^{K-1} \left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right)=(-1)^{K-1} K$$ However, I'm unable to come up with a nice (hopefully intuitive) way to prove it. Any suggestions? Note: I know this is true because this is the expression for a residue of $z^{K-1}/(1+z^K)$, and I can compute the residue using limits. (Thanks to David for corrections)
We can evaluate the product as follows: $$\eqalign{P &=\prod_{k=0}^{k_0-1}\left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right) \prod_{k=k_0+1}^{K-1} \left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right)\cr &=\prod_{k=K}^{K+k_0-1} \left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right) \prod_{k=k_0+1}^{K-1} \left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right)\cr &=\prod_{k=k_0+1}^{K+k_0-1}\left(\exp\left[{\frac{2 \pi i (k-k_0)}{K}}\right]-1\right)\cr &=\prod_{m=1}^{K-1} \left(\exp\left[{\frac{2 \pi im}{K}}\right]-1\right)\ .\cr}$$ If we write $$f(x)=\prod_{m=1}^{K-1} \left(x-\exp\left[{\frac{2 \pi im}{K}}\right]\right)=x^{K-1}+x^{K-2}+\cdots+x+1$$ then $$P=(-1)^{K-1}f(1)=(-1)^{K-1}K\ ,$$ in agreement with the stated formula. Addendum. Formula for $f(x)$: the roots of $x^K=1$ are $x=\exp(2\pi i m/K)$ for $m=0,1,\ldots,K-1$. Therefore $$x^K-1=\prod_{m=0}^{K-1} \left(x-\exp\left[{\frac{2 \pi im}{K}}\right]\right)\ ,$$ and dividing both sides by $x-1$ gives $$x^{K-1}+x^{K-2}+\cdots+x+1 =\prod_{m=1}^{K-1}\left(x-\exp\left[{\frac{2 \pi im}{K}}\right]\right)\ .$$
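A short numeric check of the identity (a sketch using Python's built-in complex arithmetic):

```python
import cmath

def product(K, k0):
    p = 1.0
    for k in range(K):
        if k != k0:
            p *= cmath.exp(2j * cmath.pi * (k - k0) / K) - 1
    return p

for K in range(2, 8):
    for k0 in range(K):
        expected = (-1) ** (K - 1) * K
        assert abs(product(K, k0) - expected) < 1e-9
print("product = (-1)^(K-1) * K for all tested K, k0")
```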
{ "language": "en", "url": "https://math.stackexchange.com/questions/915549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of alternating series Looking to find the sum of the following series: $$\sum^\infty_{n=1}\frac{(-1)^n}{(2n+1)3^n}$$ It converges due to the Alternating Series Test. Prior attempts had me isolating the numerator and the $3^n$ in one fraction, and using a geometric series. However, this proved fruitless since it left the other part of the fraction with a denominator that implies divergence. I am looking to help a calculus student who turned to me for advice, but I myself have not been worked with converging series in almost a decade, and so I'm not as quick with the material as I used to be.
To find the sum consider the series $$ \sum_{n=0}^{\infty} x^{2n} = \frac{1}{1-x^2} $$ Integrate both sides from zero to $x$ and then substitute $x=i/\sqrt{3}$ and simplify. Notes: To evaluate the integral just use the change of variables $ t=\sin(u) $ which gives $$ I = \int_{0}^{x} \frac{dt}{1-t^2} = \int_{0}^{\arcsin(x)}\sec(u)du=\dots\,. $$ You need the integral $$ \int \sec(u)du = \ln(\sec(u)+\tan(u)). $$
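Carrying the hint through should give $\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)3^n}=\frac{\pi\sqrt{3}}{6}$, so the series starting at $n=1$ equals $\frac{\pi\sqrt{3}}{6}-1$; a quick numeric check (a sketch assuming mpmath):

```python
from mpmath import mp, nsum, mpf, pi, sqrt, inf

mp.dps = 30
s = nsum(lambda n: (-1)**int(n) / ((2*n + 1) * mpf(3)**n), [1, inf])
print(s)                   # -0.0930972...
print(pi*sqrt(3)/6 - 1)    # matches: the closed form is pi*sqrt(3)/6 - 1
```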
{ "language": "en", "url": "https://math.stackexchange.com/questions/915629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
"Proof" that $1-1+1-1+\cdots=\frac{1}{2}$ and related conclusion that $\zeta(2)=\frac{\pi^2}{6}.$ Sorry if this has been posted before. Can somebody please tell me whether this result is correct, and give explanation as to why or why not? I'm not good at the formal side of maths. Start here: $$\sum\limits_{k=0}^{\infty}e^{ki\vartheta}=\frac{1}{2}+\frac{i}{2}\cot\frac{\vartheta}{2},~0<\vartheta<2\pi.$$ Then equate the real and imaginary parts, so $$\begin{align*}\sum\limits_{k=1}^{\infty}\cos k\vartheta &=-\frac{1}{2},\\ \sum\limits_{k=1}^{\infty}\sin k\vartheta &=0.\end{align*}$$ For $\varphi=\vartheta+\pi$ for $-\pi<\varphi<\pi$ we could write the cosine equation as $\frac{1}{2}-\cos\varphi+\cos 2\varphi-\cdots=0$ which would mean $$1-1+1-1+\cdots=\frac{1}{2}.$$ I'm not a mathematician - is this valid? Edit: For context, here is why I want this result. If the cosine formula holds and we can integrate it twice to some angle $0<\varphi<\alpha$ then get this interesting result $$\sum\limits_{k=1}^{\infty}(-1)^{k+1}\frac{1-\cos k\alpha}{k^2}=\frac{\alpha^2}{4}$$ which for the angle of $\pi$ would imply that $$\sum\limits_{k=1}^{\infty}\frac{1}{(2k-1)^2}=\frac{\pi^2}{8}=\zeta(2)-\sum\limits_{k=1}^{\infty}\frac{1}{(2k)^2}=\frac{3}{4}\zeta(2)$$ and finally we get $\zeta(2)=\frac{\pi^2}{6}.$ It's interesting that such a pretty result comes out of what is essentially crappy maths. Also has that $$1-\frac{1}{4}+\frac{1}{9}-\cdots=\frac{\pi^2}{12}$$ by the way.
The equation: $$\sum\limits_{k=1}^{\infty}(-1)^{k+1}\frac{1-\cos k\alpha}{k^2}=\frac{\alpha^2}{4}$$ seems to be valid for all $\alpha\in[-\pi,\pi]$, so something must have gone right with your "proof" (or at least the idea behind it), even if it seems flawed "as-is." I suspect that your original series are "true" if you view them with the "Cesàro" or "Abel" definition (for all but finitely many points, I hope), which makes the derivation valid. Can someone verify this?
{ "language": "en", "url": "https://math.stackexchange.com/questions/915710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
Telescoping series of form $\sum (n+1)\cdots(n+k)$ Wolfram Alpha is able to telescope sums of the form $\sum (n+1)\cdots(n+k)$ e.g. $(1\cdot2\cdot3) + (2\cdot3\cdot4) + \cdots + n(n+1)(n+2)$ How does it do it? EDIT: We can rewrite as: $\sum {(n+k)! \over n!} = \sum k!{(n+k)!\over n!k!} = \sum k!{{n+k} \choose n}$ (Thanks Daniel Fischer) EDIT2: We can also multiply out and split sums. So e.g. $$\sum (n-1)n(n+1) = \sum (n^3-n) = \sum n^3 - \sum n$$ But sums of powers actually seem to be more nasty than the original question, involving Bernoulli numbers. (Thanks Claude Leibovici) And is there any name for this particular corner of maths? (i.e. How might I go about searching the Internet for information regarding this?) PS please could we have a 'telescoping' tag?
Let $p(n, k) =n(n+1)...(n+k-1) =\prod_{j=0}^{k-1} (n+j) $. Then (writing each step in detail) $\begin{array}\\ p(n+1, k)-p(n, k) &=\prod_{j=0}^{k-1} (n+1+j)-\prod_{j=0}^{k-1} (n+j)\\ &=\prod_{j=1}^{k} (n+j)-\prod_{j=0}^{k-1} (n+j)\\ &=(n+k)\prod_{j=1}^{k-1} (n+j)-n\prod_{j=1}^{k-1} (n+j)\\ &=((n+k)-n)\prod_{j=1}^{k-1} (n+j)\\ &=k\prod_{j=1}^{k-1} (n+j)\\ &=k\prod_{j=0}^{k-2} (n+1+j)\\ &=kp(n+1, k-1)\\ \end{array} $ or, putting $k+1$ for $k$ and $n$ for $n+1$, $p(n, k) =\frac{p(n, k+1)-p(n-1, k+1)}{k+1} $. Therefore $\begin{array}\\ \sum_{n=1}^M p(n, k) &=\sum_{n=1}^M \frac{p(n, k+1)-p(n-1, k+1)}{k+1}\\ &=\frac1{k+1}\sum_{n=1}^M (p(n, k+1)-p(n-1, k+1))\\ &=\frac1{k+1}(p(M, k+1)-p(0, k+1))\\ &=\frac{p(M, k+1)}{k+1}\\ \end{array} $
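A quick sanity check of the closed form $\sum_{n=1}^M p(n,k) = p(M,k+1)/(k+1)$ (a plain-Python sketch):

```python
import math

def p(n, k):
    # rising product n (n+1) ... (n+k-1); the empty product is 1
    return math.prod(range(n, n + k))

M, k = 20, 3
lhs = sum(p(n, k) for n in range(1, M + 1))  # 1*2*3 + 2*3*4 + ... + 20*21*22
rhs = p(M, k + 1) // (k + 1)                 # M(M+1)(M+2)(M+3)/4
print(lhs == rhs)  # True
```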
{ "language": "en", "url": "https://math.stackexchange.com/questions/915793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 8, "answer_id": 3 }
The line is tangent to a parabola The line $y = 4x-7$ is tangent to a parabola that has a $y$-intercept of $-3$ and the line $x=\frac{1}{2}$ as its axis of symmetry. Find the equation of the parabola. I really need help solving this question. Thanks
Let the parabola be defined by the function $y = f(x)$ and let $g(x) = f(x) - 4x + 7.$ Then $y = g(x)$ is a parabola passing through $(0,4)$ and it has slope $-4$ at $x = \frac 12.$ In addition, the parabola described by $y = g(x)$ is tangent to the $x$-axis so $g(x)$ can be written $g(x) = a(x - p)^2.$ We can also write the derivative of $g$ with respect to $x$ as $g^\prime(x) = 2a(x - p).$ This gives us $g(0) = ap^2 = 4$ and $g^\prime\left(\frac 12\right) = a(1 - 2p) = -4.$ Then $$ \frac{ap^2}{a(1 - 2p)} = \frac{4}{-4}, $$ and we can simplify both sides of this to conclude that $$ \frac{p^2}{1 - 2p} = -1. $$ Rearranging terms, $$ p^2 = -1(1 - 2p), $$ $$ p^2 - 2p + 1 = 0 $$ Solve for $p;$ that is the $x$-coordinate of the (unique) point where $g(x) = 0.$ It's also the $x$-coordinate of the unique point where $f(x) = 4x - 7.$ Given $p,$ use one of the previous equations (such as $ap^2 = 4$) to solve for $a.$ You can then expand $g(x)$ as a simple polynomial. Use the fact that $f(x) = g(x) + 4x - 7$ to write $f(x).$ You can solve the problem without introducing the additional parabola $y = g(x),$ but I like the geometric intuition of this method. The key idea is that by a change of coordinates ($y \rightarrow y - 4x + 7$), we transform the parabola $y = f(x)$ into one with a (somewhat) easier geometry (or at least so I think). It seems a mere coincidence that the point of tangency of $y = f(x)$ and $y = 4x - 7$ just happens to be a point that we could already know is on $y = f(x)$ by virtue of the axis of symmetry and the one given point of that parabola. That's not a necessary condition for this problem to be solvable. It does suggest a clever insight that for this particular problem that leads to a solution more quickly than the steps above, because you can find the point of tangency almost right away.
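For the record, these steps lead to $p=1$ and $a=4$, hence $f(x)=4x^2-4x-3$; a short symbolic verification (a sketch assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')
f = 4*x**2 - 4*x - 3  # the parabola obtained from p = 1, a = 4

print(f.subs(x, 0))                # -3: the required y-intercept
print(sp.solve(sp.diff(f, x), x))  # [1/2]: the axis of symmetry
print(sp.factor(f - (4*x - 7)))    # 4*(x - 1)**2: tangency to the line at x = 1
```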
{ "language": "en", "url": "https://math.stackexchange.com/questions/915882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to show $\int\frac{d}{dx}(a^u)dx=a^u+C$ more rigorously? We all know that $$\int\frac{d}{dx}(a^u)dx=a^u+C$$ where I am differentiating with respect to $x$. But how can I write it in a more rigorous way like for example using the Fundamental Theorem of Calculus or any other method? How can I deduce that $$\int d(a^u)=a^u+C$$ The notation looks a bit odd for me because normally we view integral as the antiderivative or the limit of Riemann sum, but this time what does $\int d(a^u)$ actually mean? Also, when are we allowed to swap the order of differentiation and integration? Sorry, maybe I need to clarify that $a$ is a constant and $u$ is a function of $x$. Just for reference, my knowledge about differentials is that $dy=f'(x)dx$. Thanks for the explanation.
In calculus (handling of derivatives and primitives, etc.) it is accepted to present functions $f:\ x\mapsto f(x)$ as function terms $f(x)$. This is in contrast to "abstract analysis" where $f(x)$ just denotes the value of the function $f$ at the point $x\in{\rm dom}(f)$, and is a number. Without saying it you are actually considering a function $x\mapsto u(x)$ and then the function $f(x):=a^{u(x)}$. Therefore the term $${d\over dx}a^u$$ appearing in your question means the function $x\mapsto f'(x)$ obtained by using the chain rule on $x\mapsto f(x)=a^{u(x)}$. The expression $$\int{d\over dx}a^{u(x)}\>dx=\int f'(x)\>dx$$ by definition denotes the set of all primitives of $f'(x)$, and this is the set of all functions $x\mapsto f(x)+C$, $\>C\in{\mathbb R}$. As $f(x)=a^{u(x)}$ we therefore have $$\int{d\over dx}a^{u(x)}\>dx=a^{u(x)}+C\ ,$$ as stated in your question. Apart from the "dangling" $C$ appearing in formulas of this sort there is no "more rigorous way" of describing the situation at hand. In particular the FTC has nothing to do with this. The only "integral theorem" we have tacitly used is the following: A function whose derivative is $\equiv0$ on some interval $I\subset{\mathbb R}$ is constant on $I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/915998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $\int_0^{\pi/2}\ln^2(\cos x)\,dx=\frac{\pi}{2}\ln^2 2+\frac{\pi^3}{24}$ Prove that \begin{equation} \int_0^{\pi/2}\ln^2(\cos x)\,dx=\frac{\pi}{2}\ln^2 2+\frac{\pi^3}{24} \end{equation} I tried to use by parts method and ended with \begin{equation} \int \ln^2(\cos x)\,dx=x\ln^2(\cos x)+2\int x\ln(\cos x)\tan x\,dx \end{equation} The latter integral seems hard to evaluate. Could anyone here please help me to prove it preferably with elementary ways (high school methods)? Any help would be greatly appreciated. Thank you. Addendum: I also found this nice closed-form \begin{equation} -\int_0^{\pi/2}\ln^3(\cos x)\,dx=\frac{\pi}{2}\ln^3 2+\frac{\pi^3}{8}\ln 2 +\frac{3\pi}{4}\zeta(3) \end{equation} I hope someone here also help me to prove it. (>‿◠)✌
We are all familiar with the famous Wallis integrals, $W_n=\displaystyle\int_0^\frac\pi2\sin^{n-1}x~dx=\int_0^\frac\pi2\cos^{n-1}x~dx$ $=\dfrac12B\bigg(\dfrac12,~\dfrac n2\bigg)$, see beta function for more details. It follows then that our integral is nothing other than $W''(1)$, which can be evaluated in terms of the digamma and trigamma functions, for arguments $\dfrac12$ and $1$. The former can be expressed as $\psi_{_0}(k+1)=H_k-\gamma$, where $\gamma\approx\dfrac1{\sqrt3}~$ is the Euler-Mascheroni constant, and $H_k=\displaystyle\int_0^1\frac{1-x^k}{1-x~~}dx=-\int_0^1\ln\Big(1-\sqrt[^k]x\Big)~dx$ is the harmonic number. Thus, that $H_1=1$ is self-evident, and that $H_\frac12=2~(1-\ln2)$ can be shown by a simple substitution. The values are also found here. For the latter we have $~\psi_{_1}(x)=\displaystyle\sum_{k=0}^\infty\frac1{(k+x)^2}$, which implies $\psi_{_1}(1)=\zeta(2)=\dfrac{\pi^2}6$, and $\psi_{_1}\bigg(\dfrac12\bigg)=4\bigg(1-\dfrac14\bigg)\zeta(2)=\dfrac{\pi^2}2$. See Basel problem for more information.
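Both closed forms in the question are easy to confirm numerically (a sketch assuming mpmath; here the first one):

```python
from mpmath import mp, quad, log, cos, pi

mp.dps = 25
# The log singularity at pi/2 is integrable; mpmath's quadrature handles it.
val = quad(lambda x: log(cos(x))**2, [0, pi/2])
print(val)
print(pi/2 * log(2)**2 + pi**3 / 24)  # the two agree to working precision
```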
{ "language": "en", "url": "https://math.stackexchange.com/questions/916085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 5, "answer_id": 2 }
Find price per unit that will maximize profit at a given $x$-value Struggling while reviewing my old math books. The problem has a price function and wants to know how the price per unit should be chosen to maximize the profit at $\mathbf{x=160}$. First I analyze the given equation. The given price function is: $$ \operatorname{Price}(x) = 0.003x^3-0.54x^2+96.6x+8100 $$ I calculated the price per unit by dividing by $x$: $$ \operatorname{PricePerUnit}(x) = 0.003x^2-0.54x+96.6 + 8100/x $$ The first derivative is: $$ \operatorname{PricePerUnit}'(x) = 0.006x-0.54 - 8100/x^2 $$ The only real solution of $\operatorname{PricePerUnit}'(x)=0$ is $x=150$, which gives a price per unit of $137.10$. The usual way to calculate the profit is: $$ \operatorname{Profit} = \operatorname{Returns} - \operatorname{Price} $$ If I use the price per unit from the given function: $$ \operatorname{Profit}(x) = x - \operatorname{PricePerUnit}(x) $$ I get: $$ \operatorname{Profit}(x)=-96.6-0.003 x^2+1.54 x-8100/x $$ Which has a maximum at $\mathbf{274.573}$. And this is where I'm stuck. The solution according to the book is $\mathbf{154.20}$ price per unit. Anybody know how to reach that number? Integration is apparently not part of the solution, because that is introduced much later in the book. Can this be done with some kind of equivalence equation?
Your price function gives the total costs to make $x$ products. Call this $C(x)$. Your income is given by the number of sold items times the sale price. If we denote the sale price by $\alpha$, then the income is given by $\alpha x$. Your profit $P$ can be calculated by subtracting the costs from the income: $$P(x) = \alpha x - C(x).$$ If $P$ has a maximum then its derivative must be zero there, i.e.: $$P^\prime(x) = \alpha - 0.009x^2 + 1.08x - 96.6 = 0.$$ Now you can simply move all the $x$'s to the right-hand side and substitute $x=160$. Then you will find that $\alpha = 154.20$. Formally you should check that this indeed gives you a maximum and not a minimum, since we only used that the derivative is zero, which can imply both.
{ "language": "en", "url": "https://math.stackexchange.com/questions/916156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a bijection to show $\left|B\right| = \mathfrak{c}$. Let $B = \left\{ A \cup \mathbb{N}_\text{even} : A\subseteq \mathbb{N}_\text{odd} \right\}$ I need to show $\left|B\right| = \mathfrak{c}$ by using an equivalence function (bijection) to another set with the same cardinality. Any idea? Thanks.
HINT: Note that there is a bijection between $B$ and $\mathcal P(\Bbb N_{\rm odd})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/916241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The limit of $(4-\sqrt{16-7\sin(x)})/(8x)$ at zero without using L'Hôpital I stumbled across this silly limit and I am perplexed at how I can arrive to a solution by only relying on the simplest rules of limits. $$ \lim_{x \to 0}\frac{4-\sqrt{16-7\sin(x)}}{8x} $$ Any help is appreciated, thanks in advance.
View the quotient as $-\frac{1}{8}$ times the difference quotient of $f(x) = \sqrt{16-7\sin x}$ at $x = 0$ (so the limit is $-f'(0)/8$), or use the fact that $\sin x \approx x$ near $x = 0$ and expand the top expression binomially.
{ "language": "en", "url": "https://math.stackexchange.com/questions/916332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Prove $\alpha \in\mathbb R$ is irrational, when $\cos(\alpha \pi) = \frac{1}{3}$ I am trying to prove: If $\cos(\pi\alpha) = \frac{1}{3}$ then $\alpha \in \mathbb{R} \setminus \mathbb{Q}$ So far, I've tried making it into an exponential, since exponentials are easier to manipulate (at least for me), when compared to $\cos$ or $\sin$. So: $$\cos(\pi\alpha)^2 + \sin(\pi\alpha)^2 = 1$$ $$\Big(\frac{1}{3}\Big)^2 + \sin(\pi\alpha)^2 = 1$$ $$\sin(\pi\alpha)^2 = \frac{8}{9}$$ $$\sin(\pi\alpha) = \frac{\sqrt{8}}{3}$$ $$\sin(\pi\alpha) = \frac{2\sqrt{2}}{3}$$ Then we can use Euler's formula: $$e^{i\pi\alpha} = \cos(\pi\alpha) + i\sin(\pi\alpha) = \frac{1}{3} + i\frac{2\sqrt{2}}{3} = \frac{1+i2\sqrt{2}}{3}$$ Now take the log of both sides: $$ \ln(e^{i\pi\alpha}) = \ln(\frac{1+i2\sqrt{2}}{3}) = \ln(1+i2\sqrt{2}) - \ln(3) = i\pi\alpha$$ $$\therefore \alpha = \frac{\ln(1+i2\sqrt{2}) - \ln(3)}{i\pi}$$ But, here I get stuck, and don't know how to show it is irrational, any ideas? (Even though it looks pretty darn irrational to me...) Should I be trying something else? I'm so lost...
Just in case it is not assumed that $\alpha\in\mathbb{R}$, let $\alpha=a+bi$ with $a,b\in\mathbb{R}$. Then $$\begin{align} \cos(a\pi+b\pi i)&=\cos(a\pi)\cos(b\pi i)-\sin(a\pi)\sin(b\pi i)\\ \frac13&=\cos(a\pi)\cosh(b\pi)-i\sin(a\pi)\sinh(b\pi)\\ \end{align}$$ Since the left side is real, either $\sin(a\pi)$ or $\sinh(b\pi)$ is $0$. The former implies $\frac13=\pm\cosh(b\pi)$, which is impossible. So $\sinh(b\pi)=0$, which implies $b=0$, and so $\alpha\in\mathbb{R}$. Now assume that $\alpha$ is rational: $\alpha = \frac{a}{b}$ with $a,b\in \mathbb{Z}$ and consider $$\left(e^{i\alpha\pi}\right)^b = e^{i\pi a} = (-1)^a$$ Now use that $e^{i\alpha\pi} = \frac{1}{3} \pm i\frac{2\sqrt{2}}{3}$ to derive a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/916394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
How to expand quadratic equations in Octave/Matlab? I have some column vectors, and I put them in a row, so I have $[a \, b \, c]$. Now I want to get a matrix of the form $[a^2 \, b^2 \, c^2 \, ab \, ac \, bc]$. It is like expanding $(a+b+c)^2$, but applying it to matrix columns. Is there any trick in Octave/Matlab that would allow me to do it automatically? The example is easy. Imagine if I want to get $[a_1 \, a_2 \, \cdots a_n]$ to a higher degree. It would be a pain to do it manually.
Polynomial multiplication is equivalent to convolution. This means that if you have two numeric vectors, you can use the conv function in Matlab: v1 = [2 3 4]; v2 = [3 5 7]; w = conv(v1,v2) which returns w = 6 19 41 41 28 The vectors don't even need to be the same length. In your case, you can just pass the same vector in twice. Unfortunately, the conv function isn't overloaded for symbolic variables. However, you can take advantage of MuPAD's polynomial capabilities from within Matlab (I have no idea what Octave supports in this area). You can use poly to create polynomials from coefficients. These objects can be multiplied together naturally. Then you can use coeff to extract the symbolic coefficients. Here's an example: syms a1 a2 b1 b2 c1 c2; v1 = char([a1 b1 c1]); p1 = feval(symengine,'poly',v1(9:end-2),'[x]'); v2 = char([a2 b2 c2]); p2 = feval(symengine,'poly',v2(9:end-2),'[x]'); w = feval(symengine,'coeff',p1*p2) which returns the symbolic vector: w = [ c1*c2, b1*c2 + b2*c1, a1*c2 + a2*c1 + b1*b2, a1*b2 + a2*b1, a1*a2]
{ "language": "en", "url": "https://math.stackexchange.com/questions/916475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A relation that is Reflexive & Transitive but neither an equivalence nor partial order relation Set $A = \{0,7,1\}$ 1. So for a relation that is reflexive and transitive but neither an equivalence relation nor partial order...Can a relation be both partial order and equivalence? Attempt: $R_1=\{(0,0),(7,7),(1,1),(0,1),(1,0),(0,7)\}$ Is $R_1$ Reflexive? Yes because $(0,0),(7,7),(1,1)$ exist in the relation. Is $R_1$ Symmetric? No because every edge of the relation does not have either a two way street or a loop (if you drew $R_1$ as a diagraph) Is $R_1$ Antisymmetric? No because there exists a two way street between two distinct vertices. Now the part where I doubt myself... Is $R_1$ Transitive? Yes? I don't think $(0,7)$ would break transitivity. If $R_2=\{(0,7)\}$ is transitive then the addition of $(0,7)$ to an already transitive relation wouldn't make it not transitive? Could someone please help clarify? 2. Relation on $A$ that is not reflexive, not transitive, not antisymmetric, but is symmetric. I got $R_2=\{(0,7),(7,0),(7,7)\}$, I believe that is correct since it misses $(0,0)$ and $(1,1)$ and therefore is not reflexive. It is not antisymmetric since the diagraph has no one way streets. It is symmetric because there exists a two way street between each distinct vertices. Could some please let me know if I am correct?
$Q1$ Can a relation be both partial order and equivalence? Yes, for example, the equality relation. Is $R_1$ Transitive? No. It has $(1,0)$ and $(0,7)$ but not $(1,7)$. As this example show, if you add an ordered pair to a transitive relation it can become non-transitive. A relation on set $A$ that is both reflexive and transitive but neither an equivalence relation nor a partial order (meaning it is neither symmetric nor antisymmetric) is: $$R_3 = \left\{(0,0),\, (7,7),\, (1,1),\, (0,7),\, (7,1),\, (0,1),\, (1,7) \right\}$$ Reflexive? Yes, because it has $(0,0),\, (7,7),\, (1,1)$. Transitive? Yes. We go through the relevant cases: $$(0,7) \mbox{ and } (7,1) \Rightarrow (0,1) \qquad\checkmark$$ $$(7,1) \mbox{ and } (1,7) \Rightarrow (7,7) \qquad\checkmark$$ $$(0,1) \mbox{ and } (1,7) \Rightarrow (0,7) \qquad\checkmark$$ $$(1,7) \mbox{ and } (7,1) \Rightarrow (1,1) \qquad\checkmark$$ Symmetric? No, because we have $(0,1)$ but not $(1,0)$ Antisymmetric? No, because we have $(1,7)$ and $(7,1)$. $Q2$ Your relation, $R_2$, is correct but your explanations for symmetric and antisymmetric are the wrong way around. $R_2$ is not antisymmetric because there is as two-way street between distinct vertices, namely, $0$ and $7$. $R_2$ is symmetric because there is no one-way street between distinct vertices. Also, $R_2$ is not transitive because it has $(0,7)$ and $(7,0)$ but not $(0,0)$.
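These case-by-case checks are easy to automate (a small Python sketch, purely as a sanity check):

```python
A = {0, 7, 1}
R3 = {(0,0), (7,7), (1,1), (0,7), (7,1), (0,1), (1,7)}

reflexive     = all((a, a) in R3 for a in A)
symmetric     = all((b, a) in R3 for (a, b) in R3)
antisymmetric = all(a == b for (a, b) in R3 if (b, a) in R3)
transitive    = all((a, c) in R3
                    for (a, b) in R3 for (b2, c) in R3 if b == b2)

# True True False False: reflexive and transitive, but neither an
# equivalence relation (not symmetric) nor a partial order (not antisymmetric)
print(reflexive, transitive, symmetric, antisymmetric)
```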
{ "language": "en", "url": "https://math.stackexchange.com/questions/916559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Complex solutions to $ x^3 + 512 = 0 $ An algebra book has the exercise $$ x^3 + 512 = 0 $$ I can find the real solution easily enough with $$ x^3 = -512 $$ $$ \sqrt[3]{x^3} = \sqrt[3]{-512} $$ $$ x = -8 $$ The book also gives the complex solutions $$ 4 \pm 4\sqrt{3}i $$ But I don't understand how to find these answers. Having completed the chapter on complex numbers I can find square roots of negative numbers easily, but cube (or higher) roots are never explained.
If you want to solve for the complex roots using the quadratic formula, try the factorization below: $$x^3 + 512 = x^3+8^3 = (x+8)(x^2-8x+64) = 0$$ Applying the quadratic formula to $x^2-8x+64=0$ then gives $$x = \frac{8\pm\sqrt{64-256}}{2} = 4\pm 4\sqrt{3}\,i.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/916612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Discrete mathematics subsets Suppose I have two sets A and B: $$ A = \lbrace 2k-1 : k \in \mathbb{Z}\rbrace$$ $$ B = \lbrace 2l+1 : l \in \mathbb{Z}\rbrace$$ I need to prove that A = B. I know that to prove equality between two sets I need to prove both: $$ A \subseteq B $$ and $$ A \supseteq B $$ I tried to start with something like : Suppose x is an element of A, then $$ x = 2k - 1 $$ EDIT : Which we can rewrite as $$ x = 2k + 1 - 2$$ $$ x = 2(k-1) + 1$$ Because $$ (k-1) \in \mathbb{Z} $$ We know that x is also in B. Is this the correct way of approaching this problem?
Hint: * *$2k-1=2l+1$ where $l=(k-1)$; *$2l+1=2k-1$ where $k=l+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/916688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
If the closure of a set $A$ is defined as the intersection of all closed sets which contain $A$, prove that closure of a closed set $B$ is $B$ itself If the closure of a set $A$ is defined as the intersection of all closed sets which contain $A$, prove that the closure of a closed set $B$ is $B$ itself. Attempt: I apologize if this is too basic but I am taking an introductory course in Elements of Real Analysis. Let $B$ be a closed set; then the closure of $B$ is defined as the intersection of all closed sets which contain $B$. Hence, we want closed sets $B_1,B_2, \cdots , B_m$ such that $B_1 \bigcap B_2 \bigcap \cdots \bigcap B_m = B$ such that $B \subseteq B_1,B \subseteq B_2,~~\cdots~~, B \subseteq B_m$ We need to prove using the given definition that $B_1 \bigcap B_2 \bigcap \cdots \bigcap B_m = B$ Since the intersection of closed sets is a closed set, $B_1 \bigcap B_2 \bigcap \cdots \bigcap B_m = A$ where $A$ is a closed set and $B \subseteq A$ (where $B$ is also a closed set). How do I move ahead using the given definition? I am aware of the method of solving the problem using the derived set method, but the problem seeks to solve it through the given definition only. EDIT: Since one of the closed sets that contains $B$ is $B$ itself, is it appropriate to take $B_1=B_2=\cdots=B_m=B$? But then what about the other closed sets which contain $B$? Thank you for your help.
Here's a second hint: Suppose you have two sets with different definitions, as you do here. Let's call them $X$ and $Y$. Then to show that $X=Y$, it is enough to show $X\subseteq Y$ and $Y\subseteq X$. To show $X\subseteq Y$ you take an arbitrary point $p$ in $X$ and show that $p$ is also a point of $Y$; to show $Y\subseteq X$ you do the reverse. In this case you want to show $$\bigcap_{\substack{B\subseteq C\\\text{$C$ closed}}} C = B.$$ Let's call the thing on the left $X$ for short. You then want to show $X=B$. To do this, it is enough to show that $X\subseteq B$ and that $B\subseteq X$. Start with $X \subseteq B$. Suppose some point $p$ is in $X$. What properties must $p$ have? Can you use those properties to show that $p$ must also be in $B$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/916791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Intuition underlying stopped martingales Let $X$ be a martingale and $T$ a stopping time. Define the stopped martingale $X_{\min\{T,n\}}$. What is the intuition underlying this process? This is quite confusing: $X$ is random, and $T$ is also random and depends on $X$. What does one get by putting the two together?
The Gambler's Ruin example is probably helpful for intuition. Consider a Markov chain $X_n$ whose state space is the integers which jumps 1 unit to the left or to the right with equal probability. Now let $\tau = \inf \{ n : X_n = 0 \}$. Then the stopped process is the same as the original process until the gambler runs out of money, after which he presumably leaves the casino. By contrast, in the original process, the gambler would be allowed to play into debt rather than stopping when he went bankrupt.
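To make the picture concrete, here is a short simulation of the stopped walk (my own illustration, not part of the original answer): the path evolves as a simple random walk until it first hits 0 and is frozen there afterwards.

```python
import random

def stopped_walk(start, n_steps, seed=0):
    """Simple random walk started at `start`, frozen at the first hit of 0."""
    random.seed(seed)
    path, x = [start], start
    for _ in range(n_steps):
        if x != 0:                    # before the stopping time: keep playing
            x += random.choice([-1, 1])
        # after the stopping time T, the process stays at X_T = 0
        path.append(x)
    return path

print(stopped_walk(start=3, n_steps=20))
```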
{ "language": "en", "url": "https://math.stackexchange.com/questions/916920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area of a Triangle, if three vertices are given taken in anticlockwise direction. Three vertices are given. We can find the area using the determinant. Can someone explain to me why the number will be positive if the vertices are chosen in anticlockwise direction?
Using rschwieb's diagram, change the coordinate system so that $(x_1,y_1)$ is at the origin, and so that $x_2>0$ and $y_2 = 0$. (This obviously doesn't change the area). Then the three points are $(0,0)$, $(x_2,0)$, and $(x_3,y_3)$. Since the angle between $\mathbf{a}$ and $\mathbf{b}$ is strictly between $0$ and $\pi$, we know that $y_3 > 0$. Then $$\begin{vmatrix} x_2 & 0 \\ x_3 & y_3\end{vmatrix} = x_2y_3 > 0.$$
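A computational restatement of the same fact (my own sketch, not part of the original answer): the signed area is half this determinant, and it is positive exactly when the vertices are listed anticlockwise.

```python
def signed_area(p1, p2, p3):
    """Half the determinant of the edge vectors; positive iff p1, p2, p3
    are in anticlockwise (counterclockwise) order."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return 0.5 * ((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1))

print(signed_area((0, 0), (2, 0), (0, 3)))  #  3.0 (anticlockwise)
print(signed_area((0, 0), (0, 3), (2, 0)))  # -3.0 (clockwise)
```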
{ "language": "en", "url": "https://math.stackexchange.com/questions/916996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Test the convergence of the series $\sum\limits_nn^p(\sqrt{n+1}-2\sqrt n + \sqrt{n-1})$ I am trying to test the convergence of the series $$\sum_{n=1}^\infty n^p(\sqrt{n+1}-2\sqrt n + \sqrt{n-1})$$ where $p$ is a fixed real number. I tried the ratio test and the integral test without success. Now I am stuck with the quantity between the round brackets. I have no idea how to treat it in order to obtain something useful to work with.
Tool: For every fixed $a$, when $x\to0$, $(1+x)^a=1+ax+\frac12a(a-1)x^2+o(x^2)$. In particular, the case $a=\frac12$ yields $\sqrt{1+x}=1+\frac12x-\frac18x^2+o(x^2)$. For every fixed $c$, $$\sqrt{n+c}=\sqrt{n}\cdot\sqrt{1+\frac{c}n}=\sqrt{n}\cdot\left(1+\frac{c}{2n}-\frac{c^2}{8n^2}+o\left(\frac1{n^2}\right)\right),$$ hence $$\sqrt{n+1}-2\sqrt{n}+\sqrt{n-1}=\sqrt{n}\cdot\left(1+\frac1{2n}-\frac1{8n^2}-2+1-\frac1{2n}-\frac1{8n^2}+o\left(\frac1{n^2}\right)\right),$$ that is, $$\sqrt{n+1}-2\sqrt{n}+\sqrt{n-1}\sim\sqrt{n}\cdot\frac{-1}{4n^2}=\frac{-1}{4n^{3/2}}.$$ Can you finish this?
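Completing the hint (my own finishing step, in line with where the answer leaves off): the general term therefore satisfies $$n^p\left(\sqrt{n+1}-2\sqrt n+\sqrt{n-1}\right)\sim-\frac{1}{4\,n^{3/2-p}},$$ so by limit comparison with the $p$-series the given series converges absolutely if and only if $\frac32-p>1$, i.e. $p<\frac12$.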
{ "language": "en", "url": "https://math.stackexchange.com/questions/917068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Eigenvalue and proper subspace. I have the following problem: Suppose that $A,B\in{\cal M}_n(\mathbb{R})$ such that $AB = BA.$ Show that if $v$ is an eigenvector of $A$ associated to the eigenvalue $\lambda$, with $Bv\neq 0$ and $\dim(S_\lambda)=1$, then $v$ is also an eigenvector of $B$. Thanks in advance!
$ABv = BAv = B(\lambda v) = \lambda Bv$. Since $Bv \neq 0$, this shows $Bv$ is an eigenvector of $A$ with eigenvalue $\lambda$. Since $\dim(S_\lambda) = 1$, $Bv$ must be a scalar multiple of $v$, say $Bv = \mu v$. In other words: $v$ is an eigenvector of $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/917137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can't find limit tending to infinity of a sequence I'm stumped by $$\lim_{x \to \infty}\frac{1+3+5+\cdots+(2x-1)}{x+3} - x$$ My obvious first step was to get a lowest common denominator by $x(\frac{x+3}{x+3})$, giving $$\lim_{x \to \infty}\frac{1+3+5+\cdots+(2x-1)-x^2-3x}{x+3} $$ But from here I'm stumped, because with x tending to infinity, the $2x-1-x^2$ part of the numerator will be indeterminate, won't it? I was hoping to calculate the answer via the highest powers on both sides of the fraction, which I know you can do when the variable tends to infinity, but then I'd get an answer of $-\infty$ which is incorrect according to my solution book. What did I miss? In edit, thanks to those have responded so far, but I'm even more confused. Here's the solution in my answer book: $$\lim_{x \to \infty}\frac{(1+2x-1)\frac{x}{2}}{x+3} - x $$ $$\lim_{x \to \infty}\frac{x^2-(x+3)x}{x+3} = -3 $$ Does this make sense to any of you? You know your stuff, I'm willing to believe that either the question was badly worded or the answer is wrong.
Assuming $\;x\in\Bbb N\;$ (Otherwise I cannot understand the expression), we get: $$\frac{1+3+5+\ldots+(2x-1)}{x+3}-x=\frac{x^2}{x+3}-x=-\frac{3x}{x+3}\xrightarrow[x\to\infty]{}-3$$
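For completeness (my own note, not part of the original answer): the closed form used in the first equality is the standard sum of the first $x$ odd numbers, $$1+3+5+\ldots+(2x-1)=\sum_{k=1}^{x}(2k-1)=2\cdot\frac{x(x+1)}{2}-x=x^{2},$$ which is also what the answer book's $(1+2x-1)\frac{x}{2}$ computes.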
{ "language": "en", "url": "https://math.stackexchange.com/questions/917187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Is a pre-additive structure on a category $\mathcal{C}$ necessarily unique? If $\mathcal{C}$ is an additive category, i.e. it is $\mathbf{Ab}$-enriched and moreover it admits finite biproducts, it is quite well-known that the additive structure is uniquely determined by internal properties of $\mathcal{C}$. All the proofs I know make use of the existence of biproducts, so I suppose this fact is crucial. However I wondered: Are there easily accessible examples of categories $\mathcal{C}$ which can be enriched over the category of abelian groups in several non-equivalent ways?
It suffices to give an example of two rings (with unit but not necessarily commutative) that have isomorphic multiplicative monoids but non-isomorphic additive groups: use the identification of rings with one-object preadditive categories. So consider $\mathbb{Z}$. Its multiplicative monoid is (by the fundamental theorem of arithmetic) isomorphic to $\{ 0 \} \cup \{ \pm 1 \} \times \mathbb{N}^{(\mathbb{N})}$, where $\mathbb{N}^{(\mathbb{N})}$ denotes the finitely supported sequences, i.e. the free commutative monoid on countably many generators. But the same is true of the polynomial ring $\mathbb{F}_3 [x]$ (because polynomials over any field form a UFD, here with unit group of order two and countably many monic irreducibles), while the additive groups $\mathbb{Z}$ and $\mathbb{F}_3[x]$ are certainly not isomorphic — every nonzero element of the former has infinite order, every nonzero element of the latter has order $3$ — and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/917284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Upper approximation of $\mathrm{atanh}(x)$? Is there a nice upper approximation of $\mathrm{atanh}(x)$? For example, $\ln(x)$ is nicely approximated by $x-1$ for $x$ near $1$.
$$\operatorname{atanh}(x) = \sum_{k=0}^{\infty} \frac{x^{1+2 k}}{1+2 k}, \quad |x|<1$$ And $$\operatorname{atanh}(x) = -\frac{\pi \sqrt{-x^2}}{2 x}+\sum_{k=0}^{\infty} \frac {x^{-1-2 k}}{1+2 k}, \quad |x|>1$$ You can approximate it by truncating to $$\sum_{k=0}^n$$ and choosing whatever $n$ gives the level of approximation you need.
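Since the question asks for an upper approximation: for $0<x<1$ every term of the first series is positive, so a truncated sum is a lower bound; bounding the tail by a geometric series turns it into an upper bound. A small numeric check (my own sketch, not part of the original answer):

```python
import math

def atanh_bounds(x, n):
    """Lower/upper bounds on atanh(x) for 0 < x < 1 using n series terms.

    Tail bound: sum_{k>=n} x^(2k+1)/(2k+1) <= x^(2n+1) / ((2n+1)(1-x^2)).
    """
    partial = sum(x**(2*k + 1) / (2*k + 1) for k in range(n))
    tail = x**(2*n + 1) / ((2*n + 1) * (1 - x**2))
    return partial, partial + tail

lo, hi = atanh_bounds(0.5, 4)
print(lo, math.atanh(0.5), hi)  # lower bound <= atanh(0.5) <= upper bound
```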
{ "language": "en", "url": "https://math.stackexchange.com/questions/917529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What type of numbers are roots of $x^{2} = -1$ themselves? Are the roots $i, -i$ themselves irrational numbers or complex numbers, or are they left as auxiliary symbols and hence undefined?
Just $i$ and $-i$. The fundamental theorem of algebra tells us that an equation like $x^2 + 1 = 0$ has two roots. In this particular case, $i$ is the principal square root of $-1$, and $-i$ is the other root. The idea that $i$ could be irrational seems ridiculous to me, but I have to admit that I have never stopped to think why I would consider one complex number rational and another one irrational. The thing about the purely imaginary numbers is that in some ways they are just like the purely real numbers, only the axis of the former is perpendicular to the axis of the latter. Let me ask you this: do you consider the number $-21$ to be rational or irrational? What about $-\sqrt{7}$? It seems to me that if you consider $-21$ to be a rational number, you should also consider $21i$ and $-21i$ to be rational, and if you consider $-\sqrt{7}$ irrational, you should also consider $\sqrt{-7}$ and $-\sqrt{-7}$ irrational. (And I'm using the square root symbol to mean the "principal" square root, but contrarians will still find a reason to howl, and to push the limits of serial downvoting). But I would still stop short of redefining the symbol $\mathbb{Q}$; I recommend you only use that to mean purely real rational numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/917747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Evaluate $ \int_{0}^{1} \ln(x)\ln(1-x)\,dx $ Evaluate the integral, $$ \int_{0}^{1} \ln(x)\ln(1-x)\,dx$$ I solved this problem, by writing power series and then calculating the series and found the answer to be $ 2 -\zeta(2) $, but I don't think that it is best solution to this problem. I want to know if it can be solved by any other nice/elegant method.
I've found a solution that is interesting, but probably not elegant, and definitely not short. $I = \displaystyle\int_0^1 \ln(x)\ln(1 - x) dx$ Basic results: * *$\lim\limits_{n \to 0} \dfrac{x^n - 1}{n} = \log x$, or $\lim\limits_{n \to 1}\dfrac{x^{n-1} - 1}{n - 1} = \log x$. *$\dfrac{d}{dn}\beta(n, n) = 2\beta(n, n)(\psi_0(n) - \psi_0(2n))$ where $\psi_0(n)$ is the digamma function. *$\dfrac{d^2}{dn^2}\beta(n, n) = 4\beta(n, n)(\psi_0(n) - \psi_0(2n))^2 + 2\beta(n, n)(\psi_1(n) - 2\psi_1(2n))$, where $\psi_1(n)$ is the polygamma function. *$\psi_0(1) - \psi_0(2) = -1$ according to the recurrence relation. *$\psi_1(2) = \psi_1(1) - 1$ according to the recurrence relation. *$\psi_1(1) = \zeta(2)$. Solution: $\begin{align} I & = \lim\limits_{n \to 1} \displaystyle\int_0^1 \dfrac{(x^{n - 1} - 1)((1 - x)^{n - 1} - 1)}{(n - 1)^2} dx\\ & = \lim\limits_{n \to 1}\displaystyle\int_0^1 \dfrac{x^{n-1}(1-x)^{n-1} - x^{n - 1} - (1-x)^{n-1} + 1}{(n-1)^2} dx\\ & = \lim\limits_{n \to 1} \dfrac{\beta(n,n) - \frac{1}{n} - \frac{1}{n} + 1}{(n-1)^2}\\ & = \lim\limits_{n \to 1} \dfrac{2\beta(n,n)(\psi_0(n)-\psi_0(2n)) + \frac{2}{n^2}}{2(n-1)} \quad [\text{l'Hospital's rule}]\\ & = \lim\limits_{n \to 1} \dfrac{4\beta(n,n)(\psi_0(n)-\psi_0(2n))^2 + 2\beta(n,n)(\psi_1(n)-2\psi_1(2n))- \frac{4}{n^3}}{2}\quad [\text{l'Hospital's rule}]\\ & = 2\beta(1, 1)(\psi_0(1) - \psi_0(2))^2 + \beta(1, 1)(\psi_1(1) - 2\psi_1(2)) - 2\\ & = 2(-1)^2 + 1(\psi_1(1) - 2\psi_1(1) + 2) - 2\\ & = 2 - \psi_1(1)\\ & = 2-\zeta(2) \end{align}$
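As a numerical sanity check of the final value (my own addition), quadrature agrees with $2-\zeta(2)=2-\pi^2/6$:

```python
import math
from scipy.integrate import quad

# The log singularities at 0 and 1 are integrable; quad handles them fine.
value, _ = quad(lambda x: math.log(x) * math.log(1 - x), 0, 1)
print(value, 2 - math.pi**2 / 6)  # both ≈ 0.355066
```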
{ "language": "en", "url": "https://math.stackexchange.com/questions/917833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 9, "answer_id": 2 }
Prove quotient of two $N(0,1)$ is $\text{Cauchy}(0,1)$ Problem: Show that if $X$ and $Y$ are independent $N(0,1)$-distributed random variables, then $X/Y ∈ C(0,1)$. Question: I don't know how to proceed below. I want to prove that the PDF of $X/Y$ is Cauchy. PS. I looked on wiki and they defined $C$ as $X/Y$ Attempt: The expression looked hairy and wolfram could not integrate it. Note to self: problem 8.
An alternative proof taken from here - I find it helpful as most steps are explained intuitively. Please correct me if something is incorrect. Proof Start with the CDF, $P(\frac{X}{Y} \leq t)$; this describes an event, and we want to find the probability of this event. $$ P(\frac{X}{Y} \leq t) = P(\frac{X}{|Y|} \leq t) = P(X \leq |Y|t) \text{ by symmetry of } N(0,1) $$ To get a probability, integrate the joint pdf over the region of interest. For $Y$ this runs from $- \infty$ to $\infty$, but for $X$ it runs from $- \infty$ to $t |y|$! $$P(X \leq |Y|t) = \int_{- \infty}^{\infty} \int_{- \infty}^{t |y|} f(x,y) \ dx \ dy $$ As $X$ and $Y$ are i.i.d., the joint PDF can be found by multiplying the pdfs: $f_{X,Y}(x,y) = f_X(x) f_Y(y)$ \begin{aligned} P(X \leq |Y|t) &= \int_{- \infty}^{\infty} \int_{- \infty}^{t |y|} \frac{1}{\sqrt{2 \pi}} \frac{1}{\sqrt{2 \pi}} e^{- \frac{1}{2}(x^2 + y^2)}\ dx \ dy \\ & = \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{\infty} e^{- \frac{1}{2} y^2} \int_{- \infty}^{t |y|} \frac{1}{\sqrt{2 \pi}} e^{- \frac{1}{2} x^2}\ dx \ dy \end{aligned} Recognize that the inner integral cannot be computed analytically; however, it is exactly the normal CDF $\Phi(t |y|)$ \begin{aligned} P(X \leq |Y|t) & = \frac{1}{\sqrt{2 \pi}} \int_{- \infty}^{\infty} e^{- \frac{1}{2} y^2} \Phi(t |y|) \ dy \end{aligned} The integrand is an even function of $y$, so we can remove the absolute value and multiply by 2. \begin{aligned} P(X \leq |Y|t) & = \frac{2}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{- \frac{1}{2} y^2} \Phi(t y) \ dy \end{aligned} The PDF is the derivative of the CDF, so we can take the derivative of the above equation. Under mild technical conditions we can swap the derivative and the integral. Note that $y$ is a dummy variable, so here we need to take the derivative w.r.t. $t$ \begin{aligned} PDF = F'(t) & = \frac{d}{dt} \frac{2}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{- \frac{1}{2} y^2} \Phi(t y) \ dy \\ & = \frac{2}{\sqrt{2 \pi}} \int_{0}^{\infty} e^{- \frac{1}{2} y^2} \ \frac{d}{dt} \Phi(t y) \ dy \end{aligned} To compute the derivative use the chain rule: $\frac{d}{dt} \Phi(t y) = \frac{d \Phi(t y) }{d(ty)} \frac{d (t y) }{d(t)} = y \frac{d \Phi(t y) }{d(ty)} = y \phi(ty)$ where $\phi(ty)$ is the normal pdf evaluated at $(ty)$ \begin{aligned} F'(t) & = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} y \ e^{- \frac{1}{2} y^2} \ \phi(t y) \ dy \\ & = \sqrt{\frac{2}{\pi}} \int_{0}^{\infty} y \ e^{- \frac{1}{2} y^2} \frac{1}{\sqrt{2 \pi}} e^{- \frac{t^2y^2 }{2}} \ dy \\ & = \frac{1}{\pi} \int_{0}^{\infty} y e^{ - \frac{1}{2}(1 + t^2)y^2 } dy \end{aligned} Which can be solved using the substitution $u = \frac{1}{2}(1 + t^2)y^2$, so $du = y (1 + t^2) dy$ \begin{aligned} F'(t) & = \frac{1}{\pi} \int_{0}^{\infty} y e^{ - \frac{1}{2}(1 + t^2)y^2 } dy \\ & = \frac{1}{\pi(1 + t^2)} \int_{0}^{\infty} e^{-u} \ du \\ & = \frac{1}{\pi(1 + t^2)} \ \ \forall t \end{aligned} Additional detail about the symmetry of $N(0,1)$ - Taken from here The probability $P( X/Y\leqslant t)$ may be viewed as \begin{aligned} &P(\lbrace X/Y\leqslant t, Y>0 \rbrace\cup\lbrace X/Y\leqslant t, Y<0 \rbrace) \\ =\; &P(X/|Y|\leqslant t, Y>0) + P(-X/|Y|\leqslant t, Y<0)\\ =\; &P(X/|Y|\leqslant t, Y>0) + P(X/|Y|\leqslant t, Y<0)\\ =\; &P(X/|Y|\leqslant t) \end{aligned} by symmetry of the standard normal distribution, independence and the fact that $P(Y=0)=0$.
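A quick simulation (my own illustration, not part of the original proof) comparing the empirical CDF of $X/Y$ with the $C(0,1)$ CDF $\frac12+\frac1\pi\arctan t$:

```python
import numpy as np

rng = np.random.default_rng(0)
ratio = rng.standard_normal(10**6) / rng.standard_normal(10**6)

for t in (-2.0, 0.0, 1.0):
    empirical = np.mean(ratio <= t)
    cauchy_cdf = 0.5 + np.arctan(t) / np.pi
    print(t, empirical, cauchy_cdf)   # the two columns should nearly agree
```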
{ "language": "en", "url": "https://math.stackexchange.com/questions/917916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Inverting a vector If I have $Ax=b$ where $A$ is $n$ by $n$ while $x$ and $b$ are $n$ by 1, is it possible to find $A$ given $x$ and $b$? The idea would be some sort of $x^{-1}$ operation applied on the right of both sides of the equation, but I'm not sure how to go about it. If necessary, we can consider the specific case of $n=2$. Also, is the $A$ unique or not?
No, finding an inverse in this case is impossible, because there are many solutions. For example, consider: $A\begin{bmatrix} 1 \\ 0\end{bmatrix}=\begin{bmatrix} 1 \\ 0\end{bmatrix}$. Then $A=\begin{bmatrix}1 & 0 \\ 0 & 1\end{bmatrix}$ is one solution, but, for example, $A=\begin{bmatrix}1 & 1 \\ 0 & 1\end{bmatrix}$ is a solution too.
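The non-uniqueness is easy to confirm numerically (my own illustration):

```python
import numpy as np

x = np.array([1.0, 0.0])
A1 = np.eye(2)
A2 = np.array([[1.0, 1.0], [0.0, 1.0]])
print(A1 @ x, A2 @ x)  # identical b = [1, 0] from two different matrices A
```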
{ "language": "en", "url": "https://math.stackexchange.com/questions/918106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Conditional Probability matching socks A drawer contains eight different pairs of socks. If six socks are taken at random without replacement, compute the probability that there is at least one matching pair among these six socks. Hint: compute the probability that there is not a matching pair first. Sorry guys, I'm really awful at this class. I'm trying to train my brain to think the right way to solve these on my own, but it's hard.
Ok my suggestion is the following. You surely know how to use the hint: if you denote the probability of not having a matching pair by $P$, then the probability of having at least one matching pair is ... (I'll leave it to you to think about this one). Now you might want to use the following formula. Let $\Omega$ be a (finite) probability space and $A\subset \Omega$ an event; then for the probability of $A$ holds$$ P(A)=\frac{|A|}{|\Omega|} $$ That means the probability of $A$ is the number of all results such that $A$ has happened, divided by the total number of possible results. Now you have to compute those cardinalities: $\binom{n}{k}$ denotes the number of possibilities to draw $k$ items out of $n$ items when you do not care about the order in which the items were drawn (which is the case in your problem). So $|\Omega|=\binom{16}{6}$ in your case, because you draw 6 socks out of 16 socks in total. Furthermore there are $16 \cdot 14 \cdot 12 \cdot 10 \cdot 8 \cdot 6$ possibilities to draw $6$ socks without having a matching pair when taking the order into account (why?). Furthermore you have $6!$ possibilities to arrange $6$ objects on $6$ slots, so you have to divide by that number. I'll leave the rest to you. I hope this was helpful (and I didn't make a mistake somewhere). EDIT: I just read your comment asking to explain the $16 \cdot 14 \cdot 12 \cdot 10 \cdot 8 \cdot 6$ part. Well, this is just counting socks. I'll call the socks that keep the event "no matching pairs" possible good socks, and any sock preventing that result a bad sock. * *In the first round you cannot get a matching pair, obviously (as you'd need $2$ socks for a pair), so the number of good socks is 16. *In the second round you have only 15 socks left; however, one of those is of the same color as the one you already have and is therefore a bad sock. That makes 15-1=14 good socks in total. *Round three: you have 14 socks left. Among those are 2 bad socks (the socks matching your first and second picks), so you have 14-2=12 good socks left. *And so on. You also asked for a way to write this with binomials. Look at the following$$ \frac{\binom{2}{1} \cdot \binom{2}{1} \cdot \binom{2}{1} \cdot \binom{2}{1} \cdot \binom{2}{1} \cdot \binom{2}{1} \cdot \binom{2}{0} \cdot \binom{2}{0} \cdot \binom{8}{2}}{\binom{16}{6}} $$ What do I want to tell you with that fraction? Well, we now consider pairs. One pair consists of $2$ socks and we draw $6$ socks in total. That means we want exactly one sock out of each of $6$ pairs (therefore $6$ times $\binom{2}{1}$) and no sock out of the remaining $2$ pairs (therefore the two $\binom{2}{0}$). Of course you could just omit the $\binom{2}{0}$ terms as they are $1$ anyway; I just put them there for illustration. Ok, and now you have $\binom{8}{2}$ possibilities to choose which $2$ pairs you don't get any sock from (I hope that sounds remotely logical). If you type it into a calculator you'll find that the two results are the same.
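Putting numbers to both counting arguments (my own check, not part of the original answer):

```python
from math import comb, factorial

no_pair_ordered = 16 * 14 * 12 * 10 * 8 * 6          # ordered draws, no match
p_no_pair = no_pair_ordered / factorial(6) / comb(16, 6)
p_no_pair_alt = comb(2, 1)**6 * comb(8, 2) / comb(16, 6)
print(p_no_pair, p_no_pair_alt)   # both 1792/8008 ≈ 0.2238
print(1 - p_no_pair)              # P(at least one matching pair) ≈ 0.7762
```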
{ "language": "en", "url": "https://math.stackexchange.com/questions/918208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Exercise on Cyclic Group Let $G$ be a finite cyclic group of order $n$. Let $a$ be a generator. Let $r$ be an integer $\neq 0$, and relatively prime to $n$. Show that $a^r$ is also a generator of $G$. Proof: $G=\langle a\rangle$ such that $a^n=1$. $|a^r|=\frac{|a|}{\gcd(r,n)}=|a|$. Since $|a^r|=|a|$, then $G=\langle a^r\rangle $. Is my proof correct?
Another way: $$\gcd(r,n)=1\implies \;\exists\,x,y\in\Bbb Z\;\;\text{s.t.}\;\; xr+yn=1$$ and from here $$a=a^1=a^{rx+yn}=\left(a^r\right)^x\left(a^n\right)^y=\left(a^r\right)^x\implies \langle a\rangle \subset \langle a^r\rangle$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/918300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Mathematical Induction Prove, by mathematical induction, that $5^{2n} + 2^{2n-2}\,3^{n-1}$ is divisible by $13$. I first plugged in $n=1$ and showed that the expression is divisible by $13$ for $n=1$. Then I assumed that the expression was divisible by $13$ for $n=k$ and plugged in $k$. The simplified expression in terms of $k$ was $25^k + \frac{4^k3^k}{12}$. I then plugged in $k+1$ and got an expression for that. I am unable to show that the sum or difference of the $k$ and $k+1$ expressions is divisible by $13$. How do I proceed?
$$25^{k+1}+\frac{4^{k+1}3^{k+1}}{12}=25\cdot 25^{k}+12 \frac{4^{k}3^{k}}{12} $$ $$=13\cdot 25^k+12\cdot25^k+12 \frac{4^{k}3^{k}}{12} $$ $$=13\cdot 25^k + 12 \left(25^k+ \frac{4^{k}3^{k}}{12}\right)$$ The first term is an explicit multiple of $13$, and the second is $12$ times the expression assumed divisible by $13$ in the induction hypothesis, so the case $k+1$ follows.
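A brute-force check of the claim for small $n$ (my own addition, purely as reassurance):

```python
for n in range(1, 21):
    value = 5**(2*n) + 2**(2*n - 2) * 3**(n - 1)
    assert value % 13 == 0, n
print("5^(2n) + 2^(2n-2) * 3^(n-1) is divisible by 13 for n = 1..20")
```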
{ "language": "en", "url": "https://math.stackexchange.com/questions/918392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Sum of two trig functions' identity We all know that $\sin(x) + \sin(y) = 2\sin((x+y)/2)\cos((x-y)/2)$ But is there an identity for $\sin(x) + z\sin(y) = ?$ Or do I need to figure it out using Euler's formula $\sin(x) = (e^{ix} - e^{-ix})/(2i)$ and put it back into trigonometric form?
According to Alan Jeffrey, Hui-Hui Dai: Handbook of Mathematical Formulas and Integrals. 4th Edition, Academic Press, 2008. at page 131, 2.4.1.9 Sum of Multiples of $\sin x$ and $\cos x$ There is a formula for that * *$A \sin x + B \cos x = R \sin(x + \theta)$, where $R = (A^2+B^2)^{1/2}$ with $\theta = \arctan B/A$ when $A>0$ and $\theta = \pi + \arctan B/A$ when $A<0$. *$A \cos x + B \sin x = R \cos(x - \theta)$, where $R = (A^2+B^2)^{1/2}$ with $\theta = \arctan B/A$ when $A>0$ and $\theta = \pi + \arctan B/A$ when $A<0$. It is not exactly what you are looking for, but maybe it could help.
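For the curious, the first identity follows by expanding with the angle-addition formula and matching coefficients (my own note, not from the handbook): $$R\sin(x+\theta)=R\cos\theta\,\sin x+R\sin\theta\,\cos x,$$ so setting $A=R\cos\theta$ and $B=R\sin\theta$ gives $R^2=A^2+B^2$ and $\tan\theta=B/A$.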
{ "language": "en", "url": "https://math.stackexchange.com/questions/918474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving the Cauchy-Schwarz inequality by induction I ran across this problem in some old notes, and frustratingly I can't figure out how to do it. Let $a_i$ and $b_i$ be sequences of natural numbers; use induction to show $\sum_{i=1}^n (a_ib_i)^{1/2} \le (\sum_{i=1}^n a_i)^{1/2}(\sum_{i=1}^n b_i)^{1/2} $ Obviously this is trivial to show for $n=1$. I can't make much progress on $n+1$. I've tried various tactics, squaring both sides etc. Any hint or help would be appreciated. Thanks.
Hint: Another way would be to rewrite using the substitutions $x_i=\sqrt{a_ib_i}$ and $y_i=a_i$ in the form $$CS(n): \quad \frac {\left( \sum_{i=1}^n x_i \right)^2}{\sum_{i=1}^n y_i} \le \sum_{i=1}^n \frac{x_i^2}{y_i}$$ This form is particularly amenable to induction, after you prove the base case of $n=2$.
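In case it helps, here is how the base case goes (my own sketch, since the hint leaves it as an exercise): for positive $y_1,y_2$, $$\frac{(x_1+x_2)^2}{y_1+y_2}\le\frac{x_1^2}{y_1}+\frac{x_2^2}{y_2}\iff(x_1y_2-x_2y_1)^2\ge0,$$ after clearing denominators. The inductive step is then a single application of $CS(2)$ to the pair $\left(\sum_{i=1}^{n}x_i,\,x_{n+1}\right)$ and $\left(\sum_{i=1}^{n}y_i,\,y_{n+1}\right)$, followed by $CS(n)$.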
{ "language": "en", "url": "https://math.stackexchange.com/questions/918580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Proving that $\exp(-x^2)$ has a unique fixed point on the interval $[0,1]$ Consider the function $g(x)=e^{-x^2}$. Prove that g has a unique fixed point on the interval [0,1]. So, our teacher did not go over this section, but assigned it for homework and I have no idea where to even start with a proof. Could someone help me with this please?
Hint: Show that the function is continuous, and is decreasing in the aforementioned interval (by considering the derivative). Then conclude that fixed point has to be unique under these conditions. Alternatively, look at $f(x)=g(x)-x$. Show that $f(0)$ and $f(1)$ have different signs- and so by intermediate value theorem, $f(x)=0$ for some $x \in [0,1]$, and by considering its derivative, conclude that $f(x)=0$ can hold for only one $x$.
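A numeric illustration (my own addition): on $[0,1]$ we have $|g'(x)|=2xe^{-x^2}\le\sqrt2\,e^{-1/2}\approx0.86<1$, so fixed-point iteration converges to the unique fixed point:

```python
import math

x = 0.5
for _ in range(50):
    x = math.exp(-x * x)   # iterate x_{n+1} = g(x_n)
print(x)                    # ≈ 0.6529, the unique fixed point in [0, 1]
```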
{ "language": "en", "url": "https://math.stackexchange.com/questions/918673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the flaw in this proof that all triangles are isosceles? What is the flaw in this "proof" that all triangles are isosceles? From the linked page: One well-known illustration of the logical fallacies to which Euclid's methods are vulnerable (or at least would be vulnerable if we didn't "cheat" by allowing ourselves to be guided by accurately drawn figures) is the "proof" that all triangles are isosceles. Given an arbitrary triangle ABC, draw the angle bisector of the interior angle at A, and draw the perpendicular bisector of segment BC at D, as shown below: If the angle bisector at A and the perpendicular bisector of BC are parallel, then ABC is isosceles. On the other hand, if they are not parallel, they intersect at a point, which we call P, and we can draw the perpendiculars from P to AB at E, and to AC at F. Now, the two triangles labeled "alpha" in this figure have equal angles and share a common side, so they are totally equal. Therefore, PE = PF. Also, since D is the midpoint of BC, it's clear that the triangles labeled "gamma" are equal right triangles, and so PB = PC. From this it follows that the triangles labeled "beta" are similar and equal to each other, so we have BE+EA = CF+FA, meaning the triangle ABC is isosceles. All theorems from Euclidean geometry used in the argument are correct. I know this statement is false but I was wondering if anyone knew what the problem was. I'm very confused about solving this question.
If the drawing is made correctly, $P$ needs to be outside, but not too far outside, the triangle. If it is far outside, $E$ is on the extension of $AB$ and $F$ is on the extension of $AC$, two plus signs in the argument change to minus signs, and the argument goes through. If $P$ is not too far outside, $E$ is on the extension of $AB$, $F$ is still between $A$ and $C$ (or the other way around), one plus becomes a minus, and the argument fails.
{ "language": "en", "url": "https://math.stackexchange.com/questions/918739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 0 }
Find the number of spanning trees of a dumbbell graph. A (k, l)-dumbbell graph is obtained by taking a complete graph on k (labeled) nodes and a complete graph on l (labeled) nodes, and connecting them by a single edge. Find the number of spanning trees of a dumbbell graph. Do I have to use Kruskal's algorithm for this problem? I am not sure where to begin.
Hints * *There is one edge of your graph which must be included in any spanning tree. *The number of spanning trees in a complete graph is known, see https://oeis.org/A000272
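Putting the hints together (my own summary, hedged in case I've misread the construction): the single connecting edge is a bridge, so it lies in every spanning tree, and deleting it from a spanning tree leaves independent spanning trees of the two cliques. By Cayley's formula the count is therefore $$k^{k-2}\cdot l^{l-2}.$$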
{ "language": "en", "url": "https://math.stackexchange.com/questions/918853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }