Convergence radius and twice differentiability of power series. I wanted to compute the radius of convergence for the following power series $$\sum_{n=1}^{\infty} a_nz^n$$ with $(i) \, a_n = n!, \, (ii) \, a_n = \sqrt[n]{n}$. Then I need to determine which power series defines a function that is twice differentiable and compute the second derivative. For the convergence radius I had no problem: using d'Alembert's criterion once and Cauchy's criterion once, I found that the radius of convergence for $a_n = n!$ is $0$, and the one for $a_n = \sqrt[n]{n}$ is $1$. Now I'm tempted to say that the first power series therefore gives a non-differentiable function, since it doesn't converge. Is that right? Because I'm not sure about it. Then for the second I was told that since the convergence radius is $1$, it's differentiable on $]-1,1[$, which I can understand, but that it's also automatically twice differentiable on $]-1,1[$. And that part I don't understand. Why? Does that have something to do with the radius being specifically $1$? And how does one conclude in general? For the last question, about differentiating twice, isn't it just differentiating the sum term by term? So I'd get: $$f''(z) = \sum_{n = 2}^{\infty} n(n-1)a_nz^{n-2}$$
Every power series is infinitely differentiable in the interior of its interval of convergence, and differentiating term by term does not change the radius of convergence. Indeed $$ f'(x)=\sum_{n=1}^{\infty}na_nx^{n-1} $$ so the radius of convergence of $f'(x)$ is $$ {1\over\limsup(n|a_n|)^{1\over n}}={1\over\limsup |a_n|^{1\over n}} $$ as $$\lim_{n\to\infty}n^{1\over n}=1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Relation between symplectic blow-up of a compact manifold and fibre bundles over the same manifold The symplectic blow-up of a compact symplectic manifold $(X,\omega)$ along a compact symplectically embedded submanifold $(M,\sigma)$ results in another compact manifold $(\tilde{X},\tilde{\omega})$ given by $$\tilde{X}=\overline{X-V}\cup_{\varphi} \tilde{V}$$ where $V$ is a tubular neighborhood of $M$ that is diffeomorphic via $\varphi$ to $\tilde{V}$, which lives away from the zero section in the canonical line bundle over the projectivization of the normal bundle of $M$ in $X$. The blow-down map $f:\tilde{X}\rightarrow X$ restricts to a diffeomorphism on $\overline{X-V}$, so in general, if my understanding is correct, $\tilde{X}$ may not be realized as a fibre bundle over $X$. Indeed, the cohomology algebra of $\tilde{X}$, as shown by McDuff, is a direct sum of the cohomology of $X$ with a finitely generated module over the cohomology of $M$. The Leray-Hirsch theorem gives a tensor product result for the cohomology of fibre bundles and product spaces. My question is: is it known under what circumstances the symplectic blow-up $\tilde{X}$ is at least homotopy equivalent to a fibre bundle over $X$? Or is this nonsense?
This really isn't even true when $M$ is a point (which is really the only case I am familiar with). The symplectic blow-up is the natural generalization of the blowup in the complex setting (i.e. the birational isomorphism $\Bbb CP^2 \# \overline {\Bbb CP} ^2 \to \Bbb CP^2$). This is fairly clearly not a fiber bundle, and it would be completely impossible for the blowdown to be a fiber bundle projection. In fact, if $X$ is a symplectic 4-manifold, any symplectic blowup of $X$ at a point is diffeomorphic to $ X \# \overline {\Bbb CP}^2$. If you really want, you can even pick compatible almost complex structures such that the blowup is biholomorphic to the (almost) complex blowup and the projection maps commute. In the big McDuff-Salamon book, there is a proof of Gromov non-squeezing using this construction, which can probably be read without reading much of the rest of the book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the number of graphs on the vertex set $\{1, 2, 3, 4, 5\}$ in which every vertex is incident to at least one edge. I have the problem of determining how many graphs on the vertex set $\{1, 2, 3, 4, 5\}$ there are with the property that every vertex is incident to at least one edge. The "at least one" part of the question is what makes me unsure. I know how to do a similar problem for a specific number of edges, and have taken the same approach. Using a table where a $1$ represents an edge between vertices, it would look like this:

      1 2 3 4 5
    1 X 0 0 0 0
    2 0 X 0 0 0
    3 0 0 X 0 0
    4 0 0 0 X 0
    5 0 0 0 0 X

Given that entries such as $(1, 2)$ and $(2, 1)$ will always mirror each other in this table, we only need to count half the values (the $0$ values above the $X$ diagonal, for example). This gives us $$\frac{4 \cdot 5}{2} = 10$$ spots to represent edges to choose from, so the number of graphs with exactly $n$ edges is $C(10, n)$. Finally, I sum over the possible numbers of edges: total number of graphs $= C(10,10) + C(10, 9) + C(10, 8) + C(10, 7) + C(10, 6) + C(10, 5)$. I am not sure this works; I think it might count graphs in which some vertex has no incident edge. Is my logic correct?
For $n>0$, the total number of graphs on $n$ vertices is $2^{n(n-1)/2}$. Let $S_k$ be the set of graphs in which vertex $k$ has no incident edge. Then by inclusion/exclusion, the number of graphs you want is $$|\overline S_1\cap\cdots\cap \overline S_n| =2^{n(n-1)/2}-|S_1\cup\cdots\cup S_n| =(-1)^n+\sum_{k=1}^n(-1)^{n-k}\binom nk2^{k(k-1)/2}$$ which for $n=5$ works out to $768$. In fact the sequence is A006129.
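Since there are only $2^{10}$ graphs on five labelled vertices, the count is easy to verify by brute force. A minimal sketch in Python (the variable names are my own):

```python
# Brute force over all 2^10 graphs on 5 labelled vertices: count those
# in which every vertex is incident to at least one edge.
from itertools import combinations, product

n = 5
edges = list(combinations(range(n), 2))  # the 10 possible edges

count = 0
for keep in product([0, 1], repeat=len(edges)):
    covered = {v for e, k in zip(edges, keep) if k for v in e}
    if len(covered) == n:
        count += 1

print(count)  # 768, matching the inclusion-exclusion formula
```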
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving positive definiteness of matrix $a_{ij}=\frac{2x_ix_j}{x_i + x_j}$ I'm trying to prove that the matrix with entries $\left\{\frac{2x_ix_j}{x_i + x_j}\right\}_{ij}$ is positive definite for all n, where n is the number of rows/columns. I was able to prove it for the 2x2 case by showing the determinant is always positive. However, once I extend it to the 3x3 case I run into trouble. I found a question here whose chosen answer gave a condition for positive definiteness of the extended matrix, and after evaluating the condition and maximizing it via software, the inequality turned out to hold indeed, but I just can't show it. Furthermore, it would be way more complicated when I go to 4x4 and higher. I think I should somehow use induction here to show it for all n, but I think I'm missing something. Any help is appreciated. Edit: Actually the mistake is mine, turns out there are indeed no squares in the denominator, so it turns out user141614's first answer is what I really needed. Thanks a lot! Should I just accept this answer, or should it be changed back to his first answer then I accept it?
(Update: Some fixes have been added because I solved the problem with $a_{ij}=\frac{2x_ix_j}{x_i+x_j}$ instead of $a_{ij}=\frac{2x_ix_j}{x_i^2+x_j^2}$. Thanks to Paata Ivanisvili for his comment.) The trick is writing the expression $\frac{1}{x^2+y^2}$ as an integral. For every nonzero vector $(u_1,\ldots,u_n)$ of reals, $$ (u_1,\ldots,u_n) \begin{pmatrix} a_{11} & \dots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \dots & a_{nn} \\ \end{pmatrix} \begin{pmatrix} u_1\\ \vdots \\ u_n\end{pmatrix} = \sum_{i=1}^n \sum_{j=1}^n a_{ij} u_i u_j = $$ $$ =2 \sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \frac1{x_i^2+x_j^2} = 2\sum_{i=1}^n \sum_{j=1}^n u_i u_j x_i x_j \int_{t=0}^\infty \exp \Big(-(x_i^2+x_j^2)t\Big) \mathrm{d}t = $$ $$ =2\int_{t=0}^\infty \left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2}t\big)\right) \left(\sum_{j=1}^n u_j x_j \exp\big(-{x_j^2}t\big)\right) \mathrm{d}t = $$ $$ =2\int_{t=0}^\infty \left(\sum_{i=1}^n u_i x_i \exp\big(-{x_i^2t}\big)\right)^2 \mathrm{d}t \ge 0. $$ If $|x_1|,\ldots,|x_n|$ are distinct and nonzero then the last integrand cannot be identically zero: for large $t$ the term with the smallest $|x_i|$ such that $u_i\ne0$ dominates the order of magnitude. So the integral is strictly positive. If there are equal values among $|x_1|,\ldots,|x_n|$ then the integral may vanish; accordingly, the corresponding rows of the matrix are equal or negatives of each other, so the matrix is only positive semidefinite. You can find many variants of this inequality. See problem A.477. of the KöMaL magazine. The entry $a_{ij}$ can be replaced by $\left(\frac{2x_ix_j}{x_i+x_j}\right)^c$ with an arbitrary fixed positive $c$; see problem A.493. of KöMaL.
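As a numeric sanity check (not a proof; a sketch assuming numpy is available), one can sample random distinct $x_i$ and inspect the eigenvalues of the matrix with entries $2x_ix_j/(x_i^2+x_j^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 10.0, size=6)

# a_ij = 2 x_i x_j / (x_i^2 + x_j^2)
A = 2 * np.outer(x, x) / (x[:, None] ** 2 + x[None, :] ** 2)

print(np.linalg.eigvalsh(A))  # all positive when the |x_i| are distinct
```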
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Why does a code with codewords of length $3$ become one of length $4$ after adding a parity check digit? Let $\mathcal{C}$ be the code whose codewords are all the words of length 3. Let $D$ be the code formed by adding a parity check matrix digit to each codeword in the code $C$. Find $D$. The answer key says $D=\{0000, 1001, 0101, 0011, 1100, 1010, 0110, 1111\}$. I don't see why each codeword of $D$ has length $4$. Since $C$ is a code with codewords of length $3$, I thought the length of the parity check should be $3$ also. Can someone tell me why $D$ has codewords of length $4$ after adding the parity check digit to $C$? Thanks.
That phrase is surely meant to describe the process better known (IMO) as extending a code by adding an overall parity check bit (or some abbreviated version of that phrase). You begin with a word $x$ of length $n$, and then augment it to a word $x'$ of length $n+1$, where $x'=x|0$ or $x'=x|1$, and the choice is made in such a way that the extended word $x'$ has even weight. So for example $x=110$ becomes $x'=1100$, and $x=111$ becomes $x'=1111$. The mapping $x\mapsto x'$ is linear, so when we extend all the words of a linear code $C$ in this way we get another linear code $C'$ (aka the extended code):

* The length of the extended code $C'$ is one more than the length of the original code.
* If the minimum distance $d$ of $C$ is odd, then the minimum distance of $C'$ is $d+1$. But if $d$ is even, then the minimum distance of $C'$ is also $d$. For this reason it typically does not make any sense to extend a code with an even minimum distance.
* The dimensions of $C$ and $C'$ are equal.
* If $H$ is a check matrix of $C$, then you get a check matrix $H'$ for $C'$ by first adding a column of all zeros, and then adding a row of all ones: $$ H'=\left(\begin{array}{c|c}H&0\\111\ldots1&1\end{array}\right).$$
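For concreteness, here is a small sketch (my own illustration, in Python) that builds $D$ from all eight words of length $3$ by appending the overall parity bit:

```python
# Extend each length-3 binary word with a parity bit so that every
# extended word has even weight.
C = [format(i, '03b') for i in range(8)]
D = [w + str(w.count('1') % 2) for w in C]

print(D)
# ['0000', '0011', '0101', '0110', '1001', '1010', '1100', '1111']
# the same set of words as the answer key's D
```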
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Upper bound on integral: $\int_1^\infty \frac{dx}{\sqrt{x^3-1}} < 4$ I'm going through Nahin's book Inside Interesting Integrals, and I'm stuck at an early problem, Challenge Problem 1.2: to show that $$\int_1^\infty \frac{dx}{\sqrt{x^3-1}}$$ exists because there is a finite upper bound on its value. In particular, show that the integral is less than 4. I've tried various substitutions and also comparisons with similar integrals, but the problem is that any other integral that I can easily compare the above integral to is just as hard to integrate, which doesn't help solve the problem. I also tried just looking at the graph and hoping for insight, but that didn't work either. So how does one place an upper bound on the integral?
$\int_1^\infty \frac{dx}{\sqrt{x^3-1}}=\int_1^2 + \int_2^\infty=I_1+I_2.$ For $x\ge 2$ we have $1-1/x^3 \ge 7/8$, so $$I_2 = \int_2^\infty \frac{dx}{x^{3/2}\sqrt{1-1/x^3}}\leq\sqrt{\frac{8}{7}}\int_2^\infty x^{-3/2}\,dx=\sqrt{\frac{8}{7}}\cdot\sqrt{2}=\frac{4}{\sqrt{7}}.$$ For $1\le x\le 2$ we have $x^2+x+1\ge 3$, so $$I_1=\int_1^2\frac{dx}{(x-1)^{1/2}\sqrt{x^2+x+1}}\le \frac{1}{\sqrt{3}}\int_1^2(x-1)^{-1/2}\,d(x-1)=\frac{2}{\sqrt{3}}.$$ Hence the integral is at most $\frac{4}{\sqrt{7}}+\frac{2}{\sqrt{3}}\approx 2.67<4$.
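A numeric cross-check (a sketch assuming scipy and numpy are installed; the split mirrors the $I_1+I_2$ decomposition above) shows the bound is comfortable:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / np.sqrt(x**3 - 1)

I1, _ = quad(f, 1, 2)        # integrable endpoint singularity at x = 1
I2, _ = quad(f, 2, np.inf)

print(I1 + I2)  # approximately 2.43, well below 4
```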
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
What is the average sum of distances of a random point inside a triangle to its three sides? Given a non-equilateral triangle with side lengths $45, 60, 75$, find the average sum of distances from a randomly located point inside the triangle to its three sides. Note 1: Viviani's theorem relates only to equilateral triangles. Note 2: The Fermat point concerns minimizing the sum of distances from a point inside the triangle to its vertices. As we can see, neither note helps solve this problem. I was given this puzzle during an hour-and-a-half exam. There were only 6 minutes to solve this problem. After many hours I still do not have an answer. I would be very glad to get some assistance or maybe the whole solution. Regards, Dany B.
The triangle with sides $45, 60, 75$ is a right triangle similar to the Pythagorean triple $(3,4,5)$. Place the right angle at the origin with the legs of lengths $60$ and $45$ along the coordinate axes, and let $(x,y)$ be a point inside. The three perpendicular distances to the sides are then $x$, $y$ and $\dfrac{180-3x-4y}{5}$ (the hypotenuse is the line $3x+4y=180$), totaling $36+\dfrac{2x+y}{5}$, which is variable and needs to be averaged. Since this sum is a linear function of $(x,y)$, its average over the triangle equals its value at the centroid $(20,15)$, namely $36+\dfrac{40+15}{5}=47$. If the sum were constant, a result like those of Viviani and Fermat would have been in existence now for more than two centuries.
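A Monte Carlo check (my own sketch; it uses the coordinate setup above, rejection-sampling points in the triangle) lands near $47$:

```python
import random

random.seed(1)
total, accepted, N = 0.0, 0, 200_000
while accepted < N:
    x, y = random.uniform(0, 60), random.uniform(0, 45)
    if 3 * x + 4 * y > 180:
        continue  # outside the triangle
    # distances to the two legs plus distance to the hypotenuse 3x + 4y = 180
    total += x + y + (180 - 3 * x - 4 * y) / 5
    accepted += 1

print(total / N)  # close to 47
```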
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find $\alpha^3 + \beta^3$ where $\alpha, \beta$ are the roots of a quadratic equation. I have a question. Given a quadratic polynomial $ax^2 +bx+c$ with roots $\alpha$ and $\beta$, find $\alpha^3+\beta^3$. Also find $\frac{1}{\alpha^3}+\frac{1}{\beta^3}$. I don't know how to proceed. Any help would be appreciated.
For $\alpha^3+\beta^3$, recall that $\alpha^3+\beta^3=(\alpha+\beta)^3-3\alpha\beta(\alpha+\beta)$, with $\alpha+\beta=-\frac ba$ and $\alpha\beta=\frac ca$. For $\frac1{\alpha^3}+\frac1{\beta^3}$, use that the roots of $a+bx+cx^2$ are $\frac1\alpha$ and $\frac1\beta$ to reduce to the previous problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Prove that the exponential $\exp z$ is not zero for any $z \in \Bbb C$ How can the following be proved? $$ \exp(z)\neq0, \quad z\in\mathbb{C} $$ I tried it a few times, but I failed. Probably it is extremely simple. If I draw the unit circle and write a complex number as $\exp(a+ib)=\exp(a)\exp(ib)$, then it is obvious that this expression is only $0$ if $\exp(a)$ equals zero, but $\exp(a)$, $a\in\mathbb{R}$, is never zero. This does not seem very robust. Thank you
If you know that $\exp(z+w)=\exp(z) \exp(w)$, then $\exp(z)\ne 0$ follows from $$1=\exp(0)=\exp(z-z)=\exp(z)\exp(-z)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1631886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Why a truncated table for the logic implication $(p\wedge q) \implies p$ verification? The book Discrete Mathematics by Kenneth A. Ross says: "Let's verify the logic implication $(p\wedge q) \implies p$. For that, we need to consider only the case when $p\wedge q$ is true; i.e., both $p$ and $q$ are true. This gives us the truncated table." Why does the author consider only that portion of the truth table? What reasoning must I understand to follow that validation technique? N.B.: The same technique is applied to $(p \rightarrow q) \Rightarrow [(p \vee r) \rightarrow (q \vee r)]$.
Recall that an implication $R\implies S$ is "automatically true" in cases where the hypothesis $R$ is false. So the only interesting cases to check are those for which the hypothesis is true. In this case the hypothesis is $p \wedge q$, so you can proceed just with analyzing what would be implied by that being true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate the triple integral $\iiint (x^2+y^2+z^2)\,dx\,dy\,dz$ I have to evaluate this integral. It is enough for me to know the correct limits of integration. $$ \iiint_W (x^2 + y^2 + z^2) \,\mathrm dx\,\mathrm dy\,\mathrm dz$$ Conditions: $$x\ge 0,\quad y \ge 0 ,\quad z \ge 0,\quad 0 \le x + y + z \le a,\quad (a>0)$$ My ideas: As I have $0 \le x + y + z \le a$, I can say that $$0 \le x + y + z$$ $$x \le - z - y $$ so my limits of integration could be: $$\int _{ z }^{ a }{ } \int _{ -z-y }^{ y }{ } \int _{ 0 }^{ -z-y }{ (x^2 + y^ 2+ z^ 2) \,\mathrm dx\,\mathrm dy\,\mathrm dz } $$ Can somebody help/correct me?
Your integral should be of the form $$\int_{z_0}^{z_1}\int_{y_0(z)}^{y_1(z)}\int_{x_0(y,z)}^{x_1(y,z)}(x^2+y^2+z^2)\,\mathrm dx\,\mathrm dy\,\mathrm dz $$ By the conditions, we find $0\le z\le a$, and for given $z$, $0\le y\le a-z$, and for given $z$ and $y$ $0\le x\le a-y-z$. So in the end it should be $$\int_{0}^{a}\int_{0}^{a-z}\int_{0}^{a-z-y}(x^2+y^2+z^2)\,\mathrm dx\,\mathrm dy\,\mathrm dz $$
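If you want to check the resulting value, a symbolic computation (a sketch assuming sympy is installed) gives $a^5/20$:

```python
import sympy as sp

x, y, z, a = sp.symbols('x y z a', positive=True)

# innermost integral first: x, then y, then z
I = sp.integrate(x**2 + y**2 + z**2,
                 (x, 0, a - z - y),
                 (y, 0, a - z),
                 (z, 0, a))
print(sp.simplify(I))  # a**5/20
```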
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Proving that there is no continuous function $f:\Bbb R\to\Bbb R$ satisfying $f(\Bbb Q)\subset\Bbb R-\Bbb Q$ and $f(\Bbb R-\Bbb Q) \subset\Bbb Q$. How can I prove that there is no continuous function $f:\mathbb{R}\to \mathbb{R}$ satisfying $f(\mathbb{Q}) \subset \mathbb{R}\backslash \mathbb{Q}$ and $f(\mathbb{R}\backslash \mathbb{Q} ) \subset \mathbb{Q}$?
HINT: Use the fact that between any two rational numbers there are infinitely many irrational numbers and, similarly, between any two irrational numbers there are infinitely many rational numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Solve $z^4+2z^3+3z^2+2z+1 =0$ Solve $z^4+2z^3+3z^2+2z+1 =0$ with $z$ a complex variable. Attempt at solving the problem: We divide the polynomial by $z^2$ and we get: $z^2+2z+3+\dfrac{2}{z}+ \dfrac{1}{z^2}=0$. We set $w=z+ \dfrac{1}{z}$. We now have $w^2+2w+5=0$, with $\bigtriangleup = -16$. Let's find $\omega$ such that $\omega^2=-16$: we have $\omega=4i$. Therefore we have the 2 roots $w_ {1}=-1-2i$ and $ w_ {2}=-1+2i $. The issue is: I don't know how to find $z$.
"I don't know how to find z" Sure you do! .... If you are correct in what you have done so far and you have $z + \frac 1z = w$ And $w_1 = -1-2i$ and $w_2 = -1 + 2i$ then you need to solve $z +\frac 1z = (-1-2i)$ or $z^2 +(1+2i)z + 1=0$ ANd $z + \frac 1z = (-1+2i)$ or $z^2 + (1-2i)z + 1 = 0$. Both of which can be solved by quadratic equation to get the four solutions $z = \frac {(-1\pm 2i) \pm\sqrt{(1\mp 2i)^2 -4}}{2}$ (And if you are worried about expressing $\sqrt{M}$ just convert to polar coordinates.) That is.... if everything you've done is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Erdős-Mordell theorem geometry proof Using the notation of the Erdős-Mordell theorem, prove that $PA \cdot PB \cdot PC \geq \dfrac{R}{2r}(p_a+p_b)(p_b+p_c)(p_c+p_a)$. The notation of the Erdős-Mordell theorem means that $p_a$, for example, is the distance from the point $P$ to the side $a$, and $R$ is the circumradius. I am struggling to see how to use the product $PA \cdot PB \cdot PC$. We also have to relate this somehow to the circumradius. EDIT: Sorry, there was a typo originally. The $24$ should've been a $2r$.
In order to prove this inequality, it is important that you first prove the identity: $r = 4R\sin\left(\dfrac{A}{2}\right)\sin\left(\dfrac{B}{2}\right)\sin\left(\dfrac{C}{2}\right)$. Thus using Jensen's inequality: $p_c = PA\sin A_1, p_b = PA\sin A_2, A = A_1+A_2\Rightarrow p_c+p_b = PA(\sin A_1+\sin A_2)\leq 2PA\sin\left(\dfrac{A_1+A_2}{2}\right)=2PA\sin\left(\dfrac{A}{2}\right)\Rightarrow \dfrac{(p_a+p_b)(p_b+p_c)(p_c+p_a)}{PA\cdot PB\cdot PC} \leq 8\sin\left(\dfrac{A}{2}\right)\sin\left(\dfrac{B}{2}\right)\sin\left(\dfrac{C}{2}\right)= 8\left(\dfrac{r}{4R}\right)= \dfrac{2r}{R}$. Thus the problem is solved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inverse of sum of matrices (SVD, ridge regression) Looking at these slides, I've found the following: $X=UDV^T$, where $U$ and $V$ are orthogonal matrices, $V$ is a square matrix, and $D$ contains the singular values of $X$. The author then writes $$X(X^TX + \lambda I)^{-1}X^Ty$$ $$=UD(D^2 + \lambda I)^{-1}DU^Ty.$$ Why does that equality hold? I've tried substituting $X=UDV^T$ and got $$UDV^T(VD^2V^T+\lambda I)^{-1}VDU^Ty.$$ If there were no $+\lambda I$ term then, as $V$ is a square matrix, we would have $$UDV^T V^{-T}D^{-1}D^{-1}V^{-1} VDU^Ty$$ $$=UU^Ty.$$ However, the $+\lambda I$ term is there, so I don't know how to proceed.
Here is the point: in $\ UDV^T(VD^2V^T+\lambda I)^{-1}VDU^Ty$ you should write $I=VV^T$, giving the factorization $UDV^T(V(D^2+\lambda I)V^T)^{-1}VDU^Ty$ $=UDV^T(V^T)^{-1}(D^2+\lambda I)^{-1}V^{-1}VDU^Ty$ $=UD(D^2+\lambda I)^{-1}DU^Ty,$ yielding the looked-for formula.
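A numeric check of the identity (my own sketch, assuming numpy; the random sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 5))
y = rng.normal(size=8)
lam = 0.7

U, d, _ = np.linalg.svd(X, full_matrices=False)

lhs = X @ np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)
rhs = U @ ((d**2 / (d**2 + lam)) * (U.T @ y))  # U D (D^2 + lam I)^{-1} D U^T y

print(np.allclose(lhs, rhs))  # True
```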
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Intuition behind $P(A\mid B)$. What is the intuition behind the formula $$P(A\mid B)=\frac{P(A\cap B)}{P(B)}$$ I have seen this formula around, but every site/book I look at does not really have a clear-cut explanation of it.
It's equivalent to $$\frac{n(A\cap{B})}{n(B)}$$ in other words the proportion of the members of set $B$ which are also members of set $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Conditional Expectation - using Wald's Equation Let $N\sim\!\mathcal{P}(\lambda)$ and $(X_i)_{i\!\geq{1}}$ iid, $X_i\sim\!Be(p)$. If $N$ and $(X_i)_{i\!\geq{1}}$ are independent for all $i$, calculate $P(\mathbb{E}(X_1+\ldots+X_N|N)=0)$. So using Wald's equation and the fact that $(X_i)_{i\!\geq{1}}$ are iid, I know that $\mathbb{E}(\sum_{i=1}^{N}X_i|N)=N\mathbb{E}(X_1)=Np$ But, how do I calculate $P(\mathbb{E}(X_1+\ldots+X_N|N)=0)$? Thanks for the help!
As you found, $\mathsf E\left(\sum\limits_{j=1}^N X_j\;\middle\vert\; N\right)= Np$. You know that $N\sim\mathcal P(\lambda)$, so (for $p\neq0$) you can find $\mathsf P(Np{=}0)=\mathsf P(N{=}0)$ from $$\mathsf P(N{=}k) \;=\; \dfrac{\lambda^k\, {\sf e}^{-\lambda}}{k!}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the sum $\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$ Find the sum $$\sum _{ k=1 }^{ 100 }{ \frac { k\cdot k! }{ { 100 }^{ k } } } \binom{100}{k}$$ When I asked my teacher how to solve this question, he responded that it is very hard and that I couldn't solve it. I hope you can help me solve and understand the question.
I am re-editing a-rodin's answer, correcting a few typos [of an earlier version, now edited]. \begin{align} \sum\limits_{k=1}^{100} \frac {k\cdot k!}{100^k} \frac{100!}{k!(100-k)!} &= \frac{100!}{100^{100}} \sum\limits_{k=1}^{100} \frac{k\cdot100^{100-k}}{(100-k)!}\\ &= \frac{100!}{100^{100}} \sum\limits_{k=0}^{99}\frac{(100-k)\cdot 100^k}{k!}\\ &=\frac{100!}{100^{100}} \sum\limits_{k=1}^{99} \left( \frac {100^{k+1}}{k!} - \frac{100^k}{(k-1)!}\right)+\frac{100!}{100^{99}}\\ &= \frac{100!}{100^{100}} \left( \sum\limits_{k=1}^{99} \frac {100^{k+1}}{k!} - \sum\limits_{k=0}^{98} \frac{100^{k+1}}{k!}\right)+\frac{100!}{100^{99}}\\&= \frac{100!}{100^{100}} \left( \frac{100^{100}}{99!} + \sum\limits_{k=1}^{98} \frac {100^{k+1}}{k!} - \sum\limits_{k=1}^{98} \frac{100^{k+1}}{k!} -\frac{100^1}{0!} \right)+\frac{100!}{100^{99}}\\ &= 100-\frac{100!}{100^{99}}+\frac{100!}{100^{99}}=100. \end{align}
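The value $100$ can be confirmed exactly with rational arithmetic (a sketch using only Python's standard library):

```python
from fractions import Fraction
from math import comb, factorial

total = sum(Fraction(k * factorial(k), 100**k) * comb(100, k)
            for k in range(1, 101))
print(total)  # 100
```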
{ "language": "en", "url": "https://math.stackexchange.com/questions/1632928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Is the punctured plane homotopy equivalent to the circle? I know that the fundamental group of $X = \mathbb R^2 \setminus \{(0,0)\}$ is the same as the fundamental group of the circle $Y = S^1$, namely $\mathbb Z$. However, $X$ and $Y$ are not homotopic, i.e. we can't find continuous maps $f:X\to Y, g : Y \to X$ such that $f \circ g$ is homotopic to $id_Y$ and $g \circ f$ is homotopic to $id_X$. I would like to prove it, but I don't really know how to do it. If such $f$ and $g$ existed, then it would be something like $f : x \mapsto x/\|x\|$, and $g(Y)$ has to be compact... I don't know how to continue. Any hint would be appreciated. I apologize if this has already been asked.
Just to answer this question: they are homotopy equivalent. If $g : Y \to X$ is the natural embedding, and $f$ is as above, then $g \circ f$ is homotopic to $id_X$: let $H_1 : X \times [0,1] \to X$ be defined as $$(x, t) \mapsto tx/\|x\| + (1-t)x.$$ (Even if $X$ is not convex, $H_1(x,t) \in X$ for any values $x,t$; this is clear geometrically, since the segment from $x$ to $x/\|x\|$ does not pass through the origin.) Then $H_1(x,0)=x$ and $H_1(x,1) = g(f(x))$ for any $x \in X$. Moreover $f \circ g$ is homotopic to $id_Y$, because it is equal to $id_Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Dense on the unit circle I am reading: "It is sufficient to show that the points $z_n = e^{2\pi in \xi}$ $\:\:n = (1, 2, 3...)$ are dense on the unit circle. ( $\xi$ is an irrational number)" How is this possible? Can anyone give me an intuitive explanation of this? (Not a solution) Thank you.
Intuitive explanation. Let $\alpha$ be an irrational angle, i.e. not a rational multiple of $2\pi$. Then if you start at $(1,0)$ and step around the unit circle in steps of size $\alpha$, the set of points you reach will come as close as you like to any other point. That follows from the fact that you can never get back to your starting point in a finite number of steps (the irrationality): by the pigeonhole principle two of the points you visit must come within any prescribed distance of each other, and stepping repeatedly by their (nonzero) difference then carries you arbitrarily close to every point of the circle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Matrix induction proof Given the following $\lambda_{1}=\frac{1-\sqrt{5}}{2}$ and $\lambda_{2}=\frac{1+\sqrt{5}}{2}$ How do I prove this using induction: $\begin{align*} A^k=\frac{1}{\sqrt{5}}\left(\begin{array}{cc} \lambda_2^{k-1}-\lambda_1^{k-1} & \lambda_2^{k}-\lambda_1^{k}\\ \lambda_2^{k}-\lambda_1^{k} & \lambda_2^{k+1}-\lambda_1^{k+1} \end{array}\right),\,k>0 \end{align*}$ when $ A=\left(\begin{array}{cc} 0 & 1\\ 1 & 1 \end{array}\right)\in\text{Mat}_{2}(\mathbb{R}) $. I know how induction works and it holds for $k=1$. But I'm stuck at $k+1$. My own work (Revision): \begin{align*} A^{k+1}=\frac{1}{\sqrt{5}}\left(\begin{array}{cc} \lambda_2^{k-1}-\lambda_1^{k-1} & \lambda_2^{k}-\lambda_1^{k}\\ \lambda_2^{k}-\lambda_1^{k} & \lambda_2^{k+1}-\lambda_1^{k+1} \end{array}\right)\cdot\left(\begin{array}{cc} 0 & 1\\ 1 & 1 \end{array}\right)=\frac{1}{\sqrt{5}}\left(\begin{array}{cc} \lambda_2^{k}-\lambda_1^{k} & \lambda_2^{k-1}-\lambda_1^{k-1}+\lambda_2^{k}-\lambda_1^{k}\\ \lambda_2^{k+1}-\lambda_1^{k+1} & \lambda_2^{k}-\lambda_1^{k}+\lambda_2^{k+1}-\lambda_1^{k+1} \end{array}\right) \end{align*} So this is Fibonacci I guess? And therefore equivalent to: \begin{align*} \frac{1}{\sqrt{5}}\left(\begin{array}{cc} \lambda_2^{k}-\lambda_1^{k}& \lambda_2^{k+1}-\lambda_1^{k+1}\\ \lambda_2^{k+1}-\lambda_1^{k+1} & \lambda_2^{k+2}-\lambda_1^{k+2} \end{array}\right) \end{align*} (Revision 2) $A^k=P\Lambda^{k}P^{-1}$: \begin{align*} \left(\begin{array}{cc} -\lambda_2& -\lambda_1\\ 1 & 1 \end{array}\right)\cdot\left(\begin{array}{cc} \lambda_1^k& 0\\ 0 & \lambda_2^k \end{array}\right)\cdot\frac{1}{\sqrt{5}}\left(\begin{array}{cc} -1& -\lambda_1\\ 1 & \lambda_2 \end{array}\right)=\frac{1}{\sqrt{5}}\left(\begin{array}{cc} \lambda_1^k\lambda_2-\lambda_1\lambda_2^k& \lambda_1^{k+1}\lambda_2-\lambda_1\lambda_2^{k+1}\\ \lambda_2^k-\lambda_1^k & \lambda_2^{k+1}-\lambda_1^{k+1} \end{array}\right) \end{align*} Is this the right way?
1) A straightforward proof which is more natural than induction, in my opinion (for an induction proof see 2)). Use the diagonalization identity $A=P\Lambda P^{-1}$, from which $A^k=P\Lambda^kP^{-1} \ \ (1)$ where $\Lambda$ is the diagonal matrix diag$(\lambda_1,\lambda_2)$. Here is an extension of my first explanation: indeed the columns of matrix $P$ are eigenvectors associated with $\lambda_1$ and $\lambda_2$ in this order; we can take $P=\left(\begin{array}{cc} -\lambda_2 & -\lambda_1\\ 1 & 1 \end{array}\right)$ with $P^{-1}=\dfrac{1}{\sqrt{5}}\left(\begin{array}{cc} -1 &-\lambda_1 \\ 1 &\lambda_2 \end{array}\right)$. Plugging these expressions into formula (1) gives the answer. 2) @dk20, as you asked for an induction proof, I add it to the previous text: one wants to prove that, for any $k>1$, the matrices $A=\left(\begin{array}{cc} 0 & 1 \\ 1 & 1 \end{array}\right)\left(\begin{array}{cc} \lambda_2^{k-1}-\lambda_1^{k-1} & \lambda_2^{k}-\lambda_1^{k}\\ \lambda_2^{k}-\lambda_1^{k} & \lambda_2^{k+1}-\lambda_1^{k+1} \end{array}\right)$ and $B=\left(\begin{array}{cc} \lambda_2^{k}-\lambda_1^{k} & \lambda_2^{k+1}-\lambda_1^{k+1}\\ \lambda_2^{k+1}-\lambda_1^{k+1} & \lambda_2^{k+2}-\lambda_1^{k+2} \end{array}\right)$ are identical. It is clear that $A_{1j}=B_{1j}$ ($j=1,2$: coefficients of the first row). Let us now prove that $A_{21}=B_{21}$ (bottom left coefficients), i.e., $\lambda_2^{k-1}-\lambda_1^{k-1} + \lambda_2^{k}-\lambda_1^{k}=\lambda_2^{k+1}-\lambda_1^{k+1}$. This equation is equivalent to the following one: $\lambda_1^{k-1}(1+\lambda_1-\lambda_1^2)=\lambda_2^{k-1}(1+\lambda_2-\lambda_2^2)$, which is evidently fulfilled because it boils down to $0=0$; indeed, $\lambda_1$ and $\lambda_2$ are both roots of the quadratic equation $x^2-x-1=0$ ($\lambda_2$ is the "golden number"). The reason why $A_{22}=B_{22}$ is identical (change $k$ into $k+1$).
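As a numeric confirmation of the closed form (a sketch assuming numpy; $k=7$ is an arbitrary choice):

```python
import numpy as np

l1 = (1 - 5**0.5) / 2
l2 = (1 + 5**0.5) / 2
A = np.array([[0, 1], [1, 1]])

k = 7
closed = np.array([[l2**(k-1) - l1**(k-1), l2**k     - l1**k],
                   [l2**k     - l1**k,     l2**(k+1) - l1**(k+1)]]) / 5**0.5

print(np.allclose(np.linalg.matrix_power(A, k), closed))  # True
```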
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Detailed balance implies time reversibility, how about the converse? Given a Markov chain (finite state space) $X_0,X_1,X_2,\ldots$ with transition matrix $P$ and initial distribution $\pi$, if they satisfy $\pi(x)P(x,y)=\pi(y)P(y,x)$, we say they satisfy detailed balance. If the joint distribution of $(X_0,X_1,\ldots,X_n)$ is identical to that of $(X_n,X_{n-1},\ldots,X_0)$ for any $n \ge 1$, i.e. $\mathbb{P}\left\{ {{X_0} = {x_0},{X_1} = {x_1}, \ldots ,{X_n} = {x_n}} \right\} = \mathbb{P}\left\{ {{X_0} = {x_n},{X_1} = {x_{n - 1}}, \ldots ,{X_n} = {x_0}} \right\}$ for any realization $x_0,x_1,...,x_n$ and any $n$, then we say the Markov chain is reversible. Detailed balance implies reversibility. My question is: does time reversibility imply detailed balance? I think it is probably not true; can anyone help give a counterexample? Thank you!
$\Bbb P[X_0=x_0,X_1=x_1]=\pi(x_0)P(x_0,x_1)$ and $\Bbb P[X_0=x_1,X_1=x_0]=\pi(x_1)P(x_1,x_0)$, so the $n=1$ case of reversibility already implies detailed balance.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused about proof by contradiction In proof by contradiction, I can understand how it works when the hypothesis leads to a clearly false proposition. E.g., if we want to prove $P$, we assume $\neg P$ and show that $\neg P \implies ... \implies Q$; but we know that $\neg Q$, and since we just proved (by contraposition) that $\neg Q \implies P$, $P$ is true. However, I get confused when the hypothesis leads to its own negation. In other words, we have $\neg P \implies ... \implies Q \implies P$. I can't help but feel that $\neg P\implies P$ is just something meaningless that we can't use to conclude anything. We could say that since $Q\implies P$, $\neg P \implies \neg Q \implies P$, but again what this is saying is $\neg P \implies P$. A good example of this type of proof is Dijkstra's algorithm, where the assumption that the selected vertex does not have its shortest path determined leads to the conclusion that it does indeed have its shortest path set.
Let $Q$ be the statement $P\land\lnot P$. We know that $Q$ is false. Then if you have $\lnot P\implies P$ then you have $\lnot P\implies (P\land \lnot P)$, which is $\lnot P\implies Q$. But we know $Q$ is false, so $\lnot P$ is false, i.e. $P$ is true. Indeed, every proof by contradiction can be written as: $$\lnot P\implies(A\land \lnot A)$$ for some predicate $A$. In your case, you knew that $\lnot Q$ is true, so you can conclude from $\lnot P\implies Q$ that $\lnot P\implies (Q\land\lnot Q)$. In the case of $\lnot P\implies P$, you have $A$ the same as $P$. But you still reach a contradiction: you know that something is both true and not true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to minimize $ab + bc + ca$ given $a^2 + b^2 + c^2 = 1$? The question is to prove that $ab + bc + ca$ lies between $-1$ and $1$, given that $a^2 + b^2 + c^2 = 1$. I could prove the maximum by the following approach. I changed the coordinates to spherical coordinates: $a = \cos A \\ b = \sin A \cos B \\ c = \sin A \sin B$ Using that $\cos X + \sin X$ always lies between $- \sqrt 2$ and $\sqrt 2$, I proved that $a + b + c$ lies between $- \sqrt 3$ and $\sqrt 3$. Using $(a + b + c)^2 = a^2 + b^2 + c^2 + 2(ab + bc + ca)$ I could prove that the maximum of $ab + bc + ca$ is $1$, but I can't prove the minimum.
From what you've done, $ab+bc+ca = \dfrac{(a+b+c)^2 - (a^2+b^2+c^2)}{2} \geq \dfrac{0 - 1}{2} = \dfrac{-1}{2}$, and this is the minimum value you sought. The minimum occurs when $a+b+c = 0, a^2+b^2+c^2 = 1$. To exhibit such $a,b,c$ you only need to find one solution of the system of $2$ equations above, then you are done. Since there are $3$ variables and only $2$ equations, you can take $c = \dfrac{1}{2}$, and then $a, b$ are the solutions of the equation $4x^2+2x-1 = 0$, thus $a = \dfrac{-1+\sqrt{5}}{4}$, and $b = -\dfrac{1}{2} - a = \dfrac{-1-\sqrt{5}}{4}$ by Vieta's formulas for quadratic equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can you find the maximum or minimum of an equation without calculus? Without using calculus is it possible to find provably and exactly the maximum value or the minimum value of a quadratic equation $$ y:=ax^2+bx+c $$ (and also without completing the square)? I'd love to know the answer.
One approach for finding the maximum value of $y$ for $y=ax^2+bx+c$ would be to see how large $y$ can be before the equation has no solution for $x$. First rearrange the equation into a standard form: $ax^2+bx+c-y=0$ Now solving for $x$ in terms of $y$ using the quadratic formula gives: $x= \frac{-b\pm \sqrt{b^2-4a(c-y)}}{2a}$ This will have a solution as long as $b^2-4a(c-y) \geq 0$ You can rearrange this inequality to get the maximum value of $y$ in terms of $a,b,c$. See if you get the same answer as the calculus approach gives. Remember that $a$ must be negative in order for there to be a maximum.
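One can confirm symbolically (a sketch assuming sympy is installed) that this no-calculus condition reproduces the usual vertex value:

```python
import sympy as sp

a, b, c, x, y = sp.symbols('a b c x y')

# the extreme y for which b^2 - 4a(c - y) = 0
y_max = sp.solve(sp.Eq(b**2 - 4*a*(c - y), 0), y)[0]
print(sp.simplify(y_max))  # c - b**2/(4*a)

# same as evaluating ax^2 + bx + c at the vertex x = -b/(2a)
vertex_val = (a*x**2 + b*x + c).subs(x, -b/(2*a))
print(sp.simplify(y_max - vertex_val))  # 0
```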
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Calculating the length of the paper on a toilet paper roll Fun with Math time. My mom gave me a roll of toilet paper to put in the bathroom, and looking at it I immediately wondered about this: is it possible, through very simple math, to calculate (with small error) the total paper length of a toilet roll? Writing down some math, I came to this study, which I share with you because there are some questions I have in mind, and because as someone rightly said: for every problem there always are at least 3 solutions. I started by outlining the problem in a geometrical way, namely looking only at the essential: the roll from above, identifying the salient parameters: Parameters $r = $ radius of internal circle, namely the paper tube circle; $R = $ radius of the whole paper roll; $b = R - r = $ "partial" radius, namely the difference of the two radii as stated. First Point I treated the whole problem in the discrete way. [See the end of this question for more details about what this means.] Calculation In a discrete way, the problem asks for the total length of the rolled paper, so the easiest way is to treat the length as the sum of the whole circumferences, starting at radius $r$ and ending at radius $R$. But how many circumferences are there? Here is one of the main points, and so I thought about introducing a new essential parameter, namely the thickness of a single sheet. Notice that it's important to work with measurable quantities. Calling $h$ the thickness of a single sheet, and knowing $b$, we can estimate how many layers $N$ are rolled: $$N = \frac{R - r}{h} = \frac{b}{h}$$ Having to compute a sum, the total length $L$ is then: $$L = 2\pi r + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi R$$ or better: $$L = 2\pi (r + 0h) + 2\pi (r + h) + 2\pi (r + 2h) + \cdots + 2\pi (r + Nh)$$ In which obviously $2\pi (r + 0h) = 2\pi r$ and $2\pi(r + Nh) = 2\pi R$. Writing it as a sum (and calculating it) we get: $$ \begin{align} L = \sum_{k = 0}^N\ 2\pi(r + kh) & = 2\pi r + 2\pi R + \sum_{k = 1}^{N-1}\ 2\pi(r + kh) \\\\ & = 2\pi r + 2\pi R + 2\pi \sum_{k = 1}^{N-1} r + 2\pi h \sum_{k = 1}^{N-1} k \\\\ & = 2\pi r + 2\pi R + 2\pi r(N-1) + 2\pi h\left(\frac{1}{2}N(N-1)\right) \\\\ & = 2\pi r N + 2\pi R + \pi hN^2 - \pi h N \end{align} $$ Using now $N = \frac{b}{h}$ and $r = R - b$ (because $R$ is easily measurable), we arrive after a little algebra at $$\boxed{L = \pi(2R - b)\left(\frac{b}{h} + 1\right)}$$ Small Example: $h = 0.1$ mm; $R = 75$ mm; $b = 50$ mm, whence $L \approx 157$ meters, which might fit. Final Questions: 1) Could it be a good approximation? 2) What about the $\gamma$ factor? Namely the paper compression factor? 3) Could a similar calculation be done via integration over a spiral path? Because that is actually what it is: a spiral. Thank you so much for the time spent on this maybe tedious, maybe boring, maybe funny question!
Lets do the spiral version. Using your notation, a spiral joining circles of radiuses $r$ and $R$ with $N$ twists has the form $S(t)=(r+\frac{tb}{2\pi N})e^{i t}$, where $t\in[0,2\pi N]$ The length $L$ of the spiral is $$\begin{align} L & = \int_{0}^{2\pi N}|S'(t)|dt \\ & = \int_{0}^{2\pi N}\Big|\frac{b}{2\pi N}e^{it}+(r+\frac{tb}{2\pi N})ie^{it}\Big|dt \\ & = \int_{0}^{2\pi N}\sqrt{\Big(\frac{b}{2\pi N}\Big)^2+\Big(r+\frac{tb}{2\pi N}\Big)^2}dt \\ & = \frac{b}{2\pi N}\int_0^{2\pi N}\sqrt{1+\Big(\frac{2\pi Nr}{b}+t\Big)^2}dt \\ & = \frac{b}{2\pi N}\int_{2\pi Nr/b}^{2\pi N(r/b+1)}\sqrt{1+t^2}dt \\ & = \frac{b}{4\pi N}\big(t\sqrt{1+t^2}+\ln(t+\sqrt{1+t^2})\big)\Big|_{2\pi Nr/b}^{2\pi N(r/b+1)} \end{align}$$
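For the question's example ($r=25$ mm, $R=75$ mm, $h=0.1$ mm, so $b=50$ mm and $N=500$ turns), the two models agree to within one circumference. A sketch in Python:

```python
import math

r, R, h = 25.0, 75.0, 0.1
b = R - r
N = int(b / h)  # 500 turns

# discrete model: sum of N+1 circumferences
discrete = sum(2 * math.pi * (r + k * h) for k in range(N + 1))

# spiral model: evaluate the antiderivative from the answer above
def g(t):
    s = math.sqrt(1 + t * t)
    return t * s + math.log(t + s)

t0 = 2 * math.pi * N * r / b
t1 = 2 * math.pi * N * (r / b + 1)
spiral = b / (4 * math.pi * N) * (g(t1) - g(t0))

print(discrete / 1000, spiral / 1000)  # both about 157 m
```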
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "350", "answer_count": 8, "answer_id": 5 }
What is the order when doing $x^{y^z}$ and why? Does $x^{y^z}$ equal $x^{(y^z)}$? If so, why? Why not simply apply the order of the operation from left to right? Meaning $x^{y^z}$ equals $(x^y)^z$? I always get confused with this and I don't understand the underlying rule. Any help would be appreciated!
The exponent is evaluated first if it is an expression. Examples are $3^{x+1}=3^{\left(x+1\right)}$ and $e^{5x^3+8x^2+5x+10}$ (the exponent is a cubic polynomial) and $10^{0+0+0+10^{15}+0+0+0}=10^{10^{15}}$. Left-associativity is simply not used for exponent towers: reading $x^{y^z}$ as $(x^y)^z$ would be redundant, since that already has the simpler form $x^{yz}$, so the notation is reserved for $x^{(y^z)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1633790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 6, "answer_id": 4 }
Limit on a five term polynomial it has been two years since I have taken or used calculus, and I am having some trouble with factoring a polynomial in order to take a limit on it. I have searched for previous similar questions here, but I have been unable to find anything helpful. Here is my problem: I have: \begin{equation} \lim_{x \to\ 1}\frac{{x^4+3x^3-13x^2-27x+36}}{x^2+3x-4} \end{equation} So, how do I approach this? When I try long division, I end up with a remainder, e.g.: \begin{equation} _{x \to\ 4}\frac{{x^4+3x^3-13x^2-27x+36}}{x^2+3x-4} = (x+3)(x+3)+\frac{{24x+72}}{(x+1)(x-4)} \end{equation} Plugging in the limit to what I came up with via long division just yields an undefined result, so I'm obviously doing something wrong. If anyone can help, I would really appreciate it! Thanks!
Ok, so I figured out my error. I was allowing the term \begin{equation}13x^2\end{equation} in the numerator to mess me up. Factoring the numerator should be approached by first splitting the coefficient $13$ into a sum. Thus \begin{equation}x^4+3x^3-13x^2-27x+36\end{equation} becomes \begin{equation}x^4+3x^3-9x^2-4x^2-27x+36\end{equation} Next, I group the terms that have common factors, so the previous expression becomes \begin{equation}x^2(x^2+3x-4)-9(x^2+3x-4)\end{equation} Since the factor \begin{equation} (x^2+3x-4)\end{equation} is identical to the denominator, the only factor left is \begin{equation}x^2-9\end{equation} Hence, \begin{equation}\lim_{x \to\ 1}x^2-9=-8\end{equation}
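A symbolic cross-check (a sketch assuming sympy is installed):

```python
import sympy as sp

x = sp.symbols('x')
num = x**4 + 3*x**3 - 13*x**2 - 27*x + 36

print(sp.factor(num))  # (x - 3)*(x - 1)*(x + 3)*(x + 4), up to ordering
print(sp.limit(num / (x**2 + 3*x - 4), x, 1))  # -8
```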
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cantor's Intersection Theorem with closed sets Cantor's Intersection Theorem states that "if $\{C_k\}$ is a sequence of non-empty, closed and bounded sets satisfying $C_1 \supset C_2 \supset C_3 \supset \dots$, then $\bigcap_{n \ge 1} C_n$ is nonempty." If the hypothesis "closed and bounded" is weakened to just "closed", the statement is no longer true. This makes sense to me, but I couldn't find a counterexample.
Consider the sequence $C_n = [n, \infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Defining natural transformations based on generalized elements? Let $F, G : \mathbf{C} \to \mathbf{D}$ be two functors between categories $\mathbf{C}$ and $\mathbf{D}$. A natural transformation $\eta$ from $F$ to $G$ is a collection of morphisms $\eta_C : FC \to GC$ in $\mathbf{D}$, one for each $C \in \mathbf{C}$. In particular, since $FC$ and $GC$ are just objects, we can't simply suppose that they have elements. And sometimes this makes it very difficult for me to come up with a definition if I am working with an arbitrary category $\mathbf{C}$ which I know nothing about. However, in Awodey's Category Theory (p.158), where $\mathbf{C}$ is an arbitrary category with products, he defines the component of a "twist" natural transformation $$ t_{(A,B)} : A \times B \to B \times A$$ by $$ t_{(A,B)} \langle a,b \rangle = \langle b,a \rangle$$ without making it explicit what $\langle a,b \rangle$ really is here. Are these generalized elements $a : Z \to A$ and $b: Z \to B$? If this is correct, I wonder when it is acceptable to define a component based on its action on generalized elements? Moreover, how do I know that this morphism really exists in $\mathbf{C}$?
What Awodey is trying to express in intuitive notation is that the twist map is $\langle \pi_1, \pi_0\rangle:A\times B\to B\times A$, so that $\pi_0\circ t_{(A,B)}=\pi_1$ and $\pi_1\circ t_{(A,B)}=\pi_0$. It's easy to see that for any pair of generalized elements $a:Z\to A$ and $b:Z\to B$ this will give you the property that you cite from Awodey. Conversely, if for all such generalized elements Awodey's property holds, then in particular it holds of $\langle \pi_0,\pi_1\rangle=id_{A\times B}$, and the universal property of products gives us again that $t_{(A,B)}=\langle\pi_1,\pi_0\rangle$. In any case, what makes sure that $t_{(A,B)}$ exists is the existence of products in $\mathbf{C}$ and their universal property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Find $\lim\limits_{R\to \infty}{1\over 2\pi i}\int_{{1\over 2}-iR}^{{1\over 2}+iR}{x^s\over s}\,ds$ $\lim\limits_{R\to \infty}{1\over 2\pi i}\int_{{1\over 2}-iR}^{{1\over 2}+iR}{x^s\over s}\,ds$ where $x>0$. Split it into the cases $x>1$, $x=1$, $0<x<1$. I tried using contour integration but I am still very confused as to how I should approach it. When $x=1$ I get $\ln|s|$. I spot no difference between $x>1$ and $x<1$ when it is presented like this, but I do sense it has something to do with $\operatorname{Log}$. I would really appreciate any guidance. (When writing this, no match came up, and searching didn't get me much further.)
Rewrite as $$\lim_{R \to \infty} \frac1{i 2 \pi}\int_{\frac12-i R}^{\frac12+i R} ds \frac{e^{s \log{x}}}{s} $$ When $x \gt 1$, $\log{x} \gt 0$ and we may close the contour to the left of $\operatorname{Re}{s}=\frac12$ with a circular arc of radius $R$ centered at the origin. The integral about the arc vanishes as $R \to \infty$. In this case, the closed contour encloses the pole at $s=0$ so that, in the limit, the integral is $1$. However, when $0 \lt x \lt 1$, $\log{x} \lt 0$ and we must close to the right of the line $\operatorname{Re}{s}=\frac12$ in order for the integral about the arc to vanish as $R \to \infty$. In this case, there are no poles and the integral is $0$ in this limit. (For $x=1$ a direct computation gives $\frac1{i2\pi}\big[\log s\big]_{\frac12-iR}^{\frac12+iR}\to\frac12$, since the moduli cancel and the argument changes by $\pi$ as $R\to\infty$.) Thus $$\lim_{R \to \infty} \frac1{i 2 \pi}\int_{\frac12-i R}^{\frac12+i R} ds \frac{x^s}{s} = \begin{cases}0 & 0 \lt x \lt 1 \\ \frac12 & x = 1 \\ 1 & x \gt 1 \end{cases}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Guide to solving Harary's exercises Most of Harary's harder exercises are research problems (although solved ones) that almost always need a single key idea as a breakthrough. Often it so happens that even after thinking for a long time, no solution comes to mind. I am a beginner in Graph Theory. How should I solve the problems from Harary? Often I have no option other than looking up the original paper where the solution was published, getting some idea of how to start, and then sometimes I am able to complete the proof. For example, I have been stuck for some time on this problem: if a graph has diameter $d$ and girth $2d+1$, then it is regular. This too was once a research problem, later solved. I think Harary should have given some hints for the harder problems. What I know is: for any simple graph, if $D$ is the diameter, then the girth is $\leq 2D+1$. But what happens when there is equality? I tried starting with vertices $u$ and $v$ with $deg(u)>deg(v)$, considering whether $u$ is connected to the cycle, or $v$ is connected to $u$, etc. There are so many things at once! Could someone please give me a hint only to start? I haven't seen the solution to this problem and have been trying it. However, I am convinced that, the way these problems were solved, I could never have done them. I would only have spent fruitless months.
Choose vertices $a$ and $u$ at maximum distance $d$. There is one neighbor of $a$ at distance $d-1$ from $u$. What is the distance from $u$ of the other neighbors? Working from here, show that $a$ and $u$ have the same valency. Now consider a cycle of length $2d+1$, and prove that all vertices on it have the same valency. To be fair to Harary, this is a reasonable problem for a graduate course, although hard to get started on.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove a homomorphic image has order 4 Let $G$ be a group of order $20$; show that $G$ has a homomorphic image of order $4$. From Cauchy's theorem we have elements $a,b \in G$ of orders $2,5$ respectively. By the first isomorphism theorem a homomorphic image is isomorphic to $G/\ker f$ for some homomorphism $f:G \to \operatorname{im}(f)$, so if $\ker f=\langle b\rangle$ then we are done; but for that I need $\langle b\rangle$ to be a normal subgroup of $G$, and I have some problems with that.
$G/\langle b\rangle$ is such a homomorphic image. The subgroup generated by $b$ is normal by the Sylow's theorems. Indeed the number $n$ of subgroups of order $5$ is congruent to $1\mod 5$, and a divisor of $20/5=4$. Hence $n=1$, which proves the subgroup is normal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it more accurate to evaluate $x^2-y^2$ as $(x+y)(x-y)$ in a floating point system? The expression $x^2-y^2$ exhibits catastrophic cancellation if $|x|\approx|y|$. Why is it more accurate to evaluate it as $(x+y)(x-y)$ in a floating point system (like IEEE 754)? I see this is intuitively true. Can anyone demonstrate an example where $|x|\approx|y|$? And is there a formal proof of the claim (or how would one write one)? A detailed explanation would be very much appreciated! Thank you!
The point is that the rounding happens before the subtraction. When $x$ and $y$ are of similar magnitude, $x^2$ and $y^2$ are each rounded to the nearest representable number, and the subtraction $x^2 - y^2$ then cancels the matching leading digits, leaving a result dominated by those round-off errors. If you write it as $(x + y)(x - y)$ instead, the difference $x - y$ is computed exactly whenever $y/2 \le x \le 2y$ (Sterbenz's lemma), and $x + y$ incurs at most one rounding, so the product has small relative error. For $x \approx y$, the second approach gives $\approx 2x \times (x-y)$. Hope that helps.
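A concrete double-precision demonstration (my own sketch): choose $x$ and $y$ so that $x^2$ cannot be represented exactly, and compare against the factored form, which here involves only exact operations.

```python
x, y = 1e8 + 1.0, 1e8  # both exactly representable as doubles

# x*x needs 54 significant bits, so it rounds; subtracting two nearly
# equal rounded squares then exposes that rounding error.
print(x * x - y * y)      # 200000000.0  (off by 1)

# x + y and x - y are computed exactly here, and so is their product.
print((x + y) * (x - y))  # 200000001.0  (exact)
```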
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
In $\lim_{(x,y)\to(0,0)}$, why can I change to $(x^2,x)$? When we have a multivariable function and we want to see if the function is continuous at a point, normally the origin, we sometimes replace $(x,y)\to(0,0)$ by paths like $(x^2,x)\to(0,0)$ to make it work. For example, for the function $f(x,y)=\begin{cases}\frac{yx^2}{x^4+y^2}& \text{if } (x,y)\neq (0,0)\\ 0& \text{if } (x,y)=(0,0)\end{cases}$ in order to see the discontinuity we can consider $(x,mx)\to(0,0)$ and we get $0$. But if we change to $(x,x^2)\to(0,0)$, the limit becomes $1/2$. Why can we choose those curves? Also, for a different function, could I choose $(1/x,x)\to(0,0)$? I know the limit of $1/x$ as $x$ goes to zero does not exist, therefore I am unsure.
Confronted with such a problem you have to make a decision, founded on your experience with similar problems: shall I try to prove that the limit exists, or shall I try to prove that the limit does not exist? If you conjecture that the limit $\lim_{{\bf z}\to{\bf 0}}f({\bf z})$ does not exist you can try to exhibit two curves $$\gamma:\quad t\mapsto{\bf z}(t)=\bigl(x(t),y(t)\bigr)\ne{\bf 0},\qquad \lim_{t\to0+}{\bf z}(t)={\bf 0}\ ,\tag{1}$$ for which the limit $\lim_{t\to0+}f\bigl({\bf z}(t)\bigr)$ is different, or one such curve, for which this limit does not exist. The logic behind this procedure is as follows: If $\lim_{{\bf z}\to{\bf 0}}f({\bf z})=\alpha$ for a certain $\alpha$ then by the "law of nested limits" one has $\lim_{t\to0+}f\bigl({\bf z}(t)\bigr)=\alpha$ for all curves $(1)$. Note that a curve must actually satisfy $\lim_{t\to0+}{\bf z}(t)={\bf 0}$, so something like $t\mapsto(1/t,t)$ is not admissible. If you conjecture that the limit $\lim_{{\bf z}\to{\bf 0}}f({\bf z})$ exists then you have to provide a full-fledged $\epsilon/\delta$ proof of this conjecture, and you cannot resort to special curves for a proof. In such cases it often, but not always, helps to express $f$ in polar coordinates, because the variable $r$ encodes the nearness of ${\bf z}$ to ${\bf 0}$ in a particularly simple way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1634898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Notation conversion help with respect to combinatorial proof First off, I wouldn't bring this to SO, but my teacher refuses to teach me notation. Anyhow... I'm doing a proof. The problem says: "Show that $8^n - 3^n$ is a multiple of 5 for all non-negative integers $n$." How do I say this in notation? I've got: $\forall n \in \mathbb{N_0}$ ...and $(8^n - 3^n)\%5=0$...but I don't know the proper way to link these. Would I say: Show $\forall n \in \mathbb{N_0}$, $(8^n - 3^n)\%5=0$? (where I join the clauses with a comma)... And is $\%$ even the correct symbol here for the modulo operation? Also, how would I say the set $X$ not including $x_i$? "$X$ \ $x_i$"?
First, let me say that your teacher might refuse teaching you this notation because it is considered bad style in written mathematics. The symbols from formal logic (like $\forall$) should be used nearly exclusively when talking about formulas in formal logic and maybe (carefully!) as a shorthand on the blackboard. Full English sentences are just easier to read. In your personal notes, you can do whatever you want, of course. If you insist on using symbols anyway, there are many different conventions. A comma is fine, as is a colon. You can also put parentheses around the quantifier or what follows it. For modulo, it is usually used as an equivalence relation and then written $8^n-3^n \equiv 0 \pmod 5$. This leaves us with the following (incomplete) list of possibilities: $$\forall n\in \mathbb N_0, 8^n-3^n \equiv 0 \pmod 5\\\forall n\in \mathbb N_0\colon 8^n-3^n \equiv 0 \pmod 5\\(\forall n\in \mathbb N_0)(8^n-3^n \equiv 0 \pmod 5)$$ which I would all consider correct (but other people might have more pronounced opinions). For your second question: you should use $X \setminus \{x_i\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{n\to\infty}{n\left(\ln(n+2)-\ln n\right)}$ I am trying to find$$\lim_{n\to\infty}{n\left(\ln(n+2)-\ln n\right)}$$ But I can't figure out any good way to solve this. Is there a special theorem or method to solve such limits?
Hint $$\ln(n+2)-\ln(n) = \ln\bigg (\frac{n+2}{n}\bigg)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 6 }
Convergence of $\sum_{n=1}^\infty \sqrt[n]{2}-1$ I'm trying to determine whether $$\sum_{n=1}^\infty \left ( \sqrt[n]{2}-1\right )$$ converges or diverges. Ratio, root, nth term, etc. tests are either inconclusive or too difficult to simplify. I feel like there must be something I can bound this series by, but I can't think what. In this question the answerer very smartly (I have no idea how he/she thought to do that) used the fact that $(1+\frac 1n)^n \le e \le 3$ to bound $\sqrt[n]{3} -1$ by $\frac 1n$, but that only worked because $3\ge e$. So even though that question looks very similar, I don't think I can apply that idea here. Edit: This is in the section before power/Taylor series so I don't think I'm allowed to use that.
Hint: $$\sqrt[n]{2}-1 = e^{\frac{\log{2}}{n}} - 1 = \frac{\log{2}}{n} + O(n^{-2})$$ as $n \to \infty$. ADDENDUM To address your specific problem, consider instead $$\left (1+\frac{\log{2}}{n} \right )^n $$ which you should be able to show is less than $2$. In that case, you can then show that $\sqrt[n]{2}-1 \gt (\log{2})/n$ and draw your conclusion about the sum.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that for $p \neq 2$ not every element in $\mathbb{Z}/p\mathbb{Z}$ is a square. Show that for $p\neq2$, not every element in $\mathbb{Z}/p\mathbb{Z}$ is a square of an element in $\mathbb{Z}/p\mathbb{Z}$. (Hint: $1^2=(p-1)^2=1$. Deduce the desired conclusion by counting). So far I have that $1=p^2-2p+1\Rightarrow p^2-2p=0\Rightarrow p^2=2p$, but I don't know where to go from here. I also don't fully understand what it means to deduce it by counting.
Because $\mathbb{Z}/p\mathbb{Z}$ is finite the map $f \colon \mathbb{Z}/p\mathbb{Z} \to \mathbb{Z}/p\mathbb{Z}$, $x \mapsto x^2$ is surjective (i.e. every element is a square) if and only if it is injective. But for $p \neq 2$ we have $-1 \neq 1$ with $f(-1) = 1 = f(1)$.
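A quick computational illustration of the counting argument (a small Python sketch; the count $(p+1)/2$ is $0$ together with the $(p-1)/2$ nonzero squares):

```python
# For odd primes p, squaring on Z/pZ is not surjective:
# the image {x^2 mod p} has only (p+1)/2 elements, not p.
for p in [3, 5, 7, 11, 13, 17]:
    squares = {x * x % p for x in range(p)}
    print(p, len(squares), len(squares) == (p + 1) // 2)  # prints True each time
```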
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
The nature of roots of the quadratic equation $ax^2+(b-c)x-2b-c-a=0,$ If the expression $ax^2+2bx+c$, where $a$ is a non-zero real number, has the same sign as that of $a$ for every real value of $x$, then roots of the quadratic equation $ax^2+(b-c)x-2b-c-a=0$ are: (A) real and equal (B) real and unequal (C) non-real having positive real part (D) non-real having having negative real part As the expression $ax^2+2bx+c$ has the same sign as that of $a$ for every real value of $x$, so if $a>0,$ then $4b^2-4ac<0$ and if $a<0$, then $4b^2-4ac>0$ To determine the nature of roots of the equation $ax^2+(b-c)x-2b-c-a=0$, I found its discriminant $\Delta =(b-c)^2+4a(2b+c+a)=b^2+c^2-2bc+8ab+4ac+4a^2$ Now I am not able to find the nature of roots of the equation.
Since $ax^2+2bx+c$ always has the same sign as $a$ for any real $x$, it has no real roots, so $4b^2 - 4ac < 0$. Now try writing \begin{align} (b-c)^2 + 4a(2b+c+a) &= (b-c)^2 + 4(2ab+ac+a^2) \\ &= (b-c)^2 + 4(2ab+b^2+a^2) + 4(ac - b^2) . \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When is the rank of Jacobian constant? Suppose I've got a function $f : \mathbb{R}^{n} \to \mathbb{R}^{m}$ which I know is bijective. Considering $\mathcal{J}$, the Jacobian of $\ f$, I want to understand what can be said about the rank of $\mathcal{J}(\mathbf{x})$. Let's say I evaluate $\mathcal{J}(\mathbf{0})$, and find that the rank of $\mathcal{J}(\mathbf{0})$ is $k$. Does this mean that the rank of $\mathcal{J}(\mathbf{x})$ is $k$ for all $\mathbf{x} \in \mathbb{R}^{n}$? Is there a theorem regarding this? EDIT: If this is not enough that $f$ is a bijection, what if $f$ is a homeomorphism? Or of class $C^{\infty}$?
If $f$ is a bijection, the Jacobian need not even exist. Let $m=n$ and $f$ be a generic permutation of $\Bbb{R}^m$. A random permutation is unlikely to be continuous anywhere, much less have derivatives. (Bi-)Continuity (being homeomorphic) isn't sufficient to guarantee a derivative exists. Consider continuous nowhere differentiable functions, for example, the Weierstrass function. A standard counterexample to $\mathscr{C}^{\infty}$ being sufficient is $f(x) = x^3$, which is rank deficient at $x=0$ and full rank everywhere else.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Distance function is continuous in topology induced by the metric The question is (from Topology without tears) that: Let $(X,d)$ be a metric space and $\tau$ the corresponding topology on $X$. Fix $a \in X$. Prove that the map $f:(X,\tau) \rightarrow \mathbb{R}$ defined by $f(x) = d(a,x)$ is continuous. My first attempt was that let an open ball, $B_{\epsilon}(a)\subset \mathbb{R}$ and show that $f^{-1}(B_{\epsilon}(a)) \subset \tau$. Since $f(x) = d(a,x)$, $f^{-1}(B_{\epsilon}(a))=B_{\epsilon}(a) \in \tau$. I am pretty sure the last sentence is wrong. I am not sure if every open ball is an open set in topology induced by metric space. Please give me some hint and direction. Thanks
Not sure why you use $a$ both for elements in $\mathbb R$ and in $X$. Note by the triangle inequality, $$d(x, a)\le d(x, y)+ d(y, a)\Rightarrow d(a, x) - d(a, y)\le d(x, y)$$ Interchanging the roles of $x, y$ and using $f(x) = d(a, x)$ we have $$\tag{1}|f(x) - f(y)|\le d(x, y).$$ This inequality is sufficient for us to conclude that $f$ is continuous. Let $U$ be open in $\mathbb R$. Let $x\in f^{-1}(U)$. Then $f(x) \in U$ and there is $\epsilon>0$ so that $(f(x) -\epsilon, f(x) + \epsilon) \subset U$, as $U$ is open. Then if $y\in B_\epsilon(x)$, then $d(x, y)<\epsilon$ and by $(1)$, $$|f(x) - f(y)|<\epsilon \Rightarrow f(y) \in (f(x)-\epsilon, f(x) +\epsilon)\subset U.$$ This implies $B_\epsilon(x) \subset f^{-1}(U)$. Thus $f^{-1}(U)$ is open and so $f$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $G$ is of order $n$ and $k$ is prime to $n$ then $g(x)=x^k$ is one-to-one Let $G$ be a group of order $n$ and let $k$ be prime to $n$, show that $g(x) = x^k$ is one-to-one. I started trying to prove this, and said: If $g(x) = g(y)$ for some $x,y \in G$ then $x^k = y^k$. Also, since $n$ and $k$ are coprime, there are $a,b$ such that $an + bk = 1$. Also, we know that $x^n = y^n = 1$. But I got stuck there. Somehow I need to use $an+bk = 1$ for some $a,b$ to come to the conclusion that $x=y$, but no luck.
Since $G$ is finite, you only have to show injectivity. So let $x \in G$ with $x^k=1$. Then the order of $x$ is a divisor of $k$. But we also know that the order of $x$ is a divisor of $n$. Hence the order of $x$ is a common divisor of $k$ and $n$, hence equal to $1$, i.e. $x=1$. To turn this into injectivity for a general (possibly non-abelian) $G$, use your relation $an+bk=1$ directly: since powers of a fixed element commute, $x = x^{an+bk} = (x^n)^a (x^k)^b = (x^k)^b$, so $x^k = y^k$ forces $x = (x^k)^b = (y^k)^b = y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a thousand natural numbers such that their sum equals their product The question is to find a thousand natural numbers such that their sum equals their product. Here's my approach : I worked on this question for lesser cases : \begin{align} &2 \times 2 = 2 + 2\\ &2 \times 3 \times 1 = 2 + 3 + 1\\ &3 \times 3 \times 1 \times 1 \times 1 = 3 + 3 + 1 + 1 + 1\\ &7 \times 7 \times 1 \times 1 \times \dots\times 1 \text{ (35 times) } = 7 + 7 + 1 + 1 + \dots \text{ (35 times) } \end{align} Using this logic, I seemed to have reduced the problem in the following way. $a \times b \times 1 \times 1 \times 1 \times\dots\times 1 = a + b + 1 + 1 +\dots$ This equality is satisfied whenever $ ab = a + b + (1000-n)$ Or $ abc\cdots n = a + b + \dots + n + \dots + (1000 - n)$ In other words, I need to search for $n$ numbers such that their product is greater by $1000-n$ than their sum. This allows the remaining spots to be filled by $1$'s. I feel like I'm close to the answer. Note : I have got the answer thanks to Henning's help. It's $112 \times 10 \times 1 \times 1 \times \dots \times 1$ ($998$ ones) $= 112 + 10 + 1 + 1 + \dots + 1$ ($998$ ones). This is for the two variable case. Have any of you found answers for more than two variables? $abc\cdots n = a + b + c + \dots + n + (1000 - n)$
There's a sign error in your final equation; you want $$ a+b+998=ab $$ which simplifies to $$ (a-1)(b-1) = 999 $$ from which it should be easy to extract several integer solutions.
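To make the hint concrete, here is a small Python search over the factor pairs of $999$; each pair gives a valid set of $1000$ numbers (the two factors plus $998$ ones):

```python
# Enumerate solutions of (a-1)(b-1) = 999 and check sum == product
# for the 1000 numbers a, b, 1, 1, ..., 1 (998 ones).
for d in range(1, 1000):
    if 999 % d == 0:
        a, b = d + 1, 999 // d + 1
        nums = [a, b] + [1] * 998
        assert sum(nums) == a * b  # product of all 1000 numbers is a*b*1**998
        print(a, b)
```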
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Amount of interest Find the amount of interest earned between time $t$ and $n$ where $t<n$, if $I_r=r$ for each positive integer $r$. Answer is $\frac{1}{2}(n^2+n-t^2-t)$ $I_{[t,n]}=A(n)-A(t)$ $I_{[0,r]}=A(r)-A(0)=r$ $A(r)=A(0)+r$ For $t<r<n$ $I_{[r,n]}=A(n)-A(r)$ ;(1) $I_{[t,r]}=A(r)-A(t)$ ;(2) I cannot get any further.
Here $I_r=A(r)-A(r-1)$ is the interest earned in period $r$, so $$ A(n)-A(t)=\sum_{r=t+1}^n I_r=\sum_{r=t+1}^n r = \sum_{r=1}^n r-\sum_{r=1}^t r=\frac{n(n + 1)}{2}-\frac{t(t + 1)}{2}=\frac{1}{2}(n^2 + n- t^2 - t) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1635958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Sum of all elements in congruence class modulo n With $+$ defined as $[a]+[b]=[a+b]$, show that $[0]+[1]+\cdots+[n-1]$ is equal to either $[0]$ or $[n/2]$ in $\Bbb Z_n$. How do I go about proving this? I have managed to get $[(n^2-n)/2]$ using the definition but how do I proceed from here to the result? Help would be much appreciated, thanks in advance!
If $n=2k+1$ is odd, $\left[\dfrac{n(n-1)}2\right]=[nk]=[0]$. If $n=2k$ is even, $\left[\dfrac{n(n-1)}2\right]=[k(n-1)]=[-k]=[k]=[n/2]$, since $-k\equiv k\pmod{2k}$.
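A quick check of both cases in Python:

```python
# sum of 0, 1, ..., n-1 modulo n: 0 when n is odd, n/2 when n is even.
for n in range(2, 20):
    s = sum(range(n)) % n
    assert s == (0 if n % 2 else n // 2)
```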
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
BMO2 2016 Number Theory Problem Suppose that $p$ is a prime number and that there are different positive integers $u$ and $v$ such that $p^2$ is the mean of $u^2$ and $v^2$. Prove that $2p−u−v$ is a square or twice a square. Can anyone find a proof. I can't really see any way to approach the problem.
Note that $2p^2=u^2+v^2$, or $(p-u)(p+u)=(v-p)(v+p)$. WLOG, suppose $u<p<v$. From the above equation, we have: $$2p-u-v=(p-u)+(p-v)=\frac{(v-p)(v-u)}{p+u}$$ Now, we do the following analysis: If $q$ is an odd prime and $q^a\mid(v-p)$, then $q^a\nmid(v+p)$, since $p$ is prime and $p\nmid v$. So $q^a\mid(p-u)(p+u)$, and only one of $(p-u)$ and $(p+u)$ is divisible by $q^a$. If $q^a\mid(p-u)$, then $q^a\mid(v-p+p-u)$, i.e. $q^a\mid(v-u)$, and so $$q^{2a}\,\Big|\,\frac{(v-p)(v-u)}{p+u}.$$ If $q^a\mid(p+u)$, then $$q\nmid\frac{(v-p)(v-u)}{p+u},$$ since the factorizations of numerator and denominator will cancel all $q$'s. The above is true for all odd primes, so $$2p-u-v=\frac{(v-p)(v-u)}{p+u}=n^2,$$ or $$2p-u-v=\frac{(v-p)(v-u)}{p+u}=2n^2.$$
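For small primes one can confirm the claim by brute force (an illustrative sketch; solutions with $u\neq v$ only exist for certain primes, e.g. $p\equiv 1 \pmod 4$):

```python
# Whenever 2*p^2 = u^2 + v^2 with u < p < v, check that 2p - u - v
# is a perfect square or twice a perfect square.
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

for p in [5, 13, 17, 29, 37, 41, 53]:
    for u in range(1, p):
        v2 = 2 * p * p - u * u
        v = isqrt(v2)
        if v * v == v2:
            d = 2 * p - u - v
            assert is_square(d) or (d % 2 == 0 and is_square(d // 2)), (p, u, v)
            print(p, u, v, d)
```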
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Infimal convolution $g^\star = f_1^\star + f_2^\star$ Let $f_1$ and $f_2$ be convex functions on $R^n$. Their infimal convolution $g = f_1 \diamond f_2$ is defined as $$ g(x) = \inf \{f_1(x_1) + f_2(x_2) \mid x_1 + x_2 = x\}. $$ Prove that $g^\star = f_1^\star + f_2^\star$.
I finally found a solution to the problem $$ g^\star(y) = \sup_{x}\; \{x^Ty - g(x)\} $$ As we know $$ g(x) = \inf_{x_1 + x_2 = x}\; \{f_1(x_1) + f_2(x_2)\} $$ Now we have $$ g^\star(y) = \sup_{x}\; \Big\{x^Ty - \inf_{x_1 + x_2 = x}\; \{f_1(x_1) + f_2(x_2)\} \Big\} $$ $$ = \sup_{x}\; \Big\{x^Ty + \sup_{x_1 + x_2 = x}\; \{-f_1(x_1) - f_2(x_2)\} \Big\} $$ $$ = \sup_{x}\,\sup_{x_1 + x_2 = x}\; \{x^Ty -f_1(x_1) - f_2(x_2)\} $$ $$ = \sup_{x_1,x_2}\; \{x_1^Ty + x_2^Ty -f_1(x_1) - f_2(x_2)\} $$ $$ = \sup_{x_1}\;\{x_1^Ty - f_1(x_1)\} + \sup_{x_2}\;\{x_2^Ty - f_2(x_2)\} = f_1^\star(y) + f_2^\star(y) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
u-substitution, indefinite integrals I was solving an A Level paper when I came across this question. I tried substitution, but I'm not getting the answer with that. Would appreciate it if someone would help me. I've looked on the web for an answer to this question, and could not find an example. Could you push me towards a proper u substitution for the following integral? Please don't solve the problem; just state what you would use as a substitution and why. $$\int(\sin^{10}x \cdot \cos x)\ dx$$ My sad attempt let $$u=\sin x$$ $$du=\cos{x}\ dx$$ $$\int(\sin^{10}x)\, du$$ should I use trigonometric identities or is another substitution valid? This question is in the substitution section of the textbook, so it has to be solved with simple substitution Thanks.
thanks to u/justpassingthrough $$\int(\sin^{10}x\cdot\cos x)\,dx$$ let $$u=\sin x$$ $$du=\cos x\,dx$$ $$\int(\sin^{10}x\cdot\cos x)\,dx=\int u^{10}\,du$$ $$=\frac{1}{11}u^{11}+C=\frac{1}{11}(\sin x)^{11}+C$$ cool
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Polynomial divides set of points Given a set of points in the plane with distinct $x$-coordinates, each point colored black or white. A polynomial $P(x)$ "divides" the set of points if no black point lies above $P(x)$ and no white point lies below $P(x)$, or vice versa. Points of any color can lie on $P(x)$. What is the least $k$ such that any valid set of $n$ points can be divided by a polynomial of degree at most $k$? We can do $k=n-2$ by having a polynomial pass through any $n-1$ points, thus trivially satisfying the condition of dividing the points.
For all $n$, there is a way to choose the points & colors that forces $P$ to have degree at least $n-2$. Assume $n \geq 3$ (the smaller cases are easy). Choose the points $\{(i,0) \mid 1 \leq i \leq n-2\} \cup \{(n-1,1),(n,n^2)\}$. Color them alternating white and black in order of their $x$-coordinates. For any $1 \leq i \leq n-3$, we have that either $P(i) \geq 0$ and $P(i+1) \leq 0$ or vice versa. Since $P$ is non-zero somewhere between $i$ and $i+1$ ($P$ can't be the $0$ polynomial - that doesn't divide the points), $P$ has a root somewhere in $[i,i+1]$. It is possible for two of these roots to be the same, but it's not hard to see that this must then be a double root, or there will be additional roots in one of the two intervals. This gives a root of $P$ for each $i$, totaling $n-3$ roots. Assume that $P$ has degree $k = n-3$. Then $P$ can't have more roots than we've already found. Let $x_1, x_2, \ldots, x_{n-3}$ be these roots. Since the two last points are both above the $x$-axis, and one of those points is a lower bound, $P$ must stay above the $x$-axis after its last root. Now we consider the two cases, when the final point is a lower bound or an upper bound. Case 1: The final point is an upper bound. The point at $(n-2,0)$ is also an upper bound. $P$ is, after its last root, on the "wrong" side of the $x$-axis for this point. Its last root is at most $n-2$, so it must pass through the point $(n-2,0)$. Then it will be on the wrong side of the $x$-axis for the previous point, and the previous root is at most $n-3$, so $P$ must pass through the point $(n-3,0)$. This continues through all the points, and $P$ must pass through the 2nd point, only to be on the wrong side of the $x$-axis for the first point. But then $P$ already has $n-3$ roots, so it can't cross the $x$-axis again to get to the correct side for the 1st point. Thus, we have a contradiction. Case 2: The final point is a lower bound. Since we know all the roots of $P$, we can write it as $P(x) = a \displaystyle\prod_{i=1}^{n-3} (x-x_i)$. Now, using $x_i \in [i,i+1]$, we can give an upper bound: $$\frac{P(n)}{P(n-1)} = \frac{\prod_{i=1}^{n-3} (n-x_i)}{\prod_{i=1}^{n-3} (n-1-x_i)} \leq \frac{\prod_{i=1}^{n-3} (n-i)}{\prod_{i=1}^{n-3} (n-2-i)} = \frac{\prod_{i=3}^{n-1} i}{\prod_{i=1}^{n-3} i} = \frac{(n-1)!/2}{(n-3)!} = \frac{(n-1)(n-2)}{2} < n^2$$ But since the last point, $(n,n^2)$, is a lower bound, and the 2nd last point, $(n-1, 1)$, is an upper bound, we have $P(n) \geq n^2$ and $P(n-1) \leq 1$, so $\frac{P(n)}{P(n-1)} \geq n^2$, and we've reached a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to find proper functions to bound for integral squeeze theorem I am trying to prove that the function on $[0,1]$ defined by $$f(x)=\begin{cases} 1 & \text{if $x=\frac{1}{n}$ , $n \in \mathbb{N}$}\\ 0 &\text{else} \end{cases}$$ is Riemann integrable on this interval. I would like to use the squeeze theorem but I am having trouble choosing an appropriate upper bound. Let the lower bound be the integrable function $\alpha(x)=0$. I need an integrable function, $\omega$, dependent on $\epsilon$, such that $\alpha(x) \le f(x) \le \omega(x)$ for all $x \in [0,1]$ with $\int_{0}^{1} \omega-\alpha \lt \epsilon$ for all $\epsilon \gt 0$. But I don't know if I can use the definition of $f$ to choose a suitable $\omega$. I could also use some sort of argument involving changing things only at finitely many points, the Archimedean property, etc. Overall, I am just really confused. I have been working on it for hours but I can't make any progress. I really need some advice. Can anyone offer some help? Thanks
Let's choose a sequence of functions $f_N$, which is $1$ in little $\delta$-intervals around those $\frac{1}{n}$ with $n \leq N$, is $1$ for $x \in [0, \frac{1}{N}]$, and is $0$ elsewhere. Then you have exactly $N$ little $\delta$-balls and a little remainder. Note that the integral of $f_N$ is always at least as large as the integral of $f$, since it's at least as large everywhere. We can over-estimate the integral of $f_N$ by $N \cdot (2\delta) + \frac{1}{N} = 2N\delta + \frac{1}{N}$. Now, given an $\epsilon$, you can show the integral of $f$ is less than $\epsilon$ by comparing with $f_N$ with $N > \frac{2}{\epsilon}$ and $\delta < \frac{\epsilon}{4N}$. The first inequality guarantees that $\frac{1}{N} < \frac{\epsilon}{2}$, and the second guarantees that $2N\delta < \frac{\epsilon}{2}$. Together, along with our naive upper bound for the integral of $f_N$ from above, this guarantees that the integral of $f$ is at most $\epsilon$. As this can be done for any $\epsilon > 0$, and the integral of $f$ is bounded below by $0$, you can conclude that the overall integral of $f$ is $0$. $\diamondsuit$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of Linearly Dependent Vectors Is it possible that vectors $v_1, v_2, v_3$ are linearly dependent, but the vectors $w_1=v_1+v_2$, $w_2=v_1+v_3$, $w_3=v_2+v_3$ are linearly independent? I believe the answer is no, this is not possible, but I am struggling to formulate the proof.
No, they must be dependent. Note that you have $$\begin{align} v_1 &= \frac{1}{2}(w_1+w_2-w_3) \\ v_2 &= \frac{1}{2}(w_1-w_2+w_3) \\ v_3 &= \frac{1}{2}(-w_1+w_2+w_3) \end{align}$$ Now, by assumption the $v_i$'s are linearly dependent, so that there exist $(a,b,c)\neq(0,0,0)$ such that $0=av_1 + bv_2+c v_3$. This gives $$ 0 = (a+b-c) w_1 + (a-b+c) w_2+ (-a+b+c) w_3 $$ which will prove the $w_i$'s are linearly dependent, once we have shown that at least one of the 3 resulting coefficients is non-zero. But indeed, this must hold: otherwise, if they all are equal to zero, solving the system $$\begin{align} 0 &= a+b-c \\ 0 &= a-b+c \\ 0 &= -a+b+c \end{align}$$ shows there is only one solution, $(a,b,c)=(0,0,0)$: which was excluded by assumption.
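A numerical illustration with NumPy (random vectors, with a dependence imposed by hand):

```python
# If v1, v2, v3 are linearly dependent, then w1 = v1+v2, w2 = v1+v3,
# w3 = v2+v3 are linearly dependent too: the stacked matrix has rank < 3.
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(5), rng.standard_normal(5)
v3 = 2 * v1 - 5 * v2                      # force a linear dependence
W = np.stack([v1 + v2, v1 + v3, v2 + v3])
print(np.linalg.matrix_rank(W))           # 2
```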
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An example of a reducible random walk on groups? Random walk on group is defined in the following way as a Markov chain. A theorem says the uniform distribution is stationary for all random walk on groups. If the random walk is irreducible, for example, on a cyclic group $\Bbb Z_n$, then the stationary distribution is unique and hence uniform. My question is, is there any good example of reducible random walk on groups so that the uniform distribution is not the only stationary distribution? It is really hard for me to imagine such a reducible random walk. All examples for random walk on groups given in the textbook so far are irreducible. Hope someone can help. Thank you!
It is not quite true that any random walk on $\mathbb{Z}_n$ is irreducible. This will depend on the increment distribution $\mu$ as well. For example, on $\mathbb{Z}_4$ if you take $\mu=\delta_{2}$ (so each step deterministically adds $2$), then there are two communicating classes $\{0,2\}$ and $\{1,3\}.$ There are lots of different stationary distributions for this random walk.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1636965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a formula for the area under $\tanh(x)$? I understand trigonometry but I've never used hyperbolic functions before. Is there a formula for the area under $\tanh(x)$? I've looked on Wikipedia and Wolfram but they don't say if there's a formula or not. I tried to work it out myself and I got this far: $\tanh(x) = {\sinh(x)\over\cosh(x)} = {1-e^{-2x}\over 1+e^{-2x}} = {e^{2x}-1\over e^{2x}+1} = {e^{2x}+1-2\over e^{2x}+1} = 1-{2\over e^{2x}+1}$ Now I'm stuck. I don't know if I'm on the right track or not.
Notice, we know $\sinh(x)=\frac{e^x-e^{-x}}{2}$ & $\cosh(x)=\frac{e^x+e^{-x}}{2}$, hence the area under $\tanh(x)$ is $$\int \tanh(x)\ dx=\int \frac{\sinh(x)}{\cosh (x)}\ dx$$ $$=\int \frac{e^x-e^{-x}}{e^x+e^{-x}}\ dx$$ $$=\int \frac{d(e^x+e^{-x})}{e^x+e^{-x}}$$ let $e^x+e^{-x}=t\implies d(e^x+e^{-x})=dt $, $$=\int \frac{dt}{t}=\ln|t|+C$$ $$=\color{red}{\ln(e^x+e^{-x})+C}$$
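A quick numerical check that $\ln(e^x+e^{-x})$ really is an antiderivative of $\tanh(x)$:

```python
# Compare a central finite difference of F(x) = ln(e^x + e^-x) with tanh(x).
import numpy as np

x = np.linspace(-3, 3, 13)
h = 1e-6
F = lambda t: np.log(np.exp(t) + np.exp(-t))
print(np.max(np.abs((F(x + h) - F(x - h)) / (2 * h) - np.tanh(x))))  # ~1e-10
```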
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Contour Integration: non-convergent integral The question is $$I=\int_{-\infty}^{\infty} \frac{\sin^2{x}}{x^2} dx$$ My attempt: $$I=-\frac{1}{4}\int_{-\infty}^{\infty} \frac{e^{2ix}-2+e^{-2ix}}{x^2} dx$$ $$I=-\frac{1}{4} \Big[ \int_{-\infty}^{\infty} \frac{e^{2ix}}{x^2} dx - \int_{-\infty}^{\infty} \frac{2}{x^2} dx +\int_{-\infty}^{\infty} \frac{e^{-2ix}}{x^2} dx \Big] $$ Now I use contours as mentioned in this question. If we write $$I=-\frac{1}{4}\Big[A+B+C\Big]$$ where $A,B,C$ are the corresponding integrals. For, $A$, and $C$ they can be worked out using Jordan's lemma and the residue theorem. I run into a problem with $B$, as it does not converge. Any ideas?
We can use Parseval's (Plancherel's) theorem to solve the integral. Take the rectangle function $$\Pi(x) = \begin{cases} 1 \quad |x|<1 \\ 0 \quad \text{otherwise} \end{cases}$$ With the convention $\hat{f}(k)=\int_{-\infty}^\infty f(x)e^{-ikx}\,dx$, the Fourier transform of the rectangle function is a $\mathrm{sinc}$: $$ \hat{\Pi}(k) = \int_{-1}^{1} e^{-ikx}\,dx = \frac{2\sin(k)}{k}.$$ Parseval's theorem tells us that $$\int_{-\infty}^\infty \Pi^2(x)\, dx = \frac{1}{2\pi}\int_{-\infty}^\infty |\hat{\Pi}(k)|^2\, dk,$$ i.e. $$2 = \frac{1}{2\pi}\int_{-\infty}^\infty \frac{4\sin^2(k)}{k^2}\, dk,$$ which gives us $$ \boxed{\int_{-\infty}^\infty \frac{\sin^2(k)}{k^2}\, dk = \pi}.$$
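As a numerical sanity check (assuming SciPy is available; note that `np.sinc(x)` is the normalized $\sin(\pi x)/(\pi x)$, so $\sin(k)/k$ is `np.sinc(k/pi)`):

```python
# Numerically confirm that the integral of (sin k / k)^2 over R equals pi.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda k: np.sinc(k / np.pi) ** 2, -np.inf, np.inf)
print(val, np.pi)  # both approximately 3.14159
```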
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the value of $x$ which is correct I have one exercise which is $$(x+2013)(x+2014)(x+2015)(x+2016)+1=0$$ I tried setting $A=x+2013$ (and other similar substitutions) in many ways, but I still cannot find a value of $x$. Please help.
If you let $y=x+2014$, then the equation becomes $$(y-1)y(y+1)(y+2)+1=0 \Leftrightarrow (y^2+y-1)^2=0$$ So $$y = \frac{-1\pm\sqrt{5}}{2} \Rightarrow x = \ldots$$ \begin{align} (y-1)y(y+1)(y+2)+1 & = y^4+2 y^3-y^2-2 y+1 \\ {} & = \left( y^4+y^3-y^2 \right) + y^3-2y+1 \\ {} & = y^2 \left( y^2+y-1 \right) + \left(y^3+y^2-y\right) -y^2-y+1 \\ {} & = y^2 \left( y^2+y-1 \right) + y\left(y^2+y-1\right) -\left( y^2+y-1\right) \\ {} & = \left( y^2+y-1 \right)^2 \end{align} \begin{align} (y-1)y(y+1)(y+2)+1 & = y^4+2 y^3-y^2-2 y+1 \\ {} & = \left( y^4+2 y^3+y^2 \right) - 2y^2-2 y+1 \\ {} & = \left( y^2 + y \right)^2 - 2(y^2+y)+1 \\ {} & = \left( y^2+y-1 \right)^2 \end{align}
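The roots can be confirmed numerically (floating-point rounding leaves a tiny residual):

```python
# Check that x = (-1 +/- sqrt(5))/2 - 2014 solves the original equation.
from math import sqrt

for y in [(-1 + sqrt(5)) / 2, (-1 - sqrt(5)) / 2]:
    x = y - 2014
    print((x + 2013) * (x + 2014) * (x + 2015) * (x + 2016) + 1)  # ~0
```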
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is it true that for any set $X,Y \subseteq \Bbb A^n(k)$, $I(X) = I(Y )$ iff $X = Y$. Is it true that for any sets $X,Y \subseteq \Bbb A^n(k)$, $I(X) = I(Y)$ iff $X = Y$? Because I know $I(X) = I(Y)$ implies $\overline X=\overline Y$. But I am thinking that this is not true, because if I take $Y=\overline X$ then $I(X) = I(\overline X)$ but $X \neq \overline X$ in general. But in this paper http://www.math.uchicago.edu/~may/VIGRE/VIGRE2010/REUPapers/Bloom.pdf I am getting one proof: The following are the first basic properties of these objects, and these properties are easy to verify. (1) If $X \subset Y$, then $I(X) \supset I(Y)$, so this correspondence is inclusion-reversing. Moreover, $I(X) = I(Y) \iff X = Y$. We give a construction which will be useful later: Proposition 1.9. Let $P_1, P_2, \ldots, P_n$ be distinct points in $\Bbb A^n$. Then, there exist polynomials $F_i \in k[X_1, X_2, \ldots, X_n]$ such that $F_i(P_j) = \delta_{ij}$, i.e. $0$ if $i \neq j$ and $1$ if $i = j$. Proof. For each $i$, let $V_i = \{P_1, \ldots, P_{i-1}, P_{i+1}, \ldots, P_n\}$. Then, from the first property above, $V_i \subsetneq V_i \cup \{P_i\}$ implies $I(V_i) \supsetneq I(V_i \cup \{P_i\})$. Pick $G_i \in I(V_i) - I(V_i \cup \{P_i\})$. $G_i$ is zero on $V_i$, but is non-zero at $P_i$. Thus, our desired polynomials are $F_i =\frac 1{G_i(P_i)}G_i$. So is this whole proposition false?
The correspondence you mentioned, $X = Y$ if and only if $I(X) = I(Y)$, is only valid when $X, Y$ are assumed to be closed subsets of $\mathbb{A}^n(k)$. In the proposition you mentioned, all the sets which they were dealing with were closed, so there is no trouble there. For an example when $I(X) = I(Y)$ holds, but $X \neq Y$, take $n = 1, X = \mathbb{A}^1(k)$, and $Y$ any proper subset of $X$ which has infinitely many elements. Since $X$ and $Y$ are both infinite, $I(X) = I(Y) = (0)$, because a nonzero polynomial over a field in one variable cannot have infinitely many roots.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the commutator subgroup of a profinite group closed? Let $G$ be a profinite group, and let $[G,G]=\langle ghg^{-1}h^{-1}\mid g,h\in G\rangle$ be the subgroup of $G$ generated by commutators. Is $[G,G]$ closed? In the case we are interested in, $G$ is the absolute Galois group of a local field.
In general it is not true that for a profinite group $G$ the derived subgroup $G'$ is closed; but if $G$ is a pro-$p$ group, with $p$ a prime, and $G$ is (topologically) finitely generated, then it becomes true. For example, this is true for the group of $p$-adic integers $\mathbb{Z}_{p}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Image of a family of circles under $w = 1/z$ Given the family of circles $x^{2}+y^{2} = ax$, where $a \in \mathbb{R}$, I need to find the image under the transformation $w = 1/z$. I was given the hint to rewrite the equation first in terms of $z$, $\overline{z}$, and then plug in $z = 1/w$. However, I am having difficulty doing this. I completed the square in $x^{2}+y^{2}=ax$ to obtain $\left(x - \frac{a}{2} \right)^{2} + y^{2} = \left(\frac{a}{2} \right)^{2}$. Then, given that $\displaystyle x = Re(z) = \frac{z+\overline{z}}{2}$ and $\displaystyle y = Im(z)= \frac{z-\overline{z}}{2i}$, I made those substitutions and my equation became $\left( \frac{z+\overline{z}-a}{2}\right)^{2} - \left(\frac{\overline{z}-z}{2} \right)^{2} = \left(\frac{a}{2} \right)^{2}$. Then, substituting in $z = \frac{1}{w}$, this became $\displaystyle \frac{\left(\frac{1}{w} + \frac{1}{\overline{w}} - a \right)^{2}}{4} - \frac{\left(\frac{1}{\overline{w}} - \frac{1}{w} \right)^{2}}{4} = \frac{a^{2}}{4}$. Beyond this, my algebra gets very wonky. Could someone please tell me what my final result should be? Knowing that would allow me to work backwards and then apply these methods to other problems (of which I have many to do!). Thanks.
Write $z=x+iy$, so $x^2+y^2=z\bar{z}$, and $x=\frac{z+\bar{z}}{2}$. Thus the circles can be described by $$ z\bar{z}=a\frac{z+\bar{z}}{2} $$ Upon doing $z=1/w$, you get $$ \frac{1}{w\bar{w}}=\frac{a}{2}\frac{\bar{w}+w}{w\bar{w}} $$ that becomes $$ a\frac{\bar{w}+w}{2}=1 $$ Writing $w=X+iY$, you get $$ aX=1 $$ More generally, if you have the circle $x^2+y^2+ax+by+c=0$, with $z=x+iy$ you get $$ z\bar{z}+a\frac{z+\bar{z}}{2}+b\frac{z-\bar{z}}{2i}+c=0 $$ Upon changing $z=1/w$, the equation becomes $$ \frac{1}{w\bar{w}}+a\frac{\bar{w}+w}{2w\bar{w}} +b\frac{\bar{w}-w}{2iw\bar{w}}+c=0 $$ and removing $w\bar{w}$ from the denominator, $$ 1+a\frac{w+\bar{w}}{2}-b\frac{w-\bar{w}}{2i}+cw\bar{w}=0 $$ or, with $w=X+iY$, $$ 1+aX-bY+c(X^2+Y^2)=0 $$ So the circle becomes a straight line if $c=0$ (that is, it passes through the origin), otherwise it is transformed into a circle. Similarly, the line $ax+by+c=0$ becomes $$ a\frac{\bar{w}+w}{2w\bar{w}}+b\frac{\bar{w}-w}{2iw\bar{w}}+c=0 $$ and finally $$ aX-bY+c(X^2+Y^2)=0 $$ so it becomes a circle if $c\ne0$, or a straight line through the origin otherwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
If $f(x)$ is continuous at $ x=0$ Given that $f(x)$ is continuous at $ x=0$, and the limit : $$\lim \limits_{x \to 0} \frac{f(x)}{x^2} = L$$ then: $$\implies f(0) = 0 $$ and $$ \implies f(x) \text{ is differentiable at }x=0 $$ My question is: why $f(0) = 0 $ ?
Given $\varepsilon >0$ there exists $\delta_{\varepsilon}>0$ such that $$0<|x|<\delta_{\varepsilon}\qquad \implies \qquad\left|\frac{f(x)}{x^2}-L\right|<\varepsilon$$ So $$|f(x)-Lx^2|<\varepsilon x^2$$ Now, take $\delta=\min(\delta_{\varepsilon},1)$, then $$0<|x|<\delta\qquad \implies \qquad\left|f(x)-Lx^2\right|<\varepsilon\tag{1}$$ So \begin{align} \lim_{x\to0}f(x)&=\lim_{x\to0}Lx^2=0\qquad\text{from }(1) \end{align} Since $f$ is continuous at $0$ we get $$f(0)=\lim_{x\to0}f(x)=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1637823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to calculate $\lim \limits_{h \to 0}{\frac{a^h-1}{h}}$? As the title says, I would like to prove for $f(x) = a^x$ there is always some constant c such that $f'=cf$. Is calculating the limit the right approach to solve this problem? Also, how to show there is only one solution when $c=1$? (the $e^x$)
Assuming $a>0$, you need to use the definition of real exponentiation: $$a^h=e^{h\ln a}=\sum_{n=0}^\infty\frac{(h\ln a)^n}{n!}$$ Then $$\lim_{h\to0}\frac{a^h-1}h=\lim_{h\to 0}\sum_{n=1}^\infty h^{n-1}\frac{(\ln a)^n}{n!}=\ln a$$
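Numerically, the difference quotient indeed approaches $\ln a$:

```python
# (a^h - 1)/h for small h, compared with log(a).
import math

h = 1e-8
for a in [0.5, 2.0, 10.0]:
    print(a, (a ** h - 1) / h, math.log(a))  # the two values agree to ~7 digits
```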
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a parametric equation to solve a line integral Find $ \int_C \frac{dz}{z} $ from $1 - 5i$ to $5 + 6i$ I want to use $$\int_C f(Z) dz = \int_a^b f[z(t)]z'(t)dt$$ My guess is that I need to find $z(t)$ such that $$z(a) = 1 - 5i$$ $$z(b) = 5 + 6i$$ But how?
You can actually use any reasonable path from $1-5i$ to $5+6i$, for instance the straight segment $z(t) = (1-t)(1-5i)+t(5+6i)$ for $0 \le t \le 1$, which satisfies $z(0)=1-5i$ and $z(1)=5+6i$. Any two paths lying in a common simply connected domain that avoids the origin give the same value (the segment above stays in the half-plane $\operatorname{Re} z \ge 1$, so it is fine). This is the Principle of Path Independence. It follows from the Cauchy-Goursat Theorem.
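As a sanity check, one can approximate the integral along that segment numerically and compare it with the principal-branch antiderivative, which is valid here since the segment stays in the right half-plane:

```python
# Numerically approximate the contour integral of 1/z along the straight
# segment from 1-5i to 5+6i and compare with Log(5+6i) - Log(1-5i).
import numpy as np

a, b = 1 - 5j, 5 + 6j
t = np.linspace(0.0, 1.0, 200001)
z = (1 - t) * a + t * b
dz = np.diff(z)
integral = np.sum((1 / z[:-1] + 1 / z[1:]) / 2 * dz)  # trapezoid rule
print(integral)               # ~ 0.426 + 2.250j
print(np.log(b) - np.log(a))  # principal logs; should agree closely
```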
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Prove that $U+W = \{u+w\mid u\in U, w\in W\}$ is a finite-dimensional subspace of $V$ Let $V$ be a vector space over a field $k$ and let $U,W$ be finite-dimensional subspaces of $V$. Prove that $U+W = \{u+w\mid u\in U, w\in W\}$ is a finite-dimensional subspace of $V$. I know how to prove that $U\cap W$ is a subspace of $V$ but I'm having a hard time grasping how to prove $U+W$ This is what I was doing in regards to the first part of proving closed under addition: Let $u,w\in U+W$ and $u\in U, w\in W$ Since $U$ is a space, it is closed under addition and $u+w\in U$. Also, $W$ is a space so it is closed under addition as well and so $u+w \in W$. So, $u+w\in W$ and $u+w\in W$ shows that $u+w\in U+W$. Thus, $U+W$ is closed under addition. But this is almost exactly how I've proved $U\cap W$ is a subspace and I have a feeling I've made a mistake somewhere.
"Closed under addition" means that $u+w \in U$ would be true if $u,w \in U.$ Written more concisely, $$U + U \subseteq U.$$ It's not enough when $u \in U$ and $w$ isn't. This is why your proof doesn't work. You can make a proof around the computation $$(U+W) + (U+W) = (U+U) + (W+W) \subseteq U + W.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Integral of $-4\sin(2t - \pi/2)$: weird behavior on Wolfram Alpha I'm confused by what Wolfram Alpha is doing with my function: $$-4\sin{(2t - \pi/2)}$$ and why it gets replaced by $$4\cos{(2t)}.$$ Is it equal?
Yes. Using the angle-addition formula, $$ \sin{(2t-\pi/2)} = \sin{2t}\cos{(\pi/2)}-\cos{2t}\sin{(\pi/2)} = -\cos{2t}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integration by Parts? - Variable Manipulation $$\int x^3f''(x^2)\,\mathrm{d}x$$ Solve using Integration by Parts. \begin{align} u&=x^3\qquad\mathrm{d}v=f''(x^2) \\ \mathrm{d}u&=3x^2\qquad v=f'(x^2) \\ &=x^3f'(x)-\int f'(x^2)3x^2 \\ u&=3x^2\quad\mathrm{d}v=f'(x^2) \\ \mathrm{d}u&=6x \qquad v=f(x^2) \\ &=x^3f'(x^2)-[3x^2f(x^2)-\int f(x^2)6x] \end{align} No clue what to do from here as the correct answer is: $$\frac{1}{2}(x^2f'(x^2)-f(x^2))+C$$ Can you guys think of anything? I appreciate the help. :)
I suggest you start with a small change of variable (just to make life easier) since $$I=\int x^3f''(x^2)~dx=\frac 12\int x^2 f''(x^2)\, 2x\, dx$$ So, let $x^2=y$, $2x\,dx=dy$, which makes $$I=\frac 12\int y\,f''(y) \, dy$$ Now, one integration by parts: $$u=y\implies du=dy$$ $$dv=f''(y)\,dy\implies v=f'(y)$$ So $$2I=u\, v-\int v\,du=y \,f'(y)-\int f'(y)\,dy=y \,f'(y)-f(y)+C$$ Go back to $x$ using $y=x^2$.
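One can verify the antiderivative symbolically for a concrete choice of $f$ (here $f=\sin$, purely for illustration), using SymPy:

```python
# d/dx [ (1/2)(x^2 f'(x^2) - f(x^2)) ] should equal x^3 f''(x^2); try f = sin.
import sympy as sp

x = sp.symbols('x')
fp  = sp.cos(x**2)          # f'(x^2)  for f = sin
fpp = -sp.sin(x**2)         # f''(x^2) for f = sin
F = sp.Rational(1, 2) * (x**2 * fp - sp.sin(x**2))
print(sp.simplify(sp.diff(F, x) - x**3 * fpp))  # 0
```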
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Kelly criterion for Each-Way betting 3 outcome answered question Hi all, I've been having trouble finding the Kelly Criterion bet size for an each-way bet. The above link shows the solution to a problem with 3 distinct and mutually exclusive outcomes. In an each-way bet there are three distinct outcomes (Lose all bets, horse places but doesn't win, horse places and does win). An each way bet is a matched bet of one bet on the outright win and one on the placing of the horse (1st or 2nd or 3rd for example). These bets are of an equal amount. I have calculated the Kelly criterion decimal bet for the win only and place bets independently and these are: 0.06 and 0.23 respectively but since to place this each way bet, I must choose an equal stake for each bet, hence my problem. One option is a simple average which brings us to 0.145 The problem to use the Kelly criterion method as linked is that I can not know the odds for the second possible outcome (places but doesn't win) as the odds for place (win or not) or outright win can only be known. Does anyone have any insights into this? Thanks for reading and any forthcoming help!
The question to which you link to, answers your question too, as you can treat these as 3 different outcomes. For example, if you back a horse E/W with x-x @5/1 odds with 1/5 E/W for places 1-2-3, then, in David Speyer's answer, you need to substitute for no place $b_1=-2$, for place 2-3 $b_2=1-1=0$, and for win $b_3=1+5=6$ (where the numbers are nice because I've chosen the parameters so).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Isomorphic to Subgroup of even permutations True or False: Every finite group of odd order is isomorphic to a subgroup of $A_n$, the group of all even permutations. The question was in an entrance exam. I think there is a counterexample to this statement but I am not reaching that example. Can someone help?
One can embed $S_n$ into $S_n\times S_n$ diagonally, i.e., $\sigma\mapsto (\sigma,\sigma)$; since $(\sigma,\sigma)$ acts on $2n$ points with sign $\operatorname{sgn}(\sigma)^2=1$, we see that $S_n$ embeds into $A_{2n}$, and so every finite group, whether its order is odd or even, can be embedded into a suitable alternating group. (Compare this with the similar statement that any matrix group can be embedded into $SL(n)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $C_0(\mathbb{R})$ a Banach space? Let $C(\mathbb{R})$ be a Banach space of continuous real-valued functions defined on $\mathbb{R}$, with supremum norm, and let $C_0(\mathbb R)$ be the subspace of functions vanishing at infinity. Is $C_0(\mathbb{R})$ a Banach space? I try to see it using: $f\in C_0(\mathbb{R})$ iff for any $\epsilon>0$ there exists $K>0$ such that $|f|<\epsilon$ whenever $|x|>K$. But I think it is not Banach. Please I need a counter-example or a proof.
$C_0(\mathbb R)$ is a Banach space because it may be identified with a closed subspace of some $C(K)$, the real vector space of continuous functions on the compact Hausdorff space $K$, equipped with the $\|\cdot\|_\infty$ norm. Since uniform convergence on compact sets is so well-behaved, $C(K)$ is a Banach space (belonging to the "classical" ones). The continuous embedding $$\mathbb R\hookrightarrow S^1,\: x\mapsto\frac{x+i}{x-i}\in\mathbb C\;\text{ or } \left(\frac{x^2-1}{x^2+1},\frac{2x}{x^2+1}\right)\in\mathbb R^2$$ of $\,\mathbb R\,$ into its 1-point compactification $S^1$ lets one identify $C_0(\mathbb R)$ with the subspace $\big\{f\in C(S^1)\mid f(1)=0\big\}$. It is closed being the kernel of the continuous linear map $\operatorname{eval}_{x=1}:C(S^1)\to\mathbb R, f\mapsto f(1),\,$ and has codimension $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638912", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fixed points for 1-D ODE I'm doing some independent work, and have managed to come across the following interesting 1-D autonomous ODE: $\dot{x} = x(1-x) \log^2\left[\frac{x}{1-x}\right]$. For the fixed points, i.e., where $\dot{x} = 0$, I know that the only valid one should be for $x = 1/2$ because of the Log function. However, technically, $x = 1$ and $x = 0$ also satisfy $\dot{x} = 0$. Even though the log function blows up at these points, can one not make some type of argument where since a linear function goes to zero "faster" than the log function, these other points should be fixed points too? Just wondering! Thanks.
Writing \begin{equation} f(x) = x(1-x) \log^2 \frac{x}{1-x}, \end{equation} the limits $\lim_{x\to 0} f(x)$ and $\lim_{x \to 1} f(x)$ both exist and yield 0. As you noticed, the singularity in the log is 'overpowered' by the fact that $x(1-x)$, which is polynomial, has roots at $x = 0$ and $x = 1$. So, $x = 0$ and $x = 1$ are both valid equilibria of the ODE $\dot{x} = f(x)$. Fun fact: the solution to the ODE is given by \begin{equation} x(t) = \frac{e^{\frac{1}{t_0-t}}}{1 + e^{\frac{1}{t_0-t}}} \end{equation}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1638990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Calculate area of figure What is the area of the shaded region of the given $8 \times 5$ rectangle? Attempt: We know that the area of a kite is $\dfrac{pq}{2}$ where $p$ and $q$ are the diagonals. Thus, since the two lines in the figure intersect at the center of the rectangle, the length of the long diagonal of one of the two shaded kites is half the length of the diagonal of the rectangle, which is $\dfrac{\sqrt{89}}{2}$. Thus the answer should be $\dfrac{\sqrt{89}\sqrt{2}}{2}$ but this is not the right answer (answer is $6.5$). What did I do wrong?
We draw a diagonal from the upper-left to the bottom-right corner. This divides the two quadrilaterals into four triangles, two with base $1$ and height $4$ and two with base $1$ and height $\frac{5}{2}.$ Using basic area calculations, the total area should be $\boxed{6.5}.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Computation of an iterated integral I want to prove $$\int\limits_{-\infty}^\infty\int\limits_{-\infty}^\infty\frac{\sin(x^2+y^2)}{x^2+y^2}dxdy=\frac{\pi^2}{2}.$$ Since the function $(x,y)\mapsto\sin(x^2+y^2)/(x^2+y^2)$ is not integrable, I can't use the Theorem of Change of Variable. So, I'm trying to use residue formulae for some suitable holomorphic function to compute the inner integral, but I can't continue. Can someone suggest me a hint to solve this problem? Addendum: I may be wrong, but I suspect Theorem of Change of Variable (TCV) is not the answer. The reason is the following: the number $\pi^2/2$ is gotten if we apply polar coordinates, but TCV guarantees that if we apply any other change of variable we can get the same number, $\pi^2/2$. If this function were integrable, this invariance property would be guaranteed, but it is not the case. Thus we may have strange solutions to this integral.
Let $D$ be any Jordan domain in $\mathbb{R}^2$, containing the origin in its interior, whose boundary $\partial D$ has the form $r = f(\theta)$ in polar coordinates where $f \in C[0,2\pi]$. Consider the following integral as a functional of $D$: $$\mathcal{I}_D \stackrel{def}{=} \int_D \phi(x,y) dx dy \quad\text{ where }\quad\phi(x,y) = \frac{\sin(x^2+y^2)}{x^2+y^2} $$ Since the origin is a removable singularity for $\phi(x,y)$, as long as $D$ is of finite extent, there isn't any issue about integrability or change of variable. We have $$\mathcal{I}_D = \int_0^{2\pi} \int_0^{f(\theta)}\frac{\sin(r^2)}{r^2} rdr d\theta = \frac12\int_0^{2\pi} \left[\int_0^{f(\theta)^2}\frac{\sin t}{t} dt \right] d\theta $$ For any non-increasing, non-negative function $g$ on $(0,\infty)$, using integration by parts (in its Riemann-Stieltjes version), one can show that $$\left|\int_a^b g(x) \sin(x) dx \right| \le 2 g(a)\quad\text{ for }\quad 0 < a < b < \infty$$ For any $R > 0$ with $B(0,R) \subset D$, setting $g(x)$ to $1/x$ in the above inequality leads to the following estimate for $\mathcal{I}_D$: $$\left| \mathcal{I}_D - \mathcal{I}_{B(0,R)} \right| = \frac12 \left| \int_0^{2\pi} \left[\int_{R^2}^{f(\theta)^2}\frac{\sin t}{t} dt \right] d\theta \right| \le \frac12 \int_0^{2\pi} \left|\int_{R^2}^{f(\theta)^2}\frac{\sin t}{t} dt\right| d\theta \le \frac{2\pi}{R^2} $$ For any fixed $Y$, the integrand $\phi(x,y)$ is Lebesgue integrable over $(-\infty,\infty)\times [-Y,Y]$. Double integrals of the form below are well defined. With the help of DCT, one can evaluate such an integral as a limit $$\int_{-Y}^Y \int_{-\infty}^{\infty}\phi(x,y) dxdy = \lim_{X\to\infty}\int_{-Y}^Y \int_{-X}^X \phi(x,y) dxdy = \lim_{X\to\infty}\mathcal{I}_{[-X,X]\times[-Y,Y]}$$ We will combine this with the above estimate. By setting $R = Y$ and letting $[-X,X] \times [-Y,Y]$ take the role of $D$, one gets $$\left|\int_{-Y}^{Y} \int_{-\infty}^{\infty}\phi(x,y) dxdy - \mathcal{I}_{B(0,Y)}\right| \le \limsup_{X\to\infty}\left|\int_{-Y}^{Y} \int_{-X}^{X}\phi(x,y) dxdy - \mathcal{I}_{B(0,Y)}\right| \le \frac{2\pi}{Y^2}$$ Since the following two limits exist, $$\lim_{Y\to\infty} \mathcal{I}_{B(0,Y)} = \lim_{Y\to\infty} \pi\int_0^{Y^2}\frac{\sin t}{t}dt = \pi\int_0^\infty \frac{\sin t}{t} dt = \frac{\pi^2}{2} \quad\text{ and }\quad \lim_{Y\to\infty}\frac{2\pi}{Y^2} = 0$$ by squeezing, the double integral at hand exists as an improper integral! $$\int_{-\infty}^\infty \int_{-\infty}^\infty \phi(x,y) dxdy \stackrel{def}{=} \lim_{Y\to\infty} \int_{-Y}^Y \int_{-\infty}^\infty \phi(x,y) dxdy = \lim_{Y\to\infty} \mathcal{I}_{B(0,Y)} = \frac{\pi^2}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Marginalising out $B$ in $P(A \mid B,C)$ Let's say that I have $P(A \mid B,C)$ - is it accurate to say that $P(A \mid C)$ can be found like this: $P(A \mid C) = \sum_B P(A \mid B,C)$ I know the values of all $P(A \mid B,C)$ as well as $P(B)$ and $P(C)$
You probably know (marginalization, or total probability) $$P(A) = \sum_B P(A,B)$$ Because this is true for any $A,B$, it's also true for the variables conditioned on $C$ (which amounts to restricting the universe). $$P(A \mid C) = \sum_B P(A,B\mid C)$$ Further $P(A,B\mid C)=P(A \mid B,C)P(B\mid C)$. (Why? Because $P(A,B)=P(A \mid B) P(B)$; again, that must be also true if conditioning everything over $C$). Hence $$P(A \mid C) =\sum_B P(A \mid B,C)P(B\mid C)$$
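A small NumPy sanity check with an arbitrary joint distribution (the shapes and names are purely illustrative):

```python
# Verify P(A|C) = sum_B P(A|B,C) P(B|C) on a random joint table P(A,B,C).
import numpy as np

rng = np.random.default_rng(1)
P = rng.random((2, 3, 4)); P /= P.sum()            # joint over (A, B, C)
P_A_given_BC = P / P.sum(axis=0)                   # P(A|B,C), shape (2, 3, 4)
P_B_given_C = P.sum(axis=0) / P.sum(axis=(0, 1))   # P(B|C),   shape (3, 4)
lhs = (P_A_given_BC * P_B_given_C).sum(axis=1)     # sum over B -> shape (2, 4)
rhs = P.sum(axis=1) / P.sum(axis=(0, 1))           # P(A|C) computed directly
print(np.allclose(lhs, rhs))                       # True
```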
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$\sup$ and $\inf$ of $E=\{p/q\in\mathbb{Q}:p^2<5q^2 \text{ and } p, q > 0\}$ I'd appreciate if you could please check to see if my proof is valid. Find $\sup$ and $\inf$ of $E=\{p/q\in\mathbb{Q}:p^2<5q^2 \text{ and } p, q > 0\}$. Solution: $q^2 > p^2/5 \iff q > p/\sqrt{5} \iff 0<p/q<\sqrt{5} $. $\implies E = \mathbb{Q}\cap [0, \sqrt{5})$. (i) Supremum. Since E is bounded, there exists an $M\ge y$ for all $y\in E$. Since $\sqrt{5}>y$ for all $y\in E$, $\sqrt{5}$ is an upper bound. But $\sqrt{5}\not\in E$. Suppose $s:=\sup E\ne \sqrt{5}$. Then for some $\varepsilon>0$, $\exists y_0\in E$ such that $s-\varepsilon <y_0 \le \sup E < \sqrt{5}$. Thus $\sqrt{5}-s>0$. Set $b:=\frac{1}{\sqrt{5}-s}\in \mathbb{R}$. By Archimedean Principle, $1\cdot n > \frac{1}{\sqrt{5}-s}\iff s < \sqrt{5} - 1/n < \sqrt{5}$. But $s\ge \sqrt{5}-1/n$ since $\exists y_1 \in E$ such that $\sqrt{5}-1/n < y_1 < \sqrt{5}$ (by density of rationals), and $y_1\le s<\sqrt{5}$. We thus arrive at a contradiction, hence $\sup E=\sqrt{5}.$ (ii) Infimum. There exists a lower boundary $L$, so that $L\le y$ for all $y\in E$. In particular, $0\ge L$, and $0\in E$, hence $0=\inf E$.
There is a mistake in part (ii): as pointed out in the comments, $0$ is not in $E$. What you wrote in part (i) is true, but more complicated than it needs to be. Here are a few suggestions to simplify your proof: * *Delete the first sentence. The variable $M$ is not used anywhere in the rest of the proof. *Delete the sentence that starts with "Then for some $\varepsilon > 0 \dots$". Your argument makes no use of $\varepsilon$ and $y_0$. *If you know that the rationals are dense in the reals, you can deduce directly from the inequality $\sup E < \sqrt{5}$ that there exists $y \in \mathbb{Q}$ such that $\sup E < y < \sqrt{5}$. There is no need to use the Archimedean principle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit in integral: $\lim\limits_{\epsilon \rightarrow 0}\frac1{\epsilon}\int_{t}^{t+\epsilon}f(s)ds=f(t)$? Let $f$ be a smooth function : can someone tell me why we have : $\lim\limits_{\epsilon \rightarrow 0}\frac{1}{\epsilon}\int_{t}^{t+\epsilon}f(s)ds=f(t)$ thank you very much !
Let $F(x) = \int_0^x f$. Then $F$ is smooth since $f$ is, and $F^\prime = f$ by the Fundamental Theorem of Calculus; but we also have $$F^\prime(t) = \lim_{\varepsilon\to0}\frac{F(t+\varepsilon) - F(t)}{\varepsilon} = \lim_{\varepsilon\to0}\frac{\int_t^{t+\varepsilon} f}{\varepsilon}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to calculate the derivative of logarithm of a matrix? Given a square matrix $M$, we know the exponential of $M$ is $$\exp(M)=\sum_{n=0}^\infty{\frac{M^n}{n!}}$$ and the logarithm is $$\log(M)=-\sum_{k=1}^\infty\frac{(I-M)^k}{k}$$ The derivative of $\exp(M)$ should be itself. It is easy to prove if $\frac{dM}{M}=I$. But how to calculate the derivative of $\log(M)$? By the same way of calculation of the derivative of $\exp(M)$, the derivative of $\log(M)$ cannot converge. So what is the derivative of $\log(M)$?
The derivative of $\log(x)$ is $1/x$. The derivative of the power series $$ -\sum_{n=1}^\infty (1-x)^n/n$$ is $$\sum_{n=1}^\infty (1-x)^{n-1}$$ which converges to $x^{-1}$ if $|1-x| < 1$. The matrix power series $$\sum_{n=1}^\infty (I-M)^{n-1}$$ converges to $M^{-1}$ if the spectral radius of $I-M$ is less than $1$. However, caution is needed with the notion of "derivative" of a matrix valued function. The basic problem is that matrices don't always commute. You can't say that the derivative of $f(M)$ will be, e.g. $$ \lim_{A \to 0} (f(M+A) - f(M)) A^{-1}$$ For example, if $f(M) = M^2$, $$(f(M+A) - f(M)) A^{-1} = M + A M A^{-1} + A$$ and as $A \to 0$, $A M A^{-1}$ need not converge to anything in particular; it can do all sorts of strange things.
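A numerical illustration of the convergence condition (the spectral radius of $I-M$ is forced below $1$ by construction):

```python
# Partial sums of sum_{n>=1} (I - M)^(n-1) converge to M^(-1)
# when the spectral radius of I - M is less than 1.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
A *= 0.5 / np.max(np.abs(np.linalg.eigvals(A)))  # spectral radius of A is 0.5
M = np.eye(3) - A                                # so I - M = A
S, term = np.zeros((3, 3)), np.eye(3)
for _ in range(100):
    S += term
    term = term @ A
print(np.max(np.abs(S - np.linalg.inv(M))))      # essentially 0
```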
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that the sequence of combinations contains an odd number of odd numbers Let $n$ be an odd integer more than one. Prove that the sequence $$\binom{n}{1}, \binom{n}{2}, \ldots,\binom{n}{\frac{n-1}{2}}$$ contains an odd number of odd numbers. I tried writing out the combination form as $$\frac{(2k+1)!}{(m!)((2k+1)-m)!}.$$ How do I use this to show that the sequence contains an odd number of odd numbers?
Suppose $n=2k+1$. Note that $\binom{2k+1}{1}=\binom{2k+1}{2k}$, $\binom{2k+1}{2}=\binom{2k+1}{2k-1}$,...,$\binom{2k+1}{k}=\binom{2k+1}{k+1}$. Thus $$\binom{2k+1}{1}+\binom{2k+1}{2}+\cdots+\binom{2k+1}{k}+\binom{2k+1}{k+1}+\cdots+\binom{2k+1}{2k-1}+\binom{2k+1}{2k}=2^{2k+1}-2$$ From the above considerations, $$2\binom{2k+1}{1}+2\binom{2k+1}{2}+\cdots+2\binom{2k+1}{k}=2^{2k+1}-2.$$ Therefore $$\binom{2k+1}{1}+\binom{2k+1}{2}+\cdots+\binom{2k+1}{k}=2^{2k}-1 \tag{$*$}$$ If the number of odd terms in $\binom{2k+1}{1},\binom{2k+1}{2}, \ldots, \binom{2k+1}{k}$ were even, then the sum in $(*)$ would be even. This contradiction shows that the number of odd terms is odd.
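The parity claim is easy to test directly:

```python
# For odd n, the number of odd entries among C(n,1), ..., C(n,(n-1)/2) is odd.
from math import comb

for n in range(3, 100, 2):
    odd_count = sum(comb(n, m) % 2 for m in range(1, (n - 1) // 2 + 1))
    assert odd_count % 2 == 1, n
```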
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
About Factorization I have some issues understanding factorization. If I have the expression $x^{2}-2x-7$ then (I was told like this) I can put this expression equal to zero and then find the solutions with the quadratic formula, so it gives me $x_{0,1}= 1 \pm 2\sqrt{2}$, and then $$x^{2}-2x-7 = (x-1-2\sqrt{2})(x-1+2\sqrt{2}).$$ That is correct, I have checked it. Now for the expression $3x^{2}-x-2$, if I do the same I have $x_{0} = 1$ and $x_1=\frac{-2}{3}$, so I would have $$3x^{2}-x-2 = (x-1)(x+\frac{2}{3})$$ but this is not correct since $(x-1)(x+\frac{2}{3}) = \frac{1}{3}(3x^{2}-x-2)$; the correct factorization is $3x^{2}-x-2 = (3x+2)(x-1)$. So I guess finding the roots of a quadratic expression is not sufficient for factorizing.
$3(x - 1)(x + {2 \over 3}) = (x -1)(3x + 2) = (3x - 3)(x + {2 \over 3}) =...$ etc. are all valid factorings. The leading coefficient is just a constant. And if $(x - 1)(x + {2 \over 3}) = 0$ then $3(x - 1)(x + {2 \over 3}) = 0 = (x - 1)(x + {2 \over 3})$. If you are concerned about going from roots to factoring, think of it this way: If the roots of $P(x) = ax^n + \dots + c$ are $r_1, \dots, r_n$, then the polynomial factors as $a(x - r_1)(x - r_2)\cdots(x - r_n) = P(x)$, or $(x - r_1)(x - r_2)\cdots(x - r_n) = P(x)/a$. When setting to $0$ the $a$ doesn't matter, as if $y = 0$ then $a\cdot y = 0$ also, no matter what the $a$ is. Solving for $0$, the $a$ gets lost. Going from the roots back, we only take $(x - r_i)$, so the resulting leading coefficient will always be $1$. The resulting polynomial will be $1/a$ times the original.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
In the context of order statistics, is each of $Y_{(1)},Y_{(2)},\ldots,Y_{(n)}$ a single observation or a distribution, given that the $Y_i$ are I.I.D.? In statistics, one aspect of the I.I.D. concept that bothers me is when I think about it in the context of order statistics. As most of you already know, $Y_1,Y_2,Y_3,\ldots,Y_n$ are I.I.D. when they are independent and all follow the same distribution. Now, here are two things I'm confused about. (1) In the context of order statistics, is each of $Y_{(1)},Y_{(2)},\ldots,Y_{(n)}$ a single observation or a distribution, and are they I.I.D.? (2) If they are distributions, how in the world is it possible to order distributions from least to greatest??
If you have $Y_1,Y_2, \dotsc, Y_n$ and each is independent and follows some distribution $G$, then you could consider each $Y_i$ as a realization, or sample, taken from $G$. If you then order them, each $Y_{(i)}$ follows a new distribution. For example, say we have $Y_1, \dotsc, Y_n$, where each one is independent and follows a $\text{unif}(0,1)$. Then, it should be clear that each $Y_i\sim \text{unif}(0,1)$ (nothing changed). Once we order them, however, recall that, for example, the first order statistic $Y_{(1)}\sim \text{Beta}(1,n)$ (Beta distribution). It is still an observation, but $Y_{(1)}$ follows a different distribution from $Y_1$. And in general, $Y_i\sim\text{unif}(0,1)$, but $$Y_{(i)}\sim \text{Beta}(i, n+1 -i).$$ You don't order the distributions, you order the $Y_i$'s. For example, the distribution of $Y_{(1)}$ goes as follows: $$P(Y_{(1)}\leq z) = 1-P(Y_{(1)}>z) =1- (1-z)^n.$$ Then the pdf is $n(1-z)^{n-1}$. Notice that the interpretation is that there are $\binom{n}{1}$ options for the smallest; then the rest, in $\binom{n-1}{n-1} = 1$ way, have to fall in the interval of length $(1-z)$ above it, hence $n(1-z)^{n-1}$. Let $Y_1,Y_2,Y_3$ be iid exponential random variables with mean $1/\lambda$. Then, to find the distribution of the minimum $M:=Y_{(1)}$, we must consider $$P(M\in dm)$$ There are $3$ choices for the minimum: $Y_1$, $Y_2$, or $Y_3$. Once we have chosen the smallest, there is only one way to choose the other two larger ones. So it must be the case that $$P(M\in dm) = \binom{3}{1}f_Y(m)\binom{2}{2}(1-F_Y(m))^2\,dm =3\lambda e^{-\lambda m}(e^{-\lambda m})^2\,dm = 3\lambda e^{-3\lambda m}\,dm$$ Notice that the minimum $M$ (or $Y_{(1)}$) follows an exponential distribution with mean $\frac{1}{3\lambda}$.
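A quick Monte Carlo illustration of the $n=3$ uniform case (the minimum should behave like a $\text{Beta}(1,3)$ variable, whose mean is $1/4$):

```python
# Minimum of 3 iid U(0,1) samples: its sample mean should be near 1/4.
import numpy as np

rng = np.random.default_rng(3)
Y = rng.random((200000, 3))
print(Y.min(axis=1).mean())   # ~0.25, the mean of Beta(1, 3)
```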
{ "language": "en", "url": "https://math.stackexchange.com/questions/1639891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Countability of generated ring $R(E)$ I am studying Paul R. Halmos's Measure Theory. In Section 5 of Chapter 1, Theorem 5 states that: If $E$ is a countable class of sets, then $R(E)$ is countable. The proof uses the class of all finite unions of differences of sets of the class $E$. Can anyone explain this in a simple manner or suggest any other method of proof?
Let $X = \bigcup E$ be the set of which all members of $E$ are subsets. For good measure, assume that $X\in E$ (if it isn't, use $E'=E\cup \{X\}$ in what follows). Note that $R(E)$ is closed under intersection: if $A,B\in R(E)$, then $(X\setminus A)\cup (X\setminus B) = X\setminus(A\cap B) \in R(E)$, so $A\cap B = X\setminus (X\setminus (A\cap B)) \in R(E)$. It's easy to show that $R(E)$ is a ring of sets: $\emptyset, X\in R(E)$, and $R(E)$ is closed under union and complement with respect to $X$. For each $n\ge 1$, consider the set $E^{2n}$ of sequences of length $2n$ of members of $E$. There is a map $f_n\colon E^{2n}\to R(E)$ defined by $$ f_n(A_1,B_1,\dotsc,A_n,B_n) = (A_1\setminus B_1)\cup (A_2\setminus B_2)\cup \dotsb \cup (A_n\setminus B_n). $$ The union of all of these maps is a surjection $$ f = \bigcup_n f_n\colon \bigcup_n E^{2n}\to R(E). $$ Every $E^{2n}$ is countable, so the domain of $f$, as the union of countably many of these sets, is itself countable. So $f$ is a surjection of a countable set onto $R(E)$, thus $R(E)$ is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve $\int \frac{1}{1-y^2}$ with respect to $y$? I was solving an A Level paper when I came across this question. I tried substitution, but I'm not getting the answer with that. Would appreciate it if someone would help me.
Using a simple substitution of $y = \tanh{(u)}$ gives $\displaystyle\int \dfrac{1}{1-y^{2}} \mathrm{d}y = \int \mathrm{d}u = u + C = \tanh^{-1}y + C = \frac{1}{2}\ln{\dfrac{1 + y}{1 - y}} + C$. (The substitution works because $\mathrm{d}y=\operatorname{sech}^2 u\,\mathrm{d}u$ and $1-\tanh^2 u=\operatorname{sech}^2 u$.) Notably, it can also be solved using partial fractions.
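For completeness, the partial-fraction route goes as follows (a worked sketch; for $|y|>1$ the absolute values inside the logarithm matter):
$$\frac{1}{1-y^2}=\frac{1}{2}\left(\frac{1}{1-y}+\frac{1}{1+y}\right)
\quad\Longrightarrow\quad
\int\frac{\mathrm{d}y}{1-y^2}=\frac{1}{2}\ln\left|\frac{1+y}{1-y}\right|+C.$$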
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $|A| < |B|$ if $A-B$ is positive definite? I want to prove this. Say if $A-B$ is a positive definite matrix then can we find a relation between $\det(A)$ and $\det(B)$? e.g. is $|A| < |B|$.
Consider $A=2$, $B=1$. Then $A-B=1$ is positive definite but $\det(A)=2 \not<\det(B)=1$. If you have mistyped the direction of the $<$ sign for the determinants, then with $$A=\begin{pmatrix}1& 0 \\ 0& 1\end{pmatrix}\qquad B=\begin{pmatrix}-2& 0 \\ 0& -2\end{pmatrix}$$ $$A-B=\begin{pmatrix}3& 0 \\ 0& 3\end{pmatrix}$$ is positive definite and $\det(A)=1$, $\det(B)=4$. It appears to me that if $A, B$ are positive definite (hence diagonalisable) finite-dimensional matrices, then $A-B$ positive definite implies $\det(A)>\det(B)$. To see this, note that if $A$ is positive definite it is invertible with positive definite inverse, and $$I-A^{-1}B=A^{-1}(A-B)$$ is similar to $A^{-1/2}(A-B)A^{-1/2}$, which is positive definite; hence all eigenvalues of $I-A^{-1}B$ are positive, i.e. all eigenvalues of $A^{-1}B$ are smaller than $1$. They are also positive, since $A^{-1}B$ is similar to the positive definite matrix $A^{-1/2}BA^{-1/2}$. So $\det(A^{-1}B)=\frac{\det(B)}{\det(A)}<1$, which implies $\det(A)>\det(B)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Construction of a continuous function which maps some point in the interior of an open set to the boundary of the range I was studying the inverse function theorem when I came across the following problems (let the closed set $V$, i.e. the range, have non-empty interior): * *Does there exist a continuous onto function from an open set $U$ in $\mathbb{R}^n $ to a closed set $V$ in $\mathbb{R}^m$ such that some points in the interior of $U$ get mapped to the boundary of $V$? *Does there exist a continuous $1-1$ map from an open set $U$ in $\mathbb{R}^n $ to a closed set $V$ in $\mathbb{R}^m$ such that some points in the interior of $U$ get mapped to the boundary of $V$? If there are examples in $C(\mathbb{R})$, i.e. continuous functions from $\mathbb{R}$ to $\mathbb{R}$, that would be great too! Though I do need some example in the general case too. Simpler examples will be really appreciated. Thanks in advance. Edit: Case (1) can be dealt with using any "cut-off" function, e.g. let $U,V$ be two balls around $0$ in $\mathbb{R}^n$ with radii $r(>1)$ and $1$, open and closed respectively. Let $f: U \rightarrow V $ be such that $x \in V \implies f(x)=x$ and $x \in U-V \implies f(x)= x/||x|| $.
The sine/cosine functions map the open set $\mathbb{R}$ onto the closed interval $[-1,1]$, and they send interior points (e.g. $\pm\pi/2$ for the sine) to the boundary points $\pm 1$; this settles (1). (An earlier version of this answer also proposed the embedding $\mathbb{R} \to \mathbb{R}^2$, $x \mapsto (x, 0)$, for both assertions, but THIS IS WRONG for the question as posed, since the image $\mathbb{R}\times\{0\}$ has empty interior in $\mathbb{R}^2$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Inverse of the composition of two functions If I have a composition of two functions: $$y = f(g(x),h(x))$$ where both $g(x)$ and $h(x)$ are readily invertible, can I find the inverse of the composition? i.e.: Can I find $x = f^{-1}(y)$? I know this is generally possible for the composition of one function. Perhaps there is a special form of $f$ that permits this? Thanks
EDIT: Previous answer used the accepted definition of 'composition'. I am trying to interpret what the OP means by composition... I think he means that $f$ is a formula e.g. something like $f(\sin x,e^x)=\sin x\sqrt{e^x}$ means that $f(x,y)=x\sqrt{y}$. Let $f:\mathbb{R}^2\rightarrow \mathbb{R}$, $g:\mathbb{R}\rightarrow \mathbb{R}$ and $h:\mathbb{R}\rightarrow \mathbb{R}$ be invertible maps. Define a function $F:\mathbb{R}\rightarrow \mathbb{R}$ by $$y=F(x)=f(g(x),h(x)).$$ Then $F$ is invertible with $$x=F^{-1}(y)=g^{-1}\left(\pi_1\left(f^{-1}(y)\right)\right)=h^{-1}\left(\pi_2\left(f^{-1}(y)\right)\right),$$ where $\pi_i:\mathbb{R}^2\rightarrow\mathbb{R}$ is the projection onto the $i$-th factor of $\mathbb{R}^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is $\frac{1}{\sin x}-\frac{1}{x}$ uniformly continuous on $(0,1)$? So I am tasked with finding whether $\frac{1}{\sin(x)}-\frac{1}{x}$ is uniformly continuous on the open interval $I=(0,1)$. To look at the "simple" ways to prove it first: I obviously can't extend the function to $[0,1]$ since the limit at $x=0$ is not defined. I tried to check if its derivative, $\frac{1}{x^2}-\frac{\cos(x)}{\sin^2(x)}$, is bounded on $I$, in which case the function is uniformly continuous by definition. By using $\sin(x)\approx x$ when $x\rightarrow0$ I see that $\frac{1}{x^2}-\frac{\cos(x)}{\sin^2(x)}\approx \frac{1-\cos(x)}{\sin^2(x)}=\frac{1-\cos(x)}{1-\cos^2(x)}=\frac{1-\cos(x)}{(1-\cos(x))(1+\cos(x))}=\frac{1}{1+\cos(x)}$ when $x\rightarrow0$. So intuitively it looks like it is indeed bounded, but for what $M$ is $|f'(x)|<M$? I haven't shown that it is strictly increasing or decreasing on $I$, so I can't assume $M=\max[f'(0),f'(1)]$. Also this method simply feels icky here. I assume I might have to use epsilon-delta to prove/disprove uniform continuity on the interval, but I have simply no idea what values to insert while proceeding with that (the inverse sine bit has me absolutely stumped, I can't seem to relate it to previous assignments using other functions with $\sin(x)$ to prove/disprove uniform continuity). As a side note, I haven't worked with trigonometry in some time, so I have a hard time identifying various trigonometric identities off the bat.
As $x>\sin x$ on $(0,1)$, $$f'(x)=\frac{1}{x^2}-\frac{\cos x}{\sin^2 x}<\frac{1}{\sin^2 x}-\frac{\cos x}{\sin^2 x}=\frac{1-\cos x}{\sin^2 x}=\frac{1}{1+\cos x}<1\;\forall\; x\in(0,1).$$ Just replace $\approx$ with $<$ in your computation; you don't need $\lim_{x\to0}{\sin x\over x}=1$. (For the lower bound, the classical inequality $\tan x\sin x>x^2$ on $(0,\pi/2)$ gives $\sin^2 x>x^2\cos x$, i.e. $f'(x)>0$. So $0<f'(x)<1$, the derivative is bounded, and $f$ is uniformly continuous on $(0,1)$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If we showed that $\mu(F_n)<\infty$ for all $n\in \mathbb{N}$, can we get $\mu\left(\cup_{n \in \mathbb{N}}F_n\right)<\infty$? If we showed that $\mu(F_n)<\infty$ for all $n\in \mathbb{N}$, can we get $\mu\left(\cup_{n \in \mathbb{N}}F_n\right)<\infty$? The problem is the following: in the solution of Folland, Chapter 1, Exercise 14, suppose $F^*=\{F:F\subset E,\ 0<\mu(F)<\infty\}$ and $\alpha:=\sup_{F\in F^*}\mu(F)<\infty$. Then for every $n$ there exists $E_n\in F^*$ with $\alpha-1/n\leq \mu(E_n) \leq \alpha <\infty$. Let $F_n=\cup^n_1 E_j$. Then $\mu(F_n)\geq\alpha-1/n$ for every $n\in \mathbb{N}$. Also $F_n \subset E$ and $\mu(F_n)<\infty$, so $F_n\in F^*$ for every $n\in \mathbb{N}$... (the whole solution is in Question from Folland Chapter 1 Exercise 14) Here is the problem: we can get that $\mu(F_n)<\infty$ for every $n\in \mathbb{N}$, since $\mu(F_n)=\mu(\cup^n_1 E_j) \leq \sum^n_1\mu(E_j)\leq n\alpha<\infty$. But what happens when $n=\infty$? I can't prove that $\mu(F_n)<\infty$ survives when $n=\infty$. So how can I prove this, so that the solution will be more precise?
Let $$\mathcal{F} = \{F\subset E: F \ \ \text{is measurable and} \ 0 < \mu(F) < \infty\}$$ Since $\mu$ is semi-finite and $\mu(E)=\infty$, $\mathcal{F}$ is not empty. Let $$s = \sup\{\mu(F):F\in\mathcal{F}\}$$ It suffices to show that $s = \infty$. Choose a sequence $\{F_n\}_{n\in\mathbb{N}}\subset\mathcal{F}$ such that $\lim_{n\to \infty}\mu(F_n) = s$, and suppose, toward a contradiction, that $s<\infty$. Set $G_n=\bigcup_{j=1}^n F_j$; then $G_n\subset E$ and $0<\mu(G_n)\le\sum_{j=1}^n\mu(F_j)<\infty$, so $G_n\in\mathcal{F}$ and $\mu(F_n)\le\mu(G_n)\le s$. Hence, with $$F = \bigcup_{1}^{\infty}F_n=\bigcup_{1}^{\infty}G_n\subset E,$$ continuity from below gives $\mu(F) = \lim_n\mu(G_n) = s$. (This is exactly the point your question raises: the finiteness of each $\mu(G_n)$ does not pass to the union automatically, but here the increasing limit is pinned between $\mu(F_n)$ and $s$.) Since $s < \infty$ and $\mu(E)=\infty$, we get $\mu(E\setminus F) = \infty$, and hence by semi-finiteness there exists $F'\subset E\setminus F$ such that $0 < \mu(F') < \infty$. Then $F\cup F'\subset E$ and $s < \mu(F\cup F') = s+\mu(F') < \infty$, i.e., $F\cup F'\in\mathcal{F}$, which contradicts the definition of $s$. Thus $s = \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Counting permutations with given condition I need to find the number of permutations $p$ of the set $\lbrace 1,2,3, \ldots, n \rbrace$ such that for all $i$, $p_{i+1} \neq p_i + 1$. I think that the inclusion-exclusion principle would be useful. Let $A_k$ be the set of all permutations $a$ with $a_{k+1} \neq a_k + 1$. So our answer would be $| A_1 \cap A_2 \cap \ldots \cap A_{n-1} |$. Could you help me with completing the proof?
We can complete Penitent's induction argument as follows: Denote the number of valid permutations of $k$ elements by $a_k$. There are two ways to generate a valid permutation of $k+1$ elements from a permutation of $k$ elements: either by inserting $k+1$ into a valid permutation of $k$ elements at any of the $k+1$ gaps except the one behind $k$, which yields $ka_k$; or by inserting $k+1$ into the single consecutive pair that made the permutation invalid, thus making it valid. Each permutation of $k$ elements with a single consecutive pair corresponds to exactly one valid permutation of $k-1$ elements obtained by merging the pair and suitably renumbering the remaining elements. This yields a contribution $(k-1)a_{k-1}$, since each of the $k-1$ elements in each of the $a_{k-1}$ valid permutations of $k-1$ elements can be expanded into a pair. Thus we have the recurrence relation $$ a_{k+1}=ka_k+(k-1)a_{k-1} $$ with the initial values $a_1=a_2=1$. We can rearrange the recurrence relation $$!(k+2)=(k+1)(!(k+1)+!k)$$ for the number $!k$ of derangements of $k$ elements to $$ \frac{!(k+2)}{k+1}=k\frac{!(k+1)}k+(k-1)\frac{!k}{k-1}\;. $$ This is our present recurrence relation with $a_k=\frac{!(k+1)}k$, and the initial values coincide, so this is a "closed form" for $a_k$, though since the number of derangements is usually counted using inclusion-exclusion, it's not really more "closed" than what you'd get directly using inclusion-exclusion in the way Rob outlined. I posted this new question asking for a combinatorial proof of the count $a_k=\frac{!(k+1)}k$. By the way, this is OEIS sequence A000255, shifted by one.
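A quick computational cross-check (my own sketch, not part of the answer) that the brute-force count, the recurrence, and the derangement formula agree for small $k$:

```cpp
// Verify three descriptions of a(k) = #{permutations of {1..k} with no i
// such that p[i+1] = p[i] + 1}: brute force, the recurrence
// a(k+1) = k*a(k) + (k-1)*a(k-1), and a(k) = D(k+1)/k with D = derangements.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

long long brute(int k) {
    std::vector<int> p(k);
    std::iota(p.begin(), p.end(), 1);
    long long count = 0;
    do {
        bool ok = true;
        for (int i = 0; i + 1 < k; ++i)
            if (p[i + 1] == p[i] + 1) { ok = false; break; }
        count += ok;
    } while (std::next_permutation(p.begin(), p.end()));
    return count;
}

int main() {
    const int K = 9;
    std::vector<long long> a(K + 2);
    a[1] = a[2] = 1;
    for (int k = 2; k <= K; ++k)
        a[k + 1] = k * a[k] + (k - 1) * a[k - 1];

    std::vector<long long> D = {1, 0};          // D(0) = 1, D(1) = 0
    for (int n = 2; n <= K + 1; ++n)
        D.push_back((n - 1) * (D[n - 1] + D[n - 2]));

    for (int k = 1; k <= K; ++k)                // all three columns agree
        std::cout << "k=" << k << "  brute=" << brute(k)
                  << "  recurrence=" << a[k]
                  << "  D(k+1)/k=" << D[k + 1] / k << '\n';
}
```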
{ "language": "en", "url": "https://math.stackexchange.com/questions/1640926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove if $\sum\limits_{n=1}^ \infty a_n$ converges, {$b_n$} is bounded & monotone, then $\sum\limits_{n=1}^ \infty a_nb_n$ converges. Prove that if $\displaystyle \sum_{n=1}^ \infty a_n$ converges, and {$b_n$} is bounded and monotone, then $\displaystyle \sum_{n=1}^ \infty a_nb_n$ converges. No, $a_n, b_n$ are not necessarily positive numbers. I've been trying to use the Dirichlet's Test, but I have no way to show that $b_n$ goes to zero. If I switch $a_n$ and $b_n$ in Dirichlet's Test, I can show $a_n$ goes to zero, but then I'm having trouble showing that $\displaystyle \sum_{n=1}^ \infty \left\lvert a_{n+1}-a_{n}\right\rvert$ converges (because $a_n$ isn't necessarily monotone).
Outline: Indeed, the key is to use Dirichlet's test (a.k.a. Abel's summation at its core) as you intended: $$\begin{align} \sum_{n=1}^N a_n b_n &= \sum_{n=1}^N (A_n-A_{n-1}) b_n = A_Nb_N + \sum_{n=1}^{N-1} A_n b_n -\sum_{n=1}^{N-1} A_n b_{n+1} \\ &= A_Nb_N + \sum_{n=1}^{N-1} \underbrace{A_n}_{\text{bounded}} \underbrace{(b_n -b_{n+1})}_{\text{constant sign}} \end{align}$$ where $A_N \stackrel{\rm def}{=} \sum_{k=1}^{N} a_k$ and $A_0=0$. Now, this does not quite work: the issue boils down to the fact that at the end of the day you cannot rely on $b_n\xrightarrow[n\to\infty]{} 0$, since indeed it is not necessarily true. But you need this for the argument to go through. To circumvent that, observe that $(b_n)_n$ is a bounded monotone sequence, and therefore is convergent. Let $\ell\in\mathbb{R}$ be its limit. You can now define the sequence $(b^\prime_n)_n$ by $b^\prime_n \stackrel{\rm def}{=} b_n-\ell$. This is a monotone bounded sequence converging to $0$, and $$ \sum_{n=1}^N a_n b^\prime_n = \sum_{n=1}^N a_n b_n - \ell \sum_{n=1}^N a_n. $$ The second term is a convergent series by assumption on $(a_n)_n$, so showing convergence of the series $ \sum a_n b_n$ is equivalent to showing convergence of the series $\sum a_n b^\prime_n$. Apply your idea (Abel's summation) on the latter.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Joint distributions where one is uniform Let $X$ have a uniform distribution on the interval $(0,1)$. a) Find the c.d.f. and p.d.f. of $Y=\dfrac{X}{1-X}$. b) Find the c.d.f. and p.d.f. of $W=\ln Y$. I am extremely confused on part A, and part B also. I get to this very early step and am stuck: $$Y= \dfrac{X}{1-X} \\ \boxed{ F_Y(y)=\Pr(Y\le y)=\Pr\left(\frac{X}{1-X}\le y\right) = \Big\vert } $$ I can't figure out how to isolate for $y$ here. Any help would be appreciated.
We have $$F_Y(y)=\Pr(Y\le y)=\Pr\left(\frac{X}{1-X}\le y\right).$$ This is $\Pr(X\le y(1-X))$, which is $\Pr(X(1+y)\le y)$. Finally, for $y$ positive, which is the only interesting part, we have $$F_Y(y)=\Pr\left(X\le \frac{y}{1+y}\right)=\frac{y}{1+y}.$$ Elsewhere, we have $F_Y(y)=0$. For the density function, differentiate. The second problem is handled in an analogous way. Note that $W\le w$ if and only if $\ln Y\le w$ if and only if $Y\le e^w$.
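To finish the computation explicitly (my own addition, following the answer's recipe): differentiating $F_Y(y)=\frac{y}{1+y}$ gives $$f_Y(y)=\frac{1}{(1+y)^2},\qquad y>0,$$ and for $W=\ln Y$, $$F_W(w)=\Pr(Y\le e^w)=\frac{e^w}{1+e^w},\qquad f_W(w)=\frac{e^w}{(1+e^w)^2},\qquad w\in\mathbb{R},$$ which one may recognize as the standard logistic distribution.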
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a polynomial such that $F(p)$ is always divisible by a prime greater than $p$? Is there an integer-valued polynomial $F$ such that for all prime $p$, $F(p)$ is divisible by a prime greater than $p$? For example, $n^2+1$ doesn't work, since $7^2+1 = 2 \cdot 5^2$. I can see that without loss of generality it can be assumed that $F(0) \ne 0$. Also, it is enough to find a polynomial where the property is true for sufficiently large prime $p$, since we could multiply that polynomial by some prime in the sufficiently large range and fix all the smaller cases. I think it is possible that there are no such polynomials, is there any good hint for proving this? I can't find any solutions to $\text{gpf}(p^4+1) \le p$ for prime $p \le 10000$, where $\text{gpf}$ is the greatest prime factor, but there are plenty for $\text{gpf}(p^3+1) \le p$, for example $\text{gpf}(2971^3+1) = 743 \lt 2971$. So I guess $F(p) = p^4+1$ might be an example. I also checked higher powers for small $p$ and couldn't find solutions there either, so $k \ge 4 \rightarrow \text{gpf}(p^k+1) \gt p$ is plausible.
$$10181^4 + 1 = 2 \cdot 17 \cdot 1657 \cdot 4657 \cdot 5113 \cdot 8009$$ I have many factoring variants. Many of the commands in the following involve string variables; these are used in the versions where the factorization will actually be printed out, but are irrelevant here. Part of file form.h (a `stringify` helper is used below but was evidently defined elsewhere in the file, so a minimal version is included here to make the excerpt self-contained, and the backslashes stripped from the LaTeX string literals have been restored):

```cpp
#include <iostream>
#include <sstream>
#include <string>
#include <gmp.h>
#include <gmpxx.h>

using namespace std;

// helper: turn anything streamable (int, mpz_class, ...) into a string
template <typename T>
string stringify(const T &x) {
    ostringstream s;
    s << x;
    return s.str();
}

// trial division: returns 1 exactly when every prime factor of i is <= bound.
// The string fac accumulates a LaTeX factorization; it matters only in the
// printing variants of this routine.
int mp_all_prime_factors_below_bound(mpz_class i, mpz_class bound) {
    int squarefac = 0;          // counts odd primes with exponent > 1
    string fac = " = ";
    mpz_class p = 2;
    mpz_class temp = i;
    if (temp < 0) {
        temp *= -1;
        fac += " -1 \\cdot ";
    }
    if (temp == 1)
        fac += " 1 ";
    if (temp > 1) {
        int primefac = 0;
        while (temp > 1 && p <= bound && p * p <= temp) {
            if (temp % p == 0) {             // p divides temp
                ++primefac;
                if (primefac > 1)
                    fac += " \\cdot ";
                fac += stringify(p);
                temp /= p;
                int exponent = 1;
                while (temp % p == 0) {      // strip remaining copies of p
                    temp /= p;
                    ++exponent;
                }
                if (exponent > 1) {
                    fac += "^";
                    fac += stringify(exponent);
                    if (p > 2)
                        ++squarefac;
                }
            }
            ++p;
        }
        // whatever survives trial division is 1, a single prime (if the
        // loop stopped because p*p > temp), or a product of primes > bound
        if (temp > 1 && primefac >= 1)
            fac += " ";
        if (temp > 1 && temp < bound * bound) {
            fac += " \\cdot ";
            fac += temp.get_str();
        }
        if (temp > 1 && temp >= bound * bound)
            fac += " \\cdot \\mbox{BIG} ";
    }
    return (temp <= bound);
}
```
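To reproduce the search, a minimal driver along these lines should work (my own sketch; the bound is illustrative, and it assumes the routine above is visible, e.g. via `#include "form.h"`). It scans primes $p$ and reports those for which $p^4+1$ has no prime factor exceeding $p$; in particular it should flag $p=10181$:

```cpp
#include <iostream>
#include <gmp.h>
#include <gmpxx.h>
// #include "form.h"   // the routine shown above

int main() {
    mpz_class p = 2;
    while (p < 20000) {
        mpz_class value = p * p * p * p + 1;
        if (mp_all_prime_factors_below_bound(value, p))
            std::cout << "all prime factors of " << p.get_str()
                      << "^4 + 1 lie below " << p.get_str() << '\n';
        mpz_nextprime(p.get_mpz_t(), p.get_mpz_t());   // advance to next prime
    }
}
```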
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 1 }
How to determine the reflection point on an ellipse Here is my problem. There are two points P and Q outside an ellipse, where the coordinates of P and Q are known. The shape of the ellipse is also known. A ray coming from point P is reflected by the ellipse and arrives at Q. The question is how to determine the reflection point on the ellipse. I mean, is there an analytical method to calculate the coordinates of the reflection point?
Hint: We can use a method similar to that of Max Payne, but representing the ellipse by its parametric equations $x = a \cos t$, $y = b \sin t$. By differentiation, the tangent vector is $(- a \sin t, b \cos t)$, hence the normal vector $(b \cos t, a \sin t)$. Now we express the equality of the angles to the normal, by the equality of dot products $$\frac{(P_x - a \cos t) b \cos t + (P_y - b \sin t) a \sin t}{\sqrt{(P_x - a \cos t)^2+(P_y - b \sin t)^2}} = \frac{(Q_x - a \cos t) b \cos t + (Q_y - b \sin t) a \sin t}{\sqrt{(Q_x - a \cos t)^2+(Q_y - b \sin t)^2}}.$$ This can be rewritten as a nasty trigonometric polynomial of degree four by squaring and reducing to the common denominator, then rationalized to an octic polynomial.
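If a numerical answer suffices, the angle-balance equation above can be solved robustly by scanning for sign changes and bisecting. A rough sketch (my own, with made-up semi-axes and points; candidate roots still have to be vetted, e.g. against reflections on the far side of the ellipse or chords passing through its interior):

```cpp
#include <cmath>
#include <iostream>

// example data (assumed for illustration): ellipse x^2/a^2 + y^2/b^2 = 1
const double a = 2.0, b = 1.0;
const double Px = 3.0, Py = 2.0;     // point P
const double Qx = -3.0, Qy = 1.5;    // point Q

// difference of the two normalized dot products from the hint:
// zero exactly when the rays to P and Q make equal angles with the normal
double g(double t) {
    double x = a * std::cos(t), y = b * std::sin(t);
    double nx = b * std::cos(t), ny = a * std::sin(t);   // normal direction
    double px = Px - x, py = Py - y;
    double qx = Qx - x, qy = Qy - y;
    return (px * nx + py * ny) / std::hypot(px, py)
         - (qx * nx + qy * ny) / std::hypot(qx, qy);
}

int main() {
    const double PI = std::acos(-1.0);
    const int N = 2000;                      // scan resolution
    for (int i = 0; i < N; ++i) {
        double t0 = 2 * PI * i / N, t1 = 2 * PI * (i + 1) / N;
        if (g(t0) * g(t1) < 0) {             // sign change: bisect to a root
            for (int k = 0; k < 60; ++k) {
                double tm = 0.5 * (t0 + t1);
                if (g(t0) * g(tm) <= 0) t1 = tm; else t0 = tm;
            }
            double t = 0.5 * (t0 + t1);
            std::cout << "candidate reflection point: ("
                      << a * std::cos(t) << ", " << b * std::sin(t) << ")\n";
        }
    }
}
```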
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Is it true that $ f(n) = O(g(n))$ implies $g(n) = O(f(n))$ So I have this as an assignment for algorithms. I've googled a lot, read the chapter in the book about big-Oh notation, and I understand the concept. I do not, however, understand how to prove it. I know I need to work with a constant and pull that out of the equation. And I know that the answer I've found for "$f(n) = O(g(n))$ implies $g(n) = O(f(n))$" is NO: $f(x)=x$, $g(x)=x^2$ is a counterexample. But I do not understand this answer. If $g(x)$ could be $x^2$, then wouldn't $f(n)$ fail to be $O(g(n))$? Can someone explain in a simple way why this is the case though? :(
The definition of the big-Oh notation is as follows: $f(x)=O(g(x))$ if $|f(x)| \leq c|g(x)|$ for every big enough $x$ and some constant $c$. This is why $f(x)=x$ and $g(x)=x^2$ is a counterexample: * *$x=O(x^2)$ because, for example, taking $c=1$ we have $x \leq x^2$ for every $x \geq 1$ *$x^2$ can't be $O(x)$ because that would mean $x^2 \leq cx$ for every $x \geq x_0$, i.e. $x^2-cx \leq 0$, but this is false because: $$\lim_{x \to \infty} x^2-cx=\infty$$ The usual intuition is that $f(x)=O(g(x))$ when $g(x)$ grows at least as fast as $f(x)$. This explains why $x=O(x^2)$ but the reverse isn't true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How did the rule of addition come to be and why does it give the correct answer when compared empirically? I'm still a high school student and very interested in maths, but none of my school books describe these kinds of things; they only say how to do it, not the whys. My question is very simple. For example: 19 + 25 = 44, because the one from adding 9 and 5 goes on to add with 1 and 2. How did this rule of addition come to be? Here's a bit of explanation that can be useful (sorry if it is frustrating): Suppose we are a 3 year old child, no one teaches us how to add, and we recognize 1 as our index finger and 5 as all palm fingers held up. Someone gives us the problem add: 1+5, so we hold 'em up, right? And again someone gives us to add 8564+2345, so we can't lift 'em up. So we try to devise a rule, but we don't recognize 6+4 = 10 in which 0 stays and one jumps, neither can we say that only the digits from the rightmost positions are to be added. This is what I meant.
A visual representation:

```
  19
+ 25
----
  44
```

Let's remove the answer and split the numbers into tens and units:

```
  10 + 9    {= 19}
+ 20 + 5    {= 25}
```

This gives us:

```
  30 + 14
```

We can make a new addition out of these numbers:

```
  14
+ 30
```

Let's split these numbers into tens and units too:

```
  10 + 4    {= 14}
+ 30 + 0    {= 30}
```

The 10 represents the 1 that was carried from the units column to the tens column. The second number, 30, must end in a zero because we only used digits from the tens column to calculate it. This means that at this stage we will never get a carry from the units column to the tens column, allowing us to progress past this stage, although for larger numbers we could get a carry from the tens column to the hundreds column (not shown). We can now complete the sum:

```
  10 + 4
+ 30 + 0
--------
  40 + 4    {= 44}
```

This is why this method works. It is because breaking a number into parts (not digits, $10\ne1$) and adding all the parts up again will always give you the original number. $14=10+4=4+10=7+7=11+3=1+2+3+8$, etc. This is called associativity. When you carry the 1 from 14, you are actually carrying 10, but in the tens column you write 10 as 1. (The same carrying rule, written as a step-by-step procedure, is sketched below.)
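Here is that carrying rule as an algorithm (my own illustration; `add_decimal` is a made-up name): each column sum keeps its last digit in place, and the ten, if any, "jumps" to the next column as a carry of 1.

```cpp
#include <algorithm>
#include <iostream>
#include <string>

// add two base-10 digit strings column by column, right to left
std::string add_decimal(const std::string &x, const std::string &y) {
    std::string result;
    int carry = 0;
    int i = static_cast<int>(x.size()) - 1;
    int j = static_cast<int>(y.size()) - 1;
    while (i >= 0 || j >= 0 || carry) {
        int column = carry;                      // the carried ten, if any
        if (i >= 0) column += x[i--] - '0';
        if (j >= 0) column += y[j--] - '0';
        result.push_back('0' + column % 10);     // the digit that stays
        carry = column / 10;                     // the ten that jumps
    }
    std::reverse(result.begin(), result.end());
    return result;
}

int main() {
    std::cout << add_decimal("19", "25") << '\n';   // prints 44
}
```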
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 16, "answer_id": 11 }
What does the third derivative tell us about an inflection point? I was trying to find the nature (maxima, minima, inflection points) of the function $$f(x)=\frac{x^5}{20}-\frac{x^4}{12}+5.$$ But I faced a conceptual problem. It is given in the solution to the problem that $f''(0)=0$ and $f'''(0) \neq 0$, so $0$ is not an inflection point. But why should we check the third derivative? Isn't checking the first and second derivatives sufficient for verifying an inflection point? Why must the higher order odd derivatives be zero for an inflection point?
It depends on your definition of inflection point. You have given this as "where the curve changes its concavity". Under that definition, checking $f'''(x)$ is not the way to rule out an inflection point, and the criterion you quote cannot be right: for example, $$f(x) = x^3 \text{ gives } f''(0)=0,\ f'''(0) = 6 \neq 0, $$ yet $x = 0$ is an inflection point, since there is a change in concavity there.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
The eigenvalues of $AB$ and of $BA$ are identical for all square $A$ and $B$ …a different approach The eigenvalues of $AB$ and of $BA$ are identical for all square $A$ and $B$. I have done the proof in an easy way: if $ABv = λv$, then $BAw = λw$, where $w = Bv$. Thus, as long as $w \neq 0$, it is an eigenvector of $BA$ with eigenvalue $λ$, and the other case, when $w = 0$, can be easily handled. But I have seen a proof of the same result in the book J. H. Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965, on page 54, done in a different way, where I am unable to understand how they did the matrix construction at the start of the proof, that is, how they got the idea of the matrices $$\begin{bmatrix}I&O\\-B&\mu I\end{bmatrix}$$ and $$\begin{bmatrix}\mu I&A\\B&\mu I\end{bmatrix}$$ For ready reference I am attaching a screenshot of the proof as follows: Eigenvalues of $AB$ *Notice, however, that the eigenvalues of $AB$ and of $BA$ are identical for all square $A$ and $B$. The proof is as follows. We have $$\begin{bmatrix}I&O\\-B&\mu I\end{bmatrix}\begin{bmatrix}\mu I&A\\B&\mu I\end{bmatrix}=\begin{bmatrix}\mu I&A\\O&\mu^2I-BA\end{bmatrix}\tag{51.1}$$ and $$\begin{bmatrix}\mu I&-A\\O&I\end{bmatrix}\begin{bmatrix}\mu I&A\\B&\mu I\end{bmatrix}=\begin{bmatrix}\mu^2I-AB&O\\B&\mu I\end{bmatrix}.\tag{51.2}$$ Taking determinants of both sides of $(51.1)$ and $(51.2)$ and writing $$\begin{bmatrix}\mu I&A\\B&\mu I\end{bmatrix}=X,\tag{51.3}$$ we have $$\mu^n\det(X)=\mu^n\det(\mu^2I-BA)\tag{51.4}$$ and $$\mu^n\det(X)=\mu^n\det(\mu^2I-AB).\tag{51.5}$$ Equations $(51.4)$ and $(51.5)$ are identities in $\mu^2$ and writing $\mu^2=\lambda$ we have $$\det(\lambda I-BA)=\det(\lambda I-AB),\tag{51.6}$$ showing that $AB$ and $BA$ have the same eigenvalues.
The key is to realize that you multiply partitioned matrices together in exactly the same manner by you do "regular" matrix matrix multiplication. The only difference is that multiplication is no longer commutative because your are dealing with submatrices rather than scalars. I shall demonstrate. Let $A$ and $B$ be partitioned conformally, i.e. \begin{equation} A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad B = \begin{pmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{pmatrix} \end{equation} where $A_{ij}$ and $B_{ij}$ are matrices in their own right. Conformally means that all the subsequent products are defined. Then \begin{equation} A B = \begin{pmatrix} A_{11} B_{11} + A_{12} B_{21} & A_{11} B_{12} + A_{12} B_{22} \\ A_{21} B_{11} + A_{22} B_{21} & A_{21} B_{12} + A_{22} B_{22} \end{pmatrix} \end{equation} This is how equations (51.1) and (51.2) were derived.
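For instance, carrying out the block product in (51.1) with this rule (a worked check): $$\begin{pmatrix} I & O \\ -B & \mu I \end{pmatrix}\begin{pmatrix} \mu I & A \\ B & \mu I \end{pmatrix} = \begin{pmatrix} I\cdot\mu I + O\cdot B & I\cdot A + O\cdot \mu I \\ -B\cdot \mu I + \mu I\cdot B & -B\cdot A + \mu I \cdot \mu I \end{pmatrix} = \begin{pmatrix} \mu I & A \\ O & \mu^2 I - BA \end{pmatrix},$$ where the bottom-left block vanishes because $\mu I$ commutes with $B$; this is exactly the kind of cancellation one must verify by hand, since general blocks do not commute.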
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove this integral is analytic Let $\phi$ be a continuous (complex valued) function on the real interval [−1, 1] inside C, and define $$f(z)=\int_{-1}^1\frac{\phi(t)}{t-z}dt$$ Show that f is analytic on C less the interval [−1, 1]. I thought about CR equation to prove analytic but the function is not of the form $f(z) = u(x, y) + iv(x, y)$
If you wanted to use CR: (This answer only works for $\phi$ real, but if $\phi=\phi_1+i\phi_2$ with $\phi_1,\phi_2$ real, you can see that it follows from the real case.) Write $z=a+bi$; then $$\frac{\phi(t)}{(t-a)-bi} = \frac{\phi(t)(t-a +bi)}{(t-a)^2+b^2}$$ So $$\int_{-1}^{1} \frac{\phi(t)}{t-z}\,dt = \int_{-1}^1 \frac{\phi(t)(t-a)}{(t-a)^2+b^2}\,dt + i\int_{-1}^1 \frac{b\phi(t)}{(t-a)^2+b^2}\,dt$$ So there are your $u(a,b)$ and $v(a,b)$. (To conclude, differentiate under the integral sign, which is legitimate here since the integrand and its partial derivatives in $(a,b)$ are continuous for $a+bi\notin[-1,1]$ while $t$ ranges over a compact interval, and check the Cauchy-Riemann equations $u_a=v_b$, $u_b=-v_a$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Cluster points and the sequence 1,1,2,1,2,3,1,2,3,4,1,... I am working on a problem in analysis. We are given a sequence $x_n$ of real numbers. Then a definition: a point $c \in \mathbb{R}\cup\{\infty, -\infty\}$ is a cluster point of $x_n$ if there is a convergent subsequence of $x_n$ with limit $c$. Then we let $C$ be the set of all cluster points. The problem is to prove that the set $C$ is closed. In working on this problem, I tried to determine the cardinality of $C$. And, in doing this, I came up with the sequence $x_n = 1,1,2,1,2,3,1,2,3,4,1,2,3,4,5,...$ as an example of a sequence with a countable number of cluster points. I believe that for this sequence $C = \mathbb{N}$. Now consider an enumeration of $\mathbb{Q}$: $q_1, q_2, q_3,...$ And define the sequence $y_n = q_1, q_1, q_2, q_1, q_2, q_3, q_1, q_2, q_3, q_4,...$ It seems to me that the set of cluster points of this sequence is $\mathbb{Q}$. But now, we could take a sequence from the cluster points of $y_n$ (which is $\mathbb{Q}$) that converges to a member of $\mathbb{Q}^c$ (like $\sqrt{2}$), which would contradict the assumption that the set of cluster points is closed. (Since we have a theorem: a subset of a metric space is closed iff every convergent sequence in the set converges to an element of that set.) So, in trying to prove $C$ is closed, I have come up with what I believe is a counterexample. So my question is: where is the error?
The error is in the sentence that starts "It seems to me...." The set of cluster points for the sequence you describe is, in fact, all of $\mathbb{R}$, not just $\mathbb{Q}$, for pretty much precisely the reason you give in the next paragraph: for any real number, consider a sequence of rationals converging to that number, and then simply pick a subsequence from your sequence consisting of those rationals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1641992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Product of two primitive roots $\bmod p$ cannot be a primitive root. I recently proved that the product of all primitive roots of an odd prime $p$ is $\pm 1$ as an exercise. As a result, I became interested in how few distinct primitive roots need to be multiplied to guarantee a non-primitive product. Testing some small numbers, I have the following claim: "If $n$ and $m$ are two distinct primitive roots of an odd prime $p$, then $nm \bmod p$ is not a primitive root." I've tried to make some progress by rewriting one of the primitive roots as a power of the other, but haven't been able to see any argument which helps me prove the result. Any help would be appreciated.
A primitive root of an odd prime $p$ must be a quadratic non-residue of $p$, and the product of two non-residues is a residue.
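For a concrete instance (my own check): modulo $7$ the primitive roots are $3$ and $5$, and $3\cdot 5=15\equiv 1\pmod 7$, which has order $1$ and is certainly not primitive. This is consistent with the general fact that a quadratic residue $g^{2k}$ has order dividing $(p-1)/2$, so it can never have the full order $p-1$.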
{ "language": "en", "url": "https://math.stackexchange.com/questions/1642070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Can I further simplify $5^k \cdot 5 + 9 < 6^k \cdot 6$ to prove this is true I am trying to prove this statement, but I'm not sure where to go from here. I don't think this is sufficiently reduced to conclude the statement is true, but I'm not positive. $k ≥ 2$ Can I further simplify this?
Hint: It's not hard to check that $6^{k+1}-{5}^{k+1}$ is increasing in $k$ and that $6\cdot6-5\cdot 5 = 11 \geq 9$.
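One way to finish (my own sketch, assuming the statement being proved is $5^{k+1}+9<6^{k+1}$ for $k\ge 2$): the base case is $5^{3}+9=134<216=6^{3}$, and the induction step is $$5^{k+2}+9 \;<\; 5\left(5^{k+1}+9\right) \;<\; 5\cdot 6^{k+1} \;<\; 6^{k+2},$$ where the first inequality holds because $5^{k+2}+9<5^{k+2}+45=5(5^{k+1}+9)$, and the middle one is the induction hypothesis multiplied by $5$.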
{ "language": "en", "url": "https://math.stackexchange.com/questions/1642157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
limit of $f(x) = \lim \limits_{x \to 0} (\frac{\sin x}{x})^{\frac 1x}$ Any ideas how to calculate this limit without using Taylor series? $$f(x) = \lim \limits_{x \to 0} \left(\frac{\sin x}{x}\right)^{\frac1x}$$
Take the logarithm of both sides and examine the two one-sided limits: $$\ln L^+=\lim \limits_{x \to 0^+} \frac{\ln (\sin x)-\ln(x)}{x}$$ $$\ln L^-=\lim \limits_{x \to 0^-} \frac{\ln (\sin (-x))-\ln(-x)}{x}$$ Both can be evaluated by L'Hôpital's rule and equal $0$, so the limit is $e^0=1$.
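For the record, the right-hand computation goes as follows (a worked check; the left-hand side is analogous). The quotient has the form $0/0$, so L'Hôpital's rule gives $$\lim_{x\to 0^+}\frac{\ln(\sin x)-\ln x}{x}=\lim_{x\to 0^+}\left(\cot x-\frac1x\right)=\lim_{x\to 0^+}\frac{x\cos x-\sin x}{x\sin x},$$ and one more application (the form is again $0/0$) yields $$\lim_{x\to 0^+}\frac{-x\sin x}{\sin x+x\cos x}=\lim_{x\to 0^+}\frac{-\sin x}{\frac{\sin x}{x}+\cos x}=\frac{0}{1+1}=0.$$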
{ "language": "en", "url": "https://math.stackexchange.com/questions/1642327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Arc length of a sequence of semicircles. Let $\gamma_1$ be the semicircle above the x axis joining $-1$ and $1$. Now divide this interval into $[-1,0]$ and $[0,1]$, and trace a semicircle joining $-1,0$ above the $x$-axis and another one joining $0,1$ below the $x$-axis; call this curve $\gamma_2$. Keep subdividing the interval into $2^n$ subintervals, proceeding as above, and call the resulting curve $\gamma_n$. Now, from looking at their graphs, I can say that the arc length of all these curves is $\pi$, but the arc length of the limit, $\gamma$ (which I think should be a straight line connecting $-1$ and $1$), is $2$. Why does this happen? Shouldn't the arc length of the limit be $\pi$ as well? Does $\gamma_n\to \gamma$ uniformly? One of my professors said that this is because $$ \operatorname{Arc}(\gamma_i)=\int_D\sqrt{1+\gamma_i'^2}dx $$ and that $\gamma_i'$ becomes pretty crazy when $i$ is large (as it's $\pm \infty$ at the edges of the circles).
The problem with this argument, and many others like it, is that the arc length of a curve is not a continuous function of the curve. This means that if a sequence of curves $\gamma_i$ converges to a curve $\gamma$ (a notion of convergence of functions like that can be made formal, but let me not do that here), then there is no guarantee that the sequence of arc lengths $l(\gamma_i)$ converges to the arc length $l(\gamma)$. Indeed, if you look at the integrals $\int_{-1}^1\sqrt{1+\gamma_i'(t)^2}\,dt$, you might expect the expression under the integral to converge to $\sqrt{1+\gamma'(t)^2}$; this, however, is not the case, because even when a sequence of differentiable functions converges (I am ignoring the "cusps" where the semicircles meet here; they are important as well, but that's beyond my point right now), the sequence of derivatives need not converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1642415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }