Accuracy of solution? I have a question that asks me to get the solution to the equation $x+\arcsin(x)={\pi\over 2}$ by using a calculator. (Repeatedly pressing cos?) Then it asks to justify the accuracy of the answer. What does this mean? What am I supposed to do? Please help!
To me this seems to be an exercise on the Contraction Mapping Theorem: rewrite the equation as $$ x = \pi/2-\arcsin x $$ so the right-hand side is defined only on $[-1,1]$ and takes values in $[0,\pi]$. By monotonicity we obtain that the equation has exactly one solution. Now, rewrite it as $$ \arcsin x = \frac\pi2-x $$ and apply $\sin$ to both sides: $$ x = \cos x \quad(1) $$ Geometrically this means that you reflected the graphs of these functions with respect to $y=x$. Now we can apply the Contraction Mapping Theorem to solve $(1)$ on $[-1,1]$. Using the Lipschitz continuity of $\cos$ you can show that $$ |\cos x' - \cos x''|\leq \alpha|x'-x''| $$ where $\alpha = \sin1<1$. Now we put $x_0 =0$ and construct $x_{n+1} = \cos x_n$, which converges to the solution $x^* = \lim\limits_{n\to\infty}x_n$. We only need error bounds: $$ |x^*-x_n| \leq \sum\limits_{k=n}^\infty|x_{k+1}-x_k| \leq \sum\limits_{k=n}^\infty\alpha^k = \frac{\alpha^n}{1-\alpha} $$ where we used that $|x_{k+1}-x_k|\leq \alpha|x_k-x_{k-1}|\leq\dots\leq\alpha^k|x_1-x_0|=\alpha^k$. By the way, $\alpha = \sin 1 \approx 0.841 \leq 0.85$.
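The iteration is easy to run in practice; here is a minimal Python sketch (my own illustration, not part of the answer) that computes the fixed point of $\cos$ and checks the a-priori bound $\alpha^n/(1-\alpha)$:

```python
import math

def solve_by_iteration(n_steps=200, x0=0.0):
    # Iterate x_{k+1} = cos(x_k), the contraction from the argument above.
    x = x0
    for _ in range(n_steps):
        x = math.cos(x)
    return x

x_star = solve_by_iteration()
# x* solves x = cos x, hence x* + arcsin(x*) = pi/2.
assert abs(x_star - math.cos(x_star)) < 1e-12
assert abs(x_star + math.asin(x_star) - math.pi / 2) < 1e-12

# A-priori bound after n steps: |x* - x_n| <= alpha^n / (1 - alpha), alpha = sin 1.
alpha = math.sin(1)
for n in (5, 10, 20):
    x_n = solve_by_iteration(n_steps=n)
    assert abs(x_star - x_n) <= alpha ** n / (1 - alpha)
```

In practice the iteration converges noticeably faster than the worst-case bound, since the local contraction rate near the fixed point is $\sin x^* \approx 0.674$.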
{ "language": "en", "url": "https://math.stackexchange.com/questions/89899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Do localization and dual commute for locally free modules of rank $1$? Let $A$ be an integral domain and $M$, $N$ be finitely generated $A$-modules. I know from this topic that one cannot expect $\hom_A(M,N)_P \cong \hom_{A_P}(M_P,N_P)$ to be true in the general case (although I lack the background to fully grasp the given counterexample), but what if I consider the localization of the dual module of $M$, i.e. $N = A$? For my purposes, I may even assume that $M$ is locally free of rank $1$. If I didn't miscalculate, the natural map should be injective, but I fail to prove surjectivity. (A more abstract proof would be great, too!)
Yes, in the case you mention $\operatorname{Hom}$ commutes with localization. Reminder: for a locally free $A$-module $M$ of rank 1, the dual module $\operatorname{Hom}_A(M,A)$ is the unique module $L$ (up to isomorphism) such that $M\otimes_A L\cong A$. [This is why such $M$ are also called invertible modules.] The rest is easy: localizing at $P$, you get $(M\otimes_A L)_P=M_P\otimes_{A_P} L_P=A_P$, and this proves, by the Reminder again, that $L_P=\operatorname{Hom}_{A_P}(M_P,A_P)$. This is what you wanted: $(\operatorname{Hom}_A(M,A))_P=\operatorname{Hom}_{A_P}(M_P,A_P)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/89965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Heuristic Proof of Hardy-Littlewood Conjecture for 3-term Arithmetic Progressions The Hardy-Littlewood Conjecture for 3-term arithmetic progressions is that $$ \# \{ x,d \in \{1,\ldots,N\} \, | \, x,x+d,x+2d \text{ are all prime} \} \sim \frac{3}{2} \prod_{p > 2} \left(1+\frac{1}{(p-1)^2}\right) \frac{N^2}{(\log N)^3}. $$ In this (piece) of a paper (http://www.claymath.org/publications/Gauss_Dirichlet/green.pdf), Ben Green outlines a heuristic argument for the (k-term version) of the conjecture that I am trying to understand. I will repeat the most important parts here. For large $N$, the probability that an arbitrary integer $\leq N$ is prime is $$ \mathbb{P}(x \text{ is prime} | 1 \leq x \leq N) \approx \frac{1}{\log N} $$ by the Prime Number Theorem. Choose $x,d \in \{1,\ldots,N\}$ at random among the $N^2$ choices and write $E_j$ for the event that $x+jd$ is prime. If the events $E_0, E_1, E_2$ were independent, we would expect that $$ \mathbb{P}(x,x+d,x+2d \text{ are all prime}) = \mathbb{P}(E_0 \cap E_1 \cap E_2) \approx \frac{1}{(\log N)^3}, $$ and so $$ \# \{x,d \in \{1,\ldots,N\} \, | \, x,x+d,x+2d \text{ are all prime} \} \approx \frac{N^2}{(\log N)^3}, $$ which is the correct result up to a constant factor. Green says that the correct constant can be obtained by discarding the incorrect assumption of independence and taking account of the fact that the primes $> q$ fall only in those residue classes $a(\text{mod }q)$ with $a$ coprime to $q$. He gives no more details. I've been trying to figure out how to do this, but haven't been successful. Could someone please help me out or point me to a reference where it is done? Thanks.
Very roughly speaking, the probability for a number $x$ to be divisible by a prime $p$ is $1/p$. We can't take this literally because it stops making sense when $p$ gets close to $\sqrt x$, but it will make more and more sense for more and more primes as $x$ increases. Thus, even though we can't calculate the probability of a number being prime as $\prod_p(1-\frac1p)$ (which would be zero), we can calculate corrections to the independence assumption as if we were correcting the factor $1-\frac1p$. If the product of the corrections from primes above some limit $q$ converges to $1$ as $q\to\infty$, we can expect the correction to be asymptotically correct, since the part of it that doesn't make sense will not matter asymptotically. Now if $E_0$, $E_1$ and $E_2$ were independent, the contribution from a prime $p$ to the probability of the triple being prime would be $(1-\frac1p)^3$. To find the correction for $p\gt2$, we can consider the two cases where $d$ is or isn't divisible by $p$. If $d$ is divisible by $p$, which happens with probability $\frac1p$, then $x$, $x+d$ and $x+2d$ all have the same residue $\bmod p$, and this is non-zero with probability $1-\frac1p$; whereas if $d$ is not divisible by $p$, which happens with probability $1-\frac1p$, then $x$, $x+d$ and $x+2d$ all have different residues $\bmod p$, and the probability of none of them being zero is $1-\frac3p$. Thus, in total, the probability of none of the three numbers being divisible by $p$ is $$\frac1p(1-\frac1p)+(1-\frac1p)(1-\frac3p)=1-\frac3p+\frac2{p^2}=(1-\frac1p)(1-\frac2p)\;.$$ Thus the correction with respect to $(1-\frac1p)^3$ is $$\frac{(1-\frac1p)(1-\frac2p)}{(1-\frac1p)^3}=\frac{1-\frac2p}{(1-\frac1p)^2}=\frac{p^2-2p}{(p-1)^2}=1-\frac1{(p-1)^2}\;,$$ so we should expect a minus sign where you have a plus sign, and indeed the paper you cite has a minus sign. The case $p=2$ has to be treated separately because there aren't three different residues $\bmod2$. 
In this case, $(1-\frac1p)^3$ is $\frac18$, whereas the correct probability is $\frac12\cdot\frac12=\frac14$, so the correction is $2$, and indeed the paper you cite has $2$ where you have $\frac32$.
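The local correction factors derived above are easy to verify by brute force; the following Python sketch (my own illustration, not part of the answer) counts residue pairs $(x,d)$ mod $p$ directly:

```python
from fractions import Fraction

def local_density(p):
    # Probability over (x, d) mod p that none of x, x+d, x+2d is divisible by p.
    good = sum(1 for x in range(p) for d in range(p)
               if x % p and (x + d) % p and (x + 2 * d) % p)
    return Fraction(good, p * p)

# Odd primes: density is (1 - 1/p)(1 - 2/p), as derived above.
for p in (3, 5, 7, 11, 13):
    assert local_density(p) == (1 - Fraction(1, p)) * (1 - Fraction(2, p))

# p = 2: density is 1/4, i.e. twice the "independent" guess (1/2)^3.
assert local_density(2) == Fraction(1, 4)
```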
{ "language": "en", "url": "https://math.stackexchange.com/questions/90018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Tautological vector bundle over $G_1(\mathbb{R^2})$ isomorphic to the Möbius bundle Let $V$ be a finite dimensional vector space, and let $G_k(V)$ be the Grassmannian of $k$-dimensional subspaces of $V$. Let $T$ be the disjoint union of all these $k$-dimensional subspaces and let $\pi:T\rightarrow G_k(V)$ be the natural map sending each point $x \in S$ to $S$. Then $T$ has a unique smooth manifold structure making it into a smooth rank-$k$ vector bundle over $G_k(V)$, with $\pi$ as a projection and with the vector space structure on each fiber inherited from $V$. $T$ is called the tautological vector bundle over $G_k(V).$ What I want to prove is that tautological vector bundle over $G_1(\mathbb{R^2})$ is isomorphic to the Möbius bundle. (This is a problem from Introduction to Smooth Manifolds by Lee and Möbius bundle is defined as in Lee's book, page 105. Also I took the definition of the tautological vector bundle over $G_k(V)$ from Lee's book as well.)
As @Sam pointed out $G_1(\mathbb{R^2}) \cong \mathbb{RP^1} \cong \mathbb{S^1}$. Now, writing a smooth bundle isomorphism is not so hard.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
What is the product of all nonzero, finite cardinals? To be specific, why does the following equality hold? $$ \prod_{0\lt n\lt\omega}n=2^{\aleph_0} $$
As a product of cardinals, yes: $$2^{\aleph_0} \leq \prod_{0 < n < \omega} n \leq {\aleph_0}^{\aleph_0} \leq 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0}$$ As a product of ordinals, no: $$\prod_{0 < n < \omega} n \leq \prod_{0 < n < \omega} \omega = {\omega}^{\omega}$$ but the ordinal ${\omega}^{\omega}$ is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Prove or disprove this calculus limit result by geometric approach My question is: can we justify this change of variables via the formula at the bottom? $$\iint_R f(r,\theta) \ dx\,dy = \int_a^b \int_0^{r(\theta)} f(r,\theta)\, r \,dr\, d\theta$$ as $dr$ and $d\theta$ approach $0$. Prove or disprove that $$\frac{((r+\Delta r) \cos(a +\Delta \theta) -r \cos a) \cdot ((r+\Delta r) \sin(a + \Delta \theta) -r \sin a)}{r \;\Delta \theta \; \Delta r}\to 1$$ as $\Delta r$ and $\Delta \theta$ approach $0$, where the variables are as in this graph. This question is inspired by $dx\;dy=r \;dr \;d \theta$.
The diagram that you are giving is not what is going on at all. $\mathrm{d}x\;\mathrm{d}y$ is an element of area intended to represent the plane broken up into small rectangles: $\mathrm{d}r\;\mathrm{d}\theta$ is an element of area in a space whose small squares get mapped to small annular wedges by $x=r\cos(\theta)$, $y=r\sin(\theta)$: The Jacobian is the matrix that locally maps between two coordinate systems. $$ \frac{\partial(x,y)}{\partial(u,v)}=\begin{bmatrix}\frac{\partial x}{\partial u}&\frac{\partial y}{\partial u}\\\frac{\partial x}{\partial v}&\frac{\partial y}{\partial v}\end{bmatrix}\tag{1} $$ From polar to rectangular coordinates, the Jacobian is $$ \frac{\partial(x,y)}{\partial(r,\theta)}=\begin{bmatrix}\cos(\theta)&\sin(\theta)\\-r\sin(\theta)&r\cos(\theta)\end{bmatrix}\tag{2} $$ Note that $\begin{vmatrix}\cos(\theta)&\sin(\theta)\\-r\sin(\theta)&r\cos(\theta)\end{vmatrix}=r$. A small area in $\mathrm{d}r\;\mathrm{d}\theta$, the green square, is mapped by the polar coordinate map to the blue annular wedge, which has approximately the same area as the red rectangle. The Jacobian matrix maps the green square to the red rectangle. The ratio of the area of the red rectangle to the green square is the determinant of the Jacobian (this is just linear algebra). Therefore, since a small square in $\mathrm{d}r\;\mathrm{d}\theta$ is mapped by the coordinate transform so that its area is $\left|\frac{\partial(x,y)}{\partial(r,\theta)}\right|\;\mathrm{d}r\;\mathrm{d}\theta$, $$ \begin{align} \iint f(x,y)\;\mathrm{d}x\;\mathrm{d}y &=\iint f(r,\theta)\left|\frac{\partial(x,y)}{\partial(r,\theta)}\right|\;\mathrm{d}r\;\mathrm{d}\theta\\ &=\iint f(r,\theta)\;r\;\mathrm{d}r\;\mathrm{d}\theta\tag{3} \end{align} $$
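The claim that the annular wedge has approximately the area $r\,\mathrm{d}r\,\mathrm{d}\theta$ can be checked numerically; here is a small Python sketch (my own, not from the answer) comparing the exact wedge area with the Jacobian approximation:

```python
# Exact area of the annular wedge between radii r and r+dr over angle dtheta:
# ((r+dr)^2 - r^2)/2 * dtheta = (r + dr/2) * dr * dtheta.
def wedge_area(r, dr, dtheta):
    return ((r + dr) ** 2 - r ** 2) / 2 * dtheta

r = 2.0
for dr, dtheta in [(0.1, 0.1), (0.01, 0.01), (0.001, 0.001)]:
    approx = r * dr * dtheta            # Jacobian approximation
    exact = wedge_area(r, dr, dtheta)
    # The relative error is (dr/2)/(r + dr/2), so it shrinks like dr.
    assert abs(exact - approx) / exact <= dr
```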
{ "language": "en", "url": "https://math.stackexchange.com/questions/90239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Need help finding limit $\lim \limits_{x\to \infty}\left(\frac{x}{x-1}\right)^{2x+1}$ Facing difficulty finding limit $$\lim \limits_{x\to \infty}\left(\frac{x}{x-1}\right)^{2x+1}$$ For starters I have trouble simplifying it Which method would help in finding this limit?
If you know that $$\lim_{x\to\infty}\left(1 + \frac{a}{x}\right)^x = e^{a},$$ so that $$\lim_{x\to\infty}\left(1 - \frac{1}{x}\right)^x = e^{-1},$$ then you can try to rewrite your limit into something involving this limit. So try rewriting it; perhaps as a product, $$\begin{align*} \left(\frac{x}{x-1}\right)^{2x+1} &= \left(\left(\frac{x}{x-1}\right)^x\right)^2\left(\frac{x}{x-1}\right)\\ &= \left(\frac{1}{\left(\frac{x-1}{x}\right)^x}\right)^2\left(\frac{x}{x-1}\right)\\ &= \left(\frac{1}{\left(1 - \frac{1}{x}\right)^x}\right)^2\left(\frac{x}{x-1}\right). \end{align*}$$ Then use limit laws to compute it.
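A quick numerical sanity check (my own sketch, not part of the answer) confirms that the rewritten product converges to $(e^{-1})^{-2}\cdot 1 = e^2$:

```python
import math

def f(x):
    return (x / (x - 1)) ** (2 * x + 1)

# The limit is e^2; the error decays roughly like 1/x (about 2*e^2/x).
for x in (1e3, 1e5, 1e7):
    assert abs(f(x) - math.e ** 2) < 20 / x
```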
{ "language": "en", "url": "https://math.stackexchange.com/questions/90324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 3 }
Simple Combinations with Repetition Question In how many ways can we select five coins from a collection of 10 consisting of one penny, one nickel, one dime, one quarter, one half-dollar and 5 (IDENTICAL) Dollars ? For my answer, I used the logic, how many dollars are there in the 5 we choose? I added the case for 5 dollars, 4 dollars, 3 dollars and 2 dollars and 1 dollars and 0 dollars. $$C(5,5) + C(5,4) + C(5,3) + C(5,2) + C(5,1) + 1 = 32$$ which is the right answer ... but there has to be a shorter way, simpler way. I tried using the repetition formula that didn't pan out. If you could introduce me to a shorter way with explanation I appreciate it.
This is equivalent to counting the number of subsets of the non-dollar coins, because you have exactly 5 (pairwise distinct) non-dollar coins, and you are trying to select $5$ coins. Each possible selection is completely determined by the subset of $\{\text{penny}, \text{nickel}, \text{dime}, \text{quarter}, \text{half-dollar}\}$ that it contains. So you just need to count the number of subsets of a set of 5 elements, which is $2^{5} = 32$. That is, the dollar coins are really red herrings; all they do is fill up the space. The question would have the exact same answer if it had been phrased as "In how many different ways can you select coins from among a single penny, a single nickel, a single dime, a single quarter, and a single half-dollar?"
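Both counts can be checked mechanically; a short Python sketch (my own illustration):

```python
from itertools import combinations
from math import comb

small = ["penny", "nickel", "dime", "quarter", "half-dollar"]

# Case-by-case, as in the question: j distinct coins plus 5 - j identical dollars.
case_count = sum(comb(5, j) for j in range(6))

# Shortcut from the answer: selections correspond to subsets of the 5 distinct coins.
subset_count = sum(1 for j in range(6) for _ in combinations(small, j))

assert case_count == subset_count == 2 ** 5 == 32
```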
{ "language": "en", "url": "https://math.stackexchange.com/questions/90384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Weak convergence as convergence of matrix elements Let $H$ be a Hilbert space with orthonormal basis $(e_h)_{h \in \mathbb{N}}$ and let $(A_n)_{n \in \mathbb{N}}$ and $A$ be bounded linear operators. We say that $A_n$ converges weakly to $A$ if $$\forall \xi, \eta \in H, \quad (\eta, A_n\xi)\to (\eta, A\xi).$$ Question Is it true that $A_n$ converges weakly to $A$ if and only if $$\forall h, k \in \mathbb{N},\quad (e_h, A_n e_k) \to (e_h, Ae_k)?$$ Indeed, I'm wondering if it is correct to think at weak convergence as convergence of the matrix entries associated to the operators $A_n$ and $A$. Thank you.
If $\xi$ and $\eta$ are finite linear combinations of the basis vectors $e_k$, then the weak convergence as defined above follows directly from the convergence of the matrix elements, by linearity of the inner product and of the operators. The "if" part does not follow in general, however: convergence of the matrix elements only controls finitely many coordinates at a time, and without a uniform bound on $\|A_n\|$ it need not extend to arbitrary $\xi, \eta \in H$. For instance, the rank-one operators $A_n = n^2\,(e_n,\cdot)\,e_n$ satisfy $(e_h, A_n e_k) \to 0$ for every fixed $h,k$, but taking $\xi = \eta = \sum_k e_k/k$ gives $(\eta, A_n\xi) = 1$ for all $n$. This goes in the same direction as the person answering the question before me already said. You should easily find plenty of examples that disprove the "if" part.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is a measure for a sigma algebra determined by its values for a generator of the sigma algebra? If the value of a measure on any subset in a generator of a sigma algebra is known, will the measure for the sigma algebra also be uniquely determined? Thanks!
Consider flipping two coins. Let $A$ be the event that the first coin is heads, and $B$ the event that the second coin is heads. $A$ and $B$ together generate the $\sigma$-algebra of all possible events. Suppose we know that $P(A) = P(B) = 1/2$ (i.e. each coin is unbiased). This is not enough information to determine whether the two coins are independent, so $P$ is not completely determined. This is the same counterexample that I gave in this answer to another question. In its notation, $P$ and $Q$ agree on the events in $\mathcal{L}$, and $\sigma(\mathcal{L}) = \mathcal{F}$, but $P \ne Q$.
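The counterexample can be made fully concrete; here is a small Python sketch (my own illustration of the answer's two-coin example):

```python
from fractions import Fraction

half = Fraction(1, 2)
# Outcomes (first coin, second coin); H = heads, T = tails.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]

P = {o: Fraction(1, 4) for o in outcomes}                 # independent fair coins
Q = {("H", "H"): half, ("T", "T"): half,
     ("H", "T"): Fraction(0), ("T", "H"): Fraction(0)}    # perfectly correlated

def prob(mu, event):
    return sum(mu[o] for o in outcomes if event(o))

A = lambda o: o[0] == "H"   # first coin heads
B = lambda o: o[1] == "H"   # second coin heads

# P and Q agree on the generators A and B ...
assert prob(P, A) == prob(Q, A) == half
assert prob(P, B) == prob(Q, B) == half
# ... but disagree on the intersection, hence on the generated sigma-algebra.
assert prob(P, lambda o: A(o) and B(o)) != prob(Q, lambda o: A(o) and B(o))
```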
{ "language": "en", "url": "https://math.stackexchange.com/questions/90491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 6, "answer_id": 0 }
Determining the Value of a Gauss Sum. Can we evaluate the exact form of $$g\left(k,n\right)=\sum_{r=0}^{n-1}\exp\left(2\pi i\frac{r^{2}k}{n}\right) $$ for general $k$ and $n$? For $k=1$, on MathWorld we have that $$g\left(1,n\right)=\left\{ \begin{array}{cc} (1+i)\sqrt{n} & \ \text{when}\ n\equiv0\ \text{mod}\ 4\\ \sqrt{n} & \text{when}\ n\equiv1\ \text{mod}\ 4\\ 0 & \text{when}\ n\equiv2\ \text{mod}\ 4\\ i\sqrt{n} & \text{when}\ n\equiv3\ \text{mod}\ 4 \end{array}\right\} .$$ I know how to generalize to all $k$ when $n=p$ is a prime number, but what do we do when $n$ is not prime? Is there a simple way to rewrite it using whether or not $k$ is a square? I have a suspicion it should be fairly close to the form above, any help is appreciated. Thanks,
They do not provide a derivation, but this is actually written up in Wikipedia. I use the standard notation $e(x) = \exp(2 \pi i x)$. Assuming $\gcd(k,n) = 1$, we have $$ \sum_{x \in \mathbb{Z}/n \mathbb{Z}} e\left(\frac{kx^2}{n} \right) = \left\{ \begin{array}{lcl} \varepsilon_n \left( \frac{k}{n} \right) \sqrt{n} & & n \equiv 1 \pmod{2}\\ 0 & & n \equiv 2\pmod{4}\\ (1 + i) \varepsilon_k^{-1} \left(\frac{n}{k} \right)\sqrt{n} & & k \equiv 1 \pmod{2}, 4 \mid n \end{array}\right. $$ where $\left( \frac{\cdot}{\cdot} \right)$ denotes the Jacobi symbol and, for odd $m$, $$ \varepsilon_m = \left\{ \begin{array}{cc} 1 & & m \equiv 1 \pmod{4}\\ i & & m \equiv 3 \pmod{4} \end{array}\right. $$ When $\gcd(k,n) > 1$, write $k' = k/\gcd(k,n)$ and $n' = n/\gcd(k,n)$, to see that $$ \sum_{x \in \mathbb{Z}/n \mathbb{Z}} e\left(\frac{kx^2}{n} \right)= \gcd(k,n) \sum_{x \in \mathbb{Z}/ n' \mathbb{Z}} e\left(\frac{k' x^2}{n'} \right) $$ since the variable $x$ runs through $\gcd(k,n)$ copies of the above sum. Added: I looked through Iwaniec-Kowalski and they have a few notes on the sums, but nothing short worth noting here. These sums are discussed in Chapter 3 (around page 49) of their book. Anyway, I hope this answer is what you were looking for.
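For the $k=1$ case quoted in the question, the closed form is easy to verify numerically; a short Python sketch (my own check, not part of the answer):

```python
import cmath, math

def g(k, n):
    return sum(cmath.exp(2j * math.pi * k * r * r / n) for r in range(n))

def predicted(n):
    # The closed form for k = 1, according to n mod 4 (as quoted in the question).
    return {0: (1 + 1j) * math.sqrt(n), 1: math.sqrt(n),
            2: 0.0, 3: 1j * math.sqrt(n)}[n % 4]

for n in range(1, 60):
    assert abs(g(1, n) - predicted(n)) < 1e-8
```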
{ "language": "en", "url": "https://math.stackexchange.com/questions/90541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Isometries of $\mathbb{R}^3$ So I'm attempting a proof that isometries of $\mathbb{R}^3$ are the product of at most 4 reflections. Preliminarily, I needed to prove that any point in $\mathbb{R}^3$ is uniquely determined by its distances from 4 non-coplanar points, and then that an isometry sends non-coplanar points to non-coplanar points in $\mathbb{R}^3$. I've done the first preliminary step, and finished the proof assuming the second, but I can't find a simple way to prove the second... Intuitively it makes a lot of sense that non-coplanar points be sent to non-coplanar points, but every method I've stumbled upon to prove such has been quite heavy computationally... I know for example that any triangle chosen among the four points, A, B, C, D must be congruent to the triangles of their respective images, but what extra bit of information would allow me to say that the image of the whole configuration can't be contained in a single plane...
Isometries are affine and preserve volume: so if $A$, $B$, $C$, $D$ are non-coplanar, their convex hull $ABCD$ has non-zero volume, and the convex hull $f(A)f(B)f(C)f(D)$, which is the image of the convex hull $ABCD$ under $f$ (since $f$ is affine), has non-zero volume; therefore $f(A)$, $f(B)$, $f(C)$ and $f(D)$ must be non-coplanar.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is the Kendall tau distance a metric? So I am trying to see how the Kendall $\tau$ distance is considered a metric; i.e. that it satisfies the triangle inequality. The Kendall $\tau$ distance is defined as follows: $$K(\tau_1,\tau_2) = |\{(i,j): i < j, ( \tau_1(i) < \tau_1(j) \land \tau_2(i) > \tau_2(j) ) \lor ( \tau_1(i) > \tau_1(j) \land \tau_2(i) < \tau_2(j) )\}|$$ Thank you in advance.
Given two rankings $r_1,r_2$, define $S(r_1,r_2)$ as the set of pairs $\{i,j\}$ that are ranked the same by $r_1$ and $r_2$. If a pair is ranked the same by $r_1$ and $r_2$ and also by $r_2$ and $r_3$, then it is also ranked the same by $r_1$ and $r_3$. Therefore: $$S(r_1,r_3)\supseteq S(r_1,r_2)\cap S(r_2,r_3).$$ Let's denote by $N$ the total number of pairs (it is ${n\choose 2}$, where $n$ is the number of items). Then: $$|S(r_1,r_2)\cap S(r_2,r_3)| \geq |S(r_1,r_2)| + |S(r_2,r_3)| - N.$$ By definition, $K(r_1,r_2) = N-|S(r_1,r_2)|$. Hence: $$K(r_1,r_3) = N-|S(r_1,r_3)| \leq N - |S(r_1,r_2)| - |S(r_2,r_3)| + N \leq K(r_1,r_2)+K(r_2,r_3).~~~ \square$$
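The triangle inequality can also be stress-tested empirically; a small Python sketch (my own, not part of the proof) on random permutations:

```python
import random
from itertools import combinations

def kendall(r1, r2):
    # Number of pairs ordered oppositely by the two rankings.
    return sum(1 for i, j in combinations(range(len(r1)), 2)
               if (r1[i] - r1[j]) * (r2[i] - r2[j]) < 0)

random.seed(0)
n = 6
for _ in range(200):
    r1, r2, r3 = (random.sample(range(n), n) for _ in range(3))
    assert kendall(r1, r3) <= kendall(r1, r2) + kendall(r2, r3)
```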
{ "language": "en", "url": "https://math.stackexchange.com/questions/90661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Optimal Mixed Strategies? I'm trying to understand how I would find the optimal mixed strategies in zero sum games. For example... given the following zero sum game in standard strategic form... \begin{array}{r|r|} +8 & -2 \\ -4 & +20\\ \end{array} How would I find the optimal mixed strategy for the given player?
Suppose the row player chooses the first strategy (payoffs $+8$/$-2$) with probability $p$. For the optimal mixed strategy we want the expected payoff to be the same no matter which strategy the other player chooses. So $$ 8p -4(1-p) = -2p + 20(1-p) \Rightarrow p = \frac{12}{17}$$ gives the optimal mixed strategy.
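The equalization step can be written out with exact arithmetic; here is a Python sketch (my own illustration) solving the same equation and computing the game's value:

```python
from fractions import Fraction

# Payoff matrix for the row player, as in the 2x2 game above.
a11, a12 = 8, -2
a21, a22 = -4, 20

# Equalize expected payoffs: a11*p + a21*(1-p) = a12*p + a22*(1-p).
p = Fraction(a22 - a21, a11 - a12 - a21 + a22)
assert p == Fraction(12, 17)

# With this p, the row player's expected payoff (the game value) is the same
# against either column strategy.
value = a11 * p + a21 * (1 - p)
assert value == a12 * p + a22 * (1 - p) == Fraction(76, 17)
```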
{ "language": "en", "url": "https://math.stackexchange.com/questions/90778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of getting different results when tossing a coin Here's a question I got for homework: In every single time unit, Jack and John are tossing two different coins with P1 and P2 chances for heads. They keep doing so until they get different results. Let X be the number of tosses. Find the pmf of X (in discrete time units). What kind of distribution is it? Here's what I have so far: In every round (time unit) the possible results HH - p1p2 TT - q1q2 TH - q1p2 HT - q2p1 and so P(X=k) = ((p1p2 + q1q2)^(k-1))*(q1p2+q2p1) Which means we're dealing with a geometric distribution. What doesn't feel right is that the question mentions 'discrete time units'. That makes me think about a Poisson distribution, BUT - Poisson is all about number of successes in a time unit, while here we only have one round in every time unit. If I'm not too clear its only because I'm a little confused myself. Any hint would be perfect. Thanks in advance
The discrete time units refer to the series of flips. You can't have the process end after $1.5$ flips, only 1, 2, etc. Maybe that exponent $k-1$ will give you a hint to the name of the distribution.
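The pmf proposed in the question behaves exactly like a geometric distribution; a quick symbolic check (my own sketch, with hypothetical values for $P_1$ and $P_2$):

```python
from fractions import Fraction

# Hypothetical head probabilities for the two coins.
p1, p2 = Fraction(1, 3), Fraction(1, 4)
q1, q2 = 1 - p1, 1 - p2

same = p1 * p2 + q1 * q2   # a round with equal results (HH or TT)
diff = p1 * q2 + q1 * p2   # a round that ends the game (HT or TH)

def pmf(k):
    # P(X = k): k - 1 ties followed by one disagreement -- geometric.
    return same ** (k - 1) * diff

assert same + diff == 1
# Partial sums: sum_{k=1}^{K} same^(k-1) * diff = 1 - same^K, which tends to 1.
assert 1 - sum(pmf(k) for k in range(1, 100)) == same ** 99
```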
{ "language": "en", "url": "https://math.stackexchange.com/questions/90839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is the definite integral over a continuous function always finite? eg. is $$\int_a^b{f(x)}\mathrm{d}x$$ always finite for a continuous function $f:\mathbb R\rightarrow\mathbb R$ ? If not are there any particular constraints that $f$ must obey for this to be true?
Since $[a,b] = \{ x \in \mathbb{R} \;|\; a \leq x \leq b \}$ is a compact set, any continuous function restricted to $[a,b]$ attains a minimum and maximum value on $[a,b]$ (this is the extreme value theorem). Thus there exist numbers $m,M \in \mathbb{R}$ such that $m \leq f(x) \leq M$ for all $a \leq x \leq b$. Thus we have $$ m(b-a) = \int_a^b m\;dx \leq \int_a^b f(x)\;dx \leq \int_a^b M\;dx = M(b-a) $$ so the integral is always finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/90939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
What function does $\sum \limits_{n=1}^{\infty}\frac{1}{n3^n}$ represent, evaluated at some number $x$? I need to know what the function $$\sum \limits_{n=1}^{\infty}\frac{1}{n3^n}$$ represents evaluated at a particular point. For example if the series given was $$\sum \limits_{n=0}^{\infty}\frac{3^n}{n!}$$ the answer would be $e^x$ evaluated at $3$. Yes, this is homework, but I'm not looking for any handouts, any help would be greatly appreciated.
We have $ \displaystyle -\ln(1-x) = \sum_{n \geq 1 } \frac{x^n}{n} $, valid for $|x|<1$. So this is simply $f(x)=-\ln(1-x)$ evaluated at $x=\frac{1}{3}$, which gives the value $-\ln(1-\frac{1}{3})=\ln(\frac{3}{2})$.
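A one-line numerical check (my own sketch) confirms the value:

```python
import math

# Partial sum of sum_{n>=1} x^n/n at x = 1/3 approaches -ln(1 - 1/3) = ln(3/2).
partial = sum(1 / (n * 3 ** n) for n in range(1, 60))
assert abs(partial - math.log(3 / 2)) < 1e-12
```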
{ "language": "en", "url": "https://math.stackexchange.com/questions/90991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A group of order $195$ has an element of order $5$ in its center Let $G$ be a group of order $195=3\cdot5\cdot13$. Show that the center of $G$ has an element of order $5$. There are a few theorems we can use here, but I don't seem to be able to put them together quite right. I want to show that the center of $G$ is divisible by the prime number $5$. If this is the case, then we can apply Cauchy's theorem and we are done. By Sylow's theorems we get that there are unique $3$-Sylow, $5$-Sylow, and $13$-Sylow subgroups in $G$. Since they are of prime order, they are abelian. Furthermore, their intersection is trivial (by a theorem I beleive). Does this then guarantee that $G=ABC$ and that $G$ is abelian?
Here is one more way to solve this problem. By Sylow's theorem, the $5$-Sylow subgroup $P$ and the $13$-Sylow subgroup $Q$ are both normal in $G$. Then $G/Q$ is cyclic, since any group of order $15$ is. In particular it is abelian, so $G' \leq Q$. This shows that $P \cap G' = \{ 1 \}$. From this it follows that $P$ is central. If $p \in P$ and $g \in G$, we have $p^{-1}g^{-1}pg \in P \cap G'$, so $pg = gp$. In general it is true that if $P \trianglelefteq G$ and $P \cap G' = \{1\}$, then $P$ is central.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
How to prove that $\lim \limits_{n \to \infty}(1+a_n)^n=e^p$ when $a_n \to 0$ and $n a_n \to p$? Assume that $(a_n)$ is a sequence of complex numbers and $p \in \mathbb{C}$. How to prove, maybe without complex logarithm if possible, that: If $a_n \rightarrow 0$ and $n a_n \rightarrow p$ then $(1+a_n)^n\to e^p$. Such theorem is used for example in probability in proofs of some limits theorems.
Edited: too long for a comment, so I expanded the first "real" solution. $$(1+a_n)^n= \sum_{k=0}^n \frac{n!}{k! (n-k)!}a_n^k \,.$$ $$e^{na_n}= \sum_{k=0}^n \frac{n^k}{k!}a_n^k + \sum_{k=n+1}^\infty \frac{n^k}{k!}a_n^k\,.$$ Thus, $$ \left|(1+a_n)^n- e^{na_n} \right| \leq \left| \sum_{k=0}^n \frac{n(n-1)\cdots(n-k+1)-n^k}{k!}a_n^k \right| +\left| \sum_{k=n+1}^\infty \frac{n^k}{k!}a_n^k\right|$$ The second term converges to $0$: since $na_n\to p$ is bounded, it is dominated by the tail of the (uniformly convergent on bounded sets) Taylor series of the exponential. The first one is $$\left| \sum_{k=0}^n \frac{\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)-1}{k!}(na_n)^k \right|,$$ which should be easy to show converges to $0$. Finally, since $na_n\to p$ and $e^z$ is continuous, $e^{na_n}\to e^p$, and the claim follows.
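For real sequences the statement is easy to test numerically; here is a Python sketch (my own, using a hypothetical sequence $a_n = p/n + 1/n^2$, which satisfies $a_n\to 0$ and $na_n\to p$; the theorem itself also covers complex $p$):

```python
import math

def power_with(p, n):
    # A hypothetical real sequence with a_n -> 0 and n * a_n -> p.
    a_n = p / n + 1.0 / n ** 2
    return (1 + a_n) ** n

for p in (0.5, 1.0, -2.0, 3.0):
    assert abs(power_with(p, 10 ** 6) - math.exp(p)) < 1e-3
```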
{ "language": "en", "url": "https://math.stackexchange.com/questions/91099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Question on finding eigenvalues of another linear transformation I have a quick question on finding eigenvectors for a linear transformation. I'm given: $T(A) = A^t$ where $A = M_2$ i.e. a $2 \times 2$ matrix consisting of real numbers. So the general approach is to solve $T(v) = \lambda v$, and from there I set up: $$ \left\{\begin{align*} a &= \lambda a \\ d &= \lambda d \\ b &= \lambda c \\ c &= \lambda b \end{align*}\right. $$ which suggests that that $\lambda = 1$ is the only eigenvalue. Given that, I can verify that $T(v) = \lambda v$ holds when $\lambda = 1$ and $b = c$. But I'm not sure how to describe the eigenvector. How do I put it into terms? I was thinking that a basis for the solution set would be: $$a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\\ 1 & 0 \end{pmatrix} $$ but that's not an eigenvector. How exactly would I describe the eigenvector for $\lambda = 1$? Thanks in advance. Edit: So if you have a matrix with two eigenvectors, both of which are 2 x 2 matrices, how would you go about diagonalizing it? i.e. if you have to solve $\Lambda = SAS^{-1}$, how do you determine S?
It seems that you're confused by the two different uses of the term "vector". In one use, a vector is a column or row of numbers, a matrix one of whose dimensions is $1$. In another use, a vector is any element of any vector space. It is this second sense of the term that's being used when we say that the matrices you describe in your last displayed equation are the eigenvectors corresponding to the eigenvalue $1$ of this linear transformation. The equation $\Lambda = SAS^{-1}$ doesn't make sense here since you're examining the eigenvalues and eigenvectors of $T$, not of $A$, so you want $\Lambda = STS^{-1}$. You can choose a basis for the vector space of $2\times2$ matrices, for instance the canonical basis, and then express $T$ as a $4\times4$ matrix with respect to this basis. If you also express the eigenvectors you found (three for $1$ and one for $-1$) in this basis, you'll find that they are eigenvectors, in the first sense of the word, of the $4\times4$ matrix representing $T$, and that a $4\times4$ matrix formed out of them diagonalizes the matrix representing $T$.
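Concretely, in the canonical basis $(E_{11}, E_{12}, E_{21}, E_{22})$ the transformation $T$ becomes a $4\times4$ matrix, and the eigen-matrices become ordinary column eigenvectors. A small pure-Python sketch (my own illustration):

```python
# T(A) = A^T on 2x2 matrices, written in the canonical basis
# (E11, E12, E21, E22): transposition swaps the E12 and E21 coordinates.
T = [[1, 0, 0, 0],
     [0, 0, 1, 0],
     [0, 1, 0, 0],
     [0, 0, 0, 1]]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# Eigenvectors: E11, E22, E12 + E21 (eigenvalue 1) and E12 - E21 (eigenvalue -1);
# a matrix S with these four columns diagonalizes T.
eigenpairs = [([1, 0, 0, 0], 1), ([0, 0, 0, 1], 1),
              ([0, 1, 1, 0], 1), ([0, 1, -1, 0], -1)]
for v, lam in eigenpairs:
    assert apply(T, v) == [lam * x for x in v]
```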
{ "language": "en", "url": "https://math.stackexchange.com/questions/91158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is my trig result unique? I recently determined that for all integers $a$ and $b$ such that $a\neq b$ and $b\neq 0$, $$ \arctan\left(\frac{a}{b}\right) + \frac{\pi}{4} = \arctan\left(\frac{b+a}{b-a}\right) $$ This implies that 45 degrees away from any angle with a rational value for tangent lies another angle with a rational value for tangent. The tangent values are related. If anyone can let me know if this has been done/shown/proven before, please let me know. Thanks!
Quoting from Wikipedia's list of trigonometric identities: BEGIN QUOTE $$ f(x) = \frac{(\cos\alpha)x - \sin\alpha}{(\sin\alpha)x + \cos\alpha}, $$ [$\ldots\ldots$ some material omitted here $\ldots\ldots$] If $x$ is the slope of a line, then $f(x)$ is the slope of its rotation through an angle of $-\alpha$. END QUOTE Dividing the numerator and denominator by $\cos\alpha$ and taking $\alpha=-\frac\pi4$ gives the same result as is posted here.
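A caveat worth checking numerically: the posted equality holds exactly when both sides lie in $(-\pi/2,\pi/2)$, e.g. for $0 \le a < b$; when $a > b > 0$ the two sides differ by $\pi$ (the identity holds modulo $\pi$, matching the slope interpretation). A short Python sketch (my own):

```python
import math

# Spot-check arctan(a/b) + pi/4 = arctan((b+a)/(b-a)) for 0 <= a < b,
# where both sides lie in (-pi/2, pi/2).
for a in range(0, 5):
    for b in range(a + 1, 7):
        lhs = math.atan(a / b) + math.pi / 4
        rhs = math.atan((b + a) / (b - a))
        assert abs(lhs - rhs) < 1e-12

# For a > b the two sides differ by exactly pi.
lhs = math.atan(2 / 1) + math.pi / 4
rhs = math.atan((1 + 2) / (1 - 2))
assert abs(lhs - rhs - math.pi) < 1e-12
```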
{ "language": "en", "url": "https://math.stackexchange.com/questions/91212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Prove that $\mathbb{Z}_p^{\times}/(\mathbb{Z}_p^{\times})^2$ is isomorphic to $\{\pm1\}$. Prove that $\mathbb{Z}_p^{\times}/(\mathbb{Z}_p^{\times})^2$ is isomorphic to $\{\pm1\}$, where $p$ is a prime integer.
Assume $p$ is odd (for $p=2$ the group $\mathbb{Z}_2^\times$ is trivial, and so is the quotient). Mod $p$, there is an equal number of quadratic residues and nonresidues (ignoring 0 of course). Therefore, the index of $(\mathbb{Z}_p^\times)^2$ in $\mathbb{Z}_p^\times$ is 2, and so the quotient is isomorphic to $\{\pm 1\}$, as this is the only group of order 2. Edited to only use words from group theory: Note that $\mathbb{Z}_p^\times$ is cyclic of order $p-1$. We'll show that the subgroup $(\mathbb{Z}_p^\times)^2$, which is the set of squares in $\mathbb{Z}_p^\times$, has index 2. Then the quotient is $\{\pm 1\}$, since this is the only group of order 2. Let $a$ be a generator for $\mathbb{Z}_p^\times$. Then the elements of $\mathbb{Z}_p^\times$ are $a,a^2,\ldots,a^{p-1}=1$. If $k$ is even, then $a^k = (a^{k/2})^2$ is a square. However if $k$ is odd, then $a^k$ is not a square, for if $a^k = b^2$ for some $b \in \mathbb{Z}_p^\times$ and $k$ odd, then $b= a^j$ for some $j$, and so $a^k = a^{2j}$. But then $k -2j\equiv 0$ mod $p-1$, which is impossible since $k-2j$ is odd. Since $p-1$ is even, there are the same number of elements of $\mathbb{Z}_p^\times$ which are even and odd powers of $a$, so $(\mathbb{Z}_p^\times)^2$ has index 2.
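A brute-force check of the index computation for a few small odd primes (a sketch, not part of the proof):

```python
# The nonzero squares mod p form a subgroup of index 2 in (Z/pZ)^x,
# so the quotient has order 2 -- the group {+1, -1}.
def squares_mod(p):
    return {x * x % p for x in range(1, p)}

for p in [3, 5, 7, 11, 13, 17, 19, 23, 101]:
    sq = squares_mod(p)
    assert len(sq) == (p - 1) // 2               # exactly half are squares
    assert len(set(range(1, p)) - sq) == (p - 1) // 2
```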
{ "language": "en", "url": "https://math.stackexchange.com/questions/91276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving the Cantor Pairing Function Bijective How would you prove the Cantor Pairing Function bijective? I only know how to prove a bijection by showing (1) If $f(x) = f(y)$, then $x=y$ and (2) There exists an $x$ such that $f(x) = y$ How would you show that for a function like the Cantor pairing function?
Let us begin with a general theorem. Theorem 1: For each integer $k \ge 0$ let there be given a nonempty finite set $D_k$ that is totally ordered by a relation $ \le_k$, and that $D_j \cap D_k = \emptyset$ when $j \ne k$. Then there exists a 'natural' bijective correspondence $\tag 1 C: \bigcup D_k \to \mathbb N$ Proof We define $C$ recursively on one $D_k$ 'piece' at a time, starting with an increasing order isomorphism $C_0: D_0 \to [0, j_0]$ where $D_0$ has $j_0 + 1$ elements. We can continue in a natural way, extending $C_0$ to a bijective function $C_1: D_0 \cup D_1 \to [0, j_1]$ where the cardinality of $D_0 \cup D_1$ is $j_1 + 1$. We can continue in this way, defining bijective maps $C_k$. $C$ is the direct limit of the $C_k$ mappings. $\qquad \blacksquare$ For each integer $k \ge 0$ define $\tag 2 D_k = \{(m,n) \in \mathbb N \times \mathbb N \; | \: m + n = k\}$ It is easy to see that these finite sets partition $\mathbb N \times \mathbb N$. We also have two simple ways of ordering $D_k$. So we insist that $(k,0)$ is the smallest element, followed by $(k-1,1)$, $(k-2,2)$, and so on. By theorem 1, a bijective correspondence naturally follows between $\mathbb N \times \mathbb N$ and $\mathbb N$. To make it explicit using arithmetic, you need to count things. First you have to know how many elements are in each $D_k$ and then the number of elements $j_k + 1$ in the domain of $C_k$. If you work this out, you will be looking for a formula to add up $1 + 2 + 3 \dots + n$. Proposition 2: The Cantor pairing function is a bijection. Proof Let $(m,n)$ belong to $D_k$. We already have $C_{k-1}$ (see Theorem 1) mapping $D_0 \cup D_1 \cup D_2 \dots \cup D_{k-1}$ onto an initial segment of $\mathbb N$ with exactly $1 + 2 + \dots + k$ integers. 
Since $m+n = k$ and $0$ is in the range of $C_{k-1}$, the restriction map $C_{k-1}$ reaches a maximum integer value of $\tag 3 \frac{(m+n)(m+n+1)}{2} - 1$ Now if $(m,n) = (k,0)$, the first element of $D_k$, we would add $1$ to the expression (3). A moment's thought and you can see that moving along $D_k$ in $1 \text{-step}$ increments is exactly defined by the quantity $n$. So $\tag 4 C_k(m,n) = \frac{(m+n)(m+n+1)}{2} + n$ But this formula does not depend on $k$, so (4) defines the bijective Cantor Pairing Function. $\qquad \blacksquare$ Using this approach, it is not necessary to check for injectivity or surjectivity or to find an inverse function. That is taken care of by the general construction of $C$ in theorem 1.
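Formula (4) can be checked mechanically: listing $D_0, D_1, \dots$ in the order $(k,0), (k-1,1), \dots, (0,k)$ should enumerate $\mathbb N$ without gaps or repeats. A short sketch:

```python
# The formula from (4): C(m, n) = (m+n)(m+n+1)/2 + n.
def C(m, n):
    return (m + n) * (m + n + 1) // 2 + n

# Bijectivity on an initial segment: the pairs with m + n < N, taken
# diagonal by diagonal, map exactly onto {0, 1, ..., N(N+1)/2 - 1}.
N = 50
values = [C(m, n) for k in range(N)
          for m, n in ((k - j, j) for j in range(k + 1))]
assert values == list(range(N * (N + 1) // 2))
```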
{ "language": "en", "url": "https://math.stackexchange.com/questions/91318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
Characteristic equation of a recurrence relation Yesterday there was a question here based on solving a recurrence relation and then when I tried to google for methods of solving recurrence relations, I found this, which gave different ways of solving simple recurrence relations. My question is how do you justify writing the recurrence relation in its characteristic equation form and then solving for its roots to get the required answer. For example, Fibonacci relation has a characteristic equation $s^2-s-1=0$. How can we write it as that polynomial?
We aren't writing the Fibonacci relation as $s^2-s-1=0$, we are noting a relation between that equation and the Fibonacci relation. We are noting that if $r$ is a solution of the characteristic equation, then $r^n$ is a solution of the recurrence relation.
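A numeric illustration with the Fibonacci relation: if $r$ is a root of $s^2-s-1=0$, then $r^n$ satisfies $a_n=a_{n-1}+a_{n-2}$ (divide $r^n=r^{n-1}+r^{n-2}$ by $r^{n-2}$ to recover the characteristic equation), and linear combinations of the two root solutions recover the Fibonacci numbers via Binet's formula.

```python
# Roots of the characteristic equation s^2 - s - 1 = 0.
r1 = (1 + 5 ** 0.5) / 2   # golden ratio
r2 = (1 - 5 ** 0.5) / 2

# Each power sequence r^n satisfies the Fibonacci recurrence.
for r in (r1, r2):
    for n in range(2, 20):
        assert abs(r ** n - (r ** (n - 1) + r ** (n - 2))) < 1e-9

# Binet: the Fibonacci numbers are a combination of the two root solutions.
def fib(n):
    return round((r1 ** n - r2 ** n) / 5 ** 0.5)

assert [fib(n) for n in range(10)] == [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```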
{ "language": "en", "url": "https://math.stackexchange.com/questions/91379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Order of Multiplication in Matrix Multiplication So I have a general question on matrix multiplication. I know the order of multiplication matters, except if they're invertible. So, consider something like $A(X+B)C = I$. If $\mathbf{A, B, C}$ are invertible, is: $A^{-1}A(X+B)C = A^{-1}$ equivalent to $A(X+B)CA^{-1} = A^{-1}$ ? If so, can I do a similar thing for C and say: $X = C^{-1}A^{-1}-B$ ? Furthermore, that's not necessarily equivalent to $X = A^{-1}C^{-1}-B$ right? In general, how do you determine the order of multiplication when simplifying a system? Thanks.
In general, don't assume that matrices commute. Almost always, they don't. Even if the three matrices $A, B, C$ did have some sort of condition that implied that they commute (perhaps being diagonal), you don't know about $X$ and still can't commute over the $(X + B)$ term. Here, only $X = A^{-1}C^{-1}-B$ is correct. But to answer your question, "How do you determine the order of multiplication?" I remind you that matrix multiplication is associative, so you can do the 'order' in any way you want as long as you always remember what's on the left and what's on the right. That is, $A(BC) = (AB)C$ and so on.
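A concrete sketch with arbitrarily chosen invertible $2\times2$ matrices (these particular $A$, $B$, $C$ are illustrative, not from the question): only $X=A^{-1}C^{-1}-B$ satisfies $A(X+B)C=I$, and the commuted variant fails because $A^{-1}$ and $C^{-1}$ need not commute.

```python
def mul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(P):                     # inverse of a 2x2 matrix via the adjugate
    a, b = P[0]; c, d = P[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def sub(P, Q):
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

def add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 2], [0, 1]]
C = [[1, 1], [1, 2]]
I = [[1.0, 0.0], [0.0, 1.0]]

X_good = sub(mul(inv2(A), inv2(C)), B)   # A^{-1} C^{-1} - B
X_bad = sub(mul(inv2(C), inv2(A)), B)    # C^{-1} A^{-1} - B (wrong order)

assert mul(mul(A, add(X_good, B)), C) == I
assert mul(mul(A, add(X_bad, B)), C) != I
```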
{ "language": "en", "url": "https://math.stackexchange.com/questions/91439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Generalization of manifold Is there a generalization of the concept of manifold that captures the following idea: Consider a sphere that instead of being made of a smooth material is actually made up of a mesh of thin wire. Now for certain beings living on the sphere the world appears flat and 2D, unware that they are actually living on a mesh, but for certain other smaller beings, the world appears to be 1D most of the time (because of the wire mesh).
I think that the concept that you want is that of a stratified space. Particularly, the stratification of a manifold by a piecewise linear (PL) decomposition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find the ordinary generating function $h(z)$ for a Gambler's Ruin variation. Assume we have a random walk starting at 1 with probability of moving left one space $q$, moving right one space $p$, and staying in the same place $r=1-p-q$. Let $T$ be the number of steps to reach 0. Find $h(z)$, the ordinary generating function. My idea was to break $T$ into two variables $Y_1,Y_2$ where $Y_1$ describes the number of times you stay in place and $Y_2$ the number of times we move forward or backward one. Then try to find a formula for $P(T=n)=P(Y_1+Y_2=n)=r_n$, but I'm getting really confused since there are multiple probabilities possible for each $T=n$ for $n\geq 3$. Once I have $r_n$ I can then use $h_T(z)=\sum_{n=1}^\infty r_n z^n$, but I'm not sure where to go from here.
I do not have the references to find the generating function of $Y_2$ with me, and it is rather tedious to re-do those computations. I'll skip this part, and only find the generating function of $T$ provided the generating function of $Y_2$ is known. Let $F (z) = E (z^{Y_2})$ be the generating function of $Y_2$. First, let us write the generating function of $T$ and change the indices of the summation: $$h (z) = \sum_{N=1}^{+ \infty} P (T = N) z^N = \sum_{N=1}^{+ \infty} \sum_{k=0}^{N-1} P (Y_2 = N-k, Y_1 = k) z^N = \sum_{n=1}^{+ \infty} \sum_{k=0}^{+ \infty} P (Y_2 = n, Y_1 = k) z^{n+k}.$$ Note that, if $Y_2$ is known, the law of $Y_1$ is that of a sum of $Y_2$ i.i.d. random variables of geometric law: before each move, you spend a geometric time (of success parameter $p+q$, since each step is a stay with probability $1-p-q$) waiting before taking the next step. Since the law of $Y_1$ knowing $Y_2$ is well-known, let us condition over the value of $Y_2$. $$h (z) = \sum_{n=1}^{+ \infty} P (Y_2 = n) z^n \sum_{k=0}^{+ \infty} P (Y_1 = k | Y_2 = n) z^k.$$ The generating function of a single such geometric random variable is: $$g (z) = \sum_{k=0}^{+ \infty} (p+q) (1-p-q)^k z^k = \frac{p+q}{1-(1-p-q)z}.$$ The generating function of the sum of $n$ independent random variables with the same law is $g^n$ (generating functions are really, really nice), so that: $$\sum_{k=0}^{+ \infty} P (Y_1 = k | Y_2 = n) z^k = g(z)^n = \left( \frac{p+q}{1-(1-p-q)z} \right)^n.$$ If we inject this expression into the formula for $h$, we get: $$h (z) = \sum_{n=1}^{+ \infty} P (Y_2 = n) z^n g(z)^n = F (z g(z)) = F \left( \frac{(p+q)z}{1-(1-p-q)z} \right).$$ The last step is to find $F$. But I hope this clears up your confusion. Edit : fixed the confusion between geometric and exponential laws. I also want to stress the point that finding $F$ is feasible, the method is well-known, but this is another matter (besides, I don't think that your problem is there).
I prefer to keep this answer rather short, and if necessary I will add something about $F$. Later.
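As an independent sanity check (a sketch, not part of the original answer): the structural identity $h(z)=F(zg(z))$ can be verified numerically by truncated dynamic programming, taking $g(z)=(p+q)/(1-(1-p-q)z)$ for the generating function of the number of stays before a move (each step is a stay with probability $r=1-p-q$, and a move with probability $p+q$).

```python
# Truncated DP for E[z^T]: `dist` holds the not-yet-absorbed positions of
# the walk started at 1; absorption happens on a left step from position 1.
def gen_fun(p_right, p_left, p_stay, z, steps=400):
    dist = {1: 1.0}
    total = 0.0
    for t in range(1, steps + 1):
        total += dist.get(1, 0.0) * p_left * z ** t   # absorbed at time t
        new = {}
        for pos, pr in dist.items():
            for prob, step in ((p_right, 1), (p_stay, 0), (p_left, -1)):
                if prob > 0 and pos + step > 0:
                    new[pos + step] = new.get(pos + step, 0.0) + pr * prob
        dist = new
    return total

p, q, r = 0.3, 0.4, 0.3
z = 0.5
g = (p + q) / (1 - r * z)           # g(z) as above
h = gen_fun(p, q, r, z)             # E[z^T] for the full walk
# F evaluated at z*g(z): the embedded walk moves right/left with
# probabilities p/(p+q) and q/(p+q), and never stays.
F_at = gen_fun(p / (p + q), q / (p + q), 0.0, z * g)
assert abs(h - F_at) < 1e-9
```

The truncation error is of order $z^{400}$, far below the tolerance; changing the plugged-in $g$ breaks the match, so the test is discriminating.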
{ "language": "en", "url": "https://math.stackexchange.com/questions/91556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Composed Covers I have problems solving this seemingly straightforward question. Let $q : X \rightarrow Z$ be a covering space. Let $p : X \rightarrow Y$ be a covering space. Suppose there is a map $r : Y \rightarrow Z$ such that $q = r \circ p$. Show that $r : Y \rightarrow Z$ is a covering space. Could someone give me a hint? Of course I should pick some covering definition and show that $r$ indeed satisfies this. Thank you
We will suppose that our spaces are locally connected, so that connected components are open and closed. The space $Z$ can be covered by open connected subsets $U$ over which $q$ is trivial, and since the restriction $res(p): p^{-1}(r^{-1}(U)) = q^{-1}(U) \to r^{-1}(U)$ is still a covering, we may and will henceforth assume that $q$ is a trivial covering and that $Z$ is connected. The core of the proof Take a connected component $V\subset X$ (a sheet of the trivial covering $q$). Its image $p(V)$ will be a connected component of $Y$, according to Spanier's Algebraic Topology, Chap.3, Theorem 14, page 64. But then $res(r):p(V)\to Z$ is a homeomorphism and since, by surjectivity of $p$, the space $Y$ is a disjoint union of such $p(V)$, the map $r:Y\to Z$ is a trivial covering whose sheets are exactly the connected components of $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do I find the equation of a parabola given the max, and two points? I was given the points (2, -1) and (10,-1) and also a max of 4. How would I go about finding the equation of the parabola given this info?
The above answers are using the following form for the equation of a parabola: If the vertex of the parabola is located at $(h,k)$, then the equation of the parabola is $$\tag{1}y=a(x-h)^2+k$$ for some constant $a$. That this holds can be seen by taking the graph of the parabola $y=ax^2$ and translating it $h$ units to the right and $k$ units up. Some notes: 1) The line $x=h$ is the line of symmetry of the parabola. 2) $a$ is the "scaling factor". The larger $a$ is in absolute value, the "narrower" the parabola is. 3) If $a>0$, the parabola opens up and $k$ is the minimum value of the $y$ coordinates on the parabola 4) If $a<0$, the parabola opens down and $k$ is the maximum value of the $y$ coordinates on the parabola An example is shown below. As made clear by Ross and Jonas, you should find the vertex first, then write the equation in the form (1) above. There will still be the unknown $a$ in the equation at this point. To find it, substitute the information given by one of the given points into the equation and solve for $a$.
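Applying this recipe to the numbers in the question: by symmetry the vertex lies midway between $(2,-1)$ and $(10,-1)$, so $(h,k)=(6,4)$, and $a$ is fixed by substituting one of the given points. A small sketch:

```python
# Vertex from symmetry, then solve -1 = a(2-6)^2 + 4 for a.
h, k = (2 + 10) / 2, 4
a = (-1 - k) / (2 - h) ** 2

def parabola(x):
    return a * (x - h) ** 2 + k

assert a == -5 / 16
assert parabola(2) == -1 and parabola(10) == -1
assert parabola(6) == 4                      # the maximum
```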
{ "language": "en", "url": "https://math.stackexchange.com/questions/91683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to show the ascending chain condition Let $(A,\le)$ be a poset. Suppose that for any $a < b \in A$ and for any chain $Q$ of $A$ whose minimum and maximum are $a$ and $b$ respectively, $Q$ is finite. Let $C$ be the set of all chains of $A$, ordered under inclusion. I'd like to show that the poset $(C,\subset)$ satisfies the ascending chain condition (ACC). I tried to show it by contradiction. Negating the ACC, we obtain the existence of an infinite sequence $(Q_i)_{i\in\mathbb{N}}$ of elements of $C$ such that $Q_0 \subsetneq Q_1 \subsetneq Q_2 \subsetneq \cdots$. I think this contradicts the fact that all elements of $C$ are finite, but I cannot show it rigorously. EDIT: What I really wanted to prove was that $(C, \subset)$ satisfies the ACC, where $C$ is the set of chains whose minimum and maximum are $a$ and $b$ respectively. I'm sorry, but the answers seem to still hold.
Note that the increasing union of chains is a chain. Formally: Suppose for $i\in\mathbb N$, $Q_i$ is a chain in $(A,\le)$ and for $i < j$ we have $Q_i\subseteq Q_j$, then $\bigcup Q_i=Q$ is a chain. Proof: Given $a,b\in Q$ then for some $i,j$ we have $a\in Q_i$ and $b\in Q_j$. Since either $Q_i\subseteq Q_j$ or $Q_j\subseteq Q_i$ we have that $a,b\in Q_{\max\{i,j\}}$ and therefore $a\le b$ or $b\le a$. Thus, $Q$ is a chain. By the same argument you can also show that if $a$ is the minimum of $Q_i$ and $b$ is the maximum of $Q_i$, for all $i\in\mathbb N$ then $a,b$ are also the minimum/maximum of $Q$. Now if you have an infinite chain of strict inclusions then there is some $a_i\in Q_i\setminus\bigcup_{j<i}Q_j$ such that $a<a_i<b$ (simply because there is a strict inclusion between the $Q_i$'s). The collection $\{a_i\mid i\in\mathbb N\}\subseteq Q=\bigcup Q_i$, and $a_i\neq a_j$ since if $j<i$ then $a_i\notin Q_j$, but $a_j\in Q_i$. This means that the chain $Q$ is infinite, and its minimum is $a$ and its maximum is $b$ - which contradicts the property of every chain strictly between $a$ and $b$ is finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Euler characteristic of a space minus a point Let $X$ be a topological space and $*$ be the base point of $X$. How does $\chi(X-*)$ relate to $\chi(X)$ do we have $\chi(X-*)=\chi(X)-\chi(*)=\chi(X)-1$?
This additivity property is true for the compactly supported Euler characteristic $\chi_c$, that is, the alternating sum of the dimensions of the cohomology groups with compact support. More generally one has $$\chi_c(X) = \chi_c(X \setminus Z) + \chi_c(Z)$$ for a closed subset $Z$ of $X$. In this sense the compactly supported Euler characteristic is nicer, but there are other drawbacks, such as the fact that cohomology with compact support is not a homotopy invariant. On a manifold $M$ of even dimension $2n$, the compactly supported and the ordinary Euler characteristic coincide by Poincaré duality, interpreted as the assertion that there is a perfect pairing between $H^i(M)$ and $H^{2n-i}_c(M)$. This generalizes Poincaré duality on a closed manifold, since on a closed manifold the ordinary cohomology and the cohomology with compact support coincide. This also tells you that on a manifold of odd dimension, the two Euler characteristics only differ by a sign. To learn about this and much more, see e.g. the book by Bott and Tu.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91854", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Finding angles in a parallelogram without trigonometry I'm wondering whether it's possible to solve for $x^{\circ}$ in terms of $a^{\circ}$ and $b^{\circ}$ given that $ABCD$ is a parallelogram. In particular, I'm wondering if it's possible to solve it using only "elementary geometry". I'm not sure what "elementary geometry" would usually imply, but I'm trying to solve this problem without trigonometry. Is it possible? Or if it's not, is there a way to show that it isn't solvable with only "elementary geometry" techniques?
In the case $a = \pi/2$, $x = \arccos \left( 2\,{\frac {\sin \left( b \right) }{\sqrt {4-3\, \cos^2 \left( b \right) }}} \right)$. This is not an algebraic function of $b$, because its derivative is $\frac{dx}{db} = \frac{2}{3 \cos^2 b - 4}$ for $-\pi/2 < b < \pi/2$, and $\cos(b)$ is not an algebraic function.
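Both claims can be spot-checked numerically (a sketch; the stated derivative is compared against a central difference of the closed form):

```python
from math import acos, sin, cos, sqrt

# Closed form for x(b) in the case a = pi/2.
def x_of(b):
    return acos(2 * sin(b) / sqrt(4 - 3 * cos(b) ** 2))

# Check dx/db = 2 / (3 cos^2 b - 4) on a few points in (-pi/2, pi/2).
eps = 1e-6
for b in [-1.0, -0.3, 0.1, 0.7, 1.2]:
    numeric = (x_of(b + eps) - x_of(b - eps)) / (2 * eps)
    claimed = 2 / (3 * cos(b) ** 2 - 4)
    assert abs(numeric - claimed) < 1e-5
```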
{ "language": "en", "url": "https://math.stackexchange.com/questions/91925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 2 }
Give an explicit embedding from $\mathbb{R}P_2$ to $\mathbb{R}^4$ I have heard that the least dimension $m$ required for $\mathbb{R}P_2$ to be embedded in the Euclidean space is 4, thus I wanted to find an explicit formula for it. I found two possible strategies, but am not sure that they'll work. * *Define $\phi([x_1,x_2,x_3])=(|x_1|,|x_2|,|x_3|,x_1x_2+x_2x_3+x_3x_1)$, where $[x_1,x_2,x_3]$ is the eq.class under quotient from $S^2$. I hope that would be an embedding, since the last is a symmetric polynomial which is equal for $(x_1,x_2,x_3)$ and $-(x_1,x_2,x_3)$. *My second strategy is a more geometric approach. Note that the projective plane with an open disk deleted is a Möbius band $M$. Thus if I could "paste" the boundary of a closed disk (via the fourth dimension) onto the boundary circle of a Möbius Band, then I'm done. But since I'm no good at "visualizing" four dimensions, I don't know exactly how to proceed. My question is: 1) is 1. an embedding or not? (Or give another more elegant imbedding) 2) is there a way to realize my second approach? or is it hopeless? 3) Is there a more systematic way of doing such embeddings? (Frankly, if our world were $\mathbb{R}^2$, I would probably not even be able to imagine how to imbed the torus into $\mathbb{R}^3$)
You can do #2 by what are called "Movie moves". Think of $\mathbb{R}^4 = \mathbb{R}^3 \times \mathbb{R}$. Then in level $\mathbb{R}^3\times 0$ embed the Möbius band. Smoothly propagate the boundary of the Möbius band to $\mathbb{R}^3\times \epsilon$. Now, the boundary of the Möbius band is a smooth unknot so we can insert the isotopy to the standard unknot in $\mathbb{R}^3\times [\epsilon,1-\epsilon]$. Now cap off the unknot inside of $\mathbb{R}^3\times [1-\epsilon,1]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/91981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Showing that $\left|x-1+\frac 1{1+x}-\left(y-1+\frac 1{1+y}\right)\right|\leq 2|x-y|$ I asked a question here the other day and one of the steps in the answer I encountered was: $$\left|x-1+\frac 1{1+x}-\left(y-1+\frac 1{1+y}\right)\right|\leq 2|x-y|$$ when $x,y\ge0$ I can't seem to figure out why this is true.. Can someone help me out? Thanks :)
We have, thanks to the triangle inequality, \begin{align*}\left|x-1+\frac 1{1+x}-\left(y-1+\frac 1{1+y}\right)\right|&= \left|x-1+\frac 1{1+x}-y+1-\frac 1{1+y}\right|\\ &=\left|x+\frac 1{1+x}-y-\frac 1{1+y}\right|\\ &\leq |x-y|+\left|\frac 1{1+x}-\frac 1{1+y}\right|\\ &= |x-y|+\left|\frac {1+y-(1+x)}{(1+x)(1+y)}\right|\\ &=|x-y|+\frac {|y-x|}{(1+x)(1+y)}\\ &\leq 2|x-y|, \end{align*} where the final step uses $(1+x)(1+y)\geq 1$, which holds since $x,y\geq 0$.
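A quick random sanity check of the bound (a sketch, not part of the proof):

```python
import random

def f(t):
    return t - 1 + 1 / (1 + t)

random.seed(1)
for _ in range(10000):
    x = random.uniform(0, 100)
    y = random.uniform(0, 100)
    assert abs(f(x) - f(y)) <= 2 * abs(x - y) + 1e-12
```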
{ "language": "en", "url": "https://math.stackexchange.com/questions/92035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$f$ uniformly continuous and $\int_a^\infty f(x)\,dx$ converges imply $\lim_{x \to \infty} f(x) = 0$ Trying to solve $f(x)$ is uniformly continuous in the range of $[0, +\infty)$ and $\int_a^\infty f(x)dx $ converges. I need to prove that: $$\lim \limits_{x \to \infty} f(x) = 0$$ Would appreciate your help!
If the integral converges then we have $\lim_{M\to\infty} \int_a^M f(x) dx = c\ $ for some $c$. Assuming that $f$ does not converge to $0$ we have: $\forall M>0\ \exists p>M: |f(p)|>\varepsilon\ \ $ for some $\varepsilon>0$. Without loss of generality we have $\forall M>0\ \exists p>M: f(p)>\varepsilon\ \ $. So there exists a sequence $(p_n)_{n=1}^\infty$ with $p_n \to \infty\ $ and $f(p_n)>\varepsilon$. Using uniform continuity we have a $\delta>0$ such that for every $q$ with $|p_n-q|<\delta\ \ $ the inequality holds: $f(q)>\varepsilon/2\ $. So $\int_a^{p_n+\delta} f(x) dx - \int_a^{p_n-\delta} f(x) dx > \delta \cdot \varepsilon \ \ $, which means that the sequence $(\int_a^{q_n}f(x)dx)_{n=1}^\infty$ (where $q_0=p_0-\delta,\ q_1=p_0+\delta,\ q_2=p_1-\delta \ldots)\ \ $ is not a Cauchy sequence, so it cannot converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/92105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 3, "answer_id": 1 }
Simplifying trig expression I was working through some trig exercises when I stumbled upon the following problem: Prove that: $ \cos(A+B) \cdot \cos(A-B)=\cos^2A- \sin^2B$. I started out by expanding it such that $$ \cos(A+B) \cdot \cos(A-B)=(\cos A \cos B-\sin A \sin B) \cdot (\cos A \cos B+ \sin A \sin B),$$ which simplifies to: $$ \cos^2 A \cos^2 B- \sin^2 A \sin^2 B .$$ However, I don't know how to proceed from here. Does anyone have any suggestions on how to continue.
Here is a detailed answer. Let's rock! $$ \require{cancel}\begin{align} \cos\left(A-B\right)\cdot\cos\left(A+B\right)&=\left(\cos A\cos B-\sin A\sin B\right)\left(\cos A\cos B+\sin A\sin B\right)\\ &=\cos^2A\cos^2B-\sin^2A\sin^2B\\ &=\cos^2A\left(1-\sin^2B\right)-\sin^2A\sin^2B\\ &=\cos^2A-\sin^2B\cos^2A-\sin^2A\sin^2B\\ &=\cos^2A-\sin^2B\cancelto{1}{\left(\cos^2A+\sin^2A\right)}\\ &=\cos^2A-\sin^2B \end{align} $$ I hope this helps.
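A grid-based numeric check of the identity (just a sketch):

```python
from math import cos, sin

# cos(A+B) cos(A-B) = cos^2(A) - sin^2(B) on a grid of angle pairs.
angles = [k * 0.1 for k in range(-31, 32)]
for A in angles:
    for B in angles:
        lhs = cos(A + B) * cos(A - B)
        rhs = cos(A) ** 2 - sin(B) ** 2
        assert abs(lhs - rhs) < 1e-12
```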
{ "language": "en", "url": "https://math.stackexchange.com/questions/92159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
matrix of linear transformation The linear transformation $A:\mathbb{R}^2\to \mathbb{R}^2$ is given by the images of basis vectors: $A((1,1))=(2,1)$ and $A((1,0))=(0,3)$. * *Find a matrix of linear transformation $A$ in the basis $(1,1), (1,0)$. *Find $A((3,2))$. *Find vector $x=(x_1,x_2)$ such that the matrix $\begin{pmatrix}-6 &-6\\ 3 &4\end{pmatrix}$ is matrix of the linear transformation $A$ in the basis $x$, $(0,3)$. Please help me with this.
If $\cal B=\{ {\bf v},{\bf w}\}$ is an ordered basis of $\Bbb R^2$, then the matrix representation, $M$, of $A:\Bbb R^2\rightarrow\Bbb R^2$ with respect to this basis (I assume you want both the domain and range to have basis $\cal B$) is the $2\times 2$ matrix that has as its first column the coordinates of $A{\bf v}$ with respect to $\cal B$ and as its second column the coordinates of $A{\bf w}$ with respect to $\cal B$. What does this mean? Well, if you write a vector ${\bf x}$ in terms of this basis $${\bf x}= c_1{\bf v}+c_2{\bf w},$$ then, setting, $[{\bf x}]_{\cal B}=[{c_1\atop c_2}]$ $$ \tag {1}[A {\bf x } ]_{\cal B} = M[{\bf x }]_{\cal B}. $$ That is, for $\bf x$ written in the standard basis, the coordinates of $A{\bf x}$ with respect to $\cal B$ are given by the product of the matrix $M$ with the coordinate matrix of $\bf x$ with respect to $\cal B$. For part 1.: The matrix representation of $A$ is easily found, since you were told what $A\bigl((1,1)\bigr)$ and $A\bigl((1,0)\bigr)$ were. We need to write $(2,1)$ and $(0,3)$ in terms of the basis $\cal B=\{(1,1),(1,0)\}$. $$(2,1)= 1 (1,1)+1(1,0)$$ and $$ (0,3)= 3(1,1)-3(1,0) $$ The matrix $M$ is $$M=\Bigl[\, \underbrace{1\atop 1}_{ [A(1,1)]_{\cal B} } \ \underbrace{3\atop -3}_{ [A(1,0)]_{\cal B} }\,\Bigr].$$ For part 2.: You need to write $(3,2)$ in terms of $\cal B$: $$ (3,2)=2(1,1)+1(1,0). $$ Using the matrix representation of $A$, $$[A\bigl((3,2 )\bigr)]_{\cal B}=\Bigl [\, {1\atop 1}\ {3\atop -3}\,\Bigr ]\Bigl[ {2\atop 1}\Bigr]=\Bigl[{5\atop -1} \Bigr]. $$ This gives the coordinates of $A((3,2))$ with respect to $\cal B$, so $$ A((3,2))= 5(1,1)+(-1)(1,0)=(4,5). $$ For part 3.: Let $\cal B'=\{(x_1,x_2), (0,3)\}$ You know the matrix $$W= \Bigl[\,{-6\atop 3}\ {-6\atop4}\,\Bigr ] $$ is the matrix representation of $A$ with respect to $\cal B'$. The second column of $W$ is $[ A\bigl((0,3)\bigr)]_{\cal B'}$. 
So, $$\tag{2}A((0,3))=-6(x_1,x_2)+4(0,3)=(-6x_1, -6x_2+12).$$ But you can compute $A\bigl((0,3)\bigr)$ using the matrix representation from part 1. We find $[A\bigl((0,3)\bigr)]_{\cal B}$ first. Towards this end, we write $(0,3)$ in terms of the basis $\cal B$ first. Solve: $$ (0,3) = c_1(1,1)+c_2(1,0) $$ to obtain $$ \eqalign{ c_1&=3\cr c_2&=-3. } $$ Then: $$ [A\bigl((0,3)\bigr)]_{\cal B}=\Bigl [\, {1\atop 1}\ {3\atop -3}\,\Bigr ]\Bigl[ {3\atop -3}\Bigr]=\Bigl[{ -6\atop 12}\Bigr]. $$ So, the coordinates of $A((0,3))$ with respect to $\cal B$ are $(-6,12)$. So, $$ \tag{3}A((0,3))=-6(1,1)+12(1,0)= (6, -6) $$ Comparing equations (2) and (3) gives $$ \eqalign{ 6&=-6x_1\cr -6&=-6x_2+12 } $$ This gives $x_1=-1$ and $x_2=3$.
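The three parts can be recomputed mechanically. Coordinates with respect to $\cal B=\{(1,1),(1,0)\}$ solve $c_1(1,1)+c_2(1,0)=v$, giving $c_1=v_2$ and $c_2=v_1-v_2$. A sketch:

```python
def to_B(v):                     # standard coordinates -> B coordinates
    return (v[1], v[0] - v[1])

def from_B(c):                   # B coordinates -> standard coordinates
    return (c[0] + c[1], c[0])

def apply_M(M, c):
    return (M[0][0] * c[0] + M[0][1] * c[1],
            M[1][0] * c[0] + M[1][1] * c[1])

# Part 1: columns are [A(1,1)]_B and [A(1,0)]_B.
M = [[1, 3], [1, -3]]
assert to_B((2, 1)) == (1, 1) and to_B((0, 3)) == (3, -3)

# Part 2: A((3,2)).
c = apply_M(M, to_B((3, 2)))
assert c == (5, -1) and from_B(c) == (4, 5)

# Part 3: A((0,3)) = (6,-6) must equal -6*(x1,x2) + 4*(0,3).
ax, ay = from_B(apply_M(M, to_B((0, 3))))
x1, x2 = ax / -6, (ay - 12) / -6
assert (x1, x2) == (-1.0, 3.0)
```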
{ "language": "en", "url": "https://math.stackexchange.com/questions/92206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the limit of a sequence $\lim _{n\to \infty} \sqrt [3]{n^2} \left( \sqrt [3]{n+1}- \sqrt [3]{n} \right)$ If there were a regular square root I would multiply the top by its conjugate and divide, but I've tried that with this problem and it doesn't work. Not sure what else to do; I have been stuck on it. $$ \lim _{n\to \infty } \sqrt [3]{n^2} \left( \sqrt [3]{n+1}- \sqrt [3]{n} \right) .$$
Presumably you don't want a Taylor series expansion, since you said you "don't want to differentiate anything," but it's worth pointing out that you can apply the binomial expansion: $$ \begin{eqnarray} \sqrt[3]{n+1} &=& \sqrt[3]{n}\sqrt[3]{1+n^{-1}} \\ &=& \sqrt[3]{n}\sum_{k}{{1/3}\choose{k}}n^{-k} \\ &=& \sum_{k}{{1/3}\choose{k}}n^{1/3-k} \\ &=& \sqrt[3]{n} + \frac{1}{3}n^{-2/3}+O(n^{-5/3}). \end{eqnarray} $$ So $\sqrt[3]{n^2}(\sqrt[3]{n+1}-\sqrt[3]{n}) = 1/3 + O(n^{-1}) \rightarrow 1/3$ as $n \rightarrow\infty$.
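Numerically the expression indeed settles near $1/3$, with the deviation shrinking like $1/n$ as the $O(n^{-1})$ term predicts:

```python
def term(n):
    return n ** (2 / 3) * ((n + 1) ** (1 / 3) - n ** (1 / 3))

# Deviation from 1/3 is roughly (1/9)/n, plus floating-point noise.
assert abs(term(10 ** 4) - 1 / 3) < 1e-4
assert abs(term(10 ** 8) - 1 / 3) < 1e-6
```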
{ "language": "en", "url": "https://math.stackexchange.com/questions/92272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Expected value of $XYZ$, $E(XYZ)$, is not always a $E(X)E(Y)E(Z)$, even if $X$, $Y$, $Z$ are not correlated in pairs Could you tell me, please, whether this is true? The expected value $E(XYZ)$ is not always $E(X)E(Y)E(Z)$, even if $X$, $Y$, $Z$ are not correlated in pairs, because pairwise uncorrelatedness of $X$, $Y$, $Z$ does not entail that they are uncorrelated in aggregate (this is my idea)?
Let $\Omega=\{\omega_1,\omega_2, \omega_3, \omega_4\}$, $\mathcal{F}=2^{\Omega}$ and $\mathbb{P}(\{\omega_i\})=1/4$ for all $i$. It is easy to check that the desired random variables are $$ X=1_{\{\omega_1,\omega_2\}},\quad Y=1_{\{\omega_1,\omega_3\}},\quad Z=1_{\{\omega_1,\omega_4\}} $$
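The four-point example can be checked exhaustively (a sketch):

```python
from math import prod

# X, Y, Z are pairwise uncorrelated indicators, yet E[XYZ] != E[X]E[Y]E[Z].
omega = [1, 2, 3, 4]                 # each outcome has probability 1/4
X = {1: 1, 2: 1, 3: 0, 4: 0}         # indicator of {w1, w2}
Y = {1: 1, 2: 0, 3: 1, 4: 0}         # indicator of {w1, w3}
Z = {1: 1, 2: 0, 3: 0, 4: 1}         # indicator of {w1, w4}

def E(*vs):
    return sum(prod(v[w] for v in vs) for w in omega) / 4

assert E(X) == E(Y) == E(Z) == 0.5
assert E(X, Y) == E(X) * E(Y)        # pairwise uncorrelated
assert E(X, Z) == E(X) * E(Z)
assert E(Y, Z) == E(Y) * E(Z)
assert E(X, Y, Z) == 0.25            # but E[XYZ] = 1/4 != 1/8
assert E(X, Y, Z) != E(X) * E(Y) * E(Z)
```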
{ "language": "en", "url": "https://math.stackexchange.com/questions/92321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proofs for an equality I was working on a little problem and came up with a nice little equality which I am not sure if it is well-known (or) easy to prove (It might end up to be a very trivial one!). I am curious about other ways to prove the equality and hence I thought I would ask here to see if anybody knows any or can think of any. I shall hold off from posting my own answer for a couple of days to invite different possible solutions. Consider the sequence of functions: $$ \begin{align} g_{n+2}(x) & = g_{n}(x) - \left \lfloor \frac{g_n(x)}{g_{n+1}(x)} \right \rfloor g_{n+1}(x) \end{align} $$ where $x \in [0,1]$ and $g_0(x) = 1, g_1(x) = x$. Then the claim is: $$x = \sum_{n=0}^{\infty} \left \lfloor \frac{g_n}{g_{n+1}} \right \rfloor g_{n+1}^2$$
I'll let you decide if it's trivial :-): $$ \begin{align*} \sum_{n=0}^{\infty} \left \lfloor \frac{g_n}{g_{n+1}} \right \rfloor g_{n+1}^2 &= \sum_{n=0}^{\infty} \left( \left \lfloor \frac{g_n}{g_{n+1}} \right \rfloor g_{n+1} \right) \cdot g_{n+1} \\&= \sum_{n=0}^{\infty} \left( g_{n} - g_{n+2} \right) \cdot g_{n+1} \\&= \sum_{n=0}^{\infty} \left(g_{n} g_{n+1} - g_{n+1} g_{n+2} \right) \\&= g_0 g_1 = x, \end{align*} $$ by a telescopic cancelation. Technical note: Convergence. The above manipulations are valid only after we check the convergence of the infinite series. Note that $$ g_{n+2} = g_{n+1} \cdot \left \{ \frac{g_{n}}{g_{n+1}} \right\} , $$ where $\{ \cdot \}$ denotes fractional part. Hence, inductively we see that the sequence $(g_n)$ is nonnegative and monotone decreasing; therefore it converges. We claim that $g_n$ in fact converges to $0$. Towards a contradiction, assume that $\lim\limits_{n \to \infty} \ g_n > 0$. Then for large enough $n$, we have $1 < \frac{g_n}{g_{n+1}} < \frac{3}{2}$, and hence $ \left\{ \frac{g_n}{g_{n+1}} \right\} < \frac{1}{2}$. Therefore $g_{n+2}\lt \frac{g_{n+1}}{2}$, which is a contradiction to the previous sentence. :) Thus $g_n$ converges to $0$. Finally, for any $N$, we have $$ \left| g_0 g_1 - \sum_{n=0}^{N} (g_n g_{n+1} - g_{n+1} g_{n+2}) \right| = g_{N+1} g_{N+2} \to 0. $$ That is, the series $\sum \limits_{n=0}^{\infty} \left(g_{n} g_{n+1} - g_{n+1} g_{n+2} \right)$ converges to $g_0 g_1$ as desired.
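A numeric illustration of the partial sums for an irrational $x$ (for rational $x$ the recursion terminates when some $g_{n+1}$ hits $0$, which the sketch guards against):

```python
from math import floor, sqrt

# Partial sums of floor(g_n/g_{n+1}) * g_{n+1}^2 telescope to g_0 g_1 = x.
x = 1 / sqrt(2)
g_prev, g_cur = 1.0, x               # g_0 = 1, g_1 = x
total = 0.0
for _ in range(20):
    if g_cur < 1e-12:                # guard against division by ~0
        break
    q = floor(g_prev / g_cur)
    total += q * g_cur ** 2
    g_prev, g_cur = g_cur, g_prev - q * g_cur   # g_{n+2} = g_n - q g_{n+1}

assert abs(total - x) < 1e-12
```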
{ "language": "en", "url": "https://math.stackexchange.com/questions/92382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Sum and intersections of vector subspaces $U_1+U_2=(U_1 \cap U_2) \oplus W$ Let $U_1,U_2$ be vector subspaces of $\mathbb R^5$. $$\begin{align*}U_1 &= [(1,0,1,-2,0),(1,-2,0,0,-2),(0,2,1,2,2)]\\ U_2&=[(0,1,1,1,0),(1,2,1,2,1),(1,0,1,-1,0)] \end{align*}$$ (where [] = [linear span]) Calculate a basis of $U_1+U_2$ and a vector subspace $W \subseteq \mathbb R^5$ so that $U_1+U_2=(U_1 \cap U_2) \oplus W$. ($\oplus$ is the direct sum). I have the following so far. I calculated a basis of $U_1 \cap U_2$ in the previous exercise and got the following result: $(1,0,1,-1,0)$. I've also calculated a basis of $U_1+U_2$ and got that the standard basis of $\mathbb R^5$ is a basis. So I suppose now I should solve the following: standard basis of $\mathbb R^5$ = $(1,0,1,-1,0)\oplus W$ I thought I should get 4 additional vectors and they should also respect the direct sum criterion, that their intersection $= \{0\}$, however my colleagues have this: $W = \{(w_1,w_2,0,w_3,w_4) | w_1,w_2,w_3,w_4 \in \mathbb R\}$. Where did I go wrong? Many many many thanks in advance!
It seems that you're fine. The $W$ given by your colleagues has $\{e_1,e_2,e_4,e_5\}$ as a basis, where $e_i$ is the standard $i^{\rm th}$ unit vector in $\Bbb R^5$. Moreover, their $W$ does not contain $(1,0,1,-1,0)$ (any vector in $W$ has $0$ in its third coordinate); thus, this vector together with $e_1$, $e_2$, $e_4$, $e_5$ gives a basis for $\Bbb R^5$. So, then, $\mathbb R^5 = [(1,0,1,-1,0)]\oplus W$, using your bracket notation for the span. Your approach would be more or less the same. I imagine your colleagues interpreted the question as "exhibit the subspace $W$", rather than "find a basis of the subspace $W$".
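As a sanity check (my own addition, not part of the answer), the five vectors $(1,0,1,-1,0), e_1, e_2, e_4, e_5$ form a basis of $\mathbb R^5$ exactly when the matrix with these rows has nonzero determinant; exact rational arithmetic keeps the check honest.

```python
from fractions import Fraction

def det(m):
    """Determinant by cofactor expansion along the first row (fine for a 5x5)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = Fraction(0)
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

rows = [
    [1, 0, 1, -1, 0],  # the basis vector of U1 ∩ U2
    [1, 0, 0, 0, 0],   # e1
    [0, 1, 0, 0, 0],   # e2
    [0, 0, 0, 1, 0],   # e4
    [0, 0, 0, 0, 1],   # e5
]
M = [[Fraction(x) for x in row] for row in rows]
print(det(M))  # nonzero, so R^5 is the direct sum of the span of (1,0,1,-1,0) and W
```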
{ "language": "en", "url": "https://math.stackexchange.com/questions/92483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to prove the following inequality: $1+ac+ab+3a\leq b+c+abc+3bc$? Show that $$1+ac+ab+3a\leq b+c+abc+3bc$$ if $1\leq a\leq bc,$ $1\leq b\leq ac,$ $1\leq c\leq ab.$
$1\leq a$, so $0\leq (a-1)$. Similarly for $b$ and $c$, so we have $$ 0\leq (a-1)(b-1)(c-1) $$ $a \leq bc$ so $0\leq bc-a$ and $$ 0 \leq 4(bc-a) $$ Adding these two we get $$ 0 \leq (a-1)(b-1)(c-1) + 4(bc-a) $$ Multiplying out yields the result: $$ 0 \leq (abc-ab-ac+a-bc+b+c-1) + 4bc - 4a $$ $$ 0 \leq abc-ab-ac-3a+3bc+b+c-1 $$ $$ 1+ab+ac+3a \leq abc+3bc+b+c $$ As required. To solve this I took out the $bc$ and $a$ terms, as given that $bc-a$ could be 0, I reasoned that the inequality should hold with them gone. I "guessed" the factorization $(a-1)(b-1)(c-1)$, and put in some extra terms to compensate, which happened to be of the form $bc-a$.
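The factoring trick can be checked mechanically; here is a short SymPy verification (my own addition) that $(a-1)(b-1)(c-1)+4(bc-a)$ expands to exactly the difference of the two sides of the inequality.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = 1 + a*c + a*b + 3*a
rhs = b + c + a*b*c + 3*b*c
key = (a - 1)*(b - 1)*(c - 1) + 4*(b*c - a)

# The difference rhs - lhs is exactly the manifestly nonnegative quantity `key`:
print(sp.expand(rhs - lhs - key))  # 0
```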
{ "language": "en", "url": "https://math.stackexchange.com/questions/92608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A problem about stochastic convergence (I think) I am trying to prove the convergence of the function $f_n = I_{[n,n+1]}$ to $f=0$, but first of all I don't in which way it converges, either in $\mathcal{L}_p$-measure or stochastically, or maybe some other form of convergence often used in measure-theory. For now I'm assuming it's stochastic convergence, as in the following: $$ \text{lim}_{n \rightarrow \infty} \, \mu(\{x \in \Re: |f_n(x)-f(x)| \geq \alpha\}\cap A )=0$$ must hold for all $\alpha \in \Re_{>0}$ and all $A \in \mathcal{B}(\Re)$ of finite measure. I know it must be true since there is no finite $A$ for which this holds. Could someone give me a hint how to start off this proof?
For those interested, this is my complete proof: For $\alpha > 1$ the limit is clearly equal to $0$. For $\alpha \leq 1$ we need to prove that $$ \text{lim}_{n \rightarrow \infty} \mu([n,n+1] \cap A) = 0.$$ Suppose, toward a contradiction, that this fails for some $A \in \mathcal{B}(\mathbb{R})$ with finite measure. Then there exist a $c > 0$ and infinitely many indices $n_1 < n_2 < \cdots$ such that $\mu([n_j,n_j+1] \cap A) > c$ for every $j$. The sets $[n_j,n_j+1] \cap A$ overlap in at most single points, which have measure zero, so $$\mu(A) \geq \mu\Big(\bigcup_{j=1}^\infty \big([n_j,n_j+1] \cap A\big)\Big) = \sum_{j=1}^\infty \mu([n_j,n_j+1] \cap A) = \infty,$$ and we have a contradiction, since we assumed $A$ had finite measure. Therefore the limit is equal to $0$ for all $A \in \mathcal{B}(\mathbb{R})$ of finite measure, and $f_n$ converges to $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/92733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Representing the $q$-binomial coefficient as a polynomial with coefficients in $\mathbb{Q}(q)$? Trying a bit of combinatorics this winter break, and I don't understand a certain claim. The claim is that for each $k$ there is a unique polynomial $P_k(x)$ of degree $k$ whose coefficients are in $\mathbb{Q}(q)$, the field of rational functions, such that $P_k(q^n)=\binom{n}{k}_q$ for all $n$. Here $\binom{n}{k}_q$ is the $q$-binomial coefficient. I guess what is mostly troubling me is that $P_k(q^n)$ is a polynomial in $q^n$. I'm sure it's obvious, but why is the claim true? Thanks.
Existence. Recall $[\begin{smallmatrix} n \\ k \end{smallmatrix}]_q$ counts the $k$-dimensional subspaces of $\mathbb{F}_q^{\,n}$. A $k$-frame is an ordered linearly independent $k$-subset of $\mathbb{F}_q^{\,n}$. The number of $k$-frames may be counted in two ways: first, pick a vector, then a vector not in the span of the first, then a vector not in the span of the first two, and so on, yielding $(q^n-1)(q^n-q)\cdots(q^n-q^{k-1})$. On the other hand, we may also count them by first picking a $k$-dimensional subspace, and then within that subspace repeating the same process, yielding instead $[\begin{smallmatrix} n \\ k \end{smallmatrix}]_q(q^k-1)(q^k-q)\cdots(q^k-q^{k-1})$. Equating yields the formula $$ \begin{bmatrix} n \\ k \end{bmatrix}_q = \frac{\color{Red}{q^n}-1}{q^k-1}\frac{\color{Red}{q^n}-q}{q^k-q}\cdots\frac{\color{Red}{q^n}-q^{k-1}}{q^k-q^{k-1}}. $$ Evidently this is a polynomial in $q^n$ with coefficients in $\mathbb{Q}(q)$. By cancelling powers of $q$ in each of the fractions, then dividing top and bottom of each by $(q-1)$ in order to obtain $q$-analogs of integers, we may check this matches the textbook formula $$ \frac{[n]_q[n-1]_q\cdots[n-(k-1)]_q}{[k]_q\,[k-1]_q\cdots[1]_q} = \frac{[n]_q!}{[k]_q!\,[n-k]_q!}. $$ Uniqueness. If $P,Q\in\mathbb{Q}(q)[T]$ and $P(q^n)=Q(q^n)$ for all $n$, then their difference $R=P-Q$ satisfies $R(q^n)=0$ in $\mathbb{Q}(q)$ for all $n$. Without loss of generality, $R(T)\in\mathbb{Q}[q,T]$ after clearing denominators of the coefficients of powers of $T$. Writing $R(T)=a_d(q)T^d+\cdots+a_1(q)T+a_0(q)$ with $a_d\neq0$, if $d>0$ we can find an $n$ large enough so that $\deg(a_d)+nd>\deg(a_k)+nk$ for all $k<d$; then $R(q^n)$ has positive degree in $q$, so $R(q^n)\neq0$, a contradiction. Hence $d=0$, so $R(T)=a_0(q)$, and $R(q^n)=0$ forces $R(T)=0$.
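A quick numerical sanity check of the frame-counting formula (my own addition): evaluate the product $\prod_{i=0}^{k-1}(q^n-q^i)/(q^k-q^i)$ at prime powers $q$ and compare with small known values, e.g. $\binom{4}{2}_2 = 35$, the number of $2$-dimensional subspaces of $\mathbb F_2^4$.

```python
from fractions import Fraction

def q_binomial(n, k, q):
    """Gaussian binomial coefficient via the frame-counting product formula."""
    result = Fraction(1)
    for i in range(k):
        result *= Fraction(q**n - q**i, q**k - q**i)
    return result

print(q_binomial(4, 2, 2))  # 35 planes in F_2^4
print(q_binomial(3, 1, 3))  # 13 lines in F_3^3 (the projective plane of order 3)
```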
{ "language": "en", "url": "https://math.stackexchange.com/questions/92772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Showing $\tau(n)/\phi(n)\to 0$ as $n\to \infty$ I was wondering how to show that $\tau(n)/\phi(n)\to 0$, as $n\to \infty$. Here $\tau(n)$ denotes the number of positive divisors of n, and $\phi(n)$ is Euler's phi function.
Note that for an odd integer $n>1$, we have $\tau(n)/\varphi(n)\leq 1$ (first note it for an odd prime-power, then extend by multiplicativity to all odd $n$). For each $\epsilon>0$, consider the set $S_\epsilon$ of those odd $n>1$ with $\tau(n)/\varphi(n)>\epsilon$. Write $n=p_1^{\alpha_1}\dots p_k^{\alpha_k}$, where $p_1, \dots, p_k$ are odd primes. Then $$\frac{\alpha_1+1}{\varphi(p_1^{\alpha_1})} \cdots \frac{\alpha_k+1}{\varphi(p_k^{\alpha_k})}>\epsilon$$ and since each term of the product is $\leq 1$, each term must itself be $>\epsilon$. Since $\varphi(p^{\alpha})=p^{\alpha}(1-1/p)\geq \frac23 p^{\alpha}$ for odd $p$, this gives, for each $i$, $$\frac{1+\alpha_i}{p_i^{\alpha_i}}>\frac{2\epsilon}{3}.$$ Clearly there are finitely many combinations of $p$ and $\alpha$ which satisfy this. Hence $S_\epsilon$ is finite. Now for a general integer $n=2^mn'$ where $n'$ is odd and $\tau(n)/\varphi(n)> \epsilon$: since $\tau(n)=(m+1)\tau(n')$ and $\varphi(n)\geq 2^{m-1}\varphi(n')$, we get $\tau(n)/\varphi(n)\leq 2\tau(n')/\varphi(n')$, so $\tau(n')/\varphi(n')>\epsilon/2$, i.e. $n'\in S_{\epsilon/2}\cup\{1\}$, leaving finitely many choices for $n'$; and $\tau(n)/\varphi(n)\leq (m+1)2^{1-m}$, leaving finitely many choices for $m$. Hence, for every $\epsilon>0$, there are finitely many integers $n$ with $\tau(n)/\varphi(n)>\epsilon$, which is exactly the statement that the ratio tends to $0$. Here is a plot of $\tau(n)/\varphi(n)$ for $n<10^5$:
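To see the decay concretely, here is a small script (my own illustration, not from the answer) computing $\tau(n)/\varphi(n)$ by trial-division factorization and tracking its maximum over dyadic blocks.

```python
def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def tau_over_phi(n):
    tau = phi = 1
    for p, a in factorize(n).items():
        tau *= a + 1                   # number-of-divisors is multiplicative
        phi *= p**(a - 1) * (p - 1)    # Euler phi on a prime power
    return tau / phi

blocks = [max(tau_over_phi(n) for n in range(2**k, 2**(k + 1)))
          for k in range(1, 13)]
print(blocks)  # the block maxima shrink toward 0
```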
{ "language": "en", "url": "https://math.stackexchange.com/questions/92851", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Find limit of polynomials Suppose we want to find limit of the following polynomial $$\lim_{x\to-\infty}(x^4+x^5).$$ If we directly put here $-\infty$, we get "$-\infty +\infty$" which is definitely undefined form, but otherwise if factor out $x^5$, our polynomial will be of the form $x^5(1/x+1)$. $\lim_{x\to-\infty}\frac 1x=0$, so our result will be $-\infty*(0+1)$,which equal to $-\infty$. I have exam in a 3 days and interested if my last procedure is correct? Directly putting $x$ values gives me undefined form, but factorization on the other hand, negative infinity, which one is correct?
The second method is correct, since it shows that when $x$ becomes "really negative", $x^4$ will weigh less than $x^5$. The first method is not correct, since "$-\infty+\infty$" can give anything: for example with $f(x)=x+a$ and $g(x)=-x$, where $a$ is a real number, we have $\lim_{x\to+\infty}f(x)=+\infty$, $\lim_{x\to+\infty}g(x)=-\infty$ but $\lim_{x\to+\infty}f(x)+g(x)=a$, so we can get any real number; $+\infty$ or $-\infty$ are also possible, as your example shows. It's possible that $f(x)+g(x)$ doesn't converge at all, for example with $f(x)=x+\cos x$ and $g(x)=-x$.
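The factoring argument can also be cross-checked with a CAS (my own addition); SymPy applies essentially the same leading-term analysis:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit(x**4 + x**5, x, -sp.oo))        # -oo
print(sp.limit(x**5 * (1/x + 1), x, -sp.oo))   # -oo, using the factored form
```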
{ "language": "en", "url": "https://math.stackexchange.com/questions/92915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Software to display 3D surfaces What are some examples of software or online services that can display surfaces that are defined implicitly (for example, the sphere $x^2 + y^2 + z^2 = 1$)? Please add an example of usage (if not obvious). Also, I'm looking for the following (if any): * *a possibility to draw many surfaces on the same sheet *to show cross-sections
Wolfram Mathematica can handle the first through the use of ContourPlot3D. That reference page has the necessary syntax for all of what you are asking. By cross-sections I am assuming you are referring to $f(x,y,z) = k$ as $k$ varies? If so, that is done by just leaving off the == k in the function usage (see the documentation for more information).
{ "language": "en", "url": "https://math.stackexchange.com/questions/92963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 2 }
Why is $\lim\limits_{x \space \to \infty}\space{\arctan(x)} = \frac{\pi}{2}$? As part of this problem, after substitution I need to calculate the new limits. However, I do not understand why this is so: $$\lim_{x \to \infty}\space{\arctan(x)} = \frac{\pi}{2}$$ I tried drawing the unit circle to see what happens with $\arctan$ when $x \to \infty$ but I don't know how to draw $\arctan$. It is the inverse of $\tan$ but do you even draw $\tan$? I would appreciate any help.
I finally solved it with the help of this picture. * *$\sin x = BC$ *$\cos x = OB$ *$\tan x = AD$ *$\cot x = EF$ *$\sec x = OD$ *$\csc x = OF$ Note that our nomenclature for $\tan x$ is not really arbitrary: $AD$ really is a segment of the tangent line to the unit circle at $A$. Now it is clearly visible that as $\tan(x) = AD \to \infty$, the angle $x = \arctan(AD) \to \frac{\pi}{2}$.
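Numerically (my own addition), $\pi/2-\arctan x$ behaves like $1/x$ for large $x$, so the limit is visible immediately:

```python
import math

for x in (10.0, 1e3, 1e6):
    gap = math.pi / 2 - math.atan(x)
    print(x, gap, gap * x)  # gap*x -> 1, i.e. pi/2 - arctan(x) ~ 1/x
```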
{ "language": "en", "url": "https://math.stackexchange.com/questions/93042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
How to prove that geometric distributions converge to an exponential distribution? How to prove that geometric distributions converge to an exponential distribution? To solve this, I am trying to define an indexing $n$/$m$ and to send $m$ to infinity, but I get zero, not some relevant distribution. What is the technique or approach one must use here?
The geometric distribution is given by $$P(n) = (1-p)^n p = q^np$$ Here $P(n)$ is the probability of a success after $n$ failures. Fix a rate $\lambda>0$ and let the success probability be $p=\lambda/m$, where $m$ will be sent to infinity; rescale time so that each trial takes $1/m$, i.e. set $x=n/m$. Substituting $n=mx$ in the above equation we get: $$P(n) = \left(1-\frac{\lambda}{m}\right)^{mx} \left(\frac{\lambda}{m}\right) $$ In taking the limit, and writing the time step as $1/m = dx$, we get: $$ \lim_{m\rightarrow \infty} \left(1-\frac{\lambda}{m}\right)^{mx} \left(\frac{\lambda}{m}\right) = \lambda e^{-\lambda x}\, dx$$ Where we see $\lambda e^{-\lambda x}$, which is the exponential probability density function with rate $\lambda$.
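The convergence can be checked without any simulation (my own addition): the survival function of a geometric variable with $p=\lambda/m$, viewed at rescaled time $x=n/m$, converges to the exponential survival function $e^{-\lambda x}$.

```python
import math

lam, x = 1.5, 2.0
for m in (10, 100, 10_000, 1_000_000):
    p = lam / m
    n = int(m * x)
    survival = (1 - p) ** n  # P(first success needs more than n trials)
    print(m, survival, math.exp(-lam * x))  # the two columns converge together
```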
{ "language": "en", "url": "https://math.stackexchange.com/questions/93098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 5, "answer_id": 3 }
How to interpolate this table I have a 3D function(al) f whose independent variables are A,C,D and E. Various tables are provided to show the function values. For each individual table, value of A is a constant. The nth table is provided for A + nk where k is a constant. The structure of each table is as follows: Column 1 = X(value of the first dependent variable) Column 2 = Y(value of the second dependent variable) Column 3 = Z(value of the third dependent variable) Column 4 = dXC(change of value of the first dependent variable, for C+dC, where dC is the change in C, and D,E remains constant) Column 5 = dXD(change of value of the first dependent variable only, for D+dD, where dD is the change in D, and C,E remains constant) Column 6 = dXE(change of value of the first dependent variable only, for E+dE, where dE is the change in E, and C,D remains constant) Column 7 = dYC(change of value of the second dependent variable only, for C+dC, where dC is the change in C, and D,E remains constant) Column 8 = dYD(change of value of the second dependent variable only, for D+dD, where dD is the change in D, and C,E remains constant) Column 9 = dYE(change of value of the second dependent variable only, for E+dE, where dE is the change in E, and C,D remains constant) The changes in respective independent variables are same across tables. Each row is given for uniform incremental values of X. For each row Y and Z are calculated for same values of C,D,E for which X is calculated. That is if X = f1(a1,c1,d1,e1), then Y = f2(a1,c1,d1,e1) and Z = f3(a1,c1,d1,e1). Now my problem: I want to calculate the values of X,Y and Z for intermediate values in C,D and E. If I apply small changes in C,D and E simultaneously, what will be values of X, Y and Z. (These changes are smaller than dC,dD and dE respectively). How can I approach this problem. What subject of mathematics deals with these kind of problems.
This appears to me to be an interpolation problem. In fact, this sounds like a challenging interpolation problem (I haven't done multivariate interpolation myself, and I imagine that some software can do it for you; but I am familiar with univariate interpolation, and it has enough complexity on its own in my opinion). I suspect that you'll want to use "Catmull-Rom splines," well-studied interpolation points that work in any dimension. So to answer your question, this area of math is 'multivariable interpolation.'
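For reference, a minimal 1-D Catmull–Rom segment evaluator (my own sketch, not from the answer — the multivariate case applies this along each axis in turn): it interpolates between $p_1$ and $p_2$, using the neighbors $p_0$ and $p_3$ to set the end slopes.

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Value on the Catmull-Rom segment between p1 (t=0) and p2 (t=1)."""
    return 0.5 * (
        2 * p1
        + (-p0 + p2) * t
        + (2*p0 - 5*p1 + 4*p2 - p3) * t**2
        + (-p0 + 3*p1 - 3*p2 + p3) * t**3
    )

samples = [1.0, 3.0, 2.0, 4.0]     # four consecutive tabulated values
print(catmull_rom(*samples, 0.0))  # 3.0: the curve passes through p1
print(catmull_rom(*samples, 1.0))  # 2.0: the curve passes through p2
print(catmull_rom(*samples, 0.5))  # a smooth in-between value
```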
{ "language": "en", "url": "https://math.stackexchange.com/questions/93153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $22/7$ equal to the $\pi$ constant? Possible Duplicate: Simple numerical methods for calculating the digits of Pi How the letter 'pi' came in mathematics? When I calculate the value of $22/7$ on a calculator, I get a number that is different from the constant $\pi$. Question: How is the $\pi$ constant calculated? (The simple answer, not the Wikipedia calculus answer.)
Pi ($\pi$) is a mathematically defined, a priori constant, one definition being the ratio of the circumference of a circle to its diameter. In some places, it also has (had) a (mathematically inaccurate) legal definition. The mathematical definition is universally agreed upon. In the decimal number system, it has an infinite decimal expansion, and so cannot be represented (mathematically) exactly as a decimal number (in a finite amount of space). For most non-mathematical purposes (e.g. architecture, agriculture), accuracy past a few decimal places is not necessary. For astronomy, aviation & aeronautics, more accuracy is usually needed -- maybe ten or twenty decimal places (please correct me if I'm wrong). The estimation of $\pi$ has a very long history (compared to the span of recorded history and of the history of mathematics). There are in turn many books on Pi. For example, The Joy of Pi & A History of Pi to name a few. There are even more methods of calculating $\pi$, including, amazingly, some relatively recent developments. Perhaps the easiest method, if you want to avoid advanced mathematics (and calculus is too advanced) and take a few things on faith, is to use simple geometry and rely on a trigonometry function (which could be argued is circular reasoning since we will use the fact that $360$ degrees equals $2\pi$ radians). You can use, for example, the area of a regular $n$-gon (for $n$ large) with vertices on a circle of radius $1$ as an approximation for $\pi$. This area is then $$ A_n=n\cdot\sin\frac{\theta_n}{2}\cos\frac{\theta_n}{2}=\frac{n}{2}\sin\theta_n \qquad\text{for}\qquad \theta_n=\frac{360\text{ deg}}{n}=\frac{2\pi}{n}\text{ (rad)} $$ (draw a triangle from the center to two adjacent vertices, bisect this triangle by a line from the center to the midpoint of the edge between them, calculate lengths and the area of this triangle, and multiply by the number of these triangles, which is $n$).
With a little calculus knowledge, we can also verify that in fact (when $\theta_n$ is in radians!), $$ \lim_{n \to \infty}A_n= \lim_{n \to \infty}\frac{sin\frac{2\pi}{n}}{\frac{2}{n}}= \lim_{x \to 0}\frac{\sin\pi x}{x}=\pi\;. $$ A more recently found formula (Bailey–Borwein–Plouffe, 1995) whose statement requires not so much math is $$ \pi=\sum_{n=0}^{\infty} \left( \frac{4}{8n+1} - \frac{2}{8n+4} - \frac{1}{8n+5} - \frac{1}{8n+6} \right) \left( \frac{1}{16} \right)^n $$ which converges very quickly to the answer, i.e., not very many terms are needed to get any given desired accuracy, since as can be seen, the $n$th term is (much) less than $16^{-n}$, so that the first $n$ terms easily give $n$ hexadecimal (base 16) digits, or $n\log{16}\simeq{1.2n}$ decimal places of accuracy. The (early) history of approximations to $\pi$ can also be (roughly) traced by the its (various) continued fraction expansions, e.g. $\pi = 3 + \frac{1}{7+}\frac{1}{15+}\frac{1}{1+}\frac{1}{292+}\dots$, with the number of terms used increasing (roughly!) with historical time; so $3$ is the simplest approximation, then $22/7$, then $333/106$, etc.
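The BBP series converges so fast that a handful of terms already matches double precision (my own illustration, not part of the answer):

```python
import math

def bbp_pi(terms):
    """Approximate pi with the Bailey-Borwein-Plouffe series."""
    s = 0.0
    for n in range(terms):
        s += (4 / (8*n + 1) - 2 / (8*n + 4)
              - 1 / (8*n + 5) - 1 / (8*n + 6)) / 16**n
    return s

print(bbp_pi(12), math.pi)  # agreement to roughly machine precision
```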
{ "language": "en", "url": "https://math.stackexchange.com/questions/93222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Is quasi-isomorphism an equivalence relation? Let $E^\bullet$ and $F^\bullet$ be complexes on an abelian category; what does it mean to say that $E^\bullet$ and $F^\bullet$ are quasi-isomorphic? Does it only mean that there is a map of complexes $f:E^\bullet \to F^\bullet$ that induces isomoprhisms between the cohomology objects? Or does it also guarantee the existence of a map of complexes $g:F^\bullet \to E^\bullet$ inducing the inverses of $H^pf:H^p(E^\bullet)\to H^p(F^\bullet)$? Put in another way: is quasi-isomorphism an equivalence relation?
$\def\ZZ{\mathbb Z}$The relation $E \sim F$ defined by «there exists a morphism $E\to F$ inducing an isomorphism in homology» is not an equivalence relation because it is not symmetric (it is reflexive and transitive). For example, there is a morphism from $$\cdots \to 0\to \ZZ\xrightarrow{2}\ZZ\to0\to\cdots$$ to the complex $$\cdots \to 0\to 0\to\ZZ/2\ZZ\to0\to\cdots$$ inducing an isomorphism in homology, but there is no non-zero morphism in the other direction. The useful relation is the symmetric closure of this relation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/93273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Evaluating Integral $\int e^{x}(1-e^x)(1+e^x)^{10} dx$ I have this integral to evaluate: $$\int e^{x}(1-e^x)(1+e^x)^{10} dx$$ I figured to use u substitution for the part that is raised to the tenth power. After doing this the $e^x$ is canceled out. I am not sure where to go from here however due to the $(1-e^x)$. Is it possible to move it to the outside like this and continue from here with evaluating the integral? $$(1-e^x)\int u^{10} du$$
FWIW, this would also generalize easily via a recurrence relation (aside from the obvious substitution that handles it directly), as the integral has the form $f'(x)\,g(x)$ with $f(x) = \frac{1}{11}(1+e^x)^{11}$ and $g(x) = 1-e^x$, and both $f$ and $g$ behave well enough under integration by parts — i.e. we get back something close enough to the original integral to define a recurrence.
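For what it's worth, the whole computation can be confirmed symbolically (my own addition): expand the integrand into a sum of exponentials, integrate term by term, and differentiate back.

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.exp(x) * (1 - sp.exp(x)) * (1 + sp.exp(x))**10
F = sp.integrate(sp.expand(integrand), x)  # term-by-term on powers of e^x
# F is an antiderivative iff F' - integrand vanishes identically:
print(sp.simplify(sp.diff(F, x) - integrand))
```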
{ "language": "en", "url": "https://math.stackexchange.com/questions/93340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Does every Abelian group admit a ring structure? Given some Abelian group $(G, +)$, does there always exist a binary operation $*$ such that $(G, +, *)$ is a ring? That is, $*$ is associative and distributive: \begin{align*} &a * (b * c) = (a*b) * c \\ &a * (b + c) = a * b + a * c \\ &(a + b) * c = a * c + b * c \\ \end{align*} We also might have multiplicative identity $1 \in G$, with $a * 1 = 1 * a = a$ for any $a \in G$. Multiplication may or may not be commutative. Depending on the definition, the answer could be no in the case of the group with one element: then $1 = 0$. But the trivial ring is not a very interesting case. For cyclic groups the statement is certainly true, since $(\mathbb{Z}_n, +, \cdot)$ and $(\mathbb{Z}, +, \cdot)$ are both rings. What about in general? Is there some procedure to give arbitrary abelian groups ring structure?
Every finite Abelian group is a product of finite cyclic groups, so you get a ring for free. Similarly, every finitely generated abelian group is isomorphic to some copies of $\mathbb{Z}$ times a finite Abelian group, so you get a ring for free there as well. The only interesting case remaining would be a non-finitely-generated Abelian group. There are a few more steps that one can probably go, but I don't know how we can get a ring in general.
{ "language": "en", "url": "https://math.stackexchange.com/questions/93409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "81", "answer_count": 3, "answer_id": 2 }
A question on Taylor Series and polynomial Suppose $ f(x)$ that is infinitely differentiable in $[a,b]$. For every $c\in[a,b] $ the series $\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n $ is a polynomial. Is true that $f(x)$ is a polynomial? I can show it is true if for every $c\in [a,b]$, there exists a neighborhood $U_c$ of $c$, such that $$f(x)=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n\quad\text{for every }x\in U_c,$$ but, this equality is not always true. What can I do when $f(x)\not=\sum\limits_{n=0}^\infty \cfrac{f^{(n)}(c)}{n!}(x-c)^n$?
* *All polynomials are power series, but not all power series are polynomials. For a power series $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k = a_0 + a_1 (x-c)^1 + a_2 (x-c)^2 + a_3 (x-c)^3 + \cdots$ to be a polynomial of degree $n$, we need $a_k = 0$ for all $k>n$. *If $ f(x)$ is infinitely differentiable in the interval $[a,b]$, then for every $k \in \mathbb{N}$, $f^{(k)}(x) \in \mathbb{R}$, i.e. it exists as a finite number, and the Taylor series of $f(x)$ in the neighbourhood of $c$ is $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k $. *If the remainder $R_N(x) = f(x) - \sum\limits_{k=0}^N \cfrac{f^{(k)}(c)}{k!}(x-c)^k $ converges to $0$ as $N\to\infty$, then $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k $. *Taylor's inequality: If $|f^{(N+1)}(x)|\leq B$ for all $x$ in the interval $[a, b]$, then the remainder $R_N(x)$ (for the Taylor polynomial of $f(x)$ at $x = c$) satisfies the inequality $$|R_N(x)|\leq \cfrac {B}{(N+ 1)!}|x - c|^{N+1}$$ for all $x$ in $[a,b]$, and if the right hand side of this inequality converges to $0$ as $N\to\infty$ then $R_N(x)$ also converges to $0$. 
According to your question, the supposition that $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k$ is a polynomial for every $c \in [a,b]$ translates to $$\text{given } c\in[a,b],\ \ \exists n_c\in \mathbb N \ (\text{depending on } c) \quad\text{such that}\quad\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k=P_{n_c}(x)$$ $$\text{and} \quad \forall k>n_c, \ k\in \mathbb N, \ {f^{(k)}(c)}=0.$$
According to your question, supposing that $\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k $, $\forall c \in [a,b]$ is a polynomial which translates to $$\text{given } c\in[a,b],\ \ \exists n_c\in \mathbb N \ (\text{ $n_c$ depends on c}) \quad|\quad\sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k=P_{n_c}(x)$$ $$\quad \quad \quad\quad \quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad \quad\quad \quad \text { and} \ \forall k>n_c, \ k\in \mathbb N, \ {f^{(k)}(c)}=0$$ This is true because if one looks at the finite sum $N\ge n_c$, $$\displaystyle\sum^N_{k=0} a_k(x-c)^k=\sum^N_{k=0}\sum^k_{i=0}a_k\binom ki(-1)^{k-i} c^{k-i}x^{i}=\sum^N_{i=0}x^{i}\sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}$$ if this is a polynomial $P_{n_c}(x)$ of degree $n_c$, then $$\forall i>n_c,\ \ \displaystyle \sum^N_{k=i}a_k\binom ki(-1)^{k-i} c^{k-i}=0$$ Solving this system of equations gives that $\forall n_c<k\le N, \ \ a_k=0$ and $$a_k=\cfrac{f^{(k)}(c)}{k!}=0\implies f^{(k)}(c)=0, \ \ \forall k>n_c$$ This holds when $N\rightarrow \infty$ Since $n_c$ depends on each $c\in[a,b]$, it is sufficient to take $\displaystyle n=\max_{c\in[a,b]} (n_c)$ such that for any $c\in [a,b]$ and for any $k>n,\ \ k\in \mathbb N$, we have $f^{(k)}(c)=0$. Thus, the Taylor series is of $f$ is a polynomial of degree $\displaystyle n=\max_{c\in[a,b]} (n_c)$ because $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$. At this point it is sufficient to prove that $\displaystyle f(x) = \sum_{k=0}^\infty a_k \left( x-c \right)^k=P_n(x)$ using the Taylor Remainder Theorem (#4). We've already found out that $f^{(k)}(c) = 0,\space \forall k>n$, thus $ f^{(n+1)}(x) = 0$ or simply $ f^{(n+1)}(x) \le 0$ (to work with inequalities) which implies that $B = 0$. At this point it is clear that $|R_N(x)|≤ \cfrac {B}{(N+ 1)!}|x − c|^{N+1} = 0$ and we can conclude that $R_N(x)$ converges to $0$ and that $f(x) = \sum\limits_{k=0}^\infty \cfrac{f^{(k)}(c)}{k!}(x-c)^k = P_n(x)$. 
$f$ is a polynomial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/93452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "41", "answer_count": 2, "answer_id": 1 }
Absolute sizes of a two-dimensional square in perspective Given is the following square in perspective view: * *The square has an edge length of 500 feet. *The square is rotated by 45 degree. *The viewer is 1600 feet away from the square's center. Needed are formulas for the following sizes in a two-dimensional view (trapezium): * *width w *height h1 *height h2
You still need to specify some things. Where is the $1600$ foot distance measured to? The near edge, the center, the far edge? You also have nothing to set the vertical scale: think of looking at it through a zoom lens. Assuming the $1600$ feet is to the near edge, the angle subtended by the near edge is $2 \arctan \frac{250}{1600}= 17.76^\circ$. The far edge is then $1600+250\sqrt{2}=1953.55$ feet away and the angle is then $14.59^\circ$. The horizontal angle subtended can be found from the law of cosines: $500^2=1600^2+1953.55^2-2*1600*1953.55 \cos \theta$, giving $11.46^\circ$
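The three angle computations can be reproduced in a few lines (my own addition, under the same assumptions: 1600 ft to the near edge, 500 ft edges, 45° rotation):

```python
import math

near = 1600.0
far = near + 250 * math.sqrt(2)                       # ~1953.55 ft to the far edge

near_angle = 2 * math.degrees(math.atan(250 / near))  # ~17.76 degrees
far_angle = 2 * math.degrees(math.atan(250 / far))    # ~14.59 degrees

# Horizontal angle via the law of cosines: 500^2 = near^2 + far^2 - 2*near*far*cos(theta)
cos_theta = (near**2 + far**2 - 500.0**2) / (2 * near * far)
theta = math.degrees(math.acos(cos_theta))            # ~11.5 degrees

print(near_angle, far_angle, theta)
```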
{ "language": "en", "url": "https://math.stackexchange.com/questions/93493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier transform (logarithm) question Can we think, at least in the sense of distribution, about the Fourier transform of $\log(s+x^{2})$? Here '$s$' is a real and positive parameter However $\int_{-\infty}^{\infty}dx\log(s+x^{2})\exp(iux)$ is not well defined. Can the Fourier transform of logarithm be evaluated ??
The answer is given here; $a$ is assumed to be positive, so $a=\sqrt s$: $$\mathcal F\!\left[ \ln\left( x^2 + s \right) \right] = -\sqrt{2 \pi} \left( \left| w \right|^{-1} e^{-\sqrt s \left| w \right|} + 2 \gamma \delta(w) \right).$$ Also, with $\left| x \right|^{-1}$ defined in the same way, for negative $s$ we have $$\mathcal F\!\left[ \ln\left( x^2 + s \right) \right] = -\sqrt{2 \pi} \left( \left| w \right|^{-1} \cos \left( \sqrt {-s} w \right) + 2 \gamma \delta(w) \right).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/93555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Does the golden angle produce maximally distant divisions of a circle? I just ran across a video that claimed that the sequence of multiples of the golden angle produces some sort of optimal spacing around a circle for all possible iterations (this is a little hand-wavy, I'm aware). Of course, any irrational angle has the property that it won't ever repeat and will form a dense set as the limit goes to infinity, but intuitively the golden angle seems to work much better than an irrational very close to $\frac{\pi}{2}$. I'm looking for a way to more formally state this idea of optimal spacing for any iteration as a means to either prove or disprove that the golden angle provides this property (and is it unique?).
Given an arbitrary angle $\theta$ which is not a multiple of $\pi$, to say that $n \theta$ and $m \theta$ are close to each other on the circle is to say that $(n-m)\theta$ is close to $0 \bmod 2\pi$, so to say that the fractional part of $(n-m) \frac{\theta}{2\pi}$ is close to $0$. If you want to avoid this, you want to pick a value of $t = \frac{\theta}{2\pi}$ with the property that, for any integer $n-m$, the fractional part of $(n-m)t$ is never too close to $0$. Phrased another way, for any integer $q$ we never want $qt$ to be too close to another integer $p$. Phrased yet another way, we never want $t$ to be too close to a rational number $\frac{p}{q}$. So we are looking for irrational numbers which are hard to approximate by rationals. The closest rational approximations (in a suitable sense) to an irrational number can be read off from truncations of its continued fraction $$t = a_0 + \frac{1}{a_1 + \frac{1}{a_2 + ...}}$$ and in particular are better approximations if the corresponding entries $a_i$ of the continued fraction are large. That means that the best irrational number for the job is the one with the property that each $a_i$ is as small as possible, so $$1 + \frac{1}{1 + \frac{1}{1 + ...}}$$ which turns out to be precisely the golden ratio. (Why? Because the above number $t$ has the property that $t = 1 + \frac{1}{t}$, or $t^2 = t + 1$, so it is either the golden ratio or its conjugate, and it's greater than $1$ so it must be the golden ratio.)
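The effect is easy to see numerically (my own illustration): compare the smallest circular gap left by $N$ multiples of the golden ratio against a $t$ that is well approximated by $1/7$.

```python
import math

def min_gap(t, n):
    """Smallest circular distance between the points {k*t mod 1 : k = 0..n-1}."""
    pts = sorted((k * t) % 1.0 for k in range(n))
    gaps = [pts[i + 1] - pts[i] for i in range(n - 1)]
    gaps.append(pts[0] + 1.0 - pts[-1])  # wrap-around gap
    return min(gaps)

golden = (math.sqrt(5) - 1) / 2  # fractional part of the golden ratio
near_rational = 1 / 7 + 0.001    # badly spaced: too close to 1/7

for n_points in (25, 50):
    print(n_points, min_gap(golden, n_points), min_gap(near_rational, n_points))
    # the golden-ratio points leave a noticeably larger minimum gap
```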
{ "language": "en", "url": "https://math.stackexchange.com/questions/93623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Elementary proof of $\mathbb{Q}(\zeta_n)\cap \mathbb{Q}(\zeta_m)=\mathbb{Q}$ when $\gcd(n,m)=1$. In an answer to another question I used the fact that $\mathbb{Q}(\zeta_m)\subseteq \mathbb{Q}(\zeta_n)$ if and only if $m$ divides $n$ (here $\zeta_n$ stands for a primitive $n$th root of unity, Edit: and neither $m$ nor $n$ is twice an odd number; see KCd comments below). More generally, one can show that $\mathbb{Q}(\zeta_n)\cap \mathbb{Q}(\zeta_m)=\mathbb{Q}$ when $\gcd(n,m)=1$. The only proof of this fact that comes to mind uses facts about discriminants of cyclotomic extensions, and the fact that every non-trivial number field extension over $\mathbb{Q}$ ramifies at least at one prime (see, for instance, Washington, "Introduction to Cyclotomic Fields", Chapter 2, Proposition 2.4). Since the original question that I was trying to answer was somewhat elementary, I was left wondering if there are more elementary proofs of the fact $$\mathbb{Q}(\zeta_n)\cap \mathbb{Q}(\zeta_m)=\mathbb{Q}, \text{ when } \gcd(n,m)=1. $$ By "elementary proof" here I mean some proof that does not involve algebraic number theory results about discriminants, or ramification of primes in rings of integers of number fields. Can anyone think of an elementary proof? Thanks!
This is a complement to Dylan's answer: we prove that $\mathbb{Q}(\zeta_{mn})=\mathbb{Q}(\zeta_m,\zeta_n)$. Since $(\zeta_m)^{mn}=1$ and $(\zeta_n)^{mn}=1$, both $\zeta_m$ and $\zeta_n$ are $mn$-th roots of unity, hence powers of $\zeta_{mn}$; this gives the inclusion $\mathbb{Q}(\zeta_{mn})\supseteq\mathbb{Q}(\zeta_m,\zeta_n)$. Conversely, since $\gcd(m,n)=1$ there are integers $a,b$ with $am+bn=1$, and then $\zeta_{mn}=\zeta_{mn}^{am+bn}=(\zeta_{mn}^m)^a(\zeta_{mn}^n)^b$; here $\zeta_{mn}^m$ is a primitive $n$-th root of unity and $\zeta_{mn}^n$ is a primitive $m$-th root of unity, so $\zeta_{mn}\in\mathbb{Q}(\zeta_m,\zeta_n)$, which gives the reverse inclusion. I believe this is as elementary as it gets.
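One can sanity-check numerically (my own addition) that $\zeta_{mn}$ is recovered from $\zeta_{mn}^m$ and $\zeta_{mn}^n$ through a Bézout relation $am+bn=1$:

```python
import cmath
import math

m, n = 4, 9        # coprime
a, b = -2, 1       # Bezout coefficients: a*m + b*n = 1
assert a * m + b * n == 1

zeta = cmath.exp(2j * math.pi / (m * n))  # primitive mn-th root of unity
zeta_n = zeta**m                          # a primitive n-th root of unity
zeta_m = zeta**n                          # a primitive m-th root of unity

rebuilt = zeta_n**a * zeta_m**b           # = zeta^(a*m + b*n) = zeta
print(abs(rebuilt - zeta))                # ~0 up to rounding
```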
{ "language": "en", "url": "https://math.stackexchange.com/questions/93691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 2, "answer_id": 0 }
How many ordered triple $ (p,a,b) $ is possible such that $p^a=b^4+4$? If we have a prime number $p$ and two natural numbers $a$ and $b$ such that $p^a=b^4+4$, then how many such ordered triplets $(p,a,b)$ exist? What should be the strategy to solve this one? The only I can see is $(5,1,1)$, is this the only one? if yes, how could we prove that?
This is an old contest problem, I wish I could remember where I first saw it. Anyway, André's comment that $$b^4+4=(b^2-2b+2)(b^2+2b+2)$$ is the key to a solution. Looking modulo $16$, we see that $b^4+4$ cannot be a power of $2$, so $p$ is odd. For $b>1$, both factors will be strictly greater than $1$, so that if $p^a=b^4+4$ then $p$ must divide both $b^2-2b+2$ and $b^2+2b+2$. Since $\gcd(b^2-2b+2,b^2+2b+2)$ must divide the difference $4b$ of the two factors, and $p$ is odd and divides both terms, we see that $p\mid b$. But then $p$ divides $(b^2-2b+2)-b(b-2)=2$, which is impossible for odd $p$. If $b=1$, then we get the one special case.
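As a sanity check of the conclusion (a brute-force script of my own, with an arbitrary search bound): factoring $b^4+4$ for small $b$ finds no prime-power value other than $5^1$ at $b=1$.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

solutions = []
for b in range(1, 200):
    m = b**4 + 4
    for p in range(2, m + 1):
        if m % p == 0:                   # p is the smallest prime factor of m
            a = 0
            while m % p == 0:
                m //= p
                a += 1
            if m == 1 and is_prime(p):   # m was a pure prime power p^a
                solutions.append((p, a, b))
            break
```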
{ "language": "en", "url": "https://math.stackexchange.com/questions/93746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What are the interesting applications of hyperbolic geometry? I am aware that, historically, hyperbolic geometry was useful in showing that there can be consistent geometries that satisfy the first 4 axioms of Euclid's elements but not the fifth, the infamous parallel lines postulate, putting an end to centuries of unsuccessful attempts to deduce the last axiom from the first ones. It seems to be, apart from this fact, of genuine interest since it was part of the usual curriculum of all mathematicians at the beginning of the century and also because there are so many books on the subject. However, I have not found mention of applications of hyperbolic geometry to other branches of mathematics in the few books I have sampled. Do you know any or where I could find them?
Hyperbolic manifolds are topologically interesting: a hyperbolic manifold $M$ is an Eilenberg-MacLane space $K(\pi_1(M), 1)$ for its fundamental group. It follows that the de Rham cohomology of a flat vector bundle on a hyperbolic manifold computes the group cohomology of its fundamental group. Thus, hyperbolic manifolds are also interesting for group theorists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/93765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 16, "answer_id": 14 }
How to solve a literal equation How do I solve $2^{x-1}=3^{x+a}$? I cannot solve it and have spent an hour on it trying many different ways. Please help me! Thank you!
To get the variables in a manageable place, take the logarithm of both sides of the equation (it does not matter what base you use; the power law for logarithms will let you bring the powers in front of the logarithm): $$ 2^{x-1}=3^{x+a}\iff\ln 2^{x-1}=\ln 3^{x+a}\iff (x-1)\ln 2= (x+a)\ln 3 $$ The above is valid, since logarithm functions are one-to-one, and since the first equation above has positive quantities on both sides. Generally, if you have an equation with the variable appearing in an exponent, you can try (perhaps after a bit of algebra) taking logarithms to produce a more manageable equation as in the case above. Finishing this problem: $$\eqalign{ &(x-1)\ln 2= (x+a)\ln 3\cr \iff& x\ln 2-\ln 2 = x\ln 3+a\ln 3\cr \iff &x\ln 2-x\ln 3= \ln 2+a\ln 3\cr \iff &x (\ln 2- \ln 3)= \ln 2+a\ln 3\cr \iff &x = {\ln 2+a\ln 3\over \ln 2- \ln 3 }\cr &= { \ln(2\cdot 3^a)\over \ln(2/3)}. } $$
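To double-check the closed form numerically (the value of $a$ below is an arbitrary choice of mine):

```python
import math

def solve(a):
    """Solution x of 2^(x-1) = 3^(x+a) derived above."""
    return math.log(2 * 3**a) / math.log(2 / 3)

a = 1.7
x = solve(a)
lhs = 2 ** (x - 1)
rhs = 3 ** (x + a)
```

Plugging the formula back into both sides of the original equation should give equal values up to rounding.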
{ "language": "en", "url": "https://math.stackexchange.com/questions/93860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trying to figure out how an approximation of a logarithmic equation works The physics books I'm reading gives $$\triangle\tau=\frac{2}{c}\left(1-\frac{2m}{r_{1}}\right)^{1/2}\left(r_{1}-r_{2}+2m\ln\frac{r_{1}-2m}{r_{2}-2m}\right).$$ We are then told $2m/r$ is small for $r_{2}<r<r_{1}$ which gives the approximation$$\triangle\tau\approx\frac{2}{c}\left(r_{1}-r_{2}-\frac{m\left(r_{1}-r_{2}\right)}{r_{1}}+2m\ln\left(\frac{r_{1}}{r_{2}}\right)\right).$$ I can see how $$\frac{2}{c}\left(1-\frac{2m}{r_{1}}\right)^{1/2}\approx\frac{2}{c}$$ but can't see how the rest of it appears. It seems to be saying that$$2\ln\frac{r_{1}-2m}{r_{2}-2m}\approx\left(-\frac{\left(r_{1}-r_{2}\right)}{r_{1}}+2\ln\left(\frac{r_{1}}{r_{2}}\right)\right)$$ I've tried getting all the lns on one side, and also expanding $\ln\frac{r_{1}-2m}{r_{2}-2m}$ to $\ln\left(r_{1}-2m\right)-\ln\left(r_{2}-2m\right)$ and generally juggling it all about but with no luck. Any suggestions or hints from anyone? It's to do with the gravitational time delay effect. It seems a bit more maths than physics which is why I'm asking it here. Many thanks
As Raskolnikov says, the first approximation is actually $$ \frac{2}{c}\left(1-\frac{2m}{r_1}\right)^{1/2}\approx\frac{2}{c}\left(1-\frac{m}{r_1}\right) $$ This is a valid approximation because the power series for $(1-x)^{1/2}$ is $$1 -\frac{1}{2}x+\cdots$$ So as long as $x=\frac{2m}{r_1}$ is close to zero, the above approximation is a valid first-degree approximation. Expanding this substitution, $$ \begin{align} \Delta\tau&\approx\frac{2}{c}\left(1-\frac{m}{r_1}\right)\left(r_{1}-r_{2}+2m\ln\frac{r_{1}-2m}{r_{2}-2m}\right)\\ & = \frac{2}{c}\left(r_{1}-r_{2}-\frac{m(r_1-r_2)}{r_1}+2m\left(1-\frac{m}{r_1}\right)\ln\frac{r_{1}-2m}{r_{2}-2m}\right) \end{align} $$ So the second approximation that has been made is $$ \begin{align} \left(1-\frac{m}{r_1}\right)\ln\frac{r_{1}-2m}{r_{2}-2m}&\approx\ln\left(\frac{r_1}{r_2}\right)\\ \end{align} $$ This is equivalent to the following approximation using logarithm rules $$ \begin{align} \left(1-\frac{m}{r_1}\right)\left(\ln(r_1)+\ln(1-2m/r_1)-\ln(r_2)-\ln(1-2m/r_2)\right)&\approx\ln\left(\frac{r_1}{r_2}\right)\\ \end{align} $$ Now you just drop all the terms that have $\frac{m}{r_i}$, and your approximation is another logarithm rule. It is valid to drop these terms, because presumably $\ln(r_1)$, $\ln(r_2)$, and $1$ are relatively much larger.
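A rough numeric check that the two expressions really do agree when $2m/r$ is small (the particular values of $c$, $m$, $r_1$, $r_2$ below are arbitrary choices of mine, not from the book):

```python
import math

c = 1.0
m = 1e-4
r1, r2 = 10.0, 2.0

exact = (2 / c) * math.sqrt(1 - 2 * m / r1) * (
    r1 - r2 + 2 * m * math.log((r1 - 2 * m) / (r2 - 2 * m))
)
approx = (2 / c) * (
    r1 - r2 - m * (r1 - r2) / r1 + 2 * m * math.log(r1 / r2)
)
rel_err = abs(exact - approx) / abs(exact)
```

The first-order terms in $m$ cancel between the two expressions, so the relative error should be of order $(m/r)^2$, far smaller than the $m$-corrections themselves.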
{ "language": "en", "url": "https://math.stackexchange.com/questions/93898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why is $0$ excluded in the definition of the projective space for a vector space? For a vector space $V$, $P(V)$ is defined to be $(V \setminus \{0 \}) / \sim$, where two non-zero vectors $v_1, v_2$ in $V$ are equivalent if they differ by a non-zero scalar $λ$, i.e., $v_1 = \lambda v_2$. I wonder why vector $0$ is excluded when considering the equivalent classes, since $\{0\}$ can be an equivalent class too? Thanks!
You could do this, but the resulting space would not be as useful. For example, suppose $V$ is $\mathbb{R}^n$ equipped with its usual topology. Then the projective space $P \mathbb{R}^n$ can be made into a topological space by giving it the quotient topology. If you include 0 as in your suggestion, the projective space would not be Hausdorff in this topology; in fact, the only open neighborhood of the equivalence class $\{0\}$ is the entire quotient space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/93956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Irrational rotation and recurrence time Let $\mathbb{T}=\mathbb{R}/\mathbb{Z}$ be the torus, and $\alpha\in(0,1)$ be an irrational number, then the transformation $T$ defined by $Tx=x+\alpha$ is the irrational rotation on $\mathbb{T}$. Now for a neighborhood $(-\epsilon,\epsilon)\pmod 1$ of the point $0$ we consider the set of the recurrence time $$A = \{n:T^n(0)\in (-\epsilon,\epsilon)\pmod 1 \} = \{n:n\alpha\in(-\epsilon,\epsilon)\pmod 1 \} \;.$$ For a quadratic polynomial, for example $n^2$, I want to ask whether we can find two numbers $\beta,\delta\in (0,1)$,such that the set of polynomial recurrence time $\{n:n^2\beta\in(-\delta,\delta)\pmod 1\}$ is contained in the set $A$.
Sure! Even with $\beta=\alpha$ and even much more than you asked for. This comes from Fürstenberg's proof of the equidistribution of $n^2 \alpha$. There are many places to read about it; one of them is section 3 here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Approximation algorithms used in exact algorithms Approximation algorithms might give output up to some constant factor. This is a bit less satisfying than exact algorithms. However, constant factors are ignored in time complexity. So I wonder if the following trick is possible or was used, to solve some problem $B \circ A$: * *Use an approximation algorithm solving problem $A$ to get solution $S$ within constant factor; *Use an exact algorithm, solving problem $B$, whose runtime depends on weight of $S$ but works as long as $S$ is a correct solution. This way the approximation is a "subprocedure" of an exact algorithm, and the constant factor lost in step 1 is swallowed in step 2.
Question was answered at cstheory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $G/N$ is an abelian group Let $G$ be the group of all $2 \times 2$ matrices of the form $\begin{pmatrix} a & b \\ 0 & d\end{pmatrix}$ where $ad \neq 0$ under matrix multiplication. Let $N=\left\{A \in G \; \colon \; A = \begin{pmatrix}1 & b \\ 0 & 1\end{pmatrix} \right\}$ be a subset of the group $G$. Prove that $N$ is a normal subgroup of $G$ and prove that $G/N$ is abelian group. Here is my attempt! To prove $N$ is normal I consider the group homomorphism $f \colon G \to \mathbb R^*$ given by $f(B) = \det(B)$ for all $B$ in $G$. And I see that $f(N)$ is all the singleton $\{1\}$ since $\{1\}$ as a subgroup of $\mathbb R^*$ is normal, it follows that $N$ is also normal. Is this proof helpful here? Then how to prove that $G/N$ is Abelian? I know $G/N$ is a collection of left cosets. Thank you.
Here is another solution. Consider the equivalence relation $$a\equiv b\pmod N \iff a^{-1}b\in N$$ where $a$ and $b$ are elements of the group $G$ and $N$ is the given subgroup. Note that the equivalence class $[a]=aN$ is the left coset of the element $a$. Every invertible upper triangular matrix is equivalent to a diagonal matrix: if $$A=\begin{pmatrix} a & b \\ 0 & d \end{pmatrix} \quad\text{and}\quad D=\begin{pmatrix} a & 0 \\ 0 & d \end{pmatrix},$$ then $$A^{-1}D=\begin{pmatrix} 1 & -b/a \\ 0 & 1 \end{pmatrix}\in N,$$ so $AN=DN$. Therefore every element of $G/N$ is represented by a diagonal matrix, and since diagonal matrices commute, $$(AN)(BN)=(BN)(AN),$$ i.e. $G/N$ is abelian.
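A concrete check of the key step (the script and the sample matrix are mine, not from the answer): for an invertible upper triangular $A$ and $D=\operatorname{diag}(a,d)$, the product $A^{-1}D$ is unipotent upper triangular, so the coset $AN$ contains the diagonal matrix $D$.

```python
from fractions import Fraction

def mul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(M):
    """Inverse of an upper triangular 2x2 matrix [[a, b], [0, d]], ad != 0."""
    a, b, d = M[0][0], M[0][1], M[1][1]
    return [[1 / a, -b / (a * d)],
            [Fraction(0), 1 / d]]

A = [[Fraction(3), Fraction(5)], [Fraction(0), Fraction(2)]]
D = [[Fraction(3), Fraction(0)], [Fraction(0), Fraction(2)]]
U = mul(inv(A), D)   # should be unipotent: [[1, -b/a], [0, 1]]
```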
{ "language": "en", "url": "https://math.stackexchange.com/questions/94162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 2 }
Generalize the equality $\frac{1}{1\cdot2}+\frac{1}{2\cdot3}+\cdots+\frac{1}{n\cdot(n+1)}=\frac{n}{n+1}$ I'm reading a book The Art and Craft of Problem Solving. I've tried to conjecture a more general formula for sums where denominators have products of three terms. I've "got my hands dirty", but don't see any regularity in numerators. Please, write down your ideas.
Too long for a comment; some of the answers posted are longer than they need to be. You don't need the full partial fraction decomposition or mathematica/calculators: $$\frac{1}{i(i+k)}= \frac{1}{k} \left( \frac{1}{i} - \frac{1}{i+k} \right) \,.$$ Thus $$\frac{1}{i(i+1)\cdots(i+k)}= \frac{1}{k} \left( \frac{1}{i(i+1)\cdots(i+k-1)} - \frac{1}{(i+1)(i+2)\cdots(i+k)} \right) \,.$$ Summing, the right side is telescopic, thus $$\sum_{i=1}^n\frac{1}{i(i+1)\cdots(i+k)}=\frac{1}{k} \left( \frac{1}{k!} - \frac{n!}{ (n+k)!} \right)$$
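An exact-arithmetic check of the final formula for small $n$ and $k$ (my own verification script):

```python
from fractions import Fraction
from math import factorial

def lhs(n, k):
    total = Fraction(0)
    for i in range(1, n + 1):
        prod = 1
        for j in range(k + 1):
            prod *= i + j          # i (i+1) ... (i+k)
        total += Fraction(1, prod)
    return total

def rhs(n, k):
    return Fraction(1, k) * (Fraction(1, factorial(k))
                             - Fraction(factorial(n), factorial(n + k)))

checks = all(lhs(n, k) == rhs(n, k) for n in range(1, 8) for k in range(1, 5))
```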
{ "language": "en", "url": "https://math.stackexchange.com/questions/94216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Purpose Of Adding A Constant After Integrating A Function I would like to know the whole purpose of adding a constant termed constant of integration every time we integrate an indefinite integral $\int f(x)dx$. I am aware that this constant "goes away" when evaluating definite integral $\int_{a}^{b}f(x)dx $. What does that constant have to do with anything? Why is it termed as the constant of integration? Where does it come from? The motivation for asking this question actually comes from solving a differential equation $$x \frac{dy}{dx} = 5x^3 + 4$$ By separation of $dy$ and $dx$ and integrating both sides, $$\int dy = \int\left(5x^2 + \frac{4}{x}\right)dx$$ yields $$y = \frac{5x^3}{3} + 4 \ln(x) + C .$$ I've understood that $\int dy$ represents adding infinitesimal quantities $dy$, yielding $y$, but I'm doubtful about the arbitrary constant $C$.
Say every month you earn 10 dollars (as interest on a principal of 1000 dollars, for instance) and put it aside in a box. What have you got after $m$ months? Not necessarily $10m$ dollars: that is so only if the box contained nothing at the start. If it contained 35 dollars at the start, then that initial amount is the constant added to the accumulated (integrated) amount $A$: $$A = 10m + c = 10m + 35 \text{ dollars.}$$ The same applies to more complicated integrals: the constant of integration is the initial value, and it is pinned down when an initial or boundary value is prescribed, as when solving differential equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 9, "answer_id": 8 }
Simple trigonometry question (angles) I am starting again with trigonometry just for fun and to remember the old days. I was not bad at maths, but I remember nothing about trigonometry... And I'm missing something in this simple question, and I hope you can tell me what. One corner of a triangle has a 60º angle, and the length of the two adjacent sides are in ratio 1:3. Calculate the angles of the other triangle corners. So what we have is the main angle, $60^\circ$, and the adjacent sides, which are $20$ meters (meters for instance). We can calculate the hypotenuse just using $a^2 + b^2 = h^2$. But how to calculate the other angles? Thank you very much and sorry for this very basic question...
This is indeed a very basic question. The information you have is "Side-Angle-Side (SAS)", which is enough to classify your triangle. (You may assume that one side has length 1 and the other has length 3 or just keep it as a variable x and 3x). Then you essentially have to apply the law of cosines, see this page for a detailed solution, or just google for SAS.
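Carrying out the SAS computation numerically for sides $1$ and $3$ with included angle $60^\circ$ (the numbers are my own computation, not from the answer):

```python
import math

b, c, A = 1.0, 3.0, math.radians(60)                   # SAS data
a = math.sqrt(b**2 + c**2 - 2 * b * c * math.cos(A))   # law of cosines
B = math.acos((a**2 + c**2 - b**2) / (2 * a * c))      # angle opposite side b
C = math.pi - A - B                                    # angles sum to pi
```

This gives $a=\sqrt7$ and angles of roughly $19.1^\circ$ and $100.9^\circ$ for the other two corners.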
{ "language": "en", "url": "https://math.stackexchange.com/questions/94321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Need help deriving recurrence relation for even-valued Fibonacci numbers. That would be every third Fibonacci number, e.g. $0, 2, 8, 34, 144, 610, 2584, 10946,...$ Empirically one can check that: $a(n) = 4a(n-1) + a(n-2)$ where $a(-1) = 2, a(0) = 0$. If $f(n)$ is $\operatorname{Fibonacci}(n)$ (to make it short), then it must be true that $f(3n) = 4f(3n - 3) + f(3n - 6)$. I have tried the obvious expansion: $f(3n) = f(3n - 1) + f(3n - 2) = f(3n - 3) + 2f(3n - 2) = 3f(3n - 3) + 2f(3n - 4)$ $ = 3f(3n - 3) + 2f(3n - 5) + 2f(3n - 6) = 3f(3n - 3) + 4f(3n - 6) + 2f(3n - 7)$ ... and now I am stuck with the term I did not want. If I do add and subtract another $f(n - 3)$, and expand the $-f(n-3)$ part, then everything would magically work out ... but how should I know to do that? I can prove the formula by induction, but how would one systematically derive it in the first place? I suppose one could write a program that tries to find the coefficients x and y such that $a(n) = xa(n-1) + ya(n-2)$ is true for a bunch of consecutive values of the sequence (then prove the formula by induction), and this is not hard to do, but is there a way that does not involve some sort of "Reverse Engineering" or "Magic Trick"?
Let $S$ be the shift operator on sequences (as in Bill Dubuque's answer). Note that the Fibonacci sequence is killed by $S^2-S-1$. The Fibonacci sequence will then be killed by any "polynomial" multiple of $S^2-S-1$. To get a recurrence for every $k^{\rm{th}}$ term, all we need to do is find a multiple of $S^2-S-1$ that only involves powers of $S^k$. First note that $S^2-S-1=(S-a)(S-b)$ where $a=\phi$ (the golden ratio) and $b=-1/\phi$. Consider the operator $(S^k-a^k)(S^k-b^k)=S^{2k}-(a^k+b^k)S^k+(ab)^k$. It is a polynomial multiple of $S^2-S-1$, so it kills the Fibonacci sequence. It only involves powers of $S^k$. Recall that one formula for the $k^{\rm{th}}$ Lucas number is $L_k=a^k+b^k$, and note that $ab=-1$. Thus, we get that $S^{2k}-L_kS^k+(-1)^k$ kills the Fibonacci sequence. Therefore, in summary, we get $$ F_{n+2k}=L_kF_{n+k}-(-1)^kF_n\tag{1} $$ For example, $$ F_{n+2}=F_{n+1}+F_n\tag{k=1} $$ $$ F_{n+4}=3F_{n+2}-F_n\tag{k=2} $$ $$ F_{n+6}=4F_{n+3}+F_n\tag{k=3} $$ $$ F_{n+8}=7F_{n+4}-F_n\tag{k=4} $$ $$ F_{n+10}=11F_{n+5}+F_n\tag{k=5} $$
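The identity $(1)$ is easy to test by machine (verification script is mine):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def lucas(n):
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# F_{n+2k} = L_k F_{n+k} - (-1)^k F_n for a grid of small n, k
ok = all(
    fib(n + 2 * k) == lucas(k) * fib(n + k) - (-1)**k * fib(n)
    for n in range(0, 20)
    for k in range(1, 8)
)
```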
{ "language": "en", "url": "https://math.stackexchange.com/questions/94359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 5 }
Solving quadratic equation $$\frac{1}{x^2} - 1 = \frac{1}{x} -1$$ Rearranging it I get: $1-x^2=x-x^2$, and so $x=1$. But the question I'm doing says to find 2 solutions. How would I find the 2nd solution? Thanks.
$$\frac{1}{x^2} - 1 = \frac{1}{x} -1$$ $$\frac{1}{x^2} = \frac{1}{x} $$ Cross-multiplying, $$x^2-x=0$$ $$x(x-1)=0$$ The roots of this polynomial equation are $x=1$ and $x=0$. Note, however, that $x=0$ is extraneous (it was introduced by clearing denominators): the original equation is undefined at $x=0$, so $x=1$ is the only solution of the equation as given.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 4 }
Proof that the set of odd positive integers greater then 3 is countable I found one problem which asks following: Show that the set of odd positive integers greater then 3 is countable. At the begining I was thinking that such numbers could be represented by $2k+1$,where $k>1$; but in the answers paper there was written as $2n+3$ or general function is $$f(n)=2n+3$$ and when I was thinking how to prove that such answer is countable, the answer paper said this function is a one-to-one correspondence from the set of positive numbers set to the set of positive odd integers greater to 3. My question is: is it enough to prove a one-to-one correspondence between two sets, that one of them is countable. If yes, then once my lecturer ask me to proof that rational numbers are countable, so in this case if I represent rational numbers by following function from set of positive numbers: $$f(n)=\frac{n+1}{n}$$ or maybe $f(n)=\frac{n}{n+1}$. They both are one-to-one correspondences from the set of positive numbers to the set of rational numbers (positives sure). Please help me, is my logic correct or not?
The other answers are focused on explicitly constructing an bijection to the natural numbers. In practice, one would just note that the set in the question is a subset of a countably infinite set, $\mathbb{N}$, and so it is either finite or countably infinite. It is infinite, so it must be countably infinite. This requires some justification, but it is not too difficult. See the first chapter of Topology by Munkres, for example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Matrix/Vector Derivative I am trying to compute the derivative:$$\frac{d}{d\boldsymbol{\mu}}\left( (\mathbf{x} - \boldsymbol{\mu})^\top\boldsymbol{\Sigma} (\mathbf{x} - \boldsymbol{\mu})\right)$$where the size of all vectors ($\mathbf{x},\boldsymbol{\mu}$) is $n\times 1$ and the size of the matrix ($\boldsymbol{\Sigma}$) is $n\times n$. I tried to break this down as $$\frac{d}{d\boldsymbol{\mu}}\left( \mathbf{x}^\top\boldsymbol{\Sigma} \mathbf{x} - \mathbf{x}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} - \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \mathbf{x} + \boldsymbol{\mu}^\top\boldsymbol{\Sigma} \boldsymbol{\mu} \right) $$ yielding $$(\mathbf{x} + \boldsymbol{\mu})^\top\boldsymbol{\Sigma} + \boldsymbol{\Sigma}(\boldsymbol{\mu} - \mathbf{x})$$ but the dimensions don't work: $1\times n + n\times 1$. Any help would be greatly appreciated. -C
In full technicality: $$\frac{\partial}{\partial u_k}\left(\sum_{i,j=1}^n \sigma_{ij}(x_i-u_i)(x_j-u_j)\right)=\sum_{i,j=1}^n\sigma_{ij}\left[-\delta_{ik}(x_j-u_j)-(x_i-u_i)\delta_{jk}\right]$$ $$=-\sum_{l=1}^n (\sigma_{kl}+\sigma_{lk})(x_l-u_l)=\left[(\Sigma+\Sigma^T)(\vec{u}-\vec{x})\right]_k.$$ IOW you should have gotten a $\Sigma^T(u-x)$ instead of $(x+u)^T\Sigma$. Note $a^T\Phi b=b^T\Phi^T a$.
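A finite-difference check of the resulting formula $(\Sigma+\Sigma^T)(\vec u-\vec x)$ on a small example (the matrix and vectors below are arbitrary choices of mine; $\Sigma$ is deliberately non-symmetric):

```python
def quad(u, x, S):
    n = len(x)
    return sum(S[i][j] * (x[i] - u[i]) * (x[j] - u[j])
               for i in range(n) for j in range(n))

def grad_formula(u, x, S):
    n = len(x)
    return [sum((S[k][l] + S[l][k]) * (u[l] - x[l]) for l in range(n))
            for k in range(n)]

S = [[2.0, 1.0], [0.5, 3.0]]
x = [1.0, -2.0]
u = [0.3, 0.7]

h = 1e-6
numeric = []
for k in range(2):
    up, um = list(u), list(u)
    up[k] += h
    um[k] -= h
    numeric.append((quad(up, x, S) - quad(um, x, S)) / (2 * h))
analytic = grad_formula(u, x, S)
```

Since the function is quadratic, the central difference should agree with the formula up to rounding error.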
{ "language": "en", "url": "https://math.stackexchange.com/questions/94562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
How many smooth functions are non-analytic? We know from example that not all smooth (infinitely differentiable) functions are analytic (equal to their Taylor expansion at all points). However, the examples on the linked page seem rather contrived, and most smooth functions that I've encountered in math and physics are analytic. How many smooth functions are not analytic (in terms of measure or cardinality)? In what situations are such functions encountered? Are they ever encountered outside of real analysis (e.g. in physics)?
I came upon this quirky example: find smooth $f(x)$ such that $f(f(x)) = \sin x$ with $f(0)=0$ and $f'(0) = 1.$ The fact that this can be solved is quite nontrivial. It turns out that $f$ is analytic on each interval $(k \pi, (k+1) \pi)$ but only $C^\infty$ at $0$ and the other multiples of $\pi.$ Go figure. The answer is periodic, resembles the sine wave but with slightly larger amplitude. This can be seen since, for $0 < x \leq \frac{\pi}{2},$ the function is between $\sin x$ and $x.$ See my answer to my own question at https://mathoverflow.net/questions/45608/formal-power-series-convergence/46765#46765
{ "language": "en", "url": "https://math.stackexchange.com/questions/94634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 1 }
Why is the pullback completely determined by $d f^\ast = f^\ast d$ in de Rham cohomology? Fix a smooth map $f : \mathbb{R}^m \rightarrow \mathbb{R}^n$. Clearly this induces a pullback $f^\ast : C^\infty(\mathbb{R}^n) \rightarrow C^\infty(\mathbb{R}^m)$. Since $C^\infty(\mathbb{R}^n) = \Omega^0(\mathbb{R}^n)$ (the space of zero-forms) by definition, we consider this as a map $f^\ast : \Omega^0(\mathbb{R}^n) \rightarrow \Omega^0(\mathbb{R}^m)$. We want to extend $f^\ast$ to the rest of the de Rham complex in such a way that $d f^\ast = f^\ast d$. Bott and Tu claim (Section I.2, right before Prop 2.1), without elaboration, that this is enough to determine $f^\ast$ . I can see why this forces e.g. $\displaystyle\sum_{i=1}^n f^\ast \left[ \frac{\partial g}{\partial y_i} d y_i \right] = \sum_{i=1}^n f^* \left[ \frac{\partial g}{\partial y_i}\right] d(y_i \circ f)$, but I don't see why this forces each term of the LHS to agree with each term of the RHS -- it's not like you can just pick some $g$ where $\partial g/\partial y_i$ is some given function and the other partials are zero.
Consider $h = g \circ Q_{i}$ where $Q_{i}$ sends $(a_1,...,a_n)$ to $(0,\dotsc, a_i, \dotsc, 0)$. Apply your observation to $h$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
$p(x)$ divided by $x-c$ has remainder $p(c)$? [Polynomial Remainder Theorem] This is from Pinter, A Book of Abstract Algebra, p.265. Given $p(x) \in F[x]$ where $F$ is a field, I would like to show that $p(x)$ divided by $x-c$ has remainder $p(c)$. This is easy if $c$ is a root of $p$, but I don't see how to prove it if $c$ is not a root.
Here's how it goes : the polynomials $\{1, (x-c), (x-c)^2, \dots \}$ form a basis of the vector space $F[x]$. Write $$ p(x) = a_0 + a_1 (x-c) + a_2 (x-c)^2 + \dots + a_n (x-c)^n. $$ Then $$ p(x) = (x-c) \left( a_1 + a_2(x-c) + \dots + a_n (x-c)^{n-1} \right)+ a_0 $$ and you can see that $p(c) = a_0$. Hope that helps,
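The same fact can be checked by machine via synthetic division (the sample polynomial is my own choice): the remainder on dividing by $x-c$ equals $p(c)$.

```python
def synthetic_division(coeffs, c):
    """Divide p(x) by (x - c); coeffs listed highest degree first.
    Returns (quotient coefficients, remainder)."""
    q = [coeffs[0]]
    for a in coeffs[1:]:
        q.append(a + c * q[-1])
    return q[:-1], q[-1]

def eval_poly(coeffs, t):
    v = 0
    for a in coeffs:
        v = v * t + a        # Horner's rule
    return v

p = [2, -3, 0, 5]            # 2x^3 - 3x^2 + 5
c = 4
quotient, r = synthetic_division(p, c)
```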
{ "language": "en", "url": "https://math.stackexchange.com/questions/94728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Why do we reverse inequality sign when dividing by negative number? We all learned in our early years that when dividing both sides by a negative number, we reverse the inequality sign. Take $-3x < 9$ To solve for $x$, we divide both sides by $-3$ and get $$x > -3.$$ Why is the reversal of inequality? What is going in terms of number line that will help me understand the concept better?
Let $c$ be a negative number. In the case of multiplying both sides of an inequality by $c$, note that the function $f$ defined by $f(x) = cx$ is strictly decreasing on the entire real line. By definition, this means that if $x_{1} < x_{2},$ then $f\left(x_{1}\right) > f\left(x_{2}\right)$ (i.e. $cx_{1} > cx_{2}$). Incidentally, this is equivalent to $x_{2} > x_{1}$ implying $f\left(x_{2}\right) < f\left(x_{1}\right)$, so $f$ also reverses both types of strict inequalities. Moreover, it is not difficult to see that a strictly decreasing function reverses both types of non-strict inequalities. As for dividing both sides by a negative number, note that the function $g$ defined by $g(x) = \frac{1}{c}x$ is strictly decreasing on the entire real line. The same explanation can be used for taking the reciprocal of both sides of an inequality, when both sides are positive or when both sides are negative. In general, if a function $h$ is strictly decreasing on an interval $I$, then we can "take $h$" of both sides of an inequality as long as both sides belong to $I$ and we reverse the inequality. Similarly, strictly increasing functions preserve inequalities. This gives a sometimes useful application of the calculus task of determining on what interval(s) a function might be increasing or decreasing, by the way. For example, $\arctan(x)$ is strictly increasing on the entire real line, so you can take the arctangent of both sides of an inequality (keeping the inequality type the same).
{ "language": "en", "url": "https://math.stackexchange.com/questions/94790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 9, "answer_id": 1 }
What is the result of $\lim\limits_{x \to 0}(1/x - 1/\sin x)$? Find the limit: $$\lim_{x \rightarrow 0}\left(\frac1x - \frac1{\sin x}\right)$$ I am not able to find it because I don't know how to prove or disprove $0$ is the answer.
If you believe (or know how to show) that the function $\displaystyle{f(x)=\frac{x}{\sin(x)}}$, $x\neq 0$, $f(0)=1$ is differentiable at $0$, then because $f$ is even, it follows that $f'(0)=0$. Note that $\frac{1}{x}-\frac{1}{\sin(x)}=-\frac{f(x)-f(0)}{x}$, so the limit in question is $-f'(0)=0$.
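Numerically the difference does go to $0$; near the origin it behaves like $-x/6$ (the sample points below are mine):

```python
import math

def g(x):
    return 1 / x - 1 / math.sin(x)

values = [g(10.0 ** (-k)) for k in range(1, 7)]
```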
{ "language": "en", "url": "https://math.stackexchange.com/questions/94864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 8, "answer_id": 1 }
How to show if $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$? Statement: If $ \lambda$ is an eigenvalue of $AB^{-1}$, then $ \lambda$ is an eigenvalue of $ B^{-1}A$ and vice versa. One way of the proof. We have $B(B^{-1}A ) B^{-1} = AB^{-1}. $ Assuming $ \lambda$ is an eigenvalue of $AB^{-1}$ then we have, $$\begin{align*} \det(\lambda I - AB^{-1}) &= \det( \lambda I - B( B^{-1}A ) B^{-1} )\\ &= \det( B(\lambda I - B^{-1}A ) B^{-1})\\ &= \det(B) \det\bigl( \lambda I - B^{-1}A \bigr) \det(B^{-1})\\ &= \det(B) \det\bigl( \lambda I - (B^{-1}A )\bigr) \frac{1}{ \det(B) }\\ \ &= \det( \lambda I - B^{-1}A ). \end{align*}$$ It follows that $ \lambda$ is an eigenvalue of $ B^{-1}A.$ The other side of the lemma can also be proved similarly. Is there another way how to prove the statement?
Even if $A$ is $n\times m$ and $B$ is $m\times n$ with $m\le n$, we have $$ \det(\lambda I_n-AB)=\lambda^{n-m}\det(\lambda I_m-BA)\tag{1} $$ Proof: Drawing from an answer of julien's, $$ \begin{bmatrix}I_n&-A\\0&\lambda I_m\end{bmatrix} \begin{bmatrix}\lambda I_n&A\\B&I_m\end{bmatrix} =\begin{bmatrix}\lambda I_n-AB&0\\\lambda B&\lambda I_m\end{bmatrix}\tag{2} $$ $$ \begin{bmatrix}I_n&0\\-B&\lambda I_m\end{bmatrix} \begin{bmatrix}\lambda I_n&A\\B&I_m\end{bmatrix} =\begin{bmatrix}\lambda I_n&A\\0&\lambda I_m-BA\end{bmatrix}\tag{3} $$ Since the determinants on the left sides of $(2)$ and $(3)$ are equal, the determinants on the right side prove $$ \lambda^m\det(\lambda I_n-AB)=\lambda^n\det(\lambda I_m-BA)\tag{4} $$ In the case of square matrices, since the characteristic polynomials are the same, the eigenvalues are the same.
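Identity $(1)$ can be spot-checked in exact integer arithmetic for a non-square pair (the matrices below are arbitrary choices of mine):

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    """Determinant by Laplace expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def char_det(M, lam):
    """det(lam*I - M) for a square integer matrix M and integer lam."""
    n = len(M)
    return det([[(lam if i == j else 0) - M[i][j] for j in range(n)]
                for i in range(n)])

A = [[1, 2], [3, 4], [5, 6]]      # n x m = 3 x 2
B = [[7, 8, 9], [1, 0, 2]]        # m x n = 2 x 3
AB, BA = matmul(A, B), matmul(B, A)
n, m = 3, 2
ok = all(char_det(AB, lam) == lam**(n - m) * char_det(BA, lam)
         for lam in range(-5, 6))
```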
{ "language": "en", "url": "https://math.stackexchange.com/questions/94926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Krylov-Bogoliubov for measurable transformations In the Krylov-Bogoliubov theorem, the transformation is assumed continuous. Does the theorem hold if the transformation is assumed only to be measurable? If not, what is a counterexample? Edit A few proofs of the Krylov-Bogoliubov theorem are given here, but I'm not sure if the proofs carry through with the transformation just measurable.
No, it does not hold for merely measurable transformations. As a counterexample, let the space be the nonnegative integers $X=\mathbb N$ with the sigma-algebra $\mathcal{P}(\mathbb N)$ (the power set). Let $F\colon X\to X$ be given by $F(x)=x+1$. If $\mu$ is a finite $F$-invariant measure then every $A\in\mathcal{P}(\mathbb N)$ must have measure 0. For any nonnegative integer $n$, $F$-invariance gives $\mu([n,\infty))=\mu(F^{-n}([n,\infty)))=\mu(\mathbb{N})$. So, by monotone convergence $$ \mu(\mathbb{N})=\lim_{n\to\infty}\mu([n,\infty))=\mu(\emptyset)=0. $$ See also the following questions and answers, Probability of picking a random natural number and Why isn't there a uniform probability distribution over the positive real numbers?. Just in case you are concerned that this example is not a compact metrizable space, consider the metric on $\mathbb{N}$ given by $d(x,y)=\vert\theta(x)-\theta(y)\vert$, where $\theta(x)=\frac1x$ for $x\not=0$ and $\theta(0)=0$. This is a metric making $\mathbb N$ into a compact space, with Borel sigma-algebra equal to the power set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/94981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
A limit problem: non-existence I am having difficulty proving the non-existence of this limit; I show my work below. $$ \lim_{x\to 0} \biggl(1 + x e^{- \frac{1}{x^2}}+\sin \frac{1}{x^4}\biggr)^{e^{\frac{1}{x^2}}}$$ We begin by rewriting the limit as follows: $$ \lim_{x\to 0} \biggl(1 + x e^{- \frac{1}{x^2}}+\sin \frac{1}{x^4}\biggr)^{e^{\frac{1}{x^2}}}=\lim_{x\to 0} \biggl( 1 + x \frac{1}{e^{\frac{1}{x^2}}} +\sin \frac{1}{x^4}\biggr)^{e^{\frac{1}{x^2}}}$$ and analyze the various summands and the exponent of the limit: $$\begin{align*} &x \frac{1}{e^{\frac{1}{x^2}}}\to 0\\ &\sin \frac{1}{x^4}\to \not \exists\\ &e^{\frac{1}{x^2}}\to+\infty.\\ \end{align*}$$ The problem here lies in the fact that we have a summand that has no limit. Let us consider $$\sin{a_n}\quad\text{and}\quad\sin{b_n}\quad\text{as}\quad n\to+\infty$$ where the two sequences are $$a_n=\frac{\pi}{2}+2n\pi\quad\text{and}\quad b_n=2n\pi.$$ Then the values of the function along the sequence $a_n$, with $n$ a positive integer, tend to $1$, while the values along the sequence $b_n$ tend to $0$: $$\lim_ {n \to\infty}\sin{a_n}=1 \quad\text{and}\quad \lim_{n\to \infty}\sin {b_n} = 0$$ and therefore, as we know, the limit of $\sin x$ as $x \to \infty$ does not exist. Now, to prove that the given limit does not exist, I continued by substituting $t= \frac{1}{x^2}$ (so $x\to0$ gives $t\to+\infty$): $$\lim_{x\to 0} \biggl( 1 + x \frac{1}{e^{\frac{1}{x^2}}} +\sin \frac{1}{x^4}\biggr)^{e^{\frac{1}{x^2}}}=\lim_{t\to +\infty} \biggl( 1 + \frac{ \sqrt{t}}{t}\cdot \frac{1}{e^t} +\sin{t^2}\biggr)^{e^t}$$ Since $\frac{ \sqrt{t}}{t}\cdot \frac{1}{e^t}\to 0$, I consider $$\begin{align*} &\lim_{t\to +\infty} \biggl(1 + \sin{({a_n})^2}\biggr)^{e^t}=\biggl(1+1\biggr)^{e^t}=+\infty\\ &\lim_{t\to +\infty} \biggl( 1 + \sin{({b_n})^2}\biggr)^{e^t}=e^{e^t\ln\biggl( 1 + \sin{({b_n})^2}\biggr)}=e^{+\infty\ln( 1 + 0)}=??? \end{align*}$$
Hint: Consider instead the subsequence where $\sin(t^2)=-1$ rather than the one where it equals $0$: there it is easy to see that the limit is $0$. You then have a subsequence along which the expression converges to $0$ and one along which it converges to $+\infty$ (the one where $\sin(t^2)=1$). Hence the limit does not exist.
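A quick numerical check of this hint (a Python sketch; the subsequences $t_n=\sqrt{\pi/2+2n\pi}$ and $t_n=\sqrt{3\pi/2+2n\pi}$, where $\sin t^2$ equals $1$ and $-1$ respectively, are the ones discussed above):

```python
import math

def f(t):
    # the expression after substituting t = 1/x^2:
    # (1 + (sqrt(t)/t) * e^(-t) + sin(t^2)) ** (e^t)
    return (1 + math.exp(-t) / math.sqrt(t) + math.sin(t * t)) ** math.exp(t)

# subsequence with sin(t^2) = 1: the base tends to 2, so the expression blows up
diverging = [f(math.sqrt(math.pi / 2 + 2 * math.pi * n)) for n in range(1, 5)]
# subsequence with sin(t^2) = -1: the base tends to 0+, so the expression vanishes
vanishing = [f(math.sqrt(3 * math.pi / 2 + 2 * math.pi * n)) for n in range(1, 5)]
```

The two lists respectively explode toward $+\infty$ and collapse toward $0$, which is exactly the two-subsequence argument.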
{ "language": "en", "url": "https://math.stackexchange.com/questions/95034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why does the existence of independent statements not prove consistency? I've read before that, by the Principle of Explosion, if a theory is inconsistent, then absolutely any statement can be proven within it. Obviously, there are statements which are independent of ZFC (Continuum Hypothesis, etc). It seems to me that this proves that ZFC is incomplete. Why does it not then follow that ZFC is consistent? It seems to me that we could say "Assume ZFC is inconsistent. Then the Continuum Hypothesis is provable in ZFC. But the Continuum Hypothesis is neither provable nor disprovable in ZFC. Therefore ZFC is consistent." What am I missing here?
Independence proofs generally have as an explicit premise that ZFC is consistent. Thus, what is proved is If ZFC is consistent, then the Continuum Hypothesis is independent of ZFC. Therefore the reasoning you sketch is not available.
{ "language": "en", "url": "https://math.stackexchange.com/questions/95094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Proof that $\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$ I am trying to prove the following: $$\binom{2\phi(r)}{\phi(r)+1} \geq 2^{\phi(r)}$$ with $r \geq 3$ and $r \in \mathbb{P}$. Should I use induction over $r$, or is there a better approach? Any help is appreciated.
Let $x_n=2^{-n}{2n\choose n+1}$. Then $x_{n+1}=x_n\frac{(2n+1)(n+1)}{n(n+2)}\gt x_n$ for every $n\geqslant1$ and $x_2=1$, hence $x_n\geqslant1$ for every $n\geqslant2$. Since $\varphi(r)\geqslant2$ for every integer $r\geqslant3$, the estimate above implies that the desired inequality holds for every (not necessarily prime) integer $r\geqslant3$.
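The monotonicity claim above is easy to check numerically (a Python sketch using exact rational arithmetic; the upper bound $n \le 20$ is an arbitrary choice):

```python
from fractions import Fraction
from math import comb

# x_n = 2^(-n) * C(2n, n+1); the argument above shows x_2 = 1 and x_{n+1} > x_n
xs = [Fraction(comb(2 * n, n + 1), 2 ** n) for n in range(2, 21)]
```

Since the list starts at $x_2=1$ and is strictly increasing, every $x_n\geq 1$, which is the claimed inequality.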
{ "language": "en", "url": "https://math.stackexchange.com/questions/95168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why isn't $\frac{\mathrm{d} }{\mathrm{d} x} \ln(x)$ specified as $\frac{1}{x},x>0$? As I understand, $\begin{eqnarray} \frac{\mathrm{d}}{\mathrm{d}x}\ln(x)\end{eqnarray} $ is generally specified as $\begin{eqnarray} \frac{1}{x} \end{eqnarray}$. Would it be more appropriate to state it as $\begin{eqnarray} \frac{1}{x}, x>0\end{eqnarray}$ since $\ln(x)$ is undefined for $x\leq0$? If not, why not? In addition, what does this imply about the indefinite integral of or the definite integral with negative limits of integration of $\begin{eqnarray} \frac{1}{x} \end{eqnarray}$? Thank you!
Another answer. Some strange people use complex numbers, not just real numbers. For them, log(x) makes sense other than for $x>0$. And for them its derivative is $1/x$. More importantly: for them, $\int (1/x)\,dx = \log(|x|)+C$ is WRONG!
{ "language": "en", "url": "https://math.stackexchange.com/questions/95281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
How do I solve this integral (volume of a torus)? I figured the volume of a torus is $4\pi R \int_{-r}^{r}\sqrt{r^2-y^2}dy$ But I don't know how to solve an integral like this. How is it done?
Use the trig substitution $y=r\sin(t)$. Then $dy = r\cos(t)\,dt$, and the limits $y=\pm r$ become $\sin(t)=\pm 1$, so that $t=\pm \pi/2$. Thus $$ 4\pi R\int_{-r}^r \sqrt{r^2-y^2}\,dy = 4\pi R \int_{-\pi/2}^{\pi/2} \sqrt{r^2-r^2\sin^2(t)}\cdot r\cos(t)\,dt $$ $$= 4\pi R \int_{-\pi/2}^{\pi/2} \sqrt{r^2\cos^2(t)}\cdot r\cos(t)\,dt = 4\pi R \int_{-\pi/2}^{\pi/2} r^2\cos^2(t)\,dt$$ $$= 4\pi R \int_{-\pi/2}^{\pi/2} \frac{r^2}{2}(1+\cos(2t))\,dt = 4\pi R\cdot \frac{r^2}{2}\pi = 2\pi^2 R r^2$$ Alternatively, you could just note that the integral computes the area of the upper half of a circle of radius $r$ (namely $\frac{1}{2}\pi r^2$), which gives the same result.
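As a sanity check, the closed form $2\pi^2 R r^2$ agrees with a direct numerical evaluation of the integral (a Python sketch; the midpoint rule and the values $R=3$, $r=1$ are arbitrary choices):

```python
import math

def torus_volume(R, r, steps=100_000):
    # V = 4*pi*R * integral_{-r}^{r} sqrt(r^2 - y^2) dy, via the midpoint rule
    h = 2 * r / steps
    area = sum(math.sqrt(r * r - y * y) * h
               for y in (-r + (k + 0.5) * h for k in range(steps)))
    return 4 * math.pi * R * area

exact = 2 * math.pi ** 2 * 3 * 1 ** 2   # 2*pi^2*R*r^2 with R=3, r=1
approx = torus_volume(3, 1)
```

The two values agree to several decimal places.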
{ "language": "en", "url": "https://math.stackexchange.com/questions/95343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Set Notation Exercise I am working through some set theory exercises, and am stuck with this one: $A = \left \{ 2,4,6,7,11 \right \}$ Find the set S1: $S1 = \left \{ (n+4): n \in A \right \}$ Taking away 4 from each member of the set, I get: $\left \{ -2,0,2,3,7 \right \}$ The applet tells me the answer is wrong, but I can't figure out how!
I think you don't want to take away $4$ but you want to add $4$: $$ S_1 = \{ n + 4 \mid n \in \{ 2,4,6,7,11\} \} = \{ 2 + 4, 4 + 4, 6 + 4, 7 + 4, 11 + 4\} = \{ 6, 8, 10, 11, 15\}$$
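In Python this is a one-line set comprehension (a sketch, just to confirm the arithmetic):

```python
A = {2, 4, 6, 7, 11}
S1 = {n + 4 for n in A}
print(sorted(S1))  # [6, 8, 10, 11, 15]
```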
{ "language": "en", "url": "https://math.stackexchange.com/questions/95409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Meaning of $f:[1,5]\to\mathbb R$ I know $f:[1,5]\to\mathbb R$ means $f$ is a function from $[1,5]$ to $\mathbb R$. I am just a bit unclear now on the exact interpretation of "to $\mathbb R$". Is $1\le x\le 5$ the domain? And is $\mathbb R$ the co-domain (or image?)? Is my interpretation in words, that $f$ is a function which takes a number $1\le x\le 5$ and maps it onto the real numbers $\mathbb R$, correct? Suppose we take $f=x^2$ and $x=2$; does $f:[1,5]\to\mathbb R$ hold? The function gives us $4$, which is $\in \mathbb R$.
It's correct if you say "into" rather than "onto". The domain is $[1,5]=\{x: 1\le x\le 5\}$. For every number $x$ in the domain, $f(x)$ is in $\mathbb{R}$. "Onto" is often taken to mean "surjective" (except that "onto" is also used as a preposition). I.e. if $f$ is said to map the set $[1,5]$ "into" $\mathbb{R}$, that means only that for every number $x$ in the domain, $f(x)$ is in $\mathbb{R}$, but if $f$ is said to map the set $[1,5]$ "onto" $\mathbb{R}$, that means that in addition to the fact that for every number $x$ in the domain, $f(x)$ is in $\mathbb{R}$, it is also the case that for every member $y$ of $\mathbb{R}$, there is some number $x$ in $[1,5]$ such that $f(x)=y$. So, for example $f(x)=x^2$ maps $[1,5]$ into, but not onto, $\mathbb{R}$. It's not onto because, for example, $26\in\mathbb{R}$ but there is no number $x\in[1,5]$ such that $f(x)=26$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/95626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
about the percentile of $f(X)$ Is the, say, $90\%$ percentile of $X$ the same as the $90\%$ percentile of $aX+b$, where $a, b$ are constants? I mean, to calculate the $90\%$ percentile of $X$, can I instead calculate the $90\%$ percentile of the standardized variable $Y=\frac{X-\mu}{\sigma}$ (as one does when applying the central limit theorem)? Is the $90\%$ percentile of $f(X)$ always the same as that of $X$? (Or is this the case iff $f(\cdot)$ is a linear function?) Thank you so much!
Roughly speaking (exactly for random variables $W$ with continuous distribution) a $90$-th percentile of $W$ is a number $k$ (usually unique) such that $P(W \le k)=0.9$. Let $\sigma$ be any positive constant, let $\mu$ be a constant. Let $Y=\frac{X-\mu}{\sigma}$. Let $k$ be "the" $90$-th percentile of $Y$. Then $$0.9=P(Y \le k)=P\left(\frac{X-\mu}{\sigma} \le k\right)=P(X \le \sigma k+\mu).$$ So $\sigma k+\mu$ is the $90$-th percentile of $X$. Conversely, if $d$ is the $90$-th percentile of $X$, similar reasoning shows that $\frac{d-\mu}{\sigma}$ is the $90$-th percentile of $Y$. Comment: The idea generalizes. If $f$ is a strictly increasing function, we can go back and forth between the $p$-th percentile of $X$ and the $p$-th percentile of $f(X)$ by doing what comes naturally. For such an $f$, the number $k$ is a $p$-th percentile of $X$ if and only if $f(k)$ is a $p$-th percentile of $f(X)$. So your intuition was right. The actual expression you used was not quite right, the percentiles are not the same, but they are related in the "natural" way. Of course things can, and usually do break down for functions $f$ that are not everywhere increasing.
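The "natural" correspondence is easy to see with empirical percentiles (a Python sketch; the sample, the order-statistic convention for the quantile, and the constants $a=3$, $b=7$ are arbitrary choices):

```python
def quantile(xs, p):
    # a simple order-statistic quantile (one of several common conventions)
    s = sorted(xs)
    return s[min(len(s) - 1, int(p * len(s)))]

xs = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
a, b = 3, 7
q = quantile(xs, 0.9)
q_linear = quantile([a * x + b for x in xs], 0.9)   # equals a*q + b, not q
q_monotone = quantile([x ** 3 for x in xs], 0.9)    # equals q**3: any increasing f works
```

Because a strictly increasing map preserves the ordering of the sample, the order statistics transform exactly as described in the answer.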
{ "language": "en", "url": "https://math.stackexchange.com/questions/95681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many $N$ of the form $2^n$ are there such that no digit is a power of $2$? How many $N$ of the form $2^n,\text{ with } n \in \mathbb{N}$, are there such that no digit is a power of $2$? For this one the answer given is $2^{16}=65536$, but how could we prove that this is the only possible solution? And what about the general case of $x^n, \text{ with } x,n \in \mathbb{N}$?
Define the acceptable digits to be 0, 3, 5, 6, 7, and 9; and define the score of a number $n$ to be the number of trailing acceptable digits in the decimal expansion of $n$ (with no leading zeroes). So for instance 65536 has a score of 5, and $2^{96} = 79228162514264337593543950336$ has a score of 7 (and this is the smallest power of $2$ with a score greater than 5). I did a computer search for high-scoring powers of $2$ up to $2^{332192}$ (i.e. those with less than 100000 decimal digits). The highest-scoring was $2^{57072}$, with a score of only 25 (it ends with ...40535076966633036050333696). So your conjecture is plausible, but it looks like one of those things that could be very difficult to prove.
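The search described above is short to reproduce (a Python sketch; "score" is the answer's count of trailing acceptable digits, and the bound of $2^{300}$ for the all-digits check is an arbitrary cutoff):

```python
ACCEPTABLE = set("035679")   # digits that are not powers of 2 (i.e. not 1, 2, 4, 8)

def score(n):
    """Number of trailing acceptable digits in the decimal expansion of n."""
    s = str(n)
    k = 0
    for d in reversed(s):
        if d not in ACCEPTABLE:
            break
        k += 1
    return k

# powers of 2 (up to a modest bound) in which *every* digit is acceptable
all_good = [n for n in range(1, 300) if score(2 ** n) == len(str(2 ** n))]
```

Within this range, $2^{16}=65536$ is indeed the only power of $2$ whose digits are all acceptable, and $2^{96}$ is the first with score greater than 5, consistent with the search reported above.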
{ "language": "en", "url": "https://math.stackexchange.com/questions/95738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 1, "answer_id": 0 }
Dedekind domain with a finite number of prime ideals is principal I am reading a proof of this result that uses the Chinese Remainder Theorem on (the finite number of) prime ideals $P_i$. In order to apply CRT we should assume that the prime ideals are coprime, i.e. the ring is equal to $P_h + P_k$ for $h \neq k$, but I can't see it. How does it follow?
Here's one proof. Let $R$ be a Dedekind ring and assume that the prime ideals are $\mathfrak{p}_1,\ldots,\mathfrak{p}_n$. Then $\mathfrak{p}_1^2,\mathfrak{p}_2,\ldots,\mathfrak{p}_n$ are pairwise coprime. Pick an element $\pi \in \mathfrak{p}_1\setminus \mathfrak{p}_1^2$; by CRT you can find an $x\in R$ s.t. $$ x\equiv \pi\,(\textrm{mod } \mathfrak{p}_1^2),\;\; x\equiv 1\,(\textrm{mod } \mathfrak{p}_k),\; k=2,\ldots,n $$ Factoring $(x)$ into primes, $\mathfrak{p}_1$ occurs exactly once (since $x\in\mathfrak{p}_1$ but $x\notin\mathfrak{p}_1^2$) and no $\mathfrak{p}_k$ with $k\geq2$ occurs (since $x\notin\mathfrak{p}_k$), so we must have $(x)=\mathfrak{p}_1$. It follows that all prime ideals are principal, so all ideals are principal and $R$ is a PID. EDIT: The definition of a Dedekind domain is a Noetherian integrally closed, integral domain of dimension 1. The last condition means precisely that every nonzero prime ideal is maximal, so maximality of nonzero primes is tautological. Distinct maximal ideals are always coprime: for $h\neq k$, $\mathfrak{p}_h+\mathfrak{p}_k$ is an ideal strictly containing the maximal ideal $\mathfrak{p}_h$, hence equals $R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/95789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
software for algebraic simplifying expressions I have many huge algebraic expressions such as: $$\frac{8Y}{1+x}-\frac{(1-Y)}{x}+\frac{Kx(1+5x)^{3/5}}{2}$$ where $\ Y=\dfrac{Kx(1+x)^{n+2}}{(n+4)(1+5x)^{2/5}}+\dfrac{7-10x-x^2}{7(1+x)^2}+\dfrac{Ax}{(1+5x)^{2/5}(1+x)^2}\ $ and $A,n$ are constants. To simplify these expressions by hand is taking me a lot of time and there is also the danger of making a mistake. I am looking for a free software on the internet using which I can simplify these expressions. Does anyone have any recommendations?
Try WolframAlpha, http://www.wolframalpha.com/. It is free on the internet. Another one you can also try, which is very easy to use, is Maple; I cannot tell whether it is free.
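Another free, scriptable option is SymPy (Python). The sketch below builds the expression from the question, treating $A$, $K$, $n$ as symbols (an assumption, since their ranges are not specified), and shows the kind of call you would make; note that heavy calls such as `sp.simplify` on the full expression may take a while, which is why only a cheap combination step is performed here.

```python
import sympy as sp

x, K, A, n = sp.symbols("x K A n", positive=True)

Y = (K * x * (1 + x) ** (n + 2) / ((n + 4) * (1 + 5 * x) ** sp.Rational(2, 5))
     + (7 - 10 * x - x ** 2) / (7 * (1 + x) ** 2)
     + A * x / ((1 + 5 * x) ** sp.Rational(2, 5) * (1 + x) ** 2))

expr = 8 * Y / (1 + x) - (1 - Y) / x + K * x * (1 + 5 * x) ** sp.Rational(3, 5) / 2

combined = sp.together(expr)   # fast; sp.simplify(expr) is the heavier option
```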
{ "language": "en", "url": "https://math.stackexchange.com/questions/95829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Mapping of Inverse functions. Let $f(x)$ be a differentiable and invertible function such that $f''(x)>0$ and $f'(x)>0$. Prove that$$\mathsf{ f^{-1}\left(\frac{x_1 + x_2 +x_3}{3} \right) > \frac{f^{-1}(x_1)+f^{-1}(x_2)+f^{-1}(x_3)}{3}}$$ I have no clue how to start. I think a graphical solution can be obtained, but I am confused about the graph of the inverse function.
This should help: 'Inverse of Convex Strictly Monotone Function'. Since $f$ is increasing and convex, $f^{-1}$ is increasing and concave. Now use the definition of concavity with $\alpha=1/3, \beta=2/3$ and so on...
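A concrete instance of the hint (a Python sketch): take $f=\exp$, which satisfies $f'>0$ and $f''>0$, so that $f^{-1}=\log$ and the claimed inequality is just concavity of $\log$ (equivalently, AM-GM). The sample points are arbitrary distinct positive numbers.

```python
import math

x1, x2, x3 = 1.0, 2.0, 4.0                               # points in the range of f = exp
lhs = math.log((x1 + x2 + x3) / 3)                       # f^{-1} of the average
rhs = (math.log(x1) + math.log(x2) + math.log(x3)) / 3   # average of the f^{-1}(x_i)
```

With these values the left side is strictly larger, as the inequality predicts.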
{ "language": "en", "url": "https://math.stackexchange.com/questions/95900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Original source for a quote by Lobachevsky Lobachevsky is quoted in many places to have once written (said?) "There is no branch of mathematics, no matter how abstract, which may not someday be applied to phenomena of the real world." (In the original: Нет ни одной области математики, как бы абстрактна она ни была, которая когда-нибудь не окажется применимой к явлениям действительного мира.) My question is: where can this be found in his work? I am not interested in being told about a book from the 1980s that has this in a list of quotes, so please don't waste time telling me about sources other than Lobachevsky's. I did find online a copy of some of his collected works, but the file wasn't in a form that allowed a text search by computer.
The list of reference numbers for the site linked by Marcus Barnes is here. This gives us the reference: -115. Зенкевич И. Г. Не интегралом единым. Тула, 1971. 136 с. It appears to be a collection of quotes, and according to this site, it has a list of original references at the back.
{ "language": "en", "url": "https://math.stackexchange.com/questions/95972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
How to deduce that A has an eigenvalue -1 with algebraic multiplicity 2 without calculating the characteristic polynomial. $A = \begin{pmatrix} 0 & 1 &1 \\ 1 & 0 &1 \\ 1& 1 &0 \end{pmatrix} $ The matrix $(A+I)$ has rank $1$, so $-1$ is an eigenvalue with an algebraic multiplicity of at least $2$. I was reviewing my notes and I don't understand how the first statement implies the second one. Can anyone please explain how rank $1$ of $(A + I)$ implies that $-1$ is an eigenvalue with algebraic multiplicity at least $2$? Thank you in advance.
It is well known that the algebraic multiplicity of an eigenvalue is greater than or equal to its geometric multiplicity (i.e. the dimension of its eigenspace). Therefore, knowing that the rank of $(A-(-1)I)$ is $1$, you know that the algebraic multiplicity of $-1$ is at least $2$: by rank-nullity, if we denote the geometric multiplicity (the dimension of the kernel) by $n$, then $\operatorname{rank} + n = 3$ for a $3 \times 3$ matrix, so $n = 2$, which is a lower bound for the algebraic multiplicity. EDIT : After reading Pierre-Yves's answer, I had a little flash : you can actually deduce more from this. Using the argument above gives you that the algebraic multiplicity of $-1$ is at least $2$, and you know that $2$ is an eigenvalue since $(1,1,1)$ is an eigenvector of $A$. The sum of the algebraic multiplicities is $3$. Since $-1$ has algebraic multiplicity at least $2$ and $2$ has algebraic multiplicity at least $1$, the multiplicities are exactly $2$ and $1$, and there are no other eigenvalues. (I never computed the polynomial! Yay =D) Hope that helps,
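All of this can be verified by hand, or with a few lines of Python (a sketch; no linear-algebra library is needed for a matrix this small):

```python
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# A + I has all rows equal to (1, 1, 1): rank 1, hence dim ker(A + I) = 2
A_plus_I = [[A[i][j] + (1 if i == j else 0) for j in range(3)] for i in range(3)]

# two independent eigenvectors for -1, and (1, 1, 1) is an eigenvector for 2
v1, v2, w = [1, -1, 0], [1, 0, -1], [1, 1, 1]
```

The checks below confirm the rank-1 structure and the eigenvalue claims directly.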
{ "language": "en", "url": "https://math.stackexchange.com/questions/96110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Do "imaginary" and "complex" angles exist? During some experimentation with sines and cosines, its inverses, and complex numbers, I came across these results that I found quite interesting: $ \sin ^ {-1} ( 2 ) \approx 1.57 - 1.32 i $ $ \sin ( 1 + i ) \approx 1.30 + 0.63 i $ Does this mean that there is such a thing as "imaginary" or "complex" angles, and if they do, what practical use do they serve?
Here's a more literal interpretation. Yes, you can actually create geometric spaces in which angles can be imaginary and even complex. The trick is to note that the most general definition of angle is staged within the framework of inner product spaces: vector spaces which have an operation called the "inner product" that in effect defines how a "dot product" works for that type of vector. This definition is based on the fact that, for vectors in Euclidean $n$-dimensional space, we can define the angle between them via $$\theta(\mathbf{v}, \mathbf{w}) := \cos^{-1}\left(\frac{\mathbf{v} \cdot \mathbf{w}}{|\mathbf{v}| |\mathbf{w}|}\right)$$ which, in turn, is derived from the "geometric dot product" $$\mathbf{v} \cdot \mathbf{w} = |\mathbf{v}| |\mathbf{w}| \cos \theta$$ where $\theta$ is the angle between the vectors, but we now go the other way and take the dot product as a primitive operation, not length and angle, and we use that to define length and angle, which are concepts that don't exist in an ordinary vector space. (Thus, we should note also that the length $|\mathbf{v}|$ is then defined by, as one might think, $|\mathbf{v}| := \sqrt{\mathbf{v} \cdot \mathbf{v}}$.) To see how this framework works, note that for real, say 2-dimensional, Euclidean vector space, we have that vectors look like $$\mathbf{v} = \langle v_x, v_y \rangle$$ so that $$\mathbf{v} \cdot \mathbf{w} = v_x w_x + v_y w_y$$ from which we get the usual angle between two points in 2-dimensional space. However, because we are talking about more general vector spaces, there is no need for the scalar components of a vector to be real numbers only: we can - and do! - just as well take them as complex numbers. Of course, if we have vectors of two complex numbers, that is effectively like having four real-number dimensions, but we are defining a new kind of geometry on that. 
In particular, if we now have two-dimensional vectors $\mathbf{v}$ with the same form as given, only components $v_x$ and $v_y$ are complex, so that the space has two complex dimensions (and so four real dimensions), we can define an inner product like $$\mathbf{v} \cdot \mathbf{w} := v_x \bar{w}_x + v_y \bar{w}_y$$ where you note we take the conjugate: this is to ensure that in various ways the dot product is "nice" to the structure of the complex numbers. Moreover, it ensures that the length of a vector continues to remain real (exercise: check this by considering what $\mathbf{v} \cdot \mathbf{v}$ is and then think of a certain fundamental property of the complex numbers). So then if we do this, and we return to our definition of $\theta(\mathbf{v}, \mathbf{w})$, we can see that while the denominator of what's inside $\cos^{-1}$ is just lengths and so will be real, the numerator, which is a bare inner product, has no need to be real at all, and so we can in general get an inverse cosine of a complex number and so find vectors with an angle which is complex.
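Both phenomena from the question are a few lines in Python's `cmath` (a sketch; the vectors $v, w \in \mathbb{C}^2$ are arbitrary examples, and the inner product is the conjugated one defined above):

```python
import cmath
import math

theta = cmath.asin(2)            # ≈ 1.5708 - 1.3170j: a genuinely complex "angle"

v = (1 + 1j, 0)
w = (1, 1)
inner = sum(a * b.conjugate() for a, b in zip(v, w))   # conjugated inner product

def norm(u):
    # lengths stay real because <u, u> is real (each term is |a|^2)
    return math.sqrt(sum(abs(a) ** 2 for a in u))

angle = cmath.acos(inner / (norm(v) * norm(w)))        # a complex angle between v and w
```

Applying `cmath.sin` to `theta` really does give back $2$, and the angle between the complex vectors has a nonzero imaginary part, as the answer describes.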
{ "language": "en", "url": "https://math.stackexchange.com/questions/96151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "37", "answer_count": 7, "answer_id": 6 }
What does ad$f$ mean, for $f$ a smooth function? I am currently reading Nicole Berline's "Heat Kernels and Dirac Operators". On page 64, differential operators are introduced, generalized from operators acting on scalar functions to operators acting on sections of vector bundles. There it is mentioned that if $D$ is an $i$-th order operator and $f$ is a smooth function, then (ad $f)^i D$ is a zeroth order operator. I am not sure I understand this comment; if anybody can help, that would be great!! Please let me know in case more details are necessary.
A function determines a 0th order differential operator by pointwise multiplication. Then $(\text{ad } f) (D) = [f,D]$ is the commutator of the operator corresponding to $f$ with the operator $D$. The statement that $(\text{ad } f)^i D$ is 0th order if $D$ is of order $i$ follows from the Leibniz (product) rule.
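Here is a numerical illustration of the first-order case (a Python sketch; the choices $f(x)=x^2$ and $u=\sin$ are arbitrary, and $D = d/dx$ is approximated by a central difference). For $D=d/dx$ one has $[f,D]u = f\,u' - (fu)' = -f'\,u$, i.e. multiplication by $-f'$, which is a 0th order operator.

```python
import math

h = 1e-6

def d(g):
    # central-difference approximation to D = d/dx
    return lambda x: (g(x + h) - g(x - h)) / (2 * h)

f = lambda x: x ** 2
u = math.sin

def ad_f_D(x):
    # ([f, D] u)(x) = f(x) * (Du)(x) - D(f*u)(x)
    return f(x) * d(u)(x) - d(lambda t: f(t) * u(t))(x)
```

At each sample point this agrees with $-f'(x)\,u(x) = -2x\sin x$, confirming that the commutator lowered the order by one.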
{ "language": "en", "url": "https://math.stackexchange.com/questions/96225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Almost sure pointwise inequality between conditional variance and variance or between conditional expectation and expectation I have a question about conditional expectation and conditional variance. It's a very general question. We defined conditional variance by $$ \operatorname{Var}(X|\mathcal{F}):=E((X-E(X|\mathcal{F}))^2|\mathcal{F}) $$ for a random variable $ X $ and a $\sigma$-algebra $ \mathcal{F}$. Are there any inequalities such that $ \operatorname{Var}(X|\mathcal{F})\le \operatorname{Var}(X)$ or $ \operatorname{Var}(X|\mathcal{F})\ge \operatorname{Var}(X)$? And the same question for the conditional expectation: $E(X|\mathcal{F}) \le E(X)$ or $ E(X|\mathcal{F}) \ge E(X)$? Is any one of them true in general, or what further assumptions have to be made for such a conclusion to hold? Often such an inequality would be very useful. Thanks in advance. hulik
None of these inequalities is true in general. Suppose you have * *$\Pr(X=0,Y=0) = \frac{1}{5}$ *$\Pr(X=0,Y=1) = \frac{1}{5}$ *$\Pr(X=2,Y=1) = \frac{1}{5}$ *$\Pr(X=-5,Y=2) = \frac{1}{5}$ *$\Pr(X=3,Y=2) = \frac{1}{5}$ Then, taking $\mathcal{F}$ to be the $\sigma$-algebra generated by $Y$, we have $E[X]=0$, $E[X|Y=1]=1$, $E[X|Y=2]=-1$, and $Var(X)=7.6$, $Var(X|Y=0)=0$, $Var(X|Y=2)=16$. So the conditional expectation can land on either side of the expectation, and likewise for the variances.
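The numbers above can be checked mechanically (a Python sketch; exact rational arithmetic on the five equally likely $(x,y)$ pairs of the counterexample):

```python
from collections import defaultdict
from fractions import Fraction

pairs = [(0, 0), (0, 1), (2, 1), (-5, 2), (3, 2)]   # each (X, Y) pair has probability 1/5
p = Fraction(1, 5)

E_X = sum(x * p for x, _ in pairs)
Var_X = sum((x - E_X) ** 2 * p for x, _ in pairs)

# group the X-values by the value of Y (the conditioning sigma-algebra)
by_y = defaultdict(list)
for x, y in pairs:
    by_y[y].append(x)

cond_mean = {y: Fraction(sum(xs), len(xs)) for y, xs in by_y.items()}
cond_var = {y: sum((x - cond_mean[y]) ** 2 for x in xs) / len(xs)
            for y, xs in by_y.items()}
```

The checks confirm that $E[X|Y]$ takes values on both sides of $E[X]$, and $\operatorname{Var}(X|Y)$ on both sides of $\operatorname{Var}(X)$.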
{ "language": "en", "url": "https://math.stackexchange.com/questions/96303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Question about implication and probability Let $A, B$ be two events. My question is as follows: Does the following relation hold: $$A \to B \Rightarrow \Pr(A) \le\Pr(B) $$ And why?
Yes. Since events are sets of states, $A\implies B$ means $A\subseteq B$, which is equivalent to $B=A\cup(B\backslash A)$, a disjoint union. So $P(B)=P(A)+P(B\backslash A)\geq P(A)$.
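A tiny finite example of the same fact (a Python sketch; the sample space and events are arbitrary choices):

```python
from fractions import Fraction

omega = set(range(1, 7))                 # uniform probability on {1,...,6}
P = lambda E: Fraction(len(E), len(omega))

A = {2, 4}                               # A "implies" B: every outcome in A lies in B
B = {2, 4, 6}
```

The disjoint decomposition $B = A \cup (B \setminus A)$ from the answer is visible in the assertions below.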
{ "language": "en", "url": "https://math.stackexchange.com/questions/96364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cycle attack on RSA Let $p$ and $q$ be large primes, $n=pq$ and $e : 0<e<\phi(n), \space gcd(e, \phi(n))=1$ the public encryption exponent, $d : ed \equiv 1 \space (mod \space \phi(n)) $ the private decryption exponent, and $m \in \mathbb{Z_n}$ the plaintext, in an $RSA$ cryptosystem. Suppose Eve wants to read the ciphertext $\mu= m^e$ (suppose she can tell when an element of $\mathbb{Z_n}$ is the plaintext). She comes up with the following attack: compute $m^{e} \left(\mod n \right)$, $m^{e^2} (\mod\space n)$, ... and so on until, for some $k$: $\space$ $m^{e^k} = m$. Notice that such a $k$ exists, as $e$ can be considered an element of the multiplicative group $\mathbb{Z_{\phi(n)}}^\times$ and therefore $e^{-1}\in<e>\leq\mathbb{Z_{\phi(n)}}^\times$. I found this attack to be called the cycle attack, but it isn't mentioned in any cryptography textbooks I know of, and therefore I'm guessing it isn't much of a threat to $RSA$. Having said this, my questions are: * *How can we justify that the attack is computationally infeasible, even when $e$ is chosen at random? We know $k=|e|$, and that $|e|$ divides $ |\mathbb{Z_{\phi(n)}}^{\times}|=$$\phi(\phi(n))=\phi((p-1)(q-1))$, but do we know anything about the expected value of $|e|$ (for example, by deducing it from the structure, and in particular from the distribution of orders of elements, of $\mathbb{Z_{\phi(n)}}^{\times}$)? *Is there an efficient algorithm to choose $e$ such that its order in $\mathbb{Z_{\phi(n)}}^{\times}$ is sufficiently large (although this doesn't seem to be necessary)? I also posted this thread in the cryptography section; you can view it here
The question has been answered here. However, if anyone can provide further results regarding orders of elements in multiplicative groups of integers modulo $n$, I would much appreciate it.
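For concreteness, here is the cycle attack on the classic textbook parameters $p=61$, $q=53$, $n=3233$, $e=17$, $m=65$ (a Python sketch; real RSA moduli make the cycle length astronomically large, which is exactly the point of the question):

```python
def cycle_attack(c, e, n):
    # iterate c -> c^e (mod n); when the sequence returns to c,
    # the previous term is the plaintext m, since m^(e^k) = m for some k
    prev, cur = c, pow(c, e, n)
    while cur != c:
        prev, cur = cur, pow(cur, e, n)
    return prev

n, e, m = 3233, 17, 65
c = pow(m, e, n)                 # = 2790 for these textbook parameters
recovered = cycle_attack(c, e, n)
```

For this toy modulus the cycle closes after only a handful of exponentiations; the security question is precisely why, for cryptographic sizes, the cycle length is expected to be infeasibly large.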
{ "language": "en", "url": "https://math.stackexchange.com/questions/96423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }