LCM of randomly selected integers What is the expected LCM of 21 randomly selected positive integers under 10000000? How would someone even approach this problem? EDIT: The positive integers are chosen with replacement.
Let $1 \leq a_1, \dots, a_n \leq N$ be numbers chosen uniformly at random. What is their least common multiple? What is the highest power of $2$ dividing any of the $a_1, \dots, a_n$?

* All the numbers are odd with probability $(1 - \frac{1}{2})^n$
* All the numbers are odd or even (but not divisible by 4) with probability $(1 - \frac{1}{4})^n$
* All the numbers are odd or even (but not divisible by 8) with probability $(1 - \frac{1}{8})^n$
* ...

So the expected power of $2$ dividing all these numbers is (this number might simplify): $$ \mathbb{E}_2 = \sum_{k \geq 0} \Big(2^k - 2^{k-1} \Big) \left[1- \left( 1- \tfrac{1}{2^k} \right)^n \right]$$ A similar story for $\mathbb{E}_3, \mathbb{E}_5, \mathbb{E}_7$, etc.; multiply your answers: $$ \mathbb{E} = \prod_{p } \mathbb{E}_p$$ See also: Expectation of the maximum of i.i.d. geometric random variables; a quick look confirms there is no closed-form answer. In your case $n = 21$ and $N = 1{,}000{,}000$, so we can hope for an estimate. One way to state the prime number theorem is that $\mathrm{lcm}(1,2,\dots, n) \approx e^n$, so the least common multiple grows exponentially fast in $n$. In your case, there are $n = 21$ numbers ranging from $1$ to $N = 10^6$. Perhaps $$ \mathbb{E} \approx e^{n} \times \left(\frac{N}{n}\right)^n \approx \frac{N^n}{n!} = \frac{10^{21}}{ 21!} \approx \frac{10^{21}}{ 5 \times 10^{19}} = 40 $$ Still doesn't seem quite right. See Granville, Prime Number Races.
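The per-prime probabilities in the bullet list are easy to check numerically. Below is a small Python sketch (names are my own, not from the post) comparing $1-(1-\lfloor N/p^k\rfloor/N)^n$, the exact probability that at least one of $n$ uniform draws from $\{1,\dots,N\}$ is divisible by $p^k$, against a simulation with $p=2$, $n=21$, $N=10^7$.

```python
import random

def prob_ge(k, p, n, N):
    """P(some draw among n uniform draws from 1..N is divisible by p^k).
    For a single draw, P(p^k | a) = floor(N / p^k) / N exactly."""
    return 1 - (1 - (N // p ** k) / N) ** n

def sim_ge(k, p, n, N, trials=20000):
    """Monte Carlo estimate of the same probability."""
    hit = 0
    for _ in range(trials):
        if any(random.randint(1, N) % p ** k == 0 for _ in range(n)):
            hit += 1
    return hit / trials

p, n, N = 2, 21, 10 ** 7
for k in range(1, 8):
    print(k, round(prob_ge(k, p, n, N), 4), sim_ge(k, p, n, N))
```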
{ "language": "en", "url": "https://math.stackexchange.com/questions/1410576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Cauchy's root test for series divergence Just a question regarding determining the divergence in this example: $$\sum{ 1 \over \sqrt {n(n+1)}} $$ is divergent. It explains the reason by saying that $a_n > {1 \over n+1}$. If I am not wrong it uses the root test, but shouldn't we then have $a_n \ge 1$? How does their reasoning assure us of this exactly?
Or still simpler, with equivalences, since it is a series with positive terms: $$\frac1{\sqrt{n(n+1)}}\sim_\infty\frac 1n,$$ which diverges, hence the original series diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1410637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Area of a sphere bounded by hyperplanes Say we have a sphere in d-dimensional space, and k hyperplanes (d-1 dimensional) all passing through the origin. Is there a way to calculate (or approximate) the area of the surface of the sphere enclosed by the half-spaces \begin{align*}w_1 \cdot x &\leq 0 \\ w_2 \cdot x &\leq 0\\ \vdots& \\w_k \cdot x &\leq 0\end{align*}
Seven years late here. But this citation might be useful if someone (probably someone else) is looking into this problem. Cho, Y., & Kim, S. (2020). Volume of Hypercubes Clipped by Hyperplanes and Combinatorial Identities. In The Electronic Journal of Linear Algebra (Vol. 36, Issue 36, pp. 228–255). University of Wyoming Libraries. https://doi.org/10.13001/ela.2020.5085 https://journals.uwyo.edu/index.php/ela/article/download/5085/5047
{ "language": "en", "url": "https://math.stackexchange.com/questions/1410799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that for any integers $x,y,z$ there exist $a,b,c$ such that $ax+by+cz=0$ It is rather obvious that for any 3 coprime integers $x,y,z$ there exist 3 non-zero integers $a,b,c$ such that: $$ax+by+cz=0$$ Any simple argument to prove it?
Assume $z\neq 0$. One of them needs to be nonzero for $x,y,z$ to be coprime. Find $u,v\neq 0$ so that $ux+vy\neq 0$. Then let $a=zu,b=zv,c=-(ux+vy)$. There are always such $u,v$ unless one of $x,y$ is zero. Assume $x=0$ and $y\neq 0$; then choose $a=1,b=z,c=-y$. If $z=1$ and $x=y=0$, then you can't find a solution with non-zero $c$. You don't really need coprimality, of course; you just need that at least $2$ of $x,y,z$ are non-zero. Also, if all three are zero, you can trivially solve with $a=b=c=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1410920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Fastest way to perform this multiplication expansion? Consider a product chain: $$(a_1 + x)(a_2 + x)(a_3 + x)\cdots(a_n + x)$$ Where $x$ is an unknown variable and all $a_i$ terms are known positive integers. Is there an efficient way to expand this?
The constant term is easy to compute, it's just the product of the $a_i$. The coefficient of $x^{n-1}$ is $\sum_i a_i$, so that is also easy. Assume for a moment that $n = 2k$ is even, then the coefficient of $x^k$ is the sum of all possible products consisting of $k$ different $a_i$. There are $\binom{n}{k} \approx 2^n/\sqrt{n} $ such products and there is no obvious way to simplify their sum in the general case. So I suspect that in the general case of large $n$ there is no efficient way to obtain the expansion.
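If the goal is simply to compute all of the coefficients (rather than to find a closed form for a particular one), multiplying the factors in one at a time costs about $n^2$ operations, which is far cheaper than summing the $\binom{n}{k}$ individual products. A minimal Python sketch (function and variable names are my own):

```python
def expand(a_list):
    """Coefficients of (a_1 + x)(a_2 + x)...(a_n + x), lowest degree first."""
    coeffs = [1]                      # the empty product
    for a in a_list:
        # multiply the current polynomial by (a + x)
        new = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            new[i] += a * c           # contribution of the constant part a
            new[i + 1] += c           # contribution of the x part
        coeffs = new
    return coeffs

# (1+x)(2+x)(3+x) = 6 + 11x + 6x^2 + x^3
print(expand([1, 2, 3]))   # [6, 11, 6, 1]
```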
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
A question about matrix kernels and Kronecker products Let us define $$ v:=v_A\otimes v_B\quad (*) $$ where $v_A$ is a fixed vector in $\mathbb{R}^{d_A}$, $v_B$ is any vector in $\mathbb{R}^{d_B}$ and $\otimes$ denotes the Kronecker product. To rule out trivial cases assume $d_A,d_B>1$. My question: Suppose that $v$, defined as in $(*)$, belongs to the kernel of the symmetric matrix $C\in\mathbb{R}^{d\times d}$, with $d:=d_Ad_B$, for all $v_B\in\mathbb{R}^{d_B}$. Namely, $Cv=C (v_A\otimes v_B)=0$, for all $v_B\in\mathbb{R}^{d_B}$ and for fixed $v_A\in\mathbb{R}^{d_A}$. Is it true that $C$ has the form $$ C=A\otimes B, $$ where $A\in\mathbb{R}^{d_A\times d_A}$ and $B\in\mathbb{R}^{d_B\times d_B}$? Thank you for your help.
No. Consider $v_A=(1,0,0)^T,\ v_B\in\mathbb R^2$ and $$ C=\pmatrix{ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&1&0&0&1\\ 0&0&0&1&1&0\\ 0&0&0&1&1&0\\ 0&0&1&0&0&1}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Brouwer fixed point theorem using Stokes's theorem Here $\omega$ is the volume form on the boundary of the ball $B$, and $f\colon B \to \partial B$ is the retraction assumed for contradiction. $$ 0 < \int_{\partial B}\omega = \int_{\partial B} f^*(\omega) = \int_{B} df^*(\omega) = \int_B f^*(d\omega) = \int_B 0 = 0 $$ What I do not understand is why we may conclude that

1. $\int_{B} d \omega = \int_{\partial B} \omega > 0 $, whereas later we are allowed to conclude
2. $\int_{B} f^*(d\omega) = 0$ as $d\omega=0$.

What puzzles me is: what prevents us from concluding that $d\omega=0$ in 1.? Both 1. and 2. are integrated over $B$, so where does the difference come from?
In the equation $$ \int_{\partial \Omega} \alpha = \int_{\Omega} d\alpha $$ it is assumed that $\alpha$ is defined on $\Omega$. If $i\colon\partial\Omega\to\Omega$ is the inclusion, it induces a restriction map $i^*$ on differential forms in the opposite direction. Really the $\alpha$ on the left-hand side is $i^*\alpha$. We don't usually bother with this extra notation, but it matters. In general $i^*$ is not an injection. Let's consider the $n=2$ case. So $B$ is the unit disk and $\partial B$ is the unit circle. Let $\alpha = x\,dy - y\,dx$ and $\omega = i^*\alpha$. In other words, $\alpha$ is an extension of $\omega$ from $\partial B$ to $B$. We have $$ d\alpha = 2\, dx\wedge dy $$ which is definitely not zero on the unit disk. But $i^*(d\alpha)=di^*\alpha = d\omega$ is zero, by dimension count (it's a two-form on a one-manifold). To say the same thing without the $i^*$, it is possible for $\omega$ to extend from $\partial B$ to $B$, and for $d\omega =0$ to be true on $\partial B$, but $d\omega \neq 0$ on $B$. This is what's happening here. Now back to your proof. Suppose $\omega$ is a volume form on $\partial B$ (we are no longer assuming that $\omega = i^*\alpha$ for some $\alpha$ on $B$; maybe it is, maybe it isn't). If $f \colon B \to \partial B$ were a retraction, then $f \mathbin{\circ} i = \operatorname{id}_{\partial B}$. Then on differential forms, $i^* \mathbin{\circ} f^* = \operatorname{id}_{\Omega^*(\partial B)}$. Therefore $$\begin{split} \int_{\partial B}\omega &= \int_{\partial B}i^*(f^*\omega) \\ &= \int_B d(f^*\omega) \qquad\text{(by Stokes)} \\ &= \int_B f^*(d\omega) \\ &= \int_B f^*(0) \qquad\text{(by dimension count)} \\ &= 0 \end{split}$$ If $\omega$ is a volume form on $\partial B$, this is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that from a group of seven people whose (integer) ages add up to 332 one can select three people with the total age at least 142. I need help with this problem, and I was thinking in this way: $$ x_{1} + x_{2} + x_{3} + x_{4} + x_{5} + x_{6} + x_{7} = 332 $$ and I need to find three of these which sum is at least 142. But I don't know what next. Any help would be appreciated
You can form $\binom{7}{3}$ triplets, i.e. 35 triplets. Every person participates in $\binom{6}{2}$ triplets, i.e. 15. So the sum of all 35 triplet totals is $15 \times 332 = 4980$, which means the average is $\frac{4980}{35} = 142.2857...$, so there has to be at least one triplet with total age of at least 142. Actually, since all ages are integers, this proves that there has to be a triplet with total sum of at least 143.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 12, "answer_id": 2 }
Why is the estimate of the order of error in Trapezoid converging to $2.5$? The integral in question is: $\int_{0}^{\infty} \frac{x^{3/2}}{\cosh{(x)}}dx$ I coded a program to compute $p$, an estimate of the order of the error for the Trapezoid method of numerical integration. From class, I know that the order of the error for this method is $2$, as it goes like $h^2$ where $h$ is the step size (this is for the composite Trapezoid rule). However, when I run my program the values for the estimate of $p$ seem to converge to $2.5$. I asked my lecturer and she said this is right, but I do not understand why. I know that the error depends on the 2nd derivative of the integrand, and the function I am dealing with is not bounded on the interval of integration, but I do not get why that results in the convergence of the estimate to 2.5. $p = \frac{\ln{r}}{\ln{2}}, r = \frac{I_n - I_{2n}}{I_{2n}-I_{4n}}$ Where $I_n$ represents the approximate value of the integral for $n$ sub-intervals.
Some random babblings: Let $f(x) =\frac{x^{3/2}}{\cosh(x)} $. For large $x$, $f(x)$ is essentially zero, so you might be losing many significant digits in subtraction, especially if $I_{2n}$ is quite close to $I_n$. Also, at the origin, $f(x) \approx x^{3/2} $, so there is no problem there. $f'(x) =\frac{(3/2)x^{1/2}\cosh(x)-x^{3/2}\sinh(x)}{\cosh^2(x)} $, so $f'(x) = 0$ when $0 =(3/2)x^{1/2}\cosh(x)-x^{3/2}\sinh(x) =x^{1/2}\cosh(x)\left((3/2)-x\tanh(x)\right) $ or, according to Wolfy, $x\approx 1.62182 $.
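For what it's worth, the observed order $\approx 2.5$ is consistent with the endpoint behaviour $f(x)\approx x^{3/2}$ near $0$: for an integrand with an $x^{3/2}$-type endpoint, the trapezoid error picks up a term of order $h^{5/2}$ from that endpoint, and here the usual $h^2$ terms are negligible because $f'(0)=0$ and $f'$ is essentially zero at the far end. A rough Python sketch of the asker's estimator (the cutoff at $x=40$, where the integrand is of order $e^{-40}$, and all names are my own choices):

```python
import math

def f(x):
    return x ** 1.5 / math.cosh(x)

def trapezoid(g, a, b, n):
    """Composite trapezoid rule with n sub-intervals on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return h * s

a, b = 0.0, 40.0          # truncation: the integrand decays like e^{-x}
for n in (1000, 2000, 4000, 8000):
    I_n  = trapezoid(f, a, b, n)
    I_2n = trapezoid(f, a, b, 2 * n)
    I_4n = trapezoid(f, a, b, 4 * n)
    r = (I_n - I_2n) / (I_2n - I_4n)
    print(n, math.log(r) / math.log(2))   # the estimate p, close to 2.5
```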
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a Markov-type inequality for the Median? Markov's theorem states that $P(|X| \geq a) \leq \frac{E[|X|]}{a}$. Is there a similar type of inequality that involves the median? (Somehow I doubt it, but I make no claim to comprehensive knowledge of probability inequalities.) What if we restrict $X$ to a non-negative random variable with a continuous, smooth, unimodal density function?
If $X$ is a random variable with finite variance $\sigma^2$, we have that the distance between the median and $\mathbb{E}[X]$ is at most $\sigma$ by Cantelli's inequality, hence the answer is affirmative under slightly stronger assumptions ($X\in L^2$ instead of just $X\in L^1$). That assumptions gives, for instance: $$\mathbb{P}[X>2\cdot\text{med}(X)]\leq \frac{\mathbb{E}[X]}{2\cdot\text{med}(X)}\leq\frac{1}{2}\cdot\min\left(\frac{\text{med}(X)+\sigma}{\text{med}(X)},\frac{\mathbb{E}[X]}{\mathbb{E}[X]-\sigma}\right).$$ On the other hand, $X\in L^2$ is somewhat a necessary assumption to work with the median. We may consider that $f_X(x)=\frac{3\sqrt{3}}{4\pi}\cdot\frac{1}{|x|^3+1}$ is the density of a random variable $X\in L^1\setminus L^2$. For any $n>0$, we may define $X_n$ as the random variable with density: $$ f_{X_n}(x)=\frac{3\sqrt{3}}{4\pi}\cdot\left\{\begin{array}{rcl}\frac{1}{|x|^3+1}&\text{if}& x<0,\\ \frac{n\,x^{n-1}}{x^{3n}+1}&\text{if}&x\geq 0.\end{array}\right.$$ Then the median of $X_n$ is always zero, but: $$ \mathbb{E}[X_n] = -\frac{1}{2}+\frac{3\sqrt{3}}{4\pi}\int_{0}^{+\infty}\frac{n\,x^n}{1+x^{3n}}\,dx = -\frac{1}{2}+\frac{\pi}{3\sin\frac{\pi(n+1)}{3n}}$$ is unbounded as $n\to\left(\frac{1}{2}\right)^+$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of integer functions satisfying three constraints I am trying to understand how many functions $\mathbb{Z^+}\to \mathbb{Z^+}$ exist which satisfy the three following constraints:

1. For every $n \in \mathbb{Z^+}$ $$f(f(n))\leq\frac{n+f(n)}{2}$$
2. For every $n \in \mathbb{Z^+}$ $$f(n)\leq f(n+1)$$
3. For every $(n,m)\in \mathbb{Z}^{+}\times\mathbb{Z}^+ $ such that $n \neq m$ $$\text{GCD}(f(n),f(m))=1$$

One of these functions is indeed $f(n)=1$ for every $n \in \mathbb{Z}^+$, but any other constant function does not satisfy the third constraint, which is probably the strongest. My first intuition about other functions was something on the lines of $f(n)=\text{the n-th prime number}$, but this function (or any of its multiples) does not satisfy the first constraint, already for $n=2$. Are there any other functions that satisfy these constraints? How can they be found?
The constant function $f(n)=1$ is indeed the only solution, even if we weaken the first condition to: $f(f(n)) \le C(n+f(n))$ for some fixed $C>0$ (no matter how large). As a proof-of-concept, we prove: there is no such solution with $f(1)>1$. Note that if $f(1)>1$ then $f(n+1)>f(n)$ (strictly) for all $n$, because of the gcd condition. In particular, $f(n)>n$; so it suffices to show there is no solution even if we replace $f(f(n)) \le C(n+f(n))$ by $f(f(n)) \le 2Cf(n)$. To do so, it suffices to show that $\lim_{k\to\infty} \frac{f(k)}k = \infty$. Each of the values $f(1),\dots,f(k)$ is divisible by at least one prime, and these primes are distinct by the gcd condition; therefore $f(k)$ is at least as large as the $k$th prime. In particular, $f(k) > k\ln k$ by Rosser's theorem, and so indeed $\lim_{k\to\infty} \frac{f(k)}k = \infty$. This shows that there is no such solution with $f(1)>1$. To consider any solution with $f(1)=1$: If $f(n)$ is not constant, then choose $K$ such that $f(K+1)>1$, and redo the above proof with the values $f(K+1), \dots, f(K+k)$ instead of $f(1),\dots,f(k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
calculating the characteristic polynomial I have the following matrix: $$A=\begin{pmatrix} -9 & 7 & 4 \\ -9 & 7 & 5\\ -8 & 6 & 2 \end{pmatrix}$$ And I need to find the characteristic polynomial, so I use $\det(xI-A)$, which is $$\begin{vmatrix} x+9 & -7 & -4 \\ 9 & x-7 & -5\\ 8 & -6 & x-2 \end{vmatrix}$$ Is there a way to calculate the determinant faster, or is the way: $$(x+9)\cdot\begin{vmatrix} x-7 & -5 \\ -6 & x-2 \\ \end{vmatrix}+7\cdot\begin{vmatrix} 9 & -5 \\ 8 & x-2 \\ \end{vmatrix} -4\begin{vmatrix} 9 & x-7 \\ 8 & -6 \\ \end{vmatrix}=$$ $$=(x+9)[(x-7)(x-2)-30]+7[9x-18+40]-4[-54-8x+56]=(x+9)[x^2-9x-16]+7[9x+22]-4[-8x+2]=x^3-2x+2$$
You could use $\displaystyle\begin{vmatrix}x+9&-7&-4\\9&x-7&-5\\8&-6&x-2\end{vmatrix}=\begin{vmatrix}x+2&-7&-4\\x+2&x-7&-5\\2&-6&x-2\end{vmatrix}$ $\;\;\;$(adding C2 to C1) $\displaystyle\hspace{2.6 in}=\begin{vmatrix}0&-x&1\\x+2&x-7&-5\\2&-6&x-2\end{vmatrix}$$\;\;\;$(subtracting R2 from R1) $\displaystyle\hspace{2.6 in}=\begin{vmatrix}0&0&1\\x+2&-4x-7&-5\\2&x^2-2x-6&x-2\end{vmatrix}$$\;\;$(adding x(C3) to C2)) $\displaystyle\hspace{2.6 in}=\begin{vmatrix}x+2&-4x-7\\2&x^2-2x-6\end{vmatrix}=x^3-2x+2$
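Either way, the result is easy to double-check symbolically, for example with sympy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[-9, 7, 4], [-9, 7, 5], [-8, 6, 2]])

# characteristic polynomial as det(xI - A)
p = sp.expand((x * sp.eye(3) - A).det())
print(p)                          # x**3 - 2*x + 2

# cross-check with sympy's built-in charpoly
print(A.charpoly(x).as_expr())    # x**3 - 2*x + 2
```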
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let the plane V be defined by $ax + by + cz + d = 0$; with $a, b, c, d \in \mathbb{R}$ and the vector $(a; b; c)$ a unit vector. I am battling to get my mind around some of the concepts involving vectors in $3$-space. This question asks me whether the following statements are True or False: (A) The line $(a; b; c)$ is parallel to $V$ (B) The distance of the plane from the origin is $|d|$ For (A) I don't think I have enough to work with here. I know that if a line is parallel to a plane then its direction vector must be perpendicular to the plane's normal vector... I honestly do not know how to go about proving this and perhaps I am overthinking this. For (B) I have said this is false, as the distance of the plane from the origin is not $|d|$ and is given by the formula: $$d=\frac{|Ax+By+Cz+D|}{\sqrt{A^2+B^2+C^2}}$$ I would really appreciate any input on this. Thanks!
For part (A) your reasoning is correct. A line with direction vector $\mathbf{v}$ is parallel to a plane with normal vector $\mathbf{n}$ if and only if $\mathbf{v}$ is orthogonal to $\mathbf{n}$, that is, if and only if $\mathbf{v}\cdot\mathbf{n} = 0$. In this case the direction vector of the line is the same as the normal vector of the plane and so they cannot be parallel since $\mathbf{v}\cdot\mathbf{n} = \mathbf{n}\cdot\mathbf{n} = 1 \neq 0$. For part (B) you may use the following argument. Let $\mathbf{u}$ be any point in the plane. Then the distance from the origin to $\mathbf{u}$ is $\|\mathbf{u}\|$. Your goal is to find the minimum value of $\|\mathbf{u}\|$. By the Cauchy-Schwarz inequality we have that $$ |d| = |\mathbf{n}\cdot\mathbf{u}| \leq \|\mathbf{n}\|\cdot\|\mathbf{u}\| = \|\mathbf{u}\| $$ since $\mathbf{n}$ is a unit vector. Therefore the minimum distance of the plane from the origin is bounded below by $|d|$. The last step is to show that there exists a vector in the plane whose norm is $|d|$. In this case such a vector is given by $-d\mathbf{n}$ since $$ \mathbf{n}\cdot(-d\mathbf{n}) + d = -d\mathbf{n}\cdot\mathbf{n} +d = -d +d = 0 $$ and $\|-d\mathbf{n}\| = |d|\|\mathbf{n}\| = |d|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1411928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How can the trigonometric equation be proven? This question: https://math.stackexchange.com/questions/1411700/whats-the-size-of-the-x-angle has the answer $10°$. This follows from the equation $$2\sin(80°)=\frac{\sin(60°)}{\sin(100°)}\times \frac{\sin(50°)}{\sin(20°)}$$ which is indeed true, which I checked with Wolfram. How can this equation be proven?
$$\begin{align}\frac{\sin 80^\circ \sin 20^\circ\sin 100^\circ }{\sin 50^\circ}&=\frac{\sin 80^\circ \sin 20^\circ\cdot 2\sin 50^\circ \cos 50^\circ }{\sin 50^\circ}\\&=2\sin 80^\circ\sin 20^\circ \cos 50^\circ\\&=2\left(-\frac 12(\cos 100^\circ-\cos 60^\circ)\right)\cos 50^\circ\\&=(\cos 60^\circ-\cos 100^\circ)\cos 50^\circ\\&=\frac 12\cos 50^\circ-\cos 100^\circ\cos 50^\circ\\&=\frac 12\cos 50^\circ-\frac 12(\cos 150^\circ+\cos 50^\circ)\\&=-\frac 12\times\left(-\frac{\sqrt 3}{2}\right)\\&=\frac{\sqrt 3}{4}\\&=\frac 12\sin 60^\circ\end{align}$$
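A quick numerical sanity check of both the original equation and the rearranged form (plain Python, angles converted to radians):

```python
import math

def s(deg):
    """Sine of an angle given in degrees."""
    return math.sin(math.radians(deg))

lhs = 2 * s(80)
rhs = (s(60) / s(100)) * (s(50) / s(20))
print(lhs, rhs)                                   # both ~1.9696

# the quantity manipulated in the derivation equals sqrt(3)/4 = sin(60)/2
print(s(80) * s(20) * s(100) / s(50), math.sqrt(3) / 4)   # both ~0.4330
```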
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Inequality problem: Application of Cauchy-Schwarz inequality Let $a,b,c \in (1, \infty)$ such that $ \frac{1}{a} + \frac{1}{b} + \frac{1}{c}=2$. Prove that: $$ \sqrt {a-1} + \sqrt {b-1} + \sqrt {c-1} \leq \sqrt {a+b+c}. $$ This is supposed to be solved using the Cauchy inequality; that is, the scalar product inequality.
We apply the Cauchy Schwarz Inequality to the vectors $x = \left({\sqrt {\dfrac{a-1}{a}} , \sqrt {\dfrac{b-1}{b}} , \sqrt {\dfrac{c-1}{c}}}\right) $ and $y = \left({\dfrac{1}{\sqrt{bc}},\dfrac{1}{\sqrt{ac}}, \dfrac{1}{\sqrt{ab}} }\right)$ in $\Bbb R^3$. Then, $x \cdot y \le \lVert x\rVert \lVert y\rVert$ yields, $$ \dfrac{\sqrt{a-1} + \sqrt{b-1} + \sqrt{c-1}}{\sqrt{abc}} \le \sqrt{\dfrac{a-1}{a} + \dfrac{b-1}{b} + \dfrac{c-1}{c}} \times \sqrt{\dfrac{1}{bc} + \dfrac{1}{ac} + \dfrac{1}{ab}}$$ Now manipulating the $\sqrt{abc}$ across the sign we get $$ \sqrt{a-1} + \sqrt{b-1} + \sqrt{c-1} \le \sqrt{\dfrac{a-1}{a} + \dfrac{b-1}{b} + \dfrac{c-1}{c}} \times \sqrt{a+ b+c} $$ So, $$ \dfrac{\sqrt{a-1} + \sqrt{b-1} + \sqrt{c-1}}{\sqrt{a+ b+c}} \le \sqrt{\left({1 - \dfrac 1 a}\right) + \left({1 - \dfrac 1 b}\right) + \left({1 - \dfrac 1 c}\right) }$$ $$ \dfrac{\sqrt{a-1} + \sqrt{b-1} + \sqrt{c-1}}{\sqrt{a+ b+c}} \le \sqrt{3 - \left({\dfrac 1 a + \dfrac 1 b + \dfrac 1 c}\right)} = \sqrt{3-2} = 1$$ $\mathscr{Q.E.D}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
$W$ a subset of $\mathbb{R}^5$ consisting of all vectors an odd number of the entries in which are equal to $0$. Is $W$ a subspace of $\mathbb{R}^5$? Let $W$ be the subset of $\mathbb{R}^5$ consisting of all vectors an odd number of the entries in which are equal to $0$. Is $W$ a subspace of $\mathbb{R}^5$? I'm not sure how to do this. Any solutions or hints are greatly appreciated. I know that in order for anything to be a subspace of something the zero vector must be in it. How would I go about this? What exactly do we mean by subset here? Is it any $5$-tuple or could it be $1,2,3,4,5$-tuples?
No. For instance, let $v=(0,1,1,1,1)$ and $w=(1,0,1,1,1)$. Then $v,w\in W$ but $v+w=(1,1,2,2,2)\notin W$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that the locus of the poles of tangents to the parabola $y^2=4ax$ with respect to the circle $x^2+y^2-2ax=0$ is the circle $x^2+y^2-ax=0$. Prove that the locus of the poles of tangents to the parabola $y^2=4ax$ with respect to the circle $x^2+y^2-2ax=0$ is the circle $x^2+y^2-ax=0$. I have encountered this question from SL Loney.I have its solution in front of me,but i could not understand. Let $(h,k)$ be the coordinates of the pole,then the polar of $(h,k)$ with respect to the circle $$x^2+y^2-2ax=0$$ is $$xh+yk-a(x+h)=0$$ or $$y=x\left(-\frac{h}{k}+\frac{a}{k}\right)+\frac{ah}{k}\tag{1}$$ The equation of any tangent on the given parabola is $$y=mx+\frac{a}{m}\tag{2}$$ Equations $(1)$ and $(2)$ represent the same straight line,hence equating the coefficients, $$m=\left(-\frac{h}{k}+\frac{a}{k}\right) \;\; \text{and} \;\; \frac{a}{m}=\frac{ah}{k}$$ Eliminate $m$ between the above two equations and locus of $(h,k)$ is $h^2+k^2=ah$. Generalizing, we get the locus as $x^2+y^2=ax$. My question is why equations $(1)$ and $(2)$ represent the same line? Equation $(1)$ is the polar of $(h,k)$ with respect to the circle $x^2+y^2-2ax=0$ and equation $(2)$ is tangent to the parabola.
In the following part of the question, the locus of the poles of $\left(\text{tangents to the parabola $y^2=4ax$}\right)$ with respect to the circle $x^2+y^2−2ax=0$ note that the $\left(\quad\right)$ part is the polar (with respect to the circle $x^2+y^2-2ax=0$). (in other words, the question means that the tangents to the parabola $y^2=4ax$ is the polar with respect to the circle $x^2+y^2-2ax=0$.) (take a look at polar in mathworld) $(1)$ is the polar of the pole $(h,k)$ with respect to the circle $x^2+y^2−2ax=0$. $(2)$ is the tangents to the parabola $y^2=4ax$. So, from the question, we (can/have to) have $(1)=(2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Methods of finding the difference to the nearest integer? The question asked to find the "smallest value of n such that $(1+2^{0.5})^n$ is within 10^-9 of a whole number." I'm unsure of the approach to the question. The question was in the chapter of 'binomial expansion' in the textbook. Thanks for your time!
For such problems, the concept of conjugate terms is important. If you look at the binomial expansion of $(1+\sqrt{2})^n$ and $(1-\sqrt{2})^n$ you will see that all the terms including $\sqrt{2}$ in an odd power will have opposite signs. Hence $$(1+\sqrt{2})^n+(1-\sqrt{2})^n$$ is always an integer. But fortunately $|1-\sqrt{2}| \approx 0.4<1$ and so the second term quickly converges to 0, which makes $(1+\sqrt{2})^n$ get closer and closer to an integer value. Now, to estimate how close this is, you have to analyse how quickly $(1-\sqrt{2})^n$ converges to 0. Can you continue from here?
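Following this hint: $s_n=(1+\sqrt2)^n+(1-\sqrt2)^n$ is an integer satisfying $s_{n+1}=2s_n+s_{n-1}$ with $s_0=s_1=2$, and the distance from $(1+\sqrt2)^n$ to the nearest integer is exactly $|1-\sqrt2|^n$ once that quantity drops below $\tfrac12$. A small Python sketch (names are mine) that finds the first $n$ with $|1-\sqrt2|^n<10^{-9}$; under this reading of the problem it returns $n=24$.

```python
import math

target = 1e-9
r = math.sqrt(2) - 1          # |1 - sqrt(2)|, about 0.4142

# smallest n with r**n < 1e-9; this is the distance of (1+sqrt(2))^n
# from the nearest integer (valid once r**n < 1/2)
n = 1
while r ** n >= target:
    n += 1
print(n)                      # 24

# exact integer s_n = (1+sqrt2)^n + (1-sqrt2)^n via s_{k+1} = 2 s_k + s_{k-1}
s_prev, s_cur = 2, 2          # s_0, s_1
for _ in range(n - 1):
    s_prev, s_cur = s_cur, 2 * s_cur + s_prev
print(s_cur)                  # the integer that (1+sqrt(2))^24 is close to
print(r ** n)                 # ~6.5e-10, indeed below 1e-9
```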
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Minesweeper probability I ran into the situation pictured in the minesweeper game below. Note that the picture is only a small section of the entire board. Note: The bottom right $1$ is the bottom right corner tile of the board and all other tiles have been marked/deemed safe. What we know:

* There are exactly $2$ bombs left on the board
* Bombs can't be: (A & B) || (A & C) || (B & D) || (C & D)
* Bombs can be: (A & D) || (B & C)

Can anyone prove if there is a square that is more likely to be safe/unsafe or is every square equally likely to be safe/unsafe?
You know

* exactly one of A and C is a bomb
* exactly one of B and D is a bomb
* exactly one of A and B is a bomb

So, as you say, there are two possibilities:

* A and D are bombs while B and C are not
* B and C are bombs while A and D are not

As far as I can tell, these have the same likelihood.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 3 }
Spaces $X$ in which every subset is either open or closed, and only $\varnothing$ and $X$ are clopen Let $(X, \tau)$ be a topological space. Then $X, \varnothing \in \tau$ and are both clopen. But I wonder if it is possible to construct a topological space $X$ in which all subsets are either open or closed, but $X$ and $\varnothing$ are the only clopen subsets.
Pick any $X$ and $a \in X$. Define $Y \subset X$ to be open if and only if $Y = \emptyset$ or $a \in Y$. It follows that a set $Y$ is closed if and only if $Y=X$ or $a \notin Y$. It is easy to show that this is a topology which has the required properties. It is likely that there is no separable topology with these properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Is the intersection of the following closed and open set closed? Generally? Ok, I have been informed that the below lemma is incorrect. I needed it to prove the following statement. Could someone else provide a proof? Statement: If m(E) is finite, there exists a compact set K with $K\subset E$ and $m(E-K)\le \epsilon$. m(E) is the Lebesgue measure of set E. Thanks! Lemma: Let F be a closed subset of $\mathbb{R^d}$. Let $B_n(0)$ be a ball of radius n centered at the origin. Let $K_n=F\cap B_n$. Show $K_n$ is compact.
(The OP drastically changed the question after this answer was posted.) I'm not sure if you intended the $n$ in the superscript of $\mathbb{R}^n$ to be the same as the $n$ in $B_{n}(0)$. Nevertheless: Your question asks about the intersection of a closed and open set, so I assume that by "ball" you mean an open ball. The statement is false. Let $n = 1$ so that we are dealing with $\mathbb{R}$. Then $B_{1}(0)$ is the open interval $(-1, 1) \subset \mathbb{R}$. Let $F$ be the closed subset $[-1, 1] \subset \mathbb{R}$. Then $K_{1} = [-1, 1] \cap (-1,1) = (-1, 1)$ is the same open set (i.e., open interval) as $B_{1}(0)$. In particular: This provides a counterexample to the statement in question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Subspaces of an infinite dimensional vector space It is well known that all the subspaces of a finite dimensional vector space are finite dimensional. But it is not true in the case of infinite dimensional vector spaces. For example, in the vector space $\mathbb{C}$ over $\mathbb{Q}$, the subspace $\mathbb{R}$ is infinite dimensional, whereas the subspace $\mathbb{Q}$ is of dimension 1. Now I want examples (if they exist) of the following:

1. An infinite dimensional vector space all of whose proper subspaces are finite dimensional.
2. An infinite dimensional vector space all of whose proper subspaces are infinite dimensional.
You won't find anything like that. If $V$ is an infinite dimensional vector spaces, then it has an infinite basis. Any proper subset of that basis spans a proper subspace whose dimension is the cardinality of the subset. So, since an infinite set has both finite and infinite subsets, every infinite dimensional vector space has both finite and infinite proper subspaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What's the answer to this limit question? Can anyone find the limit to this one? $\lim_{n \rightarrow \infty} (\sum_{i=(n+1)/2}^n {n \choose i} \times 0.51^i \times 0.49^{n-i})$ When I plot it, it seems to me to approach 1, which makes me feel that a limit should exist and it should be 1. But I can't solve this mathematically. Here is the same question in WolframAlpha, in case it helps.
Consider tossing an unfair coin $2n-1$ times, where we assign the value $+1$ to heads, which has probability $p=0.51$, and $-1$ to tails, with probability $0.49$. So we have a sequence of IID variables $X_1, \dots,X_{2n-1}$, and we denote $S_n = \sum_{i=1}^{2n-1} X_i$. Furthermore, we have $$\mu = \mathbb{E} X_1 = 0.51 -0.49=0.02 >0 $$ Then we have $$ \mathbb{P}(S_n > 0)= \sum_{i=n}^{2n-1} \binom{2n-1}{i} p^i (1-p)^{2n-1-i}.$$ So we have \begin{eqnarray} \lim_{n \to \infty} \sum_{i=n}^{2n-1} \binom{2n-1}{i} p^i (1-p)^{2n-1-i} &=& \lim_{n \to \infty } \mathbb{P} ( S_n > 0) \\ &=& \lim_{n \to \infty} \mathbb{P} \left( \frac{S_n}{2n-1} > 0\right)\\ &=& \lim_{n \to \infty } \mathbb{P} \left( \frac{1}{2n-1} \sum_{i=1}^{2n-1} X_i > 0 \right) \end{eqnarray} Now we can use the weak Law of Large Numbers and obtain \begin{eqnarray} \lim_{n \to \infty} \mathbb{P} \left( \left|\frac{1}{2n-1} \sum_{i=1}^{2n-1} X_i - \mu \right| > \epsilon \right) = 0 \end{eqnarray} for all $\epsilon > 0$. So if we choose $\epsilon = \frac{\mu}{2}>0$, we have $$\left\{ \frac{S_n}{2n-1} \in \left( \frac{ \mu }{2} , \frac{ 3 \mu}{2} \right) \right\} \subset \left\{ \frac{ S_n}{2n-1} > 0 \right\}.$$ And therefore we find \begin{eqnarray} 1 &\ge& \lim_{n \to \infty } \mathbb{P} \left( \frac{ S_n}{2n-1} >0 \right) \\ &\ge & \lim_{n \to \infty} \mathbb{P} \left( \frac{S_n}{2n-1 } \in \left( \frac{\mu}{2} , \frac{ 3 \mu }{2} \right) \right) \\ &=& \lim_{n \to \infty } 1 - \mathbb{P} \left( \frac{S_n}{2n-1 } \not\in \left( \frac{\mu}{2} , \frac{ 3 \mu }{2} \right) \right) \\ &=& 1 - \lim_{n \to \infty} \mathbb{P} \left( \left| \frac{ S_n}{2n-1} - \mu \right| > \frac{\mu}{2} \right) \\ &=& 1 -0 = 1 \end{eqnarray} So combining everything we see $$\lim_{n\to \infty} \sum_{i=n}^{2n-1} \binom{2n-1}{i} 0.51^i \times 0.49^{2n-1-i}=1$$ as desired.
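A direct numerical check of the original sum is below (Python; the binomial pmf is evaluated in log space via lgamma so that large $n$ does not underflow; all names are my own):

```python
from math import lgamma, exp, log

def binom_pmf(n, i, p):
    """Binomial pmf C(n,i) p^i (1-p)^(n-i), computed in log space."""
    logc = lgamma(n + 1) - lgamma(i + 1) - lgamma(n - i + 1)
    return exp(logc + i * log(p) + (n - i) * log(1 - p))

def tail(n, p=0.51):
    """Sum_{i=(n+1)/2}^{n} C(n,i) p^i (1-p)^(n-i), for odd n."""
    k0 = (n + 1) // 2
    return sum(binom_pmf(n, i, p) for i in range(k0, n + 1))

for n in (11, 101, 1001, 10001):
    print(n, tail(n))     # increases toward 1 as n grows
```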
{ "language": "en", "url": "https://math.stackexchange.com/questions/1412973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In the $x + 1$ problem, does every positive integer $x$ eventually reach $1$? I know that the more famous $3x + 1$ problem is still unresolved. But it seems to me like the similar $x + 1$ problem, with the function $$f(x) = \begin{cases} x/2 & \text{if } x \equiv 0 \pmod{2} \\ x + 1 & \text{if } x \equiv 1 \pmod{2}.\end{cases}$$ should be very easy to prove it with just my modest, haphazardly obtained knowledge of mathematics. But the only thing I can think of is "of course every $x$ has to reach $1$," which is clearly not a rigorous proof. Is this easy to prove, or is it perhaps just as difficult as the $3x + 1$ problem?
The variant you have described is not hard. For example one can give a formal proof by strong induction. Here is the induction step. Suppose that we ultimately end up at $1$ if we start at any number $u\lt k$. We show that we end up at $1$ if we start at $k$. If $k$ is even, in one step we are below $k$, and we are finished. If $k$ is odd and greater than $1$, in two steps we are at $\frac{k+1}{2}\lt k$.
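The same two-step descent also bounds the running time: the value drops below its starting point within at most two steps, so the total number of steps is at most about $2\log_2 x$. A quick Python check (names are mine):

```python
def steps_to_one(x):
    """Steps of the map (halve if even, add 1 if odd) until reaching 1."""
    count = 0
    while x != 1:
        x = x // 2 if x % 2 == 0 else x + 1
        count += 1
    return count

# every starting value up to 10^5 terminates, and the worst case stays
# around 2*log2(x), as the two-step descent argument suggests
worst = max(range(2, 10 ** 5 + 1), key=steps_to_one)
print(worst, steps_to_one(worst))
```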
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why is $p_n \sim n\ln(n)$? I know that the prime number theorem states that the number of primes less than $x$ is approximately $\frac{x}{\ln(x)}$. However, why does this mean that $p_n \sim n\ln(n)$? (where $p_n$ is the $n$-th prime). If we replace $x$ with $p_n$ in the original equation, we have that $\pi(p_n) \ln({p_n})\sim p_n$, and $\pi(p_n)$ is just $n$, but what about the $\ln({p_n})$?
Made a jpeg, see if it comes out readable: Theorem 3 says that putting in the extra $\log \log n$ term is quite good. Theorem 1. We have $$\frac{x}{\log x}\left(1 + \frac{1}{2 \log x}\right) < \pi(x)\qquad \text{for}\,59\leqq x, \tag{3.1}\label{3.1}$$ $$\pi(x) < \frac{x}{\log x}\left(1 + \frac{3}{2 \log x}\right) \qquad \text{for}\,1 < x. \tag{3.2}\label{3.2}$$ Theorem 2. We have $$x/(\log x - \tfrac{1}{2}) < \pi(x) \qquad\text{for}\,67 \leqq x, \tag{3.3}\label{3.3}$$ $$\pi(x) < x/(\log x - \tfrac{3}{2})\qquad \text{for}\,e^{3/2} < x$$ (and hence for $4.48169 \leqq x$). Corollary 1. We have $$x/\log x < \pi(x) \qquad\text{for}\,17\leqq x, \tag{3.5}\label{3.5}$$ $$\pi(x) < 1.25506 x/\log x \qquad\text{for}\,1 < x. \tag{3.6}\label{3.6}$$ Corollary 2. For $1 < x < 113$ and for $113.6 \leqq x$ $$\pi(x) < 5x/(4 \log x). \tag{3.7}\label{3.7}$$ Corollary 3. We have $$3x/(5 \log x) < \pi(2x) - \pi(x) \qquad\text{for}\,20\tfrac{1}{2} \leqq x, \tag{3.8}\label{3.8}$$ $$0 < \pi(2x) - \pi(x) < 7x/(5 \log x) \qquad\text{for}\, 1 < x. \tag{3.9}\label{3.9}$$ For the ranges of $x$ for which these corollaries do not follow directly from the theorem, they can be verified by reference to Lehmer's table of primes [10]. A similar remark applies to all corollaries of this section unless a proof is indicated. The inequality \eqref{3.8} improves a result of Finsler [3]. The left side of \eqref{3.9} is just the classic result, conjectured by Bertrand (and known as Bertrand's Postulate) and proved in Tchebichef [14], that therre is at least one prime between $x$ and $2x$. The right side of \eqref{3.9} gives a result of Finsler [3], with Finsler's integral $n$ replaced by our real $x$. Finsler's elementary proofs are reproduced in Trost [15] on p. 58. The relation \eqref{3.12} below states a result of Rosser [11]. Theorem 3. We have $$n(\log n + \log\log n - \tfrac{3}{2}) < p_n \qquad\text{for}\,2\leqq n, \tag{3.10}\label{3.10}$$ $$p_n < n(\log n + \log\log n - \tfrac{1}{2})\qquad\text{for}\,20\leqq n. \tag{3.11}\label{3.11}$$ Corollary. We have $$n \log n < p_n \qquad\text{for}\, 1 \leqq n, \tag{3.12}\label{3.12}$$ $$p_n < n(\log n + \log\log n)\qquad\text{for}\,6\leqq n. \tag{3.13}\label{3.13}$$ That looks pretty good. From https://projecteuclid.org/euclid.ijm/1255631807
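Theorem 3's corollary is easy to check numerically, for example with sympy's prime function (assuming sympy is available); the bounds $n\log n<p_n<n(\log n+\log\log n)$ already hold for the values of $n$ tried here:

```python
import math
import sympy

for n in (10, 100, 1000, 10000):
    p = sympy.prime(n)                                   # the n-th prime
    lo = n * math.log(n)                                 # lower bound (3.12)
    hi = n * (math.log(n) + math.log(math.log(n)))       # upper bound (3.13)
    print(n, lo < p < hi, round(lo), p, round(hi))
```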
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Sketching phase portrait of an ellipse I have a system of linear ODE's as follows: $$\frac{dx}{dt} = y, \frac{dy}{dt} = -4x$$ which has solution $$\begin{bmatrix}x\\y\end{bmatrix} = \alpha\begin{bmatrix}\cos2t\\-2\sin2t\end{bmatrix} + \beta\begin{bmatrix}\sin2t\\2\cos2t\end{bmatrix}$$ And I'm having some issues trying to sketch the phase portrait. I know the eigenvalues are $-2i$ and $+2i$ Clearly this means the critical point $(0,0)$ is a "centre" of ellipses. But I don't know how to find the indicative equation of one of the solution curves. In other words, how do I find the shape of the ellipses? How do I know which of x,y is the major and minor axis of the ellipses? Thanks.
$$ \frac{d}{dt}\{ 4x^{2}+y^{2}\} = 8x\frac{d x}{dt}+2y\frac{dy}{dt}=8xy-8yx=0. $$ Therefore, $4x^{2}+y^{2}=C$ is constant in time.
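Since $4x^{2}+y^{2}=C$, each orbit is an ellipse with semi-axis $\sqrt C/2$ in the $x$-direction and $\sqrt C$ in the $y$-direction, so the major axis lies along the $y$-axis. A quick numerical check that $4x^2+y^2$ stays constant along the explicit solution (the values of $\alpha$, $\beta$ and $t$ below are arbitrary choices of mine):

```python
import math

alpha, beta = 1.3, -0.7     # arbitrary constants in the general solution

def xy(t):
    x = alpha * math.cos(2 * t) + beta * math.sin(2 * t)
    y = -2 * alpha * math.sin(2 * t) + 2 * beta * math.cos(2 * t)
    return x, y

for t in (0.0, 0.5, 1.0, 2.0, 5.0):
    x, y = xy(t)
    print(t, 4 * x * x + y * y)    # the same constant, 4*(alpha^2 + beta^2)
```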
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find groups that contain elements $a$ and $b$ such that $|a|=|b|= 2$ and $|ab|=5$ Find groups that contain elements $a$ and $b$ such that $|a|=|b|= 2$ and $|ab|=5$ My thoughts: $|a|=|b|=2\implies a^2=e$ and $b^2=e$ I see that the group cannot be abelian as the order wont be greater than $2$ : $(ab)^2 = a^2b^2=e$ Not really sure how to proceed further. Greatly appreciate any help. Thanks!
Hint: A regular pentagon can be rotated and reflected
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Invariant properties between a p-group and its automorphism group Let $G$ be a p-group and $Aut(G)$ be the group of automorphisms of $G$. Which properties of $G$ can help us with studying $Aut(G)$? For example, if $G$ is infinite/finite, does this guarantee that $Aut(G)$ is infinite/finite? Or can knowing the order of $G$ help us put some bounds on the order of $Aut(G)$?
Any infinite torsion group has infinite automorphism group. This is (allegedly) proved in R. Baer, Finite extensions of abelian groups with minimum condition, Trans. Amer. Math. Soc. 79 (1955), 521-540.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
We all use mathematical induction to prove results, but is there a proof of mathematical induction itself? I just realized something interesting. At schools and universities you get taught mathematical induction. Usually you jump right into using it to prove something like $$1+2+3+\cdots+n = \frac{n(n+1)}{2}$$ However, I just realized that at no point is mathematical induction itself proved. What is the proof of mathematical induction? Is mathematical induction proved using mathematical induction itself? (That would be mind-blowing.)
Suppose that $ 1 \in S \text{ and that } n \in S \Rightarrow n+1 \in S$, and suppose that there is a smallest number $N$ such that $N+1\notin S$... Supposing that any nonempty subset of natural numbers has a minimal element, the above proves the principle of induction in set theory, whether some people like it or not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "49", "answer_count": 6, "answer_id": 4 }
Finding the minimal polynomial of a linear operator Let $P=\begin{pmatrix} i & 2\\ -1 & -i \end{pmatrix}$ and $T_P\colon M_{2\times 2}^{\mathbb{C}} \to M_{2\times 2}^{\mathbb{C}}$ a linear map defined by $T_P(X)=P^{-1}XP$. I need to find the minimal polynomial of $T_P$. I was able to find the minimal polynomial using the matrix $[T_P]_E$ (which represents $T_P$ in the standard basis $E$) and calculating its chracteristic polynomial (the minimal polynomial is $(x-1)(x+1)$), but according to a hint which I was given - there's no need to find $[T_P]_E$. It appears that I must use somehow the fact that $P^2+3I=0$ but I don't know how. I noticed that $T_P(P)=P$ which means $\lambda=1$ is an eigenvalue (and thus the term $(x-1)$ must appear in the minimal polynomial) and that's it. But how can I deduce that $\lambda=-1$ is an eigenvalue as well (without actually plugging in different matrices and hoping to get the desired eigenvector)? Also, how can I ensure that $1,(-1)$ are the only eigenvalues of $T_P$ (again, with minimal computational effort)? Any suggestions?
It's given as a hint that $P^2+3I = 0$, so: $$ \begin{align} &P^2 = -3I \\ &P\cdot\left(-\frac{1}{3}P\right) = I \\ \therefore \quad &P^{-1} = -\frac{1}{3}P \end{align} $$ So, $T_P$ is actually $$ T_P(X)=-\frac{1}{3}PXP $$ Note that $P^2=-3I$ and we have two $P$s in the above expression for $T_P$, so it might be useful to calculate $T_P^2$: $$ \begin{align} T_P^2(X) &= T_P(T_P(X)) \\ &= T_P(-\frac{1}{3}PXP) \\ &= -\frac{1}{3}P(-\frac{1}{3}PXP)P \\ &= \frac{1}{9}P^2XP^2 \\ &= \frac{1}{9}(-3I)X(-3I) \\ &= \frac{1}{9}\cdot 9\cdot IX =X \\ \end{align} $$ So we got $T^2_P=I$, which means $T^2_P-I=O$, so the following polynomial satisfies $m(T_P)=0$: $$ m(t)=t^2-1=(t+1)(t-1) $$ Since $T_P$ is not scalar, it's trivial to prove that indeed $m(t)$ is the minimal polynomial of $T_P$.
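A quick numerical confirmation with numpy (assuming it is available): $P^2=-3I$, the map $X\mapsto P^{-1}XP$ squares to the identity on a basis of $M_{2\times 2}^{\mathbb{C}}$, and $T_P$ itself is neither the identity nor minus the identity, so $t^2-1$ (and not a degree-one factor) is the minimal polynomial.

```python
import numpy as np

P = np.array([[1j, 2], [-1, -1j]], dtype=complex)
I2 = np.eye(2, dtype=complex)

print(np.allclose(P @ P, -3 * I2))          # True: P^2 + 3I = 0

Pinv = np.linalg.inv(P)
print(np.allclose(Pinv, -P / 3))            # True: P^{-1} = -P/3

def T(X):
    return Pinv @ X @ P

# basis of elementary matrices E_{ij}
basis = []
for i, j in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    E = np.zeros((2, 2), dtype=complex)
    E[i, j] = 1
    basis.append(E)

print(all(np.allclose(T(T(E)), E) for E in basis))   # True: T^2 = identity
print(all(np.allclose(T(E), E) for E in basis))      # False: T is not the identity
print(all(np.allclose(T(E), -E) for E in basis))     # False: nor minus the identity
```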
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
limit of sum $\dfrac{(-1)^{n-1}}{2^{2n-1}}$ What is: $$\sum^{\infty}_{n=1}\dfrac{(-1)^{n-1}}{2^{2n-1}}$$ I have done a Leibniz convergence test and proved that this series converges, but I do not know how to find the limit. Any suggestions?
We can appeal to the sum of a geometric series as follows: $$\begin{align} \sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{2^{2n-1}}&=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{2^{2n+1}}\\\\ &=\frac12\sum_{n=0}^{\infty}\left(\frac{-1}{4}\right)^n\\\\ &=\frac12\frac{1}{1+\frac14}\\\\ &=\frac25 \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1413844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding Maximum in a Set of Numbers If I have a set of $n$ numbers: $(a_1,..., a_n)$, then how can I find the two maximum numbers in the set? Suppose that all the numbers are positive integers.
Completely edited (Misread your question, and you changed your question). The minimum of two numbers is given by: $\min(a,b) = \large \frac{|a+b|- |a-b|}2$ Of three numbers thus: $$\min(a,b,c) = \min(\min(a,b),c) = \frac{\left| \frac{|a+b|- |a-b|}2 + c \right| - \left| \frac{|a+b|- |a-b|}2 - c\right| }2 $$ Since you want to find the two maximum numbers out of $3$, just find the minimum and exclude it. For $n$ numbers, define an algorithm to find the minimum and keep removing each one until two numbers remain (you probably know that).
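A small Python illustration of the absolute-value identity, together with the obvious single-pass way to pull out the two largest entries (names are my own):

```python
def min2(a, b):
    """Min of two positive numbers via the absolute-value identity."""
    return (abs(a + b) - abs(a - b)) / 2

print(min2(7, 3))          # 3.0

def two_largest(nums):
    """Return the two largest entries of a list in a single pass."""
    first = second = float('-inf')
    for v in nums:
        if v > first:
            first, second = v, first
        elif v > second:
            second = v
    return first, second

print(two_largest([5, 1, 9, 3, 7]))   # (9, 7)
```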
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Branch cut of $\sqrt{z}$ In my complex analysis book, the author defines $\sqrt{z}$ on the slit plane $\mathbb{C}\setminus (-\infty,0]$. I understand this is done because $z^2$ is not injective on the entire complex plane, so we restrict the domain of $z^2$ to arguments in $(\frac{-\pi}{2},\frac{\pi}{2})$, and then $z^2$ maps bijectively onto $\mathbb{C}\setminus(-\infty,0].$ This particular restriction of the domain to $(\frac{-\pi}{2},\frac{\pi}{2})$ gives rise to the principal root function. My question is, why can't we include either the positive or negative real axis in this domain? I understand why $[\frac{-\pi}{2},\frac{\pi}{2}]$ wouldn't work, as $(-\infty,0)$ would be mapped to twice, but it seems to me that $z^2:[\frac{-\pi}{2},\frac{\pi}{2})\to\mathbb{C}$ would define a bijection and allow us to define $\sqrt{z}$ as single-valued. So why don't we do this?
The domain of $w=f(z)=z^2$ is not restricted to $(\frac{-\pi}{2},\frac{\pi}{2})$ to obtain an inverse. Rather, the interval $[0,\pi ]$ (or $[\pi ,2\pi ]$ is obtained as the image of a bijection from the slit $w$ plane. This map is then a branch of the inverse. The procedure is as follows: it's no harder to look at the more general case so let $w=f(z)=z^{n}, \ \text {for }n\in \mathbb N$. Then $f$ maps the sector $0\leq \theta \leq2\pi/n$ onto $\mathbb C$. To every point on the positive real axis of the $w$ plane there corresponds one point on each of the two rays $\theta =0$ and $\theta =2\pi/n$ in the $z$ plane. Therefore, except for the positive real axis in the $w-$ plane, the map is bijective. So we can get a bijection if we "cut" the positive real axis of $w$ plane so that after the cut, the upper edge corresponds to the positive real axis in the $z$ plane and the lower edge corresponds to $\theta =2\pi/n$ in the $z$ plane. We now simply define $z=w^{1/n}$ to be the inverse of this map. It maps the slit $w$ plane bijectively onto the sector $0\leq \theta \leq 2\pi/n$. It is easy to see that there are sectors $\frac{2\pi k}{n}\leq \theta \leq \frac{2\pi (k+1)}{n}, \quad k=0,1,\cdots n-1$ in the $z$ plane onto which are mapped bijectively, copies of the slit $w$ plane and so we get by this process $n$ "branches" all of which are inverses of $w=z^n$. They map slit copies of $\mathbb C$ onto sectors in the $z$ plane. A moment's reflection shows that we can take $\textit {any}$ sector that sweeps out an angle $2\pi /n$ in the $z$ plane and the foregoing analysis goes through with very minor modifications. A nice way to tie all this up is to now glue all these copies together so as to define a injection from the glued copies back to the entire $z$ plane. i.e. consider the Riemann Surface for $w=z^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can 720! be written as the difference of two positive integer powers of 3? Does the equation: $$3^x-3^y=720!$$ have any positive integer solution?
If $$720!=3^x-3^y=3^y\left(3^{x-y}-1\right)$$ and since the power of $3$ dividing $720!$ is $$\left\lfloor\frac{720}{3}\right\rfloor+\left\lfloor\frac{720}{9}\right\rfloor+\left\lfloor\frac{720}{27}\right\rfloor+\left\lfloor\frac{720}{81}\right\rfloor+\left\lfloor\frac{720}{243}\right\rfloor=240+80+26+8+2=356\text{,}$$ it would have to be that $y=356$. So it remains to see if $3^x-3^{356}=720!$ has an integer solution in $x$. Side note: we can get a good approximation to $\log_3(720!)$ using Stirling's formula with one more term than is typically used: $$\log_3(720!)\approx\frac{1}{\ln3}\left(720\ln(720)-720 +\frac{1}{2}\ln(2\pi\cdot720)\right)\approx3660.3\ldots$$ Since the terms in Stirling's formula are alternating after this, we can deduce that this is correct to the tenths place. Values of $\ln(3)$, $\ln(720)$, and $\ln(2\pi)$ are easy to calculate by hand to decent precision if needed. $720!$ is a lot bigger than $3^{356}$. Since $\log_3(720!)\approx3660.3$, in base $3$, $720!$ has $3661$ digits (trigits?), whereas $3^{356}$ just has $357$. So $\log_3(720!+3^{356})$ and $\log_3(720!)$ must be very close together. Since the latter is $\approx3660.3$ though, it's not possible for the former to be an integer. More formally, $$\log_3(720!)<\log_3(720!+3^{356})=x=\log_3(720!)+\log_3\mathopen{}\left(1+\frac{3^{356}}{720!}\right)\mathclose{}<\log_3(720!)+\frac{1}{\ln(3)}\frac{3^{356}}{720!}$$ $$3660.3\ldots<x<3660.3\ldots$$ and there is no integer $x$ between the values on the two ends.
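Both computations are easy to reproduce exactly in Python: Legendre's formula gives the exponent $356$, and math.log accepts arbitrarily large Python integers, so $\log_3(720!)\approx 3660.3$ can be checked directly, as can the fact that adding $3^{356}$ does not push it up to the next integer.

```python
import math

def legendre(n, p):
    """Exponent of the prime p in n! (Legendre's formula)."""
    total, pk = 0, p
    while pk <= n:
        total += n // pk
        pk *= p
    return total

print(legendre(720, 3))                       # 356

f = math.factorial(720)                       # exact integer value of 720!
log3 = math.log(f) / math.log(3)
print(log3)                                   # ~3660.3

# 720! + 3^356 would have to be an exact power of 3, but its base-3
# logarithm is squeezed strictly between 3660 and 3661:
print(math.log(f + 3 ** 356) / math.log(3))
```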
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Intuition for a ring homomorphism? A map $f: A \to B$ between two rings $A$ and $B$, is called a ring homomorphism if $f(1_A) = 1_B$, and one has $f(a_1 + a_2) = f(a_1) + f(a_2)$ and $f(a_1 \cdot a_2) = f(a_1) \cdot f(a_2)$, for any $a_1, a_2 \in A$. My question is, what is the intuition for ring homomorphisms? My grasp on them is kind of bad...
Is $482350923581014689 + 51248129432153$ an even or odd number? Of course it is even because adding two odd numbers gives an even number right? You didn't have to calculate. Is $1254346425649847 \times 64341341232606942$ an even or odd number? Well, an odd number times an even number must be even. Again, you didn't have to multiply that big numbers. Now consider the ring homomorphism $\varphi : \mathbb Z \to \mathbb Z_2$ given by $$\varphi(x)= \begin{cases} 0 & \text{if $x$ is even} \\ 1 & \text{if $x$ is odd} \end{cases} $$ In $\mathbb Z_2$, we have $1+1=0$ which means adding two odd numbers gives an even number. Also, $1 \cdot 0 =0$ and so, multiplication of an odd number and an even number gives an even number. In short, making the operation first and finding its image is the same as finding the image and then making the operation in ring homomorphisms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Project point on plane - Unique identifier? I have a number of planes (in $\mathbb{R}^3$), each represented by a point $\vec{P_i}$ which lies within the plane and the normal vector $\vec{n_i}$. If I project a point $\vec{Q}$ (which does not lie within any of the planes) onto each plane, the resulting projected points are $\vec{Q'_i}$. Is the following valid? If the planes $k$ and $j$ (given by $\left(\vec{P_k},\vec{n_k}\right)$, and $\left(\vec{P_j},\vec{n_j}\right)$) represent the same plane: $\Rightarrow \vec{Q'_k} = \vec{Q'_j}$ Otherwise (planes $k$ and $j$ are distinct planes that are either parallel or intersect in a line): $\Rightarrow \vec{Q'_k} \neq \vec{Q'_j}$
By projecting $\vec Q$ onto each plane, I assume you mean orthogonal projection. So the question is: can one point projected onto two different planes result in the same image point? For that to be possible, the planes in question have to intersect, otherwise they don't have any common points at all. But if they intersect and are not equal, then their normal vectors differ in direction. And if the normal vectors differ, then the lines of projection differ. And for two different lines to intersect the two different planes in the same point, that point must be the only point of intersection of the lines. We already know that the lines intersect in $\vec Q$, though, so this can only happen if $\vec Q$ lies in both planes. Since you stated that it lies in none of the planes, that can't happen either; therefore you can't have two equal image points, and your assumption is valid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible for a disconnected graph to have an Euler circuit? I have a doubt! Wikipedia says: An Eulerian graph is one in which all vertices have even degree; Eulerian graphs may be disconnected. What I know: the definition of an Euler graph is "An Euler circuit is a circuit that uses every edge of a graph exactly once. ▶ An Euler path starts and ends at different vertices. ▶ An Euler circuit starts and ends at the same vertex." According to my little knowledge, "an Euler graph should have all vertices of even degree, and should be a connected graph". I am asking: is it possible for a disconnected graph to have an Euler circuit? If it is possible, show an example. EDITED: Here is my supplementary problem (I voted for the answer): Which of the following graphs has an Eulerian circuit?
The other answers answer your (misleading) title and miss the real point of your question. Yes, a disconnected graph can have an Euler circuit. That's because an Euler circuit is only required to traverse every edge of the graph, it's not required to visit every vertex; so isolated vertices are not a problem. A graph is connected enough for an Euler circuit if all the edges belong to one and the same component. But that's not what was confusing you five years ago when you asked the question. The confusion arises from the fact that some people use the term "Eulerian graph" to mean a graph that has an Euler circuit, while other people (deplorably) use the term "Eulerian graph" for any graph in which all vertices have even degree, regardless of whether or not it has a Eulerian circuit. From the Wikipedia article Eulerian path: The term Eulerian graph has two common meanings in graph theory. One meaning is a graph with an Eulerian circuit, and the other is a graph with every vertex of even degree. These definitions coincide for connected graphs. People using the second definition would consider the graph $K_3\cup K_3$ to be "Eulerian", although of course it has no Euler circuit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Evaluate the summation $\sum_{k=1}^{n}{\frac{1}{2k-1}}$ I need to find this sum $$\sum_{k=1}^{n}{\frac{1}{2k-1}}$$ by manipulating the harmonic series. I have been given that $$\sum_{k=1}^{n}{\frac{1}{k}} \approx \ln(n) + C$$ for large $n$, where $C$ is a constant. I have given quite a bit of thought to it but have not been able to arrive at the solution. Any method to find this summation would really help a lot.
$$\sum_{k=1}^{n}\frac{1}{2k-1}=\sum_{k=1}^{n}\frac{1}{2k}+\sum_{k=1}^{n}\frac{1}{2k(2k-1)}=\frac{H_n}{2}+\log(2)-\sum_{k>n}\frac{1}{2k(2k-1)}$$ hence it follows that your sum behaves like $\frac{1}{2}\log(n)+O(1)$. Through the above line and the asymptotics for harmonic numbers, we can say even more: $$\sum_{k=1}^{n}\frac{1}{2k-1}=\frac{1}{2}\log(n)+\left(\frac{\gamma}{2}+\log(2)\right)+O\left(\frac{1}{n}\right).$$
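A numerical check of the resulting asymptotic $\sum_{k=1}^{n}\frac1{2k-1}=\frac12\log n+\frac\gamma2+\log 2+O(1/n)$ (plain Python, with the Euler-Mascheroni constant hard-coded):

```python
import math

gamma = 0.5772156649015329          # Euler-Mascheroni constant

def odd_harmonic(n):
    return sum(1 / (2 * k - 1) for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    exact = odd_harmonic(n)
    approx = 0.5 * math.log(n) + gamma / 2 + math.log(2)
    print(n, exact, approx, exact - approx)   # difference shrinks like O(1/n)
```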
{ "language": "en", "url": "https://math.stackexchange.com/questions/1414742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Polynomial tending to infinity Take any polynomial $(x-a_1)(x-a_2)\ldots(x-a_n)$ with roots $a_1, a_2,\ldots,a_n$ where we order them so that $a_{i+1}>a_i$ is increasing so $a_n$ is the biggest root. It doesn't matter whether the polynomial is of odd or even degree or whether there is a minus sign in front of highest power of $x$ since all these cases can be treated similarly. How do you show rigorously that after $x=a_n$ the polynomial is a strictly increasing function?
It is the product of positive and strictly increasing functions. Alternatively: The derivative of $p$ has degree $n-1$ and at least one root in each interval $(a_i,a_{i+1})$ (Rolle). Hence it has no root besides these, especially it does not change signs beyond $x=a_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Closure set of closure set intersection - formal languages Can we say... Given L1 and L2. Is the following true? $$ L_{1}^{*} \cap L_{2}^{*} = (L_{1}^{*} \cap L_{2}^{*})^{*} $$ I think it is true but I can't be sure. What permutation of either wouldn't be contained in each other. Sorry I don't know how to format on Math Exchange to make things look better. -Steve
Let $L$ be a language of the free monoid $A^*$. The answer easily follows from the fact that $L = L^*$ if and only if $L$ is a submonoid of $A^*$. Now $L_1^*$ and $L_2^*$ are submonoids of $A^*$ and hence their intersection is a submonoid of $A^*$. Thus $L_1^* \cap L_2^*$ is a submonoid of $A^*$ and it is equal to its own star.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Isolate Costs in NPV equation Hey can anyone help with this? This is the classic NPV equation: NPV = -CapEx + ∑ (Revenue − Costs) / (1+Discount)^i The partial sum is from i = 0 to n years. For my purposes all the elements are know except costs. I need to isolate costs in this equation. Is this possible? Thanks, Mike
Let $\frac{1}{\texttt{1+discount}}=a$. The equation becomes $NPV=-CapEx+\sum_{i=0}^n\texttt{(Revenue-Costs)}\cdot a^i$, i.e. $NPV=-CapEx+\texttt{(Revenue-Costs)}\cdot\sum_{i=0}^n a^i$. Here $\sum_{i=0}^n a^i$ is the partial sum of a geometric series: $\sum_{i=0}^n a^i =\frac{1-a^{n+1}}{1-a}$. Therefore $NPV=-CapEx+\texttt{(Revenue-Costs)}\cdot\frac{1-a^{n+1}}{1-a}$. Adding $CapEx$ to both sides and then dividing by $\frac{1-a^{n+1}}{1-a}$ gives $(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}=\texttt{Revenue-Costs} \quad \color{blue}{(1)}$. Now it is just one step to isolate the costs: multiply the equation by $(-1)$, so all signs flip, $-(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}=\texttt{-Revenue+Costs}$, and then add $\texttt{Revenue}$ to both sides: $\texttt{Costs}=\texttt{Revenue}-(NPV+CapEx)\cdot \frac{1-a}{1-a^{n+1}}$. Pay attention to the sign of the second term on the RHS: the costs can be higher or lower than the revenue. The (constant) revenue must be higher than the (constant) costs if you want a positive NPV, but this is only a necessary condition, not a sufficient one, because of the CapEx. This can be seen in $\color{blue}{(1)}$.
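If it helps, here is a small Python sketch of the final formula (the function and variable names are mine; it assumes, as above, constant revenue and costs with cash flows in years $i=0,\dots,n$):

```python
def implied_costs(npv, capex, revenue, discount, n):
    """Solve NPV = -CapEx + (Revenue - Costs) * sum_{i=0..n} a^i for Costs."""
    a = 1 / (1 + discount)                 # discount factor
    annuity = (1 - a**(n + 1)) / (1 - a)   # geometric partial sum
    return revenue - (npv + capex) / annuity

# toy numbers: revenue 100 per year for years 0..10, CapEx 500, 5% discount, target NPV 200
print(implied_costs(npv=200, capex=500, revenue=100, discount=0.05, n=10))
```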
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit $n \rightarrow \infty \frac{n}{e^x-1} \sin\frac{x}{n}$ I am just working through some practice questions and cannot seem to get this one. Plugging this into wolfram alpha I know the limit should be $\frac{x}{e^x-1}$, but I am having a bit of trouble working through this. I have tried to rewrite the limit as $\frac{\frac{n}{e^x-1}}{\csc(\frac{x}{n})}$, then applying L'Hopital's rule a few times gets pretty messy. Is there an easier what to compute this limit that I am not seeing? Thanks very much in advance!
With equivalents, this is trivial: $\;\sin \dfrac xn\sim \dfrac xn\enspace(n\to\infty)$, hence $$\frac n{\mathrm e^x -1}\,\sin \frac xn\sim\frac n{\mathrm e^x -1}\frac xn=\frac x{\mathrm e^x -1}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that $k[x,y]/(xy-1)$ is not isomorphic to a polynomial ring in one variable. Let $R=k[x,y]$ be a polynomial ring ($k$, of course, is a field). Show that $R/(xy-1)$ is not isomorphic to a polynomial ring in one variable.
Suppose $R/(xy-1)\simeq K[T]$, where $K$ is a commutative ring. Then $K$ is an integral domain and since $\dim K[T]=1$ we get $\dim K=0$ (why?). It follows that $K$ is a field. So $k[X,X^{-1}]\simeq K[T]$. Now use this answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 0 }
Is $V$ a simple $\text{End}_kV$-module? Let $V$ be a finite-dimensional vector space over $k$ and $A = \text{End}_k V$. Is $V$ a simple $A$-module?
Simply look at the definition: Recall that a module $M$ over a ring $R$ is simple if the only non-zero submodule of $M$ is $M$ itself. Equivalently, for any $x\neq 0$ in $M$, the submodule $Rx=\left\{rx:r\in R\right\}$ generated by $x$ is $M$. Now in your case: Take a nonzero $x\in V$ and any $y\in V$. Is it true that there exists $T\in A=\operatorname{End}(V)$ with $Tx=y$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to calculate the closed form of the Euler Sums We know that the closed form of the series $$\sum\limits_{n = 1}^\infty {\frac{1}{{{n^2}\left( {\begin{array}{*{20}{c}} {2n} \\ n \\ \end{array}} \right)}}} = \frac{1}{3}\zeta \left( 2 \right).$$ but how to evaluate the following series $$\sum\limits_{n = 1}^\infty {\frac{{{H_n}}}{{n\left( {\begin{array}{*{20}{c}} {2n} \\ n \\ \end{array}} \right)}}} ,\sum\limits_{n = 1}^\infty {\frac{{{H_n}}}{{{n^2}\left( {\begin{array}{*{20}{c}} {2n} \\ n \\ \end{array}} \right)}}} .$$
Using the main result proved in this question, $$ \sum_{n\geq 1}\frac{x^n}{n^2 \binom{2n}{n}}=2 \arcsin^2\left(\frac{\sqrt{x}}{2}\right)\tag{1}$$ it follows that: $$ \sum_{n\geq 1}\frac{H_n}{n^2 \binom{2n}{n}}=\int_{0}^{1}\frac{2 \arcsin^2\left(\frac{\sqrt{x}}{2}\right)-\frac{\pi^2}{18}}{x-1}\,dx \tag{2}$$ and: $$ \sum_{n\geq 1}\frac{H_n}{n\binom{2n}{n}}=\int_{0}^{1}\frac{\frac{2\sqrt{x}\arcsin\left(\frac{\sqrt{x}}{2}\right)}{\sqrt{4-x}}-\frac{\pi}{3\sqrt{3}}}{x-1}\,dx.\tag{3}$$ The RHS of $(2)$ equals: $$ \int_{0}^{\pi/6}\frac{8\sin(t)\cos(t)}{4\sin^2(t)-1}\left(2t^2-\frac{\pi^2}{18}\right)\,dt=-\int_{0}^{\pi/6}4t\cdot\log(1-4\sin^2(t))\,dt\tag{4}$$ which is computable in terms of dilogarithms and $\zeta(3)$. In fact, we have: $$ \sum_{n\geq 1}\frac{H_n}{n^2\binom{2n}{n}}=\frac{\pi}{18\sqrt{3}}\left(\psi'\left(\frac{1}{3}\right)-\psi'\left(\frac{2}{3}\right)\right)-\frac{\zeta(3)}{9},\tag{5}$$ where $\psi'\left(\frac{1}{3}\right)-\psi'\left(\frac{2}{3}\right)=9\cdot L(\chi_3,2)$ with $\chi_3$ being the non-principal character $\!\!\pmod{3}$. Now, I am working on $(3)$, which should be similar. It boils down to: $$-16\int_{0}^{\pi/6}\log(1-4\sin^2(t))\left(\frac{t}{\cos^2(t)}+\tan(t)\right)\,dt=\\=\frac{2\pi^2}{3}+2\log^2(3)-2\log^2(4)-4\,\text{Li}_2\left(\frac{3}{4}\right)-16\int_{0}^{\pi/6}\log(1-4\sin^2(t))\left(\frac{t}{\cos^2(t)}\right)\,dt$$ and the last integral depends on $$ \int_{0}^{\frac{1}{\sqrt{3}}}\arctan(z)\log(1-3z^2)\,dz,\qquad \int_{0}^{\frac{1}{\sqrt{3}}}\arctan(z)\log(1+z^2)\,dz,$$ so $(3)$ is a complicated expression involving $\log(2),\log(3),\pi$ and their squares, the previous $L(\chi_3,2)$ and $\text{Li}_2\left(\frac{3}{4}\right)$.
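A numerical sanity check of $(5)$, for anyone who wants one (a sketch with `mpmath`; the trigamma values $\psi'$ are obtained as `psi(1, ·)`):

```python
from mpmath import mp, mpf, nsum, inf, binomial, harmonic, psi, zeta, pi, sqrt

mp.dps = 25
lhs = nsum(lambda n: harmonic(n) / (n**2 * binomial(2*n, n)), [1, inf])
rhs = pi / (18 * sqrt(3)) * (psi(1, mpf(1)/3) - psi(1, mpf(2)/3)) - zeta(3) / 9
print(lhs)   # approximately 0.575001
print(rhs)   # agrees with lhs to the working precision
```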
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$(a,b) \mathbin\# (c,d)=(a+c,b+d)$ and $(a,b) \mathbin\&(c,d)=(ac-bd(r^2+s^2), ad+bc+2rbd)$. Multiplicative inverse? Let $r\in \mathbb{R}$ and let $0\neq s \in \mathbb{R}$. Define operations $\#$ and $\&$ on $\mathbb{R}$ x $\mathbb{R}$ by $(a,b) \mathbin\#(c,d)=(a+c,b+d)$ and $(a,b) \mathbin\&(c,d)=(ac-bd(r^2+s^2), ad+bc+2rbd)$ as addition and multiplication, respectively. Do these operations form a field? I have shown that every axiom of the field is satisfied except for the "existence of multiplicative inverses" axiom. This is where I need help. We either need to show that this axiom fails and so we don't have a field or that the axiom $aa^{-1}=1$ is satisfied for every $a\neq 0 \in \mathbb{R}$ The multiplicative identity in this problem is $(1,0)$. Any solutions or hints are greatly appreciated.
The addition and multiplication formulas are somewhat reminiscent of the formulas for complex numbers. We suspect that a map of the form $$ \phi\colon (x,y)\mapsto \alpha x+\beta y$$ for suitable $\alpha,\beta\in\mathbb C$ turns out to be a field isomorphism. As we want $(1,0)\mapsto 1$, we conclude $\alpha=1$. Then $$ \phi((a,b)\mathop{\&}(c,d))=\phi(a,b)\phi(c,d)=(a+\beta b)(c+\beta d) = ac+\beta(bc+ad)+\beta^2bd$$ and on the other hand $$\phi(ac-bd(r^2+s^2),ad+bc+2rbd)=ac-bd(r^2+s^2)+\beta(bc+ad+2rbd)$$ so that we obtain an identity by choosing $\beta$ with $\beta^2=2r\beta-(r^2+s^2)$, i.e., one of the roots of the polynomial $X^2-2rX+r^2+s^2$. For $s\ne 0$, this guarantees $\beta\in\mathbb C\setminus\mathbb R$. Therefore the image of $\phi$ in $\mathbb C$ is more than just $\mathbb R$; as $\phi$ is linear and both $\mathbb C$ and $F$ (i.e. $\mathbb R\times\mathbb R$ with the operations $\#$ and $\&$) have dimension $2$ as real vector spaces, we conclude that $\phi$ is a vector space isomorphism. As the choice of $\beta$ guaranteed compatibility with multiplication, $\phi$ is indeed a ring isomorphism and as $\mathbb C$ is a field, so is $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1415915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Similarities Between Derivations of Fourier Series Coefficients (odd, even, exponential) I've recently started learning about Fourier series from this set of tutorials: (http://lpsa.swarthmore.edu/Fourier/Series/WhyFS.html) and a few other sources. While chugging through the material, I noticed and was intrigued by the similarity of the derivation of the coefficients for the even Fourier series, odd Fourier series, and exponential Fourier series. I'll explain what I noticed in the following two paragraphs. The first step in all three derivations I looked at is to multiply both sides of the equation x(t)=sum(etc...) by the non-coefficient term inside of the sum. For the even and odd Fourier series, this is a cosine function. For the exponential Fourier series, this is a natural exponential function. The derivation also requires replacing the index value inside of that function with a new independent variable. The next step in all three derivations is to integrate both sides of the equation over one period. From here, the side of the equation with the sum works out to be the coefficient multiplied by the period in all three derivations, which one then divides by the period to isolate the coefficient. There are probably more similarities between these derivations, and it makes sense that they would be similar, since they are all Fourier series. I was wondering, though, if there is some concrete mathematical reason that all these derivations, particularly the exponential vs. the other two, end up with lots of similarities. Is there something deep, profound, and interesting going on here?
Yes, there is something profound. Spaces of real and complex valued functions can be viewed as infinite dimensional vector spaces since they can be added and multiplied by numbers. In some of these spaces you can introduce scalar product (which allows you to introduce distances and angles). What you are doing when you calculate particular coefficient in your expansion is finding projection of function on given subspace - just as you would calculate projection of a vector in Euclidean space onto given axis. Each vector can be described by giving its components in some system of "axis" and the same can be done with functions. Difference is that in this case scalar product is given by integral. If you want to learn something more about it, look up "Hilbert spaces".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Give an example of a structure of cardinality $\omega_2$ that has a substructure of $\omega$ but no substructure of $\omega_1$ Give an example of a structure of cardinality $\omega_2$ that has a substructure of $\omega$ but no substructure of $\omega_1$ This is from Hodges' A Shorter Model Theory. My idea is to take some set of cardinality $\omega_2$ as the domain, $\mathrm{dom}(A)$. Choose and fix a subset of $\mathrm{dom}(A)$ with cardinality $\omega$, call this $X$. I need to then choose an $n$-ary relation, $R^A$ carefully. The point is that I need to find something that breaks: $R^A=R^B\cap A^n$ for every subset of cardinality $\omega_1$ but such that that holds for A subset of cardinality $\omega$. The ones I've tried (like everything in the set $X$ being related to each other, etc.) didn't work. Any hints?
HINT: Using only relations won't work, since given a language with only relation symbols, and a structure to the language, every subset of the domain defines a substructure. So using functions is necessary here. Let $\{f_\alpha\mid\alpha<\omega_2\}$ be a list of $\omega_2$ unary function symbols. For sake of concreteness we can take $A=\omega_2$. Find a way to interpret $f_\alpha$ so for every $n<\omega$, $f_\alpha(n)<\omega$; and if $\beta\geq\omega$, then $\{f_\alpha(\beta)\mid\alpha<\omega_2\}=\omega_2$ (you actually don't need equality there, requiring that $|\{f_\alpha(\beta)\mid\alpha<\omega_2\}|=\aleph_2$ is enough).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Vector space over an infinite field which is a finite union of subspaces Let $V=\bigcup_{i=1}^n W_i$ where $W_i$'s are subspaces of a vector space $V$ over an infinite field $F$. Show that $V=W_r$ for some $1 \leq r \leq n$. I know the result "Let $W_1 \cup W_2$ is a subspace of a vector space $V$ iff $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$." Now can I extend this to some $n$ subspaces. I have some answers here & here. So before someone put it as a duplicate I want to mention that I want a proof of this problem using basic facts which we use in proving the mentioned result.
The general result is that if $\lvert K\rvert\ge n$, $V$ cannot be the union of $n$ proper subspaces (Avoidance lemma for vector spaces). We'll prove that if $V=\displaystyle\bigcup_{i=1}^n W_i$ and the $W_i$s are proper subspaces of $V$, then $\;\lvert K\rvert\le n-1$. We can suppose no subspace is contained in the union of the others. Pick $u\in W_1\smallsetminus\displaystyle\bigcup_{i\neq1} W_i$, and $v\notin W_1$. The set $v+Ku$ is disjoint from $W_1$, and it intersects each $W_i\enspace(i>1)$ in at most $1$ point (otherwise $u$ would belong to $W_i$). As this set is in bijection with $K$, there results that $K$ has at most $n-1$ elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How does uniqueness of the additive inverse imply that $-(ax) = (-a)x$? How does uniqueness of the additive inverse imply that $-(ax) = (-a)x$? In my title, I should be clear that the additive inverse should be unique. But how does this help? I dont even get why uniqueness of a negative number matters... Is there a third class of numbers I should know about?
Hint: Use distributivity to show that $ax+(-(ax))=0$, then apply uniqueness.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that $1^2 + 2^2 + ..... + (n-1)^2 < \frac {n^3} { 3} < 1^2 + 2^2 + ...... + n^2$ I'm having trouble on starting this induction problem. The question simply reads : prove the following using induction: $$1^{2} + 2^{2} + ...... + (n-1)^{2} < \frac{n^3}{3} < 1^{2} + 2^{2} + ...... + n^{2}$$
If you wish a more direct application of induction, we have that if $$ 1^2+2^2+\cdots+(n-1)^2 < \frac{n^3}{3} $$ then $$ 1^2+2^2+\cdots+n^2 < \frac{n^3+3n^2}{3} < \frac{n^3+3n^2+3n+1}{3} = \frac{(n+1)^3}{3} $$ Similarly, if $$ 1^2+2^2+\cdots+n^2 > \frac{n^3}{3} $$ then $$ 1^2+2^2+\cdots+(n+1)^2 > \frac{n^3+3n^2+6n+3}{3} > \frac{n^3+3n^2+3n+1}{3} = \frac{(n+1)^3}{3} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Largest Number that cannot be expressed as 6nm +- n +- m I'm looking to find out if there is a largest integer that cannot be written as $6nm \pm n \pm m$ for $n,m$ elements of the natural numbers. For example, there are no values of $n,$m for which $6nm \pm n \pm m = 17.$ Other numbers which have no solution are $25$, $30$, and $593$. Can all numbers above a certain point be written as $6nm \pm n \pm m$ or will there always be gaps? Thanks.
This question is equivalent to the Twin Prime Conjecture. Let's look at the three cases: $$6nm+n+m =\frac{(6n+1)(6m+1)-1}{6}\\ 6nm+n-m = \frac{(6n-1)(6m+1)+1}{6}\\ 6nm-n-m = \frac{(6n-1)(6m-1)-1}{6} $$ So if $N$ is of one of these forms, then: $$6N+1=(6n+1)(6m+1)\\ 6N-1=(6n-1)(6m+1)\\ 6N+1=(6n-1)(6m-1)$$ If $6N-1$ and $6N+1$ are both primes, then there is no non-zero solutions. If $6N-1$ is composite, its factorization is of the form $(6n-1)(6m+1)$. If $6N+1$ is composite, its factorization is either of the form $(6n-1)(6m-1)$ or $(6n+1)(6m+1)$. So the question is equivalent to the Twin Prime Conjecture. I don't think this will be solved on this site, unless it is just a reference to a resolution of the conjecture elsewhere. Currently, the state of knowledge about twin primes is thin. For example, when $N=31$, $6N-1=5\cdot 37 = (6\cdot 1-1)(6\cdot 6+1)$. So with $m=1,n=6$, we get $31 = 6\cdot 1\cdot 6 +1 -6$. Alternative, $6N+1=11\cdot 17=(6\cdot 2-1)(6\cdot 3-1)$. So $(m,n)=(2,3)$ and $31=6\cdot2\cdot 3 -2-3$.
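For anyone who wants to experiment, here is a naive brute-force check of this correspondence in Python (`sympy` is used only for the primality test; the search bound $n,m\le N$ is crude but sufficient):

```python
from sympy import isprime

def representable(N):
    """Is N = 6nm + n + m, 6nm + n - m, or 6nm - n - m for some n, m >= 1?"""
    for n in range(1, N + 1):
        for m in range(1, N + 1):
            if N in (6*n*m + n + m, 6*n*m + n - m, 6*n*m - n + m, 6*n*m - n - m):
                return True
    return False

for N in range(1, 120):
    if not representable(N):
        # by the argument above these N are exactly those with 6N-1 and 6N+1 both prime
        assert isprime(6*N - 1) and isprime(6*N + 1)
        print(N, (6*N - 1, 6*N + 1))   # 1, 2, 3, 5, 7, 10, 12, 17, 18, 23, 25, 30, ...
```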
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometric proof of $QM \ge AM$ Prove by geometric reasoning that: $$\sqrt{\frac{a^2 + b^2}{2}} \ge \frac{a + b}{2}$$ The proof should be different than one well known from Wikipedia: DISCLAIMER: I think I devised such proof (that is not published anywhere to my knowledge), but I believe it is too complex (and not that beautiful). It involves some ideas from another question. I may post it in next few days as an answer, but I am hoping someone will offer better, more beautiful, proof by then.
What about something like this: $\hspace{80pt}$ The two right triangles have common base, which is also the diameter of the red circle. The endpoints of the black diameter are also the foci of the ellipses, and their size is determined by the tip of the respective triangle. Observe, that the minor radius of the blue ellipse is equal to the radius of the red circle, thus the circle is inscribed into the blue ellipse. Therefore, as the tip of the green triangle is contained within the blue ellipse, its perimeter has to be smaller or equal the perimeter of the blue triangle, which implies the inequality between the means. $$\sqrt{2a^2+2b^2}\geq a+b.$$ More visually, the green ellipse has the same focal points as the blue one, but intersects the red circle in 4 points (the tip and 3 reflections), while the blue does only in two. Hence the radii of the green ellipse has to be smaller or equal the radii of the blue ellipse, which also implies the inequality between the means. I hope this helps $\ddot\smile$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Calculate simple expression: $\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}$ Tell me please, how calculate this expression: $$ \sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}} $$ The result should be a number. I try this: $$ \frac{\left(\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}}\right)\left(\sqrt[3]{\left(2 + \sqrt{5}\right)^2} - \sqrt[3]{\left(2 + \sqrt{5}\right)\left(2 - \sqrt{5}\right)} + \sqrt[3]{\left(2 - \sqrt{5}\right)^2}\right)}{\left(\sqrt[3]{\left(2 + \sqrt{5}\right)^2} - \sqrt[3]{\left(2 + \sqrt{5}\right)\left(2 - \sqrt{5}\right)} + \sqrt[3]{\left(2 - \sqrt{5}\right)^2}\right)} = $$ $$ = \frac{2 + \sqrt{5} + 2 - \sqrt{5}}{\sqrt[3]{\left(2 + \sqrt{5}\right)^2} + 1 + \sqrt[3]{\left(2 - \sqrt{5}\right)^2}} $$ what next?
$(\sqrt[3]{2 + \sqrt{5}} + \sqrt[3]{2 - \sqrt{5}} )^3 \\ =(\sqrt[3]{2 + \sqrt{5}})^3+(\sqrt[3]{2 - \sqrt{5}} )^3+3(\sqrt[3]{2 + \sqrt{5}} ) (\sqrt[3]{2 - \sqrt{5}} )(\sqrt[3]{2 + \sqrt{5}} +\sqrt[3]{2 - \sqrt{5}} )$ So, writing $s$ for the sum and using $(2+\sqrt{5})(2-\sqrt{5})=-1$, we get $s^3=4-3s$, i.e. $s^3+3s-4=(s-1)(s^2+s+4)=0$. The quadratic factor has no real roots, so $s=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
When do circles on the sides of a triangle intersect? If you have a non degenerate triangle and two of the sides are chords of two respective circles, then under what conditions do the circles intersect at two distinct points? I'm having trouble understanding why this condition holds in a step of a proof of a theorem. Here's the statement and proof of the theorem: Source: http://www.cut-the-knot.org/proofs/nap_circles.shtml Theorem Let triangles be erected externally on the sides of ΔABC so that the sum of the "remote" angles P,Q, and R is 180°. Then the circumcircles of the three triangles ABR, BCP, and ACQ have a common point. Proof Let F be the second point of intersection of the circumcircles of ΔACQ and ΔBCP. ∠BFC + ∠P = 180°. Also, ∠AFC + ∠Q = 180°. Combining these with ∠P + ∠Q + ∠R = 180° immediately yields ∠AFB + ∠R = 180°. So that F also lies on the circumcircle of ΔABR. The bolded line is what my question is about. Why is this true in this case, and when is it true in general? I've thought of a case where this would not be true: if you have $3$ circles tangent to each other and take the triangle formed by connecting the 3 points of contact.
Let we say that $\Gamma_C$ is a circle through $A,B$ and $\Gamma_B$ is a circle through $A,C$. Since $\Gamma_B$ and $\Gamma_C$ meet at $A$, they have for sure a second intersection point, unless they are tangent at $A$. That implies that their centres $O_B,O_C$ and $A$ are collinear, so if $P,Q$ are the remote points on $\Gamma_B$ and $\Gamma_C$ respectively, $$\widehat{APC}+\widehat{AQB}=\frac{\widehat{AO_B C}+\widehat{A O_C B}}{2}=\pi-\left(\widehat{O_B A C}+\widehat{O_C A B}\right)=\widehat{BAC}.$$ That implies that the third remote point $R$ has to fulfill $\widehat{BRC}=\pi-\widehat{BAC}$, so the circumcircle of $BCR$ goes through $A$ and the theorem is still true in this slightly degenerate case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Combinatorics question on group of people making separate groups If there are $9$ people, and $2$ groups get formed, one with $3$ people and one with $6$ people (at random), what is the probability that $2$ people, John and James, will end up in the same group? I'm not sure how to do this. So far, I've got: The total number of groups possible is $${9\choose 6}=\frac{9!}{6!3!}=84$$ The total number of groups when they are together is $${7\choose 4} +{7\choose 1} =\frac{7!}{4!3!}+\frac{7!}{6!}=42$$ Therefore, probability $= \frac{42}{84} =50\%$ However, I am not sure if that is correct.
P(both in same group) = P(both in group of 3) + P(both in group of 6) $$ =\frac{\binom{3}{2}+\binom{6}{2}}{\binom{9}{2}} = \frac{3+15}{36} = \frac12$$
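As a quick computational check (plain Python):

```python
from math import comb

p = (comb(3, 2) + comb(6, 2)) / comb(9, 2)
print(p)   # 0.5
```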
{ "language": "en", "url": "https://math.stackexchange.com/questions/1416939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How is $\mathbb N$ actually defined? I know perfectly well the Peano axioms, but if they were sufficient for defining $\mathbb N$, there would be no controversy whether $0$ is a member of $\mathbb N$ or not because $\mathbb N$ is isomorphic to a host of sub- and supersets of $\mathbb Z$. So it seems that mathematicians, when specifying $\mathbb N$, rarely mean the set of natural numbers according to its definition but rather some particular embedding into sets defined by different axiomatic systems. Doesn't this sort of misstate then the role of the Peano axioms as the defining characteristics of $\mathbb N$? It seems like most of the time the Peano axioms are not more than a retrofit to a subset of $\mathbb Z$ or other sets defined by a different axiomatic system with one or several axioms incompatible with the Peano axioms.
In the set theory course I took, natural numbers were not defined by Peano's axioms. One can define an inductive set in this manner: $I$ is inductive set if $$\emptyset\in I\ \wedge\ \forall x\in I\ ((x\cup \{x\})\in I) $$ We say that $n$ is natural number if $n\in I$ for any inductive set $I$. Some examples of natural numbers are: \begin{align} 0 &= \emptyset\\ 1 &= \{\emptyset\}\\ 2 &= \{\emptyset,\{\emptyset\}\}\\ &\ \ \vdots\\ n &= \{0,1,\ldots ,n-1\} \end{align} But, we still don't know if the set of all natural numbers exists. This is guaranteed by Axiom of infinity: There exists an inductive set. and with this we can prove existence of set $\mathbb N = \{n\ |\ n\ \text{is natural}\}$. Also, we can see that $\mathbb N$ is the smallest inductive set in the sense of set inclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Find $\sum\limits_{n=1}^{\infty}\frac{n^4}{4^n}$ So, yes, I could not do anything except observing that in the denominators, there is a geometric progression and in the numerator, $1^4+2^4+3^4+\cdots$. Edit: I don't want the proof of it for divergence or convergence only the sum.
\begin{align} n^4 = {} & \hphantom{{}+{}} An(n-1)(n-2)(n-3) \\ & {} + B n(n-1)(n-2) \\ & {} + C n(n-1) \\ & {} + D n \end{align} Find $A,B,C,D$. Then \begin{align} n^4 x^n & = An(n-1)(n-2)(n-3) x^n + \cdots \\[10pt] & = A\frac{d^4}{dx^4} x^n + B \frac{d^3}{dx^3} x^n + \cdots \end{align} Then add over all values of $n$, and then find the derivatives. Finally, apply this to the case where $x=\dfrac 1 4$.
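To see the number this method produces, here is a small `sympy` sketch of the same derivative idea, written with the equivalent operator $x\frac{d}{dx}$ (applying it four times to $\sum_{n\ge1}x^n=\frac{x}{1-x}$ gives $\sum_{n\ge1}n^4x^n$):

```python
from sympy import symbols, diff, Rational, simplify

x = symbols('x')
g = x / (1 - x)            # sum of x^n for n >= 1
for _ in range(4):
    g = x * diff(g, x)     # each application multiplies the coefficient of x^n by n
print(simplify(g.subs(x, Rational(1, 4))))   # 380/81
```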
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Proving $[(P\lor Q)\land(P\to R)\land(Q\to R)]\to R$ is a tautology without using a truth table? $$[(P\lor Q)\land(P\to R)\land(Q\to R)]\to R\tag{1}$$ How can I prove that $(1)$ is a tautology without using a truth table? I used the identity $$(P\to R)\land(Q\to R)\equiv(P\lor Q)\to R$$ but from there I get stuck and can't figure out where to go.
$$ \begin{array}{ll} &(P\vee Q) \wedge (P \Rightarrow R) \wedge (Q \Rightarrow R)\\ \equiv&\hspace{1cm}\{ \text{ by the tautology mentioned in the question }\}\\ &(P\vee Q) \wedge ( (P\vee Q) \Rightarrow R) \\ \equiv&\hspace{1cm}\{ \text{ } A \wedge (A \Rightarrow B ) \equiv A \wedge B, \text{ see below }\}\\ &(P\vee Q) \wedge R \\ \Rightarrow&\hspace{1cm}\{ \text{ } A \wedge B \Rightarrow B \text{ } \}\\ &R\\ \\ \\ &A \wedge (A \Rightarrow B)\\ \equiv&\hspace{1cm}\{ \text{ using the disjunctive definition of $\Rightarrow$ }\}\\ &A \wedge (\neg A \vee B) \\ \equiv&\hspace{1cm}\{ \text{ distribution of $\wedge$ over $\vee$ }\}\\ &(A\wedge\neg A)\vee(A \wedge B)\\ \equiv&\hspace{1cm}\{ \text{ complement law: $A\wedge\neg A\equiv\mathbf{false}$ } \}\\ &\mathbf{false}\vee(A \wedge B)\\ \equiv&\hspace{1cm}\{ \text{ $\mathbf{false}$ is the unit of $\vee$ } \}\\ &A\wedge B\\ \end{array} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
a bijection is an injective (one-to-one) , surjective (onto) map between sets. if S = (0, 1) and T =R, find a map from S to T which is a bijection is an injective (one-to-one) , surjective (onto) map between sets. if S = (0, 1) and T =R, find a map from S to T which is my effort 1) (a) f(x) = x is a one to one function but it is not an onto function as each point in R is not mapped to the domain (b) f(x) = 1/|x| is onto function but not one to one (c) f(x) = -cot(πx) is that correct any help will be appreciated
For part c), possible answers are $f(x)=\frac{2x-1}{x(x-1)}$, $\;f(x)=\frac{\ln 2x}{x-1}$, or $f(x)=\tan\left(\left(x-\frac{1}{2}\right)\pi\right)$, which is equivalent to your answer. For part b), possible answers are $f(x)=\frac{(4x-1)(2x-1)(4x-3)}{x(1-x)}$ or $f(x)=\tan\left(\pi\left(2x-\frac{1}{2}\right)\right)$ for $x\ne\frac{1}{2}$, with $f(\frac{1}{2})=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Deriving the formula of the Surface area of a sphere My young son was asked to derive the surface area of a sphere using pure algebra. He could not get to the right formula but it seems that his reasoning is right. Please tell me what's wrong with his logic. He reasons as follows: * *1.Slice a sphere into thin circles (or rings if you hollow them out) *The sum of the circumferences of all the circles forms the surface area of the sphere. *Since the formula of circumference is $2{\pi}R$, the sum of the circumferences would be $2{\pi}(R_1+R_2+R_3+...+R_n)$ *My son draws the radii beside each other and concludes that their sum would be equivalent to one-half of the area of the largest circle or $({\pi}R^2)/2$. He appears to be right from his drawing. *Substituting the sum of the radii, he comes up with a formula for the surface area of the sphere as $(R^2)*(\pi^2)$. *He asks me what's wrong with his procedure that he cannot derive the correct formula of $4{\pi}R^2$.
An interesting proof may go through the following lines: * *If $S(R)$ and $V(R)$ are the surface area and volume of a sphere with radius $R$, we have $S(R)=\frac{d}{dR} V(R)$ by triangulating the surface and exploiting $S(\alpha R)=\alpha^2 S(R), V(\alpha R)=\alpha^3 V(R)$ for any $\alpha > 0$, so the problem boils down to computing $V(R)$; *By using Cavalieri's principle and exploiting the fact that the area of a circle with radius $r$ is $\pi r^2$, we may see that the problem of computing $V(R)$ boils down to computing the area of $\{(x,y)\in\mathbb{R}^2 : 0\leq x\leq R, 0\leq y\leq x^2\}.$ *The last problem can be solved in many ways: by exploiting the optical properties of the parabola like Archimedes did, or by applying the Cavalieri's principle again and recalling that: $$ \frac{1}{n}\sum_{k=1}^{n}\left(\frac{k}{n}\right)^2 = \frac{(n+1)(2n+1)}{6n^2}.$$ A very short proof relying on the Gaussian integral is given by Keith Ball at page $5$ of his notorious book An Elementary Introduction To Modern Convex Geometry, but maybe it is the case to wait for your son to grow a bit :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Modular Arithmetic and prime numbers With respect to the maths behind the Diffie Hellman Key exchange algorithm. Why does: (ga mod p)b mod p = gab mod p It might be fairly obvious, but what basic maths guarantees this? Why does the first modulo p expression disappear from the LHS in the expression on the RHS. Apologies if it's not the right forum to ask it in. Maybe I should try the Information security stack Exchange forum instead.
We can actually use the binomial theorem: Observe that if we write $g^a=m+pk$, where $m\equiv g^a\mod p$, then $g^{ab}=(m+pk)^b=\sum_{n=0}^b \binom{b}{n} m^n(pk)^{b-n}\equiv m^b\mod p$, as every term except for the $n=b$ term will be divisible by $p$. Therefore, $(g^a\mod p)^b\mod p\equiv m^b\mod p$ and on the other hand $g^{ab}\mod p\equiv m^b$.
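This is also exactly why the two parties in Diffie-Hellman compute the same key; a toy Python illustration (the numbers are arbitrary, not secure parameters):

```python
p, g = 101, 2        # small prime modulus and base, purely illustrative
a, b = 37, 53        # the two secret exponents

shared_1 = pow(pow(g, a, p), b, p)   # (g^a mod p)^b mod p
shared_2 = pow(g, a * b, p)          # g^(ab) mod p
print(shared_1, shared_2, shared_1 == shared_2)
```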
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Number of divisors $d$ of $n^2$ so that $d\nmid n$ and $d>n$ I just wanted to share this nutshell with you guys, it is a little harder in this particular case of the problem: Find the number of divisors $d$ of $a^2=(2^{31}3^{17})^2$ so that $d>a$. What is the solution for the general case?
The divisors of $a^2$ come in pairs $\{d,\, a^2/d\}$: in each pair one divisor is $<a$ and the other is $>a$, except for the single divisor $d=a$, which pairs with itself. Since $a^2=2^{62}3^{34}$ has $(62+1)(34+1)=2205$ divisors, the number of divisors $d$ of $a^2$ with $d>a$ is $\frac{2205-1}{2}=1102$. Note that every such $d$ automatically satisfies $d\nmid a$, since any divisor of $a$ is at most $a$. In the general case, if $n^2$ has $\tau(n^2)$ divisors, then exactly $\frac{\tau(n^2)-1}{2}$ of them exceed $n$.
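A small Python check of the pairing argument (the toy value of $a$ is chosen only so that brute force stays fast):

```python
# toy analogue: a = 2^3 * 3^2
a = 2**3 * 3**2
divisors_of_a2 = [d for d in range(1, a*a + 1) if (a*a) % d == 0]
print(len(divisors_of_a2), sum(d > a for d in divisors_of_a2))   # 35 divisors, 17 of them exceed a

# the actual question: a = 2^31 * 3^17, so tau(a^2) = 63 * 35
tau = 63 * 35
print(tau, (tau - 1) // 2)   # 2205 and 1102
```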
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Residue of $g(z)g'(z)$ I know how to use the residue theorem and the winding number to find the residue of $f$, but I have no idea how to relate the residue of $g$ to that of $g\cdot g'$, especially without knowing anything about the order of the poles.
Hint: We can apply a substitution to write $$ \int_{\gamma} g(z)g'(z)\,dz= \int_{g(\gamma)} u\,du $$ Of course, the function $\phi(u) = u$ is entire.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1417889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
there does not exist a perfect square of the form $7\ell+3$ I have been trying to prove that there does not exist a perfect square of the form $7\ell+3$. I've tried using $n$ as even or odd, and I'm getting stuck. Can someone put me on the path? Is this an equivalence class problem?
For any integer $a,$ $$a\equiv0,\pm1,\pm2,\pm3\pmod7\implies a^2\equiv0,1,4,9\equiv2$$ $$\implies a^2\not\equiv3(=7-4),5(=7-2),6(=7-1)\pmod7$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
checking definition of bounded linear function involves operator maps between different spaces Let $H$ and $K$ be two Hilbert spaces. Let $T:K\to H$ be a bounded linear operator. Denote the inner products on $H$ and $K$ by $\langle\cdot,\cdot\rangle_H$, $\langle\cdot,\cdot\rangle_K$. Fix any $y\in H$; then the linear mapping $$ \Phi: K\to\mathbb{C},\quad\Phi x :=\langle Tx,y\rangle_H$$ defines a bounded functional on the Hilbert space $K$. Could anyone help with the proof of above claim? Recall, a functional simply a linear operator $\Phi:H\to \mathbb{C}$. $\Phi$ is bounded if there exists $K$ such that $|\Phi x|\le K\|x\|,\,\,\forall\,\,x\in H$. So I have $$|\Phi x|=|\langle Tx,y\rangle_H|\le\cdots$$ Please help.
Hint: How do you prove that $y^*\colon H\to {\bf C}$ defined by $y^*(y'):=\langle y',y\rangle$ is bounded? Or, if you know what an adjoint operator is, just notice that $\Phi$ is just $T^*(y^*)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Currency conversion using available rates I have the following rates available: USD -> USD = 1 USD -> EUR = 0.887662 USD -> GBP = 0.654514 I want to calculate the following rates GBP -> EUR EUR -> GBP Using only the first 3 rates. My process so far has been: 1 USD / 0.654514 GBP = 1.527851 So 1 GBP = 1.53 USD 1 USD / 0.887662 = 1.126554 So 1 EUR = 1.13 USD I assume I have all the data needed now to calculate the following conversions GBP -> EUR EUR -> GBP From this point however, I am not sure how to continue.
We know the exchange rates $\mathrm{USD}\to\mathrm{EUR}=0.887662$ and $\mathrm{USD}\to\mathrm{GBP}=0.654514$, each read as units of the target currency received for one unit of the source currency. So we have $$ \mathrm{EUR}\to\mathrm{GBP}=(\mathrm{EUR}\to\mathrm{USD})\times(\mathrm{USD}\to\mathrm{GBP})=\frac{1}{0.887662}\times 0.654514=\frac{0.654514}{0.887662}=0.737346 $$ and $$ \mathrm{GBP}\to\mathrm{EUR}= \frac{1}{\mathrm{EUR}\to\mathrm{GBP}}=\frac{1}{0.737346}=1.356215 $$
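The same computation in a couple of lines of Python (each rate read as units of the target currency per one unit of the source):

```python
usd_to_eur = 0.887662   # EUR received for 1 USD
usd_to_gbp = 0.654514   # GBP received for 1 USD

eur_to_gbp = usd_to_gbp / usd_to_eur   # approximately 0.737346 GBP per EUR
gbp_to_eur = usd_to_eur / usd_to_gbp   # approximately 1.356215 EUR per GBP
print(eur_to_gbp, gbp_to_eur)
```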
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the Lie algebra morphism induced by surjective Lie group morphism also surjective? Let $G$ be a matrix lie group and $\Pi : G\to \Pi(G)$ a surjective Lie group morphism. Let $\mathfrak g$ and $\mathfrak h$ be the respective Lie algebras of $G$ and $\Pi(G)$. Then there is a unique morphism of Lie algebras $\pi : \mathfrak g \to \mathfrak h$ which makes the following diagram commute: $$\begin{matrix} && \Pi & \\ &G & \to & \Pi(G)\\ \exp &\uparrow & & \uparrow & \exp\\ &\mathfrak{g} & \to &\mathfrak{h}\\ && \pi \\ \end{matrix}$$ Is $\pi$ surjective? $\quad$ ie. Can we write $\mathfrak h = \pi(\mathfrak g)$?
Using a dimension argument and Sard's theorem (or anything that ensures the identity is a regular value of $\Pi$), the dimension of $G$ is the sum of the dimensions of the kernel and image of $\Pi$, and the same holds for $\pi$; so it remains to show that the dimensions of the two kernels coincide. Using that $\Pi ( e^X) = e^{\pi (X)} $, it's easy to show that the kernel of $\pi$ is the Lie algebra of $\ker \Pi$, and thus the dimensions coincide.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
verification that simplification in textbook is incorrect I am using a textbook that asks for the following expression to be simplified: ${9vw^3 - 7v^3w - vw^2}$ The answer given is: ${8vw^2 - 7v^3w}$ which seems wrong because I did not think you could subtract 2 similar terms with different polynomials.
The answer given is wrong: $9vw^3$ and $-vw^2$ are not like terms, so they cannot be added to make a single term. Maybe the problem was supposed to be $9vw^2 - 7v^3w - vw^2$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An online calculator that can calculate a sum of binomial coefficients Is there any online calculator that can calculate $$\dfrac{\sum_{k=570}^{770} \binom{6,700}{k}\binom{3,300}{1,000-k}}{\binom{10,000}{1,000}} $$ for me? There are a few binomial coefficient calculators but for the sum in the numerator there are not usable.
Typing in the following input into WolframAlpha: Sum[Binomial[6700,k]Binomial[3300,1000-k]/Binomial[10000,1000],{k,570,770}] yields an exact result followed by the approximate decimal value, which to 50 digits is $$0.99999999999855767906391784086205133574169750988724.$$ The same input in Mathematica will yield the same result as above, although you could equivalently use the input Subtract @@ (CDF[HypergeometricDistribution[1000, 6700, 10000], #] & /@ {770, 569}) As Jack D'Aurizio also suggested, this is a hypergeometric probability, so approximation by the normal distribution is possible. The way to do this is to recall the mean of a hypergeometric distribution with PDF $$\Pr[X = k] = \frac{\binom{m}{k}\binom{N-m}{n-k}}{\binom{N}{n}}$$ is given by $$\operatorname{E}[X] = \frac{mn}{N} = \mu,$$ and the variance is $$\operatorname{Var}[X] = \frac{mn(N-m)(N-n)}{N^2(N-1)} = \sigma^2.$$ In your case, you have values $m = 6700$, $N = 10000$, $n = 1000$. Thus we can approximate, with continuity correction, $$\begin{align*} \Pr[570 \le X \le 770] &\approx \Pr\left[\frac{570-\mu-0.5}{\sigma} \le \frac{X - \mu}{\sigma} \le \frac{770-\mu+0.5}{\sigma}\right]\\ &\approx \Pr[-7.12408 \le Z \le 7.12408]. \end{align*}$$ To fifty decimal places, this is $$0.99999999999895220978266454480005261506906978189913.$$ This approximation is good to about $10^{-13}$.
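If you prefer to avoid a CAS entirely, Python's exact integer arithmetic also handles this directly (only the final conversion to `float` is approximate):

```python
from math import comb
from fractions import Fraction

numerator = sum(comb(6_700, k) * comb(3_300, 1_000 - k) for k in range(570, 771))
p = Fraction(numerator, comb(10_000, 1_000))
print(float(p))        # approximately 0.9999999999985577
print(float(1 - p))    # approximately 1.442e-12
```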
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\csc^n\frac{A}{2}+\csc^n\frac{B}{2}+\csc^n\frac{C}{2}$ has the minimum value $3.2^n$ Show that in a $\Delta ABC$, $\sin\frac{A}{2}\leq\frac{a}{b+c}$ Hence or otherwise show that $\csc^n\frac{A}{2}+\csc^n\frac{B}{2}+\csc^n\frac{C}{2}$ has the minimum value $3.2^n$ for all $n\geq1$. In this problem, I have proved the first part that $\sin\frac{A}{2}\leq\frac{a}{b+c}$ but second part I could not prove. I tried using the AM-GM inequality but could not succeed. Please help.
As a different approach, Lagrange multipliers work here...the constraint function $$A+B+C=\pi$$ has gradient $(1,1,1)$. Hence at an "internal" critical point the three partials of your function must be equal. Note that the boundary doesn't matter...the boundary will have one or more angles equal to $0$ and the function is clearly not minimal there. Set the partial in $A$ equal to the partial in $B$. After some simple algebra we get $$\frac {\cos(\frac A2)}{\sin^{n+1}(\frac A2)}\;=\;\frac {\cos(\frac B2)}{\sin^{n+1}(\frac B2)}$$ But, for $n\geq 1$ the function $\frac {\cos(x)}{\sin^{n+1}(x)}$ is strictly decreasing in $x$ (for $x\in (0,\frac {\pi}{2}]$) so we must have $A=B$. Similarly, all three angles are equal, and the minimum follows by taking all three equal to $\frac {\pi}{3}$ (where each term is $\csc^n\frac{\pi}{6}=2^n$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proof - Uniqueness part of unique factorization theorem The uniqueness part of the unique factorization theorem for integers says that given any integer $n$, if $n=p_1p_2 \ldots p_r=q_1q_2 \ldots q_s$ for some positive integers $r$ and $s$ and prime numbers $p_1 \leq p_2 \leq \cdots \leq p_r$ and $q_1 \leq q_2 \leq \cdots \leq q_s$, then $r=s$ and $p_i=q_i$ for all integers $i$ with $1 \leq i \leq r$. Fill in the details of the following sketch of a proof: Suppose that $n$ is an integer with two different prime factorizations: $n=p_1p_2 \ldots p_t =q_1q_2 \ldots q_u$. All the prime factors that appear on both sides can be cancelled (as many times as they appear on both sides) to arrive at the situation where $p_1p_2 \ldots p_r=q_1q_2 \ldots q_s$, $p_1 \leq p_2 \leq \cdots \leq p_r$, $q_1 \leq q_2 \leq \cdots \leq q_s$ , and $p_i \neq q_j$ for any integers $i$ and $j$. Then deduce a contradiction, and so the prime factorization of $n$ is unique except, possibly, for the order in which the prime factors are written. Please provide as much detail as possible. I'm very confused about this. I know I'll need Euclid's Lemma at some point in the contradiction, but I have no idea how to arrive there.
The OP can simply state in their proof that they 'arrived' at the following 'situation': $\tag 1 n = p_1p_2 \ldots p_r \text{ and } n=q_1q_2 \ldots q_s \text{ and } p_i \neq q_j \text { for any integers } i \text{ and } j$ In words, there exist an integer $n$ with two prime factorizations containing no common prime factors. We now demonstrate that being left in such a 'situation' is untenable by using the following 'tweak' of Euclid's lemma: If $p$ is a prime number dividing into $a_1 \dots a_z$ then there exist a subscript $k$ such that $p$ divides $a_k$. The integer $n$ in $\text{(1)}$ has a smallest prime number, call it $\gamma(n)$, that divides into it. By applying Euclid's lemma to $n = p_1p_2 \ldots p_r$, we must conclude that there exist a subscript $i$ with $\gamma(n) = p_i$. Similarly, there exist a subscript $j$ with $\gamma(n) = q_j$. But this contradicts $\text{(1)}$. The above proof is similar to Falko's, but since it has more detail and introduces the concept of $\gamma(n)$, I decided to post it. (I just upvoted Falko's answer).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Proving convergence of a series and then finding limit I need to show that $\sqrt{2}$, $\sqrt{2+\sqrt{2}}$, $\sqrt{2+\sqrt{2+\sqrt{2}}},\ldots$ converges and find the limit. I started by defining the sequence by $x_1=\sqrt{2}$ and then $x_{n+1}=\sqrt{2+x_n}$. Then I proved by induction that the sequence is increasing and that it is bounded. Then I use the Monotone Convergence Theorem to prove that it converges. Now, I claim that the limit of the sequence is $2$. So I need to show that for all $\epsilon>0$, there exists $N$ such that for all $n\geq N$ we have $|x_n-2| < \epsilon$. This is kind of where I'm stuck. I don't know how to proceed to prove the convergence to $2$.
Prove that this sequence is increasing and bounded above. Hence it converges to some $p\in \mathbb{R}$, and making the limit transition in $x_{n+1}=\sqrt{2+x_n}$ gives $p=\sqrt{2+p}$, i.e. $p^2-p-2=0$, so $p=2$ (the other root, $p=-1$, is impossible since every $x_n>0$).
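Numerically the convergence is easy to watch (a few lines of Python, purely illustrative):

```python
from math import sqrt

x = sqrt(2)
for _ in range(25):
    x = sqrt(2 + x)
print(x)   # approximately 2.0: the iterates approach the fixed point 2
```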
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
The Galois connection between topological closure and topological interior [Update: I changed the question so that $-$ is only applied to closed sets and $\circ$ is only applied to open sets.] Let $X$ be a topological space with open sets $\mathcal{O}\subseteq 2^X$ and closed sets $\mathcal{C}\subseteq 2^X$. Consider the pair of maps $-:\mathcal{O}\leftrightarrows\mathcal{C}:\circ$ where $-$ is the topological closure and $\circ$ is the topological interior. Under what conditions will it be true that for all $A\in\mathcal{O}$ and $B\in\mathcal{C}$ we have $$A\subseteq B^\circ \Longleftrightarrow A^-\subseteq B \,\,?$$ [Update: Darij showed that this condition holds for all topological spaces.] If this condition does hold then by general nonsense (the theory of abstract Galois connections), we obtain an isomorphism of lattices $$-:P\approx Q:\circ$$ where $P$ is the the lattice of sets $A\in\mathcal{O}$ such that $(A^-)^\circ=A$ and $Q$ is the lattice of sets $B\in\mathcal{C}$ such that $(B^\circ)^-=B$, where both $P$ and $Q$ are partially ordered by inclusion. The existence of this lattice isomorphism makes me wonder: is there a nice characterization of the elements of $P$ and $Q$? Certainly not every open set is in $P$. For example, if $X=\mathbb{R}$ with the usual topology then the set $(0,1)\cup (1,2)$ is open, but $$(((0,1)\cup(1,2))^-)^\circ = ([0,2])^\circ = (0,2) \supsetneq (0,1)\cup(1,2).$$ [Update: I found the answer. See below.]
Put $A=B=S$, where $S$ is any subset of $X$. Then $$ \overline S\subseteq S \iff S\subseteq\mathring S $$ hence $$ S \text{ is closed} \iff S \text{ is open} $$ Conversely, assume that every open set is closed (which implies that any closed set is open). If $A$ and $B$ are subsets of $X$ such that $\overline A ⊆ B$, then we also have $A ⊆ \overline A ⊆ \mathring B$, and if $A ⊆ \mathring B$, then $\overline A ⊆ \mathring B ⊆ B$. Hence such a space satisfies $$ \overline A ⊆ B \iff A \subseteq \mathring B $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
Log function solve for x The function is defined by $y=f(x)=3e^{{1\over3}x+1}$ Solve for $x$ in terms of $y$ My answer: $$x={\ln({y\over3})-1\over3}$$ Is this the correct way to go about this question? Update. Finding the inverse and range. Would the inverse of this function be: $$f^{-1}(x)=3(\ln({x\over3})-1)$$ and its domain $x>0$
The steps are right up to the last line: at the end, don't put the $3$ in the denominator, multiply by $3$. From $\frac{y}{3}=e^{\frac13 x+1}$ you get $\frac13 x+1=\ln\frac{y}{3}$, hence $x=3\left(\ln\frac{y}{3}-1\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1418989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is a mock theta function? We define a mock theta function as follows: A mock theta function is a function defined by a $q$-series convergent when $|q|<1$ for which we can calculate asymptotic formulae when $q$ tends to a rational point $e^{2\pi ir/s}$ of the unit circle of the same degree of precision as those furnished for the ordinary $\theta$-function by theory of linear transformation. I want to understand this definition. Accordingly, if we have a rational point on the unit circle what precisely does happen? Can someone explain.
Since 2008, the number theory community has settled on a somewhat more "scientific" definition of mock theta function. However, given the context in which you asked the question, I don't assume you want the technical definition of the holomorphic part of a weak Maass form (whatever that means; I just need to say this to keep the trolls from accusing me of handwaving). Just to gather what Ramanujan is thinking, look at THE prototypical theta function when $s=1$, $r=0$ and $q=e^{-t}$: $$ f(e^{-t}) = \sum_{n=-\infty}^{\infty } e^{-tn^2} = \sqrt{\frac{\pi}{t}}\sum_{n=-\infty}^{\infty } e^{-\frac{\pi^2 n^2}{t}} =\sqrt{\frac{\pi}{t}}\,f(e^{-\frac{\pi^2}{t}}) $$ where the second identity comes from Poisson summation. This formula is elegant in two ways. 1) We have an asymptotic formula $f(e^{-t})\approx \sqrt{\frac{\pi }{t}}$ whose error is very, very small: the next largest term in the transformed sum is the $n=\pm1$ term, which for, say, $t=0.1$ has size $2\sqrt{10\pi}\,e^{-10\pi^2}\approx 10^{-42}$, and for $t=0.01$ it is roughly $10^{-427}$. This is what he means by "asymptotic formulae" with "theta precision". 2) The formula is very compact and precise, relating $t\to \frac{\pi^2}{t}$. In fact, I will leave it to you to try, but you can get a formula for $q=e^{-t}e^{2\pi i \frac{r}{s}}$: you will always see $f(e^{-t+ 2\pi i \frac{r}{s}})$ having a really nice formula for estimating $f(q)$ when $t$ is small. If you are more careful you will see a pattern, a system for computing these estimates coming from fractional linear transformations $(az+b)/(cz+d)$ of the variable $z$ defined by $q=e^{\pi i z}$. This is the linear transformations part he is talking about. Now he lists several objects which behave like theta functions in the same sense we described, but they clearly aren't "$q$ to a square" of sorts, so we wouldn't call them theta functions. Hence they are mock theta functions. I will give you one example in Theorem 2.1, which you probably can see is very complex.
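A quick numerical check of the transformation formula above, for the skeptical (an `mpmath` sketch; `nsum` truncates the bilateral sum automatically):

```python
from mpmath import mp, nsum, exp, sqrt, pi, inf

mp.dps = 30
f = lambda t: nsum(lambda n: exp(-t * n**2), [-inf, inf])

t = mp.mpf('0.1')
print(f(t))                            # approximately 5.6049912...
print(sqrt(pi / t) * f(pi**2 / t))     # the same value, via t -> pi^2 / t
```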
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
If $\lambda=$ measure of a set and all $G_k$'s are open sets, then : $\lambda ( \cup_{k=1}^{\infty} G_k ) \le \sum _{k=1}^{\infty}\lambda ( G_k)$ I just started reading the book Lebesgue Integration on Euclidean Spaces by Frank jones, in which the author gives a result and it's proof as : the If $\lambda$ denotes the measure of a set and all $G_k$'s are open sets, then : $$\lambda ( \bigcup_{k=1}^{\infty} G_k ) \le \sum _{k=1}^{\infty}\lambda ( G_k)$$ A special polygon is any polygon which can be divided further into finite number of rectangles whose sides are parallel to the co-ordinate axes. Proof: Query: The red boxed area says that it's possible to define $P_k$ to be the union of all $I_j's$ such that $I_j \subset G_k$ and $I_j \nsubseteq G_1,G_2, \cdots G_{k-1}~~~~~~ \dots \dots (A)$ Suppose $I_1,I_2 \in G_3$ and no other $G_k:k \in \mathbb N$ contains $I_2$. Suppose $I_1, I_3 \in G_4$ and no other $G_m:m \in \mathbb N$ contains $I_3$. Hence, we can't ignore $G_3$ nor $G_4$ because even though they contain duplicate of $I_1$ , they also contain $I_2$ and $I_3$ respectively which no other $G_k$ contains. Keeping in mind requirement $(A)$, is this not a contradiction with the requirement (A)? Could someone be please kind enough to tell me where I might be going wrong or if the proof might have a glitch?Thank you for reading!
I don't see a problem. In the example you give, assuming $P = I_1 \cup I_2 \cup I_3$, we would take $P_1 = P_2 = \emptyset$, $P_3 = I_1 \cup I_2$, and $P_4 = I_3$, and then $P_5 = P_6 = \dots = \emptyset$. Maybe you are misinterpreting the sentence "Since each $I_j$ is contained in one of $G_1, G_2, \dots$." What Jones means is "Each $I_j$ is contained in at least one of $G_1, G_2, \dots$" - there is no problem if some $I_j$ is contained in several of the $G_k$, as long as it doesn't get missed altogether. This is just to explain the assertion that $P = \bigcup_k P_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Checking whether the result is positive definite or positive semi-definite with two methods Given, $$A = \begin{bmatrix} 1 &1 & 1\\ 1&1 & 1\\ 1& 1& 1 \end{bmatrix}.$$ I want to see if the matrix $A$ positive (negative) (semi-) definite. Using Method 1: Define the quadratic form as $Q(x)=x'Ax$. Let $x \in \mathbb{R}^{3}$, with $x \neq 0$. So, $Q(x)=x'Ax = \begin{bmatrix} x_{1} &x_{2} &x_{3} \end{bmatrix} \begin{bmatrix} 1 &1 & 1\\ 1&1 & 1\\ 1& 1& 1 \end{bmatrix} \begin{bmatrix} x_{1}\\x_{2} \\x_{3} \end{bmatrix}$. After multiplying out the matrices I am left with $$Q(x) = x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+2(x_{1}x_{2} + x_{1}x_{3}+x_{2}x_{3}).$$ So by Method 1, I get that $A$ is positive definite. Using Method 2: I calculate all principle minors $A$ and if they're all positive, then the matrix is positive definite (learned recently from @hermes). So $|A_{1}| =1> 0$, $|A_{2}| = 0$, and $|A_{3}| = |A| = 0$. So $A$ is positive semi-definite. Which method am I making a mistake?
Method 1 is carried out incorrectly: here $Q(x)=x'Ax=(x_1+x_2+x_3)^2$, which is always $\ge 0$ but equals $0$ for, e.g., $x=(1,-1,0)\neq 0$, so the quadratic form only shows that $A$ is positive semi-definite, not positive definite. Moreover $\operatorname{rank}(A)=1$. Since $A$ is not of full rank, it cannot be positive definite.
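A quick numerical illustration with NumPy (eigenvalues shown up to rounding):

```python
import numpy as np

A = np.ones((3, 3))
print(np.linalg.eigvalsh(A))   # approximately [0, 0, 3]: nonnegative, but not all positive
x = np.array([1.0, -1.0, 0.0])
print(x @ A @ x)               # 0.0 even though x != 0, so A is not positive definite
```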
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
can real line be written as a disjoint unions of set with cardinality 5 Can the real line be written as a disjoint union of sets with cardinality 5? I tried using to write it as a set of sets.. but I didn't end up to a good position.
If we allow the Axiom of Choice, then one can show that the cardinality of $\mathbb R \times 5$ is equal to the cardinality of $\mathbb R$. Now, $card(\mathbb R \times 5)$ is the same as $card(\sum_{i \in \mathbb R}5)$, the disjoint sum of $\mathbb R$-many copies of a $5$-element set. This gives a bijection between $\mathbb R$ and a union of continuum-many pairwise disjoint sets of cardinality $5$. Pulling that partition back through the bijection, we obtain an infinite collection of pairwise disjoint sets of cardinality $5$ whose union is $\mathbb R$. References: Naive Set Theory, by P. Halmos.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Subgroups of generalized dihedral groups A generalized dihedral group, $D(H) := H \rtimes C_2$, is the semi-direct product of an abelian group $H$ with a cyclic group of order $2$, where $C_2$ acts on $H$ by inverting elements. I know that the total number of subgroups of $D(H)$ is the number of subgroups of $H$ plus the sum of subgroup indices of $H$ ($\sum_{L \leq H}[H : L]$). But I'm not interested in the actual subgroups, I only need the structures of subgroups of $D(H)$ up to isomorphism. Naturally all the subgroups of $H$ are (normal) subgroups of $D(H)$ and for each $L \leq H$ also $D(L) \leq D(H)$. But is that all or can there be other structures as well? I read about the subgroups of semi-direct products in general, but the situation seemed quite complicated. Would it be easier to find just the structures?
Let $K$ be a subgroup of $D(H)$. If $K$ is not contained in $H$, then $K$ contains some $g \not\in H$. Then $g$ is an involution and $D(H) = H \rtimes \langle g \rangle$. Hence $K = (H \cap K) \rtimes \langle g \rangle$ and $K \cong D(H \cap K)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What does it mean for a polynomial to be positive semi-definite? I'm reading proofs about Cauchy-Schwarz inequality. Some of them use the argument of a polynomial being positive semi-definite as the starting argument. That is, they argue that $$\forall x,y \in\mathbb{R^n}, a\in\mathbb{R}\\\langle {ax+y,ax+y} \rangle$$ Is a positive semi-definite polynomial (when expanded). How is this seen? There's one proof here if you can see it (page 2):
It means that the expression $$\langle ax+y, ax+y\rangle$$ is always greater than or equal to zero. This is true because the inequality $\langle x,x\rangle \geq 0$ is true for any element $x$ in your vector space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof by induction for "sum-of" Prove that for all $n \ge 1$: $$\sum_{k=1}^n \frac{1}{k(k+1)} = \frac{n}{n+1}$$ What I have done currently: Proved that theorem holds for the base case where n=1. Then: Assume that $P(n)$ is true. Now to prove that $P(n+1)$ is true: $$\sum_{k=1}^{n+1} \frac{1}{k(k+1)} = \frac{n}{n+1} + n+1$$ So: $$\sum_{k=1}^{n+1} \frac{1}{k(k+1)} = \frac{n+1}{n+2}$$ However, how do I proceed from here?
Since the other answer doesn't use induction, here's an induction proof. Base case: We use the smallest value for $n$ and check that the formula works. The smallest value is $n=1$; in this case, the sum on the LHS is $$ \sum_{k=1}^1\frac{1}{k(k+1)}=\frac{1}{1(1+1)}=\frac{1}{2}. $$ Also, the RHS is $$ \frac{1}{1+1}=\frac{1}{2}. $$ Since the LHS and the RHS are equal, the claim holds when $n=1$. Inductive case: We assume that the claim is true for $n=m$ and prove the claim is true for $n=m+1$. Therefore, we have assumed that $$ \sum_{k=1}^m\frac{1}{k(k+1)}=\frac{m}{m+1} $$ and we can use this fact as if it were true. We want to prove that the statement is true when $n=m+1$, in other words, that $$ \sum_{k=1}^{m+1}\frac{1}{k(k+1)}=\frac{m+1}{m+2}. $$ We can't use this statement because it's the next one (the one we want to prove). We can, however, use the inductive hypothesis to build up to the next step. Since $$ \sum_{k=1}^m\frac{1}{k(k+1)}=\frac{m}{m+1}, $$ we observe that the sum on the LHS is almost the sum that we need for the next case. What's missing from the sum on the LHS is the $(m+1)$-st term. In other words, we are missing the term we get by plugging in $m+1$ for $k$, which is $\frac{1}{(m+1)(m+2)}$. WARNING: The OP added $(m+1)$ to both sides and not the $(m+1)$-st term. If we add this term to both sides we get $$ \frac{1}{(m+1)(m+2)}+\sum_{k=1}^m\frac{1}{k(k+1)}=\frac{m}{m+1}+\frac{1}{(m+1)(m+2)}. $$ The LHS is just the sum $$ \sum_{k=1}^{m+1}\frac{1}{k(k+1)}$$ and the RHS simplifies as $$ \frac{m}{m+1}+\frac{1}{(m+1)(m+2)}=\frac{m(m+2)+1}{(m+1)(m+2)}=\frac{(m+1)^2}{(m+1)(m+2)}=\frac{m+1}{m+2}. $$ This proves the claim when $n=m+1$. Then, by the PMI, the claim is true for all integers $n\geq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419749", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Lines and planes - general concepts I've come across a book that has this general questions about lines and planes. I can't agree with some of the answers it presents, for the reasons that I'll state below: True or False: * *Three distinct points form a plane - BOOK ANSWER: True - MY ANSWER: False, they cannot belong to the same line *Two intersecting lines form a plane - BOOK ANSWER: True - MY ANSWER: False, they can be parallel and coincident lines. *Two lines that don't belong to a same plane are skew - BOOK ANSWER: True - MY ANSWER: True *If three lines are parallel, there is a plane that contains them - BOOK ANSWER: True - MY ANSWER: False, they can be parallel and coincident lines. *If three distinct lines are intersecting two by two, then they form only one plane - For this last one there's no answer and I'm not sure about the conclusion. If you could help me, I appreciate it. Thank you.
(1) Three distinct points always lie on some plane, whether or not they are collinear. If they are not collinear (draw a triangle on a sheet of paper and you will see what I mean) the plane is unique; if they are collinear, infinitely many planes pass through them, but in either case they do lie on a plane. (2) Coincident lines are not called 'intersecting' lines; intersecting lines meet at exactly one point, and two such lines always determine a plane. (4) You are right: in general three parallel lines need not lie in a common plane (although if they were all coincident they would still belong to some plane). (5) This one needs care. If the three pairwise intersection points are distinct, then the plane containing two of the lines contains two points of the third line, hence the whole third line, so all three lines are coplanar. But if the three lines all pass through a single common point (think of the $x$-, $y$- and $z$-axes), they intersect two by two and yet do not lie in one plane, so the statement is false as stated.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Riemann integral on trigonometric functions I have to calculate the Riemann integral of the function $g:[0;\pi/4]\rightarrow\mathbb{R}$ (on the interval $[0;\pi/4]$) given as $g(x)=\frac{\tan(x)}{\cos(x)^2-4}$. The function $g$ is continuous on the interval $[0;\pi/4]$, so it is enough to calculate $\int_0^{\frac{\pi}{4}}{\frac{\tan(x)}{\cos(x)^2-4}}\,dx$. How do I do that?
Substitute $u = \cos x$; then $du = -\sin x\, dx$. Writing $\tan x = \frac{\sin x}{\cos x}$, you are left with $1$ (up to sign) in the numerator and a polynomial in the denominator, which factors as $u(u-2)(u+2)$. Then decompose it into partial fractions. The rest is simple integration, and don't forget to change the integration limits.
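Carrying the hint through explicitly (a worked continuation, worth re-checking): with $u=\cos x$ the limits $x=0$ and $x=\pi/4$ become $u=1$ and $u=\frac{\sqrt2}{2}$, and $$\int_0^{\pi/4}\frac{\tan x}{\cos^2 x-4}\,dx=\int_{\frac{\sqrt2}{2}}^{1}\frac{du}{u(u-2)(u+2)},\qquad \frac{1}{u(u-2)(u+2)}=-\frac{1}{4u}+\frac{1}{8(u-2)}+\frac{1}{8(u+2)}.$$ An antiderivative on $(0,1]$ is $-\frac14\ln u+\frac18\ln\left(4-u^2\right)$, and evaluating between the limits gives $$\int_0^{\pi/4}\frac{\tan x}{\cos^2 x-4}\,dx=\frac18\ln 3-\frac18\ln 7=\frac18\ln\frac37\approx -0.106,$$ which is negative, as it should be, since the integrand is $\le 0$ on $[0,\pi/4]$.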
{ "language": "en", "url": "https://math.stackexchange.com/questions/1419956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Independence between conditional expectations Suppose $(\Omega, F, P)$ is a sample space, $X$ and $Y$ random variables, and $N$ and $M$ sub sigma algebras of $F$. * *I know that $E(X\mid N)$ and $E(X\mid\{\emptyset, \Omega\})$ are independent. Generally, if $N$ and $M$ are independent, will $E(X\mid N)$ and $E(X\mid M)$ be independent? What are some conditions on $N$, $M$ and $X$ that can make $E(X\mid N)$ and $E(X\mid M)$ independent? *If $X$ and $Y$ are independent, are $E(X\mid N)$ and $E(Y\mid N)$ independent? What are some conditions on $N$, $X$ and $Y$ that can make $E(X\mid N)$ and $E(Y\mid N)$ independent? Thanks.
To answer your second question: Let $X, Y$ be independent random variables that assume the values $0$ and $1$ each with probability $\frac{1}{2}$. Taking $N = \sigma(X + Y)$ you get, by symmetry, $$E[X \mid X + Y] = E[Y \mid X + Y] = \frac{X + Y}{2}.$$ These two conditional expectations are the same non-constant random variable, so they are not independent, even though $X$ and $Y$ are.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1420093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nilpotent ideal and ring homomorphism In "Problems and Solutions in Mathematics", 2nd Edition, exercise 1308 Problem statement Let $I$ be a nilpotent ideal in a ring $R$, let $M$ and $N$ be $R$-modules, and let \begin{equation} f : M \rightarrow N \end{equation} be an $R$-homomorphism. Show that if the induced map \begin{equation} \overline{f} : M / IM \rightarrow N / IN \end{equation} is surjective, then $f$ is surjective. Beginning of solution Since $\overline{f}$ is surjective, $f(M) + IN = N$. It follows that \begin{equation} I \cdot (N / f(M)) = (IN + f(M)) / f(M) = N / f(M) \end{equation} My questions * *Why does $\overline{f}$ being surjective imply $f(M) + IN = N$ ? *I understand that $IM$ is a subgroup of $M$ created by applying the scalar product operation of every element of $I$ with every element of $M$. What is the meaning of $I \cdot (N / f(M))$, since $N / f(M)$ is a group of cosets ? *How do you derive the two equalities $I \cdot (N / f(M)) = (IN + f(M)) / f(M) = N / f(M)$ ? From the first equation you immediately get $I \cdot (N / f(M)) = I \cdot ((f(M) + IN) / f(M))$, but how do you continue from there ?
* *A priori the statement would be problematic with $IM$ in place of $IN$, since $IM$ is not even contained in $N$; the correct relation is $f(M)+IN=N$, and it follows directly from surjectivity of $\overline{f}$: for every $n\in N$ there is $m\in M$ with $\overline{f}(m+IM)=n+IN$, i.e. $n-f(m)\in IN$, so $n\in f(M)+IN$. *It is what is written just afterwards: $\;(IN+f(M))/f(M)$. *It is derived from the correct relation in question $1$. Once you have $N/f(M)=I\cdot N/f(M)$, you deduce repeatedly: $$N/f(M)=I\cdot N/f(M)=I^2\cdot N/f(M)=\dots=I^k\cdot N/f(M)=\dots$$ If $r$ is the index of nilpotency of $I$ you thus have $$N/f(M)=I^r\cdot N/f(M)=0,\enspace\text{whence}\enspace N=f(M). $$ Note: If $N$ is a finitely generated $R$-module, it is a simple consequence of Nakayama's lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1420224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What is a surjective function? I am a 9th grader self-studying about set theory and functions. I understood most basic concepts, but I didn't understand what is a surjective function. I have understood what is an injective function, and if I know what is a surjective function, I think I could understand what is a bijective function (this is my main goal). In formal terms a function $f$ from $A$ to $B$ is said to be surjective if for all $y$ in $B$, there exists $x$ in $A$ such that $f(x)=y$. I don't understand this clearly because i'm still new to these notations. Can you explain this in intuitive way? And for example can you give me a surjective function that is not injective, and inversely, and neither one of the two?
Basically, if a function $f: A \to B$ is surjective, it means that every element of the set $B$ can be "obtained" through the function $f$. Formally, like you said, it means that for every $b \in B$, there exists some $a \in A$ such that $f(a) = b$. Phrased differently, $f$ being surjective means that the range of $f$ includes all of the set $B$. For an example, take $A = \{1,2,3,4\}$ and $B = \{x,y,z\}$. Define $f : A \to B$ by $$f(1) = x$$ $$f(2) = y$$ $$f(3) = z$$ $$f(4) = z$$ Then $f$ is surjective because each element of $B$ has a corresponding element of $A$ which can be mapped onto it; but $f$ is not injective since both 3 and 4 are mapped into the same element of $B$, namely $z$. If you just fool around with simple functions and sets like these, you can easily get some nice examples of the other two cases you wanted.
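If experimenting helps, here is a small Python sketch (an illustration only; the choice to store $f$ as a dictionary is mine) that tests the example map above for surjectivity and injectivity:

```python
# The example map f : A -> B from above, stored as a dictionary.
A = {1, 2, 3, 4}
B = {"x", "y", "z"}
f = {1: "x", 2: "y", 3: "z", 4: "z"}

# Surjective: every element of B is the image of at least one element of A.
is_surjective = all(any(f[a] == b for a in A) for b in B)

# Injective: no two distinct elements of A share the same image.
is_injective = len(set(f.values())) == len(A)

print(is_surjective, is_injective)  # True False -> onto, but not one-to-one
```

Changing the dictionary lets you hunt for the other two kinds of examples the question asks about.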
{ "language": "en", "url": "https://math.stackexchange.com/questions/1420284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 4 }
Multiplying Single Digit Numbers to Get Product >1000 This is yet another Alice and Bob problem. Alice and Bob are playing a game on a blackboard. Alice starts by writing the number $1$. Then, in alternating turns (starting with Bob), each player multiplies the current number by any number from $2$ to $9$ (inclusive) and writes the new number on the board. The first player to write a number larger than $1000$ wins. A sample game might be (each number is preceded by "A" or "B" to indicate who wrote it): {A1, B3, A12, B60, A420, B1260; Bob wins.} Which player should win, and why? I am assuming this problem uses optimal strategy on both sides, but as there are several ways to win, I'm not sure how to prove who should win. All I do know, is that, if the number after your operation exceeds 1000/18, but isn't greater than 1000/9, it's a guaranteed win for you. Can someone provide me with a solution? Thanks I also apologize for the lack of a proper tag, as I didn't know a phrase that would correctly encompass this particular scenario/game.
Bob can pick a multiplier between 2 and 9. Multiplying by 7, 8 or 9 doesn't work: as Ross Millikan pointed out, writing any number above $56/9\approx 6.2$ (and below 56) loses, because the opponent can then move into the range 56 to 111. From Bob's perspective the winning numbers are 4, 5 and 6: if he writes one of these, Alice must write something between 8 and 54, so Bob can then write a number between 56 and 111. Whatever Alice then writes, Bob can multiply it by 9, exceed 1000 and win, so 4, 5 and 6 are winning first moves. Writing 2 or 3 is not an option for Bob if Alice knows what she is doing, as she can then make 4, 5 or 6 on her next turn and use the same reasoning against Bob.
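The case analysis is small enough to brute-force; here is a short sketch (variable names and structure are my own) that labels each position by whether the player about to move can force a win:

```python
from functools import lru_cache

LIMIT = 1000  # the first player to write a number larger than 1000 wins

@lru_cache(maxsize=None)
def mover_wins(n):
    """True if the player about to multiply the current number n can force a win."""
    for k in range(2, 10):
        if n * k > LIMIT:          # this move wins immediately
            return True
        if not mover_wins(n * k):  # move into a losing position for the opponent
            return True
    return False

# Bob moves first from 1, so Bob wins iff the mover wins at n = 1.
print("Bob wins" if mover_wins(1) else "Alice wins")
# Bob's winning first moves from 1:
print([k for k in range(2, 10) if not mover_wins(k)])  # expected: [4, 5, 6]
```

It reports that Bob wins and that his winning first moves are exactly 4, 5 and 6, matching the analysis above.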
{ "language": "en", "url": "https://math.stackexchange.com/questions/1420383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Show that $\mathbb{Q}(\sqrt{2})$ is the smallest subfield of $\mathbb{C}$ that contains $\sqrt{2}$ This was an assertion made in our textbook but I have no idea how to show that either statement is true. Also would like to show that that $\mathbb{Q}(\sqrt{2})$ is strictly larger than $\mathbb{Q}$, which was the second part of the assertion.
$\mathbb{Q}(\sqrt{2})$ is by definition the smallest subfield of $\mathbb{C}$ that contains both $\mathbb{Q}$ and $\sqrt{2}$. The surprising thing (perhaps) is that any element of $\mathbb{Q}(\sqrt{2})$ can be written in the form $a + b\sqrt{2}$, with $a,b \in \mathbb{Q}$. To prove this, you have to show the following: * *That the set of all elements of the form $a + b\sqrt{2}$, with $a,b \in \mathbb{Q}$ is a field (in other words, that it is closed under addition and multiplication, contains both additive and multiplicative identities, contains additive inverses, and contains multiplicative inverses for all nonzero elements); *That it contains $\mathbb{Q}$ *That it contains $\sqrt{2}$
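The only step in item 1 that is not completely routine is the existence of multiplicative inverses; here is a sketch of that step, which also settles the second part of the question. For $a + b\sqrt 2 \neq 0$ with $a,b \in \mathbb{Q}$, $$\frac{1}{a+b\sqrt 2} = \frac{a - b\sqrt 2}{(a+b\sqrt 2)(a-b\sqrt 2)} = \frac{a}{a^2-2b^2} - \frac{b}{a^2-2b^2}\sqrt 2,$$ and the denominator $a^2-2b^2$ is nonzero: if it vanished with $b \neq 0$ we would get $\sqrt 2 = |a/b| \in \mathbb{Q}$, contradicting the irrationality of $\sqrt 2$, while $b = 0$ would force $a = 0$. The same irrationality shows $\sqrt 2 \in \mathbb{Q}(\sqrt 2) \setminus \mathbb{Q}$, so $\mathbb{Q}(\sqrt 2)$ is strictly larger than $\mathbb{Q}$.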
{ "language": "en", "url": "https://math.stackexchange.com/questions/1420471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Create a rectangle with coordinates (latitude and longitude) I have two points on a map, I want to create a rectangle where the two points are the line that intersect the rectangle. I understand that there is no true rectangle on a sphere, but the areas I am dealing with are small, the length of the rectangles are no more than a few km and the heights a few hundred meters. So if the calculation is approximate that's fine. Any help appreciated! Thanks, Philip
The simplest case, where the two points share the same latitude so the segment is horizontal: let the two points be $(x_1,y_1)$ and $(x_2,y_1)$ and let the thickness of the rectangle be $t$. The corners of the rectangle are then $(x_1,y_1+t/2),(x_1,y_1-t/2),(x_2,y_1-t/2),(x_2,y_1+t/2)$.
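For two arbitrary points a few km apart, a local flat-earth approximation is usually good enough. Below is a rough Python sketch (the function name, the metres-per-degree constant and the equirectangular approximation are my own assumptions, not part of the answer above) that converts the offsets to metres, builds the perpendicular direction, and returns the four corners as latitude/longitude pairs:

```python
import math

def rectangle_corners(lat1, lon1, lat2, lon2, height_m):
    """Corners of a rectangle whose centerline is the segment P1-P2 and whose
    total width (perpendicular to that segment) is height_m metres.
    Uses a local equirectangular approximation, reasonable for a few km.
    Assumes the two points are distinct."""
    # approximate metres per degree at the mean latitude
    lat0 = math.radians((lat1 + lat2) / 2.0)
    m_per_deg_lat = 111320.0
    m_per_deg_lon = 111320.0 * math.cos(lat0)

    # work in local metres with P1 as the origin (x east, y north)
    dx = (lon2 - lon1) * m_per_deg_lon
    dy = (lat2 - lat1) * m_per_deg_lat
    length = math.hypot(dx, dy)

    # perpendicular unit vector scaled to half the height
    nx = -dy / length * height_m / 2.0
    ny = dx / length * height_m / 2.0

    def to_lat_lon(x, y):
        return (lat1 + y / m_per_deg_lat, lon1 + x / m_per_deg_lon)

    return [to_lat_lon(nx, ny),
            to_lat_lon(-nx, -ny),
            to_lat_lon(dx - nx, dy - ny),
            to_lat_lon(dx + nx, dy + ny)]

# example: two points roughly 1 km apart, rectangle 200 m high
print(rectangle_corners(52.0, 13.0, 52.0, 13.0146, 200.0))
```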
{ "language": "en", "url": "https://math.stackexchange.com/questions/1421538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving Functional Equations by Limits ... Let $f(x)$ be a continuous function satisfying the equation $f(2x) - f(x) = x$. Given $f(0)=1$, find $f(3)$. My teacher solves this as follows: $$f(x) - f(x/2) = x/2$$ $$f(x/2) - f(x/4) = x/4$$ . . . $$f(x/2^{n-1}) - f(x/2^{n}) = x/ 2^{n}$$ Add them up: $$f(x) - f(x/2^{n}) = \frac{x}{2} + \frac{x}{4} + \cdots + \frac{x}{2^{n}},$$ and let $n\rightarrow \infty$; the right-hand side tends to $\frac{x/2}{1-1/2}=x$, so $f(x) - f(0) = x$. Thus $f(x) = x+1$, and $f(3)= 4$. However, my problem is that I didn't find this intuitive; I didn't understand how one would get such an idea. So is there an alternate way to go about this problem?
I would say: write $f(x) = x+\mu(x)$, i.e. set $\mu(x) := f(x) - x$, so that $\mu$ is continuous on $\mathbb R$ and $\mu(0)=f(0)=1$. Then $f(2x)-f(x)=x\Rightarrow 2x + \mu(2x) - x - \mu(x)=x\Rightarrow \mu(2x)=\mu(x)$ for every $x$. Iterating and using continuity at $0$, $\mu(x)=\mu(x/2)=\dots=\mu(x/2^n)\to\mu(0)$, so $\mu(x) \equiv$ const $= \mu(0) = 1$ $\Rightarrow f(x)=x+1\Rightarrow f(3)=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1421644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A program to visualize Linear Algebra? I am asking here because I believe you may know of a good 3D visualization program for seeing what eigenvectors, subspaces, row spaces and column spaces really are, and for visualizing the results of ordinary matrix multiplication. These things are easy to calculate, but hard to visualize, I think.
It depends how you want to go about it. By this I mean that you need to decide how you want to visualize things. For eigenvectors, for example, a good way to visualize them is through 2D or 3D orthogonal transformations; a very cool application with great visual implications is principal component analysis, for example. If you are familiar with programming I would recommend matplotlib: it's Python, the syntax is very easy, and, as I said, you can visualize things such as linear transformations acting on 3D objects in very nice ways. Additionally, Wolfram Alpha gives you little matrix heat-map diagrams and graphs for adjacency matrices, if you are after something like that.
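As a concrete starting point, here is a minimal matplotlib sketch (my own illustration; the matrix and plotting choices are arbitrary) showing how a symmetric $2\times 2$ matrix deforms the unit circle, and that the eigenvectors are exactly the directions that only get stretched:

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])          # symmetric, so real eigenvalues and eigenvectors

theta = np.linspace(0, 2 * np.pi, 200)
circle = np.vstack([np.cos(theta), np.sin(theta)])   # points on the unit circle
image = A @ circle                                    # their images under A

vals, vecs = np.linalg.eigh(A)       # eigh because A is symmetric

plt.plot(circle[0], circle[1], label="unit circle")
plt.plot(image[0], image[1], label="image under A")
for lam, v in zip(vals, vecs.T):
    # each eigenvector is only stretched by its eigenvalue lam
    plt.plot([0, lam * v[0]], [0, lam * v[1]], linewidth=3)
plt.gca().set_aspect("equal")
plt.legend()
plt.show()
```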
{ "language": "en", "url": "https://math.stackexchange.com/questions/1421743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Linear Independence for functions defined by integration I came across this problem while doing some work. I've been unable to make any progress on it. Any suggestions would be greatly appreciated. Suppose that the strictly positive and continuous functions $$f_i(x,y) >0, \quad i=1,\dots,n$$ are linearly independent on $(x,y) \in [0,1]^2$. Let $g_i$ be defined by $$ g_i(x) = \int_{y\in [0,1] } f_i(x,y) d y, \quad i=1,\dots,n $$ where no $g_i$ is a positive multiple of another (is there a better way to say this? say $g_i(x) \neq c\, g_j(x)$ for every $c >0$, all distinct $i, j \in \left\{1,\ldots,n\right\}$ and $x \in(0,1)$). Is the set of functions $g_1,\ldots,g_n$ also linearly independent for $x \in [0,1]$?
No, not in general. For $n$ a natural number greater than or equal to $3$, let $$f_i(x,y) = \left\{ \begin{array}{l l} (i+1)(xy)^i + (i+1)y^i & \text{for $i \in \{1,...,n-1\}$}\\ \sum_{j=1}^{n-1} (j+1)(xy)^j + (n-1)& \text{for $i=n$}\end{array}\right.$$ Then the functions $f_1,...,f_n$ are linearly independent. Let us see why this is so. For any linear combination we have: $$\sum_{i=1}^n a_i f_i(x,y) = \sum_{i=1}^{n-1}(a_i+a_n)(i+1)(xy)^i + \sum_{i=1}^{n-1}a_i(i+1)y^i +a_n(n-1)$$ and since a polynomial is zero if and only if all its coefficients are zero we see that the linear combination is zero if and only if $a_1=a_2=..=a_n=0$. However the functions $$g_i(x) = \left\{\begin{array}{l l} x^i+1 & \text{for $i\in\{1,..,n-1\}$}\\ \sum_{j=1}^{n-1}x^j + (n-1) & \text{for $i=n$}\end{array}\right.$$ are linearly dependent since $$\sum_{i=1}^{n-1} g_i(x) = g_n(x).$$ (As written these $f_i$ are only non-negative, since they vanish at $y=0$; if strict positivity is insisted on, add the constant $1$ to each $f_i$ with $i<n$ and the constant $n-1$ to $f_n$: the same comparison of coefficients still gives independence of the $f_i$, and the relation $\sum_{i=1}^{n-1}g_i=g_n$ still holds, each side just gaining the constant $n-1$.) One can show more: any $n-1$ of the $g_i$'s are linearly independent (which is much stronger than your requirement that none of them are scalar multiples of each other). From one of your comments "I believe the solution relies on the fact that the mapping from $f_i$ to $g_i$ is an isomorphism" it seems that you perhaps wanted to ask a different question (since definite integration is far from injective). Perhaps you wanted to ask if $f_i:[0,1]\times [0,1]\to \mathbb{R}$ and $g_i : [0,1]\times [0,1] \to \mathbb{R}$ for $i \in \{1,..,n\}$ are functions such that the partial derivative of $g_i$ with respect to $y$ is $f_i$, then $f_1,..,f_n$ being linearly independent implies $g_1,..,g_n$ are linearly independent. This is indeed so since $\sum_{i=1}^n a_ig_i(x,y) = 0$ implies (by taking the partial derivative) that $\sum_{i=1}^{n} a_i f_i(x,y)=0$ which then implies that $a_1=a_2=..=a_n=0$.
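A quick computer check of the $n=3$ case (an illustration using SymPy; the variable names are my own):

```python
import sympy as sp

x, y = sp.symbols("x y")

# the counterexample above, specialized to n = 3
f1 = 2*(x*y) + 2*y
f2 = 3*(x*y)**2 + 3*y**2
f3 = 2*(x*y) + 3*(x*y)**2 + 2   # sum_{j=1}^{2} (j+1)(xy)^j + (n-1)

g = [sp.integrate(f, (y, 0, 1)) for f in (f1, f2, f3)]
print(g)                                   # [x + 1, x**2 + 1, x**2 + x + 2]
print(sp.simplify(g[0] + g[1] - g[2]))     # 0, i.e. g1 + g2 = g3
```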
{ "language": "en", "url": "https://math.stackexchange.com/questions/1421844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Derivative of integral in interval Let $$F(x)=\int_{2}^{x^3}\frac{dt}{\ln t}$$ and $x$ is in $(2,3)$. Find $F'(x)$. Can somebody give me idea how to do this? Thank you
Recall that: $$F(u) = \int_a^u f(t)\ dt\implies \frac{d}{du}F(u) = f(u)$$ We are now interested in $$\frac{d}{dx}F(u(x)) = \frac{d}{dx}\int_a^{u(x)}f(t)\ dt = \left(\frac{d}{du}F(u(x))\right)\frac{du}{dx} = f(u)\frac{du}{dx} = \dots$$
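Carrying the hint through for the $F$ in the question (the last step is deliberately left as dots above, so here is the completion): with $u(x)=x^3$, $$F'(x) = \frac{1}{\ln\left(x^{3}\right)}\cdot 3x^{2} = \frac{3x^{2}}{3\ln x} = \frac{x^{2}}{\ln x}, \qquad x \in (2,3).$$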
{ "language": "en", "url": "https://math.stackexchange.com/questions/1421947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to prove what element in $\mathbb{Z}_n$ you get when the elements of $\mathbb{Z}_n$ are summed? Based on trial and error I found that when $n$ is odd, the sum of the elements of $\mathbb{Z}_n$ is zero in $\mathbb{Z}_n$. When $n$ is even, the sum of the elements of $\mathbb{Z}_n$ is $n/2$ in $\mathbb{Z}_n$. Here is what I have so far for a proof: Let k be an integer. Then k in $\mathbb{Z}_n$=k-$nm$, where $m$ is an integer such that $nm\leq$k and for any integer $t\ne$m, $nt$>k or $nt<nm$. The sum of the elements of $\mathbb{Z}_n$=$\frac{n-1}{2}n$. So the sum of the elements of $\mathbb{Z}_n$ in $\mathbb{Z}_n$ is $\frac{n-1}{2}n$-$nm$ which can be simplified to $\frac{n-1-2m}{2}n$. I am not sure how to relate that to zero or n/2.
Pair each $x\in\mathbb{Z}_n$ with $n-x$. Their sum is $n\equiv 0\pmod{n}$. If $n$ is odd, each nonzero $x$ pairs with $n-x\ne x$ (and $0$ pairs with itself, contributing nothing), so the total is $0$. If $n$ is even, every element other than $0$ and $\frac{n}{2}$ pairs with a different element, $0$ contributes nothing, and $\frac{n}{2}$ pairs with itself and contributes $\frac{n}{2}$ to the total. The result follows.
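A quick numerical sanity check (a small illustration, not part of the argument):

```python
def sum_mod(n):
    # sum of 0, 1, ..., n-1 reduced mod n
    return sum(range(n)) % n

print([(n, sum_mod(n)) for n in range(2, 11)])
# odd n give 0, even n give n // 2
```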
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergent Operator, weakly convergent sequence => weakly convergent? Suppose we have a Hilbert space $X$, a weakly convergent sequence $u_k\rightharpoonup u$ and a convergent operator $T_k \rightarrow T$ in the norm of $\mathcal{L}(X)$ (bounded, linear operators). Is the assertion $T_k u_k \rightharpoonup Tu$ correct? Thanks in advance!
As noted by @Razieh Noori, if $f\in X^*$, then $f\circ T\in X^*$. We may assume $f\neq 0$ (the case $f=0$ is trivial). Since $u_k\rightharpoonup u$, the sequence $(u_k)$ is bounded, so there exists $M>0$ with $\|u_k\|\leq M$ for all $k$. Also $T_k\rightarrow T$ in $\mathcal L(X)$, so given $\epsilon>0$ there is $K_1$ such that $\|T_k-T\|_{op}<\frac{\epsilon}{2 M \|f\|_*}$ for all $k>K_1$. Moreover, since $f\circ T\in X^*$ and $u_k\rightharpoonup u$, there is $K_2$ such that $|f(Tu_k)-f(Tu)|<\frac{\epsilon}{2}$ for all $k>K_2$. Then, for $k>\max\{K_1,K_2\}$, $$|f(T_ku_k)-f(Tu)|\leq |f(T_k u_k)-f(T u_k)|+|f(Tu_k)-f(Tu)| \leq \|f\|_*\|T_k-T\|_{op}\|u_k\|+\frac{\epsilon}{2}\leq \|f\|_*\|T_k-T\|_{op}M+\frac{\epsilon}{2}< \frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon,$$ so $T_k u_k \rightharpoonup Tu$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Sum of two harmonic alternating series Evaluate the series $$\sum_{n=1}^\infty (-1)^{n+1}\frac{2n+1}{n(n+1)}.$$ I've simplified it to the form $$\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n+1} + \sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n}$$ and I've proved that both parts converge. However, I'm having trouble finding the limit. Writing out the terms as $(1+\frac 1 2 - \frac 1 2 + \frac 1 3 - \frac 1 3 ... )$ suggest their sum is one. However when I look up the sums of the two parts, they are $-\ln(2)$ and $\ln(2)$ respectively, which suggests the sum of the overall series is $0$. I'm aware that if a series is not absolutely convergent then its terms can be rearranged to converge to any number, but we haven't covered that topic yet so I feel like that shouldn't be a consideration in solving this.
$$ \sum_{n=1}^{\infty}{\frac{{(-1)}^{n+1}}{n+1}}=\sum_{n=0}^{\infty}{\frac{{(-1)}^{n+1}}{n+1}}+1\\ =\sum_{n=1}^{\infty}{\frac{{(-1)}^{n}}{n}}+1 $$ So summing the two series yields: $$ \sum_{n=1}^{\infty} {{(-1)}^{n+1}\frac{2n+1}{n(n+1)}}=\sum_{n=1}^{\infty}{\frac{{(-1)}^{n+1}}{n+1}}+\sum_{n=1}^{\infty}{\frac{{(-1)}^{n+1}}{n}}\\ =\sum_{n=1}^{\infty}{\frac{{(-1)}^{n}}{n}}+1-\sum_{n=1}^{\infty}{\frac{{(-1)}^{n}}{n}}=1 $$ Since both series converge, splitting the sum this way is legitimate and no rearrangement is involved. In particular $\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n+1}=1-\ln 2$ (not $-\ln 2$), so the two values $1-\ln 2$ and $\ln 2$ indeed add up to $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Succinct Proof: All Pentagons Are Star Shaped Question: What is a succinct proof that all pentagons are star shaped? In case the term star shaped (or star convex) is unfamiliar or forgotten: Definition Reminder: A subset $X$ of $\mathbb{R}^n$ is star shaped if there exists an $x \in X$ such that the line segment from $x$ to any point in $X$ is contained in $X$. This topic has arisen in the past in my class discussions around interior angle sums; specifically, for star shaped polygons in $\mathbb{R}^2$ we can find their sum of interior angles as follows: By assumption, there is an interior point $x$ that can be connected to each of the $n$ vertices. Drawing in these line segments, we construct $n$ triangles; summing across all of their interior angles gives a total of $180n^\circ$, but this over-counts the angle sum for the polygon by the $360^\circ$ around $x$. Therefore, the sum of interior angles is $(180n - 360)^\circ = 180(n-2)^\circ$. This formula for the sum of interior angles holds more generally (often shown by triangulating polygons) but the proof strategy above already fails for some (obviously concave) hexagons. For example: The above depicted polygon is not star shaped. Moreover, it is a fact that any polygon with five or fewer sides is star shaped. And so I re-paste: Question: What is a succinct proof that all pentagons are star shaped? Edit: Since it has come up as a counterexample (of sorts) for each of the first two responses, here is an example of a concave pentagon that may be worth examining in thinking through a proof.
This is several years late, but I think I have a nicer proof. I thought I would post it. Let's believe we can triangulate the 5-gon into three triangles. Note that these three triangles all must have a vertex $v$ in common: To see this, remove one of the triangles $T$, s.t. we have a 4-gon left. Then whichever way we triangulate the 4-gon every edge will contain a vertex from both of its triangles. In particular, the joint edge of $T$ and the 4-gon will have such a vertex $v$. Then $v$ sees all three triangles and hence the entire 5-gon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 2 }
Projection from triangle to spherical triangle Consider a triangle, $T$, in $\mathbb{R}^3$ with vertices $(0,0,1), (0,1,0)$, and $(1,0,0)$. Let $S$ denote the sphere centered at the origin with radius 1 and let $S_1$ denote the portion of the sphere in the same octant as the triangle. We can define a map $f: T \rightarrow S_1$ by $$f(x) = \frac{x}{|x|}$$ which is one-to-one and onto. The map is a bijection, so the inverse should exist. Maybe I am losing my mind, but what does the explicit formula for $f^{-1}: S_1 \rightarrow T$ look like?
Notice under the direct mapping, any point $p$ on $T$ get mapped to a point $f(p)$ on $S$ which is a scalar multiple of $p$. So for any point $q = (x,y,z) \in S$, the inverse mapping $f^{-1}$ will send $q$ to a point which is a scalar multiple of $q$. This means there exists a function $\lambda : S \to \mathbb{R}$ such that $$q = (x,y,z)\quad\mapsto\quad f^{-1}(q) = ( \lambda(q)x, \lambda(q)y, \lambda(q)z)$$ It is clear the triangle $T$ lies on the plane $x + y + z = 1$, this means $\lambda(q)$ satisfy the constraint: $$\lambda(q)(x+y+z) = 1 \quad\iff\quad \lambda(q) = \frac{1}{x+y+z}$$ As a result, we have $$f^{-1}(x,y,z) = \left(\frac{x}{x+y+z},\frac{y}{x+y+z},\frac{z}{x+y+z}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Conditional expectation of iid nonnegative random variables I am studying Ross's book, stochastic processes. There is the following lemma: Let $Y_1, Y_2, ... , Y_n$ be iid nonnegative random variable. Then, $E[Y_1+ \cdots +Y_k | Y_1+\cdots+Y_n=y] = \frac{k}{n} \cdot y, \quad k=1,\cdots,n$ But, I really can't understand why this lemma can be established. Could you please help me?
Informally, as all the $Y_i$ are identically distributed, the fact that they sum to $y$ implies that their (conditional) expectation must be $\frac yn$. To see this formally, note that, by symmetry (the $Y_i$ are i.i.d., hence exchangeable), we have equality of all the $E[Y_i\;|\;Y_1+...+Y_n=y]$; call the common value $E$. Then $$nE=E\left[Y_1\;|\;Y_1+Y_2+...+Y_n=y\right]+...+E\left[Y_n\;|\;Y_1+Y_2+...+Y_n=y\right]=E\left[Y_1+...+Y_n\;|\;Y_1+...+Y_n=y\right]=y$$ Thus $E=\frac yn$. But then $$E\left[Y_1+...+Y_k\;|\;Y_1+Y_2+...+Y_n=y\right]=kE=\frac {ky}{n}$$ As desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1422465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }