A solution to a Diophantine equation with three unknowns (similar to Fermat's Last Theorem) I need to either solve the following Diophantine equation with three unknowns: $n_{1} (n_{1} +1) \pm n_{2} (n_{2} +1) \pm n_{3} (n_{3} +1) = 0$, where $n_{1,2,3}$ can be positive or negative -- or perhaps prove that this equation has no solution other than the trivial one $n_{1} = 0$, $n_{2} = n_{3}$, plus the obvious permutations. Given that this equation looks pretty much like the one in Fermat's Last Theorem, I personally think that the latter is more plausible.
Here is an answer to the question as originally posted. Later on, the OP has somewhat moved the goalposts. If $n$ is an integer, no matter what its sign is, then $n(n-1)$ will be either zero or positive, with zero exactly if $n=0$ or $n=1$. Since neither of the terms in your equation can be negative, the only way for the entire thing to be zero is if each term is zero, so you have exactly $8$ solutions, corresponding to each combination of $n_1,n_2,n_3\in\{0,1\}$. Answer to the currently (as of this writing) asked question: The numbers of the form $n(n+1)$ are $$ 0, 2, 6, 12, 20, 30, \ldots $$ Their differences are the successive even numbers, and since all the numbers are themselves even, we can just pick one of them and express it as the difference of two others, for example $$ 3(3+1) + 5(5+1) - 6(6+1) = 0 $$ and in general we have solutions of the form $$ n_1 \in \mathbb Z, \qquad n_3 = \frac{n_1(n_1+1)}2, \qquad n_2 = n_3-1 $$ This doesn't exhaust the solutions; we also have, for example $$ 5(5+1) + 6(6+1) - 8(8+1) = 0 $$ $$ 9(9+1) + 13(13+1) - 16(16+1) = 0 $$ $$ 11(11+1) + 14(14+1) - 18(18+1) = 0 $$
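As a sanity check, here is a small brute-force search (my own sketch in Python, not part of the original answer; the helper name t is made up) that recovers the solutions listed above:

def t(n):
    return n * (n + 1)

# search small positive triples with t(a) + t(b) = t(c)
solutions = [(a, b, c)
             for a in range(1, 20)
             for b in range(a, 20)
             for c in range(1, 40)
             if t(a) + t(b) == t(c)]
print(solutions)  # includes (3, 5, 6), (5, 6, 8), (9, 13, 16), (11, 14, 18), ...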
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ahlfors' Complex Analysis proof doubt On the third edition of Ahlfors' Complex Analysis, page 121, Lemma 3 states: Suppose $\phi(\zeta)$ is continuous on the arc $\gamma$. Then the function \begin{equation*} F_n(z)= \int_{\gamma} \frac{\phi(\zeta)}{(\zeta-z)^n} d\zeta \end{equation*} is analytic in each of the regions determined by $\gamma$, and its derivative is $F'_n(z)=nF_{n+1}(z)$ It begins by proving $F_1$ is continuous. Let $z_0$ be a point not in $\gamma$, and choose a neighbourhood $|z-z_0| < \delta$ around that point so that it does not meet $\gamma$. Furthermore, we have $|\zeta-z| > \frac{\delta}{2}$ for all $\zeta \in \gamma$ when $|z-z_0| < \frac{\delta}{2}$. Then a few calculations yield \begin{equation*} |F_1(z)-F_1(z_0)| < |z-z_0| \frac{2}{\delta^2} \int_{\gamma} |\phi||d\zeta| \end{equation*} which according to the book proves the continuity of $F_1(z)$ at $z_0$. This is where I get stuck; I understand the rest of the proof. I know that $|z-z_0| < \frac{\delta}{2}$, but we still have a $\delta$ in the denominator, and because $\int_{\gamma} |\phi||d\zeta|$ is bounded, this means that the right-hand side will become bigger as $\delta$ becomes smaller! I can't understand how this implies continuity, and I am almost sure there must be a mistake or I am missing something huge.
Don't let $\delta$ get smaller! Leave $\delta$ fixed. Say $c=\frac1{\delta^2}\int_\gamma|\phi|\,|d\zeta|.$ You have $$|F_1(z)-F_1(z_0)|<c|z-z_0|.$$ Hence $F_1(z)\to F_1(z_0)$ as $z\to z_0$. (For instance, $|z-z_0|<\epsilon/c$ implies $|F_1(z)-F_1(z_0)|<\epsilon$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The vector $\mathbf{x}$ is the derivative of $\mathbf{Ax}$ with respect to what? Consider the linear equation $$\mathbf{A}\mathbf{x}=\mathbf{b},$$ where * *$\mathbf{A}=\begin{bmatrix}\mathbf{a}_1 \\ \vdots \\ \mathbf{a}_N\end{bmatrix}$ is an $N \times K$ random matrix *$\mathbf{x}=\begin{bmatrix}x_1 \\ \vdots \\ x_K\end{bmatrix}$ is a $K \times 1$ vector *$\mathbf{b}=\begin{bmatrix}b_1 \\ \vdots \\ b_N\end{bmatrix}$ is an $N \times 1$ random vector For each row $\mathbf{a}_n$ of the matrix $\mathbf{A}$, observe that $$\frac{\partial \mathbf{a}_n \mathbf{x}}{\partial a_{nk}}=\frac{\partial (a_{n1} x_1 + a_{n2} x_2+\cdots+a_{nK} x_K)}{\partial a_{nk}}=x_k$$ Is there some way to generalize this statement for all columns $k$ simultaneously? In other words, the vector $\mathbf{x}$ is the derivative of the system of equations $\mathbf{Ax}$, taken with respect to what? Taking a quick look at this list of identities, I see that $\mathbf{A}$ is the derivative of $\mathbf{A}\mathbf{x}$ with respect to $\mathbf{x}$: $$\mathbf{A}=\frac{\partial \mathbf{Ax}}{\partial{\mathbf{x}}}$$ However, I don't see what derivative would give us $\mathbf{x}$ as an answer: $$\mathbf{x}=\frac{\partial \mathbf{Ax}}{\partial (?)}$$ I'm not particularly good with the rules of matrix differentiation, so a proof would be especially helpful here. Thank you!
With the usual notations, differentiating with respect to a vector increases the dimension by 1, and differentiating with respect to a matrix increases the dimension by 2. For example, if $s$ is a scalar, $v$ a vector and $A$ a matrix, $\partial_A s$ is a matrix, $\partial_v s$ is a vector and $\partial_v v$ is a matrix. In your case, you differentiate a vector with respect to ? to obtain a vector. So, with the usual notations, ? is necessarily a scalar. And I don't think any scalar satisfies this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
To prove the following is a metric space I have $X= \mathbb{R^2}$, $d(x,y)=\sqrt{(x_1-y_1)^2+(x_2-y_2)^2}$. I am having trouble with how to show the triangle inequality property, while the other two are quite trivial. What is the method for doing this? Thanks
In $\mathbb R^n$, you can use the dot product as a tool to construct this type of proof. The triangle inequality is in fact a direct consequence of the Cauchy-Schwarz inequality $(\mathbf a\cdot\mathbf b)(\mathbf a\cdot\mathbf b)\le(\mathbf a\cdot\mathbf a)(\mathbf b\cdot\mathbf b)$. There are many proofs, but the simplest one in $\mathbb R^2$ is probably just enumerating all the terms of the dot products. $$(\mathbf a\cdot\mathbf a)(\mathbf b\cdot\mathbf b)-(\mathbf a\cdot\mathbf b)(\mathbf a\cdot\mathbf b)=a_1^2b_2^2+a_2^2b_1^2-2a_1a_2b_1b_2=(a_1b_2-a_2b_1)^2$$ The difference is therefore nonnegative, so the Cauchy-Schwarz inequality must hold. Let $||\mathbf v||=\sqrt{\mathbf v\cdot\mathbf v}$. We can state the CS inequality as: $$\mathbf a\cdot\mathbf b\le||\mathbf a||\ ||\mathbf b||$$ We can then multiply by $2$ and add the terms $\mathbf a\cdot\mathbf a=||\mathbf a||^2$ and $\mathbf b\cdot\mathbf b=||\mathbf b||^2$ to both sides: $$\mathbf a\cdot\mathbf a+2\mathbf a\cdot\mathbf b+\mathbf b\cdot\mathbf b\le||\mathbf a||^2+2||\mathbf a||\ ||\mathbf b||+||\mathbf b||^2$$ We can factor these, since dot product distributes like multiplication. $$(\mathbf a+\mathbf b)\cdot(\mathbf a+\mathbf b)\le\left(||\mathbf a||+||\mathbf b||\right)^2$$ $$||\mathbf a+\mathbf b||^2\le\left(||\mathbf a||+||\mathbf b||\right)^2$$ $$||\mathbf a+\mathbf b||\le||\mathbf a||+||\mathbf b||$$ This is exactly the triangle inequality we want to prove. Letting $\mathbf a=\mathbf z-\mathbf x$ and $\mathbf b=\mathbf y-\mathbf z$ makes this more clear: $$||\mathbf y-\mathbf x||\le||\mathbf z-\mathbf x||+||\mathbf y-\mathbf z||$$
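If you want to convince yourself numerically before writing the proof, here is a quick random check (my own sketch, not part of the argument):

import random

def d(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

# sample random triples of points in R^2 and test d(x,y) <= d(x,z) + d(z,y)
for _ in range(10000):
    x, y, z = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(3)]
    assert d(x, y) <= d(x, z) + d(z, y) + 1e-9
print("triangle inequality held on all samples")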
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892620", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stuck in integration: $\int {\frac{dx}{( 1+\sqrt {x})\sqrt{(x-{x}^2)}}}$ $\displaystyle\int {\frac{dx}{( 1+\sqrt {x})\sqrt{(x-{x}^2)}}}$ $=\displaystyle\int\frac{(1-\sqrt x)}{(1-x)\sqrt{x-x^2}}\,dx$ $=\displaystyle\int\frac{(1-\sqrt x+x-x)}{(1-x)\sqrt{x-x^2}}\,dx$ $=\displaystyle\int\frac{\,dx}{\sqrt{x-x^2}}-\displaystyle\int\frac{\sqrt x(1-\sqrt x)}{\sqrt x(1-x)\sqrt{1-x}}\,dx$ I have tried hard with the above integration but I am not able to get further. Please give me a hint on how to proceed, or suggest some other method.
By substituting twice we get: $t=\sqrt { x } \Rightarrow dt=\frac { dx }{ 2\sqrt { x } } $ $$\int { \frac { dx }{ \left( 1+\sqrt { x } \right) \sqrt { x-{ x }^{ 2 } } } } =2\int { \frac { d\sqrt { x } }{ \left( 1+\sqrt { x } \right) \sqrt { 1-x } } = } 2\int { \frac { dt }{ \left( 1+t \right) \sqrt { 1-{ t }^{ 2 } } } }$$ $t=\sin { z } \Rightarrow dt=\cos { z } dz$ $$2\int \frac{\cos z\,dz}{(1+\sin z)\cos z} = 2\int \frac{dz}{1+\sin z} \overset{\text{multiplying above and below by } 1-\sin z}{=} 2\int \frac{1-\sin z}{\cos^2 z}\,dz = 2\left( \int \frac{dz}{\cos^2 z} - \int \frac{\sin z\,dz}{\cos^2 z} \right) = 2\left( \tan z - \frac{1}{\cos z} \right) + C$$ $$= 2\left( \tan\left(\arcsin\sqrt{x}\right) - \frac{1}{\cos\left(\arcsin\sqrt{x}\right)} \right) + C = 2\left( \frac{\sqrt{x}}{\sqrt{1-x}} - \frac{1}{\sqrt{1-x}} \right) + C = 2\,\frac{\sqrt{x}-1}{\sqrt{1-x}} + C$$
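A quick numerical check of the final antiderivative (my own sketch; assumes SymPy is installed):

import sympy as sp

x = sp.symbols('x', positive=True)
F = 2 * (sp.sqrt(x) - 1) / sp.sqrt(1 - x)                # claimed antiderivative
integrand = 1 / ((1 + sp.sqrt(x)) * sp.sqrt(x - x**2))
diff_expr = sp.diff(F, x) - integrand
# should be (numerically) zero throughout 0 < x < 1
print([float(diff_expr.subs(x, v)) for v in (0.1, 0.5, 0.9)])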
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Help solving $1 < \frac{x + 3}{x - 2} < 2$ I have worked a lot of inequalities here on MSE and that greatly helped me. I've seen a similar inequality before, but the one I have today is significantly different, in that I'll end up with a division by 0, which is not possible. $$1 < \frac{x + 3}{x - 2} < 2$$ $$\implies 1 < \frac{x - 2 + 5}{x - 2} < 2$$ $$\implies 1 < 1 + \frac{5}{x - 2} < 2$$ $$\implies 0 < \frac{5}{x - 2} < 1$$ $$\implies 0 < \frac{1}{x - 2} < \frac{1}{5}$$ I can't continue without falling into the division by 0. So my many thanks for your help.
Break it into two separate inequalities when you get to here: $$ 0 < \frac{5}{x-2} < 1$$ First let's consider $0 < \dfrac{5}{x-2}$. Since $5 > 0$, this inequality is satisfied when $x-2 > 0$, i.e., when $x > 2$. Now let's consider $\dfrac{5}{x-2} < 1$. Subtract $1$ from both sides and get a common denominator to get $$\dfrac{5}{x-2} - \dfrac{x-2}{x-2} < 0.$$ Combine and we have $$ \dfrac{5 - (x-2)}{x-2} < 0.$$ Simplify to get $$\frac{7-x}{x-2} < 0.$$ This inequality is satisfied when either of the following two conditions hold: * *$7-x > 0$ and $x-2 < 0$ *$7-x < 0$ and $x-2 > 0$ Can you take it from here?
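Spelling out the last step (my own completion: the first case contradicts $x > 2$, and the second gives $x > 7$, so the solution set is $x > 7$), here is a quick numeric spot check of the original inequality:

# the combined conditions x > 2 and (7 - x)/(x - 2) < 0 reduce to x > 7
for x in [2.5, 5, 7, 7.0001, 8, 100]:
    v = (x + 3) / (x - 2)
    print(x, v, 1 < v < 2)   # True exactly for the sampled values with x > 7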
{ "language": "en", "url": "https://math.stackexchange.com/questions/1892918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find the Fourier series representation of a function Consider the function $f(x) = \begin{cases} \frac{\pi}{2}+x & & x \in (-\pi, 0] \\ \frac{\pi}{2}-x & & x \in (0, \pi]\\ \end{cases}$ extended $2\pi$-periodically to $\mathbb{R}$. Calculate $a_0, a_n, b_n$ I understand how to work out a Fourier series but I am unsure what to set for $f(x)$ due to the way it's set out. Would I have $a_0=\frac{1}{2\pi}\int_{-\pi}^{\pi}\left(\frac{\pi}{2}+x\right) dx$ due to splitting it into odd and even parts?
Divide it into two parts and calculate $$a_{ 0 }=\frac { 1 }{ \pi } \int _{ -\pi }^{ \pi } f\left( x \right) dx=\frac { 1 }{ \pi } \int _{ -\pi }^{ 0 }{ \left( \frac { \pi }{ 2 } +x \right) dx } +\frac { 1 }{ \pi } \int _{ 0 }^{ \pi }{ \left( \frac { \pi }{ 2 } -x \right) dx } =\\ =\frac { 1 }{ \pi }\left( \frac { \pi }{ 2 } x+\frac { { x }^{ 2 } }{ 2 } \right)\Big|_{ -\pi }^{ 0 }+\frac { 1 }{ \pi }\left( \frac { \pi }{ 2 } x-\frac { { x }^{ 2 } }{ 2 } \right)\Big|_{ 0 }^{ \pi }=0\\ { a }_{ n }=\frac { 1 }{ \pi } \int _{ -\pi }^{ \pi } f\left( x \right) \cos { \left( nx \right) } dx=\frac { 1 }{ \pi } \left( \int _{ -\pi }^{ 0 }{ \left( \frac { \pi }{ 2 } +x \right) \cos { \left( nx \right) } dx } +\int _{ 0 }^{ \pi }{ \left( \frac { \pi }{ 2 } -x \right) \cos { \left( nx \right) } dx } \right) \\ { b }_{ n }=\frac { 1 }{ \pi } \int _{ -\pi }^{ \pi } f\left( x \right) \sin { \left( nx \right) } dx=\frac { 1 }{ \pi } \left( \int _{ -\pi }^{ 0 }{ \left( \frac { \pi }{ 2 } +x \right) \sin { \left( nx \right) } dx } +\int _{ 0 }^{ \pi }{ \left( \frac { \pi }{ 2 } -x \right) \sin { \left( nx \right) } dx } \right) $$ Can you take it from here?
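If you want to check your hand computation, here is a symbolic evaluation of these integrals (my own sketch; assumes SymPy):

import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', integer=True, positive=True)
f_left, f_right = sp.pi/2 + x, sp.pi/2 - x   # the two pieces of f

a0 = (sp.integrate(f_left, (x, -sp.pi, 0)) + sp.integrate(f_right, (x, 0, sp.pi))) / sp.pi
an = (sp.integrate(f_left * sp.cos(n*x), (x, -sp.pi, 0)) +
      sp.integrate(f_right * sp.cos(n*x), (x, 0, sp.pi))) / sp.pi
bn = (sp.integrate(f_left * sp.sin(n*x), (x, -sp.pi, 0)) +
      sp.integrate(f_right * sp.sin(n*x), (x, 0, sp.pi))) / sp.pi
# expect a0 = 0, bn = 0 (f is even), and an = 2(1 - (-1)^n)/(pi n^2)
print(sp.simplify(a0), sp.simplify(an), sp.simplify(bn))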
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are representations of finite groups unitary? I'm reading through a proof where $\Psi:G\rightarrow GL(V)$ and $\Phi:G\rightarrow GL(U)$ are representations of a finite group $G$, $a\in U$, $b\in V$, and $R:U\rightarrow V$ is a linear function. Then there is a line in the proof that goes: $$\sum_{g\in G}\langle b,\Psi(g^{-1})R\Phi(g)a\rangle=\sum_{g\in G}\langle\Psi(g)b,R\Phi(g)a\rangle$$ I thought in general that for an operator $A$, $\langle b,Aa\rangle=\langle A^*b,a\rangle$, so it looks like that line is assuming $\Psi(g^{-1})=\Psi(g)^*$, i.e., $\Psi(g)$ is unitary. I can see that since $g^n=e$ for some $n$, then $\Psi(g)^n=I$, so $det(\Psi(g))$ is an $n$th root of unity, but as far as I can tell that's not a sufficient condition to conclude that $\Psi(g)$ is unitary. Does $\Psi(g)$ need to be unitary when $G$ is finite?
No, it does not. For example, the representation of the $2$-element group $\{e,a\}$ given by $$ e \to \pmatrix{1 & 0\cr 0 & 1\cr},\ a \to \pmatrix{1 & 1\cr 0 & -1\cr} $$ is not unitary.
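A two-line numerical confirmation of this example (my own sketch, using NumPy): the matrix squares to the identity, so the map is a homomorphism, but it is not orthogonal/unitary:

import numpy as np

M = np.array([[1, 1], [0, -1]])
print(M @ M)    # the identity, so a -> M respects a^2 = e
print(M @ M.T)  # not the identity, so M is not unitary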
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Inequality on non-negative reals For non-negative $x,y,z$ satisfying $\frac{1}{2x^2+1}+\frac{1}{2y^2+1}+\frac{1}{2z^2+1}=1$, show that $x^2+y^2+z^2+6\geq 3(x+y+z)$. Any idea how to handle the constraint? I'm stuck.
Yes, it's true for all reals. $\sum\limits_{cyc}(x^2-3x+2)=\sum\limits_{cyc}\left(x^2-3x+2-\frac{9}{4}\left(\frac{1}{2x^2+1}-\frac{1}{3}\right)\right)=\sum\limits_{cyc}\frac{(x-1)^2(2x-1)^2}{2(2x^2+1)}\geq0$. Done!
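The key identity behind this one-line proof can be machine-checked (my own sketch; assumes SymPy):

import sympy as sp

x = sp.symbols('x')
lhs = x**2 - 3*x + 2 - sp.Rational(9, 4) * (1/(2*x**2 + 1) - sp.Rational(1, 3))
rhs = (x - 1)**2 * (2*x - 1)**2 / (2 * (2*x**2 + 1))
print(sp.simplify(lhs - rhs))  # 0, so each cyclic term is a square over 2x^2 + 1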
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is the absolute value function a metric? I was trying to find out more information about absolute value, and I came upon the fact that AV satisfies a whole set of properties that usually defines a distance function or metric. But in the Wikipedia article on metrics, there's no mention of the AV function, so I'm a bit confused now. Is it some sort of metric subspace instead? P.S.: I'm not at all well-versed on metrics, so I'd appreciate simple answers :)
The absolute value $x\mapsto |x|$ is not a metric but a norm on $\mathbb R$ (or $\mathbb C$), viewed as a one-dimensional vector space. However, from any norm you can derive a metric in a standard way. In the case of the absolute value, this gives the well-known metric $d(x,y)=|x-y|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Can someone explain this part of this property of invertible matrices proof? The property states, "A square matrix A is invertible iff it can be written as the product of elementary matrices" I'm confused on the part of the theorem where they're trying to show that if A is invertible, then it can be written as the product of elementary matrices. This is that section of the proof: "Assume A is invertible. You know the system of linear equations represented by Ax=0 has only the trivial solution. But this implies that the augmented matrix [A 0] can be rewritten in the form [I 0] (using elementary row operations corresponding to $E_1,E_2,\ldots,E_k$). So, $E_k \cdots E_2 E_1 A = I$ and it follows that $A = E_1^{-1}E_2^{-1}\cdots E_k^{-1}$. A can be written as the product of elementary matrices." I just don't get how knowing that Ax=0 has only the trivial solution implies that [A 0] can be written in the form [I 0]. Wasn't it already obvious that A can be rewritten as I since it's invertible? And obviously if there's a 0 matrix adjoined to A, it's going to stay the zero matrix no matter what row operations are done on it? What's the point of doing that? I'm just generally confused on this proof
The point is that each step in the process of Gauss-Jordan elimination corresponds to multiplying your matrix on the left by an elementary matrix. If you start with $[A\mid b]$ (where $A$ is your matrix and $b$ the augmented column), you get $[E_1 A \mid E_1 b]$ in the first step, for some elementary matrix $E_1$, then $[E_2 E_1 A \mid E_2 E_1 b]$ for some elementary matrix $E_2$, and so on. The "augmented" column is not important, the non-augmented part is. If the matrix is invertible, at the end of Gauss-Jordan elimination you get $[I \mid \text{something}]$. That is, Gauss-Jordan elimination ends by telling you a unique value for each variable. And this says that $E_n E_{n-1} \ldots E_1 A = I$.
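Here is a tiny concrete illustration (my own example, not from the original answer) of row-reduction steps as elementary-matrix multiplications:

import numpy as np

A = np.array([[2., 1.], [4., 3.]])
E1 = np.array([[0.5, 0.], [0., 1.]])    # scale row 1 by 1/2
E2 = np.array([[1., 0.], [-4., 1.]])    # row 2 <- row 2 - 4 * row 1
E3 = np.array([[1., -0.5], [0., 1.]])   # row 1 <- row 1 - (1/2) * row 2
print(E3 @ E2 @ E1 @ A)  # the identity
# hence A is the product of the inverse elementary matrices:
print(np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3))  # equals A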
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $x - ε ≤ y$ for all $ε>0$ then $x ≤ y$ I've been asked to prove the following: if $x - ε ≤ y$ for all $ε>0$, then $x ≤ y$. I tried proof by contrapositive, but I keep having trouble choosing the right $ε$. Can you guys help me out?
Suppose $x > y$. Then $\epsilon = x - y > 0$, and $x - \dfrac{\epsilon}{2} = y + \dfrac{\epsilon}{2} > y$, contradicting the hypothesis (applied with $\epsilon/2$ in place of $\epsilon$). Hence $x \le y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
A convergence problem in real non-negative sequence, $\sum_{n=1}^\infty a_n$. We now have $a_n\geq 0$, $\forall n=1,2,...,$ and $\sum_{n=1}^\infty a_n <\infty$. Then I guessed that $\lim_{n\to\infty} a_n \cdot n = 0$. But I realized that it is wrong, since if we let $a_n = 1/n $ if $n = 2^i$ for some $i=1,2,...$ and $a_n = 0$ for the rest of the $n$, then we have $\sum_{n=1}^\infty a_n = 1/2 + 1/4 + 1/8 + \cdots < \infty$, but $a_n\cdot n$ does not converge to $0$. Now I add another condition: that $a_n$ is non-increasing. Does the result hold this time? I.e., the formal question is as follows: $a_n\geq 0$, $\forall n=1,2,...,$ and $a_n$ is non-increasing, and $\sum_{n=1}^\infty a_n <\infty$. Then prove $a_n \cdot n \to 0$, or give a counterexample showing that $a_n\cdot n$ does not necessarily converge to $0$.
It is true that $a_n \cdot n \rightarrow 0$ under this condition. To prove this, observe that if the series is convergent, the sequence of partial sums is a Cauchy sequence. In particular, for every $\varepsilon > 0$, there is some $N_0 = N_0(\varepsilon)$ such that whenever $n > m > N_0$, $\left| \sum_{i=1}^n a_i - \sum_{i=1}^m a_i \right| < \varepsilon$. Now consider taking $m = N_0 + 1$ and $n \ge 2m$. Then we have $$ \sum_{i=1}^n a_i - \sum_{i=1}^m a_i = \sum_{i=m+1}^n a_i = a_n \cdot (n-m) + \sum_{i=m+1}^n (a_i - a_n) \ge a_n \cdot (n-m),$$ by the non-increasing property of the sequence $(a_n)_n$. Hence $a_n \cdot (n-m) < \varepsilon$ whenever $n \ge 2(N_0 + 1)$. However, since $n \ge 2m$, $n-m \ge \frac12 n$, and so $a_n \cdot n \le 2 a_n \cdot (n-m) < 2 \varepsilon$. Since $\varepsilon$ can be taken to be arbitrarily small, this, coupled with the non-negativity of $a_n$, proves $a_n \cdot n \rightarrow 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Sum/Intersection of Invariant Subspaces that have Invariant Complements Let $V$ be a finite-dimensional vector space, and let $M: V \rightarrow V$ be a linear transformation. Suppose subspaces $S_1,S_2 \subset V$ are both $M$-invariant and have $M$-invariant complements (say $W_1$ and $W_2$, so $V = S_i \oplus W_i$). Question: do the sum $S_1 + S_2$ and intersection $S_1 \cap S_2$ also have $M$-invariant complements? I haven't yet found a valid proof or counterexample, and would appreciate any intuition (or proof or counterexample, of course) that anyone has towards this question. I know that the answer is yes if $M$ is diagonalizable or if the $W_i$ are actually the orthogonal complements. Can we do it without placing those restrictions? Thanks! (Here's what I've tried so far.) My guess is that the answer is yes, and my proof attempts are inspired primarily by Theorem 2.5.1 of this book. The idea is to split $S_1$ and $S_2$ into irreducible subspaces (where an "irreducible subspace" is an $M$-invariant subspace that cannot be written as the direct sum of other $M$-invariant subspaces). Before starting, also define a "supremal irreducible subspace" to be an irreducible subspace that is not a subset of any other irreducible subspace. With that, here are the broad steps: * *I've shown that an invariant subspace has an invariant complement if and only if it is the direct sum of supremal irreducible subspaces. So the subspaces $S_1$ and $S_2$ can both be written as direct sums of supremal irreducible subspaces. *Since $S_1$ and $S_2$ are both $M$-invariant, so are $S_1 + S_2$ and $S_1 \cap S_2$. Those sum and intersection can be written as direct sums of irreducible subspaces (by definition), but those irreducible subspaces are not necessarily supremal. *If we can prove that those irreducible subspaces are supremal, then we're done... but I haven't had any luck with that so far. It's clearly dependent on choosing decompositions for $S_1$ and $S_2$ that fully characterize the supremal irreducible subspaces common to both of them. *A first step might be to show that any two supremal irreducible subspaces are either equal or independent, and I think I can do the rest. Equivalently, we can show that if two Jordan chains have the same eigenvector and the same length, then their spans are equal.
False! Counterexample: $V = \mathbb{R}^4$ and $$ M = \begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 0 & 0 & 1 \end{bmatrix} \,. $$ Take the two $M$-invariant subspaces $S_i$ (and $M$-invariant complements $W_i$) $$ S_1 = \mathrm{span} \left\{ \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix}, \begin{bmatrix} 0\\1\\0\\0 \end{bmatrix} \right\} \,,\ S_2 = \mathrm{span} \left\{ \begin{bmatrix} 1\\0\\0\\0 \end{bmatrix}, \begin{bmatrix} 0\\1\\1\\0 \end{bmatrix} \right\} \,,\quad W_1 = W_2 = \mathrm{span} \left\{ \begin{bmatrix} 0\\0\\1\\0 \end{bmatrix}, \begin{bmatrix} 0\\0\\1\\1 \end{bmatrix} \right\} \,. $$ The intersection $S_1 \cap S_2$ has only part of a Jordan chain for the top Jordan block of $M$, so it can't have an $M$-invariant complement. The sum $S_1 + S_2$ has only part of a Jordan chain for the bottom Jordan block of $M$, so it also can't have an $M$-invariant complement.
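The invariance claims here are easy to verify numerically (my own sketch, using NumPy; invariant(basis) is a made-up helper that checks $M$ maps each basis vector back into the span):

import numpy as np

M = np.array([[1., 1, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])

def invariant(basis):
    B = np.array(basis, dtype=float).T
    for v in basis:
        coeffs = np.linalg.lstsq(B, M @ v, rcond=None)[0]
        if not np.allclose(M @ v, B @ coeffs):
            return False
    return True

S1 = [np.array([1., 0, 0, 0]), np.array([0., 1, 0, 0])]
S2 = [np.array([1., 0, 0, 0]), np.array([0., 1, 1, 0])]
W  = [np.array([0., 0, 1, 0]), np.array([0., 0, 1, 1])]
print(invariant(S1), invariant(S2), invariant(W))  # True True True

One can likewise check that $S_1 \cap S_2 = \mathrm{span}\{e_1\}$ and $S_1 + S_2 = \mathrm{span}\{e_1, e_2, e_3\}$, the partial Jordan chains mentioned above.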
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Equivalence $\displaystyle((\lnot p \lor q) \land (q \lor r)) \land (p \land \lnot q) ≡ c$ I am trying to prove this. Using only laws. I am getting nowhere. The 3 variables on the left hand side keep tripping me up. I've tried DeMorgans, Distributive, Identity and Negation as a starting point but hit dead ends.
Distributing, $$((\lnot p \lor q)\land (q\lor r)) \iff (((\lnot p \lor q)\land q ) \lor ((\lnot p \lor q)\land r)) \iff q\,\lor ((\lnot p \lor q)\land r)$$ Alternatively, by De Morgan, $$(\lnot p \lor q)\land (q\lor r) \iff \lnot(p \land \lnot q)\land (q\lor r)$$ Then the original expression is $$\lnot(p \land \lnot q)\land (q\lor r) \land (p\land \lnot q)$$ which is false, since it contains both $\lnot(p \land \lnot q)$ and $p \land \lnot q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How should we calculate the difference between two numbers? If we are told to find the difference between 3 and 5, then we usually subtract 3 from 5, $5-3=2$, and thus we say that the difference is 2. But why can't we subtract 5 from 3, $3-5=-2$, to get the difference $-2$? Which result is right? Is the difference $2$ or $-2$? Also, how can we calculate the difference between two numbers, say $-5$ and $2$, on the number line?
Traditionally, the "difference" between two numbers refers to the distance on a number line between the points corresponding to each of the two numbers, a.k.a. the absolute value of their difference. Analogously, if you asked "What is the distance from Toronto to Vancouver?" or "What is the distance from Vancouver to Toronto?", you would expect the same answer: the [positive] distance separating the two cities, regardless of the direction of travel. On the other hand, if asked "What is the result when you subtract 3 from 5?", you should give a different answer (2) than if you were asked "What is the result if you subtract 5 from 3?" (-2). As for calculating on the number line: * *If the two numbers are on the same side of $0$ (e.g., $-2$ and $-6$), the difference is the result when you subtract the smaller absolute value from the larger absolute value (e.g., $\lvert -6 \rvert - \lvert -2 \rvert = 6-2 = 4$); *If the two numbers are on opposite sides of $0$ (e.g., $-5$ and $2$), then you add the absolute values (e.g., $\lvert -5 \rvert + \lvert 2 \rvert = 5+2 = 7$), or alternatively subtract the negative number from the positive one, which effects a sign change (e.g., $2-(-5)=2+5=7$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1893988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 7, "answer_id": 1 }
Involute of a circle - what is the separation distance? It seems like a simple enough question. For the involute of a circle, what is the separation distance between successive turns? Is this derivation correct? Parametric formula for the y-coordinate: $ y = r(Sin(\theta) - \theta Cos(\theta)) $ Differentiating: $ \frac{dy}{d\theta} = r \theta Sin(\theta) $ Which has roots at $ \theta = \pi n, n\in \mathbb{Z} $ Taking every other $n$, since those are successive turns: $ y = r(Sin(\theta) - \theta Cos(\theta)) $ simplifies to $ y = -r \pi n $ where $n$ is even and $n \ge 0$. Therefore the spacing between successive turns, $D$ is: $ D = 2 \pi r $ Is that even close to right? Is it that simple? I guess it makes intuitive sense based on the circumference of the circle. And some plots I've made bear it out. But I'd like to know for sure.
We can represent the parametric equations of the circle involute, by factoring out the radius $r$ of the generating circle, which is just a scale factor. $$ \left\{ \begin{gathered} x = X/r = \cos \theta + \theta \sin \theta \hfill \\ y = Y/r = \sin \theta - \theta \cos \theta \hfill \\ \end{gathered} \right.\quad \left\{ \begin{gathered} \frac{{dx}} {{d\theta }} = \theta \cos \theta \hfill \\ \frac{{dy}} {{d\theta }} = \theta \sin \theta \hfill \\ \end{gathered} \right.\quad \left| {\;0 \leqslant \theta \in \;\mathbb{R}} \right. $$ Let's indicate by $P = \left( {x ,\;y } \right)$ the point on the curve and by $T = \left( {\cos \theta ,\;\sin \theta } \right)$ the point on the circle from which the tether unwinds. Then $$ \left| {TP} \right| = \sqrt {\left( {x - \cos \theta } \right)^{\,2} + \left( {y - \sin \theta } \right)^{\,2} } = \theta $$ $$ \left| {OP} \right| = \sqrt {x^{\,2} + y^{\,2} } = \sqrt {1 + \theta ^{\,2} } $$ and $$ \mathop {TP}\limits^ \to \; \bot \;\mathop {OT}\limits^ \to \quad \quad \vec v = \left( {dx/d\theta ,\;dy/d\theta } \right)\; \bot \;\mathop {TP}\limits^ \to $$ which means that $T$ is also the center of curvature. Now, by putting $dy/d\theta=0$ you find the $y$ of the points where the tangent to the curve is horizontal, so $D/r$ gives the spacing between consecutive horizontal tangents to the curve. If you plug $\theta=\pi n$ into the parametric equation for $x$ you get $x=1,-1,1,\ldots$, which means that the points with horizontal tangent lie on the verticals $x=1$ (for $y \le 0$) and $x=-1$ (for $0<y$), which is in accordance with the results found above. However, over $x=0$ (as well as over any radial line from $O$) the intersection points are not equally spaced, although the distance tends towards $2 \pi $. To find them, we make the substitution $$ \left\{ \begin{gathered} \frac{1} {{\sqrt {1 + \theta ^{\,2} } }} = \cos \alpha \hfill \\ \frac{\theta } {{\sqrt {1 + \theta ^{\,2} } }} = \sin \alpha \hfill \\ \end{gathered} \right.\quad \Rightarrow \quad \tan \alpha = \theta $$ so that in the polar coordinates $\rho$ and $\varphi$ we have: $$ \left\{ \begin{gathered} \tan \alpha = \theta \hfill \\ \rho = \sqrt {1 + \theta ^{\,2} } = \frac{1} {{\cos \alpha }} \hfill \\ \tan \varphi = \tan \left( {\theta - \alpha } \right)\quad \Rightarrow \quad \varphi = \tan \alpha - \alpha \hfill \\ \end{gathered} \right. $$ and we can express $\varphi$ in terms of $\rho$ as: $$ \varphi = \tan \alpha - \alpha = \sqrt {\rho ^{\,2} - 1} - \arccos \left( {1/\rho } \right) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$f''(x)\ge 0$, $f'(0)=1$ and $f(x)\le 100$. Does such a function exist? $f''(x)\ge 0$, $f'(0)=1$ and $f(x)\le 100$. Does such a function exist? I can show that $100-e^{-x}$ does not satisfy the given condition. But I have to show that no function can satisfy the initial condition. I can also say that since $f'$ is increasing, using Lagrange's MVT, $$\frac{f(x)-f(0)}{x}\ge 1 \quad\text{for } x>0$$
Suppose $f(x) \leq 100$ for all $x$. Now $f'(0) = 1$ and $f''(x) \geq 0$ for all $x$ hence $f'$ is increasing and therefore $f'(x) \geq f'(0) = 1$ for all $x \geq 0$. Hence $f$ is strictly increasing on $[0, \infty)$ and since $f(x) \leq 100$ it follows that $\lim_{x \to \infty}f(x) = L$ exists. By Mean Value Theorem we can see that $$f(x + 1) - f(x) = f'(\xi)$$ for some $\xi \in (x, x + 1)$. Now we have an obvious contradiction because the LHS tends to $L - L = 0$ as $x \to \infty$ and the RHS is greater than or equal to $1$. Hence there is no such function $f$ with the desired properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Pairwise sum and divide I came across a programming task. Given an array of integer numbers $A\mid A_i\in\mathbb{N}$, one needs to calculate the sum: $$S=\sum_{i=1}^{N}\sum_{j=i+1}^{N}\left\lfloor\frac{A_i+A_j}{A_i\cdot A_j}\right\rfloor$$ It is the summation of the floor integer values of the fraction $\frac{A_i+A_j}{A_i\cdot A_j}$. $N$ is the length of the array. In order to compute this faster, one needs to simplify the expression above. I found the corresponding solution code (Python 3):

from collections import Counter

for _ in range(int(input())):
    n, A = int(input()), Counter(map(int, input().split()))
    print(A[2] * (A[2] - 1) // 2 + A[1] * (n - 1))

which suggests that $$S=\left\lfloor\frac{N_2\cdot\left(N_2-1\right)}{2}\right\rfloor+N_1\cdot\left(N-1\right)$$ $N_1$ is the frequency of $1$ in the array $A$ and $N_2$ the frequency of $2$ in $A$. How could one obtain this solution?
Note that $$ S = \sum_{i=1}^N\sum_{j=i+1}^N \left\lfloor \frac {1}{A_i} + \frac{1}{A_j} \right \rfloor = \sum_{1 \leq i < j \leq n} \left\lfloor \frac {1}{A_i} + \frac{1}{A_j} \right\rfloor $$ That is, for every pair of $A_i$, we calculate $f(i,j) = \left\lfloor\frac {1}{A_i} + \frac{1}{A_j} \right\rfloor$, and we add up the results. Note that $f(i,j)$ is non-zero iff one of $A_i$ or $A_j$ is $1$, or $A_i = A_j = 2$. If $A_i = A_j = 1$, then $f(i,j) = 2$. If there are $N_1$ $A_i$'s that equal $1$, then how many pairs satisfy $A_i = A_j = 1$? The answer is $\binom{N_1}{2} = \frac{N_1(N_1 - 1)}{2}$, which means that the resulting $f(i,j)$ add up to $N_1(N_1 - 1)$. How many pairs contain one $1$ and one other value? The answer here is $N_1 \cdot (N - N_1)$. The resulting $f(i,j)$ add up to $N_1 \cdot (N - N_1)$. How many pairs consist of two $2$s? The answer is $\binom {N_2}{2} = \frac{N_2(N_2 - 1)}{2}$. The resulting $f$s add up to $\frac{N_2(N_2 - 1)}{2}$. Our total sum, then, is $$ N_1(N_1 - 1) + N_1 \cdot (N - N_1) + \frac{N_2(N_2 - 1)}{2} = \\ N_1(N-1) + \frac{N_2(N_2 - 1)}{2} $$ as desired. NOTE: $\frac{N_2(N_2 - 1)}{2}$ will always be an integer; there's no need to apply the floor function.
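A brute-force cross-check of the closed form (my own sketch; exact rational arithmetic avoids any floating-point edge cases):

from fractions import Fraction
from itertools import combinations
from math import floor
from random import randint

for _ in range(200):
    A = [randint(1, 6) for _ in range(randint(2, 25))]
    N, N1, N2 = len(A), A.count(1), A.count(2)
    brute = sum(floor(Fraction(1, a) + Fraction(1, b)) for a, b in combinations(A, 2))
    assert brute == N1 * (N - 1) + N2 * (N2 - 1) // 2
print("formula matches brute force")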
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A net $\varphi : [0, \omega_1) \to M$ on a metric space $M$ converges $\iff \varphi$ is eventually constant I want to prove that If $M$ is a metric space, then a net $\varphi : [0, \omega_1) \to M$ converges if and only if $\varphi$ is eventually constant. ($[0, \omega_1)$ is the set of ordinals less than $\omega_1$, where $\omega_1$ is the first uncountable ordinal, with the order topology). $(\Leftarrow)$ is clear. My trouble is with $(\Rightarrow)$. Suppose that $\varphi \rightarrow x$. Let $U = \{ \alpha \in [0, \omega_1) \, \mid \varphi(\alpha) \neq x \}.$ If $U = \emptyset $ we are done, so assume $U \neq \emptyset$. Since $[0, \omega_1)$ is well ordered, there is an $\alpha_1 \in U$ such that $ \alpha_1 \leq \alpha \, \, \forall \alpha \in U.$ If $ U \setminus \{\alpha_1\} = \emptyset$ we are done. If not, we repeat the procedure. It is clear that if $\operatorname{card}(U) = n$ for some $n \in \omega$ then we only have a finite set of ordinals $\{\alpha_1, \ldots, \alpha_n\}=U$ so $\varphi$ is constant for $\alpha > \alpha_n.$ If $U$ is countably infinite, then $U = \{\alpha_n\}_{n \in \omega}$ and since $[0, \omega_1)$ is sequentially compact and the $\alpha_n$'s are countable ordinals we obtain a convergent subsequence $\{\alpha_{n_k}\}$ of $U$ to some countable ordinal $\beta \in [0, \omega_1)$ $[\beta = \cup_{n_k \in \omega}\alpha_{n_k}???],$ hence $\varphi$ is constant for all $ \alpha > \beta.$ But what if $U$ is uncountable? Idea: If $U = [0, \omega_1),$ then since $\varphi \rightarrow x$, for every $n \in \omega$ there is $\alpha_n \in U$ such that $\alpha \geq \alpha_n \implies \varphi(\alpha) \in B_{\frac{1}{n}}(x).$ This is a countable sequence, so since $[0, \omega_1)$ is sequentially compact, there is a subsequence converging to a countable ordinal $\beta \in [0, \omega_1)$. Clearly, $\beta$ satisfies $\varphi(\beta) \in B_{\frac{1}{n}}(x)$ for every $n \in \omega.$ We can conclude that $\varphi(\beta)= x$, right? For if $\varphi(\beta) \neq x$, then taking $n$ such that $0 < \frac{1}{n} < d(x, \varphi(\beta)),$ we would have that $\varphi(\beta) \not \in B_{\frac{1}{n}}(x),$ contradicting the hypothesis. I feel pretty confident about this, but if someone sees an error in the argument, please comment. Thanks.
For each $n$, the set $\left\{\alpha\in[0,\omega_1):d(\varphi(\alpha),x)>1/n\right\}$ is countable, so $\left\{\alpha\in[0,\omega_1):\varphi(\alpha)\neq x)\right\}$ is also countable, so it is not cofinal in $\omega_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Example that $R/I$ is not a field where $R$ is a commutative ring and $I$ is a maximal ideal. Theorem. Let $I$ be an ideal in a commutative ring $R$ with identity. Then $I$ is a maximal ideal if and only if the quotient ring $R/I$ is a field. The theorem above is very famous, but it holds under the hypothesis that $R$ is a commutative ring with identity. I want to know whether the theorem still holds when $R$ is merely a nontrivial commutative ring (possibly without identity).
Take $R = 2\mathbb{Z}$ and $I = 4\mathbb{Z}$: then $I$ is a maximal ideal of $R$, but $2\mathbb{Z}/4\mathbb{Z}$ is not a field because its multiplication is identically zero (the elements are $\{0, 2\}$ and $2 \cdot 2 = 4 \equiv 0$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The large-$N$ limit of eigenvalues of matrices with non-diagonal elements scaling as $1/N$ Define a series of matrices$$H_N= \begin{bmatrix} 1&1/N&1/N&\cdots&1/N\\ 1/N&2&1/N&\cdots&1/N\\ 1/N&1/N&3&\cdots&1/N\\ \vdots&\vdots&\vdots&&\vdots\\ 1/N&1/N&1/N&\cdots&N \end{bmatrix}$$ My question is, when $N\to+\infty$, would the eigenvalues of $H$ be different from $\{1,\ldots,N\}$ ? The answer is not obvious, as, for the matrix series $$G_N=\begin{bmatrix} 1&1/N&1/N&\cdots&1/N\\ 1/N&1&1/N&\cdots&1/N\\ 1/N&1/N&1&\cdots&1/N\\ \vdots&\vdots&\vdots&&\vdots\\ 1/N&1/N&1/N&\cdots&1 \end{bmatrix}$$ You can verify that $G_N$ has the eigenvalue $2-\frac{1}{N}$, which tends to $2$ rather than to the diagonal value $1$.
Nearly a year has passed and now I finally have the answer to my problem: In the limit $N\to\infty$, the eigenvalues of $H_N$ won't be different from $\{1,2,\ldots,N\}$. Proof: The matrix element of $H_N$ is given by $H_{ij}=j\delta_{ij}+1/N$, where we have added an unimportant $1/N$ to the diagonal elements (and in the following, we won't write the subscript $N$ explicitly). We can directly solve the eigenvalue equation $H_{ij}\psi_j=\lambda\psi_i$ in this case, giving $$i \psi_i+\frac{1}{N}\sum^N_{j=1}\psi_j=\lambda\psi_i~~\Rightarrow~~ \psi_i=-\frac{1}{N(i-\lambda)}\sum^N_{j=1}\psi_j,$$ summing over $i$ on both sides, we get $$\sum^N_{i=1}\psi_i=-\sum^{N}_{i=1}\frac{1}{N(i-\lambda)}\sum^N_{j=1}\psi_j$$ If $\sum^N_{j=1}\psi_j=0$, then we have $(i-\lambda)\psi_i=0$ for all $i$, which is not possible for a nonzero eigenvector. It follows that $$\sum^{N}_{i=1}\frac{1}{(\lambda-i)}=N.$$ It is easy to prove that this equation has $N$ distinct roots $1<\lambda_1<2<\lambda_2<3<\ldots <N<\lambda_N<N+1$. We now prove that $\lambda_j\to j$ in the limit $N\to+\infty$. Let $\lambda_j=j+\Delta_j$; if $$\lim_{N\to\infty}\Delta_j=\Delta>0,$$ then there exists $N_0$ such that for all $N>N_0$, $\Delta/2<\Delta_j<1$, which leads to $$\sum^N_{i=1}\frac{1}{j+\Delta_j-i}<\sum^j_{i=1}\frac{1}{j+\Delta_j-i}=\sum^{j-1}_{i=0}\frac{1}{\Delta_j+i}<\frac{1}{\Delta/2}+\frac{1}{\Delta/2+1}+\ln N,$$ which is obviously much smaller than $N$ for sufficiently large $N$, a contradiction. Thus $$\lim_{N\to\infty}\Delta_j=0~~\Leftrightarrow \lim_{N\to\infty}\lambda_j=j$$ for all $j\in \mathbb{N}_+$.
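A quick numerical illustration (my own sketch, using NumPy) that the spectrum of the question's $H_N$ approaches $\{1,\dots,N\}$:

import numpy as np

for N in [10, 100, 1000]:
    H = np.full((N, N), 1.0 / N)
    np.fill_diagonal(H, np.arange(1, N + 1, dtype=float))
    eig = np.sort(np.linalg.eigvalsh(H))
    # maximum deviation of the spectrum from {1, ..., N}; it shrinks as N grows
    print(N, np.max(np.abs(eig - np.arange(1, N + 1))))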
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Is it possible to find a set of 5 integers, such that the 10 sums each have a different last digit? Full question: If we are given 5 integers, there are $\binom{5}{2} = 10$ different ways to find the sum of 2 of the integers. Is it possible to find a set of 5 integers, such that the 10 sums each have a different last digit? Am I trying to prove that 5 integers can be found, such that the 10 sums each have a different last digit, or am I being asked to find 5 such integers?
Let $o$ be the number of odd integers in the set. A sum of two of the integers is odd exactly when one term is odd and the other even, so the number of odd sums is $o(5-o) \in \{0, 4, 6\}$, never $5$. But ten different last digits would require exactly five of the ten sums to be odd. Therefore a set of the desired kind does not exist; you are being asked to decide which alternative holds, and here it is the impossibility that you prove.
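Since only residues mod 10 matter for last digits, an exhaustive search also settles it (my own sketch):

from itertools import combinations, combinations_with_replacement

ok = any(
    len({(s[i] + s[j]) % 10 for i, j in combinations(range(5), 2)}) == 10
    for s in combinations_with_replacement(range(10), 5)
)
print(ok)  # False: no multiset of residues yields 10 distinct last digits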
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
ordinary differential equation of third order using substitution How can one solve an ODE of the following form? $$y''' y +(y'')^{2} =0$$ It looks like it needs some kind of substitution. I tried some substitutions, but they were useless. Thanks in advance.
With reference to http://eqworld.ipmnet.ru/en/solutions/ode/ode0503.pdf, Let $u=\left(\dfrac{dy}{dx}\right)^2$ , Then $\dfrac{du}{dx}=2\dfrac{dy}{dx}\dfrac{d^2y}{dx^2}$ $\dfrac{du}{dy}\dfrac{dy}{dx}=2\dfrac{dy}{dx}\dfrac{d^2y}{dx^2}$ $2\dfrac{d^2y}{dx^2}=\dfrac{du}{dy}$ $2\dfrac{d^3y}{dx^3}=\dfrac{d}{dx}\left(\dfrac{du}{dy}\right)=\dfrac{d}{dy}\left(\dfrac{du}{dy}\right)\dfrac{dy}{dx}=\mp\sqrt u\dfrac{d^2u}{dy^2}$ $\therefore\mp y\sqrt u\dfrac{d^2u}{dy^2}+\dfrac{du}{dy}=0$ $\dfrac{d^2u}{dy^2}=\pm\dfrac{1}{y\sqrt u}\dfrac{du}{dy}$ Which reduces to a generalized Emden-Fowler equation of the form http://science.fire.ustc.edu.cn/download/download1/book%5Cmathematics%5CHandbook%20of%20Exact%20Solutions%20for%20Ordinary%20Differential%20EquationsSecond%20Edition%5Cc2972_fm.pdf#page=362 WLOG, just consider $\dfrac{d^2u}{dy^2}=\dfrac{1}{y\sqrt u}\dfrac{du}{dy}$ The general solution is $\begin{cases}u=\dfrac{1}{t^2}\\y=\pm C_2e^{\int\frac{dt}{t^3\left(-\frac{1}{t}-\frac{1}{2t^2}+C_1\right)}}\end{cases}$ $\begin{cases}\left(\dfrac{dy}{dx}\right)^2=\dfrac{1}{t^2}\\y=\pm C_2e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\end{cases}$ $\begin{cases}\dfrac{dy}{dx}=\pm\dfrac{1}{t}\\y=\pm C_2e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\end{cases}$ $\begin{cases}\dfrac{\dfrac{dy}{dt}}{\dfrac{dx}{dt}}=\pm\dfrac{1}{t}\\y=\pm C_2e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\end{cases}$ $\begin{cases}\dfrac{dx}{dt}=\pm\frac{C_2}{C_1t^2-t-\frac{1}{2}}e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\\y=\pm C_2e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\end{cases}$ $\begin{cases}x=\pm\int\frac{C_2}{C_1t^2-t-\frac{1}{2}}e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}~dt+C_3\\y=\pm C_2e^{\int\frac{dt}{C_1t^3-t^2-\frac{t}{2}}}\end{cases}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1894996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Explicit formula for crossed recursion When answering this linked question, I ended up with a formula where two sequences are defined by mutual recursion: $$ \begin{cases} u_0=0, ~~v_0=1\\ \forall n \in \mathbb{N}, ~~u_{n+1} = \frac{11}{12}u_n+\frac1{12}v_n\\ \forall n \in \mathbb{N}, ~~v_{n+1} = \frac56v_n+\frac16u_n\\ \end{cases} $$ How to compute the explicit formula for this? Here is what I have found so far (not much): Let $f$ be a function such that $$ \forall x, y \in \left[0, 1\right], \forall M \in \mathbb{R}^+, ~~f_M(x, y) = \frac1M\left(\left(M-1\right)x+y\right) $$ And we can rewrite our formula as: $$ \begin{cases} u_{n+1} = f_{12}(u_n, v_n)\\ v_{n+1} = f_6(v_n, u_n)\\ \end{cases} $$ We have similar functions describing how $(u_n)$ and $(v_n)$ behave, but I don't know what to do next. Am I even on the right path? What should I try?
One approach is to rewrite your recurrences in matrix form as $$\begin{bmatrix}u_{n+1}\\v_{n+1}\end{bmatrix}=\begin{bmatrix}11/12&1/12\\1/6&5/6\end{bmatrix}\begin{bmatrix}u_n\\v_n\end{bmatrix}\;,$$ so that $$\begin{bmatrix}u_n\\v_n\end{bmatrix}=\begin{bmatrix}11/12&1/12\\1/6&5/6\end{bmatrix}^n\begin{bmatrix}u_0\\v_0\end{bmatrix}=\begin{bmatrix}11/12&1/12\\1/6&5/6\end{bmatrix}^n\begin{bmatrix}0\\1\end{bmatrix}\;.$$ Diagonalize the matrix $$\begin{bmatrix}11/12&1/12\\1/6&5/6\end{bmatrix}\;,$$ and you can calculate the powers very easily to get closed forms for $u_n$ and $v_n$. Another approach is to add your recurrences to get $$u_{n+1}+v_{n+1}=\frac{13}{12}u_n+\frac{11}{12}v_n=u_n+v_n+\frac1{12}(u_n-v_n)$$ and subtract them to get $$u_{n+1}-v_{n+1}=\frac9{12}u_n-\frac9{12}v_n=\frac34(u_n-v_n)\;.$$ Now let $x_n=u_n+v_n$ and $y_n=u_n-v_n$; then $$x_{n+1}=x_n+\frac1{12}y_n\tag{1}$$ and $$y_{n+1}=\frac34y_n\;.\tag{2}$$ The recurrence $(2)$ is easily solved: $y_0=-1$, so $y_n=-\left(\frac34\right)^n$. We can substitute this into $(1)$ to find that $$x_{n+1}=x_n-\frac1{12}\left(\frac34\right)^n\;;$$ this is first-order recurrence whose solution involves only summing a finite geometric series. (I’ve left the result of that calculation in the spoiler-protected block below.) $x_n=\frac23+\frac13\left(\frac34\right)^n$ Finally, it’s easy to solve the system $$\left\{\begin{align*} &u_n+v_n=x_n\\ &u_n-v_n=y_n \end{align*}\right.$$ for $u_n$ and $v_n$ once you have $x_n$ and $y_n$.
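Carrying the final step out and checking against the recursion numerically (my own completion, not from the original answer; the closed forms below follow from the spoiler-protected $x_n$ together with $y_n=-\left(\frac34\right)^n$, via $u_n = (x_n+y_n)/2$ and $v_n = (x_n-y_n)/2$):

u = lambda n: 1/3 - (1/3) * (3/4) ** n
v = lambda n: 1/3 + (2/3) * (3/4) ** n

un, vn = 0.0, 1.0
for n in range(25):
    assert abs(un - u(n)) < 1e-12 and abs(vn - v(n)) < 1e-12
    un, vn = 11/12 * un + 1/12 * vn, 5/6 * vn + 1/6 * un
print("closed forms match the recursion")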
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$x \in (a-r,a+r)$ versus $|x-a| < |r-a|$ From the Princeton book for the GRE Subject Test in Maths: That part seems to suggest that if $x \in (a-r,a+r)$ then $f(x) < \frac1q < \varepsilon$ and hence $f$ is continuous at the irrationals in $(0,1)$. I was thinking to establish continuity we needed to show that if $|x-a| < \delta = |r-a|$ then $f(x) < \frac1q < \varepsilon$. What's happening? If $x \in (a-r,a+r)$, then $|x-a| < \delta = |r-a|$?
It's a typo. It should have been: Then, within the open interval $(a - \color{red}{\delta}, a + \color{red}{\delta})$, the value of $f(x)$ will be less than $\color{red}{\text{or equal to}}$ $\frac{1}{q}$ To see why this choice of $\delta$ works, we argue by contradiction. Suppose instead that there is some $r^* \in (a - \delta, a + \delta)$ such that $f(r^*) > \frac{1}{q}$. Looking at how $f$ is defined, we know that $r^*$ can't be irrational (since $f(r^*) = 0 > \frac{1}{q}$ is absurd), so we know that $r^* = \frac{c}{d}$ for some integers $c, d$ in lowest terms with $d > 0$ and we know that $f(r^*) = \frac{1}{d} > \frac{1}{q}$ so that $d < q$. But since $\frac{c}{d} = r^* \in (a - \delta, a + \delta) \subseteq I = (0, 1)$, we know that $\frac{c}{d} < 1$ so that $r^*$ must have been one of the $\binom{q}{2}$ elements in the previously discussed list of rational numbers. Since $|r^* - a| < \delta = |r - a|$, we have found a number in the list that is closer to $a$ than $r$, contradicting the minimality of $r$ ($r$ was supposed to be the closest, not $r^*$). So we conclude that: $$ |x - a| < \delta \iff x \in (a - \delta, a + \delta) \implies f(x) \leq \frac{1}{q} < \epsilon $$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Troubles understanding solution to $\cos(\arcsin(\frac{24}{25}))$ I am having trouble understanding how the answer key to my Pre-Calculus and Trigonometry document got to the answer it did for this task/question: Find the exact value of the expression $$\cos(\arcsin(\frac{24}{25}))$$ At first, I tried to find the value of $\arcsin(\frac{24}{25})$ and then find the cosine of that value, but it seemed way too hard to do without a calculator. I peeked at the answer key, and I saw this: Let $y=\arcsin\left(\frac{24}{25}\right)$ Then, $\sin(y)= \frac {24}{25}$ and $\cos(y)= \frac{7}{25}$ I do not understand how they came to this answer or the route to it. Could someone please guide me in the right direction or show me how the document came to this answer?
The basic definition of "sine" is "opposite side divided by hypotenuse", so they have drawn a right triangle with angle (call it $\theta$) on the left, hypotenuse of length 25 and "opposite side" (the vertical line on the right) of length 24. So $\sin(\theta)= \frac{24}{25}$ and $\theta= \arcsin\left(\frac{24}{25}\right)$. By the Pythagorean theorem, then, $x^2+ 24^2= x^2+ 576= 25^2= 625$ where "$x$" is the length of the "near side", the horizontal line at the bottom of the picture. So $x^2= 625- 576= 49$. Since the length of a side must be positive, the length of the "near side" is $\sqrt{49}= 7$ as shown in the picture. Since "cosine" is defined as "near side over hypotenuse", we have $\cos\left(\arcsin\left(\frac{24}{25}\right)\right)= \cos(\theta)= \frac{7}{25}$. A more "trigonometric" method of getting that result is to use the fact that $\sin^2(\theta)+ \cos^2(\theta)= 1$. Setting $\theta= \arcsin\left(\frac{24}{25}\right)$ we have $\sin(\theta)= \frac{24}{25}$ and $\sin^2(\theta)+ \cos^2(\theta)= \left(\frac{24}{25}\right)^2+ \cos^2(\theta)= 1$ so $\cos^2(\theta)= 1- \frac{576}{625}= \frac{625- 576}{625}= \frac{49}{625}$ and then $\cos(\theta)= \sqrt{\frac{49}{625}}= \frac{7}{25}$.
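And a one-line numeric check (my own):

import math
print(math.cos(math.asin(24 / 25)), 7 / 25)  # both 0.28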
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Show if $f,g:S^n \to S^n$ and $|\text{deg}(f)| \neq |\text{deg}(g)|$ there is some $x$ with $f(x), g(x)$ orthogonal. More specifically, I want to show if f and g are maps from $S^{n} \to S^{n}$ with $|\text{deg}(f)| \neq |\text{deg}(g)|$ show that there is some $x \in S^{n}$ with $f(x), g(x)$ orthogonal. One way I think the hypothesis could be used is the standard "contrapositive" trick for these sorts of proofs where I assume $h(x) = \langle f(x), g(x)\rangle$ is never zero. Then it either is always positive or negative. But otherwise I'm not really sure where to go from there.
Suppose for every $x, \langle f(x), g(x)\rangle\neq 0$. Consider $h:S^n\rightarrow \mathbb{R}$ defined by $h(x)=\langle f(x), g(x)\rangle$. Since $S^n$ is connected, we have: * *For every $x\in S^n$, $h(x)>0$ *For every $x\in S^n, h(x)<0$. Suppose 1. Define $H(t,x)={{tf(x)+(1-t)g(x)}\over{\|tf(x)+(1-t)g(x)\|}}$. $H$ is well-defined: $tf(x)+(1-t)g(x)=0$ implies that $f(x), g(x)$ are collinear. If $f(x)=g(x)$, then $tf(x)+(1-t)g(x)=tf(x)+(1-t)f(x)=f(x)\neq 0$. If $f(x)=-g(x)$, then $\langle f(x),g(x)\rangle=-1$. But we have supposed $1$. $H(0,x)=g(x)$ and $H(1,x)=f(x)$, which implies that $H$ is a homotopy between $f$ and $g$, and therefore $f$ and $g$ have the same degree. Contradiction. If $2$ is verified, define $H(t,x)={{tf(x)-(1-t)g(x)}\over{\|tf(x)-(1-t)g(x)\|}}$. $H$ is well defined: if $tf(x)-(1-t)g(x)=0$, then $f(x)$ and $g(x)$ are collinear. If $f(x)=-g(x)$, then $tf(x)-(1-t)g(x)=tf(x)-(1-t)(-f(x))=f(x)\neq 0$; if $f(x)=g(x)$, then $\langle f(x),g(x)\rangle=1$. Contradiction, since we have supposed 2. $H(0,x)=-g(x)$, $H(1,x)=f(x)$, and we deduce that $|\deg(g)|=|\deg(f)|$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Two lines through a point, tangent to a curve We are looking for two lines through $(2,8)$ tangent to $y=x^3$. Let's denote the intersection point as $(a, a^3)$ and use the slope equation together with the derivative to get $\frac{a^3-8}{a-2}=3a^2$. This yields a cubic equation. Of course, one of the lines is tangent to $y=x^3$ at $(2,8)$, so we can get $a=2$ and the first line almost immediately. Knowing this, we write the cubic equation as $(a-2)(a^2+pa+q)$, find $p$ and $q$ from the original cubic equation and get $(a-2)(a-2)(a+1)=0$, which gives the solution and the lines. Is there a quicker way?
The tangents must be of the form $$y-8=m(x-2),$$ and they intersect the cubic $\color{blue}{y=x^3}$ when $$x^3-8=m(x-2).$$ This equation must have a double root, so differentiating with respect to $x$, we also have $$3x^2=m.$$ With the obvious solution $x=2$, we deduce $m=12$ and $$\color{green}{y-8=12(x-2)}.$$ Otherwise, we may simplify to get $$x^2+2x+4=m=3x^2.$$ This gives another solution $x=-1$, then $m=3$ and $$\color{magenta}{y-8=3(x-2)}.$$
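You can confirm the double roots symbolically (my own sketch; assumes SymPy):

import sympy as sp

x = sp.symbols('x')
for m in (12, 3):
    print(m, sp.factor(x**3 - 8 - m * (x - 2)))
# m = 12: (x - 2)**2 * (x + 4);  m = 3: (x - 2) * (x + 1)**2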
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How would I write a function for the following pattern? \begin{align} Y(0) ={}& 1\\ Y(1) ={}& 2.5\\ Y(2) ={}& 2.5\cdot2.3\\ Y(3) ={}& 2.5\cdot 2.3\cdot 2.1\\ Y(4) ={}& 2.5\cdot 2.3\cdot 2.1\cdot 1.9\\ \vdots\,\,\, \end{align} How would I solve for something like $Y(1.3)$ or $Y(2.7)$? How would a function for $Y(x)$ be defined?
Notice that \begin{align*} Y(x) &= \frac{25}{10} \cdot \frac{23}{10} \cdot \frac{21}{10} \cdots \frac{27 - 2x}{10} \\[5pt] &= \frac{1}{10^x} \cdot \frac{(26)(25)(24)(23)(22)(21) \cdots (28-2x)(27-2x)}{(26)(24)(22)\cdots (28-2x)} \\ &= \frac{1}{10^x} \cdot \frac{(26)(25)(24)(23)(22)(21) \cdots (2)(1)}{(26)(24)(22)\cdots (4)(2)} \cdot \frac{(26-2x)(24-2x)(22-2x)\cdots (4)(2)} {(26-2x)(25-2x)(24-2x)(23-2x)\cdots (2)(1)} \\ &= \frac{1}{10^x} \cdot \frac{26!}{2^{13} 13!} \cdot \frac{2^{13-x} (13-x)!} {(26-2x)!} \\ &= \boxed{\frac{1}{20^x} \cdot \frac{26!}{13!} \cdot \frac{(13-x)!}{(26-2x)!}}. \end{align*} In particular, this gives you a closed form for integer $x$ when $x = 0, 1, 2, 3, \ldots, 13$. But we'd like this to work for $x > 13$ as well (when the above formula is undefined), and for real $x$ instead of just integer $x$. In fact, the factorial function extends to a function $\Pi(z)$ for all complex $z$ except negative integers ($z! = \Pi(z) = \Gamma(z+1)$ where $\Gamma$ is the Gamma function). Then we get $$ Y(x) = \frac{1}{20^x} \cdot \frac{26!}{13!} \cdot \frac{\Pi(13-x)}{\Pi(26-2x)} $$ We still have a problem when $x > 13$ is an integer: $\Pi(13-x)$ and $\Pi(26-2x)$ are undefined but we expect $Y(x)$ to still have a formula. We can employ the duplication formula which says that $$ \Pi(2z) = \frac{1}{\sqrt{\pi}} 4^z \Pi(z) \Pi(z - \tfrac12) $$ with $z = 13-x$ to obtain \begin{align*} Y(x) &= \frac{1}{20^x} \cdot \frac{26!}{13!} \cdot \frac{\Pi(13-x)}{(\tfrac{1}{\sqrt{\pi}})4^{13-x} \Pi(13-x)\Pi(\tfrac{25}{2} - x)} \\ &= \boxed{\left( \frac{26! \sqrt{\pi}}{13! \;4^{13}} \right) \left(\frac{1}{5^x}\right) \left(\frac{1}{\Pi(\tfrac{25}{2} - x)}\right)}. \end{align*} The constant term in front is of course $\Pi(\tfrac{25}{2}) = \Gamma(\tfrac{27}{2})$, so this agrees with Raymond Manzoni's answer.
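Here is a numeric check of the boxed formula against the defining product (my own sketch; math.gamma(z) is $\Gamma(z)=\Pi(z-1)$, and non-integer arguments such as the asked-about $x=1.3$ work too):

import math

def Y_formula(x):
    C = math.factorial(26) * math.sqrt(math.pi) / (math.factorial(13) * 4**13)
    return C / (5**x * math.gamma(27/2 - x))   # Pi(25/2 - x) = Gamma(27/2 - x)

def Y_product(k):
    p = 1.0
    for i in range(k):
        p *= (25 - 2 * i) / 10   # 2.5, 2.3, 2.1, ...
    return p

for k in range(6):
    print(k, Y_product(k), Y_formula(k))   # the two columns agree
print(Y_formula(1.3), Y_formula(2.7))      # the interpolated values asked about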
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unpacking notation with $\inf$, $\sup$ as part of proof of open set $\mathbb{R}$ can be written as countable union of disjoint open intervals Here is a Proposition from my real analysis book. Proposition. Suppose $G \subset \mathbb{R}$ is open. Then $G$ can be written as the countable union of disjoint open intervals. The proof begins with the following. Let $G$ be an open subset of the real line and for each $x \in G$, let$$A_x = \inf\{a : \text{there exists }b\text{ such that }x \in (a, b) \subset G\}$$and$$B_x = \sup\{d : \text{there exists }c \text{ such that }x \in (c, d) \subset G\}.$$Let $I_x = (A_x, B_x)$. I just realized I don't really get what this beginning part is saying. Can somebody explain it to me/add some more detail? Thanks in advance!
I think it might be easier to understand if it were written as $$A_x = \inf\{ a : (a,x) \subset G\}$$ $$B_x = \sup\{ d : (x,d) \subset G\}$$ Basically, $A_x$ is the left endpoint of the largest open interval contained in $G$ that contains $x$, and $B_x$ is the corresponding right endpoint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Volume of ellipsoid outside sphere I have the ellipsoid $\frac{x^2}{49} + y^2 + z^2 = 1$ and I want to calculate the sum of the volume of the parts of my ellipsoid that is outside of the sphere $x^2+y^2+z^2=1$ How to do this? I know the volume of my sphere, $\frac{4\pi}{3}$, and that I probably should set up some double- or triple integral and transform the coordinates to spherical coordinates and evaluate but I have to admit I'm stuck on how to set this up.
If you do not have a good perception of objects in 3D and want a pure analytical solution: Let $A(z)$ be the area of a slice of the ellipsoid outside the sphere, at height $z$. At height $z$, the ellipsoid is the ellipse $$ \frac{x^2}{49(1-z^2)}+\frac{y^2}{1-z^2}=1, $$ which has area $7(1-z^2)\pi$, and the sphere is the disc $$ \frac{x^2}{(1-z^2)}+\frac{y^2}{1-z^2}=1, $$ which has area $\pi(1-z^2)$. And since $A(z)$ equals the area of this ellipse minus the area of the disc: $$ A(z)=6(1-z^2)\pi $$ To compute the total volume, just integrate $A(z)$ between heights $z=-1$ and $z=1$: $$ V=\int_{-1}^1 A(z)\; dz = \int_{-1}^1 6(1-z^2)\pi\; dz = 8\pi $$ Indeed, this equals $6$ times the volume of the sphere.
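A quick numeric confirmation of the slice integral (my own sketch, midpoint rule):

import math

n = 1_000_000
dz = 2.0 / n
V = sum(6 * math.pi * (1 - (-1 + (i + 0.5) * dz) ** 2) * dz for i in range(n))
print(V, 8 * math.pi)  # both approximately 25.1327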
{ "language": "en", "url": "https://math.stackexchange.com/questions/1895948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Finding a point on the line perpendicular to a line from another point I'm sorry if this is kind of basic, it's been a while since I took geometry. I did find this answer, but it requires 4 points and I only have 3. I have three points $A$, $B$ and $C$ that form a non-right triangle. I know the Cartesian coordinates of these three points. Suppose I draw a perpendicular from $B$ through $AC$ where $D$ is the point where the perpendicular intersects $AC$. How can I find the Cartesian coordinates of a point $T$ along $\overrightarrow {BD}$ that is $n$ beyond $D$? I drew a diagram to demonstrate what I'm trying to find. Diagram Any help would be appreciated, thanks!
You can solve this fairly easily with a few vector operations. Finding point $D$ comes down to finding the perpendicular projection of the vector $\vec{AB}$ onto $\vec{AC}$. That’s given by $$\vec{AB}_\parallel={\vec{AB}\cdot\vec{AC}\over\|\vec{AC}\|^2}\vec{AC}$$ and so $D = A+\vec{AB}_\parallel$. Now, recall that a line can be described parametrically using a point on the line and a direction vector. Since we’re measuring distances from $D$, we’ll use $D$ as our point and $\vec{BD}=D-B$ for the direction. Also, ince we want to move a specific distance along this line from $D$, we’ll normalize the direction vector by dividing by its length so that the resulting direction vector has unit length. That way, moving $n$ units along the line is simply a matter of multiplying the direction vector by $n$. Putting this together, we get $$T=D+n{\vec{BD}\over\|\vec{BD}\|}.$$
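A minimal Python sketch of these two steps; the coordinates of $A,B,C$ and the distance $n$ below are made-up sample values:
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1])
def add(u, v): return (u[0] + v[0], u[1] + v[1])
def scale(u, s): return (u[0] * s, u[1] * s)
def dot(u, v): return u[0] * v[0] + u[1] * v[1]

A, B, C, n = (0.0, 0.0), (2.0, 3.0), (6.0, 0.0), 1.5
AB, AC = sub(B, A), sub(C, A)
D = add(A, scale(AC, dot(AB, AC) / dot(AC, AC)))  # foot of the perpendicular
BD = sub(D, B)
T = add(D, scale(BD, n / math.hypot(*BD)))        # n units beyond D along ray BD
print(D, T)  # (2.0, 0.0) (2.0, -1.5)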
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Get the number of compositions with restrictions The number of compositions of $n$ is $2^{n-1}$. For $n=5$, the number of compositions is $2^{5-1}=16$ * *5 *4 + 1 *3 + 2 *3 + 1 + 1 *2 + 3 *2 + 2 + 1 *2 + 1 + 2 *2 + 1 + 1 + 1 *1 + 4 *1 + 3 + 1 *1 + 2 + 2 *1 + 2 + 1 + 1 *1 + 1 + 3 *1 + 1 + 2 + 1 *1 + 1 + 1 + 2 *1 + 1 + 1 + 1 + 1 What if I want only the count for compositions where all the parts are greater than a value $k$ ($k=2$ in my example, and the kept compositions are the bold ones) ?
The recursion for the unrestricted compositions is: $F_0=1$ and $F_n=\sum\limits_{j=0}^{n-1} F_j$; it is easy to show with induction that this is equal to $2^{n-1}$. The recursion for the "restricted" compositions in which every block is in $[a,b]$ is $F_0=1$ and $F_n=\sum\limits_{j=a}^{\min(n,b)}F_{n-j}$. This allows you to get $F_n$ in time $\mathcal O(n(b-a+1))$, I don't know if this is fast enough. Some C++ code:
#include <bits/stdc++.h>
using namespace std;
const int MAX=10010; // size of the array
int F[MAX]; // this array stores the results
int main(){
    int a,b,n;
    scanf("%d %d %d",&a,&b,&n); // reads the input a, b, n
    F[0]=1;
    for(int i=1;i<=n;i++){ // we recursively update the values
        for(int j=a;j<=b && j<=i;j++){
            F[i]+=F[i-j]; // we add F[i-j] to F[i]
        }
    }
    printf("%d\n",F[n]); // prints the result
}
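A Python port of the same recursion, handy for cross-checking small cases:
from functools import lru_cache

def count_compositions(n, a, b):  # compositions of n with all parts in [a, b]
    @lru_cache(maxsize=None)
    def f(m):
        if m == 0:
            return 1
        return sum(f(m - j) for j in range(a, min(m, b) + 1))
    return f(n)

print(count_compositions(5, 1, 5))  # 16 = 2^(5-1), the unrestricted count
print(count_compositions(5, 3, 5))  # 1: only "5" itself has all parts > 2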
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
derivative $\frac{\ln{x}}{e^x}$ I'm asked to find the derivative of: $$ \frac{\ln x}{e^x}$$ my attempt $$D\frac{\ln x}{e^x} = \frac{\frac{1}{x}e^x + \ln (x) e^x}{e^x} = e^x \frac{\frac{1}{x}+\ln x}{e^{2x}} = \frac{\frac{1}{x}+\ln x}{e^x}$$ But this is apparently wrong and the correct answer is: $$\frac{\frac{1}{x} - \ln x}{e^x}$$ Where do I go wrong?
Note that: $$ \frac{\ln x}{e^x}=e^{-x}\ln {x}$$ And $$(uv)'=u'v+uv'$$ Thus $$ (\frac{\ln x}{e^x})'=(e^{-x}\ln {x})'=(-e^{-x}\ln {x})+(\frac{e^{-x}}{x})=\frac{\frac{1}{x} - \ln x}{e^x}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Does Lebesgue integral satisfy Riemann integral properties? From what I have studied, I know that the Lebesgue integral is more general than the Riemann integral. Then, does the Lebesgue integral satisfy all of the Riemann integral properties? In particular, is the following true for disjoint sets $A,B\in M$: $\int_{A}f d\mu + \int_{B}f d\mu = \int_{A\cup B}f d\mu$, where $M$ is a $\sigma$-algebra in a set $X$ and $\mu$ is a positive measure on $M$.
As was stated in the comments, if a function is Riemann integrable on a compact interval then it is also Lebesgue integrable there and the two integrals agree. And yes, the additivity property holds for the Lebesgue integral: if $A,B\in M$ are disjoint, then $\int_{A\cup B}f\,d\mu=\int_A f\,d\mu+\int_B f\,d\mu$. But one thing Riemann has over Lebesgue is that it allows improper integrals. This question from 2013 gives an example: Riemann-integrable (improperly) but not Lebesgue-integrable
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve for an integral equation from already having the value of the integral? So I know that $\int_{0}^{m} (mx-x^2)dx$ must equal 8. I also know that $m$ is a positive integer. How do I solve for this without having to use a calculator?
$$8=\int_0^m (mx-x^2) \mathop{dx} = \left[\frac{1}{2}mx^2 - \frac{1}{3}x^3\right]_{x=0}^m = \frac{1}{6} m^3$$ Hence $m^3=48$, so $m=\sqrt[3]{48}=2\sqrt[3]{6}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gear Ratios in Number Theory? A machine shop is manufacturing a pair of gears that need to be in a ratio as close to $1.1839323$ as possible, but they can’t make gears with more than $50$ teeth on them. How many teeth should be on each gear to best approximate this ratio? I can't figure out a number-theoretic approach to solve this, or the steps to get to a solution in such a manner. Can someone see a solution I can't?
Express the value as a continued fraction and then truncate it. The continued fraction for 1.1839323 is: [1; 5, 2, 3, 2, 5, 95, 2, 11, 1, 3, 2]. Truncating after the third partial quotient gives 45/38 = [1;5,2,3] = 1.1842105263157894, error +0.0002782263157894427 (0.02350%). The best ratio with both gear sizes under 50 is 45 to 38.
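A brute-force check in Python (assuming both gears may have anywhere from 1 to 50 teeth) confirms this:
target = 1.1839323
best = min(((p, q) for p in range(1, 51) for q in range(1, 51)),
           key=lambda pq: abs(pq[0] / pq[1] - target))
print(best, best[0] / best[1])  # (45, 38) 1.1842105263157894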
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The family of non-decreasing functions from the segment into itself endowed with the topology of pointwise convergence is first countable Using the fact that the only type of discontinuity compatible with a monotonic function is a jump discontinuity, I showed that $A:=\{a\in[0,1]:f\textrm{ is discontinuous at }a\}$ is countable for every $f\in X$. But that didn't help, since for a fixed $f\in X$, $\pi_x^{-1}\big(\,]f(x)-\epsilon,f(x)+\epsilon[\,\big)$ is an open neighbourhood of $f$ for every $x\in[0,1]$. I cannot see how a countable family of open neighbourhoods of $f$ can be a local basis in this case.
Let $f\in X$ be an arbitrary function, $S=\{a_n\}$ be an enumerated countable set containing both the set $A$ of the discontinuity points of the function $f$ and a dense set $D\supset\{0,1\}$ of the segment $[0,1]$ (for instance, its rational points). We claim that the family $$U_n=\{g\in X: |g(a_i)-f(a_i)|<1/n\mbox{ for each } 1\le i\le n \}$$ is a countable base at $f$. It suffices to show that for any point $x\in [0,1]$ and any natural number $m$ the standard subbase neighborhood $V_x=\{g\in X: |g(x)-f(x)|<1/m\}$ contains a set $U_n$ for some $n$. If $x$ is a discontinuity point of the function $f$ then $x=a_j$ for some $j$, and $U_n\subset V_x$ for every $n\ge\max\{j,m\}$. Otherwise $f$ is continuous at $x$, so there exists a number $\delta>0$ such that $|f(x')-f(x)|<1/(10m)$ provided $|x'-x|<\delta$. Since the set $S$ is dense in $[0,1]$ and contains $0$ and $1$, there exist numbers $p$ and $q$ such that $$x-\delta<a_p\le x\le a_q<x+\delta.$$ Assume that $n>\max\{p,q, 10m\}$ and $g\in U_n$. Then, using the monotonicity of $g$ for the second inequality and the triangle inequality throughout, $$|g(x)-f(x)|\le |g(x)-g(a_q)|+|g(a_q)-f(a_q)|+ |f(a_q)-f(x)| \le$$ $$|g(a_p)-g(a_q)|+|g(a_q)-f(a_q)|+ |f(a_q)-f(x)| \le$$ $$|g(a_p)-f(a_p)|+ |f(a_p)-f(a_q)|+ |f(a_q)-g(a_q)|+|g(a_q)-f(a_q)|+ |f(a_q)-f(x)|<$$ $$\frac 1{10m}+\frac 2{10m}+\frac 1{10m}+\frac 1{10m}+\frac 1{10m}=\frac {6}{10m}<\frac 1m,$$ where $|f(a_p)-f(a_q)|\le|f(a_p)-f(x)|+|f(x)-f(a_q)|<2/(10m)$ and each of the other terms is smaller than $1/n<1/(10m)$ or than $1/(10m)$ directly. Thus $g\in V_x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $(a + b\sqrt c)^n = d + e\sqrt c$, then $(a - b\sqrt c)^n = d - e\sqrt c$ I think that: If $a,b,c,d,e\in\mathbb Z$ with $c\ge0$ not a perfect square, such that for some $n \ge 1$, $(a + b\sqrt c)^n = d + e\sqrt c$, then $(a - b\sqrt c)^n = d - e\sqrt c$ Is this true? Can someone provide a proof or give a hint for how to prove this? (preferably without induction if it doesn't provide insight)
Consider the set \begin{align*} A = \{ x + y\sqrt{c}: x, y \in \mathbb{Z} \} \end{align*} Consider the map $f: A \rightarrow A$ defined by $f(x+y\sqrt{c}) = x - y \sqrt{c}$. It is easy to see that $f$ is well defined (since $c$ is not a perfect square, $\sqrt c$ is irrational, so the representation $x+y\sqrt{c}$ is unique) and for any two $a_1, a_2 \in A$, \begin{align*} f(a_1+a_2) &= f(a_1) + f(a_2)\\ f(a_1 \cdot a_2) &= f(a_1)\cdot f(a_2) \end{align*} Thus if $(a + b\sqrt c)^n = d + e\sqrt c$, then \begin{align*} f((a + b\sqrt c)^n) &= f(d + e\sqrt c) \\ (f(a+b\sqrt{c}))^n &= d-e\sqrt{c} \\ (a-b\sqrt{c})^n &= d-e\sqrt{c} \end{align*}
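A small Python illustration of the mechanism, tracking the pair $(d,e)$ exactly through repeated multiplication; the values $a=2$, $b=3$, $c=5$, $n=4$ are arbitrary samples:
def surd_power(a, b, c, n):  # returns (d, e) with (a + b*sqrt(c))^n = d + e*sqrt(c)
    d, e = 1, 0
    for _ in range(n):
        d, e = a * d + b * e * c, a * e + b * d
    return d, e

print(surd_power(2, 3, 5, 4))   # (3121, 1176), i.e. d + e*sqrt(5)
print(surd_power(2, -3, 5, 4))  # (3121, -1176): d - e*sqrt(5), as predicted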
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Proof that $\| \varphi \| = \varphi(1)$ for positive linear functionals on operator systems Let $M \subset \mathcal{B}(H)$ be an operator system, i.e., $M$ is a self-adjoint unital subspace, and let $\varphi: M \rightarrow \mathbb{C}$ be a positive linear functional. How does one prove that $\varphi$ is bounded with $\| \varphi \| = \varphi(1)$?
Here is an argument. All the time, $\varphi:M\to\mathbb C$ is linear and positive, and $M\subset B(H)$ is an operator system. Lemma. If $a\in M$ is selfadjoint, then $\varphi(a)\in\mathbb R$. Also, $\varphi(x^*)=\overline{\varphi(x)}$, for all $x\in M$. Proof. Let $a\in M$ selfadjoint; then, as $1\geq0$ and $a+\|a\|\,1\geq0$, $$\varphi(a)=\varphi(a+\|a\|\,1)-\|a\|\varphi(1)\in \mathbb R.$$ Now, for arbitrary $x\in M$, write $x=a+ib$ with $a,b\in M$ selfadjoint. Then $$ \varphi(x^*)=\varphi(a-ib)=\varphi(a)-i\varphi(b)=\overline{\varphi(a)+i\varphi(b)}=\overline{\varphi(a+ib)}=\overline{\varphi(x)}.\ \ \ \ \ \ \Box$$ Now fix $a\in M$. Choose $t\in\mathbb R$ such that $e^{it}\varphi(a)=|\varphi(a)|$. Then $|\varphi(a)|=\varphi(e^{it}a)$. By the Lemma, $$\text{Re}\,\varphi(x)=\varphi(\text{Re}\,x),\ \ \ \ \ x\in M. $$ Then, using the inequality $\text{Re}\,x\leq\|x\|\,1$, $$ |\varphi(a)|=\text{Re}\,|\varphi(a)|=\text{Re}\,\varphi(e^{it}a)=\varphi(\text{Re}\,e^{it}a)\leq\varphi(\|e^{it}a\|\,1)=\|a\|\,\varphi(1). $$ So $\|\varphi\|\leq\varphi(1)\leq\|\varphi\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1896983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How do I solve a problem consisting of independent events? Question: A leather bag contains 4 black beads, 3 red beads and three white beads. Inside a plastic bag are 5 black beads, 2 red beads and 3 white beads. Another nylon bag contains 6 black beads, 1 red bead and 3 white beads. One bead is randomly withdrawn from each bag. What is the probability of getting at least two white beads? My workout: 1- (4/5 * 4/5 * 4/5) # the probability of not getting 2 white beads and the only way for that to happen is if only 1 white bead is picked therefore leaving 8 to pick from. And since each bag has 10 beads including 3 white beads the chance for this is 8/10= 4/5 This is my logic behind this My final answer: 1- (4/5 * 4/5 * 4/5) = 1 - 64/125 = 61/125 My problem with this now is, I don't know if this is the answer or I have missed a step. Any help will be appreciated
We are drawing one bead each from the three bags, and we are drawing them independently. There is a $\frac3{10}$ chance of drawing a white bead from each bag. If we get at least two white beads, we could have got them from * *the nylon and plastic bags *the leather and nylon bags *the leather and plastic bags *all three bags For each of the first three cases it is implied that we draw a non-white bead from the third bag, which has a $\frac7{10}$ chance of happening. The probability of each of these cases happening is therefore $\frac3{10}\times\frac3{10}\times\frac7{10}=\frac{63}{1000}$; we multiply this by three for the probability of getting exactly two white beads, which works out to be $\frac{189}{1000}$. Similarly, the last case (drawing exactly three white beads) has probability $\frac3{10}\times\frac3{10}\times\frac3{10}=\frac{27}{1000}$ of occurring. Since drawing two white beads and drawing three white beads are mutually exclusive events, add them together to get your answer: $\frac{189}{1000}+\frac{27}{1000}=\frac{216}{1000}=\frac{27}{125}$. If you look a little deeper this is really a binomial distribution $X$ with $n=3$ and $p=\frac3{10}$; we have just calculated $P(X\ge2)$.
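An exact cross-check with Python's fractions module, using $P(\text{white})=\frac3{10}$ per bag as above:
from fractions import Fraction
from itertools import product

p = Fraction(3, 10)  # each bag: 3 white beads out of 10
total = Fraction(0)
for outcome in product((True, False), repeat=3):  # white or not, per bag
    if sum(outcome) >= 2:
        pr = Fraction(1)
        for white in outcome:
            pr *= p if white else 1 - p
        total += pr
print(total)  # 27/125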
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Multiplication of Rational Matrices Let $\mathbf A(x)$ and $\mathbf B(x)$ be $n \times n$ rational matrices, whose elements are rational functions in the scalar $x \in \mathbb R$. Suppose that $\mathbf A(x) \mathbf B(x)$ is a polynomial matrix in $x$, meaning that the denominators in the elements of $\mathbf A(x)$ and $\mathbf B(x)$ somehow cancel out in the multiplication. Does it follow that $\mathbf B(x) \mathbf A(x)$ is also a polynomial matrix in $x$? This seems like a simple question, but I haven't been able to prove it or come up with a counter example.
Here is my attempt at a counter example. $$\left[\begin{array}{cc} 1&1/p(x)\\ 0&1 \end{array}\right]\left[\begin{array}{cc} 1&1\\ p(x)&0 \end{array}\right] = \left[\begin{array}{cc} 2&1\\ p(x)&0 \end{array}\right] $$ $$\left[\begin{array}{cc} 1&1\\ p(x)&0 \end{array}\right]\left[\begin{array}{cc} 1&1/p(x)\\ 0&1 \end{array}\right] =\left[\begin{array}{cc} 1&1+1/p(x)\\ p(x)&1 \end{array}\right]$$
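A quick symbolic verification, assuming SymPy is available; $p=x^2+1$ is an arbitrary nonconstant choice:
import sympy as sp

x = sp.symbols('x')
p = x**2 + 1
A = sp.Matrix([[1, 1 / p], [0, 1]])
B = sp.Matrix([[1, 1], [p, 0]])
print(sp.simplify(A * B))  # [[2, 1], [p, 0]]: polynomial entries
print(sp.simplify(B * A))  # entry (1, 2) is 1 + 1/p: not a polynomial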
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the value of the sum $1+3+5+\cdots+(2n+1)$ I have been thinking about this for a long time, may I know which step of my thinking is wrong as I do not seems to get the correct answer. If I am not going towards the right direction, may I get some help thanks! My attempt: Let $S = 1+3+5+\dotsb+(2n+1)\label{a}\tag{1}.$ Then I rearrange S from the last to first terms: $S = (2n+1)+(2n-1)+(2n-3)+\dotsb+1\label{b}\tag{2}.$ Adding the two series $(1)+(2)$: $$2S = (2n+2)+(2n+2)+(2n+2)+\dotsb+(2n+2),$$ I have $n$ copies of $(2n+2)$. Therefore: $2S = n(2n+2)$ $S = n(n+1)$.
The slip is in counting the copies of $(2n+2)$: the terms $1,3,5,\dots,(2n+1)$ number $n+1$, not $n$ (already for $n=1$ the sum $1+3$ has two terms). Therefore: 2S = (n+1)(2n+2), so S = (n+1)(n+1) = $(n+1)^2$. Check: for $n=2$, $1+3+5=9=3^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 3 }
Please help me understand the definition of straight line given by Euclid. "A straight-line is (any) one which lies evenly with points on itself." This is how Euclid defines a straight line but I don't know what it really means. Is this saying that any point picked on the straight line will have equal distance from each other? What does he mean exactly by lies evenly?
Within the Euclidean system of geometry that is articulated in Book 1 of the Elements, the three fundamental symmetries for objects in two dimensions are reflection, translation and rotation. The idea that a straight line lies evenly between two points can be understood in terms of reflection. If a line A were to be transformed by reflection along an axis running along the two points to yield line B, then the line A would be straight if the reflection to B yielded a line that, when placed on top of A, every point on B lies evenly on A. If a line C had any curvature or deviation from what is straight, then the reflection would yield a line D that does not lie evenly on line C. Note that this explanation of what it is for a line to be straight rests on a fundamental kind of symmetry--and so it conforms to modern notions of the kinds of group relations that are taken to be fundamental across different kinds of geometries (see Cayley and Klein). As such, Euclid's attempt to dig down to fundamental assumptions does seem to yield remarkable insights.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Other formulation of the inverse Galois problem Is it right to say that the inverse Galois problem is equivalent to the following statement: Does every finite group $G$ occur as a quotient of $\text{Gal}(\bar{\mathbb Q}/\mathbb Q)$? I'm not sure if this is "quotient" or "subgroup" that I should write. Thank you for your clarifications.
You should write quotient: if $K$ is a finite Galois extension of $\mathbb{Q}$, then $\mathrm{Gal}(K/\mathbb{Q})$ is a quotient of $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ by the Fundamental Theorem of Galois theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is minimum distance between points on concentric ellipses constant? Given an ellipse, $E_1$ with radii $r_x, r_y$ I would like to know whether the minimum distance between any selected point, $P_a$, and $E_1$ is less than, say $D$. I have seen the related question of finding the distance between a point and an ellipse. Rather than finding that minimum distance, I would like to solve it differently by creating an ellipse, $E_2$ with radii $r_x + D, r_y + D$ and then test whether the point $P_a$ falls within $E_2$, which is easy to do. So, my question is, if the point $P_a$ falls within $E_2$, does it mean that the minimum distance between $E_1$ and $P_a$ is less than $D$?
Pictured below are * *[Red] The (degenerate) ellipse $E_1$ with axes $1$ and $0$. *[Green] The set of points exactly one unit from $E_1$. *[Blue] The ellipse $E_{1+1}$ with axes $1+1$ and $0+1$. All points within $E_{1+1}$ are within one unit of $E_1$ , but the converse is false. There are points within one unit of $E_1$ that don’t lie within $E_{1+1}$. Here’s a picture showing the “correct” (go out a distance $d$ perpendicular to the ellipse) and “approximate” (go out to an ellipse with axes larger by $d$) curves at two distances ($2$ and $6$) from one quarter of an original ellipse with axes $3$ and $1$. At least I hope it’s the right picture! As Matt suspected, the approximation is not bad at all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Minimum and Maximum value of |z| This is a question that I came across today: If $|z-(2/z)|=1$...(1) find the maximum and minimum value of |z|, where z represents a complex number. This is my attempt at a solution: Using the triangle inequality, we can write: $||z|-|2/z||≤|z+2/z|≤|z|+|2/z|$ Let $|z|=r$ which implies that $|r-2/r|≤1≤r+2/r$ (From (1)) How must I proceed to find the value of |z|? Please help! Much thanks in advance :)
Using the complex triangle inequalities, $$\big||z|-|w|\big|\le|z+w|\le|z|+|w|$$ with $w=-\dfrac2z$ and writing $|z|=r$, $$\left|r-\dfrac2r\right|\le\left|z-\dfrac2z\right|\le r+\dfrac2r$$ $$\implies\left|r-\dfrac2r\right|\le1\le r+\dfrac2r$$ Now as $r>0,$ $$\dfrac{r+\dfrac2r}2\ge\sqrt{r\cdot\dfrac2r}=\sqrt2\iff r+\dfrac2r\ge2\sqrt2>1,$$ so the right-hand condition holds automatically. We therefore only need $$\left|r-\dfrac2r\right|\le1\iff-1\le r-\dfrac2r\le1$$ $$r-\dfrac2r\le1\iff r^2-r-2\le0$$ Now we know that $(x-a)(x-b)\le0$ with $a\le b$ holds exactly when $a\le x\le b$. Here $-1\le r\le2$, but $r>0\implies0<r\le2$. One can similarly check $-1\le r-\dfrac2r$, i.e. $r^2+r-2\ge0$, to find $r\ge1$. So the final range is $$1\le r\le2$$
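One can also see the range numerically: writing $z-\frac2z=e^{it}$, i.e. $z^2-e^{it}z-2=0$, and sampling $t$; a quick Python sketch:
import numpy as np

t = np.linspace(0, 2 * np.pi, 100000)
w = np.exp(1j * t)
d = np.sqrt(w**2 + 8)  # taking both roots below covers both branches
r = np.abs(np.concatenate([(w + d) / 2, (w - d) / 2]))
print(r.min(), r.max())  # approximately 1.0 and 2.0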
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that $n^n>1\times3\times5\times7\times \dots\times(2n-1)$ The main question is : Prove that $n^n>1\times3\times5\times7\times\dots\times(2n-1)$ My approach : We can write the R.H.S as, $$\frac{(2n-1)!}{2\times4\times6\times\dots\times2(n-1)}$$ We can write $(2n-1)!$ as $(2n-3)!(2n-1)2(n-1)$ Thus, in this manner we can cancel out even terms till $n$ appears. I'm having trouble doing this. What I intend to do is say that any positive number's factorial is less than the number raised to itself. I welcome any alternate method. Please help. P.S. I am still in high school and do not understand binomial theorem and induction.
HINT: Using the AM-GM inequality for $r,2n-r>0$, $$\dfrac{r+(2n-r)}2\ge\sqrt{r(2n-r)}$$ Set $r=1,3,5,\cdots, 2n-3,2n-1$ and multiply. Observe that equality cannot hold overall for $n\ge2$: $r=2n-r$ forces $r=n$, which occurs among these odd values only when $n$ is odd, and even then the remaining factors give strict inequality.
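Spelled out, multiplying the $n$ inequalities gives $$n^n=\prod_{\substack{1\le r\le 2n-1\\ r\ \text{odd}}}\frac{r+(2n-r)}{2}\ \ge\ \prod_{\substack{1\le r\le 2n-1\\ r\ \text{odd}}}\sqrt{r(2n-r)}=\sqrt{\Bigl(\prod_{r\ \text{odd}} r\Bigr)\Bigl(\prod_{r\ \text{odd}}(2n-r)\Bigr)}=1\times3\times5\times\cdots\times(2n-1),$$ since as $r$ runs over the odd numbers $1,3,\dots,2n-1$, so does $2n-r$.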
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Question about Column Space Matrix multiplication properties If two square matrices A, B have the same column space, will multiplication of these matrices with another matrix also have the same column space? I.e. if Col(A) = Col(B), does Col(AC) = Col(BC) for an arbitrary matrix C of the same dimensions? It seems to make sense to me, but I couldn't work out how to prove it. Thanks,
If the two matrices have the same column space, it means that the corresponding linear maps have the same image. However they may not necessarily have the same kernel. So a counter example to the statement would be if $C$ maps all the vector to the kernel of $A$ but not for $B$. E.g. take $$A=\begin{pmatrix} 0 & 1 \\ 0 &0 \end{pmatrix}$$ $$B=\begin{pmatrix} 1 & 0 \\ 0 &0 \end{pmatrix}$$ $$C=\begin{pmatrix} 1 & 0 \\ 0 &0 \end{pmatrix}$$ The map $A$ is just projection onto y axis and then rotate $90°$ clockwise, kernel of $A$ is any vector of the form $(x,0)$. $C$ is projection onto x axis, so it precisely maps every vector onto the kernel of $A$. As you can check $AC$ is the $0$ matrix, $BC=B$. They have different column space.
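In NumPy, for a concrete check of the counterexample:
import numpy as np

A = np.array([[0, 1], [0, 0]])
B = np.array([[1, 0], [0, 0]])
C = np.array([[1, 0], [0, 0]])
print(A @ C)  # zero matrix, so Col(AC) = {0}
print(B @ C)  # equals B, so Col(BC) = span{(1, 0)}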
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The conjugate of a sylow $p$-subgroup is a sylow $p$-subgroup Second Sylow theorem states that all Sylow $p$-subgroups are conjugate. But reviewing my proof it seems to me that we also prove that all the conjugates of a Sylow $p$-subgroup are Sylow $p$-subgroups. I can include the proof if needed but can anybody confirm me this idea? Or give a counterexample?
In general if $H$ is a subgroup of $G$ and $g\in G$, then we have $|gHg^{-1}|=|H|$, so it follows that the conjugates of a Sylow subgroup are also Sylow subgroups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1897971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Suppose we have two dice, one fair and one that shows $6$ with quintuple probability. Find the probability that a randomly chosen die, when thrown, shows $6$ Suppose we have two dice, one fair and one that shows $6$ with five times the probability of each of the other numbers. We pick a die at random and throw it. What is the probability of getting a $6$? In the same problem, if we know that we got a $6$, what is the probability that we threw the second die? Any ideas for these parts, especially the second one?
The probability of rolling 6 on the fair die is obviously $\frac{1}{6}$. Let $x$ denote the probability of rolling 6 on the non-fair die: * *Then $\frac{1}{5}x$ is the probability of rolling each one of the other $5$ values *Therefore $x+5\cdot\frac{1}{5}x=1$, therefore $2x=1$, therefore $x=\frac{1}{2}$ So the probability of rolling 6 on the non-fair die is $\frac{1}{2}$. What is the probability of rolling 6? Split it into disjoint events, and then add up their probabilities: * *The probability of choosing the fair die and then rolling 6 on that die is $\frac{1}{2}\cdot\frac{1}{6}=\frac{1}{12}$ *The probability of choosing the non-fair die and then rolling 6 on that die is $\frac12\cdot\frac{1}{2}=\frac{1}{4}$ So the probability of rolling 6 is $\frac{1}{12}+\frac{1}{4}=\frac{1}{3}$. If we know that we rolled 6, what is the probability that it was on the non-fair die? Use Bayes formula for conditional probability: * *Let $A$ denote the event of rolling the non-fair die *Let $B$ denote the event of rolling 6 So $P(A|B)=\frac{P(A\cap B)}{P(B)}=\frac{\frac{1}{4}}{\frac{1}{12}+\frac{1}{4}}=\frac{3}{4}$.
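A Monte Carlo sanity check in Python; the biased die is modelled with $P(6)=\frac12$ and $\frac1{10}$ for each other face, as computed above:
import random

trials, sixes, biased_sixes = 10**6, 0, 0
for _ in range(trials):
    biased = random.random() < 0.5  # pick one of the two dice uniformly
    if biased:
        roll = 6 if random.random() < 0.5 else random.randint(1, 5)
    else:
        roll = random.randint(1, 6)
    if roll == 6:
        sixes += 1
        biased_sixes += biased
print(sixes / trials, biased_sixes / sixes)  # about 1/3 and 3/4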
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A geometry problem based on circles. Question:- Consider three equal circles $S_1$, $S_2$, and $S_3$ each of which passes through a given point $H$. Other than that, $S_1$ and $S_2$ intersect at $A$, $S_2$ and $S_3$ intersect at $B$, $S_3$ and $S_1$ intersect at $C$. Show that $H$ is the orthocenter of triangle $ABC$. (Note:- I have assumed that $O_1$ is the centre of $S_1$, $O_2$ is the centre of $S_2$ and $O_3$ is the centre of $S_3$. Also, I have made an observation that quadrilaterals $O_1CO_3H$ and $O_3BO_2H$ are rhombuses. Is this fact going to help in the proof? )
Since $A$ and $H$ both lie on $S_1$ and $S_2$, the line $AH$ is the radical axis of these two circles, so $AH\perp O_1 O_2$; and since all the radii are equal, $O_1AO_2H$ is a rhombus, so the midpoint of $AH$ is also the midpoint of $O_1 O_2$. The same holds for $BH$ and $O_2O_3$, and for $CH$ and $O_3O_1$. In particular, $H$ is the circumcenter of $O_1 O_2 O_3$ (it lies at distance $r$ from each center), hence it is the orthocenter of the medial triangle $M_1M_2M_3$ of $O_1O_2O_3$. By the midpoint observation, the homothety with center $H$ and ratio $2$ maps $M_1M_2M_3$ onto $ABC$; since $ABC$ and $M_1 M_2 M_3$ are homothetic with respect to this dilation centered at $H$, $H$ is also the orthocenter of $ABC$. We also have that the circumradius of $M_1M_2M_3$ is at the same time half the circumradius of $ABC$ (by homothety) and half the circumradius of $O_1 O_2 O_3$, since $M_1M_2M_3$ is the medial triangle of $O_1 O_2 O_3$. It follows that the circumradius of $ABC$ has the same length as the radius of $S_1,S_2$ or $S_3$: this is known as Johnson's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Squaring Infinite Series Expansion Of e^x $Fact$: $$\lim\limits_{n \to \infty}\left(\frac{x^0}{0!}+\frac{x}{1!}+\frac{x^2}{2!}+\frac{x^3}{3!}+\dots+\frac{x^n}{n!}\right)=e^x$$ so $$\lim\limits_{n \to \infty}\left(\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\dots+\frac{1}{n!}\right)=e$$ also $$e^2=\lim\limits_{n \to \infty}\left(\frac{2^0}{0!}+\frac{2}{1!}+\frac{4}{2!}+\frac{8}{3!}+\dots+\frac{2^n}{n!}\right)$$ Can I safely say $$L.H.S=\left(\frac{1}{0!}+\frac{1}{1!}+\frac{1}{2!}+\frac{1}{3!}+\dots\right)^2=\frac{2^0}{0!}+\frac{2}{1!}+\frac{4}{2!}+\frac{8}{3!}+\dots=R.H.S\ ?$$ If yes, how does one prove L.H.S = R.H.S (without using the above fact)? It is not a problem from a textbook; I was just thinking about it.
Since $e^ne^m = e^{n+m}$ you can simply let $x = 2t$, giving you $$(e^{t})^2 = e^{2t} = 1 + 2t + \frac{4t^2}{2!}+\frac{8t^3}{3!} + \cdots$$ Now, let $t = 1$ and you have $$e^2 = 1 + 2 + \frac{4}{2!} + \frac{8}{3!} + \cdots = (1 + 1 + \frac{1}{2!} + \frac{1}{3!} + \cdots )^2 = (e^1)^2.$$
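If one prefers to multiply the series directly, the Cauchy product (valid here since the series converges absolutely) together with the binomial theorem gives the same result: $$\left(\sum_{n=0}^\infty \frac{1}{n!}\right)^{\!2}=\sum_{n=0}^\infty\sum_{k=0}^{n}\frac{1}{k!\,(n-k)!}=\sum_{n=0}^\infty\frac{1}{n!}\sum_{k=0}^{n}\binom{n}{k}=\sum_{n=0}^\infty\frac{2^n}{n!}.$$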
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Divergence-free vector field on a 2-sphere. What is the solution of the differential equation $$\text{div}X=0$$ on the 2-sphere?
$\def\div{\operatorname{div}}$Let $\nu$ be the volume form on the sphere. The map $X\mapsto\nu(X,\mathord-)$ gives a bijection from vector fields to $1$-forms and, in particular, if $h:S^2\to\mathbb R$ is a smooth function, there is a unique vector field $X_h$ such that $\nu(X_h,Y)=dh(Y)$ for all vector fields $Y$. Using Cartan's magic formula it is easy to see that $\def\L{\mathcal L}\L_{X_h}\nu$, the Lie derivative of $\nu$ in the direction of $X_h$, vanishes. Since $\L_{X_h}\nu=\div(X_h)\nu$ and $\nu$ vanishes nowhere, we see that $\div(X_h)=0$. In this way, we find a divergence-free vector on the sphere for each function $h$. In fact, they all arise in this way. This is a more complicated result —it depends on the simply-connectedness of the sphere and is a symplectic thing. Let's see: let $X$ be a vector field on the sphere with zero divergence, so that $\L_X\nu=0$. Using Cartan's formula, we see that the $1$-form $i_X\nu$ which we get by contracting $\nu$ with $X$ is closed. It therefore has a class in the degree $1$ de Rham cohomology vector space $[i_X\nu]\in H^1(S^2)$. As this vector space is zero, because the sphere is simply connected, $i_X\nu$ has to be exact, that is, there exists a function $h:S^2\to\mathbb R$ such that $i_X\nu=dh$. If you consider what this means, you see that $X$ is in fact equal to the vector field $X_h$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the second smallest integer such that its square's last two digits are $ 44 $ Given that the last two digits of $ 12^2 = 144 $ are $ 44, $ find the next integer that has this property. My approach is to solve the equation $ n^2 \equiv 44 \pmod{100}, $ but I do not know how to proceed to solve that equation. I tried a different path by letting $ n = 10x + y $ for some integers $ x, y, $ where $ 0 \le y \le9. $ Then $ n^2 \equiv 44 \; \pmod{100} $ can be reduced to $ 20xy + y^2 \equiv 44 \pmod{100}. $ At this point I let $ x $ run from $ 0, 1, 2, \dots $ and find the integer $ y \in \mathbb{Z}_{100} $ such that $ y^2 + 20xy - 44 = 0. $ My question is: is there an alternative way to tackle this problem without having to try each $ x $ and $ y? $ Maybe try to solve the initial congruence equation $ n^2 \equiv 44 \pmod{100}. $
If $x^2$ ends with $44$ then $x$ is even, so write $x=2y$. We are trying to solve $$(2y)^2\equiv 44\pmod{100}$$ and this equation is equivalent to $$y^2\equiv 11\pmod{25}$$ Since $6^2\equiv 11\pmod{25}$ this equation can be written as $$(y-6)(y+6)\equiv 0\pmod{25}$$ It is not possible that both $y-6$ and $y+6$ are multiples of $5$, so the solutions are $y\equiv\pm6\pmod{25}$. The first positive values for $y$ are $$6,19,31,44,56,69,81,94,106,\ldots$$ Multiply these numbers by $2$ to get the solutions for $x$: $$12,38,62,88,112,\ldots$$ In particular, the second smallest positive integer with this property is $x=38$, with $38^2=1444$.
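A one-line brute force in Python confirms the pattern:
print([x for x in range(1, 200) if x * x % 100 == 44])
# [12, 38, 62, 88, 112, 138, 162, 188]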
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Can one infer independence by simple reasoning/intuition? From my recent experience in probability, it feels as though independence is something we "discover" from the system via the equation: $$P(A)*P(B)=P(A\cap B)$$ Could one ever conclude independence from the "system" by intuition? Is it wise to conclude independence for events that are "seemingly" independent? What would be some interesting examples where this would fail.
The OP asks for an intuitive understanding of $\mathbb{P}(A)\mathbb{P}(B)=\mathbb{P}(A\cap B)$ for independent events $A$ and $B$. Here it is: Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space. Assume $\mathbb{P}(B) > 0$. Since $\mathbb{P}(\Omega)=1$, we can write the above equation as $\displaystyle\frac{\mathbb{P}(A)}{\mathbb{P}(\Omega)}=\frac{\mathbb{P}(A\cap B)}{\mathbb{P}(B)}.$ The left hand side is the "proportion of $A$" in $\Omega$, while the right hand side is the "proportion of $A$" that is in $B$. From the equation above, we can see that the probability of the happening of $A$ remains unchanged inside $B$ (if you zoom in on $B$ and treat it as a new probability space with $\mathbb{P}_B(B)=1$). Event $A$ will happen with probability $\mathbb{P}(A)$ independently of whether your space is $\Omega$ or $B$. This is the independence of $A$ and $B$. Imagine $\Omega$ as a unit disk, and $A$ occupies the left semi-circle, so that $\mathbb{P}(A)=\frac{1}{2}$. Now let $B$ be a concentric circle inside $\Omega$. Then $A$ has exactly the same pattern in $B$ as in $\Omega$, and $\mathbb{P}_B(A)=\frac{1}{2}$ inside $B$. So in this respect $B$ is somewhat "irrelevant" for $A$: $B$ "looks like" $\Omega$, so its happening or not goes unnoticed by $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Find large power of a non-diagonalisable matrix If $A = \begin{bmatrix}1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$, then find $A^{30}$. The problem here is that it has only two eigenvectors, $\begin{bmatrix}0\\1\\1\end{bmatrix}$ corresponding to eigenvalue $1$ and $\begin{bmatrix}0\\1\\-1\end{bmatrix}$ corresponding to eigenvalue $-1$. So, it is not diagonalizable. Is there any other way to compute the power?
Notice the characteristic polynomial of $A$ is $$\chi_A(\lambda) \stackrel{def}{=}\det(\lambda I_3 - A) = \lambda^3-\lambda^2-\lambda+1 = (\lambda^2-1)(\lambda-1)$$ By Cayley-Hamilton theorem, we have $$\chi_A(A) = (A^2 - I)(A-I) = 0 \quad\implies (A^2-I)^2 = (A^2-I)(A-I)(A+I) = 0$$ This means $A^2-I$ is nilpotent. In following binary expansion of $A^{30}$ $$A^{30} = (I + (A^2 - I))^{15} = \sum_{k=0}^{15} \binom{15}{k}(A^2-I)^k$$ only the term $k = 0$ and $1$ contributes. i.e. $$A^{30} = I + 15 (A^2-I) = \begin{bmatrix} 1 & 0 & 0\\ 15 & 1 & 0\\ 15 & 0 & 1 \end{bmatrix} $$
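A quick check with NumPy (exact, since the entries stay small integers):
import numpy as np

A = np.array([[1, 0, 0], [1, 0, 1], [0, 1, 0]])
print(np.linalg.matrix_power(A, 30))
# [[ 1  0  0]
#  [15  1  0]
#  [15  0  1]]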
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 1 }
Finding $a$ in quadratic equation $2x^2 - (a+1)x + (a-1)=0$ so that difference of two roots is equal to its product Given equation: $$2x^2 - (a+1)x + (a-1)=0$$ I have to find when the difference of two roots is equal to its product, i.e.: $$x_1x_2 = x_1 - x_2.$$ From Vieta's formulas we know that: $$x_1 + x_2 = \frac{a + 1}{2},$$ $$x_1x_2 = \frac{a-1}{2} = x_1 - x_2.$$ Then, solving system of equations: $$x_1 + x_2 = \frac{a + 1}{2}$$ $$x_1 - x_2 = \frac{a-1}{2}$$ we get that $x_1 = \frac{a}{2}$ and $x_2 = \frac{1}{2}$. Then, plugging it into equation $x_1x_2 = x_1 - x_2$ we get: $$\frac{a}{4} = \frac{a - 1}{2}$$ $$4a - 4 = 2a$$ $$2a = 4$$ $$a = 2.$$ Plugging $2$ into previous equation we get that $\frac{1}{2} = \frac{1}{2}$, so solution have to be true. Is my approach correct? If so, are there another ways to solve those kind of problems?
Using the formula $$\left(x_1-x_2\right)^2+4x_1x_2=\left(x_1+x_2\right)^2$$ makes it easy: with $x_1x_2=x_1-x_2=\frac{a-1}{2}$ and $x_1+x_2=\frac{a+1}{2}$, it becomes $\left(\frac{a-1}{2}\right)^2+4\cdot\frac{a-1}{2}=\left(\frac{a+1}{2}\right)^2$, i.e. $2(a-1)=\frac{(a+1)^2-(a-1)^2}{4}=a$, so $a=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
When is it true that $a^{n}<n!$? How can I find the smallest $n\in\mathbb{N}$ which makes the inequality true: $$a^{n}<n!$$ For example: If $a=2$ then $\longrightarrow 2^{n}<n!$ when $n\geq 4$ If $a=3$ then $\longrightarrow 3^{n}<n!$ when $n\geq 7$ If $a=4$ then $\longrightarrow 4^{n}<n!$ when $n\geq 9$ If $a=5$ then $\longrightarrow 5^{n}<n!$ when $n\geq 12$ $$a,n\in\mathbb{N},\qquad a^{n}<n!\quad\text{when}\quad n\geq\,?$$
I found an empirical formula: $$ a^{n}<n! \Longleftrightarrow n \geq 2a+\left\lfloor\frac{a-1}{2}\right\rfloor$$ It matches the examples above (and also $a=6,7,9$), but it is not correct in general: already for $a=8$ it gives $n\ge19$, yet $8^{19}>19!$ (the true threshold is $n\ge20$), and for large $a$ the true threshold grows like $e\,a\approx 2.72a$, faster than the formula's $\tfrac52 a$.
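A short Python loop tabulates the true threshold next to the formula's value:
from math import factorial

def threshold(a):  # smallest n with a**n < n!
    n = 1
    while a**n >= factorial(n):
        n += 1
    return n

for a in range(2, 13):
    print(a, threshold(a), 2 * a + (a - 1) // 2)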
{ "language": "en", "url": "https://math.stackexchange.com/questions/1898974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Historically key works on structural reliability theory? I want to understand better the structural reliability theory. Is it related to the reliability of structures? Where does it originate and what are the most important work for it?
This reference may be useful: Borgonovo E., Iooss B. Moment-independent and reliability-based importance measures. In: Ghanem R., Higdon D., Owhadi H., editors. Springer handbook on uncertainty quantification. Springer; 2017. p. 1265-87. The book is the Handbook of Uncertainty Quantification by Roger Ghanem, David Higdon and Houman Owhadi.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the difference of two irrationals which are each contained under a single square root irrational? Is $ x^\frac{1}{3} - y^\frac{1}{3}$ irrational, given that both $x$ and $y$ are not perfect cubes, are distinct and are integers (i.e. the two cube roots yield irrational values)? I understand that the sum/difference of two irrationals can be rational (see this thread: Is the sum and difference of two irrationals always irrational?). However, if my irrationals are contained under one root (so for example $3^\frac{1}{3}$ and not $2^\frac{1}{2} + 1$), can one generalise to show that $ x^\frac{1}{p} - y^\frac{1}{q} $ is irrational, where of course $x$ and $y$ are not perfect $p$-th and $q$-th powers respectively?
Denote the cube roots by $X,Y$, so that $X^3=x$ and $Y^3=y$ with $x,y\in \mathbb Z$. Suppose, with slightly greater generality, that we have $X-Y-R=0$ where $X^3,Y^3,R\in \mathbb Q$. We first want to argue that $XY\in \mathbb Q$. To do so, observe the identity: $$X^3-Y^3-R^3-3XYR=(X-Y-R)(X^2+Y^2+R^2+XY+XR-YR)$$ This immediately implies that $$3XYR=X^3-Y^3-R^3$$ which, since $R\neq 0$ (as $R=0$ would give $X=Y$, i.e. $x=y$), implies that $XY\in \mathbb Q$, as desired. But then we have two real numbers, $X,Y$ such that both $X-Y$ and $XY$ are rational. It follows that $X,-Y$ both satisfy a quadratic equation with rational coefficients, namely \begin{align} X^2 - (X-Y)X -XY &= 0,& Y^2 +(X-Y)Y -XY&=0. \end{align} But this is not possible, as $Z^3-x$ is the minimal polynomial of the cube root of $x$ over $\mathbb Q$ (of degree $3$), so $X$ cannot be a root of a monic rational quadratic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Number of arrangements of red, blue, and green balls in which a maximum of three balls of the same color are placed in the five slots From the picture above, I have five slots and a bag of balls of three colors: red, blue and green... Question: Now whenever I choose a ball, I note the colour and replace it in the bag; I keep choosing balls randomly (independent events) until all five slots are filled. How can I derive a formula to find all possible repetitive arrangements (exhaustive approach) with the condition that a MAXIMUM of THREE balls of the same color are in the five slots? Note: I have asked a similar question at Finding the Total number of permutation using a selective formula. In the accepted answer, there were five slots and five letters, and all arrangements with a maximum of three identical letters were found using a pattern: $3-1-1: \binom5{1,2,2}\binom5{3,1,1}$ $2-2-1: \binom5{2,1,2}\binom5{2,2,1}$ $2-1-1-1: \binom5{1,3,1}\binom5{2,1,1,1}$ $1-1-1-1-1:\binom55\binom5{1,1,1,1,1}$ Now in this question the number of balls and slots varies. I would like to build on the pattern from the answer above using multinomials (since I don't want to write a new algorithm again).
We will solve the problem: "how many words of length $5$ on $\{R,B,G\}$ are there in which no letter appears more than $3$ times". Easier to work backwards. Without the cap rule there are $3^5$ possible words. How many of these have exactly $4\;R's$? Well, there are $5$ places to put the non-$R$, and $2$ options for what letter it is. Hence $10$. As there are three letters, there are exactly $30$ words of length $5$ in which some letter appears exactly $4$ times. Now, of course there are exactly $3$ words in which a letter appears $5$ times. Hence there are $30+3=33$ words of length $5$ in which some letter appears more than $3$ times. It follows that there are $3^5-33=\fbox {210}$ words of length $5$ in which no letter appears more than $3$ times.
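Brute force agrees:
from itertools import product

count = sum(1 for w in product('RBG', repeat=5)
            if all(w.count(ch) <= 3 for ch in 'RBG'))
print(count)  # 210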
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Probability of combinations that share at least 3 items I have a question about probabilities that I can't get my head around. Suppose I would like to generate a combination of 8 elements, where each element is randomly drawn from a set of n alternatives. For example, the first element has possible alternatives A1, A2, ... A10, the second element has possible alternatives B1, B2, ... B10, etc. The number of possible alternatives for each element is the same, and the order of the elements is also always the same. For instance, three random examples could look like this: * *A5 B2 C6 D2 E7 F1 G2 H6 *A1 B3 C8 D9 E9 F2 G2 H7 *A2 B1 C5 D3 E2 F2 G2 H7 I would like to know two things: * *What is the probability that 2 randomly drawn combinations have the same alternatives for at least three elements? For example, examples (2) and (3) above share 3 alternatives (F2 G2 H7), but (1) and (2) do not (they share only G2). *If I draw 1,000,000 such examples, what proportion of these examples will share at least 3 elements with another example in the set, on average? Thanks a lot for your help!
This is binomial. The probability the $i^{th}$ character matches is $p=\dfrac{1}{n}$. Thus the probability of exactly $x$ matches in a character length of $k$ is $$P(X=x) = {k\choose x}\left(\dfrac{1}{n}\right)^x\left(1-\dfrac{1}{n}\right)^{k-x}$$ At least three matches would be $$P(X\ge 3) = P(X=3)+\cdots+P(X=k) = \sum_{x=3}^k {k\choose x}\left(\dfrac{1}{n}\right)^x\left(1-\dfrac{1}{n}\right)^{k-x}$$ Your second question is more involved. See the birthday problem and notice it is essentially the same, with "two people having the same birthday" playing the role of your "shares at least three elements".
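For the concrete numbers in the question (taking $k=8$ positions, as in the examples, and $n=10$ alternatives), a short Python computation:
from math import comb

def p_at_least(k, n, x0):
    p = 1 / n
    return sum(comb(k, x) * p**x * (1 - p)**(k - x) for x in range(x0, k + 1))

print(p_at_least(8, 10, 3))  # about 0.0381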
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An accountancy of the natural numbers I guess this formula is known since I believe it's true $$\sum_{k=2}^\infty\frac{(-1)^{1+\Omega(k)}}{k}=1$$ where $\Omega(k)$ is the number of prime factors of $k$ (not necessarily different primes). I can't find it and want a formal proof or a reference. My intuition about this sum is something like this: The probability of a number to have the prime factor $p$ is $\frac{1}{p}$ and to have the prime factors $p$ or $q$ is $\frac{1}{p}+\frac{1}{q}-\frac{1}{pq}$ etc. The probability of a number $>1$ to have a prime factor is $1$. Is this sum really not known? Hasn't anything been published about it before? I finally realize that the series $$\sum_{k=1}^\infty\frac{(-1)^{\Omega(k)}}{k}=0$$ has been known for a long time, and that solves my confusion.
We have, formally expanding into an Euler product, $$\sum_{k=2}^{\infty} \frac{(-1)^{1 + \Omega(k)}}{k} = 1 -\prod_p \left(1 - \frac{1}{p} + \frac{1}{p^2} - \cdots\right) = 1 -\prod_{p} \frac{1}{1 + 1/p}$$ The right-hand side equals $1$ since the product diverges to zero (because $\sum_p 1/p$ diverges). Note that the series is only conditionally convergent, so this rearrangement into an Euler product needs justification; it amounts to the classical fact $\sum_{k\ge1}(-1)^{\Omega(k)}/k=0$ mentioned in the question, which is a result of prime-number-theorem strength.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
PSD rank 1 matrix decomposition into product of two vectors Assume that I'm given a matrix $\textbf{Q} = \textbf{q}\textbf{q}^H$. Assuming I only have $\textbf{Q}$, is there any way for me to find the vector $\textbf{q}$? Thanks!
Let $x$ be such that $x^H Q x > 0$. Then $\dfrac{Qx}{\sqrt{x^H Q x}} = \dfrac{q^H x}{|q^Hx|} q = cq$ where $|c| = 1.$ Next note if $q \neq 0$ then for some $i$ we must have $ e_i^H Q e_i = |e_i^H q|^2 = |q_i|^2 > 0$, where $e_i$ is the $i^{\text{th}}$ canonical basis vector and $q=(q_1,\dots,q_n)^T$. Next note that $q$ can only be determined up to a constant $c$ with $|c| = 1$, since $(cq) (cq)^H = |c|^2 qq^H = qq^H$, so we can assume wlog $c = 1$ above. We can choose $q$ as $\dfrac{Qe_i}{\sqrt{e_i^H Q e_i}}$ for the $e_i$ for which $e_i^H Q e_i > 0.$ If no such $i$ exists we must have $Q =0$ and we can choose $q=0.$
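A NumPy sketch of the recovery (with a random sample vector; the recovery is up to a unimodular constant, as explained above):
import numpy as np

rng = np.random.default_rng(0)
q = rng.standard_normal(3) + 1j * rng.standard_normal(3)
Q = np.outer(q, q.conj())                        # Q = q q^H

i = int(np.argmax(np.abs(np.diag(Q))))           # an i with e_i^H Q e_i > 0
q_rec = Q[:, i] / np.sqrt(Q[i, i].real)
print(np.allclose(np.outer(q_rec, q_rec.conj()), Q))  # True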
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
ternary analogues of the Pell equations We know well about the Pell equations: $x^2 -ny^2=1$ and some variants of them. Criteria for the existence of nontrivial solutions of the homogeneous equations $ax^2+by^2+cz^2=0$ are also well-known. Then, how about the three-variable analogues of the Pell equations? I mean, the Diophantine equations of this type: $x^2+ay^2+bz^2=1$, where $a, b$ are integers. Is there an extensive survey article on them? Comment: maybe it is trivial... in that case, what about $x^2+ay^2+bz^2=n^2$? I guess these are quite nontrivial, according to some brute-force computations.
It turns out that Cassels does this material, pages 301-309. There is quite a big difference based on whether $x^2 - A y^2 - B z^2$ is isotropic or not, meaning there is an integer solution to $x^2 - A y^2 - B z^2=0$ with $x,y,z$ not all equal to zero. When the form is isotropic, see pages 301-303, especially the proof of Lemma 5.4 and the discussion on page 303. Anisotropic is harder and the sign of the target number matters; compare Theorem 6.2 on page 305 to Theorem 6.3 on page 306. Alright, went through the easiest example, I can see where his notation is a little different from what I expected, but he is consistent, that is what matters. In solving $x_1 x_3 - x_2^2 = 1,$ we have a single orbit, that being his $c = (1,0,1)$ from the paragraph between 5.19 and 5.20 on page 303. The result for $$ x^2 - y^2 - z^2 = 1 $$ is, with $$ \alpha \delta - \beta \gamma = 1 $$ and $$ \alpha + \beta + \gamma + \delta \equiv 0 \pmod 2, $$ $$ \left( \frac{\alpha^2 + \beta^2 + \gamma^2 + \delta^2}{2}, \; \; \frac{\alpha^2 - \beta^2 + \gamma^2 - \delta^2}{2}, \; \; \alpha \beta + \gamma \delta \right) $$ A gp session confirming the identity:
? p = ( a^2 + b^2 + c^2 + d^2 )^2 - ( a^2 - b^2 + c^2 - d^2 )^2 - (2 * a * b + 2 * c * d )^2
%7 = 4*d^2*a^2 - 8*d*c*b*a + 4*c^2*b^2
? q = 4 * ( a * d - b * c)^2
%8 = 4*d^2*a^2 - 8*d*c*b*a + 4*c^2*b^2
? p - q
%9 = 0
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Commuting matrix exponentials: necessary condition. In related questions (here, here and here), it was shown that if $A$ and $B$ commute, then $e^A$ and $e^B$ also commute (and incidentally $e^A e^B = e^{(A+B)}$). Here, the commutative property of A and B is a sufficient condition. Is it also a necessary condition? If no, is there another necessary condition for $e^A$ and $e^B$ to commute?
It is not a necessary condition. In particular, we can take $$ A = \pmatrix{2 \pi i & 0\\0&0}, \quad B = \pmatrix{0&1\\0&0} $$ It's clear that $e^A = I$ commutes with $e^B$, but $A$ does not commute with $B$.
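Verifying numerically with SciPy's matrix exponential:
import numpy as np
from scipy.linalg import expm

A = np.array([[2j * np.pi, 0], [0, 0]])
B = np.array([[0, 1], [0, 0]], dtype=complex)
eA, eB = expm(A), expm(B)
print(np.allclose(eA @ eB, eB @ eA))  # True: exp(A) and exp(B) commute
print(np.allclose(A @ B, B @ A))      # False: A and B do not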
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding of a question The cost of producing a math assessment book is made up of three main components: overhead, type-setting and printing. In $2009$, the overhead cost for an assessment book is $\$1200$, the cost of type-setting a page is $\$18$ and the cost of printing a book with $120$ pages is $\$1.45$. How much will it cost to produce an assessment book with $120$ pages with $5000$ copies printed? My thoughts - Overhead cost - $\$1200$. Then for "type-setting", why do I only multiply $\$18$ by the number of pages? Why can't I compute $18 \times 120 \times 5000$, i.e. also multiply by the number of copies? I think I have an understanding issue on what "type-setting" is. Can I get help? Thanks in advance!
In the past the term meant, literally, "setting type". That is, in printing newspapers (say) physical letters were positioned as desired in a block. Ink was then applied to the block and if paper was pressed on it, the desired image would pass to the paper. The expression "mind your $p's$ and $q's$" arises from this operation...the physical process results in a reflection so it was very easy for a typesetter to insert a $p$ where a $q$ was intended (or conversely). Thus, type-setting is, essentially, a one time thing. Once the block is set you can run off as many papers as you like. Granted, you have to reapply ink and such, but the hard part, the physical positioning of the letters, need only be done once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1899945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show $\lim\left ( 1+ \frac{1}{n} \right )^n = e$ if $e$ is defined by $\int_1^e \frac{1}{x} dx = 1$ I have managed to construct the following bound for $e$, which is defined as the unique positive number such that $\int_1^e \frac{dx}x = 1$. $$\left ( 1+\frac{1}{n} \right )^n \leq e \leq \left (\frac{n}{n-1} \right )^n$$ From here, there must surely be a way to deduce the well-known equality $$\lim_{n \rightarrow \infty} \left ( 1+ \frac{1}{n} \right )^n = e$$ I have come up with the following, but I am not absolutely certain if this is correct or not. PROPOSED SOLUTION: The lower bound is fine as it is, so we shall leave it alone. Note that $$\begin{align*} \left ( \frac{n}{n-1} \right )^n &= \left ( 1+\frac{1}{n-1} \right )^{n} \\ &= \left ( 1+\frac{1}{n-1} \right )^{n-1} \left ( 1+\frac{1}{n-1} \right ) \end{align*}$$ So using the fact that the limit distributes over multiplication, we have $$\lim_{n \rightarrow \infty} \left ( \frac{n}{n-1} \right )^n = \lim_{n \rightarrow \infty} \left ( 1+\frac{1}{n-1} \right )^{n-1} \lim_{n \rightarrow \infty} \left ( 1+\frac{1}{n-1} \right ) $$ Since $$\lim_{n \rightarrow \infty} \left ( 1+\frac{1}{n-1} \right ) = 1 $$ and $$\lim_{n \rightarrow \infty} \left ( 1+\frac{1}{n-1} \right )^{n-1} = \lim_{m \rightarrow \infty} \left ( 1+\frac{1}{m} \right )^m = e $$ We then have the required result $$\lim_{n \rightarrow \infty} \left ( 1+ \frac{1}{n} \right )^n = e$$
Let $a_n=\left(1+\frac1n\right)^n$. We need to show first that $\lim_{n\to \infty}a_n$ actually exists. From the OP, we see that $a_n$ is bounded above by the number $e$, which is defined as $1=\int_1^e \frac1t\,dt$. And in THIS ANSWER, I showed using Bernoulli's Inequality that $a_n$ is monotonically increasing. Inasmuch as $a_n$ is monotonically increasing and bounded above, then $\lim_{n\to \infty}a_n$ does indeed exist. Next, we see from the OP that $$a_n\le e\le a_{n-1}\left(1+\frac1{n-1}\right) \tag 1$$ Since we have established convergence of $a_n$, simply applying the squeeze theorem to $(1)$ yields the coveted equality $$\lim_{n\to \infty}a_n=e$$ And we are done! Note that we used $\lim_{n\to \infty}a_{n-1}=\lim_{n\to \infty}a_n$ along with $\lim_{n\to \infty}\left(1+\frac{1}{n-1}\right)=1$ on the right-hand side of $(1)$ in applying the squeeze theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 6, "answer_id": 2 }
Evaluate the integral $\int_0^\infty \frac{dx}{\sqrt{(x^3+a^3)(x^3+b^3)}}$ This integral looks a lot like an elliptic integral, but with cubes instead of squares: $$I(a,b)=\int_0^\infty \frac{dx}{\sqrt{(x^3+a^3)(x^3+b^3)}}$$ Let's consider $a,b>0$ for now. $$I(a,a)=\int_0^\infty \frac{dx}{x^3+a^3}=\frac{2 \pi}{3 \sqrt{3} a^2}$$ I obtained the general series solution the following way. Choose $a,b$ such that $a \geq b$, then: $$I(a,b)=\frac{1}{a^2} \int_0^\infty \frac{dt}{\sqrt{(t^3+1)(t^3+b^3/a^3)}}=\frac{1}{a^2} I \left(1, \frac{b}{a} \right)$$ $$\frac{b^3}{a^3}=p, \qquad I \left(1, \frac{b}{a} \right)=I_1(p)$$ $$I_1(p)=\int_0^\infty\frac{dt}{\sqrt{(t^3+1)(t^3+p)}}=2 \frac{d}{dp} J(p)$$ $$J(p)=\int_0^\infty\sqrt{\frac{t^3+p}{t^3+1}}dt=\int_0^\infty\sqrt{1+\frac{p-1}{t^3+1}}dt$$ and, for $|p-1| \leq 1$, $$J(p)=\sum_{k=0}^\infty \binom{1/2}{k} (p-1)^k \int_0^\infty \frac{dt}{(t^3+1)^k}$$ Now this is the most problematic part. The first integral of this series diverges. However, it's a constant in $p$, so if we differentiate, it formally disappears: $$I_1(p)=2 \sum_{k=1}^\infty \binom{1/2}{k} k (p-1)^{k-1} \int_0^\infty \frac{dt}{(t^3+1)^k}$$ Now, every integral in this series converges. The integrals can be computed using the Beta function, if we substitute: $$t^3=\frac{1}{u}-1$$ Finally, we rewrite: $$I_1(p)=\frac{\Gamma (1/3)}{3 \sqrt{\pi}} \sum_{k=1}^\infty \frac{k^2}{k!^2} \Gamma \left(k- \frac{1}{2}\right) \Gamma \left(k- \frac{1}{3}\right) (1-p)^{k-1}$$ Or, using the Pochhammer symbol: $$I_1(p)=\frac{2 \pi}{3 \sqrt{3}} \sum_{k=0}^\infty \frac{(k+1)^2}{(k+1)!^2} \left(\frac{1}{2}\right)_k \left(\frac{2}{3}\right)_k (1-p)^k$$ My questions are: Is the method I used valid (see the 'problematic part')? How to get this series into a Hypergeometric function form? Is there any 'arithmetic-geometric mean'-like transformation (Landen's transformation) for this integral? How to go about finding it? If the method I used is correct, it can be used for any integral of the form ($m \geq 2$): $$I_m(a,b)=\int_0^\infty \frac{dx}{\sqrt{(x^m+a^m)(x^m+b^m)}}$$
More generally, with $|p-1|<1$, some experimentation shows that, $$\int_0^\infty \frac{dt}{\sqrt{(t^m+1)(t^m+p)}} = \pi\,\frac{\,_2F_1\big(\tfrac12,\tfrac{m-1}{m};1;1-p\big)}{m\sin\big(\tfrac{\pi}{m}\big)}$$ where the question was just the case $m=3$.
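A numerical check with mpmath for the question's case $m=3$ (with $p=0.4$ as an arbitrary sample in $|p-1|<1$):
import mpmath as mp

m, p = 3, mp.mpf('0.4')
lhs = mp.quad(lambda t: 1 / mp.sqrt((t**m + 1) * (t**m + p)), [0, mp.inf])
rhs = mp.pi * mp.hyp2f1(mp.mpf(1) / 2, mp.mpf(m - 1) / m, 1, 1 - p) \
      / (m * mp.sin(mp.pi / m))
print(lhs, rhs)  # agree to working precision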
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Weak Law of Large Numbers -Special Case For independent and identically distributed random variables $X_1, X_2, \cdots$ with finite mean and variance, by the weak Law of Large Numbers, $\frac{1}{N}\sum_{i=1}^{N} X_i$ converges in probability to $\mathbb{E}[X_i]$: $$\frac{1}{N}\sum_{i=1}^{N} X_i \xrightarrow{p}\mathbb{E}[X_i]$$ If $f(N)$ is a continuous function such that $\lim_{N \to \infty} f(N)=c \in \mathbb{R}^{+}$, then is the following true? $$\frac{f(N)}{N}\sum_{i=1}^{N} X_i \xrightarrow{p}c \mathbb{E}[X_i]$$ We know that $\lim\limits_{N \to \infty} \mathbb{P}\left(|\frac{1}{N}\sum_{i=1}^{N} X_i - \mathbb{E}[X_i]|>\epsilon\right)=0$ and thus $$\lim\limits_{N \to \infty} \mathbb{P}\left(|\frac{f(N)}{N}\sum_{i=1}^{N} X_i - f(N)\mathbb{E}[X_i]|>\epsilon f(N)\right)=0$$ How can we continue form here?
We can assume that $|f(N)|\le M$ for all $N$ and a certain $M>0$. Moreover, there exists $N_0$ such that, for $N>N_0$, $|f(N)-c|\,|E(X_1)|< \varepsilon/2$ (this holds trivially if $E(X_1)=0$); since $c>0$, we may also assume that $f(N)>0$ for $N>N_0$. Since \begin{align*} \Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - cE(X_1) \Big| \le \Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - f(N)E(X_1) \Big| + \big|f(N)-c\big|\, |E(X_1)|, \end{align*} we have, for $N>N_0$, \begin{align*} \Big|\frac{1}{N}\sum_{i=1}^N X_i - E(X_1) \Big| &\ge \frac{\Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - cE(X_1) \Big| - \big|f(N)-c\big|\,|E(X_1)|}{f(N)}\\ &\ge \frac{\Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - cE(X_1) \Big| - \frac{\varepsilon}{2}}{M}. \end{align*} Then, for $N> N_0$, \begin{align*} \left\{\omega: \Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - cE(X_1) \Big|\ge \varepsilon\right\} \subset \left\{\omega: \Big|\frac{1}{N}\sum_{i=1}^N X_i - E(X_1) \Big|\ge \frac{\varepsilon}{2M}\right\}. \end{align*} That is, \begin{align*} \lim_{N\rightarrow\infty}\mathbb{P}\left(\Big|\frac{f(N)}{N}\sum_{i=1}^N X_i - cE(X_1) \Big|\ge \varepsilon\right) = 0. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
pseudo-identities which are not exact but the error is very small I would like to know more examples of pseudo-identities: expressions that are not equal but where the error is tiny, for example $$ \pi ^{4} +\pi ^{5} =e^{6} $$ where the error term is about $ 10^{-5} $. Where can I see more of these amazing pseudo-identities? :D Thanks
There are many examples given here at MSE. One of my favourites is that $$ e^{\pi \sqrt{163}}=262 537 412 640 768 743.99999999999925 $$ is very close to an integer, see here. This has some serious number theoretical background, as is explained, and generalised, here.
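With mpmath one can see the near-integer to full precision:
import mpmath as mp

mp.mp.dps = 40
print(mp.exp(mp.pi * mp.sqrt(163)))
# 262537412640768743.99999999999925007259...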
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Multi-index sum property Exercise 1.2.3.29 in Donald Knuth's The Art of Computer Programming (3e) states the following property of a multi-indexed sum: $$ \sum_{i=0}^n \sum_{j=0}^i \sum_{k=0}^j a_ia_ja_k = \frac{1}{3}S_3 + \frac{1}{2}S_1S_2 + \frac{1}{6}S_1^3, $$ where $S_r = \sum_{i=0}^n a_i^r$. I tried to prove it and failed. What I found are the following identities: $$ \sum_{i=0}^n \sum_{j=0}^i \sum_{k=0}^j a_ia_ja_k = \frac{1}{2} \sum_{i=0}^n a_i \left( \left( \sum_{j=0}^i a_j \right)^2 + \sum_{j=0}^i a_j^2 \right), $$ $$ S_1S_2 = \sum_{i=0}^n \sum_{j=0}^n a_i a_j^2, $$ $$ S_1^3 = \sum_{i=0}^n \sum_{j=0}^n \sum_{k=0}^n a_i a_j a_k. $$ Can anybody help in completing the proof? Based on grand_chat's answer, the identity can be proven inductively. $$ \begin{align} \sum_{i=0}^{n+1} \sum_{j=0}^i \sum_{k=0}^j a_ia_ja_k & = \sum_{i=0}^n \sum_{j=0}^i \sum_{k=0}^j a_ia_ja_k + a_{n+1} \sum_{i=0}^{n+1} \sum_{j=0}^i a_ia_j \\ & \stackrel{hyp}{=} \underbrace{\frac{1}{3}S_{n,3} + \frac{1}{2}S_{n,1}S_{n,2} + \frac{1}{6}S_{n,1}^3}_{=:S_n} + \frac{1}{2} a_{n+1} \left( \left( \sum_{i=0}^{n+1} a_i \right)^2 + \sum_{i=0}^{n+1} a_i^2 \right) \\ & = S_n + \frac{1}{2} a_{n+1} \left( \left( \sum_{i=0}^n a_i + a_{n+1} \right)^2 + \sum_{i=0}^n a_i^2 + a_{n+1}^2 \right) \\ & = S_n + \frac{1}{2} a_{n+1} \left( \left( \sum_{i=0}^n a_i \right)^2 + 2 \sum_{i=0}^n a_i a_{n+1} + a_{n+1}^2 + \sum_{i=0}^n a_i^2 + a_{n+1}^2 \right) \\ & = S_n + a_{n+1}^3 + \sum_{i=0}^n a_i a_{n+1}^2 + \frac{1}{2} \left( \sum_{i=0}^n a_i \right)^2 a_{n+1} + \frac{1}{2} \sum_{i=0}^n a_i^2 a_{n+1} \\ & = S_n + a_{n+1}^3 \left( \frac{1}{3} + \frac{1}{2} + \frac{1}{6} \right) + \sum_{i=0}^n a_i a_{n+1}^2 \left( \frac{1}{2} + \frac{3}{6} \right) + \frac{3}{6} \left( \sum_{i=0}^n a_i \right)^2 a_{n+1} + \frac{1}{2} \sum_{i=0}^n a_i^2 a_{n+1} \\ & = \frac{1}{3} \left( S_{n,3} + a_{n+1}^3 \right) + \frac{1}{2} \left( S_{n,1}S_{n,2} + \sum_{i=0}^n a_i a_{n+1}^2 + \sum_{i=0}^n a_i^2 a_{n+1} + a_{n+1}^3 \right) + \frac{1}{6} \left( S_{n,1}^3 + 3 \left( \sum_{i=0}^n a_i \right)^2 a_{n+1} + 3 \sum_{i=0}^n a_i a_{n+1}^2 + a_{n+1}^3 \right) \\ & = \frac{1}{3}S_{n+1,3} + \frac{1}{2}S_{n+1,1}S_{n+1,2} + \frac{1}{6}S_{n+1,1}^3. \end{align} $$ The problem here is that the hypothesis is taken from thin air.
For future reference this is the sum over all multisets of size three chosen from the variables $A_0$ to $A_n$ and evaluated at $a_q.$ Therefore by the Polya Enumeration Theorem it is given by $$\left.Z(S_3)\left(\sum_{q=0}^n A_q\right)\right|_{A_q=a_q}$$ where $Z(S_3)$ is the cycle index of the symmetric group $S_3.$ Now the permutations are $(1)(2)(3)$, $(12)(3)$, $(13)(2)$, $(23)(1)$, $(123)$, and $(132).$ Therefore the cycle index is given by $$Z(S_3)= \frac{1}{6} (s_1^3 + 3 s_1 s_2 + 2 s_3) = \frac{1}{6} s_1^3 + \frac{1}{2} s_1 s_2 + \frac{1}{3} s_3.$$ This gives for the sum $$ \left.\frac{1}{6} s_1^3 + \frac{1}{2} s_1 s_2 + \frac{1}{3} s_3 \right|_{s_r=S_r} = \frac{1}{6} S_1^3 + \frac{1}{2} S_1 S_2 + \frac{1}{3} S_3$$ where we have used the standard cycle index substitution $$s_r = \left.\sum_{q=0}^n A_q^r\right|_{A_q=a_q} = S_r.$$
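A direct numerical check of the identity on random data (a minimal sketch in plain Python):

```python
import random

n = 7
a = [random.uniform(-1.0, 1.0) for _ in range(n + 1)]   # a_0, ..., a_n

lhs = sum(a[i] * a[j] * a[k]
          for i in range(n + 1)
          for j in range(i + 1)
          for k in range(j + 1))
S = lambda r: sum(x**r for x in a)
rhs = S(3) / 3 + S(1) * S(2) / 2 + S(1)**3 / 6
print(lhs, rhs)   # equal up to floating-point rounding
```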
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Vector Addition in Euclidean Space I am reading Rudin's Principles of Mathematical Analysis, and I came across something in the proof that I don't quite understand. Let $x$ and $z$ be vectors in $\mathbb R^k$ for some $k \geq 3$ and $r >0 $ is a real number. Suppose $| z - x | = r$. Then this means that $z = x + ru$ for some unit vector $u$. I am struggling to see why this is true. Can someone help me clarify this?
Hint: $$ z = x + (z-x) . $$ Then normalize $z-x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find two constants $c_1, c_2$ to make the inequality hold true for all $N$ Show that the following inequality holds for all integers $N\geq 1$: $\left|\sum_{n=1}^N\frac{1}{\sqrt{n}}-2\sqrt{N}-c_1\right|\leq\frac{c_2}{\sqrt{N}}$ where $c_1,c_2$ are some constants. It's obvious that by dividing by $\sqrt{N}$, we can get $\left|\frac{1}{N}\sum_{n=1}^N \frac{1}{\sqrt{n/N}} -2 - \frac{c_1}{\sqrt{N}}\right|\leq \frac{c_2}{N}$ The first two terms on the LHS form a Riemann sum of $\int_{0}^1\frac{1}{\sqrt{x}}dx$. But what is the next step? I have not seen a complete answer to this problem on this site or via Google, so please give me some hints.
As Hans Engler commented, the problem is not entirely elementary. Let us check the pieces of the expression $$\frac 1 N \sum_{n=1}^N \frac 1 {\sqrt {\frac n N}}=\frac 1 {\sqrt { N}}\sum_{n=1}^N \frac 1 {\sqrt { n}}=\frac{1}{\sqrt{N}}H_N^{\left(\frac{1}{2}\right)}$$ where the generalized harmonic numbers appear. So the left-hand side can be written as $$\frac{-c_1+H_N^{\left(\frac{1}{2}\right)}-2 \sqrt{N}}{\sqrt{N}}$$ For large values of $N$, using the asymptotics of harmonic numbers (this is an alternating series), this can be written as $$ \left(\zeta \left(\frac{1}{2}\right)-c_1\right)\sqrt{\frac{1}{N}}+\frac{1}{2 N}-\frac{1}{24 N^2}+O\left(\frac{1}{N^{5/2}}\right)$$ Comparing with the right-hand side, just as Hans Engler commented, this seems to lead to $$c_1=\zeta \left(\frac{1}{2}\right)\qquad , \qquad c_2=\frac 12$$ Checking for small values of $N$, this seems to work properly for all $N$.
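The conjectured constants check out numerically (a sketch assuming mpmath, whose `zeta` accepts the argument $1/2$):

```python
from mpmath import mp, zeta, sqrt, mpf

mp.dps = 30
c1 = zeta(mpf(1) / 2)                                 # about -1.4603545088...
for N in (1, 2, 10, 1000):
    s = sum(1 / sqrt(mpf(n)) for n in range(1, N + 1))
    print(N, abs(s - 2 * sqrt(N) - c1) * sqrt(N))     # stays below c2 = 1/2
```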
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Marginal density function based on conditional density example could you please help me to solve this training example for my exam? I need to find the marginal density $f_X (x)$ based on knowing the conditional density $f_{X|Y}$ and the marginal density $f_{Y}$. $$ f_{X\mid Y} (x\mid y) = \frac{2x}{y^2-1}~\mathbf 1_{1 \le x \le y \le 2} $$ $$ f_{Y} (y) = \frac{4}{9}y(y^2 - 1)~\mathbf 1_{1 \le y \le 2} $$ I am very insecure at statistics so please correct me when I am wrong: based on this formula: $$ f_{X,Y}(x,y) = f_Y(y) \cdot f_{X\mid Y} (x\mid y) $$ I got this: $$ \frac{4}{9}y(y^2 - 1)\frac{2x}{y^2 - 1} = \frac{8}{9}xy $$ and then I computed $f_X (x)$ by inserting limits into $f(x,y)$: $$ f_X(x) = \int_x^{\pi/2} f(x,y)~dy = \int_x^{\pi/2} \frac{8}{9}xy~dy = \frac{1}{9}x(\pi - 4x^2) $$ My friend says it is probably wrong but he is not able to help. Is there a logical mistake or a numerical one? Could someone assist, please? Also, if you could help me to understand the logic behind it, I would appreciate it.
You were doing well, except that the upper limit $\pi/2$ comes from nowhere. It should be $$ f_X(x) = \int_x^{2} f_{X,Y}(x,y)~dy = \int_x^{2} \frac{8}{9}xy~dy = \frac{16 x}{9}-\frac{4 x^3}{9}\ , $$ for $1\leq x\leq 2$. As a check, you observe that the marginal is correctly normalized: $$ \int_1^2 dx\ f_X(x)=1\ . $$ To be completely precise, when you write the joint pdf of $x$ and $y$, you should also specify the domain over which $x$ and $y$ are running, so $$ f_{X,Y}(x,y)=\frac{8}{9}xy\ , $$ for $1\leq x\leq y\leq 2$. This means that the joint pdf above is normalized to $1$ over the region defined as follows: $$ \int_1^2 dx\int_x^2 f_{X,Y}(x,y)dy=1\ . $$
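The same computation can be done symbolically (a sketch assuming SymPy):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f_joint = sp.Rational(8, 9) * x * y           # valid on 1 <= x <= y <= 2
f_X = sp.integrate(f_joint, (y, x, 2))        # marginal of X for 1 <= x <= 2
print(sp.expand(f_X))                         # 16*x/9 - 4*x**3/9
print(sp.integrate(f_X, (x, 1, 2)))           # 1, so f_X is a valid density
```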
{ "language": "en", "url": "https://math.stackexchange.com/questions/1900951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the solution for $y'=(2-y)(x^2+2y)$ when $y(0)=3$ has the property $y(x) \in (2,3]$ * *Without solving the equation, show that for the solution of $y'=(2-y)(x^2+2y), y(0)=3$, there exists no $x\in \mathbb{R}$ such that $y(x)=2$. *Without solving the equation, show that the solution $y$ of this problem has the property that for all $x \in \mathbb{R}$, $2<y(x) \leq 3$ I tried to compare the solution of this problem, $y_0(x)$, with the solution of the same problem with $y(0)=2$, $y_1(x)$, and looked at: $y_0(x)-y_1(x)=3-2+\int_0^x((2-y_0(t))(t^2+2y_0(t))-(2-y_1(t))(t^2+2y_1(t)))dt=$ $1+\int_0^x(f(t,y_0(t))-f(t,y_1(t)))dt$, where $f(x,y)=(2-y)(x^2+2y)$. Earlier I was able to prove that $f$ has the Lipschitz property on every closed rectangle $D\subseteq \mathbb{R}^2$, and used this to show that $|y_0 (x)-y_1(x)|\leq 1+L\int_0^x|y_0(t)-y_1(t)|dt$ on a rectangle $D$, where $L$ is the Lipschitz constant. I then assumed, aiming for a contradiction, that there exists some $x_0 \in \mathbb{R}$ such that $y_0(x_0)=2$, in order to say that in a rectangle $D$ around $(x_0,2)$, with an appropriate Lipschitz constant $L$, we have that $|2-y_1(x_0)|\leq 1+L\int_0^{x_0}|y_0(t)-y_1(t)|dt \leq 1$. From here I didn't have a direction. I also thought of using the fact that if there exists $x_0$ such that $y(x_0)=2$ when $y$ solves $y'=(2-y)(x^2+2y), y(0)=3$, then $y'(x_0)=0$, but this didn't yield results either. I'd like to know what type of direction could be useful for this problem.
First of all, we will show that if $y$ is a solution with $y(0)=3$ then there is no $c\in \mathbb{R}$ such that $y(c)=2.$ Assume that there exists one. Then the Cauchy problem $y'=(2-y)(x^2+2y), y(c)=2,$ has two different solutions: the solution with $y(0)=3$ and the constant solution $y(x)\equiv 2.$ This contradicts the uniqueness of the solution. Now, since $y(x)>2, \forall x,$ it follows from $y'=(2-y)(x^2+2y)$ (where $x^2+2y>0$) that $y'<0.$ So, $y$ is decreasing. Since $y(0)=3$ we have that $y(x)<3, \forall x\in (0,\infty).$ Note that we cannot say that $y(x)\le 3,\forall x\in\mathbb{R}.$ Since $y(0)=3$ and $y'(x)<0$ we have that $y(x)>3,\forall x\in (-\infty,0).$
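A numerical integration of the IVP illustrates both statements (a sketch assuming SciPy; of course, this is no substitute for the uniqueness argument above):

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x, y: (2 - y) * (x**2 + 2 * y)      # right-hand side of the ODE
sol = solve_ivp(f, (0, 10), [3.0], dense_output=True, rtol=1e-10, atol=1e-12)
print(sol.sol(np.linspace(0, 10, 6))[0])       # decreases from 3 toward 2, never reaching 2
```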
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the value of $\tan A + \tan B$, given values of $\frac{\sin (A)}{\sin (B)}$ and $\frac{\cos (A)}{\cos (B)}$ Given $$\frac{\sin (A)}{\sin (B)} = \frac{\sqrt{3}}{2}$$ $$\frac{\cos (A)}{\cos (B)} = \frac{\sqrt{5}}{3}$$ Find $\tan A + \tan B$. Approach Dividing the equations, we get the relation between $\tan A$ and $\tan B$, but that doesn't help in getting the value of $\tan A + \tan B$. The value comes in terms of $\tan A$ or $\tan B$, but the expected answer is independent of any variable. Also $$\frac{\sin(A)\cdot\cos(B) + \sin(B)\cdot\cos(A)}{\cos(A)\cdot\cos(B)} = \tan(A) + \tan(B)$$ We could get a value only if instead of $\cos A$ there was $\sin B$ in the relation (which we get on adding the ratios).
Although there is something wrong with this question, there is a way which I think may be a little bit easier for solving this kind of problem. $$\tan A + \tan B = \frac{\sin(A)\cdot\cos(B) + \sin(B)\cdot\cos(A)}{\cos(A)\cdot\cos(B)} = \frac{{\sin A \over \sin B}+ {\cos A \over \cos B}}{\cos A \over \sin B} $$ Then, to get the value of $\cos A \over \sin B $: $$ {\sin A \over \sin B }= {\sqrt{3} \over 2} \Rightarrow {(1-\cos^2 A) \over \sin^2 B} = {3 \over 4} \Rightarrow {1 \over \cos^2A} = 1+{{3\cdot \sin^2 B} \over 4\cdot \cos^2A }$$ $${\cos A \over \cos B }= {\sqrt{5} \over 3} \Rightarrow {\cos^2A \over (1-\sin^2 B)} = {5 \over 9} \Rightarrow {1 \over \cos^2A} = {9 \over 5}+{{\sin^2 B} \over \cos^2A }$$ then $$ {\sin^2B \over \cos^2A }={-16 \over 5} $$ which is impossible. But if the question were consistent, you could get the value of $\cos A \over \sin B$ this way, and then get the answer with the first equation.
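The inconsistency can also be pinned down with two lines of linear algebra: writing $s=\sin^2 B$ and $c=\cos^2 B$, the two ratios force $\frac34 s+\frac59 c=\sin^2A+\cos^2A=1$ together with $s+c=1$, and this linear system has no solution in $[0,1]$. A sketch of the check, assuming SymPy:

```python
import sympy as sp

s, c = sp.symbols('s c')          # s = sin(B)**2, c = cos(B)**2
eqs = [sp.Rational(3, 4) * s + sp.Rational(5, 9) * c - 1,   # sin(A)^2 + cos(A)^2 = 1
       s + c - 1]
print(sp.solve(eqs, [s, c]))      # {s: 16/7, c: -9/7}: impossible for real angles
```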
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Orthosymplectic Lie Superalgebra I am trying to work out a presentation for the orthosymplectic Lie superalgebra $\mathfrak{osp}(m,2n)$. I am following Musson's book "Lie Superalgebras and Enveloping Algebras". From what I understand, we can take as the underlying $\mathbb{Z}_2$-graded vector space $k^m\oplus k^{2n}$, ($k$ is a characteristic $0$ algebraically closed field, say), and consider the bilinear form $J=\begin{bmatrix}G & 0\\0 & H\end{bmatrix}$, where $G$ is $m\times m$ symmetric nonsingular and $H$ is $2n\times 2n$ skew-symmetric nonsingular. Then $\mathfrak{osp}(m,2n)$ is the collection $X=\begin{bmatrix}A & B\\C & D\end{bmatrix}\in\mathfrak{gl}(m,2n)$ such that $X^TJ+JX=0$, where $X^T=\begin{bmatrix}A^t & -C^t\\B^t & D^t\end{bmatrix}$ is the supertranspose of $X$, and the lower case $t$ gives the usual transpose, e.g. $A^t$ is just the transpose of the matrix $A$. So we want $$ X^TJ+JX=\begin{bmatrix}A^t & -C^t\\B^t & D^t\end{bmatrix}\begin{bmatrix}G & 0\\0 & H\end{bmatrix}+\begin{bmatrix}G & 0\\0 & H\end{bmatrix}\begin{bmatrix}A & B\\C & D\end{bmatrix}=0 $$ When I write this out, I get the equations $A^tG+GA=0$, i.e. $A\in\mathfrak{o}(m)$, $D^tH+HD=0$, i.e. $D\in\mathfrak{sp}(2n)$. These two equations seem correct. Finally we get $B^tG+HC=0$ which is equivalent to the other equation $GB-C^tH=0$ upon taking the transpose. However in the book, I've seen two places where the resulting equations have $B^tG-HC=0$ instead of $B^tG+HC=0$, and these equations are used for writing a more explicit presentation for $\mathfrak{osp}(m,2n)$ so I want to understand what I'm doing incorrectly here. Can someone help? ANY advice/hints/help would be greatly appreciated!
I think you might have some misunderstanding relating to supertransposition. Here's the definition of supertransposition on wiki: https://en.wikipedia.org/wiki/Supermatrix#Supertranspose. As you see, the supertransposition is related to the parity of the supermatrix itself, so this question might not be that easy now. Hope this will help.
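To make this concrete (assuming the convention on the linked page, where the supertranspose of an even supermatrix is $X^{st}=\begin{bmatrix}A^t & C^t\\ -B^t & D^t\end{bmatrix}$, note the placement of the sign): with this convention, $$X^{st}J+JX=\begin{bmatrix}A^tG+GA & C^tH+GB\\ -B^tG+HC & D^tH+HD\end{bmatrix}=0,$$ so the lower-left block gives $B^tG-HC=0$, exactly the relation used in the book. The discrepancy in the question comes from placing the minus sign on the $C^t$ block instead of the $B^t$ block.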
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
how to integrate the beta-like $\int_{0}^{1} \frac{y^{n-1}(1-y)^{m-n-1}}{1-xy}dy $ How should I find a closed form of $\displaystyle \int_{0}^{1} \frac{y^{n-1}(1-y)^{m-n-1}}{1-xy}dy $? Any simple methods?
Assuming there are no convergence issues due to the values of $x$, $m$ and $n$, $$\begin{eqnarray*} \int_{0}^{1}\frac{y^{n-1}(1-y)^{m-n-1}}{1-xy}\,dy &=& \sum_{k\geq 0}x^k \int_{0}^{1}y^{n+k-1}(1-y)^{m-n-1}\,dy\\&=&\sum_{k\geq 0} x^k \frac{\Gamma(n+k)\Gamma(m-n)}{\Gamma(m+k)}\\&=&B(m-n,n)\cdot\phantom{}_2 F_1(1,n;m;x). \end{eqnarray*}$$
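A numerical sanity check of this closed form (a sketch assuming SciPy; parameters chosen so the integral converges):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

def check(n, m, x):
    f = lambda y: y**(n - 1) * (1 - y)**(m - n - 1) / (1 - x * y)
    print(quad(f, 0.0, 1.0)[0], beta(m - n, n) * hyp2f1(1.0, n, m, x))

check(2.0, 5.0, 0.5)     # the two printed numbers agree
check(1.5, 4.0, -0.3)
```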
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exclusive Or - Logic Find a formula, using only the connectives conjunction, disjunction, and negation, that is equivalent to $P$ exclusive-or $Q$. I cannot figure out a way to come up with a logically equivalent statement without randomly guessing.
You want $p$ to be true or $q$ to be true, but not both at the same time, thus all of the formulas below would give you the correct result: * *"$p$ is true or $q$ is true, and it's not the case that both $p$ and $q$ are true" $(p \lor q) \land \neg (p \land q)\\$ *"$p$ is true and $q$ is false, or $p$ is false and $q$ is true1" $(p \land \neg q) \lor (\neg p \land q)\\$ *(explaining @Jon Mark Perry's first formula) "it is neither the case that both $p$ and $q$ are true nor that both $p$ and $q$ are false2" $\neg ((p \land q) \lor (\neg p \land \neg q))\\$ *(explaining Jon Mark Perry's second one) "$p$ is true or $q$ is true, but either $p$ or $q$ must be false3" $(p \lor q) \land (\neg p \lor \neg q)\\$ Of course, there is an infinite number of logically equivalent statements (e.g. $(a \land a)$ would trivially be equivalent to just saying $(a)$; now replace $(a)$ by one of the inclusive-or-formulas and you get another one), but the ones listed above should be the most intuitive. 1 By the inclusive logical or $\lor$ used here, the statement would also be true if both sides of the formula were true, but that will never happen anyway, since $p \land \neg p$ is a contradiction. 2 $\neg(a \lor b)$ ("not $a$ or $b$") reads as "neither $a$ nor $b$". 3 Again, the inclusive or in the second part of the statement would also allow for both $p$ and $q$ to be false, but that possibility is ruled out by the first part of the conjunction.
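All four formulas can be checked exhaustively against XOR with a four-row truth table (a minimal sketch in Python, where `!=` on booleans is exclusive or):

```python
for p in (False, True):
    for q in (False, True):
        f1 = (p or q) and not (p and q)
        f2 = (p and not q) or (not p and q)
        f3 = not ((p and q) or (not p and not q))
        f4 = (p or q) and ((not p) or (not q))
        assert f1 == f2 == f3 == f4 == (p != q)
print("all four formulas agree with p XOR q")
```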
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Example of non-isomorphic, Morita-equivalent semisimple Hopf-algebras In the paper http://arxiv.org/abs/1509.01548, section 1.3, I found the following definitions: Two fusion categories $\mathcal{C}$ and $\mathcal{D}$ are Morita-equivalent if there exists an indecomposable $\mathcal{C}$-module category $\mathcal{M}$ such that $\mathcal{D}$ is tensor equivalent to $End_{\mathcal{C}}(\mathcal{M})$ or equivalently if their centers are equivalent as braided tensor categories. Two semisimple Hopf-algebras $H$ and $K$ are Morita-equivalent if Rep$K$ is Morita-equivalent to Rep$H$, with Rep$H$ being the representation category of $H$. While it is obvious that isomorphic semisimple Hopf-algebras are also Morita-equivalent, I can't think of any example of non-isomorphic, Morita-equivalent semisimple Hopf-algebras. Does anyone know an (easy) example? Extra question: Why are both definitions restricted to fusion categories respectively semisimple Hopf-algebras? Thanks for your help!
A few weeks ago, a user posted a good hint for the solution, but deleted his answer after only one day (I couldn't even award him with that 50 bounty he deserved). I did the calculations and the following is a good example for my question: Let $K$ be a field of characteristic zero and $G$ be any finite non-abelian group and consider the Hopf-algebras $K[G]$ (group algebra over $G$) and its dual $K^G$. Then $K[G]$ is not commutative, but co-commutative and $K^G$ is commutative, but not co-commutative. Therefore these two Hopf-algebras cannot be isomorphic. The group algebra $K[G]$ is semisimple (Maschke-theorem) and obviously co-semisimple, so $K^G$ is also semisimple. One then shows with a few calculations that $K[G]$ and $K^G$ are indeed Morita-equivalent, but that is a bit too long for posting this here. I also have found out that the definition of Morita-equivalence is restricted to semisimple Hopf-algebras to make sure $RepH$ is a fusion category. And in terms of categories, Morita-equivalence is restricted to fusion categories to make sure the given conditions are equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How do I calculate the derivative of $\text{det}$? Define $\text{det}:M(n)\rightarrow \mathbb{R}$ as the map that sends $A$ to its determinant. This map is clearly smooth and I want to calculate its differential at $I$ (the identity matrix). I did this for a very special case. Suppose that every eigenvalue of $A$ is real, and call them $\lambda_1,\dots, \lambda_n$; now let's calculate the derivative. $d(\text{det})_I(A)=\lim_{s\rightarrow 0}\displaystyle\frac{\text{det}(I+sA)-\text{det}(I)}{s}=\lim_{s\rightarrow 0}\displaystyle\frac{(1+s\lambda_1)\cdots(1+s\lambda_n)-1}{s}=\lambda_1+\cdots+\lambda_n$ Now this calculation does not hold if $A$ has some complex eigenvalue. How can I prove that $d(\text{det})_I(A)=\text{tr}(A)$ in general?
The determinant of an $n$-by-$n$ matrix $A=(a_{i,j})$ is $$\sum_{\sigma\in S_n}\text{sgn}(\sigma)\prod_{i=1}^{n} a_{i,\sigma(i)},$$ where $S_n$ is the set of permutations of $\{1,\ldots, n\}$, and $\text{sgn}$ is the signature of a permutation. For fixed $A = (a_{i,j})$, $\det(\text{id}+tA)$ is a polynomial in $t$. Summands corresponding to non-identity permutations in the formula for the determinant yield terms of order at least $t^2$ in $\det(\text{id}+tA)$, so they don't contribute to $\frac{d}{dt}|_{t=0}\det(I+tA)$. Thus, $$ d(det)_{I}(A)=\frac{d}{dt}|_{t=0}\det(I+tA) = \frac{d}{dt}|_{t=0}\left[\prod_{i=1}^n(1+ta_{i,i})\right]= \sum_{i=1}^n a_{i,i} = \text{tr}(A). $$ So OP's argument works even if $A$ is not diagonal.
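A quick finite-difference check of the result (a sketch assuming NumPy; a random real matrix may well have complex eigenvalues, so this also illustrates that no spectral assumption is needed):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
t = 1e-7
deriv = (np.linalg.det(np.eye(4) + t * A) - 1.0) / t
print(deriv, np.trace(A))    # the finite difference agrees with the trace
```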
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proof that Rel is a Category The Awodey book about Category Theory gives this definition for Rel: The objects of $\text{Rel}$ are sets, and an arrow $A → B$ is a relation from $A$ to $B$, that is, a subset $R ⊆ A×B$. The equality relation $ \{ \langle a, a\rangle ∈ A×A\;|\; a ∈ A\}$ is the identity arrow on a set $A$. Composition in $\text{Rel}$ is to be given by: $$ S ◦ R = \{\langle a, c \rangle ∈ A × C \; |\; ∃b \;( \langle a, b\rangle ∈ R \; \text{ and }\; \langle b, c\rangle ∈ S) \}$$ for $R ⊆ A × B$ and $S ⊆ B × C$. How can I actually prove that Rel is a category? I'm not really into categories yet, but this seems quite obvious, since I know a category must have objects (the sets, in this case), arrows (relations), identity arrows (the equality relation), and composition of arrows (here, composition of relations).
What you've quoted is a definition of all the data needed to have a category: objects, morphisms, identity morphisms, and composition operation. To verify you have a category you then just have to check that this data satisfies the axioms for a category: that the identity morphisms are actually identities for the composition operation, and that composition is associative.
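For instance, the key axiom checks unwind as follows (a routine verification, spelled out here): for $R\subseteq A\times B$, $S\subseteq B\times C$, $T\subseteq C\times D$, both $T\circ(S\circ R)$ and $(T\circ S)\circ R$ equal $\{\langle a,d\rangle \mid \exists b\,\exists c\,(\langle a,b\rangle\in R \text{ and } \langle b,c\rangle\in S \text{ and } \langle c,d\rangle\in T)\}$, so composition is associative; and composing with the equality relation on $A$ or $B$ visibly returns $R$ itself, so the identity arrows behave as required.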
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
For $R=I\oplus J$, prove that $I= Re$ and $J= Rf$ where $e$ and $f$ are two idempotent elements Given that $R$ is a commutative unitary ring and $I$ and $J$ are two ideals of $R$ such that $R=I\oplus J$ (as an internal direct sum), how do you show that there exists two idempotent elements $e$ and $f$ in $R$ such that $I= Re$ and $J= Rf$?
Yeah, I'll type out all the details. Since $R$ is the internal direct sum of $I$ and $J$, this means that $$ R = I + J = \{i+j \mid i\in I,\; j\in J\} \quad\text{and}\quad IJ = I \cap J =\{0\} \,.$$ Since $R = I + J$, there must be some aptly named $e \in I$ and $f \in J$ such that $e+f=1$. Note that $f = (1-e)$, then since $ef \in IJ = \{0\}$ we have $ef = e(1-e) = e-e^2=0\,,$ so $e$ is idempotent. Then you can verify that $f = (1-e)$ is idempotent directly. Now you should suspect that $I=Re$ and $J = R(1-e),$ so you can write down the correspondence to confirm your suspicion. Since every element $r \in R$ can we written uniquely as a element of $I$ plus and element of $J$, we just need to find such elements using the fact that $e \in I$ and $(1-e) \in J$: $$\begin{align*} R &\longleftrightarrow I \oplus J \\r &\longleftrightarrow re + r(1-e) \\R &\longleftrightarrow Re \oplus R(1-e)\,. \end{align*}$$
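For a concrete instance (my own illustration, not from the original argument): take $R=\mathbb{Z}/6\mathbb{Z}$ with $I=\{0,3\}$ and $J=\{0,2,4\}$. Then $R=I\oplus J$, and decomposing $1=3+4$ gives the idempotents $e=3$ and $f=4$ (indeed $3^2=9=3$ and $4^2=16=4$ in $\mathbb{Z}/6\mathbb{Z}$), with $I=R\cdot 3$ and $J=R\cdot 4$.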
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A limit of a recursion involving the prime number function Suppose $A_1=1$, $B_1=2$ and $\left\{ \begin{array}{l} A_{k+1}=A_k\cdot (p_{k+1}-1)+B_k\\ B_{k+1}=B_k\cdot p_{k+1} \end{array} \right. $ where $p_k$ is the $k$-th prime number. I would like a formal proof that $\displaystyle\lim_{n\to\infty}\frac{A_n}{B_n}=1$. I've checked it computationally, but am unsure about the proof.
Let $r_k=\frac{A_k}{B_k}$. We have $B_k=p_1\cdots p_k$ so $$r_{k+1}-1=(r_k-1)\cdot\left(1-\frac1{p_{k+1}}\right).$$ Since $r_1-1=-\frac12=-\left(1-\frac1{p_1}\right)$, it follows that $$r_n=1-\left(1-\frac1{p_1}\right)\cdots\left(1-\frac1{p_n}\right)$$ and it suffices to note that $\prod_p\left(1-\frac1p\right)=0$. In general, when dealing with a linear recursion (with possibly non-constant coefficients) it is worth homogenizing it, that is, getting rid of the constant term. In this case however I discovered it in the following (deeper) way: From $$r_{k+1}=r_k\cdot\left(1-\frac1{p_{k+1}}\right)+\frac1{p_{k+1}}$$ one may note that $$r_k=1+\sum_{\substack{p\mid n\implies p\leq p_k\\ n\text{ even}}}\frac{\mu(n)}n.$$ Because $\displaystyle\sum_{\substack{p\mid n\implies p\leq p_k\\ n\text{ odd}}}\!\!\!\tfrac{\mu(n)}n=-2\sum_{\substack{p\mid n\implies p\leq p_k\\ n\text{ even}}}\!\!\!\tfrac{\mu(n)}n$ one sees that $$1-r_k=\sum_{p\mid n\implies p\leq p_k}\frac{\mu(n)}n=\left(1-\frac1{p_1}\right)\cdots\left(1-\frac1{p_k}\right).$$
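The product formula is easy to verify in exact arithmetic (a sketch assuming SymPy):

```python
from sympy import prime, Rational

A, B = Rational(1), Rational(2)          # A_1, B_1
P = 1 - Rational(1, 2)                   # prod_{i<=k} (1 - 1/p_i), starting at k = 1
for k in range(1, 10):
    assert A / B == 1 - P                # r_k = 1 - prod_{i<=k}(1 - 1/p_i)
    p = prime(k + 1)
    A, B = A * (p - 1) + B, B * p        # the recursion from the question
    P *= 1 - Rational(1, p)
print("formula verified; r_k -> 1 because the product tends to 0")
```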
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there a mistake in wikipedia article on interior? In this wikipedia article about the interior, in the section on the interior operator, it is written that $S^{\circ}=X\setminus(X\setminus\bar{S})$, which can't be true since $X\setminus(X\setminus\bar{S})=\bar{S}$. I think a correct definition would be $S^{\circ}=X\setminus(\overline{X\setminus{S}})$. Am I right?
Yes, you are right and Wikipedia is wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1901956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I factorize a polynomial $ax^2 + bx + c$ where $a \neq 1$ How do I factorize a polynomial $ax^2 + bx + c$ where $a \neq 1$? E.g. I know that $6x^2 + 5x + 1$ will factor to $(3x + 1)(2x + 1)$, but is there a recipe or an algorithm that will allow me to factorize this?
If you know how to factor polynomials for $a=1$, then simply writing $$ax^2+bx+c=a(x^2+\frac bax+\frac ca )$$ makes the task immediate.
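Worked out for the example in the question: $$6x^2+5x+1 = 6\left(x^2+\tfrac56 x+\tfrac16\right) = 6\left(x+\tfrac12\right)\left(x+\tfrac13\right) = (2x+1)(3x+1),$$ where the monic quadratic is factored as usual (its roots are $-\tfrac12$ and $-\tfrac13$) and the leading factor $6=2\cdot3$ is then split to clear the denominators.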
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Inverse of the sum $\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$ $k\in\mathbb{N}$ The inverse of the sum $$b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$$ is obviously $$a_k=\sum\limits_{j=1}^k \binom{k-1}{j-1}\frac{b_j}{k^j}.$$ How can one prove it (in a clear manner)? Thanks in advance. Background of the question: It’s $$\sum\limits_{k=1}^\infty \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\sum\limits_{k=1}^\infty \frac{a_k}{k}$$ with $\,\displaystyle b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k}a_j $. Note: A special case is $\displaystyle a_k:=\frac{1}{k^n}$ with $n\in\mathbb{N}$ and therefore $\,\displaystyle b_k=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k-n}$ (see Stirling numbers of the second kind) $$\sum\limits_{k=1}^n \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\zeta(n+1)$$ and the inverse equation can be found in A formula for $\int\limits_0^\infty (\frac{x}{e^x-1})^n dx$ .
In this proof, the binomial identity $$\binom{m}{n}\,\binom{n}{s}=\binom{m}{s}\,\binom{m-s}{n-s}$$ for all integers $m,n,s$ with $0\leq s\leq n\leq m$ is used frequently, without being specifically mentioned. A particular case of importance is when $s=1$, where it is given by $$n\,\binom{m}{n}=m\,\binom{m-1}{n-1}\,.$$ First, rewrite $$b_k=k\,\sum_{j=1}^{k}\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$ Then, $$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{k}{l^k}\,\sum_{j=1}^k\,(-1)^{k-j}\,\binom{k-1}{j-1}\,j^{k-1}\,a_j\,.$$ Thus, $$\begin{align} \sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}&=\sum_{j=1}^l\,\frac{a_j}{j}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-1}{k-1}\,\binom{k-1}{j-1}\,k\left(\frac{j}{l}\right)^k \\ &=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\sum_{k=j}^l\,(-1)^{k-j}\,\binom{l-j}{k-j}\,k\left(\frac{j}{l}\right)^k\,. \end{align}$$ Let $r:=k-j$. We have $$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\sum_{j=1}^l\,\frac{a_j}{j}\,\binom{l-1}{j-1}\,\left(\frac{j}{l}\right)^j\,\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}\,.\tag{*}$$ Now, if $j=l$, then $$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=l\,.$$ If $j<l$, then $$\begin{align} \sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,r\,\left(\frac{j}{l}\right)^{r}&=-(l-j)\left(\frac{j}{l}\right)\,\sum_{r=1}^{l-j}\,(-1)^{r-1}\,\binom{l-j-1}{r-1}\,\left(\frac{j}{l}\right)^{r-1} \\&=-j\left(1-\frac{j}{l}\right)\,\left(1-\frac{j}{l}\right)^{l-j-1}=-j\left(1-\frac{j}{l}\right)^{l-j} \end{align}$$ and $$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,j\left(\frac{j}{l}\right)^r=j\left(1-\frac{j}{l}\right)^{l-j}\,.$$ Consequently, $$\sum_{r=0}^{l-j}\,(-1)^r\,\binom{l-j}{r}\,(r+j)\,\left(\frac{j}{l}\right)^{r}=\begin{cases} 0\,,&\text{if }j<l\,,\\ l\,,&\text{if }j=l\,. \end{cases}$$ From (*), $$\sum_{k=1}^l\,\binom{l-1}{k-1}\,\frac{b_k}{l^k}=\frac{a_l}{l}\,\binom{l-1}{l-1}\,\left(\frac{l}{l}\right)^l\,l=a_l\,.$$
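The inversion pair can also be confirmed by a round-trip computation in exact arithmetic (a sketch assuming SymPy; the choice of the $a_j$ is arbitrary):

```python
from sympy import Rational, binomial

K = 6
a = [Rational(j, j + 2) for j in range(1, K + 1)]        # a_1, ..., a_K

b = [sum((-1)**(k - j) * binomial(k, j) * j**k * a[j - 1]
         for j in range(1, k + 1)) for k in range(1, K + 1)]

a_back = [sum(binomial(k - 1, j - 1) * b[j - 1] / Rational(k)**j
              for j in range(1, k + 1)) for k in range(1, K + 1)]

assert a_back == a
print("b_k -> a_k inversion verified for k = 1..6")
```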
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
On calculations from $\zeta(3)=\frac{2}{\pi}\sum_{n=1}^\infty\int_0^\infty \frac{\sin ((n+1)x)\sin (nx)}{(xn^2)^2}dx$ I was inspired by a formula that I found on the Internet, page 5 of Jameson's notes about Frullani integrals, to ask Wolfram Alpha for this integral $$\int_0^\infty\frac{\sin ax\sin bx}{x^2}dx=\frac{\pi}{4} \left( \left| a+b \right|- \left| a-b \right|\right), $$ which I presume is well known. Then I did the specialisation $a=n+1$, $b=n$ for a fixed integer $n\geq 1$, and after multiplying by $\frac{1}{n^4}$ one has, if there are no mistakes, $$\zeta(3)=\frac{2}{\pi}\sum_{n=1}^\infty\int_0^\infty \frac{\sin ((n+1)x)\sin (nx)}{(xn^2)^2}dx.$$ My goal is to learn more mathematics, to encourage myself to study more. Question. Is it possible to do more interesting calculations, with nice mathematical content, to deduce some identity from the previous approach/identity? Thanks in advance. My attempt: I know that it is possible to ask a CAS for the series $\sum_{n=1}^\infty\frac{\sin ((n+1)x)\sin (nx)}{n^4}$, but I don't understand very well what the result means. Also I know that I can make the change of variable $y=xn^2$ in the previous integral to get it as $$\int_0^\infty \frac{\sin (\frac{n+1}{n^2}y)\sin (\frac{y}{n})}{(yn)^2}dy.$$ But I don't know whether such a change of variables will be useful either.
$\newcommand{\angles}[1]{\left\langle\,{#1}\,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,\mathrm{Li}_{#1}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\ol}[1]{\overline{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\color{#f00}{{2 \over \pi}\sum_{n = 1}^{\infty} \int_{0}^{\infty}{\sin\pars{\bracks{n + 1}x}\sin\pars{nx} \over \pars{xn^{2}}^{2}}\,\dd x} \\[5mm] = &\ {2 \over \pi}\sum_{n = 1}^{\infty}{1 \over n^{4}}\int_{0}^{\infty} {\sin\pars{\bracks{n + 1}x} \over x}\,{\sin\pars{nx} \over x}\,\dd x \label{1}\tag{1} \end{align} Albeit the integration is an elementary one, it's useful to know that David Borwein and Jonathan Borwein set the following identity: $$ \int_{0}^{\infty}\prod_{k = 0}^{n}{\sin\pars{a_{k}x} \over x}\,\dd x = {\pi \over 2}\prod_{k = 1}^{n}a_{k}\,,\qquad a_{k} \in \mathbb{R}\,,k = 0,1,\ldots,n\,,\quad a_{0} \geq \sum_{k = 1}^{n}\verts{a_{k}} $$ With this identity, \eqref{1} becomes: $$ \color{#f00}{{2 \over \pi}\sum_{n = 1}^{\infty} \int_{0}^{\infty}{\sin\pars{\bracks{n + 1}x}\sin\pars{nx} \over \pars{xn^{2}}^{2}}\,\dd x} = {2 \over \pi}\sum_{n = 1}^{\infty}{1 \over n^{4}}\pars{{\pi \over 2}\,n} = \sum_{n = 1}^{\infty}{1 \over n^{3}} = \color{#f00}{\zeta\pars{3}} $$ By the way; unfortunately, Jonathan Borwein died this month ( $02$-aug-$2016$ ).
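The building block $\int_0^\infty \frac{\sin((n+1)x)\sin(nx)}{x^2}\,dx=\frac{\pi n}{2}$ can be spot-checked numerically (a sketch assuming SciPy; the integrand is bounded by $x^{-2}$, so the tail beyond the cutoff $X=400$ contributes at most $1/X$, enough for roughly three digits):

```python
import numpy as np
from scipy.integrate import quad

for n in (1, 2, 3):
    f = lambda x: np.sin((n + 1) * x) * np.sin(n * x) / x**2
    val = quad(f, 0.0, 400.0, limit=4000)[0]
    print(n, val, np.pi * n / 2)   # agree to about three decimal places
```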
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
If $G$ is compact and $K \subset G$ is closed in $G$, must $G/K$ also be compact? If $G$ is compact and $K \subset G$ is closed in $G$, must $G/K$ also be compact? Here $G$ is, for example, a topological group.
Yes, it is compact if it is endowed with the quotient topology, since it is the image of the compact space $G$ under the (continuous) quotient map $G\rightarrow G/K$, and the image of a compact space under a continuous map is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If eigenvectors of a matrix are orthogonal, does that imply anything about the matrix (normal, hermitian, etc)? Normally, content related to my question proves the converse. For example, if a matrix is hermitian, then its eigenvectors corresponding to different eigenvalues are orthogonal. If, however, a square matrix (over $\mathbb{R}$ or over $\mathbb{C}$, considered separately) has an orthonormal set of eigenvectors (that is, the matrix of eigenvectors is orthogonal or unitary), can we infer something about the matrix?
Complex case: If $M[e_1 ... e_n] = [e_1 ... e_n] \mbox{ diag } (\lambda_1 ... \lambda_n)$, or $MP = P\Lambda$, then $M=P\Lambda P^{-1} = P\Lambda P^*$, since the columns of $P$ are orthonormal and hence $P$ is unitary. This is Hermitian iff $\Lambda$ is real (i.e. only real eigenvalues). In the real case, if you assume that there are $n$ real orthonormal eigenvectors, then implicitly you have that all eigenvalues are real, so the same argument shows that $M$ is automatically symmetric in this case.
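One can also run the argument "forwards" numerically: build $M=P\Lambda P^*$ from a random unitary $P$ and a real diagonal $\Lambda$, and confirm that $M$ is Hermitian (a sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.stats import unitary_group

P = unitary_group.rvs(4, random_state=0)                     # orthonormal eigenvectors
L = np.diag(np.random.default_rng(0).standard_normal(4))     # real eigenvalues
M = P @ L @ P.conj().T
print(np.allclose(M, M.conj().T))                            # True: M is Hermitian
```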
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A set with measure $0$ has a translate containing no rational number. Suppose $E$ is a set with measure $0$. Show there exists $t\in \mathbb{R}$ such that $E+t$ contains no rational number. My idea is to find an interval in $E$, then we can get a contradiction. I try to begin with a point in $E$ and then consider if there is an interval containing this point in $E$. But I don't know how to start. Maybe, we can go by contradiction. If $E+t$ contains a rational number $q_t$ for every $t\in \Bbb R$, then we have a function $f:\mathbb{R}\to\mathbb{Q}$, $t\mapsto q_t$. But this idea leads nowhere.
You can also show that $Z:=\bigcup\limits_{q\in\mathbb{Q}}\,(q-E)$ has measure $0$. For a real number $t\notin Z$, can $t+E$ intersect the rationals?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
Birthday Problem: Why isn't the probability 253/365 Consider a set of $23$ unrelated people. Because each pair of people shares the same birthday with probability $1/365$, and there are $\binom{23}2 = 253$ pairs, why isn’t the probability that at least two people have the same birthday equal to $253/365$?
Let $A$ be the event that some two people have the same birthday. For $i < j$, let $A_{i,j}$ be the event that persons $i$ and $j$ have the same birthday. Then, $\text{Pr}(A_{i,j}) = \frac{1}{365}$, and your calculation is essentially that $$ \sum_{1 \le i <j \le 23} \text{Pr}(A_{i,j}) = \sum_{i,j} \frac{1}{365} = \frac{\binom{23}{2} }{365} = \frac{253}{365}. $$ But unfortunately, $\text{Pr}(A) \ne \sum_{1 \le i <j \le 23} \text{Pr}(A_{i,j})$, because even though $A = \bigcup_{i,j} A_{i,j}$, the events $A_{i,j}$ are NOT disjoint: it could be that multiple pairs of people have the same birthday. On the other hand, the total number of pairs sharing a birthday is $1$ birthday for each $A_{i,j}$ that occurs; therefore the expected number of pairs sharing a birthday is exactly what you have calculated: $\sum_{1 \le i <j \le 23} \text{Pr}(A_{i,j}) = \frac{253}{365}$.
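The gap between the two numbers is easy to compute exactly (plain Python; `math.comb` requires Python 3.8+):

```python
from fractions import Fraction
from math import comb

p_none = Fraction(1)
for k in range(23):
    p_none *= Fraction(365 - k, 365)      # no collision among the first k+1 people
print(float(1 - p_none))                  # ~0.5073: Pr(at least one shared birthday)
print(comb(23, 2) / 365)                  # ~0.6932: E[number of pairs sharing]
```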
{ "language": "en", "url": "https://math.stackexchange.com/questions/1902897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Product of roots of $ax^2 + (a+3)x + a-3 = 0$ when these are positive integers There is only one real value of $a$ for which the quadratic equation $$ax^2 + (a+3)x + a-3 = 0$$ has two positive integral solutions. The product of these two solutions is: Since the solutions are positive, the product of the roots and the sum of the roots must both be positive. This gives us two inequalities in $a$. Substituting values of $a$ into the quadratic, I'm not getting my answer correct. The two inequalities will be $$\frac{a-3}{a} > 0$$ and the other one will be $$\frac {a+3}{a} < 0$$ From the first one we get $a>3$ or $a<0$, and from the second one we get $-3<a<0$; together these give $-3<a<0$. I don't know how to proceed after this. Kindly help.
We need $$\frac{a+3}{a}\in\Bbb{Z}\ , \frac{a-3}{a}\in\Bbb{Z}$$ or $$1+\frac{3}{a}\in\Bbb{Z}\ , \ 1-\frac{3}{a}\in\Bbb{Z}$$ thus $\displaystyle \frac{3}{a}\in\Bbb{Z}$, which means that $\displaystyle a=\frac{3}{m}$ where $m\in\Bbb{Z}$. Now we can write the equation as $$\frac{3}{m}x^2+\left(\frac{3}{m}+3\right)x+\frac{3}{m}-3=0\implies x^2+(m+1)x+1-m=0$$ Using the quadratic formula we have $$x_{1,2}=\frac{-m-1\pm\sqrt{m^2+6m-3}}{2}=\frac{-m-1\pm\sqrt{(m+3)^2-12}}{2}$$ We need the expression under the square root to be a perfect square. There are only two perfect squares with difference $12$ (namely $4$ and $16$), hence we want $(m+3)^2=16\implies m=1\text{ or }m=-7$, thus $$a=3 \text{ or } a=-\frac{3}{7}$$ Since the roots must be positive we need $-3<a<0$ (from the sign conditions in the question; indeed $a=3$ gives roots $0$ and $-2$), so $$a=-\frac{3}{7}$$
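As a final check: with $a=-\tfrac37$ the equation becomes $-\tfrac37x^2+\tfrac{18}{7}x-\tfrac{24}{7}=0$, i.e. $x^2-6x+8=0$, whose roots are $2$ and $4$: two positive integers, with product $8$.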
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is $ (2+\sqrt{3})^n+(2-\sqrt{3})^n$ an integer? Answers to the limit $\lim_{n\to ∞}\sin(\pi(2+\sqrt3)^n)$ start by saying that $ (2+\sqrt{3})^n+(2-\sqrt{3})^n $ is an integer, but how can one see that this is true? Update: I was hoping there is something beyond the binomial formula for cases like $ (a+\sqrt[m]{b})^n+(a-\sqrt[m]{b})^n $ being an integer
Consider the matrix $$ A = \begin{bmatrix} 2 & 3\\ 1 & 2 \end{bmatrix} $$ Clearly $ A^n $ is a matrix with integer entries, therefore the trace $ \operatorname{tr} A^n $ is an integer. On the other hand, $ A $ is a diagonalizable matrix whose eigenvalues are $ \lambda_1 = 2 + \sqrt{3} $, $ \lambda_2 = 2 - \sqrt{3} $; therefore the eigenvalues of $ A^n $ are $ \lambda_1^n $ and $ \lambda_2^n $. Since the trace of a matrix is the sum of its eigenvalues, we conclude that $ \operatorname{tr} A^n = \lambda_1^n + \lambda_2^n $, and the quantity on the right hand side is an integer.
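The same conclusion in recurrence form (a standard companion argument): $2\pm\sqrt3$ are the roots of $x^2-4x+1=0$, so $s_n=(2+\sqrt3)^n+(2-\sqrt3)^n$ satisfies $$s_{n+1}=4s_n-s_{n-1},\qquad s_0=2,\ s_1=4,$$ and hence every $s_n$ is an integer by induction. The same scheme works whenever $a+\sqrt b$ and $a-\sqrt b$ are the roots of a monic integer quadratic, which covers the square-root case of the update; for higher roots $\sqrt[m]{b}$ with $m>2$ one would need to sum over all $m$ conjugates, not just two.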
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 10, "answer_id": 7 }
How to find the linear transformation associated with a given matrix? Good day, I have a little doubt: It is well known that given two bases (or even one, if we consider the canonical basis) of a vector space, every linear transformation $T:V \rightarrow W$ can be represented as a matrix, and this gives an isomorphism between $L(V,W)$ and $\mathbb{M}_{m\times n}$, where the latter denotes the space of $m\times n$ matrices over the same field as the vector spaces. That's where my question comes up: I know how to find the matrix associated with a linear transformation, but not how to go from the matrix back to the transformation, i.e. given any matrix, find the linear transformation that it defines. I wish you could please explain the theoretical process and show a practical example. Thank you very much, I know it's probably something silly, but I'm still a student.
Suppose that you have an $m\times n$ matrix $A$. Choose a basis $B$ of $V$ and another one $B'$ of $W$. The linear transformation associated with $A$ relative to the bases $B$ and $B'$ is $T(v) = Av$, where $v$ is to be written as a column whose entries are the coefficients of $v$ in the basis $B$, and the resulting column $T(v)$ has entries which are the coefficients of $T(v)$ in the basis $B'$. If you choose other bases, you get different linear transformations.
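A concrete example (my own, using the canonical bases): the matrix $A=\begin{bmatrix}1&2\\3&4\end{bmatrix}$ defines the linear transformation $T:\mathbb{R}^2\to\mathbb{R}^2$ given by $$T\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}x+2y\\3x+4y\end{bmatrix},$$ that is, $T(x,y)=(x+2y,\;3x+4y)$.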
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Prove that if $k\ge 1\,$ and $\,a^k\equiv 1\pmod{\! n}$ then $\gcd(a,n)=1$ Let $p$ be a prime such that $a^p-1 \equiv 0 \pmod{p}$. Prove that $\gcd(a,p) = 1$. We know from Fermat's Little Theorem that $a^{p-1}-1 \equiv 0 \pmod{p}$ if and only if $\gcd(a,p) = 1$, but how do we use this to solve the question?
It is a special case of the general fact that $\,a\,$ invertible mod $\,n\,$ implies $\,\gcd(a,n)= 1.\,$ Indeed, $\,aj\equiv 1\pmod{n}\,\Rightarrow\, aj+kn = 1\ $ so $\ d\mid a,n\Rightarrow\, d\mid 1,\,$ therefore $\,\gcd(a,n) = 1$. OP is a special case since $\,{\rm mod}\ p\!:\ a^p-1\equiv 0\, \Rightarrow a(a^{p-1})\equiv 1\,$ so $\,a\,$ is invertible mod $\,p.$ Remark $\ $ The converse is also true, i.e. $\,\gcd(a,n) = 1\,\Rightarrow\,a\,$ is invertible mod $\,n,\,$ since by the Bezout identity for the gcd $\, \gcd(a,n) = 1\,\Rightarrow\, aj+kn = 1\,$ for some integers $\,j,k.\,$ Reducing this equation mod $\,n\,$ we obtain the congruence $\,aj\equiv 1\pmod n,\,$ i.e. $\,a^{-1}\equiv j\pmod n$ That fact that $\ a\,$ is invertible mod $\,n\iff \gcd(a,n)= 1\ $ is fundamental and ubiquitous in elementary number theory, so one should be sure to be intimately familiar with it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
Prove that in set theory $A-B = A - (A \cap B)$ Prove that in set theory $A-B = A - (A \cap B)$ Please give me a hint. Let $x\in A-B \implies x\in A \text{ and } x \notin B\implies x\in A \text{ and } (x\in A \text{ and } x \notin B)\implies x\in A - (A \cap B)$
Let $x \in A - B$. Then, $x$ is in $A$, but $x$ is not in $B$. It follows that $x$ is not in both $A$ and $B$, otherwise $x$ would be in $B$. Hence, $x \in A - (A \cap B)$, so $A-B \subset A - (A\cap B)$. Can you do the other direction using this as a model?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }