Explanation of the first step in the proof of the Vitali theorem. The theorem and the first part of its proof are given below. But I do not understand the first statement in the proof: why does countable subadditivity of outer measure lead us to suppose that $E$ is bounded? Could anyone explain this to me, please?
Theorem 17: Any set of real numbers with positive outer measure contains a subset that fails to be measurable. Theorem 17': Any bounded set of real numbers with positive outer measure contains a subset that fails to be measurable. The part of the proof that follows the first sentence proves theorem 17'. We therefore just have to deduce theorem 17 from theorem 17'. Deduction of theorem 17 from theorem 17': Let $E$ be a set of real numbers with positive outer measure. Let $\mu$ denote outer measure. If $\mu(E \cap [n,n+1)) = 0$ for each $n \in \mathbb{Z}$, then $\mu(E) = \mu(\cup_{n \in \mathbb{Z}} E\cap [n,n+1) ) \le \sum_{n \in \mathbb{Z}} \mu(E\cap [n,n+1)) = 0$, a contradiction. In other words, there is some $n \in \mathbb{Z}$ with $\mu(E\cap [n,n+1)) > 0$. Applying theorem 17' to $E \cap [n,n+1)$ (which we may, since $\mu(E\cap[n,n+1)) > 0$ and since $E \cap [n,n+1) \subseteq [n,n+1)$ is bounded), we get some non-measurable set $F \subseteq E\cap [n,n+1)$. Since $F$ is then a non-measurable subset of $E$ as well, theorem 17 is deduced.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3359562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
TRUE OR FALSE: Events in a partition cannot be independent (assumption: every event in the partition has nonzero probability) I need help solving this T or F question. I got the following but am unsure if it is correct. Let $A$ and $B$ be two events of a partition of a set $X$. $$P(A \cap B) = 0$$ Let's assume $A$ and $B$ are independent; then: $$P(A)P(B) = P(A \cap B) = 0$$ So this means either $P(A) = 0$ or $P(B) = 0$. Because this violates the assumption that every event in the partition has nonzero probability, does that mean this is True?
Yes. Let me formalize a bit. Consider a partition, i.e. some events $A_1, \dots, A_n$ such that $A_i \cap A_j = \emptyset$ if $i \ne j$ and $\cup_{i=1}^n A_i = \Omega$. Then, for all $i \in \{1,\dots,n\}$ and all $j \ne i$, $A_j \subset A_i^c$ (where $B^c = \Omega \setminus B$ for any event $B$). So, as you mentioned, for $i \ne j$ $$\mathbb{P}(A_i \cap A_j) = \mathbb{P}(\emptyset) = 0,$$ while, again as you mentioned, $\mathbb{P}(A_i), \mathbb{P}(A_j) \ne 0$. My point is just that this should not surprise you because, in your setting, $B \subset A^c$ from the very fact that $\{A,B\}$ is a partition (and in fact, if you have only two events, $B$ is precisely $A^c$).
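To make this concrete, here is a small numerical illustration (the fair-die sample space and the specific partition are my own hypothetical example, not from the post): two cells of a partition are never independent when both have positive probability.

```python
from fractions import Fraction

# Sample space: a fair six-sided die; each outcome has probability 1/6.
omega = frozenset(range(1, 7))

def prob(event):
    return Fraction(len(event), len(omega))

# A partition of omega into two cells, both with nonzero probability.
A = frozenset({1, 2, 3})
B = omega - A  # = {4, 5, 6}, i.e. the complement of A

p_A, p_B = prob(A), prob(B)
p_AB = prob(A & B)  # cells of a partition are disjoint, so this is 0

independent = (p_AB == p_A * p_B)  # False: 0 != 1/2 * 1/2
```

As the answer notes, $B = A^c$ here, which is exactly why $P(A \cap B) = 0$ while $P(A)P(B) = 1/4$.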
{ "language": "en", "url": "https://math.stackexchange.com/questions/3359726", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Generalised eigenvectors" $Ax=\lambda Bx$: proof of B-orthogonality? I came across this in a textbook, and it is not the usual definition of generalized eigenvectors I've seen. The generalised eigenvectors of matrices $A$ and $B$ are vectors that satisfy $Ax=\lambda Bx$, and $\lambda$ is the corresponding generalised eigenvalue. The theorem goes: if $A$ and $B$ are symmetric and $B$ is a positive-definite matrix, the generalised eigenvalues are real, and eigenvectors $v_i$ and $v_j$ with distinct eigenvalues are B-orthogonal: $v_{i}^{T}Bv_{j} = 0$. Now my question is: why does $B$ have to be positive definite for this to work? When trying to prove this, I assumed $\mu, \lambda$ are distinct eigenvalues corresponding to $x, y$. Then $\lambda\langle Bx,y\rangle = \langle Ax,y\rangle = \langle x,Ay\rangle$ (using the symmetry of $A$) $= \dots = \mu\langle Bx,y\rangle$ (using the symmetry of $B$). Then $(\lambda - \mu)\langle Bx,y\rangle = 0$, where $(\lambda - \mu) \neq 0$. I feel like this proves it without using that $B$ is positive definite? This textbook only deals with real vector spaces.
If $B$ is not positive definite, then the generalized eigenvalues may not be real, for example: if $A = \left(\begin{smallmatrix}0&1\\1&0\end{smallmatrix}\right)$ and $B = \left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$, then since $B$ is invertible, the generalized eigenvalues are the eigenvalues of $B^{-1}A$, i.e. $\lambda = \pm i$. Other things also go wrong -- when $B$ is positive definite, you'll get a basis of generalized eigenvectors, but this may not be true otherwise, e.g. if $A = \left(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right)$ and $B = \left(\begin{smallmatrix}1&0\\0&-1\end{smallmatrix}\right)$, the only generalized eigenvector is $v = \left(\begin{smallmatrix}1\\-1\end{smallmatrix}\right)$.
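A quick numerical check of the first example (my own sketch, not part of the original answer), confirming with NumPy that the generalized eigenvalues of this pair are $\pm i$:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, -1.0]])  # symmetric but indefinite

# Since B is invertible, the generalized eigenvalues of (A, B)
# are the ordinary eigenvalues of B^{-1} A.
eigvals = np.linalg.eigvals(np.linalg.inv(B) @ A)
```

Both eigenvalues come out purely imaginary, so even though $A$ and $B$ are real and symmetric, no real generalized eigenvalue exists once positive definiteness of $B$ is dropped.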
{ "language": "en", "url": "https://math.stackexchange.com/questions/3359876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve $(x+1)(y+1)(z+1)=144$ in primes "Solve $(x+1)(y+1)(z+1)=144$ in primes". So far, I have concluded that the solutions are $(x,y,z)=(2,3,11)$ or $(2,5,7)$ and their permutations. I worked like this:
* $x \equiv 0 \pmod 2 \Rightarrow x+1=3$; since $144=2^4\cdot3^2$, this gives $(y+1)(z+1)=48=2^4\cdot3$.
* $y \equiv 0 \pmod 2 \Rightarrow y+1=3 \Leftrightarrow z=15$, a contradiction since $15$ is not prime.
* $y \equiv 1 \pmod 2 \Rightarrow y+1\geq 2^2 \Leftrightarrow y+1=2\cdot3, z+1=2^3$ or $y+1=2^2\cdot3, z+1=2^2 \Leftrightarrow y=5, z=7$ or $y=11, z=3$.

And since the equation is symmetric, the solutions are the permutations of the latter ones. Similarly, through casework, we find the same solutions if $x \equiv 1 \pmod 2$ (if I haven't made a mistake). My question is whether there exists a simpler way to solve this problem besides lots of casework (and whether there exists another triplet that satisfies the given equation).
$144 = 2^4\cdot3^2$. If $x,y,z$ are prime, then $a=x+1$, $b=y+1$, $c=z+1 \ge 3$, so we need only consider factors that are at least $3$. If, WLOG, $x+1, y+1 \ge 3$, then $z+1 \le \frac {144}9 = 16$. So we need only consider triplets of factors between $3$ and $16$. The divisors of $144 = 2^4\cdot3^2$ are $1,2,3,4,6,8,9,12,16,18,24,36,48,72,144$, and in that range we are considering only the factors $3,4,6,8,9,12,16$. As we want these to be one more than primes, we can't have $16$ or $9$. So the factors we may have are $3,4,6,8,12$. Now let's find the triplets by listing them in order, WLOG $a \le b \le c=\frac {144}{ab}$. We have $a,b,c =$ $3,3,16$: no good, $16$ is not in our list. $3,4,12$: good. $3,6,8$: good. $3,8,\dots$: here $c = \frac {144}{ab} < b$, so that's it for $a = 3$. $a=4$ next. $4,4,9$: no good, $9$ is not on our list. $4,6,6$: good. And that's it; we've "hit the middle" for $a=4$. Should we try $a=6$ next? $6,6,\dots$: again $c = \frac {144}{ab} < b$, so we've hit the wall. So, ignoring permutations, $\{x,y,z\} = \{2,3,11\}$, $\{2,5,7\}$, or $\{3,5,5\}$.
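A brute-force check (my own sketch, not part of the answer) confirms that the three triplets listed at the end are the only ones:

```python
from itertools import combinations_with_replacement

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Each factor x+1 divides 144, so every prime involved is below 144;
# brute force over nondecreasing triplets of primes.
primes = [p for p in range(2, 144) if is_prime(p)]
solutions = sorted(
    t for t in combinations_with_replacement(primes, 3)
    if (t[0] + 1) * (t[1] + 1) * (t[2] + 1) == 144
)
```

This recovers $\{2,3,11\}$, $\{2,5,7\}$, and $\{3,5,5\}$, the last of which is easy to miss in hand casework.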
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why does this alternate Fibonacci relation appear to be true? So the normal Fibonacci relation is famously as follows: $$\begin{aligned}F_0 &= 0 \\ F_1 &= 1 \\ F_n &= F_{n-1} + F_{n-2}\end{aligned}$$ My friend appears to have discovered this alternate relation that allows you to skip by 3s, but I can't find it anywhere else to confirm why it works. $$\begin{aligned}F_0 &= 0 \\ F_3 &= 2 \\ F_n &= 4F_{n-3} + F_{n-6}\end{aligned}$$ I think it's just a case of "simplification", but I wanted to confirm that. Does this work look correct? $$\begin{aligned} F_n &= F_{n-1} + F_{n-2} \\ F_n &= F_{n-2} + F_{n-3} + F_{n-2} \\ F_n &= 2F_{n-2} + F_{n-3} \\ F_n &= 2(F_{n-3} + F_{n-4}) + F_{n-3} \\ F_n &= 3F_{n-3} + 2F_{n-4} \\ F_n &= 3F_{n-3} + F_{n-4} + F_{n-5} + F_{n-6} \\ F_n &= 4F_{n-3} + F_{n-6} \end{aligned}$$ Edit: I get that it doesn't generate the entire sequence this way. We only needed the even terms for what we were working on. That's why I even called out that it skips by 3s. To "fix" it, all you would need to do is define the first 6 terms anyway.
That alternative definition "forgets" the terms $F_n$ when $n$ is not a multiple of three. So $F_5$, say, could be a crazy elephant.
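A short computation (my own sketch) makes the point explicit: the identity $F_n = 4F_{n-3} + F_{n-6}$ is true for every $n \ge 6$, but iterating it from the two seeds $F_0 = 0$, $F_3 = 2$ only ever reproduces $F_0, F_3, F_6, \dots$, leaving the terms in between completely unconstrained.

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# The identity itself holds for all n >= 6...
identity_holds = all(fib(n) == 4 * fib(n - 3) + fib(n - 6) for n in range(6, 40))

# ...but iterating it from the seeds F_0 = 0 and F_3 = 2 only
# generates every third Fibonacci number: F_0, F_3, F_6, ...
a, b = 0, 2
produced = [a, b]
for _ in range(8):
    a, b = b, 4 * b + a
    produced.append(b)
```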
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Books that provide justifications of perturbation and asymptotic methods. I am looking for books which provide justifications (proofs of appropriate theorems) of various perturbation methods. In particular, I would like to study the justification of matched asymptotic expansions, multiple scales, WKB, and the Poincaré method. Thank you in advance.
I would say that a nice source to base your search on is C. Kuehn, Multiple Time Scale Dynamics, Springer (2015), ISBN 978-3-319-12315-8, [link]. The book is very extensive concerning perturbation methods. In addition, it not only contains proofs, but also an extensive literature section, which guides you to literature where necessary proofs can be found.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating the curvature of product manifold $\mathbb{S}^2 \times \mathbb{R}$ I've read that $\mathbb{S}^2 \times \mathbb{R}$ is one of the model geometries of Thurston which has non-constant curvature. I took it to mean that the manifold $(\mathbb{S}^2 \times \mathbb{R}, g)$ has non-constant sectional curvature (where $g$ is the standard product metric) and tried to compute its sectional curvature. Here's what I used: Let $(M, g_1)$ and $(N, g_2)$ be two Riemannian manifolds with curvature tensors $R_1$ and $R_2$. Using that for each $(p, q) \in M \times N$, $T_pM \oplus T_q N \cong T_{(p, q)}(M \times N)$, for each $X, Y, Z \in \Gamma(T(M \times N))$, we have: * $R(X, Y)Z = R_1(X_1, Y_1)Z_1 + R_2(X_2, Y_2)Z_2$ * $\text{Rm}(X, Y, Z, W) = \langle R(X,Y)Z, W \rangle$ where $X = (X_1, X_2)$, with $X_1 \in \Gamma(TM)$, $X_2 \in \Gamma(TN)$ and analogously for $Y$ and $Z$, and the metric on $M \times N$ is given by: $$g^{M \times N}_{(p, q)}(X, Y) = g^{M}_{p}(X_1, Y_1) + g^{N}_{q}(X_2, Y_2)$$ Denoting by $R_1$ the curvature tensor for the sphere $\mathbb{S}^2$ and by $R_2$ the one for the real line, it's obvious that $R_2 \equiv 0$. Now, let $X, Y$ be an orthonormal basis for a $2$-plane contained in some tangent space of $\mathbb{S}^2 \times \mathbb{R}$. We have: $$\begin{align}R(X,Y)Y &= R(X_1 + X_2, Y_1 + Y_2)(Y_1 + Y_2)\\ &= R(X_1, Y_1)Y_1 + R(X_2, Y_1)Y_1 + R(X_1, Y_2)Y_1 + R(X_2, Y_2)Y_1 \\ &+R(X_1, Y_1)Y_2 + R(X_2, Y_1)Y_2 + R(X_1, Y_2)Y_2 + R(X_2, Y_2)Y_2 \\ &= R_{1}(X_1, Y_1)Y_1 \end{align}$$ since all the other terms disappear, where we're using that $R_2 \equiv 0$. Then: $$\begin{align}K(X, Y) &= \langle R(X, Y)Y, X \rangle \\ &= \langle R_1(X_1, Y_1)Y_1, X_1 \rangle + \langle R_1(X_1, Y_1)Y_1, X_2 \rangle \\ &= \langle R_1(X_1, Y_1)Y_1, X_1 \rangle = 1 \end{align}$$ because $\mathbb{S}^2$ has constant sectional curvature equal to $1$. So we have that $\mathbb{S}^2 \times \mathbb{R}$ has constant sectional curvature as well.
Where did I make a mistake here? Or does $\mathbb{S}^2 \times \mathbb{R}$ actually have constant curvature? (I also realized that if my computations are correct, it would imply that $\mathbb{S}^n \times \mathbb{R}$ has constant curvature for all $n \geq 1$...)
The fact that $X$ and $Y$ are orthonormal says nothing about whether or not $X_1$ and $Y_1$ are orthonormal. Thus, the condition $\langle R(X_1,Y_1)Y_1, X_1\rangle =1$ need not hold. In fact, $X_1$ and $Y_1$ could be linearly dependent, in which case the curvature is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Bounded operator with an inverse on a dense invariant subspace Let $X$ and $Y$ be two Banach spaces such that $X$ is continuously embedded in $Y$. We suppose also that $X$ is dense in $Y$. Let $A\, \colon \, Y \to Y$ be a bounded operator that maps $X$ into itself such that the restriction $A_{|X} \colon X \to X$ is a bounded operator. Do we have: $A_{|X}$ invertible $\implies$ $A$ invertible? If $X$ is not assumed to be endowed with a Banach structure, a counter-example is given in Wikipedia.
This is a nice question because everybody should rather immediately think that the answer must be negative (such a theorem would look much too good) -- and nevertheless it is far from obvious (at least for me) how to find a counterexample. I believe that an example is contained in the paper https://doi.org/10.1007/BF01174563 of K. Dayanithy from 1978: For the finite measure $\mu$ on $\mathbb N$ with $\mu(\{n\})=1/(n!)^2$ he calculates the spectral radii of the operators $T_p$ on $L^p(\mu)$ defined by $(x_n)_n\mapsto (x_{n+1}/(n+1))_n$, namely $r(T_p)=0$ for $p>2$ and $r(T_2)=1$. There is thus an element $\lambda$ of the spectrum of $T_2$ with $|\lambda|=1$. Since the spectrum of $T_p$ for $p>2$ is $\{0\}$, we thus get for every $p>2$ that $A= \lambda -T_2$ is not invertible on $L^2(\mu)$ but its restriction to $L^p(\mu)$ is invertible. Note finally that $L^p(\mu)$ is contained in $L^2(\mu)$ because the measure is finite, and it is dense because even the space of finite sequences (i.e., with only finitely many non-zero coordinates) is dense. I would like very much to see a simpler example!
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\int_{-\infty}^{\infty}x e^{-x^2 + x(i+1)}dx$ how to? I am having some problems solving the integral in the title: $$\int_{-\infty}^{\infty}x e^{-x^2 + x(i+1)}dx$$ Following the general theory of Gaussian integrals, I have been trying to reduce it to the known integral $$\int_{-\infty}^{\infty} e^{-x^2 - i k x}dx, \; k \in \mathbb{R}$$ which can be solved by contour integration, knowing that the complex function $e^{-z^2}$ is holomorphic on the whole complex plane. Despite my efforts, this route does not seem to lead to any result. Thanks to anyone who is keen to try and help me.
Hint: $$\int_{x=-\infty}^\infty xe^{-x^2+(1+i)x}dx\\ =\int_{x=-\infty}^\infty\left(x-\frac{1+i}2\right)e^{((1+i)/2)^2-(x-(1+i)/2)^2}dx +\frac{1+i}2\int_{x=-\infty}^\infty e^{((1+i)/2)^2-(x-(1+i)/2)^2}dx\\ =e^{i/2}\int_{z=-\infty}^\infty ze^{-z^2}dz +\frac{1+i}2e^{i/2}\int_{z=-\infty}^\infty e^{-z^2}dz,$$ where $z = x-\frac{1+i}2$ (the contour may be shifted back to the real axis since $e^{-z^2}$ is entire and decays in horizontal strips) and we used $((1+i)/2)^2 = i/2$. The first integral vanishes by parity and the second is a more classical Gaussian integral.
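Carrying the hint to its end gives $\frac{1+i}{2}\,e^{i/2}\sqrt{\pi}$; a numeric sanity check with a plain Riemann sum (my own sketch, not part of the original answer) agrees:

```python
import math

import numpy as np

# Riemann sum of the integrand over [-20, 20]; the tails beyond are
# utterly negligible because of the e^{-x^2} factor.
x = np.linspace(-20.0, 20.0, 400001)
dx = x[1] - x[0]
numeric = np.sum(x * np.exp(-x**2 + (1 + 1j) * x)) * dx

# Value implied by the hint: e^{i/2} * (1+i)/2 * sqrt(pi).
closed_form = np.exp(0.5j) * (1 + 1j) / 2 * math.sqrt(math.pi)
```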
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Prove $\frac{a}{b^{2}+1} + \frac{b}{c^{2}+1} + \frac{c}{a^{2} + 1} \ge \frac{3}{2}$ Let $a,b,c > 0$ with $a+b+c=3$; prove $$ \frac{a}{b^{2} + 1} + \frac{b}{c^{2}+1} + \frac{c}{a^{2}+1} \ge 3/2 $$ Attempt: Notice that by AM-GM $$\frac{a}{b^{2} + 1} + \frac{b}{c^{2}+1} + \frac{c}{a^{2}+1} \ge 3\frac{\sqrt[3]{abc}}{\sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)}} $$ Now, AM-GM again $$ a^{2}+b^{2}+c^{2} + 3 \ge 3 \sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)} ... (1)$$ Then $a+b+c = 3 \ge 3 \sqrt[3]{abc} \implies 1 \ge \sqrt[3]{abc}$. Also $$a^{2} + b^{2} + c^{2} \ge 3 \sqrt[3]{(abc)^{2}}$$ multiply by $1 \ge \sqrt[3]{abc}$ and we get $$ a^{2} + b^{2} + c^{2} \ge 3 abc ... (2)$$ Subtracting $(2)$ from $(1)$ gives $$ 3 \ge 3 \sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)} - 3 abc$$ $$ 3 + 3 abc \ge 3\sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)} $$ $$ \frac{3abc}{\sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)}} \ge 3 - \frac{3}{\sqrt[3]{(a^{2}+1)(b^{2}+1)(c^{2}+1)}} $$ How to continue?
Another way. Your inequality is of sixth degree. We can reduce this degree by the Bacteria's method. Indeed, by C-S, Muirhead, Rearrangement and SOS (here it is also a Tangent Line method) we obtain: $$\sum_{cyc}\frac{a}{b^2+1}=\sum_{cyc}\frac{a^2(a+c)^2}{a(a+c)^2(b^2+1)}\geq\frac{\left(\sum\limits_{cyc}(a^2+ab)\right)^2}{\sum\limits_{cyc}a(a+c)^2(b^2+1)}=$$ $$=\tfrac{3(a+b+c)\left(\sum\limits_{cyc}(a^2+ab)\right)^2}{\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=\frac{3}{2}+\tfrac{3\left(2(a+b+c)\left(\sum\limits_{cyc}(a^2+ab)\right)^2-\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)\right)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\frac{3\sum\limits_{cyc}(a^5+3a^4b+2a^4c-4a^3b^2+4a^3c^2+8a^3bc-14a^2b^2c)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\tfrac{3\sum\limits_{cyc}(a^5+3a^4b+2a^4c-4a^3b^2-2a^3c^2+8(a^3bc-a^2b^2c)+6(a^3c^2-a^2b^2c))}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}\geq$$ $$\geq\frac{3}{2}+\frac{3\sum\limits_{cyc}(a^5+3a^4b+2a^4c-4a^3b^2-2a^3c^2)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\frac{3\sum\limits_{cyc}(a^5+3a^4b-4a^3b^2-2a^2b^3+2ab^4)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\frac{3\sum\limits_{cyc}a(a-b)(a^3+4a^2b-2b^3)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\frac{3\sum\limits_{cyc}\left(a(a-b)(a^3+4a^2b-2b^3)-\frac{3(a^5-b^5)}{5}\right)}{2\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}=$$ $$=\frac{3}{2}+\frac{3\sum\limits_{cyc}(a-b)^2(2a^3+19a^2b+16ab^2+3b^3)}{10\sum\limits_{cyc}a(a+c)^2(9b^2+(a+b+c)^2)}\geq\frac{3}{2}.$$
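The claim can also be stress-tested numerically (my own sketch, independent of the proof): sample random positive triples with $a+b+c=3$ and check that the cyclic sum never drops below $3/2$; equality holds at $a=b=c=1$.

```python
import random

def cyclic_sum(a, b, c):
    return a / (b * b + 1) + b / (c * c + 1) + c / (a * a + 1)

random.seed(0)
violated = False
for _ in range(100_000):
    # Two uniform cut points in [0, 3] split the interval into a + b + c = 3.
    u, v = sorted(random.uniform(0.0, 3.0) for _ in range(2))
    a, b, c = u, v - u, 3.0 - v
    if min(a, b, c) <= 0.0:
        continue
    if cyclic_sum(a, b, c) < 1.5 - 1e-9:
        violated = True
        break
```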
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
How to find the following limit involving the given integral? I have tried the problem and got $\lim_{x\to0}4\frac{f(x)}{x}=4$. From this, how should I proceed? Since $\displaystyle\lim_{x\to0}4\frac{f(x)}{x}=4$, we have $\displaystyle\lim_{x\to0}\frac{1}{4}\left(4\frac{f(x)}{x}\right)=\frac{1}{4}\lim_{x\to0}4\frac{f(x)}{x}=1\implies\lim_{x\to0}\frac{f(x)}{x}=1$. Is it correct?
Let $g(x) = f(x)-x$. Then $g(x) \to 0$ as $x \to 0$, and $12\frac{g(4x)}{4x}-10\frac{g(2x)}{2x}+2\frac{g(x)}{x} \to 0$ as $x \to 0$. I'll show $\frac{g(x)}{x} \to 0$ as $x \to 0$ (which gives $f(x)/x \to 1$). I'll show $\frac{g(x)}{x} \to 0$ as $x \to 0^+$. The following argument (easily modified) works for $x \to 0^-$. For $n \ge 0$, let $\epsilon_n = \sup_{x \in [0,2^{-n}]} |6\frac{g(4x)}{4x}-5\frac{g(2x)}{2x}+\frac{g(x)}{x}|$. We know $\epsilon_n \to 0$ as $n \to \infty$. Take any $x \in (\frac{1}{2},1]$, and let $x_0,x_1,x_2,\dots = \frac{g(x)}{x},\frac{g(x/2)}{x/2},\frac{g(x/4)}{x/4},\dots$. Then, for each $n \ge 0$, $x_{n+2} = 5x_{n+1}-6x_n+\epsilon_n'$ for some $\epsilon_n' \in [-\epsilon_n,\epsilon_n]$. Fix $\epsilon > 0$. Take $N$ large so that $|\epsilon_n'| \le \epsilon$ for $n \ge N$. Then, by the Lemma below, we must have $|x_n| \le 5\epsilon$ for $n \ge N$, for otherwise $0 < \lim_{n \to \infty} 2^{-n}x_n \le \limsup_{x \to 0} g(x)$, a contradiction. We have shown $x_n \to 0$ uniformly in $x$, giving the result. Lemma: For any $\epsilon > 0$, $(\epsilon_n)_{n \ge 3} \in [-\epsilon,\epsilon]^{n \ge 3}$, and $x_1,x_2 \in \mathbb{R}$, if either $|x_1| > 5\epsilon$ or $|x_2| > 5\epsilon$, then the sequence $(x_n)_{n \ge 3}$ defined by $x_n = 5x_{n-1}-6x_{n-2}+\epsilon_n$ satisfies $\limsup_{n \to \infty} 2^{-n}x_n > 0$. I'm too lazy to prove the Lemma (seems like annoying casework), but the intuition as to why it's true is as follows. If $\epsilon_n = 0$ for each $n \ge 3$, then $x_n = c2^n+d3^n$ for some $c,d \in \mathbb{R}, (c,d) \not = (0,0)$ (if $c=0,d=0$, then $x_1,x_2 = 0$). And when the $\epsilon_n$'s can be nonzero but still tiny, they'll affect the individual values of the sequence but not its overall growth rate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find rational $\frac{p}{q}$ such that $\frac{1}{3000}<|\sqrt{2}-\frac{p}{q}|<\frac{1}{2000}$ Find a rational $\frac{p}{q}$ such that $\frac{1}{3000}<|\sqrt{2}-\frac{p}{q}|<\frac{1}{2000}$. My attempt: take a sequence which converges to $\sqrt{2}$: $p_1=1+\frac{1}{2}$, $p_{n+1}=1+\frac{1}{1+p_n}$. I found how to calculate the sequence: $p_n=\frac{x_n}{y_n}$, $\Delta y_n=x_n$ and $(\Delta^2-2)y_n=0$, but I cannot see how to find a rational which satisfies the given bounds.
Try $$ \frac{58}{41}.$$ It is well-known that the continued fraction of $\sqrt 2$ is $$ 1+\frac1{2+\frac1{2+\frac1{2+\frac1{2+\ldots}}}}$$ Numerically(!), we find the continued fractions for $\sqrt 2+\frac1{3000}$ and $\sqrt 2+\frac1{2000}$: $$ 1+\frac1{2+\frac1{2+\frac1{2+\frac1{2+\frac1{\color{red}1+\frac1{\ldots}}}}}}$$ and $$ 1+\frac1{2+\frac1{2+\frac1{2+\frac1{\color{red}3+\frac1{\ldots}}}}}.$$ This suggests that the simplest fraction in between is $$ 1+\frac1{2+\frac1{2+\frac1{2+\frac1{3}}}} =1+\frac1{2+\frac1{2+\frac 37}} =1+\frac1{2+\frac 7{17}} =1+\frac{17}{41}=\frac{58}{41}.$$ We verify that $$ \left(\frac{58}{41}-\sqrt 2\right)\underbrace{\left(\frac{58}{41}+\sqrt 2\right)}_{\approx 2\sqrt 2}=\left(\frac{58}{41}\right)^2-2=\frac2{1681}$$ and hence $$ \frac{58}{41}-\sqrt 2\approx \frac1{1681\sqrt 2}$$ which is certainly in the required range. Edit: On second thought, it turns out that twice the reciprocal of the above, i.e., $$ \frac{41}{29}$$ is also a valid (and "simpler") solution, just from below.
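Both claimed answers can be verified exactly (a sketch of mine, not in the original answer), comparing squares with rational arithmetic so that no floating point is involved:

```python
from fractions import Fraction

def in_range(p, q):
    # Exact check that 1/3000 < |sqrt(2) - p/q| < 1/2000.  All quantities
    # compared here are positive, so we may compare squares against 2.
    r = Fraction(p, q)
    lo, hi = Fraction(1, 3000), Fraction(1, 2000)
    within_hi = (r - hi) ** 2 < 2 < (r + hi) ** 2       # |sqrt(2) - r| < 1/2000
    beyond_lo = (r - lo) ** 2 > 2 or (r + lo) ** 2 < 2  # |sqrt(2) - r| > 1/3000
    return within_hi and beyond_lo
```

Both $58/41$ (approaching from above) and $41/29$ (from below) pass, while a coarse fraction like $3/2$ does not.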
{ "language": "en", "url": "https://math.stackexchange.com/questions/3360994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Color a 1000 by 1000 grid with 0s and 1s. Show that one can either remove 990 rows so that a 1 remains in every column, OR delete 990 columns so that a 0 remains in every row. My approach: let $r_i$ be the number of 1s in row $i$ and $c_i$ be the number of 0s in column $i$, but then I am stuck. Any hint would be appreciated!
Not a solution, too long for comments. I was thinking about the problem and decided to write my thoughts; my first idea was to use the pigeonhole principle, expecting it to be straightforward; it is more complicated than I thought. A second idea is to try to use some linear independence argument, considering rows and columns as vectors in $\mathbb{Z}_2^{1000}$. This sounds familiar, but I could not find a way to express the conditions from this new angle. Any comments/suggestions are welcome. Pigeonhole attempt. Let us fix some notation. Denote $[n] = \{1, 2, \ldots, n\}$ and also for some set $X$ let $$ \binom{X}{n} = \{Y \subset X; |Y| = n\} $$ be the family of all subsets of $X$ with precisely $n$ elements. Using your notation for $r_i$ and $c_i$, we have $$ \sum_1^{1000} c_i + \sum_1^{1000} r_i = 1000 \cdot 1000 = 10^6, $$ since we are counting each $1$ and each $0$ exactly once. For some $X \in \binom{[1000]}{10}$, let us denote $$ r_X = \sum_{i \in X} r_i. $$ Using the pigeonhole principle, if you have a grid that is $10 \times 1000$ that has at least $999 \cdot 10 + 1$ entries equal to $1$, then you have found a selection of rows that satisfies the first statement. Otherwise, for every $X \in \binom{[1000]}{10}$ you have $$ r_X \le 9990. $$ Now, we make a counting-in-two-ways argument. Notice that $$ \sum_{X \in \binom{[1000]}{10}} r_X = \binom{999}{9} \cdot \sum_{1}^{1000} r_i, $$ since every row belongs to precisely $\binom{999}{9}$ sets of $10$ rows (just choose $9$ other rows from the remaining $999$). The computations above show that if we do not have the first statement then $$ \sum_{1}^{1000} r_i \le \frac{\binom{1000}{10}}{\binom{999}{9}} \cdot 9990 = \frac{1000!}{999!} \cdot \frac{9! \cdot 990!}{10! \cdot 990!} \cdot 9990 = 1000 \cdot 999 $$ and thus the average row has at most $999$ ones. Symmetric argument for $c_i$. Analogously, if we have some $1000 \times 10$ grid with at least $999 \cdot 10 + 1$ entries being zero, then every row must have a zero.
The same double counting argument proves that $$ \sum_1^{1000} c_i \le 1000 \cdot 999. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $X_{\tau}$ is a random variable. Let $\{X_{n}, n \geq 0\}$ be real-valued random variables on $(\Omega, \mathcal{F}, P)$ that satisfy $\lim_{n \rightarrow \infty} X_{n}(\omega) = \infty$ for every $\omega \in \Omega$, and let $B < \infty$ be a real number. Prove that the integer-valued quantity $$\tau(\omega) := \inf\left\{n \geq 0 : X_{n}(\omega)\geq B\right\}$$ is a random variable. Also, prove that $X_{\tau}$ is a random variable. My attempt so far: I have been able to prove that $\tau$ is a random variable. If $k \notin \mathbb{Z}_{\geq 0}$, then $\tau^{-1}(\{k\}) = \emptyset \in \mathcal{F}$. For $k \in \mathbb{Z}_{\geq 0}$, $$\tau(\omega) = k \iff X_{k}(\omega) \geq B \text{ and } X_{i}(\omega) < B \text{ for all } i < k.$$ Since the $X_n$'s are random variables, $\tau^{-1}(\{k\}) = A_{k} \cap \left(\cap_{i=0}^{k-1}C_{i}\right) \in \mathcal{F}$, where $A_{k} = X_{k}^{-1}([B,\infty))$ and $C_{i} = X_{i}^{-1}((-\infty,B))$. I am not so sure how to proceed in proving that $X_{\tau}$ is a random variable. Any help would be appreciated.
Hint: Write $$X_{\tau} = \sum_{n=0}^{\infty} X_n 1_{\{\tau=n\}}$$ * *What do you know about the measurability of $X_n$ and $1_{\{\tau=n\}}$? *What do you know about measurability of products, sums and limits of measurable functions?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Computing $\underset{x\rightarrow0}{\lim}\big(a^{x}+b^{x}-c^{x}\big)^\frac{1}{x}$ A friend asked me to help him with calculating a certain hideous limit: $$\underset{x\rightarrow0}{\lim}\big(a^{x}+b^{x}-c^{x}\big)^\frac{1}{x},\space\space0<a,b,c\in\mathbb{R}$$ I came up with a solution (and Wolfram Alpha confirmed it), but unfortunately it involves a lot of steps and a couple of theorems and identities, so we're actually reaching out hoping someone can come up with a better solution. Hopefully a more intuitive one! Here's what I had in mind: $$\underset{x\rightarrow0}{\lim}\big(a^{x}+b^{x}-c^{x}\big)^\frac{1}{x}=\underset{x\rightarrow0}{\lim}e^{\ln{\big((a^{x}+b^{x}-c^{x})^\frac{1}{x}\big)}}=\underset{x\rightarrow0}{\lim}e^\frac{\ln{\big(a^{x}+b^{x}-c^{x}\big)}}{x}$$ Since $e^x$ is continuous, we have: $$\underset{x\rightarrow{x_0}}{\lim}e^{f(x)}=e^{\underset{x\rightarrow{x_0}}{\lim}f(x)}$$ So we focus on finding: $$\underset{x\rightarrow0}{\lim}\frac{\ln{\big(a^{x}+b^{x}-c^{x}\big)}}{x}$$ Suppose it is equal to some $L$; then our original limit will be equal to $e^L$. We now note that: $$\underset{x\rightarrow0}{\lim}x=0\space,\space \underset{x\rightarrow0}{\lim}\ln{\big(a^{x}+b^{x}-c^{x}\big)}=\ln{(a^0+b^0-c^0)}=\ln{(1+1-1)}=\ln(1)=0$$ So our limit takes the indeterminate form $"\frac{0}{0}"$. After checking that all the conditions are satisfied, we proceed with L'Hospital, setting $g(x)=x$ and $f(x)=\ln{\big(a^{x}+b^{x}-c^{x}\big)}$, and get the following (oh boy, this is going to be ugly): $$\underset{x\rightarrow0}{\lim}\frac{f(x)}{g(x)}=\underset{x\rightarrow0}{\lim}\frac{f'(x)}{g'(x)}=\underset{x\rightarrow0}{\lim}\frac{\frac{a^{x}\ln{(a)}+b^{x}\ln{(b)}-c^{x}\ln{(c)}}{a^{x}+b^{x}-c^{x}}}{1}=\ln{(a)}+\ln{(b)}-\ln{(c)}=\ln{\big(\frac{ab}{c}\big)}$$ (Again, the computation was straightforward because $\ln$ is continuous, using the limit of a composition of functions.)
So we finally get: $$\underset{x\rightarrow0}{\lim}\big(a^{x}+b^{x}-c^{x}\big)^\frac{1}{x}=\underset{x\rightarrow0}{\lim}e^\frac{\ln{\big(a^{x}+b^{x}-c^{x}\big)}}{x}=e^{\ln{(\frac{ab}{c})}}=\frac{ab}{c}$$ And that's it! As you can see, it's not that intuitive, and it involves a lot of computation and some theorems and identities, and as a result, many steps. I would appreciate any other insight regarding formality and other perspectives on calculating this limit. Thank you.
Apply $\ln$ to the expression to get $$\tag 1 \frac{\ln(a^x+b^x-c^x)}{x}.$$ Let $f(x) = \ln(a^x+b^x-c^x).$ Then $(1)$ equals $$\frac{f(x)-f(0)}{x}.$$ The limit of this as $x\to 0$ is, by definition, $f'(0).$ Let's compute: $$f'(x) = \frac{1}{a^x+b^x-c^x}\cdot (\ln a\cdot a^x + \ln b\cdot b^x -\ln c\cdot c^x).$$ Thus $f'(0)= \ln a+ \ln b -\ln c = \ln \left(\dfrac{ab}{c}\right).$ Exponentiating back shows the original limit is $\dfrac{ab}{c}.$
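A numeric check of the result (my own sketch; the values $a=2$, $b=3$, $c=5$ are arbitrary): for small $x$ from either side, the expression is already close to $ab/c = 6/5$.

```python
def h(x, a, b, c):
    # The expression whose limit as x -> 0 is being computed.
    return (a ** x + b ** x - c ** x) ** (1.0 / x)

a, b, c = 2.0, 3.0, 5.0
from_right = h(1e-6, a, b, c)
from_left = h(-1e-6, a, b, c)
limit = a * b / c  # = 1.2
```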
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Integrate $\int{\frac{x^2-1}{x^4+3x^3+5x^2+3x+1}}dx$ The answer to this integral $$\int{\frac{x^2-1}{x^4+3x^3+5x^2+3x+1}}dx$$ is $$\frac{2}{\sqrt{3}}\arctan\left(\frac{2}{\sqrt{3}}\left(x+\frac{1}{x}\right)+\sqrt{3}\right)+C,$$ but I can't figure out how to get it. I tried to use partial fractions, but the denominator $$x^4+3x^3+5x^2+3x+1$$ can't be easily factored. I also tried to use WolframAlpha to see how to solve it, but it can't give a useful answer for this integral.
\begin{align} &\int{\frac{x^2-1}{x^4+3x^3+5x^2+3x+1}}dx\\ =&\int \frac{ 1 - \frac{1}{x^2}}{x^2+3x+5+\frac{3}{x}+\frac{1}{x^2}}dx =\int \frac{ d\left( x+\frac{1}{x}\right) }{\left( x+\frac{1}{x}+\frac{3}{2}\right)^2 + \frac 34 } \\ =&\ \frac{2}{\sqrt{3}}\arctan\left[\frac{2}{\sqrt{3}}\left(x+\frac{1}{x}\right)+\sqrt{3}\right]+C \end{align}
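One can sanity-check the antiderivative numerically (a sketch of mine, not part of the answer): its central-difference derivative matches the integrand at sample points $x > 0$.

```python
import math

SQ3 = math.sqrt(3.0)

def F(x):
    # The antiderivative derived above (checked here on x > 0).
    return 2.0 / SQ3 * math.atan(2.0 / SQ3 * (x + 1.0 / x) + SQ3)

def integrand(x):
    return (x * x - 1.0) / (x ** 4 + 3.0 * x ** 3 + 5.0 * x ** 2 + 3.0 * x + 1.0)

h = 1e-6
max_err = max(
    abs((F(x + h) - F(x - h)) / (2.0 * h) - integrand(x))
    for x in (0.3, 0.7, 1.0, 1.9, 3.5)
)
```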
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
The number of group homomorphisms $\phi : S_3 \to S_3$ Surely $S_3 = \langle(1,2), (1,2,3)\rangle$. Put $\alpha = (1,2)$ and $\beta = (1,2,3)$. For a homomorphism $\phi : S_3 \to S_3$, say $\phi(\alpha) = f \,(\in S_3)$ and $\phi(\beta) = r \,(\in S_3)$. Then $\vert f \vert$ and $\vert r \vert$ must be divisors of $2$ and $3$ respectively. Hence there are 4 cases, as below. 1) $\vert f \vert = 1$ and $\vert r \vert = 1$ (the number of such $\phi$ is $1$) 2) $\vert f \vert = 1$ and $\vert r \vert = 3$ (the number of such $\phi$ is $2$) 3) $\vert f \vert = 2$ and $\vert r \vert = 1$ (the number of such $\phi$ is $3$) 4) $\vert f \vert = 2$ and $\vert r \vert = 3$ (the number of such $\phi$ is $6$) So my answer is $12$. But the given answer was $10$, not $12$. Where did I go wrong? Any advice would be appreciated.
When classifying homomorphisms of a group, generally one cannot prescribe the orders of images of generators independently. Indeed: If $\operatorname{ord}(\phi((12))) = 1$, then $\phi((12)) = 1$ (the identity permutation). But in a symmetric group any conjugacy class consists precisely of all permutations of a given cycle type, so there is some element $g \in S_3$ such that $$g (12) g^{-1} = (23) .$$ (I encourage you to verify this claim in this case by finding such an element $g$ explicitly.) But since $\phi$ is a homomorphism, $$\phi((23)) = \phi(g (12) g^{-1}) = \phi(g) \phi((12)) \phi(g^{-1}) = \phi(g) \phi(g)^{-1} = 1$$ and so $$\phi((123)) = \phi((12)(23)) = \phi((12)) \phi((23)) = 1 \cdot 1 = 1.$$ In summary, the condition $\phi((12)) = 1$ in fact forces $\phi$ to be trivial, so there are no homomorphisms $S_3 \to S_3$ satisfying case (2). The remaining cases add up to $10$ distinct homomorphisms, but our observation in this answer shows that more is required to verify that all of the remaining cases really are realizable. Remark More generally, if a homomorphism $\psi : S_n \to S_n$ maps any transposition to $1$, then since all transpositions are conjugate, the above argument shows that $\psi$ maps all transpositions to $1$. But $S_n$ is generated by its transpositions, so any such $\psi$ is actually the trivial homomorphism. This gives another illustration of the first statement in this answer. Another generating set for $S_3$ is $\{(12), (13)\}$, but since both of these generators have the same cycle type, their images under a homomorphism $S_3 \to S_3$ are necessarily conjugate (in particular, they have the same order).
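The count of $10$ can be confirmed by brute force (my own sketch, not part of the answer): enumerate all $6^6$ maps $S_3 \to S_3$ and keep those satisfying $\phi(gh)=\phi(g)\phi(h)$.

```python
from itertools import permutations, product

# Elements of S3 as tuples: p[i] is the image of i under the permutation p.
S3 = list(permutations(range(3)))

def compose(p, q):
    # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

homs = 0
for images in product(S3, repeat=len(S3)):
    phi = dict(zip(S3, images))
    if all(phi[compose(g, k)] == compose(phi[g], phi[k])
           for g in S3 for k in S3):
        homs += 1
```

This agrees with the structural count: $1$ trivial map, $3$ maps with kernel $A_3$ (one per order-2 subgroup), and $6$ automorphisms.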
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Behaviour of $x^n$, $\ln(x)$, and $e^x$ as $x\to \infty$ In the chapter "Limits of a Function", I came across the following property: As $x\to \infty$, $\ln(x)$ increases much slower than any positive power of $x$, whereas $e^x$ increases much faster than any positive power of $x$. So the following properties hold: $$(1) \lim_{x \to \infty} \frac{\ln(x)}{x}=0 $$ $$(2) \lim_{x \to \infty} \frac{(\ln(x))^n}{x}=0$$ $$(3)\lim_{x \to \infty} \frac{x}{e^x}=0$$ $$(4) \lim_{x \to \infty} \frac{x^n}{e^x}=0$$ To verify properties $(1)$ and $(3)$, I used L'Hospital's Rule, and I proved the limits tend to the value $0$. I don't think the other two properties hold under all conditions, i.e., for all positive integral values of $n$. First of all, I was unable to use L'Hospital's Rule since I felt it would be very lengthy even if we know the value of $n$. So, I decided to use a graphing calculator to determine their behaviour. The following graph is for properties (1) and (2). The limit approaches $0$ for lower positive values of $n$, but at a higher value, say $98$ as in the given graph, the limit itself appears to approach infinity and not zero. I tried to zoom out to see the behaviour, but as far as I tried, the limit appeared to approach infinity and not zero. Further, from the graph it seems the property given in my book is invalid, as the logarithmic function appears to increase faster than the function $x$. Similarly, I tried properties 3 and 4, as follows: Clearly the property again appears not to hold for higher values of $n$. So at last, my doubt is: is the property (behaviour of $x^n$, $\ln(x)$, and $e^x$ as $x\to \infty$) given in my book correct for all values of $n$? If yes, kindly verify or prove properties 2 and 4. If no, kindly explain the reason. Thanks in advance.
4) follows by applying L'Hopital's Rule $n$ times. (You will end up with $\lim_{x\to \infty} \frac {n!} {e^{x}}$ which is $0$). 2) is same as 4) with $x$ changed to $\ln x$.
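The graphing calculator misleads for large $n$ because the ratio $(\ln x)^n/x$ peaks at $x=e^n$, far beyond any plot window when $n=98$. A quick numerical illustration of this (working in log space to avoid overflow; the sample points are arbitrary): writing $x=e^t$, the logarithm of the ratio is $n\ln t - t$, which is positive for moderate $t$ but eventually tends to $-\infty$.

```python
import math

n = 98
# ln of the ratio (ln x)^n / x at x = e^t equals n*ln(t) - t
for t in [10, 100, 600, 700, 1000, 5000]:
    print(t, n * math.log(t) - t)
# the log-ratio turns negative between t = 600 and t = 700,
# i.e. around x = e^650, hopelessly beyond any graphing window,
# and from then on it decreases to -infinity, so the ratio -> 0
```

So the graph in the question only shows the region where the ratio is still climbing toward its peak.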
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Different solution for same contour integral $\int_{0}^{\infty}\frac{\cos(x)}{1+x^2}dx$ This question might be silly but I am quite puzzled by this problem. In this exercise I am required to solve the following integral $$\int_{0}^{\infty}\frac{\cos(x)}{1+x^2}dx$$ which I will call $A$ from now on. Since $f(x)= f(-x)$ I conclude $$A = \frac{1}{2} \int_{-\infty}^{\infty}\frac{\cos(x)}{1+x^2}dx.$$ Now I can switch to the complex variable $z$ and rewrite $\cos(z)$ as $\frac{e^{iz}+e^{-iz}}{2}$ and obtain $$\frac{1}{4}\int_{\gamma}\frac{e^{iz}+e^{-iz}}{1+z^2}dz$$ where $\gamma$ is the upper semicircle with radius $r$. Then I can evaluate the residue for $z=i$ (the radius is eventually going to tend to infinity) and via Jordan's Lemma I get $$A = \frac{\pi}{4}(e^{-1} + e).$$ Now, my professor solved this integral in a very similar manner but he passed from $$A = \frac{1}{2} \int_{-\infty}^{\infty}\frac{\cos(x)}{1+x^2}dx = \frac{1}{2} \int_{-\infty}^{\infty}\frac{e^{ix}}{1+x^2}dx $$ I tried to reconstruct the steps that bring me from one form to the other but I am not capable of doing so. What am I missing here? Thanks in advance to everyone who is going to participate.
Unfortunately your choice of function and curve doesn't work, and this is why proofs that contour integrals vanish (or don't vanish) are important (I wish physicists would take note here). Using the residue theorem and equating it to your integral relies on the integral along the circular arc vanishing. But at the top of the arc $$\frac{e^{iz}+ e^{-iz}}{1+z^2} \to \frac{e^{-y}+e^y}{1-y^2} \not\to 0$$ because it is exponentially growing, not decaying, as you take the radius of the arc to be larger and larger. This is in fact true for all points on the arc in the upper half plane, so this is not a problem that can be principal-valued away by ignoring the imaginary axis.
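The growth can be watched numerically at the top of the arc, $z = iR$: the $\cos z$ integrand blows up while the $e^{iz}$ integrand decays (stdlib `cmath` only; the sample radii are arbitrary):

```python
import cmath

for R in [5.0, 10.0, 20.0]:
    z = 1j * R  # top of the semicircular arc
    cos_term = abs(cmath.cos(z) / (1 + z*z))
    exp_term = abs(cmath.exp(1j*z) / (1 + z*z))
    print(R, cos_term, exp_term)
# cos_term grows exponentially with R (cos(iR) = cosh R),
# while exp_term decays to 0, so only the e^{iz} version
# has a vanishing contribution along the arc
```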
{ "language": "en", "url": "https://math.stackexchange.com/questions/3361906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove: Let $x,y \in \mathbb{R}$. If $x^{2}=y^{2}$, then $x=\pm{y}$. Prove: Let $x,y \in \mathbb{R}$. If $x^{2}=y^{2}$, then $x=\pm{y}$. My attempt: If $x^{2}=y^{2}$ then, $\sqrt{x^{2}}=\sqrt{y^{2}}$ $\rightarrow$ $\pm{x}=\pm{y}$ $\rightarrow$ $x=\pm{y}$ and $-x=\pm{y}$. And I think that would leave me $x=y$, $x=-y$, $-x=y$, $-x=-y$. But let's say $x^{2}=4$, so $y^{2}=4$ $\rightarrow$ $\pm{2}=\pm{2}$ then that would give me $2=2$, $2=-2$, $-2=2$ and $-2=-2$. But I have a problem on the $2=-2$ part.
It's generally a bad idea to "take square roots of both sides." It overcomplicates things, as the correct way to handle it is $\sqrt{x^2}=|x|$. Avoid the absolute value so you can avoid cases. Here's how. Realize that from $x^2=y^2$ you get $x^2-y^2=0$. Now factor to get $(x+y)(x-y)=0$. But this means at least one of those terms had to be $0$, so $x=\pm y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Use coordinate method to solve a pretty hard geometry problem Here is a hard geometry problem for my homework. Let $D$ be a point inside $\Delta ABC$ such that $\angle BAD=\angle BCD$ and $\angle BDC=90^\circ$. If $AB=5,BC=6$ and $M$ is the midpoint of $AC$, find the length of $DM$. (Taken from HK IMO Prelim 2012) The teacher wants us to use the coordinate geometry to solve this problem. It does not appear too hard, but there is a big problem. I set the coordinates of $A,B,C,D,M$ as in the diagram. Then, we can get the following statement: $$\begin{cases} b^2+c^2=36 \\ x^2+\left(b+y\right)^2=25 \\ \sin\angle BAD=\sin\angle BCD \end{cases}$$ We need to find the value of $\sin\angle BAD$ and $\sin\angle BCD$ $$\sin\angle BCD=\dfrac{BD}{BC}=\dfrac{b}{6}\\ \text{The area of }\Delta ABD=\dfrac{1}{2}\left(AB\right)\left(AD\right)\sin\angle BAD \\ \dfrac{bx}{2}=\dfrac{5\sqrt{x^2+y^2}\sin\angle BAD}{2} \\ \sin\angle BAD=\dfrac{bx}{5\sqrt{x^2+y^2}}\\ \dfrac{b}{6}=\dfrac{bx}{5\sqrt{x^2+y^2}} \\ \sqrt{x^2+y^2}=\dfrac{6}{5}x \\ x^2+y^2=\dfrac{36}{25}x^2 \\ y^2=\dfrac{11}{25}x^2 \\ y=\dfrac{\sqrt{11}}{5}x$$ Then, when I proceed further, the expression gets very complicated. I felt puzzled, so I stopped. I found a geometric solution, which gives the answer of $\dfrac{\sqrt{11}}{2}$. I hope you guys can help me solve this question using coordinate method. Thank you!
Let $\angle BDA = \alpha$ and apply the sine rule to the triangle $\triangle ABD$, $$\frac{\sin \alpha}{\sin\angle BAD}=\frac{5}{6\sin\angle BCD}$$ Given that $\beta = \angle BAD = \angle BCD$, we immediately get $$\sin \alpha = \frac 56$$ Now, express the coordinates of $A(-x,-y)$ as follows, $$x=AB\sin \angle ABD = 5\sin (\alpha + \beta)$$ $$y=AB\cos \angle ABD - BD = -5\cos (\alpha + \beta) - 6 \sin\beta$$ Then, the coordinates of the point $M(x_m,y_m)$ are $$x_m = \frac 12 (CD - x)=\frac 12 [6\cos\beta - 5\sin (\alpha + \beta)]$$ $$y_m = -\frac 12 y = \frac 12 [5\cos (\alpha + \beta) + 6 \sin\beta]$$ As a result, $$DM^2 = x_m^2 + y_m^2 = \frac 14 (36+25-60\sin\alpha)=\frac {11}{4}$$ Note that the unknown angle $\beta$ cancels out in the process. Finally, the length of $DM$ is $$DM = \frac{\sqrt{11}}{2}$$
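The cancellation of $\beta$ can also be checked numerically: plugging $\sin\alpha = 5/6$ into the coordinate formulas for $M$ above gives $DM=\sqrt{11}/2$ for any sample value of $\beta$ (a quick script; the angles tested are arbitrary):

```python
import math

alpha = math.asin(5/6)
for beta in [0.2, 0.5, 0.9]:
    xm = 0.5 * (6*math.cos(beta) - 5*math.sin(alpha + beta))
    ym = 0.5 * (5*math.cos(alpha + beta) + 6*math.sin(beta))
    dm = math.hypot(xm, ym)
    print(beta, dm, math.sqrt(11)/2)  # dm matches sqrt(11)/2 every time
```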
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is $p \lor (q \land r)\equiv p \land (q \lor r)$? Before writing, I'm not good at English. As I developed the law of distribution, I felt strange. $$\begin{align} p \land (q \lor r)& \equiv (p \land q) \lor (p \land r)\\ &\equiv X \lor (p\land r)\tag{$X=p \land q$}\\ &\equiv (X \lor p) \land (X \lor r)\\ &\equiv [(p \land q) \lor p] \land [(p \land q) \lor r]\\ &\equiv (p \lor p) \land (p \lor q) \land (p \lor r) \land (q \lor r)\\ &\equiv p \land (q \lor r) \land (p \lor q) \land (p \lor r)\\ &\equiv [p \land (q \lor r)] \land p \lor (q \land r)\end{align} $$ When called $p \land (q \lor r)=A,\ p \lor (q \land r)=B.$ I have never heard of $A≡A \land B$. (When they are not the same) So I developed $p \lor (q \land r)$ $$\begin{align} p \lor (q \land r) &\equiv (p \lor q) \land (p \lor r)\\ &\equiv Y \land (p \lor r)\ \ \ \ (let\ Y=p \lor q)\\ &\equiv (Y \land p) \lor (Y \land r)\\ &\equiv [(p \lor q) \land p] \lor [(p \lor q) \land r]\\ &\equiv (p \land p) \lor (p \land q) \lor (p \land r) \lor (q \land r)\\ &\equiv [p \lor (q \land r)] \lor [p \land (q \lor r)]\end{align} $$ $A\equiv A \land B,\ B\equiv A \lor B$ I don't know if they are equal or if I made a mistake.
First, you make a mistake in the third row from the bottom. Also, it's true that in general $A \equiv A \land B$ doesn't hold for independent $A,B$. But in your case they are dependent in value!
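The dependence can be confirmed with a truth table: with $A = p\land(q\lor r)$ and $B = p\lor(q\land r)$, $A$ implies $B$, so $A\equiv A\land B$ and $B\equiv A\lor B$ both hold even though $A\not\equiv B$. A short check over all eight rows:

```python
from itertools import product

for p, q, r in product([False, True], repeat=3):
    A = p and (q or r)
    B = p or (q and r)
    assert A == (A and B)  # A ≡ A ∧ B, because A implies B
    assert B == (A or B)   # B ≡ A ∨ B, for the same reason

# ...yet A and B are not equivalent:
assert any((p and (q or r)) != (p or (q and r))
           for p, q, r in product([False, True], repeat=3))
print("checked all 8 rows")
```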
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
How was row reduction used to obtain the result in this example? In the example below, I obtained a different result for $(2)$, and don't know how to reproduce the given one. Here is my result: $$ \begin{bmatrix}a \\ b \\ c \\d\end{bmatrix} = \begin{bmatrix} -2 + 2r \\ -2 + r \\ -1 \\ r \end{bmatrix}$$ I verified my result with an online row reduction calculator and it seems to be correct. How do we get from $(1)$ to $(2)$ by row reduction in the example below?
Note that in your calculation, you use the last variable $d$ as the "free parameter" (you have the equation $d = r$ and then you express $a,b,c$ in terms of $r$). However, in the calculation attached from the book, they use the first variable $a$ as the free parameter. To convert your answer to the book's answer, starting with your representation of the solution, note that $$ a = -2 + 2r = -2 + 2d \implies d = \frac{a + 2}{2} = \frac{a}{2} + 1,\\ b = -2 + r = -2 + d \implies b = -2 + \frac{a}{2} + 1 = \frac{a}{2} - 1.$$ Hence, if we use $a$ as a free parameter instead of $d$, (setting $a = r$), we get $$ b = \frac{r}{2} - 1, \,\,\, d = \frac{r}{2} + 1. $$ It is worth emphasizing there is "nothing wrong" with your solution. It is perfectly correct and there is no need to convert it to the book's solution. Plugging your solution into the first equation, we get $$ \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix} + (2r - 2) \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix} + (r - 2) \begin{pmatrix} -1 \\ 0 \\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix} + r \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}$$ which is just a different parametric representation of the same line as the one written in the book's answer (verify this!).
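The claim that the two parametrizations trace the same line can be double-checked numerically, using the vectors from the verification at the end of the answer (pure Python, no libraries; function names are my own):

```python
def user_form(r):
    # (1,2,3) + (2r-2)*(1,1,0) + (r-2)*(-1,0,1)
    base, u, v = (1, 2, 3), (1, 1, 0), (-1, 0, 1)
    return tuple(base[i] + (2*r - 2)*u[i] + (r - 2)*v[i] for i in range(3))

def book_form(r):
    # (1,0,1) + r*(1,2,1)
    return tuple((1, 0, 1)[i] + r*(1, 2, 1)[i] for i in range(3))

for r in range(-5, 6):
    assert user_form(r) == book_form(r)
print("identical points for every tested parameter")
```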
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What happens when a matrix is moved to the other side of the equation? I have been doing several questions related to matrices. I have a question asking me to simplify the expression (knowing that $A,X,B$ are square matrices of the same order): $AXB=(AB)^2$ $AXB=A^2B^2$ Then I am wondering where I should put $A$ and $B$ when I move them to the other side (as matrix multiplication is not commutative): Should it be $X=A^2A^{-1}B^2B^{-1}$, which equals $X=AB$? However, the right answer is $X=BA$. Since they are of the same order, does $AB=BA$? I have always been confused by this kind of question. I have no idea how to properly move a matrix to the other side of an equation.
There are a couple errors here. First note that $(AB)^2 = (AB)\cdot (AB) = ABAB$, not $A^2B^2$. The bigger problem is with how you think about what you're doing. There is no mathematical notion of "moving one thing from one side of the equation to the other." There are only things which happen to look like they do this. The fundamental rule here is that you must do exactly the same thing to both sides. In this example, we have $$AXB=(AB)^2=ABAB.$$ We can now multiply by $A^{-1}$ on the left on both sides, this gives $$A^{-1}AXB = A^{-1}ABAB,$$ or $$XB = BAB,$$ since $A^{-1}A=I$. Now we can multiply by $B^{-1}$ on the right on both sides, to get $$XBB^{-1} = BABB^{-1},$$ and simplifying, this gives $$X=BA,$$ as desired. Further comments: Matrix Multiplication: Matrix multiplication is not commutative. I.e., it is not true in general that $AB=BA$ for matrices $A$ and $B$. Indeed, $AB$ and $BA$ need not even both be the same size or both make sense. Consider the following examples: * *$$A=\newcommand\bmat{\begin{pmatrix}}\newcommand\emat{\end{pmatrix}}\bmat 0 & 1 \\ 0 & 0 \emat, \quad B = \bmat 0 & 0 \\ 1 & 0 \emat, $$ then $$ AB = \bmat 1 & 0 \\ 0 & 0 \emat,\quad BA = \bmat 0 & 0 \\ 0 & 1 \emat. $$ *If $A$ is $2\times 3$ and $B$ is $3\times 2$, then $AB$ is $2\times 2$ and $BA$ is $3\times 3$. *If $A$ is $2\times 3$ and $B$ is $3\times 4$, then $AB$ is $2\times 4$, but $BA$ doesn't even make sense, and can't be multiplied because the sizes of the matrices don't line up. Invertibility of matrices: Not every matrix has an inverse, i.e., given a $n\times n$ matrix $A$, there isn't always a matrix $A^{-1}$ such that $AA^{-1}=A^{-1}A=I$. I've assumed that $A$ and $B$ are invertible in your question although it wasn't stated, because otherwise $BA$ is not the unique possibility for $X$, which is the given answer. This is important though, otherwise I couldn't have multiplied by $A^{-1}$ and $B^{-1}$ in the solution. 
Also even though nonsquare matrices cannot be invertible in the sense given above, they can have one sided inverses. I.e., if $A$ is $n\times m$, with $n\ge m$, there may be a matrix $B$ with $BA = I_m$, and if $m\ge n$, there may be a matrix $C$ with $AC = I_n$. These can also be very useful at times.
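A quick numerical spot-check of the solution $X = BA$. Note that $A(BA)B = (AB)(AB)$ actually holds for any square $A,B$, invertible or not; invertibility is what makes $X$ unique. Random matrices also show that the naive guess $X = AB$ fails, since random $A$ and $B$ almost surely do not commute:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

X = B @ A
assert np.allclose(A @ X @ B, (A @ B) @ (A @ B))  # X = BA works

# the naive guess X = AB gives A(AB)B = A^2 B^2, which differs
# from (AB)^2 = ABAB unless A and B happen to commute:
print(np.allclose(A @ (A @ B) @ B, (A @ B) @ (A @ B)))  # almost surely False
```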
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many integral solutions does $2x + 3y + 5z = 900$ have when $ x, y, z \ge 0$? Solution: Let $2x + 3y = u.$ Then we must solve $\begin{align} u + 5z = 900 \tag 1 \\ 2x + 3y = u \tag 2 \end{align}$ For $(1),$ a particular solution is $(u_0, z_0) = (0, 180).$ Hence, all the integral solutions of $(1)$ are $\begin{cases} u = 5t \\ z = 180 - t \end{cases} (t \in \mathbb Z)$ Substituting $u = 5t$ into $(2)$ gives $2x + 3y = 5t$ whose particular solution is $(x_0, y_0) = (t, t).$ Hence all the integral solutions of $(2)$ are $\begin{cases} x = t - 3s \\ y = t + 2s \end{cases} (t \in \mathbb Z)$ Thus all the integral solutions of $2x + 3y + 5z = 900$ are given by $$\begin{cases} x = t - 3s \\ y = t + 2s \\ z = 180 - t \end{cases} (s,t \in \mathbb Z)$$ Now suppose $x, y, z \ge 0.$ Note, $180 - t \ge 0 \implies t \le 180$ and so $t + 2s \ge 0 \implies s \ge -90$ and $t - 3s \ge 0 \implies s \le 60.$ Thus we have $-90 \le s \le 60.$ Consider $0 \le s \le 60.$ Now $t \le 180, \ t \ge 3s \implies 3s \le t \le 180$. Thus in this range of $s$, there are $180 - 3s + 1 = 181 - 3s$ of $t$'s. Consider $-90 \le s < 0.$ Now $t \le 180, \ t \ge -2s \implies -2s \le t \le 180$. Thus in this range of $s$, there are $180 + 2s + 1 = 181 + 2s$ of $t$'s. Range $0 \le s \le 60$ has the following points: $(0, 181 - 3(0)), \ (1, 181 - 3(1)), (2, 181 - 3(2), \ldots (60, 181 - 3(60))$ of which there are $61.$ The range $-90 \le s < 0$ must have $91$ points. In sum, we have $61 + 91 = 152$ points for $x, y, z \ge 0.$ My question: According to the book the answer is $\displaystyle{\sum_{s = 0}^{60}(181 - 3s) + \sum_{s = -90}^{-1}(181 + 2s) = 13651.}$ I don't understand why they took the sum of all $t$'s in the range of $s$. That means some of my denotations and labels above must be incorrect. Where's the mistake? Thanks. edit: I think I see my mistake. The number $181 - 3s$ is the number of $t$'s, not necessarily the form of $t$. 
Given that, the number of ordered pairs (in the given range) must be $(181 - 3s)*61$ by the product rule.
You can solve this problem by setting up a system of the form $Ax = b$, where $A$ is the transition matrix between $x$ and $b$. Let's consider this example. Introduce $F(m, i)$, where $F(m, 1)$ is the count of solutions of $2x = m$, $F(m, 2)$ is the count of solutions of $2x + 3y = m$, and $F(m, 3)$ is the count of solutions of $2x + 3y + 5z = m$. The following recurrences hold: F(m, 1) = F(m - 2, 1) F(m, 2) = F(m, 1) + F(m - 3, 2) = F(m - 2, 1) + F(m - 3, 2) F(m, 3) = F(m, 2) + F(m - 5, 3) = F(m - 2, 1) + F(m - 3, 2) + F(m - 5, 3) The vector b can be constructed as [F(m, 1), F(m - 1, 1), F(m, 2), F(m - 1, 2), F(m - 2, 2), F(m, 3), F(m - 1, 3), F(m - 2, 3), F(m - 3, 3), F(m - 4, 3)]. The vector x can be constructed as [F(m - 1, 1), F(m - 2, 1), F(m - 1, 2), F(m - 2, 2), F(m - 3, 2), F(m - 1, 3), F(m - 2, 3), F(m - 3, 3), F(m - 4, 3), F(m - 5, 3)]. The matrix A is easily constructed from x and b. The start vector x is [F(0, 1), F(-1, 1), F(0, 2), F(-1, 2), F(-2, 2), F(0, 3), F(-1, 3), F(-2, 3), F(-3, 3), F(-4, 3)], which has value [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]. And that finishes the setup. We are interested in F(900, 3). We need to calculate A^900 * x and take the appropriate entry b[5]. We can raise the matrix to that power in O(2 · lg(900) · 10³) operations via fast exponentiation.
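The book's total of $13651$ can also be confirmed independently with a short dynamic program (a standard coin-counting DP rather than the matrix-power method; this is my own small check, not taken from either answer):

```python
def count_solutions(coeffs, total):
    # dp[s] = number of nonnegative integer solutions summing to s
    # using the coefficients processed so far
    dp = [1] + [0] * total
    for c in coeffs:
        for s in range(c, total + 1):
            dp[s] += dp[s - c]
    return dp[total]

print(count_solutions([2, 3, 5], 5))    # 2: (x,y,z) = (1,1,0) and (0,0,1)
print(count_solutions([2, 3, 5], 900))  # 13651, matching the book
```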
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
A not-so-obvious proof of a vector property I want to show that if $\vec a \cdot \vec b=0$, then there must be some vector $\vec x$ such that $\vec x\times\vec b=\vec a$, where $\vec a ,\vec b,$ and $\vec x$ are nonzero vectors in $\Bbb R^3$. The reverse is obvious: if $\vec x\times\vec b=\vec a$ then $\vec b\cdot(\vec x\times\vec b)=\vec b\cdot\vec a$, where $\vec b\cdot(\vec x\times\vec b)=\vec x\cdot(\vec b\times\vec b)=0=\vec b\cdot\vec a$. But I can't find any way to use $\vec a \cdot \vec b=0$. I know the statement is true. Please don't give me a merely intuitive argument, thanks.
Sketch: * *The case where $\vec{a}$ is $\vec{0}$ is trivial, so this case can be dealt with easily. *The case where $\vec{b}$ is $\vec{0}$ is a counterexample (unless $\vec{a}$ also equals $\vec{0}$), so we must assume that $\vec{b}\not=\vec{0}$. *Observe that $\vec{a}\cdot\vec{b}=0$ means that $\vec{a}$ and $\vec{b}$ are perpendicular. In particular, they point in different directions. *Let $\vec{y}=\vec{a}\times\vec{b}$. Then, $\vec{y}$ is perpendicular to both $\vec{a}$ and $\vec{b}$. Since $\vec{a}$ and $\vec{b}$ do not point in the same direction, $\vec{y}$ is not $\vec{0}$. *Consider $\vec{y}\times\vec{b}$. This is a vector perpendicular to both $\vec{y}$ and $\vec{b}$. *Note that the set of vectors perpendicular to both $\vec{y}$ and $\vec{b}$ are on a line. This line includes $\vec{a}$. Now, scale $\vec{y}$ appropriately to get $\vec{x}$.
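The scaling step of the sketch can be made explicit. By the triple-product identity $(\vec b\times\vec a)\times\vec b = (\vec b\cdot\vec b)\,\vec a - (\vec a\cdot\vec b)\,\vec b$, the choice $\vec x = (\vec b\times\vec a)/|\vec b|^2$ works whenever $\vec a\cdot\vec b = 0$ and $\vec b\neq\vec 0$. A numerical spot-check of this formula (the sample vectors are arbitrary, chosen with $\vec a\cdot\vec b = 0$):

```python
import numpy as np

def solve_x(a, b):
    # assumes a . b == 0 and b != 0
    return np.cross(b, a) / np.dot(b, b)

a = np.array([1.0, 2.0, 0.0])
b = np.array([2.0, -1.0, 3.0])   # note a . b = 2 - 2 + 0 = 0
x = solve_x(a, b)
assert np.allclose(np.cross(x, b), a)
print(x)
```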
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $f$ is a homeomorphism Let $f: (M,d) \rightarrow (N,\rho)$ be a one-one and onto mapping. Prove that the following are equivalent: * *$f$ is a homeomorphism. *$g: N\rightarrow \mathbb{R}$ is continuous if and only if $g\circ f: M \rightarrow \mathbb{R}$ is continuous. To show $1 \implies 2$ I have proceeded in the following manner. $f$ is a homeomorphism $\implies$ $f$ is continuous from $M \rightarrow N$ $\implies$ for any sequence $(x_{n})$ in $M$ converging to $x$ we have $f(x_{n}) \rightarrow f(x)$. Let $g$ be continuous from $N \rightarrow \mathbb{R}$. Therefore, $g(f(x_{n})) \rightarrow g(f(x))$. Hence, $g\circ f$ is continuous on $M$. Since statement 2 has an "if and only if", I next assumed that $f$ is a homeomorphism and $g\circ f$ is continuous and proved that $g$ is continuous. Now, I am not sure how I have to show $2 \implies 1$ Source: Real Analysis by N.L. Carothers, Page 72, Problem 54
A hint for $\ 2\implies 1\ $ is to take $\ g(y) = \rho\left(y,f(x_0)\right)\ $ for any fixed $\ x_0\in M\ $, and show that the continuity of $\ g\circ f\ $ implies the continuity of $\ f\ $ at $\ x_0\ $. Edit: I originally missed the OP's statement that the proof of the continuity of $\ g\ $ following from $1$ and the continuity of $\ g\circ f\ $ had already been done. So I was incorrect in my earlier statement that this was still needed to complete the proof of $\ 1\implies 2\ $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3362962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is a line homeomorphic to a disk? This is a question from an interview. I am confused about this problem. I said yes in that interview because I remember something about the Hilbert curve (I mean, is there a line that can fill a square completely?), but I am not sure. Is it right?
No. Suppose a line $L$ were homeomorphic to a disk $B(x,R)$. Then, removing a point $y \in L$, we would have that $B(x,R) \setminus \{\text{point}\}$ is homeomorphic to $L\setminus \{y\}$. This cannot be true, because the line minus a point is disconnected while the disk minus a point is still connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Convergence of $x_{n+1} = \frac{(x_n)^3 + 3ax_n}{3(x_n)^2+a} $ Let $a \geq 0$ and let $(x_n)_n$ be a sequence such that $$x_{n+1} = \frac{(x_n)^3 + 3ax_n}{3(x_n)^2+a} $$ with $x_0 \geq 0$. Does this sequence converge to $\sqrt{a}$ for every $x_0 \geq 0$? If $x_0 > \sqrt{a}$, then we can easily prove that the derivative of the function $ \displaystyle x \mapsto \frac{x^3+3ax}{3x^2+a} $ is bounded above by $\frac{1}{3}$ and below by $0$, so we get that $x_n \to \sqrt{a}$ as $n \to \infty$. However, I don't know how to approach this problem if $x_0 < \sqrt{a}$. I think that $x_n \not\to \sqrt{a}$ as $n \to \infty$ if $x_0 < \sqrt{a}$. Using the same idea as before, we can get that $x_n < \sqrt{a}, \forall n \in \mathbb{Z}_+$. Then I tried to prove that the sequence is decreasing, so it will be convergent, but with limit smaller than $\sqrt{a}$. However, I have not been able to do that. How should I proceed?
The map $\displaystyle\;\varphi: x \mapsto \frac{x^3 + 3ax}{3x^2 + a}$ has $3$ fixed points: $0, \pm \sqrt{a}$. Instead of the sequence $x_{n+1} = \varphi(x_n)$, one can study auxiliary sequence of the form $z_n = f(x_n)$ where $f(x)$ is a rational function over $x$ depending on the fixed points of $\varphi$. If one is lucky, with suitable choice of $f(x)$, the recurrence sequence of $z_n$ will be significantly simpler than that of $x_n$. This will allow us to extract the asymptotic behavior of $x_n$ more easily. In this case, if one define two auxiliary sequences $y_n, z_n$ by $$x_n = \sqrt{a} y_n\quad\text{ and }\quad z_n = \frac{1-y_n}{1+y_n} \iff y_n = \frac{1-z_n}{1+z_n}$$ One can verify $z_n$ satisfies a recurrence relation $$z_{n+1} = z_n^3, \forall n \quad\implies\quad z_n = z_0^{3^n}$$ For any $x_0 > 0$, $|z_0| < 1$ and hence $$\lim_{n\to\infty} z_n = \lim_{n\to\infty} z_0^{3^n} = 0 \implies \lim_{n\to\infty} y_n = 1 \implies \lim_{n\to\infty} x_n = \sqrt{a}$$ When $x_0 = 0$, $z_0 = 1$ and hence all $z_n = 1 \implies y_n = 0 \implies x_n = 0$. In this case, the limit is $0$ instead.
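Numerically the convergence is very fast, since the auxiliary sequence shows the "error" $z_n$ cubes at every step. A small experiment with an arbitrary value of $a$ and an arbitrary positive seed:

```python
import math

a = 2.0
x = 0.1  # any positive starting value
for _ in range(10):
    x = (x**3 + 3*a*x) / (3*x**2 + a)
print(x, math.sqrt(a))  # both ≈ 1.41421356..., agreement to machine precision
```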
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the importance of Young tableaux in mathematics? I don't know much about combinatorics; I'm just getting started on this. I want to know: why are Young tableaux important, and why is it important to relate them to matrices? Thank you very much.
Any polynomial representation of $GL(n,\mathbb{C})$ has a weight-space decomposition, which is basically the simultaneous eigenspaces for the diagonal matrices (i.e., the maximal torus). Now a lot of information about the representation is encoded in the dimensions of these weight spaces. So we are interested in finding out the dimensions of these weight spaces. This is done by writing out the character of the representation. In particular, since every representation can be decomposed as a sum of irreducible representations and the character then becomes the sum of characters, we just want to know the character of each irreducible representation. The finite-dimensional irreducible polynomial representations of $GL(n,\mathbb{C})$ are in one-to-one correspondence with the partitions with at most $n$ parts. Let us write $V(\lambda)$ for the irreducible representation corresponding to a partition $\lambda$. There exist several formulas for finding the character of $V(\lambda)$, for example the Weyl character formula. But the problem with this is that it involves sums with mixed signs, and can take quite a while to compute by hand. On the other hand, since all we are finding out is the dimensions of weight spaces, which are nonnegative integers, we might ask ourselves: is there a way to compute the dimensions combinatorially, i.e., in this case, by counting something? And here comes the role of Young tableaux. The number of semistandard Young tableaux of shape $\lambda$ and weight $\mu$ is the dimension of the weight space $V(\lambda)_{\mu}$, and this also gives the combinatorial definition of the Schur polynomials, which among other things are the characters of irreducible polynomial representations of $GL(n,\mathbb{C})$. In addition, Young tableaux can be used to give the Littlewood-Richardson rule, which answers the question of how the tensor product of two irreducible representations of $GL(n,\mathbb{C})$ decomposes as a sum of irreducibles. And all this is just the tip of the iceberg.
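For a tiny concrete instance of "count something": the irreducible $GL(3,\mathbb C)$-representation for $\lambda = (2,1)$ has dimension $8$ (it is the adjoint representation), and that matches a brute-force count of semistandard Young tableaux of shape $(2,1)$ with entries in $\{1,2,3\}$. A minimal script hard-coded to this one shape (my own illustration, not a general tableau library):

```python
from itertools import product

def is_ssyt(top, bottom):
    # rows weakly increase left to right; columns strictly increase downward
    return top[0] <= top[1] and top[0] < bottom[0]

# shape (2,1): top row (a, b), bottom row (c,), entries from {1, 2, 3}
count = sum(is_ssyt((a, b), (c,))
            for a, b, c in product(range(1, 4), repeat=3))
print(count)  # 8
```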
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to find all functions satisfying $f(f(x)+xy)=f(x)+xf(y),\forall x,y\in \mathbb R$ Find all functions $f:\mathbb R\to \mathbb R$ such that $$f(f(x)+xy)=f(x)+xf(y),\forall x,y\in \mathbb R$$ Let $x=y=0$; then $$f(f(0))=f(0)$$
Let us denote $f^0(x) = x$ and, for each positive integer $n$, $f^n(x)=f(f^{n-1}(x))$. It is easily proven by direct calculation that for every positive integer $n$ and every real $x$, $f^n(x)=f(x)+xf(0)$. This fact implies that $f(f(x)-x)=f(x)-x +f(x)f(0)-xf(0)$. But substituting $y = -1$, for every real $x$, provides $f(f(x)-x)=f(x)+xf(-1)$. Thus we get that $[1+f(0)+f(-1)]x=f(x)f(0)$, which shows that $f(0)=0$. Using $f(0)=0$, we can see that for every positive integer $n$ and every real $x$, $f^n(x)=f(x)$. So $f(f(x)-x)=f(x) - x$. Since $f(f(x)-x)=f(x)+xf(-1)$, this leads to $f(-1)=-1$. Now we have that for every real $x$, $f(f(x)-x)=f(x)- x$, and this implies that $f(f^2(x)-f(x))=f^2(x)-f(x)$, which is equivalent to $f(x)-x=f(0)=0$. Therefore $f(x)=x$. Notice that in the case $f(x)=C$ for a real constant $C$, we get that $C=C+Cx$. Thus $C = 0$ and $f(x)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
$ab+ac+bc \equiv 1 \bmod abc$ or "easy Chinese remainder theorem problems" When teaching students about the Chinese remainder theorem, it is traditional to ask them questions like: "An integer $n$ is equivalent to $r_1 \bmod m_1$, to $r_2 \bmod m_2$ and to $r_3 \bmod m_3$. Compute $n \bmod m_1 m_2 m_3$." For example, There are certain things whose number is unknown. If we count them by threes, we have two left over; by fives, we have three left over; and by sevens, two are left over. How many things are there? -- Sunzi Suanjing, 3rd century CE. If it happens that $m_1 m_2 \equiv 1 \bmod m_3$, $m_1 m_3 \equiv 1 \bmod m_2$ and $m_2 m_3 \equiv 1 \bmod m_1$, then the question is particularly easy to answer: One has $n = r_1 m_2 m_3 + r_2 m_1 m_3 + r_3 m_1 m_2$. I noticed this when preparing for my class today and planning to ask such a question with $(m_1, m_2, m_3) = (2,3,5)$, which has this property. Note that we can rewrite this property as $m_1 m_2 + m_1 m_3 + m_2 m_3 \equiv 1 \bmod m_1 m_2 m_3$. So, for the fun of it, here is my question: Can we describe all $k$-tuples of integers $(m_1, m_2, \ldots, m_k)$ such that $$m_1 m_2 \cdots m_{j-1} m_{j+1} \cdots m_k \equiv 1 \bmod m_j$$ for $1 \leq j \leq k$?
The trick is nice, but I am afraid it is hard to find any tuples except for $(2,3,5)$. I tried brute force with 3 mods. For mods below 1000, only $(2,3,5)$ satisfied the condition. However, even having only 1 pair with $m_1\,m_2 \bmod m_3 = ±1$ helps. For example, $x ≡ a \bmod 5, x ≡ b \bmod 7, x ≡ c \bmod 9$ Since $5\times7 = 4\times9 - 1$, do mod 9 last. Let $x'$ be one solution to $x ≡ a \bmod 5, x ≡ b \bmod 7$ $x' = a + 5k' ≡ a - 2k' ≡ b \bmod 7$ $k' = 2^{-1}(a - b) \bmod 7$ We leave $2^{-1} \bmod 7$ un-evaluated. You'll see why later ... Let $x''$ be one solution to all three mods $x'' = x' + 35k'' ≡ x' - k'' ≡ c \bmod 9$ $k'' = x' - c$ $x'' = x' + 35(x'-c) = 18(2x') - 35c = 18(2a + 5(a-b)) - 35c = 126a-90b-35c$ No inverse calculations were needed! $$x ≡ x'' ≡ 126a-90b-35c \bmod 315$$
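The brute force mentioned in this answer is easy to reproduce (shown here with a smaller bound for speed; the search is over triples of distinct moduli $\ge 2$):

```python
from itertools import combinations

def has_property(ms):
    # each complementary product must be ≡ 1 modulo the remaining modulus
    prod = 1
    for m in ms:
        prod *= m
    return all((prod // m) % m == 1 for m in ms)

hits = [t for t in combinations(range(2, 101), 3) if has_property(t)]
print(hits)  # [(2, 3, 5)]
```

In fact a short estimate confirms the search: for distinct $2\le m_1<m_2<m_3$, the sum $m_1m_2+m_1m_3+m_2m_3-1$ is less than $3m_2m_3$, so being a positive multiple of $m_1m_2m_3$ forces $m_1=2$ and then $(m_2-2)(m_3-2)=3$, giving $(2,3,5)$ only.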
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
How can I find an approximation of a smooth function $R^2 \to R^4$ such that when restricted to $S^1$ it is an immersion? I am having trouble with the following exercise: Given a smooth function $f: R^2 \to R^4$ and $S^{1}$ embedded in $R^2$, then $\forall \epsilon >0$ there exists a smooth function $f_{1}$ such that $sup|f(x)-f_{1}(x)|< \epsilon$, which is an immersion when restricted to $S^1$.
Here's a sketch of an elementary solution if you know the transversality theorem (e.g., see Guillemin & Pollack or "Parametric transversality theorem" here). (1) Observe that you only need to perturb $f$ near $S^1$ to obtain the result. So let $\rho\colon \Bbb R^2\to\Bbb R$ be a smooth function that is $1$ in a small neighborhood of $S^1$ and $0$ outside a slightly larger neighborhood. (2) When $A$ is a $4\times 2$ matrix, let $g_A(x) = f(x)+Ax$. Note that $h_A(x) = (dg_A)_x = df_x + A$. Since the matrices of rank $1$ in the space of $2\times 4$ matrices form a submanifold $Z$ of dimension $5$, if a map $h_A$ on $S^1$ is transverse to $Z$, it must be transverse by default and have image disjoint from $Z$. Likewise for the $0$ matrix. So, by the transversality theorem, $h_A(S^1)$ will be disjoint from $Z\cup \{0\}$ for almost every $A$, and this means that for such $A$, $g_A$ will be an immersion on $S^1$. Choose such an $A$ so that $|Ax|<\epsilon/2$ for all $x\in S^1$. (3) Take $f_1 = f + \rho A$. I leave it to you to work out the details (including a careful definition of the small and slightly larger neighborhoods). After all, this is an exercise for you :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Problem regarding bacteria increase modeled by formula While studying "Basic Mathematics", p. 358, by Serge Lang, I came across this exercise. I don't know how to start it. Any insight would be appreciated.
Hint: First note that $C=B(0)=10^6$. Next, you can rewrite this equation as $\;\dfrac{B(t)}{B(0)}=\mathrm e^{kt}$, and you have to find $k$. You're given that $$\frac{B(12)}{B(0)}=2. $$ When you have $k$, you'll just have to solve $$\frac{B(t)}{B(0)}=10.$$ Can you finish the calculations?
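Carrying the hint through numerically, with the two equations from the hint (doubling after $12$ time units, and a target factor of $10$; the exercise statement itself is only shown as an image above, so these are the values the hint implies):

```python
import math

# B(t)/B(0) = e^(k t); doubling at t = 12 gives k = ln(2)/12
k = math.log(2) / 12

# time for a tenfold increase: solve e^(k t) = 10
t = math.log(10) / k
print(t)  # ≈ 39.86, i.e. t = 12 * log2(10)
```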
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
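The hint above can be carried through numerically; here is a quick sketch (the doubling time $12$ and the target factor $10$ are the numbers the hint uses — the time unit is whatever Lang's exercise specifies):

```python
import math

C = 10**6                 # B(0), as in the hint
k = math.log(2) / 12      # from B(12)/B(0) = e^(12k) = 2
t = math.log(10) / k      # solves B(t)/B(0) = e^(kt) = 10

print(t)  # about 39.86: a bit more than three doubling periods, since 10 ≈ 2^3.32
```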
Proof that a function is nonzero given constraints on $x$ Let $f(x)=a_{n} x^{n}+a_{n-1} x^{n-1}+\cdots+a_{1} x+a_{0}, \;\left(a_{n} \neq 0\right) .$ Let $A=\max \left\{\left|a_{0}\right|,\left|a_{1}\right|, \ldots,\left|a_{n}\right|\right\}$. Let $f(x)=a_{n} x^{n}+a_{n-1} x^{n-1}+\cdots+a_{1} x+a_{0}, \left(a_{n} \neq 0\right) .$ and let $B=n A /\left|a_{n}\right|,$ Show that $f(x) \neq 0$ if $|x|>B .$ I have been trying to do this by induction and have successfully proved the base case but am unable to prove the induction step. Can someone point me in the right direction?
Note first that $B=nA/|a_n|\ge n\ge1$, since $A\ge|a_n|$. For $|x|>B$ we have $$\frac{|f(x)|}{|a_n||x|^n}\ge 1-\left|\frac{a_{n-1}}{a_n}x^{-1}\right|-\cdots-\left|\frac{a_0}{a_n}x^{-n}\right|> 1-\frac Bn\left(B^{-1}+B^{-2}+\cdots+B^{-n}\right)\ge1-\frac Bn\left(nB^{-1}\right)=0,$$ where the middle inequality uses $A/|a_n|=B/n$ together with $|x|^{-k}<B^{-k}$, and the last uses $B\ge1$, so $B^{-k}\le B^{-1}$. Hence $f(x)\neq0$ whenever $|x|>B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3363975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
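The inequality chain is easy to spot-check numerically; a sketch with an arbitrary sample polynomial (the coefficients are my own choice, not part of the problem):

```python
# Claim: |x| > B = n*A/|a_n|  implies  f(x) != 0, because the leading term
# dominates:  |a_n||x|^n > |a_{n-1}||x|^(n-1) + ... + |a_0|.
coeffs = [3, -7, 0, 2, -5]     # a_0, a_1, a_2, a_3, a_4, so n = 4
n = len(coeffs) - 1
A = max(abs(c) for c in coeffs)
B = n * A / abs(coeffs[-1])    # here B = 4*7/5 = 5.6

for x in (B + 0.01, -(B + 0.01), 2 * B, -10 * B):
    lead = abs(coeffs[-1]) * abs(x) ** n
    rest = sum(abs(coeffs[i]) * abs(x) ** i for i in range(n))
    assert lead > rest         # so f(x) cannot vanish there
```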
Find all $n$ for which $3n^2+3n+1$ is a perfect square. Find all natural numbers $n$ for which $3n^2+3n+1$ is a perfect square. I used discriminant method but failed. Then I found upper and lower bounds of this expression: Lower:$(n+1)^2$ Upper:$(2n)^2$ But, this too does not seem to be useful. Please help me.
Cool. I'll just fully answer the question. Starting with $ \ m^2=3n^2+3n+1\iff (2m)^2-3(2n+1)^2=1 \quad $. Lets call $ \ p=2m, \ \ $ $q=2n+1 \ \ $. So it's now $$p^2-3q^2=1$$ By a quick inspection the smallest solution is: $(p_0,q_0)=(2,1) \quad $. It can be used to find all others. Use it to write the number $1$ in a funny/arbitrary way, and multiplying our equation with this special $1$ will lead to the rest of the solutions. The back-substitution, $(p,q,p_0,q_0) \to (2m,2n+1,2,1)$, will wait until the end to elucidate how this algorithm can be applied to other situations involving pell-type equations: $$ \begin{align} p_0^2-3q_0^2&=1\\ (p_0-q_0 \sqrt 3)(p_0+q_0 \sqrt 3)&=1\\ (p_0-q_0 \sqrt 3)^2(p_0+q_0 \sqrt 3)^2&=1^2=1\\ \bigg[(p_0^2+3q_0^2)-(2p_0q_0) \sqrt 3\bigg]\bigg[(p_0^2+3q_0^2)+(2p_0q_0)\sqrt 3\bigg]&=1\\ \text{now multiply $p^2-3q^2=1$ by this "$1$" in the following way (factor it first):} & \\ (p-q \sqrt 3) \cdot \bigg[(p_0^2+3q_0^2)-(2p_0q_0) \sqrt 3\bigg]& \cdot \\ (p+q \sqrt 3) \cdot \bigg[(p_0^2+3q_0^2)+(2p_0q_0) \sqrt 3\bigg]&=1\\ &\vdots \\ \underbrace{\bigg[(p_0^2+3q_0^2)p+(2\cdot 3p_0q_0)q\bigg]^2-3\bigg[(2p_0q_0)p +(p_0^2+3q_0^2)q\bigg]^2=1}_{=p^2-3q^2=1}\\ \end{align} $$ Interpretable as $$(p_k,q_k) \xrightarrow{k \to k+1} \bigg((p_0^2+3q_0^2)p+(2\cdot 3p_0q_0)q \ \ , \ (2p_0q_0)p +(p_0^2+3q_0^2)q\bigg)$$ Evaluating $$ \begin{cases} p_k&=2m_k\\ p_0=2 \implies m_0&=1\\ q_k&=2n_k+1 \\ q_0=1 \implies n_0&=0\\ \end{cases} $$ Thus $$(2m_k,2n_k+1) \xrightarrow{k \to k+1} \bigg( (7)(2m_k)+(12)(2n_k+1) \ \ , \ (4)(2m_k) +(7)(2n_k+1)\bigg)$$ Or, finally, $$(m_k,n_k) \xrightarrow{k \to k+1} \bigg( 7m_k+12n_k+6 \ \ , \ 4m_k +7n_k+3\bigg)$$ Which is exactly @S.Dolan's ordered pair. Also expressable as $$ \begin{pmatrix} m_k \\ n_k \end{pmatrix} \xrightarrow{T} \begin{pmatrix} 7 & 12 \\ 4 & 7 \end{pmatrix} \cdot \begin{pmatrix} m_k \\ n_k \end{pmatrix} + \begin{pmatrix} 6 \\ 3 \end{pmatrix} $$ if you're into that sort of thing...
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
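The recurrence at the very end of the answer can be sanity-checked by machine; a quick sketch:

```python
# (m, n) -> (7m + 12n + 6, 4m + 7n + 3), starting from the seed (m, n) = (1, 0).
m, n = 1, 0
ns = []
for _ in range(8):
    assert m * m == 3 * n * n + 3 * n + 1   # every iterate solves m^2 = 3n^2 + 3n + 1
    ns.append(n)
    m, n = 7 * m + 12 * n + 6, 4 * m + 7 * n + 3

print(ns)  # [0, 7, 104, 1455, 20272, ...]
```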
Evaluate $\hat f(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)e^{-itx}dt$ for $f(t)=\sum_{n=0}^{\infty}\frac{1}{2^n}1_{[-2,2]}(t-2n)$ $\hat f(x):=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty}f(t)e^{-itx}dt$, for $f \in L^1(\mathbb R)$ Let $f(t)=\sum_{n=0}^{\infty}\frac{1}{2^n}1_{[-2,2]}(t-2n)$, where $1_{[\cdot,\cdot]}$ is the indicator function. I want to determine $\hat f(x).$ I think we need to interchange the series and the integral, which should be no problem because $\frac{1}{2^n}1_{[-2,2]}(t-2n) \ge 0$, so we can use Fubini. Now, using $1_{[-2,2]}(t-2n)=1_{[2n-2,2n+2]}(t)$: $\hat f(x)=\frac{1}{\sqrt{2\pi}}\sum_{n=0}^{\infty}\frac{1}{2^n}\int_{-\infty}^{\infty}1_{[2n-2,2n+2]}(t)e^{-itx}dt=\frac{1}{\sqrt{2\pi}}\sum_{n=0}^{\infty}\frac{1}{2^n}\frac{e^{-i(2n+2)x}-e^{-i(2n-2)x}}{-ix}$ Is that correct so far? I thought about using $\sin(x)=\frac{1}{2i}(e^{ix}-e^{-ix})$ and $\cos(x)=\frac{1}{2}(e^{ix}+e^{-ix})$ to simplify this term but I don't know if that will help
If you are also required to be rigorous, you have to justify the interchange of the integral and the infinite sum. You can do this by observing that your series converges in $L^{1}$ norm, and $s_n \to f$ in $L^{1}$ norm implies $\hat {s_n} \to \hat {f}$ pointwise. [Take $s_n$ to be the $n$-th partial sum of the series.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Perpendicular bisector that passes through a fixed point Points A and B are fixed, and point C moves on a circle such that ABC is an acute triangle. $AT = BT$ and $TM \perp AC, \, TN \perp BC$. How can I prove that all the perpendicular bisectors of $MN$ pass through a fixed point?
Hint: Let's denote a perpendicular bisector by (PB). If C coincides with A or B, then M and N coincide with A and B respectively, and the (PB) of MN is exactly the (PB) of AB. Also, when the triangle is isosceles, MN is parallel to AB, so again the (PB)s of MN and AB coincide. That is, the fixed point lies on the (PB) of AB. To find this point, extend the (PB) of MN to cross the (PB) of AB at P. It can be shown that the ratio $R=\frac{TP}{AB}$ is independent of the position of C. The angle $\alpha$ between PT and the (PB) of MN is always equal to the angle $\beta$ between AB and MN (or their extensions), because their rays are perpendicular. If $\angle CAB$ or $\angle CBA$ is $90^\circ$, then M or N lies on A or B respectively. Let's mark the intersection of the (PB) of MN with AB as Q. The right triangles ABC and PQT are similar. Since AB is constant, TP must be constant by the sine law. That is, the equality of the angles $\alpha$ and $\beta$ forces the (PB) of MN to always pass through the point P.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limit of this expression when n tends to infinity The limit is equal to: $$\lim_{n\to\infty} n^2 \int_0^1 \frac{1}{(1+x^2)^n} dx$$ P.S. What should be the better approach for such kinds of problems?
This might be a bit overkill but at least it works to show the divergence of the sequence and gives a nice additional result: * *Consider first $I_n = \int_0^1\frac{dx}{(1+x^2)^n}$. *To bring in somehow the exponential function substitute $nx^2 = y$: $$I_n =\frac{1}{2\sqrt{n}}\underbrace{\int_0^n\frac{dy}{\sqrt{y}\left(1+\frac{y}{n}\right)^{n}}}_{J_n := }$$ *Now, using DCT (dominated convergence theorem) gives $$J_n \stackrel{DCT:\;n \to \infty}{\longrightarrow}\int_0^{\infty}\frac{e^{-y}}{\sqrt{y}}dy = \Gamma\left(\frac{1}{2}\right) = \sqrt{\pi}$$ Hence, we see $$n^2I_n = n^2\frac{J_n}{2\sqrt{n}}= \frac{n^{\frac{3}{2}}}{2}\underbrace{J_n}_{\stackrel{n\to \infty}{\longrightarrow}\sqrt{\pi}} \stackrel{n\to \infty}{\longrightarrow}\infty$$ p.s.: The additional nice result is that $$\sqrt{n}I_n \stackrel{n\to \infty}{\longrightarrow}\frac{\sqrt{\pi}}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
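Both statements — $\sqrt n\,I_n\to\frac{\sqrt\pi}{2}$ and $n^2I_n\to\infty$ — can be checked with a simple quadrature. A sketch (composite Simpson rule; the step count is an arbitrary choice):

```python
import math

def I(n, steps=20000):
    """Composite-Simpson value of the integral of (1+x^2)^(-n) over [0, 1]."""
    h = 1.0 / steps
    total = 0.0
    f = lambda x: (1.0 + x * x) ** (-n)
    for i in range(steps):
        lo = i * h
        total += h / 6 * (f(lo) + 4 * f(lo + h / 2) + f(lo + h))
    return total

for n in (10, 100, 1000):
    In = I(n)
    # middle column drifts toward sqrt(pi)/2 ~ 0.886; last column grows like n^(3/2)
    print(n, math.sqrt(n) * In, n * n * In)
```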
$\int \sin x \cos x \ dx \neq \frac {\sin^2x}{2}$ if we use the identity $\sin A \cos B = \frac12[\sin(A-B) + \sin(A+B)]?$ How is it that we can get two different answers for an integral depending on whether we apply an identity or not? Typically, $$\int \sin x \cos x \ dx = \frac {\sin^2x}{2}+C~.$$ However if we apply the trigonometric identity $$\sin A \cos B = \frac12[\sin(A-B) + \sin(A+B)]~,$$ then the integral becomes $$\frac12 \int (\sin(0) + \sin(2x)) dx =\frac12 \int \sin(2x) dx = -\frac14 \cos(2x) $$ So we end up with a different answer. Have I made a mistake here, or is this just a property of integration/trigonometry?
You forgot the arbitrary constant in your second method. The non-constant part of the first result may be transformed as follows: $$\frac{\sin^2x}{2}=\frac{1-\cos 2x}{4}=-\frac14\cos 2x+C,$$ since $C$ is arbitrary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How to substitute a complex number in a complex function? I know for the regular cases, but what I am after is something like this: $\lim\limits_{z \to 1+5i} ix+y$ is it: i(1) + (5i) = 6i, or: i(1) + 5 = 5+i ?!
We usually define $z=x+yi$ so $x,\,y\in\Bbb R$, making the answer $5+i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $A,B\subseteq \Bbb R$ such that A and B are bounded, show that AB is bounded $AB=\lbrace ab \mid a\in A , b\in B \rbrace$. Let $M$ be an upper bound of $A$ and $N$ an upper bound of $B$. Then can we say that $MN$ is an upper bound of $AB$? And how can we prove that $AB$ is bounded?
No, $MN$ need not be an upper bound of $AB$. Take for example $A=B=[-2,-1]$: then $M=N=-1$ are upper bounds of $A$ and $B$, but $MN=1$, while $(-2)\cdot(-2)=4\in AB$. However if $m$ is such that $\vert a \vert \le m$ for all $a \in A$ and $n$ is such that $\vert b \vert \le n$ for all $b \in B$, then $\vert x \vert\le mn$ for all $x \in AB$. That proves that $AB$ is indeed bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove S is a subspace of $R^3$ Let $S=\{(a,b,c)\in \Bbb R^3 \mid 2a+b-3c=0\}$. Prove $S$ is a subspace of $\Bbb R^3$. I just asked another question like this earlier and this is just to confirm I'm on the right track when approaching problems like these. After starting this problem, I am finding myself confused when it comes to making sure it's closed under addition (so far). So far I think I have proved the set is not empty with: let $0=(0,0,0)$; then $2(0)+0-3(0)=0$, therefore the set is not empty. Let $u=(a_1,b_1,c_1)$ and $w=(a_2,b_2,c_2)$; then $u+w=(a_1+a_2,\,b_1+b_2,\,c_1+c_2)$. At this point, where does the definition $2a+b-3c=0$ come into play? How do I proceed from here to show closure under addition?
$$2a_1+b_1-3c_1=0$$ $$2a_2+b_2-3c_2=0$$ Thus $2(a_1+a_2)+(b_1+b_2)-3(c_1+c_2)=0+0=0$, so $u+w \in S$. Similarly prove that $\lambda u \in S$ for all $\lambda \in \Bbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3364951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove $y = z$ if $y^p = z^p$ where $p \in \mathbb{N}$ and $y, z \in \mathbb{R}_{++}$ As above. I am a beginner in analysis. I don't know how rigorously you have to prove things. To me it's obvious.
Let $f(x)=x^p$ and $p\in\mathbb N^*$ then $f'(x)=px^{p-1}>0$ for $x>0$. So $f \nearrow$ strictly on $(0,+\infty)$, which implies injectivity. Thus $f(x)=f(y)\implies x=y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Vectorization of a diagonal matrix Is there any representation for the vectorization of a diagonal matrix $\text{vec}(\text{Diag}(x))$? For example, for two elements $$\text{vec}(\begin{bmatrix} x_1 & 0 \\ 0 &x_2 \end{bmatrix}) = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} \ . $$ In general case, $\text{vec}(\text{Diag}(x)) = A x$, what is the closed form for matrix $A$?
For $x\in{\mathbb R}^{n}\,$ you can express the matrix $A\in{\mathbb R}^{n^2\times n}\,$ in index notation $$A_{ij} = \begin{cases} {\tt1}\quad{\rm if}\;(i+n=j+nj) \\ {\tt0}\quad{\rm otherwise} \\ \end{cases} $$ This translates verbatim into any language which permits array comprehensions. For example, to reproduce the matrix in your question, in Julia you could write n=2; A = [i+n==j+n*j for i=1:n*n, j=1:n] * 1 4×2 Matrix{Int64}: 1 0 0 0 0 0 0 1
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
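The same comprehension translates directly into plain Python (no NumPy needed); a sketch that also verifies $Ax=\operatorname{vec}(\operatorname{Diag}(x))$:

```python
def vec_diag_matrix(n):
    # A[i][j] = 1 iff i + n == j + n*j  (1-based indices, as in the answer)
    return [[1 if i + n == j + n * j else 0 for j in range(1, n + 1)]
            for i in range(1, n * n + 1)]

def vec_of_diag(x):
    # column-major vectorization of Diag(x)
    n = len(x)
    M = [[x[i] if i == j else 0 for j in range(n)] for i in range(n)]
    return [M[i][j] for j in range(n) for i in range(n)]   # stack columns

n = 3
x = [5, -2, 7]
A = vec_diag_matrix(n)
Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n * n)]
print(Ax == vec_of_diag(x))  # True
```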
Prove divergence of a product of sequences using contradiction If a sequence converges (not to zero) and another sequence diverges, prove by contradiction that the product of these sequences is divergent. This is a past test paper question that I am looking at and I really have no idea how to write the proof. My own working doesn't make any sense (even to me...) Would really appreciate some help!!
If $(a_n)$ converges to $L\neq 0$, then $\vert a_n \vert \ge \vert L \vert /2 >0$ for $n$ large enough, say $n\ge N$. Now if the sequence $(b_n)$ diverges, either it is unbounded, in which case $(a_n b_n)$ is unbounded as well (since $\vert a_nb_n\vert\ge\frac{\vert L\vert}{2}\vert b_n\vert$ for $n\ge N$) and therefore diverges; or $(b_n)$ is bounded (and doesn't converge), so it has at least two limit points $B_1,B_2$. Then $B_1 L$ and $B_2 L$ are two distinct limit points of $(a_n b_n)$ (distinct because $L\neq0$), so $(a_nb_n)$ can't converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expected number of "trips" for spatial Poisson process The appearance of an object of interest in a region of area $x$ units$^2$ follows a (two-dimensional) spatial Poisson process $N$ with constant rate $\lambda$ units$^{-2}$. An observer, who wishes to witness the object, can only view a fixed amount of total area, say $m$ units$^2$, in a single trip. If distinct areas are visited in each trip, how many trips should the observer expect to take before they have witnessed the object $k$ times? I am not sure how to formulate this problem. I can denote $B_i$ as the region visited in the $i$th trip. If I then denote the total area visited over all trips as $B$, I can write it like so: $$ B = \bigsqcup_{i=1}^{\ell}{B_i} $$ Where $\ell$ is the quantity that I want. Specifically, I believe I need the expected value of $\ell$, given that $N(B)=k$. Do you have any hints as to how I can properly formulate this question?
The probability of observing the object on the $i$th trip is equal to $$P(N(B_i) \ge 1) = 1-P(N(B_i) = 0) = 1 - e^{-\lambda \text{area}(B_i)}$$ since the areas are disjoint, and for a Poisson process the number of objects in disjoint intervals are independent, the number of trips till you observe the object once follows the distribution $$\text{Geometric}(1-e^{-\lambda m})$$ (we assumed that the $B_i$ have area $m$). Therefore the number of trips till you observe $k$ objects is the sum of independent geometric random variables, which is a distribution with a name that I'll let you research.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
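That distribution is the negative binomial, whose mean is $k/p$ with $p=1-e^{-\lambda m}$. A quick simulation sketch (the values of $\lambda$, $m$, $k$ below are made up for illustration):

```python
import math
import random

lam, m, k = 0.3, 2.0, 5            # made-up rate, area per trip, required sightings
p = 1 - math.exp(-lam * m)         # P(a trip's region contains at least one object)

def trips_until_k_sightings(rng):
    trips = sightings = 0
    while sightings < k:
        trips += 1
        if rng.random() < p:       # success: the object appears on this trip
            sightings += 1
    return trips

rng = random.Random(0)             # fixed seed for reproducibility
avg = sum(trips_until_k_sightings(rng) for _ in range(20000)) / 20000
print(avg, k / p)                  # simulated mean vs negative-binomial mean k/p ≈ 11.08
```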
Why isn't $\lim\limits_{x\to 3}h(x)=6$ true under these conditions? Let $\left(u_n\right)$ be a sequence with general term $$u_n=3+\frac{(-1)^n}{n}$$ and $h$ a real function such that $\lim h\left(u_n\right)=6$. This is a multiple choice question and I'm pretty sure the right answer is the following: If $h$ is continuous at $x=3$ then $h(3)=6$. But why can't we say that $\lim\limits_{x\to 3}h(x)=6$ (this is another option)? I'm thinking that if we take $n$ even or odd we could define two sequences such that $v_n\to 3^-$ and $w_n\to 3^+$, and that would mean $\lim\limits_{x\to 3}h(x)=6$. Am I wrong?
The point is this: $\lim_{n\to\infty}h(x_n)=6$ means that the sequence $(h(x_n))_{n\in\mathbb{N}}$ (with the given sequence $(x_n)_{n\in\mathbb{N}}$) converges to 6. But $\lim_{x\to 3}h(x)=6$ means that for every sequence $(x_n)_{n\in\mathbb{N}}\subset \mathbb{R}\setminus \{3\}$ that converges to $3$, the sequence $(h(x_n))_{n\in\mathbb{N}}$ converges to $6$. Obviously, the second statement cannot be deduced from the first one.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
(elementary?) proof of simplicial homology groups of $\Delta_n$ Using the (trivial) CW-complex structure of $\Delta_n$, I would like to compute the homology groups of $\Delta_n$. It's obvious that (for $k \leq n$) $C_k \simeq \mathbb{Z}^{{n+1 \choose k+1}}$, and that $im \delta_{k+1} \subset ker \delta_k$. It's the reverse inclusion for $k \geq 1$ that I'm not able to show. I know how to prove this using homotopy invariance of homology, as $\Delta_n$ is contractible, and a proof in that vein is given in a comment here Explicit calculation of simplicial homology - but I'm wondering if there is something more elementary? The CW-complex structure makes computing the homology groups of the sphere almost trivial, and I was hoping that it would be easy for the disk as well.
You should remember that it is difficult to construct cellular homology without singular homology (you must figure out how to define degrees of maps of spheres), and it is even harder to show it is a homeomorphism invariant. However, if you wish to sweep these things under the rug: the $n$-simplex is homeomorphic to the $n$-disk, which has a cell structure given by an $n$-cell attached along the identity to the $(n-1)$-sphere. The image of the $n$-cell under the boundary map is the $(n-1)$-cell, so the $(n-1)$-st homology is trivial (unless $n-1=0$, when degrees are tricky to define). The $n$-th homology is also trivial because the boundary map is an isomorphism here. The $0$-th homology is $\mathbb{Z}$ since the boundary map into it is $0$ (if $n>1$ because there are no $1$-cells, and if $n=1$ because the orientations of the boundary points of the circle cancel each other out). Thus it has the homology of a point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Determine values for which the general solution converges Textbook problem. Given the following general solution to a recurrence relation $$z_n = \alpha(1+\sqrt{3})^n + \beta(1-\sqrt{3})^n$$ For which values $\alpha, \beta$ does the solution converge? And determine the order of the rate of convergence for these values. By attempting to plot the sequence in some interval with varying values of $\alpha, \beta$, it seems like it will converge whenever $\alpha=0$ and $\beta \in (-\infty, \infty)$, but how can I go about determining this in a more rigorous way?
You are correct. For $\alpha \neq 0$ the sequence diverges: since $1+\sqrt{3}>1$, the term $\alpha(1+\sqrt{3})^n$ blows up, while $|1-\sqrt{3}|<1$ makes $\beta(1-\sqrt{3})^n \to 0$. Thus the sequence converges exactly when $\alpha=0$, for every $\beta \in \Bbb{R}$, and in that case it converges to zero — at a geometric (linear) rate, since the error is $|\beta|\,|1-\sqrt{3}|^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3365967", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
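A quick numerical look agrees (the particular $\alpha,\beta$ values below are arbitrary):

```python
import math

def z(n, alpha, beta):
    return alpha * (1 + math.sqrt(3)) ** n + beta * (1 - math.sqrt(3)) ** n

# |1 - sqrt(3)| ~ 0.732 < 1, so the beta term always dies out;
# 1 + sqrt(3) ~ 2.732 > 1, so any nonzero alpha makes |z_n| blow up.
print([abs(z(n, 0.0, 4.0)) for n in (5, 10, 20)])   # shrinking toward 0
print([abs(z(n, 0.1, 4.0)) for n in (5, 10, 20)])   # blowing up
```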
Is $\infty + (\infty/\infty)$ indeterminate? I know $ (\infty/\infty)$ is indeterminate, but it can't be less than $0$. So can you assume $\infty + (\infty/\infty)$ is determinate because $\infty + n$ where $n\ge 0$ is still $\infty$ ? The equation this question is based off of is $$\lim_{n \to \infty} \frac{n \log n + n}{\log n}.$$ This is in the context of big O notation. Would this be form be valid to use to determine the numerator's function is big Omega of the denominator? Or should l'hopitals rule be used to find a determinate and defined limit?
This is a very interesting question... I believe you are correct. Below is a proof. * *Let $\lim_{x \rightarrow a}[f(x)/g(x)]$ be some arbitrary indeterminate function such that $\lim_{x \rightarrow a}[f(x)/g(x)]=\infty/\infty$ where $x\in \mathbb{R}$ and $a$ is a finite real number. *Also let $\lim_{x \rightarrow b}[h(x)]=\infty$ where $x\in \mathbb{R}$ and $b$ is a finite real number. *As $x\rightarrow a$, we know that $f$ and $g$ either (i) grow at the same rate, (ii) $f$ grows faster than $g$, or (iii) $g$ grows faster than $f$. Case i. * *If $f$ and $g$ grow at the same rate, then $\lim_{x \rightarrow a}[f(x)/g(x)]=L$ where $L\in \mathbb{R}$. *Then, $\lim_{x \rightarrow b}[h(x)]+\lim_{x \rightarrow a}[f(x)/g(x)]=\infty+L=\infty$. Case ii. * *If $f$ grows faster than $g$, then $\lim_{x \rightarrow a}[f(x)/g(x)]=\infty$. *Then, $\lim_{x \rightarrow b}[h(x)]+\lim_{x \rightarrow a}[f(x)/g(x)]=\infty+\infty=\infty$. Case iii. * *If $g$ grows faster than $f$, then $\lim_{x \rightarrow a}[f(x)/g(x)]=0$. *Then, $\lim_{x \rightarrow b}[h(x)]+\lim_{x \rightarrow a}[f(x)/g(x)]=\infty+0=\infty$. Hence, in any case, $\lim_{x \rightarrow b}[h(x)]+\lim_{x \rightarrow a}[f(x)/g(x)]=\infty+(\infty/\infty)=\infty$ In this proof we are using limits to make sense of your statement, but I don't think you NEED limits to make sense of the statement. $\infty$ is simply a quantity increasing without limit... Some folks are saying $\infty/\infty$ is undefined; this is not true. It is indeterminate because the answer EXISTS, only it cannot be determined in the current form. By adding $\infty$ to the expression you obtain another expression that is suddenly in determinate form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
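As for the concrete limit behind the question: $\frac{n\log n+n}{\log n}=n+\frac{n}{\log n}$, which visibly diverges to $+\infty$ (so the numerator is indeed $\Omega$ of the denominator). A small numeric illustration:

```python
import math

def ratio(n):
    return (n * math.log(n) + n) / math.log(n)

# ratio(n) = n + n/log(n): it tracks n, so it grows without bound.
print([round(ratio(n)) for n in (10, 100, 1000, 10**6)])
```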
Let $f:[0,n]\to \Bbb R$ be continuous with $f(0)=f(n)$. Then there are $n$ pairs of numbers $x,y$ such that $f(x)=f(y)$ and $y-x\in\Bbb N$. Theorem. Let $f:[0,n]\to \Bbb R$ be continuous with $f(0)=f(n)$ ($n\in\Bbb N$). Then there exist (at least) $n$ distinct pairs of numbers $x,y$ which satisfy $f(x)=f(y)$ and $y-x\in \mathbb{N}$ (where $0$ is not a natural number). Partial results (see the two answers below): Proposition. For $f$ as in the Theorem there exists an $x\in[0,n-1]$ such that $f(x)=f(x+1)$. Proof. Define $g(x)=f(x+1)-f(x)$ where $x\in[0,n-1]$. Note that $\sum_{i=0}^{n-1}g(i)=f(n)-f(0)=0$. If all $g(i)=0$ then the proposition holds trivially. Otherwise there must be $i\neq j$ such that $g(i)$ and $g(j)$ have different signs. The proposition now follows from the Intermediate Value Theorem. Proposition. The Theorem holds under the additional assumption that $f$ is concave or convex. Proof. See the answer by @Maximilian Janisch. Remark. It is not true that for each $0<m\leq n$ there must exist an $x$ with $f(x)=f(x+m)$. For example, when $n\ge3$ one can arrange $f(x)>f(x+n-1)$ for every $x\in[0,1]$, and then there is no $x$ with $f(x)=f(x+n-1)$. On the other hand, for some $m$ there may be more than one $x$ satisfying $f(x)=f(x+m)$.
I thought it would be nice to present a direct proof, so pardon me for answering an old question. First extend $f$ periodically, then for $1\le j<n$, define $g_j(x)=f(x)-f(x+j)$. Notice $g_j$ is $n$-periodic because $f$ is. I will show each $g_j$ has at least 2 roots in $[0,n)$. First suppose $g_j$ has no roots. Since $g_j$ is continuous, it must be either positive or negative. If it is positive, by continuity and periodicity, there is some $c>0$ such that $g_j(x)\ge c$ for all $x$. Then $$nc \le \sum_{l=0}^{n-1} g_j(x+lj) = f(x) - f(x+nj) = 0,$$ because $f$ is $n$-periodic. This gives a contradiction. Hence, $g_j$ must have one root, call it $x_0$, so all $x_0 + n\mathbb{Z}$ are roots of $g_j$. The case $g_j$ negative is similar. Next, suppose for contradiction that $g_j$ has only one root in $[0,n)$. Then $g_j$ must be nonnegative or nonpositive by periodicity. Say $g_j$ is nonnegative. By assumption, all roots of $g_j$ are $x_0 + n\mathbb{Z}$, so every term below is strictly positive, and $$ 0 < \sum_{l=0}^{n-1} g_j(x_0+1/2+lj) = f(x_0+1/2) - f(x_0+1/2+nj) = 0,$$ a contradiction. Finally, notice each root $t$ of $g_j$ gives a pair of points $(t\mod n,\ t+j\mod n)$ satisfying the requirement, and the distance between them is either $j$ or $n-j$. For $1\le j < n/2$, the 2 roots of $g_j$ give distinct pairs. If $n$ is even, we also need 1 root of $g_{n/2}$. These in total give $n-1$ distinct pairs of points. Adding $(0,n)$ completes the question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
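The key step — each $g_j$ has at least $2$ roots in $[0,n)$ — can be watched numerically on a sample $f$ (the particular $f$ and $n=3$ below are my own choices):

```python
import math

n = 3
def f(x):                      # a continuous f with f(0) = f(n), extended n-periodically
    x = x % n
    return math.sin(2 * math.pi * x / n) + 0.4 * math.sin(4 * math.pi * x / n)

def roots_of_g(j, samples=3000):
    """Count sign changes of g_j(x) = f(x) - f(x+j) on one full period."""
    count = 0
    prev = f(0.0) - f(0.0 + j)
    for i in range(1, samples + 1):
        x = n * i / samples
        cur = f(x) - f(x + j)
        if prev == 0 or prev * cur < 0:
            count += 1
        prev = cur
    return count

print([roots_of_g(j) for j in (1, 2)])   # each g_j vanishes at least twice per period
```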
Show that each $k\in \mathbb{N}$ can be represented as $ k = S_m(a)+S_n(b)$ Let $a,b$ be non-negative integers and $m,n$ be positive integers. Define $$S_m(a)=0^m+1^m+2^m+...+a^m$$ and $$S_n(b)=0^n+1^n+2^n+...+b^n$$ Question: Show that each $k\in \mathbb{N}$ can be represented as $$ k = S_m(a)+S_n(b)$$ Example: $5= S_2(2)+S_n(0)$ and $6= S_1(3)+S_n(0) = S_1(2)+S_1(2)$. Edit: I just rewrote the whole question to make it simpler and better motivated; my apologies. I already asked about the representation $ k = S_1(a)-S_1(b)$ here
It appears that this claim is untrue. There are quite a few numbers which cannot be expressed in this manner, the smallest being $52$. Observe the terms $S_n(k) \leqslant 52$ First powers = $\{0,1,3,6,10,15,21,28,36,45\}$ Second powers = $\{0,1,5,14,30\}$ Third powers = $\{0,1,9,36\}$ Fourth powers = $\{0,1,17\}$ Fifth powers = $\{0,1,33\}$ Rest of the powers = $\{0,1\}$ Union of all such values = $\{0,1,3,5,6,9,10,14,15,17,21,28,30,33,36,45\}$ You can easily check that for all values $m$ in the union set, $52-m$ is not in the union set. Thus, $52$ cannot be represented in the required manner. The first few values of this sort are: $$52,77,89,116,118,147,152,179,187,197,...$$ This can be easily coded using the logic executed above for $52$.
S = [0]
i = 1
while i <= 10:
    add = 0
    j = 1
    while j <= 50:
        add = add + (j**i)
        S.append(add)
        j = j + 1
    i = i + 1
print(S)
for m in range(1, 1000):
    sm = 0
    for index in range(len(S)):
        if m - S[index] in S:
            sm = sm + 1
    if sm == 0:
        print(m)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\int (f)^n \, \textrm dx$ where $f$ is a polynomial and $n$ is a positive integer Besides expanding the integrand, is there some general method for solving indefinite integrals of the form $\int (f)^n \, \textrm dx$ where $f$ is a polynomial and $n$ is a positive integer? For example, $$\int (x^2 +x)^{100} \, \textrm dx?$$
$\int(x^2+x)^{100}~dx$ $=\int x^{100}(x+1)^{100}~dx$ $=\int x^{100}\sum\limits_{n=0}^{100}C_n^{100}x^n~dx$ $=\int\sum\limits_{n=0}^{100}C_n^{100}x^{n+100}~dx$ $=\sum\limits_{n=0}^{100}\dfrac{C_n^{100}x^{n+101}}{n+101}+C$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
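The same expansion works for any positive-integer exponent; here is a numeric check using the smaller exponent $5$ in place of $100$, purely to keep the numbers manageable:

```python
import math

N = 5   # stand-in for the exponent 100

def F(x):
    # antiderivative from the binomial expansion: sum_n C(N,n) x^(N+n+1)/(N+n+1)
    return sum(math.comb(N, n) * x ** (N + n + 1) / (N + n + 1)
               for n in range(N + 1))

# Compare F(1) - F(0) with a Simpson-rule quadrature of (x^2+x)^N on [0, 1].
steps = 2000
h = 1.0 / steps
g = lambda x: (x * x + x) ** N
quad = 0.0
for i in range(steps):
    lo = i * h
    quad += h / 6 * (g(lo) + 4 * g(lo + h / 2) + g(lo + h))

print(abs(F(1.0) - F(0.0) - quad))  # essentially 0
```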
Find the smallest positive integer x where 149|(x^2-69^3) Find the smallest positive integer $x$ such that $149\mid\left(x^2-69^3\right)$. I can hardly spot any clue about how to approach this question. It is from PUMaC-CHINA, 2019.8.17
This follows the comment by J. W. Tanner and the answer by sirous. All congruences are mod $149$. We have $x^2-69^3 \equiv x^2+36 =x^2+6^2 \equiv (x-6j)(x+6j)$, if $j^2\equiv -1$. Write $149 = 7^2 + 10^2$. Then $10j \equiv 7$ and so $j\equiv 105$. Thus $x^2-69^3 \equiv (x-34)(x+34)$ and so $34$ is the smallest positive solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
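A brute-force check confirms both residues and the minimality of $34$:

```python
# All residues x mod 149 with x^2 ≡ 69^3 (mod 149)
sols = [x for x in range(1, 150) if (x * x - 69 ** 3) % 149 == 0]
print(sols)   # [34, 115] — so 34 is the smallest positive solution
```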
Simple property of arrow notation Assume that $\kappa \to (\lambda)^r_s$ holds. Prove that if $r'\le r$, $\kappa \to (\lambda)^{r'}_s$ I proved similar cases like when $s'\le s$ etc. but I have no idea about this case. Actually I can't understand why that statment has to be true. Any help or suggestion is welcome!
Given a coloring of the $r'$-sets, define a coloring of the $r$-sets as follows. If $x_1\lt x_2\lt\cdots\lt x_r$, give $\{x_1,\dots,x_r\}$ the same color as $\{x_{r-r'+1},\dots,x_r\}$; i.e., a set of size $r$ gets the same color as the set of its $r'$ biggest elements. Suppose $H$ is a monochromatic set for the new coloring. If we drop the first $r-r'$ elements from $H$, we get a monochromatic set $H'$ for the original coloring, and $H'$ is as big as $H$ assuming $\lambda\ge\omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\frac{1}{m_1} +\frac{1}{n_1} +\frac{1}{m_2} +\frac{1}{n_2}+...+\frac{1}{m_{2011}} +\frac{1}{n_{2011}}$ When $a=1,2,3,...,2010,2011$ the roots of the equation $x^2 -2x-a^2-a=0$ are $(m_1,n_1), (m_2,n_2), (m_3,n_3),..., (m_{2010},n_{2010}), (m_{2011},n_{2011})$ respectively. Evaluate $\frac{1}{m_1} +\frac{1}{n_1} +\frac{1}{m_2} +\frac{1}{n_2}+...+\frac{1}{m_{2010}} +\frac{1}{n_{2010}} +\frac{1}{m_{2011}} +\frac{1}{n_{2011}}$ **My attempt** $\frac{1}{m_a} +\frac{1}{n_a}$ can be manipulated to give $$\frac{m_a+n_a}{m_an_a}.$$ Then, w.l.o.g. $m_a\ge n_a$, and since $$m_a=\frac{2+2\sqrt{a^2+a+1}}{2}$$ and $$n_a= \frac{2-2\sqrt{a^2+a+1}}{2},$$ we get $$\frac{m_a+n_a}{m_an_a}=\frac{-2}{a(a+1)}.$$ But I don't know what to do next. Suggestions and solutions would be appreciated. Taken from the 2011 IWYMIC
Hint: $$ \frac{-2}{a(a+1)} = \frac2{a+1} - \frac 2a $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
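Carrying the hint through, the sum telescopes to $\frac{2}{2012}-\frac{2}{1}=-\frac{2011}{1006}$. An exact-arithmetic check of both the root identity from the question and the final value:

```python
import math
from fractions import Fraction

# Root identity: for x^2 - 2x - a^2 - a = 0 the roots are 1 ± sqrt(a^2+a+1),
# so 1/m_a + 1/n_a = (m_a + n_a)/(m_a n_a) = 2/(-(a^2+a)) = -2/(a(a+1)).
for a in (1, 2, 3):
    r = math.sqrt(a * a + a + 1)
    m, n = 1 + r, 1 - r
    assert abs(1 / m + 1 / n + 2 / (a * (a + 1))) < 1e-12

total = sum(Fraction(-2, a * (a + 1)) for a in range(1, 2012))
print(total)   # -2011/1006
```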
Difference between a vector function and parametric equations What is the difference between a vector function and parametric equations? Both are capable of describing plane/space curves and both can indicate direction. The only notable difference I see is that a vector function includes a measured distance between a given point and the origin (magnitude of position vector that draws out a plane/space curve).
What do you mean that a vector function includes a measured distance (magnitude)? Vector functions and parametric equations are essentially the same thing. Sometimes, if you're working in a different coordinate system, like polar coordinates, you might see the magnitude $r$ appear in the functions. For example, $(x,y)=(\cos(t),\sin(t))$ for $t\in [0,2\pi]$, versus $(r,\theta)=(1,t)$ for $t \in [0,2\pi]$. A vector function says the exact same thing, just written as $\vec{v}(t)=\cos(t)\vec{i}+\sin(t)\vec{j}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3366963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Dense $G_\delta$ of open interval This is not a hard question; I just want to make sure whether it is possible or not. Let $I$ be an open interval of $\mathbb R$. The question: is it always possible to find a dense $G_\delta$ subset of $I$? I know it is possible to find a countable dense subset of $I$, since $I$ is a second countable subspace of $\mathbb R$. But the question is whether it is possible to have a dense $G_\delta$ subset of $I$. Any help will be appreciated.
Take any countable subset $C$ of $I$ and then $I\setminus C$ is a dense $G_\delta$. Or use the complement of a homeomorphic copy of a Cantor set (the standard Cantor set lies in $[0,1]\subseteq(-1,1)$ and all open intervals are homeomorphic, so we always have them).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What are the ways to bound a function in order to use the dominated convergence theorem? I need to bound $f(x,n)=\frac{\sqrt{n}}{1+n\ln(1+x^2)}$ on $(0,1)$. I tried to use the fact that $a^2+b^2\geqslant 2ab$, but the integral $\int\limits_0^1 \frac{1}{\sqrt{\ln(1+ x^2)}}\ dx$ is not convergent, so I can't use the dominated convergence theorem that way. Also, I know about analysis methods when a function is convex, but I don't know how to use them here. I know about Jensen's inequality, but I need to see how to use it to bound the function above by some integrable function.
You do not need to bound it. $f(x,n)$ is a decreasing sequence of functions and $\int_0^1f(x,1)\,dx <+\infty$. Let $g(x,n)=f(x,1)-f(x,n)$. Since $f(x,n) \to 0$ for all $x \in (0,1)$, we get $g(x,n) \to f(x,1)$ for all $x \in (0,1)$. Thus, by monotone convergence, $\int_0^1g(x,n)\,dx \to \int_0^1 f(x,1)\,dx$, and hence $\int_0^1 f(x,n)\,dx \to 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is every group the unit group of some ring? Let the functor $F\colon\bf Ring\rightarrow\bf Grp$ send the ring $A$ to its group of units $A^\times,$ and the ring homomorphism $f\colon A\rightarrow B$ to the group homomorphism $f^\times\colon A^\times\rightarrow B^\times:a\mapsto f(a)$. I was curious about this functor, and in particular, whether it is essentially surjective. That is, for any group $G$ (not just finite), is there a ring $A$ such that $G\cong A^\times$? If not, which groups $G$ satisfy this? A similar question was asked in this question, but what can be said for infinite groups, or groups in general? Calling groups that satisfy this condition R-groups, I have proved that any finitely generated abelian group is an R-group. I have no idea what to do from here, but I have a conjecture that the unit group of the group ring $\mathbb F_2[G]$ is isomorphic to $G.$ If this is true, certainly, all groups are R-groups, and the functor $F$ is essentially surjective, but I am having trouble proving it. Can anyone help me?
Your statement about $\mathbb{F}_2[G]$ is incorrect. Consider when $G = \mathbb{Z}_5$, generated by some element $a$ with $a^5 = e$. Then, $$(e + a^2 + a^3)(e + a + a^4) = (e + a^2 + a^3) + (a + a^3 + a^4) + (a^4 + a + a^2) = e + (a+a) + (a^2+a^2) + (a^3+a^3) + (a^4+a^4) = e$$ So, the unit group of $\mathbb{F}_2[G]$ includes the natural inclusion of $G$, but it also includes $e + a^2 + a^3$, as shown above. For reference on how I found this example: I can call the "weight" of an element in $\mathbb{F}_2[G]$ the number of nonzero coefficients, so both of the elements above have weight 3, while $e+a$ has weight 2. Clearly weights multiply, so if we want to end up with an odd weight element like $e$, we must start with two odd-weight elements; and we don't want to use elements of weight 1. So we need $|G|$ at least 3. With $G = \mathbb{Z}_3$, there is only one element with odd weight more than 1, and it doesn't square to $e$. So I jumped to $\mathbb{Z}_5$ and it worked.
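This multiplication is easy to verify mechanically. Below is a small sketch — the encoding (an element of $\mathbb{F}_2[\mathbb{Z}_5]$ as its set of exponents with nonzero coefficient) is my own, chosen for illustration:

```python
def multiply(u, v, n=5):
    """Multiply two elements of F_2[Z_n], each given as a set of exponents."""
    coeffs = [0] * n
    for i in u:
        for j in v:
            coeffs[(i + j) % n] ^= 1  # coefficients live in F_2
    return {k for k, c in enumerate(coeffs) if c}

e_plus = {0, 2, 3}   # e + a^2 + a^3
inverse = {0, 1, 4}  # e + a + a^4
print(multiply(e_plus, inverse))  # {0}, i.e. the identity e
```

So $e+a^2+a^3$ is a unit of weight $3$, exactly as computed in the answer.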
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 0 }
What is the minimum number of groups of three coins you can flip in order to flip every coin so that it is heads-down? Question: Suppose that there are seven coins arranged in a circle. Every coin is heads-up. Your goal is to flip every coin so that it is heads-down. You may, however, only flip groups of three adjacent coins. You may flip any such group of three coins, and you may flip as many such groups as you choose, one group at a time. What is the minimum number of groups of three coins you can flip in order to flip every coin so that it is heads-down? My attempt: Let $x$ be the number of groups to be flipped. So we can interpret the question as $$3x\equiv 0 \pmod 7.$$ One possible solution is $x=7$. So it requires at most 7 groups. On the other hand, clearly it requires at least $3$ groups. So the minimum is between $3$ and $7$. Then I am stuck here. Any hint is appreciated.
HINT, as requested. I don't think doing things mod $7$ helps. Instead I would do it mod $2$, to represent the fact that if you flip the same coin twice, that is equivalent to doing nothing. Since you want to flip every coin from Heads to Tails, any solution with $x$ moves (i.e. flipping $x$ groups) must satisfy: $$3x \equiv 7 \pmod 2$$ This of course just means $x$ must be odd. Now you need to analyze case by case: * *$x=1$ is insufficient. *$x=3$ is insufficient. (I'd say this is obvious, but if you really want a rigorous proof, then do a case analysis by whether the first $2$ groups overlap by $0, 1, 2$ or $3$ coins.) *$x=5$ is insufficient. This is the hardest case to analyse but there is a good trick. Hint: look at the complement / see the next bullet. *$x=7$ is sufficient. You need to prove this by construction. All of the above together would prove that minimum $x=7$. Can you finish from here, or do you need more hints?
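The case analysis can also be double-checked by brute force: with only $2^7$ coin configurations, a breadth-first search settles the minimum immediately. A sketch (state encoding is my own — bit $i$ records whether coin $i$ has been flipped an odd number of times):

```python
from collections import deque

N = 7
# The 7 moves: each flips three cyclically adjacent coins.
moves = [(1 << i) | (1 << ((i + 1) % N)) | (1 << ((i + 2) % N)) for i in range(N)]
target = (1 << N) - 1  # every coin flipped to heads-down

dist = {0: 0}
queue = deque([0])
while queue:
    state = queue.popleft()
    for m in moves:
        nxt = state ^ m  # flipping twice cancels, hence XOR
        if nxt not in dist:
            dist[nxt] = dist[state] + 1
            queue.append(nxt)

print(dist[target])  # 7
```

This agrees with the hint's conclusion: $1$, $3$ and $5$ moves are insufficient, and $7$ suffice.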
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Texts on Mathematical Billiards I want to study the theory of mathematical billiards, and was looking for a text for self-study. If you have a text in mind that is more general i.e. on dynamical systems as a whole but still contains some material on billiards, that would be great too. I'm an undergraduate with background in intermediate analysis, basic algebra, linear algebra, basic differential equations and applied complex analysis.
I think these are some books you might find interesting, regarding mathematical billiards at a relatively introductory level * *An Introduction to Mathematical Billiards, by Utkir A. Rozikov; *Chaotic Billiards, by Nikolai Chernov; *Geometry and billiards, by Sergei Tabachnikov. Different approaches and fascinating mathematics can also be found in * *Invariant Manifolds, Entropy and Billiards. Smooth Maps with Singularities, by Anatole Katok et al. Hope these prove useful.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Convolutions Support. It is known that: $$ \operatorname{supp}(u *v) \subset \operatorname{supp}(u) + \operatorname{supp}(v) $$ Where: $$ \operatorname{supp}(u) = \overline{\{x \in \mathbb{R}^n: u(x) \neq 0\}} $$ And: $$ (u*v)(x)=\int\limits_{\mathbb{R}^n} u(x-y)v(y)\,dy $$ I need an example of two functions $u,v$ such that $u$ has compact support, but $u*v$ has no compact support. Does anyone know any examples of this?
Let $u$ be any non-negative function with compact support which has the value $1$ on some open set. Let $v(x)=e^{-x^{2}}$. Then $(u*v)(x) >0$ for all $x$. So $u*v$ does not have compact support. For an explicit example let $u(x)=1$ for $0 \leq x \leq 1$, $0$ for $x \geq 1+\frac 1 n$ as well as for $x \leq -\frac 1 n$, $u(x)=1+nx$ for $-\frac 1 n \leq x \leq 0$ and $u(x)=1-n(x-1)$ for $1 \leq x \leq 1+\frac 1 n$. There is no such example where both $u$ and $v$ have compact support. If $u$ has support $K$ and $v$ has support $H$ then $u*v$ vanishes on the complement of $K+H$. Since the sum of two compact sets is compact, it follows that $u*v$ has compact support.
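For a concrete feel of the first construction, here is a numeric sketch (my own discretization, with $u$ replaced by the plain indicator of $[0,1]$ and the edge smoothing ignored): the convolution with the Gaussian stays strictly positive arbitrarily far from $\operatorname{supp}(u)$.

```python
import math

def conv_at(x, steps=2000):
    # (u*v)(x) = ∫_0^1 v(x - y) dy for u = indicator of [0, 1], v = Gaussian;
    # approximated by the midpoint rule.
    h = 1.0 / steps
    return sum(math.exp(-(x - (k + 0.5) * h) ** 2) for k in range(steps)) * h

for x in (0.5, 5.0, 10.0):
    print(x, conv_at(x))  # strictly positive at every x, however large
```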
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that the sublevel set of a quadratic function is convex Let $$C := \{ x \in \mathbb{R} : 2x^2 \le 1 \}$$ Prove that $C$ is convex. I started with the definition: for $ x_1, x_2 \in C$ $$ \lambda x_1 + (1-\lambda)x_2 = \dots $$ but didn't make any progress. How should I approach this problem?
Hint: I'd rewrite the set as $$C = \{x\in{\Bbb R}\mid -1/\sqrt 2\leq x\leq 1/\sqrt 2\} = \left[-\frac{1}{\sqrt 2};\frac{1}{\sqrt 2}\right].$$
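If you'd rather push the original definition-based attempt through directly, here is a sketch using convexity of $t\mapsto t^2$ (i.e. $(\lambda u+(1-\lambda)v)^2\le \lambda u^2+(1-\lambda)v^2$): for $x_1,x_2\in C$ and $\lambda\in[0,1]$,

```latex
2\bigl(\lambda x_1+(1-\lambda)x_2\bigr)^2
\le 2\bigl(\lambda x_1^2+(1-\lambda)x_2^2\bigr)
= \lambda\,(2x_1^2)+(1-\lambda)\,(2x_2^2)
\le \lambda\cdot 1+(1-\lambda)\cdot 1
= 1,
```

so $\lambda x_1+(1-\lambda)x_2\in C$, which is exactly the condition the definition asks for.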
{ "language": "en", "url": "https://math.stackexchange.com/questions/3367926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $f:(a,b)\to \mathbb{R}$ is differentiable in a point $x\in (a,b)$, show that $\lim\limits_{h\to 0}\frac{f(x+h)-f(x-h)}{2h}$ exists If $f:(a,b)\to \mathbb{R}$ is differentiable in a point $x\in (a,b)$, show that $\lim\limits_{h\to 0}\frac{f(x+h)-f(x-h)}{2h}$ exists and that it is equal to $f'(x)$. I figured that since the change in $x$ is described trough $2h$ that there is a $\delta$-neighbourhood around $x$ such that $|x-h|=|x+h|$. Sadly, I don't know how to make use of that argument in my proof. I know for a fact that the quotient describes to neighborhoods that, when $h\to 0$, squeeze the point.
Hint: Just consider $$\frac{f(x+h)-f(x-h)}{2h} =\frac{1}{2}\left(\frac{f(x+h)-f(x)}{h} + \frac{f(x)-f(x-h)}{h}\right) $$
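A small numerical aside (the examples are my own, not part of the question): the hint shows the symmetric quotient converges to $f'(x)$ wherever $f$ is differentiable, but the converse fails — for $f(x)=|x|$ the symmetric quotient at $0$ is identically $0$ although $f'(0)$ does not exist.

```python
import math

def sym_quotient(f, x, h):
    return (f(x + h) - f(x - h)) / (2 * h)

# Where f is differentiable, the symmetric quotient converges to f'(x):
for h in (1e-2, 1e-4, 1e-6):
    print(sym_quotient(math.exp, 1.0, h))  # approaches e = 2.71828...

# The converse fails: for f(x) = |x| the symmetric quotient at 0 is 0
# for every h, although f is not differentiable there.
print(sym_quotient(abs, 0.0, 1e-3))  # 0.0
```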
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove this inequality with Big O term? Let $s= s_0-\zeta_0^{-1/2}b^{-1/6}(1+\mathcal{O}(\sqrt{b}))$ where $\zeta_0 = \left(\frac{3\pi}{4}\right)^{2/3}$ and $s_0 = b^{-2/3}\zeta_0.$ Note that here $b$ is a parameter. We define $\omega(s) = \exp\left(\frac{2}{3}s_{+}^2\right)$ where $s_{+} = \max(0,s).$ I want to show that for some constant $C$ we have $$\omega(s) \leq C\exp\left(\frac{\pi}{2b}-\frac{1}{\sqrt{b}}\right)\leq C\exp\left(-\frac{1}{2\sqrt{b}}\right)\Sigma_{b}^{-1},$$ where $\Sigma_{b}^{-1}=\exp\left(\frac{\pi}{2b}-\frac{1}{\sqrt{b}}\right).$ The second inequality does not make sense to me, as $\exp\left(-\frac{1}{2\sqrt{b}}\right)\leq 1$, but I might be wrong. For the first inequality, $$\frac{2}{3}s_{+}^{2}\leq \frac{2}{3}|s|^2\leq \frac{4}{3}|s_0|^2+\frac{8}{3}(\zeta_0^{-1}b^{-1/3} + C\zeta_0^{-1}b^{2/3})$$ $$=\frac{4}{3}\left(\frac{3\pi}{4b}\right)^{4/3}+\frac{8}{3}\left(\left(\frac{4}{3\pi\sqrt{b}}\right)^{2/3} + C\left(\frac{4b}{3\pi}\right)^{2/3}\right)$$ where I used the inequality $|a+b+c|^2\leq 2|a|^2+4(|b|^2+|c|^2).$ I cannot see how to reduce this expression to derive the first inequality. Any hints would be much appreciated.
The second inequality never holds. The first inequality is equivalent to $\frac 23s_+^2\le \frac{\pi}{2b}-\frac{1}{\sqrt{b}}+D$ for some constant $D$. Since $\frac{\pi}{2b}-\frac{1}{\sqrt{b}}\ge -\frac 1{4\pi}$ for each $b\ge 0$, if $D\ge -\frac 1{4\pi}$ then the inequality holds for $s_+=0$. So it remains to consider an inequality $\frac 23s^2\le \frac{\pi}{2b}-\frac{1}{\sqrt{b}}+D$ $\frac 23\left(b^{-2/3}\zeta_0-\zeta_0^{-1/2}b^{-1/6}(1+\mathcal{O}(\sqrt{b}))\right)^2\le \frac{\pi}{2b}-\frac{1}{\sqrt{b}}+D$ For big $b$ this inequality can fail. For instance, when $\mathcal{O}(\sqrt{b})$ term equals $-1-\sqrt{b}$, then when $b$ tends to infinity, the left hand side of the inequality tends to infinity, whereas the right hand side is bounded. For small $b$ the inequality fails too. Indeed, when $b$ tends to zero, the left hand side of the inequality grows as $b^{-4/3}$, whereas the right hand grows as $b^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $A \in \mathbb{R}^{n\times n}$ and $\textbf{x}\in \mathbb{R}^{n}$why $A = \frac{1}{2} A + \frac{1}{2} A^\top $ is not always true? If $A \in \mathbb{R}^{n\times n}$ and $\textbf{x}\in \mathbb{R}^{n}$, It is possible to prove that $$ \textbf{x}^\top A \textbf{x} = \textbf{x}^\top(\frac{1}{2} A + \frac{1}{2} A^\top)\textbf{x} $$ That could let us think that $$ A = \frac{1}{2} A + \frac{1}{2} A^\top ? $$ However if we take $$A = \begin{bmatrix}1&2\\ 3&4 \end{bmatrix}$$ and compute $$B = \frac{1}{2} A + \frac{1}{2} A^\top$$ we get $$B = \begin{bmatrix}1&2.5\\ 2.5&4 \end{bmatrix}\ne A$$ Although $$ \textbf{x}^\top A \textbf{x} = \textbf{x}^\top B \textbf{x} $$ Here is my question: How can we explain that the following is not always true? $$ A = \frac{1}{2} A + \frac{1}{2} A^\top $$
It is true that if $$x^T A y = x^T B y$$ for all vectors $x,y$, then $A=B$. The proof is by plugging in basis vectors $e_i$ (with a $1$ in the $i$th entry, and zeroes elsewhere): $$A_{ij} = e_i^T A e_j = e_i^T B e_j = B_{ij}.$$ Now if you are forced to multiply by the same vector on both sides, instead of two arbitrary vectors, you can no longer conclude $$x^TAx = x^TBx \quad \not\Rightarrow \quad A=B,$$ and you have already found a counterexample. You can try to extend the above argument, and easily prove that $A_{ii} = B_{ii}$, but you have no way to isolate the off-diagonal terms.
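A quick numeric check of exactly this phenomenon, using the matrices from the question (plain Python, no libraries):

```python
A = [[1, 2], [3, 4]]
B = [[1, 2.5], [2.5, 4]]  # B = (A + A^T) / 2, the symmetric part of A

def quad_form(M, x):
    # x^T M x for a 2x2 matrix
    return sum(x[i] * M[i][j] * x[j] for i in range(2) for j in range(2))

for x in ([1, 0], [0, 1], [1, 1], [2, -3], [0.5, 1.7]):
    assert abs(quad_form(A, x) - quad_form(B, x)) < 1e-9

print(A == B)  # False: identical quadratic forms, different matrices
```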
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
the conjugates of r-cycles are r-cycles. Let $0<r \leq n$ be positive integers, and consider an $r$-cycle $\sigma=(j_1\ j_2...j_r) \in S_n.$ I want to show that $\tau \sigma \tau^{-1}=(\tau(j_1)\tau(j_2)...\tau(j_r))$. Let $\tau \in S_n.$ We can write $\sigma$ as $\sigma=(j_1j_r)(j_1j_{r-1})...(j_1j_2)$, so \begin{align*} \tau \sigma \tau^{-1}&=\tau \left((j_1j_r)(j_1j_{r-1})...(j_1j_2)\right)\tau^{-1}\\ &= \tau (j_1j_r)\tau^{-1}\ \tau (j_1j_{r-1})\tau^{-1}...\tau (j_1j_2)\tau^{-1} \end{align*} The last equality is true since for any permutation $\tau \in S_n$, $\tau^{-1}$ will take any number $j_t$ to $j_1$ with $0<t\leq r.$ I don't know if this is true, or should I prove it in another way? I would appreciate any help or hints with that, thanks in advance.
You don't need to split it into a product of transpositions. It is much easier. If $1\leq k<r$ then where does $\tau\sigma\tau^{-1}$ send $\tau(j_k)$? $\tau\sigma\tau^{-1}(\tau(j_k))=\tau\sigma(j_k)=\tau(j_{k+1})$ And similarly $\tau\sigma\tau^{-1}(\tau(j_r))=\tau(j_1)$.
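The identity is also easy to sanity-check by computer; the sketch below uses my own encoding of a permutation of $\{0,\dots,n-1\}$ as the tuple of its images:

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)), permutations given as tuples of images."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def cycle(points, n):
    """The cycle (points[0] points[1] ...) as a permutation of {0,...,n-1}."""
    images = list(range(n))
    for a, b in zip(points, points[1:] + points[:1]):
        images[a] = b
    return tuple(images)

n = 6
sigma = cycle([0, 3, 4], n)                     # the 3-cycle (0 3 4)
tau = (2, 0, 5, 1, 4, 3)                        # an arbitrary permutation tau
conj = compose(tau, compose(sigma, inverse(tau)))
relabeled = cycle([tau[0], tau[3], tau[4]], n)  # the cycle (tau(0) tau(3) tau(4))
print(conj == relabeled)  # True
```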
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$n^\pi$ and $n^e$ for positive integer $n\ge2$ Is it known whether $n^\pi$ or $n^e$ is an integer for some integer $n\ge 2$? The Gelfond–Schneider theorem does not answer the question, because there both base and exponent must be algebraic, and I haven't found anything about this on the internet. I don't expect a proof; it seems extremely difficult. Only references, if any.
I think Schanuel's conjecture should imply that $n^e$ is never an integer. Here is the idea of how that could go: Suppose $n^e = m$ then $e \log(n) = \log(m)$. This would imply $\mathbb{Q}(1, \log(n), \log(m), e, n, m)$ has transcendence degree at most 2, violating the conjecture (assuming we rule out the case that $1$, $\log(n)$ and $\log(m)$ are linearly dependent over $\mathbb{Q}$, but that should be relatively easy) I haven't thought enough about the $n^\pi$ case, but my guess is you could do something similar. Of course Schanuel's conjecture is still open, but at least that gives you a place to start looking.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is it possible to have $\int_0^\infty f(x)\,dx$ convergent and $|f|=1$ everywhere? I came across a problem that asked if it is possible for a function to be Riemann integrable on $[0,+\infty)$ but also satisfy $|f(x)|\geq 1$ for all $x\geq 0$. At first I thought it was impossible, but I realized that only holds for continuous functions, because they would have to be either positive or negative, and then they would have to go to 0 at infinity. I have an idea of what the function would have to be like, with alternating signs, but whose integral converges, but I haven't been able to find any, so I'm starting to think it is impossible. I would like some help finding this function, or disproving its existence, as I don't know many tools for working with functions without a constant sign.
Here is a more elementary example: Let $\phi$ be a $1$-periodic function such that $\phi(x)=-1$ for $x \in [0,{1 \over 2})$ and $\phi(x) = 1$ for $x \in [{1 \over 2},1)$. Note that for an integer $n$ and $x \in [0,1]$, we have $|\int_n^{n+x} \phi(2^kt)dt| \le {1 \over 2^{k+1}}$. Define $f(x) = \sum_{n=0}^\infty 1_{[n,n+1)} \phi(2^nx)$. Then $|f(x)| = 1$ for all $x\ge 0$ and $\int_0^\infty f(x)dx = 0$ (the improper integral).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Can we obtain isomorphic semidirect products from different homomorphisms? A semidirect product of two finite groups is determined by the homomorphism from one group to the other's automorphism group. I wonder if it is possible to have two different homomorphisms that turn out to give the same resulting semidirect product. If yes, what is the general pattern of this? Is there any (equivalent?) condition that we can put on the two different homomorphisms to make the semidirect products isomorphic? I guess I'm currently considering semidirect products of groups like $\Bbb Z_n^k$, which may be easier to formulate than the general case but harder than the case with simply $\Bbb Z_n$ involved.
The tl;dr is yes! As Arturo mentioned in the comments, every (nontrivial) $\varphi : \mathbb{Z}/p \to \text{Aut}(\mathbb{Z}/q)$ (when $p \mid q-1$) gives rise to the same semidirect product (the unique nonabelian group of order $pq$). As for your second question, involving a "general pattern", the answer is a (somewhat surprising) no. See this MSE question. There are some nice results in special cases, like when $G$ and $H$ are finite with coprime order (cf. the same MSE question), but in general the problem is too hard to say much. I hope this helps ^_^
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Potentially overthinking transformations of functions... need confirmation So this may be really simple. The question is Write the equation of the transformed function $y=x^2$ after the following transformation in the order given. * *a vertical compression by a factor of $\frac13$ followed by a transformation of $3$ units right. Isn't this just $y=3\left((x-3)^2\right)$? I feel as though I'm overthinking it.
For any function $y=f(x)$, transformations can be described by the equation $$y=af[k(x-c)]+d$$ where $|a|$ is the vertical dilation, $|k|$ is the horizontal dilation, $c$ is the horizontal translation, and $d$ is the vertical translation. If $a<0$, there is a vertical reflection, and if $k<0$, there is a horizontal reflection. Thus, a vertical compression by a factor of $3$ would mean $a=\frac13$, and a horizontal translation $3$ units right would mean $c=3$. Therefore, we have \begin{align} f(x)&=x^2\\ y=\frac13f(x-3)&=\frac13(x-3)^2 \end{align} You've got the right answer!
{ "language": "en", "url": "https://math.stackexchange.com/questions/3368970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extreme confusion with the product of distributions Let $l_1$ and $l_2$ be two distributions in disjoint variables $x_1, ..., x_n$ and $y_1, ..., y_m$. Then it is said to be possible to define a product distribution. However, I am fundamentally confused. Distributions are in fact linear functionals on the space of smooth and compactly supported functions. Then, how does the 'product' of linear functionals again become a linear functional? In particular, what can be a definition of $\delta(x_1)\delta(x_2)$ such that it is equal to $\delta(x_1, x_2)$? I am just stuck.
If you want to multiply $\delta(x_1)$ and $\delta(x_2)$ you first need to make them into distributions acting on the product space, so you multiply $\delta(x_1)Id(x_2)$ and $Id(x_1)\delta(x_2)$, where $Id$ is just the identity map. With that interpretation you get $\delta(x_1)\delta(x_2)=\delta(x_1, x_2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Max eigenvalue of symmetric matrix and its relation to diagonal values I saw a few questions about this, but still can't understand it. Let $A$ be a symmetric matrix and $\lambda_{\max}$ its largest eigenvalue. Is the following true for all $A$? $$ \lambda_{\max} \ge a_{ii} \quad \forall i $$ That is, is the largest eigenvalue of a symmetric matrix always at least as large as each of its diagonal entries? Is it somehow related to the spectral radius and the following equation? $$ \rho(A)=\max|\lambda_i|. $$
Let $\lambda_1,\cdots, \lambda_n$ be eigenvalues of $A $ in increasing order,i.e. $\lambda_n$ is the maximum eigenvalue. By Min-max theorem $$\lambda_n=\max_{x\in \mathbb{R^n},||{x}||=1 }x^TAx=\sum x_ix_j a_{ij} $$ while on the other hand$$a_{ii}=e_i^TAe_i $$ It follows that $$\lambda_n\ge a_{ii} \qquad \forall i$$
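A hand-checkable numeric illustration (my own $2\times 2$ example, eigenvalues via the quadratic formula rather than a linear algebra library):

```python
import math

# Symmetric A = [[2, 1], [1, 3]]: its eigenvalues are the roots of
# l^2 - (trace) l + (det) = l^2 - 5l + 5 = 0.
a11, a12, a22 = 2.0, 1.0, 3.0
trace, det = a11 + a22, a11 * a22 - a12 * a12
lam_max = (trace + math.sqrt(trace * trace - 4 * det)) / 2

print(lam_max)  # 3.618..., which dominates both diagonal entries 2 and 3
assert lam_max >= a11 and lam_max >= a22
```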
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
For a stopping time does $E[(\tau \wedge t)1_A]=E[(\tau \wedge s)1_A] $ for $s \le t $ and any $A \in \mathcal F_{\tau \wedge s }$? I have the following question as stated in the title: For a stopping time $\tau $ does $E[(\tau \wedge t)1_A]=E[(\tau \wedge s)1_A] $ for $s \le t $ and any $A \in \mathcal F_{\tau \wedge s }$? Here $\mathcal F_{\tau \wedge s } $ is the $\sigma $-algebra of the $\tau \wedge s $-past. The context is that I'm trying to show that $(B^2_{\tau \wedge t } - \tau \wedge t,\mathcal F_{\tau \wedge t })$ is a martingale, and the question here is a second step in showing that $E[\tau \wedge t|\mathcal F_{\tau \wedge s} ]=\tau \wedge s$. Thanks in advance!
We have \begin{alignat*}{2} \mathbb{E}[(\tau \wedge t)\mathbf{1}_A] & = \mathbb{E}\Big[\mathbb{E}\left[(\tau \wedge t)\mathbf{1}_A\;|\;\mathcal F_{\tau \wedge s }\right]\Big]\quad\text{by the tower property}\\ & = \mathbb{E}\Big[\mathbb{E}\left[(\tau \wedge t)\;|\;\mathcal F_{\tau \wedge s }\right]\mathbf{1}_A\Big]\quad\text{by measurability}\\ & \neq \mathbb{E}\Big[\mathbb{E}\left[(\tau \wedge s)\;|\;\mathcal F_{\tau \wedge s }\right]\mathbf{1}_A\Big]\\ & = \mathbb{E}\left[(\tau \wedge s)\mathbf{1}_A\right], \end{alignat*} so the answer seems to be no. Counterexample: $\tau=\infty$ almost surely. If $B$ is a standard Brownian motion, then $\mathbb E\left[ B^2_{t}\right] = {t}$ and $\mathbb E\left[ B^2_{t}\;\;|\;\; \mathcal{F}_s\right] = B_s^2 + {t-s}$. What you need to show seems to be $$ \mathbb E\left[ B_{\tau\wedge t}^2 - {\tau\wedge t} \;\;|\;\; \mathcal{F}_{\tau\wedge s}\right] = B_{\tau\wedge s}^2 - {\tau\wedge s} $$ for which you need $(B_t-B_s)\amalg\mathcal F_s$ and $(B_t-B_s)\sim B_{t-s}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2}}+\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\dots +\frac{1}{\sqrt{n^2+2n}}\right)$ Question: Evaluate the following limit: $$\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2}}+\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\dots +\frac{1}{\sqrt{n^2+2n}}\right)$$ My Approach: The first step I did was to split the limits using the following property: $$\lim_{x \to a} \left(f(x)+g(x) \right) = \lim_{x \to a}f(x)+\lim_{x \to a}g(x)$$ Like this: $$\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2}}+\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\dots +\frac{1}{\sqrt{n^2+2n}}\right) = $$ $$\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2}}\right)+\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2+1}}\right)+\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2+2}}\right)+\dots+\lim_{n\to \infty}\left(\frac{1}{\sqrt{n^2+2n}}\right)$$ We know that, $$\lim_{n\to \infty}\left(\frac{1}{n} \right)=0$$ Applying the same concept to all the individual limits obtained, the answer must be $0$. But the answer in my textbook is given to be $2$. Did I go wrong somewhere, or is the answer in the textbook incorrect? I don't think I am wrong, because even the largest term (the one which has the comparatively smaller denominator), i.e., the first term in the summation, itself is tending towards zero. So the rest of the terms must be much closer to zero. Closer to zero means very close to zero and hence each term must be equal to zero as indicated by the property, and so the entire limit must tend towards zero. But the answer says the value of the limit is $2$. Please explain how to solve* this problem, and where and why I went wrong. *I can't think of any other method of solving this problem other than the one I specified above.
$$\lim_{n\rightarrow \infty} \sum_{k=0}^{2n} \frac{1}{n}\, \frac{1}{\sqrt{1+k/n^2}}= \int_{0}^{2} dx =2.$$ Here $\frac{k}{n^2}\le \frac{2}{n}\rightarrow 0$ uniformly in $k$, so each factor $\frac{1}{\sqrt{1+k/n^2}}\rightarrow 1$.
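Numerically the Riemann sum does settle near $2$; a quick check (the choice of sample sizes is mine):

```python
import math

def s(n):
    # The sum 1/sqrt(n^2) + 1/sqrt(n^2+1) + ... + 1/sqrt(n^2+2n)
    return sum(1.0 / math.sqrt(n * n + k) for k in range(2 * n + 1))

for n in (10, 100, 10000):
    print(n, s(n))  # approaches 2 as n grows
```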
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$X$ an integral element over $K[X^d]$ I'm currently redoing some homework for my algebra test and I came across a task that wants me to find a Noether normalization of $k[X]$, where $k$ is a field. The proof I know uses the argument that $X$ is an integral element over $k[X^d]$ for $d \in \mathbb{N}$. However, I don't remember exactly why that is the case. I'd be very thankful for some advice regarding this problem; thank you in advance for answers!
It is a solution of the monic polynomial equation $$y^d-X^d=0$$ where the indeterminate is $y$ and $X^d$ is the element of the ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Small question regarding left invariant vector fields Usually, one writes out the left invariance of vector fields $X$ on a Lie group $G$ as $$(L_x)_*X=X$$ for every $x$. However, I have trouble understanding this equality, since both sides do not map to the same space. Evaluating the right hand side in $y$ gives an element in $T_yG$. Evaluating the left hand side in $y$ yields an element in $T_{xy}G$. Is there some canonical identification between $T_{xy}G$ and $T_yG$ going on, or did I make a mistake somewhere? Or, are we actually evaluating the right hand side in $xy$ instead of $y$? Thank you. My apologies if this is a duplicate. I already found this one: Definition of a left-invariant vector field, but I do not fully understand the answer.
The derivative of the left multiplication map $L_g(x)=gx$ gives an isomorphism of the tangent spaces: by definition the pushforward satisfies $((L_g)_*X)_{gy}=d(L_g)_y(X_y)$, so evaluating $(L_g)_*X$ at the point $gy$ (not at $y$) is what produces the element of $T_{gy}G$, and both sides of the invariance condition are compared in the same tangent space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3369911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show that the limit is in the set I'm having some trouble proving the following: Let $B = \{(x,y)\in \mathbb{R}^2 : e^x - \sin(y)\leq 0\}$ and let $(x_{k},y_{k})_{k \in \mathbb{N}}$ be a sequence of elements in $B$. We suppose that there exists $(x,y) \in \mathbb{R}^2$ such that $\left \| (x_{k},y_{k}) -(x,y)\right \|_{1}\rightarrow _{k\rightarrow +\infty} 0$. 1) Show that $x$ is the limit of $(x_{k})_{k \in \mathbb{N}}$. What can you say about $(y_{k})_{k \in \mathbb{N}}$? 2) Show that $(x,y)$ is in $B$. The first question is pretty simple because: $\left \| (x_{k},y_{k}) -(x,y)\right \|_{1} = \left \| (x_{k}-x,y_{k}-y)\right \|_{1} = \left | x_{k}-x \right | + \left | y_{k}-y \right |$ Then we have $\left | x_{k}-x \right |\rightarrow _{k\rightarrow +\infty} 0$ and $\left | y_{k}-y \right |\rightarrow _{k\rightarrow +\infty} 0$ So $x$ is the limit of $(x_{k})_{k \in \mathbb{N}}$ and $y$ is the limit of $(y_{k})_{k \in \mathbb{N}}$ But I have no idea how to prove the second question. I would appreciate some hints. Thanks!
You probably know that exponential and sine are continuous functions. Hence we have $e^{x_k}\to e^x$ and $\sin(y_k)\to \sin(y)$ when $k\to\infty$. Also, since the sequence $(x_k,y_k)$ contains elements of $B$ we have $e^{x_k}-\sin(y_k)\leq 0$ for all $k\in\mathbb{N}$. By taking $k\to\infty$ we get $e^x-\sin(y)\leq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integration of function nonnegative on $\mathbb{N}$ I'm having a little trouble writing down my solution of exercise 4G of Bartle's book The Elements of Integration and Lebesgue Measure. Here is the problem: Let $X = \mathbb{N}$ and $\mathcal{A} = 2^X$ be the $\sigma$-algebra, and define the measure to be the counting measure. If $f:X \rightarrow \mathbb{R}$ is nonnegative then * *$f$ is measurable *$\int_{\mathbb{N}}f = \sum_{k=1}^{\infty}f(k)$ For item (1), what I think is: since the function is defined on the naturals, any preimage under $f$ is a subset of $\mathbb{N}$ and hence an element of $2^{\mathbb{N}}$; I think that this is the idea. For item (2) I used item (1) and the following lemma: If $(E_n)$ is a sequence of disjoint sets, and $E = \cup_{n=1}^{\infty}E_n$, then \begin{equation} \int_{E}fd\mu = \sum_{k=1}^{\infty}\int_{E_k}fd\mu \end{equation} For the exercise consider $E_k = \{k\}$ with $k \in \mathbb{N}$; these sets are disjoint, with $\mathbb{N} = \cup_{n=1}^{\infty}E_n$, so we can use the lemma above, thus: \begin{equation} \int_\mathbb{N} f = \sum_{k=1}^{\infty}\int_{E_k}fd\mu \overset{\mathrm{?}}{=} \sum_{k=1}^{\infty}\sum_{j=1}^{k} a_j \mu(E_k) \end{equation} I'm using that $\int fd\mu = \sup \int sd\mu$ where $s$ ranges over the simple functions, with $\int s d\mu = \sum a_j \mu(E_j)$, but I'm not sure about the second equality above (?). Then I used that $\mu(E_k) = 1$ and $\sum_{j=1}^{k} a_j = f(k)$. This is supposedly the end of the proof, but I'm not convinced. Can you point out my mistakes and show how to make it right?
Denote $v$ the counting measure. Then $$\int_{\{n\}}f(k)dv=\int_{\{k \in \Bbb{N}:k=n\}}f(k)dv=f(n)\int_{\{n\}}dv=f(n)v(\{n\})=f(n)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Halting problem within a finite time interval? Define the finite time halting problem to be one where the program is counted as "not halting" if the Turing machine takes more than a given time interval to run the program. Is this finite time halting problem undecidable?
This subject has annoying terminological redundancy - e.g. "decidable," "computable," and "recursive" all mean the same thing in this context. Below I'm using "computable" exclusively, since "recursive" is slightly old-fashioned and "decidable" does significant double-duty elsewhere in logic. It is computable. Precisely: the "general finite halting problem" $X$ of triples $(e,m,s)$ such that the $e$th Turing machine on input $m$ halts in at most $s$ steps is computable. Indeed, $X$ (or more precisely, the characteristic function of $X$) is even primitive recursive. (This is all related to Kleene's $T$ predicate.) This is essentially a quick application of the universal Turing machine: to test whether $(e,m,s)\in X$ we want to run the $e$th Turing machine on input $m$ for $s$ steps and see what happens. And this is basically exactly what a universal Turing machine is for. It's a good exercise to write out the details, but there aren't any surprises. Note that the halting problem $H=\{e:\Phi_e(e)\downarrow\}$ is a "projection" of $X$ in a sense: $$H=\{e:\exists s((e,e,s)\in X)\}.$$ In general, the image of a computable set under a computable function need not be computable but will always be computably enumerable.
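The "run it for $s$ steps and see what happens" procedure can be sketched concretely. The toy counter machine below is my own stand-in for a universal Turing machine — the instruction set and encoding are invented for illustration and are not Kleene's actual $T$ predicate:

```python
def halts_within(program, steps):
    """Run a toy counter-machine program for at most `steps` steps.

    program: list of ('inc', r), ('dec', r), ('jnz', r, target), ('halt',).
    Returns True iff it reaches 'halt' within the step budget.
    """
    regs = [0] * 8
    pc = 0
    for _ in range(steps):  # the hard step bound is what makes this decidable
        op = program[pc]
        if op[0] == 'halt':
            return True
        if op[0] == 'inc':
            regs[op[1]] += 1
        elif op[0] == 'dec':
            regs[op[1]] = max(0, regs[op[1]] - 1)
        elif op[0] == 'jnz' and regs[op[1]] != 0:
            pc = op[2]
            continue
        pc += 1
    return False  # budget exhausted: counted as "not halting"

loop_forever = [('inc', 0), ('jnz', 0, 0)]
halts_fast = [('inc', 0), ('halt',)]
print(halts_within(loop_forever, 1000), halts_within(halts_fast, 1000))  # False True
```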
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to show that this graph is planar?-Formal Proof How to show that this graph is planar? I am unable to write a formal proof In the book the answer is given that this graph is planar. Can you kindly say how to prove that this graph has no subgraph homeomorphic to $K_5$ or $K_{3,3}$? I find that this graph has two vertices $[0],[4]$ of degree $8$ and all other vertices are connected to these two. How to show that this graph is planar? Please help.
A graph is planar if there exists at least one drawing of the graph (called an embedding) in the plane with no crossing edges. Therefore, as explained by Bercy, you just need to show one drawing of this graph with no crossing edges. To do so, take your vertex labelled 0 and pull it out of the other vertices, outside the star-like graph around the vertex 4. Remember that edges are not required to be straight lines (even if there is a result proving that a straight-line drawing is always possible). So you can use bent lines to join some vertices. You should be able to build such a planar drawing. Edit I just noticed your other question here. There you were asked to prove that a graph is not planar. That's why you needed to prove that it contains a subgraph homeomorphic to $K_5$ or $K_{3,3}$: otherwise you would need to examine all possible drawings. However, here, to prove that a graph is planar, it is enough to exhibit one planar drawing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Definite Integral $\int{x^2+1 \over x^4+1}$ Evaluate $$\int_0^{\infty}{x^2+1 \over x^4+1}dx$$ I tried integration by parts: $$\frac{{x^3 \over 3 }+x}{x^4+1}+\int\frac{{x^3 \over 3 }+x}{(x^4+1)^2}\cdot 4x^3\,dx$$ The first term is zero, but it got me nowhere. Any hints?
* *Divide the numerator by $x^2$ and you have $1+\frac{1}{x^2}$ *Divide the denominator by $x^2$ and you have $x^2+\frac{1}{x^2} =\bigl(x-\frac{1}{x}\bigr)^{2}+2$
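Following the hint through: with $u = x - \frac1x$ one has $du = \left(1+\frac{1}{x^2}\right)dx$, so the integral becomes $\int_{-\infty}^{\infty}\frac{du}{u^2+2} = \frac{\pi}{\sqrt 2}$. Here is a quick numeric sanity check (not a proof) in Python; it uses the symmetry $x \mapsto 1/x$, which maps the integrand to itself and shows $\int_0^{\infty} = 2\int_0^1$:

```python
# Numeric check that the integral equals pi/sqrt(2).
# The substitution x -> 1/x maps the integrand to itself, so
# int_0^inf f = 2 * int_0^1 f, and plain Simpson on [0, 1] suffices.
import math

def f(x):
    return (x**2 + 1) / (x**4 + 1)

def simpson(g, a, b, n=1000):  # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

value = 2 * simpson(f, 0.0, 1.0)
print(value, math.pi / math.sqrt(2))  # both ≈ 2.2214414690...
```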
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Diophantine equation from the Latvian Baltic Way team selection competition 2019 So here is the problem statement: Find all integer triples $(a, b, c)$ such that $(a-b)^3(a+b)^2 = c^2 + 2(a-b) + 1$ The only things I have so far figured out is that (-1, 0, 0) and (0, 1, 0) are solution, gcd((a-b), c) = 1 and that c must be even. Any ideas? Maybe even a general solution to the problem?
Equation: $(a-b)^3(a+b)^2 = c^2 + 2(a-b) + 1$. Thanks @Piquito for reviewing my previous answer. By mistake I solved the OP's equation as a quartic rather than a quintic. If we put the condition $a=b+c$, then we get: $c^3(2b+c)^2=(c+1)^2$ which has a solution at $(b,c)=(1/2,\,1)$, and so we get the (rational, not integer) triple $(a,b,c)=(3/2,\,1/2,\,1)$.
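Independently of the algebra above, a brute-force search (a quick sketch; the box size $|a|,|b|,|c|\le 20$ is an arbitrary choice) confirms that the only integer solutions in a small range are the two triples the OP already found:

```python
# Exhaustive search for integer solutions of
# (a-b)^3 (a+b)^2 = c^2 + 2(a-b) + 1 in a small box.
R = range(-20, 21)
sols = [(a, b, c) for a in R for b in R for c in R
        if (a - b)**3 * (a + b)**2 == c**2 + 2*(a - b) + 1]
print(sols)  # [(-1, 0, 0), (0, 1, 0)]
```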
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Rigorous, concise, self-contained, systematic presentation of calculus (real and complex analysis) I'm looking for a book recommendation satisfying the above requirements, with the presentation accessible to graduate students. Ideally, it would develop real and complex analysis axiomatically, rigorously prove all the major theorems, make clear the logical dependencies, but avoid any unnecessary fluff. Conciseness, economy, logical precision and elegance of presentation are especially valued, i.e., not the regular calculus textbooks with 700+ pages, colorful exercises, real world applications, etc. Just the pure mathematics. Many thanks in advance.
Rudin's Principles of Mathematical Analysis is exactly what I was looking for. Thanks to J. E. Greilhuber for the recommendation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $PGL(n,\Bbb{R}) \cong SL(n+1, \Bbb{R})$ for even n? Is the following claim correct? Claim: If n is even, $PGL(n,\Bbb{R}) \cong SL(n+1, \Bbb{R})$. Proof: Recall $PGL(n,\Bbb{R}) \cong GL(n+1, \Bbb{R})/Z$, where $Z = \{M | M=\alpha I, \alpha \in \Bbb{R}\}$. Define a homomorphism $ \phi : GL(n+1,\Bbb{R}) \to SL(n+1, \Bbb{R})$ by $\phi (M) = \frac{1}{{\det(M)}^{\frac{1}{n+1}}}M$ (the root is well defined since n+1 is odd). Clearly the image of this map lies in $SL(n+1, \Bbb{R})$, since $\det(\frac{1}{{det(M)}^{\frac{1}{n+1}}}M) = {(\frac{1}{{\det(M)}^{\frac{1}{n+1}}})}^{n+1}\det(M)=1$; in fact it is all of $SL(n+1, \Bbb{R})$ since it acts as the identity on $SL(n+1, \Bbb{R})$. The kernel of $\phi$ is exactly those matrices where $\frac{1}{{\det(M)}^{\frac{1}{n+1}}}M = I$, which clearly are matrices in $Z$; In fact it is all of $Z$, because it is easily see that any scalar matrix satisfies this condition. The claim now follows from the first isomorphism theorem: $PGL(n,\Bbb{R}) \cong GL(n+1, \Bbb{R})/Z \cong GL(n+1, \Bbb{R})/\ker(\phi) \cong Im(\phi) \cong SL(n+1, \Bbb{R})$
The proof looks good to me. Another way of framing this picture is to observe that for a finite-dimensional vector space $V$ over the field $\Bbb F$, the sequences $$0 \to Z(SL(V)) \to SL(V) \to PSL(V) \to 0$$ and $$0 \to PSL(V) \to PGL(V) \to \Bbb F^* / (\Bbb F^*)^{\dim V} \to 0$$ are exact. (A sequence $0 \to A \stackrel{f}{\to} B \stackrel{g}{\to} C \to 0$ is exact iff * *$f$ is injective, *$\operatorname{im} f = \operatorname{ker} g$, and *$g$ is surjective.) In our situation $\Bbb F = \Bbb R$ and $V = \Bbb R^n$. If $n$ odd, then $Z(SL(V)) = \{I\}$ and $\Bbb R^* = (\Bbb R^*)^n$, so in fact $$SL(V) \cong PSL(V) \cong PGL(V) .$$ If $n$ even, then $Z(SL(V)) = \{\pm I\}$, and so $SL(V) \to PSL(V)$ is a double cover, and $(\Bbb R^*)^n = \Bbb R^+$, so $\Bbb R^* / (\Bbb R^*)^n \cong \{\pm 1\}$ and thus $PSL(V) \subset PGL(V)$ is a subgroup of index $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Informative proof that any real-valued symmetric matrix only has real eigenvalues I am looking for an informative proof that any real-valued symmetric matrix only has real eigenvalues. By informative, I mean that there is an explanation accompanying the proof, rather than just a copy-and-paste job, which is not informative. I came across this question, but (1) the top-rated answer by Lepidopterist is, according to himself, not a proof of the result, but rather an explanation of why the displayed method is not a proof of the desired result, and (2) none of the proofs posted offer explanations and are just copy-and-paste jobs. Also, none of the answers in the question have been accepted by the author, so it seems that they might have also found the answers to be unsatisfactory. I'm seeking a proof and an accompanying explanation, so that I can properly learn the reasoning behind how any real-valued symmetric matrix only has real eigenvalues. I would greatly appreciate it if someone could please take the time to clarify this.
I found the proof in http://pi.math.cornell.edu/~jerison/math2940/real-eigenvalues.pdf to be informative and educational. The Spectral Theorem states that if $A$ is an $n \times n$ symmetric matrix with real entries, then it has $n$ orthogonal eigenvectors. The first step of the proof is to show that all the roots of the characteristic polynomial of $A$ (i.e. the eigenvalues of $A$) are real numbers. Recall that if $z = a + bi$ is a complex number, its complex conjugate is defined by $\bar{z} = a - bi$. We have $z \bar{z} = (a + bi)(a - bi) = a^2 + b^2$, so $z\bar{z}$ is always a nonnegative real number (and equals $0$ only when $z = 0$). It is also true that if $w$, $z$ are complex numbers, then $\overline{wz} = \bar{w}\bar{z}$. Let $\mathbf{v}$ be a vector whose entries are allowed to be complex. It is no longer true that $\mathbf{v} \cdot \mathbf{v} \ge 0$ with equality only when $\mathbf{v} = \mathbf{0}$. For example, $$\begin{bmatrix} 1 \\ i \end{bmatrix} \cdot \begin{bmatrix} 1 \\ i \end{bmatrix} = 1 + i^2 = 0$$ However, if $\bar{\mathbf{v}}$ is the complex conjugate of $\mathbf{v}$, it is true that $\bar{\mathbf{v}} \cdot \mathbf{v} \ge 0$ with equality only when $\mathbf{v} = \mathbf{0}$. Indeed, $$\begin{bmatrix} a_1 - b_1 i \\ a_2 - b_2 i \\ \dots \\ a_n - b_n i \end{bmatrix} \cdot \begin{bmatrix} a_1 + b_1 i \\ a_2 + b_2 i \\ \dots \\ a_n + b_n i \end{bmatrix} = (a_1^2 + b_1^2) + (a_2^2 + b_2^2) + \dots + (a_n^2 + b_n^2)$$ which is always nonnegative and equals zero only when all the entries $a_i$ and $b_i$ are zero. With this in mind, suppose that $\lambda$ is a (possibly complex) eigenvalue of the real symmetric matrix $A$. Thus there is a nonzero vector $\mathbf{v}$, also with complex entries, such that $A\mathbf{v} = \lambda \mathbf{v}$.
By taking the complex conjugate of both sides, and noting that $\overline{A} = A$ since $A$ has real entries, we get $\overline{A\mathbf{v}} = \overline{\lambda \mathbf{v}} \Rightarrow A \overline{\mathbf{v}} = \overline{\lambda} \overline{\mathbf{v}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3370991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
If $A+B+C=\pi$, prove that $\cos (A-B) \cos (B-C) \cos (C-A)\ge 8\cos A \cos B \cos C$ If $A+B+C=\pi$, prove that $\cos (A-B) \cos (B-C) \cos (C-A)\ge 8\cos A \cos B \cos C$ I know this is true for acute angle triangle. I want to know whether it is true for every real $A,B,C$ such that $A+B+C=\pi.$
Result to be established : $$\begin{matrix}A+B+C=\pi \ \implies\\ \ \cos (A-B) \cos (B-C) \cos (C-A)\ge 8\cos A \cos B \cos C\end{matrix}\tag{*}$$ I would like to give here a variation on the excellent idea of Michael to use the following parameterization of a "triangle shape", i.e., a triangle known by its angles) : $$x:=\cos(A-B), \ \ \ y=\cos(A+B)\tag{1}$$ Let me take his proof where $$x^2+(4y^3+5y)x+8y^2 \ \ \text{has to be proven} \geq 0,\tag{2}$$ and take now a different path. Result $(*)$ has been proven for acute angles ; we can consider WLOG (due to the exchangeability of $A,B,C$ in $(*)$ that $$\pi > A \geq \tfrac{\pi}{2} \geq B \geq C > 0\tag{3}$$ A first consequence of (3) is that $$C \leq \pi/2-A/2 \tag{&}$$ as shown by reasoning by contradiction. In order to understand the impact of restriction (3), I made a simulation (see figure below) that has evidenced that points $(x,y)$ defined by (1) are restricted to be in a certain narrow area bounded in particular by a curve (in red) whose non-evident parametrized (resp. cartesian) equation is (see explanation below) $$\begin{cases}x&=&&\sin(3A/2)\\y&=&-&\sin(A/2)\end{cases} \ \ \implies \ \ x=4y^3-3y \tag{4} $$ As we have, for all $(x,y)$ : $$-1 \leq x \leq 4y^3-3y\tag{5},$$ we can say that : $4y^3 \geq x+3y$, allowing to conclude, from (2): $$x^2+(4y^3+5y)x+8y^2 \geq x^2+(x+3y+5y)x+8y^2=2(x+2y)^2 \geq 0$$ (with equality if and only if $x \to 1,y \to -1/2$ corresponding to the limit case where $A=B \to \pi/2$ whereas $C \to 0.$). Explanation for (4) : For the limit case $C =(\pi-A)/2$ in (&), we have * *$x=\cos(A-B)=\cos(A-(\pi-A-C))=\cos(2A+C-\pi)=$ $=-\cos (2A+C)=-\cos(\pi/2-3A/2)=\sin(3A/2)$ and * *$y=\cos(A+B)=\cos(A+(\pi-A-C))=-\cos(C)=-\cos(\pi/2-A/2)=-\sin(A/2)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
I had to show that a' + b' = a + b mod n, when a mod n = a' and b mod n = b' I had to show the following: For $\,\, n,a,b,a',b' \in N$; if: $ a'\equiv a\mod{n}$ and $ b'\equiv b\mod{n} $ then $ a' + b' = \,(a+b)\mod{n}$ My try: $\frac{a}{n} = k +a'$ , where $k \in N $ and $k$ is divisible by $n$. $\frac{b}{n} = p + b'$ , where $p \in N$ and $p$ is divisible by $n$. (NOTE: I think (please correct me if I am wrong) that the fact that $a',b'$ are both named as outputs of a modulo operation means that both are not divisible by n. (or do I need to explicitly state this again)) $\frac{a+b}{n} = \, (k + a')+(p+b')$ since $p$ and $k$ both are divisible by $n$; $(k + p)$ can be written as $n*h, h\in N$ $\Longrightarrow \frac{a+b}{n} = \, (n*h + ( a'+b')) $ $\Longrightarrow (a+b)\mod{n} = a' + b'$ It would be very helpful if you could say whether the logical reasoning is correct, or where it's flawed, and especially help me with formulations. So if the formulation of a certain part of how I tried to show what was asked for is bad or improvable, please give me as much advice as you are willing to share :)
$a\equiv a'$ and $b\equiv b'$ mod $n$ means that $a$ and $a'$ give the same remainder when divided by $n$, or equivalently, their difference $a-a'$ is a multiple of $n$. The same applies to $b$ and $b'$. So you have $$ a-a'=kn;\qquad b-b'=rn $$ for some integers $r$ and $k$. If you add the two equations above and arrange terms you get $$ a+b-(a'+b')=(r+k)n $$ and therefore, by the same token, $a+b\equiv a'+b'$ mod $n$.
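A brute-force check of the statement for small values (a sanity check, not part of the proof):

```python
# Whenever a ≡ a' and b ≡ b' (mod n), also a + b ≡ a' + b' (mod n).
# Here a' and b' are taken as the canonical representatives a % n, b % n.
for n in range(1, 8):
    for a in range(30):
        for b in range(30):
            ap, bp = a % n, b % n
            assert (a - ap) % n == 0 and (b - bp) % n == 0
            assert (a + b) % n == (ap + bp) % n
print("ok")
```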
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof review: Symmetric matrices have real eigenvalues This document provides the following proof: The Spectral Theorem states that if $A$ is an $n \times n$ symmetric matrix with real entries, then it has $n$ orthogonal eigenvectors. The first step of the proof is to show that all the roots of the characteristic polynomial of $A$ (i.e. the eigenvalues of $A$) are real numbers. Recall that if $z = a + bi$ is a complex number, its complex conjugate is defined by $\bar{z} = a - bi$. We have $z \bar{z} = (a + bi)(a - bi) = a^2 + b^2$, so $z\bar{z}$ is always a nonnegative real number (and equals $0$ only when $z = 0$). It is also true that if $w$, $z$ are complex numbers, then $\overline{wz} = \bar{w}\bar{z}$. Let $\mathbf{v}$ be a vector whose entries are allowed to be complex. It is no longer true that $\mathbf{v} \cdot \mathbf{v} \ge 0$ with equality only when $\mathbf{v} = \mathbf{0}$. For example, $$\begin{bmatrix} 1 \\ i \end{bmatrix} \cdot \begin{bmatrix} 1 \\ i \end{bmatrix} = 1 + i^2 = 0$$ However, if $\bar{\mathbf{v}}$ is the complex conjugate of $\mathbf{v}$, it is true that $\bar{\mathbf{v}} \cdot \mathbf{v} \ge 0$ with equality only when $\mathbf{v} = \mathbf{0}$. Indeed, $$\begin{bmatrix} a_1 - b_1 i \\ a_2 - b_2 i \\ \dots \\ a_n - b_n i \end{bmatrix} \cdot \begin{bmatrix} a_1 + b_1 i \\ a_2 + b_2 i \\ \dots \\ a_n + b_n i \end{bmatrix} = (a_1^2 + b_1^2) + (a_2^2 + b_2^2) + \dots + (a_n^2 + b_n^2)$$ which is always nonnegative and equals zero only when all the entries $a_i$ and $b_i$ are zero. With this in mind, suppose that $\lambda$ is a (possibly complex) eigenvalue of the real symmetric matrix $A$. Thus there is a nonzero vector $\mathbf{v}$, also with complex entries, such that $A\mathbf{v} = \lambda \mathbf{v}$. By taking the complex conjugate of both sides, and noting that $\overline{A} = A$ since $A$ has real entries, we get $\overline{A\mathbf{v}} = \overline{\lambda \mathbf{v}} \Rightarrow A \overline{\mathbf{v}} = \overline{\lambda} \overline{\mathbf{v}}$.
Then, using that $A^T = A$, $$\overline{\mathbf{v}}^T A \mathbf{v} = \overline{\mathbf{v}}^T(A \mathbf{v}) = \overline{\mathbf{v}}^T(\lambda \mathbf{v}) = \lambda(\overline{\mathbf{v}} \cdot \mathbf{v}),$$ $$\overline{\mathbf{v}}^T A \mathbf{v} = (A \overline{\mathbf{v}})^T \mathbf{v} = (\overline{\lambda} \overline{\mathbf{v}})^T \mathbf{v} = \overline{\lambda}(\overline{\mathbf{v}} \cdot \mathbf{v}).$$ Since $\mathbf{v} \not= \mathbf{0}$, we have $\overline{\mathbf{v}} \cdot \mathbf{v} \not= 0$. Thus $\lambda = \overline{\lambda}$, which means $\lambda \in \mathbb{R}$. How does the author get from $\overline{\mathbf{v}}^T(\lambda \mathbf{v})$ to $\lambda(\overline{\mathbf{v}} \cdot \mathbf{v})$ and from $(\overline{\lambda} \overline{\mathbf{v}})^T \mathbf{v}$ to $\overline{\lambda}(\overline{\mathbf{v}} \cdot \mathbf{v})$? I would appreciate it if someone could please take the time to clarify this.
The dot product can be indicated by $$\vec w \cdot \vec v$$ or equivalently $$\vec w^T\vec v$$ or also $$\langle \vec w,\vec v\rangle$$ and we can move the scalar factor $\lambda$ in any position, that is $$\lambda\vec w \cdot \vec v=\vec w \cdot \lambda\vec v=\langle \lambda\vec w,\vec v\rangle=\cdots$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\mathbb{Q}$ - linear function from $\mathbb{R}$ to $\mathbb{R}$ with kernel $\mathbb{Q}$ I read somewhere that it is possible to construct a $\mathbb{Q} $ - linear function $f:\mathbb{R} \rightarrow \mathbb{R}$ with $\ker f = \mathbb {Q}$ . Can someone enlighten me on that matter? I thought it might be an easy matter of extending the canonic quotient map $\mathbb{R} \rightarrow \mathbb{R}/\mathbb{Q}$ with an isomorphism $\mathbb{R}/\mathbb{Q} \rightarrow \mathbb{R}$, but that is probably nothing more than going round in circles.
More of the same: Assume you have $f$ a non-zero $\mathbb{Q}$-linear map from $\mathbb{R}$ to $\mathbb{Q}$. Consider a $\beta$ so that $\alpha\colon=f(\beta)\ne 0$. The map $$p\colon \mathbb{R} \to \mathbb{Q}\\ x\mapsto \frac{1}{\alpha} f(\beta x)$$ is a $\mathbb{Q}$-linear projection. Then $p'\colon= 1_{\mathbb{R}}-p$ has kernel $\mathbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove Variance of Gaussian by Differentiating Integral of N(x) =1 by $\sigma^2$ and rearranging The normalized Gaussian distribution is defined as: $$N(x|\mu,\sigma^2)=\frac{1}{(2\pi\sigma^2)^{1/2}}exp\bigg(\frac{-1}{2\sigma^2}(x-\mu)^2\bigg)$$ Prove that: $$Var[x]=E[(X-\mu)^2]=\sigma^2$$ by differentiating both sides of the normalization condition: $$\int_{-\infty}^{\infty} N(x|\mu, \sigma^2)~dx = 1$$ with respect to $\sigma^2$ and rearrange the result such that: $$E[(X-\mu)^2] = \sigma^2$$ I've tried this proof at least 10 times...and the variance always comes out as fraction of a square...
$$\int_{-\infty}^{\infty} N(x|\mu,\sigma^2)~dx=1$$ Now differentiate both sides by $\sigma$: $$\int_{-\infty}^{\infty} \frac{d}{d\sigma}N(x|\mu,\sigma^2)~dx=\frac{d}{d\sigma}1$$ $$\int_{-\infty}^{\infty} \frac{d}{d\sigma}N(x|\mu,\sigma^2)~dx=0~~~~~(1)$$ Now, working just on the differentiation: $$\frac{d}{d\sigma}N(x|\mu,\sigma^2)=\Bigg[\bigg(\frac{\sigma^{-1}}{(2\pi)^{1/2}}\bigg)\exp\bigg(\frac{-1}{2}\sigma^{-2}(x-\mu)^2 \bigg)\Bigg]'$$ Using the product rule for differentiation: $$\frac{d}{d\sigma}N(x|\mu,\sigma^2)=\Bigg[\bigg(\frac{\sigma^{-1}}{(2\pi)^{1/2}}\bigg)\Bigg]'\exp\bigg(\frac{-1}{2}\sigma^{-2}(x-\mu)^2 \bigg) + \bigg(\frac{\sigma^{-1}}{(2\pi)^{1/2}}\bigg)\Bigg[\exp\bigg(\frac{-1}{2}\sigma^{-2}(x-\mu)^2 \bigg)\Bigg]'$$ Then take the result of this differentiation, place it back into the integral (1), and substitute in these already known identities: $$\int_{-\infty}^{\infty} \exp\bigg(-\frac{1}{2}\sigma^{-2} (x-\mu)^2\bigg)~dx = (2\pi\sigma^2)^{1/2}$$ $$E[(x-\mu)^2]=\int_{-\infty}^{\infty} (x-\mu)^{2}~ \bigg(\frac{\sigma^{-1}}{(2\pi)^{1/2}}\bigg) \exp\bigg(-\frac{1}{2}\sigma^{-2} (x-\mu)^2\bigg)~dx = \int_{-\infty}^{\infty} (x-\mu)^{2}~N(x|\mu, \sigma^2)~dx$$ The result after simplification should yield: $$E[(x-\mu)^2] = \sigma^2$$
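As a numeric sanity check of the target identity (not of the differentiation argument itself), one can integrate $(x-\mu)^2\,N(x\mid\mu,\sigma^2)$ with a plain Simpson rule and compare with $\sigma^2$; the window $\mu \pm 12\sigma$ and the sample values of $\mu,\sigma$ are arbitrary choices:

```python
# Numeric check that E[(X - mu)^2] = sigma^2 for the Gaussian density.
import math

def npdf(x, mu, sigma):
    return math.exp(-(x - mu)**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def simpson(g, a, b, n=2000):  # composite Simpson rule, n even
    h = (b - a) / n
    s = g(a) + g(b)
    s += 4 * sum(g(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

mu, sigma = 1.5, 0.7
var = simpson(lambda x: (x - mu)**2 * npdf(x, mu, sigma),
              mu - 12 * sigma, mu + 12 * sigma)
print(var, sigma**2)  # both ≈ 0.49
```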
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Writing improper integral and infinite sum with limit or not? I was just wondering what is actually the correct notation for these two operators. 1) Writing the improper integral Which one is correct? a) $\displaystyle\int_0^{\infty}f(x) \, dx$ b) $\displaystyle\lim_{b \to \infty}\displaystyle\int_0^{b}f(x) \, dx$ 2) Writing the infinite sum Which one is correct? a) $\displaystyle\sum_{k=0}^{\infty} b_k$ b) $\displaystyle\lim_{n \to \infty}\displaystyle\sum_{k=0}^{n} b_k$ c) $\displaystyle\sum_{k=0}^{n} b_k$ 3) What about other big notations like the infinite product, infinite union, infinite intersection, etc.? It's just notation, but sometimes there are many arguments about how to write it correctly.
$\sum_{k=0}^\infty b_k$ is exactly the definition of $\lim_{n\to\infty}\sum_{k=0}^n b_k$ when the limit exists. And $\int_0^\infty f(x)dx$ is the definition of $\lim_{b\to\infty}\int_0^b f(x)dx$ when $f$ is Riemann integrable in $[0,b]$ for all $b>0$ and the limit exists. So you can use any of these notations, but of course it is shorter to write it without limits.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How small can "spanning sumsets" of $[n]$ be? Let $[n]$ denote the natural numbers $1$ through $n$. Let's say a subset $X \subset [n]$ is a spanning sumset if $\{x+y: x,y \in X\} = [n] \setminus \{1\}$. I'm interested in studying spanning sumsets of minimal possible size. In particular, is there either an asymptotic or exact expression for the minimal size of a spanning sumset? Is there an algorithm which can be used to find them? Or, in general, some sort of characterization? One simple observation is that $\{x+y: x,y \in X\}$ has at most $\frac{|X|(|X|+1)}{2}$ distinct sums, and if $\frac{|X|(|X|+1)}{2} \geq n-1$, then $|X| = \Omega(\sqrt{n})$. Can this lower bound be attained, however? That is to say, for general $n$ can we find a spanning sumset of size $\Theta(\sqrt{n})$? Or even just $o(n)$?
Let $S_k=\{1,2,3,...,k,2k,3k,4k,...,(k-1)k\}$. Then $S_k$ is a spanning set for $[k^2]$. For a given $n$ if we let $k=\lceil \sqrt{n} \rceil$, then $S_k$ will be a spanning set of $[n]$ with size $2k-2\approx 2\sqrt{n}$.
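A computational check of the construction (note: $S_k$ also produces sums larger than $k^2$, so what is verified here is the covering direction $[k^2]\setminus\{1\}\subseteq\{x+y : x,y\in S_k\}$, which is what the size bound is about):

```python
# S_k = {1, ..., k} ∪ {2k, 3k, ..., (k-1)k} has size <= 2k - 2 and its
# sumset covers every integer from 2 to k^2.
def spanning_set(k):
    return set(range(1, k + 1)) | {j * k for j in range(2, k)}

def sumset(X):
    return {x + y for x in X for y in X}

for k in range(2, 40):
    S = spanning_set(k)
    assert len(S) <= 2 * k - 2
    assert set(range(2, k * k + 1)) <= sumset(S)
print("ok")
```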
{ "language": "en", "url": "https://math.stackexchange.com/questions/3371925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Angle of visible part of circle I have a circle and a point in 2D. The point lies outside of the circle. Given the distance between the point and circle center and the radius of the circle, what is the angle of the circle that the point can 'see'? When the point is infinitely far away, it will see 180 degrees or 1 pi radians.
We have that, indicating with: * *radius $R$ *distance from the centre $d$ the angle is $$\alpha = 2\arccos \left(\frac R d\right)$$
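A small implementation sketch of this formula (the function name is illustrative). The half-angle satisfies $\cos\theta = R/d$, from the right triangle formed by the centre, a tangent point, and the external point:

```python
# Central angle of the arc visible from a point at distance d from the
# centre of a circle of radius R (requires d > R).
import math

def visible_angle(R, d):
    if d <= R:
        raise ValueError("the point must lie outside the circle")
    return 2 * math.acos(R / d)

print(math.degrees(visible_angle(1.0, 2.0)))  # ≈ 120 degrees
print(math.degrees(visible_angle(1.0, 1e9)))  # ≈ 180 degrees (far away)
```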
{ "language": "en", "url": "https://math.stackexchange.com/questions/3372017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving a factorial equation Prove that the only solution to $k! +m! =n!$ is $k = 1, m = 1, n = 2$ How would you go about this? I can't seem to figure out where to start.
If you assume, wlog, $k\le m$ and you divide by $k!$ you get: $1 + [(k+1)\cdots m]= [(k+1)\cdots m][(m+1)\cdots n]$ which is only possible if $[(k+1)\cdots m] =1$ (that product divides the right side and the second term on the left, so it must divide $1$). If $k < m$, this means $k=0$ and $m=1$. ...... Just to finesse a bit: if we also allow $k = m$, so that $[(k+1)\cdots m]$ is the empty product $1$, then dividing by $k!=m!$ gives: $1 + 1= [(m+1)\cdots n] = 2$, which means either $n = m+1=2$, or $2=(m+1)(m+2)$ with $n=m+2$, which forces $m=k=0$ and $n=2$. So the only solutions are $0! + 0! = 2=2!$, $0! + 1! = 2!$, and $1! + 1! = 2!$.
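A brute-force confirmation over a small range (0 is included to show the variants with $0! = 1$ noted above):

```python
# All solutions of k! + m! = n! with k, m, n <= 11.
from math import factorial

sols = [(k, m, n)
        for k in range(12) for m in range(12) for n in range(12)
        if factorial(k) + factorial(m) == factorial(n)]
print(sols)  # [(0, 0, 2), (0, 1, 2), (1, 0, 2), (1, 1, 2)]
```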
{ "language": "en", "url": "https://math.stackexchange.com/questions/3372103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to recalculate a lerp percentage value, so that it returns the same value, even when the max lerp value is adjusted? I currently have a lerp function, which is $$ y = p \cdot x_2 + (1 - p) \cdot x_1, $$ where * *$x_1$ is the min lerp value, *$x_2$ is the max lerp value, *$p$ is the percentage to lerp between $x_1$ and $x_2$, in a $0.0$ to $1.0$ format, and *$y$ is the result of the lerp function. I was wondering if, by adjusting $x_2$, do there exists a specific formula I could use to recalculate $p$, so that it would output the same result, even with the new $x_2$ value? Is there also a formula I can use for if $x_1$ is adjusted? I'm sorry if the question is not very easy to understand, if you would like me to clarify anything, please just ask me. Thank you for your time!
Let $f(a,b,t)=tb+a(1-t)$. Then, we are looking for a value $t_2$ such that $f(a,b_1,t_1)=f(a,b_2,t_2)$ where $a$ represents the source value, $b_1$ represents the destination value, $t_1$ represents the interpolation factor, and $b_2$ and $t_2$ represent the new source and interplation factor respectively. If we substitute and rearrange: $$ \begin{align*} f(a,b_1,t_1)&=f(a,b_2,t_2);\\ t_1b_1+a(1-t_1)&=t_2b_2+a(1-t_2);\\ t_1(b_1-a)&=t_2(b_2-a);\\ t_2&=\boxed{\frac{t_1(b_1-a)}{b_2-a}}, \end{align*} $$ which is the formula for the interpolation percentage such that the interpolated value stays the same.
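A quick sketch of the result in code (the names `lerp` and `retarget` are illustrative):

```python
# Given f(a, b, t) = t*b + (1 - t)*a, retargeting the max from b1 to b2
# with t2 = t1*(b1 - a)/(b2 - a) preserves the interpolated value.
def lerp(a, b, t):
    return t * b + (1 - t) * a

def retarget(a, b1, t1, b2):
    if b2 == a:
        raise ValueError("new max equals min: every t gives the same value")
    return t1 * (b1 - a) / (b2 - a)

a, b1, t1, b2 = 2.0, 10.0, 0.25, 18.0
t2 = retarget(a, b1, t1, b2)
print(lerp(a, b1, t1), lerp(a, b2, t2))  # 4.0 4.0
```

If instead the minimum changes from $a_1$ to $a_2$ (the second part of the question), the same rearrangement of $t_1b+(1-t_1)a_1 = t_2b+(1-t_2)a_2$ gives $t_2 = \dfrac{t_1(b-a_1)+a_1-a_2}{b-a_2}$.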
{ "language": "en", "url": "https://math.stackexchange.com/questions/3372204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all real matrices such that $X^{3}-4X^{2}+5X=\begin{pmatrix} 10 & 20 \\ 5 & 10 \end{pmatrix}$ The following question comes from the 1998 Romanian Mathematical Competition: Find all matrices in $M_2(\mathbb R)$ such that $$X^{3}-4X^{2}+5X=\begin{pmatrix} 10 & 20 \\ 5 & 10 \end{pmatrix}$$ Can you guys please help me? Thanks a lot!
Let $p$ be the polynomial in question and $R$ be the right hand side. Note that $R$ is similar to $D=\operatorname{diag} (20,0)$. Let $V^{-1}RV = D$, then since $V^{-1}p(X)V = p(V^{-1}XV) = D$, we can look for solutions to $p(X)=D$ and then conjugate back to get the original solutions. Note that $De_1 = 20 e_1, D e_2 = 0$. Hence $p(X)e_1 = 20e_1$, $p(X)e_2 = 0$. If $\lambda$ is an eigenvalue of $X$ then $p(\lambda)$ is an eigenvalue of $p(X)$, hence $X$ has distinct eigenvalues and $p(\lambda_1) = 20, p(\lambda_2) = 0$. Hence $e_1,e_2$ are eigenvectors of $X$ (this is the key here). In particular, $X$ is diagonal, so the problem reduces to solving $p(x) = 20$ (roots $4,\pm \sqrt{5}i$) to get $X_{11}$ and $p(x)=0$ (roots $0, 2 \pm i$) to get $X_{22}$ and seeing what combinations work. Since the matrix is real, we see that $X$ must have eigenvalues $4,0$ and so $X = \operatorname{diag} (4,0)$. To finish, we need to conjugate back: if we let $V= \begin{bmatrix} 2 & -2 \\ 1 & 1\end{bmatrix}$, then $V X V^{-1} = \begin{bmatrix} 2 & 4 \\ 1 & 2\end{bmatrix}$.
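A direct arithmetic check of the solution found above (pure-Python $2\times2$ products, no libraries):

```python
# Verify that X = [[2, 4], [1, 2]] satisfies X^3 - 4 X^2 + 5 X = R.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

X  = [[2, 4], [1, 2]]
X2 = matmul(X, X)
X3 = matmul(X2, X)
P  = [[X3[i][j] - 4 * X2[i][j] + 5 * X[i][j] for j in range(2)]
      for i in range(2)]
print(P)  # [[10, 20], [5, 10]]
```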
{ "language": "en", "url": "https://math.stackexchange.com/questions/3372342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
Compute in a closed form the following sum : $\sum_{n=1}^{+\infty}\frac{\Gamma^{4}(n+\frac{3}{4})}{(4n+3)^{2}\Gamma^{4}(n+1)}$ Today I'm going to find the closed form of: $\sum_{n=1}^{+\infty}\frac{\Gamma^{4}(n+\frac{3}{4})}{(4n+3)^{2}\Gamma^{4}(n+1)}$ My attempt: We know that: $\Gamma(z)=\int_0^{+\infty}t^{z-1}e^{-t}dt$ So: $\Gamma^{4}(n+\frac{3}{4})=\int_{[0,+\infty[^4}(xyzt)^{n-\frac{1}{4}}e^{-x-y-z-t}dxdydzdt$ But the problem is with: $\sum_{n=1}^{+\infty}\frac{x^{n}}{(4n+3)^{2}(n!)^{4}}$ I think it is related to the hypergeometric function. If there is some trick to compute this original sum, drop it here. Thanks!
As one could expect, the result must involve hypergeometric function. A CAS gave for the infinite summation $$\frac{\Gamma \left(\frac{3}{4}\right)^4 \left(3136 \left(\, _5F_4\left(\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4};1,1,1,\frac{7}{4};1\right)-1\right)-243 \, _6F_5\left(\frac{7}{4},\frac{7}{4},\frac{7}{4},\frac{7}{4},\frac{7}{4},\frac{7}{4};2,2,2,\frac{11}{4},\frac{11}{4};1\right)\right)}{28224}$$ and its numerical representation is $0.0211403036686719835443455214070$. Inverse symbolic calculators do not identify this number but it seems to be very close to the positive root of $$ 43 x^2+141 x-3=0 \implies x=\frac{1}{86} \left(\sqrt{20397}-141\right)\approx 0.02114030330$$ Edit In order to keep it, I shall mention that $\color{red}{\text{David H}}$ greatly simplified the expression to $$\sum_{n=1}^{+\infty}\frac{\Gamma^{4}(n+\frac{3}{4})}{(4n+3)^{2}\Gamma^{4}(n+1)}=\frac{4 \pi ^4}{9 \Gamma \left(\frac{1}{4}\right)^4}\left(\, _6F_5\left(\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{4},\frac{3}{ 4};1,1,1,\frac{7}{4},\frac{7}{4};1\right)-1 \right)$$
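The quoted numerical value can be reproduced with the standard library alone, via the log-Gamma function (a numeric check of the CAS output, not of the closed forms):

```python
# Partial sums of Gamma(n + 3/4)^4 / ((4n + 3)^2 * Gamma(n + 1)^4).
# Since Gamma(n + 3/4)/Gamma(n + 1) ~ n^(-1/4), the terms decay roughly
# like 1/(16 n^3), so 5000 terms give about 1e-9 accuracy.
from math import lgamma, exp

def term(n):
    return exp(4 * (lgamma(n + 0.75) - lgamma(n + 1))) / (4 * n + 3) ** 2

s = sum(term(n) for n in range(1, 5001))
print(s)  # ≈ 0.0211403036687...
```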
{ "language": "en", "url": "https://math.stackexchange.com/questions/3372415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }