Find $\angle DCF$. Let $ABC$ be an isosceles triangle with $AB=AC$ and $\angle{BAC}=100^{\circ}$, let $D\in (BC)$ be such that $AC=DC$, and let $F$ be on $(AB)$ with $DF\parallel AC$. Find $\angle DCF$. I tried a lot of constructions, hoping to find a cyclic quadrilateral, but I didn't succeed.
Let $\angle DCF =x$ and $AB =AC=CD=1$. The condition $FD \parallel AC$ gives similar triangles $BDF$ and $BCA$, and $\frac{AF}{AC} = \frac 1{2\sin50}$. Then apply the sine rule in triangle $FCA$: $$\frac{\sin\angle ACF}{\sin\angle AFC}=\frac{AF}{AC} \implies\frac{\sin(40-x)}{\sin(40+x)}=\frac1{2\sin50}.$$ Cross-multiplying and using the product-to-sum identity $2\sin 50\sin(40-x)=\cos(10+x)-\cos(90-x)=\cos(10+x)-\sin x$, this becomes \begin{align} \sin x &=\cos(10+x)-\sin(40+x) \\ &=\cos(10+x)-\cos(50-x)=\sin(20-x) \end{align} which yields the solution $\angle DCF =x=10^\circ$.
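As a numeric sanity check (an editorial sketch, not part of the original answer), one can place the triangle in coordinates and measure $\angle DCF$ directly; the setup below assumes $AB=AC=1$, so $BC=2\sin 50^\circ$.

```python
import math

s50, c50 = math.sin(math.radians(50)), math.cos(math.radians(50))
B, C, A = (0.0, 0.0), (2 * s50, 0.0), (s50, c50)  # AB = AC = 1, angle BAC = 100 degrees
D = (2 * s50 - 1.0, 0.0)                          # D on BC with DC = AC = 1
t = (2 * s50 - 1.0) / (2 * s50)                   # parameter chosen so that DF is parallel to AC
F = (t * A[0], t * A[1])                          # F = t*A lies on segment AB

# DF parallel to AC: the cross product of the direction vectors should vanish
cross = (F[0] - D[0]) * (0.0 - c50) - (F[1] - D[1]) * (2 * s50 - s50)

# angle DCF at vertex C
u, v = (D[0] - C[0], D[1] - C[1]), (F[0] - C[0], F[1] - C[1])
cos_dcf = (u[0] * v[0] + u[1] * v[1]) / (math.hypot(*u) * math.hypot(*v))
angle_dcf = math.degrees(math.acos(cos_dcf))
```

Running this gives an angle of $10^\circ$ up to floating-point error, matching the derivation.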
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is a symmetric matrix positive definite iff $D$ in its LDU decomposition is positive definite? Given $$A=LDU$$ where $A$ is a real symmetric matrix, $L$ is a lower unitriangular matrix, $D$ is a diagonal matrix, and $U$ is an upper unitriangular matrix, can we say that $$A>0 \iff D>0$$ ? Edit: My thinking is that $(LD^{1/2})(D^{1/2}U)$ is (probably?) the Cholesky decomposition, and $D^{1/2}$ exists iff $D>0$.
First of all, if $A = LDU$ is symmetric, we must have $U=L^T$ by uniqueness of the decomposition ($LDU = A=A^T = U^T D L^T$), so $A=LDL^T$. Now if $D>0$, then for all $x\neq 0$, denoting $y=L^Tx$ (why is $y \neq 0$?) yields $$x^T A x = x^T L D L^T x = (L^T x)^T D L^T x = y^T D y > 0$$ And if $A>0$, then for all $x \neq 0 $, denoting $y = (L^T)^{-1}x $ yields $$x^T D x = (L^T y)^T D (L^T y)=y^T LDL^T y = y^T A y > 0$$
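A quick numeric illustration of the equivalence (an editorial sketch, not from the original post): for a $2\times 2$ unit lower triangular $L$, the matrix $A=LDL^T$ passes Sylvester's criterion exactly when the pivots in $D$ are positive.

```python
def ldlt(l, d1, d2):
    # A = L D L^T for L = [[1, 0], [l, 1]] and D = diag(d1, d2)
    return [[d1, l * d1], [l * d1, l * l * d1 + d2]]

def is_pd(A):
    # Sylvester's criterion for a symmetric 2x2 matrix:
    # positive leading principal minors
    return A[0][0] > 0 and A[0][0] * A[1][1] - A[0][1] * A[1][0] > 0

A_pos = ldlt(0.7, 2.0, 0.5)   # D > 0, so A should be positive definite
A_neg = ldlt(0.7, 2.0, -0.5)  # one negative pivot, so A should not be
```

Note that $\det A = d_1 d_2$ here, since $\det L = 1$, which is exactly why a sign flip in $D$ destroys definiteness.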
{ "language": "en", "url": "https://math.stackexchange.com/questions/3666885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Can someone explain the definition of primitive mapping? Definition: If $G$ maps an open set $E \subset R^n$ into $R^n$, and if there is an integer $m$ and a real function $g$ with domain $E$ such that $$G(x)=\sum_{i \neq m} x_i e_i +g(x) e_m,\, (x \in E)$$ then we call $G$ primitive. Can someone explain this definition? How should I understand "primitive" and the related equation?
Since not all notions in the question are defined, we try to guess their meaning. We have that $R^n$ probably is $\Bbb R^n$, $e_i$ is the standard basis vector of $\Bbb R^n$ such that its $i$-th coordinate is $1$ and the other coordinates are zeroes, given $x\in E\subset \Bbb R^n$, $x=\sum_{i=1}^n x_ie_i$ is the decomposition of $x$ with respect to the basis $\{e_i\}$, that is $x_i$ is the $i$-th coordinate of $x$ for each $1\le i\le n$. Then $G$ is primitive means that $G$ is the identity map on $E$ distorted on $m$-th coordinate for some $1\le m\le n$ by some function $g:E\to\Bbb R$. In particular, if $g(x)=x_m$ for each $x\in E$ then $G$ is the identity map.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3667172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove an idempotent invertible 2x2 matrix in general linear group $\text{GL}_2(\mathbb{R})$ must be the identity The general linear group of degree $2$ over $\mathbb{R}$ (all $2 \times 2$ invertible real matrices with matrix multiplication), is a group. In any group $G$, if $a \in G$ is idempotent i.e. $aa=a$, then $$a=ae=a(aa^{-1})=(aa)a^{-1}=(a)a^{-1}=e$$ In $\text{GL}_2(\mathbb{R})$, the identity is $e=I_2= \begin{bmatrix}1&0 \\ 0 &1\\ \end{bmatrix}$. I want to show without using the above result that if $A \in \text{GL}_2(\mathbb{R})$ is idempotent, then $A=I_2$. Suppose $A=AA$, so $$ \begin{bmatrix} a & b\\ c & d\\ \end{bmatrix}=A=AA= \begin{bmatrix} a & b\\ c & d\\ \end{bmatrix}\begin{bmatrix} a & b\\ c & d\\ \end{bmatrix} = \begin{bmatrix} a^2+bc & b(a+d)\\ c(a+d) & d^2+bc\\ \end{bmatrix} $$ Thus, $a^2+bc=a,\, d^2+bc=d,\, b(a+d)=b,\, c(a+d)=c$. Using the last two equations, we have $c(a+d-1)=0=b(a+d-1)$. There are three cases: $b=0$, $c=0$, or $a+d=1$. I used the additional fact that $A$ is invertible and $ad \neq bc$ to show that $A=I_2$ for $b=0$ and $c=0$. Notice that if $\text{tr}A=a+d=1$, then it is impossible to have $A=I_2$, but I have had no success with finding a contradiction. What breaks if $\text{tr}A=1$? Perhaps we can use that $\text{det}A=ad-bc=1$ for any invertible idempotent matrix (since $(\text{det}A)^2=\det A$)?
If $a+d=1$, then $$A = \begin{bmatrix} a & b\\ c & 1-a\\ \end{bmatrix}$$ and $A = A^2$ implies $$\begin{bmatrix} a^2+bc & b\\ c & 1-a\\ \end{bmatrix} = \begin{bmatrix} a & b\\ c & a^2+bc-2a+1\\ \end{bmatrix}$$ and we must have $$a^2-a+bc = 0$$ But then $\det A = a(1-a)-bc = -(a^2-a+bc) = 0$, which is a contradiction, since $A$ is invertible. So $a+d \ne 1$.
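The trace-$1$ case can be sanity-checked with exact rational arithmetic (an illustrative sketch, not part of the original answer): pick any $a$, set $bc=a-a^2$, and the resulting matrix is idempotent but singular, so no invertible idempotent with trace $1$ exists.

```python
from fractions import Fraction

a, b = Fraction(3, 4), Fraction(1)
c = (a - a * a) / b        # forces a^2 - a + bc = 0, as derived above
d = 1 - a                  # trace condition a + d = 1
A = [[a, b], [c, d]]

def matmul(X, Y):
    # exact 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

det_A = a * d - b * c      # = a(1-a) - (a - a^2) = 0
```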
{ "language": "en", "url": "https://math.stackexchange.com/questions/3667304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Why is $p$ necessarily greater than $r$ in this number theory problem? From the 1998 St. Petersburg City Mathematical Olympiad, presented in Andreescu & Andrica's number theory book: Let $n$ be a positive integer. Show that any number greater than $n^4/16$ can be written in at most one way as the product of two of its divisors having difference not exceeding $n$. The presented solution is this: Suppose, on the contrary, that there exist $a > c \ge d > b$ with $a-b \le n$ and $ab=cd>n^4/16$. Put $p=a+b, q=a-b, r=c+d,s=c-d.$ Now $$p^2-q^2=4ab=4cd=r^2-s^2>n^4/4.$$ Thus $p^2-r^2=q^2-s^2 \le q^2 \le n^2.$ But $r^2>n^4/4$ (so $r>n^2/2$) and $p>r\dots$ There is more to the solution, but that is irrelevant to my question. Why is $p>r$? It seems that this should be obvious, the way that it is presented. I notice that $p > r \Leftrightarrow p^2-r^2 > 0$, but I cannot prove that this is true. Manipulating the chain inequality $a>c\ge d > b$ has also done nothing for me.
It is implicit in the problem that the divisors in question are positive, because if you allowed negative divisors then you could always get a second factorization by reversing the factors' signs. So I'll take $a,b,c,d$ to all be positive. Now I'm going to try to simplify the problem by reducing it to the case where the product $ab=cd$ is $1$. To do that, just divide all four of $a,b,c,d$ by $\sqrt{ab}=\sqrt{cd}$. If I introduce new variables $x=\sqrt{\frac ab}$ and $y=\sqrt{\frac cd}$ then I have $x>y\geq\frac1y>\frac1x$. What I need to prove is that $p>r$, which is $a+b>c+d$, which is (after dividing by $\sqrt{ab}=\sqrt{cd}$) just $x+\frac1x>y+\frac1y$. Since $x$ and $y$ are both $\geq1$ (because they're positive and $\geq$ their reciprocals), it suffices to show that the function $f(x)=x+\frac1x$ is increasing for $x\geq 1$. Fortunately, that's easy, by differentiating. The derivative $f'(x)=1-\frac1{x^2}$ is clearly positive for all $x>1$.
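A concrete numeric instance of the claim (illustrative only, with made-up values): take $a=9$, $b=1$, $c=4.5$, $d=2$, which satisfy $a>c\ge d>b$ and $ab=cd$; then $p=a+b$ indeed exceeds $r=c+d$, while $p^2-q^2=r^2-s^2=4ab$.

```python
import math

# concrete positives with a > c >= d > b and equal products ab = cd = 9
a, b = 9.0, 1.0
c, d = 4.5, 2.0

p, q = a + b, a - b        # p = 10, q = 8
r, s = c + d, c - d        # r = 6.5, s = 2.5
same_product = math.isclose(a * b, c * d)
```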
{ "language": "en", "url": "https://math.stackexchange.com/questions/3667451", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Spectral sequence of a filtration: a possible mistake $\require{AMScd}$The following is taken from these notes by Daniel Murfet. Let $ \cdots \subseteq F^{p + 1}(C) \subseteq F^p(C) \subseteq F^{p - 1}(C) \subseteq \cdots$ be a filtration of a complex $C$ in an abelian category. There is either a mistake or I don't understand something. I think that $\ddot{A^{pq}_r} \subseteq \ddot{A^{pq}_{r + 1}}$. Indeed, $A^{pq}_r$ is defined by the following pullback $$\begin{CD} A^{pq}_r @>>> F^p(C^{p + q}) \\ @VVV @VVV \\ F^{p + r}(C^{p + q + 1}) @>>> F^p(C^{p + q + 1}) \end{CD}$$ where the bottom morphism is a subobject inclusion and the left morphism is a differential of $F^p(C)$. Concretely, $A^{pq}_r$ is a pullback of $d^{p,p+q}$ along the subobject inclusion $F^{p + r}(C^{p + q + 1}) \subseteq F^p(C^{p + q + 1})$. Then $\ddot{A^{pq}_r}$ is the image of the composition in the following diagram $$\begin{CD} A^{p - r + 1, q + r - 2}_{r - 1} @>>> F^{p - r + 1}(C^{p + q - 1}) \\ @VVV @VVV \\ F^p(C^{p + q}) @>>> F^{p - r + 1}(C^{p + q}) \end{CD}$$ and $\ddot{A^{pq}_{r + 1}}$ is the image of the composition in the following diagram $$\begin{CD} A^{p - r, q + r - 1}_r @>>> F^{p - r}(C^{p + q - 1}) \\ @VVV @VVV \\ F^p(C^{p + q}) @>>> F^{p - r}(C^{p + q}) \end{CD}$$ To have a map from one image to another, we need a map between their domains and codomains. But the pullback universal property only gives a morphism from $A^{p - r + 1, q + r - 2}_{r - 1}$ to $A^{p - r, q + r - 1}_r$. Similarly, for the following screenshot I only see how to construct a map from $A^{p + r, q - r + 1} \to A^{pq}_r$, for similar reasons. So, my question is: is there a mistake? If yes, can the proof be salvaged? If not, what am I missing?
We have $$\ddot{A^{p,q}_r}=\partial \newcommand\of[1]{\left({#1}\right)} \of{ F^{p-r+1}C^{p+q-1}\cap \partial^{-1} \of{ F^{p+1}C^{p+q} } }. $$ Note that $\partial^{-1}(F^{p+1}C^{p+q})$ is independent of $r$, and as $r$ increases, the filtration index decreases, and therefore the groups get larger, so you appear to be correct that the order of inclusions should be $\ddot{A}^{p,q}_r\subseteq \ddot{A}^{p,q}_{r+1}$. This doesn't really affect the proof in any way (at least the visible part), because the excerpted discussion doesn't use the ordering at all. However, I will say that this ordering should be the correct one, since we want $Z^{p,q}_{r+1}\subseteq Z^{p,q}_r$ and $B^{p,q}_{r+1}\supseteq B^{p,q}_r$ so that $E_{r+1}^{p,q}$ is a subquotient of $E_r^{p,q}$. $r$-cocycles should get smaller and $r$-coboundaries should get larger so that we have a sensible notion of convergence. I imagine this was a typo, it's really easy to get indices and ordering confused here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3667629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Induction step: $5 + 5n \leq {n}^2$ for $n \geq 6$ Prove by mathematical induction that $5 + 5n \leq {n}^2 $ for all integers $n\geq 6$. Step 1: Base case. For $n = 6$ we have $5 + 5(6) = 35 \leq 36 = {6}^2$, so the base case holds. Step 2: Induction step. We assume the claim is true for some integer $k \geq 6$, that is, $5 + 5k \leq {k}^2$ (*). We now need to prove the claim for $k+1$, that is, $5 + 5(k + 1) \leq {(k + 1)}^2$. I am stuck at this step. Somehow I am unable to substitute my induction hypothesis (*) into the $k+1$ inequality correctly. Is the work up to this point correct? Expanding the LHS of the $k+1$ inequality gives $5k + 10$, which doesn't seem to lead anywhere, and substituting ${k}^2$ into the RHS gives $7k + 6$, which looks wrong too. Can someone please show me how to proceed from here?
\begin{align} 5+5(k+1) &= 5+5k + 5 \\ &\le k^2+5 \\ &\le k^2 + 2k+1 \\ &=(k+1)^2 \end{align} The second-to-last step is due to $5 \le 2k+1$, which is equivalent to $2 \le k$; we know this is true since $k \ge 6$.
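Both the statement and the cutoff are easy to spot-check by brute force (an editorial sketch, not part of the original answer):

```python
def claim(n):
    """The inequality 5 + 5n <= n^2."""
    return 5 + 5 * n <= n * n

all_hold = all(claim(n) for n in range(6, 1001))
fails_below = not claim(5)   # n = 5 gives 30 <= 25, which is false
```

The check at $n=5$ shows the hypothesis $n\ge 6$ is genuinely needed.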
{ "language": "en", "url": "https://math.stackexchange.com/questions/3667771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How do we know that the eigenspaces of $T$ are the only $T$-invariant subspaces? Given $T:\mathbb{C}^2 \rightarrow \mathbb{C}^2$, if I know we have two different eigenvalues, let's call them $\lambda_1, \lambda_2$, then we can say $\mathbb{C}^2, \left\{ 0 \right\}, V_{\lambda_1} , V_{\lambda_2}$ are all invariant sub-spaces for $T$. how do I prove there aren't any more invariant sub-spaces?
If there were another invariant subspace, it would have to be $1$-dimensional; in other words, it would be equal to $\Bbb Cv$ for some vector $v\ne0$. But asserting that $\Bbb Cv$ is invariant is the same thing as asserting that $v$ is an eigenvector, and every eigenvector of $T$ lies in $V_{\lambda_1}$ or $V_{\lambda_2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3668060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Combinatorial proof of $1+2(\sum_{i=0}^n 3^i)=3^{n+1}$ I have this workbook of proofs that I've been trying to finish for a couple of months now. There is this problem in it that requires me to prove $1+2(\sum_{i=0}^n 3^i)=3^{n+1}$ using combinatorial identities only. This problem has stumped me for several days and I would appreciate any help I could get
Here's a different solution than the one in the link: consider a tournament with $3^{n+1}$ players; divide them into groups of three, with two elimination games per group, and let each group's winner proceed to the next round, until a single final winner emerges from the last group of three. The total number of games played is $2\sum_{i=0}^n 3^i$, i.e. the LHS minus $1$; but since every game eliminates exactly one player, it is also $3^{n+1}-1$. Equating the two counts gives the identity.
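The counting argument can be simulated directly (an illustrative sketch, not in the original answer): play the tournament round by round and count the games.

```python
def games_played(n):
    """3**(n+1) players; each round: groups of 3, two elimination games per group."""
    players, games = 3 ** (n + 1), 0
    while players > 1:
        games += 2 * (players // 3)   # two games per group of three
        players //= 3                 # one winner advances per group
    return games

counts = [games_played(n) for n in range(8)]
```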
{ "language": "en", "url": "https://math.stackexchange.com/questions/3668233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given $ a_n= 6a_{n-1} -4a_{n-2}$ and initial values, find a closed form for $a_n$ I have a recursive formula: $$ a_n= 6a_{n-1} -4a_{n-2}$$ with $a_0=1$ and $a_1=3$, and I need to find a closed-form expression for $(a_n)_{n\in \mathbb{N}}$. I managed to calculate almost everything, but at the end I get this expression: $$ a_n= \frac{(3+\sqrt{5})^n}{2} + \frac{(3-\sqrt{5})^n}{2} $$ Is there a way to prove the following statement? Everything I have tried up till now doesn't do the job. And are these two expressions even equal? $$ \frac{(3+\sqrt{5})^n}{2} + \frac{(3-\sqrt{5})^n}{2} = \left \lceil \frac{(3+\sqrt5)^n}{2} \right \rceil$$
Note that $3-\sqrt5$ is a little less than $0.764$, so $0<\frac{(3-\sqrt5)^n}2<\frac12$ for all $n\ge 1$. The lefthand side of your final expression must be an integer; call it $m$. Thus, $$0<m-\frac{(3+\sqrt5)^n}2<\frac12\;,$$ and it follows immediately that $$m=\left\lceil\frac{(3+\sqrt5)^n}2\right\rceil\;.$$ For this it’s actually enough that $\frac{(3-\sqrt5)^n}2<1$; the fact that it’s less than $\frac12$ allows the stronger conclusion that $a_n$ is actually the integer closest to $\frac{(3+\sqrt5)^n}2$ for $n\ge 1$.
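Both the exact closed form and the ceiling form can be checked against the recurrence for small $n$ (an illustrative sketch; floating point is adequate at these sizes):

```python
import math

# values from the recurrence a_n = 6 a_{n-1} - 4 a_{n-2}
a = [1, 3]
for _ in range(2, 16):
    a.append(6 * a[-1] - 4 * a[-2])

# the claimed ceiling form (valid for n >= 1) and the exact closed form
ceil_form = [math.ceil((3 + math.sqrt(5)) ** n / 2) for n in range(16)]
exact_form = [((3 + math.sqrt(5)) ** n + (3 - math.sqrt(5)) ** n) / 2
              for n in range(16)]
```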
{ "language": "en", "url": "https://math.stackexchange.com/questions/3668376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convergence of series by adding parentheses Given a series $\sum_{n=1}^{\infty}a_n$: suppose we know that by inserting parentheses into the sum, in such a way that within each pair of parentheses all terms have the same sign, we get a convergent series. Then the original series is also convergent. My lecturer proved this theorem in a very complex way that I could not follow. Does anyone know where I can find a proof online? Of course I'll be happy if someone wants to prove it here, but I think it's a pretty long proof.
Let $s_n=\sum_{k=1}^na_k$ and let $S_n$ be the $n$th partial sum of the series that you get after adding the parentheses. Suppose, say, that the first four $a_k$'s are positive, that the next five are negative, and that then there are some more positive terms. Then $S_1=s_4$ and, after that, $S_2=s_9$. Besides, $s_1\leqslant s_2\leqslant s_3\leqslant s_4=S_1$, and then $s_4\geqslant s_5\geqslant s_6\geqslant s_7\geqslant s_8\geqslant s_9=S_2$. And so on. Each $s_k$ lies between some $S_l$ and $S_{l+1}$. So, since $(S_n)_{n\in\Bbb N}$ converges, so does $(s_n)_{n\in\Bbb N}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3668517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intuition on the proposition: $f:(X,d_{X})\rightarrow(Y,d_{Y})$ is continuous iff the pre-image of an open set is open I am studying continuity on metric spaces and I have been presented with the proposition in the title. My question is: how should we understand the core of this characterization? I am already acquainted with the $\varepsilon-\delta$ definition. I also know that continuous functions map convergent sequences to convergent sequences. I am really interested in the result in the title.
Fix an $x \in X$. Note that for any $\epsilon > 0$, the ball of radius $\epsilon$ around $f(x)$ is an open set, so the fact that the pre-image of that ball is open implies that you can find a $\delta$-radius ball around $x$ that maps into the $\epsilon$-ball, which is exactly what is happening with the $\epsilon$-$\delta$ definition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3668902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to apply the Leibniz rule for this integral? I am stuck on the problem of how to evaluate the following derivative: $$\frac{d}{dt}\int_0^{a(t)}{(a (t)-s)^{\alpha-1}f (s) ds}$$ where $f(s)$ is differentiable and $\alpha\in (0,1)$. The problem is that the integrand blows up at the upper limit, so the standard Leibniz rule does not apply directly.
Integrate by parts first; the boundary term at $s=a(t)$ vanishes because $\alpha>0$: $$\int\limits_{0}^{a(t)}{(a(t)-s)^{\alpha-1}}f(s)ds = \left[-\frac{(a(t)-s)^\alpha}{\alpha}f(s) \right]_{0}^{a(t)}+\frac{1}{\alpha}\int\limits_{0}^{a(t)}{(a(t)-s)^\alpha}{f^{\prime}(s)}ds = \frac{a(t)^\alpha}{\alpha}f(0)+\frac{1}{\alpha}\int\limits_{0}^{a(t)}{(a(t)-s)^\alpha}{f^{\prime}(s)}ds$$ The integrand on the right is now continuous (it vanishes at $s=a(t)$), so the usual Leibniz rule can be applied to differentiate this expression in $t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3669214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is calculus needed in this problem involving work and units of energy? Consider this problem: Suppose it takes k units of energy to lift a cubic meter of water one meter. About how much energy E will it take to pump dry a circular hole one meter in diameter and 100 meters deep that is filled with water? Here's how I tried to solve it: Total volume of water = $\pi r^2h = \pi (\frac{1}{2})^2 100$ Energy required to move this volume of water by $1$ meter $=\pi(\frac{1}{2})^2100k$, and so energy required to move this move this volume of water by $100$ meters $= 100\pi(\frac{1}{2})^2100k = \frac{\pi k10^4}{4}$. However, the answer given is: $\frac{\pi k10^4}{8}$. Where am I going wrong with my reasoning? Also, this problem was assigned in single variable calculus, under definite integrals. Why is calculus needed here at all?
You don't have to lift every cup of water out by the whole $100$m. If a given horizontal cross-section of water is at a depth of $d$ meters, you need to move that cross-section only $d$ meters, not $100$m. So to find the whole amount of work done you need to add up the work done for each thin cross-section, which depends on the depth; and you can do that with an appropriate integral.
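A quick numeric illustration (not part of the original answer, taking $k=1$ for simplicity): summing the work over thin horizontal slices reproduces $\frac{\pi k10^4}{8}$, while lifting the whole volume $100$ m gives the asker's $\frac{\pi k10^4}{4}$.

```python
import math

k = 1.0          # energy to lift one cubic meter of water by one meter
r = 0.5          # hole radius (1 meter diameter)
depth = 100.0

# midpoint Riemann sum: the slice at depth d has volume pi*r^2*dz
# and must be lifted d meters
n = 10_000
dz = depth / n
E = sum(k * math.pi * r * r * ((i + 0.5) * dz) * dz for i in range(n))

# the mistaken computation: lifting every slice the full 100 m
E_wrong = k * math.pi * r * r * depth * depth
```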
{ "language": "en", "url": "https://math.stackexchange.com/questions/3669406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Does $B^{-1}-A^{-1}$ have $r$ positive eigenvalues when $A-B$ has $r$ positive eigenvalues? Assume that $A$ and $B$ are two $n\times n$ positive definite matrices. As is known, $A-B>0$ implies $B^{-1}-A^{-1} >0$. That is to say, $B^{-1}-A^{-1}$ has $n$ positive eigenvalues when $A-B$ has $n$ positive eigenvalues. My question is: assume that $A$ and $B$ are two $n\times n$ symmetric matrices, $B$ is positive definite, and $A$ is invertible. Does $B^{-1}-A^{-1}$ have $r$ positive eigenvalues when $A-B$ has $r$ positive eigenvalues? ($r\in \{1,2,\ldots,n\}$)
If $B$ is positive definite, then the following argument applies: note that $$ B^{-1/2}(A - B)B^{-1/2} = B^{-1/2}AB^{-1/2} - I $$ is a symmetric matrix with $r$ positive eigenvalues. This occurs if and only if $B^{-1/2}AB^{-1/2}$ has $r$ eigenvalues that are greater than $1$. This in turn occurs if and only if $B^{1/2}A^{-1}B^{1/2}$ has $r$ eigenvalues that lie on the interval $(0,1)$. Equivalently, $I - B^{1/2}A^{-1}B^{1/2}$ has $r$ eigenvalues on the interval $(0,1)$. This allows us to deduce that the matrix $$ B^{-1/2}[I - B^{1/2}A^{-1}B^{1/2}]B^{-1/2} = B^{-1} - A^{-1} $$ has at least $r$ positive eigenvalues. It could have more than $r$ positive eigenvalues, though. Concretely, consider $$ A = \pmatrix{2\\&1/2\\&&-1}, \quad B = I. $$ Verify that $A - B$ has $1$ positive eigenvalue, but $B^{-1} - A^{-1}$ has $2$ positive eigenvalues.
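The closing counterexample is easy to verify by hand or with a few lines of code (everything is diagonal, so the eigenvalues are just the diagonal entries):

```python
def positive_count(diag):
    # number of positive eigenvalues of a diagonal matrix
    return sum(1 for x in diag if x > 0)

A = [2.0, 0.5, -1.0]   # A = diag(2, 1/2, -1): symmetric, invertible, not definite
B = [1.0, 1.0, 1.0]    # B = I

A_minus_B = [x - y for x, y in zip(A, B)]                 # diag(1, -1/2, -2)
Binv_minus_Ainv = [1 / y - 1 / x for x, y in zip(A, B)]   # diag(1/2, -1, 2)
```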
{ "language": "en", "url": "https://math.stackexchange.com/questions/3669561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Polynomial ring is not a UFD Let $K$ be a field, and consider the ring $R=\{f\in K[x]\mid f'(1)=f''(1)=0\}$. Show that $R$ is not a UFD (Unique Factorization Domain). My thoughts: I can show that elements such as $(x-1)^3$ and $(x-1)^4$ are irreducible in $R$. Can this be used to show $R$ is not a UFD? I am not sure the best route to take. Should we exhibit an element with a non-unique factorization into irreducibles, or should we find two elements who do not have a GCD? Another thing we may be able to do, is consider a quotient of $R$ by an irreducible polynomial and show it has zero-divisors (hence it is not a domain, so the polynomial we choose isn't prime, but every irreducible polynomial in a UFD must be prime).
Silly me. Following the advice of @rschwieb, we have the non-unique factorizations $$ (x-1)^{12}=(x-1)^3(x-1)^3(x-1)^3(x-1)^3 $$ and $$ (x-1)^{12}=(x-1)^4(x-1)^4(x-1)^4 $$ into irreducibles. Therefore $R$ is not a UFD. Just to add this bit of detail: $(x-1)^3$ and $(x-1)^4$ are irreducible in $R$, since if they weren't, they'd have a linear or quadratic factor; but any non-constant polynomial of degree at most $2$ has a non-zero first or second derivative at $x=1$, so it cannot belong to $R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3669717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find all positive integer solutions $(a,b)$ to $\frac{a^3+b^3}{ab+4}=2020$. Find all positive integer solutions of the equation $$\frac{a^3+b^3}{ab+4}=2020.$$ I found two possible solutions, namely $(1011,1009)$ and $(1009,1011)$, but the way I solved the equation was messy and I don't know if there are any other solutions. Source: Turkey $1.$ TST for IMO $2020$
Write for ease $n=2020$ and let $c=a+b$. As $b=c-a$ we get the following quadratic equation in $a$: $$(3c+n)a^2-(3c^2+nc)a+c^3-4n=0$$ So its discriminant must be a perfect square $d^2$ (as the equation has a solution in $\mathbb{Z}$): $$d^2 = -3c^4+2nc^3+n^2c^2+48nc+16n^2\;\;\;\;\;(*)$$ from which we get $$\boxed{2n\mid d^2+3c^4}$$ Now what can we say about $c$? * *If $5\nmid c$ then $c^4\equiv_5 1$, so $d^2+3\equiv _5 0$, which is not possible ($2$ is not a quadratic residue mod $5$). So $5\mid c$. *Since $8\mid d^2+3c^4$, $d$ and $c$ must have the same parity. Say both are odd. Since for each odd $x$ we have $x^2\equiv_8 1$ (and hence $x^4\equiv_8 1$) we get $$ 0\equiv _8 d^2+3c^4 \equiv_8 1+3$$ A contradiction. So $c$ and $d$ are even. Since $8\mid 3c^4$ we have $8\mid d^2$ so $4\mid d$. *If $101\nmid c$ then $$d^2c^{-4} \equiv_{101} -3\implies \Big({-3\over 101}\Big)=1$$ But $$\Big({-3\over 101}\Big) = \Big({-1\over 101}\Big)\Big({3\over 101}\Big) = 1\cdot \Big({101\over 3}\Big)(-1)^{{3-1\over 2}{101-1\over 2}} = -1$$ A contradiction again, so $101\mid c$. So $$\boxed{1010\mid c}$$ Now suppose $c>n$, so $n\le c-1$. From $(*)$ and $d^2\ge 0$ we get: \begin{align}3c^4&\leq 2nc^3+n^2c^2+48nc+16n^2\\ &< 2(c-1)c^3+(c-1)^2c^2+64c^2\\ & = 3c^4-4c^3+65c^2 \end{align} and now we have $4c^3<65c^2$, a contradiction (as $c>2020$). So $c\leq 2020$, hence $c\in\{1010,2020\}$, and we check both values manually...
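The final manual check of $c\in\{1010,2020\}$ can be done by brute force (an illustrative sketch, not part of the original answer); it confirms that $(1009,1011)$ and $(1011,1009)$ are the only solutions.

```python
n = 2020

def is_solution(a, b):
    num, den = a ** 3 + b ** 3, a * b + 4
    return num % den == 0 and num // den == n

# c = a + b must be 1010 or 2020; enumerate all splits of each value
solutions = [(a, c - a)
             for c in (1010, 2020)
             for a in range(1, c)
             if is_solution(a, c - a)]
```

The case $c=1010$ turns out to contribute nothing; both solutions come from $c=2020$.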
{ "language": "en", "url": "https://math.stackexchange.com/questions/3669889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Proof that $\epsilon_{ijk}$ is the same as the determinant of the Kronecker delta I just learned about the Kronecker Delta function and $\epsilon_{ijk}$ for the first time, and I still can't wrap my mind around how to prove that $$\epsilon_{ijk} = det\begin{pmatrix} \delta_{i1} & \delta_{i2} & \delta_{i3} \\ \delta_{j1} & \delta_{j2} & \delta_{j3} \\ \delta_{k1} & \delta_{k2} & \delta_{k3} \end{pmatrix}$$ I understand that the RHS is $$det\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}=1$$ but how is that related to the fact that the $\epsilon$ tensor renders 1 for even, -1 for odd permutations of $i,j,k$ and $0$ for identical ones? And how can it be proven? The only "proof" I could think of is$$ \delta_{i1}\delta_{j2}\delta_{k3} + \delta_{i2}\delta_{j3}\delta_{k1} + \delta_{i3}\delta_{j1}\delta_{k2} - \delta_{i3}\delta_{j2}\delta_{k1} - \delta_{i2}\delta_{j1}\delta_{k3} - \delta_{i1}\delta_{j3}\delta_{k2} \\ = \begin{cases} 1 \cdot \delta_{ijk} & |\{ijk\} \in \{123\},\{312\},\{231\} \\ -1 \cdot \delta_{ijk} & |\{ijk\} \in \{213\},\{321\},\{132\} \end{cases} \\ = \epsilon_{ijk} $$ but that doesn't look sound to me. I am aware of this question, but it's kind of like two steps ahead when I'd like to be able to explain just the first. Can somebody explain with an Einstein mindset?
A property of the determinant is: exchanging two rows while leaving everything else unchanged changes the sign of the determinant, but not the magnitude. This can be proven from the formula $\det(AB) = (\det A)(\det B)$: the operation of exchanging rows in a matrix $M$ is the same as taking $EM$, where $E$ is the result of the same row exchange applied to the identity matrix $I$, and one can easily show that all such matrices $E$ have determinant $-1$. Now note that if two rows of a matrix $M$ are identical, then exchanging them makes no change to the matrix. Therefore $\det M = -\det M$, from which it follows that $\det M = 0$: if a matrix has two identical rows, then its determinant is $0$. Now define $$d_{ijk} = \det \begin{pmatrix} \delta_{i1} & \delta_{i2} & \delta_{i3} \\ \delta_{j1} & \delta_{j2} & \delta_{j3} \\ \delta_{k1} & \delta_{k2} & \delta_{k3} \end{pmatrix}$$ The matrix for $d_{123}$ is the identity matrix, so $d_{123} = 1$. If $i = j$ or $j = k$ or $i = k$, the matrix will have two identical rows, so $d_{ijk} = 0$ in this case. And by the row-exchange rule for determinants, if you exchange the values of any two of $i, j, k$, then $d_{ijk}$ changes sign. These three facts completely determine all values of $d_{ijk}$. But $\epsilon_{ijk}$ obeys exactly the same set of rules. So the two of them must be equal.
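Since the rules above pin down both sides, a $3\times 3$ check can simply enumerate all $27$ index triples (an editorial sketch, not from the original answer):

```python
from itertools import product

def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def epsilon(i, j, k):
    """Levi-Civita symbol via its defining rules."""
    if {i, j, k} != {1, 2, 3}:
        return 0                 # a repeated index
    return 1 if (i, j, k) in {(1, 2, 3), (2, 3, 1), (3, 1, 2)} else -1

def delta_det(i, j, k):
    d = lambda a, b: 1 if a == b else 0
    return det3([[d(i, 1), d(i, 2), d(i, 3)],
                 [d(j, 1), d(j, 2), d(j, 3)],
                 [d(k, 1), d(k, 2), d(k, 3)]])

pairs = [(epsilon(i, j, k), delta_det(i, j, k))
         for i, j, k in product((1, 2, 3), repeat=3)]
```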
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Show that this inequality is true Show that $\frac{2}{3} \cdot \frac{5}{6} \cdot \frac{8}{9} \cdot ... \cdot \frac{999998}{999999} > \frac{1}{100}$. I tried to introduce a second product $\frac{3}{5} \cdot \frac{6}{8} \cdot \frac{9}{11} \cdot ... \cdot \frac{999996}{999998}$, so that the product of the two telescopes to $\frac{2}{999999}$. If we call the first product $x$ and the second $y$, then $x \gt y$, hence $x^2 \gt xy = \frac{2}{999999}$, which proves $x \gt \frac{1}{1000}$, but I can't get to $\frac{1}{100}$.
Let $$A_n=\sqrt[3]{3n+1}\cdot\prod_{k=1}^n\left(1-\frac1{3k}\right).$$ Since $3\cdot 333333+1=10^6$, the product in question equals $A_{333333}/100$, so the claim is equivalent to $$\tag1 A_{333333}>1.$$ We compute $$\begin{align}\left(\frac{A_{n}}{A_{n-1}}\right)^3&=\frac{(3n+1)(3n-1)^3}{(3n-2)27n^3}\\ &=1+\frac{6n-1}{(3n-2)27n^3}\\&>1+\frac{6n-4}{(3n-2)27n^3}\\&=1+\frac2{27n^3}>1\end{align}$$ or $A_n>A_{n-1}$. By induction, $$A_{333333}>A_1=\sqrt[3]{4}\cdot \frac23 =\sqrt[3]{\frac{32}{27}}>1$$
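The inequality itself is cheap to confirm numerically (an illustrative check, not part of the original answer): the full product evaluates to roughly $0.0106 > 1/100$.

```python
prod = 1.0
for k in range(1, 333334):        # factors (3k-1)/(3k): 2/3, 5/6, ..., 999998/999999
    prod *= (3 * k - 1) / (3 * k)
```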
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove $x^3+y^3+z^3\geq 3xyz$ Prove that, $\forall x,y,z\in \mathbb{R}_+\cup \{0\}$, $$x^3+y^3+z^3\geq 3xyz\text{.}$$ I've been trying for a while now without making any significant progress.
Hint: $x\mapsto \ln(x)$ is concave on $(0,+\infty)$, since its second derivative $x \mapsto \frac{-1}{x^2}$ is negative. Since $\frac 13 + \frac 13 + \frac 13 = 1$, concavity (Jensen's inequality) gives $$\ln\Bigl(\frac 13(x^3+y^3+z^3)\Bigr)\ge \frac 13\Bigl(\ln(x^3)+\ln(y^3)+\ln(z^3)\Bigr) =\ln(xyz)$$ for $x,y,z>0$; exponentiating yields the inequality, and the case where one of the variables is $0$ is immediate.
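For $x,y,z\ge 0$ the inequality can also be spot-checked at random (an illustrative sketch, not part of the hint):

```python
import random

random.seed(0)
violations = 0
for _ in range(10_000):
    x, y, z = (random.uniform(0.0, 10.0) for _ in range(3))
    # small slack guards against floating-point noise near equality (x = y = z)
    if x ** 3 + y ** 3 + z ** 3 < 3 * x * y * z - 1e-9:
        violations += 1
```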
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Use mathematical induction to prove $3+4+\cdots+n = \frac{(n-2)(n+3)}{2}$ for $n \geq 3$ I have proved it for $n = 3$, and assumed $S(k)$ is true. I have gotten all the way to the induction step: $S(k+1) = 3+4+5+...+(k+1) = \frac{((k+1)-2)((k+1)+3)}{2}$ I am having trouble proving it past this step, and showing that both sides are equal after using what I have assumed. Please show me how I can do that last step.
Assume the formula holds for some $k\ge3$, that is: $$S(k)= \frac{(k-2)(k+3)}{2}.$$ Now, $S(k+1) = S(k)+k+1$, so using the above formula we get: $$S(k+1) = \frac{(k-2)(k+3)}{2} +k+1 = \frac{(k-2)(k+3)+2k+2}{2} = \frac{k^{2}+3k-4}{2} = \frac{(k-1)(k+4)}{2} = \frac{((k+1)-2)((k+1)+3)}{2}$$ which is your formula for $n=k+1$.
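The closed form is easy to confirm by direct summation (an editorial sketch, not part of the original answer):

```python
def S(n):
    # the sum 3 + 4 + ... + n
    return sum(range(3, n + 1))

# (n-2)(n+3) is always even, so integer division is exact
formula_ok = all(S(n) == (n - 2) * (n + 3) // 2 for n in range(3, 500))
```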
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding $\lim\limits_{n→∞}n\cos x\cos(\cos x)\cdots\underbrace{\cos(\cos(\cdots(\cos x)))}_{n\text{ times of }\cos}$ Find$$\lim_{n→∞}n\cos x\cos(\cos x)\cdots\underbrace{\cos(\cos(\cdots(\cos x)))}_{n \text{ times of } \cos}.$$ I approximated $\cos(\cos x)$ by $\cos x$, but I don't think that is the proper approach; with that approximation I got the answer $0$. It is an $\infty\cdot 0$ indeterminate form, but how can l'Hôpital's rule be applied? I tried using the sandwich theorem but I am unable to reach the answer. I plotted the graph on Desmos, but the resulting graph covered the entire area. Please help me reach the proper answer. Thanks in advance to all.
Consider the sequence $x_n$ defined by $x_0 = x$ and $x_{n+1} = \cos(x_n)$. Then the sequence in question is $$a_n = n\prod_{k=1}^n x_k.$$ I claim that $a_n \to 0$. Here's a sketch of the proof: (1) There is a unique point $x^* \in [0, 1)$ such that $\cos(x^*) = x^*$ (the fixed point of $\cos$). (2) The sequence of fixed point iterates $x_n$ converges to $x^*$, regardless of the value of $x$. (3) The sum $\sum a_n$ converges, by the ratio test. (4) The sequence $a_n$ converges to $0$, by the divergence test. The hard bit is (2), which I'll leave to last. To prove (1), note that the function $f(x) = x - \cos(x)$ is continuous, negative at $0$, and positive at $\pi/2$, so by the intermediate value theorem, there must be at least one point where $f(x) = 0$ in $[0, \pi/2]$. Further, $f'(x) = 1 + \sin(x) \ge 0$, meaning that the function is non-decreasing. If $f$ had more than one root, then it'd have an interval of roots, which would correspond to an interval on which $f'$ vanishes. This is clearly not the case, so there is a unique $x^*$ such that $f(x^*) = 0$, i.e. $\cos(x^*) = x^*$. The point $x^*$ lies in the range of $\cos$, i.e. $[-1, 1]$, as well as in $[0, \pi/2]$, so $x^* \in [0, 1]$. If $x^* = 1$, then $\cos(x^*) = 1$, hence $x^*$ would have to be an integer multiple of $2\pi$, which it is clearly not. Thus, $x^* \in [0, 1)$, as claimed. To prove (3), assuming (2) is proven, consider $$\left|\frac{a_{n+1}}{a_n}\right| = \frac{n+1}{n}|x_{n+1}| \to x^* < 1,$$ thus the series converges absolutely. Then, (4) follows immediately from this: the terms of a convergent series must tend to $0$. Now, we tackle (2). First, recall the trigonometric identity: $$\cos(x) - \cos(y) = -2\sin\left(\frac{x + y}{2}\right)\sin\left(\frac{x - y}{2}\right).$$ Now, suppose that $x, y \in [0, 1]$.
Note that $\frac{x + y}{2} \in [0, 1]$ and $\sin$ is increasing and positive on $[0, 1] \subseteq [0, \pi/2]$, hence $$\left|\sin\left(\frac{x + y}{2}\right)\right| = \sin\left(\frac{x + y}{2}\right) \le \sin(1).$$ Also, recall that $|\sin \theta| \le |\theta|$ for all $\theta$. Hence, assuming still $x, y \in [0, 1]$, $$|\cos(x) - \cos(y)| = 2\left|\sin\left(\frac{x + y}{2}\right)\sin\left(\frac{x - y}{2}\right)\right| < 2 \cdot \sin(1) \cdot \left| \frac{x - y}{2}\right| = \sin(1)|x - y|.$$ Now, note that $x_n \in [-1, 1]$ for $n \ge 1$, and since $\cos$ is positive over $[-1, 1]$, we have $x_n \in [0, 1]$ for $n \ge 2$. So, for $n \ge 2$, we get $$|x_{n+2} - x_{n+1}| = |\cos(x_{n+1}) - \cos(x_n)| \le \sin(1)|x_{n+1} - x_n|.$$ This implies the series $$\sum_{n=2}^\infty (x_{n+1} - x_n)$$ is absolutely summable, as it passes the ratio test (limsup version): $$\left|\frac{x_{n+2} - x_{n+1}}{x_{n+1} - x_n}\right| = \frac{|x_{n+2} - x_{n+1}|}{|x_{n+1} - x_n|} \le \sin(1) \frac{|x_{n+1} - x_n|}{|x_{n+1} - x_n|} = \sin(1) < 1.$$ Therefore, $\sum_{n=2}^\infty (x_{n+1} - x_n)$ converges. This is a telescoping series, whose partial sums take the form $x_n - x_2$. These partial sums converge, and hence so must $x_n$. Now, because $x_n$ converges to some $L$, it follows from $\cos$ being continuous that $$x_{n+1} = \cos(x_n) \implies L = \cos(L) \implies L = x^*,$$ completing step 2, and the full proof, as necessary.
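Not part of the proof, but a quick numerical sanity check of the two main claims (the iterates converge to the fixed point of $\cos$, and $a_n \to 0$) is easy in Python; the starting value and iteration count below are arbitrary:

```python
import math

x = 1.3            # arbitrary starting value x_0
prod = 1.0
a_n = 0.0
for n in range(1, 201):
    x = math.cos(x)        # x is now x_n
    prod *= x              # prod = x_1 * x_2 * ... * x_n
    a_n = n * prod

fixed_point = x            # should approximate the unique root of cos(t) = t
```

After 200 iterations the iterate agrees with the fixed point ($\approx 0.739$) to many digits, while $a_n$ is vanishingly small, consistent with $a_n \to 0$.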
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Is $i$ a positive number? This question came to mind when I saw a way of finding the value of $i^i$ which included transforming it to $e^{i\ln(i)}$ and taking $\ln(i)$ as $i\frac{\pi}{2}$. I understand how we can get $\ln(i)=i\frac{\pi}{2}$ geometrically, but this got me thinking about whether $i$ is a positive number, as the first thing I learned about $\ln$ back in high school is that it can't take negative numbers.
The sign can be defined as: $$ sgn(x)=\begin{cases} \frac{x}{|x|}, & \text{if $x \neq 0$} \\ 0, & \text{if $x = 0$} \end{cases} $$ If you extend this to complex numbers, the sign can be any complex unit or zero. In that way $sgn(i)=i\neq0$, so it's not positive. We usually don't define positive and negative numbers for complex numbers. Some literature defines the sign in the way shown above, and then "$z$ is positive" implies that $\Im(z)=0$.
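For what it's worth, this extended sign translates directly into code (a small Python sketch using the built-in complex type):

```python
def sgn(z):
    """Extended sign: z/|z| for z != 0, else 0 (works for complex z too)."""
    return z / abs(z) if z != 0 else 0

# sgn(i) = i, which is a complex unit but not a positive real,
# so i is not positive under this definition.
```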
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 5 }
Number of quarters, dimes and nickels. I'm solving a problem which states the following: Mary has $3.00 in nickels, dimes, and quarters. If she has twice as many dimes as quarters and five more nickels than dimes, how many coins of each type does she have? I transcribed the relationships as such; $2q=d, n=d+5$ And I need to solve the following: $ 3 = 0.01n+0.1d+0.25q \therefore\\ 3=0.01(d+5)+0.1(2q)+0.25q \therefore\\ 3=0.01d+0.45q+0.05 \therefore\\ 3=0.01(2q)+0.45q+0.05\therefore\\ 3=0.47q+0.05\therefore\\ 2.95 = 0.47q \therefore\\ 6.4 = q $ I've checked the answer online, but I'm interested in why this method failed. I'm not aware of any rules I broke here.
One nickel is $5$ cents. The equation is $$3=0.0\color{red}5n+0.1d+0.25q$$ $ 3 = 0.05n+0.1d+0.25q \therefore\\ 3=0.05(d+5)+0.1(2q)+0.25q \therefore\\ 3=0.05d+0.45q+0.25 \therefore\\ 3=0.05(2q)+0.45q+0.25\therefore\\ 3=0.55q+0.25\therefore\\ 2.75 = 0.55q \therefore\\ 5= q $
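A quick way to double-check the corrected system is to solve it in exact arithmetic, e.g. with Python's `fractions` module (working in cents here is my own choice, to avoid floating point):

```python
from fractions import Fraction

# In cents: 5n + 10d + 25q = 300, with d = 2q and n = d + 5.
# Substituting: 5(2q + 5) + 10(2q) + 25q = 300  =>  55q + 25 = 300
q = Fraction(300 - 25, 55)
d = 2 * q
n = d + 5
```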
{ "language": "en", "url": "https://math.stackexchange.com/questions/3670948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it possible to find an expression for $\frac{d^n}{dx^n}e^{-x^2}$? I am trying to find a general form to derivatives of the function $e^{-x^2}$. I tried to do it by finding a pattern for the first derivatives but with no success. Any tip is welcome.
These are called Hermite polynomials, in one form or another. Let me show you how to "discover them". Let $p_n$ be your $n$th derivative and $f=p_0$. To simplify matters I will instead consider $f= e^{x^2/2}$ (you can roll back by replacing $x$ by $\sqrt{2}xi$). Then $p_1 = x p_0$. Assume by induction that $p_n$ is of the form $q_n p_0$ for some polynomial $q_n$. Then you get that the same holds for $p_{n+1}$, since then: $$p_{n+1} = q_n'p_0 + xq_n p_0$$ This gives you a recurrence $q_{n+1} = q_n' +x q_n$. Note that $q_0=1$ and $q_1=x$, and in fact $q_n$ is obtained by iterating the operator $\partial +x$ on $1$, that is $$q_n = (\partial+x)^n q_0.$$ One would be tempted to use the binomial theorem here, but $\partial=d/dx$ and $x$ do not commute (!), in fact we have that for any polynomial $p$: $\partial(xp)- x(\partial p) = p+xp'-xp' = p$ i.e. $\partial x-x \partial$ acts as the identity: we have discovered the defining relation of the Weyl algebra. This allows you to organize your computation into something that looks like a sum of terms $x^n \partial^k$ as in the binomial theorem, but now something curious happens: applying the rule that $ \partial x=x \partial+1$ many times you get that $$\partial^Nx = x \partial^N+N \partial^{N-1}.$$ And here comes the last (not so fun) part that you can prove by induction on $n$: it turns out that $$(x+\partial)^n = \text{usual binomial sum} + \sum_{2a+b+c=n} \frac{n!}{a!b!c!} 2^{-a}x^b\partial ^c$$ where we only require that $c>0$. This tells us how to write down $(x+\partial)^n$ in terms of the simpler operators $x^n\partial^m$, and we know how these act on $1$. So, for example, from $$(x+\partial)^2 = x^2+2x\partial +\partial^2 \underline{+1}$$ you get $q_2 = x^2+1$. From $$q_3 = (x+\partial)^3 = x^3+3x^2\partial+3x\partial^2+\partial^3 \underline{+ 3x+ 3\partial}$$ you get $q_3 = x^3+3x$.
And now we can finish the computation of $q_n$: since $q_0= 1$ and since $\partial(1)=0$, we only need to consider terms in the sum above where there is no $\partial$ appearing (i.e. only take $x^n$ in the binomial sum and take only the terms with $c=0$ in the second one. This gives $$q_n = x^n+ \sum_{2a+b=n} \frac{(2a+b)!}{a!b!} 2^{-a}x^b.$$ You can obtain a version of the standard Hermite polynomials by changing $x$ to $2x$, so that the leading term is $2^n$ and this reads instead $$r_n = 2^nx^n+ \sum_{2a+b=n} \frac{(2a+b)!}{a!b!} 2^{a+b}x^b.$$ The takeaway you can get here is that understanding a simple relation such as $$\partial x-x\partial =1$$ can give you a nice description of a concrete problem. In this case what we have done is more or less pin down how the first Weyl algebra acts on the vector space of polynomials in one variable. A reference for the discrepancy between $(x+\partial)^n$ and the usual binomial sum in the Weyl algebra is this paper.
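The recurrence $q_{n+1} = q_n' + x q_n$ is also easy to run mechanically, which gives an independent check of the small cases above. A minimal Python sketch, representing a polynomial as its list of coefficients (index = power of $x$):

```python
def step(q):
    """Apply the operator (d/dx + x) to a polynomial q,
    where q[i] is the coefficient of x^i."""
    deriv = [i * c for i, c in enumerate(q)][1:]   # q'
    times_x = [0] + q                              # x * q
    n = max(len(deriv), len(times_x))
    deriv += [0] * (n - len(deriv))
    times_x += [0] * (n - len(times_x))
    return [a + b for a, b in zip(deriv, times_x)]

q = [1]            # q_0 = 1
qs = [q]
for _ in range(4):
    q = step(q)
    qs.append(q)
# qs[1] = x, qs[2] = x^2 + 1, qs[3] = x^3 + 3x, qs[4] = x^4 + 6x^2 + 3
```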
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Finding the derivatives using limits. What to do when you have a limit inside a limt I was trying to prove the second derivative formula using limits \begin{align}f'(x) &= \lim_{h\rightarrow 0}\dfrac{f(x+h)-f(x)}{h} \\f''(x) &= \lim_{h\rightarrow 0}\dfrac{f'(x+h)-f'(x)}{h} \\ &= \lim_{h\rightarrow 0}\dfrac{\lim_{h\rightarrow 0}\dfrac{f(x+2h)-f(x+h)}{h}-\lim_{h\rightarrow 0}\dfrac{f(x+h)-f(x)}{h}}{h} \end{align} However I'm not sure how to go about simplifying this as there is a limit inside of another limit. Does the limit inside nullify as they both tend to zero? If yes, what exactly is the logic behind this and are there any underlying assumptions?
What you have done is almost fine, but you should use different symbols for different limit operators, like $$f''(a) =\lim_{h\to 0}\dfrac{\lim_{k\to 0}\dfrac{f(a+h+k)-f(a+h)}{k}-\lim_{l\to 0}\dfrac{f(a+l)-f(a)}{l}}{h}$$ However, if you are trying to achieve a definition of $f''$ solely in terms of $f$ as a sort of complicated limit, then that's not going to work. Why? Let's understand the requirements of the definition of the derivative as a limit. As a prerequisite we must have $f$ defined in a certain neighborhood of $a$ so that the expression under the limit used for $f'(a)$ makes sense. Thus in order to define $f''(a)$ we must have $f'$ defined in some neighborhood of $a$. Now that requirement cannot be met using any definition like $$f''(a) =\lim_{h\to 0}g(a,h)$$ because we are essentially dealing with just the neighborhood of $a$, and it cannot guarantee us anything about the behavior of a function at points other than $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A property about uniform convergence Suppose a function series $\sum_{n=1}^{\infty}u_n(x)$ converges on $(a,b)$ and every $u_n(x)$ is continuous on the interval $(a,b)$,but this series diverges at $x=a$ or $x=b$, can we deduce that $\sum_{n=1}^{\infty}u_n(x)$ does not converge uniformly on $(a,b)$? For example, $\sum_{n=1}^{\infty}x^n$ converges on the interval $(0,1)$, but not uniformly on the interval, for it diverges at $x=1$. I conjecture it is true,but I am struggling to give the proof. Any correction or improvement is welcomed! Thanks in advance.
Consider $u_n(x) = 0$ on $(a, b)$, and $u_n(x) = 1$ for $x=a$ or $x = b$. You get uniform convergence on $(a, b)$ but divergence at the endpoints. Continuity Edit: Let $s_n$ be the partial sum of the $u_i$'s up to term $n$. Suppose that the sequence does not converge at $a$, that is, $$\exists\epsilon > 0, \forall n_0 \exists n,m > n_0, |s_n(a) - s_m(a)| > \epsilon \tag{*} \label{*}$$ furthermore, suppose that $$ \forall n\forall\epsilon'> 0 \ \exists\delta > 0\ | x\in(a, a + \delta) \implies |s_n(x) - s_n(a)| \leq \epsilon' \tag{**} \label{**}$$ We will show that the sequence $s_n$ is not uniformly Cauchy. Take $\epsilon$ given by $\eqref{*}$. Pick some $n_0$, take the given $n, m$ by $\eqref{*}$. For each of $n, m$ invoke $\eqref{**}$ with $\epsilon' = \epsilon / 4$. Take the minimum of the $\delta$'s given by \eqref{**} and an $x$ in that interval. Then, $$ |s_n(x) - s_m(x)| \geq |s_n(a) - s_m(a)| - |s_n(x) - s_n(a)| - |s_m(x) - s_m(a)| \geq \epsilon / 2$$ Therefore, $$ \exists\epsilon'' > 0, \forall n_0 \exists n,m > n_0, \exists x\in (a,b), |s_n(x) - s_m(x)| > \epsilon''$$ That is, the sequence is not uniformly Cauchy. The rest is showing that a sequence is uniformly Cauchy iff it is uniformly convergent. So the statement seems correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Motivation for considering the upper numbering of ramification groups Let $L/K$ be a finite Galois extension. We denote by $G_s$ its $s$-th ramification group. Define the Herbrand function $$\eta_{L/K}:[-1,\infty) \to [-1, \infty), \ \eta_{L/K}(s) = \int_0^s \frac{1}{(G_0:G_x)} \ dx.$$ Let $\psi_{L/K} : [-1, \infty) \to [-1, \infty)$ be its inverse. Then, introducing the upper numbering $G^t = G_{\psi(t)}$ has many uses. This is great but what is the motivation to actually consider this numbering? I'll admit that my intuition on ramification groups is still quite weak but for me it seems a bit out of the blue. Is it just because Herbrand's Theorem $G_s(L/K)H/H = G_{\eta_{L/L'}(s)}(L'/K)$ for an intermediate field $L'$ appears more natural, so one would try considering a different numbering? Is there a different motivation?
The short answer is that lower ramification groups behave well when taking subgroups, while upper ramification groups behave well when taking quotients. As a result, the upper numbering can be defined for infinite extensions. I think this is the key motivation for defining them. Indeed, if $L/K$ is an infinite extension, we can define $$\mathrm{Gal}(L/K)^u = \varprojlim_{M/K\ \mathrm{finite}, M\subset L}\mathrm{Gal}(M/K)^u.$$ This definition makes sense by Herbrand's theorem, which tells us that we get a projective system to take the limit of. In this way, the upper ramification groups are far more natural than the lower ramification groups. The lower ramification groups have the advantage that they are easier to define and are sufficient for finite extensions. They also behave well with respect to subgroups: if $L/M/K$ is a sequence of Galois extensions, then $$\mathrm{Gal}(L/M)_i = \mathrm{Gal}(L/M)\cap \mathrm{Gal}(L/K)_i.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that a quadratic cannot have 3 roots using Determinants Consider the equation $px^2 + qx + r = 0$ and let us assume that $a$, $b$ and $c$ satisfy the above equation. So, $$a^2p + aq + r = 0$$ $$b^2p + bq + r = 0 $$ $$c^2p + cq + r = 0$$ They can be represented using a matrix. \begin{bmatrix}a^2&a&1\\b^2&b&1\\c^2&c&1\end{bmatrix} From the equation, the matrix sends a point $(p, q, r)$ to the origin $(0, 0, 0)$. Now my book states the following: There are two possibilities: * *$(p,q,r)$ is at the origin *$(p,q,r)$ is some distinct point $(1)$ is not possible, as for that value of $(p,q,r)$ the equation is no longer an equation. So the latter is the case. According to the book, for $(2)$ to be true, the matrix must be singular. I don't quite get this.
$$px^2+qx+r = 0$$ Imagine that $x$ has $3$ roots $a,b,c$. Representing this in matrix form, $$\begin{bmatrix}a^2&a&1\\b^2&b&1\\c^2&c&1\end{bmatrix}\begin{bmatrix}p\\q\\r\\\end{bmatrix}=\begin{bmatrix}0\\0\\0\end{bmatrix}$$ Since the matrix product is $$\begin{bmatrix}0\\0\\0\end{bmatrix}$$ with $(p,q,r)\neq(0,0,0)$, the coefficient matrix must be singular, i.e. its determinant must be zero. For $$\begin{bmatrix}a^2&a&1\\b^2&b&1\\c^2&c&1\end{bmatrix}$$ the determinant is $$\Delta = (a-b)(a-c)(b-c)$$ so $$ (a-b)(a-c)(b-c) = 0$$ This equation means the quadratic polynomial can never have more than $2$ distinct roots: $$ (a-b)(a-c)(b-c) = 0$$ If we take $a,b$ as two of its roots and solve this equation for $c$, the result is $c=a$ or $c=b$. $$ (a-b)(a-c)(b-c) = 0$$ Likewise, if we take only $a$ as one of its roots and solve for $b$ and $c$, it again turns out that any further root coincides with one of the first two.
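If you want to convince yourself numerically, here is a quick plain-Python check of the determinant identity on arbitrary sample values:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists (cofactor expansion)."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

a, b, c = 2, 3, 5
M = [[a * a, a, 1], [b * b, b, 1], [c * c, c, 1]]
delta = det3(M)   # should equal (a-b)(a-c)(b-c)
```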
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $f$ be (Riemann) integrable over $[0,1]$. Show $\sum_{n=0}^{\infty}\int_{0}^{x^n}f(t)dt$ is continuous on $(0,1)$ Problem: Let $f$ be (Riemann) integrable over $[0,1]$. Show $\sum_{n=0}^{\infty}\int_{0}^{x^n}f(t)dt$ is continuous on $(0,1)$. I know that if $f$ is (Riemann) integrable over $[0,1]$, since $x \in (0,1)$ then $x^n \in (0,1)$ also and every integral exists. Clearly the integral eventually goes to $0$ but I'm not sure how to use that. Also, I know that given an $f(t)$, I can use the epsilon-delta definition of continuity to show it's continuous, but here I'm only given that it's Riemann integrable. Would I do something like show that $\int_{0}^{x^n}f(t)dt$ is bounded, and use the epsilon-delta to show the whole thing must be continuous (as in Continuity of function consisting of an infinite series. for example)? I'm not sure how to do this rigorously though. Any help would be much appreciated!
The function $$F(x):=\int_0^x f(t)dt$$ is Lipschitz: for $0\le y\le x\le 1$ we have $|F(x)-F(y)|=\left|\int_y^x f\right|\le (x-y)\underset{[0,1]}{\text{sup}}(|f|)$. Let $K$ be its Lipschitz constant; since $F(0)=0$, this gives in particular $|F(x)|\le Kx$. Then $$\sum_{n=0}^\infty\left|\int_0^{x^n}f(t)dt\right|=\sum_{n=0}^{\infty}|F(x^n)|\le K\sum_{n=0}^\infty x^n=K\frac{1}{1-x}$$ Since the series is locally normally convergent and every term of the series is continuous, it is continuous. One may ask if the method can be pushed further, i.e. if we can try to prove that the series is continuous also at $1$: this is not the case. Take $f(x)=x$ to find a counterexample.
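To see the failure at $1$ concretely with the suggested counterexample $f(t)=t$: then $F(u)=u^2/2$ and the series becomes the geometric series $\sum_{n\ge 0} x^{2n}/2 = \frac{1}{2(1-x^2)}$, which is finite on $(0,1)$ but blows up as $x\to 1$. A quick numerical sketch:

```python
def partial_sum(x, N):
    """Partial sum of sum_{n=0}^{N} F(x**n) with f(t) = t, i.e. F(u) = u**2 / 2."""
    return sum((x ** n) ** 2 / 2 for n in range(N + 1))

s_half = partial_sum(0.5, 200)        # full series: 1 / (2 * (1 - 0.25)) = 2/3
s_near_one = partial_sum(0.999, 200)  # grows without bound as x -> 1
```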
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this an Expectation Problem? Or is it more complex? Following question is off a probability/statistics review sheet: Every day a professor leaves their home in the morning and walks to their office. Every evening they walk home. They take their umbrella with them only if it is raining. If it is raining and they do not have their umbrella with them (at their home or office), then they must walk in the rain. Suppose that it rains with probability $\frac{1}{3}$ at the beginning of any given trip independently of all other trips. Show that $\frac{63}{16}$ $\approx$ 4 is the expected number of days until the professor must walk in the rain without their umbrella (either that morning or evening), supposing that initially they have their umbrella with them at home. Here is a hint I was given: Let $μ$ be the expected number of days supposing they initially have their umbrella with them at home, and let $v$ be the expected number of days supposing that they do not. Explain why $$μ = \frac{2}{9} + \frac{5}{9}(1-μ) + \frac{2}{9}$$ and then, similarly, find an equation for $v$ in terms of $μ$. Use these equations to solve for $μ$. My thoughts: At first glance to me this looks like it could be done with the expectation formula, but given the details, I'm not sure how to structure a daily $\frac{1}{3}$ probability of it raining up until the professor doesn't have an umbrella on hand. Would you need to keep track of where the umbrella would be based on probability of it raining on trips on different days? I'm guessing that since $v$ and $μ$ are each a calculation of amount of expected days, one with and one without the umbrella, maybe the sum of these expectations would total 1, since these are the only two states the professor could start in? I'm also guessing this relationship would be how we calculate $v$ in terms of $μ$.
Follow the hint. In the starting case, with probability $1/3$ it rains, the professor takes the umbrella, and with probability $2/3$, it does not rain when it is time for the professor to return home. So with probability $2/9$ the professor has not walked in the rain but the umbrella is at the office. Similarly, with probability $1/9$, it has rained both to and from work and the umbrella has made a round trip. With probability $2/9$, it did not rain on the way to work but it rained on the way back from work, meaning the professor got wet. With probability $4/9$, it did not rain either way, and the professor is back home. We can summarize this in a table for the round trip: $$\begin{array}{ccccc} \text{Umbrella} & \text{Rain} & \text{Got wet} & \text{Probability} \\ \hline \text{Office} & \text{Yes, No} & \text{No} & 2/9 \\ \text{Home} & \text{Yes, Yes} & \text{No} & 1/9 \\ \text{Home} & \text{No, Yes} & \text{Yes} & 2/9 \\ \text{Home} & \text{No, No} & \text{No} & 4/9 \\ \end{array}$$ Therefore, with probability $5/9$, we have returned to the initial state (not wet, umbrella home), except a day has passed. Thus the expected number of additional days until getting wet is still $\mu$. With probability $2/9$, the professor got wet that day. With probability $2/9$, the professor has survived a day but now the umbrella is at the office. Since $v$ represents the expected number of days until getting wet when the professor is home but the umbrella is not, we summarize the expected number of days until getting wet is $$\mu = \frac{5}{9}(1 + \mu) + \frac{2}{9}(1) + \frac{2}{9}(1 + v).$$ Now for $v$, we suppose the professor begins the day at home but the umbrella is at the office. Then with probability $1/3$, the professor must walk in the rain to work. With probability $2/9$, the professor makes it to the office, and takes the umbrella home because it rains when it is time to leave. 
With probability $4/9$, it does not rain at all and the professor survives a day but returns to the state where the umbrella is not at home. So the expected number of days until getting wet in this case is...? I have not given the formula so that you have a chance to do the rest.
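(Spoiler for the exercise above.) If you want to check your final answer: writing the analogous equation for $v$ and solving the resulting $2\times 2$ linear system in exact arithmetic gives $\mu = 63/16$. A Python sketch — note the equation for $v$ here is my own completion of the exercise, so verify it against your own derivation:

```python
from fractions import Fraction as F

# mu = 1 + (5/9) mu + (2/9) v    (umbrella at home)
# v  = 1 + (2/9) mu + (4/9) v    (umbrella at the office; my own completion)
# Rearranged as a linear system:
#   (4/9) mu - (2/9) v = 1
#  -(2/9) mu + (5/9) v = 1
a11, a12, b1 = F(4, 9), F(-2, 9), F(1)
a21, a22, b2 = F(-2, 9), F(5, 9), F(1)
det = a11 * a22 - a12 * a21
mu = (b1 * a22 - b2 * a12) / det   # Cramer's rule
v = (a11 * b2 - a21 * b1) / det
```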
{ "language": "en", "url": "https://math.stackexchange.com/questions/3671937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the value of $a \in R$ such that $\langle x_n \rangle$ converges to a positive real number when $x_n=\frac{1}{3}\frac{4}{6}...\frac{3n-2}{3n}n^a$ Find the value of $a \in \mathbb{R}$ such that $\langle x_n \rangle$ converges to a positive real number when $x_n=\dfrac{1}{3}\dfrac{4}{6}\cdots\dfrac{3n-2}{3n}n^a$ Here is my approach. First of all, let $a_n=\dfrac{x_n}{n^a}=\dfrac{1}{3}\dfrac{4}{6}\cdots\dfrac{3n-2}{3n}$ Then, let $b_n=\dfrac{2}{4}\dfrac{5}{7}\cdots\dfrac{3n-1}{3n+1}$ and $c_n=\dfrac{3}{5}\dfrac{6}{8}\cdots\dfrac{3n}{3n+2}$ Since $0<a_n<b_n$ and $0<a_n<c_n$ $0<a_n^3<a_nb_nc_n=\dfrac{1}{3}\dfrac{2}{4}\dfrac{3}{5}\cdots\dfrac{3n-2}{3n}\dfrac{3n-1}{3n+1}\dfrac{3n}{3n+2}=\dfrac{2}{(3n+1)(3n+2)}$ Therefore, $0<x_n^3<\dfrac{2n^{3a}}{(3n+1)(3n+2)}$ So, since the limit of $x_n^3$ should be greater than 0 and less than some positive real number, the limit of $\dfrac{2n^{3a}}{(3n+1)(3n+2)}$ should be neither 0 nor infinity. Therefore, 3a=2, a=2/3. Anything wrong? Or better idea?
Expand $$\log(3n-2) - \log(3n) = \log(1 - \frac23 \frac1n) = - \frac23 \frac1n + \frac49 \epsilon_n \frac1{n^2}$$ where $\epsilon_n$ is a bounded sequence. Then: $$\log x_n = -\frac23 \sum_{k = 1}^n \frac1k + a \log n + \frac49 \sum_{k = 1}^n \epsilon_k \frac1{k^2}$$ The last sum converges: $$\big|\epsilon_k \frac1{k^2} \big| \leq (\max_m |\epsilon_m|) \times \frac1{k^2}$$ The harmonic series satisfies: $$\sum_{k = 1}^n \frac1k = \log n + \gamma + \epsilon^1_n$$ where $\epsilon^1_n$ converges to $0$. (to prove that, write $\log (n+1) = \sum_{k = 1}^n \log(k+1) - \log k = \sum_{k = 1}^n \log(1+\frac1k)$ and expand) Thus, $\log x_n$ converges to a finite limit iff: $$-\frac23 \sum_{k = 1}^n \frac1k + a \log n = (a - \frac23) \log n - \frac23 \gamma - \frac23 \epsilon^1_n$$ converges to a finite limit. Finally $a = 2/3$, as you expected.
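As an empirical sanity check (not a proof), one can compute $x_n$ for $a=2/3$ directly and watch it stabilise at a positive value; the closed form $1/\Gamma(1/3)$ in the comment below is an extra known fact about this product, not something the question asks for:

```python
import math

def x_n(N, a):
    """prod_{k=1}^{N} (3k-2)/(3k), scaled by N**a."""
    p = 1.0
    for k in range(1, N + 1):
        p *= (3 * k - 2) / (3 * k)
    return p * N ** a

v1 = x_n(50_000, 2 / 3)
v2 = x_n(100_000, 2 / 3)
# with a = 2/3 the values stabilise at a positive constant
# (in fact the limit is 1/Gamma(1/3) ~ 0.3733, by the Gamma-ratio asymptotics)
```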
{ "language": "en", "url": "https://math.stackexchange.com/questions/3672074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closedness of the product of two subgroups Let $G:=\text{SL}(3,\mathbb R)$, equipped with the usual subspace topology, acting on $\mathbb R^3$ by the canonical action. Consider subgroups $\Gamma:=\text{SL}(3,\mathbb Z)$ and $Q_1:=\{g\in G:ge_1=e_1\}$ where $e_1=(1,0,0)$. I wonder how to prove that the product $\Gamma Q_1$ is a closed subset of $G$. To begin with, let $\gamma_n q_n$ be a sequence in $\Gamma Q_1$ ($\gamma_n \in \Gamma$ and $q_n\in Q_1$) converging to $h\in G$. Consider $\gamma_n q_n e_1=\gamma_n e_1$. We can conclude from here that the first column of $\gamma_n$ will stabilize at some point. But I don't know how to proceed from here. Any other approaches will also be appreciated!
Let $\pi$ be the projection $G\to G/Q_1$. Then $\Gamma Q_1$ is closed in $G$ iff the orbit $\Gamma\pi(1)$ is closed in $G/Q_1$. Now observe that the map $g\mapsto ge_1$ induces an identification of $G/Q_1$ with $\mathbf{R}^3\smallsetminus\{0\}$, and in this identification, $\pi(1)=e_1$. Now we see that the orbit of $e_1$ is the set of primitive elements in $\mathbf{Z}^3$, which is obviously closed. This proves the result. Now this can be translated to a more pedestrian proof. Let $g_n=\gamma_nq_n$ converge to $h$. Then $\gamma_ne_1=\gamma_nq_ne_1$ tends to $he_1$. Since $\gamma_ne_1$ converges and also belongs to the closed discrete subset $\mathbf{Z}^3$, we deduce that for $n$ large enough (as we now assume), it is constant, say equal to $\gamma e_1$ for some given $\gamma$. So $\gamma_ne_1=\gamma e_1$, which implies $\gamma^{-1}\gamma_n\in Q_1$, that is, $\gamma_n=\gamma q'_n$. So $\gamma^{-1}g_n=q'_nq_n$ converges, say to $q\in Q_1$. Thus $g_n$ converges to $\gamma q$, and hence $h=\gamma q\in \Gamma Q_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3672243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The subgroups of a cyclic group We've got $G=U(\mathbb Z/(27)\mathbb Z)=\langle 2 \rangle$ a cyclic group, and $H=\langle -8, -1 \rangle$ a subgroup of $G$. I've calculated all the subgroups of $G$. Now I have to identify $H$ with a subgroup of $G$, without calculating all the elements of $H$. So I think that I can see clearly that $H$ is equal to the subgroup $\langle 8 \rangle =\{8,10,-1,-8,-10,1\}$, but as the problem says that I can't calculate all the elements of $H$ to solve this problem, I don't know how I can justify that $H$ is equal to $\langle 8 \rangle$. How can I do it?
We have $|G|=27-9=18$, hence $2^9 = -1$. Now $H=\langle -8,-1 \rangle = \langle -2^3, 2^9\rangle = \langle 2^9\cdot 2^3, 2^9\rangle = \langle 2^{12},2^9\rangle$. Now thanks to the Bezout Lemma we have $H=\langle 2^{12},2^9\rangle = \langle 2^3\rangle = \langle 8\rangle$.
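A brute-force verification in Python (against the problem's spirit, but useful as a check), computing the subgroup generated by $\{-8,-1\}\equiv\{19,26\} \pmod{27}$ by closure under multiplication:

```python
def generated_subgroup(gens, mod):
    """Subgroup of (Z/mod)^* generated by gens: close under multiplication.
    (In a finite group, closure under products suffices to get inverses.)"""
    elems = {1} | {g % mod for g in gens}
    while True:
        new = {(a * b) % mod for a in elems for b in elems}
        if new <= elems:
            return elems
        elems |= new

H = generated_subgroup([-8, -1], 27)
K = generated_subgroup([8], 27)
```

Both come out as $\{1, 8, 10, 17, 19, 26\}$, i.e. $\{1, 8, 10, -10, -8, -1\}$ mod $27$, confirming $H = \langle 8\rangle$.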
{ "language": "en", "url": "https://math.stackexchange.com/questions/3672569", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the value of $\sum _{n=1}^{\infty }\:\frac{a}{n\left(n+a\right)}$ find the value of $\sum _{n=1}^{\infty }\:\frac{a}{n\left(n+a\right)}$ $(a>0)$ So far I can only write $\sum _{n=1}^{\infty }\:\frac{a}{n\left(n+a\right)}=a\left(\frac{1}{1}-\frac{1}{1+a}+\frac{1}{2}-\frac{1}{2+a}+\frac{1}{3}-\frac{1}{3+a}...+\frac{1}{n}-\frac{1}{n+a}\right)$ Can anyone help me? Thanks
It is $\psi (a + 1) + \gamma$, where $\psi$ is the logarithmic derivative of the gamma function and $\gamma$ is the Euler-Mascheroni constant, cf. http://dlmf.nist.gov/5.7.E6 and http://dlmf.nist.gov/5.5.E2 Using this fact, it follows for example that $$ \log a + \gamma + \frac{1}{{2a}} - \frac{1}{{12a^2 }} < \sum\limits_{n = 1}^\infty {\frac{a}{{n(n + a)}}} < \log a + \gamma + \frac{1}{{2a}} $$ for all $a>0$ (see http://dlmf.nist.gov/5.11.ii). Also, for $-1<a<1$, it holds that $$ \sum\limits_{n = 1}^\infty {\frac{a}{{n(n + a)}}} = \sum\limits_{k = 2}^\infty {( - 1)^k \zeta (k)a^{k - 1} } , $$ where $\zeta$ denotes Riemann's zeta function (see http://dlmf.nist.gov/5.7.E4).
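As a quick consistency check, for $a=1$ the sum telescopes to $1$, which matches $\psi(2)+\gamma=(1-\gamma)+\gamma=1$ (using the known value $\psi(2)=1-\gamma$):

```python
N = 100_000
a = 1
partial = sum(a / (n * (n + a)) for n in range(1, N + 1))
# telescoping: 1/(n(n+1)) = 1/n - 1/(n+1), so the partial sum is 1 - 1/(N+1)
```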
{ "language": "en", "url": "https://math.stackexchange.com/questions/3672902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Violation of Faithfulness in Structural Causal Models While reading "Elements of Causal Inference" by Peters et al. I stumbled over something I'm not sure about regarding "Example 6.34 (Violations of Faithfulness)". A link to the book can be found here. There, a linear Gaussian SCM is defined as $X:=N_X$, $Y:=aN_X+N_Y$, $Z:=bY+cX+N_Z$ where $N_X \sim \mathcal{N}(0, \sigma_X^2)$, $N_Y \sim \mathcal{N}(0, \sigma_Y^2)$ and $N_Z \sim \mathcal{N}(0, \sigma_Z^2)$. The corresponding DAG for the SCM is given by $\mathcal{G}_1$. Now the book states that if the SCM is parameterized with $a\cdot b -c = 0$, $\mathcal{G}_1$ is not faithful to the distribution $P$ induced by said SCM. That's still clear to me. But now the book states that it is easy to construct an SCM inducing the same distribution, but with the DAG depicted in $\mathcal{G}_2$. I'm having trouble constructing this SCM. With the constraint $a\cdot b -c = 0$, the distribution we're looking for is given by $X=N_X$, $Y=aN_X + N_Y$ and $Z=bN_Y + N_Z$. I tried to use the properties of Gaussian distributions to define the SCM for $\mathcal{G}_2$ via $X' = N_X$, $Z'=bN_Y + N_Z$ and $Y'= \tilde{a}X' + \tilde{b}Z'$ where $\tilde{a} = a$ and $\tilde{b} = \sqrt{\frac{\sigma_Y^2}{b\sigma_Y^2 + \sigma_Z^2}} $. Now with this, the parameters of the Gaussians should be equal in both SCMs. Even the (in)dependencies work out as far as I can tell. Yet, I doubt they have the same joint distributions, as knowing the values of $Z'$ and $X'$ completely determines the value of $Y'$, which is not the case for the first SCM. Where am I going wrong?
Your mistake is in the definition of $Y'$. Actually, $Y'= \tilde{a}X' + \tilde{b}Z' + N_{Y'}$, with $N_{Y'} \sim \mathcal{N}(0, \sigma_{Y'}^2)$. In order to determine $\tilde{a}, \tilde{b}$ and $\sigma_{Y'}^2$, notice that: * *the joint distribution of $X',Y',Z'$ is Gaussian (since the independent $X'$ and $Z'$ are normally distributed and also the conditional $Y'|X',Z'$ is Gaussian with mean a linear function of $X',Z'$) *the joint distribution of $X,Y,Z$ is Gaussian (using the same type of arguments) *the two distributions must have the same parameters In particular: * *$Cov(X,Y)=Cov(X',Y') \Rightarrow ... \Rightarrow \tilde{a}=a$ *$Var(Y)=Var(Y') \Rightarrow ... \Rightarrow \tilde{b}^2(b^2\sigma_Y^2+\sigma_{Z}^2) +\sigma_{Y'}^2 = \sigma_Y^2$ *$Cov(Y,Z)=Cov(Y',Z') \Rightarrow ... \Rightarrow \tilde{b}(b^2\sigma_Y^2+\sigma_Z^2) =b\sigma_Y^2$ After some calculations we get $\tilde{b}=\frac{b\sigma_Y^2}{b^2\sigma_Y^2 + \sigma_Z^2}$ and $\sigma_{Y'}^2=\frac{\sigma_Y^2\sigma_Z^2}{b^2\sigma_Y^2 + \sigma_Z^2}$.
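One can verify that the two SCMs induce the same joint Gaussian by comparing all pairwise covariances. Here is a Python sketch in exact arithmetic: each variable is represented by its coefficient vector on the independent noises $(N_X, N_Y, N_Z, N_{Y'})$, and the parameter values are arbitrary samples:

```python
from fractions import Fraction as F

# arbitrary sample parameters; c = a*b is the faithfulness-violating choice
a, b = F(1), F(2)
vX, vY, vZ = F(1), F(3), F(2)          # sigma_X^2, sigma_Y^2, sigma_Z^2

# derived parameters of the second SCM, from the answer above
bt = b * vY / (b * b * vY + vZ)        # b-tilde
vYp = vY * vZ / (b * b * vY + vZ)      # sigma_{Y'}^2

noise_var = [vX, vY, vZ, vYp]          # variances of (N_X, N_Y, N_Z, N_Y')

def cov(u, w):
    """Covariance of two variables given as coefficient vectors on the noises."""
    return sum(cu * cw * s2 for cu, cw, s2 in zip(u, w, noise_var))

# SCM 1 (graph G1 with a*b - c = 0): X = N_X, Y = a N_X + N_Y, Z = b N_Y + N_Z
X, Y, Z = [F(1), 0, 0, 0], [a, F(1), 0, 0], [0, b, F(1), 0]

# SCM 2 (graph G2): X' = N_X, Z' = b N_Y + N_Z, Y' = a X' + bt Z' + N_Y'
Xp, Zp = [F(1), 0, 0, 0], [0, b, F(1), 0]
Yp = [a, bt * b, bt, F(1)]
```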
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $\lim_{N\rightarrow \infty} \sum_{n = 1}^{N} \frac{1}{(N+1) \ln (N+1) - n \ln n } = 1$? Question: Find the limit \begin{equation} A = \lim_{N\rightarrow \infty} \sum_{n = 1}^{N} \frac{1}{(N+1) \ln (N+1) - n \ln n } \end{equation} The series originated from the asymptotic analysis in this question. I can show that it converges. Numerical evaluation in Mathematica suggests that it is close to $1$. I am just curious, is it possible to prove one of the following? * *$A > 1$ *$A = 1$ *$A < 1$ Perhaps one can think about the integral \begin{equation} \int_0^1 \frac{1}{( 1 + \frac{1}{N} ) \ln (N+1) - x \ln (x N) } dx \end{equation} Update 1: Let $A_N = \lim_{N\rightarrow \infty} \sum_{n = 1}^{N} \frac{1}{(N+1) \ln (N+1) - n \ln n }$, numerical evaluations up to $10,000$ shows $A_N < 1$ We can also plot the difference $1 - A_N$ as a function of $N$. To check the estimation $1- A_N \sim 0.3 / \ln N $ by one of the comments, we plot $( 1- A_N) \ln N$ update 2 Following Crostul's answer, I did an exercise to prove that the integral is also equal to $1$ in the limit $N \rightarrow \infty$ Relation between $A_N$ and the integral: \begin{equation} A_N = \sum_{n=1}^{N} \frac{1}{N} \frac{1}{ (1 + \frac{1}{N}) \ln ( N+ 1 ) - \frac{n}{N} \ln( \frac{n}{N} N ) } \end{equation} so to find $A_{N\rightarrow \infty}$, we may look at the limit \begin{equation} \lim_{N \rightarrow \infty} \int_0^1 \frac{1}{( 1 + \frac{1}{N} ) \ln (N+1) - x \ln (x N) } dx \end{equation} We can simplify the denominator \begin{equation} ( 1 + \frac{1}{N} ) \ln ( N+1 ) - x \ln ( xN ) = \ln N ( 1 + \frac{b_N}{N} -x - \frac{x \ln x}{\ln N} ) \end{equation} where \begin{equation} b_N = 1 + ( N + 1 )\frac{\ln (1 + \frac{1}{N})}{\ln N } = 1 + \frac{1}{\ln N} + o(\frac{1}{N} ) \end{equation} So we study \begin{equation} I_N = \frac{1}{\ln N} \int_0^1 \frac{1}{ 1 + \frac{b_N}{N} - x - \frac{x \ln x}{\ln N} } dx \end{equation} Now use Crostul's relaxing trick Define $f(x) = x + \frac{x \ln x}{ \ln N}$, the 
denominator is $f(1 + \frac{1}{N} ) - f(x)$. On one hand we have \begin{equation} f(1 + \frac{1}{N} ) - f(x) \ge f(1 + \frac{1}{N} ) - x \ge 1 + \frac{1}{N} - x \end{equation} where the second inequality holds for large enough $N$. On the other hand, by the mean value theorem \begin{equation} \frac{f( 1 + \frac{1}{N} ) - f(x) }{1 + \frac{1}{N} - x} = f'( y) \le f'( 1 + \frac{1}{N} ) = 1 + \frac{1 + \ln (1 + \frac{1}{N})}{ \ln N } \end{equation} Hence \begin{equation} \frac{1}{1 + \frac{1 + \ln (1 + \frac{1}{N})}{ \ln N }} \int_0^1 \frac{1}{1 + \frac{1}{N } - x } dx \le I_N \ln N \le \int_0^1 \frac{1}{1 + \frac{1}{N } - x } dx \end{equation} Both integrals on the left and right equal $\ln (N+1)$. Therefore \begin{equation} \frac{1}{1 + \frac{1 + \ln (1 + \frac{1}{N})}{ \ln N }} \cdot \frac{\ln (N+1)}{\ln N} \le I_N \le \frac{\ln (N+1)}{\ln N} \end{equation} Taking the limit $N\rightarrow\infty$, we have $\lim_{N\rightarrow \infty } I_N = 1$.
You are right: the limit is $1$. Here is the full proof: For all $n \in \{1, \dots N \}$ we have the following inequality: $$(N+1) \log (N+1)- n \log n \ge (N+1) \log (N+1)- n \log (N+1) =\\ = \log(N+1) (N+1-n)$$ Thus $$\sum_{n=1}^N \frac{1}{(N+1)\log(N+1)-n \log n} \le \sum_{n=1}^N \frac{1}{\log(N+1) (N+1-n)} = \frac{H_{N}}{\log(N+1)}$$ where $H_N$ denotes the $N$-th harmonic number. Since $$\lim_{N \to + \infty} \frac{H_N}{\log(N+1)} =1$$ we have the estimate $$\limsup_{N \to + \infty} \sum_{n=1}^N \frac{1}{(N+1)\log(N+1)-n \log n} \le 1$$ in other words, if the limit exists, it is at most $1$. Proving the other inequality is harder. Denote by $f(x)=x \log x$. We can compute its derivative $$f'(x)= \log x +1$$ By the Mean Value Theorem, for all $n \in \{1, \dots N \}$ we have $$\frac{(N+1)\log(N+1)-n \log n}{N+1-n}= \frac{f(N+1)-f(n)}{(N+1)-n} = f'(c_n) =\log c_n +1 \le \log(N+1)+1$$ where $c_n$ is some real number in the interval $(n, N+1)$. Thus we have the estimate $$\sum_{n=1}^N \frac{1}{(N+1)\log(N+1)-n \log n} \ge \sum_{n=1}^N \frac{1}{N+1-n} \cdot \frac{1}{\log(N+1)+1} = \frac{H_{N}}{\log(N+1)+1}$$ where again $H_{N} = \sum_{m=1}^{N}\frac1m$, since $N+1-n$ runs over $1,\dots,N$. Since $$\lim_{N \to + \infty} \frac{H_{N}}{\log(N+1)+1} =1$$ we have the estimate $$\liminf_{N \to + \infty} \sum_{n=1}^N \frac{1}{(N+1)\log(N+1)-n \log n} \ge 1$$ In other words, we have proved the other inequality: and the limit is indeed $1$.
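The partial sums and the two harmonic-number bounds in this proof can be computed directly; a small Python sketch (the choice $N=10000$ mirrors the question's numerics and is otherwise arbitrary):

```python
import math

def partial_sum(N):
    # A_N = sum_{n=1}^{N} 1 / ((N+1) ln(N+1) - n ln n)
    top = (N + 1) * math.log(N + 1)
    return sum(1.0 / (top - n * math.log(n)) for n in range(1, N + 1))

def harmonic(N):
    return sum(1.0 / k for k in range(1, N + 1))

N = 10_000
A_N = partial_sum(N)
lower = harmonic(N) / (math.log(N + 1) + 1)   # bound from the mean value theorem step
upper = harmonic(N) / math.log(N + 1)         # bound from log n <= log(N+1)
```

Both bounds squeeze toward $1$ only at the speed of $H_N/\log N$, which is consistent with the slow convergence the question observed numerically.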
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
How to solve $x^{\prime\prime\prime} + 2x^{\prime\prime} - x = e^{-t}\cos(2t)$ using the operator method? I did $(D^3 + 2D - 1)x = e^{-t}\cos(2t) \Rightarrow x = \frac{1}{D^3 + 2D - 1}e^{-t}\cos(2t) \Rightarrow x= \frac{e^{-t}}{(D-1)^3 + 2(D-1) - 1}\cos(2t) \Rightarrow$ $x =\frac{e^{-t}}{D^3-3D^2+3D-1+2D-2-1}\cos(2t) = \frac{e^{-t}}{D^3-3D^2+5D-4}\cos(2t)$, but I'm having trouble with $D^3-3D^2+5D-4$. Am I doing something wrong?
$$(D^3 + 2D^2 - 1)x = e^{-t}\cos(2t) $$ $$ \implies x_p = \frac{1}{D^3 + 2D^2 - 1}e^{-t}\cos(2t) $$ You forget the power D for the operator. You have $D^3$ and $D^2$. $$\implies x_p =e^{-t} \frac{1}{(D-1)^2(D+1) - 1}\cos(2t)$$ $$\implies x_p =e^{-t} \frac{1}{(-2D-3)(D+1) - 1}\cos(2t)$$ $$ x_p =e^{-t} \frac{1}{(-5D+4)}\cos(2t)$$ $$ x_p =e^{-t} \frac{(5D+4)}{116}\cos(2t)$$ Finally: $$ \boxed { x_p =e^{-t} \frac{(-5 \sin(2t)+2 \cos(2t))}{58}}$$
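The boxed particular solution can be double-checked without redoing the operator algebra: writing $x_p$ as the real part of a complex exponential, applying $P(D)=D^3+2D^2-1$ reduces to evaluating $P$ at $-1+2i$. A sketch in Python (the finite-difference step $h$ is an arbitrary choice):

```python
import math

# Characteristic polynomial of the corrected operator D^3 + 2D^2 - 1
P = lambda z: z**3 + 2*z**2 - 1

# x_p = e^{-t}(2cos2t - 5sin2t)/58 = Re[(2+5i) e^{(-1+2i)t}]/58, so
# P(D) x_p = Re[(2+5i) P(-1+2i) e^{(-1+2i)t}]/58; the ODE holds iff
# (2+5i) P(-1+2i) = 58.
coeff = (2 + 5j) * P(-1 + 2j)

def x_p(t):
    return math.exp(-t) * (-5*math.sin(2*t) + 2*math.cos(2*t)) / 58

def residual(t, h=1e-3):
    # central finite-difference check of x''' + 2x'' - x - e^{-t}cos(2t)
    x3 = (x_p(t + 2*h) - 2*x_p(t + h) + 2*x_p(t - h) - x_p(t - 2*h)) / (2*h**3)
    x2 = (x_p(t + h) - 2*x_p(t) + x_p(t - h)) / h**2
    return x3 + 2*x2 - x_p(t) - math.exp(-t)*math.cos(2*t)
```

The complex check is exact up to rounding, and the finite-difference residual stays at the level of the discretization error.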
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Completely metrizable subspaces and $G_\delta$ in Hausdorff spaces Theorem: Let $Y$ be a dense subset of a Hausdorff topological space $X$. If $Y$ is completely metrizable, then $Y$ is a $G_\delta$ set in $X$. This is explained in detail here. The idea is to take the set of points $a\in X$ for which one can find neighborhoods whose trace on $Y$ has arbitrarily small diameter. The set of such points is a $G_\delta$ containing $Y$ and by completeness such $a$ must belong to $Y$, so $Y$ itself is a $G_\delta$ in $X$. Note that $X$ is not even required to be metrizable here. An instructive example is the Niemytzki plane $X$ (not metrizable because it is separable but not second countable). Take $Y = \{(x,y): y>0\}$. As a subspace of $X$, $Y$ has the usual Euclidean topology. The Euclidean distance is not complete on $Y$, but $Y$ is completely metrizable by the following equivalent metric: for $z_1=(x_1,y_1)$, $z_2=(x_2,y_2)$ as complex numbers take $$d(z_1,z_2)=|z_1-z_2|+|\frac{1}{y_1}-\frac{1}{y_2}|$$ (Cauchy sequences in $(Y,d)$ converge in $Y$ because the term $\frac{1}{y}$ keeps the points away from the boundary.) And indeed, $Y$ is a $G_\delta$ in $X$ because it's open in $X$. (1) Are there any good examples where $Y$ (dense) is not open in $X$? (2) If in the theorem above we relax the assumption that $Y$ be dense in $X$, all we can conclude is that $Y$ is a $G_\delta$ in its closure $\overline{Y}$. Now if $X$ is a metric space, any closed subset is a $G_\delta$, so $Y$ would be a $G_\delta$ of a $G_\delta$ and therefore $Y$ would be a $G_\delta$ in $X$. Are there any good examples of a (non-metrizable) Hausdorff space $X$ with a completely metrizable subspace $Y$ that is not a $G_\delta$ in $X$?
A very natural example where $Y$ is not open in $X$ is $X=\mathbb{R}$, $Y=\mathbb{R}\setminus\mathbb{Q}$. Here it is not obvious that $Y$ actually is completely metrizable; one way to prove it is to use continued fractions to show that $Y$ is actually homeomorphic to $\mathbb{N}^{\mathbb{N}}$. For a very simple example where $Y$ is not dense and also not $G_\delta$ in $X$, let $X=[0,1]^I$ for an uncountable set $I$ and let $Y$ be a singleton. Then $Y$ is trivially completely metrizable, but it is not $G_\delta$ since any intersection of fewer than $|I|$ basic open neighborhoods of a point of $X$ is still unconstrained on some coordinates.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove there is such a triple The numbers from $1$ to $3n$ are colored with three colors so that each color is used for exactly $n$ of them. Prove that there is a triple $a,b,c$ of three different colors such that $$a+b=c$$ I started by assuming various values for the minimum of each color, but didn't come to any conclusions.
Note that this problem was initially asked and solved by Alekseev and Savchev, in the Kvant journal, 4:23, problem M1040. Let $A,B,C$ be the three monochromatic subsets of $[3n]$. Without loss of generality let $1,\ldots,k-1$ be in $A$ (i.e. the first $k-1$ integers are in $A$, with $k-1\geq1$), and let $k\in B$. We call three numbers a good triple if they satisfy your condition. Suppose that there is no good triple. Let $a\in C$ be any number. Note that $a-1\not\in B$ as otherwise $(1,a-1,a)$ would be a good triple. Suppose that $a-1\in C$, and consider * *The integer $a-k$. If it is in $A$ then $(a-k,k,a)$ would be a good triple. If it is in $B$, then $(k-1,a-k,a-1)$ would be a good triple. Therefore $a-k \in C$. *The integer $a-k-1$. If it is in $A$ then $(a-k-1,k,a-1)$ would be a good triple. If it is in $B$, then $(1,a-k-1,a-k)$ would be a good triple. Therefore $a-k-1 \in C$. *The integer $a-2k$. If it is in $A$ then $(a-2k,k,a-k)$ would be a good triple. If it is in $B$, then $(k-1,a-2k,a-k-1)$ would be a good triple. Therefore $a-2k \in C$. *... Repeating this argument we conclude that all integers of the form $a-ik$ and $a-ik-1$ ($i=0,1,\ldots$) are in $C$. But note that because $a>k$, there exists some $i$ such that $ik< a \leq (i+1)k$ and therefore such that $1\leq a-ik \leq k$. And we know that this number is either in $B$ (if it is $k$) or in $A$ (if it is $<k$). Hence a contradiction, and $$a-1\in A$$ Therefore we proved that $\forall a\in C, \ a-1 \in A$; in other words, $a\mapsto a-1$ maps $C$ injectively into $A$. But note that $k-1\in A$ while $k\in B$, so $k-1$ is an element of $A$ that is not of the form $a-1$ with $a\in C$. Therefore $\vert A\vert > \vert C\vert$, a contradiction, and there must be at least one good triple.
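For small $n$ the statement (reading a good triple, as in the proof above, as $a+b=c$ with $a,b,c$ of three pairwise distinct colors) can be confirmed by exhaustive search; a Python sketch:

```python
from itertools import permutations

def has_good_triple(coloring):
    # coloring[i] is the color of the number i+1
    m = len(coloring)
    for a in range(1, m + 1):
        for b in range(a + 1, m - a + 1):
            c = a + b
            if len({coloring[a-1], coloring[b-1], coloring[c-1]}) == 3:
                return True
    return False

def check_all(n):
    # every balanced 3-coloring of {1, ..., 3n} must contain a good triple
    base = 'A' * n + 'B' * n + 'C' * n
    return all(has_good_triple(col) for col in set(permutations(base)))
```

For $n=3$ this already tests all $9!/(3!)^3 = 1680$ essentially different colorings.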
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Theorem 29.1 in Munkres's Topology Could someone please explain the highlighted sentence in this proof? I understand that $C$ is contained in $X$, but I don't understand why that implies it is a compact subspace of $X$ given that it is a compact subspace of $Y$. Thanks in advance.
If $X$ is a subspace of $Y$, the open sets of $X$ are of the form $X\cap U$ where $U$ is an open set in $Y$. Therefore any open covering of $C$ as a subset of $X$ is of the form $\{X\cap U_i\}_{i \in I}$. But then $\{U_i\}_{i \in I}$ is an open covering of $C$ in $Y$, so there is a finite subcovering $\{U_{i_1},\dots,U_{i_m}\}$, and the corresponding sets $X\cap U_{i_1},\dots,X\cap U_{i_m}$ form a finite subcovering of $C$ in $X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3673902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Lebesgue integral on interval [a,b] Suppose $f$ and $g$ are nonnegative real functions defined on the interval $[a,b]$, with $f$, $g$ $\in L^1([a,b])$. Suppose there is a decreasing sequence of measurable sets $...\subset A_2\subset A_{1}$, with $A_n \subset [a,b]$ for all $n$, on which $\int_{A_n}f d\lambda = \int_{A_n}g d\lambda $. Prove that for $A = \bigcap_i A_i$ we have $\int_A f d\lambda = \int_A g d\lambda$. Sorry for the elementary nature of the question; my attempt is to use the DCT: since $f\mathbb{1}_{A_n} \le f$ and $f \in L^1$, and since $f\mathbb{1}_{A_n} \to f\mathbb{1}_A $, we get $\int_{A_n} f d\lambda \to \int_A f d\lambda$ (and likewise for $g$), so $\int_A f d\lambda = \int_A g d\lambda$, right?
Since $\lambda$ is finite on $[a,b]$ and $A_n \downarrow A$, continuity from above gives $$\lim_{n \to \infty}\lambda(A_n \setminus A)=0$$ Since $f-g\in L^1$, absolute continuity of the integral then gives $\int_{A_n\setminus A} (f-g)d\lambda \to 0$, and hence $$\int_A (f-g)d\lambda= \lim_{n \to \infty}\left(\int_{A_n} (f-g)d\lambda - \int_{A_n\setminus A} (f-g)d\lambda\right)=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3674034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Polynomial $x^3-2x^2-3x-4=0$ Let $\alpha,\beta,\gamma$ be three distinct roots of the polynomial $x^3-2x^2-3x-4=0$. Then find $$\frac{\alpha^6-\beta^6}{\alpha-\beta}+\frac{\beta^6-\gamma^6}{\beta-\gamma}+\frac{\gamma^6-\alpha^6}{\gamma-\alpha}.$$ I tried to solve with Vieta's theorem. We have $$\begin{align} \alpha+\beta+\gamma &= 2, \\ \alpha\beta+\beta\gamma+\gamma\alpha &= -3, \\ \alpha\beta\gamma &= 4. \end{align}$$ For example, $\alpha^2+\beta^2+\gamma^2=(\alpha+\beta+\gamma)^2-2(\alpha\beta+\beta\gamma+\gamma\alpha)=10$ and similarly, we can find $\alpha^3+\beta^3+\gamma^3$... But this leads to a very long and messy solution. Can anyone help me?
The final answer is $608$. Each term is symmetric in its two variables: $$\frac{x^6-y^6}{x-y}=x^5+x^4y+x^3y^2+x^2y^3+xy^4+y^5=\sum_{i+j=5}x^iy^j.$$ Writing $p_k=\alpha^k+\beta^k+\gamma^k$ and summing over the three pairs, the exponent pairs $(0,5)$, $(1,4)$, $(2,3)$ contribute $2p_5$, $p_1p_4-p_5$ and $p_2p_3-p_5$ respectively (using $\sum_{\text{pairs}}(x^iy^j+x^jy^i)=p_ip_j-p_{i+j}$ for $i\neq j$), so the whole sum equals $$p_1p_4+p_2p_3.$$ By Vieta's theorem $e_1=2$, $e_2=-3$, $e_3=4$, and Newton's identities give $p_1=2$, $p_2=10$, $p_3=38$, $p_4=114$. Hence the answer is $2\cdot 114+10\cdot 38=228+380=608$.
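The value $608$ can be cross-checked numerically: the sketch below computes the power sums exactly via Newton's recurrence and also evaluates the cyclic sum from approximate roots found with a Durand-Kerner iteration (the starting points and iteration count are arbitrary choices):

```python
# Exact power sums p_k for x^3 - 2x^2 - 3x - 4 via Newton's identities:
# p_k = e1*p_{k-1} - e2*p_{k-2} + e3*p_{k-3} for k >= 3, with p0 = 3.
e1, e2, e3 = 2, -3, 4
p = [3, e1, e1*e1 - 2*e2]                 # p0, p1, p2
for k in range(3, 6):
    p.append(e1*p[k-1] - e2*p[k-2] + e3*p[k-3])
T_exact = p[1]*p[4] + p[2]*p[3]

# Numerical cross-check from the roots themselves (Durand-Kerner iteration).
def _prod(vals):
    out = 1
    for v in vals:
        out *= v
    return out

def cubic_roots():
    f = lambda z: z**3 - 2*z**2 - 3*z - 4
    zs = [complex(0.4, 0.9)**k for k in range(3)]
    for _ in range(100):
        zs = [z - f(z) / _prod(z - w for w in zs if w != z) for z in zs]
    return zs

a, b, c = cubic_roots()
q = lambda x, y: (x**6 - y**6) / (x - y)
T_num = q(a, b) + q(b, c) + q(c, a)
```

The symbolic and numeric routes agree, which also confirms the intermediate power sums $p_3=38$, $p_4=114$.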
{ "language": "en", "url": "https://math.stackexchange.com/questions/3674178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Follow from $d(X_n, Y_n) \rightarrow 0$ in probability that $\log(n)f_n(X_n) = 0 \implies \log(n)f_n(Y_n) = 0$ Let's consider two sequences of random variables $X_n, Y_n$ with values in a metric space $(M, d)$ and $d(X_n, Y_n) \stackrel{\mathbb{P}}{\longrightarrow} 0$. Now suppose there is a sequence $f_n$ of continuous functions, such that $\ln(n)f_n(X_n) \stackrel{\mathbb{P}}{\longrightarrow} 0$. Can we conclude at this point that $$\ln(n)f_n(Y_n) \stackrel{\mathbb{P}}{\rightarrow} 0\,?$$ I think it is true that $f_n(Y_n)$ will converge to zero in probability, but will it still shrink fast enough to catch up with the $\ln(n)$ term?
It is false. Consider $f_n(x) = nx$, and suppose $X_n$ and $Y_n$ are constant functions $X_n \equiv 0$, $Y_n \equiv 1/n$. You have that $d(X_n,Y_n) \to 0$, and $\ln(n)f_n(X_n) = 0$, but $\ln(n)f_n(Y_n) = \ln(n)$ that does not converge to zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3674348", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Presentation of free group I want to prove that $$<a,b \ | \ aba^{-1}b^{-1},ab^{-1}ab>\cong \mathbb{Z}\oplus\mathbb{Z}/2\mathbb{Z}$$ I have already shown $a^2=1$, but I am not sure how to show that $a\neq1$. How can I prove this? Any help would be appreciated. Thanks.
If $a=1$, then your presentation collapses to the simple presentation $\langle b\ |\ \ \rangle\cong\mathbb Z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3674722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $\lim_{x\to 0^+}\int_x^{2x}\frac{\sin t}{t^2}dt=\ln 2.$ Question: Show that $$\lim_{x\to 0^+}\int_x^{2x}\frac{\sin t}{t^2}dt=\ln 2.$$ Solution: Note that $$\lim_{x\to 0^+}\int_x^{2x}\frac{\sin t}{t^2}dt=\ln 2=\lim_{x\to 0^+}\int_x^{2x}\frac{dt}{t}\\\iff\lim_{x\to 0^+}\int_x^{2x}\frac{\sin t-t}{t^2}dt=0.$$ Thus it is sufficient to prove that $$\lim_{x\to 0^+}\int_x^{2x}\frac{\sin t-t}{t^2}dt=0.$$ Now, expanding $\sin t$ using Maclaurin's formula having remainder in Lagrange's form, we have $$\sin t=t-\frac{\cos\xi}{3!}t^3, \text{ where } \xi=\theta t\text{ and } 0<\theta<1.$$ Now $$\cos \xi\le 1\implies \frac{\cos \xi}{3!}t^3\le \frac{t^3}{3!}(\because t>0)\\\implies t-\frac{\cos \xi}{3!}t^3\ge t-\frac{t^3}{3!}\\\implies \sin t\ge t-\frac{t^3}{3!}.$$ Again since $t>0$, we have $\sin t<t$. Thus $\forall t\in[x,2x]$ and $\forall x>0,$ we have $$t-\frac{t^3}{3!}\le \sin t\le t\\\implies -\frac{t}{3!}\le \frac{\sin t-t}{t^2}\le 0\\\implies \int_x^{2x} -\frac{t}{3!}dt\le \int_x^{2x}\frac{\sin t-t}{t^2}dt\le 0\\\implies -\frac{x^2}{4}\le \int_x^{2x}\frac{\sin t-t}{t^2}dt\le 0.$$ Now since $\lim_{x\to 0^+} -\frac{x^2}{4}=0$ and $\lim_{x\to 0^+} 0=0$, thus by Sandwich Theorem we can conclude that $$\lim_{x\to 0^+} \int_x^{2x}\frac{\sin t-t}{t^2}dt=0.$$ Hence, we are done. Is this solution correct and rigorous enough? Are there any alternative solutions?
Let $ x\in\left(0,1\right] : $ \begin{aligned}\int_{x}^{2x}{\frac{\sin{t}}{t^{2}}\,\mathrm{d}t}&=\ln{2}-\int_{x}^{2x}{\frac{t-\sin{t}}{t^{2}}\,\mathrm{d}t}\\ &=\ln{2}-\int_{0}^{2x}{\frac{t-\sin{t}}{t^{2}}\,\mathrm{d}t}+\int_{0}^{x}{\frac{t-\sin{t}}{t^{2}}\,\mathrm{d}t}\\ &=\ln{2}-\frac{1}{2}\int_{0}^{x}{\frac{2u-\sin{\left(2u\right)}}{u^{2}}\,\mathrm{d}u}+\int_{0}^{x}{\frac{t-\sin{t}}{t^{2}}\,\mathrm{d}t}\\ \int_{x}^{2x}{\frac{\sin{t}}{t^{2}}\,\mathrm{d}t}&=\ln{2}-\int_{0}^{x}{\frac{\sin{t}\left(1-\cos{t}\right)}{t^{2}}\,\mathrm{d}t}\end{aligned} Since $ x\mapsto\frac{\sin{x}\left(1-\cos{x}\right)}{x^{2}} $ is continuous on $ \left(0,1\right] $, and can be extended to a continuous function on $ \left[0,1\right] $ (its limit at $0$ is $0$), it can be upper-bounded on $ \left(0,1\right] $ by some constant $ M>0 :$ $$ \left(\exists M>0\right)\left(\forall x\in\left(0,1\right]\right),\ \frac{\sin{x}\left(1-\cos{x}\right)}{x^{2}}\leq M $$ Thus $ 0\leq\int_{0}^{x}{\frac{\sin{t}\left(1-\cos{t}\right)}{t^{2}}\,\mathrm{d}t}\leq Mx\underset{x\to 0}{\longrightarrow}0 $, and hence : $$ \lim_{x\to 0^{+}}{\int_{x}^{2x}{\frac{\sin{t}}{t^{2}}\,\mathrm{d}t}}=\ln{2} $$
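The limit can also be checked numerically with a composite Simpson rule (the step count and the sample values of $x$ are arbitrary choices):

```python
import math

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

f = lambda t: math.sin(t) / t**2
vals = [simpson(f, x, 2 * x) for x in (1e-1, 1e-2, 1e-3)]
```

Consistent with the identity above, the integrals increase toward $\ln 2$ from below, with a deficit of roughly $x^2/4$.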
{ "language": "en", "url": "https://math.stackexchange.com/questions/3674827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 4 }
Stability of higher-order fixed points for systems of ordinary differential equations In the book by Strogatz, Nonlinear Dynamics and Chaos (1994), the author discusses examples of higher-order fixed points for systems of ordinary differential equations in polar coordinates: * *$\dot{r}=ar^3, \dot{\theta}=1$, $a\ne0$ *$\dot{r}=-r, \dot{\theta}=1/\ln(r)$ In the above cases, the linearized systems show a non-isolated fixed point at the origin. However, the nonlinear systems are spirals at the origin. Is there an analytical (non-graphical) method to deduce this result ?
You can solve these examples explicitly. In the first system, the equations for $\dot{r}$ and $\dot{\theta}$ are decoupled, so you can solve the equations individually. In the second equation, you can solve $\dot{r} = -r$ first, and then plug the result into the second equation to get an explicit function for $\dot{\theta}$, which you can then integrate in $t$ to obtain $\theta(t)$. Once you have the explicit solutions you should be able to deduce the spiral behavior.
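For the first system the closed form is $r(t)=r_0/\sqrt{1-2ar_0^2t}$ together with $\theta(t)=\theta_0+t$; a short sketch comparing that to a fourth-order Runge-Kutta integration (the choices $a=-1$, $r_0=1$ and the step size are arbitrary):

```python
import math

a, r0 = -1.0, 1.0   # a < 0: trajectories spiral into the origin

def rhs(r):
    return a * r**3

def rk4(t_end, h=1e-3):
    # classic RK4 for the autonomous equation r' = a r^3
    r = r0
    for _ in range(round(t_end / h)):
        k1 = rhs(r)
        k2 = rhs(r + 0.5*h*k1)
        k3 = rhs(r + 0.5*h*k2)
        k4 = rhs(r + h*k3)
        r += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    return r

def exact(t):
    # separating variables in r' = a r^3 gives r(t) = r0 / sqrt(1 - 2 a r0^2 t)
    return r0 / math.sqrt(1 - 2*a*r0**2*t)

err = abs(rk4(1.0) - exact(1.0))
```

Since $\theta$ grows linearly while $r$ decays only algebraically, the orbit winds around the origin infinitely many times, which is the spiral behavior the linearization misses.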
{ "language": "en", "url": "https://math.stackexchange.com/questions/3675121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Comparing a ratio of gamma functions to a simple polynomial I am still struggling to build my intuition as far as reasoning with ratios of gamma functions. Reasoning with factorials is significantly clearer. Consider this example. I would appreciate if anyone could help me to understand how to complete the following with regard to gamma functions. Let $n > 1$ be any integer. Clearly: $$\frac{(2n + 2)!}{(2n)!} = (2n+2)(2n+1) > (n+1)^2 = n^2+2n+1$$ So, changing this to a ratio of Gamma functions, the equivalent is: $$\frac{\Gamma(2n + 3)}{\Gamma(2n+1)} = (2n+2)(2n+1) > (n+1)^2 = n^2+2n+1$$ So far, so good. My problem comes down to evaluating when a fraction less than 1 gets applied. For example, consider the value of $\frac{1.25506}{\ln n}$ which is less than $1$ for $n > e^{1.25506}$ While it is easy to figure out any given value and it is straight forward to generate a graph, how do I show that this value is true for $n > 800$ for example. How would I determine the derivative and show that is increasing (which I suspect it is)? $$\frac{\Gamma(2n+ 3 - \frac{1.25506}{\ln n})}{\Gamma(2n+1)} > n^2+2n+1$$ In other words, as I leave the safety of factorials, I am at a loss for how to prove or disprove the inequality for all $n > k$ where $k > 800$ for example. Edit: I think that the inequality may not be true for $\dfrac{5n}{3}$. I am switching from $\dfrac{5n}{3}$ to $2n$. I believe that this inequality might be true for a reasonably sized $n$. I believe that the inequality is true for $n=800$
Suppose that we consider the function $$f(n)=\log \left(\Gamma \left(2n+3-\frac{a}{\log (n)}\right)\right)-\log (\Gamma (2 n+1))-2 \log (n+1)$$ Using the Stirling approximation followed by Taylor series, we have $$f(n)=-\left(\frac{a \log (2)}{\log (n)}+a-2\log (2)\right)+\frac{a^2-5 a \log (n)-2 \log ^2(n)}{4 n \log ^2(n)}+\cdots$$ Ignoring the second term leads to a lower bound $$n_{\text{low}}=2^{-\frac{a}{a-2 \log (2)}}$$ which, for the given value of $a$, gives $n_{\text{low}}= 756.660$. Including the second term, Newton's method converges immediately at $n=792.720$. Using Newton's method for $f(n)=0$ with $n_0=n_{\text{low}}$, the iterates are $$\left( \begin{array}{cc} k & n_k \\ 0 & 756.6600 \\ 1 & 791.6120 \\ 2 & 792.7187 \\ 3 & 792.7197 \end{array} \right)$$
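The crossing point can be reproduced with `math.lgamma` and simple bisection; a Python sketch (the bracket $[700,900]$ is an arbitrary choice informed by the estimates above):

```python
from math import lgamma, log

a = 1.25506

def f(n):
    # log Gamma(2n+3 - a/log n) - log Gamma(2n+1) - 2 log(n+1)
    return lgamma(2*n + 3 - a/log(n)) - lgamma(2*n + 1) - 2*log(n + 1)

def bisect_root(lo, hi, iters=60):
    # assumes f(lo) < 0 < f(hi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

root = bisect_root(700.0, 900.0)
```

The inequality then holds for all integers $n \ge 793$, matching the expectation in the question that it becomes true around $n \approx 800$.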
{ "language": "en", "url": "https://math.stackexchange.com/questions/3675301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral of $\frac{|x|}{x}$ Description In Example 8.6 of my textbook (http://www.math.louisville.edu/~lee/RealAnalysis/), it goes like Let $$ f(x) = \begin{cases} \frac{|x|}{x}, & x \neq 0\\ 0, & x = 0 \end{cases} $$ And the author just shows that $F(x) = \int^{x}_{-1} f(x) dx = |x| - 1$ without any derivation. Question The point where I'm getting stuck is that I think $F(x) = \int^{x}_{-1} f(x) dx = x - 1$. Because; $$ \int^{x}_{-1} f(x) dx = \int^x_0 \frac{|x|}{x} dx + \int^0_{-1} \frac{|x|}{x} dx = \int^x_0 1 dx + \int^0_{-1} (-1) dx = \int^x_0 1 dx - \int^0_{-1} 1 dx = x - 1 $$ So, why does the textbook use the absolute value for the first term?
If $x\le 0$, then $$\int_{-1}^xf=\int_{-1}^x(-1)dt=$$ $$\Bigl[-t\Bigr]_{-1}^x=-x-1=-1+|x|$$ If $x\ge 0$, then $$\int_{-1}^xf=\int_{-1}^0(-1)dt+\int_0^x(1)dt$$ $$=\Bigl[-t\Bigr]_{-1}^0+\Bigl[t\Bigr]_0^x=-1+x=-1+|x|$$ in all cases, it gives $$|x|-1$$
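The case analysis can be sanity-checked with a midpoint-rule approximation of $\int_{-1}^{x}\operatorname{sgn}(t)\,dt$ (the sample points and the number of subintervals are arbitrary choices):

```python
import math

def F(x, n=100_000):
    # midpoint rule for the integral of sgn(t) over [-1, x];
    # midpoints avoid t = 0, where the integrand jumps
    h = (x + 1) / n
    return h * sum(math.copysign(1.0, -1 + (k + 0.5) * h) for k in range(n))
```

For $x$ on either side of $0$ the result matches $|x|-1$, not $x-1$, because the first integral in the question equals $x$ only when $x\ge 0$.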
{ "language": "en", "url": "https://math.stackexchange.com/questions/3675462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the dimension of a subspace in $\mathbb R^n$ Given $S_{u}^\bot:=\{v\in \mathbb R^n:v \cdot u=0\}$, I've proved that $S_{u}^\bot$ is a subspace of $\mathbb R^n$. The next part of the question asks me to prove that $S_{u}^\bot$ has dimension of $n-1$. I assume that it has something to do with the number of elements in the basis of the condition inside $S_{u}^\bot$ but do not know how to tackle. I also solved some other parts of the same question but I don't think that they're related to this part particularly.
Assume $u \neq 0$; after normalizing we may take $u$ to be a unit vector. For any vector $s \in V$, $s - (s \cdot u)u \in S_u^\perp$, since $(s-(s\cdot u)u)\cdot u = s\cdot u - (s\cdot u)(u\cdot u) = 0$. In particular, $s = (s\cdot u)u + v$ for some $v \in S_u^\perp$. This shows that every vector can be written as the sum of a vector in the span of $u$ and a vector in $S_u^\perp$. In other words, $\operatorname{span}(u) + S_u^\perp = V$. However, as the two subspaces intersect trivially (why?), the sum is in fact a direct sum, so by the usual relation between the dimensions of two subspaces and that of their direct sum, $\dim\operatorname{span}(u) + \dim S_u^\perp = n$, and one can conclude that $\dim S_u^\perp = n-1$.
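The key decomposition $s=(s\cdot u)u+v$ with $v\in S_u^\perp$ (for a unit vector $u$) can be spot-checked numerically; a sketch with an arbitrary random $u$ in $\mathbb{R}^5$:

```python
import math, random

random.seed(0)
n = 5
raw = [random.uniform(-1, 1) for _ in range(n)]
norm = math.sqrt(sum(c*c for c in raw))
u = [c / norm for c in raw]            # unit vector (assumption: u is normalized)

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

checks = []
for _ in range(20):
    v = [random.uniform(-5, 5) for _ in range(n)]
    t = dot(v, u)
    w = [vi - t*ui for vi, ui in zip(v, u)]   # candidate element of S_u^perp
    checks.append(abs(dot(w, u)) < 1e-9 and
                  all(abs(vi - (t*ui + wi)) < 1e-9
                      for vi, ui, wi in zip(v, u, w)))
```

Every random vector splits as a multiple of $u$ plus a vector orthogonal to $u$, which is exactly the direct-sum statement used in the proof.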
{ "language": "en", "url": "https://math.stackexchange.com/questions/3675625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $\frac{1}{\sqrt{1-x^2}}$ is unbounded for $x\in(-1,1)$ I'm trying to prove that $f(x):=\frac{1}{\sqrt{1-x^2}}$ is unbounded for $x\in(-1,1)$; so I must prove that for every $M\in\mathbb{R}$ there is some $x\in(-1,1)$ with $f(x) > M$. With the definition of limit, since $$\lim_{x \to 1^-} f(x)=\lim_{x \to -1^+} f(x)=\infty$$ For all $M>0$ there exists $\delta_M>0$ such that if (for instance) $1-\delta_M<x<1$ then $f(x)>M$, so since $M>0$ is arbitrary the definition of unbounded function is satisfied; the doubt is that I don't know how to work with the $x$ interval for which that is true, I mean that I know from the definition of limit that when $1-\delta_M < x <1$ it is $f(x)>M$ but how do I make this rigorous? I was thinking something like this: since $f$ is continuous in $(-1,1)$, by the Weierstrass theorem $f$ has a maximum and minimum (so is bounded) in every interval of the kind $[-1+t,1-t]$ for $t>0$; so if I choose $\delta_M<t$ I can conclude that $f$ is unbounded because it is greater than $M$ for all $M\in\mathbb{R}$ in the interval $[1-\delta_M,1) \subset [1-t,1)$. Is this correct? However if this is correct I have another doubt: how do I prove that $[-1+t,1-t]$ is bounded and closed? I know it is trivial, but I've never done this before. Another attempt is by contradiction: suppose that $f$ is bounded; then there exists $M\in\mathbb{R}$ such that $|f(x)| \leq M$ for all $x\in(-1,1)$, so we have $$0<\frac{1}{\sqrt{1-x^2}} \leq M \Leftrightarrow0<1\leq M\sqrt{1-x^2}$$ Taking the limit for $x \to1^-$ on both sides we have that $$0\leq \lim_{x \to 1^-} 1 \leq \lim_{x \to 1^-} M\sqrt{1-x^2}=0$$ So by comparison we have $$\lim_{x \to 1^-} 1=0$$ Which is a contradiction. Is this correct? Thanks.
$\frac{1}{\sqrt{1-x^2}}$ is an even function, so it suffices to consider $x \in [0,1).$ Assume $(1-x^2)^{-1/2} <M\ (>0)$, $M$ real, for all $x \in [0,1)$. Then $1/M^2 < 1-x^2$. Set $y_n:=x_n^2=1-1/n$; then $1/M^2 < 1/n$, for large enough $n$ a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3675859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Tail random variable and extremum Let $(X_n)_n$ be a sequence of real random variables. Is it true that $\limsup_n\frac{1}{n}\max_{1 \leq k\leq n}X_k$ is tail random variable? $\liminf_n \frac{1}{n}\min_{1 \leq k \leq n}X_k$ ? $\limsup_n\frac{1}{n}\min_{1\leq k \leq n}X_k$? For the first two, I think the answer is correct, since for $r \in \mathbb{N}^*,$ $$\limsup_n\frac{1}{n}\max_{1 \leq k\leq n}X_k=\limsup_n\frac{1}{n+r}\max_{1\leq k \leq n+r}X_k=\limsup_n\max(\frac{1}{n+r}\max_{1\leq k \leq r}X_k,\frac{1}{n+r}\max_{r+1 \leq k \leq n+r}X_k)=\max(0,\limsup_n\frac{1}{n+r}\max_{r+1 \leq k\leq n+r}X_k)$$ which is $\sigma(\bigcup_{k \geq r}\sigma(X_k))$ measurable. (I used that $\limsup_n\max(u_n,v_n)=\max(\limsup_nu_n,\limsup_nv_n)$). The same thing for $\liminf_n \frac{1}{n}\min_{1 \leq k \leq n}X_k.$ I'm stuck on the third one. Do you have any ideas?
In general: $$\left|\min\left(a,b\right)-\min\left(a',b\right)\right|\leq\left|a-a'\right|$$ So if: $$Z_{n}:=\frac{1}{n}\min_{2\leq k\leq n}X_{k}$$ and: $$Y_{n}:=\frac{1}{n}\min_{1\leq k\leq n}X_{k}=\min\left(\frac{1}{n}X_{1},Z_{n}\right)$$ then: $$\left|Y_{n}-\min\left(0,Z_{n}\right)\right|\leq\frac{1}{n}\left|X_{1}\right|$$ From this we conclude that: $$\limsup Y_{n}=\limsup\min\left(0,Z_{n}\right)$$ showing that $\limsup Y_{n}$ is measurable wrt $\sigma\left(X_{2},X_{3},\dots\right)$. This can be expanded to find that $\limsup Y_{n}$ is measurable wrt $\sigma\left(X_{r},X_{r+1},\dots\right)$ for every $r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The argument of a finite Euler product at a non-trivial zero of $\zeta(s)$. With $p_n$ the $n$-th prime number, we know that: $$\arg\left(\zeta(s)\right)=\arg\prod_{n=1}^{\infty} \frac{1}{1-\frac{1}{p_n^s}}=\sum_{n=1}^{\infty} \arg\left(\frac{1}{1-\frac{1}{p_n^s}}\right)\qquad \Re(s) > 1$$ Now define the finite series: $$f(s,N)=\sum_{n=1}^{N} \arg\left(\frac{1}{1-\frac{1}{p_n^s}}\right) \qquad s \in \mathbb{C}$$ and plot it for values at and near a non-trivial zero ($\rho$) of $\zeta(s)$ for successive $N$. For $N \rightarrow \infty$ these oscillating series show a decreasing frequency and an increasing amplitude ('spiral waves'). Oscillating series neither converge nor diverge; however, it seems that only when $s=\rho$ does the oscillation occur along a horizontal line (the same pattern emerges for all $\rho$s I tested). Is there any possible way of calculating the location of such a line?
The Euler product diverges for $\Re(s) < 1$, you need to replace it by the regularized version (valid for $\Re(s)\in (1/2,1)$ assuming the RH is true) $$\lim_{x\to \infty} -\sum_{p \le x} \log(1-p^{-s}) - pv\int_0^x \frac{t^{-s}}{\log t}dt + \frac{\log(s-1)}{s}$$ If there are infinitely many zeros of real part $\ge \Re(s) $ then there is no regularized version at $s$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the domain of $f^2$ if $f(x)=\sqrt{x+2}$ Now, $f(x)$ is defined for all values of $x$ for which $x+2 \geq 0$ $x+2 \geq 0 \implies x \geq -2$ So, $\mathrm{Domain}(f)=[-2,\infty)$ which means $f : [-2,\infty) \longrightarrow \Bbb R$ $f^2(x) = \Big (f(x) \Big )^2=(\sqrt{x+2})^2=x+2$ So, $f^2$ is defined for all values of $x$, right? So, shouldn't $f^2:\Bbb R \longrightarrow \Bbb R$ ? According to my textbook, $f^2:[-2,\infty) \longrightarrow \Bbb R$ But if we take something outside of $[-2,\infty)$, for example $-5$ and put it in $f^2(x)$, we get: $f^2(-5) = \Big (f(-5) \Big )^2=(\sqrt{-5+2})^2=(\sqrt {-3})^2=(-3) \in \Bbb R$ Doesn't this mean that $f^2$ is defined for values outside the domain of $f$ as well? So, am I right or is the book right? If the book's right, where am I wrong? Thanks!
You have written $f^2(-5)=(f(-5))^2$. It is correct, however, look at the inside function of the RHS. It is $f(-5)$. Can you define $f(-5)$? No. That means $f^2(-5)$ is undefined. Similarly $f^2$ is undefined for any $x<-2$. So, your book is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How are improper Riemann and Lebesgue integrals associated? I'm looking for an answer about the relation between the Lebesgue integral and the improper Riemann integral: $$ (L) \int_{a}^{b}f(x)dx \: \: \: \: (R)\lim_{\alpha \to a+}\int_{\alpha}^{b}f(x)dx $$ I found the following theorem: Let $f$ be a nonnegative continuous function. If $f$ is improperly Riemann integrable then it is Lebesgue integrable on $\left(a, b\right]$ and we have $$ \int_{a}^{b}f(x)dx = \lim_{\alpha \to a+}\int_{\alpha}^{b}f(x)dx $$ but there is no proof. Can somebody explain how to prove the above theorem? This question arose after reading about the Lebesgue integral of a function of arbitrary sign. Could this be somehow related to that? Also, I found a similar question (improper Riemann integral and Lebesgue integral) but it is for $\left(0, 1\right]$.
Consider the sequence $g_n = \chi_{[\frac {1}{n},1]}\cdot f$ and apply the monotone convergence theorem. I, of course, assumed the domain to be $[0,1]$, but you get what I mean. Edit: You can observe why the improper Riemann and Lebesgue integrals might not always be the same. The improper Riemann integral, if you note, is actually the limit of the integrals of $\chi_{[r,b]}\cdot f$ as $r\to a$. This is a limit of integrals, which might not always equal the integral of the limit (indeed, those functions converge pointwise to $f$, which might not necessarily be Lebesgue integrable). However, when assumptions like non-negativity or integrability of $f$ are made, we can conclude that they are equal using the monotone and dominated convergence theorems respectively.
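The monotone-convergence picture can be illustrated with $f(x)=1/\sqrt{x}$ on $(0,1]$, whose improper Riemann integral is $2$: the integrals of $\chi_{[1/n,1]}\cdot f$ increase to $2$. A numeric sketch (midpoint rule; the values of $n$ are arbitrary):

```python
import math

def f(x):
    return 1.0 / math.sqrt(x)   # improperly Riemann integrable on (0, 1]

def integral_from(a, n=100_000):
    # midpoint rule for the integral of chi_[a,1] * f
    h = (1 - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

ns = (10, 100, 1000)
vals = [integral_from(1.0 / m) for m in ns]
exact = [2 - 2 * math.sqrt(1.0 / m) for m in ns]
```

The truncated integrals form an increasing sequence converging to $2$, exactly as the monotone convergence theorem predicts for $g_n \uparrow f$.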
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof that a subset of the plane is an open set. I don't have a specific book that I am learning from because I am trying to learn on my own but my used definition of an open set $A$ is that for every $a\in A$ there exists an $\varepsilon>0$ such that the open ball $B(a,\varepsilon)\subset A$ Let $\boldsymbol{A}=\{(x,y)\in \mathbb{R}^2|0<x<1,0<y<2\}\subset \mathbb{R}^2$. Prove that $\boldsymbol{A}$ is an open set. My attempt: Let $P=(a,b)\in \boldsymbol{A}$. We can choose a radius $\varepsilon<\min\{|a-1|,|b-2|,a,b\}$. (This choice because they are the distances from the lines that bound the set $\boldsymbol{A}$ and it certainly seems to work). Now I think I would have to prove that if I take a point $(x,y)\in B(P,\varepsilon)$, then $(x,y)\in \boldsymbol{A}$, but I don't see how I can do this. I have tried to say that $\sqrt{(a-x)^2+(b-y)^2}<\varepsilon$ and the tried to somehow show in cases that always $0<x<1$ and $0<y<2$ but it doesn't seem to lead anywhere. I also don't even know how much work is needed to finish this proof because I haven't really seen any examples done. Can someone help me finish this argument or give a more elegant one alternatively(should still be from the definition)?
You're definitely on the right track. For each of the four inequalities you need to show, you need only to expand the inequality you have so that it ignores the irrelevant stuff. For example, to show $0<x$, $$a-x<\sqrt{(a-x)^2+(b-y)^2}<\varepsilon<a$$
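The choice of radius can be spot-checked by sampling random points of the open ball and verifying they land in $A$; a sketch with an arbitrary point $P$ and a radius strictly below the minimum distance:

```python
import math, random

random.seed(1)
a, b = 0.3, 1.7                                  # an arbitrary point P = (a, b) in A
eps = 0.5 * min(abs(a - 1), abs(b - 2), a, b)    # strictly below the minimum distance

inside = []
for _ in range(10_000):
    r = eps * math.sqrt(random.random())         # uniform sample from the disc B(P, eps)
    th = random.uniform(0, 2 * math.pi)
    x, y = a + r * math.cos(th), b + r * math.sin(th)
    inside.append(0 < x < 1 and 0 < y < 2)
```

Every sampled point satisfies all four strict inequalities, mirroring the expansion argument above applied to each side of the rectangle.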
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Combinatorics task. Four men and four women shall get in line in a supermarket. In how many ways can they line up, if the line has to alternate between men and women (two men or two women cannot stand next to each other)? My solution: $4 \cdot 4 \cdot 3 \cdot 3 \cdot 2 \cdot 2 \cdot 1 \cdot 1 = 576$; then we multiply it by two because a man or a woman can be first in line: $576 \cdot 2 = 1152$. The answer given in my maths book: $2304$. What did I do wrong?
You may as well separate them into two lines and call them up one at a time, alternating between the lines. You can arrange each line in $4!$ ways. You can alternate between the lines starting from either the male or female line, so that is $2$ choices. Meaning there are $(4!)^2 \cdot 2= 1152$ total possibilities, agreeing with your answer. The book's $2304$ is probably a typo, the solution having applied the factor of $2$ twice.
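With only $8! = 40320$ orderings, the count $1152$ can be confirmed by brute force:

```python
from itertools import permutations

people = ['M1', 'M2', 'M3', 'M4', 'W1', 'W2', 'W3', 'W4']

def alternating(line):
    # genders must alternate, i.e. adjacent people differ in their first letter
    return all(line[i][0] != line[i+1][0] for i in range(len(line) - 1))

count = sum(1 for line in permutations(people) if alternating(line))
```

The exhaustive count matches $(4!)^2\cdot 2$, supporting the reading that the book's $2304$ doubled once too often.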
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Finding an integration factor $x^ay^b$ to solve an ODE I have to solve an ODE:$$2(y-3x)dx+x\left(3-\frac{4x}{y} \right) dy=0$$ I am given that I have to use an integrating factor of the form $x^ay^b$, where $a$ and $b$ are real numbers, in order to turn the problem into a solvable exact ODE. The problem is that I am unsure of how to find these real numbers. From my understanding, the ODE is exact with the integrating factor $I$ if the partial derivative of the $dx$-coefficient with respect to $y$ equals the partial derivative of the $dy$-coefficient with respect to $x$ (both coefficients multiplied by $I$). I tried doing this but I am unsure of how to proceed with the algebra. If anyone can help with the algebra to find the values for $a$ and $b$, that would be appreciated.
Multiply the DE by $y$: $$2y^2dx-6xydx+3xydy-4x^2dy=0.$$ Now multiply by $xy$, so the overall integrating factor for the original equation is $\mu(x,y)=xy^2$: $$2xy^3dx-6x^2y^2dx+3x^2y^2dy-4x^3ydy=0.$$ Using $d(x^2)=2x\,dx$, $d(x^3)=3x^2\,dx$ and likewise for $y$, this reads $$y^3\,d(x^2)-2y^2\,d(x^3)+x^2\,d(y^3)-2x^3\,d(y^2)=0.$$ Rearrange some terms: $$\left(y^3\,d(x^2)+x^2\,d(y^3)\right)-2\left(y^2\,d(x^3)+x^3\,d(y^2)\right)=0,$$ i.e. $$d(x^2y^3)-2\,d(x^3y^2)=0.$$ Integration gives us: $$x^2y^3-2x^3y^2=K.$$ To summarize: the integrating factor should be $\mu (x,y)=xy^2$; multiplying by it makes the DE exact.
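As a check (my addition, not part of the original answer), one can verify symbolically that multiplying the original DE by $\mu=xy^2$ makes it exact and that $F=x^2y^3-2x^3y^2$ is a potential function:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Original DE: M0 dx + N0 dy = 0
M0 = 2*(y - 3*x)
N0 = x*(3 - 4*x/y)

# Candidate integrating factor
mu = x*y**2
M, N = sp.expand(mu*M0), sp.expand(mu*N0)

# Exactness: dM/dy == dN/dx
assert sp.simplify(sp.diff(M, y) - sp.diff(N, x)) == 0

# F = x^2 y^3 - 2 x^3 y^2 satisfies dF/dx = M and dF/dy = N
F = x**2*y**3 - 2*x**3*y**2
assert sp.simplify(sp.diff(F, x) - M) == 0
assert sp.simplify(sp.diff(F, y) - N) == 0
print("mu = x*y**2 makes the DE exact; F = x^2 y^3 - 2 x^3 y^2 = K is the solution")
```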
{ "language": "en", "url": "https://math.stackexchange.com/questions/3676905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that if $b_n$ is a subsequence of $a_n$ and $c_n$ is a subsequence of $b_n$, then $c_n$ is a subsequence of $a_n$. Let $(a_{n})_{n=0}^{\infty}$, $(b_{n})_{n=0}^{\infty}$ and $(c_{n})_{n=m}^{\infty}$ be sequences of real numbers. Then $(a_{n})_{n=0}^{\infty}$ is a subsequence of $(a_{n})_{n=0}^{\infty}$. Furthermore, if $(b_{n})_{n=0}^{\infty}$ is a subsequence of $(a_{n})_{n=0}^{\infty}$, and $(c_{n})_{n=0}^{\infty}$ is a subsequence of $(b_{n})_{n=0}^{\infty}$, then $(c_{n})_{n=0}^{\infty}$ is a subsequence of $(a_{n})_{n=0}^{\infty}$. My solution We say that a sequence of real numbers $(x_{n})_{n=0}^{\infty}$ is a subsequence of $(y_{n})_{n=0}^{\infty}$ if there exists a strictly increasing function $f:\textbf{N}\to\textbf{N}$ such that $x_{n} = y_{f(n)}$. Based on this definition, we conclude that $a_{n}$ is a subsequence of itself: it suffices to choose $f(n) = n$. On the other hand, if $b_{n}$ is a subsequence of $a_{n}$ and $c_{n}$ is a subsequence of $b_{n}$, then exist functions $f:\textbf{N}\to\textbf{N}$ and $g:\textbf{N}\to\textbf{N}$ such that $b_{n} = a_{f(n)}$ and $c_{n} = b_{g(n)}$. Consequently, $c_{n} = a_{f(g(n))}$, where $f\circ g$ is strictly increasing because it is a composition of two strictly increasing functions. Could someone please tell me if I am missing any formal step? Any comment is appreciated!
No steps missing, but there is an error. If $b_n = a_{f(n)}$, then $$ c_n = b_{g(n)} = a_{f(g(n))} \text{.} $$ You have the composition reversed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove that there are infinitely many odd number can't be written as $pq-p-q$ Let $p$ and $q$ are prime. Problem Are there infinitely many odd positive integer $a$, which can't be written as $pq-p-q$ ? Example $13$ can't be expressed in $pq-p-q$. Sequence $13,25,33,37,49,53,61,67,73,75,85,93,97,109,...$ It looks there are infinitely many and clearly any even positive integer can't be expressed in $pq-p-q$. Thanks for your time to go through it.
Just choose $a=12k+1$ for any integer $k \ge 1$. If $a=pq-p-q$, then $a+1 = (p-1)(q-1)$. Here $a+1=12k+2\equiv2\pmod 4$, so $4 \nmid (a+1)$. If $p$ and $q$ were both odd, $4$ would divide $(p-1)(q-1)$; hence WLOG $q=2$. This gives: $$a=p-2 \implies p=12k+3,$$ which is a contradiction, as the right-hand side is divisible by $3$ and larger than $3$, so it cannot be prime.
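A brute-force check of the claim (my addition): compute every value of $pq-p-q$ below a bound and confirm that no number of the form $12k+1$ ($k\ge1$) appears, and that the first non-representable odd numbers match the sequence in the question:

```python
def primes_up_to(n):
    # simple Eratosthenes sieve
    is_p = [False, False] + [True] * (n - 1)
    for i in range(2, int(n**0.5) + 1):
        if is_p[i]:
            is_p[i*i::i] = [False] * len(is_p[i*i::i])
    return [i for i, b in enumerate(is_p) if b]

LIMIT = 10_000
# q <= LIMIT + 2 suffices: (p-1)(q-1) = a+1 <= LIMIT + 1 with p >= 2
primes = primes_up_to(LIMIT + 2)

representable = set()
for p in primes:
    for q in primes:
        v = p*q - p - q
        if 0 < v < LIMIT:
            representable.add(v)

bad = [n for n in range(13, LIMIT, 12) if n in representable]
print(bad)  # []  -> no 12k+1 (k >= 1) is representable

missing = [n for n in range(1, 120, 2) if n not in representable]
print(missing)  # starts 13, 25, 33, 37, 49, ... as in the question
```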
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a differentiable function whose derivative is greater than or equal to 1 everywhere but 0 at 0? Does there exist a differentiable function $f: \mathbb{R} \longrightarrow \mathbb{R}$ such that $f^{'}(0)=0$ and $f^{'}(x) \ge 1$ $\forall x \ne 0?$ Prove either way. I was thinking of coming up with a counterexample such as $x^3$ but adding an expression to make $f' \ge 1$ but that makes $f'(0) \ne 0.$ What would be a rigorous proof otherwise if no such $f$ exists?
By Darboux's theorem, the derivative of any differentiable function has the intermediate value property. Here $f'(0)=0$ and $f'(1)\ge 1$, so $f'$ would have to attain the value $\tfrac12$ at some $c\in(0,1)$; but then $c\neq0$, so $f'(c)\ge1$, a contradiction. Hence no such function exists. Ref: https://en.wikipedia.org/wiki/Darboux%27s_theorem_(analysis)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving That $\int_0^a(f(x) + f^{-1}(x))dx \geq a^2$ for $a > 0$ Hello everyone. $f(x)$ is an increasing function with $f(0) = 0$, and $f^{-1}(x)$ is the inverse function of $f(x)$. How can I prove that $\int_0^a(f(x) + f^{-1}(x))dx \geq a^2$ for $a > 0$? Thanks!
Let $a' = f^{-1}(a)$, so that $f(a')=a$. By the inverse-function integral formula (valid for increasing $f$ with $f(0)=0$), $$\int_{0}^{a} f^{-1}(x)\,dx = aa' - \int_{0}^{a'} f(x)\,dx.$$ Hence $$\int_{0}^{a} \left(f(x) + f^{-1}(x)\right)dx = aa' + \int_{0}^{a} f(x)\,dx-\int_{0}^{a'} f(x)\,dx = aa' + \int_{a'}^{a} f(x)\,dx.$$ Since $f$ is increasing and $f(a')=a$, we have $\int_{a'}^{a} f(x)\,dx \geq (a-a')\,f(a')=(a-a')a$ — this holds whether $a'\le a$ (where $f\ge a$ on $[a',a]$) or $a'>a$ (where $f\le a$ on $[a,a']$). Therefore $$\int_{0}^{a} \left(f(x) + f^{-1}(x)\right)dx \geq aa'+a^2-aa'=a^2.$$ We are done.
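To illustrate the inequality numerically (my addition), take $f(x)=x^2$ on $[0,\infty)$, so $f^{-1}(x)=\sqrt{x}$ and $\int_0^a(x^2+\sqrt{x})\,dx=\frac{a^3}{3}+\frac{2}{3}a^{3/2}$, which should dominate $a^2$:

```python
def lhs(a):
    # exact value of the integral of x^2 + sqrt(x) over [0, a]
    return a**3/3 + (2/3)*a**1.5

for a in [0.1, 0.5, 1.0, 2.0, 5.0]:
    assert lhs(a) >= a**2 - 1e-12, a
print("integral >= a^2 for all tested a")
```

Note that $a=1$ gives equality here, consistent with the proof: equality occurs when $f(a)=a$.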
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can a single math equation create multiple closed areas? Example: the equation of a circle can easily represent case A. Can some single (possibly complicated) mathematical equation create case B?
Yes: two curves can be represented by a single equation by taking the product of their equations. First bring each equation to the form (expression) $=0$, then multiply the left-hand sides. For instance, $$(x^2+y^2-1)\left((x-3)^2+y^2-1\right)=0$$ describes two disjoint circles at once, since a point satisfies the product equation exactly when it satisfies at least one factor. Multiple curves can be combined on the same $x$-$y$ plane this way, and we can visualize them together or separately: draw the graphs of two or three curves on transparent plastic sheets and superimpose the sheets with their axes coinciding at a common origin. The single product equation is a valid representation of the whole set of curves you see.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is this really a multiplicative group? [subgroups of $\Bbb Z/15$] When looking at some small multiplicative groups of integers modulo n, such as for 15, I found something that confused me. Due to a miscalculation, I had what looks like another set of integers less than 15 that are closed under multiplication. My question is whether this is really a group and if so, is it the only example or is it just another group in disguise? Specifically, the actual group elements are {1, 2, 4, 7, 8, 11, 13, 14} which are the integers coprime to 15. The set I'm interested in is {1, 2, 3, 4, 6, 8, 9, 12}, which looks like as Cayley graphs: So both images here show the groups arranged as $\mathbb{Z}_4\times\mathbb{Z}_2$ where one group is generated by <2> and <7> and the other by <2> and <3>. Is the one on the right actually a group?
Not sure your second Cayley graph works: $3^2=9\neq1$, so $\langle 3\rangle$ does not act as a copy of $\mathbb{Z}_2$ mod $15$. More decisively, for the set to be a group you would need $3^{-1}$, which doesn't exist mod $15$: since $\gcd(3,15)=3\neq1$, $3$ is a zero divisor in the ring $\mathbb{Z}_{15}$ and cannot belong to any group under multiplication mod $15$. Also, $7^2=49=4\pmod{15}$, so $\langle 7\rangle$ is not a copy of $\mathbb{Z}_2$ in $\mathbb{Z}_{15}^*$ either: $\operatorname{ord}(7)=4$.
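One can confirm directly (my addition) that the set $\{1,2,3,4,6,8,9,12\}$ is in fact closed under multiplication mod $15$ — which is what made it look like a group — but fails to be one because $3$ has no inverse:

```python
S = {1, 2, 3, 4, 6, 8, 9, 12}

# the set IS closed under multiplication mod 15 ...
assert all((a*b) % 15 in S for a in S for b in S)

# ... but 3 has no inverse: 3*x mod 15 is always a multiple of 3, never 1
assert all((3*x) % 15 != 1 for x in range(15))

# the genuine unit group consists of the residues coprime to 15
from math import gcd
units = {a for a in range(1, 15) if gcd(a, 15) == 1}
print(sorted(units))  # [1, 2, 4, 7, 8, 11, 13, 14]
```

So the second set is a closed multiplicative monoid, but not a group.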
{ "language": "en", "url": "https://math.stackexchange.com/questions/3677907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Having two non-parallel hyperplanes of $ \mathbb{R}^n $, $ S_1 $ and $ S_2 $, prove that $ S_1\cap S_2 \neq\emptyset $ and $\dim(S_1\cap S_2)=n-2$ I've got two affine hyperplanes of $ \mathbb{R}^n $, $ S_1 $ and $ S_2 $, which are non-parallels. I have to prove that $ S_1\cap S_2 \neq\emptyset $ and $\dim(S_1\cap S_2)=n-2$ What I have done is: As $ S_1 $ and $ S_2 $ are hyperplanes, $\dim(S_1)=n-1$ and $\dim(S_2)=n-1$ But I have no idea what else to do, I'm stuck :|
What does the non-parallel hypothesis mean? Try to exploit this fact to show the non-emptiness of the intersection. What can you say about the intersection of two linear hyperplanes? Can you adapt this (maybe by translating the origin?) to prove what the dimension of the intersection is?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this another definition for inverse relation? We know that if $R\colon A\to B$, we can define the inverse relation as follows: $$R^{-1}=\{(y,x)\in B\times A\mid(x,y)\in R\}.$$ Now I want to know if this set, let's call it $R\:'^{-1}$, is the same as $R^{-1}$: $$R\:'^{-1}=\{(x,y)\in R\mid(y,x)\in B\times A\}.$$ I think $R^{-1}\neq R\:'^{-1}$ so I picked an example. I tried with $A=\{1,2\}$, $B=\{3,4\}$, so $B\times A=\{(3,1),(3,2),(4,1),(4,2)\}$. Let's pick $R=\{(3,1),(4,1)\}$. Then, by definition, $$R^{-1}=\{(1,3),(1,4)\}.$$ But if we look at the definition of $R\:'^{-1}$, I think it says "All the elements of $(x,y)\in R$ such that the pair $(y,x)$ is in $B\times A$". Hence, $R\:'^{-1}=\emptyset$, because we know, for example, that $R$ has $x=3$ and $y=1$, so $(3,1)\in R$, but this proposition: $(y,x)=(1,3)\in B\times A$ is false. Thus $R^{-1}\neq R\:'^{-1}$. Are my understanding of $R\:'^{-1}$ and counterexample correct?
We have $(y,x)\in B\times A$ automatically whenever $(x,y)\in R\subseteq A\times B$, so the condition in the definition of $R\:'^{-1}$ is vacuous. Hence your $R\:'^{-1}$ is just $R$ itself. (Note that in your example you took $R=\{(3,1),(4,1)\}\subseteq B\times A$ rather than $R\subseteq A\times B$ as the definition $R\colon A\to B$ requires — that is what made the condition fail and produced $\emptyset$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving $\int_0^1 [x^{700}(1-x)^{300} - x^{300}(1-x)^{700}] \, dx$ I am trying to solve the following integral: $$\int_0^1 [x^{700}(1-x)^{300} - x^{300}(1-x)^{700}] \, dx$$ My intuition is that this integral is equal to zero but I am unsure as to which direction to take to prove this. I was thinking binomal expansion but I believe there must be a better way, possibly using summation notation instead.
Substitute $x = \tfrac12 - x'$, so that the limits $x=0,1$ become $x'=\tfrac12,-\tfrac12$ and the integral runs over the symmetric interval $[-\tfrac12,\tfrac12]$. The integrand becomes $$\left(\tfrac12 - x'\right)^{700}\left(\tfrac12 + x'\right)^{300} - \left(\tfrac12 - x'\right)^{300}\left(\tfrac12 + x'\right)^{700},$$ which is an odd function of $x'$ (replacing $x'$ by $-x'$ swaps the two terms). The integral of an odd function over a symmetric interval is $0$.
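Since $\int_0^1 x^m(1-x)^k\,dx = \frac{m!\,k!}{(m+k+1)!}$ (the Beta integral), the value can also be confirmed exactly with integer arithmetic (my addition):

```python
from fractions import Fraction
from math import factorial

def beta_int(m, k):
    # ∫_0^1 x^m (1-x)^k dx = m! k! / (m+k+1)!  for nonnegative integers m, k
    return Fraction(factorial(m) * factorial(k), factorial(m + k + 1))

value = beta_int(700, 300) - beta_int(300, 700)
print(value)  # 0
```

The two Beta integrals are equal by the symmetry $x\mapsto 1-x$, so the difference is exactly $0$.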
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678353", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Interpolation constraint in Hilbert space The paper "Geodesic Interpolating Splines" made the following comment on an interpolation problem: Interpolation problem: Let $\mathcal{H}$ be a Hilbert space, let $f_1, \dots, f_N \in \mathcal{H}$, and $c_1,\dots,c_N \in \mathbb{R}$ be given. Find $h \in \mathcal{H}$ such that $\|h\|$ is minimum subject to the constraints $\langle f_i, h \rangle = c_i$ for $i=1,\dots,N$. Comment: It is indeed clear that the constraints are not affected if $h$ is replaced by $h + v$ where $v$ is orthogonal to all the $f_i$, so that the solution much be in fact be searched in the linear space spanned by $f_1, \dots, f_N$ and express the unknown $h$ as a linear combination $h = \sum_{i=1}^{N} \alpha_i f_i$. I understand that the interpolation constraint will remain the same if we replace $h$ by $h + v$. But, how this property implies that the solution should be searched in linear space? Is it possible to prove their claim with the elementary level knowledge of Hilbert space?
Yes — only the orthogonal decomposition of a Hilbert space is needed. Write $h = h_{\parallel} + h_{\perp}$, where $h_{\parallel}\in V:=\text{span}\{f_1,\dots,f_N\}$ (a closed subspace, being finite-dimensional) and $h_{\perp}\in V^{\perp}$. Then $\langle f_i,h\rangle = \langle f_i,h_{\parallel}\rangle = c_i$ for every $i$, so $h_{\parallel}$ satisfies the same constraints, while by Pythagoras $$\|h\|^2 = \|h_{\parallel}\|^2 + \|h_{\perp}\|^2 \ge \|h_{\parallel}\|^2,$$ with equality iff $h_{\perp}=0$. So replacing $h$ by $h_{\parallel}$ preserves the constraints and can only decrease the norm, and therefore any minimizer must lie in $V$, i.e. $h=\sum_{i=1}^{N}\alpha_i f_i$.
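Not part of the answer above, but the claim has an easy finite-dimensional sanity check: in $\mathbb{R}^n$ with constraints $Fh=c$ (rows of $F$ are the $f_i$), the minimum-norm solution returned by least squares lies in the row space of $F$, i.e. $h=F^{T}\alpha$ for some $\alpha$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 8, 3
F = rng.standard_normal((N, n))   # rows are f_1, ..., f_N
c = rng.standard_normal(N)

# minimum-norm h with F h = c (lstsq returns it for underdetermined systems)
h, *_ = np.linalg.lstsq(F, c, rcond=None)

# h satisfies the constraints ...
assert np.allclose(F @ h, c)

# ... and lies in span{f_1, ..., f_N}: projecting onto the row space changes nothing
P = F.T @ np.linalg.solve(F @ F.T, F)   # projector onto row space (F has full row rank a.s.)
assert np.allclose(P @ h, h)
print("min-norm solution lies in the span of the f_i")
```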
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Is the sequence $x_n=\dfrac{1}{\sqrt{n}}\left(1+\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}+\ldots+\dfrac{1}{\sqrt{n}}\right)$ monotone? Observe that $x_1=1$ and $x_2=\dfrac{1}{\sqrt{2}}\left(1+\dfrac{1}{\sqrt{2}}\right)>\dfrac{1}{\sqrt{2}}\left(\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{2}}\right)=1$. Thus, $x_2>x_1$. In general, we also have $x_n=\dfrac{1}{\sqrt{n}}\left(1+\dfrac{1}{\sqrt{2}}+\dfrac{1}{\sqrt{3}}+\ldots+\dfrac{1}{\sqrt{n}}\right)>\dfrac{1}{\sqrt{n}}\left(\dfrac{1}{\sqrt{n}}+\dfrac{1}{\sqrt{n}}+\dfrac{1}{\sqrt{n}}+\ldots+\dfrac{1}{\sqrt{n}}\right)=1$. Thus, $x_n\geq 1$ for all $n\in \mathbb{N}$. Also, we have, $x_{n+1}=\dfrac{1}{\sqrt{n+1}}\left(\sqrt{n}x_n+\dfrac{1}{\sqrt{n+1}}\right)=\dfrac{\sqrt{n}}{\sqrt{n+1}}x_n+\dfrac{1}{n+1}$. Is it true that $x_{n+1}>x_n$? Edit : Thanks to the solution provided by a co-user The73SuperBug. Proving $x_{n+1}-x_n>0$ is equivalent to proving $x_n<1+\dfrac{\sqrt{n}}{\sqrt{n+1}}$. This is explained below : \begin{equation} \begin{aligned} &x_{n+1}-x_n>0\\ \Leftrightarrow & \dfrac{\sqrt{n}}{\sqrt{n+1}}x_n+\dfrac{1}{n+1}-x_n>0\\ \Leftrightarrow & \left(1-\dfrac{\sqrt{n}}{\sqrt{n+1}}\right)x_n-\dfrac{1}{n+1}<0\\ \Leftrightarrow & x_n<\dfrac{1}{(n+1)\left(1-\dfrac{\sqrt{n}} {\sqrt{n+1}}\right)}\\ \Leftrightarrow & x_n<1+\dfrac{\sqrt{n}}{\sqrt{n+1}}. \end{aligned} \end{equation}
A generalization I couldn't pass by. Let $\color{blue}{S_n=\frac1n\sum_{k=1}^{n-1}f\big(\frac{k}{n}\big)}$ where $f:(0,1)\to\mathbb{R}$ is strictly convex: $$f\big((1-t)a+tb\big)<(1-t)f(a)+tf(b)\quad\impliedby\quad a<b,0<t<1.$$ If we put $a=k/(n+1),b=(k+1)/(n+1),t=k/n$ for $0<k<n$ here, we obtain $$f\Big(\frac{k}{n}\Big)<\Big(1-\frac{k}{n}\Big)f\Big(\frac{k}{n+1}\Big)+\frac{k}{n}f\Big(\frac{k+1}{n+1}\Big),$$ which, after summing over $k$ (to have "$<$" still, we must assume $n>1$), gives $$\sum_{k=1}^{n-1}f\Big(\frac{k}{n}\Big)<\sum_{k=1}^{\color{red}{n}}\Big(1-\frac{k}{n}\Big)f\Big(\frac{k}{n+1}\Big)+\sum_{k=\color{red}{1}}^{n}\frac{k-1}{n}f\Big(\frac{k}{n+1}\Big)=\frac{n-1}{n}\sum_{k=1}^{n}f\Big(\frac{k}{n+1}\Big),$$ i.e. $\color{blue}{n^2 S_n<(n^2-1)S_{n+1}}$. Returning to the question, if we put $f(x)=1/\sqrt{x}$, we get $x_n=S_n+1/n$ and $n^2 x_n<(n^2-1)x_{n+1}+1$; since $x_{n+1}>1$, the latter implies the needed $x_n<x_{n+1}$.
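A quick numerical check (my addition): computing $x_n$ directly confirms the strict increase, and suggests the limit $2$, since $\sum_{k\le n}1/\sqrt{k}\sim 2\sqrt{n}$:

```python
from math import sqrt

def x(n):
    # x_n = (1/sqrt(n)) * sum_{k=1}^{n} 1/sqrt(k)
    return sum(1/sqrt(k) for k in range(1, n + 1)) / sqrt(n)

vals = [x(n) for n in range(1, 2001)]
assert all(a < b for a, b in zip(vals, vals[1:]))  # strictly increasing
print(vals[0], vals[1], vals[-1])  # starts at 1, increases toward 2
```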
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Integral $I(\tau_1,a,b) = \int_{\tau_1}^\infty d\tau_2\ \frac{1}{b^2 + \tau_2^2} \left(\pi - 2 \tan^{-1} \frac{\tau_2}{a} \right)^2$ I am looking at the integral: $$I(\tau_1,a,b) = \int_{\tau_1}^\infty d\tau_2\ \frac{1}{b^2 + \tau_2^2} \left(\pi - 2 \tan^{-1} \frac{\tau_2}{a} \right)^2, \tag{1}$$ where $\tau_1$ is real and $a, b$ real positive. So far I was only able to solve the following special case: $$I(\tau_1,a,a) = \int_{\tau_1}^\infty d\tau_2\ \frac{1}{a^2 + \tau_2^2} \left(\pi - 2 \tan^{-1} \frac{\tau_2}{a} \right)^2 = \frac{1}{6 a} \left(\pi - \tan^{-1} \frac{\tau_1}{a} \right)^3, \tag{2}$$ but I cannot find a way to crack $(1)$. I am mostly interested by the case $b=1$.
The given integral can be presented in the form of $$I(\tau,a,b) = 4\int\limits_\tau^\infty \operatorname{arccot}^2\dfrac{\tau_2}a \,\dfrac{\mathrm d\tau_2}{\tau_2^2+b^2}.\tag1$$ Substitution $$\varphi=\operatorname{arccot} \dfrac{\tau_2}a,\quad \tau_2 = a\cot\varphi,\quad\mathrm d\tau_2=-a(\cot^2\varphi+1)\,\mathrm d\varphi,$$ allows to write $$I(\tau,a,b) = 4a\int\limits_0^{\operatorname{arccot}{\Large\frac\tau a}} \dfrac{\cot^2\varphi+1}{a^2\cot^2\varphi+b^2}\,\varphi^2\,\mathrm d\varphi = 4a\int\limits_0^{\operatorname{arccot}{\Large\frac\tau a}} \dfrac{\varphi^2\,\mathrm d\varphi}{a^2\cos^2\varphi+b^2\sin^2\varphi}.\tag2$$ Then $$I(\tau,a,a) = \dfrac4{3a}\varphi^3\bigg|_0^{{\Large\frac\pi2}-\arctan\Large\frac\tau a} = \dfrac1{6a}\left(\pi-2\arctan\frac\tau a\right)^3\tag3$$ (see Wolfram Alpha test). At the same time, the antiderivative of $(2)$ is $$\color{brown}{\mathbf{\begin{align} &J(\varphi,a.b)=4a\int \dfrac{\varphi^2}{a^2\cos^2\varphi+b^2\sin^2\varphi}\,\mathrm d\varphi = \dfrac{2\varphi}b \big(\operatorname{Li_2}(r e^{2i\varphi})-\operatorname{Li_2}(^1\!/_{\large r}\, e^{2i\varphi})\big)\\[4pt] &+\dfrac{i}{b}\big(\operatorname{Li_3}(re^{2i\varphi})-\operatorname{Li_3}(^1\!/_{\large r}\, e^{2i\varphi})\big) +\dfrac{2i}{b}\varphi^2\ln\dfrac{1-re^{2i\varphi}}{1-\,^1\!/_{\large r}\,e^{2i\varphi}} +\operatorname{const}, \end{align}}}\tag4$$ (see also Wolfram Alpha calculations), where $\operatorname{Li_n}$ is the Polylogarithm, $$r=\dfrac{b-a}{a+b}.\tag5$$ Therefore, \begin{align} &\color{brown}{\mathbf{I(\tau,a,b)= J\left(\operatorname{arccot}\frac \tau a,a,b\right) -J(0,a,b).}}\tag6 \end{align} If $a=11,\ b=17,\ \tau = 5,$ then $r = \frac3{14},$ $$I(\tau,a,b)\approx 0.10429\,46124\,85634,$$ (see also Wolfram Alpha result), $$J\left(\operatorname{arccot}\frac \tau a,a,b\right)\approx 0.32355\,66131\,49807 -0.26227\,19119\,51703\,i$$ (see also Wolfram Alpha result), $$J(0,a,b)\approx0.21926\,20006\,64173 - 0.26227\,19119\,51703\,i$$ (see also Wolfram Alpha 
result), $$J\left(\operatorname{arccot}\frac \tau a,a,b\right)-J(0,a,b)\approx 0.10429\,46124\,85634\approx I(\tau,a,b).$$ Test results confirm obtained closed form for the given integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Existence of a simple homeomorphism Definitions: Define $Q=[0,1] \times [0,1]$ with the product topology and $C=\{(s,t) \in Q:s=0\} \cup \{(s,t) \in Q:t=1\} \cup \{(s,t) \in Q:s=1\}$. Define $Q/C$ the quotient space by the relation: $a,b \in Q$ satisfies $a \mathscr{R} b$ if and only if $a=b$ or $a,b \in C$. with the quotient topology. Define $D^2=\{z \in \mathbb{C}: ||z|| \leq 1\}$. Problem: There exists an homeomorphism $f : Q/C \to D^2$ such that for all $s \in [0,1]$ holds that $f(s,0)=e^{2\pi i s}$. It is clear to me that the statement above is true but I would like to prove it formally. I do not understand if this problem can be solved with elementary tools or not. I really don't know how to start to prove it. Furthermore, we can find such homeomorphism explicitly? Or we can just prove that it exists?
We can find such an $f$ explicitly by constructing a map $g \colon Q \to D^2$ that factors through $Q/C$ and induces the desired homeomorphism. Such a $g$ must map every horizontal line segment $t = \operatorname{const}$ to a closed Jordan curve with base point $1$, and these loops must sweep the whole disk and be disjoint except for the base point. The most easily seen way to do it is $$g(s,t) = t + (1-t)\cdot e^{2\pi i s}\,.$$ It is clear that $g$ is a continuous map from $Q$ to $D^2$ that factors through the quotient $Q/C$. It is also clear that $g(s,t) = 1 \iff (s,t) \in C$. That any two of the loops $s \mapsto g(s,t)$ for fixed $t$ have only the base point in common is easily seen, and that every point of $D^2$ lies in the image of $g$ also isn't difficult to check. Thus $f \colon Q/C \to D^2$ induced by $g$ is a continuous bijective map, by compactness it follows that $f$ is a homeomorphism. The condition $f(s,0) = e^{2\pi i s}$ also holds by construction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3678941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimizing $\| x A - B\|_F^2$ With a Constraint I have previously asked an optimization question Here. I will reiterate the question and simply add a constraint to it: I have 2 known grayscale images (256×256 matrices) $A$ and $B$ and want to find the unknown scalar variable $x$ so that: $$\text{Minimize} \quad \|xA-B\|_F^2$$ $$\text{So That All Of The Final Image Values (256x256) Remain Between} \quad 0 \leq xA-B \leq 256 $$ The previous solution works just fine but I want to know how to optimize $x$ considering the constraint. Is this considered a Linear Programming problem? Thanks.
This can be solved as a linearly-constrained linear least squares problem, for which there are many available numerical solvers, accessible from a variety of computer languages and packages. Alternatively, by not squaring the Frobenius norm, it can be solved as a Second Order Cone Problem (SOCP), again for which there are a number of solvers. Here is how it can be formulated in CVX (under MATLAB), which will transform it to an SOCP, and call a solver to solve it. cvx_begin variable x minimize(norm(x*A-B,'fro')) subject to 0 <= x*A - B <= 256 cvx_end
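A further observation (my addition, a sketch not taken from the answer above): because the unknown here is a single scalar, no general-purpose solver is strictly needed. The elementwise constraints $0 \le xA - B \le 256$ intersect to an interval $[\ell, u]$ for $x$, and the convex quadratic objective is minimized by clamping the unconstrained optimum $x^\ast = \langle A,B\rangle/\|A\|_F^2$ to that interval:

```python
import numpy as np

def solve_scalar(A, B, lo=0.0, hi=256.0):
    """Minimize ||x*A - B||_F^2 subject to lo <= x*A - B <= hi (elementwise)."""
    a, b = A.ravel(), B.ravel()
    l, u = -np.inf, np.inf
    pos, neg, zero = a > 0, a < 0, a == 0
    # entrywise: lo <= x*a_i - b_i <= hi; dividing by a_i flips the bounds if a_i < 0
    if pos.any():
        l = max(l, ((lo + b[pos]) / a[pos]).max())
        u = min(u, ((hi + b[pos]) / a[pos]).min())
    if neg.any():
        l = max(l, ((hi + b[neg]) / a[neg]).max())
        u = min(u, ((lo + b[neg]) / a[neg]).min())
    if zero.any() and not ((lo <= -b[zero]) & (-b[zero] <= hi)).all():
        raise ValueError("infeasible")
    if l > u:
        raise ValueError("infeasible")
    x_star = a @ b / (a @ a)          # unconstrained least-squares minimizer
    return min(max(x_star, l), u)     # clamp: objective is convex in x

A = np.array([[1.0, 1.0]])
B = np.array([[0.0, 10.0]])
print(solve_scalar(A, B))  # 10.0 (unconstrained optimum 5 is infeasible; clamp to the interval)
```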
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The role of topology in continuity Suppose we have two sets $M$ and $N$ endowed with topologies $T_1$ and $T_2$ respectively. Consider a (continuous) map $L: M\to N$. Now if it is possible that we define another topology on M in such a way that the same function becomes discontinuous( is it even possible?), what role did the topology play in the continuity? Is continuity an intrinsic part of the underlying set or it depends on the topology that we define on the set?
Continuity depends on the topology. For example, if N has the trivial topology, or M has the discrete topology, then any map from M to N will be continuous. On the other hand, if N has the discrete topology, then the only continuous functions are locally constant functions. And if M has the trivial topology, the only continuous functions are constant functions EDIT: the only continuous functions are those whose image has the trivial subspace topology, including but not limited to, constant functions. The rough intuition is that the coarser the topology on M or the finer the topology on N, the "harder" it is for a function from M to N to be continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
What is the expected number of peaks in an array of length $n$ with each number randomly drawed from $[0, 1]$? Suppose we randomly draw a real number from $[0, 1]$ for $n$ times, and get an array: $a_1, a_2, ..., a_n$. If for some integer $i$ such that $2<=i<=n-1$ and $a_{i-1}<a_i$ and $a_i>a_{i+1}$, we call $a_i$ a peak in the array. The question is: what is the expected number of peaks in the array $a_1, a_2, ..., a_n$? For example, in array $[0.6,0.3,0.7,0.4,0.9,0.8]$, there are $2$ peaks: $0.7$ and $0.9$.
Let $A_i =\{\text {Peak at}\ i\}$, and let $Y_i=I_{A_i}$ be the indicator function, where $2\le i\le n-1$. Since the $X_i$ are i.i.d., $P(A_i)=P(\{X_{i-1}\lt X_i\}\cap\{X_i\gt X_{i+1}\})$ is independent of $i$, and an easy calculation or an argument by symmetry shows this probability is $1/3$. The number of peaks is $W =\sum Y_i$ and $$E(W) = \sum E(Y_i) = \sum P(A_i) =\frac{n-2}{3}$$
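Because ties have probability $0$ and peaks depend only on the relative order of the $a_i$, the expectation $E(W)=(n-2)/3$ can be verified exactly by averaging the peak count over all $n!$ orderings (my addition):

```python
from itertools import permutations
from fractions import Fraction
from math import factorial

def expected_peaks(n):
    total = sum(
        sum(1 for i in range(1, n - 1) if p[i-1] < p[i] > p[i+1])
        for p in permutations(range(n))
    )
    return Fraction(total, factorial(n))

for n in range(3, 8):
    assert expected_peaks(n) == Fraction(n - 2, 3)
print("E[peaks] = (n-2)/3 verified exactly for n = 3..7")
```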
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Polar set of $S=\{s\in\mathbb{R}^n:1^Ts≤1,\; s≥0\}$ The definition of a polar set, that I am working with, is: $M^{pol}=\{x\in\mathbb{R}^n:m^Tx≤1, \forall m\in M\} $ Now I want to find the polar $S^{pol}$ of $S$, where $S=\{s\in\mathbb{R}^n:$1$^Ts≤1,\; s≥0\}$ S is the set of vectors in $\mathbb{R}^n$ for which each $s_i$ is bigger than $0$ and for which the sum of all $s_i$ is smaller than $1$. Would then $S^{pol}$ be all vectors $x\in\mathbb{R}^n$ such that 1$^Tx≤1$? Therefore: $S^{pol}=\{x\in\mathbb{R}^n:$1$^Tx≤1\}$ I don't think my answer is wrong but I feel it may be incomplete and I am unsure how to prove that for $x$ with 1$^Tx>1:$ $s^Tx>1$, therefore I would appreciate any help with this problem.
$S^{pol}=U:=\{u\in \mathbb R^n \ | \ u_i\le 1\ \text{for all } i\}$ (equivalently, $\max_{i:\,u_i\ge 0} u_i\le 1$) for $S=\{s\in \mathbb R^n \ | \ 1^Ts \le 1\text{ and } s\ge 0\}$: (1) $U\subseteq S^{pol}:\quad$ let $u\in U$ and $s\in S$. Defining $u_+:=\max(u,0)$ and $u_-:=\max(-u,0)$, both componentwise, and using $s\ge0$, we have: $$u^Ts=(u_+-u_-)^Ts=u_+^Ts-u_-^Ts\le u_+^Ts=\sum_{i:\,u_i\ge 0}u_is_i\le \Big(\max_i \max(u_i,0)\Big)\sum_i s_i\le 1,$$ since both factors in the last product are nonnegative and at most $1$. Thus, $u\in S^{pol}$. (2) $S^{pol}\subseteq U: \quad$ let $p\in S^{pol}$. For each $i$, the unit vector $s=\mathbf{e}_{i}$ belongs to $S$, so $p_i=p^Ts\le 1$. Hence, $p\in U$. In particular your guess $\{x:1^Tx\le1\}$ is too small: it misses, e.g., vectors with a very negative coordinate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding coefficients of quadratic formula given certain properties Given the quadratic equation $ax^2+bx+c$ , how do you find $a,b$ and $c$ given you know: the gradient of the curve at the $y$ intercept the equation of the tangent at point $P$ the gradient of the normal at point $P$ I haven’t included the specific equations and stuff as I would like to work it out myself, I just need to know what steps to take.
Let $f(x) = ax^2 + bx + c$ be the curve $C$ in question. If we know the gradient of $C$ at the $y$-intercept (i.e. where $x=0$) is $m_0$, then that is the same as saying we know that $f'(0) = m_0$. If we know the equation of the line $L$ which is tangent to $C$ at the point $P$ with co-ordinates $(x_P, y_P)$, then we know two things: * *$P$ lies on $C$, and thus we know that $f(x_P) = y_P$; and *the equation describing $L$ can be written in the form $y = m_P x + d$, whence $m_P$ is the gradient of this line. Thus, we know that $f'(x_P) = m_P$. Knowing the gradient of the normal to $C$ at $P$ provides no extra information, since this is guaranteed to be equal to $\frac{-1}{m_P}$. You can substitute $f(x)$ and $f'(x)$ into the three "we know that" statements given above, and use the resulting system of three linear equations to solve for $a$, $b$, and $c$.
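As a concrete illustration (my addition, with made-up numbers): suppose the gradient at the $y$-intercept is $3$ and the tangent at $P=(1,6)$ has gradient $7$. The three linear equations then pin down $a$, $b$, $c$:

```python
import sympy as sp

a, b, c, x = sp.symbols('a b c x')
f = a*x**2 + b*x + c

eqs = [
    sp.Eq(sp.diff(f, x).subs(x, 0), 3),   # gradient at the y-intercept
    sp.Eq(f.subs(x, 1), 6),               # P = (1, 6) lies on the curve
    sp.Eq(sp.diff(f, x).subs(x, 1), 7),   # gradient of the tangent at P
]
sol = sp.solve(eqs, [a, b, c])
print(sol)  # {a: 2, b: 3, c: 1}
```

As the answer notes, the gradient of the normal at $P$ would add no new equation, being determined as $-1/7$.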
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
An intuitive explanation of the multiplication identity of modular arithmetic Is there any intuitive explanation for the validity of this identity? If $a\equiv b\pmod n$ and $c\equiv d\pmod n$, then $$a\times c \equiv b\times d \pmod n$$ I want something which appeals to someone's who just beginning to learn number theory.
This is not difficult to prove and understand. The idea is to use a simpler result two times. If $a\equiv b\pmod{n} $ then we have $ka\equiv kb\pmod{n} $. This should be obvious to understand as it is an immediate consequence of the definition of congruence. Just note that if $a-b$ is a multiple of $n$ then $k(a-b) $ is also a multiple of $n$. Using this we can multiply first congruence with $c$ and second congruence with $b$ to get $$ac\equiv bc\pmod{n}, bc\equiv bd\pmod {n} $$ and these two give you $ac\equiv bd\pmod{n} $ (this is based on the fact that if two numbers are multiple of $n$ then so is their sum). I don't think the above would count as a lot of hairy algebra and ideally should be accessible to anyone who has basic idea of factors and multiples (typically 6th standard math suited for children of 10-11 years in Indian curriculum).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3679822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the value of- $\lim_{x \rightarrow -\infty}\sum_{k=1}^{1000} \frac{x^k}{k!}$ QUESTION: Find the value of- $$\lim_{x \rightarrow -\infty}\sum_{k=1}^{1000} \frac{x^k}{k!}$$ MY ANSWER: Since $x→-\infty$ therefore the summation will look like- $$ \frac{-\infty^1}{1!}+ \frac{-\infty^2}{2!} + \frac{-\infty^3}{3!} + \frac{-\infty^4}{4!}$$ Now, we know that $${-\infty^{even}}=\infty$$ And $1000$ can be broken down into 500 pairs. And since $k$ is finite the terms will look like $$ -\infty + \infty -\infty +\infty-.....$$ Hence, we can conclude that the overall limit of the summation will tend to zero. Am I correct? Thank you for your help.
One may observe that, for $x\neq0$, $$ \sum_{k=1}^{1000} \frac{x^k}{k!}=\color{red}{\frac{x^{1000}}{1000!}}\times\left(\color{red}1+\frac{1000!}{x\times999!}+\frac{1000!}{x^2\times998!}+\cdots+ \frac{1000!}{x^{999}\times1!}\right) $$ giving that $$ \begin{align} \lim_{x \rightarrow -\infty}{\sum_{k=1}^{1000} \frac{x^k}{k!}} &=\lim_{x \rightarrow -\infty} \color{red}{\frac{x^{1000}}{1000!}}\times \lim_{x \rightarrow -\infty}\left(\color{red}1+\frac{1000!}{x\times999!}+\frac{1000!}{x^2\times998!}+\cdots+ \frac{1000!}{x^{999}\times1!}\right) \\\\&=\lim_{x \rightarrow -\infty} \color{red}{\frac{x^{1000}}{1000!}}\times \color{red}1 \\\\&=\infty. \end{align} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Interesting limit involving hordes of logarithms. $$\lim_{x\to 0^{+}}\ln (x\ln a)\ln \left(\dfrac{\ln ax}{\ln\frac xa}\right)=6$$ Find the value of $a$. Answer: $e^3$ This struck me as an interesting problem and wanted to know if there are any more methods to solve it except the one I have used (written as an answer below). Thanks! Edit: for those requiring additional context, I have SOLVED the problem and my solution is posted below as an answer. I am merely looking for other ways to solve this question.
$$\frac{\log ax}{\log\dfrac xa}=1+\frac{2\log a}{\log\dfrac xa}$$ so that, as $x\to0^+$, $$\log\frac{\log ax}{\log\dfrac xa}\sim\frac{2\log a}{\log\dfrac xa}.$$ Since $\ln(x\ln a)\sim\log x\sim\log\dfrac xa$ as $x\to0^+$, the denominator cancels against the factor $\ln(x\ln a)$ in the product, and we are left with $$2\log a=6,$$ i.e. $a=e^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Where does this relation come from? $n^2-1 \approx (n-1)2$ for $n-1 \ll 1$ I came across the relation in the title in a physics textbook and wondered how I get to it. $$n^2-1 \approx (n-1)2$$ for $$n-1\ll 1$$ Could anybody maybe help me out? Thanks!
$n^2-1=(n+1)(n-1)$ so if $n$ is very close to $1$, then $n+1$ is very close to $2$ and $n^2-1$ is very close to $2(n-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that $\begin{vmatrix} 1+a^2-b^2 & 2ab & -2b \\ 2ab & 1-a^2+b^2 & 2a \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix} = (1+a^2+b^2)^3$ Show that $\begin{vmatrix} 1+a^2-b^2 & 2ab & -2b \\ 2ab & 1-a^2+b^2 & 2a \\ 2b & -2a & 1-a^2-b^2 \end{vmatrix} = (1+a^2+b^2)^3$ Performing the operations $C_1 \rightarrow C_1-bC_3$ and $C_2 \rightarrow C_2+aC_3$, I got $$\Delta =\begin{vmatrix} 1+a^2+b^2&0&-2b \\ \ 0&1+a^2+b^2&2a \\\ b(1+a^2+b^2)&-a(1+a^2+b^2)&1-a^2-b^2 \end {vmatrix}$$ So I got the term $1+a^2+b^2$, but I am not able to pull it out. What should I do next?
$$\begin{aligned}\Delta &=\begin{vmatrix} 1+a^2+b^2&0&-2b \\ \ 0&1+a^2+b^2&2a \\\ b(1+a^2+b^2)&-a(1+a^2+b^2)&1-a^2-b^2 \end {vmatrix}\\ &=(1+a^2+b^2)\begin{vmatrix} 1&0&-2b \\ \ 0&1+a^2+b^2&2a \\\ b&-a(1+a^2+b^2)&1-a^2-b^2 \end {vmatrix}\\ &=(1+a^2+b^2)^2\begin{vmatrix} 1&0&-2b \\ \ 0& 1 &2a \\\ b &-a &1-a^2-b^2 \end {vmatrix}\\ &=(1+a^2+b^2)^2[(1-a^2-b^2 + 2a^2) + b(2b)]\\ &= (1+a^2+b^2)^3 \end{aligned}$$ As you can factor a coefficient in a column and then develop the determinant along the first column.
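The identity is also easy to confirm symbolically (my addition):

```python
import sympy as sp

a, b = sp.symbols('a b')
M = sp.Matrix([
    [1 + a**2 - b**2, 2*a*b,            -2*b],
    [2*a*b,           1 - a**2 + b**2,   2*a],
    [2*b,            -2*a,               1 - a**2 - b**2],
])
assert sp.expand(M.det()) == sp.expand((1 + a**2 + b**2)**3)
print("det = (1 + a^2 + b^2)^3")
```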
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solutions to differential equation $\nabla f(x)=f(x)x$ Let us consider the differential equation given by $\nabla f(x)=f(x)x$, where $f:\mathbb{R}^n\to \mathbb{R}$. I have found that $f(x)=K\exp(|x|^2/2)$ is a solution, but are all solutions of this form?
We can use the method of integrating factor to prove that this is the only solution. Moving everything to one side, notice that $$\exp\left(-\frac{|x|^2}{2}\right)(\nabla f - x f) = 0 \implies \nabla \left(\exp\left(-\frac{|x|^2}{2}\right) f\right) = 0$$ so $\exp(-|x|^2/2)f$ is constant on $\mathbb{R}^n$, which means $$f(x) = K\exp\left(\frac{|x|^2}{2}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
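A finite-difference sanity check that $f(x)=K\exp(|x|^2/2)$ satisfies $\nabla f = f\,x$ (illustrative sketch; `num_grad`, the value of $K$, and the test point are my choices):

```python
import math

def f(x, K=2.0):
    """f(x) = K exp(|x|^2 / 2)."""
    return K * math.exp(sum(t * t for t in x) / 2)

def num_grad(x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [0.3, -0.7, 0.5]
for gi, xi in zip(num_grad(x), x):
    assert abs(gi - f(x) * xi) < 1e-4      # gradient of f equals f(x) * x
```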
Combinatorial problem about $n \times 3$-matrix Let $A$ be an $n \times 3$-matrix so that for each number $k \in \{1,2,3\}$ there are exactly $n$ entries $a_{ij}$ s.t. $a_{ij}=k$. Is it possible to rearrange the entries of each column of $A$, such that in every row every number appears at least once? I was trying to prove this via the pigeonhole principle, but I didn't get far. Clearly, there are $3n \choose n,n,n$ possibilities for such matrices, but this didn't help.
The answer is yes! Proof by construction: * *leave the first column as is *start with row 1 and repeat the following until you can't: * *find the two missing numbers for the current row in the 2nd and 3rd columns (among the rows not yet finished; order doesn't matter) and swap them up into that row. *move to the next row If the algorithm finishes the last row, it has constructed your answer. Suppose instead that it gets stuck after completing row $i < n$: some digit $d$ needed by row $i+1$ cannot be found in either the 2nd or 3rd column among rows $i+1,\dots,n$. Since there are $n$ copies of digit $d$ in total, and the first $i$ completed rows contain exactly $i$ of them, $n-i$ copies remain, and they must all sit in column 1 of rows $i+1,\dots,n$. But then every remaining row, including row $i+1$, already has digit $d$ in its first column and so does not need it, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3680939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
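The greedy sweep described in the answer can be implemented directly; a hypothetical sketch in Python (function names are mine), with a check that every output row contains 1, 2, 3 and that each column is only permuted:

```python
import random

def rearrange(A):
    """Permute entries within columns 2 and 3 of the n x 3 matrix A
    (column 1 is kept as is) so every row contains each of 1, 2, 3.
    Implements the greedy sweep from the answer."""
    n = len(A)
    col = [[A[i][j] for i in range(n)] for j in range(3)]
    for i in range(n):
        d1, d2 = sorted({1, 2, 3} - {col[0][i]})
        for x, y in ((d1, d2), (d2, d1)):   # which column gets which digit is free
            j = next((r for r in range(i, n) if col[1][r] == x), None)
            k = next((r for r in range(i, n) if col[2][r] == y), None)
            if j is not None and k is not None:
                col[1][i], col[1][j] = col[1][j], col[1][i]
                col[2][i], col[2][k] = col[2][k], col[2][i]
                break
        else:   # the counting argument in the answer rules this out
            raise RuntimeError("stuck: contradicts the proof")
    return [[col[0][i], col[1][i], col[2][i]] for i in range(n)]

random.seed(1)
for n in range(1, 8):
    entries = [1, 2, 3] * n            # exactly n copies of each digit
    random.shuffle(entries)
    A = [entries[3 * i : 3 * i + 3] for i in range(n)]
    B = rearrange(A)
    assert all(sorted(row) == [1, 2, 3] for row in B)
    for j in range(3):                 # each column of B is a permutation of that column of A
        assert sorted(r[j] for r in B) == sorted(r[j] for r in A)
```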
Inequality of $|e^z - 1|$ I'm doing a problem related to Rouché's Theorem and to prove it I need to show the following inequality: $|e^z - 1| < e - 1$ if $|z| < 1$. I've seen that $|e^z - 1| \leq e + 1$ and that $|e^z| \leq e$ if $|z| < 1$. But I can't manage to get that $|e^z - 1| < e - 1$. I would be really grateful if someone could help me. Thanks in advance!
$|e^z-1|=|\sum_{n \ge1}\frac{z^n}{n!}| \le \sum_{n \ge1}|\frac{z^n}{n!}| < \sum_{n \ge1}\frac{1}{n!}=e-1$ since $|z| < 1$ so $|z|^n <1$ for all $n$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3681262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
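A numerical spot-check of the series bound (illustrative only; the sampling stays slightly inside the unit disk to avoid floating-point edge effects near $|z|=1$):

```python
import cmath
import math
import random

def check(z):
    """Verify |e^z - 1| < e - 1 for a point with |z| < 1."""
    assert abs(z) < 1
    return abs(cmath.exp(z) - 1) < math.e - 1

random.seed(0)
for _ in range(1000):
    r = random.uniform(0, 0.999)        # radius strictly inside the disk
    t = random.uniform(0, 2 * math.pi)
    assert check(cmath.rect(r, t))
```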
Determine the real part of $\left(\cfrac{z-1}{z+1}\right)$ Determine the $Re\left(\cfrac{z-1}{z+1}\right)$ if $z = \cos\theta + i \sin\theta$. I'm not quite sure whether the right approach would be to stick with the polar form and substitute it into $\left(\cfrac{z-1}{z+1}\right)$ and perhaps use $1 = \sin^2(\theta) + \cos^2(\theta)$. Or maybe I should use Euler's form.
Note that $z=\cos\theta + i\sin\theta$ implies $|z|=1$. Now, for $z \neq -1$, $$\frac{z-1}{z+1} = \frac{z\bar z-\bar z}{z\bar z + \bar z} = \frac{1-\bar z}{1+\bar z} = - \overline{ \left(\frac{z-1}{z+1}\right)},$$ so, $\frac{z-1}{z+1}$ is purely imaginary, i.e., the real part is $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3681479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
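Numerically, the real part does vanish on the unit circle away from $z=-1$ (illustrative sketch; the tolerance is just a float-rounding allowance):

```python
import math

def real_part(theta):
    """Re((z - 1)/(z + 1)) for z = cos(theta) + i sin(theta)."""
    z = complex(math.cos(theta), math.sin(theta))
    return ((z - 1) / (z + 1)).real

for k in range(1, 360):
    theta = math.radians(k)
    if abs(math.cos(theta) + 1) < 1e-9:   # z = -1: the quotient is undefined
        continue
    assert abs(real_part(theta)) < 1e-9
```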
Computing an integral on a sphere I'm struggling with the following integral: $$\int \limits_{S_t(x)} y_1^2 + y_2^2 + y_3^2 \, dA(y),$$ where $S_t(x) = \{y \in \mathbb{R}^3: \lvert y - x\rvert = t \}$. I understand the integral above as the average value of the integrand over $S_t(x)$. However I don't know how to compute it. I would appreciate any tips.
Translating the integral we get that $$\int_{S_t(x)} y^2\:dA(y) = \int_{S_t(0)} (y+x)^2 \:dA(y)$$ Then use the fact that $dA(y) = t^2d\Omega$ $$\int_{S_t(0)} y^2+2y\cdot x + x^2 \:dA(y) = \int_{S^2} t^4 + x^2t^2\:d\Omega + \int_{S_t(0)} 2y\cdot x \:dA(y) = 4\pi t^2(x^2+t^2)$$ where the second integral vanished because the integrand was odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3681960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
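The closed form $4\pi t^2(|x|^2+t^2)$ can be checked with a crude midpoint-rule quadrature in spherical coordinates (illustrative sketch; the grid size and test values are arbitrary choices of mine):

```python
import math

def sphere_integral(x, t, m=300):
    """Midpoint-rule approximation of the integral of |y|^2 over the
    sphere |y - x| = t (x a 3-vector, t > 0), in spherical coordinates."""
    total = 0.0
    dth, dph = math.pi / m, 2 * math.pi / m
    for a in range(m):
        th = (a + 0.5) * dth
        for b in range(m):
            ph = (b + 0.5) * dph
            y = (x[0] + t * math.sin(th) * math.cos(ph),
                 x[1] + t * math.sin(th) * math.sin(ph),
                 x[2] + t * math.cos(th))
            # surface element dA = t^2 sin(theta) dtheta dphi
            total += sum(c * c for c in y) * t * t * math.sin(th) * dth * dph
    return total

x, t = (0.5, -1.0, 2.0), 1.5
exact = 4 * math.pi * t ** 2 * (sum(c * c for c in x) + t ** 2)
assert abs(sphere_integral(x, t) - exact) / exact < 1e-3
```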
Divisor summatory function for Odd and Even numbers With Dirichlet's Divisor summatory function the mean divisor growth can be determined. $$D(x)=\frac{1}{x} \sum_{n=1}^x d(n)= \log(x)+2 \gamma -1+ \mathcal{O}\!\left(\frac{1}{\sqrt{x}}\right)$$ Where $\gamma$ is Euler's constant and $d(n)$ is the divisor function. Do there exist expressions where the Divisor summatory function is expressed in: $D(x=odd)$ and $D(x=even)$? Background for question error growth "Wave Divisor Function": Error in Divisor Function Modelled With Waves
Not sure if that's the question, but if one wants expressions for $d_{odd}(x)=\sum_{2k+1 \le x}d(2k+1)$ and $d_{even}$, one can easily get them from the recurrences $d_{even}(x)=2d_r(x/2)-d_r(x/4)$ and $d_{odd}(x)=d_r(x)-2d_r(x/2)+d_r(x/4)$, where $d_r(x)=xD(x)$ is the usual divisor sum; these follow from the relation $d(4q)=2d(2q)-d(q)$, valid for every integer $q \ge 1$. In particular one gets $D_{odd}=\frac{1}{4}\log x+\frac{1}{4}(2\gamma -1 + \log 4)+ O(1/\sqrt x)$ and $D_{even}=\frac{3}{4}\log x+\frac{3}{4}(2\gamma -1) -\frac{1}{4}\log 4+ O(1/\sqrt x)$. One can also apply Perron's formula to get the principal terms: for the usual divisor sum it is the residue at $1$ of $\zeta^2(s)x^s/s$, while here one can immediately notice that $\zeta_{odd}(s)=(1-2^{-s})\zeta(s)$, so squaring gives $\zeta_{odd}(s)^2x^s=\zeta^2(s)x^s-2\zeta^2(s)(x/2)^s+\zeta^2(s)(x/4)^s$, from which the above follow easily too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3682506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
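The recurrence $d(4q)=2d(2q)-d(q)$ and the even/odd splits can be verified by brute force for small $x$ (illustrative sketch; here $d_r$ is written as `D` and $d$ is tabulated with a divisor sieve):

```python
N = 1000
d = [0] * (N + 1)
for k in range(1, N + 1):          # divisor-count sieve: d[n] = number of divisors of n
    for m in range(k, N + 1, k):
        d[m] += 1

def D(x):
    """Summatory divisor function d_r(x) = sum_{n <= floor(x)} d(n)."""
    return sum(d[n] for n in range(1, int(x) + 1))

# the recurrence behind the split: d(4q) = 2 d(2q) - d(q)
for q in range(1, N // 4 + 1):
    assert d[4 * q] == 2 * d[2 * q] - d[q]

# even/odd splits of the summatory function
for x in range(1, 301):
    d_even = sum(d[n] for n in range(2, x + 1, 2))
    d_odd = sum(d[n] for n in range(1, x + 1, 2))
    assert d_even == 2 * D(x / 2) - D(x / 4)
    assert d_odd == D(x) - 2 * D(x / 2) + D(x / 4)
```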
Confused about the meaning behind making x = 2 in the binomial theorem So the binomial theorem states $(1+x)^n = \sum^{n}_{k=0}$$n \choose k$$x ^ k$. Now I understand that each term of the sum represents the number of ways to arrange 1 and $x$ out of $n$ choices, so there ends up being $k$ number of $x$'s. I understand what happens when instead of a $1$ and $x$ there's an $a$ and $b$, if there's $k$ $a$'s, there's $n-k$ $b$'s. I also understand what it means when $x = 1$, so the right hand side ends up being the summation of all the $n \choose k$, which means the left side equals the number of all subsets you can get from a set, $2^n$, which is already known/proved by this theorem. Now what I don't get, is when $x$ is anything other than 1. What does it mean when $x = 2$ for example? What meaning does the left side $= 3^n$ have? What meaning does $\sum^{n}_{k=0}$$n \choose k$$2 ^ k$. have? I get to some extent it will just be what the polynomial equals when you plug in $x = 2$, but does it have any special meaning that has to do with subsets and size of sets, etc.? Thanks!
Notice that $$\sum _{k=0}^n\binom{n}{k}2^k=\sum _{k=0}^n\binom{n}{n-k}2^{n-k}=\sum _{k=0}^n\binom{n}{k}2^{n-k}=\sum _{k=0}^n\binom{n}{k}\left (\sum _{l=0}^{n-k}\binom{n-k}{l}\right )=\sum _{k=0}^n\sum _{l=0}^{n-k}\binom{n}{k}\binom{n-k}{l}.$$ Check the right-hand side of this line: it means choosing the positions for a first object (if, a priori, you know that you want $k$ of those) and then choosing the positions of a second object (if you know you want $l$ of them). Notice that there is a third object, that you will place in the remaining $n-(k+l)$ positions. This is why this is $3^n$, which encodes the ways to put $3$ kinds of objects in $n$ slots. What you say about $(a+b)^n$ is a little wrong in the sense that $a$ is like a box of objects with $a$ different types and $b$ is another box in which you have $b$ different types of objects. In the particular case of $3^n$ you have in the first box the type $1$ and in the second box the types $2$ and $3.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3682611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
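Both the identity $\sum_k\binom nk 2^k = 3^n$ and its combinatorial reading (length-$n$ strings over three symbols, grouped by how many positions avoid the first symbol) can be checked by enumeration; a small illustrative sketch:

```python
from math import comb
from itertools import product

# the identity itself
for n in range(9):
    assert sum(comb(n, k) * 2 ** k for k in range(n + 1)) == 3 ** n

# its combinatorial meaning: 3^n counts length-n strings over {1, 2, 3};
# grouping them by the number k of positions NOT holding symbol 1
# gives the terms C(n, k) * 2^k.
n = 5
strings = list(product((1, 2, 3), repeat=n))
assert len(strings) == 3 ** n
for k in range(n + 1):
    count = sum(1 for s in strings if sum(c != 1 for c in s) == k)
    assert count == comb(n, k) * 2 ** k
```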
Finding the measure of a set using Lebesgue Integral Let's say I have a Lebesgue integrable function $f: X \to \mathbb{R}$, where $X$ is an arbitrary measure space. Now, suppose I want to find the measure of the set $$ A = \{x \in X: f(x) \geq c\} $$ for some $c \in \mathbb{R}$. Is there a way that I can express $\mu(A)$ in terms of $\int f$? Note that $A$ is measurable by definition of a measurable function. I'm not looking for a fancy way to do this; I'm new to this topic and I'm wondering whether there is an easy expression. Edit 1: If not an exact relationship, is there an inequality?
Following Cameron Williams, for $c>0$ and $f \geq 0$ an inequality (Markov's inequality) is given by $\mu(\{f\geq c\})= \int_ {\{f\geq c\}} 1\, d \mu \leq \frac{1}{c} \int f\, d\mu,$ since $\frac{f}{c} \geq 1$ on $\{f\geq c\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3682791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equivalent Definitions of a Compact Subset of a Metric Space Let $X$ be a metric space and let $K \subseteq X$. Please consider the following three statements. * *For every collection $\{V_\alpha : \alpha \in \mathcal{A}\}$ of subsets of $X$ which are both open in $X$ and cover $K$, there exists a finite subset $A$ of $\mathcal{A}$ such that $\{V_\alpha : \alpha\in A\}$ cover $K$. *For every collection $\{V_\alpha : \alpha \in \mathcal{A}\}$ of subsets of $K$ which are both open in $X$ and cover $K$, there exists a finite subset $A$ of $\mathcal{A}$ such that $\{V_\alpha : \alpha\in A\}$ cover $K$. *For every collection $\{V_\alpha : \alpha \in \mathcal{A}\}$ of subsets of $K$ which are both open in $K$ and cover $K$, there exists a finite subset $A$ of $\mathcal{A}$ such that $\{V_\alpha : \alpha\in A\}$ cover $K$. I've convinced myself that 1. and 3. are equivalent. It's also fairly automatic that 2. is implied by either 3. or 1., but I cannot seem to prove that 2. implies either 1. or 3.... Is statement 2., simply, strictly weaker that statements 1. and 3.? Or is there some trick that I'm missing? Thank you.
This is false. Take $X=\mathbb R$ and $K=\mathbb N$. There is no non-empty open set in $X$ contained in $K$ so 2) is vacuously true. But 1) is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3682955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the cubic bezier control points from end points and tangents If I have 2 end points and two unit vectors as tangents at the two end points, is it possible to find the cubic Bézier curve control points that make the curve? Is there one solution or many solutions? Visual of what I am trying to find:
There are infinitely many solutions. At the start of the curve, the given point and unit vector define a line. You can place the curve’s second control point anywhere along this line. The same reasoning applies at the end of the curve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
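A sketch of the construction in code (hypothetical helper names; I take both tangents as pointing in the direction of travel along the curve - flip a sign if yours point outward). The distances $d_0, d_1 > 0$ are the free parameters, which is exactly why there are infinitely many solutions:

```python
import math

def bezier_controls(p0, p3, t0, t1, d0=1.0, d1=1.0):
    """Inner control points of a cubic Bezier with endpoints p0, p3 and
    unit tangents t0, t1 (both taken in the direction of travel).
    d0, d1 > 0 are free: every choice gives a valid curve."""
    p1 = tuple(p + d0 * t for p, t in zip(p0, t0))
    p2 = tuple(p - d1 * t for p, t in zip(p3, t1))
    return p1, p2

def bezier_deriv(ctrl, u):
    """Derivative of the cubic Bezier with control points ctrl at parameter u."""
    p0, p1, p2, p3 = ctrl
    return tuple(
        3 * (1 - u) ** 2 * (b - a) + 6 * (1 - u) * u * (c - b) + 3 * u ** 2 * (d - c)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

p0, p3 = (0.0, 0.0), (4.0, 1.0)
t0 = (1.0, 0.0)
t1 = (math.cos(0.3), math.sin(0.3))
for d0, d1 in ((0.5, 0.5), (1.0, 2.0), (3.0, 0.25)):
    p1, p2 = bezier_controls(p0, p3, t0, t1, d0, d1)
    v0 = bezier_deriv((p0, p1, p2, p3), 0.0)   # = 3 d0 t0, parallel to t0
    v1 = bezier_deriv((p0, p1, p2, p3), 1.0)   # = 3 d1 t1, parallel to t1
    assert all(abs(a - 3 * d0 * b) < 1e-12 for a, b in zip(v0, t0))
    assert all(abs(a - 3 * d1 * b) < 1e-12 for a, b in zip(v1, t1))
```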
Showing that for $x > e^{2.5102}, 0 \le \lfloor\dfrac{1.25506(x+1)}{\ln(x+1)}\rfloor - \lfloor\dfrac{1.25506x}{\ln x}\rfloor \le 1$ Showing that for $x > e^{2.5102}, 0 \le \lfloor\dfrac{1.25506(x+1)}{\ln(x+1)}\rfloor - \lfloor\dfrac{1.25506x}{\ln x}\rfloor \le 1$ Does this argument work: (1) $\dfrac{1.25506x}{\ln x}$ is increasing for $x > e$ with since: * *$\dfrac{d}{dx}(\dfrac{1.25506x}{\ln x}) = \dfrac{1.25506\ln(x)-1.25506}{(\ln x)^2}$ *$1.25506\ln(x) - 1.25506$ is positive (2) For $x > e^{2.51012}$, since $\dfrac{1.25506x}{\ln x}$ is increasing for $ x > 1$: $$0 < \dfrac{1.25506(x+1)}{\ln(x+1)} - \dfrac{1.25506x}{\ln x} < \dfrac{1.25506(x+1)}{\ln x} - \dfrac{1.25506x}{\ln x} = \dfrac{1.25506}{\ln x} < 0.5$$ (3) There exists integers $a,b$ such that $0 \le a < 1$ and $0 \le b < 1$ such that $$\lfloor\dfrac{1.25506(x+1)}{\ln(x+1)}\rfloor - \lfloor\dfrac{1.25506(x)}{\ln(x)}\rfloor = \dfrac{1.25506(x+1)}{\ln(x+1)} - a - \dfrac{1.25506(x)}{\ln(x)} + b$$ (4) Since $-1 < b - a < 1$, it follows that: $$-1 < \lfloor\dfrac{1.25506(x+1)}{\ln(x+1)}\rfloor - \lfloor\dfrac{1.25506(x)}{\ln(x)}\rfloor < 0.5 + 1 = 1.5$$ Does the conclusion follow? Did I make a mistake? Is there a better way to establish the same conclusion?
(1)(2) are correct though you have a typo in (2). It should be $x\gt e$ instead of $x\gt 1$. (3) is not correct. If $a,b$ are integers such that $0\le a\lt 1$ and $0\le b\lt 1$, then $a=b=0$ for which $$\left\lfloor\dfrac{1.25506(x+1)}{\ln(x+1)}\right\rfloor - \left\lfloor\dfrac{1.25506(x)}{\ln(x)}\right\rfloor = \dfrac{1.25506(x+1)}{\ln(x+1)} - a - \dfrac{1.25506(x)}{\ln(x)} + b$$ does not necessarily hold since the RHS is not necessarily an integer. Let $f(x):=\dfrac{1.25506(x)}{\ln(x)}$. After getting $0\lt f(x+1)-f(x)\lt 0.5$ in (2), we can separate it into two cases : * *If there exists an integer $N$ such that $f(x)\lt N\le f(x+1)$, then $\lfloor f(x)\rfloor=N-1$ and $\lfloor f(x+1)\rfloor=N$ so $\lfloor f(x+1)\rfloor-\lfloor f(x)\rfloor=N-(N-1)=1$. *If there exists an integer $N$ such that $N\le f(x)\lt f(x+1)\lt N+1$, then $\lfloor f(x)\rfloor=\lfloor f(x+1)\rfloor=N$ so $\lfloor f(x+1)\rfloor-\lfloor f(x)\rfloor=N-N=0$. Therefore, $0\le \lfloor f(x+1)\rfloor -\lfloor f(x)\rfloor\le 1$ follows. Added : If you meant $a,b$ are real numbers, not integers, then what you did is correct. In conclusion, you didn't make any mistakes. You just had a few typos.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
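An empirical spot-check of the conclusion (not a proof; the sample points and step size are arbitrary choices of mine):

```python
import math

def f(x):
    """f(x) = 1.25506 x / ln x."""
    return 1.25506 * x / math.log(x)

lo = math.exp(2.5102)
for i in range(20000):
    x = lo + 0.37 * i                 # sample points above e^{2.5102}
    diff = math.floor(f(x + 1)) - math.floor(f(x))
    assert diff in (0, 1)
```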
Ratio of the heights of triangle is given, determine the sides of triangle. Can someone help me with this exercise? The ratio of the heights to the sides of the triangle is $v_{a}:v_{b}:v_{c}=12:5:8$. What are the lengths of the sides of the triangle? ($a,b,c=$?) Thank you! I tried using the formula for the area of the triangle, e.g. $P=\frac{a v_a}{2}$ and $P=\frac{b v_b}{2}$, and then I would get $b=\frac{12}{5} a$, and I did that for the other combinations but did not come up with anything.
Yes, your idea works! Since $$S=\frac{ah_a}{2}=\frac{bh_b}{2}=\frac{ch_c}{2},$$ we obtain: $$\frac{1}{a}:\frac{1}{b}:\frac{1}{c}=12:5:8$$ or $$a:b:c=\frac{1}{12}:\frac{1}{5}:\frac{1}{8}.$$ We need to check that $$\frac{1}{12}+\frac{1}{8}>\frac{1}{5},$$ of course. For minimal natural's $a$, $b$ and $c$ we obtain $(10,24,15).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
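The computation can be confirmed numerically: take sides proportional to the reciprocals of the heights, clear denominators, and check both the triangle inequality and that the resulting heights really are in the ratio $12:5:8$ (illustrative sketch):

```python
import math

h = (12, 5, 8)                       # v_a : v_b : v_c
scale = 120                          # common multiple of the three heights
a, b, c = (scale // hi for hi in h)  # sides proportional to 1/v_a : 1/v_b : 1/v_c
assert (a, b, c) == (10, 24, 15)

# triangle inequality holds (1/12 + 1/8 > 1/5)
assert a + b > c and b + c > a and a + c > b

# recompute the heights from the area (Heron) and check the ratio 12:5:8
s = (a + b + c) / 2
area = math.sqrt(s * (s - a) * (s - b) * (s - c))
ha, hb, hc = 2 * area / a, 2 * area / b, 2 * area / c
assert abs(ha / hb - 12 / 5) < 1e-12
assert abs(ha / hc - 12 / 8) < 1e-12
```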
Retract spaces- Material or book recomendations Good afternoon to all! I have just done a course on Algebraic Topology and I came across the definition of a retract space. Not much more is mentioned with regards to the topological retraction property in the notes I was using. I wonder what the best material to understand this concept more in depth is? Keeping in mind that I do no have an extensive knowledge on Algebraic Topology, but just the knowledge given by an introductory course on the subject. The definition of such space which I am mentioning here that is presented in the notes is the following: Let $Y \subset X$ . A map $ f: X \rightarrow Y$ is called a retraction of $X$ onto $Y$ if $f$ restricted to $Y$ is the identity map on $Y$. Then $Y$ is called a $retract$ of $X$ . Any suggestions would be appreciated!
The 'Theory of Retracts' that I referenced in the books above is a lot more involved than this (which is why I doubted your professor would introduce it) . Past a little experience, the definition you give is about as in depth as you will need to understand retracts. Spelling out your definition, let $i:A\hookrightarrow X$ be a subspace inclusion. Then $A$ is a retract of $X$ if there is a map $r:X\rightarrow A$, called a retraction, such that $$r\circ i=id_A.$$ The map $r$ need not be unique. Let $X$ be nonempty. Then the empty set $\emptyset$ is not a retract of $X$. If $x\in X$ is any point, then $\{x\}$ is a retract of $X$. Not every subspace $A\subseteq X$ is a retract. For instance $S^n\subseteq D^{n+1}$ is not a retract (can you prove this using some basic algebraic topology?). If $Y$ is also nonempty, then $X$ is a retract of $X\times Y$ and $X\sqcup Y$. If $A\subseteq X$ is a retract and $X$ is Hausdorff, then $A$ is closed in $X$ (this is an exercise in point-set topology). There is not much to them theoretically, and they tend to be more useful when recognised in particular examples. The equation $r\circ i=id_A$, through the eyes of an algebraic topologist, is what contains their real power (see the examples above, for instance).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
relation between indices of a tensor and its vectorization Assume a three-mode tensor $A \in R^{n_1 \times n_2 \times n_3}$ and its vectorization $a=vec(A)$. What is the relation between the index elements of tensor $A$ and vector $a$? Specifically, I would like to know which element of tensor $A$ corresponds to the $i$th element of vector $a$.
I'll assume that you're following this convention of vectorization. I also assume that your indexing starts at one, which (unfortunately perhaps) is a bit more common in mathematical literature on tensors. The $(i,j,k)$ entry of the tensor $A$ is mapped to the $p$th vector entry, with $$ p = 1 + (i-1) + n_1(j-1) + n_1n_2(k-1). $$ Going in the opposite direction, we find that the $p$th vector entry corresponds to the $(i,j,k)$ element of the tensor $A$, with $$ i = 1 + ([p-1] \bmod n_1), \\ j = 1 + \lfloor([p-1] \bmod n_1n_2)/n_1\rfloor = 1 + \lfloor[p - n_1n_2(k-1) - 1]/n_1\rfloor, \\ k = 1 + \lfloor (p-1)/(n_1n_2)\rfloor. $$ As the two forms of $j$ indicate, there are two general ways to compute these indices (for tensors of arbitrary order). You can either compute these indices from left to right using the modulo operator or from right to left using integer division.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
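Both directions of the column-major (Fortran-order) index map can be checked exhaustively; an illustrative sketch with 1-based indices, as in the answer:

```python
def sub2ind(i, j, k, n1, n2):
    """1-based (i, j, k) -> 1-based position in vec(A) (column-major order)."""
    return 1 + (i - 1) + n1 * (j - 1) + n1 * n2 * (k - 1)

def ind2sub(p, n1, n2):
    """Inverse map: 1-based vector position -> 1-based (i, j, k)."""
    i = 1 + (p - 1) % n1
    j = 1 + ((p - 1) % (n1 * n2)) // n1
    k = 1 + (p - 1) // (n1 * n2)
    return i, j, k

n1, n2, n3 = 3, 4, 5
p = 1
for k in range(1, n3 + 1):            # column-major: i varies fastest
    for j in range(1, n2 + 1):
        for i in range(1, n1 + 1):
            assert sub2ind(i, j, k, n1, n2) == p
            assert ind2sub(p, n1, n2) == (i, j, k)
            p += 1
```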
If $b \in \bar{A}$ then there exists a filter $G$ such that $A \in G$ and $G \longrightarrow b$ Let $E=(E,\tau)$ be a topological space and $A \subset E$. I want to prove that if $b \in \bar{A}$ then there exists a filter $G$ in $E$ such that $A \in G$ and $G \longrightarrow b$. For this, note that by definition for the closure (by filters) we have $$A \cap U \neq \emptyset,\; \forall \; U \in \mathcal{F}(b), $$ where $\mathcal{F}(b)$ denote the filter of neighborhoods of the point b. Hence, I could prove that the set $$ \mathcal{B}:=\{ A \cap U \subset E \; ; \; U \in \mathcal{F}(b)\}$$ is a basis of filter on A. Considering $G$ the filter generated by $\mathcal{B}$, I am not able to show that $A \in G$ and $G \longrightarrow b$. How to proceed?
A better notation is $$\mathcal{B} = \{A \cap U\mid U \in \mathcal{N}(b) \}$$ where $\mathcal{N}(b)$ is the neighbourhood filter at $b$. Then it's easy to see that $\mathcal{B}$ is a filter base: all sets are non-empty by the assumption $b \in \overline{A}$ and the collection is closed under (finite) intersections, as $\mathcal{N}(b)$ is. Denote by $\mathcal{F}$ the filter it generates (i.e. the supersets of the members of $\mathcal{B}$). Taking $E \in \mathcal{N}(b)$, we already have that $A \cap E = A \in \mathcal{B} \subseteq \mathcal{F}$. And to see $\mathcal{F} \to b$, let $U$ be any neighbourhood of $b$. Then by definition $A \cap U \in \mathcal{B}$ and so $U (\supseteq A \cap U) \in \mathcal{F}$. So $\mathcal{N}(b) \subseteq \mathcal{F}$ or equivalently $\mathcal{F} \to b$. Just apply the definitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3683955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Structure theorem for modules over Dedekind domains I've come across the structure Theorem for fin. gen. Modules over a Dedekind domain several times now. It was formulated to us the following way: Let $R$ be a Dedekind domain. For every element $\alpha \in C(R)$, let a representative $I_{\alpha}$ in the group of fractional ideals be chosen. Then, to a fin. gen. $R$-Mod $M$ there are unique natural numbers $r$ and $s$, $\alpha \in C(R)$ with $\alpha = 0$ if $s = 0$, and proper nonzero ideals $I_r \subset ... \subset I_1$ such that $M \cong R/I_1 \oplus... \oplus R/I_r, \text{if }s = 0$ $M \cong R/I_1 \oplus ... \oplus R/I_r \oplus R^{s-1} \oplus I_{\alpha}$, if $s > 0$ Now I want to give a description of the finitely generated module over the Dedekind domain according to the structure theorem. In each case the elements of the class group are listed for you, each given by means of a representative ideal. Dedekind domain $\mathbb{Z}[\sqrt{79}]$, class group of order 3, representatives $(1), (3,\sqrt{79}+1), (3, \sqrt{79}-1)$. The Module $M = I \oplus I$, where $I = (3, \sqrt{79}+1)$. I actually don't know where to begin. I also have not found any examples on the web. Any of those would be very appreciated.
To do your exercise, you need one extra piece of information, a theorem of Steinitz: Let $r$ and $s$ be non-negative integers, and let $I_1,\ldots,I_r$ and $J_1,\ldots,J_s$ be ideals in your Dedekind domain $R$. Then one has an isomorphism $\bigoplus_{m=1}^r I_m\cong \bigoplus_{n=1}^sJ_n$ of $R$-modules if and only if $r=s$ and the equality $\prod_m [I_m] = \prod_n [J_n]$ holds in the class group of $R$. For a proof, see Curtis and Reiner, Representation Theory of Finite Groups and Associative Algebras, Wiley (1962), Section 22. You should be able to solve your exercise with this extra information.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are subgraphs of a minimum spanning tree, also minimum spanning trees? Clarification: MST = minimum spanning tree Say I take some complete, weighted graph $G$ - I create an MST of the graph, then split the MST into 2 trees $T_1$ and $T_2$ by removing some edge connecting them called $uv$. My question is, that if I were to take the subgraph of all nodes constituting $T_1$ and $T_2$ and the edges originally connecting them in $G$, would the MSTs of these subgraphs be equivalent to the original trees $T_1$ and $T_2$? That is, are the subgraphs of an MST equal to the MSTs of the subgraphs?
The answer is yes. Suppose the minimum spanning tree of one of the subgraphs had a smaller weight than the corresponding tree $T_1$ or $T_2$. Adding the edge $uv$ back would then combine it into a spanning tree of $G$ with a smaller total weight than the original minimum spanning tree of $G$. This is a contradiction: by definition, a minimum spanning tree minimizes the sum of the weights of its edges, so no lighter spanning tree of $G$ can exist. Therefore $T_1$ and $T_2$ must already be minimum spanning trees of their subgraphs: each is a connected tree, and the sum of the weights of its edges cannot be decreased.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684209", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
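The argument can be tested by brute force on a small complete graph with distinct weights (distinct weights make the MST unique, so the edge sets can be compared directly). This is an illustrative sketch; the Kruskal implementation and helper names are mine:

```python
import random
from itertools import combinations

def kruskal(nodes, edges):
    """Minimum spanning tree via Kruskal; edges are (weight, u, v) triples.
    Returns the set of chosen edges."""
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    tree = set()
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.add((w, u, v))
    return tree

def component(start, tree_edges):
    """Vertex set of the component of `start` in the given forest."""
    adj = {}
    for w, u, v in tree_edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

random.seed(7)
n = 7
nodes = list(range(n))
weights = random.sample(range(1, 1000), n * (n - 1) // 2)  # distinct => unique MST
edges = [(w, u, v) for w, (u, v) in zip(weights, combinations(nodes, 2))]

mst = kruskal(nodes, edges)
e0 = min(mst)                      # remove some MST edge uv
rest = mst - {e0}
T1 = component(e0[1], rest)
T2 = set(nodes) - T1
for part in (T1, T2):
    induced = [e for e in edges if e[1] in part and e[2] in part]
    old = {e for e in rest if e[1] in part and e[2] in part}
    assert kruskal(part, induced) == old   # T_i is the MST of its induced subgraph
```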
Can a function $f$ have an antiderivative even though its indefinite integral $F(x) = \int_{a}^{x} f(t)\ dt$ is not one? The fundamental theorem of calculus states that if $f:[a,b] \to \mathbb{R}$ is integrable and $F(x) = \int_{a}^{x} f(t)\ dt$, then $F'(x) = f(x)$ at every point $x$ at which $f$ is continuous. This means that if $f$ is integrable, $F'(x) = f(x)$ almost everywhere. If $f$ is continuous on $[a,b]$, then $F$ is an antiderivative of $f$, since $F'(x) = f(x)$ holds for all $x \in [a,b]$. But what if $f$ has a discontinuity at some $x \in [a,b]$? In this case, it is not necessarily true that $F'(x) = f(x)$, and so we cannot necessarily conclude that $F$ is an antiderivative of $f$. Does this mean that there is no antiderivative of $f$? Is it possible for $f$ to have an antiderivative but the indefinite integral $F$ is not an antiderivative of $f$? I know that if $f$ has a jump discontinuity, then $f$ can have no antiderivative (since the derivative of a function must satisfy the intermediate value property), but what if we have some other type of discontinuity?
Consider the map$$\begin{array}{rccc}F\colon&\Bbb R&\longrightarrow&\Bbb R\\&x&\mapsto&\begin{cases}x^2\sin\left(\frac1x\right)&\text{ if }x\ne0\\0&\text{ otherwise.}\end{cases}\end{array}$$Then $F$ is differentiable and$$(\forall x\in\Bbb R):F'(x)=\begin{cases}-\cos\left(\frac1x\right)+2x\sin\left(\frac1x\right)&\text{ if }x\ne0\\0&\text{ otherwise.}\end{cases}$$So, $F'$ is discontinuous at $0$. But $F$ is an antiderivative of $F'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
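The example can be checked numerically (illustrative sketch): the difference quotient at $0$ tends to $0$, the formula for $F'$ matches a central difference away from $0$, and $F'$ oscillates between values near $\pm 1$ arbitrarily close to $0$, so it is discontinuous there:

```python
import math

def F(x):
    return x * x * math.sin(1 / x) if x != 0 else 0.0

def Fprime(x):
    """The derivative computed in the answer."""
    if x == 0:
        return 0.0
    return 2 * x * math.sin(1 / x) - math.cos(1 / x)

# F'(0) = 0: the difference quotient F(h)/h = h sin(1/h) tends to 0
for h in (1e-3, 1e-5, 1e-7):
    assert abs((F(h) - F(0)) / h) < 2 * abs(h)

# away from 0 the formula matches a central difference
for x in (0.3, -0.7, 1.1):
    step = 1e-6
    num = (F(x + step) - F(x - step)) / (2 * step)
    assert abs(num - Fprime(x)) < 1e-4

# but F' has no limit at 0: near x_n = 1/(n pi) it hits values close to +-1
vals = [Fprime(1 / (n * math.pi)) for n in range(100, 104)]
assert max(vals) > 0.9 and min(vals) < -0.9
```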
Prove $\frac{x^2+yz}{\sqrt{2x^2(y+z)}}+\frac{y^2+zx}{\sqrt{2y^2(z+x)}}+\frac{z^2+xy}{\sqrt{2z^2(x+y)}}\geqq 1$ For $x,y,z>0$ and $\sqrt{x} +\sqrt{y} +\sqrt{z} =1.$ Prove that$:$ $$\frac{x^2+yz}{\sqrt{2x^2(y+z)}}+\frac{y^2+zx}{\sqrt{2y^2(z+x)}}+\frac{z^2+xy}{\sqrt{2z^2(x+y)}}\geq 1$$ My solution$:$ Let $x=a^2,\,y=b^2,\,z=c^2$ then $a+b+c=1,$ we need to prove$:$ $$\sum\limits_{cyc} \frac{a^4+b^2 c^2}{a^2 \sqrt{2(b^2+c^2)}} \geqq 1\Leftrightarrow \sum\limits_{cyc} \frac{a^4+b^2 c^2}{a^2 \sqrt{2(b^2+c^2)}} \geqq a+b+c$$ By AM-GM$:$ $$\text{LHS} = \sum\limits_{cyc} \frac{a^4+b^2 c^2}{a \sqrt{2a^2(b^2+c^2)}} \geqq \sum\limits_{cyc} \frac{2(a^4+b^2c^2)}{a(2a^2+b^2+c^2)} \geqq a+b+c$$ Last inequality is true by SOS$:$ $$\sum\limits_{cyc} \frac{2(a^4+b^2c^2)}{a(2a^2+b^2+c^2)}-a-b-c=\sum\limits_{cyc} {\frac {{c}^{2} \left( a-b \right) ^{2} \left( a+b \right) \left( 2\, {a}^{2}+ab+2\,{b}^{2}+{c}^{2} \right) }{a \left( 2\,{a}^{2}+{b}^{2}+{c }^{2} \right) b \left( {a}^{2}+2\,{b}^{2}+{c}^{2} \right) }} \geqq 0$$ PS: Is there another solution for the original inequality, or for the last inequality? Thank you!
Another way. Since $$\left(yz-\frac{1}{2}xy-\frac{1}{2}zx, zx-\frac{1}{2}yz-\frac{1}{2}xy,xy-\frac{1}{2}zx-\frac{1}{2}yz\right)$$ and $$\left(\frac{1}{x\sqrt{2(y+z)}},\frac{1}{y\sqrt{2(z+x)}},\frac{1}{z\sqrt{2(x+y)}}\right)$$ have the same ordering, by AM-GM and Chebyshev we obtain: $$\sum_{cyc}\frac{x^2+yz}{x\sqrt{2(y+z)}}-1=\sum_{cyc}\left(\frac{x^2+yz}{x\sqrt{2(y+z)}}-\sqrt{x}\right)=$$ $$=\sum_{cyc}\frac{x^2+yz-x\sqrt{2x(y+z)}}{x\sqrt{2(y+z)}}\geq \sum_{cyc}\frac{x^2+yz-\frac{1}{2}x(2x+y+z)}{x\sqrt{2(y+z)}}=$$ $$=\sum_{cyc}\frac{yz-\frac{1}{2}xy-\frac{1}{2}zx}{x\sqrt{2(y+z)}}\geq\frac{1}{3}\sum_{cyc}\left(yz-\frac{1}{2}xy-\frac{1}{2}zx\right)\sum_{cyc}\frac{1}{x\sqrt{2(y+z)}}=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3684495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
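A numerical spot-check of the original inequality under the constraint $\sqrt x+\sqrt y+\sqrt z=1$, including the equality case $x=y=z=1/9$ (illustrative sketch; the sampling scheme is my choice):

```python
import math
import random

def lhs(x, y, z):
    """Left-hand side of the original inequality."""
    return (
        (x * x + y * z) / math.sqrt(2 * x * x * (y + z))
        + (y * y + z * x) / math.sqrt(2 * y * y * (z + x))
        + (z * z + x * y) / math.sqrt(2 * z * z * (x + y))
    )

random.seed(0)
for _ in range(2000):
    # random positives with sqrt(x) + sqrt(y) + sqrt(z) = 1
    a, b, c = (random.random() + 1e-9 for _ in range(3))
    s = a + b + c
    x, y, z = (a / s) ** 2, (b / s) ** 2, (c / s) ** 2
    assert lhs(x, y, z) >= 1 - 1e-9

# equality at x = y = z = 1/9 (each term equals 1/3)
assert abs(lhs(1 / 9, 1 / 9, 1 / 9) - 1) < 1e-12
```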