$Q \cap N_G(P)=Q\cap P$ where $P,Q$ are both Sylow $p$-groups We just need to prove that $Q \cap N_G(P) \subset P$. This reduces to showing that $gPg^{-1}=P$, $g \notin P$, $g\in Q \implies g \in P$. So how exactly does $g$ being a member of another Sylow $p$-group force it to lie in $P$?
Apply Sylow Theory in $N_G(P)$: $Q \cap N_G(P)$ is a $p$-group in $N_G(P)$, and hence must be contained in some Sylow $p$-subgroup of $N_G(P)$. Since $P \unlhd N_G(P)$, $P$ is the unique one and so $Q \cap N_G(P) \subseteq P$. (Note that $Q$ does not even have to be a Sylow $p$-group here, just being a $p$-subgroup suffices.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
An expression of scalar curvature In the book "Hamilton's Ricci flow" by B. Chow, P. Lu and L. Ni, on page 99, Exercise 2.8, the statement says: If $(M,h)$ is a Riemannian surface and $g=uh$ for some function $u$ on $M$, then $$R_g=u^{-1}(R_h-\Delta_h\log u),$$ where $R_h$ and $R_g$ are the scalar curvatures of the metrics $h$ and $g$ respectively. I want to know how this expression is derived.
One place to find a proof is in my Introduction to Riemannian Manifolds (2nd ed.), Theorem 7.30. (The notation is a little different from yours, but it should be easy to translate between the two notations.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that the limit of non-eigenvector goes to infinity Let $A$ be a $3$ by $3$ real matrix with the triple eigenvalue $1$. Also, further suppose its eigenspace corresponding to $1$ is only of dimension $1$. Thus, we can find a basis of $\mathbb{R}^3$, denoted by $v$, $w_1$, $w_2$, where $v$ is an eigenvector of $A$. Then I have to show that $\lim_{n \to \infty} \|A^n w_1\|=\lim_{n \to \infty} \|A^n w_2\|=\infty$. How is this possible? I do not have any idea how the norm goes to infinity... Could anyone please help me?
The theorem is correct. Without loss of generality, by using the Schur decomposition we can write the most general form of the matrix $A$ as $$A=\begin{bmatrix}1&a&b\\0&1&c\\0&0&1\end{bmatrix}$$ Since the eigenspace of $A$ has dimension $1$ and the eigenvector is $v=\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}$, the eigenvector equation $$\begin{bmatrix}1&a&b\\0&1&c\\0&0&1\end{bmatrix}\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}=\begin{bmatrix}v_1\\v_2\\v_3\end{bmatrix}$$ reduces to $$av_2+bv_3=0\\cv_3=0,$$ and a one-dimensional solution space requires $v_1\ne0$ and $v_2=v_3=0$. This happens exactly when $a,c\ne 0$. Therefore the only eigenvector of $A$ (up to scaling) becomes $$\begin{bmatrix}1\\0\\0\end{bmatrix}$$ Now take $w_1=\begin{bmatrix}0\\w_{12}\\w_{13}\end{bmatrix}$ and $w_2=\begin{bmatrix}0\\w_{22}\\w_{23}\end{bmatrix}$ where $w_2\ne kw_1$. One can prove using induction that $$A^nw_1=\begin{bmatrix}naw_{12}+nbw_{13}+{n(n-1)\over 2}acw_{13}\\w_{12}+ncw_{13}\\w_{13}\end{bmatrix}\\A^nw_2=\begin{bmatrix}naw_{22}+nbw_{23}+{n(n-1)\over 2}acw_{23}\\w_{22}+ncw_{23}\\w_{23}\end{bmatrix}$$ Since $a,c\ne 0$, it is easy to check that $$\|A^nw_1\|\to \infty\\\|A^nw_2\|\to \infty$$ $\blacksquare$ P.S. The general form of $A$ has been chosen upper-triangular since, from the Schur decomposition, for any arbitrary vector $v$ we have $$\|A^nv\|{=\|QU^nQ^Hv\|\\=\|QU^nw\|\\=\|U^nw\|}$$ where $w\triangleq Q^Hv$ and $U$ and $Q$ are upper-triangular and unitary respectively.
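A quick numerical confirmation of the induction formula and the norm growth, using the hypothetical sample values $a=c=1$, $b=0$, $w_{12}=w_{13}=1$ (both $a$ and $c$ nonzero, as required):

```python
import numpy as np

# Sketch: verify the closed form for A^n w_1 and that ||A^n w_1|| grows,
# for the sample values a = c = 1, b = 0.
a, b, c = 1.0, 0.0, 1.0
A = np.array([[1, a, b],
              [0, 1, c],
              [0, 0, 1]])
w1 = np.array([0.0, 1.0, 1.0])   # w_12 = w_13 = 1, first coordinate 0

def closed_form(n, w12, w13):
    # [n*a*w12 + n*b*w13 + n(n-1)/2 * a*c*w13, w12 + n*c*w13, w13]
    return np.array([n*a*w12 + n*b*w13 + n*(n - 1)/2*a*c*w13,
                     w12 + n*c*w13,
                     w13])

for n in (1, 2, 10, 50):
    assert np.allclose(np.linalg.matrix_power(A, n) @ w1,
                       closed_form(n, 1.0, 1.0))

norms = [np.linalg.norm(np.linalg.matrix_power(A, n) @ w1)
         for n in (10, 100, 1000)]
assert norms[0] < norms[1] < norms[2]   # the norm grows without bound
```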
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Find $(x_1,\dots,x_n) \in (\mathbb{R}_+^*)^n$ to minimize $\sum_{k=1}^{n}x_k\prod_{i=1}^{k}{(1 + x_i)}$ such that $\sum_{k=1}^{n}{x_k} = 1$ I want to find $(x_1,\dots,x_n) \in (\mathbb{R}_+^*)^n$ to minimize $$ \sum_{k=1}^{n}{x_k \prod_{i=1}^{k}{(1 + x_i)}} $$ with the following constraint $\displaystyle\sum_{k=1}^{n}{x_k} = 1$.
Not an answer. We could try Lagrange multipliers. Take the Lagrangian $$\mathcal L (\textbf{x},\lambda) = \sum_{k=1}^n x_k \prod _{j=1}^k (1+x_j) + \lambda \left (\sum _{k=1}^n x_k -1\right ) =: f(\textbf{x}) + \lambda g(\textbf{x}). $$ Find the candidate solution(s) by finding stationary point(s), i.e. $\nabla\mathcal L= 0$. This gives the system $$\frac{\partial}{\partial x_i}f = (2x_i+1)\prod _{j=1}^{i-1}(1+x_j) + \sum _{k=i+1}^n x_k \prod_{\substack{j=1 \\ j\neq i}}^{k} (1+x_j) = -\lambda $$ together with $g(\textbf{x})=0$, where for $i=1$ the first summand is simply $2x_1+1$. Unsure how to solve this analytically.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Milnor Number for holomorphic map germ Definition: Let $f : (\mathbb{C}^{n}, p) \longrightarrow (\mathbb{C}^{n}, q)$ be a holomorphic map germ. The multiplicity of $f$ at $p$, or Milnor number of $f$ at $p$, denoted $\mu_{p}(f)$, is the dimension of the $\mathbb{C}$-linear space $\mathcal{Q}_{f}$. Here, we have: 1) $\mathcal{Q}_{f} = \dfrac{\mathcal{O}_{p}}{\mathcal{I}_{f}}$, where $\mathcal{O}_{p}$ denotes the local ring of germs of holomorphic functions at $p \in \mathbb{C}^{n}$ and $\mathcal{I}_{f}$ is the ideal in $\mathcal{O}_{p}$ generated by $f_{1}, \cdots , f_{k}$ $(f : (\mathbb{C}^{n}, p) \longrightarrow (\mathbb{C}^{k}, q))$. Consider the germ $f : (\mathbb{C}^{3}, p) \longrightarrow (\mathbb{C}^{3}, 0)$ with $p = (i, 0, 0)$ and, for $z = (z_{1}, z_{2}, z_{3})$, the coordinate functions defined by: 2) $f_{1}(z) = z_{2} + z_{1}z_{3}$, $f_{2}(z) = - z_{2}z_{3}$, $f_{3}(z) = z_{1}^{2} + z_{2}^{2} + 1$. I'm having trouble calculating the dimension of the $\mathbb{C}$-linear space in the definition above. Any help is very welcome. Thank you very much.
Expand the function as a power series around $(i,0,0)$ $$ f_1 = z_2 +z_1z_3 =z_2 +(z_1-i+i)z_3 = z_2 +iz_3 + (z_1-i)z_3\\ f_2 = -z_2z_3 \\ f_3 = z_1^2+z_2^2+1 = (z_1-i+i)^2+z_2^2+1 = 2i(z_1-i) + (z_1-i)^2+z_2^2 $$ $\mathcal{O}_{(i,0,0)}= \mathbb{C}\{z_1-i, z_2, z_3\}$ is the ring of convergent power series in the variables $z_1-i, z_2, z_3$. We see that $z_2$ does not belong to the ideal $(f_1,f_2,f_3)$, so its class in $\mathcal{Q}_f$ is not zero. Now take $\mathcal{Q}_f/(\overline{z_2})$. We have that $$ \mathcal{Q}_f/(\overline{z_2}) = \left(\frac{\mathbb{C}\{z_1-i, z_2, z_3\}}{(z_2 +iz_3 + (z_1-i)z_3,-z_2z_3, 2i(z_1-i) + (z_1-i)^2+z_2^2)} \right)/(\overline{z_2})= \frac{\mathbb{C}\{z_1-i,z_2 , z_3\}}{(z_2 +iz_3 + (z_1-i)z_3,-z_2z_3, 2i(z_1-i) + (z_1-i)^2+z_2^2,z_2)} = \frac{\mathbb{C}\{z_1-i, z_3\}}{(iz_3 + (z_1-i)z_3, 2i(z_1-i) + (z_1-i)^2)} = \frac{\mathbb{C}\{z_1-i, z_3\}}{(z_3, z_1-i)} = \mathbb{C} $$ Hence $\mathcal{Q}_f = \mathbb{C}\oplus \mathbb{C}\overline{z_2}$, and therefore $\mu_p(f)=2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why are there $2^n-1$ terms in the inclusion-exclusion formula of $n$ sets? Why are there $2^n-1$ terms in the inclusion-exclusion formula of $n$ sets? An example of what I mean by inclusion-exclusion formula is this: There are three sets (i.e. $n$ $=$ $3$): $A, B,$ and $C$. $|A \cup B \cup C| = |A| +|B|+|C|-|A\cap B| - |A\cap C| - |B \cap C| +|A \cap B \cap C| $ There are $2^3-1 =7$ terms in the right hand side of the equation. This seems to be true in general, but I'm not sure why. It's probably something obvious I'm missing, can anyone give me a hint?
This is because you have to take these sets $1$ by $1$, then $2$ by $2$, &c. and ultimately $n$ by $n$, which makes all nonempty subsets of the set $\{A_1,A_2,\dots, A_n\}$, and because there are $2^n$ subsets of a set with $n$ elements (including the empty subset).
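A small Python check of both counts: one term per nonempty subset, and the $n=3$ identity itself on sample sets:

```python
from itertools import combinations

# One inclusion-exclusion term per nonempty subset of {A_1,...,A_n},
# so 2^n - 1 terms in total.
def num_terms(n):
    return sum(len(list(combinations(range(n), k))) for k in range(1, n + 1))

assert num_terms(3) == 2**3 - 1   # the 7 terms of |A ∪ B ∪ C|

# Verify inclusion-exclusion itself on a concrete n = 3 example.
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
sets = [A, B, C]
total = 0
for k in range(1, 4):
    for combo in combinations(sets, k):
        total += (-1) ** (k + 1) * len(set.intersection(*combo))
assert total == len(A | B | C)
```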
{ "language": "en", "url": "https://math.stackexchange.com/questions/3332999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Solve the equation: $y^3 + 3y^2 + 3y = x^3 + 5x^2 - 19x + 20$ A question asks Solve the equation: $y^3 + 3y^2 + 3y = x^3 + 5x^2 - 19x + 20$ for positive integers $x$ and $y$. I tried factoring the LHS by adding $1$ to both sides so we get $(y+1)^3$ in the LHS. But I couldn't get any factorisation for the RHS, neither could think of any other ways to proceed. How to proceed? Thank you.
You are asking for $$ x^3 + 5 x^2 - 19 x + 21 = (y+1)^3 $$ For large enough positive $x$ (you need to find an explicit lower bound for $x$), $$ (x+1)^3 < x^3 + 5 x^2 - 19 x + 21 < (x+2)^3, $$ so the middle expression lies strictly between consecutive cubes and cannot be a cube. Then check the small values of $x$ remaining. For the first inequality: $x^3 + 3 x^2 + 3x + 1 < x^3 + 5 x^2 - 19 x + 21,$ or $0 < 2 x^2 -22x + 20,$ or $x^2 - 11 x + 10 > 0.$ Since $x^2-11x+10=(x-1)(x-10)$, this is true for $x \geq 11$; either draw a picture or factor. The other inequality is $x^3 + 5 x^2 - 19 x + 21 < x^3 + 6 x^2 + 12 x + 8,$ or $0 < x^2 + 31 x - 13.$ This one is true for integers $x \geq 1.$ So, check the original problem for $x = 0,1,2,\dots,10.$
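A brute-force companion to this strategy: note that $2x^2-22x+20=2(x-1)(x-10)$ vanishes at $x=10$, so the sandwich bound holds from $x=11$ on, and the search of small cases turns up two solutions, $(x,y)=(1,1)$ and $(10,10)$:

```python
# Check the sandwich bound for x >= 11 and search the small cases.
def icbrt(n):
    # integer cube root of a positive integer
    r = round(n ** (1 / 3))
    while r ** 3 > n:
        r -= 1
    while (r + 1) ** 3 <= n:
        r += 1
    return r

solutions = []
for x in range(0, 1000):
    v = x**3 + 5*x**2 - 19*x + 21
    r = icbrt(v)
    if r ** 3 == v and r - 1 > 0:        # need y = r - 1 positive
        solutions.append((x, r - 1))
    if x >= 11:
        assert (x + 1) ** 3 < v < (x + 2) ** 3   # strictly between cubes

assert solutions == [(1, 1), (10, 10)]
```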
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Calculate $P(X>Y^2)$ Calculate $P(X>Y^2)$ given that $$f(x,y)=6(x-y)$$ for all $(x,y)$ such that $$0\le y\le x\le1$$ My solution: $$6\cdot\int_{0}^{y^2}\int_{0}^{x}(x-y)dydx$$ which in turn results in $y^6$. However I have no way of knowing whether or not I have actually solved this correctly and hence arrived at the right answer. Is my solution correct? And how do I verify my answer?
Refer to the graph: $\hspace{3cm}$ The total probability is: $$P(\underbrace{0\le Y\le X\le 1}_{\text{the gray region}})=P(0\le Y\le X \ \cap \ 0\le X\le 1)=\int_0^1\int_0^x6(x-y)dydx=1.$$ The required probability is: $$P(X>Y^2)=P(Y^2<X)=P(0\le Y^2<X\le 1)=P(\underbrace{0\le Y<\sqrt{X}\le 1}_{\text{the gray and orange regions altogether}})=\\ P(\underbrace{0\le Y\le X\le 1}_{\text{the gray region}})+P(\underbrace{0\le \color{red}{X< Y}\le \sqrt{X}\le 1}_{\text{the orange region}})=1+0=1,$$ because: $$f(x,y)=0, \color{red}{X<Y} \Rightarrow P(X<Y)=0.$$
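Assuming SciPy is available, both integrals can be confirmed numerically; inside the support, $y\le x\le 1$ already forces $y^2\le y\le x$, so the extra indicator changes nothing outside a measure-zero set:

```python
from scipy.integrate import dblquad

# dblquad integrates func(y, x) over a <= x <= b, gfun(x) <= y <= hfun(x).
# Total probability over the support 0 <= y <= x <= 1:
total, _ = dblquad(lambda y, x: 6 * (x - y), 0, 1, lambda x: 0, lambda x: x)
assert abs(total - 1) < 1e-8          # f is a valid density

# P(X > Y^2): the indicator x > y^2 is 1 a.e. on the support.
p, _ = dblquad(lambda y, x: 6 * (x - y) * (x > y * y),
               0, 1, lambda x: 0, lambda x: x)
assert abs(p - 1) < 1e-6
```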
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is every homogeneous space with $G$ action isomorphic to $G/H$ for some closed subgroup $H$? This is a statement in Lang, Real and Functional Analysis, Chpt XII, sec 4. The space below means a homogeneous space equipped with a $G$ action, where $G$ is some topological group. "Such a space is isomorphic to $G/H$ with some closed subgroup $H$." $\textbf{Q:}$ Why is such an $H$ always closed? Say the space is $X$. Then $G\to Aut(X)$ is a group homomorphism. (Is this even a homomorphism of topological groups?) In particular, why must the kernel of the previous map be closed? What topological assumption has to be made on $Aut(X)$? One can assume $X$ is Hausdorff. From $G/H\cong X$, I need $H$ to be closed.
Given a continuous, transitive action $\alpha:G\times X\to X$ of a topological group $G$ on a Hausdorff (or merely $T_1$) space $X$, the $H$ that's used in the statement you quoted is just the stabilizer of any chosen point $x_0\in X$. That is, $H=\{g\in G:\alpha(g,x_0)=x_0\}$. This $H$ is closed because it is the pre-image, under the continuous map $g\to\alpha(g,x_0)$ of the singleton $\{x_0\}$ (which is closed because $X$ is a $T_1$-space). (Note that, if we had chosen some other $x_1\in X$ instead of $x_0$, we would have gotten a different closed subgroup $H'$ of $G$, but it would be conjugate to $H$. Indeed, since the action of $G$ on $X$ is transitive, there is some element $t\in G$ with $\alpha(t,x_0)=x_1$, and then $H'=tHt^{-1}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Almost sure convergence to 0 implies probability convergence to 0 I've seen a proof that almost sure convergence implies convergence in probability, but I want to ask whether or not the following "proof" for the 0 case is correct: $X_n$ converges almost surely to $X$ if $\mathbb{P}(\lim_{n\to\infty}X_n = X) = 1$. Given that $\lim_{n\to\infty}X_n = 0$ then $X_n$ converges to $X = 0$ almost surely, then I prove that it converges in probability to 0 by considering for $\epsilon > 0$, $$\lim_{n\to\infty}\mathbb{P}(|X_n - 0| > \epsilon) \\ = \lim_{n\to\infty}\mathbb{P}(|X_n| > \epsilon)\\ = \mathbb{P}(\lim_{n\to\infty} |X_n| > \epsilon)\\ = \mathbb{P}(0 > \epsilon) \\ = 0 $$ since $\epsilon > 0$. So since I've shown the probability goes to zero, it converges in probability. Is this how it's done?
Here's a way to salvage your proof. Note that $P(|X_n|>\epsilon) = E(1_{|X_n|>\epsilon})$. Since $X_n \to 0$ a.s, we have $1_{|X_n|>\epsilon} \to 0$ a.s. Since $1_{|X_n|>\epsilon}\leq 1$, the dominated convergence theorem applies and yields $\lim_n E(1_{|X_n|>\epsilon}) = E(0) = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to prove $\sum_{n=-\infty}^ \infty {\rm sinc}\bigl( \pi(t-n)\bigr) = 1$? Thank you in advance for your help. So, I found on this website that $\sum_{n=-\infty}^{\infty} {\rm sinc}( \pi n)= 1$. But I could not find any way to prove it. I know it's about Fourier series, but I don't know how to proceed... Does anyone know how to prove $\sum_{n=-\infty}^{\infty} {\rm sinc}\bigl( \pi(t-n)\bigr) = 1$? Thank you!
After getting help from @reuns, here is the full demonstration. Fix $t$ and consider $f(x)=e^{2i\pi tx}$, so that $f(0)=e^0=1$. We treat $f$ as a function of period $T=1$. With $C_{n}(f)=\frac{1}{T}\int_{-T/2}^{T/2}f(x)e^{-i2\pi\frac{n}{T}x}dx$ we get $C_{n}(f)=\int_{-1/2}^{1/2}f(x)e^{-i2n\pi x}dx$. Using $f(x)$ from above: $C_{n}(f)=\int_{-1/2}^{1/2} e^{2i\pi tx} \cdot e^{-2i\pi nx} dx =\int_{-1/2}^{1/2} e^{2i\pi (t-n)x}dx =\left [\frac{e^{2i\pi (t-n)x}}{2i\pi (t-n)} \right ]^{1/2}_{-1/2} =\frac{e^{i\pi (t-n)} - e^{-i\pi (t-n)}}{2i\pi (t-n)}$ From Euler's formula we know $\sin(x) = \frac{1}{2i}(e^{ix} - e^{-ix})$, so $C_n(f)= \frac{\sin (\pi(t-n))}{\pi (t-n)}$. We also know that $\frac{\sin(x)}{x}=\operatorname{sinc}(x)$. Since $f$ satisfies the Dirichlet conditions, its Fourier series converges pointwise: $f(x)=\sum_{n=-\infty }^{\infty}C_n(f)e^{\frac{inx2\pi}{T}}$ And $f(0)=1$, so $f(0)=\sum_{n=-\infty }^{\infty}C_n(f)e^{0}=\sum_{n=-\infty }^{\infty}\frac{\sin(\pi(t-n))}{\pi(t-n)}=1$ ==> $\sum_{n=-\infty }^{\infty}\frac{\sin(\pi(t-n))}{\pi(t-n)}=\sum_{n=-\infty }^{\infty}\operatorname{sinc}(\pi(t-n))=1$
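A numerical sanity check of the identity. Note that NumPy's `np.sinc` uses the normalized convention `np.sinc(x)` $=\sin(\pi x)/(\pi x)$, so the term $\sin(\pi(t-n))/(\pi(t-n))$ is exactly `np.sinc(t - n)`; the truncation should be symmetric, since the individual terms decay only like $1/n$:

```python
import numpy as np

# Symmetric partial sums of sum_n sinc(pi (t - n)); the paired tail
# terms cancel to O(1/N^2), so modest N already gives high accuracy.
def sinc_sum(t, N=20000):
    n = np.arange(-N, N + 1)
    return np.sum(np.sinc(t - n))

for t in [0.0, 0.3, 0.5, 2.7]:
    assert abs(sinc_sum(t) - 1.0) < 1e-6
```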
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Find the greatest common divisor of $2^m+1$ and $2^n+1$ where $m,n$ are positive integers. I am confused by a question that asks for the greatest common divisor of $2^m+1$ and $2^n+1$ ($m,n$ positive integers), but I don't really know how to find it. I am pretty sure that the greatest common divisor of $2^m-1$ and $2^n-1$ ($m,n$ positive integers) is $2^{\gcd\left(m,n\right)}-1$, and I can even prove it by the Euclidean algorithm. However, it is hard to use that here, so I want you guys to help me. Thanks! P.S. I created an Excel sheet and I observed the answer (maybe?) from it, but I can't prove or disprove it. Here is my conclusion from the Excel sheet: $$\gcd\left(2^m+1,2^n+1\right)=\begin{cases} 2^{\gcd\left(m,n\right)}+1 \\ 1 \end{cases}\begin{matrix} \text{when }m,n\text{ contain the exact same power of }2 \\ \text{otherwise} \end{matrix}$$ Hope it will help me and you guys in solving this question :D The link of the Excel sheet
Your conjectured formula is correct; here is the proof. For integer $m,n\ge 0$, let $d(m,n):=\gcd(2^m+1,2^n+1)$. Assuming for definiteness $m\ge n$, we have \begin{align*} d(m,n) &= \gcd(2^m-2^n,2^n+1) \\ &= \gcd(2^n(2^{m-n}-1),2^n+1) \\ &= \gcd(2^{m-n}-1,2^n+1) \\ &= \gcd(2^{m-n}+2^n,2^n+1). \end{align*} If $m\ge 2n$, then this can be taken a little further, by factoring out $2^n$, to get $$ d(m,n) = \gcd(2^{m-2n}+1,2^n+1); $$ if $m\le 2n$, then factoring out $2^{m-n}$ instead of $2^n$ we get $$ d(m,n) = \gcd(2^{2n-m}+1,2^n+1). $$ In any case, we have the recursive relation $$ d(m,n) = d(|m-2n|,n),\quad m\ge n. \tag{$\ast$} $$ Let $\nu(k)$ denote the $2$-adic valuation of an integer $k\ne 0$; that is, $\nu(k)$ is the largest integer such that $2^{\nu(k)}$ divides $k$. I claim that (1) If $m>n>0$, then $\max\{|m-2n|,n\}<\max\{m,n\}$; (2) if $m>0$ or $n>0$, then $\gcd(|m-2n|,n)=\gcd(m,n)$; (3) if $m\ne 2n$, then $\nu(m)=\nu(n)$ if and only if $\nu(m-2n)=\nu(n)$. The first two assertions are easy to verify. For the last one, let $k:=\nu(n)$ and $l:=\nu(m)$ and consider two cases: If $k>l$ then $2^{l+1}\nmid m-2n$ while $2^{l+1}\mid n$, whence $\nu(n)\ne\nu(m-2n)$, as wanted. If $k<l$ then $2^{k+1}\mid m-2n$ while $2^{k+1}\nmid n$, implying $\nu(n)\ne\nu(m-2n)$ in this case, too. To complete the proof, we use straightforward induction by $m=\max\{m,n\}$ distinguishing the following cases: $n=0$, $m=n$, $m=2n$, and the "general case" where none of these holds.
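The proved formula is easy to check exhaustively for small $m,n$:

```python
from math import gcd

# gcd(2^m+1, 2^n+1) = 2^gcd(m,n)+1 when nu(m) == nu(n), and 1 otherwise,
# where nu is the 2-adic valuation.
def nu(k):
    v = 0
    while k % 2 == 0:
        k //= 2
        v += 1
    return v

for m in range(1, 25):
    for n in range(1, 25):
        expected = 2 ** gcd(m, n) + 1 if nu(m) == nu(n) else 1
        assert gcd(2**m + 1, 2**n + 1) == expected
```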
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Conditional probability against sum of two normal random variables I am trying to solve the following problem: Let $X$, $Y$ be two variables with the distribution $\mathcal{N}(0,\lambda^2)$. Find the distribution of $X$ under the condition $X+Y=t$. So what we are looking for is, for $B$ a Borel set, the following: $$ P(X\in B\:|\:X+Y=t) $$ But $$P(X\in B\:|\:X+Y=t)=P(Y\in t-B)=\int_{t-B}\frac{1}{\lambda\sqrt{2\pi}}e^\frac{-x^2}{2\lambda^2}dx.$$ By the change of variables, the last integral is equal to $$ -\int_{B}\frac{1}{\lambda\sqrt{2\pi}}e^\frac{-(y-t)^2}{2\lambda^2}dy. $$ This would mean that the distribution of $X|X+Y=t$ is $\mathcal{N}(t,\lambda^2)$, BUT: 1) There is an unexpected minus sign, which makes the measure of the set negative. I suppose it comes from a wrong change of variables? 2) The answer in the textbook is that this distribution is $\mathcal{N}(t/2, \lambda^2/2)$. So where is the mistake?
Here's another way. I'm assuming $X$and $Y$ are independent. Since $\begin{pmatrix}X\\X+Y\end{pmatrix} = \begin{pmatrix}1 & 0\\ 1 & 1\end{pmatrix} \begin{pmatrix}X\\Y\end{pmatrix}$, the joint distribution of $(X,X+Y)$ is $\mathcal N_2\left(0,\lambda^2 \begin{pmatrix}1 & 1\\ 1 & 2\end{pmatrix}\right)$. Note also that the distribution of $X+Y$ is $\mathcal N(0,2\lambda^2)$. By Bayes' theorem, the conditional density of $X$ given $X+Y=t$ is given by $$\begin{aligned}f_{X|X+Y=t}(x) = \frac{f_{(X,X+Y)}(x,t)}{f_{X+Y}(t)} &\propto\exp\left(-\frac 1{2\lambda^2} (x,t)\begin{pmatrix}2 & -1\\ -1 & 1\end{pmatrix} \begin{pmatrix}x\\t\end{pmatrix}\right)\\ &\propto \exp\left(-\frac{1}{\lambda^2} (x^2-tx)\right) \\ &\propto \exp\left(-\frac{1}{2 \frac{\lambda^2}2} (x-\frac t2)^2\right) \end{aligned}$$ $\propto$ means proportional, which allows me to drop multiplicative factors that do not depend on $x$. Given the final expression, $f_{X|X+Y=t}(x) $ is the density of $\mathcal N(\frac t2, \frac{\lambda^2}2)$, as claimed in your textbook.
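A deterministic check of this computation, assuming independence as above: compare the exact Bayes ratio $f(x)f(t-x)/f_{X+Y}(t)$ with the $\mathcal N(t/2,\lambda^2/2)$ density pointwise on a grid (the values $\lambda=1.3$, $t=0.7$ are arbitrary):

```python
from math import exp, pi, sqrt

# Normal pdf with mean mu and variance var.
def npdf(x, mu, var):
    return exp(-(x - mu) ** 2 / (2 * var)) / sqrt(2 * pi * var)

lam, t = 1.3, 0.7
for i in range(-40, 41):
    x = i / 10.0
    # f_{X|X+Y=t}(x) = f_X(x) f_Y(t - x) / f_{X+Y}(t)
    cond = npdf(x, 0, lam**2) * npdf(t - x, 0, lam**2) / npdf(t, 0, 2 * lam**2)
    assert abs(cond - npdf(x, t / 2, lam**2 / 2)) < 1e-12
```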
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What will be the value of $\cos (n\pi x /l)$ for $x=0$? $\cos 0 = 1$, but the value of the above expression is $1$. How is it equal to $1$? If $\cos 0 =1$ then the value should be $n\pi/l$.
Here we assume $l\neq 0$. If $x=0$, we have that $$ \cos(n\pi\times 0)/l=\cos(0)/l=1/l. $$ If you mean $\cos(n\pi x/l)$, then, at $x=0$, we have $$ \cos(n\pi\times 0/l)=\cos(0)=1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3333905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $\mathbb{Z}[{ \sqrt 8 } ] $ a Euclidean domain? Is $\mathbb{Z}[{ \sqrt 8 } ] $ a Euclidean domain? I also have some confusion: what is the difference between a Euclidean domain and a Euclidean norm? My attempt: I think yes. I know that $d( a+b \sqrt 8) = |a^2 - 8b^2 | $, and I tried to show it is a Euclidean domain by the same pattern by which $\mathbb{Z}[{ \sqrt 2 } ]$ is a Euclidean domain.
It is more straightforward to give a counterexample. Since $4=2 \cdot 2 = (\sqrt{8}+2)(\sqrt{8}-2)$, $\mathbb{Z}[\sqrt{8}]$ is not a unique factorisation domain (UFD), hence not a Euclidean domain. Note that those factors are irreducible. Suppose to the contrary that $2$ is not irreducible. There exist $a,b \in \mathbb{Z}[\sqrt{8}]$ with $N(2)=4=2 \cdot 2=N(a)N(b)$. It requires that $a,b \notin U(\mathbb{Z}[\sqrt{8}])$, the set of units. Therefore $N(a)=N(b)=2$. Let $a=u+v\sqrt{8}$ with $u,v \in \mathbb{Z}$ and $u,v$ not both zero. It follows that $N(a)=|u^2-8v^2|=2$ $\implies$ $8v^2-u^2= \pm 2$. Therefore, \begin{equation} (2v)^2 = \frac{u^2}{2} \pm 1 . \end{equation} The LHS is even. If $u$ is even, the RHS is odd; a contradiction. If $u$ is odd, $ \frac{u^2}{2}\notin \mathbb{Z}$; another contradiction. Therefore $2$ is irreducible. Note that $2 \nmid \sqrt{8} \pm 2$. Hence $\mathbb{Z}[\sqrt{8}]$ is not a UFD. Link to a bigger image: Hasse diagram from rng to ED https://i.stack.imgur.com/jUcJX.png
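A small computational companion, using exact integer arithmetic in $\mathbb{Z}[\sqrt 8]$ with elements stored as pairs $(a,b)$ for $a+b\sqrt 8$; the norm-$2$ search is evidence consistent with the parity argument above, not a substitute for it:

```python
# Multiply (a + b*sqrt(8)) * (c + d*sqrt(8)) = (ac + 8bd) + (ad + bc)*sqrt(8).
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + 8 * b * d, a * d + b * c)

def norm(p):
    a, b = p
    return abs(a * a - 8 * b * b)

# The two factorizations of 4: 2*2 and (sqrt(8)+2)(sqrt(8)-2).
assert mul((2, 0), (2, 0)) == (4, 0)
assert mul((2, 1), (-2, 1)) == (4, 0)

# No element of norm 2 in a large search window.
assert all(norm((u, v)) != 2
           for u in range(-200, 201) for v in range(-200, 201))
```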
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Unique factorization of an element in a UFD By definition: An integral domain $R$ is a unique factorization domain if the following conditions are satisfied: 1) Every element $a \in R$, $a \neq 0$, that is not a unit can be factored into a product $a = c_1 \cdots c_n$ where $c_1,\dots,c_n \in R$ are irreducible elements. 2) If $c_1,\dots,c_n$ and $d_1,\dots,d_m$ are two factorizations of the same element of $R$ into irreducibles, then $n = m$ and the $d_j$ can be renumbered so that $c_i$ and $d_i$ are associates. I need to prove that every element $a \in R$, $a \neq 0$, which is not a unit can be written uniquely as: \begin{equation} a = up_1^{e_1} \cdots p_s^{e_s} \end{equation} where $u \in R$ is a unit, $p_1,\dots,p_s \in R$ are irreducible elements, pairwise non-associate, and $e_1,\dots,e_s \in \mathbb{N} \setminus \{0\}$. I think I need to start with an arbitrary factorization $a = c_1 \cdots c_n$, then use the following result, but honestly I don't know how to put it formally. Let $R$ be an integral domain and let $a,b \in R$. If $a$ and $b$ are associate elements, then $a,b \neq 0$ and $a = b \cdot u$ for some unit $u \in R$.
The procedure you take is identical to the following situation, where you consider words that are monomials in several variables: for example $$(\frac{3}{4}x)(5y)(x)(\frac{5}{3}x)(\frac{2}{5}z)(\frac{1}{4}y) $$ First group all associates: $$(\frac{3}{4}x)(x)(\frac{5}{3}x).(5y)(\frac{1}{4}y). (\frac{2}{5}z)$$ Then for each group of associates extract a unique unit: $$(\frac{3}{4}\frac{5}{3})(x)(x)(x).(5\frac{1}{4})(y)(y).(\frac{2}{5})(z) $$ Then bring all units together, simplify them, and exponentiate the rest: $$ \frac{5}{8}x^3y^2z $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding the first term and common difference when given the sum of the first 5 and the sum of the first 10 The sum of the first 5 terms of an arithmetic series is 110 and the sum of the first 10 terms is 320. How do I go about finding the first term and common difference? $S_n = \frac{n}{2}[2a+d(n-1)]$ is the equation for working out the sum of an arithmetic series, but how can I rearrange it to find the first term and common difference? I believe it would be done using simultaneous equations.
$S_n = \frac{n}{2}(2a + (n-1)d)$
$110 = \frac{5}{2}(2a+(5-1)d)$ (Eq. 1)
$320 = \frac{10}{2}(2a+(10-1)d)$ (Eq. 2)
$110 = 2.5(2a+4d)$ (Eq. 3)
$320 = 5(2a+9d)$ (Eq. 4)
$64 = 2a+9d$ — divide both sides of Eq. 4 by 5 (Eq. 5)
$44 = 2a+4d$ — divide both sides of Eq. 3 by 2.5 (Eq. 6)
$20 = 5d$ — simultaneous equations: subtract Eq. 6 from Eq. 5 ($64-44=20$, $2a-2a=0$, $9d-4d=5d$)
$d = 4$ — since $20/5 = 4$
Substitute $d$ into Eq. 5: $64 = 2a+9(4) = 2a+36$, so $28 = 2a$ and $a = 14$.
Therefore the first term is 14 and the common difference is 4.
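A quick sketch checking both the answer and the elimination step:

```python
# Check a = 14, d = 4 against the given partial sums, then re-derive
# (a, d) from the eliminated 2x2 system  2a + 4d = 44,  2a + 9d = 64.
def S(n, a, d):
    return n * (2 * a + (n - 1) * d) / 2

assert S(5, 14, 4) == 110
assert S(10, 14, 4) == 320

d = (64 - 44) / 5        # subtracting the equations leaves 5d = 20
a = (44 - 4 * d) / 2
assert (a, d) == (14.0, 4.0)
```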
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Theorem 5.13 in "Principles of Mathematical Analysis" by Walter Rudin L'Hospital's Rule L'Hopital's Rule I am reading "Principles of Mathematical Analysis" by Walter Rudin. Thank you Saaqib Mahmood. I copied and pasted your text Theorem 5.13 on p.109: Suppose $f$ and $g$ are real and differentiable in $(a, b)$, and $g^\prime(x) \neq 0$ for all $x \in (a, b)$, where $-\infty \leq a < b \leq +\infty$. Suppose $$ \frac{f^\prime(x)}{g^\prime(x)} \to A \ \mbox{ as } \ x \to a. \tag{13} $$ If $$ f(x) \to 0 \ \mbox{ and } \ g(x) \to 0 \ \mbox{ as } \ x \to a, \tag{14} $$ or if $$ g(x) \to +\infty \ \mbox{ as } \ x \to a, \tag{15} $$ then $$ \frac{f(x)}{g(x)} \to A \ \mbox{ as } \ x \to a. \tag{16}$$ The analogous statement is of course also true if $x \to b$, or if $g(x) \to -\infty$ in (15). Let us note that we now use the limit concept in the extended sense of Definition 4.33. Here is Definition 4.33: Let $f$ be a real function defined on $E \subset \mathbb{R}$. We say that $$ f(t) \to A \ \mbox{ as } \ t \to x, $$ where $A$ and $x$ are in the extended real number system, if for every neighborhood $U$ of $A$ there is a neighborhood $V$ of $x$ such that $V \cap E$ is not empty, and such that $f(t) \in U$ for all $t \in V \cap E$, $t \neq x$. And, here is Rudin's proof: We first consider the case in which $-\infty \leq A < +\infty$. Choose a real number $q$ such that $A < q$, and then choose $r$ such that $A < r < q$. By (13) there is a point $c \in (a, b)$ such that $a < x < c$ implies $$ \frac{ f^\prime(x) }{ g^\prime(x) } < r. \tag{17} $$ If $a < x < y < c$, then Theorem 5.9 shows that there is a point $t \in (x, y)$ such that $$ \frac{ f(x)-f(y) }{ g(x)-g(y) } = \frac{f^\prime(t)}{g^\prime(t)} < r. \tag{18} $$ Suppose (14) holds. Letting $x \to a$ in (18), we see that $$ \frac{f(y)}{g(y)} \leq r < q \qquad \qquad \qquad (a < y < c) \tag{19} $$ Next, suppose (15) holds. 
Keeping $y$ fixed in (18), we can choose a point $c_1 \in (a, y)$ such that $g(x) > g(y)$ and $g(x) > 0$ if $a < x < c_1$. Multiplying (18) by $\left[ g(x)- g(y) \right]/g(x)$, we obtain $$ \frac{ f(x) }{ g(x) } < r - r \frac{ g(y) }{g(x)} + \frac{f(y)}{g(x)} \qquad \qquad \qquad (a < x < c_1). \tag{20}$$ If we let $x \to a$ in (20), (15) shows that there is a point $c_2 \in \left( a, c_1 \right)$ such that $$ \frac{ f(x) }{ g(x) } < q \qquad \qquad \qquad (a < x < c_2 ). \tag{21} $$ Summing up, (19) and (21) show that for any $q$, subject only to the condition $A < q$, there is a point $c_2$ such that $f(x)/g(x) < q$ if $a < x < c_2$. In the same manner, if $-\infty < A \leq +\infty$, and $p$ is chosen so that $p < A$, we can find a point $c_3$ such that $$ p < \frac{ f(x) }{ g(x) } \qquad \qquad \qquad ( a< x < c_3), \tag{22} $$ and (16) follows from these two statements. Rudin assumed that $g'(x) \neq 0$ for all $x \in (a, b)$ but didn't assume that $g(x) \neq 0$ for all $x \in (a, b)$. If $g(y) = 0$ in (18) and (19), division by zero occurs. By the way, if we write $$ \frac{f(x)}{g(x)} \to A \ \mbox{ as } \ x \to a,$$ do we assume implicitly that $g(x) \neq 0$ for all $x$ which is near $a$? Then, we don't need to assume that $g'(x) \neq 0$ for all $x \in (a, b)$ and don't need to assume that $g(x) \neq 0$ for all $x \in (a, b)$.
If $g(x)=g(y)$ then by Theorem 5.10 (Lagrange), $0=g(x)-g(y)=(x-y)g'(t)$ for some $t\in (x,y)$. It follows that $g'(t)=0$, contradicting the assumption $g'(x)\neq 0$ for all $x\in (a,b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Conditions to exploit polar coordinates in limits. Evaluate $$\lim_{(x,y)\rightarrow(0,0)}f(x,y)=\lim_{(x,y)\rightarrow(0,0)}\dfrac{2x^2y}{x^4+y^2}$$ When I used polar coordinates with $x=r\cos\theta, y=r\sin\theta$: $$\lim_{r\rightarrow0}\dfrac{r\cos\theta\sin2\theta}{r^2\cos^4\theta+\sin^2\theta}=0$$ But when I use the path $y=x^2$: $$\lim_{(x,y)\rightarrow(0,0)}\dfrac{2x^4}{2x^4}=1$$ Also the paths $x=0$ and $y=0$ both give $$\lim_{(x,y)\rightarrow(0,0)}\dfrac{2x^2y}{x^4+y^2}=0$$ From the path argument, I can say the limit does not exist. Why did I get two different values of the limit from the polar and path computations? This makes me ask: when can the polar coordinates method be employed to compute limits? When can I be sure that it gives the correct value? Why does it give the value $0$ even when the limit does not exist? Please help!
If in polar coordinates the function takes the form $$ g(r) \, h(r,\theta) $$ where $g(r) \to 0$ as $r \to 0^+$ (standard single-variable limit) and the function $h$ is bounded for all $\theta$ and all $r$ in some region $0 < r < R$, then you can draw the conclusion that the two-variable limit is zero. But that's not what you have in your case. Sure, you get a factor $r$ which tends to zero, but the remaining expression isn't bounded (as your other argument with $y=x^2$ shows; no matter how small $r$ is, you can find a $\theta$ such that the whole expression equals $1$, i.e., that other part standing together with $r$ is equal to $1/r$, which is unbounded as $r \to 0$).
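A small numerical illustration of this point, using the function from the question: along any fixed direction the values shrink with $r$, but along the parabola $y=x^2$ (i.e. $\theta$ chosen depending on $r$) the function is identically $1$:

```python
from math import cos, sin, pi

# f(x, y) = 2 x^2 y / (x^4 + y^2), the function from the question.
def f(x, y):
    return 2 * x * x * y / (x**4 + y**2)

# Along fixed directions theta, the value at small r is tiny...
r = 1e-6
for theta in (0.1, pi / 4, 1.2):
    assert abs(f(r * cos(theta), r * sin(theta))) < 1e-4

# ...but along y = x^2 the value is identically 1, so the remainder
# multiplying r in the polar form is unbounded and the limit fails.
for x in (1e-2, 1e-4, 1e-6):
    assert abs(f(x, x * x) - 1.0) < 1e-9
```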
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
On solving $\frac{d^2y}{dx^2} = my$ in MATLAB The differential equation $\frac{d^2y}{dx^2} = my$ has two solutions $y = e^{\sqrt{m}x}$ and $y = e^{-\sqrt{m}x}$. When I use ode45 (or any other IVP solver) in MATLAB, it always picks up $y = e^{\sqrt{m}x}$. How do I make MATLAB pick the other solution, namely $y = e^{-\sqrt{m}x}$?
MATLAB does not pick $y = e^{\sqrt{m} x}$. Since $y_1 = e^{\sqrt{m} x}$ and $y_2 = e^{-\sqrt{m} x}$ are both solutions, then the solution is the linear combination of both namely $$y(x) = c_1 y_1(x) + c_2y_2(x)$$ Now, depending on the initial conditions you've passed ode45, $c_1,c_2$ will be adjusted accordingly.
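A sketch of the same point in Python, assuming SciPy's `solve_ivp` with its default RK45 method as the analogue of ode45: the initial conditions, not the solver, select the solution. With $y(0)=1$, $y'(0)=-\sqrt m$ we get $c_1=0$, $c_2=1$, i.e. $y=e^{-\sqrt m x}$. Note that the growing mode $e^{\sqrt m x}$ amplifies round-off over long intervals, which is why loose tolerances can appear to "pick" it; tight tolerances are needed here.

```python
import numpy as np
from scipy.integrate import solve_ivp

m = 1.0

def rhs(x, z):              # first-order system, z = [y, y']
    return [z[1], m * z[0]]

# y(0) = 1, y'(0) = -sqrt(m)  =>  the decaying solution e^{-sqrt(m) x}.
sol = solve_ivp(rhs, [0, 5], [1.0, -np.sqrt(m)], rtol=1e-10, atol=1e-12)
y_end = sol.y[0, -1]
assert abs(y_end - np.exp(-5.0)) < 1e-4   # e^{-5} ≈ 6.74e-3
```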
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the directional derivatives of a function Consider the function $f : R^2 → R$ given by $$ f(x, y) = \begin{cases} \frac{x^2y}{x^4+y^2} & \mathrm{if}\ (x, y) \ne(0, 0)\\ 0 & \mathrm{if}\ (x, y) = (0, 0) \end{cases} $$ Using the definition, compute the directional derivative $\partial_uf(0,0) $ for all directions $u=(u_1, u_2)\ne (0, 0)$. [Hint: Consider the cases $u_2 \ne 0$ and $u_2 = 0$ separately, and use that $\partial _uf(0, 0) = \lim_{h\to 0} \frac{f(hu_1,hu_2)−f(0,0)}{h}$.] Here is my solution: We know that $f(0,0)=0$, so we can rewrite the formula for the directional derivative as $$\partial _uf(0, 0) = \lim_{h\to 0} \frac{f(hu_1,hu_2)}{h}$$ We have that in the function, $x$ is denoted by $hu_1$ and $y$ is denoted by $hu_2$. We now look at each case, in terms of the original function. Case 1: We have that $u_2=0$, $$\lim_{h\to0}\left(\frac{((hu_1)^2(h\cdot 0))}{h ((hu_1)^4+(hu_2)^2)}\right) = 0$$ Case 2: We have that $u_2\ne0$, $$\lim_{h\to0}\left(\frac{((hu_1)^2(hu_2))}{h ((hu_1)^4+(hu_2)^2)}\right) = \lim_{h\to0}\left(\frac{h^3(u_1^2u_2)}{h^3(h^2 u_1^4+u_2^2)}\right) = \lim_{h\to0}\left(\frac{u_1^2u_2}{h^2 u_1^4+u_2^2}\right)$$ Letting $h\to0$, we see that the limit is $\frac{u_1^2}{u_2}$. Therefore the directional derivative $\partial_uf(0,0)$ in a direction $u=(u_1,0)$ is $0$, and in a direction $u=(u_1,u_2)$ with $u_2\ne0$ it is $\frac{u_1^2}{u_2}$. Is this solution correct? How can I improve this answer in general?
As the question is now edited, the only remaining step is at the end of case 2: $$\lim_{h\to0}\left(\frac{u_1^2u_2}{h^2 u_1^4+u_2^2}\right) = \frac{u_1^2u_2}{u_2^2} = \frac{u_1^2}{u_2}$$ Hence $\partial_u f(0,0) = \frac{u_1^2}{u_2}$. Plot the function to get an intuitive understanding of its behaviour near $(0,0)$; a plot also shows why it is sensible to distinguish between the case $u_2 = 0$ (the direction along the $x$-axis) and $u_2 \neq 0$.
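A quick numeric check of both cases (the direction $(u_1,u_2)=(3,2)$ is an arbitrary example):

```python
import math

def f(x, y):
    # The function from the question, with f(0, 0) = 0.
    if (x, y) == (0, 0):
        return 0.0
    return x**2 * y / (x**4 + y**2)

def directional_quotient(u1, u2, h):
    # Difference quotient (f(h*u1, h*u2) - f(0, 0)) / h at the origin.
    return f(h * u1, h * u2) / h

# Case u2 != 0: the quotient approaches u1^2 / u2.
u1, u2 = 3.0, 2.0
approx = directional_quotient(u1, u2, 1e-5)
assert abs(approx - u1**2 / u2) < 1e-6

# Case u2 == 0: the quotient is identically 0.
assert directional_quotient(1.0, 0.0, 1e-5) == 0.0
```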
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Why the rotation of two equal surfaces (with different shape) do not give the same volume I will explain my question with an example: Let's take two surfaces that have the same area: SURFACE A $$ f(x) = x, x \in [0,6] $$ SURFACE B $$ f(x) = 3, x \in [0,6] $$ Both surfaces are equal: Surface A, Sa = 18 Surface B, Sb = 18 But now if we rotate those two surfaces around the (for example) x axis. We obtain two different volumes: Volume A, Va = 72*pi Volume B, Vb = 54*pi I've no problem to apply the formula, but I found the result a bit counter-intuitive. We apply the "same" area (with different shape) around an axis an we obtain a different volume. Why is that (intuitively) ?
There is a result — Pappus's centroid theorem — that says the volume of a solid of revolution equals the area times the distance travelled by the area's center-of-mass (or centroid): $V = 2\pi \bar{y} A$, where $\bar{y}$ is the distance from the centroid to the axis. So equal areas rotated about axes an equal distance away from their centroids produce equal volumes. If the area is not distributed equally with respect to the distance from the axis of rotation, you will get different volumes.
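To make this concrete for the two surfaces in the question, here is a short Python check; the centroid heights $\bar y = 2$ for the triangle and $\bar y = 3/2$ for the rectangle come from the standard formula $\bar y = \frac{1}{A}\int_a^b \frac{f(x)^2}{2}\,dx$:

```python
import math

def disk_volume(f, a, b, n=20000):
    # Disk method: V = pi * integral of f(x)^2 dx, by the midpoint rule.
    h = (b - a) / n
    return math.pi * h * sum(f(a + (k + 0.5) * h) ** 2 for k in range(n))

def pappus_volume(area, ybar):
    # Pappus's centroid theorem: V = 2 * pi * ybar * area, where ybar is
    # the distance from the centroid of the region to the axis of rotation.
    return 2 * math.pi * ybar * area

# Surface A: region under f(x) = x on [0, 6]; area 18, centroid height 2.
va = disk_volume(lambda x: x, 0, 6)
assert abs(va - 72 * math.pi) < 1e-3
assert abs(pappus_volume(18, 2) - va) < 1e-3

# Surface B: region under f(x) = 3 on [0, 6]; area 18, centroid height 1.5.
vb = disk_volume(lambda x: 3.0, 0, 6)
assert abs(vb - 54 * math.pi) < 1e-3
assert abs(pappus_volume(18, 1.5) - vb) < 1e-3
```

Same area, different centroid heights, hence different volumes.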
{ "language": "en", "url": "https://math.stackexchange.com/questions/3334922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Expected number of tosses to get 3 consecutive heads, what's wrong with my solution? Apparently the answer is 14 Expected number of tosses to get 3 consecutive Heads, but I got 11. Can someone pinpoint the error to my solution? Let $X_i$ = Expected number of tosses to ith head. Hence, $$X_1 = 1 + 1/2*X_1$$ $$X_2 = X_1 + 1/2 + 1/2*X_2$$ $$X_3 = X_2 + 1/2 + 1/2*X_3$$ I know the first line is correct. Regarding the 2nd and 3rd lines, my reasoning is that for the $i$th roll, it's expected value must be the $(i-1)$th roll plus 50% chance of the next role being heads and if not, then everything is reset back to 0. So, the Expected value of getting 2 heads is the expected value of getting one head + 50% chance of immediately getting the second head + the expected value of getting a tail, which resets the state back to the beginning. -Edit- Just to clarify since there is some confusion, I let $X_1, X_2, X_3$ to represent the expected value getting to 1,2, and 3 consecutive heads, respectively.
Your argument is almost correct. Consider $X_2$. By your type of reasoning, half the time (next toss heads) this will be $X_1+1$, and half the time (next toss tails) it will be $(X_1+1)+X_2$ — the extra toss must be counted in both cases. So your second line should read $X_2 = X_1 + 1 + \frac12 X_2$, i.e. it should have "+1+" instead of "+1/2+". Ditto in line 3. Then your method gives $X_1=2, X_2=6, X_3=14.$
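With the corrected recurrence $X_k = 2(X_{k-1}+1)$, equivalently the closed form $X_k = 2^{k+1}-2$, a quick check:

```python
def expected_tosses(k):
    # E_k = expected number of fair-coin tosses to see k consecutive heads.
    # Corrected recurrence: E_k = E_{k-1} + 1 + (1/2) E_k, i.e. E_k = 2 (E_{k-1} + 1).
    e = 0.0
    for _ in range(k):
        e = 2 * (e + 1)
    return e

assert [expected_tosses(k) for k in (1, 2, 3)] == [2.0, 6.0, 14.0]
# Closed form: E_k = 2^(k+1) - 2.
assert all(expected_tosses(k) == 2 ** (k + 1) - 2 for k in range(1, 10))
```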
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Problems in $\sqrt{5x+4}=x-2$ So when I solve $\sqrt{5x+4}=x-2$, I end up with $x(x-9)=0$. Yet only when $x=9$ is the original equation satisfied. Can somebody give me some details on what exactly goes wrong here?
At $x=0$, the LHS is $2$ while the RHS is $-2$, so $x=0$ does not satisfy the original equation. The extraneous solution comes from squaring both sides: squaring destroys the sign information, and $(2)^2=(-2)^2$.
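A two-line check of both candidate roots in the original and in the squared equation:

```python
import math

def lhs(x):
    return math.sqrt(5 * x + 4)

def rhs(x):
    return x - 2

# x = 9 solves the original equation: sqrt(49) = 7 = 9 - 2.
assert lhs(9) == rhs(9) == 7

# x = 0 only solves the squared equation: 4 = (-2)^2, but 2 != -2.
assert lhs(0) != rhs(0)
assert lhs(0) ** 2 == rhs(0) ** 2 == 4
```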
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Showing a complex inequality Let $z,w \in \mathbb C$ such that $\bar z w \neq 1$ and $|z|\leq 1, |w| \leq 1$; then $\left|\dfrac{z-w}{1-\bar z w}\right| \leq 1$. The hint is to show that $|z-w|^2\leq |1-\bar z w|^2$, but I can't relate the two.
$$|1-\overline zw|^2=(1-\overline zw)(1-\overline wz)=1+|wz|^2-\overline z w-\overline wz,$$ $$|z-w|^2=(z-w)(\overline z-\overline w)=|z|^2+|w|^2-z\overline w-w\overline z.$$ The difference is $$1+|wz|^2-|w|^2-|z|^2.$$
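The difference even factors as $(1-|z|^2)(1-|w|^2)$, which makes the inequality immediate for $|z|,|w|\le 1$; a quick numeric check over a few sample points:

```python
# Numeric check of the identity from the answer: the difference
# |1 - conj(z) w|^2 - |z - w|^2 equals 1 + |zw|^2 - |z|^2 - |w|^2,
# which factors as (1 - |z|^2)(1 - |w|^2) >= 0 when |z|, |w| <= 1.
samples = [0.3 + 0.4j, -0.5j, 0.9, -0.6 + 0.1j, 0.1 + 0.7j]

for z in samples:
    for w in samples:
        diff = abs(1 - z.conjugate() * w) ** 2 - abs(z - w) ** 2
        factored = (1 - abs(z) ** 2) * (1 - abs(w) ** 2)
        assert abs(diff - factored) < 1e-12
        assert diff >= 0  # hence |z - w| <= |1 - conj(z) w|
```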
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Uniform bound on derivatives and uniform convergence If $f_n:[0,1]\to \mathbb{R}$ are differentiable, $|f_n'(x)|\leq C$ for all $n\in\mathbb{N}$ and $x\in [0,1]$, $f_n\to f$ uniformly, $f_n'(x)\to g(x)$ pointwise and $f$ is differentiable, can we conclude that $f'=g$? Equivalently, can we conclude that $$ \lim_{n\to \infty}\lim_{h\to 0}\frac{f_n(x+h)-f(x)}{h}=\lim_{h\to 0}\lim_{n\to \infty}\frac{f_n(x+h)-f(x)}{h} $$ given the assumptions above? My guess is no, but I am unsure of a counter-example.
No. For a counterexample on the whole $\Bbb R$ (but the difference is inessential), consider the $C^\infty$ bump $$\Phi(x)=\begin{cases}\exp\frac1{x^2-1}&\text{if }-1<x<1\\ 0&\text{if }x\le-1\lor x\ge 1\end{cases}$$ and $g_n(x)=\Phi(nx)$, $f_n(x)=\int_{-\infty}^x g_n(y)\,dy$. Then: $$\begin{align} \lvert g_n(x)\rvert&\le \max_{x\in[-1,1]}\lvert\Phi(x)\rvert\\ g_n(x)&\to \begin{cases}\Phi(0)&\text{if }x=0\\ 0&\text{if }x\ne 0\end{cases}&\text{ pointwise}\\ f_n(x)&\to 0&\text{ uniformly}\end{align}$$ because $$\left\lvert\int_{-\infty}^x g_n(y)\,dy\right\rvert\le \int_{-\infty}^x\lvert \Phi(ny)\rvert\,dy= \frac1n\int_{-\infty}^{nx}\Phi(y)\,dy\le \frac1n\int_{-\infty}^\infty\Phi(y)\,dy$$ It is clear that the limit of $g_n$ isn't the derivative of anything, because it hasn't got the intermediate value property — which every derivative has, by Darboux's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding solution to a double integral Well, here it is $$\int\int_{[0,1]^{2}}\sqrt{1+x^2+y^2}\mathrm dx\mathrm dy\tag*{}$$ Maybe there's a really nice way of doing it, hopefully someone knows. Good luck!
We can do this integral in polar coordinates by recognizing a symmetry - divide the square in half by the line $y=x$. The integral on the top half will equal the integral on the bottom half, so we will do one of the integrals and multiply its value by 2. $$\iint_{[0,1]^2}\sqrt{1+x^2+y^2}dA = \int_0^{\pi/4} \int_0^{\sec\theta}2r\sqrt{1+r^2}drd\theta = \frac{2}{3}\int_0^{\pi/4} (1+\sec^2\theta)^{3/2} - 1 d\theta$$ $$= \frac{2}{3}\int_0^{\pi/4} (1+\sec^2\theta)^{3/2} d\theta - \frac{\pi}{6}$$ Now focusing on the integral left over, do the following substitution $$1+\sec^2\theta = 2\cosh^2 t$$ $$\sec^2\theta \tan\theta d\theta = 2\cosh t \sinh t dt \implies d\theta = \frac{\sqrt{2}\cosh t}{\cosh 2t}dt$$ The integral becomes: $$\frac{8}{3}\int_{0}^{\cosh^{-1}(\sqrt{3/2})} \frac{\cosh^4 t}{\cosh 2t}dt = \frac{8}{3}\int_{0}^{\cosh^{-1}(\sqrt{3/2})} \frac{\cosh^2 t + \cosh^2 t \sinh^2 t}{\cosh 2t}dt$$ $$ = \frac{2}{3}\int_{0}^{\cosh^{-1}(\sqrt{3/2})} \frac{2+ 2\cosh 2t + \sinh^2 2t}{\cosh 2t}dt $$$$= \frac{2}{3}\int_{0}^{\cosh^{-1}(\sqrt{3/2})} 2\text{ sech } 2t + 2 + \tanh 2t \sinh 2tdt$$ Integrating tanh sinh by parts, we get $$\frac{2}{3}\int_{0}^{\cosh^{-1}(\sqrt{3/2})} (\text{sech } 2t + 2)dt + \frac{1}{3}\sinh 2t = \frac{1}{3}\left[\tan^{-1}(\sinh 2t) + 4t + \sinh 2t \right]_{0}^{\cosh^{-1}(\sqrt{3/2})}$$ $$= \frac{1}{3}\left[\tan^{-1}(2u\sqrt{u^2-1}) + 4\cosh^{-1}(u) + 2u\sqrt{u^2-1}\right]_{1}^{\sqrt{3/2}} = \frac{1}{3}\left[\tan^{-1}(\sqrt{3}) + 4\cosh^{-1}(\sqrt{3/2}) + \sqrt{3}\right]$$ Simplifying and subtracting off the term from earlier, we get $$\frac{4}{3}\cosh^{-1}\left(\sqrt{\frac{3}{2}}\right) + \frac{1}{\sqrt{3}} - \frac{\pi}{18}$$
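As a numeric cross-check of the final value (a midpoint-rule approximation of the original double integral; the grid size is arbitrary):

```python
import math

def integrand(x, y):
    return math.sqrt(1 + x * x + y * y)

def midpoint_double(f, n=200):
    # Midpoint rule on the unit square with an n-by-n grid.
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            total += f(x, (j + 0.5) * h)
    return total * h * h

closed_form = (4.0 / 3.0) * math.acosh(math.sqrt(1.5)) + 1.0 / math.sqrt(3.0) - math.pi / 18.0
# Both should be about 1.2808.
assert abs(midpoint_double(integrand) - closed_form) < 1e-3
```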
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find $f(x)$ for a function $f: R \to R$, which satisfies condition $f(x+y^{3}) = f(x) + [f(y)]^{3}$ for all $x,y \in R$ and $f'(0)≥0$. Find $f(x)$ for a function $f: R \to R$, which satisfies condition $f(x+y^{3}) = f(x) + [f(y)]^{3}$ for all $x,y \in R$ and $f'(0)≥0$ My attempt: Replacing $x$ and $y$ by $0$, $f(0)=0$ Replacing only x by $0$, $ f(y^{3}) = [f(y)]^{3}$ So $f'(x) = \lim_{h \to 0} \frac{f(x+h)-f(x)}{h} $ $= \lim_{h \to 0} \frac{f(x) + [f(h^{1/3})]^{3} - f(x)}{h}$ $= \lim_{h \to 0} \frac{f(h)}{h}$ = $f'(0)$ Then I'm stuck. How to proceed$?$
What you've shown so far is that $f'(x)$ is a constant, since $f'(x) = f'(0)$. And we know that $f(0) = 0$. This means that your solution is going to be something in the form $f(x) = ax$ for a non-negative constant $a$ (since you've specified that $f'(x) \geq 0$). So what constants work? Well, we know that $f(x^3) = ax^3 = f(x)^3 = a^3x^3$. Cancel out the $x^3$ (since this must hold for any $x$, we can just pick whatever $x \neq 0$ we like) and you're left with $a = a^3$. This equation has only two non-negative solutions: $a=0$ and $a=1$. These correspond to the functions $f(x) = 0$ and $f(x) = x$, respectively. (There is of course a third option if you remove the $f'(x) \geq 0$ constraint, namely, $f(x) = -x$. Whoever wrote the question wanted to exclude this one specifically, for whatever unknown reason.)
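A quick check that the candidates satisfy the functional equation on a grid of integers (including $f(x)=-x$, which solves the equation but is excluded by the sign condition):

```python
# Check that the candidate solutions satisfy f(x + y^3) = f(x) + f(y)^3.
solutions = [lambda x: 0, lambda x: x]
extra = lambda x: -x  # also satisfies the equation, but has f'(0) < 0

grid = range(-5, 6)
for f in solutions + [extra]:
    for x in grid:
        for y in grid:
            assert f(x + y**3) == f(x) + f(y) ** 3
```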
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivation of alternative form of Chernoff bound I have encountered an alternative form for the Chernoff bound for the sum of $n$ coins which I have not been able to derive. Specifically, let $X_1,...,X_n$ be independent Poisson trials, let $X = \sum_{i=1}^{n} X_i$ and let $\mu = \mathbb{E}(X)$. Then $$ \forall t > 0 . \mathbb{P}(X \geq \mu + t) \leq \exp\left(-2\frac{t^2}{n}\right) $$ I am familiar with how to arrive at the more common variant of this Chernoff bound, that is $$ \forall \delta > 0. \mathbb{P}(X \geq (1+\delta)\mu) \leq \left(\frac{e^\delta}{(1+\delta)^{(1+\delta)}}\right)^\mu, $$ but I have not been able to derive the former from it. Any help would be greatly appreciated. EDIT: parsiad's answer makes use of Hoeffding's inequality, which was introduced in the lecture notes I was reading well after this variant of the bound was presented. So if there is a way to derive this bound without using Hoeffding's inequality/lemma, I would be grateful to see it. UPDATE: It is possible to show that the bound on the probability as given by the formula I was looking to derive is actually tighter than the other bound. Hence I don't think it is possible to derive it without Hoeffding's inequality.
For each $n$, let $X_{n}$ be a random variable bounded between $a_{n}$ and $b_{n}$. Let $X\equiv X_{1}+\cdots+X_{n}$ and $\mu \equiv \mathbb{E}X$. Hoeffding's inequality states that $$ \mathbb{P}(X \geq \mu + t)\leq\exp\left(-\frac{2t^{2}}{\sum_{i=1}^{n}\left(b_{i}-a_{i}\right)^{2}}\right). $$ In your case, $b_{i}=1$ and $a_{i}=0$ and hence the right hand side above becomes $\exp(-2t^{2}/n)$, as desired. A proof of Hoeffding's inequality is available on the Wikipedia page. There is also a good one in the Appendix of Chapter 4 of All of Statistics by Wasserman.
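Since $X\sim\mathrm{Binomial}(n,1/2)$ has exact, computable tails, the bound can be verified directly for a sample case (here $n=100$; the range of $t$ is arbitrary):

```python
import math

def exact_tail(n, k):
    # P(X >= k) for X ~ Binomial(n, 1/2), computed exactly.
    return sum(math.comb(n, j) for j in range(k, n + 1)) / 2 ** n

n = 100
mu = n // 2
for t in range(1, 31):
    bound = math.exp(-2 * t * t / n)
    assert exact_tail(n, mu + t) <= bound
```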
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Use $x\ge 0 \implies y = 1$ and $x<0 \implies y = 0$ into a linear programming solver For a binary variable $y$ and another decision variable $x$, $x$ being integer, I want to be able to use the following two non-linear constraints into a linear solver: \begin{align} x\ge 0 \implies y = 1\\ x< 0 \implies y = 0 \end{align} What I've tried so far is: $$\dfrac{x}{M} + \varepsilon \le y$$ $$\dfrac{x}{M}+1\ge y$$ With $M$ being a large constant and $\varepsilon$ a very small constant. I then wanted to use those constraints in a linear programming solver and my problem is that whatever value of $M$ and $\varepsilon$, when $x = 0$, $y$ is always $0$ instead of $1$. All other cases are working. What mistake could have made? Is there another way without constants $M$ and $\varepsilon$?
$$ \begin{align} x\ge 0 \implies y = 1\\ x< 0 \implies y = 0 \end{align} $$ I assume your variable $y\in \{0,1\}$, since no other value is possible. Let $M$ be a big positive number. $$ \begin{align} x \geq (y-1)M\\ x < yM\\ -M \leq x < M \end{align} $$ Using the intervals $[-M, 0), [0, M)$, this set of constraints guarantees the required properties. This is exactly what you do; note that when $x=0$ the second constraint reads $0<yM$, forcing $y\neq 0$ as desired — but solvers do not accept strict inequalities, so you can rewrite it as follows ($\epsilon > 0$): $$ \begin{align} x \geq (y-1)M\\ x + \epsilon \leq yM\\ -M \leq x < M \end{align} $$ Note that $x + \epsilon \leq yM$ and $\frac{x}{M}+\frac{\epsilon}{M} \leq y$ are the same thing, so your formulation is correct up to rescaling $\epsilon$. Probably you made a mistake in your code, because when $(x,y)=(0,0)$ the second constraint becomes $\epsilon \leq 0$, which is false for all $\epsilon >0$ — hence $x=0$ should force $y=1$. UPDATE You can derive these constraints with the following idea. Let $M>0$ be a big number and $\epsilon>0$ a small number. Assume $-M \leq x \leq M-\epsilon$. $$y=\left\lfloor\frac{x}{M}\right\rfloor+1$$ Note that $0 \leq x \leq M-\epsilon \implies y=1$ and $-M \leq x < 0 \implies \lfloor\frac{x}{M}\rfloor = -1 \implies y=0$. $$y-1=\left\lfloor\frac{x}{M}\right\rfloor$$ $$y-1\leq \frac{x}{M} < y $$ $$y-1\leq \frac{x}{M} \leq y -\frac{\epsilon}{M}$$ $$ \left\{\begin{align} & x \geq (y-1)M\\ & x \leq yM -\epsilon \end{align}\right. $$
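A brute-force check (with illustrative values $M=100$ and $\epsilon=0.5$; $\epsilon=0.5$ is safe here because $x$ is integer-valued) that the final pair of constraints forces the intended $y$ for every $x$:

```python
M = 100      # big constant; must exceed max |x|
eps = 0.5    # works because x is integer-valued

def feasible_y(x):
    # Values of y in {0, 1} satisfying x >= (y - 1) * M and x + eps <= y * M.
    return [y for y in (0, 1) if x >= (y - 1) * M and x + eps <= y * M]

for x in range(-M + 1, M):
    want = [1] if x >= 0 else [0]
    assert feasible_y(x) == want   # exactly one feasible y, the intended one
```

In particular `feasible_y(0) == [1]`, which is the case that failed in the question's formulation.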
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cardinality of a p-Sylow Let $G$ be a group with cardinal $n=p^{\alpha}m$, (prime $p$ dividing $n$ and $\gcd(m,p)=1$). Let $E$ be the set of subsets of $G$ containing $p^{\alpha}$ elements. I'm trying to understand why $p$ does not divide $\vert E \vert = \binom{p^{\alpha}m}{p^{\alpha}}$ Writing \begin{equation} \displaystyle \binom{p^{\alpha}m}{p^{\alpha}}=\frac{p^{\alpha}m}{p^{\alpha}}.\frac{p^{\alpha}m-1}{p^{\alpha}-1}...\frac{p^{\alpha}m-p^{\alpha}+1}{1} \end{equation} After simplifications, the remaining quantity is not divisible by $p$. The first fraction of the binomial coefficient can be simplified by $p^{\alpha}$ but is it true for the other fractions ? I thank you in advance for any suggestions.
Recall that the $p$-adic valuation of $N$ is $v_p(N)=r$ when $p^r$ is the highest power of $p$ dividing $N$, so that $(p,N)=1$ if and only if $v_p(N)=0$. By counting the multiples of each power of $p$ up to $n$ we get Legendre's formula $$ v_p(n!)=\lfloor \frac np\rfloor+\lfloor\frac n{p^2}\rfloor+\lfloor\frac n{p^3}\rfloor+\cdots. $$ Now let $n=p^am$ with $(p,m)=1$. The formula above reads $$ v_p((p^am)!)=p^{a-1}m+p^{a-2}m+\cdots +m+v_p(m!), $$ where the tail $v_p(m!)$ collects the terms $\lfloor m/p^j\rfloor$ with $j\ge 1$. Using the same formula again we get $$ v_p((p^a(m-1))!)=p^{a-1}(m-1)+p^{a-2}(m-1)+\cdots +(m-1)+v_p((m-1)!) $$ and $$ v_p(p^a!)=p^{a-1}+p^{a-2}+\cdots+p+1. $$ Since $(p,m)=1$ we have $v_p(m!)=v_p((m-1)!)$, so the tails cancel. Now since $v_p(\frac A{BC})=v_p(A)-v_p(B)-v_p(C)$ we can apply the above computation to $$ v_p(\binom{p^am}{p^a})=v_p((p^am)!)-v_p(p^a!)-v_p((p^a(m-1))!)=0, $$ proving the assertion.
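The computation can be confirmed numerically — a small Python check of Legendre's formula and of the conclusion $v_p\binom{p^am}{p^a}=0$ for a few illustrative triples $(p,a,m)$ with $(p,m)=1$:

```python
import math

def vp(n, p):
    # p-adic valuation of the integer n by direct factor counting.
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def legendre(n, p):
    # v_p(n!) by Legendre's formula.
    total, q = 0, p
    while q <= n:
        total += n // q
        q *= p
    return total

for p, a, m in [(2, 3, 5), (3, 2, 4), (5, 2, 3), (7, 1, 6)]:
    n = p**a * m
    # Legendre's formula agrees with direct factor counting ...
    assert legendre(n, p) == vp(math.factorial(n), p)
    # ... and the binomial coefficient is prime to p.
    assert vp(math.comb(n, p**a), p) == 0
```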
{ "language": "en", "url": "https://math.stackexchange.com/questions/3335845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is group homomorphism $F: SO(3) \to \mathbb R-\{0\}$ unique? If we assume that $F$ is smooth, then such $F$ is unique. Under this assumption, the question is equivalent to find all 1-dimensional representation of $SO(3)$, and by considering Lie algebra there is only the trivial one. But what if we remove the smoothness assumption? I think this question may be related to the fact that there are "nontrivial" $\mathbb Q$-linear map $\mathbb R \to \mathbb R$, which can be constructed by using the axiom of choice.
Yes, it's unique: indeed, the group $\mathrm{SO}(3)$ is perfect: actually every element is a commutator. Indeed, consider any element $q$: this is a rotation; hence square of another rotation (with same axis) $r$, namely $q=r^2$. Then $r$ and $r^{-1}$ being rotations of the same angle, are conjugate (by any element reversing the axis of $r$): $r^{-1}=srs^{-1}$. So $q=r^2=r(r^{-1})^{-1}=rsr^{-1}s^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does every odd integer $m$ satisfy $3^x(m)-2^y=1$ for some integer values of $x$ and $y$? Does the equation $$3^x(m)-2^y=1$$ have positive integer solutions $x, y$ for for every positive odd number $m$? For example, for $m = 1$, we have $x = 1, y = 1$: $3^1(1)-2^1=1$. For $m=3$, the (only) solution is $x=1,y=3$. But what about the general case? This question looks like Mihăilescu's theorem, which proves that the only solution to $3^x-2^y=1$ is $x=2$ and $y=3$, but of course we have the extra multiplicand m in there, and what I want to prove is in fact that there are (or aren't) solutions for all positive odd numbers m. I've been looking into an unrelated problem and it would be helpful to prove or disprove this but I really don't know where to start. My inclination is to say that there must be solutions $x,y$ for all $m$, because with an infinite number of powers of two and an infinite number of powers of three to work with there will always be a pair somewhere that will have the necessary relation to one another. But I'm lost as to how to translate that into proof, if indeed the statement is even true. Any help - even partial help - would be greatly appreciated. Edit: Thanks Travis, thanks Conrad, that solves it for me. I think I can't accept either of you as the "solution" here (I'm new!) but tell me if that's untrue. And thanks!
No, take $m$ to be a power of $3$, your question reduces to: Distance between powers of 2 and 3 Hope it helps:)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Poisson paradigm: Why is $\lambda$, the rate of occurrence of events, equal to the sum of the probabilities of all the events that occur? My notes say the following about the Poisson paradigm: Let $A_1, A_2, \dots, A_n$ be events with $p_j = P(A_j)$, where $n$ is large, the $p_j$ are small, and the $A_j$ are independent or weakly dependent. Let $$X = \sum_{j = 1}^n I(A_j)$$ count how many of the $A_j$ occur. Then $X$ is approximately $Pois(\lambda)$ with $\lambda = \sum_{j = 1}^n p_j$. $\lambda = \sum_{j = 1}^n p_j$ is the sum of all of the probabilities of the events that occur. And $\lambda$ is the rate of occurrence of events. But I'm wondering why $\lambda$, the rate of occurrence of events, would be equal to the sum of the probabilities of all the events that occur? I'm not seeing how this makes sense. I would greatly appreciate it if people could please take the time to clarify this.
If $A_i$ occurs with probability $p_i$, then the expected value of the indicator $I(A_i)$ is also $p_i$. Since expected values are additive under all circumstances (no independence is needed for this), $E[X]=\sum p_i$. So if $X$ is approximately a Poisson distribution, it is necessarily one with the correct expected value, and for a Poisson distribution the parameter $\lambda$ is exactly that expected value.
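Both points can be checked numerically — a sketch that builds the exact distribution of $X$ by convolution for one illustrative choice of small $p_i$ (the particular values are arbitrary):

```python
import math

def exact_pmf(ps):
    # Distribution of X = number of independent events A_i that occur,
    # built by convolving the Bernoulli(p_i) distributions one at a time.
    dist = [1.0]
    for p in ps:
        new = [0.0] * (len(dist) + 1)
        for k, q in enumerate(dist):
            new[k] += q * (1 - p)
            new[k + 1] += q * p
        dist = new
    return dist

ps = [0.02] * 30 + [0.03] * 20
lam = sum(ps)                      # lambda = sum of the probabilities = 1.2
dist = exact_pmf(ps)

# Additivity of expectation: E[X] = sum p_i.
assert abs(sum(k * q for k, q in enumerate(dist)) - lam) < 1e-9

# The exact pmf is close to Poisson(lambda), since the p_i are small.
poisson = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(len(dist))]
assert max(abs(a - b) for a, b in zip(dist, poisson)) < 0.02
```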
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Intersection of nested sequence of non-empty compact sets is non-empty (using sequential compactness) Let $(X,d)$ be a metric space, and let $K_1, K_2, K_3, \ldots$ be a sequence of non-empty compact sets in this metric space such that $$K_1 \supseteq K_2 \supseteq K_3 \supseteq \cdots$$ Then the intersection $\bigcap_{n=1}^\infty K_n$ is non-empty. I am aware of the standard proof of this fact using covering compactness, but I am wondering if there is a proof that uses instead sequential compactness. My attempt is the following. Since each $K_n$ is non-empty, we can pick some point $x_n \in K_n$ for each $n$ (this requires the axiom of choice). Now consider the sequence $(x_n)_{n=1}^\infty$. By the nesting property, we have $x_n \in K_1$ for each $n$. Since $K_1$ is compact, this means there is a convergent subsequence $(x_{n_j})_{j=1}^\infty$ which converges to a point $p \in K_1$. We will now show that in fact $p \in K_n$ for each $n$, which would prove that $p \in \bigcap_{n=1}^\infty K_n$ (hence, the intersection is non-empty). Let $n$ be arbitrary. We have $n_j \geq j$ so if $j \geq n$ then $n_j \geq n$. By the nesting property, this means that $x_{n_j} \in K_n$ for all $j \geq n$. Thus $(x_{n_j})_{j=n}^\infty$ is a sequence of points in $K_n$ which converges to $p$. Since $K_n$ is compact, it is closed, so $p \in K_n$. It seems to me that the proof goes through, but all the proofs I could find online used covering compactness, which made me nervous that somehow using sequential compactness here doesn't work. I would be curious to hear if the proof above goes through (and if not, whether there is another way to use sequential compactness).
Your proof is indeed correct. As you noticed, it requires the axiom of choice, and that's perhaps the reason why it is avoided.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Confusing Limit With Logarithms So I am probably below the knowledge level of the average mathematician here- computer science student here studying algorithms at the graduate level and came across this peculiarity that I would appreciate some context to. Hopefully this is a softball question for you folks. Why is $$\lim_{x\to \infty} \frac{(\ln (\ln x))^{(\ln (\ln (x) )}}{x^2} = 0$$ But, $$\lim_{x\to \infty} \frac{(\ln (\ln x))^{\ln (x) }}{x^2} = \infty$$ I have tried to apply L'Hopital's rule to this case which may be correct, but gives some annoying derivatives to decipher. Would appreciate a bit more reasoning as to why $x^2$ dominates the first case but not the second. To my intuition, since $ln(ln(x))$ approaches infinity similar to $ln(x)$ (albeit incredibly slowly), $x^2$ should be dominated in both cases. Wolfram seems to disagree.
We can convert this limit of logs by making the substitution $x=e^{e^{y}}$ (in the spirit of Yuriy S's comment). This will of course not really change the structure of the limit, but I at least find it easier to think about the size of exponentials than the size of logarithms. Noting that $\ln(e^{e^y})=e^y$ and $\ln(\ln(e^{e^y}))=y$, we have $$\lim_{x\to \infty} \frac{(\ln (\ln x))^{(\ln (\ln (x) )}}{x^2} = \lim\limits_{y\to\infty}\frac{y^y}{\left(e^{e^y}\right)^2}=\lim\limits_{y\to\infty}\frac{e^{y\ln(y)}}{e^{2e^y}}=0.$$ In the exponents above, $2e^y$ easily beats $y\ln(y)$, so the limit is $0$. On the other hand $$\lim_{x\to \infty} \frac{(\ln (\ln x))^{\ln (x) }}{x^2} = \lim\limits_{y\to\infty}\frac{y^{e^y}}{\left(e^{e^y}\right)^2}=\lim\limits_{y\to\infty}\frac{e^{e^y\ln(y)}}{e^{2e^y}}=\infty.$$ In the exponents above, $e^y\ln(y)$ beats $2e^y$, so the limit is infinite. Final note: the power of $x$ in the denominator of the limit is really a red herring. Any positive power of $x$ will give the same results.
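In the substituted variable $y=\ln(\ln x)$, the comparison of exponents can be checked numerically. Note that $e^y\ln y > 2e^y$ only once $\ln y>2$, i.e. $y>e^2\approx 7.39$, which corresponds to astronomically large $x$ — so plugging moderate values of $x$ into the original ratios is misleading:

```python
import math

# Logarithms of the two ratios after substituting x = e^(e^y):
#   h(y) = y*ln(y) - 2*e^y      (first ratio: tends to -infinity)
#   g(y) = e^y*(ln(y) - 2)      (second ratio: tends to +infinity once ln(y) > 2)
def h(y):
    return y * math.log(y) - 2 * math.exp(y)

def g(y):
    return math.exp(y) * (math.log(y) - 2)

for y in (8, 10, 12, 14):
    assert h(y) < 0
assert h(14) < h(12) < h(10)      # decreasing toward -infinity

for y in (8, 10, 12, 14):
    assert g(y) > 0               # ln(y) > 2 once y > e^2 ~ 7.39
assert g(14) > g(12) > g(10)      # increasing toward +infinity
```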
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Impossible hyperbolic integral I'm quite good at integral calculation; it very rarely happens that I'm not able to solve an indefinite integral. I've tried for 7 days to solve this monster but in the end I surrendered to this beast. I'm sure that it has a solution because it was a challenge from my calculus professor. Unfortunately, he didn't give the solution of this integral. It had been sitting unsolved for years, until 7 days ago when I found it in my old notebook. Can some expert help me? $$\int \frac{\sinh x}{\sqrt{2\cosh x+2}}\cdot\frac{1+\sqrt{\dfrac{\sqrt{\cosh x+1}+\sqrt{2}}{\sqrt{\cosh x+1}+2\sqrt{2}}}}{1-\sqrt[3]{\dfrac{\sqrt{\cosh x+1}+\sqrt{2}}{\sqrt{\cosh x+1}+2\sqrt{2}}}}\,dx$$
First make the substitution $y=\cosh x,$ then simplify the radicals by removing radicals from all denominators, to obtain the simpler $$\int\frac{\sqrt {2y+2}}{2y+2}\frac{\sqrt{(y-7)(y-3-\sqrt{2y+2})}}{\left({(y-7)^2(y-3-\sqrt{2y+2})}\right)^{1/3}}\mathrm d y.$$ Then make the substitution $2y+2=z^2.$ The integral becomes $$\int\left(\frac{z^2-8-2z}{z^2-16}\right)^{1/6}\mathrm d z.$$ Note that $$\frac{z^2-8-2z}{z^2-16}=1-\frac{2}{4+z}.$$ Can you complete it now? OK, seems I need to add that you only need make the substitution $$w^6=\frac{z^2-8-2z}{z^2-16},$$ so that the integral becomes $$12\int\frac{w^6\mathrm d w}{(1-w^6)^2},$$ which is elementary.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Why if $\lim_{x \to a}f(x)=b$ then $\lim_{n \to \infty}f(x_n)=b$? Why is it that $$\lim_{x \to a}f(x)=b \Rightarrow \lim_{n \to \infty}f(x_n)=b?$$ This is often used to prove that the same properties that apply to sequences also apply to functions, but I don't know where this implication comes from. It is given that $\lim_{n \to \infty}x_n=a$, which is derived from the fact that $a \in D'f$ (the set of limit points of the domain of $f$). I am not sure about this part either, i.e., why $a$ being a limit point implies that there is a sequence within the domain that converges to $a$.
Let $(x_{n})_{n\in \mathbb{N}}$ be a sequence with $x_n \neq a$ such that $\lim_{n \to \infty} x_{n} = a$. If $\lim_{x \to a} f(x) = b$, then for all $\varepsilon >0$ there exists $\delta >0$ such that $0<|x-a|<\delta \implies |f(x)-b| < \varepsilon$. So fix $\varepsilon>0$ and take the corresponding $\delta>0$. Since $x_n \to a$, there exists $N\in \mathbb{N}$ such that $n≥N$ implies $0<|x_{n}-a|<\delta$, and that in turn implies $|f(x_{n}) - b |<\varepsilon$. In other words, $\lim_{n\to \infty} f(x_{n}) = b$. (We need $x_n \neq a$ here because the limit of $f$ at $a$ says nothing about the value $f(a)$ itself.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3336871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
$3$ times a number, plus $4$, is equal to $10$ I was helping my brother with his math homework and there was this question: $3$ times a number, plus $4$, is equal to $10$. What is that number? My first thought was that $3x+4 = 10$ and then, solve for $x$. But then, my brother told me maybe it’s $3(x+4)=10$. Now I’m confused too. Which one is right? I think it’s the former given that “plus $4$” was in-between commas but I’m not sure. Thanks in advance!
Almost certainly the problem is intended to be $3x+4=10$. But your brother has made an important discovery. Until the sixteenth century, all mathematical textbooks would write out such problems in words, like "3 times a number, plus 4, is equal to 10". As your and your brother have discovered, this can be difficult to interpret correctly, and certainly takes up a lot of space. If simply writing down a mathematical equation is difficult and time consuming and error prone then it makes you less likely to spend time working out how to solve the equation. And if you do find a method of solving a particular type of equation then it is much harder to explain your method to other people. The adoption of standard mathematical symbols such as $+$ and $=$ and the use of letters like $x$ to stand for unknown quantities was a big step forwards in the development of modern mathematics.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Proving that $\rho(x,y) = \frac{d(x,y)}{1+d(x,y)}$ is a metric and that $\rho(x,y)$ and $d(x,y)$ are equivalent metrics As you can see the proof is divided into 2, the first part consists on proving that $\rho(x,y)$ is a metric My attempt i) $\rho(x,y) \geq 0$, which is clear since $d(x,y)$ is a metric, and it is $0$ if and only if $x=y$ ii) Symmetry, which also comes from the fact that $d(x,y)$ is a metric iii) The triangular inequality, for which I have already proven that $f(x)=\frac{x}{1+x}$ is non decreasing, however I am stuck proving f(x) is concave. A fact that I need in order to show that $f(d(x,y)) \leq f(d(x,z)) + f(d(y,z))$ which I need to show the triangular inequality. I know how to prove a function is concave, I am only getting a little bit stuck on the algebra. For the second part, I know that the $\rho(x,y) , d(x,y)$ metrics are equivalent if $\exists c_1, c_2 \in \mathbb{R}$ such that $c_1d(x,y)\leq \rho(x,y) \leq c_2d(x,y)$ I have proven that $\rho(x,y) \leq d(x,y) $ since $\frac{1}{1 + d(x,y)} \leq 1$ Hence, $c_2 = 1$. However I can't get the other inequality.
In my functional analysis class, we were allowed to prove metric equivalence as follows. Plot the function $f(x)=\dfrac{x}{1+x}$ (so that $\rho = f(d)$): its graph increases from $y=0$ toward $y=1$, and it is a homeomorphism of $[0,\infty)$ onto $[0,1)$. Then note that inside any open ball on the $x$-axis you can push those values forward to the $y$-axis and fit an open ball inside the result. Conversely, once you have an open ball on the $y$-axis, you can trace it back down to the $x$-axis and fit an open ball inside it. Therefore the two metrics generate the same open sets, i.e. they are topologically equivalent. (Note that the stronger condition $c_1 d(x,y)\leq\rho(x,y)$ cannot hold for any $c_1>0$ when $d$ is unbounded, since $\rho<1$; topological equivalence is the right notion here.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Number of roots in the first quadrant I want to find how many roots of the equation $z^4+z^3+1=0$ lie in the first quadrant. How can I find this using Rouché's theorem?
Look at the family of polynomials $z^4+tz^3+1$. For $t=0$ we know the solutions $z=\sqrt{\frac12}(\pm1\pm i)$ which has one root per quadrant. We additionally know that the set of roots is continuous in the coefficients of the polynomial. Now if changing $t$ from $0$ to $1$ were to change the number of roots in the first quadrant, one of the other roots would have to pass the positive $x$ or $y$ axis. However, on the positive $x$ axis the real part $1+x^3+x^4$ and on the $y$ axis the real part $1+y^4$ are never zero. Thus $$|z^4+tz^3+1|\ge |z^4+1|-t|z|^3>0$$ on the boundary of the first quadrant for $t\in [0,1]$. There is no change in the number of roots over this homotopy. roots of $z^4+tz^3+1$ for red: $t=0$ over blue to green: $t=1$
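As a numeric cross-check of the root count, one can estimate the winding number of $f(z)=z^4+tz^3+1$ around the boundary of a quarter disk in the first quadrant — a discrete version of the argument principle. The radius 3 and sample counts below are arbitrary but safe, since all roots satisfy $|z|\le 2$:

```python
import cmath, math

def f(z, t=1.0):
    return z**4 + t * z**3 + 1

def winding(points, t):
    # Total change of arg f along a closed polygonal contour, divided by 2*pi.
    total = 0.0
    for k in range(len(points)):
        a, b = points[k], points[(k + 1) % len(points)]
        total += cmath.phase(f(b, t) / f(a, t))
    return total / (2 * math.pi)

# Boundary of the quarter disk of radius 3 in the first quadrant.
R, n = 3.0, 4000
contour = (
    [R * k / n for k in range(n)]                                     # [0, R] on the real axis
    + [R * cmath.exp(1j * (math.pi / 2) * k / n) for k in range(n)]   # quarter arc
    + [1j * R * (n - k) / n for k in range(n)]                        # [iR, 0] back down
)

# By the argument principle, the winding number counts the enclosed roots;
# it stays equal to 1 along the whole homotopy t in [0, 1].
for t in (0.0, 0.5, 1.0):
    assert round(winding(contour, t)) == 1
```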
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
second derivative of a function equals the function squared Can someone solve the following differential equation for me please? The second derivative of a function equals the function squared. Find $y(x)$ if $$ \frac{d^2 y}{dx^2} = y^2 $$
How familiar are you with Weierstrass elliptic functions? Let $\mathcal{P}(x;a,b)$ be that value of $z$ which makes $$ \int_z^{\infty} \frac{1}{\sqrt{4t^3 - at - b}}dt = x $$ This is the Weierstrass $\mathcal{P}$ function. The general solution to $$ \frac{d^2 y}{dx^2} = y^2 $$ is $$ y = \sqrt[3]{6} \,\mathcal{P} \left( \frac{x+c_1}{\sqrt[3]{6}}; 0, c_2\right) $$ where $c_1$ and $c_2$ are constants determined by the initial conditions. The Weierstrass $\mathcal{P}$ function looks kind of like your top row of front teeth, as seen by a nearsighted dentist.
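One can at least verify the structure behind this numerically: multiplying $y''=y^2$ by $y'$ and integrating gives the first integral $(y')^2=\tfrac{2}{3}y^3+C$, which is exactly the kind of equation the Weierstrass function inverts. A sketch (RK4 with arbitrarily chosen initial data $y(0)=1$, $y'(0)=0$) checks that this quantity is conserved along numerical solutions:

```python
def rhs(state):
    y, v = state
    return (v, y * y)          # y' = v, v' = y^2

def rk4_step(state, h):
    # One classical Runge-Kutta step for the 2-dimensional system.
    def add(s, k, c):
        return (s[0] + c * k[0], s[1] + c * k[1])
    k1 = rhs(state)
    k2 = rhs(add(state, k1, h / 2))
    k3 = rhs(add(state, k2, h / 2))
    k4 = rhs(add(state, k3, h))
    y, v = state
    return (y + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            v + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def invariant(state):
    # The first integral (y')^2 - (2/3) y^3, constant along solutions.
    y, v = state
    return v * v - (2.0 / 3.0) * y**3

state = (1.0, 0.0)             # y(0) = 1, y'(0) = 0, so the constant is -2/3
e0 = invariant(state)
h = 0.001
for _ in range(500):           # integrate to t = 0.5
    state = rk4_step(state, h)
assert abs(invariant(state) - e0) < 1e-8
```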
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Logic: How to prove this argument is not valid Good day to all. I need clarification on how to show this argument is not valid by finding a counterexample, but without using a truth table. Since there are 5 propositions I would need a 32-row truth table, which would be too time consuming to construct. The premises and conclusion are: p; p v q; p --> (r --> s); t --> r; ∴ ~s --> ~t. Keeping it short, I have this in the end: p is True, q is False, r is True, s is False, t is True. If so, how is that a counterexample? Shouldn't the argument be valid? Thanks. This is how I worked it out: my understanding is that for the first four premises we can't deduce anything by an inference rule, as their values could be either true or false and the result would still be true. Hence we focus on the conclusion: ~t is "False", which makes ~s "False", so that the conclusion will be true. We also narrow the inputs of t --> r to be True and True, since the negation of t is False. So on and so forth until we reach the first premise.
The argument is valid. For the four premises to be all true, we must evaluate (1) $p$ as true, (2) $q$ as either true or false (we cannot infer which), (3) $r$ as false or $s$ as true (since $r\to s$ follows from premise 3 once $p$ is true), and (4) $t$ as false or $r$ as true. In short, whenever the premises hold we must have $s$ true or $t$ false: if $t$ is true, then $r$ is true by (4), and then $s$ is true by $r\to s$. Therefore $\neg s\to \neg t$ is evaluated as true whenever the premises all are.
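Since there are only $2^5=32$ assignments, a machine can check the truth table the asker didn't want to write out by hand; this sketch confirms there is no counterexample:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

valid = True
for p, q, r, s, t in product([False, True], repeat=5):
    premises = p and (p or q) and implies(p, implies(r, s)) and implies(t, r)
    conclusion = implies(not s, not t)
    if premises and not conclusion:
        valid = False        # a counterexample would land here
assert valid                 # no assignment makes premises true and conclusion false
```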
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Linear ordering isomorphic to an initial segment Question: Suppose $\left( L, \prec \right)$ is a linear ordering such that for every $X \subseteq L, \left( X, \prec \right)$ is isomorphic to an initial segment of $\left( L, \prec \right)$. Show that $\left( L, \prec \right)$ is a well ordering. This is what I have gotten so far. In order to show that $\left( L, \prec \right)$ is a well ordering I need to show that $\left( L, \prec \right)$ is a linear ordering and every non-empty subset of $L$ has a $\prec$-least member. Since $\left( X, \prec \right) \cong \left( L, \prec \right)$ there is a bijective function $f:X \to L$ such that for any $x, y \in X$, if $x \prec y$ then $f(x)\prec f(y)$. Since $f$ sends $x \in X$ to $f(x) \in L$, there is a subset $W=\{v \in L : f(v) \prec v \}$ which is non-empty. This is as far as I have gotten, and I am stuck after this. Also, is my set $W$ wrong? Help is much appreciated.
We don't have an isomorphism $f:(X,\prec) \, \to\, (L,\prec)$, only an order-preserving embedding such that the range of $f$ is an initial segment of $L$. Now it's very easy: if we show that $L$ has a smallest element, then every non-empty initial segment of $L$ will have a smallest element, and thus, since the embedding is order-preserving, $X$ will have a smallest element too. To show that $L$ has a minimum, simply take any singleton $X:=\{a\}$ with $a\in L$. By the hypothesis, $X$ is isomorphic to an initial segment of $L$, namely $\{f(a)\}$; since an initial segment is downward closed, nothing lies below $f(a)$, so $f(a)$ must be the least element of $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Regarding perfect squares Is there any positive integer $n > 2$ such that $(n - 1)(5n - 1)$ is a perfect square? It is observed that $(n - 1)(5n - 1)$ is of the form $4k$ or $4k+ 1$. Affirmative answers were given by Pspl and Mindlack (by providing some examples). Now my question is the following: Is there any characterization of positive integer $n$ such that $(n - 1)(5n - 1)$ is a perfect square?
If $m^2=(n-1)(5n-1)$, then $5m^2=5(n-1)(5n-1)=(5n-3)^2-4$. Write this as $x^2-5y^2=4$, for $x=5n-3$ and $y=m$. Write this as $N\left(\frac{x+y\sqrt5}{2}\right)=1$. Then $\frac{x+y\sqrt5}{2}=\left(\frac{1+\sqrt5}{2}\right)^{2k}$, since $\frac{1+\sqrt5}{2}$ is a fundamental unit of norm $-1$. Thus, $\frac{x+y\sqrt5}{2}=\left(\frac{3+\sqrt5}{2}\right)^{k}$. We just need to find those $x$ such that $x \equiv -3 \equiv 2 \bmod 5$. Write $\frac{x_k+y_k\sqrt5}{2}=\left(\frac{3+\sqrt5}{2}\right)^{k}$. Then $x_{k+1}=\frac{3x_k+5y_k}{2}$ and $y_{k+1}=\frac{x_k+3y_k}{2}$, with $x_0=2, y_0=0$. Then, $x_{k+2}=\frac{7x_k+15y_k}{2}$ and $y_{k+2}=\frac{3x_k+7y_k}{2}$. This gives $x_{k+2} \equiv x_k \bmod 5$. Thus $x_{2k} \equiv 2 \bmod 5$, since $x_0 = 2$. Thus, the solutions of $m^2=(n-1)(5n-1)$ are exactly $n=\frac{x_{2k}+3}{5}$ and $m=y_{2k}$. Equivalently, $n_k=\frac{u_{k}+3}{5}$ and $m=v_{k}$, where $\frac{u_k+v_k\sqrt5}{2}=\left(\frac{1+\sqrt5}{2}\right)^{4k}=\left(\frac{7+3\sqrt5}{2}\right)^{k}$, which gives $u_{k+1}=\frac{7u_k+15v_k}{2}$ and $v_{k+1}=\frac{3u_k+7v_k}{2}$, with $u_0=2, v_0=0$.
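The recurrence at the end is easy to run mechanically; this sketch (the function name is mine) generates the first few solution pairs from the unit recurrence and confirms each one:

```python
def solutions(count):
    """Generate (n, m) with m**2 == (n-1)*(5*n-1), via u_{k+1}=(7u+15v)/2, v_{k+1}=(3u+7v)/2."""
    u, v = 2, 0            # (u + v*sqrt(5))/2 = ((7 + 3*sqrt(5))/2)**k at k = 0
    out = []
    for _ in range(count):
        u, v = (7*u + 15*v) // 2, (3*u + 7*v) // 2
        n, m = (u + 3) // 5, v       # u = 2 mod 5, so u + 3 is divisible by 5
        out.append((n, m))
    return out

for n, m in solutions(6):
    assert m * m == (n - 1) * (5 * n - 1)
```

The first pairs are $(n,m)=(2,3), (10,21), (65,144), \dots$, e.g. $(10-1)(50-1)=441=21^2$.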
{ "language": "en", "url": "https://math.stackexchange.com/questions/3337907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How To Use The Steps on Page 149 in Calculus Made Easy To Solve Chapter XIV Example 12 Please how do I take the derivative of $ y = \left(\frac{1}{a^x}\right)^{ax} $ from Calculus Made Easy Chapter XIV Example 12 using the steps used to solve $y=a^x$ on page 149. The steps are \begin{align*} y & = a^x\\ \log_ey & = x\log_e a\\ x & =\frac{\log_ey}{\log_ea} = \frac{1}{\log_ea}\times \log_ey\\ \frac{dx}{dy} & =\frac{1}{\log_ea}\times \frac{1}{y} = \frac{1}{a^x \times \log_ea}\\ \frac{dy}{dx} & = \frac{1}{\frac{dx}{dy}} = a^x \times \log_ea \end{align*} These were my steps and where I got stuck \begin{align*} y & = \left(\frac{1}{a^x} \right)^{ax}\\ y & = a^{-ax^2}\\ \log_ey & = \log_ea^{-ax^2}\\ \log_ey & = -ax^2\log_ea\\ \frac{\log_ey}{\log_ea} & = -ax^2 \end{align*}
A good place to start is to consider that $$\bigg(\dfrac{1}{a^x} \bigg)^{ax} = (a^{-x}) ^{ax} = a^{-ax^2}.$$ Once you get there, apply logarithmic differentiation to get \begin{align*} y & = a^{-ax^2}\\ \ln y& = -ax^2 \ln a\\ \dfrac{1}{y}\cdot\dfrac{dy}{dx}&=-2a\ln a \cdot x\\ \dfrac{dy}{dx}&= -2a\ln a \cdot a^{-ax^2}\cdot x \end{align*} ...assuming you are solving for the derivative of $y$ with respect to $x$. Edit: Following the steps in your book, we can do the following: \begin{align*} y & = a^{-ax^2} \\ \ln y & = -ax^2 \ln a \\ -ax^2 & = \dfrac{1}{\ln a}\cdot \ln y \\ -2ax \cdot \dfrac{dx}{dy} & = \dfrac{1}{\ln a}\cdot \dfrac{1}{y} \\ \dfrac{dx}{dy} & = -\dfrac{1}{2ax\cdot\ln a}\cdot \dfrac{1}{y} \\ \dfrac{dy}{dx} & = -2ax\cdot \ln a\cdot y \\ \dfrac{dy}{dx} & = -2a\ln a\cdot a^{-ax^2}\cdot x \end{align*}
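As a sanity check on the final formula, one can compare it against a central finite difference at an arbitrary point (the values of $a$, $x$, and the step $h$ below are just sample choices):

```python
import math

def y(x, a):
    return a ** (-a * x * x)

def dydx(x, a):
    # claimed closed form: -2a ln(a) * a^(-a x^2) * x
    return -2 * a * math.log(a) * a ** (-a * x * x) * x

a, x, h = 1.5, 0.7, 1e-6
numeric = (y(x + h, a) - y(x - h, a)) / (2 * h)
assert abs(numeric - dydx(x, a)) < 1e-6
```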
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Elements and Subsets This is my first discrete math course, so this question might seem simple, but I would like clarification. Say I have a set $A=\{1,2,3\}$ and a set $D=\{1,2,3\}$. Is it true that set $A$ is both an element and a subset of set $D$? Now say $D=\{\{1,2,3\}\}$. Is it true that set $A$ is no longer an element of $D$ because $D$ is a set which contains a set that contains $1,2,3$? A third case. Say I have set $D=\{x,16,\{1,2,3\}\}$. Is set $A$ an element and proper subset of $D$?
In the first case, $A = D$, which is different from $A \subseteq D$ (true) or $A \in D$ (false). In the second case, $A \in D$, while $A \ne D$ and $A \not\subseteq D$. In the third case, $A \in D$ but $A \not\subseteq D$: for $A \subseteq D$ we would need $1$, $2$, and $3$ to each be elements of $D$, and at most one of them can equal $x$.
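Python's `frozenset` (hashable, so sets may contain sets) makes the three cases concrete; here I take $x$ to be the string `"x"` just to have a definite value:

```python
A = frozenset({1, 2, 3})

D1 = frozenset({1, 2, 3})                        # case 1: D = {1,2,3}
D2 = frozenset({frozenset({1, 2, 3})})           # case 2: D = {{1,2,3}}
D3 = frozenset({"x", 16, frozenset({1, 2, 3})})  # case 3, with x := "x"

assert A == D1 and A <= D1 and A not in D1   # equal and a subset, not an element
assert A in D2 and not A <= D2               # an element, not a subset
assert A in D3 and not A <= D3               # an element, not a subset
```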
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Continuous convolution between two functions I'm studying for an exam and I'm stuck on a simple exercise about convolution between two functions. It says: A system has a triangular impulse response (LSF) centered at the origin of the plane, $h(x)=Λ(x)$. We input an image with two impulses, defined as $f(x)=δ(x-x_0)+δ(x-2x_0)$. Get the output of the system. I was thinking to use this formula: $\int_{-\infty }^{+\infty} f(x-x_0) h(x_0) dx_0$ So it would become: $\int_{-\infty }^{+\infty} (δ(x-2x_0)+δ(x-3x_0)) Λ(x_0) dx_0$ I don't know if it makes sense, so I'll appreciate any help. Thank you.
Since the response of the system to the input $\delta(x)$ is the impulse response, $$h(x)=Λ(x),$$ and the input to the system is a sum of two shifted impulses, $$f(x)=\delta(x-x_0)+\delta(x-2x_0),$$ the output of the system is the convolution $$Output=(f\star h)(x)=\int_{-\infty }^{+\infty} \Big(\delta(\tau-x_0)+\delta(\tau-2x_0)\Big)\, Λ(x-\tau)\,d\tau.$$ By the sifting property of the $\delta$ function, $$\int g(\tau)\, \delta(\tau-\tau_0)\, d\tau=g(\tau_0),$$ the final result is $$Λ(x-x_0)+Λ(x-2x_0),$$ i.e. two copies of the triangle, centered at $x_0$ and $2x_0$.
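The same thing can be seen discretely: convolving a pair of unit impulses with a sampled triangle reproduces a shifted copy of the triangle at each impulse location. A pure-Python sketch (a stand-in for `numpy.convolve` in "full" mode):

```python
def convolve(f, h):
    """Full discrete convolution of two sequences."""
    out = [0.0] * (len(f) + len(h) - 1)
    for i, fv in enumerate(f):
        for j, hv in enumerate(h):
            out[i + j] += fv * hv
    return out

tri = [0.0, 0.5, 1.0, 0.5, 0.0]   # sampled triangle, peak at index 2
f = [0.0] * 11
f[3] = 1.0                         # impulse at index 3 ("x0")
f[6] = 1.0                         # impulse at index 6 ("2*x0")
out = convolve(f, tri)
# each impulse shifts the triangle: peaks land at indices 3+2 and 6+2
assert out[5] == 1.0 and out[8] == 1.0
```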
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Rudin exercise: If $f$ is a diferentiable mapping on an open connected set $E$ and $f'(x) = 0$ for all $x \in E$, then $f$ is constant. Suppose $f: E \subseteq \mathbb{R}^n \to \mathbb{R}^m$ is a differentiable map with $E$ open and connected. If $f'(x)=0$ for all $x \in E$, prove that $f$ is constant. My attempt: For all $x \in E$, choose an element $\epsilon_x >0$ such that the ball $B(x,\epsilon_x) \subseteq E$. Then it is obvious that $$E= \bigcup_{x \in E} B(x, \epsilon_x)$$ Now, consider the corollary of theorem 9.19 in Rudin: Suppose $f$ maps an open, convex set $E\subseteq \mathbb{R}^n$ into $\mathbb{R}^m$, $f$ is differentiable in $E$ and $f'(x) = 0$ for all $x\in E$, then $f$ is constant. Applying this proposition to the map $f$ restricted to a ball $B(x, \epsilon_x)$ implies that $f$ is constant on every ball in the union written above. Intuitively, I can see that the connectedness will imply that we can get from one ball to another balls using chains of "between-balls', or otherwise we will get a separation of $E$ as disjoint union of open sets. I struggle to make this formal though. Any help is much appreciated!
For your attempt, note that $E$ is actually path connected, and the range of a path is always compact. Can you see how to proceed? Here’s a different method altogether: Pick any value $c$ taken by the function. Consider the set $$\{x\in E : f(x)=c\}$$ By continuity, this is closed. By the lemma you stated, it is open. By connectedness, it has to be either $E$ or empty. Since it is not empty, it has to equal $E$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Algorithm to decompose a number into the product of an integer and a base-two exponential So I've been asked to code an algorithm that decomposes an integer into the product of a power of two and some integer. Something like number = k·(2^n), with k and n integers and k restricted to being odd. My question is not about the programming aspect of this problem but rather the algebraic one, as I have failed to come up with a standard procedure for finding one. The problem says that the combination k·(2^n) is unique for every integer. I've managed to find the combinations for certain numbers, but it gets increasingly difficult as the number grows. For example, given the number 12 the only possible combination under these restrictions is 12 = 3·(2^2). Please feel free to edit and improve anything you feel necessary, as English is not my mother language.
One simple way is to start with the $k$ you are given and $n=0$. Then divide $k$ by $2$ as many times as possible, incrementing $n$ each time. while $k$ is even $\quad k=k/2 $ $\quad n=n+1$
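The loop above translates directly into a few lines of Python:

```python
def decompose(m):
    """Return (k, n) with m == k * 2**n and k odd, for a positive integer m."""
    k, n = m, 0
    while k % 2 == 0:
        k //= 2
        n += 1
    return k, n

assert decompose(12) == (3, 2)   # 12 = 3 * 2**2
assert decompose(7) == (7, 0)    # odd numbers need no factor of 2
```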
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Possible Error in Elementary Analysis by Ross I am having some serious trouble understanding example 5 from chapter 8 of Elementary Analysis by Ross. The example is to do with proving sequence are convergent, and here is the definition of convergence that the example uses: A sequence ($s_n$) is said to converge to the real number $s$ provided that for each $\epsilon>0$ there exists a number $N$ such that $n>N$ implies $|s_n-s|<\epsilon$. Now, the example is as follows: Let ($s_n$) be a sequence of nonnegative real numbers and suppose $s=\text{lim}s_n$. Note that $s \geq 0$. Prove that lim$\sqrt{s_n}=$lim$\sqrt{s}$. Next, the author divides the work into two cases: when $s=0$ and when $s>0$. The case in which $s=0$ is left as an exercise and just by inspection appears trivial, but my question is regarding when $s>0$. He shows that for $\epsilon>0$ we must prove $\exists N \in \mathbb{N}$ such that $n>N$ implies $|\sqrt{s_n}-\sqrt{s}|<\epsilon$ For $s>0$, we have that $$|\sqrt{s_n}-\sqrt{s}|=\frac{|\sqrt{s_n}-\sqrt{s}||\sqrt{s_n}+\sqrt{s}|}{|\sqrt{s_n}+\sqrt{s}|}=\frac{|s_n-s|}{\sqrt{s_n}+\sqrt{s}}\leq\frac{|s_n-s|}{\sqrt{s}},$$ So we will select $N$ so that $|s_n-s|<\sqrt{s}\epsilon$ for $n>N$. What I am confused about is as to where he got $|s_n-s|<\sqrt{s}\epsilon$. If you reorganize the equation above, you get $\sqrt{s}|\sqrt{s_n}-\sqrt{s}|\leq|s_n-s|$. I don't understand how $|s_n-s|<\sqrt{s}\epsilon$ when $\sqrt{s}|\sqrt{s_n}-\sqrt{s}|\leq|s_n-s|$ and $|s_n-s|<\epsilon$. Thanks for any clarification. There is a good chance I misunderstood something up to this point so any help is much appreciated.
What I am confused about is as to where he got $|_−|<\sqrt{s} \epsilon$. First, what he should have said was that for any $\epsilon' > 0$, we can define $\epsilon = \sqrt{s}\epsilon'$. Now the assumption in your first highlighted box shows that there's some number $N$ such that $n > N$ implies $$ |s_n - s | < \epsilon = \sqrt{s} \epsilon' $$ Now since $\epsilon'$ was an arbitrary variable name, we can replace it with the name $\epsilon$, and conclude that for any $\epsilon$, there's a number $N$ such that $n > N$ implies $$ |s_n - s | < \sqrt{s}\epsilon. $$ Why did the author choose to establish this apparently odd fact? Because it's the fact that'll make it possible to derive the conclusion the author wants, namely that for any $\epsilon$, there's an $N$ such that $n > N$ implies $$ |\sqrt{s_n} - \sqrt{s} | < \epsilon. $$ How'd the author know to do this? The author probably worked backwards to find it; it's how a lot of proofs like this are developed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Limit of Lebesgue integrals Let $a>0$ and $f,g:[0,+\infty) \to \Bbb{R}$ where $f$ is a Lebesgue integrable function and $g$ has the property: $$|\frac{g(t)}{t}| \leq a, \forall t \geq 1$$. Prove that $\lim_{t \to +\infty}\frac{1}{t}\int_1^tf(x)g(x)dx = 0$ Here is my proof: $$|\frac{1}{t}\int_1^tf(x)g(x)dx| \leq \frac{1}{t}\int_1^t|f(x)||g(x)|dx$$ $$=\frac{1}{t}\int_1^{\sqrt{t}}|f(x)||g(x)|dx+\frac{1}{t}\int_{\sqrt{t}}^t|f(x)||g(x)|dx$$ $$\leq \frac{a||f||_1}{\sqrt{t}}+a\int_{\sqrt{t}}^t|f(x)|dx \to 0$$ as $t \to +\infty$ because $\int_{\sqrt{t}}^t|f(x)|dx \to 0$ from integrability of $f$. Is this proof correct,or i am missing something? Thank you in advance.
Your proof is correct. But you can also get this as an immediate consequence of DCT: $\frac 1 t I_{(1,t)}(x) f(x)g(x) \to 0$ as $t\to\infty$ for each $x$, and this function is dominated in absolute value by $a|f|$, which is integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What does it mean to take the ratio of two equations? The line joining the origin and the point of intersection of the curves $ax^2+2hxy+by^2+2gx=0$ and $a_1x^2+2h_1xy+b_1y^2+2g_1x=0$ will be mutally perpendicular if $g(a_1+b_1)=g_1(a+b)$ This is solved in my reference as $$ ax^2+2hxy+by^2=-2gx\\ a_1x^2+2h_1xy+b_1y^2=-2g_1x\\ \color{red}{\frac{ax^2+2hxy+by^2}{a_1x^2+2h_1xy+b_1y^2}=\frac{g}{g_1}}\\ x^2(ag_1-a_1g)+2xy(hg_1-h_1g)+y^2(bg_1-b_1g)=0\\ \text{lines are perpendicular }\implies ag_1-a_1g+bg_1-b_1g=0\\ (a+b)g_1=(a_1+b_1)g $$ Mathematical steps are fine but I really do not understand the logic behind it, particularly the first two steps where we take the ratio of the two equations of the given curves ? Intuition Say, we have two lines $x+y=1$ and $x-y=-2$, we can solve it by substituting $y=x+2$ in $x+y=1\implies x+y=x+x+2=2x+2=2(x+1)=1\implies x=\dfrac{1}{2}-1=\dfrac{-1}{2}\implies y=\dfrac{3}{2}$. But, If I do $$ y=1-x\quad;\quad y=x+2\\ 1=\frac{-x+1}{x+2}\implies x+2=-x+1\implies2x=-1\implies x=-1/2\\ y=3/2 $$ So what does it mean to take the ratio of two equations ?
The step you are concerned with is justified by the simple fact that division is a well-defined binary operation on the real numbers, assuming that the denominator is nonzero. To be precise: Given $r,s,t,u \in \mathbb R$, if $r=t$ and if $s=u \ne 0$ then $\frac{r}{s} = \frac{t}{u}$. In your problem, you have two equations of real numbers that you are assuming to be true, namely $$ax^2+2hxy+by^2+2gx=0 \qquad a_1x^2+2h_1xy+b_1y^2+2g_1x=0 $$ Therefore the following two equations are true: $$\underbrace{ax^2+2hxy+by^2}_r=\underbrace{-2gx}_t \qquad \underbrace{a_1x^2+2h_1xy+b_1y^2}_s=\underbrace{-2g_1x}_u $$ So you can now apply well-definedness of division. However, having said that, it is still necessary to assume that $g_1$ is nonzero. I suspect it is a hidden hypothesis for your question that both $g$ and $g_1$ are nonzero, or maybe the case $g_1=0$ can be handled by a separate argument. It is also necessary to assume that $x \ne 0$. Again, maybe the case that $x=0$ should be handled by a separate argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
divisibility by 21 It's a simple problem but I am stuck. The multiples of 21 between 700 and 950 are 714, 735, 756, 777, 798, 819, 840, 861, 882, 903, 924, 945. So there are 12 multiples of 21 between 700 and 950, but $21 \times 12 = 252$. So 12 multiples of 21 should take an interval of 252, and there are only 250 numbers between 700 and 950, so there should be only 11 multiples. Where am I making a wrong argument? Please help. Thanks.
Let's look at multiples of $3$ between $2$ and $13$. There are $3, 6, 9, 12$, which is four of them. But $4 \times 3 = 12$, and between $2$ and $13$ is only $11$ numbers. The problem is that the number $4 \times 3$ doesn't represent the length of the interval containing the four multiples. There are only three gaps of length $3$, plus one more for the last number. That's a length of ten, which fits easily inside the $11$ spaces you've got.
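The same accounting can be verified directly: the 12 multiples span only $11$ gaps of $21$, which fits comfortably inside the interval.

```python
multiples = [m for m in range(700, 951) if m % 21 == 0]
assert len(multiples) == 12
assert multiples[0] == 714 and multiples[-1] == 945
# 12 multiples means only 11 gaps of length 21:
assert multiples[-1] - multiples[0] == 11 * 21   # 231 <= 250
```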
{ "language": "en", "url": "https://math.stackexchange.com/questions/3338986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
What is the Fourier transform of $1$? For sure, $g(x)=1$ has no Fourier transform in the classical sense, but it has a Fourier transform in the distributional sense. We have for $\varphi \in \mathcal C_0^\infty (\mathbb R)$, $$\left<\hat 1,\varphi \right>:=\left<1,\hat \varphi \right>=\int_{\mathbb R}\hat \varphi (x)dx=\int_{\mathbb R}\hat \varphi (x)e^{2i\pi0x}dx=\varphi (0)=\left<\delta ,\varphi \right>.$$ So in the end $\hat 1=\delta$. Could someone explain what this means? Because $\hat 1$ doesn't make sense in the strong sense, but as a distribution it is $\delta$, and I'm not sure how to interpret this. Could someone help me to understand?
$g$ is a locally integrable function, hence a tempered distribution. Fourier transform of a tempered distribution $u$ is defined by $\hat {u} (\phi)=u(\hat {\phi})$.
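The defining identity $\langle\hat 1,\varphi\rangle=\langle 1,\hat\varphi\rangle=\varphi(0)$ can be sanity-checked numerically on a test function whose transform is known in closed form. Under the convention $\hat\varphi(\xi)=\int\varphi(x)e^{-2\pi i x\xi}\,dx$ the Gaussian $\varphi(x)=e^{-\pi x^2}$ is its own Fourier transform, so $\int_{\mathbb R}\hat\varphi$ should equal $\varphi(0)=1$ (grid parameters below are arbitrary):

```python
import math

def phi(x):
    return math.exp(-math.pi * x * x)

# Riemann sum of phi (= phi_hat) over [-10, 10]; the tails beyond are negligible
n = 20001
h = 20.0 / (n - 1)
integral = h * sum(phi(-10.0 + i * h) for i in range(n))
assert abs(integral - 1.0) < 1e-6   # matches phi(0) = 1, i.e. <1, phi_hat> = <delta, phi>
```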
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is the sum of the second numbers in the first $100$ rows of Pascal's triangle (excluding the first row)? What is the sum of the second numbers in the first $100$ rows of Pascal's triangle (excluding the first row, the row containing a single $1$)? The sum should be from the second to the hundredth row. Starting from the second row, I initially thought this meant you count from the left two numbers. So it would be $$1+2+3+4+\cdots+99$$ This means I get $4950$. I thought this would be too simple of a solution. Could someone tell me if the addition I did above is all the question is asking from me?
To make the computations more transparent, I'll start indexing with 0. Therefore we want to sum up the elements $a_{k,1}$ ($k\geq 1$). I'll also assume that the elements that are 'outside' of the triangle are all equal to 0 (in particular $a_{k,k+1}=0$ and $a_{0,1}=a_{0,2}=a_{1,2}=0$). Note that for $n\geq 1$ we have: $$a_{n,2}=a_{n-1,1}+a_{n-1,2}$$ Therefore, by repeatedly replacing the element $a_{*,2}$ on the right side with the analogous expression, we obtain $$a_{n,2}=a_{n-1,1}+a_{n-2,1}+...+a_{1,1}+a_{0,1}$$ This is the sum of the $n$ elements with index 1 from the first $n$ rows. Of course $a_{i,j}=\binom{i}{j}$. For $n=100$ we then have: $$\sum_{k=0}^{99}a_{k,1}=a_{100,2}=\binom{100}{2}$$
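This instance of the hockey-stick identity is a one-liner to verify:

```python
from math import comb

# sum of the entries at index 1 in rows 0..99 equals the entry at index 2 in row 100
assert sum(comb(k, 1) for k in range(100)) == comb(100, 2) == 4950
```

So the asker's direct sum $1+2+\cdots+99=4950$ is indeed all the question asks for.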
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fréchet derivative of non-coercive energy functional $\frac{1}{2}\int_\Omega |\nabla u|^2 - \frac{1}{p}\int_\Omega |u|^p$ Let $\Omega \subset\subset \mathbb{R}^n$, $n \geq 3$, and for $2 \leq p \leq 2^* := 2n/(n - 2)$ define $E \colon H^1_0(\Omega) \to \mathbb{R}$ by $$E(u) := \frac{1}{2}\int_\Omega |\nabla u|^2 - \frac{1}{p}\int_\Omega |u|^p.$$ I want to show that $E$ is Fréchet differentiable. First of all, we compute the Gâteaux derivative as follows. $$\frac{d}{d\varepsilon}\bigg\vert_{\varepsilon = 0} E(u + \varepsilon v) = \int_\Omega \nabla u\nabla v - \int_\Omega u|u|^{p - 2}v.$$ Thus a good choice for the Fréchet derivative $dE(u) \in (H^1_0(\Omega))^*$ would be $$dE(u)(v) := \int_\Omega \nabla u\nabla v - \int_\Omega u|u|^{p - 2}v.$$ Then we compute $$E(u + v) - E(v) - dE(u)(v) = \frac{1}{2}\int_\Omega|\nabla v|^2 - \frac{1}{p}\int_\Omega\left(|u + v|^p - |u|^p\right) + \int_\Omega u|u|^{p - 2}v.$$ If we let $\|v\|_{H^1_0(\Omega)} \to 0$, the first term is no problem, however, I do not know how to handle the second and the third term. A friend of mine suggested to use Taylor, but I do not see how. Thank you!
It should first be noted that $E$ is well-defined thanks to the Sobolev embedding $$W^{k,p}(U)\subset L^{\frac{np}{n-kp}}(U)$$ for bounded, open $U\subset\mathbb R^n$ and $1\le k < \frac np$. Now, using Hölder's inequality and the Sobolev inequality $$\lVert u\rVert_{L^{\frac{np}{n-kp}}(U)}\lesssim \lVert u\rVert_{W^{k,p}(U)},$$ valid for $1\le k < \frac np$, with $k=1, p=2$, we get $$\left\lvert\int_\Omega u|u|^{p - 2}v\right\rvert\le\left(\int_{\Omega}\left(\lvert u\rvert^{p-1}\right)^{\frac{p}{p-1}}\right)^\frac{p-1}p \lVert v\rVert_{L^p} = \lVert u\rVert_{L^p}^{p-1}\lVert v\rVert_{L^p}\lesssim \lVert u\rVert_{L^p}^{p-1}\lVert v\rVert_{H^1}\to 0.$$ For the second term, the mean value theorem applied to $t\mapsto \lvert t\rvert^p$ gives $$\big\lvert\, |a+b|^p-|a|^p\,\big\rvert\le p\,(\lvert a\rvert+\lvert b\rvert)^{p-1}\lvert b\rvert$$ for $a,b\in\mathbb R$, so that by the same Hölder argument as before $$\frac 1p\int_\Omega \big\lvert\,|u + v|^p - |u|^p\,\big\rvert\le \int_\Omega (\lvert u\rvert+\lvert v\rvert)^{p-1}\lvert v\rvert\le\big\lVert\, \lvert u\rvert+\lvert v\rvert\,\big\rVert_{L^p}^{p-1}\lVert v\rVert_{L^p}\lesssim\big(\lVert u\rVert_{H^1}+\lVert v\rVert_{H^1}\big)^{p-1}\lVert v\rVert_{H^1}\to 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
roots of holomorphic functions I have to check whether there is a holomorphic function $f$ such that $f(z)^3=z^3-1$ for all $a)$ $z \in B_1(0)$ and $b)$ $z\in B_1(1)$, where $B_r(z_0)$ is the open ball around $z_0$ with radius $r$. I think the easiest way would be to take the Taylor series $f(z)= \sum_{n=0}^{\infty}a_nz^n$ and multiply out $f(z)^3$ in order to see if the coefficients match the coefficients of $z^3-1$. Of course it's a very exhausting way. Is there a shorter way? Maybe working with a branch of the logarithm, because of $f(z)=\sqrt[3]{z^3-1}$?
In $B_1(0)$, yes, there is such a function. If $z\in B_1(0)$, then $\lvert z\rvert<1$. Therefore, $\lvert z\rvert^3<1$ and so $z^3-1$ belongs to the halfplane $\{z\in\mathbb C\,|\,\operatorname{Re}z<0\}$. So, your idea is fine: you can work with an appropriate branch $\log$ of the logarithm and define $f(z)=\exp\left(\frac13\log\left(z^3-1\right)\right)$. But in $B_1(1)$ there is no such function $f$. Note that $f(1)^3=1^3-1=0$. So, $1$ is a zero of $f$. Let $m$ be the order of that zero. But then the order of $1$ as a zero of $f^3$ is $3m$. This is impossible, since the order of $1$ as a zero of $z^3-1$ is $1$.
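Case a) can be made concrete numerically. A sketch: since $z^3-1$ stays in the left half-plane for $z\in B_1(0)$, one valid branch choice is the logarithm with argument in $(0,2\pi)$ (cut along the positive real axis); the helper name below is mine. Note Python's principal branch (`cmath.log`) would jump across the negative real axis, exactly where $z^3-1$ lands for real $z$:

```python
import cmath, math

def log_cut_positive_axis(w):
    """Branch of log with arg in (0, 2*pi): holomorphic off the positive real axis."""
    theta = cmath.phase(w)        # principal argument in (-pi, pi]
    if theta <= 0:
        theta += 2 * math.pi
    return complex(math.log(abs(w)), theta)

def f(z):
    return cmath.exp(log_cut_positive_axis(z**3 - 1) / 3)

for z in [0.3 + 0.4j, -0.5 - 0.2j, 0.0j, 0.5 + 0j]:
    assert abs(f(z)**3 - (z**3 - 1)) < 1e-9      # f is a cube root of z^3 - 1

# continuity across the real axis inside B_1(0), where z^3 - 1 is a negative real:
assert abs(f(0.5 + 1e-9j) - f(0.5 - 1e-9j)) < 1e-6
```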
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of a greatest integer function (sided limit) What is the value of $\lim\limits_{x\to 0^+} \dfrac{b}{x}\left\lfloor\dfrac{x}{a}\right\rfloor$ for $a>0$ and $b>0$. Note that $\lfloor x\rfloor$ denotes the greatest integer less than or equal to $x$. I know that $\left\lfloor\dfrac xa\right\rfloor=0$. But when it comes to $\lim\limits_{x\to0^+}\dfrac bx\cdot0$, the result is indeterminate. I want to know how to remove this indetermination.
Since $\frac{b}{x}\left \lfloor{\frac{x}{a}}\right \rfloor=0,\forall x\in (0,a)$, it follows that $\lim_{x\rightarrow 0^+}\frac{b}{x}\left \lfloor{\frac{x}{a}}\right \rfloor=0$ .
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
verification of convergence of random variable For $n \in \mathbb{N}$, let $X_n$ be a random variable such that $\mathbb{P} [X_n = \frac{1}{n}] = 1 − \frac{1}{n^2}$ and $\mathbb{P}[X_n = n] = \frac{1}{n^2}$. Does $X_n$ converge in probability? In $L^2$? My attempt: To converge in probability we must have $$ P(|X_n - X| > \epsilon) = 0 $$ Since we can see that as $n$ gets larger, $X_n = \frac{1}{n}$ because its probability tends to 1. So I did the following $$ P(|X_n - \frac{1}{n}| > \epsilon) = P(X_n = n) = \frac{1}{n^2} \\ \lim_{n \to \infty}\frac{1}{n^2} = 0 $$ Is this approach correct? Or do I have to use some version of Chebyshev's inequality to do this? Also I have little idea how to prove it for $L^2$?
Not really. Here $X$ is chosen to let $X_n\to X$ in probability as $n\to\infty$, so we can not let "$n$" appear in $X$. Actually, we can choose $X=0$.Let $\epsilon>0$, for $n>\frac1\epsilon$, we have $$P(|X_n-0|>\epsilon)=P(X_n=n)=\frac1{n^2}\to0,$$ so $X_n\to 0$ in probability. For the $L^2$ convergence, since $E|X_n-0|^2=\frac1{n^2}-\frac1{n^4}+1\to 1\neq 0$, $X_n$ is not convergent to $0$ in $L^2$, so $X_n$ is not convergent in $L^2$. In fact, if $X_n\to X$ in $L^2$ then $X_n\to X$ in probability so $X=0$, a contradiction. Addendum: There is an alternative way to show the convergence in probability: Just note that $X_n\to 0$ in $L^1$.
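Both conclusions can be checked by computing the relevant quantities exactly from the two atoms of $X_n$ (function names are mine):

```python
def second_moment(n):
    """E[X_n^2] = (1/n)^2 * (1 - 1/n^2) + n^2 * (1/n^2)."""
    return (1 / n) ** 2 * (1 - 1 / n**2) + n**2 * (1 / n**2)

def tail_prob(n, eps):
    """P(|X_n - 0| > eps); for n > 1/eps only the atom at n exceeds eps."""
    return 1 / n**2

assert tail_prob(1000, 0.01) < 1e-5            # -> 0: convergence in probability
assert abs(second_moment(1000) - 1.0) < 1e-5   # -> 1 != 0: no L^2 convergence
```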
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the notation $\int_{\Bbb R}$ equivalent to $\int_{-\infty}^\infty$? The question is pretty straightforward. In class, I have seen $\int_{\Bbb R}$ and $\int_{-\infty}^\infty$ being used interchangeably. However $\int_{\Bbb R}$ contains no implicit sense of direction, so technically isn't $\int_\infty^{-\infty}$ also the same as $\int_{\Bbb R}$? Or $\int_{\infty}^{0}+\int_{-\infty}^{0}\equiv\int_{\Bbb R}$, provided all integrals exist? There could be several such possibilities. Is that why we have attached a conventional direction to $\int_{\Bbb R}$, i.e. from $-\infty$ to $\infty$?
The notation $\int_a^b$ (or, in an extended sense, $\int_{-\infty}^{\infty}$) is meant to be suggestive of Riemann integration, which, as you said, entails an orientation in your Riemann sums and hence your integrals. The notation $\int_{\mathbb{R}}$ is a more general notation from the theory of Lebesgue integration, where integrals are non-oriented and taken over a measure space $X$. You are, indeed, correct that $\int_{-\infty}^{\infty}$ and $\int_{\mathbb{R}}$ are used interchangeably, but this is due to the fact that the Lebesgue and Riemann integrals agree in pretty much all relevant cases. The notation $\int_{\infty}^{-\infty}$ refers, conventionally, to "summing in the wrong direction", and it is not true that this could be substituted for $\int_{\mathbb{R}}$, since the corresponding Riemann and Lebesgue integrals will have a sign difference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
A truncated alternating sum of product of binomial terms While solving a question I came across the following alternating sum; $C(j,n): = \sum\limits_{i=j}^{n} (-1)^{i}\binom{n+1}{i+1} \binom{i}{j}$ where $j$ and $n$ are integers with $n \geq j \geq 0$. By hand I computed that $C(j, j+r) = (-1)^{j}$ for small positive integers $r$. I think that $C(j,n) = (-1)^j$ for any $n \geq j \geq 0$. But I couldn't prove it by induction or by using some other known identities. I would appreciate any suggestion or reference.
\begin{align} \sum_{i=j}^{n}(-1)^i\binom{n+1}{i+1}\binom{i}{j}&=(-1)^j\sum_{i=0}^{n}(-1)^i\binom{n+1}{i+1}[x^j](1-x)^i\\&=(-1)^j[x^j]\sum_{i=0}^{n}\binom{n+1}{i+1}(x-1)^i\\&=(-1)^j[x^j]\frac{x^{n+1}-1}{x-1}=(-1)^j. \end{align}
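The identity is easy to brute-force for small parameters as a check on the generating-function argument:

```python
from math import comb

def C(j, n):
    return sum((-1)**i * comb(n + 1, i + 1) * comb(i, j) for i in range(j, n + 1))

for n in range(12):
    for j in range(n + 1):
        assert C(j, n) == (-1) ** j
```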
{ "language": "en", "url": "https://math.stackexchange.com/questions/3339932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Inverse of a skew-symmetric matrix For $a, x, y, z \in \mathbb R$, let $$M= \left( \begin{array}{cccc} \cos(a) & \sin(a) \, x & \sin(a)\, y & \sin(a) \, z \\ -\sin(a) \, x & \cos(a) & \sin(a) \,z & -\sin(a)\, y \\ -\sin(a) \, y & -\sin(a) \, z & \cos(a) & \sin(a) \, x \\ -\sin(a) \, z & \sin(a) \, y & -\sin(a)\, x & \cos(a) \end{array} \right).$$ Without doing much calculation, why the matrix $M-I_4$ is invertible and why its inverse is given $$(M-I_4)^{-1}=\,\frac{-1}{2} I_{4} - \frac{\cot(\sqrt{x^2+y^2+z^2} )}{2\sqrt{x^2+y^2+z^2}} A,$$ where $A$ is the skew-symmetric matrix given by $$A=\left( \begin{array}{cccc} 0 & x & y & z \\ -x & 0 & z & -y \\ -y & -z & 0 & x \\ -z & y & -x & 0 \end{array} \right).$$ Thank you in advance
Here is a proof of the first part (invertibility of $M-I_4$) and a computation of the inverse (though I do not get a result in the form of yours). One obtains an efficient simplification by using half-angle formulas (https://en.wikipedia.org/wiki/Tangent_half-angle_substitution) : $$\cos(a)=\dfrac{1-t^2}{1+t^2} \ \ \text{and} \ \ \sin(a)=\dfrac{2t}{1+t^2}$$ Let $N=M-I_4$. All entries in matrix $N$ have a common factor $\dfrac{2t}{1+t^2}$. Therefore we can set $$N=\dfrac{2t}{1+t^2}P \ \ \text{with} \ \ P:=\begin{bmatrix}-t&x&y&z\\-x&-t&z&-y\\-y&-z&-t&x\\-z&y&-x&-t\end{bmatrix}$$ Therefore, an equivalent issue is to show the invertibility of $P$. Please note that $P=A-tI_4$ with matrix $A$ as you have defined it. It turns out that the determinant of $P$ is very compact : $$\det(P)=(t^2 + x^2 + y^2 + z^2)^2\tag{1}$$ which is non-zero except in the degenerate cases $x=y=z=0$ and $t=0$ (the latter corresponding to angles $a=k\pi, k \in \mathbb{Z}$). Apart from these cases, $P$ is always invertible. Now, what is the inverse of $M-I_4$ ? The inverse of $P$, considered as a quaternion (see remark 1 below), is $$P^{-1}=\dfrac{1}{n}\begin{bmatrix}-t&-x&-y&-z\\x&-t&-z&y\\y&z&-t&-x\\z&-y&x&-t\end{bmatrix}=\dfrac{1}{n}(-A-tI_4)\tag{2}$$ $$\text{with }n:=t^2+x^2+y^2+z^2\tag{3}$$ (one can easily check (2) by multiplying $P$ and $P^{-1}$). Therefore, as $M-I_4=\dfrac{2t}{1+t^2}P=\sin(a)P$, using (2) and (3), we get $$(M-I_4)^{-1}=\dfrac{1}{\sin(a)}P^{-1}=\dfrac{1}{n\sin(a)}(-A-tI_4)\tag{4}$$ which is different from your formula (as has been remarked, this formula needs to depend on $a$). Remarks : 1) It is not surprising to get formula (1) : it is to be related to the $4 \times 4$ matrix representation of quaternions (see paragraph "Matrix representation" in https://en.wikipedia.org/wiki/Quaternion) and their so-called "norm". By the way, I wouldn't be that surprised if your issue comes from quaternionic representations... 
2) Using (1) with $t=0$, we get $$\det(A)=(x^2 + y^2 + z^2)^2\tag{5}$$ 3) Formula (1) is to be connected to the so-called Pfaffian. If one has to compute the determinant of a skew-symmetric matrix of even order, it is good to know that it is the square of a polynomial expression in the entries, called the associated Pfaffian. See https://en.wikipedia.org/wiki/Pfaffian where you will find the following formula for order $4$: $$\det\begin{bmatrix}0&a&b&c\\-a&0&d&e\\-b&-d&0&f\\-c&-e&-f&0\end{bmatrix}=(af-be+dc)^2$$
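As a sanity check of formulas (1) and (2), here is a small numerical sketch (my addition, not part of the original exchange) that builds $A$ and $P = A - tI_4$ for random values and tests both the determinant identity and the proposed inverse:

```python
import numpy as np

def skew(x, y, z):
    # the skew-symmetric matrix A from the question
    return np.array([[0.0,   x,   y,   z],
                     [ -x, 0.0,   z,  -y],
                     [ -y,  -z, 0.0,   x],
                     [ -z,   y,  -x, 0.0]])

rng = np.random.default_rng(0)
t, x, y, z = rng.normal(size=4)
A = skew(x, y, z)
P = A - t * np.eye(4)
n = t**2 + x**2 + y**2 + z**2

det_ok = np.isclose(np.linalg.det(P), n**2)                       # formula (1)
inv_ok = np.allclose(P @ ((-A - t * np.eye(4)) / n), np.eye(4))   # formula (2)
```

Both checks pass for generic random values of $a, x, y, z$ encoded via $t$.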
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Intersection of closed sets is closed proof without De Morgan's theorem I wish to prove that the intersection of closed sets is closed. However, all proofs that I have come across use De Morgan's laws, which we have not seen in class (and therefore cannot use). I was thinking that maybe I could use the fact that a closed set is one which contains all of its boundary points, so the intersection of several closed sets should also contain all of its boundary points. However, I am not sure whether that is valid in this scenario.
Suppose $C_i, i \in I$ is a family of closed subsets, and let $C$ be their intersection. Suppose $x \in C'$ (a limit point of $C$). Then as $C \subseteq C_i$ for each $i$, $x$ is in $C'_i$ for all $i$, and as each $C_i$ is closed, we know that for each $i$, $C'_i \subseteq C_i$ and thus $x \in \bigcap_i C_i = C$. As $C$ contains all its limit points, $C$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does the math symbol $\propto$ mean? I came across this symbol in my engineering class and I have never seen it before. Anyone know this?
It typically means "proportional to": if $$y=cx$$ for some constant $c$, we say $$y\propto x$$ so that when $x$ grows, $y$ grows proportionally, with ratio $c$. Inverse proportionality, on the other hand, is when $$y=c\frac{1}{x}$$ so that when $x$ gets smaller, $y$ gets proportionally bigger by $c$: $$y\propto \frac{1}{x}$$ To my knowledge there isn't a symbol specifically for inverse proportionality, and $$y\propto \frac{1}{x}$$ is used instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How can we generalize the linear equation $Ax=b$ from finite to uncountable dimensions? Starting with the equation $Af=g$ where $f,g \in \mathbb{R}^2$ and $A$ is a $2\times2$ matrix, suppose that we generalize the two indices of $f=(f_1, f_2)$ to a continuum, e.g. all real numbers, so that $f$ becomes a function $f\colon \mathbb{R} \to \mathbb{R}$. And the same is done for $g$. So $f$ and $g$ are indexed by their uncountable domains of definition. My question is: Is there a way to correspondingly generalize the matrix $A$ so that the equation $Af=g$ remains meaningful? Many possibilities seem to arise, e.g. if $f$ is differentiable and $g$ is the derivative of $f$, then it seems that $A$ should perform differentiation. If $A$ is somehow made to contain the differential $dx$, then integration seems to lurk behind. I know that functional analysis studies these ideas, but the books I know of start directly with abstract settings, so I wonder whether one can organize this index-based viewpoint and build a natural bridge from the discrete to the continuous and from finite to infinite and uncountable dimensions.
You would need a basis. And it's really a matter of convention whether one exists in general (you need the Axiom of Choice, in the form of Zorn's lemma). At any rate, it's impossible to actually construct / write down a concrete basis for spaces like the functions $\Bbb R\to\Bbb R$. And since you can't write down a basis, you can't write down a matrix representation of any linear transformations. If you, for instance, limit yourself to just polynomials, then you have bases. The standard basis is $1,x,x^2,x^3,\ldots$, and under that basis, the linear transformation we call "differentiation" is given by the "matrix" $$ \begin{bmatrix}0&1&0&0&\cdots\\ 0&0&2&0&\cdots\\0&0&0&3&\cdots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{bmatrix} $$
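To make the polynomial example concrete, here is a short numpy sketch (my own addition, not from the answer) that builds a truncated version of the differentiation "matrix" in the basis $1, x, x^2, \ldots$ and applies it to a coefficient vector:

```python
import numpy as np

N = 6  # truncate the basis to 1, x, ..., x^(N-1)

# D[j-1, j] = j encodes d/dx(x^j) = j x^(j-1), matching the matrix in the answer
D = np.zeros((N, N))
for j in range(1, N):
    D[j - 1, j] = j

# p(x) = 3 + 2x + 5x^3, whose derivative is p'(x) = 2 + 15x^2
p = np.array([3.0, 2.0, 0.0, 5.0, 0.0, 0.0])
dp = D @ p
```

The coefficient vector `dp` comes out as $(2, 0, 15, 0, 0, 0)$, i.e. $2 + 15x^2$, as expected.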
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
proving that $a + b \sqrt {2} + c \sqrt{3} + d \sqrt{6} $ is a subfield of $\mathbb{R}$ The question is given below: My questions are: 1- How can I find the general form of the multiplicative inverse of each element? 2- How can I find the multiplicative identity? 3- Is the only difference between the field and the subfield definition that in the case of a subfield every nonzero element has an additive identity, but in the field every element, not only the nonzero ones? Could anyone help me in understanding these questions, please? EDIT: I have found this solution on the internet: My question: is that a fully acceptable answer to the question? I guess yes.
Check the field axioms for such expressions. You really need only that the sum and product has the same form, identify the additive and multiplicative inverses; the others follow as you are operating on real numbers. They hold, and the set is clearly a (proper) subset of $\mathbb{R}$. Done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Autonomous equilibrium points Given $$\frac{dx}{dt}= 3x-x^2$$ I don't understand how $$x=0$$ is not semistable. I get the following zero points: $$x = 0, x = 3$$ Here are the values of $\frac{dx}{dt}$ I get when plugging in, and my reasoning: $-3 \to -18$ $-2 \to -10$ $-1 \to -4$ It would seem to me that clearly the slope gets closer to zero as we go up, and the curve would go up and arc to the right and then flatten out, but no, the book has the curve going the opposite direction... which doesn't make any sense at all. $1 \to 2$ $1.5 \to 2.25$ $2 \to 2$ The slope increases, then levels out, and then goes down, somehow giving an S shape between $x = 0$ and $x = 3$. Now for $6 \to -18$ $5 \to -10$ $4 \to -4$ The slope gets nearer to zero as we near $3$, so we get a kind of semi C shape going down and leveling out at $x = 3$. This turned out to be right. So I don't understand how my reasoning worked out in one instance but the same exact approach didn't work in the first part.
The zeroes--critical points--of the equation $\dot x = 3x - x^2 \tag 1$ occur where $\dot x = 0, \tag 2$ that is, where $3x - x^2 = 0; \tag 3$ it is easy to see that the values of $x$ satisfying this quadratic are $x = 0, 3; \tag 4$ the stability of these critical points is, in accord with the well-known theory, determined by the values of $\dfrac{d(\dot x)}{dx} = \dfrac{d(3x - x^2)}{dx} = 3 - 2x \tag 5$ at these values of $x$; we have $\dfrac{d(\dot x)}{dx}(0) = 3 > 0, \tag 6$ hence $0$ is unstable; and since $\dfrac{d(\dot x)}{dx}(3) = -3 < 0, \tag 7$ $x = 3$ is a stable equilibrium of (1). These calculations are simple and could mos' likely be performed in a few minutes during an exam; quicker, I'll warrant, than the amount of arithmetic/sketching a graphical solution requires; also, more rigorous. Nevertheless, I thank my colleagues Quanto and Moo for their explanatory artistry.
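The derivative test above is easy to automate; here is a minimal sketch (my addition, not part of the answer) that classifies both critical points of $\dot x = 3x - x^2$:

```python
def f(x):
    return 3 * x - x**2

def fprime(x):
    return 3 - 2 * x

equilibria = [0.0, 3.0]  # roots of 3x - x^2 = x(3 - x)
# sign of f'(x*) decides: negative -> stable, positive -> unstable
stability = {x0: ("stable" if fprime(x0) < 0 else "unstable")
             for x0 in equilibria}
```

This reproduces steps (5)-(7): $x=0$ is unstable and $x=3$ is stable.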
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Are conic sections obtained from a cone or a double cone? According to Wikipedia, In mathematics, a conic section (or simply conic) is a curve obtained as the intersection of the surface of a cone with a plane. However most of the images actually show a double cone instead of a cone and it makes sense to me since a hyperbola has two components. So is it true that despite its name and definition, conic section is actually an intersection of a plane with a double cone?
Cone/Plane Intersection scenario A full cone consists of two nappes/sheets. Only when the semi-vertical angle $\alpha$ of the cone's generators is greater than the angle $\beta$ that the sectioning plane makes with the cone's axis do we get a hyperbola, as a genuinely disjoint double curve. A parabola (when $\beta = \alpha$) or an ellipse (when $\alpha < \beta$) is produced by an intersection lying on a single nappe. In that case no generality is lost by letting only one nappe partake in the intersection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Stem and leaf diagrams I have the following data: $2.6,\ 3.3,\ 2.4,\ 1.1,\ 0.8,\ 3.5,\ 3.9,\ 1.6,\ 2.8,\ 2.6,\ 3.4,\ 4.1,\ 2.0,\ 1.7,\ 2.9,\ 1.9,\ 2.9,\ 2.5,\ 4.5,\ 5.0$. Built stem and leaf plot (stem $\mid$ leaves, with frequency $f$): $0 \mid 8$ ($f=1$); $1 \mid 1\ 6\ 7\ 9$ ($f=4$); $2 \mid 0\ 4\ 5\ 6\ 6\ 8\ 9\ 9$ ($f=8$); $3 \mid 3\ 4\ 5\ 9$ ($f=4$); $4 \mid 1\ 5$ ($f=2$); $5 \mid 0$ ($f=1$); total $20$. My lecturer said that this is true: $P(2\le X\le3) = {8\over 20}$, $P(1\le X\le6) = 1 - {1\over 20} = {19\over 20}$. So, my question is: why is he not adding the last number into the range? To me, since it is "less than or equal to", this has to be: $P(2\le X\le3) = {12\over 20}$, $P(1\le X\le6) = 1 - {0\over 20} = {20\over 20}$. Does it have something to do with continuous or discrete data? Cause in another example he added the last number into the range.
I created a different diagram than yours. Please see below. It is clear that the probability is as your instructor suggested by the probability definition, namely: P(X)=Number of occurrences of Elements in Interval / Total Number of Elements. If this is not clear let me know.
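Since the raw data is available, the two probabilities can also be checked by direct counting; the following sketch (my addition) confirms the instructor's values — note that $3.3, 3.4, 3.5, 3.9$ all exceed $3$, which is why $P(2\le X\le 3)$ is $8/20$ and not $12/20$:

```python
data = [2.6, 3.3, 2.4, 1.1, 0.8, 3.5, 3.9, 1.6, 2.8, 2.6,
        3.4, 4.1, 2.0, 1.7, 2.9, 1.9, 2.9, 2.5, 4.5, 5.0]
n = len(data)

p_2_to_3 = sum(1 for x in data if 2 <= x <= 3) / n  # stem-3 leaves all exceed 3.0
p_1_to_6 = sum(1 for x in data if 1 <= x <= 6) / n  # only 0.8 falls outside [1, 6]
```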
{ "language": "en", "url": "https://math.stackexchange.com/questions/3340924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Prove that if $\mathbf A$ is an invertible matrix then $\mathbf A^{-1}$ is invertible and $(\mathbf A^{-1})^{-1} = \mathbf A$ I am asked to prove the following proposition: If $\mathbf A$ is an invertible matrix then $\mathbf A^{-1}$ is invertible and $(\mathbf A^{-1})^{-1} = \mathbf A$ My attempt: Let $\mathbf A$ be an arbitrary non-singular matrix. It follows that it has an inverse, call it $\mathbf B$: $$\mathbf B = \mathbf A^{-1}$$ By definition, if the matrix $\mathbf A$ is the inverse of the matrix $\mathbf B$, then $\mathbf B$ is the inverse of $\mathbf A$. In other words: $$(\mathbf B)^{-1} = \mathbf A$$ Since $$\mathbf B = \mathbf A^{-1}$$ it follows that $$(\mathbf A^{-1})^{-1} = \mathbf A $$ Is this correct? Although the proposition is quite simple, the proof provided by the book is a bit convoluted, hence I suspect that my proof may have some mistakes.
You can use the definition of the inverse matrix to do it. If $A$ is an invertible matrix of order $n$, then there exists a matrix $B$ such that: $$ AB = BA = I_n $$ And we write $ B = A^{-1} $, just by notation. Then it follows that $ A $ is the inverse of $ B $. Just like before, $ A = B^{-1} $, so $ A = (A^{-1})^{-1} $
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
About the fact that $\mathbb Q$ has gaps In my book there are two theorems: * *$1.$ The number $\sqrt 2$ is irrational because if we put $\sqrt 2 = \frac pq$ for some integers $p, q$ where $p, q$ have no common factors, then we get a contradiction. *$2.$ The set $A = \mathbb Q \cap (0, \sqrt 2)$ has no largest number and $B = \mathbb Q \cap (\sqrt 2, \infty)$ has no smallest number. Are the two theorems above saying the same thing or are they mutually exclusive? The reason I ask is because in a math book (AFAIK) the same statement is not usually reproven after it has been proven the first time, without an explicit mention. So I was wondering if there is some subtle difference between the two theorems above. The technical part of the second theorem is not difficult, but I am having an uneasy time linking the fact that $A, B$ have no largest/smallest numbers with the fact that $\mathbb Q$ has gaps. Is the theorem saying that $\sqrt 2 \not \in A$ and $\sqrt 2 \not \in B$? But isn't that by the very definition of $A, B$? Thanks.
Yeah, it would be more noteworthy for the book to say that $ \mathbb Q \cap (0, \sqrt 2]$ has no largest number and $\mathbb Q \cap [\sqrt 2, \infty)$ has no smallest number. Or, to say it without using the real numbers: the set $D=\{x\in\mathbb Q^+\mid x^2<2\}$ has neither a largest member nor a least upper bound in the rationals. So, in a nutshell, the distressing thing about the gaps in the rationals is that $D$ is an order-convex ("interval-like") subset of the rationals that we still cannot express in interval format.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Why are direct proofs often considered better than indirect proofs? As the title indicates, I'm curious why direct proofs are often more preferable than indirect proofs. I can see the appeal of a direct proof, for it often provides more insight into why and how the relationship between the premises and conclusions works, but I would like to know what your thoughts are concerning this. Thanks! Edit: I understand that this question is quite subjective, but that is my intention. There are people who prefer direct proofs more than proof by contradiction, for example. My curiosity is concerning what makes a direct proof preferable to such individuals. In the past, I've had professors grimace whenever I did an indirect proof and showed me that a direct proof was possible, but I never thought to ask them why a direct proof should be done instead. What's the point?
A direct proof is a kind of proof that doesn't depend on the number of truth values your logic can take: in two-valued logic, a proof by contradiction is just a shortcut for taking all the options at once. Such a proof will need certain modifications before it can be used in a logic with more than two values, which means you need a new proof for the new environment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 5 }
Why do these integration steps hold true? Can someone explain the first three steps of the solution to this integral to me? I have searched but not found a lot: $$\begin{align}\int e^{ax}\cos(bx)dx &= \frac{1}{a}\int \cos(bx)de^{ax} \\ &= \frac{1}{a}e^{ax}\cos(bx)+\frac{b}{a}∫e^{ax}\sin(bx)dx\\ & = \frac{1}{a}e^{ax}\cos(bx)+ \frac{b}{a^2}∫\sin(bx)de^{ax}. \end{align}$$ I know the basics of integration but this doesn't seem familiar. Would also really appreciate a link to a good source where I could catch up integration rules if anyone has any.
The first step involves a shorthand way of your usual $u$-sub. Let $u=e^{ax}$. Then $du=ae^{ax}\ dx$ and $e^{ax}\ dx = \dfrac{1}{a} du$. The second step involves integration by parts. Let $v=\cos(bx)$ and $du=du$. Then $dv=-b\sin(bx)\ dx$ and $u=u$. \begin{align*} & \int e^{ax} \cos(bx) dx \\ =\ & \dfrac{1}{a}\int \cos(bx) du \\ =\ & \dfrac{1}{a}\bigg(u\cos(bx)-\int -be^{ax}\sin(bx)\ dx\bigg) \\ =\ & \dfrac{1}{a}e^{ax}\cos(bx)+\dfrac{b}{a}\int e^{ax}\sin(bx)\ dx \\ =\ & \dfrac{1}{a}e^{ax}\cos(bx)+\dfrac{b}{a}\cdot\dfrac{1}{a}\int \sin(bx)\ du \\ =\ & \cdots \textrm{ apply integration by parts here} \end{align*} Does anything else need explaining?
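Carrying the integration by parts one step further and solving for the integral yields the standard closed form $\int e^{ax}\cos(bx)\,dx = \frac{e^{ax}\left(a\cos(bx)+b\sin(bx)\right)}{a^2+b^2}+C$. A quick numerical sketch (my addition) checks that differentiating this closed form recovers the integrand:

```python
import math

a, b = 1.3, 2.7  # arbitrary sample parameters

def F(x):
    # closed form of the antiderivative (constant of integration omitted)
    return math.exp(a * x) * (a * math.cos(b * x) + b * math.sin(b * x)) / (a**2 + b**2)

def integrand(x):
    return math.exp(a * x) * math.cos(b * x)

h = 1e-6
x0 = 0.8
approx_derivative = (F(x0 + h) - F(x0 - h)) / (2 * h)  # central difference
err = abs(approx_derivative - integrand(x0))
```

The error is on the order of the finite-difference accuracy, confirming $F' = e^{ax}\cos(bx)$.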
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Determining convergence of $\sum_{n=1}^\infty \frac{3n i^n}{(n+2i)^3}$ I've tried to solve this problem about convergence: $\sum_{n=1}^\infty \frac{3n i^n}{(n+2i)^3}$. It's supposed to be solved using the ratio test, the root test, or by testing the limit of the summand. Anyway, I tried all three and had no success: I get to a point where I'm stuck at: $$\lim_{n\rightarrow \infty} \left[\frac{n+1}{n} \left(\sqrt\frac{n^2 +4}{n^2 +2n+5}\right)^3\right]$$ Any suggestions? What would you usually do with terms like $(n+2i)^3$? I tried assuming non-imaginary $n$ values (because of the sum) and converting to polar form: $\sqrt{n^2+4}e^{i \tan^{-1}(2/n)}$. Also I tried expanding: $$\left|\left(\frac{n+2i}{n+1+2i}\right)^3\right| = \frac{|n+2i||n+2i||n+2i|}{|n+1+2i|^3} = \left(\sqrt\frac{n^2 +4}{n^2 +2n+5}\right)^3$$ Help
Hint: $$\left| \frac{3n i^n}{(n+2i)^3} \right| =\frac{3n}{\sqrt{(n^2+4)^3}}<\frac{3n}{\sqrt{(n^2)^3}} = \frac{3n}{n^3}=\frac{3}{n^2}$$
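A numerical sketch (my addition) of what the hint buys you: each term is dominated in modulus by $3/n^2$, so the complex partial sums settle down, and the gap between distant partial sums obeys the comparison tail bound.

```python
def term(n):
    return 3 * n * (1j ** n) / (n + 2j) ** 3

# termwise bound |term(n)| <= 3/n^2 from the hint
bound_ok = all(abs(term(n)) <= 3 / n**2 for n in range(1, 200))

def partial_sum(N):
    return sum(term(n) for n in range(1, N + 1))

# tail comparison: |S_4000 - S_2000| < sum_{n > 2000} 3/n^2 < 3/2000
gap = abs(partial_sum(4000) - partial_sum(2000))
```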
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How should we characterize a continuous family of continuous mappings? Let $S$ and $S'$ be two topological spaces. It is known that two continuous mappings $f,g:S\to S'$ are homotopic if there is a continuous mapping $$F:[0,1]\times S\to S'$$ with $$F(0,\cdot)=f(\cdot)$$ $$F(1,\cdot)=g(\cdot)$$ The continuity of $F$ makes sense because we can equip $[0,1]\times S$ with the product topology. But if we look at this another way, I believe it is natural and reasonable to say ($C(S;S')$ is the set of all continuous mappings $S\to S'$) that $$\tilde F:[0,1]\to C(S;S')\\ \quad\quad\ t\mapsto F(t,\cdot)$$ defines a continuous family of continuous mappings $S\to S'$. This means $C(S;S')$ should be equipped with a topology. My question is: how can we characterize this topology using the topologies on $S$ and $S'$ only, without resorting to the product topology above? Let me rephrase in a few different ways: (1) How can we characterize the topology on $C(S;S')$? (2) What is a neighborhood of a mapping $f\in C(S;S')$? (3) When can we say a sequence $f_n\in C(S;S')$ converges to $f\in C(S;S')$? I think this is easier if $S$ or $S'$ has additional structure. For example, when $S'$ is a normed vector space we can define a norm on $C(S;S')$ by $$||F||=\sup_{x\in S}||F(x)||$$ But what about the most general case where $S,S'$ are nothing but topological spaces?
You can topologize $C(S, S')$ in a convenient way by taking the subsets $\mathcal{K}(C,V) = \{f \in C(S, S'): f[C] \subseteq V\} \subseteq C(S,S')$, where $C \subseteq X$ is a compact subspace and $V \subseteq Y$ is open, as a subbasis. The subsequently generated topology is the so-called compact-open topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find solution for $|x| + |y| \frac{dy}{dx}=0$ I want to solve the following differential equation: $|x| + |y| \frac{dy}{dx}=0$ with initial condition $y(2)=-1$. @Robert Z, since the solution passes through $(2,-1)$, \begin{align} x - y \frac{dy}{dx}=0 \end{align} \begin{align} x dx = y dy \end{align} and with the initial condition $y(2)=-1$, I have $y^2 = x^2 - 3 $, so \begin{align} y= - \sqrt{x^2-3} \end{align}
Hint. Start by solving the Cauchy problem in the quadrant which contains the initial point $(2,-1)$ where $$x -y(x) y'(x)=0.$$ Edit. The solution that you obtained $$y(x)= - \sqrt{x^2-3}$$ is valid for $x \in [\sqrt{3},+\infty)$ (where $x\geq 0$ and $y\leq 0$). Now extend the solution in $[0,\sqrt{3}]$ and then in $(-\infty,0]$. Note that $|x| + |y(x)| y'(x)=0$ implies that $y'(x)\leq 0$, that is $y$ is decreasing. Finally the complete solution $y:\mathbb{R}\to \mathbb{R}$ should be $$y(x)=\begin{cases} &- \sqrt{x^2-3}&\text{if $x\in [\sqrt{3},+\infty)$,}\\ &\sqrt{3-x^2}&\text{if $x\in [0,\sqrt{3}]$,}\\ &\sqrt{3+x^2}&\text{if $x\in (-\infty,0]$.}\\ \end{cases}$$
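As a numerical check (my addition, not part of the answer) that the piecewise formula really solves $|x| + |y|\,y' = 0$ away from the matching points $x=0$ and $x=\sqrt 3$:

```python
import math

def y(x):
    # the complete solution from the answer
    if x >= math.sqrt(3):
        return -math.sqrt(x * x - 3)
    if x >= 0:
        return math.sqrt(3 - x * x)
    return math.sqrt(3 + x * x)

h = 1e-6

def residual(x):
    dydx = (y(x + h) - y(x - h)) / (2 * h)  # central-difference derivative
    return abs(abs(x) + abs(y(x)) * dydx)

# sample points chosen inside each branch, away from the joins at 0 and sqrt(3)
worst = max(residual(x) for x in (-2.0, -0.5, 0.7, 1.2, 2.0, 3.0))
```

The residual is at the level of finite-difference noise on all three branches, and the initial condition $y(2)=-1$ holds exactly.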
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A particle starts its motion from rest and moves with constant acceleration for time $t_1$ and then it retards with constant rate for $t_2$. And comes to rest. Then the ratio of maximum speed and average speed during the complete motion will be MY SOLUTION Let acceleration be a. Max speed $$v=at_1$$ Also distance covered will be $$s=\frac{(2)(at_1^2)}{2}$$ $$=at_1^2$$ So average speed $$=\frac{at_1^2}{t_1+t_2}$$ Taking their ratio gives $$\frac{t_1+t_2}{t_1}$$ That’s as far as I got. But the answer is 2:1 and I have no idea on how to get there. Please help me proceed. Thanks!
You have assumed that the magnitude of the acceleration and deceleration is equal, which is not given. For the acceleration phase, $v_{max}=a_1t_1$ and distance covered is $\frac12a_1t_1^2$. For the deceleration phase, $v^2-u^2=-a_1^2t_1^2=-2a_2s$ giving the distance covered as $\frac{a_1^2t_1^2}{2a_2}$. You also have $v=0=a_1t_1-a_2t_2$. Thus the average speed is $$\frac12\frac{a_1t_1^2+\frac{a_1^2t_1^2}{a_2}}{t_1+t_2}=\frac12\frac{a_1t_1^2+a_2t_2^2}{t_1+t_2}$$Taking the ratio$$\frac{v_{avg}}{v_{max}}=\frac12\frac{\frac{a_1t_1^2}{a_1t_1}+\frac{a_2t_2^2}{a_1t_1}}{t_1+t_2}=1/2$$
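A numerical sketch (my addition) with deliberately unequal acceleration magnitudes, confirming the $2:1$ ratio of maximum to average speed:

```python
a1, a2 = 3.0, 5.0           # arbitrary, unequal acceleration/deceleration magnitudes
t1 = 2.0
t2 = a1 * t1 / a2           # from 0 = a1*t1 - a2*t2 (particle comes to rest)

v_max = a1 * t1
# total distance: accelerating phase + decelerating phase
dist = 0.5 * a1 * t1**2 + (v_max * t2 - 0.5 * a2 * t2**2)
v_avg = dist / (t1 + t2)
ratio = v_max / v_avg       # should be 2, independent of a1, a2, t1
```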
{ "language": "en", "url": "https://math.stackexchange.com/questions/3341903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $T$ (defined below) a distribution? How to show that $$ \langle T, \varphi\rangle = \int\limits_{0}^{+\infty} \frac{\varphi(x)-\varphi(0)}{x^{3/2}} dx,\quad \varphi \in C_0^\infty(\mathbb{R}), $$ is a distribution? I would first rewrite the definition of $T$, using $ \varphi(x)-\varphi(0) = \int_0^x \varphi'(t) dt $. But how to get around the fact that $ \int_{0}^{+\infty} x^{-3/2} dx $ is divergent, how to "tame" this integral in order to show that $ \lvert \langle T, \varphi\rangle \rvert \leq C \: \text{sup}_{K} \lvert \varphi'(x) \rvert $ for all $ \varphi \in C_0^\infty(K)$?
Two inequalities: * *$|\varphi (x)- \varphi (0)|\le \|\varphi'\|_\infty |x|.$ *$|\varphi (x)- \varphi (0)|\le 2\|\varphi\|_\infty.$ Thus $$\int_0^\infty\left |\frac{\varphi (x)- \varphi (0)}{x^{3/2}}\right |\,dx$$ $$ \le \left (\int_0^1x^{-1/2}\,dx \right)\|\varphi'\|_\infty + \left (\int_1^\infty x^{-3/2}\,dx\right)\cdot 2\|\varphi\|_\infty.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove that $\mathbb R^n$ is not the union of finitely many of its proper subspaces. I am reading An Introduction to Algebraic Topology by Rotman. After proving Theorem 2.7 (For every $k\geq 0$, euclidean space $\mathbb R^n$ contains $k$ points in general position), the book remarked: There are other proofs of this theorem using induction on $k$. The key geometric observation needed is that $\mathbb R^n$ is not the union of only finitely many proper affine subsets. I want to prove that observation.
Every proper affine subspace of $\mathbb{R}^n$ is contained in an affine hyperplane. Now, let $C=\{(t,t^2,t^3,\ldots,t^n)\mid t \in \mathbb{R}\}$. Since every nonzero polynomial of degree at most $n$ has at most $n$ roots, an affine hyperplane of $\mathbb{R}^n$ contains at most $n$ points of $C$. So if you have a covering of $\mathbb{R}^n$ by proper affine subsets, you need at least $|\mathbb{R}|$ of them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stationarity of AR(2) process I am new to time series modeling and currently struggling with stationarity. Can someone please explain why the roots of the following AR polynomial are $-1$ and $1/2$? The AR(2) process is $X_t = X_{t-1} + 2 X_{t-2} + Z_t$. To the best of my knowledge so far, I could use the backward shift operator, writing $(1- B + 2B^2)X_t = Z_t$. But I don't know how to proceed with that. Thank you in advance
The shift operator $1-B-2B^2$ factorizes as such: $$ 1-B-2B^2=(1+B)(1-2B). $$ Note that in your question you had a sign error when writing down the shift operator - the $2X_{t-2}$ moves from the right to the left side of the equation and thus becomes a $-2B^2$.
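A quick check (my addition) with numpy: the roots of the corrected characteristic polynomial $1 - z - 2z^2$ are indeed $-1$ and $1/2$. Neither root lies strictly outside the unit circle (one is inside, one is on it), so this AR(2) process is not stationary.

```python
import numpy as np

# phi(z) = 1 - z - 2 z^2 ; np.roots expects the highest-degree coefficient first
roots = np.sort(np.roots([-2.0, -1.0, 1.0]).real)

# (causal) stationarity would require all roots strictly outside the unit circle
outside = np.abs(roots) > 1.0 + 1e-9
```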
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear Algebra intuition behind subspaces Hello, I am trying to understand better the intuition behind subspaces. I am aware that a subspace of a vector space must contain zero (go through the origin), has to be closed under addition, and has to be closed under scalar multiplication. The aspect I fail to understand is why it must contain zero/go through the origin. If anyone has any analogy for subspaces I'd also love to hear it. Thank you.
A subspace has to be a vector space and therefore must be closed under scalar multiplication. There is always a $0$ value in the field of scalars, so $0\cdot\vec v$, which is the origin $\vec0$, has to be in the subspace. You might be familiar with thinking of Euclidean vector spaces like $\mathbb R^3$, where the nontrivial subspaces are lines or planes through the origin. The collection of lines or planes that don’t necessarily go through the origin are called “affine spaces” or “affine subspaces,”* but those are not vector spaces if they don’t go through the origin. *Like sometimes happens with mathematical definitions more than in English, the adjective “affine” is not restrictive; affine subspaces are not necessarily subspaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Given the function $f(x) = e^x - ax$ solve for $a$ such that $f(x) \geq 1$. So I have the function $f: \mathbb{R} \to \mathbb{R}$, $f(x) = e^x - ax$ and it is known that $a > 0$. I need to find $a$ such that $f(x) \geq 1$, $\forall x\in \mathbb{R}$. What I have done so far is to set the derivative equal to $0$ in hopes of a minimum. So I found: $f'(x) = e^x - a$ Set it equal to $0$ and found that the point $x = \ln(a)$ is the global minimum point. Logic led me to think that since I have to find an $a$ for which the function is $\geq 1$ for all values of $x$, all I have to do is find the $a$ for which the function is $\geq 1$ at that minimum point. Since the minimum point is $x = \ln(a)$, I have to solve: $f(\ln(a)) \geq 1$ That gives me $a - a\ln(a) \geq 1$, or $a(1 - \ln(a)) \geq 1$. Here is where I got stuck. Is my reasoning correct? Should I have done something differently? Is there a better way to do this? And if what I have done so far is correct, how could I go about solving for $a$?
No matter what $a$ is, you'll have $f(0)=e^0-a\cdot 0=1$. So your only hope of getting $f(x)\ge 1$ everywhere is if you have $f'(0)=0$. It turns out that there is exactly one $a$ that achieves this, namely $a=1$. Since $e^x-ax$ is easily seen to be convex no matter what $a$ is, $f'(0)=0$ will also guarantee that it has its minimum at $x=0$.
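A grid-search sketch (my addition) illustrating the conclusion: $a=1$ keeps $f(x)\ge 1$ everywhere, while other values of $a$ dip below $1$ on one side of the origin:

```python
import math

def f(x, a):
    return math.exp(x) - a * x

xs = [i / 100 for i in range(-300, 301)]        # grid on [-3, 3]

min_a1 = min(f(x, 1.0) for x in xs)             # minimum is f(0) = 1
min_a2 = min(f(x, 2.0) for x in xs)             # dips below 1 near x = ln 2
min_a_half = min(f(x, 0.5) for x in xs)         # dips below 1 for x < 0
```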
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
What method to use for differential equations I'm having doubts as to what method to use for the following ODE: $$2t+3x+(x+2)x'=0$$ As this can be changed into: $$x'=\frac{-2t-3x}{x+2}$$ I'm thinking it can be solved using the method for homogeneous equations, but I'm not sure this applies, because with the $x+2$ in the denominator I'm not sure it would be a homogeneous equation (degree 0 and all). What would you suggest? Thanks!
Compressing the d'Alembert treatment: Insert $p=x'$, then $$ 2t+3x+(x+2)p=0. $$ If $p$ is constant, then the $t$ derivative of this equation gives $$ 2+3p+p^2=0\implies p=-1\text{ or } p=-2. $$ In the other cases, locally use $p$ as parameter and use $T(p)$, $X(p)$ as the dependent functions. Then from the chain rule $X'(p)=pT'(p)$ and from the equation $$ 0=2T'+3X'+(X+2)+pX'\implies 0=(2+3p+p^2)X'+p(X+2) $$ which is separable $$ \frac{X'}{X+2}=\frac{-p}{(p+1)(p+2)}=\frac1{p+1}-\frac{2}{p+2} \implies X+2=\frac{C(p+1)}{(p+2)^2} $$ Inserting this back into the original equation gives a parametrization of the solution curves with $$ 2T=6-(p+3)(X+2)=6-\frac{C(p+3)(p+1)}{(p+2)^2}. $$
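A small sketch (my addition) verifying at least the two constant-slope solutions: substituting $x=-t+c$ with $p=-1$ back into the equation forces $c=1$, and $x=-2t+c$ with $p=-2$ forces $c=4$; both lines then satisfy the ODE identically.

```python
def residual(t, x, xdot):
    # left-hand side of 2t + 3x + (x + 2) x' = 0
    return 2 * t + 3 * x + (x + 2) * xdot

ts = [t / 2 for t in range(-10, 11)]
res1 = max(abs(residual(t, -t + 1, -1.0)) for t in ts)      # p = -1 line: x = -t + 1
res2 = max(abs(residual(t, -2 * t + 4, -2.0)) for t in ts)  # p = -2 line: x = -2t + 4
```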
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the pointwise limit and determine if convergence is uniform Find the pointwise limit of $f_n(x)=nx^n(1-x^n)$ on $[0,1]$ and determine if the convergence is uniform. Solution For $x=1$, $$\lim_{n\to\infty}(nx^n(1-x^n))= 0$$ For $0\le x<1$, $$\lim_{n\to\infty}(nx^n(1-x^n))=0$$ as $nx^n\to0$ as $n\to\infty$. Thus the pointwise limit of $f_n(x)$ is $f(x)=0$. Determine if the convergence is uniform: We use $$\sup\vert f_n(x)-f(x)\vert<\epsilon$$ $$\sup\vert nx^n(1-x^n)-0\vert$$ $$\sup\vert nx^n(1-x^n)\vert$$ $$\sup \vert nx^n-nx^{2n}\vert$$ We want to find the largest possible value of this function, so we take the derivative and set it to $0$: $$\frac{d}{dx}(nx^n-nx^{2n})=0$$ $$n^2x^{n-1}-2n^2x^{2n-1}=0$$ $$x^{n-1}-2x^{2n-1}=0$$ $$x^{n-1}(1-2x^{n})=0$$ We can then find the trivial solution $x^{n-1}=0$ and the solution $$x=\sqrt[n]{\frac{1}{2}}$$ Is this a correct solution? Or have I made an error? As $n\to\infty$, $\sqrt[n]{\frac{1}{2}}\to1$, which is obviously not the largest value.
It is always interesting to plot graphical representations concretizing algebraic/analytic proofs. Here it is for the first 8 curves, with the coordinates of the maxima (red stars) at $$(\sqrt[n]{\frac12}, \ \frac{n}{4}).$$
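A numerical sketch (my addition) confirming those maxima: for each $n$, $f_n$ peaks at $x=\sqrt[n]{1/2}$ with value $n/4$, which is why $\sup_x |f_n(x)-0| = n/4 \to \infty$ and the convergence is not uniform.

```python
def f(n, x):
    return n * x**n * (1 - x**n)

# value at the calculus critical point x = (1/2)^(1/n) should be n/4
peaks = {n: f(n, 0.5 ** (1 / n)) for n in range(1, 9)}

# a grid maximum on [0, 1] never exceeds n/4 (writing u = x^n, f = n*u*(1-u) <= n/4)
grid_ok = all(
    max(f(n, i / 2000) for i in range(2001)) <= n / 4 + 1e-12
    for n in range(1, 9)
)
```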
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
divisibility of big powers of 71 Since $$ 71 \equiv7\equiv-1 \pmod 8 $$ $\implies71^2 \equiv 1 \pmod 8$ $\implies $ any power of $71$ would leave either $1$ or $-1$ mod $8$. Is this logic ok ?
You are absolutely right. It's good to understand and always keep in mind the basic rules of modular arithmetic, so that you never have doubts about your reasoning. For instance, if: $$a \equiv b \pmod k$$ $$c \equiv d \pmod k$$ Then $$ac \equiv bd \pmod k$$ And, as a consequence (set $a=c$, $b=d$ and apply the above equation recursively as many times as needed), for any positive integer $m$, $$a^m \equiv b^m \pmod k$$ You are just applying this property to the case $a=71$, $b=-1$. On a sidenote it's also true that: $$a+c \equiv b+d \pmod k$$ But, for instance: $$2^1 \not\equiv 2^5 \pmod 4$$ even though $1 \equiv 5 \pmod 4$, so exponents cannot be reduced modulo $k$. EXTRA: Elementary proof that $a \equiv b \pmod{k}$ and $c \equiv d \pmod{k}$ imply $ac \equiv bd \pmod{k}$: Let $a$ and $b$ be mod-$k$ congruent. Let also $c$ and $d$ be mod-$k$ congruent. Then $$a=wk+r_1$$ $$b=xk+r_1$$ $$c=yk+r_2$$ $$d=zk+r_2$$ for some integers $w,x,y,z$ and $r_1, r_2 \in \{0,1,...,k-1\}$. Now: $$ac=wyk^2 + r_1yk + r_2wk + r_1 r_2 = (wyk+r_1y+r_2w)k + r_1 r_2$$ $$bd=xzk^2 + r_1zk + r_2xk + r_1 r_2 = (xzk+r_1z+r_2x)k + r_1 r_2$$ Long story short: $$ac=sk+r_1 r_2$$ $$bd=tk+r_1 r_2$$ for some integers $s,t$ that we don't even care about. The point is that $ac$ and $bd$ differ by a multiple of $k$ (both are congruent to $r_1 r_2$), so $ac \equiv bd \pmod k$.
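The pattern is easy to confirm with Python's three-argument pow (my addition):

```python
residues = {k: pow(71, k, 8) for k in range(1, 13)}
# 71 = 8*9 - 1, i.e. 71 ≡ -1 (mod 8): odd powers leave 7 (= -1), even powers leave 1
```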
{ "language": "en", "url": "https://math.stackexchange.com/questions/3342920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inverse of nth power of a linear transformation Using matrices it is easy to show that doing a linear transformation n times and then taking the inverse is the same as inverting the linear transformation and then doing it n times: $$(A^n)^{-1}=(AAAA\ldots)^{-1} = \ldots A^{-1}A^{-1}A^{-1}A^{-1}=(A^{-1})^n~~\blacksquare$$ I'm wondering if this can be shown without reference to matrices, that is by just using linearity properties like f(ax+by)=af(x)+bf(y)? If what I'm asking is not clear, please consider rotation by $10^{\circ}$ as an example. * *First rotate, then invert * *Rotating $5$ times gives $10^{\circ}\times 5=50^{\circ}$. *Taking the inverse gives $-50^{\circ}$ *First invert, then rotate * *Inverting gives $-10^{\circ}$. *Rotating $5$ times gives $-10^{\circ}\times 5=-50^{\circ}$
You can prove it by induction, using only composition of functions: for $n=1$ it is trivial. Suppose the claim holds for some $n$. Then $(f^{n+1})^{-1}=(f\circ f^n)^{-1}=(f^n)^{-1}\circ f^{-1}=(f^{-1})^n\circ f^{-1}=(f^{-1})^{n+1}.$ Is this the kind of demonstration you were asking for?
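The rotation example from the question can be checked numerically. Here is a plain-Python sketch of mine (names like `mat_mul` are my own, not from the answer), representing $2\times2$ rotation matrices as nested tuples:

```python
import math

def mat_mul(A, B):
    # Product of two 2x2 matrices stored as nested tuples.
    return ((A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]),
            (A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]))

def rotation(deg):
    t = math.radians(deg)
    return ((math.cos(t), -math.sin(t)), (math.sin(t), math.cos(t)))

def mat_pow(A, n):
    R = ((1.0, 0.0), (0.0, 1.0))
    for _ in range(n):
        R = mat_mul(R, A)
    return R

A = rotation(10)        # rotate by 10 degrees
A_inv = rotation(-10)   # its inverse: rotate by -10 degrees

lhs = mat_pow(A_inv, 5)  # invert first, then apply 5 times
rhs = rotation(-50)      # rotating by -50 degrees, i.e. the inverse of rotation(50)
```

Up to floating-point rounding, `lhs` equals `rhs`, matching the $-50^{\circ}$ computation in the question.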
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$P$ and $A$ are square matrices and $P$ is invertible. Prove that $(P^{-1} AP)^{n} = P^{-1}A^{n}P $ The proposition I would like to prove is the following: Proposition. Let $\mathbf{P}$ and $\mathbf{A}$ be square matrices, with the matrix $\mathbf{P}$ invertible. Prove that $({\mathbf{P}^{-1}} \mathbf {AP})^{n} = \mathbf{P}^{-1}\mathbf{A}^{n}\mathbf{P} $ My attempt: We check two$^{1}$ cases: $n = 1$ $ ({\mathbf{P}^{-1}} \mathbf {AP})^{1} = {\mathbf{P}^{-1}} \mathbf {AP}$ $n = 2$ $({\mathbf{P}^{-1}} \mathbf {AP})^{2} = {\mathbf{P}^{-1}} \mathbf {AP}{\mathbf{P}^{-1}} \mathbf {AP}= {\mathbf{P}^{-1}}\mathbf {AI} \mathbf {AP} = {\mathbf{P}^{-1}}\mathbf {A}^2 \mathbf {P} $ Suppose it is true for $n = k$, i.e.: $$\tag ! ({\mathbf{P}^{-1}} \mathbf {AP})^{k} = {\mathbf{P}^{-1}} \mathbf {A^{k}P} $$ Now we need to prove that the proposition holds for $n = k+1$: $$ ({\mathbf{P}^{-1}} \mathbf {AP})^{k+1} = {\mathbf{P}^{-1}} \mathbf {AP}({\mathbf{P}^{-1}} \mathbf {AP})^{k}$$ Using the result obtained in $(!)$: $$({\mathbf{P}^{-1}} \mathbf {AP})^{k+1} = {\mathbf{P}^{-1}} \mathbf {AP}{\mathbf{P}^{-1}} \mathbf {A^{k}P} = {\mathbf{P}^{-1}} \mathbf {A^{k+1}P}$$ As desired. $\Box$ Is the proof correct? * *Technically speaking (if I understand the definition of mathematical induction correctly), case $n=1$ is sufficient as the base case. But is it better (making evident that there is a pattern) if I show the proposition holds for $n = 2$ too?
Your proof looks good. When you're starting your journey towards becoming a mathematician, it can seem natural to give an extra example like your $n=2$ case to provide extra illustration to the reader. But if you have confidence in your ability to write clear proofs, you'll come to see it as dead weight that is just making your proof longer. So feel free to find that confidence as soon as possible. ^_^
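For what it's worth, the identity is also easy to spot-check with exact integer arithmetic (a sketch of mine; the concrete matrices are chosen so that $\det P = 1$ and $P^{-1}$ is integral):

```python
def mul(A, B):
    # Product of two square matrices given as lists of lists.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mpow(A, e):
    # e-th power of a 2x2 matrix by repeated multiplication.
    R = [[1, 0], [0, 1]]
    for _ in range(e):
        R = mul(R, A)
    return R

P     = [[1, 1], [0, 1]]
P_inv = [[1, -1], [0, 1]]   # det P = 1, so the inverse is integral
A     = [[2, 1], [1, 1]]

n = 7
lhs = mpow(mul(mul(P_inv, A), P), n)   # (P^-1 A P)^n
rhs = mul(mul(P_inv, mpow(A, n)), P)   # P^-1 A^n P
```

Because everything is exact integer arithmetic, `lhs == rhs` holds on the nose, with no floating-point tolerance needed.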
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $A^5 \neq I$ $A \in M_{5}(\mathbb{C})$, $\operatorname{trace}(A) = 0$, $I-A$ is invertible. Prove that $A^5 \neq I$ I think this problem has something to do with eigenvalue and the fact that trace of a matrix = sum of the eigenvalues. But, I have no idea how to proceed! Thanks!
Assume, by way of contradiction, that $A^5 = I$ holds. In this case, as $A^5 - I = 0$, we can factorize this expression as $(A - I)(A^4 + A^3 + A^2 + A + I) = 0$. As $A - I$ is invertible, we can multiply by its inverse to get $A^4 + A^3 + A^2 + A + I = 0$, so the minimal polynomial of $A$ divides $x^4 + x^3 + x^2 + x + 1$, forcing every eigenvalue of $A$ to be a fifth root of $1$ distinct from $1$. Now, since $\operatorname{tr}(A) = 0$, the sum of all five eigenvalues (with multiplicity) must be zero. Can you see now that, no matter what the multiplicity of each eigenvalue is, if no eigenvalue is $1$ it is impossible for them to sum to $0$? (Hint: with $\zeta = e^{2\pi i/5}$, the primitive roots $\zeta, \zeta^2, \zeta^3, \zeta^4$ are linearly independent over $\mathbb{Q}$, so a combination with nonnegative integer multiplicities adding up to $5$ cannot vanish.) This leads to a contradiction.
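The final step can also be verified exhaustively (my own numerical sketch): among all multiplicity choices $m_1+m_2+m_3+m_4=5$ over the four primitive fifth roots of unity, no weighted sum comes anywhere near zero.

```python
import cmath
from itertools import product

# The four primitive 5th roots of unity.
zeta = [cmath.exp(2j * cmath.pi * k / 5) for k in (1, 2, 3, 4)]

# Smallest |m1*z1 + m2*z2 + m3*z3 + m4*z4| over multiplicities summing to 5.
min_abs = min(
    abs(sum(m * z for m, z in zip(ms, zeta)))
    for ms in product(range(6), repeat=4)
    if sum(ms) == 5
)
print(min_abs)
```

The minimum stays bounded well away from $0$, so a trace of $0$ is indeed impossible.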
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Induction for $(A_1 \cap A_2 \cap ... \cap A_n)^c = A^c_1 \cup A^c_2 \cup ... \cup A^c_n$. I need to do induction on this problem: $(A_1 \cap A_2 \cap ... \cap A_n)^c = A^c_1 \cup A^c_2 \cup ... \cup A^c_n$. Induction is new to me and this problem is hard to understand. The base case here I'm guessing is just that $A^c_1 = A^c_1$. Since this works we can go to the inductive hypothesis. Then I think we have to prove that $(A_1 \cap A_2 \cap ... \cap A_k)^c \cap A_{k+1} = A^c_1 \cup A^c_2 \cup ... \cup A^c_k \cup A^c_{k+1}$. I am not sure if these are the right steps and if so how to finish the proof.
The base case is $(A_1\cap A_2)^c = A_1^c\cup A_2^c$. To see this, we have \begin{align} x \in (A_1\cap A_2)^c &\iff x\notin A_1\cap A_2\\ &\iff x\notin A_1 \vee x\notin A_2\\ &\iff x\in A_1^c \vee x\in A_2^c\\ &\iff x\in A_1^c\cup A_2^c, \end{align} where $\vee$ denotes logical disjunction (OR). For the induction step, suppose that $$\left(\bigcap_{i=1}^n A_i\right)^c = \bigcup_{i=1}^n A_i^c$$ for some $n\geqslant2$. Then \begin{align} \left(\bigcap_{i=1}^{n+1} A_i\right)^c &= \left(A_{n+1}\cap\bigcap_{i=1}^n A_i\right)^c\\ &= A_{n+1}^c \cup\left(\bigcap_{i=1}^n A_i\right)^c\\ &= A_{n+1}^c \cup \bigcup_{i=1}^n A_i^c\\ &= \bigcup_{i=1}^{n+1} A_i^c. \end{align}
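The identity being proved is easy to spot-check on concrete finite sets (a quick sketch of mine using Python's set operations, with a small finite universe standing in for the ambient set):

```python
U = set(range(10))            # finite universe for taking complements

def complement(A):
    return U - A

sets = [{0, 1, 2, 3}, {2, 3, 4, 5}, {3, 5, 7}]

inter = set(U)
for A in sets:
    inter &= A                # A1 ∩ A2 ∩ A3

lhs = complement(inter)       # (A1 ∩ A2 ∩ A3)^c
rhs = set()
for A in sets:
    rhs |= complement(A)      # A1^c ∪ A2^c ∪ A3^c
```

Here the intersection is $\{3\}$, so both sides come out to everything in the universe except $3$.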
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $G$ be an open set in $\mathbb{C}$ and $a \in G$ with $B_X(a,r)\cap \delta G = \emptyset$. Then $B_X(a,r) \subseteq G$. I came up with the following and wanted to be sure if my proof is correct: Theorem: Let $G$ be an open set in $\mathbb{C}$ and $a \in G$ with $B_X(a,r)\cap \delta G = \emptyset$. ($\delta G$ is the boundary of $G$). Then $B_X(a,r) \subseteq G$. Proof: Suppose not. Then we can pick $z \in B_X(a,r)$ with $z \notin G$. The line segment $[a,z]$ is contained in $B_X(a,r)$, since balls are convex. Put $f(t)= a(1-t) + zt, t \in [0,1]$ Put $t_0 := \inf\{t : f(t) \notin G\}$, which exists because $f(1) = z \notin G$ and note that $f(t_0) \notin G$ by the continuity of $f$. Also note that $t_0 \neq 0$, since $a=f(0) \in G$. Thus, we can take a sequence $0 \leq t_n, n \geq 0$ such that $t_n \nearrow t_0$ (strictly) and by the continuity of $f$ we have $f(t_n) \to f(t_0)$. However, $f(t_n) \in G$ for all $n$ and hence $f(t_0)$ is in the closure of $G$. But this means that $f(t_0) \in \overline{G} \cap G^c = \overline{G} \cap \overline{G^c}= \delta G$, because $G$ is open. This contradicts the hypothesis $B_X(a,r) \cap \delta G=\emptyset$. Is this correct?
If $B_X(a,r) \cap \partial G = \emptyset$, then $B_X(a,r)$ cannot intersect $G^\complement$, or the ball would be disconnected (it would be covered by the two disjoint open sets $G$ and $\Bbb C\setminus \overline{G}$, each meeting $B_X(a,r)$). This argument works for any connected neighbourhood of $a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trouble proving $3^2 + 3^3 + \ldots + 3^n = 9 \cdot \frac{3^{n-1} - 1}2$ by induction So I'm supposed to prove by mathematical induction that this formula: $3^2 + 3^3 + \ldots + 3^n = 9 \cdot \dfrac{3^{n-1} - 1}2$ holds true for all integers $n \ge 2$. I started with the base case and just plugged in $n=2$; it worked. Then I assumed the statement for $k$, and then considered $k+1$. What I ended up with is: $$9 \cdot \frac{3^{k-1} - 1}{2} + 3^{k+1} = 9 \cdot \frac{3^{k} - 1}{2}$$ I tried making the left-hand side equal to the right-hand side but I couldn't. Did I do something wrong along the way?
Let $t=3^{n-1}$. Then: $$\frac 92 (t-1)+9t=\frac 92t+9t-\frac92=\frac{27}{2}t-\frac92$$ $$=\frac92(3t-1)$$ Substituting back, we get $\frac 92(3^n-1)$, as required.
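The closed form is also easy to verify by brute force (a quick check of mine, not part of the answer):

```python
def lhs(n):
    # 3^2 + 3^3 + ... + 3^n
    return sum(3**k for k in range(2, n + 1))

def rhs(n):
    # 9 * (3^(n-1) - 1) / 2, using integer division since the numerator is even
    return 9 * (3**(n - 1) - 1) // 2

checks = [(n, lhs(n), rhs(n)) for n in range(2, 12)]
print(checks)
```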
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A matrix of order 8 over $\mathbb{F}_3$ What is an example of an invertible matrix of size 2x2 with coefficients in $\mathbb{F}_3$ that has exact order 8? I have found by computation that the condition that the 8th power of a matrix $\begin{bmatrix}a & b\\c & d\end{bmatrix}$ is the identity is $$ b c (a + d)^2 (a^2 + 2 b c + d^2)^2 + ((a^2 + b c)^2 + b c (a + d)^2)^2=1, \qquad b (a + d) (a^2 + 2 b c + d^2) (a^4 + 4 a^2 b c + 2 b^2 c^2 + 4 a b c d + 4 b c d^2 + d^4)=0, \qquad c (a + d) (a^2 + 2 b c + d^2) (a^4 + 4 a^2 b c + 2 b^2 c^2 + 4 a b c d + 4 b c d^2 + d^4)=0, \qquad b c (a + d)^2 (a^2 + 2 b c + d^2)^2 + (b c (a + d)^2 + (b c + d^2)^2)^2=1 $$ and the condition for invertibility is $ad\neq bc$. If the 4th power is not the identity, then no power that is not a multiple of 8 is not the identity (because we could cancel out to either get that the first power is the identity or that the second power is the identity, both lead to contradiction). That is another cumbersome condition to write out. I hope somebody can suggest a nicer way.
We can use the same approach but reduce drastically the complexity of the system in the entries $a, b, c, d$ if we instead look for a square root of some matrix with order $4$. The matrix $A = \pmatrix{0&-1\\1&0}$ satisfies $A^2 = -I$ and so has order $4$ over any field of characteristic not $2$. In particular, if we can find a matrix $B$ such that $B^2 = A$, then $B$ will have order $8$. Writing $B = \pmatrix{a&b\\c&d}$, the condition is equivalent to the system \begin{align} a^2 + bc &= 0\\ b(a + d) &= -1\\ c(a + d) &= 1\\ d^2 + bc &= 0 \end{align} The second equation implies that $b = \pm 1$, and by negating the matrix (which preserves the property that $B^2 = A$) we may assume $b = 1$, and substituting gives $a + d = -1$ and $c = -1$. The first and fourth equations give that $a, d = \pm 1$, and then the constraint $a + d = -1$ (note that $1 + 1 = 2 = -1$ in $\mathbb{F}_3$) gives $$a = d = 1 ,$$ yielding the solution $$\pmatrix{1&1\\-1&1} .$$
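The claimed order can be double-checked by direct computation mod $3$ (a sketch of mine):

```python
def mul_mod3(A, B):
    # Product of two 2x2 matrices with entries reduced modulo 3.
    return [[(A[i][0] * B[0][j] + A[i][1] * B[1][j]) % 3 for j in range(2)] for i in range(2)]

def power_mod3(A, e):
    R = [[1, 0], [0, 1]]
    for _ in range(e):
        R = mul_mod3(R, A)
    return R

B = [[1, 1], [-1, 1]]   # entries taken mod 3, so -1 means 2
powers = {e: power_mod3(B, e) for e in range(1, 9)}
```

One finds $B^2 = A$ (namely $\pmatrix{0&-1\\1&0}$, stored as `[[0, 2], [1, 0]]` mod $3$), $B^4 = -I \ne I$, and $B^8 = I$; since the order divides $8$ but $B^4 \ne I$, the order is exactly $8$.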
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 1 }
Let $X$ be a set and $\Sigma$ be any $\sigma$-algebra on $X$; is there a measure on $X$ such that every element of the $\sigma$-algebra is measurable? In particular, on $\mathbb{R}$, let $2^{\mathbb{R}}$ be the $\sigma$-algebra; is there a measure on $\mathbb{R}$ such that every element of $2^{\mathbb{R}}$ is measurable?
Pick any $x \in X$ and define $\mu(A)=1$ if $x \in A$, $\mu(A)=0$ for $x \notin A$. This is a measure and every set is measurable. This works for any set and any sigma algebra on it.
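This construction is the Dirac (point-mass) measure; additivity on disjoint sets is immediate to check. A small finite sketch of mine:

```python
def dirac(x):
    """Return the point-mass measure concentrated at x."""
    def mu(A):
        return 1 if x in A else 0
    return mu

mu = dirac(0)
A, B = {0, 1}, {2, 3}                      # disjoint sets
additivity = mu(A | B) == mu(A) + mu(B)    # 1 == 1 + 0
```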
{ "language": "en", "url": "https://math.stackexchange.com/questions/3343874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
An upper bound on the expected value of the square of random variable dominated by a geometric random variable Let $X$ and $Y$ be two random variables such that: * *$0 \leq X \leq Y$. *$Y$ is a geometric random variable with the success probability $p$ (the expected value of $Y$ is $1/p$). I would be grateful for any help of how one could upperbound $\mathbb{E}(X^2)$ in terms of $p$.
If $0 \leq X \leq Y$ almost surely, then $0 \leq X^2 \leq Y^2$ almost surely, as $x \mapsto x^2$ is monotone on $\mathbb{R}_+$. We also know that if $X \leq Y$ almost surely, then $EX \leq EY$. Thus $0 \leq E(X^2) \leq E(Y^2) = (EY)^2 + Var(Y) = \frac{1}{p^2} + \frac{1 - p}{p^2} = \frac{2-p}{p^2}$. Thus $E(X^2) \in [0; \frac{2-p}{p^2}]$, and this is the best possible bound (the endpoints are attained by $X = 0$ and $X = Y$).
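For the mean-$1/p$ geometric, the variance identity gives $E(Y^2) = (EY)^2 + \mathrm{Var}(Y) = \frac{2-p}{p^2}$, and this is easy to confirm against the (truncated) series $\sum_{n\ge1} n^2\,p(1-p)^{n-1}$ — a numerical sketch of mine:

```python
def ey2(p, terms=10_000):
    # Truncated series for E(Y^2) when Y is geometric on {1, 2, ...}.
    return sum(n * n * p * (1 - p) ** (n - 1) for n in range(1, terms + 1))

p = 0.3
exact = (2 - p) / p**2     # (2 - p) / p^2
approx = ey2(p)
print(exact, approx)
```

The truncation error is negligible because $(1-p)^{n-1}$ decays geometrically.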
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $d_1$ and $d_2$ be two metrics on $X$. Then is $d(x,y)= d_1(x,y)\cdot d_2(x,y)$, $x, y\in X$, also a metric on $X$? I'm able to verify three of the properties of a metric: * *$d(x,y)\geq 0$ for all $x, y \in X$ *$d(x,y)=0$ iff $x=y$ *$d(x,y)= d(y,x)$ for all $x, y \in X$ But I'm having trouble with the triangle inequality. Please help me. Thanks in advance.
$|0-\frac12|^2 + |\frac12 - 1|^2 = \frac12 < 1 = |0-1|^2$ so $d_1 = d_2$ equal to the standard distance on $\Bbb R$ already gives a counterexample to the triangle inequality.
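The counterexample is easy to verify numerically (a sketch of mine): with $d_1 = d_2$ the standard distance on $\mathbb{R}$, the product metric is the squared distance, and the triangle inequality fails at $0, \tfrac12, 1$.

```python
def d1(x, y):
    return abs(x - y)

def d(x, y):
    # d = d1 * d2 with d2 = d1, i.e. the squared distance
    return d1(x, y) * d1(x, y)

# Triangle inequality would require d(0, 1) <= d(0, 0.5) + d(0.5, 1)
left = d(0, 1)                    # 1
right = d(0, 0.5) + d(0.5, 1)     # 0.25 + 0.25 = 0.5
```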
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find n in sum that results in a number $aaa$ Let's say that we have the sum $1+2+3+\ldots+n$ where $n$ is a positive natural number and that this sum should equal a three-digit number in which all the digits are the same, for example $111, 222,$ and so on. What would be the best way to find the $n$ that would result in such a number? I guess you could solve $\frac{n(n+1)}{2}=111x$ but that seems a bit too hard. From trial and error we know that the only solution is $n=36$, which gives $666$.
$$\begin{align} \frac {n(n+1)}2&=111m\qquad (m=1,2,3,\cdots,9)\\ n^2+n-222m&=0\\ n&=\frac {-1\pm \sqrt{1+888m}}2\end{align}$$ Check for values of $m$ where $(1+888m)$ is a perfect square of an odd number. The only solution is $m=6$, where $\sqrt{1+888m}=73$. This gives $$n=\frac {-1\pm 73}2=36\qquad (n>0)$$
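A brute-force confirmation over all nine candidates $m$ (a sketch of mine, following the discriminant check in the answer):

```python
import math

solutions = []
for m in range(1, 10):
    disc = 1 + 888 * m
    root = math.isqrt(disc)
    # Need 1 + 888m to be a perfect square of an odd number.
    if root * root == disc and root % 2 == 1:
        n = (root - 1) // 2
        solutions.append((n, 111 * m))
print(solutions)
```

Only $m = 6$ survives, giving $n = 36$ and the triangular number $666$.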
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How can a function $F:\mathbb{R}^n \longrightarrow \mathbb{R}^m$ with $n<m$ be a local homeomorphism? My doubts are about an application of the Implicit Function Theorem: I do not understand how a function $F: \mathbb{R}^n \longrightarrow \mathbb{R}^m$, with $n<m$ (!!!), can locally be a homeomorphism between a neighbourhood of $x$ and a neighbourhood of $F(x)$, for all $x$ such that $rnk(DF(x))=n$. I just cannot understand how I can apply the Inverse Function Theorem (which I know for Banach spaces), given that $DF(x)$ cannot be invertible! I'm asking this because I found this result written in many texts, for example the solution of exercise 2.2.5 of Berkeley Problems in Mathematics. To be precise, the exercise was: $F: \mathbb{R}^n \longrightarrow \mathbb{R}$, $F\in C^2(\mathbb{R}^n)$. Let's say that a point $x$ is a nondegenerate critical point if $DF(x)=0$ and $rnk(D^2F(x))=n$. Show that every nondegenerate critical point is isolated. The solution provided by the book stated: Define $G: \mathbb{R}^n \longrightarrow \mathbb{R}$, $G(y):= |DF(y)|^2 $, where $|.|$ is the Euclidean norm. Then, $G$ is $C^1$, $G'(x) \neq 0$ and $G(x)=0$, so by the Inverse Function Theorem $G$ is locally a diffeomorphism onto a neighbourhood of $0$, hence injective. I don't understand this solution! Thanks in advance to everybody who'll answer me!
The proposed solution is nonsense; you should complain to the people who wrote it. (In the 3rd edition of ``Berkeley Problems in Mathematics'' that I have, this is a "solution" of Problem 2.2.10.) A correct solution is to consider the gradient function $G(x)=\nabla F(x), x\in {\mathbb R}^n$. This function is $C^1$ and has a zero at some $x_0$. The assumption that the Hessian of $F$ has rank $n$ at $x_0$ amounts to the assumption that the derivative $DG(x)$ has (maximal) rank $n$ at $x_0$. Regarding $G$ as a map ${\mathbb R}^n\to {\mathbb R}^n$, we can then apply the IFT to $G$ at $x_0$ and conclude that $G$ is a local diffeomorphism at $x_0$, implying that $x_0$ is an isolated zero of $G$. In other words, $x_0$ is an isolated critical point of $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Combinatorics problems that can be solved more easily using probability I'm looking for examples of combinatorics problems which would be very difficult to solve by direct enumeration, but can be easily solved using ideas from probability, like independence, commuting sums and expectations, etc. I know I have seen such problems before, especially in the HMMT combinatorics subject tests, but I can't now recall any good examples. I am NOT looking for probabilistic existence proofs (the so-called "probabilistic method" introduced by Erdos). The sort of problems I'm interested in are enumerative.
Here are some problems: * *Find the sum of the number of all continuous runs of all possible sequences with $2019$ ones and $2019$ zeros *https://artofproblemsolving.com/community/c6h366278p2018435 *https://artofproblemsolving.com/community/c6h60752p366512 *https://artofproblemsolving.com/community/q2h1151650p5452212 *http://artofproblemsolving.com/community/c6h1170845p5622460 *https://artofproblemsolving.com/community/q4h1497881p9354029 *https://artofproblemsolving.com/community/q1h79679p10589332
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why is this proof invalid? I don't understand why this theorem is false. Suppose that $A \subseteq C$, $B \subseteq C$, and $x \in A$. Then $x \in B$. Invalid Proof: Suppose that $x \notin B$. Since $x \in A$ and $A \subseteq C$, $x \in C$. Since $x \notin B$ and $B \subset C$, $x \notin C$. But now we have proven both $x \in C$ and $x \notin C$, so we have reached a contradiction. Therefore $x \in B$. I'm thinking that $$\forall x(x \in B \implies x \in C) \equiv \forall x(x \notin B \vee x \in C)$$ It's true that $x \notin B$, so this statement is true, and can't be used to prove through contradiction that $x \in B$. But I'm not completely sure, so any clarification would help here. Thanks.
Since $x \not\in B$ and $B \subseteq C$, $x \not\in C$. Actually, if $x \not\in B$ and $B \subseteq \color{blue}{C}$, both $\color{red}{x \in C}$ and $\color{magenta}{x \not\in C}$ are possible. In a picture: imagine $B$ drawn strictly inside $C$, with $x$ lying inside $C$ but outside $B$.
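A concrete counterexample to the claimed "theorem" (my own sketch): both hypotheses hold, yet the conclusion fails.

```python
A = {1}
B = {2}
C = {1, 2}
x = 1

hypotheses = A <= C and B <= C and x in A   # A ⊆ C, B ⊆ C, x ∈ A
conclusion = x in B                         # fails: x ∈ C but x ∉ B
```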
{ "language": "en", "url": "https://math.stackexchange.com/questions/3344671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }