About a lemma to prove the Cantor-Bernstein-Schroeder theorem. I am reading "Logic in mathematics and set theory" by Kazuyuki Tanaka and Toshio Suzuki. In this book, there is a lemma to prove the Cantor-Bernstein-Schroeder theorem. I cannot understand why the equality $$A_0 = (A_0 - B_0) \cup (B_0 - A_1) \cup (A_1 - B_1) \cup (B_1 - A_2) \cup \cdots \cup (A_n - B_n) \cup (B_n - A_{n+1}) \cup \cdots$$ holds. Maybe there exists an element $x$ such that $x \in A_i$ (and $x \in B_i$) for all $i$. For example, I think that if $A_0 = B = A_1$ and $f = \mathrm{id}$, then $x \in A_i$ for all $i$ whenever $x \in A_0$. Lemma 1.12 Let $A_0, B, A_1$ be sets such that $A_1 \subset B \subset A_0$ and $A_0 \sim A_1$. Then, $A_0 \sim B$. Proof: Let $f : A_0 \to A_1$ be a bijection. Let $B_0 := B$. Let $A_{n+1} := f[A_n]$, $B_{n+1} := f[B_n]$ for $n \in \{0, 1, 2, \dots\}$. Then, $A_0 = (A_0 - B_0) \cup (B_0 - A_1) \cup (A_1 - B_1) \cup (B_1 - A_2) \cup \cdots \cup (A_n - B_n) \cup (B_n - A_{n+1}) \cup \cdots.$ Let $g : A_0 \to B_0$ be the mapping such that $g(x) = f(x)$ if $x \in A_n - B_n$ for some $n$ and $g(x) = x$ if $x \in B_n - A_{n+1}$ for some $n$. Then it is easy to prove that $g : A_0 \to B_0$ is a bijection. So, $A_0 \sim B$.
It doesn't. If the equality is true for some $A_1\subseteq B_0\subseteq A_0$ and $f:A_0\to A_1$, we can simply extend the sets using a new set $C$, and also extend $f$ by making it the identity on $C$, in which case $C$ will be contained within all sets $A_i$ and $B_i$. However you can augment the argument by making $g$ act as the identity on $C$ too, where we define $C$ to be the intersection $\bigcap A_i$ (which equals $\bigcap B_i$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3281493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
$f(z)$ is analytic then $f(\bar z)$ is analytic iff $f$ is constant. Let $f_1=u(x,y)$, $f_2=u(x,-y)$. Then is it true that $(f_2)_x$ at the point $(x,y)$ equals $(f_1)_x$ at the point $(x,-y)$, and that $(f_2)_y$ at the point $(x,y)$ equals $-(f_1)_y$ at the point $(x,-y)$? If yes, does this imply that if $f(z)$ is analytic, then $f(\bar z)$ is analytic iff $f$ is constant? Thanks in advance.
Without CRD: Let $g(z):= f(\overline{z})$ and show that $\lim_{h \to 0 , h \in \mathbb R}\frac{g(z_0+h)-g(z_0)}{h}= f'(\overline{z_0})$ and $\lim_{h \to 0 , h \in i\mathbb R}\frac{g(z_0+h)-g(z_0)}{h}= -f'(\overline{z_0})$. Conclusion?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3281652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does $P(A \cap B) = 0$ imply that $A \cap B = \emptyset$? Given that $P(A \cap B) = 0$, where $A$ and $B$ are two events, does this imply that $A \cap B = \emptyset$ ? Is it not possible to have a probability zero for an event which is not empty? Regards.
No. For example, if I pick a random number from $[0,2]$, then $P([0,1]\cap [1,2])=0$, however, $[0,1]\cap[1,2]=\{1\}\neq \emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3281752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Determine all singularities and their characters for the function $f(z) = \frac{\sin (z+1) e^{\frac{1}{z}}}{(z-i)^{2}(z+i)\cos ^{2} (z)}.$ Determine all singularities and their characters for the function: $$f(z) = \frac{\sin (z+1) e^{\frac{1}{z}}}{(z-i)^{2}(z+i)\cos ^{2} (z)}.$$ I have concluded that $z=0$ is an essential singularity, $z=i$ is a pole of order $2$, and $z=-i$ is a pole of order $1$. Question 1: Is that correct? Question 2: Can you help me with all $z = \pm\frac{\pi}{2} + 2k\pi$, $k\in \mathbb{Z}$? At first I thought of developing $f$ in a Laurent series, but I got stuck.
Let's write down all the points that could be singularities of $f(z)$: (1) $z = i$, (2) $z = -i$, (3) $z = 0$, (4) the zeros of $\cos^2 (z)$, and (5) $z = \infty$. As you mentioned, (1) is a pole of order 2, (2) a pole of order 1, and (3) an essential singularity; I think you can easily prove it. Let's talk about (4). From $\cos^2 (z) = 0$ we get $z_n = \pi/2 + \pi n$. The function $g(z) = \frac{\sin (z+1) e^{\frac{1}{z}}}{(z-i)^{2}(z+i)}$ is holomorphic and nonzero at $z_n$, which is why $z_n$ is a pole. Since the first derivative of $\cos^2(z)$, namely $-2\cos(z)\sin(z)$, vanishes at $z_n$ but the second derivative does not, $\cos^2(z)$ has a zero of order $2$ at $z_n$, so $z_n$ is a pole of order 2 of $f(z)$. Finally, $\infty$ is not an isolated singularity, because it is a limit of singularities.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3281843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that $E^2=E=E^T$. Let $E$ be an $n\times n$ matrix. Let $U=\{Ex:x\in \Bbb R^n\}$. If $\text{proj}_U v=Ev$ $\forall v\in \Bbb R^n$, show that $E^2=E=E^T$. I know that if $U$ has an orthonormal basis $u_1,u_2,\ldots ,u_n$ then $\text{proj}_U v=\sum_{i=1}^n\langle v,u_i\rangle u_i$. But how do I show from here that $E^2=E=E^T$? Can someone kindly help?
Let $\{u_1, \ldots, u_k\}$ be an orthonormal basis for $U$. For each $u_i$, define the projection onto $\operatorname{span}(u_i)$: $$P_i(x) = \operatorname{proj}_{u_i}(x) = \langle x, u_i \rangle u_i.$$ Note that, for any $i, j$, we have $$(P_j P_i)(x) = \langle P_i(x), u_j \rangle u_j = \langle \langle x, u_i \rangle u_i, u_j \rangle u_j = \langle x, u_i\rangle \langle u_i, u_j \rangle u_j.$$ When $i = j$, we have $\langle u_i, u_j \rangle = 1$, hence $P_i^2(x) = P_i(x)$. When $i \neq j$, we have $\langle u_i, u_j \rangle = 0$, so $(P_j P_i)(x) = 0$. Further, each $P_i$ is self-adjoint (i.e. its corresponding standard matrix is symmetric, in the real case), as $$\langle P_i(x), y \rangle = \langle \langle x, u_i \rangle u_i , y \rangle = \langle x, u_i \rangle \langle u_i , y \rangle = \langle x, \langle y, u_i \rangle u_i \rangle = \langle x, P_i(y)\rangle.$$ Note that the projection $P$ onto $U$ is simply the sum of these projections. We have $$P^* = (P_1 + \ldots + P_k)^* = P_1^* + \ldots + P_k^* = P_1 + \ldots + P_k = P,$$ so $P$ is self-adjoint (and hence $E$, the standard matrix of $P$, is symmetric). Further, \begin{align*} P^2 &= \left(\sum_{i=1}^k P_i\right)^2 \\ &= \sum_{i=1}^k \sum_{j = 1}^k P_iP_j \\ &= \sum_{i=1}^k P_i^2 &\text{as $P_iP_j = 0$ unless $i = j$} \\ &= \sum_{i=1}^k P_i \\ &= P. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/3281940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can this limit be solved with Riemann sum? Can this limit be solved with a Riemann sum? $$ \lim _{n\to \infty }\left[\lim _{x\to 0}\left(\cos x\cdot\cos2x\cdots\cos nx\right)^{\frac{1}{n^3x^2}}\right] $$ What I've tried is to solve it with a Riemann sum, but I am getting stuck somewhere, and I am not seeing where. In my textbook I have the following options as answers: $a. e^3$ $b. e^{-2}$ $c. e^{\frac{1}{3}}$ $d. e^{\frac{1}{2}}$ $e. e^{\frac{-1}{6}}$
$$L=\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} (\cos x \cos 2x \cos 3x\cdots\cos nx)^{\frac{1}{n^3x^2}}$$ As per @marty cohen, let us use $-y+y^2/2 \le \ln(1-y)\le -y~$ and $(1-t^2/2) \le \cos t \le (1-t^2/2+t^4/24)~$ when $~t~$ and $~y~$ are very small. We get $$\ln L =\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} \frac{1}{n^3 x^2} \sum_{r=1}^{n} (\ln \cos rx)=\lim_{n\rightarrow \infty} \lim_{x\rightarrow 0}\frac{1}{n^3 x^2} \sum_{r=1}^n \ln(1-r^2x^2/2).$$ $$\Rightarrow \ln L = \lim_{n\rightarrow \infty} \lim_{x\rightarrow 0} \frac{1}{n^3 x^2} \sum_{r=1}^n (-r^2x^2/2)=\lim_{n\rightarrow \infty} \frac{1}{n}\sum_{r=1}^n -r^2/(2n^2)= \int_{0}^{1}(-z^2/2) dz=\frac{-1}{6}.$$ So $L=e^{-1/6}.$
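The iterated limit can also be sanity-checked numerically by fixing a large $n$ and a small $x$ (the particular values below are arbitrary; the tolerance absorbs the $O(1/n)$ error in the Riemann sum):

```python
import math

n = 200     # approximates n -> infinity
x = 1e-4    # approximates x -> 0

# log of the expression: (1/(n^3 x^2)) * sum_{r=1}^n log(cos(r x))
log_L = sum(math.log(math.cos(r * x)) for r in range(1, n + 1)) / (n**3 * x**2)
L = math.exp(log_L)

assert abs(L - math.exp(-1 / 6)) < 1e-2  # claimed limit is e^{-1/6}, about 0.8465
```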
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solutions to equation (or proof that there is no solution) I am unable to find a solution or prove that there is none. I really need help: $a, b,$ and $c$ are positive integers such that $a^2-b^2+c^2=2.$ Is this possible? If possible, find the solutions; if not, prove that there are no solutions.
To search for a positive answer to such a question, Wolfram Alpha is your friend. In your case, $a=c=3$ and $b=4$ is a solution (the only positive integer solution Wolfram Alpha was able to find, though that is not proof it is unique).
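For this kind of question one can also brute-force a small range directly (illustration only; it says nothing about larger integers):

```python
# Search for positive integers with a^2 - b^2 + c^2 = 2 in a small box.
solutions = [
    (a, b, c)
    for a in range(1, 30)
    for b in range(1, 30)
    for c in range(1, 30)
    if a * a - b * b + c * c == 2
]
assert (3, 4, 3) in solutions     # 9 - 16 + 9 = 2, the solution above
assert (17, 24, 17) in solutions  # 289 - 576 + 289 = 2, so it is not unique
assert all(a * a - b * b + c * c == 2 for a, b, c in solutions)
```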
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Proof of every measurable cardinal carries a normal measure I'm reading the proof of Theorem 10.20 in Set Theory by Jech and I don't understand the last argument. The theorem says every measurable cardinal carries a normal measure. The proof goes: Let $U$ be a nonprincipal $\kappa$-complete ultrafilter on $\kappa$. For $f$ and $g$ in $\kappa^\kappa$, define $f<g$ if and only if $\{\alpha<\kappa:f(\alpha)<g(\alpha)\}\in U$. Now let $f:\kappa\rightarrow\kappa$ be the least function with the property that for all $\gamma<\kappa$, $\{\alpha:f(\alpha)>\gamma\}\in U$. Let $D=\{X\subset\kappa: f^{-1}(X)\in U\}$. Then we claim $D$ is a normal measure: Let $h$ be a regressive function on a set $X\in D$. Let $g$ be the function defined by $g(\alpha)=h(f(\alpha))$. As $g(\alpha)<f(\alpha)$ for all $\alpha\in f^{-1}(X)$, we have $g<f$. Here is the part I don't understand: It follows by minimality of $f$ that $g$ is constant on some $Y\in U$. Hence $h$ is constant on $f(Y)$ and $f(Y)\in D$. Why does it follow from the minimality of $f$ that $g$ is constant on some $Y\in U$? And why is $h$ constant on $f(Y)$ and $f(Y)\in D$?
By minimality of $f$, we have $$\{\alpha:g(\alpha)\le\gamma\}\in U $$ for some $\gamma,$ so by $\kappa$-completeness, there is a $\beta \le \gamma$ such that $\{\alpha:g(\alpha)=\beta\}\in U.$ (Your other questions can be answered by straightforward definition chasing. For instance that $h$ is constant on $f(Y)$ is a trivial consequence of $g$ being constant on $Y.$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closed cone in the euclidean space $\mathbb{R}^n$ Let $T:\mathbb{R}^n\rightarrow\mathbb{R}^m$ be a linear map and let $\mathcal{C}$ be a closed cone in $\mathbb{R}^n.$ Prove that $T(\mathcal{C})$ is a closed cone in $\mathbb{R}^m$ provided $\ker(T)\cap \mathcal{C}=\{0\}$. I have no problem to justify that $T(\mathcal{C})$ is a cone in $\mathbb{R}^m$. The issue is to verify that $T(\mathcal{C})$ is closed in $\mathbb{R}^m$. My idea is trying to use a sequence characterization for closed sets, by which I mean that for any given sequence in $T(\mathcal{C})$ that converges to some $x\in\mathbb{R}^m$, then $x\in T(\mathcal{C}).$ But it was not successful, since I do not know how to apply the given hypothesis $\ker(T)\cap \mathcal{C}=\{0\}$. Does anyone have a useful option/thought or recommendation on this problem? Or if you have a better idea, I would be happy to listen to it.
Let $y_n:=Tx_n \to y$ be a sequence in $T(C)$ with $x_n\in C$. If $(x_n)$ contains a bounded subsequence, then it has a convergent subsequence $x_{n_k}\to x$; since $C$ is closed, $x\in C$, and by continuity $y=Tx\in T(C)$. If $(x_n)$ contains no bounded subsequence, then $\|x_n\|\to\infty$. We can consider the sequence $v_n:= \frac{x_n}{\|x_n\|}\in C$, which is well-defined for all $n$ large enough. It has a converging subsequence $v_{n_k}\to v$ with $\|v\|=1$. Now $$ Tv_n = \frac{y_n}{\|x_n\|} \to 0 $$ as $(y_n)$ is a bounded sequence. This shows $v\in \ker T\cap C$, implying $v=0$, a contradiction to $\|v\|=1$. Hence $(x_n)$ cannot be unbounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Holomorphic extension in 3 complex variables The problem is the following: Suppose $U = \{ z \in \mathbb{D}^3 \ | \ \frac{1}{2} < |z_1| \text{ or } \frac{1}{2} < |z_2| \}$. Prove that every $f \in \mathcal{O}(U)$ extends to $\mathbb{D}^3$. Here $\mathbb{D}^3$ is the unit polydisc in three complex dimensions. I've thought about this for a while and the problem is elusive because the set $\mathbb{D}^3 \setminus U$ is not compact in $\mathbb{D}^3$ (it touches the boundary of $\mathbb{D}^3$ - take for example the point (0,0,1)) nor can we fit a Hartogs' figure. Any ideas? Thanks in advance.
I figured it out! Fix $z_3 = \zeta \in \mathbb{D}$ and let $g : \mathbb{D}^2 \to \mathbb{C}$, where $g(z_1,z_2) := f(z_1,z_2,\zeta)$. Then $g$ is holomorphic on the set $U' := \{ z \in \mathbb{D}^2 \ | \ \frac{1}{2} < |z_1| \text{ or } \frac{1}{2} < |z_2| \}$. However, $g$ extends over the complement of $U'$ in $\mathbb{D}^2$ (by Hartogs' theorem, since $U'$ is connected and the complement is compact). Moreover, the extension can be written as a Cauchy integral of $f$ over a circle in the $z_1$-variable, so it depends holomorphically on $\zeta$ as well. It follows that $f$ extends to all of $\mathbb{D}^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$3^2+2=11$, $33^2+22=1111$, $333^2+222=111111$, and so on. $3^2+2=11$ $33^2+22=1111$ $333^2+222=111111$ $3333^2+2222=11111111$ $\vdots$ The pattern here is obvious, but I could not find a proof. Prove that $\underset{n\text{ }{3}\text{'s}}{\underbrace{333\dots3}}^2+\underset{n\text{ }{2}\text{'s}}{\underbrace{222\dots2}}=\underset{2n\text{ }{1}\text{'s}}{\underbrace{111\dots1}}$ for any natural number $n$. I am not asking you to prove it for me; I just want a hint on how to start. Thanks.
Hint: $$\underbrace{aaa\cdots a}_{n\text{ a's}}=\frac{a(10^n-1)}{9}$$ Where $a\in\{0,1,2,3,4,5,6,7,8,9\}$ and $n\in\mathbb{N_0}$.
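To see where the hint leads, the identity can also be verified numerically for small $n$ (a quick check, not a substitute for the algebraic proof):

```python
# Using the hint: 33...3 (n threes) = 3(10^n - 1)/9, 22...2 (n twos) = 2(10^n - 1)/9,
# and 11...1 (2n ones) = (10^(2n) - 1)/9. Check the identity for small n.
for n in range(1, 13):
    threes = int("3" * n)
    twos = int("2" * n)
    ones = int("1" * (2 * n))
    assert threes**2 + twos == ones
```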
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Adjoint of Matrices Defined by Inner Products Let $H$ be a Hilbert space and let $(e_n)_{n \in \mathbb{N}}$ be an orthonormal basis for $H$. Now define for each $T \in {\bf B}(H)$ the doubly infinite matrix $A = (\alpha_{nm})$ by setting $\alpha_{nm} = (Te_n|e_m)$. I am trying to find the matrix corresponding to $T^*$. I know that since $T^*$ is the adjoint of $T$, for every $x,y \in H$, \begin{align*} (Tx|y) = (x|T^*y). \end{align*} Furthermore, since $(e_n)$ forms a basis for $H$, it suffices to show that \begin{align*} (Te_n|e_m) = (e_n|T^*e_m). \end{align*} So I expressed $Te_n = \alpha_{1n}e_1 + \cdots$ and observed that, since $(e_n)$ is orthonormal, \begin{align*} (Te_n|e_m) = \alpha_{mn}(e_m|e_m) = \alpha_{mn} = (Te_m|e_n). \end{align*} However, I could not see how to relate this to $(e_n|T^*e_m)$. Also, it seems there is almost a canonical way to define the matrix corresponding to $T^*$; however, I am not sure how to do it. Any help or direction would be appreciated. Thank you in advance.
Hint: $$\langle T^*e_n, e_m\rangle = \langle e_n, Te_m\rangle = \overline{\langle Te_m, e_n\rangle } = \overline{\alpha_{mn}}$$
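The hint can be checked concretely in finite dimensions, say $\mathbb{C}^2$ with the standard inner product $\langle x, y\rangle = \sum_i x_i \overline{y_i}$ (a small sketch; the matrix $T$ below is an arbitrary example, and the helper names are ours):

```python
def inner(x, y):
    """Standard inner product on C^n: sum_i x_i * conj(y_i)."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

def apply(T, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(row[j] * x[j] for j in range(len(x))) for row in T]

T = [[1 + 2j, 3 - 1j], [0 + 1j, 2 + 0j]]                             # arbitrary example
Tstar = [[T[j][i].conjugate() for j in range(2)] for i in range(2)]  # conjugate transpose
e = [(1, 0), (0, 1)]                                                 # orthonormal basis

alpha = lambda n, m: inner(apply(T, e[n]), e[m])      # matrix entries of T
beta = lambda n, m: inner(apply(Tstar, e[n]), e[m])   # matrix entries of T*

# The hint: the (n, m) entry of T* is the conjugate of the (m, n) entry of T.
for n in range(2):
    for m in range(2):
        assert beta(n, m) == alpha(m, n).conjugate()
```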
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Playing the same number at the lotto game A simple question that was already asked, but I didn't understand the modeling. If I always play the same number at the lotto, it really does look like a binomial distribution, like a die that I throw when I want to get at least a 6. In the case of the die, if I bet on a 6 each time, my probability of having gotten at least one 6 increases with each game. It becomes, for instance, $1- (\frac {5}{6})^2$ the second time I throw it. So why shouldn't it increase at the lotto game, if I play the same numbers and there is no bias?
The probability in lotto does increase the same way. As you say, if you roll dice $n$ times trying to get a $6$ your chance of at least one success is $1-(\frac 56)^n$. If you play a lotto with $10^6$ possible outcomes $n$ times the chance of at least one success is $1-(\frac{999,999}{1,000,000})^n$. For small $n$ this is about $\frac n{1,000,000}$, so it increases linearly with $n$.
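The parallel can be made concrete with a short computation (the $10^6$ figure is the answer's example, and the choices of $n$ below are arbitrary):

```python
def p_at_least_one(p_single, n):
    """P(at least one success in n independent trials of probability p_single)."""
    return 1 - (1 - p_single) ** n

# Dice: at least one 6 in two rolls, as in the question: 1 - (5/6)^2 = 11/36.
assert abs(p_at_least_one(1 / 6, 2) - 11 / 36) < 1e-12

# Lotto with 10^6 equally likely outcomes: same formula, and for small n
# the probability grows roughly linearly, about n / 10^6.
p = 1 / 1_000_000
for n in (1, 10, 100):
    assert abs(p_at_least_one(p, n) - n * p) < 1e-8
```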
{ "language": "en", "url": "https://math.stackexchange.com/questions/3282972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding zero divisors in a polynomial quotient ring Is $x^2+x+I$ a zero divisor in $\mathbb{Z}_7[x]/I$, where $I=(x^3+5x^2+2x+5)$? I know that $\gcd(x^3+5x^2+2x+5,x^2+x)=x+1$, and that it means that $x^2+x$ is indeed a zero divisor. What I struggle with is finding $g(x)$ such that $$(f(x)+\langle x^3+5x^2+2x+5\rangle)(g(x)+\langle x^3+5x^2+2x+5\rangle)=\langle x^3+5x^2+2x+5\rangle$$ in the quotient ring. I know how to find an inverse, is this any similar? How is it done?
$ f=x(x\!+\!1)$ is a $0$-divisor mod $g\!\iff\! d:=\gcd(f,g)\neq 1,g$ $\!\iff\! g(0)=0\,$ xor $g(-1)=0\,$ Then we have $\!\bmod g\!:\ (\color{#c00}{g/d})f \equiv (f/d)g\equiv 0,\,$ both $\,g/d,f\not\equiv 0.\,$ Thus you seek $\,\color{#c00}{g/d}$ This is true in the OP: $\,g(0)= 5\neq 0,\,\ g(-1)=7=0\,$ in $\Bbb Z_{\large 7},\ $ hence $\,d = x\!+\!1$
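A computational sanity check of this recipe over $\Bbb Z_7$ (a sketch using coefficient lists, lowest degree first; the helper functions are ours, not from any library):

```python
P = 7  # coefficient arithmetic over Z_7

def pmul(f, g):
    """Product of polynomials given as coefficient lists (lowest degree first), mod P."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def pmod(f, g):
    """Remainder of f modulo a monic polynomial g, coefficients mod P."""
    f = f[:]
    while len(f) >= len(g):
        if f[-1] == 0:
            f.pop()
            continue
        c, shift = f[-1], len(f) - len(g)
        for i, b in enumerate(g):
            f[shift + i] = (f[shift + i] - c * b) % P
        f.pop()
    return f

f = [0, 1, 1]         # x^2 + x
g = [5, 2, 5, 1]      # x^3 + 5x^2 + 2x + 5
g_over_d = [5, 4, 1]  # g/(x+1) = x^2 + 4x + 5 over Z_7, with d = gcd(f, g) = x + 1

# (g/d) * f is a multiple of g, hence zero in the quotient ring ...
assert not any(pmod(pmul(g_over_d, f), g))
# ... while g/d itself is nonzero there, exhibiting f as a zero divisor.
assert any(pmod(g_over_d, g))
```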
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Terence Tao uncountability of $\mathbb{R}$ There is a small detail I would like to understand. In the proof presented by Tao below: I don't understand why do we have the following cancellation: $\Sigma_{n < n_0 : n \in A} 10^{-n} - \Sigma_{n < n_0 : n \in B} 10^{-n}$ ? I mean we could have elements $n \in \mathbb{N}$ such that $n < n_0$ for which $n \in A - B$ or vice versa.
Our definition of $n_0$ is such that it is the least $k$ such that $k$ is in set $A$ but not $B$ or vice versa. Thus, all $n$ below $n_0$ would appear in both sets, so we can just cancel them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283303", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rank of Products of Matrices This is somewhat of a reference request. In several posts on the rank of products of matrices (e.g. Full-rank condition for product of two matrices), it is stated that $$ \mathrm{rank}(AB) = \mathrm{rank}(B) - \dim \big(\mathrm{N}(A) \cap \mathrm{R}(B)\big)$$ It appears that this is a classic result, though I am not familiar with it. If anyone can point me to a textbook that discusses it and other rank inequalities, that would be much appreciated!
Suppose there exists a $v$ with $Bu = v$ and $Av = 0$. What is $ABu$? Can you take it from there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show that no choice of numbers $a$ and $b$ can make $ax + by = (3,0,0)$ Show that no choice of numbers $a$ and $b$ can make $ax + by = (3,0,0)$ when $x = (3,-1,0)$ and $y = (0,1,5)$. The only topics the chapter covers are: Vector Space Operations, the Standard Basis, Coordinates of a vector $x$, and Components of a vector $x$. I don't think that the vector $v=(3,0,0)$ can be written as a linear combination of $x$ and $y$ as they are defined, but I also don't think that simply stating this is a "rigorous" enough answer. Is there a better (clearer) way I can show that the above statement is true? How can I show that the statement is true from a geometric point of view?
You may not have learned this yet, but $n$ vectors in $ \mathbb {R} ^{n}$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero. In your particular case, that matrix has a form (upper triangular) that makes it very easy to compute the determinant (simply multiply the diagonal entries): $\begin{vmatrix} 3 & 3 & 0 \\ 0 & -1 & 1 \\ 0 & 0 & 5 \end{vmatrix}=-15\ne0,$ so they're linearly independent. In particular, $(3,0,0)$ is not a linear combination of $x$ and $y$, so no choice of $a$ and $b$ works.
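An elementary alternative that stays within the chapter's toolkit is to compare components directly; a small sketch of that argument:

```python
# If a*x + b*y = (3, 0, 0) with x = (3, -1, 0) and y = (0, 1, 5), then
# componentwise: 3a = 3, -a + b = 0, 5b = 0.
a = 3 / 3  # forced by the first component
b = 0 / 5  # forced by the third component
# The second component then fails, so no choice of a and b works:
assert -a + b == -1 != 0
```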
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is the following subset of the space of matrices connected, open? If $S=\left\{ A=\begin{bmatrix}A_1&0 \\ 0&A_2 \end{bmatrix} \in \mathbb{M}_4(\mathbb{C}): \det A_1=\det A_2 \right\}$, is this set open in $\mathbb{M}_{4}(\mathbb{C})$ with the usual topology? If not, what are the interior points? I would like some hints towards this. Also, if someone could suggest a book to study the topology of spaces of complex matrices, it will be helpful to me. Thank you. I have an analytic map from the open unit disc to the norm unit ball of the space of matrices having image contained in $S$. That is why I am wondering about the interior of this set.
An idea: denote by $\;\mathcal B\;$ the set of all block matrices in $\;M_4(\Bbb C)\;$, and define $$f:\mathcal B\to\Bbb C\;,\;\;f\begin{pmatrix}A_1&0 \\ 0&A_2 \end{pmatrix}:=\det A_1-\det A_2$$ The above map is continuous (wrt the usual topologies in domain and codomain: the Euclidean one in $\;\Bbb C\;$ and the one in $\;\mathcal B\subset M_4(\Bbb C)\;$ inherited from the Euclidean one in $\;\Bbb C^4\;$), as it is a polynomial map in the entries of the matrix $\;A=\begin{pmatrix}A_1&0 \\ 0&A_2 \end{pmatrix}\;$. Finally, just observe that $\;S=f^{-1}(\{0\})\;$ and thus $\;S\;$ is closed...
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let X be a $T_1$-space satisfying the conclusion of Tietze's Extension Theorem; then prove that X is normal. I am unsure what exactly the "conclusion" of the theorem means: is it the entire theorem without the assumption of normality of $X$? So I am assuming that we have to go the other way around compared to the proof of Tietze's theorem, i.e., assume that every continuous function $f$ on a closed subspace $F$ has a continuous extension defined on all of $X$ (both of whose values lie in $[a,b]$) and work backward to prove that $X$ is normal. Also, why does it mention a $T_1$-space in particular, and not a general topological space? The entire proof of Tietze's Theorem is based on the assumption of normality of $X$, so I cannot understand how to go about proving the reverse to arrive at the condition of normality.
Let $A$ and $B$ be disjoint closed sets. Define $f:A \cup B \to \mathbb R$ by $f(x)=0$ if $x \in A$ and $f(x)=1$ if $x\in B$. Then we can verify that $f$ is continuous by showing that the inverse image of any closed set is closed. We are told that Tietze's Theorem can be applied. This means any continuous function defined on a closed subset of $X$ can be extended to a continuous function on $X$. So there is a continuous function $F: X \to \mathbb R$ such that $F(x)=0$ if $x \in A$ and $F(x)=1$ if $x\in B$. Now $F^{-1} (-\infty, \frac 1 2)$ and $F^{-1}(\frac 12 , \infty)$ are disjoint open sets containing $A$ and $B$ respectively. This proves that $X$ is normal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Composition of matrices - eigenvectors Say $\lambda$ is an eigenvalue of $\textsf{ST}$. There exists $x \ne \textbf{0}$ such that $\textsf{ST}x= \lambda x$. Multiply both sides by $\textsf T$: $$\textsf{TST}x=\textsf{T} (\lambda x)$$ $$\textsf{TS}(\textsf{T}x)=\lambda(\textsf{T}x)$$ Thus $\textsf{T}x$ is an eigenvector for $\textsf{TS}$ (provided $\textsf{T}x \ne \textbf{0}$, which holds whenever $\lambda \ne 0$). Why is there such a potent relation between the eigenvectors of $\textsf{ST}$ and $\textsf{TS}$? How come one is just the transform of the other? Looking for a geometric/intuitive explanation. Hints welcome.
$\require{AMScd}$ Maybe more convoluted than necessary, but let me try. We have the following: $$ S: W \to V\\ T: V \to W $$ where $V, W$ are vector spaces. Thus we have $ST: V \to W \to V$ and $TS: W \to V \to W$. A way to combine all these is to consider the following diagram: \begin{CD} V @>T>> W @>S>> V\\ @V T V V @VV S V @VV T V\\ W @>S>> V @>T>> W \end{CD} Let us however look at the two linear maps $ST:V\to V$ and $TS:W\to W$; then \begin{CD} V @>ST>> V\\ @V T V V @VV T V\\ W @>TS>> W \end{CD} So whatever ''happens'' upstairs (the $ST$ map) happens ''downstairs'' (the $TS$ map), as long as we consider that there is a $T$ map between the two. More geometrically, the eigenvectors $x$ of $ST$ are the only vectors in $V$ that undergo a scaling by $\lambda$ when you apply $ST$, but don't change direction (definition of eigenvector). Similarly if $TSy= \lambda y$, with $y \in W$. Now the previous relation between the vector spaces tells us that vectors in $V$ and $W$ are related by $T$, and in particular we have that $y = Tx$. So while the direction of eigenvectors does not change ''upstairs'' and ''downstairs'', it does change when looking at them ''across'' spaces (trying to convey some intuition here, not sure it helps). Now why would the eigenvector $x$ be mapped through $T$ to an eigenvector $y$? That can be shown by contradiction, by assuming that $Tw=y$ but $w$ is not an eigenvector of $ST$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Approximating the product of two real numbers Let $a,b$ be two positive real numbers. For example, I want to calculate the approximate value of $45.11\times 67.89$ to only 2 decimal places. Note that calculators or other such devices aren't allowed. Also suppose I want to calculate $\frac{11789}{234558}$ approximately to 2 decimal places; how do I do it? One might ask why I would want to randomly calculate multiplications and divisions. Actually that isn't the case. There's an exam which I will be taking which has $\text{Data Interpretation}$ as one of its sections. In this section there are problems related to the annual turnover and profits of a company and many such things. Say the quarterly sales of a company for a certain year are given in the form of a pie chart, and we are asked to find the percentage increase/decrease between two consecutive quarters; thus the task is to quickly estimate the approximate value, or failing that, at least give a small range within which the value may fall. Any quick methods/suggestions will be appreciated! Here I have provided one of the sample questions. What I am asking for is a generalized method, like using percentages (mostly for division).
It's $$(45+0.11)(68-0.11)=3060+23\cdot0.11-0.0121=$$ $$=3062.53-0.0121=3062.5179\approx3062.52$$
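The same rearrangement, written as a check (the rounding target is the value computed above):

```python
# 45.11 * 67.89 = (45 + 0.11)(68 - 0.11)
#               = 45*68 + 0.11*(68 - 45) - 0.11^2
#               = 3060 + 2.53 - 0.0121 = 3062.5179
exact = 45.11 * 67.89
approx = 45 * 68 + 0.11 * (68 - 45) - 0.11 * 0.11
assert abs(exact - approx) < 1e-9
assert abs(exact - 3062.5179) < 1e-9
assert round(exact, 2) == 3062.52
```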
{ "language": "en", "url": "https://math.stackexchange.com/questions/3283992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
A signed measure is bounded If $\nu$ is a signed measure defined on $(X,\mathcal{M})$ such that $\nu(E)\in\Bbb{R}$ for all $E\in\mathcal{M}$, then $\nu$ is bounded. This looks weird to me. Of course, $\nu(E)\in\Bbb{R}$ implies that $\nu(E)<+\infty$, since we define a signed measure to take values in $\Bbb{R}\cup\{-\infty,+\infty\}$. But with only this information, how can one conclude that $\nu$ is bounded?
Assume that $E_1,E_2,...$ satisfy $\nu(E_n)\to+\infty$. By excluding finitely many terms from the beginning, we can assume that $\nu(E_n)>0$, and by passing to a subsequence with $\nu(E_{n_k})>2^k$ and taking differences of sets, that the $E_n$ are disjoint. Then $\mathbb{R}\ni\nu\left(\bigcup_nE_n\right)=\sum_{n=1}^{\infty}\nu(E_n)=+\infty$, a contradiction. Therefore, there is no such sequence. Similarly, you can exclude the existence of a sequence with $\nu(E_n)\to-\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $M=\sum_{g\in G}\rho(g)$ then $\operatorname{tr}(M)=0$ implies $M=0$ Let $G$ be a finite group and $\rho:G\to GL_n(\mathbb C)$ a representation. a) If $M=\sum_{g\in G} \rho (g) \neq 0$ then prove that there is a non-zero vector $v$ such that $\rho(g)v=v$ for every $g\in G$. b) If $\sum_{g\in G} \chi(g)=0$ then prove that $M=0$. The first part is easy: if $M\neq 0$, then there is a vector $w$ such that $Mw=v \neq 0$. But then $\rho(g)v= \rho(g) M w= Mw=v$ for every $g$. Any hint would be welcome. For b), however, I am a bit stuck. $\sum_{g\in G} \chi(g)=\operatorname{tr}(M)$. Suppose that $M\neq 0$, and let $v$ be the vector found in (a); then if I complete $v$ to a basis of $\mathbb C^n$, the first column of every matrix $\rho(g)$ would be the vector $[1,0,\dots,0]$. However, on the rest of the diagonal I could have negative values, so I don't see why the trace being $0$ implies $M=0$.
Notice that $$ \sum_{g\in G} \chi(g)$$ is $|G|$ times the inner product of the character of the representation with that of the trivial representation, hence if it equals $0$ the decomposition into irreducible representations does not contain the trivial representation. But if $M\neq 0$ then $\rho$ has a one-dimensional invariant subspace on which it acts trivially, i.e. a trivial subrepresentation; thus $M=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why does $\binom{n}{k} = 0$, if $k > n$? I have come across this in a textbook that I am currently studying, but I don't understand how I should prove it. A short explanation or proof would be nice.
For example expand the binomial $$ (1+x)^n $$ and the coefficient of $x^k$ is $\binom{n}{k}$ for all $k$. [This is why it is called a binomial coefficient.] Of course (when $n$ is a positive integer), this coefficient is $0$ for $k > n$.
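For what it's worth, Python's `math.comb` (Python 3.8+) adopts exactly this convention, which makes it easy to experiment with:

```python
import math

assert math.comb(5, 2) == 10  # usual case
assert math.comb(3, 5) == 0   # no way to choose 5 elements from a 3-element set

# Consistent with the expansion of (1 + x)^3: no x^k term for k > 3.
assert [math.comb(3, k) for k in range(6)] == [1, 3, 3, 1, 0, 0]
```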
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Change of basis for $M_{22}$ I have been stuck on this question for a while now. I can easily find the change of basis matrix if the entries in the bases are vectors as opposed to matrices. Let $$B_1 = \{\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} ,\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} ,\begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}\} \quad \text{and} \quad B_2 = \{\begin{bmatrix} 1 & 1 \\ 0 & -1 \end{bmatrix} ,\begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} ,\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\}$$ be two bases for $\operatorname{span}(B_1)$ in $M_{22}$, where the usual left to right ordering is assumed. Find the transition matrix (change of coordinate/change of basis matrix) $P_{B_1\rightarrow B_2}$.
Take the first vector of $B_1$ (in this case, by vector I mean the matrix) and write it as a linear combination of the elements in $B_2$, like this: $$\begin{pmatrix} 1&1 \\ 1&-1\end{pmatrix}=0\begin{pmatrix} 1&1 \\ 0&-1\end{pmatrix}+1\begin{pmatrix} 1&0 \\ 1&-1\end{pmatrix}+1\begin{pmatrix} 0&1 \\ 0&0\end{pmatrix}$$ the scalars are those that go in the first column of the desired matrix. Repeat the same process with the rest of the elements of $B_1$ $$\begin{pmatrix} 0&1 \\ 1&0\end{pmatrix}=-1\begin{pmatrix} 1&1 \\ 0&-1\end{pmatrix}+1\begin{pmatrix} 1&0 \\ 1&-1\end{pmatrix}+2\begin{pmatrix} 0&1 \\ 0&0\end{pmatrix}$$ $$\begin{pmatrix} 0&-1 \\ 1&0\end{pmatrix}=-1\begin{pmatrix} 1&1 \\ 0&-1\end{pmatrix}+1\begin{pmatrix} 1&0 \\ 1&-1\end{pmatrix}+0\begin{pmatrix} 0&1 \\ 0&0\end{pmatrix}$$ and put the scalars in their respective columns. Thus $$P=\begin{pmatrix} 0&-1&-1 \\ 1&1&1 \\ 1&2&0\end{pmatrix}$$
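One way to double-check the three computed columns, flattening each $2\times2$ matrix to the 4-tuple of its entries (the names below are ours):

```python
def combo(coeffs, mats):
    """Entrywise linear combination of matrices stored as flat 4-tuples."""
    return tuple(sum(c * m[i] for c, m in zip(coeffs, mats)) for i in range(4))

B1 = [(1, 1, 1, -1), (0, 1, 1, 0), (0, -1, 1, 0)]  # (a, b, c, d) for [[a, b], [c, d]]
B2 = [(1, 1, 0, -1), (1, 0, 1, -1), (0, 1, 0, 0)]

# Columns of the transition matrix P computed above.
P_columns = [(0, 1, 1), (-1, 1, 2), (-1, 1, 0)]

for b1, col in zip(B1, P_columns):
    assert combo(col, B2) == b1  # each B1 element is rebuilt from B2 with these scalars
```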
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can $\mathbb{Q×Q}$ be embedded in $\mathbb{R}$ as a group? I think the answer is NO: if it were possible, there would be a monomorphism from $H= \mathbb{Q×Q}$ to $\mathbb{R}$. Since the subgroups of $\mathbb{R}$ are either cyclic or dense, and $H$ is not cyclic, its image is dense; but its proper subgroup $\mathbb{Z×Z}$ is neither cyclic nor dense in $\mathbb{R}$, a contradiction. Hence the claim. Is my proof correct? Thanks.
The map $(a,b)\mapsto a+b\sqrt2$ is an injection $\mathbb{Q×Q} \to \mathbb{R}$ because $\sqrt2$ is irrational. $\sqrt2$ is not special here; any irrational number works, for instance $\pi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can every closed differential form be expressed via constant coefficients? Let $M$ be a smooth $n$ dimensional manifold, and let $1 \le k < n$. Let $\omega \in \Omega^k(M)$ be a closed $k$-form on $M$. Let $p \in M$. Do there exist coordinates around $p$, such that $\omega=a_{i_1i_2\dots i_k}dx^{i_1} \wedge dx^{i_2} \dots \wedge dx^{i_k}$, where $a_{i_1i_2\dots i_k}$ are constants? That is, I ask whether every closed differential form be locally expressed via constant coefficients. Edit: I forgot to require that $\omega$ should be everywhere non-zero. Otherwise, as mentioned by Paulo Mourão, one can take $xdx$ on $M=\mathbb{R}$.
I believe you don't have such coordinates around $0$ for $\omega=xdx$ in $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
How many 5-letter words can we make if the letters are in order? Using the $26$ English letters, the number of $5$-letter words that can be made if the letters are distinct is determined as follows: $26P5=26\times25\times24\times23\times22=7893600$ different words. What if the letters in each word are in alphabetical order? For example, the word JLOQY is valid, but the word JUMPY is invalid since U cannot come before M
Hint. How many ways can you choose the five different letters? Once you have them, in how many ways can you organize them in alphabetical order? (This assumes the letters are distinct.)
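The bijection behind the hint (one strictly increasing word per 5-element subset of the alphabet) is easy to confirm by brute force; $\binom{26}{5}=65780$. A small Python check:

```python
from itertools import combinations
from math import comb
import string

# combinations() emits each subset in sorted order, so every 5-subset of
# the alphabet yields exactly one word with strictly increasing letters
words = [''.join(c) for c in combinations(string.ascii_uppercase, 5)]

assert all(list(w) == sorted(set(w)) for w in words)  # distinct and ordered
print(len(words), comb(26, 5))  # 65780 65780
```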
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve for $(5x-1)(x-3)<0$ The inequality $(5x-1)(x-3)<0$ is true when $(5x-1)<0$ and $(x-3)>0$, or $(5x-1)>0$ and $(x-3)<0$. If I solve for $x$ in the first scenario, $x < \frac{1}{5}$ and $x > 3$, which is wrong. But if I solve for $x$ in the second scenario, $x > \frac{1}{5}$ and $x < 3$, which is correct. Why does this kind of contradiction occur in the first scenario?
As $x$ increases, the two linear factors are negative, then positive, and each changes sign once, at a root. So one changes sign before the other, and the combinations $--,+-,++$ are possible, but not $-+$.
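A quick numeric sweep illustrates this sign analysis (the grid of sample points is an arbitrary choice; this is a demonstration, not a proof):

```python
# Record the sign pattern of each factor on a grid of sample points
patterns = set()
solutions = []
for i in range(-100, 600):
    x = i / 100
    patterns.add(('+' if 5*x - 1 > 0 else '-', '+' if x - 3 > 0 else '-'))
    if (5*x - 1) * (x - 3) < 0:
        solutions.append(x)

print(sorted(patterns))                # --, +-, ++ occur; -+ never does
print(min(solutions), max(solutions))  # the inequality holds on (1/5, 3)
```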
{ "language": "en", "url": "https://math.stackexchange.com/questions/3284981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 3 }
Calculating $\int_0^\infty \frac{\cos(t)}{(1+t^2)^3}\text{d}t$ I am very new to the Residue theorem and now I am asked to calculate the following integral: $$\int_0^\infty \frac{\cos(t)}{(1+t^2)^3}\text{d}t$$ I know the integrand has poles of order $3$ at $z=\pm i$ and that I have to find a closed curve in order to calculate it. But as I said, I am very new to this and (to be honest) a little lost at the moment. Therefore any hint or help is very much appreciated!
Note that your integral is equal to$$\frac12\operatorname{Re}\left(\int_{-\infty}^\infty\frac{e^{it}}{(1+t^2)^3}\,\mathrm dt\right).$$And$$\int_{-\infty}^\infty\frac{e^{it}}{(1+t^2)^3}\,\mathrm dt=2\pi i\operatorname{res}_{z=i}\frac{e^{iz}}{(1+z^2)^3}.$$Finally, this last residue is equal to $-\frac{7i}{16e}$. Therefore, your integral is equal to $\frac{7\pi}{16e}$.
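For what it's worth, the value $\frac{7\pi}{16e}$ can be confirmed numerically with the standard library alone (composite Simpson's rule; the truncation point $50$ is an arbitrary choice, since $(1+t^2)^{-3}$ makes the tail below $10^{-9}$):

```python
from math import cos, exp, pi

def f(t):
    return cos(t) / (1 + t * t) ** 3

# Composite Simpson's rule on [0, 50] with n even subintervals
a, b, n = 0.0, 50.0, 100_000
h = (b - a) / n
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
numeric = s * h / 3

exact = 7 * pi / (16 * exp(1))
print(numeric, exact)  # both ~0.5056
```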
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
proof of second smallest eigenvalue using Lagrange equations Without the use of the Spectral Theorem, assume that $A$ is a symmetric $n\times n$ matrix, and define $f: \mathbb{R^n}\to\mathbb{R}$, $g_0: \mathbb{R^n}\to\mathbb{R}$, and $g_1: \mathbb{R^n}\to\mathbb{R}$ by $$f(\mathbf{x}) = \mathbf{x}\cdot A\mathbf{x},\ \ \ \ \ g_0(\mathbf{x}) = |\mathbf{x}|^2-1, \ \ \ \ \ g_1(\mathbf{x}) = \mathbf{y}\cdot\mathbf{x}$$ where $\mathbf{y}$ is any solution of the minimization problem in which we minimize $f(\mathbf{x})$ subject to the constraint $g_0(\mathbf{x}) = 0$. I proved $A\mathbf{y} = \lambda_1\mathbf{y}$ where $\lambda_1$ is the smallest eigenvalue. However, I am having trouble proving the claim for the second smallest eigenvalue $\lambda_2$ and its eigenvector: if $\mathbf{z}$ is any solution of the minimization problem in which we minimize $f(\mathbf{x})$ subject to the constraints $g_0(\mathbf{x}) = 0$ AND $g_1(\mathbf{x}) = 0$, then $A\mathbf{z} = \lambda_2\mathbf{z}$.
Notice that $P=I-yy^T$ projects $x$ onto $y_{\perp}$. So the minimization problem is equivalent to minimizing $x\cdot P^TAPx$. It is easy to see that $M:=P^TAP$ is symmetric, so $z$ will be an eigenvector for the smallest eigenvalue of $M$. Can you finish from here? As a hint: $Px=x$ whenever $(x,y)=0$
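A numerical illustration of this hint with a random symmetric matrix (this demonstrates the spectral claim about $M$; it is not the Lagrange-multiplier argument itself):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B + B.T                          # random symmetric matrix

lam, V = np.linalg.eigh(A)           # eigenvalues in ascending order
y = V[:, 0]                          # unit minimizer of x.Ax on |x| = 1
P = np.eye(5) - np.outer(y, y)       # projector onto the hyperplane y-perp

M = P.T @ A @ P                      # symmetric, and P.T == P here
mu = np.linalg.eigvalsh(M)

# The spectrum of M is {0} together with lambda_2, ..., lambda_n of A:
# M kills y, and fixes every other eigenvector of A (they are in y-perp)
expected = np.sort(np.append(lam[1:], 0.0))
print(np.allclose(np.sort(mu), expected))  # True
```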
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If every polynomial in $k[x]$ has a root in $E$, is $E$ algebraically closed? If $E/k$ is algebraic and for all $f$ in $k[X]$, all roots of $f$ lie in $E$, then $E$ is algebraically closed. The question is: If $E/k$ is algebraic and for all $f$ in $k[X]$, at least one root of $f$ lies in $E$, then is $E$ algebraically closed?
This is true, but it is not trivial. See Gilmer, A Note on the Algebraic Closure of a Field. The OP asked for another reference in the comments. A google search reveals Richman A theorem of Gilmer and the canonical universal splitting ring, which apparently gives a constructive proof. In this Math Stackexchange answer, Martin Brandenburg gives as an additional reference Isaacs Roots of Polynomials in Algebraic Extensions of Fields. Isaacs proves a generalization of Gilmer's theorem: An algebraic extension $K$ of a field $k$ is determined up to isomorphism over $k$ by the set of polynomials in $k[x]$ which have a root in $K$. He cites Gilmer and p.88 of a book called Theory of Fields by Nagata. It's not clear to me that such a book exists, but I did track down a proof of Gilmer's theorem as Theorem 2.12.2 on p. 71 of Nagata's Theory of Commutative Fields. Regarding Gilmer's theorem, Isaacs writes: This theorem is not quite the triviality it may appear to be at first glance. If one knows that all polynomials in $F[X]$ split over $E$, then it is an easy exercise to show that $E$ is algebraically closed. Under the weaker hypothesis of Theorem 1, however, this conclusion is considerably more difficult to prove. (It is more difficult to find in the literature, too. A search of about a dozen books that deal with field extensions was able to uncover only one proof of this result and two cases where at least a part of Theorem 1 appears as a problem.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285330", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 2, "answer_id": 1 }
Proposition $8.3$ - Fundamental groups and covering spaces by Elon Lages Lima Preliminaries (maybe important): Let $M$ and $N$ be oriented manifolds with the same dimension and $f: M \longrightarrow N$ a local diffeomorphism. We say that $f$ is positive (with respect to the chosen orientations) when, for each $x \in M$, the linear isomorphism $f'(x): T_xM \longrightarrow T_{f(x)}N$ is positive (this means $f'(x)$ is orientation preserving, i.e., maps positive bases of $T_xM$ to positive bases of $T_{f(x)}N$). Proposition $8.2.$ Let $f: M \longrightarrow N$ be a surjective local diffeomorphism, defined on a connected oriented manifold. In order that $N$ be orientable, it is necessary and sufficient that, for any $x,y \in M$ with $f(x) = f(y)$, the linear isomorphism $f'(y)^{-1} \circ f'(x): T_xM \longrightarrow T_yM$ be positive. My doubt concerns the following result: Proposition $8.3.$ Let $M$ be a connected manifold of class $\mathcal{C}^k$ and $G$ be a properly discontinuous group of diffeomorphisms of class $\mathcal{C}^k$ in $M$. If the quotient space $M/G$ is Hausdorff, then there exists a unique manifold structure of class $\mathcal{C}^k$ in $M/G$ such that the quotient map $\pi: M \longrightarrow M/G$ is a local diffeomorphism of class $\mathcal{C}^k$. Suppose that $M$ is oriented. In order that $M/G$ be orientable, it's necessary and sufficient that each diffeomorphism belonging to $G$ preserve orientation. I'm trying to understand the final argument of the proof: Suppose now that $M$ is oriented and each $\alpha \in G$ is a positive diffeomorphism of $M$. Then the local diffeomorphism $\pi: M \longrightarrow M/G$ satisfies $\pi(x) = \pi(y) \Rightarrow y = \alpha(x)$, with $\alpha \in G$. Since $\pi \circ \alpha = \pi$, we conclude that $\pi'(y) \circ \alpha'(x) = \pi'(x)$; that is, $\pi'(y)^{-1} \circ \pi'(x) = \alpha'(x)$, which is a positive linear isomorphism. It follows from Proposition $8.2$ that $M/G$ is orientable. 
Conversely, if $M/G$ is orientable, we take arbitrary $\alpha \in G$ and $x \in M$. Let $y = \alpha(x)$. Then $\pi(x) = \alpha(y)$. By Proposition $8.2$, the isomorphism $\pi'(y)^{-1} \circ \pi'(x)$ is positive. But this isomorphism coincides with $\alpha'(x)$. It follows that $\alpha$ is positive, which completes the proof. Why $\pi(x) = \alpha(y)$? I think this doesn't make sense because $\pi(x) \in M/G$ and $\alpha(y) \in M$ (recall that $\alpha \in G$, i.e., $\alpha$ is a diffeomorphism of class $\mathcal{C}^k$ in $M$).
I think you are correct. What is meant is that $\pi(x)=\pi(y)$. Therefore you can compute $\pi^\prime(y)^{-1}\circ\pi^\prime(x)$ because $\pi^\prime(y)$ and $\pi^\prime(x)$ both have value in $T_{\pi(x)}M/G$. Finally as it is said $\pi^\prime(y)^{-1}\circ\pi^\prime(x)=\alpha^{\prime}(x)$ which is positive so you can conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why is the equivalent Cauchy sequence in a topological group well-defined? I was reading chapter 10 of Atiyah, where I met the notion of equivalent Cauchy sequences for topological groups. Atiyah does not explain why equivalent Cauchy sequences indeed give an equivalence relation. I can manage to prove the reflexivity and symmetry of this relation, but I failed to work out the transitivity. Can anyone help me with this? Thank you. FYI. A Cauchy sequence of a topological group $G$ is a sequence $(x_n)_{n \in \mathbb{N}}$ such that for any open neighbourhood $U$ of $0$ (the identity of $G$), there is $N \triangleq N(U)$ such that for any $n,m > N, x_n-x_m \in U$. And we say that two Cauchy sequences $(x_n)$ and $(y_n)$ are equivalent if $x_n-y_n$ converges to $0$, that is, for every open neighbourhood $U$ of $0$, there is $N$ such that for all $n > N, x_n-y_n \in U$. I can see that for any open neighbourhood $U$ of $0$, if we can find an open neighbourhood $V$ of $0$ such that $V+V \subset U$, then the transitivity will be proved. But how, indeed, can one show the existence of such a $V$?
This is a standard theorem on topological groups, and follows from the continuity of the group operation. I'll write that as addition, and assume the operation is commutative, but the argument works for non-commutative groups too. As addition is continuous, then if $U$ is an open neighbourhood of $0$ then $W=\{(a,b)\in G\times G:a+b\in U\}$ is an open neighbourhood of $(0,0)$ in $G\times G$. By definition of the product topology, there are open neighbourhoods $V_1$ and $V_2$ of $0$ with $V_1\times V_2\subseteq W$. Let $V=V_1\cap V_2$. It is an open neighbourhood of $0$ and $V\times V\subseteq W$, that is $a+b\in U$ for all $a$, $b\in V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Condition for a quotient map to have compact image. Let $q:X\to X_{/\sim}$ be a quotient map for some relation $\sim$ on $X$. If there is a compact subspace $A\subset X$ such that every element of $X$ is in relation with some element of $A$, then $X_{/\sim}$ is compact, simply because the condition can be rewritten $q(A)=X_{/\sim}$. I am wondering if the converse is true in the general case, that is if $X_{/\sim}$ is compact, can we find a compact subspace $A\subset X$ such that any element of $X$ is in relation with some element of $A$? If not are there some assumptions that we can put on $X$ and $\sim$ to make the converse true? For example even if $X$ is a topological manifold and $\sim$ is generated by a covering space action, I am not sure the converse holds. Motivation: When $\Gamma$ is a group acting on a manifold $X$ by covering space action, it is sometimes pretty clear that there is no compact subspace $A\subset X$ such that "$A/\Gamma=X/\Gamma$" (for example $X=\Bbb R^2$ and $\Gamma=\langle (x,y)\mapsto (x+1,y)\rangle$). In that case we want to conclude that $X/\Gamma$ is not compact, but what is the "most general case" in which we have such a conclusion? Edit: This question has been asked on mathoverflow here. The answer is negative in general, but positive if $X$ is second countable and locally compact, and $X_{/∼}$ is first countable.
This is only a partial answer. If $Y = X/\sim$ is compact Hausdorff and $q$ is a local homeomorphism (see e.g. https://en.wikipedia.org/wiki/Local_homeomorphism), then the answer is "yes". This covers the case when $\sim$ is generated by a covering space action (then $Y$ is a compact manifold and $q$ is a covering projection). For each $y \in Y$ choose $x(y) \in q^{-1}(y)$ and an open neighborhood $U_y$ of $x(y)$ such that $q$ maps $U_y$ homeomorphically onto an open $V_y \subset Y$. Let $q_y : U_y \to V_y$ denote this homeomorphism. Choose open neighborhoods $W_y$ of $y$ such that $C_y = \overline{W}_y \subset V_y$. Since $Y$ is compact, finitely many $W_{y_i}$ cover $Y$. Hence also finitely many $C_{y_i}$ cover $Y$. Each $K_i = q_{y_i}^{-1}(C_{y_i})$ is compact (and Hausdorff), hence $A = \bigcup K_i$ is compact (but not necessarily Hausdorff if $X$ is not Hausdorff). Obviously $q(A) = Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
ZFC and axiom of power set It seems that I do not understand thoroughly the axioms of ZFC. I am wondering: is the axiom of power set really independent of the other axioms, and if it is, how does one prove that? In other words, how can one prove that ZFC without the axiom of power set is not equal to ZFC?
Consider the set $H(\kappa)$ consisting of sets $x$ which satisfy $|\text{tc}(x)| < \kappa$. As mentioned in this answer, this set $H(\kappa)$ is a model of all axioms of ZFC except power set. In fact, if one takes $\kappa$ to be a successor cardinal (such as $\aleph_1$), then one can verify that the axiom of power set is false in $H(\aleph_1)$. Now it follows that the axiom of power set cannot be proven from the other axioms of ZFC, for if this were the case, the power set axiom would be true in $H(\aleph_1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Last element in list We have to find the last element not marked in a list after certain operations. Operations are performed until only one element is left. Suppose I have a list which has certain elements 'marked' alternately after a particular index $x$. For example, I begin with the list $[a, b, c, d, e, f', g, h', i]$ and $x=5$. The complement symbol (or dash, whatever you call it) indicates that the element is marked. Currently the last element not marked is $i$. Now every not-marked element can mark the next not-marked element. We always begin from the first index in the list. So, we begin from index $1$ and find that $a$ itself is not marked. Hence $a$ marks the next not-marked element, which is $b$. The next not-marked element is $c$, which marks $d$, and so on. When we reach $e$, the next not-marked element is $g$, so it is marked too. The last element, as we see, will not be marked by anyone. Also, since it's the last element, it cannot mark anyone. After one iteration our list becomes: $[a, b', c, d', e, f', g', h', i]$. The last element not marked after our first iteration is $i$. For our second iteration we again begin with index $1$ and see that $a$ is not marked. $a$ hence marks the next not-marked element, which is $c$. $e$ marks $i$. Our list becomes: $[a, b', c', d', e, f', g', h', i']$. This time our last not-marked element is $e$. Finally $a$ will mark $e$ and $a$ will be our last remaining element. So for every iteration we have to find the last not-marked element in the list, until only one element is left and no more iterations can be performed. Some things I concluded: The first element, here $a$, will always be the last element remaining. Also, the last not-marked element stays the same as in the previous iteration if the number of not-marked elements in the current iteration is odd (here this happened in our first iteration: $i$ was the last not-marked element). I also found some similarities with the Josephus problem but couldn't really connect it. 
I think there will be some recurrence relation connecting the indices and the number of not marked elements for a particular iteration. I am hoping for some closed formula that can directly give me the index of the last not marked element or maybe some efficient technique rather than just brute force.
First eliminate all elements marked before the first step. Then, among the remaining elements, notice that after iteration $k$ the surviving numbers are at the positions $m\cdot2^k+1$, $m=0,1,\ldots$ So if you have $n$ unmarked elements in the beginning, the last surviving one after iteration $k$ is at position $m\cdot2^k+1$ with $m=\left\lfloor\frac{n-1}{2^k}\right\rfloor$. Of course, you have to map this index back into the original array, in which you ignored the already-marked elements.
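A direct simulation confirms this (Python sketch; here `n` counts only the initially unmarked elements, indexed from 1 in the reduced array):

```python
def survivors_after(n, k):
    """Positions (1-based) still unmarked after k passes, starting from
    n unmarked elements; each pass keeps every other surviving element."""
    alive = list(range(1, n + 1))
    for _ in range(k):
        alive = alive[0::2]          # each survivor marks the next survivor
    return alive

def last_unmarked(n, k):
    return ((n - 1) // 2**k) * 2**k + 1

# Check the closed formula against the simulation on a range of sizes
for n in range(1, 40):
    for k in range(1, 7):
        assert survivors_after(n, k)[-1] == last_unmarked(n, k)
        assert survivors_after(n, k) == [m * 2**k + 1
                                         for m in range((n - 1) // 2**k + 1)]

print(last_unmarked(7, 1), last_unmarked(7, 2), last_unmarked(7, 3))  # 7 5 1
```

With the question's example the reduced (unmarked-only) array is $[a,b,c,d,e,g,i]$, so $n=7$: the formula gives positions $7, 5, 1$, i.e. $i$, $e$, $a$, matching the walkthrough.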
{ "language": "en", "url": "https://math.stackexchange.com/questions/3285874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_{0}^\frac{\pi}{2} \sqrt{1+\sin^2(x)}dx$ I feel like I'm very close, so I would only like a hint. I'm only using real methods with the main thing I'm trying to connect the integral to is the Beta function. With a bunch of substitutions, I have boiled the integral down to $$\int_{0}^\frac{\pi}{2} \sqrt{1+\sin^2(x)}dx=\sqrt{2}\int_{0}^\infty \frac{\sqrt{1+x^4}}{(1+x^2)^2}dx=\sqrt{2}\int_{0}^\infty \frac{2x^2(x^4-x^2+2)}{(1+x^2)^3\sqrt{1+x^4}}dx$$ I feel like there is some substitution that could convert the integral into something in terms of the Beta function but I cannot figure it out for the life of me. For reference, $$\int_{0}^\frac{\pi}{2} \sqrt{1+\sin^2(x)}dx=\frac{1}{4\sqrt{2\pi}}\left(4\Gamma^2\left(\frac{3}{4}\right)+\Gamma^2\left(\frac{1}{4}\right)\right) $$
Hint 1: $$\frac{\mathrm d}{\mathrm dx}\sin(x)=\cos(x)=\sqrt{1-\sin^2(x)}$$ Hint 2: Conjugate the "numerator" so that the only radical is in the denominator. Hint 3: Perform a simple substitution so that you get a linear function inside the radical. All steps shown: $$\int_0^{\pi/2}\sqrt{1+\sin^2(x)}~\mathrm dx=\int_0^{\pi/2}\sqrt{\frac{1+\sin^2(x)}{1-\sin^2(x)}}\cos(x)~\mathrm dx=\int_0^1\sqrt{\frac{1+x^2}{1-x^2}}~\mathrm dx\\=\int_0^1\frac{1+x^2}{\sqrt{1-x^4}}~\mathrm dx=\frac14\int_0^1\frac{x^{-3/4}+x^{-1/4}}{\sqrt{1-x}}~\mathrm dx=\frac14B\left(\frac14,\frac12\right)+\frac14B\left(\frac34,\frac12\right)$$
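Both sides can be checked numerically with the standard library alone, writing $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ (a sanity check, not a derivation):

```python
from math import gamma, sin, sqrt, pi

def beta(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

closed = (beta(1/4, 1/2) + beta(3/4, 1/2)) / 4

# Composite Simpson's rule for the original integral on [0, pi/2]
def f(x):
    return sqrt(1 + sin(x) ** 2)

n = 10_000
h = (pi / 2) / n
s = f(0) + f(pi / 2) + sum((4 if k % 2 else 2) * f(k * h) for k in range(1, n))
numeric = s * h / 3

print(numeric, closed)  # both ~1.91010
```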
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that $\{\cos x, \sin x, e^x, e^{-x}\}$ is a linearly independent subset of $C^\infty (\mathbb R)$ I am going to prove that $\cos x, \sin x, e^x$ and $e^{-x}$ form a linearly independent subset of $C^\infty (\mathbb R)$, the space of smooth functions. First suppose $a\cos x+b\sin x+ce^x+de^{-x}=0$; we want to show that $a=b=c=d=0$. Suppose $c \ne 0$. Then $ce^x$ is sometimes much larger than the other 3 terms, which is a contradiction. So $c=0$. So we have $a\cos x+b\sin x+de^{-x}=0$; if $d \ne 0$, then by the same logic as for $c$, $de^{-x}$ is also sometimes much larger than the other 2 terms, which is a contradiction. So $d=0$. Here it becomes $a\cos x+b\sin x=0$: when $x=0$, $a\cdot 1+b\cdot 0=0$; when $x=\pi/2$, $a\cdot 0+b\cdot 1=0$ (there is no $x$ at which $\cos x$ and $\sin x$ vanish simultaneously). So $a=b=0$. Above is my proof; we have not learned the Wronskian or determinants, so I could only prove it by definition. Since this is about a subset of $C^\infty(\mathbb R)$, is there any correction or improvement for the above? Or is there a clearer way to prove this?
"Sometimes much larger" isn't a precise term, although it can be phrased more rigorously. For example, I suggest considering limits $\lim_{x\rightarrow\pm\infty}$. If $$ a \cos x + b\sin x+ c e^x + d e^{-x} =0$$ for all $x$, then $$ \lim_{x\rightarrow\pm\infty} (a \cos x + b\sin x+ c e^x + d e^{-x}) = 0$$ and you can use that to show that $c=0=d$.
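Another elementary route, needing neither limits nor the Wronskian: if the combination vanished identically it would vanish at any four sample points, so an invertible evaluation matrix forces $a=b=c=d=0$. A numpy sketch (the sample points are an arbitrary choice):

```python
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])           # arbitrary sample points
M = np.column_stack([np.cos(xs), np.sin(xs), np.exp(xs), np.exp(-xs)])

# Nonzero determinant: the only coefficient vector (a, b, c, d) making the
# combination vanish at all four points is zero, hence the four functions
# are linearly independent in C^infinity(R)
print(abs(np.linalg.det(M)) > 1e-9)  # True
```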
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Uniqueness property for the space of finite measure. Let $\mu$ be a finite measure on $\mathbb{R}$ satisfying, $$\int_{\mathbb{R}}f(x)d\mu(x)=0,~\forall f\in C_c(\mathbb{R})$$ Then is it true that $\mu =0$? We know that the result is true for $L^1(\mathbb{R})$, which is a subspace of the above. Edit after the comments of Kavi Rama Murthy Sir: Is the result also true for complex measure $\mu$ on $\mathbb{R}$?
For all $n=1,2,3,\dots$ there exists a non-negative function $f \in \mathrm{C}_{\mathrm{c}}(\mathbb{R})$ such that $f=1$ on $[-n,n]$. Therefore, for all $n=1,2,3,\dots$ \begin{equation} \mu([-n,n]) \leq \int_\mathbb{R} f d \mu = 0. \end{equation} By countable subadditivity $\mu(\mathbb{R})=0$, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find a basis and the dimension of the solution space $\textsf{W}$ $$\left\{\begin{align} x + 3y + 2z = 0 \\ x + 5y + z = 0 \\ 3x + 5y + 8z = 0 \\ \end{align}\right.$$ So if we represent this as an augmented matrix $$\begin{pmatrix} 1 & 3 & 2 & 0 \\ 1 & 5 & 1 & 0 \\ 3 & 5 & 8 & 0 \\ \end{pmatrix}$$ In row reduced form would be $$\begin{pmatrix} 1 & 0 & \tfrac{7}{2} & 0 \\ 0 & 1 & -\tfrac{1}{2} & 0 \\ 0 & 0 & 0 & 0 \\ \end{pmatrix}$$ Therefore, a basis of the set we can say would be vector $$\begin{pmatrix} 1 \\ 1 \\ 3 \end{pmatrix}$$ and the vector $$\begin{pmatrix} 3 \\ 5 \\ 5 \end{pmatrix}$$ right? I'm new to these problems and want to make sure I've got the right idea in approaching the solution.
It is not correct. How did you obtain these basis vectors? By row reduction you reduced your system to $$\begin{cases} x_1 + \frac72x_3 = 0 \\ x_2 - \frac12x_3 = 0\end{cases}$$ so $$\begin{bmatrix} x_1 \\ x_2 \\ x_3\end{bmatrix} = t\begin{bmatrix} -\frac72 \\ \frac12 \\ 1\end{bmatrix}, \quad\text{ for some } t \in \mathbb{R}$$ so e.g. $\left\{\begin{bmatrix} -7 \\ 1 \\ 2\end{bmatrix}\right\}$ is a basis for the solution space.
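A quick check of this answer (numpy; it verifies both that the vector lies in the null space and that the dimension is 1):

```python
import numpy as np

A = np.array([[1, 3, 2],
              [1, 5, 1],
              [3, 5, 8]], dtype=float)
v = np.array([-7, 1, 2], dtype=float)

print(A @ v)                     # [0. 0. 0.]  -> v solves the system
print(np.linalg.matrix_rank(A))  # 2 -> null space dimension is 3 - 2 = 1
```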
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Compact subgroups of a p-adic field Definition: A p-adic field is a finite extension of $Q_p$. Question: Let $E$ be a p-adic field, $G$ is a nontrivial additive compact subgroup of $E$, how to prove: $G$ is isomorphic to $Z_p^n$ for some positive integer $n$. This isomorphism is not only a topological group isomorphism but also a $Z_p$ module isomorphism. I guess we can prove it using non-archimedean analysis, but I still don't know how to prove it. Thanks for any answers!
$G$ is compact, thus closed in $E$. It is stable under multiplication by elements of $\mathbb{Z}$ (dense in $\mathbb{Z}_p$), thus is a $\mathbb{Z}_p$-submodule of $E$. Note that there exists a finite $\mathbb{Q}_p$-basis of the vector subspace $V$ spanned by $G$, with vectors $a_1, \ldots, a_n$. Now, for each $v \in V$, let $v_i \in \mathbb{Q}_p$ denote the coordinate of $v$ in the direction $a_i$. Then $v \longmapsto v_i$ is a linear form, thus is continuous, hence $G_i=\{g_i,\,g \in G\} \subset \mathbb{Q}_p$ is compact. Therefore, there is an $N>0$ such that for each $i$, $p^NG_i \subset \mathbb{Z}_p$. Now, let $$G’=\bigoplus_{i=1}^n{\frac{a_i}{p^N}\mathbb{Z}_p}.$$ $G’$ is a finitely generated $\mathbb{Z}_p$-module, thus is Noetherian, and since $G$ is a submodule of $G’$, $G$ is finitely generated over $\mathbb{Z}_p$. Since $\mathbb{Z}_p$ is principal and $G$ has no torsion, and all the nontrivial quotients of $\mathbb{Z}_p$ are finite, $G$ is a power of $\mathbb{Z}_p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
I can't find the Centre of Mass I am currently trying to find the centre of mass (COM) with a general coordinate R (radius of big circle) of a circle which is missing another circle, with half of the radius of the big circle (R/2). Half of this smaller circle is in the first quadrant, and half is in the fourth quadrant. I initially tried with an arbitrary value of R, R=2, and got the COM to be 2/6 (the expected value which I know to be true, as x of COM should be R/6). I then proceeded to try and find a general proof of some sort by using R as the value of the radius. I seem to be getting some contradiction. For example, my mass moment around the Y-axis ($\frac{1}{3}R^3$) is different to the calculated value when R=2 (have checked this by putting R=2 in $\frac{1}{3}R^3$, of course). This formula for the mass moment around the y-axis was calculated from the definition of a mass moment about the y-axis: $M_y = \int_{b}^{a} x[f(x)-g(x)] dx$. This inevitably means that my calculation of the x-coordinate of the COM for quadrants one and four is incorrect... $$x = \frac{8}{\pi R^2} \times\left(\int_{0}^{R} x\left(\sqrt{R^2 - x^2} - \sqrt{\frac{R^2}{4} - \left(x - \frac{R}{2}\right)^2}\right)dx\right) $$ Which comes out to be: $x = \frac{8R}{3\pi}$ This gives the general formula for the x-coordinate of the centre of mass of this shape within the first quadrant, which, as quadrants one and four are symmetrical, will be the x-coordinate of the centre of mass for both quadrant one and four (with the corresponding y value being 0); $(\frac{8R}{3\pi},0)$. This x-coordinate must still undergo a weighted average, of sorts, against the normal semi-circle in quadrants two and three. The expected output of this general formula for the x-coordinate of the COM for the value R=2 is 0.6977... (an earlier calculated value). However, I seem to get 1.6977..., +1 to the correct value. Quadrants two and three will have the COM at position: $(\frac{4R}{3\pi},0)$. 
Using this, we can later perform the weighted average aforementioned. My guess is that the answer to the integral for the mass-moment is incorrect, but as to how to fix that I am unsure. Could someone possibly help me out please? I'm spending way too much time on this and have gotten myself very confused. Thanks, Aidan
Your integral should be $$\frac{8}{\pi R^2} \left(\int_{0}^{R} x\left(\sqrt{R^2 - x^2} - \sqrt{\frac{R^2}{4} - \left(x - \frac{R}{2}\right)^2}\right)dx\right) = \frac{8R}{3\pi} - \frac R2$$ and then, dividing the total moment by the area $\frac{3\pi R^2}4$ of the shape, you get $$x_{\text{cm}} = \frac{\left(-\frac{4R}{3\pi}\right)\cdot \frac{R^2\pi}2 + \left(\frac{8R}{3\pi} - \frac R2\right)\cdot \frac{R^2\pi}4}{\frac{3\pi R^2}4} =-\frac{R}6$$ which is the correct result. The integral is much simpler in polar coordinates. The area of the shape is $A = \frac{3\pi R^2}4$ so \begin{align} x_{\text{cm}}&= \frac4{3\pi R^2}\int_{\text{shape}} x\,dxdy \\ &= \frac4{3\pi R^2}\left[\int_{\phi= -\frac\pi2}^{\frac\pi2}\int_{r=R\cos\phi}^R r^2\cos\phi \,drd\phi + \int_{\phi= \frac\pi2}^{\frac{3\pi}2}\int_{r=0}^R r^2\cos\phi \,drd\phi\right]\\ &= \frac4{3\pi R^2}\cdot \frac{R^3}3\left[\int_{\phi= -\frac\pi2}^{\frac\pi2} (1-\cos^3\phi)\cos\phi \,d\phi + \int_{\phi= \frac\pi2}^{\frac{3\pi}2}\cos\phi \,d\phi\right]\\ &= \frac4{3\pi R^2}\cdot \frac{R^3}3\left[\left(2-\frac{3\pi}8\right) -2\right]\\ &= -\frac{R}6 \end{align}
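The value $-\frac R6$ is easy to corroborate with a crude grid sum over the region (numpy sketch with $R=2$; the grid resolution is an arbitrary choice):

```python
import numpy as np

R = 2.0
n = 1201
xs = np.linspace(-R, R, n)
x, y = np.meshgrid(xs, xs)

# Big disk of radius R minus the disk of radius R/2 centred at (R/2, 0)
inside = (x**2 + y**2 <= R**2) & ((x - R/2)**2 + y**2 > (R/2)**2)

area = inside.mean() * (2 * R) ** 2    # fraction of the bounding square
x_cm = x[inside].mean()

print(area, 3 * np.pi * R**2 / 4)  # ~9.42 both
print(x_cm, -R / 6)                # ~-0.333 both
```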
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Finding $C$ such that $C^TAC=$diag$(I_k,-I_l,O)$ (Diagonal form) Let $A=\begin{pmatrix} 1 & 2 & 2 & 0 \\ 2 & 1 & 0 & 2 \\ 2 & 0 & 1 & 2 \\ 0 & 2 & 2 & 1 \end{pmatrix} \in M_4(\mathbb{R})$ I want to find a matrix $C$ such that $C^TAC=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$, since there are four eigenvalues, three positive and one negative. I used the reverse Hermite method for symmetric matrices (the algorithm taught in linear algebra books) and got: $D=\begin{pmatrix} 1 & 0 & 0 & 0 \\ -2 & 1 & 0 & 0 \\ \frac{2}{3} & -\frac{4}{3} & 1 & 0 \\ -\frac{8}{7} & \frac{2}{7} & \frac{2}{7} & 1 \end{pmatrix}$ So: $DAD^T=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & -3 & 0 & 0 \\ 0 & 0 & \frac{7}{3} & 0 \\ 0 & 0 & 0 & \frac{15}{7} \end{pmatrix}$ But I want $C^TAC=\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$ So I multiplied on the far left and far right by $\begin{pmatrix} 1 & 0 & 0 & 0 \\ -\frac{2}{\sqrt{3}} & \frac{1}{\sqrt{3}} & 0 & 0 \\ \frac{2}{3}\sqrt{\frac{3}{7}} & -\frac{4}{3}\sqrt{\frac{3}{7}} & \sqrt{\frac{3}{7}} & 0 \\ -\frac{8}{7}\sqrt{\frac{7}{15}} & \frac{2}{7}\sqrt{\frac{7}{15}} & \frac{2}{7}\sqrt{\frac{7}{15}} & 0 \end{pmatrix}$ and its transpose, but the result isn't $\begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{pmatrix}$. So which matrix do I have to multiply on the far left and far right, and how can I find such a matrix?
You’re well on the way to a solution, having done the hard part: you’ve found a matrix $D$ such that $DAD^T$ is a diagonal matrix. Now you just have to massage this diagonal matrix into the desired form. There are two things that you’ll need to do, in either order: * *Move the negative element on the diagonal down to the lower-right corner. *Scale the diagonal elements so that they’re equal to $\pm1$. For the first of these tasks, look for a permutation matrix $P$ that will move the second column of a $4\times4$ matrix all the way to the right when you right-multiply by $P$. There are many choices, but a simple transposition will do the trick. Left-multiplying by $P^T$ will move the second row to the bottom, so the net effect on your diagonal matrix will be to move the negative element in the second row and column to the bottom right. For the second of these tasks, find another diagonal matrix $S$ such that both left- and right-multiplying by this matrix will scale the elements of your diagonal matrix appropriately. The matrix $C$ that you seek is the product of these two matrices and $D^T$, taken in an appropriate order.
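Putting the recipe together with the matrices from the question (numpy sketch; the permutation chosen here swaps the second and fourth slots, one of many valid choices):

```python
import numpy as np

A = np.array([[1, 2, 2, 0],
              [2, 1, 0, 2],
              [2, 0, 1, 2],
              [0, 2, 2, 1]], dtype=float)

# D from the question: D @ A @ D.T = diag(1, -3, 7/3, 15/7)
D = np.array([[1,    0,    0,   0],
              [-2,   1,    0,   0],
              [2/3, -4/3,  1,   0],
              [-8/7, 2/7,  2/7, 1]])
d = np.diag(D @ A @ D.T)

S = np.diag(1 / np.sqrt(np.abs(d)))  # rescale diagonal entries to +/-1
P = np.eye(4)[[0, 3, 2, 1]]          # transposition moving the -1 last

C = (P @ S @ D).T                    # C.T @ A @ C = P S (D A D.T) S P.T
print(np.round(C.T @ A @ C, 10))     # diag(1, 1, 1, -1)
```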
{ "language": "en", "url": "https://math.stackexchange.com/questions/3286890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $\mathbb{E}[X^6] < \infty$ imply $\mathbb{E}[X^4] < \infty$? Let $X$ be a random variable. Does $\mathbb{E}[X^6] < \infty$ imply $\mathbb{E}[X^4] < \infty$? My tries * *I know this isn't rigorous but I thought that when there was a counterexample, the rv in question would have density so I could write $$ \mathbb{E}[X^k] = \int_{\mathbb{R}} x^k f(x) \ \text{d} x, $$ where $f$ is the PDF of $X$. I thought that if I had a rv with density $f(x) := x^{-4}$ this would yield $\mathbb{E}[X^4] = 1$ and $\mathbb{E}[X^6] = \infty$ but as $\int_{\mathbb{R}} x^{-4} \ \text{d} x = \infty$, $f$ isn't a PDF, so I don't know where to continue from there. *Jensens inequality gives $\mathbb{E}[X^k] \ge \mathbb{E}[X]^k$ from $k \in \mathbb{N}$ but if $\mathbb{E}[X] \in (0,1)$ we have $\mathbb{E}[X]^6 < \mathbb{E}[X]^4$ and for $\mathbb{E}[X] > 1$ we have $\mathbb{E}[X]^6 > \mathbb{E}[X]^4$, so I don't know where to continue from there. *I wanted to use that $L^q(\Omega) \subset L^p(\Omega)$ for $p \le q$ if $\Omega$ is a finite measure space but I have no reason to believe I can restrict myself to that special case.
Jensen's inequality in fact gives $E[Y^k] \ge E[Y]^k$ for any nonnegative random variable $Y$ and any $k \ge 1$ ($k$ does not have to be an integer). Now apply this with $Y = X^4$ and $k=6/4$.
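Spelling the hint out (an elaboration of mine, not in the original answer):

```latex
\mathbb{E}[X^4] \;=\; \mathbb{E}[Y]
   \;\le\; \mathbb{E}\!\left[Y^{6/4}\right]^{4/6}   % Jensen with Y = X^4,\ k = 6/4
   \;=\; \mathbb{E}[X^6]^{2/3} \;<\; \infty.
```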
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Which properties does the box topology conserve? The Wikipedia article on the Product topology has a wealth of examples of properties conserved by the product topology. The following is a quote from the linked article: " Separation * *Every product of T0 spaces is T0 *Every product of T1 spaces is T1 *Every product of Hausdorff spaces is Hausdorff *Every product of regular spaces is regular *Every product of Tychonoff spaces is Tychonoff Compactness Every product of compact spaces is compact (Tychonoff's theorem) " I know that Tychonoff's theorem doesn't hold for the box topology, i.e. a product of compact spaces with the box topology need not be compact. Which, if any, of the other properties above are conserved by the box topology? Note that since the box topology and the product topology are identical in the finite case, all of the above are true for the box topology in the finite case.
It preserves the separation axioms up to Tychonoff. In the Handbook of Set-theoretic Topology there is a chapter by Scott W. Williams on box products with the theorem: if $X_i, i \in I$ are non-discrete, Hausdorff completely regular spaces, then for an infinite index set $I$ the product $\prod_{i \in I} X_i$ in the box topology (also denoted $\Box_{i \in I} X_i$) is not (i) locally compact or compact, (ii) separable, (iii) connected or locally connected, (iv) first countable, or (v) perfect (= closed sets are $G_\delta$). So box products are never very "nice", and they are a nice source of possible counterexamples. For example, the first ZFC Dowker space ever found is a subspace of a box product.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is the diameter of a circumcircle of a triangle related to the law of sines? Please refer to the following image for clarity. In this diagram I must find AC ( which is$\sqrt21$) in order to find the radius of the circle. I know that the radius can be found by dividing the diameter by $2$, and I know that the diameter is equal to AC/sin(60). My question is, why is this true? Why is the diameter equal to this?
Let $ABC$ a triangle with acute angle $\gamma$, let $M$ be the center of the circumcircle and $r$ be its radius. By moving $C$ on the arc over $AB$ the angle $\gamma$ doesn't change; move it to the intersection $C'$ of $AM$ and the circle. From Thales the triangle $ABC'$ is right-angled with a right angle at $B$. Hence $\sin(\gamma)=AB/2r$.
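With the specific numbers from the question ($AC=\sqrt{21}$ opposite an inscribed $60^\circ$ angle), the rule $\text{diameter}=AC/\sin\gamma$ gives (a quick check of mine):

```python
import math

# chord / sin(opposite inscribed angle) = diameter, per the argument above
AC = math.sqrt(21)
diameter = AC / math.sin(math.radians(60))
radius = diameter / 2
print(radius)  # sqrt(7) ≈ 2.6458
```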
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
$n^n<(n-1)^{n+1}$ for integer $n\ge5$ For natural number $n\ge5$, by mathematical induction or otherwise, prove that $n^n<(n-1)^{n+1}$. Actually I was trying to solve the problem that I posted. My Attempt: Step I: Verify that when $n=5$, then the given inequality holds true; $5^5\overset{?}{<}4^6\Rightarrow 3125<4096$ which is true. Step II: Assume that the inequality is true for $n=k\ge 5$; that is $k^k<(k-1)^{k+1}$ is true for $k\ge5$. Step III: We need to prove that if the inequality is true for $n=k$, then it is true for $n=k+1$ as well; that is to show that if $k^k<(k-1)^{k+1}$ is true, then $(k+1)^{k+1}<k^{k+2}$ is also true. I have a difficulty to complete this part of the proof. Any help would be appreciated.
I'd write $m=n-1$. Then the statement reduces to $$\left(1+\frac1m\right)^m<\frac{m^2}{m+1}.$$ It's well-known that $(1+1/m)^m$ increases to $e$, but more naively, $$\left(1+\frac1m\right)^m=1+1+\frac1{m^2}{m\choose 2} +\frac1{m^3}{m\choose 3}+\cdots<1+1+\frac12+\frac16+\cdots<3.$$ But $$\frac{m^2}{m+1}>\frac{m^2-1}{m+1}=m-1\ge3$$ for $n\ge5$ (i.e. $m\ge4$), and chaining the two bounds gives the claim.
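An exact integer check of the original inequality for the first several $n$ (a throwaway script of mine):

```python
# n^n < (n-1)^(n+1) for n >= 5, checked with exact integer arithmetic
for n in range(5, 40):
    assert n**n < (n - 1)**(n + 1), n
print(5**5, 4**6)  # 3125 4096, the base case
```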
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Verifying that a branching process is a Markov chain I would like to verify that the following model of a branching process creates a Markov chain - a Markov chain here defined as having the property that $\mathrm{P}\left\{\xi_{k+1}=a_{k+1} | \xi_{0}, \ldots, \xi_{k}\right\}=\mathrm{P}\left\{\xi_{k+1}=a_{k+1} | \xi_{k}\right\}$ where we take $\mathrm{P}\left\{\xi_{k+1}=a_{k+1} | \xi_{k}\right\}$ to be the random variable $\sum_{j=1 }^ m \mathrm{P}\left\{\xi_{k+1}=a_{k+1} | \xi_{k}=b_j\right\}\mathrm {1}_{\{\xi_k=b_j\} } $ (assuming $\xi_k $ has range $\{b_1,...,b_m \} $), and where $\mathrm{P}\left\{\xi_{k+1}=a_{k+1} | \xi_{k}=b_j\right\} $ is defined as the conditional probability $ \mathrm {P }\left\{\xi_{k+1}=a_{k+1} , \xi_{k}=b_j\right\} /\mathrm {P }\{ \xi_{k}=b_j \}, \ \mathrm {P }\{ \xi_{k}=b_j \} >0.$ Let $\xi_0=1 $ and for $k \ge 0 $ let $\xi_{k+1 } = \eta_1^{(k) }+...+\eta_{\xi_k } ^{(k) } $ where the random variables $\eta _j^{(k) } $ are mutually independent for any $j $ and any $k \ge 0 $. Verify that $\{\xi_k \} $ is a Markov chain.
I believe it is sufficient to check that for any sequence $i_1,...,i_{k+1 } $, $\mathrm{P}\left\{\xi_{k+1}\right.=i_{k+1} | \xi_{k}=i_{k}, \xi_{k-1}=i_{k-1}, \ldots \}=\mathrm{P}\left\{\xi_{k+1}=i_{k+1} | \xi_{k}=i_{k}\right\}$, considering that for a partition $\{A_m \}$ of $\{\xi_k = b_j \}$, $1_{\{\xi_k = b_j \}} =\sum 1_{A_m } $. Thus I would need to verify that $\mathrm{P}\left\{\xi_{k+1}\right.=i_{k+1} | \xi_{k}=i_{k}, \xi_{k-1}=i_{k-1}, \ldots \}=\mathrm{P}\left\{\xi_{k+1}=i_{k+1} | \xi_{k}=i_{k}\right\}$ Considering that the sets $\{\xi_{k+1}=i_{k+1} , \xi_{k}=i_{k}\}$ and $\{\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}=i_{k+1},\xi_{k}=i_{k}\} $ are equal. And by independence that $\mathrm {P } \{\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}=i_{k+1},\xi_{k}=i_{k}\}= \mathrm {P } \{\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}=i_{k+1}\} \mathrm {P }\{\xi_{k}=i_{k}\} $ We get that $\mathrm{P}\left\{\xi_{k+1}=i_{k+1} | \xi_{k}=i_{k}\right\}= \mathrm {P } \{\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}=i_{k+1}\}$ by cancelling the denominator in the former. An identical argument yields $\mathrm{P}\left\{\xi_{k+1}\right.=i_{k+1} | \xi_{k}=i_{k}, \xi_{k-1}=i_{k-1}, \ldots \}= \mathrm {P } \{\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}=i_{k+1}\}$ and thus the desired equality. In the above we use the property that $\eta _j ^ {(k) } $ are mutually independent means that $\eta_{1}^{(k)}+\cdots+\eta_{i_{k}}^{(k)}$ is independent of $\xi_{k}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\qquad f(tx)=t^2f(x)\iff\left\langle \nabla f(x),x\right\rangle =2f(x)$ $f:\mathbb{R}^n\to\mathbb{R}$ is a differentiable function. How do I show that the following are equivalent: (i) $\qquad f(tx)=t^2f(x)\quad\ \; \ \qquad \forall t\gt 0\land x\neq 0 $ (ii)$\qquad \left\langle \nabla f(x),x\right\rangle =2f(x) \qquad\forall x\neq 0$ This showed up in a set of practice questions and I don't know how to go about answering the question. Thank you.
If (i) is true let $\phi(t) = f(tx)$; then $\phi'(t) = \langle \nabla f (tx), x \rangle$, while from (i) also $\phi(t)=t^2f(x)$, so $\phi'(t) = 2 t f(x)$. Setting $t=1$ gives the desired result. If (ii) is true let $\eta(t) = {1 \over t^2} f(tx)$ and so $\eta'(t) = {1 \over t^3} (\langle \nabla f (tx), tx \rangle-2f(tx)) = 0$. Hence $\eta(t) = \eta(1)$ and we get the desired result.
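A numerical illustration of the equivalence for a concrete degree-2 homogeneous $f$ (my own example, with central-difference gradients):

```python
def f(x, y):
    return x**2 + 3*x*y + y**2      # homogeneous of degree 2: f(tx,ty) = t^2 f(x,y)

def euler_lhs(x, y, h=1e-6):
    # <grad f(x,y), (x,y)> via central differences (exact for a quadratic, up to rounding)
    fx = (f(x + h, y) - f(x - h, y)) / (2*h)
    fy = (f(x, y + h) - f(x, y - h)) / (2*h)
    return x*fx + y*fy

x0, y0 = 1.3, -0.7
assert abs(euler_lhs(x0, y0) - 2*f(x0, y0)) < 1e-5     # property (ii)
assert abs(f(2*x0, 2*y0) - 4*f(x0, y0)) < 1e-12        # property (i) with t = 2
```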
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Baby Rudin 2.34 Baby Rudin 2.34: Prove that a compact subset of a metric space is closed. I think I have an alternative solution for Rudin 2.34. So can you check whether my steps are correct? Let $p$ be a limit point of a set $K$. Take any neighborhood $V_s(p)$ in the metric space $X$. This neighborhood must contain some element $q_1$ belonging to $K$. Take a neighborhood $V_{r_1}(q_1)$ where $r_1=d(p,q_1)/2$. Now take the neighborhood $V_{r_1}(p)$. This contains an element $q_2$ belonging to $K$. Take a neighborhood $V_{r_2}(q_2)$ where $r_2=d(p,q_2)/2$. And so on. For the other elements of $K$, cover them with an open ball. So we have constructed an open cover. But this cover cannot have a finite subcover; for if it did, our sequence of neighborhoods would stop at some point, which would mean $p$ is not a limit point. Contradiction.
I agree with Siong Thye Goh that it's not a valid proof, but I'm going to rewrite it to give detail where it goes wrong. Let $K$ be a compact subset of a metric space, and assume for the sake of contradiction that it is not closed. Then there exists a limit point $p\notin K$. We will use $p$ to construct an open cover, and then we will try to show that this open cover has no finite subcover, for a contradiction. Now construct a sequence of open sets $U_i$ as follows. Take $q_1$ any point in $K$, and let $U_1$ be the open ball $V_{r_1}(q_1)$ with $r_1=d(p,q_1)/2$. Then $V_{r_1}(p)$ is disjoint from $U_1$ and contains another point $q_2\in K$. Let $U_2$ be the neighborhood $V_{r_2}(q_2)$ where $r_2=d(p,q_2)/2$, and note that $q_1\notin U_2$. Iterating, we get a sequence of sets $U_i$ and points $q_i$ such that $q_i\notin\bigcup_{j\neq i} U_j$ for all $i$. (So far this is OK -- by the last sentence, we can't remove any of the $U_i$ and still cover $K\cap(\bigcup_i U_i)$.) Now comes the hard step: we need an open cover $\{V_\alpha\}$ of $K\setminus (\bigcup_i U_i)$ such that for all $i$ there exists a point $x_i\in U_i$ so that $x_i\notin \bigcup_\alpha V_\alpha$ -- otherwise we could start removing some sets $U_i$ from the open cover, and possibly arrive at a finite subcover. Just taking $V_\alpha$ to be open balls for all $x_\alpha\in K\setminus (\bigcup_i U_i)$ as you have above does not guarantee this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3287892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability Exercise. Find a joint distribution. I have been working on some exercises for probability. There is a problem that I cannot even figure out where to start. So, here is the question. * *Let $T$ be drawn from a uniform distribution on the interval $\left[0, \,\sqrt{\,{2}\,}\, - 1\right]$. *Accept $T$ with probability $1/\left(1 + T^{2}\right)$, otherwise start over. *Let $S = 2T/\left(1 + T^{2}\right)$ and $C = 1 - ST$. *Now with probability $1/2$ switch $S$ and $C$. *Then with probability $1/2$ for each, independently, change the signs of $S$ and $C$. *What is the joint distribution of $S$ and $C$ ?. Any comments would be appreciated. Thanks in advance !.
OK, let's go by steps. First, for $T$, there is some rejection sampling happening. Noticing that and recalling the Cauchy distribution, we see that $T$ is drawn from a truncated Cauchy distribution -- i.e. $T$ has the distribution of a standard Cauchy random variable, conditioned on lying in the interval $[0,\sqrt{2}-1]$. What that ends up meaning is that $T$ has density $f_T$ equal to 0 outside of $[0,\sqrt{2}-1]$ and otherwise equal to \[ f_T(x)=\frac{1}{Z} \frac{1}{1 + x^2} \] where \[ Z = \int_0^{\sqrt{2} - 1} \frac{1}{1 + x^2} \, dx = \frac{\pi}{8} \] thankfully -- someone has been nice with the constants. So $f_T(x)=\frac{8}{\pi}\frac{1}{1+x^2}$. Now for $S$. What is this thing? If $g(x)=2\frac{x}{1+x^2}$ then $S=g(T)$. $g$ is a smooth invertible (in fact strictly increasing) function on the domain we care about (the values that $T$ can take, which are $[0,\sqrt{2}-1]$), so we can use a change of variables formula to compute the density $f_S$ of $S$ as follows. First, since $g(0)=0$ and $g(\sqrt{2}-1)=\sqrt{2}^{-1}$ (again we have been blessed by someone here), we see that the probability that $S$ lies outside the interval $[0,\sqrt{2}^{-1}]$ is 0, so that $f_S$ is equal to 0 outside that interval, and inside of it we have\[ f_S(x)=f_T(g^{-1}(x)) g^{-1}{'}(x). \] On our region of interest, $g^{-1}(x)=\frac{1-\sqrt{1-x^2}}{x}$, which has derivative\[ g^{-1}{'}(x)=\frac{1}{x^2\sqrt{1-x^2}} - \frac{1}{x^2}=\frac{1-\sqrt{1-x^2}}{x^2\sqrt{1-x^2}}. \] OK, so it's a bit of a pain, but since $1+g^{-1}(x)^2=\frac{x^2+(1-\sqrt{1-x^2})^2}{x^2}=\frac{2(1-\sqrt{1-x^2})}{x^2}$, \[ f_S(x)=\frac{8}{\pi}\,\frac{x^2}{2(1-\sqrt{1-x^2})}\cdot\frac{1-\sqrt{1-x^2}}{x^2\sqrt{1-x^2}}=\frac{4}{\pi}\frac{1}{\sqrt{1-x^2}}, \] which isn't so bad in the end (and it integrates to $\frac{4}{\pi}\arcsin\frac{1}{\sqrt{2}}=1$ over $[0,\sqrt{2}^{-1}]$, as it should). OK, so we have now the distribution of $S$. Let's find that of $C$, noticing that if $h(x)=1-g(x)x$, then $C=h(T)$. But wait -- simplifying $h$ we see that $h(x)=\frac{1-x^2}{1+x^2}$. (Who made this problem, and why? Haha.) 
On the interval we care about,\[ h^{-1}(x)=\sqrt{\frac{1-x}{1+x}}, \] whose derivative has absolute value \[ |h^{-1}{'}(x)|=\frac{1}{(1+x)^{3/2}\sqrt{1-x}}, \] so that (keeping the absolute value, since $h$ is decreasing)\[ f_C(x)=f_T(h^{-1}(x))\, |h^{-1}{'}(x)| = \frac{4}{\pi}\frac{1}{\sqrt{1-x^2}} \] on $[\sqrt{2}^{-1},1]$. The matching densities are no accident: writing $T=\tan\Theta$ makes $\Theta$ uniform on $[0,\pi/8]$, with $S=\sin 2\Theta$ and $C=\cos 2\Theta$ -- which also hints at what the switching and sign-flipping steps accomplish. OK, I have to go offline for a while, but this might get you started...
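Both facts about the sampler in the first step can be checked numerically (a quick script of mine, not part of the original answer):

```python
import math
import random

# Direct check that Z = arctan(sqrt(2) - 1) = pi/8:
assert math.isclose(math.atan(math.sqrt(2) - 1), math.pi / 8)

# The rejection sampler for T accepts with overall probability Z / (sqrt(2) - 1):
random.seed(0)
n = 200_000
accepted = sum(random.random() < 1 / (1 + random.uniform(0, math.sqrt(2) - 1)**2)
               for _ in range(n))
rate = accepted / n
print(rate)  # ≈ (pi/8)/(sqrt(2)-1) ≈ 0.948
```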
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the determinant of a Toeplitz-Hessenberg matrix I am having trouble proving that $$\det \begin{pmatrix} \dfrac{1}{1!} & 1 & 0 & 0 & \cdots & 0 \\ \dfrac{1}{2!} & \dfrac{1}{1!} & 1 & 0 & \cdots & 0 \\ \dfrac{1}{3!} & \dfrac{1}{2!} & \dfrac{1}{1!} & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \ddots & \vdots \\ \dfrac{1}{(n-1)!} & \dfrac{1}{(n-2)!} & \dfrac{1}{(n-3)!} & \cdots & \dfrac{1}{1!} &1\\ \dfrac{1}{n!} & \dfrac{1}{(n-1)!} & \dfrac{1}{(n-2)!} & \dfrac{1}{(n-3)!} & \cdots & \dfrac{1}{1!} \end{pmatrix} =\dfrac{1}{n!}. $$
Hints Prove it by induction. At each step, expand by minors along the top row. At the end, think about the binomial theorem.
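If you want to see the pattern before proving it, here is an exact check in rational arithmetic (a throwaway script of mine):

```python
from fractions import Fraction
from math import factorial

def det(M):
    # exact Laplace expansion along the first row (fine for these small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)) if M[0][j])

def A(n):
    # a_{ij} = 1/(i-j+1)!  -- so 1 on the superdiagonal, 0 above it
    return [[Fraction(1, factorial(i - j + 1)) if j <= i + 1 else Fraction(0)
             for j in range(n)] for i in range(n)]

for n in range(1, 8):
    assert det(A(n)) == Fraction(1, factorial(n))
print(det(A(5)))  # 1/120
```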
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the projection matrix onto a subspace $V$ of $\mathbb R^n$ given an orthonormal basis of $V$ Let $V\subset \mathbb R^n$ be spanned by an orthonormal basis $\{v_1,\dots, v_d\}$, with each vector represented by a column vector under the canonical basis of $\mathbb R^n$. How can I find a projection matrix $P:\mathbb R^n \to V$ in terms of $\{v_1,\dots, v_d\}$? I have noticed many similar questions asked before, but they didn't seem address the special case here (there are no specific numbers. i.e. $n\ge d$ are general.).
You can show that $P(v) = \sum\limits_{k=1}^d \langle v_k,v\rangle v_k$ is a formula for $P$ by noting that it works on an orthonormal basis for $\mathbb R^n$ extending $\{v_1,\ldots,v_d\}$, and you can use this to find the matrix entries $\langle P(e_i),e_j\rangle = \sum\limits_{k=1}^d\langle v_k,e_i\rangle\langle v_k,e_j\rangle$.
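A numerical sanity check of the matrix formula (stdlib only; the two spanning vectors are an arbitrary choice of mine):

```python
from math import sqrt, isclose

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Build an orthonormal pair v1, v2 spanning a plane V in R^4 (Gram-Schmidt)
a = [1.0, 1.0, 0.0, 0.0]
b = [1.0, 0.0, 1.0, 0.0]
v1 = [x / sqrt(dot(a, a)) for x in a]
b = [bi - dot(b, v1) * v1i for bi, v1i in zip(b, v1)]
v2 = [x / sqrt(dot(b, b)) for x in b]

n = 4
# Matrix entries P_ij = sum_k <v_k, e_i><v_k, e_j>; here <v_k, e_i> = v_k[i]
P = [[v1[i] * v1[j] + v2[i] * v2[j] for j in range(n)] for i in range(n)]
PP = [[sum(P[i][k] * P[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
assert all(isclose(PP[i][j], P[i][j], abs_tol=1e-12)
           for i in range(n) for j in range(n))          # P^2 = P
Pv1 = [sum(P[i][k] * v1[k] for k in range(n)) for i in range(n)]
assert all(isclose(Pv1[i], v1[i], abs_tol=1e-12) for i in range(n))  # P fixes V
```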
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a bijection between the set of prime ideals of norm $~ q~$, and the set of ring maps to $~F_q~$? OK, for each given ring morphism, $\mathfrak{o}\rightarrow~F_{q}~$, there exists another distinct arrow after composing with the Frobenius map (or a power of it). So, if $q = p^{n}$, then there are n distinct endomorphisms $F_{q}\rightarrow F_{q}$., and hence at least $n$ distinct ring homomorphisms from $\mathfrak{o}$ to $F_{q}$. But, from a highly technical text on etale cohomology (so I'd be inclined to believe it), the book simply says, the primes (or, prime ideals) $\mathfrak{p}$, such that, $\mathfrak{o}/\mathfrak{p}\approx F_{q}$, correspond bijectively to the ring maps, $\mathfrak{o}\rightarrow F_{q}$, which given the first (?) isomorphism theorem made sense intuitively. However, to get a grasp on this, I tried an example, $\mathbb{Z}_{3} [x]$ as my ring $\mathfrak{o}$, the three prime ideals, $(x^2+1), (x^2+x+2),$ and $(x^2+2x+2) =\mathfrak{p}$, so then I obtained the isomorphisms, $\mathfrak{o}/\mathfrak{p} \approx F_{9}$, but I instead find SIX ring maps $\mathfrak{o} \rightarrow F_9$! The three given from the three prime ideals, and their composition with the non trivia Frobenius map. So finally, my question: Are the sets bijective "up to endomorphism", where the maps are paired in an equivalence relation, or am I missing something? I have four books on abstract algebra, and naturally each isomorphism yields an ideal as the kernel, like I said I'm getting 6 not just 3 maps, and can't find any reference on such a bijective correspondence. Thanks in advance.
Prime ideals of norm $q$ are in bijection with surjections $R \to \mathbb{F}_q$, up to postcomposition with Frobenius, for exactly the reason you state. I don't know whether your text has a typo or whether you have misread it, but it is not true that prime ideals of norm $q$ are in bijection with surjections $R \to \mathbb{F}_q$.
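The $\mathbb F_3$ count from the question can be confirmed by brute force (a throwaway script of mine; for quadratics, irreducible means no root):

```python
p = 3
# monic irreducible quadratics x^2 + b x + c over F_p (no root test suffices for degree 2)
irr = [(b, c) for b in range(p) for c in range(p)
       if all((x*x + b*x + c) % p != 0 for x in range(p))]
print(len(irr))                    # 3: x^2+1, x^2+x+2, x^2+2x+2
assert len(irr) == (p*p - p) // 2  # general count of monic irreducible quadratics
assert 2 * len(irr) == p*p - p     # twice that = #(F_9 \ F_3): six ring maps, three ideals
```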
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Function of a random variable is a random variable Consider the following two statements. Statement 1 If $X$ is a continuous random variable and $Y=g(X)$ is a function of $X$, then $Y$ itself is a random variable. Statement 2 A random variable is a function from a sample space S into the real numbers. From statement 2 it is clear that a random variable should be a real valued function. Then how can any function of a random variable be another random variable as per statement 1? Where is my interpretation going wrong?
$X : \Omega \to \mathbb{R}$, i.e. to each $\omega$ you pair some real number $X(\omega)$. Then, $Y$ given by $g(X)$ should be interpreted as pairing $g(X(\omega))$ to each $\omega$. In other words $Y : \Omega \to \mathbb{R}$ and is just $g \circ X$.
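In a toy finite sample space (a die roll, my own example) the composition reads:

```python
# Omega = {1,...,6}. X : Omega -> R, and Y = g(X) is again a map Omega -> R.
omega = [1, 2, 3, 4, 5, 6]
X = {w: float(w) for w in omega}        # the random variable X
def g(x):
    return x**2
Y = {w: g(X[w]) for w in omega}         # Y = g o X : Omega -> R
assert all(Y[w] == X[w]**2 for w in omega)
```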
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Goedel's representability of simple recursive sets I'm referring to Goedel's theorem as exposed here: https://plato.stanford.edu/entries/goedel-incompleteness/ The formal system in question is named Q and is a first order formalization of natural numbers with addition and multiplication operations. A set S of natural numbers is said to be "weakly representable" if there exists a formula A(x) in Q such that for all n in S the formula A(n) can be proved in Q. A set is "strongly representable" if both itself and its complementary set are weakly representable. It turns out that the notions of strongly/weakly representable sets in Q is equivalent to recursive and recursive enumerable sets which. I think this is the core of Goedel's proof. I believe that the set of factorial numbers {n | exists m: m!=n} is recursive and hence should be strongly and also weakly representable. So there must exist a formula A(n) in Q which represents the property of n being the factorial of some number m. What is such a formula? I cannot find anything easy... I feel there is something I'm missing in the story so far.
Thanks to the answer of @Noah Schweber and the description of the $\beta$ function in wikipedia here is a possibile way to write the requested formula: $$ rem(a,b)=c \colon \qquad (c<b) \land \exists n\colon a=b\cdot n + c\\ \beta(a,b,i) = c\colon \qquad rem(a,1+(i+1)\cdot b) = y\\ a_i = c \colon \qquad \beta(a,b,i) = c $$ $$ n!=k\colon \qquad \exists a \exists b \colon (a_1 = 1) \land (a_n =k) \land \forall i\colon ((i+1\le n \land i\ge 1)\implies a_{i+1} = (i+1)\cdot a_i) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Understanding the definition of an ideal of a semigroup Liapin wrote: Two-sided ideals are also all the possible unions of principal two-sided ideals. I get this as: * *If $I$ is an ideal of $S$ then for $a_1...a_k\in S$ we have $I= \cup(a_i)$. *Also, I think $a_i\in S$ can be chosen from $D$-classes' representatives of $S$ because they form a partition of $S$. Am I right?
Let $S$ be a semigroup and let $S^1$ be the semigroup equal to $S$ if $S$ is a monoid and equal to $S \cup \{1\}$, where $1$ is a new identity element, otherwise. This notation is useful in the context of two-sided ideals because if $X$ is a subset of $S$, then the two-sided ideal generated by $X$ is $S^1XS^1$. In particular, a subset $I$ of $S$ is a two-sided ideal if and only if $I = S^1IS^1$. Thus, if $I$ is a two-sided ideal, then $$ I = S^1IS^1 = S^1(\bigcup_{s \in I}\{s\})S^1 = \bigcup_{s \in I} S^1sS^1 $$ and hence $I$ is the union of the principal two-sided ideals $S^1sS^1$, where $s \in I$. Conversely, let $(S^1s_jS^1)_{j \in J}$ be a family of principal two-sided ideals and let $X = \{s_j \mid j \in J\}$. Then $$ \bigcup_{j \in J} S^1s_jS^1 = S^1XS^1 $$ is a two-sided ideal. This is the meaning of Ljapin's statement. I am not sure to follow your first remark but for the second one, yes: if $I$ is a two-sided ideal, then $$ I = \bigcup_{s \in I/\mathcal{D}} S^1sS^1 $$ where the notation $s \in I/\mathcal{D}$ means that you select only one representative of each $\mathcal{D}$-class contained in $I$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why presheaves are generalized objects? While self studying category theory (Yoneda lemma), I came across the statement that for any category $\mathsf{C}$ the functor category $\mathsf{Fun}(\mathsf{C}^{op}, \mathsf{Set})$ represents generalized objects of $\mathsf{C}.$ Here generalized means bunch of objects of $\mathsf{C}$ glued together. Because of the Yoneda embedding $$Y:\mathsf{C}\to\mathsf{Fun}(\mathsf{C}^{op}, \mathsf{Set}),$$ I can imagine that $\mathsf{C}$ lives inside of $\mathsf{Fun}(\mathsf{C}^{op}, \mathsf{Set})$ as $Y(\mathsf{C}),$ however I can not see why other objects in this category acts like generalized objects of $\mathsf{C}.$ Can anybody explain me why this philosophy works, possibly with some example.
The previous answers are very good, but I also like to always keep in mind a simple example when working with presheaves, to get a feel for what all this means. Luckily, we have a very simple and intuitive category of presheaves to work with. Consider the category $\mathbb{G}$, whose objects are $[0]$ and $[1]$, and whose non identity morphisms are $\sigma,\tau : [0] \to [1]$. A presheaf $X$ over $\mathbb{G}$ is given by two sets $X_{[0]}, X_{[1]}$, together with two applications $s,t : X_{[1]}\to X_{[0]}$. You might recognise from this the definition of a graph (or maybe you call it a multigraph, since you can always have multiple edges between two vertices, but I will call these graphs in the following). Explicitly, $X_{[0]}$ is the set of vertices, $X_{[1]}$ the set of edges, $s$ associates to each edge its source, and $t$ associates to each edge its target. You can work out that the representable $Y([0])$ is actually the graph consisting of a single point, and $Y([1])$ is the graph consisting of a single arrow between two different points. Rephrasing your statement for this special case reads "graphs are a generalisation of points and arrows". I find this quite enlightening, to understand what "glued together" means and how the original statement should be understood.
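A toy encoding of this dictionary (field names are my own choice):

```python
from dataclasses import dataclass

@dataclass
class Graph:
    """A presheaf on G: vertices X_[0], edges X_[1], and s, t : X_[1] -> X_[0]."""
    vertices: set
    edges: set
    s: dict   # source of each edge
    t: dict   # target of each edge

Y0 = Graph({'*'}, set(), {}, {})                       # Y([0]): one vertex, no edges
Y1 = Graph({'a', 'b'}, {'e'}, {'e': 'a'}, {'e': 'b'})  # Y([1]): one arrow a -> b
assert len(Y0.vertices) == 1 and not Y0.edges
assert Y1.s['e'] != Y1.t['e']   # a single arrow between two *different* vertices
```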
{ "language": "en", "url": "https://math.stackexchange.com/questions/3288940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Vector field with constant length Is it correct, that for some pseudo-Riemannian manifold $M$, $X \in \mathfrak{X}(M)$, if $g(X,X)=1$, then the integral curves for $X$ are geodesics? I have the following explanation: Let $\gamma$ be a curve, s.t. $\gamma'(t)=X_{\gamma(t)}$. Then $g(\gamma', \nabla_{\gamma'} \gamma')=\gamma'g(\gamma', \gamma')- g(\gamma', \nabla_{\gamma'} \gamma')$ which implies that $g(\gamma', \nabla_{\gamma'} \gamma')=0$ and therefore, since the metric is non-degenerate and $\gamma' \neq 0$ ,that $\nabla_{\gamma'} \gamma'=0$. I'm pretty sure that this can't be true but at the moment I don't see why it does not hold..
$g(\gamma',\nabla_{\gamma'}\gamma')=0$ means that the velocity and acceleration vectors are orthogonal, but this does not necessarily imply $\nabla_{\gamma'}\gamma'=0$. A simple example is the punctured plane $\mathbb R^2\setminus\{0\}$. In polar coordinates, take the smooth vector field $X=(-\sin\theta,\cos\theta)$. The integral curves are circles, which are not geodesics.
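The counterexample can be verified numerically; in flat $\mathbb R^2$ the covariant derivative $\nabla_{\gamma'}\gamma'$ is just the ordinary acceleration $\gamma''$ (the explicit parametrization below is mine):

```python
import math

# Unit rotational field on R^2 \ {0}: X(x, y) = (-y, x)/r, which has |X| = 1 everywhere.
# Its unit-speed integral curve on the circle of radius r:
r = 2.0
def gamma(t): return (r*math.cos(t/r), r*math.sin(t/r))
def vel(t):   return (-math.sin(t/r), math.cos(t/r))       # gamma'(t) = X(gamma(t))
def acc(t):   return (-math.cos(t/r)/r, -math.sin(t/r)/r)  # gamma''(t), centripetal

t = 0.7
v, w = vel(t), acc(t)
g1, g0 = gamma(t + 1e-6), gamma(t - 1e-6)
num_v = ((g1[0]-g0[0])/2e-6, (g1[1]-g0[1])/2e-6)
assert abs(num_v[0]-v[0]) < 1e-6 and abs(num_v[1]-v[1]) < 1e-6  # vel really is gamma'
assert abs(v[0]*v[0] + v[1]*v[1] - 1) < 1e-12    # g(gamma', gamma') = 1
assert abs(v[0]*w[0] + v[1]*w[1]) < 1e-12        # gamma' orthogonal to gamma''
assert math.hypot(*w) > 0                        # but gamma'' != 0: not a geodesic
```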
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there a polynomial $p(x)$ with integer coefficients such that $p(2013)=1789$ and $p(1515)=1830$? Problem: Is there a polynomial $p(x)$ with integer coefficients such that $p(2013)=1789$ and $p(1515)=1830$? My attempt: After ruling out polynomials of degree 1, 2 and 3, and with further inspection, it appeared to me that such a polynomial doesn't exist. Is my below proof accurate? Suppose that $p$ exists, and let $q(x)=p(x)-1789$ and $s(x)=p(x)-1830$ then, $q(2013)=0$ and $s(1515)=0$, so clearly, $q(x)=(x-2013)f(x)$ and $s(x)=(x-1515)g(x)$, for some polynomials $f$ and $g$ with integer coefficients. Let $a$, $c$ and $d$ be the constant terms of $p$, $g$ and $f$, respectively, and observe that the constant terms of $s$ and $q$ are $-1515c=a-1830 \ $ and $-2013d=a-1789$, $ \ $ respectively. $ \ \ \ \ \ \ (*)$ Thus, $a \equiv 1830 \equiv 315 \pmod{1515}$ and $a \equiv 1789 \pmod{2013}$. But this system of congruence doesn't have a simultaneous solution (since $ \gcd(1515, 2013)\not\mid 1789-315$). Hence, there's no integer value of $a$ that satisfies $(*)$, which is a contradiction. Therefore, $p$ doesn't exist. If the above is correct, is there perhaps a more direct way of proving this?
$p(2013)-p(1515)$ is divisible by $2013-1515=498,$ but $1789-1830=-41$ is not. It follows from the following reasoning. Let $p(x)=a_0x^n+a_1x^{n-1}+...+a_n,$ where $a_i\in\mathbb Z$. Thus, $$p(m)-p(k)=a_0(m^n-k^n)+a_1(m^{n-1}-k^{n-1})+...+a_{n-1}(m-k)=$$ $$=(m-k)(a_0(m^{n-1}+m^{n-2}k+...+k^{n-1})+a_1(m^{n-2}+...+k^{n-2})+...+a_{n-1}).$$
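A quick check of both divisibility facts (the sample polynomial is an arbitrary choice of mine):

```python
# The factorization above shows (m - k) | p(m) - p(k) for any integer polynomial p
def p(x):
    return 3*x**4 - 7*x**2 + 11*x - 5   # arbitrary integer-coefficient polynomial

m, k = 2013, 1515
assert (p(m) - p(k)) % (m - k) == 0
# ...but 498 = 2013 - 1515 does not divide 1789 - 1830 = -41, so no such p exists
assert (1789 - 1830) % (2013 - 1515) != 0
print(2013 - 1515, 1789 - 1830)  # 498 -41
```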
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove that if A is an upper triangular matrix with distinct values on the main diagonal, then A is diagonalizable. I know that for a matrix to be diagonalizable, the eigenvectors of its eigenvalues must be linearly independent. However, I am unable to prove the theorem in the title.
Given the matrix $A$ is a $k × k$ upper triangular matrix with distinct diagonal entries, $a_1, ~a_2, \cdots, ~a_k$. The determinant of an upper triangular matrix is the product of its diagonal entries. So $$f(t)= \det(A-tI)=(a_1-t)(a_2-t)\cdots(a_k-t) $$ Setting that to $~0~$, your $~k~$ eigenvalues are all distinct with multiplicity of $~1~$. The dimension of each eigenspace must be $~1~$. So by the test for diagonalizability, since your characteristic polynomial splits and the dimension of each eigenspace is equal to the algebraic multiplicity of its eigenvalue, $A$ is diagonalizable.
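A concrete instance (my own toy matrix), with the eigenvectors found by back-substitution:

```python
# Upper triangular with distinct diagonal 1, 2, 3; columns of V are eigenvectors
A = [[1, 4, 5],
     [0, 2, 6],
     [0, 0, 3]]
V = [[1, 4, 29],
     [0, 1, 12],
     [0, 0,  2]]
lam = [1, 2, 3]
# Check A v_k = lam_k v_k for each column; V is triangular with nonzero diagonal,
# hence invertible, so A = V diag(lam) V^{-1} is diagonalizable.
for k in range(3):
    v = [V[i][k] for i in range(3)]
    Av = [sum(A[i][j]*v[j] for j in range(3)) for i in range(3)]
    assert Av == [lam[k]*x for x in v]
assert V[0][0]*V[1][1]*V[2][2] != 0
```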
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Stably trivial vector bundles over a torus It is not difficult to see that there are non-trivial stably trivial bundles of rank $2$ for a closed surface $\Sigma$ of genus $\neq 1$ using that $T \Sigma$ is non-trivial, but what happens in the genus $1$ case? Are there stably trivial real vector bundle over $T^2$ which are non-trivial? In case the answer is affirmative, I'm actually interested in the following situation. Suppose we have an orientable real vector bundle $E \rightarrow T^2$ of rank $2$ such that $E \oplus \epsilon^2$ is trivial, where $\epsilon^2 \rightarrow T^2$ is the trivial real vector bundle of rank $2$. Can we conclude in this case that $E$ is also trivial? Thank you in advance.
I solved the question, I'll leave the answer for reference. The answer is yes for the first question and no for the second. Consider the tangent bundle of $T^2 \times S^2$, which is trivial. $T^2 \times S^2$ contains tori of nonzero self-intersection number. Pick one such torus, say $Y$, and let $E \rightarrow Y$ be the restriction of the tangent bundle of $T^2 \times S^2$ to $Y$. This is a rank $4$ trivial vector bundle over a torus, and it decomposes as $E = TY \oplus NY$, where $TY$ is the tangent bundle and $NY$ the normal bundle. Since a torus is parallelizable, $TY$ is trivial, so we are in the situation of the second question above. However, $NY$ is not trivial, since its Euler class coincides with its self-intersection number, which is nonzero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Expected time until there are two students Students arrive at a help centre according to a rate $r$ Poisson process. When there are n ≥ 1 students in the centre, the first one to leave does so at a random $Exp (2r)$ time. Suppose that there are presently no students in the centre. What is the expected time until there are two students? This is my answer so far, please let me know if I'm right! $E[T] = E[\text{time for first arrival}] + E[X]$ where $X$ is the time until there are two students given there is already one. Let's define birth as arrival of another student and death as departure of the existing student. $E[T] = \frac{1}{r}\\ + E[X|\text{birth < death}] \cdot P(\text{birth < death}) + E[X|\text{birth > death}] \cdot P(\text{birth > death})$ $E[T] = \frac{1}{r} + \frac{1}{r} \cdot \frac{1}{3} + E[T] \cdot \frac{2}{3}$ $\frac{1}{3}E[T] = \frac{4}{3r} $ Therefore, $E[T] = \frac{4}{r} $ It seems right to me but please let me know how you think.
Let $T_n$ be the interarrival times, $S_n$ the service times, and $$\tau = \inf\{t>0:X(t)=2\} $$ where $X(t)$ is the number of customers in the system at time $t$. After the first arrival, the next event happens at time $\min(T_2,S_1)\sim Exp(3r)$, and by memorylessness this minimum is independent of which of the two wins the race. Then \begin{align} \mathbb E[\tau] &= \mathbb E[T_1] + \mathbb E[\min(T_2,S_1)] + \mathbb E[\tau]\,\mathbb P(S_1<T_2)\\ &= \frac1r + \frac1{3r} + \frac23\,\mathbb E[\tau], \end{align} so that $\mathbb E[\tau] = \frac4r$. Your final answer is right, but two of the terms in your decomposition are off (and happen to cancel): $E[X\mid \text{birth}<\text{death}]$ is $\frac1{3r}$, not $\frac1r$, because conditionally on winning the race the arrival time is the minimum of the two exponentials; and when the death wins you must also add the elapsed time $E[\text{death}\mid \text{death}<\text{birth}]=\frac1{3r}$ before restarting.
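A Monte Carlo simulation of the model (my own sketch, with $r=1$; the function names are mine) gives a numerical estimate of $\mathbb E[\tau]$ to compare against:

```python
import random

random.seed(1)
r = 1.0

def sample_tau():
    """One realization of the time until two students are present."""
    t = 0.0
    while True:
        t += random.expovariate(r)            # empty centre: wait for an arrival
        arrival = random.expovariate(r)       # race: next arrival vs. departure
        departure = random.expovariate(2 * r)
        if arrival < departure:
            return t + arrival                # second student arrives first
        t += departure                        # the lone student leaves; start over

N = 50_000
estimate = sum(sample_tau() for _ in range(N)) / N
```

By memorylessness, resampling fresh exponentials after each departure is valid, so the loop faithfully simulates the chain.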
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Logarithmic inequality (looking for a better solution) $1+\sqrt{17-\log_{x}{2}} \cdot \log_{2}{x^7} \geq \log_{2}{x^{27}}$ Let $t = \log_{2}{x}$. Then we get (taking account of the fact that $x>0$ and $x \ne 1$ $$1+\sqrt{17-\frac{1}{t}}\cdot 7t \geq 27t$$ or $$\sqrt{17-\frac{1}{t}} \cdot 7t \geq 27t-1 \tag{1}$$ It's clear that $17-\frac{1}{t} \geq 0$ or else the root is not defined so either $t<0$ or $t\geq \frac{1}{17}$. Then I consider two cases: when $t<0$ and we can divide by $t$ and reverse the inequality or when $t\geq \frac{1}{17}$ and we can divide by $t$ safely. This way, I am able to get the correct answer. Do you think this is the most rational approach to the problem? At first, I mistakenly simplified the inequality by dividing the left part by $\sqrt{t}$ but this is obviously not correct for all values of $t$.
As you wrote, we have to have $$t< 0\qquad\text{or}\qquad t\ge\frac{1}{17}\tag1$$ Now, we have $$\begin{align}&1+\sqrt{17-\frac{1}{t}}\cdot 7t \geq 27t \\\\&\iff 7t\sqrt{17-\frac{1}{t}} \geq 17t-1+10t \\\\&\iff 7t\sqrt{17-\frac{1}{t}}\geq t\left(17-\frac 1t\right)+10t \\\\&\iff t\left(17-\frac 1t\right)-7t\sqrt{17-\frac 1t}+10t\le 0 \\\\&\iff t\left(\left(17-\frac 1t\right)-7\sqrt{17-\frac{1}{t}}+10\right)\le 0 \\\\&\iff t\left(\sqrt{17-\frac{1}{t}}-2\right)\left(\sqrt{17-\frac{1}{t}}-5\right)\le 0\tag2\end{align}$$ Now, let us consider $(1)$. When $t< 0$, we have $$\begin{align}(2)&\iff \left(\sqrt{17-\frac{1}{t}}-2\right)\left(\sqrt{17-\frac{1}{t}}-5\right)\color{red}{\ge} 0 \\\\&\iff \sqrt{17-\frac{1}{t}}\le 2\qquad\text{or}\qquad \sqrt{17-\frac{1}{t}}\ge 5 \\\\&\iff 17-\frac 1t\le 4\qquad\text{or}\qquad 17-\frac 1t\ge 25 \\\\&\iff 17t-1\ge 4t\qquad\text{or}\qquad 17t-1\le 25t \\\\&\iff t\ge -\frac 18\end{align}$$ When $t\ge\frac{1}{17}$, we have $$\begin{align}(2)&\iff \left(\sqrt{17-\frac{1}{t}}-2\right)\left(\sqrt{17-\frac{1}{t}}-5\right)\le 0 \\\\&\iff 2\le \sqrt{17-\frac{1}{t}}\le 5 \\\\&\iff 4\le 17-\frac 1t\le 25 \\\\&\iff 4t\le 17t-1\le 25t \\\\&\iff t\ge\frac{1}{13}\end{align}$$ So, we get $$-\frac 18\le t <0\qquad\text{or}\qquad t\ge\frac{1}{13}$$ Hence, the answer is $$\color{red}{2^{-1/8}\le x< 1\qquad\text{or}\qquad x\ge 2^{1/13}}$$
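A numerical cross-check of this solution set (my own sketch; the helper names are mine):

```python
import math
import random

def holds(x):
    """Evaluate the original inequality at x (x > 0, x != 1)."""
    t = math.log2(x)
    r = 17 - 1 / t
    if r < 0:
        return False                      # square root undefined
    return 1 + math.sqrt(r) * 7 * t >= 27 * t

def in_solution_set(x):
    # 2^{-1/8} <= x < 1  or  x >= 2^{1/13}
    return 2 ** (-1 / 8) <= x < 1 or x >= 2 ** (1 / 13)

random.seed(0)
samples = [random.uniform(0.01, 4.0) for _ in range(2000)]
samples = [x for x in samples if abs(x - 1.0) > 1e-9]
mismatches = [x for x in samples if holds(x) != in_solution_set(x)]
```

Every sampled point satisfies the inequality exactly when it lies in the derived solution set.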
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Nonisomorphic Groups with the Same Order and Exponent I am trying to find two nonisomorphic finite abelian groups with the same order and exponent. I've tried solving this problem for a few days, but I have had no luck. I tried looking for pairs of such groups of the form $\Bbb{Z}_{mn}$ and $\Bbb{Z}_m \oplus \Bbb{Z}_n$. I've tried searching for examples, but when it comes to numeracy, I am severely lacking. Still, I am fairly convinced that no such pair constitutes an example. If $m$ and $n$ are relatively prime, then the two groups are actually isomorphic, so this case is relatively uninteresting. Hence, suppose that $m < n$ are not relatively prime, and let $d$ be their GCD. In this case there are a million subcases to consider for either finding an example or showing no such example exists (e.g., if one could show that $nd < mn$ and $m | nd$, then I believe this would show no such example exists, etc.)
Let $\mathbf{V}=\{(1),(12)(34),(13)(24),(14)(23)\}$ be the four-group, and let $\Gamma_{4}=\langle i\rangle=\{1,i,-1,-i\}$ be the multiplicative cyclic group of fourth roots of unity, where $i^{2}=-1.$ These are two abelian groups of the same order. We will see they are not isomorphic. If there were an isomorphism $f:\mathbf{V}\rightarrow\Gamma_{4},$ then surjectivity of $f$ would provide some $x\in\mathbf{V}$ with $i=f(x).$ But $x^{2}=(1)$ for all $x\in\mathbf{V},$ so that$$i^{2}=f(x)^{2}=f(x^{2})=f((1))=1,$$contradicting $i^{2}=-1.$ Therefore, $\mathbf{V}$ and $\Gamma_{4}$ are not isomorphic. Note, however, that these two groups have different exponents ($2$ and $4$), so they only illustrate the "same order" part. For the question as stated (same order and same exponent), the smallest example is $\Bbb Z_4\oplus\Bbb Z_4$ versus $\Bbb Z_2\oplus\Bbb Z_2\oplus\Bbb Z_4$: both have order $16$ and exponent $4$, but they are not isomorphic, since they contain $3$ and $7$ elements of order $2$, respectively.
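For the "same order and exponent" requirement in the question, one can check a candidate pair by brute force. Here is a sketch (my own illustration, not part of the answer above) comparing $\Bbb Z_4\oplus\Bbb Z_4$ with $\Bbb Z_2\oplus\Bbb Z_2\oplus\Bbb Z_4$:

```python
from itertools import product
from math import gcd, lcm

def element_orders(moduli):
    """Orders of all elements of the direct sum Z_{m1} + ... + Z_{mk} (additive)."""
    orders = []
    for g in product(*(range(m) for m in moduli)):
        orders.append(lcm(*(m // gcd(m, x) if x else 1
                            for m, x in zip(moduli, g))))
    return sorted(orders)

A = element_orders((4, 4))        # Z4 + Z4
B = element_orders((2, 2, 4))     # Z2 + Z2 + Z4
```

Both groups have order $16$ and exponent $4$, yet different counts of elements of order $2$, so they are not isomorphic.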
{ "language": "en", "url": "https://math.stackexchange.com/questions/3289931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exercise showing that $\nabla f(r) = \frac{\mathrm{d} f}{\mathrm{d}r}\cdot \frac{\underline{r}}{r} $ Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a differentiable function. Let $$\underline{r}(x,y,z) := \begin{pmatrix}x\\y\\z\end{pmatrix}$$ be a vector field in Cartesian coordinates. The length $r$ of the vector $\underline{r}$ is $$r = \sqrt{x^2+ y^2 + z^2} $$ Show that $$\nabla f(r) = \dfrac{\mathrm{d} f}{\mathrm{d}r}\cdot \dfrac{\underline{r}}{r} $$ This is an exercise in preparation for a test. Here's what I have tried to do using the chain rule: $$\begin{split} \nabla f(r) &= \begin{pmatrix}\partial_x(f(r))\\ \partial_y (f(r))\\ \partial_z(f(r))\end{pmatrix}\\ &= \begin{pmatrix}\partial_x(f(r)) \cdot \dfrac{x}{r}\\ \partial_y (f(r)) \cdot \dfrac{y}{r}\\ \partial_z(f(r))\cdot \dfrac{z}{r}\end{pmatrix}\\ \end{split}$$ So if $$\partial_x(f(r)) = \partial_y(f(r)) = \partial_z(f(r)) = \dfrac{\mathrm{d} f}{\mathrm{d}r}\ $$ was a scalar, I would be done. However, I don't know if this would be the case or if I have applied the chain rule correctly. How do I continue from here, or how should I start in the first place to solve this?
I'm going to use the notation $\vec{r} = (x, y, z)$. We are ultimately trying to find $\nabla f(r) = \nabla f(g(\vec{r}))$ where $g(\vec r) = \|\vec{r}\|$. Now we know from the chain rule that $\nabla f(g(\vec{r}))=f'(g(\vec{r}))\, \nabla g(\vec{r})$, and a direct computation gives $\nabla g(\vec r)=\nabla\|\vec r\|=\vec r/\|\vec r\|$, which together give the desired answer. When remembering the chain rule, it's often helpful to think of the derivative matrix. The chain rule says that (using the notation in the link) $D(f(g(\vec{x}))) = Df(g(\vec{x})) \cdot Dg(\vec{x})$, where the dot symbolizes matrix multiplication. Note that for this matrix multiplication to be well-defined, we must have the composition be well-defined. In the special case that we have some $h(\vec{x}): \mathbb{R}^n \to \mathbb{R}$, we use the shorthand $Dh(\vec{x}) = \nabla h(\vec{x})$.
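A finite-difference sanity check of the identity (my own sketch; the radial profile $f(r)=\sin r + r^2$ is an arbitrary choice):

```python
import math

def f(r):
    return math.sin(r) + r * r

def fprime(r):
    return math.cos(r) + 2 * r

def grad_numeric(p, h=1e-6):
    """Central-difference gradient of x -> f(||x||) at the point p."""
    out = []
    for i in range(3):
        up = list(p); up[i] += h
        dn = list(p); dn[i] -= h
        r_up = math.sqrt(sum(c * c for c in up))
        r_dn = math.sqrt(sum(c * c for c in dn))
        out.append((f(r_up) - f(r_dn)) / (2 * h))
    return out

p = [1.0, -2.0, 0.5]
r = math.sqrt(sum(c * c for c in p))
grad_formula = [fprime(r) * c / r for c in p]   # f'(r) * r_vec / r
```

The two gradients agree to the accuracy of the finite difference.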
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Growth of the gradient of $f(x+y) \leq f(x) f(y)$ Let $f: \mathbb{R}^3 \rightarrow \mathbb{R}_{\geq 0}$ be a radial continuous function, $C^2$ on $\mathbb{R}^3 \setminus \{0\}$, which satisfies the following functional inequality $$ f(x+y) \leq f(x) f(y) $$ Do there exist constants $c,d$ such that for all $x\in \mathbb{R}^3 \setminus\{ 0\}$ we have $$ \vert \nabla f (x) \vert \leq c e^{d\vert x\vert} \quad (1)$$ In other words: if $f$ solves $f(x+y) \leq f(x)f(y)$, does its gradient also grow at most exponentially? How about the Hessian? In case we take $g(\vert x \vert)= f(x)$ and replace the inequality by an equality $g(r+s) = g(r) g(s)$, we get that $g(r) = e^{kr}$ for some $k$. Thus, in that case $f(x) = e^{k\vert x \vert}$. This function clearly satisfies the estimate $(1)$, as for $x\neq 0$ $$ \vert \nabla e^{k \vert x \vert} \vert = \left\vert \frac{kx}{\vert x \vert} e^{k\vert x \vert} \right\vert = \vert k \vert e^{k \vert x \vert}.$$ However, intuitively the function should be "less steep" when we take the functional inequality instead of the equality. Added: In case it helps, it would be fine if you assume $f\in C^\infty(\mathbb{R}^3)$.
The answer to my question can essentially be found in the answer of another question here Derivatives of functions satisfying Euler-ish inequality $f(x+y)\le f(x)f(y)$. Many thanks to @PhoemueX and @MaximilianJanisch (in particular for pointing out that we can actually smooth it without any problems). Let me just give the main idea here. If we are in one dimension, then every function $f: \mathbb{R} \rightarrow \mathbb{R}$ with $3 e^x \leq f(x) \leq 5 e^x$ solves the functional inequality as $$ f(x+ y) \leq 5 e^{x+y} < (3 e^x)(3e^y) \leq f(x) f(y). $$ (More generally: any function $f$ that satisfies $c\cdot e^x\le f(x)\le c^2\cdot e^x$ for a constant $c\geq 1$ satisfies the functional inequality.) Now we apply this idea to my problem. Hence, we pick a function $f$ as above; then $h(x):= f(\vert x \vert)$ will be radial and $$ h(x+y) = f(\vert x +y \vert ) \leq 5 e^{\vert x + y \vert} < (3 e^{\vert x \vert})(3 e^{\vert y \vert}) \leq h(x) h(y). $$ We could almost pick the example from the other answer, namely $$ f(x) = e^x (4 + \sin(e^{x^2})) $$ we would just need to smooth it slightly at the origin. For example $$ g(x) = e^{\vert x \vert + (\vert x \vert^2 - \vert x \vert) \chi(\vert x \vert)} (4 + \sin(e^{\vert x \vert^2})) $$ with a cut-off function supported around the origin will work. In fact this function is smooth, and already the gradient behaves terribly. Of course we could make it worse by replacing $e^{\vert x \vert^2}$ in the sine by something which grows even faster. This tells us that the gradient is not merely unbounded by every exponential; we can make it arbitrarily bad. Added: And, as Maximilian pointed out in the comment below, it works in any positive dimension.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve $ab + cd = -1; ac + bd = -1; ad + bc = -1$ over the integers I am trying to solve this problem: Solve the system of equations \begin{align} \begin{cases} ab + cd = -1 \\ ac + bd = -1 \\ ad + bc = -1 \end{cases} \end{align} for the integers $a$, $b$, $c$ and $d$. I have found that, when $a\neq0$, the first equation gives $b = \dfrac{-1-cd}{a}$. Other than that, I don't know where to start. Tips, help or a solution would be very appreciated. Thanks!
Hint: Squaring the equations gives \begin{align} \begin{cases} a^2b^2 +2abcd+ c^2d^2 = 1 \\ a^2c^2 +2abcd +b^2d^2 = 1 \\ a^2d^2 + 2abcd+b^2c^2 = 1 \end{cases} \end{align} Subtracting them yields \begin{align} \begin{cases} (a^2-b^2)(c^2-d^2)=0 \\ (a^2-c^2)(b^2-d^2)=0 \\ (a^2-d^2)(b^2-c^2)=0 \end{cases} \end{align} Now consider some cases.
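If you want to sanity-check where the case analysis leads, a brute-force search over a small box finds the integer solutions (my own sketch; spoiler in the test below):

```python
from itertools import product

solutions = [
    (a, b, c, d)
    for a, b, c, d in product(range(-3, 4), repeat=4)
    if a * b + c * d == -1 and a * c + b * d == -1 and a * d + b * c == -1
]
```

Every solution found is a rearrangement or global sign flip of a single quadruple, consistent with the case analysis forcing three of the variables to be equal.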
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Analytic continuation of Riemann zeta $\zeta(s)$ from the complex $\mathbb{C}$ to quaternion $\mathbb{H}$? One way to define the Riemann zeta function is by the analytic continuation of $$\zeta(s) = \sum_{n=1}^\infty \frac{1}{n^s} = \frac{1}{1^s} + \frac{1}{2^s} + \frac{1}{3^s} + \cdots$$ from the domain $Re(s)>1$ to the full complex plane $\mathbb{C}$. Thus, the Riemann zeta function is defined for $s \in \mathbb{C}$ with $\zeta(s) \in \mathbb{C}$. My question is: do we gain anything new by analytically continuing the Riemann zeta function to a "modified Riemann zeta function" with $s \in \mathbb{H}$ a quaternion and $\zeta(s) \in \mathbb{H}$? Does this lead to any interesting result in the math literature? Edit: more precisely, according to the comment, we seek an analytic continuation of $\zeta(s)$ from the complex $\mathbb{C}\setminus\{1\}$ to the quaternions $\mathbb{H}\setminus\{1\}$?
$\Bbb{H}$ is just a sub-algebra of $M_2(\Bbb{C})$. For $A \in M_n(\Bbb{C})$ use the Jordan normal form to obtain $A = P J P^{-1} = P (D+N)P^{-1}$ where $D$ is diagonal, $DN=ND$ and $N^n = 0$. Let $f(s) = (s-1)\zeta(s)= \sum_{k=0}^\infty c_k s^k$, which is entire; then $$P^{-1} f(A)P =f(D+N)=\sum_{k=0}^\infty c_k (D+N)^k =\sum_{k=0}^\infty c_k \sum_{l=0}^{n-1} {k \choose l} D^{k-l}N^l= \sum_{k=0}^{n-1} \frac{N^k}{k!} f^{(k)}(D)$$ Note the obtained function $A \mapsto f(A)$ doesn't depend on the basepoint $s_0 = 0$ we chose to expand $f(s)$ in a power series. It is not hard to convince oneself that something similar happens with a meromorphic function such as $\zeta(s)$, obtaining $$\zeta(A) = P \zeta(D+N)P^{-1}= P \sum_{k=0}^{n-1} \frac{N^k}{k!} \zeta^{(k)}(D)P^{-1}$$ where $\zeta^{(k)}(D)$ is the matrix of $k$-th derivatives $$\zeta^{(k)}(D) = \pmatrix{\zeta^{(k)}(D_{11}) & & \\ & \zeta^{(k)}(D_{22}) & \\ & & \ldots}$$ If $q \in \Bbb{H}\subset M_2(\Bbb{C})$ then $q q^* = q^* q = N(q) I$, so that $$q = P DP^{-1}, \qquad \zeta(q) =P \zeta(D) P^{-1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Geometric or Trigonometric way to fit a circle to series of points I'm working on a computer vision project where I'm trying to detect a moving colored circle (doughnut actually). After some work I'm able to get it working pretty well except for these two edge cases. My results are always a small set of approximately 5-10 points that describe a circle. Then I can calculate the "center of mass" of the object. When this comes out right like a circle, I get the center coordinates correctly. But when it comes out like the sketches below, the center point is off. The first image is the result of a little extra object being detected, and the second case is usually the camera occluding part of the image or sometimes an aggressive light source. A lot of my error cases have part of a circle in them. So I've been trying to think if there is a neat mathematical way to identify each edge case, and then solve it, by figuring out where the real center of a circle fitted to it would be. I can see how to approach this from a trial-and-error convergence sort of approach, but I thought there might be something smarter I hadn't thought of. I do know I could solve the problem on the right with an existing function if I had to, but I don't have a solution for the left. I don't want the circle that surrounds all the points on the left; I'm after what the circle would be if that little nub didn't exist. And I'd love to have a good understanding of solving both edge cases. Any advice is always appreciated.
One way to fit points to a circle is to do linear regression on the equation in the form $x^2 + a x + y^2 + b y + c = 0$. Writing this as $(x+a/2)^2 + (y+b/2)^2 = (a^2 + b^2)/4 - c$, this corresponds to a circle of centre $(-a/2, -b/2)$ and radius $\sqrt{(a^2+b^2)/4 - c}$.
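Here is a sketch of that fit in plain Python (my own illustration; the normal-equations solver and names are mine), tested on a partial arc like the occluded-doughnut case:

```python
import math

def fit_circle(pts):
    """Least-squares fit of x^2 + a x + y^2 + b y + c = 0 (linear in a, b, c),
    solved via the 3x3 normal equations with Gaussian elimination."""
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in pts:
        row = (x, y, 1.0)
        rhs = -(x * x + y * y)
        for i in range(3):
            v[i] += row[i] * rhs
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for j in range(col, 3):
                M[r][j] -= fac * M[col][j]
            v[r] -= fac * v[col]
    sol = [0.0] * 3
    for i in (2, 1, 0):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    a, b, c = sol
    center = (-a / 2, -b / 2)
    radius = ((a * a + b * b) / 4 - c) ** 0.5
    return center, radius

# 8 points on a partial arc of the circle centred at (3, -1) with radius 2
pts = [(3 + 2 * math.cos(t), -1 + 2 * math.sin(t))
       for t in [k * 1.5 * math.pi / 7 for k in range(8)]]
center, radius = fit_circle(pts)
```

Because the model is linear in $(a,b,c)$, no iterative geometric fitting is needed; a partial arc still pins down the full circle.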
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
A proof that $C_c(X)$ is dense in $L^p$ in Rudin's RCA I'm reading theorem $3.14$ in Rudin's RCA. The assertion is here: For $1\leq p< \infty$, $C_c(X)$ is dense in $L^p(\mu)$. Note that $C_c(X)$ denotes the class of continuous complex functions whose support is compact, and the measure $\mu$ has the properties stated in the Riesz-Markov-Kakutani theorem (theorem $2.14$ in RCA). The proof is here: Define $S$ as the class of all complex, measurable, simple functions on the locally compact Hausdorff space $X$. If $s\in S$, and $\epsilon>0$, there exists a $g\in C_c(X)$ such that $g(x)=s(x)$ except on a set of measure $<\epsilon$, and $|g|\leq \|s\|_\infty$ (Lusin's theorem). Hence $$\|g-s\|_p \leq 2\epsilon^{1/p}\|s\|_\infty~.$$ Since $S$ is dense in $L^p(\mu)$, this completes the proof. Lusin's theorem says: if $f$ is a complex measurable function on $X$, $\mu(A)<\infty$, $f(x)=0$ if $x\in X-A$, and $\epsilon>0$, then there exists a $g\in C_c(X)$ such that $$\mu(\{x: f(x)\neq g(x)\})<\epsilon~.$$ Furthermore, we may arrange it so that $$\sup|g|\leq \sup|f|~$$ I have a question: how does $|g|\leq \|s\|_\infty$ follow? Justify. Lusin's theorem merely says we can take $g$ s.t. $|g|\leq \sup|f|$. About this question, I see Rudin Real & Complex Analysis Thm 3.14, but I can't get a clear answer. I think Rudin assumes $\sup|s|=\|s\|_\infty$; isn't that true? If it is true, we can certainly obtain the last inequality: $$\|g-s\|_p= \left\{\int |g-s|^p d\mu\right\}^{1/p}\leq \|g-s\|_\infty\,\epsilon^{1/p}\leq (\|g\|_\infty+\|s\|_\infty)\,\epsilon^{1/p}\leq 2\|s\|_\infty\,\epsilon^{1/p}.$$
If $s$ is a simple function, say $s= \sum\limits_{k=1}^{n} c_kI_{A_k}$, then $\sup |s|=\|s\|_{\infty}$ provided none of the sets $A_k$ has measure $0$. In this proof we can assume that each $A_k$ has positive measure.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrating a scalar function on a manifold So I have the following action in Minkowski spacetime $(M, \eta)$: $ S[\phi] = \int \eta^{\alpha \beta}(\partial_{\alpha} \phi)(\partial_{\beta} \phi)\sqrt{-\eta}d^2x $ Now, I have the following two charts $(x^{\alpha})$ and $(\xi^{\alpha})$ related as: $ (x^{\alpha}(\xi^0,\xi^1)) = (t,x) = (\frac{1}{a}e^{a\xi^1}\sinh(a\xi^0), \frac{1}{a}e^{a\xi^1}\cosh(a\xi^0)). $ Also useful: $ ds^2=dt^2-dx^2=e^{2a\xi^1}((d\xi^0)^2-(d\xi^1)^2) $ The two charts overlap only in the region of the manifold that corresponds to $x>|t|$, with $\xi^0,\xi^1 \in \mathbb R$. If I recast the action for $(x^{\alpha})$ I get $ S[\phi] = \int_{x>|t|} ((\partial_{t} \phi)^2-(\partial_{x} \phi)^2)d^2x $ and for the other chart I'm getting $ S[\phi] = \int_{\mathbb R^2} ((\partial_{\xi^0} \phi)^2-(\partial_{\xi^1} \phi)^2)d^2\xi. $ I would argue I did something wrong, as the two integrals should yield the same answer: we are integrating $\mathcal{L}$, which doesn't care about diffeos. On the other hand, I'm following Mukhanov "Quantum effects in gravity" (yes, this is the Unruh radiation) and he gets the same two integrals but doesn't specify the integration limits. What am I missing? Thanks!
I ended up finding the answer; I will leave the comment I wrote in my MSc. thesis for someone else to learn from. If some experienced mathematician finds this answer incorrect, let me know. "Here is where the standard abuse of notation shines with malice. The expressions above naively suggest the actions are anything but equal: same integrand but different integration bounds. Nothing could be further from the truth; we have been careless. Remember that the field $\phi: \mathcal{M} \rightarrow \mathbb{R}$, but $(\xi^{\alpha}), (x^{\alpha}) \notin \mathcal{M}$. Hence $\phi$ in the expressions above really should be \begin{equation} \left( \phi \circ \psi^{-1} \right) _{\psi(p)} \quad \mathrm{and} \quad \left( \phi \circ \theta^{-1} \right) _{\theta(p)} \end{equation} respectively, where $p \in U$, and $\psi$ and $\theta$ are the inertial and accelerated charts and $\psi^{-1}$ and $\theta^{-1}$ their inverses, i.e. $\psi : p \in U \mapsto (x^{\alpha})$. Thus it is the field itself that differs between the two integrals. We conclude $S_I = S_A$ if they integrate over the same region in $\mathcal{M}$ (which they do)."
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
"Almost" in the kernel Is there a notion of "almost kernel" of a matrix? Namely, a vector $v$ is "almost" in the kernel of the matrix if it is mapped "close" to 0. And thus the "almost kernel" of a matrix is a subspace (this would need to be verified) of all vectors that are mapped closed to zero. I guess one could define something like this, but I am wondering if such a notion already exists.
You can consider for fixed $\epsilon>0$ the set $$ C:=\{v: \ \|Av\|\le \epsilon \|v\|\}. $$ This has some nice properties: it is a closed cone (although in general not a convex one). Also $v\in C$ implies $-v\in C$. Such constructions are useful in constrained optimization and second-order sufficient optimality conditions. Second-order necessary optimality conditions are proven for directions in some critical cone. The critical cone has to be extended for sufficient conditions to work. There, constructions similar to the above are used. You can look in the book of Bonnans and Shapiro for the theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3290957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
mathematical Induction (algebra) Assume there is an injective function $f:A→B$; why does this mean $\left|A\right|\le\left|B\right|$? Or, in the other direction, why does an injective function $g:B→A$ give $\left|B\right|\le\left|A\right|$? My problem is explained by the following example: take $A=\{1,2,3,4\}$ and $B=\{1,2,3\}$, and for $f:A→B$ take $f=\{(1,1),(2,2),(3,3)\}$. Then $f$ is still injective, but $\left|A\right|=4$ and $\left|B\right|=3$, and in this case $\left|B\right|\le\left|A\right|$...
In old books you can find a difference between a function and an application. * A function $f\colon A\to B$ has as its domain a subset of $A$. * An application $f\colon A \to B$ has as its domain all of $A$. For most people, in general, these mean the same thing. If you want to speak strictly, the intuition behind injectivity is: you can find a "copy" of the domain of $f\colon A \to B$ inside the set $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How the roots are calculated for this function? It says that the roots of $$f(x) = x^{-1} \sin(x^{-1}\log(x))$$ are $$1 > a_{1} > a_{2} > \cdots > 0$$ where $a_{i} = \exp(-b_{i})$ and $b_{i}$ is the unique solution of the equation $b \exp(b) - i\pi = 0$, $1 < b < \infty$. I am wondering how the formulas for $a_{i}$ and $b_{i}$ are derived?
Because the $a_i>0$, we can find values $b_i$ such that $a_i = \exp(-b_i)$. Plug this into $f(x)$ to get \begin{align} f(\exp(-b_i)) &= \exp(-b_i)^{-1}\sin{(\exp(-b_i)^{-1}\log{(\exp{(-b_i)})})}\\ &= \exp(b_i)\sin(-b_i\exp(b_i)) \end{align} Setting this equal to $0$ and using the fact that $\exp(b_i)>0$ we get $$\sin(-b_i\exp(b_i))=0$$ hence, $$-b_i\exp(b_i) = -i\pi$$ for each positive integer $i$. This is equivalent to $$b_i\exp(b_i)-i\pi=0.$$
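One can check this numerically (my own sketch): solve $b\,e^{b}=i\pi$ by bisection and verify that $a_i=e^{-b_i}$ are indeed roots of $f$, with $1 > a_1 > a_2 > \cdots > 0$:

```python
import math

def b_of(i):
    """Solve b * exp(b) = i*pi by bisection (the left side is increasing)."""
    target = i * math.pi
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * math.exp(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def f(x):
    # f(x) = x^{-1} sin(x^{-1} log x)
    return math.sin(math.log(x) / x) / x

roots = [math.exp(-b_of(i)) for i in (1, 2, 3)]
```

At $x=a_i$ the inner argument $x^{-1}\log x$ equals $-b_i e^{b_i} = -i\pi$, so the sine vanishes.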
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\sup \{ \varepsilon x:\, x\in A\}=\varepsilon \sup A$ Since this is an introductory course in real analysis, I am looking for the most simple and direct proof, based mostly on definitions. Let $\varepsilon$ be a positive real number. If $A$ is a non-empty bounded subset of $\mathbb{R}$ and $B = \{\varepsilon x : x \in A\}$ then $\sup(B) = \varepsilon \sup(A)$. Since $A$ is a non-empty bounded subset, it follows that $\sup(A)$ exists as a finite supremum. By definition this means that $x \leq \sup(A)$ for all $x\in A$, and by the second multiplicative property, since $\varepsilon$ is positive, it follows that $\varepsilon x \leq \varepsilon \sup(A)$, which means that $\sup(B) \leq \varepsilon \sup(A)$. The hard part is to show that $\sup(B) \geq \varepsilon \sup(A)$; I can't think of a way to go about that. So maybe a one-shot deal... it's always true that $$\varepsilon x = \varepsilon x \space \space \forall x \in A, \space \varepsilon \gt 0 \in \mathbb{R}$$ It's also true by algebraic manipulation that $(\varepsilon x )= ((\varepsilon)(x))$ by associativity. So it would follow that $\sup(B) = (\varepsilon)(\sup(A))$. To me I don't see any issue with how showing $\varepsilon x = \varepsilon x$ can lead to the conclusion of the statement.
If $S$ is a subset of $\mathbb{R}$ with a minimum, $m$, then $\epsilon m \leqslant \epsilon x$ for all $x \in S$, i.e. the set $\epsilon S$ has the minimum $\epsilon m$. If $S$ is the set of all upper bounds of $A$, then $\epsilon S$ is the set of all upper bounds of $\epsilon A$ (since $\epsilon>0$, $x\le u$ if and only if $\epsilon x\le \epsilon u$), and the result follows because the supremum of a set is, by definition, the minimum of its set of upper bounds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291350", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Let $X \subseteq \Bbb Q^2$. Suppose each continuous function $f:X \to \Bbb R^2$ is bounded. Then $X$ is finite. True or false: Let $X \subseteq \Bbb Q^2$. Suppose each continuous function $f:X \to \Bbb R^2$ is bounded. Then $X$ is finite. Now $X$ will be compact for sure, just by using the distance function. Now what can we do?
Hint: Consider $X = \{(1,0), (1/2,0), (1/3,0), ..., (0,0)\}$. What can we say about the behaviour of $f(x)$ as $x\to (0,0)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Cannot find the p.d.f. with the Jacobian transformation. Suppose $X$ and $Y$ are continuous random variables with joint probability density function $$f(x,y)= \begin{cases} 12xy(1-x)&0<x<1,0<y<1\\ 0&\text{otherwise} \end{cases}. $$ If $Z_1=X^2Y$, determine the probability density function of $Z_1$. Because the p.d.f. involves two random variables, I introduce another transformation $Z_2=X$. So, the domains of $Z_1$ and $Z_2$ are $0<Z_1<1$ and $0<Z_2<1$. Now I find the inverses of $Z_1$ and $Z_2$, i.e. $X$ and $Y$, as below. $$X=Z_2,$$ $$Y=\dfrac{Z_1}{X^2}=\dfrac{Z_1}{Z_2^2}.$$ Next I find the Jacobian determinant, $$ \vert J\vert= \left\vert \begin{matrix} \dfrac{\partial X}{\partial Z_1}&\dfrac{\partial X}{\partial Z_2}\\ \dfrac{\partial Y}{\partial Z_1}&\dfrac{\partial Y}{\partial Z_2}\\ \end{matrix} \right\vert = \left\vert \begin{matrix} 0&1\\ \dfrac{1}{Z_2^2}&-2\dfrac{Z_1}{Z_2^3}\\ \end{matrix} \right\vert = \left\vert-\dfrac{1}{Z_2^2}\right\vert = \dfrac{1}{Z_2^2}. $$ Next, I find the joint p.d.f. of $Z_1$ and $Z_2$. \begin{eqnarray} g(z_1,z_2)&=&f(x,y)\vert J\vert\\ &=&f\left(z_2,\dfrac{z_1}{z_2^2}\right)\vert J\vert\\ &=&12z_2\left(\dfrac{z_1}{z_2^2}\right)\left(1-z_2\right)\dfrac{1}{z_2^2}\\ &=&12\dfrac{z_1}{z_2^3}\left(1-z_2\right). \end{eqnarray} So, the joint p.d.f. of $Z_1$ and $Z_2$ is $$ \begin{cases} 12\dfrac{z_1}{z_2^3}\left(1-z_2\right)&0<z_1<1,0<z_2<1\\ 0&\text{otherwise} \end{cases}. $$ Now I want to find $g(z_1)$, i.e. the marginal p.d.f. of $g(z_1,z_2)$. \begin{eqnarray} g(z_1)&=&\int\limits_{Z_2}g(z_1,z_2)dz_2\\ &=& \int\limits_{0}^1 12\dfrac{z_1}{z_2^3}\left(1-z_2\right) dz_2\\ &=& 12z_1\int\limits_{0}^1 \left(\dfrac{1}{z_2^3}-\dfrac{1}{z_2^2}\right) dz_2\\ &=& 12z_1 \left[-\dfrac{1}{2z_2^2}+\dfrac{1}{z_2}\right]_0^1\\ &=& 12z_1 \left(\dfrac{1}{2}-\infty\right)\\ &=& \infty \end{eqnarray} My questions: why is the p.d.f. I found infinite? Can this problem be solved? What is the mistake in my answer?
$Z_1$ and $Z_2$ cannot take all values between $0$ and $1$ independently. There is an extra inequality they have to satisfy: $Z_1 =X^{2}Y <X^{2}=Z_2^{2}$. So the joint density vanishes if $z_1 >z_2^{2}$. Now see if you get the density of $Z_1$ correctly.
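With the corrected support ($\sqrt{z_1} < z_2 < 1$), the marginal works out to $g(z_1)=6(\sqrt{z_1}-1)^2$; that closed form is my own computation, not from the post above, and a Monte Carlo simulation agrees with it. Note that $f$ factors as $[6x(1-x)]\cdot[2y]$, so $X\sim\mathrm{Beta}(2,2)$ and $Y\sim\mathrm{Beta}(2,1)$ are independent:

```python
import random

random.seed(0)
N = 200_000
t = 0.25
count = 0
for _ in range(N):
    x = random.betavariate(2, 2)      # marginal density 6x(1-x)
    y = random.betavariate(2, 1)      # marginal density 2y
    if x * x * y <= t:
        count += 1
mc = count / N

# CDF of the candidate marginal g(z1) = 6*(sqrt(z1) - 1)**2 on (0, 1)
exact = 6 * t - 8 * t ** 1.5 + 3 * t ** 2
```

The CDF evaluates to $6t - 8t^{3/2} + 3t^2$, which equals $1$ at $t=1$, so the corrected density integrates to one as it should.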
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove $ \frac{\ln^k(1+x)}{k!}=\sum_{n=k}^\infty(-1)^{n-k} \begin{bmatrix} n \\ k \end{bmatrix}\frac{x^n}{n!}$ Prove the following formula involving Stirling numbers of the first kind: $$\frac{\ln^k(1+x)}{k!}=\sum_{n=k}^\infty(-1)^{n-k} \begin{bmatrix} n \\ k \end{bmatrix}\frac{x^n}{n!}$$ where $\begin{bmatrix} n \\ k \end{bmatrix}$ is the Stirling number of the first kind. I use this formula (can be found here) a lot in my solutions, but I have not found any proof of it yet. Any idea on how to prove it or where to find the proof? I am tagging harmonic numbers as it's very related to this formula. Thank you.
As said within the comment section, it is sufficient to visit the Wikipedia page, so there is no need to invoke Harmonic Numbers here. Within the subsection Generating Functions we eventually find the following paragraph: A variety of identities may be derived by manipulating the generating function: \begin{align*} H(z,u)=(1+z)^u&=\sum_{n=0}^\infty\binom{u}{n}z^n\\ &=\sum_{n=0}^\infty\frac{z^n}{n!}\sum_{k=0}^ns(n,k)u^k\\ &=\sum_{k=0}^\infty u^k\sum_{n=k}^\infty\frac{z^n}{n!}s(n,k) \end{align*} Using the equality $$(1+z)^u=e^{u\log(1+z)}=\sum_{k=0}^\infty(\log(1+z))^k\frac{u^k}{k!}$$ it follows that $$\sum_{n=k}^\infty(-1)^{n-k}\begin{bmatrix}n\\k\end{bmatrix}\frac{z^n}{n!}=\frac{(\log(1+z))^k}{k!}$$ The crucial relations used here are \begin{align*} &1.&&(x)_n~=~\sum_{k=0}^n s(n,k)x^k\\ &2.&&s(n,k)~=~(-1)^{n-k}\begin{bmatrix}n\\k\end{bmatrix} \end{align*} These are, as far as I can tell (not being that experienced with Stirling Numbers at all), quite fundamental properties of the Stirling Numbers of the First Kind.
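The identity can also be verified for small $n,k$ with exact rational arithmetic (a sketch of mine; the recurrence $c(n{+}1,j)=n\,c(n,j)+c(n,j{-}1)$ generates the unsigned Stirling numbers of the first kind):

```python
from fractions import Fraction
from math import factorial

N, k = 10, 3

# unsigned Stirling numbers of the first kind via the standard recurrence
c = [[0] * (N + 1) for _ in range(N + 1)]
c[0][0] = 1
for n in range(N):
    for j in range(N + 1):
        c[n + 1][j] = n * c[n][j] + (c[n][j - 1] if j > 0 else 0)

# power series of log(1+x) truncated at x^N
log_series = [Fraction(0)] + [Fraction((-1) ** (n - 1), n) for n in range(1, N + 1)]

def mul(a, b):
    """Multiply two truncated power series."""
    out = [Fraction(0)] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                out[i + j] += ai * bj
    return out

p = [Fraction(1)] + [Fraction(0)] * N
for _ in range(k):
    p = mul(p, log_series)
p = [coef / factorial(k) for coef in p]       # series of log(1+x)^k / k!
```

The coefficient of $x^n$ in $\log^k(1+x)/k!$ then matches $(-1)^{n-k}\binom{}{}\!\begin{bmatrix}n\\k\end{bmatrix}/n!$ term by term.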
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate $\phi * \phi$ Show that $$\phi * \phi(x)=\begin{cases} >0 , \operatorname{if} x\in(0,4\pi)\\ = 0, \operatorname{otherwise} \end{cases}$$ where $\phi: \mathbb R \to \mathbb R$ and $\phi(t)=\begin{cases} 1-\cos{(t)} , \operatorname{if} t\in[0,2\pi]\\ 0, \operatorname{otherwise} \end{cases}$ My idea: $$\phi * \phi(x)=\int_{\mathbb R} \phi(x-y)\phi(y)dy$$ And we have $$\phi(x-y)=[1-\cos{(x-y)}]\chi_{[0,2\pi]}(x-y)$$ and note that $$\chi_{[0,2\pi]}(x-y)=\chi_{[-x,-x+2\pi]}(-y)=\chi_{[x-2\pi,x]}(y) \quad\quad(*)$$ Now using (*): $$\int_{\mathbb R} \phi(x-y)\phi(y)dy=\int_{\mathbb R} [1-\cos{(x-y)}]\chi_{[x-2\pi,x]}(y)[1-\cos{(y)}]\chi_{[0,2\pi]}(y)dy$$ But now how do I go about calculating $$\chi_{[x-2\pi,x]}(y)\chi_{[0,2\pi]}(y)$$ If $x \notin (0,4\pi)$, then $[x-2\pi,x]\cap[0,2\pi]=\varnothing$, but I am not sure my bounds are right. Any ideas?
Both factors vanish outside $[0,2\pi]$: the integrand is nonzero only when $y\in[0,2\pi]$ and $x-y\in[0,2\pi]$, i.e. for $y\in[\max(0,x-2\pi),\min(x,2\pi)]$. This interval is empty unless $0<x<4\pi$, which already gives $\phi*\phi(x)=0$ outside $(0,4\pi)$. For $0\le x\le 2\pi$ we get: $$\phi * \phi(x)=\int_{\mathbb R} \phi(x-y)\phi(y)dy=\int_0^{x}\left(1-\cos(x-y)\right)\left(1-\cos y\right)\,dy=$$ $$=\int_0^{x}\left(1-\cos y-\cos(x-y)+\cos y\,\cos(x-y)\right)\,dy$$ Now $\int_0^x dy=x$, $\;\int_0^x\cos y\,dy=\int_0^x\cos(x-y)\,dy=\sin x$, and $$\int_0^{x}\cos y\,\cos(x-y)\,dy=\frac12\int_0^{x}\left(\cos x+\cos(x-2y)\right)\,dy=\frac12\left(x\cos x+\sin x\right),$$ so $$\phi*\phi(x)=x+\frac x2\cos x-\frac32\sin x=:h(x).$$ Since $\phi(2\pi-t)=\phi(t)$, the convolution is symmetric about $2\pi$, hence $\phi*\phi(x)=h(4\pi-x)$ for $2\pi\le x\le 4\pi$. It remains to show $h>0$ on $(0,2\pi]$. We have $h(0)=0$ and $$h'(x)=1-\cos x-\frac x2\sin x.$$ For $x\in[\pi,2\pi)$ both terms $1-\cos x$ and $-\frac x2\sin x$ are $\ge 0$ and not both zero. For $x\in(0,\pi)$, writing $1-\cos x=2\sin^2\frac x2$ and $\sin x=2\sin\frac x2\cos\frac x2$, the inequality $h'(x)>0$ reduces to $\tan\frac x2>\frac x2$, which holds for $\frac x2\in\left(0,\frac\pi2\right)$. Hence $h$ is strictly increasing on $(0,2\pi)$, so $h(x)>0$ there, and $h(2\pi)=3\pi>0$. By symmetry, $\phi*\phi>0$ on all of $(0,4\pi)$, as claimed.
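A numerical cross-check (my own sketch): compare direct quadrature of the convolution with the piecewise closed form obtained by integrating over $y\in[\max(0,x-2\pi),\min(x,2\pi)]$:

```python
import math

def phi(t):
    return 1 - math.cos(t) if 0 <= t <= 2 * math.pi else 0.0

def conv(x, n=20_000):
    """Trapezoidal quadrature of (phi * phi)(x) over y in [0, 2*pi]."""
    h = 2 * math.pi / n
    s = 0.0
    for i in range(n + 1):
        y = i * h
        w = 0.5 if i in (0, n) else 1.0
        s += w * phi(x - y) * phi(y)
    return s * h

def closed_form(x):
    if not 0 < x < 4 * math.pi:
        return 0.0
    if x > 2 * math.pi:
        x = 4 * math.pi - x                   # the convolution is symmetric about 2*pi
    return x + (x / 2) * math.cos(x) - 1.5 * math.sin(x)

checks = [(x, conv(x), closed_form(x)) for x in (0.5, 2.0, math.pi, 5.0, 9.0)]
```

The quadrature agrees with the formula inside $(0,4\pi)$, and both vanish outside, consistent with the problem statement.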
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291830", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find values of $x$ so that the matrix is invertible Find values of $x$ so that the matrix is invertible $$A=\begin{pmatrix} x & 0 & x \\ x & 2 & 1 \\ 2x & 0 & 2x \\ \end{pmatrix}$$ I know that a matrix is invertible if and only if its determinant is not $0$, but I don't know how to find the $x$ values. I feel this is a trick question and that this matrix will not be invertible no matter which value $x$ takes, but I don't know how to prove that either.
Note that for every $x$ we have $R_3=2R_1$, so $\operatorname{rank}(A)<3$ and hence $\det(A)=0$. Therefore $A$ is not invertible for any value of $x$.
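A quick numerical confirmation (my own sketch) that $\det A = 0$ for every $x$:

```python
import random

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

random.seed(0)
dets = []
for _ in range(100):
    x = random.uniform(-10, 10)
    A = [[x, 0, x], [x, 2, 1], [2 * x, 0, 2 * x]]
    dets.append(det3(A))
```

Symbolically, the expansion gives $x(4x) - 0 + x(-4x) = 0$, matching the row-dependence argument.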
{ "language": "en", "url": "https://math.stackexchange.com/questions/3291925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Rewriting infinity as a limit to infinity (in terms of Fourier series) Put informally: When writing down the complex Fourier series of a function, is it proper to write $$\displaystyle\sum_{n=-\infty}^\infty \tag*{(1)}$$ or $$\displaystyle\lim_{k\to\infty}\displaystyle\sum_{n=-k}^k? \tag*{(2)}$$ From what I've seen, I can tell that the representation $(1)$ has been used more often. However, I encountered a case when $(1)$ is not valid: $$\pi\cot \pi z=\displaystyle\lim_{k\to\infty}\displaystyle\sum_{n=-k}^k \dfrac{1}{z+n},$$ but $$\pi\cot\pi z\ne\displaystyle\sum_{n=-\infty}^\infty \dfrac{1}{z+n},$$ as it diverges.
The symbolic form $$\sum_{n=-\infty}^\infty \dfrac{1}{z+n}$$ means $$ \lim_{a,b\to\infty}\sum_{n=-a}^b \dfrac{1}{z+n}, $$ that is, the limit must not depend on the path to infinity that the pair $(a,b)$ takes in the grid $\Bbb N\times\Bbb N$. In the given example this fails: the one-sided series are harmonic-type sums that diverge on their own, and only the pairing of opposite terms in the symmetric sums produces the cancellation needed for convergence. As an example of an unbalanced path to infinity, take $(a_k,b_k)=(k,2^k)$.
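The dependence on the path is easy to see numerically at $z=1/2$, where $\pi\cot(\pi z)=0$ (plain Python; the cutoffs are arbitrary choices): the balanced sum is tiny, while an unbalanced one drifts off like a harmonic tail.

```python
def partial_sum(z, a, b):
    # sum of 1/(z + n) for n = -a, ..., b
    return sum(1.0 / (z + n) for n in range(-a, b + 1))

z = 0.5                                      # pi*cot(pi*z) = 0 at z = 1/2
symmetric = partial_sum(z, 10_000, 10_000)   # balanced path: a = b
unbalanced = partial_sum(z, 100, 10_000)     # unbalanced path: b >> a
```

The symmetric sum is about $10^{-4}$ (it telescopes to $1/(k+\tfrac12)$), while the unbalanced one is of order $\log(b/a)$.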
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving the diophantine equation $x^3+y^3 = z^6+3$ I've the following problem: Show that the congruence $x^3+y^3 \equiv z^6+3\pmod{7}$ has no solutions. Hence find all integer solutions if any to $x^3+y^3-z^6-3 = 0.$ We can rearrange the first equation to $z^6 \equiv (x^3+y^3-3) \mod{7}$. But $z^6\equiv 1\mod{7}$ so this is only possible when $x^3+y^3=4$ which no $x,y \in \mathbb{Z}$ satisfy. Now I don't know if that helps at all solve the second bit. We have $z^6 = x^3+y^3-3$. I know that $x^3+y^3 = (x+y)(x^2-xy+y^2)$.
By Fermat's little theorem, cubes mod $7$ lie in $\{0,1,6\}$ and sixth powers lie in $\{0,1\}$. Hence $x^3+y^3 \bmod 7$ lies in $\{0,1,2,5,6\}$, while $z^6+3 \bmod 7$ lies in $\{3,4\}$; these sets are disjoint, so the congruence has no solutions. (Note that the case $7\mid z$, where $z^6\equiv 0$, must be considered alongside $z^6\equiv 1$.) Any integer solution of $x^3+y^3-z^6-3=0$ would in particular be a solution of the congruence mod $7$; therefore the equation has no integer solutions.
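The residue computation behind this — cubes mod $7$ land in $\{0,1,6\}$, sixth powers in $\{0,1\}$ — can be checked exhaustively (a small brute-force sketch):

```python
cubes = {x**3 % 7 for x in range(7)}               # residues of cubes mod 7
sixths = {z**6 % 7 for z in range(7)}              # residues of sixth powers mod 7
lhs = {(a + b) % 7 for a in cubes for b in cubes}  # possible x^3 + y^3 mod 7
rhs = {(s + 3) % 7 for s in sixths}                # possible z^6 + 3 mod 7
```

The two residue sets are disjoint, so no integers can satisfy the congruence.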
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Use Ito lemma to solve some SDE Take B$_t$ to be a Brownian motion and Z$_t$ = e$^{\int_{0}^{t}g(s,w)dB_s-\frac{1}{2}\int_{0}^{t}g^2(s,w)ds}$, how can we apply Ito lemma to get SDE of Z$_t$? I set Z$_t$ = f(t, B$_s$), then by Ito lemma, it follows that dZ$_t$ = f$_t$dt + f$_{x}$dB$_s$ + $\frac{1}{2}$f$_{xx}$dt, but in our case, how to obtain f$_t$, f$_x$, f$_{xx}$ due to the existence of Ito integral in the formula?
You should write $Z_t=e^{Y_t}$ with $Y_t=\int_0^t g(s,\omega)\,dB_s-\frac12\int_0^t g^2(s,\omega)\,ds$, and notice that you already know the SDE of $Y_t$, since it is given in integral form: $$dY_t=g(t,\omega)\,dB_t-\tfrac12 g^2(t,\omega)\,dt.$$ Then apply Itô's lemma to the function $f(y)=e^y$: $$dZ_t=Z_t\,dY_t+\tfrac12 Z_t\,(dY_t)^2 =Z_t\left(g\,dB_t-\tfrac12 g^2\,dt\right)+\tfrac12 Z_t\,g^2\,dt =Z_t\,g(t,\omega)\,dB_t,$$ using $(dY_t)^2=g^2\,dt$. So $Z_t$ satisfies $dZ_t=Z_t\,g(t,\omega)\,dB_t$ with $Z_0=1$.
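The resulting SDE $dZ_t=Z_t\,g\,dB_t$ has no drift term, so for constant $g=\sigma$ the process $Z_t=e^{\sigma B_t-\sigma^2 t/2}$ should have constant mean $1$. A rough Monte Carlo check (step count, horizon, $\sigma$ and seed are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
sigma, T, n_steps, n_paths = 0.5, 1.0, 200, 20_000
dt = T / n_steps

total = 0.0
for _ in range(n_paths):
    b = 0.0
    for _ in range(n_steps):
        b += random.gauss(0.0, math.sqrt(dt))        # Brownian increment
    # Z_T = exp(sigma * B_T - sigma**2 * T / 2) for constant g = sigma
    total += math.exp(sigma * b - 0.5 * sigma * sigma * T)

mean_Z = total / n_paths
```

The sample mean of $Z_T$ comes out close to $1$, consistent with the driftless SDE.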
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convexity implies $ \frac{\varphi(c)-\varphi(c-h)}{h} \leq \frac{\varphi(c+h)-\varphi(c)}{h}$?, $h>0 $ Suppose $\varphi $ is a convex function on the real line. I wonder if the following is true? For $h>0 $ $\frac{\varphi(c)-\varphi(c-h)}{h} \leq \frac{\varphi(c+h)-\varphi(c)}{h}$ This seems like a trivial fact that should follow from convexity if one considers the intuitive meaning of a convex function as having increasing slopes for consecutive points on its graph. But I got stuck on which numbers to pick and then use in the convex property. Help would be appreciated!
Maybe you could just use Taylor expansions around $h=0$, assuming $\varphi$ is smooth. This would give $$\text{lhs}=\frac{\varphi(c)-\varphi(c-h)}{h}=\varphi '(c)-\frac{1}{2} h \varphi ''(c)+\frac{1}{6} h^2 \varphi ^{(3)}(c)-\frac{1}{24} h^3 \varphi ^{(4)}(c)+O\left(h^4\right)$$ $$\text{rhs}= \frac{\varphi(c+h)-\varphi(c)}{h}=\varphi '(c)+\frac{1}{2} h \varphi ''(c)+\frac{1}{6} h^2 \varphi ^{(3)}(c)+\frac{1}{24} h^3 \varphi ^{(4)}(c)+O\left(h^4\right)$$ $$\text{rhs}-\text{lhs}=h \varphi ''(c)+\frac{1}{12} h^3 \varphi ^{(4)}(c)+O\left(h^4\right)$$ which is $\ge 0$ for small $h>0$, since $\varphi''\ge 0$ for a smooth convex function. For a general convex $\varphi$ (no differentiability needed) the inequality follows directly from the definition: since $c=\frac12(c-h)+\frac12(c+h)$, convexity gives $$\varphi(c)\le\tfrac12\,\varphi(c-h)+\tfrac12\,\varphi(c+h),$$ which rearranges to $\varphi(c)-\varphi(c-h)\le\varphi(c+h)-\varphi(c)$; dividing by $h>0$ gives the claim.
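The inequality can also be spot-checked numerically, including on non-smooth convex functions like $|x|$ where a Taylor argument does not literally apply (the sample functions and points are arbitrary choices):

```python
def slopes_ok(phi, c, h, tol=1e-12):
    # left secant slope at c must not exceed the right secant slope
    left = (phi(c) - phi(c - h)) / h
    right = (phi(c + h) - phi(c)) / h
    return left <= right + tol

convex_fns = [abs, lambda x: x * x, lambda x: max(x, 2 * x - 1)]
results = [slopes_ok(f, c, h)
           for f in convex_fns
           for c in (-1.5, 0.0, 0.7)
           for h in (0.1, 1.0, 3.0)]
```

Every combination of convex function, center $c$, and step $h>0$ satisfies the slope inequality.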
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Merging two functions So I have two fuctions like this :- $f(x) = (x/5)^2$ and $g(x) = \sqrt{(x/5)}$ and a third fuction as a combination of both $ h(x) = \Biggl[ { }^{ x\; \lt \; 5 : \; f(x) }_{ x \;\ge \; 5: \; g(x)}\Biggr] $ When I put $x =5$ in the first function I get $f(5) = 1$ and in the second one I get $g(5) = 1\;$. That means h(x) is countinous at $x =5$ is there any way I can converge $h(x)$ into a single function??
$$h(x) = \left ( \frac 1 {10} \left (x - 5 - \sqrt {\strut (x - 5)^2} \right ) + 1 \right )^2 \sqrt {\frac 1 {10} \left (x - 5 + \sqrt {\strut (x - 5)^2} \right ) + 1}$$
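A quick check that the single formula reproduces the piecewise definition — the key identity being $\sqrt{(x-5)^2}=|x-5|$ (sample points are arbitrary):

```python
import math

def h_piecewise(x):
    return (x / 5) ** 2 if x < 5 else math.sqrt(x / 5)

def h_single(x):
    a = abs(x - 5)                               # equals sqrt((x - 5)**2)
    left = ((x - 5 - a) / 10 + 1) ** 2           # (x/5)**2 for x < 5, else 1
    right = math.sqrt((x - 5 + a) / 10 + 1)      # sqrt(x/5) for x >= 5, else 1
    return left * right

diffs = [abs(h_piecewise(x) - h_single(x)) for x in (-3.0, 0.0, 2.0, 5.0, 7.0, 20.0)]
```

For $x<5$ the second factor is $1$ and the first is $(x/5)^2$; for $x\ge 5$ the roles swap, so the product agrees with $h$ everywhere.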
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Can anyone give a real life example to illustrate why does the principal axis that has the maximal variance retain the most information? One job in PCA is to maximize variance, because the principal axis that has the maximal variance retain the most information. Why is that? How to understand this in a easy or concrete (such as a real life example) way?
This could be quite difficult to see in a single real-life example, since it is a theoretical statement about projections. When you do PCA you project the data into a low-dimensional space. The projection inevitably loses information, so you want to retain as much as possible. In PCA, the eigenvalues of the covariance matrix measure the variance retained along each principal axis, and this variance controls the reconstruction error: you want a representation in the low-dimensional space rich enough that mapping back to the high-dimensional space loses as little information as possible. At the extreme, if you project onto a direction with almost no variance, you are essentially collapsing the data to a point — how would you reconstruct the original data from that? The more variance the projection preserves, the better you can reconstruct the data. PCA can be seen as maximizing this retained variance, so the more variance, the better the fit. If the fraction of variance retained is near $1$, the fit is very good and the data essentially lie in the subspace you are projecting onto. You have here a good lecture on this: https://www.stat.cmu.edu/~cshalizi/350/lectures/10/lecture-10.pdf
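The link between retained variance and reconstruction error can be made concrete (an illustrative sketch of my own; the synthetic data and the use of numpy are assumptions, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic 2-D data whose variance is concentrated along the first axis
X = rng.normal(size=(1000, 2)) * np.array([3.0, 0.5])
Xc = X - X.mean(axis=0)

# PCA via the eigendecomposition of the sample covariance matrix
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
top = eigvecs[:, -1:]                      # principal axis of maximal variance

# project onto the top component, then map back to the original space
X_rec = (Xc @ top) @ top.T
mse = np.mean(np.sum((Xc - X_rec) ** 2, axis=1))

# the reconstruction error equals the variance in the discarded direction
discarded_var = float(eigvals[:-1].sum())
```

Keeping the maximal-variance axis makes the mean squared reconstruction error exactly the (small) variance left in the discarded direction; keeping the other axis would instead throw away the large eigenvalue.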
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Which method to solve this differential equation? $$x'=(x+t+1)^2$$ I need to solve this differential equation but do not know how. We cannot use separation of variables so my only guess here would be to use an integrating factor but how would I find that? EDIT: the official answer is $x(t) = −t − 1 + \tan(t + C) $ EDIT2: subsituting $y = x + t +1$ yielding $y'+1 = y^2$ and integrating $$\int \frac{dy}{y^{2}+1} = \int dt $$ $$ \arctan(y)=t+C$$ $$ \tan(t+C) = y $$ $$ x+t+1=\tan(t+C)$$ $$ x(t) = \tan(t+C) -t-1$$ Which is the answer
Substitute $$x+t+1=u$$ then $$x'+1=u'$$
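The analytic solution can be checked against a direct numerical integration (a plain-Python RK4 sketch; the initial condition $x(0)=0$, which forces $C=\pi/4$, is an arbitrary choice, and we stop before the blow-up at $t+C=\pi/2$):

```python
import math

def f(t, x):
    # right-hand side of the ODE x' = (x + t + 1)**2
    return (x + t + 1.0) ** 2

def rk4(t0, x0, t1, n=20_000):
    # classical fourth-order Runge-Kutta integration
    h = (t1 - t0) / n
    t, x = t0, x0
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return x

C = math.pi / 4                  # x(0) = tan(C) - 1 = 0 fixes C = pi/4
t_end = 0.5                      # stays below the blow-up at t + C = pi/2
numeric = rk4(0.0, 0.0, t_end)
analytic = math.tan(t_end + C) - t_end - 1.0
```

The numerical trajectory matches $x(t)=\tan(t+C)-t-1$ to high accuracy on the interval of existence.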
{ "language": "en", "url": "https://math.stackexchange.com/questions/3292875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sort of positive matrix Hy i want to find a block real or complex matrix of the form $\begin{pmatrix}C&B&C\\B&B&A\\C&A&A\end{pmatrix} $ such that it will be positive semi-definite but not such $A=B=C.$ All blocks are of same size and hermitian. I couldn't find such a matrix in dimension three so that's why i am asking Thanks
In the case that $A, B, C \in \mathbb{R}$ (scalar blocks), you can use the fact that a real symmetric matrix is positive semi-definite if and only if all of its principal minors are nonnegative (for semi-definiteness, the leading minors alone do not suffice). There are two cases to consider:

1. $A = 0$. The $1\times 1$ principal minors force $A, B, C \geq 0$. The principal minor on rows and columns $\{1,3\}$ is $CA - C^2 = -C^2 \geq 0$, so $C = 0$; then the top-left $2\times 2$ minor gives $BC - B^2 = -B^2 \geq 0$, so $B = 0$. Hence $A = 0$ forces $B = C = 0$.

2. $A > 0$; by rescaling we may assume $A = 1$ (this covers all cases with $A$ nonzero, since $A<0$ is excluded by the $1\times 1$ minors). The bottom-right $2 \times 2$ minor gives $B - 1 \geq 0$, and the top-left one gives $BC - B^2 \geq 0$; since $B \geq 1$, this implies $C \geq B$. Finally, the full determinant must be nonnegative: $$\det M=C(B - 1) - B(B - C) + C (B - BC) = -BC^2 + (3B - 1)C - B^2 \geq 0.$$ As a quadratic in $C$ this has negative leading coefficient $-B$ and discriminant $$(3B-1)^2 - 4B^3 = -4(B-1)^2\left(B - \tfrac14\right),$$ which is strictly negative for $B > 1$; so for $B > 1$ we get $\det M < 0$ for every $C$, a contradiction. For $B = 1$ the quadratic becomes $-(C-1)^2$, which is nonnegative only at $C = 1$. So $A = 1$ forces $B = C = 1$, i.e. $A = B = C$.

So, as @loup blanc observed, the only triples making $M$ positive semi-definite are $A = B = C \geq 0$.
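In the scalar case the conclusion can be probed by brute force (a rough plain-Python sketch; the grid is an arbitrary choice): scanning triples $(A,B,C)$ and testing positive semi-definiteness via principal minors leaves only $A=B=C\ge 0$.

```python
def det2(a, b, c, d):
    return a * d - b * c

def is_psd(A, B, C):
    # M = [[C, B, C], [B, B, A], [C, A, A]]; a real symmetric matrix is
    # positive semi-definite iff ALL principal minors are nonnegative
    m = [[C, B, C], [B, B, A], [C, A, A]]
    if min(m[0][0], m[1][1], m[2][2]) < 0:
        return False
    for i, j in ((0, 1), (0, 2), (1, 2)):
        if det2(m[i][i], m[i][j], m[j][i], m[j][j]) < 0:
            return False
    det3 = (m[0][0] * det2(m[1][1], m[1][2], m[2][1], m[2][2])
            - m[0][1] * det2(m[1][0], m[1][2], m[2][0], m[2][2])
            + m[0][2] * det2(m[1][0], m[1][1], m[2][0], m[2][1]))
    return det3 >= 0

grid = [k * 0.25 for k in range(-8, 9)]       # quarter-integer grid in [-2, 2]
psd_triples = [(a, b, c) for a in grid for b in grid for c in grid
               if is_psd(a, b, c)]
```

On this grid the only surviving triples are the nine of the form $(t,t,t)$ with $t\ge 0$.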
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Probability of getting exactly one pair of aces, knowing there's one ace in the draw? We draw 5 cards in a 52 cards game. What is the probability of getting exactly one pair of aces, knowing there's one ace in the draw ? I know that the answer is $\frac{\binom{4}{2} \binom{48}{3}}{\binom{52}{5}-\binom{48}{5}}$, but I don't get why $\frac{\binom{3}{1}\binom{48}{3}}{\binom{51}{4}}$ is not the good answer. I am following the same reasonning as for the question: "What is the probability of getting exactly one pair of aces?", i.e $\frac{\binom{4}{2}\binom{48}{3}}{\binom{52}{5}}$. I can't get why there it is not the right answer.
It looks like you are assuming you know which ace appears — as if one particular ace were set aside and the remaining four cards were dealt from the other $51$. That is not the same conditioning event: "the hand contains at least one ace" does not single out a specific ace or a specific position. The ace can be any of the four aces and can be drawn at any of the five draws, so the correct sample space is the set of $\binom{52}{5}-\binom{48}{5}$ hands containing at least one ace, not the $\binom{51}{4}$ completions of one fixed ace.
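A simulation makes the distinction concrete: conditioning on "the hand contains at least one ace" reproduces the first formula and not the second (a rough Monte Carlo sketch; trial count and seed are arbitrary):

```python
import random
from math import comb

random.seed(1)
deck = list(range(52))                   # cards 0..3 are the aces
trials, with_ace, exactly_two = 200_000, 0, 0
for _ in range(trials):
    hand = random.sample(deck, 5)
    aces = sum(1 for card in hand if card < 4)
    if aces >= 1:                        # condition: at least one ace
        with_ace += 1
        if aces == 2:                    # event: exactly one pair of aces
            exactly_two += 1

estimate = exactly_two / with_ace
exact = comb(4, 2) * comb(48, 3) / (comb(52, 5) - comb(48, 5))
wrong = comb(3, 1) * comb(48, 3) / comb(51, 4)
```

The conditional frequency lands on the first (correct) formula, well away from the second one.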
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do we know $\sin$ and $\cos$ are the only solutions to $y'' = -y$? According to Wikipedia, one way of defining the sine and cosine functions is as the solutions to the differential equation $y'' = -y$. How do we know that sin and cos (and linear combinations of them, to include $y=e^{ix}$) are the only solutions to this equation? It seems like a clever combination of inverse polynomials or exponentials could do the same.
Starting from $y'' = -y$, we can add $y$ to both sides to get $$y'' + y = 0.$$ This is a homogeneous second-order linear differential equation with constant coefficients, whose characteristic polynomial is $$r^2 + 1 = 0,$$ i.e. $$r = \pm i,$$ which are distinct roots. The general solution can therefore be written as $$y(x) = c_1\sin(x) + c_2\cos(x).$$ The word general is the answer to your question: by the existence–uniqueness theorem for linear ODEs, a solution of $y''+y=0$ is completely determined by the two numbers $y(0)$ and $y'(0)$, so the solution space is exactly two-dimensional and $\{\sin,\cos\}$ is a basis for it. Concretely, given any solution $y$, set $g(x)=y(x)-y'(0)\sin(x)-y(0)\cos(x)$; then $g''=-g$ with $g(0)=g'(0)=0$, and the "energy" $E=g^2+(g')^2$ satisfies $E'=2gg'+2g'g''=2gg'-2g'g=0$, so $E\equiv E(0)=0$ and hence $g\equiv 0$. So no clever combination of inverse polynomials or exponentials can produce a solution that is not already of this form. This also shows that $\sin(x)$ and $\cos(x)$ are far from being the only solutions — every linear combination is one: $$ y_1 = \cos(x)$$ $$ y_2 = \sin(x)$$ $$ y_3 = \sin(x) + \cos(x)$$ $$ y_4 = 2\sin(x) + 3\cos(x)$$ $$ y_5 = \sin(x) + i\cos(x)$$ $$ y_6 = 10\sin(x) - 11i\cos(x)$$ among many others; in particular $e^{ix}=\cos(x)+i\sin(x)$ is the case $c_1=i$, $c_2=1$. Principle of superposition: if $y_1$ and $y_2$ are any two solutions of the homogeneous equation $y′′ + p(x)y′ + q(x)y = 0$, then any function of the form $y = c_1 y_1 + c_2 y_2$ is also a solution, for any pair of constants $c_1$ and $c_2$. Remark: the converse needs care — a pair of solutions $y_1, y_2$ generates the general solution in the form $c_1y_1+c_2y_2$ only if $y_1$ and $y_2$ are linearly independent, as $\sin$ and $\cos$ are.
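As a numerical illustration of the two-dimensional solution space (a plain-Python RK4 sketch; the initial data $y(0)=2$, $y'(0)=-1$ are an arbitrary choice): integrating $y''=-y$ directly reproduces $2\cos x-\sin x$.

```python
import math

def rk4_oscillator(y0, v0, t_end, n=10_000):
    # integrate y'' = -y as the first-order system (y, v)' = (v, -y)
    h = t_end / n
    y, v = y0, v0
    for _ in range(n):
        k1y, k1v = v, -y
        k2y, k2v = v + h * k1v / 2, -(y + h * k1y / 2)
        k3y, k3v = v + h * k2v / 2, -(y + h * k2y / 2)
        k4y, k4v = v + h * k3v, -(y + h * k3y)
        y += h * (k1y + 2 * k2y + 2 * k3y + k4y) / 6
        v += h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6
    return y

t = 1.0
numeric = rk4_oscillator(2.0, -1.0, t)
analytic = 2.0 * math.cos(t) - math.sin(t)   # c2*cos + c1*sin with c2=2, c1=-1
```

Whatever initial data one picks, the integrated solution coincides with $y'(0)\sin x + y(0)\cos x$.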
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 9, "answer_id": 2 }
Isomorphism, Homeomorphism and the necessity of proving individual invariants I was wondering if there was a way of avoiding having to prove individual invariants of isomorphism and/or homeomorphism are, in fact, invariants. Consider homeomorphisms. We have to prove that compactness is a topological property. The wikipedia page on topological properties states that "Informally, a topological property is a property of the space that can be expressed using open sets." Is there a way to formalize this? Since a homeomorphism preserve the structure of open sets, it seems any property "formulated in terms of open sets" must immediately be invariant. The situation is similar with isomorphisms. Consider a vector space isomorphism. We need to prove (fairly easily) that isomorphic vector spaces have the same number (cardinality) of dimensions. It seems we should be able to say something like "any property of vector space phrased in terms of the vector space structure is preserved under isomorphism".
The fact that something is preserved under isomorphism (in whatever category) is the formal way of saying that it is "an inherent property of your structure". This can be made precise: a property formulated purely in terms of the structure itself (the open sets, or the vector-space operations), with no reference to the particular nature of the underlying points, is automatically transported along an isomorphism, because the isomorphism carries the structure of one object exactly onto the structure of the other. What still requires proof in each case is that a given property (e.g. compactness) really is expressible in those terms — but once it is so expressed, invariance is immediate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Could you please recommend some open courses on Real Analysis, mainly about Lebesgue measure and Lebesgue Integral? I am a Chinese student and will be enrolled in the course about Lebesgue integral. I want some open courses to help preview the course. A little bit content about functional analysis is okay. Thank you!
You may find this useful for real analysis, and this for functional analysis. Moreover, you can find more video courses here about some other topics in mathematics. If Chinese is OK for you, this course from Shanghai Jiao Tong University is accessible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Confused with a system of equations with three variables that has infinitely many solutions I'm studying High School Algebra and it had this question: Solve the system by equations: \begin{align*} x + y - z &= \,0 \\ 2x + 4y - 2z &= 6 \\ 3x + 6y - 3z &= \,9 \end{align*} The solution was: infinitely many solutions (x, 3, z) where x = z − 3; y = 3; z is any real number I've spent hours on the problem. The textbook just gave a vague explanation and I can't seem to get how it works. Can someone please intuitively explain how this is?
Hint: Dividing the second equation by $2$ and the third by $3$ we get $$x+y-z=0$$ $$x+2y-z=3$$ $$x+2y-z=3$$ so the second and the third equation are the same. Multiplying the first equation by $-1$ and adding it to the second, we get $y=3$. Plugging this into the first equation we get $$x-z=-3,$$ i.e. $z=x+3$. So we obtain the solutions $$(x,\;3,\;x+3),\qquad x\in\mathbb R.$$
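One can confirm that every triple $(x,\,3,\,x+3)$ satisfies all three original equations, while other triples generally do not (a quick plain-Python check over arbitrary sample values):

```python
def satisfies(x, y, z):
    # all three equations of the original system
    return (x + y - z == 0
            and 2 * x + 4 * y - 2 * z == 6
            and 3 * x + 6 * y - 3 * z == 9)

results = [satisfies(x, 3, x + 3) for x in (-10, -1, 0, 2, 57)]
```

Each sample value of $x$ yields a valid solution, whereas e.g. $(1,2,3)$ satisfies the first equation but fails the second.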
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Algebra and Combinatorics books for Mathematical Olympiads Could you kindly point out to me, some good contest-preparation book's to develop theory and problem solving skills in Algebra? It would be good if the book is less of theory and more of problems. I have "Principles and techniques in Combinatorics", but I would like some more difficult books, explaining theory well(this books theory is meh...). The same for Algebra(polynomials, functional Equations, inequalities, etc.) Thanks for your help!!!
'Problem Solving Strategies' by Arthur Engel is a classic for math olympiad training at a fairly advanced level. Contains hundreds of problems that take one line to state, two lines to solve with basic means and are still difficult enough to humble olympiad veterans.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability of hitting the target for the nth time on the mth throw A boy is throwing stones at a target. Probability of him hitting the target is $\frac{1}{2}$ . Then the probability of him hitting the target for the 5th time in 10 throws is ? The answer given is 1/2 . Now ,the reason given is that no matter how many throws he takes or no matter how many times he hits (be it 10 times in 10 throws or 6 times in 7 throws or all initial 5 throws hitting the target out of 10 throws ) since the probability of him hitting the target is always 1/2 therefore the net probability will always be 1/2. But while putting it into the binomial distribution formula the answer coming is different.
In probability theory this is described by the negative binomial distribution: it answers the question of what the probability is that the $n$th success occurs exactly on the $m$th attempt. (Conventions for the negative binomial vary; I have adapted the notation to match your question.) For example, you could ask: in a best-of-$3$ rock–paper–scissors match, what are the odds that the match is over after exactly $2$ rounds? In your case the question is: what are the odds that he hits the target for the $n$th time on the $m$th throw? The formula is as follows $$ \binom{m - 1}{n - 1}p^n(1-p)^{m-n} $$ The factor $\binom{m-1}{n-1}$ arises because the final throw must be a hit (otherwise the $n$th success would have occurred earlier), and the remaining $n-1$ hits can be placed anywhere among the first $m-1$ throws. So in your example with $p=\frac{1}{2}$, the formula for any $m$ and $n$ is $$ \binom{m - 1}{n - 1}\dfrac{1}{2^m} $$ so for $n=5,m=10$ we find $$ \binom{9}{4}\dfrac{1}{2^{10}} = \dfrac{63}{512} \approx 12.3\% $$
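Both the closed form and the event it describes can be checked by simulation (a rough sketch; trial count and seed are arbitrary):

```python
import random
from math import comb

random.seed(2)
p, n, m = 0.5, 5, 10
exact = comb(m - 1, n - 1) * p**n * (1 - p)**(m - n)   # 63/512

trials, hits = 200_000, 0
for _ in range(trials):
    successes, throws = 0, 0
    while successes < n:        # throw until the n-th hit occurs
        throws += 1
        if random.random() < p:
            successes += 1
    if throws == m:             # n-th hit happened exactly on throw m
        hits += 1

estimate = hits / trials
```

The simulated frequency matches $63/512\approx 12.3\%$, not the book's claimed $1/2$.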
{ "language": "en", "url": "https://math.stackexchange.com/questions/3293907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Calculate the value of $I(9)/I(3)$ when $I(m)=\int_{0}^{\pi}\ln\left(1-2m\cos(x)+m^2\right)\,dx$ We are given that $$I(m)=\int_{0}^{\pi} \ln\left(1-2m\cos (x)+m^2\right)\,dx.$$ I could see that there weren't any standard techniques to calculate this integral directly so I concluded that there must be some kind of reduction formula to be derived. What I did was to apply the King property in definite integrals ie. $$\int_{a}^{b}f(x)\,dx=\int_{a}^{b}f(a+b-x)\,dx.$$ This gave me $$I(m)=\int_{0}^{\pi}\ln\left(1+2m\cos(x)+m^2\right)\,dx,$$ then I added the two expressions for $I(m)$ to get $$2I(m,x)=\int_{0}^{\pi}\ln\left(1-2m^2\cos(2x)+m^4\right)\,dx,$$ i.e.,$$2I(m,x)=I(m^2,2x)$$. So I thought that this should give us the value of $I(9)/I(3)=2$ and that was the correct answer, too; but what bothers me is that the second expression has $2x$ instead of $x$. So shouldn't that cause a change of limits and thus a problem?
\begin{align*} 2I(m)&=\int_{0}^{\pi} \ln\left(1-2m\cos (x)+m^2\right)+\ln\left(1+2m\cos (x)+m^2\right)\,dx\\ &=\int_{0}^{\pi}\ln\left(1-2m^2\cos 2x+m^4\right)\, dx\\ &=\frac{1}{2}\int_{0}^{\color{red}{2\pi}}\ln\left(1-2m^2\cos t+m^4\right)\, dt && (\text{let }t=2x) \end{align*} Now use the fact that if $f(2a-x)=f(x)$, then $$\int_0^{2a}f(x) \, dx =2\int_0^af(x) \, dx$$ to get \begin{align*} 2I_m & =\frac{1}{2}\int_{0}^{2\pi}\ln\left(1-2m^2\cos t+m^4\right)\, dt\\ & = \int_{0}^{\color{red}{\pi}}\ln\left(1-2m^2\cos t+m^4\right)\, dt\\ &=I(m^2). \end{align*} Thus $$\frac{I(m^2)}{I(m)}=2.$$
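The result $I(9)/I(3)=2$ can be confirmed by numerical quadrature (a plain-Python Simpson's-rule sketch; the panel count is an arbitrary choice):

```python
import math

def I(m, n=2000):
    # Simpson's rule (n must be even) for
    # the integral of ln(1 - 2m cos x + m^2) over [0, pi]
    h = math.pi / n
    def g(x):
        return math.log(1.0 - 2.0 * m * math.cos(x) + m * m)
    s = g(0.0) + g(math.pi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(i * h)
    return s * h / 3.0

ratio = I(9) / I(3)
```

The computed ratio is $2$ to high precision; in fact $I(m)=2\pi\ln m$ for $|m|>1$, consistent with $I(m^2)=2I(m)$.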
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Unital commutative Banach algebra $A$, $A/\operatorname{radical}(A)$ has no quasinilpotent elements I am trying to show that for a unital commutative Banach algebra $A$, $A/\operatorname{radical}(A)$ has no quasinilpotent elements (where $\operatorname{rad}(A)=\{x\in A: x \,\text{quasinilpotent}\}$). I know that $\operatorname{rad}(A)$ is a closed ideal, and that the quotient map is continuous, so I'd want to do something like this: Let x not be quasinilpotent, so $\lim_{n\rightarrow \infty} \|x^n\|^{1/n} =\lambda \neq 0$. Let $\pi(x)=\overline{x}$, where $\pi(x):A\rightarrow A/\operatorname{rad}(A)$ is the canonical quotient map. Suppose that $\|\overline{x}^n\|^{1/n}=0$. Then it's spectrum $\sigma(\overline{x})=0$, so $\overline{x}$ is not invertible in $A/\operatorname{rad}(A)$, and thus generates a proper ideal in $A/\operatorname{rad}(A)$. So then $\pi^{-1}(\overline{x})$ generates a proper ideal in A containing $\operatorname{rad}(A)$. From here, if $\operatorname{rad}(A)$ is maximal I think I'd have a contradiction, but I don't know if that's true. If not, does anyone have another strategy I could try?
Assume $x\in A$ is such that $x+\operatorname{rad}(A)$ is quasi-nilpotent in $A/\operatorname{rad}(A)$, and fix $\varepsilon>0$. Since the quotient norm of $x^n+\operatorname{rad}(A)$ is $\inf_{r\in\operatorname{rad}(A)}\|x^n-r\|$ and $\|(x+\operatorname{rad}(A))^n\|^{1/n}\to 0$, there are $n\in\mathbb N$ and $r\in\operatorname{rad}(A)$ with $\|x^n-r\|\le\varepsilon^n$. As $r$ is quasi-nilpotent, $\|r^j\|^{1/j}\to 0$, so $C:=\sup_{j\ge 0}\|r^j\|/\varepsilon^{nj}$ is finite (only finitely many $j$ have $\|r^j\|>\varepsilon^{nj}$; here $r^0:=1$, so $C\ge 1$). Since $A$ is commutative, the binomial theorem gives $$x^{nm}=\bigl((x^n-r)+r\bigr)^m=\sum_{k=0}^m\binom{m}{k}(x^n-r)^k r^{m-k},$$ hence $$\|x^{nm}\|\le\sum_{k=0}^m\binom{m}{k}\varepsilon^{nk}\,C\varepsilon^{n(m-k)}=C\,2^m\varepsilon^{nm}.$$ Taking $(nm)$-th roots and letting $m\to\infty$ yields $\lim_k\|x^k\|^{1/k}\le 2^{1/n}\varepsilon\le 2\varepsilon$ (the limit exists by the spectral radius formula). As $\varepsilon>0$ was arbitrary, the spectral radius of $x$ is $0$; thus $x$ is quasi-nilpotent, so $x\in\operatorname{rad}(A)$ and $x+\operatorname{rad}(A)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find function $f'(x)$ such that its domain $D'=\mathbb{R}$, $f'(x)=f(x)$ $\forall x\in D$ and $f'(x)$ is continuous. Let $f(x)=\frac{^3\sqrt{x^3+3x^2+7}}{x+2}$. I was asked to find $f'(x)$ such that $a)$ the domain $D'$ of $f'(x)$ is $\mathbb{R}$, $b)$ $f(x)=f'(x)$ $\forall x\in D$, with $D$ being domain of $f(x).$ $c)$ $f'(x)$ is continuous in $\mathbb{R}$. I tried to define a function that $f'(x)$ that equals $f(x)$ when $x\neq-2$ and equals $c$ when $x=-2$, with $c$ being the number that makes the function continuous; this is, such that $\lim_{x\to-2} f(x) = c$. But I was enable to find the solution to this limit. Perhaps this is the way to go and I just didn't find a way to solve the limit, and perhaps my whole approach is wrong as a concept, I don't know. I was hoping someone could help me out with this problem.
We have $$\lim_{x\to-2^+} \frac{\sqrt[3]{x^3+3x^2+7}}{x+2} = \frac{\sqrt[3]{11}}{0^+} = +\infty$$ so $f$ cannot be extended to a continuous function on $\mathbb{R} \to \mathbb{R}$.
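The blow-up can be seen numerically by evaluating $f$ at points approaching $-2$ from the right (sample points are arbitrary):

```python
def f(x):
    # numerator tends to 11**(1/3) as x -> -2, denominator to 0+
    return (x**3 + 3 * x**2 + 7) ** (1 / 3) / (x + 2)

values = [f(-2 + 10.0 ** -k) for k in (1, 3, 6)]
```

The values grow without bound as $x\to-2^+$, so no finite value $c$ can make the extension continuous.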
{ "language": "en", "url": "https://math.stackexchange.com/questions/3294231", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }