Find the solution of the $x^2+2x+3 \equiv 0 \pmod{198}$ Find the solution of $x^2+2x+3 \equiv 0 \pmod{198}$. I have no idea how to approach this problem; the only small hint I have is that we are going to consider $x^2+2x+3 \equiv 0 \pmod{12}$.
Here's a more-or-less generalizable, manual way of finding all of the solutions: First, as P. Vanchinathan does, change variable to $a := x + 1$, which transforms the equation into one with zero linear term: $$a^2 + 2 \equiv 0 \pmod {198} .$$ (This step is optional, but reduces the amount of later work.) Now, we exploit the prime factorization $198 = 2 \cdot 3^2 \cdot 11$. Reducing modulo $2$ gives $a^2 \equiv 0 \pmod {2}$, so any solution $a$ to the above display equation has the form $a = 2b$. Substituting into the previous display equation gives $$4 b^2 + 2 \equiv 0 \pmod {198},$$ which is equivalent to $$2 b^2 + 1 \equiv 0 \pmod {99}.$$ Now reducing modulo $11$ (and multiplying by $6$ to produce a monic polynomial on the l.h.s.) leaves $$b^2 + 6 \equiv 0 \pmod {11} .$$ The l.h.s. factors as $(b + 4)(b - 4)$, so any solution $b$ to the equation modulo $99$ has the form $$b = 11 c \pm 4 .$$ Substituting in the above equation modulo $99$ and proceeding as before (in particular, multiplying both sides of the resulting equation by $7$, which is coprime to $9$ and hence produces an equivalent equation) gives $$c^2 \pm 4 c + 3 \equiv 0 \pmod 9 .$$ We may factor the left-hand side as $(c \pm 1)(c \pm 3)$. Since the prime factorization of $9$ is $9 = 3^2$, the equation in $c$ has a solution iff either factor is $\equiv 0 \pmod 9$ or both of the above factors are congruent to $0$ modulo $3$. The latter is impossible since the difference of those factors modulo $3$ is $\pm 1$, so the solutions are exactly $c = \mp 1, \mp 3$. Substituting these four solutions successively into our equations for $b, a, x$ gives all of the solutions to the original equation: $$x \equiv 13, 57, 139, 183 \pmod{198},$$ which agrees with the solution lioness99a produced using W.A.
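These four residues are easy to confirm by brute force; here is a quick Python check (pure verification, not part of the argument above):

```python
# Which x in 0..197 satisfy x^2 + 2x + 3 ≡ 0 (mod 198)?
solutions = [x for x in range(198) if (x*x + 2*x + 3) % 198 == 0]
print(solutions)  # [13, 57, 139, 183]
```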
{ "language": "en", "url": "https://math.stackexchange.com/questions/2227709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Jacobson radical of group rings $4 b)$ (i) Any hints? (ii) Well $R$ is not semi-simple since $|\mathbb{Z}/3|=3=0 \in F_3$ by the converse of Maschke's theorem. (iii) The surjective $\mathbb{C}$-algebra map $\phi:R \to M_2(\mathbb{C})\times\mathbb{C}\times\mathbb{C}: (a_{i,j}) \mapsto \left(\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix},a_{33},a_{44}\right)$ has nilpotent kernel and semi-simple target. Hence the kernel is the Jacobson radical, i.e. $$\begin{bmatrix} 0 & 0 & * & * \\ 0 & 0 & * & * \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$ Sample solution that may help with iii)
(i) If you know Maschke's theorem (as you hint in (ii)) then you already know the answer to this, since the order of $D_8$ is $8$. (ii) Yes. And it would not even be too hard to exhibit some nonzero nilpotent element to prove that the Jacobson radical is nonzero. (iii) You can use exactly the same logic as at this similar question. I'm not sure about your attempt: a surjective homomorphism $R\to M_2(\mathbb C)\times \mathbb C\times \mathbb C$ would have a six-dimensional image and a $6$-dimensional kernel (not $4$-dimensional). But part of the logic at the other post confirms that you can find a homomorphism onto $M_2(\mathbb C)\times M_2(\mathbb C)$, which has an $8$-dimensional image and the $4$-dimensional kernel you describe. Since the quotient has Jacobson radical zero, that is exactly the Jacobson radical of $R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2227921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counting integers in a sequence with a least prime factor = $p$ Let $p > 2$ be a prime. It is very easy to count the integers in a sequence that are divisible by $p$. Let $m \ge 0, n > 0$ be integers. The count of $x$ where $m < x \le (m+n)$ and $p | x$ is at most $1 + \left\lfloor\frac{n}{p}\right\rfloor$. For example, if $m=6, n=8, p=7$, there are $2$ integers: $6 < \{ 7, 14\} \le 14$. Let us assume that in the sequence described by $m,n$, there are $w$ integers where $\text{lpf}(x) \ge p$, where lpf is the least prime factor. It seems to me that at most $1 + \left\lfloor\frac{n-p}{p}\right\rfloor$ of the $w$ are divisible by $p$, so that this is an upper bound on the count of integers where $\text{lpf}(x) = p$. The reason for the $-p$ is that we can assume that if there are $2$ or more multiples of $p$ in the sequence, at least one can be ignored since it would be divisible by $2$ and would not be included in $w$. Is there a flaw in my thinking? Is there a better upper bound?
By sieving I find $$\displaystyle w =\sum_{l\in A_{p}} \mu(l)\left(\left\lfloor \frac{n}{pl} \right\rfloor-\left\lfloor \frac{m-1}{pl} \right\rfloor\right)$$ where $\ \mu\ $ is the Möbius function and $\ A_{p}\ $ are the (squarefree) integers whose largest prime factor is $< p$. So if $2^p$ is much smaller than $n-m$: $$\displaystyle w =\sum_{l\in A_{p}} \mu(l) \frac{n-m+1}{pl}+ \mathcal{O}(2|A_p|)= \frac{n-m+1}{p}\prod_{q \text{ prime}, q < p} \left(1-\frac{1}{q}\right)+\mathcal{O}(2^{\pi(p)}) $$
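The sieve identity is easy to test numerically, reading the floor expressions as counting multiples in $[m, n]$. A small Python sketch (the helper names are my own; for the squarefree $l$ built from primes below $p$, $\mu(l)=(-1)^r$ with $r$ the number of prime factors):

```python
from itertools import combinations
from math import prod
from sympy import factorint, primerange

def lpf(x):
    return min(factorint(x))  # least prime factor of x

p, m, n = 7, 50, 5000
primes = list(primerange(2, p))  # primes below p
# Pairs (l, mu(l)) for every squarefree l whose prime factors are all < p.
A = [(1, 1)] + [(prod(c), (-1)**r)
                for r in range(1, len(primes) + 1)
                for c in combinations(primes, r)]

w_sieve = sum(mu * (n // (p*l) - (m - 1) // (p*l)) for l, mu in A)
w_brute = sum(1 for x in range(m, n + 1) if x % p == 0 and lpf(x) == p)
print(w_sieve, w_brute)  # both count x in [m, n] with least prime factor p
```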
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An infinite summation involving binomial coefficient The question is to find out the value of $$\sum_{n=k}^{\infty} P^n \binom nk (1/2)^n$$ I tried to break down the binomial coefficient and bring it into the form of some known sequence, but could not get anything out of it. Please help me in this regard. Thanks.
$$\begin{align} \sum_{n=k}^{\infty}P^n{n\choose k}\left({1\over 2}\right)^n &= {P^k\over 2^kk!}\sum_{n=k}^\infty n(n-1)\dots(n-k+1)\left({P\over 2}\right)^{n-k} \\&= {P^k\over 2^kk!}f^{(k)}({P\over 2}) \end{align}$$ where $f : x\mapsto \sum_{n=0}^\infty x^n = {1\over 1-x}$ (valid for $|x|<1$, so we need $|P/2|<1$). Thus $$\sum_{n=k}^{\infty}P^n{n\choose k}\left({1\over 2}\right)^n = {P^k\over 2^kk!}{k!\over (1-P/2)^{k+1}} = {P^k\over 2^k(1-P/2)^{k+1}}$$
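The closed form is easy to sanity-check with sympy; a small sketch with arbitrarily chosen $P$ and $k$ (truncating at 200 terms is more than enough when $|P/2|<1$):

```python
from sympy import Rational, binomial

P, k = Rational(1, 3), 4
partial = sum(P**n * binomial(n, k) * Rational(1, 2)**n for n in range(k, 200))
closed = P**k / (2**k * (1 - P/2)**(k + 1))
print(float(partial), float(closed))  # the two values agree closely
```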
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What topics does Elliptic curve cryptography lie under? I know this is a weird question to ask. Basically, for my Math Internal Assessment I want to explore Elliptic curve cryptography. Due to the lack of time, I'm unable to properly study it, and I'm supposed to hand in a proposal very soon. Therefore I was wondering: what topics does Elliptic curve cryptography come under? I think the main topic is functions? For example: if my topic was calculating the surface area of an egg, I guess the main topics it would lie under would be calculus & maybe algebra
Put it under Number Theory. I would recommend Kenneth Rosen's book, as it is a pleasure to read even as an undergrad, and its section on ECC is written by Larry Washington, an expert on the subject matter. To see that this stuff is real and in use, from the linux command line type openssl s_client -host website -port 443 and you can see what cipher suite the website is using. Hit CTRL-C to break. For example openssl s_client -host google.com -port 443 will contain in its extensive output ECDHE-RSA-AES128-GCM-SHA256 meaning an Elliptic Curve Diffie-Hellman (Ephemeral) key exchange is being used so that the site and the user can have a shared secret.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $x$ be the A.M. between $y$ and $z$... If $x$ be the AM between $y$ and $z$, $y$ be the GM between $z$ and $x$, then $x$, $y$, $z$ are in: $1$). A.P $2$). G.P $3$). H.P $4$). None. My Attempt: $x$ is the AM between $y$ and $z$: $$x=\dfrac {y+z}{2}$$ $$2x=y+z$$ $y$ is the G.M between $z$ and $x$: $$y=\sqrt {zx}$$ $$y^2=xz$$ What should I do next?
HINT: We have $$y+z=2x=2\cdot\dfrac{y^2}z\implies0=z^2+yz-2y^2=(z-y)(z+2y)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Coordinate free proof for $a\times (b\times c) = b(a\cdot c) - c(a\cdot b)$ The vector triple product is defined as $\mathbf{a}\times (\mathbf{b}\times \mathbf{c})$. This is often re-written in the following way: \begin{align*}\mathbf{a}\times (\mathbf{b}\times \mathbf{c}) = \mathbf{b}(\mathbf{a}\cdot\mathbf{c}) - \mathbf{c}(\mathbf{a}\cdot\mathbf{b})\end{align*} This is a very useful identity for integrating over vector fields and so on (usually in physics). Every proof I have encountered splits the vectors into components first. This is understandable, because the cross product is purely a three dimensional construct. However, I'm curious as to whether or not there is a coordinate free proof of this identity. Although I don't know much differential geometry, I feel that tensors and so on may form a suitable framework for a coordinate free proof.
Adapted from my previous proof of $\nabla \times (\vec{A} \times \vec{B})$: \begin{align} \vec a \times (\vec b \times \vec c) & = a_l \hat{e}_l \times (b_i c_j \hat{e}_k \epsilon_{ijk}) \\ & = a_l b_i c_j \epsilon_{ijk} \underbrace{ (\hat{e}_l \times \hat{e}_k)}_{(\hat{e}_l \times \hat{e}_k) = \hat{e}_m \epsilon_{lkm} } \\ & = a_l b_i c_j \hat{e}_m \underbrace{\epsilon_{ijk} \epsilon_{mlk}}_{\text{contracted epsilon identity}} \\ & = a_l b_i c_j \hat{e}_m \underbrace{(\delta_{im} \delta_{jl} - \delta_{il} \delta_{jm})}_{\text{They sift other subscripts}} \\ & = a_j (b_i c_j \hat{e}_i) - a_i (b_i c_j \hat{e}_j) \\ & = (b_i \hat{e}_i) (a_j c_j) - (c_j \hat{e}_j) (a_i b_i) \\ & = \vec b (\vec a\cdot\vec c) - \vec c(\vec a\cdot \vec b) \end{align}
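Whichever proof one prefers, the identity itself can be confirmed symbolically; here is a short sympy check (pure verification, not a coordinate-free argument):

```python
import sympy as sp

a = sp.Matrix(sp.symbols('a1:4'))
b = sp.Matrix(sp.symbols('b1:4'))
c = sp.Matrix(sp.symbols('c1:4'))

lhs = a.cross(b.cross(c))
rhs = b * a.dot(c) - c * a.dot(b)
print((lhs - rhs).expand())  # Matrix([[0], [0], [0]])
```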
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 4 }
Definition of SQUARE ML (㎖) This is a very simple thing, I suppose, but I'm having a hard time finding out: what is the meaning of the Square ML (㎖) symbol? Square MiLe, Square MilliLiter, Square Maximum Likelihood, or something else entirely? I can find these symbols belonging to the physical symbol set in the unicode set starting from: 0x3371 13169 SQUARE HPA ㍱ and going to: 0x33DD 13277 SQUARE WB ㏝ http://www.ssec.wisc.edu/~tomw/java/unicode.html Along with ML are: 0x3395 13205 SQUARE MU L ㎕ 0x3396 13206 SQUARE ML ㎖ 0x3397 13207 SQUARE DL ㎗ 0x3398 13208 SQUARE KL ㎘ And yes, this seems to be more of a physics topic, not so much mathematics...
Disclaimer: This post will use a lot of unicode characters that may not display properly in some environments. The characters in the CJK Compatibility block are mostly symbols for units used in Japanese, with some crossover into other languages like Chinese and Korean. Most of them have a name in unicode with "SQUARE" (as in SQUARE ML) because the character is made from multiple symbols designed to fit the square space that a character would fit in for proper Chinese/Japanese/Korean typesetting. For example, compare the spacing of "c""m" vs. "cm" in this string: 三cm三cm因. ㎖ means milliliter, and this meaning is shown on the Japanese and Korean wikipedia pages for "liter". You can also see it referenced in math courses. For example, a middle school math student is asking about the relationship between ㎤ (a single character for $\mathrm{cm}^3$) and ㎖ here. There are many other units and related symbols in the CJK Compatibility block, including ㏑ for $\ln$, ㏒ for $\log$, ㎯ ($\mathrm{rad}/\mathrm{s}^2$) for the SI measure of angular acceleration, ㌫ for "percent", and ㌦ for "dollar(s)", etc. The last two are written in Japanese phonetic characters.
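You can also query these official names programmatically; a quick Python sketch using the standard unicodedata module:

```python
import unicodedata

for ch in '㎖㎕㎗㎘㎤㏑㏒':
    print(f'U+{ord(ch):04X}', unicodedata.name(ch))
# U+3396 SQUARE ML, U+3395 SQUARE MU L, U+3397 SQUARE DL,
# U+3398 SQUARE KL, U+33A4 SQUARE CM CUBED, U+33D1 SQUARE LN, U+33D2 SQUARE LOG
```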
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\alpha$ algebraic over field $F$ implies $[F(\alpha):F]=\text{degree of minimal polynomial over $\alpha$}$ Let $\alpha$ be some element algebraic over a field $F$. Then $F(\alpha)$ is isomorphic to $F[x]/\langle m_{\alpha,F}\rangle$, where $m_{\alpha,F}$ is the minimal polynomial with root $\alpha$ over $F$. Moreover, $[F(\alpha) : F] = \deg(m_{\alpha,F})$. The first part of this theorem appears to be clear, but what about the second part (after the word "moreover")? Why does the index of $F(\alpha)$ over $F$ equal the degree of the minimal polynomial in $F[x]$ with root $\alpha$? I think this should be almost obvious, but perhaps I'm lacking some theory.
It seems the other answer and comment are about $F[\alpha]$ being an $F$-vector space of dimension $\deg(p)$ ($F[\alpha]$ is the smallest ring containing $F$ and $\alpha$, i.e. $F[\alpha] = \{ \sum_{n =0}^d c_n \alpha^n, c_n \in F\}$). For showing that $F[\alpha] = F(\alpha)$ you need to prove that $\varphi : F[x]/(p(x)) \to F[\alpha],\ \ \varphi(f(x)) = f(\alpha)$ is an isomorphism of rings ($\varphi$ is clearly surjective, and it is injective by definition of the minimal polynomial). Hence $F[\alpha]$ is a field, so $F[\alpha] = F(\alpha)$ and $[F(\alpha):F]= \deg(p)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof the Levi-Civita symbol is a tensor A tensor of rank $n$ has components $T_{ij\cdots k}$ (with $n$ indices) with respect to each basis $\{\mathbf{e}_i\}$ or coordinate system $\{x_i\}$, and satisfies the following rule of change of basis: $$ T_{ij\cdots k}' = R_{ip}R_{jq}\cdots R_{kr}T_{pq\cdots r}. $$ Define the Levi-Civita symbol as: $$ \varepsilon_{ijk} = \begin{cases} +1 & ijk \text{ is even permutation}\\ -1 & ijk\text{ is odd permutation}\\ 0 & \text{otherwise (ie. repeated suffices)} \end{cases} $$ Show that $ \varepsilon_{ijk} $ is a rank 3 tensor. I actually have a proof but I can't understand it! Can someone help me out? $$ \varepsilon_{ijk}' = R_{ip}R_{jq}R_{kr}\varepsilon_{pqr} = (\det R)\varepsilon_{ijk} = \varepsilon_{ijk}, $$ This shows that $\varepsilon_{ijk}$ obeys the transformation law, so sure... but I don't follow what happened after the second equals sign. EDIT: Does this only hold for Cartesian coordinate systems, because then $R$ would be an orthogonal matrix with det $1$ or $-1$?
I met this problem today and this is what I am trying: $$ \epsilon_{ijk}=\det(e_i\ e_j \ e_k) $$ Let $A$ be an orthogonal transformation; then: $$ \begin{aligned} \epsilon'_{ijk}&=\det(Ae_i\ Ae_j \ Ae_k)\\ &=\det\big(A\,(e_i\ e_j\ e_k)\big)\\ &=\det(A)\cdot \det(e_i\ e_j\ e_k)\\ &=\det(A)\cdot \epsilon_{ijk}\\ &=\pm \epsilon_{ijk} \end{aligned} $$ I think this can show that the Levi-Civita is a tensor (or pseudotensor).
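The transformation law $R_{ip}R_{jq}R_{kr}\varepsilon_{pqr} = (\det R)\,\varepsilon_{ijk}$ can also be checked numerically; a small numpy sketch with a random orthogonal matrix:

```python
import numpy as np

# Build the rank-3 Levi-Civita symbol
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
for i, j, k in [(0, 2, 1), (2, 1, 0), (1, 0, 2)]:
    eps[i, j, k] = -1.0  # odd permutations

R, _ = np.linalg.qr(np.random.randn(3, 3))  # random orthogonal matrix
eps_prime = np.einsum('ip,jq,kr,pqr->ijk', R, R, R, eps)
print(np.allclose(eps_prime, np.linalg.det(R) * eps))  # True, det(R) = ±1
```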
{ "language": "en", "url": "https://math.stackexchange.com/questions/2228996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How does the Hahn-Banach theorem imply that the dual space is non-trivial? Why does the Hahn-Banach theorem imply that the dual space is not empty ($X^*\neq\emptyset$)? Is there an important corollary which I've missed?
The dual space is always non-empty, as it contains the zero functional. The Hahn-Banach theorem implies that if $X \neq \{0\}$, then also $X^* \neq \{0\}$. Choose a non-zero vector $a \in X$. Denote the subspace $Y := \mathrm{span} (a) \subseteq X$ and the bounded functional $\varphi \in Y^*$ defined by $\varphi(a) = 1$. By Hahn-Banach, it can be extended to some bounded functional on the whole space $\overline{\varphi} \in X^*$, which is non-zero (since $\overline{\varphi}(a) \neq 0$), hence $X^* \neq \{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
What is meant by "The Levi-Civita Connection is an $\mathfrak{so}(n)$-Valued 1-form"? This is a statement that I've seen around, and I thought it's time that I understand it. I know that the LCC is locally given by a matrix $ \omega = (\omega_i^j)$ of 1-forms in a preferred frame $e_i$, so that $$ \nabla f_ie_i = df_i \otimes e_i + f_i \omega_i^j \otimes e_j $$ for any local smooth functions $f_i$. Then, $\omega$ is a matrix representing a linear map on each tangent space. Now, "$\nabla$ is an $\mathfrak{so}(n)$-valued 1-form" suggests to me that each $(\nabla v)|_p$ is in $\mathfrak{so}(T_pM)$, but I know that this is only true for $v$ a Killing field. But perhaps I'm getting confused between $\nabla$ as an object and its representation $\omega$ in a particular frame. So, my next guess is that it means that, in some choices of local frame $e_i$, the matrix $\omega_i^j(v)$ is skew-symmetric for any $v$. An orthonormal frame is the probable condition. But this would mean, in particular, that each $\nabla e_i$ is skew-symmetric, since if $v = e_i$, there are no nonconstant components of $v$ to worry about, and '$\nabla = \omega$'. Then again, we'd be at the statement that all the $e_i$ are (local) Killing fields, which is just rubbish - on a generic Riemannian manifold, there are no nontrivial local Killing fields, if I remember right. So, what does "$\omega$ is $\mathfrak{so}$-valued" mean? Any help would be massively appreciated.
The Levi-Civita connection is a particular case of an Ehresmann connection defined on the bundle of frames $FM$; such a connection is defined by a $1$-form $\omega$ on the tangent bundle of $FM$ which is $\mathfrak{gl}(n,\mathbb{R})$-valued. A Levi-Civita connection means that $\omega$ takes its values in $\mathfrak{so}(n,\mathbb{R})\subset \mathfrak{gl}(n,\mathbb{R})$. It means also that for every $x$, $(\omega_i^j(x))$ defines an element of $\mathfrak{so}(n,\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
The Bent Washer Problem -- divide a shape into 2 pieces of the same volume. Fans of the Ham sandwich theorem know that any set of points can be divided by a plane into two equal halves. Consider instead a 3-D shape that must be divided into 2 equal pieces by a single cut. A sufficiently bent spring washer or keyring cannot be divided into 2 pieces by a plane. But it's possible to make a simpler cut that works -- a partial plane cut. Is that the simplest shape that cannot be split into 2 pieces by a plane? Is there a simple shape that cannot be split into 2 equal pieces by a simple cut?
The Ham Sandwich Theorem says that any three measurable subsets of $\mathbb{R}^3$ can be simultaneously cut into two equal (with respect to measure) pieces by a single plane. In particular, we can choose two of our sets to be empty. So any measurable subset of $\mathbb{R}^3$ can be cut in half by a plane. EDIT: I originally misunderstood. You're interested in when the cut pieces are connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{r \to \infty} \int_0^\frac{\pi}{2} e^{-r\sin\theta} d\theta$ Evaluating $$\lim_{r \to \infty} \int_0^\frac{\pi}{2} e^{-r\sin\theta} d\theta$$ First of all, I need to show that $e^{-r\sin\theta}$ converges uniformly to a function $F(r)$. Then I can easily take the limit inside, since $e^{-r\sin\theta}$ is continuous. How can I show it is uniformly convergent? My approach was: $$|e^{-r\sin\theta}|\le |e^{-\delta \sin\theta}| \ \text{for} \ \delta \le r$$ Then if $\int_0^\frac{\pi}{2}e^{-\delta \sin\theta}\,d\theta$ exists, in the sense that it converges, the LHS is uniformly convergent, so I can take the limit inside and arrive at the solution. However I couldn't do the integration; any hints?
Note that for $0 \le \theta \le \frac{\pi}{2}$, we have $\sin \theta \ge \frac{2}{\pi}\theta$, so: $$0<\int_0^\frac{\pi}{2} e^{-r\sin\theta} d\theta \le \int_0^\frac{\pi}{2} e^{-(2r/\pi) \theta} d\theta \le \int_0^\infty e^{-(2r/\pi) \theta} d\theta=\frac{\pi}{2r}\to0$$
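Numerically, both the decay and the bound are easy to see; a short scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

for r in [1, 10, 100, 1000]:
    val, _ = quad(lambda t: np.exp(-r * np.sin(t)), 0, np.pi / 2)
    print(r, val, np.pi / (2 * r))  # the integral stays below pi/(2r)
```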
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Whether the stochastic process has continuous sample paths I am considering the following question and want to convince myself that the stochastic process $X$ has continuous sample paths. I hope someone could give me some hints or references, many thanks! Suppose that $\{B_t\}_{t\ge 0}$ is a standard Brownian motion and a stochastic process $\{X_t\}_{t\ge 0}$ is defined as $$dX_t=\mathbf 1_{\{X_t\le a\}}dt+dB_t, X_0=x_0 \,\,a.s.$$ By intuition, I think $\{X_t\}_{t\ge 0}$ has continuous sample paths, and it seems that the key is to prove that for each $T\ge 0$ and for almost every $\omega\in \Omega$, the function $$F(T)=\int_0^T \mathbf 1_{\{X_t(\omega)\le a\}}\,dt$$ is continuous
Let $F(T)$ be defined in the way you've defined it. Consider $\epsilon > 0$; we have that $$\begin{align} F(T+\epsilon) &= \int_0^{T+\epsilon}\boldsymbol{1}_{\{X_t(\omega)\le a\}}dt\\ &= \int_0^{T}\boldsymbol{1}_{\{X_t(\omega)\le a\}}dt + \int_{T}^{T+\epsilon}\boldsymbol{1}_{\{X_t(\omega)\le a\}}dt \\ &\le F(T) + \int_{T}^{T+\epsilon}1\cdot dt = F(T) + \epsilon. \end{align}$$ Similarly, we can show that $F(T+\epsilon) \in [F(T),F(T)+\epsilon]$ and $F(T-\epsilon) \in [F(T)-\epsilon,F(T)].$ Hence, we have shown that $|T'-T|\le \epsilon \Rightarrow |F(T')-F(T)|\le \epsilon$. Hence, for each fixed $\omega$, $F$ is continuous in $T$ on $[0,\infty)$. And since $B_t(\omega)$ is a.s. continuous, we have that $X_t$ is the sum of a continuous function and an a.s. continuous function, hence a.s. continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that in the natural numbers "if $a = b$ then $a + c = b + d$ if and only if $c = d$" using the Peano Axioms? I want to prove, using the Peano axioms, that in the natural numbers, if $a = b$ then $a + c = b + d$ if and only if $c = d$, preferably by induction.
Once you assume $a = b$, then showing $c = d \rightarrow a + c = b + d$ is just a matter of using the $=$-rules. The more interesting part is showing $a + c = b + d \rightarrow c = d$. Since $a = b$, that reduces to showing that $b + c = b + d \rightarrow c = d$, and to show that, you need to use induction over $b$. For the base case, this requires proving that $0 + a = a$ for any $a$, which you can do by induction itself. But assuming you have that: $0 + c = 0 + d \rightarrow c = d$ Induction step: For this, you'll need that (AdditionLeftRecursion) $s(a) + b = s(a + b)$ for any $a$ and $b$, which again can be proven fairly easily by induction itself. But once you have that: Assume (inductive hypothesis) $b + c = b + d \rightarrow c =d$. Then: $s(b) + c = s(b) + d \Rightarrow$ (AdditionLeftRecursion) $s(b + c) = s(b + d)\Rightarrow$ (Peano 4) $b + s(c) = b + s(d) \Rightarrow$ (Inductive Hypothesis) $s(c) = s(d) \Rightarrow$ (Peano 2) $c = d$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Gauss prime divides exactly one integer prime in $\mathbb{Z}[i]$ I am asked to show that a Gauss prime $\pi$ divides exactly one integer prime in $\mathbb{Z}[i]$. To show existence, I have tried to use the fact that the product $\pi \overline{\pi}$ is equal to either an integer prime $p$ or the square of integer prime $p^2$. If $\pi$ satisfies the first case, then the statement is immediate. How about when $\pi \overline{\pi}=p^2$? Also, how do we show that $\pi$ divides no other integer primes (i.e. uniqueness)?
This works in any ring of integers $\mathcal{O}_K$ of a number field $K$ : Take a proper ideal $I$ of $\mathcal{O}_K$ (here $I = \pi \mathcal{O}_K$). Note that $J =I \cap \mathbb{Z}$ is a proper ideal of $\mathbb{Z}$, thus $J = n\mathbb{Z}$ for some $n \in \mathbb{N}_{> 1}$. If $p \in I$ then $p \in I \cap \mathbb{Z}= n \mathbb{Z}$ so $n | p$. If $p$ is a prime number, it means that $n = p$. Finally, if $I$ was a prime ideal then $\mathcal{O}_K/I$ is a finite integral domain (and hence a finite field). Its sub-integral domain (subfield) generated by $1$ is $\mathbb{Z}/n\mathbb{Z}$, therefore $n$ is prime.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Trying to prove the real-valued function has a limit. I am trying to show that the function $f:\mathbb{R}^2\to\mathbb{R}$ by the formula $$f(\mathbf{x}) = \dfrac{x_1x_2^2}{x_1^4+x_2^2} \textrm{ if }\mathbf{x} \neq \mathbf{0}$$ $$f(\mathbf{0}) = 0$$ has a limit of $0$ as $\mathbf{x}\to\mathbf{0}$. My book says I have to start by letting $\epsilon>0$ and find $\delta>0$ such that $||\mathbf{x}||<\delta$ implies $|f(\mathbf{x})|<\epsilon$. So I let $\epsilon>0$. $$|f(\mathbf{x})| < \epsilon$$ $$|\dfrac{x_1x_2^2}{x_1^4+x_2^2}|<\epsilon$$ I know I have to get $\delta$ in terms of $\epsilon$ so $\delta$ will be chosen sufficiently corresponding to $\epsilon$. I think I need to somehow use $||\mathbf{x}||<\delta \Longleftrightarrow \sqrt{x_1^2+x_2^2} < \delta$ and relate this equation to $|\dfrac{x_1x_2^2}{x_1^4+x_2^2}|<\epsilon$. I am lost and need help on how I would approach this problem next.
Mario's answer is good, but it is also good to practice $\varepsilon-\delta$ proofs. Let $x:=x_1$ and $y:=x_2$ for easy typing. Fix an arbitrary $\varepsilon>0$. First notice that for any $(x,y)\neq (0,0)$ \begin{align*} |f(x,y)-0|& = \left| \frac{xy^2}{x^4+y^2}\right | \\ & \leq \frac{|x|y^2}{y^2}~\text{noting that}~ x^4,y^2\geq0~\text{, and assuming that}~y\neq 0 \\ &=|x| = \sqrt{x^2} \\ &\leq \sqrt{x^2+y^2} =\|(x,y)-(0,0)\|. \end{align*} If $y=0$ then $|f(x,y)-0|=0<|x|$, so our inequality holds for all $(x,y)\neq (0,0)$. Now choose $\delta=\varepsilon>0$. Then for any $(x,y)\in \mathbb{R^2}$ such that $0<\|(x,y)-(0,0)\|<\delta$ it follows from above that $|f(x,y)-0|<\varepsilon$. By the definition of a limit the result follows. We have found for any $\varepsilon>0$ a $\delta>0$ such that, for any point within a $\delta$ radius of the origin, the difference between the value of the function at that point and the suspected limit is less than $\varepsilon$. I think it is important to note that having $f(\bar 0)=0$ is only important if you want to prove that $f$ is continuous at $\bar 0$. The whole point of a limit is that we don't care what happens at the point, only arbitrarily close to the point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2229916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Size of a convergent series restricted to primes Let $(a_n)_{n=1}^\infty$ be a decreasing sequence of positive real numbers such that $\sum_{n=1}^\infty a_n$ converges. I am interested in the sum $\sum_{p}a_p$, where $p$ ranges over the primes. This subsum obviously converges, but I am interested in how quickly it converges. More precisely, I would like to know what I can say about the asymptotic size of $R(x):=\sum_{p\geq x}a_p$. Because $\sum_{n\geq x}a_n=o(1)$, it seems as though we should have $R(x)=o\left(\frac{1}{\log x}\right)$. In fact, I would like to be able to prove that $$\sum_{m=1}^\infty \frac{R(m)}{m}$$ converges. I am not sure if this is true, though. Any help would be greatly appreciated.
What you need is that $\pi(x) \sim \sum_{n < x} \frac{1}{\ln n}$, which lets you say that, since $a_n > 0$ and is non-increasing, $\sum_{p > x} a_p \sim \sum_{n > x} \frac{a_n}{\ln n}$. Summing by parts: $$\sum_{p > x} a_p = \sum_{n > x} a_n 1_{n \in P} = \sum_{n > x} \pi(n) (a_n-a_{n+1}) =\sum_{n > x} ((1+o(1))\sum_{k < n} \frac{1}{\ln k}) (a_n-a_{n+1}) \\= \sum_{n > x} (\frac{1}{\ln n}+ o(\frac{1}{\ln n})) a_n =(\sum_{n > x} \frac{a_n}{\ln n})(1+o(1)) $$ where some of those $o(1)$'s are as $n \to \infty$ and the others as $x \to \infty$ (use that $a_n > 0$ and non-increasing to bound the former in terms of the latter).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Remarkable/unexpected rational numbers Consider the Riemann $\zeta$ function. We know that $\zeta(2n)$ is a rational multiple of $\pi^{2n}$ (in particular is transcendental). We also know that $\zeta(3)$ is irrational, and we expect $\zeta(n)$ to be irrational (if not even transcendental) for every $n\in\mathbb{N}$, or at least I would be amazed if - say - $\zeta(5)$ turned out to be rational. Are there known instances of rational numbers appearing where we would not have expected them? Of course the notion of "expectation" here is very subjective, so this is a soft question just out of curiosity, since I have the feeling that usually (in my limited experience always) complicated expressions yield irrational ($\mathbb{C}-\mathbb{Q}$) numbers. As a non-example, we have the series $\sum_{n=1}^\infty 2^{-n}=1$. A series is complicated enough (compared to say a finite arithmetic expression), but of course here we have the explicit formula for geometric series so the resulting $1$ is not really a big surprise.
The average distance between two randomly chosen points in the Sierpinski triangle (of side $1$) is $$\frac{466}{885}$$ (where "distance" means the length of the shortest path between the points that lies within the Sierpinski triangle).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 1 }
Solving $ax+b \equiv cx+d \pmod n$ How does one solve questions of the form $ax+b \equiv cx+d \pmod n$? I know that if $b,d$ are $0$ then I can take multiples of $n$ as solutions for $x$. But what if $b,d$ are not zero? Under what conditions do solutions exist?
$$ ax + b \equiv cx + d \pmod n $$ $$ (a - c)x \equiv d-b \pmod n$$ $$ Ax \equiv B \pmod n$$ where $A = a-c, \; B = d - b$. Thus: 1) If $a \equiv c \pmod n$: if $b \equiv d \pmod n$, then every integer $x$ is a solution; otherwise, there is no solution. 2) If $a \not\equiv c \pmod n$: if $A$ has a multiplicative inverse $C$ modulo $n$, the solution is $x \equiv BC \pmod n$; otherwise, solutions exist precisely when $\gcd(A,n)$ divides $B$ (divide $A$, $B$ and $n$ by $\gcd(A,n)$ and invert in the smaller modulus). Note the multiplicative inverse $\pmod n$ can be found by tabulating the product $ij$ in a square table where the rows are numbered $i = 0,1,...,n-1$ and the columns are numbered $j=0,1,...,n-1$. To find the multiplicative inverse of each $i$, find the value of $j$ that makes $ij \equiv 1\pmod n$. In general, not all $i$ have a multiplicative inverse.
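For completeness, the whole recipe fits in a few lines of Python; a sketch (the function name and return convention are my own):

```python
from math import gcd

def solve_congruence(a, b, c, d, n):
    """Solve a*x + b ≡ c*x + d (mod n). Returns (x0, step) meaning the
    solutions are exactly x ≡ x0 (mod step), or None if none exist."""
    A, B = (a - c) % n, (d - b) % n
    if A == 0:
        return (0, 1) if B == 0 else None  # every x works, or none does
    g = gcd(A, n)
    if B % g != 0:
        return None
    A, B, m = A // g, B // g, n // g      # now gcd(A, m) = 1
    return (B * pow(A, -1, m)) % m, m     # pow(A, -1, m) needs Python 3.8+

print(solve_congruence(7, 3, 2, 9, 11))   # 5x ≡ 6 (mod 11): x ≡ 10 (mod 11)
```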
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $f(x)=x^2$ is a contraction on each interval in $[0,0.5]$ I need to formally prove that $f(x)=x^2$ is a contraction on each interval $[0,a]$, $0<a<0.5$. From intuition, we know that its derivative being in the range $(-1,1)$ implies that the distance between $f(x)$ and $f(y)$ is less than the distance between $x$ and $y$. But now I need an explicit $\lambda$ such that $0\le \lambda<1$ and $d(f(x),f(y))\le \lambda d(x,y)$, where $d$ is the standard metric on $\Bbb R$. Thanks a lot!
Remark that $$ |f(x)-f(y)| = |(x+y)(x-y)| \leq (|x|+|y|)|x-y| < 2a |x-y| $$ if $x$, $y \in [0,a]$. Now $2a<1$ by assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Matlab code to compute the smallest nonzero singular value of the matrix without using SVD I want to compute the smallest nonzero singular value of the matrix A, which is defined as follows. Let $B = rand(500, 250)$, $A = B*B^t$, where $t$ denotes the transpose of the matrix. I found the following matlab code to compute the singular values of the matrix A, which is based on the singular value decomposition of the matrix. svds = svd(A); s = min(svds); % smallest singular value I want to know: is there any other efficient way to compute the smallest singular value? Thank you in advance
Summary The answer is...use svds. What are the singular values? There may be some confusion over how you get the singular values. The command svd computes the singular values and the components that you don't want. The command svds only computes the singular values. As explained in Computing pinv, the first step in computing the full singular value decomposition of a matrix $\mathbf{A}$ is to compute the eigenvalue spectrum of the product matrix $\mathbf{A}^{*}\mathbf{A}$. $$ \sigma = \sqrt{\lambda\left( \mathbf{A}^{*}\mathbf{A} \right)} $$ These are precisely the numbers you want (after they are ordered and $0$ values are culled). These values are returned by svds. If you want to continue and compute the eigenvectors, and resolve the domain matrices, then execute svd. Background For background, the SVD is very powerful, and very expensive. You just want part of the information, and you hope to avoid the performance penalty. But the heart of the complexity of the SVD is the eigenvalue problem. This demands finding the roots of the characteristic polynomial. In your case, a polynomial of order 500. The task of finding roots to general polynomials is a demanding numeric problem. So by asking for the singular values, you have incurred most of the computational cost. Caution As an aside, make sure you understand how to handle small singular values. There is a tough issue of deciding if a small eigenvalue is a valid element of the spectrum, or a zero eigenvalue disguised as machine noise. Some discussion is at Number of Singular Values and Kernel mapping from feature space to input space. It may be reasonable to change your requirement from finding the smallest eigenvalue to setting a threshold for smallest eigenvalue. Keep posting As your problem and understanding evolve, keep posting and keep the discussion going. @Rahul's answer User @Rahul has a better solution because he skips the unneeded step of forming the product matrix. Almost certainly, eigs, svds, and svd call the same routine to find the roots of the characteristic polynomial, and in this instance the time savings may be imperceptible. Failure to recognize that we can bypass the product matrix is a critical oversight.
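In the spirit of @Rahul's point about skipping the product matrix, here is a small numpy sketch (numpy stands in for the MATLAB workflow; the reasoning carries over):

```python
import numpy as np

B = np.random.rand(500, 250)
# The nonzero singular values of A = B @ B.T are exactly the squares of the
# (almost surely all nonzero) singular values of B, so work with B directly.
s = np.linalg.svd(B, compute_uv=False)   # singular values of B, descending
print(s[-1]**2)  # smallest nonzero singular value of A
```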
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
Which infinity do integrals diverge to? When we say $\int{f}dx = \infty$, what is the cardinality of that $\infty$?
You are mixing up two notions of "infinity". One concerns the sizes of sets. A set is infinite (in that sense) when there's a one to one correspondence with a proper subset. That's the infinity that you mean when you talk about cardinality. The other "infinity", written $\infty$, is sometimes confusing shorthand used when discussing limits. To say that a limit "is infinite" means that the quantity is eventually larger than any number specified in advance. $\infty$ is not a number of any kind. Your confusion is quite common, and unsurprising. It's too bad we chose to use one word two different ways.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Trigonometric Inequality I have a problem with this trigonometric inequality, which I cannot completely solve. In particular, I cannot get the whole solution the book provides, and, with my bad luck, I don't have the book with me, because this problem arose from one of my students' problems during a private lesson. $$\sin\left(\frac{x}{2}\right)\left(2\cos(x) - \sqrt{3}\right) <0$$ We have to solve it by invoking the unit circle method, and all the related blabla questions. The fact is that I get these two solutions (before unifying them): $$\pi + 2k\pi < \frac{x}{2} < 2\pi + 2k\pi$$ $$\frac{\pi}{6} + 2k\pi < x < \frac{11}{12}\pi + 2k\pi$$ I'm strongly afraid this is wrong, but the book provides other solutions which I cannot manage to find. Unfortunately I don't remember them well, but I put the problem into Mathematica too, and it says that the system cannot be solved with the methods available to Reduce. Bah.
Hint: Going with the comment regarding use of the 'unit circle method', it may be easier to think about this geometrically rather than algebraically. For instance, if we pick a point on the upper half of the unit circle then the angle $x/2$ corresponds to a point in the first quadrant with positive sine. On the other hand, the condition that $\cos x>\sqrt{3}/2$ corresponds to a point on the unit circle which lies to the right of $x=\sqrt{3}/2$. By considering all the relevant cases, you should be able to determine the intervals in $[0,2\pi)$ for which $x$ satisfies the inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
Evaluate $\int_0^{2\pi}\frac{\cos(\theta)}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta \ \text{ for }\ (A^2+B^2) <<1$ I need to evaluate the definite integral $\int_0^{2\pi}\frac{\cos(\theta)}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta \ \text{ for }\ A<<1, B<<1, (A^2+B^2) <<1.$ For unrestricted (but real) $A$ and $B$, Wolfram Alpha provides the following indefinite general solution: $$\int \frac{\cos(\theta)}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta = \left( \frac{1}{A^2+B^2}\right) \left[ \frac{2B}{K} \tanh^{-1} \left( \frac{A-(B-1)\tan\left(\frac{\theta}{2}\right)}{K}\right) + F(\theta)\right]$$ where $F(\theta) = A \ln(1 + A\sin\theta+B\cos\theta)+B\theta$ and $K = \sqrt{A^2 + B^2 -1}$; therefore $K$ is complex for the range of $A,B$ I am interested in. In a previous question seeking a solution for the similar, but slightly simpler, definite integral (with numerator $1$ rather than $\cos\theta$) user Dr. MV found a solution given by: $$\int_0^{2\pi}\frac{1}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta = \frac{2\pi}{\sqrt{1-A^2-B^2}} \quad (\text{for } \sqrt{A^2+B^2}<1).$$ My question: Can similar solutions be found for these two definite integrals $$\int_0^{2\pi}\frac{\cos\theta}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta \tag 1$$ and $$\int_0^{2\pi}\frac{\sin\theta}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta \tag 2$$? EDIT I have taken the solution proposed by user Chappers. By simultaneous equations in $A,B,I,J$ it turns out that $$I=\int_0^{2\pi}\frac{\cos\theta}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta = \frac{B}{A^2+B^2} 2\pi \left(1-\frac{1}{\sqrt{1-A^2-B^2}}\right)$$ and $$J=\int_0^{2\pi}\frac{\sin\theta}{1 + A\sin(\theta) + B\cos(\theta)}\, \mathrm{d}\theta = \frac{A}{A^2+B^2} 2\pi \left(1-\frac{1}{\sqrt{1-A^2-B^2}}\right).$$ These were confirmed in a numerical model.
HINT: set $$t=\tan(x/2),$$ $$\sin(x)=\frac{2t}{1+t^2},$$ $$\cos(x)=\frac{1-t^2}{1+t^2},$$ and $$dx=\frac{2\,dt}{1+t^2}.$$
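The closed forms stated in the question's EDIT check out numerically; a minimal scipy sketch (parameter values chosen arbitrarily with $A^2+B^2<1$):

```python
import numpy as np
from scipy.integrate import quad

A, B = 0.2, -0.1
f = lambda t: np.cos(t) / (1 + A*np.sin(t) + B*np.cos(t))
I_num, _ = quad(f, 0, 2*np.pi)
I_closed = B/(A**2 + B**2) * 2*np.pi * (1 - 1/np.sqrt(1 - A**2 - B**2))
print(I_num, I_closed)  # agree to quadrature accuracy
```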
{ "language": "en", "url": "https://math.stackexchange.com/questions/2230930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Reference related to Hirzebruch surfaces I was reading about the Hirzebruch surface construction to understand a few examples of the Delzant construction, and need some help. I am using this reference on Hirzebruch surfaces and can't understand a few things about the final step: $P(L_{-n} \oplus \bar{\mathbb{C}}) $. I understand the construction of $L_{-n}$, but is that the usual direct sum or what? I also can't understand what $\bar{\mathbb{C}}=X \times \mathbb{C}$, the "trivial complex line bundle", is: of what space? If someone can answer these questions, perfect; a reference for me to read on my own would also be nice. Thanks!
$\overline{\mathbb C}$ is the trivial line bundle over $X$. It's a line bundle over $X$ (in your case, $X = \mathbb P^1$). So the total space of $\overline{\mathbb C}$ is $X\times \mathbb C$, with the projection given by $\pi(x, z) = x$. In your case, $X = \mathbb P^1$ and the Hirzebruch surface is by definition $$P(L_{-n} \oplus \overline{\mathbb C}).$$ Here $L_{-n}$ is a line bundle on $\mathbb P^1$, so $L_{-n} \oplus \overline{\mathbb C}$ is a vector bundle of rank two over $\mathbb P^1$, and $P$ is the associated projective bundle: that is, for each $\ell \in \mathbb P^1$ (the base), the fiber is $P((L_{-n})_\ell\oplus \mathbb C)$, all lines in the vector space $(L_{-n})_\ell\oplus \mathbb C$. Thus the fiber is again biholomorphic to $\mathbb P^1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Algebra rules when finding inverse modulo The idea is to find a modular inverse for two numbers, $660$ and $43$ in this case. I find that the GCD is easy to calculate, but not the step after that: calculating the modular inverse by working back through the GCD calculation. The thing I do not get is that 'by algebra' they keep removing certain numbers between parentheses, and it seems illogical to me. $\begin{array}{rcl}660 & = & 43 \cdot 15 + 15 \\ 43 & = & 15 \cdot 2 + 13 \\ 15 & = & 13 \cdot 1 + 2 \\ 13 & = & 2 \cdot 6 + 1 \\ 2 & = & 1 \cdot 2 + 0 \end{array}$ Now, these are steps 1 through 5, and for step 6 (to calculate the inverse), they give this: $\begin{array}{rcl} (1) & = & 13 - 2 \cdot 6 \\ (2) & = & 13 - (15 - 13) \cdot 6 \\ (3) & = & 7 \cdot 13 - 6 \cdot 15 \\ (4) & = & 7 \cdot (43 - 15 \cdot 2) - 6 \cdot 15 \\ (5) & = & 7 \cdot 43 - 20 \cdot 15 \\ (6) & = & 7 \cdot 43 - 20 \cdot (660 - 43 \cdot 15) \\ (7) & = & 307 \cdot 43 - 20 \cdot 660 \end{array}$ The thing I do not get, for example, is how they end up with $20$ at step 5. What exactly are the rules here when simplifying these steps? It seems like they are just replacing any numbers to their liking. I have this for my discrete math course, and have not had basic algebra lessons before this, so it could be really easy. All help is appreciated! Edit: perhaps there is no real question above; my question, then: what are the rules for this? Can these integers within the parentheses just be shuffled around?
The way I like to describe the process is this: When finding the GCD of two numbers, begin writing a table where the first row is the first number we are interested in, followed by a 1 followed by a 0. The second row will be the second number we are interested in, followed by a 0 followed by a 1. $$\begin{array}{c|c|c}660&1&0\\43&0&1\end{array}$$ Continue building the table by subtracting the largest multiple of the most recent row from the one before it that still results in a non-negative number for the first entry. In this case $15$. We have $[660,1,0]-15[43,0,1] = [15,1,-15]$ so our table continues as: $$\begin{array}{c|c|c}660&1&0\\43&0&1\\15&1&-15\end{array}$$ Again, we look at how many copies of the last row can fit into the one previous, in this case twice: $[43,0,1]-2[15,1,-15]=[13,-2,31]$ so it continues $$\begin{array}{c|c|c}660&1&0\\43&0&1\\15&1&-15\\13&-2&31\end{array}$$ This process continues until you eventually arrive at a zero for the first entry of a row. The GCD is the first entry in the row previous. Note also that these columns have significance. The way I set it up, in finding $\gcd(A,B)$ the number on the left of a row is equal to the middle number times $A$ plus the right number times $B$. In this example, $13 = -2\cdot 660 + 31\cdot 43$ Completing the table: $$\begin{array}{c|c|c}660&1&0\\43&0&1\\15&1&-15\\13&-2&31\\ 2&3&-46\\1&-20&307\\0\end{array}$$ This implies that $1=-20\cdot 660 + 307\cdot 43$, that $\gcd(660,43)=1$, and that $660^{-1}\equiv -20\pmod{43}$
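The tabular procedure translates directly into code; a compact Python sketch of the same bookkeeping:

```python
def extended_gcd(a, b):
    # Each row [r, x, y] of the table satisfies r = x*a + y*b.
    r0, x0, y0 = a, 1, 0
    r1, x1, y1 = b, 0, 1
    while r1 != 0:
        q = r0 // r1  # largest multiple keeping the first entry non-negative
        r0, x0, y0, r1, x1, y1 = (r1, x1, y1,
                                  r0 - q*r1, x0 - q*x1, y0 - q*y1)
    return r0, x0, y0  # gcd and its Bezout coefficients

g, x, y = extended_gcd(660, 43)
print(g, x, y)   # 1 -20 307, i.e. 1 = -20*660 + 307*43
print(-20 % 43)  # 23, the canonical representative of the inverse mod 43
```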
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Noetherian hypothesis when permuting elements of a regular sequence? One of the first results shown when studying regular sequences is that we are allowed to shuffle the elements of the sequence if the ring is noetherian, local, and the module is finite (see Proposition 2 here for a proof). I know the ring being local and the module being finite are required to use Nakayama's lemma, but I can't spot where noetherianity is used! Edit: By MooS' comment I realize Akhil and probably all the other sources are using Krull's Intersection theorem, which I thought I didn't need for 'my' proof (I learned it from Bruns-Herzog). So now I wonder what is wrong with the following: We want to show that, under all the hypotheses, if $x,y$ is an $M$-sequence, then $y,x$ is an $M$-sequence. Let's show that $y$ is not a zero-divisor. Denote the kernel of multiplication by $y$ on $M$ by $K$. Assume $m\in K$. Then, by the regularity of the sequence, $m\in xM$ (use $\overline{ym}=0$ in $M/xM$) and we can write $m=xm'$ for some $m'\in K$, i.e., $K\subseteq xK$. Indeed, if $xym'=0$ then by regularity $ym'=0$ and $m'\in K$. Therefore $K=xK$, and by Nakayama's lemma (version from AM) $K=0$. Thus, $y$ is not a zero-divisor. QED What is wrong with this proof?
The assumption that the ring be noetherian is used when Krull's intersection theorem is applied. And the assumption is necessary, as Stacks Project's tag 00LH shows, for example: consider $k[x,y,w_1,w_2,\ldots]/(yw_1,yw_2,\ldots,w_1-xw_2,w_2-xw_3,\ldots)$ and localise in the maximal ideal generated by $x,y$ and all the $w_i$, $i\in\mathbb{N}$. Then $x,y$ is a regular sequence, but $y$ is a zero divisor. We can detect the issue with the OP's proof in the example: $K=\mathrm{Ann}(y) = (w_1,w_2,\ldots)$ does satisfy $xK=K$, but we can't conclude $K=0$ from Nakayama's lemma since $K$ is not finitely generated. So it seems the proof is okay if the ring is noetherian and it does not use Krull's intersection theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to explain to my daughter that $\frac {2}{3}$ is greater than $\frac {3}{5}$? I was really upset while I was trying to explain to my daughter that $\frac 23$ is greater than $\frac 35$, and she kept claiming that since $3$ is greater than $2$ and $5$ is greater than $3$, $\frac 35$ must be greater than $\frac 23$. At this stage she can't calculate decimals, so she can't see that $\frac 23 \approx 0.66$ while $\frac 35 = 0.6.$ She is $8$ years old.
It's quite impressive that an 8-year old can state a clear reason (even if wrong) for a mathematical conclusion. I suggest you should be pleased rather than upset, and should begin with the respectful approach of addressing her reasoning. So you might try asking her which is the greater of $1/3$ and $2/5$. If she chooses $2/5$ you might then explore with her the implication that: $$3/5 + 2/5 > 2/3 + 1/3$$ when in fact: $$3/5 + 2/5 = 1 = 2/3 + 1/3$$ If that proves unpersuasive I would suggest representing fractions as sections of a straight line (rather than of rectangles, circles, cakes etc which introduce further complications). So draw (or perhaps ask her to draw) a line of (for convenience) $15$ cm and mark against it the thirds (every $5$ cm) and fifths (every $3$ cm). You can then ask whether the line section representing $3/5$ is longer than that representing $2/3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "67", "answer_count": 29, "answer_id": 4 }
How can one generate an open-ended sequence of low-discrepancy points in 3D? I'd like a low-discrepancy sequence of points over the 3D hypercube $[-1,1]^3$, but don't want to have to commit to a fixed number $n$ of points beforehand; that is, I want to just see how the numerical integration estimates develop with increasing numbers of low-discrepancy points. I'd like to avoid having to start all over again if the results with a fixed $n$ are unsatisfactory. Of course, one could just employ random numbers, but then the convergence behavior would be poorer. "A sequence of n-tuples that fills n-space more uniformly than uncorrelated random points, sometimes also called a low-discrepancy sequence. Although the ordinary uniform random numbers and quasirandom sequences both produce uniformly distributed sequences, there is a big difference between the two." (mathworld.wolfram.com/QuasirandomSequence.html) This question has also just been put on the mathematica.stackexchange (https://mathematica.stackexchange.com/questions/143457/how-can-one-generate-an-open-ended-sequence-of-low-discrepancy-points-in-3d) Since in his answer below Martin Roberts advances a very interesting, appealing approach to the open-ended low-discrepancy problem, I'd like to indicate an (ongoing) implementation of his approach I've just reported in https://arxiv.org/abs/1809.09040 . In sec. XI (p. 19) and Figs. 5 and 6 there, I analyze two problems, one with sampling dimension $d=36$ and one with $d=64$, both using the parameter $\bf{\alpha}_0$ set to 0 and also to $\frac{1}{2}$. To convert the quasi-uniformly distributed points yielded by Roberts' algorithm to quasi-uniformly distributed normal variates, I use the code developed by Henrik Schumacher in his answer to https://mathematica.stackexchange.com/questions/181099/can-i-use-compile-to-speed-up-inversecdf
Another good solution to get an open-ended sequence is to use the Halton method. It is also very easy to implement, in any dimension! For $d<8$ it usually has good properties; beyond this, more elaborate methods will typically outperform Halton.
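For reference, the Halton construction really is only a few lines; a minimal Python sketch for the asker's $[-1,1]^3$ setting (bases 2, 3, 5 are the standard choice for 3D):

```python
def halton(i, base):
    """i-th term (i >= 1) of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def halton3d(i):
    """Open-ended low-discrepancy point in [-1, 1]^3."""
    return tuple(2.0 * halton(i, b) - 1.0 for b in (2, 3, 5))

print([halton3d(i) for i in (1, 2, 3)])
```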
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Motivation Of Correlation Coefficient Definitions correlation coefficient $= r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2\sum_{i=1}^{n}(y_i - \bar{y})^2}}$ My Question What is the motivation for this formula? It's supposed to measure linear relationships in bivariate data, but I don't understand why it would do that as defined. For example, Riemann integrals are said to measure area under a curve, and that makes sense because $\sum f(x_i)\Delta x$ is adding areas of rectangles under the curve $f(x)$, approximating its area more and more as we take more samples. Does such an intuition exist for the correlation coefficient? What is it? My background in statistics is nothing but a bit of discrete probability. I know histograms, data plots, mean, median, range, variance, standard deviation, box plots and scatter plots at this point (from reading the first week's material on an introductory statistics class). My Research All of the "Questions that may already have your answer" seemed to either be asking about what the formula said mathematically or asked questions that were more advanced than my knowledge.
Suppose we have a scatterplot of heights X and weights Y of n subjects. The 'center of the data cloud' is at the point $(\bar X,\,\bar Y)$. One might expect a positive association between heights and weights. Points above and to the right of the center make a positive contribution to the sum $\sum (X_i -\bar X)(Y_i - \bar Y).$ So also do points below and to the left of center. Points that might suggest a negative association will be above and to the left of center or below and to the right of center. For them, the product $(X_i -\bar X)(Y_i - \bar Y)$ will have a negative and a positive factor, thus a negative product. So such points will make a negative contribution to $\sum (X_i -\bar X)(Y_i - \bar Y).$ The denominator is essentially the product of the numerators of the standard deviations of X and Y. The effect of the denominator is to make $r$ a quantity without units. In the US system of measurements the numerator has units 'foot-pounds', and the denominator has the same units, so $r$ has no units. If the subjects were weighed and measured in the metric system, the correlation of their weights and heights would be numerically the same as if they were weighed and measured in the US system. Also, inclusion of the denominator scales correlations $r$ so that they lie between $-1$ and $+1,$ where $r = 1$ means the points perfectly fit an upward sloping line (regardless of the numerical value of the slope, which has units), and $r = -1$ means the points perfectly fit a downward sloping line. If either the SD of the X's or the SD of the Y's is 0, then the points lie on either a vertical line or a horizontal line, respectively. In either case the denominator of $r$ would be $0$ and the correlation is not defined. In the plot below, there is a strong linear component to the positive association of X and Y: $r = 0.968.$ The horizontal and vertical grid lines cross at $(\bar X, \bar Y).$ Each of the dark green points makes a positive contribution to the numerator of $r,$ as discussed above. Also, the two red points make (slight) negative contributions to the numerator of $r.$
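To tie the formula to data, here is a short numpy sketch with simulated heights and weights (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(70, 3, size=200)          # heights
y = 2*x + rng.normal(0, 10, size=200)    # weights, positively associated
num = np.sum((x - x.mean()) * (y - y.mean()))
den = np.sqrt(np.sum((x - x.mean())**2) * np.sum((y - y.mean())**2))
print(num/den, np.corrcoef(x, y)[0, 1])  # the two values coincide
```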
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving inequalities on $L^p$ spaces Suppose $p,q,r \in[1,\infty)$ and $1/r = 1/p +1/q$. Prove that $$\|fg\|_r \leq \|f\|_p\,\|g\|_q.$$ I am assuming that this proof involves using the Hölder inequality, but so far I am unable to proceed. Maybe that's because this is my first problem using the Hölder/Minkowski inequality.
Clearly $p,q>r$, so one may apply Hölder's inequality to $|f|^r$, $|g|^r$ with exponents $p'=\frac{p}r$, $q'=\frac{q}r$ (note $\frac1{p'}+\frac1{q'}=\frac rp+\frac rq=1$); this gives $\int |fg|^r \le \||f|^r\|_{p'}\,\||g|^r\|_{q'} = \|f\|_p^r\,\|g\|_q^r$, and taking $r$-th roots finishes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Methods for finding $\cos(4x)$ given that $\sin(2x) = \frac{3}{5}$ If $\sin(2x) = \frac{3}{5}$, find $\cos(4x)$. I tried: $\cos(4x)= \cos(2\cdot2x)$ and $\cos (2\cdot2x) = 1-2\sin^2(2x)$, from which $\cos(4x)=0.28$. Are there any other ways?
1. Form the 3-4-5 triangle. 2. Reflect it about the "4" side to get a 5-5-6 triangle. 3. Apply the cosine law to it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Rearranging $\left| \sqrt{x} -\sqrt{y} \right| $ I'm just going through an example of a Holder function ($f(x) = \sqrt{x}$), and a step in the example goes as follows, $$\left| \sqrt{x} -\sqrt{y} \right| = \frac{\left|x-y\right|}{\sqrt{x}+\sqrt{y}}$$ I've been fiddling around with this for half an hour and cannot see how to get the RHS from the left. Thanks in advance.
$(\sqrt x-\sqrt y)(\sqrt x+\sqrt y) = (\sqrt x)^2 - (\sqrt y)^2 = x - y$, so dividing by $\sqrt x+\sqrt y$ (which is positive unless $x = y = 0$) and taking absolute values gives $\left| \sqrt{x} -\sqrt{y} \right| = \frac{\left|x-y\right|}{\sqrt{x}+\sqrt{y}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2231958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Universal Property of Quotients I couldn't recall the UPQ from my memory exactly, so I wrote the following: Let $R, S$ be rings, $I$ be an ideal of $R$, $\pi:R\to R/I$ be the canonical quotient ring homomorphism and $\varphi:R\to S$ be a surjective ring homomorphism. Then there exists isomorphism $\overline{\varphi}:R/I\to S$, such that ${\varphi}=\overline{\varphi}\circ \pi$. I'm looking at the original statement, but it is phrased differently - there is no word "surjective" and no word "isomorphism", and $I$ is said to be subset of $\ker\varphi$. But could there possibly be some truth to what I wrote?
The general universal property of quotients states: For a ring homomorphism $\varphi:R\to S$, and an ideal $I\subseteq\ker\varphi$, there exists a unique homomorphism $\overline{\varphi}:R/I\to S$ such that $\bar{\varphi}\circ\pi=\varphi$, where $\pi:R\to R/I$ is the usual projection. In the case where $\varphi$ is surjective, $\overline{\varphi}$ must be surjective as well, since given any $s\in S$, there exists an element $r\in R$ such that $s = \varphi(r) = \overline{\varphi}(r+I)$. The last bit should be a little more precise; you should have that $I=\ker\varphi$, not just containment $\subseteq$. When $I=\ker\varphi$, the first isomorphism theorem implies that the induced map $\overline{\varphi}:R/I\to S$ is an isomorphism. So, there is some truth to what you wrote, as long as you specify that $I=\ker\varphi$. As an example, let $\varphi:\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$ be the projection. Note that this is surjective. Then, we have $4\mathbb{Z}\subseteq\ker\varphi = 2\mathbb{Z}$, and thus there exists a unique map $$\overline{\varphi}:\mathbb{Z}/4\mathbb{Z}\to\mathbb{Z}/2\mathbb{Z}$$ such that $\overline{\varphi}\circ\pi=\varphi$. However, $\overline{\varphi}$ is clearly not an isomorphism, since one ring has $2$ elements while the other has $4$. So the condition $I=\ker\varphi$ is necessary to conclude that the induced map is an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Since the $\omega$-limit is invariant, then it must contain only points with $\dot{V}(x)=0$ I am trying to understand the following proof: I do not fully get why he says: since $\Gamma^+$ (which I think is usually referred to as the $\omega$-limit) is an invariant set, then $\dot{V}(x)=0$ in $\Gamma^+$. Intuitively, it seems to me that, since $\omega$-limit is invariant, it must be composed of singularities or closed orbits (and I know that the $\omega$-limit is connected because the orbits for positive times are trapped inside a compact set). But closed orbits are impossible because $V$ is strictly decreasing, so $\omega(x)$ must only contain singularities. But even if this reasoning is correct, it seems to me that it is not rigorous as it is.
The preceding sentence says that $V(p)=c$ for each $p \in \Gamma^+$. And since $\Gamma^+$ is invariant, any orbit $y(t)$ which starts in $\Gamma^+$ stays in $\Gamma^+$. Thus $V(y(t))=c$ (identically) for such an orbit, which implies $\frac{d}{dt} \Bigl( V(y(t)) \Bigr)=0$. And this is exactly what the phrase “$\dot V=0$ on $\Gamma^+$” means.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding value of limit involving cosine I know we can use the Maclaurin expansion and l'Hôpital's rule for solving it, but I want another way. Find the value of $\lim_{x \to 0} \frac{\cos^2x - \sqrt{\cos x}}{x^2}$. My try: I multiplied and then divided by the conjugate of the numerator, but it didn't help.
Two standard limits $$\lim_{x\to a} \frac{x^{n} - a^{n}} {x-a} =na^{n-1},\,\lim_{x\to 0}\frac{1-\cos x} {x^{2}}=\frac{1}{2}$$ come to our rescue here. We have \begin{align} L&=\lim_{x\to 0}\frac{\cos^{2}x-\sqrt{\cos x}} {x^{2}}\notag\\ &=\lim_{x\to 0}\frac{\cos x - 1}{x^{2}}\cdot\left(\frac{\cos^{2}x-1}{\cos x - 1}-\frac{\cos^{1/2}x - 1}{\cos x - 1}\right)\notag\\ &=-\frac{1}{2}\left(2-\frac{1}{2}\right)\notag\\ &=-\frac{3}{4}\notag \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Filtered colimit and directed colimit Is there an example of an abelian category $A$ in which direct limits always exist, but filtered colimits do not always exist?
No. Every filtered category admits a cofinal functor from a directed category, so the existence of directed colimits implies that of filtered colimits. The construction of this directed category is slightly technical. It can be read in the first part of Adamek and Rosicky's monograph on locally presentable and accessible categories. EDIT: An error in the A-R proof has been pointed out on MSE before and was recalled by @user12580 in the comments, together with a link to a correct proof, which interestingly precedes the A-R book by quite some time.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
"Compositional roots" of functions, how to define them and how many are there? Assume I am interested in solving $$(\underset{k \text{ times}}{\underbrace{g\circ \cdots \circ g)}}(x) = g^{\circ k}(x) = f(x)$$ That is, $g$ is in some sense a function which is a $k$:th root to applying the function $f$. Applying $g$ $k$ times starting with the number $x$ does the same thing as applying $f$ once. I suspect that without extra constraints on the behaviour of $g$ there got to exist a huge amount of candidates for $g$. Say we consider functions $f\in \mathbf{C}^2$, twice continously differentiable. Is there some way to quantify or classify the solutions $g$ depending on which space they are in? What constraints can we put on $g$ to narrow down or make nicer the possible solutions? Own work Some (rather trivial) ones I have found are: $$\text{for } k = l: f(x) = x^{2^l}, g(x) = x^2$$ And of course (more generally) all polynomials on the form: $$\text{ for } k=2, f(x) = \sum_{\forall l} a_l \left(\sum_{\forall m} a_mx^m\right)^l, \text{ have } g(x) = \sum_{\forall k} a_kx^k$$ For example maybe the most simple one turning function composition into an addition machine. Imagine a simple computer having only "increment by $b$" instruction to do addition. $$\cases{g(x)=x+b\\f(x) = g(g(x)) = g(x)+b = (x+b)+b = x+2b}$$ These are all the conceptually most simple ones I could think of, but I am of course interested in how complicated functions $g$ could be while still fulfilling the equation.
Let $f(x)=4x$. That's a useful function, though hardly more so than 0; more importantly, it is nontrivial. Define $g(x)$ as some freehand monotone curve running from (1,2) to (2,4), and then use the functional equation to expand its domain to (2,4), then to (4,8) and so on. You see that it may be made infinitely differentiable everywhere (except possibly 0) in infinitely many ways.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show that this fractal contains every odd, positive integer Let $f(x)=2x+\frac{1}{3}$ Let $g(x)=\frac{2x-1}{3}$ $f^n$ represents composition of $f$ Let $G_0=\{1\}$ Let $F_{m}=\{f^n(x):x\in G_{m}, n\in\mathbb{N_{\geq0}}\}$ Let $G_{m+1}=\{g(x):x\in F_m\}$ Show that for any given odd, positive integer $p$ there is some sufficiently high $m$ for which $p$ is in $F_m$ For what it's worth (and this may well not be helpful at all), I've got some insight into why it's true, but I can't translate that into a solution. I can see that the denominator of $F_m$'s elements is a power of $3$ increasing with each successive $m$. I can see that these functions $f$ and $g$ (restricted to the positive integers) form a Euclidean function where the orbits of $f$ and $g$ are in some sense orthogonal away from the vicinity of $1$. I think this fact, combined with the fact that $F_m\cap F_{m-1}$ is somehow dense in the odd integers within the range from $G_m$ to $G_{m+1}$, proves the statement. But I have no idea by what metric $F_m\cap F_{m-1}$ are between $G_m$ and $G_{m+1}$. I'm pretty sure it's a 2-adic and 3-adic number problem. It is also a fact that this is a fractal. Each application of $F$ adds, for every element, a new set of numbers spaced exponentially, just above or below those already included.
Since $n=0$ is allowed, $\bigcup F_n$ is the set of numbers that ca be obtained by an arbitrary finite sequence of $f$ and $g$ from $1$. Thus the claim is that for every $p\in\Bbb N$, there exists a sequence $x_0,x_1,\ldots, x_N$ with $x_0=1$, $x_N=p$ and $x_{k+1}\in\{2x_k+\frac13, \frac{2x-1}3\}$. Let $y_k=6x_k+2$. Then $y_0=8$, $y_N=6p+2$, and $y_{k+1}\in\{2y_k,\frac23 (y_k +1) \}$. We see that $y_k\in\Bbb Z[\frac13]$, and that $y_k\notin\Bbb Z$ implies $y_{k+1}\notin\Bbb Z$. As $y_N\in\Bbb Z$, we must have $y_k\in\Bbb Z$ for all $k$. In particular, $y_{k+1}=\frac23(y_k+1)$ is possible only if $y_k\equiv -1\pmod3$. We also see from this that all $y_k$ are even. Let $z_k=-\frac12y_{N-k}\in\Bbb Z$. So $z_0=-3p-1$, $z_N=-4$ and $z_{k+1}\in\{\frac12z_k,\frac12(3z_k+1)\}$. Now the procedure becomes determined because we must have $z_{k+1}\in\Bbb Z$: If $z_k$ is odd, we must have $z_{k+1}= \frac12(3z_k+1)$, and if $z_k$ is even, we must have $z_{k+1}=\frac12z_k$. This allows us to extend the sequence beyond $z_N=-4$ as $$\tag1\ldots\to -4\to -2\to-1\to -1\to\ldots.$$ This is strikingly reminiscent of the (as of today unsolved) Collatz $3n+1$ problem, which states that for any positive integer $z_0$ we will end at $\ldots \to 1\to 2\to 1\to\ldots$ with the above recursion. (Collatz' original formulation splits the odd case into $z_{k+1}=3z_k+1$ and a necessarily following $z_{k+2}=\frac12z_{k+1}$). However, we are dealing with negative $z_0\equiv-1\pmod 3$ here, a slightly different problem. As far as I (and Wikipedia) know, this generalization to negative integers is also unsolved, but at least it is known that there are some additional limit cycles possible, namely apart from the $\ldots \to -1\to\ldots$ we are looking for at least also $$\tag2\ldots\to -5\to -7\to-10\to-5\to\ldots$$ and $$\tag3\ldots \to -17\to -25\to -37\to -55\to -82\to-41\to\\\to -61\to -91\to -136\to -68\to-34\to-17\to\ldots.$$ So our task is to show that for every $p\in\Bbb N$, the choice $z_0=-3p-1$ leads to $(1)$ and not to $(2)$ or $(3)$ or any as of now unknown cycle or unbounded behaviour. Unfortunately, $p=3$ leads to $z_0=-10$, leads to $(2)$, and we find many other such counterexamples (e.g., $p=8$ leads to $(3)$). In other words, numbers such as $3$ or $8$ are not in any $F_m$.
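A quick computational check of the last claims (a sketch in plain Python, using nothing beyond the recursion defined above): starting from $z_0=-3p-1$, iterate $z\mapsto z/2$ for even $z$ and $z\mapsto(3z+1)/2$ for odd $z$, and stop when a value repeats.

def orbit(p, steps=60):
    # iterate z -> z/2 (z even) or z -> (3z+1)/2 (z odd), starting from z0 = -3p - 1
    z = -3 * p - 1
    seen = []
    for _ in range(steps):
        seen.append(z)
        z = z // 2 if z % 2 == 0 else (3 * z + 1) // 2
        if z in seen:                    # entered a cycle; return the cycle itself
            return seen[seen.index(z):]
    return seen

print(orbit(1))   # [-1]: the fixed point of (1), so p = 1 is attainable
print(orbit(3))   # [-10, -5, -7]: the cycle (2), so p = 3 is not attainable
print(orbit(8))   # the 11-element cycle (3), so p = 8 is not attainable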
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show $(1+x_1)(1+x_2)...(1+x_n)\geq2^n$ given $x_1x_2...x_n=1$ Show $(1+x_1)(1+x_2)...(1+x_n)\geq2^n$ given $x_1x_2...x_n=1$ and that all $x_i$ are positive reals. I think simple AM-GM-HM must work, but I am missing something trivial.
The brute force approach is: $$(1+x_1)(1+x_2)\cdots(1+x_n)=\sum_{S\subseteq \{1,\dots,n\}}\prod_{i\in S}x_i$$ But by AM/GM, since there are $2^n$ subsets of $\{1,\dots,n\}$, you get: $$\frac{1}{2^n}\sum_{S\subseteq \{1,\dots,n\}}\prod_{i\in S}x_i \geq \sqrt[2^n]{x_1^{2^{n-1}}\cdots x_n^{2^{n-1}}}=1$$ This is because: $$\prod_{S\subseteq \{1,\dots,n\}}\prod_{i\in S} x_i=x_1^{2^{n-1}}\cdots x_n^{2^{n-1}}$$
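For comparison, a shorter route to the same bound: apply AM/GM to each factor separately, $1+x_i \ge 2\sqrt{x_i}$, and multiply the $n$ inequalities to get $$\prod_{i=1}^n (1+x_i) \;\ge\; 2^n\sqrt{x_1x_2\cdots x_n} \;=\; 2^n.$$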
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Are the Elementary Functions Dense in $C^n((0,1))$ with the Compact-Open Topology? My question is really a basic (and, I dare say, naive) question about the theory of ordinary differential equations and is likely answered in a book on functional analysis or the like, so any pointers to such a book with the answer would be appreciated. It is well-known that $\mathcal{C}^n(I)$ with the compact-open topology for any open interval $I$ of positive width is homeomorphic to $\mathcal{C}^n((0,1))$ with the compact-open topology, essentially via the unique orientation-preserving affine diffeomorphism of $I$ with $(0,1)$. In this post and this post, it is outlined that $\mathcal{C}^n((0,1))$ with the compact-open topology is metrizable by a metric that is complete, that is to say, that $\mathcal{C}^n((0,1))$ with the compact-open topology is topologically complete. My question is, are the elementary functions dense in $\mathcal{C}^n((0,1))$ with the compact-open topology? If this is true, then any $n^\text{th}$-order linear IVP with all coefficient functions continuous on $I$ and leading coefficient function $a_n(t)$ having no roots in $I$ has a Cauchy sequence (in any of the metrics outlined above) of elementary functions which converges to the IVP's unique solution $x(t)$ (which may be non-elementary but which will be $n$ times continuously differentiable on $I$ as $\mathcal{C}^n(I)$ with the compact-open topology is topologically complete). This fact would be useful in motivating differential equations students about some of the strategies for "solving", or approximating solutions to, IVPs.
The polynomials are dense by the Stone-Weierstrass theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2232973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove $\sin(x) = x + O(x^3)$? I was doing an exercise in which I have to find the rate of convergence of $$\lim\limits_{h\to 0}\dfrac{\sin h}{h} = 1$$ and the answer is $O(h^2)$. I don't understand why. The only thing I have found is that $$\sin(x) = x + O(x^3)$$ when $x$ tends to zero, and with that the exercise is solved, but how do I demonstrate that fact?
Start with $\sin' = \cos $, $\cos' = -\sin $, $\sin(0) = 0$, $\cos(0) = 1$, and $\sin^2+\cos^2 = 1$. For small $t$, $1 \ge \cos(t) \ge 0 $ so $\sin(x) =\int_0^x \cos(t)dt \le x $. Therefore $1-\cos(x) =\int_0^x \sin(t) dt \le \int_0^x t dt = \frac{x^2}{2} $ so $\cos(x) \ge 1-\frac{x^2}{2} $. Therefore $\sin(x) =\int_0^x \cos(t)dt \ge\int_0^x (1-\frac{t^2}{2})dt =x-\frac{x^3}{6} $. So we already have $x-\frac{x^3}{6} \le \sin(x) \le x $. This is actually enough for what you want. Doing this again, $1-\cos(x) =\int_0^x \sin(t) dt \ge \int_0^x (t-\frac{t^3}{6}) dt = \frac{x^2}{2}-\frac{x^4}{24} $ so $\cos(x) \le 1-\frac{x^2}{2}+\frac{x^4}{24} $. Doing this one more time, $\sin(x) =\int_0^x \cos(t)dt \le\int_0^x (1-\frac{t^2}{2}+\frac{t^4}{24})dt =x-\frac{x^3}{6}+\frac{x^5}{120} $. By induction, we can get the power series for $\sin$ and $\cos$. Note: This is not original. I first saw this in "100 Great Problems of Elementary Mathematics" by Heinrich Dörrie (less than \$15 from Dover). Get it.
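Returning to the original question about the rate of convergence: dividing the two-sided bound $x-\frac{x^3}{6} \le \sin(x) \le x$ by $x$ gives, for small $h>0$, $$1-\frac{h^2}{6}\;\le\;\frac{\sin h}{h}\;\le\;1, \qquad\text{so}\qquad \left|\frac{\sin h}{h}-1\right|\le\frac{h^2}{6}=O(h^2),$$ and the same holds for $h<0$ since $\frac{\sin h}{h}$ is an even function.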
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Prove that the series is less than the square of a series. Prove that $(\sum_{i=1}^n a_i)^2 \leq n \sum_{i=1}^na_i^2$ for $a_1, ..., a_n$. Hint: You may want to use the triangle inequality or Cauchy–Schwarz inequality. I'm trying to prove this proposition. Here's what I've done so far... $(\sum_{i=1}^n a_i)^2$ = $\sum_{i=1}^n\sum_{j=1}^n a_ia_j$ Then for each $i$, $\sum_{j=1}^n a_ia_j = \langle a_i, a\rangle$ (Dot product) By Cauchy–Schwarz, $\langle a_i, a_j\rangle \leq ||a_i||\:||a|| $ To me it looks like this is a "nested for loop" which means there will be an $n$ number of dot products produced, or $n \cdot ||a|| \: ||a||$ But I'm stuck and don't really know what direction this proof is going in, so please help. Thanks.
Hint: Consider the vectors $\underbrace{(1,1,\dots,1)}_{n\;1\text{s}}$ and $\;(a_1, a_2,\dots,a_n)$.
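Unpacking the hint: with $\mathbf u=(1,1,\dots,1)$ and $\mathbf a=(a_1,a_2,\dots,a_n)$, the Cauchy–Schwarz inequality gives $$\left(\sum_{i=1}^n a_i\right)^2 = \langle \mathbf u,\mathbf a\rangle^2 \;\le\; \|\mathbf u\|^2\,\|\mathbf a\|^2 = n\sum_{i=1}^n a_i^2.$$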
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Finding a non-zero vector in Col A Definition: The column space of an $m \times n$ matrix $A$, written as $\operatorname{Col} A$, is the set of all linear combinations of the columns of $A$. If $A = [ a_1 \ldots a_n ]$, then $\operatorname{Col} A = \operatorname{Span}\{a_1,\ldots,a_n\}$. $$A= \begin{bmatrix} 2 & 4 & -2 & 1 \\ -2 & -5 & 7 & 3 \\ 3 & 7 & -8 & 6 \end{bmatrix}$$ Find a nonzero vector in $\operatorname{Col} A$. Solution: It is easy to find a vector in $\operatorname{Col} A$. Any column of $A$ will do. I'm confused about why any column of $A$ works because $A$ in rref form is: $$\begin{bmatrix} 1 & 0 & 9 & 0 \\ 0 & 1 & -5 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$ So wouldn't only columns 1, 2, and 4 work because column 3 is linearly dependent, and thus not a part of the span?
As you stated, the column space is the set of all linear combinations of the columns of $A$. So if the matrix is not the zero matrix, you will be able to find some nonzero vector in $\operatorname{Col} A$ — any nonzero column works, including column 3. Being linearly dependent does not exclude a column from the span: the rref shows that $\operatorname{col}_3 = 9\operatorname{col}_1 - 5\operatorname{col}_2$, so column 3 is itself a linear combination of columns 1 and 2, and hence lies in $\operatorname{Col} A$. In fact, here $\operatorname{Col} A$ is all of $\mathbb{R}^3$, since the rref has pivots in columns 1, 2, and 4.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\int ^{\infty}_0 \frac{\ln x}{x^2 + 2x+ 4}dx$ We have to find the integral $$\int ^{\infty}_0 \frac{\ln x}{x^2 + 2x+ 4}dx$$ I tried the substitution $x=e^t$, but after that I got stuck.
Let $I$ be defined by the integral $$I=\int_0^\infty \frac{\log(x)}{x^2+2x+4}\,dx \tag1$$ and let $J$ be the contour integral $$J=\oint_{C}\frac{\log^2(z)}{z^2+2z+4}\,dz \tag2$$ where the contour $C$ is the classical "key-hole" contour for which the keyhole coincides with the branch cut along the positive real axis. Then, we can write $(2)$ as $$\begin{align} J&=\int_0^R \frac{\log^2(x)}{x^2+2x+4}\,dx-\int_0^R \frac{(\log(x)+i2\pi)^2}{x^2+2x+4}\,dx+\int_{C_R}\frac{\log^2(z)}{z^2+2z+4}\,dz\\\\ &=-i4\pi \int_0^R \frac{\log(x)}{x^2+2x+4}\,dx+4\pi^2\int_0^R\frac{1}{x^2+2x+4}\,dx+\int_{C_R}\frac{\log^2(z)}{z^2+2z+4}\,dz\tag 3 \end{align}$$ As $R\to \infty$ the integral over $C_R$ vanishes and we have from $(1)$ $$\lim_{R\to \infty}J=-i4\pi I +4\pi^2\int_0^\infty\frac{1}{x^2+2x+4}\,dx\tag 4$$ Next, we apply the residue theorem to evaluate the left-hand side of $(4)$. Proceeding we find that $$\begin{align} \lim_{R\to \infty}J&=2\pi i \text{Res}\left(\frac{\log^2(z)}{z^2+2z+4}\,dz, z=-1\pm i\sqrt{3}\right)\\\\ &=2\pi i \left(\frac{(\log(2)+i2\pi/3)^2}{i2\sqrt{3}}+\frac{(\log(2)+i4\pi/3)^2}{-i2\sqrt{3}}\right)\\\\ &=-\frac{i4\pi^2\log(2)}{3\sqrt{3}}+\frac{4\pi^3}{3\sqrt{3}}\tag 5 \end{align}$$ Equating real and imaginary parts of $(3)$ and $(5)$ reveals $$\int_0^\infty \frac{\log(x)}{x^2+2x+4}\,dx =\frac{\pi \log(2)}{3\sqrt{3}}$$ and as a bonus $$\int_0^\infty\frac{1}{x^2+2x+4}\,dx=\frac{\pi}{3\sqrt{3}}$$ And we are done!
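A numerical sanity check of both results (a sketch, assuming SciPy and NumPy are available; quad handles the integrable logarithmic singularity at $0$):

from scipy.integrate import quad
import numpy as np

f = lambda x: np.log(x) / (x**2 + 2*x + 4)   # integrand of I
g = lambda x: 1.0 / (x**2 + 2*x + 4)         # integrand of the bonus integral

print(quad(f, 0, np.inf)[0], np.pi * np.log(2) / (3 * np.sqrt(3)))  # both ~ 0.4191
print(quad(g, 0, np.inf)[0], np.pi / (3 * np.sqrt(3)))              # both ~ 0.6046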
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 0 }
Prove: any smooth map $f:M \rightarrow \mathbb{R}$ can't be one-to-one. Let $M$ be a connected smooth manifold, $\dim M \ge 2$. Prove: any smooth map $f:M \rightarrow \mathbb{R}$ can't be one-to-one.
No continuous map from $M$ to $\mathbb{R}$ can be injective. As $M$ has dimension at least $2$ it contains a subspace $C$ homeomorphic to a circle. Now there is no continuous injective map $g$ from $C$ to $\mathbb{R}$. If there were, then there are $a$ and $b$ on $C$ with $g(a)<g(b)$. Then on each arc of $C$ with endpoints $a$ and $b$ there is a point mapped to $\frac12(g(a)+g(b))$ (intermediate value theorem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Class number of the compositum of a quadratic extension of a cyclotomic field Let $d\in\mathbb{Z}$ be a square-free integer and let $p$ be an odd prime. Let $K = \mathbb{Q}(\sqrt{d})$ and let $\zeta_p$ be a primitive $p$th root of unity. I am interested in knowing the class number of $L = K(\zeta_p)$. 1) I tried to use Sage to compute the class number using the following code for various values of $d$ and $p$. However, it is extremely slow even for small $d$ and $p$.
K = CyclotomicField(p)
L.<a> = K.extension(x^2 - d)
L.class_number()
Is there a more effective way to compute the class number of $L$? 2) The case that I am interested in is when $p$ is inert in $K$, and I actually want the class number of $L$ to be one. Are there any known results regarding this?
All your fields are abelian CM fields. There is a complete determination of all CM fields of class number one. (The case of imaginary quadratic fields is well known, but the other cases are somewhat easier, because they avoid issues concerning Siegel zeros.) For example, $\mathbf{Q}(\zeta_p)$ has class number bigger than one for all primes $p > 19$, so this will also be a necessary restriction in your case. Here is the complete list of primes $p > 2$ and quadratic fields $K$ such that $K(\zeta_p)$ has class number one, with an indication as to whether $p$ is ramified, inert, or split in $K$. * *The case when $K \subset \mathbf{Q}(\zeta_p)$. Then any $p \le 19$ works, where $K = \mathbf{Q}(\sqrt{(-1)^{(p-1)/2} p})$. In this case, $p$ is ramified in $K$. If $K \not\subset \mathbf{Q}(\zeta_p)$ is quadratic, then (by Galois theory) there is a second quadratic field $K'$ contained in $K(\zeta_p)$ which is not in $\mathbf{Q}(\zeta_p)$. Since $p$ is odd, there will be a unique choice which is unramified at $p$, we take that choice below. *The case when $K \not\subset \mathbf{Q}(\zeta_p)$. We again break into subcases depending on $p$. a. $p = 11$, $K = \mathbf{Q}(\sqrt{-3})$, $K(\zeta_{11}) = \mathbf{Q}(\zeta_{33})$. The prime $11$ is inert in $K$. b. $p = 7$, $K = \mathbf{Q}(\sqrt{5})$. The prime $7$ is inert in $K$. c. $p = 5$, then $K = \mathbf{Q}(\sqrt{d})$ for the seven fields with $$d \in \{-7,-3,-2,-1,2,13,17\}.$$ The prime $p$ is inert for $d \in \{-7,-3,-2,2,13,17\}$, i.e. $d \ne - 1$. d. $p = 3$, then $K = \mathbf{Q}(\sqrt{d})$ for the fields with $$d \in \{-163,-67,-43,-19,-11,-2,-1,2,5,17,41,89\}.$$ I'll let you compute the suitable splitting behavior. You can easily reproduce this list by looking at the paper "The determination of the imaginary abelian number fields with class number one" by Ken Yamamura. There is a link below. http://www.ams.org/journals/mcom/1994-62-206/S0025-5718-1994-1218347-3/home.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A continuous a.e orientation-preserving isometry is locally injective? Let $M,N$ be $d$-dimensional Riemannian manifolds. Let $f:M \to N$, and suppose $f$ is continuous, differentiable almost everywhere (a.e ) and that $df$ is an orientation-preserving isometry a.e. Question: Is it true that there exist a ball $B_{\epsilon}(p) \subseteq M$ such that $f|_{B_{\epsilon}(p)}$ is injective?
I want to reply to your comment in https://mathoverflow.net/questions/264873/do-curvature-differences-obstruct-a-e-orientation-preserving-isometries/267163#267163 Question: Assume that $$f :B\rightarrow \mathbb{E}^2$$ is a map such that (1) $B$ is a geodesic ball in $S^2(1)$ of radius $\varepsilon$, (2) $df$ is isometric a.e., (3) $df$ is a.e. orientation-preserving. Then $f$ is not continuous. EXE: Gromov's map $S^2\rightarrow \mathbb{E}^2$ is not orientation-preserving, since it can be viewed as a limit of piecewise distance-preserving maps that are not a.e. orientation-preserving. Proof of Question: Assume that $f$ is continuous. Then, by the exercise below, we conclude that it is volume-preserving. Consider a triangulation $T_i$ of $B_\epsilon (p)$ where each $T_i$ is a 2-dimensional geodesic triangle. Since $f$ is orientation-preserving, the images $f(T_i)$ do not overlap except on a set of measure $0$. Hence there are curves $\gamma_i$ converging to $\partial B_\epsilon(p)$ whose images $f\circ \gamma_i$ converge to $\partial f(B_\epsilon (p))$, with $$\lim_i\ {\rm length}\ \gamma_i={\rm length}\ \partial B_\epsilon (p) \geq \lim_i\ {\rm length}\ f\circ \gamma_i $$ since $f$ is short. Hence the isoperimetric inequality gives a contradiction: $f(B_\epsilon(p))$ would have the same area as the spherical cap but a boundary no longer than the cap's, which the planar isoperimetric inequality forbids. EXE: Consider a continuous map $ f: [0,1]^2\rightarrow \mathbb{E}^2$ and assume that $c(t)=f(t,1)$ is a continuous curve whose image is 2-dimensional. Then the curves $f(t,1-\varepsilon_n)$, where $\varepsilon_n\rightarrow 0$, have arbitrarily large lengths $l_n$ with $\lim_n\ l_n=\infty$. Hence $f$ is not isometric on $[0,1]\times [1-\varepsilon_n,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$\text{Ext}(H,G)$ in the universal coefficient theorem for cohomology In the universal coefficient theorem for cohomology with a chain complex $C$ of free abelian groups having homology groups $H_n(C)$, \begin{eqnarray} 0\rightarrow\text{Ext}(H_{n-1}(C),G)\rightarrow H^n(C,G)\xrightarrow{h}\text{Hom}(H_n(C),G)\rightarrow0, \end{eqnarray} $\text{Ext}(H_{n-1}(C),G)$ can be identified with $\text{Coker}i^*_{n-1}$ in the dualized sequence \begin{eqnarray} 0\leftarrow B_{n-1}^*\xleftarrow{i^*_{n-1}}Z_{n-1}^*\leftarrow H_{n-1}(C)^*\leftarrow0 \end{eqnarray} where $i^*_{n-1}$ is the dualization of the inclusion $B_{n-1}\xrightarrow{i_{n-1}}Z_{n-1}$. However, I cannot see why $\text{Coker}i^*_{n-1}$ (equivalently $\text{Ext}(H_{n-1}(C),G)$) can be nontrivial in the above case, because it seems that every homomorphism in Hom$(B_{n-1},G)$ can be obtained by restricting some homomorphism in Hom$(Z_{n-1},G)$ to $B_{n-1}$, i.e. that $i^*_{n-1}$ is surjective. Could you tell me how to see the obstruction to extending a homomorphism on $B_{n-1}$ to one on $Z_{n-1}$? Thanks very much in advance!
We typically get examples with non-trivial cokernels where there is torsion around. For a simple example, let $C_0$ and $C_1$ both equal $\mathbb{Z}$ with all other groups $C_n$ being zero. Let $d:C_1\to C_0$ be the map taking $m$ to $2m$. Take $G$ to be $\mathbb{Z}$. In this case $Z_0=\mathbb{Z}$ and $B_0=2\mathbb{Z}$. Not every element in $B_0^*=\textrm{Hom}(2\mathbb{Z},\mathbb{Z})$ is the restriction of an element of something in $Z_0^*=\textrm{Hom}(\mathbb{Z},\mathbb{Z})$; for instance the map $2m\mapsto m$ isn't.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Trigonometric identity $2\cos(\alpha)\cos(\beta) = \cos(\alpha + \beta) + \cos(\beta - \alpha)$ in a book. Is it correct? The identity should be $2\cos(\alpha)\cos(\beta) = \cos(\alpha + \beta) + \cos(\alpha - \beta)$, but the book (see attached picture) states it as $2\cos(\alpha)\cos(\beta) = \cos(\alpha + \beta) + \cos(\beta - \alpha)$. What am I missing?
It's correct. $\cos x = \cos(-x) \forall x \in \mathbb R$ which means $\cos(A-B) = \cos (B-A) \forall A,B \in \mathbb R$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2233919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Rate of convergence of fixed-point iteration in higher dimensions Consider the fixed-point iteration process in $\mathbb{R}^n$. Given a sufficiently smooth function $f:\mathbb{R}^n\to\mathbb{R}^n$ and an initial value $x_0\in\mathbb{R}^n$, define the iteration sequence $x_{k+1}=f(x_k)$. Suppose that $$\lim_{k\to\infty}x_k=x^*,$$ then apparently $x^*$ is a fixed point of $f(x)$. I'm familiar with the case in $\mathbb{R}^1$ where the sequence is (generally) linearly convergent with rate $$\lim_{k\to\infty}\frac{|x_{k+1}-x^*|}{|x_k-x^*|}=|f'(x^*)|<1.$$ And I thought the analogy of this constant $|f'(x^*)|$ in $\mathbb{R}^n$ case would be $\|J_f(x^*)\|$, where $J_f(x^*)$ denotes the Jacobian matrix $(\partial f_i(x^*)/\partial x_j)_{n\times n}$ and $\|\cdot\|$ the operator norm induced by vector norm. But the results of my numerical experiments proved me wrong, and I found the following claim on some website: $$\lim_{k\to\infty}\frac{\|x_{k+1}-x^*\|}{\|x_k-x^*\|}=\rho(J_f(x^*))<1,$$ where $\rho$ the spectral radius of a matrix. Indeed the claim fits well my experiment results. It does surprise me that the rate of convergence is independent of the vector norm, but I could not find a proper proof either by myself or by online materials. Any help or link on its proof would be appreciated.
As in many similar situations in higher dimensional spaces, it helps to look at the simplest case where the function can be decoupled. That is, the function requires no interaction between the variables. For two dimensions this is $f(x,y) =(f_1(x),f_2(y))$. For even more simplicity, assume $f(x,y) = (ax,by)$. Now the rate of convergence is obviously dominated by $\max(|a|,|b|)$ which corresponds to the spectral radius.
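To see the claimed norm-independence concretely — and that the operator norm is the wrong quantity — one can iterate the linearization directly (a sketch assuming NumPy; the non-normal matrix is a made-up example):

import numpy as np

J = np.array([[0.5, 10.0],
              [0.0,  0.5]])    # spectral radius 0.5, operator 2-norm > 10

e = np.array([1.0, 1.0])       # error vector e_k, standing in for x_k - x*
for k in range(60):
    e_next = J @ e
    if k > 50:                 # after transients, the ratio approaches rho(J) = 0.5
        print(np.linalg.norm(e_next) / np.linalg.norm(e))
    e = e_next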
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Sum of correlated normal random variables Suppose I have two correlated random variables, that were generated in the following way: \begin{align*} X_1 &\sim \mathcal{N}(0,1)\\ X_1' &\sim \mathcal{N}(0,1)\\ X_2 &= \rho X_1+\sqrt{1-\rho^2}\cdot X_1'\\ Y_1 &= \mu_1+\sigma_1 X_1\\ Y_2 &= \mu_2+\sigma_2 X_2. \end{align*} Now, is it true that $Y_1+Y_2$ (or, more generally $\alpha_1 Y_1+\alpha_2Y_2$) normally distributed? (I can easily calculate the mean and the variance of $\alpha_1 Y_1+\alpha_2Y_2$, but I am not sure about the distribution...) EDIT: just to clarify, $X_1$ and $X_1'$ are independent.
$\alpha_1 Y_1 + \alpha_2 Y_2$ is a linear combination of $X_1$ and $X_1^\prime$ - that is $\alpha_1 Y_1 + \alpha_2 Y_2 = \beta X_1 + \beta^\prime X_1^\prime$ for some $\beta, \beta^\prime$ that are a bit of a pain to calculate. Linear combinations of independent normal random variables are normal; there are several proofs of this (nontrivial, but well-known) fact. So the answer to your question is yes.
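For completeness, the coefficients can be written down explicitly. Substituting the definitions, $$\alpha_1 Y_1+\alpha_2 Y_2 = (\alpha_1\mu_1+\alpha_2\mu_2) + \underbrace{(\alpha_1\sigma_1+\alpha_2\sigma_2\rho)}_{\beta}\,X_1 + \underbrace{\alpha_2\sigma_2\sqrt{1-\rho^2}}_{\beta'}\,X_1',$$ so the combination is normal with mean $\alpha_1\mu_1+\alpha_2\mu_2$ and variance $\beta^2+\beta'^2 = \alpha_1^2\sigma_1^2 + \alpha_2^2\sigma_2^2 + 2\alpha_1\alpha_2\sigma_1\sigma_2\rho$.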
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving the inequality $0\leq \frac{\sqrt{xy}}{1-p}\frac{x^{\frac{1}{p}-1}-y^{\frac{1}{p}-1}}{x^{\frac{1}{p}}-y^{\frac{1}{p}}} \leq 1$ Suppose $p\in(0,1)$. How might one show that \begin{equation}\tag{1} 0\leq \frac{\sqrt{xy}}{1-p}\frac{x^{\frac{1}{p}-1}-y^{\frac{1}{p}-1}}{x^{\frac{1}{p}}-y^{\frac{1}{p}}} \leq 1 \end{equation} for all $x,y\in[0,1]$? It is clearly non-negative, so the hard part is to show that it is never greater than 1. I was hoping to use a technique similar to the one to prove that $$ 0\leq \sqrt{xy}\frac{\log x - \log y}{x-y}\leq 1 $$ for all $x,y\in[0,1]$. We can use an integral representation and see that \begin{align*} \sqrt{xy}\frac{\log x - \log y}{x-y} &= \int_{0}^{\infty} \frac{\sqrt{xy}}{(x+t)(y+t)}dt\\ &\leq \int_{0}^{\infty} \frac{\sqrt{xy}}{(\sqrt{xy}+t)^2}dt\\ & = 1. \end{align*} Is there a suitable integral representation that can prove (1)?
I've figured out the correct integral representation to use here. For $a\in(-1,1)$, consider the following integral representations: \begin{align*} \frac{x^a-y^a}{x-y} &= \frac{\sin(a\pi)}{\pi}\int_{0}^{\infty}\frac{t^a}{(x+t)(y+t)}dt\\ \text{and}\qquad ax^{a-1} &= \frac{\sin(a\pi)}{\pi}\int_{0}^{\infty}\frac{t^a}{(x+t)^2}dt. \end{align*} Similar to the example in the original post, we have \begin{align*} \frac{1}{a}\frac{x^a-y^a}{x-y} &\leq \frac{\sin(a\pi)}{a\pi}\int_{0}^{\infty}\frac{t^a}{(\sqrt{xy}+t)^2}dt\\ & = (\sqrt{xy})^{a-1}. \end{align*} Thus, if we let $a=1-p$, we have \begin{align*} \frac{1}{1-p}\frac{x^{\frac{1}{p}-1}-y^{\frac{1}{p}-1}}{x^{\frac{1}{p}}-y^{\frac{1}{p}}} = \frac{1}{1-p}\frac{x^{\frac{1-p}{p}}-y^{\frac{1-p}{p}}}{x^{\frac{1}{p}}-y^{\frac{1}{p}}} &= \frac{1}{a}\frac{x^{\frac{a}{p}}-y^{\frac{a}{p}}}{x^{\frac{1}{p}}-y^{\frac{1}{p}}}\\ &\leq \left(\sqrt{x^{\frac{1}{p}}y^{\frac{1}{p}}}\right)^{a-1} \\ & = \left(\sqrt{x^{\frac{1}{p}}y^{\frac{1}{p}}}\right)^{-p}\\ &=\frac{1}{\sqrt{xy}} \end{align*} which proves the desired result. Hence, even though I only originally conjectured it for $p\in(0,1)$, the claim holds for $p\in(1,2)$ as well!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that the range of $f$ is all of $\mathbb{R}$. Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a continuous function such that $|f (x)−f (y)| \geqslant |x −y|$, for all real $x$ and $y$. I need to prove that the range of $f$ is all of $\mathbb{R}$. I tried to solve the inequality to get a general form for the functions, but it just ended up in a mess.
Well, let's think. Is it possible that $f(x)$ is bounded above or below? Probably not, as $|f(x)- f(y)| \ge |x-y|$ and $|x-y|$ can be arbitrarily large. So Lemma 1: $f(x)$ is unbounded. (We'll prove that.) If $f$ is unbounded, does it have to take on every value? This seems reasonable as a variation of the Intermediate Value Theorem. After all, if $f$ is continuous and $f(x) < f(y)$ then for all $c \in (f(x),f(y))$ there is a $d$ between $x$ and $y$ so that $f(d) = c$. So wouldn't it be reasonable that for all $c$ with $\inf f < c < \sup f$ there is a $d$ so that $f(d) =c$? And that if $f$ is continuous and unbounded above and below, then for all $c$ there is a $d$ so that $f(d) = c$? So Lemma 2: if $f$ is a continuous real-valued function that is unbounded above and below, then for all $y \in \mathbb R$ there is an $x$ so that $f(x) = y$. Proof of Lemma 1: If $|f(x)| \le M$ for all $x$ then $|f(x) - f(y)| \le |f(x)| + |f(y)| \le 2M$ for all $x,y$. So $2M \ge |f(x) - f(y)| \ge |x-y|$ for all $x,y$. This means that for all $x \in \mathbb R$, $|x| = | x-0| \le 2M$, so $\mathbb R$ is bounded. This is false, so $f$ is not bounded. (In fact $f$ is unbounded above and below: the hypothesis forces $f$ to be injective, hence strictly monotone, and $|f(x)-f(0)|\ge|x|$ then pushes $f(x)$ to $+\infty$ in one direction and to $-\infty$ in the other.) Proof of Lemma 2: $f$ is unbounded above and below, so for any $y \in \mathbb R$ there exists an $x_0$ so that $f(x_0) > y$ and an $x_1$ so that $f(x_1) < y$. $f$ is continuous, so by the Intermediate Value Theorem there is a $d$ so that $f(d) = y$. And that is it... it would seem. $\mathbb R \subset f(\mathbb R) = \operatorname{range} f \subset \mathbb R$, so the range of $f$ is all of $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Are there any continuous “totally rational” functions besides piecewise first-order polynomial or rational functions? Let $I \subseteq \mathbb{R}$ be an interval and $f: I \to \mathbb{R}$ a continuous function. We’ll say that $f$ is totally rational if the following propositions are true for any $x\in I$: * *If $x \in \mathbb{Q}$ then $f(x) \in \mathbb{Q}$ *If $f(x) \in \mathbb{Q}$ then $x \in \mathbb{Q}$ A simple example of such a function is the identity function $f(x)=x$. More generally any function of the form $f(x)=ax + b$ with $a,b\in \mathbb{Q}$ will do. Another class of functions that are totally rational are those of the form $$f(x)=\frac{ax + b}{cx + d}\qquad \text{with}\ a,b,c,d\in \mathbb{Q} \ \text{and}\ x\neq-\frac{d}{c}.$$ Besides functions of these kinds (and piecewise combinations thereof) I cannot find any other examples of such functions. It is easy to see, for instance, that any higher-order polynomial or rational function will fail condition (2). But do other totally rational functions exist?
Here's a different type of example with the property: $f(n + 0.a_1 a_2 a_3 \dots) = 0.0 a_1 0 a_2 0 a_3 \dots$ or $f(n + \sum_{i \ge 1}{a_i \cdot 10^{-i}}) = \sum_{i \ge 1}{a_i \cdot 100^{-i}}$ In other words, the function takes the decimal expansion of the fractional part and inserts a $0$ between every digit. Or it writes it in base $10$ and reads it again in base $100$. The function maps numbers with eventually periodic expansions (rational numbers) to numbers with eventually periodic expansions, and the converse is also true. It is also continuous at every point whose decimal expansion does not terminate; at a terminating decimal the two possible expansions give different images, so whichever expansion convention is fixed, the map has a one-sided jump there.
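For a concrete instance of the rational-to-rational direction: $$f\!\left(\tfrac13\right)=f(0.333\ldots)=0.030303\ldots=\frac{3}{99}=\frac{1}{33}.$$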
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Relation Between Eigenvalues of a Matrix and its Real Part Let $A$ be an $n \times n$ complex matrix. The real part of $A$ is $\frac{A^H + A}{2}$. What is the relation between the eigenvalues of $A$ and $\frac{A^H + A}{2}$? I know that the eigenvalues of $A^H$ are the complex conjugates of $A$'s eigenvalues. In fact there are some inequalities. We know that the eigenvalues of Hermitian matrices are real. So $A_r=\frac{1}{2}(A+A^H)$, $A_i=\frac{1}{2\mathrm{i}}(A-A^H)$ are Hermitian matrices. Write their eigenvalues as $x_1\le x_2\le\cdots\le x_n$; $y_1\le y_2\le\cdots\le y_n$. Our homework is that every eigenvalue $a+b\mathrm{i}$ of $A$ satisfies $x_1\le a\le x_n$, $y_1 \le b \le y_n$. How can I prove it?
If $A$ is normal, the eigenvalues of $\frac{A + A^{*}}{2}$ are the real parts of the eigenvalues of $A$. If $A$ is not normal, there isn't any nice relation. For example, if $$ A = \begin{pmatrix} 0 & 2\varepsilon \\ 0 & 0 \end{pmatrix} $$ then the eigenvalues of $A$ are zero but the eigenvalues of $$ \frac{A + A^{*}}{2} = \begin{pmatrix} 0 & \varepsilon \\ \overline{\varepsilon} & 0 \end{pmatrix} $$ are $\pm |\varepsilon|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help on Algebra Problem I need help on this problem that 8/9 Grade Math Teacher Assigned me (I'm in 7th Grade). Here's the problem - Today is Latisha's birthday. Fifty friends have thrown her a surprise party. Before the party, her friends got together and decided to secretly hide several presents in 50 separate boxes numbered 1 to 50. Latisha is now about to open her gifts, which are arranged in a row in order. Her friends explain that if she follows the instructions below, she will discover which boxes hold a gift. Latisha wants you to help her figure out which boxes hold presents without actually carrying out their instructions. Their Instructions: First, she should go down the line and open every box. Then, starting with box #2, she should close every other box as she goes down the row. Starting with box #3, she should change every third box (she opens the box if it is closed and closes it if the box is open) Starting with box #4, she should change every fourth box. Starting with box #5, she should change every fifth box. Continue this process through box #50. Basically, the boxes at the end that are open are the ones with presents in them. I think that you would have to use a factorial-related-strategy, but I'm not sure. Can anyone help?
A box is flipped once for every divisor its number has and it starts from closed, so it is closed if it has an even number of divisors and open if it has an odd number of divisors. So this is progress, but it would be good to simplify the answer further. So we need to figure out how to tell whether a number has an even or odd number of divisors other than by counting them. The key here is to note that usually, divisors come in pairs. Say you're looking for divisors of $10$. Then you have $1$ and $10,$ and $2$ and $5$... two pairs of divisors for a total of 4. So it seems like this would imply that usually there are an even number of divisors. But there is one situation where the above logic fails and you can have an odd number of divisors.
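A direct simulation confirms where this reasoning is headed (a quick sketch in plain Python — fair warning, it spoils the punchline):

n = 50
boxes = [False] * (n + 1)          # boxes[i]: is box i open? (index 0 unused)
for step in range(1, n + 1):       # step 1 opens every box; step k flips every k-th box
    for i in range(step, n + 1, step):
        boxes[i] = not boxes[i]

print([i for i in range(1, n + 1) if boxes[i]])
# [1, 4, 9, 16, 25, 36, 49] -- exactly the perfect squares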
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Student's Age in Fictional Stats Class Problem (Probability) Problem: In a fictional stats class, 40% of students are female, and the rest are male. Of the female students, 30% are less than 20 years old and 90% are less than 30 years old. Of the male students, half are less than 20 years old and 70% are less than 30 years old. (a) Make a contingency table to describe these two variables (b) Find the probability that a randomly selected student is 30 years or older (c) If a student is 20 years or older, what is the probability that the student is female? (d) If a student is less than 30 years old, what is the probability that the student is 20 years or older? My Thoughts: (b) P(30 years or older) = 1 - 0.78 = 0.22 (c) What I first did was find P(S2 given 'not A1'), but the answer doesn't make sense because the denominator ended up being smaller than the numerator. (d) Do I solve this problem by doing 'not 20 years'?
Let's follow @BGM's suggestion in the comments. Let $F$ denote female; let $M$ denote male; let $A$ denote age. Since $40\%$ of the students are female and $30\%$ of them are less than $20$ years old, the probability that a student is female and less than $20$ years old is $$P(F~\cap A < 20) = P(F)P(A < 20 \mid F) = 0.40 \cdot 0.30 = 0.12$$ Since $90\%$ of the female students are less than $30$ years old, the probability that a student is female and less than $30$ years old is $$P(F~\cap A < 30) = P(F)P(A < 30 \mid F) = 0.40 \cdot 0.90 = 0.36$$ The probability that a student is female, at least $20$ years old, and less than $30$ years old can be found by subtracting the probability that she is less than $20$ years old from the probability that she is less than $30$ years old, which yields $$P(F~\cap 20 \leq A < 30) = P(F~\cap A < 30) - P(F~\cap A < 20) = 0.36 - 0.12 = 0.24$$ Finally, the probability that a student is female and at least $30$ years old is found by subtracting the probability that a student is female and less than $30$ years old from the probability that a student is female, which yields $$P(F~\cap A \geq 30) = P(F) - P(F~\cap A < 30) = 0.40 - 0.36 = 0.04$$ By using similar reasoning, we can fill in the table for the male students. $$ \begin{array}{l | c | c | c | c} & A < 20 & 20 \leq A < 30 & A \geq 30 & Total\\ \hline F & 0.12 & 0.24 & 0.04 & 0.40\\ M & 0.30 & 0.12 & 0.18 & 0.60\\ \hline Total & 0.42 & 0.36 & 0.22 & 1 \end{array} $$ The probability that a student is at least $30$ years old is stated in the contingency table. To find the probability that a student who is at least $20$ years old is female, divide the probability that a student is female and at least $20$ years old by the probability that a student is at least $20$ years old, both of which can be found by adding the appropriate cells in the table. The probability that a student who is less than $30$ years old is at least $20$ years old can be found by subtracting the probability that the student is less than $20$ years old from the probability that the student is less than $30$ years old, and then dividing by the latter. To find the probability that a student is less than $30$ years old, you can subtract the probability that a student is at least $30$ years old from $1$.
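Carrying those instructions out with the table's numbers: $$\text{(b)}\; P(A\ge 30)=0.22,\qquad \text{(c)}\; P(F\mid A\ge 20)=\frac{0.24+0.04}{0.36+0.22}=\frac{0.28}{0.58}\approx 0.483,$$ $$\text{(d)}\; P(A\ge 20\mid A<30)=\frac{0.36}{0.42+0.36}=\frac{0.36}{0.78}\approx 0.462.$$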
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding E((X-2)²) This is a practice quiz as I have a test in a few days. I've already completed the first and second questions and my answers are: (1) h = 0.4 (2) k = 30 I'm struggling on question 3, I know how to find E(X) using the x•p(x) method but how do I find E((X-2)²). I tried (X-2)²•p(x) but I'm not sure that it's the correct answer. Any help is greatly appreciated!
$\begin{align}E[(X-2)^2] &= E[X^2-4X+4]\\ &= E[X^2]-4E[X]+E[4]\\ &= \sum_xx^2p(x)-4\sum_xxp(x)+4\\ &= 93.8 - 4\times4+4\\ &= 81.8 \end{align}$ On a side note: $E[g(X)]=\displaystyle{\sum_x}g(x)p(x)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2234908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
union of proper subgroup is proper? Question: Prove that a finite group is the union of proper subgroups IFF the group is not cyclic. Let G be a finite group. Suppose G is the union of proper subgroups $b_{i}$. This means that there is an element in G that is not in $b_{1}$. Iterating this reasoning, we see that there is at least one element in G that is not in the union of the proper subgroups $b_{i}$. Then what? This feels like those days where no questions can be solved. Any hint is appreciated. Thanks in advance. Edit: If G were cyclic, then, there exists an element, say $a$, that generates G. But $G=\cup _{i=1}^{n}b_{i}$ implies that $a$ generates $\cup _{i=1}^{n}b_{i}$ too. Hence, $\cup _{i=1}^{n}b_{i}$ is not a proper union of subgroup since every element in G is in $\cup _{i=1}^{n}b_{i}$.
Let $G$ be a group, finite or infinite. Observe that the following statements are equivalent. * *$G$ is the union of some proper subgroups. *$G$ is the union of all of its proper subgroups. *Each element of $G$ belongs to some proper subgroup of $G.$ *For each element $g\in G,$ $\langle g\rangle$ is a proper subgroup of $G.$ *There is no element $g\in G$ such that $\langle g\rangle=G.$ *$G$ is not cyclic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $X$ is uniform on $(0,1)$ how can I prove $2X$ is uniform on $(0,2)$? If $X$ is uniform on $(0,1)$ how can I prove $2X$ is uniform on $(0,2)$? I am struggling with this question and similar but harder variants. It makes a lot of sense to me intuitively but I am unfamiliar with rigorous arguments that are needed to prove it. I honestly don't know where to begin; do I need to look at the density function of $2X$ and show it is $1/2$ on $(0,2)$ and $0$ elsewhere or perhaps I need to look at the distribution formula maybe? Could anyone help me with this example but also try to motivate the steps so I can appreciate the technique and hopefully be able to see how I can adapt to harder but similar problems.
What you describe is one way; here is another approach, though the two are similar. The distribution function $F_{2X}(x)=P(2X\le x)=P(X\le \frac{1}{2}x)=F_X(\frac{1}{2}x)$, and we can start from here.
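Continuing from there: for $x\in(0,2)$ we have $\frac12 x\in(0,1)$, so $$F_{2X}(x)=F_X\!\left(\tfrac x2\right)=\frac x2, \qquad f_{2X}(x)=F_{2X}'(x)=\frac12 \quad\text{for } x\in(0,2),$$ while $F_{2X}$ is $0$ below $0$ and $1$ above $2$ — exactly the uniform distribution on $(0,2)$.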
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solve $\int\frac{x^4}{1-x^4}dx$ Question: Solve $\int\frac{x^4}{1-x^4}dx.$ My attempt: $$\int\frac{x^4}{1-x^4}dx = \int\frac{-(1-x^4)+1}{1-x^4}dx = \int \left(-1 + \frac{1}{1-x^4}\right)dx$$ To integrate $\int\frac{1}{1-x^4}dx,$ I apply the substitution $x^2=\sin\theta.$ Then we have $2x \frac{dx}{d\theta} = \cos \theta,$ which implies that $\frac{dx}{d\theta}=\frac{\cos \theta}{2\sqrt{\sin \theta}}.$ So we have $\int \frac{1}{1-x^4}dx=\int\frac{1}{\cos^2\theta} \cdot \frac{\cos \theta}{2\sqrt{\sin \theta}} d\theta = \int\frac{1}{2\cos\theta \sqrt{\sin\theta}}d\theta.$ Then I got stuck here. Any hint would be appreciated.
By doing long division, you'll get $$-\int \left(-\frac{1}{2(x^2+1)}-\frac{1}{4(x+1)}+\frac{1}{4(x-1)}+1\right) dx$$
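Integrating term by term then gives $$\int\frac{x^4}{1-x^4}\,dx = -x+\frac14\ln\left|\frac{x+1}{x-1}\right|+\frac12\arctan x + C.$$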
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Find the constant term in the expansion of $(x^2+1)(x+\frac{1}{x})^{10}$ I can't solve this problem. How do I solve it? The problem is "Find the constant term in the expansion of $(x^{2}+1)\left(x+\frac{1}{x}\right)^{10}$"
$f(x)=(x^2+1)(x+\dfrac{1}{x})^{10}$ We can rewrite $f(x)$ like below: $f(x) = x^2(x+\dfrac{1}{x})^{10} + (x+\dfrac{1}{x})^{10}$ In the first term, the power of $x$ in the parenthesis must be $-2$, so that when it is multiplied by $x^2$ the power of $x$ becomes $0$. And we know that: $(x+\dfrac{1}{x})^{10}=\sum_{k=0}^{10}\binom{10}{k}x^k(\dfrac{1}{x})^{10-k}$ So $k+k-10=-2\Rightarrow k=4$. So the coefficient of that term is $\binom{10}{4}$. On the other hand, the coefficient of the constant term of the second term of $f(x)$, which is just $(x+\dfrac{1}{x})^{10}$, is when $k=5$. So the coefficient would be $\binom{10}{5}$ So the answer is: $\binom{10}{4}+\binom{10}{5}$
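Numerically, $$\binom{10}{4}+\binom{10}{5} = 210+252 = 462.$$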
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Series convergence test $$\sum_{n=1}^{\infty} \frac{1}{(\log{n})^{(\log n)}}$$ I tried using Cauchy's condensation test: $$\sum_{n=1}^{\infty} 2^n\frac{1}{(\log{2^n})^{(\log 2^n)}}$$ Assume that the log is of base 2: $$\sum_{n=1}^{\infty} 2^n\frac{1}{n^{n}}$$ $$\sum_{n=1}^{\infty} \left(\frac{2}{n}\right)^n$$ And now I'm stuck. Thanks in Advance.
The series $\sum_{n=1}^\infty(\frac{2}{n})^n$ converges, since $(\frac{2}{n})^n\leq(\frac{1}{2})^n$ for $n\geq 4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Jacobi fields are linearly independent if and only if... If the dimension of the Riemannian manifold $M$ is $n$, there exist exactly $n$ linearly independent Jacobi fields along the geodesic $\gamma : [0,a] \to M$, which are zero at $\gamma(0)$. This follows from the fact, easily checked, that the Jacobi fields $J_1,\ldots,J_k$ with $J_i(0)=0$ are linearly independent if and only if $J_1'(0),\ldots,J_k'(0)$ are linearly independent. (Riemannian geometry, Manfredo do Carmo, page 116) I tried to "prove" first the reverse direction: If $J_1'(0),\ldots,J_k'(0)$ are linearly independent, then $\sum_i c_i J_i'(0)=0 \iff c_i=0$. Applying the limit definition and $J_i(0)=0$, we have that $\lim_{t \to 0^+} \frac{\sum_i c_iJ_i(t)}{t}=0$. Then I used the epsilon-delta definition of the limit: $\forall \epsilon > 0, \exists \delta > 0$ such that $|\frac{\sum_i c_i J_i(t)}t-0|<\epsilon$. Well, $t$ is defined only on $[0,a]$, so $\frac 1t \ge \frac 1a$, and so $\frac{|\sum_i c_i J_i(t)|}a\le|\frac{\sum_i c_i J_i(t)}t-0|<\epsilon$, and so $|\sum_i c_i J_i(t)| < \epsilon a$. Since the inequality we just deduced holds true for all $\epsilon > 0$, we must have that $\sum_i c_i J_i(t) = 0$ if $0 < t < \delta$. Since $c_i=0$, it follows that $\sum_i c_i J_i(t) = 0$ for all $0 \le t \le a$. I don't feel good about this alleged proof; I know there are some statements that don't follow (although they seemed like they would to me). Furthermore, how would I also go about proving the other direction? Please do let me know if there is a better proof that I should follow.
Here is an alternative proof. We know the explicit form of a Jacobi field $J$ with $J(0)=0$; see Corollary 2.5 of Chapter 5 of do Carmo. Now use this explicit formula together with the fact that the derivative is a linear map. At $t=0$ the derivatives $J_i'(0)$ are linearly independent by assumption; at points $t\neq 0$ we can therefore use the formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to prove that $\sin\left(\frac{\pi}{n}\right)$ is decreasing, $\forall n\in\mathbb{N}$ (series test) How to prove that $\sin\left(\dfrac{\pi}{n}\right)$ is decreasing, $\forall n\in\mathbb{N}$. My original question was to determine the convergence of $\sum (-1)^{n+1}\sin\left(\dfrac{\pi}{n}\right).$ I showed that the absolute value does not converge, so it does not converge absolutely. I now need to check for conditional convergence. I want to solve the series using the alternating series test. I already showed that $b_n\to\ 0$, as $n\to\infty$. Now I need to show $b_n$ decreasing. I found the derivative, which is $-\dfrac{\pi \cos \left(\frac{\pi }{n}\right)}{n^2}.$ The problem is that $\cos$ is sometimes negative, and I have a negative sign in front of the derivative, which means that the derivative is sometimes positive. So it is not decreasing for all $n\in\mathbb{N}$, but the answer says converges conditionally? How?
One may observe that $$ x \mapsto \sin x \quad \text{is increasing over} \quad \left[0,\frac \pi2\right] $$ and that $$ x \mapsto \frac \pi x \quad \text{is decreasing over} \quad \left[1,\infty\right) $$ giving that $ \sin \circ \:\frac \pi x$ is decreasing over $\left(2,\infty\right)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2235869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
simultaneous equation, solve for x & y I'm stuck solving what appears to be a simple simultaneous equation. A point in the right direction would be appreciated. Solve the simultaneous equations for x and y: $y=x^{2}+7x-11$, $y=x-1$ My workings: $0=x^{2}+6x-10$ $10=x^{2}+6x$ $10/x=x+6$ From here I go around in circles trying to solve for $x$. I'm sure I've missed something basic. Thank you in advance.
Once you have $x^2+6x-10=0$, you have a standard quadratic equation. Solve it.
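Concretely, the quadratic formula (rather than the $10/x=x+6$ detour, which only obscures things) gives $$x=\frac{-6\pm\sqrt{36+40}}{2}=-3\pm\sqrt{19},\qquad y=x-1=-4\pm\sqrt{19}.$$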
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A coin is tossed 3 times. What is the probability of getting 3 heads or at least 1 head? * *A coin is tossed $3$ times. Let $A=[3 \text{ heads occur}]$ and $B = [\text{at least 1 head occurs}]$. What is $P(A \cup B)$? This is a SAT MATH 2 question from Barron's book. The answer is $\frac{7}{8}$ but I do not understand why. *If a coin is flipped and one die is thrown, what is the probability of getting a head or a $4$? For this one, I tried doing $\frac{1}{2} + \frac{1}{6} =\frac{2}{3}$, $\frac{1}{2}$ being the probability of getting a head and $\frac{1}{6}$ being the probability of getting a $4$. Since the problem asks for either-or, I added the two. But the answer is $\frac{7}{12}$. Can anyone please explain how to arrive at the answers? Thank you!
This is a common mistake. You likely memorized $$P(A\cup B) = P(A)+P(B)$$ but this is only true if $A$ and $B$ are disjoint: $A\cap B = \varnothing$. Instead, we have by inclusion-exclusion $$P(A\cup B) = P(A)+P(B)-P(A\cap B).$$ Also recall that $$P(AB) = P(A)P(B)$$ if $A$ and $B$ are independent. * *Notice that $A\cap B = A$ and so $$P(A\cup B) = P(A)+P(B)-P(A) = P(B).$$ Use the complement, $$P(B) = 1-P(\bar B) = 1-P(\text{No heads}) = 1-\left(\frac{1}{2}\right)^3 = \frac{7}{8}.$$ *Let $A$ be the event that you flip a head and let $B$ be the event that you land a four. Assume they are independent. Then \begin{align*} P(A\cup B) &= P(A)+P(B)-P(AB) \\ &= P(A)+P(B)-P(A)P(B) \\ &= \frac{1}{2}+ \frac16-\frac{1}{2}\cdot\frac{1}{6} \\ &= \frac{7}{12}. \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why do the singular values have to appear in descending order across the diagonal matrix? This question is about the Singular Value Decomposition. Given an arbitrary matrix $A$ $$A \in \mathbb{R}^{m\times n},$$ its reduced SVD form is $$A = U\Sigma V^T,$$ where $\Sigma$ is the diagonal matrix containing scaling factors across its diagonal: $$ \Sigma = \left( \begin{array}{cccc} \sigma_1 & & & \\ & \sigma_2 & & \\ & & \ddots & \\ & & & \sigma_n \\ \end{array} \right)$$ Now, it is said that the singular values across the diagonal are such that $$\sigma_1 \geq \sigma_2 \geq \sigma_3 \geq \cdots \geq \sigma_n \geq 0. $$ Why do these values necessarily have to be in descending order across the diagonal? I read that it's a convention to write them so, but there seems to be more to the order than just convention. If this order is changed, we will get a totally different matrix instead of $A$. So the order is important. Or is it that there is always guaranteed to be one unique solution for the SVD in which the singular values across $\Sigma$ are in descending order?
It is just convention. If you change the order, and permute the columns of $U$ and $V$ correspondingly, you do get the same matrix $A$.
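To see this concretely, here is a small numpy check (my addition; the matrix and the permutation are arbitrary): permuting the singular values together with the corresponding columns of $U$ and rows of $V^T$ reconstructs the same $A$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# Reduced SVD; numpy returns the singular values in descending order.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Scramble the order of the singular values, and permute the columns
# of U and the rows of V^T in exactly the same way.
perm = [2, 0, 1]
A_rebuilt = U[:, perm] @ np.diag(s[perm]) @ Vt[perm, :]

print(np.allclose(A, A_rebuilt))  # True: still the same matrix A
```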
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is this definition of a complete bipartite graph correct? The book I am using defined the complete bipartite graph $K_{m,n} = \overline{K}_m + \overline{K}_n$. Is this correct? I am confused, since the complement of a complete graph is an empty graph. How is the sum of two empty graphs a complete bipartite graph, if we follow the definition?
Some authors use $G+H$ to indicate the graph join, which is a copy of $G$ and a copy of $H$ together with every edge between $G$ and $H$. This is IMO unfortunate, since $+$ makes more sense as disjoint union. (Authors who use $+$ for join probably use either $G\cup H$ or $G\sqcup H$ for the disjoint union.)
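Spelling the definition out (my elaboration): $\overline{K}_m$ and $\overline{K}_n$ have no edges at all, so their join consists of exactly the $mn$ edges running between the two vertex sets, and two independent vertex classes with all cross edges present is precisely the complete bipartite graph $K_{m,n}$.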
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Triangle - length of the sides - proof a, b and c are the lengths of the sides of a triangle. Prove that $$a^2+b^2 \ge \frac{1}{2}c^2$$ Let $\gamma$ be the angle between sides a and b. then: $$a^2 + b^2 - 2ab\cos(\gamma) = c^2$$ Hence we need to prove that $$a^2+b^2 \ge \frac{1}{2}c^2$$ $$2a^2+2b^2 \ge a^2+b^2 -2ab\cos(\gamma) $$ $$a^2+b^2+2ab\cos(\gamma) \ge0$$ Knowing that $\cos(\gamma) \ge-1$, we get $a^2+b^2+2ab\cos(\gamma) \ge a^2 +b^2 -2ab =(a-b)^2$ I have carried this problem so far but now I am stuck. How should I proceed?
You're already there. Your last line is $$a^2+b^2+2ab\cos(\gamma) \geq (a-b)^2$$ And so you only have to note $(a-b)^2\geq 0$ to see that $$a^2+b^2+2ab\cos(\gamma) \geq 0$$ Note that we can even make this inequality strict ($>$): $\cos(\gamma)=-1$ can't happen, since that would force $\gamma=\pi$ and the triangle would degenerate to a segment, so we could have taken $\cos(\gamma)>-1$ rather than $\cos(\gamma)\ge-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Question on infinite geometric series. Problem- The sum of the first two terms of an infinite geometric series is 18. Also, each term of the series is seven times the sum of all the terms that follow. Find the first term and the common ratio of the series respectively. My approach- Let $a+ar+ar^2+\dots$ be the series. Then $a+ar=18$ and $a=7\frac{1}{1-r}-a$; solving, I get $r = \frac{29}{43}$, but the given answer is $a=16, r=1/8$. Where am I going wrong?
The sum of the first two terms of an infinite geometric series is 18. $$ S(a, r) = \sum_{k=0}^\infty a r^k = \frac{a}{1-r} \\ a + a r = 18 \quad (*) $$ Also, each term of the series is seven times the sum of all the terms that follow. $$ a r^n = 7 \sum_{k=n+1}^\infty a r^k \quad (**) $$ Find the first term and the common ratio of the series respectively. The first term is $a r^0 = a$ and the common ratio is $$ \frac{a r^{n+1}}{a r^n} = r $$ From $(*)$ we infer $a \ne 0$. The instance $n=0$ of $(**)$ is $$ a = 7 \sum_{k=1}^\infty a r^k $$ so $a \ne 0$ implies $r \ne 0$. We rewrite $(**)$ into $$ \frac{1}{7} r^n = \frac{1}{1-r} - \sum_{k=0}^n r^k = \frac{1}{1-r} - \frac{1-r^{n+1}}{1-r} = \frac{r^{n+1}}{1-r} \iff \\ \frac{1}{7} = \frac{r}{1-r} \iff \\ \frac{1}{7} - \frac{1}{7} r = r \iff \\ \frac{1}{7} = \frac{8}{7} r \iff \\ r = \frac{1}{8} $$ We rewrite $(*)$ into $$ 18 = a (1+r) = a \frac{9}{8} \iff \\ a = 18 \frac{8}{9} = 16 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to determine the optimal control law? Given the differential equation $$\dot x = -2x + u$$ determine the optimal control law $u = - kx$ that minimizes the performance index $$J = \int_0^{\infty} x^2 \, \mathrm d t$$ My approach was to find the state feedback $k$. But since the value of $R$ (positive semidefinite Hermitian) is not given, that means $R=0$. How do I determine the optimal control for this system where $R=0$?
Using the state feedback law $u = - \kappa \, x$, where $\kappa$ is to be determined, we obtain $$\dot x = -(\kappa + 2) \, x$$ Integrating, $$x (t) = \exp \left( -(\kappa + 2) t \right) \, x_0$$ where $x_0$ is the initial condition. Hence, $$\int_0^{\infty} \left( x (t) \right)^2 \, \mathrm d t = \cdots = \dfrac{x_0^2}{2 (\kappa + 2)}$$ where the integral converges if $\kappa > -2$. Note that there is no minimum, but $$\lim_{\kappa \to \infty} \dfrac{x_0^2}{2 (\kappa + 2)} = 0$$
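For completeness, here is the elided computation (my filling-in of the "$\cdots$"): since $\left(x(t)\right)^2 = e^{-2(\kappa+2)t}\,x_0^2$, $$\int_0^\infty e^{-2(\kappa+2)t}\,x_0^2\,\mathrm dt = \frac{x_0^2}{2(\kappa+2)},$$ which is finite exactly when $\kappa>-2$, as stated.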
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proof of Lyapunov Matrix Equation I read the book A Linear Systems Primer by Antsaklis, Panos J., and Anthony N. Michel (Vol. 1. Boston: Birkhäuser, 2007) and I am confused by a part of the proof of a theorem about the Lyapunov Matrix Equation. \begin{equation} \dot{x}=Ax\tag{4.22} \end{equation} \begin{equation} C=A^TP+PA\tag{4.25} \end{equation} Theorem 4.22. The equilibrium $x = 0$ of (4.22) is stable if there exists a real,symmetric, and positive definite $n \times n$ matrix $P$ such that the matrix $C$ given in (4.25) is negative semidefinite. Proof. Along any solution $ \phi(t,x_0) \triangleq \phi(t)$ of (4.22) with $\phi(0,x_0)=\phi(0)=x_0$, we have \begin{equation} \phi(t)^TP\phi(t)=x_0^TPx_0+\int_0^t \frac{d}{d\eta}\phi(\eta)^TP\phi(\eta)d\eta\\ =x_0^TPx_0+\int_0^t \phi(\eta)^TC\phi(\eta)d\eta \tag{*} \end{equation} for all $t \ge 0$........ So how to obtain (*)?
This is a variant of the fundamental theorem of calculus i.e. for an absolutely continuous function $f$ on $[0,T]$ we have for $t \in [0,T]$ $$f(t)=f(0)+\int_0^t \frac{d}{dy} f(y) ~dy$$ So in your case for $f(t)=\phi(t)^T P\phi(t)$ we get \begin{align} \phi(t)^T P\phi(t)&=\phi(0)^T P\phi(0)+\int_0^t \frac{d}{dy} \phi(y)^T P\phi(y) ~dy \\ &=x_0^T Px_0+\int_0^t \frac{d}{dy} \phi(y)^T P\phi(y) ~dy \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2236911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many binary words of length n are there in which 0 and 1 occur the same number of times and in which no two 0's are adjacent? I understand that, in order to satisfy the first two conditions (length n, same number of 0's and 1's) all that needs to be done is $$ \frac {n!} { \frac {n} {2}! \times \frac {n} {2}!} $$ but I'm not sure how to account for the third condition (no consecutive 0's).
Not many. If you have too many consecutive 1s, there will be more 1s than 0s in the word. So we can count them: * *If the word starts with a 1, the only option is 1010...1010. *If the word starts with a 0 and ends with a 1, the only option is 0101...0101. *If the word starts and ends with a 0, we need exactly one pair of adjacent 1s inside. This pair can start at any even position except the last one (e.g. 011010, 010110), which gives $n/2-1$ words. So the final answer is $n/2+1$ (for even $n$; for odd $n$ the answer is of course 0).
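For anyone who wants to double-check, a brute-force enumeration (my addition) agrees with the formula $n/2+1$:

```python
from itertools import product

def count_words(n):
    """Words of length n with equally many 0s and 1s and no two adjacent 0s."""
    return sum(
        1
        for w in map("".join, product("01", repeat=n))
        if w.count("0") == w.count("1") and "00" not in w
    )

for n in (2, 4, 6, 8, 10):
    print(n, count_words(n))  # prints n/2 + 1: 2, 3, 4, 5, 6
```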
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding out the numbers under given conditions Let $M$ be a $2$ digit number $ab$, $N$ a $3$ digit number $cde$, and $X=M\times N$ such that $9(X)=abcde$. The question is to find the ratio $\frac NM$. I tried to solve it using trial and error and examined a number of cases but couldn't reach the answer so far. What I have got is that $d,e$ cannot be simultaneously zero and $a+b+c+d+e$ is a multiple of $9$. I know there should be some logic behind this question instead of dwelling on trial and error. Any help shall be highly appreciated. Thanks.
Rewrite $9(X) = abcde$ as $9MN = abcde = 1000ab + cde = 1000M + N$, then divide by $M$ to get $9N = 1000 + \frac{N}{M}$ or $\frac{N}{M} = 9N - 1000$. Notice then that $\frac{N}{M}$ must be a whole number, call it $k$. Replace $N$ with $kM$ to get $9kM = 1000 + k$. Since the left hand side is divisible by $k$, the right hand side must also be divisible by $k$, and $1000 + k \equiv 1000 \equiv 0 \pmod{k}$ implies that $k$ must be a divisor of $1000$. Since we know that $N$ is three-digit and $M$ is two-digit, the possible values of $k$ are $\{2, 4, 5, 8, 10, 20, 25, 40, 50\}$. Notice that we also know that $1000 + k$ is divisible by $9$, due to $9kM = k + 1000$, so that leaves only one possibility, namely $k = 8$. Plugging this back in, we have $9\cdot 8 M = 72M = 1000 + 8 = 1008$, thus $M = \frac{1008}{72} = \boxed{14}$ and $N = kM = 8\cdot 14 = \boxed{112}$, and this is the only solution. Indeed, $9(X) = 9(14\cdot 112) = 14112$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
For which continuous functions $f:\mathbb R\to\mathbb R$ does there exist a discontinuous function $g$ such that $f=g\circ g$? Inspired by a bad approach to a homework problem, I'm wondering for which continuous functions $f:\mathbb R\to\mathbb R$ there exists a discontinuous function $g$ such that $f=g\circ g$. Maybe I'm missing something immediate, but I'd like to show that there exists no continuous function $f$ satisfying this. Just kidding, it's apparently immediate.
For examples of a function $f$ that has no $g$ with $f = g \circ g$ (continuous or not), consider a case where $f$ has exactly one fixed point $p$ and exactly one point $q \ne p$ such that $f(q) = p$. Since $g(p)$ would have to be a fixed point of $f$, we need $g(p) = p$, and since $f(g(q)) = g(f(q)) = p$ we need $g(q) = q$. But then $f(q) = g(g(q)) = q$, contradiction. A simple example of this is $f(x) = x + \dfrac{1-e^x}{1-e^{-1}}$ with $p =0$ and $q=-1$. EDIT: Oops, as Dark Malthorp pointed out we could have $g(q) = p$. OK, suppose in addition there is exactly one $r$ with $f(r) = q$. Since $f(g(r)) = g(f(r)) = g(q)=p$, we must have $g(r) = q$. But then $f(r) = g(g(r)) = g(q) = p$, contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
homomorphism keeps the unit and commutativity? Let $R,S$ be rings and let $\varphi:R\to S$ be a ring homomorphism. Prove or disprove with a counterexample: A. if $R$ is a commutative ring then $S$ is commutative B. if $R$ has a unit then $S$ has a unit. Attempt: A. Take $r_1,r_2\in R$; then $\varphi(r_1\cdot r_2)=\color{blue}{\varphi(r_1)\varphi(r_2)}=\varphi(r_2\cdot r_1)=\color{blue}{\varphi(r_2)\varphi(r_1)}\implies S$ is commutative. B. No: let $\varphi:\mathbb Z\to 2\mathbb Z$; there is a unit in $\mathbb Z$ but not in $2\mathbb Z$
Hints: * *For $A$, this only works for elements in the image of $\varphi$. What if $S$ has more elements than are images of $\varphi$? As an example, consider the map $$ \mathbb{Z}\rightarrow M_{2,2} $$ the map from the integers to $2\times 2$ matrices where $a\mapsto\begin{bmatrix}a&0\\0&a\end{bmatrix}$. *For $B$, what is your map from $\mathbb{Z}\rightarrow 2\mathbb{Z}$? Is it the multiplication by $2$ map? If so $2=\varphi(1)=\varphi(1\cdot 1)=\varphi(1)\cdot\varphi(1)=2\cdot 2=4$. So, the map is not well-defined. What if you consider the zero map $\mathbb{Z}\rightarrow2\mathbb{Z}$? (Or the example of @egreg in the comments above). *As a side note to $B$, consider the map $$ \mathbb{Z}\rightarrow\operatorname{Diag}_{2,2} $$ the map from the integers to $2\times 2$ diagonal matrices where $a\mapsto\begin{bmatrix}a&0\\0&0\end{bmatrix}$. In this case, both $\mathbb{Z}$ and $\operatorname{Diag}_{2,2}$ have identities, but the map does not take the identity of $\mathbb{Z}$ to the identity of $\operatorname{Diag}_{2,2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $\exp(-1/z^2)$ differentiable at $0$? Let $f: \mathbb{C} \rightarrow \mathbb{C}: \begin{cases} \exp(-1/z^2) & z \neq 0 \\ 0 & z=0 \end{cases}$ be a function. Is $f$ differentiable at $0$? Suppose $f$ is differentiable at $0$; then $\lim_{z \rightarrow 0} \frac{f(z)-f(0)}{z-0}=\lim_{z \rightarrow 0} \frac{\exp(-1/z^2)}{z}$ has to exist in $\mathbb{C}$. I'm not sure how to evaluate this limit. Can I use L'Hopital's rule here or does that only work for real functions?
You can try and evaluate the limit where $z \to 0$ along the real line and along the imaginary line. We have $$ \lim_{x \to 0} \frac{e^{-1/x^2}}{x} = 0 $$ while $$ \lim_{x \to 0} \frac{e^{-1/(ix)^2}}{ix} = (-i) \cdot \lim_{x \to 0} \frac{e^{1/x^2}}{x} = \infty. $$ The first limit can be calculated using L'Hopital as a real limit or in any other way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The differential equation with a matrix We have the differential equation $ \dot x = Ax$, $$ A =\begin{pmatrix} 8 & 12 & -2 \\ -3 & -4 & 1 \\ -1 & -2 & 2 \\ \end{pmatrix} $$ * *I calculated the determinant $\det(A-\lambda E) = - (\lambda - 2)^3$, to find the eigenvalues $\lambda $ for the eigenvectors $v$ of our given matrix $A$ from the characteristic polynomial mentioned above. *We have the only eigenvalue: $\lambda _{1,2,3} = 2$ and the eigenvector: $$ v_1 =\begin{pmatrix} -1 \\ 2 \\ 0 \\ \end{pmatrix} $$ *A solution of the differential equations generally looks like $x(t) = Ce^{\lambda _1 t}v_1 + Be^{\lambda _2 t}v_2 $ where $C,B$ are some constants, but I don't know how to get it in my case, because I already know the solution. *The solution is according to my textbook: $$ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \end{pmatrix} = e^{2t} \left \{ A \begin{pmatrix} 2 \\ -1 \\ 0 \\ \end{pmatrix} + B \left [ \begin{pmatrix} 2 \\ -1 \\ -1 \\ \end{pmatrix} + t\begin{pmatrix} 2 \\ -1 \\ 0 \\ \end{pmatrix} \right ] + C \left [ \begin{pmatrix} -1 \\ 1 \\ 2 \\ \end{pmatrix} + \begin{pmatrix} 2 \\ -1 \\ -1 \\ \end{pmatrix}t + \begin{pmatrix} 2 \\ -1 \\ 0 \\ \end{pmatrix}t^2/2 \right ] \right \} $$ -- Where $A,B,C$ are the constants * *I don't understand the part with the constant $C$ where the parameter $t$ appears, or more generally: what to do with just one eigenvalue and one eigenvector of the matrix $A$. Can anyone explain this problem to me, please?
As Moo has explained in his comment to your question, the matrix is deficient and can’t be diagonalized, so you have to go to the Jordan decomposition instead. If your textbook is giving you such a problem to solve, I’m sure that this has been covered somewhere in the preceding material. The link Moo provided has a good explanation of the process (see Example 3 in particular, as he said), and a search on this site turns up this explanation and many others, so I’m not going to go through that here. The Jordan basis that the textbook used is evidently $(2,-1,0)^T$, $(2,-1,-1)^T$ and $(-1,1,2)^T$, so we have the decomposition $$A=PJP^{-1}=\left[\begin{array}{r}2&2&-1\\-1&-1&1\\0&-1&2\end{array}\right]\begin{bmatrix}2&1&0\\0&2&1\\0&0&2\end{bmatrix}\left[\begin{array}{r}2&2&-1\\-1&-1&1\\0&-1&2\end{array}\right]^{-1}.$$ Just as with a diagonalizable matrix, the $P$s cancel in powers of this expression, so $e^{tA}=Pe^{tJ}P^{-1}$. We can write $J$ as the sum of the diagonal matrix $2I$ and the nilpotent matrix $N=\tiny{\begin{bmatrix}0&1&0\\0&0&1\\0&0&0\end{bmatrix}}$, so that $e^{tJ}=e^{t(2I+N)}=e^{2tI}e^{tN}$. The last equality doesn’t hold for matrices in general, but does when they commute, as is the case here. The first exponential is simply $e^{2t}I$, and we can compute the second by using the series expansion of the exponential $$e^{tN}=I+tN+{t^2\over2!}N^2+{t^3\over3!}N^3+\cdots$$ This series is truncated after three terms because, as you can verify for yourself, $N^3=0$. Putting this all together, the solution to the differential equation is $$e^{tA}\mathbf C=e^{2t}P\left(I+tN+\frac12t^2N^2\right)P^{-1}\mathbf C=e^{2t}P\begin{bmatrix}1&t&\frac12t^2\\0&1&t\\0&0&1\end{bmatrix}P^{-1}\mathbf C,$$ where $\mathbf C$ is a vector of constants to be determined by the boundary conditions. Since these constants are arbitrary, we can absorb $P^{-1}$ into them, so this becomes $$e^{2t}\left[\begin{array}{r}2&2&-1\\-1&-1&1\\0&-1&2\end{array}\right]\begin{bmatrix}1&t&\frac12t^2\\0&1&t\\0&0&1\end{bmatrix}\begin{bmatrix}A\\B\\C\end{bmatrix}.$$ Expand this product using the fact that the columns of a matrix product are linear combinations of the left-hand factor’s columns, and you end up with the textbook solution. Although going through a full Jordan decomposition computation builds character, it’s not really necessary to do so in order to compute the exponential of $A$. This matrix can be decomposed directly into the sum of a scalar multiple of the identity and a nilpotent matrix as follows: The Cayley-Hamilton theorem tells us that $N=A-2I$ is nilpotent of order 3, so as above, $$e^{tN}=I+tN+\frac12t^2N^2.$$ Writing $A=2I+N$, the solution to the differential equation is therefore $$e^{2t}\begin{bmatrix}1+6t+t^2&12t+2t^2&-2t\\-3t-\frac12t^2&1-6t-t^2&t\\-t&-2t&1\end{bmatrix}\begin{bmatrix}C_1\\C_2\\C_3\end{bmatrix}.$$ This might not look the same as the book solution, but remember that the constants are arbitrary, so with a bit of fiddling and renaming, the two solutions can be made to look the same.
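If you want to sanity-check the nilpotent-decomposition shortcut numerically, here is a small script (my addition; it just verifies $N^3=0$ and the identity $e^{tA}=e^{2t}\left(I+tN+\tfrac12t^2N^2\right)$ for a sample $t$):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[8.0, 12.0, -2.0],
              [-3.0, -4.0, 1.0],
              [-1.0, -2.0, 2.0]])
N = A - 2 * np.eye(3)             # nilpotent by Cayley-Hamilton: N^3 = 0
print(np.allclose(N @ N @ N, 0))  # True

t = 0.7                           # any test value works
lhs = expm(t * A)                 # matrix exponential, computed numerically
rhs = np.exp(2 * t) * (np.eye(3) + t * N + 0.5 * t**2 * (N @ N))
print(np.allclose(lhs, rhs))      # True: e^{tA} = e^{2t}(I + tN + t^2 N^2/2)
```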
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A Borel-Cantelli lemmas problem Let $X_1$, $X_2$, ... be independent random variables. Show that sup$X_n$ < ∞ almost surely if and only if $\sum_{n=1}^∞$Pr($X_n$ > A) < ∞ for some A > 0. Here is my idea: (Forward direction): Since sup$X_n$ is bounded almost surely, $X_n$ is bounded almost surely for each n. So there exists a positive A such that $X_n$ < A almost surely for each n, i.e. Pr($X_n$ > A)=0 for each n (which is stronger than Pr($X_n$ > A i.o.) = 0). Also, since $X_1$, $X_2$, ... are independent, by Borel-Cantelli lemmas, $\sum_{n=1}^∞$Pr($X_n$ > A) < ∞. However, I failed to show the reverse direction. Given $\sum_{n=1}^∞$Pr($X_n$ > A) < ∞ for some A > 0, by Borel-Cantelli lemmas, I can only get Pr($X_n$ > A i.o.) = 0. But it doesn't necessarily tell us Pr(sup$X_n$ < A) = 1 since it's still possible for finitely many $X_n$ to exceed A, does it? Or doesn't it only mean Pr(limsup$X_n$ < A) = 1? Could anyone please tell me where I got wrong? Thanks in advance!
($X_n : \Omega \to \bar{\mathbb{R}}$). For the other direction, note that $$ \sum_{n\geq 1}\mathbb{P}(X_n > A)<\infty \implies \mathbb{P}\left(\{w : X_n(w) > A, i.o.\}\right) = 0. $$ Now, read this as: for almost every $w$ (i.e. except on a set of measure $0$), $X_n(w) > A$ happens only finitely many times. For instance, suppose that for a particular $w$ the indices that make this happen are $i_1,\dots,i_{N_w}$. For this particular $w$, $$ \sup_n X_n(w) \leq \max\{A,X_{i_1}(w),\dots,X_{i_{N_w}}(w)\} <\infty, $$ hence $\sup_{n}X_n < \infty$ for almost every $w$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that any two ordered fields with the least upper bound property are order isomorphic I will write down the definitions first and then what I have done. Order isomorphic: $A$ and $B$ are ordered integral domains. They are order isomorphic if $\exists$ a bijection $f: A \to B$ such that $$f(x+y) = f(x) + f(y)$$ $$f(xy) = f(x)f(y)$$ $$x < y \Rightarrow f(x) < f(y)$$ Let $A$ and $B$ be our ordered fields with the least upper bound property. Since they are fields, they are also integral domains. Since they are fields, each one has an additive identity and a multiplicative identity. Let these four elements be $0_A, 1_A \in A$ and $0_B, 1_B \in B$. Any field that contains the integers contains the rationals as a subfield (I couldn't prove this). Also, I am assuming that these fields contain the integers, so they contain the rationals as well. Can I set the following function? $f: A \to B$ such that $$f(0_A) = 0_B;\ f(1_A) = 1_B;\ f(q1_A) = q1_B : q \in \mathbb{Q}$$ This function is linear, but I don't see where I can use the least upper bound properties of these two sets. I don't know how to continue. Any help is appreciated! :)
There is another name for an ordered field that satisfies the least upper bound property -- the system of real numbers. So we are trying to construct an order isomorphism between any two systems of real numbers, say $\mathbb{R}_A$ and $\mathbb{R}_B$, in other words, the system of real numbers is unique. To extend the map $f(q1_A)=q1_B$, $q\in\mathbb{Q}$, for $r_A\in\mathbb{R}_A$, let $$S=\{q\in\mathbb{Q}|q1_A<r_A\}.$$ Then define $$f(r_A):=\sup\{q1_B|q\in S\}$$ where the supremum is taken in the system $\mathbb{R}_B$. Then you can try to check that this map $f$ indeed defines an order isomorphism between $\mathbb{R}_A$ and $\mathbb{R}_B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counterexample of pullback lemma. I'm looking for a counterexample of the pullback lemma, i.e., a diagram $$ \require{AMScd} \begin{CD} A @>{}>> B @>>> C\\ @VVV @VVV @VVV\\ D @>>> E @>>> F \end{CD} $$ such that left square and outer square are pullbacks but right square is not a pullback. I have tried hard but I can't find a counterexample. Any hint would be appreciated.
Here's a counterexample that works in (almost) any category: let $p:X\to Y$ be a split epimorphism that is not an isomorphism, with section $s:Y\to X$. Then in the diagram $$\require{AMScd} \begin{CD} Y @>{id_Y}>> Y @>{id_Y}>> Y\\ @V{id_Y}VV @VV{s}V @VV{id_Y}V\\ Y @>>{s}> X @>>_p> Y, \end{CD}$$ the outer rectangle is a pullback since all its sides are identity maps, the left square is a pullback since $s$ is a monomorphism, but the right square cannot be a pullback if $s$ and $p$ are not isomorphisms. Alternatively, you can take any monomorphism $m:A\to B$, and form the diagram $$\require{AMScd} \begin{CD} A @>{id_A}>> A @>{m}>> B\\ @V{id_A}VV @VV{m}V @VV{id_B}V\\ A @>>{m}> B @>>_{id_B}> B. \end{CD}$$ Here the left square is a pullback since $m$ is a mono, the rectangle is a pullback, but the right-hand square is not a pullback if $m$ is not an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2237915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do you find the sum of this alternating series? $$\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)(n+1)!}.$$ I found out from my fellow peers at stack exchange see here, that this series converges from the alternating series test. But how do you find the sum? I know if you use wolfram alpha you get: 0.861528, but my question is what steps you use to achieve it?
Here's a familiar trick for summing such series. This is $f(1)$ where $$f(x)=\sum_{n=0}^\infty\frac{(-1)^nx^{2n+1}}{(2n+1)(n+1)!}.$$ Then $$f'(x)=\sum_{n=0}^\infty\frac{(-1)^nx^{2n}}{(n+1)!}$$ which you can write in closed form. Integrate to get $f(x)$ etc.
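Carrying the trick through (my continuation, not part of the original answer): shifting the index in $\sum_{n\ge0}(-1)^nx^{2n+2}/(n+1)! = 1-e^{-x^2}$ gives the closed form $$f'(x)=\frac{1-e^{-x^2}}{x^2},$$ and since $f(0)=0$, integration by parts yields $$f(1)=\int_0^1\frac{1-e^{-x^2}}{x^2}\,dx = e^{-1}-1+\sqrt{\pi}\,\operatorname{erf}(1)\approx 0.861528,$$ which matches the Wolfram Alpha value quoted in the question.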
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Help calculating a math limit Can anyone help with this limit? \begin{equation*} \lim_{x \rightarrow 4} \frac{16\sqrt{x-\sqrt{x}}-3\sqrt{2}x-4\sqrt{2}}{16(x-4)^2} \end{equation*} I've tried a variable change of \begin{equation*} y=\sqrt{x} \end{equation*} but this didn't help.
Hint: Set $x=4+h\;(h\to 0)$ and apply repeatedly Taylor's formula at order $2$.
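In case the hint is too terse, here is the computation sketched out (my addition): with $x=4+h$, $$\sqrt x = 2+\frac h4-\frac{h^2}{64}+O(h^3), \qquad x-\sqrt x = 2+\frac{3h}4+\frac{h^2}{64}+O(h^3),$$ so $$16\sqrt{x-\sqrt x}=16\sqrt2\left(1+\frac{3h}{16}-\frac{7h^2}{512}+O(h^3)\right)=16\sqrt2+3\sqrt2\,h-\frac{7\sqrt2}{32}h^2+O(h^3).$$ The numerator then collapses to $-\frac{7\sqrt2}{32}h^2+O(h^3)$, and dividing by $16h^2$ gives the limit $-\frac{7\sqrt2}{512}$.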
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238165", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Finding the number of edges of a $k$-connected graph with a certain number of vertices I'm trying to figure out, given a $k$-connected graph, how to find the minimum number of edges needed for a certain number of vertices. For example, the number of edges in a $2$-connected graph, given that it has $8$ vertices. I guess this is relevant to Menger's theorem.
You can get this from a degree count. In a $k$-connected graph every vertex must have degree at least $k$ (otherwise deleting that vertex's fewer-than-$k$ neighbours would disconnect it from the rest), and each edge contributes to the degree of exactly two vertices, so a $k$-connected graph on $n$ vertices has at least $\lceil kn/2\rceil$ edges. For $k=2$ and $n=8$ this bound is $8$, and it is attained: label the vertices $1$ through $8$ and take the cycle $1-2-\cdots-8-1$, which remains connected after the removal of any single vertex. In general the bound is sharp for all $k<n$: the Harary graphs $H_{k,n}$ are $k$-connected with exactly $\lceil kn/2\rceil$ edges, so the minimum number of edges is exactly $\lceil kn/2\rceil$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How good is Polya's "How to Solve it"? I will be going for math major this year, and I am hoping to start this book. But after reading some reviews, they say its mostly for teachers. Can it be used by undergrads? If possible include your brief review of it. Some other questions in my mind regarding the same book: * *How to get the most out of it? *What is the structure of the book? *What are the difficulty of problems? *Given that its really old, does it loses its edge somewhere? Thank You!
I'm surprised to hear that people say its mostly for teachers. It's one of my go-to recommended books for someone who wants to start learning proofs (example). However, reading a book about thinking can only do so much for you. As I stress in that answer, the only way to become a proficient proof writer is to read and write them. If you want to get the most out of the book, make sure to supplement reading that book with a variety of problems and exercises! I only learned about the book after I had already advanced beyond its intended audience, so I can't personally vouch for its effectiveness or difficulty. That said, I know a number of people who have used it to help them get acquainted with mathematical thinking and have heard nearly uniformly positive things about it. The answers to this question might also be interesting for you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Let $G$ be a group. Show that $\forall a, b, c \in G$, the elements $abc, bca, cab$ have the same order. Let $G$ be a group. Show that $\forall a, b, c \in G$, the elements $abc, bca, cab$ have the same order. I thought that my solution ($?$) was enough to show that $abc, bca, cab$ have the same order, but my teacher told that it isn't, so I don't know what else to do here. Attempt: Let $o(abc) = n$. Then $(abc)^n = e$ Therefore \begin{align} abc(abc)^{n-2}abc &= e\\ (bc)\left[abc(abc)^{n-2}abc\right](cb)^{-1} &= e\\ (bca)^n &=e\\ bca(bca)^{n-2}bca &= e\\ (ca)\left[abca(bca)^{n-2}bca\right](ac)^{-1} &=e\\ (cab)^n &= e \end{align} Therefore $(abc)^n = (bca)^n = (cab)^n = e$ What else am I lacking after this?
In general, conjugate elements have the same order: $o(ghg^{-1}) = o(h)$ because $ x \mapsto gxg^{-1}$ is an automorphism. Now note that $bca = a^{-1} (abc) a$ and $cab = c (abc) c^{-1}$ are conjugates of $abc$.
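To spell out why conjugation preserves order (a one-line supplement): the product telescopes, $$(ghg^{-1})^n = ghg^{-1}\,ghg^{-1}\cdots ghg^{-1} = g h^n g^{-1},$$ so $(ghg^{-1})^n=e$ if and only if $h^n=e$, and the two elements have the same order.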
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Let $A\subset \mathbb{R}^n$ be a convex set. Show that $f:\mathbb{R}^n\to\mathbb{R}$ defined by $f(x) = d(x,A)$ is convex Let $A\subset \mathbb{R}^n$ be a convex set. Show that $f:\mathbb{R}^n\to\mathbb{R}$ defined by $f(x) = d(x,A)$ is convex All I need to prove is that: $$f((1-t)x+ty) = d((1-t)x+ty, A) \le (1-t)f(x)+tf(y) = (1-t)d(x,A)+td(y,A)$$ *for all $t\in[0,1]$ and $x,y\in \mathbb{R}^n$ By using the fact that $A$ is convex. A convex set is defined as: A set $C$ in $S$(vector space) is said to be convex if, for all $x$ and $y$ in $C$ and all $t$ in the interval $[0, 1]$, the point $(1 − t)x + ty$ also belongs to $C$ and the distance from $x$ to a set $A$ is: $$d(x,A) = \inf \{d(x,a), a\in A\}$$ To begin I'd choose a generic $a\in A$ and try to work with it to show that $$d((1-t)x+ty, a) \le (1-t)d(x,a)+td(y,a)\tag{1}$$ I'm tempted to think something related to the triangular inequality $d(x,y)\le d(x,a) + d(a,y)$ by doing something like: $$d((1-t)x+ty, a)\le d((1-t)x+ty, b) + d(b,a)\tag{2}$$ for some $b\in \mathbb{R}^n$, but the right side of $(2)$ is already bigger than the right side of $(1)$. Also, I'm not even using the fact that $A$ is convex yet. Could somebody help me?
Let $E = \{(x,t) | \exists a \in A \text{ such that } t \ge \|x-a\| \}$. Since $\|\cdot\|$ is convex, it is straightforward to check that $E$ is convex. Let $l(x) = \inf_{(x,t) \in E} t $ and notice that $d(x,A) = l(x)$. Suppose $\lambda \in [0,1]$ and $(x_k,t_k) \in E$ for $k=1,2$, then we have $\lambda (x_1,t_1)+(1-\lambda)(x_2,t_2) \in E$ and hence $l(\lambda x_1 + (1-\lambda) x_2) \le \lambda t_1 + (1-\lambda) t_2$. Now take the $\inf$ on the right hand side over $(x_k,t_k) \in E$ to get $l(\lambda x_1 + (1-\lambda) x_2) \le \lambda l(x_1) + (1-\lambda) l(x_2)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2238882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Difficulty in finding interval of convergence with power series I have the following power series: $$\sum_{n=1}^\infty \frac{(4x+1)^{n}}{n} $$ When finding the interval of convergence, I am left with the following inequality: $$ |4x+1|\lt1 $$ How do I go about finding the values of $x$ for which this series converges absolutely if I end up with 0 on the right side? Thank you!
Another approach, differentiating our series term-by-term we find: $$\frac{\partial }{\partial x}\sum_{n=1}^\infty \frac{(4x+1)^{n}}{n} = \sum_{n=1}^\infty 4\frac{n(4x+1)^{n-1}}{n} = 4\sum_{n=0}^\infty (4x+1)^{n}$$ This is the geometric series for $\cases{\frac{1}{1-r}\\ r = 4x+1}$ which converges on $|r|<1 \Leftrightarrow |4x+1|<1$ as both you and the other answerer got.
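To finish the problem (my continuation of both answers): $|4x+1|<1$ unwinds to $-\tfrac12<x<0$. At the endpoint $x=-\tfrac12$ the series becomes $\sum_{n\ge1}(-1)^n/n$, which converges by the alternating series test; at $x=0$ it is the harmonic series $\sum_{n\ge1}1/n$, which diverges. So the series converges on $[-\tfrac12,0)$, with absolute convergence only on the open interval $(-\tfrac12,0)$.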
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Geodesics on Cylinders I have a question about geodesics on cylinders and think I have the right answer but am unsure. The question reads: Let $C_r:=[(x,y,z)\in\mathbb{R}^3: x^2+y^2=r]$ be the infinite cylinder of radius $r$. Show that $C_{r_1}$ is isometric to $C_{r_2}$ iff $r_1=r_2$. Now I think I understand the logic behind this question. An isometry preserves geodesics, and if you intersect the cylinder with a plane perpendicular to its axis, you get a curve $C$, which is just a circle, and that circle is a geodesic. Now, if the radii of the two cylinders are different, the smaller circle would lie inside the bigger cylinder, thus not lying on the surface and definitely not a geodesic. Is this okay to write? Or do I have to explain it mathematically?
Let $S_r$ be the circle of radius $r$ in $\mathbb R^2$. Let $C$ be a curve inside this $S_r$ whose length equals that of $S_r$. Then $S_r\times \mathbb R$ is isometric to $C\times \mathbb R$: let $i: S_r \to C$ be the unit length parametrization of $C$; then $$ \phi : S_r\times \mathbb R \to C\times \mathbb R, \ \ \ \phi(s, t) = (i(s), t)$$ is an isometry. Thus your argument is not rigorous: the fact that one surface is "inside" the other does not mean that they are not isometric. However, your idea is definitely a good one. Mathematically, you need to show that if $$\phi: C_{r_1} \to C_{r_2}$$ is an isometry, then $r_1=r_2$. Using your observation, consider the geodesic $ S_{r_1} \times \{0\}\subset C_{r_1}$. The image of this geodesic under $\phi$ is also a closed geodesic in $C_{r_2}$. Can you show that this geodesic is also of the form $S_{r_2} \times \{t\}$ for some $t$? If yes, then as an isometry preserves length, one has $$ 2\pi r_1 = 2\pi r_2 \Rightarrow r_1 = r_2.$$ So it really boils down to this question: Are all closed geodesics in $C_r$ of the form $S_r \times \{t\}$ for some $t$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Why does "parentheses before exponents" not apply to squaring a binomial? This must be a stupid question with an obvious answer that is hidden from me. No one I have found even mentions a conflict. Let's say I want to find the square $(2 + 3)^2 = 2^2 + 2\times2\times3 + 3^2$ Why do I not simplify first? Parentheses first? Edit To clarify for future viewers: the main confusion was why the above problem is always (in my limited experience) solved one way in situation A and another way in situations B...Z, without it ever being noted why the method used is preferred for the given situation (again, in my limited experience). Thanks for the clarification. I think I get it now. In some cases the choice is arbitrary because the order is divorced from the output. In some cases you have a variable that you cannot add or subtract to simplify, so you must use the distributive property. I suppose you may also leave it unsimplified to show the trinomial pattern of the output produced by the distributive property (in a pedagogical situation).
By the rules of precedence, we compute $$ (2+3)^2 = 5^2 = 25.$$ By the same rules, we compute $$2^2+2\cdot 2\cdot 3+3^2=4+12+9=25. $$ The very fact that both computations produce the same result justifies us to write down the interesting fact $$ (2+3)^2=2^2+2\cdot 2\cdot 3+3^2.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Parameterizing a circle Suppose I wanted to parameterize $S = \{x^2 + y^2 \leq 1, 0 \leq z \leq 1\}$. Would this parameterization be given by $G(u,v) = (u \cos(v), u \sin(v), u)$ for $0 \leq u \leq 1$ and $0 \leq v \leq 2\pi$? The confusion I am having is with regards to $x^2 + y^2 \leq 1$. If this were simply one, it would be parameterizing the unit circle, but being $\leq 1$ throws me off here. Any help appreciated. Following suggestions, I assume this would then be $G(u,v, z) = (u \cos(v), u \sin(v), z)$ for $0 \leq u \leq 1$ and $0 \leq v \leq 2\pi, 0 \leq z \leq 1$
In order to parameterize the solid $S$ in $\mathbb{R}^3$, you have to have 3 parameters. In your current parameterization, your $z$ coordinate depends also on the radius of the circle. This would cause it to have a cone-like shape. If you wanted to parameterize the solid it would be something like: $g(u,v,z) = (u\cos{v},u\sin{v},z)$ where $0\leq z\leq1$ and $u,v$ have the same restrictions. If you wanted to parameterize only the surface of the cylinder, then we can parameterize in 2 variables with the parameterization: $g(u,v) = (\cos{v},\sin{v},u)$ where $v$ has the usual restriction and $u$ acts as the $z$-coordinate so its restriction is similar to $z$, namely: $0\leq u\leq 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
proving unity is unique First The textbook I am using (Contemporary Abstract Algebra) already gave a proof for this, but I am trying a different way. I am unsure how to go about one of the cases or whether the proof is correct. I would appreciate your help. Claim: If $R$ is a ring with unity, it is unique. If $\alpha \in R$ is a unit, then its multiplicative inverse is unique. My proof Let $R$ be a ring with unity. Suppose $\beta_1, \beta_2 \in R$ both satisfy the properties of being a unity. Then $\forall \alpha \in R$ we have $1) \alpha\beta_1 = \beta_1\alpha = \alpha$ $2) \alpha\beta_2 = \beta_2\alpha = \alpha$ Hence from $(1)$ and $(2)$ we have $\alpha\beta_1 = \alpha\beta_2 \Rightarrow \alpha\beta_1 - \alpha\beta_2 = 0 \Rightarrow \alpha(\beta_1 - \beta_2) = 0$ This means that either $\alpha = 0$ or $\beta_1 - \beta_2 = 0$. If $\alpha = 0$ (here is my struggle case: anything multiplied by 0 is 0, so I don't know what to say, or how this case implies that $\beta_1 = \beta_2$) If $\alpha \neq 0$ then $\beta_1 - \beta_2 = 0$ which means $\beta_1 = \beta_2$. Hence unity is unique. The other part I'm okay with. Thank you
The equation $\alpha(\beta_1-\beta_2)=0$ does not imply $\alpha=0$ or $\beta_1-\beta_2=0$ in a general ring $R$. (For example, in $Z_6$ we have $2\cdot 3=0$. We say that $2$ and $3$ are zero divisors because they are nonzero elements which divide $0$. Rings with identity and no zero divisors are called integral domains, and in an integral domain the implication above works, but not every ring is an integral domain.) Instead you should apply properties 1) and 2) with $\alpha=\beta_1$ and $\alpha=\beta_2$. If we use $\alpha=\beta_2$ in property 1), we find $\beta_2\beta_1=\beta_2$. If we use $\alpha=\beta_1$ in property 2), we find $\beta_2\beta_1=\beta_1$. Therefore $\beta_1=\beta_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the probability of a functioning circuit I keep getting that $$P(\text{Circuit works}) = P(\geq\text{1 of subcircuit 1}) +P(\geq\text{1 of subcircuit 2}) +P(\geq\text{1 of subcircuit 3}) $$ $$= (0.9*0.1^2 + 0.9^2*0.1 + 0.9^3)*(0.95^2 + 0.95*0.05)*(0.99) = 0.770$$ But this is the wrong answer
I think you have to use the complement (the binomial distribution in disguise). Let $A$ be the event that at least one device functions in the first subcircuit. Then $$P(A)=1-P(\text{no device functions in the first subcircuit})=1-\binom{3}{0}(0.9)^0(0.1)^3 = 1-0.001 = 0.999.$$
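To finish from there (my own completion; the circuit figure isn't shown in the post, so I'm assuming the setup that matches the numbers quoted: three parallel subcircuits in series, with three devices at $0.9$, two at $0.95$, and one at $0.99$): each subcircuit works when at least one of its devices does, so $$P(\text{circuit works}) = (1-0.1^3)(1-0.05^2)(0.99) = 0.999 \times 0.9975 \times 0.99 \approx 0.9865.$$ Treat the series structure as an assumption, not something read off the original figure.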
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Sum of $X\sim\chi^2_ \nu$ and $Y\sim\chi^2_ k$ and the gamma function Suppose $X\sim\chi^2_ \nu$ and $Y\sim\chi^2_ k$ are independent. (a) Show that $X+Y\sim\chi^2_{\nu+k}.$ (b) Additionally, find the value of $$\int_0^1u^{\frac{\nu}{2}-1}(1-u)^{\frac{k}{2}-1}du$$ as a ratio of Gamma functions. This formula was discovered by Euler. I was able to solve part (a) and this is my result. Proof: If $X$ is $\chi^2_\nu$ distributed then we can express $X$ as $X=A_1^2+\cdots+A_\nu^2$ where $A_1,\ldots,A_\nu$ are independent $N(0,1)$. Similarly, if $Y$ is $\chi^2_k$ distributed then we can express $Y$ as $Y=B_1^2+\cdots+B_k^2$ where $B_1,\ldots,B_k$ are independent $N(0,1)$. These two statements can be made due to the definition of $\chi_\nu^2$ with $\nu$ degrees of freedom. Now, since $X$ and $Y$ are independent, all of $A_1,\ldots,A_\nu,B_1,\ldots,B_k$ may be taken to be independent, so adding $X+Y$ we obtain $$X+Y=A_1^2+\cdots+A^2_\nu+B_1^2+\cdots+B_k^2$$ We can conclude that $$X+Y \sim \chi^2_{\nu+k}$$ However, I am uncertain as to how to compute (b). I have seen and computed Gamma integrals, but they have been in the conventional form with an exponential term and one variable. This integral contains two variables, so I am uncertain how to go about it.
Suppose the joint distribution of $R,S$ is $$ \underbrace{\frac 1 {\Gamma(\nu/2)} \left( \frac r 2 \right)^{\nu/2-1} e^{-r/2} \, \frac{dr} 2} \times \underbrace{ \frac 1 {\Gamma(\kappa/2)} \left( \frac s 2 \right)^{\kappa/2-1} e^{-s/2} \, \frac{ds} 2} \tag 1 $$ so $R,S$ are independent and each has a chi-square distribution. Let $t = r+s$ and $u= r/(r+s)$. Then $r=tu$ and $s=t(1-u).$ Then we have the Jacobian $$ dr\,ds = \left| \frac{\partial(r,s)}{\partial(t,u)} \right| \, dt\,du = t\,dt\,du. $$ So $(1)$ becomes \begin{align} & \frac 1 {\Gamma(\nu/2)\Gamma(\kappa/2)} \left( \frac{tu} 2 \right)^{\nu/2-1} \left( \frac{t(1-u)} 2 \right)^{\kappa/2-1} e^{-t/2} \, \frac{t\,dt\,du} {4} \\[10pt] = {} & \underbrace{ \frac {\Gamma((\nu+\kappa)/2)} {\Gamma(\nu/2)\Gamma(\kappa/2)} u^{\nu/2-1} (1-u)^{\kappa/2-1} \, du} \times \underbrace{ \frac 1 {\Gamma((\nu+\kappa)/2)} \left( \frac t 2\right)^{(\nu+\kappa)/2-1} e^{-t/2} \, \frac{dt} 2} \end{align} and from this we conclude that $R/(R+S)$ has a Beta distribution with parameters $\nu/2,\kappa/2$ and $R+S$ has a chi-square distribution with $\nu+\kappa$ degrees of freedom, and that $R/(R+S)$ and $R+S$ are independent. Since we know the second factor in the last expression above is a probability measure rather than a constant multiple of one, we can conclude the same about the first factor, and thus conclude that $$ \int_0^1 u^{\nu/2-1} (1-u)^{\kappa/2-1} \,du = \frac{\Gamma(\nu/2)\Gamma(\kappa/2)}{\Gamma((\nu+\kappa)/2)}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A question about Taylor expansion Is this statement true? Statement: Let $n$ be a positive integer. Consider the Taylor expansion of $\sqrt[n]{1+x}$ to the $k$th order, that is, \begin{gather*} \sqrt[n]{1+x}=\sum_{j=0}^{k}\binom{\frac{1}{n}}{j}x^j+o(x^k), \qquad \text{as $x\to 0$.} \end{gather*} Then the Taylor polynomial of $k$th order satisfies \begin{gather*} \left(\sum_{j=0}^{k}\binom{\frac{1}{n}}{j}x^j\right)^n=1+x+\sum_{j=k+1}^{nk}a_jx^j,\tag{1} \end{gather*} where the $a_j$s are coefficients which can be determined. The interesting part of this statement is that the right hand side of (1) is just a monic polynomial of first order plus terms of much higher order. I have checked it for $n=2.$ For instance, \begin{gather*} \sqrt{1+x}=1+{\frac{1}{2}}x-{\frac{1}{8}}{x}^{2}+{\frac{1}{16}}{x}^{3}-{\frac{5} {128}}{x}^{4}+O \left( {x}^{5} \right), \end{gather*} and \begin{align*} &\left(1+{\frac{1}{2}}x-{\frac{1}{8}}{x}^{2}+{\frac{1}{16}}{x}^{3}-{\frac{5} {128}}{x}^{4}\right)^2\\ =&1+x-{\frac {5\,{x}^{7}}{1024}}+{\frac {25\,{x}^{8}}{16384}}+{\frac {7\,{ x}^{6}}{512}}-{\frac {7\,{x}^{5}}{128}}. \end{align*} Furthermore, if we expand it to $6$th order, then \begin{gather*} \sqrt{1+x}=1+{\frac{1}{2}}x-{\frac{1}{8}}{x}^{2}+{\frac{1}{16}}{x}^{3}-{\frac{5} {128}}{x}^{4}+{\frac{7}{256}}{x}^{5}-{\frac{21}{1024}}{x}^{6}+O \left( {x}^{7} \right), \end{gather*} and we have \begin{align*} &\left(1+{\frac{1}{2}}x-{\frac{1}{8}}{x}^{2}+{\frac{1}{16}}{x}^{3}-{\frac{5} {128}}{x}^{4}+{\frac{7}{256}}{x}^{5}-{\frac{21}{1024}}{x}^{6}\right)^2\\ =&1+x-{\frac {147\,{x}^{11}}{131072}}+{\frac {441\,{x}^{12}}{1048576}}-{ \frac {77\,{x}^{9}}{16384}}+{\frac {77\,{x}^{10}}{32768}}-{\frac {33\, {x}^{7}}{1024}}+{\frac {165\,{x}^{8}}{16384}}. \end{align*} Thus, by examples, it seems that the statement above is true. But I do not know how to prove it. Can you help me?
This is actually a very good exercise in combinatorics and in multiplying polynomials so I will post an answer. Firstly note that the following identity holds true: \begin{equation} 1= \sum\limits_{J=0}^{n k} \delta_{j_1+\cdots+j_n,J} \end{equation} Now we insert the unity into the left hand side of (1) and we expand the whole thing and we get: \begin{eqnarray} lhs(1) = \sum\limits_{J=0}^{n k} x^J \cdot \underbrace{\left( \sum\limits_{j_1=0}^k \sum\limits_{j_2=0}^k \cdots \sum\limits_{j_n=0}^k \left[\prod\limits_{\xi=1}^n \binom{\frac{1}{n}}{j_\xi}\right] \cdot \delta_{j_1+\cdots+j_n,J}\right)}_{{\mathfrak a}_J} \end{eqnarray} Now we evaluate the coefficients $\left\{ {\mathfrak a}_J \right\}_{J=0}^{n k}$ for consecutive values of $J=0,1,2,3,4,\cdots,k,k+1,\cdots,n \cdot k$. We have: \begin{eqnarray} {\mathfrak a}_0 &=& 1 \\ {\mathfrak a}_1 &=& n \binom{\frac{1}{n}}{1} = 1 \\ {\mathfrak a}_2 &=& \binom{n}{2} \cdot \binom{\frac{1}{n}}{1}^2 + n \binom{\frac{1}{n}}{2} = 0 \\ {\mathfrak a}_3 &=& \binom{n}{3} \cdot \binom{\frac{1}{n}}{1}^3+\binom{n}{2}(2 \binom{\frac{1}{n}}{2} \binom{\frac{1}{n}}{1}) + n \binom{\frac{1}{n}}{3} = 0 \\ {\mathfrak a}_4 &=& \binom{n}{4} \cdot \binom{\frac{1}{n}}{1}^4+ \binom{n}{3}(3 \binom{\frac{1}{n}}{2} \binom{\frac{1}{n}}{1}^2) +\binom{n}{2} (2 \binom{\frac{1}{n}}{3} \binom{\frac{1}{n}}{1} + \binom{\frac{1}{n}}{2}^2)+n \binom{\frac{1}{n}}{4} =0 \end{eqnarray} Let us analyze the last line above. We can achieve $J=4$ in four different ways (from the left to the right): firstly by taking four distinct $j$-indices equal to unity, secondly by taking one $j$-index equal to two and two other ones equal to one, thirdly by taking one index equal to three and another one equal to one, or two distinct indices equal to two, and fourthly by taking exactly one index equal to four. The corresponding numbers of ways of achieving that are given by the respective binomial factors which stand in front of each term in the right hand side of the last line. Now, clearly for bigger values of $J$ nothing much will change except that we only have to enumerate more possibilities and the expressions become lengthy. The only interesting thing happens when $J$ hits the value $J=k+1$. Then clearly the very last term on the right hand side is missing, and since the untruncated coefficient vanishes, the truncated one equals minus the missing term: \begin{eqnarray} {\mathfrak a}_{k+1} &=& -n \binom{\frac{1}{n}}{k+1} \end{eqnarray} which gives ${\mathfrak a}_5 = - 7/128$ for $(n,k)=(2,4)$ and ${\mathfrak a}_7 = - 33/1024$ for $(n,k)=(2,6)$ as in the question above. The higher order coefficients are more complicated but it is clear how they can be extracted. As $J$ increases there will be more and more terms missing on the very right of the right hand side. One needs to extract those terms and carefully sum them up to get the coefficient in question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
M/M/1 Queue with new arrivals Let us have a simple FIFO M/M/1 queue. There is an initial Poisson arrival stream with arrival rate $\lambda_1$. Now after some time $t$ there is an additional stream with arrival rate $\lambda_2$, independent of the original stream. How do I analyze the waiting times after this new stream arrives? Is it correct to simply take the new arrival rate to be $\lambda_1 + \lambda_2$ and do the analysis? Will it work even if the queue size is finite? Note that there is no priority between streams, it is still FIFO, and the service time distribution still remains the same.
Maybe the multiclass M/G/1 queue model is helpful. See example slides 50-62.
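One general fact that may be worth stating explicitly (a standard result, not taken from those slides): the superposition of two independent Poisson processes is again a Poisson process with rate $\lambda_1+\lambda_2$. So from time $t$ onward the system is just an M/M/1 queue with arrival rate $\lambda_1+\lambda_2$ and the same service rate $\mu$, started from whatever queue length was present at time $t$; once the transient dies out, the usual steady-state waiting-time formulas apply with $\rho=(\lambda_1+\lambda_2)/\mu$, provided $\rho<1$. The same merging argument works for a finite-capacity queue (M/M/1/$K$); only the stationary distribution changes, to the truncated one.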
{ "language": "en", "url": "https://math.stackexchange.com/questions/2239909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }