If $T \in L(\mathbb R^2)$ with $T^3 = I$, then is $T$ an isometry? I am aware that there are infinitely many $S \in L(\mathbb R^2)$ such that $S^2 = I$, and $S$ is not an isometry. Can I generalize the result for $T^n = I$, where $n >2$?
An equation like $S^n = I$ will generally tell you something about the minimal polynomial of $S$. This in turn tells you something about the (complex) eigenvalues and about which matrices $S$ might be similar to. However, being an isometry (i.e. orthogonal) is generally not preserved under similarity. For example, the matrix $$ \begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix} $$ is orthogonal, whereas $$ \begin{pmatrix}1 & 1 \\ 0 & 1\end{pmatrix} \begin{pmatrix}0 & 1 \\ 1 & 0\end{pmatrix} \begin{pmatrix}1 & -1 \\ 0 & 1\end{pmatrix} = \begin{pmatrix}1 & 0 \\ 1 & -1\end{pmatrix} $$ is not. (Note that the matrices I multiplied on the left and right are inverses of each other.) So you shouldn't expect an equation like that to tell you anything about the matrix being an isometry, except in very special cases (like $S^1 = I$). Because isometries are rare compared to all linear maps, I would expect to find infinitely many counterexamples.

For your specific problem, if $S$ is an endomorphism of $\mathbb{R}^2$ with $S^n = I$, then $S$ is a zero of the polynomial $t^n - 1$, which factors (over $\mathbb{R}$) into

* $t - 1$,
* $t + 1$ (if $n$ is even), and
* $(t - e^{2\pi ik/n})(t - e^{-2\pi ik/n}) = t^2 - 2 \cos(2\pi k/n)\,t + 1$ for $1 \leq k < n/2$.

(This is because the complex roots are the $n$-th roots of unity, i.e. $e^{2\pi i k/n}$ for $0 \leq k < n$.) The minimal polynomial of $S$ is hence a product of some of these polynomials. Its degree is at most $2$, so it can be

* $t - 1$, in which case $S = I$ and $S$ is orthogonal;
* $t + 1$, in which case $S = -I$ and $S$ is orthogonal;
* $(t - 1)(t + 1)$ (if $n$ is even), in which case $S$ is similar to the matrix considered above (alternatively, in this case we have the stronger condition $S^2 = I$, and you know that infinitely many counterexamples exist); or
* one of the quadratic factors considered above, in which case $S$ is similar to the companion matrix $$\begin{pmatrix}0 & -1 \\ 1 & 2 \cos (2\pi k/n)\end{pmatrix}$$ of that polynomial. This companion matrix is not orthogonal as long as $\cos(2 \pi k /n) \neq 0$, i.e. unless $n = 4k$; and even when $n = 4k$ (where the companion matrix is a rotation by $90°$), a non-orthogonal conjugate of it still satisfies $S^n = I$. For any $n \geq 3$, the last case can occur, and for $n = 2$, the third case can occur.
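For the original question ($n = 3$, $k = 1$), the last case can be checked numerically: the companion matrix satisfies $S^3 = I$ but its columns are not orthonormal, so it is not an isometry. A small self-contained sketch (the `mat_mul` helper is hand-rolled just to keep it dependency-free):

```python
import math

def mat_mul(A, B):
    # multiply two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# companion matrix of t^2 - 2*cos(2*pi/3)*t + 1  (n = 3, k = 1)
c = 2 * math.cos(2 * math.pi / 3)   # = -1
S = [[0.0, -1.0], [1.0, c]]

S3 = mat_mul(mat_mul(S, S), S)
print(S3)  # numerically the identity matrix, so S^3 = I

# isometry check: the second column (-1, c) should have norm 1, but doesn't
col2_norm = math.hypot(S[0][1], S[1][1])
print(col2_norm)  # sqrt(1 + c^2) = sqrt(2) != 1, so S is not orthogonal
```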
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $R$ a subring of $M_3(\mathbb{R})$ Let $R =\left \{ \begin{pmatrix} 0 &a &b \\ 0& 0 & c\\ 0& 0& 0 \end{pmatrix}:a,b,c\in \mathbb{R}\right \}$ . Show that $R$ a subring of $M_3(\mathbb{R})$. Is it a ring with identity? If so, is it a unital subring. Clearly, the zero matrix lies in $R$. Also, $$\begin{pmatrix} 0 &a_1 &b_1 \\ 0& 0 & c_1\\ 0& 0& 0 \end{pmatrix}- \begin{pmatrix} 0 &a_2 &b_2 \\ 0& 0 & c_2\\ 0& 0& 0 \end{pmatrix}= \begin{pmatrix} 0 &a_1-a_2 &b_1-b_2 \\ 0& 0 & c_1-c_2\\ 0& 0& 0 \end{pmatrix} \in R $$ and $$\begin{pmatrix} 0 &a_1 &b_1 \\ 0& 0 & c_1\\ 0& 0& 0 \end{pmatrix} \begin{pmatrix} 0 &a_2 &b_2 \\ 0& 0 & c_2\\ 0& 0& 0 \end{pmatrix}= \begin{pmatrix} 0 &0 &a_1c_2 \\ 0& 0 & 0\\ 0& 0& 0 \end{pmatrix}\in R $$ Is it a ring with identity? I do not think so, since the product of these matrices are a subset of $R$ and thus, there is no matrix $I$, such that $AI=IA=A$ for all $A\in R$. Am I right? Finally, whether or not there is an identity matrix, we have that $R$ is not a unital subring. Since identity in $M_3(\mathbb{R})$ is $\begin{pmatrix} 1&0 &0 \\ 0& 1 & 0\\ 0& 0& 1 \end{pmatrix}\notin R$.
Your reasons for the first and third parts are fine, but not the second: "Is it a ring with identity? I do not think so, since the product of these matrices are a subset of $R$ and thus, there is no matrix $I$, such that $AI=IA=A$ for all $A\in R$. Am I right?" Merely being a subset of $R$ does not preclude a subring from having its own identity. The identity $e$ of any ring always satisfies $e^2=e$, that is, it is idempotent. In fact, if you have any such element $e\in R$, then $eRe$ is always a subring with identity $e$. But let's keep going with this fact that $e^2=e$. Note that in the ring in question, $x^3=0$ for every element $x$. It should be obvious to you now that nothing is going to satisfy $e^2=e$ except $0$, which is clearly not the identity.
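The key fact that $x^3 = 0$ for every $x \in R$ is easy to verify by hand, or with a short script (illustration only; helper names are mine):

```python
# elements of R are strictly upper-triangular 3x3 matrices
def make(a, b, c):
    return [[0, a, b], [0, 0, c], [0, 0, 0]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

X = make(2, -5, 7)
X2 = mat_mul(X, X)
X3 = mat_mul(X2, X)
print(X2)  # only the top-right entry, a*c = 14, survives squaring
print(X3)  # the zero matrix: x^3 = 0 for every x in R
```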
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Are there non-integer $\nu$, such that the Parabolic Cylinder Function vanishes as $z \to -\infty$? Suppose we have the differential equation $$ -f''(x) + \Big( \frac{1}{4} x^2 - \nu - \frac{1}{2} \Big) \, f(x) = 0 \: , \quad \text{where } \nu \in \mathbb{R} \: . $$ The general solution to this equation is $f(x) = a \, D_\nu(x) + b \, D_\nu(-x)$, where $D_\nu$ is the parabolic cylinder function. If $\nu$ is a non-negative integer, then $D_\nu(x) \propto \exp(-x^2/4)$ is nice and bounded. My question is: are there any $\nu$ other than the non-negative integers such that $$ \lim_{x \to -\infty} D_\nu(x) = 0 $$ (i.e. $D_\nu$ vanishes as $x$ goes to negative infinity)? From trying various different $\nu$ numerically, it seems that the answer is no, however I wasn't able to prove it. The Digital Library of Mathematical Functions includes asymptotic expansions of the parabolic cylinder functions, I just don't know how to use them to prove this.
By http://dlmf.nist.gov/12.9.E3, $$ D_\nu (xe^{ \pm \pi i} ) = e^{ \pm \pi \nu i} e^{ - \frac{1}{4}x^2 } x^\nu \left( {1 + \mathcal{O}\!\left( {\frac{1}{{x^2 }}} \right)} \right) + \frac{{\sqrt {2\pi } }}{{\Gamma ( - \nu )}}e^{\frac{1}{4}x^2 } x^{ - \nu - 1} \left( {1 + \mathcal{O}\!\left( {\frac{1}{{x^2 }}} \right)} \right) $$ as $x\to +\infty$. Since the parabolic cylinder function is a single valued entire function, taking the average of the two sides gives $$ D_\nu ( - x) = e^{ - \frac{1}{4}x^2 } x^\nu \left( {\cos (\pi \nu ) +\mathcal{O}\!\left( {\frac{1}{{x^2 }}} \right)} \right) + \frac{{\sqrt {2\pi } }}{{\Gamma ( - \nu )}}e^{\frac{1}{4}x^2 } x^{ - \nu - 1} \left( {1 + \mathcal{O}\!\left( {\frac{1}{{x^2 }}} \right)} \right), $$ as $x\to +\infty$. This shows that $\lim _{x \to - \infty } D_\nu (x) = 0$ if and only if $\nu$ is a non-negative integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4192908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the function $ f$: $\mathbb R^+$ $\to$ $\mathbb R^+$ satisfying: $ f(2f(x)+2y) = f(2x+y) +y $ Find the function $f : \mathbb R^+ \to \mathbb R^+$ satisfying: $f(2f(x) + 2y) = f(2x+y) + y.$ I think the difficulty of this problem lies in the domain and the codomain. If the domain were all of $\mathbb R$, we could easily solve the problem: choose $y = 2x - 2f(x)$ $\Rightarrow 2f(x)+2y = 2x+y $ $\Rightarrow f(2f(x)+2y) = f(2x+y)$ $\Rightarrow y= 0$ $\Rightarrow f(x)=x$. (Try again and see if it is correct.) That's all I did. I hope to get help from everyone. Thank you very much.
$f(2f(x) + 2y) = f(2x+y) + y$. Notation $P(a;b)$: replace $x$ by $a$ and $y$ by $b$ in the original equation. First, injectivity. Assume $f(a) = f(b)$. Comparing $P(a;y)$ and $P(b;y)$ gives $f(2a+y) = f(2b+y)$ for all $y$. If $a \neq b$, then $f$ is a periodic function with period $s = 2a - 2b$. But $P(x;y+s)$ and $P(x;y)$ $\Rightarrow$ $s=0$ $\Rightarrow$ $a=b$, so $f$ is injective. $P(x;y-f(x)+f(t))$ $\Rightarrow$ $f(2y+2f(t)) = f(2x+y-f(x)+f(t)) +y-f(x)+f(t) $ $\Rightarrow$ $f(2t+y) + y = f(2x+y-f(x) + f(t)) + y- f(x) + f(t) $. Set $2t-2x+f(x)-f(t) =s $ $\Rightarrow$ $( 2t+y)-(2x+y-f(x)+f(t)) =s $. Replace $y$ by $ 2x+y-f(x)+f(t)$ $\Rightarrow$ $f(y+s) +f(x) - f(t) = f(y) $. Set $ f(x) - f(t) = r$ $\Rightarrow$ $f(y+s) +r = f(y)$ $\Rightarrow$ $f(2f(x)+2y)-r = f(2f(x)+2y+s) $ $\Rightarrow$ $f(2f(x)+2y)-2r = f(2f(x)+2y+2s) $. In a similar way we have $ f(2f(x)+2y)-2r = f(2f(x)+2y-2r)$, so by injectivity $2f(x)+2y+2s = 2f(x)+2y-2r$ $\Rightarrow$ $ s=-r $ $\Rightarrow$ $ 2t-2x+f(x)- f(t) = -f(x)+f(t) $ $\Rightarrow$ $f(x)-x = f(t)-t = C $ $\Rightarrow$ $f(x) = x+C $. It is easy to see that $C = 0$, so $f(x) = x$.
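As a final sanity check (not a proof of uniqueness), $f(x) = x$ does satisfy the original equation on a grid of positive reals:

```python
import itertools

def f(x):
    return x  # candidate solution

# check f(2 f(x) + 2 y) == f(2 x + y) + y for many positive x, y
vals = [0.5, 1.0, 2.0, 3.7, 10.0]
ok = all(abs(f(2 * f(x) + 2 * y) - (f(2 * x + y) + y)) < 1e-12
         for x, y in itertools.product(vals, vals))
print(ok)  # True
```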
{ "language": "en", "url": "https://math.stackexchange.com/questions/4193232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Existence of a real $4 \times 4$ matrix satisfying the equation $x^2+1=0$. Does there exist a real $4 \times 4$ matrix satisfying the equation $x^2+1=0$? I have tried using determinants to see if I can arrive at some sort of contradiction, but it doesn't quite help. Also, I have tried putting entries ($a,b,c,d \ldots$ etc.) in the matrix to see if it gets me anywhere, and it doesn't. And, companion matrices don't help either because the polynomial is of degree $2$ while the matrix is a real $4 \times 4$ matrix. Is there some sort of approach that I am definitely not aware of at this point?
This works: $$\begin{pmatrix}0&-1&0&0\\1&0&0&0\\0&0&0&-1\\0&0&1&0\end{pmatrix}^2=-I_4$$
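This matrix is block-diagonal with two copies of the $2 \times 2$ rotation-by-$90°$ matrix, each of which squares to $-I_2$; squaring it can be checked directly (illustration, with a hand-rolled matrix product):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J = [[0, -1, 0,  0],
     [1,  0, 0,  0],
     [0,  0, 0, -1],
     [0,  0, 1,  0]]

J2 = mat_mul(J, J)
print(J2)  # -I_4, so J satisfies x^2 + 1 = 0
```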
{ "language": "en", "url": "https://math.stackexchange.com/questions/4193570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find all primes $(p,q)$ satisfying the condition that $pq$ divides $2^p+2^q.$ Find all primes $(p,q)$ satisfying the condition that $pq$ divides $2^p+2^q.$ I get answers for $p =2$ or $q = 2$ but I want to know the general approach for this question.
Assume both $p$ and $q$ are primes at least $3$. Let $a_p$ denote the smallest integer satisfying $2^{a_p} \equiv_p -1$ and let $a_q$ denote the smallest integer satisfying $2^{a_q} \equiv_q -1$. [There indeed exist such $a_p,a_q$ if there exist $p,q$ satisfying the equation $pq|(2^p+2^q)$; indeed [assuming $q \ge p$] $$pq|(2^p+2^q) \quad \Rightarrow \quad pq|(1+2^{q-p})$$ (since $2^p+2^q = 2^p(1+2^{q-p})$ and $pq$ is odd) $$\Rightarrow \quad 2^{q-p} \equiv_p -1; \ 2^{q-p} \equiv_q -1.$$ Thus given $2^{q-p} \equiv_p -1$ and $2^{q-p} \equiv_q -1$ there indeed exist such $a_p,a_q$ as claimed.] Also note that $2^{p} \equiv_p 2$ and $2^{q} \equiv_q 2$ by Fermat's little theorem. Then $$pq|(2^p+2^q); \ 2^p \equiv_p 2; \ 2^q \equiv_q 2 \quad \Rightarrow \quad 2^{q} \equiv_p -2; \ 2^p \equiv_q -2$$ $$\Rightarrow \quad 2^{q-1} \equiv_p -1; \ 2^{p-1} \equiv_q -1.$$ As both $$2^{q-1} \equiv_p -1$$ and $$2^{q-1} \equiv_q 1$$ hold, it follows that $q-1 = \ell a_q$ for some even integer $\ell$ and $q-1 = ka_p$ for some odd integer $k$. Thus what follows is the strict inequality $$\nu_2(a_p) > \nu_2(a_q),$$ where $\nu_2(M)$ for each integer $M$ is defined as the exponent of the largest power of $2$ dividing $M$. Likewise, $$2^{p-1} \equiv_q -1.$$ And, $$2^{p-1} \equiv_p 1.$$ So it follows that $p-1 = b a_p$ for some even integer $b$ and $p-1 = ca_q$ for some odd integer $c$. Thus what also follows is the strict inequality: $$\nu_2(a_p) < \nu_2(a_q).$$ However, the strict inequalities $\nu_2(a_p) > \nu_2(a_q)$ and $\nu_2(a_p)<\nu_2(a_q)$ contradict each other, so indeed, there are no two odd primes $p,q \ge 3$ satisfying $pq|(2^p+2^q)$. Thus the result follows at least for any two primes $p,q \ge 3.$ If $p=2$ then $$2^p+2^q \equiv_q 4(1+2^{q-2}) \equiv_q 4(1+2^{-1}),$$ where $2^{-1}$ is defined to be the multiplicative inverse of $2$ in $\mathbb{Z}/q\mathbb{Z}$. [Indeed, as $2^{q-1} \equiv_q 1$ it follows that $2^{q-2} \equiv_q 2^{q-1}2^{-1} \equiv_q 2^{-1}$.] 
So for $q$ to divide $2^{p}+2^{q}$ for $p=2$ it follows that either the equation $4\equiv_q 0$ or the equation $1+2^{-1} \equiv_q 0$ holds. The former equation gives $q=2$. The latter equation gives $q=3$. [Indeed, $2^{-1} \equiv_q -1$ which gives $2 \times 2^{-1} \equiv_q 2 \times -1$ which gives $1 \equiv_q -2$ which indeed gives $q=3$.] So if $p$ is $2$ then $q$ must be $2$ or $3$. However, note that $p=2,q=2$ and $p=2,q=3$ are both solutions.
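A brute-force check over primes below $200$ agrees with this analysis: up to order, the only solutions are $(2,2)$ and $(2,3)$. (A sketch with a naive trial-division primality test.)

```python
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(2, 200) if is_prime(n)]

# all ordered pairs of primes (p, q) with pq | 2^p + 2^q
solutions = sorted({(p, q) for p in primes for q in primes
                    if (2 ** p + 2 ** q) % (p * q) == 0})
print(solutions)  # [(2, 2), (2, 3), (3, 2)]
```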
{ "language": "en", "url": "https://math.stackexchange.com/questions/4193742", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there an easier/analogous explanation of geometric mean, similar to that of arithmetic mean? My simple understanding of arithmetic mean is that the arithmetic mean represents a "central point" (central tendency) where all others numbers converge as the accumulative linear distance from the left equals the accumulative linear distance from the right. While I know (memorize) the definition of geometric mean, I cannot "interpret/visualize" it (maybe I should not?) in a way analogous to that of the arithmetic mean. I suspect that it may have to do with the fundamental concept of "root", which I fail to grasp, (I did try to read up on the concept of root but have yet found anything beyond its definition and application) but still I cannot find a satisfying answer how taking an nth root would yield a "mean" that reasonably represents the group. Is there an easier way to "understand" geometric mean? (I tried to look for a similar question in this forum but have not found one so far).
I think of the geometric mean as a log-scale version of the arithmetic mean. That requires a little explanation. The arithmetic mean of $x_1, \ldots, x_n$ is $\mathrm{AM}(x) = \frac{x_1 + \cdots + x_n}{n}$. Set $y_i = b^{x_i}$ for a fixed base $b>1$. Then the geometric mean of $y_1, \ldots, y_n$ is \begin{align*} \mathrm{GM}(y) &= (y_1 \cdots y_n)^{1/n} = (b^{x_1 + \cdots + x_n})^{1/n} = b^{\frac{x_1 + \cdots + x_n}{n}} \\ &= b^{\mathrm{AM}(x)}. \end{align*} As an example, suppose we had 3 bank accounts with $y=\$100, \$1000, \$10000$ in them. The arithmetic mean is basically only going to pick up the \$10000 amount and will be basically $\mathrm{AM}(y) \approx \$10000/3$. However, the geometric mean effectively thinks of these as $\$10^2, \$10^3, \$10^4$ where $x=2, 3, 4$ with $\mathrm{AM}(x) = 3$, so the geometric mean of the amounts in the bank is $\mathrm{GM}(y) = \$10^3 = \$1000$. Lots of data is really of the form $b^x$. For instance, this is basically the insight behind Benford's law. Consequently the geometric mean is sometimes more appropriate than the arithmetic mean.
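The bank-account example is easy to reproduce. In this sketch (helper names are mine) the geometric mean is computed deliberately via logs, to make the "arithmetic mean on a log scale" reading explicit:

```python
import math

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # GM(y) = exp(AM(log y)): the geometric mean is the
    # arithmetic mean taken on a log scale
    return math.exp(arithmetic_mean([math.log(x) for x in xs]))

accounts = [100, 1000, 10000]
print(arithmetic_mean(accounts))  # 3700.0 -- dominated by the largest value
print(geometric_mean(accounts))   # 1000.0 (up to float rounding)
```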
{ "language": "en", "url": "https://math.stackexchange.com/questions/4193906", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Primes congruent to $1$ mod $4$ are sum of two squares Neukirch mentions in his book on Algebraic Number Theory that the primes $\equiv 1\pmod 4$ are sums of squares, and it is not so obvious (compared to the converse). While going for a proof of it myself, I came to a problem; there could be similar questions, but I am interested in proceeding one step to complete the proof which I was working on. Let $p\equiv 1\pmod 4$, and assume $p$ is odd. Then $4$ divides $(p-1)$, which is the order of the cyclic group $(\mathbb{Z}/p\mathbb{Z})^{\times}$. This gives that $-1$ is a square mod $p$, say $a^2\equiv -1\pmod p$ i.e. $p$ divides $(a^2+1)$ for some integer $a$. Case 1. $p$ is reducible in $\mathbb{Z}[i]$. Say $p=(a+bi)(c+di)$ in $\mathbb{Z}[i]$, where factors are non-units. Then we also have $p=(a-bi)(c-di)$, so $p^2=(a^2+b^2)(c^2+d^2)$. Thus $a^2+b^2, c^2+d^2\in \{1,p,p^2\}$. If $a^2+b^2$ is $1$ or $p^2$, then it is easy to see that $a+bi$ or $c+di$ are units (respectively), contradiction. Thus, $a^2+b^2$ must be $p$, which is what we wanted to prove. But we did it only in one case. Case 2. $p$ is irreducible in $\mathbb{Z}[i]$. This case, I was unable to proceed, and I do not know whether we get a contradiction, or not. Any hint for this case?
Here is a simple proof. We have $p \mid m^2 + 1 = (m+i)(m-i)$, but $p$ divides neither $m + i$ nor $m - i$ in $\mathbb{Z}[i]$ (as it does not divide their imaginary parts). Then we know that $p$ cannot be prime in the Gaussian integers. Thus, there exists a nontrivial factorization of $p$ in the Gaussian integers; since the norm is multiplicative and $N(p) = p^2$, it has exactly two non-unit factors, each of norm $p$. Writing $p = zw$ with $N(z) = z\bar z = p$ gives $w = p/z = \bar z$, so the factorization must be of the form $$p = (x + yi)(x - yi)$$ for some $x,\:y \in \mathbb{Z}$. Therefore, we have that $p = x^2 + y^2$. If you are interested, you can see a collection of proofs here.
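A brute-force search (illustration only) confirms the representation for every prime $p \equiv 1 \pmod 4$ below $100$:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# for each prime p = 1 (mod 4) below 100, find one pair with x^2 + y^2 = p
reps = {}
for p in range(2, 100):
    if is_prime(p) and p % 4 == 1:
        reps[p] = next((x, y) for x in range(1, p) for y in range(p)
                       if x * x + y * y == p)
print(reps)  # e.g. 5 -> (1, 2), 13 -> (2, 3), ...
```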
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Value of rational function well-defined? Suppose $f\in O_p(V)$ is a rational function on $V$ that has a value at $p$. Then write $f=a/b=a'/b'$ where $a,b,a',b'\in \Gamma(V)$, the coordinate ring of $V$. Want to show the value of $f$ at $p$ is well-defined, i.e. $a(p)/b(p)=a'(p)/b'(p)$. So since $a/b=a'/b'$ are the same equivalence class, there is some non-zero poly $x\in \Gamma(V)$ such that $x(ab'-a'b)=0$. Then $x(p)(a(p)b'(p)-a'(p)b(p))=0$. Then what? How do we know $x(p)\neq0$?
So $x\neq \bar{0} \in \Gamma(V)$. Since $\Gamma(V)$ is an integral domain and $x(ab'-a'b)=0$, we get $ab'-a'b=\bar{0}$. Evaluating at $p$ gives $a(p)b'(p)=a'(p)b(p)$, and since $b(p),b'(p)\neq 0$, this yields $a(p)/b(p)=a'(p)/b'(p)$. Actually, one can remove $x$ from the definition of the fraction field altogether: over an integral domain, $a/b=a'/b'$ if and only if $ab'-a'b=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is manipulating a trivial equality a logically valid proof of a formula? I was working on a proof that $r^{n/2}$ commutes with $f$ in $D_4$. I did not know where to start, so I decided to manipulate a true statement into the formula I was trying to prove, as a sort of derivation. $$\begin{align} ef &= ef \\ r^nf &= r^0f \\ r^{n/2} r^{n/2} f &= r^{n/2} r^{-n/2} f \\ r^{n/2} f &= r^{-n/2} f \\ r^{n/2} f &= f r^{n/2} \\ \end{align} $$ However, I started second-guessing myself because I realized “$ef=ef$” is true always, regardless of whether “$r^{n/2}$ commutes with $f$” is true. This leads me wondering whether “$ef=ef \implies r^{n/2} f = f r^{n/2}$” is tautological and if, in general, if any implication beginning with a trivial equality is tautological.
Actually, $D_4=\langle r,s\mid r^4=e,\ s^2=e,\ srs^{-1}=r^{-1}\rangle$. Then we have $$ sr^2s^{-1}=r^{-2}=r^2, $$ so $r^2$ commutes with $r$ and $s$ and hence with all elements. So it is in the center. So you don't have to start with $ef=ef$, but it is not false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is it possible to factor the risk ratio from this equation? I have this weighted risk ratio equation here: $$\frac{a(b/e+d/f)}{b(a/g+c/h)}$$ I am interested in factoring this out of the equation, i.e. the unweighted risk ratio: $$\frac{a(b+d)}{b(a+c)}$$ Is this at all possible? Thank you!
Of course the weighted version can't be equal to the unweighted version in general. To reduce to the unweighted version, set the weight to be $e=f=g=h$.
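With exact rational arithmetic one can confirm that a common weight cancels, collapsing the weighted ratio to the unweighted one (an illustrative check; the variable values and helper names are arbitrary choices of mine):

```python
from fractions import Fraction as F

a, b, c, d = F(3), F(5), F(7), F(11)

def weighted(e, f_, g, h):
    return (a * (b / e + d / f_)) / (b * (a / g + c / h))

unweighted = (a * (b + d)) / (b * (a + c))

w = F(13)  # any common weight e = f = g = h
print(weighted(w, w, w, w) == unweighted)  # True: the weight cancels
print(weighted(F(2), F(3), F(5), F(7)) == unweighted)  # False in general
```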
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A monomorphism of schemes induces a homomorphism $\kappa ( f ( x )) \to \kappa ( x )$ which is an isomorphism. Show that a monomorphism $f : X \rightarrow S$ of schemes is separated, purely inseparable (i.e. universally injective), and that for $x \in X$ the induced homomorphism $\kappa ( f ( x )) \rightarrow \kappa ( x )$ is an isomorphism. This is exercise 9.6 in Görtz & Wedhorn's book. Since $f$ is monic, the diagonal $\Delta_f$ is an isomorphism, so $f$ is separated and universally injective ($\Delta_f$ is surjective). But I don't know how to show that $\kappa ( f ( x )) \rightarrow \kappa ( x )$ is an isomorphism. Any help would be appreciated!
Since being a monomorphism is preserved by base change, we can base change along $\operatorname{Spec} \kappa(f(x))\to S$ and assume that $S$ is the spectrum of a field. Next, since open immersions are monomorphisms, we may replace $X$ by an affine open neighborhood of $x$ and assume $X$ is affine. Thus we've reduced to the case where $\operatorname{Spec} R\to \operatorname{Spec} k$ is a monomorphism, which implies that its diagonal map is an isomorphism. But the multiplication map $R\otimes_k R \to R$ can only be an isomorphism if $\dim_k R \leq 1$, and as $R$ admits a nonzero map to $\kappa(x)$, it is nonzero and we have the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $2p_ip_j\lt p_{j+1}^2$ for $i\lt j$? Initially, I was wondering is the following statement true or not- Let $p_k$ be the $k^{\text{th}}$ prime. Then, $2p_ip_j\lt p_{j+1}^2$, where $i\le j$ But then I realized that the case $i=j$ has obvious counter-examples like- $2 \times 5^2 \gt 7^2$ So I excluded the possibility of $i=j$, then the statement becomes Let $p_k$ be the $k^{\text{th}}$ prime. Then, $2p_ip_j\lt p_{j+1}^2$, where $i\lt j$. I cannot find any counter-examples for this. So I thought about using Bertrand's Postulate which states $2p_j\gt p_{j+1}$ and the fact that $\frac{p_i}{p_{j+1}}\lt 1$ since $i\lt j+1$. So, $\frac{p_i}{p_{j+1}}\lt 1$ $\implies -\frac{p_i}{p_{j+1}}>-1$ Multiplying this with the statement of Bertrand's Postulate gives us $-\frac{2p_ip_j}{p_{j+1}}>-p_{j+1}$ $\implies 2p_ip_j<p_{j+1}^2$ Obviously, we can't just multiply inequalities, especially when dealing with negative numbers and not only that but this line of reasoning also implies the case $i=j$ since $j\lt j+1$ which I just disproved. To validate my reasoning I was trying to investigate how to multiply inequalities when dealing with negative numbers but that quickly became too complicated for me to work with. So my question is- Is it possible to validate my reasoning so that it works? If not, I would like to have a proof of the statement.
Try a few examples before attempting to prove anything.

2 x 13 x 17 > 361
2 x 17 x 19 > 529
2 x 19 x 23 > 841
2 x 23 x 29 > 961
2 x 29 x 31 > 1369
2 x 31 x 37 > 1681
2 x 37 x 41 > 1849
2 x 41 x 43 > 2209
2 x 43 x 47 > 2809
2 x 47 x 53 > 3481
2 x 53 x 59 > 3721
2 x 59 x 61 > 4489
2 x 61 x 67 > 5041
2 x 67 x 71 > 5329
2 x 71 x 73 > 6241
2 x 73 x 79 > 6889
2 x 79 x 83 > 7921

Your attempt to use Bertrand's postulate leads in the opposite direction. Feel it with your heart! Bertrand's postulate states the closeness of primes -- the next prime will never be farther than twice this prime. However, your conjecture states the apartness of primes -- twice the product of smaller primes will never reach the square of a bigger prime. From this, we can conclude that if you are to prove your conjecture (which you can't, but we can learn from it), Bertrand's postulate will never, ever help you! It will always push you towards the other direction. Unless... Unless you come across a situation where an inequality is "flipped" by a constraint. For instance, given $a+b+c + abc = 10$, suppose you have the inequality $a+b+c > 3$, which states that $a,b,c$ are big. Then you can get the inequality $abc < 7$, stating that $a,b,c$ are small. This is an example of how inequalities may be "flipped".
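Counterexamples like these can also be generated mechanically (my own sketch, using a simple sieve); it even turns up $2 \cdot 11 \cdot 17 = 374 > 361$, which the list above skips:

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

ps = primes_up_to(100)

# triples (p_i, p_j, p_{j+1}) with i < j violating 2 p_i p_j < p_{j+1}^2
counterexamples = [(ps[i], ps[j], ps[j + 1])
                   for j in range(len(ps) - 1) for i in range(j)
                   if 2 * ps[i] * ps[j] >= ps[j + 1] ** 2]
print(counterexamples[:3])
```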
{ "language": "en", "url": "https://math.stackexchange.com/questions/4194916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If symmetric $n \times n $ matrix $S^3=I_n$, does $S=I_n$? Let $S$ be a real symmetric $n \times n$ matrix. Does $S^3=I_n$ imply $S=I_n$? I started by looking at $S^2=I_n$ and found that this does not imply $S=I_n$, because of the counterexample \begin{align}\begin{bmatrix} -\sin(\alpha) & \cos(\alpha) \\ \cos(\alpha) & \sin(\alpha) \end{bmatrix} \end{align} For $S^3$ however, I do not really know how to approach the problem.
By the given conditions, \begin{aligned} \langle (S^2+S+I)x,x\rangle &=\langle (S^2+S^4+I)x,x\rangle\\ &=\langle Sx,Sx\rangle+\langle S^2x,S^2x\rangle+\langle x,x\rangle>0 \end{aligned} for every nonzero vector $x$. Hence $S^2+S+I$ is nonsingular. Yet, $S^2+S+I$ divides $S^3-I$. Therefore...
{ "language": "en", "url": "https://math.stackexchange.com/questions/4195081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Proving an inequality given $\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\le 1$ Given that $a,b,c > 0$ are real numbers such that $$\frac{a}{b+c+1}+\frac{b}{c+a+1}+\frac{c}{a+b+1}\le 1,$$ prove that $$\frac{1}{b+c+1}+\frac{1}{c+a+1}+\frac{1}{a+b+1}\ge 1.$$ I first rewrote $$\frac{1}{a+b+1} = 1 - \frac{a+b}{a+b+1},$$ so the second inequality can be rewritten as $$\frac{b+c}{b+c+1} + \frac{c+a}{c+a+1} + \frac{a+b}{a+b+1} \le 2.$$ Cauchy-Schwarz gives us $$\sum \frac{a+b}{a+b+1} \geq \frac{(\sum \sqrt{a+b})^2}{\sum a+ b+ 1}.$$ That can be rewritten as $$\frac{2(a+b+c) + 2\sum \sqrt{(a+b)(a+c)}}{2(a+b+c) + 3},$$ which is greater than or equal to $$\frac{2(a+b+c) + 2 \sum(a + \sqrt{bc})}{2(a+b+c) + 3} = \frac{4(a+b+c) + 2 \sum \sqrt{bc}}{2(a+b+c) + 3} \geq 2,$$ which is the opposite of what I want. Additionally, I'm unsure of how to proceed from here.
Applying Jensen's inequality to the convex function $ f(x) = \frac{ x} { (a+b+c+1) - x } $, we have $$ 1\geq \sum \frac{a}{b+c+1} = f(a) + f(b) + f(c) \geq 3 f ( \frac{a+b+c } { 3} ) = 3 \times \frac{ a + b + c } { 2a + 2b + 2c + 3 } \Rightarrow a + b + c \leq 3.$$ Applying Jensen's inequality to the convex function $ g(x) = \frac{ 1 } { (a+b+c+1) - x }$, we have $$ \sum \frac{1}{ b+c+1} = g(a) + g(b) + g(c) \geq 3 g( \frac{ a+b+c} { 3 } ) = 3 \times \frac{ 3 } { 2a+2b+2c + 3 } \geq 1. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4195399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Preimage of disjoint sets is disjoint I'm reading through James Munkres' book, Topology, and the theorem about the image of a connected space under mapping being connected. To paraphrase the proof: Let $f:X\to f(X)$ be a continuous map; let $X$ be connected. Suppose that $f(X)=A\bigcup B$ is a separation of $f(X)$ into two disjoint nonempty sets open in $f(X)$. Then $f^{-1}(A)$ and $f^{-1}(B)$ are disjoint sets whose union is $X$. (The proof continues but I'll stop here.) My question is why are $f^{-1}(A)$ and $f^{-1}(B)$ disjoint in $X$? I've seen that $f^{-1}(A\cap B)\subset f^{-1}(A)\cap f^{-1}(B)$, but I don't think this helps as I want to know $f^{-1}(A)\cap f^{-1}(B)=\emptyset$ from the fact that $A\cap B=\emptyset$.
You don't actually need any special conditions or much reasoning at all here; this is a consequence of the requirement that a function match each input to a single output. Imagine the claim were false. Then there must be some $x$ in the preimages of both $A$ and $B$. This is impossible, because $x$ can only be mapped to a single value, but $A$ and $B$ are disjoint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4195556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the number of subgroups of index $2$ in $(\mathbb{Z}/2\mathbb{Z})^3$? What is the number of subgroups of index $2$ in $(\mathbb{Z}/2\mathbb{Z})^3$? I know that a subgroup of index $2$ in $(\mathbb{Z}/2\mathbb{Z})^3$ is isomorphic to $(\mathbb{Z}/2\mathbb{Z})^2$ because of the order of its elements, but I cannot say what the number of these subgroups is. Also, is it possible to generalise this to the number of subgroups of index $2$ of $(\mathbb{Z}/2\mathbb{Z})^n$?
If you pick two distinct non-identity elements, they are going to generate such a subgroup. So the answer is $\binom{7}{2}$. But wait! Each subgroup can be generated in three ways like this. Hence the answer is $\binom{7}{2}/3 = 7$. If you want to generalize it to $\mathbb Z_2^n$, you can use linear algebra. Notice that the map that sends each subspace of dimension $n-1$ to its orthogonal complement under the obvious inner product is a bijection between dimension-$(n-1)$ subspaces and dimension-$1$ subspaces (to prove bijectivity you can use the $e_i$ vectors, and see here). Hence there are $2^n-1$ subgroups of index $2$.
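The count can be confirmed by brute force, encoding elements of $(\mathbb{Z}/2\mathbb{Z})^3$ as 3-bit integers with XOR as the group operation (a sketch):

```python
from itertools import combinations

# subgroups of order 4 in (Z/2Z)^3: pick two distinct nonzero
# generators a, b; the subgroup they span is {0, a, b, a ^ b}
subgroups = {frozenset({0, a, b, a ^ b})
             for a, b in combinations(range(1, 8), 2)}
print(len(subgroups))  # 7 = 2^3 - 1 subgroups of index 2
```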
{ "language": "en", "url": "https://math.stackexchange.com/questions/4195856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing $|G|=p^r \Rightarrow |G| \equiv |Z(G)| \pmod{p}$ If $|G|=p^r$, then $|G| \equiv |Z(G)| \pmod{p}$ I think it's enough to show that $\sum_{|o(x)|>1}|G:st(x)|=np,\ n\in\mathbb{N}$, and use the proposition $|G|=|Z(G)|+ \sum_{|o(x)|>1}|G:st(x)|$; then we get $|G| \equiv |Z(G)| \pmod{p}$. (Here $st(x)$ is the stabilizer of $x$, and $o(x)$ is the orbit of $x$.) Now, because $|G|=|st(x)||o(x)|$, we get that $|st(x)|$ divides the order of $G$, so it must divide $p^r$, and we can say it's of the form $p^m$, $m< r$, hence it will also divide $p^{r-1}$. Now $\sum_{|o(x)|>1}|G:st(x)|=\frac{p^r}{p^{m_1}}+\dots +\frac{p^r}{p^{m_n}}=p\left(\frac{p^{r-1}}{p^{m_1}}+\dots +\frac{p^{r-1}}{p^{m_n}}\right)=pn$ Is this correct, and is there an easier and more intuitive way to prove it?
We want to prove that if $|G|=p^n$ then $p \mid |Z(G)|$. If $G=Z(G)$ we're done, so assume $G\neq Z(G)$. By the class equation we get: $|G|=|Z(G)|+\sum|G:C_G(g_i)|$, where the $g_i\not\in Z(G)$ are representatives of the non-central conjugacy classes. Each index satisfies $p \mid |G:C_G(g_i)|$ (it is a nontrivial power of $p$, since $C_G(g_i)\neq G$), and also $p \mid |G|$, so by isolating $|Z(G)|$ we see that $p \mid |Z(G)|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4195958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find all pairs of $(n,k)$ where $n \ge 2$ and $k$ is a prime number such that $\log_n (n + k)$ is rational I found the following math question interesting: Find all pairs of $(n,k)$ where $n \ge 2$ and $k$ is a prime number such that $\log_n (n + k)$ is rational. I tried this by contradicting certain statements such as $\log_n (n+k)=2$ and $\log_n (n+k)=C$. I contradicted both (final contradictions shown below): $k=n(n-1)$ -- contradiction, since $k$ is a prime number (except for $n=2$); $k=n^{C-1}(n-1)$ -- contradiction again, since $k$ is a prime number. However, I don't know if I did them correctly, nor how to go on from here. What would be the next steps? Are my contradictions correct? If possible can you please explain how you would attack this question with a possible solution? Thanks a lot.
Let $A=n^p=(n+k)^q$ where $p,q$ are coprime. Since $A$ is both a $p$-th and $q$-th power, $A=a^{pq}$ where $a$ is an integer. Can you now express $k$ in terms of $a$?
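Following the hint: if $\log_n(n+k) = p/q$ in lowest terms, then $n = a^q$ and $n+k = a^p$ for an integer $a \ge 2$, so $n$ and $n+k$ must be powers of a common base. A bounded search over that form (my own sketch) finds only $(n,k)=(2,2)$, where $\log_2 4 = 2$:

```python
def powers_of(a, limit):
    out, v = [], a
    while v <= limit:
        out.append(v)
        v *= a
    return out

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

LIMIT = 10 ** 6
hits = set()
for a in range(2, 1000):
    pows = powers_of(a, LIMIT)
    for n in pows:          # n = a^q
        for m in pows:      # n + k = a^p
            k = m - n
            if is_prime(k):
                hits.add((n, k))
print(sorted(hits))  # [(2, 2)]
```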
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
beta distribution as ratio gamma distributions I need a proof of this statement please: Let $Y_1$ and $Y_2$ be independent random variables, where $Y_1$ is gamma distributed with parameters $\alpha$ and 1 and $Y_2$ is gamma distributed with parameters $\beta$ and 1. Then the random variable $X$ presented by the following formula: \begin{equation} \label{lemma5} \displaystyle X = \frac{Y_1}{Y_1+Y_2} \end{equation} is beta distributed with parameters $\alpha$ and $\beta$.
It is not difficult. I cannot show you the entire proof because your question does not include your own work, but here is a useful hint. To simplify the notation, let me set $X,Y$ as the two independent Gamma rvs and let's derive the law of $$U=\frac{X}{X+Y}$$ The starting point is the following system $$\begin{cases} u=\frac{x}{x+y} \\ v=x \end{cases}\rightarrow \begin{cases} x=v \\ y=v\frac{1-u}{u} \end{cases}$$ with Jacobian $|J|=\frac{v}{u^2}$. Substitute in $f_{XY}(x,y)$ and solve the integral in $dv$, finding your beta density. It is not difficult, so show your work by amending your question and, just in case, I will take you to the solution.
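Not a proof, but a quick Monte Carlo sanity check of the claim using only the standard library (the sample-mean tolerance is a loose bound of mine): the simulated ratio should match the $\mathrm{Beta}(\alpha,\beta)$ mean $\alpha/(\alpha+\beta)$.

```python
import random

random.seed(42)
alpha, beta = 2.0, 5.0   # shape parameters; scale fixed at 1
N = 20000

samples = []
for _ in range(N):
    y1 = random.gammavariate(alpha, 1.0)
    y2 = random.gammavariate(beta, 1.0)
    samples.append(y1 / (y1 + y2))

mean = sum(samples) / N
# Beta(alpha, beta) has mean alpha / (alpha + beta) = 2/7
print(mean)  # close to 0.2857...
```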
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about the definition of the Riemann integral Let $[a,b]$ be an interval and $f$ a function with domain $[a,b]$. We say that the Riemann sums of $f$ tend to a limit $l$ as $m(P)$ tends to $0$ if, for any $\epsilon > 0$, there is a $\delta > 0$ such that, if $P$ is any partition of $[a,b]$ with $m(P) < \delta$, then $|R(f,P)-l| < \epsilon$ for every choice of $s_j \in I_j$. My question is: I don't see how "as $m(P)$ tends to $0$" is embodied in this definition. Source: Real Analysis and Foundations, Book by Steven G. Krantz
The interval $[a,b]$ is partitioned as $a = x_0 < x_1 < \ldots < x_{n-1} < x_n = b$ for some $n\in\mathbb N$. In short this partition is denoted by $P$, and $m(P)$ denotes the length of its largest sub-interval. The idea is that if the largest sub-interval goes to $0$ in length, so must all the others. If the function is Riemann integrable we might write something like $$ L := \lim _{m(P) \to 0} \sum f(x_i)\Delta_i =: \lim _{m(P)\to 0}R(f,P) $$ where $x_i$ are the knots and $\Delta _i := x_{i}-x_{i-1}$, i.e. the length of the respective sub-interval. So by definition of the limit, for every $\varepsilon >0$ there exists $\delta>0$ such that $m(P)<\delta$ implies $|R(f,P)-L|<\varepsilon$. Note that this is not an ordinary limit as $n\to\infty$: a partition can have many points and still have a large mesh. The phrase "as $m(P)$ tends to $0$" is embodied precisely in the $\varepsilon$-$\delta$ clause: every partition whose mesh is below the threshold $\delta$, however its points are placed, must produce a Riemann sum within $\varepsilon$ of $L$.
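The definition can be watched in action numerically. As an illustration of mine (not from the book), take $f(x)=x^2$ on $[0,1]$, build random partitions whose mesh $m(P)$ shrinks, pick arbitrary sample points $s_j$, and see the Riemann sums approach $\int_0^1 x^2\,dx = 1/3$ regardless of how the points are placed:

```python
import random

random.seed(0)

def riemann_sum(f, a, b, mesh):
    """One Riemann sum R(f, P) for a random partition P with m(P) < mesh."""
    knots = [a]
    while knots[-1] < b:
        knots.append(min(b, knots[-1] + random.uniform(mesh / 10, mesh)))
    total = 0.0
    for left, right in zip(knots, knots[1:]):
        s = random.uniform(left, right)  # arbitrary choice of s_j in I_j
        total += f(s) * (right - left)
    return total

sums = [riemann_sum(lambda x: x * x, 0.0, 1.0, m) for m in (0.1, 0.01, 0.001)]
print(sums)  # approaches 1/3 as the mesh shrinks
```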
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is “implies” the best symbol when rewriting equations? In my mathematical homework, I usually indicate algebraic rewrites of equations using implication, and the symbol "$\implies$" (LaTeX \implies). For instance, I might write $$ 3 x - y = 0 \implies 3 x = y \implies x = \frac{y}{3} $$ to mean that, since $3 x - y = 0$, the equivalent equation $3 x = y$ is also true, which then indicates that $x = y/3$ is true. Is logical implication the correct facility to express rewriting an equation into an equivalent form? If not, what other concept and symbol would be correct here?
The traditional symbol for "therefore" is $\therefore$ (\therefore in $\LaTeX$). I recommend it in your example, because "$A$ implies $B$" and "$A$ therefore $B$" don't mean the same thing: the former means that if $A$ holds, then so does $B$; the latter means that $A$ holds, and as a consequence of that so does $B$. In your example, with $A \equiv 3x - y = 0$, $B \equiv 3x = y$ and so on, it is the latter reading that you want. For a simpler example: "$0 = 1 \implies 1 = 1$" is true, but "$0 = 1 \therefore 1 = 1$" is false. (Technically $\therefore$ is just logical conjunction, but with an implied hint that the right conjunct is easily derivable from the left conjunct.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Solution integral $\;\displaystyle \iint \sqrt{\cos^2(x \pi)+\sin^2(y \pi)} \ dx\,dy$ Working on a hobby project: "Circle from (2D) random walk" [SE] and came across this integral: $$\bar{R}=\int_0^1 \int_0^1 \sqrt{\cos^2(X \pi)+\sin^2(Y \pi)} \ dX\,dY$$ My intention is to have the mean vector length of every vector (starting in the origin) in a square: $x \in [0,1]$ and $y \in [0,1]$, where $x=\cos(X \pi)$ and $y=\sin(Y \pi)$. Initially I solved it numerically with Python (taking a sample of vectors):

import numpy as np
x = np.linspace(-np.pi/2, 0, 1001)
y = np.linspace(0, np.pi/2, 1001)
X, Y = np.meshgrid(x, y)

def radius(x, y):
    return np.sqrt(np.cos(x)**2 + np.sin(y)**2)

z = np.array([radius(x, y) for (x, y) in zip(np.ravel(X), np.ravel(Y))])
print(np.mean(z))

Giving: $$\bar{R}=0.95802...$$ Solving the integral with Wolfram Alpha (online) gives: integral \sqrt(cos^2(x*pi)+sin^2(y*pi)) dxdy from x=0 to 1 and y=0 to 1 $$\bar{R}=0.958091\ldots$$ The values seem to match, and it looks like I am taking the mean vector length within the square. $X$ and $Y$ are random values between $[0,2]$ in the original problem. Is this integral known? And how to solve it? I noticed that I can replace $\sin^{2}$ by $\cos^{2}$, giving: $$\bar{R}=\int_0^1 \int_0^1 \sqrt{\cos^2(X\pi) + \cos^2(Y\pi)} \ dX\,dY$$ or: $$\bar{R}=\int_0^1 \int_0^1 \sqrt{\sin^2(X\pi) + \sin^2(Y\pi)} \ dX\,dY$$ This does not help me gain more intuition. I would like to learn more about this integral; where should I start? And what do the antiderivatives (without the integration limits) look like? EDIT: the original formula without $\cos$ and $\sin$ looks like: $\;\displaystyle \bar{R}=\frac{1}{a^2} \int_0^a \int_0^a \sqrt{x^2+y^2} \ dx\,dy$. Here Wolfram Alpha (online) gives a complicated, overwhelming formula. Not sure if a nice compact solution exists.
The partial answer I posted showed that the $pdf$ (probability density function) of $R^{2}-1$ is equal to: $$R^2-1 \overset{\mathrm{d}}{=} \left\lvert \dfrac{2}{\pi^{2} R} \cdot K \left( 1- \dfrac{1}{R^{2}} \right) \right\rvert $$ With $\overset{\mathrm{d}}{=}$ denoting equality in distribution, $K$ the complete elliptic integral and $R$ the vector length. This distribution can be transformed from $R^{2}-1$ to the distribution of $R$, see: MSE. I have little experience with transformations of $pdf$'s; my level is amateur/hobby and I do not know the formal notation. First, define $Y=R^{2}-1$, so $dY/dR=2R$. $$G(Y)=\int_{-1}^{1} \left\lvert \dfrac{2}{\pi^{2} Y} \cdot K \left( 1- \dfrac{1}{Y^{2}} \right) \right\rvert dY =1 $$ $$G(R)=\int_{0}^{\sqrt{2}} \left\lvert \dfrac{4R}{\pi^{2} (R^{2}-1 )} \cdot K \left( 1- \dfrac{1}{(R^{2}-1)^{2}} \right) \right\rvert dR =1 $$ The function inside the integral, $g(R)$, is plotted and corresponds to the observed data. The question asks for the mean vector length $\bar{R}$, the solution to the integral: $$\bar{R}=\int_{0}^{1} \int_{0}^{1} \sqrt{\cos^2(X \pi)+\sin^2(Y \pi)} \ dX \,dY$$ The $pdf$ of the radius $R$ is found to be: $$g(R)= \left\lvert \dfrac{4R}{\pi^{2} (R^{2}-1 )} \cdot K \left( 1- \dfrac{1}{(R^{2}-1)^{2}} \right) \right\rvert $$ The mean value $\bar{R}$ of this $pdf$ is the solution to the following integral (multiply the $pdf$ by $R$ and integrate), see Wiki. $$\boxed{\bar{R}= \int_{0}^{\sqrt{2}} \left\lvert \dfrac{4R^{2}}{\pi^{2} (R^{2}-1 )} \cdot K \left( 1- \dfrac{1}{(R^{2}-1)^{2}} \right) \right\rvert dR }$$ With my available tools I cannot find a nice simple solution, though integrating $G(R)$ gives a Meijer G function, just like mentioned in the comments. Assuming the red and blue squares (see plot) have the same area, I calculated the mean value from both. $R \in [0,1]$: with mean value $\bar{R}_1=\int_{0}^{1} \left\lvert 2 R \cdot g(R) \right\rvert \ dR$; note: multiply by $2$ to set half the area to $1$.
Solution with Wolfram Alpha online: integrate 2*4R^2/(pi^2*(R^2-1))*K(1-1/(R^2-1)^2) dR from R=0 to 1 $$\bar{R}_1=0.737076...$$ $R \in [1,\sqrt{2}]$: with mean value $\bar{R}_2=\int_{1}^{\sqrt{2}} 2 R \cdot g(R) \ dR$: integrate 2*4R^2/(pi^2*(R^2-1))*K(1-1/(R^2-1)^2) dR from R=1 to sqrt(2) $$\bar{R}_2=1.179107...$$ The mean value: $$\bar{R}=\frac{0.737076...+1.179107...}{2}=0.958092...$$ With this method the same solution is found as in the question. So my question is (partially) answered, and I have gained more insight into this integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
How can I calculate the partial harmonic series $\sum_{k=1}^n\frac{\left(H_k\right)^2-H^{(2)}_k}{k}$? I have been trying to calculate $$S_n=\sum_{k=1}^n\frac{\left(H_k\right)^2-H^{(2)}_k}{k} $$ I'm trying to apply Abel's summation, i.e.: $$ \sum_{k=m}^n a_kb_k=A_nb_n-A_{m-1}b_m-\sum_{k=m}^{n-1}A_k\left(b_{k+1}-b_k\right)\tag{1}$$ where $\displaystyle A_k=\sum_{i=1}^k a_i$. Taking $m=1$, $\displaystyle a_k=\frac1k\to A_k=H_k$ and $\displaystyle b_k=\left(H_k\right)^2-H^{(2)}_k$ in $(1)$, I got that: \begin{align*} S_n&=H_n\left\{ \left(H_n\right)^2-H^{(2)}_n\right\}-\sum_{k=1}^{n-1}H_k\left\{ \left(H_{k+1}\right)^2-H^{(2)}_{k+1}-\left(H_{k}\right)^2+H^{(2)}_{k}\right\} \\ &=H_n\left\{ \left(H_n\right)^2-H^{(2)}_n\right\}-\sum_{k=1}^{n-1}H_k\left\{ \left(H_{k+1}+H_k\right)\left(H_{k+1}-H_k\right)-H^{(2)}_{k+1}+H^{(2)}_{k}\right\}\tag{2}\\ &=H_n\left\{ \left(H_n\right)^2-H^{(2)}_n\right\}-\sum_{k=1}^{n-1}H_k\left\{ \left(2H_{k}+\frac1{k+1}\right)\left(\frac1{k+1}\right)-\frac1{(k+1)^2}\right\}\\ &=H_n\left\{ \left(H_n\right)^2-H^{(2)}_n\right\}-2\sum_{k=1}^{n-1}\frac{\left(H_k\right)^2}{k+1}\color{red}{-\sum_{k=1}^{n-1}\frac{H_k}{(k+1)^2}}\color{red}{+\sum_{k=1}^{n-1}\frac{H_k}{(k+1)^2}}\\ &=H_n\left\{ \left(H_n\right)^2-H^{(2)}_n\right\}-2\sum_{k=1}^{n }\frac{\left(H_{k-1}\right)^2}{k} \end{align*} Notice that in $(2)$ I used the facts $\displaystyle H_{k+1}=H_k+\frac{1}{k+1}$ and $\displaystyle H_{k+1}^{(2)}=H_k^{(2)}+\frac{1}{(k+1)^2}$. My difficulty here is to evaluate the last sum above. Thank you in advance for any tip or solution; I accept any approach to the original problem.
First, I wondered if the sign between the two harmonic numbers in the numerator is correct. Supposing this is the case, then we have $$\sum_{k=1}^n\frac{H_k^2-H_k^{(2)}}{k}=\frac{1}{3}(H_n^3-3 H_nH_n^{(2)}-4 H_n^{(3)})+2\sum_{k=1}^n\frac{H_k}{k^2},\tag1$$ where for the last sum there is no known simpler form (e.g., to express it in terms of harmonic numbers). During the calculations, I used that $$\sum_{k=1}^n\frac{H_k^2+H_k^{(2)}}{k}=\frac{1}{3}(H_n^3+3 H_nH_n^{(2)}+2 H_n^{(3)}),$$ which appears in (Almost) Impossible Integrals, Sums, and Series (for a proof, see pages $61$-$62$). Also, by Abel's summation we immediately have that $$\sum_{k=1}^n\frac{H_k}{k^2}+\sum_{k=1}^n\frac{H_k^{(2)}}{k}=H_n H_n^{(2)}+H_n^{(3)},$$ which is useful to get the result in $(1)$.
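The identity $(1)$ can be verified exactly in rational arithmetic for small $n$; here is a quick check of mine using Python's `fractions`:

```python
from fractions import Fraction

def H(n, r=1):
    """Generalized harmonic number H_n^{(r)}."""
    return sum(Fraction(1, k ** r) for k in range(1, n + 1))

def identity_holds(n):
    lhs = sum((H(k) ** 2 - H(k, 2)) / k for k in range(1, n + 1))
    rhs = (Fraction(1, 3) * (H(n) ** 3 - 3 * H(n) * H(n, 2) - 4 * H(n, 3))
           + 2 * sum(H(k) / k ** 2 for k in range(1, n + 1)))
    return lhs == rhs

ok = all(identity_holds(n) for n in range(1, 20))
print(ok)  # True
```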
{ "language": "en", "url": "https://math.stackexchange.com/questions/4196973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Differentiate $f(x)=e^x\sin x$ $f(x)=e^x\sin x$ $$f'(x)=e^x\sin x+e^x\cos x $$ $$=e^x\sin x+e^x\sin (\frac{\pi}{2}+x)$$ That's what I got. But my book wrote something else: $$f'(x)=\sqrt{2}e^x\sin (x+\frac{\pi}{4})$$ How did they take the common factor like that?
Note that$$\sqrt2\sin\left(x+\frac\pi4\right)=\sqrt2\left(\sin(x)\cos\left(\frac\pi4\right)+\cos(x)\sin\left(\frac\pi4\right)\right)=\sin(x)+\cos(x).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4197151", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Fréchet derivative, redundant norm in the definition? The Fréchet derivative of a map $f:V \to W$ between Banach spaces at $x \in V$ is defined as the linear map $A:V \to W$ such that $$ \lim_{h\to 0} \frac{\| f(x-h) - f(x) - Ah \|_W}{\| h \|_V} = 0 \in \mathbb R $$ If the quotient goes to zero and the denominator goes to zero, then the numerator must as well, and so $$ \lim_{h\to 0} \| f(x-h) - f(x) - Ah \|_W = 0 \implies \\ \| \lim_{h\to 0} (f(x-h) - f(x) - Ah) \|_W = 0 \iff\lim_{h\to 0} (f(x-h) - f(x) - Ah) = 0 $$ I think that the norm in the numerator is redundant, so the definition could be written as $$ \lim_{h\to 0} \frac{f(x-h) - f(x) - Ah}{\| h \|_V} = 0 \in W $$ Is there any counterexample showing why this may not be equivalent?
Well, not really, because the norm specifies the topology in which the limit is taken. Yes, of course, the only choice of topology we have at our disposal is the one that is induced by $\lVert \cdot \rVert_W$. But then, what does the following mean? $$ \lim_{h \rightarrow 0} \frac{f(x-h) - f(x) -Ah}{\lVert h \rVert_V} =0 $$ This is defined as $$ \lim_{h \rightarrow 0} \left \lVert \frac{f(x-h) - f(x) -Ah}{\lVert h \rVert_V} \right \rVert =0 \iff \lim_{h \rightarrow 0} \frac{\lVert f(x-h) - f(x) -Ah \rVert_W}{\lVert h \rVert_V} =0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4197345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find two infinite subsets A and B of $\mathbb N$ (including $0$) such that every natural number can be written uniquely as the sum of a number in A and a number in B. Recently encountered the following question: Find two infinite subsets $A$ and $B$ of $\mathbb N$ (including $0$) such that every natural number can be written uniquely as the sum of a number in $A$ and a number in $B$. I have tried finding some sets and so far have only found the following: $A = [1,3,5,7,9,11...]$ $B = [0,1,2,3,4,5...]$ Is there a way to find a function, possibly, for all the values of the subsets? How would you attack this question: would you use brute force or some other tool? If so, what? And if possible, can you please post your solution to the problem. Thanks heaps.
You can choose sums of distinct powers of 4 for the set A ($\{0, 1, 4, 5, 16, 17, 20, ...\}$), and twice the sums of distinct powers of 4 for the set B ($\{0, 2, 8, 10, 32, 34, 40, ...\}$). The sets A and B correspond to the OEIS sequences A000695 and A062880, respectively. The underlying idea is to separate even and odd indexed bits in binary expansions of a number $n$ to get the unique element of A and of B that sum to that number $n$.
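A quick script (my own check, not part of the answer) confirms the construction for small numbers: $A$ keeps only the even-position binary bits, $B$ only the odd-position bits, and every $n$ then splits uniquely as $a+b$:

```python
ODD_MASK = sum(1 << i for i in range(1, 20, 2))    # bits 1, 3, 5, ...
EVEN_MASK = sum(1 << i for i in range(0, 20, 2))   # bits 0, 2, 4, ...

LIMIT = 256
A = [x for x in range(LIMIT) if x & ODD_MASK == 0]   # 0, 1, 4, 5, 16, 17, ...
B = [x for x in range(LIMIT) if x & EVEN_MASK == 0]  # 0, 2, 8, 10, 32, 34, ...

counts = [sum(1 for a in A for b in B if a + b == n) for n in range(LIMIT // 2)]
print(all(c == 1 for c in counts))  # True: every n has exactly one representation
```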
{ "language": "en", "url": "https://math.stackexchange.com/questions/4197463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Laplace transform of $t^2 \cos \omega t$ I have to find the Laplace transform of: $$f(t) = t^2 \cos \omega t$$ We know that $$\mathcal{L} (t^n f(t)) = (-1)^n \frac{d^n}{ds^n}{F(s)}$$ and $$\mathcal{L}(\cos \omega t) = \frac{s}{s^2 + \omega^2}$$ $$\therefore \mathcal{L} (t^2 \cos \omega t) = \frac{d^2}{ds^2} \frac{s}{s^2 + \omega^2}$$ Let $$F^{\prime\prime}(s) = \frac{d^2}{ds^2}F(s)$$ Using the quotient rule of differentiation, $$F^\prime(s) = \frac{\omega^2 -s^2}{(s^2 + \omega^2)^2}$$ However, when I try to differentiate $F^\prime(s)$ again to find $F^{\prime\prime}(s)$, it is an entire mess. Can someone help me differentiate it, or is there another way to find its Laplace transform? The answer is: $$\frac{2s(s^2 - 3 \omega^2)}{(s^2 + \omega^2)^3}$$
OP, your process is correct and the only thing left is to find $F^{\prime\prime}(s)$. $$F^\prime(s) = \frac{\omega^2 -s^2}{(s^2 + \omega^2)^2}$$ $$F^{\prime\prime}(s) = \frac{d}{ds} \frac{\omega^2 -s^2}{(s^2 + \omega^2)^2}$$ Quotient Rule: $$\frac{d}{ds} \frac{f(s)}{g(s)} = \frac{f'(s)g(s)-g'(s)f(s)}{g(s)^2}$$ So,$$\frac{d}{ds} \left(\frac{\omega^2 -s^2}{(s^2 + \omega^2)^2}\right) = \frac{(\frac{d}{ds}(\omega^2 -s^2))\ (s^2 + \omega^2)^2\ \ -\ \ (\frac{d}{ds}(s^2 + \omega^2)^2)\ (\omega^2 -s^2)}{((s^2 + \omega^2)^2)^2}$$ $$=\ \frac{(-2s)(s^2+\omega^2)^2\ -\ (2(s^2+\omega^2)\ (2s))\ (\omega^2 -s^2)}{(s^2 + \omega^2)^4}$$ $$=\ \frac{(-2s)(s^2+\omega^2)\ -\ (4s)\ (\omega^2 -s^2)}{(s^2 + \omega^2)^3}$$ $$=\ \frac{(-2s)(s^2+\omega^2)\ +\ (4s)\ (s^2-\omega^2)}{(s^2 + \omega^2)^3}$$ $$=\ \frac{(2s)\left(2(s^2-\omega^2)-(s^2+\omega^2)\right)}{(s^2 + \omega^2)^3}$$ $$=\ \frac{2s(s^2 - 3 \omega^2)}{(s^2 + \omega^2)^3}$$ $$\therefore\ F^{\prime\prime}(s) = \frac{d}{ds} \left(\frac{\omega^2 -s^2}{(s^2 + \omega^2)^2}\right)\ =\ \frac{2s(s^2 - 3 \omega^2)}{(s^2 + \omega^2)^3}$$ Thus, the Laplace transform of $f(t) = t^2 \cos \omega t$ is $$\frac{2s(s^2 - 3 \omega^2)}{(s^2 + \omega^2)^3}$$
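As an independent check of the final answer (my own sketch), one can compare the closed form with a direct numerical evaluation of $\int_0^\infty t^2\cos(\omega t)e^{-st}\,dt$; the values $s=2$, $\omega=1$ below are arbitrary sample choices:

```python
import math

s, w = 2.0, 1.0
h, T = 0.001, 40.0        # step size and truncation point (e^{-sT} is negligible)
f = lambda t: t * t * math.cos(w * t) * math.exp(-s * t)

# trapezoid rule on [0, T]
n = int(T / h)
numeric = h * (0.5 * f(0.0) + 0.5 * f(T) + sum(f(i * h) for i in range(1, n)))
closed = 2 * s * (s * s - 3 * w * w) / (s * s + w * w) ** 3   # = 4/125 = 0.032 here

print(numeric, closed)
```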
{ "language": "en", "url": "https://math.stackexchange.com/questions/4197586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What does the notation $\mathbb{Z}[X]$ refer to? Suppose that a mathematician wrote, Let $f_0, f_1, f_2, \cdots$ be functions in $\mathbb{Z}[X]$ What does $\mathbb{Z}[X]$ mean? I am aware that $\mathbb{Z}$ is used to denote the set of integers. In other words, $\mathbb{Z} = \{\cdots, -100000, \cdots, -3, -2, -1, 0, +1, +2, +3, \cdots, +100000, \cdots\}$ What on earth is $\mathbb{Z}[X]$?
$\mathbb Z[X]$ is the ring of polynomials with integer coefficients. Written out in set-builder notation: $\mathbb Z[X]=\{\sum_{i=0}^{n} a_iX^i \mid n\in\mathbb N_0,\, a_i\in\mathbb Z \}$. You can plug values of $\mathbb Z$ in for $X$ to get back an element of the base ring $\mathbb Z$. Though some authors may refer to a specific polynomial as "$f(x)\in \mathbb Z[X]$," we consider polynomials as distinct algebraic objects here rather than functions: we are not considering the polynomial as a morphism/mapping between two groups/rings/fields in the same way we would think about a homomorphism. $X$ in this case is considered an indeterminate, not a variable (you can read more about this here). $\mathbb Z[X]$ is a countably infinite ring (you can verify the ring axioms as an exercise!), but not a field: no polynomial ring is closed under multiplicative inverses; for instance, $X$ has no inverse in $\mathbb Z[X]$. This same notation applies to polynomial rings over other rings/fields. For example, $\mathbb Z_2[X]=\{\sum_{i=0}^{n} a_iX^i \mid n\in\mathbb N_0,\, a_i\in\mathbb Z_2 \}$ is the set of polynomials with coefficients $0$ or $1$. You can generate an ideal, a special subgroup of the polynomial ring, using one such $f(x)\in \mathbb Z[X]$.
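One concrete way to see elements of $\mathbb Z[X]$ as objects rather than functions is to represent them as coefficient lists. This is a toy sketch of mine (the representation choice is mine, not standard notation from the answer), where the list index is the exponent of $X$:

```python
def poly_mul(f, g):
    """Multiply two elements of Z[X] given as coefficient lists (f[i] is the coefficient of X^i)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def poly_eval(f, x):
    """Plug an integer x in for X, landing back in the base ring Z (Horner's rule)."""
    result = 0
    for c in reversed(f):
        result = result * x + c
    return result

# (1 + X) * (-1 + X) = -1 + X^2
print(poly_mul([1, 1], [-1, 1]))   # [-1, 0, 1]
print(poly_eval([-1, 0, 1], 3))    # 8, i.e. 3^2 - 1
```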
{ "language": "en", "url": "https://math.stackexchange.com/questions/4197836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Projection of vector a onto vector b using vector triple product We know the formula for the projection of a onto b $$ proj_{b} (a)=\left(\frac{a\cdot b}{||b||^2}\right)b = \left(\frac{a\cdot b}{b\cdot b}\right)b$$ and its length is called the component of a in the direction of b, written $$comp_b(a)=||proj_b(a)||=\frac{a\cdot b}{||b||}$$ How can we establish a relationship between the above formulas and the following projection using the vector triple product?
The sum of the vector projection of $\vec{a}$ onto a nonzero $\vec{b}$ and the orthogonal projection of $\vec{a}$ onto the plane orthogonal to $\vec{b}$ is equal to $\vec{a}$. More details are here $\rightarrow$ https://en.wikipedia.org/wiki/Vector_projection#:~:text=The%20projection%20of%20a%20vector%20on%20a%20plane,parallel%20to%20the%20plane%2C%20the%20second%20is%20orthogonal.
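Concretely, the BAC-CAB identity $\vec b\times(\vec a\times\vec b)=\vec a(\vec b\cdot\vec b)-\vec b(\vec a\cdot\vec b)$ rearranges to $\vec a = \operatorname{proj}_{\vec b}(\vec a)+\frac{\vec b\times(\vec a\times\vec b)}{\lVert\vec b\rVert^2}$, which ties the projection formula to the vector triple product. A quick numerical check of this decomposition (my own sketch, with random vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(size=3)
b = rng.normal(size=3)

proj = (a @ b) / (b @ b) * b                   # projection of a onto b
rej = np.cross(b, np.cross(a, b)) / (b @ b)    # component of a orthogonal to b

print(np.allclose(proj + rej, a))   # True: the two parts sum back to a
print(np.isclose(rej @ b, 0.0))     # True: the triple-product part is orthogonal to b
```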
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Maximum possible value of a positive integer $n$, such that for any choice of seven distinct elements from ${1, 2, .., n},$ there will exist What is the maximum possible value of a positive integer $n$, such that for any choice of seven distinct elements from ${1, 2, ..., n},$ there will exist two numbers $x$ & $y$ satisfying $1 < x/y \leq 2$ What seems unclear to me are the following points, A) Is $2y$ supposed to be $\leq$ n ? B) Is there some standard approach to find the maximum possible value? Moreover, usually, I try to take a small set to try out the given conditions. In this case, is it possible to take a smaller set and use it to draw conclusions for the larger set? The answer is $2^7 - 2$
Starting with smaller values is the key here. Elimination can be done once a pattern is seen. On the basis of the conditions given ($1 < x/y \leq 2$), try taking some numbers. Notice that, as mentioned in the comments, choosing $1$ restricts the entry of $2\times1$. Thereafter, choosing $3$ restricts the entry of all numbers $\leq 2 \times 3$. Finally we are left with $7$. This clearly points to the pattern that each forced choice is a number of the form $2^t-1$. Continuing the trend, we easily observe that the set $\{1,3,7,15,...,127 = 2^7-1\}$ is a set consisting of seven elements, no two $x,y$ of which satisfy the criterion that $1 < \frac{x}{y} \leq 2$. Therefore, we know that the answer to the question is at most $127$. We guess that it is $126$. Indeed, here is a proof. Divide $\{1,2,\dots,126\}$ into the following parts: $S_1 =\{1,2\}$, $S_2 = \{3,4,5,6\}$, $S_3 = \{7,8,...,14\}$, ..., $S_6 = \{63,64,...,126\}$. Now, if you pick any two elements $x,y$ from the same $S_i$, then either $1 < \frac{x}{y} \leq 2$ or $1 < \frac {y}{x} \leq 2$. This can be easily checked. By the pigeonhole principle, if we pick $7$ elements between $1$ and $126$, two of them must be in the same $S_i$, and hence we are done.
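Both halves of the argument are easy to machine-check for this instance; a small script of mine:

```python
# The 7-element set {2^t - 1} admits no pair with 1 < x/y <= 2 ...
witness = [2 ** t - 1 for t in range(1, 8)]          # 1, 3, 7, ..., 127
assert not any(1 < x / y <= 2 for x in witness for y in witness if x != y)

# ... while the 6 blocks {2^i - 1, ..., 2^(i+1) - 2} cover 1..126 and any
# two distinct elements of the same block have ratio in (1, 2].
blocks = [list(range(2 ** i - 1, 2 ** (i + 1) - 1)) for i in range(1, 7)]
assert sorted(sum(blocks, [])) == list(range(1, 127))
for blk in blocks:
    for x in blk:
        for y in blk:
            if x > y:
                assert 1 < x / y <= 2

print("both checks pass")
```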
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many solutions the equation $(x-2)(x+1)(x+6)(x+9)+108=0$ has in the interval $(-10,-1)$? How many solutions the equation $(x-2)(x+1)(x+6)(x+9)+108=0$ has in the interval $(-10,-1)$ ? Here is my work: By expanding the expression we get, $$(x^2-x-2)(x^2+15x+54)+108=x^4+14x^3+37x^2-84x$$ So I got $x(x^3+14x^2+37x-84)=0$. One root is zero which doesn't lie in the interval $(-10,-1)$. But I don't know how many roots the cubic equation has in that interval.
I just solved the equation with the following method: $$\color{red}{(x-2)}\color{green}{(x+1)(x+6)}\color{red}{(x+9)}+108=0$$ $$(x^2+7x-18)(x^2+7x+6)+108=0$$ By using the substitution $t=x^2+7x$ we get, $$t^2-12t=0\Rightarrow t_1=0\quad,t_2=12$$So we have, $x^2+7x=0\Rightarrow \quad x_1=0 ,\quad x_2=-7$ $x^2+7x-12=0\Rightarrow\quad x_{3}=\dfrac{-7+\sqrt{97}}{2},\quad x_4=\dfrac{-7-\sqrt{97}}{2}$ Hence $x_2,x_4\in(-10,-1)$, so the equation has two solutions in the given interval.
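The four roots and the count inside $(-10,-1)$ can be confirmed numerically (my own check):

```python
f = lambda x: (x - 2) * (x + 1) * (x + 6) * (x + 9) + 108

roots = [0.0, -7.0, (-7 + 97 ** 0.5) / 2, (-7 - 97 ** 0.5) / 2]
for r in roots:
    assert abs(f(r)) < 1e-6        # each candidate really solves the equation

in_interval = [r for r in roots if -10 < r < -1]
print(in_interval)                 # two solutions: -7 and (-7 - sqrt(97))/2
```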
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Calculating $\lim_{x \to 0} \frac{\sin(x^2)-\sin^2(x)}{x^2\ln(\cos x)}$ without L'Hospital's rule this is my first post here so excuse the lack of knowledge about how things usually go. My question revolves around calculating the limit as $x$ approaches $0$ of the following function: $$\lim_{x \to 0} \frac{\sin(x^2)-\sin^2(x)}{x^2\ln(\cos x)}$$ The question came up in a test about a month ago and while I couldn't solve it in the test I've been working on it since then but I can't seem to get it. I know the limit is supposed to be $\frac{-2}{3}$ from some online calculators which abused l'hopital rule over and over again. I've tried playing around with it in so many ways but I always seem to get 0 over 0 or the so called indeterminate form. I've even tried calculating it by substituting in the Taylor series for the functions given but no luck. If anyone could show me a method of calculating this without using l'hopital rule or better yet, give me a hint as to how I should proceed I would be grateful.
Using composition of Taylor series around $x=0$ $$ y=\frac{\sin(x^2)-\sin^2(x)}{x^2\log(\cos (x))}$$ $$\sin(x^2)=x^2-\frac{x^6}{6}+O\left(x^{10}\right)$$ $$\sin^2(x)=x^2-\frac{x^4}{3}+\frac{2 x^6}{45}+O\left(x^8\right)$$ $$\cos(x)=1-\frac{x^2}{2}+\frac{x^4}{24}+O\left(x^6\right)$$ $$\log(\cos (x))=-\frac{x^2}{2}-\frac{x^4}{12}+O\left(x^6\right)$$ $$y=\frac{\frac{x^4}{3}-\frac{19 x^6}{90}+O\left(x^8\right) } {-\frac{x^4}{2}-\frac{x^6}{12}+O\left(x^8\right) }=-\frac{2}{3}+\frac{8 x^2}{15}+O\left(x^4\right)$$ shows the limit and how it is approached.
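The expansion can be corroborated numerically; a small check of mine evaluates the function near $0$ and compares it with $-2/3 + 8x^2/15$:

```python
import math

def g(x):
    return (math.sin(x * x) - math.sin(x) ** 2) / (x * x * math.log(math.cos(x)))

print(g(0.1), -2 / 3 + 8 * 0.1 ** 2 / 15)   # close already at x = 0.1
print(g(0.01))                              # very close to -2/3
```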
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Can we always derive modus ponens from modus tollens, and vice versa? I have a couple of questions about the distinction between the so-called "basic rules" and the so-called "derived rules" in logic. I have been told that there is nothing substantial about this distinction, i.e., that it is a mere convention: for instance, modus ponens is usually regarded as a basic rule, whereas modus tollens is generally regarded as a derived rule, but actually one can derive modus ponens from modus tollens, with suitable additional rules. I have two questions: (1) How exactly do you derive modus ponens from modus tollens in propositional logic? (2) Is it always the case (i.e., is it true for any logic) that modus ponens and modus tollens can derive each other, or is this only true for propositional logic?
How exactly you derive modus ponens from modus tollens in propositional logic? You will need the equivalence: $$P \to Q ~~\equiv ~~ \neg [P \land \neg Q]$$ Using a form of natural deduction, here is a proof by contradiction: assume $P \to Q$ and $P$, and suppose $\neg Q$; modus tollens applied to $P \to Q$ and $\neg Q$ yields $\neg P$, contradicting $P$, so $Q$ follows. (Screenshot from my proof checker)
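The propositional equivalence used here, $P\to Q \equiv \neg[P\land\neg Q]$, and the semantic validity of modus ponens can both be confirmed by brute-force truth tables. A small sketch of mine (a semantic check only, not a proof-theoretic derivation):

```python
from itertools import product

implies = lambda p, q: (not p) or q

for p, q in product((False, True), repeat=2):
    # the equivalence:  P -> Q  is the same as  not (P and not Q)
    assert implies(p, q) == (not (p and not q))
    # modus ponens is valid: no row makes P -> Q and P true but Q false
    assert not (implies(p, q) and p and not q)

print("all rows check out")
```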
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to view the middle third Cantor set as a fixed point via its self-similarity The definition of the middle-third Cantor set is given in this link: https://en.wikipedia.org/wiki/Cantor_set, and we need to use the Hausdorff metric, whose definition is given in this link: https://en.wikipedia.org/wiki/Hausdorff_distance. The idea is that, first, if we denote the middle-third Cantor set by C, then we need to find a function f from the set of all nonempty compact subsets of R (denote this set by K) to itself such that f(C)=C; then we need to show that f is a contraction and then use the Banach fixed point theorem (notice that the Hausdorff metric is complete). How can we achieve this?
We are working on the metric space $(K, d_{H})$ where $K$ is the set of all nonempty compact subsets of $\mathbb{R}$ and $d_{H}$ is the Hausdorff metric. Consider the map $f: K \to K$ given by $$f(S) = \frac{S}{3} \cup \frac{S+2}{3}$$ where we are using the notation $S+c = \{x + c \, : \, x \in S\}$ and $cS = \{cx \, : \, x \in S\}$. It remains to show $f$ is a contraction in the Hausdorff metric. Suppose $x \in f(A)$. Then either $3x \in A$ or $3x-2 \in A$. By the definition of Hausdorff metric $\delta := d_{H} (A,B)$, for every $a \in A$ there exists $b \in B$ with $d(a, b) \le \delta$; here we are using compactness of $A$ and $B$ to produce distance minimizers. Now we check both cases: * *In the first case ($3x \in A$), we deduce the existence of some $y \in B$ such that $d(3x,y) \le \delta$, and then $d(x,y/3) \le \delta/3$. Since $y/3 \in f(B)$, it follows that $d(x, f(B)) \le \delta/3$. *The second case ($3x-2 \in A$) gives some $y \in B$ such that $d(3x-2,y) \le \delta$ so $d(x,(y+2)/3) \le \delta/3$ and again, since $(y+2)/3 \in f(B)$ we have $d(x, f(B)) \le \delta/3$. We have shown that $d(x, f(B)) \le \delta/3$ for all $x\in f(A)$, so in fact $\sup_{x\in f(A)} d(x,f(B)) \le \delta/3$. An entirely analogous argument shows $\sup_{y \in f(B)} d(y, f(A)) \le \delta/3$. We conclude $$d_{H} (f(A), f(B)) \le \frac{1}{3} d_{H}(A,B)$$ so $f$ is indeed a contraction. Banach gives a unique fixed point, which proves the existence of the Cantor set (if one defines the Cantor set as the unique fixed point of the map $f$).
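The contraction can be watched converging: start from the single interval $[0,1]$ (an element of $K$) and iterate $f$. After $k$ steps one gets the $2^k$ intervals of length $3^{-k}$ from the standard middle-thirds construction, and the Hausdorff distance to the fixed point shrinks by a factor of $3$ per step. A small sketch of mine:

```python
def f(intervals):
    """One application of f(S) = S/3 union (S+2)/3 to a finite union of intervals."""
    return ([(a / 3, b / 3) for (a, b) in intervals]
            + [((a + 2) / 3, (b + 2) / 3) for (a, b) in intervals])

S = [(0.0, 1.0)]
for _ in range(5):
    S = f(S)

print(len(S))        # 32 = 2^5 intervals
print(S[0], S[-1])   # first and last pieces, each of length 3^-5
```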
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equivalence definition of rapidly decreasing in $\mathbb R^n$ I am trying to prove: a continuous function $f:\mathbb R^n\to \mathbb R$ is rapidly decreasing iff $\sup_{x\in \mathbb R^n}|x|^k|f(x)| <\infty $ for all $k$, where a continuous $f$ is defined to be rapidly decreasing iff $\sup_{x\in\mathbb R^n}|x^\alpha f|<\infty$ for all multi-indices $\alpha\in\mathbb N^n$. Here $x^\alpha:=x_1^{\alpha_1}x_2^{\alpha_2}...x_n^{\alpha_n}$. I was trying to show some inequality relations between $|x^\alpha|$ and $|x|^k$. I found that if $1\le |x_i|$ for all $i$, then $|x^\alpha|\le|x|^{2k||\alpha||_{\infty}}$, as by expanding $|x|^{2k}$, we have $x_1^2...x_n^2\le |x|^{2k}$, and therefore $|x^\alpha|\le x_1^{2||\alpha||_\infty}...x_n^{2||\alpha||_\infty}$ as $1\le |x_i|$. I am stuck here. Any help or comment will be appreciated.
For your first inequality, you actually don't need any case distinction, because for any $i\in\{1,\dots, n\}$ we have $|x_i|\leq \|x\|_2$, so \begin{align} |x^{\alpha}|&:=|x_1^{\alpha_1}\cdots x_n^{\alpha_n}|\leq \|x\|_2^{\alpha_1+\dots +\alpha_n}. \end{align} For the other, note that clearly, $|x_i|\leq |x_1|+\dots +|x_n|$ for all $i$, so \begin{align} \|x\|_2&:=\sqrt{x_1^2+\dots +x_n^2} \leq \sqrt{n}(|x_1|+\dots + |x_n|) \end{align} So, if you raise both sides to the power $k$, then on the RHS, by expanding everything out, you get a finite sum of terms of the form $|x^{\alpha}|$; i.e. there is a $C>0$ (which is like the maximum of all the combinatorial factors obtained when expanding the RHS) and a finite set $A\subset \Bbb{N}_0^n$ of multi-indices such that \begin{align} \|x\|_2^k\leq C\sum_{\alpha\in A}|x^{\alpha}|. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/4198869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
One-side component of a bipartite graph Is there such a thing as a one-side (meaning just one of the sets $V_1$ or $V_2$) component of a bipartite graph $B = (V_1, V_2, E)$? For example, for such a bipartite graph $B = (V_1, V_2, E)$, I am interested in the component(s) only for nodes from $V_1$; the vertices from $V_2$ are only relevant for the connection. In other words, only the neighbors of neighbors of nodes in $V_1$ are interesting for me.
I have seen this notion under the name "2-linked" components, typically when discussing graph container arguments like David Galvin's "Independent sets in the discrete hypercube". One definition of the $k^{\text{th}}$ power $G^k$ of a graph $G$ is the graph on $V(G)$ with an edge between two vertices if their distance in $G$ is at most $k$. See On the Pósa-Seymour conjecture by Komlós, Sárközy, and Szemerédi for an example of this usage with Hamiltonian cycles. We then say that a set $S$ of vertices is $k$-linked if $G^k[S]$, the subgraph of $G^k$ induced by $S$, is connected. Since you are explicitly working with bipartite graphs, you might find it easier to just write things in terms of neighborhoods. Let $N(v)$ be the neighborhood of $v$ in an underlying bipartite graph $G$, and let $N(S)$ be the union of $N(v)$ for all $v \in S$. Then $N(N(v))$ is the set of all vertices at distance $0$ or $2$ from $v$. You can define a component $A$ as a minimal set of vertices for which $N(N(A)) = A$.
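In concrete terms, one can compute these one-side components by walking the "neighbors of neighbors" relation. A sketch of mine on a toy bipartite graph (the vertex names and adjacency structure are made up for illustration):

```python
from collections import defaultdict

# V1 = {'a', 'b', 'c', 'd'}; each value lists that vertex's neighbors in V2
adj = {'a': [1, 2], 'b': [2], 'c': [3], 'd': [3, 4]}

def v1_components(adj):
    """Components of V1 where u ~ v iff they share a V2 neighbor (2-linked)."""
    inv = defaultdict(set)                 # V2 vertex -> its V1 neighbors
    for u, nbrs in adj.items():
        for w in nbrs:
            inv[w].add(u)
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            for w in adj[v]:               # step V1 -> V2 -> V1
                stack.extend(inv[w] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(v1_components(adj))   # [{'a', 'b'}, {'c', 'd'}]
```

Here `a` and `b` are 2-linked through the shared $V_2$ vertex `2`, while `c` and `d` are linked through `3`; the two groups share no $V_2$ neighbor, so they form separate one-side components.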
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Correction: Calculating distribution function and determining density function of $Y =2X$ Since we were not provided any solutions to our statistics exercises, I wanted to ask you guys for any corrections or errors I did on this exercise. I will only upload a screenshot from my notes app. I hope that's ok. The exercise was: Let $X$ be a random variable with density $f_X : \mathbb{R} \to \mathbb{R}$ defined by $$f_X(x) = \begin{cases} \frac{1}{2} & \text{for } x \in [2, 4] \\ 0 & \text{else} \end{cases}$$ a) Calculate the corresponding distribution function. b) Determine the density of the random variable $Y := 2X$. Thank you :)
Point b is correct, even if there is a faster way to solve it. Just remember that with a monotone transformation function there is a suitable formula, namely $$f_Y(y)=f_X\left(g^{-1}(y)\right) \left|\frac{d}{dy}g^{-1}(y)\right|$$ Thus, with $x=g^{-1}(y)=\frac{y}{2}$ and $\frac{d}{dy}g^{-1}(y)=\frac{1}{2}$, you immediately get $$f_Y(y)=\frac{1}{4}\cdot \mathbb{1}_{[4;8]}(y)$$ as you found. a) is wrong because when $x\geq 4$ the CDF $F_X(x)=1$ (remember also that, when calculating an integral function, i.e. $\int_2^x f(t)dt$, you cannot use $x$ as the integration variable; you have to use another letter; you can use any letter you prefer, but not $x$)
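Both parts can be checked by simulation (my own sketch): draw $X$ uniform on $[2,4]$, set $Y=2X$, and compare with the uniform-$[4,8]$ density $\frac14\cdot\mathbb 1_{[4,8]}(y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(2.0, 4.0, size=200_000)
y = 2.0 * x

# Y should be uniform on [4, 8]: support [4, 8], mean 6, and F_Y(6) = 0.5
print(y.min(), y.max())   # within [4, 8]
print(y.mean())           # about 6
print((y <= 6).mean())    # about 0.5
```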
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A mystery of $\int \frac{1}{a^2\cos^2{x}+b^2\sin^2{x}} dx$ I am reading a calculus book. This book contains the following problem: Let $a,b>0$. Find $$\int \frac{1}{a^2\cos^2{x}+b^2\sin^2{x}} dx.$$ The author's answer is $$\frac{1}{ab}\arctan{(\frac{b}{a}\tan{x})}.$$ This function is not continuous and is not even defined at $\frac{\pi}{2}+n\pi(n\in\mathbb{Z}).$ $\frac{1}{a^2\cos^2{x}+b^2\sin^2{x}}$ is defined on $\mathbb{R}$. I think primitive functions are continuous but this function is not continuous. Why?
When you divide the numerator and denominator of $\frac{1}{a^2\cos^2{x}+b^2\sin^2{x}}$ by $\cos^2{x}$ to integrate it with respect to $x$, you make the tacit assumption that $\cos{x}\neq 0$, i.e. that $x$ is not an odd multiple of $\pi/2$. Therefore, the primitive function $\frac{1}{ab}\arctan{\left(\frac{b}{a}\tan{x}\right)}$ need not be differentiable, continuous, or even defined at $x= \frac{\pi}{2}+n\pi$ $(n\in\mathbb{Z})$.
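To illustrate (this check is mine, with the hypothetical values $a=1$, $b=2$): away from odd multiples of $\pi/2$ the derivative of $F(x)=\frac{1}{ab}\arctan(\frac{b}{a}\tan x)$ does match the integrand, while $F$ itself jumps across $\pi/2$:

```python
import math

# Illustration with a = 1, b = 2: check F'(x) against the integrand away
# from pi/2, and measure the jump of F across pi/2.
a, b = 1.0, 2.0
F = lambda x: (1 / (a * b)) * math.atan((b / a) * math.tan(x))
integrand = lambda x: 1 / (a**2 * math.cos(x)**2 + b**2 * math.sin(x)**2)

h = 1e-6
x0 = 0.7  # a point where cos(x) != 0
numeric_derivative = (F(x0 + h) - F(x0 - h)) / (2 * h)

# one-sided limits of F at pi/2 differ: arctan jumps from pi/2 to -pi/2
left = F(math.pi / 2 - 1e-9)
right = F(math.pi / 2 + 1e-9)
jump = left - right  # should be (1/(ab)) * pi = pi/2 here
```

The nonzero jump is exactly why this antiderivative is only valid on each interval between consecutive odd multiples of $\pi/2$.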
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to prove $\lim_{x\to x_0}\left(f(x)+g(x)\right)=+\infty$ if $f(x)\rightarrow A$ and $g(x)\rightarrow +\infty$? I want to prove - if $\lim_{x\to x_0} f(x) = A \in \mathbb{R}$ and $\lim_{x\to x_0} g(x) = +\infty$ then $\lim_{x \to x_0} \left(f(x) + g(x)\right) = +\infty$ (where $x_0 \in \bar{\mathbb{R}}$), using Cauchy definition. Translating it into $\epsilon,\delta$ language it becomes: Hypothesis 1. $\forall \epsilon>0, \exists \delta>0, \forall x \in D_f, x\in U(x_0, \delta) \rightarrow A-\epsilon < f(x) < A + \epsilon$. Hypothesis 2. $\forall \epsilon>0, \exists \delta>0, \forall x \in D_g, x\in U(x_0, \delta) \rightarrow \frac{1}{\epsilon} < g(x)$ Goal. $\forall \epsilon>0, \exists\delta>0, \forall x\in D_f \cap D_g, x\in U(x_0, \delta) \rightarrow \frac{1}{\epsilon} < f(x) + g(x)$ So, I have arbitrary $\epsilon$, choose some $\epsilon_1>0$ and $\epsilon_2>0$ to put into both hypotheses, get corresponding $\delta_1>0$ and $\delta_2>0$... then I use value $\min(\delta_1, \delta_2)$... skipping forward, I get to the following: I have to prove from $A-\epsilon_1 < f(x) < A + \epsilon_1$ and $\frac{1}{\epsilon_2} < g(x)$ that $\frac{1}{\epsilon} < f(x) + g(x)$ follows. How to choose $\epsilon_1$ and $\epsilon_2$ to do that? If $A > 0$ it seems I can choose $\epsilon_1 = A$ and $\epsilon_2 = \epsilon$, but what to do when that's not case?
If $\lim_{x \to x_0}f(x)=A$, where $A\in\Bbb{R}$, then there is a $\delta>0$ such that if $|x-x_0|<\delta$ then $|f(x)-A|<1$. Equivalently, $A-1<f(x)<A+1$. If $\lim_{x \to x_0}g(x)=\infty$, then for every $N>0$ there is a $\delta'>0$ such that if $|x-x_0|<\delta'$, then $g(x)>N$. Combining these inequalities, this means for every $N>0$, if $|x-x_0|<\min(\delta,\delta')$, then $f(x)+g(x)>A+N-1$. Can you finish the proof?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a standard name for this property of ordered pairs of binary operations? I know that the distributive property of ordered pairs of binary operations is well-known. However, I have thought of a new property of ordered pairs of binary operations. Let $+$ and $*$ be the binary operations on a set $S$, which are arbitrary. (So, for example, don't confuse $+$ with addition). I define $(+,*)$ to be switchable iff for all $x,y,z$ in $S$, $(x+(y*z))=((x+y)*z)$. So, for example, $(+,-)$, where $+$ represents addition on the reals and $-$ represents binary subtraction on the reals, is a switchable pair of binary operations. Also, an operation $*$ is associative iff the pair $(*,*)$ is switchable. Is there a standard name for this property? Also, has any book or paper defined this property?
This is an equation in the term algebra of signature $\{+, *\}$, but I don't think this specific equation has an accepted name.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A problem on identification of a quotient space. $\mathbf {The \ Problem \ is}:$ I am trying to identify the quotient space $\frac {I×I}{I×\{0,1\}}$ (where $I =[0,1]$) with some known spaces. $\mathbf {My \ approach}:$ I could only try the following: if we define $f : I×I \to I×I$ by $f(a,b) = (ab(1-b),e^{2πib})$, then I think the fibre becomes $I×\{0,1\}.$ But I can't approach further. A small hint is warmly appreciated. Thanks in advance.
As far as I can tell, the description $\frac{I\times I}{I\times \{0,1\}}$ is the best description in terms of "known" spaces, but correct me if someone comes up with a better one. I have made a drawing of the quotient space if that helps you. Another way to think of the space is to describe it as a quotient of the cylinder $I\times S^1$. Namely, it can also be written as $\frac{I\times S^1}{(t,1)\sim(t',1)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4199823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help evaluating $\lim_{n\to \infty} \sum_{k=1}^n \frac{e^{\frac{-k}{n}}}{n}$ $$ \lim_{n\to \infty} \sum_{k=1}^n \frac{e^{\frac{-k}{n}}}{n} $$ is what I am asked to evaluate. My working: \begin{align} &= \lim_{n\to \infty} \frac{1}{n}\sum_{k=1}^n e^{\frac{-k}{n}} \\ &= \lim_{n\to \infty} \frac{1}{n} \left( \frac{1}{e^\frac{1}{n}} + \frac{1}{e^\frac{2}{n}} + \frac{1}{e^{\frac{3}{n}}} + \cdots + \frac{1}{e^\frac{n}{n}} \right) \\ &= \lim_{n\to \infty} \frac{1}{n}\frac{1}{e^\frac{1}{n} -1} = \lim_{\frac{1}{n} \to 0} \frac{\frac{1}{n}}{e^\frac{1}{n} -1} \\ &= 1. \end{align} Wrong. The answer is $1-\frac{1}{e}$. Where did I go wrong here?
The equality $$\left( \frac{1}{e^\frac{1}{n}} + \frac{1}{e^\frac{2}{n}} + \frac{1}{e^{\frac{3}{n}}} + \cdots + \frac{1}{e^\frac{n}{n}} \right)=\frac{1}{e^\frac{1}{n} -1}$$ is false. I think what you want to see is a geometric sum $$\sum_{k=1}^n e^{-k/n} = \sum_{k=1}^n \left(e^{-1/n}\right)^k =\frac{e^{-1/n-1}-e^{-1/n}}{e^{-1/n}-1} $$ so that, after multiplying by $\frac1n$, the numerator satisfies $$\lim_{n\to\infty}e^{-1/n-1}-e^{-1/n}=\frac{1}{e}-1$$ and by L'Hopital's rule the denominator becomes $$\lim_{n\to\infty} n \left (e^{-1/n}-1 \right ) = \lim_{n\to\infty}\frac{e^{-1/n}-1}{1/n}=-1,$$ giving the limit $\frac{1/e-1}{-1}=1-\frac{1}{e}$.
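A quick numerical check (not part of the answer, just a sanity test of the claimed limit $1-\frac1e$):

```python
import math

# Riemann-sum approximation of lim (1/n) * sum_{k=1}^n e^{-k/n}.
def riemann(n):
    return sum(math.exp(-k / n) for k in range(1, n + 1)) / n

approx = riemann(100_000)
target = 1 - 1 / math.e
```

The sum is in fact a right-endpoint Riemann sum for $\int_0^1 e^{-x}\,dx$, which also equals $1-\frac1e$.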
{ "language": "en", "url": "https://math.stackexchange.com/questions/4200045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Two different solutions for the irrational inequality $(x-1) \sqrt{x^2-x-2} \geq 0$. Upon solving this problem with the concept that I have understood about solving inequalities until now, I got the following range for $x$: $(x-1) \sqrt{x^2-x-2} \geq 0$ $\Rightarrow(x-1) \sqrt{(x-2)(x+1)} \geq 0$ $\Rightarrow(x-1)^2 (x-2)(x+1)\geq 0 \text{ ; upon squaring both sides }$ $\Rightarrow x\in(-\infty,-1]\cup[2,\infty)$ This is the solution range that I got for the given inequality, and I have confirmed this solution by checking it in Wolfram Alpha. But this solution only comes up if I give the modified problem statement as $(x-1)^2 (x-2)(x+1)\geq 0$, i.e. after squaring both sides, on the website, as you can see here. And I thought that this should be the correct answer, but when I give the original problem statement to the website, it gives the result as $x\in[2,\infty)$, as you can see here. Now I am confused as to which one is the correct solution for this problem. Why are there two different solutions to the same problem? How does a simple act of squaring both sides change the solutions completely? Please help me on this !!! Thanks in advance !!!
$x\ge0$ is obviously not the same as $x^2\ge0$ ! For this problem, since a square root is non-negative by definition, the inequality holds either when the square root vanishes (which happens at $x=-1$ and $x=2$) or when $$x-1\ge0,$$ provided that the argument of the square root is non-negative. This gives $x=-1$ together with $[2,\infty)$; squaring introduced the extra solutions $(-\infty,-1)$, where $x-1<0$ but $(x-1)^2>0$.
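A brute-force grid check (my illustration, not part of the answer) makes the difference between the two solution sets visible, including the isolated point $x=-1$ where the square root vanishes:

```python
import math

# Scan a grid and record where (x - 1) * sqrt(x^2 - x - 2) >= 0 actually holds.
def satisfies(x):
    arg = x * x - x - 2
    if arg < 0:
        return False  # outside the domain of the square root
    return (x - 1) * math.sqrt(arg) >= 0

# grid with step 0.5 over [-5, 5]
solutions = [x / 2 for x in range(-10, 11) if satisfies(x / 2)]
```

The grid picks up $x=-1$ and the points of $[2,5]$, but none of the points in $(-\infty,-1)$ that squaring would wrongly admit.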
{ "language": "en", "url": "https://math.stackexchange.com/questions/4200195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Signature of bilinear form I've been solving the following problem on symmetric bilinear forms: Let $V$ be a $\mathbb R$-vector space of dimension $n$ and $ f_1, f_2 \in V^\ast$ two non-zero linear functionals such that $f_1$ is not a scalar multiple of $f_2$. Define the map $f:V \times V \rightarrow \mathbb R$ given by: $$f(u, v):= \frac{f_1(u)f_2(v)+f_2(u)f_1(v)}{2}.$$ Of course $f$ is a symmetric bilinear form. What I don't know is how to determine the signature of $f$. In fact, I believe that you need to compute the matrix of $f$ in some basis, and after that, find a diagonal matrix congruent to the matrix of $f$ (one always exists). After that, count the number of negative and positive terms on the diagonal (which is always the same, by Sylvester's law of inertia). However, I am a little confused about how to do this. My question would be: am I on the right path? Is there any simpler way to find the signature? Also, I was stuck trying to continue after building the matrix of $f$ in some basis. If anyone could continue with any tips, I would be very grateful.
Hint Take $\mathcal B =\{e_1, e_2, e_3, \dots,e_n \}$ as a basis of $V$ where $\operatorname{span}\{e_3, \dots, e_n\} =\ker f_1 \cap \ker f_2$ and $f_1(e_1) =f_2(e_2)=1$. This is possible considering the given hypothesis. In $\mathcal B$, the matrix of $f$ is $$\begin{pmatrix} 0 & 1/2 & 0 &\cdots &0\\ 1/2 & 0 & 0 &\cdots &0\\ 0 & 0 & 0 &\cdots &0\\ \vdots & \vdots & \vdots &\ddots &\vdots\\ 0 & \cdots & 0 &\cdots &0\\ \end{pmatrix}.$$ You’ll get that signature is $1,-1,0, \dots ,0$.
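To see the signature concretely (an illustration I added, using the $n=2$ block of the matrix in the hint): the nonzero block is $\begin{pmatrix}0&1/2\\1/2&0\end{pmatrix}$, whose eigenvalues are the roots of $\lambda^2-\tfrac14=0$, i.e. $\pm\tfrac12$:

```python
import math

# Eigenvalues of the 2x2 block [[0, 1/2], [1/2, 0]] via the quadratic formula
# applied to lambda^2 - (trace) lambda + det = 0.
a11, a12, a21, a22 = 0.0, 0.5, 0.5, 0.0
trace = a11 + a22
det = a11 * a22 - a12 * a21
disc = trace * trace - 4 * det
lam1 = (trace + math.sqrt(disc)) / 2
lam2 = (trace - math.sqrt(disc)) / 2
```

One positive and one negative eigenvalue, and zeros on the rest of the diagonal, which is exactly the signature $1,-1,0,\dots,0$ stated in the hint.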
{ "language": "en", "url": "https://math.stackexchange.com/questions/4200337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to bound logarithm with variable base? (asymptotics) Problem: For any $n>1$ let $\theta(n) = \big( 1 - (\ln n + 1)^{-1} \big)^{-1}$, where $\ln$ is the natural logarithm. Show that for some $k$ (not depending on $n$) it holds that $\log_{\theta(n)}n \in \mathcal{O}((\log n)^k)$, i.e. there exists a positive real number $c$ such that $\log_{\theta(n)}n \leq c (\log n)^k$ for all large enough $n$ (where $\log$ is with respect to any base, e.g. 2). What I know: k=1 is probably not good enough, but $k=2$ seems to be fine (looking at plots). Preferably: $k$ as small as possible. Motivation: If the claim is true, then I have an asymptotically fast algorithm for encoding certain algebraic geometry codes (error-correction).
Hint $$y=\log_{\theta(n)}(n)=\frac{\log (n)}{\log (\theta (n))}$$ $$\theta (n)=\frac{1}{1-\frac{1}{\log (n)+1}}\implies y=\frac{\log (n)}{\log \left(\frac{1}{1-\frac{1}{\log (n)+1}}\right)}=-\frac {\log (n)}{\log\left({1-\frac{1}{\log (n)+1}}\right)}=-\frac{\log (n)}{\log \left(\frac{\log (n)}{\log (n)+1}\right)}$$ $$y=\frac{\log (n)}{\log \left(1+\frac{1}{\log (n)}\right)}$$ Make $n=e^x$, simplify and use Taylor series for large $x$.
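Following the hint numerically (my addition, not part of the hint): the Taylor expansion $\log(1+\frac{1}{\log n})\approx\frac{1}{\log n}$ suggests $\log_{\theta(n)}n\approx(\log n)^2$, so $k=2$ should work. A quick check that the ratio stays bounded:

```python
import math

# ratio(n) = log_{theta(n)}(n) / (ln n)^2; it should stay bounded (in fact
# tend to 1), supporting k = 2 in the claim.
def ratio(n):
    log_n = math.log(n)
    theta = 1 / (1 - 1 / (log_n + 1))
    return (log_n / math.log(theta)) / log_n**2

values = [ratio(10.0**p) for p in range(1, 12)]
```

The values decrease toward $1$, consistent with $\log_{\theta(n)}n\sim(\ln n)^2$.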
{ "language": "en", "url": "https://math.stackexchange.com/questions/4200679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding smallest vector satisfying an equation with a dot product I was wondering whether you could help me understand a solution to one of the problems from the 6.036 Introduction to Machine Learning from MIT Open Learning Library. The task is to find the smallest vector (with respect to the L2 norm) $\theta$ which satisfies the equation $$\theta \cdot x = \frac{1}{y}$$ where $x$ is an arbitrary vector (with the same dimension as $\theta$), y is a constant taking either value $+1$ or $-1$ and $\cdot$ is the dot product. The provided solution is: $$\theta = \frac{x}{\| x \|^2} \times \frac{1}{y}$$ Even after looking at the solution, I am still not sure how this problem was supposed to be solved. It seems that the solution could be derived using manipulations similar to: * *$\theta \times (x \cdot x) = x \times {1 \over y}$ *$\theta \times \| x \| ^ 2 = x \times {1 \over y}$ *$\theta = {x \over \| x \| ^ 2} \times {1 \over y}$ However, I struggle to see what could be the justification for transforming the original equation to the form in the first step, so I suspect this might not be the right way.
$ \theta \cdot x = | \theta | | x | \cos \psi = \dfrac{1}{y}$ where $\psi$ is the angle between the vector $x$ and the vector $\theta$. Since $x$ and $y$ are fixed, this implies the minimum $| \theta |$ occurs when $|\cos \psi| $ is maximal, and this occurs when $\psi = 0$ or $\psi = \pi$, so that $\cos \psi = \pm 1$. And with this, $\theta$ will be aligned with $x$, i.e. along $\pm x$. Thus $\theta = \alpha x$ , for a certain scalar $\alpha$ that we're about to find. Plug this expression for $\theta$ in the original equation, you get, $\theta \cdot x = \alpha x \cdot x = \dfrac{1}{y} $ Hence $\alpha = \dfrac{1}{x \cdot x } \times \dfrac{1}{y} $ Finally we note that $x \cdot x = | x |^2 $ This gives the answer mentioned in the lecture.
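A small experiment (my illustration, with a hypothetical $x$ and $y=-1$) confirms both that this $\theta$ satisfies the constraint and that no feasible perturbation has smaller norm. Any other feasible vector differs from $\theta$ by a component orthogonal to $x$, which can only increase the norm:

```python
import math
import random

# theta = x / ||x||^2 * (1/y) satisfies theta . x = 1/y, and random feasible
# perturbations (adding components orthogonal to x) never beat its norm.
random.seed(1)
x = [3.0, -1.0, 2.0]   # hypothetical example vector
y = -1
norm_x_sq = sum(t * t for t in x)
theta = [t / norm_x_sq * (1 / y) for t in x]

dot = lambda u, v: sum(a * b for a, b in zip(u, v))
constraint = dot(theta, x)  # should equal 1/y

best_norm = math.sqrt(dot(theta, theta))
beaten = False
for _ in range(1000):
    # component of a random vector orthogonal to x; adding it keeps feasibility
    r = [random.gauss(0, 1) for _ in range(3)]
    proj = dot(r, x) / norm_x_sq
    perp = [a - proj * b for a, b in zip(r, x)]
    cand = [a + b for a, b in zip(theta, perp)]
    if math.sqrt(dot(cand, cand)) < best_norm - 1e-12:
        beaten = True
```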
{ "language": "en", "url": "https://math.stackexchange.com/questions/4201031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Representing the set which contains integers each of which is a subset of a certain interval (segment?). I've read a book on cyber security and saw the notation below. $$ \mathbb{Z_{n}} ~~\leftrightarrow~~ \text{set of integers which satisfy }~ 0 \leq i < n $$ And what I want to represent is the set of natural numbers greater than or equal to $1$ and less than $n+1$ $$ ~~~\uparrow \left( 1 \leq i \leq n \right) ~$$ Is there some good notation? Can I write it as $~ \mathbb{N_{n+1}} ~$? Which notation is correct? I know using $~ \mathbb{Z_{n+1}} \setminus \left\{ 0 \right\} ~$ is one of the ways, but it seems a bit complicated to me.
From the ProofWiki page, a possible notation is $$\mathbb{N}_n^*=\{x \in \mathbb{N}^*: x \le n\}= \{ 1,2, \ldots, n\}$$ $\mathbb{N}_{n+1}$ includes the element $0$ according to that page as well. Do remember to define it explicitly in your work to avoid confusing your readers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4201188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I approach this question? Let $a,b \in\Bbb N$, $a \ne b$, and suppose the quadratic equations $(a-1)x^2 -(a^2 + 2)x +a^2 +2a=0$ and $(b-1)x^2 -(b^2 + 2)x +b^2 +2b=0$ have a common root; then find the value of $ab/5$. So what I did was, I subtracted the two equations and got $x=(a+b+2)/(a+b)$. I tried putting it into an equation, it didn't work; then I tried adding the equations and putting in the value, still didn't work. I can't seem to figure out how to approach this problem. Can anybody help out?
Hint: $\;t=a,b$ are the two roots of the quadratic $\,(t-1)x^2-(t^2+2)x+t^2+2t=0\,$ for the value of $\,x\,$ which is the common root.
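To complement the hint, here is a brute-force search (my illustration, not part of the hint). It uses the easily checked observation that $x=a$ is always a root of the first quadratic, so for $a>1$ its roots are exactly $a$ and $(a+2)/(a-1)$:

```python
from fractions import Fraction

# For natural a != b (a, b > 1 so the equations stay quadratic), find pairs
# whose quadratics share a root, using exact rational arithmetic.
def roots(a):
    # roots of (a-1)x^2 - (a^2+2)x + a^2 + 2a = 0: x = a and x = (a+2)/(a-1)
    return {Fraction(a), Fraction(a + 2, a - 1)}

pairs = []
for a in range(2, 50):
    for b in range(2, 50):
        if a != b and roots(a) & roots(b):
            pairs.append((a, b))

# the only pair found is {2, 4}, so ab/5 = 8/5
ab_over_5 = Fraction(2 * 4, 5)
```

The search finds only $\{a,b\}=\{2,4\}$ (indeed for $a=4$ the quadratic is $3(x^2-6x+8)$, the same equation as for $b=2$), giving $ab/5 = 8/5$.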
{ "language": "en", "url": "https://math.stackexchange.com/questions/4201360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Calculating $\lim_{h \to 0}\frac{1}{h}\ln\left(\frac{e^h-1}{h}\right)$ in an elementary way I have to calculate the limit of $$\lim_{h \to 0}\dfrac{1}{h}\ln\left(\dfrac{e^h-1}{h}\right)$$ I can calculate this limit using Taylor series, and got the answer $\frac{1}{2}$. However, I want to solve this limit in a somewhat "elementary" way, not using Taylor series, Laurent series, or L'Hopital's rule. Are there any such ways?
$$\lim_{h\to0}\frac1h\ln\left(\frac{e^h-1}h\right)=\left.\frac{d}{dx}\ln\left(\frac{e^x-1}x\right)\right|_{x=0}$$
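A numerical check of this derivative identity (my addition): evaluating the quotient at shrinking $h$ should approach $\frac12$, the value the asker obtained from Taylor series.

```python
import math

# g(h) = ln((e^h - 1)/h) / h should tend to 1/2 as h -> 0.
def g(h):
    return math.log((math.exp(h) - 1) / h) / h

values = [g(10.0**-k) for k in range(1, 7)]
```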
{ "language": "en", "url": "https://math.stackexchange.com/questions/4201494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proving $x^2 + x + 1$ is a factor of $P_n (x)=(x+1)^{2n+1} + x^{n+2}$. Further it says to consider $P_n(\omega)$ and $P_n(\omega^2)$, wherein $\omega$ is a cube root of unity, $\omega \not=1$. Found this in an examination paper with no solutions. I understand it relates to roots of unity, but I'm unsure how I can bring that into play. Any explanation on how to approach these types of questions is appreciated, thank you.
Induction is really helpful here, did you try it? Let's check the case $n=0$: $$(x+1)^{2\cdot0+1}+x^{0+2} = x+1+x^2$$ which is of course a multiple of $x^2+x+1$. Assume that your claim is true for every $k\leq n$, and let's prove the next step ($n+1$): we need to prove that $x^2+x+1$ is a factor of $(x+1)^{2(n+1)+1}+x^{n+1+2}$, and this can be proved easily: $$\begin{align}(x+1)^{2n+3}+x^{n+3}&=(x+1)^2\cdot (x+1)^{2n+1}+x\cdot x^{n+2}\\ &=(x^2+2x+1)\cdot(x+1)^{2n+1}+x\cdot x^{n+2}\\ &=(x^2+x+1)\cdot(x+1)^{2n+1}+x\cdot(x+1)^{2n+1}+x\cdot x^{n+2}\\ &=(x^2+x+1)\cdot(x+1)^{2n+1}+x\cdot((x+1)^{2n+1}+x^{n+2}) \end{align}$$ where both terms are multiples of $x^2+x+1$
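The root-of-unity route suggested in the question can also be spot-checked numerically (my illustration): $x^2+x+1$ divides $P_n$ exactly when $P_n$ vanishes at the primitive cube roots of unity.

```python
import cmath

# P_n(omega) should be (numerically) zero for omega a primitive cube root of
# unity, for every n; this is equivalent to divisibility by x^2 + x + 1.
omega = cmath.exp(2j * cmath.pi / 3)

def P(n, x):
    return (x + 1) ** (2 * n + 1) + x ** (n + 2)

residues = [abs(P(n, omega)) for n in range(0, 8)]
```

Indeed, since $\omega+1=-\omega^2$ and $\omega^3=1$, one gets $P_n(\omega)=\omega^{n+2}(1-\omega^{3n})=0$.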
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
diagonalizability of a matrix depending on a parameter I am trying to understand for which $a\in\mathbb{R}$ the matrix $\begin{bmatrix} a & 2 & -1\\ 1 & a & -1\\ 0 & 0 & 2\end{bmatrix}$ is diagonalizable and, in that case, which are its eigenvectors. My solution: The characteristic polynomial of the matrix is $p(\lambda)=(2-\lambda)[(a-\lambda)^2-2]$ which is zero for $\fbox{$\lambda =2$}$ and, if $a\neq 2\pm \sqrt{2}$, we have other two distinct eigenvalues $\fbox{$ \lambda_{+}=a+\sqrt{2},\lambda_{-}=a-\sqrt{2}$}$ thus three distinct eigenvectors $\fbox{$v_{2}=(2,-1,2), v_{+}=(\sqrt{2},1,0), v_{-}=(\sqrt{2},-1,0)$}$ for $a=2$ and $\fbox{$v_{2}=(-\frac{3}{a-2},\frac{1+a}{(a-2)^2},1), v_{+}=(\sqrt{2},1,0), v_{-}=(\sqrt{2},-1,0)$}$ for $a\neq 2$. Thus, the matrix is diagonalizable for $a\neq 2\pm\sqrt{2}$. Does this make sense? Is there a better way to solve this problem? Thanks
After having got that the eigenvalues are $2$, $a+\sqrt2$, and $a-\sqrt2$, you know that, unless $a=2\pm\sqrt2$, you have $3$ distinct eigenvalues. So the matrix is diagonalizable; there is no need to compute the eigenvectors. And if $a=2\pm\sqrt2$, then it is not diagonalizable. Simply check that, in each case, the eigenspace corresponding to the eigenvalue $2$ is $1$-dimensional. But $2$ is then a double root of the characteristic polynomial. So, if your matrix were indeed diagonalizable, the eigenspace corresponding to the eigenvalue $2$ would be $2$-dimensional.
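A concrete verification of the last step (my illustration, for $a=2+\sqrt2$): the eigenspace of $2$ is the nullspace of $A-2I$, whose two nonzero rows turn out to have rank $2$, so the eigenspace is only $1$-dimensional.

```python
import math

# At a = 2 + sqrt(2), eigenvalue 2 has algebraic multiplicity 2, but the two
# nonzero rows of A - 2I are independent, so the eigenspace is 1-dimensional.
a = 2 + math.sqrt(2)
r1 = [a - 2, 2.0, -1.0]   # first row of A - 2I
r2 = [1.0, a - 2, -1.0]   # second row of A - 2I (third row is zero)

# 2x2 minors of the two nonzero rows
minor_12 = r1[0] * r2[1] - r1[1] * r2[0]  # sqrt(2)*sqrt(2) - 2 = 0
minor_23 = r1[1] * r2[2] - r1[2] * r2[1]  # -2 + sqrt(2) != 0
rank_is_two = abs(minor_23) > 1e-9
```

Since some $2\times2$ minor is nonzero, $A-2I$ has rank $2$ and nullity $1$, confirming non-diagonalizability at this value of $a$.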
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Interior of a set in $\mathbb{R}^2$. I am studying Munkres' topology on my own. I want to find the interior of the following subset of $\mathbb{R}^2$: $$A = \{x \times y \mid y = 0\}$$ To find the interior, I want to show that if $x \times 0$ is an arbitrary point of $A$, then there does not exist any neighborhood $U$ of $x \times 0$ such that $U \subset A$. And to do that I want to assume $U$ is a nbd of $x \times 0$ with $U \subset A$ and derive a contradiction. Hence $\text{Int } A = \emptyset$. But I cannot execute the above idea. Please help me.
$A$ is just the entire $x$-axis according to your definition. Now consider any point on the $x$-axis and try to draw any circle (open ball) having center at that point. It is bound to have points inside it which do not lie on the $x$-axis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\int_0^1\int_x^1f(t) \,dt \,dx=\int_0^1t f(t) \,dt$ Let's suppose that $f$ is continuous on $[0,1]$. I want to prove that $$\int_0^1\int_x^1f(t) \,dt \,dx=\int_0^1t f(t) \,dt$$ For the left part I set the following function: $$F(y)=\int_0^y f(t)\,dt$$ so from the left part I have that $$\int_0^1\int_x^1f(t)\,dt\,dx$$ $$=\int_0^1(F(1)-F(x))\,dx$$ $$=F(1)-\int_0^1F(x)\,dx$$ $$=F(1)-F(0)-\int_0^1F(x)\,dx$$ Then, I tried to solve the right part by integrating by parts but I got confused because of the limits of integration. $u=t$, $du=dt$, $v=\int f(t)\,dt$, $dv=f(t)\,dt$. Here is where I'm stuck; can you help me?
I will assume that $f$ is absolutely integrable so that we can apply Fubini. Using Iverson Brackets: $$ \begin{align} \int_0^1\int_x^1f(t)\,\mathrm{d}t\,\mathrm{d}x &=\int_0^1\int_0^1f(t)[t\gt x]\,\mathrm{d}t\,\mathrm{d}x\tag1\\ &=\int_0^1\int_0^1f(t)[t\gt x]\,\mathrm{d}x\,\mathrm{d}t\tag2\\ &=\int_0^1tf(t)\,\mathrm{d}t\tag3 \end{align} $$ Explanation: $(1)$: write the inner integral using the indicator function of $t\gt x$ $(2)$: change order of integration $(3)$: evaluate the integral in $x$
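A numerical spot check of the identity (my illustration, with the hypothetical choice $f(t)=\cos t$), computing both sides with simple midpoint Riemann sums:

```python
import math

# Check int_0^1 int_x^1 f(t) dt dx = int_0^1 t f(t) dt for f = cos.
f = math.cos
N = 500

def inner(x):
    # midpoint-rule approximation of the integral of f from x to 1
    h = (1 - x) / N
    return sum(f(x + (j + 0.5) * h) for j in range(N)) * h

h = 1 / N
lhs = sum(inner((i + 0.5) * h) for i in range(N)) * h
rhs = sum((i + 0.5) * h * f((i + 0.5) * h) for i in range(N)) * h
```

Both sides agree with each other and with the closed form $\int_0^1 t\cos t\,dt=\cos 1+\sin 1-1$.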
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Sequence of polynomials $g_n$ converging pointwise with $g(\mathbb{C})=\mathbb{Z}$. Prove that there exists a sequence of polynomials $g_n(z)$ that converges for all $z\in \mathbb{C}$ to a limit function $g(z)$ with $g(\mathbb{C})=\mathbb{Z}$. This question was on my complex analysis prelim last August. I have given it a go at various points throughout the time since, but I seriously have no idea where I could concretely start. Clearly the limit function cannot be analytic, because this would violate the open mapping theorem. Any hints would be excellent!
A corollary of Runge's theorem (Rudin RCA) says: Suppose $K\subset \mathbb C$ is compact and $S^2\setminus K$ is connected. Suppose further $\Omega$ is open and contains $K.$ If $f\in H(\Omega),$ then there is a sequence of polynomials that converges uniformly to $f$ on $K.$ To our problem: For each $n$ define $E_n=\{z:|\text{ Im }z|\ge 1/n\}$ and $F_n= \mathbb Z\cup \{z\in \mathbb R:d(z,\mathbb Z)\ge 1/(2n)\}.$ Then both $E_n,F_n$ are closed sets. Now define $$K_n= [-n,n]^2\cap(E_n\cup F_n).$$ Then $K_n$ is compact, and $S^2\setminus K_n$ is connected. Good to draw a picture here. Note $K_n$ has a bunch of connected components: The big ones in the upper and lower half planes, and on the real axis the $2n+1$ points $-n,\dots,n$ as well as the $2n$ closed intervals between these points. Set $\Omega_n =\{z:d(z,K_n)<1/(4n)\}$ (radius small enough that the neighborhoods of distinct components of $K_n$ stay disjoint). Then $\Omega_n$ has as many components as does $K_n.$ The components containing integers are discs $D(k,1/(4n)), k=-n,\dots,n.$ On each $D(k,1/(4n))$ define $f_n=k.$ For all other components, define $f_n=0.$ Then $f_n\in H(\Omega_n).$ By the corollary to Runge, there is a polynomial $P_n$ such that $|P_n-f_n|<1/n$ on $K_n.$ These are the desired polynomials; they converge pointwise to the identity on $\mathbb Z,$ and to $0$ everywhere else.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
What is the probability that they form a match? On a dating site, users can select $5$ out of $24$ adjectives to describe themselves. A match is declared between two users if they match on at least $4$ adjectives. If Alice and Bob randomly pick adjectives, what is the probability that they form a match? My Attempt The probability space has size ${24\choose 5}^2$. We have a match for $4$ or $5$ common adjectives, so for $5$ it's just $24\choose 5$ and for $4$ it's $24\choose 4$, so the overall probability is $$\frac{^{24}C_5+^{24}C_4}{(^{24}C_5)^2}$$ The given answer is $$\frac{^{24}C_5.(1+5(24-5))}{^{24}C_5.^{24}C_5}$$
Going by your method, we count all cases where i) exactly $4$ adjectives match and ii) all $5$ adjectives match. You are right that the number of ways for a $5$-match is ${24 \choose 5}$. Now the number of ways for an exactly-$4$ match is ${24 \choose 4} \cdot 20 \cdot 19$: we first choose the $4$ adjectives that match, and then there are $20 \cdot 19$ ways for the two of them to each choose one adjective from the remaining $20$. You missed this last part. So the desired probability is $\displaystyle \cfrac{{24 \choose 5} + {24 \choose 4} \cdot 20 \cdot 19} {{24 \choose 5} \cdot {24 \choose 5}}$, which is the same as the given answer.
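The agreement between the two formulas can be verified exactly (my addition, just arithmetic):

```python
from math import comb

# Exact computation of the match probability.
total = comb(24, 5) ** 2
match_5 = comb(24, 5)             # all five adjectives coincide
match_4 = comb(24, 4) * 20 * 19   # exactly four coincide
prob = (match_5 + match_4) / total

# the same number written as in the given answer
given = comb(24, 5) * (1 + 5 * (24 - 5)) / (comb(24, 5) ** 2)
```

The counts agree because ${24\choose 4}\cdot 20 = 5\,{24\choose 5}$, so ${24\choose 4}\cdot 20\cdot 19 = {24\choose 5}\cdot 5\cdot 19$.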
{ "language": "en", "url": "https://math.stackexchange.com/questions/4202887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Existence of solution for linear system of ODE Consider the linear system of ODE given by \begin{eqnarray} u'_i(t)&=&\sum\limits_{j=1}^{n}a_{ij}(t)u_j+b_i(t) &\quad i=1,2,\ldots,n\\ u_i(0)&=&u_i^0 &\quad i=1,2,\ldots,n \end{eqnarray} Suppose $a_{ij},b_i \in L^{\infty}(\mathbb{R}^+)$ and $u_i^0\in \mathbb{R}$ for $i=1,2,\ldots,n$; then do we have existence of a solution? If so, how to prove it? In other words, can we relax the continuity assumption on $a_{ij},b_i$ in the existence proof?
As the other answers demonstrate, there are obstacles to solving $u'=Au+b$ as differential equation with differentiable solutions. However, the associated integral equation $$ u(t)=u(0)+\int_0^t(A(s)u(s)+b(s))\,ds $$ is well-defined as fixed-point equation over continuous functions. As $A$ is essentially bounded, $L={\rm ess}\sup_t\|A(t)\|<\infty$ and $$ \|u\|_L=\sup_t e^{-2L|t|}\|u(t)\| $$ is a norm on a closed subspace of continuous functions that makes the fixed-point iteration to the above equation contracting with factor $\frac12$. Thus by the fixed-point theorem there is a unique solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4203076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Abelian subalgebras of von Neumann algebras Is it true that one always has a faithful normal conditional expectation from a von Neumann algebra $M$ to any abelian subalgebra $A\subset M$? Does it hold for some special class of von Neumann algebras?
By a theorem of Takesaki (Section 6 in Conditional Expectations in von Neumann Algebras) there exists a normal conditional expectation onto every maximally abelian subalgebra of $M$ if and only if $M$ is finite. So finiteness of $M$ is necessary. If $M$ is finite, there is a faithful normal conditional expectation onto every von Neumann subalgebra $N$. This can be constructed as the restriction of the orthogonal projection $L^2(M,\tau)\to L^2(N,\tau)$, where $\tau$ is a faithful normal tracial state on $M$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4203162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integer solutions for $2^x+5^x=3^x+4^x$ How many integer solutions exist for the equation $2^x+5^x=3^x+4^x$? Attempt: Clearly, $x=0$ is a solution. Then, I divided both sides by $5^x$ to get; $$\left(\frac 25\right)^x+1=\left(\frac 35\right)^x+\left(\frac 45\right)^x$$ It is easy to see that the left hand side of this equation is always greater than $1$. Moreover, taking a first derivative on the right hand side, it is easy to see that it is a decreasing function, which attains value $1$ at $x=2$. Thus, for $x>2$, RHS$<1$ while LHS$>1$, so no solutions exist, and the only natural solutions for $x$ are, trivially, $x=0,1$. But I am struggling with the case where $x<0$. Can anyone please help with that? I also see that the bases on both sides have sum equal to $7$, which makes me think that some kind of inequality might be involved, but I am unable to proceed that way. Edit: It would be helpful to see a solution that exploits the integer constraint.
I think the solutions in the link provided answer a different question--nonintegral $x$, and so are not helpful here. For $x$ integral, there is a much simpler observation. For $x$ a negative integer, note that $2^x > 3^x+4^x$ for all integers $x \le -3$ [you can show this by induction], so if $x$ is nonpositive, then $|x|$ must be no larger than $2$. Likewise for $x$ a positive integer, note that $5^x > 3^x+4^x$ for all $x \ge 3$ [you can show this by induction as well], so $x$ must also be no larger than $2$. So all this leaves you to check is which $x \in \{-2,-1,0,1,2\}$ satisfy this equation.
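Since the bounds reduce the problem to $|x|\le 2$, an exhaustive check with exact rational arithmetic settles it (my addition, scanning a slightly larger range for good measure):

```python
from fractions import Fraction

# Exact check of 2^x + 5^x = 3^x + 4^x over integers in a range; Fractions
# handle negative exponents without rounding error.
def side(bases, x):
    return sum(Fraction(b) ** x for b in bases)

solutions = [x for x in range(-10, 11) if side((2, 5), x) == side((3, 4), x)]
```

Only $x=0$ and $x=1$ survive, matching the analysis above.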
{ "language": "en", "url": "https://math.stackexchange.com/questions/4203386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Prove that this sequence is complete. Suppose $\{\varphi_n\}$ is an orthonormal sequence in $L^2[a,b]$ that satisfies $$\sum_{n=1}^\infty \left|\int_a^x \varphi_n(t)\,dt\right|^2=x-a,$$ for all $x \in [a,b]$. Conclude that $\{\varphi_n\}$ is complete in $L^2[a,b]$. Now, my initial thought is to show that Plancherel's equality holds. My assumption is that it already holds for indicator functions of the type $f(x)=\chi_{[a,x]}$. My idea is to use the density of these functions in $L^2[a,b]$ to somehow extend this and conclude that Plancherel's equality holds for all functions in $L^2[a,b]$, which would finish the proof. However, I am unsure if this is the right way to do this. Any help is appreciated. Krull.
The problem with your argument is that functions of the form $\chi_{[a,x]}$ are not dense in $L^{2}$. Linear combinations of these functions are dense, but it is difficult to extend the given equation to linear combinations. Here are some hints: If we show that $\chi_{[a,x]}=\sum \langle {\chi_{[a,x]}}, \phi_n\rangle \phi_n$ in the $L^{2}$ sense, then we can conclude that $\chi_{[a,x]}$ belongs to the closed subspace spanned by $(\phi_n)$. Now we can take linear combinations and limits to show that $L^{2}$ is the closed linear span of $(\phi_n)$, completing the proof. To show that $\chi_{[a,x]}=\sum \langle {\chi_{[a,x]}}, \phi_n\rangle \phi_n$, simply expand $\|\chi_{[a,x]}-\sum\limits_{k=1}^{n} \langle {\chi_{[a,x]}}, \phi_k\rangle \phi_k\|^{2}$ and use orthogonality as well as the given hypothesis. [$\|\chi_{[a,x]}-\sum\limits_{k=1}^{n} \langle {\chi_{[a,x]}}, \phi_k\rangle \phi_k\|^{2}=\|\chi_{[a,x]}\|^{2}-2 \langle {\chi_{[a,x]}}, \sum\limits_{k=1}^{n} \langle {\chi_{[a,x]}}, \phi_k\rangle \phi_k\rangle+\|\sum\limits_{k=1}^{n} \langle {\chi_{[a,x]}}, \phi_k\rangle\phi_k\|^{2}$. Using orthonormality this reduces to $(x-a)-2\sum\limits_{k=1}^{n}|\int_a^{x}\phi_k(t)dt|^{2}+\sum\limits_{k=1}^{n}|\int_a^{x}\phi_k(t)dt|^{2}$ and this tends to $(x-a)-2(x-a)+(x-a)=0$ by hypothesis].
{ "language": "en", "url": "https://math.stackexchange.com/questions/4203736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding Matousek's proof of Equiangular lines In Miniature 9 in 33 Miniatures by Matousek, he proves that: The largest number of equiangular lines in $\mathbb R^3$ is 6, and in general, there cannot be more than $\binom{d+1}{2}$ equiangular lines in $\mathbb R^d$. The proof starts with: Let us consider a configuration of $n$ lines, where each pair has the same angle $\vartheta \in (0, \pi/2]$. Let $v_i$ be a unit vector in the direction of the $i$th line (we choose one of the two possible orientations of $v_i$ arbitrarily). The condition of equal angles is equivalent to $$ |\langle v_i, v_j\rangle| = \cos \vartheta, \quad \text{for all } i\neq j.$$ Let us regard $v_i$ as a column vector, or a $d\times 1$ matrix. Then $v_i^Tv_j$ is the scalar product $\langle v_i, v_j \rangle$. On the other hand, $v_iv_j^T$ is a $d\times d$ matrix. We show that the matrices $v_iv_i^T, i = 1,2,\dots, n$ are linearly independent. And I do not see how showing the independence of these matrices shows that we cannot have more than the claimed number of equiangular lines? Thanks!
The key is in the first sentence, "Let us consider a configuration of $n$ lines..." You can think of it as a proof by contradiction: if we had more than 6 equiangular lines in $\mathbb R^3$, the outlined procedure would construct more than 6 linearly independent symmetric $3\times 3$ matrices. But the symmetric $3\times 3$ matrices form a vector space of dimension $\binom{3+1}{2}=6$, so at most 6 of them can be linearly independent. The same dimension count, $\binom{d+1}{2}$, gives the general bound in $\mathbb R^d$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4203876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to form a set from 4 sets with the given property? Hi, this question is related to Combinatorics. If there is anything I can improve with my question, please let me know. Problem: Let us suppose we have $4$ sets $S_1,S_2,S_3,S_4$ having $a,b,c,d$ elements respectively. All elements in each $S_i$ are distinct for $1\leq i\leq 4$. We need to form a set $S$ from the elements of $S_i;1\leq i\leq 4$ with the following property: $S$ must contain at least $a-1$ elements of $S_1$, at least $b-1$ elements of $S_2$, at least $c-1$ elements of $S_3$ and at least $d-1$ elements of $S_4$. Find the number of ways to form $S$ if $S$ has * *$a+b+c+d-3$ elements. *$a+b+c+d-2$ elements. My try: Method 1: Note that $a+b+c+d-3=(a-1)+(b-1)+(c-1)+(d-1)+1$. We first choose $a-1$ elements from $S_1$ in $\binom{a}{a-1}=a$ ways, then choose $b-1$ elements from $S_2$ in $\binom{b}{b-1}=b$ ways, then choose $c-1$ elements from $S_3$ in $\binom{c}{c-1}=c$ ways and finally choose $d-1$ elements from $S_4$ in $\binom{d}{d-1}=d$ ways. We now need to choose $1$ remaining element of $S$ from $\bigcup_{i=1}^4 S_i$. Now $1$ remaining element of $S$ from $\bigcup_{i=1}^4 S_i$ can be chosen in $\binom{4}{1}=4$ ways. So total number of ways $= 4\times a\times b\times c\times d$. Method 2: We choose $a$ elements from $S_1$, $b-1$ from $S_2$, $c-1$ from $S_3$, $d-1$ from $S_4$. Note that $a$ elements can be chosen from $S_1$ in $\binom{a}{a}=1$ way, then choose $b-1$ elements from $S_2$ in $\binom{b}{b-1}=b$ ways, then choose $c-1$ elements from $S_3$ in $\binom{c}{c-1}=c$ ways and finally choose $d-1$ elements from $S_4$ in $\binom{d}{d-1}=d$ ways. Thus it becomes $b\times c\times d$ ways. Similarly we can choose $a-1$ elements from $S_1$, $b$ from $S_2$, $c-1$ from $S_3$, $d-1$ from $S_4$. Thus it becomes $a\times c\times d$ ways. Similarly we can choose $a-1$ elements from $S_1$, $b-1$ from $S_2$, $c$ from $S_3$, $d-1$ from $S_4$. Thus it becomes $a\times b\times d$ ways.
Similarly we can choose $a-1$ elements from $S_1$, $b-1$ from $S_2$, $c-1$ from $S_3$, and all $d$ from $S_4$. Thus it becomes $a\times b\times c$ ways. So total ways $= bcd+acd+abd+abc.$ I have 3 questions: * *Why are the answers from the two methods coming out different? *I feel Method 2 gives the correct answer, but the method is very lengthy and cumbersome and can't be extended. Is it possible to write it in a short and elegant way? *How to do the second part of the question, where $S$ must have $(a-1)+(b-1)+(c-1)+(d-1)+2$ elements? How to extend Method 1 or Method 2 here? I am looking for some help from all the experts out here. Waiting for your responses.
Method 2 is correct. Method 1 overcounts, because in the event that you end up using all $a$ elements of $S_1$, it distinguishes between the $a-1$ originally chosen elements and the element added later. But all ways to choose $a-1$ elements and then add the last one are equivalent (they all give you the whole of $S_1$). To answer your final question, in part 2 instead of having three sets with one element missing and one full, you have two sets with one element missing and two full. The number of ways to choose $a-1$ elements from $S_1$, $b-1$ from $S_2$, and all the rest is $ab$, so you need all permutations of this, i.e. $ab+ac+ad+bc+bd+cd$. The issue with extending is really one of notation. It is cumbersome because you have $a$ elements in $S_1$, $b$ in $S_2$, etc., instead of the more natural $a_1$ in $S_1$, $a_2$ in $S_2$, etc. With this notation your first expression would become $\sum_{i,j,k\text{ distinct}}a_ia_ja_k$ and your second $\sum_{i,j\text{ distinct}}a_ia_j$.
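Both closed forms are easy to confirm by brute force on a small instance (the sizes below are arbitrary test values, and the four sets are taken to be disjoint, which is how the counting in both parts treats them):

```python
from itertools import combinations

sizes = [2, 3, 2, 2]                      # test values of a, b, c, d
a, b, c, d = sizes
universe = [(i, j) for i, n in enumerate(sizes) for j in range(n)]

def count(target_size):
    # Count subsets S with |S| = target_size that keep at least
    # |S_i| - 1 elements of every S_i.
    total = 0
    for S in combinations(universe, target_size):
        kept = [sum(1 for e in S if e[0] == i) for i in range(4)]
        if all(kept[i] >= sizes[i] - 1 for i in range(4)):
            total += 1
    return total

print(count(a + b + c + d - 3))  # b*c*d + a*c*d + a*b*d + a*b*c = 44
print(count(a + b + c + d - 2))  # a*b + a*c + a*d + b*c + b*d + c*d = 30
```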
{ "language": "en", "url": "https://math.stackexchange.com/questions/4204015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
If $x^3 + \frac{1}{x^3} = 2\sqrt{5}$, then find $x^2 + \frac{1}{x^2}$. Question: If $x$ is a real number satisfying $x^3 + \frac{1}{x^3} = 2\sqrt{5}$, determine the exact value of $x^2 + \frac{1}{x^2}$. My partial solution: $(x + \frac{1}{x})^3 = x^3 + 3x + \frac{3}{x} + \frac{1}{x^3}.$ We know that $x^3 + \frac{1}{x^3} = 2\sqrt5 \Longrightarrow (x + \frac{1}{x})^3 = 2\sqrt5 + 3x + \frac{3}{x}.$ So, $(x + \frac{1}{x})^3 = 2\sqrt5 + 3(x + \frac{1}{x})$. Let $y = x + \frac{1}{x}$. Then, $y^3 = 2\sqrt5 + 3y$. We want to solve for $x^2 + \frac{1}{x^2}$. $(x + \frac{1}{x})^2 = x^2 + \frac{1}{x^2} + 2$. So, we want to solve for $y^2 - 2$. But, I'm confused about how to solve the equation $y^3 = 2\sqrt5 + 3y$, should I check random values? Any help is appreciated, thanks in advance! (Thanks for the suggestions in the comments :)
Reading the comments, I now realize that from the equation $x^3 + \frac{1}{x^3} = 2\sqrt5$, if we let $z = x^3$, then we get $z + \frac{1}{z} = 2\sqrt5 \Longrightarrow z^2 + 1 - 2\sqrt5 z = 0 \Longrightarrow$ using the quadratic formula, we get $z = \sqrt5 \pm 2$. Since $(\sqrt5+2)(\sqrt5-2)=1$, taking $x^3=\sqrt5+2$ gives $\frac{1}{x^3}=\sqrt5-2$. Then $y = x+\frac1x$ satisfies $y^3 = 2\sqrt5 + 3y$, whose only real root is $y=\sqrt5$: indeed $5\sqrt5 = 2\sqrt5+3\sqrt5$, and $y^3-3y-2\sqrt5=(y-\sqrt5)(y^2+\sqrt5\,y+2)$, where the quadratic factor has negative discriminant. Hence the answer is $y^2-2 = 5 - 2 = 3$. I could have also tried different values of $y$ to get $\sqrt5$ as the answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4204311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to show following inequality? Suppose we have \begin{align*}M(r)-m(2 r) &\leq C(m(r)-m(2 r))\\ M(2 r)-m(r) &\leq C(M(2 r)-M(r)). \end{align*} From that how to show that $$ \omega(r) \leq \frac{C-1}{C+1} \omega(2 r) $$ where $\omega(r)=M(r)-m(r)$ and $m(r)=\underset{B_{r}}{\operatorname{essinf}} u, \quad M(r)=\operatorname{esssup}_{B_{r}} u$ My attempt: \begin{align*} M(r)-m(2 r) &\leq C(m(r)-m(2 r))\\ \implies M(r)-m(r)&\le (C-1)(m(r)-m(2r))\\ \omega(r)&\le (C-1)(m(r)-m(2r)) \end{align*} Also \begin{align*} M(2 r)-m(r) &\leq C(M(2 r)-M(r))\\ \implies M(r)-m(r)&\le (C-1)(M(2r)-M(r))\\ \omega(r)&\le (C-1)(M(2r)-M(r)) \end{align*} From that how to show that $$ \omega(r) \leq \frac{C-1}{C+1} \omega(2 r) $$ Any help will be appreciated
Add the two inequalities you started with. You get $$ \omega(2r) + \omega(r) \leq C (\omega(2r)- \omega(r)) $$ Rearranging gives $(C+1)\,\omega(r) \leq (C-1)\,\omega(2r)$, which is the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4204475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Using the Rearrangement Inequality. Let $a,b,c\in\mathbf R^+$, such that $a+b+c=3$. Prove that $$\frac{a^2}{b+c}+\frac{b^2}{c+a}+\frac{c^2}{a+b}\ge\frac{a^2+b^2+c^2}{2}$$ Hint: Use the Rearrangement Inequality. My Work: Without loss of generality, let's assume that $0\le a\le b\le c$; then we can infer that $c^2\ge b^2\ge a^2$ and also $b+c\ge c+a \ge a+b$. Thus $\frac{1}{b+c}\le \frac{1}{c+a}\le \frac{1}{a+b}$. Hence by the Rearrangement Inequality, $\frac{a^2}{b+c}+\frac{b^2}{c+a}+\frac{c^2}{a+b}$ is the greatest permutation; however, I am not able to express $\frac{a^2+b^2+c^2}{2}$ as another permutation of the same, hence I require assistance. Thank You
I apply the rearrangement inequality 3 times to obtain $$ 3\left(\frac{a^2}{b+c}+\frac{b^2}{c+a}+\frac{c^2}{a+b}\right) \geqq (a^2+b^2+c^2)\left(\frac{1}{b+c}+\frac{1}{c+a}+\frac{1}{a+b}\right) $$ Next we can show that $$ \frac{1}{b+c}+\frac{1}{c+a}+\frac{1}{a+b}\geqq \frac{3}{2} $$ by the Cauchy-Schwarz inequality: $\left(\frac{1}{b+c}+\frac{1}{c+a}+\frac{1}{a+b}\right)\bigl((b+c)+(c+a)+(a+b)\bigr)\geqq 9$, and $(b+c)+(c+a)+(a+b)=2(a+b+c)=6$. Combining both inequalities gives the desired result.
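A numerical spot check of the inequality (random positive triples normalized to $a+b+c=3$; purely a sanity check, not a proof):

```python
import random

random.seed(0)
worst = float("inf")
for _ in range(10000):
    u, v, w = (random.uniform(0.01, 1.0) for _ in range(3))
    s = u + v + w
    a, b, c = 3 * u / s, 3 * v / s, 3 * w / s   # now a + b + c = 3
    lhs = a * a / (b + c) + b * b / (c + a) + c * c / (a + b)
    rhs = (a * a + b * b + c * c) / 2
    worst = min(worst, lhs - rhs)
print(worst)  # never noticeably negative; equality holds at a = b = c = 1
```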
{ "language": "en", "url": "https://math.stackexchange.com/questions/4204601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
$|f''(x)|\leq 1$, $\exists\ x_1\neq x_2 \in [0,1]$ such that $f(x_1)=f(x_2)=0$. Show that $|f(x)|\leq 1, \forall\ x\in [0,1].$ Suppose $|f''(x)|\leq 1$ and there exist $x_1\neq x_2 \in [0,1]$ such that $f(x_1)=f(x_2)=0$. Show that $|f(x)|\leq 1$ for all $x\in [0,1]$. If $x_1=0, x_2=1$, then a Taylor expansion would help. But here we only know that $x_1\neq x_2$! How does this help?
Since $f(x_1)=f(x_2)=0$, by Rolle's theorem $f'(c) = 0$ for some $c$ between $x_1$ and $x_2$. If $f''$ is continuous then we have $ \left| f'(x) \right|=\left| \int_{c}^{x} f''(t)\,dt \right|\leq \left|x-c\right|$ for all $x \in [0,1]$. Similarly, $ \left| f(x) \right|=\left| \int_{x_1}^{x} f'(t)\,dt \right|\leq \left| \int_{x_1}^{x} \left| t-c \right| dt \right|\leq \frac{1}{2}(c-x_1)^2+\frac{1}{2}(x-c)^2\leq 1 $, since $|c-x_1|\leq 1$ and $|x-c|\leq 1$.
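A concrete instance (my choice, just to see the slack in the bound): $f(x)=\tfrac12x(x-1)$ has $f''\equiv 1$ and $f(0)=f(1)=0$, and its maximum modulus on $[0,1]$ is $\tfrac18$, comfortably within the bound $1$:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 100001)
f = xs * (xs - 1.0) / 2.0      # f'' = 1 everywhere, f(0) = f(1) = 0
m = np.max(np.abs(f))
print(m)  # 0.125 <= 1
```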
{ "language": "en", "url": "https://math.stackexchange.com/questions/4204788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How should $\int_0^1 |dX_s|$ be understood for a real-valued semimartingale $(X_t)_{t \geq 0}$ of finite variation? How should $\int_0^1 |dX_s|$ be understood for a real-valued semimartingale $(X_t)_{t \geq 0}$ of finite variation? I read this in many sources but I cannot find any explanation of this term. I know the stochastic integral, but it confuses me that we take the absolute value of $dX_s$. This notation appears in Theorem 2.3 in this paper: Jacod, J. and Protter, P. (1998). Asymptotic error distributions for the Euler method for stochastic differential equations. Ann. Probab. 26 267–307. MR1617049
Suppose that $X_t$ is a continuous semi-martingale; then $X_t$ has the decomposition $$X_t = X_0 + M_t + A_t$$ where $M_t$ is a local martingale and $A_t$ is a process of bounded variation. If $X_t$ has finite variation then $M_t = 0$ (cf. Continuous local martingale of finite variation is constant). Thus, we may interpret this integral pathwise; that is, for each $\omega \in \Omega$ $$\int_0^1 |dX_s|(\omega) = \int_0^1 |dA_s|(\omega) = TV(A)(\omega)$$ where $|dA_s|$ is the total variation measure associated with $A_t$ and $TV(A)(\omega)$ is the total variation of $A_t(\omega)$.
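For a concrete path, the pathwise meaning is just total variation, which a sum of absolute increments over a fine partition recovers (the path $A_t=\sin(2\pi t)$ is an arbitrary smooth example; its variation on $[0,1]$ is $\int_0^1|2\pi\cos(2\pi s)|\,ds=4$):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 200001)
A = np.sin(2 * np.pi * t)            # a path of finite variation
tv = np.sum(np.abs(np.diff(A)))      # sum of |increments| over a fine grid
print(tv)  # close to 4
```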
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determine Maximum-Likelihood-Estimator without knowing the observed value Consider the following distribution with Lebesgue density $$f_{\theta}(x) = \begin{cases} \theta x^{-\theta-1}, \quad \text{if } 1 < x < \infty \\ 0, \quad \text{else} \end{cases},$$ for $\theta \in [a,b]$ with $1 < a < b < \infty$. Let $r > 1$ be a real number and the observed value $x_1$ be greater than $r$. However, the value of $x_1$ is unknown. Determine the MLE for $\theta$. As we don't know the exact value of $x_1$ the classic approach cannot be done, so I determined the probability of a random variable (with the distribution above) being greater than r and that is $r^{-\theta}$. If we consider this to be the Likelihood function, then the MLE for $\theta$ is going to be the minimum value of the interval for $\theta$ and that means $a$, as $r^{-\theta}$ is decreasing in $\theta$. Is this correct or is there another approach which has to be considered?
Your log likelihood is (up to an additive constant) $$l(\theta)=\log\theta-\theta\log x_1$$ with unconstrained MLE $$\hat{\theta}=\frac{1}{\log x_1},$$ but since $\theta\in[a;b]$ the MLE becomes $$\hat{\theta}=\max\left\{a;\min\left\{\frac{1}{\log x_1};b\right\}\right\}$$ With $x_1$ unknown but $x_1>r$, your MLE estimate is $$\hat{\theta}=\max\left\{a;\min\left\{\frac{1}{\log r};b\right\}\right\}$$
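The clamped closed form can be checked against a grid search of $l(\theta)$ over $[a,b]$ (the parameter values below are arbitrary test choices, and $x_1=r$ is plugged in, following the substitution above):

```python
import math

a, b, r = 1.2, 3.0, 1.5          # hypothetical bounds and threshold

def loglik(theta, x1):
    return math.log(theta) - theta * math.log(x1)

theta_hat = max(a, min(1.0 / math.log(r), b))   # the clamped closed form

grid = [a + (b - a) * i / 200000 for i in range(200001)]
best = max(grid, key=lambda th: loglik(th, r))
print(theta_hat, best)  # both near 1/log(1.5), about 2.466
```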
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Apostol Analytic Number Theory Ch3 ex1 I have worked through Exercise 1 in Chapter 3. Part (a) requires you to show that $$\sum\limits_{n \le x} {\frac{{\log n}}{n} = \frac{1}{2}{{\log }^2}x + A + O\left( {\frac{{\log x}}{x}} \right)} $$ The Euler summation formula to apply is $$\sum\limits_{y < n \le x} {f(n) = \,\,\,\int\limits_y^x {f(t)\,\,{\rm{d}}t} } \,\,\, + \,\,\,\int\limits_y^x {\left( {t - \left[ t \right]} \right)f'(t)\,\,{\rm{d}}t} \,\, + \,\,f(x)\left( {\left[ x \right] - x} \right)\,\, + \,\,f(y)\left( {\left[ y \right] - y} \right)$$ I managed to solve it satisfactorily, but I have also looked at other solutions posted online. In particular, one by a chap called Greg Hirst who has kindly made available his solutions to the entire book, which can be accessed here. The extract I am referring to in this post is here: His approach was clear enough, except for one line where he claims, $$\left| {\int\limits_x^\infty {\left( {t - \left[ t \right]} \right)\left( {\frac{{1 - \log t}}{{{t^2}}}} \right)\,\,{\rm{d}}t} } \right|\,\, \le \,\,2\int\limits_x^\infty {\frac{{\log t}}{{{t^2}}}} \,\,{\rm{d}}t$$ I have constructed my own argument for why the "2" can be incorporated as shown, but it seems rather awkward. Is there anyone who reads this post who can explain why this might be the "go to" approach favoured by some people? Any insight much appreciated.
Note that for $x\geq e$, $$\left| {\int_x^\infty {\left( {t - \left[ t \right]} \right)\left( {\frac{{1 - \log t}}{{{t^2}}}} \right)\,dt} } \right|\leq {\int\limits_x^\infty { \underbrace{\left({t - \left[ t \right]}\right)}_{<1} \Big( {\frac{{\overbrace{1}^{\leq \log t} + \log t}}{{{t^2}}}} \Big)\,dt} } \leq 2\int_x^\infty {\frac{{\log t}}{{{t^2}}}} \,dt.$$ A more natural upper bound would be $$\int_x^\infty \frac{1}{{t^2}} \,dt+\int_x^\infty {\frac{{\log t}}{{{t^2}}}} \,dt=\frac{1}{x}+\int_x^\infty {\frac{{\log t}}{{{t^2}}}} \,dt.$$ But in both cases, in the end we arrive at $O\left(\frac{\log(x)}{x}\right)$.
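One can watch the error term behave as claimed: $D(x)=\sum_{n\le x}\frac{\log n}{n}-\frac12\log^2 x$ should stabilize at the constant $A$, with fluctuations of size about $\frac{\log x}{x}$ (the cutoffs below are arbitrary):

```python
import math

def D(x):
    s = sum(math.log(n) / n for n in range(1, x + 1))
    return s - 0.5 * math.log(x) ** 2

d1, d2 = D(10**4), D(10**5)
print(d1, d2)  # both approximate A; the gap is of order log(x)/x
```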
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that $\lim_{n\to\infty}\cos(n\theta)$ and $\lim_{n\to\infty}\sin(n\theta)$ do not exist if $\theta$ is not an integer multiple of $\pi$ The problem is Problem 7 of Chapter 1.5 (p. 63) from the book Complex Analysis with Applications by Asmar et al. We have to show that for $\theta$ not an integer multiple of $\pi$, $\lim_{n\to\infty}\cos(n\theta)$ and $\lim_{n\to\infty}\sin(n\theta)$ do not exist. As of the time of writing I haven't really managed to start proving the claim, as my only progress so far is that $\theta \neq k\pi, k \in \mathbb{Z} \Longleftrightarrow \theta = (l + \epsilon)\pi, l \in \mathbb{Z}, \epsilon \in \mathbb{R}\setminus \mathbb{Z}$, so that $\cos(n\theta) = \cos(n(l + \epsilon)\pi) = \cos(nl\pi)\cos(n\epsilon\pi) - \underbrace{\sin(nl\pi)}_{=0}\sin(n\epsilon\pi) = \pm\cos(n\epsilon\pi)$, depending on whether $2|nl$ or not. This would hint that $\cos(n\epsilon\pi)$ remains oscillating with the $\pm$, but I'm not sure how to finish the proof. One tool that is given before the problem is that for $z\in\mathbb{C}:\lim_{n\to\infty}z^n = 0$ if $|z| < 1$, $1$ if $z = 1$, and the limit does not exist in other cases. I know that $\cos(t) = (\exp(it) + \exp(-it))/2$, but I'm not yet sure how to actually use this in the proof.
Let's suppose that $\sin(n\theta)$ converges to a limit $l$. One has $$\sin((n+1)\theta)=\sin(n\theta)\cos(\theta)+\sin(\theta)\cos(n\theta) \quad \quad (1)$$ $$\sin((n-1)\theta)=\sin(n\theta)\cos(\theta)-\sin(\theta)\cos(n\theta) \quad \quad (2)$$ so adding $(1)+(2)$, and letting $n$ tend to $+\infty$, you get $$2l = 2l \cos(\theta)$$ so $l=0$ because $\cos(\theta)\neq 1$. But then $(1)$ implies that $\cos(n\theta)$ also tends to $0$, which is absurd since $\sin^2(n\theta)+\cos^2(n\theta)$ must be constant equal to $1$.
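The oscillation is easy to see numerically: with $\theta=1$ (not a multiple of $\pi$), the values $\sin(n)$ for $n<1000$ already come within $10^{-3}$ of both $+1$ and $-1$, so no limit can exist:

```python
import math

vals = [math.sin(n) for n in range(1000)]
print(max(vals), min(vals))  # essentially +1 and -1
```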
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about projection onto subspaces Suppose I have an $n\times n$ (complex) matrix $Q$ viewed as a linear transformation from $\mathbb{C}^{n}$ to $\mathbb{C}^{n}$. I'm struggling to understand the difference between the following two scenarios. For simplicity, let us consider $n = 4$. Scenario 1: Let $S$ be a subspace of $\mathbb{C}^{4}$ of dimension $d=2$. Let us think of $S$ as the subspace generated by two eigenvectors $v_{1}$ and $v_{2}$ of $Q$. One can construct a projection matrix $P_{S}$. According to this post, this projection should be given by: $$P_{S} = A(A^{T}A)^{-1}A^{T}$$ where $A = [v_{1} \hspace{0.1cm} v_{2}]$ is a $4\times 2$ matrix. This is a $4\times 4$ matrix. Scenario 2: Let us write $\mathbb{C}^{4} = S \oplus S^{\perp}$. Since $Q$ is a linear operator on $\mathbb{C}^{4}$, it might be possible to decompose $Q = Q_{1}\oplus Q_{2}$ so that, if $u = u_{1}+u_{2}$ and $u_{1}\in S$ and $u_{2} \in S^{\perp}$: $$Qu = Q_{1}u_{1}\oplus Q_{2}u_{2}$$ Each $Q_{i}$ should be a $2\times 2$ matrix. Here is my problem. I would expect that $Q_{1} = QP_{S}$, since $Q_{1}$ should be the restriction of $Q$ to $S$. However, as I stressed before, both $Q$ and $P_{S}$ are $4\times 4$ matrices. So, where is the gap? If $Q_{1}$ is not $QP_{S}$, how can I obtain $Q_{1}$? And finally, why is $P_{S}$ a $4\times 4$ matrix if it projects into a subspace of dimension $2$? (I would expect it to be a $2\times 4$ matrix instead).
I started working on a trivial example before the much better answer appeared, but perhaps this simple concrete example will help guide someone. You can express tensors in different bases. Just like I can express the 2D vector $v = 3 \mathbf{e}_x + 4 \mathbf{e}_y$ in terms of basis vectors for an $x$-$y$-plane, I can also express it as $v = 5 \mathbf{e}_v$ where $\mathbf{e}_v = \frac{3}{5} \mathbf{e}_x + \frac{4}{5} \mathbf{e}_y$. When we only pass around matrices with just the coefficients, e.g. $v = [3,4]$, there is an assumption made about which basis they are expressed in. If we want to project another vector, e.g. $u = 5 \mathbf{e}_x + 0 \mathbf{e}_y$, we can express this projection in several ways: in terms of the 2 coefficients of $\mathbf{e}_x$ and $\mathbf{e}_y$, or in terms of the 1 coefficient of $\mathbf{e}_v$. Working through the computations for the case I made up here, one will get $$ u_v = \frac{9}{5} \mathbf{e}_x + \frac{12}{5} \mathbf{e}_y \equiv 3 \mathbf{e}_v $$ Either way we tackle it, it's the same vector; we only need to think about which basis we are expressing the coefficients in.
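A small numerical sketch of the Scenario 1 formula (real case for simplicity; for complex data one would use conjugate transposes, i.e. $P_S=A(A^HA)^{-1}A^H$ — that adaptation is my assumption here, consistent with the Hermitian inner product in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)
A = np.column_stack([v1, v2])            # 4x2, columns span the subspace S
P = A @ np.linalg.inv(A.T @ A) @ A.T     # 4x4 projection onto S

# P is 4x4 because it acts on all of R^4: it maps any u to its component
# in S, expressed in the ambient coordinates, and its rank is dim S = 2.
print(np.allclose(P @ P, P), np.allclose(P @ A, A), np.linalg.matrix_rank(P))
```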
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How do we solve for $x$ in $x^5-x-1=0$? What is the procedure for finding the simplest exact (or at least a verifiable approximation to a desired level of precision) correct answer to this quintic equation, and more generally to other polynomial equations of degree 4 and higher yielding non-rational zeroes? And in this particular case, what are all the (exact or approximate) solutions? On Desmos there appears to be one real x-intercept at ~1.167; I would imagine that there are up to four other non-real solutions (or fewer, should some have multiplicity greater than 1; is this possible for non-real complex roots?)
Using Newton-Raphson: $$f(x) = x^5 - x - 1$$ $$\text{At}\quad x= 1.00,\space f(x) = -1.00 $$ $$\text{At}\quad x= 2.00,\space f(x) = 29.00 $$ so a real root lies between $1$ and $2$. $$f'(x)=5x^4-1$$ $$x_0=2$$ $$x_1=x_0-\frac{f\left(x_0\right)}{f'\left(x_0\right)}$$ $$x_1≈1.6329$$ $$x_2≈1.3731$$ $$x_3≈1.2236$$ $$x_4≈1.1727$$ $$x_5≈1.1674$$ $$x_6≈1.167$$
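The iteration is a few lines of code (a straightforward sketch of the same computation):

```python
def f(x):
    return x**5 - x - 1

def fprime(x):
    return 5 * x**4 - 1

x = 2.0
for _ in range(50):
    x -= f(x) / fprime(x)   # Newton-Raphson step
print(x)  # ~1.16730, the unique real root of x^5 - x - 1
```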
{ "language": "en", "url": "https://math.stackexchange.com/questions/4205999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Solving for $y$, given $x = \frac{-1 + \sqrt{1+ (4y/50)} }{2}$ I have the following quadratic equation: $$x = \frac{-1 + \sqrt{1+ (4y/50)} }{2}$$ in this case $y$ is a known variable so I can solve the equation like this for $y = 600$ $$x = \frac{-1 + \sqrt{1+ (4\times 600/50)} }{2}$$ and I was looking for help on how to refactor it so I can solve it while knowing $x$ instead of $y$, but I have no idea how to do it. I would be looking for something like $y = x$??? Thanks!
You probably know the quadratic formula: $x_{1/2} = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$ This is the formula for the roots of the quadratic function $f=ax^2+bx+c$. In this case we have $a=1$ and $b=1$. Comparing $4ac=4c$ with $-\frac{4y}{50}$ gives $c=-\frac{y}{50}$. Thus $f=x^2+x-\frac{y}{50}=0$. Finally you can solve for $y$, obtaining $y = 50\left(x^2+x\right)$.
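Carrying out that last step gives $y=50(x^2+x)$; a quick round trip confirms the inversion (the sample $y=600$ is taken from the question):

```python
import math

def x_from_y(y):
    return (-1 + math.sqrt(1 + 4 * y / 50)) / 2

def y_from_x(x):
    return 50 * (x**2 + x)     # the inverted formula

x = x_from_y(600.0)
print(x, y_from_x(x))  # x = 3.0 and the recovered y = 600.0
```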
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
For which $n \ge 0$ is $n \cdot 2^n + 1$ divisible by 3? $\textbf{Edit:}$ Thank you all so much for all your help! Superglad I chose to ask. All your different approaches were very insightful and taught me a lot. Thank you! Same question: Found here. Hi, I'm a bit stuck with my question. I've read through the answers in the above link, but I cannot quite understand the answers fully. I don't want to hijack the other thread either, so I thought I'd start a separate thread. $\textbf{The question is:}$ For which ${n \ge 0}$ is the number ${n \cdot 2^n + 1}$ divisible by 3? I've gotten this far in my understanding: For even ${n(n=2k)}$: \begin{equation} \begin{aligned} 2^n&=2^{2k} \equiv_3 4^k \equiv_3 1^k \equiv_3 1\\ n\cdot 2^n+1 &\equiv_3 n\cdot 1 + 1 \equiv_3 n +1 \end{aligned} \end{equation} For odd ${n(n=2k+1)}$: \begin{equation} \begin{aligned} 2^{2k+1}&=2^{2k}\cdot 2 \equiv_3 1^k \cdot -1 \equiv_3 1 \cdot - 1 \equiv_3 -1\\ n\cdot 2^n+1 &\equiv_3 n\cdot -1 + 1 \equiv_3 -n +1 \end{aligned} \end{equation} One of the answers in the link above mentioned \begin{equation} n\equiv 0 (\text{mod 2})\qquad n\equiv 2(\text{mod 3}) \end{equation} which could be "consolidated as ${n\equiv 2 (\text{mod 6})}$". I don't understand what the "consolidate"-part means. So, would someone please like to share some knowledge and insights with me? It feels like I'm almost there, but still a long way to go. Thanks
$$n\equiv 0 \text{ (mod 2)} \;\text{ and }\;n\equiv 2\text{ (mod 3)}\\\iff n-2\equiv 0 \text{ (mod 2)} \;\text{ and }\;n-2\equiv 0\text{ (mod 3)}\\\iff n-2 \;\text{ is a multiple of both 2 and 3}\\\iff n-2 \;\text{ is a multiple of lcm(2,3)}\\\iff n\equiv 2 \text{ (mod 6)}$$ Continuing where you left off: * *For even ${n(n=2k)}$: $$ 2^n=2^{2k} \equiv_3 4^k \equiv_3 1^k \equiv_3 1.\\ n\cdot 2^n+1 \equiv_3 n\cdot 1 + 1 \equiv_3 n +1\equiv_3 n -2.\\ n \cdot 2^n + 1 \text{ is divisible by 3} \iff n \equiv_3 2 \text { and } n\equiv_2 0\\\iff n\equiv_6 2. $$ *For odd ${n(n=2k+1)}$: $$ 2^{2k+1}=2^{2k}\cdot 2 \equiv_3 1^k \cdot -1 \equiv_3 1 \cdot - 1 \equiv_3 -1.\\ n\cdot 2^n+1 \equiv_3 n\cdot -1 + 1 \equiv_3 -n +1.\\ n \cdot 2^n + 1 \text{ is divisible by 3} \iff n \equiv_3 1 \text { and } n\equiv_2 1\\\iff n\equiv_6 1. $$ Hence, for $n\geq0,$ $$n \cdot 2^n + 1 \text{ is divisible by 3} \\\iff n\equiv_6 1 \text{ or } 2 \\\iff n\in \{6k+1\mid k\in\mathbb{Z}_0^+\} \cup\{6k+2\mid k\in\mathbb{Z}_0^+\}. $$
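A direct computer check of the conclusion (the range 0-599 is arbitrary):

```python
# Collect the residues mod 6 of all n for which n * 2^n + 1 is divisible by 3.
residues = {n % 6 for n in range(600) if (n * 2**n + 1) % 3 == 0}
print(residues)  # {1, 2}: exactly the classes n mod 6 in {1, 2}
```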
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Why does $f:\mathbb{C}\setminus\{\pm 1\} \rightarrow\mathbb{C}; f(z)=\frac{1}{z^2-1}$ not have an antiderivative? My solution would be to define a counter-clockwise circle with the radius 1 around the point z=1 and integrate over it using the Residue-Theorem. So i get: $\int\limits_{\partial B_1(1)}\frac{1}{z^2-1}\text{d}z=\int\limits_{\partial B_1(1)}\frac{1}{(z-1)(z+1)}\text{d}z=2\pi i\sum\limits_{a\in B_1(1)}$ind$_{\partial B_1(0)}(a)$ Res$_a(f)=2\pi i \frac{1}{2}=\pi i\neq 0$ As shown above, the value of the complex integral is not zero, and hence the function does not have an anti-derivative on this domain. Is this solution correct?
If you replace $B_1(0)$ by $B_1(1)$ everywhere, I agree. To complete your argument: Suppose that there exists a holomorphic function $F:\mathbb C\setminus\{-1,1\}\to\mathbb C$ satisfying $F'=f$ on its domain. Let $\gamma:[0,1]\to\mathbb C$ be a parametrization of $\partial B_1(1)$ (for instance $\gamma(t)= 1+\exp(2\pi i t)$.) Then $$\pi i = \int_\gamma f =\int_0^1\frac{\mathrm d(F\circ\gamma)}{\mathrm dt}=F(\gamma(1))-F(\gamma(0))=0,$$ which is a contradiction.
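The value $\pi i$ can be confirmed numerically; the trapezoid rule on the periodic parametrization $\gamma(t)=1+e^{2\pi it}$ converges extremely fast here:

```python
import numpy as np

N = 2000
t = np.arange(N) / N
gamma = 1 + np.exp(2j * np.pi * t)
dgamma = 2j * np.pi * np.exp(2j * np.pi * t)     # gamma'(t)
integral = np.mean(dgamma / (gamma**2 - 1))      # trapezoid rule, period 1
print(integral)  # close to pi*i; nonzero, so no antiderivative can exist
```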
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $K=\mathbb{Q}(\alpha,\sqrt{-3},\sqrt[3]{10})$ is a splitting field for $f(x)$. a) Show that $f(x)=x^6-2x^3-10$ is irreducible over $\mathbb{Q}$. Proof: Using Eisenstein's Criterion, with $p=2$, we show $f(x)$ is irreducible over $\mathbb{Q}$. b) Let $\alpha=\sqrt[3]{1+\sqrt{11}}$. Show that $|\mathbb{Q}(\alpha):\mathbb{Q}|=6$. Proof: Note that $\alpha$ is a root of $f(x)$. By part (a), we can conclude that $f(x)$ is the minimal polynomial of $\alpha$ over $\mathbb{Q}$. Furthermore $|\mathbb{Q}(\alpha):\mathbb{Q}|=6$. c) Show that $K=\mathbb{Q}(\alpha,\sqrt{-3},\sqrt[3]{10})$ is a splitting field for $f(x)$. Proof: Note that $\beta=\sqrt[3]{1-\sqrt{11}}$ is also a root of $f(x)$. [Check by plugging it in $f(x)$.] $$ \alpha \cdot \beta = \sqrt[3]{(1+\sqrt{11})(1-\sqrt{11})} = \sqrt[3]{-10} $$ * *How do I get rid of the negative in my radical? Recall $\sqrt[3]{-1}=-1$. *What combination will help me get $\sqrt{-3}$? Refer to: Determine splitting field $K$ over $\mathbb{Q}$ of the polynomial $x^3 - 2$ d) Show that $\sqrt[3]{10}\not\in \mathbb{Q}(\sqrt{-3},\sqrt{11})$ and conclude that $|L:\mathbb{Q}|=12$, where $L=\mathbb{Q}(\sqrt{-3},\sqrt{11},\sqrt[3]{10})$. Moreover $K=L(\alpha)$ and thus $|K:\mathbb{Q}|=12$ or $36$. * *How do I show $\sqrt[3]{10}\not\in \mathbb{Q}(\sqrt{-3},\sqrt{11})$? *Since the corresponding minimal polynomials of $\sqrt{-3},\sqrt{11},\sqrt[3]{10}$ have degree $2,2,3$, respectively, and share no roots, $|L:\mathbb{Q}|=12$. *$L(\alpha)= \mathbb{Q}(\sqrt{-3},\sqrt{11},\sqrt[3]{10})(\alpha)$. Does $\alpha$ get consumed by $\sqrt[3]{10}$ from what we have shown in part (c) and the first part in (d)? Is this how we get $K=L(\alpha)$? *How can $|K:\mathbb{Q}|=36$?
Supplementing the other answer with a possibly unnecessarily high-browed argument proving that the splitting field has degree $36$. Or, equivalently, that $1+\sqrt{11}$ has no cube root in the field $L$. The tool I use is Dedekind's theorem, so the argument relies on Galois theory and group actions. Consider the polynomial $f(x)=x^6-2x^3-10$. Let $G$ be the Galois group of this polynomial, viewed as a subgroup of the group of permutations of the six roots, identified with $S_6$. * *Eisenstein's criterion ($p=2$) tells us that $f(x)$ is irreducible over $\Bbb{Q}$. Therefore $G$ acts transitively on the set of roots, and hence $6\mid |G|$. Let us fix a root $\alpha$ of $f$ in the splitting field $K$ (unique as a subfield of $\Bbb{C}$). Factorization of $f$ modulo different primes reveals other things about the action of $G$. I will pay particular attention to the subgroup $H=Stab_G(\alpha)$ that keeps the root $\alpha$ fixed. * *Modulo $7$ we have $f(x)=(x^3+1)(x^3+4)$. The first cubic factors fully, $x^3+1=(x+1)(x+2)(x+4)$, as the field $\Bbb{F}_7$ has three third roots of unity: $1,2,4$. On the other hand, the cubic $x^3+4$ remains irreducible modulo $7$ because the congruence $x^3+4\equiv0\pmod7$ has no integer solutions. So Dedekind tells us that $G$ contains a $3$-cycle $\sigma$. This $3$-cycle fixes three of the zeros. Because $G$ acts transitively on the set of roots, one of the conjugates of $\sigma$ in $G$ has $\alpha$ as a fixed point. It follows that $H$ contains a $3$-cycle, and hence $3\mid |H|$. *By the orbit-stabilizer theorem $|G|=6|H|$ is thus a multiple of $18$. The other answer shows that $|G|=[K:\Bbb{Q}]\in\{12,36\}$, so we can conclude that $$|G|=[K:\Bbb{Q}]=36.$$ This was somewhat unsatisfactory in the sense that initially I needed help from a computer program. At some point this becomes necessary as eliminating alternative Galois groups becomes more and more demanding, when the degree grows (barring some special cases). 
In the thread inspired by this, Paramanand Singh reaches the same conclusion by a different computer-aided calculation, proving that an element of $K$ has a minimal polynomial of degree nine, implying $9\mid |G|$ and again eliminating $[K:\Bbb{Q}]=12$ as an alternative.
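The finite-field computations in step 2 can be verified directly (a plain brute-force check, not the computer algebra originally used):

```python
p = 7

# x^6 - 2x^3 - 10 and (x^3 + 1)(x^3 + 4) agree at all 7 points of F_7;
# two polynomials of degree <= 6 agreeing at 7 points over a field coincide.
for x in range(p):
    assert (x**6 - 2 * x**3 - 10) % p == ((x**3 + 1) * (x**3 + 4)) % p

cubes = sorted({x**3 % p for x in range(p)})
print(cubes, (-4) % p in cubes)
# The cubes mod 7 are [0, 1, 6]; since 3 = -4 (mod 7) is not among them,
# x^3 + 4 has no root mod 7, and a cubic with no root over a field
# is irreducible.
```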
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
how to find greatest and least values of $|z-2-3i|$ if $|z-5-7i|=9$ Context: conceptual question in JEE. I got the answer by using properties of modulus, but I don't know if the procedure is correct. If $|z-5-7i|=9$ then find the greatest and least values of $|z-2-3i|$ Answer: $14$, $4$ Here's what I did: given, $$||z|- |-5-7i||\leq |z-5-7i| \leq |z| + |-5-7i|$$ $$||z|- \sqrt{74}|\leq 9 \leq |z| + \sqrt{74}$$ I don't know if what I'm about to do next is correct, but I did this: $$9\leq|z|+\sqrt{74}\implies |z| \geq 9-\sqrt{74}$$ I assumed $|z| = 9-\sqrt{74}$ $$||z|-|-2-3i|| \leq|z-(2+3i)|\leq|z|+|-2-3i|$$ $$|9-\sqrt{74}-5| \leq |z-(2+3i)| \leq 9+5$$ $$|-4.60| \leq |z-(2+3i)| \leq 14$$ Again I assumed $|-4.60|$ approx $4$ $$4\leq|z-(2+3i)| \leq 14$$ Looking at the question, $|z-(5+7i)| = 9$ resembles the equation of a circle whose center is at $5+7i$ with radius $9$. Is there any way to solve this question using the circle equation? And how does the equation of a circle relate to finding the greatest and least values of $|z-2-3i|$? Is what I did above correct? I got that answer by randomly plugging in a value of $|z|$, as I can't find a solid solution to this question in any module/internet/book.
The set $c=\{z\in\Bbb C\mid|z-5-7i|=9\}$ is the circle centered at $C=5+7i$ with radius $9$. The points of that circle nearest to and furthest from $P=2+3i$ are the points at which the line defined by $P$ and $C$ intersects the circle $c$; see the picture below. So, consider the points of the form $tC+(1-t)P$. Then\begin{align}\bigl|tC+(1-t)P-C\bigr|=9&\iff|(3+4i)(t-1)|=9\\&\iff|t-1|=\frac95\\&\iff t=\frac{14}5\text{ or }t=-\frac45.\end{align}Can you take it from here?
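Sampling the circle confirms the extreme distances (and the exact values $9\mp 5$, since $|C-P|=5$):

```python
import numpy as np

C, P = 5 + 7j, 2 + 3j
theta = np.linspace(0.0, 2 * np.pi, 400001)
z = C + 9 * np.exp(1j * theta)        # the circle |z - (5+7i)| = 9
d = np.abs(z - P)
print(d.min(), d.max())  # ~4 = 9 - |C-P| and ~14 = 9 + |C-P|
```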
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$\forall$ a,b $\in$ $\mathbb{R}$ satisfying a < b, $\exists$ n $\in$ $\mathbb{N}$ such that a + $\frac{1}{n}$ < b. My aim is to negate the statement. Attempt: $\exists$ a,b $\in$ $\mathbb{R}$ satisfying a $\geq$ b, $\forall$ $n \in \mathbb{N}$ we have that $a + \frac{1}{n} \geq b$. When I went to search for the answer in the solution manual, I found that the author had the following: $\exists a,b \in \mathbb{R}$ satisfying $a < b$ such that $\forall n \in \mathbb{N}$ we have $a + \frac{1}{n} \geq b$. I don’t understand why the author didn’t negate the condition that $a < b$. Is this an error, or do I have a misunderstanding? Thanks!
You misunderstand. The word "satisfying" implies a property of the numbers that is independent of the statement itself, much like a number can be prime, or divisible by 4. Therefore negation has no effect on this part; the set of numbers is still the same. You can also think of it like this: choosing $a,b \in \mathbb{R}$ satisfying $a < b$ is the same as picking elements from the set $S = \{(a,b) \in \mathbb{R}^2 \mid a< b\}$, and therefore the original statement can be written: $\forall (a,b) \in S, \exists n \in \mathbb{N}, a+\frac{1}{n}< b$ Applying negation to each part of this statement, you get: $\exists (a,b) \in S, \forall n \in \mathbb{N}, a+\frac{1}{n} \geq b$ Hence the result. Note that this negated statement is always false, a consequence of the Archimedean property of the real numbers.
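The reason the negation is always false is that, given $a<b$, the Archimedean property produces an explicit $n$ with $a+\frac1n<b$: any integer $n>\frac{1}{b-a}$ works (the `+ 2` below just sidesteps floating-point edge cases; the sample pairs are arbitrary):

```python
import math

def witness(a, b):
    # An n with a + 1/n < b, valid whenever a < b.
    return math.floor(1 / (b - a)) + 2

for a, b in [(0.0, 1.0), (2.5, 2.6), (-3.0, -2.9)]:
    n = witness(a, b)
    print(a, b, n, a + 1 / n < b)  # always True
```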
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimization using rotations with constraint Given angle $\phi \in [-\pi,\pi]$ and vector ${\bf a} \in \mathbb{C}^n$, $$\begin{array}{ll} \underset{{\bf v} \in \mathbb{C}^n}{\text{minimize}} & \left| {\bf a}^H \bf{v} \right|^2\\ \text{subject to} & |v_1| = \cdots = |v_n| = 1\\ & \arg \left( {\bf a}^H {\bf v} \right) = \phi\end{array}$$ This minimization problem is part of beamforming design in wireless communications. I would like to ask you to please help me with this minimization problem.
Calling $$ \cases{ v_k = e^{i\theta_k}\\ \overline{a_k} = x_k + i y_k } $$ (so that ${\bf a}^H{\bf v}=\sum_k \overline{a_k}\,v_k$), we have $$ p_r = \sum_k^nx_k \cos\theta_k-y_k\sin\theta_k\\ p_i = \sum_k^nx_k \sin\theta_k+y_k\cos\theta_k\\ $$ and the problem can be formulated as $$ \min_{\theta_k}p_r^2+p_i^2\ \ \text{s. t.}\ \ p_i = p_r\tan\phi $$ This problem can be posed as $$ \min_{c_k,s_k}\left(\sum_k^nx_k c_k-y_k s_k\right)^2+\left(\sum_k^nx_k s_k+y_kc_k\right)^2 \ \ \text{s. t.}\ \ \cases{c_k^2+s_k^2=1, \ \ k = 1,\cdots, n\\ \sum_k^nx_k s_k+y_kc_k = \tan\phi\left(\sum_k^nx_k c_k-y_k s_k\right)} $$ and can be successfully tackled using a Sequential Quadratic Programming approach. Attached is a MATHEMATICA script showing a typical procedure:

```mathematica
n = 10;
ar = RandomReal[{-3, 3}, n];
ai = RandomReal[{-3, 3}, n];
ck = Table[c[k], {k, 1, n}];
sk = Table[s[k], {k, 1, n}];
re = Sum[ar[[k]] c[k] - ai[[k]] s[k], {k, 1, n}];
im = Sum[ar[[k]] s[k] + ai[[k]] c[k], {k, 1, n}];
obj = re^2 + im^2;
restr1 = Table[s[k]^2 + c[k]^2 - 1, {k, 1, n}];
restr2 = im - Tan[phi] re /. {phi -> Pi/4};
restrs = Join[restr1, {restr2}];
sol = NMinimize[{obj, restrs == 0}, Join[ck, sk]]
restrs /. sol[[2]]
```
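The real reformulation can be sanity-checked numerically. In the sketch below (plain Python, names mine) I take $x_k + iy_k$ to be $\overline{a_k}$, so that $\sum_k (x_k+iy_k)e^{i\theta_k}$ equals ${\bf a}^H{\bf v}$ exactly; then $p_r$ and $p_i$ are its real and imaginary parts:

```python
import cmath
import math
import random

# Check that p_r + i*p_i equals a^H v when x_k + i*y_k = conj(a_k).
random.seed(1)
n = 10
a = [complex(random.uniform(-3, 3), random.uniform(-3, 3)) for _ in range(n)]
theta = [random.uniform(-math.pi, math.pi) for _ in range(n)]

x = [ak.real for ak in a]
y = [-ak.imag for ak in a]          # conj(a_k) = x_k + i*y_k
c = [math.cos(t) for t in theta]
s = [math.sin(t) for t in theta]

p_r = sum(x[k] * c[k] - y[k] * s[k] for k in range(n))
p_i = sum(x[k] * s[k] + y[k] * c[k] for k in range(n))

aHv = sum(a[k].conjugate() * cmath.exp(1j * theta[k]) for k in range(n))
assert abs(complex(p_r, p_i) - aHv) < 1e-12
print("p_r + i p_i matches a^H v")
```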
{ "language": "en", "url": "https://math.stackexchange.com/questions/4206946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine how many $4$ digit numbers divisible by $5$ can be generated using the set: $\{1,3,5,6,7,8\}$ Determine how many $4$ digit numbers divisible by $5$ can be generated using the set: $\{1,3,5,6,7,8\}$. Repetition is not allowed. If I am not wrong, we have to use permutation in this question, right? I still do not understand how to solve this question. I know that the numbers must end with either $5$ or $0$. Looking at the question I can also tell that there are $5$ possibilities ($1,3,6,7$, or $8$) for the thousands digit, $4$ possibilities for the hundreds digit, $3$ possibilities for the tens digit, and $1$ possibility for the ones digit. I just don't know how to use the formula, and how to show my work. I need someone to show me all the steps, and the answer, so I can use this to solve similar questions. Can someone please help me?
You've almost got the answer to the question. Just use the multiplication principle: $5 \cdot 4 \cdot 3 \cdot 1 = 60$. If you want to use permutations, note that the last digit is forced to be $5$ (the set contains no $0$), so you are arranging $3$ of the remaining $5$ digits: $P(5,3) = 60$.
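A brute-force enumeration confirms the count (plain Python sketch):

```python
from itertools import permutations

# Count 4-digit numbers built from {1,3,5,6,7,8} without repetition
# that are divisible by 5; the last digit must be 5 since 0 is unavailable.
digits = [1, 3, 5, 6, 7, 8]
count = sum(1 for p in permutations(digits, 4)
            if (p[0] * 1000 + p[1] * 100 + p[2] * 10 + p[3]) % 5 == 0)
print(count)  # 5 * 4 * 3 * 1 = P(5,3) = 60
```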
{ "language": "en", "url": "https://math.stackexchange.com/questions/4207110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inequality for $p>1$ with nonnegative real numbers Let $1<p<\infty$ and $x=(x_1,x_2,\ldots,x_N)\in\mathbb{R}^N$, $N\geq 2$. Then does the following inequality hold: $$ (|x_1|^{2(p-1)}+\ldots+|x_N|^{2(p-1)})^\frac{1}{2}\leq(|x_1|^p+\ldots+|x_N|^p)^\frac{p-1}{p}. $$ I can only see it holds for $p=2$. Can anyone please help to prove or disprove it for $p\neq 2$. Thanks.
In $p$-norm notation, we have $\|x\|_{2(p-1)}^{p-1}\leq \|x\|_p^{p-1}$. This is equivalent to $\|x\|_{2(p-1)}\leq \|x\|_p$, which is true because $2(p-1)\geq p$ for $p\geq 2$. This follows from the monotonicity of the $p$-norms. We prove $\|x\|_p\leq \|x\|_r$ for $r<p$. W.l.o.g. suppose that $\|x\|_p=1$. Then $\|x\|_p^p=1$ and $|x_i|\le 1$ for all $i$. Since $p>r$, we obtain $|x_i|^p \le |x_i|^r$ for all $i$ and we get $$1=\|x\|_p^p = \sum_{i} |x_i|^p \le \sum_{i} |x_i|^r = \|x\|_r^r.$$ The proof of the monotonicity is not my own; you can find it in several places on the internet.
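Both the monotonicity and the specific claim can be spot-checked on random vectors (plain Python; `p_norm` is my helper name):

```python
import random

def p_norm(x, p):
    # ||x||_p = (sum |x_i|^p)^(1/p)
    return sum(abs(t) ** p for t in x) ** (1 / p)

random.seed(0)
for _ in range(500):
    x = [random.uniform(-5, 5) for _ in range(8)]
    p = random.uniform(2, 6)
    # monotonicity with r = p - 1 < p
    assert p_norm(x, p) <= p_norm(x, p - 1) + 1e-9
    # the claim itself, valid since 2(p-1) >= p for p >= 2
    assert p_norm(x, 2 * (p - 1)) <= p_norm(x, p) + 1e-9
print("all checks passed")
```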
{ "language": "en", "url": "https://math.stackexchange.com/questions/4207436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How did the posterior distribution get factorized in this manner? (Bayes rule) I am referring to this course on sampling https://www.youtube.com/watch?v=TNZk8lo4e-Q. At around minute 6, the lecturer shows on the slide the posterior probability factored as: $$p(\Theta |X,Y)=\frac{p(Y|X,\Theta)p(\Theta)}{Z}$$ where $Z$ is the normalizing constant. According to the product rule, the numerator should be $$p(\Theta )p(X|\Theta )p(Y|X,\Theta )$$ What is the reason for dropping $p(X|\Theta )$? Thanks a lot for your help!
You can't be too mathematical with machine learning, but this is one way to think about it equationally: $$\begin{split}p(\Theta|X,Y)&=\frac{p(\Theta,Y|X)}{p(Y|X)}\\ &=\frac{p(Y|\Theta,X)p(\Theta|X)}{\int p(Y|\Theta,X)p(\Theta|X)d\Theta}\\ &=\frac{p(Y|\Theta,X)p(\Theta)}{\int p(Y|\Theta,X)p(\Theta)d\Theta}\end{split}$$ where the last equation follows from the fact that the prior distribution of $\Theta$ does not depend on the input values $X$. $X$ are the givens (predictors).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4207610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which group actions are "set-transitive"? Given a faithful group action $G$ on a finite set $X$ of cardinality $n$ and an integer $1\le k\le n$, say that $G$ is $k$-set-transitive if we can map any unordered $k$-tuple in $X$ to any other via an element $g\in G$. (Contrast with being $k$-transitive, in which case this holds for ordered tuples.) Say that $G$ is set-transitive if it is $k$-set-transitive for all $k$. Obviously, $S_n$ and $A_n$ are set-transitive with their natural action on an $n$-element set (except $n=2$ in the alternating case), and in general $k$-transitivity implies both $k$-set-transitivity and $(n-k)$-set-transitivity. The Mathieu group $M_{12}$ is $k$-set-transitive for all $k\neq 6$. Some questions: * *Are there any set-transitive actions besides the two categories mentioned above? *Besides $A_4$ and $A_5$, are there any groups which are $k$-set-transitive for all $k\le 4$ which are not also $4$-transitive? *More generally, are there any group actions which are $k$-set-transitive for some $k\le n/2$ but not $k$-transitive? The condition seems much weaker, but I'm struggling to generate examples. Any pointers to discussion of this property in the literature would also be welcome.
The standard term for a group that is transitive on $k$-sets is $k$-homogeneous. A basic result is due to Livingstone and Wagner (1965) and Kantor (1972): Suppose that $G \le {\rm Sym}(X)$ with $|X| = n$, and assume that $G$ is $k$-homogeneous with $k \le n/2$ (that is no restriction, since $G$ is $k$-homogeneous if and only if it is $(n-k)$-homogeneous). Then $G$ is $(k-1)$-transitive and, with the following exceptions, $G$ is $k$-transitive: (i) $k=2$, ${\rm ASL}_1(q) \le G \le {\rm A \Sigma L}_1(q)$ with $q \equiv 3 \bmod 4$; (ii) $k=3$, ${\rm PSL}_2(q) \le G \le {\rm P \Sigma L}_2(q)$ with $q \equiv 3 \bmod 4$; (iii) $k=3$, $G={\rm AGL}_1(8)$, ${\rm A \Gamma L}_1(8)$, or ${\rm A \Gamma L}_1(32)$; (iv) $k=4$, $G={\rm PGL}_2(8)$, ${\rm P \Gamma L}_2(8)$, or ${\rm P \Gamma L}_2(32)$. This is stated (but not proved) as Theorem 9.4B in Dixon & Mortimer's book "Permutation Groups".
{ "language": "en", "url": "https://math.stackexchange.com/questions/4207740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Simplifying $\left( 2^{\aleph_\alpha}\right)^{\aleph_0}$ Let $X = \omega_\lambda^\omega$ be the product space where each $\omega_\lambda$ has the discrete topology. I'm able to bound the cardinality of open sets in $X$ below by $2^{\aleph_\lambda}$. The bound above should also be $2^{\aleph_\lambda}$. But it comes down to proving that $$\left(2^{\aleph_\lambda}\right)^{\aleph_0} = 2^{\aleph_\lambda}$$ Is this result immediate as the exponent would be $\aleph_\lambda \cdot \aleph_0 = \text{max}(\aleph_\lambda, \aleph_0) = \aleph_\lambda $? I'm not extremely familiar with cardinal exponentiation, so I'm not sure if I'm able to make such a simplification (bringing the exponent inside). If that's true, is there a good reference as to why I'm able to do that?
Yes. Keep in mind that $|A|^{|B|} = |A^B|$ by definition. Now in any Cartesian closed category, we have the following isomorphism, natural in $A$: $\begin{equation} \begin{split} Hom(A, (B^C)^D) &\simeq Hom(D \times A, B^C)\\ &\simeq Hom(C \times (D \times A), B) \\ &\simeq Hom((C \times D) \times A, B) \\ &\simeq Hom(A, B^{C \times D}) \end{split} \end{equation}$ so by the Yoneda Lemma, we have an isomorphism $(B^C)^D \simeq B^{C \times D}$. Thus, for any sets $B, C, D$, we have $(|B|^{|C|})^{|D|} = |B^C|^{|D|} = |(B^C)^D| = |B^{C \times D}| = |B|^{|C \times D|} = |B|^{|C| \cdot |D|}$. Thus, for any cardinals $\alpha, \beta, \gamma$, we have $(\alpha^\beta)^\gamma = \alpha ^ {\beta \cdot \gamma}$. So in particular, we have $(2^{\aleph_\alpha})^{\aleph_0} = 2^{\aleph_\alpha \cdot \aleph_0}$. And since $\alpha \geq 0$, we have $\aleph_\alpha \cdot \aleph_0 = \aleph_\alpha$. So the cardinal simplifies to $2^{\aleph_\alpha}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4207843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove $a^2 + b^2 + c^2 + ab + bc +ca \ge 6$ given $a+b+c = 3$ for $a,b,c$ non-negative real. I want to solve this problem using only the AM-GM inequality. Can someone give me the softest possible hint? Thanks. Useless fact: from the equality we can conclude $abc \le 1$. Attempt 1: Adding $(ab + bc + ca)$ to both sides of the inequality and using the equality leaves me to prove: $ab + bc + ca \le 3$. Final edit: I found an easy way to prove the above. $18 = 2(a+b+c)^2 = (a^2 + b^2) + (b^2 + c^2) + (c^2 + a^2) + 4ab + 4bc + 4ca \ge 6(ab + bc + ca) \implies ab + bc + ca \le 3$ (please let me know if there is a mistake in the above). Attempt 2: multiplying both sides of the inequality by $2$, we get: $(a+b)^2 + (b+c)^2 + (c+a)^2 \ge 12$. By substituting $x = a+b, y = b+c, z = c+a$ and using $x+y+z = 6$ we will need to show: $x^2 + y^2 + z^2 \ge 12$. This doesn't seem trivial either based on AM-GM. Edit: This becomes trivial by C-S. $(a+b)\cdot 1 + (b+c)\cdot 1 + (c+a)\cdot 1 = 6 \Rightarrow \sqrt{((a+b)^2 + (b+c)^2 + (c+a)^2)(1 + 1 + 1)} \ge 6 \implies (a+b)^2 + (b+c)^2 + (c+a)^2 \ge 12$ Attempt 3: $x = 1-t-u$, $y = 1+t-u$, $z = 1 + 2u$ $(1-u-t)^2 + (1-u+t)^2 + (1+2u)^2 + (1-u-t)(1-u+t) + (1+t-u)(1+2u) + (1-t-u)(1+2u)$ $ = 2(1-u)^2 + 2t^2 + (1 + 2u)^2 + (1-u)^2 - t^2 + 2(1+2u)(1-u)$ expanding we get: $ = 3(1 + u^2 -2u) + t^2 + 1 + 4u^2 + 4u + 2 + 2u - 4u^2 = 6 + 3u^2 + t^2\ge 6$. Yes, this works.. (not using AM-GM or any such thing).
For Attempt 1, you can use the rearrangement inequality: $ab+bc+ca\le a^2+b^2+c^2$ to get: $$ab+bc+ca\le \frac{(a+b+c)^2}{3}=3$$ For Attempt 2, you can use $a+b=3-c$: $$(3-a)^2+(3-b)^2+(3-c)^2\ge 12 \Rightarrow \\ 27+a^2+b^2+c^2-6(a+b+c)\ge 12\Rightarrow \\ a^2+b^2+c^2\ge 3,$$ which is true by QM-AM.
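Both the inequality and the equality case at $a=b=c=1$ are easy to spot-check numerically (plain Python sketch; `lhs` is my name):

```python
import random

def lhs(a, b, c):
    return a*a + b*b + c*c + a*b + b*c + c*a

# Random nonnegative triples rescaled so a + b + c = 3: the value is always >= 6,
# with equality at a = b = c = 1.
random.seed(0)
for _ in range(10000):
    u = [random.random() + 1e-12 for _ in range(3)]
    s = sum(u)
    a, b, c = (3 * t / s for t in u)
    assert lhs(a, b, c) >= 6 - 1e-9
assert abs(lhs(1, 1, 1) - 6) < 1e-12
print("inequality holds on all samples")
```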
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 6, "answer_id": 1 }
How to calculate lower bound on probability? Let $Y$ be a random variable such that $E[Y] = \lambda$, $\lambda \in \mathbb{R}$ and $E[Y^2]<\infty$. Then the problem is to find a lower bound on the probability $$ P \left[|Y| > \frac{|\lambda|}{2} \right]. $$ Any leads would be appreciated.
In general, without further conditions there is none. For example, let $\lambda=1$ and $Y_n$ be a random variable given by $\mathbb{P}(Y_n=0)=1-1/n$ and $\mathbb{P}(Y_n=n)=1/n$ for each $n\in\mathbb{N}$. Then $\mathbb{E}Y_n = 1$, but $$\mathbb{P}(|Y_n|>1/2) = 1/n \overset{n\to\infty}{\longrightarrow} 0.$$
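The counterexample can be computed exactly rather than simulated (plain Python sketch):

```python
# Y_n takes value 0 with probability 1 - 1/n and value n with probability 1/n:
# E[Y_n] = 1 for every n, yet P(|Y_n| > 1/2) = 1/n is arbitrarily small.
for n in (2, 10, 100, 10**6):
    expectation = 0 * (1 - 1/n) + n * (1/n)
    tail_prob = 1/n                      # P(|Y_n| > 1/2) = P(Y_n = n)
    assert abs(expectation - 1) < 1e-12
    print(n, tail_prob)
```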
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the relationship between Rank of square matrix and its higher orders? Suppose $A$ is a square matrix, and $\operatorname{Rank}(A^{2})=3$, then can we establish any relationship to determine the rank of $A$? If it helps, $A$ has three distinct eigenvalues and it is a $4\times 4$ matrix. Also, I want to know if there is a theorem or general statement about figuring out ranks of matrices of higher orders than the original one.
$A^2$ has eigenvalue $0$ with an eigenspace of dimension $1$. This means $A$ also has eigenvalue $0$ (there are several ways to see this, for example $\det(A^2)=0=\det(A)\det(A)$). This means $\text{rank}(A)<4$. Now we need to verify that $\text{rank}(A^2)\le \text{rank}(A)$. It suffices to show $\ker(A)\subseteq \ker(A^2)$: let $v\in \ker(A) \Longleftrightarrow Av=0 \Longrightarrow A^2v=A(Av)=A0=0\Longrightarrow v\in\ker(A^2)$. From $\ker(A)\subseteq \ker(A^2)$ and $3=\text{rank}(A^2)\le\text{rank}(A)<4$ we conclude $\text{rank}(A)=3$. In general it is not that easy; consider a nilpotent endomorphism. We have $\text{rank}(A)\ge\text{rank}(A^2)\ge\text{rank}(A^3)\ge\cdots$ and you usually cannot say anything about the ranks without doing some calculations. Information that can help a lot is the algebraic and geometric multiplicity; it can help you to understand the structure and get information about the ranks. For your question in the comment: first of all, in the case that the rank of $A^n$ is full, which means $A^n$ is invertible, you get (take for example the determinant argument I stated above) that the rank of $A$ is also full. So in general only endomorphisms without full rank are interesting. And by the way, if $\lambda$ is an eigenvalue of $A^n$, then some $n$-th root of $\lambda$ is an eigenvalue of $A$ (over $\mathbb{C}$). So $A$ has eigenvalue $0$ whenever $A^n$ does (the other case is already clear from above). There are two cases which are important to understand: the geometric multiplicity of the eigenvalue $0$ of $A$ differs from its algebraic multiplicity if and only if $\text{rank}(A)\ne\text{rank}(A^n)$ for $n>1$. It helps to look over the theory of the Jordan canonical form.
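The inequality $\text{rank}(A^2)\le\text{rank}(A)$, and the fact that it can be strict for nilpotent matrices, can be checked directly with exact rational arithmetic (plain Python sketch; `rank` and `matmul` are my helpers):

```python
from fractions import Fraction

def rank(M):
    # Gaussian elimination over the rationals: exact, no pivot-tolerance issues.
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Nilpotent example: rank drops strictly under squaring.
N = [[0, 1], [0, 0]]
assert rank(N) == 1 and rank(matmul(N, N)) == 0

# A 4x4 example matching the question: rank(A^2) = 3 and rank(A) = 3.
A = [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 0]]
assert rank(matmul(A, A)) == 3 and rank(A) == 3
print("rank checks passed")
```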
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How summation is changed in Analytic number theory Consider this expression S(x, z) = $\sum_{n\leq x} \sum_{{d|n , d|P(z) } }\mu(d) $. I don't understand the logic behind the next step and get really confused about how the summation is changed. In the next step the author writes $\sum_{n\leq x} \sum_{{d|n , d|P(z) } }\mu(d) $ = $ \sum_{d| P(z), d \leq x }\mu(d) \sum_{n\leq x, d|n } 1 $. My thoughts: It is clear to me that one variable is $d$ and one is $n$. I understand the first summation from the left, $d| P(z), d\leq x$, and how $n\leq x$ is used, but I don't understand why the author wrote $d|n$ in the second summation. Can you please explain it in detail?
Sometimes, one way to approach switching orders of summation is to make indices completely independent by introducing functions that give conditions. For example, we might write $$ \sum_{n \leq x} \sum_{\substack{d \mid n \\ d \mid P(z)}} \mu(d) = \sum_{n \leq x} \sum_{d \leq x} 1_{[d \mid n]} 1_{[d \mid P(z)]} \mu(d). \tag{1}$$ Here, I use a form of Iverson Bracket notation of the form $$ 1_{[\text{condition}]} = \begin{cases} 1 & \text{condition is true,} \\ 0 & \text{else.} \end{cases}$$ In this form, the two regions of summation are independent, and so we can swap the order of summation in $(1)$ without problem. So $$ \sum_{n \leq x} \sum_{d \leq x} 1_{[d \mid n]} 1_{[d \mid P(z)]} \mu(d) = \sum_{d \leq x} \mu(d) 1_{[d \mid P(z)]} \sum_{n \leq x} 1_{[d \mid n]} \mu(d) = \sum_{\substack{d \leq x \\ d \mid P(z)}} \mu(d) \sum_{\substack{n \leq x \\ d \mid n}} 1, $$ as the author claimed.
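The swap can be verified on a concrete instance (plain Python sketch; I take $P(z)=2\cdot3\cdot5$, i.e. the primes below $z$ for $z$ slightly above $5$, and `mu`/`divisors` are my helpers):

```python
def mu(n):
    # Moebius function by trial factorization (fine for small n).
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            res = -res
        d += 1
    if n > 1:
        res = -res
    return res

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

x, P = 50, 2 * 3 * 5

# Left: sum over n <= x of sum over d | n with d | P.
left = sum(mu(d) for n in range(1, x + 1)
           for d in divisors(n) if P % d == 0)
# Right: sum over d | P, d <= x, of mu(d) times #{n <= x : d | n}.
right = sum(mu(d) * sum(1 for n in range(1, x + 1) if n % d == 0)
            for d in divisors(P) if d <= x)
assert left == right
print(left)   # counts n <= 50 coprime to 30
```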
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenvalues of the sum of a matrix such that all its principal submatrices are stable, and a diagonal matrix with non-positive entries I have a real square matrix $A$ (not necessarily symmetric) with all its principal submatrices (including $A$) having eigenvalues with a negative real part. On the other hand, I have a matrix $D$ which is diagonal and has only non-positive elements in its diagonal. I think that this implies that $A+D$ must also have eigenvalues with a negative real part. Does anyone know how to prove this, or if this is even true? I have read some things about this in this link: Principal minors of sum of a matrix and a diagonal matrix That implies that the odd (resp. even) minors of $A+D$ will be negative (resp. positive). However, this doesn't prove my assertion. Until now, I have only found the fact that $A+D$ cannot have a real positive eigenvalue, but could it have a complex-conjugate pair of eigenvalues with a positive real part? EDIT: It is worth noticing that the condition on all principal submatrices to be stable is a requirement for the claim to be (possibly) true. In fact, if we only consider that $A$ is stable but we allow that some of its submatrices have eigenvalues with a positive real part, then the statement clearly does not hold. For instance, take $$A=\begin{pmatrix} 1 & -3 \\ 1 & -2 \end{pmatrix}.$$ This matrix is clearly stable. Nevertheless, if we consider the diagonal matrix $$D=\begin{pmatrix} 0 & 0 \\ 0 & -3 \end{pmatrix},$$ then $$A+D=\begin{pmatrix} 1 & -3 \\ 1 & -5 \end{pmatrix},$$ which is not a stable matrix. Notice that, in this case, $A$ has one principal submatrix with a positive real eigenvalue.
Going by Nonnegative Matrices in the Mathematical Sciences by Abraham Berman & Robert J. Plemmons, if we additionally require $A \in \{ B = (b_{ij}) \in \mathbb{R}^{n \times n}: b_{ij}\ge 0 \text{ for } i \neq j \}$, then this is an equivalence. It might hold for a larger class of matrices, but I could not find any proof that holds for the broader case. Your description that "I have a real square matrix $A$ (not necessarily symmetric) with all its principal submatrices (including $A$) having eigenvalues with a negative real part" leads to $-A$ having all principal minors positive (condition A1 in the book). If you go over [Berman & Plemmons, Ch. 6, M-matrices] you will find that the description above means that $-A$ is a non-singular M-matrix, which is also equivalent to the existence of a positive diagonal matrix $P$ such that $ -AP-PA^\top \succ 0 $ (condition H24 in the book). From there, given any $D$ which is diagonal and has only non-positive elements, since both $-D$ and $P$ are non-negative diagonal matrices, we have $-PD \succeq0$, and therefore $$ (A+D)P+P(A+D)^\top = AP+PA^\top + 2DP \prec 2DP \preceq 0.$$ Therefore $A+D$ is Hurwitz stable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Decaying after 75 days how much left After $75$ days, a radioactive substance has decayed to $26.7\%$ of its original amount. After an additional $75$ days, what percent of its original amount will it have decayed to? I got $100-26.7=19.6$, meaning there is $73.3$ from original and multiplied $0.267$ to $73.3$ and got $19.57$. Is this right?
Here's an answer with a bit of formalism to complete that of @xav MX. Let $r$ be the daily rate of decay; that means every day, the remaining amount is $r$ times what it was the day before. So in $75$ days, it decays by $r^{75}$. By hypothesis, $r^{75}=0.267$. And after another $75$ days, it will have decayed by $r^{75+75}=\left(r^{75}\right)^2=0.267^2=0.0713$, which gives $7.13\%$ as the answer. The reason for using some formalism is to be able to generalize to a different period of time. For instance, if they asked you what the decay would be after an additional $100$ days instead of $75$, you'd compute $r^{75+100}=\left(r^{75}\right)^{\frac {175}{75}}=0.267^{\frac {175}{75}}=0.046$ and the answer would be $4.6\%$.
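The two computations in one short script (plain Python; variable names mine):

```python
# r^75 = 0.267, so after 150 days the remaining fraction is (r^75)^2,
# and after 175 days it is (r^75)^(175/75).
after_75 = 0.267
after_150 = after_75 ** 2            # ~0.0713, i.e. about 7.13%
after_175 = after_75 ** (175 / 75)   # ~0.046, i.e. about 4.6%
print(round(after_150, 4), round(after_175, 4))
```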
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why smooth section of vector bundle $F\to M$ is $\Gamma(TM) \times \Gamma(TM) \to \Gamma(NM)$ Let $M\subset \tilde{M}$ be an embedded Riemannian submanifold. We can construct the vector bundle $F\to M$ whose fiber at $p$ is the space of bilinear maps $T_pM\times T_pM \to N_pM$; this is a smooth vector bundle. The question is why a smooth section of this bundle corresponds to a map $\Gamma(TM) \times \Gamma(TM) \to \Gamma(NM)$, where $NM$ is the normal bundle of $M$ and $\Gamma$ denotes the smooth sections of the corresponding vector bundle. Let's make the statement more precise; we have the following characterization lemma: Let $B$ be a rough section of the vector bundle $F$, and define the map $B(X,Y)(p) = B_p(X_p,Y_p)\in N_pM$. Then $B$ is a smooth section of $F$ if and only if $B(X,Y)$ is a smooth section of $NM$ for all smooth vector fields $X,Y$.
Pick the local trivialization as follows: take a smooth adapted local frame on $U$; then we have $(E_1,E_2,...,E_k)$ as a basis for $T_pM$ and $(E'_{k+1},...,E'_n)$ as a basis for $N_pM$. Hence locally on $U$, the section $B:U\to \pi^{-1}(U)$ is smooth if and only if the composite $U\to \pi^{-1}(U) \to U\times \Bbb{R}^{k^2(n-k)}$ is smooth. The map whose smoothness we study, under the local trivialization associated with the local frame, is $$U\to \pi^{-1}(U) \to U\times \Bbb{R}^{k^2(n-k)}\\ p\mapsto (p,(B^l_{i,j}(p)))$$ Hence $B$ is a smooth section if and only if each $B^{l}_{i,j}(p)$ is smooth in $p$. If $B_p(X,Y)$ gives a smooth section of $NM$ for any smooth vector fields $X,Y$, then in particular for $(E_i,E_j)$; since $B_p(E_i,E_j) = B_{i,j}^l(p)E'_l$, each $B^{l}_{i,j}(p)$ is smooth in $p$. Conversely, if all $B^{l}_{i,j}(p)$ are smooth in $p$, then by the argument above $B_p(E_i,E_j)$ is a smooth section of $NM$ for each $i,j\le k$. Since any smooth vector fields $X,Y$ can be represented as smooth linear combinations of the $E_i$ (that is, $X = X^i(p)E_i$ with $X^i(p)$ smooth), the smoothness of $B_p(E_i,E_j)$ implies the smoothness of $B_p(X,Y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4208952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How does $\sum_{k=j}^{i+j}(i+j-k)$ = $\sum_{k=1}^{i}(k)$ I am working with summations and I came across these two equivalent summations $\sum_{k=j}^{i+j}(i+j-k)$ = $\sum_{k=1}^{i}(k)$ but there is no explanation as to how the latter summation was computed from the former.
Express the sum as: $$\sum_{k=j}^{i+j}(i+j-k)=\sum_{k=j}^{i+j}(i-(k-j))$$ Change the bound: $$\begin{align}k&=j,j+1,j+2,...,j+i-1,j+i \Rightarrow \\ k-j&=0,1,2,...,i \end{align}$$ Hence: $$\sum_{k=j}^{i+j}(i-(k-j))=\sum_{k-j=0}^{i}(i-(k-j))=\sum_{t=0}^{i}(i-t)$$ Change the bound: $$\begin{align}t&=0,1,...,i \Rightarrow \\ -t&=0,-1,...,-i \Rightarrow\\ i-t&=i,i-1,...,0\\ i-t&=0,1,2,...,i-1,i \end{align} $$ Hence: $$\sum_{t=0}^{i}(i-t)=\sum_{i-t=0}^{i}(i-t)=\sum_{k=0}^{i}k=\sum_{k=1}^{i}k.$$
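A brute-force check that the two sums agree for a range of $i,j$ (plain Python sketch):

```python
# sum_{k=j}^{i+j} (i+j-k) equals sum_{k=1}^{i} k = i(i+1)/2, independent of j.
for i in range(0, 12):
    for j in range(-5, 6):
        left = sum(i + j - k for k in range(j, i + j + 1))
        right = sum(k for k in range(1, i + 1))
        assert left == right == i * (i + 1) // 2
print("sums agree for all tested i, j")
```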
{ "language": "en", "url": "https://math.stackexchange.com/questions/4209367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Alternative to the Binomial PMF? Since the hypergeometric distribution is the without-replacement version of the binomial distribution, why can't we replace the combinations in the hypergeometric PMF with combinations with replacement and expect the same result as the standard binomial PMF? To elaborate, this is the regular combination formula, counting unordered without replacement: $$C(n,r)= \frac{n!}{r!(n−r)!}$$ And this is the combination-with-replacement formula, counting unordered with replacement: $$C^R(n,r)= \frac{(n+r−1)!}{r!(n−1)!}$$ and this is the hypergeometric distribution PMF: For $X\sim\operatorname{HGeom}(w,b,n)$ $$P(X=k)= \frac{C(w,k)C(b,n-k)}{C(w+b,n)}$$ My question is, since binomial and hypergeometric are both fixed numbers of trials, where binomial is sampling with replacement and hypergeometric is sampling without replacement, why doesn't altering the hypergeometric formula to be $$P(X=k)= \frac{C^R(w,k)C^R(b,n-k)}{C^R(w+b,n)}$$ give us the PMF of the binomial distribution? What am I missing?
One thing you're missing is that you need a logical reason why something should be true before wondering why it isn't. The binomial distribution is that of the number of successes in a non-random number of independent trials with the same probability of success on each trial. Suppose we want the probability of exactly two successes in six trials, with probability $0.3$ of success on each trial. \begin{align} ssffff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ sfsfff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ sffsff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ sfffsf & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ sffffs & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fssfff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fsfsff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fsffsf & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fsfffs & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ ffssff & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ ffsfsf & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ ffsffs & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fffssf & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ fffsfs & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \\ ffffss & \quad \longleftarrow \quad \text{The probability of this is } 0.3^2\times0.7^4 \end{align} There are $15$ of these: $\binom 6 2 = 15.$ The probability that one of these happens is $$ \overbrace{0.3^2\times0.7^4 + \cdots\cdots + 0.3^2\times0.7^4}^{15 \text{ terms}} = 15 \times 0.3^2\times0.7^4 $$ The reason $15$ appears is that it is the number of ways to choose $2$ out of $6$ without replacement. Those $15$ ways appear in the list above.
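The enumeration above can be reproduced by listing all $2^6$ outcome strings (plain Python sketch):

```python
from itertools import product
from math import comb

# Sum the probability of exactly two successes over all 2^6 outcome strings
# and compare with C(6,2) * 0.3^2 * 0.7^4.
p = 0.3
total = 0.0
for outcome in product("sf", repeat=6):
    if outcome.count("s") == 2:
        total += p**2 * (1 - p)**4   # every such sequence has the same probability
n_seqs = sum(1 for o in product("sf", repeat=6) if o.count("s") == 2)
assert n_seqs == comb(6, 2) == 15
assert abs(total - comb(6, 2) * p**2 * (1 - p)**4) < 1e-12
print(n_seqs, round(total, 6))
```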
{ "language": "en", "url": "https://math.stackexchange.com/questions/4209496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove or disprove $\int_0^1 \left(\sum_{n=0}^{\infty} a_nx^n\right)dx=\sum_{n=0}^{\infty} \frac{a_n}{n+1}$ By the given condition, the series $\sum_{n=0}^{\infty} |a_n|$ is convergent. Then prove or disprove:$$\int_0^1 \left(\sum_{n=0}^{\infty} a_nx^n\right)dx=\sum_{n=0}^{\infty} \frac{a_n}{n+1}.$$ Now note that, since $\sum_{n=0}^{\infty} |a_n|$ is convergent, we have $$|a_n| < \frac{1}{n}\quad\forall~ n > N.$$ Then by the Weierstrass M-test the series $\sum_{n=0}^{\infty} a_nx^n$ is convergent. But I am not able to prove the aforesaid argument.
Since $\sum_{n=0}^{\infty}|a_n|$ converges and $|a_nx^n|\le|a_n|$ for $x\in[0,1]$, the Weierstrass M-test shows that the series $\sum_{n=0}^{\infty} a_nx^n$ converges uniformly on $[0,1]$. Then, by the integration theorem for uniformly convergent function series, you have that $\int_0^1 \left(\sum_{n=0}^{\infty} a_nx^n\right)\,dx=\sum_{n=0}^{\infty} \left(\int_0^1 a_nx^n\,dx\right)$. Now, solving each integral, $\int_0^1 a_nx^n\,dx=\frac{a_n}{n+1}$, so you get $\int_0^1 \left(\sum_{n=0}^{\infty} a_nx^n\right)\,dx=\sum_{n=0}^{\infty}\frac{a_n}{n+1}$.
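A concrete instance (my choice of coefficients, $a_n = 2^{-n}$, so $\sum|a_n|$ converges): here $\int_0^1\sum_n a_nx^n\,dx = \int_0^1 \frac{2}{2-x}\,dx = 2\ln 2$, and the right-hand side converges to the same value:

```python
import math

# Partial sums of sum a_n/(n+1) with a_n = 1/2^n approach 2 ln 2,
# the exact value of the integral of sum a_n x^n over [0, 1].
N = 60
series_value = sum((0.5**n) / (n + 1) for n in range(N))
integral_value = 2 * math.log(2)
assert abs(series_value - integral_value) < 1e-12
print(series_value)
```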
{ "language": "en", "url": "https://math.stackexchange.com/questions/4211790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving the equation $3\cos x+\frac{1}{2}=\cos ^{2}x\left(1+2\cos x\left(1+4\sin ^2x\right)\right)$ Solve the equation:$$3\cos x+\frac{1}{2}=\cos ^{2}x\left(1+2\cos x\left(1+4\sin ^2x\right)\right)$$ To solve it, I tried writing the equation in terms of $\cos x$. (I denote $\cos x$ by $c$): $$3c+\frac12=c^2(1+2c(5-4c^2))$$ $$3c+\frac12=c^2+10c^3-8c^5$$ $$16c^5-20c^3-2c^2+6c+1=0 \qquad\text{Where $c\in[-1,1]$}$$ I tried $c=\pm1,\pm\frac12,0$, but neither of them satisfied the equation. So I don't know how to find $\cos x$.
$3\cos x+\frac{1}{2}=\cos ^{2}x\left(1+2\cos x\left(1+4\sin ^2x\right)\right)$ $ \displaystyle 6\cos x + 1 = (1 + \cos 2x) \left(1 + 2 \cos x (3 - 2 \cos2x)\right)$ $ \displaystyle 6\cos x + 1 = (1 + \cos 2x) (1 + 6 \cos x - 4 \cos x \cos 2x)$ $ \displaystyle 6\cos x + 1 = (1 + \cos 2x) (1 + 4 \cos x - 2 \cos 3x)$ $6\cos x + 1 = 5 \cos x + \cos 2x - \cos 5x + 1$ $\cos 2x = \cos x + \cos 5x$ $\cos 2x = 2 \cos 2x \cos 3x$ $\cos 2x = 0$ is a solution and if $\cos 2x \ne 0$, $\cos 3x = \frac{1}{2}$ is a solution.
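Both solution families can be plugged back into the original equation numerically (plain Python sketch; the candidate lists are mine):

```python
import math

def left_side(x):
    return 3 * math.cos(x) + 0.5

def right_side(x):
    c, s = math.cos(x), math.sin(x)
    return c**2 * (1 + 2 * c * (1 + 4 * s**2))

# cos 2x = 0  =>  x = pi/4 + k*pi/2;   cos 3x = 1/2  =>  x = +/-pi/9 + 2k*pi/3
candidates = [math.pi/4 + k * math.pi/2 for k in range(4)]
candidates += [sgn * math.pi/9 + 2 * k * math.pi/3 for sgn in (1, -1) for k in range(3)]
for x in candidates:
    assert abs(left_side(x) - right_side(x)) < 1e-9
print("all candidate solutions satisfy the original equation")
```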
{ "language": "en", "url": "https://math.stackexchange.com/questions/4211972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why is the statistic $t=r \sqrt{\frac{n-2}{1-r ^2}} \approx t (n-2)$? Let $r$ be the sample correlation for two random variables $X,Y$ based on a random sample $(X_1, Y_1), (X_2, Y_2), \dots (X_n, Y_n)$. According to Wikipedia, under the null hypothesis of zero correlation, the test statistic $t=r \sqrt{\frac{n-2}{1-r ^2}}$ approximately follows a t-distribution with $n-2$ degrees of freedom when the number of observations $n$ is large enough. Is there an easy way to prove this? So far, I tried to rewrite the formula for $r$ in such a way that I can apply the Central Limit Theorem, but I was unable to make something out of it.
In Wackerly et al. this is actually given as problem 11.55: Testing the null hypothesis $H_0:\beta_1 = 0$, the statistic $$T = \frac{\hat \beta_1 - 0}{\frac{S}{\sqrt{S_{xx}}}}$$ possesses a $t$ distribution with $n-2$ degrees of freedom if the null hypothesis is true. Show that the equation for $T$ can also be written as $$T = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$ So you want to start from the first equation and convert it to the second.
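The algebraic equivalence of the two forms can be checked numerically by computing both from the same simulated data (pure-stdlib sketch; variable names mine):

```python
import math
import random

# Check T = beta1_hat / (S / sqrt(Sxx)) equals r * sqrt(n-2) / sqrt(1 - r^2).
random.seed(0)
n = 30
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [0.5 * x + random.gauss(0, 1) for x in xs]

xbar, ybar = sum(xs) / n, sum(ys) / n
Sxx = sum((x - xbar)**2 for x in xs)
Syy = sum((y - ybar)**2 for y in ys)
Sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

beta1 = Sxy / Sxx
SSE = Syy - Sxy**2 / Sxx              # residual sum of squares
S = math.sqrt(SSE / (n - 2))
r = Sxy / math.sqrt(Sxx * Syy)

T1 = beta1 / (S / math.sqrt(Sxx))
T2 = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
assert abs(T1 - T2) < 1e-9
print(T1)
```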
{ "language": "en", "url": "https://math.stackexchange.com/questions/4212076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $A : L^2 \to L^2$, $(Af)(x) =\int_0^1 \frac{f(t)}{\sqrt{|x-t|}}dt$ is a bounded operator Prove that the operator $A:L^2([0,1])\rightarrow L^2([0,1]) $ where $$(Af)(x) =\int_0^1 \frac{f(t)}{\sqrt{|x-t|}}dt$$ is bounded. A hint is provided: consider using the fact that $\sqrt{|x-t|}=(|x-t|)^{1/4}(|x-t|)^{1/4}$. How can I show that this operator is bounded?
Following the hint, split the kernel as $|x-t|^{-1/2}=|x-t|^{-1/4}\,|x-t|^{-1/4}$. By the Cauchy–Schwarz inequality, $$|(Af)(x)|=\left|\int_0^1 \bigl(f(t)\,|x-t|^{-1/4}\bigr)\,|x-t|^{-1/4}\,dt\right| \leq\sqrt{ \int_0^1 |f(t)|^2\, |x-t|^{-1/2}\,dt}\;\sqrt{\int_0^1 |x-t|^{-1/2}\,dt}.$$ For every $x\in[0,1]$ we have $\int_0^1 |x-t|^{-1/2}\,dt=2\sqrt{x}+2\sqrt{1-x}\le 2\sqrt2=:M$. Hence, squaring, integrating in $x$, and using Tonelli's theorem, $$\|Af\|^2\leq M\int_0^1\!\!\int_0^1 |f(t)|^2\,|x-t|^{-1/2}\,dt\,dx = M\int_0^1 |f(t)|^2\left(\int_0^1 |x-t|^{-1/2}\,dx\right)dt \le M^2\,\|f\|^2,$$ so $\|Af\|\le M\,\|f\|$ and $A$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4212214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Analysis of a smooth flow on a smooth manifold Let $M$ be a smooth manifold. An open subset $D$ of $\mathbb{R}\times M$ is called a flow domain for $M$ if $\forall p\in M$, the set $$D^{(p)}=\{t\in\mathbb{R}:(t,p)\in D\}$$ is an open interval containing $0$. A smooth flow on $M$ is a smooth map $F$ from a flow domain $D$ to $M$ that satisfies: $\forall p\in M$, $$F(0,p)=p,$$ and for all $s\in D^{(p)}$ and $t\in D^{(F(s,p))}$ such that $s+t\in D^{(p)}$, $$F(t,F(s,p))=F(t+s,p).$$ Here comes the question. For any $p_0\in M$, why is $F(t,p)$ defined and smooth for all $(t,p)$ sufficiently close to $(0,p_0)$? This property is mentioned in Lee's ISM, and Lee said that it is because $D$ is open. But my knowledge of topology is unable to get me out of the question. Would you do me a favor? Thank you.
I think I have found the answer. It is all about a basis for a topology. According to Munkres's book, the product topology on $\mathbb{R}\times M$ is the topology having as basis the collection of all sets of the form $U\times V$, where $U$ and $V$ are open sets in $\mathbb{R}$ and $M$, respectively. Now $(0,p_0)$ is in the open set $D$, so there exist open sets $U\subseteq\mathbb{R}$ and $V\subseteq M$ such that $(0,p_0)\in U\times V\subseteq D$. In particular $U\subseteq D^{(p)}$ for every $p\in V$, so $F(t,p)$ is defined for all $(t,p)\in U\times V$, and it is smooth there because it is the restriction of the smooth map $F$ to the open set $U\times V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4212427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does a variable substitution not preserve boundary conditions? I am playing around with Lagrange multipliers and have run into a feature I don't totally understand. I'm thinking about situations where I might just use substitution instead, and I've found that the solution set obtained this way sometimes misses answers "on the boundary". My question is basically an extension of this Stack Exchange post. Very simply, consider optimizing $f(x,y)$ on the unit circle, i.e. $$ f(x,y) = -3x^2 - 10 y^2 \\ g(x,y) = x^2 + y^2 -1. $$ This problem has solutions at $(0,\pm1)$ and $(\pm1, 0)$. If I substitute $x^2 = 1-y^2$, then I get $\tilde{f}(y) = -7y^2 -3 $, which only yields the $y=0$ solutions (the other two solutions are found if substituting for $y$ instead). I understand, after reading the post above, that the issue is that $\tilde{f}$ is not constrained by $|y|\leq 1$, which of course holds on the unit circle. However, I am interested in why this is the case. Doesn't the equation $x^2 = 1 - y^2$ have solutions only on the unit circle (at least in $\mathbb{R}$), so why does substituting this condition not preserve "all the information"?
You should be optimizing $\tilde{f}(y)$ subject to the constraint $x^2 = 1-y^2$. Since $x$ does not appear in the objective function, the constraint reduces to the requirement that $1-y^2 = x^2 \ge 0$ for some real $x$, i.e. $y^2 \le 1$. So the substitution does contain "all the information"; your mistake was discarding the constraint $x^2 = 1-y^2$ entirely when optimizing $\tilde{f}$, instead of keeping the inequality $1-y^2 \ge 0$ that it implies. The solutions at $y=\pm 1$ then reappear as boundary points of the feasible interval $[-1,1]$.
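To make this concrete, here is a minimal sketch (the structure is mine, not from the post) of the substituted problem treated as a properly constrained one: optimize $\tilde f(y) = -7y^2 - 3$ over the feasible set $\{y : 1-y^2 \ge 0\} = [-1,1]$, collecting interior stationary points together with the boundary points where the constraint is active:

```python
# Optimize f~(y) = -7y^2 - 3 subject to 1 - y^2 >= 0, i.e. y in [-1, 1].
# Candidates: interior stationary points of f~ plus the points where the
# inequality constraint is active (the boundary y = +/-1).

def f_tilde(y):
    return -7 * y**2 - 3

# f~'(y) = -14y = 0  ->  y = 0 is the only interior stationary point.
candidates = [0.0, -1.0, 1.0]          # stationary point + active boundary

values = {y: f_tilde(y) for y in candidates}
y_max = max(values, key=values.get)    # maximum of f on the circle
y_min = min(values, key=values.get)    # minimum of f on the circle

assert values[y_max] == -3.0   # attained at y = 0, i.e. the points (+/-1, 0)
assert values[y_min] == -10.0  # attained at y = +/-1, i.e. the points (0, +/-1)
```

Dropping the boundary candidates is exactly the mistake described above: the unconstrained optimization of $\tilde f$ sees only the stationary point $y=0$.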
{ "language": "en", "url": "https://math.stackexchange.com/questions/4212571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Projectivity of analytic $\mathbb{P}^1$ bundles Let $f: X\to Y$ be a smooth analytic $\mathbb{P}^1$-bundle from a complex manifold $X$ to a complex projective manifold $Y$. Is $X$ a projective manifold?
Question: "Let $f:X→Y$ be a smooth analytic $P^1$-bundle from a complex manifold $X$ to a complex projective manifold $Y$. Is $X$ a projective manifold?" Answer: You can find an explicit proof of the following result in "Compact complex surfaces" (Peters/Van de Ven/...), page 190: if $Y$ is a smooth compact complex curve and $f$ is an analytic fiber bundle with fiber $\mathbb{P}^n$ and structure group $PGL(n+1,\mathbb{C})$, then there is an algebraic vector bundle $V$ of rank $n+1$ on $Y$ with $X \cong \mathbb{P}(V^*)$. Note: If $Z$ is a complex manifold, there is an equivalence of categories between the category of finite-rank holomorphic vector bundles on $Z$ and the category of finite-rank locally free $\mathcal{O}_Z$-modules. If $Z$ is projective, it follows that $Z$ is algebraic, and by Hartshorne, Ex. II.7.10, any $\mathbb{P}^n$-bundle (in the sense of Ex. 7.10) $\pi: W \rightarrow Z$ is of the form $W \cong \mathbb{P}(V^*)$ for some locally free sheaf $V$ on $Z$. And since $Z$ is projective, it follows that $\mathbb{P}(V^*)$ is projective. Hence the question has a positive answer in these cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4212703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }