Why does $n \choose k$ get you the $k^{th}$ (starting from 0) coefficient of $(a+b)^n?$ I'm aware of the connection between Pascal's triangle and the binomial theorem, and how each edge, left and right if we consider the triangle to be a graph, represents multiplying by $a$ or $b.$ But how do we relate this to combinatorics?
$$(a+b)^n=\underbrace{(a+b)(a+b)\cdots(a+b)}_{n\text{ times}}$$ To get a term of the form $a^kb^{n-k}$ we need to choose $a$ from $k$ of the $n$ factors and then $b$ from all of the remaining $n-k$ factors. Hence the coefficient of $a^kb^{n-k}$ in the expansion is $$\binom{n}{k}\binom{n-k}{n-k}=\binom{n}{k}$$ As an example consider $$ \begin{align} (a+b)^3 &=(a+b)(a+b)(a+b)\\ &=aaa+(aab+aba+baa)+(abb+bab+bba)+bbb\\ &=\binom{3}{3}a^3+\binom{3}{2}a^2b+\binom{3}{1}ab^2+\binom{3}{0}b^3 \end{align}$$
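A quick brute-force check of this counting argument (my addition, not part of the original answer; assumes a Python environment): expand $(a+b)^n$ literally as all $2^n$ words of one letter per factor and count those with exactly $k$ letters $a$.

```python
from itertools import product
from math import comb

n, k = 5, 2
# every word in the expansion picks one letter per factor;
# the coefficient of a^k b^(n-k) is the number of words with exactly k copies of 'a'
coeff = sum(1 for w in product("ab", repeat=n) if w.count("a") == k)
print(coeff, comb(n, k))  # both print 10
```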
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How to formalize "It doesn't matter how one places the brackets" Let $X$ be a set and $X\times X\to X: (a, b)\mapsto a\cdot b$ an associative operation on $X$. Now one can prove by induction that it doesn't matter how one places the brackets in a product $x_1\cdot x_2\cdot\text{ }\dots\text{ } \cdot x_n$, the product always evaluates to the same value. This result justifies the notation of a product $x_1\cdot x_2\cdot\text{ }\dots\text{ } \cdot x_n$ without any brackets. But how can one formalize it?
This is called the associative law, which says precisely: $$\forall x,y,z\; [(x*(y*z))=((x*y)*z)] $$ That it extends to arbitrary finite products can then be proven using induction. Some more details: you will prove that every expression is equivalent to the one with brackets on the left, e.g. to $(((x*y)*z)*w)$ (every expression uses each variable only once). Then you can define the complexity $C:\mathrm{EXP}\to\mathbb{N}$ of an expression recursively as: $C(x)=0$, and for composed expressions $C(e*f)=C(e)+C(f)+\mathrm{length}(f)-1$, where $e,f$ are expressions and $\mathrm{length}(f)$ is the number of variables in $f$. Examples: $C(x*y)=C(x)+C(y)+1-1=0$, $C((x*y)*z)=C(x*y)+C(z)+1-1=0$, but $C(x*(y*z))=C(x)+C(y*z)+2-1=1$. Now use induction on the complexity: calculate that any one application of associativity decreases the complexity.
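To make the complexity function concrete, here is a small Python sketch (my own illustration, not from the answer) that represents expressions as nested pairs and computes $C$:

```python
# expressions are nested 2-tuples; variables are strings, e.g. ("x", ("y", "z"))
def length(e):
    """Number of variable occurrences in the expression."""
    return 1 if isinstance(e, str) else length(e[0]) + length(e[1])

def C(e):
    """The complexity from the answer: C(x)=0, C(e*f)=C(e)+C(f)+length(f)-1."""
    if isinstance(e, str):
        return 0
    f, g = e
    return C(f) + C(g) + length(g) - 1

print(C(("x", "y")))                # 0
print(C((("x", "y"), "z")))         # 0   (fully left-bracketed)
print(C(("x", ("y", "z"))))         # 1
print(C(("x", ("y", ("z", "w")))))  # 3
```

A short computation shows that one left-rotation $e*(f*g)\mapsto(e*f)*g$ decreases $C$ by exactly $\mathrm{length}(g)\ge 1$, which is the decrease the induction needs.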
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
If $f\not\in L^{\infty}$ then $\lim_{p\rightarrow\infty}||f||_p=\infty $. I want to prove that if $f\not\in L^{\infty}$ then $\lim_{p\rightarrow\infty}||f||_p=\infty $. I'd like any hint that leads toward a solution. I know that for every real number $r$ the set $\{x:|f(x)|>r\}$ has positive measure, but I do not know what to do next.
If $f \not\in L^{\infty}$, then given any $M > 0$, we have $|f| > M$ on a set $E$ of positive measure. Therefore, $|f|^p > M^p$ on $E$, so $$\int |f|^p \geq \int_E |f|^p \geq M^p \mu(E)$$ If $\mu(E) = \infty$ then this shows that $\|f\|_p = \infty$ for all $p<\infty$, so the result certainly holds. Otherwise, $$\|f\|_p = \left(\int |f|^p\right)^{1/p} \geq M \mu(E)^{1/p}$$ As $0 < \mu(E) < \infty$, we have $\lim_{p \to \infty}\mu(E)^{1/p} = 1$, so $$\liminf_{p \to \infty}\|f\|_p \geq M$$ As this holds for arbitrarily large $M$, we conclude that $\lim_{p \to \infty}\|f\|_p = \infty$.
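To see the statement in action numerically (my own illustration, assuming mpmath is available), take $f(x)=\log(1/x)$ on $(0,1)$: it is unbounded, hence not in $L^\infty$, yet lies in every $L^p$, and $\|f\|_p=\Gamma(p+1)^{1/p}\to\infty$:

```python
from mpmath import mp, quad, log, gamma, mpf

mp.dps = 15
f = lambda x: log(1 / x)  # unbounded on (0,1), so f is not in L^infinity
for p in [1, 2, 5, 10, 20]:
    norm = quad(lambda x: f(x) ** p, [0, 1]) ** (mpf(1) / p)
    print(p, norm, gamma(p + 1) ** (mpf(1) / p))  # p-norms grow without bound
```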
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Second order derivative of the inverse matrix operator Let $f : Gl_{n}(\mathbb{R}) \to Gl_{n}(\mathbb{R})$ be defined by $f(X)=X^{-1}$. Compute $f''(X)(H,K)$. I calculated $f'(X).H=-X^{-1}HX^{-1}$, so I tried to use some composition of linear functions but did not find the appropriate functions. Can anyone help me in this matter?
Using the Taylor formula (as Rodrigo de Azevedo and user1952009 did), you can calculate $f''(X)(H,H)$. If you want the general formula $f''(X)(H,K)$, then you can calculate the derivative (with respect to $X$, considering $H$ as a fixed vector) of $f'(X)(H)=-X^{-1}HX^{-1}$. We obtain $f''(X)(H,K)=X^{-1}KX^{-1}HX^{-1}+X^{-1}HX^{-1}KX^{-1}$ (that is the "bilinearization" of the quadratic form $f''(X)(H,H)$).
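A finite-difference check of this formula (my own addition, assuming numpy): differentiate $X\mapsto f'(X)(H)$ numerically in the direction $K$ and compare with the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely invertible
H = rng.standard_normal((n, n))
K = rng.standard_normal((n, n))

inv = np.linalg.inv
fprime = lambda X, H: -inv(X) @ H @ inv(X)          # f'(X)(H)

eps = 1e-6
# central-difference directional derivative of X -> f'(X)(H) in the direction K
num = (fprime(X + eps * K, H) - fprime(X - eps * K, H)) / (2 * eps)
ana = inv(X) @ K @ inv(X) @ H @ inv(X) + inv(X) @ H @ inv(X) @ K @ inv(X)
print(np.max(np.abs(num - ana)))                    # tiny, e.g. < 1e-7
```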
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
If $a$, $b$, and $c$ are sides of a triangle, then $\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}<2$. Let $a,b,c$ be the lengths of the sides of a triangle. Prove that $$\sum_{\text{cyc}}\frac{a}{b+c}=\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b}<2\,.$$ Attempt. By clearing the denominators, the required inequality is equivalent to $$a^2(b+c)+b^2(c+a)+c^2(a+b)>a^3+b^3+c^3\,.$$ Since $b+c>a$, $c+a>b$, and $a+b>c$, the inequality above is true. Is there a better, non-bruteforce way?
\begin{align*} \frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b} & = \frac{2a}{2(b+c)}+\frac{2b}{2(c+a)}+\frac{2c}{2(a+b)} \\ &< \frac{2a}{a+b+c} + \frac{2b}{c+a+b} + \frac{2c}{a+b+c} \\ &= 2 \end{align*}
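A randomized sanity check of the bound (my addition, not part of the proof; plain Python): sample valid triangles and confirm the cyclic sum stays strictly below $2$.

```python
import random

for _ in range(10_000):
    a, b = random.uniform(0.01, 1), random.uniform(0.01, 1)
    # enforce the strict triangle inequality |a-b| < c < a+b
    c = random.uniform(abs(a - b) + 1e-9, a + b - 1e-9)
    s = a / (b + c) + b / (c + a) + c / (a + b)
    assert s < 2
```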
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Zero Vectors for Vector Spaces other than $R^n$ I understand what a zero vector is in $R^n$ but I need some help visualising other zero vectors: For example, the vector space of all functions $$\{ y : \mathbb R \rightarrow \mathbb R \ \ | \ y''+xy'+e^xy=0 \} $$ Is the zero vector just $z(x)=0$ ? Explicit examples of less obvious vector spaces would be greatly appreciated. Another example could be the set of all functions $$\{y:\mathbb R\rightarrow\mathbb R \ \ | \ y''= 0\}$$ In this example wouldn't the zero vector be any function $z(x)=ax+b$, but does this contradict the fact that the zero vector is unique? Or does that fact mean that the set above is NOT a vector space? Kind Regards, Hugh
A vector space is among other things a group with respect to sum. So the zero vector is exactly the zero of the sum, the unique element that can be added to any other element without changing it. In both your examples the zero is the constant $z(x) = 0$. In the second example, all the functions $z(x)=ax+b$ are in fact in the space (their second derivative is 0) but they are not the zero of the space, as for instance they cannot be doubled without changing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1903888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
To test convergence of infinite series $x^2(\log 2)^p + x^3(\log 3)^p + x^4(\log 4)^p +\dots$ To test convergence of the infinite series $x^2(\log 2)^p + x^3(\log 3)^p + x^4(\log 4)^p +\dots$ My approach to the above problem: let $u_n = x^{n+1}(\log(n+1))^p$. Then $u_{n+1}= x^{n+2}(\log(n+2))^p$. Now, $$n \log\frac{u_n}{u_{n+1}}=n\log\frac{1}{x} + np\left[\log\Big(\log\big(n(1+\tfrac{1}{n})\big)\Big)-\log\Big(\log\big(n(1+\tfrac{2}{n})\big)\Big)\right].$$ After that I have used the expansion for $\log (1+x)$ and got stuck. It seems if I take the limit of $n \log\frac{u_n}{u_{n+1}}$, i.e. $\lim_{n\to\infty}n \log\frac{u_n}{u_{n+1}} =\infty\;(>1)$ $\Rightarrow$ the series seems to be convergent, which is not the answer. Please clarify the mistake and guide me.
There are a lot of possibilities to check convergence. E.g. the ratio test here for $p\ge 0$ gives: $$\left|\frac{x^{n+1}(\ln(n+1))^p}{x^n(\ln n)^p}\right|=|x|\left(\frac{\ln(n+1)}{\ln n}\right)^p$$ Since $\left(\frac{\ln(n+1)}{\ln n}\right)^p\to 1$ as $n\to\infty$, the ratio tends to $|x|$: convergence for $|x|<1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Is it possible to have a $f(\vec{r})$ satisfy this relation? It is known that: $$ (\nabla^2+k^2)(-\frac{e^{ikr}}{4\pi r})=\delta(\vec{r}) $$ where $k>0$ and $\delta(\vec{r})$ is the three dimensional Dirac delta function. My question is, is it possible to find a function $f(\vec{r})$ that satisfies the following relation: $$ \left[\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}-(\frac{\partial}{\partial z}+i\alpha)^2-k^2\right] f(\vec{r})=\delta(\vec{r}) $$ Where $\alpha>0,k>0$.
Fourier transforming your equation, you can check that the function $f({\bf r})$ is given by $$f({\bf r}) = \int \frac{d^3 q}{(2\pi)^3} \frac{e^{i {\bf q} \cdot {\bf r}}}{(q_3+\alpha)^2 - q_1^2 -q_2^2 -k^2 }$$ with ${\bf q} = (q_1,q_2,q_3)$. You can still impose boundary conditions on $f({\bf r})$, which corresponds to shifting the position of the poles below or above the real axis (here, I will choose the outgoing-wave Green's function). The integrand has poles at $q_3 = -\alpha \pm \sqrt{k^2 +q_1^2+q_2^2}$. We perform the integral over $q_3$ by closing the contour on the upper complex plane and taking the pole with $+$ inside the contour. We obtain $$f({\bf r}) = -\frac{i e^{-i \alpha r_3}}{2} \int\frac{d^2 q}{(2\pi)^2} \frac{e^{i q_1 r_1 + i q_2 r_2 +i (k^2 + q_1^2 + q_2^2)^{1/2}r_3}}{\sqrt{k^2+q_1^2 + q_2^2}} .$$ For the remaining integral, we introduce $\rho = \sqrt{r_1^2+r_2^2}$ and $q_1 = q \cos \phi, q_2= q \sin\phi$, where $\phi$ is the angle measured with respect to $(r_1,r_2)$. We obtain $$\int\frac{d^2 q}{(2\pi)^2} \frac{e^{i q_1 r_1 + i q_2 r_2+i (k^2 + q_1^2 + q_2^2)^{1/2}r_3}}{\sqrt{k^2+q_1^2 + q_2^2}} = \int_0^\infty \!\frac{dq}{2\pi}\,\int_{0}^{2\pi}\!\frac{d\phi}{2\pi}\frac{q e^{iq \rho \cos\phi + i (k^2+q^2)^{1/2} r_3}}{\sqrt{k^2+q^2}}\\=\int_0^\infty \!\frac{dq}{2\pi} \frac{q e^{i (k^2+q^2)^{1/2} r_3} J_0(\rho q)}{\sqrt{k^2+q^2}} = -\frac{i e^{ik (r_3^2 -\rho^2)^{1/2}} }{2\pi \sqrt{r_3^2 -\rho^2}}. $$ So, we obtain the final result $$ f({\bf r}) =- \frac{ e^{-i \alpha r_3 +ik (r_3^2 -r_1^2-r_2^2)^{1/2}}}{4 \pi\sqrt{r_3^2 -r_1^2-r_2^2}}.$$ Edit: There is an alternative, much more conceptual way to arrive at the final result. It is obtained by using ideas of analytical continuation. In physics it is also known as Wick rotation. We are interested in a solution to $$\left[\partial_1^2+\partial_2^2-(\partial_3+i\alpha)^2-k^2\right] f({\bf r})=\delta({\bf r}).\tag{1}$$ For convenience, we first introduce a new function $\tilde f({\bf r}) = e^{i \alpha r_3} f({\bf r})$ (so $f = e^{-i\alpha r_3}\tilde f$), in terms of which (1) assumes the $\alpha$-free form $$ (\partial_1^2+ \partial_2^2 - \partial_3^2 -k^2) \tilde f({\bf r}) =\delta({\bf r}).$$ Let us instead study the family of equations $$(-e^{i\theta}\partial_1^2-e^{i\theta} \partial_2^2 - \partial_3^2 -k^2) g_\theta({\bf r}) = \delta({\bf r}).$$ For $\theta=0$ we have $g_0({\bf r}) =e^{i k |{\bf r}|}/(4\pi |{\bf r}|)$ according to the question. We obtain $\tilde f$ from $g_\theta$ by analytical continuation as $\theta \uparrow \pi$ or $\theta \downarrow -\pi$. We have the general relation $$ g_\theta({\bf r}) = \int\!\frac{d^3q}{(2\pi)^3} \frac{e^{i{\bf q}\cdot{\bf r}}}{q_3^2 + e^{i\theta} (q_1^2 + q_2^2) -k^2}. $$ Introducing the new integration variable $ \tilde{\bf q}= (e^{i\theta/2}q_1,e^{i\theta/2}q_2,q_3)$, we obtain the alternative expression (here we have to be careful that there is no contribution from the arc at infinity) $$ g_\theta({\bf r}) = e^{-i\theta}\int\!\frac{d^3 \tilde q}{(2\pi)^3} \frac{e^{i\tilde{\bf q}\cdot\tilde{\bf r}}}{\tilde {\bf q}^2 -k^2} =e^{-i\theta}g_0(\tilde{\bf r}) $$ with $$\tilde{\bf r}= (e^{-i\theta/2} r_1,e^{-i\theta/2} r_2 ,r_3).$$ Letting $\theta \to \pi$, we obtain the result $$ \tilde f({\bf r}) = g_\pi({\bf r}) = -g_0(-i r_1, -i r_2,r_3)= - \frac{e^{i k (r_3^2-r_1^2-r_2^2)^{1/2}}}{4\pi \sqrt{r_3^2-r_1^2-r_2^2}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Conflicting definitions of continuity (strict or non-strict inequality)? On page 97 of Kreyzig's functional analysis book he provides a proof that a linear operator $T$ is continuous if and only if it is bounded. When proving that $T$ is continuous implies that $T$ is bounded he says that if we assume $T$ is continuous at some $x_0$ then for any $\varepsilon >0$ there exists a $\delta > 0$ such that $\Vert Tx - Tx_0 \Vert \le \varepsilon$ for all $x$ satisfying $\Vert x - x_0 \Vert \le \delta$. But on the previous page where he gave the definition of a continuous operator $T$ he used strictly less inequalities for $\varepsilon$ and $\delta$. That is, $T$ is continuous at some $x_0$ means that for any $\varepsilon >0$ there exists a $\delta > 0$ such that $\Vert Tx - Tx_0 \Vert < \varepsilon$ for all $x$ satisfying $\Vert x - x_0 \Vert < \delta$. How can he extend the strict inequality to a less than or equal to inequality?
If you know that you can show this with $\le$ and need $<$, just apply the $\le$ case to $\frac{\varepsilon}{2}$ in order to get the $<$ for $\varepsilon$ (and vice versa).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Sequence converging to definite integral Let's define \begin{equation*} I_0 := \log\frac{6}{5} \end{equation*} and for $k = 1, 2, \ldots, n$ \begin{equation*} I_k := \frac{1}{k} - 5 I_{k-1}. \end{equation*} How is the value $I_n$ linked with the value of $$\int_0^1\frac{x^n}{x+5} \mathrm{d}x \ ?$$
If we set $$ I_n = \int_{0}^{1}\frac{x^n}{x+5}\,dx \tag{1}$$ we clearly have $I_0=\log\frac{6}{5}$ and $$ I_n+ 5I_{n-1} = \int_{0}^{1}\frac{x^n+5 x^{n-1}}{x+5}\,dx = \int_{0}^{1}x^{n-1}\,dx = \frac{1}{n}.\tag{2} $$
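A numerical cross-check (my addition; assumes mpmath, and note that the forward recurrence multiplies rounding errors by $5$ at every step, hence the generous working precision):

```python
from mpmath import mp, quad, log, mpf

mp.dps = 25
I = log(mpf(6) / 5)                        # I_0 = log(6/5)
for k in range(1, 8):
    I = 1 / mpf(k) - 5 * I                 # the recurrence I_k = 1/k - 5 I_{k-1}
    direct = quad(lambda x: x**k / (x + 5), [0, 1])
    print(k, I, direct)                    # the two columns agree
```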
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Why can't an improper transfer function be realized? A major result in control system theory is that a transfer function, $$G\left( s \right) = \frac{{Y\left( s \right)}}{{U\left( s \right)}}$$ has a state space realization if and only if the degree of $Y(s)$ is less than or equal to the degree of $U(s)$. I cannot find a proof of this fact in most major (undergraduate and introductory graduate) textbooks. If someone knows the proof could they sketch it out for me or point me to references where the proof exists? There is a related question here but it still does not answer the "why" of state-space realizations being non-existent for improper transfer functions.
To realize an improper transfer function, derivatives of the input would be needed. The answer above by Rodrigo de Azevedo helps make clear why. The problem is that it is not possible to realize a perfect differentiator. A number of arguments help in understanding why. The modulus of the frequency response of a differentiator increases without bound with frequency, but it is not possible to construct an apparatus whose gain becomes arbitrarily large at high frequencies. On the contrary, any known device has a cutoff frequency after which its response falls. Or, suppose you feed a discontinuous signal into a perfect differentiator. It would have to compute the derivative of the signal before noticing that the derivative doesn't exist! So any "differentiator" will be at best an approximation.
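To illustrate the gain argument numerically (my own sketch, assuming numpy; the time constant $\tau$ is an arbitrary illustrative choice), compare an ideal differentiator $H(s)=s$ with the band-limited approximation $s/(\tau s+1)$:

```python
import numpy as np

w = np.logspace(-2, 4, 7)                      # frequencies in rad/s
ideal = np.abs(1j * w)                         # |H(jw)| = w: grows without bound
tau = 1e-3                                     # the pole any real device effectively has
approx = np.abs(1j * w / (1j * w * tau + 1))   # "dirty derivative": gain saturates near 1/tau
for row in zip(w, ideal, approx):
    print(*row)
```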
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Can the limit $\lim_{x\to0}\left(\frac{1}{x^5}\int_0^xe^{-t^2}\,dt-\frac{1}{x^4}+\frac{1}{3x^2}\right)$ be calculated? $$\displaystyle\lim_{x\to0}\left(\frac{1}{x^5}\int_0^xe^{-t^2}\,dt-\frac{1}{x^4}+\frac{1}{3x^2}\right)$$ I have this limit to be calculated. Since the first term takes the form $\frac 00$, I apply the L'Hospital rule. But after that all the terms are taking the form $\frac 10$. So, according to me the limit is $ ∞$. But in my book it is given 1/10. How should I solve it?
By bringing the fractions to the same denominator, start by writing the limit as $$\displaystyle\lim_{x\to0}\frac{3\int_0^xe^{-t^2}\,dt -3x+x^3 }{3x^5}$$ Now, since this is of the form $0/0$ by L'H and FTC you get $$\displaystyle\lim_{x\to0}\frac{3e^{-x^2}-3+x^2 }{15x^4}$$ From here it is easy.
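One can confirm the value $1/10$ numerically (my addition; mpmath at high precision, since the three terms cancel catastrophically in double precision):

```python
from mpmath import mp, mpf, quad, exp

mp.dps = 40
def g(x):
    I = quad(lambda t: exp(-t**2), [0, x])
    return I / x**5 - 1 / x**4 + 1 / (3 * x**2)

for x in [mpf('0.1'), mpf('0.01'), mpf('0.001')]:
    print(x, g(x))   # tends to 0.1 as x -> 0
```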
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Tips on identifying pigeon and pigeonhole I always have trouble trying to identify the exact pigeons and pigeonholes for questions with slightly more integers. For example, questions like this: Eleven integers are chosen from 1 to 20 inclusive. Use the pigeonhole principle to prove the selection includes integers a and b such that b = a + 1. To me, it's a little confusing because of the number of integers and the integers a and b. Is it safe for me to assume the pigeon = 2 because of the integers a and b, while the pigeonhole = 11 because of the chosen eleven integers? The above seems to be less confusing compared to another question that uses the generalized pigeonhole principle: A fruit basket contains 10 apples, 8 oranges and 9 bananas. If someone picks some fruits without looking, use the generalized pigeonhole principle to determine how many you must pick to be sure of getting at least 5 fruits of the same type. With so many potential values lingering in the question, is there any way to simplify and identify the pigeons and pigeonholes to use the generalized pigeonhole principle to prove it?
For the first problem you want to divide the integers from $1$ through $20$ into $10$ pairs of consecutive integers: $\{1,2\}$, $\{3,4\}$, and so on up through $\{19,20\}$. These $10$ pairs are your pigeonholes, and the $11$ numbers that you choose are your pigeons. Since you have $11$ pigeons and only $10$ pigeonholes, some pigeonhole must contain two pigeons. In other words, two of your $11$ numbers belong to the same odd-even pair of consecutive integers. The second problem is actually easier, because less ingenuity is needed in order to decide what the pigeons and pigeonholes are. In fact, I prefer not to think about pigeons and pigeonholes at all when dealing with this kind of problem. Ask yourself: what’s the most fruit I could take and not get at least $5$ of one type? Clearly I could take $4$ apples, $4$ oranges, and $4$ bananas, a total of $12$ pieces of fruit. The moment I take $13$ pieces, however, I must have more than $4$ of at least one of the types. I’m afraid that there is no simple general way to identify pigeons and pigeonholes; sometimes it takes a great deal of ingenuity to come up with something that works. In the first problem here, though, it’s a good guess that the $11$ chosen numbers will be the pigeons, and that you have to show that $11$ is enough to ensure that two are consecutive. (It is at least true that if having more of something in the problem makes the desired outcome more likely, that something is fairly likely to be your pigeons.) That means that you should have at most $10$ pigeonholes, and they should be chosen so that if you have two pigeons in the same pigeonhole — i.e., two of the chosen numbers in the same set — then those numbers are consecutive. That suggests dividing $\{1,2,\ldots,20\}$ into $10$ pairs of consecutive numbers, those pairs being the pigeonholes, and in this case it works.
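Both claims are small enough to verify exhaustively (my addition, plain Python): every 11-element subset of $\{1,\dots,20\}$ contains two consecutive integers, while the 10 odd numbers show that 10 choices are not enough.

```python
from itertools import combinations

# combinations() yields sorted tuples, so consecutive pairs are adjacent entries
ok = all(any(b == a + 1 for a, b in zip(sel, sel[1:]))
         for sel in combinations(range(1, 21), 11))
print(ok)  # True: every 11-subset contains consecutive integers

odds = tuple(range(1, 21, 2))  # a 10-subset with no consecutive pair
print(any(b == a + 1 for a, b in zip(odds, odds[1:])))  # False
```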
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I calculate $u(w)=\int_0^\infty \frac{1-\cos(wt)}{t}\,e^{-t}\,dt$? How do I calculate $$u(w)=\int_0^\infty \frac{1-\cos(wt)}{t}\,e^{-t}\,dt$$ I tried to do it using integration by parts, but I got lost. Is there any nice simple way to calculate it?
$\int \frac {(1-\cos\omega t)e^{-t}}{t}dt$ cannot be evaluated in elementary functions. You need to get tricky. Let $$F(s) = \int_0^{\infty} \frac {(1-\cos\omega t)e^{-st}}{t}dt;$$ if we can find $F(1)$ we are done. Differentiating under the integral sign, $$\frac {dF}{ds} = \int_0^{\infty} -(1-\cos\omega t)e^{-st}dt = \left[\frac 1s e^{-st} + \frac {-s\cos\omega t + \omega \sin\omega t}{s^2+\omega^2}e^{-st}\right]_0^\infty = -\frac 1s + \frac {s}{s^2+\omega^2}$$ Integrating from $1$ to $\infty$, $$F(\infty) - F(1) = \int_1^{\infty}\left(-\frac 1s + \frac {s}{s^2+\omega^2}\right)ds = \left[-\ln s + \frac 12 \ln(s^2 + \omega^2)\right]_1^{\infty}$$ I am going to leave it to you to prove to yourself that $\lim_\limits{s\to\infty}\left(-\ln s + \frac 12 \ln(s^2 + \omega^2)\right) = 0$, which gives $$F(\infty) - F(1) = -\frac 12 \ln(1 + \omega^2)$$ Going back to the definition of $F$, it should be clear that $F(\infty) = 0$. Therefore $$F(1) = \frac 12 \ln(1 + \omega^2)$$
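A quick numerical confirmation of $F(1)=\frac12\ln(1+\omega^2)$ (my addition, assuming mpmath):

```python
from mpmath import mp, quad, cos, exp, log, inf

mp.dps = 20
w = 2.5  # an arbitrary test frequency
numeric = quad(lambda t: (1 - cos(w * t)) * exp(-t) / t, [0, inf])
closed = 0.5 * log(1 + w**2)
print(numeric, closed)  # both approximately 0.9905
```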
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Unconventional Hardy-Ramanujan number $a^3 + b^3 = c^3 + c^3$ - can its existence / non-existence be proven? Can it be proven that a number exists such that $$\text{number} = a^3 + b^3 = c^3 + c^3,$$ where $a,b$ and $c$ are $3$ distinct positive integers? If it cannot be proven, can it be proven that such a number cannot exist?
If such a triple existed with $a<b$, then $a,c,b$ would be an arithmetic progression of cubes. But it is known that there cannot be three $n$-th powers in arithmetic progression if $n\geq 3$, see for example the paper of Darmon and Morel here. There is probably an elementary proof of this when $n=3$, possibly in the paper of Dénes that they reference.
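Consistent with that theorem, a brute-force search (my addition; the bound $N$ is arbitrary) finds no solution of $a^3+b^3=2c^3$ in distinct positive integers:

```python
N = 1000  # arbitrary search bound
cubes = {n**3: n for n in range(1, 2 * N)}
hits = [(a, cubes[2 * c**3 - a**3], c)
        for c in range(1, N + 1) for a in range(1, c)  # a < c < b forces distinctness
        if 2 * c**3 - a**3 in cubes]
print(hits)  # [] : no counterexample below the bound
```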
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Which matrix satisfies the following condition? $P$ is a real (symmetric) positive definite matrix. Let $P_i$ and $P_j$ represent the $i$'th and $j$'th columns of $P$, respectively. Further, let $P_{ki}$ represent the element situated at the $k$'th row of the column vector $P_i$. I want to find additional conditions on the matrix $P$ such that the following inequality holds for all $i,j,k$: $$ 0 \leq \frac{P_{ij} P_{kj}}{(c + P_{jj})} \leq 2 P_{ki}, $$ with $c > 0$ and $P_{ij}$ represents the $(i,j)$'th element of $P$. If $P$ is strictly ultrametric, then, would the above inequality be satisfied? $P$ is a strictly ultrametric matrix of size $n \times n$ if: * All the elements of $P$ are non-negative * $\forall (i,j, i \neq j) \in (1,\dots,n): P(i,i) > P(i,j)$ * $\forall (i,j,k) \in (1,\dots,n): P(i,j) \geq {\rm{min}}(P(i,k),P(k,j))$ Any logical conjectures would be appreciated if a direct answer is hard to come by.
Let $A$ be the matrix with entries $a_{ik} = \log_2 P_{ik}$. Then $$ a_{ij} + a_{kj} \leq 1 + a_{ik} + a_{jj} \tag{*} $$ is sufficient for $$ \frac{P_{ij} P_{kj}}{c + P_{jj}} < \frac{P_{ij} P_{kj}}{P_{jj}} \leq 2P_{ik}. $$ If $a_{ik} > -1, a_{jj} > 0$ and the matrix $A$ is diagonally dominant, $|a_{jj}| \geq \sum_{k\neq j} |a_{kj}|$, then for distinct $i,k\neq j$ $$ a_{jj} = |a_{jj}| \geq \sum_{k\neq j} |a_{kj}| \geq |a_{ij}| + |a_{kj}| \geq a_{ij} + a_{kj}. $$ There are also weaker conditions that guarantee $(*)$. Suppose the matrix $A$ is pseudo-ultrametric in the following sense: $$ \begin{gather} a_{ik} \geq \min(a_{ij}, a_{kj}) \tag{1}\\ a_{jj} + 1 \geq \max_s a_{sj} \geq \max(a_{ij}, a_{kj}).\tag{2} \end{gather} $$ Then summing (1) and (2) yields $$ 1 + a_{ik} + a_{jj} \geq \min(a_{ij}, a_{kj}) + \max(a_{ij}, a_{kj}) = a_{ij} + a_{kj}. $$ You can easily reformulate the conditions in terms of the original matrix $P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1904934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is A276175 integer-only? The terms of the sequence A276123, defined by $a_0=a_1=a_2=1$ and $$a_n=\dfrac{(a_{n-1}+1)(a_{n-2}+1)}{a_{n-3}}\;,$$ are all integers (it's easy to prove that for all $n\geq2$, $a_n=\frac{9-3(-1)^n}{2}a_{n-1}-a_{n-2}-1$). But is it also true for the sequence A276175 defined by $a_0=a_1=a_2=a_3=1$ and $$a_n=\dfrac{(a_{n-1}+1)(a_{n-2}+1)(a_{n-3}+1)}{a_{n-4}} \;\;?$$ Update : I crossposted to MO.
Yes, $(a_n)$ is a sequence of integers. To prove this we first need to study some auxiliary sequences that satisfy a polynomial recurrence relation (unlike $(a_n)$, which has a rational fraction as its recurrence). Consider the sequences $(b_n)$ of positive reals satisfying the recurrence relation $b_nb_{n+4} = b_{n+1}b_{n+2}b_{n+3} + 1$. It turns out we can express $b_{n+8}$ as a polynomial in $b_n, \ldots, b_{n+7}$: Since $b_{n+1}b_{n+5} \equiv b_{n+2}b_{n+6} \equiv b_{n+3}b_{n+7} \equiv 1 \pmod {b_{n+4}}$ and $b_{n+1}b_{n+2}b_{n+3} \equiv -1 \pmod {b_{n+4}}$, we have $b_{n+5}b_{n+6}b_{n+7} \equiv -1 \pmod {b_{n+4}}$, which suggests the existence of a formula for $b_{n+8}$. With this roadmap, we can write $(b_{n+1}b_{n+2}b_{n+3})(b_{n+5}b_{n+6}b_{n+7}+1) \\ = (b_{n+1}b_{n+5})(b_{n+2}b_{n+6})(b_{n+3}b_{n+7}) + (b_{n+1}b_{n+2}b_{n+3}) \\ = (b_{n+2}b_{n+3}b_{n+4}+1)(b_{n+3}b_{n+4}b_{n+5}+1)(b_{n+4}b_{n+5}b_{n+6}+1)+(b_nb_{n+4}-1) \\ = b_{n+4}\cdot F(b_{n+i})$ where $F$ is some big polynomial. And finally, $(b_{n+5}b_{n+6}b_{n+7}+1) = (b_{n+5}b_{n+6}b_{n+7}+1)(b_nb_{n+4} - b_{n+1}b_{n+2}b_{n+3}) \\ = b_{n+4}(b_nb_{n+5}b_{n+6}b_{n+7}+b_n - F(b_{n+i})) = b_{n+4} G(b_{n+i})$. And so, $b_{n+8} = G(b_{n+i})$. This means that if $b_0, \ldots, b_7 \in R$ for some subring $R$ of $\Bbb R$, then the whole sequence is in $R$. Now to link back to the original sequence. Given such a sequence $(b_n)$, we define a sequence $(a_n)$ by $a_n = b_nb_{n+1}b_{n+2}$. This sequence satisfies $a_na_{n+4} = (b_n b_{n+1}b_{n+2})(b_{n+4}b_{n+5}b_{n+6}) \\ = (b_n b_{n+4})(b_{n+1} b_{n+5})(b_{n+2} b_{n+6}) = (b_{n+1}b_{n+2}b_{n+3}+1)(b_{n+2}b_{n+3}b_{n+4}+1)(b_{n+3}b_{n+4}b_{n+5}+1) \\ = (a_{n+1}+1)(a_{n+2}+1)(a_{n+3}+1)$. Finally, taking $b_0, \ldots, b_7 = \frac 12, 4, \frac 12, \frac 12, 4, \frac 12, 4, 18$, we obtain a sequence $(b_n)$ with terms in $\Bbb Z[\frac 12]$, with the corresponding $(a_n)$ sequence $1,1,1,1,8,36, \ldots$ Since the recurrence relation is symmetric, it can go backwards as well as forward, hence the ring $R_n = \Bbb Z[b_n, \ldots, b_{n+7}]$ is independent of $n$. There is no hope of finding $8$ consecutive integer values in our sequence $b_n$. If we look at the sequence $(b_n)$ modulo $8$, from our first octuplet and by applying the polynomial transformation, we can get to $17225$ different octuplets mod $8$, and none of those correspond to any noninteger $a_n$. This computation proves that $a_n$ is an integer for all $n$ (be careful, one step can go from one octuplet to several octuplets, because precision can be lost sometimes). Note that using this definition, $\frac{a_na_{n+2}}{a_{n+1}(a_{n+1}+1)} = \frac{b_nb_{n+2}b_{n+4}}{b_{n+1}b_{n+2}b_{n+3}+1} = b_{n+2}$, and so to go in the other direction you have to define $(b_n)$ from $(a_n)$ with $b_n = \frac{a_{n-2}a_n}{a_{n-1}(a_{n-1}+1)}$. Then, once again the recurrence relation of $(b_n)$ follows from that of $(a_n)$. This shows that for any such rational sequence $(a_n)$, there is a corresponding rational sequence $(b_n)$, and so $(a_n)$ is in a finitely generated subring of $\Bbb Q$.
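The integrality (and the first few values) can be checked directly with exact rational arithmetic (my addition, plain Python; the terms grow doubly exponentially, so the range is kept modest):

```python
from fractions import Fraction

a = [Fraction(1)] * 4
for n in range(4, 24):
    a.append((a[-1] + 1) * (a[-2] + 1) * (a[-3] + 1) / a[-4])

print(all(t.denominator == 1 for t in a))  # True: every computed term is an integer
print([int(t) for t in a[:7]])             # [1, 1, 1, 1, 8, 36, 666]
```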
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 1, "answer_id": 0 }
Let $R$ be a commutative Noetherian ring (with unity), and let $I$ be an ideal of $R$ such that $R/I \cong R$. Then is $I=(0)$? Let $R$ be a commutative Noetherian ring (with unity), and let $I$ be an ideal of $R$ such that $R/I \cong R$. Then is it true that $I=(0)$ ? I know that a surjective ring endomorphism of a Noetherian ring is also injective, and since there is a natural surjection from $R$ onto $R/I$ we get a surjection from $R$ onto $R$, but the problem is I can not determine the map explicitly and I am not sure about the statement. Please help. Thanks in advance.
Assume $a$ is a nonzero proper ideal, and suppose $A$ and $A/a$ were isomorphic, say via an isomorphism $\varphi: A \to A/a$. Then $\varphi(a) := I_{1} \subset A/a$ is an ideal (a proper inclusion, as $a \subset A$ is a proper inclusion). By the correspondence principle, $I_{1} \subset A/a$ pulls back to an ideal $a \subset I'_{1} \subset A$, where these are all proper inclusions (as $\varphi(a)$ is nonzero in $A/a$, which uses $a \neq (0)$), and where $I'_{1} = \pi^{-1}(I_{1})$ for $\pi: A \to A/a$ the canonical projection map. The key step is that $\varphi$ induces an isomorphism $\overline{\varphi}: A/a \to A/I_{1}$. But then $\overline{\varphi}(I_{1}):= I_{2}$ pulls back to an ideal $I'_{2} \subset A$ (proper inclusion using the same rationale as above) via $\pi_{1}: A \to A/I_{1}$ the canonical projection, such that $a \subset I'_{1} \subset I'_{2}$ in $A$ are all proper inclusions. Now iterate this process to yield an ascending chain which does not stabilize. Contradiction!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Trouble understanding proof of the inequality - $(\frac{1}{a}+1)(\frac{1}{b}+1)(\frac{1}{c}+1) \ge 64 $, for $a,b,c > 0$ and $a+b+c = 1$ I was looking into this problem in a book discussing inequalities. However, I found the proof quite hard to understand. The problem is as follows: Let $a,b,c$ be positive numbers with $a+b+c=1$, prove that $$\left(\frac{1}{a}+1\right)\left(\frac{1}{b}+1\right)\left(\frac{1}{c}+1\right) \ge 64$$ and the proof provided was the following: Note that $$ abc \le (\frac{a+b+c}{3})^3 = \frac{1}{27} \tag{1}$$ by the AM-GM inequality. Then $$(\frac{1}{a}+1)(\frac{1}{b}+1)(\frac{1}{c}+1)=1+\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{ab}+\frac{1}{bc}+\frac{1}{ca}+\frac{1}{abc} \tag{2}$$ $$\ge 1+\frac{3}{\sqrt[3]{abc}}+\frac{3}{\sqrt[3]{(abc)^2}} +\frac {1}{abc} \tag{3}$$ $$=(1+\frac{1}{\sqrt[3]{abc}})^3 \ge 4^3 \tag{4}$$ Steps 1 and 2 are easy for me to understand, but if someone could help me with steps 3 and 4, I would be very thankful.
Using the AM-GM inequality for $$\frac{1}{a},\frac{1}{b},\frac{1}{c}$$ we obtain $$\frac{1}{3}\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)\geq \sqrt[3]{\frac{1}{abc}}$$ and doing the same for $$\frac{1}{ab},\frac{1}{bc},\frac{1}{ca}$$ we get $(3)$. For step $(4)$: the right-hand side of $(3)$ is exactly the expansion of $\left(1+\frac{1}{\sqrt[3]{abc}}\right)^3$, and by $(1)$ we have $\frac{1}{\sqrt[3]{abc}}\geq 3$, so the cube is at least $(1+3)^3=4^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Prove $\int_0^\infty \frac{dx}{\sqrt{(x^4+a^4)(x^4+b^4)}}=\frac{\pi}{2 \sqrt2 a b} ( \text{agm} (\frac{a+b}{2},\sqrt{\frac{a^2+b^2}{2}} ))^{-1}$ The following definite integral turns out to be expressible as the Arithmetic-Geometric Mean: $$I_4(a,b)=\int_0^\infty \frac{dx}{\sqrt{(x^4+a^4)(x^4+b^4)}}=\frac{\pi}{2 \sqrt2 a b} \left( \text{agm} \left(\frac{a+b}{2},\sqrt{\frac{a^2+b^2}{2}} \right)\right)^{-1}$$ $$I_4(1,1)=\frac{\pi}{2 \sqrt2}$$ I would like to remind you that: $$I_2(a,b)=\int_0^\infty \frac{dx}{\sqrt{(x^2+a^2)(x^2+b^2)}}=\frac{\pi}{2} \left( \text{agm} \left(a,b \right)\right)^{-1}$$ $$I_2(1,1)=\frac{\pi}{2}$$ Which is why I have two questions: How do we prove the identity for $I_4(a,b)$? Is it possible to also express other integrals of this type using agm? Such as $I_8(a,b)$? Because of the relation of the agm to elliptic integrals, we can also write: $$I_4(a,b)=\frac{1}{a b \sqrt{a^2+b^2}} K \left( \frac{a-b}{\sqrt{2(a^2+b^2)}} \right)$$ Here the parameter convention is $$K(k)=\int_0^1 \frac{dt}{\sqrt{(1-t^2)(1-k^2 t^2)}}$$ This seems to be the best way to prove the identity, but I don't know which substitution to use. Another way to express this integral would be through the hypergeometric function: $$I_4(a,b)=\frac{\pi}{2 \sqrt2 a^3} {_2F_1} \left(\frac{1}{2},\frac{3}{4};1;1-\frac{b^4}{a^4} \right)$$ And for every integral of this type we have: $$I_m(a,b)=\frac{I_m(1,1)}{a^{m-1}} {_2F_1} \left(\frac{1}{2},\frac{m-1}{m};1;1-\frac{b^m}{a^m} \right)$$ The outline for the proof can be found in this question for $m=3$ and is easily adapted to the general case. Here we assume $a \geq b$. As for the arithmetic geometric mean, I tried to get it into a simpler form, but the only transformation I was able to achieve is this: $$\text{agm} \left(\frac{a+b}{2},\sqrt{\frac{a^2+b^2}{2}} \right)=\text{agm} \left(\frac{a+b}{2}+i \frac{a-b}{2},\frac{a+b}{2}-i \frac{a-b}{2} \right)$$ Since we have complex conjugates, it's quite obvious that they will give real numbers after the first iteration, and it would give the left hand side.
Substitution is sufficient. Let $$\displaystyle z=x-\frac1x,\qquad w=x+\frac1x$$ then $$\displaystyle \frac{\mathrm dx}{\sqrt{x^8+p x^4+1}}=\frac12\left( \frac{\mathrm dz}{\sqrt{z^4+4z^2+2+p}}+\frac{\mathrm dw}{\sqrt{w^4-4w^2+2+p}}\right) $$ So $$f(p)=\displaystyle \int_0^\infty \frac{\mathrm dx}{\sqrt{x^8+p x^4+1}} =\frac12\int_{-\infty}^\infty \frac{\mathrm dx}{\sqrt{x^4+4x^2+2+p}} =\int_0^\infty \frac{\mathrm dx}{\sqrt{x^4+4x^2+2+p}} $$ let $$x^4+4x^2+2+p= (x^2+u^2)(x^2+v^2) $$ We know that $$\displaystyle \operatorname{agm}(u,v)= \frac{\pi}2\left( \int_0^\infty \frac{\mathrm dx}{\sqrt{(x^2+u^2)(x^2+v^2)}}\right)^{-1} =\frac{\pi}2 f(p)^{-1} $$ With the identity of agm, $$ \operatorname{agm}(u,v)=\operatorname{agm}\left(\frac{u+v}2,\sqrt{uv}\right) =\operatorname{agm}\left(\sqrt{\frac{\sqrt{p+2}}2+1},\sqrt[4]{p+2}\right) $$ So $$ \int_0^\infty \frac{\mathrm dx}{\sqrt{(x^4+a^4)(x^4+b^4)}} =\frac1{ab\sqrt{ab}}f\left(\frac{a^2}{b^2}+\frac{b^2}{a^2}\right) =\frac\pi{2ab\sqrt{ab}}\left(\operatorname{agm}\left( \frac{a+b}{\sqrt{2ab}},\sqrt{\frac{a^2+b^2}{ab}} \right)\right)^{-1} =\frac\pi{2\sqrt2 ab}\left(\operatorname{agm}\left( \frac{a+b}2,\sqrt{\frac{a^2+b^2}2} \right)\right)^{-1} $$
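The closed form is easy to test numerically, since mpmath ships both the AGM and adaptive quadrature (my addition; the values of $a,b$ are arbitrary):

```python
from mpmath import mp, agm, quad, sqrt, pi, inf, mpf

mp.dps = 20
a, b = mpf(2), mpf(3)  # arbitrary positive test values
lhs = quad(lambda x: 1 / sqrt((x**4 + a**4) * (x**4 + b**4)), [0, inf])
rhs = pi / (2 * sqrt(2) * a * b * agm((a + b) / 2, sqrt((a**2 + b**2) / 2)))
print(lhs, rhs)  # agree to working precision
```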
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Why does $\frac{1}{x} < 4$ have two answers? Solving $\frac{1}{x} < 4$ gives me $x > \frac{1}{4}$. The book however states the answer is: $x < 0$ or $x > \frac{1}{4}$. My questions are: Why does this inequality have two answers (preferably the intuition behind it)? When using Wolfram Alpha it gives me two answers, but when using $1 < 4x$ it only gives me one answer. Aren't the two forms equivalent?
Here is an important aspect which should always be considered. If someone asks me: Problem: Find the solution of \begin{align*} \frac{1}{x}<4 \end{align*} I would not answer the problem, but instead ask: What is the domain of $x$? Please note the problem is not fully specified if the domain of $x$, the range of validity, is not given. This is crucial to determine the set of solutions. Some examples: Find the solution of \begin{array}{lcl} \text{domain of }x\qquad&\qquad\text{inequality}\qquad&\qquad\text{solution}\\ \hline\\ \{x|x\in\mathbb{R}\setminus\{0\}\}\qquad&\qquad\frac{1}{x}<4\qquad&\qquad (-\infty,0)\cup(1/4,\infty)\\ \{x|x\in\mathbb{R}^{+}\}\qquad&\qquad\frac{1}{x}<4\qquad&\qquad (1/4,\infty)\\ \{\pi\}\qquad&\qquad\frac{1}{x}<4\qquad&\qquad \{\pi\}\\ \{x|x\in(0,1/4)\}\qquad&\qquad\frac{1}{x}<4\qquad&\qquad \emptyset \end{array} Note: If a domain is not explicitly stated in the problem section of a book we should expect a corresponding statement somewhere else at the beginning of the chapter.
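For completeness, a CAS reproduces the book's answer on the maximal real domain (my addition; assumes sympy is installed):

```python
from sympy import symbols
from sympy.solvers.inequalities import solve_univariate_inequality

x = symbols('x', real=True)
sol = solve_univariate_inequality(1/x < 4, x, relational=False)
print(sol)  # expect the union of (-oo, 0) and (1/4, oo)
```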
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "60", "answer_count": 13, "answer_id": 8 }
Problem 3.2 in Gilbarg-Trudinger: Elliptic PDEs of second order I am struggling with problem 3.2 in Gilbarg-Trudinger, which says that if $L=a^{ij}(x)D_{ij}+b^{i}(x)D_{i}+c(x)$ is an elliptic operator in a bounded domain $\Omega \subset \mathbb{R}^{n}$ with $c<0$, and $u\in C^{2}(\Omega)\cap C^{0}(\overline\Omega)$ satisfies $Lu=f$ in $\Omega$, then we have $\sup_{\Omega}|u| \le\sup_{\partial \Omega}|u|+\sup_{\Omega}|\frac{f}{c}|$ In the previous part of this chapter, we have actually shown that for the case $c\le0$, we have $\sup_{\Omega}|u| \le\sup_{\partial \Omega}|u|+C\sup_{\Omega}|\frac{f}{\lambda}|$, where $\lambda(x)$ is the minimum eigenvalue of $[a^{ij}(x)]$ and $C$ is a constant depending only on $diam(\Omega)$ and $\beta=\sup \frac{|\mathbf{b}|}{\lambda} (<\infty)$. I have no idea why the bound can be independent of $diam(\Omega)$ and $\beta$; can anybody give me some idea?
There is a version of the maximum principle where you use the zeroth order term instead of uniform ellipticity to get the estimate. I'll sketch the proof below, and I'll take $c<0$ to be a constant (don't have the book in front of me right now). The argument should work just as well if $c(x)$ is negative and bounded away from zero. If $u$ attains its max at a point $x\in \Omega$, then $$f(x) = a^{ij}u_{x_ix_j} + b^iu_{x_i} + cu \leq cu(x),$$ due to the ellipticity of $a^{ij}$. Since $c$ is negative we have $$\sup_\Omega u = u(x) \leq \frac{f(x)}{c} \leq \sup_{\Omega} \left|\frac{f}{c}\right|.$$ If the maximum is attained on the boundary, then $\sup_\Omega u \leq \sup_{\partial \Omega} |u|$, hence $$\sup_\Omega u \leq \sup_{\partial \Omega} |u| + \sup_{\Omega} \left|\frac{f}{c}\right|.$$ Actually you can put the max of the two terms on the right hand side to get a slightly better estimate, if you like. You can obtain a similar estimate for $-u$ since $L(-u) = -f$. Basically, when you use the zeroth order term to obtain estimates with the maximum principle, you do not need to play the perturbation trick (adding $\varepsilon e^{\alpha x_1}$ or something similar to $u$), so the size of the domain does not enter into the estimate, and you don't need uniform ellipticity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discontinuous at infinitely many points While doing a worksheet on real analysis I came across the following problem. $Q$. Let $f$ be a function defined on $[0,1]$ with the following property. For every $y \in R$, either there is no $x$ in $[0,1]$ for which $f(x)=y$ or there are exactly two values of $x$ in $[0,1]$ for which $f(x)=y$. (a) Prove that $f$ cannot be continuous on $[0,1]$. (b) Construct a function $f$ which has the above property. (c) Prove that any such function with this property has infinitely many discontinuities on $[0,1]$. I really have absolutely no idea how to solve the problem. Even constructing the function is proving pretty difficult. Any help would be appreciated asap.
To construct a function satisfying your condition, we first construct such a function $f$ on $\Bbb R$. Decompose $\Bbb R$ into the union of the intervals $[n,n+1)$, and construct a function $f$ whose restriction $f_n:[n,n+1)\to[n,n+1)$ is defined by $$\begin{align} f_n(x) &= x\quad\text{;}\quad n\le x<n+\frac 12 \\ &= x-\frac 12\quad\text{;}\quad n+\frac 12 \le x <n+1 \end{align}$$ It is not hard to verify that $f$ has the property you want (but the domain is $\Bbb R$, however). Now take any homeomorphism $h:(0,1)\to \Bbb R$ and compose it with $f$. The function $\bar f:[0,1]\to \Bbb R$ satisfying the condition is constructed by setting $$\bar f:=f\circ h \text{ on } (0,1)$$ and assigning to $\bar f(0)=\bar f(1)$ any number not in $\mathcal R(f)$ (so that this common value is also attained exactly twice). For the sake of completeness, you can let $h$ be the inverse of $x\mapsto \frac 1{1+e^{x}}$. To prove $(c)$ see Mr. Anubhav's answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
When is $\sin(x) = \cos(x)$? How do I solve the following equation? $$\cos(x) - \sin(x) = 0$$ I need to find the minimum and maximum of this function: $$f(x) = \frac{\tan(x)}{1+\tan(x)^2}$$ I differentiated it, and in order to find the stationary points I need to put the numerator equal to zero. But I can't find a way to solve this trigonometric equation.
Approach $1$ (Squaring): $$(\sin x-\cos x)^2=0$$ $$(\sin^2x+\cos^2x)-2\sin x\cos x=0$$ $$1-\sin2x=0$$ $$\sin2x=1$$ $$2x=\frac{\pi}2+2n\pi,n\in\Bbb{Z}$$ $$x=\frac{\pi}4+n\pi,n\in\Bbb{Z}$$ Approach $2$ (By definition of $\sin x$ and $\cos x$): $$\cos t=\frac{e^{it}+e^{-it}}2=\frac{e^{it}-e^{-it}}{2i}=\sin t$$ $$(1+i)e^{-it}=(1-i)e^{it}$$ $$e^{2it}=\frac{1+i}{1-i}=i$$ $$\cos2t+i\sin2t=i$$ $$\implies\cos 2t=0 \text{ and } \sin2t=1$$ $$t=\frac{\pi}4+n\pi,n\in\Bbb{Z}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 8, "answer_id": 5 }
find position of a point rotate about an arbitrary axis I have the axis $u=[1,1,0]$. Perpendicular to $u$ is a vector $v$ (i.e. $z \times u=v$, where $z=[0,0,1]$). I want to find the vector $p = u+v$ when $v$ is rotated about $u$ by an angle $\theta$. Basically, it's about finding the transformation matrix to find the point P w.r.t. $O\textbf{xyz}$ (given $u$ and $\theta$).
I will use $\hat{u}=u/|u|$. The vector $w=v\times\hat{u}$ is perpendicular to $v$ and $u$, and $v\times w$ is along $u$. $v'$, which we obtain by rotating $v$ around $u$ by an angle $\theta$, is therefore in the plane of $v$ and $w$. It has a component $|v| \cos\theta$ along $v$, and a component $|v|\sin\theta$ along $w$. We can write this as $$v'=v\cos\theta+\hat{u}\times v \sin\theta$$ For a more detailed derivation, you should look up Rodrigues' rotation formula https://en.wikipedia.org/wiki/Rodrigues%27_rotation_formula
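A numerical check of this formula on the asker's data (my addition, assuming numpy; the formula applies here because $v\perp u$):

```python
import numpy as np

u = np.array([1.0, 1.0, 0.0])
u_hat = u / np.linalg.norm(u)
v = np.cross([0.0, 0.0, 1.0], u)      # v = z x u = (-1, 1, 0), perpendicular to u

theta = 0.7                           # arbitrary test angle
v_rot = v * np.cos(theta) + np.cross(u_hat, v) * np.sin(theta)

print(np.linalg.norm(v), np.linalg.norm(v_rot))  # lengths agree
print(np.dot(v_rot, u))                          # ~0: still perpendicular to u
print(u + v_rot)                                 # the rotated point p = u + v'
```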
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem in finding limit function of $f_n(x)=1-(1-x^2)^{n}$. Its answer is $f(x) = \begin{cases} 0, & \text{when $x=0$ } \\ 1, & \text{when $0<\vert x \vert< \sqrt 2$} \end{cases}$. I'm not getting how the second line of $f(x)$ came about. Apologies if the post is too basic, but I've invested a great deal of time in understanding this, to no avail. Any hints are welcome!
Assuming you want the limit as $n\to\infty$: for the powers $(1-x^2)^n$ to converge to $0$ you need $|1-x^2|<1$, which holds exactly when $0<|x|<\sqrt 2$, i.e. on $(-\sqrt2,0)\cup(0,\sqrt2)$. So $$ \lim_{n\to\infty}f_n(x) = \begin{cases} 0, & \text{when $x=0$ } \\ 1, & \text{when } 0<|x|< \sqrt 2. \end{cases} $$ For $|x| \geq \sqrt 2$ the limit does not exist, since then $|1-x^2|\ge 1$ and $(1-x^2)^n$ fails to converge (for $|x|=\sqrt2$ it oscillates between $\pm1$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do you show that two sets are disjoint? Here's a problem I am trying to solve for recreation. $$ A\cap B\subset C' \text{ and } A\cup C\subset B. \hspace{2 mm}\text{ Show that $A$ and $C$ are disjoint.} $$ I can clearly see how A and C would be disjoint. Essentially, if my understanding is correct, A and C are non-overlapping sets within the bound of set B. But, I'm not exactly clear on how I would prove this by set logic. If you could provide some guidance, I would really appreciate it.
Let's try to capture your argument algebraically. I think your core idea is that if you take $B$ as the universe, then $A$ and $C$ are still subsets of the universe, and $A \cap C$ within $B$. The question, now, is how to translate that back to the actual universe you're working in. The main thing you want here is that taking the intersection of everything with $B$ is how you get from the whole universe to within $B$, and this leaves unchanged the sets already within $B$. That is, if $S \subseteq B$, then $S \cap B = S$. So that's what you want to use. Prove first that * *$A = A \cap B$ (or equivalently $A \subseteq B$) *$C = C \cap B$ (or equivalently $C \subseteq B$) and then your statement that they are disjoint "within $B$" becomes * *$(A \cap B) \cap (C \cap B) = \varnothing$ If you can prove that too, then you put all three bullets together and you win. The first two bullets are easy to prove: e.g. $A \subseteq A \cup C \subseteq B$. The last bullet is straightforward too: $$ (A \cap B) \cap (C \cap B) \subseteq C' \cap (C \cap B) = (C' \cap C) \cap B = \varnothing \cap B $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1905935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
If $a+b+c=0$ then the roots of $ax^2+bx+c=0$ are rational? If $a+b+c=0$ then the roots of $ax^2+bx+c=0$ are rational ? Is it a "If and only if " statement or "only if " statement ? For $a,b,c \in \mathbb Q$ , I think it is a "if and only if" statement . Am I correct ? I can prove that if $a+b+c=0$ and $a,b,c \in \mathbb Q$ , then roots are rational. But I can not prove that if roots are rational and $a,b,c \in \mathbb Q$ , then $a+b+c=0$. Any help ? or any conditions that $ax^2 +bx+c =0 $ should satisfy in order to have rational roots ? My clarifications : 1) What is a rational number ? It is a number which can be written in the form $\frac{p}{q} $ , where $q \neq 0$ and $p , q \in \mathbb Z$ 2) In this case : Let's take the quadratic equation $ax^2+bx+c=0$ where $a \neq 0$. We all know that the roots are given by , $$x=\frac{-b \pm \sqrt{b^2-4ac}}{2a}$$ Case 1 : Suppose that $a,b,c \in \mathbb Q$. Then $x$ is rational if and only if $b^2-4ac$ is a perfect square or zero $(0,1,4,9,16,...)$. Now we need to make $b^2-4ac$ a perfect square ! Now observe that if $a+b+c=0$ , then $b^2-4ac = (-a-c)^2-4ac=(a-c)^2$ That is if $a,b,c \in \mathbb Q$ and $a+b+c=0$ then the solutions are rational. Case 2 : Suppose that $a,b,c \in \mathbb Q$. If $c=0$ , then all the roots are rational. (This is easy if all $a,b,c$ are rationals) Case 3 : Suppose that $b,c \in \mathbb R- \mathbb Q$.(If $a$ is irrational we can divide by $a$ ) $a+b+c=0$ condition does not satisfy. Ex : $(1-\sqrt{2})x^2-2x+(1+\sqrt{2})=0$
If $c = -a-b$, then the discriminant is $$b^2-4ac = b^2 +4a(a+b) = b^2 +4ab +4a^2 = (b+2a)^2.$$ Since the discriminant is a perfect square, then the roots are always rational. Their values are $$\dfrac{-b \pm \sqrt{b^2-4ac}}{2a} =\dfrac{-b \pm (b+2a)}{2a} \in \left\{ 1, -\dfrac{a+b}{a} \right\}$$ Of course, now that you know this, you can say that $$(x-1)(ax+a+b) = ax^2 + bx -(a+b) = ax^2+ bx + c$$
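A quick exact-arithmetic check of both roots (my addition, plain Python):

```python
from fractions import Fraction
import random

for _ in range(1000):
    a = Fraction(random.randint(1, 9), random.randint(1, 9))   # a != 0
    b = Fraction(random.randint(-9, 9), random.randint(1, 9))
    c = -a - b                              # enforce a + b + c = 0
    for r in (Fraction(1), -(a + b) / a):   # the two rational roots
        assert a * r**2 + b * r + c == 0
```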
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When not to treat dy/dx as a fraction in single-variable calculus? While I do know that $\frac{dy}{dx}$ isn't a fraction and shouldn't be treated as such, in many situations, doing things like multiplying both sides by $dx$ and integrating, cancelling terms, doing things like $\frac{dy}{dx} = \frac{1}{\frac{dx}{dy}}$ works out just fine. So I wanted to know: Are there any particular cases (in single-variable calculus) we have to look out for, where treating $\frac{dy}{dx}$ as a fraction gives incorrect answers, in particular, at an introductory level? Note: Please provide specific instances and examples where treating $\frac{dy}{dx}$ as a fraction fails
There are places where it is "obvious" that we should not blindly apply the laws of arithmetic to $\frac{dy}{dx}$ as if it were a ratio of real numbers $dy$ and $dx$. An example from another question is $$ \frac{dy}{dx}+\frac{du}{dv} \overset ?= \frac{dy\,dv+dx\,du}{dx\, dv}, $$ where the left-hand side has a clear interpretation but the right-hand side does not. As for any false equation that you might actually be tempted to write by treating $\frac{dy}{dx}$ as a ratio, however, I have not seen any actual counterexamples in any of the several related questions and their answers (including the question already mentioned, this question, or this question). In practice, the problem I see with treating $\frac{dy}{dx}$ as if it were a ratio is not whether an equation is true or not, but how we know that it is true. For example, if you write $\frac{dy}{dx} \, \frac{dx}{dt} = \frac{dy}{dt}$ because it seems to you that the $dx$ terms cancel, without having first learned (or discovered) the chain rule and having recognized that it justifies this particular equation, then I would say you're just making an ill-educated guess about this equation rather than doing mathematics. (I'll grant that the equation is valid mathematics even if you don't remember that it's called the "chain rule". I think that particular detail is mainly important when teaching or when answering questions on calculus exams that are designed to test whether you were paying attention when that rule was introduced.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "77", "answer_count": 8, "answer_id": 4 }
Solving quadratic inequalities I have a quadratic inequality I am halfway through solving but cannot quite figure out the concept. $$3+\frac{4-x}x>0$$ I so far understand that without knowing whether x is positive, I would need to square it, but I do not know what comes next.
We have $$3+\frac{4-x}{x} >0$$ $$\frac{3x+4-x}{x} >0$$ $$\frac{2x+4}{x} >0$$ Divide both sides by $2$: $$\frac{x+2}{x} >0$$ Now use the wavy curve method: the sign of $\frac{x+2}{x}$ is $+$ on $(-\infty,-2)$, $-$ on $(-2,0)$, and $+$ on $(0,\infty)$. As we want the L.H.S. to be positive, we get $x \in (-\infty, -2) \cup (0, \infty)$ Edit 1 Let me explain how I decided the signs. When $x>0$, then $x+2>0$ and $x>0$, hence $\frac{x+2}{x}$ will be positive. Again when $x<-2$, then $x+2<0$ and $x<0$, hence $\frac{x+2}{x}$ will be positive. When $-2<x<0$, then $x+2>0$ but $x<0$, hence $\frac{x+2}{x}$ will be negative. That is how I decided the sign.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How can I argue that Lie derivative is not a connection? I am reading Lee's book of riemannian geometry and he asks to show that Lie derivative of two vector fields on a riemannian manifold is not a connection. How can I argue that this is true? He also asks to show that there is a vector field $V$ on $\mathbb{R}^2$ such that $V$ vanishes on the $x$-axis but $\mathcal{L}_{\partial_x}V$ does not. This was a confusion to me too. I can take for example: $V = x\partial_x.$ Then $$[\partial_x,x\partial_x]f = \partial_x(x\partial_xf) - x\partial_x(\partial_xf) = \partial_xf + x\partial^2_{xx}f - x\partial_{xx}^2f = \partial_xf.$$ Then $$[\partial_x,x\partial_x] = \partial_x.$$ And that is a possible solution. Is this right?
Your example for the second problem would be OK, except that I suppose your $V$ vanishes on the $y$-axis instead of the $x$-axis. Your example is also the sort of thing you should think about to solve your first problem. The axiom $\nabla_{fX} Y = f \nabla_X Y$ for a connection is actually equivalent to saying that, for fixed $p \in M$, the map $Y \mapsto (\nabla_XY)(p)$ only depends (linearly) on $X(p)$. This equivalence follows from the divisibility properties of smooth functions (if $f : \mathbb{R}^n \to \mathbb{R}$ is smooth and vanishes at the origin, then one can write $f$ as a finite sum $\sum f_i g_i$ where all functions are smooth and the $f_i$ vanish at the origin). The conclusion is that connections have the following property: $(\nabla_XY)(p)$ must be zero if $X(p)=0$. The Lie derivative does not have this property, as you can discover by thinking about your example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Invert Cubic Bezier Curve I am trying to 'invert' a path. Lines are fairly easy as they just need to be multiplied by -1, however, I am struggling to convert the bezier curve path into its inverse. The relevant code is: c0,0,1.628,34.086-32.059,34.086 c-33.688,0-32.059-34.086-32.059-34.086 where c denotes the start of a new curve. For clarification purposes, inverse means that if the curve is starting from right to left, then after inverse, it would start from left to right & vice versa. Here is a link.
Your clarification does not clarify (for me, at least). I still don't know what you mean by "invert". If you want to "flip" the curve (mirror it about a vertical line), then negate the x-coordinates of all the control points. If you want to reverse the direction of the curve (trace out the same curve, but in the opposite direction), then just reverse the order of the control points.
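In code, both operations are one-liners on the control points (my sketch, in Python with hypothetical helper names; it works on raw point tuples rather than SVG path syntax):

```python
# a cubic Bezier segment is determined by its four control points p0..p3
def reverse_cubic(p0, p1, p2, p3):
    """Same curve, traced in the opposite direction."""
    return p3, p2, p1, p0

def mirror_about_vertical_axis(points):
    """Flip the curve about the line x = 0 by negating x-coordinates."""
    return [(-x, y) for (x, y) in points]
```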
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A concrete example of an orthonormal basis of a Hilbert module over $K(H)$, the algebra of compact operators Suppose $H$ is a Hilbert space, $K(H)$ is the set of compact operators on $H$, $E$ is a Hilbert module over $K(H)$, and $(e_{\lambda})_{\lambda \in I}$ is an orthonormal basis for $E$. Can you mention an example for $H, K(H), E$ $\hspace{0.1cm}$ and $(e_{\lambda})_{\lambda \in I}$?
Why don't you take any finite-dimensional Hilbert space $H$ and $E=K(H)$. Then $E$ is just the algebra of matrices $M_n$ with $n=\dim H$. The standard matrix units in $M_n$ certainly form an orthonormal basis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given two equivalent norms, convergence in one implies convergence in the other. Suppose we have two equivalent norms, $\|\cdot \| \sim \left\vert\!\left\vert\!\left\vert \cdot\right\vert\!\right\vert\!\right\vert$. We assume that $x_n \rightarrow x $ in $(X,\|\cdot \|)$. Show that $x_n \rightarrow x $ in $(X,\left\vert\!\left\vert\!\left\vert \cdot\right\vert\!\right\vert\!\right\vert)$ We observe that since the norms are equivalent, there exist constants $\alpha, \beta$ such that $\alpha \|\cdot \| \leq \left\vert\!\left\vert\!\left\vert \cdot\right\vert\!\right\vert\!\right\vert \leq \beta \|\cdot \|$, with $0 \lt \alpha \leq \beta$. Since $x_n \rightarrow x$ in $(X,\|\cdot \|)$ we know that for all $\epsilon>0$, $\exists K(\epsilon)$ such that for $n>K(\epsilon)$ we have $\|x_n -x\| \leq \epsilon$. Since we know $\beta \gt 0$, then $ \beta \|x_n -x\| \leq \beta \epsilon$. Thus we see that $\left\vert\!\left\vert\!\left\vert x_n-x\right\vert\!\right\vert\!\right\vert\leq \beta \epsilon$. This is how far I've gotten; our goal is to find a $J(\epsilon)$ such that for all $\epsilon>0$, $\exists J(\epsilon)$ such that for $n>J(\epsilon)$ we have $\left\vert\!\left\vert\!\left\vert x_n-x \right\vert\!\right\vert\!\right\vert \leq \epsilon$. My candidate for $J(\epsilon)$ is $J(\epsilon) = \frac{K(\epsilon)}{\beta}$ from my above argument, but I'm not exactly sure how to validate this. If someone could help me on the proof, it would be very appreciated. I believe I'm pretty close, just missing something small.
I think you mean $J(\epsilon) = K(\epsilon/\beta)$. For all $n \ge J(\epsilon)=K(\epsilon/\beta)$, we have $\|x_n-x\| \le \epsilon/\beta$, so $$||| x_n-x||| \le \beta \|x_n-x\| \le \beta \epsilon/\beta=\epsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Standard deviation is diverging... I've made some mistake. I'm supposed to find the standard deviation of the Fourier transformed function $f(t) = e^{-|t|/a}$. Just a note: to make it easier on you guys I'll be leaving off the normalization constant. Let me know for some reason you think I should add it back in. The Fourier transform of this is $$\hat f(\omega) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} e^{-|t|/a - i\omega t}\ dt = \frac{1}{\sqrt{2\pi}}\left[\int_0^{\infty} e^{-t((1/a)-ik)}\ dt + \int_0^{\infty}e^{-t((1/a)+ik)}\ dt\right] = \cdots = \frac{1}{\sqrt{2\pi}(a^2\omega^2+1)}$$ I don't think that I've done that part wrong as I used this result from an integral table: $$\int_0^{\infty} e^{-ax}\ dx = \frac{1}{a},\qquad \text{for }\operatorname{Re}(a)>0$$ But then to find the standard deviation I do the following integral $$\sigma_{\omega} = \sqrt{\langle(\omega - \langle \omega\rangle )^2\rangle} = \sqrt{\langle \omega^2\rangle} = \int_{-\infty}^{\infty} \frac{\omega^2}{a^2\omega^2 + 1}\ d\omega$$ where $\langle(\omega - \langle \omega\rangle )^2\rangle = \langle \omega^2\rangle$ because $\langle \omega \rangle = 0$. But this last integral clearly diverges (it's even and approaches $1$ as $\omega \to \pm\infty$). So I must have made a mistake somewhere. Can anyone explain to me where I've erred? Thanks.
Comment: This is a Laplace distribution with median 0 and scale parameter $\alpha.$ The Wikipedia article on this distribution states that its variance is $2\alpha^2.$ It also gives the correct characteristic function ('CF'). I don't think it is difficult to get the variance directly from the density function. If your homework is to get the variance via the Fourier transform (known as a 'characteristic function' in probability), it may be helpful to know the answer. If your assignment is just to find the variance, then I think you may find a more direct approach easier. This is sometimes called the double exponential distribution. To see why, look at the density function. Also, consider that if $X_1, X_2$ are iid $Exp(rate = 1/\alpha),$ then $Y = X_1 - X_2$ is $Laplace(0, \alpha).$ Here is a brief simulation in R statistical software, based on a million realizations of $Y$ with $\alpha = 4,$ which gives numerical results accurate to about two or three places. It is followed by approximate values of $E(Y)$ and $SD(Y) = \sqrt{Var(Y)},$ and a histogram of the simulated distribution of $Laplace(0, 4)$ along with the actual density function. m = 10^6; alp = 4 x1 = rexp(m, 1/alp); x2 = rexp(m, 1/alp) y = x1 - x2 mean(y); sd(y); sqrt(2*alp^2) ## -0.001015087 # aprx E(Y) = 0 ## 5.659927 # aprx SD(Y) ## 5.656854 # exact SD(Y) hist(y, br=50, prob=T, col="skyblue") curve((.5/alp)*exp(-abs(x)/alp), lwd=2, col="blue", add=T)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1906893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many ways can a natural number n be expressed as a sum of one or more positive integers, taking order into account? Q: The number 4 can be expressed as a sum of one or more positive integers, taking order into account, in 8 ways: \begin{array}{l} 4&=1+3&=3+1&=2+2&=1+1+2\\ &=1+2+1&=2+1+1&=1+1+1+1. \end{array} In general, given $n \in \mathbb N$, how many ways can $n$ be so expressed? Query: I managed to solve it via a combinatorial proof, but the solution provided is as such: \begin{align} &\text{Idea: } n= x_1 + x_2 +\dots+x_k, \quad k \in \mathbb N,\; x_i \gt 0 \text{ for each } i\\ &\implies (x_1 - 1) + (x_2 - 1) +\dots+ (x_k -1) = n - k\\ &\implies x_1^* + x_2^* + \dots + x_k^* = n-k \qquad (*) \end{align} Since $H^n_r = \binom{r+n-1}{r}$, we have $H^k_{n-k} = \binom{n-k+k-1}{n-k} = \binom{n-1}{n-k}$. And the answer is as such: $$\sum_{k=1}^n H^k_{n-k} = \sum_{k=1}^n \binom{n-1}{n-k} = 2^{n-1}.$$ I have no idea why $H^n_r$ is applied and also why $\sum_{k=1}^n H^k_{n-k}$ is used to derive the desired result $2^{n-1}$. Some explanation on this will be deeply appreciated.
Since your title carefully distinguishes between natural numbers (which include $0$) and positive integers (which do not, at least not for the English sense of "positive"), I think a formula should be given that gives the proper value (namely $0$, since at least one summand was required) for $n=0$, and $2^{n-1}$ does not do that. A correct formula would be $\lfloor2^{n-1}\rfloor$, or you could just give the value as $$ \begin{cases}0&\text{if $n=0$,}\\2^{n-1}&\text{if $n>0$.}\end{cases} $$ The proof can be simply by induction, where we need $n=0$ and $n=1$ as starting cases to capture the exceptional behaviour at $n=0$ (both starting cases are obvious). If we assume for $n>0$ proved that there $2^{n-1}$ compositions of $n$, then from each of them we can get two different compositions of $n+1$: one by increasing the final term by one, and another by adding a new term $1$ and the end. It is clear that the first method, applied to all compositions of $n$ gives all compositions of $n+1$ that do not have $1$ as final term, and the other method gives all compositions of $n+1$ that do have $1$ as final term. Together that gives all compositions of $n+1$, and each of them in just one manner; their number is then $2^{n-1}\times2=2^n$, as was to be proved. Note that the inductive step cannot be applied to the case $n=0$, as there is no final term to modify there; this explains the exception at the start of the sequence.
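For readers who like to double-check such counts mechanically, here is a small Python brute-force sketch (purely illustrative, not part of the proof) that enumerates ordered sums and compares with $2^{n-1}$:

def compositions(n):
    # number of ways to write n as an ordered sum of positive integers;
    # the value 1 returned at n == 0 counts the empty tail of a sum
    if n == 0:
        return 1
    return sum(compositions(n - first) for first in range(1, n + 1))

for n in range(1, 11):
    assert compositions(n) == 2 ** (n - 1)
print("2^(n-1) confirmed for n = 1..10")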
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 3, "answer_id": 2 }
Why is the distance between two circles/spheres that don't intersect minimised at points that are in the line formed by their centers? From GRE 0568, via MathematicsGRE.com: * *I'm guessing the idea applies to circles also? *Is there a way to prove this besides the following non-elegant way? * *Form a line between centers $C_1$ and $C_2$ *Given a point on circle/sphere 1 $(x_1,y_1)$, minimise $$f(x_2,y_2) = (x_1-x_2)^2 + (y_1-y_2)^2$$ to get $(x_2^*,y_2^*)$ *Minimise $$g(x_1,y_1) = (x_1-x_2^*)^2 + (y_1-y_2^*)^2$$ to get $(x_1^*,y_1^*)$ *Show that $(x_2^*,y_2^*)$ and $(x_1^*,y_1^*)$ are on the line.
Basically the question in your title is answered by the fact that a straight line is the shortest path between two points. If $P,Q$ are the points on your two spheres (or circles, if you are in a plane), and $C_i$ and $r_i$ are their respective centres and radii, for $i=1,2$, then $C_1-P-Q-C_2$ is a path from $C_1$ to $C_2$ of length $r_1+d(P,Q)+r_2$. Here only the middle term, the distance $d(P,Q)$ from $P$ to $Q$, depends on the choice of these points; the other two terms are constant. The shortest possible path from $C_1$ to $C_2$ is a straight line, and given that $d(C_1,C_2)\geq r_1+r_2$, this path can be obtained by choosing $P$ and $Q$ on the segment $[C_1,C_2]$. Every other choice of $P,Q$ gives a longer path, hence a larger value of $d(P,Q)$.
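As a numerical sanity check of this geometric fact (the circles and the grid resolution below are hypothetical choices, not from the question), one can scan points on two non-intersecting circles and confirm that the minimum distance is $d(C_1,C_2)-r_1-r_2$, attained on the segment between the centers:

import math

def point(c, r, a):
    # point on the circle with center c and radius r at angle a
    return (c[0] + r * math.cos(a), c[1] + r * math.sin(a))

# C1 = (0,0), r1 = 1; C2 = (5,0), r2 = 2; expected minimum: 5 - 1 - 2 = 2
best = min(
    math.dist(point((0, 0), 1, i * math.tau / 360),
              point((5, 0), 2, j * math.tau / 360))
    for i in range(360) for j in range(360)
)
print(best)  # ~2.0, up to the grid resolution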
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Rationalizing denominator with cube roots Rationalize the denominator of $$\frac{6}{\sqrt[3]{4}+\sqrt[3]{16}+\sqrt[3]{64}}$$ and simplify. I already have an answer. I just want to compare answers with others. Maybe someone has different solutions? Also, I really disagree with the answer found at the back of the questionnaire.
$$\sqrt[3]{64}=4\;,\;\;\sqrt[3]{16}=4^{2/3}\;\implies$$ $$\sqrt[3]4+\sqrt[3]{16}+\sqrt[3]{64}=4^{1/3}+4^{2/3}+4=4^{1/3}\left(1+4^{1/3}+4^{2/3}\right)=$$ $$=4^{1/3}\frac{1-4}{1-4^{1/3}}=3\cdot4^{1/3}\frac1{4^{1/3}-1}\implies$$ $$\frac6{\sqrt[3]4+\sqrt[3]{16}+\sqrt[3]{64}}=\frac{2(4^{1/3}-1)}{4^{1/3}}=2^{1/3}(4^{1/3}-1)=2-\sqrt[3]2$$
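A one-line numerical check (illustrative only) confirms the simplification:

lhs = 6 / (4 ** (1 / 3) + 16 ** (1 / 3) + 64 ** (1 / 3))
rhs = 2 - 2 ** (1 / 3)
print(lhs, rhs)  # both ~0.740079, equal up to floating-point error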
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
In a triangle $ABC$, $AB = a-b$ and $BC = 2\sqrt{ab}$, then find $\angle B$? Is this question solvable? In $\Delta ABC$, $AB = a-b$ and $BC = 2\sqrt{ab}$, then $\angle B$ is (a) $\: 60^{\circ}$ (b) $\: 30^{\circ}$ (c) $\: 90^{\circ}$ (d) $\: 45^{\circ}$
Using @JanEerland's suggestion, $BC = k \sin A$, $AC = k \sin B$, and $AB = k \sin C$, for some $k \ne 0$. Substituting into the cosine law (with respect to $B$), we get $$\sin^2 B = \sin^2 A + \sin^2 C - 2 \sin A \sin C \cos B$$ (There might be others, but) one solution of it is $B = 90^\circ$, with $A$ then complementary to $C$. Note that, up to this point, the given $AB = a - b$ and $BC = 2\sqrt{ab}$ have not been used. They are probably meant for finding $AC = \dots = a + b$ by the Pythagorean theorem, and for checking the triangle inequality (which may or may not be necessary). This is not difficult to do with the help of $a - b \gt 0$. However, the notation used in the question is very misleading, because under the common naming convention for $\triangle ABC$, $BC = a$, $AC = b$, etc. Such confusion can be found in various comments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Evaluate the integral $\int_0^\pi \sin{(x \cos{t}})\cos{t}\; dt$ How to evaluate: $\int \sin{(x \cos{t}})\cos{t}\; dt$ or: $\int_0^\pi \sin{(x \cos{t}})\cos{t}\; dt$
$\int\sin(x\cos t)\cos t~dt=\int\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}\cos^{2n+2}t}{(2n+1)!}~dt$ For any non-negative integer $n$, $\int\cos^{2n+2}t~dt=\dfrac{(2n+2)!t}{4^{n+1}((n+1)!)^2}+\sum\limits_{k=0}^n\dfrac{(2n+2)!(k!)^2\sin t\cos^{2k+1}t}{4^{n-k+1}((n+1)!)^2(2k+1)!}+C$ This result can be obtained by successive integration by parts. $\therefore\int\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}\cos^{2n+2}t}{(2n+1)!}~dt$ $=\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}t}{2^{2n+1}n!(n+1)!}+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^nx^{2n+1}(k!)^2\sin t\cos^{2k+1}t}{2^{2n-2k+1}n!(n+1)!(2k+1)!}+C$ $\therefore\int_0^\pi\sin(x\cos t)\cos t~dt$ $=\left[\sum\limits_{n=0}^\infty\dfrac{(-1)^nx^{2n+1}t}{2^{2n+1}n!(n+1)!}+\sum\limits_{n=0}^\infty\sum\limits_{k=0}^n\dfrac{(-1)^nx^{2n+1}(k!)^2\sin t\cos^{2k+1}t}{2^{2n-2k+1}n!(n+1)!(2k+1)!}\right]_0^\pi$ $=\sum\limits_{n=0}^\infty\dfrac{(-1)^n\pi x^{2n+1}}{2^{2n+1}n!(n+1)!}$ $=\pi J_1(x)$ Specifically for $\int_0^\pi\sin(x\cos t)\cos t~dt$, $\int_0^\pi\sin(x\cos t)\cos t~dt$ $=\int_0^\frac{\pi}{2}\sin(x\cos t)\cos t~dt+\int_\frac{\pi}{2}^\pi\sin(x\cos t)\cos t~dt$ $=\int_0^\frac{\pi}{2}\sin(x\cos t)\cos t~dt+\int_\frac{\pi}{2}^0\sin(x\cos(\pi-t))\cos(\pi-t)~d(\pi-t)$ $=\int_0^\frac{\pi}{2}\sin(x\cos t)\cos t~dt+\int_0^\frac{\pi}{2}\sin(x\cos t)\cos t~dt$ $=2\int_0^\frac{\pi}{2}\sin(x\cos t)\cos t~dt$ $=2\int_\frac{\pi}{2}^0\sin\left(x\cos\left(\dfrac{\pi}{2}-t\right)\right)\cos\left(\dfrac{\pi}{2}-t\right)~d\left(\dfrac{\pi}{2}-t\right)$ $=2\int_0^\frac{\pi}{2}\sin(x\sin t)\sin t~dt$ $=\int_0^\frac{\pi}{2}\cos(x\sin t-t)~dt-\int_0^\frac{\pi}{2}\cos(x\sin t+t)~dt$ $=\int_0^\frac{\pi}{2}\cos(x\sin t-t)~dt-\int_\pi^\frac{\pi}{2}\cos(x\sin(\pi-t)+\pi-t)~d(\pi-t)$ $=\int_0^\frac{\pi}{2}\cos(x\sin t-t)~dt+\int_\frac{\pi}{2}^\pi\cos(x\sin t-t)~dt$ $=\int_0^\pi\cos(x\sin t-t)~dt$ $=\pi J_1(x)$
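If you want to verify the closed form numerically, here is a short Python sketch (it assumes scipy is available; the sample values of $x$ are arbitrary):

import numpy as np
from scipy.integrate import quad
from scipy.special import j1  # Bessel function J_1

for x in (0.5, 1.0, 3.0):
    val, _ = quad(lambda t: np.sin(x * np.cos(t)) * np.cos(t), 0, np.pi)
    print(x, val, np.pi * j1(x))  # the last two columns should agree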
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Minimize the number of points in a piecewise linear approximation I have $m$ data points $(x_i,y_i)$ in a given interval. I would like to find a piecewise linear function $f(x)$ that approximates these $m$ points with a minimum number of points $n$ so that my approximation error is below a tolerance $\epsilon$. My $m$ points: The function $f$ is a piecewise linear function defined by $n$ points $(x_a^{i},y_a^{i})$. For $n=4$, it would look like: Approximation error: $$\frac{1}{m} \sum_{1\le i\le m}(y_i-f(x_i))^2 \leq\epsilon$$ To solve this problem I need to find, for a given $n$, a way to obtain the optimal set of points $(x_a^{i},y_a^{i})$. I can try to minimize my approximation error with gradient descent, but the function is non-convex, so it might not converge to the global optimum. If I solve the previous step, I can simply run the algorithm for $n=1,2,3,\dots$ and stop when my approximation error drops below $\epsilon$. It sounds like a rather common problem that perhaps already has a solution. Do you know of one, or can you propose an approach to this problem?
Here is the way that looks obvious to me; maybe someone wiser will point out how it's inefficient, or fails on perverse input. Consider the $(a,b)$ plane in which each point represents a function $y=ax+b$. Each of your inputs, with its tolerances, defines a band in that plane. An intersection of such bands is a convex polygon. So, starting at the left, pile on the constraints until this polygon vanishes, and then back up by one. Your first line is represented by a point anywhere in this polygon; you may as well use its centroid, or the average of its corners. Then do it again, starting with the last point "covered" by the first line. Your $(x_a^2,y_a^2)$ is, of course, the intersection of the first two solution lines. It could be interesting to see if starting from the right gives a different result. (My aesthetic preference would be to use all maximal compatible subsequences, but it's not my question.) Edit: This is the main idea of the following paper and is discussed here
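A much cruder greedy variant of this left-to-right idea can be sketched in a few lines of Python. Note this is only an illustration under simplifying assumptions: it fits an ordinary least-squares line to each growing prefix and tests the maximum residual against a tolerance eps, rather than maintaining the exact feasible polygon in the $(a,b)$ plane described above, and the function name is made up:

import numpy as np

def greedy_segments(x, y, eps):
    # grow each segment while its own least-squares line keeps
    # every residual below eps; returns the breakpoint indices
    breaks, i = [0], 0
    while i < len(x) - 1:
        j = i + 1
        while j + 1 < len(x):
            a, b = np.polyfit(x[i:j + 2], y[i:j + 2], 1)
            if np.max(np.abs(a * x[i:j + 2] + b - y[i:j + 2])) > eps:
                break
            j += 1
        breaks.append(j)
        i = j
    return breaks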
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 1 }
How to solve this complex equation for the modulus of z? The question is as follows: All the roots of the equation $11z^{10}+10iz^9+10iz-11=0$ lie: $\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (i=\sqrt{-1})$ (a) inside $|z|=1$ (b) on $|z|=1$ (c) outside $|z|=1$ (d) can't say The answer is (b). I tried factorizing it, but to no avail. Also, it doesn't appear to me that taking modulus would help. How to approach this problem?
Substitution $z=e^{it}$ gives the trigonometrical equation $$11\sin5t+10\cos4t=0,\qquad(1)$$ or $$\cos 4t=-1.1\sin 5t.$$ Easy to see that $$RHS\left(\dfrac{2k+1}{10}\pi\right)=1.1(-1)^{k+1}$$ for $k=-3,-2,-1,0,1,2$, so LHS and RHS have at least five intersections for $t\in\left(-\dfrac\pi2,\dfrac\pi2\right)$. This means that $(1)$ has at least 5 real roots for $t\in\left(-\dfrac\pi2,\dfrac\pi2\right)$. On the other hand, it is known that $$\sin5t=16\sin^5t-20\sin^3t+5\sin t$$ and $$\cos4t=8\sin^4t-8\sin^2t+1,$$ so $(1)$ is equivalent to $$176y^5+80y^4-220y^3-80y^2+55y+10=0,\quad y=\sin t.$$ In this way, the 5th order polynomial has $5$ real roots for $y\in(-1,1)$. Therefore, equation $(1)$ has only real roots. Thus, the right answer is $$\boxed{\text{ on |z|=1}}.$$
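A quick numerical confirmation of the conclusion (illustrative, using numpy); if the argument above is right, all ten moduli should print as approximately 1:

import numpy as np

# coefficients of 11 z^10 + 10i z^9 + 10i z - 11, highest degree first
coeffs = [11, 10j, 0, 0, 0, 0, 0, 0, 0, 10j, -11]
print(np.abs(np.roots(coeffs)))  # all entries ~1.0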
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
For any $q\in [0,1]$ there exists $p>0$ such that $q(1+p)=1$ I was reading What Is Mathematics? and, to prove that if $q$ is a number between $0$ and $1$ then $q^n$ tends to $0$, they use that $q$ can be written as $q=\frac{1}{1+p}$ with $p>0$. It's equivalent to: for any $q\in (0,1)$ there exists $p>0$ such that $q(1+p)=1$. If I have a particular number, such as $8/10$, I can find such a $p$, but I'm not sure how to proceed to prove it in general.
How about just solving the equation? $$q(1+p)=1$$ $$1+p = \frac{1}{q}$$ $$ p=\frac{1}{q}-1$$ Of course this doesn't work for $q=0$ (division by zero) or $q=1$ (then $p =0 \not \gt 0$) , but in those cases, the statement is clearly false. Thus, for $q \in (0,1)$, we know that $\frac{1}{q}>1$, so $p=\frac{1}{q}-1>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What logic rule is used to show $(p \lor q) \land (p)=p~$? What logic rule is used to show: $(p \vee q) \wedge (p)=p~$? This is obvious, because if p is true the whole expression is true, and if p is false, the whole expression is false, but I'm not sure which logic rule is used to draw this conclusion.
One can show this with Boolean algebra as follows: $$ (p \vee q)\wedge p = (p \vee q)\wedge (p\vee F) = p \vee (q \wedge F) = p \vee F = p $$ If we have $\cdot$ for $\wedge$ and $+$ for $\vee$, then here's what this looks like: $$ (p+q)p = (p+q)(p+0) = p+q0 = p+0 = p $$ To prove this, I am using the "distributive laws" of and/or. That is, * *$p \wedge(q \vee r) = (p \wedge q) \vee (p \wedge r)$ *$p \vee(q \wedge r) = (p \vee q) \wedge (p \vee r)$
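Incidentally, the identity being derived is usually called the absorption law. If you want a mechanical confirmation, a short Python truth-table check (purely illustrative) exhausts all cases:

for p in (False, True):
    for q in (False, True):
        assert ((p or q) and p) == p  # absorption: (p OR q) AND p == p
print("holds for all truth assignments")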
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Rolling an $n$-sided die until repeat is found Problem: We are rolling an $n$-sided die. You roll until you reach a number which you have rolled previously. I need to calculate the probability $p_m$ that we have rolled $m$ times for such a repeat. My first thought was to try some inputs. I took $n=6$. I noticed that when $m=1$, we will always get a probability of $0$, since you are only rolling one time. Also, for $m>7$, we will also have $0$, since we will never reach that case. Now, I don't get how to find a general formula for when $1<m<8$
For a die with $n$ sides, if you haven't already seen a duplicate, the probability of getting a repeat on the $k$th roll is $$\frac{k-1}{n}$$ So, to get a repeat exactly on the $j$th roll you must first succeed at getting to the $j$th roll without any repeats, and then roll a repeat: $$\begin{align}P(j) &= \frac{j-1}{n}\cdot\prod_{k=1}^{j-1}\left(1-\frac{k-1}{n}\right)\\ &=\frac{j-1}{n}\cdot\prod_{k=1}^{j-1}\left(\frac{n+1-k}{n}\right)\\ &=\frac{j-1}{n^j}\cdot\prod_{k=1}^{j-1}\left(n+1-k\right)\\ &=\frac{j-1}{n^j}\cdot\left(n\times(n-1)\times\cdots\times(n+3-j)\times(n+2-j)\right)\\ &=\frac{j-1}{n^j}\cdot\frac{n!}{(n+1-j)!}\\ &=\frac{n!\cdot(j-1)}{(n+1-j)!\cdot n^j}\end{align}$$
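The formula is easy to validate by simulation; here is a hypothetical Monte Carlo sketch in Python (the trial count is arbitrary):

import random
from math import factorial

def pj(n, j):
    # P(first repeat occurs exactly on roll j), from the derivation above
    return factorial(n) * (j - 1) / (factorial(n + 1 - j) * n ** j)

n, trials = 6, 200_000
counts = {}
for _ in range(trials):
    seen, rolls = set(), 0
    while True:
        rolls += 1
        r = random.randrange(n)
        if r in seen:
            break
        seen.add(r)
    counts[rolls] = counts.get(rolls, 0) + 1

for j in range(2, n + 2):
    print(j, counts.get(j, 0) / trials, pj(n, j))  # empirical vs formula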
{ "language": "en", "url": "https://math.stackexchange.com/questions/1907979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Find the sixth side of hexagon. You are given a hexagon inscribed in a circle. If the lengths of $5$ sides taken in order are $3,4,6,8$ and $7$ units, find the length of $6^\text{th}$ side. Not got the slightest of idea how to proceed, so I can't show my attempts.
If the radius of the circle is $r$, the sixth side length is $s$, and you label the central angles $\theta_1,\ldots,\theta_6$, then using the Law of Cosines $$\begin{align} 2\pi&=\theta_1+\ldots+\theta_5+\theta_6\\ 2\pi&=\arccos\left(1-\frac{3^2}{2r^2}\right)+\ldots+\arccos\left(1-\frac{7^2}{2r^2}\right)+\arccos\left(1-\frac{s^2}{2r^2}\right)\\ \end{align}$$ This allows you to solve for $s$ in terms of $r$: $$ \begin{align} s&=\sqrt{2r^2\left(1-\cos\left(2\pi-\left(\arccos\left(1-\frac{3^2}{2r^2}\right)+\ldots+\arccos\left(1-\frac{7^2}{2r^2}\right)\right)\right)\right)}\\ s&=r\sqrt{2-2\cos\left(\arccos\left(1-\frac{3^2}{2r^2}\right)+\ldots+\arccos\left(1-\frac{7^2}{2r^2}\right)\right)}\\ \end{align}$$ If you follow the instances of $r$ in this expression, as $r$ grows larger, so does $s$. In other words, it is clear (after thinking through all the negations and inversions) that $s$ is an increasing function of $r$. In particular, it's not constant. So for each radius $r$ where this expression is defined and that sum of $\arccos$ terms does not exceed $2\pi$, there is a different value for the sixth side length $s$. In theory you can also solve for $r$ in terms of $s$, since $s$ is an increasing function of $r$. But I don't think I want to.
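To see concretely that the problem is underdetermined, here is a small Python sketch (the radii tried are arbitrary examples; each must be at least $8/2=4$ so the arccos arguments stay in $[-1,1]$, and the five given angles must sum to less than $2\pi$):

import math

def sixth_side(r, sides=(3, 4, 6, 8, 7)):
    # central angle for a chord of length s: theta = arccos(1 - s^2 / (2 r^2))
    used = sum(math.acos(1 - s * s / (2 * r * r)) for s in sides)
    theta6 = 2 * math.pi - used
    return r * math.sqrt(2 - 2 * math.cos(theta6))

for r in (5.0, 5.5, 6.0):
    print(r, sixth_side(r))  # three different radii give three different sixth sides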
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to integrate $\int_1^\infty e^{-\alpha x}J_0(\beta\sqrt{x^2-1})\mathrm{d}x \,$? This integral is from (6.616.2) in Gradshteyn and Ryzhik. $$ \int_1^\infty e^{-\alpha x}J_0(\beta\sqrt{x^2-1})\mathrm{d}x \,=\frac{1}{\sqrt{\alpha^2+\beta^2}}e^{-\sqrt{\alpha^2+\beta^2}} $$ I want to know how to do this integral and the restriction of $\alpha$ and $\beta$. The integral table doesn't mention it. I doubt the results must have some restriction on $\alpha$ and $\beta$, because: * *it seems that when $\mathrm{Re}\,\alpha <0$, the integral diverges. *also when $\alpha$ is purely imaginary, and for $\beta$ real, the result should be complex conjugate when $\alpha$ takes conjugate purely imaginary pairs, $\pm i$ for example, however the results given depends only on $\alpha^2$. *I also found by Mathematica numerical integration that when $\alpha$ is purely imaginary, the integration also seems troublesome. Edit: The answer by @Fabian give a general condition that when $\mathrm{Re}\alpha>\mathrm{Im}\beta$, the integral converges. However, what about $\mathrm{Re}\alpha=\mathrm{Im}\beta$. For the simplest case when $\alpha$ is purely imaginary and $\beta$ is real, Mathematica can give the sensible result when $|\alpha|>|\beta|$, while seems diverges when $|\alpha|<|\beta|$: \[Alpha]=-3I ;\[Beta]=2; NIntegrate[Exp[-\[Alpha] x]BesselJ[0,\[Beta] Sqrt[x^2-1]],{x,1,Infinity}] f[a_,b_]:=Exp[-Sqrt[a^2+b^2]]/Sqrt[a^2+b^2] f[\[Alpha],\[Beta]]//N -0.351845-0.276053 I -0.351845+0.276053 I
The two expressions are equal by analytical continuation whenever the left hand side exists. The only problem for the convergence of the integral is at $x\to \infty$. We have the asymptotic expansion $(|\arg z| < \pi)$ $$ J_0(z) \sim \sqrt{\frac{2}{\pi z}} \cos(z-\pi/4).$$ Thus, for $x\to \infty$, we have that the integrand behaves as $$ e^{-\alpha x}J_0(\beta\sqrt{x^2-1}) \sim e^{-\alpha x} \sqrt{\frac{2}{\pi \beta x}} \cos(\beta x-\pi/4). $$ Let us first investigate the case $\mathop{\rm Im} \beta>0$; we then have that $$ e^{-\alpha x}J_0(\beta\sqrt{x^2-1}) \sim \sqrt{\frac{i}{2\pi \beta x}} e^{-(\alpha+i \beta) x};$$ the integral thus converges for $\mathop{\rm Re}\alpha >\mathop{\rm Im} \beta$. The case $\mathop{\rm Im} \beta<0$ can be treated similarly with the result that the integral converges whenever $$ \mathop{\rm Re} \alpha >|\mathop{\rm Im} \beta|.\tag{1}$$ The case $\mathop{\rm Re} \alpha =|\mathop{\rm Im} \beta|$ needs special attention as on this line the convergence is conditional. Regarding your questions: 1) is incorrect, see (1) above. 2) this is exactly covered by analytical continuation. If you continue $\alpha$ from the real line to the upper imaginary axis, then you obtain one branch of the square root. Continuing it to the lower part of the imaginary axis you get the other branch. In formulas, we have that $$ \sqrt{\alpha^2 + \beta^2} \stackrel{\alpha \to\pm i a}\mapsto \pm i \sqrt{a^2-\beta^2}.$$ 3) when $\alpha=i a$ is purely imaginary then the integral is troublesome as it is oscillating very fast. It is normal that in this case numerical routines run into trouble. So you should rely in this case on analytical continuation instead.
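For real parameters satisfying condition (1), the identity is easy to check numerically; here is an illustrative Python sketch assuming scipy, with arbitrary sample values:

import numpy as np
from scipy.integrate import quad
from scipy.special import j0

alpha, beta = 2.0, 1.5  # real, so Re(alpha) > |Im(beta)| = 0 holds
lhs, _ = quad(lambda x: np.exp(-alpha * x) * j0(beta * np.sqrt(x * x - 1)),
              1, np.inf)
r = np.sqrt(alpha ** 2 + beta ** 2)
print(lhs, np.exp(-r) / r)  # the two values should agree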
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Exact probabilities in a $M/M/1$ queue Suppose we have a $M/M/1$ queue with arrival rate $\lambda$, service rate $\mu$, and $\lambda<\mu$. Suppose also that there are initially $k$ people in the queue. I want to find the exact probability that there are $n$ people in the queue at time $t$, for arbitrary $n$ and $t$. How can I do so? (Of course, as $t\rightarrow \infty$ the probability approaches the invariant distribution, but I am really only interested in the exact probability here.)
This is covered in Wikipedia's M/M/1 queue article, it is the transient solution of the model. You're looking for $$p_n(t)=e^{-(\lambda+\mu)t} \left[ \rho^{\frac{n-k}{2}} I_{n-k}(at) + \rho^{\frac{n-k-1}{2}} I_{n+k+1}(at) + (1-\rho) \rho^{n} \sum_{j=n+k+2}^{\infty} \rho^{-j/2}I_j(at) \right]$$ where $p_n(t)$ is the probability that there are $n$ customers in the queue at time $t$. $\rho=\lambda/\mu$, $a=2\sqrt{\lambda\mu}$ and $I_{n}$ is the modified Bessel function of the first kind. Note the notation in the article is slightly different to yours, in particular $k$ is different.
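For completeness, here is a direct Python implementation of that formula (it assumes scipy; the infinite sum is truncated, and the parameter values at the end are illustrative only):

from math import exp
from scipy.special import iv  # modified Bessel function I_j of the first kind

def p_n(n, t, lam, mu, k, terms=200):
    # transient M/M/1 probability of n customers at time t, starting from k
    rho = lam / mu
    a = 2.0 * (lam * mu) ** 0.5
    tail = (1 - rho) * rho ** n * sum(
        rho ** (-j / 2) * iv(j, a * t)
        for j in range(n + k + 2, n + k + 2 + terms)
    )
    return exp(-(lam + mu) * t) * (
        rho ** ((n - k) / 2) * iv(n - k, a * t)
        + rho ** ((n - k - 1) / 2) * iv(n + k + 1, a * t)
        + tail
    )

# sanity check: the probabilities should sum to ~1 (lam=1, mu=2, k=3 as examples)
print(sum(p_n(n, 2.0, 1.0, 2.0, 3) for n in range(60)))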
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
All simple modules over a PID. The exercise is to classify all simple modules over a PID. I tried the following: if $M$ is a simple module over $R$ (a PID), then for $m \in M$ with $m \neq 0$ we have $M = (m)$, so I can define $f:R \rightarrow M$ by $f(r) = rm$. The homomorphism theorem gives that $R/\ker(f) \simeq M$, and $\ker(f) = (s)$ for some $s \in R$ because $R$ is a PID. Now comes my doubt: what specifically does it mean to classify? Do I have to decompose $s$ into irreducible elements and apply the structure theorem? (The exercise is in the structure theorem section of the book.)
Let $S$ be a simple module over a commutative ring $R$. Then $S\ne\{0\}$ by definition and, if $x\in S$, $x\ne0$, we have $Rx=S$ because $S$ is simple. Then the map $\varphi\colon R\to S$ defined by $r\mapsto rx$ is surjective and so $$ S\cong R/\ker\varphi $$ Since $S$ is simple, it follows from the homomorphism theorems that $I=\ker\varphi$ is a maximal ideal. We can also note that $IS=\{0\}$, so $I$ is contained in the annihilator of $S$. On the other hand, $S=RS\ne\{0\}$ and, by maximality, $I$ is the annihilator of $S$. Conversely, if $I$ is a maximal ideal of $R$, then $R/I$ is simple. It is clear that isomorphic modules have the same annihilator, so we have a complete classification of the simple modules: a complete and irredundant set of representatives of the simple modules is given by the family of quotients $R/I$, where $I$ is a maximal ideal. In the case of a PID, an ideal $(p)$ is maximal if and only if $p$ is irreducible. For two irreducible elements $p$ and $q$ we have $$ R/(p)\cong R/(q) $$ if and only if $(p)=(q)$ (by what has been proved above), which is equivalent to $p$ and $q$ being associate (that is $q=up$ where $u$ is invertible).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Implication in Zassenhaus Lemma I need to proof Zassenhaus Lemma using the First Isomorphism Theorem and I have a problem with the following implication: $H' \vartriangleleft H < G, K' \vartriangleleft K < G \Longrightarrow H'(H \cap K') \vartriangleleft H'(H \cap K) < H.$ I have shown that $H \cap K' \vartriangleleft H \cap K, $ but I don't see why $H'(H \cap K') \vartriangleleft H'(H \cap K)$. It should follow from the First Isomorphism Theorem and from $H' \vartriangleleft H,$ but I don't see how.
Perhaps it is more clear if you prove the following: Claim: If $C \trianglelefteq H$ and $A \trianglelefteq B \leq H$, then $CA \trianglelefteq CB \leq H$. Then your claim follows with $C = H'$ and $A = H \cap K'$, $B = H \cap K$. Note that in this situation $A \trianglelefteq B$ since $K' \trianglelefteq K$. Maybe you want to try to prove the claim yourself. If you get stuck, here is a proof: $CA$ and $CB$ are subgroups since $C$ is normal, and $CA \leq CB$ is obvious. Now $B$ normalizes $CA$ since $C$ is normal, and since $A \trianglelefteq B$. Furthermore, $C$ normalizes $CA$ since $C$ is a subgroup of $CA$. Thus $CB$ normalizes $CA$, that is, $CA \trianglelefteq CB$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is $\Pr(Y\in[\pi,X+\pi]\mid X)$ if $X \sim U(0,\pi)$ and $Y \sim U(0,2\pi)$? Let $X \sim U(0,\pi)$ and $Y \sim U(0,2\pi)$ be two independent uniformly distributed random variables. What is $\Pr(\left.Y\in[\pi,X+\pi]\right|X)$? Intuitively I know that the result is $\frac{1}{4}$ but how can I formally derive the density function in order to compute the integral?
It depends on the joint distribution of $X$ and $Y$. If almost surely, $Y=2X$, then $\Pr(Y\in [\pi, \pi+X]|X) = \mathbb I(X>\pi/2)$. If $X$ and $Y$ are independent, $\Pr(Y\in [\pi, \pi+X]|X) = X/(2\pi)$. Indeed, the unconditional probability in this last case is 1/4. I can't find, off the top of my head, a situation when the conditional probability is 1/4.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $GH$ is parallel to $AD$ in the given figure In the above figure, $ABCD$ and $AECF$ are two parallelograms such that $EF$ is parallel to $AB$. $DHF$ and $BGE$ are straight lines intersecting $EC$ and $AF$ at $H$ and $G$ respectively. We need to prove that $GH \parallel AD$. One approach may be by using Basic Proportionality Theorem (BPT). However, there is no triangle where it can be applied. I tried some constructions but that only made the problem more difficult.
Notice first of all that all lines are symmetric around the common center $O$ of parallelograms $ABCD$ and $AECF$. It follows that $EHFG$ is a parallelogram and $EO=FO$. Produce $GH$ to meet $AB$ at $M$. By similar triangles one has: $$ EO:BM=OG:GM=FO:AM. $$ Hence $AM=BM$ and $GH$ belongs to line $OM$, connecting the midpoint of $AB$ to the midpoint of $AC$ and therefore parallel to $BC$, hence also parallel to $AD$, since $AD\parallel BC$ in parallelogram $ABCD$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Confused by proof of the irrationality of root 2: if $p^2$ is divisible by $2$, then so is $p$. In typical proofs of the irrationality of $\sqrt{2}$, I have seen the following logic: If $p^2$ is divisible by $2$, then $p$ is divisible by $2$. Perhaps I am being over-analytical, but how do we know this to be true? IE. do we require a proof of this implication, or is it simply fact?
The quickest proof of that fact is to note that every whole number $n$ is either even or odd. If $n$ is even, $n=2k$ for some whole number $k$: $n^2 = 4k^2 = 2(2k^2)$ is even. If $n$ is odd, $n=2k+1$ for some whole number $k$: $n^2 = (2k+1)^2 = 4k^2 +4k + 1 = 2(2k^2 +2k) +1$ is odd. Therefore the square of a whole number is even if and only if that number is even.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1908946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 12, "answer_id": 4 }
Confusion about derivative notation I have always been confused about Leibniz notation. Not the notation itself, but the fact that it treats the differential operators ($d$, $\partial$) as being multipliable. The most famous example would probably be the Schrödinger equation, which is often denoted something like this: $$E\psi=\left(-\frac{h^2}{2m}\frac{\partial^2}{\partial x^2}+V(x)\right)\psi=-\frac{h^2}{2m}\frac{\partial^2\psi}{\partial x^2}+V(x)\psi$$ The problem with this is that it makes use of multiplication to expand the expression, $(a+b)c=a*c+b*c$, which would imply that $\partial *f=\partial f$. Personally, I would define the differential operators as functions: $$d(f)=\lim_{h\rightarrow 0}{\frac{f(x+h)-f(x)}{h}}$$ So the above statement would make no sense at all. First I just accepted it as mathematical laziness, but then I stumbled upon this monstrosity: Observe that $$\left(v^2\frac{\partial^2}{\partial x^2}-\frac{\partial^2}{\partial t^2}\right)y=0$$ can be factored as (which is what you probably mean by "squaring" in the question) $$\left(v\frac{\partial}{\partial x}+\frac{\partial}{\partial t}\right)\left(v\frac{\partial}{\partial x}-\frac{\partial}{\partial t}\right)y=0$$ What is happening here???
What is happening is that the physicists writing out the equation are using an operator notation, where $\frac{\partial}{\partial x}$ is a shorthand for the operator of taking the partial derivative in the $x$ direction of whatever appears on the right of the operator. The reason this appears like multiplication is that $\frac{\partial}{\partial x}$ is a linear operator, so that for any objects $\mathcal{O}$ and $\mathcal{P}$ and scalar (real or complex number) $k$ $$ \frac{\partial}{\partial x} (\mathcal{O}+ \mathcal{P}) =\frac{\partial}{\partial x} (\mathcal{O}) + \frac{\partial}{\partial x} (\mathcal{P})\\ \frac{\partial}{\partial x} (k\mathcal{O}) = k\frac{\partial}{\partial x} (\mathcal{O}) \\ $$ That first property allows you to write things that look like you are using the distributive law of multiplication over addition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find all numbers $z \in \mathbb{C}$ such that $(z−i)^5 = \sqrt{3} +i$ This is a follow-up question to finding all solutions for $z \in \mathbb{C}$ such that $z^5 = \sqrt{3} +i$ but I have no idea how to approach this question (might just be having a brain fart) The only way I can think of solving it would be to expand it into a polynomial and then solve for z but that seems like a lot more work than necessary, and I would think this answer would have something to do with my answer to the previous question.
Hint: Write $\sqrt 3+i$ in exponential form, and you'll find out it's a problem of finding the $5$th roots of a complex number of modulus $2$ (for $w=z-i$, then shift back by $i$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Show that any integer $n>7$ can be written as the sum of $3$s and $5$s exclusively Show that any integer $n>7$ can be written as the sum of $3$s and $5$s exclusively, i.e., $$ 8= 5+3 \\ 9=3+3+3 \\ 10 = 5+5 \\ 11 = 5+3+3 \\ 12 = 3+3+3+3 $$ So I've started in a couple directions without progress. I think it makes sense to write $$n=3x+5y$$ for $x,y \geq 0$, observing that either $3\mid n$ or $5\mid n$. It also seems that if you are at $n$, you can increment to $n+1$ by replacing a $5$ with two $3$s. Or you can take $n-1$ and replace a $3$ with a $5$. In this way, it seems like you should be able to keep incrementing after $n=8$, but these ideas aren't formalizing into anything...
$2*3 - 5 = 1$, so $2n*3 - 5n = n$, i.e. $3(2n - 5k) + 5(3k - n)= n$ for any $k$. To ensure that $2n - 5k \ge 0$ and $3k - n \ge 0$: if $n = 3m - r$ with $r = 0, 1, 2$, then $k$ can be anything equal to or greater than $m$ so long as $2n - 5k \ge 0$, i.e. $6m - 2r - 5k \ge 0\implies k \le 6m/5 - 2r/5= m + \frac{m-2r}5$. So long as $m \ge 4$ we will always be able to find such a $k$, i.e. so long as $n \ge 3*4 -2 = 10$. If $m = 3$ we'll be able to find such a $k$ if $r \le 1$, i.e. if $n = 8$ or $9$. We will not be able to find any such $k$ for $n =7$ (where $m = 3; r= 2$). ======= Or another way: if we can write $n = 3a + 5b$ we can write $n+1 = 3(a-3) + 5(b+2) = 3(a+2) + 5(b-1)$ so long as either $a \ge 3$ or $b \ge 1$. If we have $a + b \ge 2$, $n \ge 8$, and $n=3a+5b$, we can find $n+1$ (note that if $b = 0$ then $3a = n \ge 8$ forces $a \ge 3$), and in doing so either $a$ will increase by $2$ and $b$ (which was at least $1$) will decrease by $1$, so $(a+2) + (b-1)\ge 2$, or $a$ (which was at least $3$) will decrease by $3$ while $b$ increases by $2$, so $(a - 3) + (b+2)\ge 2$. Thus by induction if $n = 3a + 5b$ with $a+b \ge 2$, then $n+1 = 3a' + 5b'$ with $a'+b' \ge 2$. Base case: $n = 8 = 3*1 + 5*1$. So it is possible for all $n \ge 8$.
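A brute-force confirmation of the claim for a range of $n$ (illustrative only):

for n in range(8, 41):
    assert any((n - 3 * a) % 5 == 0 for a in range(n // 3 + 1)), n
print("every n in 8..40 is a sum of 3s and 5s")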
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Intuition for the epsilon-delta definition of continuity This is my first question so I hope this sort of thing is OK to ask. I'm working my way through Rudin's Principles of Mathematical analysis, and I'm up to chapter 4, which is on continuity in the context of functions between metric spaces. It introduces what I understand to be the standard epsilon-delta definition used in calculus, but I'm struggling to gain an intuitive understanding of what it means. I came up with what I think is an English version of the gist of it: A function f is continuous at some point p in its domain iff sufficiently small deviations from p result in arbitrarily small variations in f(p). Does this show the general idea of continuity? If not, how should it be changed to fix it? Thanks in advance for any answers :)
That is an almost correct intuitive formulation of what continuity is. Somehow you also need to get across that the actual size of the allowable deviations does not have anything to do with it. You could do that by saying "for any interpretation of the word 'small'", or something like that. It does definitely show the general idea, though. Just for completeness, the general idea is formalised by the $\epsilon$-$\delta$ definition: A function $f$ is continuous at a point $p$ in its domain if, for any $\epsilon > 0$, there is a $\delta > 0$ such that $$ |x-p| < \delta \implies |f(x) - f(p)| < \epsilon $$ The translation is that $\epsilon$ is the given bound on allowable variations in function value. $\delta$ is the bound you find on deviations from $p$ that keeps the function value within the given $\epsilon$-bound.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Locally Path Connected Definition Why are the following two statements equivalent for any topological space $X$? 1) $X$ is locally path connected (meaning, it has a basis of path connected sets). 2) Every point of $X$ has a path connected neighborhood. Is it simply that a path connected neighborhood is an open set in the subspace topology?
They aren't equivalent. Indeed, any path-connected space satisfies (2), since you can take the neighborhood to just be $X$ itself. But not every path-connected space is locally path-connected (see https://math.stackexchange.com/a/135483/86856, for instance).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Counting using permutation and combination How many solutions are there to the equation $x_1+x_2+x_3+x_4+x_5=21$, where $x_i$, $i=1,2,3,4,5$, is a nonnegative integer such that $ 0 ≤ x_1 ≤ 3$, $1 ≤ x_2 < 4$, and $x_3 ≥ 15$? I tried it. My approach: $ x_3=x_3'+15 \implies x_1+x_2+x_3'+15+x_4+x_5=21 \implies x_1+x_2+x_3'+x_4+x_5=6 \implies C(5+6-1,6)$, but I am stuck at handling the constraints $0 ≤ x_1 ≤ 3$ and $1 ≤ x_2 < 4$. Please help!!!
Let $y_3 = x_3 -15$. We need the number of solutions to $x_1+x_2+y_3+x_4+x_5 = 21 - 15$ with $y_3 \geq 0$, $0 \leq x_1 \leq 3$ and $1 \leq x_2 <4$. The number of solutions is the coefficient of $x^6$ in \begin{align*} (1+x+x^2+x^3)&(x+x^2+x^3)(1+x+x^2+\cdots)(1+x+x^2+\cdots)(1+x+x^2+\cdots)\\ &= x(1+x+x^2+x^3)(1+x+x^2)(1+x+x^2+\cdots)^3 \\ &= x\left(\frac{1-x^4}{1-x}\right)(1+x+x^2)\left(\frac{1}{1-x}\right)^3\\ &= x(1-x^4)(1+x+x^2)(1-x)^{-4} \\ &=x(1-x^4)(1+x+x^2)\left(1+4x+\binom{5}{2}x^2+\binom{6}{3}x^3+ \cdots\right)\\ &= x(1+x+x^2-x^4-x^5-x^6)\left(1+4x+\binom{5}{2}x^2+\binom{6}{3}x^3+ \cdots\right)\\ \end{align*} Hence the coefficient is \begin{align*} -1-4+\binom{6}{3}+\binom{7}{4}+\binom{8}{5} = 106 \end{align*}
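A brute-force cross-check of that coefficient (illustrative Python):

count = sum(
    1
    for x1 in range(4)           # 0 <= x1 <= 3
    for x2 in range(1, 4)        # 1 <= x2 < 4
    for x3 in range(15, 22)      # x3 >= 15 (at most 21 here)
    for x4 in range(22)
    for x5 in range(22)
    if x1 + x2 + x3 + x4 + x5 == 21
)
print(count)  # 106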
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Empty Set $\{\}$ is the Only Basis of the Zero Vector Space $\{0\}$ Question Suppose we want to find a basis for the vector space $\{0\}$. I know that the answer is that the only basis is the empty set. Is this answer a definition itself, or is it a result of the definitions for linearly independent/dependent sets and Spanning/Generating sets? If it is a result, then would you mind mentioning the definitions of the bold items based on which this answer can be deduced? Useful Links These are the links that I found useful for answering this question. It needs some elementary background from mathematical logic. You can learn it by spending a few hours on this Wikipedia page. Link 1, Link 2, Link 3, Link 4, Link 5, Link 6
The standard definition of basis in vector spaces is: $\mathcal B$ is a basis of a space $X$ if: * *$\mathcal B$ is linearly independent. *The span of $\mathcal B$ is $X$. You can easily show both of these statements are true when $X=\{0\}$ and $\mathcal B= \{\}$. Again, you have to look at the definitions: * *Is $\{\}$ linearly independent? Well, a set $A$ is linearly independent if, for every nonempty finite subset $\{a_1,a_2\dots, a_n\}$, we have that if $$\alpha_1a_1 + \dots + \alpha_n a_n=0,$$ then $\alpha_i=0$ for all $i$. This condition is satisfied automatically in the case of an empty set (everything follows from a false statement). This part may be difficult to understand, but since there is no nonempty finite collection of vectors from $\{\}$, any statement you make about nonempty finite collections of vectors from $\{\}$ must be true (because any such statement includes an assumption that a nonempty finite collection exists. It does not, meaning that any such statement is of the type $F\to A$ and is automatically true). This means $\{\}$ is linearly independent. *Is the span of $\{\}$ equal to $\{0\}$? Well, the span of a set $A\subseteq X$ is defined as the smallest vector subspace of $X$ that contains $A$. Since all vector subspaces contain $\{\}$, it is clear that $\{0\}$, which is the smallest vector subspace of all, must be the span of $\{\}$. Alternatively, the span of $A$ is the intersection of all vector subspaces that contain $A$. Again, it should be obvious that this implies that the span of $\{\}$ is $\{0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 4, "answer_id": 1 }
Show $A^T$ has an eigenvector with all components rational Matrix $A$ is a $5 \times 5$ matrix with rational entries such that $(1, \sqrt{2}, \sqrt{3}, \sqrt{4}, \sqrt{5})^T$ is an eigenvector of $A$. Show that $A^T$ has an eigenvector with all components rational. My idea is: let the eigenvalue associated with the above eigenvector be $λ$. Since all matrix entries are rational numbers, the numbers $1, \sqrt{2}, \sqrt{3}, \sqrt{5}$ are linearly independent over $\mathbb{Q}$. Using this: $2(a_{11} + 2a_{14}) = a_{41} + 2a_{44}$ $a_{21}+2a_{24} = 0$ $a_{31}+2a_{34} = 0$ $a_{51}+2a_{54} = 0$ but I can't find the transposed matrix's eigenvector.
Let $v = (1, \sqrt{2}, \sqrt{3}, \sqrt{4}, \sqrt{5})^T$ and assume $A v = \lambda v$. From the first row we get $$a_{1,1} + 2 a_{1,4} + a_{1,2} \sqrt{2} + a_{1,3} \sqrt{3} + a_{1,5} \sqrt{5} = \lambda$$ From the second row we get: $$a_{2,1} + 2 a_{2,4} + a_{2,2} \sqrt{2} + a_{2,3} \sqrt{3} + a_{2,5} \sqrt{5} = \lambda \sqrt{2}$$ Now substitute $\lambda$: $$a_{2,1} + 2 a_{2,4} + a_{2,2} \sqrt{2} + a_{2,3} \sqrt{3} + a_{2,5} \sqrt{5} = (a_{1,1} + 2 a_{1,4} + a_{1,2} \sqrt{2} + a_{1,3} \sqrt{3} + a_{1,5} \sqrt{5}) \sqrt{2}$$ Multiply and rearrange the terms: $$a_{2,1} + 2 a_{2,4} - 2a_{1,2} + (a_{2,2}-a_{1,1}-2a_{1,4}) \sqrt{2} + a_{2,3} \sqrt{3} + a_{2,5} \sqrt{5} - a_{1,3} \sqrt{6} - a_{1,5} \sqrt{10} = 0$$ Since the roots of the squarefree positive integers are linearly independent over $\mathbb{Q}$, we obtain in particular $a_{1,3} = 0$ and $a_{1,5} = 0$, so $$\lambda = a_{1,1} + 2a_{1,4} + a_{1,2} \sqrt{2}.$$ From the third row we get: $$a_{3,1} + 2 a_{3,4} + a_{3,2} \sqrt{2} + a_{3,3} \sqrt{3} + a_{3,5} \sqrt{5} = \lambda \sqrt{3}$$ Again, substitute $\lambda$: $$a_{3,1} + 2 a_{3,4} + a_{3,2} \sqrt{2} + a_{3,3} \sqrt{3} + a_{3,5} \sqrt{5} = (a_{1,1} + 2a_{1,4} + a_{1,2} \sqrt{2}) \sqrt{3}$$ Multiply and rearrange the terms: $$a_{3,1} + 2 a_{3,4} + a_{3,2} \sqrt{2} + (a_{3,3}-a_{1,1}-2a_{1,4}) \sqrt{3} + a_{3,5} \sqrt{5} - a_{1,2} \sqrt{6} = 0$$ So we obtain $a_{1,2} = 0$ and $$\lambda = a_{1,1} + 2 a_{1,4}$$ Thus $\lambda \in \mathbb{Q}$. Since $A^T$ and $A$ have the same set of eigenvalues, $A^T$ is a matrix with rational components and a rational eigenvalue. Therefore $A^T$ has an eigenvector with rational components.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to find tangents to curves at points with undefined derivatives I will explain my question with the help of an example. We need to find the tangent at origin to the curve $$x^3 + y^3 =3axy$$ The derivative at origin is $0/0$ or indeterminate, found after implicit differentiation. But the tangents exist (via Wolfram Alpha) and they are $x=y=0$. * *If the derivative at the origin does not exist, how are we getting the tangents? At least $y=0$ has a determinate slope (0). *Also how should I find tangents to more general curves at points where the derivative doesn't exist? Is there a general method using differentiation? *My professor told me that as $x,y\to0$, $x^3 + y^3\ll3axy$ and hence the zeroes of the function will be approximately where the zeroes of $3axy$ are. Now I couldn't understand the next line that he said: Near the origin the curve will look like the solutions to $3axy$. What does he mean by this? Of course the solutions to $3axy=0$ are $x=0$ and $y=0$, which are the tangents, but the curve isn't like that. Can anyone please explain me this? And is there a general method to find tangents at points where the derivative doesn't exist?
For algebraic curves, you can use the notion of tangent cone. Consider the curve defined by $P(x, y) = 0$, where $P(x, y)$ is a polynomial. Write $$P(x, y) = P_m(x, y) + P_{m+1}(x, y) + \dotsb + P_{m+k}(x, y)$$ where each $P_i(x, y)$ is a polynomial of degree $i$, and $P_m(x, y) \neq 0$, i.e. $P_m(x, y)$ is the homogeneous component of $P(x, y)$ of the lowest degree. Then, the equation $P_m(x, y) = 0$ defines the tangent cone to the curve at the origin, and the line of equation $a x + b y = 0$ is tangent to the curve at the origin if and only if $a x + b y$ divides $P_m(x, y)$. In your case, since $P(x, y) = 3axy - x^3 - y^3 = P_2(x, y) + P_3(x, y)$, the tangent cone is given by $3axy = 0$, and so the tangent lines have equations $x = 0$ and $y = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving $|z-1|<|z-i|$ is an open set Consider this set: $$|z-1|<|z-i|$$ Suppose $z=x+iy$, then: $$\sqrt{(x-1)^2+y^2}<\sqrt{x^2+(y-1)^2}\implies$$ $$(x-1)^2+y^2<x^2+(y-1)^2\implies$$ $$x^2-2x+1+y^2<x^2+y^2-2y+1\implies$$ $$-2x<-2y\implies y<x$$ First of all, am I right? Now, in order to prove that the set $O = \{(x,y); y<x\}$ is open, I need to pick a point $z\in O$, then construct an open ball of radius $r$ and prove it's entirely contained in $O$. I think that $r$ in this case must be the distance from this point $z$ to the line $y=x$. I have, then, to pick a point $w\in B(z, r)$ and prove that $w\in O$. How can I do that?
If you draw the picture, you see that your set is just $\mathcal O =\left \{ (x,y):y<x \right \}$ so pick a point $(x_0,y_0)\in \mathcal O$ and observe that the ball centered at $(x_0,y_0)$ of radius $\frac{\vert x_0-y_0\vert }{2\sqrt{2}}$ lies entirely in $\mathcal O$. Or if you know that $f(z)=|z-1|-|z-i|$ is continuous, then the result is immediate, since $\mathcal O=f^{-1}(\left \{ y\in \mathbb R:y<0 \right \})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1909973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 0 }
Prove that $4^n+ 1$ is not divisible by $3$ For all integers $n \ge 0$, prove that the value $4^n + 1$ is not divisible by 3. I need to use Proof by Induction to solve this problem. The base case is obviously 0, so I solved $4^0 + 1 = 2$. 2 is not divisible by 3. I just need help proving the inductive step. I was trying to use proof by contradiction by saying that $4^n + 1 = 4m - 1$ for some integer $m$ and then disproving it. But I'd rather use proof by induction to solve this question. Thanks so much.
I think that if you need to use induction, instead of proving "$4^n+1$ is not divisible by $3$", you should prove the more specific "$4^n+1$ has remainder $2$ when divide by $3$". $$4^n+1=3k+2\implies4^n=3k+1\implies4^{n+1}=12k+4$$ $$\implies4^{n+1}+1=12k+5\implies4^{n+1}+1=3(4k+1)+2$$
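A quick computational spot-check of the strengthened statement (illustrative only):

for n in range(30):
    assert (4 ** n + 1) % 3 == 2
print("4^n + 1 leaves remainder 2 mod 3 for n = 0..29")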
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 1 }
Lateral limits of an endpoint of the interval. Imagine we have the domain $D=[d_1,d_2]$ of a continuous function $f$. The definition of right limit I'm using is the following: $$\lim_{x\rightarrow a^+}f(x)=b \Leftrightarrow \forall_{\epsilon}\exists_{\delta}\forall_{x}(x\in D \ \cap \ ]a,a+\delta[\ \Rightarrow \ f(x) \in N_{\epsilon}(b) ),$$ where $N_{\epsilon}(b)$ is the neighbourhood of length $2\epsilon$ at the point $b$. We define a left limit similarly. If I pick the point $a=d_2$, which is a limit point (equivalently, an accumulation point), then the implication is vacuously true for any value $b$... Then how can I say that there's no $\displaystyle \lim_{x\rightarrow d_2^+}f(x)$? Or when I say that $\lim_{x\rightarrow d_2^-}f(x)=\lim_{x\rightarrow d_2^+}f(x) \Leftrightarrow \lim_{x\rightarrow d_2}f(x) \text{ exists }$, is it valid only for interior points? Thanks.
I think I get what Fujisaki is talking about. My definition of the right limit is incomplete. I should have demanded, right at the beginning of the definition, that $a$ be an adherent point of the set $D \cap\, ]a,+\infty[$; otherwise we get this problem, since $[d_1,d_2] \cap\, ]d_2,d_2+\delta[\,=\emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What does $f|A$ mean? Let $X$ be some space, $A$ a subspace of $X$, $f:X \rightarrow X$ a function. My first guess for what $f|A$ is would be that the domain of $f$ is restricted to $A$, but I can't find any confirmation that this is actually the case. The only notation that I'm aware of is $f|_A$. For reference, the actual place that this question arose in is in Hatcher's Algebraic Topology page 2: A deformation retraction of a space $X$ onto a subspace $A$ is a family of maps $f_t:X \rightarrow X$, $t \in I$, such that $f_0 = \mathbf{1}$ (the identity map), $f_1(X) = A$, and $f_t|A = \mathbf{1}$ for all $t$.
You are correct, this is indeed just a restriction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Discriminant of splitting field Let $K$ be a number field and let $O_K$ be its ring of integers. We all know $O_K$ is a free $\mathbb{Z}$-module. If $L$ is a finite (or Galois) extension of $K$, is $O_L$ a free $O_K$-module? In addition, let $f$ be an irreducible polynomial over $\mathbb{Q}$, let $α$ be a root of $f$, and let $K$ be the splitting field of $f$. Let $Δ(F)$ denote the discriminant of a number field $F$. Does $Δ(K)$ divide $Δ(\mathbb{Q}(α))^n$ for some integer $n$?
Answer to your 2nd question: In the relative situation $K/k$, one defines the discriminant ideal $\Delta (K/k)$ as being the ideal of $O_k$ generated by all the discriminants of all the $k$-bases of $K$ consisting of integral elements. A finer invariant is the different $\mathfrak D(K/k)$, which is an ideal of $O_K$ defined as follows: the elements $x$ of $K$ such that $Tr_{K/k}(x O_K) \subseteq O_k$, where $Tr_{K/k}$ is the trace map of $K/k$, form a fractional ideal of $K$, and its inverse is $\mathfrak D(K/k)$. The relation between the discriminant and the different is: $\Delta(K/k)$ = $N_{K/k}(\mathfrak D(K/k))$, where $N_{K/k}$ is the norm map in $K/k$. Given a tower of number fields $k < K < L$, one has a "transitivity" formula for differents: $\mathfrak D(L/k) = \mathfrak D(L/K).\mathfrak D(K/k)$. Taking norms and using their transitivity, one gets readily: $\Delta(L/k) = N_{K/k}(\Delta(L/K)).\Delta(K/k)^m$, where $m$ is the degree of $L/K$. It remains just to apply this formula to your particular case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A nice identity involving urns and balls problem Prove the identity: $$\frac{\displaystyle\sum_{k=0}^{a} {n+a-k-2\choose n-2}}{\displaystyle {n+a-1\choose a}} = 1$$ where ${i\choose j}$ denotes the number of ways to simultaneously choose $j$ objects from $i$ objects. My attempt: I was trying to use a combinatorial argument by saying that, out of the given $n$ balls, each of the terms within the summation is the probability of a given urn containing exactly $k$ balls, for $k = 0,1,2,\dots,a$. Thus, the sum reflects the sum of all the probabilities of that urn having exactly $k$ balls, so it must be $1$. My question: I would like to see an algebraic proof for this identity. Could someone please help with such a proof? In case my argument above is incorrect, please help point out the mistake.
${n+a-1 \choose n-1}={n+a-1 \choose a}$ is the number of non-negative integer solutions to: $x_1+x_2+\ldots+x_n=a$ Let $x_1=k$ with $k \in \{0,1,\ldots,a\}$. Then the corresponding number of solutions is ${n+a-k-2 \choose n-2}$.
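The identity is also easy to verify numerically for small parameters (illustrative Python, using math.comb):

from math import comb

for n in range(2, 8):
    for a in range(10):
        lhs = sum(comb(n + a - k - 2, n - 2) for k in range(a + 1))
        assert lhs == comb(n + a - 1, a), (n, a)
print("identity verified for n = 2..7, a = 0..9")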
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Lower bound on the sum of divisor function Is there a lower bound on the sum of divisors function? More specifically, is there a "simple" function $f(n)$ such that $\sigma(n) \ge f(n)$ for all $n$ large enough?
The best such function is $f(n)=n+1$: for $n>1$ one has $\sigma(n)\ge n+1$, with equality exactly when $n$ is prime. Since there are infinitely many primes, you can't expect anything better for $n$ large enough. Ramanujan, under the assumption of the Riemann hypothesis, showed that $\sigma(n)<e^{\gamma}n{\log\log}n$ for $n$ sufficiently large. Unfortunately, $\sigma(n)$ does not have a definite rate of growth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Can we take the derivative of $x$ over $f(x)$? I have, probably, the most dumb question ever: if we have $f = f(x)$, can we take the derivative: $$\frac{dx}{df} = \frac{dx}{df(x)} ?$$ I was thinking that it may be possible if we find some inverse, or something like a substitution. So if $f= cx^2$, then: $$\frac{dx}{df} = \frac{dx}{dcx^2} $$ is actually: $\sqrt{c}x = \sqrt{y}$, and $x = \sqrt{\frac{y}{c}}$; then: $$\frac{dx}{df} = \frac{dx}{dcx^2} = \frac{d \sqrt{\frac{y}{c}} }{d y} =\sqrt{\frac{1}{c}} \frac{d \sqrt{y} }{d y} = ...$$ Is this a correct approach?
Suppose that $f$ is a function $D\to\Bbb R$. To make sense of what you write, we need that $f$ has an inverse function, that is, that there exists $g$ such that for all $x\in D$, $g(f(x)) = x$. Then using the chain rule, we have that $\frac{d(g\circ f)}{dx}(x) = \frac{dg}{df}(f(x))\times \frac{df}{dx}(x)$ for all $x\in D$. Note that since $g\circ f$ is the identity function, we have that the left-hand side of the equality is equal to 1 at every $x\in D$. Therefore, we have $\frac{dg}{df}(f(x)) = \frac1{\frac{df}{dx}(x)}$. Replacing $f(x)$ by $y$, this gives the formula $$\frac{dg}{df}(y) = \frac1{\frac{df}{dx}(g(y))}$$ and by abuse of notation, this quantity could also be written as $\frac{dx}{df}$. Taking your example, if $f(x)=cx^2$ over $[0,+\infty)$, then $g(x) = \sqrt{\frac{x}{c}}$. We then have $\frac{dg}{df}(y) = \frac1{2\sqrt{cy}}$, which at $y=f(x)=cx^2$ (for $x>0$) equals $\frac{1}{2cx} = \frac{1}{f'(x)}$, exactly as the general formula predicts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the smallest index $l$ with $x_l$ = $x_{2l}$ in a sequence The sequence $(x_i)_{i \geq 0}$ has the preperiod $2,3,5,7,11,13,17,19,23,29$ and the periodic part $31,37,41,43$. Find the smallest index $l \in \mathbb{N}$ with $x_l = x_{2l}$. In other words, the sequence is $$2,3,5,7,11,13,17,19,23,29,31,37,41,43,31,37,41,43,\dots$$ By inspection I found out that the answer is $l=12$. But I am not so much interested in the answer as in whether there is a more general way of finding it that does not involve writing out the sequence and comparing individual elements.
The sequence satisfies $x_{10+k} = x_{10 + (k \; {\rm mod}\; 4)}$ for $k\geq 0$ (and there are no other relations). So look for $\ell=10+k$, $k\geq 0$, so that $x_{10+k}=x_{20+2k}=x_{10+(10+2k)}$. And this is equivalent to $k\geq 0$ and $$ k \equiv 10+2 k \ {\rm mod} \ 4 $$ or $k \equiv 2 \ {\rm mod} \ 4$.
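A direct check of this congruence answer (illustrative Python):

pre = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
per = [31, 37, 41, 43]
x = lambda i: pre[i] if i < 10 else per[(i - 10) % 4]
print(min(l for l in range(1, 100) if x(l) == x(2 * l)))  # 12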
{ "language": "en", "url": "https://math.stackexchange.com/questions/1910957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proof that lim sup of union equals union of lim sup My homework is: let $ A_{n},B_{n}$ be subsets of the sample space. Prove that $$ \limsup_{n\to\infty} (A_{n}\cup B_{n}) = \limsup_{n\to\infty} A_{n}\cup\limsup_{n\to\infty} B_{n} $$ I managed to get to this: $$ \bigcap_{n\geq 1}\bigcup_{m\geq n} A_{m}\cup \bigcap_{n\geq 1}\bigcup_{m\geq n} B_{m} = \bigcap_{n\geq 1}\bigcup_{m\geq n} (A_{m}\cup B_{m}) $$ Really appreciate if anyone can help me with this
I think you can prove this using the distributivity laws of sets. $x \in \limsup\, (A_n \cup B_n) \iff x \in \cap_n \cup_{ k \geq n} (A_k \cup B_k) \iff x \in \cap_n [ (\cup_{k \geq n} A_k) \cup (\cup_{k \geq n} B_k)] \iff x \in [\cap_n \cup_{k \geq n} A_k] \cup [\cap_n \cup_{k \geq n} B_k] \iff x \in \limsup A_n \cup \limsup B_n.$ (The third equivalence deserves a word of justification: the sets $\cup_{k \geq n} A_k$ and $\cup_{k \geq n} B_k$ are decreasing in $n$, and for decreasing families the intersection does distribute over the union; for arbitrary families only "$\supseteq$" holds.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Prove the product of a polynomial function of the roots of another polynomial is an integer. I noticed this while solving another problem on this site. Let $P(x)$ be a polynomial in $x$ with integer coefficients, and let the roots of $P(x)=0$ be $r_1, r_2 \ldots ,r_n$, where multiple $r_i$ might be equal if there are roots with multiplicity higher than one. Let Q(x) be some other polynomial in $x$, also with integer coefficients. Prove that $$ \prod_i Q(r_i) \in \Bbb{Z} $$ For example, if $P(x) = x^5+2x^2+1$ and $Q(x) = x^2-2$ then $\prod Q(r_i) = -7$. I am pretty sure it is true, because you can express each term in the product of those polynomials in a form like $$ \sum_{i<j<\ldots <n} r_i^{p_1} r_j^{p_2} \ldots $$ and laboriously express those sums as sums of products of combinations of the roots that match expressions determined by the (integer) coefficients of $P(x)$. But making that constructive proof anything more than hand-waving seems difficult. I wonder if any ideals in the theory of rings, for instance, can make this proposition easier to prove. NOTE Afterward A counterexample would also nicely resolve the question, showing that the conjecture is false.
I agree with Bill that symmetric polynomials should somehow be the standard solution, but I wanted to point out that Galois theory makes this straightforward. It is easy to show that $s = \prod_i Q(r_i)$ is an algebraic integer, so it is enough to show that $s\in\mathbb{Q}$. By the Galois correspondence, this is the same as checking that $\sigma(s)=s$ for every automorphism $\sigma$ of a splitting field $\mathbb{Q}\subset K$ of $P(X)$. But any such automorphism just permutes the roots of $P$, and therefore permutes the terms of the product $\prod_i Q(r_i)$.
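Numerically, the stated example checks out (a sketch with numpy; note that $\prod_i Q(r_i)$ equals the resultant $\operatorname{Res}(P,Q)$ here since $P$ is monic, which is another route to integrality):

```python
import numpy as np

P = [1, 0, 0, 2, 0, 1]                  # x^5 + 2x^2 + 1 (coefficients, high to low)
roots = np.roots(P)

Q = lambda t: t**2 - 2
prod = np.prod([Q(r) for r in roots])
print(prod.real, prod.imag)             # ~ -7.0, ~ 0
```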
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that if $\sum |a_n|^2$ converges then $ \sum \frac{a_n}{n}$ converges Let $\{a_n\}$ be a sequence in $\mathbb{C}$. Prove that if $\sum |a_n|^2$ converges then $\sum \frac{a_n}{n}$ converges. Note that this problem is taken from the first chapter on series of a calculus book, so it should be solvable with very basic tools (e.g. the only convergence test introduced was the comparison test and nothing about absolute convergence is assumed). I have tried a few things but nothing worked. Could you give me a hint on how to approach the problem?
With the Cauchy–Schwarz inequality and the fact that $\sum_n 1/n^2 < \infty$, the result follows.
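For the record, the estimate presumably intended: by Cauchy–Schwarz, $$\sum_{n=1}^{N}\frac{|a_n|}{n}\le\left(\sum_{n=1}^{N}|a_n|^2\right)^{1/2}\left(\sum_{n=1}^{N}\frac{1}{n^2}\right)^{1/2},$$ so the increasing partial sums of $\sum\frac{|a_n|}{n}$ are bounded and the series converges absolutely; for complex terms one then also uses that absolutely convergent series converge.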
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Permutations of $\{1,2,3,...,n\}$ where first k elements precede each other. Number of permutations of $\{1,2,3,...,n\}$ where the first $k$ elements have the property that element $1$ precedes element $2$, which precedes element $3$, ..., which precedes element $k-1$, which precedes element $k$ (not necessarily immediately). E.g: For $n = 7$ and $k = 3$, $(6152437) , (1243567)$, ... etc. I tried like this: $n$ numbers can be permuted in $n!$ ways. The $k$ elements with the preceding property can be permuted in $k!$ ways. But there is only one way satisfying the preceding property. So $n!/k!$ is the answer. Is this correct? If so, how could I write a more formal proof? Even though I verified it for up to $n=6$ and $k=2,3$, I am still in doubt.
Your argument is correct. A slightly different way to say it is that there are $\binom{n}k$ ways to choose which positions in the permutation contain the numbers $1,\ldots,k$, whose order within those positions is fixed, and there are then $(n-k)!$ ways to arrange the remaining numbers in their positions. The total number of permutations satisfying the requirement is therefore $$\binom{n}k(n-k)!=\frac{n!}{k!}\;.$$
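A brute-force confirmation for small cases (a sketch with itertools):

```python
from itertools import permutations
from math import factorial

def count(n, k):
    good = 0
    for p in permutations(range(1, n + 1)):
        pos = [p.index(i) for i in range(1, k + 1)]
        good += pos == sorted(pos)   # 1 precedes 2 precedes ... precedes k
    return good

for n, k in [(5, 2), (5, 3), (6, 3), (7, 3)]:
    assert count(n, k) == factorial(n) // factorial(k)
print("ok")
```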
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
integral solutions to a prime number? Find the number of non-negative integer solutions to: $$(x_1+x_2+\cdots + x_n)(y_1+y_2+\cdots + y_p)=P$$ where $n\in\mathbb{N}$ and $P$ is a prime number. I know the answer is $2n\binom{p+n-1}{p}$ but I don't understand why?
Hint: $ P $ being prime means that its factors are $ 1 $ and $ P $, so the integer solutions to that equation satisfy either $$ x_1 + \dots + x_n = 1 \text{ and } y_1 + \dots + y_p = P$$ or $$ x_1 + \dots + x_n = P \text{ and } y_1 + \dots + y_p = 1 $$ So your answer should be the sum of the numbers of solutions to these two sets of equations
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does this matrix have a name? (Maybe in combinatorics?) Or rather, is there a name for the class of matrices that resemble this one: \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 1\\ 0 & 1 & 0\\ 0 & 0 & 1\\ 0 & 0 & 0 \end{bmatrix} I made the matrix because I wanted to be able to think of each column as representing a thing that could be either chosen or set aside (like in combinatorics?), and each row as representing one way of choosing/not choosing from among the three things.
I don't think there's a standard name for this matrix, but the idea that it represents is well known and useful. Good for you if you invented it yourself. Each row of your matrix describes one of the $2^n$ subsets of an $n$ element set (you have $8$ rows since $n=3$) using a string of $n$ bits, each either $1$ or $0$, indicating which of the $n$ elements of the set (these are the columns) are or are not in the subset. The rows are sometimes called "bit vectors". If you think of each bit vector as a binary number then you've nicely labeled all the subsets using the numbers from $0$ to $2^{n}-1$. In your example you listed them in descending order from $7$ (the whole set) to $0$ (the empty set). You can then do boolean algebra (union, intersection, complement, symmetric difference) with arithmetic on bit strings. Some set counting problems are best approached with this representation.
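Generating such a matrix is a one-liner with itertools (a sketch; listing $[1,0]$ first reproduces the descending order from the question):

```python
from itertools import product

n = 3
for row in product([1, 0], repeat=n):
    print(row)   # (1,1,1), (1,1,0), (1,0,1), ..., (0,0,0)
```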
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$n \geq 4$ men, among whom are A, B and C, stand in a row. Then what is the probability that C stands somewhere between A and B? $n \geq 4$ men, among whom are A, B and C, stand in a row. Assume that all possible orderings of the $n$ men are equally likely; then what is the probability that C stands somewhere (not necessarily adjacent to A and B) between A and B? Well, I know the number of orderings is $n!$, but here it seems we don't need to use this. Since all orderings are equally likely, by symmetry the probability of A between B and C, the probability of B between A and C, and the probability of C between A and B should be equal. So the answer should be $\frac{1}{3}$? Thanks.
Your explanation is correct. To make it a bit more explicit, consider the $3$ positions occupied by $A$, $B$, and $C$, regardless of how the three are arranged within those positions. There are $\binom n3$ such sets of positions, and for each there are $6$ orderings of $A$, $B$, and $C$ from leftmost to rightmost. By symmetry, each is equally likely. $$ABC,ACB,BAC,BCA,CAB,CBA$$ Thus in each of the possible placements of the three, two of the six possible orderings place $C$ in the center. The probability is thus $\frac 13$. To think about this another way, you can place $A$, $B$, and $C$ first, at which point the probability that $C$ is in the center is clearly $\frac 13$. Then insert the other $n-3$ individuals into the line without any swapping of positions. The ordering, and the probability of $C$ being in between $A$ and $B$, will not change in this process.
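An exhaustive check for a small $n$ (a sketch; A, B, C are labeled 0, 1, 2):

```python
from itertools import permutations
from fractions import Fraction

n = 6
hits = total = 0
for line in permutations(range(n)):
    a, b, c = line.index(0), line.index(1), line.index(2)
    hits += min(a, b) < c < max(a, b)   # C strictly between A and B
    total += 1
print(Fraction(hits, total))            # 1/3
```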
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Number of ways to arrange n people of increasing height The task is to arrange $n$ people in a single line such that exactly $x$ of them are visible from the left and $y$ of them visible from the right (since some taller people block the view of the shorter ones). For example, if the people were arranged in line with heights 30 cm, 10 cm, 50 cm, 40 cm, and then 20 cm, someone looking from the left side would see 2 persons (30 and 50 cm) while someone looking from the right side would see 3 persons (20, 40 and 50 cm). The answer lies in placing the tallest person in such a way that the problem is divided into two halves, but I'm not sure how. Edit: It is to be noted that all the heights are distinct
This is building on @Fimpellizieri's answer. As @Fimpellizieri showed, we can reduce this to the problem of calculating $l(n,k)$, the number of ways of ordering $n$ people so that exactly $k$ are visible from the left. (Note that $r(n,k) = l(n,k)$, since you can reverse the order to switch left-visibility into right-visibility.) We claim that $l(n,k) = \left[ \begin{array}{c} n \\ k \end{array} \right]$, the Stirling number of the first kind. $\left[ \begin{array}{c} n \\ k \end{array} \right]$ denotes the number of permutations of $n$ that, when written in cycle notation, have exactly $k$ cycles. Now suppose we have a permutation of the $n$ people with exactly $k$ visible from the left. Note that the $k$ visible people must be ordered in increasing order of height, with the last visible person being the tallest. Also, this divides the $n$ people into $k$ ordered sets: the sets of people behind the $i$th visible person, for $1 \le i \le k$. Hence we can biject these permutations to permutations with $k$ cycles - simply take these ordered sets to be the cycles of a permutation. For the reverse map, given a $k$-cycle permutation, take the $k$ cycles (in their cyclic order) to be the $k$ sets, rotate the cycles so that the tallest person in the cycle is first (since this person should be visible, and none of the others in the set), and then order the cycles in increasing order of the height of their tallest people (so that each of these people is visible). Hence we get $l(n,k) = \left[ \begin{array}{c} n \\ k \end{array} \right]$. Unfortunately there is not a nice closed formula for this Stirling number, but there are recursive formulae that can be used to compute them quickly. EDIT: An alternative approach. While one can substitute the Stirling numbers into the summation formula given in @Fimpellizieri's answer, the bijective argument above can be extended to give a closed (up to the Stirling numbers) formula. Indeed, suppose we have an ordering with $x$ people visible from the left and $y$ people visible from the right. The tallest person is the last person visible from either side, and to the left there are $x-1$ people visible from the left, and to the right there are $y-1$ people visible from the right. Hence after removing the tallest person, we can divide the $n-1$ remaining people into $(x-1) + (y-1)$ ordered sets: for each $1 \le i \le x-1$, the people behind the $i$th visible person from the left, and for each $1 \le j \le y-1$, the people in front of the $j$th visible person from the right. As before, thinking of these ordered sets as cyclic permutations gives us a permutation of $n-1$ people with $x + y - 2$ cycles, with $x-1$ "left" cycles and $y-1$ "right" cycles. Conversely, suppose we have a permutation of $n-1$ people (the tallest person has still been removed) with $x + y - 2$ cycles. From these cycles, choose $x-1$ cycles for the left, with the remaining cycles to be on the right. There are $\binom{x+y-2}{x-1}$ ways to do this. Arrange the left-cycles in increasing order of the height of their tallest member, and rotate them so that the tallest person is first in each cycle. Place these cycles at the beginning of the line. Arrange the right-cycles in decreasing order of the height of their tallest member, and rotate them so that the tallest person is last in each cycle. Place these cycles at the end of the line. Finally, put the (overall) tallest person in the middle, between the left-cycles and the right-cycles.
Hence the number of ways to arrange $n$ people in a line with exactly $x$ visible from the left and exactly $y$ visible from the right should be $$\left[ \begin{array}{c} n-1 \\ x+y-2 \end{array} \right] \binom{x+y-2}{x-1}.$$ Rough notes: A small sanity check: if $x = 2$ and $y = 1$, we must have the second-tallest person first and the tallest person last, with the $n-2$ people in the middle arranged arbitrarily. Hence there are $(n-2)!$ possible orders. Since $\left[ \begin{array}{c} r \\ 1 \end{array} \right] = (r-1)!$, our formula indeed gives $(n-2)!$. Also, for an example for the map between line-ups and permutations with cycles, suppose $n = 8$, $x = 3$ and $y = 2$. Consider the permutation (in cyclic notation) of $[7]$ given by $\pi = (5 6) (2 7) (1 3 4)$. We choose two cycles for the left, say $(2 7)$ and $(1 3 4)$, leaving $(5 6)$ for the right. Now the left-cycles we rotate so that the tallest person is first, giving $(7 2)$ and $(4 1 3)$ [rotating cycles doesn't change the underlying permutation $\pi$], and then order in increasing order of height: $ 4, 1, 3, 7, 2$. We then put the tallest person ($8$ - I am naming them after their rank), followed by the right-cycles, which are rotated so that the tallest person is last, giving $4, 1, 3, 7, 2, 8, 5, 6$ as the ordering of the people given by this choice of permutation and division of left-/right-cycles. Note that here we have $4, 7$ and $8$ visible from the left, and $6, 8$ visible from the right, matching the desired $x = 3$ and $y=2$.
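A brute-force check of this closed formula against direct counting (a sketch; heights are just $1,\dots,n$ and `stirling1` implements the standard recurrence for the unsigned Stirling numbers of the first kind):

```python
from itertools import permutations
from math import comb

def stirling1(n, k):
    """Unsigned Stirling numbers of the first kind, c(n, k)."""
    if n == 0:
        return int(k == 0)
    return stirling1(n - 1, k - 1) + (n - 1) * stirling1(n - 1, k)

def visible(seq):
    count, tallest = 0, 0
    for h in seq:
        if h > tallest:
            count, tallest = count + 1, h
    return count

n = 6
for x in range(1, n + 1):
    for y in range(1, n + 1):
        direct = sum(1 for p in permutations(range(1, n + 1))
                     if visible(p) == x and visible(p[::-1]) == y)
        assert direct == stirling1(n - 1, x + y - 2) * comb(x + y - 2, x - 1)
print("ok")
```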
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $X$ and $Y$ are independent exponential then $\{\min(X,Y)>z\}$ and $\{X<Y\}$ are independent Given that $X$ and $Y$ are exponential random variables, possibly with different rates. Also, $Z = \min(X,Y)$. Show that the event $\{Z>z\}$ is independent of $\{X<Y\}$. So, I've found the probabilities for both $P(Z>z)$ and $P(X<Y)$. But I don't know how to show that they are independent of each other. I know that $P(A \cap B) = P(A)P(B)$ means that $A$ and $B$ are independent of each other, but I don't know how I can apply that to $X,Y,Z$. P.S.: I rarely use this website, so I apologise if I'm doing some things wrong. Please tell me how I can fix my ways to do better in the future. Thank you.
I assume that $X$ and $Y$ are independent. Let $\lambda_X$ and $\lambda_Y$ be the rates of $X$ and $Y$, respectively. We want to find the following probability: $$P(Z > z \wedge X < Y) = P(\min(X,Y) > z \wedge X <Y).$$ The set $\mathcal{S}_z = \{(X,Y) : \min(X,Y) > z \wedge X <Y\}$ is defined by inequalities $X < Y$ and $z < X$ since $\min(X,Y) = X$ when $X<Y$. Then: $$\begin{align}P((X,Y) \in \mathcal{S}_z) ~=~& \int_{X=z}^{+\infty}\left[ \int_{Y=X}^{+\infty}\lambda_Ye^{-\lambda_Y Y} dY \right]\lambda_Xe^{-\lambda_X X}dX \\[1ex] =~& \int_{X=z}^{+\infty}e^{-\lambda_Y X}\lambda_Xe^{-\lambda_X X}dX \\[1ex] =~& \frac{\lambda_X}{\lambda_X+\lambda_Y}e^{-(\lambda_X+\lambda_Y)z}\end{align}$$ Now, we need to find also the probabilities $P(X < Y)$ and $P(Z > z)$: $$\begin{align}P(X < Y) ~=~& \int_{X=0}^{+\infty}\left[ \int_{Y=X}^{+\infty}\lambda_Ye^{-\lambda_Y Y} dY \right]\lambda_Xe^{-\lambda_X X}dX \\[1ex] =~& \frac{\lambda_X}{\lambda_X+\lambda_Y} \\[2ex] P(\min(X,Y) > z) ~=~& \int_{X=z}^{+\infty}\left[ \int_{Y=z}^{+\infty}\lambda_Ye^{-\lambda_Y Y} dY \right]\lambda_Xe^{-\lambda_X X}dX \\[1ex] =~& \int_{X=z}^{+\infty}e^{-\lambda_Y z}\lambda_Xe^{-\lambda_X X}dX \\[1ex] =~& e^{-(\lambda_X+\lambda_Y)z}\end{align}$$ Finally: $$P(Z > z \wedge X <Y ) =\frac{\lambda_X}{\lambda_X+\lambda_Y}e^{-(\lambda_X+\lambda_Y)z} = P(Z > z) \cdot P(X <Y)$$ This means that the two events are independent.
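A quick Monte Carlo sanity check (a sketch; the rates 1 and 2 and the threshold 0.3 are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
x = rng.exponential(scale=1 / 1.0, size=n)   # rate lambda_X = 1
y = rng.exponential(scale=1 / 2.0, size=n)   # rate lambda_Y = 2
z = np.minimum(x, y)

zc = 0.3
pA = (z > zc).mean()
pB = (x < y).mean()
pAB = ((z > zc) & (x < y)).mean()
print(pAB, pA * pB)   # agree up to Monte Carlo noise
```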
{ "language": "en", "url": "https://math.stackexchange.com/questions/1911852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit of $\sum\limits_{k=0}^n \frac{{n\choose k}}{n^k(k+3)}$ when $n\to\infty$ What is $$ \lim_{n \to \infty} \sum_{k=0}^n \frac{{n\choose k}}{n^k(k+3)}\ ?$$ I know the way by integration and that the answer is $e-2$, but I am more interested in the use of the sandwich theorem, which would provide bounds or a closed form. Expansions may also be useful.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ Besides the 'sandwich' answer of $\texttt{@RobertZ}$, the following answer provides an alternative approach: \begin{align} \lim_{n \to \infty}\sum_{k = 0}^{n}{{n \choose k} \over n^{k}\pars{k + 3}} & = \lim_{n \to \infty}\sum_{k = 0}^{n}{n \choose k} \pars{1 \over n}^{k}\int_{0}^{1}t^{k + 2}\,\dd t = \lim_{n \to \infty}\int_{0}^{1}t^{2}\sum_{k = 0}^{n}{n \choose k} \pars{t \over n}^{k}\,\dd t \\[5mm] & = \lim_{n \to \infty}\int_{0}^{1}t^{2}\pars{1 + {t \over n}}^{n}\,\dd t = \lim_{n \to \infty}\bracks{n^{3}\int_{0}^{1/n}t^{2}\pars{1 + t}^{n}\,\dd t} \end{align} The last integral can be evaluated by succesive integration by parts which decreases the $\ds{t^{2}\mbox{-exponent}}$ to $\ds{0}$. Namely, \begin{align} &\lim_{n \to \infty}\sum_{k = 0}^{n}{{n \choose k} \over n^{k}\pars{k + 3}} \\[5mm] = &\ \lim_{n \to \infty}\braces{n^{3}\bracks{-2n^{3} + \pars{1 + 1/n}^{n}\pars{1 + n}\pars{2 + n + n^{2}} \over n^{3}\pars{1 + n}\pars{2 + n}\pars{3 + n}}} = \bbx{\expo{} - 2} \end{align}
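Numerically the convergence to $e-2\approx 0.71828$ is already visible for moderate $n$ (a sketch; exact integer arithmetic via `math.comb` keeps it honest):

```python
from math import comb, e

def s(n):
    return sum(comb(n, k) / (n**k * (k + 3)) for k in range(n + 1))

for n in [10, 100, 1000]:
    print(n, s(n))
print("e - 2 =", e - 2)
```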
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Linear subspaces dimensions (2) I have the following question: $U$ and $V$ are two linear subspaces of $\mathbb{R}^7$, with $\dim(U) = \dim(V) = 5$. What is the minimum dimension of $U ∩ V$? Using the Grassmann Formula I have: $\dim(U ∩ V) + \dim(V + U) = \dim (U) + \dim(V)$ $\dim(U ∩ V) + \dim(V + U) = 5 + 5 = 10$ $\dim(U ∩ V) = 10 - \dim(V + U)$ So to answer I have to find the maximum dimension of $\dim(V + U)$. Because $V + U ⊆ ℝ^7$, the max dimension is $7$. So: $\dim(U ∩ V) = 10 - \dim(V + U) = 10 - 7 = 3$ Is this right? If yes, are there other ways to solve it?
You're correct. By Grassmann's formula, $$ \dim(U\cap V)=\dim U+\dim V-\dim(U+V) $$ so it is minimal when $\dim(U+V)$ is maximal. Since it is possible that $\dim(U+V)=7$ (an example would be needed), but not more, the minimum value for $\dim(U\cap V)$ is $5+5-7=3$.
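For a concrete witness that $\dim(U+V)=7$ is attainable: with the standard basis $e_1,\dots,e_7$ of $\mathbb{R}^7$, take $U=\operatorname{span}(e_1,\dots,e_5)$ and $V=\operatorname{span}(e_3,\dots,e_7)$; then $U+V=\mathbb{R}^7$ and $U\cap V=\operatorname{span}(e_3,e_4,e_5)$, which indeed has dimension $3$.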
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How are matrices related to vectors? I know it's a silly question, but there's something I feel like I'm missing in my understanding of matrices. I'm studying linear algebra, and much of what we covered in the first few topics related to vectors (vector spaces, linear independence, etc.), but then all of a sudden we started using matrices, loosely defining them as an "array of numbers". So I'm kind of confused, is a matrix supposed to be a list of vectors? And if so, are they the rows or the columns of the matrix?
A vector is a linear array of quantities. A matrix is a 2-dimensional array of quantities. Three-dimensional and higher-dimensional arrays also exist; they are called tensors. A matrix can be thought of as a sequence of column vectors, but also as a sequence of row vectors; both interpretations are useful. An example of an important theorem in this regard is: row rank = column rank.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
Prove Without Induction: $\sum\limits_{k=2}^{n} \frac{1}{k(k-1)} = 1 - \frac{1}{n}$ Hi everybody. I'm supposed to prove this without induction: Prove Without Induction: $\sum\limits_{k=2}^{n} \frac{1}{k(k-1)} = 1 - \frac{1}{n}$ I tried a bit of algebraic manipulation, but I'm not sure how to proceed. It's supposed to be basic. I did get a hint of factorizing $\frac{1}{k(k-1)}$ but that didn't get me anywhere. A hint or any directions would be much appreciated!
That's a telescoping series. Use partial fraction techniques to do the following split: $$\frac{1}{k(k-1)} = \frac{1}{k-1} - \frac{1}{k},$$ and proceed from there.
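In case the "proceed from there" step is wanted explicitly: the intermediate terms cancel pairwise, $$\sum_{k=2}^{n}\left(\frac{1}{k-1}-\frac{1}{k}\right)=\left(1-\frac{1}{2}\right)+\left(\frac{1}{2}-\frac{1}{3}\right)+\cdots+\left(\frac{1}{n-1}-\frac{1}{n}\right)=1-\frac{1}{n}.$$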
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
parentheses vs brackets for indexed family of sets I understand the definition of parentheses and brackets. However, this problem involving an indexed family of sets has me questioning the definitions. Q: Let $\mathscr A = \{[-x,0]: x \in \mathbb R \text{ and } 0 < x < 1\}$. Find the union over $\mathscr A$ and the intersection over $\mathscr A.$
$(-1,0]$ is correct. The number $-1$ is not an element of any set in the family $\mathcal{A}$ and therefore cannot be in the union.
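For the intersection part of the question: every interval $[-x,0]$ contains $0$, while a given $y<0$ fails to lie in $[-x,0]$ as soon as $x<|y|$; so the intersection over $\mathscr A$ is $\{0\}$.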
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
prove that $R$ is an equivalence relation $$\forall a,b \in \mathbb{Q} \quad aRb \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad b=2^ka$$ 1) Reflexivity: $\forall a \in \mathbb{Q}\quad aRa \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad a=2^ka $ choosing $k=0 \quad \Rightarrow a=2^0a=a \Rightarrow aRa \Rightarrow R \text{ is reflexive}$ 2) Symmetry 3) Transitivity: $\forall a,b,c \in \mathbb{Q}:$ $$ aRb \Leftrightarrow \quad \exists k \in \mathbb{Z}: \quad b=2^ka $$ $$ bRc \Leftrightarrow \quad \exists h \in \mathbb{Z}: \quad c=2^hb $$ Then $$ aRc \Leftrightarrow \quad \exists p \in \mathbb{Z}: \quad c=2^pa $$ so $aRb,bRc \Rightarrow c=2^hb=2^h2^ka=2^{h+k}a$ choosing $p=k+h \Rightarrow c=2^pa \Rightarrow aRc \Rightarrow \text{ R is transitive}$ Can anyone confirm that 1) and 3) are correct? I tried to prove 2) but it ended up looking like the transitivity proof, and I think that is entirely wrong. I have no idea how to proceed; can anyone provide some hints/a proof/a solution? Thanks in advance
Your proofs for parts (1) and (3) are correct. For symmetry, suppose $aRb$, so that \begin{equation} b = 2^ka \end{equation} for some $k\in\mathbf{Z}$. Can you think of an integer $l$ so that \begin{equation} a = 2^lb? \end{equation} (Hint: Remember that negative integers are integers too!) Once you have such an integer, you can conclude that $bRa$, meaning that $R$ is symmetric.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example of a countable compact set in the real numbers Can someone give me an example or a hint to come up with a countable compact set in the real line with infinitely many accumulation points? Thank you in advance!
What about if we define $H = \{ \frac{1}{n} : n\in \mathbb{N}\} \cup \{0\}$, a sort of standard countable compact set with $0$ as its sole limit point, and then define your countable compact set to be: $$S = \{ x + y \mid x, y \in H\}.$$ To unpack the thought behind this definition: * *This set $S$ is countable. *This set $S$ is bounded, because its smallest member is $0$ and its largest member is $1+1 = 2$. *Each $x \in H$ is a limit point of $S$: if we fix $x$ and vary $y\in H$ in the definition of $S$, we can see that $x$ is a limit point of $S$. *This set $S$ is closed (and therefore compact, being a closed and bounded subset of the real line). ($S$ is closed because it is the sum of two compact subsets of $\mathbb{R}$ and is therefore closed in $\mathbb{R}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
A proof of the identity $ \sum_{k = 0}^{n} \frac{(-1)^{k} \binom{n}{k}}{x + k} = \frac{n!}{(x + 0) (x + 1) \cdots (x + n)} $. I have to prove that $$ \forall n \in \mathbb{N}_{0}, ~ \forall x \in \mathbb{R} \setminus \mathbb{N}_{0}: \qquad \sum_{k = 0}^{n} \frac{(-1)^{k} \binom{n}{k}}{x + k} = \frac{n!}{(x + 0) (x + 1) \cdots (x + n)}. $$ However, I was unable to find a proof. I have tried to use the binomial expansion of $ (1 + x)^{n} $ to get the l.h.s., by performing a suitable multiplication followed by integration, but I was unable to obtain the required form. Please help me out with the proof. Thanks in advance.
I thought it might be instructive to present a proof by induction. First, we establish a base case. For $n=0$, it is straightforward to show that the expression holds. Second, we assume that for some $n\geq 0$, we have $$\sum_{k=0}^n \binom{n}{k}\frac{(-1)^k}{x+k}=n!\prod_{k=0}^n\frac{1}{x+k}$$ Third, we analyze the sum $\sum_{k=0}^{n+1} \binom{n+1}{k}\frac{(-1)^k}{x+k}$. Clearly, we can write $$\begin{align} \sum_{k=0}^{n+1} \binom{n+1}{k}\frac{(-1)^k}{x+k}&=\frac1x+\sum_{k=1}^{n} \binom{n+1}{k}\frac{(-1)^k}{x+k}+\frac{(-1)^{n+1}}{x+n+1}\\\\ &=\color{green}{\frac1x}+\sum_{k=1}^{n} \left(\color{green}{\binom{n}{k}}+\color{blue}{\binom{n}{k-1}}\right)\frac{(-1)^k}{x+k}+\color{red}{\frac{(-1)^{n+1}}{x+n+1}}\\\\ &=\color{green}{n!\prod_{k=0}^n\frac{1}{x+k}}+\color{blue}{\sum_{k=0}^{n-1} \binom{n}{k}\frac{(-1)^{k+1}}{(x+1)+k}}+\color{red}{\frac{(-1)^{n+1}}{x+n+1}}\\\\ &=\color{green}{n!\prod_{k=0}^n\frac{1}{x+k}}+\color{blue}{-n!\prod_{k=0}^n\frac{1}{x+1+k}-\frac{(-1)^{n+1}}{x+n+1}}+\color{red}{\frac{(-1)^{n+1}}{x+n+1}}\\\\ &=(n+1)!\prod_{k=0}^{n+1}\frac{1}{x+k} \end{align}$$ And we are done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 3 }
Payoff of a dice game I came across this question today: "A fair die is tossed. If 2, 3 or 5 occurs, the player wins that number of rupees, but if 1, 4 or 6 occurs, the player loses that number of rupees. Then find the possible payoffs for the player". My textbook has then proceeded to solve it like this: What is the logic behind directly adding all the values? Secondly, how is it that the game is unfavorable to the player? There is an equal chance that the player will get 2, 3 or 5 and 1, 4 or 6! Please help. (Also note that this doubt is not specific to this problem alone, since this concept is crucial to understanding this part of the chapter "Probability".) Much thanks in advance :) Regards. Edit: Thanks ever so much for these answers :) I read up on "Expected values" but encountered another important doubt. I have posted it here and hope that you will clear my doubt :)
For such a simple exercise you can imagine throwing the die a large number of times, obtaining perfect statistics. In your case let's throw the die 600 times, obtaining 100 ones, 100 twos, and exactly 100 rolls of each face. Our earnings would have been $$100*(-1)+100*2+100*(3)+100*(-4)+100*(5)+100*(-6)=-100$$ so, for 600 throws we are "expected" to lose 100; dividing these values gives the game payoff of $-1/6$. If you do the division before the sum you have exactly the solution of your book: $${100*(-1)+100*2+100*(3)+100*(-4)+100*(5)+100*(-6) \over 600}={-100 \over 600}.$$ The game has a negative payoff because, although you have equal probabilities of winning or losing a single throw, not all outcomes have the same earnings, so you can't assume the game is fair; you have to evaluate your earnings as I did before.
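The same computation as a tiny script (a sketch; this is just the expected value of the payoff):

```python
from fractions import Fraction

payoff = {1: -1, 2: 2, 3: 3, 4: -4, 5: 5, 6: -6}
expected = sum(Fraction(1, 6) * v for v in payoff.values())
print(expected)   # -1/6
```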
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Galois Basic Question (Cyclic Order 4 Extension of Q cannot contain i) Suppose $K$ is an extension of $\mathbb{Q}$ in $\mathbb{C}$, where $Gal(K/\mathbb{Q})$ is cyclic of order 4. Show that $i\notin K$. ($i$ is the imaginary number $i^2=-1$.) My Galois theory is quite weak, hope someone can check if my attempt is correct. My attempt: Suppose to the contrary $i\in K$. Let $\sigma\in Gal(K/\mathbb{Q})$. Note that $\sigma (i)\sigma(i)=\sigma(i^2)=\sigma(-1)=-1$ since $\sigma$ fixes $\mathbb{Q}$. This means that $\sigma(i)=i$ or $\sigma(i)=-i$. The first case is ruled out since $\sigma$ only fixes $\mathbb{Q}$. So $\sigma(i)=-i$. This means that $\sigma(a+bi)=a-bi$, so $\sigma$ is effectively complex conjugation, which has order 2. Since $\sigma$ was arbitrary, this contradicts that $Gal(K/\mathbb{Q})$ has an element of order 4. Is this ok? Thanks. Update: Now I see that my argument is clearly flawed. What would be the correct proof?
Here is a slick solution, I think. Consider generally a cyclic extension $F/k$ and try to embed it in an over-extension $K/F/k$ such that $K/k$ is cyclic. For simplification, suppose that $F/k$ has degree $p^n$, $K/F$ has degree $p$ and $k$ contains a primitive $p$-th root $\zeta$ of unity ($p$ a prime). Then, using Kummer theory, it can be shown that $K$ exists iff $\zeta$ is a norm in $F/k$ (see e.g. https://math.stackexchange.com/a/1691332/300700, where the situation is a bit more general). In our particular case here, $p=2$, $\zeta=-1$, $K/\mathbf Q$ is the given cyclic extension of degree 4. If $K$ contained $i$, take $F=\mathbf Q(i)$. Then $-1$ would be a norm in $F/\mathbf Q$, i.e. would be a sum of two squares in $\mathbf Q$ : impossible. Note that here, the previous embeddability criterion can be shown directly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Enumerative combinatorics applications in Computer Science I am interested in specific examples and applications of enumerative combinatorics in Computer Science -- concrete problems in this field that make explicit use of the concepts and ideas from combinatorics. Are there any good references that you can point me to (books, lectures, ...)?
The emphasis is on enumeration rather than counting if I understand the question correctly. The perfect match would be the combstruct package that is included with Maple. This software is a companion to the book Analytic Combinatorics by Flajolet and Sedgewick, which is the canonical text and basically provides a map of future computer science research for decades to come. Highly recommended. Computer science has a particular focus on trees and combstruct really shines here, providing total enumeration as well as generating functions and functional equations. The landmark paper by Flajolet et al. on random mapping statistics (which are closely related to the labeled tree function) is discussed at this MSE link. The Maple package is used at the following MSE combstruct links, I and II. The book Analytic Combinatorics is unprecedented in that it pioneered the use of complex variable methods to treat generating functions that arise from species theory and the folklore theorem of combinatorial enumeration (providing instant translation from species equations to generating functions), thereby putting an emphasis on unifying combinatorial methods with complex variable techniques. Another relevant early text is the book Graphical Enumeration by Harary and Palmer, which contains many results on labeled and unlabeled trees as well as accessible presentations of the Polya Enumeration Theorem and of Power Group Enumeration. Finally, a classic contender for the enumeration of unlabeled trees is the NAUTY package by McKay, which was used at this MSE NAUTY link. (Use Pruefer codes for labeled trees.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1912995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 4 }
$A$ is a $3\times3$ matrix and $A^2+4A-12I=0$. If $\det(A+2I)>0$, what is $\det(A+2I)$? I need to rearrange the equation $A^2+4A-12I=0$, where $A$ is an unknown $3\times3$ matrix, so that I can find $\det(A+2I)$.
Since $A^2+4A-12I=0$, the minimal polynomial of $A$ divides $x^2+4x-12=(x-2)(x+6)$, so every eigenvalue of $A$ is $2$ or $-6$ (note that $0$ is not possible, since it is not a root). The eigenvalues of $A+2I$ are therefore $4$ or $-4$, and $\det(A+2I)$ is a product of three values from $\{4,-4\}$, i.e. $\pm 64$. Since we are told $\det(A+2I)>0$, we get $\det(A+2I)=64$. A concrete matrix satisfying all the conditions is the diagonal matrix with entries $2,-6,-6$. Not the most general method, but it pins down the value.
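A quick numerical sanity check (a sketch with numpy; the diagonal matrix is just one convenient witness):

```python
import numpy as np

A = np.diag([2.0, -6.0, -6.0])   # eigenvalues taken from (x - 2)(x + 6) = 0
I = np.eye(3)

print(np.allclose(A @ A + 4 * A - 12 * I, 0))   # True
print(np.linalg.det(A + 2 * I))                 # ~ 64.0 (and > 0)
```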
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Metrizability of the weak topology $\sigma(E,E')$ of a normed space $E$ Let $(E, \left\| \cdot \right\|_E)$ be a normed space with the topology $\mathcal{T}_E$ induced by the norm. We have that (1) If $\mathrm{dim}(E) < \infty$ then $\sigma(E,E') = \mathcal{T}_E$ and the weak topology $\sigma(E,E')$ is metrizable. (2) If $\mathrm{dim}(E)=\infty$ then $\sigma(E,E') \neq \mathcal{T}_E$ and the weak topology $\sigma(E,E')$ is not metrizable. I have some doubts about the proof of point (2). $Proof$ $(2)$. Let $\mathrm{dim}(E)=\infty$; then it is sufficient to prove that each $\sigma(E,E')$-open neighborhood of the origin contains an infinite-dimensional subspace, so that the weak topology cannot be induced by any norm (why can it not be induced by any norm?). If $u_i \in E'$ for $i=1,\dots,n$ and $\epsilon > 0$, a basic $\sigma(E,E')$-open neighborhood $U_n$ has the form \begin{align*} U_n=\lbrace x \in E : \max\lbrace p_{u_1}(x),\dots,p_{u_n}(x) \rbrace < \epsilon \rbrace = \lbrace x \in E : |u_i(x)| < \epsilon \ \forall i \rbrace= \bigcap_{i=1}^n B_{\mathbb{K}}^{u_i}(\epsilon) \end{align*} Let $u:E\to\mathbb{K}^n$ be the linear map $u(x):=(u_1(x),\dots,u_n(x))$. By the rank–nullity theorem, $\mathrm{dim}(E)=\mathrm{dim}\,N(u) + \mathrm{dim}\,R(u)$; but $\mathrm{dim}\,R(u) \leq n$, so necessarily $\mathrm{dim}\,N(u)=\infty$, and moreover $N(u) \subset U_n$.
It boils down to this: norm topologies are locally bounded (the unit ball is a bounded neighborhood of $0$), and what this argument shows is that the $\sigma(E,E')$ topology on $E$ is not locally bounded, hence is not induced by any norm. To see this, pick $x_0\in N(u)$ with $x_0\neq0$. The Hahn-Banach theorem furnishes some $v\in E'$ with $v(x_0)=\|x_0\|_E$ and $|v(x)|\leq\|x\|_E$ for all $x\in E$. Put $$ V=\{x\in E:|v(x)|<1\} $$ Then $V$ is a $\sigma(E,E')$ neighborhood of $0$, and $U_n\not\subset tV$ for any $t>0$: the subspace $N(u)\subset U_n$ contains the whole line through $x_0$, on which $v$ is unbounded. Since every open neighborhood of $0$ contains an open neighborhood of the form $U_n$, the $\sigma(E,E')$ topology is not locally bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Serre's Trick for flatness of a morphism of schemes I'm reading some exercises on abelian varieties and I came across the following claim: Claim (Serre's Trick): Let $X,Y,S$ be schemes and suppose that $X \times_S Y$ is flat over $S$. If $X(S) \neq \emptyset$ then $Y$ is flat over $S$. Evidently to prove the claim we may reduce to the case where $X = \operatorname{Spec} C$, $S = \operatorname{Spec} A$ and $Y= \operatorname{Spec} B$ where $A,B,C$ are local rings and the maps $A \to B$, $A \to C$ are local homomorphisms. Let $\mathfrak{m}_A, \mathfrak{m}_B$ and $\mathfrak{m}_C$ be the maximal ideals of $A,B,C$ respectively. (**) Suppose for the moment that the tensor product $B \otimes_A C$ contains a maximal ideal $\mathfrak{m}$ that contracts simultaneously to $\mathfrak{m}_A$ and $\mathfrak{m}_B$. By localizing at $\mathfrak{m}$, we may assume that $B \otimes_A C$ is local, and the composition $$A \to B \to B \otimes_A C$$ is faithfully flat (flat + local homomorphism implies faithfully flat). With this I think we can prove that $B$ is flat over $A$ as follows. Let $M \to N$ be an injection of $A$-modules. Let $K$ be the kernel of the map $M \otimes_A B \to N \otimes_A B$. Tensoring with $C$, we get that $$K \otimes_A C = 0$$ using flatness of $B \otimes_A C$ over $A$. Now tensor with $B$ to get $$ K \otimes_A (C \otimes_A B) = 0$$ and conclude that $K= 0$ by faithful flatness. My question is: A necessary condition for (**) to be true is that the product $B \otimes_A C$ is not zero. How can I get this just from the fact that there is a map $C \to A$? Indeed the example $$ \Bbb{Z}/2 \otimes_{\Bbb{Z}} \Bbb{Q} = 0$$ shows that the condition on the existence of a section is really needed. Edit: I was tired from travelling and stupidly concluded that $-\otimes_A C$ was injective.
Let $f:C\rightarrow A$ be the given map (which is nonzero — I assume the rings involved are unital, so $f(1)=1$). You then have a nonzero $A$-bilinear map $h:C\times B\rightarrow B$ defined by $h(c,b)=f(c)b$. By the universal property of the tensor product, $h$ factors through a map $\bar h:C\otimes_AB\rightarrow B$, which is nonzero; hence $C\otimes_AB$ is not trivial.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to prove that the rank of a matrix is a lower semi-continuous function? I need to prove that rank($\mathrm{A}$) is not continuous everywhere but is lower semi-continuous everywhere, where $\mathrm{A}\in \mathbb{C}^{n\times m} $
Let $A\in{\mathbb C}^{n×m}$. For each choice $\mu$ of $k$ rows and $k$ columns (a square minor), define a function $f_\mu:{\mathbb C}^{n×m}\to {\mathbb R}$ in this way: $f_\mu(A) = k$ if the corresponding $k\times k$ submatrix of $A$ is invertible, and $f_\mu(A) = 0$ otherwise. Since the determinant of that submatrix is a polynomial, hence continuous, function of the entries of $A$, the set where it is nonzero is open, so each $f_\mu$ is lower semicontinuous. The rank is the maximum of all the $f_\mu$, over all possible choices of $\mu$ (including all possible sizes $k$), and a maximum of finitely many lower semicontinuous functions is lower semicontinuous.
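Lower semicontinuity (and the failure of continuity) is easy to see numerically: the rank can drop in a limit but never jump up. A sketch:

```python
import numpy as np

# A_t has rank 2 for every t != 0, but rank 1 at the limit point t = 0.
for t in [1.0, 0.1, 0.01, 0.0]:
    A_t = np.array([[1.0, 0.0], [0.0, t]])
    print(t, np.linalg.matrix_rank(A_t))   # 2, 2, 2, 1
```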
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
How would you solve this polynomial? Is there a way to find the roots of equations such as $x^3-9\sqrt[3]{2}+9=0$? I've just been using Wolfram Alpha to factor it into $(x-\sqrt[3]{4}+\sqrt[3]{2}-1)(x^2+(1-\sqrt[3]{2}+\sqrt[3]{4})x+3\sqrt[3]{4}-3)$. But for harder equations such as $$x^3-63\sqrt[3]{20}+9=0$$, Wolfram Alpha just factors it into $$(x-\text{root of }x^9+27x^6+243x^3-5000211)(x-\text{same thing})(x-\text{same thing})$$ Which is kind of problematic, because I would like the exact values of the factors. So is there a way to factor such polynomials into multiple factors? Or some way to find their roots! Anything helps. Note: I would not like nested radicals such as $x=\sqrt[3]{3-\sqrt[4]{4}}$ because I then have to go through the process of finding their simplified surds.
We can find $\sqrt[3]{9(\sqrt[3]2-1)}$ in the following way, without WA. Indeed, let $\sqrt[3]2=x$. Hence, $$x^3=2$$ or $$9(x^3-1)=9$$ or $$9(x-1)=\frac{9}{1+x+x^2}$$ or, since $(x+1)^3=x^3+3x^2+3x+1=3(1+x+x^2)$ when $x^3=2$, $$9(x-1)=\frac{27}{x^3+3x^2+3x+1}$$ or $$\sqrt[3]{9(\sqrt[3]2-1)}=\frac{3}{\sqrt[3]2+1}$$ or $$\sqrt[3]{9(\sqrt[3]2-1)}=\sqrt[3]4-\sqrt[3]2+1$$ and we are done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
limit of product of $(a_1a_2\cdots a_n)^{\frac{1}{n}}$ How to calculate the following limit $$ \lim_{n\rightarrow \infty} \left[ \left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right)\cdots \left(1+\frac{n}{n}\right) \right]^\frac{1}{n} .$$ I was trying this by taking the $\log$ of the product and then the limit, but I am not getting the answer; could anybody please help me. And also, is there any general rule for calculating the limit of $$ (a_1a_2\cdots a_n)^{\frac{1}{n}}?$$ Thanks.
Answering your second question, as the previous answers all address your first. Assuming $a_n>0$, select a sequence $\{b_n\}$ such that $a_n = e^{b_n}$ (i.e. $b_n=\log a_n$). Now, instead, we have ${\left({e^{\left(\sum\limits_{i=1}^n b_i \right)}}\right)^{\frac 1 n}} = \left(\prod\limits_{i=1}^n a_i \right)^{\frac 1 n}$ This may allow you to leverage identities involving infinite sums as opposed to infinite products as $n \to \infty$. However, you can't always expect certain properties of the sequence to be preserved by mapping them as powers of $e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 9, "answer_id": 5 }
Let $F,G$ be distribution functions. Is the product $FG$ also a distribution function? Let $F,G$ be the distribution functions of random variables. Is the product $FG$ also a distribution function? We define $\psi(x) = F(x)G(x)$. We first have: $$\lim_{x \to \infty } \psi(x) = \lim_{x \to \infty} F(x)G(x) = 1,$$ $$\lim_{x \to -\infty } \psi(x) = \lim_{x \to -\infty} F(x)G(x) = 0.$$ because both $F,G$ are distribution functions. Also, if $x < y$, then $$\psi(x) \leq \psi(y) \Leftrightarrow F(x)G(x) \leq F(y)G(y),$$ but $F(x) \leq F(y)$ and $G(x) \leq G(y)$, so the previous inequality holds. Finally, we have $$\lim_{h \to 0+ } \psi(x + h) = \lim_{h \to 0+ }F(x+h)G(x+h) = F(x)G(x) = \psi(x).$$ Is this reasoning correct? I am not sure if the product of two functions is well defined in the way I used it, and I am also not sure if the operations make sense. Thanks in advance
Your reasoning is correct: your function $FG$ matches all the criteria of a CDF: * increasing * right-continuous * is $0$ at $-\infty$ and $1$ at $+\infty$ * defined on $\mathbb{R}$ Interestingly, $FG$ is the cumulative distribution function of $\max(X_F,X_G)$ if $X_F,X_G$ are independent random variables with $F$ and $G$ as CDFs: $\forall x\in\mathbb{R} \quad \mathbb{P}(\max(X_F,X_G)\leq x)=\mathbb{P}(X_F\leq x, X_G \leq x )=\mathbb{P}(X_F\leq x )\mathbb{P}(X_G \leq x )=F(x)G(x)$
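A quick simulation illustrating the $\max$ interpretation (a sketch; two independent exponentials are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500_000

xf = rng.exponential(1.0, n)    # F(x) = 1 - exp(-x)
xg = rng.exponential(0.5, n)    # G(x) = 1 - exp(-2x)
m = np.maximum(xf, xg)

for x in [0.2, 0.5, 1.0, 2.0]:
    fg = (1 - np.exp(-x)) * (1 - np.exp(-2 * x))   # F(x) * G(x)
    emp = (m <= x).mean()                          # empirical CDF of the max
    print(x, round(fg, 4), round(emp, 4))
```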
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that a particular mapping of an open ball in $\mathbb{R}^k$ onto $\mathbb{R}^k$ is a diffeomorphism. This is a problem from Guilleman and Pollack: Page 5, question 4. Let $B_a$ be the open ball $\{ x: |x|^2 < a \}$ in $\mathbb{R}^k$, where $|x|^2 = \sum_i x_i^2$. Show that the map \begin{equation} x \mapsto \frac{ax}{\sqrt{a^2 - |x|^2}} \end{equation} is a diffeomorphism of $B_a$ onto $\mathbb{R}^k$. My question is mostly about the language of this problem. The author says show that this is a map ${\bf onto}$ $\mathbb{R}^k$. Does this mean it is supposed to be a ${\bf surjective}$ map? If so, shouldn't the definition of the ball be such that $\{ x: |x|^2 < a^2 \}$, so that it is a ball of radius $a$, and points near the boundary get sent to infinity in $\mathbb{R}^k$?
This is listed as the first typo of the book by Ted Shifrin here, so at least Ted agrees with you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that the following spaces are complete/not How to show whether the following facts are true/false: * *$(0,1)$ with the usual topology admits a metric which is complete. *$[0,1]$ with the usual topology admits a metric which is not complete. Since the usual metric gives the usual topology and we know that $(0,1)$ is not complete with the usual metric, the first question is false. Since the usual metric gives the usual topology and we know that $[0,1]$ is complete with the usual metric, the second question is also false. Are the justifications correct? Please help
The important thing to remember is that it is possible that a metrizable topological space $(X,\tau)$ supports two different metrics $d_1,d_2$ that induce the topology $\tau$ for which $(X,d_1)$ is complete but $(X,d_2)$ isn't. For the first question, note that $(0,1)$ is homeomorphic to $\mathbb{R}$. Choose some homeomorphism $f \colon (0,1) \rightarrow \mathbb{R}$ and define a metric $d$ on $(0,1)$ by $d(x,y) = |f(x) - f(y)|$. Since $f$ is a homeomorphism, $d$ induces the usual topology on $(0,1)$ but $f$ is an isometry and so $d$ is complete. For the second question, note that $[0,1]$ with the usual topology is compact and any compact metric space is complete.
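For a concrete choice in the first part, one standard option is $f(x)=\tan\bigl(\pi\bigl(x-\tfrac12\bigr)\bigr)$, which is a homeomorphism $(0,1)\to\mathbb{R}$, so that $d(x,y)=\bigl|\tan\bigl(\pi\bigl(x-\tfrac12\bigr)\bigr)-\tan\bigl(\pi\bigl(y-\tfrac12\bigr)\bigr)\bigr|$ is a complete metric on $(0,1)$ inducing the usual topology.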
{ "language": "en", "url": "https://math.stackexchange.com/questions/1913923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }